--- title: "CRE" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{CRE} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` # Installation Installing from CRAN. ```{r, eval=FALSE} install.packages("CRE") ``` Installing the latest developing version. ```{r, eval=FALSE} library(devtools) install_github("NSAPH-Software/CRE", ref = "develop") ``` Import. ```{r, eval=FALSE} library("CRE") ``` # Arguments __Data (required)__ **`y`** The observed response/outcome vector (binary or continuous). **`z`** The treatment/exposure/policy vector (binary). **`X`** The covariate matrix (binary or continuous). __Parameters (not required)__ **`method_parameters`** The list of parameters to define the models used, including: - **`ratio_dis`** The ratio of data delegated to the discovery sub-sample (default: 0.5). - **`ite_method`** The method to estimate the individual treatment effect (default: "aipw") [1]. - **`learner_ps`** The ([SuperLearner](https://CRAN.R-project.org/package=SuperLearner)) model for the propensity score estimation (default: "SL.xgboost", used only for "aipw","bart","cf" ITE estimators). - **`learner_y`** The ([SuperLearner](https://CRAN.R-project.org/package=SuperLearner)) model for the outcome estimation (default: "SL.xgboost", used only for "aipw","slearner","tlearner" and "xlearner" ITE estimators). **`hyper_params`** The list of hyper parameters to fine tune the method, including: - **`intervention_vars`** Intervention-able variables used for Rules Generation (default: `NULL`). - **`ntrees`** The number of decision trees for random forest (default: 20). - **`node_size`** Minimum size of the trees' terminal nodes (default: 20). - **`max_rules`** Maximum number of candidate decision rules (default: 50). - **`max_depth`** Maximum rules length (default: 3). - **`t_decay`** The decay threshold for rules pruning (default: 0.025). - **`t_ext`** The threshold to define too generic or too specific (extreme) rules (default: 0.01). - **`t_corr`** The threshold to define correlated rules (default: 1). - **`stability_selection`** Method for stability selection for selecting the rules. `vanilla` for stability selection, `error_control` for stability selection with error control and `no` for no stability selection (default: `vanilla`). - **`B`** Number of bootstrap samples for stability selection in rules selection and uncertainty quantification in estimation (default: 20). - **`subsample`** Bootstrap ratio subsample for stability selection in rules selection and uncertainty quantification in estimation (default: 0.5). - **`offset`** Name of the covariate to use as offset (i.e. "x1") for T-Poisson ITE Estimation. `NULL` if not used (default: `NULL`). - **`cutoff`** Threshold defining the minimum cutoff value for the stability scores in Stability Selection (default: 0.9). - **`pfer`** Upper bound for the per-family error rate (tolerated amount of falsely selected rules) in Error Control Stability Selection (default: 1). __Additional Estimates (not required)__ **`ite`** The estimated ITE vector. If given, both the ITE estimation steps in Discovery and Inference are skipped (default: `NULL`). ## Notes ### Options for the ITE estimation **[1]** Options for the ITE estimation are as follows: - [S-Learner](https://CRAN.R-project.org/package=SuperLearner) (`slearner`). 
- [T-Learner](https://CRAN.R-project.org/package=SuperLearner) (`tlearner`)
- T-Poisson (`tpoisson`)
- [X-Learner](https://CRAN.R-project.org/package=SuperLearner) (`xlearner`)
- [Augmented Inverse Probability Weighting](https://CRAN.R-project.org/package=SuperLearner) (`aipw`)
- [Causal Forests](https://CRAN.R-project.org/package=grf) (`cf`)
- [Causal Bayesian Additive Regression Trees](https://CRAN.R-project.org/package=bartCause) (`bart`)

If other estimates of the ITE are provided via the additional `ite` argument, the ITE estimation steps in both discovery and inference are skipped and those estimates are used instead.

The ITE estimator also requires an outcome learner and/or a propensity score learner from the [SuperLearner](https://CRAN.R-project.org/package=SuperLearner) package (e.g., "SL.lm", "SL.svm"). Both of these models are simple classifiers/regressors. By default, the XGBoost algorithm is used for both steps.

### Customized wrapper for SuperLearner

One can create a customized wrapper for the packages used internally by SuperLearner. The following is an example of providing the number of cores (e.g., 12) to the xgboost package on a shared-memory system.

```R
m_xgboost <- function(nthread = 12, ...) {
  SuperLearner::SL.xgboost(nthread = nthread, ...)
}
```

Then use "m_xgboost" instead of "SL.xgboost" (see the example at the end of this vignette).

# Examples

Example 1 (*default parameters*)

```R
set.seed(9687)
dataset <- generate_cre_dataset(n = 1000, rho = 0, n_rules = 2, p = 10,
                                effect_size = 2, binary_covariates = TRUE,
                                binary_outcome = FALSE, confounding = "no")
y <- dataset[["y"]]
z <- dataset[["z"]]
X <- dataset[["X"]]

cre_results <- cre(y, z, X)
summary(cre_results)
plot(cre_results)
ite_pred <- predict(cre_results, X)
```

Example 2 (*personalized ite estimation*)

```R
set.seed(9687)
dataset <- generate_cre_dataset(n = 1000, rho = 0, n_rules = 2, p = 10,
                                effect_size = 2, binary_covariates = TRUE,
                                binary_outcome = FALSE, confounding = "no")
y <- dataset[["y"]]
z <- dataset[["z"]]
X <- dataset[["X"]]

ite_pred <- ... # personalized ite estimation

cre_results <- cre(y, z, X, ite = ite_pred)
summary(cre_results)
plot(cre_results)
ite_pred <- predict(cre_results, X)
```

Example 3 (*setting parameters*)

```R
set.seed(9687)
dataset <- generate_cre_dataset(n = 1000, rho = 0, n_rules = 2, p = 10,
                                effect_size = 2, binary_covariates = TRUE,
                                binary_outcome = FALSE, confounding = "no")
y <- dataset[["y"]]
z <- dataset[["z"]]
X <- dataset[["X"]]

method_params <- list(ratio_dis = 0.5,
                      ite_method = "aipw",
                      learner_ps = "SL.xgboost",
                      learner_y = "SL.xgboost")

hyper_params <- list(intervention_vars = c("x1", "x2", "x3", "x4"),
                     offset = NULL,
                     ntrees = 20,
                     node_size = 20,
                     max_rules = 50,
                     max_depth = 3,
                     t_decay = 0.025,
                     t_ext = 0.025,
                     t_corr = 1,
                     stability_selection = "vanilla",
                     cutoff = 0.8,
                     pfer = 1,
                     B = 10,
                     subsample = 0.5)

cre_results <- cre(y, z, X, method_params, hyper_params)
summary(cre_results)
plot(cre_results)
ite_pred <- predict(cre_results, X)
```

More synthetic data sets can be generated using `generate_cre_dataset()`.
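Example 4 (*customized SuperLearner wrapper*). This is a minimal sketch of passing the `m_xgboost` wrapper defined above through `method_params`; the data-generation call mirrors Example 1 and the remaining parameter values are assumptions, not recommended settings.

```R
set.seed(9687)
dataset <- generate_cre_dataset(n = 1000, rho = 0, n_rules = 2, p = 10,
                                effect_size = 2, binary_covariates = TRUE,
                                binary_outcome = FALSE, confounding = "no")
y <- dataset[["y"]]
z <- dataset[["z"]]
X <- dataset[["X"]]

# Customized SuperLearner wrapper: pass the thread count (12 is an assumed value)
# to the internally used xgboost package.
m_xgboost <- function(nthread = 12, ...) {
  SuperLearner::SL.xgboost(nthread = nthread, ...)
}

# Use the wrapper name in place of "SL.xgboost" for both learners.
method_params <- list(ratio_dis = 0.5,
                      ite_method = "aipw",
                      learner_ps = "m_xgboost",
                      learner_y = "m_xgboost")

cre_results <- cre(y, z, X, method_params)
summary(cre_results)
```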
/scratch/gouwar.j/cran-all/cranData/CRE/vignettes/CRE.Rmd
---
title: "Contribution"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Contribution}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

CRE is an open source package and contributions are welcome from the open source community in the form of pull requests. Please read the following documents before making changes to the codebase.

## Environment Setup

Please follow these steps to get a copy of _CRE_ on your Github account.

- Navigate to the CRE Github [repository](https://github.com/NSAPH-Software/CRE), and at the top right corner, click on the `Fork` button. This will add a clone of the project to your Github account.
- Open your terminal (or Gitbash for Windows, Anaconda prompt, ...) and run the following command (brackets are not included):

```S
git clone git@github.com:[your user name]/CRE.git
```

- If you do not already have an SSH key, you need to generate one. Read more [here](https://docs.github.com/en/github-ae@latest/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent).
- Now, you can modify the codebase and track your modifications.
- It is a good idea to create a new branch to work on the codebase. Read the following instructions for git branching.

## Git Branching Model

Although you can pick any branch name in your personal repository, in order to keep consistency and make it clear who is working on what, the following convention is strongly recommended. In this project, we follow the convention proposed by Vincent Driessen in his [A successful Git branching model](https://nvie.com/posts/a-successful-git-branching-model/) post.

## Where to submit pull requests?

All pull requests should be submitted to `base repository: fasrc/CRE` and the `base: develop` branch.

## Pull request checklist

- Please run `devtools::document()` and `devtools::load_all()` after your final modifications.
- Make sure that your modified code passes all checks and tests (you can run `devtools::check()` in RStudio).
- Your PR should pass all the CI checks and reviews so that we can merge it.
- Add a line (or lines) about the modification to the NEWS.md file.
- If you are adding new features, please make sure that appropriate documentation is added or updated.
- Please clean up white spaces. Read more [here](https://softwareengineering.stackexchange.com/questions/121555/why-is-trailing-whitespace-a-big-deal).

## Reporting bugs

Please report potential bugs by creating a [new issue](https://github.com/NSAPH-Software/CRE/issues) or sending us an email. Please include the following information in your bug report:

- A brief description of what you are doing, what you expected to happen, and what happened.
- The OS that you are using and whether you are using a personal computer or an HPC cluster.
- The version of the package that you have installed.

## Style Guide

In this project, we follow the [tidyverse style guide](https://style.tidyverse.org).
### Summary

#### Names

- File names are in snake_case and end with .R (e.g., create_matching.R)
- Variable names are lowercase, with words separated by _ if needed (e.g., delta_n)
- Function names should follow snake_case style (e.g., generate_data)
- Function names follow the verb+output convention (e.g., compute_resid)

#### Spaces and Indentation

- Indentations are two spaces (do not use tabs)
- Place spaces around binary operators (e.g., x + y)

```R
# Acceptable:
z <- x + y

# Not recommended:
z<-x+y # (no spaces)
z<- x+y
z<-x +y
```

- Place a space after a comma

```R
# Acceptable:
a <- matrix(c(1:100), nrow = 5)

# Not recommended:
a <- matrix(c(1:100),nrow = 5)    # (no space after comma)
a <- matrix( c(1:100), nrow = 5 ) # (extra space after and before parentheses)
a<-matrix(c(1:100), nrow = 5)     # (no space around the assignment operator <- )
```

- Place a single space after # before the comment text and avoid multiple #

```R
# Acceptable:
# This is a comment

# Not recommended:
#This is a comment
#   This is a comment (more than one space after #)
## This is a comment (multiple #)
###   This is a comment (multiple # and more than one space)
```

- Do not put spaces just inside opening and closing parentheses

```R
# Acceptable:
x <- (z + y)

# Not recommended:
x <- ( z + y ) # (unnecessary spaces)
x <- (z + y )
x <- ( z + y)
```

- Place a space before and after `()` when used with `if`, `for`, or `while`.

```R
# Acceptable
if (x > 2) {
  print(x)
}

# Not recommended
if(x > 2){
  print(x)
}
```

#### Other notes

- Maximum line length is 80 characters
- Use explicit returns
- Use explicit tags in documentation (e.g., @title, @description, ...), as in the sketch below
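As a small illustration of the last two notes (explicit returns and explicit documentation tags), here is a sketch that reuses the `compute_resid` naming example from above; the function body is invented for illustration only and is not part of the package.

```R
#' @title Compute residuals
#' @description Computes residuals between observed and fitted values.
#' @param y Observed values
#' @param y_hat Fitted values
#' @return A numeric vector of residuals
compute_resid <- function(y, y_hat) {
  resid <- y - y_hat
  # Explicit return, as recommended above
  return(resid)
}
```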
/scratch/gouwar.j/cran-all/cranData/CRE/vignettes/Contribution.Rmd
---
title: "Testing the CRE Package"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Testing the CRE package}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

We encourage all developers to test the package under different conditions. Testing the package is the easiest way to get familiar with the package and its functionalities.

# Getting the code

To test the package, please install it on your system (R (>= 3.5.0)). You can install the package by following one of these approaches:

- Directly from GitHub
- CRAN (not recommended)
- Source
- Forked Repository (recommended)

## Installing the package directly from Github

In this project, we follow [A successful Git Branching Model](https://nvie.com/posts/a-successful-git-branching-model/). As a result, the `develop` branch is the most up-to-date branch for developers. Use `devtools::install_github` to install the package. If you do not specify the `ref`, it will install the master (or main) branch.

```R
library(devtools)
# If you already have the package, detach and unload it to get a fresh install.
try(detach("package:CRE", unload = TRUE), silent = TRUE)
install_github("NSAPH-Software/CRE", ref = "develop")
library(CRE)
```

Try `?CRE`. It should open the package description page under the help tab (assuming you are using RStudio).

## Installing the package from CRAN

Installing the package from CRAN for development purposes is not recommended, because the version on CRAN is most likely not the latest version.

[Complete this section after submitting the package to CRAN]

## Installing the package from source

In order to install the package from source, you need to download the source code to your computer and install it locally. Here are the steps:

- Go to the package [Github repository](https://github.com/NSAPH-Software/CRE) and from the drop-down menu change the branch to `develop`. Then click on the `Code` button and then click on `Download ZIP`.
- Open one of the files using RStudio, then set the working directory to the project directory (`Session > Set Working Directory > To Project Directory`).
- Load the `devtools` library and then load CRE.

```R
library(devtools)
load_all()
?CRE
```

## Forking the package

Forking the package under your Github account is the best option if you are planning on installing, testing, modifying, and contributing to the project. Go to the package [Github repository](https://github.com/NSAPH-Software/CRE) and click on the `Fork` button at the top right corner. After forking the package, open your terminal (or Gitbash for Windows, Anaconda prompt, ...) and run the following command (brackets are not included):

```S
git clone git@github.com:[your user name]/CRE.git
```

Now, you can modify the codebase and track your modifications. Navigate to the package folder and install the package following the **Installing the package from source** steps. It is a good idea to create a new branch to work on the codebase. Read [A successful Git Branching Model](https://nvie.com/posts/a-successful-git-branching-model/) for the branching convention.

# Testing the Package

Run the following command to test the package.
```{r, warning=FALSE, eval=FALSE} library(CRE) # Generate sample data set.seed(1358) dataset <- generate_cre_dataset(n = 1000, rho = 0, n_rules = 2, p = 10, effect_size = 2, binary_covariates = TRUE, binary_outcome = FALSE, confounding = "no") y <- dataset[["y"]] z <- dataset[["z"]] X <- dataset[["X"]] method_params <- list(ratio_dis = 0.5, ite_method = "aipw", learner_ps = "SL.xgboost", learner_y = "SL.xgboost", offset = NULL) hyper_params <- list(intervention_vars = NULL, ntrees = 20, node_size = 20, max_rules = 50, max_depth = 3, t_decay = 0.025, t_ext = 0.01, t_corr = 1, t_pvalue = 0.05, stability_selection = "vanilla", cutoff = 0.6, pfer = 1, B = 10, subsample = 0.5) # linreg CATE estimation with aipw ITE estimation cre_results <- cre(y, z, X, method_params, hyper_params) summary(cre_results) plot(cre_results) ite_pred <- predict(cre_results, X) ```
/scratch/gouwar.j/cran-all/cranData/CRE/vignettes/Testing-the-Package.Rmd
#' CREAM is the main function for CORE identification #' #' @param in_path Path to the input file (The file inclusing the functional #' regions) #' Note. You have to make sure that there is no overlapping regions within the #' input file #' @param MinLength Criteria for the minimum number of functional regions in the #' input file #' @param peakNumMin Minimum number of peaks for CORE identification #' @param WScutoff Threshold used to identify WS within distribution of maximum #' distance between peaks for each order of CORE #' @return Bed file including the identified COREs #' @examples #' CREAM(system.file("extdata", "A549_Chr21.bed", package = "CREAM"), #' MinLength = 1000, peakNumMin = 2) #' @importFrom utils read.table write.table #' @export CREAM <- function(in_path, WScutoff = 1.5, MinLength = 1000, peakNumMin = 2){ InputData <- read.table(in_path, sep="\t") colnames(InputData) <- c("chr", "start", "end") ########################### print(paste("Please make sure there is no overlap between the input genomic regions.", "Overlap between the input regions may cause error.")) ########################### Checking total number of input regions if(nrow(InputData) < MinLength){ stop(paste( "Number of functional regions is less than ", MinLength, ".", sep = "", collapse = "")) } ##################### Checking if there are chromosomes with low number of input regions ChrRegNum <- table(InputData[,"chr"]) LowNumChr_Ind <- which(ChrRegNum < 200) if(length(LowNumChr_Ind) > 0){ warning(paste("There are chromosome with low number of regions and you may get error.", "The reason is that highest Order may become larger than the number of regions in a chromosome.", "Hence, there will not be enough regions in that chromosome for clustering.")) } ##################### WindowVecFinal <- WindowVec(InputData, peakNumMin, WScutoff) OutputList <- ElementRecog(InputData, WindowVecFinal, (1+length(WindowVecFinal)), peakNumMin) WidthSeq_Vec <- OutputList[[1]] StartSeq_Vec <- OutputList[[2]] EndSeq_Vec <- OutputList[[3]] ChrSeq_Vec <- OutputList[[4]] OrderSeq_Vec <- OutputList[[5]] WinSizeSeq_Vec <- OutputList[[6]] #################### if(!is.null(StartSeq_Vec)){ SortedOrderInd <- sort(OrderSeq_Vec, decreasing = T, index.return = T)[[2]] CombinedData <- cbind(ChrSeq_Vec[SortedOrderInd], StartSeq_Vec[SortedOrderInd], EndSeq_Vec[SortedOrderInd], OrderSeq_Vec[SortedOrderInd], WidthSeq_Vec[SortedOrderInd], WinSizeSeq_Vec[SortedOrderInd]) colnames(CombinedData) <- c("Chr", "Start", "End", "Order", "Width", "WindowSize") MinPeaks <- PeakMinFilt(CombinedData, WindowVecFinal) RemovePeaks <- which(as.numeric(CombinedData[,"Order"]) < MinPeaks) if(length(RemovePeaks) > 0){ CombinedData <- CombinedData[-RemovePeaks,] } CombinedData <- CombinedData[,c(1:3)] colnames(CombinedData) <- NULL return(CombinedData) } }
/scratch/gouwar.j/cran-all/cranData/CREAM/R/CREAM.R
#' ElementRecog is a function to identify COREs #' #' @param InputData The input data as a table including chromosome regions #' in which the first column is chromosome annotation, and second and third #' columns are start and ending positions. #' @param windowSize_Vec Vector of window sizes ordered based on order of CORE #' @param peakNumMax Maximum order of COREs (e.g. maximum number of peaks within COREs) #' @param peakNumMin Minimum order of COREs (e.g. minimum number of peaks within COREs) #' @return Identified COREs for the given input regions #' @examples #' InputData <- read.table(system.file("extdata", "A549_Chr21.bed", #' package = "CREAM"), sep="\t") #' colnames(InputData) <- c("chr", "start", "end") #' MinLength <- 1000 #' if(nrow(InputData) < MinLength){ #' stop(paste( "Number of functional regions is less than ", MinLength, #' ".", sep = "", collapse = "")) #' } #' peakNumMin <- 2 #' WScutoff <- 1.5 #' WindowVecFinal <- WindowVec(InputData, peakNumMin, WScutoff) #' OutputList <- ElementRecog(InputData, WindowVecFinal, #' (1+length(WindowVecFinal)), peakNumMin) #' @export ElementRecog <- function(InputData, windowSize_Vec, peakNumMax, peakNumMin){ print("Identifying COREs") ChrSeq <- as.character(unique(InputData[,1])) WidthSeq_All <- c() StartRegionAll_Vec <- c() EndRegionAll_Vec <- c() ChrSeqAll_Vec <- c() OrderSeqAll_Vec <- c() SDSeqAll_Vec <- c() WindowSizeAll_Vec <- c() for(chrIter in ChrSeq){ InputData_Start <- InputData[which(InputData[,1] == chrIter),"start"] InputData_End <- InputData[which(InputData[,1] == chrIter),"end"] InputData_End <- InputData_End[order(InputData_Start, decreasing = F)] InputData_Start <- InputData_Start[order(InputData_Start, decreasing = F)] InputData_Center <- 0.5*(InputData_Start + InputData_End) InputData_StartSeq <- min(InputData_Start) InputData_EndSeq <- max(InputData_End) ChrElement_Vec <- c() StartElement_Vec <- c() EndElement_Vec <- c() WidthElement_Vec <- c() OrderElement_Vec <- c() SDElement_Vec <- c() WindowSize_Vec <- c() for(peakNumIter in seq(peakNumMax, peakNumMin, by = -1)){ i <- 1 WindowSize <- windowSize_Vec[(peakNumIter - 1)] while(i < (length(InputData_Start)-(peakNumIter - 1))){ widthElement <- (InputData_End[(i+(peakNumIter - 1))] - InputData_Start[i]) checkwindow <- max(InputData_Start[(i+1):(i + (peakNumIter - 1))] - InputData_End[i:(i+ (peakNumIter - 1) - 1)]) if(checkwindow < WindowSize){ ChrElement_Vec <- c(ChrElement_Vec, chrIter) StartElement_Vec <- c(StartElement_Vec, InputData_Start[i]) EndElement_Vec <- c(EndElement_Vec, InputData_End[(i+(peakNumIter-1))]) WidthElement_Vec <- c(WidthElement_Vec, widthElement) OrderElement_Vec <- c(OrderElement_Vec, peakNumIter) WindowSize_Vec <- c(WindowSize_Vec, checkwindow) InputData_Start <- InputData_Start[-(i:(i+(peakNumIter-1)))] InputData_End <- InputData_End[-(i:(i+(peakNumIter-1)))] InputData_Center <- InputData_Center[-(i:(i+(peakNumIter-1)))] }else{ i <- i + 1 } } } ##### Window-based analysis WidthSeq_All <- c(WidthSeq_All, WidthElement_Vec) StartRegionAll_Vec <- c(StartRegionAll_Vec, StartElement_Vec) EndRegionAll_Vec <- c(EndRegionAll_Vec, EndElement_Vec) ChrSeqAll_Vec <- c(ChrSeqAll_Vec, ChrElement_Vec) OrderSeqAll_Vec <- c(OrderSeqAll_Vec, OrderElement_Vec) WindowSizeAll_Vec <- c(WindowSizeAll_Vec, WindowSize_Vec) } return(list(WidthSeq_All, StartRegionAll_Vec, EndRegionAll_Vec, ChrSeqAll_Vec, OrderSeqAll_Vec, WindowSizeAll_Vec)) }
/scratch/gouwar.j/cran-all/cranData/CREAM/R/ElementRecog.R
#' PeakMinFilt is a function to filter out the lowest orders of COREs for which
#' the distance between functional regions is close to the corresponding window size
#'
#' @param Clusters_init Table of identified COREs before filtering
#' @param WindowVecFinal Vector of window sizes ordered based on order of CORE
#' @return Minimum order of COREs
#' @importFrom stats median
#' @export
PeakMinFilt <- function(Clusters_init, WindowVecFinal){
  print("Filtering clusters of low order")
  UniqueOrder <- sort(unique(as.numeric(Clusters_init[,"Order"])), decreasing = F)
  Zscore <- c()
  for(OrderIter in 1:length(UniqueOrder)){
    TargetInd <- which(as.numeric(Clusters_init[,"Order"]) == UniqueOrder[OrderIter])
    Zscore <- c(Zscore,
                (WindowVecFinal[(UniqueOrder[OrderIter] - 1)] -
                   median(as.numeric(Clusters_init[TargetInd,"WindowSize"])))/
                  median(as.numeric(Clusters_init[TargetInd,"WindowSize"])))
  }
  TargetOrder <- 0
  if(length(Zscore) > 2){
    for(OrderIter in 2:length(Zscore)){
      if(Zscore[OrderIter] < Zscore[(OrderIter-1)]){
        return(UniqueOrder[OrderIter])
        stop("Vector obtained")
      }
    }
  }
  return(max(UniqueOrder))
}
/scratch/gouwar.j/cran-all/cranData/CREAM/R/PeakMinFilt.R
#' WindowSizeRecog is a function to specify window size for each order of COREs #' #' @param InputData The input data as a table including chromosome regions #' in which the first column is chromosome annotation, and second and third #' columns are start and ending positions. #' @param COREorder Order of the COREs which window size has to be determined for. #' @param WScutoff Threshold used to identify WS within distribution of maximum distance between peaks for each order of CORE #' @return Window size identified for each order of CORE #' @importFrom stats median quantile #' @examples #' InputData <- read.table(system.file("extdata", "A549_Chr21.bed", #' package = "CREAM"), sep="\t") #' colnames(InputData) <- c("chr", "start", "end") #' MinLength <- 1000 #' if(nrow(InputData) < MinLength){ #' stop(paste( "Number of functional regions is less than ", MinLength, #' ".", sep = "", collapse = "")) #' } #' peakNumMin <- 2 #' WScutoff <- 1.5 #' WindowSize <- WindowSizeRecog(InputData, peakNumMin, WScutoff) #' @export WindowSizeRecog <- function(InputData, COREorder, WScutoff){ ChrSeq <- as.character(unique(InputData[,1])) OrderSeqAll_Vec <- c() WindowAll_Vec <- c() for(chrIter in ChrSeq){ InputData_Start <- InputData[which(InputData[,1] == chrIter),"start"] InputData_End <- InputData[which(InputData[,1] == chrIter),"end"] RemInd <- which(duplicated(paste(InputData_Start, InputData_End, sep = "_"))) if(length(RemInd) > 0){ InputData_Start <- InputData_Start[-RemInd] InputData_End <- InputData_End[-RemInd] } InputData_End <- InputData_End[order(InputData_Start, decreasing = F)] InputData_Start <- InputData_Start[order(InputData_Start, decreasing = F)] InputData_Center <- 0.5*(InputData_Start + InputData_End) InputData_StartSeq <- min(InputData_Start) InputData_EndSeq <- max(InputData_End) peakNumIter <- COREorder if(((length(InputData_Start)-(peakNumIter - 1))-1) > 0){ OrderElement_Vec <- rep(0,((length(InputData_Start)-(peakNumIter - 1))-1)) WindowElement_Vec <- rep(0,((length(InputData_Start)-(peakNumIter - 1))-1)) ########## i <- 1 while(i < (length(InputData_Start)-(peakNumIter - 1))){ widthElement <- (InputData_End[(i+(peakNumIter - 1))] - InputData_Start[i]) checkwindow <- max(InputData_Start[(i+1):(i + (peakNumIter - 1))] - InputData_End[i:(i+ (peakNumIter - 1) - 1)]) OrderElement_Vec[i] <- peakNumIter WindowElement_Vec[i] <- checkwindow i <- i + 1 } ##### Window-based analysis OrderSeqAll_Vec <- c(OrderSeqAll_Vec, OrderElement_Vec) WindowAll_Vec <- c(WindowAll_Vec, WindowElement_Vec) } } i <- COREorder SortedWindow_Vec <- sort(WindowAll_Vec[which(OrderSeqAll_Vec == i)]) SortedWindowQuan <- quantile(SortedWindow_Vec) aa <- (as.numeric(SortedWindowQuan[4]) + WScutoff*(as.numeric(SortedWindowQuan[4])- as.numeric(SortedWindowQuan[2]))) RemovePeaks <- which(SortedWindow_Vec > aa) if(length(RemovePeaks) > 0){ bb <- log(SortedWindow_Vec[-RemovePeaks]) }else{ bb <- log(SortedWindow_Vec) } bb_quan <- quantile(bb) TightReg <- (as.numeric(bb_quan[2]) - WScutoff*(as.numeric(bb_quan[4]) - as.numeric(bb_quan[2]))) Outliers <- which(bb < TightReg) if(length(Outliers) > 0){ WindowSize <- exp(TightReg) }else{ WindowSize <- 1 } return(WindowSize) }
/scratch/gouwar.j/cran-all/cranData/CREAM/R/WindowSizeRecog.R
#' WindowVec is a function to specify window size for each order of COREs #' #' @param InputData The input data as a table including chromosome regions in #' which the first column is chromosome annotation, and second and third #' columns are start and ending positions. #' @param peakNumMin Minimum order of COREs #' @param WScutoff Threshold used to identify WS within distribution of maximum distance between peaks for each order of CORE #' @return Vector of window sizes from order 2 up to maximum order of COREs #' @examples #' InputData <- read.table(system.file("extdata", "A549_Chr21.bed", #' package = "CREAM"), sep="\t") #' colnames(InputData) <- c("chr", "start", "end") #' MinLength <- 1000 #' if(nrow(InputData) < MinLength){ #' stop(paste( "Number of functional regions is less than ", MinLength, #' ".", sep = "", collapse = "")) #' } #' peakNumMin <- 2 #' WScutoff <- 1.5 #' WindowVecFinal <- WindowVec(InputData, peakNumMin, WScutoff) #' @export WindowVec <- function(InputData, peakNumMin, WScutoff){ print("Identifying window size for each Order") WindowVec_Act <- c() WindowSize <- WindowSizeRecog(InputData, peakNumMin, WScutoff) WindowVec_Act <- c(WindowVec_Act, WindowSize) OrderIter <- 2 while(WindowVec_Act[length(WindowVec_Act)] >0){ OrderIter <- (OrderIter+1) peakNumMax <- OrderIter WindowSize <- WindowSizeRecog(InputData, peakNumMax, WScutoff) WindowVec_Act <- c(WindowVec_Act, WindowSize) lenWinVec <- length(WindowVec_Act) if(WindowVec_Act[lenWinVec] > 1 & WindowVec_Act[(lenWinVec - 1)] > 1){ if(WindowVec_Act[lenWinVec] < WindowVec_Act[(lenWinVec - 1)] || WindowVec_Act[lenWinVec] == WindowVec_Act[(lenWinVec - 1)]){ return(WindowVec_Act[1:(lenWinVec - 1)]) stop("Vector obtained") } } } return(WindowVec_Act) }
/scratch/gouwar.j/cran-all/cranData/CREAM/R/WindowVec.R
#' @title Calibrated Ratio Estimator under Double Sampling Design
#' @description The population ratio estimator under a two-phase random sampling design
#' has gained enormous popularity in recent years. This package provides functions for
#' estimating the calibrated population ratio under a two-phase sampling design,
#' including the approximate variance of the ratio estimator. The improved ratio
#' estimator is applicable both when auxiliary data are available at the unit level and
#' when they are available at the aggregate level for the first-phase sample. The
#' calibration weight of each unit of the second-phase sample is calculated. Single and
#' combined inclusion probabilities are also estimated for both phases under two-phase
#' random sampling. The improved ratio estimator's percentage coefficient of variation
#' is also determined as a measure of accuracy.
#' @param N Population size
#' @param FSU First stage sampling units
#' @param SSU Second stage sampling units
#' @import MASS stats
#'
#' @return
#' \itemize{
#'   \item CalEstimate: Estimated value of the calibration estimator
#'   \item CalVariance: Variance of the calibration estimator
#'   \item CV: Coefficient of variation
#'   \item SampleSize: Sample sizes of FSU and SSU
#'   \item DesignWeight: Design weight vector
#'   \item InclusionProb: Inclusion probability vector
#'   \item Correlation: Correlation values
#' }
#'
#' @export
#'
#' @examples
#' f1<-rnorm(100,20,5)
#' f2<-rnorm(100,20,5)
#' fsu<-cbind(f1,f2)
#' s1<-rnorm(50,20,5)
#' s2<-rnorm(50,20,5)
#' s3<-rnorm(50,20,5)
#' s4<-rnorm(50,20,5)
#' ssu<-cbind(s1,s2,s3,s4)
#' RCRatio(N=1000, FSU=fsu, SSU=ssu)
#'
#' @references
#' \itemize{
#' \item Islam, S., Chandra, H., Sud, U.C., Basak, P., Ghosh, N. and Ojasvi, P.R. (2021). A Revised Calibration Weight based Ratio Estimator in Two-phase Sampling: A Case when Unit Level Auxiliary Information is Available for the First-phase Sample, Journal of Indian Society of Agricultural Statistics, 75(2), 147-156.
#' \item Ozgul, N. (2021). New improved calibration estimator based on two auxiliary variables in stratified two-phase sampling. Journal of Statistical Computation and Simulation, 91(6), 1243-1256.
#' } RCRatio<-function(N,FSU,SSU ){ fsu<-as.data.frame(FSU) ssu<-as.data.frame(SSU) n1=dim(fsu)[1] # first phase sample size n2=dim(ssu)[1] # second phase sample size d2i<-n1/n2; di<-N/n2 d1i=N/n1;#di=n1/n2;d1i;di pi_1i=1/d1i #First phase inclusion probability pi_1ij=(n1*(n1-1))/(N*(N-1)) #joined inclusion probability pi_2ij=(n2*(n2-1))/(n1*(n1-1)) pi=1/di #overall inclusion probability x1_fsu<-fsu[,1] x2_fsu<-fsu[,2] y_ssu<-ssu[,1] z_ssu<-ssu[,3] x1_ssu<-ssu[,2] x2_ssu<-ssu[,4] #Pearson's correlation coefficient Rho_yz<-cor(y_ssu,z_ssu) Rho_yx1<-cor(y_ssu,x1_ssu) Rho_yx2<-cor(y_ssu,x2_ssu) Rho_zx1<-cor(z_ssu,x1_ssu) Rho_zx2<-cor(z_ssu,x2_ssu) cor<-cbind(yz=Rho_yz,yx1=Rho_yx1,yx2=Rho_yx2,zx1=Rho_zx1,zx2=Rho_zx2) t.xs1<-as.matrix(apply(fsu,2,sum)) t.xs2<-as.matrix(apply(ssu[,c(1,4)],2,sum)) # Direct ratio estimator Est_R<- sum(y_ssu)/sum(z_ssu) #Simple est_R A_prime=d1i*t.xs1[1] B_prime=d1i*t.xs1[2] sum_da=di*t.xs2[1] sum_db=di*t.xs2[2] a=x1_ssu b=x2_ssu sum_daa=di*(t(a)%*%a) sum_dbb=di*(t(b)%*%b) sum_dab=di*(t(a)%*%b) #calibration weight w_cap1<- rep(0,n2) for (i in 1:n2){ c1=((di*a[i])*(sum_dbb*(A_prime-sum_da)-sum_dab*(B_prime-sum_db))) c2=(sum_daa*sum_dbb-sum_dab*sum_dab) c3=((di*b[i])*(sum_daa*(B_prime-sum_db)-sum_dab*(A_prime-sum_da))) w_cap1[i]=di+c1/c2+c3/c2 } #proposed calibration estimator R_cap2<- (t(w_cap1)%*%y_ssu)/(t(w_cap1)%*%z_ssu) #Variance of proposed calibration estimator sum_daa=di*(t(a)%*%a) sum_dbb=di*(t(b)%*%b) sum_dab=di*(t(a)%*%b) sum_dyy=di*(t(y_ssu)%*%y_ssu) sum_dzz=di*(t(z_ssu)%*%z_ssu) sum_dya=di*(t(y_ssu)%*%a) sum_dyb=di*(t(y_ssu)%*%b) sum_dza=di*(t(z_ssu)%*%a) sum_dzb=di*(t(z_ssu)%*%b) t_cap_z<-di*(t(z_ssu)%*%z_ssu) c11<-N*(N-n1)/(n1*t_cap_z^2) c22<-N^2*(n1-n2)/(n1*n2*t_cap_z^2) R_cap<-Est_R u_cap<-y_ssu-R_cap*z_ssu q<-sum_daa*sum_dbb-sum_dab^2 l1<-(sum_dyb*sum_dyy-sum_dya*sum_dab)/q l2<-(sum_dya*sum_dbb-sum_dyb*sum_dab)/q l3<-(sum_dzb*sum_daa-sum_dza*sum_dab)/q l4<-(sum_dza*sum_dbb-sum_dzb*sum_dab)/q v_cap=u_cap+b%*%(l3*R_cap-l1)+a%*%(l4*R_cap-l2) v_cap_mean=mean(v_cap) vec_1=c(rep(1,n2)) SS_v=(1/(n2-1))*(vec_1%*%(v_cap-vec_1*v_cap_mean)^2) y_cap_mean=mean(y_ssu) z_cap_mean=mean(z_ssu) SS_y=(1/(n2-1))*(vec_1%*%(y_ssu-vec_1*y_cap_mean)^2) SS_z=(1/(n2-1))*(vec_1%*%(z_ssu-vec_1*z_cap_mean)^2) Rho_yz<-cor(y_ssu,z_ssu) SS_u=SS_y+R_cap^2*SS_z-2*R_cap*Rho_yz*sqrt(SS_y)*sqrt(SS_z) Var_RC2_SRS<-c11*SS_u+c22*SS_v CV=(sqrt(Var_RC2_SRS)/R_cap2)*100 samplesize<-cbind(FS=n1,SS=n2) designweight<-cbind(FS=d1i, SS=d2i, Overall=di) incprob<-cbind(pi_1i,pi_1ij,pi_2ij,pi_2ij) cor<-cbind(yz=Rho_yz,yx1=Rho_yx1,yx2=Rho_yx2,zx1=Rho_zx1,zx2=Rho_zx2) out<-list(CalEstimate=R_cap2,CalVariance=Var_RC2_SRS,CV=CV,SampleSize=samplesize, DesignWeight=designweight, InclusionProb=incprob,Correlation=cor ) return(out) }
/scratch/gouwar.j/cran-all/cranData/CREDS/R/CREDS.R
#' CRF - Conditional Random Fields #' #' Library of Conditional Random Fields model #' #' CRF is R package for various computational tasks of conditional random #' fields as well as other probabilistic undirected graphical models of #' discrete data with pairwise and unary potentials. The #' decoding/inference/sampling tasks are implemented for general discrete #' undirected graphical models with pairwise potentials. The training task is #' less general, focusing on conditional random fields with log-linear #' potentials and a fixed structure. The code is written entirely in R and C++. #' The initial version is ported from UGM written by Mark Schmidt. #' #' Decoding: Computing the most likely configuration #' \itemize{ #' \item \code{\link{decode.exact}} Exact decoding for small graphs with brute-force search #' \item \code{\link{decode.chain}} Exact decoding for chain-structured graphs with the Viterbi algorithm #' \item \code{\link{decode.tree}} Exact decoding for tree- and forest-structured graphs with max-product belief propagation #' \item \code{\link{decode.conditional}} Conditional decoding (takes another decoding method as input) #' \item \code{\link{decode.cutset}} Exact decoding for graphs with a small cutset using cutset conditioning #' \item \code{\link{decode.junction}} Exact decoding for low-treewidth graphs using junction trees #' \item \code{\link{decode.sample}} Approximate decoding using sampling (takes a sampling method as input) #' \item \code{\link{decode.marginal}} Approximate decoding using inference (takes an inference method as input) #' \item \code{\link{decode.lbp}} Approximate decoding using max-product loopy belief propagation #' \item \code{\link{decode.trbp}} Approximate decoding using max-product tree-reweighted belief propagtion #' \item \code{\link{decode.greedy}} Approximate decoding with greedy algorithm #' \item \code{\link{decode.icm}} Approximate decoding with the iterated conditional modes algorithm #' \item \code{\link{decode.block}} Approximate decoding with the block iterated conditional modes algorithm #' \item \code{\link{decode.ilp}} Exact decoding with an integer linear programming formulation and approximate using LP relaxation #' } #' #' Inference: Computing the partition function and marginal probabilities #' \itemize{ #' \item \code{\link{infer.exact}} Exact inference for small graphs with brute-force counting #' \item \code{\link{infer.chain}} Exact inference for chain-structured graphs with the forward-backward algorithm #' \item \code{\link{infer.tree}} Exact inference for tree- and forest-structured graphs with sum-product belief propagation #' \item \code{\link{infer.conditional}} Conditional inference (takes another inference method as input) #' \item \code{\link{infer.cutset}} Exact inference for graphs with a small cutset using cutset conditioning #' \item \code{\link{infer.junction}} Exact decoding for low-treewidth graphs using junction trees #' \item \code{\link{infer.sample}} Approximate inference using sampling (takes a sampling method as input) #' \item \code{\link{infer.lbp}} Approximate inference using sum-product loopy belief propagation #' \item \code{\link{infer.trbp}} Approximate inference using sum-product tree-reweighted belief propagation #' } #' #' Sampling: Generating samples from the distribution #' \itemize{ #' \item \code{\link{sample.exact}} Exact sampling for small graphs with brute-force inverse cumulative distribution #' \item \code{\link{sample.chain}} Exact sampling for chain-structured graphs with the 
forward-filter backward-sample algorithm #' \item \code{\link{sample.tree}} Exact sampling for tree- and forest-structured graphs with sum-product belief propagation and backward-sampling #' \item \code{\link{sample.conditional}} Conditional sampling (takes another sampling method as input) #' \item \code{\link{sample.cutset}} Exact sampling for graphs with a small cutset using cutset conditioning #' \item \code{\link{sample.junction}} Exact sampling for low-treewidth graphs using junction trees #' \item \code{\link{sample.gibbs}} Approximate sampling using a single-site Gibbs sampler #' } #' #' Training: Given data, computing the most likely estimates of the parameters #' \itemize{ #' \item \code{\link{train.crf}} Train CRF model #' \item \code{\link{train.mrf}} Train MRF model #' } #' #' Tools: Tools for building and manipulating CRF data #' \itemize{ #' \item \code{\link{make.crf}} Generate CRF from the adjacent matrix #' \item \code{\link{make.features}} Make the data structure of CRF features #' \item \code{\link{make.par}} Make the data structure of CRF parameters #' \item \code{\link{duplicate.crf}} Duplicate an existing CRF #' \item \code{\link{clamp.crf}} Generate clamped CRF by fixing the states of some nodes #' \item \code{\link{clamp.reset}} Reset clamped CRF by changing the states of clamped nodes #' \item \code{\link{sub.crf}} Generate sub CRF by selecting some nodes #' \item \code{\link{mrf.update}} Update node and edge potentials of MRF model #' \item \code{\link{crf.update}} Update node and edge potentials of CRF model #' } #' #' @name CRF-package #' @aliases CRF-package CRF #' @docType package #' @keywords package #' @author Ling-Yun Wu \email{wulingyun@@gmail.com} #' #' @references J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: #' Probabilistic models for segmenting and labeling sequence data. In \emph{the #' proceedings of International Conference on Machine Learning (ICML)}, pp. 282-289, 2001. #' @references Mark Schmidt. UGM: A Matlab toolbox for probabilistic undirected graphical models. #' \url{http://www.cs.ubc.ca/~schmidtm/Software/UGM.html}, 2007. 
#' #' @examples #' #' library(CRF) #' data(Small) #' decode.exact(Small$crf) #' infer.exact(Small$crf) #' sample.exact(Small$crf, 100) #' #' @useDynLib CRF, .registration = TRUE #' NULL #' Small CRF example #' #' This data set gives a small CRF example #' #' @format A list containing two elements: #' \itemize{ #' \item \code{crf} The CRF #' \item \code{answer} A list of 4 elements: #' \itemize{ #' \item \code{decode} The most likely configuration #' \item \code{node.bel} The node belief #' \item \code{edge.bel} The edge belief #' \item \code{logZ} The logarithmic value of CRF normalization factor Z #' } #' } #' #' @name Small #' @aliases Small #' @docType data #' @keywords datasets #' @usage data(Small) #' NULL #' Chain CRF example #' #' This data set gives a chain CRF example #' #' @format A list containing two elements: #' \itemize{ #' \item \code{crf} The CRF #' \item \code{answer} A list of 4 elements: #' \itemize{ #' \item \code{decode} The most likely configuration #' \item \code{node.bel} The node belief #' \item \code{edge.bel} The edge belief #' \item \code{logZ} The logarithmic value of CRF normalization factor Z #' } #' } #' #' @name Chain #' @aliases Chain #' @docType data #' @keywords datasets #' @usage data(Chain) #' NULL #' Tree CRF example #' #' This data set gives a tree CRF example #' #' @format A list containing two elements: #' \itemize{ #' \item \code{crf} The CRF #' \item \code{answer} A list of 4 elements: #' \itemize{ #' \item \code{decode} The most likely configuration #' \item \code{node.bel} The node belief #' \item \code{edge.bel} The edge belief #' \item \code{logZ} The logarithmic value of CRF normalization factor Z #' } #' } #' #' @name Tree #' @aliases Tree #' @docType data #' @keywords datasets #' @usage data(Tree) #' NULL #' Loop CRF example #' #' This data set gives a loop CRF example #' #' @format A list containing two elements: #' \itemize{ #' \item \code{crf} The CRF #' \item \code{answer} A list of 4 elements: #' \itemize{ #' \item \code{decode} The most likely configuration #' \item \code{node.bel} The node belief #' \item \code{edge.bel} The edge belief #' \item \code{logZ} The logarithmic value of CRF normalization factor Z #' } #' } #' #' @name Loop #' @aliases Loop #' @docType data #' @keywords datasets #' @usage data(Loop) #' NULL #' Clique CRF example #' #' This data set gives a clique CRF example #' #' @format A list containing two elements: #' \itemize{ #' \item \code{crf} The CRF #' \item \code{answer} A list of 4 elements: #' \itemize{ #' \item \code{decode} The most likely configuration #' \item \code{node.bel} The node belief #' \item \code{edge.bel} The edge belief #' \item \code{logZ} The logarithmic value of CRF normalization factor Z #' } #' } #' #' @name Clique #' @aliases Clique #' @docType data #' @keywords datasets #' @usage data(Clique) #' NULL #' Rain data #' #' This data set gives an example of rain data used to train CRF and MRF models #' #' @format A list containing two elements: #' \itemize{ #' \item \code{rain} A matrix of 28 columns containing raining data (1: rain, 2: sunny). #' Each row is an instance of 28 days for one month. #' \item \code{months} A vector containing the months of each instance. #' } #' #' @name Rain #' @aliases Rain #' @docType data #' @keywords datasets #' @usage data(Rain) #' #' @references Mark Schmidt. UGM: A Matlab toolbox for probabilistic undirected graphical models. #' \url{http://www.cs.ubc.ca/~schmidtm/Software/UGM.html}, 2007. #' NULL
/scratch/gouwar.j/cran-all/cranData/CRF/R/CRF-package.R
#' Make clamped CRF #' #' Generate clamped CRF by fixing the states of some nodes #' #' The function will generate a clamped CRF from a given CRF #' by fixing the states of some nodes. The vector \code{clamped} #' contains the desired state for each node while zero means the state is not #' fixed. The node and edge potentials are updated to the conditional #' potentials based on the clamped vector. #' #' @param crf The CRF generated by \code{\link{make.crf}} #' @param clamped The vector of fixed states of nodes #' @return The function will return a new CRF with additional components: #' \item{original}{The original CRF.} #' \item{clamped}{The vector of fixed states of nodes.} #' \item{node.id}{The vector of the original node ids for nodes in the new CRF.} #' \item{node.map}{The vector of the new node ids for nodes in the original CRF.} #' \item{edge.id}{The vector of the original edge ids for edges in the new CRF.} #' \item{edge.map}{The vector of the new edge ids for edges in the original CRF.} #' @seealso \code{\link{make.crf}}, \code{\link{sub.crf}}, \code{\link{clamp.reset}} #' @examples #' #' library(CRF) #' data(Small) #' crf <- clamp.crf(Small$crf, c(0, 0, 1, 1)) #' #' #' @export clamp.crf <- function(crf, clamped) { data <- new.env() if (!is.vector(clamped) || length(clamped) != crf$n.nodes) stop("'clamped' should be a vector of length ", crf$n.nodes, "!") if (any(clamped > crf$n.states | clamped < 0)) stop("'clamped' has invalid value(s)!") data$original <- crf data$clamped <- clamped data$node.id <- which(clamped == 0) data$n.nodes <- length(data$node.id) data$node.map <- rep(0, crf$n.nodes) data$node.map[data$node.id] <- 1:data$n.nodes data$edge.id <- which(clamped[crf$edges[,1]] == 0 & clamped[crf$edges[,2]] == 0) data$n.edges <- length(data$edge.id) data$edges <- matrix(data$node.map[crf$edges[data$edge.id,]], ncol=2) data$edge.map <- rep(0, crf$n.edges) data$edge.map[data$edge.id] <- 1:data$n.edges .Call(Make_AdjInfo, data) data$n.states <- crf$n.states[data$node.id] data$max.state <- max(data$n.states) data$node.pot <- crf$node.pot[data$node.id, 1:data$max.state] data$edge.pot <- crf$edge.pot[data$edge.id] .Call(Clamp_Reset, data) class(data) <- c("CRF.clamped", "CRF") data } #' Reset clamped CRF #' #' Reset clamped CRF by changing the states of clamped nodes #' #' The function will reset a clamped CRF by changing the states of fixed nodes. #' The vector \code{clamped} contains the desired state for each node #' while zero means the state is not fixed. The node and edge potentials are #' updated to the conditional potentials based on the clamped vector. #' #' @param crf The clamped CRF generated by \code{\link{clamp.crf}} #' @param clamped The vector of fixed states of nodes #' @return The function will return the same clamped CRF. #' @seealso \code{\link{make.crf}}, \code{\link{clamp.crf}} #' @examples #' #' library(CRF) #' data(Small) #' crf <- clamp.crf(Small$crf, c(0, 0, 1, 1)) #' clamp.reset(crf, c(0,0,2,2)) #' #' #' @export clamp.reset <- function(crf, clamped) { if (is.na(class(crf)[1]) || class(crf)[1] != "CRF.clamped") stop("'crf' is not class CRF.clamped!") if (sum(xor(crf$clamped == 0, clamped == 0)) != 0) stop("'clamped' has different clamped structure!") if (any(clamped > crf$original$n.states | clamped < 0)) stop("'clamped' has invalid clamped value(s)!") crf$clamped <- clamped .Call(Clamp_Reset, crf) crf } #' Make sub CRF #' #' Generate sub CRF by selecting some nodes #' #' The function will generate a new CRF from a given CRF #' by selecting some nodes. 
The vector \code{subset} contains the #' node ids selected to generate the new CRF. Unlike #' \code{\link{clamp.crf}}, the potentials of remainning nodes and edges are #' untouched. #' #' @param crf The CRF generated by \code{\link{make.crf}} #' @param subset The vector of selected node ids #' @return The function will return a new CRF with additional components: #' \item{original}{The original CRF data.} #' \item{node.id}{The vector of the original node ids for nodes in the new CRF.} #' \item{node.map}{The vector of the new node ids for nodes in the original CRF.} #' \item{edge.id}{The vector of the original edge ids for edges in the new CRF.} #' \item{edge.map}{The vector of the new edge ids for edges in the original CRF.} #' @seealso \code{\link{make.crf}}, \code{\link{clamp.crf}} #' @examples #' #' library(CRF) #' data(Small) #' crf <- sub.crf(Small$crf, c(2, 3)) #' #' #' @export sub.crf <- function(crf, subset) { data <- new.env() data$original <- crf data$node.id <- intersect(1:crf$n.nodes, unique(subset)) data$n.nodes <- length(data$node.id) data$node.map <- rep(0, crf$n.nodes) data$node.map[data$node.id] <- 1:data$n.nodes data$edge.id <- which(data$node.map[crf$edges[,1]] != 0 & data$node.map[crf$edges[,2]] != 0) data$n.edges <- length(data$edge.id) data$edges <- matrix(data$node.map[crf$edges[data$edge.id,]], ncol=2) data$edge.map <- rep(0, crf$n.edges) data$edge.map[data$edge.id] <- 1:data$n.edges adj.info <- .Call(Make_AdjInfo, data) data$n.adj <- adj.info$n.adj data$adj.nodes <- adj.info$adj.nodes data$adj.edges <- adj.info$adj.edges data$n.states <- crf$n.states[data$node.id] data$max.state <- max(data$n.states) data$node.pot <- array(crf$node.pot[data$node.id, 1:data$max.state], dim=c(data$n.nodes, data$max.state)) data$edge.pot <- crf$edge.pot[data$edge.id] class(data) <- c("CRF.sub", "CRF") data }
/scratch/gouwar.j/cran-all/cranData/CRF/R/clamp.R
#' Decoding method for small graphs #' #' Computing the most likely configuration for CRF #' #' Exact decoding for small graphs with brute-force search #' #' @param crf The CRF #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.exact(Small$crf) #' #' @export decode.exact <- function(crf) .Call(Decode_Exact, crf) #' Decoding method for chain-structured graphs #' #' Computing the most likely configuration for CRF #' #' Exact decoding for chain-structured graphs with the Viterbi algorithm. #' #' @param crf The CRF #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.chain(Small$crf) #' #' @export decode.chain <- function(crf) .Call(Decode_Chain, crf) #' Decoding method for tree- and forest-structured graphs #' #' Computing the most likely configuration for CRF #' #' Exact decoding for tree- and forest-structured graphs with max-product belief propagation #' #' @param crf The CRF #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.tree(Small$crf) #' #' @export decode.tree <- function(crf) .Call(Decode_Tree, crf) #' Conditional decoding method #' #' Computing the most likely configuration for CRF #' #' Conditional decoding (takes another decoding method as input) #' #' @param crf The CRF #' @param clamped The vector of fixed values for clamped nodes, 0 for unfixed nodes #' @param decode.method The decoding method to solve clamped CRF #' @param ... The parameters for \code{decode.method} #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.conditional(Small$crf, c(0,1,0,0), decode.exact) #' #' @export decode.conditional <- function(crf, clamped, decode.method, ...) { newcrf <- clamp.crf(crf, clamped) decode <- clamped decode[newcrf$node.id] <- decode.method(newcrf, ...) decode } #' Decoding method for graphs with a small cutset #' #' Computing the most likely configuration for CRF #' #' Exact decoding for graphs with a small cutset using cutset conditioning #' #' @param crf The CRF #' @param cutset A vector of nodes in the cutset #' @param engine The underlying engine for cutset decoding, possible values are "default", "none", "exact", "chain", and "tree". #' @param start An initial configuration, a good start will significantly reduce the seraching time #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.cutset(Small$crf, c(2)) #' #' @export decode.cutset <- function(crf, cutset, engine = "default", start = apply(crf$node.pot, 1, which.max)) { engine.id <- c("default"=-1, "none"=0, "exact"=1, "chain"=2, "tree"=3); clamped <- rep(0, crf$n.nodes) clamped[cutset] <- 1 newcrf <- clamp.crf(crf, clamped) .Call(Decode_Cutset, newcrf, engine.id[engine], start) } #' Decoding method for low-treewidth graphs #' #' Computing the most likely configuration for CRF #' #' Exact decoding for low-treewidth graphs using junction trees #' #' @param crf The CRF #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. 
#' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.junction(Small$crf) #' #' @export decode.junction <- function(crf) .Call(Decode_Junction, crf); #' Decoding method using sampling #' #' Computing the most likely configuration for CRF #' #' Approximate decoding using sampling (takes a sampling method as input) #' #' @param crf The CRF #' @param sample.method The sampling method #' @param ... The parameters for \code{sample.method} #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.sample(Small$crf, sample.exact, 10000) #' #' @export decode.sample <- function(crf, sample.method, ...) .Call(Decode_Sample, crf, sample.method(crf, ...)) #' Decoding method using inference #' #' Computing the most likely configuration for CRF #' #' Approximate decoding using inference (takes an inference method as input) #' #' @param crf The CRF #' @param infer.method The inference method #' @param ... The parameters for \code{infer.method} #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.marginal(Small$crf, infer.exact) #' #' @export decode.marginal <- function(crf, infer.method, ...) apply(infer.method(crf, ...)$node.bel, 1, which.max) #' Decoding method using loopy belief propagation #' #' Computing the most likely configuration for CRF #' #' Approximate decoding using max-product loopy belief propagation #' #' @param crf The CRF #' @param max.iter The maximum allowed iterations of termination criteria #' @param cutoff The convergence cutoff of termination criteria #' @param verbose Non-negative integer to control the tracing informtion in algorithm #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.lbp(Small$crf) #' #' @export decode.lbp <- function(crf, max.iter = 10000, cutoff = 1e-4, verbose = 0) .Call(Decode_LBP, crf, max.iter, cutoff, verbose) #' Decoding method using residual belief propagation #' #' Computing the most likely configuration for CRF #' #' Approximate decoding using max-product residual belief propagation #' #' @param crf The CRF #' @param max.iter The maximum allowed iterations of termination criteria #' @param cutoff The convergence cutoff of termination criteria #' @param verbose Non-negative integer to control the tracing informtion in algorithm #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.rbp(Small$crf) #' #' @export decode.rbp <- function(crf, max.iter = 10000, cutoff = 1e-4, verbose = 0) .Call(Decode_RBP, crf, max.iter, cutoff, verbose) #' Decoding method using tree-reweighted belief propagation #' #' Computing the most likely configuration for CRF #' #' Approximate decoding using max-product tree-reweighted belief propagtion #' #' @param crf The CRF #' @param max.iter The maximum allowed iterations of termination criteria #' @param cutoff The convergence cutoff of termination criteria #' @param verbose Non-negative integer to control the tracing informtion in algorithm #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. 
#' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.trbp(Small$crf) #' #' @export decode.trbp <- function(crf, max.iter = 10000, cutoff = 1e-4, verbose = 0) .Call(Decode_TRBP, crf, max.iter, cutoff, verbose) #' Decoding method using greedy algorithm #' #' Computing the most likely configuration for CRF #' #' Approximate decoding with greedy algorithm #' #' @param crf The CRF #' @param restart Non-negative integer to control how many restart iterations are repeated #' @param start An initial configuration, a good start will significantly reduce the seraching time #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.greedy(Small$crf) #' #' @export decode.greedy <- function(crf, restart = 0, start = apply(crf$node.pot, 1, which.max)) .Call(Decode_Greedy, crf, restart, start) #' Decoding method using iterated conditional modes algorithm #' #' Computing the most likely configuration for CRF #' #' Approximate decoding with the iterated conditional modes algorithm #' #' @param crf The CRF #' @param restart Non-negative integer to control how many restart iterations are repeated #' @param start An initial configuration, a good start will significantly reduce the seraching time #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.icm(Small$crf) #' #' @export decode.icm <- function(crf, restart = 0, start = apply(crf$node.pot, 1, which.max)) .Call(Decode_ICM, crf, restart, start) #' Decoding method using block iterated conditional modes algorithm #' #' Computing the most likely configuration for CRF #' #' Approximate decoding with the block iterated conditional modes algorithm #' #' @param crf The CRF #' @param blocks A list of vectors, each vector containing the nodes in a block #' @param decode.method The decoding method to solve the clamped CRF #' @param restart Non-negative integer to control how many restart iterations are repeated #' @param start An initial configuration, a good start will significantly reduce the seraching time #' @param ... The parameters for \code{decode.method} #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' library(CRF) #' data(Small) #' d <- decode.block(Small$crf, list(c(1,3), c(2,4))) #' #' @export decode.block <- function(crf, blocks, decode.method = decode.tree, restart = 0, start = apply(crf$node.pot, 1, which.max), ...) { y <- integer(crf$n.nodes) y[] <- start newcrf <- list() for (i in 1:length(blocks)) { blocks[[i]] <- sort(blocks[[i]]) clamped <- y clamped[blocks[[i]]] <- 0 newcrf[[i]] <- clamp.crf(crf, clamped) } maxPot <- -1 decode <- y restart <- max(0, restart) for (iter in 0:restart) { done = F while(!done) { done = T for (i in 1:length(blocks)) { clamped <- y clamped[blocks[[i]]] <- 0 newcrf[[i]] <- clamp.reset(newcrf[[i]], clamped) temp <- decode.method(newcrf[[i]], ...) 
if (any(temp != y[blocks[[i]]])) { y[blocks[[i]]] <- temp done = F } } } pot <- get.potential(crf, y) if (pot > maxPot) { maxPot <- pot decode <- y } if (iter < restart) { y <- ceiling(stats::runif(crf$n.nodes) * crf$n.states) } } decode } #' Decoding method using integer linear programming #' #' Computing the most likely configuration for CRF #' #' Exact decoding with an integer linear programming formulation and approximate using LP relaxation #' #' @param crf The CRF #' @param lp.rounding Boolean variable to indicate whether LP rounding is need. #' @return This function will return the most likely configuration, which is a vector of length \code{crf$n.nodes}. #' #' @examples #' #' \dontrun{ #' library(CRF) #' data(Small) #' d <- decode.ilp(Small$crf) #' } #' #' @export decode.ilp <- function(crf, lp.rounding = FALSE) { if (!requireNamespace("Rglpk", quietly = TRUE)) { stop("Package \"Rglpk\" needed for the function decode.ilp to work. Please install it.", call. = FALSE) } vmap.nodes <- matrix(nrow=crf$n.nodes, ncol=2) vmap.edges <- matrix(nrow=crf$n.edges, ncol=2) n <- 0 for (i in 1:crf$n.nodes) { vmap.nodes[i, 1] <- n + 1 n <- n + crf$n.states[i] vmap.nodes[i, 2] <- n } vnum.nodes <- n for (i in 1:crf$n.edges) { vmap.edges[i, 1] <- n + 1 n <- n + crf$n.states[crf$edges[i,1]] * crf$n.states[crf$edges[i,2]] vmap.edges[i, 2] <- n } vnum.edges <- n - vnum.nodes vnum.total <- n obj <- rep(0, vnum.total) for (i in 1:crf$n.nodes) { obj[vmap.nodes[i,1]:vmap.nodes[i,2]] <- -log(crf$node.pot[i,1:crf$n.states[i]]) } for (i in 1:crf$n.edges) { obj[vmap.edges[i,1]:vmap.edges[i,2]] <- -log(crf$edge.pot[[i]]) } obj[is.infinite(obj)] <- 1000 cnum.nodes <- crf$n.nodes cnum.edges <- sum(crf$n.states[crf$edges]) cnum.total <- cnum.nodes + cnum.edges mat <- matrix(0, nrow=cnum.total, ncol=vnum.total) for (i in 1:crf$n.nodes) { mat[i, vmap.nodes[i,1]:vmap.nodes[i,2]] <- 1 } n <- cnum.nodes for (i in 1:crf$n.edges) { n1 <- crf$edges[i,1] n2 <- crf$edges[i,2] for (j in 1:crf$n.states[n1]) { n <- n + 1 mat[n, vmap.nodes[n1,1]-1+j] <- -1 mat[n, seq.int(vmap.edges[i,1]-1+j, vmap.edges[i,2], by=crf$n.states[n1])] <- 1 } for (j in 1:crf$n.states[n2]) { n <- n + 1 mat[n, vmap.nodes[n2,1]-1+j] <- -1 mat[n, (vmap.edges[i,1]+(j-1)*crf$n.states[n1]):(vmap.edges[i,1]-1+j*crf$n.states[n1])] <- 1 } } dir <- rep("==", cnum.total) rhs <- rep(0, cnum.total) rhs[1:cnum.nodes] <- 1 if (lp.rounding) { types <- rep("C", vnum.total) bounds <- list(upper = list(ind = 1:vnum.total, val = rep(1, vnum.total))) } else { types <- rep("B", vnum.total) bounds <- NULL } result <- Rglpk::Rglpk_solve_LP(obj, mat, dir, rhs, types = types, bounds = bounds) if (result$status != 0) { warning("LP solution is not optimal.") } node.bel <- matrix(0, nrow=crf$n.nodes, ncol=crf$max.state) for (i in 1:crf$n.nodes) { node.bel[i, 1:crf$n.states[i]] <- result$solution[vmap.nodes[i,1]:vmap.nodes[i,2]] } apply(node.bel, 1, which.max) }
/scratch/gouwar.j/cran-all/cranData/CRF/R/decode.R
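## Illustrative usage sketch (not part of the package source): comparing an
## exact decoder with an approximate one on the bundled Small example, and
## scoring both configurations with get.potential(); assumes the CRF package
## and its Small dataset are available as in the examples above.
library(CRF)
data(Small)
d.exact <- decode.junction(Small$crf)   # exact for this small graph
d.lbp <- decode.lbp(Small$crf)          # approximate max-product decoding
get.potential(Small$crf, d.exact)       # potential of the exact configuration
get.potential(Small$crf, d.lbp)         # should be <= the exact potential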
#' Inference method for small graphs #' #' Computing the partition function and marginal probabilities #' #' Exact inference for small graphs with brute-force counting #' #' @param crf The CRF #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.exact(Small$crf) #' #' @export infer.exact <- function(crf) .Call(Infer_Exact, crf) #' Inference method for chain-structured graphs #' #' Computing the partition function and marginal probabilities #' #' Exact inference for chain-structured graphs with the forward-backward algorithm #' #' @param crf The CRF #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.chain(Small$crf) #' #' @export infer.chain <- function(crf) .Call(Infer_Chain, crf) #' Inference method for tree- and forest-structured graphs #' #' Computing the partition function and marginal probabilities #' #' Exact inference for tree- and forest-structured graphs with sum-product belief propagation #' #' @param crf The CRF #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.tree(Small$crf) #' #' @export infer.tree <- function(crf) .Call(Infer_Tree, crf) #' Conditional inference method #' #' Computing the partition function and marginal probabilities #' #' Conditional inference (takes another inference method as input) #' #' @param crf The CRF #' @param clamped The vector of fixed values for clamped nodes, 0 for unfixed nodes #' @param infer.method The inference method to solve the clamped CRF #' @param ... The parameters for \code{infer.method} #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.conditional(Small$crf, c(0,1,0,0), infer.exact) #' #' @export infer.conditional <- function(crf, clamped, infer.method, ...) 
{ belief <- list() belief$node.bel <- array(0, dim=c(crf$n.nodes, crf$max.state)) belief$edge.bel <- lapply(1:crf$n.edges, function(i) array(0, dim=c(crf$n.states[crf$edges[i,1]], crf$n.states[crf$edges[i,2]]))) newcrf <- clamp.crf(crf, clamped) b <- infer.method(newcrf, ...) belief$node.bel[newcrf$node.id, 1:newcrf$max.state] <- b$node.bel belief$edge.bel[newcrf$edge.id] <- b$edge.bel belief$logZ <- b$logZ belief$node.bel[cbind(which(clamped != 0), clamped[clamped != 0])] <- 1 e <- newcrf$original$edges e0 <- which(clamped[e[,1]] != 0 & clamped[e[,2]] != 0) e1 <- which(clamped[e[,1]] != 0 & clamped[e[,2]] == 0) e2 <- which(clamped[e[,1]] == 0 & clamped[e[,2]] != 0) for (i in e0) belief$edge.bel[[i]][clamped[e[i,1]], clamped[e[i,2]]] <- 1 for (i in e1) belief$edge.bel[[i]][clamped[e[i,1]],] <- b$node.bel[newcrf$node.map[e[i,2]], 1:crf$n.states[e[i,2]]] for (i in e2) belief$edge.bel[[i]][,clamped[e[i,2]]] <- b$node.bel[newcrf$node.map[e[i,1]], 1:crf$n.states[e[i,1]]] belief } #' Inference method for graphs with a small cutset #' #' Computing the partition function and marginal probabilities #' #' Exact inference for graphs with a small cutset using cutset conditioning #' #' @param crf The CRF #' @param cutset A vector of nodes in the cutset #' @param engine The underlying engine for cutset decoding, possible values are "default", "none", "exact", "chain", and "tree". #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.cutset(Small$crf, c(2)) #' #' @export infer.cutset <- function(crf, cutset, engine = "default") { engine.id <- c("default"=-1, "none"=0, "exact"=1, "chain"=2, "tree"=3); clamped <- rep(0, crf$n.nodes) clamped[cutset] <- 1 newcrf <- clamp.crf(crf, clamped) .Call(Infer_Cutset, newcrf, engine.id[engine]) } #' Inference method for low-treewidth graphs #' #' Computing the partition function and marginal probabilities #' #' Exact decoding for low-treewidth graphs using junction trees #' #' @param crf The CRF #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.junction(Small$crf) #' #' @export infer.junction <- function(crf) .Call(Infer_Junction, crf) #' Inference method using sampling #' #' Computing the partition function and marginal probabilities #' #' Approximate inference using sampling (takes a sampling method as input) #' #' @param crf The CRF #' @param sample.method The sampling method #' @param ... The parameters for \code{sample.method} #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. 
It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.sample(Small$crf, sample.exact, 10000) #' #' @export infer.sample <- function(crf, sample.method, ...) .Call(Infer_Sample, crf, sample.method(crf, ...)) #' Inference method using loopy belief propagation #' #' Computing the partition function and marginal probabilities #' #' Approximate inference using sum-product loopy belief propagation #' #' @param crf The CRF #' @param max.iter The maximum allowed iterations of termination criteria #' @param cutoff The convergence cutoff of termination criteria #' @param verbose Non-negative integer to control the tracing informtion in algorithm #' @param maximize Logical variable to indicate using max-product instead of sum-product #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.lbp(Small$crf) #' #' @export infer.lbp <- function(crf, max.iter = 10000, cutoff = 1e-4, verbose = 0, maximize = FALSE) .Call(Infer_LBP, crf, max.iter, cutoff, verbose, maximize) #' Inference method using residual belief propagation #' #' Computing the partition function and marginal probabilities #' #' Approximate inference using sum-product residual belief propagation #' #' @param crf The CRF #' @param max.iter The maximum allowed iterations of termination criteria #' @param cutoff The convergence cutoff of termination criteria #' @param verbose Non-negative integer to control the tracing informtion in algorithm #' @param maximize Logical variable to indicate using max-product instead of sum-product #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. 
The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.rbp(Small$crf) #' #' @export infer.rbp <- function(crf, max.iter = 10000, cutoff = 1e-4, verbose = 0, maximize = FALSE) .Call(Infer_RBP, crf, max.iter, cutoff, verbose, maximize) #' Inference method using tree-reweighted belief propagation #' #' Computing the partition function and marginal probabilities #' #' Approximate inference using sum-product tree-reweighted belief propagation #' #' @param crf The CRF #' @param max.iter The maximum allowed iterations of termination criteria #' @param cutoff The convergence cutoff of termination criteria #' @param verbose Non-negative integer to control the tracing informtion in algorithm #' @param maximize Logical variable to indicate using max-product instead of sum-product #' @return This function will return a list with components: #' \item{node.bel}{Node belief. It is a matrix with \code{crf$n.nodes} rows and \code{crf$max.state} columns.} #' \item{edge.bel}{Edge belief. It is a list of matrices. The size of list is \code{crf$n.edges} and #' the matrix \code{i} has \code{crf$n.states[crf$edges[i,1]]} rows and \code{crf$n.states[crf$edges[i,2]]} columns.} #' \item{logZ}{The logarithmic value of CRF normalization factor Z.} #' #' @examples #' #' library(CRF) #' data(Small) #' i <- infer.trbp(Small$crf) #' #' @export infer.trbp <- function(crf, max.iter = 10000, cutoff = 1e-4, verbose = 0, maximize = FALSE) .Call(Infer_TRBP, crf, max.iter, cutoff, verbose, maximize)
/scratch/gouwar.j/cran-all/cranData/CRF/R/infer.R
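## Illustrative usage sketch (not part of the package source): comparing exact
## and approximate inference on the bundled Small example; both return node
## beliefs and the log partition function logZ.
library(CRF)
data(Small)
i.exact <- infer.exact(Small$crf)
i.lbp <- infer.lbp(Small$crf)
i.exact$logZ - i.lbp$logZ                     # logZ approximation error
max(abs(i.exact$node.bel - i.lbp$node.bel))   # largest node-belief error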
#' Make CRF #' #' Generate CRF from the adjacency matrix #' #' The function will generate an empty CRF from a given adjacency #' matrix. If the length of \code{n.states} is less than \code{n.nodes}, it will #' be used repeatedly. All node and edge potentials are initialized as 1. #' #' Since the CRF data are often very large, CRF is implemented as an environment. #' The assignment of environments will only copy the addresses instead of the real data, #' therefore variables created by normal assignment will refer to exactly the same CRF. #' For a complete duplication of the data, please use \code{\link{duplicate.crf}}. #' #' @param adj.matrix The adjacency matrix of the CRF network. #' @param n.states The state numbers of nodes. #' @param n.nodes The number of nodes, which is only used to generate a linear chain CRF when \code{adj.matrix} is NULL. #' @return The function will return a new CRF, which is an environment with #' components: #' \item{n.nodes}{The number of nodes.} #' \item{n.edges}{The number of edges.} #' \item{n.states}{The number of states for each node. It is a vector of length \code{n.nodes}.} #' \item{max.state}{The maximum number of states. It is equal to \code{max(n.states)}.} #' \item{edges}{The node pair of each edge. It is a matrix with 2 columns and \code{n.edges} rows. Each row #' denotes one edge. The node with the smaller id is put in the first column.} #' \item{n.adj}{The number of adjacent nodes for each node. It is a vector of length \code{n.nodes}.} #' \item{adj.nodes}{The list of adjacent nodes for each #' node. It is a list of length \code{n.nodes} and the i-th element is a vector #' of length \code{n.adj[i]}.} #' \item{adj.edges}{The list of adjacent edges for each node. It is similar to \code{adj.nodes} #' but contains the edge ids instead of node ids.} #' \item{node.pot}{The node potentials. It is a matrix with dimension \code{(n.nodes, max.state)}. #' Each row \code{node.pot[i,]} denotes the node potentials of the i-th node.} #' \item{edge.pot}{The edge potentials. It is a list of \code{n.edges} matrices. 
Each matrix #' \code{edge.pot[[i]]}, with dimension \code{(n.states[edges[i,1]], #' n.states[edges[i,2]])}, denotes the edge potentials of the i-th edge.} #' #' @seealso \code{\link{duplicate.crf}}, \code{\link{clamp.crf}}, \code{\link{sub.crf}} #' #' @examples #' #' library(CRF) #' #' nNodes <- 4 #' nStates <- 2 #' #' adj <- matrix(0, nrow=nNodes, ncol=nNodes) #' for (i in 1:(nNodes-1)) #' { #' adj[i,i+1] <- 1 #' adj[i+1,i] <- 1 #' } #' #' crf <- make.crf(adj, nStates) #' #' crf$node.pot[1,] <- c(1, 3) #' crf$node.pot[2,] <- c(9, 1) #' crf$node.pot[3,] <- c(1, 3) #' crf$node.pot[4,] <- c(9, 1) #' #' for (i in 1:crf$n.edges) #' { #' crf$edge.pot[[i]][1,] <- c(2, 1) #' crf$edge.pot[[i]][2,] <- c(1, 2) #' } #' #' @import Matrix #' #' @export make.crf <- function(adj.matrix = NULL, n.states = 2, n.nodes = 2) { data <- new.env() if (is.null(adj.matrix)) adj.matrix <- sparseMatrix(1:(n.nodes-1), 2:n.nodes, x = TRUE, dims = c(n.nodes, n.nodes)) if (length(dim(adj.matrix)) != 2 || dim(adj.matrix)[1] != dim(adj.matrix)[2]) stop("Parameter 'adj.matrix' should be a square matrix") data$n.nodes <- dim(adj.matrix)[1] e <- which(adj.matrix != 0, arr.ind = TRUE) e <- matrix(c(e, e[,2], e[,1]), ncol=2) e <- unique(matrix(e[e[,1] < e[,2],], ncol=2)) data$edges <- matrix(e[order(e[,1], e[,2]),], ncol=2) data$n.edges <- nrow(data$edges) .Call(Make_AdjInfo, data) data$n.states <- rep(n.states, length.out=data$n.nodes) data$max.state <- max(n.states) data$node.pot <- array(1, dim=c(data$n.nodes, data$max.state)) data$edge.pot <- lapply(1:data$n.edges, function(i) array(1, dim=c(data$n.states[data$edges[i,1]], data$n.states[data$edges[i,2]]))) class(data) <- "CRF" data } #' Duplicate CRF #' #' Duplicate an existing CRF #' #' This function will duplicate an existing CRF. Since CRF is implemented as an #' environment, normal assignment will only copy the pointer instead of the #' real data. This function will generate a new CRF and really copy all data. #' #' @param crf The existing CRF #' @return The function will return a new CRF with copied data #' #' @seealso \code{\link{make.crf}} #' #' @export duplicate.crf <- function(crf) { data <- new.env() for (i in ls(envir=crf)) assign(i, get(i, envir=crf), envir=data) data } #' Calculate the potential of CRF #' #' Calculate the potential of a CRF with given configuration #' #' The function will calculate the potential of a CRF with given configuration, #' i.e., the assigned states of nodes in the CRF. #' #' @param crf The CRF #' @param configuration The vector of states of nodes #' @return The function will return the potential of CRF with given configuration #' #' @seealso \code{\link{get.logPotential}} #' #' @export get.potential <- function(crf, configuration) .Call(Get_Potential, crf, configuration) #' Calculate the log-potential of CRF #' #' Calculate the logarithmic potential of a CRF with given configuration #' #' The function will calculate the logarithmic potential of a CRF with given configuration, #' i.e., the assigned states of nodes in the CRF. #' #' @param crf The CRF #' @param configuration The vector of states of nodes #' @return The function will return the log-potential of CRF with given configuration #' #' @seealso \code{\link{get.potential}} #' #' @export get.logPotential <- function(crf, configuration) .Call(Get_LogPotential, crf, configuration)
/scratch/gouwar.j/cran-all/cranData/CRF/R/misc.R
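## Illustrative sketch (not part of the package source) of the environment
## semantics documented in make.crf() and duplicate.crf(): plain assignment
## copies only the reference, duplicate.crf() copies the data.
library(CRF)
crf.a <- make.crf(matrix(c(0, 1, 1, 0), 2, 2), 2)
crf.b <- crf.a                  # same environment as crf.a
crf.c <- duplicate.crf(crf.a)   # independent copy of the data
crf.b$node.pot[1, 1] <- 5       # modifies crf.a as well
crf.a$node.pot[1, 1]            # 5
crf.c$node.pot[1, 1]            # still 1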
#' Sampling method for small graphs #' #' Generating samples from the distribution #' #' Exact sampling for small graphs with brute-force inverse cumulative distribution #' #' @param crf The CRF #' @param size The sample size #' @return This function will return a matrix with \code{size} rows and \code{crf$n.nodes} columns, #' in which each row is a sampled configuration. #' #' @examples #' #' library(CRF) #' data(Small) #' s <- sample.exact(Small$crf, 100) #' #' @export sample.exact <- function(crf, size) .Call(Sample_Exact, crf, size) #' Sampling method for chain-structured graphs #' #' Generating samples from the distribution #' #' Exact sampling for chain-structured graphs with the forward-filter backward-sample algorithm #' #' @param crf The CRF #' @param size The sample size #' @return This function will return a matrix with \code{size} rows and \code{crf$n.nodes} columns, #' in which each row is a sampled configuration. #' #' @examples #' #' library(CRF) #' data(Small) #' s <- sample.chain(Small$crf, 100) #' #' @export sample.chain <- function(crf, size) .Call(Sample_Chain, crf, size) #' Sampling method for tree- and forest-structured graphs #' #' Generating samples from the distribution #' #' Exact sampling for tree- and forest-structured graphs with sum-product belief propagation and backward-sampling #' #' @param crf The CRF #' @param size The sample size #' @return This function will return a matrix with \code{size} rows and \code{crf$n.nodes} columns, #' in which each row is a sampled configuration. #' #' @examples #' #' library(CRF) #' data(Small) #' s <- sample.tree(Small$crf, 100) #' #' @export sample.tree <- function(crf, size) .Call(Sample_Tree, crf, size) #' Conditional sampling method #' #' Generating samples from the distribution #' #' Conditional sampling (takes another sampling method as input) #' #' @param crf The CRF #' @param size The sample size #' @param clamped The vector of fixed values for clamped nodes, 0 for unfixed nodes #' @param sample.method The sampling method to solve the clamped CRF #' @param ... The parameters for \code{sample.method} #' @return This function will return a matrix with \code{size} rows and \code{crf$n.nodes} columns, #' in which each row is a sampled configuration. #' #' @examples #' #' library(CRF) #' data(Small) #' s <- sample.conditional(Small$crf, 100, c(0,1,0,0), sample.exact) #' #' @export sample.conditional <- function(crf, size, clamped, sample.method, ...) { newcrf <- clamp.crf(crf, clamped) s <- sample.method(newcrf, size, ...) samples <- matrix(rep(clamped, nrow(s)), nrow=nrow(s), ncol=length(clamped), byrow=TRUE) samples[,newcrf$node.id] <- s samples } #' Sampling method for graphs with a small cutset #' #' Generating samples from the distribution #' #' Exact sampling for graphs with a small cutset using cutset conditioning #' #' @param crf The CRF #' @param size The sample size #' @param cutset A vector of nodes in the cutset #' @param engine The underlying engine for cutset sampling, possible values are "default", "none", "exact", "chain", and "tree". #' @return This function will return a matrix with \code{size} rows and \code{crf$n.nodes} columns, #' in which each row is a sampled configuration. 
#' #' @examples #' #' library(CRF) #' data(Small) #' s <- sample.cutset(Small$crf, 100, c(2)) #' #' @export sample.cutset <- function(crf, size, cutset, engine = "default") { engine.id <- c("default"=-1, "none"=0, "exact"=1, "chain"=2, "tree"=3); clamped <- rep(0, crf$n.nodes) clamped[cutset] <- 1 newcrf <- clamp.crf(crf, clamped) .Call(Sample_Cutset, newcrf, size, engine.id[engine]) } #' Sampling method for low-treewidth graphs #' #' Generating samples from the distribution #' #' Exact sampling for low-treewidth graphs using junction trees #' #' @param crf The CRF #' @param size The sample size #' @return This function will return a matrix with \code{size} rows and \code{crf$n.nodes} columns, #' in which each row is a sampled configuration. #' #' @examples #' #' library(CRF) #' data(Small) #' s <- sample.junction(Small$crf, 100) #' #' @export sample.junction <- function(crf, size) .Call(Sample_Junction, crf, size) #' Sampling method using single-site Gibbs sampler #' #' Generating samples from the distribution #' #' Approximate sampling using a single-site Gibbs sampler #' #' @param crf The CRF #' @param size The sample size #' @param burn.in The number of samples at the beginning that will be discarded #' @param start An initial configuration #' @return This function will return a matrix with \code{size} rows and \code{crf$n.nodes} columns, #' in which each row is a sampled configuration. #' #' @examples #' #' library(CRF) #' data(Small) #' s <- sample.gibbs(Small$crf, 100) #' #' @export sample.gibbs <- function(crf, size, burn.in = 1000, start = apply(crf$node.pot, 1, which.max)) .Call(Sample_Gibbs, crf, size, burn.in, start)
/scratch/gouwar.j/cran-all/cranData/CRF/R/sample.R
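## Illustrative sketch (not part of the package source): estimating node
## marginals from Gibbs samples and checking them against the exact marginals
## from infer.exact() on the bundled Small example.
library(CRF)
data(Small)
s <- sample.gibbs(Small$crf, 5000, burn.in = 1000)
colMeans(s == 1)                       # empirical P(node state = 1)
infer.exact(Small$crf)$node.bel[, 1]   # exact P(node state = 1)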
#' Make CRF features #' #' Make the data structure of CRF features #' #' This function makes the data structure of features need for modeling and training CRF. #' #' The parameters \code{n.nf} and \code{n.ef} specify the number of node and edge features, #' respectively. #' #' The objects \code{node.par} and \code{edge.par} define the corresponding #' parameters used with each feature. \code{node.par} is a 3-dimensional arrays, #' and element \code{node.par[n,i,f]} is the index of parameter associated with the #' corresponding node potential \code{node.pot[n,i]} and node feature \code{f}. #' \code{edge.par} is a list of 3-dimensional arrays, and element #' \code{edge.par[[e]][i,j,f]} is the index of parameter associated with the #' corresponding edge potential \code{edge.pot[[e]][i,j]} and edge feature \code{f}. #' The value 0 is used to indicate the corresponding node or edge potential #' does not depend on that feature. #' #' For detail of calculation of node and edge potentials from features and parameters, #' please see \code{\link{crf.update}}. #' #' @param crf The CRF #' @param n.nf The number of node features #' @param n.ef The number of edge features #' @return This function will directly modify the CRF and return the same CRF. #' #' @seealso \code{\link{crf.update}}, \code{\link{make.par}}, \code{\link{make.crf}} #' #' @export make.features <- function(crf, n.nf = 1, n.ef = 1) { crf$n.nf <- n.nf crf$n.ef <- n.ef crf$node.par <- array(0, dim=c(crf$n.nodes, crf$max.state, n.nf)) crf$edge.par <- lapply(1:crf$n.edges, function(i) array(0, dim=c(crf$n.states[crf$edges[i,1]], crf$n.states[crf$edges[i,2]], n.ef))) crf } #' Make CRF parameters #' #' Make the data structure of CRF parameters #' #' This function makes the data structure of parameters need for modeling and training CRF. #' The parameters are stored in \code{par}, which is a numeric vector of length \code{n.par}. #' #' @param crf The CRF #' @param n.par The number of parameters #' @return This function will directly modify the CRF and return the same CRF. #' #' @seealso \code{\link{crf.update}}, \code{\link{make.features}}, \code{\link{make.crf}} #' #' @export make.par <- function(crf, n.par = 1) { crf$n.par <- n.par crf$par <- numeric(crf$n.par) crf$nll <- numeric(1) crf$gradient <- numeric(crf$n.par) crf } #' Update MRF potentials #' #' Update node and edge potentials of MRF model #' #' The function updates \code{node.pot} and \code{edge.pot} of MRF model. #' #' @param crf The CRF #' @return This function will directly modify the CRF and return the same CRF. #' #' @seealso \code{\link{mrf.nll}}, \code{\link{train.mrf}} #' #' @export mrf.update <- function(crf) .Call(MRF_Update, crf) #' Update CRF potentials #' #' Update node and edge potentials of CRF model #' #' This function updates \code{node.pot} and \code{edge.pot} of CRF model by using #' the current values of parameters and features. #' #' There are two ways to model the relationship between parameters and features. #' The first one exploits the special structure of features to reduce the memory #' usage. However it may not suitable for all circumstances. The other one is more #' straighforward by explicitly specifying the coefficients of each parameter to #' calculate the potentials, and may use much more memory. Two approaches can be #' used together. #' #' The first way uses the objects \code{node.par} and \code{edge.par} to define #' the structure of features and provides the feature information in variables #' \code{node.fea} and \code{edge.fea}. 
The second way directly provides the #' feature information in variables \code{node.ext} and \code{edge.ext} without #' any prior assumption on feature structure. \code{node.ext} is a list and #' each element has the same structure as \code{node.pot}. \code{edge.ext} is #' a list and each element has the same structure as \code{edge.pot}. #' #' In detail, the node potential is updated as follows: #' #' \deqn{ #' node.pot[n,i] = exp( \sum_{f} par[node.par[n,i,f]] * node.fea[f,n] + \sum_{k} par[k] * node.ext[[k]][n,i] ) #' } #' #' and the edge potential is updated as follows: #' #' \deqn{ #' edge.pot[[e]][i,j] = exp( \sum_{f} par[edge.par[[e]][i,j,f]] * edge.fea[f,e] + \sum_{k} par[k] * edge.ext[[k]][[e]][i,j] ) #' } #' #' @param crf The CRF #' @param node.fea The node features matrix with dimension \code{(n.nf, n.nodes)} #' @param edge.fea The edge features matrix with dimension \code{(n.ef, n.edges)} #' @param node.ext The extended information of node features #' @param edge.ext The extended information of edge features #' @return This function will directly modify the CRF and return the same CRF. #' #' @seealso \code{\link{crf.nll}}, \code{\link{train.crf}} #' #' @export crf.update <- function(crf, node.fea = NULL, edge.fea = NULL, node.ext = NULL, edge.ext = NULL) .Call(CRF_Update, crf, node.fea, edge.fea, node.ext, edge.ext) #' Calculate MRF sufficient statistics #' #' Calculate the sufficient statistics of MRF model #' #' This function calculates the sufficient statistics of MRF model. This function #' much be called before the first calling to \code{\link{mrf.nll}}. #' In the training data matrix \code{instances}, each row is an instance and #' each column corresponds a node in CRF. #' #' @param crf The CRF #' @param instances The training data matrix of MRF model #' @return This function will return the value of MRF sufficient statistics. #' #' @seealso \code{\link{mrf.nll}}, \code{\link{train.mrf}} #' #' @export mrf.stat <- function(crf, instances) .Call(MRF_Stat, crf, instances) #' Calculate MRF negative log-likelihood #' #' Calculate the negative log-likelihood of MRF model #' #' This function calculates the negative log-likelihood of MRF model as well as #' the gradient. This function is intended to be called by optimization algorithm #' in training process. Before calling this function, the MRF sufficient #' statistics must be calculated and stored in object \code{par.stat} of CRF. #' #' In the training data matrix \code{instances}, each row is an instance and #' each column corresponds a node in CRF. #' #' @param crf The CRF #' @param par The parameter vector of CRF #' @param instances The training data matrix of MRF model #' @param infer.method The inference method used to compute the likelihood #' @param ... Extra parameters need by the inference method #' @return This function will return the value of MRF negative log-likilihood. #' #' @seealso \code{\link{mrf.stat}}, \code{\link{mrf.update}}, \code{\link{train.mrf}} #' #' @export mrf.nll <- function(par, crf, instances, infer.method = infer.chain, ...) .Call(MRF_NLL, crf, par, instances, quote(infer.method(crf, ...)), environment()) #' Calculate CRF negative log likelihood #' #' Calculate the negative log likelihood of CRF model #' #' This function calculates the negative log likelihood of CRF model as well as #' the gradient. This function is intended to be called by optimization algorithm #' in training process. 
#' #' In the training data matrix \code{instances}, each row is an instance and #' each column corresponds a node in CRF. #' The variables \code{node.fea}, \code{edge.fea}, \code{node.ext}, \code{edge.ext} #' are lists of length equal to the number of instances, and their elements are #' defined as in \code{\link{crf.update}} respectively. #' #' @param crf The CRF #' @param par The parameter vector of CRF #' @param instances The training data matrix of CRF model #' @param node.fea The list of node features #' @param edge.fea The list of edge features #' @param node.ext The list of extended information of node features #' @param edge.ext The list of extended information of edge features #' @param infer.method The inference method used to compute the likelihood #' @param ... Extra parameters need by the inference method #' @return This function will return the value of CRF negative log-likelihood. #' #' @seealso \code{\link{crf.update}}, \code{\link{train.crf}} #' #' @export crf.nll <- function(par, crf, instances, node.fea = NULL, edge.fea = NULL, node.ext = NULL, edge.ext = NULL, infer.method = infer.chain, ...) .Call(CRF_NLL, crf, par, instances, node.fea, edge.fea, node.ext, edge.ext, quote(infer.method(crf, ...)), environment()) #' Train MRF model #' #' Train the MRF model to estimate the parameters #' #' This function trains the Markov Random Fields (MRF) model, which is a simple variant of CRF model. #' #' In the training data matrix \code{instances}, each row is an instance and #' each column corresponds a node in CRF. #' #' @param crf The CRF #' @param instances The training data matrix of CRF model #' @param nll The function to calculate negative log likelihood #' @param infer.method The inference method used to compute the likelihood #' @param ... Extra parameters need by the inference method #' @param trace Non-negative integer to control the tracing informtion of the optimization process #' @return This function will directly modify the CRF and return the same CRF. #' #' @seealso \code{\link{mrf.update}}, \code{\link{mrf.stat}}, \code{\link{mrf.nll}}, \code{\link{make.crf}} #' #' @export train.mrf <- function(crf, instances, nll = mrf.nll, infer.method = infer.chain, ..., trace = 0) { gradient <- function(par, crf, ...) { crf$gradient } crf$par.stat <- mrf.stat(crf, instances) solution <- stats::optim(crf$par, nll, gradient, crf, instances, infer.method, ..., method = "L-BFGS-B", control = list(trace = trace)) crf$par <- solution$par mrf.update(crf) crf } #' Train CRF model #' #' Train the CRF model to estimate the parameters #' #' This function train the CRF model. #' #' In the training data matrix \code{instances}, each row is an instance and #' each column corresponds a node in CRF. #' The variables \code{node.fea}, \code{edge.fea}, \code{node.ext}, \code{edge.ext} #' are lists of length equal to the number of instances, and their elements are #' defined as in \code{\link{crf.update}} respectively. #' #' @param crf The CRF #' @param instances The training data matrix of CRF model #' @param node.fea The list of node features #' @param edge.fea The list of edge features #' @param node.ext The list of extended information of node features #' @param edge.ext The list of extended information of edge features #' @param nll The function to calculate negative log likelihood #' @param infer.method The inference method used to compute the likelihood #' @param ... 
Extra parameters need by the inference method #' @param trace Non-negative integer to control the tracing informtion of the optimization process #' @return This function will directly modify the CRF and return the same CRF. #' #' @seealso \code{\link{crf.update}}, \code{\link{crf.nll}}, \code{\link{make.crf}} #' #' @export train.crf <- function(crf, instances, node.fea = NULL, edge.fea = NULL, node.ext = NULL, edge.ext = NULL, nll = crf.nll, infer.method = infer.chain, ..., trace = 0) { gradient <- function(par, crf, ...) { crf$gradient } solution <- stats::optim(crf$par, nll, gradient, crf, instances, node.fea, edge.fea, node.ext, edge.ext, infer.method, ..., method = "L-BFGS-B", control = list(trace = trace)) crf$par <- solution$par crf.update(crf, node.fea[[1]], edge.fea[[1]], node.ext[[1]], edge.ext[[1]]) crf }
/scratch/gouwar.j/cran-all/cranData/CRF/R/train.R
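## Illustrative sketch (not part of the package source; parameter values are
## arbitrary): building a tiny chain CRF and recomputing one node potential by
## hand from the formula documented in crf.update(), i.e.
## node.pot[n,i] = exp(sum_f par[node.par[n,i,f]] * node.fea[f,n]).
library(CRF)
adj <- matrix(0, 4, 4)
adj[cbind(1:3, 2:4)] <- 1                 # 4-node chain
crf <- make.crf(adj, 2)
crf <- make.features(crf)                 # one node feature, one edge feature
crf <- make.par(crf, 2)
crf$node.par[1, 1, 1] <- 1                # parameter 1 drives node 1, state 1
for (e in 1:crf$n.edges) crf$edge.par[[e]][1, 1, 1] <- 2
crf$par <- c(0.5, 1.2)                    # arbitrary illustrative values
node.fea <- matrix(1, crf$n.nf, crf$n.nodes)
edge.fea <- matrix(1, crf$n.ef, crf$n.edges)
crf <- crf.update(crf, node.fea, edge.fea)
crf$node.pot[1, 1]                        # expected to equal the line below
exp(crf$par[1] * node.fea[1, 1])          # manual evaluation of the formula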
## ----------------------------------------------------------------------------- library(CRF) ## ----------------------------------------------------------------------------- n.nodes <- 10 n.states <- 2 prior.prob <- c(0.8, 0.2) trans.prob <- matrix(0, nrow=2, ncol=2) trans.prob[1,] <- c(0.95, 0.05) trans.prob[2,] <- c(0.05, 0.95) ## ----------------------------------------------------------------------------- prior.prob ## ----------------------------------------------------------------------------- trans.prob ## ----------------------------------------------------------------------------- adj <- matrix(0, n.nodes, n.nodes) for (i in 1:(n.nodes-1)) { adj[i, i+1] <- 1 } ## ----------------------------------------------------------------------------- adj ## ----------------------------------------------------------------------------- mc <- make.crf(adj, n.states) ## ----------------------------------------------------------------------------- mc$node.pot[1,] <- prior.prob for (i in 1:mc$n.edges) { mc$edge.pot[[i]] <- trans.prob } ## ----------------------------------------------------------------------------- mc.samples <- sample.chain(mc, 10000) mc.samples[1:10, ] ## ----------------------------------------------------------------------------- mrf.new <- make.crf(adj, n.states) ## ----------------------------------------------------------------------------- mrf.new <- make.features(mrf.new) mrf.new <- make.par(mrf.new, 4) ## ----------------------------------------------------------------------------- mrf.new$node.par[1,1,1] <- 1 for (i in 1:mrf.new$n.edges) { mrf.new$edge.par[[i]][1,1,1] <- 2 mrf.new$edge.par[[i]][1,2,1] <- 3 mrf.new$edge.par[[i]][2,1,1] <- 4 } ## ----------------------------------------------------------------------------- mrf.new <- train.mrf(mrf.new, mc.samples) ## ----------------------------------------------------------------------------- mrf.new$par ## ----------------------------------------------------------------------------- mrf.new$node.pot <- mrf.new$node.pot / rowSums(mrf.new$node.pot) mrf.new$edge.pot[[1]] <- mrf.new$edge.pot[[1]] / rowSums(mrf.new$edge.pot[[1]]) ## ----------------------------------------------------------------------------- mrf.new$node.pot[1,] ## ----------------------------------------------------------------------------- mrf.new$edge.pot[[1]] ## ----------------------------------------------------------------------------- emmis.prob <- matrix(0, nrow=2, ncol=4) emmis.prob[1,] <- c(0.59, 0.25, 0.15, 0.01) emmis.prob[2,] <- c(0.01, 0.15, 0.25, 0.59) emmis.prob ## ----------------------------------------------------------------------------- hmm.samples <- mc.samples hmm.samples[mc.samples == 1] <- sample.int(4, sum(mc.samples == 1), replace = TRUE, prob=emmis.prob[1,]) hmm.samples[mc.samples == 2] <- sample.int(4, sum(mc.samples == 2), replace = TRUE, prob=emmis.prob[2,]) hmm.samples[1:10,] ## ----------------------------------------------------------------------------- crf.new <- make.crf(adj, n.states) ## ----------------------------------------------------------------------------- crf.new <- make.features(crf.new, 5, 1) crf.new <- make.par(crf.new, 8) ## ----------------------------------------------------------------------------- crf.new$node.par[1,1,1] <- 1 for (i in 1:crf.new$n.edges) { crf.new$edge.par[[i]][1,1,] <- 2 crf.new$edge.par[[i]][1,2,] <- 3 crf.new$edge.par[[i]][2,1,] <- 4 } crf.new$node.par[,1,2] <- 5 crf.new$node.par[,1,3] <- 6 crf.new$node.par[,1,4] <- 7 crf.new$node.par[,1,5] <- 8 ## 
----------------------------------------------------------------------------- hmm.nf <- lapply(1:dim(hmm.samples)[1], function(i) matrix(1, crf.new$n.nf, crf.new$n.nodes)) for (i in 1:dim(hmm.samples)[1]) { hmm.nf[[i]][2, hmm.samples[i,] != 1] <- 0 hmm.nf[[i]][3, hmm.samples[i,] != 2] <- 0 hmm.nf[[i]][4, hmm.samples[i,] != 3] <- 0 hmm.nf[[i]][5, hmm.samples[i,] != 4] <- 0 } hmm.ef <- lapply(1:dim(hmm.samples)[1], function(i) matrix(1, crf.new$n.ef, crf.new$n.edges)) ## ----------------------------------------------------------------------------- crf.new <- train.crf(crf.new, mc.samples, hmm.nf, hmm.ef) ## ----------------------------------------------------------------------------- crf.new$par ## ----------------------------------------------------------------------------- hmm.infer <- matrix(0, nrow=dim(hmm.samples)[1], ncol=dim(hmm.samples)[2]) for (i in 1:dim(hmm.samples)[1]) { crf.new <- crf.update(crf.new, hmm.nf[[i]], hmm.ef[[i]]) hmm.infer[i,] <- decode.chain(crf.new) } ## ----------------------------------------------------------------------------- sum(hmm.infer != mc.samples) ## ----------------------------------------------------------------------------- crf.new <- train.crf(crf.new, mc.samples, hmm.nf, hmm.ef, infer.method = infer.lbp)
/scratch/gouwar.j/cran-all/cranData/CRF/inst/doc/Tutorial.R
--- title: "CRF Tutorial" author: "Ling-Yun Wu" date: "`r Sys.Date()`" output: pdf_document: latex_engine: xelatex number_sections: yes toc: yes vignette: > %\VignetteIndexEntry{CRF Tutorial} %\VignetteEngine{knitr::rmarkdown} --- # Markov Random Field In this section, we considered a Markov chain example. We represented this Markov chain model by a CRF object and generate the samples by using the sampling functions provided in CRF package. Finally, we learned a new Markov random field (MRF) model from the generated samples. ## Build Markov chain model First we imported the CRF package: ```{r} library(CRF) ``` We set the parameters for Markov chain model: ```{r} n.nodes <- 10 n.states <- 2 prior.prob <- c(0.8, 0.2) trans.prob <- matrix(0, nrow=2, ncol=2) trans.prob[1,] <- c(0.95, 0.05) trans.prob[2,] <- c(0.05, 0.95) ``` The Markov chain consists of 10 nodes and there are 2 states for each node. The prior probability is ```{r} prior.prob ``` and the transition probability is ```{r} trans.prob ``` Then we constructed the adjacent matrix of chain: ```{r} adj <- matrix(0, n.nodes, n.nodes) for (i in 1:(n.nodes-1)) { adj[i, i+1] <- 1 } ``` Note that the adjacent matrix will be automatically symmetrized when used to build the CRF object, therefore only the upper (or lower) triangular matrix is need here. ```{r} adj ``` Now we can build the CRF object for Markov chain model: ```{r} mc <- make.crf(adj, n.states) ``` and set the parameters: ```{r} mc$node.pot[1,] <- prior.prob for (i in 1:mc$n.edges) { mc$edge.pot[[i]] <- trans.prob } ``` ## Generate samples We generated 10000 samples from the Markov chain model and displayed the first 10 samples: ```{r} mc.samples <- sample.chain(mc, 10000) mc.samples[1:10, ] ``` ## Learn Markov random field model from MC data In order to learn Markov random field model from generated data, we first built another CRF object: ```{r} mrf.new <- make.crf(adj, n.states) ``` and created the paramter structure: ```{r} mrf.new <- make.features(mrf.new) mrf.new <- make.par(mrf.new, 4) ``` We only need 4 paramters in the MRF model, one for prior probability and three for transition probability, since the probabilities are summed to one. ```{r} mrf.new$node.par[1,1,1] <- 1 for (i in 1:mrf.new$n.edges) { mrf.new$edge.par[[i]][1,1,1] <- 2 mrf.new$edge.par[[i]][1,2,1] <- 3 mrf.new$edge.par[[i]][2,1,1] <- 4 } ``` Then we trained the model using `train.mrf` function: ```{r} mrf.new <- train.mrf(mrf.new, mc.samples) ``` After training, we can check the parameter values: ```{r} mrf.new$par ``` We normalized the potentials in MRF to make it more like probability: ```{r} mrf.new$node.pot <- mrf.new$node.pot / rowSums(mrf.new$node.pot) mrf.new$edge.pot[[1]] <- mrf.new$edge.pot[[1]] / rowSums(mrf.new$edge.pot[[1]]) ``` Now we can check the learned prior probability ```{r} mrf.new$node.pot[1,] ``` and transition probability ```{r} mrf.new$edge.pot[[1]] ``` # Conditional Random Field In this section, we generated hidden Markov Model (HMM) samples based on the Markov chain samples in previous section. Then we learned a conditional random field (CRF) model from the HMM data. ## Generate samples Suppose that the Markov chain can not be directly observed. 
There are 4 observation states and the observation probability (emission probability) is given as follows: ```{r} emmis.prob <- matrix(0, nrow=2, ncol=4) emmis.prob[1,] <- c(0.59, 0.25, 0.15, 0.01) emmis.prob[2,] <- c(0.01, 0.15, 0.25, 0.59) emmis.prob ``` We simulated the observation data from the Markov chain samples: ```{r} hmm.samples <- mc.samples hmm.samples[mc.samples == 1] <- sample.int(4, sum(mc.samples == 1), replace = TRUE, prob=emmis.prob[1,]) hmm.samples[mc.samples == 2] <- sample.int(4, sum(mc.samples == 2), replace = TRUE, prob=emmis.prob[2,]) hmm.samples[1:10,] ``` ## Learn conditional random field model from HMM data Now we try to learn a CRF model from the HMM data. We first built another CRF object: ```{r} crf.new <- make.crf(adj, n.states) ``` and created the parameter structure: ```{r} crf.new <- make.features(crf.new, 5, 1) crf.new <- make.par(crf.new, 8) ``` The major difference between the CRF and the MRF is that we now have 5 node features, instead of the 1 constant feature in the MRF model. The first node feature is the constant feature as in the MRF model, and the other 4 node features correspond to the observation states respectively. The number of edge features is still one. We now need eight parameters, one for the prior probability, three for the transition probability, and four for the emission probability. ```{r} crf.new$node.par[1,1,1] <- 1 for (i in 1:crf.new$n.edges) { crf.new$edge.par[[i]][1,1,] <- 2 crf.new$edge.par[[i]][1,2,] <- 3 crf.new$edge.par[[i]][2,1,] <- 4 } crf.new$node.par[,1,2] <- 5 crf.new$node.par[,1,3] <- 6 crf.new$node.par[,1,4] <- 7 crf.new$node.par[,1,5] <- 8 ``` We prepared the node features and the edge features, which are needed for training: ```{r} hmm.nf <- lapply(1:dim(hmm.samples)[1], function(i) matrix(1, crf.new$n.nf, crf.new$n.nodes)) for (i in 1:dim(hmm.samples)[1]) { hmm.nf[[i]][2, hmm.samples[i,] != 1] <- 0 hmm.nf[[i]][3, hmm.samples[i,] != 2] <- 0 hmm.nf[[i]][4, hmm.samples[i,] != 3] <- 0 hmm.nf[[i]][5, hmm.samples[i,] != 4] <- 0 } hmm.ef <- lapply(1:dim(hmm.samples)[1], function(i) matrix(1, crf.new$n.ef, crf.new$n.edges)) ``` Then we trained the model using the `train.crf` function: ```{r} crf.new <- train.crf(crf.new, mc.samples, hmm.nf, hmm.ef) ``` After training, we can check the parameter values: ```{r} crf.new$par ``` With the trained CRF model, we can infer the hidden states given the observations: ```{r} hmm.infer <- matrix(0, nrow=dim(hmm.samples)[1], ncol=dim(hmm.samples)[2]) for (i in 1:dim(hmm.samples)[1]) { crf.new <- crf.update(crf.new, hmm.nf[[i]], hmm.ef[[i]]) hmm.infer[i,] <- decode.chain(crf.new) } ``` The inferred result was compared with the true hidden states: ```{r} sum(hmm.infer != mc.samples) ``` ## Use other inference methods in the training The default inference method used in the `train.mrf` and `train.crf` functions is `infer.chain`, which can only handle chain-structured graphs. We can provide the preferred inference method when calling the training functions. For example, use the loopy belief propagation algorithm: ```{r} crf.new <- train.crf(crf.new, mc.samples, hmm.nf, hmm.ef, infer.method = infer.lbp) ``` In a more complicated way, we can redefine the functions for calculating the negative log-likelihood, i.e., the functions `mrf.nll` and `crf.nll`, respectively. A minimal sketch of such a wrapper is shown below.
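For illustration only (not part of the original tutorial), the following sketch wraps `crf.nll` in a function with the same signature that prints the objective value at every optimizer call, and passes it to `train.crf` through its `nll` argument; the name `verbose.nll` is ours.

```r
verbose.nll <- function(par, crf, instances, node.fea = NULL, edge.fea = NULL,
                        node.ext = NULL, edge.ext = NULL,
                        infer.method = infer.chain, ...)
{
  value <- crf.nll(par, crf, instances, node.fea, edge.fea,
                   node.ext, edge.ext, infer.method, ...)
  cat("negative log-likelihood:", value, "\n")
  value
}
crf.new <- train.crf(crf.new, mc.samples, hmm.nf, hmm.ef, nll = verbose.nll)
```

A replacement for `mrf.nll` can be written in the same way, as long as it keeps the argument order expected by `train.mrf`.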
/scratch/gouwar.j/cran-all/cranData/CRF/inst/doc/Tutorial.Rmd
--- title: "CRF Tutorial" author: "Ling-Yun Wu" date: "`r Sys.Date()`" output: pdf_document: latex_engine: xelatex number_sections: yes toc: yes vignette: > %\VignetteIndexEntry{CRF Tutorial} %\VignetteEngine{knitr::rmarkdown} --- # Markov Random Field In this section, we considered a Markov chain example. We represented this Markov chain model by a CRF object and generate the samples by using the sampling functions provided in CRF package. Finally, we learned a new Markov random field (MRF) model from the generated samples. ## Build Markov chain model First we imported the CRF package: ```{r} library(CRF) ``` We set the parameters for Markov chain model: ```{r} n.nodes <- 10 n.states <- 2 prior.prob <- c(0.8, 0.2) trans.prob <- matrix(0, nrow=2, ncol=2) trans.prob[1,] <- c(0.95, 0.05) trans.prob[2,] <- c(0.05, 0.95) ``` The Markov chain consists of 10 nodes and there are 2 states for each node. The prior probability is ```{r} prior.prob ``` and the transition probability is ```{r} trans.prob ``` Then we constructed the adjacent matrix of chain: ```{r} adj <- matrix(0, n.nodes, n.nodes) for (i in 1:(n.nodes-1)) { adj[i, i+1] <- 1 } ``` Note that the adjacent matrix will be automatically symmetrized when used to build the CRF object, therefore only the upper (or lower) triangular matrix is need here. ```{r} adj ``` Now we can build the CRF object for Markov chain model: ```{r} mc <- make.crf(adj, n.states) ``` and set the parameters: ```{r} mc$node.pot[1,] <- prior.prob for (i in 1:mc$n.edges) { mc$edge.pot[[i]] <- trans.prob } ``` ## Generate samples We generated 10000 samples from the Markov chain model and displayed the first 10 samples: ```{r} mc.samples <- sample.chain(mc, 10000) mc.samples[1:10, ] ``` ## Learn Markov random field model from MC data In order to learn Markov random field model from generated data, we first built another CRF object: ```{r} mrf.new <- make.crf(adj, n.states) ``` and created the paramter structure: ```{r} mrf.new <- make.features(mrf.new) mrf.new <- make.par(mrf.new, 4) ``` We only need 4 paramters in the MRF model, one for prior probability and three for transition probability, since the probabilities are summed to one. ```{r} mrf.new$node.par[1,1,1] <- 1 for (i in 1:mrf.new$n.edges) { mrf.new$edge.par[[i]][1,1,1] <- 2 mrf.new$edge.par[[i]][1,2,1] <- 3 mrf.new$edge.par[[i]][2,1,1] <- 4 } ``` Then we trained the model using `train.mrf` function: ```{r} mrf.new <- train.mrf(mrf.new, mc.samples) ``` After training, we can check the parameter values: ```{r} mrf.new$par ``` We normalized the potentials in MRF to make it more like probability: ```{r} mrf.new$node.pot <- mrf.new$node.pot / rowSums(mrf.new$node.pot) mrf.new$edge.pot[[1]] <- mrf.new$edge.pot[[1]] / rowSums(mrf.new$edge.pot[[1]]) ``` Now we can check the learned prior probability ```{r} mrf.new$node.pot[1,] ``` and transition probability ```{r} mrf.new$edge.pot[[1]] ``` # Conditional Random Field In this section, we generated hidden Markov Model (HMM) samples based on the Markov chain samples in previous section. Then we learned a conditional random field (CRF) model from the HMM data. ## Generate samples Suppose that the Markov chain can not be directly observed. 
There are 4 observation states and the observation probability (emmision probability) is given as follows: ```{r} emmis.prob <- matrix(0, nrow=2, ncol=4) emmis.prob[1,] <- c(0.59, 0.25, 0.15, 0.01) emmis.prob[2,] <- c(0.01, 0.15, 0.25, 0.59) emmis.prob ``` We simulated the observation data from Markov chain samples: ```{r} hmm.samples <- mc.samples hmm.samples[mc.samples == 1] <- sample.int(4, sum(mc.samples == 1), replace = TRUE, prob=emmis.prob[1,]) hmm.samples[mc.samples == 2] <- sample.int(4, sum(mc.samples == 2), replace = TRUE, prob=emmis.prob[2,]) hmm.samples[1:10,] ``` ## Learn conditional random field model from HMM data Now we try to learn a CRF model from HMM data. We first built another CRF object: ```{r} crf.new <- make.crf(adj, n.states) ``` and created the paramter structure: ```{r} crf.new <- make.features(crf.new, 5, 1) crf.new <- make.par(crf.new, 8) ``` The major difference between CRF and MRF is that we have 5 node features now, instead of 1 constant feature in MRF model. The first node feature is the constant feature as in MRF model, and the other 4 node features correspond to observation states respectively. The number of edge feature is still one. We now need eight paramters, one for prior probability, three for transition probability, and four for emmision probability. ```{r} crf.new$node.par[1,1,1] <- 1 for (i in 1:crf.new$n.edges) { crf.new$edge.par[[i]][1,1,] <- 2 crf.new$edge.par[[i]][1,2,] <- 3 crf.new$edge.par[[i]][2,1,] <- 4 } crf.new$node.par[,1,2] <- 5 crf.new$node.par[,1,3] <- 6 crf.new$node.par[,1,4] <- 7 crf.new$node.par[,1,5] <- 8 ``` We prepared the node features and the edge features, which are need for training: ```{r} hmm.nf <- lapply(1:dim(hmm.samples)[1], function(i) matrix(1, crf.new$n.nf, crf.new$n.nodes)) for (i in 1:dim(hmm.samples)[1]) { hmm.nf[[i]][2, hmm.samples[i,] != 1] <- 0 hmm.nf[[i]][3, hmm.samples[i,] != 2] <- 0 hmm.nf[[i]][4, hmm.samples[i,] != 3] <- 0 hmm.nf[[i]][5, hmm.samples[i,] != 4] <- 0 } hmm.ef <- lapply(1:dim(hmm.samples)[1], function(i) matrix(1, crf.new$n.ef, crf.new$n.edges)) ``` Then we trained the model using `train.crf` function: ```{r} crf.new <- train.crf(crf.new, mc.samples, hmm.nf, hmm.ef) ``` After training, we can check the parameter values: ```{r} crf.new$par ``` With trained CRF model, we can infer the hidden states given the observations: ```{r} hmm.infer <- matrix(0, nrow=dim(hmm.samples)[1], ncol=dim(hmm.samples)[2]) for (i in 1:dim(hmm.samples)[1]) { crf.new <- crf.update(crf.new, hmm.nf[[i]], hmm.ef[[i]]) hmm.infer[i,] <- decode.chain(crf.new) } ``` The inferred result was compared with the true hidden states: ```{r} sum(hmm.infer != mc.samples) ``` ## Use other inference methods in the training The default inference method used in the `train.mrf` and `train.crf` functions is `infer.chain`, which can only handle chain-structured graphs. We can provide the preferred inference method when calling the training functions. For example, use the loopy brief propagation algorithm: ```{r} crf.new <- train.crf(crf.new, mc.samples, hmm.nf, hmm.ef, infer.method = infer.lbp) ``` In a more complicated way, we can redefine the functions for calculating the negative log-likelihood, i.e., the functions `mrf.nll` and `crf.nll`, respectively.
/scratch/gouwar.j/cran-all/cranData/CRF/vignettes/Tutorial.Rmd
### crm function for finding next dose level and the updated parameter 'a' crm <- function(target,prior,ptdata,model=1,a0=1,b=3){ if(model != 1 && model != 2){ stop("Error: model must be 1 or 2\n") } if(target<0 || target > 1){ stop("Error: target must be greater than 0 and less than 1\n") } if(any(prior<0) || any(prior>1)){ stop("Error: All elements in prior must be greater than 0 and less than 1\n") } ptdt = matrix(ptdata,ncol=2) callCRM = .C("CRM",as.integer(model),as.double(target),as.double(prior),as.integer(length(prior)), as.double(a0),as.double(b),as.integer(ptdt),as.integer(nrow(ptdt)),mtd=as.integer(0), aMean=as.double(0),PACKAGE="CRM") return(list(MTD=callCRM$mtd,a=callCRM$aMean)) }
/scratch/gouwar.j/cran-all/cranData/CRM/R/crm.R
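## Illustrative usage sketch (hypothetical numbers, not part of the package
## source): prior toxicity guesses for five dose levels, a target DLT rate of
## 0.25, and three patients already treated, given as (dose level, toxicity).
library(CRM)
prior <- c(0.05, 0.10, 0.20, 0.35, 0.50)
ptdata <- rbind(c(1, 0), c(2, 0), c(3, 1))
crm(target = 0.25, prior = prior, ptdata = ptdata, model = 1, a0 = 1, b = 3)
## returns a list with the recommended next dose level (MTD) and the updated
## model parameter a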
# CRM simulator that calls the CRM C function.
# Code in this file is used to build the CRM program, version 1.2 (the latest version).
## 8/26/08
##########################################################
CRMsimOne <- function(model,cohort,nsubject,rate,cycle,prior,true,target,a0,b,jump,start.dose){
  lenprior <- as.integer(length(prior))
  dose.start <- rep(start.dose,cohort)    # dose-level at the beginning of trials
  y.rand <- runif(cohort)
  y <- 1.0*(y.rand<=true[dose.start])     # results for toxicity come from the true distribution
  pData <- c(dose.start,y)
  pData <- matrix(pData,ncol=2)
  modl <- as.integer(model)
  targ <- as.double(target)
  pri <- as.double(prior)
  azero <- as.double(a0)
  bvalue <- as.double(b)

  for (i in 2:(nsubject/cohort)){
    callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(pData),
                  as.integer(nrow(pData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM")
    x <- callCRM$nextDose
    if (((x-pData[nrow(pData),1])>1) && (jump==FALSE)){
      x <- pData[nrow(pData),1]+1
    } # make dose escalation <= 1
    x.cohort <- rep(x,cohort)
    y.rand <- runif(cohort)
    y <- 1.0*(y.rand<=true[x])
    xy <- cbind(x.cohort,y)
    pData <- rbind(pData,xy)
  }

  nTox <- sum(pData[,2])    ## number of toxicities
  doseLevel <- 1:lenprior

  # Proportion of patients treated at each dose level (commented out):
  # doseTab <- table(pData[,1])/nrow(pData)
  # Number of patients treated at each dose level
  doseTab <- table(pData[,1])
  doseTab <- doseTab[match(doseLevel,as.numeric(names(doseTab)))]
  doseTab[is.na(doseTab)] <- 0
  names(doseTab) <- doseLevel

  # Number of toxicities at each dose level
  toxTab <- table(pData[,1],pData[,2])
  if(any(colnames(toxTab) == "1")){
    toxCt <- toxTab[,"1"]
    toxTab <- toxCt[match(doseLevel,as.numeric(rownames(toxTab)))]
    toxTab[is.na(toxTab)] <- 0
  }else {
    toxTab <- rep(0,lenprior)
  }
  names(toxTab) <- doseLevel

  # mtd <- CRM(model,prior,target,pData,a0,b)   # mtd at nsubject+1, proposed mtd for next patient
  callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(pData),
                as.integer(nrow(pData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM")
  mtd <- callCRM$nextDose
  if (((mtd-pData[nrow(pData),1])>1) && (jump==FALSE)){
    mtd <- pData[nrow(pData),1]+1
  } # make dose escalation <= 1

  ### calculate study duration ###
  ## total number of patients
  ndata <- nrow(pData)
  tarrive <- 0
  tarr <- rep(NA,ndata)
  tstart <- rep(NA,ndata)
  y.DLT <- rep(NA,ndata)
  tend <- rep(NA,ndata)

  # for the first cohort of patients
  for (i in 1:cohort){
    if (pData[,2][i]==1) {y.DLT[i] <- runif(1,min=0,max=cycle)}
    if (pData[,2][i]==0) {y.DLT[i] <- cycle}
    # arrival time
    t1 <- (rexp(1,rate))
    tarrive <- tarrive+t1
    tarr[i] <- tarrive
    tstart[i] <- tarrive
    tend[i] <- tstart[i] + y.DLT[i]
  }

  # next pts can start after maxend (ie after cohort 1 is done)
  maxend <- max(tend[!is.na(tend)])

  # for patients (cohort+1) and on
  j <- 0
  for (i in (cohort+1):ndata){
    j <- j+1
    t1 <- (rexp(1,rate))
    tarrive <- tarrive+t1                                        # arrival time
    tarr[i] <- tarrive
    if (tarr[i]>maxend) {tstart[i] <- tarrive}                   # find start time
    if (tarr[i]<=maxend) {tstart[i] <- maxend}
    if (pData[,2][i]==1) {y.DLT[i] <- runif(1,min=0,max=cycle)}  # time to DLT
    if (pData[,2][i]==0) {y.DLT[i] <- cycle}
    tend[i] <- tstart[i] + y.DLT[i]                              # end time of each pt
    if (j==cohort){
      maxend <- max(tend[!is.na(tend)])
      j <- 0
    }
  }
  # new <- cbind(tarr,tstart,y.DLT,tend)
  # print(new)
  studydur <- max(tend)

  result <- list(mtd,nTox,doseTab,toxTab,studydur)
  return(result)
}

#####################################
crmsim <- function(target,prior,true,rate,cycle,cohort=1,nsubject=24,nsim=1000,model=1,a0=1,b=3,jump=FALSE,
                   start.dose=1,seed=777){
  if(target<0 || target > 1){
    stop("Error: target must be greater than 0 and less than 1\n")
  }
  if(any(prior<0) || any(prior>1)){
    stop("Error: All elements in prior must be greater than 0 and less than 1\n")
  }
  if(any(true<0) || any(true>1)){
    stop("Error: All elements in true must be greater than 0 and less than 1\n")
  }
  if(rate<0 || cycle <0){
    stop("Error: rate and cycle must be greater than 0\n")
  }
  if(length(prior)!=length(true)){
    stop("Error: prior and true should have the same length\n")
  }else {
    for(i in 2:length(prior)){
      if(prior[i]<prior[i-1] || true[i]<true[i-1]){
        stop("Error: prior and true must be in an ascending order\n")
      }
    }
  }
  if(model != 1 && model != 2){
    stop("Error: model must be 1 or 2\n")
  }
  if(is.na(match(start.dose,1:length(prior)))){
    stop("Error: start.dose must be in 1:length(prior)\n")
  }
  if(jump != FALSE && jump != TRUE){
    stop("Error: jump must be FALSE or TRUE\n")
  }
  if(floor(nsubject/cohort) != (nsubject/cohort)){
    mess <- paste("nsubject/cohort is not an integer. The number of subjects is changed to",
                  cohort*floor(nsubject/cohort))
    warning(mess)
  }

  set.seed(seed)
  toxCt <- rep(NA,nsim)
  mtd <- rep(NA,nsim)
  duration <- rep(NA,nsim)
  toxTab <- rep(0,length(prior))
  doseTab <- rep(0,length(prior))

  for (i in 1:nsim){
    fit <- CRMsimOne(model,cohort,nsubject,rate,cycle,prior,true,target,a0,b,jump,start.dose)
    mtd[i] <- fit[[1]]
    toxCt[i] <- fit[[2]]
    doseTab <- doseTab + fit[[3]]
    toxTab <- toxTab + fit[[4]]
    duration[i] <- round(fit[[5]])
  }

  doseLevel <- 1:length(prior)
  toxTab <- toxTab/nsim
  doseTab <- doseTab/nsim
  propDoseTab <- doseTab/(cohort*floor(nsubject/cohort))
  mtdTab <- table(mtd)
  mtdTab <- mtdTab[match(doseLevel,as.numeric(names(mtdTab)))]
  mtdTab[is.na(mtdTab)] <- 0
  mtdTab <- mtdTab/nsim
  names(mtdTab) <- doseLevel

  simTab <- rbind(100*mtdTab,100*propDoseTab,doseTab,toxTab,true)
  simTab <- round(simTab,2)
  rownames(simTab) <- c("% Selection","% Subjects Treated","# Subjects Treated","Average Toxicities",
                        "True probabilities")

  list(SimResult=simTab,TrialDuration=summary(duration))
}
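## ---------------------------------------------------------------------------
## Usage sketch (illustrative, not run). The prior/true vectors below are the
## example values used in demo/crmsim.R, not recommendations; crmsim() needs
## the compiled "CRM" C routine from this package, so the call is left
## commented out in the same style as the demo.
# prior1 <- c(0.05,0.1,0.2,0.3,0.5,0.7)   # prior guesses of toxicity probability per dose level
# true1  <- c(0.1,0.15,0.2,0.4,0.5,0.8)   # "true" toxicity probabilities used to simulate outcomes
# fit <- crmsim(target=0.2, prior=prior1, true=true1, rate=0.1, cycle=21,
#               cohort=1, nsubject=24, nsim=100, model=1, a0=1, b=3,
#               jump=FALSE, start.dose=1, seed=777)
# fit$SimResult       # per-dose summary: % selection, subjects treated, toxicities
# fit$TrialDuration   # summary() of simulated trial durations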
## End of file: CRM/R/crmsim.R
#CRM simulator that called the CRM c function #codes in this file are used to build CRM program #This code is used to make CRM package ## function for one simulation; allow one incomplete data CRMoneIncomplete <- function(model,nsubject,prior,true,target,a0,b,jump,start.dose,rate,cycle){ tstart <- rep(NA,nsubject) tend <- rep(NA,nsubject) lenprior <- as.integer(length(prior)) # begin trial for the 1st patient dose.start <- start.dose # dose-level at the beginning of trials y.rand <- runif(1) y <- 1.0*(y.rand<=true[dose.start]) #results for toxicity come from the true distribution pData <- c(dose.start,y) pData <- matrix(pData,ncol=2) i <- 1 if (y==1) {y.DLT=runif(1,min=0,max=cycle)} # time to DLT if (y==0) {y.DLT=cycle} t1 <- (rexp(1,rate)) #arrive time tstart[i] <- t1 tend[i] <- tstart[i] + y.DLT new <- cbind(pData,t1,tstart[i],y.DLT,tend[i]) new <- matrix(new,ncol=6) #2nd pt modl <- as.integer(model) targ <- as.double(target) pri <- as.double(prior) azero <- as.double(a0) bvalue <- as.double(b) i <- 2 callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(pData), as.integer(nrow(pData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") x <- callCRM$nextDose if (((x-pData[(i-1),1])>1) && (jump==FALSE)) {x<-pData[(i-1),1]+1} y.rand <- runif(1) y <- 1.0*(y.rand<=true[x]) pData <- rbind(pData,c(x,y)) if (y==1) {y.DLT <- runif(1,min=0,max=cycle)} if (y==0) {y.DLT <- cycle} t1 <- t1+(rexp(1,rate)) #arrive time for pts 2 if (t1>tend[i-1]) {tstart[i] <- t1} if (t1<=tend[i-1]) {tstart[i] <- tend[i-1]} tend[i] <- tstart[i] + y.DLT new <- rbind(new,c(pData[,1][i],pData[,2][i],t1,tstart[i],y.DLT,tend[i])) for (i in 3:nsubject){ t1 <- t1+(rexp(1,rate)) #arrival time #pt minimum start time if (t1>tend[i-2]) {tstart[i] <- t1} if (t1<=tend[i-2]) {tstart[i] <- tend[i-2]} if ((tstart[i])>=tend[i-1]) { callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(pData), as.integer(nrow(pData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") x <- callCRM$nextDose if (((x-pData[(i-1),1])>1) && (jump==FALSE)) {x <- pData[(i-1),1]+1} } else {#if pt came in after end time of last pt then use response of last pt tmpData <- matrix(pData[-(i-1),],ncol=2) callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(tmpData), as.integer(nrow(tmpData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") x <- callCRM$nextDose if (((x-pData[(i-2),1])>1) && (jump==FALSE)) {x <- pData[(i-2),1]+1} } y.rand <- runif(1) y <- 1.0*(y.rand<=true[x]) xy <- cbind(x,y) pData <- rbind(pData,xy) if (y==1) {y.DLT <- runif(1,min=0,max=cycle)} if (y==0) {y.DLT <- cycle} tend[i] <- tstart[i] + y.DLT #pt end time new=rbind(new,c(pData[,1][i],pData[,2][i],t1,tstart[i],y.DLT,tend[i])) } studyTime <- max(tend) nTox <- sum(pData[,2]) ## number of toxicity doseLevel <- 1:lenprior #Proportion of patients treated at each dose level # doseTab <- table(pData[,1])/nrow(pData) #Proportion of patients treated at each dose level doseTab <- table(pData[,1]) doseTab <- doseTab[match(doseLevel,as.numeric(names(doseTab)))] doseTab[is.na(doseTab)] <- 0 names(doseTab) <- doseLevel #Number of toxicities at each dose level toxTab <- table(pData[,1],pData[,2]) if(any(colnames(toxTab) == "1")){ toxCt <- toxTab[,"1"] toxTab <- toxCt[match(doseLevel,as.numeric(rownames(toxTab)))] toxTab[is.na(toxTab)] <- 0 }else { toxTab <- rep(0,lenprior) } names(toxTab) <- doseLevel callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(pData), 
as.integer(nrow(pData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") mtd <- callCRM$nextDose if (((mtd-pData[nrow(pData),1])>1) && (jump==FALSE)){ mtd <- pData[nrow(pData),1]+1 } # make dose escalation <= 1 result <- list(mtd,nTox,doseTab,toxTab,studyTime) return(result) } crmsiminc1 <- function(target,prior,true,rate,cycle,nsubject=24,nsim=1000,model=1,a0=1,b=3,jump=FALSE, start.dose=1,seed=777){ if(target<0 || target > 1){ stop("Error: target must be greater than 0 and less than 1\n") } if(any(prior<0) || any(prior>1)){ stop("Error: All elements in prior must be greater than 0 and less than 1\n") } if(any(true<0) || any(true>1)){ stop("Error: All elements in true must be greater than 0 and less than 1\n") } if(length(prior)!=length(true)){ stop("Error: prior and true should have the same length\n") }else { for(i in 2:length(prior)){ if(prior[i]<prior[i-1] || true[i]<true[i-1]){ stop("Error: prior and true must be in an ascending order\n") } } } if(model != 1 && model != 2){ stop("Error: model must be 1 or 2\n") } if(is.na(match(start.dose,1:length(prior)))){ stop("Error: start.dose must be in 1:length(prior)\n") } if(jump != FALSE && jump != TRUE){ stop("Error: jump must be FALSE or TRUE\n") } set.seed(seed) toxCt <- rep(NA,nsim) mtd <- rep(NA,nsim) duration <- rep(NA,nsim) toxTab <- rep(0,length(prior)) doseTab <- rep(0,length(prior)) for (i in 1:nsim){ fit <- CRMoneIncomplete(model,nsubject,prior,true,target,a0,b,jump,start.dose,rate,cycle) mtd[i] <- fit[[1]] toxCt[i] <- fit[[2]] doseTab <- doseTab + fit[[3]] toxTab <- toxTab + fit[[4]] duration[i] <- round(fit[[5]]) } doseLevel = 1:length(prior) toxTab <- toxTab/nsim doseTab <- doseTab/nsim propDoseTab <- doseTab/nsubject mtdTab <- table(mtd) mtdTab <- mtdTab[match(doseLevel,as.numeric(names(mtdTab)))] mtdTab[is.na(mtdTab)] <- 0 mtdTab <- mtdTab/nsim names(mtdTab) <- doseLevel simTab <- rbind(100*mtdTab,100*propDoseTab,doseTab,toxTab,true) simTab <- round(simTab,2) rownames(simTab) <- c("% Selection","% Subjects Treated","# Subjects Treated","Average Toxicities", "True probabilities") list(SimResult=simTab,TrialDuration=summary(duration)) }
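## ---------------------------------------------------------------------------
## Usage sketch (illustrative, not run). crmsiminc1() takes the same arguments
## as crmsim() except `cohort`: patients enter one at a time, and when the
## previous patient's DLT follow-up is still incomplete that patient is
## excluded from the dose-assignment calculation. Example values are taken
## from demo/crmsim.R.
# prior1 <- c(0.05,0.1,0.2,0.3,0.5,0.7)
# true1  <- c(0.1,0.15,0.2,0.4,0.5,0.8)
# fit <- crmsiminc1(target=0.2, prior=prior1, true=true1, rate=0.1, cycle=21,
#                   nsubject=24, nsim=100, model=1, a0=1, b=3,
#                   jump=FALSE, start.dose=1, seed=777)
# fit$SimResult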
## End of file: CRM/R/crmsiminc1.R
#CRM simulator that called the CRM c function #codes in this file are used to build CRM package # Quincy Mo, 12/1/2008 ## allow 2 incomplete pt data CRMtwoIncomplete <- function(model,nsubject,prior,true,target,a0,b,jump,start.dose,rate,cycle){ tstart <- rep(NA,nsubject) tend <- rep(NA,nsubject) lenprior <- as.integer(length(prior)) # begin trial for the 1st patient dose.start <- start.dose # dose-level at the beginning of trials y.rand <- runif(1) y <- 1.0*(y.rand<=true[dose.start]) #results for toxicity come from the true distribution pData <- c(dose.start,y) pData <- matrix(pData,ncol=2) i <- 1 if (y==1) {y.DLT=runif(1,min=0,max=cycle)} # time to DLT if (y==0) {y.DLT=cycle} t1 <- (rexp(1,rate)) #arrive time tstart[i] <- t1 tend[i] <- tstart[i] + y.DLT new <- cbind(pData,t1,tstart[i],y.DLT,tend[i]) new <- matrix(new,ncol=6) modl <- as.integer(model) targ <- as.double(target) pri <- as.double(prior) azero <- as.double(a0) bvalue <- as.double(b) for(i in 2:3){ callCRM = .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(pData), as.integer(nrow(pData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") x <- callCRM$nextDose if (((x-pData[(i-1),1])>1) && (jump==FALSE)) {x<-pData[(i-1),1]+1} y.rand <- runif(1) y <- 1.0*(y.rand<=true[x]) pData <- rbind(pData,c(x,y)) if(y==1){ y.DLT <- runif(1,min=0,max=cycle) }else {y.DLT <- cycle} t1 <- t1+(rexp(1,rate)) if (t1>tend[i-1]) {tstart[i] <- t1} if (t1<=tend[i-1]) {tstart[i] <- tend[i-1]} tend[i] <- tstart[i] + y.DLT new <- rbind(new,c(pData[i,1],pData[i,2],t1,tstart[i],y.DLT,tend[i])) } for (i in 4:nsubject){ t1=t1+(rexp(1,rate)) #arrival time #pt minimum start time if (t1>tend[i-3]){tstart[i]=t1} else {tstart[i]=tend[i-3]} if ((tstart[i])>=tend[i-1]) {#if pt came in after end time of last pt then use response of last pt callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(pData), as.integer(nrow(pData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") x <- callCRM$nextDose if (((x-pData[(i-1),1])>1) && (jump==FALSE)) {x <- pData[(i-1),1]+1} }else if(tstart[i]>=tend[i-2]){ tmpData = matrix(pData[-(i-1),],ncol=2) callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(tmpData), as.integer(nrow(tmpData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") x <- callCRM$nextDose if (((x-pData[(i-2),1])>1) && (jump==FALSE)) {x <- pData[(i-2),1]+1} }else { tmpData = matrix(pData[1:(i-3),],ncol=2) callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(tmpData), as.integer(nrow(tmpData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") x <- callCRM$nextDose if (((x-pData[(i-3),1])>1) && (jump==FALSE)) {x <- pData[(i-3),1]+1} } y.rand <- runif(1) y <- 1.0*(y.rand<=true[x]) xy <- cbind(x,y) pData <- rbind(pData,xy) if (y==1) {y.DLT=runif(1,min=0,max=cycle)} if (y==0) {y.DLT=cycle} tend[i]=tstart[i] + y.DLT #pt end time new=rbind(new,c(pData[,1][i],pData[,2][i],t1,tstart[i],y.DLT,tend[i])) } studyTime <- max(tend) nTox <- sum(pData[,2]) ## number of toxicity doseLevel <- 1:lenprior #Proportion of patients treated at each dose level # doseTab <- table(pData[,1])/nrow(pData) #Proportion of patients treated at each dose level doseTab <- table(pData[,1]) doseTab <- doseTab[match(doseLevel,as.numeric(names(doseTab)))] doseTab[is.na(doseTab)] <- 0 names(doseTab) <- doseLevel #Number of toxicities at each dose level toxTab <- table(pData[,1],pData[,2]) if(any(colnames(toxTab) == "1")){ toxCt <- toxTab[,"1"] toxTab <- toxCt[match(doseLevel,as.numeric(rownames(toxTab)))] toxTab[is.na(toxTab)] <- 0 
}else { toxTab <- rep(0,lenprior) } names(toxTab) <- doseLevel callCRM <- .C("CRM",modl,targ,pri,lenprior,azero,bvalue,as.integer(pData), as.integer(nrow(pData)),nextDose=as.integer(0),aMean=as.double(0),PACKAGE="CRM") mtd <- callCRM$nextDose if (((mtd-pData[nrow(pData),1])>1) && (jump==FALSE)){ mtd <- pData[nrow(pData),1]+1 } # make dose escalation <= 1 result <- list(mtd,nTox,doseTab,toxTab,studyTime) return(result) } crmsiminc2 <- function(target,prior,true,rate,cycle,nsubject=24,nsim=1000,model=1,a0=1,b=3,jump=FALSE, start.dose=1,seed=777){ if(target<0 || target > 1){ stop("Error: target must be greater than 0 and less than 1\n") } if(any(prior<0) || any(prior>1)){ stop("Error: All elements in prior must be greater than 0 and less than 1\n") } if(any(true<0) || any(true>1)){ stop("Error: All elements in true must be greater than 0 and less than 1\n") } if(length(prior)!=length(true)){ stop("Error: prior and true should have the same length\n") }else { for(i in 2:length(prior)){ if(prior[i]<prior[i-1] || true[i]<true[i-1]){ stop("Error: prior and true must be in an ascending order\n") } } } if(model != 1 && model != 2){ stop("Error: model must be 1 or 2\n") } if(is.na(match(start.dose,1:length(prior)))){ stop("Error: start.dose must be in 1:length(prior)\n") } if(jump != FALSE && jump != TRUE){ stop("Error: jump must be FALSE or TRUE\n") } set.seed(seed) toxCt <- rep(NA,nsim) mtd <- rep(NA,nsim) duration <- rep(NA,nsim) toxTab <- rep(0,length(prior)) doseTab <- rep(0,length(prior)) for (i in 1:nsim){ fit <- CRMtwoIncomplete(model,nsubject,prior,true,target,a0,b,jump,start.dose,rate,cycle) mtd[i] <- fit[[1]] toxCt[i] <- fit[[2]] doseTab <- doseTab + fit[[3]] toxTab <- toxTab + fit[[4]] duration[i] <- round(fit[[5]]) } doseLevel = 1:length(prior) toxTab <- toxTab/nsim doseTab <- doseTab/nsim propDoseTab <- doseTab/nsubject mtdTab <- table(mtd) mtdTab <- mtdTab[match(doseLevel,as.numeric(names(mtdTab)))] mtdTab[is.na(mtdTab)] <- 0 mtdTab <- mtdTab/nsim names(mtdTab) <- doseLevel simTab <- rbind(100*mtdTab,100*propDoseTab,doseTab,toxTab,true) simTab <- round(simTab,2) rownames(simTab) <- c("% Selection","% Subjects Treated","# Subjects Treated","Average Toxicities", "True probabilities") list(SimResult=simTab,TrialDuration=summary(duration)) }
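## ---------------------------------------------------------------------------
## Usage sketch (illustrative, not run). crmsiminc2() mirrors crmsiminc1() but
## allows up to two patients with incomplete DLT follow-up to be excluded when
## the next dose is computed. Example values are taken from demo/crmsim.R;
## model=2 selects the one-parameter logistic model.
# prior1 <- c(0.05,0.1,0.2,0.3,0.5,0.7)
# true1  <- c(0.1,0.15,0.2,0.4,0.5,0.8)
# fit <- crmsiminc2(target=0.2, prior=prior1, true=true1, rate=0.1, cycle=21,
#                   nsubject=24, nsim=100, model=2, a0=1, b=3,
#                   jump=FALSE, start.dose=1, seed=777)
# fit$TrialDuration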
## End of file: CRM/R/crmsiminc2.R
# Table 1 in O'Quigley et al.'s paper, page 40.
# This example illustrates how the program is used to find
# the MTD and the updated parameter.
target <- 0.2
prior <- c(0.05,0.1,0.2,0.3,0.5,0.7)
x <- c(3,4,4,3,3,2,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1)
y <- c(0,0,1,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,1,1)
ptdata <- cbind(x,y)
for(i in 1:25){
  if(i == 1){
    cat(1,1,3,0,"\n")
  }
  res <- crm(target,prior,ptdata[1:i,],model=1,a0=1)
  if(i < 25){
    cat(i+1,res$a,res$MTD,ptdata[i+1,2],"\n")
  }else {
    cat(i+1,res$a,res$MTD,"\n")
  }
}
# the proposed MTD is res$MTD
## End of file: CRM/demo/crm.R
prior1 <- c(0.05,0.1,0.2,0.3,0.5,0.7)
true1 <- c(0.1,0.15,0.2,0.4,0.5,0.8)

# uncomment the following code to run

# simulations using model 1 (hyperbolic tangent model)
#crmsim(target=0.2,prior=prior1,true=true1,rate=0.1,cycle=21,cohort=1,nsubject=24,nsim=1000,
#       model=1,a0=1,b=3,jump=FALSE,start.dose=1,seed=777)
#crmsiminc1(target=0.2,prior=prior1,true=true1,rate=0.1,cycle=21,nsubject=24,nsim=1000,
#           model=1,a0=1,b=3,jump=FALSE,start.dose=1,seed=777)
#crmsiminc2(target=0.2,prior=prior1,true=true1,rate=0.1,cycle=21,nsubject=24,nsim=1000,
#           model=1,a0=1,b=3,jump=FALSE,start.dose=1,seed=777)

# simulations using model 2 (one-parameter logistic model)
#crmsim(target=0.2,prior=prior1,true=true1,rate=0.1,cycle=21,cohort=1,nsubject=24,nsim=1000,
#       model=2,a0=1,b=3,jump=FALSE,start.dose=1,seed=777)
#crmsiminc1(target=0.2,prior=prior1,true=true1,rate=0.1,cycle=21,nsubject=24,nsim=1000,
#           model=2,a0=1,b=3,jump=FALSE,start.dose=1,seed=777)
#crmsiminc2(target=0.2,prior=prior1,true=true1,rate=0.1,cycle=21,nsubject=24,nsim=1000,
#           model=2,a0=1,b=3,jump=FALSE,start.dose=1,seed=777)
## End of file: CRM/demo/crmsim.R
#' @import dplyr magrittr ggplot2 ggrepel ggpmisc #' @importFrom R6 R6Class #' @importFrom sccore plapply checkPackageInstalled #' @importFrom Matrix t #' @importFrom ggpubr stat_compare_means #' @importFrom cowplot plot_grid #' @importFrom stats setNames relevel #' @importFrom tidyr pivot_longer replace_na #' @importFrom ggbeeswarm geom_quasirandom #' @importFrom tibble add_column #' @importFrom scales comma #' @importFrom sparseMatrixStats colSums2 rowSums2 #' @importFrom utils globalVariables NULL utils::globalVariables(c("Valid Barcodes","Fraction Reads in Cells")) #' CRMetrics class object #' #' @description Functions to analyze Cell Ranger count data. To initialize a new object, 'data.path' or 'cms' is needed. 'metadata' is also recommended, but not required. #' @export CRMetrics <- R6Class("CRMetrics", lock_objects = FALSE, public = list( #' @field metadata data.frame or character Path to metadata file or name of metadata data.frame object. Metadata must contain a column named 'sample' containing sample names that must match folder names in 'data.path' (default = NULL) metadata = NULL, #' @field data.path character Path(s) to Cell Ranger count data, one directory per sample. If multiple paths, do c("path1","path2") (default = NULL) data.path = NULL, #' @field cms list List with count matrices (default = NULL) cms = NULL, #' @field cms.preprocessed list List with preprocessed count matrices after $doPreprocessing() (default = NULL) cms.preprocessed = NULL, #' @field cms.raw list List with raw, unfiltered count matrices, i.e., including all CBs detected also empty droplets (default = NULL) cms.raw = NULL, #' @field summary.metrics data.frame Summary metrics from Cell Ranger (default = NULL) summary.metrics = NULL, #' @field detailed.metrics data.frame Detailed metrics, i.e., no. genes and UMIs per cell (default = NULL) detailed.metrics = NULL, #' @field comp.group character A group present in the metadata to compare the metrics by, can be added with addComparison (default = NULL) comp.group = NULL, #' @field verbose logical Print messages or not (default = TRUE) verbose = TRUE, #' @field theme ggplot2 theme (default: theme_bw()) theme = ggplot2::theme_bw(), #' @field pal Plotting palette (default = NULL) pal = NULL, #' @field n.cores numeric Number of cores for calculations (default = 1) n.cores = 1, #' Initialize a CRMetrics object #' @description To initialize new object, 'data.path' or 'cms' is needed. 'metadata' is also recommended, but not required. #' @param data.path character Path to directory with Cell Ranger count data, one directory per sample (default = NULL). #' @param metadata data.frame or character Path to metadata file (comma-separated) or name of metadata dataframe object. Metadata must contain a column named 'sample' containing sample names that must match folder names in 'data.path' (default = NULL) #' @param cms list List with count matrices (default = NULL) #' @param samples character Sample names. Only relevant is cms is provided (default = NULL) #' @param unique.names logical Create unique cell names. Only relevant if cms is provided (default = TRUE) #' @param sep.cells character Sample-cell separator. 
Only relevant if cms is provided and `unique.names=TRUE` (default = "!!") #' @param comp.group character A group present in the metadata to compare the metrics by, can be added with addComparison (default = NULL) #' @param verbose logical Print messages or not (default = TRUE) #' @param theme ggplot2 theme (default: theme_bw()) #' @param n.cores integer Number of cores for the calculations (default = self$n.cores) #' @param sep.meta character Separator for metadata file (default = ",") #' @param raw.meta logical Keep metadata in its raw format. If FALSE, classes will be converted using "type.convert" (default = FALSE) #' @param pal character Plotting palette (default = NULL) #' @return CRMetrics object #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' } initialize = function(data.path = NULL, metadata = NULL, cms = NULL, samples = NULL, unique.names = TRUE, sep.cells = "!!", comp.group = NULL, verbose = TRUE, theme = theme_bw(), n.cores = 1, sep.meta = ",", raw.meta = FALSE, pal = NULL) { if ('CRMetrics' %in% class(data.path)) { # copy constructor for (n in ls(data.path)) { if (!is.function(get(n, data.path))) assign(n, get(n, data.path), self) } return(NULL) } # Check that either data.path or cms is provided if (is.null(data.path) & is.null(cms)) stop("Either 'data.path' or 'cms' must be provided.") # Check that last character is slash if (!is.null(data.path)) { data.path %<>% sapply(\(path) { length.path <- nchar(path) last.char <- path %>% substr(length.path, length.path) if (last.char != "/") paste0(path,"/") else path }) } # Write stuff to object self$n.cores <- as.integer(n.cores) self$data.path <- data.path self$verbose <- verbose self$theme <- theme self$pal <- pal # Metadata if (is.null(metadata)) { if (!is.null(data.path)) { self$metadata <- list.dirs(data.path, recursive = FALSE, full.names = FALSE) %>% .[pathsToList(data.path, .) %>% sapply(\(path) file.exists(paste0(path[2],"/",path[1],"/outs")))] %>% {data.frame(sample = .)} } else { if (is.null(names(cms))) { if (is.null(samples)) { stop("Either `samples` must be provided, or `cms` must be named.") } else { sample.out <- samples } } else { sample.out <- names(cms) } self$metadata <- data.frame(sample = sample.out) %>% arrange(sample) } } else { if (inherits(metadata, "data.frame")) { self$metadata <- metadata %>% arrange(sample) } else { stopifnot(file.exists(metadata)) self$metadata <- read.table(metadata, header = TRUE, colClasses = "character", sep = sep.meta) %>% arrange(sample) } } if (!is.null(metadata)) { if (!raw.meta) self$metadata %<>% lapply(type.convert, as.is = FALSE) %>% bind_cols() } # Add CMs if (!is.null(cms)) { self$addCms(cms = cms, samples = samples, unique.names = unique.names, sep = sep.cells, n.cores = self$n.cores, add.metadata = FALSE, verbose = verbose) } checkCompMeta(comp.group, self$metadata) # Add summary metrics if (is.null(cms)) self$summary.metrics <- addSummaryMetrics(data.path = data.path, metadata = self$metadata, n.cores = self$n.cores, verbose = verbose) }, #' @description Function to read in detailed metrics. This is not done upon initialization for speed. #' @param cms list List of (sparse) count matrices (default = self$cms) #' @param min.transcripts.per.cell numeric Minimal number of transcripts per cell (default = 100) #' @param n.cores integer Number of cores for the calculations (default = self$n.cores). #' @param verbose logical Print messages or not (default = self$verbose). 
#' @return Count matrices #' @examples #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Run function #' crm$addDetailedMetrics() addDetailedMetrics = function(cms = self$cms, min.transcripts.per.cell = 100, n.cores = self$n.cores, verbose = self$verbose) { # Checks if (!is.null(self$detailed.metrics)) stop("Detailed metrics already present. To overwrite, set $detailed.metrics = NULL and rerun this function") if (is.null(cms)) stop("No CMs found, run $addCms first.") size.check <- cms %>% sapply(dim) %>% apply(2, prod) %>% {. > 2^31-1} if (any(size.check)) warning(message(paste0("Unrealistic large samples detected that are larger than what can be handled in R. Consider removing ",paste(size.check[size.check] %>% names(), collapse = " "),". If kept, you may experience errors."))) # Calculate metrics if (min.transcripts.per.cell > 0) cms %<>% lapply(\(cm) cm[,sparseMatrixStats::colSums2(cm) > min.transcripts.per.cell]) self$detailed.metrics <- addDetailedMetricsInner(cms = cms, verbose = verbose, n.cores = n.cores) }, #' @description Add comparison group for statistical testing. #' @param comp.group character Comparison metric (default = self$comp.group). #' @param metadata data.frame Metadata for samples (default = self$metadata). #' @return Vector #' @examples #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Add metadata #' crm$metadata <- data.frame(sex = c("male","female")) #' #' # Add comparison group #' crm$addComparison(comp.group = "sex") addComparison = function(comp.group, metadata = self$metadata) { checkCompMeta(comp.group, metadata) self$comp.group <- comp.group }, #' @description Plot the number of samples. #' @param comp.group character Comparison metric, must match a column name of metadata (default = self$comp.group). #' @param h.adj numeric Position of statistics test p value as % of max(y) (default = 0.05). #' @param exact logical Whether to calculate exact p values (default = FALSE). #' @param metadata data.frame Metadata for samples (default = self$metadata). #' @param second.comp.group character Second comparison metric, must match a column name of metadata (default = NULL). 
#' @param pal character Plotting palette (default = self$pal) #' @return ggplot2 object #' @examples #' samples <- c("sample1", "sample2") #' #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' names(testdata.cms) <- samples #' #' # Create metadata #' metadata <- data.frame(sample = samples, #' sex = c("male","female"), #' condition = c("a","b")) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, metadata = metadata, n.cores = 1) #' #' # Plot #' crm$plotSamples(comp.group = "sex", second.comp.group = "condition") plotSamples = function(comp.group = self$comp.group, h.adj = 0.05, exact = FALSE, metadata = self$metadata, second.comp.group = NULL, pal = self$pal) { comp.group %<>% checkCompGroup("sample", self$verbose) if (!is.null(second.comp.group)) { second.comp.group %<>% checkCompGroup(second.comp.group, self$verbose) } else { second.comp.group <- comp.group } plot.stats <- ifelse(comp.group == "sample", FALSE, TRUE) g <- metadata %>% select(comp.group, second.comp.group) %>% table() %>% data.frame() %>% ggplot(aes(!!sym(comp.group), Freq, fill = !!sym(second.comp.group))) + geom_bar(stat = "identity", position = "dodge") + self$theme + labs(x = comp.group, y = "Freq") + theme(legend.position = "right") if (plot.stats) { g %<>% addPlotStatsSamples(comp.group, metadata, h.adj, exact, second.comp.group) } if (!is.null(pal)) g <- g + scale_fill_manual(values = pal) return(g) }, #' @description Plot all summary stats or a selected list. #' @param comp.group character Comparison metric (default = self$comp.group). #' @param second.comp.group character Second comparison metric, used for the metric "samples per group" or when "comp.group" is a numeric or an integer (default = NULL). #' @param metrics character Metrics to plot (default = NULL). #' @param h.adj numeric Position of statistics test p value as % of max(y) (default = 0.05) #' @param plot.stat logical Show statistics in plot. Will be FALSE if "comp.group" = "sample" or if "comp.group" is a numeric or an integer (default = TRUE) #' @param stat.test character Statistical test to perform to compare means. Can either be "non-parametric" or "parametric" (default = "non-parametric"). #' @param exact logical Whether to calculate exact p values (default = FALSE). #' @param metadata data.frame Metadata for samples (default = self$metadata). #' @param summary.metrics data.frame Summary metrics (default = self$summary.metrics). #' @param plot.geom character Which geometric is used to plot the data (default = "point"). 
#' @param se logical For regression lines, show SE (default = FALSE) #' @param group.reg.lines logical For regression lines, if FALSE show one line, if TRUE show line per group defined by second.comp.group (default = FALSE) #' @param secondary.testing logical Whether to show post hoc testing (default = TRUE) #' @param pal character Plotting palette (default = self$pal) #' @return ggplot2 object #' @examples #' \donttest{ #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Add summary metrics #' crm$addSummaryFromCms() #' #' crm$plotSummaryMetrics(plot.geom = "point") #' } plotSummaryMetrics = function(comp.group = self$comp.group, second.comp.group = NULL, metrics = NULL, h.adj = 0.05, plot.stat = TRUE, stat.test = c("non-parametric","parametric"), exact = FALSE, metadata = self$metadata, summary.metrics = self$summary.metrics, plot.geom = "bar", se = FALSE, group.reg.lines = FALSE, secondary.testing = TRUE, pal = self$pal) { # Checks comp.group %<>% checkCompGroup("sample", self$verbose) if (is.null(plot.geom)) { stop("A plot type needs to be defined, can be one of these: 'point', 'bar', 'histogram', 'violin'.") } stat.test %<>% match.arg(c("non-parametric","parametric")) # if no metrics selected, plot all if (is.null(metrics)) { metrics <- summary.metrics$metric %>% unique() } else { # check if selected metrics are available difs <- setdiff(metrics, summary.metrics$metric %>% unique()) if ("samples per group" %in% difs) difs <- difs[difs != "samples per group"] if (length(difs) > 0) stop(paste0("The following 'metrics' are not valid: ",paste(difs, collapse=" "))) } # if samples per group is one of the metrics to plot use the plotSamples function to plot if ("samples per group" %in% metrics){ sample.plot <- self$plotSamples(comp.group, h.adj, exact, metadata, second.comp.group, pal) metrics <- metrics[metrics != "samples per group"] } # Plot all the other metrics plotList <- metrics %>% lapply(function (met) { tmp <- summary.metrics %>% filter(metric == met) %>% merge(metadata, by = "sample") # Create ggplot object g <- tmp %>% ggplot(aes(!!sym(comp.group), value)) + self$theme # Add geom + palette if (is.null(second.comp.group)) { g %<>% plotGeom(plot.geom, col = comp.group, pal) g <- g + labs(y = met, x = element_blank()) } else { g %<>% plotGeom(plot.geom, col = second.comp.group, pal) g <- g + labs(y = met, x = comp.group) } if (is.numeric(metadata[[comp.group]])) { if (!group.reg.lines) { g <- g + stat_poly_eq(color = "black", aes(label = paste(after_stat(rr.label), after_stat(p.value.label), sep = "*\", \"*"))) + stat_poly_line(color = "black", se = se) } else { g <- g + stat_poly_eq(aes(label = paste(after_stat(rr.label), after_stat(p.value.label), sep = "*\", \"*"), col = !!sym(second.comp.group))) + stat_poly_line(aes(col = !!sym(second.comp.group)), se = se) } } # a legend only makes sense if the comparison is not the samples if (comp.group != "sample") { g <- g + theme(legend.position = "right") } else { plot.stat <- FALSE g <- g + theme(legend.position = "none") } # Statistical testing if (plot.stat & !is.numeric(metadata[[comp.group]])) { if (stat.test == "non-parametric") { primary.test <- "kruskal.test" secondary.test <- "wilcox.test" } else 
{ primary.test <- "anova" secondary.test <- "t.test" } if (length(unique(metadata[[comp.group]])) < 3) { primary.test <- secondary.test secondary.test <- NULL } if (!secondary.testing) secondary.test <- NULL g %<>% addPlotStats(comp.group, metadata, h.adj, primary.test, secondary.test, exact) } g <- g + theme(axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1)) return(g) }) # To return the plots if (exists("sample.plot")) { if (length("plotList") > 0){ return(plot_grid(plotlist = plotList, sample.plot, ncol = min(length(plotList)+1, 3))) } else { return(sample.plot) } } else { if (length(plotList) == 1) { return(plotList[[1]]) } else { return(plot_grid(plotlist = plotList, ncol = min(length(plotList), 3))) } } }, #' @description Plot detailed metrics from the detailed.metrics object #' @param comp.group character Comparison metric (default = self$comp.group). #' @param detailed.metrics data.frame Object containing the count matrices (default = self$detailed.metrics). #' @param metadata data.frame Metadata for samples (default = self$metadata). #' @param metrics character Metrics to plot. NULL plots both plots (default = NULL). #' @param plot.geom character How to plot the data (default = "violin"). #' @param data.path character Path to Cell Ranger count data (default = self$data.path). #' @param hline logical Whether to show median as horizontal line (default = TRUE) #' @param pal character Plotting palette (default = self$pal) #' @return ggplot2 object #' @examples #' \donttest{ #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Add detailed metrics #' crm$addDetailedMetrics() #' #' # Plot #' crm$plotDetailedMetrics() #' } plotDetailedMetrics = function(comp.group = self$comp.group, detailed.metrics = self$detailed.metrics, metadata = self$metadata, metrics = NULL, plot.geom = "violin", hline = TRUE, pal = self$pal){ # Checks if (is.null(detailed.metrics)) stop("'detailed.metrics' not calculated. 
Please run 'addDetailedMetrics()'.") comp.group %<>% checkCompGroup("sample", self$verbose) if (is.null(metrics)) { metrics <- c("UMI_count","gene_count") } # check if selected metrics are available difs <- setdiff(metrics, self$detailed.metrics$metric %>% unique()) if (length(difs) > 0) stop(paste0("The following 'metrics' are not valid: ",paste(difs, collapse=" "))) # if no plot type is defined, return a list of options if (is.null(plot.geom)) { stop("A plot type needs to be defined, can be one of these: 'point', 'bar', 'histogram', 'violin'.") } # Plot all the other metrics plotList <- metrics %>% lapply(function (met) { tmp <- detailed.metrics %>% filter(metric == met) %>% merge(metadata, by = "sample") g <- ggplot(tmp, aes(x = sample, y = value)) g %<>% plotGeom(plot.geom, comp.group, pal) g <- g + {if (plot.geom == "violin") scale_y_log10()} + {if (hline) geom_hline(yintercept = median(tmp$value))} + labs(y = met, x = element_blank()) + self$theme # a legend only makes sense if the comparison is not the samples if (comp.group != "sample") { g <- g + theme(legend.position = "right") } else { g <- g + theme(legend.position = "none") } g <- g + theme(axis.text.x = element_text( angle = 45, vjust = 1, hjust = 1 )) return(g) }) # To return the plots if (length(plotList) == 1) { return(plotList[[1]]) } else { return(plot_grid(plotlist = plotList, ncol = min(length(plotList), 3))) } }, #' @description Plot cells in embedding using Conos and color by depth and doublets. #' @param depth logical Plot depth or not (default = FALSE). #' @param doublet.method character Doublet detection method (default = NULL). #' @param doublet.scores logical Plot doublet scores or not (default = FALSE). #' @param depth.cutoff numeric Depth cutoff (default = 1e3). #' @param mito.frac logical Plot mitochondrial fraction or not (default = FALSE). #' @param mito.cutoff numeric Mitochondrial fraction cutoff (default = 0.05). #' @param species character Species to calculate the mitochondrial fraction for (default = c("human","mouse")). #' @param size numeric Dot size (default = 0.3) #' @param sep character Separator for creating unique cell names (default = "!!") #' @param pal character Plotting palette (default = NULL) #' @param ... Plotting parameters passed to `sccore::embeddingPlot`. #' @return ggplot2 object #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' if (requireNamespace("conos", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Create embedding #' crm$doPreprocessing() #' crm$createEmbedding() #' #' crm$plotEmbedding() #' } else { #' message("Package 'conos' not available.") #' } #' } else { #' message("Package 'pagoda2' not available.") #' } #' } plotEmbedding = function(depth = FALSE, doublet.method = NULL, doublet.scores = FALSE, depth.cutoff = 1e3, mito.frac = FALSE, mito.cutoff = 0.05, species = c("human","mouse"), size = 0.3, sep = "!!", pal = NULL, ...) { checkPackageInstalled("conos", cran = TRUE) if (sum(depth, mito.frac, !is.null(doublet.method)) > 1) stop("Only one filter allowed. 
For multiple filters, use plotFilteredCells(type = 'embedding').") species %<>% tolower() %>% match.arg(c("human","mouse")) # Check for existing Conos object and preprocessed data if (is.null(self$con)) { if (self$verbose) stop("No embedding found, please run createEmbedding.") } # Depth if (depth) { depths <- self$getDepth() %>% filterVector("depth.cutoff", depth.cutoff, self$con$samples %>% names(), sep) if (length(depth.cutoff) > 1) { main <- "Cells with low depth with sample-specific cutoff" } else { main <- paste0("Cells with low depth, < ",depth.cutoff) } g <- self$con$plotGraph(colors = (!depths) * 1, title = main, size = size, ...) } # Doublets if (!is.null(doublet.method)) { dres <- self$doublets[[doublet.method]]$result if (is.null(dres)) stop("No results found for doublet.method '",doublet.method,"'. Please run doubletDetection(method = '",doublet.method,"'.") if (doublet.scores) { doublets <- dres$scores label <- "scores" } else { doublets <- dres$labels * 1 label <- "labels" } doublets %<>% setNames(rownames(dres)) g <- self$con$plotGraph(colors = doublets, title = paste(doublet.method,label, collapse = " "), size = size, palette = pal, ...) } # Mitochondrial fraction if (mito.frac) { mf <- self$getMitoFraction(species = species) %>% filterVector("mito.cutoff", mito.cutoff, self$con$samples %>% names(), sep) if (length(mito.cutoff) > 1) { main <- "Cells with low mito. frac with sample-specific cutoff" } else { main <- paste0("Cells with high mito. fraction, > ",mito.cutoff*100,"%") } g <- self$con$plotGraph(colors = mf * 1, title = main, size = size, ...) } if (!exists("g")) g <- self$con$plotGraph(palette = pal, size = size, ...) return(g) }, #' @description Plot the sequencing depth in histogram. #' @param cutoff numeric The depth cutoff to color the cells in the embedding (default = 1e3). #' @param samples character Sample names to include for plotting (default = $metadata$sample). #' @param sep character Separator for creating unique cell names (default = "!!") #' @param keep.col character Color for density of cells that are kept (default = "#E7CDC2") #' @param filter.col Character Color for density of cells to be filtered (default = "#A65141") #' @return ggplot2 object #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' if (requireNamespace("conos", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Create embedding #' crm$doPreprocessing() #' crm$createEmbedding() #' #' # Plot #' crm$plotDepth() #' } else { #' message("Package 'conos' not available.") #' } #' } else { #' message("Package 'pagoda2' not available.") #' } #' } plotDepth = function(cutoff = 1e3, samples = self$metadata$sample, sep = "!!", keep.col = "#E7CDC2", filter.col = "#A65141"){ # Checks checkPackageInstalled("conos", cran = TRUE) if (is.null(self$con)) { stop("No Conos object found, please run createEmbedding.") } if (length(cutoff) > 1 & length(self$con$samples) != length(cutoff)) stop(paste0("'cutoff' has a length of ",length(cutoff),", but the conos object contains ",length(tmp)," samples. 
Please adjust.")) depths <- self$getDepth() # Preparations tmp <- depths %>% {data.frame(depth = unname(.), sample = names(.))} %>% mutate(sample = sample %>% strsplit(sep, TRUE) %>% sapply(`[[`, 1)) %>% split(., .$sample) %>% .[samples] %>% lapply(\(z) with(density(z$depth, adjust = 1/10), data.frame(x,y))) %>% {lapply(names(.), \(x) data.frame(.[[x]], sample = x))} %>% bind_rows() ncol.plot <- samples %>% length() %>% pmin(3) # Plot depth.plot <- tmp %>% pull(sample) %>% unique() %>% lapply(\(id) { tmp.plot <- tmp %>% filter(sample == id) xmax <- tmp.plot$x %>% max() %>% pmin(2e4) g <- ggplot(tmp.plot, aes(x,y)) + self$theme + geom_line() + xlim(0,xmax) + theme(legend.position = "none", axis.text.x = element_text(angle = 45, hjust = 1), plot.margin = unit(c(0, 0, 0, 0.5), "cm")) + labs(title = id, y = "Density [AU]", x = "") if (length(cutoff) == 1) { plot.cutoff <- cutoff } else { plot.cutoff <- cutoff[names(cutoff) == id] } if (all(tmp.plot$x < plot.cutoff)) { g <- g + geom_area(fill = filter.col) } else { g <- g + geom_area(fill = filter.col) + geom_area(data = tmp.plot %>% filter(x > plot.cutoff), aes(x), fill = keep.col) } return(g) }) %>% plot_grid(plotlist = ., ncol = ncol.plot, label_size = 5) return(depth.plot) }, #' @description Plot the mitochondrial fraction in histogram. #' @param cutoff numeric The mito. fraction cutoff to color the embedding (default = 0.05) #' @param species character Species to calculate the mitochondrial fraction for (default = "human") #' @param samples character Sample names to include for plotting (default = $metadata$sample) #' @param sep character Separator for creating unique cell names (default = "!!") #' @param keep.col character Color for density of cells that are kept (default = "#E7CDC2") #' @param filter.col Character Color for density of cells to be filtered (default = "#A65141") #' @return ggplot2 object #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' if (requireNamespace("conos", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Create embedding #' crm$doPreprocessing() #' crm$createEmbedding() #' #' # Plot #' crm$plotMitoFraction() #' } else { #' message("Package 'conos' not available.") #' } #' } else { #' message("Package 'pagoda2' not available.") #' } #' } plotMitoFraction = function(cutoff = 0.05, species = c("human","mouse"), samples = self$metadata$sample, sep = "!!", keep.col = "#E7CDC2", filter.col = "#A65141"){ # Checks checkPackageInstalled("conos", cran = TRUE) if (is.null(self$con)) { stop("No Conos object found, please run createEmbedding.") } if (length(cutoff) > 1 & length(self$con$samples) != length(cutoff)) stop(paste0("'cutoff' has a length of ",length(cutoff),", but the conos object contains ",length(tmp)," samples. Please adjust.")) mf <- self$getMitoFraction(species = species) mf.zero <- sum(mf == 0) / length(mf) * 100 if (mf.zero > 95) warning(paste0(mf.zero,"% of all cells do not express mitochondrial genes. 
Plotting may behave unexpected.")) # Preparations tmp <- mf %>% {data.frame(mito.frac = unname(.), sample = names(.))} %>% mutate(sample = sample %>% strsplit(sep, TRUE) %>% sapply(`[[`, 1)) %>% split(., .$sample) %>% .[samples] %>% lapply(\(z) with(density(z$mito.frac, adjust = 1/10), data.frame(x,y))) %>% {lapply(names(.), \(x) data.frame(.[[x]], sample = x))} %>% bind_rows() ncol.plot <- samples %>% length() %>% pmin(3) # Plot mf.plot <- tmp %>% pull(sample) %>% unique() %>% lapply(\(id) { tmp.plot <- tmp %>% filter(sample == id) g <- ggplot(tmp.plot, aes(x,y)) + self$theme + geom_line() + theme(legend.position = "none", axis.text.x = element_text(angle = 45, hjust = 1), plot.margin = unit(c(0, 0, 0, 0.5), "cm")) + labs(title = id, y = "Density [AU]", x = "") if (length(cutoff) == 1) { plot.cutoff <- cutoff } else { plot.cutoff <- cutoff[names(cutoff) == id] } if (all(tmp.plot$x < plot.cutoff)) { g <- g + geom_area(fill = filter.col) } else { g <- g + geom_area(fill = filter.col) + geom_area(data = tmp.plot %>% filter(x < plot.cutoff), aes(x), fill = keep.col) } return(g) }) %>% plot_grid(plotlist = ., ncol = ncol.plot, label_size = 5) return(mf.plot) }, #' @description Detect doublet cells. #' @param method character Which method to use, either `scrublet` or `doubletdetection` (default="scrublet"). #' @param cms list List containing the count matrices (default=self$cms). #' @param samples character Vector of sample names. If NULL, samples are extracted from cms (default = self$metadata$sample) #' @param env character Environment to run python in (default="r-reticulate"). #' @param conda.path character Path to conda environment (default=system("whereis conda")). #' @param n.cores integer Number of cores to use (default = self$n.cores) #' @param verbose logical Print messages or not (default = self$verbose) #' @param args list A list with additional arguments for either `DoubletDetection` or `scrublet`. Please check the respective manuals. #' @param export boolean Export CMs in order to detect doublets outside R (default = FALSE) #' @param data.path character Path to write data, only relevant if `export = TRUE`. 
Last character must be `/` (default = self$data.path) #' @return data.frame #' @examples #' \dontrun{ #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' #' # Detect doublets #' crm$detectDoublets(method = "scrublet", #' conda.path = "/opt/software/miniconda/4.12.0/condabin/conda") #' } detectDoublets = function(method = c("scrublet","doubletdetection"), cms = self$cms, samples = self$metadata$sample, env = "r-reticulate", conda.path = system("whereis conda"), n.cores = self$n.cores, verbose = self$verbose, args = list(), export = FALSE, data.path = self$data.path) { # Checks method %<>% tolower() %>% match.arg(c("scrublet","doubletdetection")) if (!is.list(args)) stop("'args' must be a list.") if (!inherits(cms, "list")) stop("'cms' must be a list") if (!all(sapply(cms, inherits, "Matrix"))) { warning("All samples in 'cms' must be a matrix, trying to convert to dgCMatrix...") cms %<>% lapply(as, "CsparseMatrix") if (!all(sapply(cms, inherits, "Matrix"))) stop("Could not convert automatically.") } if (length(data.path) > 1) data.path <- data.path[1] if (export && is.null(data.path)) stop("When 'export = TRUE', 'data.path' must be provided.") # Prepare arguments if (method == "doubletdetection") { args.std <- list(boost_rate = 0.25, clustering_algorithm = "phenograph", clustering_kwargs = NULL, n_components = 30, n_iters = 10, n_jobs = n.cores, n_top_var_genes = 10000, normalizer = NULL, pseudocount = 0.1, random_state = 0, replace = FALSE, standard_scaling = FALSE, p_thresh = 1e-7, voter_thresh = 0.9) ints <- c("n_components","n_iters","n_jobs","n_top_var_genes","random_state") } else { args.std <- list(total_counts = NULL, sim_doublet_ratio = 2.0, n_neighbors = NULL, expected_doublet_rate = 0.1, stdev_doublet_rate = 0.02, random_state = 0, synthetic_doublet_umi_subsampling = 1.0, use_approx_neighbors = TRUE, distance_metric = "euclidean", get_doublet_neighbor_parents = FALSE, min_counts = 3, min_cells = 3, min_gene_variability_pctl = 85, log_transform = FALSE, mean_center = TRUE, normalize_variance = TRUE, n_prin_comps = 30, svd_solver = "arpack") ints <- c("random_state","min_cells","n_prin_comps") } # Update arguments based on input if (length(args) > 0) { diff <- setdiff(args %>% names(), args.std %>% names()) if (length(diff) > 0) stop(paste0("Argument(s) not recognized: ",paste(diff, collapse = " "),". 
Please update 'args' and try again.")) for (i in names(args)) { args.std[[i]] <- args[[i]] } } # Ensure integers for (i in ints) { args.std[[i]] <- as.integer(args.std[[i]]) } if (!export) { # Prep environment if (verbose) message(paste0(Sys.time()," Loading prerequisites...")) checkPackageInstalled("reticulate", cran = TRUE) reticulate::use_condaenv(condaenv = env, conda = conda.path, required = TRUE) if (!reticulate::py_module_available(method)) stop(paste0("'",method,"' is not installed in your current conda environment.")) reticulate::source_python(paste(system.file(package="CRMetrics"), paste0(method,".py"), sep ="/")) if (verbose) message(paste0(Sys.time()," Identifying doublets using '",method,"'...")) # Calculate tmp <- cms %>% names() %>% lapply(\(cm) { if (verbose) message(paste0(Sys.time()," Running sample '",cm,"'...")) args.out <- list(cm = Matrix::t(cms[[cm]])) %>% append(args.std) if (method == "doubletdetection") { tmp.out <- do.call("doubletdetection_py", args.out) } else { tmp.out <- do.call("scrublet_py", args.out) } tmp.out %<>% setNames(c("labels","scores","output")) }) %>% setNames(cms %>% names()) df <- tmp %>% names() %>% lapply(\(name) { tmp[[name]] %>% .[c("labels","scores")] %>% bind_rows() %>% as.data.frame() %>% mutate(sample = name) %>% `rownames<-`(cms[[name]] %>% colnames()) }) %>% bind_rows() df[is.na(df)] <- FALSE df %<>% mutate(labels = as.logical(labels)) output <- tmp %>% lapply(`[[`, 3) %>% setNames(tmp %>% names()) res <- list(result = df, output = output) if (verbose) message(paste0(Sys.time()," Detected ",sum(df$labels, na.rm = TRUE)," possible doublets out of ",nrow(df)," cells.")) self$doublets[[method]] <- res } else { # Check for existing files files <- setdiff(samples %>% sapply(paste0, ".mtx"), dir(data.path)) %>% strsplit(".mtx",) %>% sapply('[[', 1) diff <- length(samples) - length(files) # Save data if (verbose) message(paste0(Sys.time()," Saving ",length(cms)," CMs")) if (diff > 0) message("Existing save files already found, skipping ",diff," samples: ",paste(c("",setdiff(samples, files)), collapse = "\n")) for (i in files) { cms[[i]] %>% Matrix::t() %>% Matrix::writeMM(paste0(data.path,i,".mtx")) } if (verbose) message(paste0(Sys.time()," Done! Python script is saved as ",data.path,toupper(method),".py")) # Create Python script args.std %<>% `names<-`(sapply(names(.), paste0, "X")) %>% lapply(\(x) { if (is.null(x)) return("None") if (is.logical(x) && x) return("True") if (is.logical(x) && !x) return("False") if (is.character(x)) return(paste0('"',x,'"')) return(x) }) %>% append(list(data.path = data.path, method = method)) tmp <- readLines(paste(system.file(package="CRMetrics"), paste0(method,"_manual.py"), sep ="/")) for (i in names(args.std)) { tmp %<>% gsub(pattern = i, replace = args.std[i], x = .) } writeLines(tmp, con=paste0(data.path,toupper(method),".py")) } }, #' @description Perform conos preprocessing. #' @param cms list List containing the count matrices (default = self$cms). #' @param preprocess character Method to use for preprocessing (default = c("pagoda2","seurat")). #' @param min.transcripts.per.cell numeric Minimal transcripts per cell (default = 100) #' @param verbose logical Print messages or not (default = self$verbose). #' @param n.cores integer Number of cores for the calculations (default = self$n.cores). 
#' @param get.largevis logical For Pagoda2, create largeVis embedding (default = FALSE) #' @param tsne logical Create tSNE embedding (default = FALSE) #' @param make.geneknn logical For Pagoda2, estimate gene kNN (default = FALSE) #' @param cluster logical For Seurat, estimate clusters (default = FALSE) #' @param ... Additional arguments for `Pagaoda2::basicP2Proc` or `conos:::basicSeuratProc` #' @return Conos object #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Perform preprocessing #' crm$doPreprocessing(preprocess = "pagoda2") #' } else { #' message("Package 'pagoda2' not available.") #' } #' } doPreprocessing = function(cms = self$cms, preprocess = c("pagoda2","seurat"), min.transcripts.per.cell = 100, verbose = self$verbose, n.cores = self$n.cores, get.largevis = FALSE, tsne = FALSE, make.geneknn = FALSE, cluster = FALSE, ...) { preprocess %<>% tolower() %>% match.arg(c("pagoda2","seurat")) if (is.null(cms)) { stop("No count matrices found, please add them using addDetailedMetrics or addCms.") } if (preprocess == "pagoda2") { if (verbose) message('Running preprocessing using pagoda2...') checkPackageInstalled("pagoda2", cran = TRUE) tmp <- lapply( cms, pagoda2::basicP2proc, get.largevis = FALSE, get.tsne = FALSE, make.geneknn = FALSE, min.transcripts.per.cell = min.transcripts.per.cell, n.cores = n.cores, ...) } else if (preprocess == "seurat") { if (verbose) message('Running preprocessing using Seurat...') checkPackageInstalled("conos", cran = TRUE) tmp <- lapply( cms, conos::basicSeuratProc, do.par = (n.cores > 1), tsne = FALSE, cluster = FALSE, verbose = FALSE, ...) } if (verbose) message('Preprocessing done!\n') self$cms.preprocessed <- tmp invisible(tmp) }, #' @description Create Conos embedding. #' @param cms list List containing the preprocessed count matrices (default = self$cms.preprocessed). #' @param verbose logical Print messages or not (default = self$verbose). #' @param n.cores integer Number of cores for the calculations (default = self$n.cores). 
#' @param arg.buildGraph list A list with additional arguments for the `buildGraph` function in Conos (default = list()) #' @param arg.findCommunities list A list with additional arguments for the `findCommunities` function in Conos (default = list()) #' @param arg.embedGraph list A list with additional arguments for the `embedGraph` function in Conos (default = list(method = "UMAP)) #' @return Conos object #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' if (requireNamespace("conos", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Create embedding #' crm$doPreprocessing() #' crm$createEmbedding() #' } else { #' message("Package 'conos' not available.") #' } #' } else { #' message("Package 'pagoda2' not available.") #' } #' } createEmbedding = function(cms = self$cms.preprocessed, verbose = self$verbose, n.cores = self$n.cores, arg.buildGraph = list(), arg.findCommunities = list(), arg.embedGraph = list(method = "UMAP")) { checkPackageInstalled("conos", cran = TRUE) if (is.null(cms)) { stop("No preprocessed count matrices found, please run doPreprocessing.") } if (verbose) message('Creating Conos object... ') con <- conos::Conos$new(cms, n.cores = n.cores) if (verbose) message('Building graph... ') do.call(con$buildGraph, arg.buildGraph) if (verbose) message('Finding communities... ') do.call(con$findCommunities, arg.findCommunities) if (verbose) message('Creating embedding... ') do.call(con$embedGraph, arg.embedGraph) self$con <- con invisible(con) }, #' @description Filter cells based on depth, mitochondrial fraction and doublets from the count matrix. #' @param depth.cutoff numeric Depth (transcripts per cell) cutoff (default = NULL). #' @param mito.cutoff numeric Mitochondrial fraction cutoff (default = NULL). #' @param doublets character Doublet detection method to use (default = NULL). #' @param species character Species to calculate the mitochondrial fraction for (default = "human"). #' @param samples.to.exclude character Sample names to exclude (default = NULL) #' @param verbose logical Show progress (default = self$verbose) #' @param sep character Separator for creating unique cell names (default = "!!") #' @param raw boolean Filter on raw, unfiltered count matrices. 
Usually not intended (default = FALSE) #' @return list of filtered count matrices #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' if (requireNamespace("conos", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Create embedding #' crm$doPreprocessing() #' crm$createEmbedding() #' #' #' # Filter CMs #' crm$filterCms(depth.cutoff = 1e3, mito.cutoff = 0.05) #' } else { #' message("Package 'conos' not available.") #' } #' } else { #' message("Package 'pagoda2' not available.") #' } #' } filterCms = function(depth.cutoff = NULL, mito.cutoff = NULL, doublets = NULL, species = c("human","mouse"), samples.to.exclude = NULL, verbose = self$verbose, sep = "!!", raw = FALSE) { if (verbose) { filters <- c() if (!is.null(depth.cutoff)) filters %<>% c(paste0("depth.cutoff = ",depth.cutoff)) if (!is.null(mito.cutoff)) filters %<>% c(paste0("mito.cutoff = ",mito.cutoff," and species = ",species)) if (!is.null(doublets)) filters %<>% c(paste0("doublet method = ",doublets)) message(paste0("Filtering based on: ",paste(filters, collapse="; "))) } # Preparations species %<>% tolower() %>% match.arg(c("human","mouse")) # Extract CMs if (!raw) cms <- self$cms else cms <- self$cms.raw if (is.null(cms)) stop(if (raw) "$cms.raw" else "$cms"," is NULL. filterCms depends on this object. Aborting") # Exclude samples if (!is.null(samples.to.exclude)) { if (!((samples.to.exclude %in% names(cms)) %>% all())) stop("Not all 'samples.to.exclude' found in names of ",if (raw) "self$cms.raw" else "self$cms. Please check and try again.") if (verbose) message(paste0("Excluding sample(s) ",paste(samples.to.exclude, sep = "\t"))) cms %<>% .[setdiff(names(.), samples.to.exclude)] } if (verbose) message(paste0(Sys.time()," Preparing filter")) # Extract sample names samples <- cms %>% names() # Depth if (!is.null(depth.cutoff)) { depth.filter <- self$getDepth() %>% filterVector("depth.cutoff", depth.cutoff, samples, sep) } else { depth.filter <- NULL } # Mitochondrial fraction if (!is.null(mito.cutoff)) { mito.filter <- self$getMitoFraction(species = species) %>% filterVector("mito.cutoff", mito.cutoff, samples, sep) %>% !. # NB, has to be negative } else { mito.filter <- NULL } # Doublets if (!is.null(doublets)) { if (is.null(self$doublets[[doublets]])) stop("Results for doublet detection method '",doublets,"' not found. Please run detectDoublets(method = '",doublets,"'.") doublets.filter <- self$doublets[[doublets]]$result %>% mutate(labels = replace_na(labels, FALSE)) %>% {setNames(!.$labels, rownames(.))} } else { doublets.filter <- NULL } # Get cell index cell.idx <- list(names(depth.filter), names(mito.filter), names(doublets.filter)) %>% .[!sapply(., is.null)] %>% Reduce(intersect, .) 
# Create split vector split.vec <- strsplit(cell.idx, sep) %>% sapply('[[', 1) # Filter filter.list <- list(depth = depth.filter, mito = mito.filter, doublets = doublets.filter) %>% .[!sapply(., is.null)] %>% lapply(\(filter) filter[cell.idx]) %>% # Ensure same order of cells bind_cols() %>% apply(1, all) %>% split(split.vec) if (verbose) { cells.total <- cms %>% sapply(ncol) %>% sum() cells.remove <- sum(!filter.list %>% unlist()) if (!any(is.null(depth.filter), is.null(mito.filter))) cells.remove <- cells.remove + cells.total - nrow(self$con$embedding) cells.percent <- cells.remove / cells.total * 100 message(paste0(Sys.time()," Removing ",cells.remove," of ", cells.total," cells (",formatC(cells.percent, digits = 3),"%)")) } self$cms.filtered <- samples %>% lapply(\(sample) { cms[[sample]][,filter.list[[sample]]] }) %>% setNames(samples) }, #' @description Select metrics from summary.metrics #' @param ids character Metric id to select (default = NULL). #' @return vector #' @examples #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Select metrics #' crm$selectMetrics() #' selection.metrics <- crm$selectMetrics(c(1:4)) selectMetrics = function(ids = NULL) { metrics <- self$summary.metrics$metric %>% unique() if (is.null(ids)) tmp <- data.frame(no = seq_len(length(metrics)), metrics = metrics) else tmp <- metrics[ids] return(tmp) }, #' @description Plot filtered cells in an embedding, in a bar plot, on a tile or export the data frame #' @param type character The type of plot to use: embedding, bar, tile or export (default = c("embedding","bar","tile","export")). #' @param depth logical Plot the depth or not (default = TRUE). #' @param depth.cutoff numeric Depth cutoff, either a single number or a vector with cutoff per sample and with sampleIDs as names (default = 1e3). #' @param doublet.method character Method to detect doublets (default = NULL). #' @param mito.frac logical Plot the mitochondrial fraction or not (default = TRUE). #' @param mito.cutoff numeric Mitochondrial fraction cutoff, either a single number or a vector with cutoff per sample and with sampleIDs as names (default = 0.05). #' @param species character Species to calculate the mitochondrial fraction for (default = c("human","mouse")). #' @param size numeric Dot size (default = 0.3) #' @param sep character Separator for creating unique cell names (default = "!!") #' @param cols character Colors used for plotting (default = c("grey80","red","blue","green","yellow","black","pink","purple")) #' @param ... Plotting parameters passed to `sccore::embeddingPlot`. 
#' @return ggplot2 object or data frame #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' if (requireNamespace("conos", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Create embedding #' crm$doPreprocessing() #' crm$createEmbedding() #' #' # Plot and extract result #' crm$plotFilteredCells(type = "embedding") #' filtered.cells <- crm$plotFilteredCells(type = "export") #' } else { #' message("Package 'conos' not available.") #' } #' } else { #' message("Package 'pagoda2' not available.") #' } #' } plotFilteredCells = function(type = c("embedding","bar","tile","export"), depth = TRUE, depth.cutoff = 1e3, doublet.method = NULL, mito.frac = TRUE, mito.cutoff = 0.05, species = c("human","mouse"), size = 0.3, sep = "!!", cols = c("grey80","red","blue","green","yellow","black","pink","purple"), ...) { type %<>% tolower() %>% match.arg(c("embedding","bar","tile","export")) if (mito.frac) species %<>% tolower() %>% match.arg(c("human","mouse")) # Prepare data if (depth) { depths <- self$getDepth() %>% filterVector("depth.cutoff", depth.cutoff, depth.cutoff %>% names(), sep) %>% {ifelse(!., "depth", "")} } else { depths <- NULL } if (mito.frac) { mf <- self$getMitoFraction(species = species) %>% filterVector("mito.cutoff", mito.cutoff, mito.cutoff %>% names(), sep) %>% {ifelse(., "mito", "")} } else { mf <- NULL } if (!is.null(doublet.method)) { tmp.doublets <- self$doublets[[doublet.method]]$result doublets <- tmp.doublets$labels %>% ifelse("doublet","") %>% setNames(rownames(tmp.doublets)) } else { doublets <- NULL } # Get cell index cell.idx <- crm$cms %>% lapply(colnames) %>% Reduce(c, .) # Create data.frame tmp <- list(depth = depths, mito = mf, doublets = doublets) %>% .[!sapply(., is.null)] %>% lapply(\(filter) filter[cell.idx]) %>% # Ensure same order of cells bind_cols() %>% as.data.frame() %>% `rownames<-`(cell.idx) if (type == "embedding" || type == "bar") { tmp %<>% mutate(., filter = apply(., 1, paste, collapse=" ")) %>% mutate(filter = gsub('^\\s+|\\s+$', '', filter) %>% gsub(" ", " ", ., fixed = TRUE) %>% gsub(" ", "+", .)) tmp$filter[tmp$filter == ""] <- "kept" tmp$filter %<>% factor() if ("kept" %in% levels(tmp$filter)) { tmp$filter %<>% relevel(ref = "kept") colstart <- 1 } else { colstart <- 2 } } else { tmp %<>% apply(2, \(x) x != "") %>% {data.frame(. * 1)} %>% mutate(., sample = rownames(.) %>% strsplit(sep, TRUE) %>% sapply(`[[`, 1), cell = rownames(.)) %>% pivot_longer(cols = -c(sample,cell), names_to = "variable", values_to = "value") } # Embedding plot if (type == "embedding"){ g <- self$con$plotGraph(groups = tmp$filter %>% setNames(rownames(tmp)), mark.groups = FALSE, show.legend = TRUE, shuffle.colors = TRUE, title = "Cells to filter", size = size, ...) + scale_color_manual(values = cols[colstart:(tmp$filter %>% levels() %>% length())]) } # Bar plot if (type == "bar") { g <- tmp %>% mutate(., sample = rownames(.) 
%>% strsplit(sep) %>% sapply('[[', 1), filter = ifelse(grepl("+", filter, fixed = TRUE), "combination", as.character(filter))) %>% group_by(sample,filter) %>% dplyr::count() %>% ungroup() %>% group_by(sample) %>% mutate(pct = n/sum(n)*100) %>% ungroup() %>% filter(filter != "kept") %>% ggplot(aes(sample, pct, fill = filter)) + geom_bar(stat = "identity") + geom_text_repel(aes(label = sprintf("%0.2f", round(pct, digits = 2))), position = position_stack(vjust = 0.5), direction = "y", size = 2.5) + self$theme + theme(axis.text.x = element_text(angle = 45, hjust = 1)) + labs(x = "", y = "Percentage cells filtered") } else if (type == "tile") { # Tile plot tmp.plot <- labelsFilter(tmp) if ("mito" %in% tmp.plot$fraction) { tmp.plot %<>% mutate(., fraction = gsub("mito", "mito.fraction", .$fraction)) } g <- tmp.plot %>% ggplot(aes(fraction, sample, fill = value)) + geom_tile(aes(width = 0.7, height = 0.7), color = "black", size = 0.5) + scale_fill_manual(values = c("green", "orange", "red")) + self$theme + labs(x = "", y = "", fill = "") } else if (type == "export") { g <- tmp } return(g) }, #' @description Extract sequencing depth from Conos object. #' @param cms list List of (sparse) count matrices (default = self$cms) #' @return data frame #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' if (requireNamespace("conos", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Create embedding #' crm$doPreprocessing() #' crm$createEmbedding() #' #' # Get depth #' crm$getDepth() #' } else { #' message("Package 'conos' not available.") #' } #' } else { #' message("Package 'pagoda2' not available.") #' } #' } getDepth = function(cms = self$cms) { cms %>% lapply(\(cm) `names<-`(sparseMatrixStats::colSums2(cm), colnames(cm))) %>% Reduce(c, .) }, #' @description Calculate the fraction of mitochondrial genes. #' @param species character Species to calculate the mitochondrial fraction for (default = "human"). #' @param cms list List of (sparse) count matrices (default = self$cms) #' @return data frame #' @examples #' \donttest{ #' if (requireNamespace("pagoda2", quietly = TRUE)) { #' if (requireNamespace("conos", quietly = TRUE)) { #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Create embedding #' crm$doPreprocessing() #' crm$createEmbedding() #' #' # Get mito. 
fraction #' crm$getMitoFraction(species = c("human", "mouse")) #' } else { #' message("Package 'conos' not available.") #' } #' } else { #' message("Package 'pagoda2' not available.") #' } #' } getMitoFraction = function(species = c("human", "mouse"), cms = self$cms) { # Checks species %<>% match.arg(c("human", "mouse")) if (is.null(cms)) stop("Cms is NULL, aborting.") if (species=="human") symb <- "MT-" else if (species=="mouse") symb <- "mt-" else stop("Species must either be 'human' or 'mouse'.") # Calculate tmp <- cms %>% lapply(\(cm) { tmp.mat <- cm[grep(symb, rownames(cm)),] if (inherits(tmp.mat, "numeric")) { nom <- tmp.mat } else { nom <- sparseMatrixStats::colSums2(tmp.mat) } out <- (nom / sparseMatrixStats::colSums2(cm)) %>% `names<-`(colnames(cm)) out[is.na(out)] <- 0 return(out) }) %>% Reduce(c, .) return(tmp) }, #' @description Create plots and script call for CellBender #' @param shrinkage integer Select every nth UMI count per cell for plotting. Improves plotting speed drastically. To plot all cells, set to 1 (default = 100) #' @param show.expected.cells logical Plot line depicting expected number of cells (default = TRUE) #' @param show.total.droplets logical Plot line depicting total droplets included for CellBender run (default = TRUE) #' @param expected.cells named numeric If NULL, expected cells will be deduced from the number of cells per sample identified by Cell Ranger. Otherwise, a named vector of expected cells with sample IDs as names. Sample IDs must match those in summary.metrics (default: stored named vector) #' @param total.droplets named numeric If NULL, total droplets included will be deduced from expected cells multiplied by 3. Otherwise, a named vector of total droplets included with sample IDs as names. Sample IDs must match those in summary.metrics (default: stored named vector) #' @param cms.raw list Raw count matrices from HDF5 Cell Ranger outputs (default = self$cms.raw) #' @param umi.counts list UMI counts calculated as column sums of raw count matrices from HDF5 Cell Ranger outputs (default: stored list) #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @param verbose logical Show progress (default: stored vector) #' @param n.cores integer Number of cores (default: stored vector) #' @param unique.names logical Create unique cell names (default = FALSE) #' @param sep character Separator for creating unique cell names (default = "!!") #' @return ggplot2 object and bash script #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data") #' crm$prepareCellbender() #' } prepareCellbender = function(shrinkage = 100, show.expected.cells = TRUE, show.total.droplets = TRUE, expected.cells = NULL, total.droplets = NULL, cms.raw = self$cms.raw, umi.counts = self$cellbender$umi.counts, data.path = self$data.path, samples = self$metadata$sample, verbose = self$verbose, n.cores = self$n.cores, unique.names = FALSE, sep = "!!") { checkPackageInstalled("sparseMatrixStats", bioc = TRUE) # Preparations if (verbose) message(paste0(Sys.time()," Started run using ", if (n.cores < length(samples)) n.cores else length(samples)," cores")) if (is.null(expected.cells)) expected.cells <- self$getExpectedCells(samples) if (is.null(total.droplets)) total.droplets <- self$getTotalDroplets(samples) # Read CMs from HDF5 files if (!is.null(cms.raw)) { if (verbose) message(paste0(Sys.time()," Using stored HDF5 Cell Ranger outputs. 
To overwrite, set $cms.raw <- NULL")) } else { if (verbose) message(paste0(Sys.time()," Loading HDF5 Cell Ranger outputs")) checkDataPath(data.path) cms.raw <- read10xH5(data.path, samples, "raw", n.cores = n.cores, verbose = verbose, unique.names = unique.names, sep = sep) self$cms.raw <- cms.raw } # Get UMI counts if (!is.null(umi.counts)) { if (verbose) message(paste0(Sys.time()," Using stored UMI counts calculations. To overwrite, set $cellbender$umi.counts <- NULL")) } else { if (verbose) message(paste0(Sys.time()," Calculating UMI counts per sample")) umi.counts <- cms.raw[samples] %>% plapply(\(cm) { sparseMatrixStats::colSums2(cm) %>% sort(decreasing = TRUE) %>% {data.frame(y = .)} %>% filter(y > 0) %>% mutate(., x = seq_len(nrow(.))) }, n.cores = n.cores) %>% setNames(samples) self$cellbender$umi.counts <- umi.counts } # Create plot if (verbose) message(paste0(Sys.time()," Plotting")) data.df <- umi.counts[samples] %>% names() %>% lapply(\(sample) { umi.counts[[sample]] %>% mutate(sample = sample) %>% .[seq(1, nrow(.), shrinkage),] }) %>% bind_rows() line.df <- expected.cells %>% {data.frame(sample = names(.), exp = .)} %>% mutate(total = total.droplets %>% unname()) g <- ggplot(data.df, aes(x, y)) + geom_line(color = "red") + scale_x_log10(labels = scales::comma) + scale_y_log10(labels = scales::comma) + self$theme + labs(x = "Droplet ID ranked by count", y = "UMI count per droplet", col = "") if (show.expected.cells) g <- g + geom_vline(data = line.df, aes(xintercept = exp, col = "Expected cells")) if (show.total.droplets) g <- g + geom_vline(data = line.df, aes(xintercept = total, col = "Total droplets included")) g <- g + facet_wrap(~ sample, scales = "free") if (verbose) message(paste0(Sys.time()," Done!")) return(g) }, #' @param file character File name for CellBender script. Will be stored in `data.path` (default: "cellbender_script.sh") #' @param fpr numeric False positive rate for CellBender (default = 0.01) #' @param epochs integer Number of epochs for CellBender (default = 150) #' @param use.gpu logical Use CUDA capable GPU (default = TRUE) #' @param expected.cells named numeric If NULL, expected cells will be deduced from the number of cells per sample identified by Cell Ranger. Otherwise, a named vector of expected cells with sample IDs as names. Sample IDs must match those in summary.metrics (default: stored named vector) #' @param total.droplets named numeric If NULL, total droplets included will be deduced from expected cells multiplied by 3. Otherwise, a named vector of total droplets included with sample IDs as names. 
Sample IDs must match those in summary.metrics (default: stored named vector) #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @param args character (optional) Additional parameters for CellBender #' @return bash script #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' crm$prepareCellbender() #' crm$saveCellbenderScript() #' } saveCellbenderScript = function(file = "cellbender_script.sh", fpr = 0.01, epochs = 150, use.gpu = TRUE, expected.cells = NULL, total.droplets = NULL, data.path = self$data.path, samples = self$metadata$sample, args = NULL) { # Preparations checkDataPath(data.path) inputs <- getH5Paths(data.path, samples, "raw") outputs <- data.path %>% pathsToList(samples) %>% sapply(\(sample) paste0(sample[2],sample[1],"/outs/cellbender.h5")) %>% setNames(samples) if (is.null(expected.cells)) expected.cells <- self$getExpectedCells(samples) if (is.null(total.droplets)) total.droplets <- self$getTotalDroplets(samples) # Create CellBender shell scripts script.list <- samples %>% lapply(\(sample) { paste0("cellbender remove-background --input ",inputs[sample]," --output ",outputs[sample],if (use.gpu) c(" --cuda ") else c(" "),"--expected-cells ",expected.cells[sample]," --total-droplets-included ",total.droplets[sample]," --fpr ",fpr," --epochs ",epochs," ",if (!is.null(args)) paste(args, collapse = " ")) }) out <- list("#! /bin/sh", script.list) %>% unlist() cat(out, file = paste0(data.path,file), sep = "\n") }, #' @description Extract the expected number of cells per sample based on the Cell Ranger summary metrics #' @param samples character Sample names to include (default = self$metadata$sample) #' @return A numeric vector #' @examples #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Get summary #' crm$addSummaryFromCms() #' #' # Get no. cells #' crm$getExpectedCells() getExpectedCells = function(samples = self$metadata$sample) { expected.cells <- self$summary.metrics %>% filter(metric == "estimated number of cells") %$% setNames(value, sample) %>% .[samples] return(expected.cells) }, #' @description Get the total number of droplets included in the CellBender estimations. Based on the Cell Ranger summary metrics and multiplied by a preset multiplier. #' @param samples character Samples names to include (default = self$metadata$sample) #' @param multiplier numeric Number to multiply expected number of cells with (default = 3) #' @return A numeric vector #' @examples #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Add summary #' crm$addSummaryFromCms() #' #' # Get no. 
droplets #' crm$getTotalDroplets() getTotalDroplets = function(samples = self$metadata$sample, multiplier = 3) { if (!is.numeric(multiplier)) stop("'multiplier' must be numeric.") expected.cells <- self$getExpectedCells(samples = samples) total.droplets <- expected.cells * multiplier return(total.droplets) }, #' @description Add a list of count matrices to the CRMetrics object. #' @param cms list List of (sparse) count matrices (default = NULL) #' @param data.path character Path to cellranger count data (default = self$data.path). #' @param samples character Vector of sample names. If NULL, samples are extracted from cms (default = self$metadata$sample) #' @param cellbender logical Add CellBender filtered count matrices in HDF5 format. Requires that "cellbender" is in the names of the files (default = FALSE) #' @param raw logical Add raw count matrices from Cell Ranger output. Cannot be combined with `cellbender=TRUE` (default = FALSE) #' @param symbol character The type of gene IDs to use, SYMBOL (TRUE) or ENSEMBLE (default = TRUE) #' @param unique.names logical Make cell names unique based on `sep` parameter (default = TRUE) #' @param sep character Separator used to create unique cell names (default = "!!") #' @param add.metadata boolean Add metadata from cms or not (default = TRUE) #' @param n.cores integer Number of cores to use (default = self$n.cores) #' @param verbose boolean Print progress (default = self$verbose) #' @return Add list of (sparse) count matrices to R6 class object #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' crm$addCms(cms = testdata.cms) #' } addCms = function(cms = NULL, data.path = self$data.path, samples = self$metadata$sample, cellbender = FALSE, raw = FALSE, symbol = TRUE, unique.names = TRUE, sep = "!!", add.metadata = TRUE, n.cores = self$n.cores, verbose = self$verbose) { # Check if (is.null(cms) && is.null(data.path)) stop("Either 'cms' or 'data.path' must be provided.") if (!is.null(self$cms)) stop("CMs already present. To overwrite, set $cms = NULL and rerun this function.") if (!is.null(cms)) { # Add from cms argument ## Checks if (!is.list(cms)) stop("'cms' must be a list of count matrices") if (verbose) message(paste0("Adding list of ",length(cms)," count matrices.")) sample.class <- sapply(cms, class) %>% unlist() %>% sapply(\(x) grepl("Matrix", x)) if (!any(sample.class)) { warning(paste0("Some samples are not a matrix (maybe they only contain 1 cell). Removing the following samples: ",paste(sample.class[!sample.class] %>% names(), collapse = " "))) cms %<>% .[sample.class] } sample.cells <- sapply(cms, ncol) %>% unlist() if (any(sample.cells == 0)) { warning(paste0("Some samples does not contain cells. 
Removing the following samples: ",paste(sample.cells[sample.cells == 0] %>% names(), collapse=" "))) cms %<>% .[sample.cells > 0] } if (is.null(samples)) samples <- names(cms) if (is.null(samples)) stop("Either 'cms' must be named or 'samples' cannot be NULL") if (length(samples) != length(cms)) stop("Length of 'samples' does not match length of 'cms'.") ## Create unique names if (unique.names) cms %<>% createUniqueCellNames(samples, sep) } else { # Add from data.path argument if (cellbender) { cms <- read10xH5(data.path = data.path, samples = samples, symbol = symbol, type = "cellbender_filtered", sep = sep, n.cores = n.cores, verbose = verbose, unique.names = unique.names) } else { cms <- read10x(data.path = data.path, samples = samples, raw = raw, symbol = symbol, sep = sep, n.cores = n.cores, verbose = verbose, unique.names = unique.names) } } self$cms <- cms if (add.metadata) { if (!is.null(self$metadata)) { warning("Overwriting metadata\n") } self$metadata <- data.frame(sample = samples) } if (!is.null(self$detailed.metrics)) warning("Consider updating detailed metrics by setting $detailed.metrics <- NULL and running $addDetailedMetrics().\n") if (!is.null(self$con)) warning("Consider updating embedding by setting $cms.preprocessed <- NULL and $con <- NULL, and running $doPreprocessing() and $createEmbedding().\n") if (!is.null(self$doublets)) warning("Consider updating doublet scores by setting $doublets <- NULL and running $detectDoublets().\n") }, #' @description Plot the results from the CellBender estimations #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @param pal character Plotting palette (default = self$pal) #' @return A ggplot2 object #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' crm$prepareCellbender() #' crm$saveCellbenderScript() #' ## Run CellBender script #' crm$plotCbTraining() #' } plotCbTraining = function(data.path = self$data.path, samples = self$metadata$sample, pal = self$pal) { checkDataPath(data.path) checkPackageInstalled("rhdf5", bioc = TRUE) paths <- getH5Paths(data.path, samples, "cellbender") train.df <- samples %>% lapply(\(id) { rhdf5::h5read(paths[id], "matrix/training_elbo_per_epoch") %>% {data.frame(ELBO = ., Epoch = seq_len(length(.)), sample = id)} }) %>% setNames(samples) %>% bind_rows() test.df <- samples %>% lapply(\(id) { path <- paths[id] data.frame(ELBO = rhdf5::h5read(path, "matrix/test_elbo"), Epoch = rhdf5::h5read(path, "matrix/test_epoch"), sample = id) }) %>% setNames(samples) %>% bind_rows() g <- ggplot() + geom_point(data = train.df, aes(Epoch, ELBO, col = "Train")) + geom_line(data = train.df, aes(Epoch, ELBO, col = "Train")) + geom_point(data = test.df, aes(Epoch, ELBO, col = "Test")) + geom_line(data = test.df, aes(Epoch, ELBO, col = "Test")) + self$theme + labs(col = "") + facet_wrap(~sample, scales = "free_y") if (!is.null(pal)) g <- g + scale_color_manual(values = pal) return(g) }, #' @description Plot the CellBender assigned cell probabilities #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @param low.col character Color for low probabilities (default = "gray") #' @param high.col character Color for high probabilities (default = "red") #' @return A ggplot2 object #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = 
"/path/to/count/data/") #' crm$prepareCellbender() #' crm$saveCellbenderScript() #' ## Run the CellBender script #' crm$plotCbCellProbs() #' } plotCbCellProbs = function(data.path = self$data.path, samples = self$metadata$sample, low.col = "gray", high.col = "red") { checkDataPath(data.path) checkPackageInstalled("rhdf5", bioc = TRUE) paths <- getH5Paths(data.path, samples, "cellbender") cell.prob <- samples %>% lapply(\(id) { rhdf5::h5read(paths[id], "matrix/latent_cell_probability") %>% {data.frame(prob = ., cell = seq_len(length(.)), sample = id)} }) %>% setNames(samples) %>% bind_rows() ggplot(cell.prob, aes(cell, prob, col = prob)) + geom_point() + scale_color_gradient(low=low.col, high=high.col) + self$theme + labs(x = "Cells", y = "Cell probability", col = "") + facet_wrap(~sample, scales = "free_x") }, #' @description Plot the estimated ambient gene expression per sample from CellBender calculations #' @param cutoff numeric Horizontal line included in the plot to indicate highly expressed ambient genes (default = 0.005) #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @return A ggplot2 object #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' crm$prepareCellbender() #' crm$saveCellbenderScript() #' ## Run CellBender script #' crm$plotCbAmbExp() #' } plotCbAmbExp = function(cutoff = 0.005, data.path = self$data.path, samples = self$metadata$sample) { checkDataPath(data.path) checkPackageInstalled("rhdf5", bioc = TRUE) paths <- getH5Paths(data.path, samples, "cellbender") amb <- samples %>% lapply(\(id) { rhdf5::h5read(paths[id], "matrix/ambient_expression") %>% {data.frame(exp = ., cell = seq_len(length(.)), gene.names = rhdf5::h5read(paths[id], "matrix/features/name") %>% as.character(), sample = id)} }) %>% setNames(samples) %>% bind_rows() g <- ggplot(amb, aes(cell, exp)) + geom_point() + geom_hline(yintercept = cutoff) + geom_label_repel(data = amb[amb$exp > cutoff,], aes(cell, exp, label = gene.names)) + self$theme + labs(y = "Ambient expression", x = "Genes") + facet_wrap(~sample, scales = "free_y") return(g) }, #' @description Plot the most abundant estimated ambient genes from the CellBender calculations #' @param cutoff numeric Cutoff of ambient gene expression to use to extract ambient genes per sample #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @param pal character Plotting palette (default = self$pal) #' @return A ggplot2 object #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' crm$prepareCellbender() #' crm$saveCellbenderScript() #' ## Run CellBender script #' crm$plotCbAmbGenes() #' } plotCbAmbGenes = function(cutoff = 0.005, data.path = self$data.path, samples = self$metadata$sample, pal = self$pal) { checkDataPath(data.path) checkPackageInstalled("rhdf5", bioc = TRUE) paths <- getH5Paths(data.path, samples, "cellbender") amb <- samples %>% lapply(\(id) { rhdf5::h5read(paths[id], "matrix/ambient_expression") %>% {data.frame(exp = ., cell = seq_len(length(.)), gene.names = rhdf5::h5read(paths[id], "matrix/features/name") %>% as.character(), sample = id)} %>% filter(exp >= cutoff) }) %>% setNames(samples) %>% bind_rows() %$% table(gene.names) %>% as.data.frame() %>% arrange(desc(Freq)) %>% mutate(Freq = Freq / length(samples), gene.names = 
factor(gene.names, levels = gene.names)) g <- ggplot(amb, aes(gene.names, Freq, fill = gene.names)) + geom_bar(stat = "identity") + self$theme + labs(x = "", y = "Proportion") + theme(axis.text.x = element_text(angle = 90)) + guides(fill = "none") if (!is.null(pal)) { gene.len <- amb$gene.names %>% unique() %>% length() if (length(pal) < gene.len) warning(paste0("Palette has ",length(pal)," colors but there are ",gene.len," genes, omitting palette.")) else g <- g + scale_fill_manual(values = pal) } return(g) }, #' @description Add summary metrics from a list of count matrices #' @param cms list A list of filtered count matrices (default = self$cms) #' @param n.cores integer Number of cores to use (default = self$n.cores) #' @param verbose logical Show progress (default = self$verbose) #' @return data.frame #' @examples #' # Simulate data #' testdata.cms <- lapply(seq_len(2), \(x) { #' out <- Matrix::rsparsematrix(2e3, 1e3, 0.1) #' out[out < 0] <- 1 #' dimnames(out) <- list(sapply(seq_len(2e3), \(x) paste0("gene",x)), #' sapply(seq_len(1e3), \(x) paste0("cell",x))) #' return(out) #' }) #' #' # Initialize #' crm <- CRMetrics$new(cms = testdata.cms, samples = c("sample1", "sample2"), n.cores = 1) #' #' # Add summary #' crm$addSummaryFromCms() addSummaryFromCms = function(cms = self$cms, n.cores = self$n.cores, verbose = self$verbose) { checkPackageInstalled("sparseMatrixStats", bioc = TRUE) if (!is.null(self$summary.metrics)) warning("Overwriting existing summary metrics \n") if (verbose) message(paste0(Sys.time()," Calculating ",length(cms)," summaries using ", if (n.cores < length(cms)) n.cores else length(cms)," cores")) self$summary.metrics <- cms %>% names() %>% plapply(\(id) { cm <- cms[[id]] cm.bin <- cm cm.bin[cm.bin > 0] <- 1 data.frame(cells = ncol(cm), median.genes = sparseMatrixStats::colSums2(cm.bin) %>% median(), median.umi = sparseMatrixStats::colSums2(cm) %>% median(), total.genes = sum(sparseMatrixStats::rowSums2(cm.bin) > 0), sample = id) }, n.cores = n.cores) %>% bind_rows() %>% pivot_longer(cols = -c(sample), names_to = "metric", values_to = "value") %>% mutate(metric = factor(metric, labels = c("estimated number of cells", "median genes per cell", "median umi counts per cell", "total genes detected"))) %>% arrange(sample) if (verbose) message(paste0(Sys.time()," Done!")) }, #' @description Run SoupX ambient RNA estimation and correction #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @param n.cores numeric Number of cores (default = self$n.cores) #' @param verbose logical Show progress (default = self$verbose) #' @param arg.load10X list A list with additional parameters for `SoupX::load10X` (default = list()) #' @param arg.autoEstCont list A list with additional parameters for `SoupX::autoEstCont` (default = list()) #' @param arg.adjustCounts list A list with additional parameters for `SoupX::adjustCounts` (default = list()) #' @return List containing a list with corrected counts, and a data.frame containing plotting estimations #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' crm$runSoupX() #' } runSoupX = function(data.path = self$data.path, samples = self$metadata$sample, n.cores = self$n.cores, verbose = self$verbose, arg.load10X = list(), arg.autoEstCont = list(), arg.adjustCounts = list()) { checkDataPath(data.path) checkPackageInstalled("SoupX", cran = TRUE) if (verbose) message(paste0(Sys.time()," 
Running using ", if (n.cores <- length(samples)) n.cores else length(samples)," cores")) # Create SoupX objects if (verbose) message(paste0(Sys.time()," Loading data")) soupx.list <- data.path %>% pathsToList(samples) %>% plapply(\(sample) { arg <- list(dataDir = paste(sample[2],sample[1],"outs", sep = "/")) %>% append(arg.load10X) out <- do.call(SoupX::load10X, arg) return(out) }, n.cores = n.cores) %>% setNames(samples) # Perform automatic estimation of contamination if (verbose) message(paste0(Sys.time()," Estimating contamination")) tmp <- soupx.list %>% plapply(\(soupx.obj) { arg <- list(sc = soupx.obj) %>% append(arg.autoEstCont) out <- do.call(SoupX::autoEstCont, arg) return(out) }, n.cores = n.cores) %>% setNames(samples) # Save plot data if (verbose) message(paste0(Sys.time()," Preparing plot data")) rhoProbes <- seq(0,1,.001) self$soupx$plot.df <- samples %>% plapply(\(id) { dat <- tmp[[id]] ## The following is taken from the SoupX package post.rho <- dat$fit$posterior priorRho <- dat$fit$priorRho priorRhoStdDev <- dat$fit$priorRhoStdDev v2 <- (priorRhoStdDev/priorRho)**2 k <- 1+v2**-2/2*(1+sqrt(1+4*v2)) theta <- priorRho/(k-1) prior.rho <- dgamma(rhoProbes, k, scale=theta) df <- data.frame(rhoProbes = rhoProbes, post.rho = post.rho, prior.rho = prior.rho) %>% tidyr::pivot_longer(cols = -c("rhoProbes"), names_to = "variable", values_to = "value") %>% mutate(rhoProbes = as.numeric(rhoProbes), value = as.numeric(value), sample = id) return(df) }, n.cores = n.cores) %>% setNames(samples) %>% bind_rows() # Adjust counts if (verbose) message(paste0(Sys.time()," Adjusting counts")) self$soupx$cms.adj <- tmp %>% plapply(\(sample) { arg <- list(sc = sample) %>% append(arg.adjustCounts) out <- do.call(SoupX::adjustCounts, arg) return(out) }, n.cores = n.cores) %>% setNames(samples) if (verbose) message(paste0(Sys.time()," Done!")) }, #' @description Plot the results from the SoupX estimations #' @param plot.df data.frame SoupX estimations (default = self$soupx$plot.df) #' @return A ggplot2 object #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' crm$runSoupX() #' crm$plotSoupX() #' } plotSoupX = function(plot.df = self$soupx$plot.df) { if (is.null(plot.df)) stop("No plot data found. Please run $runSoupX first.") line.df <- plot.df %>% split(., .$sample) %>% lapply(\(x) x$rhoProbes[x$value == max(x$value)]) %>% {lapply(names(.), \(x) data.frame(value = .[[x]], sample = x))} %>% do.call(rbind, .) 
ggplot(plot.df, aes(rhoProbes, value, linetype = variable, col = variable)) + geom_line(show.legend = FALSE) + geom_vline(data = line.df, aes(xintercept = value, col = "rho.max", linetype = "rho.max")) + scale_color_manual(name = "", values = c("post.rho" = "black", "rho.max" = "red", "prior.rho" = "black")) + scale_linetype_manual(name = "", values = c("post.rho" = "solid", "rho.max" = "solid", "prior.rho" = "dashed")) + self$theme + labs(x = "Contamination fraction", y = "Probability density") + facet_wrap(~sample, scales = "free_y") + theme(legend.spacing.y = unit(3, "pt")) + guides(linetype = guide_legend(byrow = TRUE), col = guide_legend(byrow = TRUE)) }, #' @description Plot CellBender cell estimations against the estimated cell numbers from Cell Ranger #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @param pal character Plotting palette (default = self$pal) #' @return A ggplot2 object #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' crm$prepareCellbender() #' crm$saveCellbenderScript() #' ## Run CellBender script #' crm$plotCbCells() #' } plotCbCells = function(data.path = self$data.path, samples = self$metadata$sample, pal = self$pal) { checkDataPath(data.path) checkPackageInstalled("rhdf5", bioc = TRUE) paths <- getH5Paths(data.path, samples, "cellbender_filtered") df <- samples %>% sapply(\(id) rhdf5::h5read(paths[id], "matrix/shape")[2]) %>% {data.frame(exp = self$getExpectedCells(samples), cb.cells = ., sample = samples)} %>% mutate(diff = cb.cells - exp, rel = diff / exp) %>% pivot_longer(cols = c(-sample), names_to = "variable", values_to = "value") %>% mutate(variable = factor(variable, labels = c("CellBender cells", "Difference to exp. cells", "Expected cells", "Relative difference to exp. cells"))) g <- ggplot(df, aes(sample, value, fill = sample)) + geom_bar(stat = "identity") + self$theme + theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.5)) + guides(fill = "none") + labs(x = "", y = "") + facet_wrap(~variable, scales = "free_y", nrow = 2, ncol = 2) if (!is.null(pal)) g <- g + scale_fill_manual(values = pal) return(g) }, #' @description Add doublet results created from exported Python script #' @param method character Which method to use, either `scrublet` or `doubletdetection` (default is both). #' @param data.path character Path to Cell Ranger outputs (default = self$data.path) #' @param samples character Sample names to include (default = self$metadata$sample) #' @param cms list List containing the count matrices (default = self$cms). #' @param verbose boolean Print progress (default = self$verbose) #' @return List of doublet results #' @examples #' \dontrun{ #' crm <- CRMetrics$new(data.path = "/path/to/count/data/") #' crm$detectDoublets(export = TRUE) #' ## Run Python script #' crm$addDoublets() #' } addDoublets = function(method = c("scrublet","doubletdetection"), data.path = self$data.path, samples = self$metadata$sample, cms = self$cms, verbose = self$verbose) { # Check if (length(data.path) > 1) data.path <- data.path[1] unk <- setdiff(method, c("doubletdetection","scrublet")) if (length(unk) > 0) stop(paste0("Method(s) not recognised: ",paste(unk, collapse = " "))) ex.res <- self$doublets if (!is.null(names(ex.res))) { warning(paste0("Results for method(s) '",paste(intersect(names(ex.res), method), collapse = " "),"' found, skipping. 
Set $doublets$<METHOD> <- NULL and rerun this function to load.")) method <- setdiff(method, names(ex.res)) } if (length(method) == 0) stop("No method left to load data for.") # Load data files <- dir(data.path, full.names = TRUE) files.list <- method %>% {`names<-`(lapply(., \(met) files[grepl(paste0(".",met), files, fixed = TRUE)]), .)} not.found <- names(files.list)[sapply(files.list, length) == 0] if (length(not.found) > 0) { warning("No results found for method(s): ",paste(not.found, collapse = " ")) files.list %<>% .[setdiff(names(.), not.found)] } if (length(files.list) == 0) stop("No method left to load data for.") res.list <- files.list %>% names() %>% lapply(\(met) { len <- length(files.list[[met]]) if (len > 0) { if (verbose) message(paste0("Found ",len," results for ",met)) tmp <- files.list[[met]] %>% lapply(read.delim, sep = ",", header = TRUE) %>% bind_rows() %>% `names<-`(c("scores","labels")) tmp[is.na(tmp)] <- 0 tmp %<>% mutate(labels = as.logical(labels)) %>% `rownames<-`(cms %>% lapply(colnames) %>% unlist() %>% unname()) if (verbose) message(paste0("Detected ",sum(tmp$labels, na.rm = TRUE)," possible doublets out of ",nrow(tmp)," cells.")) out <- list(results = tmp) return(out) } }) %>% `names<-`(method) if (!is.null(ex.res)) res.list <- append(ex.res, res.list) return(invisible(self$doublets <- res.list)) } ))
#' @importFrom stats chisq.test fisher.test #' @importFrom utils combn read.delim glob2rx #' @importFrom Matrix sparseMatrix #' @importFrom methods as #' @importFrom utils globalVariables read.table #' @importFrom sccore checkPackageInstalled NULL utils::globalVariables(c(".","value","variable","V1","V2","metric")) #' @title Set correct 'comp.group' parameter #' @description Set comp.group to 'category' if null. #' @param comp.group Comparison metric. #' @param category Comparison metric to use if comp.group is not provided. #' @param verbose Print messages (default = TRUE). #' @keywords internal #' @return vector checkCompGroup <- function(comp.group, category, verbose = TRUE) { if (is.null(comp.group)) { if (verbose) message(paste0("Using '",category,"' for 'comp.group'")) comp.group <- category } return(comp.group) } #' @title Check whether 'comp.group' is in metadata #' @description Checks whether 'comp.group' is any of the column names in metadata. #' @param comp.group Comparison metric. #' @param metadata Metadata for samples. #' @keywords internal #' @return nothing or stop checkCompMeta <- function(comp.group, metadata) { if (!is.null(comp.group) && (!comp.group %in% colnames(metadata))) stop("'comp.group' doesn't match any column name in metadata.") } #' @title Load 10x count matrices #' @description Load gene expression count data #' @param data.path Path to cellranger count data. #' @param samples Vector of sample names (default = NULL) #' @param raw logical Add raw count matrices (default = FALSE) #' @param symbol The type of gene IDs to use, SYMBOL (TRUE) or ENSEMBLE (default = TRUE). #' @param sep Separator for cell names (default = "!!"). #' @param n.cores Number of cores for the calculations (default = 1). #' @param verbose Print messages (default = TRUE). 
#' @keywords internal #' @return data frame #' @examples #' \dontrun{ #' cms <- read10x(data.path = "/path/to/count/data", #' samples = crm$metadata$samples, #' raw = FALSE, #' symbol = TRUE, #' n.cores = crm$n.cores) #' } #' @export read10x <- function(data.path, samples = NULL, raw = FALSE, symbol = TRUE, sep = "!!", unique.names = TRUE, n.cores = 1, verbose = TRUE) { checkPackageInstalled("data.table", cran = TRUE) if (is.null(samples)) samples <- list.dirs(data.path, full.names = FALSE, recursive = FALSE) full.path <- data.path %>% pathsToList(samples) %>% sapply(\(sample) { if (raw) pat <- glob2rx("raw_*_bc_matri*") else pat <- glob2rx("filtered_*_bc_matri*") dir(paste(sample[2],sample[1],"outs", sep = "/"), pattern = pat, full.names = TRUE) %>% .[!grepl(".h5", .)] }) if (verbose) message(paste0(Sys.time()," Loading ",length(full.path)," count matrices using ", if (n.cores > length(full.path)) length(full.path) else n.cores," cores")) tmp <- full.path %>% plapply(\(sample) { tmp.dir <- dir(sample, full.names = TRUE) # Read matrix mat.path <- tmp.dir %>% .[grepl("mtx", .)] if (grepl("gz", mat.path)) { mat <- as(Matrix::readMM(gzcon(file(mat.path, "rb"))), "CsparseMatrix") } else { mat <- as(Matrix::readMM(mat.path), "CsparseMatrix") } # Add features feat <- tmp.dir %>% .[grepl(ifelse(any(grepl("features.tsv", .)),"features.tsv","genes.tsv"), .)] %>% data.table::fread(header = FALSE) if (symbol) rownames(mat) <- feat %>% pull(V2) else rownames(mat) <- feat %>% pull(V1) # Add barcodes barcodes <- tmp.dir %>% .[grepl("barcodes.tsv", .)] %>% data.table::fread(header = FALSE) colnames(mat) <- barcodes %>% pull(V1) return(mat) }, n.cores = n.cores) %>% setNames(samples) if (unique.names) tmp %<>% createUniqueCellNames(samples, sep) if (verbose) message(paste0(Sys.time()," Done!")) return(tmp) } #' @title Add detailed metrics #' @description Add detailed metrics, requires to load raw count matrices using pagoda2. #' @param cms List containing the count matrices. #' @param verbose Print messages (default = TRUE). #' @param n.cores Number of cores for the calculations (default = 1). #' @keywords internal #' @return data frame addDetailedMetricsInner <- function(cms, verbose = TRUE, n.cores = 1) { if (verbose) message(Sys.time()," Counting using ", if (n.cores < length(cms)) n.cores else length(cms)," cores") samples <- cms %>% names() metricsDetailed <- cms %>% plapply(\(cm) { # count UMIs totalUMI <- cm %>% sparseMatrixStats::colSums2() %>% as.data.frame() %>% setNames("value") %>% mutate(., metric = "UMI_count", barcode = rownames(.)) cm.bin <- cm cm.bin[cm.bin > 0] <- 1 totalGenes <- cm.bin %>% sparseMatrixStats::colSums2() %>% as.data.frame() %>% setNames("value") %>% mutate(., metric = "gene_count", barcode = rownames(.)) metricsDetailedSample <- rbind(totalUMI, totalGenes) return(metricsDetailedSample) }, n.cores = n.cores) %>% setNames(samples) if (verbose) message(paste0(Sys.time()," Creating table")) tmp <- samples %>% lapply(\(sample.name) { metricsDetailed[[sample.name]] %>% mutate(sample = sample.name) }) %>% setNames(samples) %>% bind_rows() %>% select(c("sample", "barcode", "metric", "value")) if (verbose) message(paste0(Sys.time()," Done!")) return(tmp) } #' @title Add statistics to plot #' @description Use ggpubr to add statistics to plots. #' @param p Plot to add statistics to. #' @param comp.group Comparison metric. #' @param metadata Metadata for samples. #' @param h.adj Position of statistics test p value as % of max(y) (default = 0.05). 
#' @param primary.test Primary statistical test, e.g. "anova", "kruskal.test". #' @param secondary.test Secondary statistical test, e.g. "t-test", "wilcox.test" #' @param exact Whether to calculate exact p values (default = FALSE). #' @keywords internal #' @return ggplot2 object addPlotStats <- function(p, comp.group, metadata, h.adj = 0.05, primary.test, secondary.test, exact = FALSE) { checkCompMeta(comp.group, metadata) g <- p if (!is.null(secondary.test)) { comp <- metadata[[comp.group]] %>% unique() %>% as.character() %>% combn(2) %>% data.frame() %>% as.list() g <- g + stat_compare_means(comparisons = comp, method = secondary.test, exact = exact) } y.upper <- layer_scales(g, 1)$y$range$range[2] g <- g + stat_compare_means(method = primary.test, label.y = y.upper * (1 + h.adj)) return(g) } #' @title Add statistics to plot #' @description Use ggpubr to add statistics to samples or plot #' @param p Plot to add statistics to. #' @param comp.group Comparison metric. #' @param metadata Metadata for samples. #' @param h.adj Position of statistics test p value as % of max(y) (default = 0.05). #' @param exact Whether to calculate exact p values (default = FALSE). #' @param second.comp.group Second comparison metric. #' @keywords internal #' @return ggplot2 object addPlotStatsSamples <- function(p, comp.group, metadata, h.adj = 0.05, exact = FALSE, second.comp.group) { checkCompMeta(comp.group, metadata) checkCompMeta(second.comp.group, metadata) if (comp.group == second.comp.group) { stat <- metadata %>% select(comp.group, second.comp.group) %>% table(dnn = comp.group) %>% chisq.test() } else if (length(unique(metadata[[comp.group]])) == 2 && length(unique(metadata[[second.comp.group]])) == 2) { stat <- metadata %>% select(comp.group, second.comp.group) %>% table(dnn = comp.group) %>% chisq.test() } else { stat <- metadata %>% select(comp.group, second.comp.group) %>% table(dnn = comp.group) %>% fisher.test() } if (exact){ g <- p + labs(subtitle = paste0(stat$method, ": ", stat$p.value), h.adj = h.adj) } else { g <- p + labs(subtitle = paste0(stat$method, ": ", round(stat$p.value, digits = 4)), h.adj = h.adj) } return(g) } #' @title Add summary metrics #' @description Add summary metrics by reading Cell Ranger metrics summary files. #' @param data.path Path to cellranger count data. #' @param metadata Metadata for samples. #' @param n.cores Number of cores for the calculations (default = 1). #' @param verbose Print messages (default = TRUE). #' @keywords internal #' @return data frame addSummaryMetrics <- function(data.path, metadata, n.cores = 1, verbose = TRUE) { samples.tmp <- list.dirs(data.path, recursive = FALSE, full.names = FALSE) samples <- intersect(samples.tmp, metadata$sample) doubles <- table(samples.tmp) %>% .[. > 1] %>% names() if (length(doubles) > 0) stop(paste0("One or more samples are present twice in 'data.path'. Sample names must be unique. 
Affected sample(s): ",paste(doubles, collapse = " "))) if (length(samples) != length(samples.tmp)) message("'metadata' doesn't contain the following sample(s) derived from 'data.path' (dropped): ",setdiff(samples.tmp, samples) %>% paste(collapse = " ")) if (verbose) message(paste0(Sys.time()," Adding ",length(samples)," samples")) # extract and combine metrics summary for all samples metrics <- data.path %>% pathsToList(metadata$sample) %>% plapply(\(s) { tmp <- read.table(dir(paste(s[2],s[1],"outs", sep = "/"), glob2rx("*ummary.csv"), full.names = TRUE), header = TRUE, sep = ",", colClasses = numeric()) %>% mutate(., across(.cols = grep("%", .), ~ as.numeric(gsub("%", "", .x)) / 100), across(.cols = grep(",", .), ~ as.numeric(gsub(",", "", .x)))) # Take into account multiomics if ("Sample.ID" %in% colnames(tmp)) tmp %<>% select(-c("Sample.ID","Genome","Pipeline.version")) tmp %>% mutate(sample = s[1]) %>% pivot_longer(cols = -c(sample), names_to = "metric", values_to = "value") %>% mutate(metric = metric %>% gsub(".", " ", ., fixed = TRUE) %>% tolower()) }, n.cores = n.cores) %>% bind_rows() %>% arrange(sample) if (verbose) message(paste0(Sys.time()," Done!")) return(metrics) } #' @title Plot the data as points, as bars as a histogram, or as a violin #' @description Plot the data as points, barplot, histogram or violin #' @param g ggplot2 object #' @param plot.geom The plot.geom to use, "point", "bar", "histogram", or "violin". #' @param pal character Palette (default = NULL) #' @keywords internal #' @return geom plotGeom <- function(g, plot.geom, col, pal = NULL) { if (plot.geom == "point"){ g <- g + geom_quasirandom(size = 1, groupOnX = TRUE, aes(col = !!sym(col))) + if (is.null(pal)) scale_color_hue() else scale_color_manual(values = pal) } else if (plot.geom == "bar"){ g <- g + geom_bar(stat = "identity", position = "dodge", aes(fill = !!sym(col))) + if (is.null(pal)) scale_fill_hue() else scale_fill_manual(values = pal) } else if (plot.geom == "histogram"){ g <- g + geom_histogram(binwidth = 25, aes(fill = !!sym(col))) + if (is.null(pal)) scale_fill_hue() else scale_fill_manual(values = pal) } else if (plot.geom == "violin"){ g <- g + geom_violin(show.legend = TRUE, aes(fill = !!sym(col))) + if (is.null(pal)) scale_fill_hue() else scale_fill_manual(values = pal) } return(g) } #' @title Calculate percentage of filtered cells #' @description Calculate percentage of filtered cells based on the filter #' @param filter.data Data frame containing the mitochondrial fraction, depth and doublets per sample. #' @param filter The variable to filter (default = "mito") #' @param no.vars numeric Number of variables (default = 1) #' @keywords internal #' @return vector percFilter <- function(filter.data, filter = "mito", no.vars = 1) { cells.per.sample <- filter.data$sample %>% table() / no.vars %>% c() variable.count <- filter.data %>% filter(variable == filter) %$% split(value, sample) %>% lapply(sum) perc <- seq_len(length(cells.per.sample)) %>% sapply(\(x) { variable.count[[x]] / cells.per.sample[x] }) %>% setNames(names(cells.per.sample)) return(perc) } #' @title Get labels for percentage of filtered cells #' @description Labels the percentage of filtered cells based on mitochondrial fraction, sequencing depth and doublets as low, medium or high #' @param filter.data Data frame containing the mitochondrial fraction, depth and doublets per sample. 
#' @keywords internal #' @return data frame labelsFilter <- function(filter.data) { var.names <- filter.data$variable %>% unique() tmp <- list() if ("mito" %in% var.names) { tmp$mito <- percFilter(filter.data, "mito", length(var.names)) %>% sapply(\(x) {if (x < 0.01) "Low" else if (x > 0.05) "High" else "Medium"}) %>% {data.frame(sample = names(.), value = .)} } if ("depth" %in% var.names) { tmp$depth <- percFilter(filter.data, "depth", length(var.names)) %>% sapply(\(x) {if (x < 0.05) "Low" else if (x > 0.1) "High" else "Medium"}) %>% {data.frame(sample = names(.), value = .)} } if ("doublets" %in% var.names) { tmp$doublets <- percFilter(filter.data, "doublets", length(var.names)) %>% sapply(\(x) {if (x < 0.05) "Low" else if (x > 0.1) "High" else "Medium"}) %>% {data.frame(sample = names(.), value = .)} } tmp %<>% names() %>% lapply(\(x) tmp[[x]] %>% mutate(fraction = x)) %>% bind_rows() %>% mutate(value = value %>% factor(levels = c("Low","Medium","High"))) return(tmp) } #' @title Read 10x HDF5 files #' @param data.path character #' @param samples character vector, select specific samples for processing (default = NULL) #' @param type name of H5 file to search for, "raw" and "filtered" are Cell Ranger count outputs, "cellbender" is output from CellBender after running script from saveCellbenderScript #' @param symbol logical Use gene SYMBOLs (TRUE) or ENSEMBL IDs (FALSE) (default = TRUE) #' @param sep character Separator for creating unique cell names from sample IDs and cell IDs (default = "!!") #' @param n.cores integer Number of cores (default = 1) #' @param verbose logical Print progress (default = TRUE) #' @param unique.names logical Create unique cell IDs (default = FALSE) #' @return list with sparse count matrices #' @examples #' \dontrun{ #' cms.h5 <- read10xH5(data.path = "/path/to/count/data") #' } #' @export read10xH5 <- function(data.path, samples = NULL, type = c("raw","filtered","cellbender","cellbender_filtered"), symbol = TRUE, sep = "!!", n.cores = 1, verbose = TRUE, unique.names = FALSE) { checkPackageInstalled("rhdf5", bioc = TRUE) if (is.null(samples)) samples <- list.dirs(data.path, full.names = FALSE, recursive = FALSE) full.path <- getH5Paths(data.path, samples, type) if (verbose) message(paste0(Sys.time()," Loading ",length(full.path)," count matrices using ", if (n.cores < length(full.path)) n.cores else length(full.path)," cores")) out <- full.path %>% plapply(\(path) { h5 <- rhdf5::h5read(path, "matrix") tmp <- sparseMatrix( dims = h5$shape, i = h5$indices %>% as.integer(), p = h5$indptr %>% as.integer(), x = h5$data %>% as.integer(), index1 = FALSE ) # Extract gene names, different after V3 if ("features" %in% names(h5)) { if (symbol) { rows <- h5$features$name } else { rows <- h5$features$id } } else { if (symbol) { rows <- h5$genes$name } else { rows <- h5$genes$id } } tmp %<>% `dimnames<-`(list(rows, h5$barcodes)) return(tmp) }, n.cores = n.cores) %>% setNames(samples) if (unique.names) out %<>% createUniqueCellNames(samples, sep) if (verbose) message(paste0(Sys.time()," Done!")) return(out) } #' @title Create unique cell names #' @description Create unique cell names from sample IDs and cell IDs #' @param cms list List of count matrices, should be named (optional) #' @param samples character Optional, list of sample names #' @param sep character Separator between sample IDs and cell IDs (default = "!!") #' @keywords internal createUniqueCellNames <- function(cms, samples, sep = "!!") { names(cms) <- samples samples %>% lapply(\(sample) { cms[[sample]] %>% 
`colnames<-`(., paste0(sample,sep,colnames(.))) }) %>% setNames(samples) } #' @title Get H5 file paths #' @description Get file paths for H5 files #' @param data.path character Path for directory containing sample-wise directories with Cell Ranger count outputs #' @param samples character Sample names to include (default = NULL) #' @param type character Type of H5 files to get paths for, one of "raw", "filtered" (Cell Ranger count outputs), "cellbender" (raw CellBender outputs), "cellbender_filtered" (CellBender filtered outputs) (default = "type") #' @keywords internal getH5Paths <- function(data.path, samples = NULL, type = NULL) { # Check input type %<>% tolower() %>% match.arg(c("raw","filtered","cellbender","cellbender_filtered")) # Get H5 paths paths <- data.path %>% pathsToList(samples) %>% sapply(\(sample) { if (grepl("cellbender", type)) { paste0(sample[2],"/",sample[1],"/outs/",type,".h5") } else { dir(paste0(sample[2],sample[1],"/outs"), glob2rx(paste0(type,"*.h5")), full.names = TRUE) } }) %>% setNames(samples) # Check that all files exist if (paths %>% sapply(length) %>% {any(. == 0)}) { miss.names <- paths %>% sapply(length) %>% {paths[. == 0]} %>% names() miss <- miss.names %>% sapply(\(sample) { if (type == "raw") { paste0(data.path,sample,"/outs/raw_[feature/gene]_bc_matrix.h5") } else if (type == "filtered") { paste0(data.path,sample,"/outs/filtered_[feature/gene]_bc_matrix.h5") } else { paste0(data.path,sample,"/outs/",type,".h5") } }) %>% setNames(miss.names) } else if (!(paths %>% sapply(file.exists) %>% all())) { miss <- paths %>% sapply(file.exists) %>% {paths[!.]} } else { miss <- NULL } if (!is.null(miss)) { stop(message("Not all files exist. Missing the following: \n",paste(miss, sep = "\n"))) } return(paths) } #' @title Create filtering vector #' @description Create logical filtering vector based on a numeric vector and a (sample-wise) cutoff #' @param num.vec numeric Numeric vector to create filter on #' @param name character Name of filter #' @param filter numeric Either a single numeric value or a numeric value with length of samples #' @param samples character Sample IDs #' @param sep character Separator to split cells by into sample-wise lists (default = "!!") #' @keywords internal filterVector <- function(num.vec, name, filter, samples, sep = "!!") { if (!is.numeric(filter)) stop(paste0("'",name,"' must be numeric.")) if (length(filter) > 1) { if (is.null(names(filter))) stop(paste0("'",name,"' must have sample names as names.")) filter %<>% .[samples] num.list <- strsplit(names(num.vec), sep) %>% sapply('[[', 1) %>% split(num.vec, .) out <- samples %>% sapply(\(sample) { num.list[[sample]] >= filter[sample] }) %>% unname() %>% unlist() } else { out <- num.vec >= filter } return(out) } #' @title Check data path #' @description Helper function to check that data.path is not NULL #' @param data.path character Path to be checked #' @keywords internal checkDataPath <- function(data.path) { if (is.null(data.path)) stop("'data.path' cannot be NULL.") } pathsToList <- function(data.path, samples) { data.path %>% lapply(\(path) list.dirs(path, recursive = F, full.names = F) %>% {if (!is.null(samples)) .[. %in% samples] else . } %>% data.frame(sample = ., path = path)) %>% bind_rows() %>% t() %>% data.frame() %>% as.list() }
--- title: "CRMetrics - Cell Ranger Filtering and Metrics Visualization" output: html_document: toc: true toc_float: true --- # Introduction ## Preparations We have selected a [publicly available dataset](https://www.ncbi.nlm.nih.gov/geo/) from GEO with accession number GSE179590 which can be downloaded [here](http://kkh.bric.ku.dk/fabienne/crmetrics_testdata.tar.gz). You can download the zipped data using wget or curl, e.g. `wget http://kkh.bric.ku.dk/fabienne/crmetrics_testdata.tar.gz`, and then unpack using `tar -xvf crmetrics_testdata.tar.gz` ## Using Python modules CRMetrics utilizes several Python packages. Of these, the packages for doublet detection, `scrublet` and `DoubletDetection`, can be run from R using `reticulate`. However, CRMetrics can provide a Python script to run the analyses directly in Python instead through the `export` parameter. In order to run the analyses in R, first you should install `reticulate`: ```{r, eval=FALSE} install.packages("reticulate") library(reticulate) ``` Also, you need to install Conda. Then, you are ready to create a Conda virtual environment. In this example, we're on a server and we load `miniconda` using modules. You may need to include the `conda` parameter to point to wherever your Conda binary is located (in terminal, try `whereis conda`). In this example, we install a virtual environment for `scrublet`. ```{r, eval=FALSE} conda_create("scrublet", conda = "/opt/software/miniconda/4.12.0/condabin/conda", python_version = 3.8) ``` There is a known problem with openBLAS which may be different between R and Python. If this is the case, you will receive the error `floating point exception` and R will crash when you try to run a Python script using `reticulate`. In Python, the problem lies within `numpy`. `numba` requires `numpy` \< 1.23, so force reinstall from scratch with no binaries in the `scrublet` Conda environment from terminal `module load miniconda/4.12.0` `conda activate scrublet` `python -m pip install numpy==1.22.0 --force-reinstall --no-binary numpy` Then, follow the instructions to install [`scrublet`](https://github.com/swolock/scrublet). Finally, restart your R session. Please note, if at any point you receive an error that you can't change the current Python instance, please remove any Python-dependent object in your environment and restart your R session. # Main use Load the library ```{r setup, message=FALSE} library(CRMetrics) library(magrittr) library(dplyr) ``` There are two ways to initialize a new object of class `CRMetrics`, either by providing `data.path` or `cms`. `data.path` is the path to a directory containing sample-wise directories with the Cell Ranger count outputs. Optionally, it can be a vector of multiple paths. `cms` is a (named, optional) list of (sparse, optional) count matrices. Please note, if `data.path` is not provided, some functionality is lost, e.g. ambient RNA removal. Optionally, metadata can be provided, either as a file or as a data.frame. For a file, the separator can be set with the parameter `sep.meta` (most often, either `,` (comma) or `\t` (tab) is used). In either format, the columns must be named and one column must be named `sample` and contain sample names. In combination with `data.path`, the sample names must match the sample directory names. Unmatched directory names are dropped. 
If `cms` is provided, it is recommended to add summary metrics afterwards: ```{r cms, eval=FALSE} crm <- CRMetrics$new(cms = cms, n.cores = 10, pal = grDevices::rainbow(8), theme = ggplot2::theme_bw()) crm$addSummaryFromCms() ``` Please note, some functionality depends on aggregation of sample and cell IDs using the `sep.cell` parameter. The default is `!!` which creates cell names in the format of `<sampleID>!!<cellID>`. If another separator is used, this needs to be provided in relevant function calls. Here, the folder with our test data is stored in `/data/ExtData/CRMetrics_testdata/` and we provide metadata in a comma-separated file. ```{r init, eval=FALSE} crm <- CRMetrics$new(data.path = "/data/ExtData/CRMetrics_testdata/", metadata = "/data/ExtData/CRMetrics_testdata/metadata.csv", sep.meta = ",", n.cores = 10, verbose = FALSE, pal = grDevices::rainbow(8), theme = ggplot2::theme_bw()) ``` ```{r load-obj, echo = F} crm <- qs::qread("/data/ExtData/CRMetrics_testdata/crm.qs", nthreads = 10) ``` We can review our metadata ```{r meta} crm$metadata ``` ## Plot summary statistics We can investigate which metrics are available and choose the ones we would like to plot ```{r select-metrics} crm$selectMetrics() ``` ### Samples per condition First, we can plot the number of samples per condition. Here, we investigate how the distribution of the sex differs between the type of MS of the samples where RRMS is short for relapsing remitting MS, and SPMS is short for secondary progressive MS. ```{r plot-summary-metrics, eval=FALSE} crm$plotSummaryMetrics(comp.group = "sex", metrics = "samples per group", second.comp.group = "type", plot.geom = "bar") ``` ![](img/fig1.png) ### Metrics per sample In one plot, we can illustrate selected metric summary stats. If no comparison group is set, it defaults to `sample`. ```{r plot-sum-metrics-selected, fig.width=12, fig.height=12, eval=FALSE} metrics.to.plot <- crm$selectMetrics(ids = c(1:4,6,18,19)) crm$plotSummaryMetrics(comp.group = "sample", metrics = metrics.to.plot, plot.geom = "bar") ``` ![](img/fig2.png) ### Metrics per condition We can do the same, but set the comparison group to `type`. This will add statistics to the plots. Additionally, we can add a second comparison group for coloring. ```{r plot-sum-metrics-comp, fig.width=12, fig.height=10, eval=FALSE} crm$plotSummaryMetrics(comp.group = "type", metrics = metrics.to.plot, plot.geom = "point", stat.test = "non-parametric", second.comp.group = "sex") ``` ![](img/fig3.png) ### Metrics per condition with >2 levels For the sake of the example, we change the `RIN` values to `low` (RIN\<6), `medium` (6\<RIN\<7), and `high` (RIN\>7). This will provide us with three comparisons groups to exemplify how to use automated statistics for such situations. ```{r plot-sum-metrics-multilevel, fig.width=12, fig.height=10, eval=FALSE} crm$metadata$RIN %<>% as.character() %>% {c("medium","high","high","medium","high","high","low","high")} %>% factor(., levels = c("low", "medium", "high")) crm$plotSummaryMetrics(comp.group = "RIN", metrics = metrics.to.plot, plot.geom = "point", stat.test = "non-parametric", second.comp.group = "type", secondary.testing = TRUE) ``` ![](img/fig4.png) ### Metrics per condition with numeric covariate We can choose a numeric comparison group, in this case `age`, which will add regression lines to the plots. 
```{r plot-sum-metrics-num-cov, fig.height=10, fig.width=12, eval=FALSE}
crm$plotSummaryMetrics(comp.group = "age", metrics = metrics.to.plot, plot.geom = "point", second.comp.group = "type", se = FALSE)
```

![](img/fig5.png)

If the numeric vector has a significant effect on one of the metrics, we can investigate it more closely by performing regression analyses for both conditions of `type`.

```{r plot-sum-metrics-sec-comp, eval=FALSE}
crm$plotSummaryMetrics(comp.group = "age", metrics = "mean reads per cell", plot.geom = "point", second.comp.group = "type", group.reg.lines = TRUE)
```

![](img/fig6.png)

We see that there is no significant effect of the numeric vector on either of the MS types.

## Add detailed metrics

We can read in count matrices to assess detailed metrics. Otherwise, if count matrices have already been added earlier, this step prepares data for plotting UMI and gene counts.

```{r add-detailed-metrics, eval=FALSE}
crm$addCms(add.metadata = FALSE)
crm$addDetailedMetrics()
```

We plot the detailed metrics. The horizontal lines indicate the median values across all samples.

```{r plot-detailed-metrics, eval=FALSE}
metrics.to.plot <- crm$detailed.metrics$metric %>% unique()
crm$plotDetailedMetrics(comp.group = "type", metrics = metrics.to.plot, plot.geom = "violin")
```

![](img/fig7.png)

## Embed cells using Conos

In order to plot our cells in an embedding, we need to preprocess the raw count matrices. To do this, either `pagoda2` (default) or `Seurat` can be used.

```{r preprocessing, eval=FALSE}
crm$doPreprocessing()
```

Then, we create the embedding using `conos`.

```{r create-embedding, eval=FALSE}
crm$createEmbedding()
```

We can now plot our cells.

```{r plot-embedding, eval=FALSE}
crm$plotEmbedding()
```

![](img/fig8.png)

## Cell depth

We can plot cell depth, both in the embedding and as histograms per sample.

```{r plot-embedding-depth, eval=FALSE}
crm$plotEmbedding(depth = TRUE, depth.cutoff = 1e3)
```

![](img/fig9.png)

```{r plot-depth, eval=FALSE}
crm$plotDepth()
```

![](img/fig10.png)

We can see that the depth distribution varies between samples. We can create a cutoff vector specifying the depth cutoff per sample. It should be a numeric vector named with the sample names.

```{r plot-sw-depth}
depth_cutoff_vec <- c(2.5e3, 2e3, 1e3, 1.5e3, 1.5e3, 2e3, 2.5e3, 2e3) %>% 
  setNames(crm$detailed.metrics$sample %>% unique() %>% sort())

depth_cutoff_vec
```

Let's plot the updated cutoffs:

```{r plot-upd-depth, eval=FALSE}
crm$plotDepth(cutoff = depth_cutoff_vec)
```

![](img/fig11.png)

Also, we can do this in the embedding:

```{r plot-embedding-depth-upd, eval=FALSE}
crm$plotEmbedding(depth = TRUE, depth.cutoff = depth_cutoff_vec)
```

![](img/fig12.png)

## Mitochondrial fraction

We can also investigate the mitochondrial fraction in our cells.

```{r plot-emb-mf, eval=FALSE}
crm$plotEmbedding(mito.frac = TRUE, mito.cutoff = 0.05, species = "human")
```

![](img/fig13.png)

Similarly to depth, we can plot the distribution of the mitochondrial fraction per sample and include sample-wise cutoffs (not shown here).

```{r plot-mf, eval=FALSE}
crm$plotMitoFraction(cutoff = 0.05)
```

![](img/fig14.png)

# Remove ambient RNA

We have added functionality to remove ambient RNA from our samples. This approach should be used with caution since it changes the UMI counts (NB: it does not overwrite the outputs from Cell Ranger).
We have included preparative steps for [CellBender](https://github.com/broadinstitute/CellBender/) as well as incorporated [SoupX](https://github.com/constantAmateur/SoupX) into CRMetrics. ## CellBender ### Installation To install, follow [these instructions](https://cellbender.readthedocs.io/en/latest/installation/index.html#manual-installation). It is highly recommended to run `CellBender` using GPU acceleration. If you are more comfortable installing through `reticulate` in R, these lines should be run: ```{r cellbender-install, eval=FALSE} library(reticulate) conda_create("cellbender", conda = "/opt/software/miniconda/4.12.0/condabin/conda", python_version = 3.7) conda_install("cellbender", conda = "/opt/software/miniconda/4.12.0/condabin/conda", forge = FALSE, channel = "anaconda", packages = "pytables") conda_install("cellbender", conda = "/opt/software/miniconda/4.12.0/condabin/conda", packages = c("pytorch","torchvision","torchaudio"), channel = "pytorch") ``` Then, clone the `CellBender` repository as instructed in the manual. Here, we clone to `/apps/` through `cd /apps/; git clone https://github.com/broadinstitute/CellBender.git` and then `CellBender` can be installed: ```{r cellbender-install-2, eval=FALSE} conda_install("cellbender", conda = "/opt/software/miniconda/4.12.0/condabin/conda", pip = TRUE, pip_options = "-e", packages = "/apps/CellBender/") ``` ### Analysis For `CellBender`, we need to specify expected number of cells and total droplets included (please see the [manual](https://cellbender.readthedocs.io/en/latest/usage/index.html) for additional information). As hinted in the manual, the number of total droplets included could be expected number of cells multiplied by 3 (which we set as default). First, we plot these measures: ```{r cbprep, eval=FALSE} crm$prepareCellbender() ``` ![](img/fig15.png) We could change the total droplets included for any sample. Let us first look at the vector. ```{r totdrops} droplets <- crm$getTotalDroplets() droplets ``` Then we change the total droplets for SRR15054424. ```{r change-topdrops} droplets["SRR15054424"] <- 2e4 ``` We plot this change. ```{r cbprep-totdrops, eval=FALSE} crm$prepareCellbender(shrinkage = 100, show.expected.cells = TRUE, show.total.droplets = TRUE, total.droplets = droplets) ``` ![](img/fig16.png) We could also multiply expected cells by 2.5 for all samples and save this in our CRMetrics object. ```{r cb-totdrops-multiply, eval=FALSE} crm$cellbender$total.droplets <- crm$getTotalDroplets(multiplier = 2.5) ``` Finally, we save a script for running `CellBender` on all our samples. To only select specific samples, use the `samples` parameter. Here, we use our modified total droplet vector. If `total.droplets` is not specified, it will use the stored vector at `crm$cellbender$total.droplets`. ```{r cb-save, eval=FALSE} crm$saveCellbenderScript(file = "/apps/cellbender_script.sh", fpr = 0.01, epochs = 150, use.gpu = TRUE, total.droplets = droplets) ``` We can run this script in the terminal. Here, we activate the environment: `conda activate cellbender` and we run the bash script: `sh /apps/cellbender_script.sh` ### Plotting We can plot the changes in cell numbers following CellBender estimations. ```{r cb-plotcells, eval=FALSE} crm$plotCbCells() ``` ![](img/fig17.png) We can plot the CellBender training results. ```{r cb-plottraining, fig.width = 12, fig.height = 10, eval=FALSE} crm$plotCbTraining() ``` ![](img/fig18.png) We can plot the cell probabilities. 
```{r cb-plotcellprobs, fig.width = 12, fig.height = 10, eval=FALSE} crm$plotCbCellProbs() ``` ![](img/fig19.png) We can plot the identified ambient genes per sample. ```{r cb-plotambexp, fig.width = 12, fig.height = 10, eval=FALSE} crm$plotCbAmbExp(cutoff = 0.005) ``` ![](img/fig20.png) Lastly, we can plot the proportion of samples expressing ambient genes. We see that *MALAT1* is identified as an ambient gene in all samples [which is expected](https://kb.10xgenomics.com/hc/en-us/articles/360004729092-Why-do-I-see-high-levels-of-Malat1-in-my-gene-expression-data-). ```{r cb-plotambgenes, eval=FALSE} crm$plotCbAmbGenes(cutoff = 0.005, pal = grDevices::rainbow(17)) ``` ![](img/fig21.png) Then, we can add the filtered CMs to our object. Additionally, it is recommended to generate new summary metrics from the adjusted CMs which can then be plotted. ```{r, eval = FALSE} crm$cms <- NULL crm$addCms(cellbender = T) crm$addSummaryFromCms() crm$detailed.metrics <- NULL crm$addDetailedMetrics() ``` ## SoupX The implementation of SoupX uses the automated estimation of contamination and correction. Please note, SoupX depends on Seurat for import of data. Since this calculation takes several minutes, it is not run in this vignette. ```{r runsoupx, eval=FALSE} crm$runSoupX() ``` Then, we can plot the corrections. ```{r plotsoupx, eval=FALSE} crm$plotSoupX() ``` ![](img/fig22.png) Then, we can add the SoupX adjusted CMs to our object. Additionally, it is recommended to generate new summary metrics from the adjusted CMs which can then be plotted. ```{r add-adj-cms} crm$cms <- NULL crm$addCms(cms = crm$soupx$cms.adj, unique.names = TRUE, sep = "!!") crm$addSummaryFromCms() crm$detailed.metrics <- NULL crm$addDetailedMetrics() ``` # Doublet detection For doublet detection, we included the possibility to do so using the Python modules `scrublet` and `DoubletDetection`. Here, we use virtual environments where we installed each method. `scrublet` is the default method, which is fast. `DoubletDetection` is significantly slower, but performs better according to [this](https://www.sciencedirect.com/science/article/pii/S2405471220304592) review. Here, we show how to run `scrublet` and `DoubletDetection` to compare in the next section. Since this takes some time, the results have been precalculated and are not run in this vignette. ```{r run-dd, eval=FALSE} crm$detectDoublets(env = "scrublet", conda.path = "/opt/software/miniconda/4.12.0/condabin/conda", method = "scrublet") crm$detectDoublets(env = "doubletdetection", conda.path = "/opt/software/miniconda/4.12.0/condabin/conda", method = "doubletdetection") ``` It is also possible to generate a Python script to run each method from the terminal. To do this, set `export = TRUE` and run `python <METHOD>.py` inside the virtual environment to generate data. Then, load data with: ```{r, eval = FALSE} crm$addDoublets() ``` We can plot the estimated doublets in the embedding. ```{r plot-scrublet-embedding, eval=FALSE} crm$plotEmbedding(doublet.method = "scrublet") crm$plotEmbedding(doublet.method = "doubletdetection") ``` ![](img/fig23a.png) ![](img/fig23b.png) And we can plot the scores for the doublet estimations. ```{r plot-scrublet-scores, eval=FALSE} crm$plotEmbedding(doublet.method = "scrublet", doublet.scores = TRUE) crm$plotEmbedding(doublet.method = "doubletdetection", doublet.scores = TRUE) ``` ![](img/fig24a.png) ![](img/fig24b.png) ## Differences between methods We can compare how much `scrublet` and `DoubletDetection` overlap in their doublets estimates. 
First, let us plot a bar plot of the number of doublets per sample. ```{r compare-dd-res, eval=FALSE} scrub.res <- crm$doublets$scrublet$result %>% select(labels, sample) %>% mutate(method = "scrublet") dd.res <- crm$doublets$doubletdetection$result %>% select(labels, sample) %>% mutate(labels = as.logical(labels), method = "DoubletDetection") dd.res[is.na(dd.res)] <- FALSE plot.df <- rbind(scrub.res, dd.res) %>% filter(labels) %>% group_by(sample, method) %>% summarise(count = n()) ggplot(plot.df, aes(sample, count, fill = method)) + geom_bar(stat = "identity", position = position_dodge()) + crm$theme + theme(axis.text.x = element_text(angle = 90, vjust = 0.5)) + labs(x = "", y = "No. doublets", fill = "Method", title = "Doublets per sample") + scale_fill_manual(values = crm$pal) ``` ![](img/fig25.png) We can also show the total number of doublets detected per method. ```{r plot-dd-per-method, eval=FALSE} plot.df %>% group_by(method) %>% summarise(count = sum(count)) %>% ggplot(aes(method, count, fill = method)) + geom_bar(stat = "identity") + crm$theme + guides(fill = "none") + labs(x = "", y = "No. doublets", title = "Total doublets per method") + scale_fill_manual(values = crm$pal) ``` ![](img/fig26.png) Finally, let's plot an embedding showing the method-wise estimations as well as overlaps. ```{r plot-dd-emb-per-method, eval=FALSE} plot.vec <- data.frame(scrublet = scrub.res$labels %>% as.numeric(), doubletdetection = dd.res$labels %>% as.numeric()) %>% apply(1, \(x) if (x[1] == 0 & x[2] == 0) "Kept" else if (x[1] > x[2]) "scrublet" else if (x[1] < x[2]) "DoubletDetection" else "Both") %>% setNames(rownames(scrub.res)) %>% factor(levels = c("Kept","scrublet","DoubletDetection","Both")) crm$con$plotGraph(groups = plot.vec, mark.groups = FALSE, show.legend = TRUE, shuffle.colors = TRUE, title = "Doublets", size = 0.3) + scale_color_manual(values = c("grey80","red","blue","black")) ``` ![](img/fig27.png) # Plot filtered cells We can plot all the cells to be filtered in our embedding ```{r plot-filtered-cells-emb, eval=FALSE} crm$plotFilteredCells(type = "embedding", depth = TRUE, depth.cutoff = depth_cutoff_vec, doublet.method = "scrublet", mito.frac = TRUE, mito.cutoff = 0.05, species = "human") ``` ![](img/fig28.png) And we can plot the cells to be filtered per sample where `combination` means a cell that has at least two filter labels, e.g. `mito` and `depth`. ```{r plot-filtered-cells-bar, eval=FALSE} crm$plotFilteredCells(type = "bar", doublet.method = "scrublet", depth = TRUE, depth.cutoff = depth_cutoff_vec, mito.frac = TRUE, mito.cutoff = 0.05, species = "human") ``` ![](img/fig29.png) Finally, we can create a tile plot with an overview of sample quality for the different filters. NB, this is experimental and has not been validated across datasets. ```{r plot-filtered-cells-tile, eval=FALSE} crm$plotFilteredCells(type = "tile", doublet.method = "doubletdetection", depth = TRUE, depth.cutoff = depth_cutoff_vec, mito.frac = TRUE, mito.cutoff = 0.05, species = "human") ``` ![](img/fig30.png) We can also extract the raw numbers for plotting in other ways than those included here ```{r export-filtered-cells, eval=FALSE} filter.data <- crm$plotFilteredCells(type = "export") filter.data %>% head() ``` # Filter count matrices Finally, we can filter the count matrices to create a cleaned list to be used in downstream applications. 
```{r filter}
crm$filterCms(depth.cutoff = depth_cutoff_vec,
              mito.cutoff = 0.05,
              doublets = "scrublet",
              samples.to.exclude = NULL,
              species = "human",
              verbose = TRUE)
```

The filtered list of count matrices is stored in `$cms.filtered`, which can be saved to disk afterwards.

```{r save, eval=FALSE}
qs::qsave(crm$cms.filtered, "/data/ExtData/CRMetrics_testdata/cms_filtered.qs", nthreads = 10)
```

```{r session-info}
sessionInfo()
```
#' @keywords internal
"_PACKAGE"

## usethis namespace: start
#' @useDynLib CRTConjoint
#' @importFrom Rcpp sourceCpp
#' @importFrom foreach %dopar%
#' @importFrom stats model.matrix
#' @importFrom stats formula
#' @importFrom stats contrasts
#' @importFrom utils capture.output
#' @importFrom methods is
## usethis namespace: end
NULL
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393

demean_center <- function(zz, rows, cols, m) {
    .Call('_CRTConjoint_demean_center', PACKAGE = 'CRTConjoint', zz, rows, cols, m)
}

fast_demean <- function(zz, rows, cols) {
    .Call('_CRTConjoint_fast_demean', PACKAGE = 'CRTConjoint', zz, rows, cols)
}
#' Immigration Choice Conjoint Experiment Data from Hainmueller et al. (2014).
#'
#' This dataset is a subset of the first 1000 rows chosen from the original immigration dataset.
#' Each row consists of a pair of immigrant candidates that were shown to respondents.
#' The respondent then chooses either the left or right profile (main binary response) from
#' the nine profile factors shown for each candidate. Respondent characteristics are also recorded.
#' For example, the first row shows that the left immigrant candidate was a Male from Iraq
#' who had a high school degree, etc., while the right immigrant candidate was a Female from France
#' with no formal education, etc. The respondent who evaluated this task was a 20 year old,
#' college educated, White Male, who voted for the left immigrant candidate.
#'
#' @format A data frame with 1000 rows and 23 variables:
#' \describe{
#'   \item{FeatEd}{Education of left candidate containing levels: "College Degree", "Graduate Degree", "Eighth Grade", "Fourth Grade", "High School", "Two Years of College", and "No Education"}
#'   \item{FeatGender}{Gender of left candidate containing levels: "Female" and "Male"}
#'   \item{FeatCountry}{Country of origin of left candidate containing levels: "Poland", "France", "Iraq", "Somalia", "Sudan", "China", "Mexico", "Germany", "Philippines", and "India"}
#'   \item{FeatReason}{Reason for immigration of left candidate containing levels: "Escape political/religious persecution", "Reunite with family members", and "Seek better job"}
#'   \item{FeatJob}{Occupation of left candidate containing levels: "Waiter", "Child care provider", "Teacher", "Nurse", "Construction worker", "Janitor", "Gardener", "Financial analyst", "Computer programmer", "Research scientist", and "Doctor"}
#'   \item{FeatExp}{Job experience of left candidate containing levels: "More than five years", "No job training", "One-two years", and "Three-five years"}
#'   \item{FeatPlans}{Job plans of left candidate containing levels: "Does not contract with U.S. employer but have job interview", "No Contract", "No plans", and "Will look for work after arriving"}
#'   \item{FeatTrips}{Trips to U.S. of left candidate containing levels: "Entered U.S. once without legal authorization", "Entered U.S. once before on tourist visa", "Multiple visits on tourist visa", "Never visited", and "Spent six months with family in U.S."}
#'   \item{FeatLang}{Language of left candidate containing levels: "Used interpreter", "Broken English", "Unable to speak English", and "Fluent English"}
#'   \item{FeatEd_2}{Education of right candidate containing levels: "College Degree", "Graduate Degree", "Eighth Grade", "Fourth Grade", "High School", "Two Years of College", and "No Education"}
#'   \item{FeatGender_2}{Gender of right candidate containing levels: "Female" and "Male"}
#'   \item{FeatCountry_2}{Country of origin of right candidate containing levels: "Poland", "France", "Iraq", "Somalia", "Sudan", "China", "Mexico", "Germany", "Philippines", and "India"}
#'   \item{FeatReason_2}{Reason for immigration of right candidate containing levels: "Escape political/religious persecution", "Reunite with family members", and "Seek better job"}
#'   \item{FeatJob_2}{Occupation of right candidate containing levels: "Waiter", "Child care provider", "Teacher", "Nurse", "Construction worker", "Janitor", "Gardener", "Financial analyst", "Computer programmer", "Research scientist", and "Doctor"}
#'   \item{FeatExp_2}{Job experience of right candidate containing levels: "More than five years", "No job training", "One-two years", and "Three-five years"}
#'   \item{FeatPlans_2}{Job plans of right candidate containing levels: "Does not contract with U.S. employer but have job interview", "No Contract", "No plans", and "Will look for work after arriving"}
#'   \item{FeatTrips_2}{Trips to U.S. of right candidate containing levels: "Entered U.S. once without legal authorization", "Entered U.S. once before on tourist visa", "Multiple visits on tourist visa", "Never visited", and "Spent six months with family in U.S."}
#'   \item{FeatLang_2}{Language of right candidate containing levels: "Used interpreter", "Broken English", "Unable to speak English", and "Fluent English"}
#'   \item{ppage}{Respondent age (numeric variable)}
#'   \item{ppeducat}{Respondent education level}
#'   \item{ppethm}{Respondent ethnicity}
#'   \item{ppgender}{Respondent gender}
#'   \item{Y}{Binary response variable Y: 1 if the left profile is selected and 0 otherwise}
#' }
"immigrationdata"
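# ---------------------------------------------------------------------------
# Illustrative sketch (not an exported example): inspecting the conjoint data
# documented above. Column roles are taken from the variable listing; nothing
# here is required by the package, and the code is only meant as orientation.
if (FALSE) {
  data("immigrationdata", package = "CRTConjoint")
  dim(immigrationdata)      # 1000 rows, 23 columns
  table(immigrationdata$Y)  # binary response: 1 = left profile chosen

  # Left-profile factors (FeatEd ... FeatLang) and their right-profile
  # counterparts (FeatEd_2 ... FeatLang_2), as described in the @format block.
  profile_factors <- c("FeatEd", "FeatGender", "FeatCountry", "FeatReason",
                       "FeatJob", "FeatExp", "FeatPlans", "FeatTrips", "FeatLang")
  left_idx  <- match(profile_factors, colnames(immigrationdata))
  right_idx <- match(paste0(profile_factors, "_2"), colnames(immigrationdata))
}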
# Obtaining main HierNet test statistic hiernet_group = function(hiernet_object, idx, X, analysis = 0, group) { main = hiernet_object$bp[idx] - hiernet_object$bn[idx] main_contribution = vector() for (i in 1:length(group)) { main_contribution[i] = sum((main[group[[i]]] - mean(main[group[[i]]]))^2) } I_int = list() for (i in 1:length(idx)) { I_int[[i]] = (hiernet_object$th[idx[i], ] + hiernet_object$th[, idx[i]])/2 } int_cont_bygroup = vector() tracking_all_ints = list() for (i in 1:length(group)) { in_int = I_int[group[[i]]] int_means = Reduce("+", in_int)/length(in_int) getting_int_cont = vector() for (j in 1:length(in_int[[1]])) { rel_ints = sapply(in_int, "[[", j) getting_int_cont[j] = sum((rel_ints - int_means[j])^2) } tracking_all_ints[[i]] = getting_int_cont int_cont_bygroup[i] = sum(getting_int_cont) } # divide by 2 to account for counting both left and right profile ts = sum(main_contribution)/2 + sum(int_cont_bygroup)/2 if (analysis > 0) { main_contribution = sum(main_contribution)/2 int_contribution = sum(int_cont_bygroup)/2 largest_ints = largest_ints_idx = list() for (i in 1:length(group)) { largest_ints[[i]] = sort(tracking_all_ints[[i]], decreasing = TRUE)[c(1:(2*analysis))] rel_int_name = paste0(colnames(X)[idx][group[[i]]], collapse = ", ") largest_ints_idx[[i]] = paste0("interactions with ",colnames(X)[order(tracking_all_ints[[i]], decreasing = TRUE)[c(1:(2*analysis))]]) } largest_ints = unlist(largest_ints) largest_ints_idx = unlist(largest_ints_idx) largest_ints_final = sort(largest_ints, decreasing = TRUE)[seq(1, (2*analysis), by = 2)] largest_int_contributer = largest_ints_idx[order(largest_ints, decreasing = TRUE)[seq(1, (2*analysis), by = 2)]] analysis_list = list() analysis_list$observed_TS = ts analysis_list$main_contirbution = main_contribution analysis_list$int_contribution = int_contribution analysis_list$largest_int_size = largest_ints_final analysis_list$largest_int_contributer = largest_int_contributer return(analysis_list) } else { return(ts) } } # Obtaining profile order effect test statistic PO_stat = function(hiernet_object, in_idx_left, in_idx_right, in_idx_respondent) { main_1 = hiernet_object$bp[in_idx_left] - hiernet_object$bn[in_idx_left] main_2 = hiernet_object$bp[in_idx_right] - hiernet_object$bn[in_idx_right] # interaction effects within_int_list_left = list() within_int_list_right = list() for (i in 1:length(in_idx_left)) { within_int_list_left[[i]] = (hiernet_object$th[in_idx_left[i], in_idx_left] + hiernet_object$th[in_idx_left, in_idx_left[i]])/2 within_int_list_right[[i]] = (hiernet_object$th[in_idx_right[i], in_idx_right] + hiernet_object$th[in_idx_right, in_idx_right[i]])/2 } within_diff = unlist(Map("+", within_int_list_left, within_int_list_right)) #between profiles between_int_list_left = list() between_int_list_right = list() for (i in 1:length(in_idx_left)) { between_int_list_left[[i]] = (hiernet_object$th[in_idx_left[i], in_idx_right] + hiernet_object$th[in_idx_right, in_idx_left[i]])/2 between_int_list_right[[i]] = (hiernet_object$th[in_idx_right[i], in_idx_left] + hiernet_object$th[in_idx_left, in_idx_right[i]])/2 } between_diff = unlist(Map("+", between_int_list_left, between_int_list_right)) #respondent R_int_list_left = list() R_int_list_right = list() for (i in 1:length(in_idx_left)) { R_int_list_left[[i]] = (hiernet_object$th[in_idx_left[i], in_idx_respondent] + hiernet_object$th[in_idx_respondent, in_idx_left[i]])/2 R_int_list_right[[i]] = (hiernet_object$th[in_idx_right[i], in_idx_respondent] + 
hiernet_object$th[in_idx_respondent, in_idx_right[i]])/2 } respondent_effects = unlist(Map("+", R_int_list_left, R_int_list_right)) #division of two in the interactions because of overcounting stat= sum((main_1 + main_2)^2) + sum(within_diff^2) + sum(between_diff^2) + sum((respondent_effects)^2) return(stat) } # Obtaining carryover effect test statistic CO_stat = function(hiernet_object, idx) { I_list = list() for (i in 1:length(idx)) { I_list[[i]] = (hiernet_object$th[idx[i], -idx] + hiernet_object$th[-idx, idx[i]])/2 } ts = sum(unlist(I_list)^2) return(ts) } # Main helper function that performs CRT to test for Y independent of X given Z get_CRT_pval = function(x, y, xcols, left_idx, right_idx, design, B, num_cores, profileorder_constraint, lambda, non_factor_idx, in_levs, analysis, p, resample_func_1, resample_func_2, tol, resample_X, full_X, restricted_X, left_allowed, right_allowed, forced, speedup, seed, supplyown_resamples, parallel, nfolds, verbose) { num_x_levs = levels(x[, xcols[1]]) # checks if (!(all(sapply(x[, !(1:ncol(x)) %in% non_factor_idx], function(x) is(x, "factor"))))) stop("factors provided in formula are not all factors please supply non_factor_idx") if (!all(in_levs %in% num_x_levs)) stop("in_levs supplied are not levels in X") if (length(left_idx) != length(right_idx)) stop("length left idx does not match right idx") if (length(xcols) != 2) stop("supply left and right column for factor X") if (!is.null(p)) { if (!is.null(in_levs)) { if (length(p) != length(xcols)) stop("length of supplied probability should match length of interested levels in_levs") } else { if (length(num_x_levs) != p) stop("length of supplied probability should match length of number of levels in X") } if (length(p) != length(xcols)) stop("length of supplied probability should match length of interested levels in_levs") } # forcing no profile order effect constraint if (profileorder_constraint) { x_df = x n = nrow(x_df) empty_df = x_df y_new = 1 - y for (i in 1:length(left_idx)) { empty_df[, left_idx[i]] = x_df[, right_idx[i]] empty_df[, right_idx[i]] = x_df[, left_idx[i]] } final_df = rbind(x_df, empty_df) final_df$Y = c(y, y_new) } else { final_df = x final_df$Y = y } if (is.null(in_levs)) { X_names = list() for (j in 1:length(xcols)) { start_name = colnames(x)[xcols[j]] resulting_X_name = vector() for (i in 1:length(num_x_levs)) { resulting_X_name[i] = paste0(start_name, num_x_levs[i]) } X_names[[j]] = resulting_X_name } all_X_names = unlist(X_names) } else { #### focus only on the relevant levels given X_names = list() for (j in 1:length(xcols)) { start_name = colnames(x)[xcols[j]] resulting_X_name = vector() for (i in 1:length(in_levs)) { resulting_X_name[i] = paste0(start_name, in_levs[i]) } X_names[[j]] = resulting_X_name } all_X_names = unlist(X_names) } # create X matrix and track index for the relevant effects if (is.null(forced)) { form = formula(paste0("Y ~ . ")) X = model.matrix(form, final_df, contrasts.arg = lapply(final_df[, -c(non_factor_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] idx_in = (1:ncol(X))[colnames(X) %in% all_X_names] all_X_names = X_names } else { x_names = colnames(x)[xcols] forced_names = colnames(x)[forced] force_syntax = "" for (i in 1:length(forced_names)) { force_syntax = paste0(force_syntax, " + ", x_names[1], "*", forced_names[i], " + ", x_names[2], "*", forced_names[i]) } force_syntax = substr(force_syntax, 4, nchar(force_syntax)) form = formula(paste0("Y ~ . 
+ ", force_syntax)) X = model.matrix(form, final_df, contrasts.arg = lapply(final_df[, -c(non_factor_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] X_forced_names = vector() forced_levs = unique(x[, forced[1]]) all_forced_names = vector() for (j in 1:length(forced)) { start_name = colnames(x)[forced[j]] resulting_forced_name = vector() for (i in 1:length(forced_levs)) { resulting_forced_name[i] = paste0(start_name, forced_levs[i]) } all_forced_names = c(all_forced_names, resulting_forced_name) } #repeat this for X:Z and Z:X since model.matrix arbitarily assigns them all_possible_ints_1 = list() for (j in 1:length(all_forced_names)) { temp_vec = list() for (i in 1:length(X_names)) { temp_vec[[i]] = paste0(all_forced_names[j], ":", X_names[[i]]) } all_possible_ints_1 = c(all_possible_ints_1, temp_vec) } all_possible_ints_2 = list() for (j in 1:length(all_forced_names)) { temp_vec = list() for (i in 1:length(X_names)) { temp_vec[[i]] = paste0(X_names[[i]], ":", all_forced_names[j]) } all_possible_ints_2 = c(all_possible_ints_2, temp_vec) } all_possible_ints = unlist(c(all_possible_ints_1, all_possible_ints_2)) idx_in = (1:ncol(X))[colnames(X) %in% c(all_X_names, all_possible_ints)] all_X_names = c(X_names, all_possible_ints_1, all_possible_ints_2) } # this is a quick harmless fix to avoid columns that have all zero check_cols = sapply(data.frame(X), function(x) length(unique(x))) trouble_cols = as.vector(which(check_cols == 1)) if (length(trouble_cols) > 0) { X[, trouble_cols][1, ] = X[, trouble_cols][1, ] + 1e-5 } in_col_names = colnames(X)[idx_in] group_idx_track = vector() for (i in 1:length(in_col_names)) { group_idx_track[i] = which((sapply(sapply(all_X_names, function(x) which(in_col_names[i] == x)), length)) >0) } group = list() unique_group_idx = unique(group_idx_track) for (i in 1:length(unique_group_idx)) { group[[i]] = which(group_idx_track == unique_group_idx[i]) } # if in_levs is non-null find idx to resample if (is.null(in_levs)) { idx_1 = idx_2 = 1:nrow(x) } else { idx_1 = (1:nrow(x))[x[, xcols[1]] %in% c(in_levs)] idx_2 = (1:nrow(x))[x[, xcols[2]] %in% c(in_levs)] } num_sample_1 = length(idx_1) num_sample_2 = length(idx_2) set.seed(seed) # Creates only Z matrix to perform CV on Z if (speedup) { Z_df = final_df[, -xcols] if (is.null(non_factor_idx)) { Z_long = model.matrix(Y ~ . , Z_df, contrasts.arg = lapply(Z_df[, 1:(ncol(Z_df) - 1)], contrasts, contrasts = FALSE))[, -1] } else { non_factor_names = colnames(x)[non_factor_idx] Z_nonfactor_idx = (1:ncol(Z_df))[(colnames(Z_df) %in% non_factor_names)] Z_long = model.matrix(Y ~ . 
, Z_df, contrasts.arg = lapply(Z_df[, -c(Z_nonfactor_idx, ncol(Z_df))], contrasts, contrasts = FALSE))[, -1] } # obtains CV lambda best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = Z_long, y_var = Z_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed) if (verbose) { cat("Initial step completed: finished computing cross validated lambda") } } else { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = as.matrix(X), y_var = final_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed) } if (speedup) { chosen_initial = sample(c(0:B), size = 1) if (chosen_initial != 0) { x_df = x if (is.null(supplyown_resamples)) { if (design == "Uniform") { newx_1 = factor(resample_func_1(resample_X, nrow(x))) newx_2 = factor(resample_func_2(resample_X, nrow(x))) } if (design == "Constrained Uniform") { newx_1 = factor(resample_func_1(full_X, restricted_X, nrow(x), left_allowed)) newx_2 = factor(resample_func_2(full_X, restricted_X, nrow(x), left_allowed)) } if (design == "Nonuniform") { newx_1 = factor(resample_func_1(resample_X, nrow(x), p)) newx_2 = factor(resample_func_2(resample_X, nrow(x), p)) } } else { newx_1 = factor(supplyown_resamples[[chosen_initial]][, 1]) newx_2 = factor(supplyown_resamples[[chosen_initial]][, 2]) } x_df[, xcols[1]][idx_1] = newx_1[idx_1] x_df[, xcols[2]][idx_2] = newx_2[idx_2] # forcing no profile order effect constraint if (profileorder_constraint) { n = nrow(x_df) empty_df = x_df y_new = 1 - y for (i in 1:length(left_idx)) { empty_df[, left_idx[i]] = x_df[, right_idx[i]] empty_df[, right_idx[i]] = x_df[, left_idx[i]] } final_df = rbind(x_df, empty_df) final_df$Y = c(y, y_new) } else { final_df = x_df final_df$Y = y } X_initial = model.matrix(form, final_df, contrasts.arg = lapply(final_df[, -c(non_factor_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] # this is a quick harmless fix to avoid columns that have all zero check_cols = sapply(data.frame(X_initial), function(x) length(unique(x))) trouble_cols = as.vector(which(check_cols == 1)) if (length(trouble_cols) > 0) { X_initial[, trouble_cols][1, ] = X_initial[, trouble_cols][1, ] + 1e-5 } invisible(capture.output(initial <- hierNet_logistic(as.matrix(X_initial), final_df$Y, lam= best_lam, tol = tol))) } else { invisible(capture.output(initial <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol))) } aa = initial } else { aa = NULL } invisible(capture.output(fit <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol, aa = aa))) obs_test_stat = hiernet_group(fit, idx = idx_in, X = X, analysis = analysis, group = group) ## parallel computing setup if (parallel) { cl <- snow::makeCluster(num_cores) doSNOW::registerDoSNOW(cl) if (verbose) { iterations <- B pb <- utils::txtProgressBar(max = iterations, style = 3) progress <- function(n) utils::setTxtProgressBar(pb, n) opts <- list(progress = progress) } else { opts = NULL } ## e <- foreach::foreach(j = 1:iterations, .combine = rbind, .options.snow = opts) %dopar% { set.seed(j + B) library(CRTConjoint) x_df = x if (is.null(supplyown_resamples)) { if (design == "Uniform") { newx_1 = factor(resample_func_1(resample_X, nrow(x))) newx_2 = factor(resample_func_2(resample_X, nrow(x))) } if (design == "Constrained Uniform") { newx_1 = factor(resample_func_1(full_X, restricted_X, nrow(x), left_allowed)) newx_2 = factor(resample_func_2(full_X, restricted_X, nrow(x), right_allowed)) } if (design == "Nonuniform") { newx_1 = factor(resample_func_1(resample_X, nrow(x), p)) newx_2 = 
factor(resample_func_2(resample_X, nrow(x), p)) } } else { newx_1 = factor(supplyown_resamples[[j]][, 1]) newx_2 = factor(supplyown_resamples[[j]][, 2]) } x_df[, xcols[1]][idx_1] = newx_1[idx_1] x_df[, xcols[2]][idx_2] = newx_2[idx_2] # forcing no profile order effect constraint if (profileorder_constraint) { n = nrow(x_df) empty_df = x_df y_new = 1 - y for (i in 1:length(left_idx)) { empty_df[, left_idx[i]] = x_df[, right_idx[i]] empty_df[, right_idx[i]] = x_df[, left_idx[i]] } final_df = rbind(x_df, empty_df) final_df$Y = c(y, y_new) } else { final_df = x_df final_df$Y = y } X = model.matrix(form, final_df, contrasts.arg = lapply(final_df[, -c(non_factor_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] # this is a quick harmless fix to avoid columns that have all zero check_cols = sapply(data.frame(X), function(x) length(unique(x))) trouble_cols = as.vector(which(check_cols == 1)) if (length(trouble_cols) > 0) { X[, trouble_cols][1, ] = X[, trouble_cols][1, ] + 1e-5 } if (!speedup) { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = as.matrix(X), y_var = final_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed) } invisible(capture.output(fit <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol, aa = aa))) resamp_TS = hiernet_group(fit, idx = idx_in, X = X, analysis = 0, group = group) return(resamp_TS) } snow::stopCluster(cl) } else { e = vector() for (j in 1:B) { set.seed(j + B) x_df = x if (is.null(supplyown_resamples)) { if (design == "Uniform") { newx_1 = factor(resample_func_1(resample_X, nrow(x))) newx_2 = factor(resample_func_2(resample_X, nrow(x))) } if (design == "Constrained Uniform") { newx_1 = factor(resample_func_1(full_X, restricted_X, nrow(x), left_allowed)) newx_2 = factor(resample_func_2(full_X, restricted_X, nrow(x), right_allowed)) } if (design == "Nonuniform") { newx_1 = factor(resample_func_1(resample_X, nrow(x), p)) newx_2 = factor(resample_func_2(resample_X, nrow(x), p)) } } else { newx_1 = factor(supplyown_resamples[[j]][, 1]) newx_2 = factor(supplyown_resamples[[j]][, 2]) } x_df[, xcols[1]][idx_1] = newx_1[idx_1] x_df[, xcols[2]][idx_2] = newx_2[idx_2] # forcing no profile order effect constraint if (profileorder_constraint) { n = nrow(x_df) empty_df = x_df y_new = 1 - y for (i in 1:length(left_idx)) { empty_df[, left_idx[i]] = x_df[, right_idx[i]] empty_df[, right_idx[i]] = x_df[, left_idx[i]] } final_df = rbind(x_df, empty_df) final_df$Y = c(y, y_new) } else { final_df = x_df final_df$Y = y } X = model.matrix(form, final_df, contrasts.arg = lapply(final_df[, -c(non_factor_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] # this is a quick harmless fix to avoid columns that have all zero check_cols = sapply(data.frame(X), function(x) length(unique(x))) trouble_cols = as.vector(which(check_cols == 1)) if (length(trouble_cols) > 0) { X[, trouble_cols][1, ] = X[, trouble_cols][1, ] + 1e-5 } if (!speedup) { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = as.matrix(X), y_var = final_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed) } invisible(capture.output(fit <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol, aa = aa))) e[j] = hiernet_group(fit, idx = idx_in, X = X, analysis = 0, group = group) if (verbose) { cat(paste0("Done with task: ",j, " out of ", B, " resamples")) } } } p_val = (length(which(e >= as.numeric(obs_test_stat[1]))) + 1)/(B + 1) out = list() out$p_val = p_val out$obs_test_stat = obs_test_stat out$resampled_test_stat = e out$tol = 
tol if (speedup) { out$lam = best_lam } out$hiernet_fit = fit out$seed = seed return(out) } # Main helper function that performs CRT to test for no profile order effect get_profileordereffect = function(x, y, left_idx, right_idx, B, num_cores, lambda, non_factor_idx, tol, speedup, seed, parallel, nfolds, verbose) { # checks if (!(all(sapply(x[, !(1:ncol(x)) %in% non_factor_idx], function(x) is(x, "factor"))))) stop("factors provided in formula are not all factors please supply non_factor_idx") if (length(left_idx) != length(right_idx)) stop("length left idx does not match right idx") final_df = x final_df$Y = y form = formula(paste0("Y ~ . ")) X = model.matrix(form, final_df, contrasts.arg = lapply(final_df[, -c(non_factor_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] set.seed(seed) if (speedup) { chosen_initial = sample(c(0:B), size = 1) if (chosen_initial != 0) { x_df = x resampling_df = final_df a = sample(c(0,1), size =nrow(x), replace = TRUE) sample_idx = which(a == 1) kept = resampling_df[-sample_idx, ] b = resampling_df[sample_idx, ] new_y = 1 - b$Y new_first = b[, left_idx] new_second = b[, right_idx] name_1 = colnames(new_first) name_2 = colnames(new_second) colnames(new_first) = name_2 colnames(new_second) = name_1 new_df = cbind(new_second, new_first) new_df = cbind(new_df, b[, -c(left_idx, right_idx, ncol(b))]) new_df$Y = new_y final_df_initial = rbind(kept, new_df) X_initial = model.matrix(form, final_df_initial, contrasts.arg = lapply(final_df_initial[, -c(non_factor_idx, ncol(final_df_initial))], contrasts, contrasts = FALSE))[, -1] best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = X_initial, y_var = final_df_initial$Y, tol = tol, constraint = FALSE, seed = seed) invisible(capture.output(initial <- hierNet_logistic(as.matrix(X_initial), final_df$Y, lam= best_lam, tol = tol))) if (verbose) { cat("Initial step completed: finished computing cross validated lambda") } } else { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = X, y_var = final_df$Y, tol = tol, constraint = FALSE, seed = seed) invisible(capture.output(initial <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol))) } aa = initial } else { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = X, y_var = final_df$Y, tol = tol, constraint = FALSE, seed = seed) aa = NULL } invisible(capture.output(fit <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol, aa = aa))) # get relevant index left_X = final_df[, c(left_idx, ncol(final_df))] X_left = model.matrix(form, left_X, contrasts.arg = lapply(left_X[, -c(non_factor_idx, ncol(left_X))], contrasts, contrasts = FALSE))[, -1] in_idx_left = (1:ncol(X))[colnames(X) %in% colnames(X_left)] right_X = final_df[, c(right_idx, ncol(final_df))] X_right = model.matrix(form, right_X, contrasts.arg = lapply(right_X[, -c(non_factor_idx, ncol(right_X))], contrasts, contrasts = FALSE))[, -1] in_idx_right = (1:ncol(X))[colnames(X) %in% colnames(X_right)] in_idx_respondent = (1:ncol(X))[-c(in_idx_left, in_idx_right)] # obs_test_stat = PO_stat(fit, in_idx_left, in_idx_right, in_idx_respondent) ## parallel computing setup if (parallel) { cl <- snow::makeCluster(num_cores) doSNOW::registerDoSNOW(cl) if (verbose) { iterations <- B pb <- utils::txtProgressBar(max = iterations, style = 3) progress <- function(n) utils::setTxtProgressBar(pb, n) opts <- list(progress = progress) } else { opts = NULL } ## e <- foreach::foreach(j = 1:iterations, .combine = rbind, .options.snow = opts) %dopar% { set.seed(j + B) 
library(CRTConjoint) x_df = x resampling_df = final_df a = sample(c(0,1), size =nrow(x), replace = TRUE) sample_idx = which(a == 1) kept = resampling_df[-sample_idx, ] b = resampling_df[sample_idx, ] new_y = 1 - b$Y new_first = b[, left_idx] new_second = b[, right_idx] name_1 = colnames(new_first) name_2 = colnames(new_second) colnames(new_first) = name_2 colnames(new_second) = name_1 new_df = cbind(new_second, new_first) new_df = cbind(new_df, b[, -c(left_idx, right_idx, ncol(b))]) new_df$Y = new_y final_df_initial = rbind(kept, new_df) X_initial = model.matrix(form, final_df_initial, contrasts.arg = lapply(final_df_initial[, -c(non_factor_idx, ncol(final_df_initial))], contrasts, contrasts = FALSE))[, -1] if (!speedup) { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = X_initial, y_var = final_df_initial$Y, tol = tol, constraint = FALSE, seed = seed) } invisible(capture.output(initial <- hierNet_logistic(as.matrix(X_initial), final_df_initial$Y, lam= best_lam, tol = tol, aa = aa))) resamp_TS = PO_stat(initial, in_idx_left, in_idx_right, in_idx_respondent) return(resamp_TS) } snow::stopCluster(cl) } else{ e = vector() for (j in 1:B) { set.seed(j + B) x_df = x resampling_df = final_df a = sample(c(0,1), size =nrow(x), replace = TRUE) sample_idx = which(a == 1) kept = resampling_df[-sample_idx, ] b = resampling_df[sample_idx, ] new_y = 1 - b$Y new_first = b[, left_idx] new_second = b[, right_idx] name_1 = colnames(new_first) name_2 = colnames(new_second) colnames(new_first) = name_2 colnames(new_second) = name_1 new_df = cbind(new_second, new_first) new_df = cbind(new_df, b[, -c(left_idx, right_idx, ncol(b))]) new_df$Y = new_y final_df_initial = rbind(kept, new_df) X_initial = model.matrix(form, final_df_initial, contrasts.arg = lapply(final_df_initial[, -c(non_factor_idx, ncol(final_df_initial))], contrasts, contrasts = FALSE))[, -1] if (!speedup) { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = X_initial, y_var = final_df_initial$Y, tol = tol, constraint = FALSE, seed = seed) } invisible(capture.output(initial <- hierNet_logistic(as.matrix(X_initial), final_df_initial$Y, lam= best_lam, tol = tol, aa = aa))) e[j] = PO_stat(initial, in_idx_left, in_idx_right, in_idx_respondent) if (verbose) { cat(paste0("Done with task: ",j, " out of ", B, " resamples")) } } } p_val = (length(which(e >= as.numeric(obs_test_stat[1]))) + 1)/(B + 1) out = list() out$p_val = p_val out$obs_test_stat = obs_test_stat out$resampled_test_stat = e out$tol = tol if (speedup) { out$lam = best_lam } out$hiernet_fit = fit out$seed = seed return(out) } # Main helper function that performs CRT to test for no carryover effect get_carryovereffect = function(x, y, left_idx, right_idx, B, num_cores, lambda, non_factor_idx, tol, seed, parallel, profileorder_constraint, task_var, resample_func, supplyown_resamples, nfolds, verbose) { # checks if (!(all(sapply(x[, left_idx[!left_idx %in% non_factor_idx]], function(x) is(x, "factor"))))) stop("left factors are not all factors please supply non_factor_idx") if (!(all(sapply(x[, right_idx[!right_idx %in% non_factor_idx]], function(x) is(x, "factor"))))) stop("right factors are not all factors please supply non_factor_idx") if (length(left_idx) != length(right_idx)) stop("length left idx does not match right idx") if((length(unique(table(x[, task_var])))) != 1) {stop("Some tasks are missing")} total_tasks = max(x[, task_var]) Z_tasks = seq(2, total_tasks, by = 2) X_tasks = seq(1, max(Z_tasks) -1 , by = 2) Z_df = x[(x[, task_var] %in% Z_tasks), ][, c(left_idx, 
right_idx)] Z_left = (1:ncol(Z_df))[colnames(Z_df) %in% colnames(x)[left_idx]] Z_right = (1:ncol(Z_df))[!(colnames(Z_df) %in% colnames(x)[left_idx])] colnames(Z_df) = paste0("Z_", colnames(Z_df)) X_df = x[(x[, task_var] %in% X_tasks), ][, c(left_idx, right_idx)] X_left = (1:ncol(X_df))[colnames(X_df) %in% colnames(x)[left_idx]] X_right = (1:ncol(X_df))[!(colnames(X_df) %in% colnames(x)[left_idx])] colnames(X_df) = paste0("X_", colnames(X_df)) Y = y[(x[, task_var] %in% Z_tasks)] original_Xdf = X_df original_Zdf = Z_df final_df = cbind(X_df, Z_df) final_df$Y = Y just_Z = cbind(original_Zdf, Y = Y) if (profileorder_constraint) { y_new = 1 - final_df$Y x_df = X_df first = X_left second = X_right n = nrow(x_df) empty_df = x_df for (i in 1:length(first)) { empty_df[, first[i]] = x_df[, second[i]] empty_df[, second[i]] = x_df[, first[i]] } X_df = rbind(x_df, empty_df, x_df, empty_df) z_df = Z_df first = Z_left second = Z_right n = nrow(z_df) empty_df = z_df for (i in 1:length(first)) { empty_df[, first[i]] = z_df[, second[i]] empty_df[, second[i]] = z_df[, first[i]] } Z_df = rbind(Z_df, empty_df, empty_df, Z_df) full_Y = c(final_df$Y, y_new, y_new, final_df$Y) final_df = cbind(X_df, Z_df, Y = full_Y) just_Z = cbind(Z_df, Y = full_Y) } X = model.matrix(Y ~ ., data = final_df, contrasts.arg = lapply(final_df[, -ncol(final_df)], contrasts, contrasts = FALSE))[, -1] # this is because X and Z are symmetric so we can WLOG just take the first half which is always the X indexes idx = 1:(ncol(X)/2) Z_mat = model.matrix(Y ~ ., data = just_Z, contrasts.arg = lapply(just_Z[, -ncol(just_Z)], contrasts, contrasts = FALSE))[, -1] set.seed(seed) half = (nrow(Z_mat)/4) random_idx = suppressWarnings(split(sample(half, half, replace = FALSE), as.factor(1:nfolds))) if (profileorder_constraint) { for (i in 1:nfolds) { random_idx[[i]] = c(random_idx[[i]], random_idx[[i]] + half, random_idx[[i]] + 2*half, random_idx[[i]] + 3*half) } } best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = Z_mat, y_var = final_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed, fold_idx = random_idx) if (verbose) { cat("Initial step completed: finished computing cross validated lambda") } if (is.null(resample_func)) { resample_func = function(x, seed = sample(c(0, 1000), size = 1), left_idx, right_idx) { set.seed(seed) df = x[, c(left_idx, right_idx)] variable = colnames(x)[c(left_idx, right_idx)] len = length(variable) resampled = list() n = nrow(df) for (i in 1:len) { var = df[, variable[i]] lev = levels(var) resampled[[i]] = factor(sample(lev, size = n, replace = TRUE)) } resampled_df = data.frame(resampled[[1]]) for (i in 2:len) { resampled_df = cbind(resampled_df, resampled[[i]]) } colnames(resampled_df) = colnames(df) return(resampled_df) } } chosen_initial = sample(c(0:B), size = 1) if (chosen_initial != 0) { if (is.null(supplyown_resamples)) { resampled_x = resample_func(x = x, left_idx = left_idx, right_idx = right_idx, seed = chosen_initial) resampled_x_df = resampled_x[(x[, task_var] %in% X_tasks), ] } else { resampled_x = supplyown_resamples[[chosen_initial]] resampled_x_df = resampled_x[(x[, task_var] %in% X_tasks), ] } colnames(resampled_x_df) = paste0("X_", colnames(resampled_x_df)) final_df_initial = cbind(resampled_x_df, original_Zdf) final_df_initial$Y = Y if (profileorder_constraint) { x_df = resampled_x_df first = X_left second = X_right n = nrow(x_df) empty_df = x_df for (i in 1:length(first)) { empty_df[, first[i]] = x_df[, second[i]] empty_df[, second[i]] = x_df[, first[i]] } X_df = 
rbind(x_df, empty_df, x_df, empty_df) final_df_initial = cbind(X_df, Z_df, Y = full_Y) } X_initial = model.matrix(Y ~ ., data = final_df_initial, contrasts.arg = lapply(final_df_initial[, -ncol(final_df)], contrasts, contrasts = FALSE))[, -1] invisible(capture.output(initial <- hierNet_logistic(as.matrix(X_initial), final_df$Y, lam= best_lam, tol = tol))) } else { invisible(capture.output(initial <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol))) } aa = initial invisible(capture.output(fit <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol, aa = aa))) obs_test_stat = CO_stat(fit, idx) ## parallel computing setup if (parallel) { cl <- snow::makeCluster(num_cores) doSNOW::registerDoSNOW(cl) if (verbose) { iterations <- B pb <- utils::txtProgressBar(max = iterations, style = 3) progress <- function(n) utils::setTxtProgressBar(pb, n) opts <- list(progress = progress) } else { opts = NULL } ## e <- foreach::foreach(j = 1:iterations, .combine = rbind, .options.snow = opts) %dopar% { set.seed(j + B) library(CRTConjoint) if (is.null(supplyown_resamples)) { resampled_x = resample_func(x = x, left_idx = left_idx, right_idx = right_idx, seed = chosen_initial) resampled_x_df = resampled_x[(x[, task_var] %in% X_tasks), ] } else { resampled_x = supplyown_resamples[[j]] resampled_x_df = resampled_x[(x[, task_var] %in% X_tasks), ] } colnames(resampled_x_df) = paste0("X_", colnames(resampled_x_df)) final_df_initial = cbind(resampled_x_df, original_Zdf) final_df_initial$Y = Y if (profileorder_constraint) { x_df = resampled_x_df first = X_left second = X_right n = nrow(x_df) empty_df = x_df for (i in 1:length(first)) { empty_df[, first[i]] = x_df[, second[i]] empty_df[, second[i]] = x_df[, first[i]] } X_df = rbind(x_df, empty_df, x_df, empty_df) final_df_initial = cbind(X_df, Z_df, Y = full_Y) } X_initial = model.matrix(Y ~ ., data = final_df_initial, contrasts.arg = lapply(final_df_initial[, -ncol(final_df_initial)], contrasts, contrasts = FALSE))[, -1] invisible(capture.output(initial <- hierNet_logistic(as.matrix(X_initial), final_df_initial$Y, lam= best_lam, tol = tol, aa = aa))) resamp_TS = CO_stat(initial, idx) return(resamp_TS) } snow::stopCluster(cl) } else { e = vector() for (j in 1:B) { if (is.null(supplyown_resamples)) { resampled_x = resample_func(x = x, left_idx = left_idx, right_idx = right_idx, seed = chosen_initial) resampled_x_df = resampled_x[(x[, task_var] %in% X_tasks), ] } else { resample_x = supplyown_resamples[[j]] resample_x_df = resampled_x[(x[, task_var] %in% X_tasks), ] } colnames(resampled_x_df) = paste0("X_", colnames(resampled_x_df)) final_df_initial = cbind(resampled_x_df, original_Zdf) final_df_initial$Y = Y if (profileorder_constraint) { x_df = resampled_x_df first = X_left second = X_right n = nrow(x_df) empty_df = x_df for (i in 1:length(first)) { empty_df[, first[i]] = x_df[, second[i]] empty_df[, second[i]] = x_df[, first[i]] } X_df = rbind(x_df, empty_df, x_df, empty_df) final_df_initial = cbind(X_df, Z_df, Y = full_Y) } X_initial = model.matrix(Y ~ ., data = final_df_initial, contrasts.arg = lapply(final_df_initial[, -ncol(final_df)], contrasts, contrasts = FALSE))[, -1] invisible(capture.output(initial <- hierNet_logistic(as.matrix(X_initial), final_df_initial$Y, lam= best_lam, tol = tol, aa = aa))) e[j] = CO_stat(initial, idx) if (verbose) { cat(paste0("Done with task: ",j, " out of ", B, " resamples")) } } } p_val = (length(which(e >= as.numeric(obs_test_stat[1]))) + 1)/(B + 1) out = list() out$p_val = p_val 
out$obs_test_stat = obs_test_stat out$resampled_test_stat = e out$tol = tol out$hiernet_fit = fit out$seed = seed out$lam = best_lam return(out) } # Main helper function that performs CRT to test for no fatigue effect get_fatigueeffect = function(x, y, left_idx, right_idx, B, num_cores, lambda, non_factor_idx, tol, speedup, seed, parallel, profileorder_constraint, task_var, respondent_var, nfolds, verbose) { # checks if (!(all(sapply(x[, left_idx[!left_idx %in% non_factor_idx]], function(x) is(x, "factor"))))) stop("left factors are not all factors please supply non_factor_idx") if (!(all(sapply(x[, right_idx[!right_idx %in% non_factor_idx]], function(x) is(x, "factor"))))) stop("right factors are not all factors please supply non_factor_idx") if (length(left_idx) != length(right_idx)) stop("length left idx does not match right idx") if((length(unique(table(x[, task_var])))) != 1) {stop("Some tasks are missing")} # forcing no profile order effect constraint if (profileorder_constraint) { x_df = x n = nrow(x_df) empty_df = x_df y_new = 1 - y for (i in 1:length(left_idx)) { empty_df[, left_idx[i]] = x_df[, right_idx[i]] empty_df[, right_idx[i]] = x_df[, left_idx[i]] } final_df = rbind(x_df, empty_df) final_df$Y = c(y, y_new) } else { final_df = x final_df$Y = y } final_df = final_df[, c(left_idx, right_idx, task_var, ncol(final_df))] task_var_name = colnames(x)[task_var] final_df_var_idx = which(colnames(final_df) == task_var_name) set.seed(seed) # Creates only Z matrix to perform CV on Z if (speedup) { Z_df = final_df[, !(colnames(final_df) %in% task_var_name)] if (is.null(non_factor_idx)) { Z_long = model.matrix(Y ~ . , Z_df, contrasts.arg = lapply(Z_df[, 1:(ncol(Z_df) - 1)], contrasts, contrasts = FALSE))[, -1] } else { non_factor_names = colnames(x)[non_factor_idx] Z_nonfactor_idx = (1:ncol(Z_df))[(colnames(Z_df) %in% non_factor_names)] Z_long = model.matrix(Y ~ . 
, Z_df, contrasts.arg = lapply(Z_df[, -c(Z_nonfactor_idx, ncol(Z_df))], contrasts, contrasts = FALSE))[, -1] } # obtains CV lambda best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = Z_long, y_var = Z_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed) if (verbose) { cat("Initial step completed: finished computing cross validated lambda") } } else { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = as.matrix(X), y_var = final_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed) } form = formula(paste0("Y ~ .")) X = model.matrix(form, final_df, contrasts.arg = lapply(final_df[, -c(non_factor_idx, final_df_var_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] num_task = length(unique(x[, task_var])) respondent_idx = unique(x[, respondent_var]) if (speedup) { chosen_initial = sample(c(0:B), size = 1) if (chosen_initial != 0) { # shuffle task number new_x = x for (i in 1:(length(respondent_idx))) { tasks = x[, task_var][(x[, respondent_var] == respondent_idx[i])] new_tasks = sample(tasks, replace = FALSE) new_x[, task_var][(x[, respondent_var] == respondent_idx[i])] = new_tasks } a = new_x[, task_var] resampled = final_df if (profileorder_constraint) { resampled[, (colnames(final_df) %in% task_var_name)] = c(a,a) } else { resampled[, (colnames(final_df) %in% task_var_name)] = c(a,a) } X_initial = model.matrix(form, resampled, contrasts.arg = lapply(resampled[, -c(non_factor_idx, final_df_var_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] invisible(capture.output(initial <- hierNet_logistic(as.matrix(X_initial), final_df$Y, lam= best_lam, tol = tol))) } else { invisible(capture.output(initial <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol))) } aa = initial } else { aa = NULL } invisible(capture.output(fit <- hierNet_logistic(as.matrix(X), final_df$Y, lam= best_lam, tol = tol, aa = aa))) I_1 = as.vector(fit$th[final_df_var_idx, ]) I_2 = as.vector(t(fit$th[, final_df_var_idx])) I = (I_1 + I_2)/2 obs_test_stat = sum(I^2)/2 ## parallel computing setup if (parallel) { cl <- snow::makeCluster(num_cores) doSNOW::registerDoSNOW(cl) if (verbose) { iterations <- B pb <- utils::txtProgressBar(max = iterations, style = 3) progress <- function(n) utils::setTxtProgressBar(pb, n) opts <- list(progress = progress) } else { opts = NULL } ## e <- foreach::foreach(j = 1:iterations, .combine = rbind, .options.snow = opts) %dopar% { set.seed(j + B) library(CRTConjoint) # shuffle task number new_x = x for (i in 1:(length(respondent_idx))) { tasks = x[, task_var][(x[, respondent_var] == respondent_idx[i])] new_tasks = sample(tasks, replace = FALSE) new_x[, task_var][(x[, respondent_var] == respondent_idx[i])] = new_tasks } a = new_x[, task_var] resampled = final_df if (profileorder_constraint) { resampled[, (colnames(final_df) %in% task_var_name)] = c(a,a) } else { resampled[, (colnames(final_df) %in% task_var_name)] = c(a,a) } X_initial = model.matrix(form, resampled, contrasts.arg = lapply(resampled[, -c(non_factor_idx, final_df_var_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] if (!speedup) { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = as.matrix(X_initial), y_var = final_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed) } invisible(capture.output(fit <- hierNet_logistic(as.matrix(X_initial), final_df$Y, lam= best_lam, tol = tol, aa = aa))) I_1 = as.vector(fit$th[final_df_var_idx, ]) I_2 = as.vector(t(fit$th[, final_df_var_idx])) I = (I_1 + I_2)/2 resamp_TS = sum(I^2)/2 
return(resamp_TS) } snow::stopCluster(cl) } else { e = vector() for (j in 1:B) { set.seed(j + B) # shuffle task number new_x = x for (i in 1:(length(respondent_idx))) { tasks = x[, task_var][(x[, respondent_var] == respondent_idx[i])] new_tasks = sample(tasks, replace = FALSE) new_x[, task_var][(x[, respondent_var] == respondent_idx[i])] = new_tasks } a = new_x[, task_var] resampled = final_df if (profileorder_constraint) { resampled[, (colnames(final_df) %in% task_var_name)] = c(a,a) } else { resampled[, (colnames(final_df) %in% task_var_name)] = c(a,a) } X_initial = model.matrix(form, resampled, contrasts.arg = lapply(resampled[, -c(non_factor_idx, final_df_var_idx, ncol(final_df))], contrasts, contrasts = FALSE))[, -1] if (!speedup) { best_lam = hierNet_logistic_CV(lambda, nfolds = nfolds, X = as.matrix(X_initial), y_var = final_df$Y, tol = tol, constraint = profileorder_constraint, seed = seed) } invisible(capture.output(fit <- hierNet_logistic(as.matrix(X_initial), final_df$Y, lam= best_lam, tol = tol, aa = aa))) I_1 = as.vector(fit$th[final_df_var_idx, ]) I_2 = as.vector(t(fit$th[, final_df_var_idx])) I = (I_1 + I_2)/2 e[j] = sum(I^2)/2 if (verbose) { cat(paste0("Done with task: ",j, " out of ", B, " resamples")) } } } p_val = (length(which(e >= as.numeric(obs_test_stat[1]))) + 1)/(B + 1) out = list() out$p_val = p_val out$obs_test_stat = obs_test_stat out$resampled_test_stat = e out$tol = tol if (speedup) { out$lam = best_lam } out$hiernet_fit = fit out$seed = seed return(out) }
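# Illustration (not part of the original source): the helpers in this file report the
# finite-sample-valid CRT p-value, (1 + #{resampled stat >= observed}) / (B + 1).
# A minimal sketch with hypothetical numbers, wrapped in if (FALSE) so that sourcing
# this file has no side effects:
if (FALSE) {
  obs_stat <- 4.2
  resampled_stats <- c(0.8, 1.5, 5.0, 2.1, 4.9)  # B = 5 hypothetical resampled statistics
  (length(which(resampled_stats >= obs_stat)) + 1) / (length(resampled_stats) + 1)  # = 3/6 = 0.5
}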
# ---- end of CRTConjoint/R/helper_funcs.R ----
# slight computational modifications of functions from hierNet package to quicken computation # All functions here are originally from author Jacob Bien source: https://github.com/cran/hierNet # All functions here are used with explicit permission from Jacob Bien predict_new <- function(object, newx, newzz=NULL, ...) { n <- nrow(newx) newx <- scale(newx, center=object$mx, scale=object$sx) p <- ncol(newx) cp2 <- p * (p - 1) / 2 out <- .C("ComputeInteractionsWithIndices", as.double(newx), as.integer(n), as.integer(p), z=rep(0, n * cp2), i1=as.integer(rep(0, cp2)), i2=as.integer(rep(0, cp2)), PACKAGE="CRTConjoint") unscaled_z = out$z # modifications to speed things more faster newzz = demean_center(unscaled_z, n, cp2, object$mzz)$vec newx <- as.numeric(newx) stopifnot(is.finite(newzz), is.finite(newx)) yhatt <- Compute.yhat.c(newx, newzz, object) + object$my b0 <- object$b0 yhatt <- b0 + yhatt pr <- 1 / (1 + exp(-yhatt)) return(pr) } hierNet_logistic = function(x, y, lam, delta=1e-8, diagonal=FALSE, strong=FALSE, aa=NULL, zz=NULL, center=TRUE, stand.main=TRUE, stand.int=FALSE, rho=nrow(x), niter=100, sym.eps=1e-3,# ADMM params step=2, maxiter=2000, backtrack=0.1, tol=1e-3, # GG descent params trace=0) { this.call <- match.call() n <- nrow(x) p <- ncol(x) stopifnot(y %in% c(0,1)) stopifnot(length(y) == n, lam >= 0, delta >= 0, delta <= 1) stopifnot(!is.null(step) && !is.null(maxiter)) stopifnot(class(lam) == "numeric") stopifnot(class(delta) == "numeric") stopifnot(class(step) == "numeric", step > 0, maxiter > 0) stopifnot(is.finite(x), is.finite(y), is.finite(lam), is.finite(delta)) lam.l1 <- lam * (1 - delta) lam.l2 <- lam * delta x <- scale(x, center = center, scale = stand.main) mx <- attr(x, "scaled:center") sx <- attr(x, "scaled:scale") if (is.null(aa)) aa <- list(b0=0, bp=rep(0, p), bn=rep(0, p), th=matrix(0, p, p), diagonal=diagonal) cp2 <- p * (p - 1) / 2 out <- .C("ComputeInteractionsWithIndices", as.double(x), as.integer(n), as.integer(p), z=rep(0, n * cp2), i1=as.integer(rep(0, cp2)), i2=as.integer(rep(0, cp2)), PACKAGE="CRTConjoint") out_zz = fast_demean(out$z, n, cp2) szz = NULL mzz = out_zz$means zz = out_zz$vec xnum <- as.numeric(x) out <- ggdescent.logistic(xnum=xnum, zz=zz, y=y, lam.l1=lam.l1, lam.l2=lam.l2, diagonal=diagonal, rho=0, V=matrix(0,p,p), stepsize=step, backtrack=backtrack, maxiter=maxiter, tol=tol, aa=aa, trace=trace) out$call <- this.call out$lam <- lam out$delta <- delta out$type <- "logistic" out$diagonal <- diagonal out$strong <- strong out$step <- step out$maxiter <- maxiter out$backtrack <- backtrack out$tol <- tol out$mx <- mx out$my <- 0 out$sx <- sx out$mzz = mzz out$szz = szz class(out) <- "hierNet" return(out) } hierNet_logistic_CV = function(lambda, nfolds = 3, X, y_var, seed = sample(1:1000, size = 1), constraint = TRUE, tol= 1e-3, fold_idx = NULL) { if (is.null(fold_idx)) { if (!constraint) { half = nrow(X) random_idx = suppressWarnings(split(sample(half, half, replace = FALSE), as.factor(1:nfolds))) } else { half = (nrow(X)/2) random_idx = suppressWarnings(split(sample(half, half, replace = FALSE), as.factor(1:nfolds))) for (i in 1:nfolds) { random_idx[[i]] = c(random_idx[[i]], random_idx[[i]] + half) } } } else { random_idx = fold_idx } error_list_prob = list() for (i in 1:nfolds) { errors_prob = vector() test_idx = random_idx[[i]] train_idx = (1:nrow(X))[-test_idx] for (j in 1:length(lambda)) { if( j == 1) { invisible(capture.output(cv_hiernets <- hierNet_logistic(X[train_idx, ], y_var[train_idx], lam = lambda[j], tol = tol))) } else { 
invisible(capture.output(cv_hiernets <- hierNet_logistic(X[train_idx, ], y_var[train_idx], lam = lambda[j], tol = tol, aa = cv_hiernets))) } predicted_y_prob = predict_new(cv_hiernets, X[test_idx, ]) errors_prob[j] = mean((y_var[test_idx] - predicted_y_prob)^2) } error_list_prob[[i]] = errors_prob } cv_errors_prob = vector() for (i in 1:length(lambda)) { cv_errors_prob[i] = mean(sapply(error_list_prob, "[[", i)) } gotten_lam = lambda[which.min(cv_errors_prob)] return(gotten_lam) } ggdescent.logistic <- function(xnum, zz, y, lam.l1, lam.l2, diagonal, rho, V, stepsize, backtrack=0.2, maxiter=100, tol = 1e-3, aa=NULL, trace=1) { # See ADMM4 pdf and logistic.pdf for the problem this solves. # # xnum, zz, y: data (note: zz is a length n*cp2 vector, not a matrix) xnum is x as a (n*p)-vector # lam.l1: l1-penalty parameter # lam.l2: l2-penalty parameter # rho: admm parameter # V: see ADMM4 pdf # stepsize: step size to start backtracking with # backtrack: factor by which step is reduced on each backtrack. # maxiter: number of generalized gradient steps to take. # tol: stop gg descent if change in objective is below tol. # aa: initial estimate of (b0, th, bp, bn) # trace: how verbose to be # #void ggdescent_logistic_R(double *x, int *n, int *p, double *zz, int * diagonal, double *y, # double *lamL1, double *lamL2, double *rho, double *V, int *maxiter, # double *curb0, double *curth, double *curbp, double *curbn, # double *t, int *stepwindow, double *backtrack, double *tol, int *trace, # double *b0, double *th, double *bp, double *bn) { n <- length(y) p <- length(xnum) / n stopifnot(p == round(p)) if (diagonal) stopifnot(length(zz) == n * (choose(p,2)+p)) else stopifnot(length(zz) == n * choose(p,2)) stepwindow <- 10 if (is.null(aa)) aa <- list(b0=0, th=matrix(0,p,p), bp=rep(0,p), bn=rep(0,p)) out <- .C("ggdescent_logistic_R", xnum, as.integer(n), as.integer(p), zz, as.integer(diagonal), as.double(y), # convert from integer to double as.double(lam.l1), as.double(lam.l2), as.double(rho), as.double(V), as.integer(maxiter), as.double(aa$b0), as.double(aa$th), aa$bp, aa$bn, as.double(stepsize), as.integer(stepwindow), as.double(backtrack), as.double(tol), as.integer(trace), b0=as.double(0), th=rep(0, p*p), bp=rep(0, p), bn=rep(0, p), PACKAGE="CRTConjoint") list(b0=out$b0, bp=out$bp, bn=out$bn, th=matrix(out$th, p, p)) } Compute.yhat.c <- function(xnum, zz, aa) { # aa: list containing bp, bn, th, diagonal # note: zz is the n by cp2 matrix, whereas z is the n by p^2 one. p <- length(aa$bp) n <- length(xnum) / p stopifnot(n==round(n)) stopifnot("diagonal" %in% names(aa)) if (aa$diagonal) stopifnot(length(zz) == n * (choose(p,2) + p)) else stopifnot(length(zz) == n * choose(p,2)) out <- .C("compute_yhat_zz_R", xnum, as.integer(n), as.integer(p), zz, as.integer(aa$diagonal), as.double(aa$th), aa$bp, aa$bn, yhat=rep(0, n), PACKAGE="CRTConjoint") out$yhat }
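# Illustration (not part of the original source): why hierNet_logistic_CV keeps a row
# and its left/right-swapped copy in the same cross-validation fold. Under the profile
# order constraint the stacked design duplicates each task, so row i and row i + half
# describe the same evaluation and must not be split across folds. A minimal sketch of
# the fold construction with a hypothetical half = 6, wrapped in if (FALSE) so that
# sourcing this file has no side effects:
if (FALSE) {
  half <- 6
  nfolds <- 3
  set.seed(1)
  random_idx <- suppressWarnings(split(sample(half, half, replace = FALSE),
                                       as.factor(1:nfolds)))
  fold_idx <- lapply(random_idx, function(idx) c(idx, idx + half))
  fold_idx  # each fold holds original row indices together with their mirrored copies
}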
# ---- end of CRTConjoint/R/helper_hierNet.R ----
# main functions to be exported #' Testing whether factor X matters in Conjoint Experiments #' #' This function takes a conjoint dataset and returns the p-value when using the #' CRT to test if Y is independent of X given Z using HierNet test statistic. #' The function requires user to specify the outcome, all factors used in the #' conjoint experiment, and any additional respondent characteristics. By default, #' this function assumes a uniform randomization of factor levels. In addition, #' the function assumes the forced choice conjoint experiment and consequently assumes #' the data to contain the left and right profile factors in separate columns in the #' supplied dataset. #' #' @param formula A formula object specifying the outcome variable on the left-hand side #' and factors of (X,Z) and respondent characteristics (V) in the right hand side. #' RHS variables should be separated by + signs and should only contain either left #' or right for each (X,Z). #' For example Y ~ Country_left + Education_left is sufficient as opposed to #' Y ~ Country_left + Country_right + Education_left + Education_right #' @param data A dataframe containing outcome variable and all factors (X,Z,V) #' (including both left and right profile factors). All (X,Z,V) listed in #' the formula above are expected to be of class factor unless explicitly stated #' in non_factor input. #' @param X Character string specifying the variable of interest. This character #' should match column name in data. For example "Country_left" is sufficient. #' @param left Vector of column names of data that corresponds to the left profile factors #' @param right Vector of column names of data that corresponds to the right profile factors. #' NOTE: left and right are assumed to be the same length and the #' order should correspond to the same variables. For example left = c("Country_left", #' "Education_left") and right = c("Country_right", "Education_right") #' @param design A character string of one of the following options: "Uniform", #' "Constrained Uniform", "Nonuniform", "Manual". "Uniform" refers to a completely uniform #' design where all (X,Z) are sampled uniformly. "Nonuniform" refers to a design where all #' (X,Z) are sampled independently but the levels of X are not sampled uniformly. #' If design="Nonuniform", then user should supply the non-uniform probability weights in p. #' If in_levs is not NULL, then length of p should match in_levs. "Constrained Uniform" #' refers to a dependent randomization design where some levels of X are only #' possible based on certain levels of Z. If design="Constrained Uniform" #' user should supply constraint_randomization list indicating the dependencies. #' See examples below. #' "Manual" refers to more complex conjoint designs, where the user will supply #' their own resamples in supplyown_resamples input. Default is design="Uniform" #' @param p A vector of nonuniform probability weights used when design="Nonuniform". #' Length of p should match number of levels of X or length of in_levs. #' @param constraint_randomization List containing levels of X that can only be #' sampled with certain values of Z (used when design="Constrained Uniform"). #' The first element of constraint_randomization should contain the levels of X #' that can only be sampled with certain values of Z, which are included in the #' second element of the list. See example below. #' @param supplyown_resamples List of length B that contains own resamples of X #' when design="Manual". 
Each element of list should contain a dataframe #' with the same number of rows of data and two columns for the left and right #' profile values of X. #' @param profileorder_constraint Boolean indicating whether to enforce profile #' order constraint (default = TRUE) #' @param in_levs A vector of strings which indicates a subset of the levels of X to #' test for. See example below. #' @param forced_var A character string indicating column name of Z or V that user #' wishes to force an interaction with. #' @param non_factor A vector of strings indicating columns of data that are not #' factors. This should only be used for respondent characteristics (V) that are #' not factors. For example non_factor = "Respondent_Age". #' @param B Numeric integer value indicating the number of resamples for the CRT #' procedure. Default value is B=200. #' @param parallel Boolean indicating whether parallel computing should be used. #' Default value is TRUE. #' @param num_cores Numeric integer indicating number of cores to use when parallel=TRUE. #' num_cores should not exceed the number of cores the user's machine can handle. Default is 2. #' @param nfolds Numeric integer indicating number of cross-validation folds. Default is 3. #' @param lambda Numeric vector indicating lambda used for cross-validation for #' HierNet fit. Default lambda=c(20,30,40). #' @param tol Numeric value indicating acceptable tolerance for terminating optimization #' fit for HierNet. Default is tol=1e-3. WARNING: Do not increase as it greatly #' increases computation time. #' @param speedup Boolean indicating whether to employ computational tricks to #' make function run faster. It is always recommended to use default speedup=TRUE. #' @param seed Seed used for CRT procedure #' @param analysis Numeric integer indicating whether to return the top x number #' of strongest interactions that contributed to the the observed test statistic. #' Default analysis = 0 to not return any top interactions. If analysis > 0, #' for example analysis = 2, then the top two strongest interactions contribution to #' the test statistic along with which interaction is returned. #' NOTE: this is purely for exploratory analysis. #' @param verbose Boolean indicating verbose output. Default verbose=TRUE #' #' @return A list containing: \item{p_val}{A numeric value for the p-value testing #' Y independent of X given Z.} #' \item{obs_test_stat}{A numeric value for the observed test statistic. If analysis #' is > 0, obs_test_stat will contain a list detailing the contribution of the main effects #' interaction effects and the top interactions.} #' \item{resampled_test_stat}{Matrix containing all the B resampled test statistics} #' \item{tol}{Tolerance used for HierNet} #' \item{lam}{Best cross-validated lambda} #' \item{hiernet_fit}{An object of class hiernet that contains the hiernet fit for #' the observed test statistic} #' \item{seed}{Seed used} #' \item{elapsed_time}{Elapsed time} #' #' @export #' @references Ham, D., Janson, L., and Imai, K. (2022) #' Using Machine Learning to Test Causal Hypotheses in Conjoint Analysis #' @examples #' # Subset of Immigration Choice Conjoint Experiment Data from Hainmueller et. al. (2014). 
#' data("immigrationdata") #' form = formula("Y ~ FeatEd + FeatGender + FeatCountry + FeatReason + FeatJob + #' FeatExp + FeatPlans + FeatTrips + FeatLang + ppage + ppeducat + ppethm + ppgender") #' left = colnames(immigrationdata)[1:9] #' right = colnames(immigrationdata)[10:18] #' #' \donttest{ #' # Testing whether edcuation matters for immigration preferences #' education_test = CRT_pval(formula = form, data = immigrationdata, X = "FeatEd", #' left = left, right = right, non_factor = "ppage", B = 100, analysis = 2) #' education_test$p_val #' } #' #' # Testing whether job matters for immigration preferences #' constraint_randomization = list() # (Job has dependent randomization scheme) #' constraint_randomization[["FeatJob"]] = c("Financial analyst","Computer programmer", #' "Research scientist","Doctor") #' constraint_randomization[["FeatEd"]] = c("Equivalent to completing two years of #' college in the US", "Equivalent to completing a graduate degree in the US", #' "Equivalent to completing a college degree in the US") #' \donttest{ #' job_test = CRT_pval(formula = form, data = immigrationdata, X = "FeatJob", #' left = left, right = right, design = "Constrained Uniform", #' constraint_randomization = constraint_randomization, non_factor = "ppage", B = 100) #' job_test$p_val #' } #' #' #' # Testing whether Mexican and European immigrants are treated indistinguishably #' country_data = immigrationdata #' country_data$FeatCountry = as.character(country_data$FeatCountry) #' country_data$FeatCountry_2 = as.character(country_data$FeatCountry_2) #' country_data$FeatCountry[country_data$FeatCountry %in% c("Germany", "France", #' "Poland")] = "Europe" #' country_data$FeatCountry_2[country_data$FeatCountry_2 %in% c("Germany", "France", #' "Poland")] = "Europe" #' country_data$FeatCountry = factor(country_data$FeatCountry) #' country_data$FeatCountry_2 = factor(country_data$FeatCountry_2) #' \donttest{ #' mexico_Europe_test = CRT_pval(formula = form, data = country_data, X = "FeatCountry", #' left = left, right = right, design = "Nonuniform", #' in_levs = c("Mexico", "Europe"), p = c(0.25, 0.75), non_factor = "ppage", B = 100, #' analysis = 2) #' } #' \donttest{ #' # example case with supplying own resamples #' resample_Mexico_Europe = function(country_data) { #' resamples_1 = sample(c("Mexico", "Europe"), size = nrow(country_data), #' replace = TRUE, p = c(0.25, 0.75)) #' resamples_2 = sample(c("Mexico", "Europe"), size = nrow(country_data), #' replace = TRUE, p = c(0.25, 0.75)) #' resample_df = data.frame(resamples_1, resamples_2) #' return(resample_df) #' } #' own_resamples = list() #' for (i in 1:100) { #' own_resamples[[i]] = resample_Mexico_Europe(country_data) #' } #' mexico_Europe_test = CRT_pval(formula = form, data = country_data, X = "FeatCountry", #' left = left, right = right, design = "Manual", #' in_levs = c("Mexico", "Europe"), supplyown_resamples = own_resamples, #' non_factor = "ppage", B = 100, analysis = 2) #' } #' # example case with forcing with candidate gender #' \donttest{ #' mexico_Europe_test_force = CRT_pval(formula = form, data = country_data, #' X = "FeatCountry", left = left, right = right, design = "Nonuniform", #' in_levs = c("Mexico", "Europe"), p = c(0.25, 0.75), forced_var = "FeatGender", #' non_factor = "ppage", B = 100) #' } CRT_pval = function(formula, data, X, left, right, design = "Uniform", p = NULL, constraint_randomization = NULL, supplyown_resamples = NULL, profileorder_constraint = TRUE, in_levs = NULL, forced_var = NULL, non_factor = NULL, B = 200, 
parallel = TRUE,num_cores = 2,nfolds = 3, lambda = c(20, 30, 40), tol = 1e-3, speedup = TRUE, seed = sample(c(1:1000), size = 1), analysis = 0, verbose = TRUE) { start_time = Sys.time() # processing stage if (!(design %in% c("Uniform", "Constrained Uniform", "Nonuniform", "Manual"))) stop("Design should be either Uniform, Constrained Uniform, Nonuniform, or Manual") left_levs = sapply(data[, left], function(x) levels(x)) right_levs = sapply(data[, right], function(x) levels(x)) for (i in 1:length(left_levs)) { check = all.equal(left_levs[[i]], right_levs[[i]]) check = isTRUE(check) if (!check) stop("Left factors and right factors levels do not agree") } X_right = right[left %in% X] X_left = left[right %in% X] Y_var = as.character(formula)[2] y = data[, Y_var] if (!is.numeric(y)) stop("Response is not numeric") X_Z_V = unlist(strsplit(as.character(formula)[3], " \\+")) X_Z_V = gsub(" ", "", X_Z_V) V = X_Z_V[!(X_Z_V %in% c(left, right))] x = data[, c(left, right, V)] xcols = (1:ncol(x))[colnames(x) %in% c(X, X_left, X_right)] num_x_levs = levels(data[, X]) non_factor_idx = (1:ncol(x))[colnames(x) %in% non_factor] if (length(non_factor_idx) == 0) { non_factor_idx = NULL } left_idx = (1:ncol(x))[colnames(x) %in% left] right_idx = (1:ncol(x))[colnames(x) %in% right] if (is.null(forced_var)) { forced = NULL } else { if (forced_var %in% V) { forced = (1:ncol(x))[colnames(x) == forced_var] } else { forced_left = left[right %in% forced_var] forced_right = right[left %in% forced_var] all_forced = c(forced_var, forced_left, forced_right) left_forced = left_idx[colnames(x)[left_idx] %in% all_forced] right_forced = right_idx[colnames(x)[right_idx] %in% all_forced] forced = c(left_forced, right_forced) } } ### if (is.null(in_levs)) { resample_X = num_x_levs } else{ resample_X = in_levs } if (design == "Uniform") { resample_func_U1 = function(resample_X, n) { new_samp = sample(resample_X, size = n, replace = TRUE) return(new_samp) } resample_func_U2 = function(resample_X, n) { new_samp = sample(resample_X, size = n, replace = TRUE) return(new_samp) } resample_func_1 = resample_func_U1; resample_func_2 = resample_func_U2 left_allowed = right_allowed = full_X = restricted_X = NULL } if (design == "Constrained Uniform") { if (is.null(constraint_randomization)) stop("Please supply constraint list") constraint_name = names(constraint_randomization)[2] constraint_right = right[left %in% constraint_name] constraint_left = left[right %in% constraint_name] if (length(constraint_right) >0) { constraint_left = constraint_name } else { constraint_right = constraint_name } left_allowed = x[, colnames(x) %in% constraint_left] %in% constraint_randomization[[2]] right_allowed = x[, colnames(x) %in% constraint_right] %in% constraint_randomization[[2]] full_X = resample_X restricted_X = full_X[!(full_X %in% constraint_randomization[[1]])] resample_func_CU1 = function(full_X, restricted_X, n, left_allowed) { new_samp = sample(full_X, size = n, replace = TRUE) new_samp[!left_allowed] = sample(restricted_X, size = n - sum(left_allowed), replace = TRUE) return(new_samp) } resample_func_CU2 = function(full_X, restricted_X, n, right_allowed) { new_samp = sample(full_X, size = n, replace = TRUE) new_samp[!right_allowed] = sample(restricted_X, size = n - sum(right_allowed), replace = TRUE) return(new_samp) } resample_func_1 = resample_func_CU1; resample_func_2 = resample_func_CU2 } if (design == "Nonuniform") { if (is.null(p)) stop("Please supply non-uniform probability weight p") resample_func_NU1 = function(resample_X, n, p, 
null_input = NULL) { new_samp = sample(resample_X, size = n, replace = TRUE, prob = p) return(new_samp) } resample_func_NU2 = function(resample_X, n, p, null_input = NULL) { new_samp = sample(resample_X, size = n, replace = TRUE, prob = p) return(new_samp) } resample_func_1 = resample_func_NU1; resample_func_2 = resample_func_NU2 left_allowed = right_allowed = full_X = restricted_X = NULL } if (design == "Manual") { if (is.null(supplyown_resamples)) stop("Please supply own resamples") left_allowed = right_allowed = full_X = restricted_X = resample_func_1 = resample_func_2 = NULL } out = get_CRT_pval(x = x, y = y, xcols = xcols, left_idx = left_idx, right_idx = right_idx, design = design, B= B, forced = forced, lambda = lambda, non_factor_idx = non_factor_idx, in_levs = in_levs, analysis = analysis, p = p, resample_func_1 = resample_func_1, resample_func_2 = resample_func_2, supplyown_resamples = supplyown_resamples, resample_X = resample_X, full_X = full_X, restricted_X = restricted_X, left_allowed = left_allowed, right_allowed = right_allowed, tol = tol, speedup = speedup, parallel = parallel, num_cores = num_cores, profileorder_constraint = profileorder_constraint, nfolds = nfolds, seed = seed, verbose = verbose) end_time = Sys.time() out$elapsed_time = end_time - start_time return(out) } #' Testing profile order effect in Conjoint Experiments #' #' This function takes a conjoint dataset and returns the p-value when using the #' CRT to test if the profile order effect holds using HierNet test statistic. #' The function requires user to specify the outcome, all factors used in the #' conjoint experiment, and any additional respondent characteristics. #' The function assumes the forced choice conjoint experiment and consequently #' assumes the data to contain the left and right profile factors in separate column #' in the dataset supplied. #' #' @param formula A formula object specifying the outcome variable on the #' left-hand side and factors of (X,Z) and respondent characteristics (V) in the #' right hand side. #' RHS variables should be separated by + signs and should only contain either #' left or right for each (X,Z). #' For example Y ~ Country_left + Education_left is sufficient as opposed to #' Y ~ Country_left + Country_right + Education_left + Education_right #' @param data A dataframe containing outcome variable and all factors (X,Z,V) #' (including both left and right profile factors). All (X,Z,V) listed in #' the formula above are expected to be of class factor unless explicitly stated #' in non_factor input. #' @param left Vector of column names of data that corresponds to the left profile factors #' @param right Vector of column names of data that corresponds to the right profile factors. #' NOTE: left and right are assumed to be the same length and the #' order should correspond to the same variables. For example left = #' c("Country_left", "Education_left") and right = c("Country_right", "Education_right") #' @param non_factor A vector of strings indicating columns of data that are not factors. #' This should only be used for respondent characteristics (V) that are not factors. #' For example non_factor = "Respondent_Age". #' @param B Numeric integer value indicating the number of resamples for the CRT #' procedure. Default value is B=200. #' @param parallel Boolean indicating whether parallel computing should be used. #' Default value is TRUE. #' @param num_cores Numeric integer indicating number of cores to use when parallel=TRUE. 
#' num_cores should not exceed the number of cores the user's machine can handle. Default is 2. #' @param nfolds Numeric integer indicating number of cross-validation folds. Default is 3. #' @param lambda Numeric vector indicating lambda used for cross-validation for HierNet fit. #' Default lambda=c(20,30,40). #' @param tol Numeric value indicating acceptable tolerance for terminating #' optimization fit for HierNet. Default is tol=1e-3. WARNING: Do not increase #' as it greatly increases computation time. #' @param speedup Boolean indicating whether to employ computational tricks to #' make function run faster. It is always recommended to use default speedup=TRUE. #' @param seed Seed used for CRT procedure #' @param verbose Boolean indicating verbose output. Default verbose=TRUE #' #' @return A list containing: \item{p_val}{A numeric value for the p-value testing #' profile order effect.} #' \item{obs_test_stat}{A numeric value for the observed test statistic.} #' \item{resampled_test_stat}{Matrix containing all the B resampled test statistics} #' \item{tol}{Tolerance used for HierNet} #' \item{lam}{Best cross-validated lambda} #' \item{hiernet_fit}{An object of class hiernet that contains the hiernet fit #' for the observed test statistic} #' \item{seed}{Seed used} #' \item{elapsed_time}{Elapsed time} #' #' @export #' @references Ham, D., Janson, L., and Imai, K. (2022) #' Using Machine Learning to Test Causal Hypotheses in Conjoint Analysis #' @examples #' # Subset of Immigration Choice Conjoint Experiment Data from Hainmueller et. al. (2014). #' data("immigrationdata") #' form = formula("Y ~ FeatEd + FeatGender + FeatCountry + FeatReason + FeatJob + #' FeatExp + FeatPlans + FeatTrips + FeatLang + ppage + ppeducat + ppethm + ppgender") #' left = colnames(immigrationdata)[1:9] #' right = colnames(immigrationdata)[10:18] #' #' # Testing if profile order effect is present or not in immigration data #' \donttest{ #' profileorder_test = CRT_profileordereffect(formula = form, data = immigrationdata, #' left = left, right = right, B = 50) #' profileorder_test$p_val #' } CRT_profileordereffect = function(formula, data, left, right, non_factor = NULL, B = 200, parallel = TRUE,num_cores = 2,nfolds = 3, lambda = c(20, 30, 40), tol = 1e-3, speedup = TRUE, seed = sample(c(1:1000), size = 1), verbose = TRUE) { start_time = Sys.time() # processing stage left_levs = sapply(data[, left], function(x) levels(x)) right_levs = sapply(data[, right], function(x) levels(x)) for (i in 1:length(left_levs)) { check = all.equal(left_levs[[i]], right_levs[[i]]) check = isTRUE(check) if (!check) stop("Left factors and right factors levels do not agree") } Y_var = as.character(formula)[2] y = data[, Y_var] if (!is.numeric(y)) stop("Response is not numeric") x = data[, c(left, right)] non_factor_idx = (1:ncol(x))[colnames(x) %in% non_factor] if (length(non_factor_idx) == 0) { non_factor_idx = NULL } left_idx = (1:ncol(x))[colnames(x) %in% left] right_idx = (1:ncol(x))[colnames(x) %in% right] out = get_profileordereffect(x = x, y = y, left_idx = left_idx, right_idx = right_idx, B= B, lambda = lambda, non_factor_idx = non_factor_idx, tol = tol, speedup = speedup, parallel = parallel, num_cores = num_cores, nfolds = nfolds, seed = seed, verbose = verbose) end_time = Sys.time() out$elapsed_time = end_time - start_time return(out) } #' Testing carryover effect in Conjoint Experiments #' #' This function takes a conjoint dataset and returns the p-value when using the #' CRT to test if the carryover effect holds using 
HierNet test statistic. #' The function requires user to specify the outcome, all factors used in the #' conjoint experiment, and the evaluation task number. By default, this function #' assumes a uniform randomization of factor levels. The function assumes the #' forced choice conjoint experiment and consequently assumes #' the data to contain the left and right profile factors in separate column in #' the dataset supplied. #' #' @param formula A formula object specifying the outcome variable on the left-hand #' side and factors of (X,Z) and respondent characteristics (V) in the right hand side. #' RHS variables should be separated by + signs and should only contain either #' left or right for each (X,Z). #' For example Y ~ Country_left + Education_left is sufficient as opposed to #' Y ~ Country_left + Country_right + Education_left + Education_right #' @param data A dataframe containing outcome variable and all factors (X,Z,V) #' (including both left and right profile factors). All (X,Z,V) listed in #' the formula above are expected to be of class factor unless explicitly stated #' in non_factor input. #' @param left Vector of column names of data that corresponds to the left profile factors #' @param right Vector of column names of data that corresponds to the right profile factors. #' NOTE: left and right are assumed to be the same length and the #' order should correspond to the same variables. For example left = #' c("Country_left", "Education_left") and right = c("Country_right", "Education_right") #' @param task A character string indicating column of data that contains the task evaluation. #' IMPORTANT: The task variable is assumed to have no missing tasks, i.e., #' each respondent should have 1:J tasks. Please drop respondents with missing tasks. #' @param design A character string of one of the following options: "Uniform" or "Manual". #' "Uniform" refers to a completely uniform design where all (X,Z) are sampled uniformly. #' "Manual" refers to more complex conjoint designs, where the user will supply #' their own resamples in supplyown_resamples input. #' @param supplyown_resamples List of length B that contains own resamples of X #' when design="Manual". Each element of list should contain a dataframe #' with the same number of rows of data and two columns for the left and right #' profile values of X. #' @param profileorder_constraint Boolean indicating whether to enforce profile #' order constraint (default = TRUE) #' @param non_factor A vector of strings indicating columns of data that are not #' factors. This should only be used for respondent characteristics (V) that are #' not factors. For example non_factor = "Respondent_Age". #' @param B Numeric integer value indicating the number of resamples for the CRT #' procedure. Default value is B=200. #' @param parallel Boolean indicating whether parallel computing should be used. #' Default value is TRUE. #' @param num_cores Numeric integer indicating number of cores to use when parallel=TRUE. #' num_cores should not exceed the number of cores the user's machine can handle. Default is 2. #' @param nfolds Numeric integer indicating number of cross-validation folds. Default is 3. #' @param lambda Numeric vector indicating lambda used for cross-validation for #' HierNet fit. Default lambda=c(20,30,40). #' @param tol Numeric value indicating acceptable tolerance for terminating optimization #' fit for HierNet. Default is tol=1e-3. WARNING: Do not increase as it greatly increases #' computation time. 
#' @param seed Seed used for CRT procedure #' @param verbose Boolean indicating verbose output. Default verbose=TRUE #' #' @return A list containing: \item{p_val}{A numeric value for the p-value testing #' carryover effect.} #' \item{obs_test_stat}{A numeric value for the observed test statistic.} #' \item{resampled_test_stat}{Matrix containing all the B resampled test statistics} #' \item{tol}{Tolerance used for HierNet} #' \item{lam}{Best cross-validated lambda} #' \item{hiernet_fit}{An object of class hiernet that contains the hiernet fit for #' the observed test statistic} #' \item{seed}{Seed used} #' \item{elapsed_time}{Elapsed time} #' #' @export #' @references Ham, D., Janson, L., and Imai, K. (2022) #' Using Machine Learning to Test Causal Hypotheses in Conjoint Analysis #' @examples #' # Subset of Immigration Choice Conjoint Experiment Data from Hainmueller et. al. (2014). #' data("immigrationdata") #' form = formula("Y ~ FeatEd + FeatGender + FeatCountry + FeatReason + FeatJob + #' FeatExp + FeatPlans + FeatTrips + FeatLang + ppage + ppeducat + ppethm + ppgender") #' left = colnames(immigrationdata)[1:9] #' right = colnames(immigrationdata)[10:18] #' # Each respondent evaluated 5 tasks #' J = 5 #' carryover_df = immigrationdata #' carryover_df$task = rep(1:J, nrow(carryover_df)/J) #' # Since immigration conjoint experiment had dependent randomization for several factors #' # we supply our own resamples #' resample_func_immigration = function(x, seed = sample(c(0, 1000), size = 1), left_idx, right_idx) { #' set.seed(seed) #' df = x[, c(left_idx, right_idx)] #' variable = colnames(x)[c(left_idx, right_idx)] #' len = length(variable) #' resampled = list() #' n = nrow(df) #' for (i in 1:len) { #' var = df[, variable[i]] #' lev = levels(var) #' resampled[[i]] = factor(sample(lev, size = n, replace = TRUE)) #' } #' #' resampled_df = data.frame(resampled[[1]]) #' for (i in 2:len) { #' resampled_df = cbind(resampled_df, resampled[[i]]) #' } #' colnames(resampled_df) = colnames(df) #' #' #escape persecution was dependently randomized #' country_1 = resampled_df[, "FeatCountry"] #' country_2 = resampled_df[, "FeatCountry_2"] #' i_1 = which((country_1 == "Iraq" | country_1 == "Sudan" | country_1 == "Somalia")) #' i_2 = which((country_2 == "Iraq" | country_2 == "Sudan" | country_2 == "Somalia")) #' #' reason_1 = resampled_df[, "FeatReason"] #' reason_2 = resampled_df[, "FeatReason_2"] #' levs = levels(reason_1) #' r_levs = levs[c(2,3)] #' #' reason_1 = sample(r_levs, size = n, replace = TRUE) #' #' reason_1[i_1] = sample(levs, size = length(i_1), replace = TRUE) #' #' reason_2 = sample(r_levs, size = n, replace = TRUE) #' #' reason_2[i_2] = sample(levs, size = length(i_2), replace = TRUE) #' #' resampled_df[, "FeatReason"] = reason_1 #' resampled_df[, "FeatReason_2"] = reason_2 #' #' #profession high skill fix #' educ_1 = resampled_df[, "FeatEd"] #' educ_2 = resampled_df[, "FeatEd_2"] #' i_1 = which((educ_1 == "Equivalent to completing two years of college in the US" | #' educ_1 == "Equivalent to completing a college degree in the US" | #' educ_1 == "Equivalent to completing a graduate degree in the US")) #' i_2 = which((educ_2 == "Equivalent to completing two years of college in the US" | #' educ_2 == "Equivalent to completing a college degree in the US" | #' educ_2 == "Equivalent to completing a graduate degree in the US")) #' #' #' job_1 = resampled_df[, "FeatJob"] #' job_2 = resampled_df[, "FeatJob_2"] #' levs = levels(job_1) #' # take out computer programmer, doctor, financial analyst, 
and research scientist #' r_levs = levs[-c(2,4,5, 9)] #' #' job_1 = sample(r_levs, size = n, replace = TRUE) #' #' job_1[i_1] = sample(levs, size = length(i_1), replace = TRUE) #' #' job_2 = sample(r_levs, size = n, replace = TRUE) #' #' job_2[i_2] = sample(levs, size = length(i_2), replace = TRUE) #' #' resampled_df[, "FeatJob"] = job_1 #' resampled_df[, "FeatJob_2"] = job_2 #' #' resampled_df[colnames(resampled_df)] = lapply(resampled_df[colnames(resampled_df)], factor ) #' #' return(resampled_df) #' } #' #'\donttest{ #'own_resamples = list() #' B = 50 #' for (i in 1:B) { #' newdf = resample_func_immigration(carryover_df, left_idx = 1:9, right_idx = 10:18, seed = i) #' own_resamples[[i]] = newdf #' } #' carryover_test = CRT_carryovereffect(formula = form, data = carryover_df, left = left, #' right = right, task = "task", supplyown_resamples = own_resamples, B = B) #' carryover_test$p_val #' } CRT_carryovereffect = function(formula, data, left, right, task, design = "Uniform", supplyown_resamples = NULL, profileorder_constraint = TRUE, non_factor = NULL, B = 200, parallel = TRUE,num_cores = 2,nfolds = 3, lambda = c(20, 30, 40), tol = 1e-3, seed = sample(c(1:1000), size = 1), verbose = TRUE) { start_time = Sys.time() # processing stage if (!(design %in% c("Uniform", "Manual"))) stop("Design should be either Uniform or Manual") left_levs = sapply(data[, left], function(x) levels(x)) right_levs = sapply(data[, right], function(x) levels(x)) for (i in 1:length(left_levs)) { check = all.equal(left_levs[[i]], right_levs[[i]]) check = isTRUE(check) if (!check) stop("Left factors and right factors levels do not agree") } Y_var = as.character(formula)[2] y = data[, Y_var] if (!is.numeric(y)) stop("Response is not numeric") x = data[, c(left, right, task)] non_factor_idx = (1:ncol(x))[colnames(x) %in% non_factor] if (length(non_factor_idx) == 0) { non_factor_idx = NULL } left_idx = (1:ncol(x))[colnames(x) %in% left] right_idx = (1:ncol(x))[colnames(x) %in% right] task_var = which(colnames(x) == task) if (design == "Uniform") { resample_func = NULL } if (design == "Manual") { resample_func = c("Not Null") } out = get_carryovereffect(x = x, y = y, left_idx = left_idx, right_idx = right_idx, task_var = task_var, resample_func = resample_func, supplyown_resamples = supplyown_resamples, profileorder_constraint = profileorder_constraint, B= B, lambda = lambda, non_factor_idx = non_factor_idx, tol = tol, parallel = parallel, num_cores = num_cores, nfolds = nfolds, seed = seed, verbose = verbose) end_time = Sys.time() out$elapsed_time = end_time - start_time return(out) } #' Testing fatigue effect in Conjoint Experiments #' #' This function takes a conjoint dataset and returns the p-value when using the #' CRT to test if the fatigue effect holds using HierNet test statistic. #' The function requires user to specify the outcome, all factors used in the #' conjoint experiment, and both the evaluation task number and respondent index. #' The function assumes the forced choice conjoint experiment and consequently #' assumes the data to contain the left and right profile factors in separate #' column in the dataset supplied. #' #' @param formula A formula object specifying the outcome variable on the left-hand #' side and factors of (X,Z) and respondent characteristics (V) in the right hand side. #' RHS variables should be separated by + signs and should only contain either #' left or right for each (X,Z). 
#' For example Y ~ Country_left + Education_left is sufficient as opposed to #' Y ~ Country_left + Country_right + Education_left + Education_right #' @param data A dataframe containing outcome variable and all factors (X,Z,V) #' (including both left and right profile factors). All (X,Z,V) listed in #' the formula above are expected to be of class factor unless explicitly stated #' in non_factor input. #' @param left Vector of column names of data that corresponds to the left profile factors #' @param right Vector of column names of data that corresponds to the right profile factors. #' NOTE: left and right are assumed to be the same length and the #' order should correspond to the same variables. For example left = #' c("Country_left", "Education_left") and right = c("Country_right", "Education_right") #' @param task A character string indicating column of data that contains the task #' evaluation. IMPORTANT: The task variable is assumed to have no missing tasks, i.e., #' each respondent should have 1:J tasks. Please drop respondents with missing tasks. #' @param respondent A character string indicating column of data that contains the #' respondent index. The column should contain integers from 1:N indicating respondent index. #' @param profileorder_constraint Boolean indicating whether to enforce profile order #' constraint (default = TRUE) #' @param non_factor A vector of strings indicating columns of data that are not factors. #' This should only be used for respondent characteristics (V) that are not factors. #' For example non_factor = "Respondent_Age". #' @param B Numeric integer value indicating the number of resamples for the CRT procedure. #' Default value is B=200. #' @param parallel Boolean indicating whether parallel computing should be used. #' Default value is TRUE. #' @param num_cores Numeric integer indicating number of cores to use when parallel=TRUE. #' num_cores should not exceed the number of cores the user's machine can handle. Default is 2. #' @param nfolds Numeric integer indicating number of cross-validation folds. Default is 3. #' @param lambda Numeric vector indicating lambda used for cross-validation for HierNet fit. #' Default lambda=c(20,30,40). #' @param tol Numeric value indicating acceptable tolerance for terminating optimization #' fit for HierNet. Default is tol=1e-3. WARNING: Do not increase as it greatly increases #' computation time. #' @param speedup Boolean indicating whether to employ computational tricks to make #' function run faster. It is always recommended to use default speedup=TRUE. #' @param seed Seed used for CRT procedure #' @param verbose Boolean indicating verbose output. Default verbose=TRUE #' #' @return A list containing: \item{p_val}{A numeric value for the p-value testing #' fatigue effect.} #' \item{obs_test_stat}{A numeric value for the observed test statistic.} #' \item{resampled_test_stat}{Matrix containing all the B resampled test statistics} #' \item{tol}{Tolerance used for HierNet} #' \item{lam}{Best cross-validated lambda} #' \item{hiernet_fit}{An object of class hiernet that contains the hiernet fit for #' the observed test statistic} #' \item{seed}{Seed used} #' \item{elapsed_time}{Elapsed time} #' #' @export #' @references Ham, D., Janson, L., and Imai, K. #' (2022) Using Machine Learning to Test Causal Hypotheses in Conjoint Analysis #' @examples #' # Subset of Immigration Choice Conjoint Experiment Data from Hainmueller et. al. (2014). 
#' data("immigrationdata") #' form = formula("Y ~ FeatEd + FeatGender + FeatCountry + FeatReason + FeatJob + #' FeatExp + FeatPlans + FeatTrips + FeatLang + ppage + ppeducat + ppethm + ppgender") #' left = colnames(immigrationdata)[1:9] #' right = colnames(immigrationdata)[10:18] #' # Each respondent evaluated 5 tasks #' J = 5 #' fatigue_df = immigrationdata #' fatigue_df$task = rep(1:J, nrow(fatigue_df)/J) #' fatigue_df$respondent = rep(1:(nrow(fatigue_df)/J), each = J) #' \donttest{ #' fatigue_test = CRT_fatigueeffect(formula = form, data = fatigue_df, left = left, #' right = right, task = "task", respondent = "respondent", B = 50) #' fatigue_test$p_val #' } CRT_fatigueeffect = function(formula, data, left, right, task, respondent, profileorder_constraint = TRUE, non_factor = NULL, B = 200, parallel = TRUE,num_cores = 2,nfolds = 3, lambda = c(20, 30, 40), tol = 1e-3, speedup = TRUE, seed = sample(c(1:1000), size = 1), verbose = TRUE) { start_time = Sys.time() # processing stage left_levs = sapply(data[, left], function(x) levels(x)) right_levs = sapply(data[, right], function(x) levels(x)) for (i in 1:length(left_levs)) { check = all.equal(left_levs[[i]], right_levs[[i]]) check = isTRUE(check) if (!check) stop("Left factors and right factors levels do not agree") } Y_var = as.character(formula)[2] y = data[, Y_var] if (!is.numeric(y)) stop("Response is not numeric") x = data[, c(left, right, task, respondent)] non_factor_idx = (1:ncol(x))[colnames(x) %in% non_factor] if (length(non_factor_idx) == 0) { non_factor_idx = NULL } left_idx = (1:ncol(x))[colnames(x) %in% left] right_idx = (1:ncol(x))[colnames(x) %in% right] task_var = which(colnames(x) == task) respondent_var = which(colnames(x) == respondent) out = get_fatigueeffect(x = x, y = y, left_idx = left_idx, right_idx = right_idx, task_var = task_var, respondent_var = respondent_var, profileorder_constraint = profileorder_constraint, B= B, lambda = lambda, non_factor_idx = non_factor_idx, tol = tol, speedup = speedup, parallel = parallel, num_cores = num_cores, nfolds = nfolds, seed = seed, verbose = verbose) end_time = Sys.time() out$elapsed_time = end_time - start_time return(out) }
# ---- end of CRTConjoint/R/main_funcs.R ----
## ---- include = FALSE--------------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ## ----setup-------------------------------------------------------------------- library(CRTConjoint) ## ----------------------------------------------------------------------------- data("immigrationdata") ## ----------------------------------------------------------------------------- form = formula("Y ~ FeatEd + FeatGender + FeatCountry + FeatReason + FeatJob + FeatExp + FeatPlans + FeatTrips + FeatLang + ppage + ppeducat + ppethm + ppgender") left = colnames(immigrationdata)[1:9] right = colnames(immigrationdata)[10:18] left; right ## ---- eval = FALSE------------------------------------------------------------ # education_test = CRT_pval(formula = form, data = immigrationdata, X = "FeatEd", # left = left, right = right, non_factor = "ppage", B = 100, analysis = 2) # education_test$p_val ## ---- eval = FALSE------------------------------------------------------------ # constraint_randomization = list() # (Job has dependent randomization scheme) # constraint_randomization[["FeatJob"]] = c("Financial analyst","Computer programmer", # "Research scientist","Doctor") # constraint_randomization[["FeatEd"]] = c("Equivalent to completing two years of # college in the US", "Equivalent to completing a graduate degree in the US", # "Equivalent to completing a college degree in the US") ## ---- eval = FALSE------------------------------------------------------------ # job_test = CRT_pval(formula = form, data = immigrationdata, X = "FeatJob", # left = left, right = right, design = "Constrained Uniform", # constraint_randomization = constraint_randomization, non_factor = "ppage", B = 100) # job_test$p_val ## ---- eval = FALSE------------------------------------------------------------ # profileorder_test = CRT_profileordereffect(formula = form, data = immigrationdata, # left = left, right = right, B = 100) # profileorder_test$p_val ## ---- eval = FALSE------------------------------------------------------------ # resample_func_immigration = function(x, seed = sample(c(0, 1000), size = 1), left_idx, right_idx) { # set.seed(seed) # df = x[, c(left_idx, right_idx)] # variable = colnames(x)[c(left_idx, right_idx)] # len = length(variable) # resampled = list() # n = nrow(df) # for (i in 1:len) { # var = df[, variable[i]] # lev = levels(var) # resampled[[i]] = factor(sample(lev, size = n, replace = TRUE)) # } # # resampled_df = data.frame(resampled[[1]]) # for (i in 2:len) { # resampled_df = cbind(resampled_df, resampled[[i]]) # } # colnames(resampled_df) = colnames(df) # # #escape persecution was dependently randomized # country_1 = resampled_df[, "FeatCountry"] # country_2 = resampled_df[, "FeatCountry_2"] # i_1 = which((country_1 == "Iraq" | country_1 == "Sudan" | country_1 == "Somalia")) # i_2 = which((country_2 == "Iraq" | country_2 == "Sudan" | country_2 == "Somalia")) # # reason_1 = resampled_df[, "FeatReason"] # reason_2 = resampled_df[, "FeatReason_2"] # levs = levels(reason_1) # r_levs = levs[c(2,3)] # # reason_1 = sample(r_levs, size = n, replace = TRUE) # # reason_1[i_1] = sample(levs, size = length(i_1), replace = TRUE) # # reason_2 = sample(r_levs, size = n, replace = TRUE) # # reason_2[i_2] = sample(levs, size = length(i_2), replace = TRUE) # # resampled_df[, "FeatReason"] = reason_1 # resampled_df[, "FeatReason_2"] = reason_2 # # #profession high skill fix # educ_1 = resampled_df[, "FeatEd"] # educ_2 = resampled_df[, "FeatEd_2"] # i_1 = which((educ_1 == 
"Equivalent to completing two years of college in the US" | # educ_1 == "Equivalent to completing a college degree in the US" | # educ_1 == "Equivalent to completing a graduate degree in the US")) # i_2 = which((educ_2 == "Equivalent to completing two years of college in the US" | # educ_2 == "Equivalent to completing a college degree in the US" | # educ_2 == "Equivalent to completing a graduate degree in the US")) # # # job_1 = resampled_df[, "FeatJob"] # job_2 = resampled_df[, "FeatJob_2"] # levs = levels(job_1) # # take out computer programmer, doctor, financial analyst, and research scientist # r_levs = levs[-c(2,4,5, 9)] # # job_1 = sample(r_levs, size = n, replace = TRUE) # # job_1[i_1] = sample(levs, size = length(i_1), replace = TRUE) # # job_2 = sample(r_levs, size = n, replace = TRUE) # # job_2[i_2] = sample(levs, size = length(i_2), replace = TRUE) # # resampled_df[, "FeatJob"] = job_1 # resampled_df[, "FeatJob_2"] = job_2 # # resampled_df[colnames(resampled_df)] = lapply(resampled_df[colnames(resampled_df)], factor ) # # return(resampled_df) # } ## ---- eval = FALSE------------------------------------------------------------ # carryover_df = immigrationdata # own_resamples = list() # B = 100 # for (i in 1:B) { # newdf = resample_func_immigration(carryover_df, left_idx = 1:9, right_idx = 10:18, seed = i) # own_resamples[[i]] = newdf # } ## ---- eval = FALSE------------------------------------------------------------ # J = 5 # carryover_df$task = rep(1:J, nrow(carryover_df)/J) # # carryover_test = CRT_carryovereffect(formula = form, data = carryover_df, left = left, # right = right, task = "task", supplyown_resamples = own_resamples, B = B) # carryover_test$p_val ## ---- eval = FALSE------------------------------------------------------------ # fatigue_df = immigrationdata # fatigue_df$task = rep(1:J, nrow(fatigue_df)/J) # fatigue_df$respondent = rep(1:(nrow(fatigue_df)/J), each = J) # # fatigue_test = CRT_fatigueeffect(formula = form, data = fatigue_df, left = left, # right = right, task = "task", respondent = "respondent", B = 100) # fatigue_test$p_val
# ---- end of CRTConjoint/inst/doc/CRTConjoint.R ----
--- title: "Using CRTConjoint" output: rmarkdown::html_vignette description: > This vignette describes the basics of how to use CRTConjoint package along with how to effectively use this in the Amazon Web Services parallel computing cluster. vignette: > %\VignetteIndexEntry{Using CRTConjoint} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` ```{r setup} library(CRTConjoint) ``` This vignette will go through examples cases listed in the package for all the main functions with details on how to use the inputs. Finally, the last section contains detailed steps on how to use CRTConjoint with the Amazon Web Services computing cluster. ## Understanding CRT_pval To begin, we start with understanding the main function `CRT_pval`. All examples use the immigration choice conjoint experiment data from Hainmueller et. al. (2014). We first load this data from the package. ```{r} data("immigrationdata") ``` Each row consists a pair of immigrant candidates that were shown to respondents. For example, the first row shows that the left immigrant candidate was a Male from Iraq who had a high school degree while the right immigrant candidate was a Female from France with no formal education. The respondent who evaluated this task was a 20 year old college educated White Male, who voted for the left immigrant candidate. In the first example, we aim to understand if the candidate's education matters for immigration preferences. To test this, `CRT_pval` requires users to specify all factors and respondent factors of interest in the `formula` input along with the binary response. `CRT_pval` requires us to additionally specify which of the factors are the left and right profiles. ```{r} form = formula("Y ~ FeatEd + FeatGender + FeatCountry + FeatReason + FeatJob + FeatExp + FeatPlans + FeatTrips + FeatLang + ppage + ppeducat + ppethm + ppgender") left = colnames(immigrationdata)[1:9] right = colnames(immigrationdata)[10:18] left; right ``` Users can see that the left and right profile factors are aligned, i.e., the first entry is the education for both the left and right profiles. It is important that they are expected to be aligned. We also note that the formula only contains the factors for the left profile. This is sufficient as the algorithm will take the left and right input to use both left and right profile attributes for testing the hypothesis. Lastly, we include all respondent characteristics (ppage, ppeducat, etc.) to boost power. We are ready to now run `CRT_pval` to test whether education matters. ```{r, eval = FALSE} education_test = CRT_pval(formula = form, data = immigrationdata, X = "FeatEd", left = left, right = right, non_factor = "ppage", B = 100, analysis = 2) education_test$p_val ``` We again note `X = "FeatEd"` is sufficient to clarify which factor we are testing for. Because the function assumes all attributes in conjoint experiments are of class factor, if there are variables that are not factor class, for example respondent age (ppage), the function must know which are these non-factor variables. Furthermore, to save time we only run it for $B = 100$ resamples. Lastly, we set $analysis = 2$ to also allow the function to spit out two of the strongest interactions that contributed to the observed test statistic. We note that this is purely for exploratory purposes. The output should contain not only the $p$-value but also the observed test statistic, all the resampled test statistic, etc. 
The function also displays a progress bar showing the percentage of resamples completed.

### Using constrained randomization design

Since the immigrant's education was uniformly sampled, we did not need to specify the design: the default design is uniform. However, some factors used a more complex design. One such factor is job (FeatJob). In this experiment, the candidate's occupation could only be financial analyst, computer programmer, research scientist, or doctor if their education was equivalent to at least some level of college. Because this constrained uniform design is popular in conjoint experiments, the function can account for it so long as the user specifies the constraints. We now show, using the same example, how to specify this constraint.

```{r, eval = FALSE}
constraint_randomization = list() # (Job has dependent randomization scheme)
constraint_randomization[["FeatJob"]] = c("Financial analyst","Computer programmer",
                                          "Research scientist","Doctor")
constraint_randomization[["FeatEd"]] = c("Equivalent to completing two years of college in the US",
                                         "Equivalent to completing a graduate degree in the US",
                                         "Equivalent to completing a college degree in the US")
```

The `constraint_randomization` list has length two. The first element contains the levels of job that can only be randomized with certain levels of education. The second element contains the levels of education that allow the aforementioned jobs to have positive probability. The listed levels are assumed to match the levels in the supplied data, and the names of the list are assumed to match the column names in the supplied data. Note that the user only has to supply the constraint for either the left or the right factor; the function assumes the constrained randomization scheme is the same for both left and right factors, i.e., the same for FeatJob_2 and FeatEd_2.

```{r, eval = FALSE}
job_test = CRT_pval(formula = form, data = immigrationdata, X = "FeatJob",
                    left = left, right = right, design = "Constrained Uniform",
                    constraint_randomization = constraint_randomization,
                    non_factor = "ppage", B = 100)
job_test$p_val
```

Once we have the constraint list, we supply it through the `constraint_randomization` input, after setting `design = "Constrained Uniform"`. The package examples also cover the case where a user has a nonuniform (but unconstrained) design, and show how to force a variable to be included as an interaction.

## Understanding extensions of CRT

We provide three other CRT functions that take similar inputs but test regularity conditions often invoked in conjoint experiments. The first tests the profile order effect (`CRT_profileordereffect`), i.e., whether being shown on the left or the right has any impact on the response. The syntax for this test is straightforward.

```{r, eval = FALSE}
profileorder_test = CRT_profileordereffect(formula = form, data = immigrationdata,
                                           left = left, right = right, B = 100)
profileorder_test$p_val
```

Testing the carryover effect and the fatigue effect is slightly more complex. Testing the carryover effect requires resampling all the left and right factors. The default `design = "Uniform"` assumes that all factors were uniformly sampled. If that is not the case (as in the immigration example), we need to supply our own resamples.
To do this, we build our own resampling function:

```{r, eval = FALSE}
resample_func_immigration = function(x, seed = sample(c(0, 1000), size = 1), left_idx, right_idx) {
  set.seed(seed)
  df = x[, c(left_idx, right_idx)]
  variable = colnames(x)[c(left_idx, right_idx)]
  len = length(variable)
  resampled = list()
  n = nrow(df)
  for (i in 1:len) {
    var = df[, variable[i]]
    lev = levels(var)
    resampled[[i]] = factor(sample(lev, size = n, replace = TRUE))
  }

  resampled_df = data.frame(resampled[[1]])
  for (i in 2:len) {
    resampled_df = cbind(resampled_df, resampled[[i]])
  }
  colnames(resampled_df) = colnames(df)

  # escape persecution was dependently randomized
  country_1 = resampled_df[, "FeatCountry"]
  country_2 = resampled_df[, "FeatCountry_2"]
  i_1 = which((country_1 == "Iraq" | country_1 == "Sudan" | country_1 == "Somalia"))
  i_2 = which((country_2 == "Iraq" | country_2 == "Sudan" | country_2 == "Somalia"))

  reason_1 = resampled_df[, "FeatReason"]
  reason_2 = resampled_df[, "FeatReason_2"]
  levs = levels(reason_1)
  r_levs = levs[c(2,3)]

  reason_1 = sample(r_levs, size = n, replace = TRUE)
  reason_1[i_1] = sample(levs, size = length(i_1), replace = TRUE)

  reason_2 = sample(r_levs, size = n, replace = TRUE)
  reason_2[i_2] = sample(levs, size = length(i_2), replace = TRUE)

  resampled_df[, "FeatReason"] = reason_1
  resampled_df[, "FeatReason_2"] = reason_2

  # profession high skill fix
  educ_1 = resampled_df[, "FeatEd"]
  educ_2 = resampled_df[, "FeatEd_2"]
  i_1 = which((educ_1 == "Equivalent to completing two years of college in the US" |
                 educ_1 == "Equivalent to completing a college degree in the US" |
                 educ_1 == "Equivalent to completing a graduate degree in the US"))
  i_2 = which((educ_2 == "Equivalent to completing two years of college in the US" |
                 educ_2 == "Equivalent to completing a college degree in the US" |
                 educ_2 == "Equivalent to completing a graduate degree in the US"))

  job_1 = resampled_df[, "FeatJob"]
  job_2 = resampled_df[, "FeatJob_2"]
  levs = levels(job_1)
  # take out computer programmer, doctor, financial analyst, and research scientist
  r_levs = levs[-c(2,4,5, 9)]

  job_1 = sample(r_levs, size = n, replace = TRUE)
  job_1[i_1] = sample(levs, size = length(i_1), replace = TRUE)

  job_2 = sample(r_levs, size = n, replace = TRUE)
  job_2[i_2] = sample(levs, size = length(i_2), replace = TRUE)

  resampled_df[, "FeatJob"] = job_1
  resampled_df[, "FeatJob_2"] = job_2

  resampled_df[colnames(resampled_df)] = lapply(resampled_df[colnames(resampled_df)], factor)

  return(resampled_df)
}
```

This resampling function takes the data (`x`) as input and, given the indices of the left and right profile attributes, returns a completely new resampled dataframe with the same dimensions as `x`. As stated above, because testing the carryover effect requires resampling all the attributes $B$ times, we must supply all the manually resampled dataframes. To do this, we store them in a list of length $B$, each element containing resampled data for all the left and right attributes. We store this list in `own_resamples`.

```{r, eval = FALSE}
carryover_df = immigrationdata
own_resamples = list()
B = 100
for (i in 1:B) {
  newdf = resample_func_immigration(carryover_df, left_idx = 1:9, right_idx = 10:18, seed = i)
  own_resamples[[i]] = newdf
}
```

Lastly, the carryover test requires a column that indicates the task evaluation number for each row. In the immigration experiment, each respondent rated five tasks and the data is sorted by task, i.e., the first five rows are the first five tasks for the first respondent.
Consequently, we can define a new column, task, that iterates 1 to 5, and run the main function. NOTE: it is important that the task variable has no missing tasks, e.g., a respondent who only rated four tasks while all other respondents rated five. If there is such a respondent, please drop them before using this function.

```{r, eval = FALSE}
J = 5
carryover_df$task = rep(1:J, nrow(carryover_df)/J)
carryover_test = CRT_carryovereffect(formula = form, data = carryover_df, left = left,
                                     right = right, task = "task",
                                     supplyown_resamples = own_resamples, B = B)
carryover_test$p_val
```

Lastly, to test the fatigue effect, we only need to additionally specify the respondent index. Like the task column, we repeat each respondent index (1 up to the number of respondents) five times and store it in a new column called respondent.

```{r, eval = FALSE}
fatigue_df = immigrationdata
fatigue_df$task = rep(1:J, nrow(fatigue_df)/J)
fatigue_df$respondent = rep(1:(nrow(fatigue_df)/J), each = J)
fatigue_test = CRT_fatigueeffect(formula = form, data = fatigue_df, left = left,
                                 right = right, task = "task", respondent = "respondent", B = 100)
fatigue_test$p_val
```

## Using CRTConjoint with Amazon Web Services

An optional argument for all CRT functions in this package is the `num_cores` input, which defaults to 2 cores. Although 2 cores may be sufficient for exploratory work with $B = 100$, the final reported $p$-values should be based on a much higher value of $B$. A typical Mac laptop will only support up to `num_cores = 4`, which may be unsatisfactory for researchers. Researchers with their own computing cluster can skip this section. For those without easy access to a computing cluster, this section explains how to run these functions on powerful computing clusters provided by Amazon Web Services (AWS). We leverage the RStudio Server AMI for AWS maintained and provided by: https://www.louisaslett.com/RStudio_AMI/. The steps needed to use AWS RStudio are:

* Step 1) Sign up for an AWS account. Then log in.
* Step 2) Go to the above link: https://www.louisaslett.com/RStudio_AMI/ and click on the region closest to the user's local region. For example, we click on US East, Ohio ami-07038...
* Step 3) This will redirect you to a new page. Under instance type, select the desired instance type. The number next to vCPU indicates the number of cores available. We recommend c5a.16xlarge with 64 cores. Please note that this costs 2.4 USD per hour and the user is billed hourly; there are cheaper options.
* Step 4) After selecting the instance type, scroll down to "Network settings". Make sure "Allow SSH Traffic" is set to "Anywhere". Also tick "Allow HTTP traffic from the internet".
* Step 5) Press the orange "Launch instance" button.
* Step 6) After being redirected, MANUALLY copy the link below "Public IPv4 DNS" and paste it into a new browser tab. DO NOT press the open address button, as it does not work.
* Step 7) You will be prompted (this might take a minute) with an RStudio login page. The username is rstudio and the password is your instance ID.
* Step 8) You can now use RStudio on this AWS server. Install the CRTConjoint package, set num_cores to the desired number of cores, and run the CRT functions faster, as sketched below.

We recommend that the user has already written all necessary code before launching the instance. Since users are charged hourly, it is best to be able to run the code directly.
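As a concrete illustration, the chunk below is a hedged sketch of what a final run on such an instance might look like; the specific choices of `B = 2000` and `num_cores = 50` are only examples of the larger values this setup makes practical.

```{r, eval = FALSE}
education_test_final = CRT_pval(formula = form, data = immigrationdata, X = "FeatEd",
                                left = left, right = right, non_factor = "ppage",
                                B = 2000, num_cores = 50)
education_test_final$p_val
```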
If the user runs with 50 cores, the runtime for a typical conjoint experiment should not exceed 5-10 minutes for a single $p$-value with $B = 2000$, so we do not expect users to incur substantial fees.

For any questions please do not hesitate to contact me at: daewoongham at g dot harvard dot edu
/scratch/gouwar.j/cran-all/cranData/CRTConjoint/inst/doc/CRTConjoint.Rmd
/scratch/gouwar.j/cran-all/cranData/CRTConjoint/vignettes/CRTConjoint.Rmd
n4incidence <- function(le, lc, m, t, CV, alpha=0.05, power=0.8, AR=1,
                        two.tailed=TRUE, digits=3)
{
  # Error checking
  if ((le <= 0) || (lc <= 0))
    stop("Sorry, the specified value of le and lc must be strictly positive...")

  if ((alpha >= 1) || (alpha <= 0) || (power <= 0) || (power >= 1))
    stop("Sorry, the alpha and power must lie within (0,1)...")

  if (t <= 0)
    stop("Sorry, the value of t must be strictly positive...")

  if (AR <= 0)
    stop("Sorry, the specified value of the Allocation Ratio (AR) must be strictly positive...")

  if (m <= 1)
    stop("Sorry, the (average) cluster size, m, should be greater than one...")

  # If m is a decimal, round up to generate a more conservative sample size.
  m <- ceiling(m);

  # Initialize parameters
  r <- NULL
  r$le <- le;
  r$lc <- lc;
  r$t <- t;
  r$CV <- CV;
  r$m <- m;
  r$alpha <- alpha;
  r$power <- power;
  r$two.tailed <- two.tailed;
  r$AR <- AR;

  # Total follow-up (person-time) per cluster; needed in both tails below.
  T <- t*m;

  # Calculate total number of clusters n
  if (two.tailed)
  {
    IFt <- 1 + ((CV^2)*(le^2 + lc^2)*T)/(le + lc);
    r$n <- ((qnorm(1 - alpha/2) + qnorm(power))^2*(le + lc))/(T*(le - lc)^2);
    r$n <- IFt*r$n;

    # Perform iterative sample size calculation, using the t statistic for small n.
    if (r$n < 30)
    {
      nTemp <- 0;
      while (abs(r$n - nTemp) > 1)
      {
        nTemp <- r$n;
        IFt <- 1 + ((CV^2)*(le^2 + lc^2)*T)/(le + lc);
        r$n <- ((qt((1 - alpha/2), df=(2*(nTemp - 1))) + qt(power, df=(2*(nTemp - 1))))^2*(le + lc))/(T*(le - lc)^2);
        r$n <- IFt*r$n;
      }
    }
  }

  if (!two.tailed)
  {
    IFt <- 1 + ((CV^2)*(le^2 + lc^2)*T)/(le + lc);
    r$n <- ((qnorm(1 - alpha) + qnorm(power))^2*(le + lc))/(T*(le - lc)^2);
    r$n <- IFt*r$n;

    if (r$n < 30)
    {
      nTemp <- 0;
      while (abs(r$n - nTemp) > 1)
      {
        nTemp <- r$n;
        IFt <- 1 + ((CV^2)*(le^2 + lc^2)*T)/(le + lc);
        r$n <- ((qt((1 - alpha), df=(2*(nTemp - 1))) + qt(power, df=(2*(nTemp - 1))))^2*(le + lc))/(T*(le - lc)^2);
        r$n <- IFt*r$n;
      }
    }
  }

  # Adjust for allocation ratio
  r$nE <- (1/2)*r$n*(1 + (1/AR));
  r$nC <- (1/2)*r$n*(1 + AR);

  class(r) <- "n4incidence";
  return(r);
}

# Print method
print.n4incidence <- function(x, ...)
{
  cat("The required sample size is a minimum of ", ceiling(x$nE), " clusters of size ", x$m,
      " in the Experimental Group \n", sep="")
  cat(" and a minimum of ", ceiling(x$nC), " clusters (size ", x$m,
      ") in the Control Group, followed for a time period of length ", x$t, "\n", sep="")
}

# Summary method
summary.n4incidence <- function(object, ...)
{
  cat("Sample Size Calculation for Comparison of Incidence Rates", "\n \n")
  cat("Assuming:", "\n")
  cat("Treatment Incidence Rate, le = ", object$le, "\n")
  cat("Control Incidence Rate, lc = ", object$lc, "\n")
  cat("Time Period, t = ", object$t, "\n")
  cat("Cluster Size (average) = ", object$m, "\n");
  cat("Coefficient of Variation, CV = ", object$CV, "\n")
  cat("Type I Error Rate (alpha) = ", object$alpha, " and Power = ", object$power, "\n \n", sep="")
  cat("The required sample size is a minimum of ", ceiling(object$nE), " clusters of size ", object$m,
      " in the Experimental Group \n", sep="")
  cat(" and a minimum of ", ceiling(object$nC), " clusters (size ", object$m,
      ") in the Control Group, followed for a time period of ", object$t, "\n", sep="")
}
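## --- Added illustration: hedged usage sketch, not part of the original source ---
## Shows how n4incidence() defined above might be called; the incidence rates,
## cluster size, follow-up time and CV below are made-up example values.
if (FALSE) {
  ex <- n4incidence(le = 0.01, lc = 0.005, m = 100, t = 5, CV = 0.25,
                    alpha = 0.05, power = 0.8)
  print(ex)    # short statement of required clusters per arm
  summary(ex)  # full report of the assumptions and the required sample size
}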
/scratch/gouwar.j/cran-all/cranData/CRTSize/R/n4incidence.R
n4means <- function(delta, sigma, m, ICC, alpha=0.05, power=0.8, AR=1,
                    two.tailed=TRUE, digits=3)
{
  # Error checking
  if (sigma <= 0)
    stop("Sorry, the specified value of sigma must be strictly positive...")

  if ((alpha >= 1) || (alpha <= 0) || (power <= 0) || (power >= 1))
    stop("Sorry, the alpha and power must lie within (0,1)")

  if (ICC <= 0)
    stop("Sorry, the ICC must lie within (0,1)")

  if (AR <= 0)
    stop("Sorry, the specified value of the Allocation Ratio (AR) must be strictly positive...")

  if (m <= 1)
    stop("Sorry, the (average) cluster size, m, should be greater than one...")

  # If m is a decimal, round up to generate a more conservative sample size.
  m <- ceiling(m);

  # Initialize parameters
  r <- NULL
  r$delta <- delta;
  r$sigma <- sigma;
  r$ICC <- ICC;
  r$m <- m;
  r$alpha <- alpha;
  r$power <- power;
  r$two.tailed <- two.tailed;
  r$AR <- AR;

  # Calculate total number of clusters n
  if (two.tailed)
  {
    r$n <- 2*sigma^2*(1 + (m - 1)*ICC)*(qnorm(1 - alpha/2) + qnorm(power))^2/(m*(delta^2));

    # Perform iterative sample size calculation, using the t statistic for small n.
    if (r$n < 30)
    {
      nTemp <- 0;
      while (abs(r$n - nTemp) > 1)
      {
        nTemp <- r$n;
        r$n <- 2*sigma^2*(1 + (m - 1)*ICC)*(qt((1 - alpha/2), df=(2*(nTemp - 1))) + qt(power, df=(2*(nTemp - 1))))^2/(m*(delta^2));
      }
    }
  }

  if (!two.tailed)
  {
    r$n <- 2*sigma^2*(1 + (m - 1)*ICC)*(qnorm(1 - alpha) + qnorm(power))^2/(m*(delta^2));

    if (r$n < 30)
    {
      nTemp <- 0;
      while (abs(r$n - nTemp) > 1)
      {
        nTemp <- r$n;
        # One-sided critical value for the one-tailed case.
        r$n <- 2*sigma^2*(1 + (m - 1)*ICC)*(qt((1 - alpha), df=(2*(nTemp - 1))) + qt(power, df=(2*(nTemp - 1))))^2/(m*(delta^2));
      }
    }
  }

  # Adjust for allocation ratio
  r$nE <- (1/2)*r$n*(1 + (1/AR));
  r$nC <- (1/2)*r$n*(1 + AR);

  class(r) <- "n4means";
  return(r);
}

# Print method
print.n4means <- function(x, ...)
{
  cat("The required sample size is a minimum of ", ceiling(x$nE), " clusters of size ", x$m,
      " in the Experimental Group \n", sep="")
  cat(" and a minimum of ", ceiling(x$nC), " clusters (size ", x$m, ") in the Control Group. \n", sep="")
}

# Summary method
summary.n4means <- function(object, ...)
{
  cat("Sample Size Calculation for Difference Between Means of Two Populations", "\n \n")
  cat("Assuming:", "\n")
  cat("Desired Minimum Detectable Difference between groups, delta = ", object$delta, "\n")
  cat("Sigma = ", object$sigma, "\n");
  cat("Cluster Size (average) = ", object$m, "\n");
  cat("ICC = ", object$ICC, "\n");
  cat("Type I Error Rate (alpha) = ", object$alpha, " and Power = ", object$power, "\n \n", sep="")
  cat("The required sample size is a minimum of ", ceiling(object$nE), " clusters of size ", object$m,
      " in the Experimental Group \n", sep="")
  cat(" and a minimum of ", ceiling(object$nC), " clusters (size ", object$m, ") in the Control Group. \n", sep="")
}
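## --- Added illustration: hedged usage sketch, not part of the original source ---
## Shows how n4means() defined above might be called; the minimum detectable
## difference, standard deviation, cluster size and ICC are made-up example values.
if (FALSE) {
  ex <- n4means(delta = 0.5, sigma = 1.5, m = 25, ICC = 0.02,
                alpha = 0.05, power = 0.8)
  print(ex)
  summary(ex)
}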
/scratch/gouwar.j/cran-all/cranData/CRTSize/R/n4means.R
n4meansEB <- function(ICC, varICC=0, delta, from, to, sigma, m, iter=1000, alpha=0.05, power=0.8, two.tailed=TRUE, digits=3, plot=TRUE) { if (sigma <=0) stop("Sorry, the specified value of sigma must be strictly positive...") if ((alpha >= 1) || (alpha <= 0) || (power <= 0) || (power >= 1)) stop("Sorry, the alpha and power must lie within (0,1)") for (i in 1:length(ICC)) { if (ICC[i] <= 0) stop("Sorry, the ICC must lie within (0,1)") } if (m <=1) stop("Sorry, the (average) cluster size, m, should be greater than one...") if (to < from) stop("From and To form the range of the estimated density for the ICC...") #If m is a decimal, round up to generate a more conservative sample size. m <- ceiling(m); #Initialize Parameters r <- NULL r$delta <- delta; r$sigma <- sigma; r$ICC <- ICC; r$varICC <- varICC; r$m <- m; r$alpha <- alpha; r$power <- power; r$two.tailed <- two.tailed; r$digits <- digits; r$from <- from; r$to <- to; #One or two-tailed tests if (two.tailed) { ZA <- -qnorm(alpha/2); ZB <- -qnorm(1 - power); } else { ZA <- -qnorm(alpha); ZB <- -qnorm(1 - power); } #Initialization of Results Vectors r$ResRho <- NULL; r$ResK <- NULL; #Compute Density Function; n must be much larger than iter. if (sum(varICC) != 0) { dens <- density(ICC, n=2^16, from=from, to=to, weights = 1/(varICC + var(varICC))/sum(1/(varICC+var(varICC)))) } else { dens <- density(ICC, n=2^16, from=from, to=to) } rhoVector <- sample(dens$x, size=iter, prob=dens$y) #Computational Loop for (i in 1:iter) { rho <- rhoVector[i] k <- (((ZA + ZB)^2)*2*sigma*(1 + (m -1)*rho))/(m*delta^2); r$ResRho <- append(r$ResRho, rho); r$ResK <- append(r$ResK, k); } if (plot) { hist(ICC, freq = FALSE, main="Histogram of Values of ICC and Empirical Density", ylab="Density", xlab="ICC Estimates", xlim=c(from, to), ylim=c(0,25)); par(new=TRUE); plot(dens, xlim=c(from,to), ylim=c(0,25), main="", xlab=""); } class(r) <- "n4meansEB"; return(r); } #Print Method print.n4meansEB <- function(x, ...) { cat("Simulation of the Empirical Density suggests that appropriate quantiles \n") cat("for the number of clusters to be randomized in each group are: \n") print(round(quantile(x$ResK, probs=c(0,0.25,0.5,0.75, 1.0)),digits=x$digits)) cat("With ICC quantiles: \n ") print(round(quantile(x$ResRho, probs=c(0,0.25,0.5,0.75, 1.0)),digits=x$digits)) } #Summary Method summary.n4meansEB <- function(object, ...) { cat("Simulation Based Sample Size Estimation (Empirical Density) to Compare Means of Two Populations", "\n \n") cat("Assuming:", "\n") cat("Desired Minimum Detectable Difference between groups, delta = ", object$delta, "\n") cat("Sigma = ", object$sigma, "\n"); cat("Cluster Size (average) = ", object$m, "\n"); cat("ICCs = ", object$ICC, "\n"); cat("Variance of ICCs = ", object$varICC, "\n"); cat("Type I Error Rate (alpha) = ", object$alpha, " and Power = ", object$power, "\n \n",sep="") cat("Simulation of the Empirical Density suggests that appropriate quantiles \n") cat("for the number of clusters to be randomized in each group are: \n") print(round(quantile(object$ResK, probs=c(0,0.1, 0.25,0.5,0.75, 0.9, 1.0)),digits=object$digits)) cat("With ICC quantiles: \n") print(round(quantile(object$ResRho, probs=c(0,0.1, 0.25,0.5,0.75, 0.9, 1.0)),digits=object$digits)) }
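## --- Added illustration: hedged usage sketch, not part of the original source ---
## Shows how n4meansEB() defined above might be called: a handful of ICC estimates
## from earlier trials are smoothed into an empirical density over (from, to) and
## the required number of clusters is simulated. All values are made-up examples.
if (FALSE) {
  prior_icc <- c(0.005, 0.01, 0.02, 0.03)
  ex <- n4meansEB(ICC = prior_icc, delta = 0.5, from = 0.001, to = 0.05,
                  sigma = 1.5, m = 25, iter = 1000, plot = FALSE)
  print(ex)    # quantiles of the simulated number of clusters per group
  summary(ex)
}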
/scratch/gouwar.j/cran-all/cranData/CRTSize/R/n4meansEB.R
n4meansMeta <- function(data, model="fixed", k, ICC, ICCDistn="unif", lower=0, upper=0.25, varRed=FALSE, m, sdm, meanC, sdC, sdT=sdC, iter=1000, alpha=0.05) { if (!is.matrix(data)) stop("Sorry data must be a matrix of Mean Difference, 95 % Lower and Upper Limits from Previous Studies") if (! ( (ICCDistn == "fixed") | (ICCDistn == "unif") | (ICCDistn == "normal") | (ICCDistn == "smooth") ) ) stop("Sorry, the ICC Distribution must be one of: fixed, unif, normal or smooth.") if (! ( (model == "fixed") | (model == "random") ) ) stop("Sorry, model must be fixed, or random.") if ( (ICCDistn == "fixed") && (length(ICC) > 1) ) stop("Sorry you can only provide a single ICC value with the fixed distribution option.") if ((alpha >= 1) || (alpha <= 0)) stop("Sorry, the alpha must lie within (0,1)") if ((sdm < 0) || (sdC < 0)) stop("Sorry, variances must be non-negative.") for (i in 1:length(ICC)) { if (ICC[i] <= 0) stop("Sorry, the ICC must lie within (0,1)") } if (m <=1) stop("Sorry, the (average) cluster size, m, should be greater than one...") for (i in 1:length(k)) { if (k[i] <= 1) stop("Sorry, the values of k must be greater than 1") } X <- NULL; X$data <- data; X$model <- model; X$k <- k; X$ICC <- ICC; X$m <- m; X$sdm <- sdm; X$meanC <- meanC; X$sdC <- sdC; X$iter <- iter; X$varRed <- varRed; X$alpha <- alpha; original <- .metaAnalMD(data, model=X$model, alpha=X$alpha); X$newMean <- original$theta; X$newVar <- original$Var; X$l <- original$l; X$u <- original$u X$Power <- rep(0, length(k)); if (varRed) { X$varianceReduction <- rep(0, length(k)); } for (a in 1:length(k)) { kT0 <- k[a]; kC0 <- k[a]; if (ICCDistn == "unif") { ICCT0 <- runif(iter, lower, upper) } if (ICCDistn == "fixed") { ICCT0 <- rep(ICC, iter) } if (ICCDistn == "normal") { ICCT0 <- abs( rnorm(iter, 0, sd(ICC) ) ); } if (ICCDistn == "smooth") { dens <- density(ICC, n=2^16, from=lower, to=upper) ICCT0 <- sample(dens$x, size=iter, prob=dens$y) } Reject <- rep(NA, iter); if (varRed) { varReductionIter <- rep(NA,iter); } for (i in 1:iter) { X$thetaNew <- rnorm(1, X$newMean, sqrt(X$newVar)) meanC0 <- rnorm(1, meanC, sdC); meanT0 <- X$thetaNew + meanC0; w <- .oneCRTCTS(meanT=meanT0, meanC=meanC0, sdT=sdT, sdC=sdC, kC=kC0, kT=kT0, mTmean=m, mTsd=sdm, mCmean=m, mCsd=sdm, ICCT=ICCT0[i], ICCC=ICCT0[i]) x <- .summarizeTrialCTS(ResultsTreat=w$ResultsTreat, ResultsControl=w$ResultsControl) y <- .makeCI(Delta=x$meanDiff, varDelta=x$varMeanDiff, alpha=X$alpha) z <- .metaAnalMD(data=rbind(data, y), model=X$model, alpha=X$alpha); Reject[i] <- z$Sig; if (varRed) { varReductionIter[i] <- z$Var; } } X$Power[a] <- sum(Reject, na.rm=TRUE)/iter; if (varRed) { X$varianceReduction[a] <- mean(varReductionIter, na.rm=TRUE)/X$newVar; } } names(X$Power) <- k if (varRed) { names(X$varianceReduction) <- k } class(X) <- "n4meansMeta"; return(X); } #Print Method print.n4meansMeta <- function(x, ...) { cat("The Approximate Power of the Updated Meta-Analysis is: (Clusters per Group) \n"); print(x$Power); if (x$varRed) { cat("The Approximate Proportion of Variance Reduction is: (Clusters per Group) \n"); print(1 - x$varianceReduction); } } #Print method summary.n4meansMeta <- function(object, ...) 
{ cat("Sample Size Calculation for Continuous Outcomes Based on Updated Meta-Analysis", "\n \n", sep="") cat("The original ", object$model, " Effects Mean Difference is ", object$newMean, "\n", sep=""); cat("With ", (1 - object$alpha)*100, "% Confidence Limits: (", object$l, ", ", object$u, ") \n \n",sep=""); cat("The Approximate Power of the Updated Meta-Analysis is: (Clusters per Group) \n", sep=""); print(object$Power); if (object$varRed) { cat("The Approximate Proportion of Variance Reduction is: (Clusters per Group) \n"); print(1 - object$varianceReduction); } cat("\n", "Assuming:", "\n", sep="") cat("Control Group Mean: ", object$meanC, " with standard deviation: ", object$sdC, "\n", sep=""); cat("Mean Cluster Size: ", object$m, " with standard deviation: ", object$sdm, "\n", sep=""); cat("ICC =", object$ICC, "\n"); cat("ICC Distribution", object$ICCDistn, "\n"); cat("Clusters =", object$k, "\n"); cat("Iterations =", object$iter, "\n"); } ################################################# #A couple of basic helper functions; #Summarize: Requires ResultsTreat and ResultsControl to give information about the mean diff and its variance #X$ResultsTreat <- c(meanTreat, MTreat, CT, ICC, MSC, MSW, VarTreat,kT); #X$ResultsControl <- c(meanControl, MControl, CC, ICC, MSC, MSW, VarControl, kC); .summarizeTrialCTS <- function(ResultsTreat, ResultsControl, ...) { Summary <- NULL; Summary$meanDiff <- ResultsTreat[1] - ResultsControl[1]; Summary$varMeanDiff <- (ResultsTreat[5] + ResultsTreat[6]); return(Summary); } ############################### #Returns Confidence Interval .makeCI <- function(Delta, varDelta, alpha=0.05) { X <- c(Delta, (Delta - qnorm((1-alpha/2))*sqrt(varDelta)), (Delta + qnorm((1-alpha/2))*sqrt(varDelta))) return(X); } ################### #Simple function to count number of 2500's, as very large clusters #cause the system to run out of memory. .number2500s <- function(n) { x <- floor(n/2500); return(x); } ##################### #Internal Method for generation of Clustered CTS Data (Normally Distributed) .oneCRTCTS <- function(meanT, sdT, meanC, sdC, kT, kC, mTmean, mTsd, mCmean, mCsd, ICCT, ICCC) { X <- NULL; ####Treatment Loop, generate CTS data in blocks.. 
mT <- floor(rnorm(kT, mTmean, mTsd)) dataT <- matrix(NA, nrow=max(mT), ncol=kT); for (j in 1:kT) { a <- .number2500s(mT[j]); if (a >= 1) { for (k in 1:a) { Sigma <- matrix(ICCT*sdT^2, nrow=2500, ncol=2500); diag(Sigma) <- c(rep(sdT^2, 2500)); dataT[((k-1)*2500 +1):(2500*k),j] <- round(.mvrnorm(n=1, mu=rep(meanT, 2500), Sigma=Sigma), digits=5) } if ( (mT[j] - 2500*a) != 0) { lastones <- mT[j] - 2500*a; Sigma <- matrix(ICCT*sdT^2, nrow=lastones, ncol=lastones); diag(Sigma) <- c(rep(sdT^2, lastones)); dataT[(2500*k+1):mT[j],j] <- round(.mvrnorm(n=1, mu=rep(meanT, lastones), Sigma=Sigma), digits=5) } } if (a < 1) { lastones <- mT[j] - 2500*a; Sigma <- matrix(ICCT*sdT^2, nrow=lastones, ncol=lastones); diag(Sigma) <- c(rep(sdT^2, lastones)); dataT[1:mT[j],j] <- round(.mvrnorm(n=1, mu=rep(meanT, lastones), Sigma=Sigma), digits=5) } } ####Control Loop mC <- floor(rnorm(kC, mCmean, mCsd)) dataC <- matrix(NA, nrow=max(mC), ncol=kC); j <- 1; k <- 1; for (j in 1:kC) { a <- .number2500s(mC[j]); if (a >= 1) { for (k in 1:a) { Sigma <- matrix(ICCC*sdC^2, nrow=2500, ncol=2500); diag(Sigma) <- c(rep(sdC^2, 2500)); dataC[((k-1)*2500 +1):(2500*k),j] <- round(.mvrnorm(n=1, mu=rep(meanC, 2500), Sigma=Sigma), digits=5) } if ( (mC[j] - 2500*a) != 0) { lastones <- mC[j] - 2500*a; Sigma <- matrix(ICCC*sdC^2, nrow=lastones, ncol=lastones); diag(Sigma) <- c(rep(sdC^2, lastones)); dataC[(2500*k+1):mC[j],j] <- round(.mvrnorm(n=1, mu=rep(meanC, lastones), Sigma=Sigma), digits=5) } } if (a < 1) { lastones <- mC[j] - 2500*a; Sigma <- matrix(ICCC*sdC^2, nrow=lastones, ncol=lastones); diag(Sigma) <- c(rep(sdC^2, lastones)); dataC[1:mC[j],j] <- round(.mvrnorm(n=1, mu=rep(meanC, lastones), Sigma=Sigma), digits=5) } } meanTreat <- mean(dataT, na.rm=TRUE); meanControl <- mean(dataC, na.rm=TRUE); MTreat <- nrow(dataT)*ncol(dataT) - sum(is.na(dataT)) MControl <- nrow(dataC)*ncol(dataC) - sum(is.na(dataC)) mbarT <- sum(mT^2)/sum(mT); mbarC <- sum(mC^2)/sum(mC); m0 <- ( (MTreat + MControl) - (mbarT + mbarC) ) / ( (kT + kC) - 2); #ICC Calculations MSC <- 0; MSW <- 0; VarTreat <- 0; VarControl <- 0; for (j in 1:kT) { MSC <- MSC + mT[j]*( sum(dataT[,j], na.rm=TRUE)/mT[j] - meanTreat)^2 / (kT + kC - 2); for (i in 1:mT[j]) { MSW <- MSW + ((dataT[i,j] - mean(dataT[,j], na.rm=TRUE))^2/ (MTreat + MControl -(kT + kC))); VarTreat <- VarTreat + ((dataT[i,j] - meanTreat)^2/(MTreat - 1)) } } for (j in 1:kC) { MSC <- MSC + mC[j]*( sum(dataC[,j], na.rm=TRUE)/mC[j] - meanControl)^2 / (kT + kC - 2); for (i in 1:mC[j]) { MSW <- MSW + ((dataC[i,j] - mean(dataC[,j], na.rm=TRUE))^2/ (MTreat + MControl - (kT + kC))); VarControl <- VarControl + ((dataC[i,j] - meanControl)^2/(MControl - 1)) } } ICC <- max((MSC - MSW)/(MSC + (m0 - 1)*MSW), 0); CT <- (1 + (mbarT - 1)*ICC); CC <- (1 + (mbarC - 1)*ICC); X$ResultsTreat <- c(meanTreat, MTreat, CT, ICC, MSC, MSW, VarTreat, kT); X$ResultsControl <- c(meanControl, MControl, CC, ICC, MSC, MSW, VarControl, kC); return(X); } .mvrnorm <- function (n = 1, mu, Sigma, tol = 1e-06, empirical = FALSE) { p <- length(mu) if (!all(dim(Sigma) == c(p, p))) stop("incompatible arguments") eS <- eigen(Sigma, symmetric = TRUE) ev <- eS$values if (!all(ev >= -tol * abs(ev[1]))) stop("'Sigma' is not positive definite") X <- matrix(rnorm(p * n), n) if (empirical) { X <- scale(X, TRUE, FALSE) X <- X %*% svd(X, nu = 0)$v X <- scale(X, FALSE, TRUE) } X <- drop(mu) + eS$vectors %*% diag(sqrt(pmax(ev, 0)), p) %*% t(X) nm <- names(mu) if (is.null(nm) && !is.null(dn <- dimnames(Sigma))) nm <- dn[[1]] dimnames(X) <- list(nm, NULL) if (n == 
1) drop(X) else t(X) } ############## .metaAnalMD <- function(data, model="fixed", alpha=0.05) { if (!is.matrix(data)) stop("Sorry data must be a matrix of Mean Diff, 95 % Lower and Upper Limits from Previous Studies") if (ncol(data) != 3) stop("Data must have 3 columns, Mean Diff, 95 % Lower Limit and 95 % Upper Limits from Previous Studies") if ((alpha >= 1) || (alpha <= 0)) stop("Sorry, the alpha must lie within (0,1)") X <- NULL; X$data <- data; X$alpha <- alpha; X$model <- model colnames(X$data) <- c("Mean Diff", "Lower Limit", "Upper Limit"); MD <- data[,1]; seMD <- (data[,3] - data[,1])/1.96; varMD <- seMD^2; Z <- -qnorm(alpha/2) if (X$model == "fixed") { w <- 1/varMD; X$theta <- sum(MD*w)/sum(w); X$u <- X$theta + Z/sqrt(sum(w)); X$l <- X$theta - Z/sqrt(sum(w)); X$Var <- 1/sum(w); } if (X$model == "random") { w <- 1/varMD; thetaF <- sum(MD*w)/sum(w); Q <- sum(w*(MD-thetaF)^2) C <- sum(w) - (sum(w^2)/sum(w)) t <- ( Q - nrow(X$data) + 1)/C; if (t < 0) {t <- 0} w <- 1/(varMD+ t) X$theta <- sum(MD*w)/sum(w); X$u <- X$theta + Z/sqrt(sum(w)); X$l <- X$theta - Z/sqrt(sum(w)); X$Var <- 1/sum(w); } if ( (X$u < 0) && (X$l < 0) || (X$u > 0) && (X$l > 0) ) { X$Sig <- 1; } else { X$Sig <- 0; } class(X) <- "metaAnalCTS"; return(X); }
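## --- Added illustration: hedged usage sketch, not part of the original source ---
## Shows how n4meansMeta() defined above might be called: three earlier studies are
## summarised as mean differences with 95% limits, and the power of the updated
## meta-analysis is simulated for 10 or 20 clusters per arm. All values (and the
## small iter) are made-up examples chosen only to keep the illustration fast.
if (FALSE) {
  prev <- rbind(c(0.50,  0.10, 0.90),
                c(0.25, -0.15, 0.65),
                c(0.40,  0.05, 0.75))
  ex <- n4meansMeta(data = prev, model = "fixed", k = c(10, 20), ICC = 0.02,
                    ICCDistn = "fixed", m = 50, sdm = 5, meanC = 1, sdC = 1,
                    iter = 100)
  print(ex)
  summary(ex)
}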
/scratch/gouwar.j/cran-all/cranData/CRTSize/R/n4meansMeta.R
n4props <- function(pe, pc, m, ICC, alpha=0.05, power=0.8, AR=1,
                    two.tailed=TRUE, digits=3)
{
  # Error checking
  if ((pe >= 1) || (pe <= 0) || (pc <= 0) || (pc >= 1))
    stop("Sorry, the prior proportions must lie within (0,1)")

  if ((alpha >= 1) || (alpha <= 0) || (power <= 0) || (power >= 1))
    stop("Sorry, the alpha and power must lie within (0,1)")

  if (ICC <= 0)
    stop("Sorry, the ICC must lie within (0,1)")

  if (AR <= 0)
    stop("Sorry, the specified value of the Allocation Ratio (AR) must be strictly positive...")

  if (m <= 1)
    stop("Sorry, the (average) cluster size, m, should be greater than one...")

  # If m is a decimal, round up to generate a more conservative sample size.
  m <- ceiling(m);

  # Initialize parameters
  r <- NULL;
  r$pe <- pe;
  r$pc <- pc;
  r$digits <- digits;
  r$m <- m;
  r$ICC <- ICC;
  r$alpha <- alpha;
  r$power <- power;
  r$AR <- AR;
  r$two.tailed <- two.tailed;

  # One or two-tailed tests
  if (two.tailed)
  {
    r$n <- ((qnorm(1 - alpha/2) + qnorm(power))^2*(pe*(1-pe) + pc*(1-pc))*(1 + (m - 1)*ICC))/(m*(pe - pc)^2);

    if (r$n < 30)
    {
      nTemp <- 0;
      while (abs(r$n - nTemp) > 1)
      {
        nTemp <- r$n;
        r$n <- ((qt((1 - alpha/2), df=(2*(nTemp - 1))) + qt(power, df=(2*(nTemp - 1))))^2*(pe*(1-pe) + pc*(1-pc))*(1 + (m - 1)*ICC))/(m*(pe - pc)^2);
      }
    }
  }

  if (!two.tailed)
  {
    r$n <- ((qnorm(1 - alpha) + qnorm(power))^2*(pe*(1-pe) + pc*(1-pc))*(1 + (m - 1)*ICC))/(m*(pe - pc)^2);

    if (r$n < 30)
    {
      nTemp <- 0;
      while (abs(r$n - nTemp) > 1)
      {
        nTemp <- r$n;
        r$n <- ((qt((1 - alpha), df=(2*(nTemp - 1))) + qt(power, df=(2*(nTemp - 1))))^2*(pe*(1-pe) + pc*(1-pc))*(1 + (m - 1)*ICC))/(m*(pe - pc)^2);
      }
    }
  }

  # Adjust for allocation ratio
  r$nE = (1/2)*r$n*(1 + (1/AR));
  r$nC = (1/2)*r$n*(1 + AR);

  class(r) <- "n4props";
  return(r);
}

# Print method
print.n4props <- function(x, ...)
{
  cat("The required sample size is a minimum of ", ceiling(x$nE), " clusters of size ", x$m,
      " in the Experimental Group \n", sep="")
  cat(" and a minimum of ", ceiling(x$nC), " clusters (size ", x$m, ") in the Control Group. \n", sep="")
}

# Summary method
summary.n4props <- function(object, ...)
{
  cat("Sample Size Calculation for Binary Outcomes", "\n \n")
  cat("Assuming:", "\n")
  cat("Proportion with Outcome in Experimental Group: ", object$pe, "\n")
  cat("Proportion with Outcome in Control Group: ", object$pc, "\n")
  cat("Cluster Size (average) = ", object$m, "\n");
  cat("ICC = ", object$ICC, "\n");
  cat("Type I Error Rate (alpha) = ", object$alpha, " and Power = ", object$power, "\n \n", sep="")
  cat("The required sample size is a minimum of ", ceiling(object$nE), " clusters of size ", object$m,
      " in the Experimental Group \n", sep="")
  cat(" and a minimum of ", ceiling(object$nC), " clusters (size ", object$m, ") in the Control Group. \n", sep="")
}
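## --- Added illustration: hedged usage sketch, not part of the original source ---
## Shows how n4props() defined above might be called; the event proportions,
## cluster size and ICC below are made-up example values.
if (FALSE) {
  ex <- n4props(pe = 0.30, pc = 0.20, m = 50, ICC = 0.02,
                alpha = 0.05, power = 0.8)
  print(ex)
  summary(ex)
}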
/scratch/gouwar.j/cran-all/cranData/CRTSize/R/n4props.R
n4propsEB <- function(ICC, varICC=0, from=0, to, pe,pc,m, iter=1000, alpha=0.05, power=0.8, two.tailed=TRUE, digits=3, plot=TRUE) { if ((alpha >= 1) || (alpha <= 0) || (power <= 0) || (power >= 1)) stop("Sorry, the alpha and power must lie within (0,1)") if ((pe >= 1) || (pe <= 0) || (pc <= 0) || (pc >= 1)) stop("Sorry, the prior proportions must lie within (0,1)") for (i in 1:length(ICC)) { if (ICC[i] <= 0) stop("Sorry, the ICC must lie within (0,1)") } if (m <=1) stop("Sorry, the (average) cluster size, m, should be greater than one...") if (to < from) stop("From and To form the range of the estimated density for the ICC...") #If m is a decimal, round up to generate a more conservative sample size. m <- ceiling(m); #Initialize Parameters r <- NULL r$pe <- pe; r$pc <- pc; r$ICC <- ICC; r$varICC <- varICC; r$m <- m; r$alpha <- alpha; r$power <- power; r$two.tailed <- two.tailed; r$digits <- digits; r$from <- from; r$to <- to; #One or two-tailed tests if (two.tailed) { ZA <- -qnorm(alpha/2); ZB <- -qnorm(1 - power); } else { ZA <- -qnorm(alpha); ZB <- -qnorm(1 - power); } #Initialization of Results Vectors r$ResRho <- NULL; r$ResK <- NULL; #Compute Density Function; n must be much larger than iter. if (sum(varICC) != 0) { dens <- density(ICC, n=2^16, from=from, to=to, weights = 1/(varICC + var(varICC))/sum(1/(varICC+var(varICC)))) } else { dens <- density(ICC, n=2^16, from=from, to=to) } rhoVector <- sample(dens$x, size=iter, prob=dens$y) #Computational Loop for (i in 1:iter) { rho <- rhoVector[i] k <- (((ZA + ZB)^2)*(pe*(1-pe) + pc*(1-pc))*(1 + (m - 1)*ICC))/(m*(pe - pc)^2); r$ResRho <- append(r$ResRho, rho); r$ResK <- append(r$ResK, k); } if (plot) { hist(ICC, freq = FALSE, main="Histogram of Values of ICC and Empirical Density", ylab="Density", xlab="ICC Estimates", xlim=c(from, to), ylim=c(0,25)); par(new=TRUE); plot(dens, xlim=c(from,to), ylim=c(0,25), main="", xlab=""); } class(r) <- "n4propsEB"; return(r); } #Print Method print.n4propsEB <- function(x, ...) { cat("Simulation of the Empirical Density suggests that appropriate quantiles \n") cat("for the number of clusters to be randomized in each group are: \n") print(round(quantile(x$ResK, probs=c(0,0.25,0.5,0.75, 1.0)),digits=x$digits)) cat("With ICC quantiles: \n ") print(round(quantile(x$ResRho, probs=c(0,0.25,0.5,0.75, 1.0)),digits=x$digits)) } #Summary Method summary.n4propsEB <- function(object, ...) { cat("Simulation Based Sample Size Estimation (Empirical Density) to Compare Means of Two Populations", "\n \n") cat("Assuming:", "\n") cat("Treatment Rate = ", object$pe, "\n") cat("Control Rate = ", object$pc, "\n"); cat("Cluster Size (average) = ", object$m, "\n"); cat("ICCs = ", object$ICC, "\n"); cat("Variance of ICCs = ", object$varICC, "\n"); cat("Type I Error Rate (alpha) = ", object$alpha, " and Power = ", object$power, "\n \n",sep="") cat("Simulation of the Empirical Density suggests that appropriate quantiles \n") cat("for the number of clusters to be randomized in each group are: \n") print(round(quantile(object$ResK, probs=c(0,0.1, 0.25,0.5,0.75, 0.9, 1.0)),digits=object$digits)) cat("With ICC quantiles: \n") print(round(quantile(object$ResRho, probs=c(0,0.1, 0.25,0.5,0.75, 0.9, 1.0)),digits=object$digits)) }
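## --- Added illustration: hedged usage sketch, not part of the original source ---
## Shows how n4propsEB() defined above might be called: previous ICC estimates are
## smoothed into an empirical density over (from, to) and the required number of
## clusters is simulated. All values are made-up examples.
if (FALSE) {
  prior_icc <- c(0.005, 0.01, 0.02, 0.03)
  ex <- n4propsEB(ICC = prior_icc, from = 0.001, to = 0.05, pe = 0.30, pc = 0.20,
                  m = 50, iter = 1000, plot = FALSE)
  print(ex)
  summary(ex)
}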
/scratch/gouwar.j/cran-all/cranData/CRTSize/R/n4propsEB.R
n4propsMeta <- function(data, measure="RR", model="fixed", k, ICC, ICCDistn="unif", lower=0, upper=0.25, varRed=FALSE, m, sdm, pC, sdpC, iter=1000, alpha=0.05) { if (!is.matrix(data)) stop("Sorry data must be a matrix of RR/OR, 95 % Lower and Upper Limits from Previous Studies.") if (! ( (ICCDistn == "fixed") | (ICCDistn == "unif") | (ICCDistn == "normal") | (ICCDistn == "smooth") ) ) stop("Sorry, the ICC Distribution must be one of: fixed, unif, normal or smooth.") if (! ( (model == "fixed") | (model == "random") ) ) stop("Sorry, model must be fixed, or random.") if (! ( (measure == "OR") | (measure == "RR") ) ) stop("Sorry, measure must be one of OR or RR.") if ( (ICCDistn == "fixed") && (length(ICC) > 1) ) stop("Sorry you can only provide a single ICC value with the fixed distribution option.") if ((alpha >= 1) || (alpha <= 0)) stop("Sorry, the alpha must lie within (0,1).") if ((sdm < 0) || (sdpC < 0)) stop("Sorry, the standard deviations of m must be non-negative.") if ( (pC <= 0) || (pC >= 1) ) stop("Sorry, the Control Rate must lie within (0,1).") for (i in 1:length(ICC)) { if (ICC[i] <= 0) stop("Sorry, the ICC must lie within (0,1).") } if (m <=1) stop("Sorry, the (average) cluster size, m, should be greater than one.") for (i in 1:length(k)) { if (k[i] <= 1) stop("Sorry, the values of k must be greater than 1") } X <- NULL; X$data <- data; X$model <- model; X$k <- k; X$ICC <- ICC; X$m <- m; X$sdm <- sdm; X$pC <- pC; X$sdpC <- sdpC; X$iter <- iter; X$ICCDistn <- ICCDistn; X$lower <- lower; X$upper <- upper; X$alpha <- alpha; X$varRed <- varRed; original <- .metaAnalRROR(data, model=X$model, alpha=X$alpha); X$newMean <- original$theta; X$newVar <- original$Var; X$l <- original$l; X$u <- original$u #Obtain pT from the new simulated value and pC... X$Power <- rep(0, length(k)); if (varRed) { X$varianceReduction <- rep(0, length(k)); } for (a in 1:length(k)) { kT0 <- k[a]; kC0 <- k[a]; if (ICCDistn == "unif") { ICCT0 <- runif(iter, lower, upper) } if (ICCDistn == "fixed") { ICCT0 <- rep(ICC, iter) } if (ICCDistn == "normal") { ICCT0 <- abs( rnorm(iter, 0, sd(ICC) ) ); } if (ICCDistn == "smooth") { dens <- density(ICC, n=2^16, from=lower, to=upper) ICCT0 <- sample(dens$x, size=iter, prob=dens$y) } Reject <- rep(NA, iter); if (varRed) { varReductionIter <- rep(NA,iter); } for (i in 1:iter) { pC0 <- rnorm(1, pC, sdpC); X$thetaNew <- rnorm(1, X$newMean, sqrt(X$newVar)) if (measure == "RR") { pT0 <- exp(X$thetaNew + log(pC0)); } if (measure == "OR") { o <- pC0/(1-pC0) pT0 <- exp(X$thetaNew + log(o)) / (1 + exp(X$thetaNew + log(o)) ) } #Ensure calculated treatment rate is within (0,1) if (pT0 >= 0.99) {pT0 <- 0.99} if (pT0 <= 0.01) {pT0 <- 0.01} w <- .oneCRTBinary(pT=pT0, pC=pC0, kC=kC0, kT=kT0, mTmean=m, mTsd=sdm, mCmean=m, mCsd=sdm, ICCT=ICCT0[i], ICCC=ICCT0[i]) x <- .summarizeTrialRROR(ResultsTreat=w$ResultsTreat, ResultsControl=w$ResultsControl, measure=measure) y <- .makeCIRROR(logRROR=x$logRROR, varlogRROR=x$VarLogRROR, alpha=X$alpha) z <- .metaAnalRROR(data=rbind(data, y), model=X$model, alpha=X$alpha); Reject[i] <- z$Sig; if (varRed) { varReductionIter[i] <- z$Var; } } X$Power[a] <- sum(Reject, na.rm=TRUE)/iter; if (varRed) { X$varianceReduction[a] <- mean(varReductionIter, na.rm=TRUE)/X$newVar; } } names(X$Power) <- k if (varRed) { names(X$varianceReduction) <- k } class(X) <- "n4propsMeta"; return(X); } #Print Method print.n4propsMeta <- function(x, ...) 
{ cat("The Approximate Power of the Updated Meta-Analysis is: (Clusters per Group) \n"); print(x$Power); if (x$varRed) { cat("The Approximate Proportion of Variance Reduction is: (Clusters per Group) \n"); print(1 - x$varianceReduction); } } #Summary method summary.n4propsMeta <- function(object, ...) { cat("Sample Size Calculation for Binary Outcomes Based on Updated Meta-Analysis", "\n \n", sep="") cat("The original ", object$model, " effects Relative Risk/Odds Ratio is ", exp(object$newMean), "\n", sep=""); cat("With ", (1 - object$alpha)*100, "% Confidence Limits: (", exp(object$l), ", ", exp(object$u), ") \n \n",sep=""); cat("The Approximate Power of the Updated Meta-Analysis is: (Clusters per Group) \n", sep=""); print(object$Power); if (object$varRed) { cat("The Approximate Proportion of Variance Reduction is: (Clusters per Group) \n"); print(1 - object$varianceReduction); } cat("\n", "Assuming:", "\n", sep="") cat("Proportion with Outcome in Control Group: ", object$pC, " with standard deviation: ", object$sdpC, "\n", sep=""); cat("Mean Cluster Size: ", object$m, " with standard deviation: ", object$sdm, "\n", sep=""); cat("ICC =", object$ICC, "\n"); cat("ICC Distribution", object$ICCDistn, "\n"); cat("Clusters =", object$k, "\n"); cat("Iterations =", object$iter, "\n"); } ############################################# #A couple of basic helper functions; #Takes a trial, from oneCRT function; #Generates RR or OR and variances; .summarizeTrialRROR <- function(ResultsTreat, ResultsControl, measure="RR") { Summary <- NULL; if (measure== "RR") { Summary$RROR <- ResultsTreat[1]/ResultsControl[1]; Summary$logRROR <- log(Summary$RROR); Summary$VarLogRROR <- ( ((1 - ResultsTreat[1])*ResultsTreat[3])/(ResultsTreat[2]*ResultsTreat[1]) + ((1 - ResultsControl[1])*ResultsControl[3])/(ResultsControl[2]*ResultsControl[1]) ) } if (measure == "OR") { Summary$RROR <- (ResultsTreat[1]/(1 - ResultsTreat[1]))/ (ResultsControl[1]/(1 - ResultsControl[1])); Summary$logRROR <- log(Summary$RROR); Summary$VarLogRROR <- ResultsTreat[3]/(ResultsTreat[2]*ResultsTreat[1]*(1 - ResultsTreat[1])) + ResultsControl[3]/(ResultsControl[2]*ResultsControl[1]*(1 - ResultsControl[1])) } return(Summary); } ############################### #Returns Confidence Interval for either OR or RR .makeCIRROR <- function(logRROR, varlogRROR, alpha=0.05) { X <- c(exp(logRROR), exp(logRROR - qnorm((1-alpha/2))*sqrt(varlogRROR)), exp(logRROR + qnorm((1-alpha/2))*sqrt(varlogRROR))) return(X); } ################### #Internal method for generating clustered binary data according to Lunn and Davies; .oneCRTBinary <- function(pC, pT, kC, kT, mTmean, mTsd, mCmean, mCsd, ICCT, ICCC) { X <- NULL; #Treatment Loop, generate mT <- floor(rnorm(kT, mTmean, mTsd)) for (j in 1:kT) { if (mT[j] <= 10) { mT[j] <- 10; } } dataT <- matrix(NA, nrow=max(mT), ncol=kT); for (j in 1:kT) { Z <- rbinom(1, 1, pT); for (i in 1:mT[j]) { U <- rbinom(1, 1, sqrt(ICCT)); Y <- rbinom(1, 1, pT); dataT[i,j] <- (1 - U)*Y + U*Z; } } #Control Loop... 
mC <- floor(rnorm(kC, mCmean, mCsd)) for (j in 1:kC) { if (mC[j] <= 10) { mC[j] <- 10; } } dataC <- matrix(NA, nrow=max(mC), ncol=kC); for (j in 1:kC) { Z <- rbinom(1, 1, pC); for (i in 1:mC[j]) { U <- rbinom(1, 1, sqrt(ICCC)); Y <- rbinom(1, 1, pC); dataC[i,j] <- (1 - U)*Y + U*Z; } } #Total number in treatment and control groups; MTreat <- nrow(dataT)*ncol(dataT) - sum(is.na(dataT)) MControl <- nrow(dataC)*ncol(dataC) - sum(is.na(dataC)) #PhatT is the treatment rate; PhatT <- sum(dataT, na.rm=TRUE)/MTreat if ((PhatT == 0) || is.na(PhatT) || is.infinite(PhatT) ) { PhatT <- 1/MTreat; } #PhatC is control rate; PhatC <- sum(dataC, na.rm=TRUE)/MControl if ((PhatC == 0) || is.na(PhatC) || is.infinite(PhatC) ) { PhatC <- 1/MControl; } #Average cluster size; mbarT <- sum(mT^2)/sum(mT); mbarC <- sum(mC^2)/sum(mC); #ICC Calculations MSC <- 0; MSW <- 0; for (j in 1:kT) { MSC <- MSC + mT[j]*( sum(dataT[,j], na.rm=TRUE)/mT[j] - PhatT)^2 / (kT + kC - 2); MSW <- MSW + mT[j]*(sum(dataT[,j], na.rm=TRUE)/mT[j])*(1 - (sum(dataT[,j], na.rm=TRUE)/mT[j]))/ (MTreat + MControl -(kT + kC)); } for (j in 1:kC) { MSC <- MSC + mC[j]*(sum(dataC[,j], na.rm=TRUE)/mC[j] - PhatC)^2 / (kT + kC - 2); MSW <- MSW + mC[j]*(sum(dataC[,j], na.rm=TRUE)/mC[j])*(1 - (sum(dataC[,j], na.rm=TRUE)/mC[j]))/ (MTreat + MControl -(kT + kC)); } #Assume a common ICC between treatment and control groups; m0 <- ( (MTreat + MControl) - (mbarT + mbarC) ) / ( (kT + kC) - 2); ICC <- max((MSC - MSW)/(MSC + (m0 - 1)*MSW), 0); #Inflation factors; CT <- (1 + (mbarT - 1)*ICC); CC <- (1 + (mbarC - 1)*ICC); #Summary of what is typically required, including rates, total number of subjects (group), clusters and ICC; X$ResultsTreat <- c(PhatT, MTreat, CT, ICC, kT); X$ResultsControl <- c(PhatC, MControl, CC, ICC, kC); return(X); } ############ .metaAnalRROR <- function(data, model="fixed", alpha=0.05) { if (!is.matrix(data)) stop("Sorry data must be a matrix of OR/RR, 95 % Lower and Upper Limits from Previous Studies") if (ncol(data) != 3) stop("Data must have 3 columns, Odds Ratio/Relative Risk, 95 % Lower Limit and 95 % Upper Limit from Previous Studies") if ((alpha >= 1) || (alpha <= 0)) stop("Sorry, the alpha must lie within (0,1)") X <- NULL; X$data <- data X$alpha <- alpha X$model <- model logRR <- log(data[,1]) logL <- log(data[,2]) logU <- log(data[,3]) colnames(X$data) <- c("OR/RR", "Lower Limit", "Upper Limit"); selogRR <- (logU - logRR)/1.96; varlogRR <- selogRR^2; Z <- -qnorm(alpha/2) if (X$model == "fixed") { w <- 1/varlogRR; X$theta <- sum(logRR*w)/sum(w); X$u <- X$theta + Z/sqrt(sum(w)); X$l <- X$theta - Z/sqrt(sum(w)); X$Var <- 1/sum(w); } if (X$model == "random") { w <- 1/varlogRR; thetaF <- sum(logRR*w)/sum(w); Q <- sum(w*(logRR-thetaF)^2) C <- sum(w) - (sum(w^2)/sum(w)) t <- ( Q - nrow(X$data) + 1)/C; if (t < 0) {t <- 0} w <- 1/(varlogRR + t) X$theta <- sum(logRR*w)/sum(w); X$u <- X$theta + Z/sqrt(sum(w)); X$l <- X$theta - Z/sqrt(sum(w)); X$Var <- 1/sum(w); } if ( (X$u < 0) && (X$l < 0) || (X$u > 0) && (X$l > 0) ) { X$Sig <- 1; } else { X$Sig <- 0; } class(X) <- "metaAnalRROR"; return(X); }
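#Illustrative use of n4propsMeta (a sketch, not package output; the prior-study
#matrix below is entirely made up and is kept commented out so nothing runs when
#this file is sourced). Each row of 'data' is an RR (or OR) with its 95% lower
#and upper limits from a previous trial; power is returned for each candidate
#number of clusters per group in 'k'.
#prior <- rbind(c(0.70, 0.55, 0.90),
#               c(0.85, 0.65, 1.10))
#fit <- n4propsMeta(data=prior, measure="RR", model="fixed", k=c(10, 20),
#                   ICC=0.05, ICCDistn="fixed", m=50, sdm=5,
#                   pC=0.30, sdpC=0.02, iter=500)
#summary(fit)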
# Source: /scratch/gouwar.j/cran-all/cranData/CRTSize/R/n4propsMeta.R
#' Doubly Robust Inverse Probability Weighted Augmented GEE Estimator
#'
#' This function implements a GEE estimator. It implements classical GEE, IPW-GEE, augmented GEE and IPW-Augmented GEE (Doubly robust). \cr
#'
#' @param formula an object of class "formula" (or one that can be coerced to that class): a symbolic description of the model to be fitted.
#' @param id a vector which identifies the clusters. The length of "id" should be the same as the number of observations. Data are assumed to be sorted so that observations on a cluster are contiguous rows for all entities in the formula.
#' @param data an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which CRTgeeDR is called.
#' @param family a description of the error distribution and link function to be used in the model. This can be a character string naming a family function, a family function or the result of a call to a family function. (See family for details of family functions.)
#' @param corstr a character string specifying the correlation structure. The following are permitted: '"independence"', '"exchangeable"', '"ar1"', '"unstructured"' and '"userdefined"'
#' @param Mv for "m-dependent", the value for m
#' @param weights A vector of weights for each observation. If an observation has weight 0, it is excluded from the calculations of any parameters. Observations with a NA anywhere (even in variables not included in the model) will be assigned a weight of 0.
#' @param aug A list of vectors (one for A=1 treated, one for A=0 control) for each observation representing E(Y|X,A=a).
#' @param pi.a A number, the probability of treatment attribution P(A=1)
#' @param corr.mat The correlation matrix for "fixed". Matrix should be symmetric with dimensions >= the maximum cluster size. If the correlation structure is "userdefined", then this is a matrix describing which correlations are the same.
#' @param init.beta an optional vector with the initial values of beta. If not specified, then the intercept will be set to InvLink(mean(response)). init.beta must be specified if not using an intercept.
#' @param init.alpha an optional scalar or vector giving the initial values for the correlation. If provided along with Mv>1 or unstructured correlation, then the user must ensure that the vector is of the appropriate length.
#' @param init.phi an optional initial overdispersion parameter. If not supplied, initialized to 1.
#' @param scale.fix if set to TRUE, then the scale parameter is fixed at the value of init.phi.
#' @param sandwich if set to TRUE, the sandwich variance is provided together with the naive estimator of variance.
#' @param maxit maximum number of iterations.
#' @param tol tolerance in calculation of coefficients.
#' @param print.log if set to TRUE, a report is printed.
#' @param typeweights a character string specifying the weights implementation. The following are permitted: "GENMOD" for \eqn{W^{1/2}V^{-1}W^{1/2}}, "WV" for \eqn{V^{-1}W}
#' @param nameTRT Name of the variable containing information for the treatment
#' @param model.weights an object of class "formula" (or one that can be coerced to that class): a symbolic description of the model to be fitted for the propensity score. Must model the probability of being observed.
#' @param model.augmentation.trt an object of class "formula" (or one that can be coerced to that class): a symbolic description of the model to be fitted for the outcome model for the treated group (A=1).
#' @param model.augmentation.ctrl an object of class "formula" (or one that can be coerced to that class): a symbolic description of the model to be fitted for the outcome model for the control group (A=0).
#' @param stepwise.weights if set to TRUE, a stepwise selection is performed during the fit of the propensity score model
#' @param stepwise.augmentation if set to TRUE, a stepwise selection is performed during the fit of the augmentation (outcome) models
#' @param nameMISS Name of the variable containing information for the Missing indicator
#' @param nameY Name of the variable containing information for the outcome
#' @param sandwich.nuisance if set to TRUE, the nuisance adjusted sandwich variance is provided.
#' @param fay.adjustment if set to TRUE, the small-sample nuisance adjusted sandwich variance with Fay's adjustment is provided.
#' @param fay.bound if set to 0.75 by default, bound value used for Fay's adjustment.
#'
#' @export
#' @author Melanie Prague [based on R packages 'geeM' L. S. McDaniel, N. C. Henderson, and P. J. Rathouz. Fast Pure R Implementation of GEE: Application of the Matrix Package. The R Journal, 5(1):181-188, June 2013.]
#'
#'
#' @details
#' The estimator is found by solving:
#' \deqn{ 0= \sum_{i=1}^M \Bigg[ \boldsymbol D_i^T \boldsymbol V_i^{-1} \boldsymbol W_i(\boldsymbol X_i, A_i, \boldsymbol \eta_W) \left( \boldsymbol Y_i - \boldsymbol B(\boldsymbol X_i, A_i, \boldsymbol \eta_B) \right) }
#' \deqn{ \qquad + \sum_{a=0,1} p^a(1-p)^{1-a} \boldsymbol D_i^T \boldsymbol V_i^{-1} \Big( \boldsymbol B(\boldsymbol X_i,A_i=a, \boldsymbol \eta_B) -\boldsymbol \mu_i(\boldsymbol \beta,A_i=a)\Big) \Bigg]}
#' where \eqn{\boldsymbol D_i=\frac{\partial \boldsymbol \mu_i(\boldsymbol \beta,A_i)}{\partial \boldsymbol \beta^T}} is the design matrix, \eqn{\boldsymbol V_i} is the covariance matrix equal to \eqn{\boldsymbol U_i^{1/2} \boldsymbol C(\boldsymbol \alpha)\boldsymbol U_i^{1/2}} with \eqn{\boldsymbol U_i} a diagonal matrix with elements \eqn{{\rm var}(y_{ij})} and \eqn{\boldsymbol C(\boldsymbol \alpha)} is the working correlation structure with non-diagonal terms \eqn{\boldsymbol \alpha}.
#' Parameters \eqn{\boldsymbol \alpha} are estimated using simple moment estimators from the Pearson residuals.
#' The matrix of weights is \eqn{\boldsymbol W_i(\boldsymbol X_i, A_i, \boldsymbol \eta_W)=diag\left[R_{ij}/\pi_{ij}(\boldsymbol X_i, A_i, \boldsymbol \eta_W)\right]_{j=1,\dots,n_{i}}}, where \eqn{\pi_{ij}(\boldsymbol X_i, A_i, \boldsymbol \eta_W)=P(R_{ij}|\boldsymbol X_i, A_i)} is the propensity score (PS).
#' The function \eqn{\boldsymbol B(\boldsymbol X_i,A_i=a,\boldsymbol \eta_B)}, which is called the Outcome Model (OM), is a function linking \eqn{Y_{ij}} with \eqn{\boldsymbol X_i} and \eqn{A_i}.
#' The \eqn{\boldsymbol \eta_B} are nuisance parameters that are estimated.
#' The estimator is most efficient if the OM is equal to \eqn{E(\boldsymbol Y_i|\boldsymbol X_i,A_i=a)}.
#' The estimator denoted \eqn{\hat{\beta}_{aug}} is found by solving the estimating equation.
#' Although analytic solutions sometimes exist, coefficient estimates are generally obtained using an iterative procedure such as the Newton-Raphson method.
#' Automatic implementation is such that \eqn{\hat{\boldsymbol \eta}_W} in \eqn{\boldsymbol W_i(\boldsymbol X_i, A_i, \hat{\boldsymbol \eta}_W)} are obtained using a logistic regression and \eqn{\hat{\boldsymbol \eta}_B} in \eqn{\boldsymbol B(\boldsymbol X_i,A_i,\hat{\boldsymbol \eta}_B)} are obtained using a linear regression.
#'
#'
#' The variance of \eqn{\hat{\boldsymbol \beta}_{aug}} is estimated by the sandwich variance estimator.
#' There are two external sources of variability that need to be accounted for: estimation of \eqn{\boldsymbol \eta_W} for the PS and of \eqn{\boldsymbol \eta_B} for the OM.
#' We denote \eqn{\boldsymbol \Omega=(\boldsymbol \beta, \boldsymbol \eta_W,\boldsymbol \eta_B)} the estimated parameters of interest and nuisance parameters.
#' We can stack estimating functions and score functions for \eqn{\boldsymbol \Omega}:
#' \deqn{\small \boldsymbol U_i(\boldsymbol \Omega)= \left( \begin{array}{c} \boldsymbol \Phi_i(\boldsymbol Y_i,\boldsymbol X_i,A_i,\boldsymbol \beta, \boldsymbol \eta_W, \boldsymbol \eta_B) \\ \boldsymbol S^W_i(\boldsymbol X_i, A_i, \boldsymbol \eta_W)\\ \boldsymbol S^B_i(\boldsymbol X_i, A_i, \boldsymbol \eta_B)\\ \end{array} \right)}
#' where \eqn{\boldsymbol S^W_i} and \eqn{\boldsymbol S^B_i} represent the score equations for patients in cluster \eqn{i} for the estimation of \eqn{\boldsymbol \eta_W} and \eqn{\boldsymbol \eta_B} in the PS and the OM.
#' A standard Taylor expansion paired with Slutsky's theorem and the central limit theorem gives the sandwich estimator adjusted for nuisance parameter estimation in the OM and PS:
#' \deqn{Var(\boldsymbol \Omega)={{E\left[\frac{\partial \boldsymbol U_i(\boldsymbol \Omega)}{\partial \boldsymbol \Omega}\right]}^{-1}}^{T} \underbrace{{E\left[ \boldsymbol U_i(\boldsymbol \Omega)\boldsymbol U_i^T(\boldsymbol \Omega) \right]}}_{\boldsymbol \Delta_{adj}} \underbrace{E\left[\frac{\partial \boldsymbol U_i(\boldsymbol \Omega)}{\partial \boldsymbol \Omega}\right]^{-1} }_{\boldsymbol \Gamma^{-1}_{adj}}.}
#'
#'
#' @references Details regarding implementation can be found in
#' \itemize{
#' \item 'Augmented GEE for improving efficiency and validity of estimation in cluster randomized trials by leveraging cluster- and individual-level covariates' - 2012 - Stephens A., Tchetgen Tchetgen E. and De Gruttola V. : Stat Med 31(10) - 915-930.
#' \item 'Accounting for interactions and complex inter-subject dependency for estimating treatment effect in cluster randomized trials with missing at random outcomes' - 2015 - Prague M., Wang R., Stephens A., Tchetgen Tchetgen E. and De Gruttola V. : in revision.
#' \item 'Fast Pure R Implementation of GEE: Application of the Matrix Package' - 2013 - McDaniel, Lee S and Henderson, Nicholas C and Rathouz, Paul J : The R Journal 5(1) - 181-188.
#' \item 'Small-Sample Adjustments for Wald-Type Tests Using Sandwich Estimators' - 2001 - Fay, Michael P and Graubard, Barry I : Biometrics 57(4) - 1198-1206.
#' } #' #' #' @return An object of type 'CRTgeeDR' \cr #' @return $beta Final values for regressors estimates \cr #' \itemize{ #' \item $phi scale parameter estimate\cr #' \item $alpha Final values for association parameters in the working correlation structure when exchangeable\cr #' \item $coefnames Name of the regressors in the main regression \cr #' \item $niter Number of iteration done by the algorithm before convergence #' \item $converged convergence status #' \item $var.naiv Variance of the estimates model based (naive)\cr #' \item $var Variance of the estimates sandwich\cr #' \item $var.nuisance Variance of the estimates nuisance adjusted sandwich\cr #' \item $var.fay Variance of the estimates nuisance adjusted sandwich with Fay correction for small samples #' \item $call Call function #' \item $corr Correlation structure used #' \item $clusz Number of unit in each cluster #' \item $FunList List of function associated with the family #' \item $X design matrix for the main regression #' \item $offset Offset specified in the regression #' \item $eta predicted values #' \item $weights Weights vector used in the diagonal term for the IPW #' \item $ps.model Summary of the regression fitted for the PS if computed internally #' \item $om.model.trt Summary of the regression fitted for the OM for treated if computed internally #' \item $om.model.ctrl Summary of the regression fitted for the OM for control if computed internally #' } #' #' @import MASS #' @import Matrix #' @import ggplot2 #' @importFrom grDevices dev.off pdf #' @importFrom graphics plot #' @importFrom stats as.formula binomial fitted gaussian glm median model.frame model.matrix model.offset model.response na.pass pnorm predict quantile step terms weights #' @importFrom methods as #' @examples #' #' data(data.sim) #' \dontrun{ #' #### STANDARD GEE #' geeresults<-geeDREstimation(formula=OUTCOME~TRT, #' id="CLUSTER" , data = data.sim, #' family = "binomial", corstr = "independence") #' summary(geeresults) #' #### IPW GEE #' ipwresults<-geeDREstimation(formula=OUTCOME~TRT, #' id="CLUSTER" , data = data.sim, #' family = "binomial", corstr = "independence", #' model.weights=I(MISSING==0)~TRT*AGE) #' summary(ipwresults) #' #### AUGMENTED GEE #' augresults<-geeDREstimation(formula=OUTCOME~TRT, #' id="CLUSTER" , data = data.sim, #' family = "binomial", corstr = "independence", #' model.augmentation.trt=OUTCOME~AGE, #' model.augmentation.ctrl=OUTCOME~AGE, stepwise.augmentation=FALSE) #' summary(augresults) #' } #' #### DOUBLY ROBUST #' drresults<-geeDREstimation(formula=OUTCOME~TRT, #' id="CLUSTER" , data = data.sim, #' family = "binomial", corstr = "independence", #' model.weights=I(MISSING==0)~TRT*AGE, #' model.augmentation.trt=OUTCOME~AGE, #' model.augmentation.ctrl=OUTCOME~AGE, stepwise.augmentation=FALSE) #' summary(drresults) geeDREstimation <- function(formula, id,data = parent.frame(), family = gaussian, corstr = "independence", Mv = 1, weights = NULL, aug=NULL,pi.a=1/2, corr.mat = NULL, init.beta=NULL, init.alpha=NULL, init.phi = 1, scale.fix=FALSE, sandwich=TRUE, maxit=20, tol=0.00001,print.log=FALSE,typeweights="VW",nameTRT="TRT",model.weights=NULL,model.augmentation.trt=NULL,model.augmentation.ctrl=NULL,stepwise.augmentation=FALSE,stepwise.weights=FALSE,nameMISS="MISSING",nameY="OUTCOME",sandwich.nuisance=FALSE,fay.adjustment=FALSE,fay.bound=0.75){ if(print.log)print("********************************************************************************************") if(print.log){print("DESCRIPTION: Doubly Robust Inverse 
Probability Weighted Augmented GEE estimator")} if(print.log)print("********************************************************************************************") call <- match.call() ### Get the information from the family: link function for the outcome FunList <- getfam(family) LinkFun <- FunList$LinkFun VarFun <- FunList$VarFun InvLink <- FunList$InvLink InvLinkDeriv <- FunList$InvLinkDeriv ### Check that all the arguments are ok if(scale.fix & is.null(init.phi)){ stop("If scale.fix=TRUE, then init.phi must be supplied") } if((!(sum(!(unique(data[,nameTRT])%in%c(0,1)))==0))&(!(is.null(aug)&is.null(model.augmentation.trt)&is.null(model.augmentation.ctrl)))){ stop("Augmentation is requested whereas more than two level of treatment exist. Implementation not available yet.") } if(is.null(weights)&is.null(model.weights)){ typeweights <- NULL }else{ w.vec <- c("GENMOD", "VWR") w.match <- charmatch(typeweights, w.vec) if(is.na(w.match)){stop("Unsupported type of weights specification")} } if(!is.null(model.weights))dat.ps <- data if(!is.null(model.augmentation.trt)){ dat.om.trt <- data dat.om.trt<-cleandata(dat=dat.om.trt,type="OM model",nameY=nameY,cc=FALSE,formula=model.augmentation.trt) #dat.om.trt<-dat.om.trt[which(!is.na(dat.om.trt[,nameY])),] } if(!is.null(model.augmentation.ctrl)){ dat.om.ctrl <- data dat.om.ctrl<-cleandata(dat=dat.om.ctrl,type="OM model",nameY=nameY,cc=FALSE,formula=model.augmentation.ctrl) #dat.om.ctrl<-dat.om.ctrl[which(!is.na(dat.om.ctrl[,nameY])),] } ## Compute the PS and the weights if needed propensity.score<-NULL if(!is.null(model.weights)){ dat.ps<-cleandata(dat=dat.ps,type="PS model",nameY=nameMISS,cc=FALSE,formula=model.weights) if(print.log){print("------------------------------------------------------------> Information for PS")} if(print.log){print("PS is computed internally...")} if(!is.null(weights))warning("Warning: Propensity score is computed internally, make sure that the formula given in model.weights models the probability of being observed. 
Information given in argument 'weights' will be ingnored.") if(stepwise.weights){ propensity.score<-step(glm(model.weights,data=dat.ps,family=binomial(link = "logit")),trace=0) }else{ propensity.score<-glm(model.weights,data=dat.ps,family=binomial(link = "logit")) } dat.ps$weights.inside<-1/(fitted(propensity.score)) weights <- "weights.inside" if(print.log){ print("Details for PS regression") print(summary(propensity.score))} } weightsname<-weights ### Get the data - either from the environement or from the argument dataset dat <- model.frame(formula, data, na.action=na.pass) if(!is.null(model.weights)){ data$weights.inside<-dat.ps$weights.inside } nn <- dim(dat)[1] if(typeof(data) == "environment"){ id <- id if(!is.null(weights)) weights <- weights dat$id <- id }else{ if(length(id) == 1){ subj.col <- which(colnames(data) == id) if(length(subj.col) > 0){ id <- data[,subj.col] }else{ id <- parent.frame()$id } }else if(is.null(id)){ id <- 1:nn } if(!is.null(weights)){ if(length(weights) == 1){ weights.col <- which(colnames(data) == weights) if(length(weights.col) > 0){ weights <- data[,weights.col] }else{ weights <- parent.frame()$weights } dat$weights<-weights }else if(is.null(weights)){ weights <- NULL } } } dat$id <- id if(!is.null(weights)){ dat$weights <- weights }else{ dat$weights <- 1 } ## Clean the dataset for missing data in the covariates dat<-cleandata(dat=dat,type="marginal model",nameY=nameY,cc=TRUE,formula=formula) weights<-dat$weights includedvec <- weights>0 inclsplit <- split(includedvec, id) #Drop cluster with no observations dropid <- NULL allobs <- T if(any(!includedvec)){ allobs <- F for(i in 1:length(unique(id))){ if(all(!inclsplit[[i]])){ dropid <- c(dropid, i) } } } if(length(dropid)>0){ dropind <- which(is.element(id, dropid)) dat <- dat[-dropind,] includedvec <- includedvec[-dropind] weights <- weights[-dropind] id <- id[-dropind] } ### Get the numbers of clusters and individuals nn <- dim(dat)[1] K <- length(unique(id)) modterms <- terms(formula) X <- model.matrix(formula,dat) X.t <- X.c <- X if(nameTRT %in% colnames(dat)){ X.t[,nameTRT]<-1.0000 X.c[,nameTRT]<-0.0000 }else{ stop("User need to provide the name of the treatment variable in nameTRT. Default nameTRT='TRT' does not exist in the dataset.") } Y <- as.matrix(model.response(dat)) ## Compute the OM and the augmentation terms if needed B<-NULL om.t<-NULL om.c<-NULL if(!(is.null(model.augmentation.trt)|is.null(model.augmentation.ctrl))){ if(print.log){print("------------------------------------------------------------> Information for OM")} if(print.log){print("OM are computed internally...")} if(!is.null(aug))warning("Warning: Outcome model for augmentation is computed internally. 
Information given in argument 'aug' will be ingnored.") data.trt<-dat.om.trt[which((dat.om.trt[,nameTRT]==1)),] data.ctrl<-dat.om.ctrl[which((dat.om.ctrl[,nameTRT]==0)),] data.t<-dat.om.trt data.t[,nameTRT]<-1 data.c<-dat.om.ctrl data.c[,nameTRT]<-0 if(stepwise.augmentation){ om.t<-step(glm(model.augmentation.trt,data=data.trt,family=family),trace=0) om.c<-step(glm(model.augmentation.ctrl,data=data.ctrl,family=family),trace=0) }else{ om.t<-glm(model.augmentation.trt,data=data.trt,family=family) om.c<-glm(model.augmentation.ctrl,data=data.ctrl,family=family) } data$B1<-InvLink(predict(om.t,newdata=data.t)) data$B0<-InvLink(predict(om.c,newdata=data.c)) aug<-c(ctrl="B0",trt="B1") if(print.log){ print("Details for OM regression in treated") print(summary(om.t)) print("Details for OM regression in control") print(summary(om.c)) } } if(!is.null(aug)){ if(length(aug)!=2){ stop("If augmentation is requested, then aug must be supplied (length=2)") } ### Alert the user if there are covariates in the main regression and augmentation is used -- Stop had been removed because theoritical result is still valid. if(formula!=as.formula(paste(paste(as.character(formula)[2],as.character(formula)[1],sep=""),nameTRT,sep=""))){ warning("Warning: Augmentation approach is used with a marginal model including covariates.") } B.c<-data[,aug["ctrl"]] B.t<-data[,aug["trt"]] temp<-cbind(X,B.c,B.t) Bi<-ifelse(temp[,nameTRT]==1,temp[,"B.t"],temp[,"B.c"]) B<-cbind(B.c,B.t,Bi) } ### if no offset is given, then set to zero offset <- model.offset(dat) p <- dim(X)[2] if(is.null(offset)){ off <- rep(0, nn) }else{ off <- offset } # Is there an intercept column? interceptcol <- apply(X==1, 2, all) ## Basic check to see if link and variance functions make any kind of sense linkOfMean <- LinkFun(mean(Y,na.rm=T)) if( any(is.infinite(linkOfMean) | is.nan(linkOfMean)) ){ stop("Infinite or NaN in the link of the mean of responses. Make sure link function makes sense for these data.") } if( any(is.infinite( VarFun(mean(Y))) | is.nan( VarFun(mean(Y)))) ){ stop("Infinite or NaN in the variance of the mean of responses. 
Make sure variance function makes sense for these data.") } if(is.null(init.beta)){ if(any(interceptcol)){ #if there is an intercept and no initial beta, then use link of mean of response init.beta <- rep(0, dim(X)[2]) init.beta[which(interceptcol)] <- linkOfMean }else{ stop("Must supply an initial beta if not using an intercept.") } } # Number of included observations for each cluster includedlen <- rep(0, K) len <- rep(0,K) uniqueid <- unique(id) tmpwgt <- as.numeric(includedvec) idspl <-ifelse(tmpwgt==0, NA, id) includedlen <- as.numeric(summary(split(Y, idspl, drop=T))[,1]) len <- as.numeric(summary(split(Y, id, drop=T))[,1]) W <- Diagonal(x=weights) sqrtW <- sqrt(W) included <- Diagonal(x=(as.numeric(weights>0))) # Figure out the correlation structure cor.vec <- c("independence", "ar1", "exchangeable", "m-dependent", "unstructured", "fixed", "userdefined") cor.match <- charmatch(corstr, cor.vec) if(is.na(cor.match)){stop("Unsupported correlation structure")} # Set the initial alpha value if(is.null(init.alpha)){ alpha.new <- 0.2 if(cor.match==4){ # If corstr = "m-dep" alpha.new <- 0.2^(1:Mv) }else if(cor.match==5){ # If corstr = "unstructured" alpha.new <- rep(0.2, sum(1:(max(len)-1))) }else if(cor.match==7){ # If corstr = "userdefined" alpha.new <- rep(0.2, max(unique(as.vector(corr.mat)))) } }else{ alpha.new <- init.alpha } #if no initial overdispersion parameter, start at 1 if(is.null(init.phi)){ phi <- 1 }else{ phi <- init.phi } beta <- init.beta #Set up matrix storage StdErr <- Diagonal(nn) dInvLinkdEta <- Diagonal(nn) Resid <- Diagonal(nn) # Initialize for each correlation structure if(print.log){print(paste("Initialize the correlation structure:",corstr,sep=" "))} if(cor.match == 1){ # INDEPENDENCE R.alpha.inv <- Diagonal(x = rep.int(1, nn))/phi BlockDiag <- getBlockDiag(len)$BDiag }else if(cor.match == 2){ # AR-1 tmp <- buildAlphaInvAR(len) # These are the vectors needed to update the inverse correlation a1<- tmp$a1 a2 <- tmp$a2 a3 <- tmp$a3 a4 <- tmp$a4 # row.vec and col.vec for the big block diagonal of correlation inverses # both are vectors of indices that facilitate in updating R.alpha.inv row.vec <- tmp$row.vec col.vec <- tmp$col.vec BlockDiag <- getBlockDiag(len)$BDiag }else if(cor.match == 3){ # EXCHANGEABLE # Build a block diagonal correlation matrix for updating and sandwich calculation # this matrix is block diagonal with all ones. Each block is of dimension cluster size. tmp <- getBlockDiag(len) BlockDiag <- tmp$BDiag # Create a vector of length number of observations with associated cluster size for each observation n.vec <- vector("numeric", nn) index <- c(cumsum(len) - len, nn) for(i in 1:K){ n.vec[(index[i]+1) : index[i+1]] <- rep(len[i], len[i]) } }else if(cor.match == 4){ # M-DEPENDENT, check that M is not too large if(Mv >= max(len)){ stop("Cannot estimate that many parameters: Mv >= max(clustersize)") } # Build block diagonal similar to in exchangeable case, also get row indices and column # indices for fast matrix updating later. 
tmp <- getBlockDiag(len) BlockDiag <- tmp$BDiag row.vec <- tmp$row.vec col.vec <- tmp$col.vec R.alpha.inv <- NULL }else if(cor.match == 5){ # UNSTRUCTURED if( max(len^2 - len)/2 > length(len)){ stop("Cannot estimate that many parameters: not enough subjects for unstructured correlation") } tmp <- getBlockDiag(len) BlockDiag <- tmp$BDiag row.vec <- tmp$row.vec col.vec <- tmp$col.vec }else if(cor.match == 6){ # FIXED # check if matrix meets some basic conditions corr.mat <- checkFixedMat(corr.mat, len) R.alpha.inv <- as(getAlphaInvFixed(corr.mat, len), "symmetricMatrix")/phi BlockDiag <- getBlockDiag(len)$BDiag }else if(cor.match == 7){ # USERDEFINED corr.mat <- checkUserMat(corr.mat, len) # get the structure of the correlation matrix in a way that # I can use later on. tmp1 <- getUserStructure(corr.mat) corr.list <- tmp1$corr.list user.row <- tmp1$row.vec user.col <- tmp1$col.vec struct.vec <- tmp1$struct.vec # the same block diagonal trick. tmp2 <- getBlockDiag(len) BlockDiag <- tmp2$BDiag row.vec <- tmp2$row.vec col.vec <- tmp2$col.vec }else if(cor.match == 0){ stop("Ambiguous Correlation Structure Specification") }else{ stop("Unsupported Correlation Structure") } stop <- F converged <- F count <- 0 beta.old <- beta unstable <- F phi.old <- phi if(print.log){print("------------------------------------------------------------> Dataset description")} if(print.log){print(paste("Number of CLUSTERS:",length(unique(id)),sep=" "))} if(print.log){print(paste("Number of INDIVIDUAL:",nn,sep=" "))} if(print.log){print(paste("Number of Observations included:",sum(includedvec),sep=" "))} if(print.log){print(paste("Variable for PS:",weightsname,sep=" "))} if(print.log){print(paste("Variable for OUTCOME:",nameY,sep=" "))} if(print.log){print(paste("Variable for MISSING:",nameMISS,sep=" "))} if(print.log){print(paste("Variable for TRT:",nameTRT,sep=" "))} # Main fisher scoring loop if(print.log){print("------------------------------------------------------------> Estimations")} if(max(diag(sqrtW))==1){ if(print.log)print("NON-IPW Analysis") }else{ if(print.log)print("IPW Analysis") } if(is.null(B)){ if(print.log)print("NON-AUGMENTED Analysis") }else{ if(print.log)print("AUGMENTED Analysis") } while(!stop){ count <- count+1 if(print.log){print(paste("Main loop for estimation:",count,sep=" "))} eta <- as.vector(X %*% beta) + off mu <- InvLink(eta) diag(StdErr) <- sqrt(1/VarFun(mu)) if(!scale.fix){ phi <- updatePhi(Y, mu, VarFun, p, StdErr, included, includedlen) } phi.new <- phi if(print.log){print(paste("Phi",phi.new,sep=" "))} ## Calculate alpha, R(alpha)^(-1) / phi if(cor.match == 2){ # AR-1 alpha.new <- updateAlphaAR(Y, mu, VarFun, phi, id, len, StdErr, p, included, includedlen, includedvec, allobs) R.alpha.inv <- getAlphaInvAR(alpha.new, a1,a2,a3,a4, row.vec, col.vec)/phi }else if(cor.match == 3){ #EXCHANGEABLE alpha.new <- updateAlphaEX(Y, mu, VarFun, phi, id, len, StdErr, Resid, p, BlockDiag, included, includedlen) R.alpha.inv <- getAlphaInvEX(alpha.new, n.vec, BlockDiag)/phi }else if(cor.match == 4){ # M-DEPENDENT if(Mv==1){ alpha.new <- updateAlphaAR(Y, mu, VarFun, phi, id, len, StdErr, p, included, includedlen, includedvec, allobs) }else{ alpha.new <- updateAlphaMDEP(Y, mu, VarFun, phi, id, len, StdErr, Resid, p, BlockDiag, Mv, included, includedlen, allobs) if(sum(len>Mv) <= p){ unstable <- T } } if(any(alpha.new >= 1)){ stop <- T warning("Some estimated correlation is greater than 1, stopping.") } R.alpha.inv <- getAlphaInvMDEP(alpha.new, len, row.vec, col.vec)/phi }else if(cor.match == 5){ # 
UNSTRUCTURED alpha.new <- updateAlphaUnstruc(Y, mu, VarFun, phi, id, len, StdErr, Resid, p, BlockDiag, included, includedlen, allobs) # This has happened to me (greater than 1 correlation estimate) if(any(alpha.new >= 1)){ stop <- T warning("Some estimated correlation is greater than 1, stopping.") } R.alpha.inv <- getAlphaInvUnstruc(alpha.new, len, row.vec, col.vec)/phi }else if(cor.match ==6){ # FIXED CORRELATION, DON'T NEED TO RECOMPUTE R.alpha.inv <- R.alpha.inv*phi.old/phi }else if(cor.match == 7){ # USER SPECIFIED alpha.new <- updateAlphaUser(Y, mu, phi, id, len, StdErr, Resid, p, BlockDiag, user.row, user.col, corr.list, included, includedlen, allobs) R.alpha.inv <- getAlphaInvUser(alpha.new, len, struct.vec, user.row, user.col, row.vec, col.vec)/phi }else if(cor.match == 1){ # INDEPENDENT R.alpha.inv <- Diagonal(x = rep.int(1/phi, nn)) alpha.new <- "independent" } beta.list <- updateBeta(Y=Y, X=X,X.t=X.t,X.c=X.c, B=B, beta=beta, off=off, InvLinkDeriv=InvLinkDeriv, InvLink=InvLink, VarFun=VarFun, R.alpha.inv=R.alpha.inv, StdErr=StdErr, dInvLinkdEta=dInvLinkdEta, tol=tol, sqrtW=sqrtW,W=W,included=included,typeweights=typeweights,pi.a=pi.a) if(print.log){print(paste(cat("beta",unlist(beta.list$beta))," "))} if(print.log){print(paste("alpha",alpha.new,sep=" "))} #print("LOOP") beta <- beta.list$beta phi.old <- phi if( max(abs((beta - beta.old)/(beta.old + .Machine$double.eps))) < tol ){converged <- T; stop <- T} if(count >= maxit){stop <- T} beta.old <- beta } biggest <- which.max(len)[1] index <- cumsum(len[biggest]) biggest.R.alpha.inv <- R.alpha.inv[(index+1):(index+len[biggest]) , (index+1):(index+len[biggest])] eta <- as.vector(X %*% beta) + off if(print.log){print("------------------------------------------------------------> Variance estimation")} sandvar.list <- list() sandvar.list$sandvar <- NULL if(sandwich){ sandvar.list <- getSandwich(Y=Y, X=X,X.t=X.t,X.c=X.c, B=B, beta=beta, off=off, id=id, R.alpha.inv=R.alpha.inv, phi=phi, InvLinkDeriv=InvLinkDeriv, InvLink=InvLink, VarFun=VarFun, hessMat=beta.list$hess, StdErr=StdErr, dInvLinkdEta=dInvLinkdEta, BlockDiag=BlockDiag, sqrtW=sqrtW,W=W,included=included,typeweights=typeweights,pi.a=pi.a, print.log=print.log) }else{ sandvar.list <- list() sandvar.list$sandvar <- NULL if(print.log){print("No sandwich")} } if(is.null(model.augmentation.trt)&is.null(model.augmentation.ctrl)&is.null(model.weights)){ sandwich.nuisance<-FALSE } # sandvarnuis.list <- list() # sandvarnuis.list$sandadjvar <- NULL if(sandwich.nuisance){ dat.nuis<-cleandata(dat=data,type="nuisance",nameY=nameY,cc=FALSE,formula=formula,print=FALSE) tryCatch({ sandvarnuis.list <- getSandwichNuisance(Y=Y, X=X,X.t=X.t,X.c=X.c, B=B, beta=beta, off=off, id=id, R.alpha.inv=R.alpha.inv, phi=phi, InvLinkDeriv=InvLinkDeriv, InvLink=InvLink, VarFun=VarFun, hessMat=beta.list$hess, StdErr=StdErr, dInvLinkdEta=dInvLinkdEta, BlockDiag=BlockDiag, sqrtW=sqrtW,W=W,included=included,typeweights=typeweights,pi.a=pi.a, nameTRT=nameTRT,propensity.score=propensity.score,om.t=om.t,om.c=om.c, data=dat.nuis,nameY=nameY,nameMISS=nameMISS,print.log=print.log) }, error=function(e){ cat("There was an error in the nuisance variance computation \n") }) }else{ sandvarnuis.list <- list() sandvarnuis.list$sandadjvar <- NULL if(print.log){print("No nuisance-adjusted sandwich")} } sandvarfay.list <- list() sandvarfay.list$sandadjfay <- NULL if(fay.adjustment){ tryCatch({ sandvarfay.list <- 
getFay(formula=formula,id=id,family=family,data=data,corstr=corstr,b=fay.bound,beta=beta,alpha=alpha.new,scale=phi.new,Y=Y, X=X,hessMAT=beta.list$hess, X.t=X.t,X.c=X.c, B=B, off=off, R.alpha.inv=R.alpha.inv, phi=phi, InvLinkDeriv=InvLinkDeriv, InvLink=InvLink, VarFun=VarFun, StdErr=StdErr, dInvLinkdEta=dInvLinkdEta, BlockDiag=BlockDiag, sqrtW=sqrtW,W=W,included=included,typeweights=typeweights,pi.a=pi.a, nameTRT=nameTRT,propensity.score=propensity.score,om.t=om.t,om.c=om.c, nameY=nameY,nameMISS=nameMISS,print.log=print.log) }, error=function(e){ cat("There was an error in the fay variance computation \n")} ) }else{ sandvarfay.list <- list() sandvarfay.list$sandadjfay <- NULL if(print.log){print("No fay adjustment sandwich for small sample")} } if(!converged){warning("Did not converge")} if(unstable){warning("Number of subjects with number of observations >= Mv is very small, some correlations are estimated with very low sample size.")} # Create object of class CRTgeeDR with information about the fit dat <- model.frame(formula, data, na.action=na.pass) X <- model.matrix(formula, dat) if(alpha.new == "independent"){alpha.new <- 0} results <- list() results$beta <- as.vector(beta) results$phi <- phi results$alpha <- alpha.new if(cor.match == 6){ results$alpha <- as.vector(triu(corr.mat, 1)[which(triu(corr.mat,1)!=0)]) } results$coefnames <- colnames(X) results$niter <- count results$converged <- converged results$var.naiv <- solve(beta.list$hess)##*phi.new ## call model-based results$var <- sandvar.list$sandvar results$var.nuisance <- sandvarnuis.list$sandadjvar results$var.fay <- sandvarfay.list$sandadjfay results$call <- call results$corr <- cor.vec[cor.match] results$clusz <- len results$FunList <- FunList results$X <- X results$offset <- off results$eta <- eta results$dropped <- dropid results$weights <- weights results$ps.model<-propensity.score if(is.null(propensity.score)){ results$used.weights<- NULL }else{ results$used.weights<-diag(W) } results$om.model.trt<-om.t results$om.model.ctrl<-om.c class(results) <- "CRTgeeDR" if(print.log){ print("********************************************************************************************") print("RESULTS: Doubly Robust Inverse Probability Weighted Augmented GEE estimator") print("********************************************************************************************") print(results) print(summary(results)) } return(results) } #' The data.sim Dataset. #' #' HIV risk of infection after STI/HIV intervention in a cluster randomized trial. #' #' @details A dataset containing the HIV risk scores and presence of risky behaviors (yes/no) and other covarites of 10000 subjects among 100 communities. #' The variables are as follows: #' #' \itemize{ #' \item IDPAT subject id #' \item CLUSTER cluster id #' \item TRT treatment status, 1 is received STI/HIV intervention #' \item X1 A covariate following a N(0,1) #' \item JOB employement status #' \item MARRIED marital status #' \item AGE age #' \item HIV.KNOW Score for HIV knowlege #' \item RELIGION religiosity score #' \item OUTCOME Binary outcome - 1 if the subject is at high risk of HIV infection, 0 otherwise. NA if missing. #' \item MISSING 1 if the ouctome is missing - 0 otherwise. #' } #' #' @format A data frame with 10000 rows and 8 variables #' @name data.sim NULL #' Doubly Robust Inverse Probability Weighted Augmented GEE estimator #' #' The CRTgeeDR package allows you to estimates parameters in a regression model (with possibly a link function). 
#' It allows augmentation on the treatment and IPW for missing outcome data, each used alone or combined (doubly robust).
#'
#' The only function you're likely to need from \pkg{CRTgeeDR} is
#' \code{\link{geeDREstimation}}. Otherwise refer to the help documentation.
#'
#' @docType package
#' @name CRTgeeDR
NULL
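# Illustrative follow-up to the roxygen examples above (a sketch, kept as comments;
# it assumes the doubly robust fit 'drresults' from those examples and a treatment
# coefficient named "TRT" in the marginal model). It combines the $beta, $coefnames
# and $var components documented under @return into a Wald-type 95% confidence
# interval based on the sandwich variance.
# j   <- which(drresults$coefnames == "TRT")
# est <- drresults$beta[j]
# se  <- sqrt(diag(as.matrix(drresults$var)))[j]
# c(estimate = est, lower = est - qnorm(0.975) * se, upper = est + qnorm(0.975) * se)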
# Source: /scratch/gouwar.j/cran-all/cranData/CRTgeeDR/R/CRTgeeDR-main.R
### Calculate the Fay adjusted estimator of the variance #Adapted from the function 'gee.var.fg' included in the package 'geesmv', #authored by Ming Wang #under the GPL-2 license. getFay = function(formula,id,family,data,corstr,b,beta,alpha,scale,Y,X,hessMAT, X.t,X.c, B, off, R.alpha.inv, phi, InvLinkDeriv, InvLink, VarFun, StdErr, dInvLinkdEta, BlockDiag, sqrtW,W,included,typeweights,pi.a, nameTRT,propensity.score,om.t,om.c, nameY,nameMISS,print.log){ data<-as.data.frame(cbind(id,Y)) names(data)<-c("id","response") mat<-as.data.frame(X) mat.c<-as.data.frame(X.c) mat.t<-as.data.frame(X.t) ### Fit the GEE model to get the estimate of parameters \hat{\beta}; #library(stats) #gee.fit <- gee(formula,data=data,id=id,family=family,corstr=corstr) beta_est <- beta alpha <- alpha len <- length(beta_est) len_vec <- len^2 ### Estimate the robust variance for \hat{\beta} #data$id <- id cluster<-cluster.size(data$id) ncluster<-max(cluster$n) size<-cluster$m mat$subj <- rep(unique(data$id), cluster$n) mat.t$subj <- rep(unique(data$id), cluster$n) mat.c$subj <- rep(unique(data$id), cluster$n) if(is.character(corstr)){ var <- switch(corstr, "independence"=cormax.ind(ncluster), "exchangeable"=cormax.exch(ncluster, alpha), "AR-M"=cormax.ar1(ncluster, alpha)) }else{ print(corstr) stop("'working correlation structure' not recognized") } if(is.character(family)){ family <- switch(family, "gaussian"="gaussian", "binomial"="binomial", "poisson"="poisson") }else{ if(is.function(family)){ family <- family()[[1]] }else{ print(family) stop("'family' not recognized") } } cov.beta<-unstr<-matrix(0,nrow=len,ncol=len) step11<-hessMAT step12<-matrix(0,nrow=len,ncol=len) step13<-matrix(0,nrow=len_vec,ncol=1) step14<-matrix(0,nrow=len_vec,ncol=len_vec) p<-matrix(0,nrow=len_vec,ncol=size) for (i in 1:size){ y<-as.matrix(data$response[data$id==unique(data$id)[i]]) covariate<-as.matrix(subset(mat[,-length(mat[1,])], mat$subj==unique(data$id)[i])) covariate.t<-as.matrix(subset(mat.t[,-length(mat.t[1,])], mat.t$subj==unique(data$id)[i])) covariate.c<-as.matrix(subset(mat.c[,-length(mat.c[1,])], mat.c$subj==unique(data$id)[i])) ncluster=cluster$n[i] var1=var[1:ncluster,1:ncluster] hessi<-hessianCommunity(Y=y, X=covariate,X.t=covariate.t,X.c=covariate.c, B=B[which(data$id==unique(data$id)[i]),], beta=beta, off=off[which(data$id==unique(data$id)[i])], InvLinkDeriv=InvLinkDeriv, InvLink=InvLink, VarFun=VarFun, R.alpha.inv=R.alpha.inv[which(data$id==unique(data$id)[i]),which(data$id==unique(data$id)[i])], StdErr=StdErr[which(data$id==unique(data$id)[i]),which(data$id==unique(data$id)[i])], dInvLinkdEta=dInvLinkdEta[which(data$id==unique(data$id)[i]),which(data$id==unique(data$id)[i])], sqrtW[which(data$id==unique(data$id)[i]),which(data$id==unique(data$id)[i])],W[which(data$id==unique(data$id)[i]),which(data$id==unique(data$id)[i])], included[which(data$id==unique(data$id)[i]),which(data$id==unique(data$id)[i])],typeweights=typeweights,pi.a=pi.a) xx<-hessi$hess Qi <- xx%*%solve(step11) Ai<-diag((1-pmin(b,diag(Qi)))^(-0.5)) xy<-Ai%*%hessi$esteq step12<-step12+xy%*%t(xy) step13<-step13+vec(xy%*%t(xy)) p[,i]<-vec(xy%*%t(xy)) } for (i in 1:size){ dif<-(p[,i]-step13/size)%*%t(p[,i]-step13/size) step14<-step14+dif } cov.beta<-solve(step11)%*%(step12)%*%solve(step11) cov.var<-size/(size-1)*kronecker(solve(step11), solve(step11))%*%step14%*%kronecker(solve(step11), solve(step11)) return(list(sandadjfay=cov.beta, cov.var=cov.var)) } hessianCommunity = function(Y, X,X.t,X.c, B, beta, off, InvLinkDeriv, InvLink, VarFun, R.alpha.inv, StdErr, 
dInvLinkdEta, sqrtW,W,included,typeweights,pi.a){ beta.new<-beta eta <- as.vector(X%*%beta.new) + off diag(dInvLinkdEta) <- InvLinkDeriv(eta) mu <- InvLink(eta) diag(StdErr) <- sqrt(1/VarFun(mu)) if(is.null(B)){ if(is.null(typeweights)){ hess <- crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*%X, R.alpha.inv %*% sqrtW %*% StdErr %*%dInvLinkdEta %*% X) esteq <- crossprod(sqrtW %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% StdErr %*% as.matrix(Y - mu)) }else{ if(typeweights=="GENMOD"){ hess <- crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*%X, R.alpha.inv %*% sqrtW %*% StdErr %*%dInvLinkdEta %*% X) esteq <- crossprod(sqrtW %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% StdErr %*% as.matrix(Y - mu)) } else{ hess <- crossprod( StdErr %*% dInvLinkdEta %*%X, R.alpha.inv %*% W %*% StdErr %*%dInvLinkdEta %*% X) esteq <- crossprod( StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% W %*% StdErr %*% as.matrix(Y - mu)) } } } else{ nn<-length(Y) StdErr.c <- Diagonal(nn) dInvLinkdEta.c <- Diagonal(nn) eta.c <- as.vector(X.c%*%beta.new) + off diag(dInvLinkdEta.c) <- InvLinkDeriv(eta.c) mu.c <- InvLink(eta.c) diag(StdErr.c) <- sqrt(1/VarFun(mu.c)) nn<-length(Y) StdErr.t <- Diagonal(nn) dInvLinkdEta.t <- Diagonal(nn) eta.t <- as.vector(X.t%*%beta.new) + off diag(dInvLinkdEta.t) <- InvLinkDeriv(eta.t) mu.t <- InvLink(eta.t) diag(StdErr.t) <- sqrt(1/VarFun(mu.t)) if(is.null(typeweights)){ hess <- (1-pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*%X.c, R.alpha.inv %*% StdErr.c %*%dInvLinkdEta.c %*% X.c)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*%X.t, R.alpha.inv %*% StdErr.t %*%dInvLinkdEta.t %*% X.t) esteq <- crossprod( StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% StdErr %*% as.matrix(Y - B[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% as.matrix(B[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% as.matrix(B[,"B.t"]-mu.t)) }else{ if(typeweights=="GENMOD"){ hess <- (1-pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*%X.c, R.alpha.inv %*% StdErr.c %*%dInvLinkdEta.c %*% X.c)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*%X.t, R.alpha.inv %*% StdErr.t %*%dInvLinkdEta.t %*% X.t) esteq <- crossprod(sqrtW %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% StdErr %*% as.matrix(Y - B[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% as.matrix(B[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% as.matrix(B[,"B.t"]-mu.t)) }else{ hess <- (1-pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*%X.c, R.alpha.inv %*% StdErr.c %*%dInvLinkdEta.c %*% X.c)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*%X.t, R.alpha.inv %*% StdErr.t %*%dInvLinkdEta.t %*% X.t) esteq <- crossprod( StdErr %*%dInvLinkdEta %*%X ,R.alpha.inv%*% W %*% StdErr %*% as.matrix(Y - B[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% as.matrix(B[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% as.matrix(B[,"B.t"]-mu.t)) } } } return(list(beta = beta.new, hess = hess, esteq=esteq)) }
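# Toy illustration of the Fay correction applied above (a sketch with made-up
# numbers, not package output). Each cluster's estimating-function contribution
# is rescaled by Ai = diag((1 - pmin(b, diag(Hi %*% solve(H))))^(-0.5)), which
# up-weights influential clusters and counteracts the small-sample downward bias
# of the plain sandwich variance.
# H  <- diag(c(4, 5))    # total hessian, summed over all clusters
# Hi <- diag(c(1.5, 2))  # contribution of a single cluster
# b  <- 0.75             # default bound (fay.bound)
# Qi <- Hi %*% solve(H)
# Ai <- diag((1 - pmin(b, diag(Qi)))^(-0.5))
# Ai                     # entries > 1, so this cluster's score is inflated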
# Source: /scratch/gouwar.j/cran-all/cranData/CRTgeeDR/R/getFay.R
### Calculate the sandwich estimator #Adapted from the function 'getSandwich' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. getSandwich = function(Y, X,X.t,X.c, B, beta,off, id, R.alpha.inv, phi, InvLinkDeriv, InvLink, VarFun, hessMat, StdErr, dInvLinkdEta, BlockDiag, sqrtW,W,included,typeweights,pi.a,print.log){ eta <- as.vector(X%*%beta) + off diag(dInvLinkdEta) <- InvLinkDeriv(eta) mu <- InvLink(eta) diag(StdErr) <- sqrt(1/VarFun(mu)) if(is.null(B)){ if(is.null(typeweights)){ scoreDiag <- Diagonal(x= Y - mu) BlockDiag <- scoreDiag %*% BlockDiag %*% scoreDiag numsand <- as.matrix(crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% BlockDiag %*% StdErr %*% R.alpha.inv %*% StdErr %*% dInvLinkdEta %*% X)) }else{ scoreDiag <- Diagonal(x= Y - mu) BlockDiag <- scoreDiag %*% BlockDiag %*% scoreDiag if(typeweights=="GENMOD"){ numsand <- as.matrix(crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% StdErr %*% BlockDiag %*% StdErr %*% sqrtW %*% R.alpha.inv %*% sqrtW %*% StdErr %*% dInvLinkdEta %*% X)) }else{ numsand <- as.matrix(crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% W %*% StdErr %*% BlockDiag %*% StdErr %*% W %*% R.alpha.inv %*% StdErr %*% dInvLinkdEta %*% X)) } } }else{ nn<-length(Y) StdErr.c <- Diagonal(nn) dInvLinkdEta.c <- Diagonal(nn) eta.c <- as.vector(X.c%*%beta) + off diag(dInvLinkdEta.c) <- InvLinkDeriv(eta.c) mu.c <- InvLink(eta.c) diag(StdErr.c) <- sqrt(1/VarFun(mu.c)) nn<-length(Y) StdErr.t <- Diagonal(nn) dInvLinkdEta.t <- Diagonal(nn) eta.t <- as.vector(X.t%*%beta) + off diag(dInvLinkdEta.t) <- InvLinkDeriv(eta.t) mu.t <- InvLink(eta.t) diag(StdErr.t) <- sqrt(1/VarFun(mu.t)) scoreDiag <- Diagonal(x= Y - B[,"Bi"]) scoreDiag.t <-Diagonal(x= B[,"B.t"]-mu.t) scoreDiag.c <-Diagonal(x= B[,"B.c"]-mu.c) if(is.null(typeweights)){ aa <- crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% scoreDiag %*% BlockDiag %*% scoreDiag %*% StdErr %*% R.alpha.inv %*% StdErr %*% dInvLinkdEta %*% X) bb <- (pi.a**2)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) cc <- ((1-pi.a)**2)*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c %*% X.c) ab <- pi.a*crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% scoreDiag %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) ac <- (1-pi.a)*crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% scoreDiag %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c %*% X.c) ba <- (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag %*% StdErr %*% R.alpha.inv %*% StdErr %*% dInvLinkdEta %*% X) bc <- (pi.a*(1-pi.a))*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c %*% X.c) ca <- ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c %*% BlockDiag %*% scoreDiag %*% StdErr %*% R.alpha.inv %*% StdErr %*% dInvLinkdEta %*% X) cb <- ((1-pi.a)*pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% 
scoreDiag.c %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) numsand <- as.matrix((aa+bb+cc+ab+ac+ba+bc+ca+cb)) }else{ if(typeweights=="GENMOD"){ aa <- crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% StdErr %*% scoreDiag %*% BlockDiag %*% scoreDiag %*% StdErr %*% sqrtW %*% R.alpha.inv %*% sqrtW %*% StdErr %*% dInvLinkdEta %*% X) bb <- (pi.a**2)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) cc <- ((1-pi.a)**2)*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c %*% X.c) ab <- pi.a*crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% StdErr %*% scoreDiag %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) ac <- (1-pi.a)*crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% StdErr %*% scoreDiag %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c %*% X.c) ba <- (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag %*% StdErr %*% sqrtW %*% R.alpha.inv %*% sqrtW %*% StdErr %*% dInvLinkdEta %*% X) bc <- (pi.a*(1-pi.a))*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c %*% X.c) ca <- ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c %*% BlockDiag %*% scoreDiag %*% StdErr %*% sqrtW %*% R.alpha.inv %*% sqrtW %*% StdErr %*% dInvLinkdEta %*% X) cb <- ((1-pi.a)*pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) numsand <- as.matrix((aa+bb+cc+ab+ac+ba+bc+ca+cb)) } else{ aa <- crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% W %*% scoreDiag %*% BlockDiag %*% scoreDiag %*% W %*% StdErr %*% R.alpha.inv %*% StdErr %*% dInvLinkdEta %*% X) bb <- (pi.a**2)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) cc <- ((1-pi.a)**2)*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c %*% X.c) ab <- pi.a*crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% W %*% scoreDiag %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) ac <- (1-pi.a)*crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% W %*% scoreDiag %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c %*% X.c) ba <- (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag %*% W %*% StdErr %*% R.alpha.inv %*% StdErr %*% dInvLinkdEta %*% X) bc <- (pi.a*(1-pi.a))*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t %*% BlockDiag %*% scoreDiag.c %*% StdErr.c %*% R.alpha.inv %*% StdErr.c %*% dInvLinkdEta.c 
%*% X.c) ca <- ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c %*% BlockDiag %*% scoreDiag %*% W %*% StdErr %*% R.alpha.inv %*% StdErr %*% dInvLinkdEta %*% X) cb <- ((1-pi.a)*pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c %*% BlockDiag %*% scoreDiag.t %*% StdErr.t %*% R.alpha.inv %*% StdErr.t %*% dInvLinkdEta.t %*% X.t) numsand <- as.matrix((aa+bb+cc+ab+ac+ba+bc+ca+cb)) #print("old") #print(numsand) #SEE<-crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% W %*% scoreDiag)+ # (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t)+ # ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c) #numsand<-SEE%*%BlockDiag%*%t(SEE) #print("new") #print(numsand) } } } hessMat<-as.matrix(hessMat) sandvar <- t(solve(hessMat, numsand)) sandvar <- t(solve(t(hessMat), sandvar)) if(print.log){ print("Sandwich variance") print(sandvar) } return(list(sandvar = sandvar, numsand = numsand, hessMat=hessMat)) }
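# Small numeric check of the final assembly in getSandwich above (a sketch with
# arbitrary symmetric matrices). The two solve()/transpose steps are algebraically
# solve(H) %*% M %*% solve(H); for the (symmetric) GEE hessian this is the usual
# bread-meat-bread sandwich form.
# H <- matrix(c(4, 1, 1, 3), 2, 2)      # "bread": hessian of the estimating equations
# M <- matrix(c(2, 0.5, 0.5, 1), 2, 2)  # "meat": covariance of the stacked scores
# V1 <- t(solve(t(H), t(solve(H, M))))
# V2 <- solve(H) %*% M %*% solve(H)
# all.equal(V1, V2)                     # TRUE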
# Source: /scratch/gouwar.j/cran-all/cranData/CRTgeeDR/R/getSandwich.R
### Calculate the sandwich estimator accounting for estimation of nuisance parameters in PS and OM. getSandwichNuisance = function(Y, X,X.t,X.c, B, beta,off, id, R.alpha.inv, phi, InvLinkDeriv, InvLink, VarFun, hessMat, StdErr, dInvLinkdEta, BlockDiag, sqrtW,W,included,typeweights,pi.a,nameTRT,propensity.score,om.t,om.c,data,nameY,nameMISS,print.log){ eta <- as.vector(X%*%beta) + off diag(dInvLinkdEta) <- InvLinkDeriv(eta) mu <- InvLink(eta) diag(StdErr) <- sqrt(1/VarFun(mu)) nn<-length(Y) StdErr.c <- Diagonal(nn) dInvLinkdEta.c <- Diagonal(nn) eta.c <- as.vector(X.c%*%beta) + off diag(dInvLinkdEta.c) <- InvLinkDeriv(eta.c) mu.c <- InvLink(eta.c) diag(StdErr.c) <- sqrt(1/VarFun(mu.c)) nn<-length(Y) StdErr.t <- Diagonal(nn) dInvLinkdEta.t <- Diagonal(nn) eta.t <- as.vector(X.t%*%beta) + off diag(dInvLinkdEta.t) <- InvLinkDeriv(eta.t) mu.t <- InvLink(eta.t) diag(StdErr.t) <- sqrt(1/VarFun(mu.t)) if((!is.null(om.t))&(!is.null(om.c))){ scoreDiag <- Diagonal(x= Y - B[,"Bi"]) scoreDiag.t <-Diagonal(x= B[,"B.t"]-mu.t) scoreDiag.c <-Diagonal(x= B[,"B.c"]-mu.c) } scoreDiag.ps <- Diagonal(x= Y - mu) design.weights<-NULL if(!is.null(propensity.score)){ ######## Nuisance parameters for weights design.weights<-model.matrix(propensity.score) var.w <- getfam(propensity.score$family)$VarFun HW<--crossprod(design.weights,Diagonal(x=var.w(fitted(propensity.score)))%*%design.weights) SW<-t(design.weights)%*%Diagonal(x=((1-data[,nameMISS])-fitted(propensity.score))) #%*%Diagonal(x=((1-data[,nameMISS])-fitted(propensity.score)))%*%design.weights } design.om.trt<-NULL design.om.ctrl<-NULL design.om.trt.all<-NULL design.om.trt.all<-NULL if((!is.null(om.t))&(!is.null(om.c))){ ######## Nuisance parameters for outcome design.om.trt<-model.matrix(om.t) design.om.ctrl<-model.matrix(om.c) design.om.trt.all<-model.matrix(as.formula(paste("~",as.character(om.t$formula[3]))),data=data) design.om.ctrl.all<-model.matrix(as.formula(paste("~",as.character(om.c$formula[3]))),data=data) var.om <- getfam(om.t$family)$VarFun HB.trt<--crossprod(design.om.trt,Diagonal(x=var.om(fitted(om.t)))%*%design.om.trt) HB.ctrl<--crossprod(design.om.ctrl,Diagonal(x=var.om(fitted(om.c)))%*%design.om.ctrl) YsansNA<-ifelse(is.na(data[,nameY]),0,data[,nameY]) predtrt<-as.data.frame(InvLink(as.matrix(design.om.trt.all)%*%om.t$coefficients)) diag.trt<-unlist((YsansNA-predtrt)*as.numeric(I(!is.na(data[,nameY])))) predctrl<-as.data.frame(InvLink(as.matrix(design.om.ctrl.all)%*%om.c$coefficients)) diag.ctrl<-unlist((YsansNA-predctrl)*as.numeric(I(!is.na(data[,nameY])))) SB.trt.all<-t(design.om.trt.all)%*%Diagonal(x=as.numeric(diag.trt))#%*%Diagonal(x=(data.trt[which(!is.na(data.trt[,nameY])),nameY]-fitted(om.t)))%*%design.om.trt SB.ctrl.all<-t(design.om.ctrl.all)%*%Diagonal(x=as.vector(diag.ctrl))#%*%Diagonal(x=(data.ctrl[which(!is.na(data.ctrl[,nameY])),nameY]-fitted(om.c)))%*%design.om.ctrl } if((!is.null(om.t))&(!is.null(om.c))&(!is.null(propensity.score))){ nuisance<-c(beta,propensity.score$coefficients,om.t$coefficients,om.c$coefficients) nuisance<-replace(nuisance, is.na(nuisance), 0) jacobian.nuisance<-jacobian(funcoptvarnuisanceOMPS, nuisance,method="Richardson",sqrtW=sqrtW, design.weights=design.weights,design.om.trt.all=design.om.trt.all,design.om.ctrl.all=design.om.ctrl.all, ,design.om.trt=design.om.trt,design.om.ctrl=design.om.ctrl, nameTRT=nameTRT,nameY=nameY,nameMISS=nameMISS, data=data,X=X,X.c=X.c,X.t=X.t,InvLinkDeriv=InvLinkDeriv, InvLink=InvLink, 
VarFun=VarFun,dInvLinkdEta=dInvLinkdEta,StdErr=StdErr,R.alpha.inv=R.alpha.inv,Y=Y,off=off,pi.a=pi.a,B=B,typeweights=typeweights) if(typeweights=="GENMOD"){ SEE<-crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% StdErr %*% scoreDiag)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t)+ ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c) }else{ SEE<-crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% W %*% scoreDiag)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t)+ ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c) } stackEE<-rbind(as.matrix(SEE),as.matrix(SW),as.matrix(SB.trt.all),as.matrix(SB.ctrl.all)) stackEE<-replace(stackEE, is.na(stackEE), 0) stackEEprod<-stackEE%*%BlockDiag%*%t(stackEE) sandadjvar<-t(ginv(jacobian.nuisance))%*%stackEEprod%*%ginv(jacobian.nuisance) } if((!is.null(om.t))&(!is.null(om.c))&(is.null(propensity.score))){ nuisance<-c(beta,om.t$coefficients,om.c$coefficients) nuisance<-replace(nuisance, is.na(nuisance), 0) jacobian.nuisance<-jacobian(funcoptvarnuisanceOM, nuisance,method="Richardson",sqrtW=sqrtW, design.weights=design.weights,design.om.trt.all=design.om.trt.all,design.om.ctrl.all=design.om.ctrl.all, ,design.om.trt=design.om.trt,design.om.ctrl=design.om.ctrl, nameTRT=nameTRT,nameY=nameY,nameMISS=nameMISS, data=data,X=X,X.c=X.c,X.t=X.t,InvLinkDeriv=InvLinkDeriv, InvLink=InvLink, VarFun=VarFun,dInvLinkdEta=dInvLinkdEta,StdErr=StdErr,R.alpha.inv=R.alpha.inv,Y=Y,off=off,pi.a=pi.a,B=B,typeweights=typeweights) if(!is.null(typeweights)){ if(typeweights=="GENMOD"){ SEE<-crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% StdErr %*% scoreDiag)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t)+ ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c) }else{ SEE<-crossprod(StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% sqrtW %*% StdErr %*% scoreDiag)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t)+ ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c) } }else{ SEE<-crossprod(StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% scoreDiag)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t)+ ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c) } stackEE<-rbind(as.matrix(SEE),as.matrix(SB.trt.all),as.matrix(SB.ctrl.all)) stackEE<-replace(stackEE, is.na(stackEE), 0) stackEEprod<-stackEE%*%BlockDiag%*%t(stackEE) sandadjvar<-t(ginv(jacobian.nuisance))%*%stackEEprod%*%ginv(jacobian.nuisance) } if((is.null(om.t))&(is.null(om.c))&(!is.null(propensity.score))){ nuisance<-c(beta,propensity.score$coefficients) nuisance<-replace(nuisance, is.na(nuisance), 0) jacobian.nuisance<-jacobian(funcoptvarnuisancePS, nuisance,method="Richardson",sqrtW=sqrtW, design.weights=design.weights,design.om.trt.all=design.om.trt.all,design.om.ctrl.all=design.om.ctrl.all, design.om.trt=design.om.trt,design.om.ctrl=design.om.ctrl, nameTRT=nameTRT,nameY=nameY,nameMISS=nameMISS, data=data,X=X,X.c=X.c,X.t=X.t,InvLinkDeriv=InvLinkDeriv, InvLink=InvLink, 
VarFun=VarFun,dInvLinkdEta=dInvLinkdEta,StdErr=StdErr,R.alpha.inv=R.alpha.inv,Y=Y,off=off,pi.a=pi.a,B=B,typeweights=typeweights) if(is.null(B)){ if(typeweights=="GENMOD"){ SEE<-crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% StdErr %*% scoreDiag.ps) }else{ SEE<-crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*% W %*% scoreDiag.ps) } }else{ if(typeweights=="GENMOD"){ SEE<-crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% sqrtW %*% StdErr %*% scoreDiag)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t)+ ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c) }else{ SEE<-crossprod( StdErr %*% dInvLinkdEta %*% X, R.alpha.inv %*% StdErr %*%W %*% scoreDiag)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*% X.t, R.alpha.inv %*% StdErr.t %*% scoreDiag.t)+ ((1-pi.a))*crossprod( StdErr.c %*% dInvLinkdEta.c %*% X.c, R.alpha.inv %*% StdErr.c %*% scoreDiag.c) } } stackEE<-rbind(as.matrix(SEE),as.matrix(SW)) stackEE<-replace(stackEE, is.na(stackEE), 0) stackEEprod<-stackEE%*%BlockDiag%*%t(stackEE) sandadjvar<-t(ginv(jacobian.nuisance))%*%stackEEprod%*%ginv(jacobian.nuisance) } if(print.log){ print("Nuisance-adjusted sandwich") print(sandadjvar) } sandadjvar<-sandadjvar[1: dim(X)[2],1: dim(X)[2]] return(list(jacobian.nuisance=jacobian.nuisance,sandadjvar=sandadjvar)) } ### Function computing the join score equation (EE,PS,OM) depending on main and nuisance parameters values funcoptvarnuisanceOMPS <- function(nuisance,sqrtW, design.weights,design.om.trt.all,design.om.ctrl.all,design.om.trt,design.om.ctrl, nameTRT,nameY,nameMISS, data,X,X.c,X.t,InvLinkDeriv, InvLink, VarFun,dInvLinkdEta,StdErr,R.alpha.inv,Y,off,pi.a,B,typeweights){ etaEE<-nuisance[1:ncol(X)]+1 etaW<-nuisance[(ncol(X)+1):(ncol(X)+ncol(design.weights))] etaBtrt<-nuisance[(ncol(X)+ncol(design.weights)+1):(ncol(X)+ncol(design.weights)+ncol(design.om.trt.all))] etaBctrl<-nuisance[(ncol(X)+ncol(design.weights)+ncol(design.om.trt.all)+1):(ncol(X)+ncol(design.weights)+ncol(design.om.trt.all)+ncol(design.om.ctrl.all))] eta <- as.vector(X%*%etaEE) + off diag(dInvLinkdEta) <- InvLinkDeriv(eta) mu <- InvLink(eta) diag(StdErr) <- sqrt(1/VarFun(mu)) nn<-length(Y) StdErr.c <- Diagonal(nn) dInvLinkdEta.c <- Diagonal(nn) eta.c <- as.vector(X.c%*%etaEE) + off diag(dInvLinkdEta.c) <- InvLinkDeriv(eta.c) mu.c <- InvLink(eta.c) diag(StdErr.c) <- sqrt(1/VarFun(mu.c)) nn<-length(Y) StdErr.t <- Diagonal(nn) dInvLinkdEta.t <- Diagonal(nn) eta.t <- as.vector(X.t%*%etaEE) + off diag(dInvLinkdEta.t) <- InvLinkDeriv(eta.t) mu.t <- InvLink(eta.t) diag(StdErr.t) <- sqrt(1/VarFun(mu.t)) temp<-sqrt(as.numeric(diag(sqrtW)>0)/(exp((design.weights%*%etaW))/(1+exp((design.weights%*%etaW)))) ) temp<-ifelse(is.na(temp),0,temp) sqrtW.temp<-Diagonal(x= temp) temp<-(((1-data[,nameMISS])-(exp((design.weights%*%etaW))/(1+exp((design.weights%*%etaW)))))) SW<-as.vector(t(design.weights)%*%ifelse(is.na(temp),0,temp)) B.temp<-as.data.frame(InvLink(as.matrix(design.om.trt.all)%*%etaBtrt)) names(B.temp)<-c("B.t") B.temp[,"B.c"]<-as.data.frame(InvLink(as.matrix(design.om.ctrl.all)%*%etaBctrl)) temp<-as.data.frame(cbind(as.data.frame(X)[,colnames(as.data.frame(X))==nameTRT],B.temp[,"B.c"],B.temp[,"B.t"])) names(temp)<-c(nameTRT,"B.c","B.t") Bi.temp<-ifelse(temp[,nameTRT]==1,temp[,"B.t"],temp[,"B.c"]) B.temp<-as.data.frame(cbind(temp[,"B.c"],temp[,"B.t"],Bi.temp)) names(B.temp)<-c("B.c","B.t","Bi") 
predtrt<-as.data.frame(InvLink(as.matrix(design.om.trt)%*%etaBtrt)) predctrl<-as.data.frame(InvLink(as.matrix(design.om.ctrl)%*%etaBctrl)) SB.trt<-as.vector(t(design.om.trt)%*%as.matrix((data[which((!is.na(data[,nameY]))&(data[,nameTRT]==1)),nameY]-predtrt))) SB.ctrl<-as.vector(t(design.om.ctrl)%*%as.matrix((data[which((!is.na(data[,nameY]))&(data[,nameTRT]==0)),nameY]-predctrl))) if(typeweights=="GENMOD"){ scoreEE<-as.vector(crossprod(sqrtW.temp %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW.temp %*% StdErr %*% (Y - B.temp[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% (B.temp[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% (B.temp[,"B.t"]-mu.t))) }else{ scoreEE<-as.vector(crossprod( StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% StdErr %*% sqrtW.temp %*% sqrtW.temp %*% (Y - B.temp[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% (B.temp[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% (B.temp[,"B.t"]-mu.t))) } funcoptvarnuisance<-c(scoreEE,SW,SB.trt,SB.ctrl) } ### Function computing the join score equation (EE,OM) depending on main and nuisance parameters values funcoptvarnuisanceOM <- function(nuisance,sqrtW, design.weights,design.om.trt.all,design.om.ctrl.all,design.om.trt,design.om.ctrl, nameTRT,nameY,nameMISS, data,X,X.c,X.t,InvLinkDeriv, InvLink, VarFun,dInvLinkdEta,StdErr,R.alpha.inv,Y,off,pi.a,B,typeweights){ etaEE<-nuisance[1:ncol(X)]+1 etaBtrt<-nuisance[(ncol(X)+1):(ncol(X)+ncol(design.om.trt.all))] etaBctrl<-nuisance[(ncol(X)+ncol(design.om.trt.all)+1):(ncol(X)+ncol(design.om.trt.all)+ncol(design.om.ctrl.all))] eta <- as.vector(X%*%etaEE) + off diag(dInvLinkdEta) <- InvLinkDeriv(eta) mu <- InvLink(eta) diag(StdErr) <- sqrt(1/VarFun(mu)) nn<-length(Y) StdErr.c <- Diagonal(nn) dInvLinkdEta.c <- Diagonal(nn) eta.c <- as.vector(X.c%*%etaEE) + off diag(dInvLinkdEta.c) <- InvLinkDeriv(eta.c) mu.c <- InvLink(eta.c) diag(StdErr.c) <- sqrt(1/VarFun(mu.c)) nn<-length(Y) StdErr.t <- Diagonal(nn) dInvLinkdEta.t <- Diagonal(nn) eta.t <- as.vector(X.t%*%etaEE) + off diag(dInvLinkdEta.t) <- InvLinkDeriv(eta.t) mu.t <- InvLink(eta.t) diag(StdErr.t) <- sqrt(1/VarFun(mu.t)) B.temp<-as.data.frame(InvLink(as.matrix(design.om.trt.all)%*%etaBtrt)) names(B.temp)<-c("B.t") B.temp[,"B.c"]<-as.data.frame(InvLink(as.matrix(design.om.ctrl.all)%*%etaBctrl)) temp<-as.data.frame(cbind(as.data.frame(X)[,colnames(as.data.frame(X))==nameTRT],B.temp[,"B.c"],B.temp[,"B.t"])) names(temp)<-c(nameTRT,"B.c","B.t") Bi.temp<-ifelse(temp[,nameTRT]==1,temp[,"B.t"],temp[,"B.c"]) B.temp<-as.data.frame(cbind(temp[,"B.c"],temp[,"B.t"],Bi.temp)) names(B.temp)<-c("B.c","B.t","Bi") predtrt<-as.data.frame(InvLink(as.matrix(design.om.trt)%*%etaBtrt)) predctrl<-as.data.frame(InvLink(as.matrix(design.om.ctrl)%*%etaBctrl)) SB.trt<-as.vector(t(design.om.trt)%*%as.matrix((data[which((!is.na(data[,nameY]))&(data[,nameTRT]==1)),nameY]-predtrt))) SB.ctrl<-as.vector(t(design.om.ctrl)%*%as.matrix((data[which((!is.na(data[,nameY]))&(data[,nameTRT]==0)),nameY]-predctrl))) if(!is.null(typeweights)){ if(typeweights=="GENMOD"){ scoreEE<-as.vector(crossprod(sqrtW %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% StdErr %*% (Y - B.temp[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% (B.temp[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% 
(B.temp[,"B.t"]-mu.t))) }else{ scoreEE<-as.vector(crossprod(StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% sqrtW %*% StdErr %*% (Y - B.temp[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% (B.temp[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% (B.temp[,"B.t"]-mu.t))) } }else{ scoreEE<-as.vector(crossprod(StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% StdErr %*% (Y - B.temp[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% (B.temp[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% (B.temp[,"B.t"]-mu.t))) } funcoptvarnuisance<-c(scoreEE,SB.trt,SB.ctrl) } ### Function computing the join score equation (EE,PS) depending on main and nuisance parameters values funcoptvarnuisancePS <- function(nuisance,sqrtW, design.weights,design.om.trt.all,design.om.ctrl.all,design.om.trt,design.om.ctrl, nameTRT,nameY,nameMISS, data,X,X.c,X.t,InvLinkDeriv, InvLink, VarFun,dInvLinkdEta,StdErr,R.alpha.inv,Y,off,pi.a,B,typeweights){ etaEE<-nuisance[1:ncol(X)]+1 etaW<-nuisance[(ncol(X)+1):(ncol(X)+ncol(design.weights))] eta <- as.vector(X%*%etaEE) + off diag(dInvLinkdEta) <- InvLinkDeriv(eta) mu <- InvLink(eta) diag(StdErr) <- sqrt(1/VarFun(mu)) nn<-length(Y) StdErr.c <- Diagonal(nn) dInvLinkdEta.c <- Diagonal(nn) eta.c <- as.vector(X.c%*%etaEE) + off diag(dInvLinkdEta.c) <- InvLinkDeriv(eta.c) mu.c <- InvLink(eta.c) diag(StdErr.c) <- sqrt(1/VarFun(mu.c)) nn<-length(Y) StdErr.t <- Diagonal(nn) dInvLinkdEta.t <- Diagonal(nn) eta.t <- as.vector(X.t%*%etaEE) + off diag(dInvLinkdEta.t) <- InvLinkDeriv(eta.t) mu.t <- InvLink(eta.t) diag(StdErr.t) <- sqrt(1/VarFun(mu.t)) temp<-sqrt(as.numeric(diag(sqrtW)>0)/(exp((design.weights%*%etaW))/(1+exp((design.weights%*%etaW)))) ) temp<-ifelse(is.na(temp),0,temp) sqrtW.temp<-Diagonal(x= temp) temp<-(((1-data[,nameMISS])-(exp((design.weights%*%etaW))/(1+exp((design.weights%*%etaW)))))) SW<-as.vector(t(design.weights)%*%ifelse(is.na(temp),0,temp)) if(is.null(B)){ if(typeweights=="GENMOD"){ scoreEE<-as.vector(crossprod(sqrtW %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% StdErr %*% (Y - mu))) }else{ scoreEE<-as.vector(crossprod(StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% sqrtW %*% StdErr %*% (Y - mu))) } }else{ if(typeweights=="GENMOD"){ scoreEE<-as.vector(crossprod(sqrtW.temp %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW.temp %*% StdErr %*% (Y - B[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% (B[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% (B[,"B.t"]-mu.t))) }else{ scoreEE<-as.vector(crossprod(StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW.temp%*% sqrtW.temp %*% StdErr %*% (Y - B[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% (B[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% (B[,"B.t"]-mu.t))) } } funcoptvarnuisance<-c(scoreEE,SW) }
## ---- end of file: CRTgeeDR/R/getSandwichNuisance.R ----
#Exactly the file 'numDeriv' included in the package 'numDeriv', #authored by Paul Gilbert and Ravi Varadhan #under the GPL-2 license. ############################################################################ # functions for gradient calculation ############################################################################ grad <- function (func, x, method="Richardson", side=NULL, method.args=list(), ...) UseMethod("grad") grad.default <- function(func, x, method="Richardson", side=NULL, method.args=list(), ...){ # modified by Paul Gilbert from code by Xingqiao Liu. # case 1/ scalar arg, scalar result (case 2/ or 3/ code should work) # case 2/ vector arg, scalar result (same as special case jacobian) # case 3/ vector arg, vector result (of same length, really 1/ applied multiple times)) f <- func(x, ...) n <- length(x) #number of variables in argument if (is.null(side)) side <- rep(NA, n) else { if(n != length(side)) stop("Non-NULL argument 'side' should have the same length as x") if(any(1 != abs(side[!is.na(side)]))) stop("Non-NULL argument 'side' should have values NA, +1, or -1.") } case1or3 <- n == length(f) if((1 != length(f)) & !case1or3) stop("grad assumes a scalar valued function.") if(method=="simple"){ # very simple numerical approximation args <- list(eps=1e-4) # default args[names(method.args)] <- method.args side[is.na(side)] <- 1 eps <- rep(args$eps, n) * side if(case1or3) return((func(x+eps, ...)-f)/eps) # now case 2 df <- rep(NA,n) for (i in 1:n) { dx <- x dx[i] <- dx[i] + eps[i] df[i] <- (func(dx, ...) - f)/eps[i] } return(df) } else if(method=="complex"){ # Complex step gradient if (any(!is.na(side))) stop("method 'complex' does not support non-NULL argument 'side'.") eps <- .Machine$double.eps v <- try(func(x + eps * 1i, ...)) if(inherits(v, "try-error")) stop("function does not accept complex argument as required by method 'complex'.") if(!is.complex(v)) stop("function does not return a complex value as required by method 'complex'.") if(case1or3) return(Im(v)/eps) # now case 2 h0 <- rep(0, n) g <- rep(NA, n) for (i in 1:n) { h0[i] <- eps * 1i g[i] <- Im(func(x+h0, ...))/eps h0[i] <- 0 } return(g) } else if(method=="Richardson"){ args <- list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE) # default args[names(method.args)] <- method.args d <- args$d r <- args$r v <- args$v show.details <- args$show.details a <- matrix(NA, r, n) #b <- matrix(NA, (r - 1), n) # first order derivatives are stored in the matrix a[k,i], # where the indexing variables k for rows(1 to r), i for columns (1 to n), # r is the number of iterations, and n is the number of variables. h <- abs(d*x) + args$eps * (abs(x) < args$zero.tol) pna <- (side == 1) & !is.na(side) # double these on plus side mna <- (side == -1) & !is.na(side) # double these on minus side for(k in 1:r) { # successively reduce h ph <- mh <- h ph[pna] <- 2 * ph[pna] ph[mna] <- 0 mh[mna] <- 2 * mh[mna] mh[pna] <- 0 if(case1or3) a[k,] <- (func(x + ph, ...) - func(x - mh, ...))/(2*h) else for(i in 1:n) { if((k != 1) && (abs(a[(k-1),i]) < 1e-20)) a[k,i] <- 0 #some func are unstable near zero else a[k,i] <- (func(x + ph*(i==seq(n)), ...) - func(x - mh*(i==seq(n)), ...))/(2*h[i]) } if (any(is.na(a[k,]))) stop("function returns NA at ", h," distance from x.") h <- h/v # Reduced h by 1/v. 
} if(show.details) { cat("\n","first order approximations", "\n") print(a, 12) } #------------------------------------------------------------------------ # 1 Applying Richardson Extrapolation to improve the accuracy of # the first and second order derivatives. The algorithm as follows: # # -- For each column of the derivative matrix a, # say, A1, A2, ..., Ar, by Richardson Extrapolation, to calculate a # new sequence of approximations B1, B2, ..., Br used the formula # # B(i) =( A(i+1)*4^m - A(i) ) / (4^m - 1) , i=1,2,...,r-m # # N.B. This formula assumes v=2. # # -- Initially m is taken as 1 and then the process is repeated # restarting with the latest improved values and increasing the # value of m by one each until m equals r-1 # # 2 Display the improved derivatives for each # m from 1 to r-1 if the argument show.details=T. # # 3 Return the final improved derivative vector. #------------------------------------------------------------------------- for(m in 1:(r - 1)) { a <- (a[2:(r+1-m),,drop=FALSE]*(4^m)-a[1:(r-m),,drop=FALSE])/(4^m-1) if(show.details & m!=(r-1) ) { cat("\n","Richarson improvement group No. ", m, "\n") print(a[1:(r-m),,drop=FALSE], 12) } } return(c(a)) } else stop("indicated method ", method, "not supported.") } jacobian <- function (func, x, method="Richardson", side=NULL, method.args=list(), ...) UseMethod("jacobian") jacobian.default <- function(func, x, method="Richardson", side=NULL, method.args=list(), ...){ f <- func(x, ...) n <- length(x) #number of variables. if (is.null(side)) side <- rep(NA, n) else { if(n != length(side)) stop("Non-NULL argument 'side' should have the same length as x") if(any(1 != abs(side[!is.na(side)]))) stop("Non-NULL argument 'side' should have values NA, +1, or -1.") } if(method=="simple"){ # very simple numerical approximation args <- list(eps=1e-4) # default args[names(method.args)] <- method.args side[is.na(side)] <- 1 eps <- rep(args$eps, n) * side df <-matrix(NA, length(f), n) for (i in 1:n) { dx <- x dx[i] <- dx[i] + eps[i] df[,i] <- (func(dx, ...) - f)/eps[i] } return(df) } else if(method=="complex"){ # Complex step gradient if (any(!is.na(side))) stop("method 'complex' does not support non-NULL argument 'side'.") # Complex step Jacobian eps <- .Machine$double.eps h0 <- rep(0, n) h0[1] <- eps * 1i v <- try(func(x+h0, ...)) if(inherits(v, "try-error")) stop("function does not accept complex argument as required by method 'complex'.") if(!is.complex(v)) stop("function does not return a complex value as required by method 'complex'.") h0[1] <- 0 jac <- matrix(NA, length(v), n) jac[, 1] <- Im(v)/eps if (n == 1) return(jac) for (i in 2:n) { h0[i] <- eps * 1i jac[, i] <- Im(func(x+h0, ...))/eps h0[i] <- 0 } return(jac) } else if(method=="Richardson"){ args <- list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE) # default args[names(method.args)] <- method.args d <- args$d r <- args$r v <- args$v a <- array(NA, c(length(f),r, n) ) h <- abs(d*x) + args$eps * (abs(x) < args$zero.tol) pna <- (side == 1) & !is.na(side) # double these on plus side mna <- (side == -1) & !is.na(side) # double these on minus side for(k in 1:r) { # successively reduce h ph <- mh <- h ph[pna] <- 2 * ph[pna] ph[mna] <- 0 mh[mna] <- 2 * mh[mna] mh[pna] <- 0 for(i in 1:n) { a[,k,i] <- (func(x + ph*(i==seq(n)), ...) - func(x - mh*(i==seq(n)), ...))/(2*h[i]) #if((k != 1)) a[,(abs(a[,(k-1),i]) < 1e-20)] <- 0 #some func are unstable near zero } h <- h/v # Reduced h by 1/v. 
} for(m in 1:(r - 1)) { a <- (a[,2:(r+1-m),,drop=FALSE]*(4^m)-a[,1:(r-m),,drop=FALSE])/(4^m-1) } # drop second dim of a, which is now 1 (but not other dim's even if they are 1 return(array(a, dim(a)[c(1,3)])) } else stop("indicated method ", method, "not supported.") }
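## Editor's sketch (not part of the package): the local copies of grad() and
## jacobian() behave like their numDeriv originals.  For f(x) = sum(x^2) the
## gradient is 2x, and for f(x) = c(x1*x2, x1+x2) the Jacobian is
## rbind(c(x2, x1), c(1, 1)).
if (FALSE) {
  grad(function(x) sum(x^2), c(1, 2, 3))                     # ~ c(2, 4, 6)
  jacobian(function(x) c(x[1] * x[2], x[1] + x[2]), c(2, 5)) # ~ rbind(c(5, 2), c(1, 1))
}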
## ---- end of file: CRTgeeDR/R/numDeriv.R ----
### summary function for CRTgeeDR object. #' Summarizing CRTgeeDR object. #' #' Summary CRTgeeDR object #' #' @param object CRTgeeDR object #' @param ... ignored #' #' @aliases summary.CRTgeeDR summary #' @method summary CRTgeeDR #' @export summary.CRTgeeDR <- function(object, ...) { Coefs <- matrix(0,nrow=length(object$beta),ncol=5) Coefs[,1] <- c(object$beta) naive <- is.character(object$var) Coefs[,2] <- sqrt(diag(object$var.naiv)) if(naive){Coefs[,3] <- rep(0, length(object$beta))}else{Coefs[,3] <- sqrt(diag(object$var))} if(naive){Coefs[,4] <- Coefs[,1]/Coefs[,2]}else{Coefs[,4] <- Coefs[,1]/Coefs[,3]} Coefs[,5] <- round(2*pnorm(abs(Coefs[,4]), lower.tail=F), digits=8) colnames(Coefs) <- c("Estimates","Naive SE","Robust SE", "wald", "p") summ <- list(beta = Coefs[,1], se.model = Coefs[,2], se.robust = Coefs[,3], wald.test = Coefs[,4], p = Coefs[,5], alpha = object$alpha, corr = object$corr, phi = object$phi, niter = object$niter, clusz = object$clusz, coefnames = object$coefnames, weights=object$weights) class(summ) <- 'summary.CRTgeeDR' return(summ) } ### summary function for CRTgeeDR object. #' Print the summarizing CRTgeeDR object. #' #' Print Summary CRTgeeDR object #' #' @param x summary.CRTgeeDR x #' @param ... ignored #' #' @aliases print.summary.CRTgeeDR print.summary #' @method print summary.CRTgeeDR #' @export print.summary.CRTgeeDR <- function(x, ...){ Coefs <- matrix(0,nrow=length(x$coefnames),ncol=5) rownames(Coefs) <- c(x$coefnames) colnames(Coefs) <- c("Estimates","Model SE","Robust SE", "wald", "p") Coefs[,1] <- x$beta Coefs[,2] <- x$se.model Coefs[,3] <- x$se.robust Coefs[,4] <- x$wald.test Coefs[,5] <- x$p #print("Call: ", object$call, "\n") print(signif(Coefs, digits=4)) cat("\n Est. Correlation: ", signif(x$alpha, digits=4), "\n") cat(" Correlation Structure: ", x$corr, "\n") cat(" Est. Scale Parameter: ", signif(x$phi, digits=4), "\n") cat("\n Number of GEE iterations:", x$niter, "\n") cat(" Number of Clusters: ", length(x$clusz), " Maximum Cluster Size: ", max(x$clusz), "\n") cat(" Number of observations with nonzero weight: ", sum(x$weights != 0), "\n") } #' Prints CRTgeeDR object. #' #' Prints CRTgeeDR object #' #' @param x CRTgeeDR x #' @param ... ignored #' #' @aliases print.CRTgeeDR print #' @method print CRTgeeDR #' @export ### print function for CRTgeeDR object print.CRTgeeDR <- function(x, ...){ coefdf <- signif(data.frame(x$beta), digits=4) rownames(coefdf) <- x$coefnames colnames(coefdf) <- "" print(x$call) cat("\n", "Coefficients:", "\n") print(t(coefdf)) cat("\n Scale Parameter: ", signif(x$phi, digits=4), "\n") cat("\n Correlation Model: ", x$corr) cat("\n Estimated Correlation Parameters: ", signif(x$alpha, digits=4), "\n") cat("\n Number of clusters: ", length(x$clusz), " Maximum cluster size: ", max(x$clusz), "\n") cat(" Number of observations with nonzero weight: ", sum(x$weights != 0), "\n") } #' Get Mean, Sd and CI for estimates from CRTgeeDR object. #' #' Get the estimates, standard deviations and confidence intervals from an CRTgeeDR object associated with a regressor given in argument. #' #' @param object CRTgeeDR #' @param nameTRT, character including the name of the variable of interest (often the treatment) #' @param quantile, value of the normal quantile for the IC. default is 1.96 for 95\%CI. 
#' @export #' getCI <-function(object,nameTRT="TRT",quantile=1.96){ stats.summary <- c() stats.summary<-c(stats.summary,object$beta[which(object$coefnames==nameTRT)]) stats.summary<-c(stats.summary,sqrt(object$var.naiv[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)])) stats.summary<-c(stats.summary,object$beta[which(object$coefnames==nameTRT)]-quantile*sqrt(object$var.naiv[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)])) stats.summary<-c(stats.summary,object$beta[which(object$coefnames==nameTRT)]+quantile*sqrt(object$var.naiv[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)])) stats.summary<-c(stats.summary,ifelse(is.null(object$var),NA,sqrt(object$var[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,ifelse(is.null(object$var),NA,object$beta[which(object$coefnames==nameTRT)]-quantile*sqrt(object$var[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,ifelse(is.null(object$var),NA,object$beta[which(object$coefnames==nameTRT)]+quantile*sqrt(object$var[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,ifelse(is.null(object$var.nuisance),NA,sqrt(object$var.nuisance[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,ifelse(is.null(object$var.nuisance),NA,object$beta[which(object$coefnames==nameTRT)]-quantile*sqrt(object$var.nuisance[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,ifelse(is.null(object$var.nuisance),NA,object$beta[which(object$coefnames==nameTRT)]+quantile*sqrt(object$var.nuisance[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,ifelse(is.null(object$var.fay),NA,sqrt(object$var.fay[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,ifelse(is.null(object$var.fay),NA,object$beta[which(object$coefnames==nameTRT)]-quantile*sqrt(object$var.fay[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,ifelse(is.null(object$var.fay),NA,object$beta[which(object$coefnames==nameTRT)]+quantile*sqrt(object$var.fay[which(object$coefnames==nameTRT),which(object$coefnames==nameTRT)]))) stats.summary<-c(stats.summary,object$converged) stats.summary<-t(as.matrix(stats.summary)) colnames(stats.summary)<-c("Estimate","Naive SD","CI naive min","CI naive max","Sandwich SD","CI Sandwich min","CI Sandwich max","Nuisance-adj SD","CI Nuisance-adj min","CI Nuisance-adj max","Fay-adj SD","CI Fay-adj min","CI Fay-adj max","Convergence Status") rownames(stats.summary)<-c(nameTRT) return(stats.summary) } #' Get the histogram of weights for IPW and adequation for the glm weights model #' #' Get the histogram and some basic statistics for the weights used in the IPW part. #' #' @param object CRTgeeDR #' @param save, logical if TRUE the plot is saved as a pdf in the current directory #' @param name, name of the plot saved as pdf #' @param typeplot, integer indicating which is the adequation diagnostic plot for the PS. Default is NULL no output. '0', all available in plot.glm are displayed, '1' Residuals vs Fitted, '2' Normal Q-Q, #' '3' Scale-Location, '4' Cook's distance, '5' Residuals vs Leverage and '6' Cook's dist vs Leverage* h[ii] / (1 - h[ii]) #' @export #' getPSPlot <-function(object,save=FALSE,name="plotPS",typeplot=NULL){ ..count.. 
<- NULL if(save){pdf(paste(name,".pdf",sep=""))} toplot<-as.data.frame(object$weights) names(toplot)<-"weights" m <- ggplot(toplot, aes(x=weights)) plot<-m + geom_histogram(aes(y=..count..,fill=..count..),binwidth=0.1)+ ggtitle(paste("Summary: Q1=",round(quantile(object$used.weights,0.25),2)," Q2=",round(quantile(object$used.weights,0.5),2)," Q3=",round(quantile(object$used.weights,0.75),2)," max=",round(max(object$used.weights),2), sep=' ')) plot(plot) if(!is.null(object$ps.model)){ if(!is.null(typeplot)){ if(typeplot==0){ for (i in 1:6) { plot(object$ps.model,i) } }else{ plot(object$ps.model,typeplot) } } }else{ stop("Adequation plots not available. PS had not been computed internally") } if(save){dev.off()} toplot<-as.data.frame(object$weights) names(toplot)<-"weights" m <- ggplot(toplot, aes(x=weights)) plot<-m + geom_histogram(aes(y=..count..,fill=..count..),binwidth=0.1)+ ggtitle(paste("Summary: Q1=",round(quantile(object$used.weights,0.25),2)," Q2=",round(quantile(object$used.weights,0.5),2)," Q3=",round(quantile(object$used.weights,0.75),2)," max=",round(max(object$used.weights),2), sep=' ')) plot(plot) if(!is.null(object$ps.model)){ if(!is.null(typeplot)){ if(typeplot==0){ for (i in 1:6) { plot(object$ps.model,i) } }else{ plot(object$ps.model,typeplot) } } }else{ stop("Adequation plots not available. PS had not been computed internally") } } #' Get the observed vs fitted residuals #' #' Get the histogram and some basic statistics for the weights used in the IPW part. #' #' @param object CRTgeeDR #' @param save, logical if TRUE the plot is saved as a pdf in the current directory #' @param name, name of the plot saved as pdf #' @param typeplot, integer indicating which is the adequation diagnostic plot for the PS. '0', all available in plot.glm are displayed, '1' Residuals vs Fitted, '2' Normal Q-Q, #' '3' Scale-Location, '4' Cook's distance, '5' Residuals vs Leverage and '6' Cook's dist vs Leverage* h[ii] / (1 - h[ii]) #' @export #' getOMPlot <-function(object,save=FALSE,name="plotOM",typeplot=0){ if(is.null(object$call$model.augmentation.trt)&(is.null(object$call$aug))){ stop("The object given as argument is not the result of an AUGMENTED analysis") } else{ if(!is.null(object$call$model.augmentation.trt)){ if(typeplot==0){ if(save){pdf(paste(name,"_trt.pdf",sep=""))} print("For the treated group:") plot(object$om.model.trt,1:6) if(save){dev.off()} if(save){pdf(paste(name,"_ctrl.pdf",sep=""))} print("For the control group:") plot(object$om.model.ctrl,1:6) if(save){dev.off()} }else{ print("For the treated group:") if(save){pdf(paste(name,"_trt.pdf",sep=""))} plot(object$om.model.trt,typeplot) if(save){dev.off()} if(save){pdf(paste(name,"_ctrl.pdf",sep=""))} print("For the control group:") plot(object$om.model.ctrl,typeplot) if(save){dev.off()} } }else{ stop("Adequation plots not available. OM had not been computed internally") } } }
## ---- end of file: CRTgeeDR/R/print.R ----
#Exactly the function 'updateAlphaUser' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Update the alpha (possibly) vector for the USERDEFINED correlation matrix. updateAlphaUser <- function(YY, mu, phi, id, len, StdErr, Resid, p, BlockDiag, row.vec, col.vec, corr.list, included, includedlen, allobs){ Resid <- StdErr %*% included %*% Diagonal(x = YY - mu) ml <- max(len) BlockDiag <- Resid %*% BlockDiag %*% Resid alpha.new <- vector("numeric", length(corr.list)) index <- cumsum(len)-len for(i in 1:length(alpha.new)){ newrow <- NULL newcol <- NULL for(j in 1:length(corr.list[[i]])){ newrow <- c(newrow, index[which(len >= col.vec[corr.list[[i]]][j])] + row.vec[corr.list[[i]][j]]) newcol <- c(newcol, index[which(len >= col.vec[corr.list[[i]]][j])] + col.vec[corr.list[[i]][j]]) } bdtmp <- BlockDiag[cbind(newrow, newcol)] if(allobs){ denom <- phi*(length(newrow) - p) }else{denom <- phi*(sum(bdtmp!=0)-p)} alpha.new[i] <- sum(bdtmp)/denom } return(alpha.new) } #Exactly the function 'updateAlphaEX' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Calculate the parameter for the EXCHANGEABLE correlation structure updateAlphaEX <- function(Y, mu, VarFun, phi, id, len, StdErr, Resid, p, BlockDiag, included, includedlen){ Resid <- StdErr %*% included %*% Diagonal(x = Y - mu) BlockDiag <- Resid %*% BlockDiag %*% Resid denom <- phi*(crossprod(includedlen, pmax(includedlen-1, 0))/2 - p) alpha <- (sum(BlockDiag) - phi*(sum(includedlen)-p))/2 alpha.new <- alpha/denom return(alpha.new) } ### Calculate the parameters for the M-DEPENDENT correlation structure updateAlphaMDEP <- function(YY, mu, VarFun, phi, id, len, StdErr, Resid, p, BlockDiag, m, included, includedlen, allobs){ Resid <- StdErr %*% included %*% Diagonal(x = YY - mu) BlockDiag <- Resid %*% BlockDiag %*% Resid alpha.new <- vector("numeric", m) for(i in 1:m){ if(sum(includedlen>i) > p){ bandmat <- drop0(band(BlockDiag, i,i)) if(allobs){alpha.new[i] <- sum(bandmat)/(phi*(sum(as.numeric(len>i)*(len-i))-p)) }else{alpha.new[i] <- sum( bandmat)/(phi*(length(bandmat@i)-p))} }else{ # If we don't have many observations for a certain parameter, don't divide by p # ensures we don't have NaN errors. bandmat <- drop0(band(BlockDiag, i,i)) if(allobs){alpha.new[i] <- sum(bandmat)/(phi*(sum(as.numeric(len>i)*(len-i)))) }else{alpha.new[i] <- sum( bandmat)/(phi*length(bandmat@i))} } } return(alpha.new) } #Exactly the function 'updateAlphaAR' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. 
### Calculate the parameter for the AR-1 correlation, also used for 1-DEPENDENT updateAlphaAR <- function(YY, mu, VarFun, phi, id, len, StdErr, p, included, includedlen, includedvec, allobs){ K <- length(len) oneobs <- which(len == 1) resid <- diag(StdErr %*% included %*% Diagonal(x = YY - mu)) len2 = len includedvec2 <- includedvec if(length(oneobs) > 0){ index <- c(0, (cumsum(len) -len)[2:K], sum(len)) len2 <- len[-oneobs] resid <- resid[-index[oneobs]] includedvec2 <- includedvec[-index[oneobs]] } nn <- length(resid) lastobs <- cumsum(len2) shiftresid1 <- resid[1:nn-1] shiftresid2 <- resid[2:nn] if(!allobs){ shiftresid1 <- shiftresid1[-lastobs] shiftresid2 <- shiftresid2[-lastobs] s1incvec2 <- includedvec2[1:nn-1] s2incvec2 <- includedvec2[2:nn] s1incvec2 <- s1incvec2[-lastobs] s2incvec2 <- s2incvec2[-lastobs] alphasum <- crossprod(shiftresid1, shiftresid2) denom <- (as.vector(crossprod(s1incvec2, s2incvec2)) - p)*phi }else{ alphasum <- crossprod(shiftresid1[-(cumsum(len2))], shiftresid2[-(cumsum(len2))]) denom <- (sum(len2-1) - p)*phi } alpha <- alphasum/denom return(as.numeric(alpha)) } #Exactly the function 'updateAlphaUnstruc' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Calculate alpha values for UNSTRUCTURED correlation updateAlphaUnstruc <- function(YY, mu, VarFun, phi, id, len, StdErr, Resid, p, BlockDiag, included, includedlen, allobs){ Resid <- StdErr %*% included %*% Diagonal(x = YY - mu) ml <- max(len) BlockDiag <- Resid %*% BlockDiag %*% Resid alpha.new <- vector("numeric", sum(1:(ml-1))) lalph <- length(alpha.new) row.vec <- NULL col.vec <- NULL for(i in 2:ml){ row.vec <- c(row.vec, 1:(i-1)) col.vec <- c(col.vec, rep(i, each=i-1)) } index <- cumsum(len)-len if(sum(includedlen == max(len)) <= p){stop("Number of clusters of largest size is less than p.")} for(i in 1:lalph){ # Get all of the indices of the matrix corresponding to the correlation # we want to estimate. newrow <- index[which(len>=col.vec[i])] + row.vec[i] newcol <- index[which(len>=col.vec[i])] + col.vec[i] bdtmp <- BlockDiag[cbind(newrow, newcol)] if(allobs){ denom <- (phi*(length(newrow)-p)) }else{denom <- (phi*(sum(bdtmp!=0)-p))} alpha.new[i] <- sum(bdtmp)/denom } return(alpha.new) }
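## Editor's note (illustrative sketch, not package code): the updaters above are
## all moment estimators built from pairwise products of Pearson residuals
## within clusters.  Ignoring the finite-sample correction for p, the
## exchangeable estimator is essentially
##   alpha = sum over clusters of sum_{j<k} r_j r_k / (phi * number of pairs).
## The toy code below computes that simplified version directly.
if (FALSE) {
  resid <- c(0.5, -0.2, 0.1,  1.0, -0.4)     # Pearson residuals, two clusters
  id    <- c(1, 1, 1,         2, 2)
  phi   <- 1
  pair_sums <- tapply(resid, id, function(r) (sum(r)^2 - sum(r^2)) / 2)
  n_pairs   <- tapply(resid, id, function(r) choose(length(r), 2))
  sum(pair_sums) / (phi * sum(n_pairs))      # simplified exchangeable alpha
}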
## ---- end of file: CRTgeeDR/R/updatealpha.R ----
#Adapted from the function 'updateBeta' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Method to update coefficients. Goes to a maximum of 10 iterations, or when ### rough convergence has been obtained. updateBeta = function(Y, X,X.t,X.c, B, beta, off, InvLinkDeriv, InvLink, VarFun, R.alpha.inv, StdErr, dInvLinkdEta, tol, sqrtW,W,included,typeweights,pi.a){ beta.new <- beta conv=F for(i in 1:10){ eta <- as.vector(X%*%beta.new) + off diag(dInvLinkdEta) <- InvLinkDeriv(eta) mu <- InvLink(eta) diag(StdErr) <- sqrt(1/VarFun(mu)) if(is.null(B)){ if(is.null(typeweights)){ hess <- crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*%X, R.alpha.inv %*% sqrtW %*% StdErr %*%dInvLinkdEta %*% X) esteq <- crossprod(sqrtW %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% StdErr %*% as.matrix(Y - mu)) }else{ if(typeweights=="GENMOD"){ hess <- crossprod(sqrtW %*% StdErr %*% dInvLinkdEta %*%X, R.alpha.inv %*% sqrtW %*% StdErr %*%dInvLinkdEta %*% X) esteq <- crossprod(sqrtW %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% StdErr %*% as.matrix(Y - mu)) } else{ hess <- crossprod( StdErr %*% dInvLinkdEta %*%X, R.alpha.inv %*% W %*% StdErr %*%dInvLinkdEta %*% X) esteq <- crossprod( StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% W %*% StdErr %*% as.matrix(Y - mu)) } } } else{ nn<-length(Y) StdErr.c <- Diagonal(nn) dInvLinkdEta.c <- Diagonal(nn) eta.c <- as.vector(X.c%*%beta.new) + off diag(dInvLinkdEta.c) <- InvLinkDeriv(eta.c) mu.c <- InvLink(eta.c) diag(StdErr.c) <- sqrt(1/VarFun(mu.c)) nn<-length(Y) StdErr.t <- Diagonal(nn) dInvLinkdEta.t <- Diagonal(nn) eta.t <- as.vector(X.t%*%beta.new) + off diag(dInvLinkdEta.t) <- InvLinkDeriv(eta.t) mu.t <- InvLink(eta.t) diag(StdErr.t) <- sqrt(1/VarFun(mu.t)) if(is.null(typeweights)){ hess <- (1-pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*%X.c, R.alpha.inv %*% StdErr.c %*%dInvLinkdEta.c %*% X.c)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*%X.t, R.alpha.inv %*% StdErr.t %*%dInvLinkdEta.t %*% X.t) esteq <- crossprod( StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% StdErr %*% as.matrix(Y - B[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% as.matrix(B[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% as.matrix(B[,"B.t"]-mu.t)) }else{ if(typeweights=="GENMOD"){ hess <- (1-pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*%X.c, R.alpha.inv %*% StdErr.c %*%dInvLinkdEta.c %*% X.c)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*%X.t, R.alpha.inv %*% StdErr.t %*%dInvLinkdEta.t %*% X.t) esteq <- crossprod(sqrtW %*% StdErr %*%dInvLinkdEta %*%X , R.alpha.inv %*% sqrtW %*% StdErr %*% as.matrix(Y - B[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% as.matrix(B[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% as.matrix(B[,"B.t"]-mu.t)) #print(hess) #print(esteq) }else{ hess <- (1-pi.a)*crossprod( StdErr.c %*% dInvLinkdEta.c %*%X.c, R.alpha.inv %*% StdErr.c %*%dInvLinkdEta.c %*% X.c)+ (pi.a)*crossprod( StdErr.t %*% dInvLinkdEta.t %*%X.t, R.alpha.inv %*% StdErr.t %*%dInvLinkdEta.t %*% X.t) esteq <- crossprod( StdErr %*%dInvLinkdEta %*%X ,R.alpha.inv%*% StdErr %*% W %*% as.matrix(Y - B[,"Bi"])) + (1-pi.a)*crossprod( StdErr.c %*%dInvLinkdEta.c%*%X.c , R.alpha.inv %*% StdErr.c %*% as.matrix(B[,"B.c"]-mu.c))+ (pi.a)*crossprod( StdErr.t %*%dInvLinkdEta.t %*%X.t , R.alpha.inv %*% StdErr.t %*% as.matrix(B[,"B.t"]-mu.t)) 
#print(hess) #print(esteq) } } } update <- solve(hess, esteq) #print(update) if(max(abs(update/beta.new)) < 100*tol){break} beta.new <- beta.new + as.vector(update) #print(beta.new) } return(list(beta = beta.new, hess = hess, esteq=esteq)) }
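## Editor's note (illustrative sketch, not package code): each pass of the loop
## above is a Fisher-scoring step,
##   beta <- beta + solve(hess, esteq),
## with 'esteq' the (possibly weighted / augmented) GEE score and 'hess' its
## expected derivative.  With an identity link, independence working correlation
## and no weights or augmentation, one such step is exactly ordinary least
## squares, which the toy check below illustrates.
if (FALSE) {
  set.seed(1)
  X <- cbind(1, rnorm(50))
  y <- drop(X %*% c(2, -1) + rnorm(50))
  beta  <- c(0, 0)                            # starting value
  hess  <- crossprod(X)                       # X'X  (identity link, V = I)
  esteq <- crossprod(X, y - X %*% beta)       # X'(y - mu)
  beta  <- beta + solve(hess, esteq)          # one scoring step
  cbind(drop(beta), coef(lm(y ~ X - 1)))      # the two columns should agree
}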
## ---- end of file: CRTgeeDR/R/updatebeta.R ----
#Exactly the function 'getAlphaInvAR' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Get the AR-1 inverse matrix getAlphaInvAR <- function(alpha.new, a1,a2,a3,a4, row.vec, col.vec){ corr.vec <- c(alpha.new*a1/(1-alpha.new^2) , ( (1+alpha.new^2)*a2 + a3)/(1-alpha.new^2) + a4, alpha.new*a1/(1-alpha.new^2)) return(as(sparseMatrix(i= row.vec, j=col.vec, x=corr.vec), "symmetricMatrix")) } #Exactly the function 'buildAlphaInvAR' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Get the necessary structures for the inverse matrix of AR-1 ### returns the row and column indices of the AR-1 inverse ### a1, a2, a3, and a4 are used to compute the entries in the matrix buildAlphaInvAR <- function(len){ nn <- sum(len) K <- length(len) a1 <- a2 <- a3 <- a4 <- vector("numeric", nn) index <- c(cumsum(len) - len, nn) for (i in 1:K) { if(len[i] > 1) { a1[(index[i]+1) : index[i+1]] <- c(rep(-1,times = len[i]-1),0) a2[(index[i]+1) : index[i+1]] <- c(0,rep(1,times=len[i]-2),0) a3[(index[i]+1) : index[i+1]] <- c(1,rep(0,times=len[i]-2),1) a4[(index[i]+1) : index[i+1]] <- c(rep(0,times=len[i])) } else if (len[i] == 1) { a1[(index[i]+1) : index[i+1]] <- 0 a2[(index[i]+1) : index[i+1]] <- 0 a3[(index[i]+1) : index[i+1]] <- 0 a4[(index[i]+1) : index[i+1]] <- 1 } } a1 <- a1[1:(nn-1)] subdiag.col <- 1:(nn-1) subdiag.row <- 2:nn row.vec <- c(subdiag.row, (1:nn), subdiag.col) col.vec <- c(subdiag.col, (1:nn), subdiag.row) return(list(row.vec = row.vec, col.vec= col.vec, a1 = a1, a2=a2, a3=a3, a4=a4)) } #Exactly the function 'getAlphaInvEX' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Returns the full inverse matrix of the correlation for EXCHANGEABLE structure getAlphaInvEX <- function(alpha.new, diag.vec, BlockDiag){ return(as(BlockDiag %*% Diagonal(x = (-alpha.new/((1-alpha.new)*(1+(diag.vec-1)*alpha.new)))) + Diagonal( x = ((1+(diag.vec-2)*alpha.new)/((1-alpha.new)*(1+(diag.vec-1)*alpha.new)) + alpha.new/((1-alpha.new)*(1+(diag.vec-1)*alpha.new)))), "symmetricMatrix")) } #Exactly the function 'getAlphaInvMDEP' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Get the inverse of M-DEPENDENT correlation matrix. getAlphaInvMDEP <- function(alpha.new, len, row.vec, col.vec){ K <- length(len) N <- sum(len) m <- length(alpha.new) # First get all of the unique block sizes. mat.sizes <- sort(unique(len)) corr.vec <- vector("numeric", sum(len^2)) mat.inverses <- list() index <- c(0, (cumsum(len^2) -len^2)[2:K], sum(len^2)) for(i in 1:length(mat.sizes)){ # Now create and invert each matrix if(mat.sizes[i] == 1){ mat.inverses[[i]] <- 1 }else{ mtmp <- min(m, mat.sizes[i]-1) a1 = list() a1[[1]] <- rep(1, mat.sizes[i]) for(j in 1:mtmp){ a1[[j+1]] <- rep(alpha.new[j], mat.sizes[i]-j) } tmp <- bandSparse(mat.sizes[i], k=c(0:mtmp),diagonals=a1, symmetric=T ) mat.inverses[[i]] <- as.vector(solve(tmp)) } } # Put all inverted matrices in a vector in the right order corr.vec <- unlist(mat.inverses[len - min(len) + 1]) return(as(sparseMatrix(i=row.vec, j=col.vec, x=corr.vec), "symmetricMatrix")) } #Exactly the function 'getAlphaInvUnstruc' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. 
### Get a vector of elements of the inverse correlation matrix for UNSTRUCTURED ### Inversion strategy follows the same basic idea as M-DEPENDENT getAlphaInvUnstruc <- function(alpha.new, len, row.vec, col.vec){ K <- length(len) unstr.row <- NULL unstr.col <- NULL ml <- max(len) sl2 <- sum(len^2) for(i in 2:ml){ unstr.row <- c(unstr.row, 1:(i-1)) unstr.col <- c(unstr.col, rep(i, each=i-1)) } unstr.row <- c(unstr.row, 1:ml) unstr.col <- c(unstr.col, 1:ml) xvec <- c(alpha.new, rep(1, ml)) # Get the biggest matrix implied by the cluster sizes biggestMat <- forceSymmetric(sparseMatrix(i=unstr.row, j=unstr.col, x=xvec)) mat.sizes <- sort(unique(len)) corr.vec <- vector("numeric", sl2) mat.inverses <- list() index <- vector("numeric", K+1) index[1] <- 0 index[2:K] <- (cumsum(len^2) -len^2)[2:K] index[K+1] <- sl2 for(i in 1:length(mat.sizes)){ tmp <- biggestMat[1:mat.sizes[i], 1:mat.sizes[i]] mat.inverses[[i]] <- as.vector(solve(tmp)) } corr.vec <- unlist(mat.inverses[len - min(len) + 1]) return(as(sparseMatrix(i=row.vec, j=col.vec, x=corr.vec), "symmetricMatrix")) } #Exactly the function 'getAlphaInvFixed' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Invert the FIXED correlation structure. Again, ### uses same basic technique as M-DEPENDENT getAlphaInvFixed <- function(mat, len){ K <- length(len) mat.sizes <- sort(unique(len)) mat.inverses <- list() sl2 <- sum(len^2) corr.vec <- vector("numeric", sl2) index <- vector("numeric", K+1) index[1] <- 0 index[2:K] <- (cumsum(len^2) -len^2)[2:K] index[K+1] <- sl2 for(i in 1:length(mat.sizes)){ tmp <- mat[1:mat.sizes[i], 1:mat.sizes[i]] mat.inverses[[i]] <- as.vector(solve(tmp)) } corr.vec <- unlist(mat.inverses[len - min(len) + 1]) return(as(getBlockDiag(len, corr.vec)$BDiag, "symmetricMatrix")) }
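## Editor's check (illustrative, not package code): the AR-1 functions above use
## the closed-form tridiagonal inverse of an AR(1) correlation matrix instead of
## a numerical inverse.  For one cluster of size 4 the closed form and solve()
## agree, as the base-R comparison below shows.
if (FALSE) {
  rho <- 0.3
  R   <- rho^abs(outer(1:4, 1:4, "-"))                     # AR(1) correlation matrix
  Rinv_closed <- (diag(c(1, 1 + rho^2, 1 + rho^2, 1)) -
                    rho * (abs(outer(1:4, 1:4, "-")) == 1)) / (1 - rho^2)
  max(abs(Rinv_closed - solve(R)))                         # ~ 0, numerical error only
}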
## ---- end of file: CRTgeeDR/R/updatematrices.R ----
#Exactly the function 'updatePhi' included in the package 'geeM',
#authored by Lee S. McDaniel and Nick Henderson
#under the GPL-2 license.

### Simple moment estimator of dispersion parameter
updatePhi <- function(Y, mu, VarFun, p, StdErr, included, includedlen){
  nn <- sum(includedlen)
  resid <- diag(StdErr %*% included %*% Diagonal(x = Y - mu))
  phi <- (1/(nn - p)) * crossprod(resid, resid)
  return(as.numeric(phi))
}
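## Editor's note (illustrative, not package code): this is the usual moment
## estimator of the dispersion, phi = sum(Pearson residuals^2) / (n - p).  For
## a toy Gaussian GLM it matches the dispersion reported by summary().
if (FALSE) {
  set.seed(2)
  x <- rnorm(30)
  y <- 1 + 2 * x + rnorm(30)
  fit <- glm(y ~ x)                                   # gaussian family
  sum(residuals(fit, type = "pearson")^2) / (30 - 2)  # equals summary(fit)$dispersion
}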
## ---- end of file: CRTgeeDR/R/updatephi.R ----
#Exactly the function 'getUserStructure' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Get the structure of the USERDEFINED correlation matrix implied by the ### corr.mat argument to CRTgeeDR. getUserStructure <- function(corr.mat){ ml <- dim(corr.mat)[1] row.vec <- NULL col.vec <- NULL for(i in 2:ml){ row.vec <- c(row.vec, 1:(i-1)) col.vec <- c(col.vec, rep(i, each=i-1)) } struct.vec <- corr.mat[cbind(row.vec, col.vec)] corr.list <- vector("list", max(struct.vec)) for(i in 1:max(struct.vec)){ corr.list[[i]] <- which(struct.vec == i) } return(list(corr.list = corr.list, row.vec = row.vec, col.vec = col.vec, struct.vec = struct.vec)) } #Exactly the function 'getAlphaInvUser' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Get the inverse correlation matrix for USERDEFINED. getAlphaInvUser <- function(alpha.new, len, struct.vec, user.row, user.col, row.vec, col.vec){ K <- length(len) ml <- max(len) sl2 <- sum(len^2) # Indices for the correlation matrix for the subject # with the most observations. user.row <- c(user.row, 1:ml) user.col <- c(user.col, 1:ml) # The entries of the biggest matrix xvec <- rep.int(0, length(struct.vec)) for(i in 1:length(alpha.new)){ xvec[which(struct.vec == i)] <- alpha.new[i] } xvec <- c(xvec, rep(1, ml)) biggestMat <- forceSymmetric(sparseMatrix(i=user.row, j=user.col, x=xvec)) mat.sizes <- sort(unique(len)) corr.vec <- vector("numeric", sl2) mat.inverses <- list() for(i in 1:length(mat.sizes)){ tmp <- biggestMat[1:mat.sizes[i], 1:mat.sizes[i]] mat.inverses[[i]] <- as.vector(solve(tmp)) } corr.vec <- unlist(mat.inverses[len - min(len) + 1]) return(as(sparseMatrix(i=row.vec, j=col.vec, x=corr.vec), "symmetricMatrix")) } #Exactly the function 'checkUserMat' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Check some conditions on the USERDEFINED correlation structure supplied. checkUserMat <- function(corr.mat, len){ if(is.null(corr.mat)){ stop("corr.mat must be specified if using user defined correlation structure") } if(dim(corr.mat)[1] < max(len)){ stop("corr.mat needs to be at least as long as the maximum cluster size.") } test.vec <- as.vector(corr.mat) if(any(abs(test.vec-round(test.vec)) > .Machine$double.eps )){ stop("entries in corr.mat must be integers.") } max.val <- max(test.vec) min.val <- min(test.vec) if(!all(sort(unique(test.vec)) == min.val:max.val)){ stop("entries in corr.mat must be consecutive integers starting at 1.") } return(corr.mat[1:max(len), 1:max(len)]) }
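## Editor's sketch (not package code): for the user-defined working correlation,
## 'corr.mat' is a symmetric pattern matrix of consecutive integers, each
## integer indexing one correlation parameter to estimate, and it must be at
## least as large as the biggest cluster.  The pattern below (for clusters of
## size up to 3) requests one parameter for adjacent observations and a second
## one for the (1,3) pair.
if (FALSE) {
  corr.mat <- matrix(c(1, 1, 2,
                       1, 1, 1,
                       2, 1, 1), nrow = 3, byrow = TRUE)
  getUserStructure(corr.mat)               # row/column indices grouped by parameter
  checkUserMat(corr.mat, len = c(3, 2, 3)) # passes the validity checks above
}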
## ---- end of file: CRTgeeDR/R/userstruct.R ----
#Exactly the function 'getfam' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. getfam <- function(family){ if(is.character(family)){ family <- get(family, mode = "function", envir = parent.frame(2)) } if(is.function(family)){ family <- family() LinkFun <- family$linkfun InvLink <- family$linkinv VarFun <- family$variance InvLinkDeriv <- family$mu.eta FunList <- list("LinkFun" = LinkFun, "VarFun" = VarFun, "InvLink" = InvLink, "InvLinkDeriv" = InvLinkDeriv) }else if(is.list(family) && !is.null(family$family)){ LinkFun <- family$linkfun InvLink <- family$linkinv VarFun <- family$variance InvLinkDeriv <- family$mu.eta FunList <- list("LinkFun" = LinkFun, "VarFun" = VarFun, "InvLink" = InvLink, "InvLinkDeriv" = InvLinkDeriv) }else if(is.list(family)){ if(length(match(names(family), c("LinkFun", "VarFun", "InvLink", "InvLinkDeriv"))) == 4){ LinkFun <- family$LinkFun InvLink <- family$InvLink VarFun <- family$VarFun InvLinkDeriv <- family$InvLinkDeriv }else{ LinkFun <- family[[1]] VarFun <- family[[2]] InvLink <- family[[3]] InvLinkDeriv <- family[[4]] } FunList <- list("LinkFun" = LinkFun, "VarFun" = VarFun, "InvLink" = InvLink, "InvLinkDeriv" = InvLinkDeriv) return(FunList) }else{ stop("problem with family argument: should be string, family object, or list of functions") } } #Exactly the function 'getBlockDiag' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Get a block diagonal matrix. Each block has dimension corresponding to ### each cluster size. By default, each block is just a matrix filled with ones. getBlockDiag <- function(len, xvec=NULL){ K <- length(len) if(is.null(xvec)){ xvec <- rep.int(1, sum(len^2)) } row.vec <- col.vec <- vector("numeric", sum(len^2)) add.vec <- cumsum(len) - len index <- c(0, (cumsum(len^2) -len^2)[2:K], sum(len^2)) for(i in 1:K){ row.vec[(index[i] + 1):(index[i+1])] <- rep.int( (1:len[i]) + add.vec[i], len[i]) col.vec[(index[i] + 1):(index[i+1])] <- rep( (1:len[i]) + add.vec[i], each=len[i]) } BlockDiag <- sparseMatrix(i = row.vec, j = col.vec, x = xvec) return(list(BDiag = as(BlockDiag, "symmetricMatrix"), row.vec =row.vec, col.vec=col.vec)) } #Exactly the function 'checkFixedMat' included in the package 'geeM', #authored by Lee S. McDaniel and Nick Henderson #under the GPL-2 license. ### Check some conditions on the FIXED correlation structure. checkFixedMat <- function(corr.mat, len){ if(is.null(corr.mat)){ stop("corr.mat must be specified if using fixed correlation structure") } if(dim(corr.mat)[1] < max(len)){ stop("Dimensions of corr.mat must be at least as large as largest cluster") } if(!isSymmetric(corr.mat)){ stop("corr.mat must be symmetric") } if(determinant(corr.mat, logarithm=T)$modulus == -Inf){ stop("supplied correlation matrix is not invertible.") } return(corr.mat[1:max(len), 1:max(len)]) } #' Fit CRTgeeDR object. #' #' Fit CRTgeeDR object to a dataset #' #' @param object CRTgeeDR object #' @param ... ignored #' @aliases fitted.CRTgeeDR fitted #' @method fitted CRTgeeDR #' @export ### fitted function for CRTgeeDR object fitted.CRTgeeDR <- function(object, ...){ InvLink <- object$FunList$InvLink return(InvLink(object$eta)) } #' Predict CRTgeeDR object. #' #' Predict CRTgeeDR object to a dataset #' #' @param object CRTgeeDR object #' @param newdata dataframe, new dataset to which the CRTgeeDRneed to be used for prediction #' @param ... 
ignored #' @method predict CRTgeeDR #' @aliases predict.CRTgeeDR predict #' @export predict.CRTgeeDR <- function(object, newdata = NULL,...){ coefs <- object$beta if(is.null(newdata)){ return(as.vector(object$X %*% object$beta)) }else{ if(dim(newdata)[2] != length(coefs)){warning("New observations must have the same number of rows as coefficients in the model")} return(as.vector(newdata %*% object$beta)) } } cleandata<-function(dat,type,nameY,cc,formula,print=TRUE){ na.inds <- NULL if(any(is.na(dat))){ na.inds <- which(is.na(dat), arr.ind=T) } if(sum(c(which(colnames(dat)%in%all.vars(formula[-2])))%in%na.inds[,2])!=0){ if(type=="marginal model"){ if(print)warning(paste("It exists missing data for covariates in the ",type," model \n -> Weights for these observation are set to 0",sep="")) }else{ if(print)warning(paste("It exists missing data for covariates in the ",type," model \n -> Single imputation had been used for computation",sep="")) } } if(!is.null(na.inds)){ if(cc){ #### If missing set to median value (No problem for outcome but carefull with covariates) dat$weights[unique(na.inds[,1])] <- 0 for(i in unique(na.inds[,2])){ if(is.factor(dat[,i])){ dat[na.inds[,1], i] <- levels(dat[,i])[1] }else{ dat[na.inds[,1], i] <- median(dat[,i], na.rm=T) } } }else{ for(i in unique(na.inds[,2][!na.inds[,2] %in% which(colnames(dat)==nameY)])){ if(is.factor(dat[,i])){ dat[na.inds[,1], i] <- levels(dat[,i])[1] }else{ dat[na.inds[,1], i] <- median(dat[,i], na.rm=T) } } } } return(dat) } #Exactly the function 'cormax.ind' included in the package 'geesmv', #authored by Ming Wang #under the GPL-2 license. cormax.ind <- function(n){ matrix<-matrix(0,nrow=n,ncol=n) diag(matrix)<-rep(1,n) return(matrix) } #Exactly the function 'cormax.ar1' included in the package 'geesmv', #authored by Ming Wang #under the GPL-2 license. cormax.ar1 <- function(n, alpha){ n.max<- max(n) cor.max<- diag(1,n.max) lowertri<- rep(0,0) for(j in (n.max-1):1){ lowertri<- c(lowertri,1:j) } cor.max[lower.tri(cor.max)]<- alpha^lowertri cor.max[upper.tri(cor.max)]<- alpha^lowertri[length(lowertri):1] return(cor.max) } #Exactly the function 'cormax.exch' included in the package 'geesmv', #authored by Ming Wang #under the GPL-2 license. cormax.exch <- function(n, alpha){ n.max<- n cor.max<- diag(1,n.max) cor.max[lower.tri(cor.max)]<- rep(alpha,n.max*(n.max-1)/2) cor.max[upper.tri(cor.max)]<- rep(alpha,n.max*(n.max-1)/2) return(cor.max) } #Exactly the function 'cluster.size' included in the package 'geesmv', #authored by Ming Wang #under the GPL-2 license. cluster.size <- function(id){ clid<- unique(id) m<- length(unique(id)) n<- rep(0,m) autotime<- rep(0,0) for(i in 1:m){ n[i]<- length(which(id==clid[i])) autotime<- c(autotime,1:n[i]) } id<- rep(1:m,n) return(list(m=m,n=n,id=id,autotime=autotime)) } #Exactly the function 'vec' included in the package 'fBasics', #authored by Rmetrics Core Team #under the GPL-2 license. vec <- function(x) { t(t(as.vector(x))) }
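## Editor's sketch (not package code): getBlockDiag() builds the sparse
## block-diagonal pattern (one all-ones block per cluster) used throughout the
## estimation code, and getfam() also accepts a family name given as a string.
if (FALSE) {
  library(Matrix)                       # the helpers above rely on Matrix classes
  bd <- getBlockDiag(len = c(2, 3))
  bd$BDiag                              # 5 x 5 sparse 0/1 block-diagonal matrix
  getfam("gaussian")$InvLink(0.5)       # identity inverse link -> 0.5
}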
## ---- end of file: CRTgeeDR/R/utility.R ----
#' Create or update a \code{"CRTsp"} object #' #' \code{CRTsp} coerces data frames containing co-ordinates and location attributes #' into objects of class \code{"CRTsp"} or creates a new \code{"CRTsp"} object by simulating a set of Cartesian co-ordinates for use as the locations in a simulated trial site #' @param x an object of class \code{"CRTsp"} or a data frame containing locations in (x,y) coordinates, cluster #' assignments (factor \code{cluster}), and arm assignments (factor \code{arm}). Optionally specification of a buffer zone (logical \code{buffer}); #' any other variables required for subsequent analysis. #' @param design list: an optional list containing the requirements for the power of the trial #' @param geoscale standard deviation of random displacement from each settlement cluster center (for new objects) #' @param locations number of locations in population (for new objects) #' @param kappa intensity of Poisson process of settlement cluster centers (for new objects) #' @param mu mean number of points per settlement cluster (for new objects) #' @export #' @returns A list of class \code{"CRTsp"} containing the following components: #' \tabular{lll}{ #' \code{design} \tab list: \tab parameters required for power calculations\cr #' \code{geom_full} \tab list: \tab summary statistics describing the site \cr #' \code{geom_core} \tab list: \tab summary statistics describing the core area #' (when a buffer is specified)\cr #' \code{trial} \tab data frame: \tab rows correspond to geolocated points, as follows:\cr #' \tab \code{x} \tab numeric vector: x-coordinates of locations \cr #' \tab \code{y} \tab numeric vector: y-coordinates of locations \cr #' \tab \code{cluster} \tab factor: assignments to cluster of each location \cr #' \tab \code{arm} \tab factor: assignments to \code{"control"} or \code{"intervention"} for each location \cr #' \tab \code{nearestDiscord} \tab numeric vector: Euclidean distance to nearest discordant location (km) \cr #' \tab \code{buffer} \tab logical: indicator of whether the point is within the buffer \cr #' \tab \code{...} \tab other objects included in the input \code{"CRTsp"} object or data frame \cr #' } #' @details #' If a data frame or \code{"CRTsp"} object is input then the output \code{"CRTsp"} object is validated, #' a description of the geography is computed and power calculations are carried out.\cr\cr #' If \code{geoscale, locations, kappa} and \code{mu} are specified then a new trial dataframe is constructed #' corresponding to a novel simulated human settlement pattern. This is generated using the #' Thomas algorithm (\code{rThomas}) in [\code{spatstat.random}](https://CRAN.R-project.org/package=spatstat.random) #' allowing the user to defined the density of locations and degree of spatial clustering. #' The resulting trial data frame comprises a set of Cartesian coordinates centred at the origin. 
#' @export #' @examples #' {# Generate a simulated area with 10,000 locations #' example_area = CRTsp(geoscale = 1, locations=10000, kappa=3, mu=40) #' summary(example_area) #' } CRTsp <- function(x = NULL, design = NULL, geoscale = NULL, locations = NULL, kappa = NULL, mu = NULL) { centroid <- list(lat = NULL, long = NULL) if(identical(class(x),"CRTsp")) { CRT <- x if(!is.null(design)) CRT$design <- design centroid <- if(!is.null(CRT$geom_full$centroid$lat)) CRT$geom_full$centroid } else if(identical(class(x),"data.frame")) { CRT <- list(trial = x, design = design) } else if(is.null(x)) { if (!is.null(geoscale) & !is.null(locations) & !is.null(kappa) & !is.null(mu)){ trial <- simulate_site(geoscale = geoscale, locations=locations, kappa=kappa, mu=mu) CRT <- list(trial = trial, design = design) } else { warning("*** All of geoscale, locations, kappa, mu needed to simulate a new site ***") CRT <- list(trial = data.frame(x=numeric(0),y=numeric(0)), design = design) } } if(is.null(CRT$design)) CRT$design <- list(locations = NULL, alpha = NULL, desiredPower = NULL, effect = NULL, yC = NULL, outcome_type = NULL, sigma2 = NULL, denominator = NULL, N = NULL, ICC = NULL, k = NULL, d_h = NULL) if(is.null(CRT$trial)) CRT$trial <- data.frame(x=numeric(0),y=numeric(0)) CRT$geom_full <- get_geom(trial = CRT$trial, design = CRT$design) CRT$geom_full$centroid <- centroid if (is.null(CRT$trial$buffer)) { CRT$geom_core <- list( locations = 0,sd_h = NULL, k= NULL, records = 0, mean_h = NULL, DE = NULL, power = NULL, clustersRequired = NULL) } else { CRT$geom_core <- get_geom(trial = CRT$trial[CRT$trial$buffer == FALSE, ], design = CRT$design) } return(validate_CRTsp(new_CRTsp(CRT))) } is_CRTsp <- function(x) { return(inherits(x, "CRTsp")) } new_CRTsp <- function(x = list()) { stopifnot(is.data.frame(x$trial)) stopifnot(is.list(x$design)) stopifnot(is.list(x$geom_full)) stopifnot(is.list(x$geom_core)) return(structure(x, class = "CRTsp")) } validate_CRTsp <- function(x) { stopifnot(inherits(x, "CRTsp")) values <- unclass(x) if (is.null(values$trial) & is.null(values$design)) { stop("There must be either a design or a trial data frame in `x`") } if (!is.null(values$trial)){ if (!identical(class(values$trial),"data.frame")){ stop("The trial object in `x` must be a data frame") } if (nrow(values$trial) != values$geom_full$records){ stop("The geom_full object in `x` is invalid") } } return(x) } #' Summary description of a \code{"CRTsp"} object #' #' \code{summary.CRTsp} provides a description of a \code{"CRTsp"} object #' @param object an object of class \code{"CRTsp"} or a data frame containing locations in (x,y) coordinates, cluster #' assignments (factor \code{cluster}), arm assignments (factor \code{arm}) and buffer zones (logical \code{buffer}), #' together with any other variables required for subsequent analysis. #' @param maskbuffer radius of area around a location to include in calculation of areas #' @param ... other arguments used by summary #' @method summary CRTsp #' @export #' @return No return value, write text to the console. #' @examples #' summary(CRTsp(readdata('exampleCRT.txt'))) summary.CRTsp <- function(object, maskbuffer = 0.2, ...) 
{ defaultdigits <- getOption("digits") on.exit(options(digits = defaultdigits)) options(digits = 3) cat("===============================CLUSTER RANDOMISED TRIAL ===========================\n") output <- matrix(" ", nrow = 22, ncol = 2) rownames(output) <- paste0("row ", 1:nrow(output)) rownames(output)[1] <- "Locations and Clusters\n---------------------- " output[1, 1] <- "-" rownames(output)[2] <- "Coordinate system " if (length(object$trial$x) > 0 & length(object$trial$y) > 0) { output[2, 1] <- "(x, y)" } else if(!is.null(object$trial$lat)) { output[2, 1] <- "Lat-Long" } else { output[2, 1] <- "No coordinates in dataset" } if (identical(unname(output[2, 1]),"(x, y)")) { cat("\nSummary of coordinates\n----------------------\n") coordinate.summary <- with(object$trial, summary(cbind(x, y))) rownames(coordinate.summary) <- substr(coordinate.summary[, 1], 1, 8) coordinate.summary[, ] <- substr(coordinate.summary[, ], 9, 13) print(t(coordinate.summary)) xycoords <- data.frame(cbind(x=object$trial$x,y=object$trial$y)) tr <- sf::st_as_sf(xycoords, coords = c("x","y")) buf1 <- sf::st_buffer(tr, maskbuffer) buf2 <- sf::st_union(buf1) area <- sf::st_area(buf2) cat("Total area (within ", maskbuffer,"km of a location) : ", area, "sq.km\n\n") if (!is.null(object$geom_full$centroid)) { cat("Geolocation of centroid (radians): latitude: ", object$geom_full$centroid$lat, "longitude: ", object$geom_full$centroid$long,"\n\n") } } rownames(output)[5] <- "Available clusters (across both arms) " if (is.na(object$geom_full$c)) { output[5, 1] <- "Not assigned" } else { clustersAvailableFull <- with(object$geom_full, floor(locations/mean_h)) output[5, 1] <- clustersAvailableFull rownames(output)[6] <- " Per cluster mean number of points " output[6, 1] <- round(object$geom_full$mean_h, digits = 1) rownames(output)[7] <- " Per cluster s.d. number of points " if (!is.null(object$geom_full$sd_h)) output[7, 1] <- round(object$geom_full$sd_h, digits = 1) } rownames(output)[4] <- "Locations: " if(identical(object$geom_full$locations,object$geom_full$records)){ output[4, 1] <- object$geom_full$locations } else { if(!is.null(object$geom_full$records)) { rownames(output)[4] <- paste0("Not aggregated. Total records: ", object$geom_full$records,". Unique locations:") } output[4, 1] <- object$geom_full$locations } if (object$geom_core$locations > 0) { output[1, 1] <- "Full" output[1, 2] <- "Core" output[4, 2] <- object$geom_core$locations if (!is.na(object$geom_core$mean_h)) { clustersAvailableCore <- with(object$geom_core, floor(locations/mean_h)) output[5, 2] <- clustersAvailableCore output[6, 2] <- round(object$geom_core$mean_h, digits = 1) } if (!is.null(object$geom_core$sd_h)) output[7, 2] <- round(object$geom_core$sd_h, digits = 1) } if (!is.null(object$trial$arm) & !identical(object$trial$arm,character(0))) { sd1 <- ifelse(is.null(object$geom_full$sd_distance), NA, object$geom_full$sd_distance) sd2 <- ifelse(is.null(object$geom_core$sd_distance), NA, object$geom_core$sd_distance) rownames(output)[8] <- "S.D. 
of distance to nearest discordant location (km): " output[8, 1] <- ifelse(is.na(sd1), "", round(sd1, digits = 2)) output[8, 2] <- ifelse(is.na(sd2), "", round(sd2, digits = 2)) rownames(output)[9] <- "Cluster randomization: " if (is.null(object$trial$pair)) { output[9, 1] <- "Independently randomized" } else { output[9, 1] <- "Matched pairs randomized" } } else { if (is.null(object$trial$x)) { rownames(output)[9] <- "No locations to randomize" } else { rownames(output)[9] <- "No randomization" } output[9, 1] <- "-" } if (!is.null(object$design$alpha)) { rownames(output)[10] <- "\nSpecification of Requirements\n-----------------------------" output[10, 1] <- "-" rownames(output)[11] <- "Significance level (2-sided): " output[11, 1] <- object$design$alpha rownames(output)[12] <- "Type of Outcome " output[12, 1] <- switch(object$design$outcome_type, 'y' = "continuous", "n" = "count", "e" = "event rate", 'p' = "proportion", 'd' = "dichotomous") rownames(output)[13] <- "Expected outcome in control arm: " output[13, 1] <- object$design$yC link <- switch(object$design$outcome_type, 'y' = "identity", "n" = "log", "e" = "log", 'p' = "logit", 'd' = "logit") rownames(output)[14] <- switch(link, "identity" = "Expected variance of outcome: ", "log" = "Mean rate multiplier: ", "cloglog" = "Mean rate multiplier: ", "logit" = "Mean denominator: ") output[14, 1] <- switch(link, "identity" = object$design$sigma2, "log" = object$design$denominator, "cloglog" = object$design$denominator, "logit" = object$design$N) if (identical(object$design$outcome_type, 'd')) output[14, 1] <- "" rownames(output)[15] <- "Required effect size: " output[15, 1] <- object$design$effect if (is.na(object$design$ICC)) { rownames(output)[16] <- "Coefficient of variation (%): " output[16, 1] <- object$design$cv_percent } else { rownames(output)[16] <- "Intra-cluster correlation: " output[16, 1] <- object$design$ICC } if (!is.null(object$design$buffer_width)) { rownames(output)[3] <- "Buffer width : " if (object$design$buffer_width > 0) { output[3, 1] <- paste0(object$design$buffer_width, " km.") } else { output[3, 1] <- "No buffer" } } } output[17, 1] <- "-" if (is.null(object$design$effect)) { rownames(output)[17] <- "No power calculations to report" } else { rownames(output)[17] <- "\nPower calculations (ignoring spillover)\n------------------ " sufficient <- ifelse(clustersAvailableFull >= object$geom_full$clustersRequired, "Yes", "No") rownames(output)[18] <- "Design effect: " output[18, 1] <- round(object$geom_full$DE, digits = 1) rownames(output)[19] <- "Nominal power (%) " output[19, 1] <- round(object$geom_full$power * 100, digits = 1) rownames(output)[20] <- paste0("Total clusters required (power of ", object$design$desiredPower * 100, "%):") output[20, 1] <- object$geom_full$clustersRequired rownames(output)[21] <- "Sufficient clusters for required power?" 
output[21, 1] <- sufficient if (is.null(object$geom_core$power)) { output <- subset(output, select = -c(2)) } else { output[17, 1] <- "Full" output[17, 2] <- "Core" clustersAvailableCore <- with(object$geom_core, floor(locations/mean_h)) sufficientCore <- ifelse(clustersAvailableCore >= object$geom_core$clustersRequired, "Yes", "No") output[18, 2] <- round(object$geom_core$DE, digits = 1) output[19, 2] <- round(object$geom_core$power * 100, digits = 1) output[20, 2] <- object$geom_core$clustersRequired output[21, 2] <- sufficientCore } } standard.names <- c("x", "y", "cluster", "arm", "buffer", "nearestDiscord", "geom_full", "geom_core", "design") rownames(output)[22] <- "\nOther variables in dataset\n--------------------------" output[22, 1] <- paste(dplyr::setdiff(names(object$trial), standard.names), collapse = " ") output <- output[trimws(output[, 1]) != "", ] # display and return table utils::write.table(output, quote = FALSE, col.names = FALSE, sep = " ") options(digits = defaultdigits) invisible(object) }
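# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the package source): coercing an existing
# data frame of locations into a "CRTsp" object and summarising it. Only the
# documented columns x, y, cluster and arm are assumed; the data frame name
# and the values below are hypothetical.
#
# households <- data.frame(
#     x = runif(200, 0, 2), y = runif(200, 0, 2),   # km, Cartesian coordinates
#     cluster = factor(rep(1:8, each = 25)),
#     arm = factor(rep(rep(c("control", "intervention"), each = 25), times = 4)))
# site <- CRTsp(households)        # validates the input and computes geom_full
# summary(site, maskbuffer = 0.2)  # area within 0.2 km of any location
# ---------------------------------------------------------------------------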
# ---- end of file: CRTspat/R/CRTsp.R ----
#' @keywords internal
"_PACKAGE"

## usethis namespace: start
#' @importFrom stats aggregate
#' @importFrom stats coef
#' @importFrom stats dnorm
#' @importFrom stats fitted
#' @importFrom stats gaussian
#' @importFrom stats poisson
#' @importFrom stats predict
#' @importFrom stats quasipoisson
#' @importFrom stats residuals
#' @importFrom stats sd
#' @importFrom stats vcov
## usethis namespace: end
NULL
# ---- end of file: CRTspat/R/CRTspat-package.R ----
#' Analysis of cluster randomized trial with spillover
#'
#' \code{CRTanalysis} carries out a statistical analysis of a cluster randomized trial (CRT).
#' @param trial an object of class \code{"CRTsp"} or a data frame containing locations in (x,y) coordinates, cluster
#' assignments (factor \code{cluster}), arm assignments (factor \code{arm}) and outcome data (see details).
#' @param method statistical method with options:
#' \tabular{ll}{
#' \code{"EMP"} \tab simple averages of the data \cr
#' \code{"T"} \tab comparison of cluster means by t-test \cr
#' \code{"GEE"} \tab Generalised Estimating Equations \cr
#' \code{"LME4"} \tab Generalized Linear Mixed-Effects Models \cr
#' \code{"INLA"}\tab Integrated Nested Laplace Approximation (INLA) \cr
#' \code{"MCMC"}\tab Markov chain Monte Carlo using \code{"JAGS"} \cr
#' \code{"WCA"}\tab Within cluster analysis \cr
#' }
#' @param distance measure of distance or surround with options: \cr
#' \tabular{ll}{
#' \code{"nearestDiscord"} \tab distance to nearest discordant location (km)\cr
#' \code{"disc"} \tab disc\cr
#' \code{"kern"} \tab surround based on sum of normal kernels\cr
#' \code{"hdep"} \tab Tukey half space depth\cr
#' \code{"sdep"} \tab simplicial depth\cr
#' }
#' @param cfunc transformation defining the spillover function with options:
#' \tabular{llll}{
#' \code{"Z"} \tab\tab arm effects not considered\tab reference model\cr
#' \code{"X"} \tab\tab spillover not modelled\tab the only valid value of \code{cfunc} for methods \code{"EMP"}, \code{"T"} and \code{"GEE"}\cr
#' \code{"L"} \tab\tab inverse logistic (sigmoid)\tab the default for \code{"INLA"} and \code{"MCMC"} methods\cr
#' \code{"P"} \tab\tab inverse probit (error function)\tab available with \code{"INLA"} and \code{"MCMC"} methods\cr
#' \code{"S"} \tab\tab piecewise linear\tab only available with the \code{"MCMC"} method\cr
#' \code{"E"} \tab\tab estimation of scale factor\tab only available with \code{distance = "disc"} or \code{distance = "kern"}\cr
#' \code{"R"} \tab\tab rescaled linear\tab \cr
#' }
#' @param scale_par numeric: pre-specified value of the spillover parameter or disc radius for models where this is fixed (\code{cfunc = "R"}).
#' @param link link function with options:
#' \tabular{ll}{
#' \code{"logit"}\tab (the default). \code{numerator} has a binomial distribution with denominator \code{denominator}.\cr
#' \code{"log"} \tab \code{numerator} is Poisson distributed with an offset of log(\code{denominator}).\cr
#' \code{"cloglog"} \tab \code{numerator} is Bernoulli distributed with an offset of log(\code{denominator}).\cr
#' \code{"identity"}\tab the outcome is \code{numerator/denominator} with a normally distributed error function.\cr
#' }
#' @param numerator string: name of numerator variable for outcome
#' @param denominator string: name of denominator variable for outcome data (if present)
#' @param excludeBuffer logical: indicator of whether any buffer zone (records with \code{buffer=TRUE}) should be excluded from analysis
#' @param alpha numeric: confidence level for confidence intervals and credible intervals
#' @param baselineOnly logical: indicator of whether the required analysis is of effect size or of baseline only
#' @param baselineNumerator string: name of numerator variable for baseline data (if present)
#' @param baselineDenominator string: name of denominator variable for baseline data (if present)
#' @param personalProtection logical: indicator of whether the model includes local effects with no spillover
#' @param clusterEffects logical: indicator of whether the model includes cluster random effects
#' @param spatialEffects logical: indicator of whether the model includes spatial random effects
#' (available only for \code{method = "INLA"})
#' @param requireMesh logical: indicator of whether spatial predictions are required
#' (available only for \code{method = "INLA"})
#' @param inla_mesh string: name of pre-existing INLA input object created by \code{compute_mesh()}
#' @return list of class \code{CRTanalysis} containing the following results of the analysis:
#' \itemize{
#' \item \code{description} : description of the dataset
#' \item \code{method} : statistical method
#' \item \code{pt_ests} : point estimates
#' \item \code{int_ests} : interval estimates
#' \item \code{model_object} : object returned by the fitting routine
#' \item \code{spillover} : function values and statistics describing the estimated spillover
#' }
#' @importFrom grDevices rainbow
#' @importFrom stats binomial dist kmeans median na.omit qlogis qnorm quantile rbinom rnorm runif simulate
#' @importFrom utils head read.csv
#' @details \code{CRTanalysis} is a wrapper for the statistical analysis packages:
#' [geepack](https://CRAN.R-project.org/package=geepack),
#' [INLA](https://www.r-inla.org/),
#' [jagsUI](https://CRAN.R-project.org/package=jagsUI),
#' and the [t.test](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/t.test)
#' function of package \code{stats}.\cr\cr
#' The wrapper does not provide an interface to the full functionality of these packages.
#' It is specific for typical analyses of cluster randomized trials with geographical clustering. Further details
#' are provided in the [vignette](https://thomasasmith.github.io/articles/Usecase5.html).\cr\cr
#' The key results of the analyses can be extracted using a \code{summary()} of the output list.
#' The \code{model_object} in the output list is the usual output from the statistical analysis routine,
#' and can also be inspected with \code{summary()}, or analysed using \code{stats::fitted()}
#' for purposes of evaluation of model fit etc.\cr\cr
#' For models with a complementary log-log link function specified with \code{link = "cloglog"},
#' the numerator must be coded as 0 or 1. Technically the binomial denominator is then 1.
#' The value of \code{denominator} is used as a rate multiplier.\cr\cr #' With the \code{"INLA"} and \code{"MCMC"} methods 'iid' random effects are used to model extra-Poisson variation.\cr\cr #' Interval estimates for the coefficient of variation of the cluster level outcome are calculated using the method of #' [Vangel (1996)](https://www.jstor.org/stable/2685039). #' @export #' @examples #' \donttest{ #' example <- readdata('exampleCRT.txt') #' # Analysis of test dataset by t-test #' exampleT <- CRTanalysis(example, method = "T") #' summary(exampleT) #' # Standard GEE analysis of test dataset ignoring spillover #' exampleGEE <- CRTanalysis(example, method = "GEE") #' summary(exampleGEE) #' # LME4 analysis with error function spillover function #' exampleLME4 <- CRTanalysis(example, method = "LME4", cfunc = "P") #' summary(exampleLME4) #' } CRTanalysis <- function( trial, method = "GEE", distance = "nearestDiscord", scale_par = NULL, cfunc = "L", link = "logit", numerator = "num", denominator = "denom", excludeBuffer = FALSE, alpha = 0.05, baselineOnly = FALSE, baselineNumerator = "base_num", baselineDenominator = "base_denom", personalProtection = FALSE, clusterEffects = TRUE, spatialEffects = FALSE, requireMesh = FALSE, inla_mesh = NULL) { CRT <- CRTsp(trial) cluster <- linearity <- penalty <- distance_type <- NULL resamples <- 1000 penalty <- 0 # The prior for the scale parameter should allow this to range from smaller than # any plausible spillover zone, to larger than the study area log_sp_prior <- c(-5, log(max(CRT$trial$x) - min(CRT$trial$x)) + 2) # Test of validity of inputs if (!method %in% c("EMP", "T", "MCMC", "GEE", "INLA", "LME4", "WCA")) { stop("*** Invalid value for statistical method ***") return(NULL) } if (identical(method, "INLA") & identical(system.file(package='INLA'), "")){ message("*** INLA package is not installed. Running lme4 analysis instead. ***") method <- "LME4" } # Some statistical methods do not allow for spillover if (method %in% c("EMP", "T", "GEE")) cfunc <- "X" # cfunc='Z' is used to remove the estimation of effect size from the model if (baselineOnly) cfunc <- "Z" # Classification of distance_type and which non-linear fit is needed if (cfunc %in% c("Z","X")) { distance_type <- "No fixed effects of distance " if (is.null(CRT$trial[[distance]]) & is.null(CRT$trial$arm)){ distance <- "dummy" CRT$trial$dummy <- runif(nrow(CRT$trial), 0, 1) } linearity <- "No non-linear parameter. " scale_par <- 1.0 } else { if(identical(distance, "nearestDiscord")) { distance_type <- "Signed distance " } else if (distance %in% c("hdep", "sdep", "disc", "kern")) { distance_type <- "Surround: " } else if(is.null(CRT$trial[[distance]])) { stop("*** Invalid distance measure ***") return(NULL) } else { distance_type <- ifelse((min(CRT$trial[[distance]]) < 0), "Signed distance ", "Surround: ") } if (identical(distance_type, "Surround: ")) { if (!identical(cfunc,"E")) { message("*** Surrounds must have cfunc 'E' or 'R': using cfunc = 'R' ***") cfunc <- "R" } } else { if (identical(cfunc,"E")) { message("*** Signed distances cannot have cfunc = 'E': using cfunc = 'R' ***") cfunc <- "R" } } if(identical(cfunc, "R")) { if (distance %in% c("disc", "kern")) { if (is.null(scale_par)) { penalty <- ifelse(identical(method, "MCMC"), 0, 2) linearity <- "Estimated scale parameter: " } else { linearity <- paste0("Precalculated scale parameter: ") } } else { scale_par <- 1.0 linearity <- "No non-linear parameter. 
" } } else if(is.null(scale_par)) { if(identical(distance_type, "Surround: ") & identical(cfunc, "E")){ # message("Estimated escape function ) } else if (!cfunc %in% c("L", "P", "S")){ stop("*** Invalid spillover function ***") return(NULL) } # the goodness-of-fit is penalised if scale_par needs to be estimated # (unless this is via MCMC) penalty <- ifelse(identical(method, "MCMC"), 0, 2) linearity <- "Estimated scale parameter: " } else { linearity <- paste0("Precalculated scale parameter of ", round(scale_par, digits = 3),": ") } } # if the distance or surround is not provided, augment the trial data frame with distance or surround # (compute distance does nothing beyond validating the CRTsp, if the distance has already been calculated) if (!(distance %in% c("disc", "kern", "dummy"))) { CRT <- compute_distance(CRT, distance = distance, scale_par = scale_par) } trial <- CRT$trial if ("buffer" %in% colnames(trial) & excludeBuffer) trial <- trial[!trial$buffer, ] # trial needs to be ordered for some analyses if(!is.null(trial$cluster)) trial <- trial[order(trial$cluster), ] # Some statistical methods only run if there are cluster effects if (method %in% c("LME4", "MCMC")) clusterEffects <- TRUE if (baselineOnly){ # Baseline analyses are available only for GEE and INLA if (method %in% c("EMP", "T", "GEE", "MCMC", "LME4", "WCA")) { method <- "GEE" message("Analysis of baseline only, using GEE\n") } else if (identical(method,"INLA")) { message("Analysis of baseline only, using INLA\n") } if (is.null(trial[[baselineNumerator]])) { stop("*** No baseline data provided ***") } if (is.null(trial[[baselineDenominator]])) { trial[[baselineDenominator]] <- 1 } trial$y1 <- trial[[baselineNumerator]] trial$y0 <- trial[[baselineDenominator]] - trial[[baselineNumerator]] trial$y_off <- trial[[baselineDenominator]] } else { if (is.null(trial[[numerator]])){ stop("*** No outcome data to analyse ***") } trial$y1 <- trial[[numerator]] trial$y0 <- trial[[denominator]] - trial[[numerator]] trial$y_off <- trial[[denominator]] } # create model formula for use in equations and for display fterms <- switch(cfunc, Z = NULL, X = "arm", "pvar" ) if (personalProtection & cfunc != 'X') fterms <- c(fterms, "arm") if (clusterEffects) { if (identical(method, "INLA")) { fterms <- c(fterms, "f(cluster, model = \'iid\')") } else if(method %in% c("EMP","GEE")) { fterms <- fterms } else { fterms <- c(fterms, "(1 | cluster)") } } if (identical(method, "INLA")){ if (spatialEffects) fterms <- c(fterms, "f(s, model = spde)") if (link %in% c("log", "cloglog")) fterms <- c(fterms, "f(id, model = \'iid\')") } ftext <- paste(fterms, collapse = " + ") # create names for confidence limits for use throughout CLnames <- c( paste0(alpha/0.02, "%"), paste0(100 - alpha/0.02, "%") ) # store options here- noting that the model formula depends on allowable values of other options options <- list(method = method, link = link, distance = distance, cfunc = cfunc, alpha = alpha, baselineOnly = baselineOnly, fterms = fterms, ftext = ftext, CLnames = CLnames, log_sp_prior = log_sp_prior, clusterEffects = clusterEffects, spatialEffects = spatialEffects, personalProtection = personalProtection, distance_type = distance_type, linearity = linearity, scale_par = scale_par, penalty = penalty) # create scaffolds for lists pt_ests <- list(scale_par = NA, personal_protection = NA, spillover_interval = NA) int_ests <- list(controlY = NA, interventionY = NA, effect_size = NA) model_object <- list() description <- get_description(trial=trial, link=link, 
alpha=alpha, baselineOnly) analysis <- list(trial = trial, pt_ests = pt_ests, int_ests = int_ests, description = description, options = options) analysis <- switch(method, "EMP" = EMPanalysis(analysis), "T" = Tanalysis(analysis), "GEE" = GEEanalysis(analysis = analysis, resamples=resamples), "LME4" = LME4analysis(analysis), "INLA" = INLAanalysis(analysis, requireMesh = requireMesh, inla_mesh = inla_mesh), "MCMC" = MCMCanalysis(analysis), "WCA" = wc_analysis(analysis, design = CRT$design) ) if (!baselineOnly & !is.null(analysis$pt_ests$controlY)){ fittedCurve <- get_curve(x = analysis$pt_ests, analysis = analysis) spillover <- get_spilloverStats(fittedCurve=fittedCurve, trial=analysis$trial, distance = distance) # compute indirect effects here analysis <- tidySpillover(spillover, analysis, fittedCurve) if (!identical(method,"EMP")){ scale_par <- analysis$options$scale_par message(paste0(linearity, ifelse(is.null(scale_par), "", ifelse(identical(scale_par, 1), "", round(scale_par, digits = 3)))," ", distance_type, "-", ifelse(identical(distance_type, "No fixed effects of distance "), "", getDistanceText(distance = distance, scale_par = scale_par)), "\n")) } } class(analysis) <- "CRTanalysis" return(analysis) } # functions for INLA analysis #' \code{compute_mesh} create objects required for INLA analysis of an object of class \code{"CRTsp"}. #' @param trial an object of class \code{"CRTsp"} or a data frame containing locations in (x,y) coordinates, cluster #' assignments (factor \code{cluster}), and arm assignments (factor \code{arm}) and outcome. #' @param offset see \code{inla.mesh.2d} documentation #' @param max.edge see \code{inla.mesh.2d} documentation #' @param inla.alpha parameter related to the smoothness (see \code{inla} documentation) #' @param maskbuffer numeric: width of buffer around points (km) #' @param pixel numeric: size of pixel (km) #' @return list #' \itemize{ #' \item \code{prediction} Data frame containing the prediction points and covariate values #' \item \code{A} projection matrix from the observations to the mesh nodes. #' \item \code{Ap} projection matrix from the prediction points to the mesh nodes. #' \item \code{indexs} index set for the SPDE model #' \item \code{spde} SPDE model #' \item \code{pixel} pixel size (km) #' } #' @details \code{compute_mesh} carries out the computationally intensive steps required for setting-up an #' INLA analysis of an object of class \code{"CRTsp"}, creating the prediction mesh and the projection matrices. #' The mesh can be reused for different models fitted to the same #' geography. The computational resources required depend largely on the resolution of the prediction mesh. #' The prediction mesh is thinned to include only pixels centred at a distance less than #' \code{maskbuffer} from the nearest point.\cr #' A warning may be generated if the \code{Matrix} library is not loaded. 
#' @export #' @examples #' { #' # low resolution mesh for test dataset #' library(Matrix) #' example <- readdata('exampleCRT.txt') #' exampleMesh=compute_mesh(example, pixel = 0.5) #' } compute_mesh <- function(trial = trial, offset = -0.1, max.edge = 0.25, inla.alpha = 2, maskbuffer = 0.5, pixel = 0.5) { if (identical(system.file(package='INLA'), "")){ message("*** INLA package is not installed ***") return("Mesh not created as INLA package is not installed") } else { # extract the trial data frame from the "CRTsp" object if (identical(class(trial),"CRTsp")) trial <- trial$trial # create an id variable if this does not exist if(is.null(trial$id)) trial <- dplyr::mutate(trial, id = dplyr::row_number()) # create buffer around area of points trial.coords <- base::matrix( c(trial$x, trial$y), ncol = 2 ) tr <- sf::st_as_sf(trial, coords = c("x","y")) buf1 <- sf::st_buffer(tr, maskbuffer) buf2 <- sf::st_union(buf1) # determine pixel size area <- sf::st_area(buf2) buffer <- sf::as_Spatial(buf2) # estimation mesh construction # dummy call to Matrix. This miraculously allows the loading of the "dgCMatrix" in the mesh to pass the test dummy <- Matrix::as.matrix(c(1,1,1,1)) mesh <- INLA::inla.mesh.2d( boundary = buffer, offset = offset, cutoff = 0.05, max.edge = max.edge ) # set up SPDE (Stochastic Partial Differential Equation) model spde <- INLA::inla.spde2.matern(mesh = mesh, alpha = inla.alpha, constr = TRUE) indexs <- INLA::inla.spde.make.index("s", spde$n.spde) A <- INLA::inla.spde.make.A(mesh = mesh, loc = trial.coords) # 8.3.6 Prediction data from https://www.paulamoraga.com/book-geospatial/sec-geostatisticaldatatheory.html bb <- sf::st_bbox(buffer) # create a raster that is slightly larger than the buffered area xpixels <- round((bb$xmax - bb$xmin)/pixel) + 2 ypixels <- round((bb$ymax - bb$ymin)/pixel) + 2 x <- bb$xmin + (seq(1:xpixels) - 1.5)*pixel y <- bb$ymin + (seq(1:ypixels) - 1.5)*pixel all.coords <- as.data.frame(expand.grid(x, y), ncol = 2) colnames(all.coords) <- c("x", "y") all.coords <- sf::st_as_sf(all.coords, coords = c("x", "y")) pred.coords <- sf::st_filter(all.coords, sf::st_as_sf(buf2)) pred.coords <- t(base::matrix( unlist(pred.coords), nrow = 2 )) # projection matrix for the prediction locations Ap <- INLA::inla.spde.make.A(mesh = mesh, loc = pred.coords) # Distance matrix calculations for the prediction stack Create all pairwise comparisons pairs <- tidyr::crossing( row = seq(1:nrow(pred.coords)), col = seq(1:nrow(trial)) ) # Calculate the distances calcdistP <- function(row, col) sqrt( (trial$x[col] - pred.coords[row, 1])^2 + (trial$y[col] - pred.coords[row, 2])^2 ) distP <- apply(pairs, 1, function(y) calcdistP(y["row"], y["col"])) distM <- base::matrix( distP, nrow = nrow(pred.coords), ncol = nrow(trial), byrow = TRUE ) nearestNeighbour <- apply(distM, 1, function(x) return(array(which.min(x)))) prediction <- data.frame( x = pred.coords[, 1], y = pred.coords[, 2], nearestNeighbour = nearestNeighbour) prediction$id <- trial$id[nearestNeighbour] if (!is.null(trial$arm)) prediction$arm <- trial$arm[nearestNeighbour] if (!is.null(trial$cluster)) prediction$cluster <- trial$cluster[nearestNeighbour] prediction <- with(prediction, prediction[order(y, x), ]) prediction$shortestDistance <- apply(distM, 1, min) rows <- seq(1:nrow(prediction)) inla_mesh <- list( prediction = prediction, A = A, Ap = Ap, indexs = indexs, spde = spde, pixel = pixel) if (nrow(prediction) > 20){ message("Mesh of ", nrow(prediction), " pixels of size ", pixel," km \n") } return(inla_mesh) } } 
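# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the package source): computing the INLA
# prediction mesh once and reusing it for several model fits to the same
# geography, as described in the details above. Parameter values are
# illustrative only; the INLA and Matrix packages must be installed.
#
# library(Matrix)
# example <- readdata("exampleCRT.txt")
# mesh <- compute_mesh(example, pixel = 0.5, maskbuffer = 0.5)
# fit_logistic <- CRTanalysis(example, method = "INLA", cfunc = "L",
#                             requireMesh = TRUE, inla_mesh = mesh)
# fit_probit   <- CRTanalysis(example, method = "INLA", cfunc = "P",
#                             requireMesh = TRUE, inla_mesh = mesh)
# summary(fit_logistic)
# ---------------------------------------------------------------------------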
EMPanalysis <- function(analysis){ lp <- arm <- NULL description <- analysis$description pt_ests <- list() pt_ests$controlY <- unname(description$controlY) pt_ests$interventionY <- unname(description$interventionY) pt_ests$effect_size <- unname(description$effect_size) pt_ests$spillover_interval <- NA pt_ests$personal_protection <- NA analysis$pt_ests <- pt_ests return(analysis) } Tanalysis <- function(analysis) { trial <- analysis$trial link <- analysis$options$link alpha <- analysis$options$alpha clusterSum <- clusterSummary(trial, link) formula <- stats::as.formula("lp ~ arm") model_object <- stats::t.test( formula = formula, data = clusterSum, alternative = "two.sided", conf.level = 1 - alpha, var.equal = TRUE ) analysis$model_object <- model_object analysis$pt_ests$p.value <- model_object$p.value analysisC <- stats::t.test( clusterSum$lp[clusterSum$arm == "control"], conf.level = 1 - alpha) analysis$pt_ests$controlY <- unname(invlink(link, analysisC$estimate[1])) analysis$int_ests$controlY <- invlink(link, analysisC$conf.int) analysisI <- stats::t.test( clusterSum$lp[clusterSum$arm == "intervention"], conf.level = 1 - alpha) analysis$pt_ests$interventionY <- unname(invlink(link, analysisI$estimate[1])) analysis$int_ests$interventionY <- invlink(link, analysisI$conf.int) # Covariance matrix (note that two arms are independent so the off-diagonal elements are zero) Sigma <- base::matrix( data = c(analysisC$stderr^2, 0, 0, analysisI$stderr^2), nrow = 2, ncol = 2) if (link == 'identity'){ analysis$pt_ests$effect_size <- analysis$pt_ests$controlY - analysis$pt_ests$interventionY analysis$int_ests$effect_size <- unlist(model_object$conf.int) } if (link %in% c("logit", "log", "cloglog")){ analysis$pt_ests$effect_size <- 1 - analysis$pt_ests$interventionY/ analysis$pt_ests$controlY analysis$int_ests$effect_size <- 1 - exp(-unlist(model_object$conf.int)) } analysis$pt_ests$t.statistic <- analysis$model_object$statistic analysis$pt_ests$df <- unname(analysis$model_object$parameter) analysis$pt_ests$p.value <- analysis$model_object$p.value return(analysis) } clusterSummary <- function(trial = trial, link = link){ y1 <- arm <- cluster <- y_off <- NULL clusterSum <- data.frame( trial %>% dplyr::group_by(cluster) %>% dplyr::summarize( y = sum(y1), total = sum(y_off), arm = arm[1] ) ) clusterSum$lp <- switch(link, "identity" = clusterSum$y/clusterSum$total, "log" = log(clusterSum$y/clusterSum$total), "logit" = logit(clusterSum$y/clusterSum$total), "cloglog" = log(clusterSum$y/clusterSum$total)) # Trap any non-finite values clusterSum$lp[!is.finite(clusterSum$lp)] <- NA return(clusterSum) } wc_analysis <- function(analysis, design) { analysis$pt_ests <- analysis$int_ests <- y1 <- cluster <- y_off <- NULL trial <- analysis$trial link <- analysis$options$link alpha <- analysis$options$alpha distance = analysis$options$distance trial$d <- trial[[distance]] nclusters <- nlevels(trial$cluster) analysis$options <- list( method = "WCA", link = link, distance = distance, alpha = alpha, scale_par = design[[distance]][["scale_par"]] ) analysis[[distance]] <- design[[distance]] analysis$nearestDiscord <- design$nearestDiscord fterms <- switch(link, "identity" = "y1/y_off ~ 1 + d", "log" = "y1 ~ 1 + d + offset(log(y_off))", "cloglog" = "y1 ~ 1 + d + offset(log(y_off))", "logit" = "cbind(y1,y0) ~ 1 + d") formula <- stats::as.formula(paste(fterms, collapse = "+")) pe <- matrix(nrow = 0, ncol = 2) for (cluster in levels(trial$cluster)){ glm <- tryCatch(glm(formula = formula, family = "binomial", data = 
trial[trial$cluster == cluster,]) , warning = function(w){ NULL}) if (!is.null(glm)) { pe <- rbind(pe, matrix(glm$coefficients, ncol = 2)) } } rr <- invlink(link = link, x = pe[ , 1] + pe[, 2])/invlink(link = link, x = pe[ , 1]) exact = ifelse(length(unique(rr[!is.infinite(rr)])) == length(rr[!is.infinite(rr)]) , TRUE, FALSE) model_object <- stats::wilcox.test(rr, mu = 1, alternative = "less", exact = exact, conf.int = TRUE, conf.level = 1 - alpha) model_object$conf.int[1] <- ifelse(model_object$conf.int[1] > 0, model_object$conf.int[1], 0) analysis$pt_ests$effect_size <- 1 - model_object$estimate analysis$int_ests$effect_size <- 1 - rev(unname(model_object$conf.int)) analysis$pt_ests$test.statistic <- unname(model_object$statistic) analysis$pt_ests$p.value <- model_object$p.value analysis$model_object <- model_object return(analysis) } wc_summary <- function(analysis){ defaultdigits <- getOption("digits") on.exit(options(digits = defaultdigits)) options(digits = 3) distance <- analysis$options$distance cat('\nDistance and surround statistics\n') cat('Measure Minimum Median Maximum S.D. Within-cluster S.D. R-squared\n') cat('------- ------- ------ ------- ---- ------------------- ---------\n') with(analysis$nearestDiscord,cat("Signed distance", format(Min., scientific=F), Median, format(Max., scientific=F), sd, " ", within_cluster_sd, rSq, "\n", sep = " ")) with(analysis[[distance]], cat(distance, strrep(" ",8-nchar(distance)), Min., Median, Max., sd, " ", within_cluster_sd, rSq, "\n", sep = " ")) cat("\nClusters assigned : ", analysis$description$nclusters, "\n") cat("Clusters analysed : ", analysis$description$nclusters, "\n") cat("Wilcoxon statistic : ", analysis$pt_ests$statistic, "\n") cat("P-value (1-sided) : ", analysis$pt_ests$p.value, "\n") cat( "Effect size estimate : ", analysis$pt_ests$effect_size, paste0(" (", 100 * (1 - analysis$options$alpha), "% CL: "), unlist(analysis$int_ests$effect_size),")\n" ) } GEEanalysis <- function(analysis, resamples){ trial <- analysis$trial link <- analysis$options$link alpha <- analysis$options$alpha cfunc <- analysis$options$cfunc fterms <- analysis$options$fterms pt_ests <- analysis$pt_ests int_ests <- analysis$int_ests method <- analysis$options$method cluster <- NULL model_object <- get_GEEmodel(trial = trial, link = link, fterms = fterms) summary.fit <- summary(model_object) z <- -qnorm(alpha/2) #standard deviation score for calculating confidence intervals lp_yC <- summary.fit$coefficients[1, 1] se_lp_yC <- summary.fit$coefficients[1, 2] clusterSize <- nrow(trial)/nlevels(as.factor(trial$cluster)) # remove the temporary objects from the dataframe model_object$data$y1 <- model_object$data$y0 <- model_object$data$y_off <- NULL pt_ests$controlY <- invlink(link, lp_yC) int_ests$controlY <- namedCL( invlink(link, c(lp_yC - z * se_lp_yC, lp_yC + z * se_lp_yC)), alpha = alpha ) # Intracluster correlation pt_ests$ICC <- noLabels(summary.fit$corr[1]) #with corstr = 'exchangeable', alpha is the ICC se_ICC <- noLabels(summary.fit$corr[2]) int_ests$ICC <- namedCL( noLabels(c(pt_ests$ICC - z * se_ICC, pt_ests$ICC + z * se_ICC)), alpha = alpha ) pt_ests$DesignEffect <- 1 + (clusterSize - 1) * pt_ests$ICC #Design Effect int_ests$DesignEffect <- 1 + (clusterSize - 1) * int_ests$ICC # Estimation of effect_size does not apply if analysis is of baseline only (cfunc='Z') pt_ests$effect_size <- NULL if (cfunc == "X") { lp_yI <- summary.fit$coefficients[1, 1] + summary.fit$coefficients[2, 1] se_lp_yI <- sqrt( model_object$geese$vbeta[1, 1] + 
model_object$geese$vbeta[2, 2] + 2 * model_object$geese$vbeta[1,2] ) int_ests$interventionY <- namedCL( invlink(link, c(lp_yI - z * se_lp_yI, lp_yI + z * se_lp_yI)), alpha = alpha) int_ests$effect_size <- estimateCLeffect_size( q50 = summary.fit$coefficients[, 1], Sigma = model_object$geese$vbeta, alpha = alpha, resamples = resamples, method = method, link = link) pt_ests$interventionY <- invlink(link, lp_yI) pt_ests$effect_size <- (1 - invlink(link, lp_yI)/invlink(link, lp_yC)) } analysis$model_object <- model_object analysis$pt_ests <- pt_ests[names(pt_ests) != "model_object"] analysis$int_ests <- int_ests return(analysis) } get_GEEmodel <- function(trial, link, fterms){ # GEE analysis of cluster effects cluster <- NULL fterms <- c(switch(link, "identity" = "y1/y_off ~ 1", "log" = "y1 ~ 1 + offset(log(y_off))", "cloglog" = "cbind(y1, 1) ~ 1 + offset(log(y_off))", "logit" = "cbind(y1,y0) ~ 1"), fterms) formula <- stats::as.formula(paste(fterms, collapse = "+")) if (link == "log") { model_object <- geepack::geeglm( formula = formula, id = cluster, data = trial, family = poisson(link = "log"), corstr = "exchangeable", scale.fix = FALSE) } else if (link == "cloglog") { model_object <- geepack::geeglm( formula = formula, id = cluster, data = trial, family = binomial(link = "cloglog"), corstr = "exchangeable", scale.fix = FALSE) } else if (link == "logit") { model_object <- geepack::geeglm( formula = formula, id = cluster, corstr = "exchangeable", data = trial, family = binomial(link = "logit")) } else if (link == "identity") { model_object <- geepack::geeglm( formula = formula, id = cluster, corstr = "exchangeable", data = trial, family = gaussian) } return(model_object)} LME4analysis <- function(analysis, cfunc, trial, link, fterms){ trial <- analysis$trial link <- analysis$options$link cfunc <- analysis$options$cfunc FUN <- get_FUN(cfunc, variant = 0) alpha <- analysis$options$alpha scale_par <- analysis$options$scale_par distance <- analysis$options$distance log_sp_prior <- analysis$options$log_sp_prior linearity <- analysis$options$linearity distance_type <- analysis$options$distance_type fterms <- analysis$options$fterms # TODO replace the use of ftext with fterms ftext <- analysis$options$ftext log_scale_par <- NA contrasts <- NULL fterms = switch(link, "identity" = c("y1/y_off ~ 1", fterms), "log" = c("y1 ~ 1", fterms, "offset(log(y_off))"), "cloglog" = c("y1 ~ 1", fterms, "offset(log(y_off))"), "logit" = c("cbind(y1,y0) ~ 1", fterms)) formula <- stats::as.formula(paste(fterms, collapse = "+")) if (!identical(distance_type, "No fixed effects of distance ")) { if (analysis$options$penalty > 0) { if (identical(Sys.getenv("TESTTHAT"), "true")) { log_scale_par <- 2.0 } else { tryCatch({ #message "Estimating scale parameter for spillover interval\n") log_scale_par <- stats::optimize( f = estimateSpilloverLME4, interval = log_sp_prior, maximum = FALSE, tol = 0.1, trial = trial, FUN = FUN, formula = formula, link = link, distance = distance)$minimum }, error = function(e){ message("*** Spillover scale parameter cannot be estimated ***") log_scale_par <- 0 }) } scale_par <- exp(log_scale_par) } analysis$options$scale_par <- scale_par if (distance %in% c('disc','kern')) { trial <- compute_distance(trial, distance = distance, scale_par = scale_par)$trial x <- trial[[distance]] analysis$trial <- trial } else { x <- trial[[distance]] / scale_par } trial$pvar <- eval(parse(text = FUN)) } model_object <- switch(link, "identity" = lme4::lmer(formula = formula, data = trial, REML = FALSE), "log" = 
lme4::glmer(formula = formula, data = trial, family = poisson), "logit" = lme4::glmer(formula = formula, data = trial, family = binomial), "cloglog" = lme4::glmer(formula = formula, data = trial, family = binomial)) analysis$pt_ests$scale_par <- exp(log_scale_par) analysis$pt_ests$deviance <- unname(summary(model_object)$AICtab["deviance"]) analysis$pt_ests$AIC <- unname(summary(model_object)$AICtab["AIC"]) analysis$pt_ests$df <- unname(summary(model_object)$AICtab["df.resid"]) # if the spillover parameter has been estimated then penalise the AIC and # adjust the degrees of freedom in the output cov <- q50 <- NULL analysis$pt_ests$AIC <- analysis$pt_ests$AIC + analysis$options$penalty analysis$pt_ests$df <- analysis$pt_ests$df - (analysis$options$penalty > 0) coefficients <- summary(model_object)$coefficients if (!identical(distance_type, "No fixed effects of distance ")) { WaldP <- ifelse(ncol(coefficients) == 4, coefficients['pvar',4], NA) } if (grepl("pvar", ftext, fixed = TRUE) | grepl("arm", ftext, fixed = TRUE)) { q50 <- summary(model_object)$coefficients[,1] names(q50)[grep("Int",names(q50))] <- "int" names(q50)[grep("arm",names(q50))] <- "arm" names(q50)[grep("pvar",names(q50))] <- "pvar" cov <- vcov(model_object) rownames(cov) <- colnames(cov) <- names(q50) } analysis$model_object <- model_object if (!identical(cfunc, "Z")){ sample <- as.data.frame(MASS::mvrnorm(n = 10000, mu = q50, Sigma = cov)) analysis <- extractEstimates(analysis = analysis, sample = sample) } else { analysis$pt_ests$controlY <- invlink(link, summary(model_object)$coefficients[,1]) } return(analysis) } INLAanalysis <- function(analysis, requireMesh = requireMesh, inla_mesh = inla_mesh){ trial <- analysis$trial cfunc <- analysis$options$cfunc link <- analysis$options$link distance <- analysis$options$distance log_sp_prior <- analysis$options$log_sp_prior distance_type <- analysis$options$distance_type linearity <- analysis$options$linearity scale_par <- analysis$options$scale_par alpha <- analysis$options$alpha FUN <- get_FUN(cfunc, variant = 0) fterms <- analysis$options$fterms # TODO replace the use of ftext with fterms ftext <- analysis$options$ftext if (identical(link,"identity")) { fterms <- c("y1/y_off ~ 0 + int", fterms) } else { fterms <- c("y1 ~ 0 + int", fterms) } formula <- stats::as.formula(paste(fterms, collapse = "+")) trial <- dplyr::mutate(trial, id = dplyr::row_number()) # Check if an appropriate inla_mesh is present and create one if necessary # If a mesh is provided use scale_par from the mesh # If spatial predictions are not required a minimal mesh is sufficient pixel <- 0.5 if (!requireMesh) pixel <- (max(trial$x) - min(trial$x))/2 if (is.null(inla_mesh)) { inla_mesh <- compute_mesh( trial = trial, offset = -0.1, max.edge = 0.25, inla.alpha = 2, maskbuffer = 0.5, pixel = pixel) } y_off <- NULL spde <- inla_mesh$spde df = data.frame( int = rep(1, nrow(trial)), id = trial$id) if ("int + arm" %in% fterms | "arm" %in% fterms) df$arm = ifelse(trial$arm == "intervention", 1, 0) if ("f(cluster, model = \'iid\')" %in% fterms) df$cluster = trial$cluster effectse <- list(df = df, s = inla_mesh$indexs) dfp = data.frame( int = rep(1, nrow(inla_mesh$prediction)), id = inla_mesh$prediction$id) if ("int + arm" %in% fterms | "arm" %in% fterms) dfp$arm = ifelse(inla_mesh$prediction$arm == "intervention", 1, 0) if ("f(cluster, model = \"iid\")" %in% fterms) dfp$cluster = inla_mesh$prediction$cluster effectsp <- list(df = dfp, s = inla_mesh$indexs) lc <- NULL FUN <- get_FUN(cfunc=cfunc, variant = 0) 
log_scale_par <- ifelse(is.null(scale_par), NA, log(scale_par)) if (!identical(distance_type, "No fixed effects of distance ")) { if (analysis$options$penalty > 0) { if (identical(Sys.getenv("TESTTHAT"), "true")) { log_scale_par <- 2.0 } else { tryCatch({ #messag"Estimating scale parameter for spillover interval\n") log_scale_par <- stats::optimize( f = estimateSpilloverINLA, interval = log_sp_prior, tol = 0.1, trial = trial, FUN = FUN, formula = formula, link = link, inla_mesh = inla_mesh, distance = distance)$minimum }, error = function(e){ message("*** Spillover scale parameter cannot be estimated ***") log_scale_par <- 5 }) } scale_par <- exp(log_scale_par) } analysis$options$scale_par <- scale_par if (distance %in% c("disc", "kern")) { trial <- compute_distance(trial, distance = distance, scale_par = scale_par)$trial x <- trial[[distance]] trial$pvar <- eval(parse(text = FUN)) analysis$trial <- trial inla_mesh$prediction[[distance]] <- trial[[distance]][inla_mesh$prediction$nearestNeighbour] x <- inla_mesh$prediction[[distance]] } else { x <- trial[[distance]]/scale_par trial$pvar <- eval(parse(text = FUN)) inla_mesh$prediction[[distance]] <- trial[[distance]][inla_mesh$prediction$nearestNeighbour] x <- inla_mesh$prediction[[distance]]/scale_par } inla_mesh$prediction$pvar <- eval(parse(text = FUN)) effectse$df$pvar <- trial$pvar effectsp$df$pvar <- inla_mesh$prediction$pvar # set up linear contrasts lc <- INLA::inla.make.lincomb(int = 1, pvar = 1) if (grepl("arm", ftext, fixed = TRUE)){ lc <- INLA::inla.make.lincomb(int = 1, pvar = 1, arm = 1) } } else if (grepl("arm", ftext, fixed = TRUE)) { lc <- INLA::inla.make.lincomb(int = 1, arm = 1) } # stack for estimation stk.e stk.e <- INLA::inla.stack( tag = "est", data = list(y1 = trial$y1, y_off = trial$y_off), A = list(1, A = inla_mesh$A), effects = effectse) # stack for prediction stk.p stk.p <- INLA::inla.stack( tag = "pred", data = list(y1 = NA, y_off = NA), A = list(1, inla_mesh$Ap), effects = effectsp) # stk.full comprises both stk.e and stk.p if a prediction mesh is in use stk.full <- INLA::inla.stack(stk.e, stk.p) if (link == "identity") { model_object <- INLA::inla( formula, family = "gaussian", lincomb = lc, control.family = list(link = "identity"), data = INLA::inla.stack.data(stk.full), control.fixed = list(correlation.matrix = TRUE), control.predictor = list(compute = TRUE, link = 1, A = INLA::inla.stack.A(stk.full)), control.compute = list(dic = TRUE)) } else if (link == "log") { model_object <- INLA::inla( formula, family = "poisson", lincomb = lc, control.family = list(link = "log"), data = INLA::inla.stack.data(stk.full), control.fixed = list(correlation.matrix = TRUE), control.predictor = list(compute = TRUE, link = 1, A = INLA::inla.stack.A(stk.full)), control.compute = list(dic = TRUE)) } else if (link == "logit") { model_object <- INLA::inla( formula, family = "binomial", Ntrials = y_off, lincomb = lc, control.family = list(link = "logit"), data = INLA::inla.stack.data(stk.full), control.fixed = list(correlation.matrix = TRUE), control.predictor = list(compute = TRUE, link = 1, A = INLA::inla.stack.A(stk.full)), control.compute = list(dic = TRUE)) } else if (link == "cloglog") { model_object <- INLA::inla( formula, family = "binomial", Ntrials = 1, lincomb = lc, control.family = list(link = "cloglog"), data = INLA::inla.stack.data(stk.full), control.fixed = list(correlation.matrix = TRUE), control.predictor = list(compute = TRUE, link = 1, A = INLA::inla.stack.A(stk.full)), control.compute = list(dic = TRUE)) } 
analysis$pt_ests$scale_par <- scale_par # The DIC is penalised if a scale parameter was estimated analysis$pt_ests$DIC <- model_object$dic$dic + analysis$options$penalty # Augment the inla results list with application specific quantities index <- INLA::inla.stack.index(stack = stk.full, tag = "pred")$data inla_mesh$prediction$prediction <- invlink(link, model_object$summary.linear.predictor[index, "0.5quant"]) # Compute sample-based confidence limits for intervened outcome and effect_size # intervention effects are estimated q50 <- cov <- list() if (grepl("pvar", ftext, fixed = TRUE) | grepl("arm", ftext, fixed = TRUE)) { # Specify the point estimates of the parameters q50 <- model_object$summary.lincomb.derived$"0.5quant" names(q50) <- rownames(model_object$summary.lincomb.derived) # Specify the covariance matrix of the parameters cov <- model_object$misc$lincomb.derived.covariance.matrix } analysis$inla_mesh <- inla_mesh analysis$model_object <- model_object if (!identical(cfunc, "Z")){ sample <- as.data.frame(MASS::mvrnorm(n = 10000, mu = q50, Sigma = cov)) analysis <- extractEstimates(analysis = analysis, sample = sample) } else { analysis$pt_ests$controlY <- invlink(link, model_object$summary.fixed[["0.5quant"]]) } return(analysis) } MCMCanalysis <- function(analysis){ trial <- analysis$trial link <- analysis$options$link cfunc <- analysis$options$cfunc alpha <- analysis$options$alpha fterms <- analysis$options$fterms linearity <- analysis$options$linearity personalProtection <- analysis$options$personalProtection distance <- analysis$options$distance scale_par <- analysis$options$scale_par log_sp_prior <- analysis$options$log_sp_prior clusterEffects<- analysis$options$clusterEffects FUN <- get_FUN(cfunc, variant = 0) nsteps <- 10 # JAGS parameters nchains <- 4 iter.increment <- 2000 max.iter <- 50000 n.burnin <- 1000 datajags <- list(N = nrow(trial)) if (identical(linearity,"Estimated scale parameter: ")) { # Create vector of candidate values of scale_par # by dividing the prior (on log scale) into equal bins # log_sp is the central value of each bin nbins <- 10 binsize <- (log_sp_prior[2] - log_sp_prior[1])/(nbins - 1) log_sp <- log_sp_prior[1] + c(0, seq(1:nbins -1 )) * binsize # calculate pvar corresponding to first value of sp Pr <- compute_pvar(trial = trial, distance = distance, scale_par = exp(log_sp[1]), FUN = FUN) for(i in 1:nbins - 1){ Pri <- compute_pvar(trial = trial, distance = distance, scale_par = exp(log_sp[1 + i]), FUN = FUN) Pr <- data.frame(cbind(Pr, Pri)) } log_sp1 <- c(log_sp + binsize/2, log_sp[nbins - 1] + binsize/2) datajags$Pr <- as.matrix(Pr) datajags$nbins <- nbins datajags$log_sp <- log_sp cfunc <- "O" } else if (identical(cfunc, "R")) { trial <- compute_distance(trial, distance = distance, scale_par = scale_par)$trial datajags$d <- trial[[distance]] datajags$mind <- min(trial[[distance]]) datajags$maxd <- max(trial[[distance]]) } if ("arm" %in% fterms) { datajags$intervened <- ifelse(trial$arm == "intervention", 1, 0) } if (identical(link, 'identity')) { datajags$y <- trial$y1/trial$y_off } else { datajags$y1 <- trial$y1 datajags$y_off <- trial$y_off } if (clusterEffects) { datajags$cluster <- as.numeric(as.character(trial$cluster)) datajags$ncluster <- max(as.numeric(as.character(trial$cluster))) } # construct the rjags code by concatenating strings text1 <- "model{\n" text2 <- switch(cfunc, X = "for(i in 1:N){\n", Z = "for(i in 1:N){\n", R = "for(i in 1:N){\n pr[i] <- (d[i] - mind)/(maxd - mind)", O = "pr_s[1] <- pnorm(log_sp[1],log_scale_par,tau.s) 
cum_pr[1] <- pr_s[1] for (j in 1:(nbins - 2)) { pr_s[j + 1] <- pnorm(log_sp[j + 1],log_scale_par, tau.s) - cum_pr[j] cum_pr[j + 1] <- pr_s[j + 1] + cum_pr[j] } pr_s[nbins] <- 1 - cum_pr[nbins - 1] for(i in 1:N){\n for(j in 1:nbins){\n pr_j[i,j] <- sum(Pr[i,j] * pr_s[j]) } pr[i] <- sum(pr_j[i, ]) ") text3 <- switch(link, "identity" = "y[i] ~ dnorm(lp[i],tau1) \n", "log" = "gamma1[i] ~ dnorm(0,tau1) \n Expect_y[i] <- exp(lp[i] + gamma1[i]) * y_off[i] \n y1[i] ~ dpois(Expect_y[i]) \n", "logit" = "logitp[i] <- lp[i] \n p[i] <- 1/(1 + exp(-logitp[i])) \n y1[i] ~ dbin(p[i],y_off[i]) \n", "cloglog" = "gamma1[i] ~ dnorm(0,tau1) \n Expect_p[i] <- 1 - exp(- exp(lp[i] + gamma1[i]) * y_off[i]) \n y1[i] ~ dbern(Expect_p[i]) \n" ) # construct JAGS code for the linear predictor if (cfunc %in% c('Z', 'X')) { text4 <- "lp[i] <- int" } else { text4 <- "lp[i] <- int + pvar * pr[i]" } text5 <- ifelse(clusterEffects, " + gamma[cluster[i]] \n }\n for(ic in 1:ncluster) {\n gamma[ic] ~ dnorm(0, tau)\n }\n tau <- 1/(sigma * sigma) \n sigma ~ dunif(0, 2) \n ", "}\n") text6 <- "log_scale_par ~ dnorm(0, 1E-1) \n scale_par <- exp(log_scale_par) \n tau.s ~ dunif(0,3) \n int ~ dnorm(0, 1E-2) \n" if ("arm" %in% fterms) { text4 <- paste0(text4, " + arm * intervened[i]") text6 <- paste(text6, "arm ~ dnorm(0, 1E-2) \n") } text7 <- ifelse(identical(cfunc,'Z'), "pvar <- 0 \n", "pvar ~ dnorm(0, 1E-2) \n") text8 <- switch(link, "identity" = "tau1 <- 1/(sigma1 * sigma1) \n sigma1 ~ dunif(0, 2) } \n", "log" = "tau1 <- 1/(sigma1 * sigma1) \n sigma1 ~ dunif(0, 2) } \n", "cloglog" = "tau1 <- 1/(sigma1 * sigma1) \n sigma1 ~ dunif(0, 2) } \n", "logit" = "} \n") MCMCmodel <- paste0(text1, text2, text3, text4, text5, text6, text7, text8) if (identical(cfunc, "E")) cfunc = "ES" parameters.to.save <- switch(cfunc, O = c("int", "pvar", "scale_par"), X = c("int"), Z = c("int"), R = c("int", "pvar")) if ("arm" %in% fterms) parameters.to.save <- c(parameters.to.save, "arm") model_object <- jagsUI::autojags(data = datajags, inits = NULL, parameters.to.save = parameters.to.save, model.file = textConnection(MCMCmodel), n.chains = nchains, iter.increment = iter.increment, n.burnin = n.burnin, max.iter=max.iter) sample <- data.frame(rbind(model_object$samples[[1]],model_object$samples[[2]])) model_object$MCMCmodel <- MCMCmodel analysis$model_object <- model_object analysis$trial <- trial analysis <- extractEstimates(analysis = analysis, sample = sample) analysis$options$scale_par <- analysis$pt_ests$scale_par # distance must be re-computed in the case of surrounds with estimated scale parameter analysis$trial <- compute_distance(trial, distance = distance, scale_par = analysis$options$scale_par)$trial analysis$pt_ests$DIC <- model_object$DIC return(analysis) } group_data <- function(analysis, distance = NULL, grouping = "quintiles"){ # define the limits of the curve both for control and intervention arms trial <- analysis$trial link <- analysis$options$link alpha <- analysis$options$alpha if (is.null(distance)) distance <- analysis$options$distance y_off <- y1 <- average <- upper <- lower <- d <- NULL cats <- NULL breaks0 <- breaks1 <- rep(NA, times = 6) # categorisation of trial data for plotting if (identical(grouping, "quintiles")) { groupvar <- ifelse(trial$arm == "intervention", 1000 + trial[[distance]], trial[[distance]]) breaks0 <-unique(c(-Inf, quantile(groupvar[trial$arm == "control"], probs = seq(0.2, 1, by = 0.20)))) breaks1 <-unique(c(999, quantile(groupvar[trial$arm == "intervention"], probs = seq(0.2, 1, by = 0.20)))) trial$cat <- 
cut( groupvar, breaks=c(breaks0, breaks1),labels = FALSE) arm <- c(rep("control", times = length(breaks0)-1), rep("intervention", times = length(breaks1)-1)) } else { range_d <- max(trial[[distance]]) - min(trial[[distance]]) trial$cat <- cut( trial[[distance]], breaks = c(-Inf, min(trial[[distance]]) + seq(1:9) * range_d/10, Inf),labels = FALSE) arm <- NA } trial$d <- trial[[distance]] if (link %in% c('log', 'cloglog')) { data <- data.frame( trial %>% dplyr::group_by(cat) %>% dplyr::summarize( locations = dplyr::n(), positives = sum(y1), total = sum(y_off), d = median(d), average = Williams(x=y1/y_off, alpha=alpha, option = 'M'), lower = Williams(x=y1/y_off, alpha=alpha, option = 'L'), upper = Williams(x=y1/y_off, alpha=alpha, option = 'U'))) } else if (link == 'logit') { data <- data.frame( trial %>% dplyr::group_by(cat) %>% dplyr::summarize( locations = dplyr::n(), d = median(d), positives = sum(y1), total = sum(y_off))) # overwrite with proportions and binomial confidence intervals by category data$average <- data$positives/data$total data$upper <- with(data, average - qnorm(alpha/2) * (sqrt(average * (1 - average)/total))) data$lower <- with(data, average + qnorm(alpha/2) * (sqrt(average * (1 - average)/total))) } else if (link == 'identity') { # overall means and t-based confidence intervals by category data <- trial %>% dplyr::group_by(cat) %>% dplyr::summarize( locations = dplyr::n(), positives = sum(y1), total = sum(y_off), d = median(d), average = mean(x=y1/y_off), lower = Tinterval(y1/y_off, alpha = alpha, option = 'L'), upper = Tinterval(y1/y_off, alpha = alpha, option = 'U')) } data$arm <- arm return(data) } # add labels to confidence limits namedCL <- function(limits, alpha = alpha) { names(limits) <- c( paste0(100 * alpha/2, "%"), paste0(100 - 100 * alpha/2, "%") ) return(limits) } # Minimal data description and crude effect_size estimate get_description <- function(trial, link, alpha, baselineOnly) { lp <- arm <- NULL if(baselineOnly) trial$arm <- "control" clusterSum <- clusterSummary(trial = trial, link = "identity") sum.numerators <- tapply(trial$y1, trial$arm, FUN = sum) sum.denominators <- tapply(trial$y_off, trial$arm, FUN = sum) ratio <- sum.numerators/sum.denominators if(baselineOnly) { controlY <- ratio[1] effect_size <- interventionY <- NULL } else { controlY <- ratio[1] interventionY <- ratio[2] effect_size <- switch(link, "identity" = ratio[2] - ratio[1], "log" = 1 - ratio[2]/ratio[1], "cloglog" = 1 - ratio[2]/ratio[1], "logit" = 1 - ratio[2]/ratio[1]) } means <- clusterSum %>% group_by(arm) %>% dplyr::summarize(lp = mean(lp)) deviations <- ifelse(clusterSum$arm == "control", clusterSum$lp - means$lp[1], clusterSum$lp - means$lp[2]) meanlp <- mean(ifelse(clusterSum$arm == "control", means$lp[1], means$lp[2]), na.rm = TRUE) cv_percent <- sd(deviations, na.rm = TRUE)/meanlp * 100 cv_intervals <- cv_interval(K = cv_percent/100, n = nrow(clusterSum), alpha = alpha) description <- list( sum.numerators = sum.numerators, sum.denominators = sum.denominators, controlY = controlY, interventionY = interventionY, effect_size = effect_size, nclusters = max(as.numeric(as.character(trial$cluster))), cv_percent = cv_percent, cv_lower = cv_intervals$lcl * 100, cv_upper = cv_intervals$ucl * 100, locations = nrow(trial) ) return(description) } cv_interval <- function(K, n, alpha) { # Vangel (1996) method for interval estimates of the cv # https://www.jstor.org/stable/2685039 u1 <- stats::qchisq(p = 1 - alpha/2, df = n - 1) u2 <- stats::qchisq(p = alpha/2, df = n - 1) lcl <- 
K/sqrt(((u1 + 2)/n - 1)* K^2 + u1/(n - 1)) ucl <- K/sqrt(((u2 + 2)/n - 1)* K^2 + u2/(n - 1)) value <- list(lcl = lcl, ucl = ucl) return(value) } # Functions for T and GEE analysis noLabels <- function(x) { xclean <- as.matrix(x) dimnames(xclean) <- NULL xclean <- as.vector(xclean) return(xclean) } estimateCLeffect_size <- function(q50, Sigma, alpha, resamples, method, link) { # Use resampling approach to avoid need for Taylor approximation use at least 10000 samples (this is very # cheap) resamples1 <- max(resamples, 10000, na.rm = TRUE) samples <- MASS::mvrnorm(n = resamples1, mu = q50, Sigma = Sigma) pC <- invlink(link, samples[, 1]) # for the T method the t-tests provide estimates for the s.e. for both arms separately # for the GEE method the input is in terms of the incremental effect of the intervention if (method == "T") pI <- invlink(link, samples[, 2]) if (method == "GEE") pI <- invlink(link, samples[, 1] + samples[, 2]) eff <- 1 - pI/pC CL <- quantile(eff, probs = c(alpha/2, 1 - alpha/2)) return(CL) } # Calculate the distance or surround for an arbitrary location calculate_singlevalue <- function(i, trial , prediction , distM, distance, scale_par){ if (identical(distance, "nearestDiscord")) { discords <- (trial$arm != prediction$arm[i]) nearestDiscord <- min(distM[i, discords]) value <- ifelse(prediction$arm[i] == "control", -nearestDiscord, nearestDiscord) } if (distance %in% c("hdep", "sdep")){ X = list(x = prediction$x[i], y = prediction$y[i]) depthlist <- depths(X, trial = trial) value <- depthlist[distance] } if (identical(distance, "disc")) { value <- sum(trial$arm =='intervention' & (distM[i, ] <= scale_par)) if(identical(prediction$arm,'intervention')) value <- value - 1 } return(value) } # Use profiling to estimate scale_par estimateSpilloverINLA <- function( log_scale_par = log_scale_par, trial = trial, FUN = FUN, inla_mesh = inla_mesh, formula = formula, link = link, distance = distance){ y1 <- y0 <- y_off <- NULL if (distance %in% c('disc','kern')) { updated <- compute_distance(trial, distance = distance, scale_par = exp(log_scale_par)) x <- trial[[distance]] <- updated$trial[[distance]] } else { x <- trial[[distance]]/exp(log_scale_par) } trial$pvar <- eval(parse(text = FUN)) stk.e <- INLA::inla.stack( tag = "est", data = list(y1 = trial$y1, y_off = trial$y_off), A = list(1, A = inla_mesh$A), effects = list( data.frame( int = rep(1, nrow(trial)), arm = ifelse(trial$arm == "intervention", 1, 0), pvar = trial$pvar, id = trial$id, cluster = trial$cluster ), s = inla_mesh$indexs ) ) # run the model with just the estimation stack (no predictions needed at this stage) if (link == "identity") { result.e <- INLA::inla( formula, family = "gaussian", control.family = list(link = "identity"), data = INLA::inla.stack.data(stk.e), control.predictor = list(compute = TRUE, link = 1, A = INLA::inla.stack.A(stk.e)), control.compute = list(dic = TRUE)) } else if (link == "log") { result.e <- INLA::inla( formula, family = "poisson", control.family = list(link = "log"), data = INLA::inla.stack.data(stk.e), control.predictor = list(compute = TRUE, link = 1, A = INLA::inla.stack.A(stk.e)), control.compute = list(dic = TRUE)) } else if (link == "logit") { result.e <- INLA::inla( formula, family = "binomial", Ntrials = y_off, control.family = list(link = "logit"), data = INLA::inla.stack.data(stk.e), control.predictor = list(compute = TRUE, link = 1, A = INLA::inla.stack.A(stk.e)), control.compute = list(dic = TRUE)) } else if (link == "cloglog") { result.e <- INLA::inla( formula, family = 
"binomial", Ntrials = 1, control.family = list(link = "cloglog"), data = INLA::inla.stack.data(stk.e), control.predictor = list(compute = TRUE, link = 1, A = INLA::inla.stack.A(stk.e)), control.compute = list(dic = TRUE)) } # The DIC is penalised to allow for estimation of scale_par loss <- result.e$dic$family.dic + 2 # Display the DIC here if necessary for debugging # messag"\rDIC: ", loss, " Spillover scale parameter: ", exp(log_scale_par), " \n") return(loss) } # Use profiling to estimate scale_par estimateSpilloverLME4 <- function( log_scale_par = log_scale_par, trial = trial, FUN = FUN, formula = formula, link = link, distance = distance){ if (distance %in% c('disc','kern')) { updated <- compute_distance(trial, distance = distance, scale_par = exp(log_scale_par)) x <- trial[[distance]] <- updated$trial[[distance]] } else { x <- trial[[distance]]/exp(log_scale_par) } trial$pvar <- eval(parse(text = FUN)) try( model_object <- switch(link, "identity" = lme4::lmer(formula = formula, data = trial, REML = FALSE), "log" = lme4::glmer(formula = formula, data = trial, family = poisson), "logit" = lme4::glmer(formula = formula, data = trial, family = binomial), "cloglog" = lme4::glmer(formula = formula, data = trial, family = binomial)) ) loss <- ifelse (is.null(model_object),999999, unlist(summary(model_object)$AICtab["AIC"])) # The AIC is used as a loss function # Display the AIC here if necessary for debugging # messag"\rAIC: ", loss + 2, " Spillover scale parameter: ", exp(log_scale_par), " \n") return(loss) } # Add estimates to analysis list add_estimates <- function(analysis, bounds, CLnames){ bounds <- data.frame(bounds) for (variable in c("int", "arm", "pvar", "controlY","interventionY","effect_size", "personal_protection","scale_par", "deviance","spillover_interval","spillover_limit0", "spillover_limit1","contaminate_pop_pr", "total_effect", "ipsilateral_spillover", "contralateral_spillover")) { if (variable %in% colnames(bounds)) { analysis$pt_ests[[variable]] <- bounds[2, variable] analysis$int_ests[[variable]] <- stats::setNames( bounds[c(1, 3), variable], CLnames) } } return(analysis) } # Williams mean and confidence intervals Williams <- function(x, alpha, option){ logx_1 <- log(x + 1) logx_1[!is.finite(logx_1)] <- NA if(sum(is.finite(logx_1)) < 3) { value <- switch(option, M = mean(logx_1), L = NA, U = NA) } else { value <- NA tryCatch({ t <- stats::t.test(x = logx_1, conf.level = 1 - alpha) value <- switch(option, M = t$estimate, L = t$conf.int[1], U = t$conf.int[2]) }, error = function(e){ message("*** Averages and interval estimates not defined for some groups ***") }) } returnvalue <- as.numeric(exp(value) - 1) return(returnvalue) } # T-based confidence intervals Tinterval <- function(x, alpha, option){ if(length(x) < 3){ value <- NA } else { t <- stats::t.test(x = x, conf.level = 1 - alpha) value <- switch(option, L = t$conf.int[1], U = t$conf.int[2]) } returnvalue <- as.numeric(value) } #' Summary of the results of a statistical analysis of a CRT #' #' \code{summary.CRTanalysis} generates a summary of a \code{CRTanalysis} including the main results #' @param object an object of class \code{"CRTanalysis"} #' @param ... other arguments used by summary #' @method summary CRTanalysis #' @export #' @return No return value, writes text to the console. #' @examples #' {example <- readdata('exampleCRT.txt') #' exampleT <- CRTanalysis(example, method = "T") #' summary(exampleT) #' } summary.CRTanalysis <- function(object, ...) 
{ defaultdigits <- getOption("digits") on.exit(options(digits = defaultdigits)) options(digits = 3) scale_par <- object$options$scale_par cat("\n=====================CLUSTER RANDOMISED TRIAL ANALYSIS =================\n") cat( "Analysis method: ", object$options$method, "\nLink function: ", object$options$link, "\n") if(!identical(object$options$distance_type,"No fixed effects of distance ")){ cat(paste0("Measure of distance or surround: ", getDistanceText(distance = object$options$distance, scale_par = scale_par),"\n")) if (!is.null(object$options$linearity)){ cat(object$options$linearity) if (!is.null(scale_par)) cat(paste0(round(scale_par, digits = 3),"\n")) } } if (identical(object$options$method,"WCA")) { wc_summary(object) } else { if (!is.null(object$options$ftext)) cat("Model formula: ", object$options$ftext, "\n") cat(switch(object$options$cfunc, Z = "No comparison of arms \n", X = "No modelling of spillover \n", S = "Piecewise linear function for spillover\n", P = "Error function model for spillover\n", L = "Sigmoid (logistic) function for spillover\n", R = "Rescaled linear function for spillover\n")) CLtext <- paste0(" (", 100 * (1 - object$options$alpha), "% CL: ") cat( "Estimates: Control: ", object$pt_ests$controlY, CLtext, unlist(object$int_ests$controlY), ")\n" ) if (!is.null(object$pt_ests$effect_size)) { if (!is.null(object$pt_ests$interventionY)) { cat( " Intervention: ", object$pt_ests$interventionY, CLtext, unlist(object$int_ests$interventionY), ")\n" ) effect.distance <- ifelse(object$options$link == 'identity', "Effect size: "," Efficacy: ") cat(" ", effect.distance, object$pt_ests$effect_size, CLtext, unlist(object$int_ests$effect_size), ")\n" ) } if (!is.na(object$pt_ests$personal_protection)) { cat( "Personal protection % : ", object$pt_ests$personal_protection*100, CLtext, unlist(object$int_ests$personal_protection*100), ")\n" ) if (object$pt_ests$personal_protection < 0 | object$pt_ests$personal_protection > 1){ cat( "** Warning: different signs for main effect and personal protection effect: face validity check fails **\n") } } if (!identical(object$options$distance_type, "No fixed effects of distance ")){ if (!is.null(object$pt_ests$spillover_interval)){ cat( "spillover interval(km): ", object$pt_ests$spillover_interval, CLtext, unlist(object$int_ests$spillover_interval), ")\n" ) } if (!is.null(object$spillover$contaminate_pop_pr)){ cat( "% locations contaminated:", object$spillover$contaminate_pop_pr*100, CLtext, unlist(object$int_ests$contaminate_pop_pr)*100, "%)\n") } } if (!is.null(object$int_ests$total_effect)) { cat("Total effect :", object$pt_ests$total_effect, CLtext, unlist(object$int_ests$total_effect),")\n") cat("Ipsilateral Spillover :", object$pt_ests$ipsilateral_spillover, CLtext, unlist(object$int_ests$ipsilateral_spillover),")\n") cat("Contralateral Spillover :", object$pt_ests$contralateral_spillover, CLtext, unlist(object$int_ests$contralateral_spillover),")\n") } } if (!is.null(object$description$cv_percent)) cat("Coefficient of variation: ", object$description$cv_percent,"%", CLtext, object$description$cv_lower, object$description$cv_upper,")\n" ) if (!is.null(object$pt_ests$ICC)) { cat( "Intracluster correlation (ICC) : ", object$pt_ests$ICC, CLtext, unlist(object$int_ests$ICC),")\n" ) } options(digits = defaultdigits) # goodness of fit if (!is.null(object$pt_ests$deviance)) cat("deviance: ", object$pt_ests$deviance, "\n") if (!is.null(object$pt_ests$DIC)) cat("DIC : ", object$pt_ests$DIC) if (!is.null(object$pt_ests$AIC)) cat("AIC : ", 
object$pt_ests$AIC) if (object$options$penalty > 0) { cat(" including penalty for the spillover scale parameter\n") } else { cat(" \n") } # TODO: add the degrees of freedom to the output if (!is.null(object$pt_ests$p.value)){ cat("P-value (2-sided): ", object$pt_ests$p.value, "\n") } } } extractEstimates <- function(analysis, sample) { alpha <- analysis$options$alpha link <- analysis$options$link method <- analysis$options$method distance <- analysis$options$distance CLnames <- analysis$options$CLnames scale_par <- analysis$options$scale_par sample$controlY <- invlink(link, sample$int) # personal_protection is the proportion of effect attributed to personal protection if ("arm" %in% names(sample) & "pvar" %in% names(sample)) { if (method %in% c("MCMC","LME4")) { sample$lc <- with(sample, int + pvar + arm) } sample$interventionY <- invlink(link, sample$lc) sample$personal_protection <- with( sample, (controlY - invlink(link, int + arm))/(controlY - interventionY)) } else { sample$personal_protection <- NA } if ("arm" %in% names(sample) & !("pvar" %in% names(sample))) { sample$interventionY <- invlink(link, sample$int + sample$arm) } if ("pvar" %in% names(sample) & !("arm" %in% names(sample))) { sample$interventionY <- invlink(link, sample$int + sample$pvar) } if ("interventionY" %in% names(sample)) { sample$effect_size <- 1 - sample$interventionY/sample$controlY } if (!(analysis$options$cfunc %in% c("X", "Z"))) { if (is.null(sample$scale_par)) sample$scale_par <- scale_par if (is.null(sample$scale_par)) sample$scale_par <- 1 spillover_list <- apply(sample, MARGIN = 1, FUN = get_spillover, analysis = analysis) spillover_df <- as.data.frame(do.call(rbind, lapply(spillover_list, as.data.frame))) sample <- cbind(sample, spillover_df) } bounds <- (apply( sample, 2, function(x) {quantile(x, c(alpha/2, 0.5, 1 - alpha/2), alpha = alpha, na.rm = TRUE)})) analysis <- add_estimates(analysis = analysis, bounds = bounds, CLnames = CLnames) return(analysis) } # logit transformation logit <- function(p = p) { return(log(p/(1 - p))) } # cloglog transformation cloglog = function(p) log(-log(1-p)) # link transformation link_tr <- function(link = link, x = x) { value <- switch(link, "identity" = x, "log" = log(x), "logit" = log(x/(1 - x)), "cloglog" = log(-log(1 - x))) return(value) } # inverse transformation of link function invlink <- function(link = link, x = x) { value <- switch(link, "identity" = x, "log" = exp(x), "logit" = 1/(1 + exp(-x)), "cloglog" = 1 - exp(-exp(x))) return(value) } # Contributions to the linear predictor for different spillover functions StraightLine <- function(par, trial) { par[2] <- par[3] <- -9 lp <- par[1] return(lp) } # step function for the case with no spillover StepFunction <- function(par, trial, distance) { par[3] <- -9 lp <- ifelse(trial[[distance]] < 0, par[1], par[1] + par[2]) return(lp) } # piecewise linear model PiecewiseLinearFunction <- function(par, trial, distance) { # constrain the slope parameter to be positive (par[2] is positive if effect_size is negative) scale_par <- par[3] # if scale_par is very large, the curve should be close to a straight line if (scale_par > 20){ lp <- par[1] + 0.5 * par[2] } else { lp <- ifelse( trial[[distance]] > -scale_par/2, par[1] + par[2] * (scale_par/2 + trial[[distance]])/scale_par, par[1]) lp <- ifelse(trial[[distance]] > scale_par/2, par[1] + par[2], lp) } return(lp) } escape = function(x) { value <- 1 - exp(-x) return(value)} piecewise <- function(x) { value <- ifelse(x < -0.5, 0, (0.5 + x)) value <- ifelse(x > 0.5, 1, 
value) return(value)} # rescaled linear model RescaledLinearFunction <- function(par, trial, distance) { # par[3] is not used scale_par <- par[3] lp <- par[1] + rescale(trial[[distance]]) * par[2] return(lp) } rescale <- function(x) { value <- (x - min(x))/(max(x) - min(x)) return(value)} # sigmoid (logit) function InverseLogisticFunction <- function(par, trial, distance) { lp <- par[1] + par[2] * invlink(link = "logit", x = trial[[distance]]/par[3]) return(lp) } # inverse probit function InverseProbitFunction <- function(par, trial, distance) { lp <- par[1] + par[2] * stats::pnorm(trial[[distance]]/par[3]) return(lp) } # escape function EscapeFunction <- function(par, trial, distance) { lp <- par[1] + par[2] * (1 - exp(-(trial[[distance]]/par[3]))) return(lp) } get_FUN <- function(cfunc, variant){ # TODO: remove the duplication and simplify here # Specify the function used for calculating the linear predictor if (identical(variant, 1)) { LPfunction <- c( "StraightLine", "StepFunction", "PiecewiseLinearFunction", "InverseLogisticFunction", "InverseProbitFunction", "RescaledLinearFunction", "EscapeFunction")[which(cfunc == c("Z", "X", "S", "L", "P", "R", "E"))] FUN <- eval(parse(text = LPfunction)) } else { # trap a warning with use of "E" if (identical(cfunc, "E")) cfunc = "ES" FUN <- switch( cfunc, L = "invlink(link='logit', x)", P = "stats::pnorm(x)", S = "piecewise(x)", X = "rescale(x)", Z = "rescale(x)", R = "rescale(x)", ES = "escape(x)") } return(FUN) } get_spillover <- function(x, analysis){ # define the limits of the curve both for control and intervention arms fittedCurve <- get_curve(x = x, analysis = analysis) spillover <- get_spilloverStats(fittedCurve=fittedCurve, trial=analysis$trial, distance = analysis$options$distance) return(spillover) } get_curve <- function(x, analysis) { trial <- analysis$trial link <- analysis$options$link distance <- analysis$options$distance cfunc <- analysis$options$cfunc if ((distance %in% c("disc", "kern")) & identical(cfunc, "E")) cfunc <- "R" total_effect <- ipsilateral_spillover <- contralateral_spillover <- NULL limits <- matrix(x[["controlY"]], nrow = 2, ncol = 2) if(!is.null(x[["interventionY"]])) { limits[ ,2] <- rep(x[["interventionY"]], times = 2) } else { limits[ , 2] <- rep(x[["controlY"]], times = 2) } if (!is.na(x[["personal_protection"]])) { limits[1, 2] <- invlink(link, x[["int"]] + x[["pvar"]]) limits[2, 1] <- invlink(link, x[["int"]] + x[["arm"]]) } if (identical(cfunc, 'X')) { limits[1, 2] <- limits[1, 1] limits[2, 1] <- limits[2, 2] } scale_par <- ifelse("scale_par" %in% names(x), x[["scale_par"]], analysis$options$scale_par) # Trap cases with extreme effect: TODO: a different criterion may be needed for continuous data pars <- link_tr(link,limits) if (sum((pars[, 1] - pars[, 2])^2) > 10000) { limits <- invlink(link, matrix(c(20,20, -20, -20), nrow = 2, ncol = 2)) } if (is.null(trial[[distance]])) trial <- compute_distance(trial, distance = distance, scale_par = scale_par)$trial range_d <- max(trial[[distance]]) - min(trial[[distance]]) d <- min(trial[[distance]]) + range_d * (seq(1:1001) - 1)/1000 if (identical(limits[, 1], limits[ , 2])) { control_curve <- rep(limits[1, 1], 1001) intervention_curve <- rep(limits[2, 1], 1001) } else { par0 <- c(link_tr(link, limits[1, 1]), link_tr(link, limits[1, 2]) - link_tr(link, limits[1, 1]), scale_par) par1 <- c( link_tr(link, limits[2, 1]), link_tr(link, limits[2, 2]) - link_tr(link, limits[2, 1]), scale_par ) # trap extreme cases with undefined, flat or very steep curves if 
(!identical(cfunc, "R")) { if (is.null(scale_par)) { cfunc <- "X" } else if (is.na(scale_par) | scale_par < 0.01 | scale_par > 100 | (sum((pars[, 1] - pars[, 2])^2) > 10000)) { cfunc <- "X" } } FUN1 <- get_FUN(cfunc, variant = 1) fitted_values <- ifelse(trial$arm == 'intervention', invlink(link, FUN1(trial = trial, par = par1, distance = distance)), invlink(link, FUN1(trial = trial, par = par0, distance = distance))) total_effect <- x[["controlY"]] - x[["interventionY"]] ipsilateral_spillover <- mean(fitted_values[trial$arm == 'intervention']) - x[["interventionY"]] contralateral_spillover <- x[["controlY"]] - mean(fitted_values[trial$arm == 'control']) intervention_curve <- invlink(link, FUN1(trial = data.frame(d = d), par = par1, distance = "d")) control_curve <- invlink(link, FUN1(trial = data.frame(d = d), par = par0, distance = "d")) } if(min(d) < 0) { control_curve[d > 0] <- NA intervention_curve[d < 0] <- NA } fittedCurve <- list(d = d, control_curve = control_curve, intervention_curve = intervention_curve, limits = limits, total_effect = total_effect, ipsilateral_spillover = ipsilateral_spillover, contralateral_spillover = contralateral_spillover) return(fittedCurve) } # This is called once for each row in the sample data frame (for obtaining interval estimates) get_spilloverStats <- function(fittedCurve, trial, distance) { # Compute the spillover interval # The absolute values of the limits are used so that a positive range is # obtained even with negative effect_size limits <- fittedCurve$limits d <- fittedCurve$d curve <- ifelse(d > 0, fittedCurve$intervention_curve, fittedCurve$control_curve) thetaL <- thetaU <- NA if (abs(limits[1, 1] - curve[1000]) > 0.025 * abs(limits[1, 1] - limits[1, 2])) { thetaL <- d[min( which( abs(limits[1, 1] - curve) > 0.025 * abs(limits[1, 1] - limits[1, 2]) ) )] } if (abs(limits[2, 2] - curve[1000]) < 0.025 * abs(limits[2, 1] - limits[2, 2])) { thetaU <- ifelse(abs(limits[2, 2] - curve[1001] > 0.025 * abs(limits[2, 1] - limits[2, 2])), d[max(which(abs(limits[2, 2] - curve) > 0.025 * abs(limits[2, 1] - limits[2, 2])) )], d[1001]) } if (is.na(thetaU)) thetaU <- max(trial[[distance]]) if (is.na(thetaL)) thetaL <- min(trial[[distance]]) # spillover interval spillover_limits <- c(thetaL, thetaU) if (thetaL > thetaU) spillover_limits <- c(thetaU, thetaL) contaminate_pop_pr <- sum(trial[[distance]] > spillover_limits[1] & trial[[distance]] < spillover_limits[2])/nrow(trial) spillover_interval <- thetaU - thetaL if (identical(thetaU, thetaL)) { spillover_interval <- 0 # To remove warnings from plotting ensure that spillover interval is non-zero spillover_limits <- c(-1e-04, 1e-04) } spillover <- list( spillover_interval = spillover_interval, spillover_limit0 = spillover_limits[1], spillover_limit1 = spillover_limits[2], contaminate_pop_pr = contaminate_pop_pr, total_effect = fittedCurve$total_effect, ipsilateral_spillover = fittedCurve$ipsilateral_spillover, contralateral_spillover = fittedCurve$contralateral_spillover) return(spillover)} tidySpillover <- function(spillover, analysis, fittedCurve){ # if (identical(analysis$options$distance,"nearestDiscord")) { spillover$spillover_limits <- with(spillover, c(spillover_limit0,spillover_limit1)) if (analysis$options$cfunc %in% c("Z","X")) { spillover$spillover_interval <- NULL spillover$contaminate_pop_pr <- NULL spillover$spillover_limits <- c(-1.0E-4,1.0E-4) } else { if (is.na(analysis$pt_ests$scale_par)) analysis$pt_ests$scale_par <- spillover$scale_par if (is.na(analysis$pt_ests$spillover_interval)) 
analysis$pt_ests$spillover_interval <- spillover$spillover_interval } spillover$spillover_limit0 <- spillover$spillover_limit1 <- NULL # } spillover$FittedCurve <- data.frame(d = fittedCurve$d, intervention_curve = fittedCurve$intervention_curve, control_curve = fittedCurve$control_curve) analysis$spillover <- spillover return(analysis)} #' Extract model fitted values #' #' \code{fitted.CRTanalysis} method for extracting model fitted values #' @param object CRTanalysis object #' @param ... other arguments #' @export #' @return the fitted values returned by the statistical model run within the \code{CRTanalysis} function #' @examples #' {example <- readdata('exampleCRT.txt') #' exampleGEE <- CRTanalysis(example, method = "GEE") #' fitted_values <- fitted(exampleGEE) #' } fitted.CRTanalysis <- function(object, ...){ value = fitted(object = object$model_object, ...) return(value) } #' Extract model coefficients #' #' \code{coef.CRTanalysis} method for extracting model fitted values #' @param object CRTanalysis object #' @param ... other arguments #' @export #' @return the model coefficients returned by the statistical model run within the \code{CRTanalysis} function #' @examples #' {example <- readdata('exampleCRT.txt') #' exampleGEE <- CRTanalysis(example, method = "GEE") #' coef(exampleGEE) #' } coef.CRTanalysis <- function(object, ...){ value = coef(object = object$model_object, ...) return(value) } #' Extract model residuals #' #' \code{residuals.CRTanalysis} method for extracting model residuals #' @param object CRTanalysis object #' @param ... other arguments #' @export #' @return the residuals from the statistical model run within the \code{CRTanalysis} function #' @examples #' {example <- readdata('exampleCRT.txt') #' exampleGEE <- CRTanalysis(example, method = "GEE") #' residuals <- residuals(exampleGEE) #' } residuals.CRTanalysis <- function(object, ...){ value = residuals(object = object$model_object, ...) return(value) } #' Model predictions #' #' \code{predict.CRTanalysis} method for extracting model predictions #' @param object CRTanalysis object #' @param ... other arguments #' @export #' @return the model predictions returned by the statistical model run within the \code{CRTanalysis} function #' @examples #' {example <- readdata('exampleCRT.txt') #' exampleGEE <- CRTanalysis(example, method = "GEE") #' predictions <- predict(exampleGEE) #' }#' predict.CRTanalysis <- function(object, ...){ value = predict(object = object$model_object, ...) return(value) } getDistanceText <- function(distance = "nearestDiscord", scale_par = NULL) { value <- switch(distance, "nearestDiscord" = "Signed distance to other arm (km)", "disc" = paste0("disc of radius ", round(scale_par, digits = 3), " km"), "kern" = paste0("kern with kernel s.d. ", round(scale_par, digits = 3), " km"), "hdep" = "Tukey half-depth ", "sdep" = "Simplicial depth ", distance) return(value) } compute_pvar <- function(trial, distance, scale_par, FUN) { if (distance %in% c('disc','kern')) { trial <- compute_distance(trial, distance = distance, scale_par = scale_par)$trial x <- trial[[distance]] } else { x <- trial[[distance]]/scale_par } pvar <- eval(parse(text = FUN)) return(pvar) }
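# Illustrative sketch (not part of the package source): a quick consistency
# check of the link helpers and the piecewise spillover curve defined above.
# It assumes this file has been sourced so that link_tr(), invlink() and
# PiecewiseLinearFunction() are available; all numerical values are arbitrary.
local({
    # invlink() should undo link_tr() for every supported link function
    p <- 0.35
    for (lnk in c("identity", "log", "logit", "cloglog")) {
        stopifnot(abs(invlink(lnk, link_tr(lnk, p)) - p) < 1e-10)
    }
    # linear predictor of the piecewise spillover model over a range of signed
    # distances; par = c(intercept, effect on the link scale, spillover scale)
    toy <- data.frame(d = seq(-2, 2, by = 0.5))
    lp <- PiecewiseLinearFunction(par = c(link_tr("logit", 0.4), -0.8, 1),
                                  trial = toy, distance = "d")
    print(round(invlink("logit", lp), 3))
})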
# End of source file: CRTspat/R/analyseCRT.R
#' Power and sample size calculations for a CRT #' #' \code{CRTpower} carries out power and sample size calculations for CRTs. #' #' @param trial dataframe or \code{'CRTsp'} object: optional list of locations #' @param locations numeric: total number of units available for randomization (required if \code{trial} is not specified) #' @param alpha numeric: confidence level #' @param desiredPower numeric: desired power #' @param effect numeric: required effect size #' @param yC numeric: baseline (control) value of outcome #' @param outcome_type character: with options - #' \code{'y'}: continuous; #' \code{'n'}: count; #' \code{'e'}: event rate; #' \code{'p'}: proportion; #' \code{'d'}: dichotomous. #' @param sigma2 numeric: variance of the outcome (required for \code{outcome_type = 'y'}) #' @param denominator numeric: rate multiplier (for \code{outcome_type = 'n'} or \code{outcome_type = 'e'}) #' @param N numeric: mean of the denominator for proportions (for \code{outcome_type = 'p'}) #' @param ICC numeric: Intra-cluster correlation #' @param cv_percent numeric: Coefficient of variation of the outcome (expressed as a percentage) #' @param c integer: number of clusters in each arm (required if \code{trial} is not specified) #' @param sd_h standard deviation of number of units per cluster (required if \code{trial} is not specified) #' @returns A list of class \code{'CRTsp'} object comprising the input data, cluster and arm assignments, #' trial description and results of power calculations #' @export #' @details #' Power and sample size calculations are for an unmatched two-arm trial. For counts #' or event rate data the formula of [Hayes & Bennett, 1999](https://academic.oup.com/ije/article/28/2/319/655247) is used. This requires as an input the #' between cluster coefficient of variation (\code{cv_percent}). For continuous outcomes and proportions the formulae of #' [Hemming et al, 2011](https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-11-102) are used. These make use of #' the intra-cluster correlation in the outcome (\code{ICC}) as an input. If the coefficient of variation and not the ICC is supplied then #' the intra-cluster correlation is computed from the coefficient of variation using the formulae #' from [Hayes & Moulton](https://www.taylorfrancis.com/books/mono/10.1201/9781584888178/cluster-randomised-trials-richard-hayes-lawrence-moulton). If incompatible values for \code{ICC} and \code{cv_percent} are supplied #' then the value of the \code{ICC} is used.\cr\cr #' The calculations do not consider any loss in power due to spillover, loss to follow-up etc..\cr\cr #' If geolocations are not input then power and sample size calculations are based on the scalar input parameters.\cr\cr #' If a trial dataframe or \code{'CRTsp'} object is input then this is used to determine the number of locations. If this input object #' contains cluster assignments then the numbers and sizes of clusters in the input data are used to estimate the power. If buffer zones have been specified #' then separate calculations are made for the core area and for the full site.\cr\cr #' The output is an object of class \code{'CRTsp'} containing any input trial dataframe and values for: #' - The required numbers of clusters to achieve the specified power. #' - The design effect based on the input ICC. 
#' - Calculations of the nominal power (ignoring any bias caused by spillover, loss to follow-up etc.)\cr #' @examples #' {# Power calculations for a binary outcome without input geolocations #' examplePower1 = CRTpower(locations = 3000, ICC = 0.10, effect = 0.4, alpha = 0.05, #' outcome_type = 'd', desiredPower = 0.8, yC=0.35, c = 20, sd_h = 5) #' summary(examplePower1) #' # Power calculations for a rate outcome without input geolocations #' examplePower2 = CRTpower(locations = 2000, cv_percent = 40, effect = 0.4, denominator = 2.5, #' alpha = 0.05, outcome_type = 'e', desiredPower = 0.8, yC = 0.35, c = 20, sd_h=5) #' summary(examplePower2) #' # Example with input geolocations and randomisation #' examplePower3 = CRTpower(trial = readdata('example_site.csv'), desiredPower = 0.8, #' effect=0.4, yC=0.35, outcome_type = 'd', ICC = 0.05, c = 20) #' summary(examplePower3) #' } CRTpower <- function(trial = NULL, locations = NULL, alpha = 0.05, desiredPower = 0.8, effect = NULL, yC = NULL, outcome_type = "d", sigma2 = NULL, denominator = 1, N = 1, ICC = NULL, cv_percent = NULL, c = NULL, sd_h = 0) { if(is.null(trial)) trial <- data.frame(x = c(), y = c()) CRT <- CRTsp(trial) # populate a design list with a data about the input trial (if # available) and the input parameters # sigma2 is only required for continuous data, so its absence # should otherwise not crash the program if (!identical(outcome_type, "y")) sigma2 <- NA if (is.null(ICC)) ICC <- NA if (is.null(cv_percent)) cv_percent <- NA if (is.na(ICC) & is.na(cv_percent)) { stop("*** Value must be supplied for either ICC or cv_percent ***") } design <- ifelse(is.null(CRT$design$locations), list(), CRT$design) design$locations <- ifelse((nrow(CRT$trial) == 0), locations, nrow(CRT$trial)) parnames <- c("alpha", "desiredPower", "effect", "yC", "outcome_type", "sigma2", "denominator", "N", "ICC", "cv_percent", "c", "sd_h") # Identify which variables to retrieve from the pre-existing design from_old <- lapply(mget(parnames), FUN = is.null) design[parnames] <- ifelse(from_old, design[parnames], mget(parnames)) missing <- lapply(design[parnames], FUN = is.null) if (TRUE %in% missing) { stop("*** Value(s) must be supplied for: ", parnames[missing == TRUE], " ***") } CRT <- CRTsp(CRT, design = design) } # Characteristics of a trial design. The input is a data frame or # CRTsp object. 
The output list conforms to the requirements for a # CRTsp object get_geom <- function(trial = NULL, design = NULL) { sd_distance <- clustersRequired <- DE <- power <- NULL # check if the power calculations need to be reconstructed from scratch if(!is.null(design$locations)) { locations <- design$locations c <- design$c mean_h <- locations/(2 * c) sd_h <- design$sd_h } else { outcome_type <- mean_h <- c <- sd_h <- locations <- NULL } geom <- list(locations = locations, sd_h = sd_h, c = c, records = 0, mean_h = mean_h, DE = NULL, power = NULL, clustersRequired = NULL) # cluster size geom$c <- ifelse(is.null(geom$c), NA, round(geom$c)) geom$mean_h <- geom$locations/(2 * geom$c) # overwrite values from the design with those from the data frame if these are present if (!is.null(trial) & nrow(trial) > 0) { coordinates <- data.frame(cbind(x = trial$x, y = trial$y)) geom$records <- nrow(trial) geom$locations <- nrow(dplyr::distinct(coordinates)) if (!is.null(trial$cluster)) { # reassign the cluster levels in case some are not # represented in this geom (otherwise nlevels() counts # clusters that are not present) trial$cluster <- as.factor(as.character(trial$cluster)) geom$c <- floor(nlevels(trial$cluster)/2) # mean number of locations randomized in each cluster geom$mean_h <- mean(table(trial$cluster)) # standard deviation of locations assigned to each cluster geom$sd_h <- stats::sd(table(trial$cluster)) } if (!is.null(trial$arm)) { geom$sd_distance <- stats::sd(trial$nearestDiscord) arms <- unique(cbind(trial$cluster, trial$arm))[, 2] #assignments } } if (!is.null(design$effect)) { if (is.null(geom$locations)) { stop("*** Number of locations is a required input ***") } if (is.null(geom$c)) { stop("*** Number of clusters is a required input ***") } if (identical(geom$sd_h, 0)) { message("*** Assuming all clusters are the same size ***") } effect <- design$effect yC <- design$yC # convert power and significance level to Zvalues Zsig <- -qnorm(design$alpha/2) Zpow <- qnorm(design$desiredPower) # effective cluster sizes (inflating for multiple observations # at the same location) mean_eff <- geom$mean_h * design$N sd_eff <- geom$sd_h * design$N # coefficient of variation of the cluster sizes cv_eff <- sd_eff/mean_eff # outcome in intervened group link <- switch(design$outcome_type, y = "identity", n = "log", e = "log", p = "logit", d = "logit") yI <- ifelse(link == "identity", yC - design$effect, yC * (1 - design$effect)) # difference between groups # d <- yC - yI # input value of the coefficient of variation of between cluster variation in outcome k <- ifelse(is.null(design$cv_percent), NA, design$cv_percent/100) if(is.null(design$ICC)) { design$ICC <- switch(link, "identity" = (k * yC)^2/sigma2, "log" = NA, "logit" = k^2 * yC/(1 - yC) ) } if(is.null(design$cv_percent)) { k <- switch(link, "identity" = sqrt(design$ICC * sigma2)/yC, "log" = NA, "logit" = sqrt(design$ICC * (1 - yC)/yC) ) design$cv_percent <- 100 * k } if(identical(link,"log")) { if(is.null(k)){ stop("*** Between cluster coefficient of variation is a required input ***") } # use the formulae from Hayes & Bennett (1999) https://doi.org/10.1093/ije/28.2.319 denom_per_cluster <- design$denominator * mean_eff # clusters required (both arms) geom$clustersRequired <- 2 * ceiling(1 + (Zsig + Zpow)^2 * ((yC + yI)/denom_per_cluster + (yC^2 + yI^2) * k^2)/((yC - yI)^2)) # power with c clusters per arm and unequal cluster sizes geom$power <- stats::pnorm(sqrt((c - 1) * ((yC - yI)^2)/((yC + yI)/denom_per_cluster + (yC^2 + yI^2) * k^2)) - Zsig) # 
the design effect is the ratio of the required denominator to that required for an individually randomised trial required_denom_RCT <- 2 * ((Zsig + Zpow)^2 * (yC + yI)/((yC - yI)^2)) geom$DE <- denom_per_cluster * geom$clustersRequired/required_denom_RCT } else { # design effect (Variance Inflation Factor) as a function of ICC allowing for # varying cluster sizes(Hemming eqn 6) geom$DE <- 1 + (cv_eff^2 + 1) * (mean_eff - 1) * design$ICC if (identical(link, "identity")) { # with normal models, sigma2 is an input variable sigma2 <- design$sigma2 } else if (identical(link, "logit")) { # This is the variance for a Bernoulli. The cluster sizes are # inflated for the binomial case (below) sigma2 <- 1/2 * (yI * (1 - yI) + yC * (1 - yC)) } # required individuals per arm in individually randomized trial n_ind <- 2 * sigma2 * ((Zsig + Zpow)/(yC - yI))^2 # number of individuals required per arm in CRT with equal # cluster sizes n_crt <- n_ind * geom$DE # minimum total numbers of clusters required assuming varying # cluster sizes per arm (Hemming eqn 8) geom$clustersRequired <- 2 * ceiling(n_ind * (1 + ((cv_eff + 1) * mean_eff - 1) * design$ICC)/mean_eff) # power with c clusters per arm and unequal cluster sizes geom$power <- stats::pnorm(sqrt(c * mean_eff/(2 * geom$DE)) * (yC - yI)/sqrt(sigma2) - Zsig) } } return(geom) }
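# Illustrative sketch (not part of the package source): a standalone version of
# the Hayes & Bennett (1999) sample size calculation for a rate outcome, as
# coded in get_geom() above. The inputs (control rate 0.35, 40% effect,
# 50 person-years per cluster, between-cluster cv k = 0.4, 5% significance,
# 80% power) are arbitrary illustrations, not package defaults.
local({
    yC <- 0.35; effect <- 0.4; yI <- yC * (1 - effect)
    k <- 0.4; denom_per_cluster <- 50
    Zsig <- -qnorm(0.05/2); Zpow <- qnorm(0.8)
    c_per_arm <- ceiling(1 + (Zsig + Zpow)^2 *
        ((yC + yI)/denom_per_cluster + (yC^2 + yI^2) * k^2)/((yC - yI)^2))
    message("clusters required per arm (approximate): ", c_per_arm)
})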
# End of source file: CRTspat/R/designCRT.R
#' Graphical displays of the geography of a CRT #' #' \code{plotCRT} returns graphical displays of the geography of a CRT #' or of the results of statistical analyses of a CRT #' @param object object of class \code{'CRTanalysis'} produced by \code{CRTanalysis()} #' @param map logical: indicator of whether a map is required #' @param distance measure of distance or surround with options: \cr #' \tabular{ll}{ #' \code{"nearestDiscord"} \tab distance to nearest discordant location (km)\cr #' \code{"disc"} \tab disc\cr #' \code{"hdep"} \tab Tukey's half space depth\cr #' \code{"sdep"} \tab simplicial depth\cr #' } #' @param fill fill layer of map with options: #' \tabular{ll}{ #' \code{'cluster'} \tab cluster assignment \cr #' \code{'arms'} \tab arm assignment \cr #' \code{'nearestDiscord'} \tab distance to the nearest discordant location\cr #' \code{'disc'} \tab disc measure of surround\cr #' \code{'hdep'} \tab Tukey's half space depth\cr #' \code{'sdep'} \tab simplicial depth\cr #' \code{'prediction'}\tab model prediction of the outcome \cr #' \code{'none'}\tab No fill \cr #' } #' @param showLocations logical: determining whether locations are shown #' @param showClusterLabels logical: determining whether the cluster numbers are shown #' @param showClusterBoundaries logical: determining whether cluster boundaries are shown #' @param showBuffer logical: whether a buffer zone should be overlayed #' @param cpalette colour palette (to use different colours for clusters this must be at #' least as long as the number of clusters. #' @param buffer_width width of buffer zone to be overlayed (km) #' @param maskbuffer radius of buffer around inhabited areas (km) #' @param labelsize size of cluster number labels #' @param legend.position (using \code{ggplot2::themes} syntax) #' @return graphics object produced by the \code{ggplot2} package #' @importFrom magrittr %>% #' @importFrom dplyr distinct group_by summarize #' @importFrom ggplot2 geom_polygon #' @importFrom ggplot2 aes #' @details #' If \code{map = FALSE} and the input is a trial data frame or a \code{CRTsp} object, #' containing a randomisation to arms, a stacked bar chart of the outcome #' grouped by the specified \code{distance} is produced. If the specified \code{distance} #' has not yet been calculated an error is returned.\cr\cr #' If \code{map = FALSE} and the input is a \code{CRTanalysis} object a plot of the #' estimated spillover function is generated. The fitted spillover function is plotted #' as a continuous blue line against the measure #' the surround or of the distance to the nearest discordant location. Using the same axes, data summaries are plotted for #' ten categories of distance from the boundary. Both the #' average of the outcome and confidence intervals are plotted. #' \itemize{ #' \item For analyses with logit link function the outcome is plotted as a proportion. \cr #' \item For analyses with log or cloglog link function the data are plotted on a scale of the Williams mean #' (mean of exp(log(x + 1))) - 1) rescaled so that the median matches the fitted curve at the midpoint.\cr #' } #' If \code{map = TRUE} a thematic map corresponding to the value of \code{fill} is generated. #' \itemize{ #' \item \code{fill = 'clusters'} or leads to thematic map showing the locations of the clusters #' \item \code{fill = 'arms'} leads to a thematic map showing the geography of the randomization #' \item \code{fill = 'distance'} leads to a raster plot of the distance to the nearest discordant location. 
#' \item \code{fill = 'prediction'} leads to a raster plot of predictions from an \code{'INLA'} model. #' } #' If \code{showBuffer = TRUE} the map is overlaid with a grey transparent layer showing which #' areas are within a defined distance of the boundary between the arms. Possibilities are: #' \itemize{ #' \item If the trial has not been randomised or if \code{showBuffer = FALSE} no buffer is displayed #' \item If \code{buffer_width} takes a positive value then buffers of this width are #' displayed irrespective of any pre-specified or spillover limits. #' \item If the input is a \code{'CRTanalysis'} and spillover limits have been estimated by #' an \code{'LME4'} or \code{'INLA'} model then these limits are used to define the displayed buffer. #' \item If \code{buffer_width} is not specified and no spillover limits are available, then any #' pre-specified buffer (e.g. one generated by \code{specify_buffer()}) is displayed. #' } #' A message is output indicating which of these possibilities applies. #' @export #' @importFrom ggplot2 aes alpha #' @examples #' {example <- readdata('exampleCRT.txt') #' #Plot of data by distance #' plotCRT(example) #' #Map of locations only #' plotCRT(example, map = TRUE, fill = 'none', showLocations = TRUE, #' showClusterBoundaries=FALSE, maskbuffer=0.2) #' #show cluster boundaries and number clusters #' plotCRT(example, map = TRUE, fill ='none', showClusterBoundaries=TRUE, #' showClusterLabels=TRUE, maskbuffer=0.2, labelsize = 2) #' #show clusters in colour #' plotCRT(example, map = TRUE, fill = 'clusters', showClusterLabels = TRUE, #' labelsize=2, maskbuffer=0.2) #' #show arms #' plotCRT(example, map = TRUE, #' fill = 'arms', maskbuffer=0.2, legend.position=c(0.8,0.8)) #' #spillover plot #' analysis <- CRTanalysis(example) #' plotCRT(analysis, map = FALSE) #' } #' @export plotCRT <- function(object, map = FALSE, distance = "nearestDiscord", fill = "arms", showLocations = FALSE, showClusterBoundaries = TRUE, showClusterLabels = FALSE, showBuffer = FALSE, cpalette = NULL, buffer_width = NULL, maskbuffer = 0.2, labelsize = 4, legend.position = NULL) { control_curve <- intervention_curve <- scale_par <- buffer <- g <- NULL if (is.null(legend.position)) legend.position <- "none" if (!isa(object, what = 'CRTanalysis')) object <- CRTsp(object) trial <- object$trial if (is.null(trial)) { stop("*** No data points for plotting ***") } if (isa(object, what = 'CRTanalysis')) { distance <- object$options$distance scale_par <- object$options$scale_par } else { scale_par <- object$design[[distance]]$scale_par } distanceText <- getDistanceText(distance = distance, scale_par = scale_par) if (!map) { if (isa(object, what = 'CRTanalysis')) { # if the object is the output from analysisCRT analysis <- object if (is.null(analysis$spillover$FittedCurve)) stop("*** No fitted curve available ***") d <- average <- upper <- lower <- spilloverFunction <- NULL interval <- analysis$spillover$spillover_limits range <- max(analysis$trial[[distance]]) - min(analysis$trial[[distance]]) data <- group_data(analysis = analysis, grouping = "quintiles") FittedCurve <- analysis$spillover$FittedCurve fitted_median <- median(c(FittedCurve$control_curve,FittedCurve$intervention_curve),na.rm = TRUE) data_median <- median(data$average) if (analysis$options$link %in% c('log', 'cloglog')) { scale_factor <- fitted_median/data_median data_scaled <- data.frame( d = data$d, arm = data$arm, average = data$average * scale_factor, lower = data$lower * scale_factor, upper = data$upper * scale_factor) } else { 
data_scaled <- data } g <- ggplot2::ggplot() + ggplot2::theme_bw() g <- g + ggplot2::geom_line(data = FittedCurve[!is.na(FittedCurve$control_curve), ], ggplot2::aes(x = d, y = control_curve), linewidth = 2, colour = "#b2df8a") g <- g + ggplot2::geom_line(data = FittedCurve[!is.na(FittedCurve$intervention_curve), ], ggplot2::aes(x = d, y = intervention_curve), linewidth = 2, colour = "#0072A7") g <- g + ggplot2::geom_point(data = data_scaled, ggplot2::aes(x = d, y = average, shape=factor(arm)), size = 2) g <- g + ggplot2::scale_shape_manual(name = "Arms", values = c(0, 16), labels = c("Control", "Intervention")) g <- g + ggplot2::theme(legend.position = legend.position) g <- g + ggplot2::geom_errorbar(data = data_scaled, mapping = ggplot2::aes(x = d, ymin = upper, ymax = lower), linewidth = 0.5, width = range/50) if (identical(analysis$options$distance, "nearestDiscord")) { g <- g + ggplot2::geom_vline(xintercept = 0, linewidth = 1, linetype = "dashed") if (analysis$options$cfunc %in% c("L","P","S")) { g <- g + ggplot2::geom_vline(xintercept = interval, linewidth = 1) g <- g + ggplot2::geom_rect(data = NULL, ggplot2::aes(xmin = interval[1], xmax = interval[2], ymin = -Inf, ymax = Inf), fill = alpha("#2C77BF", 0.2)) } } g <- g + ggplot2::xlab(distanceText) g <- g + ggplot2::ylab("Outcome") } else { if (is.null(object$trial[[distance]])) { stop(paste0("*** First use compute_distance() to calculate ", distance, "***")) } # Plot of data by distance dcat <- value <- NULL if (is.null(object$trial$num)) { return(plot(object$trial)) } analysis <- CRTanalysis(trial = object$trial, method = "EMP") data <- group_data(analysis = analysis, distance = distance, grouping = "equalwidth") data$dcat <- with(analysis, min(trial[[distance]]) + (data$cat - 0.5) * (max(trial[[distance]]) - min(trial[[distance]]))/10) data <- tidyr::gather(data[, c("dcat", "locations", "total", "positives")], key = 'variable', value = 'value', -dcat, factor_key = TRUE) levels(data$variable) <- c("Locations", "Sum of denominators", "Sum of numerators") g <- ggplot2::ggplot(data = data) + ggplot2::theme_bw() + ggplot2::geom_bar(aes(x = dcat, y = value), colour = NA, fill = "lightgrey", stat = "identity") + ggplot2::geom_vline(xintercept = 0, linewidth = 1, linetype = "dashed") + ggplot2::xlab(distanceText) + ggplot2::ylab(ggplot2::element_blank()) + ggplot2::facet_wrap( ~ variable, ncol = 1, scales = "free") } } else { colourClusters <- identical(fill, "clusters") showArms <- identical(fill, "arms") if (isa(object, what = 'CRTanalysis')) { analysis <- object spillover_limits <- analysis$spillover$spillover_limits if(!(fill %in% c("arms", "clusters"))){ # raster map derived from inla analysis x <- y <- prediction <- nearestDiscord <- NULL g <- ggplot2::ggplot() if (!identical(analysis$options$method, "INLA")) { stop("*** Raster plots only available for outputs from INLA analysis ***") } else { pixel <- analysis$inla_mesh$pixel raster <- analysis$inla_mesh$prediction if (identical(fill, "prediction")) distanceText <- "Prediction" if (is.null(raster[[fill]])){ stop("*** Requested measure not available for this analysis ***") } else { raster$fill <- raster[[fill]] } g <- g + ggplot2::geom_tile(data = raster, aes(x = x, y = y, fill = fill, width = pixel, height = pixel)) g <- g + ggplot2::scale_fill_gradient(name = distanceText, low = "blue", high = "orange") g <- g + ggplot2::theme(legend.title = ggplot2::element_text(size = 8), legend.text = ggplot2::element_text(size = 8)) } } } # vector plot starts here arm <- cluster <- x <- y <- 
NULL xlim <- c(min(trial$x - maskbuffer), max(trial$x + maskbuffer)) ylim <- c(min(trial$y - maskbuffer), max(trial$y + maskbuffer)) # The plotting routines require unique locations CRT <- aggregateCRT(trial) # The plotting routines use (x,y) coordinates if (is.null(CRT$trial$x)) { CRT <- latlong_as_xy(CRT) } trial <- CRT$trial # Adjust the required plots to exclude those for which there is no # data or combinations that are too cluttered or overprinted if (is.null(trial$cluster)) { trial$cluster <- rep(1, nrow(trial)) showClusterBoundaries <- FALSE showClusterLabels <- FALSE colourClusters <- FALSE } if (is.null(trial$arm)) { trial$arm <- 0 showArms <- FALSE } if (!showClusterBoundaries) { showClusterLabels <- FALSE } totalClusters <- length(unique(trial$cluster)) if (is.null(cpalette)) cpalette <- sample(rainbow(totalClusters)) if (totalClusters == 1) cpalette <- c("white") if (showBuffer) trial <- modifyBuffer(trial = trial, buffer_width = buffer_width, spillover_limits = spillover_limits) if (is.null(trial$buffer)) showBuffer <- FALSE sf_objects <- sf_objects(trial = trial, maskbuffer = maskbuffer) if (is.null(g)) g <- ggplot2::ggplot() if (colourClusters) { g <- g + ggplot2::geom_sf(data = sf_objects$clusters, aes(fill = cluster), fill = cpalette, alpha = 0.8) } if (showArms) { g <- g + ggplot2::geom_sf(data = sf_objects$arms, aes(fill = arm)) # use standard colour-blind compatible palette g <- g + ggplot2::scale_fill_manual(name = "Arms", values = c("#b2df8a", "#1f78b4"), labels = c("Control", "Intervention")) } if (showBuffer) { # whether the point is within the buffer g <- g + ggplot2::geom_sf(data = sf_objects$buffer, aes(alpha = buffer), color = NA, fill = "black", show.legend = FALSE) g <- g + ggplot2::scale_alpha_manual(name = "Buffer", values = c(0, 0.2)) } if (showClusterBoundaries) { g <- g + ggplot2::geom_sf(data = sf_objects$clusters, color = "black", fill = NA) } g <- g + ggplot2::geom_sf(data = sf_objects$mask, fill = "grey") # Labels if (showClusterLabels) { showLocations <- FALSE # Positions of centroids of clusters for locating the labels cc <- data.frame(trial %>% dplyr::group_by(cluster) %>% dplyr::summarize(x = mean(x), y = mean(y), .groups = "drop")) g <- g + ggplot2::geom_text(data = cc, aes(x = x, y = y, label = cluster), hjust = 0.5, vjust = 0.5, size = labelsize) } if (showLocations) { g <- g + ggplot2::geom_point(data = trial, aes(x = x, y = y), size = 0.5) } g <- g + ggplot2::theme(legend.position = legend.position) g <- g + ggplot2::theme(panel.border = ggplot2::element_blank()) g <- g + ggplot2::theme(axis.title = ggplot2::element_blank()) g <- g + ggplot2::coord_sf(expand = FALSE, xlim = xlim, ylim = ylim) } return(g) } # Create simple feature objects either for plotting or export sf_objects <- function(trial, maskbuffer, crs = "Euclidean", centroid = NULL) { clusters <- arms <- buffer <- cluster <- arm <- x <- y <- NULL if (!identical(crs, "Euclidean") & !is.null(centroid)) { # convert coordinates to radians and apply crs of "WGS84" # scalef is is the number of degrees per kilometer scalef <- 180/(6371*pi) crs <- 4326 trial$lat <- trial$y*scalef + centroid$lat trial$long <- trial$x*scalef + centroid$long coords <- c("lat", "long") xlim <- c(min(trial$x - maskbuffer * scalef), max(trial$x + maskbuffer * scalef)) ylim <- c(min(trial$y - maskbuffer * scalef), max(trial$y + maskbuffer * scalef)) # create pts pts <- tidyr::tibble(lat = trial$lat, long = trial$long) %>% sf::st_as_sf(coords = coords) %>% sf::st_set_crs(crs) } else { scalef <- 1 # coords <- 
c("y", "x") coords <- c("x", "y") xlim <- c(min(trial$x - maskbuffer), max(trial$x + maskbuffer)) ylim <- c(min(trial$y - maskbuffer), max(trial$y + maskbuffer)) # create pts pts <- tidyr::tibble(y = trial$y, x = trial$x) %>% sf::st_as_sf(coords = coords) } tr <- sf::st_as_sf(trial, coords = coords) %>% sf::st_set_crs(crs) # voronoi of pts- if the coordinates are lat long this would generate a # warning, but the issue is trivial if the area is small or near the equator suppressWarnings( vor <- sf::st_voronoi(sf::st_combine(tr)) %>% sf::st_collection_extract("POLYGON") %>% sf::st_as_sf() %>% sf::st_set_crs(crs) ) if (!is.null(trial$cluster)) { clusters <- vor %>% sf::st_join(tr, sf::st_intersects) %>% dplyr::group_by(cluster) %>% dplyr::summarize() } if (!is.null(trial$arm)) { arms <- vor %>% sf::st_join(tr, sf::st_intersects) %>% dplyr::group_by(arm) %>% dplyr::summarize() } # buffer zone if (!is.null(trial$buffer)) { buffer_tr <- tr[tr$buffer,,drop=FALSE] buffer <- vor %>% sf::st_join(tr, sf::st_intersects) %>% dplyr::group_by(buffer) %>% dplyr::summarize() # sf::st_collection_extract("POLYGON") %>% # sf::st_combine() %>% # sf::st_as_sf() %>% # sf::st_set_crs(crs) } # mask for excluded areas the mask needs to extend outside the plot # area x0 <- xlim[1] - 0.5 * scalef x1 <- xlim[2] + 0.5 * scalef y0 <- ylim[1] - 0.5 * scalef y1 <- ylim[2] + 0.5 * scalef bbox <- sf::st_polygon(list(cbind(x = c(x0, x1, x1, x0, x0), y = c(y0, y0, y1, y1, y0)))) bbox <- sf::st_sfc(bbox) tr <- sf::st_as_sf(trial, coords = coords) buf1 <- sf::st_buffer(tr, maskbuffer * scalef) buf2 <- sf::st_union(buf1) mask <- sf::st_difference(bbox, buf2) sf_objects <- list(clusters = clusters, arms = arms, buffer = buffer, mask = mask) return(sf_objects) } #' Export of GIS layer from \code{'CRTsp'} #' #' \code{CRTwrite} exports a simple features object in a GIS format #' @param object object of class \code{'CRTsp'} #' @param dsn dataset name (relative path) for output objects #' @param feature feature to be exported, options are: #' \tabular{ll}{ #' \code{'cluster'}\tab cluster assignments \cr #' \code{'arms'}\tab arm assignments \cr #' \code{'buffer'}\tab buffer zone or spillover zone\cr #' \code{'mask'}\tab mask for areas that are distant from habitations \cr #' } #' @param buffer_width width of buffer between discordant locations (km) #' @param maskbuffer radius of buffer drawn around inhabited areas (km) #' @param ... other arguments passed to \code{'sf::write_sf'} #' @return \code{obj}, invisibly #' @details #' \code{'sf::write_sf'} is used to format the output. The function returns TRUE on success, #' FALSE on failure, invisibly. \cr\cr #' If the input object contains a \code{'centroid'} then this is used to compute lat long #' coordinates, which are assigned the "WGS84" coordinate reference system. #' Otherwise the objects have equirectangular co-ordinates with centroid (0,0).\cr\cr #' If \code{feature = 'buffer'} then buffer width determination is as described under #' \code{plotCRT()}. #' \cr\cr #' The output vector objects are constructed by forming a Voronoi tessellation of polygons around #' each of the locations and combining these polygons. The polygons on the outside of the study area #' extend outwards to an external rectangle. The \code{'mask'} is used to mask out the areas of #' these polygons that are at a distance > \code{maskbuffer} from the nearest location. 
#' @examples #' \donttest{ #' tmpdir = tempdir() #' dsn <- paste0(tmpdir,'/arms') #' CRTwrite(readdata('exampleCRT.txt'), dsn = dsn, feature = 'arms', #' driver = 'ESRI Shapefile', maskbuffer = 0.2) #' } #' @export CRTwrite <- function(object, dsn, feature = 'clusters', buffer_width, maskbuffer = 0.2, ...){ spillover_limits <- NULL if (isa(object, what = 'CRTanalysis')) { spillover_limits <- ifelse(is.null(object$spillover$spillover_limits), NULL, object$spillover$spillover_limits) object <- object$trial } object <- CRTsp(object) centroid <- object$geom_full$centroid trial <- object$trial if (identical(feature,"buffer")) { trial <- modifyBuffer(trial = trial, buffer_width = buffer_width, spillover_limits = spillover_limits) if (is.null(trial$buffer)) stop("No buffer available for export") } sf_objects <- sf_objects(trial = trial, maskbuffer = maskbuffer, crs = "WGS84", centroid = centroid) sf::st_write(sf_objects[[feature]], dsn = dsn, ...) } modifyBuffer <- function(trial, buffer_width, spillover_limits){ if (!is.null(trial$nearestDiscord)){ if (!is.null(buffer_width)){ trial$buffer <- ifelse(dplyr::between(trial$nearestDiscord, -buffer_width, buffer_width), TRUE, FALSE) message("Buffer includes locations within ", buffer_width*1000, "m of the opposing arm") } else if(!is.null(spillover_limits)) { trial$buffer <- ifelse(dplyr::between(trial$nearestDiscord, spillover_limits[1], spillover_limits[2]), TRUE, FALSE) message("Buffer corresponds to estimated spillover zone") } else { if (!is.null(trial$buffer)) { message("Buffer corresponds pre-specified buffer zone") } else { message("No buffer available") } } } else { message("No buffer shown: distances to discordant locations are unavailable") } return(trial) }
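# Illustrative sketch (toy data, not from the package): how modifyBuffer()
# above flags locations as falling inside the buffer when an explicit
# buffer_width is supplied. It assumes this file has been sourced and that
# dplyr is installed; locations within 0.2 km of the opposing arm are flagged.
local({
    toy <- data.frame(nearestDiscord = c(-0.5, -0.1, 0.05, 0.3))
    flagged <- modifyBuffer(trial = toy, buffer_width = 0.2,
                            spillover_limits = NULL)
    print(flagged$buffer)  # FALSE TRUE TRUE FALSE
})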
# End of source file: CRTspat/R/plotCRT.R
#' Simulation of cluster randomized trial with spillover #' #' \code{simulateCRT} generates simulated data for a cluster randomized trial (CRT) with geographic spillover between arms. #' #' @param trial an object of class \code{"CRTsp"} or a data frame containing locations in (x,y) coordinates, cluster #' assignments (factor \code{cluster}), and arm assignments (factor \code{arm}). Each location may also be #' assigned a \code{propensity} (see details). #' @param effect numeric. The simulated effect size (defaults to 0) #' @param outcome0 numeric. The anticipated value of the outcome in the absence of intervention #' @param generateBaseline logical. If \code{TRUE} then baseline data and the \code{propensity} will be simulated #' @param matchedPair logical. If \code{TRUE} then the function tries to carry out randomization #' using pair-matching on the baseline data (see details) #' @param scale measurement scale of the outcome. Options are: 'proportion' (the default); 'count'; 'continuous'. #' @param baselineNumerator optional name of numerator variable for pre-existing baseline data #' @param baselineDenominator optional name of denominator variable for pre-existing baseline data #' @param denominator optional name of denominator variable for the outcome #' @param kernels number of kernels used to generate a de novo \code{propensity} #' @param ICC_inp numeric. Target intra cluster correlation, provided as input when baseline data are to be simulated #' @param sd numeric. standard deviation of the normal kernel measuring spatial smoothing leading to spillover #' @param theta_inp numeric. input spillover interval #' @param tol numeric. tolerance of output ICC #' @returns A list of class \code{"CRTsp"} containing the following components: #' \tabular{lll}{ #' \code{geom_full}\tab list: \tab summary statistics describing the site #' cluster assignments, and randomization \cr #' \code{design}\tab list: \tab values of input parameters to the design \cr #' \code{trial} \tab data frame: \tab rows correspond to geolocated points, as follows:\cr #' \tab \code{x} \tab numeric vector: x-coordinates of locations \cr #' \tab \code{y} \tab numeric vector: y-coordinates of locations \cr #' \tab\code{cluster} \tab factor: assignments to cluster of each location \cr #' \tab\code{arm} \tab factor: assignments to \code{control} or \code{intervention} for each location \cr #' \tab\code{nearestDiscord} \tab numeric vector: signed Euclidean distance to nearest discordant location (km) \cr #' \tab\code{propensity} \tab numeric vector: propensity for each location \cr #' \tab\code{base_denom} \tab numeric vector: denominator for baseline \cr #' \tab\code{base_num} \tab numeric vector: numerator for baseline \cr #' \tab\code{denom} \tab numeric vector: denominator for the outcome \cr #' \tab\code{num} \tab numeric vector: numerator for the outcome \cr #' \tab\code{...} \tab other objects included in the input \code{"CRTsp"} object #' or \code{data.frame}\cr #' } #' @details Synthetic data are generated by sampling around the values of #' variable \code{propensity}, which is a numerical vector #' (taking positive values) of length equal to the number of locations. #' There are three ways in which \code{propensity} can arise: #' \enumerate{ #' \item \code{propensity} can be provided as part of the input \code{trial} object. #' \item Baseline numerators and denominators (values of \code{baselineNumerator} #' and \code{baselineDenominator} may be provided. 
#' \code{propensity} is then generated as the numerator:denominator ratio #' for each location in the input object #' \item Otherwise \code{propensity} is generated using a 2D Normal #' kernel density. The [\code{OOR::StoSOO}](https://rdrr.io/cran/OOR/man/StoSOO.html) #' is used to achieve an intra-cluster correlation coefficient (ICC) that approximates #' the value of \code{'ICC_inp'} by searching for an appropriate value of the kernel bandwidth. #' } #' \code{num[i]}, the synthetic outcome for location \code{i} #' is simulated with expectation: \cr #' \deqn{E(num[i]) = outcome0[i] * propensity[i] * denom[i] * (1 - effect*I[i])/mean(outcome0[] * propensity[])} \cr #' The sampling distribution of \code{num[i]} depends on the value of \code{scale} as follows: \cr #' - \code{scale}=’continuous’: Values of \code{num} are sampled from a #' Normal distributions with means \code{E(num[i])} #' and variance determined by the fitting to \code{ICC_inp}.\cr #' - \code{scale}=’count’: Simulated events are allocated to locations via multivariate hypergeometric distributions #' parameterised with \code{E(num[i])}.\cr #' - \code{scale}=’proportion’: Simulated events are allocated to locations via multinomial distributions #' parameterised with \code{E(num[i])}.\cr #' #' \code{denominator} may specify a vector of numeric (non-zero) values #' in the input \code{"CRTsp"} or \code{data.frame} which is returned #' as variable \code{denom}. It acts as a scale-factor for continuous outcomes, rate-multiplier #' for counts, or denominator for proportions. For discrete data all values of \code{denom} #' must be > 0.5 and are rounded to the nearest integer in calculations of \code{num}.\cr\cr #' By default, \code{denom} is generated as a vector of ones, leading to simulation of #' dichotomous outcomes if \code{scale}=’proportion’.\cr #' #' If baseline numerators and denominators are provided then the output vectors #' \code{base_denom} and \code{base_num} are set to the input values. If baseline numerators and denominators #' are not provided then the synthetic baseline data are generated by sampling around \code{propensity} in the same #' way as the outcome data, but with the effect size set to zero. #' #' If \code{matchedPair} is \code{TRUE} then pair-matching on the baseline data will be used in randomization providing #' there are an even number of clusters. If there are an odd number of clusters then matched pairs are not generated and #' an unmatched randomization is output. #' #' Either \code{sd} or \code{theta_inp} must be provided. If both are provided then #' the value of \code{sd} is overwritten #' by the standard deviation implicit in the value of \code{theta_inp}. #' Spillover is simulated as arising from a diffusion-like process. #' #' For further details see [Multerer (2021)](https://edoc.unibas.ch/85228/) #' @export #' #' @examples #' {smalltrial <- readdata('smalltrial.csv') #' simulation <- simulateCRT(smalltrial, #' effect = 0.25, #' ICC_inp = 0.05, #' outcome0 = 0.5, #' matchedPair = FALSE, #' scale = 'proportion', #' sd = 0.6, #' tol = 0.05) #' summary(simulation) #' } simulateCRT <- function(trial = NULL, effect = 0, outcome0 = NULL, generateBaseline = TRUE, matchedPair = TRUE, scale = "proportion", baselineNumerator = "base_num", baselineDenominator = "base_denom", denominator = NULL, ICC_inp = NULL, kernels = 200, sd = NULL, theta_inp = NULL, tol = 5e-03) { # Written by Tom Smith, July 2017. 
Adapted by Lea Multerer, September 2017 message("\n===================== SIMULATION OF CLUSTER RANDOMISED TRIAL =================\n") bw <- NULL initial_bandwidth <- 0.3 if (!is.null(trial)) CRT <- CRTsp(trial) trial <- CRT$trial if (is.null(trial$cluster)){ message("*** Clusters not yet assigned ***") return() } trial$cluster <- as.factor(trial$cluster) if (is.null(trial$arm)){ message("*** No randomization available ***") return() } trial$arm <- as.factor(trial$arm) # set the denominator variable to be 'denom' if (is.null(denominator)) denominator <- "denom" trial$denom <- trial[[denominator]] if (denominator != "denom") trial[[denominator]] <- NULL # If the denominator has no values set, set to one if (is.null(trial$denom)) trial$denom <- 1 # use spillover interval if this is available if (!is.null(theta_inp)) { sd <- theta_inp/(2 * qnorm(0.975)) } if (is.null(sd)) { stop("spillover interval or s.d. of spillover must be provided") } # compute distances to nearest discordant locations if they do not exist if (is.null(trial$nearestDiscord)) trial <- compute_distance(trial, distance = "nearestDiscord")$trial # trial needs to be ordered for GEE analyses (estimation of ICC) trial <- trial[order(trial$cluster), ] # remove any pre-existing propensity if a new one is required if (generateBaseline) trial$propensity <- NULL centers <- assignkernels(trial = trial, baselineNumerator = baselineNumerator, baselineDenominator = baselineDenominator, kernels = kernels) # For the smoothing step compute contributions to the relative effect size from other locations as a function of # distance to the other locations euclid <- distance_matrix(trial$x, trial$y) # compute approximate diagonal of clusters approx_diag <- sqrt((max(trial$x) - min(trial$x))^2 + (max(trial$y) - min(trial$y))^2)/sqrt(length(unique(trial$cluster))) if (is.null(ICC_inp)) { stop("*** A target ICC must be specified ***") } message("Estimating the smoothing required to achieve the target ICC of ", ICC_inp) # determine the required smoothing bandwidth by fitting to the pre-specified ICC # random multiplier is used to prevent the random number stream being reset to the same value for any given bw value random_multiplier <- sample(1e6, size = 1) loss <- 999 nb_iter <- 20 if (identical(Sys.getenv("TESTTHAT"), "true")) nb_iter <- 5 # in testing, the number of iterations is reduced giving very approximate output while (loss > tol) { ICC.loss <- OOR::StoSOO(par = c(NA), fn = ICCdeviation, lower = -5, upper = 5, nb_iter = nb_iter, trial = trial, ICC_inp = ICC_inp, centers = centers, approx_diag = approx_diag, sd = sd, scale = scale, euclid = euclid, effect = effect, outcome0 = outcome0, random_multiplier = random_multiplier) loss <- ICC.loss$value if(kernels > 500) { loss <- tol warning("*** Failure to converge on target ICC ***") } else if(loss > tol) { # if convergence was not achieved, re-assign the kernels, reducing the smoothness kernels <- round(kernels * 2) message("Increasing the number of kernels to ", kernels, "\r") centers <- assignkernels(trial = trial, baselineNumerator = baselineNumerator, baselineDenominator = baselineDenominator, kernels = kernels) } } logbw <- ICC.loss$par # recover the seed that generated the best fitting trial and use this to regenerate this trial bw <- exp(logbw[1]) set.seed(round(bw * random_multiplier)) trial <- get_assignments(trial = trial, scale = scale, euclid = euclid, sd = sd, effect = effect, outcome0 = outcome0, bw = bw, centers = centers, numerator = "num", denominator = "denom") # create a 
baseline dataset using the optimized bandwidth if (generateBaseline) trial <- get_assignments(trial = trial, scale = scale, euclid = euclid, sd = sd, effect = 0, outcome0 = outcome0, bw = bw, centers = centers, numerator = baselineNumerator, denominator = baselineDenominator) ICC <- get_ICC(trial = trial, scale = scale) message("\rbandwidth: ", bw, " ICC = ", ICC, " loss = ", loss, " \r") CRT$trial <- trial CRT$design$nkernels <- kernels return(CRTsp(CRT)) } createPropensity <- function(trial, bandwidth, centers = centers){ xdist <- with(trial, outer(x, x[centers], "-")) ydist <- with(trial, outer(y, y[centers], "-")) # distance matrix between the locations and the kernel centers euclid <- sqrt(xdist * xdist + ydist * ydist) #is not a square matrix # 2d normal kernels f <- (1/(2 * pi * bandwidth^2)) * exp(-(euclid^2)/(2 * (bandwidth^2))) totalf <- (1/ncol(euclid)) * rowSums(f) #sums over rows of matrix f # Scale propensity so that it is between zero and one propensity <- totalf/max(totalf) return(propensity) } # Assign expected outcome to each location assuming a fixed effect size. get_assignments <- function(trial, scale, euclid, sd, effect, outcome0, bw, centers, numerator, denominator) { expected_ratio <- num <- rowno <- sumnum <- NULL # remove any superseded numerator variable trial[[numerator]] <- NULL # generate a pattern of propensity trial$propensity <- f_1 <- createPropensity(trial, bandwidth = bw, centers = centers) # Smooth the propensity. the s.d. in each dimension of the 2 d gaussian is bw/sqrt(2) # f_2 is the value of propensity decremented by the effect of intervention and smoothed # by applying a further kernel smoothing step (trap the case with no spillover) if (sd < 0.001) sd <- 0.001 f_2 <- f_1 * (1 - effect * (trial$arm == "intervention")) f_3 <- dispersal(bw = sd*sqrt(2), euclid = euclid) %*% f_2 if (identical(scale, "continuous")) { # Note that the sd here is logically different from the smoothing sd, but how to choose a value? trial$num <- rnorm(n = nrow(trial), mean = f_3 * trial[[denominator]], sd = sd) } else { if (!(denominator %in% colnames(trial))) trial[[denominator]] <- 1 # the denominator must be an integer; this changes the value if a non-integral value is input trial[[denominator]] <- round(trial[[denominator]], digits = 0) # compute the total positives expected given the input effect size npositives <- round(outcome0 * sum(trial[[denominator]]) * (1 - 0.5 * effect)) # scale to input value of initial prevalence by assigning required number of infections with probabilities proportional # to smoothed multiplied by the denominator expected_allocation <- f_3 * trial[[denominator]]/sum(f_3 * trial[[denominator]]) trial$expected_ratio <- expected_allocation/trial[[denominator]] trial$rowno <- seq(1:nrow(trial)) # expand the vector of locations to allow for denominators > 1 triallong <- trial %>% tidyr::uncount(trial[[denominator]]) # To generate count data, records in triallong can be sampled multiple times. To generate proportions each record can # only be sampled once. 
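# 'sample()' below draws 'npositives' of the expanded records with probabilities
# proportional to 'expected_ratio'; the draw is made with replacement only for
# count outcomes, so that a single location can accumulate more than one event.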
replacement <- identical(scale, "count") # sample generates a multinomial sample and outputs the indices of the locations assigned # trap the case when there are not enough positive locations if (sum(triallong$expected_ratio > 0) < npositives) { triallong$expected_ratio <- (triallong$expected_ratio + 1e-6)/sum(triallong$expected_ratio + 1e-6) } positives <- sample(x = nrow(triallong), size = npositives, replace = replacement, prob = triallong$expected_ratio) triallong$num <- 0 # TODO: this needs to work also for count data (replace = TRUE above) triallong$num[positives] <- 1 # summarise the numerator values into the original set of locations numdf <- dplyr::group_by(triallong, rowno) %>% dplyr::summarise(sumnum = sum(num)) numdf[[numerator]] <- numdf$sumnum # use left_join to merge into the original data frame (records with zero denominator do not appear in numdf) trial <- trial %>% dplyr::left_join(numdf, by = "rowno") # remove temporary variables and replace missing numerators with zero (because the multinomial sampling algorithm leaves # NA values where no events are assigned) trial <- subset(trial, select = -c(rowno, expected_ratio, sumnum)) if (sum(is.na(trial[[numerator]])) > 0) { warning("*** Some records have zero denominator after rounding ***") message("You may want to remove these records or rescale the denominators") trial[[numerator]][is.na(trial[[numerator]])] <- 0 } } return(trial) } # deviation of ICC from target as a function of bandwidth ICCdeviation <- function(logbw, trial, ICC_inp, centers, approx_diag, sd, scale, euclid, effect, outcome0, random_multiplier) { cluster <- NULL # set the seed so that a reproducible result is obtained for a specific bandwidth if (!is.null(logbw)) { bw <- exp(logbw[1]) set.seed(round(bw * random_multiplier)) } trial <- get_assignments(trial = trial, scale = scale, euclid = euclid, sd = sd, effect = effect, outcome0 = outcome0, bw = bw, centers = centers, numerator = "num", denominator = "denom") loss <- (get_ICC(trial = trial, scale = scale) - ICC_inp)^2 # message("\rbandwidth: ", bw, " ICC=", ICC, " loss = ", loss, " \r") return(loss) } get_ICC <- function(trial, scale) { link <- map_scale_to_link(scale) trial$y1 <- trial$num trial$y_off <- trial$denom trial$y0 <- trial$denom - trial$num model_object <- get_GEEmodel(trial = trial, link = link, fterms = 'arm') summary.fit <- summary(model_object) # Intracluster correlation ICC <- noLabels(summary.fit$corr[1]) #with corstr = 'exchangeable', alpha is the ICC return(ICC) } # compute a euclidian distance matrix distance_matrix <- function(x, y) { # generates square matrices of differences xdist <- outer(x, x, "-") ydist <- outer(y, y, "-") euclid <- sqrt(xdist * xdist + ydist * ydist) } # add lognormal noise: not sure this function is needed X is the input vector comprising a sample from a smoothed # distribution varXY is the required variance add_noise <- function(X, varXY) { muY <- 1 varY <- (varXY - var(X))/(1 + var(X)) mu <- log(muY/sqrt(1 + varY/muY)) var <- log(1 + varY/muY) Y <- stats::rlnorm(length(XY), meanlog = mu, sdlog = sqrt(var)) XY <- X * Y return(XY) } assignkernels <- function(trial, baselineNumerator, baselineDenominator, kernels){ if (is.null(trial$propensity) & baselineNumerator %in% colnames(trial)){ # If baseline numerator data exist and propensity doesn't, use the baseline numerator to select kernel centers trial[[baselineDenominator]] <- ifelse(is.null(trial[[baselineDenominator]]), 1, trial[[baselineDenominator]]) # expand the vector of locations to allow for 
denominators > 1 possible_kernels <- dplyr::mutate(trial, kernelID = dplyr::row_number()) possible_kernels$propensity <- round(10 * possible_kernels[[baselineNumerator]]/possible_kernels[[baselineDenominator]]) possible_kernels <- tidyr::uncount(possible_kernels, possible_kernels$propensity) centers <- possible_kernels$kernelID[sample(x = nrow(possible_kernels), size = kernels, replace = FALSE)] } else { centers <- sample(1:nrow(trial), size = kernels, replace = TRUE) } return(centers) } # contribution of l to i as a function of the Gaussian used in simulating spillover dispersal <- function(bw, euclid) { # bivariate normal kernel f <- (1/(2 * pi * bw^2)) * exp(-(euclid^2)/(2 * (bw^2))) neighbours <- ifelse(euclid < 2*qnorm(0.975)*bw,1,0) f_neighbours <- f * neighbours n_neighbours <- rowSums(neighbours) #sums over rows of matrix f considering only neighbours dispersal <- f_neighbours/n_neighbours return(dispersal) }
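# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the package): the relationship between the
# spillover interval 'theta_inp' and the kernel standard deviation 'sd' used
# internally by simulateCRT(). 2 * qnorm(0.975) is the width of the central
# 95% interval of a standard normal, so a hypothetical spillover interval of
# 1 km corresponds to a kernel s.d. of about 0.255 km.
theta_example <- 1.0                               # hypothetical spillover interval (km)
sd_example <- theta_example/(2 * qnorm(0.975))     # same conversion as in simulateCRT()
round(sd_example, 3)                               # approximately 0.255
# ---------------------------------------------------------------------------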
/scratch/gouwar.j/cran-all/cranData/CRTspat/R/simulateCRT.R
#' Compute distance or surround values for a cluster randomized trial
#'
#' \code{compute_distance} computes distance or surround values for a cluster randomized trial (CRT)
#' @param trial an object of class \code{"CRTsp"} or a data frame containing locations in (x,y) coordinates, cluster
#' assignments (factor \code{cluster}), and arm assignments (factor \code{arm}).
#' @param distance the quantity or quantities to be computed. Options are:
#' \tabular{ll}{
#' \code{"nearestDiscord"} \tab distance to nearest discordant location (km)\cr
#' \code{"disc"} \tab disc \cr
#' \code{"kern"} \tab kernel-based measure \cr
#' \code{"hdep"} \tab Tukey half space depth\cr
#' \code{"sdep"} \tab simplicial depth\cr
#' }
#' @param scale_par scale parameter equal to the disc radius in km if \code{distance = "disc"}
#' or to the standard deviation of the kernels if \code{distance = "kern"}
#' @returns The input \code{"CRTsp"} object with additional column(s) added to the \code{trial} data frame
#' with variable name corresponding to the input value of \code{distance}.
#' @details
#' For each selected distance measure, the function first checks whether the variable is already present, and carries out
#' the calculations only if the corresponding field is absent from the \code{trial} data frame.\cr\cr
#' If \code{distance = "nearestDiscord"} is selected the computed values are Euclidean distances
#' assigned a positive sign for the intervention arm of the trial, and a negative sign for the control arm.\cr\cr
#' If \code{distance = "disc"} is specified, the disc statistic is computed for each location as the number of locations
#' within the specified radius that are in the intervention arm
#' ([Anaya-Izquierdo & Alexander(2020)](https://onlinelibrary.wiley.com/doi/full/10.1111/biom.13316)). The input
#' value of \code{scale_par} is stored in the \code{design} list
#' of the output \code{"CRTsp"} object. Recalculation is carried out if the input value of
#' \code{scale_par} differs from the one in the input \code{design} list. The value of the surround calculated
#' based on intervened locations is divided by the value of the surround calculated on the basis of all locations, so the
#' value returned is a proportion.\cr\cr
#' If \code{distance = "kern"} is specified, the Normal curve with standard deviation
#' \code{scale_par} is used to simulate diffusion of the intervention effect by Euclidean
#' distance. For each location in the trial, the contributions of all intervened locations are
#' summed. As with \code{distance = "disc"}, when \code{distance = "kern"} the surround calculated
#' based on intervened locations is divided by the value of the surround calculated on the basis of all locations, so the
#' value returned is a proportion.\cr\cr
#' If either \code{distance = "hdep"} or \code{distance = "sdep"} is specified then both the simplicial depth and
#' the Tukey half space depth are calculated using the algorithm of
#' [Rousseeuw & Ruts(1996)](https://www.jstor.org/stable/2986073). The half-depth probability within the intervention cloud (di) is computed
#' with respect to other locations in the intervention arm ([Anaya-Izquierdo & Alexander(2020)](https://onlinelibrary.wiley.com/doi/full/10.1111/biom.13316)). The half-depth within
#' the control cloud (dc) is also computed. \code{CRTspat} returns the proportion di/(dc + di).
\cr #' @export #' @examples{ #' # Calculate the disc with a radius of 0.5 km #' exampletrial <- compute_distance(trial = readdata('exampleCRT.txt'), #' distance = 'disc', scale_par = 0.5) #' } compute_distance <- function(trial, distance = "nearestDiscord", scale_par = NULL) { CRT <- CRTsp(trial) trial <- CRT$trial require_nearestDiscord <- is.null(trial$nearestDiscord) & identical(distance, "nearestDiscord") require_hdep <- is.null(trial$hdep) & identical(distance, "hdep") require_sdep <- is.null(trial$sdep) & identical(distance, "sdep") require_disc <- identical(distance, "disc") & (is.null(trial$disc) | !identical(CRT$design$disc$scale_par, scale_par)) require_kern <- identical(distance, "kern") & (is.null(trial$kern) | !identical(CRT$design$kern$scale_par, scale_par)) kern <- NULL if (is.null(trial[[distance]] & is.null(trial$arm))) stop('*** Randomization is required for computation of distances or surrounds ***') if (require_hdep | require_sdep){ depthilist <- apply(trial, MARGIN = 1, FUN = depths, trial = trial, cloud = 'intervention') depthi_df <- as.data.frame(do.call(rbind, lapply(depthilist, as.data.frame))) depthclist <- apply(trial, MARGIN = 1, FUN = depths, trial = trial, cloud = 'control' ) depthc_df <- as.data.frame(do.call(rbind, lapply(depthclist, as.data.frame))) trial$hdep <- depthi_df$hdep/(depthc_df$hdep + depthi_df$hdep) trial$sdep <- depthi_df$sdep/(depthc_df$sdep + depthi_df$sdep) # replace NA with limiting value, depending which arm it is in (these points are on the outside of the cloud) trial$hdep[is.na(trial$hdep)] <- ifelse(trial$arm[is.na(trial$hdep)] == 'intervention', 1, 0) trial$sdep[is.na(trial$sdep)] <- ifelse(trial$arm[is.na(trial$sdep)] == 'intervention', 1, 0) CRT$design$hdep <- distance_stats(trial, distance = "hdep") CRT$design$sdep <- distance_stats(trial, distance = "sdep") } if ((require_nearestDiscord | require_disc | require_kern)){ dist_trial <- as.matrix(dist(cbind(trial$x, trial$y), method = "euclidean")) if (require_nearestDiscord){ discord <- outer(trial$arm, trial$arm, "!=") #true & false. discord_dist_trial <- ifelse(discord, dist_trial, Inf) trial$nearestDiscord <- ifelse(trial$arm == "control", -apply(discord_dist_trial, MARGIN = 2, min), apply(discord_dist_trial, MARGIN = 2, min)) CRT$design$nearestDiscord <- distance_stats(trial, distance = "nearestDiscord") } if (require_disc){ if (is.null(scale_par)) { stop("*** radius (scale_par) must be specified for computation of disc ***") } neighbours <- colSums(dist_trial <= scale_par) intervened_neighbours <- colSums(trial$arm =='intervention' & (dist_trial <= scale_par)) trial$disc <- intervened_neighbours/neighbours CRT$design$disc <- distance_stats(trial, distance = "disc") CRT$design$disc$scale_par <- scale_par } if (require_kern){ if (is.null(scale_par)) { stop("*** s.d. (scale_par) must be specified for computation of kern ***") } weighted_neighbours <- colSums(dnorm(dist_trial, mean = 0, sd = scale_par)) weighted_intervened <- colSums(dnorm(dist_trial, mean = 0, sd = scale_par) * matrix(data = (trial$arm == 'intervention'), nrow = nrow(trial), ncol = nrow(trial))) trial$kern <- weighted_intervened/weighted_neighbours CRT$design$kern <- distance_stats(trial, distance = "kern") CRT$design$kern$scale_par <- scale_par } } CRT$trial <- trial return(CRT) } depths <- function(X, trial, cloud) { # this is an R translation of the fortran code in # Rousseeuw & Ruts https://www.jstor.org/stable/2986073 # algorithm as 307.1 Appl.Statist. 
(1996), vol.45, no.4 # calculation of the simplicial depth and # the half space depth # u and v are the coordinates of the arbitrary point u <- as.numeric(X[["x"]]) v <- as.numeric(X[["y"]]) # for the CRT application, depth is computed with respect to the set of intervention individuals # excluding the point itself (if it is in the intervention arm) trial <- trial[trial$arm == cloud & (trial$x != u | trial$y != v),] n <- nrow(trial) x <- trial$x y <- trial$y nums <- 0 numh <- 0 sdep <- 0 # simplicial depth hdep <- 0 # half-space depth eps <- 1e-06 nt <- 0 # construct the vector alpha alpha <- fval <- rep(NA, nrow(trial)) for (i in 1:n) { d <- sqrt((x[i] - u) * (x[i] - u) + (y[i] - v) * (y[i] - v)) if (d <= eps) { nt <- nt + 1 } else { xu <- (x[i] - u)/d yu <- (y[i] - v)/d if (abs(xu) > abs(yu)) { if (x[i] >= u) { alpha[i - nt] <- asin(yu) if (alpha[i - nt] < 0.0) { alpha[i - nt] <- 2 * pi + alpha[i - nt] } } else { alpha[i - nt] <- pi - asin(yu) } } else { if (y[i] >= v) { alpha[i - nt] <- acos(xu) } else { alpha[i - nt] <- 2 * pi - acos(xu) } } if (alpha[i - nt] >= (2 * pi - eps)) alpha[i - nt] <- 0.0 } } nn <- n - nt if (nn > 1) { # nn is the number of elements of alpha that have been assigned a value # the missing elements should be removed #call sort (alpha, nn) alpha <- alpha[!is.na(alpha)] alpha <- alpha[order(alpha)] # check whether theta=(u,v) lies outside the data cloud angle <- alpha[1] - alpha[nn] + 2 * pi for (i in 2:nn) { angle <- max(angle, (alpha[i] - alpha[i - 1])) } if (angle <= (pi + eps)) { # make smallest alpha equal to zero, and compute nu = number of alpha < pi angle <- alpha[1] nu <- 0 for (i in 1:nn) { alpha[i] <- alpha[i] - angle if (alpha[i] < (pi - eps)) nu <- nu + 1 } if (nu < nn) { # merge sort the alpha with their antipodal angles beta, and at the same time # update i,fval[i], and nbad ja <- 1 jb <- 1 alphk <- alpha[1] betak <- alpha[nu + 1] - pi nn2 <- nn * 2 nbad <- 0 i <- nu nf <- nn for (j in 1:nn2) { if ((alphk + eps) < betak) { nf <- nf + 1 if (ja < nn) { ja <- ja + 1 alphk <- alpha[ja] } else { alphk <- 2 * pi + 1 } } else { i <- i + 1 if (identical(i,(nn + 1))) { i <- 1 nf <- nf - nn } fval[i] <- nf nbad <- nbad + k((nf - i), 2) if (jb < nn) { jb <- jb + 1 if ((jb + nu) <= nn) { betak <- alpha[jb + nu] - pi } else { betak <- alpha[jb + nu - nn] + pi } } else { betak <- 2 * pi + 1.0 } } } nums <- k(nn, 3) - nbad # computation of numh for half space depth gi <- 0 ja <- 1 angle <- alpha[1] numh <- min(fval[1], (nn - fval[1])) for (i in 2:nn) { if (alpha[i] <= (angle + eps)) { ja <- ja + 1 } else { gi <- gi + ja ja <- 1 angle <- alpha[i] } ki <- fval[i] - gi numh <- min(numh, min(ki, (nn - ki))) } # adjust for the number nt of datapoints equal to theta } } } nums <- nums + k(nt, 1) * k(nn, 2) + k(nt, 2) * k(nn, 1) + k(nt, 3) if (n >= 3) sdep <- nums/k(n, 3) numh <- numh + nt hdep <- numh/n depths <- list(numh = numh, hdep = hdep, sdep = sdep) return(depths) } k <- function(m, j) { # algorithm as 307.2 appl.statist. 
(1996),vol.45, no.4 # returns the value zero if m <j; otherwise # computes the number of combinations of j out of m if (m < j) { k <- 0 } else { if (j == 1) k <- m if (j == 2) k <- (m * (m - 1))/2 if (j == 3) k <- (m * (m - 1) * (m - 2))/6 } return(k) } # This could be incorporated into calculate_distance distance_stats <- function(trial, distance){ trial$distance <- trial[[distance]] formula <- stats::as.formula("distance ~ cluster") aov <- summary(aov(data = trial, formula = formula)) within_cluster_sd <- sqrt(aov[[1]]$`Mean Sq`[2]) rSq <- aov[[1]]$`Sum Sq`[1]/(aov[[1]]$`Sum Sq`[1] + aov[[1]]$`Sum Sq`[2]) distance_stats <- c(as.list(summary(trial[[distance]])), list(sd = sd(trial[[distance]]), within_cluster_sd = within_cluster_sd, rSq = rSq)) return(distance_stats) }
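# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the package): the 'disc' surround computed
# by hand for a toy set of five locations, mirroring the logic used in
# compute_distance(). For each location it is the proportion of locations
# within 'scale_par' km (the location itself included) that are in the
# intervention arm.
toy <- data.frame(x = c(0, 0.1, 0.2, 1.0, 1.1),
                  y = c(0, 0.0, 0.1, 1.0, 1.0),
                  arm = c("intervention", "intervention", "control",
                          "control", "intervention"))
scale_par <- 0.5
d <- as.matrix(dist(cbind(toy$x, toy$y), method = "euclidean"))
neighbours <- colSums(d <= scale_par)
intervened <- colSums((toy$arm == "intervention") & (d <= scale_par))
toy$disc <- intervened/neighbours
toy
# ---------------------------------------------------------------------------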
/scratch/gouwar.j/cran-all/cranData/CRTspat/R/surround.R
#' Aggregate data across records with duplicated locations #' #' \code{aggregateCRT} aggregates data from a \code{"CRTsp"} object or trial data frame containing multiple records with the same location, #' and outputs a list of class \code{"CRTsp"} containing single values for each location, for both the coordinates and the auxiliary variables. #' @param trial An object of class \code{"CRTsp"} containing locations (x,y) and variables to be summed #' @param auxiliaries vector of names of auxiliary variables to be summed across each location #' @returns A list of class \code{"CRTsp"} #' @details #' Variables that in the trial dataframe that are not included in \code{auxiliaries} are retained in the output #' algorithm \code{"CRTsp"} object, with the value corresponding to that of the first record for the location #' in the input data frame #' @examples { #' trial <- readdata('example_site.csv') #' trial$base_denom <- 1 #' aggregated <- aggregateCRT(trial, auxiliaries = c("RDT_test_result","base_denom")) #' } #' @export #' aggregateCRT <- function(trial, auxiliaries = NULL) { CRT <- CRTsp(trial) location <- NULL trial <- CRT$trial[order(CRT$trial$x, CRT$trial$y),] trial$location <- paste(trial$x,trial$y) trial1 <- trial if (length(auxiliaries) > 0) { auxvars <- names(trial) %in% auxiliaries trial1 <- dplyr::distinct(trial, location, .keep_all = TRUE) trial1 <- trial1[, !auxvars] # This code is a mess, but the smarter options seem to be work in progress in dplyr for(var in auxiliaries){ if(var %in% names(trial)) { trial2 <- with(trial, trial %>% dplyr::group_by(location) %>% dplyr::summarize(var = sum(get(var)))) class(trial2) <- "data.frame" colnames(trial2) <- c('location',var) trial1 <- merge(trial1, trial2, by = 'location', all.x = FALSE, all.y = FALSE) } else { message('*** Variable', var,' not present in input data ***') } } } trial1$location <- NULL CRT$trial <- trial1 return(CRTsp(CRT)) } #' Specification of buffer zone in a cluster randomized trial #' #' \code{specify_buffer} specifies a buffer zone in a cluster randomized #' trial (CRT) by flagging those locations that are within a defined distance of #' those in the opposite arm. #' #' @param trial an object of class \code{"CRTsp"} or a data frame containing locations in (x,y) coordinates, cluster #' assignments (factor \code{cluster}), and arm assignments (factor \code{arm}). 
#' @param buffer_width minimum distance between locations in #' opposing arms for them to qualify to be included in the core area (km) #' @returns A list of class \code{"CRTsp"} containing the following components: #' \tabular{lll}{ #' \code{geom_full} \tab list: \tab summary statistics describing the site, #' cluster assignments, and randomization.\cr #' \code{geom_core} \tab list: \tab summary statistics describing the core area \cr #' \code{trial} \tab data frame: \tab rows correspond to geolocated points, as follows:\cr #' \tab \code{x} \tab numeric vector: x-coordinates of locations \cr #' \tab \code{y} \tab numeric vector: y-coordinates of locations \cr #' \tab \code{cluster} \tab factor: assignments to cluster of each location \cr #' \tab \code{arm} \tab factor: assignments to \code{"control"} or \code{"intervention"} for each location \cr #' \tab \code{nearestDiscord} \tab numeric vector: signed Euclidean distance to nearest discordant location (km) \cr #' \tab \code{buffer} \tab logical: indicator of whether the point is within the buffer \cr #' \tab \code{...} \tab other objects included in the input \code{"CRTsp"} object or data frame \cr #' } #' @export #' @examples #' #Specify a buffer of 200m #' exampletrial <- specify_buffer(trial = readdata('exampleCRT.txt'), buffer_width = 0.2) specify_buffer <- function(trial, buffer_width = 0) { CRT <- CRTsp(trial) trial <- CRT$trial if (is.null(trial$arm)) return('*** Randomization is required before buffer specification ***') if (is.null(trial$nearestDiscord)) trial <- compute_distance(trial, distance = "nearestDiscord") if (buffer_width > 0) { trial$buffer <- (abs(trial$nearestDiscord) < buffer_width) } CRT$trial <- trial return(CRTsp(CRT)) } #' Randomize a two-armed cluster randomized trial #' #' \code{randomizeCRT} carries out randomization of clusters for a CRT and #' augments the trial dataframe with assignments to arms \cr #' #' @param trial an object of class \code{"CRTsp"} or a data frame containing locations in (x,y) coordinates, cluster #' assignments (factor \code{cluster}), and arm assignments (factor \code{arm}). Optionally: specification of a buffer zone (logical \code{buffer}); #' any other variables required for subsequent analysis. 
#' @param matchedPair logical: indicator of whether pair-matching on the #' baseline data should be used in randomization #' @param baselineNumerator name of numerator variable for baseline data (required for #' matched-pair randomization) #' @param baselineDenominator name of denominator variable for baseline data (required for #' matched-pair randomization) #' @returns A list of class \code{"CRTsp"} containing the following components: #' \tabular{lll}{ #' \code{design} \tab list: \tab parameters required for power calculations\cr #' \code{geom_full} \tab list: \tab summary statistics describing the site\cr #' \code{geom_core} \tab list: \tab summary statistics describing the core area #' (when a buffer is specified)\cr #' \code{trial} \tab data frame: \tab rows correspond to geolocated points, as follows:\cr #' \tab \code{x} \tab numeric vector: x-coordinates of locations \cr #' \tab \code{y} \tab numeric vector: y-coordinates of locations \cr #' \tab \code{cluster} \tab factor: assignments to cluster of each location \cr #' \tab \code{pair} \tab factor: assigned matched pair of each location #' (for \code{matchedPair} randomisations) \cr #' \tab \code{arm} \tab factor: assignments to \code{"control"} or \code{"intervention"} for each location \cr #' \tab \code{...} \tab other objects included in the input \code{"CRTsp"} object or data frame \cr #' } #' @export #' @examples #' # Randomize the clusters in an example trial #' exampleCRT <- randomizeCRT(trial = readdata('exampleCRT.txt'), matchedPair = TRUE) randomizeCRT <- function(trial, matchedPair = FALSE, baselineNumerator = "base_num", baselineDenominator = "base_denom") { CRT <- CRTsp(trial) CRT$design <- NULL trial <- CRT$trial # remove any preexisting assignments and coerce matchedPair to FALSE if there are no baseline data if(is.null(trial[[baselineNumerator]]) & matchedPair) { message("*** No baseline data for matching. 
Unmatched randomisation ***") matchedPair <- FALSE } trial$arm <- trial$pair <- trial$nearestDiscord <- trial$hdep <- trial$sdep <- trial$disc <- trial$kern <- NULL pair <- cluster <- base_num <- base_denom <- NULL trial$cluster <- as.factor(trial$cluster) # Randomization, assignment to arms nclusters <- length(unique(trial$cluster)) if ((nclusters%%2) == 1 & matchedPair) { warning("*** odd number of clusters: assignments are not matched on baseline data ***") matchedPair <- FALSE } # uniformly distributed numbers, take mean and boolean of that rand_numbers <- runif(nclusters, 0, 1) if (matchedPair) { trial$base_num <- trial[[baselineNumerator]] trial$base_denom <- trial[[baselineDenominator]] cdf <- data.frame(trial %>% group_by(cluster) %>% dplyr::summarize(positives = sum(base_num), total = sum(base_denom))) cdf$p <- cdf$positives/cdf$total cdf <- cdf[order(cdf$p), ] cdf$pair <- rep(seq(1, nclusters/2), 2) cdf$rand_numbers <- rand_numbers cdf <- cdf[with(cdf, order(pair, rand_numbers)), ] cdf$arm <- rep(c(1, 0), nclusters/2) arm <- cdf$arm[order(cdf$cluster)] pair <- cdf$pair[order(cdf$cluster)] } else { arm <- ifelse(rand_numbers > median(rand_numbers), 1, 0) } if (matchedPair) trial$pair <- factor(pair[trial$cluster[]]) trial$arm <- factor(arm[trial$cluster[]], levels = c(0, 1), labels = c("control", "intervention")) CRT$trial <- trial CRT <- compute_distance(CRT, distance = "nearestDiscord") return(CRTsp(CRT)) } plt <- function(object) { UseMethod("plt") } simulate_site <- function(geoscale, locations, kappa, mu) { scaling = geoscale * 10 # Poisson point pattern with Thomas algorithm p <- spatstat.random::rThomas(kappa, geoscale, mu, win = spatstat.geom::owin(c(0, scaling), c(0, scaling))) # expected number of points: kappa*mu*scaling^2 # create locations and specify co-ordinates hhID <- c(1:locations) x <- p$x[seq(1:locations)] y <- p$y[seq(1:locations)] coordinates <- data.frame(x = x - mean(x), y = y - mean(y)) trial <- coordinates return(trial) } #' Algorithmically assign locations to clusters in a CRT #' #' \code{specify_clusters} algorithmically assigns locations to clusters by grouping them geographically #' #' @param trial A CRT object or data frame containing (x,y) coordinates of #' households #' @param c integer: number of clusters in each arm #' @param h integer: number of locations per cluster #' @param algorithm algorithm for cluster boundaries, with options: #' \tabular{ll}{ #' \code{NN}\tab Nearest neighbour: assigns equal numbers of locations to each cluster \cr #' \code{kmeans}\tab kmeans clustering: aims to partition locations so that each #' belongs to the cluster with the nearest centroid.\cr #' \code{TSP}\tab travelling salesman problem heuristic: Assigns locations sequentially #' along a travelling salesman path.\cr #' } #' @param reuseTSP logical: indicator of whether a pre-existing path should be used by #' the TSP algorithm #' @returns A list of class \code{"CRTsp"} containing the following components: #' \tabular{lll}{ #' \code{geom_full} \tab list: \tab summary statistics describing the site, #' and cluster assignments.\cr #' \code{trial} \tab data frame: \tab rows correspond to geolocated points, as follows:\cr #' \tab \code{x} \tab numeric vector: x-coordinates of locations \cr #' \tab \code{y} \tab numeric vector: y-coordinates of locations \cr #' \tab \code{cluster} \tab factor: assignments to cluster of each location \cr #' \tab \code{...} \tab other objects included in the input \code{"CRTsp"} object or data frame \cr #' } #' @details #' The 
\code{reuseTSP} parameter is used to allow the path to be reused #' for creating alternative allocations with different cluster sizes.\cr\cr #' Either \code{c} or \code{h} must be specified. If both are specified #' the input value of \code{c} is ignored.\cr #' @export #' #' @examples #' #Assign clusters of average size h = 40 to a test set of co-ordinates, using the kmeans algorithm #' exampletrial <- specify_clusters(trial = readdata('exampleCRT.txt'), #' h = 40, algorithm = 'kmeans', reuseTSP = FALSE) specify_clusters <- function(trial = trial, c = NULL, h = NULL, algorithm = "NN", reuseTSP = FALSE) { CRT <- CRTsp(trial) trial <- CRT$trial # Local data from study area (ground survey and/or satellite # images) coordinates <- data.frame(x = as.numeric(as.character(trial$x)), y = as.numeric(as.character(trial$y))) # the number of clusters and the target cluster size must be integers. # cluster size can only be exactly equal to the input value of h if this is a factor of # the number of locations if (is.null(c)) { c <- ceiling(nrow(coordinates)/(2 * h)) } if (is.null(h)) { h <- ceiling(nrow(coordinates)/(2 * c)) } nclusters <- 2 * c # derive cluster boundaries if (algorithm == "TSP") { TSPoutput <- TSP_ClusterDefinition(coordinates, h, nclusters, reuseTSP) trial$path <- TSPoutput$path trial$cluster <- TSPoutput$cluster } else if (algorithm == "NN") { trial$cluster <- NN_ClusterDefinition(coordinates, h, nclusters)$cluster } else if (algorithm == "kmeans") { trial$cluster <- kmeans_ClusterDefinition(coordinates, nclusters)$cluster } else { stop("unknown method") } # remove any pre-existing arm assignments trial$arm <- NULL CRT$trial <- trial return(CRTsp(CRT)) } #' Convert lat long co-ordinates to x,y #' #' \code{latlong_as_xy} converts co-ordinates expressed as decimal degrees into x,y #' @param trial A trial dataframe or list of class \code{"CRTsp"} containing latitudes and longitudes in decimal degrees #' @param latvar name of column containing latitudes in decimal degrees #' @param longvar name of column containing longitudes in decimal degrees #' @details The output object contains the input locations replaced with Cartesian #' coordinates in units of km, centred on (0,0), corresponding to using the equirectangular projection #' (valid for small areas). Other data are unchanged. 
#' @returns A list of class \code{"CRTsp"} containing the following components: #' \tabular{lll}{ #' \code{geom_full} \tab list: \tab summary statistics describing the site \cr #' \code{trial} \tab data frame: \tab rows correspond to geolocated points, as follows:\cr #' \tab \code{x} \tab numeric vector: x-coordinates of locations \cr #' \tab \code{y} \tab numeric vector: y-coordinates of locations \cr #' \tab \code{...} \tab other objects included in the input \code{"CRTsp"} object or data frame \cr #' } #' @export #' @examples #' examplexy <- latlong_as_xy(readdata("example_latlong.csv")) #' latlong_as_xy <- function(trial, latvar = "lat", longvar = "long") { CRT <- CRTsp(trial) trial <- CRT$trial colnames(trial)[colnames(trial) == latvar] <- "lat" colnames(trial)[colnames(trial) == longvar] <- "long" # scalef is the number of degrees per kilometer scalef <- 180/(6371*pi) centroid <- list(lat = mean(trial$lat), long = mean(trial$long)) trial$y <- (trial$lat - centroid$lat)/scalef trial$x <- (trial$long - centroid$long) * cos(trial$lat * pi/180)/scalef drops <- c("lat", "long") trial <- trial[, !(names(trial) %in% drops)] CRT <- CRTsp(trial, design = NULL) CRT$geom_full$centroid <- centroid return(CRT) } #' Anonymize locations of a trial site #' #' \code{anonymize_site} transforms coordinates to remove potential identification information. #' @param trial \code{"CRTsp"} object or trial data frame with co-ordinates of households #' @param ID name of column used as an identifier for the points #' @param latvar name of column containing latitudes in decimal degrees #' @param longvar name of column containing longitudes in decimal degrees #' @returns A list of class \code{"CRTsp"}. #' @export #' @details #' The coordinates are transformed to support confidentiality of #' information linked to households by replacing precise geo-locations with transformed co-ordinates which preserve distances #' but not positions. The input may have either \code{lat long} or \code{x,y} coordinates. #' The function first searches for any \code{lat long} co-ordinates and converts these to \code{x,y} #' Cartesian coordinates. These are then are rotated by a random angle about a random origin. The returned object #' has transformed co-ordinates re-centred at the origin. Centroids stored in the \code{"CRTsp"} object are removed. #' Other data are unchanged. 
#' @examples #' #Rotate and reflect test site locations #' transformedTestlocations <- anonymize_site(trial = readdata("exampleCRT.txt")) anonymize_site <- function(trial, ID = NULL, latvar = "lat", longvar = "long") { # Local data from study area (ground survey and/or satellite # images) random rotation angle CRT <- CRTsp(trial) trial <- CRT$trial if (latvar %in% colnames(trial)) { CRT <- latlong_as_xy(trial, latvar = latvar, longvar = longvar) trial <- CRT$trial } theta <- 2 * pi * runif(n = 1) x <- trial$x y <- trial$y rangex <- max(x) - min(x) rangey <- max(y) - min(y) translation <- c(rangex * rnorm(n = 1), rangey * rnorm(n = 1)) xy <- t(matrix(c(x, y), ncol = 2, nrow = length(x))) xytranslated <- xy + translation rotation <- matrix(c(cos(theta), sin(theta), -sin(theta), cos(theta)), nrow = 2, ncol = 2) # Rotate xytrans <- rotation %*% xytranslated # Recentre on origin recentred <- xytrans - c(mean(xytrans[1, ]), mean(xytrans[2, ])) trial$x <- recentred[1, ] trial$y <- recentred[2, ] # Remove ID variable if (!is.null(ID)) { colnames(trial)[colnames(trial) == ID] <- "ID" trial$ID <- NULL } # Remove centroid information CRT$geom_full$centroid <- NULL CRT$trial <- trial return(CRTsp(CRT)) } #' Read example dataset #' #' \code{readdata} reads a file from the package library of example datasets #' #' @param filename name of text file stored within the package #' @return R object corresponding to the text file #' @details The input file name should include the extension (either .csv or .txt). #' The resulting object is a data frame if the extension is .csv. #' @export #' @examples #' exampleCRT <- readdata('exampleCRT.txt') #' readdata <- function(filename) { fname <- eval(filename) extdata <- system.file("extdata", package = "CRTspat") if (unlist(gregexpr("mesh", fname)) > 0) { # The mesh was stored using saveRDS e.g. 
# library(Matrix) # saveRDS(inla_mesh,file = "inst/extdata/examplemesh100.rds") robject <- readRDS(file = paste0(extdata, "/", fname)) } else if (unlist(gregexpr("analysis", fname)) > 0) { # Analysis objects should be stored using 'dump' but are easy to reproduce sourced <- load(file = paste0(extdata, "/", fname)) robject <- sourced$value } else if (unlist(gregexpr(".csv", fname)) > 0) { robject <- read.csv(file = paste0(extdata, "/", fname), row.names = NULL) # remove variable 'X' if it is present robject$X <- NULL } else if (unlist(gregexpr(".txt", fname)) > 0) { sourced <- source(file = paste0(extdata, "/", fname)) robject <- sourced$value } if (unlist(gregexpr("CRT", fname)) > 0) robject <- CRTsp(robject) return(robject) } TSP_ClusterDefinition <- function(coordinates, h, nclusters, reuseTSP) { if (!"path" %in% colnames(coordinates) | !reuseTSP) { # Code originally from Silkey and Smith, SolarMal # Order the coordinates along an optimised travelling # salesman path dist_coordinates <- dist(coordinates, method = "euclidean") tsp_coordinates <- TSP::TSP(dist_coordinates) # object of class TSP tsp_coordinates <- TSP::insert_dummy(tsp_coordinates) tour <- TSP::solve_TSP(tsp_coordinates, "repetitive_nn") #solves TSP, expensive path <- TSP::cut_tour(x = tour, cut = "dummy") coordinates$path <- path } # order coordinates coordinates$order <- seq(1:nrow(coordinates)) coordinates <- coordinates[order(coordinates$path), ] n1 <- (nclusters - 1) * h nclusters_1 <- nclusters - 1 # The last cluster may be a different size (if h is not a # factor of the population size) ) coordinates$cluster <- NA coordinates$cluster[1:n1] <- c(rep(1:nclusters_1, each = h)) #add cluster assignment coordinates$cluster[which(is.na(coordinates$cluster))] <- nclusters coordinates <- coordinates[order(coordinates$order), ] return(coordinates) } NN_ClusterDefinition <- function(coordinates, h, nclusters) { # algorithm is inspired by this website: ??? (comment from Lea) # initialize cluster, calculate euclidean distance dist_coordinates <- as.matrix(dist(coordinates, method = "euclidean")) coordinates$cluster <- NA nclusters_1 <- nclusters - 1 for (i in 1:nclusters_1) { # find unassigned coordinates cluster_unassigned <- which(is.na(coordinates$cluster)) dist_coordinates_unassigned <- dist_coordinates[cluster_unassigned, cluster_unassigned] cluster_na <- rep(NA, length(cluster_unassigned)) # find the coordinate furthest away from all the others index <- which.max(rowSums(dist_coordinates_unassigned)) # find the n nearest neighbors of index cluster_na[head(order(dist_coordinates_unassigned[index, ]), h)] <- i coordinates$cluster[cluster_unassigned] <- cluster_na } # The last cluster may be a different size (if h is not a # factor of the population size) ) coordinates$cluster[which(is.na(coordinates$cluster))] <- nclusters return(coordinates) } kmeans_ClusterDefinition <- function(coordinates, nclusters) { # kmeans as implemented in R base km <- kmeans(x = coordinates, centers = nclusters) coordinates$cluster <- km$cluster return(coordinates) } map_scale_to_link <- function(scale) { scales <- c("proportion", "count", "continuous") links <- c("logit", "log", "identity") link <- links[which(scale == scales)] return(link)}
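# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the package): the equirectangular projection
# used by latlong_as_xy(), written out for a hypothetical pair of coordinates.
# 'scalef' is the number of degrees per kilometre on a sphere of radius 6371 km;
# longitude differences are scaled by cos(latitude) before conversion.
lat <- c(-3.05, -3.07)          # hypothetical latitudes (decimal degrees)
long <- c(37.35, 37.38)         # hypothetical longitudes (decimal degrees)
scalef <- 180/(6371 * pi)       # degrees per km, as in latlong_as_xy()
centroid <- list(lat = mean(lat), long = mean(long))
y <- (lat - centroid$lat)/scalef
x <- (long - centroid$long) * cos(lat * pi/180)/scalef
cbind(x, y)                     # Cartesian coordinates in km, centred near (0, 0)
# ---------------------------------------------------------------------------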
/scratch/gouwar.j/cran-all/cranData/CRTspat/R/utils.R
---
title: "Use Case 01: Algorithmic specification of clusters"
output:
  rmarkdown::html_vignette:
    toc: true
vignette: >
  %\VignetteIndexEntry{Use Case 01: Algorithmic specification of clusters}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

The way in which clusters are assigned in cluster randomized trials (CRTs) can profoundly affect the efficiency of the trial. Allocating clusters by algorithm makes it easy to generate alternative cluster allocations for any given trial site, both for real-world trials and for exploring this neglected aspect of trial design in simulations. The `CRTspat` package contains R functions developed for this purpose.

Input to the package is in the form of a data frame with one record for each geo-location in a trial area. Most of the functions of the package return a list of class `CRTsp`, which consists of the input data frame augmented with additional vectors (e.g. coding clusters, arms, or buffer zones), and lists containing descriptors of the dataset. Objects of class `CRTsp` can also be used as input to most of the functions. After each step, `summary()` can be used to provide a description of the output `CRTsp` object and `plotCRT()` can be used to output a descriptive plot, or a map of the locations, clusters, arms, buffer zones or other geographically structured analysis results.

+ In general the package functions do not expect to find repeated outcome values for the same location. The `aggregateCRT()` function is used to aggregate data with the same co-ordinates so that this condition is satisfied. In particular, if the input database contains outcome data (e.g. if it contains baseline survey results), these should be provided in the form of a numerator `base_num` and denominator `base_denom` for each record. These values will be summed by `aggregateCRT()` over all records with the same co-ordinates. An object of class `CRTsp` is output.
+ The `specify_clusters()` function carries out algorithmic assignment of clusters and outputs a `CRTsp` object augmented with the cluster assignments. One of three different algorithms must be selected:
  + `algorithm = "NN"` implements a nearest neighbour algorithm. One household is selected and a cluster of size k is constructed by adding its k-1 nearest neighbours (NN). These points are then removed from the data set, and this step is repeated until all the points have been allocated. [This algorithm](http://jmonlong.github.io/Hippocamplus/2018/06/09/cluster-same-size/#methods) will often lead to connected clusters, in a "fish scale" manner. This is the default option.
  + `algorithm = "TSP"` implements the `repetitive_nn` option of the [`TSP` package](https://CRAN.R-project.org/package=TSP) for solving the travelling salesman problem. This finds an efficient path through the study locations. Clusters are formed by grouping the required number of locations sequentially along the path. Note that this is not guaranteed to give rise to contiguous clusters.
  + `algorithm = "kmeans"` implements a [k-means algorithm](https://en.wikipedia.org/wiki/K-means_clustering) that aims to partition the locations into the required number of clusters in which each observation belongs to the cluster with the nearest cluster centroid. k-means clustering minimizes within-cluster variances (squared Euclidean distances) but does not necessarily give equal-sized clusters.

  Irrespective of the algorithm, the target number of points allocated to each cluster is specified by the parameter `h`.
+ The `randomizeCRT()` function carries out a simple randomization of clusters to arms, and outputs a `CRTsp` object augmented with the assignments. (If baseline data are available matched pair randomization is available as an option) The units to be randomized will usually be households, but the algorithms can be used to generate clusters with equal geographical areas by randomizing pixels. In this case a dataset containing x,y coordinates for each pixel should be used as input. The example uses locations and baseline test positivity data from a site in Kenya. The input dataset contains a single record for each test so there are multiple records of test positivity for many locations. ```r library(CRTspat) example_locations <- readdata('example_site.csv') # assign the denominator to the baseline data example_locations$base_denom <- 1 # convert to a `CRTsp` object exampleCRT <- CRTsp(example_locations) summary(exampleCRT) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.31 -0.24 0.00 1.35 5.16 ## y -5.08 -2.84 -0.17 0.00 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Geolocation of centroid (radians): ## latitude: longitude: ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Not aggregated. Total records: 3172. Unique locations: 1181 ## Available clusters (across both arms) Not assigned ## No randomization - ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r # Aggregate data for multiple observations for the same location Only the (x,y) co-ordinates and numerical # auxiliary variables example <- aggregateCRT(exampleCRT, auxiliaries = c("RDT_test_result", "base_denom")) summary(example) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) Not assigned ## No randomization - ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r # Plot map of locations plotCRT(example, map = TRUE, showLocations = TRUE, maskbuffer = 0.2) ``` <p> <img src="example1a.r-1.png"> <br> <em>Fig 1.1 Map of locations</em> </p> In the example shown here a target cluster size of 50 locations is set, but the heterogeneity in spatial density of the locations leads to considerable variation in the number of locations assigned to each cluster. ```r example_clustered <- specify_clusters(trial = example, h = 50, algorithm = 'NN') summary(example_clustered) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. 
: ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) 24 ## Per cluster mean number of points 49.2 ## Per cluster s.d. number of points 3.9 ## No randomization - ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r plotCRT(example_clustered, map = TRUE, showClusterLabels = TRUE, maskbuffer = 0.2, labelsize = 2) ``` <p> <img src="example1b.r-1.png"> <br> <em>Fig 1.2 Map of clusters</em> </p> A smoothed map of the baseline prevalence surface is produced using a geostatistical model in [R-INLA](https://www.r-inla.org/). Details of the implementation in `CRTspat` are in the [documentation of `CRTanalysis`](../reference/CRTanalysis.html) and of [Use Case 5](Usecase5.html). ```r library(Matrix) examplemesh100 <- readdata("examplemesh100.rds") baselineanalysis <- CRTanalysis(trial=example_clustered, method = 'INLA', link='logit', baselineOnly = TRUE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom", clusterEffects = FALSE, spatialEffects = TRUE, requireMesh = TRUE, inla_mesh = examplemesh100) ``` ``` ## Analysis of baseline only, using INLA ``` ```r plotCRT(baselineanalysis, map = TRUE, fill = 'prediction') ``` <p> <img src="example1c.r-1.png"> <br> <em>Fig 1.3 Smoothed surface of baseline prevalence</em> </p> A summary of the baseline prevalence at cluster level is used in this example to match clusters on baseline prevalence and then generate a randomisation based on matched pairs. ```r example_randomized <- randomizeCRT(example_clustered, matchedPair = TRUE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom") summary(example_randomized) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) 24 ## Per cluster mean number of points 49.2 ## Per cluster s.d. number of points 3.9 ## S.D. of distance to nearest discordant location (km): 0.56 ## Cluster randomization: Matched pairs randomized ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom base_num pair ``` ```r plotCRT(example_randomized, map = TRUE, maskbuffer=0.2, legend.position=c(0.8,0.8)) ``` <p> <img src="example1d.r-1.png"> <br> <em>Fig 1.4 Map of arm assignments</em> </p>
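The spread of cluster sizes depends on the algorithm chosen. The following sketch (not evaluated above, and assuming the `example` object created earlier in this vignette) compares the standard deviation of cluster sizes across the three algorithms; note that the `TSP` option can be slow on larger sites.

```r
algorithms <- c("NN", "kmeans", "TSP")
size_sd <- sapply(algorithms, function(alg) {
  clustered <- specify_clusters(trial = example, h = 50, algorithm = alg)
  sd(table(clustered$trial$cluster))
})
size_sd
```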
/scratch/gouwar.j/cran-all/cranData/CRTspat/inst/doc/Usecase1.Rmd
--- title: "Use Case 02: Simulation of trials with geographical spillover" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 02: Simulation of trials with geographical spillover} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- Effects of settlement patterns, choices of cluster size and buffer widths, and the extent of spillover between arms on the outcomes of CRTs do not lend themselves to mathematical analysis. Simulations of trials are used to explore the effects of these variables on trial power and on the robustness of statistical methodologies. Trials can be simulated using the `simulateCRT` function, which augments a `trial` data frame (created externally) or object of class `CRTsp` (created by package functions) with simulated outcome data. The input object must be given location information and both cluster and arm assignments (see [Use Case 1](Usecase1.html)) (or the package can generate these if the objective is purely simulation. Information about the underlying spatial pattern of disease is used in the form of the intra-cluster correlation of the outcome, which is input to the simulation as variable `ICC_inp`, and of the `propensity`. The former takes a single value for the chosen design. The latter takes a positive real value for each location. In the case of malaria, `propensity` can be thought of as measuring exposure to infectious mosquitoes. `ICC_inp` and `propensity` may either be estimated from other datasets or supplied by the user. The behaviour of the function depends on which variables are supplied, and the value of `generateBaseline`, as follows: | Data supplied by the user | Function behaviour | |:-------------------|:--------------------------| |`propensity` supplied by user|Baseline data are created by sampling around `propensity`| |Baseline data are supplied by user and `propensity` is not supplied |`propensity` is created from the baseline data| |Neither baseline data nor `propensity` are supplied |`propensity` is generated using normal kernels, with the bandwidth adjusted to achieve the input value of the `ICC_inp` (after the further smoothing stage to simulate spillover (see below))| The effect of intervention is simulated as a fixed percentage reduction in the `propensity`. Contamination or spillover between trial arms is then modelled as a additional smoothing process applied to the intervention-adjusted `propensity` via a further bivariate normal kernel. In the case of mosquito borne disease this is proposed as an approximation to the effect of mosquito movement. The degree of spillover is specified either as a spillover interval with the `theta_inp` parameter, or as `sd`, the bandwidth of the corresponding normal kernel. If both are provided then it is the value of `theta_inp` that is used. #### Example with baseline data provided as proportions ```r library(CRTspat) set.seed(1234) example_locations <- readdata('example_site.csv') example_locations$base_denom <- 1 library(dplyr) example_randomized <- CRTsp(example_locations) %>% aggregateCRT(auxiliaries = c("RDT_test_result", "base_denom")) %>% specify_clusters(h = 50, algorithm = 'NN') %>% randomizeCRT(matchedPair = FALSE) summary(example_randomized) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. 
: ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) 24 ## Per cluster mean number of points 49.2 ## Per cluster s.d. number of points 3.9 ## S.D. of distance to nearest discordant location (km): 1.05 ## Cluster randomization: Independently randomized ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r plotCRT(example_randomized, map = TRUE, legend.position = c(0.8, 0.8)) example2a <- simulateCRT(example_randomized, effect = 0.8, outcome0 = 0.5, generateBaseline = FALSE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom", ICC_inp = 0.05, theta_inp = 0.8) summary(example2a) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) 24 ## Per cluster mean number of points 49.2 ## Per cluster s.d. number of points 3.9 ## S.D. of distance to nearest discordant location (km): 1.05 ## Cluster randomization: Independently randomized ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom denom propensity num ``` ```r library(Matrix) examplemesh100 <- readdata("examplemesh100.rds") example2aanalysis <- CRTanalysis(trial=example2a, method = 'T') summary(example2aanalysis) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: T ## Link function: logit ## Model formula: arm + (1 | cluster) ## No modelling of spillover ## Estimates: Control: 0.376 (95% CL: 0.286 0.475 ) ## Intervention: 0.195 (95% CL: 0.133 0.278 ) ## Efficacy: 0.48 (95% CL: 0.28 0.774 ) ## Coefficient of variation: 39.7 % (95% CL: 29.9 59.8 ) ## ## P-value (2-sided): 0.0036424 ``` ```r plotCRT(example2aanalysis) example2aINLA <- CRTanalysis(trial=example2a, method = 'INLA', link='logit', cfunc = 'Z', clusterEffects = FALSE, spatialEffects = TRUE, requireMesh = TRUE, inla_mesh = examplemesh100) plotCRT(example2aINLA, map = TRUE, fill = 'prediction', showClusterBoundaries = TRUE, legend.position = c(0.8, 0.8)) ``` <p> <img src="example2a.r-1.png" > <br> <em>Fig 2.1 Map of allocations of clusters to arms</em> </p> <p> <img src="example2a.r-2.png" > <br> <em>Fig 2.2 Plot of data by distance to other arm</em> </p> <p> <img src="example2a.r-3.png" > <br> <em>Fig 2.3 Smoothed outcome from geostatistical model</em> </p> #### Example with infectiousness proxy surface generated externally ```r set.seed(1234) # Simulate a site with 2000 locations new_site <- CRTsp(geoscale = 2, locations=2000, kappa=3, mu=40) # propensity surface generated as an arbitrary linear function of x the co-ordinate new_site$trial$propensity <- 0.5*new_site$trial$x - min(new_site$trial$x)+1 library(dplyr) example2b<- CRTsp(new_site) %>% specify_clusters(h = 40, algorithm = 'NN') %>% randomizeCRT(matchedPair = FALSE) %>% simulateCRT(effect = 0.8, outcome0 = 0.5, generateBaseline = TRUE, ICC_inp = 0.05, 
theta_inp = 0.5) ``` ``` ## ## ===================== SIMULATION OF CLUSTER RANDOMISED TRIAL ================= ``` ``` ## Estimating the smoothing required to achieve the target ICC of 0.05 ``` ``` ## bandwidth: 1 ICC = 0.0460233924407313 loss = 1.58134076804332e-05 ``` ```r summary(example2b) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -8.73 -5.16 -1.03 0.00 5.17 11.26 ## y -9.55 -4.42 -0.58 0.00 4.56 10.45 ## Total area (within 0.2 km of a location) : 181 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 2000 ## Available clusters (across both arms) 50 ## Per cluster mean number of points 40 ## Per cluster s.d. number of points 0 ## S.D. of distance to nearest discordant location (km): 1.33 ## Cluster randomization: Independently randomized ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- denom propensity num base_denom base_num ``` ```r results2b <- CRTanalysis(example2b, method = 'GEE') ``` ``` ## No non-linear parameter. No fixed effects of distance - ``` ```r summary(results2b) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: GEE ## Link function: logit ## Model formula: arm ## No modelling of spillover ## Estimates: Control: 0.461 (95% CL: 0.402 0.521 ) ## Intervention: 0.139 (95% CL: 0.113 0.17 ) ## Efficacy: 0.698 (95% CL: 0.615 0.764 ) ## Coefficient of variation: 40.5 % (95% CL: 33 52.7 ) ## Intracluster correlation (ICC) : 0.046 (95% CL: 0.019 0.073 ) ## ``` ```r plotCRT(example2b, map = TRUE, fill = 'clusters', showClusterLabels = TRUE, maskbuffer = 0.5) ``` <p> <img src="example2b.r-1.png" > <br> <em>Fig 2.4 Map of clusters in simulated trial</em> </p> #### Example with baseline generated from user-provided values of the overall initial prevalence and ICC ```r set.seed(1234) # use co-ordinates, cluster and arm assignments, and baseline data from `example_simulated` example2c<- CRTsp(geoscale = 2, locations=2000, kappa=3, mu=40) %>% specify_clusters(h = 40, algorithm = 'NN') %>% randomizeCRT(matchedPair = FALSE) %>% simulateCRT(effect = 0.8, outcome0 = 0.5, generateBaseline = TRUE, baselineNumerator = 'base_num', baselineDenominator = 'base_denom', ICC_inp = 0.08, theta_inp = 0.2) ``` ``` ## ## ===================== SIMULATION OF CLUSTER RANDOMISED TRIAL ================= ``` ``` ## Estimating the smoothing required to achieve the target ICC of 0.08 ``` ``` ## bandwidth: 0.156946255820714 ICC = 0.0824323247815882 loss = 5.91620384312814e-06 ``` ```r results2c <- CRTanalysis(example2c, method = 'GEE') ``` ``` ## No non-linear parameter. No fixed effects of distance - ``` ```r summary(results2c) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: GEE ## Link function: logit ## Model formula: arm ## No modelling of spillover ## Estimates: Control: 0.381 (95% CL: 0.309 0.458 ) ## Intervention: 0.219 (95% CL: 0.183 0.26 ) ## Efficacy: 0.425 (95% CL: 0.25 0.557 ) ## Coefficient of variation: 51.1 % (95% CL: 41.1 68.4 ) ## Intracluster correlation (ICC) : 0.0824 (95% CL: 0.0417 0.123 ) ## ```
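
As noted at the start of this use case, the degree of spillover can alternatively be specified via the bandwidth of the smoothing kernel rather than as a spillover interval. A minimal sketch of this variant, assuming the `sd` argument of `simulateCRT()` takes the kernel bandwidth in km as described above (re-using `example_randomized` from the first example):

```r
# Simulate the same trial, but specify spillover by the kernel bandwidth (sd, in km)
# rather than by the spillover interval (theta_inp)
example2a_sd <- simulateCRT(example_randomized,
                            effect = 0.8,
                            outcome0 = 0.5,
                            generateBaseline = FALSE,
                            baselineNumerator = "RDT_test_result",
                            baselineDenominator = "base_denom",
                            ICC_inp = 0.05,
                            sd = 0.2)   # assumed argument name: bandwidth of the normal kernel
summary(example2a_sd)
```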
--- title: "Use Case 03: Estimation of intracluster correlations (ICC) by cluster size" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 03: Estimation of intracluster correlations (ICC) by cluster size} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- The Intracluster Correlation Coefficient (ICC) is one of the inputs to standard power and sample size calculations for CRTs. Trialists often have difficulty identifying an appropriate source for their ICC calculations, or use a value from a source of questionable relevance. The [`CRTanalysis`](../reference/CRTanalysis.html) function has an option to use Generalised Estimating Equations, which provide an estimate of the ICC. This can be applied to baseline data, and hence to different cluster configurations. This makes it possible to estimate the ICC which is appropriate for any given cluster definition, in the chosen geography, assuming baseline data are available. ```r # use the same dataset as for Use Case 1. library(CRTspat) example_locations <- readdata('example_site.csv') example_locations$base_denom <- 1 library(dplyr) example <- CRTsp(example_locations) %>% aggregateCRT(auxiliaries = c("RDT_test_result", "base_denom")) summary(example) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) Not assigned ## No randomization - ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r # randomly sample an array of values of c (use a small sample size for testing # the plots were produced with n=5000) set.seed(5) c_vec <- round(runif(50, min = 6, max = 150)) # a user function randomizes and analyses each simulated trial CRTscenario3 <- function(c, CRT) { ex <- specify_clusters(CRT, c = c, algo = "kmeans") %>% randomizeCRT() GEEanalysis <- CRTanalysis(ex, method = "GEE", baselineOnly = TRUE, excludeBuffer = FALSE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom") locations <- GEEanalysis$description$locations ICC <- GEEanalysis$pt_ests$ICC value <- c(c = c, ICC = ICC, mean_h = locations/c) return(value) } # The results are collected in a data frame results <- t(sapply(c_vec, FUN = CRTscenario3, simplify = "array", CRT = example)) %>% data.frame() ``` There is a clear downward trend in the ICC estimates, as cluster size increases (Figure 3.1). The ICC expected for a trial in this, or similar, geographies can be read off the curve. Note that the ICC is expected to vary not just with cluster size, but also to vary between different outcomes. <p> <img src="example3b.r-1.png" > <br> <em>Fig 3.1 Intracluster correlation by size of cluster</em> </p>
--- title: "Use Case 04: Estimation of optimal cluster size for a trial with pre-determined buffer width" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 04: Estimation of optimal cluster size for a trial with pre-determined buffer width} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- Both the number and size of clusters affect power calculations for CRTs: + If there are no logistical constraints, and spillover can be neglected (as in trials of vaccines that enrol only small proportions of the population), there is no need for a buffer zone and the most efficient design is an individually randomized CRT (i.e. a cluster size of one). In general, a trial with many small clusters has more power than one with the same number of individuals enrolled in larger clusters. + If spillover is an issue, and it is decided to address this by including buffer zones, then the number of individuals included in the trial is less than the total population. Enumeration and intervention allocation are still required for the full trial area, so there can be substantial resource implications if many people are included in the buffers. There is a trade-off between increasing power by creating many small clusters (leading to a large proportion of locations in buffer zones) and reducing the proportion of locations in buffer zones by using large clusters. The `CRTspat` package provides functions for analysing this trade-off for any site for which baseline data are available. The example shown here uses the baseline prevalence data introduced in [Use Case 1](Usecase1.html). The trial is assumed to plan to be based on the same outcome of prevalence, and to be powered for an efficacy of 30%. A set of different algorithmic cluster allocations are carried out with different numbers of clusters. Each allocation is randomized and buffer zones are specified with the a pre-specified width (in this example, 0.5 km). The ICC is computed from the baseline data, excluding the buffer zones, and corresponding power calculations are carried out. The power is calculated and plotted as a function of cluster size. ```r # use the same dataset as for Use Case 1. 
library(CRTspat) example_locations <- readdata('example_site.csv') example_locations$base_denom <- 1 exampleCRT <- CRTsp(example_locations) example <- aggregateCRT(exampleCRT, auxiliaries = c("RDT_test_result", "base_denom")) # randomly sample an array of numbers of clusters to allocate set.seed(5) c_vec <- round(runif(20, min = 6, max = 60)) CRTscenario <- function(c, CRT, buffer_width) { ex <- specify_clusters(CRT, c = c, algo = "kmeans") %>% randomizeCRT() %>% specify_buffer(buffer_width = buffer_width) GEEanalysis <- CRTanalysis(ex, method = "GEE", baselineOnly = TRUE, excludeBuffer = TRUE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom") locations <- GEEanalysis$description$locations ex_power <- CRTpower(trial = ex, effect = 0.3, yC = GEEanalysis$pt_ests$controlY, outcome_type = "p", N = GEEanalysis$description$sum.denominators/locations, c = c, ICC = GEEanalysis$pt_ests$ICC) value <- c(c_full = c, c_core = ex_power$geom_core$c, clustersRequired = ex_power$geom_full$clustersRequired, power = ex_power$geom_full$power, mean_h = ex_power$geom_full$mean_h, locations = locations, ICC = GEEanalysis$pt_ests$ICC) names(value) <- c("c_full", "c_core", "clustersRequired", "power", "mean_h", "locations", "ICC") return(value) } results <- t(sapply(c_vec, FUN = CRTscenario, simplify = "array", CRT = example, buffer_width = 0.5)) %>% data.frame() ``` Each simulated cluster allocation is different, as are the randomizations. This leads to variation in the locations of the buffer zones, so the number of core clusters is a stochastic function of the number of clusters randomised (c). There is also variation in the estimated Intracluster Correlation (see [Use Case 3](Usecase3.html)) for any value of c. ```r total_locations <- example$geom_full$locations results$proportion_included <- results$c_core * results$mean_h * 2/total_locations results$corelocations_required <- results$clustersRequired * results$mean_h results$totallocations_required <- with(results, total_locations/locations * corelocations_required) library(ggplot2) theme_set(theme_bw(base_size = 14)) ggplot(data = results, aes(x = c_full, y = c_core)) + geom_smooth() + xlab("Clusters allocated (per arm)") + ylab("Clusters in core (per arm)") + geom_segment(aes(x = 5, xend = 35, y = 18.5, yend = 18.5), arrow = arrow(length = unit(1, "cm")), lwd = 2, color = "red") ``` <p> <img src="example4b.r-1.png" > <br> <em>Fig 4.1 Numbers of clusters</em> </p> The number of clusters in the core area increases with the number of clusters allocated, until the cluster size becomes small enough for entire clusters to be swallowed by the buffer zones. This can be illustrated by the contrast in the core areas randomised with c = 6 and c = 40 (Figures 4.2 and 4.3). 
```r set.seed(7) library(dplyr) example6 <- specify_clusters(example, c = 6, algo = "kmeans") %>% randomizeCRT() %>% specify_buffer(buffer_width = 0.5) plotCRT(example6, map = TRUE, showClusterBoundaries = TRUE, showClusterLabels = TRUE, labelsize = 2, maskbuffer = 0.2) example40 <- specify_clusters(example, c = 40, algo = "kmeans") %>% randomizeCRT() %>% specify_buffer(buffer_width = 0.5) plotCRT(example40, map = TRUE, showClusterBoundaries = TRUE, showClusterLabels = TRUE, labelsize = 2, maskbuffer = 0.2) ``` <p> <img src="example4c.r-1.png"> <br> <em>Fig 4.2 Map of clusters with c = 6</em> </p> <p> <img src="example4c.r-2.png"> <br> <em>Fig 4.3 Map of clusters with c = 40</em> </p> Beyond this point, increasing the number of clusters allocated in the fixed area (by making them smaller) does not add to the total number of clusters. In this example the maximum is achieved when the input c is about 35 and the output c is 18.5. ```r ggplot(data = results, aes(x = c_core, y = mean_h)) + geom_smooth() + xlab("Clusters in core (per arm)") + ylab("Mean cluster size") ``` <p> <img src="example4d.r-1.png" > <br> <em>Fig 4.4 Size of clusters</em> </p> The size of clusters decreases with the number allocated (Figure 4.4), but does not fall much below 10 locations on average in the example because smaller clusters are likely to be absorbed into the buffer zones. ```r ggplot(data = results, aes(x = c_core, y = power)) + geom_smooth() + xlab("Clusters in core (per arm)") + ylab("Power") ``` <p> <img src="example4e.r-1.png" > <br> <em>Fig 4.5 Power achievable with given site</em> </p> The power increases approximately linearly with the number of clusters in the core (Figure 4.5), but the site is too small for an adequate power to be achieved with this size of buffer, irrespective of the cluster size. Because the buffering leads to a maximum in the cluster density (number of clusters per unit area), so does the power achievable with a fixed area (Figure 4.6). ```r ggplot2::ggplot(data = results, aes(x = c_full, y = power)) + geom_smooth() + xlab("Clusters allocated (per arm)") + ylab("Power") ``` <p> <img src="example4f.r-1.png" > <br> <em>Fig 4.6 Power achievable with given site</em> </p> However the analysis also gives an estimate of how large an extended site is needed to achieve adequate power (assuming the the spatial pattern for the wider site to be similar to that of the baseline area). A minimum total number of locations required to achieve a pre-specified power (80%) is achieved at the same density of clusters as the maximum of the power estimated for the smaller, baseline site. ```r ggplot2::ggplot(data = results, aes(x = c_core, y = corelocations_required)) + geom_smooth() + xlab("Clusters in core (per arm)") + ylab("Required core locations") ``` ``` ## `geom_smooth()` using method = 'loess' and formula = 'y ~ x' ``` <p> <img src="example4g.r-1.png" > <br> <em>Fig 4.7 Number of clusters required for full trial area </em> </p> This is also at the allocation density where saturation is achieved in the number of core clusters (Figure 4.1), and where the proportion of the locations included in the core area reaches its minimum (Figure 4.8). 
```r
ggplot2::ggplot(data = results, aes(x = c_core, y = proportion_included)) +
   geom_smooth() +
   xlab("Clusters in core (per arm)") + ylab("Proportion of locations in core") +
   geom_segment(aes(x = 18, xend = 18, y = 0, yend = 0.25),
                arrow = arrow(length = unit(1, "cm")), lwd = 2, color = "red")
```

<p>
    <img src="example4h.r-1.png" >
    <br>
    <em>Fig 4.8 Proportions of locations in core</em>
</p>

#### Conclusions

With the example geography and the selected trial outcome, the most efficient trial design, conditional on a buffer width of 0.5 km, would be achieved by assigning about 30 clusters to each arm in a site of the size analysed, though about one third of these clusters would be eliminated by inclusion in the buffer zones. Even so, this would be far from sufficient to achieve adequate power. To achieve 80% power about 8,000 locations would be needed, in a larger trial area, of which about 2,400 would be in the core (sampled) parts of the clusters.

```r
ggplot2::ggplot(data = results, aes(x = c_core, y = totallocations_required)) +
   geom_smooth() +
   xlab("Clusters in core (per arm)") + ylab("Total locations required") +
   geom_segment(aes(x = 18, xend = 18, y = 0, yend = 8000),
                arrow = arrow(length = unit(1, "cm")), lwd = 2, color = "red")
```

<p>
    <img src="example4i.r-1.png" >
    <br>
    <em>Fig 4.9 Size of trial area required to achieve adequate power</em>
</p>
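
The conclusions above can also be checked directly from the `results` data frame; a minimal sketch (using the columns created earlier in this use case) that picks out the allocation minimising the total number of locations required:

```r
# Identify the allocation that minimises the total number of locations
# needed to achieve the pre-specified 80% power
best <- results[which.min(results$totallocations_required), ]
best[, c("c_full", "c_core", "mean_h", "power", "totallocations_required")]
```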
--- title: "Use Case 05: Analysis of trials (including methods for analysing spillover)" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 05: Analysis of trials (including methods for analysing spillover)} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- The [`CRTanalysis()`](../reference/CRTanalysis.html) function is a wrapper for different statistical analysis packages that can be used to analyse either simulated or real trial datasets. It is designed for use in simulation studies of different analytical methods for spatial CRTs by automating the data processing and selecting some appropriate analysis options. It does not replace conventional use of these packages. Real field trials very often entail complications that are not catered for any of the analysis options in `CRTanalysis()` and it does not aspire to carry out the full analytical workflow for a trial. It can be used as part of a wider workflow. In particular the usual object output by the statistical analysis package constitutes the `model_object` element within the `CRTanalysis` object generated by `CRTanalysis()`. This can be accessed by the usual methods (e.g `predict()`, `summary()`, `plot()`) which may be needed for diagnosing errors, assessing goodness of fit, and for identifying needs for additional analyses. ## Statistical Methods The options that can be specified using the `method` parameter in the function call are: + `method = "T"` summarises the outcome at the level of the cluster, and uses 2-sample t-tests to carry out statistical significance tests of the effect, and to compute confidence intervals for the effect size. The [t.test](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/t.test) function in the `stats` package is used. + `method = "GEE"` uses Generalised Estimating Equations to estimate the efficacy in a model with iid random effects for the clusters. An estimate of the intracluster correlation (ICC) is also provided. This uses calls to the [geepack](https://www.jstatsoft.org/article/view/v015i02) package. + `method = "LME4"` fits linear (for continuous data) or generalized linear (for counts and proportions) mixed models with iid random effects for clusters in [lme4](https://CRAN.R-project.org/package=lme4). + `method = "MCMC"` uses Markov chain Monte Carlo simulation in package [jagsUI](https://CRAN.R-project.org/package=jagsUI), which calls r-JAGS. + `method = "INLA"` uses approximate Bayesian inference via the [R-INLA package](https://www.r-inla.org/). This provides functionality for geostatistical analysis, which can be used for geographical mapping of model outputs (as illustrated in . INLA spatial analysis requires a prediction mesh. This can be generated using [`CRTspat::new_mesh()`](../reference/new_mesh().html). This can be computationally expensive, so it is recommended to compute the mesh just once for each dataset. All these analysis methods can be used to carry out a simple comparision of outcomes between trial arms. Each offers different additional functionality, and has its own limitations (see Table 5.1). Some of these limitations are specific to the options offered within `CRTanalysis()`, which does not embrace the full range of options of the packages that are 'wrapped'. These are specified using the `method` argument of the function. Table 5.1. 
Available statistical methods | `method` | Package | What the `CRTanalysis()` implementation offers |Limitations (as implemented) | |----------|---------|------------------------------------------------|-----------------------------| | `T`| [t.test](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/t.test) | P-values and confidence intervals for efficacy based on comparison of cluster means | No analysis of spillover or degree of clustering | | `GEE` | [geepack](https://www.jstatsoft.org/article/view/v015i02) | Interval estimates for efficacy and Intra-cluster correlations | No analysis of spillover or degree of clustering | | `LME4`| [lme4](https://CRAN.R-project.org/package=lme4) | Analysis of spillover | No geostatistical analysis | | `INLA` | [INLA](https://www.r-inla.org/) | Analysis of spillover, geostatistical analysis and spatially structured outputs | Computationally intensive | | `MCMC` | [jagsUI](https://CRAN.R-project.org/package=jagsUI) | Interval estimates for spillover parameters | Identifiability issues and slow convergence are possible | For the analysis of proportions, the outcome in the control arm is estimated as: $\hat{p}_{C} = \frac{1}{1 + exp(-\beta_1)}$, in the intervention arm as $\hat{p}_{I} = \frac{1}{1 + exp(-\beta_1-\beta_2)}$, and the efficacy is estimated as $\tilde{E}_{s} = 1- \frac{\tilde{p}_{I}}{\tilde{p}_{C}}$ where $\beta_1$ is the intercept term and $\beta_2$ the incremental effect associated with the intervention. `summary("<analysis>"")` is used to view the key results of the trial. To display the output from the statistical procedure that is called, try `<analysis>$model_object` or `summary("<analysis>$model_object")`. ```r library(CRTspat) example <- readdata("exampleCRT.txt") analysisT <- CRTanalysis(example, method = "T") summary(analysisT) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: T ## Link function: logit ## Model formula: arm + (1 | cluster) ## No modelling of spillover ## Estimates: Control: 0.364 (95% CL: 0.286 0.451 ) ## Intervention: 0.21 (95% CL: 0.147 0.292 ) ## Efficacy: 0.423 (95% CL: 0.208 0.727 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## ## P-value (2-sided): 0.006879064 ``` ```r analysisT$model_object ``` ``` ## ## Two Sample t-test ## ## data: lp by arm ## t = 2.9818, df = 22, p-value = 0.006879 ## alternative hypothesis: true difference in means between group control and group intervention is not equal to 0 ## 95 percent confidence interval: ## 0.2332638 1.2989425 ## sample estimates: ## mean in group control mean in group intervention ## -0.5561662 -1.3222694 ``` ## Assessing model fit The `model = "LME4"` option outputs the deviance of the model and the Akaike information criterion (AIC), which can be used to select the best fitting model. The deviance information criterion (DIC) and Bayesian information criterion (BIC) perform the same role for the Bayesian methods (`"INLA"`, and `"MCMC"`). The comparison of results with `cfunc = "X"` and `cfunc = "Z"` is used to assess whether the intervention effect is likely to be due to chance. With `method = "T"`, `cfunc = "X"` provides a significance test of the intervention effect directly. The models with spillover (see below) can be compared by that with `cfunc = "X"` to evaluate whether spillover has led to an important bias. 
## Spillover

`CRTanalysis()` provides options for analysing spillover effects either as a function of a Euclidean distance or as a function of a surround measure:

#### Models that do not consider spillover

Models that do not consider spillover can be fitted using options `Z` and `X`. These are included both to allow conventional analyses (see above), and also to enable model selection using likelihood ratio tests, the Akaike information criterion (AIC), the deviance information criterion (DIC) or the Bayesian information criterion (BIC).

#### Spillover as a function of distance

These methods require a measure of distance from the boundary between the trial arms, with locations in the control arm assigned negative values, and those in the intervention arm assigned positive values. The functional form for this relationship is specified by the value of `cfunc` (Table 5.2).

Table 5.2. Available spillover functions

| `cfunc` | Description | Formula for $P\left( d \right)$ | Compatible `method`(s) |
|---------|------------------|--------------------------|--------------------------|
| `Z`| No intervention effect | $P\left( d \right) = 0$ | `GEE` `LME4` `INLA` `MCMC` |
| `X`| Simple intervention effect | $\begin{matrix} P\left( d \right) = 0\ for\ d < 0 \\ P\left( d \right) = 1\ for\ d > 0 \\ \end{matrix}$ | `T` `GEE` `LME4` `INLA` `MCMC` |
| `L`| inverse logistic (sigmoid)| $P\left( d \right) = \frac{1}{1 + exp\left( - d/S \right)}$ | `LME4` `INLA` `MCMC` |
| `P`| inverse probit (error function) | $P\left( d \right) = \frac{1}{2}\left(1 + erf\left(\frac{d}{S\sqrt2}\right)\right)$ | `LME4` `INLA` `MCMC` |
| `S`| piecewise linear | $\begin{matrix} P\left( d \right) = 0\ for\ d < -S/2 \\ P\left( d \right) = \left(S/2 + d \right)/S\ for\ -S/2 < d < S/2 \\ P\left( d \right) = 1\ for\ d > S/2 \\ \end{matrix}$ | `LME4` `INLA` `MCMC` |
| `R`| rescaled linear | $P\left( d \right) = \frac{d - min(d)}{max(d) - min(d)}$ | `LME4` `INLA` `MCMC` |

`cfunc` options `P`, `L` and `S` lead to non-linear models in which the spillover scale parameter (`S`) must be estimated. This is done by selecting `scale_par` using a one-dimensional optimisation of the goodness of fit of the model in `stats::optimize()`. The different values for `cfunc` lead to the fitted curves shown in Figure 5.1. The light blue shaded part of the plot corresponds to the spillover interval in those cases where this is estimated.
```r analysisLME4_Z <- CRTanalysis(example, method = "LME4", cfunc = "Z") summary(analysisLME4_Z) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Model formula: (1 | cluster) ## No comparison of arms ## Estimates: Control: 0.285 (95% CL: NA ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1387.609 ## AIC : 1391.609 ``` ```r analysisLME4_X <- CRTanalysis(example, method = "LME4", cfunc = "X") summary(analysisLME4_X) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Model formula: arm + (1 | cluster) ## No modelling of spillover ## Estimates: Control: 0.366 (95% CL: 0.292 0.449 ) ## Intervention: 0.216 (95% CL: 0.162 0.281 ) ## Efficacy: 0.41 (95% CL: 0.165 0.584 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1379.898 ## AIC : 1385.898 ``` ```r analysisLME4_P <- CRTanalysis(example, method = "LME4", cfunc = "P") summary(analysisLME4_P) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## Estimated scale parameter: 0.45 ## Model formula: pvar + (1 | cluster) ## Error function model for spillover ## Estimates: Control: 0.418 (95% CL: 0.331 0.509 ) ## Intervention: 0.186 (95% CL: 0.136 0.25 ) ## Efficacy: 0.553 (95% CL: 0.327 0.703 ) ## spillover interval(km): 4.22 (95% CL: 4.2 4.23 ) ## % locations contaminated: 91.6 (95% CL: 90.6 92 %) ## Total effect : 0.23 (95% CL: 0.114 0.344 ) ## Ipsilateral Spillover : 0.0233 (95% CL: 0.0127 0.0323 ) ## Contralateral Spillover : 0.0417 (95% CL: 0.0192 0.0651 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.215 ## AIC : 1382.215 including penalty for the spillover scale parameter ``` ```r analysisLME4_L <- CRTanalysis(example, method = "LME4", cfunc = "L") summary(analysisLME4_L) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## Estimated scale parameter: 0.249 ## Model formula: pvar + (1 | cluster) ## Sigmoid (logistic) function for spillover ## Estimates: Control: 0.417 (95% CL: 0.332 0.51 ) ## Intervention: 0.186 (95% CL: 0.136 0.249 ) ## Efficacy: 0.552 (95% CL: 0.329 0.7 ) ## spillover interval(km): 4.26 (95% CL: 4.24 4.28 ) ## % locations contaminated: 92.7 (95% CL: 92.2 93.1 %) ## Total effect : 0.229 (95% CL: 0.115 0.342 ) ## Ipsilateral Spillover : 0.0219 (95% CL: 0.0121 0.0304 ) ## Contralateral Spillover : 0.0388 (95% CL: 0.0183 0.0604 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.201 ## AIC : 1382.201 including penalty for the spillover scale parameter ``` ```r analysisLME4_S <- CRTanalysis(example, method = "LME4", cfunc = "S") summary(analysisLME4_S) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## Estimated scale parameter: 1.674 ## Model formula: pvar + (1 | cluster) ## Piecewise linear function for spillover ## Estimates: Control: 0.423 (95% CL: 0.334 0.516 ) ## Intervention: 0.185 (95% CL: 0.135 0.247 ) ## Efficacy: 0.561 (95% CL: 0.341 0.711 ) ## spillover interval(km): 4.1 (95% CL: 4.1 
4.11 ) ## % locations contaminated: 86.6 (95% CL: 86.6 87.1 %) ## Total effect : 0.237 (95% CL: 0.12 0.356 ) ## Ipsilateral Spillover : 0.029 (95% CL: 0.016 0.0403 ) ## Contralateral Spillover : 0.0522 (95% CL: 0.0248 0.0818 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.094 ## AIC : 1382.094 including penalty for the spillover scale parameter ``` ```r analysisLME4_R <- CRTanalysis(example, method = "LME4", cfunc = "R") summary(analysisLME4_R) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## No non-linear parameter. 1 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.584 (95% CL: 0.381 0.758 ) ## Intervention: 0.116 (95% CL: 0.0587 0.216 ) ## Efficacy: 0.801 (95% CL: 0.465 0.92 ) ## spillover interval(km): 6.64 (95% CL: 6.61 6.65 ) ## % locations contaminated: 99.8 (95% CL: 99.8 99.8 %) ## Total effect : 0.468 (95% CL: 0.181 0.694 ) ## Ipsilateral Spillover : 0.117 (95% CL: 0.0564 0.157 ) ## Contralateral Spillover : 0.238 (95% CL: 0.0831 0.368 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1378.711 ## AIC : 1384.711 ``` ```r p0 <- plotCRT(analysisLME4_Z, map = FALSE) p1 <- plotCRT(analysisLME4_X, map = FALSE) p2 <- plotCRT(analysisLME4_P, map = FALSE) p3 <- plotCRT(analysisLME4_L, map = FALSE) p4 <- plotCRT(analysisLME4_S, map = FALSE) p5 <- plotCRT(analysisLME4_R, map = FALSE) library(cowplot) plot_grid(p0, p1, p2, p3, p4, p5, labels = c('Z', 'X', 'P', 'L', 'S', 'R'), label_size = 10, ncol = 2) ``` <p> <img src="example5b.r-1.png"> <br> <em>Fig 5.1 Fitted curves for the example dataset with different options for `cfunc`</em> </p> The piecewise linear spillover function, `cfunc = "S"`, is only linear on the scale of the linear predictor. When used in a logistic model, as here, the transformation via the inverse of the link function leads to a slightly curved plot (Figure 5.1S). The rescaled linear function, `cfunc = "R"`, is provided as a comparator and for use with `distance` values other than `distance = "nearestDiscord"` see below (it should not be used to estimate the spillover interval). The full set of different `cfunc` options are available for each of model options `"LME4"`, `"INLA"`, and `"MCMC"`. The performance of all these different models has not yet been thoroughly investigated. The analyses of [Multerer *et al.* (2021b)](https://malariajournal.biomedcentral.com/articles/10.1186/s12936-021-03924-7) found that that a model equivalent to `method = "MCMC"`, `cfunc = "L"` gave estimates of efficacy with low bias, even in simulations with considerable spillover. #### Spillover as a function of surround Spillover can also be analysed by assuming the effect size to be a function of the number of intervened locations in the surroundings of the location [Anaya-Izquierdo & Alexander(2021)](https://onlinelibrary.wiley.com/doi/full/10.1111/biom.13316). Several different surround functions are available. These are specified by the `distance` parameter (Table 5.3). Table 5.3. Available surround functions | `distance` | Description | Details | |----------------|------------------|--------------------------------------------------------------| |`nearestDiscord`| Distance to nearest discordant location | The default. 
This is used for analyses by distance (see above) | |`hdep`| Tukey half-depth | Algorithm of [Rousseeuw & Ruts(1996)](https://www.jstor.org/stable/2986073) | |`sdep`| Simplicial depth| Algorithm of [Rousseeuw & Ruts(1996)](https://www.jstor.org/stable/2986073) | |`disc`| disc | The number of intervened locations within the specified radius (excluding the location itself) as described by [Anaya-Izquierdo & Alexander(2021)](https://onlinelibrary.wiley.com/doi/full/10.1111/biom.13316) | |`kern`| Sum of kernels | The sum of normal kernels | The [`compute_distance()`](../reference/compute_distance.html) function is provided to compute these quantities, so that they can be described, compared, and analysed independently of `CRTanalysis()`. Note that the values of the surround calculated by `compute_distance()` are scaled to avoid correlation with the spatial density of the points (see [documentation](../reference/compute_distance.html)) and so are not equivalent to the quantities reported in the original publications. Users can also devise other measures of surround or distance, add them to a `trial` data frame and specify them using `distance`. `CRTanalysis()` computes the minimum value for the specified field ```r examples <- compute_distance(example, distance = "hdep") ps1 <- plotCRT(examples, distance = "hdep", legend.position = c(0.6, 0.8)) ps2 <- plotCRT(examples, distance = "sdep") examples <- compute_distance(examples, distance = "disc", scale_par = 0.5) ps3 <- plotCRT(examples, distance = "disc") examples <- compute_distance(examples, distance = "kern", scale_par = 0.5) ps4 <- plotCRT(examples, distance = "kern") plot_grid(ps1, ps2, ps3, ps4, labels = c('hdep', 'sdep', 'disc', 'kern'), label_size = 10, ncol = 2) ``` <p> <img src="example5c.r-1.png"> <br> <em>Fig 5.2 Stacked bar plots for different surrounds</em> </p> If `distance` is assigned a value of either `hdep`, `sdep`, then `cfunc = "R"` is used by default and the overall effect size is computed by comparing the fitted values of the model for a surround value of zero with that of the maximum of the surround in the data. If `distance = "disc"` or `distance = "kern"` and `scale_par` is assigned a value, then `cfunc = "R"` is also used. If `cfunc = "E"` is specified then an escape function is fitted with the scale parameter estimated in the same way as in the scale parameter in other models (see above Table 5.2). ```r examples_hdep <- CRTanalysis(examples, method = "LME4", distance = "hdep", cfunc = 'R') summary(examples_hdep) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Tukey half-depth ## No non-linear parameter. 
1 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.381 (95% CL: 0.292 0.478 ) ## Intervention: 0.209 (95% CL: 0.15 0.282 ) ## Efficacy: 0.452 (95% CL: 0.167 0.639 ) ## spillover interval(km): 0.978 (95% CL: 0.976 0.98 ) ## % locations contaminated: 55 (95% CL: 55 55 %) ## Total effect : 0.172 (95% CL: 0.0524 0.292 ) ## Ipsilateral Spillover : 0.0313 (95% CL: 0.01 0.0512 ) ## Contralateral Spillover : 0.0444 (95% CL: 0.0128 0.0785 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1379.89 ## AIC : 1385.89 ``` ```r ps4 <- plotCRT(examples_hdep,legend.position = c(0.8, 0.8)) examples_sdep <- CRTanalysis(examples, method = "LME4", distance = "sdep", cfunc = 'R') summary(examples_sdep) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Simplicial depth ## No non-linear parameter. 1 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.393 (95% CL: 0.307 0.485 ) ## Intervention: 0.199 (95% CL: 0.145 0.268 ) ## Efficacy: 0.493 (95% CL: 0.243 0.66 ) ## spillover interval(km): 0.978 (95% CL: 0.976 0.98 ) ## % locations contaminated: 52.4 (95% CL: 52.2 52.4 %) ## Total effect : 0.193 (95% CL: 0.0802 0.306 ) ## Ipsilateral Spillover : 0.0299 (95% CL: 0.013 0.0456 ) ## Contralateral Spillover : 0.0431 (95% CL: 0.0169 0.0704 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1376.417 ## AIC : 1382.417 ``` ```r ps5 <- plotCRT(examples_sdep) examples_disc <- CRTanalysis(examples, method = "LME4", distance = "disc", cfunc = 'R', scale_par = 0.15) summary(examples_disc) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: disc of radius 0.15 km ## Precalculated scale parameter: 0.15 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.387 (95% CL: 0.312 0.467 ) ## Intervention: 0.2 (95% CL: 0.149 0.26 ) ## Efficacy: 0.482 (95% CL: 0.273 0.634 ) ## spillover interval(km): 0.978 (95% CL: 0.976 0.98 ) ## % locations contaminated: 8.89 (95% CL: 8.89 8.89 %) ## Total effect : 0.186 (95% CL: 0.0912 0.282 ) ## Ipsilateral Spillover : 0.00458 (95% CL: 0.00239 0.00656 ) ## Contralateral Spillover : 0.00576 (95% CL: 0.00271 0.00905 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.274 ## AIC : 1380.274 ``` ```r ps6 <- plotCRT(examples_disc) examples_kern <- CRTanalysis(examples, method = "LME4", distance = "kern", cfunc = 'R', scale_par = 0.15) summary(examples_kern) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: kern with kernel s.d. 
0.15 km ## Precalculated scale parameter: 0.15 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.406 (95% CL: 0.327 0.491 ) ## Intervention: 0.185 (95% CL: 0.136 0.245 ) ## Efficacy: 0.542 (95% CL: 0.349 0.684 ) ## spillover interval(km): 0.979 (95% CL: 0.977 0.98 ) ## % locations contaminated: 50.8 (95% CL: 50.6 50.9 %) ## Total effect : 0.22 (95% CL: 0.122 0.32 ) ## Ipsilateral Spillover : 0.011 (95% CL: 0.00661 0.0152 ) ## Contralateral Spillover : 0.0134 (95% CL: 0.00707 0.0203 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1369.677 ## AIC : 1375.677 ``` ```r ps7 <- plotCRT(examples_kern) plot_grid(ps4, ps5, ps6, ps7, labels = c('hdep', 'sdep', 'disc', 'kern'), label_size = 10, ncol = 2) ``` <p> <img src="example5d.r-1.png"> <br> <em>Fig 5.3 Fitted curves for the example dataset with different surrounds </em> </p> ## Geostatistical models and mapping results To carry out a geostatistical analysis with `method = "INLA"` a prediction mesh is needed. By default a very low resolution mesh is created (creating a high resolution mesh is computationally expensive). To create a 100m INLA mesh for `<MyTrial>`, use: `mesh <- new_mesh(trial = <MyTrial> , pixel = 0.1)`
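
Putting these pieces together, a minimal sketch of a geostatistical analysis of the example dataset, mirroring the INLA call shown in Use Case 2 (note that mesh construction and INLA fitting can be slow; the 0.1 km pixel follows the suggestion above):

```r
# Build a prediction mesh once and re-use it (this step is computationally expensive)
example_mesh <- new_mesh(trial = example, pixel = 0.1)

# Geostatistical analysis with spatial effects and an error-function spillover model
analysisINLA <- CRTanalysis(example, method = "INLA", link = "logit", cfunc = "P",
                            clusterEffects = FALSE, spatialEffects = TRUE,
                            requireMesh = TRUE, inla_mesh = example_mesh)
summary(analysisINLA)
plotCRT(analysisINLA, map = TRUE, fill = "prediction", legend.position = c(0.8, 0.8))
```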
--- title: "Use Case 06: Thematic mapping of the geography of a CRT" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 06: Thematic mapping of the geography of a CRT} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- `CRTspat` is intended to facilitate thematic mapping of the geography of a CRT at each stage from enumeration of the trial population to data analysis. Graphical outputs are generated with `ggplot2()`. In addition there is a function, `CRTwrite()` to export the thematic layers and shapefiles to GIS formats. The same `plotCRT()` function is used at each stage in the trial (see below), with the functionality available expanding as more fields are populated in the `CRTsp` object. When applied to output from `CRTanalysis()` `plotCRT()` that analyse the spillover interval, an expanded set of thematic maps are available, including overlay plots showing the spillover zone (i.e. the subset of the study area estimated to have effects of spillover) and thematic maps of spatial predictions. ```r # using the same dataset as for Use Case 1. library(CRTspat) exampleCRT <- readdata('exampleCRT.txt') plotCRT(exampleCRT, map = TRUE, fill = 'none', showLocations=TRUE) ``` <p> <img src="example6a.r-1.png" > <br> <em>Fig 6.1 Locations </em> </p> If the clusters have been established, a map can be drawn showing where they are located. The clusters can be distinguished by colour or by number. To ensure that the image is not too crowded, by default the locations are not shown (but they can be shown if required). ```r plotCRT(exampleCRT, map = TRUE, fill = 'clusters', showClusterLabels = TRUE, labelsize =3) ``` <p> <img src="example6b.r-1.png" > <br> <em>Fig 6.2 Clusters </em> </p> Similarly, the map of arms is available if the trial has been randomized. Buffer zones can be plotted on this map. ```r plotCRT(exampleCRT, map = TRUE, fill = 'arms', showLocations=TRUE) plotCRT(exampleCRT, map = TRUE, fill = 'arms', showBuffer=TRUE, showClusterBoundaries = FALSE, buffer_width = 0.5) ``` ``` ## Buffer includes locations within 500m of the opposing arm ``` <p> <img src="example6c.r-1.png" > <br> <em>Fig 6.3 Arms with locations </em> </p> <p> <img src="example6c.r-2.png" > <br> <em>Fig 6.4 Arms with 500m buffer zone shaded </em> </p> Once data have been collected, `plotCRT()` can be used to generate a bar plot to illustrate how much of the data are found close to the boundary between the arms. ```r plotCRT(exampleCRT, map = FALSE) ``` <p> <img src="example6d.r-1.png" > <br> <em>Fig 6.5 Numbers of observations by distance from boundary</em> </p> The results of the data analysis can be illustrated with further graphics. The blue shaded section of Figure 6.8 indicates the spillover zone, corresponding to those locations that fall within the central 95% of the estimated sigmoid of the of the effect size by distance from the boundary between the arms. 
```r analysis <- CRTanalysis(exampleCRT, cfunc = "P", method = "LME4") ``` ``` ## Estimated scale parameter: 0.45 Signed distance -Signed distance to other arm (km) ``` ```r summary(analysis) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## Estimated scale parameter: 0.45 ## Model formula: pvar + (1 | cluster) ## Error function model for spillover ## Estimates: Control: 0.418 (95% CL: 0.331 0.511 ) ## Intervention: 0.186 (95% CL: 0.135 0.251 ) ## Efficacy: 0.554 (95% CL: 0.33 0.703 ) ## spillover interval(km): 4.22 (95% CL: 4.2 4.24 ) ## % locations contaminated: 91.6 (95% CL: 90.6 92.2 %) ## Total effect : 0.231 (95% CL: 0.116 0.347 ) ## Ipsilateral Spillover : 0.0234 (95% CL: 0.0129 0.0325 ) ## Contralateral Spillover : 0.0417 (95% CL: 0.0195 0.0655 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.215 ## AIC : 1382.215 including penalty for the spillover scale parameter ``` ```r plotCRT(analysis, map = FALSE) plotCRT(analysis, map = TRUE, fill = 'arms', showBuffer=TRUE, showClusterBoundaries = FALSE) ``` ``` ## Buffer corresponds to estimated spillover zone ``` <p> <img src="example6e.r-1.png" > <br> <em>Fig 6.6 Plot of estimated spillover function </em> </p> <p> <img src="example6e.r-2.png" > <br> <em>Fig 6.7 Arms with spillover zone shaded </em> </p> ## Conclusions In this example, a large proportion of the data points are close to the boundary between the arms. The analysis (based on simulated spillover) suggests that there are effects of spillover far beyond a 500m buffer. However this does not necessarily mean that the spillover leads to a large bias or loss in power (see [Use Case 7](Usecase7.html)).
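
Since the thematic maps are generated with `ggplot2`, they can be styled and saved for reports in the usual way. A minimal sketch, assuming (as suggested by the use of `cowplot::plot_grid()` elsewhere in these vignettes) that `plotCRT()` returns a `ggplot` object; the file name is illustrative:

```r
library(ggplot2)
# Capture the map of arms, add a title, and save it to disk
arm_map <- plotCRT(exampleCRT, map = TRUE, fill = 'arms', showLocations = TRUE) +
    ggtitle("Trial arms")
ggsave("arm_map.png", plot = arm_map, width = 6, height = 6, dpi = 300)
```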
--- title: "Use Case 07: Power and sample size calculations allowing for spillover" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 07: Power and sample size calculations allowing for spillover} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- Contamination between the arms of a trial leads to downward bias in the estimates of efficacy In the case of malaria, this spillover is mainly caused by mosquito movement, and is therefore expected to be greatest near to the boundary between the trial arms. Because the effective distance over which substantial spillover occurs is generally not known, sensitivity analyses must be used to get an idea of how great these effects are likely to be. The following workflow, similar to that used for [Use Case 3](Usecase3.html), explores the likely bias and loss of power for one specific simulated setting. In this simple example, spatially homogeneous background disease rates are assigned, using `propensity <- 1` A small number of simulations is specified for testing (here 2, a much larger number, at least several thousands is used for the definitive analysis). In this example, a fixed value for the outcome in the control arm, the target ICC of the simulations, and the number of clusters in each arm of the trial. input. Efficacy is sampled from a uniform(0, 0.6) distribution, and the simulated spillover interval from a uniform(0, 1.5km) distribution. ```r library(CRTspat) # The locations only are taken from the example dataset. The cluster, arm, and outcome assignments are replaced example <- readdata("exampleCRT.txt") trial <- example$trial[ , c("x","y", "denom")] trial$propensity <- 1 nsimulations <- 2 CRT <- CRTsp(trial) library(dplyr) outcome0 <- 0.4 ICC <- 0.05 c <- 25 set.seed(7) effect <- runif(nsimulations,0,0.6) # Data frame of input spillover interval theta_inp <- runif(nsimulations, min = 0, max = 1.5) input <- data.frame(effect = effect, theta_inp = theta_inp) ``` A user function is defined for randomizing and analysing each simulated trial ```r CRTscenario7 <- function(input) { ex <- specify_clusters(CRT, c = c, algo = "kmeans") %>% randomizeCRT() %>% simulateCRT(effect = input[["effect"]], generateBaseline = FALSE, outcome0 = outcome0, ICC_inp = ICC, theta_inp = input[["theta_inp"]], matchedPair = FALSE, scale = "proportion", denominator = "denom", tol = 0.01) sd <- theta_inp/(2 * qnorm(0.975)) contaminate_pop_pr_input <- sum(abs(ex$trial$nearestDiscord) < 0.5 * input[["theta_inp"]])/nrow(ex$trial) contaminate_pop_pr_sd <- sum(abs(ex$trial$nearestDiscord) < input[["theta_inp"]]/(2 * qnorm(0.975)))/nrow(ex$trial) examplePower = CRTpower(trial = ex, desiredPower = 0.8, effect=input[["effect"]], yC=outcome0, outcome_type = 'd', ICC = ICC, c = c) nominalpower <- examplePower$geom_full$power Tanalysis <- CRTanalysis(ex, method = "T") value <- c( effect = input[["effect"]], contaminate_pop_pr_input = contaminate_pop_pr_input, contaminate_pop_pr_sd = contaminate_pop_pr_sd, theta_inp = input[["theta_inp"]], nominalpower = examplePower$geom_full$power, Pvalue_t = Tanalysis$pt_ests$p.value, effect_size_t = Tanalysis$pt_ests$effect_size) return(value) } ``` The results are collected in a data frame and post-processed to classify the outcomes according to whether they represent either Type I errors or Type II errors ```r results_matrix <- apply(input, MARGIN = 1, FUN = CRTscenario7) results <- as.data.frame(t(results_matrix)) results$significant_t <- ifelse((results$Pvalue_t < 0.05), 1, 0) 
results$typeIerror <- ifelse(results$significant_t == 1 & results$effect == 0, 1, ifelse(results$effect == 0, 0 , NA)) results$typeIIerror <- ifelse(results$significant_t == 0, ifelse(results$effect > 0, 1, NA), 0) results$bias_t <- with(results, ifelse(significant_t,effect_size_t - effect,- effect)) ``` ## Analysis by simulated true efficacy The results are grouped by ranges of efficacy, for each of which the ratio of the number of simulations giving statistically significant results to the expected number can be calculated. ```r results$effect_cat <- factor(round(results$effect*10)) by_effect <- results[results$effect > 0, ] %>% group_by(effect_cat) %>% summarise_at(c("effect","significant_t", "nominalpower", "bias_t", "typeIIerror"), mean, na.rm = TRUE) by_effect$power_ratio <- with(by_effect, significant_t/nominalpower) library(ggplot2) theme_set(theme_bw(base_size = 14)) ggplot(data = by_effect, aes(x = effect)) + geom_smooth(aes(y = power_ratio), color = "#b2df8a",se = FALSE) + geom_smooth(aes(y = typeIIerror), color = "#D55E00",se = FALSE) + geom_smooth(aes(y = bias_t), color = "#0072A7",se = FALSE) + xlab('Simulated efficacy (%)') + ylab('Performance of t-test') ``` Despite the spillover, the t-test performs similarly to expectations across the range of efficacies investigated (Figure 7.1) <p> <img src="example7d.r-1.png" > <br> <em>Fig 7.1 Performance of t-test by efficacy $\color{purple}{\textbf{----}}$ : ratio of power:nominal power; $\color{green}{\textbf{----}}$ :type II error rate; $\color{blue}{\textbf{----}}$ : bias. </em> </p> ## Analysis by simulated spillover interval An analogous analysis, to that of performance relative to efficacy, can be carried out to explore the effect of the simulated spillover interval. ```r ggplot(data = results, aes(x = theta_inp)) + geom_smooth(aes(y = contaminate_pop_pr_input), color = "#b2df8a",se = FALSE, size = 2) + geom_smooth(aes(y = contaminate_pop_pr_sd), color = "#0072A7",se = FALSE, size = 2) + xlab('Simulated spillover interval (km)') + ylab('Proportion of the locations') results$theta_cat <- factor(round(results$theta_inp*10)) by_theta <- results[results$effect > 0, ] %>% group_by(theta_cat) %>% summarise_at(c("theta_inp","significant_t", "nominalpower", "bias_t", "typeIIerror"), mean, na.rm = TRUE) by_theta$power_ratio <- with(by_theta, significant_t/nominalpower) ggplot(data = by_theta, aes(x = theta_inp)) + geom_smooth(aes(y = power_ratio), color = "#b2df8a",se = FALSE) + geom_smooth(aes(y = typeIIerror), color = "#D55E00",se = FALSE) + geom_smooth(aes(y = bias_t), color = "#0072A7",se = FALSE) + xlab('Simulated spillover interval (km)') + ylab('Performance of t-test') ``` The relationships between the simulated spillover interval and the corresponding proportion of the locations in the trial (Figure 7.2). <p> <img src="example7e.r-1.png" > <br> <em>Fig 7.2 Proportion of locations in simulated spillover interval $\color{green}{\textbf{----}}$ :zone defined by `gamma_inp`; $\color{blue}{\textbf{----}}$ : zone defined by `sd`. </em> </p> The spillover results in a loss of power, and increased negative bias in efficacy estimate, but these effects are rather small (Figure 7.3). The potential for using the `CRTanalysis()` function to model the spillover and hence correct the naive efficacy estimate (from a t-test) can also be explored (see [Use Case 5](Usecase5.html) and [Multerer *et al.* (2021b)](https://malariajournal.biomedcentral.com/articles/10.1186/s12936-021-03924-7). 
<p> <img src="example7e.r-2.png" > <br> <em>Fig 7.3 Performance of t-test by spillover interval $\color{purple}{\textbf{----}}$ : ratio of power:nominal power; $\color{green}{\textbf{----}}$ :type II error rate; $\color{blue}{\textbf{----}}$ : bias. </em> </p>
--- title: "Use Case 08: Eggs - to fry or scramble?" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 08: Eggs - to fry or scramble?} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- In trials of malaria interventions, a 'fried-egg' design is often used to avoid the downward bias in the estimates of efficacy caused by spillover. This entails estimating the outcome only from the cores of the clusters. However, the intervention must also be introduced in the buffer zone, so the trial may be very expensive if there are high per capita intervention costs. Since the buffer zone is excluded from data collection, there are usually no data on whether the buffer is large enough to avoid spillover effects. A precautionary approach with large buffer zones is therefore the norm. However with 'fried-eggs' there are no data on the scale of the spillover, so if the effect was swamped by unexpectedly large spillover, this would be indistinguishable from failure of the intervention. The alternative design is to sample in the buffer zones, accepting some degree of spillover. The data analysis might then be used to estimate the scale of spillover (see [Use Case 5](Usecase5.html)). The statistical model might be to adjust the estimate of effect size for the spillover effect, or to decide which contaminated areas to exclude *post hoc* from the definitive analysis (based on pre-defined criteria). This is expected to lead to some loss of power, compared to collecting the same amount of outcome data from the core area alone (though this might be compensated for by increasing data collection), but is likely to be less complicated to organise, allowing the trial to be carried out over a much smaller area, with far fewer locations needing to be randomized (see [Use Case 4](Usecase4.html)). In this example, the effects of reducing the numbers of observations on power and bias are evaluated in simulated datasets. To compare 'fried-egg' strategies with comparable designs that sample the whole area, sets of locations are removed, either randomly from the whole area, or systematically depending on the distance from the boundary between arms. As in [Use Case 7](Usecase7.html), spatially homogeneous background disease rates are assigned, using `propensity <- 1`, fixed values are used for the outcome in the control arm, the target ICC of the simulations and the number of clusters in each arm of the trial. Efficacy is also fixed at 0.4. The spillover interval is sampled from a uniform(0, 1.5km) distribution. 
```r library(CRTspat) # use the locations only from example dataset (as with Use Case 7) example <- readdata("exampleCRT.txt") trial <- example$trial[ , c("x","y", "denom")] trial$propensity <- 1 CRT <- CRTsp(trial) library(dplyr) # specify: # prevalence in the absence of intervention; # anticipated ICC; # clusters in each arm outcome0 <- 0.4 ICC <- 0.05 k <- 25 # the number of trial simulations required (a small number, is used for testing, the plots are based on 1000) nsimulations <- 2 theta_vec <- runif(nsimulations,0,1.5) radii <- c(0, 0.1, 0.2, 0.3, 0.4, 0.5) proportions <- c(0, 0.1, 0.2, 0.3, 0.4, 0.5) scenarios <- data.frame(radius = c(radii, rep(0,6)), proportion = c(rep(0,6), proportions)) set.seed(7) ``` Two user functions are required: + randomization and trial simulation + analysis of each simulated trial with different sets of locations removed ```r analyseReducedCRT <- function(x, CRT) { trial <- CRT$trial radius <- x[["radius"]] proportion <- x[["proportion"]] cat(radius,proportion) nlocations <- nrow(trial) if (radius > 0) { radius <- radius + runif(1, -0.05, 0.05) trial$num[abs(trial$nearestDiscord) < radius] <- NA } if (proportion > 0) { # add random variation to proportion to avoid heaping proportion <- proportion + runif(1, -0.05, 0.05) trial$num <- ifelse(runif(nlocations,0,1) < proportion, NA, trial$num) } trial <- trial[!is.na(trial$num),] resX <- CRTanalysis(trial,method = "LME4", cfunc = "X") resZ <- CRTanalysis(trial,method = "LME4", cfunc = "Z") LRchisq <- resZ$pt_ests$deviance - resX$pt_ests$deviance significant <- ifelse(LRchisq > 3.84, 1, 0) result <- list(radius = radius, proportion = proportion, observations = nrow(trial), significant = significant, effect_size = resX$pt_ests$effect_size) return(result) } # randomization and trial simulation randomize_simulate <- function(theta) { ex <- specify_clusters(CRT, c = c, algo = "kmeans") %>% randomizeCRT() %>% simulateCRT(effect = 0.4, generateBaseline = FALSE, outcome0 = outcome0, ICC_inp = ICC, theta_inp = theta, matchedPair = FALSE, scale = "proportion", denominator = "denom", tol = 0.01) # The results are collected in a data frame sub_results_matrix <- apply(scenarios, MARGIN = 1, FUN = analyseReducedCRT, CRT = ex) sub_results <- as.data.frame(do.call(rbind, lapply(sub_results_matrix, as.data.frame))) sub_results$theta_inp <- theta return(sub_results) } ``` Collect all the analysis results and plot. 
Note that some analyses result in warnings (because of problems computing some of the descriptive statistics when the number of observations in some clusters is very small) ```r results <- list(simulation = numeric(0), radius = numeric(0), proportion = numeric(0), observations = numeric(0), significant = numeric(0), effect_size = numeric(0), gamma = numeric(0)) simulation <- 0 for(theta in theta_vec){ simulation <- simulation + 1 sub_results <- randomize_simulate(theta) sub_results$simulation <- simulation results <- rbind(results,sub_results) } ``` ``` ## 0 00.1 00.2 00.3 00.4 00.5 00 00 0.10 0.20 0.30 0.40 0.50 00.1 00.2 00.3 00.4 00.5 00 00 0.10 0.20 0.30 0.40 0.5 ``` ```r results$fried <- ifelse(results$radius > 0, 'fried', ifelse(results$proportion > 0, 'scrambled', 'neither')) results$bias <- results$effect_size - 0.5 library(ggplot2) theme_set(theme_bw(base_size = 12)) fig8_1a <- ggplot(data = results, aes(x = observations, y = significant, color = factor(fried))) + geom_smooth(size = 2, se = FALSE, show.legend = FALSE, method = "loess", span = 1) + scale_colour_manual(values = c("#b2df8a","#D55E00","#0072A7")) + xlab('Number of locations in analysis') + ylab('Power') fig8_1b <- ggplot(data = results, aes(x = theta_inp, y = significant, color = factor(fried))) + geom_smooth(size = 2, se = FALSE, show.legend = FALSE, method = "loess", span = 1) + scale_colour_manual(values = c("#b2df8a","#D55E00","#0072A7")) + xlab('Simulated spillover interval (km)') + ylab('Power') fig8_1c <- ggplot(data = results, aes(x = observations, y = bias, color = factor(fried))) + geom_smooth(size = 2, se = FALSE, show.legend = FALSE, method = "loess", span = 1) + scale_colour_manual(values = c("#b2df8a","#D55E00","#0072A7")) + xlab('Number of locations in analysis') + ylab('Bias') fig8_1d <- ggplot(data = results, aes(x = theta_inp, y = bias, color = factor(fried))) + geom_smooth(size = 2, se = FALSE, show.legend = FALSE, method = "loess", span = 1) + scale_colour_manual(values = c("#b2df8a","#D55E00","#0072A7")) + xlab('Simulated spillover interval (km)') + ylab('Bias') library(cowplot) plot_grid(fig8_1a, fig8_1b, fig8_1c, fig8_1d, labels = c('a', 'b', 'c', 'd'), label_size = 14, ncol = 2) ``` #### Analysis Figure 8.1. is are based on analysing 1000 simulated datasets (12,000 scenarios in all). The power is calculated as the proportion of significance tests with likelihood ratio p < 0.05. The fried-egg design suffers little loss of power with exclusion of observations until the sample size is less than about 750 (the suggestion that there is a maximum at a sample size of about 900 locations is presumably an effect of the smoothing of the results). The 'scrambled' egg trials lose power with each reduction in the number of observations (Figure 8.1a), but substantive loss of power occurs only with spillover interval of 1 km or more in this dataset. The fried egg designs largely avoid this loss of power (Figure 8.1b) but are less powerful on average when the spillover interval is less that 0.5 km. In these simulations, the lme4 estimates of the efficacy are biased slightly downwards (i.e. the bias has negative sign). The absolute bias was least when some of the locations were excluded (Figure 8.1c). There is less absolute bias with the fried-egg designs but the effect is small except when the simulated spillover interval is very large (Figure 8.1d). The difference in bias between 'fried' and 'scrambled' egg designs is modest in relation to the scale of the overall bias. 
<p> <img src="example8c.r-1.png" > <br> <em>Fig 8.1 Power and bias by number of locations in analysis $\color{purple}{\textbf{----}}$ : Randomly sampled locations removed; $\color{green}{\textbf{----}}$ : Fried-egg- locations in boundary zone removed. </em> </p>
--- title: "Use Case 09: Preparation of datasets for `CRTspat`" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 09: Preparation of datasets for `CRTspat`} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- The starting point for the analyses is the co-ordinates for the spatial units that are to be randomized. These can be provided as lat-long coordinates, or as Cartesian point co-ordinates. Geolocated data from completed field studies or trials in progress can be read into `CRTspat` in the form of data frames, preferably with one record for each location in the trial. Pre-existing data from the field should be coded as follows: | Description of variable | Variable name | Type and values | |----------------------------|---------------|-------------------------------------------------| | x-coordinates of locations | x | Numeric | | x-coordinates of locations | y | Numeric | | cluster assignment | cluster | factor | | arm assignment | arm | factor with levels "Control" and "Intervention" | | buffer assignment | buffer | logical (TRUE for locations in the buffer) | Users might want to include an ID variable in the data frame if they intend to link the outputs back to other datasets. Other variables, including baseline data or trial outcomes may also be included. | Description of variable | Default Variable name | Type | |----------------------------|---------------|-------------| | Baseline numerator | base_num | Numeric | | Baseline denominator | base_denom | Numeric | | Numerator | num | Numeric | | Denominator | denom | Numeric | Simulated co-ordinate sets can also be generated *de novo* (function [`CRTsp()`](../reference/CRTsp.html)) for use in methods development and testing. To make it possible to share data files without compromising confidentiality the [`anonymize_site()`](../reference/anonymize_site.html) function is provided. This removes absolute geolocations and applies a transformation to the coordinates to conserve distances between them but modifying the orientation. The [`latlong_as_xy`](../reference/latlong_as_xy.html) function is available to convert co-ordinates provided as decimal degrees into Cartesian co-ordinates with units of km with centroid (0,0). If the input co-ordinates are provided using a different projection then they must be converted externally to the package.
--- title: "Use Case 01: Algorithmic specification of clusters" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 01: Algorithmic specification of clusters} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- The way in which clusters are assigned in cluster randomized trials (CRTs) can profoundly affect the efficiency of the trial. Allocating clusters by algorithm makes it easy to generate alternative cluster allocations for any given trial site, both for real-world trials and for exploring this neglected aspect of trial design in simulations. The `CRTspat` package contains R functions developed for this purpose. Input to the package is in the form of a data frame with one record for each geo-location in a trial area. Most of the functions of the package return a list of class `CRTsp`, which consists of the input data frame augmented with additional vectors (e.g. coding clusters, arms, or buffer zones), and lists containing descriptors of the dataset. Objects of class `CRTsp` can also be used as input to most of the functions. After each step, `summary()` can be used to provide a description of the output `CRTsp` object and `plotCRT()` can be used to output a descriptive plot, or a map of the locations, clusters, arms, buffer zones or other geographically structured analysis results. + In general the package functions do not expect to find repeated values for outcomes for the same location. The `aggregateCRT()` function is used to aggregate data with the same co-ordinates so that this condition is satisfied. In particular, if the input database contains outcome data (e.g. if it contains baseline survey results), these should be provided in the form of a numerator `base_num` and denominator `base_denom` for each record. These values will be summed by `aggregateCRT()` over all records with the same co-ordinates. An object of class `CRTsp` is output. + The `specify_clusters()` function carries out algorithmic assignment of clusters and outputs a `CRTsp` object augmented with the cluster assignments. One of three different algorithms must be selected: + `algorithm = "NN"` implements a nearest neighbour algorithm. Iteratively One household is selected and a cluster of size k is constructed by adding its k-1 nearest neighbors (NN). These points are removing these points from the data set, and this step is repeated iteratively until all the points have been allocated. [This algorithm](http://jmonlong.github.io/Hippocamplus/2018/06/09/cluster-same-size/#methods) will often lead to connected clusters, in a "fish scale" manner. This is the default option. + `algorithm = "TSP"` implements the `repetitive_nn` option of the [`TSP` package](https://CRAN.R-project.org/package=TSP) for solving the travelling salesman problem. This finds an efficient path through the study locations. Clusters are formed by grouping the required number of locations sequentially along the path. Note that this is not guaranteed to give rise to congruent clusters. + `algorithm = "kmeans"` implements a [k-means algorithm](https://en.wikipedia.org/wiki/K-means_clustering) that aims to partition the locations into the required number of clusters in which each observation belongs to the cluster with the nearest cluster centroid. k-means clustering minimizes within-cluster variances (squared Euclidean distances) but does not necessarily give equal-sized clusters. Irrespective of the algorithm, the target number of points allocated to each cluster is specified by the parameter `h`. 
+ The `randomizeCRT()` function carries out a simple randomization of clusters to arms, and outputs a `CRTsp` object augmented with the assignments. (If baseline data are available matched pair randomization is available as an option) The units to be randomized will usually be households, but the algorithms can be used to generate clusters with equal geographical areas by randomizing pixels. In this case a dataset containing x,y coordinates for each pixel should be used as input. The example uses locations and baseline test positivity data from a site in Kenya. The input dataset contains a single record for each test so there are multiple records of test positivity for many locations. ```r library(CRTspat) example_locations <- readdata('example_site.csv') # assign the denominator to the baseline data example_locations$base_denom <- 1 # convert to a `CRTsp` object exampleCRT <- CRTsp(example_locations) summary(exampleCRT) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.31 -0.24 0.00 1.35 5.16 ## y -5.08 -2.84 -0.17 0.00 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Geolocation of centroid (radians): ## latitude: longitude: ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Not aggregated. Total records: 3172. Unique locations: 1181 ## Available clusters (across both arms) Not assigned ## No randomization - ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r # Aggregate data for multiple observations for the same location Only the (x,y) co-ordinates and numerical # auxiliary variables example <- aggregateCRT(exampleCRT, auxiliaries = c("RDT_test_result", "base_denom")) summary(example) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) Not assigned ## No randomization - ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r # Plot map of locations plotCRT(example, map = TRUE, showLocations = TRUE, maskbuffer = 0.2) ``` <p> <img src="example1a.r-1.png"> <br> <em>Fig 1.1 Map of locations</em> </p> In the example shown here a target cluster size of 50 locations is set, but the heterogeneity in spatial density of the locations leads to considerable variation in the number of locations assigned to each cluster. ```r example_clustered <- specify_clusters(trial = example, h = 50, algorithm = 'NN') summary(example_clustered) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. 
: ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) 24 ## Per cluster mean number of points 49.2 ## Per cluster s.d. number of points 3.9 ## No randomization - ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r plotCRT(example_clustered, map = TRUE, showClusterLabels = TRUE, maskbuffer = 0.2, labelsize = 2) ``` <p> <img src="example1b.r-1.png"> <br> <em>Fig 1.2 Map of clusters</em> </p> A smoothed map of the baseline prevalence surface is produced using a geostatistical model in [R-INLA](https://www.r-inla.org/). Details of the implementation in `CRTspat` are in the [documentation of `CRTanalysis`](../reference/CRTanalysis.html) and of [Use Case 5](Usecase5.html). ```r library(Matrix) examplemesh100 <- readdata("examplemesh100.rds") baselineanalysis <- CRTanalysis(trial=example_clustered, method = 'INLA', link='logit', baselineOnly = TRUE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom", clusterEffects = FALSE, spatialEffects = TRUE, requireMesh = TRUE, inla_mesh = examplemesh100) ``` ``` ## Analysis of baseline only, using INLA ``` ```r plotCRT(baselineanalysis, map = TRUE, fill = 'prediction') ``` <p> <img src="example1c.r-1.png"> <br> <em>Fig 1.3 Smoothed surface of baseline prevalence</em> </p> A summary of the baseline prevalence at cluster level is used in this example to match clusters on baseline prevalence and then generate a randomisation based on matched pairs. ```r example_randomized <- randomizeCRT(example_clustered, matchedPair = TRUE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom") summary(example_randomized) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) 24 ## Per cluster mean number of points 49.2 ## Per cluster s.d. number of points 3.9 ## S.D. of distance to nearest discordant location (km): 0.56 ## Cluster randomization: Matched pairs randomized ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom base_num pair ``` ```r plotCRT(example_randomized, map = TRUE, maskbuffer=0.2, legend.position=c(0.8,0.8)) ``` <p> <img src="example1d.r-1.png"> <br> <em>Fig 1.4 Map of arm assignments</em> </p>
--- title: "Use Case 02: Simulation of trials with geographical spillover" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 02: Simulation of trials with geographical spillover} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- Effects of settlement patterns, choices of cluster size and buffer widths, and the extent of spillover between arms on the outcomes of CRTs do not lend themselves to mathematical analysis. Simulations of trials are used to explore the effects of these variables on trial power and on the robustness of statistical methodologies. Trials can be simulated using the `simulateCRT` function, which augments a `trial` data frame (created externally) or object of class `CRTsp` (created by package functions) with simulated outcome data. The input object must be given location information and both cluster and arm assignments (see [Use Case 1](Usecase1.html)) (or the package can generate these if the objective is purely simulation. Information about the underlying spatial pattern of disease is used in the form of the intra-cluster correlation of the outcome, which is input to the simulation as variable `ICC_inp`, and of the `propensity`. The former takes a single value for the chosen design. The latter takes a positive real value for each location. In the case of malaria, `propensity` can be thought of as measuring exposure to infectious mosquitoes. `ICC_inp` and `propensity` may either be estimated from other datasets or supplied by the user. The behaviour of the function depends on which variables are supplied, and the value of `generateBaseline`, as follows: | Data supplied by the user | Function behaviour | |:-------------------|:--------------------------| |`propensity` supplied by user|Baseline data are created by sampling around `propensity`| |Baseline data are supplied by user and `propensity` is not supplied |`propensity` is created from the baseline data| |Neither baseline data nor `propensity` are supplied |`propensity` is generated using normal kernels, with the bandwidth adjusted to achieve the input value of the `ICC_inp` (after the further smoothing stage to simulate spillover (see below))| The effect of intervention is simulated as a fixed percentage reduction in the `propensity`. Contamination or spillover between trial arms is then modelled as a additional smoothing process applied to the intervention-adjusted `propensity` via a further bivariate normal kernel. In the case of mosquito borne disease this is proposed as an approximation to the effect of mosquito movement. The degree of spillover is specified either as a spillover interval with the `theta_inp` parameter, or as `sd`, the bandwidth of the corresponding normal kernel. If both are provided then it is the value of `theta_inp` that is used. #### Example with baseline data provided as proportions ```r library(CRTspat) set.seed(1234) example_locations <- readdata('example_site.csv') example_locations$base_denom <- 1 library(dplyr) example_randomized <- CRTsp(example_locations) %>% aggregateCRT(auxiliaries = c("RDT_test_result", "base_denom")) %>% specify_clusters(h = 50, algorithm = 'NN') %>% randomizeCRT(matchedPair = FALSE) summary(example_randomized) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. 
: ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) 24 ## Per cluster mean number of points 49.2 ## Per cluster s.d. number of points 3.9 ## S.D. of distance to nearest discordant location (km): 1.05 ## Cluster randomization: Independently randomized ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r plotCRT(example_randomized, map = TRUE, legend.position = c(0.8, 0.8)) example2a <- simulateCRT(example_randomized, effect = 0.8, outcome0 = 0.5, generateBaseline = FALSE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom", ICC_inp = 0.05, theta_inp = 0.8) summary(example2a) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) 24 ## Per cluster mean number of points 49.2 ## Per cluster s.d. number of points 3.9 ## S.D. of distance to nearest discordant location (km): 1.05 ## Cluster randomization: Independently randomized ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom denom propensity num ``` ```r library(Matrix) examplemesh100 <- readdata("examplemesh100.rds") example2aanalysis <- CRTanalysis(trial=example2a, method = 'T') summary(example2aanalysis) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: T ## Link function: logit ## Model formula: arm + (1 | cluster) ## No modelling of spillover ## Estimates: Control: 0.376 (95% CL: 0.286 0.475 ) ## Intervention: 0.195 (95% CL: 0.133 0.278 ) ## Efficacy: 0.48 (95% CL: 0.28 0.774 ) ## Coefficient of variation: 39.7 % (95% CL: 29.9 59.8 ) ## ## P-value (2-sided): 0.0036424 ``` ```r plotCRT(example2aanalysis) example2aINLA <- CRTanalysis(trial=example2a, method = 'INLA', link='logit', cfunc = 'Z', clusterEffects = FALSE, spatialEffects = TRUE, requireMesh = TRUE, inla_mesh = examplemesh100) plotCRT(example2aINLA, map = TRUE, fill = 'prediction', showClusterBoundaries = TRUE, legend.position = c(0.8, 0.8)) ``` <p> <img src="example2a.r-1.png" > <br> <em>Fig 2.1 Map of allocations of clusters to arms</em> </p> <p> <img src="example2a.r-2.png" > <br> <em>Fig 2.2 Plot of data by distance to other arm</em> </p> <p> <img src="example2a.r-3.png" > <br> <em>Fig 2.3 Smoothed outcome from geostatistical model</em> </p> #### Example with infectiousness proxy surface generated externally ```r set.seed(1234) # Simulate a site with 2000 locations new_site <- CRTsp(geoscale = 2, locations=2000, kappa=3, mu=40) # propensity surface generated as an arbitrary linear function of x the co-ordinate new_site$trial$propensity <- 0.5*new_site$trial$x - min(new_site$trial$x)+1 library(dplyr) example2b<- CRTsp(new_site) %>% specify_clusters(h = 40, algorithm = 'NN') %>% randomizeCRT(matchedPair = FALSE) %>% simulateCRT(effect = 0.8, outcome0 = 0.5, generateBaseline = TRUE, ICC_inp = 0.05, 
theta_inp = 0.5) ``` ``` ## ## ===================== SIMULATION OF CLUSTER RANDOMISED TRIAL ================= ``` ``` ## Estimating the smoothing required to achieve the target ICC of 0.05 ``` ``` ## bandwidth: 1 ICC = 0.0460233924407313 loss = 1.58134076804332e-05 ``` ```r summary(example2b) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -8.73 -5.16 -1.03 0.00 5.17 11.26 ## y -9.55 -4.42 -0.58 0.00 4.56 10.45 ## Total area (within 0.2 km of a location) : 181 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 2000 ## Available clusters (across both arms) 50 ## Per cluster mean number of points 40 ## Per cluster s.d. number of points 0 ## S.D. of distance to nearest discordant location (km): 1.33 ## Cluster randomization: Independently randomized ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- denom propensity num base_denom base_num ``` ```r results2b <- CRTanalysis(example2b, method = 'GEE') ``` ``` ## No non-linear parameter. No fixed effects of distance - ``` ```r summary(results2b) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: GEE ## Link function: logit ## Model formula: arm ## No modelling of spillover ## Estimates: Control: 0.461 (95% CL: 0.402 0.521 ) ## Intervention: 0.139 (95% CL: 0.113 0.17 ) ## Efficacy: 0.698 (95% CL: 0.615 0.764 ) ## Coefficient of variation: 40.5 % (95% CL: 33 52.7 ) ## Intracluster correlation (ICC) : 0.046 (95% CL: 0.019 0.073 ) ## ``` ```r plotCRT(example2b, map = TRUE, fill = 'clusters', showClusterLabels = TRUE, maskbuffer = 0.5) ``` <p> <img src="example2b.r-1.png" > <br> <em>Fig 2.4 Map of clusters in simulated trial</em> </p> #### Example with baseline generated from user-provided values of the overall initial prevalence and ICC ```r set.seed(1234) # use co-ordinates, cluster and arm assignments, and baseline data from `example_simulated` example2c<- CRTsp(geoscale = 2, locations=2000, kappa=3, mu=40) %>% specify_clusters(h = 40, algorithm = 'NN') %>% randomizeCRT(matchedPair = FALSE) %>% simulateCRT(effect = 0.8, outcome0 = 0.5, generateBaseline = TRUE, baselineNumerator = 'base_num', baselineDenominator = 'base_denom', ICC_inp = 0.08, theta_inp = 0.2) ``` ``` ## ## ===================== SIMULATION OF CLUSTER RANDOMISED TRIAL ================= ``` ``` ## Estimating the smoothing required to achieve the target ICC of 0.08 ``` ``` ## bandwidth: 0.156946255820714 ICC = 0.0824323247815882 loss = 5.91620384312814e-06 ``` ```r results2c <- CRTanalysis(example2c, method = 'GEE') ``` ``` ## No non-linear parameter. No fixed effects of distance - ``` ```r summary(results2c) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: GEE ## Link function: logit ## Model formula: arm ## No modelling of spillover ## Estimates: Control: 0.381 (95% CL: 0.309 0.458 ) ## Intervention: 0.219 (95% CL: 0.183 0.26 ) ## Efficacy: 0.425 (95% CL: 0.25 0.557 ) ## Coefficient of variation: 51.1 % (95% CL: 41.1 68.4 ) ## Intracluster correlation (ICC) : 0.0824 (95% CL: 0.0417 0.123 ) ## ```
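The estimates returned by the `GEE` analysis can be fed back into a power calculation. The following is a minimal sketch using `CRTpower()`; the target effect size of 30%, the mean denominator per location (set to 1), and the 25 clusters per arm are illustrative assumptions rather than values taken from the simulation above.

```r
# power for the simulated geography, using the ICC and control prevalence estimated above
power2c <- CRTpower(trial = example2c, effect = 0.3,
                    yC = results2c$pt_ests$controlY, outcome_type = "p",
                    N = 1, c = 25, ICC = results2c$pt_ests$ICC)
power2c$geom_full$power
```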
--- title: "Use Case 03: Estimation of intracluster correlations (ICC) by cluster size" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 03: Estimation of intracluster correlations (ICC) by cluster size} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- The Intracluster Correlation Coefficient (ICC) is one of the inputs to standard power and sample size calculations for CRTs. Trialists often have difficulty identifying an appropriate source for their ICC calculations, or use a value from a source of questionable relevance. The [`CRTanalysis`](../reference/CRTanalysis.html) function has an option to use Generalised Estimating Equations, which provide an estimate of the ICC. This can be applied to baseline data, and hence to different cluster configurations. This makes it possible to estimate the ICC which is appropriate for any given cluster definition, in the chosen geography, assuming baseline data are available. ```r # use the same dataset as for Use Case 1. library(CRTspat) example_locations <- readdata('example_site.csv') example_locations$base_denom <- 1 library(dplyr) example <- CRTsp(example_locations) %>% aggregateCRT(auxiliaries = c("RDT_test_result", "base_denom")) summary(example) ``` ``` ## ===============================CLUSTER RANDOMISED TRIAL =========================== ## ## Summary of coordinates ## ---------------------- ## Min. : 1st Qu.: Median : Mean : 3rd Qu.: Max. : ## x -3.20 -1.40 -0.30 -0.07 1.26 5.16 ## y -5.08 -2.84 0.19 0.05 2.49 6.16 ## Total area (within 0.2 km of a location) : 27.6 sq.km ## ## Locations and Clusters ## ---------------------- - ## Coordinate system (x, y) ## Locations: 1181 ## Available clusters (across both arms) Not assigned ## No randomization - ## No power calculations to report - ## ## Other variables in dataset ## -------------------------- RDT_test_result base_denom ``` ```r # randomly sample an array of values of c (use a small sample size for testing # the plots were produced with n=5000) set.seed(5) c_vec <- round(runif(50, min = 6, max = 150)) # a user function randomizes and analyses each simulated trial CRTscenario3 <- function(c, CRT) { ex <- specify_clusters(CRT, c = c, algo = "kmeans") %>% randomizeCRT() GEEanalysis <- CRTanalysis(ex, method = "GEE", baselineOnly = TRUE, excludeBuffer = FALSE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom") locations <- GEEanalysis$description$locations ICC <- GEEanalysis$pt_ests$ICC value <- c(c = c, ICC = ICC, mean_h = locations/c) return(value) } # The results are collected in a data frame results <- t(sapply(c_vec, FUN = CRTscenario3, simplify = "array", CRT = example)) %>% data.frame() ``` There is a clear downward trend in the ICC estimates, as cluster size increases (Figure 3.1). The ICC expected for a trial in this, or similar, geographies can be read off the curve. Note that the ICC is expected to vary not just with cluster size, but also to vary between different outcomes. <p> <img src="example3b.r-1.png" > <br> <em>Fig 3.1 Intracluster correlation by size of cluster</em> </p>
--- title: "Use Case 04: Estimation of optimal cluster size for a trial with pre-determined buffer width" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 04: Estimation of optimal cluster size for a trial with pre-determined buffer width} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- Both the number and size of clusters affect power calculations for CRTs: + If there are no logistical constraints, and spillover can be neglected (as in trials of vaccines that enrol only small proportions of the population), there is no need for a buffer zone and the most efficient design is an individually randomized CRT (i.e. a cluster size of one). In general, a trial with many small clusters has more power than one with the same number of individuals enrolled in larger clusters. + If spillover is an issue, and it is decided to address this by including buffer zones, then the number of individuals included in the trial is less than the total population. Enumeration and intervention allocation are still required for the full trial area, so there can be substantial resource implications if many people are included in the buffers. There is a trade-off between increasing power by creating many small clusters (leading to a large proportion of locations in buffer zones) and reducing the proportion of locations in buffer zones by using large clusters. The `CRTspat` package provides functions for analysing this trade-off for any site for which baseline data are available. The example shown here uses the baseline prevalence data introduced in [Use Case 1](Usecase1.html). The trial is assumed to plan to be based on the same outcome of prevalence, and to be powered for an efficacy of 30%. A set of different algorithmic cluster allocations are carried out with different numbers of clusters. Each allocation is randomized and buffer zones are specified with the a pre-specified width (in this example, 0.5 km). The ICC is computed from the baseline data, excluding the buffer zones, and corresponding power calculations are carried out. The power is calculated and plotted as a function of cluster size. ```r # use the same dataset as for Use Case 1. 
library(CRTspat) example_locations <- readdata('example_site.csv') example_locations$base_denom <- 1 exampleCRT <- CRTsp(example_locations) example <- aggregateCRT(exampleCRT, auxiliaries = c("RDT_test_result", "base_denom")) # randomly sample an array of numbers of clusters to allocate set.seed(5) c_vec <- round(runif(20, min = 6, max = 60)) CRTscenario <- function(c, CRT, buffer_width) { ex <- specify_clusters(CRT, c = c, algo = "kmeans") %>% randomizeCRT() %>% specify_buffer(buffer_width = buffer_width) GEEanalysis <- CRTanalysis(ex, method = "GEE", baselineOnly = TRUE, excludeBuffer = TRUE, baselineNumerator = "RDT_test_result", baselineDenominator = "base_denom") locations <- GEEanalysis$description$locations ex_power <- CRTpower(trial = ex, effect = 0.3, yC = GEEanalysis$pt_ests$controlY, outcome_type = "p", N = GEEanalysis$description$sum.denominators/locations, c = c, ICC = GEEanalysis$pt_ests$ICC) value <- c(c_full = c, c_core = ex_power$geom_core$c, clustersRequired = ex_power$geom_full$clustersRequired, power = ex_power$geom_full$power, mean_h = ex_power$geom_full$mean_h, locations = locations, ICC = GEEanalysis$pt_ests$ICC) names(value) <- c("c_full", "c_core", "clustersRequired", "power", "mean_h", "locations", "ICC") return(value) } results <- t(sapply(c_vec, FUN = CRTscenario, simplify = "array", CRT = example, buffer_width = 0.5)) %>% data.frame() ``` Each simulated cluster allocation is different, as are the randomizations. This leads to variation in the locations of the buffer zones, so the number of core clusters is a stochastic function of the number of clusters randomised (c). There is also variation in the estimated Intracluster Correlation (see [Use Case 3](Usecase3.html)) for any value of c. ```r total_locations <- example$geom_full$locations results$proportion_included <- results$c_core * results$mean_h * 2/total_locations results$corelocations_required <- results$clustersRequired * results$mean_h results$totallocations_required <- with(results, total_locations/locations * corelocations_required) library(ggplot2) theme_set(theme_bw(base_size = 14)) ggplot(data = results, aes(x = c_full, y = c_core)) + geom_smooth() + xlab("Clusters allocated (per arm)") + ylab("Clusters in core (per arm)") + geom_segment(aes(x = 5, xend = 35, y = 18.5, yend = 18.5), arrow = arrow(length = unit(1, "cm")), lwd = 2, color = "red") ``` <p> <img src="example4b.r-1.png" > <br> <em>Fig 4.1 Numbers of clusters</em> </p> The number of clusters in the core area increases with the number of clusters allocated, until the cluster size becomes small enough for entire clusters to be swallowed by the buffer zones. This can be illustrated by the contrast in the core areas randomised with c = 6 and c = 40 (Figures 4.2 and 4.3). 
```r set.seed(7) library(dplyr) example6 <- specify_clusters(example, c = 6, algo = "kmeans") %>% randomizeCRT() %>% specify_buffer(buffer_width = 0.5) plotCRT(example6, map = TRUE, showClusterBoundaries = TRUE, showClusterLabels = TRUE, labelsize = 2, maskbuffer = 0.2) example40 <- specify_clusters(example, c = 40, algo = "kmeans") %>% randomizeCRT() %>% specify_buffer(buffer_width = 0.5) plotCRT(example40, map = TRUE, showClusterBoundaries = TRUE, showClusterLabels = TRUE, labelsize = 2, maskbuffer = 0.2) ``` <p> <img src="example4c.r-1.png"> <br> <em>Fig 4.2 Map of clusters with c = 6</em> </p> <p> <img src="example4c.r-2.png"> <br> <em>Fig 4.3 Map of clusters with c = 40</em> </p> Beyond this point, increasing the number of clusters allocated in the fixed area (by making them smaller) does not add to the total number of clusters. In this example the maximum is achieved when the input c is about 35 and the output c is 18.5. ```r ggplot(data = results, aes(x = c_core, y = mean_h)) + geom_smooth() + xlab("Clusters in core (per arm)") + ylab("Mean cluster size") ``` <p> <img src="example4d.r-1.png" > <br> <em>Fig 4.4 Size of clusters</em> </p> The size of clusters decreases with the number allocated (Figure 4.4), but does not fall much below 10 locations on average in the example because smaller clusters are likely to be absorbed into the buffer zones. ```r ggplot(data = results, aes(x = c_core, y = power)) + geom_smooth() + xlab("Clusters in core (per arm)") + ylab("Power") ``` <p> <img src="example4e.r-1.png" > <br> <em>Fig 4.5 Power achievable with given site</em> </p> The power increases approximately linearly with the number of clusters in the core (Figure 4.5), but the site is too small for an adequate power to be achieved with this size of buffer, irrespective of the cluster size. Because the buffering leads to a maximum in the cluster density (number of clusters per unit area), so does the power achievable with a fixed area (Figure 4.6). ```r ggplot2::ggplot(data = results, aes(x = c_full, y = power)) + geom_smooth() + xlab("Clusters allocated (per arm)") + ylab("Power") ``` <p> <img src="example4f.r-1.png" > <br> <em>Fig 4.6 Power achievable with given site</em> </p> However the analysis also gives an estimate of how large an extended site is needed to achieve adequate power (assuming the the spatial pattern for the wider site to be similar to that of the baseline area). A minimum total number of locations required to achieve a pre-specified power (80%) is achieved at the same density of clusters as the maximum of the power estimated for the smaller, baseline site. ```r ggplot2::ggplot(data = results, aes(x = c_core, y = corelocations_required)) + geom_smooth() + xlab("Clusters in core (per arm)") + ylab("Required core locations") ``` ``` ## `geom_smooth()` using method = 'loess' and formula = 'y ~ x' ``` <p> <img src="example4g.r-1.png" > <br> <em>Fig 4.7 Number of clusters required for full trial area </em> </p> This is also at the allocation density where saturation is achieved in the number of core clusters (Figure 4.1), and where the proportion of the locations included in the core area reaches its minimum (Figure 4.8). 
```r
ggplot2::ggplot(data = results, aes(x = c_core, y = proportion_included)) +
    geom_smooth() +
    xlab("Clusters in core (per arm)") + ylab("Proportion of locations in core") +
    geom_segment(aes(x = 18, xend = 18, y = 0, yend = 0.25),
                 arrow = arrow(length = unit(1, "cm")), lwd = 2, color = "red")
```
<p> <img src="example4h.r-1.png" > <br> <em>Fig 4.8 Proportions of locations in core</em> </p>

#### Conclusions

With the example geography and the selected trial outcome, the most efficient trial design, conditional on a buffer width of 0.5 km, would be achieved by assigning about 30 clusters to each arm in a site of the size analysed, though about one third of these clusters would be eliminated by inclusion in the buffer zones. Even so, the power would be far from adequate. To achieve 80% power about 8,000 locations would be needed, in a larger trial area, of which about 2,400 would be in the core (sampled) parts of the clusters.

```r
ggplot2::ggplot(data = results, aes(x = c_core, y = totallocations_required)) +
    geom_smooth() +
    xlab("Clusters in core (per arm)") + ylab("Total locations required") +
    geom_segment(aes(x = 18, xend = 18, y = 0, yend = 8000),
                 arrow = arrow(length = unit(1, "cm")), lwd = 2, color = "red")
```
<p> <img src="example4i.r-1.png" > <br> <em>Fig 4.9 Size of trial area required to achieve adequate power</em> </p>
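The optimum suggested by Figure 4.9 can also be extracted numerically from the simulation results; a minimal sketch, again smoothing with `loess`:

```r
# locate the approximate minimum of the smoothed total-locations requirement
fit <- loess(totallocations_required ~ c_core, data = results)
c_grid <- data.frame(c_core = seq(min(results$c_core), max(results$c_core), length.out = 200))
pred <- predict(fit, newdata = c_grid)
c_grid$c_core[which.min(pred)]   # clusters per arm in the core at the minimum
min(pred, na.rm = TRUE)          # approximate total locations required
```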
--- title: "Use Case 05: Analysis of trials (including methods for analysing spillover)" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 05: Analysis of trials (including methods for analysing spillover)} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- The [`CRTanalysis()`](../reference/CRTanalysis.html) function is a wrapper for different statistical analysis packages that can be used to analyse either simulated or real trial datasets. It is designed for use in simulation studies of different analytical methods for spatial CRTs by automating the data processing and selecting some appropriate analysis options. It does not replace conventional use of these packages. Real field trials very often entail complications that are not catered for any of the analysis options in `CRTanalysis()` and it does not aspire to carry out the full analytical workflow for a trial. It can be used as part of a wider workflow. In particular the usual object output by the statistical analysis package constitutes the `model_object` element within the `CRTanalysis` object generated by `CRTanalysis()`. This can be accessed by the usual methods (e.g `predict()`, `summary()`, `plot()`) which may be needed for diagnosing errors, assessing goodness of fit, and for identifying needs for additional analyses. ## Statistical Methods The options that can be specified using the `method` parameter in the function call are: + `method = "T"` summarises the outcome at the level of the cluster, and uses 2-sample t-tests to carry out statistical significance tests of the effect, and to compute confidence intervals for the effect size. The [t.test](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/t.test) function in the `stats` package is used. + `method = "GEE"` uses Generalised Estimating Equations to estimate the efficacy in a model with iid random effects for the clusters. An estimate of the intracluster correlation (ICC) is also provided. This uses calls to the [geepack](https://www.jstatsoft.org/article/view/v015i02) package. + `method = "LME4"` fits linear (for continuous data) or generalized linear (for counts and proportions) mixed models with iid random effects for clusters in [lme4](https://CRAN.R-project.org/package=lme4). + `method = "MCMC"` uses Markov chain Monte Carlo simulation in package [jagsUI](https://CRAN.R-project.org/package=jagsUI), which calls r-JAGS. + `method = "INLA"` uses approximate Bayesian inference via the [R-INLA package](https://www.r-inla.org/). This provides functionality for geostatistical analysis, which can be used for geographical mapping of model outputs (as illustrated in . INLA spatial analysis requires a prediction mesh. This can be generated using [`CRTspat::new_mesh()`](../reference/new_mesh().html). This can be computationally expensive, so it is recommended to compute the mesh just once for each dataset. All these analysis methods can be used to carry out a simple comparision of outcomes between trial arms. Each offers different additional functionality, and has its own limitations (see Table 5.1). Some of these limitations are specific to the options offered within `CRTanalysis()`, which does not embrace the full range of options of the packages that are 'wrapped'. These are specified using the `method` argument of the function. Table 5.1. 
Available statistical methods | `method` | Package | What the `CRTanalysis()` implementation offers |Limitations (as implemented) | |----------|---------|------------------------------------------------|-----------------------------| | `T`| [t.test](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/t.test) | P-values and confidence intervals for efficacy based on comparison of cluster means | No analysis of spillover or degree of clustering | | `GEE` | [geepack](https://www.jstatsoft.org/article/view/v015i02) | Interval estimates for efficacy and Intra-cluster correlations | No analysis of spillover or degree of clustering | | `LME4`| [lme4](https://CRAN.R-project.org/package=lme4) | Analysis of spillover | No geostatistical analysis | | `INLA` | [INLA](https://www.r-inla.org/) | Analysis of spillover, geostatistical analysis and spatially structured outputs | Computationally intensive | | `MCMC` | [jagsUI](https://CRAN.R-project.org/package=jagsUI) | Interval estimates for spillover parameters | Identifiability issues and slow convergence are possible | For the analysis of proportions, the outcome in the control arm is estimated as: $\hat{p}_{C} = \frac{1}{1 + exp(-\beta_1)}$, in the intervention arm as $\hat{p}_{I} = \frac{1}{1 + exp(-\beta_1-\beta_2)}$, and the efficacy is estimated as $\tilde{E}_{s} = 1- \frac{\tilde{p}_{I}}{\tilde{p}_{C}}$ where $\beta_1$ is the intercept term and $\beta_2$ the incremental effect associated with the intervention. `summary("<analysis>"")` is used to view the key results of the trial. To display the output from the statistical procedure that is called, try `<analysis>$model_object` or `summary("<analysis>$model_object")`. ```r library(CRTspat) example <- readdata("exampleCRT.txt") analysisT <- CRTanalysis(example, method = "T") summary(analysisT) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: T ## Link function: logit ## Model formula: arm + (1 | cluster) ## No modelling of spillover ## Estimates: Control: 0.364 (95% CL: 0.286 0.451 ) ## Intervention: 0.21 (95% CL: 0.147 0.292 ) ## Efficacy: 0.423 (95% CL: 0.208 0.727 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## ## P-value (2-sided): 0.006879064 ``` ```r analysisT$model_object ``` ``` ## ## Two Sample t-test ## ## data: lp by arm ## t = 2.9818, df = 22, p-value = 0.006879 ## alternative hypothesis: true difference in means between group control and group intervention is not equal to 0 ## 95 percent confidence interval: ## 0.2332638 1.2989425 ## sample estimates: ## mean in group control mean in group intervention ## -0.5561662 -1.3222694 ``` ## Assessing model fit The `model = "LME4"` option outputs the deviance of the model and the Akaike information criterion (AIC), which can be used to select the best fitting model. The deviance information criterion (DIC) and Bayesian information criterion (BIC) perform the same role for the Bayesian methods (`"INLA"`, and `"MCMC"`). The comparison of results with `cfunc = "X"` and `cfunc = "Z"` is used to assess whether the intervention effect is likely to be due to chance. With `method = "T"`, `cfunc = "X"` provides a significance test of the intervention effect directly. The models with spillover (see below) can be compared by that with `cfunc = "X"` to evaluate whether spillover has led to an important bias. 
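As a minimal sketch of such a comparison, the change in deviance between the `cfunc = "Z"` and `cfunc = "X"` models can be referred to a chi-squared distribution with one degree of freedom:

```r
# likelihood ratio comparison of models with and without an intervention effect
fit_Z <- CRTanalysis(example, method = "LME4", cfunc = "Z")  # null model
fit_X <- CRTanalysis(example, method = "LME4", cfunc = "X")  # simple intervention effect
LRchisq <- fit_Z$pt_ests$deviance - fit_X$pt_ests$deviance
LRchisq > qchisq(0.95, df = 1)   # TRUE if the intervention effect is unlikely to be due to chance
```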
## Spillover `CRTanalysis()` provides options for analysing spillover effects either as function of a Euclidean distance or as a function of a surround measure: #### Models that do not consider spillover Models that do not consider spillover can be fitted using options `Z` and `X`. These are included both to allow conventional analyses (see above), and also to enable model selection using and likelihood ratio tests, the Akaike information criterion (AIC), deviance information criterion (DIC) or Bayesian information criterion (BIC) . #### Spillover as a function of distance These methods require a measure of distance from the boundary between the trial arms, with locations in the control arm assigned negative values, and those in the intervention arm assigned positive values. The functional forms for this relationship is specified by the value of `cfunc` (Table 5.2). Table 5.2. Available spillover functions | `cfunc` | Description | Formula for $P\left( d \right)$ | Compatible `method`(s) | |---------|------------------|--------------------------|--------------------------| | `Z`| No intervention effect | $P\left( d \right) = \ 0\ $ | `GEE` `LME4` `INLA` `MCMC` | | `X`| Simple intervention effect | $\begin{matrix} P\left( d \right) = \ 0\ for\ d\ < \ 0 \\ P\left( d \right) = \ 1\ for\ d\ > \ 0 \\ \end{matrix}\ $ | `T` `GEE` `LME4` `INLA` `MCMC` | | `L`| inverse logistic (sigmoid)| $P\left( d \right) = \ \frac{1}{\left( 1\ + \ exp\left( - d/S \right) \right)}$ | `LME4` `INLA` `MCMC` | | `P`| inverse probit (error function) | $P\left( d \right) = 1\ +\ erf\left(\frac{d}{S\sqrt2}\right)$ | `LME4` `INLA` `MCMC` | | `S`| piecewise linear | $\begin{matrix} P\left( d \right) = \ 0\ for\ d\ < \ - S/2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ P\left( d \right) = \ \left(S/2\ + \ d \right)/S\ for\ - S/2 < d\ < \ S/2\\ P\left( d \right) = \ 1\ for\ d\ > \ S/2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \end{matrix}\ $ | `LME4` `INLA` `MCMC` | | `R`| rescaled linear | $P\left( d \right) =\frac{d\ -\ min(d)}{max(d)\ -\ min(d)}$ | `LME4` `INLA` `MCMC` | `cfunc` options `P`, `L` and `S` lead to non-linear models in which the spillover scale parameter (`S`) must be estimated. This is done by selecting `scale_par` using a one-dimensional optimisation of the goodness of fit of the model in `stats::optimize()`. The different values for `cfunc` lead to the fitted curves shown in Figure 5.1. The light blue shaded part of the plot corresponds to the spillover interval in those cases where this is estimated. 
```r analysisLME4_Z <- CRTanalysis(example, method = "LME4", cfunc = "Z") summary(analysisLME4_Z) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Model formula: (1 | cluster) ## No comparison of arms ## Estimates: Control: 0.285 (95% CL: NA ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1387.609 ## AIC : 1391.609 ``` ```r analysisLME4_X <- CRTanalysis(example, method = "LME4", cfunc = "X") summary(analysisLME4_X) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Model formula: arm + (1 | cluster) ## No modelling of spillover ## Estimates: Control: 0.366 (95% CL: 0.292 0.449 ) ## Intervention: 0.216 (95% CL: 0.162 0.281 ) ## Efficacy: 0.41 (95% CL: 0.165 0.584 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1379.898 ## AIC : 1385.898 ``` ```r analysisLME4_P <- CRTanalysis(example, method = "LME4", cfunc = "P") summary(analysisLME4_P) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## Estimated scale parameter: 0.45 ## Model formula: pvar + (1 | cluster) ## Error function model for spillover ## Estimates: Control: 0.418 (95% CL: 0.331 0.509 ) ## Intervention: 0.186 (95% CL: 0.136 0.25 ) ## Efficacy: 0.553 (95% CL: 0.327 0.703 ) ## spillover interval(km): 4.22 (95% CL: 4.2 4.23 ) ## % locations contaminated: 91.6 (95% CL: 90.6 92 %) ## Total effect : 0.23 (95% CL: 0.114 0.344 ) ## Ipsilateral Spillover : 0.0233 (95% CL: 0.0127 0.0323 ) ## Contralateral Spillover : 0.0417 (95% CL: 0.0192 0.0651 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.215 ## AIC : 1382.215 including penalty for the spillover scale parameter ``` ```r analysisLME4_L <- CRTanalysis(example, method = "LME4", cfunc = "L") summary(analysisLME4_L) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## Estimated scale parameter: 0.249 ## Model formula: pvar + (1 | cluster) ## Sigmoid (logistic) function for spillover ## Estimates: Control: 0.417 (95% CL: 0.332 0.51 ) ## Intervention: 0.186 (95% CL: 0.136 0.249 ) ## Efficacy: 0.552 (95% CL: 0.329 0.7 ) ## spillover interval(km): 4.26 (95% CL: 4.24 4.28 ) ## % locations contaminated: 92.7 (95% CL: 92.2 93.1 %) ## Total effect : 0.229 (95% CL: 0.115 0.342 ) ## Ipsilateral Spillover : 0.0219 (95% CL: 0.0121 0.0304 ) ## Contralateral Spillover : 0.0388 (95% CL: 0.0183 0.0604 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.201 ## AIC : 1382.201 including penalty for the spillover scale parameter ``` ```r analysisLME4_S <- CRTanalysis(example, method = "LME4", cfunc = "S") summary(analysisLME4_S) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## Estimated scale parameter: 1.674 ## Model formula: pvar + (1 | cluster) ## Piecewise linear function for spillover ## Estimates: Control: 0.423 (95% CL: 0.334 0.516 ) ## Intervention: 0.185 (95% CL: 0.135 0.247 ) ## Efficacy: 0.561 (95% CL: 0.341 0.711 ) ## spillover interval(km): 4.1 (95% CL: 4.1 
4.11 ) ## % locations contaminated: 86.6 (95% CL: 86.6 87.1 %) ## Total effect : 0.237 (95% CL: 0.12 0.356 ) ## Ipsilateral Spillover : 0.029 (95% CL: 0.016 0.0403 ) ## Contralateral Spillover : 0.0522 (95% CL: 0.0248 0.0818 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.094 ## AIC : 1382.094 including penalty for the spillover scale parameter ``` ```r analysisLME4_R <- CRTanalysis(example, method = "LME4", cfunc = "R") summary(analysisLME4_R) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## No non-linear parameter. 1 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.584 (95% CL: 0.381 0.758 ) ## Intervention: 0.116 (95% CL: 0.0587 0.216 ) ## Efficacy: 0.801 (95% CL: 0.465 0.92 ) ## spillover interval(km): 6.64 (95% CL: 6.61 6.65 ) ## % locations contaminated: 99.8 (95% CL: 99.8 99.8 %) ## Total effect : 0.468 (95% CL: 0.181 0.694 ) ## Ipsilateral Spillover : 0.117 (95% CL: 0.0564 0.157 ) ## Contralateral Spillover : 0.238 (95% CL: 0.0831 0.368 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1378.711 ## AIC : 1384.711 ``` ```r p0 <- plotCRT(analysisLME4_Z, map = FALSE) p1 <- plotCRT(analysisLME4_X, map = FALSE) p2 <- plotCRT(analysisLME4_P, map = FALSE) p3 <- plotCRT(analysisLME4_L, map = FALSE) p4 <- plotCRT(analysisLME4_S, map = FALSE) p5 <- plotCRT(analysisLME4_R, map = FALSE) library(cowplot) plot_grid(p0, p1, p2, p3, p4, p5, labels = c('Z', 'X', 'P', 'L', 'S', 'R'), label_size = 10, ncol = 2) ``` <p> <img src="example5b.r-1.png"> <br> <em>Fig 5.1 Fitted curves for the example dataset with different options for `cfunc`</em> </p> The piecewise linear spillover function, `cfunc = "S"`, is only linear on the scale of the linear predictor. When used in a logistic model, as here, the transformation via the inverse of the link function leads to a slightly curved plot (Figure 5.1S). The rescaled linear function, `cfunc = "R"`, is provided as a comparator and for use with `distance` values other than `distance = "nearestDiscord"` see below (it should not be used to estimate the spillover interval). The full set of different `cfunc` options are available for each of model options `"LME4"`, `"INLA"`, and `"MCMC"`. The performance of all these different models has not yet been thoroughly investigated. The analyses of [Multerer *et al.* (2021b)](https://malariajournal.biomedcentral.com/articles/10.1186/s12936-021-03924-7) found that that a model equivalent to `method = "MCMC"`, `cfunc = "L"` gave estimates of efficacy with low bias, even in simulations with considerable spillover. #### Spillover as a function of surround Spillover can also be analysed by assuming the effect size to be a function of the number of intervened locations in the surroundings of the location [Anaya-Izquierdo & Alexander(2021)](https://onlinelibrary.wiley.com/doi/full/10.1111/biom.13316). Several different surround functions are available. These are specified by the `distance` parameter (Table 5.3). Table 5.3. Available surround functions | `distance` | Description | Details | |----------------|------------------|--------------------------------------------------------------| |`nearestDiscord`| Distance to nearest discordant location | The default. 
This is used for analyses by distance (see above) | |`hdep`| Tukey half-depth | Algorithm of [Rousseeuw & Ruts(1996)](https://www.jstor.org/stable/2986073) | |`sdep`| Simplicial depth| Algorithm of [Rousseeuw & Ruts(1996)](https://www.jstor.org/stable/2986073) | |`disc`| disc | The number of intervened locations within the specified radius (excluding the location itself) as described by [Anaya-Izquierdo & Alexander(2021)](https://onlinelibrary.wiley.com/doi/full/10.1111/biom.13316) | |`kern`| Sum of kernels | The sum of normal kernels | The [`compute_distance()`](../reference/compute_distance.html) function is provided to compute these quantities, so that they can be described, compared, and analysed independently of `CRTanalysis()`. Note that the values of the surround calculated by `compute_distance()` are scaled to avoid correlation with the spatial density of the points (see [documentation](../reference/compute_distance.html)) and so are not equivalent to the quantities reported in the original publications. Users can also devise other measures of surround or distance, add them to a `trial` data frame and specify them using `distance`. `CRTanalysis()` computes the minimum value for the specified field ```r examples <- compute_distance(example, distance = "hdep") ps1 <- plotCRT(examples, distance = "hdep", legend.position = c(0.6, 0.8)) ps2 <- plotCRT(examples, distance = "sdep") examples <- compute_distance(examples, distance = "disc", scale_par = 0.5) ps3 <- plotCRT(examples, distance = "disc") examples <- compute_distance(examples, distance = "kern", scale_par = 0.5) ps4 <- plotCRT(examples, distance = "kern") plot_grid(ps1, ps2, ps3, ps4, labels = c('hdep', 'sdep', 'disc', 'kern'), label_size = 10, ncol = 2) ``` <p> <img src="example5c.r-1.png"> <br> <em>Fig 5.2 Stacked bar plots for different surrounds</em> </p> If `distance` is assigned a value of either `hdep`, `sdep`, then `cfunc = "R"` is used by default and the overall effect size is computed by comparing the fitted values of the model for a surround value of zero with that of the maximum of the surround in the data. If `distance = "disc"` or `distance = "kern"` and `scale_par` is assigned a value, then `cfunc = "R"` is also used. If `cfunc = "E"` is specified then an escape function is fitted with the scale parameter estimated in the same way as in the scale parameter in other models (see above Table 5.2). ```r examples_hdep <- CRTanalysis(examples, method = "LME4", distance = "hdep", cfunc = 'R') summary(examples_hdep) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Tukey half-depth ## No non-linear parameter. 
1 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.381 (95% CL: 0.292 0.478 ) ## Intervention: 0.209 (95% CL: 0.15 0.282 ) ## Efficacy: 0.452 (95% CL: 0.167 0.639 ) ## spillover interval(km): 0.978 (95% CL: 0.976 0.98 ) ## % locations contaminated: 55 (95% CL: 55 55 %) ## Total effect : 0.172 (95% CL: 0.0524 0.292 ) ## Ipsilateral Spillover : 0.0313 (95% CL: 0.01 0.0512 ) ## Contralateral Spillover : 0.0444 (95% CL: 0.0128 0.0785 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1379.89 ## AIC : 1385.89 ``` ```r ps4 <- plotCRT(examples_hdep,legend.position = c(0.8, 0.8)) examples_sdep <- CRTanalysis(examples, method = "LME4", distance = "sdep", cfunc = 'R') summary(examples_sdep) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Simplicial depth ## No non-linear parameter. 1 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.393 (95% CL: 0.307 0.485 ) ## Intervention: 0.199 (95% CL: 0.145 0.268 ) ## Efficacy: 0.493 (95% CL: 0.243 0.66 ) ## spillover interval(km): 0.978 (95% CL: 0.976 0.98 ) ## % locations contaminated: 52.4 (95% CL: 52.2 52.4 %) ## Total effect : 0.193 (95% CL: 0.0802 0.306 ) ## Ipsilateral Spillover : 0.0299 (95% CL: 0.013 0.0456 ) ## Contralateral Spillover : 0.0431 (95% CL: 0.0169 0.0704 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1376.417 ## AIC : 1382.417 ``` ```r ps5 <- plotCRT(examples_sdep) examples_disc <- CRTanalysis(examples, method = "LME4", distance = "disc", cfunc = 'R', scale_par = 0.15) summary(examples_disc) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: disc of radius 0.15 km ## Precalculated scale parameter: 0.15 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.387 (95% CL: 0.312 0.467 ) ## Intervention: 0.2 (95% CL: 0.149 0.26 ) ## Efficacy: 0.482 (95% CL: 0.273 0.634 ) ## spillover interval(km): 0.978 (95% CL: 0.976 0.98 ) ## % locations contaminated: 8.89 (95% CL: 8.89 8.89 %) ## Total effect : 0.186 (95% CL: 0.0912 0.282 ) ## Ipsilateral Spillover : 0.00458 (95% CL: 0.00239 0.00656 ) ## Contralateral Spillover : 0.00576 (95% CL: 0.00271 0.00905 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.274 ## AIC : 1380.274 ``` ```r ps6 <- plotCRT(examples_disc) examples_kern <- CRTanalysis(examples, method = "LME4", distance = "kern", cfunc = 'R', scale_par = 0.15) summary(examples_kern) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: kern with kernel s.d. 
0.15 km ## Precalculated scale parameter: 0.15 ## Model formula: pvar + (1 | cluster) ## Rescaled linear function for spillover ## Estimates: Control: 0.406 (95% CL: 0.327 0.491 ) ## Intervention: 0.185 (95% CL: 0.136 0.245 ) ## Efficacy: 0.542 (95% CL: 0.349 0.684 ) ## spillover interval(km): 0.979 (95% CL: 0.977 0.98 ) ## % locations contaminated: 50.8 (95% CL: 50.6 50.9 %) ## Total effect : 0.22 (95% CL: 0.122 0.32 ) ## Ipsilateral Spillover : 0.011 (95% CL: 0.00661 0.0152 ) ## Contralateral Spillover : 0.0134 (95% CL: 0.00707 0.0203 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1369.677 ## AIC : 1375.677 ``` ```r ps7 <- plotCRT(examples_kern) plot_grid(ps4, ps5, ps6, ps7, labels = c('hdep', 'sdep', 'disc', 'kern'), label_size = 10, ncol = 2) ``` <p> <img src="example5d.r-1.png"> <br> <em>Fig 5.3 Fitted curves for the example dataset with different surrounds </em> </p> ## Geostatistical models and mapping results To carry out a geostatistical analysis with `method = "INLA"` a prediction mesh is needed. By default a very low resolution mesh is created (creating a high resolution mesh is computationally expensive). To create a 100m INLA mesh for `<MyTrial>`, use: `mesh <- new_mesh(trial = <MyTrial> , pixel = 0.1)`
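Returning to the point above about user-defined measures of surround: measures that are not built into `compute_distance()` can be added as a new column of the `trial` data frame and analysed by passing the column name to the `distance` argument of `CRTanalysis()`. The sketch below illustrates this with a simple user-defined surround, the count of intervened locations within 0.3 km of each location; the column name `n_int_03`, the 0.3 km radius and the choice of `cfunc = "R"` are illustrative assumptions rather than package defaults.

```r
library(CRTspat)
example <- readdata("exampleCRT.txt")
trial <- example$trial

# user-defined surround: number of intervened locations within 0.3 km
int_xy <- trial[trial$arm == "Intervention", c("x", "y")]
trial$n_int_03 <- sapply(seq_len(nrow(trial)), function(i) {
  d <- sqrt((int_xy$x - trial$x[i])^2 + (int_xy$y - trial$y[i])^2)
  # exclude the location itself when it belongs to the intervention arm
  sum(d < 0.3) - as.numeric(trial$arm[i] == "Intervention")
})

# analyse using the user-defined measure via the 'distance' argument
fit_custom <- CRTanalysis(trial, method = "LME4", distance = "n_int_03", cfunc = "R")
summary(fit_custom)
```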
/scratch/gouwar.j/cran-all/cranData/CRTspat/vignettes/Usecase5.Rmd
--- title: "Use Case 06: Thematic mapping of the geography of a CRT" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 06: Thematic mapping of the geography of a CRT} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- `CRTspat` is intended to facilitate thematic mapping of the geography of a CRT at each stage from enumeration of the trial population to data analysis. Graphical outputs are generated with `ggplot2()`. In addition there is a function, `CRTwrite()` to export the thematic layers and shapefiles to GIS formats. The same `plotCRT()` function is used at each stage in the trial (see below), with the functionality available expanding as more fields are populated in the `CRTsp` object. When applied to output from `CRTanalysis()` `plotCRT()` that analyse the spillover interval, an expanded set of thematic maps are available, including overlay plots showing the spillover zone (i.e. the subset of the study area estimated to have effects of spillover) and thematic maps of spatial predictions. ```r # using the same dataset as for Use Case 1. library(CRTspat) exampleCRT <- readdata('exampleCRT.txt') plotCRT(exampleCRT, map = TRUE, fill = 'none', showLocations=TRUE) ``` <p> <img src="example6a.r-1.png" > <br> <em>Fig 6.1 Locations </em> </p> If the clusters have been established, a map can be drawn showing where they are located. The clusters can be distinguished by colour or by number. To ensure that the image is not too crowded, by default the locations are not shown (but they can be shown if required). ```r plotCRT(exampleCRT, map = TRUE, fill = 'clusters', showClusterLabels = TRUE, labelsize =3) ``` <p> <img src="example6b.r-1.png" > <br> <em>Fig 6.2 Clusters </em> </p> Similarly, the map of arms is available if the trial has been randomized. Buffer zones can be plotted on this map. ```r plotCRT(exampleCRT, map = TRUE, fill = 'arms', showLocations=TRUE) plotCRT(exampleCRT, map = TRUE, fill = 'arms', showBuffer=TRUE, showClusterBoundaries = FALSE, buffer_width = 0.5) ``` ``` ## Buffer includes locations within 500m of the opposing arm ``` <p> <img src="example6c.r-1.png" > <br> <em>Fig 6.3 Arms with locations </em> </p> <p> <img src="example6c.r-2.png" > <br> <em>Fig 6.4 Arms with 500m buffer zone shaded </em> </p> Once data have been collected, `plotCRT()` can be used to generate a bar plot to illustrate how much of the data are found close to the boundary between the arms. ```r plotCRT(exampleCRT, map = FALSE) ``` <p> <img src="example6d.r-1.png" > <br> <em>Fig 6.5 Numbers of observations by distance from boundary</em> </p> The results of the data analysis can be illustrated with further graphics. The blue shaded section of Figure 6.8 indicates the spillover zone, corresponding to those locations that fall within the central 95% of the estimated sigmoid of the of the effect size by distance from the boundary between the arms. 
```r analysis <- CRTanalysis(exampleCRT, cfunc = "P", method = "LME4") ``` ``` ## Estimated scale parameter: 0.45 Signed distance -Signed distance to other arm (km) ``` ```r summary(analysis) ``` ``` ## ## =====================CLUSTER RANDOMISED TRIAL ANALYSIS ================= ## Analysis method: LME4 ## Link function: logit ## Measure of distance or surround: Signed distance to other arm (km) ## Estimated scale parameter: 0.45 ## Model formula: pvar + (1 | cluster) ## Error function model for spillover ## Estimates: Control: 0.418 (95% CL: 0.331 0.511 ) ## Intervention: 0.186 (95% CL: 0.135 0.251 ) ## Efficacy: 0.554 (95% CL: 0.33 0.703 ) ## spillover interval(km): 4.22 (95% CL: 4.2 4.24 ) ## % locations contaminated: 91.6 (95% CL: 90.6 92.2 %) ## Total effect : 0.231 (95% CL: 0.116 0.347 ) ## Ipsilateral Spillover : 0.0234 (95% CL: 0.0129 0.0325 ) ## Contralateral Spillover : 0.0417 (95% CL: 0.0195 0.0655 ) ## Coefficient of variation: 41.6 % (95% CL: 31.2 63.1 ) ## deviance: 1374.215 ## AIC : 1382.215 including penalty for the spillover scale parameter ``` ```r plotCRT(analysis, map = FALSE) plotCRT(analysis, map = TRUE, fill = 'arms', showBuffer=TRUE, showClusterBoundaries = FALSE) ``` ``` ## Buffer corresponds to estimated spillover zone ``` <p> <img src="example6e.r-1.png" > <br> <em>Fig 6.6 Plot of estimated spillover function </em> </p> <p> <img src="example6e.r-2.png" > <br> <em>Fig 6.7 Arms with spillover zone shaded </em> </p> ## Conclusions In this example, a large proportion of the data points are close to the boundary between the arms. The analysis (based on simulated spillover) suggests that there are effects of spillover far beyond a 500m buffer. However this does not necessarily mean that the spillover leads to a large bias or loss in power (see [Use Case 7](Usecase7.html)).
/scratch/gouwar.j/cran-all/cranData/CRTspat/vignettes/Usecase6.Rmd
--- title: "Use Case 07: Power and sample size calculations allowing for spillover" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 07: Power and sample size calculations allowing for spillover} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- Contamination between the arms of a trial leads to downward bias in the estimates of efficacy In the case of malaria, this spillover is mainly caused by mosquito movement, and is therefore expected to be greatest near to the boundary between the trial arms. Because the effective distance over which substantial spillover occurs is generally not known, sensitivity analyses must be used to get an idea of how great these effects are likely to be. The following workflow, similar to that used for [Use Case 3](Usecase3.html), explores the likely bias and loss of power for one specific simulated setting. In this simple example, spatially homogeneous background disease rates are assigned, using `propensity <- 1` A small number of simulations is specified for testing (here 2, a much larger number, at least several thousands is used for the definitive analysis). In this example, a fixed value for the outcome in the control arm, the target ICC of the simulations, and the number of clusters in each arm of the trial. input. Efficacy is sampled from a uniform(0, 0.6) distribution, and the simulated spillover interval from a uniform(0, 1.5km) distribution. ```r library(CRTspat) # The locations only are taken from the example dataset. The cluster, arm, and outcome assignments are replaced example <- readdata("exampleCRT.txt") trial <- example$trial[ , c("x","y", "denom")] trial$propensity <- 1 nsimulations <- 2 CRT <- CRTsp(trial) library(dplyr) outcome0 <- 0.4 ICC <- 0.05 c <- 25 set.seed(7) effect <- runif(nsimulations,0,0.6) # Data frame of input spillover interval theta_inp <- runif(nsimulations, min = 0, max = 1.5) input <- data.frame(effect = effect, theta_inp = theta_inp) ``` A user function is defined for randomizing and analysing each simulated trial ```r CRTscenario7 <- function(input) { ex <- specify_clusters(CRT, c = c, algo = "kmeans") %>% randomizeCRT() %>% simulateCRT(effect = input[["effect"]], generateBaseline = FALSE, outcome0 = outcome0, ICC_inp = ICC, theta_inp = input[["theta_inp"]], matchedPair = FALSE, scale = "proportion", denominator = "denom", tol = 0.01) sd <- theta_inp/(2 * qnorm(0.975)) contaminate_pop_pr_input <- sum(abs(ex$trial$nearestDiscord) < 0.5 * input[["theta_inp"]])/nrow(ex$trial) contaminate_pop_pr_sd <- sum(abs(ex$trial$nearestDiscord) < input[["theta_inp"]]/(2 * qnorm(0.975)))/nrow(ex$trial) examplePower = CRTpower(trial = ex, desiredPower = 0.8, effect=input[["effect"]], yC=outcome0, outcome_type = 'd', ICC = ICC, c = c) nominalpower <- examplePower$geom_full$power Tanalysis <- CRTanalysis(ex, method = "T") value <- c( effect = input[["effect"]], contaminate_pop_pr_input = contaminate_pop_pr_input, contaminate_pop_pr_sd = contaminate_pop_pr_sd, theta_inp = input[["theta_inp"]], nominalpower = examplePower$geom_full$power, Pvalue_t = Tanalysis$pt_ests$p.value, effect_size_t = Tanalysis$pt_ests$effect_size) return(value) } ``` The results are collected in a data frame and post-processed to classify the outcomes according to whether they represent either Type I errors or Type II errors ```r results_matrix <- apply(input, MARGIN = 1, FUN = CRTscenario7) results <- as.data.frame(t(results_matrix)) results$significant_t <- ifelse((results$Pvalue_t < 0.05), 1, 0) 
results$typeIerror <- ifelse(results$significant_t == 1 & results$effect == 0, 1, ifelse(results$effect == 0, 0 , NA)) results$typeIIerror <- ifelse(results$significant_t == 0, ifelse(results$effect > 0, 1, NA), 0) results$bias_t <- with(results, ifelse(significant_t,effect_size_t - effect,- effect)) ``` ## Analysis by simulated true efficacy The results are grouped by ranges of efficacy, for each of which the ratio of the number of simulations giving statistically significant results to the expected number can be calculated. ```r results$effect_cat <- factor(round(results$effect*10)) by_effect <- results[results$effect > 0, ] %>% group_by(effect_cat) %>% summarise_at(c("effect","significant_t", "nominalpower", "bias_t", "typeIIerror"), mean, na.rm = TRUE) by_effect$power_ratio <- with(by_effect, significant_t/nominalpower) library(ggplot2) theme_set(theme_bw(base_size = 14)) ggplot(data = by_effect, aes(x = effect)) + geom_smooth(aes(y = power_ratio), color = "#b2df8a",se = FALSE) + geom_smooth(aes(y = typeIIerror), color = "#D55E00",se = FALSE) + geom_smooth(aes(y = bias_t), color = "#0072A7",se = FALSE) + xlab('Simulated efficacy (%)') + ylab('Performance of t-test') ``` Despite the spillover, the t-test performs similarly to expectations across the range of efficacies investigated (Figure 7.1) <p> <img src="example7d.r-1.png" > <br> <em>Fig 7.1 Performance of t-test by efficacy $\color{purple}{\textbf{----}}$ : ratio of power:nominal power; $\color{green}{\textbf{----}}$ :type II error rate; $\color{blue}{\textbf{----}}$ : bias. </em> </p> ## Analysis by simulated spillover interval An analogous analysis, to that of performance relative to efficacy, can be carried out to explore the effect of the simulated spillover interval. ```r ggplot(data = results, aes(x = theta_inp)) + geom_smooth(aes(y = contaminate_pop_pr_input), color = "#b2df8a",se = FALSE, size = 2) + geom_smooth(aes(y = contaminate_pop_pr_sd), color = "#0072A7",se = FALSE, size = 2) + xlab('Simulated spillover interval (km)') + ylab('Proportion of the locations') results$theta_cat <- factor(round(results$theta_inp*10)) by_theta <- results[results$effect > 0, ] %>% group_by(theta_cat) %>% summarise_at(c("theta_inp","significant_t", "nominalpower", "bias_t", "typeIIerror"), mean, na.rm = TRUE) by_theta$power_ratio <- with(by_theta, significant_t/nominalpower) ggplot(data = by_theta, aes(x = theta_inp)) + geom_smooth(aes(y = power_ratio), color = "#b2df8a",se = FALSE) + geom_smooth(aes(y = typeIIerror), color = "#D55E00",se = FALSE) + geom_smooth(aes(y = bias_t), color = "#0072A7",se = FALSE) + xlab('Simulated spillover interval (km)') + ylab('Performance of t-test') ``` The relationships between the simulated spillover interval and the corresponding proportion of the locations in the trial (Figure 7.2). <p> <img src="example7e.r-1.png" > <br> <em>Fig 7.2 Proportion of locations in simulated spillover interval $\color{green}{\textbf{----}}$ :zone defined by `gamma_inp`; $\color{blue}{\textbf{----}}$ : zone defined by `sd`. </em> </p> The spillover results in a loss of power, and increased negative bias in efficacy estimate, but these effects are rather small (Figure 7.3). The potential for using the `CRTanalysis()` function to model the spillover and hence correct the naive efficacy estimate (from a t-test) can also be explored (see [Use Case 5](Usecase5.html) and [Multerer *et al.* (2021b)](https://malariajournal.biomedcentral.com/articles/10.1186/s12936-021-03924-7). 
<p> <img src="example7e.r-2.png" > <br> <em>Fig 7.3 Performance of t-test by spillover interval $\color{purple}{\textbf{----}}$ : ratio of power:nominal power; $\color{green}{\textbf{----}}$ :type II error rate; $\color{blue}{\textbf{----}}$ : bias. </em> </p>
/scratch/gouwar.j/cran-all/cranData/CRTspat/vignettes/Usecase7.Rmd
--- title: "Use Case 08: Eggs - to fry or scramble?" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 08: Eggs - to fry or scramble?} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- In trials of malaria interventions, a 'fried-egg' design is often used to avoid the downward bias in the estimates of efficacy caused by spillover. This entails estimating the outcome only from the cores of the clusters. However, the intervention must also be introduced in the buffer zone, so the trial may be very expensive if there are high per capita intervention costs. Since the buffer zone is excluded from data collection, there are usually no data on whether the buffer is large enough to avoid spillover effects. A precautionary approach with large buffer zones is therefore the norm. However with 'fried-eggs' there are no data on the scale of the spillover, so if the effect was swamped by unexpectedly large spillover, this would be indistinguishable from failure of the intervention. The alternative design is to sample in the buffer zones, accepting some degree of spillover. The data analysis might then be used to estimate the scale of spillover (see [Use Case 5](Usecase5.html)). The statistical model might be to adjust the estimate of effect size for the spillover effect, or to decide which contaminated areas to exclude *post hoc* from the definitive analysis (based on pre-defined criteria). This is expected to lead to some loss of power, compared to collecting the same amount of outcome data from the core area alone (though this might be compensated for by increasing data collection), but is likely to be less complicated to organise, allowing the trial to be carried out over a much smaller area, with far fewer locations needing to be randomized (see [Use Case 4](Usecase4.html)). In this example, the effects of reducing the numbers of observations on power and bias are evaluated in simulated datasets. To compare 'fried-egg' strategies with comparable designs that sample the whole area, sets of locations are removed, either randomly from the whole area, or systematically depending on the distance from the boundary between arms. As in [Use Case 7](Usecase7.html), spatially homogeneous background disease rates are assigned, using `propensity <- 1`, fixed values are used for the outcome in the control arm, the target ICC of the simulations and the number of clusters in each arm of the trial. Efficacy is also fixed at 0.4. The spillover interval is sampled from a uniform(0, 1.5km) distribution. 
```r library(CRTspat) # use the locations only from example dataset (as with Use Case 7) example <- readdata("exampleCRT.txt") trial <- example$trial[ , c("x","y", "denom")] trial$propensity <- 1 CRT <- CRTsp(trial) library(dplyr) # specify: # prevalence in the absence of intervention; # anticipated ICC; # clusters in each arm outcome0 <- 0.4 ICC <- 0.05 k <- 25 # the number of trial simulations required (a small number, is used for testing, the plots are based on 1000) nsimulations <- 2 theta_vec <- runif(nsimulations,0,1.5) radii <- c(0, 0.1, 0.2, 0.3, 0.4, 0.5) proportions <- c(0, 0.1, 0.2, 0.3, 0.4, 0.5) scenarios <- data.frame(radius = c(radii, rep(0,6)), proportion = c(rep(0,6), proportions)) set.seed(7) ``` Two user functions are required: + randomization and trial simulation + analysis of each simulated trial with different sets of locations removed ```r analyseReducedCRT <- function(x, CRT) { trial <- CRT$trial radius <- x[["radius"]] proportion <- x[["proportion"]] cat(radius,proportion) nlocations <- nrow(trial) if (radius > 0) { radius <- radius + runif(1, -0.05, 0.05) trial$num[abs(trial$nearestDiscord) < radius] <- NA } if (proportion > 0) { # add random variation to proportion to avoid heaping proportion <- proportion + runif(1, -0.05, 0.05) trial$num <- ifelse(runif(nlocations,0,1) < proportion, NA, trial$num) } trial <- trial[!is.na(trial$num),] resX <- CRTanalysis(trial,method = "LME4", cfunc = "X") resZ <- CRTanalysis(trial,method = "LME4", cfunc = "Z") LRchisq <- resZ$pt_ests$deviance - resX$pt_ests$deviance significant <- ifelse(LRchisq > 3.84, 1, 0) result <- list(radius = radius, proportion = proportion, observations = nrow(trial), significant = significant, effect_size = resX$pt_ests$effect_size) return(result) } # randomization and trial simulation randomize_simulate <- function(theta) { ex <- specify_clusters(CRT, c = c, algo = "kmeans") %>% randomizeCRT() %>% simulateCRT(effect = 0.4, generateBaseline = FALSE, outcome0 = outcome0, ICC_inp = ICC, theta_inp = theta, matchedPair = FALSE, scale = "proportion", denominator = "denom", tol = 0.01) # The results are collected in a data frame sub_results_matrix <- apply(scenarios, MARGIN = 1, FUN = analyseReducedCRT, CRT = ex) sub_results <- as.data.frame(do.call(rbind, lapply(sub_results_matrix, as.data.frame))) sub_results$theta_inp <- theta return(sub_results) } ``` Collect all the analysis results and plot. 
Note that some analyses result in warnings (because of problems computing some of the descriptive statistics when the number of observations in some clusters is very small) ```r results <- list(simulation = numeric(0), radius = numeric(0), proportion = numeric(0), observations = numeric(0), significant = numeric(0), effect_size = numeric(0), gamma = numeric(0)) simulation <- 0 for(theta in theta_vec){ simulation <- simulation + 1 sub_results <- randomize_simulate(theta) sub_results$simulation <- simulation results <- rbind(results,sub_results) } ``` ``` ## 0 00.1 00.2 00.3 00.4 00.5 00 00 0.10 0.20 0.30 0.40 0.50 00.1 00.2 00.3 00.4 00.5 00 00 0.10 0.20 0.30 0.40 0.5 ``` ```r results$fried <- ifelse(results$radius > 0, 'fried', ifelse(results$proportion > 0, 'scrambled', 'neither')) results$bias <- results$effect_size - 0.5 library(ggplot2) theme_set(theme_bw(base_size = 12)) fig8_1a <- ggplot(data = results, aes(x = observations, y = significant, color = factor(fried))) + geom_smooth(size = 2, se = FALSE, show.legend = FALSE, method = "loess", span = 1) + scale_colour_manual(values = c("#b2df8a","#D55E00","#0072A7")) + xlab('Number of locations in analysis') + ylab('Power') fig8_1b <- ggplot(data = results, aes(x = theta_inp, y = significant, color = factor(fried))) + geom_smooth(size = 2, se = FALSE, show.legend = FALSE, method = "loess", span = 1) + scale_colour_manual(values = c("#b2df8a","#D55E00","#0072A7")) + xlab('Simulated spillover interval (km)') + ylab('Power') fig8_1c <- ggplot(data = results, aes(x = observations, y = bias, color = factor(fried))) + geom_smooth(size = 2, se = FALSE, show.legend = FALSE, method = "loess", span = 1) + scale_colour_manual(values = c("#b2df8a","#D55E00","#0072A7")) + xlab('Number of locations in analysis') + ylab('Bias') fig8_1d <- ggplot(data = results, aes(x = theta_inp, y = bias, color = factor(fried))) + geom_smooth(size = 2, se = FALSE, show.legend = FALSE, method = "loess", span = 1) + scale_colour_manual(values = c("#b2df8a","#D55E00","#0072A7")) + xlab('Simulated spillover interval (km)') + ylab('Bias') library(cowplot) plot_grid(fig8_1a, fig8_1b, fig8_1c, fig8_1d, labels = c('a', 'b', 'c', 'd'), label_size = 14, ncol = 2) ``` #### Analysis Figure 8.1. is are based on analysing 1000 simulated datasets (12,000 scenarios in all). The power is calculated as the proportion of significance tests with likelihood ratio p < 0.05. The fried-egg design suffers little loss of power with exclusion of observations until the sample size is less than about 750 (the suggestion that there is a maximum at a sample size of about 900 locations is presumably an effect of the smoothing of the results). The 'scrambled' egg trials lose power with each reduction in the number of observations (Figure 8.1a), but substantive loss of power occurs only with spillover interval of 1 km or more in this dataset. The fried egg designs largely avoid this loss of power (Figure 8.1b) but are less powerful on average when the spillover interval is less that 0.5 km. In these simulations, the lme4 estimates of the efficacy are biased slightly downwards (i.e. the bias has negative sign). The absolute bias was least when some of the locations were excluded (Figure 8.1c). There is less absolute bias with the fried-egg designs but the effect is small except when the simulated spillover interval is very large (Figure 8.1d). The difference in bias between 'fried' and 'scrambled' egg designs is modest in relation to the scale of the overall bias. 
<p> <img src="example8c.r-1.png" > <br> <em>Fig 8.1 Power and bias by number of locations in analysis $\color{purple}{\textbf{----}}$ : Randomly sampled locations removed; $\color{green}{\textbf{----}}$ : Fried-egg- locations in boundary zone removed. </em> </p>
/scratch/gouwar.j/cran-all/cranData/CRTspat/vignettes/Usecase8.Rmd
--- title: "Use Case 09: Preparation of datasets for `CRTspat`" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Use Case 09: Preparation of datasets for `CRTspat`} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- The starting point for the analyses is the co-ordinates for the spatial units that are to be randomized. These can be provided as lat-long coordinates, or as Cartesian point co-ordinates. Geolocated data from completed field studies or trials in progress can be read into `CRTspat` in the form of data frames, preferably with one record for each location in the trial. Pre-existing data from the field should be coded as follows: | Description of variable | Variable name | Type and values | |----------------------------|---------------|-------------------------------------------------| | x-coordinates of locations | x | Numeric | | x-coordinates of locations | y | Numeric | | cluster assignment | cluster | factor | | arm assignment | arm | factor with levels "Control" and "Intervention" | | buffer assignment | buffer | logical (TRUE for locations in the buffer) | Users might want to include an ID variable in the data frame if they intend to link the outputs back to other datasets. Other variables, including baseline data or trial outcomes may also be included. | Description of variable | Default Variable name | Type | |----------------------------|---------------|-------------| | Baseline numerator | base_num | Numeric | | Baseline denominator | base_denom | Numeric | | Numerator | num | Numeric | | Denominator | denom | Numeric | Simulated co-ordinate sets can also be generated *de novo* (function [`CRTsp()`](../reference/CRTsp.html)) for use in methods development and testing. To make it possible to share data files without compromising confidentiality the [`anonymize_site()`](../reference/anonymize_site.html) function is provided. This removes absolute geolocations and applies a transformation to the coordinates to conserve distances between them but modifying the orientation. The [`latlong_as_xy`](../reference/latlong_as_xy.html) function is available to convert co-ordinates provided as decimal degrees into Cartesian co-ordinates with units of km with centroid (0,0). If the input co-ordinates are provided using a different projection then they must be converted externally to the package.
/scratch/gouwar.j/cran-all/cranData/CRTspat/vignettes/Usecase9.Rmd
# To regenerate the figures change fig.keep = 'none' to fig.path=('vignettes/') # (there is a long thread about how pkgdown doesn't seem to find figures where # it says it does, hence the workaround with html code) # since the vignettes take a long time to run, they are not run with every build # as suggested here https://ropensci.org/blog/2019/12/08/precompute-vignettes/ # to run the vignettes, execute the code from the vignettes via: library(CRTspat) knitr::opts_chunk$set(error=FALSE) knitr::knit("vignettes/Usecase1.Rmd.orig", output = "vignettes/Usecase1.Rmd") knitr::knit("vignettes/Usecase2.Rmd.orig", output = "vignettes/Usecase2.Rmd") knitr::knit("vignettes/Usecase3.Rmd.orig", output = "vignettes/Usecase3.Rmd") knitr::knit("vignettes/Usecase4.Rmd.orig", output = "vignettes/Usecase4.Rmd") knitr::knit("vignettes/Usecase5.Rmd.orig", output = "vignettes/Usecase5.Rmd") knitr::knit("vignettes/Usecase6.Rmd.orig", output = "vignettes/Usecase6.Rmd") knitr::knit("vignettes/Usecase7.Rmd.orig", output = "vignettes/Usecase7.Rmd") knitr::knit("vignettes/Usecase8.Rmd.orig", output = "vignettes/Usecase8.Rmd") knitr::knit("vignettes/Usecase9.Rmd.orig", output = "vignettes/Usecase9.Rmd") #knitr::knit("vignettes/Usecase11.Rmd.orig", output = "vignettes/Usecase11.Rmd") rmarkdown.html_vignette.check_title = FALSE devtools::install(build_vignettes = TRUE) detach("package:CRTspat", unload = TRUE) # to build package website usethis::use_pkgdown() pkgdown::build_site() # To write pdf manual shell('R CMD Rd2pdf . --output=man/figures/manual.pdf --force --no-preview')
/scratch/gouwar.j/cran-all/cranData/CRTspat/vignettes/runVignette.R
rlevel=function(fit,var,oldrf,newrf,cl=NA){ if(is.null(fit$call)){ type=as.character(fit$glmnet.fit$call) }else{ type=as.character(fit$call) } w=paste(var,newrf,sep="") if(type[1]=="glm"|type[1]=="lm"){ cv=vcov(fit) fit_sum=summary(fit) c=fit_sum$coefficients c=as.data.frame(c) for(i in 1:length(oldrf)){ x=c[substr(rownames(c),1,nchar(var[i]))==var[i],] y=c[substr(rownames(c),1,nchar(var[i]))!=var[i],] if(nrow(x)==1){ y[1,1]=y[1,1]+x[1,1] x[1,1]=-x[1,1] x[1,3]=-x[1,3] rownames(x)=paste(var[i],oldrf[i],sep="") }else{ n=which(substr(rownames(x),nchar(var[i])+1,nchar(rownames(x)))==newrf[i]) rownames(x)[n]=paste(var[i],oldrf[i],sep="") y[1,1]=y[1,1]+x[n,1] x[n,1]=-x[n,1] x[n,3]=-x[n,3] for(j in 1:nrow(x)){ if(j==n){ x[j,1]=x[j,1] }else{ x[j,1]=x[j,1]+x[n,1] x[j,2]=sqrt((x[j,2])^2+(x[n,2])^2-2*cv[which(rownames(cv)==rownames(x[j,])),which(colnames(cv)==paste(var[i],newrf[i],sep=""))]) x[j,3]=x[j,1]/x[j,2] if(type[1]=="glm"){ x[j,4]=2*pnorm(-abs(x[j,3])) }else{ x[j,4]=2*pt(abs(x[j,3]),fit$df.residual,lower.tail = FALSE) } } } } if(i==1){ e=which(colnames(cv)==paste(var[i],newrf[i],sep="")) vr=cv[1,1]+cv[which(rownames(cv)==paste(var[i],newrf[i],sep="")),which(colnames(cv)==paste(var[i],newrf[i],sep=""))]+ 2*cv[which(rownames(cv)==w[i]),1] }else{ vr=vr+cv[which(rownames(cv)==paste(var[i],newrf[i],sep="")),which(colnames(cv)==paste(var[i],newrf[i],sep=""))]+ 2*sum(cv[which(rownames(cv)==w[i]),c(1,e)]) e=c(e,which(colnames(cv)==paste(var[i],newrf[i],sep=""))) } c=rbind(y,x) } c[1,2]=sqrt(vr) c[1,3]=c[1,1]/c[1,2] if(type[1]=="glm"){ c[1,4]=2*pnorm(-abs(c[1,3])) }else{ c[1,4]=2*pt(abs(c[1,3]),fit$df.residual,lower.tail = FALSE) } return(c) } if(type[1]=="glmnet"){ if(is.na(cl)){ cl=1 } if(is.null(fit$call)){ c=coef(fit$glmnet.fit)[,cl] }else{ c=coef(fit)[,cl] } c=as.data.frame(c) c$p=1 for(i in 1:length(oldrf)){ x=c[substr(rownames(c),1,nchar(var[i]))==var[i],] y=c[substr(rownames(c),1,nchar(var[i]))!=var[i],] if(nrow(x)==1){ y[1,1]=y[1,1]+x[1,1] x[1,1]=-x[1,1] rownames(x)=paste(var[i],oldrf[i],sep="") }else{ n=which(substr(rownames(x),nchar(var[i])+1,nchar(rownames(x)))==newrf[i]) rownames(x)[n]=paste(var[i],oldrf[i],sep="") y[1,1]=y[1,1]+x[n,1] x[n,1]=-x[n,1] for(j in 1:nrow(x)){ if(j==n){ x[j,1]=x[j,1] }else{ x[j,1]=x[j,1]+x[n,1] } } } c=rbind(y,x) } name=rownames(c) c=c[,1] names(c)=name return(c) } } #rlevel(fit=a,var=var,oldrf=oldrf,newrf=newrf) #var=c("c_edu","hispanic","race") #oldrf=c("<HS","Hispanic","White") #newrf=c("HS","Not Hispanic","Black")
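# Illustrative usage sketch (not part of the original source): re-express the
# coefficients of a fitted glm so that a different level of a factor acts as
# the reference, without refitting the model. The toy data and the variable
# name 'grp' below are made up for this example.
set.seed(1)
d <- data.frame(y   = rbinom(200, 1, 0.4),
                grp = factor(sample(c("A", "B", "C"), 200, replace = TRUE)),
                age = rnorm(200, 50, 10))
fit <- glm(y ~ grp + age, family = binomial, data = d)   # reference level is "A"
# coefficient table re-expressed with "B" as the reference level
rlevel(fit = fit, var = "grp", oldrf = "A", newrf = "B")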
/scratch/gouwar.j/cran-all/cranData/CRWRM/R/rlevel.R
#' Write CRediT author statement #' @description #' The function transforms the information in the template #' (from \code{template_create}) to a raw string following the CRediT authors #' statement format of "author1: contributions author2: contributions ..." #' @param cras_table A data.frame created using \code{create_template()} #' @param file The text file to be created. If not provided (default), the statement is #' returned as a string instead of written to a file. #' @param drop_authors If TRUE (default) the authors without contributions are #' removed from the statement. If FALSE, they are kept without contributions #' assigned. #' @param overwrite If TRUE, the file is overwritten. Otherwise, a error is #' triggered. #' @param markdown If TRUE (default), the authors are surrounded by ** to make #' them bold in markdown. #' @param quiet If TRUE and \code{drop_authors} is also TRUE, authors without #' contributions are silently dropped out. #' If FALSE, a warning is triggered in case any authors is dropped out. #' @return A text file with the CRediT authors statement or, if file is NULL #' (default), a character vector of length 1 with the statement that can be #' used in a Rmarkdown or quarto document using inline code: #' \code{`r cras_write(cras_table, markdown = TRUE)`} #' @examples #' # Generate a template and populate it (randomwly for this example) #' cras_table <- template_create(authors = c("Josep Maria", "Jane Doe")) #' cras_table[,2:ncol(cras_table)] <- sample(0:1, (ncol(cras_table)-1)*2, #' replace = TRUE) #' #' # Create a temporary file just for this example #' file <- tempfile() #' #' # Write to the file #' cras_write(cras_table, file, markdown = TRUE) #' #' # Check the content of the file #' readLines(file) #' @export cras_write <- function(cras_table, file, drop_authors = TRUE, overwrite = FALSE, markdown = TRUE, quiet = FALSE){ if(drop_authors) cras_table <- drop_authors(cras_table, quiet = quiet) cras <- character() for (i in seq_len(nrow(cras_table))){ if (markdown) cras <- paste0(cras,"**") cras <- paste0(cras, cras_table$Authors[[i]]) if (rowSums(cras_table[i, -1]) > 0){ cras <- paste0(cras, ":") }# else if (i < nrow(cras_table)) { # cras <- paste0(cras, " ") # } if (markdown) cras <- paste0(cras,"**") cras <- paste0(cras, " ") for (j in 2:ncol(cras_table)){ if(cras_table[i,j, drop=T] > 0) { cras <- paste0(cras, names(cras_table)[[j]], ", ") } } cras <- gsub(", $", " ", cras) } cras <- gsub(" $", "", cras) if(missing(file)){ return(cras) } if(!is.character(file)) stop("file must be a string or not provided") if(length(file) > 1) stop("file cannot be a vector of length > 1") if(file.exists(file) && isFALSE(overwrite)) stop("The file already exists") writeLines(cras, file) invisible(cras) }
/scratch/gouwar.j/cran-all/cranData/CRediTas/R/cras_write.R
#' Get default roles for CRediT #' roles_get <- function() { c( "Conceptualization", "Methodology", "Software", "Validation", "Formal Analysis", "Investigation", "Resources", "Data curation", "Writing - original draft", "Writing - review & editing", "Visualization", "Supervision", "Project administration", "Funding acquisition" ) }
/scratch/gouwar.j/cran-all/cranData/CRediTas/R/roles_get.R
#' Create a template to fill the CRediT author statement. #' @description Create a template to fill the CRediT author statement. #' (\url{https://credit.niso.org}). The template is a table where the authors #' are the rows and the columns are the roles. #' @param authors A character vector with all the authors to be included in the #' statement. #' @param file If a path is provided, the template is saved as a csv for excel #' @param roles A character vector with the roles to be included in the #' statement. If NULL, it uses all the roles defined in the CRediT author #' statement. #' @returns A dataframe with a row for each author and a column for each role, #' filled with zeros. #' @details The dataframe can be edited in R or, if file is provided, it is #' exported to a csv to be edited manually in your preferred csv editor. The #' csv is created to be compatible with Microsoft Excel, since it is the most #' popular spreadsheet software among scientists. Therefore, it is separated #' by semicolon. #' @examples #' template_create(authors = c("Josep Maria", "Jane Doe")) #' @export template_create <- function(authors, file, roles = roles_get()){ df <- data.frame(Authors = authors) mat <- matrix(0, nrow = length(authors), ncol = length(roles)) colnames(mat) <- roles df <- cbind(df, mat) if (missing(file)) return(df) write.csv2(df, file, row.names = FALSE) invisible(df) }
/scratch/gouwar.j/cran-all/cranData/CRediTas/R/template_create.R
#' Read a template from a csv file #' @description The template should be created using \code{create_template()} #' @param file A character vector with the path to the csv file #' @returns a data.frame with the content of the csv file #' @examples #' # Create a temporary file for this example #' file <- tempfile() #' #' # Create a template and save it to a csv file #' template_create(authors = c("Josep Maria", "Jane Doe"), file = file) #' #' # Read the template back (in real life once it has been populated) #' template_read(file) #' @export #' @importFrom utils write.csv2 read.csv2 template_read <- function(file){ file <- normalizePath(file) cras_table <- read.csv2(file, check.names = FALSE) if(!("Authors" %in% names(cras_table))) stop("A column named `Authors` is missing") if(nrow(cras_table) < 1) stop("The cras_table has zero rows") if(!is.character(cras_table$Authors)) warning("Authors column is not of type character") if(!all(vapply(cras_table[-1], is.numeric, FALSE))) warning("Roles are not numeric, it can lead to unexpected behaviour") return(cras_table) }
/scratch/gouwar.j/cran-all/cranData/CRediTas/R/template_read.R
# Function to check structure of the cras_table check_cras_table <- function(cras_table){ stopifnot(inherits(cras_table, "data.frame")) stopifnot(nrow(cras_table) > 0) stopifnot(ncol(cras_table) > 1) stopifnot(names(cras_table)[1] == "Authors") } # Function to drop authors without contribution drop_authors <- function(cras_table, quiet = FALSE){ check_cras_table(cras_table) drop_rows <- vapply(seq_len(nrow(cras_table)), function(i) any(cras_table[i,-1] != 0), FUN.VALUE = logical(1)) if(all(!drop_rows)) stop("No authors have contributions") if(any(drop_rows == FALSE) && !quiet){ warning("Some authors were droped because of no contributions") } cras_table[drop_rows,] }
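# Illustrative sketch (not part of the original source): behaviour of the
# internal helper drop_authors() when one author has no contributions.
# The author names and roles below are made up for this example.
tab <- template_create(authors = c("Jane Doe", "John Doe"),
                       roles   = c("Conceptualization", "Software"))
tab[1, -1] <- 1                   # only Jane Doe has contributions
drop_authors(tab)                 # drops John Doe and warns about it
drop_authors(tab, quiet = TRUE)   # same result, without the warning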
/scratch/gouwar.j/cran-all/cranData/CRediTas/R/utils.R
## ---- include = FALSE--------------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "man/figures/README-", out.width = "100%" ) ## ----create_template---------------------------------------------------------- library(CRediTas) cras_table <- template_create(authors = c("Friedrich Ratzel", "Pau Vidal de la Blache", "Élisée Reclus")) knitr::kable(cras_table) ## ----fix, eval=FALSE---------------------------------------------------------- # # fix(cras_table) # ## ----write_csv, eval=FALSE---------------------------------------------------- # # template_create(authors = c("Friedrich Ratzel", # "Pau Vidal de la Blache", # "Élisée Reclus"), # file = path_to_your_csv_file) # ## ----define_roles------------------------------------------------------------- cras_got <- template_create(authors = c("Danaerys Targaryen", "Kingslayer", "John Snow"), roles = c("Free slaves", "Kill white walkers", "Ride dragons")) # add contribution roles cras_got[-2, -1] <- 1 knitr::kable(cras_got) ## ----template_read, eval=FALSE------------------------------------------------ # # cras_table <- template_read(path_to_your_csv_file) # ## ----populate_random, echo=FALSE---------------------------------------------- cras_table[, 2:ncol(cras_table)] <- sample(0:1, size=3*14, replace = TRUE, prob = c(0.6, 0.4)) knitr::kable(cras_table) ## ----------------------------------------------------------------------------- textfile <- tempfile() cras_write(cras_table, textfile, markdown = TRUE) ## ---- eval = FALSE------------------------------------------------------------ # # cras_write(cras_got, drop = FALSE, markdown = TRUE) #
/scratch/gouwar.j/cran-all/cranData/CRediTas/inst/doc/get_started.R
--- title: "Get started" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{get_started} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "man/figures/README-", out.width = "100%" ) ``` # CRediTas The goal of CRediTas is to facilitate the tedious job of creating [CRediT authors statements](https://credit.niso.org/) for scientific publications. ## Installation You can install the development version of CRediTas from [r-universe](https://r-universe.dev) with: ``` r install.packages("CRediTas", repos = "https://ropensci.r-universe.dev") ``` ## Create a template The workflow is meant to work with three basic functions. First, we create a template table. It can be created as a `data.frame` and being populated in R. ```{r create_template} library(CRediTas) cras_table <- template_create(authors = c("Friedrich Ratzel", "Pau Vidal de la Blache", "Élisée Reclus")) knitr::kable(cras_table) ``` As you can see, the table is empty. So you must provide the information of who did what. To do this, you can use the `fix` function to edit directly in R: ```{r fix, eval=FALSE} fix(cras_table) ``` Alternatively, you can write the template as a csv file and then populate it in your preferred csv editor. ```{r write_csv, eval=FALSE} template_create(authors = c("Friedrich Ratzel", "Pau Vidal de la Blache", "Élisée Reclus"), file = path_to_your_csv_file) ``` Additionally, you can also define the roles to be included in the template. If `roles` is no specified, the roles recommended by the CRediT system are all included: ```{r define_roles} cras_got <- template_create(authors = c("Danaerys Targaryen", "Kingslayer", "John Snow"), roles = c("Free slaves", "Kill white walkers", "Ride dragons")) # add contribution roles cras_got[-2, -1] <- 1 knitr::kable(cras_got) ``` ## Read a template If you wrote the template to a file, then you can read it back to R as follows: ```{r template_read, eval=FALSE} cras_table <- template_read(path_to_your_csv_file) ``` ## Generate the CRediT author statement Once the `cras_table` is populated, for instance: ```{r populate_random, echo=FALSE} cras_table[, 2:ncol(cras_table)] <- sample(0:1, size=3*14, replace = TRUE, prob = c(0.6, 0.4)) knitr::kable(cras_table) ``` A text file can be generated following the CRediT author statement format. ```{r} textfile <- tempfile() cras_write(cras_table, textfile, markdown = TRUE) ``` If you open the text file, you will find this: `r readLines(textfile)` Moreover, if you are writing your paper in RMarkdown or quarto, you can insert the CRediT author statement directly in the text using an inline chunk `` `r cras_write(cras_table, markdown = TRUE)` ``. ### Do not drop authors without contributions In some cases, one or several authors did not contribute to any specific role. The `drop` arguments determines if they must be removed from the statement. If `drop = TRUE` (default), the authors are removed. Otherwise, they are kept without contributions as below. ```{r, eval = FALSE} cras_write(cras_got, drop = FALSE, markdown = TRUE) ``` `r cras_write(cras_got, drop = FALSE, markdown = TRUE)`
/scratch/gouwar.j/cran-all/cranData/CRediTas/inst/doc/get_started.Rmd
--- title: "Get started" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{get_started} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "man/figures/README-", out.width = "100%" ) ``` # CRediTas The goal of CRediTas is to facilitate the tedious job of creating [CRediT authors statements](https://credit.niso.org/) for scientific publications. ## Installation You can install the development version of CRediTas from [r-universe](https://r-universe.dev) with: ``` r install.packages("CRediTas", repos = "https://ropensci.r-universe.dev") ``` ## Create a template The workflow is meant to work with three basic functions. First, we create a template table. It can be created as a `data.frame` and being populated in R. ```{r create_template} library(CRediTas) cras_table <- template_create(authors = c("Friedrich Ratzel", "Pau Vidal de la Blache", "Élisée Reclus")) knitr::kable(cras_table) ``` As you can see, the table is empty. So you must provide the information of who did what. To do this, you can use the `fix` function to edit directly in R: ```{r fix, eval=FALSE} fix(cras_table) ``` Alternatively, you can write the template as a csv file and then populate it in your preferred csv editor. ```{r write_csv, eval=FALSE} template_create(authors = c("Friedrich Ratzel", "Pau Vidal de la Blache", "Élisée Reclus"), file = path_to_your_csv_file) ``` Additionally, you can also define the roles to be included in the template. If `roles` is no specified, the roles recommended by the CRediT system are all included: ```{r define_roles} cras_got <- template_create(authors = c("Danaerys Targaryen", "Kingslayer", "John Snow"), roles = c("Free slaves", "Kill white walkers", "Ride dragons")) # add contribution roles cras_got[-2, -1] <- 1 knitr::kable(cras_got) ``` ## Read a template If you wrote the template to a file, then you can read it back to R as follows: ```{r template_read, eval=FALSE} cras_table <- template_read(path_to_your_csv_file) ``` ## Generate the CRediT author statement Once the `cras_table` is populated, for instance: ```{r populate_random, echo=FALSE} cras_table[, 2:ncol(cras_table)] <- sample(0:1, size=3*14, replace = TRUE, prob = c(0.6, 0.4)) knitr::kable(cras_table) ``` A text file can be generated following the CRediT author statement format. ```{r} textfile <- tempfile() cras_write(cras_table, textfile, markdown = TRUE) ``` If you open the text file, you will find this: `r readLines(textfile)` Moreover, if you are writing your paper in RMarkdown or quarto, you can insert the CRediT author statement directly in the text using an inline chunk `` `r cras_write(cras_table, markdown = TRUE)` ``. ### Do not drop authors without contributions In some cases, one or several authors did not contribute to any specific role. The `drop` arguments determines if they must be removed from the statement. If `drop = TRUE` (default), the authors are removed. Otherwise, they are kept without contributions as below. ```{r, eval = FALSE} cras_write(cras_got, drop = FALSE, markdown = TRUE) ``` `r cras_write(cras_got, drop = FALSE, markdown = TRUE)`
/scratch/gouwar.j/cran-all/cranData/CRediTas/vignettes/get_started.Rmd
#'@title penCSC #' #'@description Function to fit penalized cause-specific-cox with elastic-net penalty. #' #'@author Shahin Roshani #' #'@param time A character showing the name of the time variable in the data. #'@param status A character showing the name of the status/event variable in the data. #'@param vars.list A named list containing the variables to be included in each cause-specific model. Variables can be vectors of variable names or a one sided formula. Names of the list must be the events and exactly the same as values in the status variable. See `Examples` for details. #'@param data A data frame containing the information of the variables. #'@param alpha.list A named list containing the single alpha values of each cause-specific model. Names of the list must be the events and exactly the same as values in the status variable. See `Examples` for details. #'@param lambda.list A named list containing the single lambda values of each cause-specific model. Names of the list must be the events and exactly the same as values in the status variable. See `Examples` for details. #'@param standardize Logical indicating whether the variables must be standardized or not. Default is \code{TRUE}. #'@param keep A character vector of the names of variables that should not be shrunk. Default is \code{NULL}. #' #'@return A named list containing all the information related to the used data and the fitted models for all causes. Use \code{$} to explore all the involved information. #' #'@examples #' #'library(riskRegression) #' #'data(Melanoma) #' #'vl <- list('1'=c('age','sex','ulcer','thick'), #' #' '2'=~age+sex+epicel+thick+ici) #' #'al <- list('1'=0,'2'=.5) #' #'ll <- list('1'=.01,'2'=.04) #' #'penCSC(time='time',status='status',vars.list=vl, #' #' data=Melanoma,alpha.list=al,lambda.list=ll) #' #'@references Friedman J, Hastie T, Tibshirani R (2010). "Regularization Paths for Generalized Linear Models via Coordinate Descent." Journal of Statistical Software, 33(1), 1-22. \doi{10.18637/jss.v033.i01}, \url{https://www.jstatsoft.org/v33/i01/}. #' #'Therneau T (2022). A Package for Survival Analysis in R. R package version 3.3-1, \url{https://CRAN.R-project.org/package=survival}. #' #'Wickham H, Averick M, Bryan J, Chang W, McGowan L, François R, et al. Welcome to the tidyverse. J Open Source Softw. 2019 Nov 21;4(43):1686. #' #'Bache S, Wickham H (2022). magrittr: A Forward-Pipe Operator for R. \url{https://magrittr.tidyverse.org}, \url{https://github.com/tidyverse/magrittr}. #' #'@import tidyverse survival riskRegression prodlim magrittr glmnet #' #'@importFrom stats predict #' #'@export penCSC <- function(time,status,vars.list,data, alpha.list,lambda.list,standardize=TRUE,keep=NULL){ name_error <- function(x,arg){ if (is.null(names(x)) | any(is.na(names(x))) | any(names(x)=='')){ stop(stringr::str_c('Please specify complete names for ',arg, ' to know which event they are related to!'),call.=F) } } name_error(vars.list,'vars.list') name_error(alpha.list,'alpha.list') name_error(lambda.list,'lambda.list') data <- tibble::as_tibble(data) %>% dplyr::mutate_if(is.character,as.factor) inds <- data[[status]] %>% unique cens.ind <- inds[-which(inds %in% names(vars.list))] y_mats <- names(vars.list) %>% as.list %>% (function(x){names(x) <- x ; return(x)}) %>% purrr::map(~survival::Surv(data[[time]],data[[status]]==.) 
%>% as.matrix) vars.list <- vars.list %>% purrr::map(function(x){ if (inherits(x,'character')) x <- stringr::str_c('~',stringr::str_c(x,collapse='+')) %>% stats::as.formula() return(x) }) X_mats <- vars.list %>% purrr::map(~model.matrix(.,data=data)[,-1,drop=F]) alpha.list <- alpha.list[names(y_mats)] lambda.list <- lambda.list[names(y_mats)] if (!is.null(keep)){ keep <- keep[names(y_mats)] %>% purrr::map_if(function(x) !purrr::is_empty(x), ~model.matrix(stringr::str_c('~',stringr::str_c(.,collapse='+')) %>% as.formula,data=data) %>% colnames %>% (function(x) x[-1])) penalty_factors <- purrr::map2(.x=keep,.y=X_mats %>% purrr::map(colnames),.f=~as.numeric(!(.y %in% .x))) } else{ penalty_factors <- X_mats %>% purrr::map(~rep(1,ncol(.))) } model_lambdas <- purrr::pmap(.l=list(aa=y_mats,bb=X_mats,cc=alpha.list,dd=penalty_factors,ee=lambda.list), .f=function(aa,bb,cc,dd,ee){ glmnet::glmnet(x=bb,y=aa,family='cox',alpha=cc,penalty.factor=dd, standardize=standardize)$lambda %>% (function(x) c(x,ee)) %>% sort(decreasing=T) %>% unique() }) fits <- purrr::pmap(.l=list(y_mats,X_mats,alpha.list,model_lambdas,penalty_factors), .f=~glmnet::glmnet(x=..2,y=..1,family='cox',alpha=..3,lambda=..4, penalty.factor=..5,standardize=standardize)) names(fits) <- stringr::str_c('Event: ',names(fits)) baseline_ndX <- vars.list %>% purrr::map(~model.matrix(.,data=data[1,])[,-1]*0) cumulative_baseline_hazards <- purrr::pmap(.l=list(fits,X_mats,y_mats,baseline_ndX,lambda.list), .f=~survival::survfit(..1,x=..2,y=survival::Surv(time=..3[,1],event=..3[,2]), newx=..4,s=..5)) %>% purrr::map((function(x){ tibble::tibble(time=x$time,cumhaz=x$cumhaz) })) baseline_hazards <- cumulative_baseline_hazards %>% purrr::map(function(x){ x$cumhaz <- diff(c(0,x$cumhaz)) names(x)[2] <- 'haz' return(x) }) coefs <- purrr::pmap(.l=list(fits,lambda.list), .f=~predict(..1,s=..2,type='coefficients')) result <- structure(list('call'=sys.call(), 'data'=list('input.data'=data,'y'=y_mats,'X'=X_mats), 'models'=fits, 'coefs'=coefs, 'predictors'=vars.list, 'surv.names'=c('time'=time,'event'=status), 'parameters'=list('alpha.list'=alpha.list,'lambda.list'=lambda.list), 'baseline_hazards'=baseline_hazards), class='penCSC') return(result) }
/scratch/gouwar.j/cran-all/cranData/CSCNet/R/penCSC.R
#'@title predict.penCSC #' #'@description Flexible prediction method for the objects of class `penCSC` including the absolute risk prediction. #' #'@author Shahin Roshani #' #'@param object An object of class `penCSC`. #'@param newX A data frame containing the information of variables related to new records. Information of variables not included in the model creation will be ignored. #'@param event A vector of event codes which we want predictions for. This must be the same as values in the status variable of the data that was used to create the models. If \code{NULL}, absolute risk will be calculated for all involved events. Default is \code{NULL} which returns values for all involved causes. #'@param time A vector of time horizons which we want absolute risk predictions at. Only applicable when \code{type='absRisk'}. #'@param type Type of the predictions. Valid values are: \code{'lp'} or \code{'link'} for linear predictors, \code{'risk'} or \code{'response'} for \code{exp(lp)} and finally \code{'absRisk'} for semi-parametric estimates of absolute risk. #'@param reference Reference for centering predictions. Valid values are \code{'zero'} and \code{'sample'}. Default is \code{'zero'}. For more information on referencing see details in \code{?predict.coxph}. #'@param ... Additional arguments. Not used by \code{predict.penCSC}. #' #'@return A tibble containing the predictions based on the input arguments. #' #'@examples #' #'library(riskRegression) #' #'data(Melanoma) #' #'vl <- list('1'=c('age','sex','ulcer','thick'), #' #' '2'=~age+sex+epicel+thick+ici) #' #'al <- list('1'=0,'2'=.5) #' #'ll <- list('1'=.01,'2'=.04) #' #'penfit <- penCSC(time='time',status='status',vars.list=vl, #' #' data=Melanoma,alpha.list=al,lambda.list=ll) #' #'predict(penfit,Melanoma[1:5,],type='lp') #' #'predict(penfit,Melanoma[1:5,],type='response') #' #'predict(penfit,Melanoma[1:5,],type='absRisk',event=1:2,time=1825*(1:2)) #' #'@references Pfeiffer, R. M., & Gail, M. M. (2017). Absolute risk: Methods and applications in clinical management and public health. #' #'Aalen, O.O. (1978) Nonparametric Inference for a Family of Counting Processes. The Annals of Statistics, 6, 701-726. \doi{10.1214/aos/1176344247}. #' #'Wickham H, Averick M, Bryan J, Chang W, McGowan L, François R, et al. Welcome to the tidyverse. J Open Source Softw. 2019 Nov 21;4(43):1686. #' #'Bache S, Wickham H (2022). magrittr: A Forward-Pipe Operator for R. \url{https://magrittr.tidyverse.org}, \url{https://github.com/tidyverse/magrittr}. #' #'Friedman J, Hastie T, Tibshirani R (2010). "Regularization Paths for Generalized Linear Models via Coordinate Descent." Journal of Statistical Software, 33(1), 1-22. \doi{10.18637/jss.v033.i01}, \url{https://www.jstatsoft.org/v33/i01/}. 
#' #'@import tidyverse survival riskRegression prodlim magrittr glmnet #' #'@export predict.penCSC <- function(object,newX,event=NULL,time,type='lp',reference='zero',...){ if (!(type %in% c('lp','risk','link','response','absRisk'))){ stop('type must be `lp`/`link`, `risk`/`response` or `absRisk`!',call.=F) } if (is.null(event)) event <- names(object$models) %>% stringr::str_remove('Event: ') stopifnot('reference must be either `sample` or `zero`!'=reference %in% c('sample','zero')) #newX <- tibble::as_tibble(newX) %>% dplyr::mutate_if(is.character,as.factor) pred_vrs <- object$predictors %>% purrr::map(~model.frame(.,data=object$data$input.data) %>% colnames) %>% purrr::reduce(c) %>% unique() fcts <- object$data$input.data %>% dplyr::select(dplyr::all_of(pred_vrs)) %>% dplyr::select_if(~!is.numeric(.)) %>% purrr::map(levels) if (!purrr::is_empty(fcts)){ for (i in seq_len(length(fcts))){ newX[[names(fcts)[i]]] <- factor(newX[[names(fcts)[i]]],levels=fcts[[names(fcts)[i]]]) } } lp_risk_pred <- function(object,newX,event,type,reference){ newX <- object$predictors[event] %>% purrr::map(~model.matrix(.,data=newX)[,-1]) if (type=='lp') type <- 'link' ; if (type=='risk') type <- 'response' if (is.null(event)) event <- names(object$models) %>% stringr::str_remove('Event: ') preds <- purrr::pmap(.l=list(object$models[stringr::str_c('Event: ',event)], newX, object$parameters$lambda.list[event]), .f=~predict(..1,newx=..2,s=..3,type='link')) %>% purrr::map(~tibble::as_tibble(.) %>% dplyr::rename('prediction'='1')) if (reference=='sample'){ means <- object$data$X[event] %>% purrr::map(function(x){ x <- as.data.frame(x) x %>% purrr::map_if(~all(unique(na.omit(.)) %in% 0:1),~0,.else=mean) %>% as.data.frame %>% unlist }) pred_modif <- purrr::map2(.x=object$coefs[stringr::str_c('Event: ',event)] %>% purrr::map(~.[,1] %>% as.vector), .y=means, .f=~sum(.x*.y,na.rm=TRUE)) preds <- purrr::map2(.x=preds, .y=pred_modif, .f=~dplyr::mutate_all(.x,function(a) a-.y)) } if (type=='response') preds <- preds %>% purrr::map(~dplyr::mutate_all(.,exp)) purrr::map2(.x=preds %>% purrr::map(~dplyr::mutate(.,id=seq_len(nrow(.)))), .y=names(preds),.f=~dplyr::mutate(.x,event=.y %>% stringr::str_remove('Event: '))) %>% purrr::reduce(rbind) %>% dplyr::relocate('prediction',.after='event') -> preds return(preds) } if (type=='absRisk'){ CSCabsRisk <- function(object,newdata,events,horizons){ stopifnot('object must be from class penCSC!'=inherits(object,'penCSC')) indivs_absRisk <- function(object,indivs,event,horizon){ dat <- object$data$input.data event_times <- dat[which(dat[[object$surv.names[['time']]]]<=horizon & dat[[object$surv.names[['event']]]]==event),] %>% (function(x) x[[object$surv.names[['time']]]]) b <- object$baseline_hazards[[stringr::str_c('Event: ',event)]] %>% dplyr::filter(time %in% event_times) %>% (function(x) x$haz) purrr::map2(.x=object$predictors, .y=rep(list(indivs),length(object$predictors)), .f=~model.matrix(.x,data=.y)[,-1]) -> ndX cumulative_hazards <- purrr::pmap(.l=list(object$models, object$data$X, object$data$y, ndX, object$parameters$lambda.list), .f=~survival::survfit(..1, x=..2, y=survival::Surv(time=..3[,1],event=..3[,2]), newx=..4, s=..5)) %>% purrr::map((function(x){ tibble::tibble(time=x$time,cumhaz=x$cumhaz) %>% dplyr::filter(time %in% event_times) })) %>% purrr::map(~.$cumhaz %>% as.matrix) risks <- lp_risk_pred(object,indivs,event,'risk',reference='zero') %>% (function(x) x$prediction) risks * (matrix(b,nrow=1) %*% (purrr::reduce(cumulative_hazards,`+`) %>% (function(x) 
apply(x,2,function(y) exp(-y))))) %>% as.vector -> absRisks return( tibble::tibble(id=seq_len(nrow(indivs)),event=event,horizon=horizon,absoluteRisk=absRisks) ) } grid <- expand.grid(event=events,horizon=horizons) %>% dplyr::mutate_if(is.factor,as.character) return(grid %>% purrr::pmap(~indivs_absRisk(object,newdata,..1,..2)) %>% purrr::reduce(rbind) %>% tibble::as_tibble()) } preds_res <- CSCabsRisk(object,newX,event,time) } else{ preds_res <- lp_risk_pred(object,newX,event,type,reference) } return(preds_res) }
/scratch/gouwar.j/cran-all/cranData/CSCNet/R/predict.penCSC.R
#'@title predictRisk.penCSC #' #'@description predictRisk method for absolute risk prediction. This is mainly for compatibility of 'CSCNet' with functions of 'riskRegression' package. #' #'@author Shahin Roshani #' #'@param object An object of class 'penCSC'. #'@param newdata A data frame containing the variable information of new records. #'@param times A vector of time horizons which we want the absolute risk predictions at. #'@param cause A single value indicating the event of interest which we want the absolute risk predictions for. This value should be one of the values in the status variable of the data. #'@param ... Additional arguments. Not used by \code{predictRisk.penCSC}. #' #'@return A matrix with columns of absolute risk predictions of individuals for each requested time horizon. #' #'@examples #' #'library(riskRegression) #' #'data(Melanoma) #' #'vl <- list('1'=c('age','sex','ulcer','thick'), #' #' '2'=~age+sex+epicel+thick+ici) #' #'al <- list('1'=0,'2'=.5) #' #'ll <- list('1'=.01,'2'=.04) #' #'penfit <- penCSC(time='time',status='status',vars.list=vl, #' #' data=Melanoma,alpha.list=al,lambda.list=ll) #' #'predictRisk(penfit,Melanoma[1:5,],times=1825*(1:2),cause=1) #' #'@references Wickham H, Averick M, Bryan J, Chang W, McGowan L, François R, et al. Welcome to the tidyverse. J Open Source Softw. 2019 Nov 21;4(43):1686. #' #'Bache S, Wickham H (2022). magrittr: A Forward-Pipe Operator for R. \url{https://magrittr.tidyverse.org}, \url{https://github.com/tidyverse/magrittr}. #' #'@seealso \url{https://www.rdocumentation.org/packages/riskRegression/versions/1.3.7/topics/predictRisk} #' #'Details in: \url{https://rdrr.io/cran/riskRegression/man/Score.html} #' #'@import tidyverse survival riskRegression prodlim magrittr glmnet #' #'@export predictRisk.penCSC <- function(object,newdata,times,cause,...){ if (length(cause)>1){ stop('`predictRisk` method only handles one cause at a time on `penCSC` objects. To get the results of more times at one time, use `predict.penCSC` method!',call.=F) } as.list(times) %>% purrr::map(~predict.penCSC(object=object,newX=newdata,event=cause,time=., type='absRisk')$absoluteRisk %>% as.matrix) %>% purrr::reduce(cbind) -> absRisks colnames(absRisks) <- times return(absRisks) }
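# Illustrative sketch (not part of the original source): because 'penCSC' objects
# have a predictRisk method, they can be passed to riskRegression::Score() to
# obtain IPCW Brier score and AUC. This reuses the Melanoma example from the
# documentation above and reports apparent (training-data) performance only.
library(riskRegression)
library(prodlim)
data(Melanoma)

vl <- list('1' = c('age', 'sex', 'ulcer', 'thick'),
           '2' = ~ age + sex + epicel + thick + ici)

penfit <- penCSC(time = 'time', status = 'status', vars.list = vl, data = Melanoma,
                 alpha.list = list('1' = 0, '2' = .5),
                 lambda.list = list('1' = .01, '2' = .04))

Score(list(penCSC = penfit),
      formula = Hist(time, status) ~ 1,
      data = Melanoma,
      cause = 1,
      times = 1825 * (1:2),
      metrics = c('auc', 'brier'))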
/scratch/gouwar.j/cran-all/cranData/CSCNet/R/predictRisk.penCSC.R
#'@title print.penCSC
#'
#'@description Internal method for printing the objects of class \code{penCSC}.
#'
#'@author Shahin Roshani
#'
#'@param x An object of class \code{penCSC}.
#'@param ... Other arguments. Not used by \code{print.penCSC}.
#'
#'@return A modified print of \code{penCSC} objects.
#'
#'@export

print.penCSC <- function(x,...){

  print(x$coefs)

}
/scratch/gouwar.j/cran-all/cranData/CSCNet/R/print.penCSC.R
#'@title print.tune_penCSC
#'
#'@description Internal method for printing the objects of class \code{tune_penCSC}.
#'
#'@author Shahin Roshani
#'
#'@param x An object of class \code{tune_penCSC}.
#'@param ... Other arguments. Not used by \code{print.tune_penCSC}.
#'
#'@return A modified print of \code{tune_penCSC} objects.
#'
#'@export

print.tune_penCSC <- function(x,...){

  print(x$final_fits)

}
/scratch/gouwar.j/cran-all/cranData/CSCNet/R/print.tune_penCSC.R
#'@title tune_penCSC #' #'@description A flexible function for tuning the involved hyper-parameters of a penalized cause-specific-cox model with elastic net penalty using the linking idea. #' #'@author Shahin Roshani #' #'@param time A character showing the name of the time variable in the data. #'@param status A character showing the name of the status/event variable in the data. #'@param vars.list A named list containing the variables to be included in each cause-specific model. Variables can be vectors of variable names or a one sided formula. Names of the list must be the events and exactly the same as values in the status variable. See `Examples` for details. #'@param data A data frame containing the information of the variables. #'@param horizons A vector of time horizons which we want the absolute risk predictions to be evaluated at. #'@param event The value for event of interest which we want the absolute risk predictions to be evaluated for. This must be one of the values in the status variable of the data. #'@param rhs A right hand sided formula indicating the variables to be used in estimating the inverse probability of censoring weighting (IPCW) model. Default is \code{~1}. #'@param method Resampling method to be used for hyper-parameter tuning. Values can be: \code{'cv'} for cross validation, \code{'repcv'} for repeated cross validation, \code{'lgocv'} for monte-carlo cross validation, \code{'loocv'} for leave one out cross validation and \code{'boot'} for bootstrap. Default is \code{'cv'}. #'@param k Number of folds. Only applicable for \code{method='cv'} and \code{method='repcv'}. Default is 10. #'@param times Repeat number of the resampling process. Only applicable for \code{method='repcv'}, \code{method='lgocv'} and \code{method='boot'}. Default is 25. #'@param p The fraction of data to be used as the training set during resampling. Only applicable for \code{method='lgocv'}. Default is 0.7. #'@param strat.var A single character indicating name of the strata variable to be used to create resamples. If numerical, groups will be specified based on percentiles. Default is \code{NULL} which considers status variable as a factor and creates the resamples based on different levels of it. #'@param metrics Evaluation metric (loss function) to be used. Values can be \code{'Brier'} for IPCW brier score, \code{'AUC'} for IPCW AUC or a vector of both. Default is \code{'Brier'}. #'@param final.metric The evaluation metric to decide the best hyper-parameters set for the final fits on the whole data. When \code{NULL} which is the default value, it takes the value from \code{metrics}. If both \code{'Brier'} and \code{'AUC'} were specified in metrics and \code{final.metric} is \code{NULL}, \code{'Brier'} will be used. #'@param alpha.grid A named list containing a sequence of alpha values to be evaluated for each cause-specific model. Names of the list must be the events and exactly the same as values in the status variable. Default is \code{NULL} which orders the function to set \code{seq(0,1,.5)} for all cause-specific models. See `Details` for more information. #'@param lambda.grid A named list containing a sequence of lambda values to be evaluated for each cause-specific model. Names of the list must be the events and exactly the same as values in the status variable. Default is \code{NULL} which orders the function to calculate exclusive lambda sequences for all causes. See `Details` for more information. 
#'@param nlambdas.list A names list of single integers indicating the length of lambda sequences which are calculated automatically by the function for each cause. Only applicable when \code{lambda.grid=NULL}. Default is NULL which sets all lengths to 5. See `Details` for more information. #'@param grow.by Difference between the values in the growing sequence of lambda values to find the maximum value that makes the null model. Only applicable when \code{lambda.grid=NULL}. Default is 0.01. See `Details` for more information. #'@param standardize Logical indicating whether the variables must be standardized or not during model fitting procedures. Default is \code{TRUE}. #'@param keep A character vector of the names of variables that should not be shrunk in all model fitting procedures. Default is \code{NULL}. #'@param preProc.fun A function that accepts a data and returns a modified version of it that has gone through the user's desired pre-processing steps. All modifications from this function will be done during the resampling procedures to avoid data leakage. It will modify all training and test set(s) during the validation unless other argument \code{preProc.fun.test} is specified by user and then it only affects the training set(s). Default is \code{function(x) x}. Also see the description of \code{preProc.fun.test} argument. #'@param preProc.fun.test A function the exact same characteristics and description as \code{preProc.fun} argument. If user specifies a separate function for \code{preProc.fun.test}, it will only affect test set(s) during validation while the function from \code{preProc.fun} will affect the training set(s). Default is \code{NULL} which means function from \code{preProc.fun} will be used on both training and test set(s) during validation. Also see the description of \code{preProc.fun} argument. #'@param parallel Logical indicating whether the tuning process should be performed in parallel or not. Default is \code{FALSE}. #'@param preProc.pkgs A character vector containing the names of packages that was used in creating user's \code{preProc.fun} while using parallel computation. Only applicable if \code{parallel=T} and \code{preProc.fun} is a user specified function using functions from other packages. See 'Examples' for details. #'@param preProc.globals A character vector containing names of objects included in \code{preProc.fun} to be considered as global objects while using parallel computation. The most frequent ones are the names of the user specified pre processing function or functions within this function. Only applicable if \code{parallel=T} and \code{preProc.fun} is a user specified function. See 'Examples' for details. #'@param core.nums Number of CPU cores to be used for parallel computation. Only applicable if \code{parallel=T}. Default is \code{future::availableCores()/2}. #' #'@return A list containing the detailed information of the hyper-parameter tuning and the validation process, best combination of hyper-parameters and the final fits based on the whole data using the best obtained hyper-parameters. Use \code{$} to explore all the involved information. #' #'@details \code{tune_penCSC} has the ability to automatically determine the candidate sequences of alpha & lambda values. Setting any of \code{alpha.grid} & \code{lambda.grid} to \code{NULL} will order the function to calculate them automatically. 
#'The process of determining the lambda values automatically is by: #'\enumerate{ #'\item Starting from lambda=0, the algorithm fits LASSO models until finding a lambda value that creates a NULL model where all variables were shrunk to be exactly zero. #'\item The obtained lambda value will be used as the maximum value of a sequence starting from 0. The length of this sequence is controlled by values in \code{nlambdas.list}. #'} #'This will be done for each cause-specific model to create exclusive sequences of lambdas for each of them. #' #'@examples \donttest{ #' #'library(riskRegression) #' #'data(Melanoma) #' #'vl <- list('1'=~age+sex+epicel+ici, #' #' '2'=c('age','ulcer','thick','invasion')) #' #'al <- list('1'=0,'2'=c(.5,1)) #' #'#External standardization function with data frame as its input and output #' #'library(recipes) #' #'std.fun <- function(data){ #' #' cont_vars <- data %>% select(where(~is.numeric(.))) %>% names #' #' cont_vars <- cont_vars[-which(cont_vars %in% c('time','status'))] #' #' #External functions from recipes package are being used #' #' recipe(~.,data=data) %>% #' #' step_center(all_of(cont_vars)) %>% #' #' step_scale(all_of(cont_vars)) %>% #' #' prep(training=data) %>% juice #' #'} #' #'set.seed(233) #' #'test <- tune_penCSC(time='time',status='status',vars.list=vl,data=Melanoma,horizons=1825, #' #' event=1,method='cv',k=5,metrics='AUC',alpha.grid=al,standardize=FALSE, #' #' preProc.fun=std.fun,parallel=TRUE,preProc.pkgs='recipes') #' #'test #' #'} #' #'@references Friedman J, Hastie T, Tibshirani R (2010). "Regularization Paths for Generalized Linear Models via Coordinate Descent." Journal of Statistical Software, 33(1), 1-22. \doi{10.18637/jss.v033.i01}, \url{https://www.jstatsoft.org/v33/i01/}. #' #'Saadati, M, Beyersmann, J, Kopp-Schneider, A, Benner, A. Prediction accuracy and variable selection for penalized cause-specific hazards models. Biometrical Journal. 2018; 60: 288– 306. \doi{10.1002/bimj.201600242}. #' #'Gerds TA, Kattan MW (2021). Medical Risk Prediction Models: With Ties to Machine Learning (1st ed.). Chapman and Hall/CRC. \doi{10.1201/9781138384484} #' #'Pfeiffer, R. M., & Gail, M. M. (2017). Absolute risk: Methods and applications in clinical management and public health. #' #'Kuhn, M. (2008). Building Predictive Models in R Using the caret Package. Journal of Statistical Software, 28(5), 1–26. \doi{10.18637/jss.v028.i05}. #' #'Bengtsson H (2021). “A Unifying Framework for Parallel and Distributed Processing in R using Futures.” The R Journal, 13(2), 208–227. \doi{10.32614/RJ-2021-048}. #' #'Vaughan D, Dancho M (2022). furrr: Apply Mapping Functions in Parallel using Futures. \url{https://github.com/DavisVaughan/furrr}, \url{https://furrr.futureverse.org/}. #' #'Therneau T (2022). A Package for Survival Analysis in R. R package version 3.3-1, \url{https://CRAN.R-project.org/package=survival}. #' #'Wickham H, Averick M, Bryan J, Chang W, McGowan L, François R, et al. Welcome to the tidyverse. J Open Source Softw. 2019 Nov 21;4(43):1686. #' #'Bache S, Wickham H (2022). magrittr: A Forward-Pipe Operator for R. \url{https://magrittr.tidyverse.org}, \url{https://github.com/tidyverse/magrittr}. 
#' #'@import tidyverse survival riskRegression prodlim magrittr glmnet furrr recipes #' #'@importFrom caret createDataPartition createFolds createMultiFolds createResample #' #'@importFrom future plan availableCores #' #'@importFrom stats predict #' #'@importFrom prodlim Hist #' #'@export tune_penCSC <- function(time,status,vars.list,data,horizons,event,rhs=~1, method='cv',k=10,times=25,p=.7,strat.var=NULL, metrics='Brier',final.metric=NULL,alpha.grid=NULL, lambda.grid=NULL,nlambdas.list=NULL,grow.by=.01,standardize=TRUE, keep=NULL,preProc.fun=function(x) x,preProc.fun.test=NULL, parallel=FALSE,preProc.pkgs=NULL,preProc.globals=NULL, core.nums=future::availableCores()/2){ if (!(method %in% c('loocv','lgocv','cv','repcv','boot'))) stop('`method` must be `loocv`, `lgocv`, `cv`, `repcv` or `boot`!',call.=F) if (!all(metrics %in% c('Brier','AUC')) | length(metrics)>2) stop('metrics must be either `Brier` or `AUC`. It can also be a vector of both.',call.=F) if (is.null(final.metric)){ if (length(metrics)>1){ final.metric <- 'Brier' } else{ final.metric <- metrics } } if (!(final.metric %in% c('Brier','AUC')) | length(final.metric)!=1) stop('final.metric should be only one of `Brier` or `AUC`.',call.=F) if (length(event)!=1) stop('Only the event of interest must be specified for the tuning process!',call.=F) if (!is.function(preProc.fun)) stop('`preProc.fun` must be a function!',call.=F) if (!purrr::is_empty(preProc.fun.test) & !is.function(preProc.fun.test)){ stop('`preProc.fun.test` must be a function!',call.=F) } #if (is.character(preProc.fun)){ # if (length(preProc.fun)>1){ # stop('Only one name of a unified pre-processing function must be given!',call.=F) # } else{ # preProc.globals <- c(preProc.globals,preProc.fun) # } #} if (purrr::is_empty(preProc.fun.test)) preProc.fun.test <- preProc.fun if (purrr::is_empty(strat.var)){ strat.vec <- data[[status]] %>% as.factor } else{ strat.vec <- data[[strat.var]] } resampler <- function(method){ if (method=='loocv'){ indices <- seq_len(nrow(data)) train_index_list <- as.list(indices) %>% purrr::map(~indices[-.]) } if (method=='lgocv') train_index_list <- caret::createDataPartition(y=strat.vec,times=times,p=p,list=T) if (method=='cv') train_index_list <- caret::createFolds(y=strat.vec,k=k,list=T,returnTrain=T) if (method=='repcv') train_index_list <- caret::createMultiFolds(y=strat.vec,k=k,times=times) if (method=='boot') train_index_list <- caret::createResample(y=strat.vec,times=times,list=T) test_index_list <- train_index_list %>% purrr::map(~which(!(seq_len(nrow(data)) %in% .))) return(list(train_index_list=train_index_list,test_index_list=test_index_list)) } codes <- unique(data[[status]]) cens.code <- codes[-which(codes %in% names(vars.list))] form <- stringr::str_c('Hist(',time,',',status,',cens.code=\'',cens.code,'\'',')', format(rhs)) %>% stats::as.formula() if (purrr::is_empty(lambda.grid)){ if (purrr::is_empty(nlambdas.list)){ nlambdas.list <- rep(list(5),length(vars.list)) names(nlambdas.list) <- names(vars.list) } nlambdas.list <- nlambdas.list[names(vars.list)] dd <- tibble::as_tibble(data) %>% dplyr::mutate_if(is.character,as.factor) %>% stats::na.omit() ymats <- names(vars.list) %>% as.list %>% (function(x){names(x) <- x ; return(x)}) %>% purrr::map(~survival::Surv(dd[[time]],dd[[status]]==.) 
%>% as.matrix()) vl <- vars.list %>% purrr::map(function(x){ if (inherits(x,'character')) x <- stringr::str_c('~',stringr::str_c(x,collapse='+')) %>% stats::as.formula() return(x) }) Xmats <- vl %>% purrr::map(~model.matrix(.,data=dd)[,-1,drop=F]) lambda_seq <- function(X,y,nlambdas){ max_lambda <- 0 range_fit <- glmnet::glmnet(x=X,y=y,family='cox',alpha=1,standardize=T) %>% (function(x) predict(x,s=max_lambda,type='coefficients')) while (any(range_fit[,1]!=0)){ max_lambda <- max_lambda + grow.by range_fit <- glmnet::glmnet(x=X,y=y,family='cox',alpha=1,standardize=T) %>% (function(x) predict(x,s=max_lambda,type='coefficients')) } return(seq(0,max_lambda,length.out=nlambdas)) } lambda.grid <- purrr::pmap(.l=list(ymats,Xmats,nlambdas.list), .f=~lambda_seq(..2,..1,..3) %>% unique) } if (purrr::is_empty(alpha.grid)){ alpha.grid <- rep(list(seq(0,1,.5)),length(vars.list)) names(alpha.grid) <- names(vars.list) } alpha.grid <- alpha.grid[names(vars.list)] lambda.grid <- lambda.grid[names(vars.list)] names(alpha.grid) <- stringr::str_c('alpha_',names(alpha.grid)) names(lambda.grid) <- stringr::str_c('lambda_',names(lambda.grid)) start <- Sys.time() grid <- c(alpha.grid,lambda.grid,horizon=list(horizons)) %>% expand.grid() nn <- length(lambda.grid) zl_indices <- apply(grid,1,function(x) which(x[(1:nn)+nn]==0) %>% as.vector) for (i in seq_len(nrow(grid))){ if (!purrr::is_empty(zl_indices[[i]])) grid[i,zl_indices[[i]]] <- 0 } grid <- dplyr::distinct(grid) calc_grid <- grid %>% dplyr::mutate(combination=seq_len(nrow(grid))) %>% (function(x) split(x,x$combination)) %>% purrr::map(~select(.,-combination) %>% (function(x){ y <- list() y$alpha.list <- dplyr::select(x,dplyr::starts_with('alpha_')) %>% dplyr::rename_all(~stringr::str_remove(.,'alpha_')) %>% as.list y$lambda.list <- dplyr::select(x,dplyr::starts_with('lambda_')) %>% dplyr::rename_all(~stringr::str_remove(.,'lambda_')) %>% as.list y$horizon <- dplyr::select(x,horizon) %>% unlist return(y) })) modeling <- function(alpha_list,lambda_list,horizon){ resamples <- resampler(method) training_list <- resamples$train_index_list %>% purrr::map(~data[.,] %>% preProc.fun) testing_list <- resamples$test_index_list %>% purrr::map(~data[.,] %>% preProc.fun.test) purrr::pmap(.l=list(aa=training_list,bb=testing_list), .f=purrr::possibly(.f=function(aa,bb){ penCSC(time=time, status=status, vars.list=vars.list, data=aa, alpha.list=alpha_list, lambda.list=lambda_list, standardize=standardize, keep=keep) -> fit riskRegression::Score(list(fit),data=bb,formula=form,metrics=metrics, cause=event,times=horizon,null.model=F) %>% (function(x) x[metrics]) %>% purrr::map(~.$score %>% (function(x) x[,3]) %>% unlist %>% as.vector) %>% tibble::as_tibble() }, otherwise=matrix(NA,1,length(metrics)) %>% (function(x){colnames(x) <- metrics ; return(tibble::as_tibble(x))}) ) ) %>% (function(x){ purrr::map2(.x=x, .y=as.list(names(x)), .f=~dplyr::mutate(.x,'step'=.y) %>% dplyr::relocate('step',.before=1)) }) %>% purrr::reduce(rbind) } if (!parallel){ calc_grid %>% purrr::map(~modeling(.$alpha.list,.$lambda.list,.$horizon)) -> lossfun_vals } else{ pkg_envs <- c('tidyverse','magrittr','survival','riskRegression','prodlim', 'Publish','glmnet','caret','furrr') %>% (function(x) c(x,preProc.pkgs)) %>% unique() globals <- c('penCSC','predict.penCSC','vars.list','preProc.fun','preProc.fun.test', 'strat.var','strat.vec','resampler','data','predictRisk.penCSC','keep', 'modeling') %>% (function(x) c(x,preProc.globals)) %>% unique() future::plan(future::multisession(),workers=core.nums) 
calc_grid %>% furrr::future_map(~modeling(.$alpha.list,.$lambda.list,.$horizon), .options=furrr::furrr_options(packages=pkg_envs,globals=globals,seed=T), .progress=T) -> lossfun_vals } stop <- Sys.time() message(stringr::str_c('\nProcess was done in ',format(stop-start),'.')) lossfun_vals %>% purrr::map(~dplyr::select(.,-step) %>% dplyr::summarize_all(mean) %>% dplyr::rename_all(~stringr::str_c('mean.',.))) %>% purrr::reduce(rbind) %>% (function(x) cbind(grid,x)) -> validation_result final_params <- validation_result %>% (function(x) split(x,x$horizon)) %>% purrr::map(function(x){ if (final.metric=='Brier'){ res <- dplyr::filter(x,x$mean.Brier==min(x$mean.Brier,na.rm=T)) } else{ res <- dplyr::filter(x,x$mean.AUC==max(x$mean.AUC,na.rm=T)) } return(res) }) final_fits <- final_params %>% purrr::map(function(x){ al <- x[stringr::str_subset(names(x),'alpha_')] %>% as.list() ll <- x[stringr::str_subset(names(x),'lambda_')] %>% as.list() names(al) <- stringr::str_remove(names(al),'alpha_') names(ll) <- stringr::str_remove(names(ll),'lambda_') penCSC(time = time, status = status, vars.list = vars.list, data = data %>% preProc.fun, alpha.list = al, lambda.list = ll, keep = keep, standardize = standardize) }) tuning_results <- list(lossfun_vals=lossfun_vals, validation_result=validation_result, final_params=final_params, final_fits=final_fits) class(tuning_results) <- 'tune_penCSC' return(tuning_results) }
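# ------------------------------------------------------------------------------
# Hedged post-tuning sketch (comments only): the returned 'tune_penCSC' object is
# a plain list, so the validation grid, the selected hyper-parameters and the
# refitted final models can be pulled out directly. `tune_res` is assumed to be
# the output of a tune_penCSC() call such as the one in the @examples above.
#
# tune_res$validation_result              #full grid with the mean loss of each combination
# tune_res$final_params                   #best (alpha, lambda) set per requested horizon
# best_fit <- tune_res$final_fits[[1]]    #'penCSC' model refitted on the whole data
# predict(best_fit,Melanoma[1:5,],type='absRisk',event=1,time=1825)
# ------------------------------------------------------------------------------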
/scratch/gouwar.j/cran-all/cranData/CSCNet/R/tune_penCSC.R
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = ""
)

## ----message=F, warning=F-----------------------------------------------------
library(CSCNet)
library(riskRegression)

data(Melanoma)

as_tibble(Melanoma)

table(Melanoma$status)

## -----------------------------------------------------------------------------
vl <- list('1'=c('age','sex','invasion','thick'),
           '2'=~age+sex+epicel+ici+thick)

penfit <- penCSC(time = 'time',
                 status = 'status',
                 vars.list = vl,
                 data = Melanoma,
                 alpha.list = list('1'=0,'2'=.5),
                 lambda.list = list('1'=.01,'2'=.02))

penfit

## -----------------------------------------------------------------------------
predict(penfit,Melanoma[1:5,],type='lp',event=1)

## -----------------------------------------------------------------------------
predict(penfit,Melanoma[1:5,],type='response')

## -----------------------------------------------------------------------------
predict(penfit,Melanoma[1:5,],type='absRisk',event=1,time=365*c(3,5))

## ----message=T, warning=F-----------------------------------------------------
#Writing a hypothetical pre-processing function

library(recipes)

std.fun <- function(data){

  cont_vars <- data %>% select(where(~is.numeric(.))) %>% names

  cont_vars <- cont_vars[-which(cont_vars %in% c('time','status'))]

  #External functions from recipes package are being used

  recipe(~.,data=data) %>%

    step_center(all_of(cont_vars)) %>%

    step_scale(all_of(cont_vars)) %>%

    prep(training=data) %>% juice

}

#Tuning a regularized cause-specific cox

set.seed(455) #for reproducibility

tune_melanoma <- tune_penCSC(time = 'time',
                             status = 'status',
                             vars.list = vl,
                             data = Melanoma,
                             horizons = 365*5,
                             event = 1,
                             method = 'cv',
                             k = 5,
                             standardize = FALSE,
                             metrics = 'AUC',
                             alpha.grid = list('1'=0,'2'=c(.5,1)),
                             preProc.fun = std.fun,
                             parallel = TRUE,
                             preProc.pkgs = 'recipes')

tune_melanoma$validation_result %>% arrange(desc(mean.AUC)) %>% head

tune_melanoma$final_params

tune_melanoma$final_fits
/scratch/gouwar.j/cran-all/cranData/CSCNet/inst/doc/CSCNet.R
--- title: "CSCNet vignette" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{CSCNet vignette} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "" ) ``` <style> body { text-align: justify} </style> CSCNet is package with flexible tools for fitting and evaluating cause-specific cox models with elastic-net penalty. Each cause is modeled in a separate penalized cox model (using elastic-net penalty) with its exclusive $\alpha$ and $\lambda$ assuming other involved competing causes as censored. ### Regularized cause-specific cox and absolute risk predictions In this package we will use ```Melanoma``` data from 'riskRegression' package (which will load up with 'CSCNet') so we start by loading the package and the ```Melanoma``` data. ```{r message=F, warning=F} library(CSCNet) library(riskRegression) data(Melanoma) as_tibble(Melanoma) table(Melanoma$status) ``` There are 2 events in the Melanoma data coded as 1 & 2. To introduce how setting up variables and hyper-parameters works in CSCNet, we will fit the a model with the following hyper-parameters to the ```Melanoma``` data: $$(\alpha_{1},\alpha_{2},\lambda_{1},\lambda_{2})=(0,0.5,0.01,0.02)$$ We set variables affecting the event: 1 as `age,sex,invasion,thick` and variables affecting event: 2 as `age,sex,epicel,ici,thick`. #### Fitting regularized cause-specific cox models In CSCNet, setting variables and hyper-parameters are done through named lists. Variables and hyper-parameters related to each involved cause are stored in list positions with the name of that position being that cause. Of course these names must be the same as values in the status variable in the data. ```{r} vl <- list('1'=c('age','sex','invasion','thick'), '2'=~age+sex+epicel+ici+thick) penfit <- penCSC(time = 'time', status = 'status', vars.list = vl, data = Melanoma, alpha.list = list('1'=0,'2'=.5), lambda.list = list('1'=.01,'2'=.02)) penfit ``` `penfit` is a comprehensive list with all information related to the data and fitted models in detail that user can access. **Note:** As we saw, variable specification in `vars.list` is possible in 2 ways which are introducing a vector of variable names or a one hand sided formula for different causes. #### Predictions and semi-parametric estimates of absolute risk Now to obtain predictions, specially estimates of the absolute risks, `predict.penCSC` method was developed so user can obtain different forms of values in the easiest way possible. By this method on objects of class `penCSCS` and for different involved causes, user can obtain values for linear predictors (`type='lp'` or `type='link'`), exponential of linear predictors (`type='risk'` or `type='response'`) and finally semi-parametric estimates of absolute risks (`type='absRisk'`) at desired time horizons. **Note:** Default value for `event` argument in `predict.penCSC` is `NULL`. If user leaves it as that, values for all involved causes will be returned. 
Values of the linear predictors for event 1 for the first five individuals of the data:

```{r}
predict(penfit,Melanoma[1:5,],type='lp',event=1)
```

Or the risk values of the same individuals for all involved causes:

```{r}
predict(penfit,Melanoma[1:5,],type='response')
```

Now let's say we want estimates of the absolute risks related to event 1, as our event of interest, at the 3 and 5 year time horizons:

```{r}
predict(penfit,Melanoma[1:5,],type='absRisk',event=1,time=365*c(3,5))
```

**Note:** There is also `predictRisk.penCSC` to obtain absolute risk predictions. This method was developed for compatibility with tools from the 'riskRegression' package.

### Tuning the hyper-parameters

The example above was for illustration purposes. In a real analysis, one must tune the hyper-parameters with respect to a proper loss function through a resampling procedure. `tune_penCSC` is a comprehensive function built for this purpose on regularized cause-specific cox models. As before, variables and hyper-parameters are specified through named lists, and the candidate sequences of hyper-parameters for each involved cause are stored in list positions whose names are that cause. `tune_penCSC` then creates all possible combinations of the specified sequences and evaluates them with either the IPCW Brier score or the IPCW AUC (as loss functions), based on the absolute risk predictions of the event of interest (linking), through a chosen resampling process. Supported resampling procedures are: cross validation (`method='cv'`), repeated cross validation (`method='repcv'`), bootstrap (`method='boot'`), Monte-Carlo or leave group out cross validation (`method='lgocv'`) and leave one out cross validation (`method='loocv'`).

#### Automatic specification of hyper-parameter sequences

`tune_penCSC` can automatically determine the candidate sequences of $\alpha$ & $\lambda$ values. Setting any of `alpha.grid` & `lambda.grid` to `NULL` orders the function to calculate them automatically. While the automatic sequence of $\alpha$ values for all causes is `seq(0,1,.5)`, the $\lambda$ values are determined automatically by:

1. Starting from $\lambda=0$, the algorithm fits LASSO models until it finds a $\lambda$ value that creates a NULL model, where all variables are shrunk to exactly 0.
2. The obtained $\lambda$ value is used as the maximum of a sequence starting from 0. The length of this sequence is controlled by the values in `nlambdas.list`.

This is done for each cause-specific model to create an exclusive sequence of $\lambda$s for each of them.

#### Pre-processing within resampling

If the data requires pre-processing steps, they must be done within the resampling process to avoid data leakage. This can be achieved through the `preProc.fun` argument of `tune_penCSC`. This argument accepts a function that takes a data set as its only input and returns a modified version of it. Any pre-processing steps can be specified within this function.

**Note:** `tune_penCSC` has a parallel processing option. If the user has specified a pre-processing function that relies on global objects or calls from other packages and wants to run the code in parallel, the names of those extra packages and global objects must be given through `preProc.pkgs` and `preProc.globals`.

Now let's see all that was mentioned in this section in an example.
Let's say we want to tune our model for 5-year absolute risk prediction of event 1, based on the time dependent (IPCW) AUC as the loss function (evaluation metric), through a 5-fold cross validation process:

```{r message=T, warning=F}
#Writing a hypothetical pre-processing function

library(recipes)

std.fun <- function(data){

  cont_vars <- data %>% select(where(~is.numeric(.))) %>% names

  cont_vars <- cont_vars[-which(cont_vars %in% c('time','status'))]

  #External functions from recipes package are being used

  recipe(~.,data=data) %>%

    step_center(all_of(cont_vars)) %>%

    step_scale(all_of(cont_vars)) %>%

    prep(training=data) %>% juice

}

#Tuning a regularized cause-specific cox

set.seed(455) #for reproducibility

tune_melanoma <- tune_penCSC(time = 'time',
                             status = 'status',
                             vars.list = vl,
                             data = Melanoma,
                             horizons = 365*5,
                             event = 1,
                             method = 'cv',
                             k = 5,
                             standardize = FALSE,
                             metrics = 'AUC',
                             alpha.grid = list('1'=0,'2'=c(.5,1)),
                             preProc.fun = std.fun,
                             parallel = TRUE,
                             preProc.pkgs = 'recipes')

tune_melanoma$validation_result %>% arrange(desc(mean.AUC)) %>% head

tune_melanoma$final_params

tune_melanoma$final_fits
```
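The final fits stored in `$final_fits` are ordinary `penCSC` objects, so they can be used with `predict.penCSC` (or, via `predictRisk.penCSC`, with tools from 'riskRegression'). The chunk below is a sketch of such a follow-up rather than part of the tuning output above; because the final fits were trained on the pre-processed data, new records are passed through `std.fun` first.

```{r eval=FALSE}
best_fit <- tune_melanoma$final_fits[[1]]

#5-year absolute risk predictions for the first five (standardized) records
predict(best_fit, std.fun(Melanoma)[1:5,], type = 'absRisk', event = 1, time = 365*5)

#In-sample IPCW AUC and Brier score of the tuned model at the 5-year horizon
Score(list(tuned = best_fit), data = std.fun(Melanoma),
      formula = prodlim::Hist(time, status) ~ 1,
      cause = 1, times = 365*5, metrics = c('AUC', 'Brier'))
```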
/scratch/gouwar.j/cran-all/cranData/CSCNet/inst/doc/CSCNet.Rmd