---
title: "2. Area of applicability of spatial prediction models"
author: "Hanna Meyer"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Area of applicability of spatial prediction models}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
editor_options:
chunk_output_type: console
---
```{r setup, echo=FALSE}
knitr::opts_chunk$set(fig.width = 8.83)
```
# Introduction
In spatial predictive mapping, models are often applied to make predictions far beyond sampling locations (i.e. field observations used to map a variable even on a global scale), where new locations might considerably differ in their environmental properties. However, areas in the predictor space without support of training data are problematic. The model has not been enabled to learn about relationships in these environments and predictions for such areas have to be considered highly uncertain.
In CAST, we implement the methodology described in [Meyer\&Pebesma (2021)](https://doi.org/10.1111/2041-210X.13650) to estimate the "area of applicability" (AOA) of (spatial) prediction models. The AOA is defined as the area where we enabled the model to learn about relationships based on the training data, and where the estimated cross-validation performance holds. To delineate the AOA, first a dissimilarity index (DI) is calculated that is based on distances to the training data in the multidimensional predictor variable space. To account for the relevance of the predictor variables responsible for prediction patterns, we weight the variables by the model-derived importance scores prior to distance calculation. The AOA is then derived by applying a threshold which is based on the DI observed in the training data using cross-validation.
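To illustrate the idea behind the DI, the following minimal sketch computes it for a small set of points. It is not the CAST implementation; the function and object names (`di_sketch`, `train`, `new`, `w`) are illustrative only, and the scaling and importance weighting follow the description above.
```{r, eval = FALSE}
# Minimal sketch of the DI (illustrative only, not the CAST implementation).
# 'train' and 'new' are data.frames of predictor values, 'w' a vector of importance weights.
di_sketch <- function(train, new, w) {
  train_s <- scale(train)                               # standardize predictors
  new_s <- scale(new, center = attr(train_s, "scaled:center"),
                 scale = attr(train_s, "scaled:scale"))
  train_w <- sweep(train_s, 2, w, "*")                  # weight by variable importance
  new_w <- sweep(new_s, 2, w, "*")
  d_bar <- mean(dist(train_w))                          # average distance within training data
  # minimum weighted distance of each new point to the training data, normalized by d_bar
  apply(new_w, 1, function(x) min(sqrt(colSums((t(train_w) - x)^2)))) / d_bar
}
```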
This tutorial shows an example of how to estimate the area of applicability of spatial prediction models.
For further information see: Meyer, H., & Pebesma, E. (2021). Predicting into unknown space? Estimating the area of applicability of spatial prediction models. Methods in Ecology and Evolution, 12, 1620–1633. https://doi.org/10.1111/2041-210X.13650
### Getting started
```{r, message = FALSE, warning=FALSE}
library(CAST)
library(caret)
library(terra)
library(sf)
library(viridis)
library(gridExtra)
```
```{r,message = FALSE,include=FALSE, warning=FALSE}
RMSE = function(a, b){
sqrt(mean((a - b)^2, na.rm = TRUE))
}
```
# Example 1: Using simulated data
## Get data
### Generate Predictors
As predictor variables, a set of bioclimatic variables is used (https://www.worldclim.org). For this tutorial, they were originally downloaded using the getData function from the raster package and then cropped to an area in central Europe. The cropped data are provided in the CAST package.
```{r, message = FALSE, warning=FALSE}
predictors <- rast(system.file("extdata","bioclim.tif",package="CAST"))
plot(predictors,col=viridis(100))
```
### Generate Response
To be able to test the reliability of the method, we're using a simulated prediction task. We therefore simulate a virtual response variable from the bioclimatic variables.
```{r,message = FALSE, warning=FALSE}
generate_random_response <- function(raster, predictornames = names(raster),
                                     seed = sample(seq(1000), 1)){
  operands_1 <- c("+", "-", "*", "/")
  operands_2 <- c("^1", "^2")
  expression <- as.character(predictornames)
  # assign a random power to each predictor
  set.seed(seed)
  expression <- paste(expression,
                      sample(operands_2, length(predictornames), replace = TRUE),
                      sep = "")
  # assign a random arithmetic operator between predictors (except after the last one)
  set.seed(seed)
  expression[-length(expression)] <- paste(expression[-length(expression)],
                                           sample(operands_1, length(predictornames) - 1,
                                                  replace = TRUE),
                                           sep = " ")
  print(paste0(expression, collapse = " "))
  # collapse into a single expression and evaluate it on the raster layers
  e <- paste0("raster$", expression, collapse = " ")
  response <- eval(parse(text = e))
  names(response) <- "response"
  return(response)
}
```
```{r,message = FALSE, warning=FALSE}
response <- generate_random_response(predictors, seed = 10)
plot(response,col=viridis(100),main="virtual response")
```
### Simulate sampling locations
To simulate a typical prediction task, field sampling locations are randomly selected.
Here, we randomly select 20 points. Note that this is a very small data set, but used here to avoid long computation times.
```{r,message = FALSE, warning=FALSE}
mask <- predictors[[1]]
values(mask)[!is.na(values(mask))] <- 1
mask <- st_as_sf(as.polygons(mask))
mask <- st_make_valid(mask)
```
```{r,message = FALSE, warning=FALSE}
set.seed(15)
samplepoints <- st_as_sf(st_sample(mask,20,"random"))
plot(response,col=viridis(100))
plot(samplepoints,col="red",add=T,pch=3)
```
## Model training
Next, a machine learning algorithm will be applied to learn the relationships between predictors and response.
### Prepare data
Therefore, predictors and response are extracted for the sampling locations.
```{r,message = FALSE, warning=FALSE}
trainDat <- extract(predictors,samplepoints,na.rm=FALSE)
trainDat$response <- extract(response,samplepoints,na.rm=FALSE, ID=FALSE)$response
trainDat <- na.omit(trainDat)
```
### Train the model
Random Forest is applied here as the machine learning algorithm (others can be used as well, as long as they provide variable importance). The model is validated by default cross-validation to estimate the prediction error.
```{r,message = FALSE, warning=FALSE}
set.seed(10)
model <- train(trainDat[,names(predictors)],
trainDat$response,
method="rf",
importance=TRUE,
trControl = trainControl(method="cv"))
print(model)
```
### Variable importance
The estimation of the AOA will require the importance of the individual predictor variables.
```{r,message = FALSE, warning=FALSE}
plot(varImp(model,scale = F),col="black")
```
### Predict and calculate error
The trained model is then used to make predictions for the entire area of interest. Since a simulated area-wide response is used, it's possible in this tutorial to compare the predictions with the true reference.
```{r,message = FALSE, warning=FALSE}
prediction <- predict(predictors,model,na.rm=T)
truediff <- abs(prediction-response)
plot(rast(list(prediction,response)),main=c("prediction","reference"))
```
## AOA Calculation
The visualization above shows the predictions made by the model. In the next step, the DI and AOA will be calculated.
The AOA calculation takes the model as input to extract the importance of the predictors, which is used as weights in the multidimensional distance calculation. Note that the AOA can also be calculated without a trained model (i.e. using training data and new data only). In this case, all predictor variables are treated as equally important (unless weights are given in the form of a table).
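As a hedged sketch of that alternative (not run here), the training data can be passed directly; the `train` and `weight` argument names are taken from the `aoa` documentation and should be checked against your installed CAST version:
```{r, eval = FALSE}
# AOA without a trained model: all predictor variables treated as equally important
AOA_unweighted <- aoa(predictors, train = trainDat[, names(predictors)])
# alternatively, a one-row table of importance values (columns named like the predictors)
# could be supplied via the 'weight' argument instead of a model
```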
```{r,message = FALSE, warning=FALSE}
AOA <- aoa(predictors, model)
class(AOA)
names(AOA)
print(AOA)
```
Plotting the `aoa` object shows the distribution of DI values within the training data and the DI of the new data.
```{r,message = FALSE, warning=FALSE}
plot(AOA)
```
The most important outputs of the `aoa` function are two raster layers: the first is the DI, which is the normalized and weighted minimum distance to the nearest training data point, divided by the average distance within the training data. The second is the AOA, which is derived from the DI by applying a threshold. The threshold is the (outlier-removed) maximum DI observed in the training data, where the DI of the training data is calculated by considering the cross-validation folds.
The threshold used and all relevant information about the training data DI are returned in the `parameters` list entry.
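The default threshold rule can be sketched as follows (illustrative only; it assumes the training DI values are available as `AOA$parameters$trainDI`, and reproduces the outlier-removed maximum, i.e. the boxplot upper-whisker rule described in Meyer & Pebesma (2021)):
```{r, eval = FALSE}
train_di <- AOA$parameters$trainDI
# outlier-removed maximum: the smaller of the maximum DI and Q3 + 1.5 * IQR
threshold_sketch <- min(max(train_di), quantile(train_di, 0.75) + 1.5 * IQR(train_di))
```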
We can plot the DI as well as the predictions only within the AOA:
```{r,message = FALSE, warning=FALSE, fig.show="hold", out.width="30%"}
plot(truediff,col=viridis(100),main="true prediction error")
plot(AOA$DI,col=viridis(100),main="DI")
plot(prediction, col=viridis(100),main="prediction for AOA")
plot(AOA$AOA,col=c("grey","transparent"),add=T,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
```
The patterns in the DI are in general agreement with the true prediction error.
Very high values are present in the Alps, as they have not been covered by training data but feature very distinct environmental conditions. Since the DI values for these areas are above the threshold, we regard this area as outside the AOA.
## AOA for spatially clustered data?
The example above had randomly distributed training samples. However, sampling locations might also be highly clustered in space. In this case, the random cross-validation is not meaningful (see e.g.
[Meyer et al. 2018](https://doi.org/10.1016/j.envsoft.2017.12.001), [Meyer et al. 2019](https://doi.org/10.1016/j.ecolmodel.2019.108815),
[Valavi et al. 2019](https://doi.org/10.1111/2041-210X.13107),
[Roberts et al. 2018](https://doi.org/10.1111/ecog.02881),
[Pohjankukka et al. 2017](https://doi.org/10.1080/13658816.2017.1346255),
[Brenning 2012](https://CRAN.R-project.org/package=sperrorest))
The threshold for the AOA is also not reliable, because it is based on the distance to the nearest data point within the training data (which is usually very small when data are clustered). Instead, cross-validation should be based on a leave-cluster-out approach, and the AOA estimation should be based on distances to the nearest data point not located in the same spatial cluster.
To show what this looks like, we use 15 spatial locations and simulate 5 data points around each location.
```{r,message = FALSE, warning=FALSE}
set.seed(25)
samplepoints <- clustered_sample(mask,75,15,radius=25000)
plot(response,col=viridis(100))
plot(samplepoints,col="red",add=T,pch=3)
```
```{r,message = FALSE, warning=FALSE}
trainDat <- extract(predictors,samplepoints,na.rm=FALSE)
trainDat$response <- extract(response,samplepoints,na.rm=FALSE)$response
trainDat <- data.frame(trainDat,samplepoints)
trainDat <- na.omit(trainDat)
```
We first train a model with (in this case) inappropriate random cross-validation.
```{r,message = FALSE, warning=FALSE}
set.seed(10)
model_random <- train(trainDat[,names(predictors)],
trainDat$response,
method="rf",
importance=TRUE,
trControl = trainControl(method="cv"))
prediction_random <- predict(predictors,model_random,na.rm=TRUE)
print(model_random)
```
...and a model based on leave-cluster-out cross-validation.
```{r,message = FALSE, warning=FALSE}
folds <- CreateSpacetimeFolds(trainDat, spacevar="parent",k=10)
set.seed(15)
model <- train(trainDat[,names(predictors)],
trainDat$response,
method="rf",
importance=TRUE,
tuneGrid = expand.grid(mtry = c(2:length(names(predictors)))),
trControl = trainControl(method="cv",index=folds$index))
print(model)
prediction <- predict(predictors,model,na.rm=TRUE)
```
The AOA is then calculated (for comparison) first using the model validated by random cross-validation, and second by taking the spatial clusters into account and calculating the threshold based on the minimum distances to the nearest training point not located in the same cluster. This is done in the `aoa` function, where the folds used for cross-validation are automatically extracted from the model.
```{r,message = FALSE, warning=FALSE}
AOA_spatial <- aoa(predictors, model)
AOA_random <- aoa(predictors, model_random)
```
```{r,message = FALSE, warning=FALSE, fig.show="hold", out.width="50%"}
plot(AOA_spatial$DI,col=viridis(100),main="DI")
plot(prediction, col=viridis(100),main="prediction for AOA \n(spatial CV error applies)")
plot(AOA_spatial$AOA,col=c("grey","transparent"),add=TRUE,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
plot(prediction_random, col=viridis(100),main="prediction for AOA \n(random CV error applies)")
plot(AOA_random$AOA,col=c("grey","transparent"),add=TRUE,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
```
Note that the AOA is much larger for the spatial CV approach. However, the spatial cross-validation error is considerably larger, hence also the area for which this error applies is larger.
The random cross-validation performance is very high; however, the area to which this performance applies is small. This fact is also apparent when plotting the `aoa` objects, which display the distributions of the DI of the training data as well as the DI of the new data. For random CV, most of the prediction DI is larger than the AOA threshold determined by the trainDI. Using spatial CV, the prediction DI is well within the DI of the training samples.
```{r, message = FALSE, warning=FALSE}
grid.arrange(plot(AOA_spatial) + ggplot2::ggtitle("Spatial CV"),
plot(AOA_random) + ggplot2::ggtitle("Random CV"), ncol = 2)
```
## Comparison of prediction error with model error
Since we used a simulated response variable, we can now compare the prediction error within the AOA with the model error, assuming that the model error applies inside the AOA but not outside.
```{r,message = FALSE, warning=FALSE}
###for the spatial CV:
RMSE(values(prediction)[values(AOA_spatial$AOA)==1],
values(response)[values(AOA_spatial$AOA)==1])
RMSE(values(prediction)[values(AOA_spatial$AOA)==0],
values(response)[values(AOA_spatial$AOA)==0])
model$results
###and for the random CV:
RMSE(values(prediction_random)[values(AOA_random$AOA)==1],
values(response)[values(AOA_random$AOA)==1])
RMSE(values(prediction_random)[values(AOA_random$AOA)==0],
values(response)[values(AOA_random$AOA)==0])
model_random$results
```
The results indicate that there is a high agreement between the model CV error (RMSE) and the true prediction RMSE. This is the case for both the random and the spatial model.
## Relationship between the DI and the performance measure
The relationship between error and DI can be used to limit predictions to an area (within the AOA) where a required performance (e.g. RMSE, R2, Kappa, Accuracy) applies.
This can be done using the result of `DItoErrormetric`, which analyzes the relationship within a moving window of DI values. The corresponding model (here the default: a shape-constrained additive model, i.e. monotonically increasing P-splines with a basis dimension of 6 for the smooth term and a second-order penalty) can be used to estimate the performance on the pixel level, which then allows limiting predictions using a threshold. Note that we use a multi-purpose CV to estimate the relationship between the DI and the RMSE here (see details in the paper).
```{r,message = FALSE, warning=FALSE}
DI_RMSE_relation <- DItoErrormetric(model, AOA_spatial$parameters, multiCV=TRUE,
window.size = 5, length.out = 5)
plot(DI_RMSE_relation)
expected_RMSE = terra::predict(AOA_spatial$DI, DI_RMSE_relation)
# account for multiCV changing the DI threshold
updated_AOA = AOA_spatial$DI > attr(DI_RMSE_relation, "AOA_threshold")
plot(expected_RMSE,col=viridis(100),main="expected RMSE")
plot(updated_AOA, col=c("grey","transparent"),add=TRUE,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
```
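As a final step (not run here), the expected error map can be used to limit predictions to areas where a required performance is met; the threshold value below is purely illustrative:
```{r, eval = FALSE}
required_RMSE <- 3   # hypothetical performance requirement (illustrative value)
prediction_masked <- ifel(expected_RMSE < required_RMSE, prediction, NA)
plot(prediction_masked, col = viridis(100), main = "prediction where expected RMSE < 3")
```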
# Example 2: A real-world example
The example above used simulated data, which allowed us to analyze the reliability of the AOA. However, a simulated area-wide response is not available in usual prediction tasks. Therefore, as a second example, the AOA is estimated for a dataset that has point observations as a reference only.
## Data and preprocessing
To do so, we will work with the cookfarm dataset, described e.g. in [Gasch et al 2015](https://www.sciencedirect.com/science/article/pii/S2211675315000251). The dataset included in CAST is a re-structured version of it. Find more details in the vignette "Introduction to CAST".
We will use soil moisture (VW) as the response variable here. Hence, we're aiming at making a spatially continuous prediction based on limited measurements from data loggers.
```{r, message = FALSE, warning=FALSE}
dat <- readRDS(system.file("extdata","Cookfarm.RDS",package="CAST"))
# calculate average of VW for each sampling site:
dat <- aggregate(dat[,c("VW","Easting","Northing")],by=list(as.character(dat$SOURCEID)),mean)
# create sf object from the data:
pts <- st_as_sf(dat,coords=c("Easting","Northing"))
##### Extract Predictors for the locations of the sampling points
studyArea <- rast(system.file("extdata","predictors_2012-03-25.tif",package="CAST"))
st_crs(pts) <- crs(studyArea)
trainDat <- extract(studyArea,pts,na.rm=FALSE)
pts$ID <- 1:nrow(pts)
trainDat <- merge(trainDat,pts,by.x="ID",by.y="ID")
# The final training dataset with potential predictors and VW:
head(trainDat)
```
## Model training and prediction
A set of variables is used as predictors for VW in a Random Forest model. The model is validated with leave-one-out cross-validation.
Note that the model performance is very low, due to the small dataset used here (and, for this small dataset, the limited ability of the predictors to model VW).
```{r, message = FALSE, warning=FALSE}
predictors <- c("DEM","NDRE.Sd","TWI","Bt")
response <- "VW"
model <- train(trainDat[,predictors],trainDat[,response],
method="rf",tuneLength=3,importance=TRUE,
trControl=trainControl(method="LOOCV"))
model
```
### Prediction
Next, the model is used to make predictions for the entire study area.
```{r, message = FALSE, warning=FALSE}
#Predictors:
plot(stretch(studyArea[[predictors]]))
#prediction:
prediction <- predict(studyArea,model,na.rm=TRUE)
```
## AOA estimation
Next we're limiting the predictions to the AOA. Predictions outside the AOA should be excluded.
```{r, message = FALSE, warning=FALSE, fig.show="hold", out.width="50%"}
AOA <- aoa(studyArea,model)
#### Plot results:
plot(AOA$DI,col=viridis(100),main="DI with sampling locations (red)")
plot(pts,zcol="ID",col="red",add=TRUE)
plot(prediction, col=viridis(100),main="prediction for AOA \n(LOOCV error applies)")
plot(AOA$AOA,col=c("grey","transparent"),add=TRUE,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
```
# Final notes
* The AOA is estimated based on training data and new data (i.e. a raster stack of the entire area of interest). The trained model is only used to obtain the variable importance needed to weight the predictor variables. The importance can alternatively be given as a table, so the approach can also be used with packages other than caret.
* Knowledge on the AOA is important when predictions are used as a baseline for decision making or subsequent environmental modelling.
* We suggest that the AOA should be provided alongside the prediction map and complementary to the communication of validation performances.
## Further reading
* Meyer, H., & Pebesma, E. (2022): Machine learning-based global maps of ecological variables and the challenge of assessing them. Nature Communications 13, 2208. https://doi.org/10.1038/s41467-022-29838-9
* Meyer, H., & Pebesma, E. (2021). Predicting into unknown space? Estimating the area of applicability of spatial prediction models. Methods in Ecology and Evolution, 12, 1620–1633. https://doi.org/10.1111/2041-210X.13650
* Tutorial (https://youtu.be/EyP04zLe9qo) and lecture (https://youtu.be/OoNH6Nl-X2s) recordings from the OpenGeoHub summer school 2020 on the area of applicability, as well as a talk at the OpenGeoHub summer school 2021: https://av.tib.eu/media/54879
|
/scratch/gouwar.j/cran-all/cranData/CAST/vignettes/cast02-AOA-tutorial.Rmd
|
---
title: '3. AOA in Parallel'
author: "Marvin Ludwig"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{AOA in parallel}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
editor_options:
chunk_output_type: console
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, message = FALSE, warning = FALSE)
```
Estimating the Area of Applicability (AOA) can be computationally intensive, depending on the amount of training data used for a model as well as the amount of new data for which the AOA has to be computed. This vignette describes how to (partly) compute the AOA in parallel. We will use the same data setup as in the vignette "Area of applicability of spatial prediction models". Please have a look there for a general introduction to the AOA and for details about the example data generation.
# Generate Example Data
```{r, message = FALSE, warning=FALSE}
library(CAST)
library(caret)
library(terra)
library(sf)
```
```{r,message = FALSE, warning=FALSE}
data("splotdata")
predictors <- rast(system.file("extdata","predictors_chile.tif",package="CAST"))
splotdata <- st_drop_geometry(splotdata)
```
```{r,message = FALSE, warning=FALSE}
set.seed(10)
model_random <- train(splotdata[,names(predictors)],
splotdata$Species_richness,
method="rf",
importance=TRUE,
ntree = 50,
trControl = trainControl(method="cv"))
prediction_random <- predict(predictors,model_random,na.rm=TRUE)
```
# Parallel AOA by dividing the new data
For better performance, it is recommended to compute the AOA in two steps. First, the DI of the training data and the resulting DI threshold are computed from the model or training data with the function `trainDI`. Computing the trainDI is normally the first step inside the `aoa` function; however, it can be skipped by providing an already computed trainDI object in the function call. This makes it possible to compute the AOA on multiple raster tiles at once (e.g. on different cores). This is especially useful for very large prediction areas, e.g. in global mapping.
```{r}
model_random_trainDI = trainDI(model_random)
print(model_random_trainDI)
```
```{r, eval = FALSE}
saveRDS(model_random_trainDI, "path/to/file")
```
If you have a large raster, you can divide it into multiple smaller tiles and afterwards apply the trainDI object to each tile.
```{r, fig.show="hold", out.width="30%"}
r1 = crop(predictors, c(-75.66667, -67, -30, -17.58333))
r2 = crop(predictors, c(-75.66667, -67, -45, -30))
r3 = crop(predictors, c(-75.66667, -67, -55.58333, -45))
plot(r1[[1]],main = "Tile 1")
plot(r2[[1]],main = "Tile 2")
plot(r3[[1]],main = "Tile 3")
```
Use the `trainDI` argument of the `aoa` function to specify that you want to use a previously computed trainDI object.
```{r, fig.show="hold", out.width="30%"}
aoa_r1 = aoa(newdata = r1, trainDI = model_random_trainDI)
plot(r1[[1]], main = "Tile 1: Predictors")
plot(aoa_r1$DI, main = "Tile 1: DI")
plot(aoa_r1$AOA, main = "Tile 1: AOA")
```
You can now run the `aoa` function in parallel on the different tiles! Of course you can use your favorite parallel backend for this task; here we use `mclapply` from the `parallel` package.
```{r, eval = FALSE}
library(parallel)
tiles_aoa = mclapply(list(r1, r2, r3), function(tile){
aoa(newdata = tile, trainDI = model_random_trainDI)
}, mc.cores = 3)
```
```{r, echo = FALSE}
tiles_aoa = lapply(list(r1, r2, r3), function(tile){
aoa(newdata = tile, trainDI = model_random_trainDI)
})
```
```{r, fig.show="hold", out.width="30%"}
plot(tiles_aoa[[1]]$AOA, main = "Tile 1")
plot(tiles_aoa[[2]]$AOA, main = "Tile 2")
plot(tiles_aoa[[3]]$AOA, main = "Tile 3")
```
For larger tasks it might be useful to save the tiles to your hard drive and load them one by one to avoid filling up your RAM.
```{r, eval = FALSE}
# Simple Example Code for raster tiles on the hard drive
tiles = list.files("path/to/tiles", full.names = TRUE)
tiles_aoa = mclapply(tiles, function(tile){
current = terra::rast(tile)
aoa(newdata = current, trainDI = model_random_trainDI)
}, mc.cores = 3)
```
|
/scratch/gouwar.j/cran-all/cranData/CAST/vignettes/cast03-AOA-parallel.Rmd
|
---
title: "4. Visualization of nearest neighbor distance distributions"
author: "Hanna Meyer"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Visualization of nearest neighbor distance distributions}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE,fig.width=6.2, fig.height=3.4)
```
## Introduction
This tutorial shows how Euclidean nearest neighbor distances in the geographic space or feature space can be calculated and visualized using CAST.
This type of visualization allows us to assess whether the training data feature a representative coverage of the prediction area and whether cross-validation (CV) folds (or independent test data) are adequately chosen to be representative for the prediction locations.
See e.g. [Meyer and Pebesma (2022)](https://doi.org/10.1038/s41467-022-29838-9) and [Milà et al. (2022)](https://doi.org/10.1111/2041-210X.13851) for further discussion on this topic.
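Conceptually, these are simple nearest neighbor distance distributions. As a minimal sketch (not the `geodist` implementation), the sample-to-sample distances for an sf point object `pts` could be computed like this; distances are returned in meters for geographic coordinates:
```{r, eval = FALSE}
# sample-to-sample: distance of each reference point to its nearest other reference point
d <- sf::st_distance(pts)                   # pairwise distance matrix (with units)
d <- matrix(as.numeric(d), nrow = nrow(d))  # drop units, keep matrix shape
diag(d) <- Inf                              # ignore self-distances
nn_sample <- apply(d, 1, min)
hist(nn_sample / 1000, main = "sample-to-sample nearest neighbor distance (km)")
```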
## Sample data
As example data, we use two different sets of global virtual reference data: one is a spatial random sample; in the second example, the reference data are clustered in geographic space (see [Meyer and Pebesma (2022)](https://doi.org/10.1038/s41467-022-29838-9) for more discussion on this).
```{r, message = FALSE, warning=FALSE}
library(CAST)
library(caret)
library(terra)
library(sf)
library(rnaturalearth)
library(ggplot2)
```
Here we can define some parameters to run the example with different settings:
```{r, message = FALSE, warning=FALSE}
seed <- 10 # random realization
samplesize <- 300 # how many samples will be used?
nparents <- 20 #For clustered samples: How many clusters?
radius <- 500000 # For clustered samples: What is the radius of a cluster?
```
### Prediction area
The prediction area is the entire global land area, i.e. we could imagine a prediction task where we aim at making global predictions based on the set of reference data.
```{r,message = FALSE, warning=FALSE}
ee <- st_crs("+proj=eqearth")
co <- ne_countries(returnclass = "sf")
co.ee <- st_transform(co, ee)
```
### Spatial random sample
Then, we simulate the random sample and visualize the data on the entire global prediction area.
```{r,message = FALSE, warning=FALSE, results='hide'}
sf_use_s2(FALSE)
set.seed(seed)
pts_random <- st_sample(co.ee, samplesize)
### See points on the map:
ggplot() + geom_sf(data = co.ee, fill="#00BFC4",col="#00BFC4") +
geom_sf(data = pts_random, color = "#F8766D",size=0.5, shape=3) +
guides(fill = "none", col = "none") +
labs(x = NULL, y = NULL)
```
### Clustered sample
As a second data set we use a clustered design of the same size.
```{r,message = FALSE, warning=FALSE, results='hide'}
set.seed(seed)
sf_use_s2(FALSE)
pts_clustered <- clustered_sample(co.ee, samplesize, nparents, radius)
ggplot() + geom_sf(data = co.ee, fill="#00BFC4",col="#00BFC4") +
geom_sf(data = pts_clustered, color = "#F8766D",size=0.5, shape=3) +
guides(fill = "none", col = "none") +
labs(x = NULL, y = NULL)
```
## Distances in geographic space
Then we can compare the distribution of spatial distances of the reference data to their nearest neighbor ("sample-to-sample") with the distribution of distances from all points of the global land surface to the nearest reference data point ("sample-to-prediction"). Note that samples of prediction locations are used to calculate the sample-to-prediction nearest neighbor distances. Since we're using a global case study here, throughout this tutorial we use sampling="Fibonacci" to draw prediction locations with constant point density on the sphere.
```{r,message = FALSE, warning=FALSE, results='hide'}
dist_random <- geodist(pts_random,co.ee,
sampling="Fibonacci")
dist_clstr <- geodist(pts_clustered,co.ee,
sampling="Fibonacci")
plot(dist_random, unit = "km")+scale_x_log10(labels=round)+ggtitle("Randomly distributed reference data")
plot(dist_clstr, unit = "km")+scale_x_log10(labels=round)+ggtitle("Clustered reference data")
```
Note that for the random data set the nearest neighbor distance distribution of the training data is nearly identical to the nearest neighbor distance distribution of the prediction area.
In comparison, the second data set has the same number of training data, but these are heavily clustered in geographic space. We therefore see that the nearest neighbor distances within the reference data are rather small. Prediction locations, however, are on average much further away.
### Accounting for cross-validation folds
#### Random Cross-validation
Let's use the clustered data set to show how the distribution of spatial nearest neighbor distances during cross-validation can be visualized as well. Therefore, we first use the "default" way of a random 10-fold cross-validation, where we randomly split the reference data into training and test folds (see Meyer et al., 2018 and 2019 for why this might not be a good idea).
```{r,message = FALSE, warning=FALSE, results='hide'}
randomfolds <- caret::createFolds(1:nrow(pts_clustered))
```
```{r,message = FALSE, warning=FALSE, results='hide',echo=FALSE}
for (i in 1:nrow(pts_clustered)){
pts_clustered$randomCV[i] <- which(unlist(lapply(randomfolds,function(x){sum(x%in%i)}))==1)
}
ggplot() + geom_sf(data = co.ee, fill="#00BFC4",col="#00BFC4") +
geom_sf(data = pts_clustered, color = rainbow(max(pts_clustered$randomCV))[pts_clustered$randomCV],size=0.5, shape=3) +
guides(fill = FALSE, col = FALSE) +
labs(x = NULL, y = NULL)+ggtitle("random fold membership shown by color")
```
```{r,message = FALSE, warning=FALSE, results='hide'}
dist_clstr <- geodist(pts_clustered,co.ee,
sampling="Fibonacci",
cvfolds= randomfolds)
plot(dist_clstr, unit = "km")+scale_x_log10(labels=round)
```
Obviously the CV folds are not representative of the prediction locations (at least not in terms of the distance to the nearest training data point), i.e. when these folds are used for the performance assessment of a model, we can expect overly optimistic estimates because we only validate predictions in close proximity to the reference data.
#### Spatial Cross-validation
This, however, should not be the case; instead, the CV performance should be representative of the prediction task. Therefore, we use a spatial CV instead. Here, we use a leave-cluster-out CV, which means that in each iteration one of the spatial clusters is held back.
```{r,message = FALSE, warning=FALSE, results='hide'}
spatialfolds <- CreateSpacetimeFolds(pts_clustered,spacevar="parent",k=length(unique(pts_clustered$parent)))
```
```{r,message = FALSE, warning=FALSE, results='hide',echo=FALSE}
ggplot() + geom_sf(data = co.ee, fill="#00BFC4",col="#00BFC4") +
geom_sf(data = pts_clustered, color = rainbow(max(pts_clustered$parent))[pts_clustered$parent],size=0.5, shape=3) +
guides(fill = FALSE, col = FALSE) +
labs(x = NULL, y = NULL)+ ggtitle("spatial fold membership by color")
```
```{r,message = FALSE, warning=FALSE, results='hide'}
dist_clstr <- geodist(pts_clustered,co.ee,
sampling="Fibonacci",
cvfolds= spatialfolds$indexOut)
plot(dist_clstr, unit = "km")+scale_x_log10(labels=round)
```
We see that this fits the nearest neighbor distance distribution of the prediction area much better. Note that `geodist` also allows inspecting independent test data instead of cross-validation folds. See `?geodist` and `?plot.geodist`.
#### Why has spatial CV sometimes been blamed for being too pessimistic?
Recently, [Wadoux et al. (2021)](https://doi.org/10.1016/j.ecolmodel.2021.109692) published a paper with the title "Spatial cross-validation is not the right way to evaluate map accuracy" where they state that "spatial cross-validation strategies resulted in a grossly pessimistic map accuracy assessment". Why do they come to this conclusion?
The reference data they used in their study were either regularly distributed, random, or only mildly clustered in geographic space, but they applied spatial CV strategies that held large spatial units back during CV. Here we can see what happens when we apply a spatial CV to randomly distributed reference data.
```{r,message = FALSE, warning=FALSE, results='hide'}
# create a spatial CV for the randomly distributed data. Here:
# "leave region-out-CV"
sf_use_s2(FALSE)
pts_random_co <- st_join(st_as_sf(pts_random),co.ee)
ggplot() + geom_sf(data = co.ee, fill="#00BFC4",col="#00BFC4") +
geom_sf(data = pts_random_co, aes(color=subregion),size=0.5, shape=3) +
scale_color_manual(values=rainbow(length(unique(pts_random_co$subregion))))+
guides(fill = FALSE, col = FALSE) +
labs(x = NULL, y = NULL)+ ggtitle("spatial fold membership by color")
```
```{r,message = FALSE, warning=FALSE, results='hide'}
spfolds_rand <- CreateSpacetimeFolds(pts_random_co,spacevar = "subregion",
k=length(unique(pts_random_co$subregion)))
dist_rand_sp <- geodist(pts_random_co,co.ee,
sampling="Fibonacci",
cvfolds= spfolds_rand$indexOut)
plot(dist_rand_sp, unit = "km")+scale_x_log10(labels=round)
```
We see that the nearest neighbor distances during cross-validation don't match the nearest neighbor distances during prediction. But compared to the section above, this time the cross-validation folds are too far away from the reference data. Naturally, we would end up with overly pessimistic performance estimates, because we make the prediction situations during cross-validation harder than what is required during model application to the entire area of interest (here: global). The spatial CV chosen here is therefore not suitable for this prediction task, because the prediction situations created during CV do not resemble what is encountered during prediction.
#### Nearest Neighbour Distance Matching CV
A good way to approximate the geographical prediction distances during the CV is to use Nearest Neighbour Distance Matching (NNDM) CV (see [Milà et al., 2022](https://doi.org/10.1111/2041-210X.13851) for more details). NNDM CV is a variation of LOO CV in which the empirical distribution function of nearest neighbour distances found during prediction is matched during the CV process.
```{r,message = FALSE, warning=FALSE, results='hide'}
nndmfolds_clstr <- nndm(pts_clustered, modeldomain=co.ee, samplesize = 2000)
dist_clstr <- geodist(pts_clustered,co.ee,
sampling = "Fibonacci",
cvfolds = nndmfolds_clstr$indx_test,
cvtrain = nndmfolds_clstr$indx_train)
plot(dist_clstr, unit = "km")+scale_x_log10(labels=round)
```
The NNDM CV-distance distribution matches the sample-to-prediction distribution very well. What happens if we use NNDM CV for the randomly-distributed sampling points instead?
```{r,message = FALSE, warning=FALSE, results='hide'}
nndmfolds_rand <- nndm(pts_random_co, modeldomain=co.ee, samplesize = 2000)
dist_rand <- geodist(pts_random_co,co.ee,
sampling = "Fibonacci",
cvfolds = nndmfolds_rand$indx_test,
cvtrain = nndmfolds_rand$indx_train)
plot(dist_rand, unit = "km")+scale_x_log10(labels=round)
```
The NNDM CV-distance still matches the sample-to-prediction distance function.
#### k-fold Nearest Neighbour Distance Matching CV
Since NNDM CV is highly time consuming, the k-fold version (kNNDM) may provide a good trade-off (see [Linnenbrink et al., 2023](https://doi.org/10.5194/egusphere-2023-1308) for more details).
```{r,message = FALSE, warning=FALSE, results='hide'}
knndmfolds_clstr <- knndm(pts_clustered, modeldomain=co.ee, samplesize = 2000)
pts_clustered$knndmCV <- as.character(knndmfolds_clstr$clusters)
ggplot() + geom_sf(data = co.ee, fill="#00BFC4",col="#00BFC4") +
geom_sf(data = pts_clustered, aes(color=knndmCV),size=0.5, shape=3) +
scale_color_manual(values=rainbow(length(unique(pts_clustered$knndmCV))))+
guides(fill = FALSE, col = FALSE) +
labs(x = NULL, y = NULL)+ ggtitle("spatial fold membership by color")
dist_clstr <- geodist(pts_clustered,co.ee,
sampling = "Fibonacci",
cvfolds = knndmfolds_clstr$indx_test,
cvtrain = knndmfolds_clstr$indx_train)
plot(dist_clstr, unit = "km")+scale_x_log10(labels=round)
```
## Distances in feature space
So far we have compared nearest neighbor distances in geographic space. We can also do so in feature space. For this, a set of bioclimatic variables (https://www.worldclim.org) is used as features (i.e. predictors) in this virtual prediction task.
```{r,message = FALSE, warning=FALSE, results='hide'}
predictors_global <- rast(system.file("extdata","bioclim_global.tif",package="CAST"))
plot(predictors_global)
```
Then we visualize nearest neighbor feature space distances under consideration of cross-validation.
```{r,message = FALSE, warning=FALSE, results='hide'}
# use random CV:
dist_clstr_rCV <- geodist(pts_clustered,predictors_global,
type = "feature",
sampling="Fibonacci",
cvfolds = randomfolds)
# use spatial CV:
dist_clstr_sCV <- geodist(pts_clustered,predictors_global,
type = "feature", sampling="Fibonacci",
cvfolds = spatialfolds$indexOut)
# Plot results:
plot(dist_clstr_rCV)+scale_x_log10()+ggtitle("Clustered reference data and random CV")
plot(dist_clstr_sCV)+scale_x_log10()+ggtitle("Clustered reference data and spatial CV")
```
With regard to the chosen predictor variables, we see that again the nearest neighbor distances of the clustered training data are rather small compared to what is required during prediction. Again, the random CV is not representative of the prediction locations, while the spatial CV is doing a better job.
### References
* Meyer, H., Pebesma, E. (2022): Machine learning-based global maps of ecological variables and the challenge of assessing them. Nature Communications 13, 2208. https://doi.org/10.1038/s41467-022-29838-9
* Milà, C., Mateu, J., Pebesma, E., Meyer, H. (2022): Nearest Neighbour Distance Matching Leave-One-Out Cross-Validation for map validation. Methods in Ecology and Evolution 00, 1– 13. https://doi.org/10.1111/2041-210X.13851.
* Linnenbrink, J., Milà, C., Ludwig, M., and Meyer, H. (2023): kNNDM: k-fold Nearest Neighbour Distance Matching Cross-Validation for map accuracy estimation, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2023-1308.
|
/scratch/gouwar.j/cran-all/cranData/CAST/vignettes/cast04-plotgeodist.Rmd
|
CATT=function(binomial,ordinal,table=NULL){
  # build the 2 x k contingency table either from the raw vectors or from a supplied table
  if(is.null(table)){
    dname="The type of data is variable!"
    tbl=table(binomial,ordinal)
  } else {
    dname="The type of data is table!"
    tbl=table
  }
  N1i=tbl[1,]   # counts of the first (binomial) level per ordinal category
  N2i=tbl[2,]   # counts of the second (binomial) level per ordinal category
  R1=sum(N1i)
  R2=sum(N2i)
  # Cochran-Armitage trend statistic and its variance, using scores 0,1,...,k-1
  Tstat=VT1=VT2=0
  for(i in 1:ncol(tbl)){
    Tstat=Tstat+(i-1)*(N1i[i]*R2-N2i[i]*R1)
    VT1=VT1+(i-1)^2*sum(tbl[,i])*(sum(tbl)-sum(tbl[,i]))
    if(i<=(ncol(tbl)-1)){VT2=VT2+(i-1)*i*sum(tbl[,i])*sum(tbl[,i+1])}
  }
  VT=R1*R2/sum(tbl)*(VT1-2*VT2)
  Z=as.numeric(Tstat/sqrt(VT))
  # two-sided p-value from the standard normal approximation
  P=pnorm(q=abs(Z),lower.tail=FALSE)
  structure(list(method="The Cochran-Armitage Trend Test",
                 statistic=c("Z"=round(Z,3)),p.value=round(P*2,4),
                 data.name=dname),class="htest")
}
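# Illustrative usage (hypothetical data, not part of the original source): two exposure
# groups scored against an ordered variable, with a clear increasing trend in group 1.
# CATT(binomial = rep(c(0, 1), times = c(30, 30)),
#      ordinal  = c(rep(1:3, times = c(15, 10, 5)), rep(1:3, times = c(5, 10, 15))))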
|
/scratch/gouwar.j/cran-all/cranData/CATT/R/CATT.R
|
#' Conditional exact Cochran-Armitage trend test
#'
#' \code{catt_exact} calculates the Cochran-Armitage trend test statistic (Cochran (1954), Armitage (1955)) and the one-sided p-value for the corresponding conditional exact test.
#' The conditional exact test has been established by Williams (1988). The computation of its p-value is performed using an algorithm following an idea by Mehta, et al. (1992).
#'
#' @param dose.ratings A vector of dose ratings, the i-th entry corresponds to the dose-rating of the i-th group. This vector must be strictly monotonically increasing
#' @param totals The vector of total individuals per group, the i-th entry corresponds to the total number of individuals in the i-th group.
#' @param cases The vector of incidences per groups, the i-th entry corresponds to the number of incidences in the i-th group.
#' @return A list containing the value of the Cochran-Armitage Trend Test Statistic, its exact and asymptotic p-value.
#' @references Armitage, P. Tests for linear trends in proportions and frequencies. \emph{Biometrics}, 11 (1955): 375-386.
#' @references Cochran, W. G. Some methods for strengthening the common \eqn{\chi^2} tests, \emph{Biometrics}. 10 (1954): 417-451.
#' @references Mehta, C. R., Nitin P., and Pralay S. Exact stratified linear rank tests for ordered categorical and binary data. \emph{Journal of Computational and Graphical Statistics}, 1 (1992): 21-40.
#' @references Portier, C., and Hoel D. Type 1 error of trend tests in proportions and the design of cancer screens. \emph{Communications in Statistics-Theory and Methods}, 13 (1984): 1-14.
#' @references Williams, D. A. Tests for differences between several small proportions. \emph{Applied Statistics}, 37 (1988): 421-434.
#' @examples
#' d <- c(1,2,3,4)
#' n <- rep(20,4)
#' r <- c(1,4,3,8)
#'
#' catt_exact(d, n, r)
#'
#' @export
catt_exact <- function(dose.ratings,totals,cases) {
le.d <- length(dose.ratings)
le.n <- length(totals)
le.r <- length(cases)
if (le.d != le.n | le.d != le.r) {
stop("Length of input is differing!")
}
le <- le.d
if (le < 3) {stop("Need at least three groups")}
## Extract Input, calculate total number of cases and individuals
nk <- totals
nhat <- sum(nk)
rk <- cases
rhat <- sum(rk)
dk <- dose.ratings
# dk <- dk/dk[2]
#
# rest <- dk - floor(dk)
# rest <- rest[rest>0]
#
# mult <- min(1/(prod(rest)), 1)
#
# dk <- round(dk * mult)
## Input checks
if (min(as.numeric(round(c(nk,rk)) == c(nk,rk))) == 0) {
stop("The number of totals and cases must be integer")
}
if (min(as.numeric(nk > 0)) == 0) {
stop("There must be at least one individual in every dose group")
}
if (min(as.numeric(rk >= 0)) == 0) {
stop("The number of cases in each group must be nonnegative")
}
if (min(as.numeric(nk >= rk)) == 0) {
stop("The number of cases can not exceed the size of the group")
}
if (max(as.numeric(rk > 0)) == 0) {
stop("This test can not be applied, when there is no case")
}
if (min(as.numeric(nk == rk)) == 1) {
stop("This test can not be applied, when the number of cases equals the total number of individuals")
}
check.dosemonvec <- rep(1, le - 1)
for (i in 1:(le - 1))
{check.dosemonvec[i] <- as.numeric(dk[i + 1] > dk[i])}
check.dosemon <- min(check.dosemonvec)
if (check.dosemon == 0) {
stop("Doses must be strictly monotonically increasing")
}
factor <- sqrt(nhat / ( (nhat-rhat) * rhat))
enum <- sum( (rk- (nk / nhat) * rhat) * dk)
denom <- sqrt(sum( (nk / nhat) * dk ^ 2)-sum( (nk / nhat) * dk) ^ 2)
test_statistic <- -factor * enum / denom
pval_exact <- .pval_exact(dk, nk, rk)
pval_asy <- .aspvalue(test_statistic)
return(list("test.statistic" = test_statistic, "exact.pvalue" = pval_exact, "asymptotic.pvalue" = pval_asy))
}
#' Asymptotic Cochran-Armitage trend test
#'
#' \code{catt_asy} calculates the Cochran-Armitage trend test statistic (Cochran (1954), Armitage (1955)) and the one-sided p-value for the corresponding asymptotic test.
#' The exact form of used test statistic can be found in the paper by Portier and Hoel (1984).
#'
#' @param dose.ratings A vector of dose ratings, the i-th entry corresponds to the dose-rating of the i-th group. This vector must be strictly monotonically increasing
#' @param totals The vector of total individuals per group, the i-th entry corresponds to the total number of individuals in the i-th group
#' @param cases The vector of incidences per groups, the i-th entry corresponds to the number of incidences in the i-th group
#' @return A list containing the value of the Cochran-Armitage Trend Test Statistic and its asymptotic p-value.
#' @references Armitage, P. Tests for linear trends in proportions and frequencies. \emph{Biometrics}, 11 (1955): 375-386.
#' @references Cochran, W. G. Some methods for strengthening the common \eqn{\chi^2} tests, \emph{Biometrics}. 10 (1954): 417-451.
#' @references Portier, C., and Hoel D. Type 1 error of trend tests in proportions and the design of cancer screens. \emph{Communications in Statistics-Theory and Methods}, 13 (1984): 1-14.
#' @examples
#' d <- c(1,2,3,4)
#' n <- rep(20,4)
#' r <- c(1,4,3,8)
#'
#' catt_asy(d, n, r)
#'
#' @export
catt_asy <- function(dose.ratings, totals, cases) {
le.d <- length(dose.ratings)
le.n <- length(totals)
le.r <- length(cases)
if (le.d != le.n | le.d != le.r) {
stop("Length of input is differing!")
}
le <- le.d
if (le < 3) {stop("Need at least three groups")}
## Extract Input, calculate total number of cases and individuals
nk <- totals
nhat <- sum(nk)
rk <- cases
rhat <- sum(rk)
dk <- dose.ratings
## Input checks
if (min(as.numeric(round(c(nk,rk)) == c(nk,rk))) == 0) {
stop("The number of totals and cases must be integer")
}
if (min(as.numeric(nk > 0)) == 0) {
stop("There must be at least one individual in every dose group")
}
if (min(as.numeric(rk >= 0)) == 0) {
stop("The number of cases in each group must be nonnegative")
}
if (min(as.numeric(nk >= rk)) == 0) {
stop("The number of cases can not exceed the size of the group")
}
if (max(as.numeric(rk > 0)) == 0) {
stop("This test can not be applied, when there is no case")
}
if (min(as.numeric(nk == rk)) == 1) {
stop("This test can not be applied, when the number of cases equals the total number of individuals")
}
check.dosemonvec <- rep(1, le - 1)
for (i in 1:(le - 1))
{check.dosemonvec[i] <- as.numeric(dk[i + 1] > dk[i])}
check.dosemon <- min(check.dosemonvec)
if (check.dosemon == 0) {
stop("Doses must be strictly monotonically increasing")
}
factor <- sqrt(nhat / ( (nhat-rhat) * rhat))
enum <- sum( (rk- (nk / nhat) * rhat) * dk)
denom <- sqrt(sum( (nk / nhat) * dk ^ 2)-sum( (nk / nhat) * dk) ^ 2)
test_statistic <- -factor * enum / denom
pval_asy <- .aspvalue(test_statistic)
return(list("test.statistic" = test_statistic, "asymptotic.pvalue" = pval_asy))
}
.pval_exact <- function(dk, nk, rk) {
dk <- dk/dk[2]
rest <- dk - floor(dk)
rest <- rest[rest>0]
mult <- min(1/(prod(rest)), 10 ^ 12)
dk <- round(dk * mult)
le <- length(dk)
nodes <- vector("list", le)
nhat <- sum(nk)
rhat <- sum(rk)
nodes[1] <- 0
a0 <- sum(rk * dk)
# Nodes are created
for (i in 1:(le - 1)) {
lowerbound <- max(0, rhat - sum(nk[ (i + 1):le]))
upperbound <- min(rhat, sum(nk[1:i]))
nodes[[i + 1]] <- lowerbound:upperbound
}
nodes[[le + 1]] <- rhat
arcs <- vector("list", le)
# Arcs are created
for (i in 1:le) {
for (j in nodes[[i]]) {
for (k in max(j, min(nodes[[i + 1]])):min(max(nodes[[i + 1]]), j + nk[i])) {
arcs[[i]] <- c(arcs[[i]], j, k, dk[i] * (k - j), choose(nk[i], k - j))
}
}
arcs[[i]] <- matrix(arcs[[i]], ncol = 4, byrow = TRUE)
}
# Zeros are added in the nodes
nodes[[le + 1]] <- matrix(c(rhat, 0), ncol = 2)
for (i in 1:le) {
nodes[[i]] <- matrix(c(nodes[[i]], rep(0, length(nodes[[i]]))), ncol = 2)
}
# Backwards processing for calculating longest paths
for (i in le:1) {
for (j in nodes[[i]][,1]) {
# Choose concurring arcs
arckonkur <- matrix(arcs[[i]][ (which(arcs[[i]][ ,1] == j)),], ncol = 4)
# Arcs get "consecutive" longest paths
for (k in 1:length(arckonkur[ ,1])) {
arckonkur[k,4] <- nodes[[i + 1]][which(nodes[[i+1]][ ,1]==arckonkur[k,2]),2]
}
# LP is calculated
nodes[[i]][which(nodes[[i]][ ,1] == j),2] <- max(arckonkur[ ,4]+arckonkur[ ,3])
}
}
# Two lists to express the tuples of lambdas over the nodes
nodes.u <- vector("list",le + 1)
for (i in 1:(le + 1)){
nodes.u[[i]] <- vector("list",length(nodes[[i]][ ,1]))
}
nodes.u[[1]][[1]] <- 0
nodes.cu <- vector("list",le + 1)
for (i in 1:(le + 1)) {
nodes.cu[[i]] <- vector("list",length(nodes[[i]][ ,1]))
}
nodes.cu[[1]][[1]] <- 0
# Prespecify sets for first nodes
nodes.u[[2]][1:length(nodes[[2]][ ,1])] <- arcs[[1]][ ,3]
nodes.cu[[2]][1:length(nodes[[2]][ ,1])] <- arcs[[1]][ ,4]
nodes.with.paths <- nodes[[2]][ ,1]
for (i in 2:(le)) {
nodes.with.paths.new <- numeric(0)
for (j in nodes.with.paths) {
# All successor of a node are evaluated
succ <- matrix(arcs[[i]][(which(arcs[[i]][ ,1]==j)),], ncol = 4)
# u and c(u) are copied to the following nodes
u.candidates <- matrix(c(succ,rep(nodes.u[[i]][[which(nodes[[i]][ ,1] == j)]], rep(length(succ) / 4, length(nodes.u[[i]][[which(nodes[[i]][ ,1] == j)]])))), nrow = length(succ) / 4)
cu.candidates <- matrix(c(succ,rep(nodes.cu[[i]][[which(nodes[[i]][ ,1] == j)]], rep(length(succ) / 4, length(nodes.cu[[i]][[which(nodes[[i]][ ,1] == j)]])))), nrow = length(succ) / 4)
# u and c(u) are transformed
u.candidates[ ,5:ncol(u.candidates)] <- u.candidates[ ,5:ncol(u.candidates)] + succ[ ,3]
cu.candidates[ ,5:ncol(u.candidates)] <- cu.candidates[ ,5:ncol(u.candidates)] * succ[ ,4]
for (k in 1:(length(succ) / 4)) {
candidate <- u.candidates[k,2]
LP <- nodes[[i + 1]][which(nodes[[i + 1]][ ,1] == candidate),2]
u.liste <- u.candidates[k,5:ncol(u.candidates)]
cu.liste <- cu.candidates[k,5:ncol(u.candidates)]
# keep only partial sums that can still reach the observed statistic a0 given the
# longest remaining path LP; filter u and c(u) with the same index so they stay aligned
keep <- (u.liste >= (a0 - LP - 1E-8))
u.liste <- u.liste[keep]
cu.liste <- cu.liste[keep]
if (length(u.liste) > 0) {nodes.with.paths.new <-union(nodes.with.paths.new, candidate)}
existing.u <- intersect(u.liste,nodes.u[[i + 1]][[which(nodes[[i + 1]][ ,1] == candidate)]])
new.u <- setdiff(u.liste,nodes.u[[i + 1]][[which(nodes[[i + 1]][ ,1]==candidate)]])
new.cu <-cu.liste[which(is.element(u.liste,new.u))]
for (l in existing.u) {
index <- which(nodes.u[[i + 1]][[which(nodes[[i + 1]][,1] == candidate)]] == l) # index of existing l in nodes.u
index2 <- which(u.liste == l) # index of existing l in u.liste
nodes.cu[[i + 1]][[which(nodes[[i + 1]][ ,1] == candidate)]][index] <- nodes.cu[[i + 1]][[which(nodes[[i + 1]][ ,1] == candidate)]][index]+cu.liste[index2]
}
nodes.u[[i + 1]][[which(nodes[[i + 1]][ ,1] == candidate)]] <- c(nodes.u[[i + 1]][[which(nodes[[i + 1]][ ,1] == candidate)]],new.u)
nodes.cu[[i + 1]][[which(nodes[[i + 1]][ ,1] == candidate)]] <- c(nodes.cu[[i + 1]][[which(nodes[[i + 1]][ ,1] == candidate)]],new.cu)
}
}
nodes.with.paths <- nodes.with.paths.new
}
pval <- sum(nodes.cu[[le + 1]][[1]]) / choose(nhat, rhat)
return(pval)
}
#' @importFrom stats pnorm
.aspvalue <- function(statistic) {
pval <- pnorm(statistic)
return(pval)
}
|
/scratch/gouwar.j/cran-all/cranData/CATTexact/R/CATTexact.R
|
#' Average Rule chart
#'
#' This function helps locate the number of dimensions that are
#' important for CA interpretation, according to the so-called 'average rule'. The
#' reference line showing up in the returned bar chart indicates the threshold
#' for an optimal dimensionality of the solution according to the average rule.
#'
#' @param data Name of the dataset (must be in dataframe format).
#'
#' @keywords aver.rule
#'
#' @export
#'
#' @importFrom graphics abline axis barplot hist legend par plot points rug symbols text title
#' @importFrom stats aggregate cutree dist hclust median pchisq quantile r2dtable rect.hclust
#' @importFrom utils setTxtProgressBar txtProgressBar
#' @importFrom ca ca
#' @importFrom FactoMineR CA HCPC plot.CA
#' @importFrom RcmdrMisc assignCluster
#' @importFrom Hmisc dotchart2
#' @importFrom classInt jenks.tests
#' @importFrom reshape2 melt
#' @import ggplot2
#' @import ggrepel
#' @import cluster
#'
#' @examples
#' data(greenacre_data)
#' aver.rule(greenacre_data)
#'
aver.rule <- function (data){
mydataasmatrix<-as.matrix(data)
dataframe.after.ca<- summary(ca(data))
nrows <- nrow(data)
ncols <- ncol(data)
c.dim<-round(100/(ncols-1), digits=1)
r.dim<-round(100/(nrows-1), digits=1)
thresh.sig.dim<-(max(c.dim, r.dim))
n.dim.average.rule <- length(which(dataframe.after.ca$scree[,3]>=thresh.sig.dim))
barplot(dataframe.after.ca$scree[,3], xlab="Dimensions", ylab="% of Inertia", names.arg=dataframe.after.ca$scree[,1])
abline(h=thresh.sig.dim)
title (main="Percentage of inertia explained by the dimensions",
sub="reference line: threshold of an optimal dimensionality of the solution, according to the average rule",
cex.main=0.80, cex.sub=0.80)
}
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/aver_rule.R
|
#' Dataset: Cross-tabulation of coffee brands vs. consumers' opinion
#'
#' Cross-tabulation (23x6) of the coffee brands against consumers' opinion.\cr
#' After: Kennedy R et al, Practical Applications of Correspondence Analysis to
#' Categorical Data in Market Research, in Journal of Targeting Measurement and
#' Analysis for Marketing, 1996
#'
#'
#' @docType data
#' @keywords datasets
#' @name brand_coffee
#' @usage data(brand_coffee)
#' @format dataframe
NULL
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/brand_coffee.r
|
#' Dataset: Cross-tabulation of breakfast food vs consumers' opinion
#'
#' Cross-tabulation (14x8) of the breakfast food type against consumers'
#' opinion.\cr After: Bendixen M, A Practical Guide to the Use of Correspondence
#' Analysis in Marketing Research, in Research online 1, 1996, 16-38
#'
#'
#' @docType data
#' @keywords datasets
#' @name breakfast
#' @usage data(breakfast)
#' @format dataframe
NULL
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/breakfast.r
|
#' Clustering row/column categories on the basis of Correspondence Analysis
#' coordinates from a space of user-defined dimensionality.
#'
#' This function plots the result of cluster analysis performed on the
#' results of Correspondence Analysis, providing the facility to produce a
#' dendrogram, a silhouette plot depicting the "quality" of the clustering
#' solution, and a scatterplot with points coded according to the cluster
#' membership.
#'
#' The function provides the facility to perform hierarchical cluster analysis
#' of row and/or column categories on the basis of Correspondence Analysis
#' result. The clustering is based on the row and/or colum categories'
#' coordinates from: \cr (1) a high-dimensional space corresponding to the whole
#' dimensionality of the input contingency table; \cr (2) a high-dimensional
#' space of dimensionality smaller than the full dimensionality of the input
#' dataset; \cr (3) a bi-dimensional space defined by a pair of user-defined
#' dimensions. \cr To obtain (1), the 'dim' parameter must be left in its
#' default value (NULL); \cr To obtain (2), the 'dim' parameter must be given an
#' integer (needless to say, smaller than the full dimensionality of the input
#' data); \cr To obtain (3), the 'dim' parameter must be given a vector (e.g.,
#' c(1,3)) specifying the dimensions the user is interested in.
#'
#' The method by which the distance is calculated is specified using the
#' 'dist.meth' parameter, while the agglomerative method is specified using the
#' 'aggl.meth' parameter. By default, they are set to "euclidean" and "ward.D2"
#' respectively.
#'
#' The user may want to specify beforehand the desired number of clusters (i.e.,
#' the cluster solution). This is accomplished feeding an integer into the
#' 'part' parameter. A dendrogram (with rectangles indicating the clustering
#' solution), a silhouette plot (indicating the "quality" of the cluster
#' solution), and a CA scatterplot (with points given colours on the basis of
#' their cluster membership) are returned. Please note that, when a
#' high-dimensional space is selected, the scatterplot will use the first 2 CA
#' dimensions; the user must keep in mind that the clustering based on a
#' higher-dimensional space may not be well reflected on the subspace defined by
#' the first two dimensions only.\cr Also note: \cr -if both row and column
#' categories are subject to the clustering, the column categories will be
#' flagged by an asterisk (*) in the dendrogram (and in the silhouette plot)
#' just to make it easier to identify rows and columns; \cr -the silhouette plot
#' displays the average silhouette width as a dashed vertical line; the
#' dimensionality of the CA space used is reported in the plot's title; if a
#' pair of dimensions has been used, the individual dimensions are reported in
#' the plot's title; \cr -the silhouette plot's labels end with a number
#' indicating the cluster to which each category is closer.
#'
#' An optimal clustering solution can be obtained setting the 'opt.part'
#' parameter to TRUE. The optimal partition is selected by means of an iterative
#' routine which locates at which cluster solution the highest average
#' silhouette width is achieved. If the 'opt.part' parameter is set to TRUE, an
#' additional plot is returned along with the silhouette plot. It displays a
#' scatterplot in which the cluster solution (x-axis) is plotted against the
#' average silhouette width (y-axis). A vertical reference line indicates the
#' cluster solution which maximizes the silhouette width, corresponding to the
#' suggested optimal partition.
#'
#' The function returns a list storing information about the cluster membership
#' (i.e., which categories belong to which cluster).
#'
#' Further info and Disclaimer: \cr The silhouette plot is obtained from the
#' silhouette() function out from the 'cluster' package
#' (https://cran.r-project.org/web/packages/cluster/index.html). For a detailed
#' description of the silhouette plot, its rationale, and its interpretation,
#' see: \cr -Rousseeuw P J. 1987. "Silhouettes: A graphical aid to the
#' interpretation and validation of cluster analysis", Journal of Computational
#' and Applied Mathematics 20, 53-65
#' (http://www.sciencedirect.com/science/article/pii/0377042787901257)
#'
#' For the idea of clustering categories on the basis of the CA coordinates from
#' a full high-dimensional space (or from a subset thereof), see: \cr -Ciampi et
#' al. 2005. "Correspondence analysis and two-way clustering", SORT 29 (1), 27-4
#' \cr -Beh et al. 2011. "A European perception of food using two methods of
#' correspondence analysis", Food Quality and Preference 22(2), 226-231
#'
#' Please note that the interpretation of the clustering when both row AND
#' column categories are used must proceed with caution due to the issue of
#' inter-class points' distance interpretation. For a full description of the
#' issue (also with further references), see: \cr -Greenacre M. 2007.
#' "Correspondence Analysis in Practice", Boca Raton-London-New York,
#' Chapman&Hall/CRC, 267-268.
#'
#' @param data Contingency table (dataframe format).
#' @param which Takes "both" to cluster both row and column categories; "rows"
#' or "columns" to cluster only row or column categories respectively
#' @param dim Sets the dimensionality of the space whose coordinates are used to
#' cluster the CA categories; it can be an integer or a vector (e.g., c(2,3))
#'   specifying the first and second selected dimension. NULL is the default; in
#'   that case, the clustering will be based on the maximum dimensionality of the
#'   dataset.
#' @param dist.meth Sets the distance method used for the calculation of the
#' distance between categories; "euclidean" is the default (see the help of
#'   the dist() function for more info and for other available methods).
#' @param aggl.meth Sets the agglomerative method to be used in the dendrogram
#' construction; "ward.D2" is the default (see the help of the hclust()
#' function for more info and for other methods available).
#' @param opt.part Takes TRUE or FALSE (default) if the user wants or doesn't
#' want an optimal partition to be suggested; the latter is based upon an
#'   iterative process that seeks to maximize the average silhouette
#' width.
#' @param opt.part.meth Sets whether the optimal partition method will try to
#' maximize the average ("mean") or median ("median") silhouette width. The
#' former is the default.
#' @param part Integer which sets the number of desired clusters (NULL is
#' default); this will override the optimal cluster solution.
#' @param cex.dndr.lab Sets the size of the dendrogram's labels. 0.85 is the
#' default.
#' @param cex.sil.lab Sets the size of the silhouette plot's labels. 0.75 is
#' the default.
#' @param cex.sctpl.lab Sets the size of the Correspondence Analysis
#' scatterplot's labels. 3.5 is the default.
#'
#' @keywords caCluster
#'
#' @export
#'
#' @examples
#' data(brand_coffee)
#'
#' #displays a dendrogram of row AND column categories
#' res <- caCluster(brand_coffee, opt.part=FALSE)
#'
#' #displays a dendrogram for row AND column categories; the clustering is based on the CA
#' #coordinates from a full high-dimensional space. Rectangles indicate the clusters defined by
#' #the optimal partition method (see Details). A silhouette plot, a scatterplot, and a CA
#' #scatterplot with indication of cluster membership are also produced (see Details).
#' #The cluster membership is stored in the object 'res'.
#'
#' res <- caCluster(brand_coffee, opt.part=TRUE)
#'
#' #displays a dendrogram for row categories, with rectangles indicating the clusters defined by the
#' #optimal partition method (see Details). The clustering is based on a space of dimensionality 4.
#' #A silhouette plot, a scatterplot, and a CA scatterplot with indication of cluster membership are
#' #also produced (see Details). The cluster membership is stored in the object 'res'.
#'
#' res <- caCluster(brand_coffee, which="rows", dim=4, opt.part=TRUE)
#'
#' #like the above example, but the clustering is based on the coordinates on the sub-space defined
#' #by a pair of dimensions (i.e., 1 and 4).
#'
#' res <- caCluster(brand_coffee, which="rows", dim=c(1,4), opt.part=TRUE)
#'
#' @seealso \code{\link{groupBycoord}}
#'
caCluster <- function(data, which="both", dim=NULL, dist.meth="euclidean", aggl.meth="ward.D2", opt.part=FALSE, opt.part.meth="mean", part=NULL, cex.dndr.lab=0.85, cex.sil.lab=0.75, cex.sctpl.lab=3.5){
dimensionality <- min(ncol(data), nrow(data))-1 # calculate the dimensionality of the input table
ifelse(is.null(dim), dimens.to.report <- paste0("from a space of dimensionality: ", dimensionality), ifelse(length(dim)==1, dimens.to.report <- paste0("from a space of dimensionality: ", dim), dimens.to.report <- paste0("from the subspace defin. by the ", dim[1], " and ", dim[2], " dim.")))
ifelse(is.null(dim), sil.plt.title <- paste0("Silhouette plot for CA (dimensionality: ", dimensionality, ")"), ifelse(length(dim)==1, sil.plt.title <- paste0("Silhouette plot for CA (dimensionality: ", dim, ")"), sil.plt.title <- paste0("Silhouette plot for CA (dim. ", dim[1], " + ", dim[2], ")")))
ifelse(is.null(dim), ca.plt.title <- paste0("Clusters based on CA coordinates from a space of dimensionality: ", dimensionality), ifelse(length(dim)==1, ca.plt.title <- paste0("Clusters based on CA coordinates from a space of dimensionality: ", dim), ca.plt.title <- paste0("Clusters based on CA coordinates from the sub-space defined by dim. ", dim[1], " + ", dim[2])))
  res.ca <- CA(data, ncp = dimensionality, graph = FALSE) # get the CA results from the CA command of the FactoMineR package
ifelse(which=="rows", binded.coord<-res.ca$row$coord, ifelse(which=="cols", binded.coord<-res.ca$col$coord, binded.coord <- rbind(res.ca$col$coord, res.ca$row$coord))) # get the columns and/or rows coordinates for all the dimensions and save them in a new table
binded.coord <- as.data.frame(binded.coord)
if(which=="both"){
rownames(binded.coord)[1:nrow(res.ca$col$coord)] <- paste(rownames(binded.coord)[1:nrow(res.ca$col$coord)], "*", sep = "") # add an asterisk to the dataframe row names corresponding to the column categories
dendr.title <- paste("Clusters of Row and Column (*) categories \nclustering based on Correspondence Analysis' coordinates", dimens.to.report)
} else {ifelse(which=="rows", dendr.title <- paste("Clusters of Row categories \nclustering based on Correspondence Analysis' coordinates", dimens.to.report), dendr.title <- paste("Clusters of Column categories \nclustering based on Correspondence Analysis' coordinates", dimens.to.report))}
max.ncl <- nrow(binded.coord)-1 # calculate the max number of clusters, 1 less than the number of objects (i.e., the binded table's rows)
sil.width.val <- numeric(max.ncl-1) # create an empty vector to store the average value of the silhouette width at different cluster solutions
  sil.width.step <- c(2:max.ncl) # create a vector storing the progressive number of clusters for which silhouettes are calculated
  ifelse(is.null(dim), d <- dist(binded.coord, method = dist.meth), ifelse(length(dim)==1, d <- dist(subset(binded.coord, select=1:dim), method = dist.meth), d <- dist(subset(binded.coord, select=dim), method = dist.meth))) # calculate the distance matrix on the whole coordinate dataset if 'dim' is not entered by the user; otherwise, the matrix is calculated on a subset of the coordinate dataset
  if (is.null(dim) | length(dim)==1) { # condition to extract the coordinates to be used later for plotting a scatterplot with cluster membership
first.setcoord <- 1
second.setcoord <- 2
dim.labelA <- "Dim. 1"
dim.labelB <- "Dim. 2"
} else {
first.setcoord <- dim[1]
second.setcoord <- dim[2]
dim.labelA <- paste0("Dim. ", dim[1])
dim.labelB <- paste0("Dim. ", dim[2])
}
fit <- hclust(d, method=aggl.meth) # perform the hierc agglomer clustering
if (is.null(part) & opt.part==TRUE) {
for (i in 2:max.ncl){
counter <- i-1
clust <- silhouette(cutree(fit, k=i),d) # calculate the silhouettes for increasing numbers of clusters; requires the 'cluster' package
      sil.width.val[counter] <- ifelse(opt.part.meth=="mean", mean(clust[,3]), median(clust[,3])) # store the mean or median of the silhouette width distribution at increasing cluster solutions
}
sil.res <- as.data.frame(cbind(sil.width.step, sil.width.val)) # store the results of the preceding loop binding the two vectors into a dataframe
select.clst.num <- sil.res$sil.width.step[sil.res$sil.width.val==max(sil.res$sil.width.val)] # from a column of the dataframe extract the cluster solution that corresponds to the maximum mean or median silhouette width
    plot(fit, main=dendr.title, sub=paste("Distance method:", dist.meth, "\nAgglomeration method:", aggl.meth), xlab="", cex=cex.dndr.lab, cex.main=0.9, cex.sub=0.75) # display the dendrogram when the optimal partition is desired, not the user-defined one
solution <- rect.hclust(fit, k=select.clst.num, border=1:select.clst.num) # create the cluster partition on the dendrogram using the optimal number of clusters stored in 'select.clst.num'
membership <- NULL
membership <- assignCluster(binded.coord, binded.coord, cutree(fit, k=select.clst.num))
binded.coord$membership <- membership
par(mfrow=c(1,2))
final.sil.data <- silhouette(cutree(fit, k=select.clst.num),d) # store the silhouette data related to the selected cluster solution
row.names(final.sil.data) <- row.names(binded.coord) # copy the objects names to the rows' name of the object created in the above step
rownames(final.sil.data) <- paste(rownames(final.sil.data), final.sil.data[,2], sep = "_") # append a suffix to the objects names corresponding to the neighbor cluster; the latter info is got from the 'final.sil.data' object
par(oma=c(0,4,0,0)) # enlarge the left outer margin of the plot area to leave room for long objects' labels
plot(final.sil.data, cex.names=cex.sil.lab, max.strlen=30, nmax.lab=nrow(binded.coord)+1, main=sil.plt.title) # plot the final silhouette chart, allowing for long objects'labels
abline(v=mean(final.sil.data[,3]), lty=2) # add a reference line for the average silhouette width of the optimal partition
plot(sil.res, xlab="number of clusters", ylab="silhouette width", ylim=c(0,1), xaxt="n", type="b", main="Silhouette width vs. number of clusters", sub=paste("values on the y-axis represent the", opt.part.meth, "of the silhouettes' width distribution at each cluster solution"), cex.sub=0.75) # plot the scatterplot
    axis(1, at = 0:max.ncl, cex.axis=0.70) # set the numbers for the x-axis labels
text(x=sil.res$sil.width.step, y=sil.res$sil.width.val, labels = round(sil.res$sil.width.val, 3), cex = 0.65, pos = 3, offset = 1, srt=90) # add the average width values on the top of the dots in the scatterplot
abline(v=select.clst.num, lty=2, col="red") # add a red reference line indicating the number of selected clusters
par(mfrow=c(1,1)) # reset the default plot layout
p <- ggplot(binded.coord, aes(x=binded.coord[,first.setcoord], y=binded.coord[,second.setcoord], color=membership)) +
labs(x=dim.labelA, y=dim.labelB, colour="Clusters") +
geom_point() +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
theme(panel.background = element_rect(fill="white", colour="black")) +
geom_text_repel(aes(x=binded.coord[,first.setcoord], y=binded.coord[,second.setcoord], label = rownames(binded.coord)), size=cex.sctpl.lab) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
ggtitle(ca.plt.title)
print(p)
return(solution)
} else {
if(is.null(part) & opt.part==FALSE){
      plot(fit, main=dendr.title, sub=paste("Distance method:", dist.meth, "\nAgglomeration method:", aggl.meth), xlab="", cex=cex.dndr.lab, cex.main=0.9, cex.sub=0.75) # display the dendrogram if neither a user-defined partition nor an optimal partition is desired
} else {
      plot(fit, main=dendr.title, sub=paste("Distance method:", dist.meth, "\nAgglomeration method:", aggl.meth), xlab="", cex=cex.dndr.lab, cex.main=0.9, cex.sub=0.75) # display the dendrogram if a user-defined partition is desired
select.clst.num <- part
solution <- rect.hclust(fit, k=select.clst.num, border=1:select.clst.num)
binded.coord$membership <- assignCluster(binded.coord, binded.coord, cutree(fit, k=select.clst.num))
final.sil.data <- silhouette(cutree(fit, k=select.clst.num),d)
row.names(final.sil.data) <- row.names(binded.coord)
rownames(final.sil.data) <- paste(rownames(final.sil.data), final.sil.data[,2], sep = "_")
plot(final.sil.data, cex.names=cex.sil.lab, max.strlen=30, nmax.lab=nrow(binded.coord)+1, main=sil.plt.title) # plot the final silhouette chart, allowing for long objects'labels
abline(v=mean(final.sil.data[,3]), lty=2)
p <- ggplot(binded.coord, aes(x=binded.coord[,first.setcoord], y=binded.coord[,second.setcoord], color=membership)) +
labs(x=dim.labelA, y=dim.labelB, colour="Clusters") +
geom_point() +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
theme(panel.background = element_rect(fill="white", colour="black")) +
geom_text_repel(aes(x=binded.coord[,first.setcoord], y=binded.coord[,second.setcoord], label = rownames(binded.coord)), size=cex.sctpl.lab) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
ggtitle(ca.plt.title)
print(p)
par(mfrow=c(1,1))
return(solution)
}
}
}

# Source file: CAinterprTools/R/ca_cluster.R

#' Chart of correlation between rows and columns categories
#'
#' This function calculates the strength of the correlation between
#' rows and columns of the contingency table. A reference line indicates the
#' threshold above which the correlation can be considered important.
#'
#' @param data Name of the dataset (in dataframe format).
#'
#' @keywords caCorr
#'
#' @export
#'
#' @examples
#' data(greenacre_data)
#' caCorr(greenacre_data)
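#'
#' #A minimal sketch (not part of caCorr) of the quantity being charted: the
#' #correlation coefficient between rows and columns is the square root of the
#' #table's total inertia, recomputed here from the principal inertias
#' #(eigenvalues) returned by FactoMineR's CA().
#'
#' library(FactoMineR)
#' res <- CA(greenacre_data, graph=FALSE)
#' sqrt(sum(res$eig[,1]))   #square root of the sum of the principal inertias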
#'
caCorr <- function (data){
mydataasmatrix<-as.matrix(data)
dataframe.after.ca<- summary(ca(data))
perf.corr<-(1.0)
sqr.trace<-round(sqrt(sum(dataframe.after.ca$scree[,2])), digits=3)
barplot(c(perf.corr, sqr.trace), main=paste("Correlation coefficient between rows & columns (=square root of the inertia):", sqr.trace), sub="reference line: threshold of important correlation ", ylab="correlation coeff.", names.arg=c("correlation coeff. range", "correlation coeff. bt rows & cols"), cex.main=0.80, cex.sub=0.80, cex.lab=0.80)
abline(h=0.20)
}

# Source file: CAinterprTools/R/ca_corr.R

#' Perceptual map-like Correspondence Analysis scatterplot
#'
#' This command plots a variant of the traditional Correspondence Analysis
#' scatterplot that facilitates the interpretation of the results. It aims at
#' producing what in marketing research is called a perceptual
#' map, a visual representation of the CA results that seeks to avoid the
#' problem of interpreting inter-spatial distance. It represents only one type
#' of points (say, column points), and "gives names to the axes" corresponding
#' to the major row category contributors to the two selected dimensions.
#'
#' @param data Contingency table, in dataframe format.
#' @param x First dimensions to be plotted.
#' @param y Second dimensions to be plotted.
#' @param focus Takes "row" (default) if the interest is in assessing the
#' contribution of the rows to the definition of the dimensions, "col" if the
#' interest is on the columns.
#' @param dim.corr Dimension for which the points' correlation (column points if
#' focus is set to "row", row points if focus is set to "col") will be
#' computed and used as input value for the size of the points. The default
#' value is the smaller of the two input dimensions (i.e., x).
#' @param guide TRUE or FALSE (default) if the user does or doesn't want the
#' points being given a color code indicating with which of the two selected
#' dimension they have a higher relative correlation.
#' @param size.labls Adjust the size of the characters used in the labels that
#' give names to the axes.
#'
#' @keywords caPercept
#'
#' @export
#'
#' @examples
#' data(brand_coffee)
#'
#' caPercept(brand_coffee,1,2,focus="col",dim.corr=1, guide=FALSE)
#'
#' #In the returned plot, axes are given names according to the major contributing column categories
#'# (i.e., coffee brands in this dataset), while the points correspond to the row categories
#'#(i.e., attributes). Points' size is proportional to the correlation of points with the 1st
#'#dimension. If 'guide' is set to TRUE, the returned plot is similar to the preceding one,
#'# but the points are given colour according to whether they are more correlated
#'# (in relative terms) to the first or to the second of the selected dimensions.
#'# In this example, points flagged with "->Dim 1" are more correlated to the 1st dimension,
#'# while those flagged with "->Dim 2" have a higher correlation with the 2nd dimension.
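#'
#' #A minimal sketch (not part of caPercept) of how the axes' "names" are selected
#' #in the above example (focus="col"): a column category is a major contributor to a
#' #dimension when its contribution exceeds the average contribution, i.e. 100/(number of columns).
#'
#' library(FactoMineR)
#' res <- CA(brand_coffee, graph=FALSE)
#' thresh <- 100/ncol(brand_coffee)
#' colnames(brand_coffee)[res$col$contrib[,1] > thresh]   #major contributors to Dim. 1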
#'
#' @seealso \code{\link{caPlot}}
#'
caPercept <- function (data, x = 1, y = 2, focus="row", dim.corr=x, guide=FALSE, size.labls=3) {
ncols <- ncol(data)
nrows <- nrow(data)
numb.dim.cols <- ncol(data) - 1
numb.dim.rows <- nrow(data) - 1
a <- min(numb.dim.cols, numb.dim.rows)
res <- CA(data, ncp=a, graph=FALSE)
percent.inr.xdim <- round(res$eig[x,2], digits=2)
percent.inr.ydim <- round(res$eig[y,2], digits=2)
coord1=cntr1=coord2=cntr2=corr=corr_guide=NULL
if (focus=="col") {
pnt_labls <- colnames(data)
title <- paste("CA scatterplot: row points' correlation with Dim.", dim.corr,", and major column categories contributors (red)")
} else {
pnt_labls <- rownames(data)
title <- paste("CA scatterplot: column points' correlation with Dim.", dim.corr, ", and major row categories contributors (red)")
}
if (focus=="col") {
dfr <- data.frame(lab=pnt_labls,coord1=res$col$coord[,x], cntr1=res$col$contrib[,x], coord2=res$col$coord[,y], cntr2=res$col$contrib[,y])
dfr.to.plot <- data.frame(coord1=res$row$coord[,x],coord2=res$row$coord[,y], corr=sqrt(res$row$cos2[,dim.corr]), corr.b=sqrt(res$row$cos2[,ifelse(dim.corr==x,y,x)]))
col.data <- dfr
row.data <- dfr.to.plot
} else {
dfr <- data.frame(lab = pnt_labls,coord1=res$row$coord[,x], cntr1=res$row$contrib[,x], coord2=res$row$coord[,y], cntr2=res$row$contrib[,y])
dfr.to.plot <- data.frame(coord1=res$col$coord[,x],coord2=res$col$coord[,y], corr=sqrt(res$col$cos2[,dim.corr]), corr.b=sqrt(res$col$cos2[,ifelse(dim.corr==x,y,x)]))
row.data <- dfr
col.data <- dfr.to.plot
}
if (guide==TRUE) {
dfr.to.plot$corr_guide <- ifelse(dfr.to.plot$corr>dfr.to.plot$corr.b,paste("->Dim",dim.corr), paste("->Dim",ifelse(dim.corr==x,y,x)))
} else {}
cntr.thresh <- ifelse(focus=="col", 100/ncols, 100/nrows)
sub1 <- paste(subset(dfr, coord1<0 & cntr1>cntr.thresh)[,1], collapse="\n")
sub2 <- paste(subset(dfr, coord1>0 & cntr1>cntr.thresh)[,1], collapse="\n")
sub3 <- paste(subset(dfr, coord2<0 & cntr2>cntr.thresh)[,1], collapse="\n")
sub4 <- paste(subset(dfr, coord2>0 & cntr2>cntr.thresh)[,1], collapse="\n")
length.sub1 <- length(subset(dfr, coord1<0 & cntr1>cntr.thresh)[,1])
length.sub2 <- length(subset(dfr, coord1>0 & cntr1>cntr.thresh)[,1])
length.sub3 <- length(subset(dfr, coord2<0 & cntr2>cntr.thresh)[,1])
length.sub4 <- length(subset(dfr, coord2>0 & cntr2>cntr.thresh)[,1])
max.length <- max(length.sub1, length.sub2, length.sub3, length.sub4)
x.neg.lim <- min(dfr.to.plot$coord1)
x.pos.lim <- max(dfr.to.plot$coord1)
y.neg.lim <- min(dfr.to.plot$coord2)
y.pos.lim <- max(dfr.to.plot$coord2)
p <- ggplot(dfr.to.plot, aes(x=coord1, y=coord2)) + theme(panel.background = element_rect(fill="white", colour="black")) + xlab(paste0("Dim.",x," (",percent.inr.xdim,"%)" )) + ylab(paste0("Dim.",y, " (", percent.inr.ydim, "%)")) + geom_hline(yintercept = 0, colour="grey", linetype = "dashed") + geom_vline(xintercept = 0, colour="grey", linetype = "dashed") + geom_label(x=x.neg.lim+0.01, y=0.005, label=sub1, colour = "red", size=size.labls) + geom_label(x=x.pos.lim-0.01, y=0.005, label=sub2, colour="red", size=size.labls) + geom_label(x=0.005, y=y.neg.lim, label=sub3, colour="red",size=size.labls) + geom_label(x=0.005, y=y.pos.lim, label=sub4, colour="red",size=size.labls) + geom_text_repel(data = dfr.to.plot, aes(label = rownames(dfr.to.plot)), size = 2.7, colour = "black", box.padding = unit(0.35, "lines"), point.padding = unit(0.3, "lines")) + ggtitle(title) + theme(plot.title = element_text(size = 12))
if (guide==TRUE) {
p1 <- p + geom_point(aes(size=corr, colour=corr_guide))
} else {
p1 <- p + geom_point(aes(size=corr))
}
return(p1)
}

# Source file: CAinterprTools/R/ca_percept.R

#' Interpretation-oriented Correspondence Analysis scatterplots, with informative
#' and flexible (non-overlapping) labels.
#'
#' This function allows plotting different types of CA scatterplots, adding
#' information that is relevant to the CA interpretation. Thanks to the
#' 'ggrepel' package, the labels tend not to overlap, producing a nicely
#' readable chart.
#'
#' caPlot() provides the facility to produce: \cr (1) a 'regular' (symmetric)
#' scatterplot, in which points' labels only report the categories' names.
#'
#' (2) a scatterplot with advanced labels. If the user's interest lies (for
#' instance) in interpreting the rows in the space defined by the column
#' categories, by setting the parameter 'cntr' to "columns" the columns' labels
#' will be coupled with two asterisks within round brackets; each asterisk (if
#' present) will indicate if the category is a major contributor to the
#' definition of the first selected dimension (if the first asterisk to the left
#' is present) and/or if the same category is also a major contributor to the
#' definition of the second selected dimension (if the asterisk to the right is
#' present). The rows' labels will report the correlation (i.e., sqrt(COS2))
#' with the selected dimensions; the correlation values are reported between
#' square brackets; the left-hand side value refers to the correlation with the
#' first selected dimensions, while the right-hand side value refers to the
#' correlation with the second selected dimension. If the parameter 'cntr' is
#' set to "rows", the row categories' labels will indicate the contribution, and
#' the column categories' labels will report the correlation values.
#'
#' (3) a perceptual map, in which axes' poles are given names according to the
#' categories (either rows or columns, as specified by the user) having a major
#' contribution to the definition of the selected dimensions; rows' (or
#' columns') labels will report the correlation with the selected dimensions.
#'
#' The function returns a dataframe containing data about row and column points:
#' \cr (a) coordinates on the first selected dimension \cr (b) coordinates on
#' the second selected dimension \cr (c) contribution to the first selected
#' dimension \cr (d) contribution to the second selected dimension \cr (e)
#' quality on the first selected dimension \cr (f) quality on the second
#' selected dimension \cr (g) correlation with the first selected dimension \cr
#' (h) correlation with the second selected dimension \cr (j) (k) asterisks
#' indicating whether the corresponding category is a major contribution to the
#' first and/or second selected dimension.
#'
#' @param data Contingency table, in dataframe format.
#' @param x First of the two desired dimensions to be plotted. 1 is the default.
#' @param y Second of the two desired dimensions to be plotted. 2 is the
#' default.
#' @param adv.labls Logical value, which takes TRUE (default) or FALSE if the
#' user wants or does not want advanced labels to be displayed.
#' @param cntr If adv.labls is TRUE, the 'cntr' parameter takes "rows" or
#' "columns" if the user wants the rows' or columns' contribution to the
#' selected dimensions to be shown in the scatterplot.
#' @param percept Takes TRUE or FALSE (default) if the user does or doesn't want
#' the scatterplot to be turned into a perceptual map.
#' @param qlt.thres Sets the quality of the display's threshold under which
#' points will not be given labels. NULL is the default.
#' @param dot.size Sets the size of the scatterplot's dots. 2.5 is the default.
#' @param cex.labls Sets the size of the scatterplot dots' labels. 3 is the
#' default.
#' @param cex.percept Sets the size of the characters displayed in the axes'
#' labels featuring the perceptual map. 3 is the default.
#'
#' @keywords caPlot
#'
#' @export
#'
#' @examples
#' data(brand_coffee)
#'
#' #displays a 'regular' (symmetric) CA scatterplot, with row and column categories displayed in the
#' #same space, and with points' labels just reporting the categories' names.
#' #Relevant information (see description above) is stored in the variable 'res'.
#'
#' res <- caPlot(brand_coffee,1,2,adv.labls=FALSE)
#'
#' #displays the CA scatterplot, with the columns' labels indicating which category
#' # has a major contribution to the definition of the selected dimensions.
#' # Rows' labels report the correlation (i.e., sqrt(COS2)) with the selected dimensions.
#'
#' res <- caPlot(brand_coffee,1,2,cntr="columns")
#'
#'
#' #displays the CA scatterplot, with the rows' labels indicating
#' #which category has a major contribution to the definition of the selected dimensions.
#' #Columns' labels report the correlation (i.e., sqrt(COS2)) with the selected dimensions.
#'
#' res <- caPlot(brand_coffee,1,2,cntr="rows")
#'
#'
#' #displays the CA scatterplot as a perceptual map;
#' #the poles of the selected dimensions will be given names according
#' #to the column categories that have a major contribution to the definition
#' #of the selected dimensions. Rows' labels report the correlation (i.e., sqrt(COS2))
#' #with the selected dimensions.
#'
#' res <- caPlot(brand_coffee,1,2,cntr="columns", percept=TRUE)
#'
#' @seealso \code{\link{caPercept}} , \code{\link{caPlus}}
#'
caPlot <- function(data, x=1, y=2, adv.labls=TRUE, cntr="columns", percept=FALSE, qlt.thres=NULL, dot.size=2.5, cex.labls=3, cex.percept=3) {
coord.x=cntr.x=coord.y=cntr.y=Categories=labls.final=labls=qlt.sum=NULL
dimensionality <- min(ncol(data), nrow(data))-1
ca.res <- CA(data, ncp=dimensionality, graph=FALSE)
dtf.rows <- data.frame(Categories="rows", labls=row.names(ca.res$row$coord), coord.x=ca.res$row$coord[,x], coord.y=ca.res$row$coord[,y], cntr.x=ca.res$row$contrib[,x], cntr.y=ca.res$row$contrib[,y], qlt.x= ca.res$row$cos2[,x], qlt.y=ca.res$row$cos2[,y], corr.x=sqrt(ca.res$row$cos2[,x]), corr.y=sqrt(ca.res$row$cos2[,y]))
dtf.cols <- data.frame(Categories="columns", labls=row.names(ca.res$col$coord), coord.x=ca.res$col$coord[,x], coord.y=ca.res$col$coord[,y], cntr.x=ca.res$col$contrib[,x], cntr.y=ca.res$col$contrib[,y], qlt.x= ca.res$col$cos2[,x], qlt.y=ca.res$col$cos2[,y], corr.x=sqrt(ca.res$col$cos2[,x]), corr.y=sqrt(ca.res$col$cos2[,y]))
if(cntr=="columns"){
dtf.cols$majorcntr.x <- ifelse(dtf.cols$cntr.x>100/ncol(data), "*","")
dtf.cols$majorcntr.y <- ifelse(dtf.cols$cntr.y>100/ncol(data), "*","")
dtf.cols$labls.final <- ifelse(dtf.cols$majorcntr.x == "" & dtf.cols$majorcntr.y == "", rownames(dtf.cols), paste0(dtf.cols$labls, " (", dtf.cols$majorcntr.x, ",",dtf.cols$majorcntr.y, ")"))
dtf.rows$majorcntr.x <- ""
dtf.rows$majorcntr.y <- ""
dtf.rows$labls.final <- paste0(dtf.rows$labls, "\n[", round(dtf.rows$corr.x, 2), ", ", round(dtf.rows$corr.y, 2), "]")
} else {
dtf.rows$majorcntr.x <- ifelse(dtf.rows$cntr.x>100/nrow(data), "*","")
dtf.rows$majorcntr.y <- ifelse(dtf.rows$cntr.y>100/nrow(data), "*","")
dtf.rows$labls.final <- ifelse(dtf.rows$majorcntr.x == "" & dtf.rows$majorcntr.y == "", rownames(dtf.rows), paste0(dtf.rows$labls, " (", dtf.rows$majorcntr.x, ",",dtf.rows$majorcntr.y, ")"))
dtf.cols$majorcntr.x <- ""
dtf.cols$majorcntr.y <- ""
dtf.cols$labls.final <- paste0(dtf.cols$labls, "\n[", round(dtf.cols$corr.x, 2), ", ", round(dtf.cols$corr.y, 2), "]")
}
binded.dtf <- rbind(dtf.rows, dtf.cols)
binded.dtf$qlt.sum <- binded.dtf$qlt.x + binded.dtf$qlt.y
if(percept==TRUE){
cntr.thresh <- ifelse(cntr == "columns", 100 / ncol(data), 100 / nrow(data))
if(cntr=="columns"){
binded.dtf$labls.final[(nrow(data)+1):nrow(binded.dtf)] <- colnames(data)
} else {
binded.dtf$labls.final[1:nrow(data)] <- rownames(data)
}
bindeddtf.subs <- binded.dtf[which(binded.dtf$Categories == cntr),]
sub1 <- paste(subset(bindeddtf.subs, coord.x < 0 & cntr.x > cntr.thresh)[,2], collapse = "\n")
sub2 <- paste(subset(bindeddtf.subs, coord.x > 0 & cntr.x > cntr.thresh)[,2], collapse = "\n")
sub3 <- paste(subset(bindeddtf.subs, coord.y < 0 & cntr.y > cntr.thresh)[,2], collapse = "\n")
sub4 <- paste(subset(bindeddtf.subs, coord.y > 0 & cntr.y > cntr.thresh)[,2], collapse = "\n")
x.neg.lim <- min(subset(binded.dtf, Categories!=cntr)$coord.x) # get the min and max coordinates of the space in which one has to represent the categories opposite than the one selected under 'cntr'
x.pos.lim <- max(subset(binded.dtf, Categories!=cntr)$coord.x)
y.neg.lim <- min(subset(binded.dtf, Categories!=cntr)$coord.y)
y.pos.lim <- max(subset(binded.dtf, Categories!=cntr)$coord.y)
p <- ggplot(subset(binded.dtf, Categories!=cntr), aes(x=coord.x, y=coord.y)) +
geom_point(aes(colour=Categories, shape=Categories), size=dot.size) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
geom_text_repel(data=subset(binded.dtf, Categories!=cntr), aes(colour=Categories, label = labls.final), size = cex.labls) +
labs(x=paste0("Dim. ", x, " (", round(ca.res$eig[x,1],3), "; ", round(ca.res$eig[x,2],2), "%)"), y=paste0("Dim. ", y, " (", round(ca.res$eig[y,1],3), "; ", round(ca.res$eig[y,2],2), "%)")) +
theme(panel.background = element_rect(fill="white", colour="black")) + scale_color_manual(values=c("black", "red")) +
geom_label(x = x.neg.lim + 0.01, y = 0.005, label = sub1, colour = "red", size = cex.percept) +
geom_label(x = x.pos.lim - 0.01, y = 0.005, label = sub2,colour = "red", size = cex.percept) +
geom_label(x = 0.005, y = y.neg.lim, label = sub3, colour = "red", size = cex.percept) +
geom_label(x = 0.005, y = y.pos.lim, label = sub4, colour = "red", size = cex.percept) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE)
print(p)
colnames(binded.dtf) <- sub("x", paste0(as.character(x),"Dim"), colnames(binded.dtf))
colnames(binded.dtf) <- sub("y", paste0(as.character(y),"Dim"), colnames(binded.dtf))
return(subset(binded.dtf, , -c(labls, labls.final, qlt.sum)))
} else {
if(adv.labls==TRUE){
p <- ggplot(binded.dtf, aes(x=coord.x, y=coord.y)) +
geom_point(aes(colour=Categories, shape=Categories), size=dot.size) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dim. ", x, " (", round(ca.res$eig[x,1],3), "; ", round(ca.res$eig[x,2],2), "%)"), y=paste0("Dim. ", y, " (", round(ca.res$eig[y,1],3), "; ", round(ca.res$eig[y,2],2), "%)")) +
theme(panel.background = element_rect(fill="white", colour="black")) + scale_color_manual(values=c("black", "red")) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE)
if(is.null(qlt.thres)){
p1 <- p + geom_text_repel(data=binded.dtf, aes(colour=Categories, label = labls.final), size = cex.labls)
} else {
p1 <- p + geom_text_repel(data=binded.dtf[which(binded.dtf$qlt.sum > qlt.thres),], aes(colour=Categories, label = labls.final), size = cex.labls)
}
print(p1)
colnames(binded.dtf) <- sub("x", paste0(as.character(x),"Dim"), colnames(binded.dtf))
colnames(binded.dtf) <- sub("y", paste0(as.character(y),"Dim"), colnames(binded.dtf))
return(subset(binded.dtf, , -c(labls, labls.final, qlt.sum)))
} else {
p <- ggplot(binded.dtf, aes(x=coord.x, y=coord.y)) +
geom_point(aes(colour=Categories, shape=Categories), size=dot.size) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dim. ", x, " (", round(ca.res$eig[x,1],3), "; ", round(ca.res$eig[x,2],2), "%)"), y=paste0("Dim. ", y, " (", round(ca.res$eig[y,1],3), "; ", round(ca.res$eig[y,2],2), "%)")) +
theme(panel.background = element_rect(fill="white", colour="black")) +
scale_color_manual(values=c("black", "red")) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE)
if(is.null(qlt.thres)){
p1 <- p + geom_text_repel(data=binded.dtf, aes(colour=Categories, label = labls), size = cex.labls)
} else {
p1 <- p + geom_text_repel(data=binded.dtf[which(binded.dtf$qlt.sum > qlt.thres),], aes(colour=Categories, label = labls), size = cex.labls)
}
print(p1)
colnames(binded.dtf) <- sub("x", paste0(as.character(x),"Dim"), colnames(binded.dtf))
colnames(binded.dtf) <- sub("y", paste0(as.character(y),"Dim"), colnames(binded.dtf))
return(subset(binded.dtf, , -c(labls, labls.final, qlt.sum)))
}
}
}

# Source file: CAinterprTools/R/ca_plot.R

#' Facility for interpretation-oriented CA scatterplot
#'
#' This function allows plotting Correspondence Analysis scatterplots modified to
#' help interpret the analysis' results. In particular, the function aims at
#' making it easier to understand, in the same visual context, (a) which (say,
#' column) categories are actually contributing to the definition of given pairs
#' of dimensions, and (b) which (say, row) categories are more
#' correlated to which dimension.
#' @param data Object returned by the FactoMineR's CA() function (see example
#' provided below); if supplementary data (i.e., rows and/or columns) are
#' present, when using CA(), the analyst has to use the proper settings
#' required by that function.
#' @param x First dimensions to be plotted (x=1 by default).
#' @param y Second dimensions to be plotted (y=2 by default).
#' @param focus Takes "R" if the interest is in assessing the contribution of
#' rows to the definition of the dimensions, "C" if the interest is on the
#' columns.
#' @param row.suppl Takes TRUE or FALSE if supplementary row data are present or
#' absent (FALSE is the default value).
#' @param col.suppl Takes TRUE or FALSE if supplementary column data are present
#' or absent (FALSE is the default value).
#' @param oneplot Takes TRUE or FALSE if the analyst wants the four returned
#' charts on the same page (recommended) or on four separate windows (FALSE is
#' the default value).
#' @param inches Numerical value used to resize the size of the points' bubbles
#' (see below); the default value is 0.35.
#' @param cex Numerical value used to set the size of labels' font; the default
#' value is 0.50.
#' @keywords caPlus
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #performs CA by means of FactoMineR's CA command, and store the result in the object named resCA.
#' library(FactoMineR)
#' resCA <- CA(greenacre_data, graph=FALSE)
#'
#' #If supplementary data are present, the user has to specify which rows and/or columns
#' #are supplementary when calling CA() (see FactoMineR's documentation).
#' caPlus(resCA, 1, 2, focus="C", row.suppl=FALSE, col.suppl=FALSE, oneplot=TRUE)
#'
#' @seealso \code{\link{caPlot}} , \code{\link{caPercept}} , \code{\link[FactoMineR]{CA}}
#'
caPlus <- function(data, x=1, y=2, focus, row.suppl=FALSE, col.suppl=FALSE, oneplot=FALSE, inches=0.35, cex=0.5){
inrt.perc.x <- round(data$eig[x,2],1)
inrt.perc.y <- round(data$eig[y,2],1)
if (focus=="R") {
cntr.x <- data$row$contrib[,x]
cntr.y <- data$row$contrib[,y]
coord.row.x <- data$row$coord[,x]
coord.row.y <- data$row$coord[,y]
if (col.suppl=="FALSE") {
coord.col.x <- data$col$coord[,x]
coord.col.y <- data$col$coord[,y]
corr.x <- sqrt(data$col$cos2[,x])
corr.y <- sqrt(data$col$cos2[,y])
labs.col <- rownames(data$col$cos2)
} else {
coord.col.x <- rbind(data$col$coord, data$col.sup$coord)[,x]
coord.col.y <- rbind(data$col$coord, data$col.sup$coord)[,y]
corr.x <- sqrt(rbind(data$col$cos2, data$col.sup$cos2))[,x]
corr.y <- sqrt(rbind(data$col$cos2, data$col.sup$cos2))[,y]
labs.col <- rownames(rbind(data$col$cos2, data$col.sup$cos2))
}
radius.cntr.x <- sqrt(cntr.x/pi)
radius.cntr.y <- sqrt(cntr.y/pi)
radius.corr.x <- sqrt(corr.x/pi)
radius.corr.y <- sqrt(corr.y/pi)
labs.row <- rownames(data$row$contrib)
title.cntr.x <- paste("CA rows scatterplot: points proportional to the contrib. to Dim", x)
title.cntr.y <- paste("CA rows scatterplot: points proportional to the contrib. to Dim", y)
title.corr.x <- paste("CA columns scatterplot: points proportional to the correl. with Dim", x)
title.corr.y <- paste("CA columns scatterplot: points proportional to the correl. with Dim", y)
if (oneplot=="TRUE") {
par(mfrow=c(2,2))
} else {}
symbols(coord.row.x, coord.row.y, circles=radius.cntr.x, inches=inches, fg="white", bg="red", xlab=paste0("Dim. ", x," (", inrt.perc.x, "%)"), ylab=paste0("Dim. ", y, " (", inrt.perc.y, "%)"), main=title.cntr.x, cex.main=0.70)
text(coord.row.x, coord.row.y, labs.row, cex=cex)
abline(v=0, lty=2, col="grey")
abline(h=0, lty=2, col="grey")
if (row.suppl=="TRUE") {
points(data$row.sup$coord[,x],data$row.sup$coord[,y])
text(data$row.sup$coord[,x],data$row.sup$coord[,y], rownames(data$row.sup$coord), cex=cex, pos=3)
} else {}
symbols(coord.row.x, coord.row.y, circles=radius.cntr.y, inches=inches, fg="white", bg="red", xlab=paste0("Dim. ", x," (", inrt.perc.x, "%)"), ylab=paste0("Dim. ", y, " (", inrt.perc.y, "%)"), main=title.cntr.y, cex.main=0.70)
text(coord.row.x, coord.row.y, labs.row, cex=cex)
abline(v=0, lty=2, col="grey")
abline(h=0, lty=2, col="grey")
if (row.suppl=="TRUE") {
points(data$row.sup$coord[,x],data$row.sup$coord[,y])
text(data$row.sup$coord[,x],data$row.sup$coord[,y], rownames(data$row.sup$coord), cex=cex, pos=3)
} else {}
symbols(coord.col.x, coord.col.y, circles=radius.corr.x, inches=inches, fg="white", bg="green", xlab=paste0("Dim. ", x," (", inrt.perc.x, "%)"), ylab=paste0("Dim. ", y, " (", inrt.perc.y, "%)"), main=title.corr.x, cex.main=0.70)
text(coord.col.x, coord.col.y, labs.col, cex=cex)
abline(v=0, lty=2, col="grey")
abline(h=0, lty=2, col="grey")
symbols(coord.col.x, coord.col.y, circles=radius.corr.y, inches=inches, fg="white", bg="green", xlab=paste0("Dim. ", x," (", inrt.perc.x, "%)"), ylab=paste0("Dim. ", y, " (", inrt.perc.y, "%)"), main=title.corr.y, cex.main=0.70)
text(coord.col.x, coord.col.y, labs.col, cex=cex)
abline(v=0, lty=2, col="grey")
abline(h=0, lty=2, col="grey")
par(mfrow=c(1,1))
} else {
cntr.x <- data$col$contrib[,x]
cntr.y <- data$col$contrib[,y]
coord.col.x <- data$col$coord[,x]
coord.col.y <- data$col$coord[,y]
if (row.suppl=="FALSE") {
coord.row.x <- data$row$coord[,x]
coord.row.y <- data$row$coord[,y]
corr.x <- sqrt(data$row$cos2[,x])
corr.y <- sqrt(data$row$cos2[,y])
labs.row <- rownames(data$row$cos2)
} else {
coord.row.x <- rbind(data$row$coord, data$row.sup$coord)[,x]
coord.row.y <- rbind(data$row$coord, data$row.sup$coord)[,y]
corr.x <- sqrt(rbind(data$row$cos2, data$row.sup$cos2))[,x]
corr.y <- sqrt(rbind(data$row$cos2, data$row.sup$cos2))[,y]
labs.row <- rownames(rbind(data$row$cos2, data$row.sup$cos2))
}
radius.cntr.x <- sqrt(cntr.x/pi)
radius.cntr.y <- sqrt(cntr.y/pi)
radius.corr.x <- sqrt(corr.x/pi)
radius.corr.y <- sqrt(corr.y/pi)
labs.col <- rownames(data$col$contrib)
title.cntr.x <- paste("CA cols scatterplot: points proportional to the contrib. to Dim", x)
title.cntr.y <- paste("CA cols scatterplot: points proportional to the contrib. to Dim", y)
title.corr.x <- paste("CA rows scatterplot: points proportional to the correl. with Dim", x)
title.corr.y <- paste("CA rows scatterplot: points proportional to the correl. with Dim", y)
if (oneplot=="TRUE") {
par(mfrow=c(2,2))
} else {}
symbols(coord.col.x, coord.col.y, circles=radius.cntr.x, inches=inches, fg="white", bg="red", xlab=paste0("Dim. ", x," (", inrt.perc.x, "%)"), ylab=paste0("Dim. ", y, " (", inrt.perc.y, "%)"), main=title.cntr.x, cex.main=0.70)
text(coord.col.x, coord.col.y, labs.col, cex=cex)
abline(v=0, lty=2, col="grey")
abline(h=0, lty=2, col="grey")
if (col.suppl=="TRUE") {
points(data$col.sup$coord[,x],data$col.sup$coord[,y])
text(data$col.sup$coord[,x],data$col.sup$coord[,y], rownames(data$col.sup$coord), cex=cex, pos=3)
} else {}
symbols(coord.col.x, coord.col.y, circles=radius.cntr.y, inches=inches, fg="white", bg="red", xlab=paste0("Dim. ", x," (", inrt.perc.x, "%)"), ylab=paste0("Dim. ", y, " (", inrt.perc.y, "%)"), main=title.cntr.y, cex.main=0.70)
text(coord.col.x, coord.col.y, labs.col, cex=cex)
abline(v=0, lty=2, col="grey")
abline(h=0, lty=2, col="grey")
if (col.suppl=="TRUE") {
points(data$col.sup$coord[,x],data$col.sup$coord[,y])
text(data$col.sup$coord[,x],data$col.sup$coord[,y], rownames(data$col.sup$coord), cex=cex, pos=3)
} else {}
symbols(coord.row.x, coord.row.y, circles=radius.corr.x, inches=inches, fg="white", bg="green", xlab=paste0("Dim. ",x," (", inrt.perc.x, "%)"), ylab=paste0("Dim. ", y, " (", inrt.perc.y, "%)"), main=title.corr.x, cex.main=0.70)
text(coord.row.x, coord.row.y, labs.row, cex=cex)
abline(v=0, lty=2, col="grey")
abline(h=0, lty=2, col="grey")
symbols(coord.row.x, coord.row.y, circles=radius.corr.y, inches=inches, fg="white", bg="green", xlab=paste0("Dim. ", x," (", inrt.perc.x, "%)"), ylab=paste0("Dim. ",y, " (", inrt.perc.y, "%)"), main=title.corr.y, cex.main=0.70)
text(coord.row.x, coord.row.y, labs.row, cex=cex)
abline(v=0, lty=2, col="grey")
abline(h=0, lty=2, col="grey")
par(mfrow=c(1,1))
}
}

# Source file: CAinterprTools/R/ca_plus.r

#' Scatterplot visualization facility
#'
#' This function allows getting different types of CA scatterplots. It is just a wrapper for
#' functions from the 'ca' and 'FactoMineR' packages.
#' @param data Name of the contingency table (must be in dataframe format).
#' @param x First dimension to be plotted (x=1 by default).
#' @param y Second dimensions to be plotted (y=2 by default).
#' @param type Type of scatterplot to be returned (see examples).
#' @keywords caScatter
#' @export
#' @examples
#' data(greenacre_data)
#'
#' # symmetric scatterplot for rows and columns
#' caScatter(greenacre_data, 1, 2, type=1)
#'
#' # Standard Biplot; 2 plots are returned:
#' #one with row-categories vectors displayed, one for columns categories vectors.
#' caScatter(greenacre_data, 1, 2, type=2)
#'
#' # scatterplot of row categories with groupings
#' #shown by different colors; scatterplot for column categories is also returned
#' caScatter(greenacre_data, 1, 2, type=3)
#'
#' # 3D scatterplot with cluster tree for row categories;
#' #scatterplot for column categories is also returned.
#' caScatter(greenacre_data, 1, 2, type=4)
#'
#' @seealso \code{\link{caPlot}} , \code{\link{caPercept}} , \code{\link{caPlus}} ,
#' \code{\link[ca]{ca}} , \code{\link[FactoMineR]{plot.CA}} , \code{\link[FactoMineR]{HCPC}}
#'
caScatter <- function(data,x=1,y=2,type){
numb.dim.cols<-ncol(data)-1
numb.dim.rows<-nrow(data)-1
dimensionality <- min(numb.dim.cols, numb.dim.rows)
res.ca <- ca(data)
ca.factom <- CA(data, ncp=dimensionality, graph=FALSE)
resclust.rows<-HCPC(ca.factom, nb.clust=-1, metric="euclidean", method="ward", order=TRUE, graph.scale="inertia", graph=FALSE, cluster.CA="rows")
resclust.cols<-HCPC(ca.factom, nb.clust=-1, metric="euclidean", method="ward", order=TRUE, graph.scale="inertia", graph=FALSE, cluster.CA="columns")
if (type==1) {
plot.CA(ca.factom, axes=c(x,y), autoLab = "auto", cex=0.75)
} else {
if (type==2) {
plot(res.ca, mass = FALSE, dim=c(x,y), contrib = "none", col=c("black", "red"), map ="rowgreen", arrows = c(FALSE, TRUE)) #for rows
plot(res.ca, mass = FALSE, dim=c(x,y), contrib = "none", col=c("black", "red"), map ="colgreen", arrows = c(TRUE, FALSE)) #for columns
} else {
if (type==3) {
plot(resclust.rows, axes=c(x,y), choice="map", draw.tree=FALSE, ind.names=TRUE, new.plot=TRUE)
plot(resclust.cols, axes=c(x,y), choice="map", draw.tree=FALSE, ind.names=TRUE, new.plot=TRUE)
} else {
if (type==4) {
plot(resclust.rows, axes=c(x,y), choice="3D.map", draw.tree=TRUE, ind.names=TRUE, new.plot=TRUE)
plot(resclust.cols, axes=c(x,y), choice="3D.map", draw.tree=TRUE, ind.names=TRUE, new.plot=TRUE)
}
}
}
}
}

# Source file: CAinterprTools/R/ca_scatter.r

#'Columns contribution chart
#'
#'This function allows calculating the contribution of the column categories to the selected
#'dimension.
#'
#'The function displays the contribution of the categories as a dot plot. A reference line indicates
#'the threshold above which a contribution can be considered important for the determination of the
#'selected dimension. The parameter categ.sort=TRUE sorts the categories in descending order of
#'contribution to the inertia of the selected dimension. At the left-hand side of the plot, the
#'categories' labels are given a symbol (+ or -) according to whether each category is actually
#'contributing to the definition of the positive or negative side of the dimension, respectively.
#'The categories are grouped into two groups: 'major' and 'minor' contributors to the inertia of the
#'selected dimension. At the right-hand side, a legend (which is enabled/disabled using the 'leg'
#'parameter) reports the correlation (sqrt(COS2)) of the row categories with the selected dimension.
#'A symbol (+ or -) indicates with which side of the selected dimension each row category is
#'correlated.
#'
#'@param data Name of the dataset (must be in dataframe format).
#'@param x Dimension for which the column categories contribution is returned (1st dimension by
#' default).
#'@param categ.sort Logical value (TRUE/FALSE) which allows sorting the categories in descending
#' order of contribution to the inertia of the selected dimension. TRUE is set by default.
#'@param corr.thrs Threshold above which the row categories correlation will be displayed in the
#' plot's legend.
#'@param leg Enable (TRUE; default) or disable (FALSE) the legend at the right-hand side of the
#' dot plot.
#'@param cex.labls Adjust the size of the dot plot's labels.
#'@param dotprightm Increases the empty space between the right margin of the dot plot and the left
#' margin of the legend box.
#'@param cex.leg Adjust the size of the legend's characters.
#'@param leg.x.spc Adjust the horizontal space of the chart's legend. See more info from the
#' 'legend' function's help (?legend).
#'@param leg.y.spc Adjust the y interspace of the chart's legend. See more info from the 'legend'
#' function's help (?legend).
#'
#'@keywords cols.cntr
#'
#'@export
#'
#' @examples
#' data(greenacre_data)
#'
#' # Plots the contribution of the column
#' #categories to the 2nd CA dimension, and also displays the contribution to the total inertia.
#' #The categories are sorted in descending order of contribution
#' #to the inertia of the selected dimension.
#'
#' cols.cntr(greenacre_data, 2, categ.sort=TRUE)
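#'
#' #A minimal sketch (not part of cols.cntr) of the values behind the dot plot: the
#' #contributions returned by CA() (percentages) are expressed in permills, and a
#' #category is a major contributor when it exceeds the average contribution,
#' #i.e. (100/number of columns)*10 permills.
#'
#' library(FactoMineR)
#' res <- CA(greenacre_data, graph=FALSE)
#' cntr.permill <- res$col$contrib[,2]*10               #contributions to Dim. 2 in permills
#' cntr.permill > (100/ncol(greenacre_data))*10         #TRUE = major contributor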
#'
#' @seealso \code{\link{cols.cntr.scatter}} , \code{\link{rows.cntr}} ,
#' \code{\link{rows.cntr.scatter}}
#'
cols.cntr <- function (data, x = 1, categ.sort = TRUE, corr.thrs=0.0, leg=TRUE, cex.labls=0.75, dotprightm=5, cex.leg=0.6, leg.x.spc=1, leg.y.spc=1){
corr=NULL
ncols <- ncol(data)
cadataframe <- CA(data, graph = FALSE)
res.ca <- summary(ca(data))
df <- data.frame(cntr = cadataframe$col$contrib[, x] * 10, cntr.tot = res.ca$columns[, 4], coord=cadataframe$col$coord[,x])
df$labels <- ifelse(df$coord<0,paste(rownames(df), " -", sep = ""), paste(rownames(df), " +", sep = ""))
df.row.corr <- data.frame(coord=cadataframe$row$coord[,x], corr=round(sqrt(cadataframe$row$cos2[,x]), 3))
df.row.corr$labels <- ifelse(df.row.corr$coord<0,paste(rownames(df.row.corr), " - ", sep = ""), paste(rownames(df.row.corr), " + ", sep = ""))
df.row.corr$specif <- paste0(df.row.corr$labels, "(", df.row.corr$corr, ")")
ifelse(corr.thrs==0.0, df.row.corr <- df.row.corr, df.row.corr <- subset(df.row.corr, corr>=corr.thrs))
ifelse(categ.sort == TRUE, df.to.use <- df[order(-df$cntr), ], df.to.use <- df)
df.to.use$majcontr <- ifelse(df.to.use$cntr>round(((100/ncols) * 10)), "maj. contr.", "min. contr.")
if(leg==TRUE){
par(oma=c(0,0,0,dotprightm))
} else {}
dotchart2(df.to.use$cntr,
labels = df.to.use$labels,
groups=df.to.use$majcontr,
sort. = FALSE,
lty = 2,
xlim = c(0, 1000),
cex.labels=cex.labls,
xlab = paste("Column categories' contribution to Dim. ", x, " (in permills)"))
if(leg==TRUE){
par(oma=c(0,0,0,0))
legend(x="topright",
legend=df.row.corr[order(-df.row.corr$corr),]$specif,
xpd=TRUE,
cex=cex.leg,
x.intersp = leg.x.spc,
y.intersp = leg.y.spc)
par(oma=c(0,0,0,dotprightm))
} else {}
abline(v = round(((100/ncols) * 10), digits = 0), lty = 2, col = "RED")
par(oma=c(0,0,0,0))
}

# Source file: CAinterprTools/R/cols_cntr.R

#' Scatterplot for column categories contribution to dimensions
#'
#' This function allows plotting a scatterplot of the contribution of column
#' categories to two selected dimensions. Two reference lines (in RED) indicate
#' the threshold above which the contribution can be considered important for
#' the determination of the dimensions. A diagonal line is a visual aid to
#' eyeball whether a category is actually contributing more (in relative terms)
#' to either of the two dimensions. The column categories' labels are coupled
#' with + or - symbols within round brackets indicating to which side of the two
#' selected dimensions the contribution values that can be read off from the
#' chart actually refer. The first symbol (i.e., the one to the left),
#' either + or -, refers to the first of the selected dimensions (i.e., the one
#' reported on the x-axis). The second symbol (i.e., the one to the right)
#' refers to the second of the selected dimensions (i.e., the one reported on
#' the y-axis).
#'
#' @param data Name of the dataset (must be in dataframe format).
#' @param x First dimension for which the contributions are reported (x=1 by
#' default).
#' @param y Second dimension for which the contributions are reported (y=2 by
#' default).
#' @param filter Filter the categories in order to only display those that have a
#' major contribution to the definition of the selected dimensions.
#' @param cex.labls Adjust the size of the categories' labels
#'
#' @keywords cols.cntr.scatter
#'
#' @export
#'
#' @examples
#' data(greenacre_data)
#'
#' #Plots the scatterplot of the column categories contribution to dimensions 1&2.
#'
#' cols.cntr.scatter(greenacre_data,1,2)
#'
#' @seealso \code{\link{cols.cntr}} , \code{\link{rows.cntr}} , \code{\link{rows.cntr.scatter}}
#'
cols.cntr.scatter <- function (data, x = 1, y = 2, filter=FALSE, cex.labls=3) {
cntr1=cntr2=labels.final=NULL
ncols <- ncol(data)
nrows <- nrow(data)
numb.dim.cols <- ncol(data) - 1
numb.dim.rows <- nrow(data) - 1
a <- min(numb.dim.cols, numb.dim.rows)
pnt_labls <- colnames(data)
res <- CA(data, ncp = a, graph = FALSE)
dfr <- data.frame(lab = pnt_labls, cntr1 = res$col$contrib[,x] * 10, cntr2 = res$col$contrib[, y] * 10, coord1=res$col$coord[,x], coord2=res$col$coord[,y])
dfr$labels1 <- ifelse(dfr$coord1 < 0, "-", "+")
dfr$labels2 <- ifelse(dfr$coord2 < 0, "-", "+")
dfr$labels.final <- paste0(dfr$lab, " (",dfr$labels1,",",dfr$labels2, ")")
xmax <- max(dfr[, 2]) + 10
ymax <- max(dfr[, 3]) + 10
limit.value <- max(xmax, ymax)
ifelse(filter==FALSE, dfr <- dfr, dfr <- subset(dfr, cntr1>(100/ncols)*10 | cntr2>(100/ncols)*10))
p <- ggplot(dfr, aes(x = cntr1, y = cntr2)) + geom_point(alpha = 0.8) +
geom_hline(yintercept = round((100/ncols) * 10, digits = 0), colour = "red", linetype = "dashed") +
geom_vline(xintercept = round((100/ncols) * 10, digits = 0), colour = "red", linetype = "dashed") +
scale_y_continuous(limits = c(0, limit.value)) + scale_x_continuous(limits = c(0,limit.value)) +
geom_abline(intercept = 0, slope = 1, colour="#00000088") +
theme(panel.background = element_rect(fill="white", colour="black")) +
geom_text_repel(data = dfr, aes(label = labels.final), size = cex.labls) +
labs(x = paste("Column categories' contribution (permills) to Dim.",x), y = paste("Column categories' contribution (permills) to Dim.", y)) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE)
return(p)
}

# Source file: CAinterprTools/R/cols_cntr_scatter.R

#' Chart of columns correlation with a selected dimension
#'
#' This function allows calculating the correlation (sqrt(COS2)) of the column categories with the
#' selected dimension.
#'
#' The function displays the correlation of the column categories with the selected dimension; the
#' parameter categ.sort=TRUE arranges the categories in decreasing order of correlation. At the
#' left-hand side, the categories' labels show a symbol (+ or -) according to whether they are
#' correlated with the positive or the negative side of the selected dimension. The categories are grouped
#' into two groups: categories correlated with the positive ('pole +') or negative ('pole -') pole
#' of the selected dimension. At the right-hand side, a legend (which is enabled/disabled using the
#' 'leg' parameter) indicates the row categories' contribution (in permills) to the selected
#' dimension (value enclosed within round brackets), and a symbol (+ or -) indicating whether they
#' are actually contributing to the definition of the positive or negative side of the dimension,
#' respectively. Further, an asterisk (*) flags the categories which can be considered major
#' contributors to the definition of the dimension.
#' @param data Name of the dataset (must be in dataframe format).
#' @param x Dimension for which the column categories correlation is returned (1st dimension by
#' default).
#' @param categ.sort Logical value (TRUE/FALSE) which allows sorting the categories in descending
#' order of correlation with the selected dimension. TRUE is set by default.
#' @param filter Filter the row categories listed in the top-right legend, only showing those that
#' have a major contribution to the definition of the selected dimension.
#' @param leg Enable (TRUE; default) or disable (FALSE) the legend at the right-hand side of the
#' dot plot.
#' @param dotprightm Increases the empty space between the right margin of the dot plot and the left
#' margin of the legend box.
#' @param cex.leg Adjust the size of the legend's characters.
#' @param cex.labls Adjust the size of the dot plot's labels.
#' @param leg.x.spc Adjust the horizontal space of the chart's legend. See more info from the
#' 'legend' function's help (?legend).
#' @param leg.y.spc Adjust the y interspace of the chart's legend. See more info from the 'legend'
#' function's help (?legend).
#' @keywords cols.corr
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #Plots the correlation of the column categories with the 1st CA dimension.
#' cols.corr(greenacre_data, 1, categ.sort=TRUE)
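#'
#' #A minimal sketch (not part of cols.corr) of the values behind the dot plot: the
#' #correlation of each column category with a dimension is sqrt(COS2), and the sign
#' #of the category's coordinate tells with which pole of the dimension it is correlated.
#'
#' library(FactoMineR)
#' res <- CA(greenacre_data, graph=FALSE)
#' corr <- round(sqrt(res$col$cos2[,1]), 3)             #correlation with Dim. 1
#' pole <- ifelse(res$col$coord[,1] < 0, "pole -", "pole +")
#' data.frame(corr, pole)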
#'
#' @seealso \code{\link{cols.corr.scatter}} , \code{\link{rows.corr}} ,
#' \code{\link{rows.corr.scatter}}
#'
cols.corr <- function (data, x = 1, categ.sort = TRUE, filter= FALSE, leg=TRUE, dotprightm=5, cex.leg=0.6, cex.labls=0.75, leg.x.spc=1, leg.y.spc=1) {
cntr=NULL
cadataframe <- CA(data, graph = FALSE)
df <- data.frame(corr = round(sqrt((cadataframe$col$cos2[, x])), digits = 3), coord=cadataframe$col$coord[,x])
df$labels <- ifelse(df$coord < 0,
paste(rownames(df), " - ", sep = ""),
paste(rownames(df), " + ", sep = ""))
df.row.cntr <- data.frame(coord=cadataframe$row$coord[,x], cntr=(cadataframe$row$contrib[,x]*10))
df.row.cntr$labels <- ifelse(df.row.cntr$coord < 0,
paste(rownames(df.row.cntr), " - ", sep = ""),
paste(rownames(df.row.cntr), " + ", sep = ""))
df.row.cntr$specif <- ifelse(df.row.cntr$cntr > (100/nrow(data)) * 10,
"*",
"")
df.row.cntr$specif2 <- paste0(df.row.cntr$specif, df.row.cntr$labels, "(", round(df.row.cntr$cntr,2), ")")
ifelse(categ.sort == TRUE,
df.to.use <- df[order(-df$corr), ],
df.to.use <- df)
df.to.use$pole <- ifelse(df.to.use$coord > 0,
"pole +",
"pole -")
ifelse(filter== FALSE,
df.row.cntr <- df.row.cntr,
df.row.cntr <- subset(df.row.cntr, cntr>(100/nrow(data))*10))
if(leg==TRUE){
par(oma=c(0,0,0,dotprightm))
} else {}
dotchart2(df.to.use$corr,
labels = df.to.use$labels,
groups=df.to.use$pole,
sort. = FALSE,
lty = 2,
xlim = c(0, 1),
cex.labels=cex.labls,
xlab = paste("Column categories' correlation with Dim. ", x))
par(oma=c(0,0,0,0))
if(leg==TRUE){
legend(x="topright",
legend=df.row.cntr[order(-df.row.cntr$cntr),]$specif2,
xpd=TRUE,
cex=cex.leg,
x.intersp = leg.x.spc,
y.intersp = leg.y.spc)
} else {}
}

# Source file: CAinterprTools/R/cols_corr.R

#' Scatterplot for column categories correlation with dimensions
#'
#' This function allows plotting a scatterplot of the correlation (sqrt(COS2)) of column categories
#' with two selected dimensions. A diagonal line is a visual aid to eyeball whether a category is
#' actually more correlated (in relative terms) to either of the two dimensions. The column
#' categories' labels are coupled with two + or - symbols within round brackets indicating to which
#' side of the two selected dimensions the correlation values that can be read off from the chart
#' actually refer. The first symbol (i.e., the one to the left), either + or -, refers to
#' the first of the selected dimensions (i.e., the one reported on the x-axis). The second symbol
#' (i.e., the one to the right) refers to the second of the selected dimensions (i.e., the one
#' reported on the y-axis).
#' @param data Name of the dataset (must be in dataframe format).
#' @param x First dimension for which the correlations are reported (x=1 by default).
#' @param y Second dimension for which the correlations are reported (y=2 by default).
#' @param cex.labls Adjust the size of the categories' labels
#' @keywords cols.corr.scatter
#' @export
#' @examples
#' data(greenacre_data) #load the sample dataset
#'
#' #Plots the scatterplot of the column categories correlation with dimensions 1&2.
#' cols.corr.scatter(greenacre_data,1,2)
#'
#' @seealso \code{\link{cols.corr}} , \code{\link{rows.corr}} ,
#' \code{\link{rows.corr.scatter}}
#'
cols.corr.scatter <- function (data, x = 1, y = 2, cex.labls=3) {
corr1=corr2=labels.final=NULL
ncols <- ncol(data)
nrows <- nrow(data)
numb.dim.cols <- ncol(data) - 1
numb.dim.rows <- nrow(data) - 1
a <- min(numb.dim.cols, numb.dim.rows)
pnt_labls <- colnames(data)
res <- CA(data, ncp = a, graph = FALSE)
dfr <- data.frame(lab = pnt_labls, corr1 = round(sqrt(res$col$cos2[,x]), digits = 3), corr2 = round(sqrt(res$col$cos2[, y]), digits = 3), coord1=res$col$coord[,x], coord2=res$col$coord[,y])
dfr$labels1 <- ifelse(dfr$coord1 < 0, "-", "+")
dfr$labels2 <- ifelse(dfr$coord2 < 0, "-", "+")
dfr$labels.final <- paste0(dfr$lab, " (",dfr$labels1,",",dfr$labels2, ")")
p <- ggplot(dfr, aes(x = corr1, y = corr2)) +
geom_point(alpha = 0.8) + scale_y_continuous(limits = c(0, 1)) +
scale_x_continuous(limits = c(0,1)) +
geom_abline(intercept = 0, slope = 1, colour="#00000088") +
theme(panel.background = element_rect(fill="white", colour="black")) +
geom_text_repel(data = dfr, aes(label = labels.final), size = cex.labls) +
labs(x = paste("Column categories' correlation with Dim.", x), y = paste("Column categories' correlation with Dim.",y)) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE)
return(p)
}
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/cols_corr_scatter.R
|
#' Chart of columns quality of the display
#'
#' This function allows you to calculate the quality of the display of the
#' column categories on pairs of selected dimensions.
#' @param data Name of the dataset (must be in dataframe format).
#' @param x First dimension for which the quality is calculated (x=1 by
#' default).
#' @param y Second dimension for which the quality is calculated (y=2 by
#' default).
#' @param categ.sort Logical value (TRUE/FALSE) which allows you to sort the categories in
#' descending order of quality of the representation on the subspace defined
#' by the selected dimensions. TRUE is set by default.
#' @param cex.labls Adjust the size of the dot plot's labels.
#' @keywords cols.qlt
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #Plots the quality of the display of the column categories on the 1&2 dimensions.
#' cols.qlt(greenacre_data, 1,2, categ.sort=TRUE)
#'
#' @seealso \code{\link{rows.qlt}}
#'
cols.qlt <- function (data, x=1, y=2, categ.sort=TRUE, cex.labls=0.75){
cadataframe <- CA(data, graph=FALSE)
df <- data.frame(qlt=cadataframe$col$cos2[,x]*100+cadataframe$col$cos2[,y]*100, labels=colnames(data))
ifelse(categ.sort==TRUE, df.to.use <- df[order(-df$qlt),], df.to.use <- df)
dotchart2(df.to.use$qlt, labels=df.to.use$labels, sort.=FALSE,lty=2, xlim=c(0, 100), cex.labels=cex.labls, xlab=paste("Column categories' quality of the display (% of inertia) on Dim.", x, "+",y))
}
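# Illustrative sketch (not part of the package): the "quality" plotted by cols.qlt()
# is the squared cosine of each column category on the two selected dimensions,
# summed and expressed as a percentage. Assumes FactoMineR is installed and the
# greenacre_data dataset is available.
data(greenacre_data, package = "CAinterprTools")
res.sketch <- FactoMineR::CA(greenacre_data, graph = FALSE)
round(sort((res.sketch$col$cos2[, 1] + res.sketch$col$cos2[, 2]) * 100, decreasing = TRUE), 1)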
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/cols_qlt.R
|
#' Dataset: Cross-tabulation of quantity of tobacco smoked daily vs. cause of
#' death
#'
#' Cross-tabulation (15x4) of the amount of tobacco smoked on a daily basis (in
#' grams) against cause of death.\cr After: Velleman P F, Hoaglin D C,
#' Applications, Basics, and Computing of Exploratory Data Analysis, Wadsworth
#' Pub Co 1984 (Exhibit 8-1)
#'
#'
#' @docType data
#' @keywords datasets
#' @name diseases
#' @usage data(diseases)
#' @format dataframe
NULL
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/diseases.r
|
#' Dataset: Cross-tabulation of cause of fire vs. amount of money loss
#'
#' Cross-tabulation (9x4) of the amount of money loss against cause of fire.\cr
#' After: Li et al, Influences of Time, Location, and Cause Factors on the
#' Probability of Fire Loss in China: A Correspondence Analysis, in Fire
#' Technology 50(5), 2014, 1181-1200 (table 5)
#'
#'
#' @docType data
#' @keywords datasets
#' @name fire_loss
#' @usage data(fire_loss)
#' @format dataframe
NULL
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/fire_loss.r
|
#' Dataset: Cross-tabulation of funding category vs. University faculty
#'
#' Cross-tabulation (10x5) of funding category against University faculty.\cr
#' After: Greenacre M, Correspondence Analysis in Practice, Boca
#' Raton-London-New York, Chapman&Hall/CRC 2007 (exhibit 12.1)
#'
#'
#' @docType data
#' @keywords datasets
#' @name greenacre_data
#' @usage data(greenacre_data)
#' @format dataframe
NULL
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/greenacre_data.r
|
#'Define groups of categories on the basis of a selected partition into k groups
#'employing the Jenks' natural break method on the selected dimension's
#'coordinates
#'
#'The function allows you to group the row/column categories into k user-defined
#'partitions.
#'
#'K groups are created employing the Jenks' natural break method applied on the
#'selected dimension's coordinates. A dot chart is returned representing the
#'categories grouped into the selected partitions. At the bottom of the chart,
#'the Goodness of Fit statistic is also reported. The function also returns a
#'dataframe storing the categories' coordinates on the selected dimension and
#'the group each category belongs to.
#'
#'@param data Name of the dataset (must be in dataframe format).
#'@param x Dimension whose coordinates are used to build the partitions.
#'@param k Number of groups.
#'@param which Specify if rows ("rows"; default) or columns ("cols") must be
#' grouped.
#'@param cex.labls Set the size of the labels of the dot chart (0.75 by default).
#'
#'@keywords groupBycoord
#'
#'@export
#'
#' @examples
#' data(greenacre_data)
#'
#' #divide the row categories into 3 groups on the basis of the coordinates
#' #of the 1st dimension, and store the result into a 'res' object
#' res <- groupBycoord(greenacre_data, x=1, k=3, which="rows")
#'
#' @seealso \code{\link{caCluster}}
#'
groupBycoord <- function (data, x=1, k=3, which="rows", cex.labls=0.75){
categ=NULL
res <- CA(data, graph=FALSE)
ifelse(which=="rows",
dtf <- data.frame(categ=row.names(res$row$coord), coord.x=res$row$coord[,x]),
dtf <- data.frame(categ=row.names(res$col$coord), coord.x=res$col$coord[,x]))
dtf <- dtf[order(dtf$coord.x),]
Jclassif <- classInt::classIntervals(dtf$coord.x, k, style = "jenks")
GoFtest <- jenks.tests(Jclassif)
dtf$group <- as.factor(cut(dtf$coord.x, unique(Jclassif$brks), labels=FALSE, include.lowest=TRUE))
dotchart2(dtf$coord.x,
labels=dtf$categ,
groups=as.factor(paste0("group ", dtf$group)),
lty=2,
cex.labels=cex.labls,
xlab=paste0("coordinate on the ", x, " dim."),
main=paste0(ifelse(which=="rows", "Row", "Column"), " categories clustered into ", k, " groups (Jenks' natural breaks on the coord. of the selected dim.)"),
cex.main=0.90,
sub=paste0("Goodness of Fit: ", round(GoFtest[2],2)),
cex.sub=0.7)
colnames(dtf)[2] <- paste0("coord.", x,".Dim")
return(subset(dtf, , -c(categ)))
}
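# Illustrative sketch (not part of the package): how the Jenks partition used by
# groupBycoord() is derived from the first-dimension row coordinates. Assumes
# FactoMineR and classInt are installed and greenacre_data is available.
data(greenacre_data, package = "CAinterprTools")
coord1 <- FactoMineR::CA(greenacre_data, graph = FALSE)$row$coord[, 1]
brks <- classInt::classIntervals(coord1, 3, style = "jenks")
grp <- cut(coord1, unique(brks$brks), labels = FALSE, include.lowest = TRUE)
table(grp)  # number of row categories falling in each of the 3 groups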
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/groupBycoord.R
|
#' Malinvaud's test for significance of the CA dimensions
#'
#' This function allows you to perform the Malinvaud's test, which assesses the
#' significance of the CA dimensions.
#'
#' The function returns both a table in the R console and a plot. The former
#' lists relevant information, among which the significance of each CA
#' dimension. The dot chart graphically represents the p-value of each dimension;
#' dimensions are grouped by level of significance; a red reference lines
#' indicates the 0.05 threshold.
#' @param data Name of the dataset (must be in dataframe format).
#' @keywords malinvaud
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #perform the Malinvaud test using the 'greenacre_data' dataset
#' #and store the output table in a object named 'res'
#' res <- malinvaud(greenacre_data)
#'
#' @seealso \code{\link{sig.dim.perm.scree}}
#'
malinvaud <- function (data) {
grandtotal <- sum(data)
nrows <- nrow(data)
ncols <- ncol(data)
numb.dim.cols <- ncol(data) - 1
numb.dim.rows <- nrow(data) - 1
a <- min(numb.dim.cols, numb.dim.rows)
labs <- c(1:a)
res.ca <- CA(data, ncp = a, graph = FALSE)
malinv.test.rows <- a
malinv.test.cols <- 7
malinvt.output <- as.data.frame(matrix(ncol = malinv.test.cols, nrow = malinv.test.rows))
colnames(malinvt.output) <- c("K", "Dimension", "Eigenvalue", "Chi-square", "df", "p-value", "p-class")
malinvt.output[,1] <- c(0:(a - 1))
malinvt.output[,2] <- paste0("dim. ",c(1:a))
for (i in 1:malinv.test.rows) {
k <- -1 + i
malinvt.output[i,3] <- res.ca$eig[i, 1]
malinvt.output[i,5] <- (nrows - k - 1) * (ncols - k - 1)
}
malinvt.output[,4] <- rev(cumsum(rev(malinvt.output[, 3]))) * grandtotal
pvalue <- pchisq(malinvt.output[,4], malinvt.output[,5], lower.tail = FALSE)
malinvt.output[,6] <- pvalue
malinvt.output[,7] <- ifelse(pvalue < 0.001, "p < 0.001",
ifelse(pvalue < 0.01, "p < 0.01",
ifelse(pvalue < 0.05, "p < 0.05",
"p > 0.05")))
dotchart2(pvalue,
labels = malinvt.output[,2],
groups=malinvt.output[,7],
sort. = FALSE,
lty = 2,
xlim = c(0, 1),
main="Malinvaud's test for the significance of CA dimensions",
xlab = paste("p-value"),
ylab = "Dimensions",
cex.main=0.9,
cex.labels=0.75)
abline(v = 0.05, lty = 2, col = "RED")
return(malinvt.output)
}
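# Illustrative sketch (not part of the package): the statistic tabulated by
# malinvaud() for K retained dimensions is the grand total times the sum of the
# eigenvalues beyond the K-th, referred to a chi-squared distribution with
# (nrow-K-1)*(ncol-K-1) degrees of freedom; for K=0 it reduces to the ordinary
# chi-squared test of independence. Assumes FactoMineR is installed and
# greenacre_data is available.
data(greenacre_data, package = "CAinterprTools")
N <- sum(greenacre_data)
eig <- FactoMineR::CA(greenacre_data, graph = FALSE)$eig[, 1]
K <- 1
stat <- N * sum(eig[(K + 1):length(eig)])
df <- (nrow(greenacre_data) - K - 1) * (ncol(greenacre_data) - K - 1)
pchisq(stat, df, lower.tail = FALSE)  # p-value for the dimensions beyond the first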
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/malinvaud.R
|
#' Rescaling row/column categories coordinates between a minimum and maximum
#' value
#'
#' This function allows you to rescale the coordinates of a selected dimension to be
#' constrained between a minimum and a maximum user-defined value.
#'
#' The rationale of the function is that users may wish to use the coordinates
#' on a given dimension to devise a scale, along the lines of what is
#' accomplished in:\cr Greenacre M 2002, "The Use of Correspondence Analysis in
#' the Exploration of Health Survey Data", Documentos de Trabajo 5, Fundacion
#' BBVA, pp. 7-39\cr The function returns a chart representing the row/column
#' categories against the rescaled coordinates from the selected dimension. A
#' dataframe is also returned containing the original values (i.e., the
#' coordinates) and the corresponding rescaled values.
#' @param data Name of the dataset (must be in dataframe format).
#' @param x Dimension for which the row categories contribution is returned (1st
#' dimension by default).
#' @param which Specify if rows ("rows", default) or columns ("cols") must be
#' grouped.
#' @param min.v Minimum value of the new scale (0 by default).
#' @param max.v Maximum value of the new scale (100 by default).
#' @keywords rescale
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #rescale the row coordinates between 0 and 10
#' res <- rescale(greenacre_data, which="rows", min.v=0, max.v=10)
#'
rescale <- function (data, x=1, which="rows", min.v=0, max.v=100) {
category=NULL
res <- CA(data, graph=FALSE)
ifelse(which=="rows",
coord.x <- res$row$coord[,x],
coord.x <- res$col$coord[,x])
resc.v <- ((coord.x-min(coord.x))*(max.v-min.v)/(max(coord.x)-min(coord.x)))+min.v
df <- data.frame(category=rownames(as.data.frame(coord.x)), orignal.v=coord.x, rescaled.v=resc.v)
plot(sort(df$rescaled.v),
xaxt="n",
xlab="categories",
ylab=paste0(x, " Dim. rescaled coordinates"),
pch=20,
type="b",
main=paste0("Plot of ", ifelse(which=="rows", "row", "column"), " categories against ", x, " Dim. coordinates rescaled between ", min.v, " and ", max.v),
cex.main=0.95)
axis(1, at=1:nrow(df), labels=df$category)
return(subset(df, , -c(category)))
}
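# Illustrative sketch (not part of the package): the rescaling performed by
# rescale() is a plain min-max (linear) transformation of the coordinates.
# Hypothetical coordinate values:
coord <- c(-0.8, -0.1, 0.3, 1.2)
min.v <- 0; max.v <- 10
(coord - min(coord)) * (max.v - min.v) / (max(coord) - min(coord)) + min.v
# returns 0.0 3.5 5.5 10.0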
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/rescale.R
|
#' Rows contribution chart
#'
#' This function allows you to calculate the contribution of the row categories to
#' the selected dimension.
#'
#' The function displays the contribution of the categories as a dot plot. A
#' reference line indicates the threshold above which a contribution can be
#' considered important for the determination of the selected dimension. The
#' parameter categ.sort=TRUE sorts the categories in descending order of contribution
#' to the inertia of the selected dimension. At the left-hand side of the plot,
#' the categories' labels are given a symbol (+ or -) according to whether each
#' category is actually contributing to the definition of the positive or
#' negative side of the dimension, respectively. The categories are grouped into
#' two groups: 'major' and 'minor' contributors to the inertia of the selected
#' dimension. At the right-hand side, a legend (which is enabled/disabled using
#' the 'leg' parameter) reports the correlation (sqrt(COS2)) of the column
#' categories with the selected dimension. A symbol (+ or -) indicates with
#' which side of the selected dimension each column category is correlated.
#' @param data Name of the dataset (must be in dataframe format).
#' @param x Dimension for which the row categories contribution is returned (1st
#' dimension by default).
#' @param categ.sort Logical value (TRUE/FALSE) which allows you to sort the categories in
#' descending order of contribution to the inertia of the selected dimension.
#' TRUE is set by default.
#' @param corr.thrs Threshold above which the column categories correlation will
#' be displayed in the plot's legend.
#' @param leg Enable (TRUE; default) or disable (FALSE) the legend at the
#' right-hand side of the dot plot.
#' @param cex.labls Adjust the size of the dot plot's labels.
#' @param dotprightm Increases the empty space between the right margin of the
#' dot plot and the left margin of the legend box.
#' @param cex.leg Adjust the size of the legend's characters.
#' @param leg.x.spc Adjust the horizontal space of the chart's legend. See more
#' info from the 'legend' function's help (?legend).
#' @param leg.y.spc Adjust the y interspace of the chart's legend. See more info
#' from the 'legend' function's help (?legend).
#' @keywords rows.cntr
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #Plots the contribution of the row categories to the 2nd CA dimension,
#' #and also displays the contribution to the total inertia.
#' #The categories are sorted in descending order of contribution to the inertia
#' #of the selected dimension.
#' rows.cntr(greenacre_data, 2, categ.sort=TRUE)
#'
#' @seealso \code{\link{rows.cntr.scatter}} , \code{\link{cols.cntr}} ,
#' \code{\link{cols.cntr.scatter}}
#'
rows.cntr <- function (data, x = 1, categ.sort = TRUE, corr.thrs=0.0, leg=TRUE, cex.labls=0.75, dotprightm=5, cex.leg=0.6, leg.x.spc=1, leg.y.spc=1){
corr=NULL
nrows <- nrow(data)
cadataframe <- CA(data, graph = FALSE)
res.ca <- summary(ca(data))
df <- data.frame(cntr = cadataframe$row$contrib[, x] * 10, cntr.tot = res.ca$rows[, 4], coord=cadataframe$row$coord[,x])
df$labels <- ifelse(df$coord<0,paste(rownames(df), " -", sep = ""), paste(rownames(df), " +", sep = ""))
df.col.corr <- data.frame(coord=cadataframe$col$coord[,x], corr=round(sqrt(cadataframe$col$cos2[,x]), 3))
df.col.corr$labels <- ifelse(df.col.corr$coord<0,paste(rownames(df.col.corr), " - ", sep = ""), paste(rownames(df.col.corr), " + ", sep = ""))
df.col.corr$specif <- paste0(df.col.corr$labels, "(", df.col.corr$corr, ")")
ifelse(corr.thrs==0.0, df.col.corr <- df.col.corr, df.col.corr <- subset(df.col.corr, corr>=corr.thrs))
ifelse(categ.sort == TRUE, df.to.use <- df[order(-df$cntr), ], df.to.use <- df)
df.to.use$majcontr <- ifelse(df.to.use$cntr>round(((100/nrows) * 10)), "maj. contr.", "min. contr.")
if(leg==TRUE){
par(oma=c(0,0,0,dotprightm))
} else {}
dotchart2(df.to.use$cntr,
labels = df.to.use$labels,
groups=df.to.use$majcontr,
sort. = FALSE,
lty = 2,
xlim = c(0, 1000),
cex.labels=cex.labls,
xlab = paste("Row categories' contribution to Dim. ", x, " (in permills)"))
if(leg==TRUE){
par(oma=c(0,0,0,0))
legend(x="topright",
legend=df.col.corr[order(-df.col.corr$corr),]$specif,
xpd=TRUE,
cex=cex.leg,
x.intersp = leg.x.spc,
y.intersp = leg.y.spc)
par(oma=c(0,0,0,dotprightm))
} else {}
abline(v = round(((100/nrows) * 10), digits = 0), lty = 2, col = "RED")
par(oma=c(0,0,0,0))
}
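# Illustrative sketch (not part of the package): the red reference line drawn by
# rows.cntr() is the average contribution expected if every row contributed equally,
# expressed in permills, i.e. (100/nrow(data))*10. For the 10-row greenacre_data table:
(100 / 10) * 10  # 100 permills; rows above this value are flagged as major contributors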
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/rows_cntr.R
|
#' Scatterplot for row categories contribution to dimensions
#'
#' This function allows you to plot a scatterplot of the contribution of row
#' categories to two selected dimensions. Two reference lines (in RED) indicate
#' the threshold above which the contribution can be considered important for
#' the determination of the dimensions. A diagonal line is a visual aid to
#' eyeball whether a category is actually contributing more (in relative terms)
#' to either of the two dimensions. The row categories' labels are coupled with
#' + or - symbols within round brackets indicating to which side of the two
#' selected dimensions the contribution values that can be read off from the
#' chart are actually referring. The first symbol (i.e., the one to the left),
#' either + or -, refers to the first of the selected dimensions (i.e., the one
#' reported on the x-axis). The second symbol (i.e., the one to the right)
#' refers to the second of the selected dimensions (i.e., the one reported on
#' the y-axis).
#' @param data Name of the dataset (must be in dataframe format).
#' @param x First dimension for which the contributions are reported (x=1 by
#' default).
#' @param y Second dimension for which the contributions are reported (y=2 by
#' default).
#' @param filter Filter the categories in order to only display those that have a
#' major contribution to the definition of the selected dimensions.
#' @param cex.labls Adjust the size of the categories' labels
#' @keywords rows.cntr.scatter
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #Plot the scatterplot of the row categories contribution to dimensions 1&2.
#' rows.cntr.scatter(greenacre_data,1,2)
#'
#' @seealso \code{\link{rows.cntr}} , \code{\link{cols.cntr}} , \code{\link{cols.cntr.scatter}}
#'
rows.cntr.scatter <- function (data, x = 1, y = 2, filter=FALSE, cex.labls=3){
cntr1=cntr2=labels.final=NULL
ncols <- ncol(data)
nrows <- nrow(data)
numb.dim.cols <- ncol(data) - 1
numb.dim.rows <- nrow(data) - 1
a <- min(numb.dim.cols, numb.dim.rows)
pnt_labls <- rownames(data)
res <- CA(data, ncp = a, graph = FALSE)
dfr <- data.frame(lab = pnt_labls, cntr1 = res$row$contrib[,x] * 10, cntr2 = res$row$contrib[, y] * 10, coord1=res$row$coord[,x], coord2=res$row$coord[,y])
dfr$labels1 <- ifelse(dfr$coord1 < 0, "-", "+")
dfr$labels2 <- ifelse(dfr$coord2 < 0, "-", "+")
dfr$labels.final <- paste0(dfr$lab, " (",dfr$labels1,",",dfr$labels2, ")")
xmax <- max(dfr[, 2]) + 10
ymax <- max(dfr[, 3]) + 10
limit.value <- max(xmax, ymax)
ifelse(filter==FALSE, dfr <- dfr, dfr <- subset(dfr, cntr1>(100/nrows)*10 | cntr2>(100/nrows)*10))
p <- ggplot(dfr, aes(x = cntr1, y = cntr2)) + geom_point(alpha = 0.8) +
geom_hline(yintercept = round((100/nrows) * 10, digits = 0), colour = "red", linetype = "dashed") +
geom_vline(xintercept = round((100/nrows) *10, digits = 0), colour = "red", linetype = "dashed") +
scale_y_continuous(limits = c(0, limit.value)) +
scale_x_continuous(limits = c(0,limit.value)) +
geom_abline(intercept = 0, slope = 1, colour="#00000088") +
theme(panel.background = element_rect(fill="white", colour="black")) +
geom_text_repel(data = dfr, aes(label = labels.final), size = cex.labls) +
labs(x = paste("Row categories' contribution (permills) to Dim.",x), y = paste("Row categories' contribution (permills) to Dim.", y)) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE)
return(p)
}
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/rows_cntr_scatter.R
|
#' Chart of rows correlation with a selected dimension
#'
#' This function allows you to calculate the correlation (sqrt(COS2)) of the row
#' categories with the selected dimension.
#'
#' The function displays the correlation of the row categories with the selected
#' dimension; the parameter categ.sort=TRUE arranges the categories in decreasing order
#' of correlation. At the left-hand side, the categories' labels show a symbol
#' (+ or -) indicating the side of the selected dimension with which they are
#' correlated (positive or negative). The categories are grouped into two
#' groups: categories correlated with the positive ('pole +') or negative ('pole
#' -') pole of the selected dimension. At the right-hand side, a legend (which
#' is enabled/disabled using the 'leg' parameter) indicates the column
#' categories' contribution (in permills) to the selected dimension (value
#' enclosed within round brackets), and a symbol (+ or -) indicating whether
#' they are actually contributing to the definition of the positive or negative
#' side of the dimension, respectively. Further, an asterisk (*) flags the
#' categories which can be considered major contributors to the definition of
#' the dimension.
#' @param data Name of the dataset (must be in dataframe format).
#' @param x Dimension for which the row categories correlation is returned (1st
#' dimension by default).
#' @param categ.sort Logical value (TRUE/FALSE) which allows you to sort the categories in
#' descending order of correlation with the selected dimension. TRUE is set by
#' default.
#' @param filter Filter the column categories listed in the top-right legend,
#' only showing those that have a major contribution to the definition of the
#' selected dimension.
#' @param leg Enable (TRUE; default) or disable (FALSE) the legend at the
#' right-hand side of the dot plot.
#' @param dotprightm Increases the empty space between the right margin of the
#' dot plot and the left margin of the legend box.
#' @param cex.leg Adjust the size of the legend's characters.
#' @param cex.labls Adjust the size of the dot plot's labels.
#' @param leg.x.spc Adjust the horizontal space of the chart's legend. See more
#' info from the 'legend' function's help (?legend).
#' @param leg.y.spc Adjust the y interspace of the chart's legend. See more info
#' from the 'legend' function's help (?legend).
#' @keywords rows.corr
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #Plots the correlation of the row categories with the 1st CA dimension.
#' rows.corr(greenacre_data, 1, categ.sort=TRUE)
#'
#' @seealso \code{\link{rows.corr.scatter}} , \code{\link{cols.corr}} ,
#' \code{\link{cols.corr.scatter}}
#'
rows.corr <- function (data, x = 1, categ.sort = TRUE, filter= FALSE, leg=TRUE, dotprightm=5, cex.leg=0.6, cex.labls=0.75, leg.x.spc=1, leg.y.spc=1) {
cntr=NULL
cadataframe <- CA(data, graph = FALSE)
df <- data.frame(corr = round(sqrt((cadataframe$row$cos2[, x])), digits = 3), coord=cadataframe$row$coord[,x])
df$labels <- ifelse(df$coord < 0,
paste(rownames(df), " - ", sep = ""),
paste(rownames(df), " + ", sep = ""))
df.col.cntr <- data.frame(coord=cadataframe$col$coord[,x], cntr=(cadataframe$col$contrib[,x]*10))
df.col.cntr$labels <- ifelse(df.col.cntr$coord < 0,
paste(rownames(df.col.cntr), " - ", sep = ""),
paste(rownames(df.col.cntr), " + ", sep = ""))
df.col.cntr$specif <- ifelse(df.col.cntr$cntr > (100/ncol(data)) * 10,
"*",
"")
df.col.cntr$specif2 <- paste0(df.col.cntr$specif, df.col.cntr$labels, "(", round(df.col.cntr$cntr,2), ")")
ifelse(categ.sort == TRUE,
df.to.use <- df[order(-df$corr), ],
df.to.use <- df)
df.to.use$pole <- ifelse(df.to.use$coord > 0,
"pole +",
"pole -")
ifelse(filter== FALSE,
df.col.cntr <- df.col.cntr,
df.col.cntr <- subset(df.col.cntr, cntr>(100/ncol(data))*10))
if(leg==TRUE){
par(oma=c(0,0,0,dotprightm))
} else {}
dotchart2(df.to.use$corr,
labels = df.to.use$labels,
groups=df.to.use$pole,
sort. = FALSE,
lty = 2,
xlim = c(0, 1),
cex.labels=cex.labls,
xlab = paste("Row categories' correlation with Dim. ", x))
par(oma=c(0,0,0,0))
if(leg==TRUE){
legend(x="topright",
legend=df.col.cntr[order(-df.col.cntr$cntr),]$specif2,
xpd=TRUE,
cex=cex.leg,
x.intersp = leg.x.spc,
y.intersp = leg.y.spc)
} else {}
}
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/rows_corr.R
|
#' Scatterplot for row categories correlation with dimensions
#'
#' This function allows you to plot a scatterplot of the correlation (sqrt(COS2)) of
#' row categories with two selected dimensions. A diagonal line is a visual aid
#' to eyeball whether a category is actually more correlated (in relative terms)
#' to either of the two dimensions. The row categories' labels are coupled with
#' two + or - symbols within round brackets indicating to which side of the two
#' selected dimensions the correlation values that can be read off from the
#' chart are actually referring. The first symbol (i.e., the one to the left),
#' either + or -, refers to the first of the selected dimensions (i.e., the one
#' reported on the x-axis). The second symbol (i.e., the one to the right)
#' refers to the second of the selected dimensions (i.e., the one reported on
#' the y-axis).
#' @param data Name of the dataset (must be in dataframe format).
#' @param x First dimension for which the correlations are reported (x=1 by
#' default).
#' @param y Second dimension for which the correlations are reported (y=2 by
#' default).
#' @param cex.labls Adjust the size of the categories' labels
#' @keywords rows.corr.scatter
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #Plots the scatterplot of the row categories correlation with dimensions 1&2.
#' rows.corr.scatter(greenacre_data,1,2)
#'
#' @seealso \code{\link{rows.corr}} , \code{\link{cols.corr}} ,
#' \code{\link{cols.corr.scatter}}
#'
rows.corr.scatter <- function (data, x = 1, y = 2, cex.labls=3) {
corr1=corr2=labels.final=NULL
ncols <- ncol(data)
nrows <- nrow(data)
numb.dim.cols <- ncol(data) - 1
numb.dim.rows <- nrow(data) - 1
a <- min(numb.dim.cols, numb.dim.rows)
pnt_labls <- rownames(data)
res <- CA(data, ncp = a, graph = FALSE)
dfr <- data.frame(lab = pnt_labls, corr1 = round(sqrt(res$row$cos2[,x]), digits = 3), corr2 = round(sqrt(res$row$cos2[, y]), digits = 3), coord1=res$row$coord[,x], coord2=res$row$coord[,y])
dfr$labels1 <- ifelse(dfr$coord1 < 0, "-", "+")
dfr$labels2 <- ifelse(dfr$coord2 < 0, "-", "+")
dfr$labels.final <- paste0(dfr$lab, " (",dfr$labels1,",",dfr$labels2, ")")
p <- ggplot(dfr, aes(x = corr1, y = corr2)) + geom_point(alpha = 0.8) +
scale_y_continuous(limits = c(0, 1)) + scale_x_continuous(limits = c(0,1)) +
geom_abline(intercept = 0, slope = 1, colour="#00000088") +
theme(panel.background = element_rect(fill="white", colour="black")) +
geom_text_repel(data = dfr, aes(label = labels.final), size = cex.labls) +
labs(x = paste("Row categories' correlation with Dim.",x), y = paste("Row categories' correlation with Dim.",y)) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE)
return(p)
}
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/rows_corr_scatter.R
|
#' Chart of rows quality of the display
#'
#' This function allows you to calculate the quality of the display of the row
#' categories on pairs of selected dimensions.
#' @param data Name of the dataset (must be in dataframe format).
#' @param x First dimension for which the quality is calculated (x=1 by
#' default).
#' @param y Second dimension for which the quality is calculated (y=2 by
#' default).
#' @param categ.sort Logical value (TRUE/FALSE) which allows you to sort the categories in
#' descending order of quality of the representation on the subspace defined
#' by the selected dimensions. TRUE is set by default.
#' @param cex.labls Adjust the size of the dot plot's labels.
#' @keywords rows.qlt
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #Plots the quality of the display of the row categories on the 1&2 dimensions.
#' rows.qlt(greenacre_data,1,2,categ.sort=TRUE)
#'
#' @seealso \code{\link{cols.qlt}}
#'
rows.qlt <- function (data, x=1, y=2,categ.sort=TRUE, cex.labls=0.75){
cadataframe <- CA(data, graph=FALSE)
df <- data.frame(qlt=cadataframe$row$cos2[,x]*100+cadataframe$row$cos2[,y]*100, labels=rownames(data))
ifelse(categ.sort==TRUE, df.to.use <- df[order(-df$qlt),], df.to.use <- df)
dotchart2(df.to.use$qlt, labels=df.to.use$labels, sort.=FALSE,lty=2, xlim=c(0, 100), cex.labels=cex.labls, xlab=paste("Row categories' quality of the display (% of inertia) on Dim.", x, "+", y))
}
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/rows_qlt.R
|
#' Permuted significance of CA dimensions
#'
#' This function calculates the permuted significance of a pair of selected
#' CA dimensions. The number of permutations is set at 999 by default, but can be
#' increased by the user.
#' A scatterplot of the permuted inertia of a pair of selected dimensions is produced.
#' Permuted p-values are reported in the axes' labels and are also returned in a dataframe.
#'
#' @param data Name of the dataset (must be in dataframe format).
#' @param x First dimension whose significance is calculated (x=1 by default).
#' @param y Second dimension whose significance is calculated (y=2 by default).
#' @param B Number of permutations (999 by default).
#'
#' @return The function returns a dataframe storing the permuted p-values of each CA dimension.
#'
#' @keywords sig.dim.perm
#'
#' @export
#'
#' @examples
#' data(greenacre_data)
#'
#' #Produces a scatterplot of the permuted inertia of the 1st CA dimension
#' #against the permuted inertia of the 2nd CA dimension.
#' #The observed inertia of the selected dimensions is displayed as a large red dot;
#' #pvalues are reported in the axes labels (and are stored in a 'pvalues' object).
#'
#' pvalues <- sig.dim.perm(greenacre_data, 1,2, B=99)
#'
#' @seealso \code{\link{sig.dim.perm.scree}}
#'
sig.dim.perm <- function(data, x=1, y=2, B=999) {
nIter <- B
numb.dim.cols <- ncol(data) - 1
numb.dim.rows <- nrow(data) - 1
table.dim <- min(numb.dim.cols, numb.dim.rows)
d <- as.data.frame(matrix(nrow=nIter+1, ncol=table.dim))
res <- CA(data, graph=FALSE)
d[1,]<- rbind(res$eig[,1])
pb <- txtProgressBar(min = 0, max = nIter, style = 3) #set the progress bar to be used inside the loop
for (i in 2:nrow(d)){
rand.table <- as.data.frame(r2dtable(1, apply(data, 1,sum), apply(data, 2, sum)))
res <- CA(rand.table, graph=FALSE)
d[i,] <- rbind(res$eig[,1])
setTxtProgressBar(pb, i)
}
perm.pvalues <- round((1 + colSums(d[-1,] > d[1,][col(d[-1,])])) / (1 + B), 4)
pvalues.toreport <- ifelse(perm.pvalues < 0.001, "< 0.001", ifelse(perm.pvalues < 0.01, "< 0.01", ifelse(perm.pvalues < 0.05, "< 0.05","> 0.05")))
plot(d[,x], d[,y],
main=" Scatterplot of permuted dimensions' inertia",
sub="large red dot: observed inertia",
xlab=paste0("inertia of permuted ", x," Dim. (p-value: ", pvalues.toreport[x], " [", perm.pvalues[x], "])"),
ylab=paste0("inertia of permuted ", y," Dim. (p-value: ", pvalues.toreport[y]," [", perm.pvalues[y], "])"),
cex.sub=0.75,
pch=20,
col="#00000088") # hex code for 'black'; last two digits set the transparency
par(new=TRUE)
plot(d[1,x], d[1,y], xlim=c(min(d[,x]), max(d[,x])), ylim=c(min(d[,y]), max(d[,y])), pch=20, cex=1.5, col="red", xaxt = "n", xlab = "", ylab = "", sub = "") #add the observed inertia as a large red dot
pvalues.df <- as.data.frame(perm.pvalues)
row.names(pvalues.df) <- paste0("dim.", seq(1,table.dim))
return(pvalues.df)
}
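# Illustrative sketch (not part of the package): the permuted p-value reported in
# the axis labels follows the usual (1 + b)/(1 + B) correction, where b is the
# number of permuted inertias larger than the observed one. Hypothetical values
# with B = 5 permutations:
obs.inertia <- 0.25
perm.inertia <- c(0.05, 0.31, 0.12, 0.08, 0.27)
(1 + sum(perm.inertia > obs.inertia)) / (1 + length(perm.inertia))  # 0.5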
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/sig_dim_perm.R
|
#' Scree plot to test the significance of CA dimensions by means of a randomized
#' procedure
#'
#' This function tests the significance of the CA dimensions by means
#' of permutations of the input contingency table. The number of permutations is set at
#' 999 by default, but can be increased by the user. The function returns a
#' scree-plot displaying for each dimension the observed eigenvalue and the 95th
#' percentile of the permuted distribution of the corresponding eigenvalue.
#' Observed eigenvalues that are larger than the corresponding 95th percentile
#' are significant at least at alpha 0.05. Permuted p-values are displayed in the
#' chart and also returned as a dataframe.
#'
#' @param data Name of the contingency table (must be in dataframe format).
#' @param B Number of permutations to be used (999 by default).
#' @param cex Controls the size of the labels reporting the p values; see the
#' help documentation of the text() function by typing ?text.
#' @param pos Controls the position of the labels reporting the p values; see
#' the help documentation of the text() function by typing ?text.
#' @param offset Controls the offset of the labels reporting the p values; see
#' the help documentation of the text() function by typing ?text.
#'
#' @return The function returns a dataframe storing the permuted p-values of each CA dimension.
#'
#' @keywords sig.dim.perm.scree
#'
#' @export
#'
#' @examples
#' data(greenacre_data)
#'
#' pvalues <- sig.dim.perm.scree(greenacre_data, 99)
#'
#' @seealso \code{\link{sig.dim.perm}}
#'
sig.dim.perm.scree <- function(data, B=999, cex=0.7, pos=4, offset=0.5){
options(scipen = 999)
nIter <- B
numb.dim.cols <- ncol(data) - 1
numb.dim.rows <- nrow(data) - 1
table.dim <- min(numb.dim.cols, numb.dim.rows)
d <- as.data.frame(matrix(nrow=nIter+1, ncol=table.dim))
res <- CA(data, graph=FALSE)
d[1,]<- rbind(res$eig[,1])
pb <- txtProgressBar(min = 0, max = nIter, style = 3)#set the progress bar to be used inside the loop
for (i in 2:nrow(d)){
rand.table <- as.data.frame(r2dtable(1, apply(data, 1,sum), apply(data, 2, sum)))
res <- CA(rand.table, graph=FALSE)
d[i,] <- rbind(res$eig[,1])
setTxtProgressBar(pb, i)
}
target.percent <- apply(d[-c(1),],2, quantile, probs = 0.95) #calculate the 95th percentile of the randomized eigenvalues, excluding the first row (which store the observed eigenvalues)
max.y.lim <- max(d[1,], target.percent)
obs.eig <- as.matrix(d[1,])
obs.eig.to.plot <- melt(obs.eig) #requires reshape2
perm.p.values <- round((1 + colSums(d[-1,] > d[1,][col(d[-1,])])) / (1 + B), 4)
plot(obs.eig.to.plot$value, type = "o", ylim = c(0, max.y.lim), xaxt = "n", xlab = "Dimensions", ylab = "Eigenvalue", pch=20)
text(obs.eig.to.plot$value, labels = perm.p.values, cex = cex, pos = pos, offset = offset)
axis(1, at = 1:table.dim)
title(main = "Correspondence Analysis: \nscree-plot of observed and permuted eigenvalues", sub = paste0("Black dots=observed eigenvalues; blue dots=95th percentile of the permutated eigenvalues' distribution. Number of permutations: ", B), cex.sub = 0.8)
par(new = TRUE)
percentile.to.plot <- melt(target.percent)
plot(percentile.to.plot$value, type = "o", lty = 2, col = "blue", ylim = c(0, max.y.lim), xaxt = "n", xlab = "", ylab = "", sub = "")
pvalues.df <- as.data.frame(perm.p.values)
row.names(pvalues.df) <- paste0("dim.", seq(1,table.dim))
return(pvalues.df)
}
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/sig_dim_perm_scree.R
|
#'Permuted significance of the CA total inertia
#'
#'This function calculates the permuted significance of CA total
#'inertia. The number of permutations is customizable (999 by default). A frequency distribution
#'histogram of the permuted CA total inertia is produced, and the p-value of the observed total inertia is reported.
#'
#'@param data Name of the dataset (must be in dataframe format).
#'@param B Number of permutations (999 by default).
#'
#'@keywords sig.tot.inertia.perm
#'
#'@export
#'
#' @examples
#' data(greenacre_data)
#'
#' #Returns the frequency distribution histogram of the permuted total inertia
#' #(using 99 permutations). The observed total inertia and the 95th percentile
#' #of the permuted inertia are also displayed for testing the significance
#' #of the observed total inertia.
#'
#' sig.tot.inertia.perm(greenacre_data, 99)
#'
#' @seealso \code{\link{sig.dim.perm.scree}} , \code{\link{sig.dim.perm}}
#'
sig.tot.inertia.perm <- function (data, B = 999) {
rowTotals <- rowSums(data)
colTotals <- colSums(data)
obs.totinrt <- round(sum(ca(data)$rowinertia), 3)
tot.inrt <- function(x) sum(ca(x)$rowinertia)
perm.totinrt <- sapply(r2dtable(B, rowTotals, colTotals), tot.inrt)
thresh <- round(quantile(perm.totinrt, c(0.95)), 5)
perm.p.value <- (1 + length(which(perm.totinrt > obs.totinrt))) / (1 + B)
p.to.report <- ifelse(perm.p.value < 0.001, "< 0.001", ifelse(perm.p.value < 0.01, "< 0.01", ifelse(perm.p.value < 0.05, "< 0.05", round(perm.p.value, 3))))
hist(perm.totinrt, xlab = "",
main = "Frequency distribution of Correspondence Analysis permuted total inertia",
sub = paste0("solid line: obs. inertia (", obs.totinrt, "); dashed line: 95th percentile of the permut. distrib. (",thresh, ")", "\np-value: ", p.to.report, " (",perm.p.value,")", " (number of permutations: ", B, ")"),
cex.sub = 0.8)
abline(v = obs.totinrt)
abline(v = thresh, lty = 2, col = "blue")
rug(perm.totinrt, col="#0000FF") #hex code for 'blue'; last two digits set the transparency
}
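# Illustrative sketch (not part of the package): the total inertia tested by
# sig.tot.inertia.perm() equals the Pearson chi-squared statistic divided by the
# grand total, which offers a quick cross-check. Assumes the ca package is installed
# and greenacre_data is available.
data(greenacre_data, package = "CAinterprTools")
sum(ca::ca(greenacre_data)$rowinertia)                       # total inertia
chisq.test(greenacre_data)$statistic / sum(greenacre_data)   # same value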
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/sig_tot_inertia_perm.R
|
#' Collapse rows and columns of a table on the basis of hierarchical clustering
#'
#' This function allows you to collapse the rows and columns of the input
#' contingency table on the basis of the results of a hierarchical
#' clustering.\cr
#'
#' The function returns a list containing the input table, the rows-collapsed
#' table, the columns-collapsed table, and a table with both rows and columns
#' collapsed. It optionally returns two dendrograms (one for the row profiles,
#' one for the column profiles) representing the clusters.\cr
#'
#' The hierarchical clustering is obtained using the FactoMineR's 'HCPC()' function.\cr
#' Rationale: clustering rows and/or columns of a table could interest the users
#' who want to know where a "significant association is concentrated" by
#' "collecting together similar rows (or columns) in discrete groups" (Greenacre
#' M, Correspondence Analysis in Practice, Boca Raton-London-New York,
#' Chapman&Hall/CRC 2007, pp. 116, 120). Rows and/or columns are progressively
#' aggregated in a way in which every successive merging produces the smallest
#' change in the table's inertia. The underlying logic lies in the fact that
#' rows (or columns) whose merging produces a small change in table's inertia
#' have similar profiles. This procedure can be thought of as maximizing the
#' between-group inertia and minimizing the within-group inertia.\cr
#' An essentially similar method is provided by the 'FactoMineR' package (Husson F,
#' Le S, Pages J, Exploratory Multivariate Analysis by Example Using R, Boca
#' Raton-London-New York, CRC Press, pp. 177-185). The cluster solution is based
#' on the following rationale: a division into Q (i.e., a given number of)
#' clusters is suggested when the increase in between-group inertia attained
#' when passing from a Q-1 to a Q partition is greater than that from a Q to a
#' Q+1 clusters partition. In other words, during the process of rows (or
#' columns) merging, if the following aggregation raises highly the
#' within-group inertia, it means that at the further step very different
#' profiles are being aggregated.
#'
#' @param data Name of the dataset (must be in dataframe format)
#' @param graph Logical (TRUE/FALSE); it takes TRUE if the user wants the row
#' and column profiles dendrograms to be produced.
#' @keywords table.collapse
#' @export
#' @examples
#' data(greenacre_data)
#'
#' #collapse the table, store the results into an object called 'res', and return 2 dendrograms
#' res <- table.collapse(greenacre_data, graph=TRUE)
#'
#' @seealso \code{\link[FactoMineR]{HCPC}} , \code{\link[FactoMineR]{plot.CA}}
#'
table.collapse <- function (data, graph=FALSE) {
clust=NULL
clst.rows <- HCPC(data, nb.clust=-1, cluster.CA="rows", graph=FALSE)
clst.cols <- HCPC(as.data.frame(t(data)), nb.clust=-1, cluster.CA="rows", graph=FALSE)
rows.clust <- clst.rows$data.clust
cols.clust <- clst.cols$data.clust
row.collaps.table <- aggregate(. ~ clust, data=rows.clust, sum)
row.nms <- tapply(rownames(data), rows.clust$clust, paste, collapse = "-")
rownames(row.collaps.table) <- row.nms
final.row.table <- row.collaps.table[-c(1)]
col.collaps.table <- aggregate(. ~ clust, data=cols.clust, sum)
col.nms <- tapply(colnames(data), cols.clust$clust, paste, collapse = "-")
rownames(col.collaps.table) <- col.nms
pre.final.table <- subset(col.collaps.table, select = -c(clust))
final.col.table <- as.data.frame(t(pre.final.table))
final.col.table$clust <- rows.clust$clust
collaps.table.all <- aggregate(. ~ clust, data=final.col.table, sum)
rownames(collaps.table.all) <- row.nms
collaps.table.all <- collaps.table.all[-c(1)]
if(graph==TRUE){
plot(clst.rows, choice="tree")
plot(clst.cols, choice="tree")
} else {}
results <- list("original.table"=data,
"rows.collaps.table"=final.row.table,
"cols.collaps.table" =subset(final.col.table, select = -c(clust)),
"all.collaps.table"=collaps.table.all)
return(results)
}
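# Illustrative sketch (not part of the package): collapsing rows amounts to summing
# the rows assigned to the same cluster, as done via aggregate() above. Hypothetical
# 4x2 table and a hypothetical 2-cluster assignment:
tb <- data.frame(a = c(1, 2, 3, 4), b = c(5, 6, 7, 8),
                 row.names = c("r1", "r2", "r3", "r4"))
clust <- factor(c(1, 1, 2, 2))
aggregate(. ~ clust, data = cbind(tb, clust), sum)  # r1+r2 and r3+r4 summed per cluster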
|
/scratch/gouwar.j/cran-all/cranData/CAinterprTools/R/table_collapse.R
|
CAvariants<-
function(
Xtable, mj=NULL,mi=NULL,firstaxis=1,lastaxis=2,
catype = "CA",M=min(nrow(Xtable),ncol(Xtable))-1,alpha=0.05) {
#if (printdims<1) stop(paste("Attention: number of dims for output must be at least 1\n\n"))
if (lastaxis<2) stop(paste("Attention: last axis must be at least 2\n\n"))
if (!any(catype==c("CA","SOCA","DOCA","NSCA","SONSCA","DONSCA"))) stop(paste("Must be CA, DOCA, SOCA, NSCA, SONSCA or DONSCA"))
#if (!any(is.wholenumber(Xtable))) stop(paste("Must be integer values in contingency table"))
# READ DATA FILE
# assume for now header and row names exist
#Xtable <- read.table(file = datafile, header=header)
#if (header==FALSE) {
#for (i in 1:dim(Xtable)[1]) rownames(Xtable)[i] <- paste("r",i,sep="")
#for (i in 1:dim(Xtable)[2]) colnames(Xtable)[i] <- paste("c",i,sep="")
#}
X <- as.matrix(Xtable)
rowlabels <- rownames(Xtable)
collabels <- colnames(Xtable)
rows <- dim(X)[1]
cols <- dim(X)[2]
n <- sum(X)
if (is.null(mj)){
mj <- c(1:cols)}
else
mj<-c(mj) #natural scores for columns
if (is.null(mi)){
mi <- c(1:rows)}
else
mi<-c(mi) #natural scores for rows
r<- min(rows,cols)-1
S <- switch(catype, "CA"=cabasic(X), "SOCA"=socabasic(X,mj),"DOCA"=docabasic(X,mi,mj),"NSCA"=nscabasic(X),"SONSCA"=sonscabasic(X,mj),
"DONSCA"=donscabasic(X,mi,mj))
##########################------CA
if(catype=="CA"){
Fmat <- S$RX %*% S$Rweights %*% S$Raxes[,1:r]
Gmat <- S$CX %*% S$Cweights %*% S$Caxes[,1:r]
#dmum1 <- diag( (S$mu + (S$mu==0)) * (1-(S$mu==0)) )
Fbi <- S$Cweights %*% S$Caxes[,1:r] # no orthonormal
Gbi <-S$Rweights %*% S$Raxes[,1:r] # no orthonormal
pcc <- t(S$CX)
#dimnames(pcc)<-dimnames(X)
tau=NULL
tauden=NULL
inertia <- (S$mu[1:r]^2) #please check!!
inertiasum <- sum(S$mu^2)
inertiasum2 <- sum(S$mu^2)
t.inertia<-inertiasum*n
comps<-diag(inertia)
Trend<-(Fmat[,firstaxis:lastaxis]%*%t(Gbi[,firstaxis:lastaxis]))
Z<-Trend
dimnames(Z) <- list(rowlabels,collabels)
}
#########################################################---------DOCA
if(catype=="DOCA"){
Z<-S$Z/sqrt(n)
pcc<-S$RX #centered column profile matrix
Gbi <- S$Raxes[,1:r]
Fbi<- S$Caxes[,1:r]
Gbi2 <- S$Raxes
Fbi2<- S$Caxes
Gmat <- S$CX %*% S$Caxes[,1:r] #row principal coordinates
Fmat <- S$RX %*% S$Raxes[,1:r] #column principal coordinates
nr<-nrow(Z)
nc<-ncol(Z)
#if (nr>2){
#if ((Z[2,2]<0) & (Z[1,2]>0)||(Z[2,2]>0) & (Z[1,2]<0)){
#Gmat<-(-1)*Gmat
#Fmat<-(-1)*Fmat
#}}
reconstruction<-t(Gmat%*%t(S$Cweights%*%Fbi))
dimnames(reconstruction)<-dimnames(X)
inertia <- S$mu[1:r]/n #number of inertia of row poly
inertia2<-S$mu2[1:r]/n #number of inertias of column poly
inertiasum <- sum(S$mu)/n
inertiasum2 <- sum(S$mu2)/n
t.inertia<-inertiasum*n
Z1<-S$Z
tau=NULL
tauden=NULL
comps <- compstable.exe(Z1)
#----------------------------------------rows comps
if (nr>3){
Rcompnames <- c( "Location", "Dispersion", "Cubic","Error", "** Chi-squared Statistic **")}
if(nr == 3){
Rcompnames <- c( "Location", "Dispersion","Error","** Chi-squared Statistic **")}
if(nr == 2){
Rcompnames <- c( "Location", "Dispersion","** Chi-squared Statistic **")}
#----------------------columns comps
if (nc>3){
Ccompnames <- c( "Location", "Dispersion", "Cubic","Error", "** Chi-squared Statistic **")}
if (nc==3){
Ccompnames <- c( "Location", "Dispersion","Error","** Chi-squared Statistic **")}
if (nc==2){
Ccompnames <- c( "Location", "Dispersion","** Chi-squared Statistic **")}
Jcompnames <- c("Inertia Value", "P-value")
dimnames(Z) <- list(paste("u", 1:nr,sep=""), paste("v", 1:nc,sep=""))
#browser()
dimnames(comps$compsR) <- list(paste(Rcompnames), paste(Jcompnames))
dimnames(comps$compsC) <- list(paste(Ccompnames), paste(Jcompnames))
Trend<-(Fmat[,firstaxis:lastaxis]%*%t(S$Rweights%*%Gbi[,firstaxis:lastaxis]))
#browser()
}
#########################################################################---------SOCA
if(catype=="SOCA"){
pcc<-S$RX
Z<-S$Z/sqrt(n)
nr<-nrow(Z)
nc<-ncol(Z)
dimnames(pcc)<-dimnames(X)
Gmat <- S$CX %*% S$Caxes[,1:r] #row principal coordinates
Fmat <- S$RX %*% S$Rweights %*% S$Raxes[,1:r] #column principal coordinates
#if (nr>2){
#if ((Z[2,2]<0) & (Z[1,2]>0)||(Z[2,2]>0) & (Z[1,2]<0)){
#Gmat<-(-1)*Gmat
#Fmat<-(-1)*Fmat
#}}
Gbi <-S$Rweights %*%S$Raxes[,1:r]
Fbi <- S$Cweights %*%S$Caxes[,1:r]
inertia <- (S$mu[1:r]^2)/n
inertia2<-(S$mu2[1:r])/n
inertiasum <- sum(S$mu^2)/n
inertiasum2 <- sum(S$mu2)/n
t.inertia<-inertiasum*n
tauden=NULL
tau=NULL
Z1<-S$Z
comps <- compsonetable.exe(Z1)
#----------------------columns comps
if (nc>3){
Ccompnames <- c( "Location", "Dispersion", "Cubic","Error", "** Chi-squared Statistic **")}
if (nc==3){
Ccompnames <- c( "Location", "Dispersion","Error","** Chi-squared Statistic **")}
if (nc==2){
Ccompnames <- c( "Location", "Dispersion","** Chi-squared Statistic **")}
Jcompnames <- c("Inertia Value", "P-value")
dimnames(Z) <- list(paste("m", 1:nr,sep="" ), paste("v", 1:nc,sep=""))
#browser()
dimnames(comps$comps) <- list(Ccompnames, Jcompnames)
Trend<-(Fmat[,firstaxis:lastaxis]%*%t(S$Rweights%*%Gbi[,firstaxis:lastaxis]))
#-----------------------------
}
####################################-------------NSCA
if(catype=="NSCA"){
Fbi <- S$Caxes[,1:r]
Gbi <- S$Raxes[,1:r]
#dmum1 <-diag( (S$mu + (S$mu==0)) * (1-(S$mu==0)) )
dmum1 <-diag( S$mu [1:r])
pcc<-S$RX
dimnames(pcc)<-dimnames(X)
Gmat <- S$Raxes[,1:r] %*% dmum1
Fmat <- S$Caxes[,1:r] %*% dmum1
tauden<-S$tauDen
inertia <- S$mu[1:r]^2
inertiasum <- sum(S$mu^2)
inertiasum2 <- sum(S$mu^2)
tau<-sum(inertia)/tauden
t.inertia <- (sum(S$mu^2)/tauden)*(n-1)*(rows-1)
comps<-diag(inertia)
Trend<-(Fmat[,firstaxis:lastaxis]%*%t(S$Rweights%*%Gbi[,firstaxis:lastaxis]))
Z<-Trend
dimnames(Z) <- list(rowlabels,collabels)
}
##################################-------------DONSCA
if(catype=="DONSCA"){
Fbi <- S$Caxes[,1:r]
Gbi<-S$Raxes[,1:r]
pcc<-S$RX
Z<-S$Z
nr<-nrow(Z)
nc<-ncol(Z)
dimnames(pcc)<-dimnames(X)
Gmat <- S$CX %*% S$Cweights %*% S$Caxes[,1:r] #row principal coordinates
Fmat <- S$RX %*% S$Rweights %*% S$Raxes[,1:r] #column principal coordinates
#if (nr>2){
#if ((Z[2,2]<0) & (Z[1,2]>0)||(Z[2,2]>0) & (Z[1,2]<0)){
#Gmat<-(-1)*Gmat
#Fmat<-(-1)*Fmat
#}}
inertia <- S$mu[1:r]
inertia2<-S$mu2[1:r]
inertiasum <- sum(S$mu)
inertiasum2 <- sum(S$mu2)
tauden<-S$tauDen
Z2<-1/sqrt(tauden)*sqrt((n-1)*(rows-1))*S$Z
#Z2<-sqrt((n-1)*(rows-1))*S$Z #when tau
t.inertia <- (sum(S$mu)/tauden)*(n-1)*(rows-1)
tau<-sum(inertia)/tauden
comps <- compstable.exe(Z2)
#----------------------------------------rows comps
if (nr>3){
Rcompnames <- c( "Location", "Dispersion", "Cubic","Error", "** Chi-squared Statistic **")}
if(nr == 3){
Rcompnames <- c( "Location", "Dispersion","Error","** Chi-squared Statistic **")}
if(nr == 2){
Rcompnames <- c( "Location", "Dispersion","** Chi-squared Statistic **")}
#----------------------columns comps
if (nc>3){
Ccompnames <- c( "Location", "Dispersion", "Cubic","Error", "** Chi-squared Statistic **")}
if (nc==3){
Ccompnames <- c( "Location", "Dispersion","Error","** Chi-squared Statistic **")}
if (nc==2){
Ccompnames <- c( "Location", "Dispersion","** Chi-squared Statistic **")}
Jcompnames <- c("Inertia Value", "P-value")
dimnames(Z) <- list(paste("u", 1:(rows-1 ),sep=""), paste("v", 1:(cols -1),sep=""))
#dimnames(Z) <- list(paste("u", 1:(rows ),sep=""), paste("v", 1:(cols -1),sep=""))
dimnames(comps$compsR) <- list(paste(Rcompnames), paste(Jcompnames))
dimnames(comps$compsC) <- list(paste(Ccompnames), paste(Jcompnames))
Trend<-(Fmat[,firstaxis:lastaxis]%*%t(Gbi[,firstaxis:lastaxis]))
}
############################################------------------SONSCA
if(catype=="SONSCA"){
pcc<-S$RX
dimnames(pcc)<-dimnames(X)
Z<-S$Z
nr<-nrow(Z)
nc<-ncol(Z)
Gmat <- S$CX %*% S$Cweights %*% S$Caxes[,1:r] #column principal coordinates with principal axes
Fmat <- S$RX %*% (S$Rweights) %*% S$Raxes[,1:r] #row principal coordinates with polys
Gbi <- S$Raxes[,1:r]
Fbi <- S$Caxes[,1:r]
inertia <- S$mu[1:r]
inertia2 <- S$mu2[1:r]
inertiasum <- sum(S$mu) #computed through diag(ZZ')
inertiasum2 <- sum(S$mu2) #for column categories diag(Z'Z)
tauden<-S$tauDen
tau<-sum(inertia)/tauden
Z1<-1/sqrt(tauden)*sqrt((n-1)*(rows-1))*S$Z
t.inertia <- (inertiasum/tauden)*(n-1)*(rows-1)
comps <- compsonetable.exe(Z1)
#----------------------columns comps
if (nc>3){
Ccompnames <- c( "Location", "Dispersion", "Cubic","Error", "** Chi-squared Statistic **")}
if (nc==3){
Ccompnames <- c( "Location", "Dispersion","Error","** Chi-squared Statistic **")}
if (nc==2){
Ccompnames <- c( "Location", "Dispersion","** Chi-squared Statistic **")}
Jcompnames <- c("Inertia Value", "P-value")
dimnames(Z) <- list(paste("m", 1:nr,sep=""), paste("v", 1:nc,sep=""))
dimnames(comps$comps) <- list(Ccompnames, Jcompnames)
Trend<-t(Gmat[,firstaxis:lastaxis]%*%t(Fbi[,firstaxis:lastaxis]))
}
##################################################################################################################
# OTHER CALCULATIONS
# Calc inertia sum
inertiapc <- 100*inertia/inertiasum
cuminertiapc <- cumsum(inertiapc)
inertiapc <- round(100*inertiapc)/100        # round the percentages to 2 decimal places
cuminertiapc <- round(100*cuminertiapc)/100  # round the cumulative percentages to 2 decimal places
inertias <- cbind(inertia,inertiapc,cuminertiapc)
##########################################################
if((catype=="SOCA")|(catype=="SONSCA")|(catype=="DOCA")|(catype=="DONSCA")){
#inertiasum2 <- sum(inertia2) #for column categories diag(Z'Z)
inertiapc2 <- 100*inertia2/inertiasum2
cuminertiapc2 <- cumsum(inertiapc2)
inertiapc2 <- round(100*inertiapc2)/100        # round the percentages to 2 decimal places
cuminertiapc2 <- round(100*cuminertiapc2)/100  # round the cumulative percentages to 2 decimal places
inertias2 <- cbind(inertia2,inertiapc2,cuminertiapc2)
}
else inertias2<-inertias
# Calc contributions and correlations
Xstd <- X/sum(X)
if ((catype=="CA")|(catype=="SOCA")|(catype=="DOCA")){
dr <- diag(rowSums(Xstd))}
else {uni<-rep(1,rows)
dr<-diag(uni)}
dc <- diag(colSums(Xstd))
dimnames(Trend)<-list(rowlabels,collabels)
dimnames(dr)<-list(rowlabels,rowlabels)
dimnames(dc)<-list(collabels,collabels)
dimnames(inertias)[[1]]<-paste("value", 1:r,sep="")
dimnames(inertias2)[[1]]<-paste("value", 1:r,sep="")
dimnames(Fmat)<-list(rowlabels,paste("axes", 1:r,sep=""))
dimnames(Fbi)<-list(rowlabels,paste("axes", 1:r,sep=""))
dimnames(Gmat)<-list(collabels,paste("axes", 1:r,sep=""))
dimnames(Gbi)<-list(collabels,paste("axes", 1:r,sep=""))
names(mi)<-rowlabels
names(mj)<-collabels
#---------------------------------------------------
cord1<-Gmat #check here
cord2<-Fmat
if ((catype=="DOCA")|(catype=="SOCA")|(catype=="SONSCA")|(catype=="DONSCA")){
cordr<-cord2
cordc<-cord1
cord1<-cordr
cord2<-cordc
}
#-------------------------------------------------------------
#for printing the characteristics of ellipses
#----------------------------------------------------
#if ((ellcomp==TRUE)&&(catype=="CA")|(ellcomp==TRUE)&&(catype=="NSCA")){
#if (ellcomp==TRUE){
#---------------------------------p-values
inertiapc<-inertias[,2]
cord1<-Fmat
cord2<-Gmat
a<-Fbi
b<-Gbi
Inames<-rowlabels
Jnames<-collabels
I<-rows
J<-cols
dmu<-sqrt(inertias[,1])
#t.inertia<-inertiasum*n
Imass<-dr
Jmass<-dc
#if (catype=="NSCA"){
# t.inertia <- (sum(dmu^2)/tauden)*(n-1)*(I-1)
# }
#-----------------------------axis ellipses
chisq.val <- qchisq(1 - alpha, df = (I - 1) * (J - 1))
hlax1.row <- vector(mode = "numeric", length = I)
hlax2.row <- vector(mode = "numeric", length = I)
hlax1.col <- vector(mode = "numeric", length = J)
hlax2.col <- vector(mode = "numeric", length = J)
#browser()
if (M > 2) {
for (i in 1:I) {
hlax1.row[i] <- dmu[1] * sqrt(abs((chisq.val/(t.inertia)) *
(1/Imass[i,i] - sum(a[i, 3:M]^2))))
hlax2.row[i] <- dmu[2] * sqrt(abs((chisq.val/(t.inertia)) *
(1/Imass[i,i] - sum(a[i, 3:M]^2))))
}
for (j in 1:J) {
hlax1.col[j] <- dmu[1] * sqrt(abs((chisq.val/(t.inertia)) *
(1/Jmass[j,j] - sum(b[j, 3:M]^2))))
hlax2.col[j] <- dmu[2] * sqrt(abs((chisq.val/(t.inertia)) *
(1/Jmass[j,j] - sum(b[j, 3:M]^2))))
}
}
if (M == 2) {
for (i in 1:I) {
hlax1.row[i] <- dmu[1] * sqrt(abs((chisq.val/(t.inertia)) *
(1/(Imass)[i,i])))
hlax2.row[i] <- dmu[2] * sqrt(abs((chisq.val/(t.inertia)) *
(1/(Imass)[i,i])))
}
for (j in 1:J) {
hlax1.col[j] <- dmu[1] * sqrt(abs((chisq.val/(t.inertia)) *
(1/(Jmass)[j,j])))
hlax2.col[j] <- dmu[2] * sqrt(abs((chisq.val/(t.inertia)) *
(1/(Jmass)[j,j])))
}
}
eccentricity <- abs(1 - (dmu[2]^2/dmu[1]^2))^(1/2)
area.row <- vector(mode = "numeric", length = I)
area.col <- vector(mode = "numeric", length = J)
for (i in 1:I) {
area.row[i] <- 3.14159 * hlax1.row[i] * hlax2.row[i]
}
for (j in 1:J) {
area.col[j] <- 3.14159 * hlax1.col[j] * hlax2.col[j]
}
pvalrow <- vector(mode = "numeric", length = I)
pvalcol <- vector(mode = "numeric", length = J)
for (i in 1:I) {
if (M > 2) {
pvalrow[i] <- 1- pchisq(t.inertia*((1/Imass[i,i] - sum(a[i,
3:M]^2))^(-1)) * (cord1[i, 1]^2/dmu[1]^2 + cord1[i,
2]^2/dmu[2]^2), df = (I - 1) * (J - 1))
}
else {
pvalrow[i] <- 1-pchisq(t.inertia* (Imass[i,i]) *
(cord1[i, 1]^2/dmu[1]^2 + cord1[i, 2]^2/dmu[2]^2),
df = (I - 1) * (J - 1))
}
}
for (j in 1:J) {
if (M > 2) {
pvalcol[j] <- 1-pchisq(t.inertia* ((1/Jmass[j,j] -
sum(b[j, 3:M]^2))^(-1)) * (cord2[j, 1]^2/dmu[1]^2 +
cord2[j, 2]^2/dmu[2]^2), df = (I - 1) * (J - 1))
}
else {
pvalcol[j] <- 1- pchisq(t.inertia * (Jmass[j,j]) *
(cord2[j, 1]^2/dmu[1]^2 + cord2[j, 2]^2/dmu[2]^2),
df = (I - 1) * (J - 1))
}
}
summ.name <- c("HL Axis 1", "HL Axis 2", "Area", "P-value")
row.summ <- cbind(hlax1.row, hlax2.row, area.row, pvalrow)
dimnames(row.summ) <- list(paste(Inames), paste(summ.name))
col.summ <- cbind(hlax1.col, hlax2.col, area.col, pvalcol)
dimnames(col.summ) <- list(paste(Jnames), paste(summ.name))
#}
#if ((ellcomp==FALSE)|(ellcomp==TRUE)&&(catype=="DOCA")|(ellcomp==TRUE)&&(catype=="DONSCA")|(ellcomp==TRUE)&&(catype=="SOCA")|(ellcomp==TRUE)&&(catype=="SONSCA")){
#if (ellcomp==FALSE){
#eccentricity=NULL
#row.summ=NULL
#col.summ=NULL
#}
#--------------------------------------------------
resultCA<-list(Xtable=X, rows=rows, cols=cols, r=r,n=n,
rowlabels=rowlabels, collabels=collabels,
Rprinccoord=Fmat, Cprinccoord=Gmat, Rstdcoord=Fbi, Cstdcoord=Gbi,tauden=tauden,tau=tau,
inertiasum2=inertiasum2, inertiasum=inertiasum, inertias=inertias, inertias2=inertias2,t.inertia=t.inertia,comps=comps,
catype=catype,mj=mj,mi=mi,pcc=pcc,Jmass=dc,Imass=dr,
Innprod=Trend,Z=Z,M=M,eccentricity=eccentricity,row.summ=row.summ,col.summ=col.summ)
class(resultCA)<-"CAvariants"
return(resultCA)
#-----------------------------------------------
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/CAvariants.R
|
caRbiplot<- function(frows, gcols, firstaxis, lastaxis, inertiapc, bip="row", size1,size2){
##########################################################################
# #
# Principal and standard Coordinates # frows and gcols
# #
# #
##########################################################################
I<-nrow(frows)
J<-nrow(gcols)
categ<-NULL
attnam<-NULL
slp<-NULL
#frows <- data.frame(coord=f, labels=dimnames(f)[[1]], categ=rep("rows", I), linet=rep("solid",I)) # build a dataframe to be used as input for plotting via ggplot2
#gcols <- data.frame(coord=g, labels=dimnames(g)[[1]], categ=rep("cols", J), linet=rep("blank",J)) # build a dataframe to be used as input for plotting via ggplot2
if (bip == "row"){
rglines=data.frame(d1=gcols[,firstaxis],d2=gcols[,lastaxis],attnam=gcols$categ)
#rglines=data.frame(d1=gcols[,firstaxis],d2=gcols[,lastaxis],attnam="blue")
rglines$slp=rglines$d2/rglines$d1
}
if ((bip == "column")||(bip == "col")) {
rglines=data.frame(d1=frows[,firstaxis],d2=frows[,lastaxis],attnam=frows$categ)
#rglines=data.frame(d1=frows[,firstaxis],d2=frows[,lastaxis],attnam="blue")
rglines$slp=rglines$d2/rglines$d1
}
#-------------------------------------------------------------
ndim1<-I
catall <- rep("solid", ndim1)
FGcord <- rbind(frows, gcols) # build a dataframe to be used as input for plotting via
xmin <- min(FGcord[,firstaxis],FGcord[,lastaxis])
xmax <- max(FGcord[,firstaxis],FGcord[,lastaxis])
ymin <- min(FGcord[,lastaxis],FGcord[,firstaxis])
ymax <- max(FGcord[,lastaxis],FGcord[,firstaxis])
CAplot <- ggplot(FGcord, aes(x=FGcord[,firstaxis], y=FGcord[,lastaxis]), type="b") +
geom_point(aes(color=categ), size=size1) +
scale_shape_manual(values=categ) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dimension ", firstaxis, " (", round(inertiapc[firstaxis],1), "%)"), y=paste0("Dimension ", lastaxis, " (", round(inertiapc[lastaxis],1), "%)")) +
scale_x_continuous(limits = c(xmin, xmax)) +
scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill="white", colour="black")) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
geom_text_repel(data=FGcord, aes(colour=categ, label = labels), size = size2, max.overlaps =Inf) +
theme(legend.position="none")+
#geom_abline(data=rglines,aes(intercept=0,slope=slp,colour="blue"),alpha=.5)
geom_abline(data=rglines,aes(intercept=0,slope=slp,colour=attnam),alpha=.5)
grid.arrange(CAplot, ncol=1)
#list( frows=frows, gcols = gcols)
}
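# A minimal sketch of how caRbiplot() might be called directly; the input data
# frames are assumed to be built the same way plot.CAvariants() builds them from
# a fitted CAvariants object (here called `res`, a hypothetical name):
# frows <- data.frame(coord = res$Rprinccoord, labels = res$rowlabels,
#                     categ = rep("rows", res$rows))
# gcols <- data.frame(coord = res$Cstdcoord, labels = res$collabels,
#                     categ = rep("cols", res$cols))
# caRbiplot(frows, gcols, firstaxis = 1, lastaxis = 2,
#           inertiapc = res$inertias[, 2], bip = "row", size1 = 1.5, size2 = 3)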
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/caRbiplot.R
|
cabasic <-
function(Xtable) {
n<-sum(Xtable)
X <- Xtable/n
I<-nrow(X)
J<-ncol(X)
#rmax <- min(dim(X))-1
rsums <- as.vector(rowSums(X))
csums <- as.vector(colSums(X))
di<-diag(rsums)
dj<-diag(csums)
drm1 <- diag( 1/( rsums + (rsums==0) ) * (1-(rsums==0)) )
dcm1 <- diag( 1/( csums + (csums==0) ) * (1-(csums==0)) )
drmh <- sqrt(drm1)
dcmh <- sqrt(dcm1)
#ratio <- (drmh %*% ( X - rsums %*% t(csums) ) %*% dcmh)*n
ratio <- drmh %*% ( X - rsums %*% t(csums) ) %*% dcmh
ratio2<-drm1%*%X%*%dcm1
Yeigu<-eigen(ratio%*%t(ratio))
#Caxes<-Yeigu$vectors
Yeigv<-eigen(t(ratio)%*%ratio)
#Raxes<-Yeigv$vectors
Y <- svd(ratio,nu=I,nv=J)
mu <- Y$d
Raxes<-Y$v
Caxes<-Y$u
#r <- sum(mu>1e-15)
#r<-rmax
RX <- drm1 %*% X
CX <- dcm1 %*% t(X)
#setClass("cabasicresults",
#representation(
# RX="matrix", CX="matrix", Rweights="matrix", Cweights="matrix",
# Raxes="matrix", Caxes="matrix", r="numeric", mu="numeric",mu2="numeric",catype="character",
#tau="numeric",tauDen="numeric",Z="matrix",ZtZ="matrix",tZZ="matrix"),S3methods=FALSE )
#cabasic<-new("cabasicresults", RX=RX,CX=CX,Rweights=dcmh,Cweights=drmh,
# Raxes=Y$v,Caxes=Y$u,mu=mu,mu2=0,catype="CA",tauDen=0,Z=ratio2,ZtZ=RX,tZZ=RX)
#cabasic
resca=(list( RX=RX,CX=CX,Rweights=dcmh,Cweights=drmh,
Raxes=Raxes,Caxes=Caxes,mu=mu,mu2=0,catype="CA",tauDen=0,Z=ratio2,ZtZ=RX,tZZ=RX))
return(resca)
}
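# A minimal sketch of cabasic() on a small synthetic contingency table; the
# table below is invented purely for illustration:
# tab <- matrix(c(10, 5, 2, 7,
#                 4, 12, 6, 3,
#                 1, 8, 9, 11), nrow = 3, byrow = TRUE)
# res <- cabasic(tab)
# res$mu   # singular values of the standardized residual matrix
# res$RX   # row profiles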
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/cabasic.R
|
caplot3d<-function(coordR,coordC,inertiaper,firstaxis=1,lastaxis=2,thirdaxis=3){
#------------------------------------
# 3Dim plot
# coordR= row coordinates
# coordC= column coordinates
# inertiaper= percentage of explained inertia per dimension
#------------------------------------
rc = round(coordR,2)
I<-dim(rc)[2]
cc = round(coordC,2)
iner<-paste(round(inertiaper,1),"%",sep="")
colnames(rc)<-paste("Dim",1:I,sep="")
iner1<-paste("(",iner,")",sep="")
namecol<-paste(colnames(rc),iner1,sep=" ")
colnames(rc)<-namecol
p = plot_ly(type="scatter3d",mode = 'text')
p = add_trace(p, x = rc[,firstaxis], y = rc[,lastaxis], z = rc[,thirdaxis],
mode = 'text', text = rownames(rc),
textfont = list(color = "red"), showlegend = FALSE)
p = add_trace(p, x = cc[,firstaxis], y = cc[,lastaxis], z = cc[,thirdaxis],
mode = "text", text = rownames(cc),
textfont = list(color = "blue"), showlegend = FALSE)
p <- config(p, displayModeBar = FALSE)
p <- layout(p, scene = list(xaxis = list(title = colnames(rc)[firstaxis]),
                        yaxis = list(title = colnames(rc)[lastaxis]),
                        zaxis = list(title = colnames(rc)[thirdaxis]),
aspectmode = "data"),
margin = list(l = 0, r = 0, b = 0, t = 0))
p$sizingPolicy$browser$padding <- 0
my.3d.plot = p
my.3d.plot
}
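# A minimal sketch of caplot3d(); `res` is assumed to be a fitted CAvariants
# object with at least three non-trivial dimensions (hypothetical usage,
# mirroring the call made inside plot.CAvariants() when plot3d = TRUE):
# caplot3d(coordR = res$Rprinccoord, coordC = res$Cprinccoord,
#          inertiaper = res$inertias[, 2])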
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/caplot3d.R
|
caplotord<-function(frows,gcols,firstaxis,lastaxis,nseg,inertiapc,thingseg,col1,col2,col3,size1,size2){
#plotting biplot for ordered variables-depicting polynomials
# FGcord <- rbind(frows, gcols)
categ<-NULL
FGcord <- rbind(gcols,frows)
xmin <- min(FGcord[,firstaxis],FGcord[,lastaxis])
xmax <- max(FGcord[,firstaxis],FGcord[,lastaxis])
ymin <- min(FGcord[,lastaxis],FGcord[,firstaxis])
ymax <- max(FGcord[,lastaxis],FGcord[,firstaxis])
CAplot <- ggplot(FGcord, aes(x=FGcord[,firstaxis], y=FGcord[,lastaxis]),type="b") +
geom_point(aes(color=categ, shape=categ), size=size1) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dimension ", firstaxis, " (", round(inertiapc[firstaxis],1), "%)"), y=paste0("Dimension ", lastaxis, " (", round(inertiapc[lastaxis],1), "%)")) +
scale_x_continuous(limits = c(xmin, xmax)) +
scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill="white", colour="black")) +
# scale_colour_manual(values=c("red", "blue")) +
scale_colour_manual(values=c(col2, col1)) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
geom_text_repel(data=FGcord, aes(colour=categ, label = labels), size = size2) +
geom_line(aes(group=categ,linetype=categ),lwd=.2,colour="gray")+
scale_linetype_manual(values=c("rows"="dashed","cols"="blank"))+
geom_segment(data=thingseg,aes(x=rep(0,c(nseg)),y=rep(0,c(nseg)),xend=thingseg[, firstaxis],yend=thingseg[,
lastaxis]),colour=rep(col3,nseg))+
theme(legend.position="none")+
ggtitle(" ")
grid.arrange(CAplot, ncol=1)
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/caplotord.R
|
compsonetable.exe<-
function(Z){
nr<-nrow(Z)
nc<-ncol(Z)
tZZ <- t(Z) %*% (Z)
factor <- sum(diag(tZZ))
ncomp<-3
if (nc==2) {ncomp <-2}
comps <- matrix(0, nrow=ncomp, ncol=2)
###############################
# Ordered Column Category #
###############################
for (j in 1:ncomp){
comps[j, 1] <- tZZ[j, j] # inertia of the j-th polynomial component (location, dispersion, ...)
comps[j, 2] <- 1 - pchisq(comps[j, 1], ncol(Z)) # p-value of the j-th polynomial component
}
compsClast1 <- factor - sum(comps[, 1]) # residual (error) component of inertia
if(ncol(Z) > 3) {
compsClast2 <- 1 - pchisq(compsClast1, (nc-3) * nr ) # p-value of the residual component
Ccol=rbind(comps,c(compsClast1,compsClast2))
}
else {
#compsClast2 <- 0
#Ccol=rbind(comps,c(compsClast1,compsClast2))
Ccol=comps
}
compsCtot1 <- factor
compsCtot2 <- 1 - pchisq(compsCtot1, nr * nc)
Ccol=rbind(Ccol,c(compsCtot1,compsCtot2))
list(comps=as.matrix(Ccol))
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/compsonetable.exe.R
|
compstable.exe<-
function(Z){
tZZ <- t(Z) %*% Z
ZtZ <- Z %*% t(Z)
factor <- sum(diag(tZZ))
nr<-nrow(Z)
nc<-ncol(Z)
ncompr<-3
ncompc<-3
if (nr==2) {ncompr<-2}
if (nc==2) {ncompc <-2}
compsR <- matrix(0, nrow = ncompr, ncol = 2)
compsC <- matrix(0, nrow = ncompc, ncol = 2)
###############################
#   Row polynomial components  #
###############################
for (i in 1:ncompr){
compsR[i, 1] <- ZtZ[i, i] # inertia of the i-th row polynomial component (location, dispersion, ...)
compsR[i, 2] <- 1 - pchisq(compsR[i, 1], nc) # p-value of the i-th row component
}
compsRlast1 <- factor - sum(compsR[, 1]) # residual (error) component of the row decomposition
if(nr > 3) {
compsRlast2 <- 1 - pchisq(compsRlast1, (nr-3) * nc ) # p-value of the row residual component
Crow=rbind(compsR,c(compsRlast1,compsRlast2))
}
else {
#compsRlast2 <- 0
#Crow=rbind(compsR,c(compsRlast1,compsRlast2))
Crow=compsR
}
compsRtot1 <- factor
compsRtot2 <- 1 - pchisq(compsRtot1, nr * nc)
Crow=rbind(Crow,c(compsRtot1,compsRtot2))
###############################
#  Column polynomial components #
###############################
for (j in 1:ncompc){
compsC[j, 1] <- tZZ[j, j] # inertia of the j-th column polynomial component (location, dispersion, ...)
compsC[j, 2] <- 1 - pchisq(compsC[j, 1], nr) # p-value of the j-th column component
}
compsClast1 <- factor - sum(compsC[, 1]) # residual (error) component of the column decomposition
if(nc > 3) {
compsClast2 <- 1 - pchisq(compsClast1, (nc-3) * nr ) # p-value of the column residual component
Ccol=rbind(compsC,c(compsClast1,compsClast2))
}
else {
#compsClast2 <- 0
#Ccol=rbind(compsC,c(compsClast1,compsClast2))
Ccol=compsC
}
compsCtot1 <- factor
compsCtot2 <- 1 - pchisq(compsCtot1, nr * nc)
Ccol=rbind(Ccol,c(compsCtot1,compsCtot2))
list(compsR=Crow,compsC=Ccol)
#return(comps)
}
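# A minimal sketch of compstable.exe() on a synthetic generalized correlation
# matrix Z (in practice Z comes from an ordered analysis such as docabasic());
# the matrix below is invented purely for illustration:
# set.seed(1)
# Z <- matrix(rnorm(16), 4, 4)
# compstable.exe(Z)   # row/column polynomial components with p-values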
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/compstable.exe.R
|
docabasic <-
function (Xtable,mi,mj)
{
n<-sum(Xtable)
x <- Xtable/n
rsums <- as.vector(rowSums(x))
csums <- as.vector(colSums(x))
dj<-diag(csums)
di<-diag(rsums)
#uni <- matrix(1, 1, J)
drm1 <- diag( 1/( rsums + (rsums==0) ) * (1-(rsums==0)) )
dcm1 <- diag( 1/( csums + (csums==0) ) * (1-(csums==0)) )
drmh <- sqrt(drm1)
dcmh <- sqrt(dcm1)
#ratio <- ( x%*%dcm1 - rsums%*%uni )*sqrt(n)
R <- drm1 %*% x
C <- dcm1 %*% t(x)
# rmax <- min(dim(xo)) - 1
Bpoly <- emerson.poly(mj, csums)$B
# Bpoly2 <- sqrt(dj) %*% Bpoly
Apoly <- emerson.poly(mi, rsums)$B
#Apoly2 <- (di) %*% Apoly
# Z <- t(Apoly) %*% (ratio) %*%dj%*% (Bpoly) #useful to check coordinates
Z <- t(Apoly) %*% x %*% (Bpoly)*sqrt(n) #useful to check coordinates
ZZ <- Z^2
pi <- (Apoly) %*% Z %*% t(Bpoly)
#browser()
ZtZ <- Z%*%t(Z)
tZZ <-t(Z)%*%Z
mu<-diag(ZtZ)
mu2<-diag(tZZ)
#r<-rmax
#browser()
#doca<- new("cabasicresults",
#RX=R,CX=C,Cweights=drmh,Rweights=dcmh,Raxes= Bpoly,
#Caxes=Apoly,mu=mu,mu2=mu2,catype="DOCA",tauDen=0,Z=Z,ZtZ=ZtZ,tZZ=tZZ)
resdoca<-(list(RX=R,CX=C,Cweights=drmh,Rweights=dcmh,Raxes= Bpoly,
Caxes=Apoly,mu=mu,mu2=diag(tZZ),catype="DOCA",tauDen=0,Z=Z,ZtZ=ZtZ,tZZ=tZZ))
return(resdoca)
}
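# A minimal sketch of docabasic() on a small synthetic table with natural
# scores for the row and column categories; the table and scores are invented
# purely for illustration:
# tab <- matrix(c(10, 5, 2, 7,
#                 4, 12, 6, 3,
#                 1, 8, 9, 11), nrow = 3, byrow = TRUE)
# resd <- docabasic(tab, mi = 1:3, mj = 1:4)
# resd$Z                  # generalized correlation matrix
# compstable.exe(resd$Z)  # polynomial decomposition of the inertia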
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/docabasic.R
|
donscabasic <-
function (Xtable,mi,mj)
{
# rmax <- min(dim(xo)) - 1
x <- Xtable/sum(Xtable)
rsums <- as.vector(rowSums(x))
csums <- as.vector(colSums(x))
drm1 <- diag( 1/( rsums + (rsums==0) ) * (1-(rsums==0)) )
dcm1 <- diag( 1/( csums + (csums==0) ) * (1-(csums==0)) )
drmh<-diag(rep(1,nrow(x))) #change the metric in NSCA
dcmh <- sqrt(dcm1)
dj <- diag(csums)
di <- diag(rsums)
tauden<-1 - sum(rsums^2)
#Apoly <- emerson.poly(mi, rsums)$BT #with trivial
Apoly <- emerson.poly(mi, rsums)$B #without trivial
Apoly2 <- sqrt(di) %*% Apoly
Bpoly <- emerson.poly(mj, csums)$B
Bpoly2 <- sqrt(dj) %*% Bpoly
#pcc <- 1/sqrt(tauden)*(drmh %*% ( x - rsums %*% t(csums) ) %*% dcm1)
pcc <- (drmh %*% ( x - rsums %*% t(csums) ) %*% dcm1)
Z <- t(Apoly2) %*% pcc %*% dj %*% Bpoly #no trivial
pi <- (Apoly2) %*% Z %*% t(Bpoly2) #no trivial
ZtZ<-Z%*%t(Z)
mu <- diag(ZtZ)
#tau<-sum(mu)/tauden
tZZ<-t(Z)%*%Z
mu2<- diag(tZZ)
#r<-rmax
Cweights <- dj
#browser()
#donsca<- new("cabasicresults",
#RX=pcc,CX=t(pcc),Rweights=dj,Cweights=diag(rep(1,nrow(x))),
# Raxes=Bpoly,Caxes=Apoly2,mu=mu,mu2=mu2,catype="DONSCA",tauDen=tauden,Z=Z,ZtZ=ZtZ,tZZ=tZZ)
resdonsca<-(list(RX=pcc,CX=t(pcc),Rweights=dj,Cweights=diag(rep(1,nrow(x))),
Raxes=Bpoly,Caxes=Apoly2,mu=mu,mu2=mu2,catype="DONSCA",tauDen=tauden,Z=Z,ZtZ=ZtZ,tZZ=tZZ))
return(resdonsca)
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/donscabasic.R
|
emerson.poly <-
function (mj, pj)
{
#################Emerson polynomials, recurrence formulas
#mj: natural scores of the ordered categories, for example c(1,2,3,4)
#pj: marginal proportions (weights) of the categories, e.g. the row or column masses
#####################
nc <- length(mj)
Dj <- diag(pj)
B <- matrix(1, (nc + 1), nc)
B[1, ] <- 0
Sh <- Th <- Vh <- NULL
for (i in 3:(nc + 1)) {
for (j in 1:nc) {
Th[i] <- mj %*% Dj %*% B[i - 1, ]^2
Vh[i] <- mj %*% Dj %*% (B[i - 1, ] * B[i - 2, ])
Sh[i] <- sqrt(mj^2 %*% Dj %*% B[i - 1, ]^2 - Th[i]^2 -
Vh[i]^2)^(-1)
B[i, j] <- Sh[i] * ((mj[j] - Th[i]) * B[i - 1, j] -
Vh[i] * B[i - 2, j])
}
}
B<-t(B)
B1<-B[,-c(1,2)]
BT<-B[,-c(1)]
list(B=B1,BT=BT)
}
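# A minimal sketch of emerson.poly(): orthogonal polynomials for four ordered
# categories with natural scores 1:4 and equal weights (values invented purely
# for illustration):
# emerson.poly(mj = 1:4, pj = rep(0.25, 4))$B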
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/emerson.poly.R
|
nscabasic <-
function(Xtable) {
X <- Xtable/sum(Xtable)
#r<- min(dim(X))-1
I<-nrow(X)
J<-ncol(X)
Imass<-rowSums(X)
tauden <- 1 - sum(Imass^2)
rsums <- as.vector(rowSums(X))
csums <- as.vector(colSums(X))
drm1 <- diag( 1/( rsums + (rsums==0) ) * (1-(rsums==0)) )
dcm1 <- diag( 1/( csums + (csums==0) ) * (1-(csums==0)) )
drmh<-diag(rep(1,I)) #change the metric in NSCA
dcmh <- sqrt(dcm1)
#Z <- 1/sqrt(tauden)*(drmh %*% ( X - rsums %*% t(csums) ) %*% dcmh)#tau index
Z <- (drmh %*% ( X - rsums %*% t(csums) ) %*% dcmh) #only numerator
#Yeigu<-eigen(Z%*%t(Z))
#Caxes<-Yeigu$vectors
#Yeigv<-eigen(t(Z)%*%Z)
#Raxes<-dcmh%*%Yeigv$vectors
Y <- svd(Z,nu=I,nv=J)
Raxes<-dcmh%*%Y$v
Caxes<-Y$u
mu <- Y$d
#tau<-sum(mu^2)/tauden
R <- drm1 %*% X
C <- dcm1 %*% t(X)
#browser()
#NSCA<-new("cabasicresults",
# RX=R,CX=C,Rweights=dcmh,Cweights=drmh,
# Raxes=dcmh%*%Y$v,Caxes=Y$u,mu=mu,tauDen=tauden,catype="NSCA")
#----------------------------
#browser()
resnsca<-(list(
RX=R,CX=C,Rweights=dcmh,Cweights=drmh,
Raxes=Raxes,Caxes=Caxes,mu=mu,tauDen=tauden,catype="NSCA"))
return(resnsca)
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/nscabasic.R
|
plot.CAvariants<-function(x, firstaxis=1, lastaxis=2, thirdaxis=3, cex=0.8,cex.lab=0.8,
plottype="biplot", biptype = "row",scaleplot=1,
posleg="right",pos=2,ell=FALSE,alpha=0.05,plot3d =FALSE,size1=1.5,size2=3,...) {
## internal function to plot a single picture
##
if ((firstaxis<1)|(firstaxis>x$r)) stop(paste("incorrect first axis =", firstaxis, "\n\n"))
if (lastaxis>x$r) stop(paste("incorrect last axis =", lastaxis, "\n\n"))
if (firstaxis>=lastaxis) stop(paste("last axis must be greater than first axis\n\n"))
#if (!any(plottype==c("classic","classical","c","biplot","bip","b"))) stop(paste("Must be specified the kind of graph: classic, or biplot"))
# Groups file must have no blank line at start and only one between sections
# group number group name symbol colour plot ellipse?
n<-sum(x$Xtable)
I<-nrow(x$Xtable)
J<-ncol(x$Xtable)
######################################################
# Plot row and col coordinates
#########################################
if ((plottype=="Classical")|(plottype=="classical")|(plottype=="classic")|(plottype=="Classic")) {
nthings<-x$cols
nvars<-x$rows
cord1<- x$Cprinccoord*scaleplot
cord2<-x$Rprinccoord/scaleplot
dmu=diag(x$inertias[,1])
inertiapc=round(x$inertias[,2],1) #inertia in percentage of row axes
dimnames(cord1)[1]<-dimnames(x$Xtable)[2]
dimnames(cord2)[1]<-dimnames(x$Xtable)[1]
thinglabels<-x$collabels
varlabels<-x$rowlabels
main="Classical plot"
if ((x$catype=="DONSCA")|(x$catype=="DOCA")|(x$catype=="SOCA")|(x$catype=="SONSCA"))
{
cat("\n ERROR: NO CLASSICAL PLOT for ordered analysis. ONLY A BIPLOT can be constructed (Please change 'plottype' and specify 'biptype')\n")
stop()
}
#---------------------------------------------------------------------------------------
}#end classical plot
if ((plottype=="Biplot")|(plottype=="biplot")|(plottype=="bip")|(plottype=="b")){
plottype<-"biplot"
if ((biptype=="rows")|(biptype=="Rows")|(biptype=="row")|(biptype=="r"))
{
plottype<-"biplot"
biptype<-"row"
cord1<-x$Rprinccoord*scaleplot
cord2<-x$Cstdcoord/scaleplot
nthings<-x$rows
nvars<-x$cols
thinglabels<-x$rowlabels
varlabels<-x$collabels
main<-"Row Isometric Biplot"
inertiapc=x$inertias[,2] #inertia of rows
dmu=diag(x$inertias[,1])
dimnames(cord2)[1]<-dimnames(x$Xtable)[2]
dimnames(cord1)[1]<-dimnames(x$Xtable)[1]
if ((x$catype=="DONSCA")|(x$catype=="DOCA")|(x$catype=="SOCA")|(x$catype=="SONSCA"))
{
cord2<-x$Rprinccoord*scaleplot
cord1<-x$Cstdcoord/scaleplot
nthings<-x$cols
nvars<-x$rows
thinglabels<-x$collabels
varlabels<-x$rowlabels
inertiapc=round(x$inertias2[,2],1) #inertia of rows
dmu=diag(x$inertias2[,1])
dimnames(cord2)[1]<-dimnames(x$Xtable)[1]
dimnames(cord1)[1]<-dimnames(x$Xtable)[2]
}#end catype
} #end bip row
if ((biptype=="cols")|(biptype=="Cols")|(biptype=="column")|(biptype=="col")) {
biptype<-"column"
if ((x$catype=="CA")|(x$catype=="NSCA")){
cord1<- x$Cprinccoord*scaleplot
cord2<-x$Rstdcoord/scaleplot
nthings<-x$cols
nvars<-x$rows
thinglabels<-x$collabels
varlabels<-x$rowlabels
main<-"Column Isometric Biplot"
inertiapc=round(x$inertias[,2],1) #inertia of row
dmu=diag(x$inertias[,1])
dimnames(cord1)[1]<-dimnames(x$Xtable)[2]
dimnames(cord2)[1]<-dimnames(x$Xtable)[1]
}
if ((x$catype=="DONSCA")|(x$catype=="DOCA")|(x$catype=="SOCA")|(x$catype=="SONSCA"))
{
cord2<- x$Cprinccoord*scaleplot
cord1<-x$Rstdcoord/scaleplot
nthings<-x$rows
nvars<-x$cols
thinglabels<-x$rowlabels
varlabels<-x$collabels
inertiapc=round(x$inertias[,2],1) #inertia of cols
dmu=diag(x$inertias[,1])
dimnames(cord1)[1]<-dimnames(x$Xtable)[1]
dimnames(cord2)[1]<-dimnames(x$Xtable)[2]
}#end catype
}#end bip column
}
###################################################################################ok without choice plottype
#if ((x$catype=="DOCA")|(x$catype=="SOCA")|(x$catype=="SONSCA")|(x$catype=="DONSCA"))
#{
# cat("\n Looking at the Trends of rows and columns\n")
cat("\n Looking at the row and column profiles\n")
############################################################## reconstructed TREND
#trendplot(x$mj,(x$Innprod), main="Using coordinates",posleg=posleg, xlab="column scores")
#-----for rows
#trendplot(x$mi,t(x$Innprod), main="Using coordinates", posleg=posleg,xlab="row scores")
#-----original data
trendplot(x$mj,(x$Innprod), main="Column Profiles",posleg=posleg, xlab="column scores")
#dev.new()
trendplot(x$mi,t(x$Innprod), main="Row Profiles", posleg=posleg,xlab="row scores")
#}
##############################################################
##################
#library(scales)
#library (ggplot2)
#library(ggrepel)
#library(gridExtra)
#---------------------------------------------------------------------------------
categ<-NULL
frows <- data.frame(coord=cord1, labels=thinglabels, categ=rep("rows", nthings)) # build a dataframe to be used as input for plotting via ggplot2
gcols <- data.frame(coord=cord2, labels=varlabels, categ=rep("cols", nvars)) # build a dataframe to be used as input for plotting via ggplot2
FGcord <- rbind(frows, gcols) # build a dataframe to be used as input for plotting via
############################################################
if (((x$catype=="DONSCA")||(x$catype=="DOCA"))&&((plottype=="biplot")&&(biptype=="column")))
{
#caplotord(frows=frows,gcols=gcols,firstaxis=firstaxis,lastaxis=lastaxis,nseg=nvars,inertiapc=inertiapc,thingseg=gcols,col1="red",
#col2="blue",col3="blue",size1=size1,size2=size2)
#if (invproj==TRUE){
caplotord(frows=frows,gcols=gcols,firstaxis=firstaxis,lastaxis=lastaxis,nseg=nthings,inertiapc=inertiapc,thingseg=frows,col1="red",
col2="blue",col3="red",size1=size1,size2=size2)
#}
}#end catype
if (((x$catype=="SONSCA")||(x$catype=="SOCA"))&&((biptype=="column")&(plottype=="biplot")))
{
caRbiplot(frows=frows,gcols=gcols,firstaxis=firstaxis,lastaxis=lastaxis, inertiapc=inertiapc, bip="column",size1=size1,size2=size2)
}
###############################################################
if (((x$catype=="DONSCA")||(x$catype=="DOCA")||(x$catype=="SOCA")||(x$catype=="SONSCA"))&&((plottype=="biplot")&&(biptype=="row")))
{
caplotord(frows=gcols,gcols=frows,firstaxis=firstaxis, lastaxis=lastaxis,nseg=nthings,inertiapc=inertiapc,thingseg=frows,col1="red",
col2="blue",col3="red",size1=size1,size2=size2)
#caplotord(frows=frows,gcols=gcols,firstaxis=firstaxis,lastaxis=lastaxis,nseg=nvars,inertiapc=inertiapc,thingseg=gcols,col1="red",
#col2="blue",col3="blue",size1=size1,size2=size2)
}
#-----------------------------------------------------------
if (((x$catype=="NSCA")||(x$catype=="CA"))&&((plottype=="biplot")&(biptype=="row")))
{
caRbiplot(frows=frows,gcols=gcols,firstaxis=firstaxis,lastaxis=lastaxis, inertiapc=inertiapc, bip="row",size1=size1,size2=size2)
}
###############################
if (((x$catype=="NSCA")||(x$catype=="CA"))&&((plottype=="biplot")&(biptype=="column")))
{
caRbiplot(frows=gcols,gcols=frows,firstaxis=firstaxis,lastaxis=lastaxis, inertiapc=inertiapc, bip="column",size1=size1,size2=size2)
}
##############################################################
if (((plottype=="classic")|(plottype=="classical")|(plottype=="Classic")|(plottype=="Classical"))&&((x$catype=="CA")|(x$catype=="NSCA")))
{
categ<-NULL
frows <- data.frame(coord=cord1, labels=thinglabels, categ=rep("rows", nthings)) # build a dataframe to be used as input for plotting via ggplot2
gcols <- data.frame(coord=cord2, labels=varlabels, categ=rep("cols", nvars)) # build a dataframe to be used as input for plotting via ggplot2
#-------------------------------------------------------------
FGcord <- rbind(frows, gcols) # build a dataframe to be used as input for plotting via
xmin <- min(FGcord[,firstaxis],FGcord[,lastaxis])
xmax <- max(FGcord[,firstaxis],FGcord[,lastaxis])
ymin <- min(FGcord[,lastaxis],FGcord[,firstaxis])
ymax <- max(FGcord[,lastaxis],FGcord[,firstaxis])
CAplot <- ggplot(FGcord, aes(x=FGcord[,firstaxis], y=FGcord[,lastaxis])) +
geom_point(aes(colour=categ, shape=categ), size=size1) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dimension ", firstaxis,sep=" (", round(inertiapc[firstaxis],1),"%) "), y=paste0("Dimension ", lastaxis,sep=" (", round(inertiapc[lastaxis],1),"%)" )) +
scale_x_continuous(limits = c(xmin, xmax)) +
scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill="white", colour="black")) +
scale_color_manual(values=c("blue", "red")) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
geom_text_repel(data=FGcord, aes(colour=categ, label = labels), size = size2) +
theme(legend.position="none")+
ggtitle(" ")
grid.arrange(CAplot, ncol=1)
}
#cat("\nIncluding Beh's Confidence Ellipses\n")
################################################################################
if (ell==TRUE) {
cord1<-x$Cprinccoord*scaleplot #check here!!
cord2<-x$Rprinccoord/scaleplot
if ((x$catype=="DOCA")|(x$catype=="SOCA")|(x$catype=="SONSCA")|(x$catype=="DONSCA")){
cordr<-cord2
cordc<-cord1
cord1<-cordr
cord2<-cordc
}
vcaellipse(t.inertia=x$t.inertia,inertias=x$inertias[,1],inertiapc=x$inertias[,2],cord1=x$Rprinccoord,cord2=x$Cprinccoord,a=x$Rstdcoord,b=x$Cstdcoord,firstaxis=firstaxis,lastaxis=lastaxis,n=x$n,M=x$M,Imass=x$Imass,Jmass=x$Jmass)
}#end if ellipse
#library("plot3D")
if (plot3d==TRUE) {
#coordR<-cord1
#coordC<-cord2
inertiaper=x$inertias[,2]
caplot3d(coordR=cord1,coordC =cord2,inertiaper=x$inertias[,2])
}
}
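# A minimal sketch of how the S3 plot method might be called; `res` is assumed
# to be a fitted CAvariants object (hypothetical usage):
# plot(res, firstaxis = 1, lastaxis = 2, plottype = "biplot", biptype = "row")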
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/plot.CAvariants.R
|
print.CAvariants <-
function(x,printdims=2, ellcomp=TRUE,digits=3,...) {
#d <- min(printdims, x$r)
d<-printdims
if (d>x$r) { d<-x$r
cat("The maximum dimension number cannot be greater than the rank of the matrix\n")}
axnames <- character(length=d)
for (i in 1:d) { axnames[i] <- paste(" Axis",i) }
cat("\n RESULTS for",x$catype, "Correspondence Analysis\n")
cat("\n Data Matrix:\n")
print(x$Xtable)
cat("\n Row weights: Imass\n")
print(round(matrix(x$Imass,x$rows,x$rows,
dimnames=list(x$rowlabels,x$rowlabels)),digits=digits))
cat("\n Column Weights: Jmass\n")
print(round(matrix(x$Jmass,x$cols,x$cols,
dimnames=list(x$collabels,x$collabels)),digits=digits))
cat("\n Total inertia ", round(x$inertiasum,digits=digits), "\n\n") #for all 6 variants
if (x$catype=="CA"){
chi2<-x$inertiasum*sum(x$Xtable)
pvalueChi<-1 - pchisq(chi2, (nrow(x$Xtable)-1)*(ncol(x$Xtable)-1))
cat("\n Chi-squared ", round(chi2,digits=digits),"and p-value", pvalueChi, "\n\n")
}
#---------------------------------------------------------------------------
if ((x$catype=="CA")|(x$catype=="NSCA") ){
#cat("\n Total inertia ", round(x$inertiasum,digits=digits), "\n\n")
cat("The inertia values, their percentage contribution to the total inertia and
the cumulative percent inertias \n")
print(round(data.frame(x$inertias),digits=digits))
}
#----------------------------------------------------------------------------------------------
if ((x$catype=="DONSCA")|(x$catype=="SONSCA") ){
#cat("\n Total inertia ", round(x$inertiasum,digits=digits), "\n\n")
cat("Inertias, percent inertias and cumulative percent inertias of the row space\n\n")
print(round(data.frame(x$inertias),digits=digits))
cat("Inertias, percent inertias and cumulative percent inertias of the column space \n\n")
print(round(data.frame(x$inertias2),digits=digits))
cat("\n Absolute Contributions of Rows (per 100):\n")
ctrRow<-x$Imass%*%x$Rprinccoord^2%*%diag(1/x$inertias[,1])*100
print(round(ctrRow,digits=digits))
cat("\n Absolute Contributions of Columns (per 100):\n")
ctrCol<-x$Jmass%*%x$Cprinccoord^2%*%diag(1/x$inertias[,1])*100
print(round(ctrCol,digits=digits))
cat("\n Relative Contributions of Rows (per 100):\n")
ctrRowrel<-x$Rprinccoord^2/apply(x$Rprinccoord^2,1,sum)*100
printwithaxes(data.frame((ctrRowrel)[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
cat("\n Relative Contributions of Columns (per 100):\n")
ctrColrel<-x$Cprinccoord^2/apply(x$Cprinccoord^2,1,sum)*100
printwithaxes(data.frame((ctrColrel)[, 1:d], row.names=x$collabels), axnames,digits=digits)
}
#-----------------------------------------------------------------------------------------------
if ((x$catype=="DOCA")|(x$catype=="SOCA") ){
#cat("\n Total inertia ", round(x$inertiasum,digits=digits), "\n\n")
cat("Inertias, percent inertias and cumulative percent inertias of the row space\n\n")
print(round(data.frame(x$inertias),digits=digits))
ctrRow<-x$Imass%*%x$Rprinccoord^2%*%diag(1/x$inertias[,1])*100
cat("Inertias, percent inertias and cumulative percent inertias of the column space \n\n")
print(round(data.frame(x$inertias2),digits=digits))
ctrCol<-x$Jmass%*%x$Cprinccoord^2%*%diag(1/x$inertias[,1])*100
cat("\n Absolute Contributions of Columns (per 100):\n")
print(round(ctrCol,digits=digits))
cat("\n Absolute Contributions of Rows (per 100):\n")
print(round(ctrRow,digits=digits))
cat("\n Relative Contributions of Rows (per 100):\n")
ctrRowrel<-x$Rprinccoord^2/apply(x$Rprinccoord^2,1,sum)*100
printwithaxes(data.frame((ctrRowrel)[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
cat("\n Relative Contributions of Columns (per 100):\n")
ctrColrel<-x$Cprinccoord^2/apply(x$Cprinccoord^2,1,sum)*100
printwithaxes(data.frame((ctrColrel)[, 1:d], row.names=x$collabels), axnames,digits=digits)
}
#############################################################
if ((x$catype=="NSCA")||(x$catype=="DONSCA")||(x$catype=="SONSCA")){
cat("\n Predictability Index for Variants of Non symmetrical Correspondence Analysis:\n")
cat("\n Numerator of Tau Index predicting the row categories from the column categories\n\n")
print(round(x$inertiasum,digits=digits))
cat("\n Tau Index predicting the row categories from the column categories\n\n")
print(round(x$inertiasum/x$tauden,digits=digits))
Cstatistic<-(sum(x$Xtable)-1)*(nrow(x$Xtable)-1)*x$tau
pvalueC<-1 - pchisq(Cstatistic, (nrow(x$Xtable)-1)*(ncol(x$Xtable)-1))
cat("\n C-statistic", Cstatistic, "and p-value", pvalueC, "\n")
}
if ((x$catype=="DOCA")|(x$catype=="DONSCA")){
cat("\n Inertia by the Bivariate Moment Decomposition \n
\n** Row Inertia Values ** \n")
#print(round(x$comps$compsC,digits=digits))
print(round(x$inertias2[,-1],digits=digits))
cat("** Column Inertia Values ** \n")
print(round(x$inertias[,-1],digits=digits))
cat("\n Generalized correlation matrix of Bivariate Moment Decomposition\n")
print(round(x$Z,digits=digits))
cat("\n Column standard polynomial coordinates = column polynomial axes \n")
printwithaxes(data.frame(x$Cstdcoord[, 1:d], row.names=x$collabels), axnames,digits=digits)
cat("\n Row standard polynomial coordinates = row polynomial axes \n")
printwithaxes(data.frame(x$Rstdcoord[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
cat("\n Column principal polynomial coordinates \n")
printwithaxes(data.frame(x$Cprinccoord[, 1:d], row.names=x$collabels), axnames,digits=digits)
cat("\n Row principal polynomial coordinates \n")
printwithaxes(data.frame(x$Rprinccoord[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
}
if ((x$catype=="SOCA")|(x$catype=="SONSCA")){
cat(" Inertia by Singly Ordered Analysis
** Column Inertia Value ** \n")
print(round(x$comps$comps,digits=digits))
cat("\n Generalized correlation matrix of Hybrid Decomposition\n")
print(round(x$Z,digits=digits))
cat("\n Column standard polynomial coordinates \n")
printwithaxes(data.frame(x$Cstdcoord[, 1:d], row.names=x$collabels), axnames,digits=digits)
cat("\n Row principal polynomial coordinates \n")
printwithaxes(data.frame(x$Rprinccoord[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
}
if ((x$catype=="CA")|(x$catype=="NSCA")){
cat("\n Column standard coordinates \n")
printwithaxes(data.frame(x$Cstdcoord[, 1:d], row.names=x$collabels), axnames,digits=digits)
cat("\n Row standard coordinates \n")
printwithaxes(data.frame(x$Rstdcoord[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
cat("\n Column principal coordinates \n")
printwithaxes(data.frame(x$Cprinccoord[, 1:d], row.names=x$collabels), axnames,digits=digits)
cat("\n Row principal coordinates \n")
printwithaxes(data.frame(x$Rprinccoord[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
cat("\n Absolute Contributions of Rows (per 100):\n")
ctrRow<-x$Imass%*%x$Rprinccoord^2%*%diag(1/x$inertias[,1])*100
print(round(ctrRow,digits=digits))
cat("\n Absolute Contributions of Columns (per 100):\n")
ctrCol<-x$Jmass%*%x$Cprinccoord^2%*%diag(1/x$inertias[,1])*100
print(round(ctrCol,digits=digits))
cat("\n Relative Contributions of Rows (per 100):\n")
ctrRowrel<-x$Rprinccoord^2/apply(x$Rprinccoord^2,1,sum)*100
printwithaxes(data.frame((ctrRowrel)[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
cat("\n Relative Contributions of Columns (per 100):\n")
ctrColrel<-x$Cprinccoord^2/apply(x$Cprinccoord^2,1,sum)*100
printwithaxes(data.frame((ctrColrel)[, 1:d], row.names=x$collabels), axnames,digits=digits)
}
cat("\n Column distances from the origin of the plot\n")
printwithaxes(data.frame((x$Cprinccoord^2)[, 1:d], row.names=x$collabels), axnames,digits=digits)
cat("\n Row distances from the origin of the plot \n")
printwithaxes(data.frame((x$Rprinccoord^2)[, 1:d], row.names=x$rowlabels), axnames,digits=digits)
cat("\n Inner product of coordinates (first two axes when 'firstaxis=1' and 'lastaxis=2') \n")
print(round(x$Innprod,digits=digits))
#-------------------------------------------------------------
#---------------------------------p-value ellipses
if (ellcomp==TRUE){
cat("\n Eccentricity of ellipses\n")
print(round(x$eccentricity,digits=digits))
cat("\n Ellipse axes, Area, p-values of rows\n")
print(round(x$row.summ,digits=digits))
cat("\n Ellipse axes, Area, p-values of columns\n")
print(round(x$col.summ,digits=digits))
}#end ell
else{ }
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/print.CAvariants.R
|
printwithaxes <-
function(x, thenames,digits=3) {
names(x) <- thenames
print(round(x,digits=digits))
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/printwithaxes.R
|
socabasic <-
function (Xtable,mj)
{
n<-sum(Xtable)
x <- Xtable/n
rsums <- as.vector(rowSums(x))
csums <- as.vector(colSums(x))
di<-diag(rsums)
dj<-diag(csums)
Bpoly <- emerson.poly(mj, csums)$B
# Bpoly <- orthopoly.exe(c(csums))[,-1]
Bpoly2 <- sqrt(dj) %*% Bpoly
######################################################
drm1 <- diag( 1/( rsums + (rsums==0) ) * (1-(rsums==0)) )
dcm1 <- diag( 1/( csums + (csums==0) ) * (1-(csums==0)) )
drmh <- sqrt(drm1)
dcmh <- sqrt(dcm1)
ratio <- drmh %*% ( x - rsums %*% t(csums) ) %*% dcmh*sqrt(n)
svdratio<-svd(ratio)
u<-svdratio$u
mu <- svdratio$d
R <- drm1 %*% x
C <- dcm1 %*% t(x)
Z <- t(u) %*% ratio %*% Bpoly2 #useful to check coordinates
ZtZ <- Z%*%t(Z)
tZZ <- t(Z)%*%Z
#rmax <- min(dim(xo)) - 1
#r<-rmax
#soca <- new("cabasicresults",
# RX=R,CX=C,Rweights=dcmh,Cweights=drmh,
# Raxes= Bpoly,Caxes=u,mu=mu,mu2=diag(tZZ),tauDen=0,catype="SOCA",Z=Z,ZtZ=ZtZ,tZZ=tZZ)
#------------------------------------------
ressoca<-(list(
RX=R,CX=C,Rweights=dcmh,Cweights=drmh,
Raxes= Bpoly,Caxes=u,mu=mu,mu2=diag(tZZ),tauDen=0,catype="SOCA",Z=Z,ZtZ=ZtZ,tZZ=tZZ))
return(ressoca)
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/socabasic.R
|
sonscabasic <-
function (Xtable,mj)
{
x <- Xtable/sum(Xtable)
rsums <- as.matrix(rowSums(x))
csums <- as.vector(colSums(x))
tauden <- 1 - sum(rsums^2)
drm1 <- diag( 1/( rsums + (rsums==0) ) * (1-(rsums==0)) )
dcm1 <- diag( 1/( csums + (csums==0) ) * (1-(csums==0)) )
drmh<-diag(rep(1,nrow(x))) #change the metric in NSCA
dcmh <- sqrt(dcm1)
dj <- diag(csums)
di <- diag(rsums)
uni <- matrix(1, 1, ncol(x))
uni1 <- rep(1, nrow(x))
Bpoly <- emerson.poly(mj, csums)$B
Bpoly2 <- sqrt(dj) %*% Bpoly
#pcc <- 1/sqrt(tauden)* ( x%*%dcm1 - rsums %*% (uni) )
pcc <- ( x%*%dcm1 - rsums %*% (uni) )
svdpccw<-svd(pcc%*%sqrt(dj))
u<-svdpccw$u
#mu<-svdpccw$d
#Z <- t(u) %*%pcc %*% sqrt(dj)%*%Bpoly2
Z <- t(u) %*%pcc %*% dj%*%Bpoly
ZtZ<-Z%*%t(Z)
tZZ<-t(Z)%*%Z
mu2<- diag(tZZ) #only the sum gives me the total inertia
mu<- diag(ZtZ) #these are coincident with each eigenvalue (mu^2)
#tau<-sum(mu)
#r<-rmax
#browser()
#sonsca <- new("cabasicresults",
# RX=pcc,CX=t(pcc),Rweights=dj,Cweights=diag(uni1),
# Raxes=Bpoly,Caxes=u,mu=mu,mu2=mu2,catype="SONSCA",tauDen=tauden,Z=Z,ZtZ=ZtZ,tZZ=tZZ)
ressonsca<-( list(RX=pcc,CX=t(pcc),Rweights=dj,Cweights=diag(uni1),
Raxes=Bpoly,Caxes=u,mu=mu,mu2=mu2,tauDen=tauden,catype="SONSCA",Z=Z,ZtZ=ZtZ,tZZ=tZZ))
return(ressonsca)
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/sonscabasic.R
|
summary.CAvariants <-
function(object,printdims = 3,digits = 3,...) {
cat("\n SUMMARY",object$catype, "Correspondence Analysis\n")
cat("\n Names of output objects\n")
print(names(object))
d <- min(printdims, object$r)
#---------------------------------------------------------------------------
if ((object$catype=="CA")|(object$catype=="NSCA") ){
cat("\n Total inertia ", round(object$inertiasum,digits = digits), "\n\n")
cat("Inertias, percent inertias and cumulative percent inertias of the row and column space\n\n")
print(round(data.frame(object$inertias),digits=digits))
}
#----------------------------------------------------------------------------------------------
if ((object$catype=="DONSCA")|(object$catype=="DOCA") ){
cat("\n Total inertia ", round(object$inertiasum,digits=digits), "\n\n")
cat("Inertias, percent inertias and cumulative percent inertias of the row space\n\n")
print(round(data.frame(object$inertias),digits=digits))
cat("Inertias, percent inertias and cumulative percent inertias of the column space \n\n")
print(round(data.frame(object$inertias2),digits=digits))
cat("\n Polynomial Components of Inertia \n
** Row Components ** \n")
print(round(object$comps$compsR,digits=digits))
cat("\n** Column Components ** \n")
print(round(object$comps$compsC,digits=digits))
}
#-----------------------------------------------------------------------------------------------
if ((object$catype=="SONSCA")|(object$catype=="SOCA") ){
cat("\n Total inertia ", round(object$inertiasum,digits=digits), "\n\n")
cat("Inertias, percent inertias and cumulative percent inertias of the row space\n\n")
print(round(data.frame(object$inertias),digits=digits))
cat("Inertias, percent inertias and cumulative percent inertias of the column space \n\n")
print(round(data.frame(object$inertias2),digits=digits))
cat("\n Polynomial Components of Inertia \n
** Column Components ** \n")
print(round(object$comps$comps,digits=digits))
}
#############################################################
if ((object$catype=="NSCA")||(object$catype=="DONSCA")||(object$catype=="SONSCA")){
cat("\n Predictability Index for Variants of Non symmetrical Correspondence Analysis:\n")
cat("\nTau Index predicting from column \n\n")
print(round(object$tau,digits=digits))
Cstatistic<-(sum(object$Xtable)-1)*(nrow(object$Xtable)-1)*object$tau
#browser()
pvalueC<-1 - pchisq(Cstatistic, (nrow(object$Xtable)-1)*(ncol(object$Xtable)-1))
cat("\n C-statistic", round(Cstatistic,digits=digits), "and p-value", pvalueC, "\n")
}
if ((object$catype=="DOCA")|(object$catype=="DONSCA")){
cat("\n Column standard polynomial coordinates \n")
print(round(data.frame(object$Cstdcoord[,1:d], row.names=object$collabels), digits=digits))
cat("\n Row standard polynomial coordinates \n")
print(round(data.frame(object$Rstdcoord[,1:d], row.names=object$rowlabels), digits=digits))
cat("\n Column principal polynomial coordinates \n")
print(round(data.frame(object$Cprinccoord[,1:d], row.names=object$collabels), digits=digits))
cat("\n Row principal polynomial coordinates \n")
print(round(data.frame(object$Rprinccoord[,1:d], row.names=object$rowlabels), digits=digits))
}
if ((object$catype=="SOCA")|(object$catype=="SONSCA")){
cat("\n Column standard polynomial coordinates \n")
print(round(data.frame(object$Cstdcoord[,1:d], row.names=object$collabels), digits=digits))
cat("\n Row standard coordinates \n")
print(round(data.frame(object$Rstdcoord[,1:d], row.names=object$rowlabels), digits=digits))
cat("\n Column principal coordinates \n")
print(round(data.frame(object$Cprinccoord[,1:d], row.names=object$collabels), digits=digits))
cat("\n Row principal polynomial coordinates \n")
print(round(data.frame(object$Rprinccoord[,1:d], row.names=object$rowlabels), digits=digits))
}
if ((object$catype=="CA")|(object$catype=="NSCA")){
cat("\n Column standard coordinates \n")
print(round(data.frame(object$Cstdcoord[,1:d], row.names=object$collabels), digits=digits))
cat("\n Row standard coordinates \n")
print(round(data.frame(object$Rstdcoord[,1:d], row.names=object$rowlabels), digits=digits))
cat("\n Column principal coordinates \n")
print(round(data.frame(object$Cprinccoord[,1:d], row.names=object$collabels), digits=digits))
cat("\n Row principal coordinates \n")
print(round(data.frame(object$Rprinccoord[,1:d], row.names=object$rowlabels), digits=digits))
}
#cat("\n Inner product of coordinates (first two axes) \n")
#print(round(object$Trend,digits=digits))
}
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/summary.CAvariants.R
|
trendplot <-
function(f,g, cex = 1, cex.lab = 0.8, main=" ",prop=0.5,posleg="right",xlab="First Axis",ylab="Second Axis")
{
#------------------------------------------------------------------------------
# f: scores for the ordered categories (horizontal axis); g: matrix of profiles/coordinates, one line drawn per row of g
#------------------------------------------------------------------------------
par(mar=c(5,4,4,8),xpd=TRUE)
nrows<-dim(g)[[1]]
ncols<-dim(g)[[2]]
leg.txt<-dimnames(g)[[2]]
leg.txt1 <-dimnames(g)[[1]]
colsymb<-c(1:nrows)
gt<-t(g)
plot(f,g[1,],type="b", ylim = range(gt[1:ncols,],g[1:nrows,])/0.5, xlab = xlab, ylab = ylab, cex = cex, cex.lab = cex.lab, main=main, col=1,xaxt="n")
axis(1, at= 1:ncols, labels=leg.txt)
#axis(1, at labels=leg.txt)
#abline(h=0,lty=3)
for (i in 1:(nrows)){
lines(f,g[i,],type="b",pch=i, col=i)
}
legend(x=posleg,legend=leg.txt1, inset=c(-0.4,0),col=c(1:nrows),pch=c(1:nrows),bty="o",cex=.8)
#----------------------------------------------------------------------------------------
}
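# A minimal sketch of trendplot(); `res` is assumed to be a fitted CAvariants
# object, mirroring the calls made inside plot.CAvariants():
# trendplot(res$mj, res$Innprod, main = "Column Profiles", xlab = "column scores")
# trendplot(res$mi, t(res$Innprod), main = "Row Profiles", xlab = "row scores")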
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/trendplot.R
|
#library(ggplot2)
#library(ggforce)
#library(plotly)
#library(CAvariants)
#vcaellipse<-
#function(t.inertia=t.inertia,inertias=inertias[,1],inertiapc=#x$inertias[,2],cord1=x$Rprinccoord,cord2=x$Cprinccoord,a=x$Rstdcoord,b=x$Cstdcoord,firstaxis=1,lastaxis=2,n=x$n,M=min(nrow(x$Xtable),ncol(x$Xtable))-1,Imass=x#$Imass,Jmass=x$Jmass){
#-----------------
vcaellipse<-function(t.inertia,inertias,inertiapc,cord1,cord2,a,b,firstaxis=1,lastaxis=2,n,M=2,Imass,Jmass){
#globalVariables(categ)
Inames<-thinglabels<-dimnames(cord1)[[1]]
Jnames<-varlabels<-dimnames(cord2)[[1]]
nthings<-I<-dim(cord1)[1]
nvars<-J<-dim(cord2)[1]
dmu<-sqrt(inertias)
alpha=0.05
#-----------------------------axis ellipses
chisq.val <- qchisq(1 - alpha, df = (I - 1) * (J - 1))
hlax1.row <- vector(mode = "numeric", length = I)
hlax2.row <- vector(mode = "numeric", length = I)
hlax1.col <- vector(mode = "numeric", length = J)
hlax2.col <- vector(mode = "numeric", length = J)
#browser()
if (M > 2) {
for (i in 1:I) {
hlax1.row[i] <- dmu[1] * sqrt(abs((chisq.val/(t.inertia)) *
(1/Imass[i,i] - sum(a[i, 3:M]^2))))
hlax2.row[i] <- dmu[2] * sqrt(abs((chisq.val/(t.inertia)) *
(1/Imass[i,i] - sum(a[i, 3:M]^2))))
}
for (j in 1:J) {
hlax1.col[j] <- dmu[1] * sqrt(abs((chisq.val/(t.inertia)) *
(1/Jmass[j,j] - sum(b[j, 3:M]^2))))
hlax2.col[j] <- dmu[2] * sqrt(abs((chisq.val/(t.inertia)) *
(1/Jmass[j,j] - sum(b[j, 3:M]^2))))
}
}
if (M == 2) {
for (i in 1:I) {
hlax1.row[i] <- dmu[1] * sqrt(abs((chisq.val/(t.inertia)) *
(1/(Imass)[i,i])))
hlax2.row[i] <- dmu[2] * sqrt(abs((chisq.val/(t.inertia)) *
(1/(Imass)[i,i])))
}
for (j in 1:J) {
hlax1.col[j] <- dmu[1] * sqrt(abs((chisq.val/(t.inertia)) *
(1/(Jmass)[j,j])))
hlax2.col[j] <- dmu[2] * sqrt(abs((chisq.val/(t.inertia)) *
(1/(Jmass)[j,j])))
}
}
#---------------------------------------------------
#faxis <- data.frame(coord=c(hlax1.row,hlax2.row), labels=thinglabels, categ=rep("rows", nthings)) # build a dataframe to be used as input for plotting via ggplot2
#gaxis <- data.frame(coord=c(hlax1.col,hlax2.col), labels=varlabels, categ=rep("cols", nvars)) # build a dataframe to be used as input for plotting via ggplot2
faxis <- cbind(hlax1.row,hlax2.row)
gaxis <- cbind(hlax1.col,hlax2.col)
categ<-NULL
frows <- data.frame(coord=cord1, labels=thinglabels, categ=rep("rows", nthings)) # build a dataframe to be used as input for plotting via ggplot2
gcols <- data.frame(coord=cord2, labels=varlabels, categ=rep("cols", nvars)) # build a dataframe to be used as input for plotting via ggplot2
#-------------------------------------------------------------
FGaxis<-rbind(faxis,gaxis)
FGcord <- rbind(frows, gcols) # build a dataframe to be used as input for plotting via
xmin <- min(FGcord[,firstaxis],FGcord[,lastaxis])
xmax <- max(FGcord[,firstaxis],FGcord[,lastaxis])
ymin <- min(FGcord[,lastaxis],FGcord[,firstaxis])
ymax <- max(FGcord[,lastaxis],FGcord[,firstaxis])
ellplot <- ggplot(FGcord, aes(x=FGcord[,firstaxis], y=FGcord[,lastaxis])) +
geom_point(aes(colour=categ, shape=categ), size=1.5) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dimension ", firstaxis,sep=" (", round(inertiapc[firstaxis],1),"%)"), y=paste0("Dimension ", lastaxis,sep=" (", round(inertiapc[lastaxis],1),"%)" )) +
scale_x_continuous(limits = c(xmin, xmax)) +
scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill="white", colour="black")) +
scale_color_manual(values=c("blue", "red")) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
geom_text_repel(data=FGcord, aes(colour=categ, label = labels), size = 3) +
theme(legend.position="none")+
geom_ellipse(data=NULL,aes(x0=FGcord[,1],y0=FGcord[,2],a=FGaxis[,1],b=FGaxis[,2],angle=0,col=categ), stat = "ellip",
position = "identity", n = 360, na.rm = FALSE, show.legend = NA,lty=2,
inherit.aes = TRUE)+
ggtitle(" ")
grid.arrange(ellplot, ncol=1)
#-------simple example
#ggplot() +
# geom_ellipse(aes(x0 = 0, y0 = 0, a = 1, b = 3, angle = 0, m1 = 3)) +
# coord_fixed()
#-------------------------------------------------------
#xmin <- min(gcols[,firstaxis],gcols[,lastaxis])
#xmax <- max(gcols[,firstaxis],gcols[,lastaxis])
#ymin <- min(gcols[,lastaxis],gcols[,firstaxis])
#ymax <- max(gcols[,lastaxis],gcols[,firstaxis])
xmin <- min(gcols[,firstaxis],gcols[,lastaxis],hlax1.col,hlax2.col)
xmax <- max(gcols[,firstaxis],gcols[,lastaxis],hlax1.col,hlax2.col)
ymin <- min(gcols[,firstaxis],gcols[,lastaxis],hlax1.col,hlax2.col)
ymax <- max(gcols[,firstaxis],gcols[,lastaxis],hlax1.col,hlax2.col)
ellcol<-ggplot(gcols,aes(x=gcols[,firstaxis], y=gcols[,lastaxis]))+
geom_point(aes(colour=categ, shape=categ), size=1.5) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dimension ", firstaxis,sep=" (", round(inertiapc[firstaxis],1),"%)"), y=paste0("Dimension ", lastaxis,sep=" (", round(inertiapc[lastaxis],1),"%)" )) +
# scale_x_continuous(limits = c(xmin, xmax)) +
# scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill="white", colour="black")) +
geom_text_repel(data=gcols, aes(colour=categ, label = labels), size = 3) +
scale_color_manual(values=c("blue", "blue")) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
theme(legend.position="none")+
geom_ellipse(data=NULL,aes(x0=gcols[,1],y0=gcols[,2],a=hlax1.col,b=hlax2.col,angle=0,col="blue"), stat = "ellip",
position = "identity", n = 360, na.rm = FALSE, show.legend = NA,
inherit.aes = TRUE,lty=2)+
ggtitle(" ")
grid.arrange(ellcol, ncol=1)
# ----------------------------------------------------ellipses on row categories
xmin <- min(frows[,firstaxis],frows[,lastaxis])
xmax <- max(frows[,firstaxis],frows[,lastaxis])
ymin <- min(frows[,lastaxis],frows[,firstaxis])
ymax <- max(frows[,lastaxis],frows[,firstaxis])
ellrow<-ggplot(frows,aes(x=frows[,firstaxis], y=frows[,lastaxis]))+
geom_point(aes(colour=categ, shape=categ), size=1.5) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dimension ", firstaxis,sep=" (", round(inertiapc[firstaxis],1),"%)"), y=paste0("Dimension ", lastaxis,sep=" (", round(inertiapc[lastaxis],1),"%)" )) +
#scale_x_continuous(limits = c(xmin, xmax)) +
#scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill="white", colour="black")) +
geom_text_repel(data=frows, aes(colour=categ, label = labels), size = 3) +
scale_color_manual(values=c("red", "red")) +
theme(legend.position="none")+
geom_ellipse(data=NULL,aes(x0=frows[,1],y0=frows[,2],a=hlax1.row,b=hlax2.row,angle=0,col="red"), stat = "ellip",
position = "identity", n = 50, na.rm = FALSE, show.legend = FALSE,
inherit.aes = TRUE,lty=2)+
ggtitle(" ")
grid.arrange(ellrow, ncol=1)
#---------------------------------p-values
eccentricity <- abs(1 - (dmu[2]^2/dmu[1]^2))^(1/2)
area.row <- vector(mode = "numeric", length = I)
area.col <- vector(mode = "numeric", length = J)
for (i in 1:I) {
area.row[i] <- 3.14159 * hlax1.row[i] * hlax2.row[i]
}
for (j in 1:J) {
area.col[j] <- 3.14159 * hlax1.col[j] * hlax2.col[j]
}
pvalrow <- vector(mode = "numeric", length = I)
pvalcol <- vector(mode = "numeric", length = J)
for (i in 1:I) {
if (M > 2) {
pvalrow[i] <- 1- pchisq(t.inertia *n* ((1/Imass[i,i] - sum(a[i,
3:M]^2))^(-1)) * (cord1[i, 1]^2/dmu[1]^2 + cord1[i,
2]^2/dmu[2]^2), df = (I - 1) * (J - 1))
}
else {
pvalrow[i] <- 1-pchisq(t.inertia*n * (Imass[i,i]) *
(cord1[i, 1]^2/dmu[1]^2 + cord1[i, 2]^2/dmu[2]^2),
df = (I - 1) * (J - 1))
}
}
for (j in 1:J) {
if (M > 2) {
pvalcol[j] <- 1-pchisq(t.inertia *n* ((1/Jmass[j,j] -
sum(b[j, 3:M]^2))^(-1)) * (cord2[j, 1]^2/dmu[1]^2 +
cord2[j, 2]^2/dmu[2]^2), df = (I - 1) * (J - 1))
}
else {
pvalcol[j] <- 1- pchisq(t.inertia *n* (Jmass[j,j]) *
(cord2[j, 1]^2/dmu[1]^2 + cord2[j, 2]^2/dmu[2]^2),
df = (I - 1) * (J - 1))
}
}
summ.name <- c("HL Axis 1", "HL Axis 2", "Area", "P-value")
row.summ <- cbind(hlax1.row, hlax2.row, area.row, pvalrow)
dimnames(row.summ) <- list(paste(Inames), paste(summ.name))
col.summ <- cbind(hlax1.col, hlax2.col, area.col, pvalcol)
dimnames(col.summ) <- list(paste(Jnames), paste(summ.name))
(list(eccentricity = eccentricity, row.summ = row.summ,
col.summ = col.summ))
}
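# A minimal sketch of vcaellipse(); `res` is assumed to be a fitted CAvariants
# object (hypothetical usage, mirroring the call made inside plot.CAvariants()
# when ell = TRUE):
# vcaellipse(t.inertia = res$t.inertia, inertias = res$inertias[, 1],
#            inertiapc = res$inertias[, 2], cord1 = res$Rprinccoord,
#            cord2 = res$Cprinccoord, a = res$Rstdcoord, b = res$Cstdcoord,
#            n = res$n, M = res$M, Imass = res$Imass, Jmass = res$Jmass)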
|
/scratch/gouwar.j/cran-all/cranData/CAvariants/R/vcaellipse.R
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
#' A C++ function to quantify sgRNA abundance from NGS samples.
#'
#' @param ref_path the path of the annotation file; it has to be a FASTA-formatted file.
#' @param fastq_path a list of the FASTQ files.
#' @param verbose Display some logs during the quantification if it is set to `TRUE`.
#'
#' @importFrom Rcpp evalCpp
#' @useDynLib CB2
#' @export
quant <- function(ref_path, fastq_path, verbose = FALSE) {
.Call('_CB2_quant', PACKAGE = 'CB2', ref_path, fastq_path, verbose)
}
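# A minimal sketch of calling quant() directly on the toy data shipped with the
# package (paths taken from the run_sgrna_quant() example below; normally
# quant() is called for you by run_sgrna_quant()):
# fasta <- system.file("extdata", "toydata", "small_sample.fasta", package = "CB2")
# fq    <- system.file("extdata", "toydata", "Base1.fastq", package = "CB2")
# res   <- quant(fasta, fq, verbose = FALSE)
# str(res)   # expected: a list with per-sgRNA counts and per-sample read totals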
#' A C++ function to perform a parameter estimation for the sgRNA-level test.
#' It will estimate two different parameters, `phat` and `vhat`,
#' and we assume the input count data follows the beta-binomial distribution.
#' Dr. Keith Baggerly initially implemented this code in Matlab,
#' and it has been rewritten in C++ for speed.
#'
#' @param xvec a matrix contains sgRNA read counts.
#' @param nvec a vector contains the library size.
#'
#' @importFrom Rcpp evalCpp
#' @useDynLib CB2
#' @export
fit_ab <- function(xvec, nvec) {
.Call('_CB2_fit_ab', PACKAGE = 'CB2', xvec, nvec)
}
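# A minimal sketch of fit_ab(), mirroring how measure_sgrna_stats() calls it:
# the count matrix comes from the bundled Evers et al. data and the library
# sizes are replicated across rows (hypothetical direct usage):
# data(Evers_CRISPRn_RT112)
# cnt  <- as.matrix(Evers_CRISPRn_RT112$count)
# libs <- matrix(colSums(cnt), nrow = nrow(cnt), ncol = ncol(cnt), byrow = TRUE)
# est  <- fit_ab(cnt, libs)   # estimated `phat` and `vhat` (see the documentation above)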
|
/scratch/gouwar.j/cran-all/cranData/CB2/R/RcppExports.R
|
#' A benchmark CRISPRn pooled screen data from Evers et al.
#'
#' @format The data object is a list and contains below information:
#' \describe{
#' \item{count}{The count matrix from Evers et al.'s paper and contains the CRISPRn screening result using RT112 cell-line.
#'   It contains three replicates for T0 (before) and three replicates for T1 (after).}
#' \item{egenes}{The list of 46 essential genes used in Evers et al.'s study.}
#' \item{ngenes}{The list of 47 non-essential genes used in Evers et al.'s study.}
#' \item{design}{The data.frame contains study design.}
#' \item{sg_stat}{The data.frame contains the sgRNA-level statistics.}
#' \item{gene_stat}{The data.frame contains the gene-level statistics.}
#' }
#'
#' @docType data
#'
#' @usage data(Evers_CRISPRn_RT112)
#'
#' @source \url{https://www.ncbi.nlm.nih.gov/pubmed/27111720}
"Evers_CRISPRn_RT112"
#' A benchmark CRISPRn pooled screen data from Sanson et al.
#'
#' @format The data object is a list and contains below information:
#' \describe{
#' \item{count}{The count matrix from Sanson et al.'s paper and contains the CRISPRn screening result using A375 cell-line.
#'   It contains a plasmid sample and three biological replicates after three weeks.}
#' \item{egenes}{The list of 1,580 essential genes used in Sanson et al.'s study.}
#' \item{ngenes}{The list of 927 non-essential genes used in Sanson et al.'s study.}
#' \item{design}{The data.frame contains study design.}
#' }
#'
#' @docType data
#'
#' @usage data(Sanson_CRISPRn_A375)
#'
#' @source \url{https://www.ncbi.nlm.nih.gov/pubmed/30575746}
"Sanson_CRISPRn_A375"
|
/scratch/gouwar.j/cran-all/cranData/CB2/R/data.R
|
#' A function to run a sgRNA quantification algorithm from NGS sample
#'
#' @param lib_path The path of the FASTA file.
#' @param design A table that contains the study design. It must contain `fastq_path` and `sample_name` columns.
#' @param map_path The path of file contains gene-sgRNA mapping.
#' @param ncores The number of processors to be used for the parallelization.
#' Parallelization will be enabled if the parameter is set to `-1`
#' (meaning all physical cores will be used) or to a value greater than `1`.
#' @param verbose Display some logs during the quantification if it is set to `TRUE`.
#'
#' @importFrom tools file_ext
#' @importFrom readr read_csv read_tsv
#' @importFrom dplyr left_join
#' @importFrom parallel detectCores makeCluster clusterExport clusterApply stopCluster
#' @importFrom R.utils gunzip
#' @return It will return a list, and the list contains three elements.
#' The first element (`count') is a data frame that contains the result of the quantification for each sample.
#' The second element (`total') is a numeric vector that contains the total number of reads of each sample.
#' The last element (`sequence') is a data frame that contains the sequence of each sgRNA in the library.
#'
#' @examples
#' library(CB2)
#' library(magrittr)
#' library(tibble)
#' library(dplyr)
#' library(glue)
#' FASTA <- system.file("extdata", "toydata", "small_sample.fasta", package = "CB2")
#' ex_path <- system.file("extdata", "toydata", package = "CB2")
#'
#' df_design <- tribble(
#' ~group, ~sample_name,
#' "Base", "Base1",
#' "Base", "Base2",
#' "High", "High1",
#' "High", "High2") %>%
#' mutate(fastq_path = glue("{ex_path}/{sample_name}.fastq"))
#'
#' cb2_count <- run_sgrna_quant(FASTA, df_design)
#'
#' @export
run_sgrna_quant <- function(lib_path,
design,
map_path = NULL,
ncores = 1,
verbose = FALSE) {
    # `design` has to be a table with at least the `sample_name` and `fastq_path` columns
    # (a `group` column is typically included as well for the downstream tests).
if(!all(c("sample_name", "fastq_path") %in% colnames(design))) {
stop("The design data frame should have both `sample_name` and `fastq_path` columns.")
}
if(!file.exists(lib_path)) {
stop("The library annotation file (FASTA) does not exist.")
}
if(!is.null(map_path) && !file.exists(map_path)) {
stop("The mapping file does not exist.")
}
if(!is.null(map_path)&&!tolower(file_ext(map_path)) %in% c("csv", "tsv")) {
stop("The mapping file should be either CSV or TSV file.")
}
  if(!all(file.exists(design$fastq_path))) {
    stop("Some of the sample FASTQ files do not exist.")
}
lib_path <- normalizePath(lib_path)
design$fastq_path <- normalizePath(design$fastq_path)
is_gzipped <- endsWith(tolower(design$fastq_path), ".gz")
quant_ret <- NULL
fastq_path <- design$fastq_path
for(i in seq_along(is_gzipped)) {
if(is_gzipped[i]) {
tmp_path <- tempfile(fileext = '.fastq')
R.utils::gunzip(fastq_path[i], tmp_path, remove=FALSE)
fastq_path[i] <- tmp_path
}
}
if(ncores == -1 || ncores >= 2) {
max_cores <- detectCores(logical = F)
if(ncores == -1) {
ncores <- max_cores
}
cl <- makeCluster( min(ncores, max_cores) )
clusterExport(cl, "lib_path", envir = environment() )
clusterExport(cl, "fastq_path", envir = environment() )
clusterExport(cl, "is_gzipped", envir = environment() )
clusterExport(cl, "verbose", envir = environment() )
tmp <- clusterApply(cl, x = seq_along(fastq_path), function(i) {
CB2::quant(lib_path, fastq_path[i], verbose)
})
stopCluster(cl)
quant_ret$sgRNA <- tmp[[1]]$sgRNA
quant_ret$sequence <- tmp[[1]]$sequence
quant_ret$count <- sapply(tmp, function(x) x$count)
quant_ret$total <- sapply(tmp, function(x) x$total)
} else {
    quant_ret <- quant(lib_path, fastq_path, verbose)
}
if(is.null(map_path)) {
df_count <- as.data.frame(quant_ret$count)
rownames(df_count) <- quant_ret$sgRNA
colnames(df_count) <- design$sample_name
} else {
df_count <- as.data.frame(quant_ret$count)
colnames(df_count) <- design$sample_name
df_count <- cbind(
data.frame(id = quant_ret$sgRNA),
df_count
)
if(tolower(file_ext(map_path)) == "csv") {
df_map <- read_csv(map_path)
} else {
df_map <- read_tsv(map_path)
}
df_count <-
left_join(df_map,
df_count, by = "id")
}
total <- quant_ret$total
names(total) <- design$sample_name
sequence <- data.frame(sgRNA = quant_ret$sgRNA, sequence=quant_ret$sequence)
list(count = df_count, total = total, sequence = sequence)
}
#' A function to perform a statistical test at a sgRNA-level, deprecated.
#' @param sgcount This data frame contains read counts of sgRNAs for the samples.
#' @param design This table contains the study design. It has to contain `group`.
#' @param group_a The first group to be tested.
#' @param group_b The second group to be tested.
#' @param delim The delimiter between a gene name and a sgRNA ID. It will be used only if the rownames contain the sgRNA IDs.
#' @param ge_id The column name of the gene column.
#' @param sg_id The column/columns of sgRNA identifiers.
#' @return A table contains the sgRNA-level test result, and the table contains these columns:
#' \itemize{
#' \item `sgRNA': The sgRNA identifier.
#' \item `gene': The gene is the target of the sgRNA
#' \item `n_a': The number of replicates of the first group.
#' \item `n_b': The number of replicates of the second group.
#' \item `phat_a': The proportion value of the sgRNA for the first group.
#' \item `phat_b': The proportion value of the sgRNA for the second group.
#' \item `vhat_a': The variance of the sgRNA for the first group.
#' \item `vhat_b': The variance of the sgRNA for the second group.
#' \item `cpm_a': The mean CPM of the sgRNA within the first group.
#' \item `cpm_b': The mean CPM of the sgRNA within the second group.
#' \item `logFC': The log fold change of sgRNA between two groups.
#' \item `t_value': The value for the t-statistics.
#' \item `df': The value of the degree of freedom, and will be used to calculate the p-value of the sgRNA.
#' \item `p_ts': The p-value indicates a difference between the two groups.
#' \item `p_pa': The p-value indicates enrichment of the first group.
#' \item `p_pb': The p-value indicates enrichment of the second group.
#' \item `fdr_ts': The adjusted P-value of `p_ts'.
#' \item `fdr_pa': The adjusted P-value of `p_pa'.
#' \item `fdr_pb': The adjusted P-value of `p_pb'.
#' }
#' @export
run_estimation <- function( sgcount, design,
group_a, group_b,
delim = "_",
ge_id = NULL,
sg_id = NULL) {
.Deprecated('measure_sgrna_stats')
measure_sgrna_stats(
sgcount, design,
group_a, group_b,
delim,
ge_id,
sg_id
)
}
#' A function to perform a statistical test at a sgRNA-level
#' @param sgcount This data frame contains read counts of sgRNAs for the samples.
#'
#' @param design This table contains the study design. It has to contain `group`.
#' @param group_a The first group to be tested.
#' @param group_b The second group to be tested.
#' @param delim The delimiter between a gene name and a sgRNA ID. It will be used only if the rownames contain the sgRNA IDs.
#' @param ge_id The column name of the gene column.
#' @param sg_id The column/columns of sgRNA identifiers.
#'
#' @importFrom magrittr %>%
#' @importFrom tibble as_tibble
#' @importFrom dplyr arrange_
#' @importFrom stats p.adjust pt
#' @return A table contains the sgRNA-level test result, and the table contains these columns:
#' \itemize{
#' \item `sgRNA': The sgRNA identifier.
#'  \item `gene': The gene targeted by the sgRNA.
#' \item `n_a': The number of replicates of the first group.
#' \item `n_b': The number of replicates of the second group.
#' \item `phat_a': The proportion value of the sgRNA for the first group.
#' \item `phat_b': The proportion value of the sgRNA for the second group.
#' \item `vhat_a': The variance of the sgRNA for the first group.
#' \item `vhat_b': The variance of the sgRNA for the second group.
#' \item `cpm_a': The mean CPM of the sgRNA within the first group.
#' \item `cpm_b': The mean CPM of the sgRNA within the second group.
#' \item `logFC': The log fold change of sgRNA between two groups.
#'   \item `t_value': The value of the t-statistic.
#'   \item `df': The degrees of freedom used to calculate the p-value of the sgRNA.
#'   \item `p_ts': The p-value indicating a difference between the two groups.
#'   \item `p_pa': The p-value indicating enrichment of the first group.
#'   \item `p_pb': The p-value indicating enrichment of the second group.
#' \item `fdr_ts': The adjusted P-value of `p_ts'.
#' \item `fdr_pa': The adjusted P-value of `p_pa'.
#' \item `fdr_pb': The adjusted P-value of `p_pb'.
#' }
#' @examples
#' library(CB2)
#' data(Evers_CRISPRn_RT112)
#' measure_sgrna_stats(Evers_CRISPRn_RT112$count, Evers_CRISPRn_RT112$design, "before", "after")
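#'
#' # A sketch (not one of the original examples): the same test using the
#' # alternative input form with explicit gene/sgRNA columns; the column
#' # names below are illustrative.
#' df_alt <- data.frame(gene  = sub("_.*$", "", rownames(Evers_CRISPRn_RT112$count)),
#'                      sgRNA = rownames(Evers_CRISPRn_RT112$count),
#'                      Evers_CRISPRn_RT112$count, check.names = FALSE)
#' measure_sgrna_stats(df_alt, Evers_CRISPRn_RT112$design, "before", "after",
#'                     ge_id = "gene", sg_id = "sgRNA")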
#'
#' @export
measure_sgrna_stats <- function(sgcount, design,
group_a, group_b,
delim = "_",
ge_id = NULL,
sg_id = NULL) {
if(!is.data.frame(sgcount) && !is.matrix(sgcount)) {
stop("sgcount has to be either a data.frame or a matrix.")
}
if(!all(c("sample_name", "group") %in% colnames(design))) {
stop("The design table should have both `sample_name` and `group` columns.")
}
if(!all(design$sample_name %in% colnames(sgcount))) {
stop("Some samples are missing in sgcount.")
}
if(!all(group_a %in% design$group)) {
stop("group_a must be one of the groups in the design data frame.")
}
if(!all(group_b %in% design$group)) {
stop("group_b must be one of the groups in the design data frame.")
}
if(is.matrix(sgcount)) {
if(!is.null(ge_id)||!is.null(sg_id)) {
stop("ge_id and sg_id should be NULL if sgcount is a matrix.")
}
}
if(xor(is.null(ge_id), is.null(sg_id))) {
stop("Both of ge_id and sg_id should be null or non-null.")
}
cname <- colnames(sgcount)
if(is.null(ge_id)) {
if(!all(sapply(sgcount, class) == "numeric")) {
stop(
paste0("sgcount contains some character columns. ",
"It may need to specify both ge_id and sg_id."))
}
sgcount <- as.matrix(sgcount)
cnt_delim <- stringr::str_count(rownames(sgcount), delim)
if(!all(cnt_delim == 1)) {
stop("Every rownames should contains exact one delimiter.")
}
}
else {
if(length(ge_id) != 1) {
stop("ge_id should be a single character value.")
}
if(!(ge_id %in% cname)) {
stop("The ge_id column was not found in sgcount.")
}
if(!all(sg_id %in% cname)) {
stop("The sg_id column(s) were not found in sgcount.")
}
}
group_a <- which(cname %in% design$sample_name[design$group == group_a])
group_b <- which(cname %in% design$sample_name[design$group == group_b])
sgcount_a <- as.matrix(sgcount[, group_a])
sgcount_b <- as.matrix(sgcount[, group_b])
nmat_a <- rep.row(colSums(sgcount_a), nrow(sgcount_a))
nmat_b <- rep.row(colSums(sgcount_b), nrow(sgcount_b))
est_a <- fit_ab(sgcount_a, nmat_a)
est_b <- fit_ab(sgcount_b, nmat_b)
# if a gene column is provided, take the IDs from the columns; otherwise parse the row names with `delim'
if(is.null(ge_id)) {
est <- data.frame(sgRNA = rownames(sgcount), stringsAsFactors=F) %>% as_tibble()
est$gene <- stringr::str_split(est$sgRNA, delim, simplify = T)[, 1]
} else {
# build the gene and sgRNA columns directly from the specified columns
est <- data.frame(gene = sgcount[,ge_id])
est <- cbind(est, sgcount[,sg_id])
}
est$n_a <- length(group_a)
est$n_b <- length(group_b)
est$phat_a <- est_a$phat
est$vhat_a <- est_a$vhat
est$phat_b <- est_b$phat
est$vhat_b <- est_b$vhat
est$cpm_a <- rowMeans(sgcount_a/nmat_a * 10^6)
est$cpm_b <- rowMeans(sgcount_b/nmat_b * 10^6)
# If `ncol(sgcount_a)' or `ncol(sgcount_b)' is 1, `vhat_a' or `vhat_b' will only contain NA.
# In this case we need to treat all the NA values as 0
# for the downstream calculations.
est$vhat_a[is.na(est$vhat_a)] <- 0
est$vhat_b[is.na(est$vhat_b)] <- 0
est$logFC <- log2(est$cpm_b + 1) - log2(est$cpm_a + 1)
zero_var <- 1 * ((est$vhat_a == 0) & (est$vhat_b == 0))
eps <- .Machine$double.eps
est$t_value <- (est$phat_b - est$phat_a)/sqrt(est$vhat_a + est$vhat_b + eps * zero_var)
est$df <- ((est$vhat_a + est$vhat_b)^2 + zero_var)/((est$vhat_a^2)/max(1,est$n_a - 1) + (est$vhat_b^2)/max(1,est$n_b - 1) + zero_var)
est$df[est$df == 0] <- 1
est$df[is.nan(est$df)] <- 1
est$p_ts <- 2 * pt(-abs(est$t_value), df = est$df)
est$p_pa <- pt(est$t_value, df = est$df)
est$p_pb <- pt(-est$t_value, df = est$df)
est$fdr_ts <- p.adjust(est$p_ts, method = "fdr")
est$fdr_pa <- p.adjust(est$p_pa, method = "fdr")
est$fdr_pb <- p.adjust(est$p_pb, method = "fdr")
est
}
#' A function to perform gene-level test using a sgRNA-level statistics.
#'
#' @param sgrna_stat A data frame created by `measure_sgrna_stats'
#' @param logFC_level The level at which `logFC' is computed. It can be `gene' or `sgRNA'.
#'
#' @importFrom magrittr %>%
#' @importFrom tibble tibble
#' @import dplyr
#' @importFrom stats p.adjust
#'
#' @return A table containing the gene-level test results with the following columns:
#' \itemize{
#'   \item `gene': The gene name being tested.
#'   \item `n_sgrna': The number of sgRNAs targeting the gene in the library.
#'   \item `cpm_a': The mean of CPM of sgRNAs within the first group.
#'   \item `cpm_b': The mean of CPM of sgRNAs within the second group.
#'   \item `logFC': The log fold change of the gene between the two groups. By default it is the mean of the sgRNA-level `logFC' values; if the `logFC_level' parameter is set to `gene', it is calculated as `log2(cpm_b+1) - log2(cpm_a+1)'.
#'   \item `p_ts': The p-value indicating a difference between the two groups at the gene level.
#'   \item `p_pa': The p-value indicating enrichment of the first group at the gene level.
#'   \item `p_pb': The p-value indicating enrichment of the second group at the gene level.
#' \item `fdr_ts': The adjusted P-value of `p_ts'.
#' \item `fdr_pa': The adjusted P-value of `p_pa'.
#' \item `fdr_pb': The adjusted P-value of `p_pb'.
#' }
#'
#' @examples
#' data(Evers_CRISPRn_RT112)
#' measure_gene_stats(Evers_CRISPRn_RT112$sg_stat)
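#'
#' # A sketch: compute the gene-level logFC as log2(cpm_b + 1) - log2(cpm_a + 1)
#' # instead of the default mean of the sgRNA-level logFC values.
#' measure_gene_stats(Evers_CRISPRn_RT112$sg_stat, logFC_level = "gene")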
#'
#' @export
measure_gene_stats <- function(sgrna_stat, logFC_level = 'sgRNA') {
if(!all(c("gene", "cpm_a", "cpm_b", "logFC", "p_ts", "p_pa", "p_pb") %in% colnames(sgrna_stat))) {
stop("It looks like `sgrna_stat` does not contain any result of a statistical test.")
}
if(!(logFC_level %in% c("gene", "sgRNA"))) {
stop("`logFC_level` has to be 'gene' or 'sgRNA'.")
}
ret <- sgrna_stat %>%
group_by(.data$gene) %>%
summarize(
n_sgrna = n(),
cpm_a = mean(.data$cpm_a),
cpm_b = mean(.data$cpm_b),
logFC = mean(.data$logFC),
p_ts = ifelse(n() > 1, metap::sumlog(.data$p_ts)$p, sum(.data$p_ts)),
p_pa = ifelse(n() > 1, metap::sumlog(.data$p_pa)$p, sum(.data$p_pa)),
p_pb = ifelse(n() > 1, metap::sumlog(.data$p_pb)$p, sum(.data$p_pb))
) %>%
ungroup() %>%
mutate(
fdr_ts = p.adjust(.data$p_ts, method = "fdr"),
fdr_pa = p.adjust(.data$p_pa, method = "fdr"),
fdr_pb = p.adjust(.data$p_pb, method = "fdr")
)
if(logFC_level != 'sgRNA') {
ret %>%
mutate(logFC = (log2(.data$cpm_b+1) - log2(.data$cpm_a+1)))
} else {
ret
}
}
|
/scratch/gouwar.j/cran-all/cranData/CB2/R/helpers.R
|
# Repeat the vector x row-wise to build an n-row matrix.
rep.row <- function(x, n) matrix(rep(x, each = n), nrow = n)
#' A function to normalize sgRNA read counts.
#'
#' This function calculates the CPM (Counts Per Million) of every numeric column.
#'
#' @param sgcount The input table contains read counts of sgRNAs for each sample.
#'
#' @return A normalized CPM table will be returned.
#'
#' @examples
#' library(CB2)
#' data(Evers_CRISPRn_RT112)
#' get_CPM(Evers_CRISPRn_RT112$count)
#'
#' @export
get_CPM <- function(sgcount) {
cols <- which(sapply(sgcount, class) == "numeric")
nmat <- rep.row(colSums(sgcount[,cols]), nrow(sgcount[,cols]))
sgcount[,cols] <- sgcount[,cols]/nmat * 10^6
sgcount
}
#' A function to plot the first two principal components of samples.
#'
#' This function will perform a principal component analysis, and it returns a ggplot object of the PCA plot.
#'
#' @param sgcount The input matrix contains read counts of sgRNAs for each sample.
#' @param df_design The table contains a study design.
#'
#' @importFrom magrittr %>%
#' @return A ggplot2 object containing a PCA plot for the input.
#'
#' @examples
#' library(CB2)
#' data(Evers_CRISPRn_RT112)
#' plot_PCA(Evers_CRISPRn_RT112$count, Evers_CRISPRn_RT112$design)
#'
#' @export
plot_PCA <- function(sgcount, df_design) {
cols <- which(sapply(sgcount, class) == "numeric")
pca_obj <- sgcount[,cols] %>% t %>% prcomp
importance <- summary(pca_obj)$importance
prop_pc1 <- importance[2,1]
prop_pc2 <- importance[2,2]
pca_obj$x %>% as.data.frame() %>%
tibble::rownames_to_column("sample_name") %>%
dplyr::left_join(df_design, by = "sample_name") %>%
ggplot2::ggplot(ggplot2::aes_string(x = "PC1", y = "PC2")) +
ggplot2::geom_point(ggplot2::aes_string(color = "group"), size = 2) +
ggplot2::geom_text(ggplot2::aes_string(label = "sample_name")) +
ggplot2::xlab(sprintf("PC1 (%.2f%%)", prop_pc1*100)) +
ggplot2::ylab(sprintf("PC2 (%.2f%%)", prop_pc2*100))
}
#' A function to show a heatmap of sgRNA-level correlations of the NGS samples.
#' @param sgcount The input matrix contains read counts of sgRNAs for each sample.
#' @param df_design The table contains a study design.
#' @param cor_method A string specifying the correlation measure: one of "pearson", "kendall", or "spearman".
#'
#' @importFrom magrittr %>%
#' @importFrom pheatmap pheatmap
#' @importFrom stats cor
#' @return A pheatmap object containing the correlation heatmap.
#'
#' @examples
#' library(CB2)
#' data(Evers_CRISPRn_RT112)
#' plot_corr_heatmap(Evers_CRISPRn_RT112$count, Evers_CRISPRn_RT112$design)
#' @export
plot_corr_heatmap <- function(sgcount, df_design, cor_method = "pearson") {
cols <- which(sapply(sgcount, class) == "numeric")
sgcount[,cols] %>% cor(method = cor_method) %>%
pheatmap::pheatmap(display_numbers = T,
number_format = "%.2f",
annotation_col = df_design %>%
tibble::column_to_rownames("sample_name") %>%
dplyr::select_("group"))
}
#' A function to calculate the mappability of each NGS sample.
#'
#' @param count_obj A list object created by `run_sgrna_quant`.
#' @param df_design The table contains a study design.
#' @importFrom magrittr %>%
#' @examples
#' library(CB2)
#' library(magrittr)
#' library(tibble)
#' library(dplyr)
#' library(glue)
#' FASTA <- system.file("extdata", "toydata", "small_sample.fasta", package = "CB2")
#' ex_path <- system.file("extdata", "toydata", package = "CB2")
#'
#' df_design <- tribble(
#' ~group, ~sample_name,
#' "Base", "Base1",
#' "Base", "Base2",
#' "High", "High1",
#' "High", "High2") %>%
#' mutate(fastq_path = glue("{ex_path}/{sample_name}.fastq"))
#'
#' cb2_count <- run_sgrna_quant(FASTA, df_design)
#' calc_mappability(cb2_count, df_design)
#'
#' @export
calc_mappability <- function(count_obj, df_design) {
csum <- count_obj$count %>% colSums()
mp <- csum/count_obj$total * 100
df_design %>%
dplyr::mutate_(total_reads = ~count_obj$total) %>%
dplyr::mutate_(mapped_reads = ~csum) %>%
dplyr::mutate_(mappability = ~mp) %>%
dplyr::select_(.dots = c("-fastq_path"))
}
#' A function to join a count table and a design table.
#'
#' @param sgcount The input matrix contains read counts of sgRNAs for each sample.
#' @param df_design The table contains a study design.
#' @importFrom magrittr %>%
#' @importFrom tibble tibble
#' @return A long-format table combining the sgRNA read counts and the study design will be returned.
#'
#' @examples
#' library(CB2)
#' data(Evers_CRISPRn_RT112)
#' head(join_count_and_design(Evers_CRISPRn_RT112$count, Evers_CRISPRn_RT112$design))
#'
#' @export
join_count_and_design <- function(sgcount, df_design) {
cols <- colnames(sgcount)
if(all(sapply(sgcount, class) == "numeric")) {
sgcount %>% as.data.frame(stringsAsFactors=F) %>%
tibble::rownames_to_column("sgRNA") %>%
tidyr::gather_(key_col = "sample_name", value_col = "count",
gather_cols = cols) %>%
dplyr::left_join(df_design, by = "sample_name")
} else {
cols <- colnames(sgcount)[sapply(sgcount, class) == "numeric"]
sgcount %>% as.data.frame(stringsAsFactors=F) %>%
tidyr::gather_(key_col = "sample_name", value_col = "count",
gather_cols = cols) %>%
dplyr::left_join(df_design, by = "sample_name")
}
}
#' A function to plot read count distribution.
#'
#' @param sgcount The input matrix contains read counts of sgRNAs for each sample.
#' @param df_design The table contains a study design.
#' @param add_dots The function will display dots of sgRNA counts if it is set to `TRUE`.
#' @importFrom magrittr %>%
#' @return A ggplot2 object contains a read count distribution plot for `sgcount`.
#'
#' @examples
#' library(CB2)
#' data(Evers_CRISPRn_RT112)
#' cpm <- get_CPM(Evers_CRISPRn_RT112$count)
#' plot_count_distribution(cpm, Evers_CRISPRn_RT112$design)
#'
#' @export
plot_count_distribution <- function(sgcount, df_design, add_dots = FALSE) {
p <- join_count_and_design(sgcount, df_design) %>%
dplyr::mutate_(count = ~ log2(1+count)) %>%
ggplot2::ggplot(ggplot2::aes_string(y="count", x="sample_name")) +
ggplot2::geom_violin(ggplot2::aes_string(fill = "group")) +
ggplot2::ylab("log2(1+count)")
if( add_dots == T ) {
p <- p + ggplot2::geom_jitter(width=0.1, alpha=0.5)
}
p
}
#' A function to visualize dot plots for a gene.
#' @param sgcount The input matrix contains read counts of sgRNAs for each sample.
#' @param df_design The table contains a study design.
#' @param gene The gene to be shown.
#' @param ge_id The name of the column that contains gene names.
#' @param sg_id The name of the column that contains sgRNA IDs.
#'
#' @importFrom stats prcomp as.formula
#' @return A ggplot2 object contains dot plots of sgRNA read counts for a gene.
#'
#' @examples
#' library(CB2)
#' data(Evers_CRISPRn_RT112)
#' plot_dotplot(get_CPM(Evers_CRISPRn_RT112$count), Evers_CRISPRn_RT112$design, "RPS7")
#'
#' @export
plot_dotplot <- function(sgcount, df_design, gene, ge_id = NULL, sg_id = NULL) {
if(all(sapply(sgcount, class) == "numeric")) {
if(sum(stringr::str_detect(rownames(sgcount), glue::glue("^{gene}")))==0) {
stop(glue::glue("{gene} is not in sgcount."))
}
join_count_and_design(sgcount, df_design) %>%
dplyr::filter_(~stringr::str_detect(sgRNA, glue::glue("^{gene}"))) %>%
ggplot2::ggplot(ggplot2::aes_string(x = "group", y = "count")) +
ggplot2::geom_dotplot(ggplot2::aes_string(fill = "group", color = "group"), binaxis = "y", stackdir = "center", stackratio = 1.5, dotsize = 1.2) +
ggplot2::facet_wrap(~sgRNA, scales = "free_y") + ggplot2::ggtitle(gene)
} else {
if(is.null(ge_id) || is.null(sg_id)) {
stop("ge_id and sg_id should not be null.")
}
if(!(ge_id %in% colnames(sgcount))) {
stop("ge_id is not found in sgcount.")
}
if(!(sg_id %in% colnames(sgcount))) {
stop("sg_id is not found in sgcount.")
}
df <- join_count_and_design(sgcount, df_design)
df <- df[df[,ge_id]==gene,]
if(nrow(df) == 0) {
stop(glue::glue("{gene} is not in sgcount."))
}
df %>% ggplot2::ggplot(ggplot2::aes_string(x = "group", y = "count")) +
ggplot2::geom_dotplot(ggplot2::aes_string(fill = "group", color = "group"), binaxis = "y", stackdir = "center", stackratio = 1.5, dotsize = 1.2) +
ggplot2::facet_wrap(as.formula(paste("~", sg_id)), scales = "free_y") + ggplot2::ggtitle(gene)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CB2/R/utils.R
|
## ----setup, include=FALSE-----------------------------------------------------
knitr::opts_chunk$set(echo = TRUE)
## -----------------------------------------------------------------------------
library(CB2)
library(dplyr)
library(readr)
## -----------------------------------------------------------------------------
data("Evers_CRISPRn_RT112")
head(Evers_CRISPRn_RT112$count)
## -----------------------------------------------------------------------------
Evers_CRISPRn_RT112$design
## -----------------------------------------------------------------------------
sgrna_stats <- measure_sgrna_stats(Evers_CRISPRn_RT112$count, Evers_CRISPRn_RT112$design, "before", "after")
gene_stats <- measure_gene_stats(sgrna_stats)
head(gene_stats)
## -----------------------------------------------------------------------------
df <- read_csv("https://raw.githubusercontent.com/hyunhwaj/CB2-Experiments/master/01_gene-level-analysis/data/Evers/CRISPRn-RT112.csv")
df
## -----------------------------------------------------------------------------
head(measure_sgrna_stats(df, Evers_CRISPRn_RT112$design, "before", "after", ge_id = 'gene', sg_id = 'sgRNA'))
|
/scratch/gouwar.j/cran-all/cranData/CB2/inst/doc/cb2-input-handling.R
|
---
title: "Handling the input matrix in CB2"
author: "Hyun-Hwan Jeong"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
bibliography: references.bib
vignette: >
%\VignetteIndexEntry{Handling the input matrix in CB2}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
If an analysis starts with an input matrix, it has to be appropriately pre-processed before it is used with any functions of `CB2`. `CB2` allows two different types of input: a numeric matrix/data frame with `row.names`, or a data.frame that contains columns of counts together with columns of sgRNA IDs and target genes. Either of them will work. This document explains how the input should be formed and how to process the input using `CB2`. Throughout the document, [@evers2016crispr]'s CRISPRn RT112 screen data are used.
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
The following code imports the packages required to run the code below.
```{r}
library(CB2)
library(dplyr)
library(readr)
```
The following code block shows an example of the first type of input `CB2` can handle. Each column of `Evers_CRISPRn_RT112$count` contains the guide RNA counts of one sample (originally extracted from NGS data). Each count shows how many guide RNA barcodes were observed in a given NGS sample. Each row of the matrix has a row name (e.g., `RPS19_sg10`), and the name is the ID of a guide RNA. For example, `RPS19_sg10`, which is the first row name in the example, indicates that the first row contains the counts of the `RPS19_sg10` guide RNA. Every guide RNA ID **must have exactly one `_` character**, and it is used as a separator between two strings. The first string is the name of the gene targeted by the guide RNA, and the second string is used as an identifier among guide RNAs that target the same gene. For example, `RPS19_sg10` indicates that the guide RNA is designed to target the `RPS19` gene, and `sg10` is the unique identifier.
**NOTE :** If a guide RNA ID contains multiple `_` characters, `CB2` is not able to run. In particular, if Entrez gene IDs are used as the gene names, `CB2` does not handle the input. One solution for this case is changing the gene names to another identifier (e.g., HGNC symbol) or using the other type of input, which will be explained below.
```{r}
data("Evers_CRISPRn_RT112")
head(Evers_CRISPRn_RT112$count)
```
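If the gene names in your library themselves contain `_` characters (see the note above), one workaround is to build the second input type directly from the row names. The code below is only a sketch: the `gene`/`sgRNA` column names are illustrative, and it assumes the sgRNA identifier is the last `_`-separated field. The resulting data frame can be passed to `measure_sgrna_stats` with the `ge_id` and `sg_id` parameters, as shown later in this document.
```{r}
df_two_col <- as.data.frame(Evers_CRISPRn_RT112$count) %>%
  tibble::rownames_to_column("sgRNA") %>%
  mutate(gene = sub("_[^_]*$", "", sgRNA))
head(df_two_col)
```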
In addition, `CB2` requires experiment design information, which is provided as a data.frame containing the sample name and group of each sample. In the `Evers_CRISPRn_RT112` data, `Evers_CRISPRn_RT112$design` is this data.frame.
```{r}
Evers_CRISPRn_RT112$design
```
With the two variables, `CB2` can perform the hypothesis test with `measure_sgrna_stats` and `measure_gene_stats` functions.
```{r}
sgrna_stats <- measure_sgrna_stats(Evers_CRISPRn_RT112$count, Evers_CRISPRn_RT112$design, "before", "after")
gene_stats <- measure_gene_stats(sgrna_stats)
head(gene_stats)
```
Another input type is a data.frame that contains two additional columns with the guide RNA information (target gene and guide RNA identifier). The CSV file below was used in the `CB2` publication ([@jeong2019beta]). It contains the two additional columns: the first is the `gene` column and the second is the `sgRNA` column.
```{r}
df <- read_csv("https://raw.githubusercontent.com/hyunhwaj/CB2-Experiments/master/01_gene-level-analysis/data/Evers/CRISPRn-RT112.csv")
df
```
Two additional parameters have to be passed to the `measure_sgrna_stats` function if the input matrix is of this type. The first parameter is `ge_id`, which specifies the column of genes, and the second parameter is `sg_id`, which specifies the column of sgRNA IDs. In the following code, `ge_id` is set to `gene` and `sg_id` is set to `sgRNA`.
```{r}
head(measure_sgrna_stats(df, Evers_CRISPRn_RT112$design, "before", "after", ge_id = 'gene', sg_id = 'sgRNA'))
```
### References
|
/scratch/gouwar.j/cran-all/cranData/CB2/inst/doc/cb2-input-handling.Rmd
|
## ----setup, include=FALSE-----------------------------------------------------
knitr::opts_chunk$set(echo = TRUE)
## -----------------------------------------------------------------------------
library(CB2)
library(magrittr)
library(glue)
library(tibble)
library(dplyr)
library(ggplot2)
## -----------------------------------------------------------------------------
# load the file path of the annotation file.
FASTA <- system.file("extdata", "toydata",
"small_sample.fasta",
package = "CB2")
system("tail -6 {FASTA}" %>% glue)
## -----------------------------------------------------------------------------
FASTQ <- system.file("extdata", "toydata",
"Base1.fastq",
package = "CB2")
system("head -8 {FASTQ}" %>% glue)
## -----------------------------------------------------------------------------
ex_path <- system.file("extdata", "toydata", package = "CB2")
Sys.glob("{ex_path}/*.fastq" %>% glue) %>% basename()
## -----------------------------------------------------------------------------
df_design <- tribble(
~group, ~sample_name,
"Base", "Base1",
"Base", "Base2",
"High", "High1",
"High", "High2"
) %>% mutate(
fastq_path = glue("{ex_path}/{sample_name}.fastq")
)
df_design
## -----------------------------------------------------------------------------
cb2_count <- run_sgrna_quant(FASTA, df_design)
## -----------------------------------------------------------------------------
head(cb2_count$count)
## -----------------------------------------------------------------------------
head(cb2_count$total)
## -----------------------------------------------------------------------------
get_CPM(cb2_count$count)
## -----------------------------------------------------------------------------
plot_count_distribution(cb2_count$count %>% get_CPM, df_design, add_dots = T)
## -----------------------------------------------------------------------------
calc_mappability(cb2_count, df_design)
## -----------------------------------------------------------------------------
plot_PCA(cb2_count$count %>% get_CPM, df_design)
## -----------------------------------------------------------------------------
plot_corr_heatmap(cb2_count$count %>% get_CPM, df_design)
## -----------------------------------------------------------------------------
sgrna_stat <- measure_sgrna_stats(cb2_count$count, df_design, "High", "Base")
sgrna_stat
## -----------------------------------------------------------------------------
gene_stats <- measure_gene_stats(sgrna_stat)
gene_stats
## -----------------------------------------------------------------------------
gene_stats %>%
filter(fdr_ts < 0.1)
## -----------------------------------------------------------------------------
plot_dotplot(cb2_count$count, df_design, "PARK2")
|
/scratch/gouwar.j/cran-all/cranData/CB2/inst/doc/cb2-tutorial.R
|
---
title: "CB2 Tutorial"
author: "Hyun-Hwan Jeong"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{CB2 Tutorial}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
CRISPRBetaBinomial (CB<sup>2</sup>) is a package that provides a statistical hypothesis test for robust target identification, an accurate mapping algorithm to quantify sgRNA abundances, and a minimal set of parameters for CRISPR pooled screen data analysis. This document shows how to use CB2 for CRISPR pooled screen data analysis.
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
First, import the `CB2` package using `library()`; it is also helpful to import the other packages used below:
```{r}
library(CB2)
library(magrittr)
library(glue)
library(tibble)
library(dplyr)
library(ggplot2)
```
There are three basic functions in CB<sup>2</sup>. The first function quantifies the counts of sgRNAs from the NGS samples. It requires a library file (`.fasta` or `.fa`) and a list of samples (`.fastq`). The library file must contain an annotation of the sgRNAs in the library used in the screen. A sgRNA annotation consists of a barcode sequence (the 20nt sequence the sgRNA targets) and the name of the gene the sgRNA is supposed to target.
Here is an example of loading the data for the screen analysis. The files used in the example are contained in the `CB2` package.
```{r}
# load the file path of the annotation file.
FASTA <- system.file("extdata", "toydata",
"small_sample.fasta",
package = "CB2")
system("tail -6 {FASTA}" %>% glue)
```
The first two lines of the annotation file are the annotation of the first sgRNA, the next two lines are the annotation of the second sgRNA, and so on. The first line of an annotation is formatted as `><genename>_<id>`, where `<genename>` is the symbol of the target gene of the sgRNA and `<id>` is the unique identifier of the sgRNA. `<genename>_<id>` is the complete identifier of the sgRNA, and a complete identifier should not appear more than once. The second line of an annotation is the 20nt sequence, and it indicates which part of the target gene will be targeted by the sgRNA.
The first annotation shown above indicates that the library contains a sgRNA whose identifier is `RAB_3`. This sgRNA is supposed to target the `RAB` gene, and the intended target locus is `CTGTAGAAGCTACATCGGCT`.
We also have an example of the NGS sample file. The following snippet will display the contents of an NGS sample file.
```{r}
FASTQ <- system.file("extdata", "toydata",
"Base1.fastq",
package = "CB2")
system("head -8 {FASTQ}" %>% glue)
```
The NGS sample file contains multiple reads, and each read consists of four sequential lines. The first line is the ID of the read, the second line contains the sequence of the read, and we assume that a read contains the nucleotide sequence of a sgRNA as a substring. The third line only includes '+', and the last line encodes the quality of each nucleotide of the read ([Phred quality score](https://en.wikipedia.org/wiki/FASTQ_format)).
Let's get started with the analysis. Before running it, we will list the `FASTQ` files available in the `toydata` example.
```{r}
ex_path <- system.file("extdata", "toydata", package = "CB2")
Sys.glob("{ex_path}/*.fastq" %>% glue) %>% basename()
```
From the output directly above, we can see there are three groups (Base, High, and Low) in the example data, and each of them has two replicates. We will perform an analysis between `Base` and `High`. The first thing we need to do is create a design data frame. The code below shows how to build it.
```{r}
df_design <- tribble(
~group, ~sample_name,
"Base", "Base1",
"Base", "Base2",
"High", "High1",
"High", "High2"
) %>% mutate(
fastq_path = glue("{ex_path}/{sample_name}.fastq")
)
df_design
```
`df_design` contains three columns, and each row contains the information of one sample. The first column, `group`, is the group the sample belongs to; `sample_name` is the name of the sample for convenience; and `fastq_path` is the path of the NGS sample file.
After creating `df_design`, we can run the sgRNA quantification by calling `run_sgrna_quant()`.
```{r}
cb2_count <- run_sgrna_quant(FASTA, df_design)
```
```{r}
head(cb2_count$count)
```
After running `run_sgrna_quant`, we will have a data frame (`cb2_count$count`) and a numeric vector (`cb2_count$total`). The data frame contains the sgRNA counts for each sample, and the numeric vector contains the total number of reads for each sample.
In the data frame, each row corresponds to a sgRNA and each column belongs to a sample. Each value in the data frame is the read count of the corresponding sgRNA and sample, i.e., how many reads from the sample file were aligned to the sgRNA. We assume this number approximates the number of cells in which the target gene of the sgRNA was knocked out.
```{r}
head(cb2_count$total)
```
We can also look up the CPM (Counts Per Million mapped reads) using `get_CPM()`.
```{r}
get_CPM(cb2_count$count)
```
There are four functions we can use to check the quality of the input data. The first function (`plot_count_distribution`) displays the distribution of sgRNA read counts for each sample.
```{r}
plot_count_distribution(cb2_count$count %>% get_CPM, df_design, add_dots = T)
```
We can also check the mappability (the proportion of reads successfully aligned to a sgRNA in the library among all reads) using the `calc_mappability` function.
```{r}
calc_mappability(cb2_count, df_design)
```
`plot_PCA` can be a way of checking data quality.
```{r}
plot_PCA(cb2_count$count %>% get_CPM, df_design)
```
The last function (`plot_corr_heatmap`) displays a sgRNA-level correlation heatmap of the NGS samples. We expect samples in the same group to cluster together if the data quality is good.
```{r}
plot_corr_heatmap(cb2_count$count %>% get_CPM, df_design)
```
Once we find the data quality is good enough to move to the next step, we can perform a sgRNA-level analysis using `measure_sgrna_stats`.
```{r}
sgrna_stat <- measure_sgrna_stats(cb2_count$count, df_design, "High", "Base")
sgrna_stat
```
As shown above, the function needs four parameters. The first is the read count matrix, and the second is the design data frame. The last two are the groups between which the differential abundance test is performed for each sgRNA.
Here is the information of each column in the data.frame of the sgRNA-level statistics:
* `sgRNA`: The sgRNA identifier.
* `gene`: The target gene of the sgRNA.
* `n_a`: The number of replicates of the first group.
* `n_b`: The number of replicates of the second group.
* `phat_a`: The proportion value of the sgRNA for the first group.
* `phat_b`: The proportion value of the sgRNA for the second group.
* `vhat_a`: The variance of the sgRNA for the first group.
* `vhat_b`: The variance of the sgRNA for the second group.
* `cpm_a`: The mean CPM of the sgRNA within the first group.
* `cpm_b`: The mean CPM of the sgRNA within the second group.
* `logFC`: The log fold change of the sgRNA between the two groups, calculated by $log_{2}\frac{CPM_{b}+1}{CPM_{a}+1}$
* `t_value`: The value of the t-statistic.
* `df`: The degrees of freedom used to calculate the p-value of the sgRNA.
* `p_ts`: The p-value indicating a difference between the two groups.
* `p_pa`: The p-value indicating enrichment of the first group.
* `p_pb`: The p-value indicating enrichment of the second group.
* `fdr_ts`: The adjusted P-value of `p_ts`.
* `fdr_pa`: The adjusted P-value of `p_pa`.
* `fdr_pb`: The adjusted P-value of `p_pb`.
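As a quick sanity check of the `logFC` definition above, the reported value can be recomputed directly from the CPM columns. This is only a sketch using the `sgrna_stat` object created earlier:
```{r}
all.equal(sgrna_stat$logFC, log2(sgrna_stat$cpm_b + 1) - log2(sgrna_stat$cpm_a + 1))
```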
Once we finish the sgRNA-level test, we can perform a gene-level test using `measure_gene_stats()`.
```{r}
gene_stats <- measure_gene_stats(sgrna_stat)
gene_stats
```
Here is the information of each column in the data.frame of the gene-level statistics:
* `gene`: The gene name to be tested.
* `n_sgrna`: The number of sgRNAs targeting the gene in the library.
* `cpm_a`: The mean of CPM of sgRNAs within the first group.
* `cpm_b`: The mean of CPM of sgRNAs within the second group.
* `logFC`: The log fold change of the gene between the two groups. By default it is the mean of the sgRNA-level `logFC` values; setting `logFC_level = 'gene'` in `measure_gene_stats` computes it as $log_{2}\frac{CPM_{b}+1}{CPM_{a}+1}$ instead.
* `p_ts`: The p-value indicates a difference between the two groups at the gene-level.
* `p_pa`: The p-value indicates enrichment of the first group at the gene-level.
* `p_pb`: The p-value indicates enrichment of the second group at the gene-level.
* `fdr_ts`: The adjusted P-value of `p_ts`.
* `fdr_pa`: The adjusted P-value of `p_pa`.
* `fdr_pb`: The adjusted P-value of `p_pb`.
After we have the result of the gene-level test, we can select a list of genes using different measures. For example, if you want to find genes with differential abundance between the two groups, you can use `fdr_ts` for the hit selection. If you want to find genes enriched in the first group (i.e., depleted in the opposite group), you look up the `fdr_pa` value, and `fdr_pb` can be used to find enrichment in the second group. Here, we use `fdr_ts` to identify the hit genes.
```{r}
gene_stats %>%
filter(fdr_ts < 0.1)
```
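As another example (a sketch, not part of the original analysis), genes enriched in the first group (`High` in this analysis) can be ranked with `fdr_pa`; `fdr_pb` would be used in the same way for the second group (`Base`).
```{r}
gene_stats %>%
  arrange(fdr_pa) %>%
  head()
```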
CB<sup>2</sup> also provides a useful dot plot function to look up the read counts for a gene, which can be used to check whether an interesting hit is valid.
```{r}
plot_dotplot(cb2_count$count, df_design, "PARK2")
```
|
/scratch/gouwar.j/cran-all/cranData/CB2/inst/doc/cb2-tutorial.Rmd
|
multigrps <-
function(df,gvar,
p.rd=3,varlist = NULL,
skewvar=NULL,
norm.rd=2,
sk.rd=2,
tabNA="no",#need to replace NaN with NA for all factors
cat.rd=0,pnormtest=0.05,
maxfactorlevels=30,
minfactorlevels=10,
sim=FALSE,
workspace=2e5,ShowStatistic = F){
## the group variable must be a factor
df[,gvar]<-as.factor(df[,gvar])
# NaN is forced to be NA; NaN can cause problems
df<-replace(df,is.na(df),NA)
for(i in 1:length(levels(df[,gvar]))){
assign(paste("g", i, sep = ""), levels(df[,gvar])[i])
}
if(is.null(varlist)){
varlist<-names(df)[!names(df)%in%gvar]
}else if(sum(!(varlist%in%names(df)))>0){
stop("varlist contains variables not in the data frame")
} else {
varlist = varlist
}
if(sum(!(skewvar%in%varlist))>0){
stop("skewvar contains variables not in the data frame or varlist")
}
Table <- NULL
#loop over variables
for (var in varlist){
if((class(df[,var])=="factor"|class(df[,var])=="character")&length(levels(factor(df[,var])))>maxfactorlevels){
print(paste("the factor variable",var,
"contains more than",
maxfactorlevels,"levels","check the class of", var,
"or reset the maxfactorlevels",sep=' '))
next
}else{
if(class(df[,var])=="factor"|length(levels(factor(df[,var])))<=minfactorlevels){
if(var%in%skewvar){
stop("skewvar contains categorical variables")
}
if(tabNA=="no"){
df[,var]<-factor(df[,var])
}else{
df[,var]<-factor(df[,var],exclude = NULL)
}
tableTol<-table(df[,var],useNA=tabNA)
per<-prop.table(tableTol)
table.sub<-table(df[,var],df[,gvar],useNA=tabNA)
per.sub<-prop.table(table.sub,2)
if(nrow(table.sub)==1){
p =1;statistic=NULL
} else{
p<-tryCatch({#using fisher's test when scarce data
chisq.test(table.sub)$p.value
}, warning = function(w) {
fisher.test(table.sub,
workspace = workspace,
simulate.p.value = sim)$p.value
})
statistic<-tryCatch({#using fisher's test when scarce data
chisq.test(table.sub)$statistic
}, warning = function(w) {
NULL
})
}
names(statistic) <- NULL
tabGrp <- NULL
nameGrp <- NULL
for (varGrp in levels(df[,gvar])) {
tabGrp1 <- paste(table.sub[,varGrp]," (",
round(per.sub[,varGrp]*100,cat.rd),
")",sep = "")
nameGrp1 <- paste(varGrp," (","n = ",table(df[,gvar])[varGrp],")",sep = "")
tabGrp <- cbind(tabGrp,tabGrp1)
nameGrp <- c(nameGrp,nameGrp1)
}
colnames(tabGrp) <- levels(df[,gvar])
rownames(tabGrp) <- rownames(table.sub)
table1 <- data.frame("Variables" = paste(" ",levels(df[,var]),sep = ""),
paste(as.data.frame(tableTol)[,"Freq"]," (",
round(as.data.frame(per)[,"Freq"]*100,cat.rd),
")",sep = ""),stringsAsFactors = F)
table1 <- cbind(table1,tabGrp,p=" ",statistic=" ")
table1 <- apply(table1, 2,as.character)
newline <- c(paste(var,", n (%)",sep = ""),
rep("",length(levels(df[,gvar]))+1),
ifelse(p<1*10^(-p.rd),
paste("< ",1*10^(-p.rd),sep = ""),
round(p,p.rd)),
ifelse(is.null(statistic),"Fisher",
round(statistic,3)))
table1 <- rbind(newline,table1)
colnames(table1)<-c("Variables",
paste("Total (n = ",nrow(df),")",sep = ""),
nameGrp,"p",
"statistic")
rownames(table1) <- NULL
Table<-rbind.data.frame(Table,table1,stringsAsFactors = F)
}else{
if((ad.test(df[,var])$p.value>=pnormtest&is.null(skewvar))|(!(var%in%skewvar)&!is.null(skewvar))){
tabGrp<-NULL
nameGrp <- NULL
for(varGrp in levels(df[,gvar])){
tabGrp1 <- paste(round(mean(df[df[,gvar]==varGrp,var],na.rm=T),norm.rd),
" \U00B1 ",
round(sd(df[df[,gvar]==varGrp,var],na.rm=T),norm.rd),
sep = "")
nameGrp1 <- paste(varGrp," (","n = ",table(df[,gvar])[varGrp],")",sep = "")
tabGrp <- cbind(tabGrp,tabGrp1)
nameGrp <- c(nameGrp,nameGrp1)
}
colnames(tabGrp) <- levels(df[,gvar])
rownames(tabGrp) <- var
p<-summary(aov(df[,var]~df[,gvar]))[[1]][1,"Pr(>F)"]
statistic <- summary(aov(df[,var]~df[,gvar]))[[1]][1,"F value"]
table1 <- data.frame(paste(var,", Mean"," \U00B1 ","SD",sep = ""),
paste(round(mean(df[,var],na.rm=T),norm.rd),
" \U00B1 ",
round(sd(df[,var],na.rm=T),norm.rd),sep = ""),
tabGrp,
ifelse(p<1*10^(-p.rd),
paste("< ",1*10^(-p.rd),sep = ""),
round(p,p.rd)),
round(statistic,3),stringsAsFactors = F)
colnames(table1)<-c("Variables",
paste("Total (n = ",nrow(df),")",sep = ""),
nameGrp,"p",
"statistic")
rownames(table1) <- NULL
Table<-rbind.data.frame(Table,table1,stringsAsFactors = F)
}else{
median<-as.numeric(summary(df[,var])[3])
IQR1<-as.numeric(summary(df[,var])[2])
IQR3<-as.numeric(summary(df[,var])[5])
tabGrp<-NULL
nameGrp <- NULL
for(varGrp in levels(df[,gvar])){
tabGrp1 <- paste(round(summary(df[df[,gvar]==varGrp,var])[3],sk.rd),
" (",
round(summary(df[df[,gvar]==varGrp,var])[2],sk.rd),", ",
round(summary(df[df[,gvar]==varGrp,var])[5],sk.rd),")",
sep = "")
nameGrp1 <- paste(varGrp," (","n = ",table(df[,gvar])[varGrp],")",sep = "")
tabGrp <- cbind(tabGrp,tabGrp1)
nameGrp <- c(nameGrp,nameGrp1)
}
p<-kruskal.test(df[,var]~df[,gvar])$p.value
statistic <- kruskal.test(df[,var]~df[,gvar])$statistic
table1 <- data.frame(paste(var,", Median"," (Q1,Q3)",sep = ""),
paste(round(median,sk.rd)," (",round(IQR1,sk.rd),", ",
round(IQR3,sk.rd),")",sep = ""),
tabGrp,
ifelse(p<1*10^(-p.rd),
paste("< ",1*10^(-p.rd),sep = ""),
round(p,p.rd)),
round(statistic,3),stringsAsFactors = F)
colnames(table1)<-c("Variables",
paste("Total (n = ",nrow(df),")",sep = ""),
nameGrp,"p",
"statistic")
rownames(table1) <- NULL
Table<-rbind.data.frame(Table,table1,stringsAsFactors = F)
}}
}
}
if(!ShowStatistic){
Table <- Table[,!colnames(Table) %in% "statistic"]
}
Table <- rbind(colnames(Table),Table)
colnames(Table) <- NULL
return(Table)
}
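# Usage sketch (not part of the original file; the data set and variables are
# illustrative). `gvar` is the grouping factor with three or more levels;
# continuous variables are summarised as mean/SD or median (Q1, Q3) and tested
# with ANOVA or the Kruskal-Wallis test, categorical variables with the
# chi-square/Fisher test. The normality check relies on ad.test() from the
# nortest package.
#
# library(nortest)
# multigrps(mtcars, gvar = "cyl", varlist = c("mpg", "wt", "am"))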
|
/scratch/gouwar.j/cran-all/cranData/CBCgrps/R/multigrps.R
|
twogrps <-
function(df,gvar,varlist = NULL,
p.rd=3,
skewvar=NULL,
norm.rd=2,
sk.rd=2,
tabNA="no",#need to replace NaN with NA for all factors
cat.rd=0, pnormtest=0.05,
maxfactorlevels=30,
minfactorlevels=10,
sim = FALSE,#to use simulated p value
workspace=2e5,ShowStatistic = F,ExtractP = 0.05){
## the group variable must be a factor
df[,gvar]<-as.factor(df[,gvar])
if(length(table(df[,gvar]))>2){
stop("The gvar contains more than two levels, please recheck or consider using multigrps() function")
}
# NaN is forced to be NA; NaN can cause problems
df<-replace(df,is.na(df),NA)
g1<-levels(df[,gvar])[1]
g2<-levels(df[,gvar])[2]
if(is.null(varlist)){
varlist<-names(df)[!names(df)%in%gvar]
}else if(sum(!(varlist%in%names(df)))>0){
stop("varlist contains variables not in the data frame")
} else {
varlist = varlist
}
if(sum(!(skewvar%in%varlist))>0){
stop("skewvar contains variables not in the data frame or varlist")
}
Table <- NULL;
VarExtract <- NULL;#extract variable names with p value less than ExtractP
#loop over variables
for (var in varlist){
if((class(df[,var])=="factor"|class(df[,var])=="character")&length(levels(factor(df[,var]))) > maxfactorlevels){
print(paste("The factor/character variable", var,
"contains more than",
maxfactorlevels,"levels,","check the class of", var,
"or reset the maxfactorlevels",sep=' '))
next
}else{
if(class(df[,var]) == "factor"|class(df[,var])=="character"|length(levels(factor(df[,var]))) <= minfactorlevels){
if(var %in% skewvar){
stop("skewvar contains categorical variables")
}
if(tabNA =="no"){
df[,var] <- factor(df[,var])
}else{
df[,var]<-factor(df[,var],exclude = NULL)
}
tableTol<-table(df[,var],useNA=tabNA)
per<-prop.table(tableTol)
table.sub<-table(df[,var],df[,gvar],useNA=tabNA)
per.sub<-prop.table(table.sub,2)
if(nrow(table.sub)==1){
p =1;statistic=NULL
} else{
p<-tryCatch({#using fisher's test when scarce data
chisq.test(table.sub)$p.value
}, warning = function(w) {
fisher.test(table.sub,
workspace = workspace,
simulate.p.value = sim)$p.value
})
statistic<-tryCatch({#using fisher's test when scarce data
chisq.test(table.sub)$statistic
}, warning = function(w) {
NULL
})
}
names(statistic) <- NULL
table1 <- data.frame("Variable"=paste(" ",levels(df[,var]),sep = ""),
paste(as.data.frame(tableTol)[,"Freq"]," (",
round(as.data.frame(per)[,"Freq"]*100,cat.rd),
")",sep = ""),
paste(as.data.frame.matrix(table.sub)[,g1]," (",
round(as.data.frame.matrix(per.sub)[,g1]*100,cat.rd),
")",sep = ""),
paste(as.data.frame.matrix(table.sub)[,g2]," (",
round(as.data.frame.matrix(per.sub)[,g2]*100,cat.rd),
")",sep = ""),
p = "",
statistic = "",stringsAsFactors = F)
newline <- c(paste(var,", n (%)",sep = ""),
rep("",3),
ifelse(p<1*10^(-p.rd),
paste("< ",1*10^(-p.rd),sep = ""),
round(p,p.rd)),
ifelse(is.null(statistic),"Fisher",
round(statistic,3)))
table1 <- rbind(newline,table1)
colnames(table1)<-c("Variables",
paste("Total (n = ",nrow(df),")",sep = ""),
paste(g1," (n = ",nrow(df[df[,gvar]==g1,]),")",sep = ""),
paste(g2," (n = ",nrow(df[df[,gvar]==g2,]),")",sep = ""),"p",
"statistic")
rownames(table1) <- NULL
Table<-rbind.data.frame(Table,table1,stringsAsFactors = F)
if(p < ExtractP){
VarExtract <- c(VarExtract,var)
}
}else{
if((ad.test(df[,var])$p.value>=pnormtest&is.null(skewvar))|(!(var%in%skewvar)&!is.null(skewvar))){
mean<-round(mean(df[,var],na.rm=T),norm.rd)
sd<-round(sd(df[,var],na.rm=T),norm.rd)
mean.1<-round(mean(df[df[,gvar]==g1,var],na.rm=T),norm.rd)
sd.1<-round(sd(df[df[,gvar]==g1,var],na.rm=T),norm.rd)
mean.2<-round(mean(df[df[,gvar]==g2,var],na.rm=T),norm.rd)
sd.2<-round(sd(df[df[,gvar]==g2,var],na.rm=T),norm.rd)
p<-t.test(df[,var]~df[,gvar])$p.value
statistic <- t.test(df[,var]~df[,gvar])$statistic
table1 <- data.frame("Variable"= paste(var,", Mean"," \U00B1 ","SD",sep = ""),
paste(mean," \U00B1 ",sd,sep=""),
paste(mean.1," \U00B1 ",sd.1,sep=""),
paste(mean.2," \U00B1 ",sd.2,sep=""),
p=ifelse(p<1*10^(-p.rd),
paste("< ",1*10^(-p.rd),sep = ""),
round(p,p.rd)),
statistic = round(statistic,3),stringsAsFactors = F)
colnames(table1)<-c("Variables",
paste("Total (n = ",nrow(df),")",sep = ""),
paste(g1," (n = ",nrow(df[df[,gvar]==g1,]),")",sep = ""),
paste(g2," (n = ",nrow(df[df[,gvar]==g2,]),")",sep = ""),"p",
"statistic")
rownames(table1) <- NULL
Table <- rbind.data.frame(Table,table1,stringsAsFactors = F)
if(p < ExtractP){
VarExtract <- c(VarExtract,var)
}
}else{
median<-as.numeric(summary(df[,var])[3])
IQR1<-as.numeric(summary(df[,var])[2])
IQR3<-as.numeric(summary(df[,var])[5])
median.1<-as.numeric(summary(df[df[,gvar]==g1,var])[3])
IQR1.1<-as.numeric(summary(df[df[,gvar]==g1,var])[2])
IQR3.1<-as.numeric(summary(df[df[,gvar]==g1,var])[5])
median.2<-as.numeric(summary(df[df[,gvar]==g2,var])[3])
IQR1.2<-as.numeric(summary(df[df[,gvar]==g2,var])[2])
IQR3.2<-as.numeric(summary(df[df[,gvar]==g2,var])[5])
p<-wilcox.test(df[,var]~df[,gvar])$p.value
statistic <- wilcox.test(df[,var]~df[,gvar])$statistic
table1<-data.frame("Variable"= paste(var,", Median"," (Q1,Q3)",sep = ""),
paste(round(median,sk.rd)," (",round(IQR1,sk.rd),", ",round(IQR3,sk.rd),")",sep = ""),
paste(round(median.1,sk.rd)," (",round(IQR1.1,sk.rd),", ",round(IQR3.1,sk.rd),")",sep = ""),
paste(round(median.2,sk.rd)," (",round(IQR1.2,sk.rd),", ",round(IQR3.2,sk.rd),")",sep = ""),
p=ifelse(p<1*10^(-p.rd),
paste("< ",1*10^(-p.rd),sep = ""),
round(p,p.rd)),
statistic = round(statistic,3),stringsAsFactors = F)
colnames(table1)<-c("Variables",
paste("Total (n = ",nrow(df),")",sep = ""),
paste(g1," (n = ",nrow(df[df[,gvar]==g1,]),")",sep = ""),
paste(g2," (n = ",nrow(df[df[,gvar]==g2,]),")",sep = ""),"p",
"statistic")
rownames(table1) <- NULL
Table <- rbind.data.frame(Table,table1,stringsAsFactors = F)
if(p < ExtractP){
VarExtract <- c(VarExtract,var)
}
}
}
}
}
if(!ShowStatistic){
Table$statistic <- NULL
}
Table <- rbind(colnames(Table),Table)
colnames(Table) <- NULL
return(list(Table=Table,VarExtract = VarExtract))
#the end of the function
}
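## Illustrative usage sketch (assumed minimal call; the full argument list is in the
## function header earlier in this file):
# dat <- data.frame(group = rep(c("A", "B"), each = 50),
#                   age   = rnorm(100, 50, 10),
#                   sex   = factor(sample(c("m", "f"), 100, replace = TRUE)))
# res <- twogrps(dat, gvar = "group")
# res$Table       # formatted two-group descriptive table with p-values
# res$VarExtract  # variables with p below the ExtractP threshold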
## [source: CBCgrps/R/twogrps.R]
#' @title Asymptotic Variance and Confidence Interval Estimation of the ATE
#' @description
#' \code{AsyVar} estimates the asymptotic variance of the ATE obtained with the CBPS or oCBPS method. It also returns the finite variance estimate, the finite standard error, and a CI for the ATE.
#'
#' @import stats
#'
#' @param Y The vector of actual outcome values (observations).
#' @param Y_1_hat The vector of estimated outcomes according to the treatment model. (AsyVar automatically sets the treatment model as a linear regression model and it is fitted within the function.) If \code{CBPS_obj} is specified, or if \code{X} \eqn{and} \code{TL} are specified, this is unnecessary.
#' @param Y_0_hat The vector of estimated outcomes according to the control model. (AsyVar automatically sets the control model as a linear regression model and it is fitted within the function.) If \code{CBPS_obj} is specified, or if \code{X} \eqn{and} \code{TL} are specified, this is unnecessary.
#' @param CBPS_obj An object obtained with the CBPS function. If this object is not specified, then \code{X}, \code{TL}, \code{pi}, and \code{mu} must \eqn{all} be specified instead.
#' @param method The specific method to be considered. Either \code{"CBPS"} or \code{"oCBPS"} must be selected.
#' @param X The matrix of covariates with the rows corresponding to the observations and the columns corresponding to the variables. The left most column must be a column of 1's for the intercept. (\code{X} is not necessary if \code{CBPS_obj} is specified.)
#' @param TL The vector of treatment labels. More specifically, the label is 1 if it is in the treatment group and 0 if it is in the control group. (\code{TL} is not necessary if \code{CBPS_obj} is specified.)
#' @param pi The vector of estimated propensity scores. (\code{pi} is not necessary if \code{CBPS_obj} is specified.)
#' @param mu The estimated average treatment effect obtained with either the CBPS or oCBPS method. (\code{mu} is not necessary if \code{CBPS_obj} is specified.)
#' @param CI The specified confidence level (between 0 and 1) for calculating the confidence interval for the average treatment effect. Default value is 0.95.
#'
#' @return
#' \item{mu.hat}{The estimated average treatment effect, \eqn{\hat{\mu}}{hat(mu)}.}
#' \item{asy.var}{The estimated asymptotic variance of \eqn{\sqrt{n}\hat{\mu}}{sqrt(n)*hat(mu)} obtained with the CBPS or oCBPS method.}
#' \item{var}{The estimated variance of \eqn{\hat{\mu}}{hat(mu)} obtained with the CBPS or oCBPS method.}
#' \item{std.err}{The standard error of \eqn{\hat{\mu}}{hat(mu)} obtained with the CBPS or oCBPS method.}
#' \item{CI.mu.hat}{The confidence interval of \eqn{\hat{\mu}}{hat(mu)} obtained with the CBPS or oCBPS method with the confidence level specified in the input argument.}
#'
#' @author Inbeom Lee
#'
#' @references Fan, Jianqing and Imai, Kosuke and Lee, Inbeom and Liu, Han and Ning, Yang and Yang, Xiaolin. 2021.
#' ``Optimal Covariate Balancing Conditions in Propensity Score Estimation.'' Journal of Business & Economic Statistics.
#' \url{https://imai.fas.harvard.edu/research/CBPStheory.html}
#'
#' @examples #GENERATING THE DATA
#'n=300
#'
#'#Initialize the X matrix.
#'X_v1 <- rnorm(n,3,sqrt(2))
#'X_v2 <- rnorm(n,0,1)
#'X_v3 <- rnorm(n,0,1)
#'X_v4 <- rnorm(n,0,1)
#'X_mat <- as.matrix(cbind(rep(1,n), X_v1, X_v2, X_v3, X_v4))
#'
#'#Initialize the Y_1 and Y_0 vector using the treatment model and the control model.
#'Y_1 <- X_mat %*% matrix(c(200, 27.4, 13.7, 13.7, 13.7), 5, 1) + rnorm(n)
#'Y_0 <- X_mat %*% matrix(c(200, 0 , 13.7, 13.7, 13.7), 5, 1) + rnorm(n)
#'
#'#True Propensity Score calculation.
#'pre_prop <- X_mat[,2:5] %*% matrix(c(0, 0.5, -0.25, -0.1), 4, 1)
#'propensity_true <- (exp(pre_prop))/(1+(exp(pre_prop)))
#'
#'#Generate T_vec, the treatment vector, with the true propensity scores.
#'T_vec <- rbinom(n, size=1, prob=propensity_true)
#'
#'#Now generate the actual outcome Y_outcome (accounting for treatment/control groups).
#'Y_outcome <- Y_1*T_vec + Y_0*(1-T_vec)
#'
#'#Use oCBPS.
#'ocbps.fit <- CBPS(T_vec ~ X_mat, ATT=0, baseline.formula = ~X_mat[,c(1,3:5)],
#' diff.formula = ~X_mat[,2])
#'
#'#Use the AsyVar function to get the asymptotic variance of the
#'#estimated average treatment effect and its confidence interval when using oCBPS.
#'AsyVar(Y=Y_outcome, CBPS_obj=ocbps.fit, method="oCBPS", CI=0.95)
#'
#'#Use CBPS.
#'cbps.fit <- CBPS(T_vec ~ X_mat, ATT=0)
#'
#'#Use the AsyVar function to get the asymptotic variance of the
#'#estimated average treatment effect and its confidence interval when using CBPS.
#'AsyVar(Y=Y_outcome, CBPS_obj=cbps.fit, method="CBPS", CI=0.95)
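#'
#'#Illustrative check (added sketch, reusing the fits above): the returned pieces
#'#are related by var = asy.var/n, std.err = sqrt(var), and
#'#CI.mu.hat = mu.hat -/+ qnorm(1-(1-CI)/2)*std.err.
#'out <- AsyVar(Y=Y_outcome, CBPS_obj=cbps.fit, method="CBPS", CI=0.95)
#'all.equal(drop(out$std.err), sqrt(drop(out$asy.var)/length(Y_outcome)))
#'all.equal(out$CI.mu.hat, out$mu.hat + c(-1,1)*qnorm(0.975)*drop(out$std.err))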
#'
#' @export AsyVar
#'
AsyVar <- function(Y, Y_1_hat=NULL, Y_0_hat=NULL, CBPS_obj, method="CBPS",
X=NULL, TL=NULL, pi=NULL, mu=NULL, CI=0.95){
#Define X in all cases.
if(is.null(X)==TRUE && is.null(CBPS_obj)==TRUE){
stop('Need to specify either a matrix of covariates (X)
or an object obtained with the CBPS function.')
} else {
if(is.null(X)==TRUE && is.null(CBPS_obj)==FALSE){
X <- CBPS_obj$x
}
}
#Define TL in all cases.
if(is.null(TL)==TRUE && is.null(CBPS_obj)==TRUE){
stop('Need to specify either a vector of treatment labels (TL)
or an object obtained with the CBPS function.')
} else {
if(is.null(TL)==TRUE && is.null(CBPS_obj)==FALSE){
TL <- CBPS_obj$y
}
}
#Parameter values
p <- ncol(X)
n <- length(Y)
Y_1 <- Y[which(TL==1)]
Y_0 <- Y[which(TL==0)]
n_1 <- length(Y_1)
n_0 <- length(Y_0)
#Define pi in all cases.
if(is.null(pi)==TRUE && is.null(CBPS_obj)==TRUE){
stop('Need to specify either a vector of estimated propensity scores (pi)
or an object obtained with the CBPS function.')
} else {
if(is.null(pi)==TRUE && is.null(CBPS_obj)==FALSE){
pi <- CBPS_obj$fitted.values
}
}
#Define mu in all cases.
if(is.null(mu)==TRUE && is.null(CBPS_obj)==TRUE){
stop('Need to specify either an estimate of the average treatment effect (mu)
or an object obtained with the CBPS function.')
} else {
if(is.null(mu)==TRUE && is.null(CBPS_obj)==FALSE){
mu <- mean(((TL)*Y/pi) -
(((1-TL)*Y)/(1-pi)))
}
}
#Fit the linear regression model for TL=1 and TL=0 separately.
if(is.null(Y_1_hat)==TRUE){
#Separate the treatment covariates and the control covariates.
X_1 <- as.matrix(X[which(TL==1),-1])
#Perform linear regression separately.
lin_1 <- lm(Y_1 ~ X_1)
#Y_hat(1|X_i) values
Y_1_hat <- X%*%as.matrix(lin_1$coefficients,p,1)
}
if(is.null(Y_0_hat)==TRUE){
#Separate the treatment covariates and the control covariates.
X_0 <- as.matrix(X[which(TL==0),-1])
#Perform linear regression separately.
lin_0 <- lm(Y_0 ~ X_0)
#Y_hat(0|X_i) values
Y_0_hat <- X%*%as.matrix(lin_0$coefficients,p,1)
}
L_hat <- Y_1_hat - Y_0_hat
K_hat <- Y_0_hat
if(method=="oCBPS"){
#Calculate the Var(Y_1|X_i) and Var(Y_0|X_i).
sigma_hat_1_squared <- sum(((Y - Y_1_hat)*TL)^2)/(n_1-p)
sigma_hat_0_squared <- sum(((Y - Y_0_hat)*(1-TL))^2)/(n_0-p)
result <- list()
result[[1]] <- mu
result[[2]] <- mean((sigma_hat_1_squared)/pi + (sigma_hat_0_squared)/(1-pi) + (L_hat - mu)^2)
result[[3]] <- result[[2]]/n
result[[4]] <- sqrt(result[[3]])
diff_ocbps <- stats::qnorm(1-(1-CI)/2)*result[[4]]
lower_ocbps <- mu - diff_ocbps
upper_ocbps <- mu + diff_ocbps
result[[5]] <- c(lower_ocbps, upper_ocbps)
names(result) <- c("mu.hat", "asy.var", "var", "std.err", "CI.mu.hat")
return(result)
} else {
if(method=="CBPS"){
#omega_hat
new_X_2 <- array(rep(NA,p*p*n),dim=c(p,p,n))
for(i in 1:n){
new_X_2[,,i] <- matrix(X[i,],p,1)%*%t(X[i,])/(pi[i]*(1-pi[i]))
}
omega_hat <- apply(new_X_2,c(1,2),mean)
#Sigma_mu_hat
Sigma_mu_hat <- mean(((Y_1_hat)^2/pi) + ((Y_0_hat)^2/(1-pi))) - (mu)^2
#Cov(mu_beta, g_beta)
new_X_3 <- matrix(rep(NA,n*p),n,p)
for(i in 1:n){
new_X_3[i,] <- X[i,]*(K_hat[i] + (1-pi[i])*L_hat[i])/(pi[i]*(1-pi[i]))
}
cov_hat <- apply(new_X_3,2,mean)
#H_0_hat
prop_modified <- pi/(1+exp(matrix(CBPS_obj$coefficients,1,p)%*%t(X)))
der_mat <- matrix(rep(prop_modified,each=p),p,n)
derivative <- matrix(rep(NA,p*n),p,n)
for(i in 1:n){
derivative[,i] <- matrix(der_mat[,i]*X[i,],p,1)
}
temp_sum <- matrix(rep(0,p),p,1)
for(i in 1:n){
temp_sum <- temp_sum + derivative[,i]*(K_hat[i] + (1-pi[i])*L_hat[i])/(pi[i]*(1-pi[i]))
}
H_0_hat <- -temp_sum/n
#H_f_hat
temp_sum_2 <- matrix(rep(0,p*p),p,p)
for(i in 1:n){
temp_sum_2 <- temp_sum_2 + ( matrix(X[i,],p,1)%*%matrix(derivative[,i],1,p)/(pi[i]*(1-pi[i])) )
}
H_f_hat <- -temp_sum_2/n
#Put it all together
result <- list()
result[[1]] <- mu
result[[2]] <- Sigma_mu_hat + t(H_0_hat)%*%solve(t(H_f_hat)%*%solve(omega_hat)%*%H_f_hat)%*%H_0_hat -
2*t(H_0_hat)%*%solve(H_f_hat)%*%cov_hat
result[[3]] <- result[[2]]/n
result[[4]] <- sqrt(result[[3]])
diff_cbps <- stats::qnorm(1-(1-CI)/2)*result[[4]]
lower_cbps <- mu - diff_cbps
upper_cbps <- mu + diff_cbps
result[[5]] <- c(lower_cbps, upper_cbps)
names(result) <- c("mu.hat", "asy.var", "var", "std.err", "CI.mu.hat")
return(result)
}
}
}
## [source: CBPS/R/AsyVar.R]
runall<-FALSE
if(runall==TRUE){
#load("../Data/BlackwellData")
blackwell<-read.table("../Data/Blackwell.tab",header=TRUE)
##New help file
form1<-"d.gone.neg ~ d.gone.neg.l1 + d.gone.neg.l2 + d.neg.frac.l3 + camp.length + camp.length + deminc + base.poll + year.2002 + year.2004 + year.2006 + base.und + office"
fit1<-CBMSM(formula = form1, time=blackwell$time,id=blackwell$demName,data=blackwell, type="MSM", iterations = NULL, twostep = TRUE, msm.variance = "approx", time.vary = TRUE)
bal1<-balance.CBMSM(fit1)
##Effect estimation
lm(demprcnt[time==1]~fit1$treat.hist,data=blackwell,weights=fit1$weights)
lm(demprcnt[time==1]~fit1$treat.cum,data=blackwell)
##Attempting to replicate
#write.table(file="../Data/Blackwell.tab",blackwell)
blackwell<-read.table("../Data/Blackwell.tab")
form1<-"d.gone.neg ~ d.gone.neg.l1 + d.gone.neg.l2 + d.neg.frac.l3 + camp.length + camp.length + deminc + base.poll + as.factor(year) + base.und + office"
fit1<-CBMSM(formula = form1, time=time,id=id,data=blackwell, type="MSM", iterations = NULL, twostep = TRUE, msm.variance = "full", time.vary = TRUE)
fit2<-CBMSM(formula = form1, time=time,id=id,data=blackwell, type="MSM", iterations = NULL, twostep = TRUE, msm.variance = "approx", time.vary = TRUE)
dv<-blackwell$demprcnt[blackwell$time==1]
treat.mat<-sapply(1:5,FUN=function(x) treat[time==x])
treat.cum<-rowSums(treat.mat)
colnames(treat.mat)<-paste("treat_",1:5,sep="")
lm(dv~treat.mat,w=fit1$weights[blackwell$time==1])
lm(dv~treat.mat,w=fit2$weights[blackwell$time==1])
lm(dv~treat.cum,w=fit1$weights[time==1])
lm(dv~treat.cum,w=fit2$weights[time==1])
}
## [source: CBPS/R/BlackwellHelpCode.R]
#' Covariate Balancing Propensity Score for Instrumental Variable Estimates
#' (CBIV)
#'
#' \code{CBIV} estimates propensity scores for compliance status in an
#' instrumental variables setup such that both covariate balance and prediction
#' of treatment assignment are maximized. The method, therefore, avoids an
#' iterative process between model fitting and balance checking and implements
#' both simultaneously.
#'
#' Fits covariate balancing propensity scores for generalizing local average
#' treatment effect estimates obtained from instrumental variables analysis.
#'
#' @param Tr A binary treatment variable.
#' @param Z A binary encouragement variable.
#' @param X A pre-treatment covariate matrix.
#' @param iterations An optional parameter for the maximum number of iterations
#' for the optimization. Default is 1000.
#' @param method Choose "over" to fit an over-identified model that combines
#' the propensity score and covariate balancing conditions; choose "exact" to
#' fit a model that only contains the covariate balancing conditions. Our
#' simulations suggest that "over" dramatically outperforms "exact."
#' @param twostep Default is \code{TRUE} for a two-step GMM estimator, which
#' will run substantially faster than continuous-updating. Set to \code{FALSE}
#' to use the continuous-updating GMM estimator.
#' @param twosided Default is \code{TRUE}, which allows for two-sided
#' noncompliance with both always-takers and never-takers. Set to \code{FALSE}
#' for one-sided noncompliance, which allows only for never-takers.
#' @param ... Other parameters to be passed through to \code{optim()}.
#' @return \item{coefficients}{A named matrix of coefficients, where the first
#' column gives the complier coefficients and the second column gives the
#' always-taker coefficients.} \item{fitted.values}{The fitted N x 3 compliance
#' score matrix. The first column gives the estimated probability of being a
#' complier, the second column gives the estimated probability of being an
#' always-taker, and the third column gives the estimated probability of being
#' a never-taker.} \item{weights}{The optimal weights: the reciprocal of the
#' probability of being a complier.} \item{deviance}{Minus twice the
#' log-likelihood of the CBIV fit.} \item{converged}{Convergence value.
#' Returned from the call to \code{optim()}.} \item{J}{The J-statistic at
#' convergence} \item{df}{The number of linearly independent covariates.}
#' \item{bal}{The covariate balance associated with the optimal weights,
#' calculated as the GMM loss of the covariate balance conditions.}
#' @author Christian Fong
#' @references Imai, Kosuke and Marc Ratkovic. 2014. ``Covariate Balancing
#' Propensity Score.'' Journal of the Royal Statistical Society, Series B
#' (Statistical Methodology).
#' \url{http://imai.princeton.edu/research/CBPS.html}
#' @examples
#'
#' ###
#' ### Example: propensity score matching
#' ### (Need to fix when we have an actual example).
#'
#' ##Load the LaLonde data
#' data(LaLonde)
#' ## Estimate CBPS
#' fit <- CBPS(treat ~ age + educ + re75 + re74 +
#' I(re75==0) + I(re74==0),
#' data = LaLonde, ATT = TRUE)
#' summary(fit)
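#'
#' ## Illustrative simulated CBIV call (added sketch, not part of the original
#' ## example): two-sided noncompliance with randomly drawn compliance types.
#' \dontrun{
#' set.seed(123)
#' n <- 500
#' X <- cbind(rnorm(n), rnorm(n))
#' Z <- rbinom(n, 1, 0.5)
#' type <- sample(c("complier", "always", "never"), n, replace = TRUE,
#'                prob = c(0.6, 0.2, 0.2))
#' Tr <- ifelse(type == "always", 1, ifelse(type == "never", 0, Z))
#' iv.fit <- CBIV(Tr = Tr, Z = Z, X = X, method = "over", twostep = TRUE)
#' head(iv.fit$fitted.values)
#' }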
#'
#'
#' @export CBIV
#'
CBIV <- function(Tr, Z, X, iterations=1000, method="over", twostep = TRUE, twosided = TRUE, ...) {
probs.min<-10^-6
pZ <- mean(Z)
k<-0
score.only<-bal.only<-FALSE
if(method=="mle") score.only<-TRUE
if(method=="exact") bal.only<-TRUE
X<-as.matrix(X)
X<-cbind(1,X[,apply(X,2,sd)>0])
names.X<-colnames(X)
names.X[apply(X,2,sd)==0]<-"(Intercept)"
#######Declare some constants and orthogonalize Xdf.
X.orig<-X
x.sd<-apply(as.matrix(X[,-1]),2,sd)
Dx.inv<-diag(c(1,x.sd))
diag(Dx.inv)<-1
x.mean<-apply(as.matrix(X[,-1]),2,mean)
X[,-1]<-apply(as.matrix(X[,-1]),2,FUN=function(x) (x-mean(x))/sd(x))
if(k==0) k<-sum(diag(t(X)%*%X%*%ginv(t(X)%*%X)))
k<-floor(k+.1)
XprimeX.inv<-ginv(t(X)%*%X)
n<-length(Tr)
gmm.func <- function(beta.curr, invV = NULL, twosided)
{
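    # Builds the stacked moment conditions gbar: score-type conditions for the
    # complier/always-taker model (or for the single treatment model in the
    # one-sided case) plus covariate balance conditions across the
    # instrument-by-treatment cells, and returns the GMM loss
    # t(gbar) %*% invV %*% gbar along with invV.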
if (twosided){
beta.curr.c<-beta.curr[1:k]
beta.curr.a<-beta.curr[k+(1:k)]
baseline.prob<-(1 + exp(X%*%beta.curr.c) + exp(X%*%beta.curr.a))^-1
probs.curr.c<-pmin(pmax(exp(X%*%beta.curr.c)*baseline.prob,probs.min),1-probs.min)
probs.curr.a<-pmin(pmax(exp(X%*%beta.curr.a)*baseline.prob,probs.min),1-probs.min)
probs.curr.n<-pmin(pmax(baseline.prob,probs.min),1-probs.min)
sums<-probs.curr.c+probs.curr.a+probs.curr.n
probs.curr.c<-probs.curr.c/sums
probs.curr.a<-probs.curr.a/sums
probs.curr.n<-probs.curr.n/sums
w.curr<-cbind(Z*Tr/(pZ*(probs.curr.c + probs.curr.a)) - 1,
(1-Z)*Tr/((1-pZ)*probs.curr.a) - 1,
Z*(1-Tr)/(pZ*probs.curr.n) - 1,
(1-Z)*(1-Tr)/((1-pZ)*(probs.curr.c + probs.curr.n)) - 1)
w.curr.del<-1/n*t(X)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
gbar<-c(1/n*t(X)%*%((Z*Tr/(1-probs.curr.n) + (1-Z)*(1-Tr)/(1-probs.curr.a) - 1)*probs.curr.c),
1/n*t(X)%*%((Z*Tr/(1-probs.curr.n) + (1-Z)*Tr/probs.curr.a - 1)*probs.curr.a),
w.curr.del)
if (is.null(invV))
{
X.1.1<-X*as.vector((pZ/(1 - probs.curr.n) + (1 - pZ)/(1 - probs.curr.a) - 1)*probs.curr.c^2)
X.1.2<-X*as.vector((pZ/(probs.curr.c + probs.curr.a) - 1)*probs.curr.a*probs.curr.c)
X.1.3<-X*as.vector(probs.curr.c*((probs.curr.c+probs.curr.a)^-1 - 1))
X.1.4<-X*as.vector(probs.curr.c*(-1))
X.1.5<-X*as.vector(probs.curr.c*(-1))
X.1.6<-X*as.vector(probs.curr.c*((probs.curr.c + probs.curr.n)^-1 - 1))
X.2.2<-X*as.vector((pZ/(1-probs.curr.n) + (1 - pZ)/probs.curr.a - 1)*probs.curr.a^2)
X.2.3<-X*as.vector(probs.curr.a*((probs.curr.c+probs.curr.a)^-1 - 1))
X.2.4<-X*as.vector(probs.curr.a*((probs.curr.a)^-1 - 1))
X.2.5<-X*as.vector(probs.curr.a*(-1))
X.2.6<-X*as.vector(probs.curr.a*(-1))
X.3.3<-X*as.vector((pZ*(probs.curr.c + probs.curr.a))^-1 - 1)
X.3.4<- -X
X.3.5<- -X
X.3.6<- -X
X.4.4<-X*as.vector(((1-pZ)*probs.curr.a)^-1 - 1)
X.4.5<- -X
X.4.6<- -X
X.5.5<-X*as.vector((pZ*probs.curr.n)^-1 - 1)
X.5.6<- -X
X.6.6<-X*as.vector(((1-pZ)*(probs.curr.c + probs.curr.n))^-1 - 1)
V<-1/n*rbind(cbind(t(X.1.1)%*%X, t(X.1.2)%*%X, t(X.1.3)%*%X, t(X.1.4)%*%X, t(X.1.5)%*%X, t(X.1.6)%*%X),
cbind(t(X.1.2)%*%X, t(X.2.2)%*%X, t(X.2.3)%*%X, t(X.2.4)%*%X, t(X.2.5)%*%X, t(X.2.6)%*%X),
cbind(t(X.1.3)%*%X, t(X.2.3)%*%X, t(X.3.3)%*%X, t(X.3.4)%*%X, t(X.3.5)%*%X, t(X.3.6)%*%X),
cbind(t(X.1.4)%*%X, t(X.2.4)%*%X, t(X.3.4)%*%X, t(X.4.4)%*%X, t(X.4.5)%*%X, t(X.4.6)%*%X),
cbind(t(X.1.5)%*%X, t(X.2.5)%*%X, t(X.3.5)%*%X, t(X.4.5)%*%X, t(X.5.5)%*%X, t(X.5.6)%*%X),
cbind(t(X.1.6)%*%X, t(X.2.6)%*%X, t(X.3.6)%*%X, t(X.4.6)%*%X, t(X.5.6)%*%X, t(X.6.6)%*%X))
invV<-ginv(V)
}
}
else{
probs.curr <- pmin(pmax(exp(X%*%beta.curr)/(1 + exp(X%*%beta.curr)), probs.min), 1-probs.min)
w.curr <- cbind(Z*Tr/(pZ * probs.curr) - 1, Z*(1-Tr)/(pZ *(1 - probs.curr)) - 1)
w.curr.del<-1/n*t(X)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
gbar<-c(1/n*t(X)%*%(Tr*Z*(1-probs.curr) - Z*(1-Tr)*probs.curr),
w.curr.del)
if (is.null(invV))
{
X.1.1<-X*as.vector(pZ * probs.curr * (1 - probs.curr))
X.1.2<-X*as.vector(1 - probs.curr)
X.1.3<-X*as.vector(-probs.curr)
X.2.2<-X*as.vector((pZ * probs.curr)^-1 - 1)
X.2.3<- -X
X.3.3<-X*as.vector((pZ * (1 - probs.curr))^-1 - 1)
V<-1/n*rbind(cbind(t(X.1.1)%*%X, t(X.1.2)%*%X, t(X.1.3)%*%X),
cbind(t(X.1.2)%*%X, t(X.2.2)%*%X, t(X.2.3)%*%X),
cbind(t(X.1.3)%*%X, t(X.2.3)%*%X, t(X.3.3)%*%X))
invV<-ginv(V)
}
}
loss1<-as.vector(t(gbar)%*%invV%*%(gbar))
out1<-list("loss"=loss1, "invV"=invV)
out1
}
gmm.loss <- function(beta.curr, invV = NULL, twosided) gmm.func(beta.curr, invV, twosided = twosided)$loss
gmm.gradient <- function(beta.curr, invV, twosided)
{
if(twosided){
beta.curr.c<-beta.curr[1:k]
beta.curr.a<-beta.curr[k+(1:k)]
baseline.prob<-(1 + exp(X%*%beta.curr.c) + exp(X%*%beta.curr.a))^-1
probs.curr.c<-pmin(pmax(exp(X%*%beta.curr.c)*baseline.prob,probs.min),1-probs.min)
probs.curr.a<-pmin(pmax(exp(X%*%beta.curr.a)*baseline.prob,probs.min),1-probs.min)
probs.curr.n<-pmin(pmax(baseline.prob,probs.min),1-probs.min)
sums<-probs.curr.c+probs.curr.a+probs.curr.n
probs.curr.c<-probs.curr.c/sums
probs.curr.a<-probs.curr.a/sums
probs.curr.n<-probs.curr.n/sums
w.curr<-cbind(Z*Tr/(pZ*(probs.curr.c + probs.curr.a)) - 1,
(1-Z)*Tr/((1-pZ)*probs.curr.a) - 1,
Z*(1-Tr)/(pZ*probs.curr.n) - 1,
(1-Z)*(1-Tr)/((1-pZ)*(probs.curr.c + probs.curr.n)) - 1)
w.curr.del<-1/n*t(X)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
gbar<-c(1/n*t(X)%*%((Z*Tr/(1-probs.curr.n) + (1-Z)*(1-Tr)/(1-probs.curr.a) - 1)*probs.curr.c),
1/n*t(X)%*%((Z*Tr/(1-probs.curr.n) + (1-Z)*Tr/probs.curr.a - 1)*probs.curr.a),
w.curr.del)
Ac<- -probs.curr.c*probs.curr.n/(probs.curr.c + probs.curr.a)^2
Bc<- probs.curr.c/probs.curr.a
Cc<- -probs.curr.c*probs.curr.a/(1-probs.curr.a)^2
Dc<- probs.curr.c/probs.curr.n
Aa<- -probs.curr.a*probs.curr.n/(probs.curr.c + probs.curr.a)^2
Ba<- -(1-probs.curr.a)/probs.curr.a
Ca<- probs.curr.a/(1 - probs.curr.a)
Da<- probs.curr.a/probs.curr.n
dgbar<-rbind(cbind(t(X*as.vector(probs.curr.c*(Z*Tr*Ac + (1-Z)*(1-Tr)*Cc + (Z*Tr/(probs.curr.c + probs.curr.a) +
(1-Z)*(1-Tr)/(1-probs.curr.a) - 1)*(1 - probs.curr.c))))%*%X,
t(X*as.vector(probs.curr.a*(Z*Tr*Ac + (1-Z)*Tr*Bc - (Z*Tr/(probs.curr.c + probs.curr.a) +
(1-Z)*Tr/probs.curr.a - 1)*probs.curr.c)))%*%X,
t(X*as.vector(Z*Tr/pZ*Ac))%*%X,
t(X*as.vector((1-Z)*Tr/(1-pZ)*Bc))%*%X,
t(X*as.vector(Z*(1-Tr)/pZ*Dc))%*%X,
t(X*as.vector((1-Z)*(1-Tr)/(1-pZ)*Cc))%*%X),
cbind(t(X*as.vector(probs.curr.c*(Z*Tr*Aa + (1-Z)*(1-Tr)*Ca - (Z*Tr/(probs.curr.c + probs.curr.a) +
(1-Z)*(1-Tr)/(1-probs.curr.a) - 1)*probs.curr.a)))%*%X,
t(X*as.vector(probs.curr.a*(Z*Tr*Aa + (1-Z)*Tr*Ba + (Z*Tr/(probs.curr.c + probs.curr.a) +
(1-Z)*Tr/probs.curr.a - 1)*(1-probs.curr.a))))%*%X,
t(X*as.vector(Z*Tr/pZ*Aa))%*%X,
t(X*as.vector((1-Z)*Tr/(1-pZ)*Ba))%*%X,
t(X*as.vector(Z*(1-Tr)/pZ*Da))%*%X,
t(X*as.vector((1-Z)*(1-Tr)/(1-pZ)*Ca))%*%X))/n
}
else{
probs.curr <- pmin(pmax(exp(X%*%beta.curr)/(1 + exp(X%*%beta.curr)), probs.min), 1-probs.min)
w.curr <- cbind(Z*Tr/(pZ * probs.curr) - 1, Z*(1-Tr)/(pZ *(1 - probs.curr)) - 1)
w.curr.del<-1/n*t(X)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
gbar<-c(1/n*t(X)%*%(Tr*Z*(1-probs.curr) - Z*(1-Tr)*probs.curr),
w.curr.del)
dgbar<-cbind(t(X*as.vector((-Z*Tr - Z*(1-Tr))*probs.curr*(1-probs.curr)))%*%X,
t(X*as.vector(-Z*Tr*(1-probs.curr)/(pZ*probs.curr)))%*%X,
t(X*as.vector(Z*(1-Tr)*probs.curr/(pZ*(1-probs.curr))))%*%X)/n
}
out<-2*dgbar%*%invV%*%gbar
out
}
mle.loss <- function(beta.curr, twosided)
{
if (twosided){
beta.curr.c<-beta.curr[1:k]
beta.curr.a<-beta.curr[k+(1:k)]
baseline.prob<-(1 + exp(X%*%beta.curr.c) + exp(X%*%beta.curr.a))^-1
probs.curr.c<-pmin(pmax(exp(X%*%beta.curr.c)*baseline.prob,probs.min),1-probs.min)
probs.curr.a<-pmin(pmax(exp(X%*%beta.curr.a)*baseline.prob,probs.min),1-probs.min)
probs.curr.n<-pmin(pmax(1-probs.curr.c-probs.curr.a,probs.min),1-probs.min)
sums<-probs.curr.c+probs.curr.a+probs.curr.n
probs.curr.c<-probs.curr.c/sums
probs.curr.a<-probs.curr.a/sums
probs.curr.n<-probs.curr.n/sums
# Take the negative because we are minimizing. Want to minimize negative log-likelihood.
loss<- -sum(Z*Tr*log(probs.curr.c+probs.curr.a) + Z*(1-Tr)*log(probs.curr.n) + (1-Z)*Tr*log(probs.curr.a) + (1-Z)*(1-Tr)*log(1-probs.curr.a))
}
else{
probs.curr <- pmin(pmax(exp(X%*%beta.curr)/(1 + exp(X%*%beta.curr)),probs.min),1-probs.min)
loss <- -sum(Z*Tr*log(probs.curr) + Z*(1 - Tr)*log(1 - probs.curr))
}
loss
}
mle.gradient <- function(beta.curr, twosided)
{
if (twosided){
beta.curr.c<-beta.curr[1:k]
beta.curr.a<-beta.curr[k+(1:k)]
baseline.prob<-(1 + exp(X%*%beta.curr.c) + exp(X%*%beta.curr.a))^-1
probs.curr.c<-pmin(pmax(exp(X%*%beta.curr.c)*baseline.prob,probs.min),1-probs.min)
probs.curr.a<-pmin(pmax(exp(X%*%beta.curr.a)*baseline.prob,probs.min),1-probs.min)
probs.curr.n<-pmin(pmax(baseline.prob,probs.min),1-probs.min)
sums<-probs.curr.c+probs.curr.a+probs.curr.n
probs.curr.c<-probs.curr.c/sums
probs.curr.a<-probs.curr.a/sums
probs.curr.n<-probs.curr.n/sums
ds<- -c(t(X)%*%((Z*Tr/(probs.curr.c + probs.curr.a) + (1-Z)*(1-Tr)/(1 - probs.curr.a) - 1)*probs.curr.c),
t(X)%*%((Z*Tr/(probs.curr.c + probs.curr.a) + (1-Z)*Tr/probs.curr.a - 1)*probs.curr.a))
}
else{
probs.curr<-pmin(pmax(exp(X%*%beta.curr)/(1 + exp(X%*%beta.curr)),probs.min),1-probs.min)
ds <- -t(X)%*%((Z*Tr/probs.curr - Z*(1-Tr)/(1-probs.curr))*probs.curr*(1-probs.curr))
}
ds
}
bal.loss <- function(beta.curr, invV, twosided)
{
if (twosided){
beta.curr.c<-beta.curr[1:k]
beta.curr.a<-beta.curr[k+(1:k)]
baseline.prob<-(1 + exp(X%*%beta.curr.c) + exp(X%*%beta.curr.a))^-1
probs.curr.c<-pmin(pmax(exp(X%*%beta.curr.c)*baseline.prob,probs.min),1-probs.min)
probs.curr.a<-pmin(pmax(exp(X%*%beta.curr.a)*baseline.prob,probs.min),1-probs.min)
probs.curr.n<-pmin(pmax(baseline.prob,probs.min),1-probs.min)
sums<-probs.curr.c+probs.curr.a+probs.curr.n
probs.curr.c<-probs.curr.c/sums
probs.curr.a<-probs.curr.a/sums
probs.curr.n<-probs.curr.n/sums
w.curr<-cbind(Z*Tr/(pZ*(probs.curr.c + probs.curr.a)) - 1,
(1-Z)*Tr/((1-pZ)*probs.curr.a) - 1,
Z*(1-Tr)/(pZ*probs.curr.n) - 1,
(1-Z)*(1-Tr)/((1-pZ)*(probs.curr.c + probs.curr.n)) - 1)
invV <- invV[(2*ncol(X)+1):ncol(invV),(2*ncol(X)+1):ncol(invV)]
}
else{
probs.curr<-pmin(pmax(exp(X%*%beta.curr)/(1 + exp(X%*%beta.curr)),probs.min),1-probs.min)
w.curr<-cbind(Z*Tr/(pZ * probs.curr) - 1, Z*(1-Tr)/(pZ *(1 - probs.curr)) - 1)
invV <- invV[(ncol(X)+1):ncol(invV),(ncol(X)+1):ncol(invV)]
}
w.curr.del<-as.matrix(1/n*t(X)%*%w.curr)
wbar <- c(w.curr.del)
loss1<-sum(diag(t(wbar)%*%invV%*%wbar))
loss1
}
bal.gradient <- function(beta.curr, invV, twosided)
{
if(twosided){
beta.curr.c<-beta.curr[1:k]
beta.curr.a<-beta.curr[k+(1:k)]
baseline.prob<-(1 + exp(X%*%beta.curr.c) + exp(X%*%beta.curr.a))^-1
probs.curr.c<-pmin(pmax(exp(X%*%beta.curr.c)*baseline.prob,probs.min),1-probs.min)
probs.curr.a<-pmin(pmax(exp(X%*%beta.curr.a)*baseline.prob,probs.min),1-probs.min)
probs.curr.n<-pmin(pmax(baseline.prob,probs.min),1-probs.min)
sums<-probs.curr.c+probs.curr.a+probs.curr.n
probs.curr.c<-probs.curr.c/sums
probs.curr.a<-probs.curr.a/sums
probs.curr.n<-probs.curr.n/sums
Ac<- -probs.curr.c*probs.curr.n/(probs.curr.c + probs.curr.a)^2
Bc<- probs.curr.c/probs.curr.a
Cc<- -probs.curr.c*probs.curr.a/(1-probs.curr.a)^2
Dc<- probs.curr.c/probs.curr.n
Aa<- -probs.curr.a*probs.curr.n/(probs.curr.c + probs.curr.a)^2
Ba<- -(1-probs.curr.a)/probs.curr.a
Ca<- probs.curr.a/(1 - probs.curr.a)
Da<- probs.curr.a/probs.curr.n
w.curr<-cbind(Z*Tr/(pZ*(probs.curr.c + probs.curr.a)) - 1,
(1-Z)*Tr/((1-pZ)*probs.curr.a) - 1,
Z*(1-Tr)/(pZ*probs.curr.n) - 1,
(1-Z)*(1-Tr)/((1-pZ)*(probs.curr.c + probs.curr.n)) - 1)
w.curr.del<-as.matrix(1/n*t(X)%*%w.curr)
wbar <- c(w.curr.del)
dw.beta.c<-1/n*cbind(t(X*as.vector(Z*Tr/pZ*Ac))%*%X,
t(X*as.vector((1-Z)*Tr/(1-pZ)*Bc))%*%X,
t(X*as.vector(Z*(1-Tr)/pZ*Dc))%*%X,
t(X*as.vector((1-Z)*(1-Tr)/(1-pZ)*Cc))%*%X)
dw.beta.a<-1/n*cbind(t(X*as.vector(Z*Tr/pZ*Aa))%*%X,
t(X*as.vector((1-Z)*Tr/(1-pZ)*Ba))%*%X,
t(X*as.vector(Z*(1-Tr)/pZ*Da))%*%X,
t(X*as.vector((1-Z)*(1-Tr)/(1-pZ)*Ca))%*%X)
invV <- invV[(2*ncol(X)+1):ncol(invV),(2*ncol(X)+1):ncol(invV)]
out.1<-2*dw.beta.c%*%invV%*%wbar
out.2<-2*dw.beta.a%*%invV%*%wbar
out<-c(out.1, out.2)
}
else{
probs.curr<-pmin(pmax(exp(X%*%beta.curr)/(1 + exp(X%*%beta.curr)),probs.min),1-probs.min)
w.curr<-cbind(Z*Tr/(pZ * probs.curr) - 1, Z*(1-Tr)/(pZ *(1 - probs.curr)) - 1)
w.curr.del<-as.matrix(1/n*t(X)%*%w.curr)
wbar <- c(w.curr.del)
invV <- invV[(ncol(X)+1):ncol(invV),(ncol(X)+1):ncol(invV)]
dw.beta <- 1/n*cbind(t(X*as.vector(-Z*Tr*(1-probs.curr)/(pZ*probs.curr)))%*%X,
t(X*as.vector(Z*(1-Tr)*probs.curr/(pZ*(1-probs.curr))))%*%X)
out<-2*dw.beta%*%invV%*%wbar
}
out
}
# Get starting point for optim
# This block needs to be separated for one-sided and two-sided
if (twosided){
beta.n0 <- coef(glm(I(1-Tr) ~ - 1 + X, subset = which(Z == 1)))
beta.a0 <- coef(glm(Tr ~ -1 + X, subset = which(Z == 0)))
p.hat.a0 <- pmin(pmax(exp(X%*%beta.a0)/(1 + exp(X%*%beta.a0) + exp(X%*%beta.n0)), probs.min), 1-probs.min)
p.hat.n0 <- pmin(pmax(exp(X%*%beta.n0)/(1 + exp(X%*%beta.a0) + exp(X%*%beta.n0)), probs.min), 1-probs.min)
p.hat.c0 <- pmin(pmax(1/(1 + exp(X%*%beta.a0) + exp(X%*%beta.n0)), probs.min), 1-probs.min)
sums <- p.hat.c0 + p.hat.a0 + p.hat.n0
p.hat.c0 <- p.hat.c0/sums
p.hat.a0 <- p.hat.a0/sums
p.hat.n0 <- p.hat.n0/sums
beta.init <- c(coef(lm(log(p.hat.c0/(1-p.hat.c0)) ~ -1 + X)), coef(lm(log(p.hat.a0/(1-p.hat.a0)) ~ -1 + X)))
}
else{
beta.init <- coef(glm(Tr ~ -1 + X, subset = which(Z == 1)))
}
# All optimization functions need a one-sided or two-sided option
mle.opt<-optim(beta.init, mle.loss, control=list("maxit"=iterations), method = "BFGS", gr = mle.gradient, twosided = twosided)
beta.mle<-mle.opt$par
this.invV<-gmm.func(beta.mle, twosided = twosided)$invV
if (score.only) gmm.opt<-mle.opt
else {
bal.init.opt<-optim(beta.init, bal.loss, control=list("maxit"=iterations), method = "BFGS", invV = this.invV, gr = bal.gradient, twosided = twosided)
bal.mle.opt<-optim(beta.mle, bal.loss, control=list("maxit"=iterations), method = "BFGS", invV = this.invV, gr = bal.gradient, twosided = twosided)
if(bal.init.opt$value > bal.mle.opt$value){
bal.opt <- bal.mle.opt
}
else{
bal.opt <- bal.init.opt
}
beta.bal<-bal.opt$par
if (bal.only) gmm.opt<-bal.opt
else {
gmm.mle.opt<-optim(beta.mle, gmm.loss, control=list("maxit"=iterations), method = "BFGS", invV = this.invV, gr = gmm.gradient, twosided = twosided)
gmm.bal.opt<-optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method = "BFGS", invV = this.invV, gr = gmm.gradient, twosided = twosided)
if (gmm.mle.opt$value > gmm.bal.opt$value)
{
gmm.opt<-gmm.bal.opt
}
else
{
gmm.opt<-gmm.mle.opt
}
}
}
beta.opt<-matrix(gmm.opt$par,nrow=k)
J.opt<-gmm.loss(beta.opt, this.invV, twosided)
bal.loss.opt <- bal.loss(beta.opt, invV = this.invV, twosided)
fitted.vals <- X%*%beta.opt
class(beta.opt)<-"coef"
if (twosided){
pi.c.opt<-as.vector(exp(X%*%beta.opt[,1]))
pi.a.opt<-as.vector(exp(X%*%beta.opt[,2]))
pi.n.opt<-1
sums<-pi.c.opt+pi.a.opt+pi.n.opt
#Normalize, then trim
pi.c.opt<-pmax(pmin(pi.c.opt/sums, 1-probs.min),probs.min)
pi.a.opt<-pmax(pmin(pi.a.opt/sums, 1-probs.min),probs.min)
pi.n.opt<-pmax(pmin(pi.n.opt/sums, 1-probs.min),probs.min)
#Renormalize, so that they add up to 1
sums<-pi.c.opt+pi.a.opt+pi.n.opt
fitted.values <- cbind(pi.c.opt/sums, pi.a.opt/sums, pi.n.opt/sums)
colnames(fitted.values)<-c("Compliers","Always","Never")
beta.opt[-1,]<-beta.opt[-1,]/x.sd
deviance<- -2*sum(Z*Tr*log(fitted.values[,1]+fitted.values[,2]) + Z*(1-Tr)*log(fitted.values[,3]) + (1-Z)*Tr*log(fitted.values[,2]) + (1-Z)*(1-Tr)*log(1-fitted.values[,2]))
if (k > 2)
{
beta.opt[1,]<-beta.opt[1,]-matrix(x.mean%*%beta.opt[-1,])
}
else
{
beta.opt[1,]<-beta.opt[1,]-x.mean*beta.opt[-1,]
}
}
else{
fitted.values<-pmax(pmin(exp(X%*%beta.opt)/(1+exp(X%*%beta.opt)),1-probs.min),probs.min)
deviance<- -2*sum(Z*Tr*log(fitted.values) + Z*(1-Tr)*log(1-fitted.values))
beta.opt[-1]<-beta.opt[-1]/x.sd
beta.opt[1]<-beta.opt[1]-sum(x.mean*beta.opt[-1])
}
output<-list("coefficients"=beta.opt,"fitted.values"=fitted.values,"weights"=1/fitted.values[,1],
"deviance"=deviance,"converged"=gmm.opt$conv,"J"=J.opt,"df"=k,
"bal"=bal.loss.opt)
class(output)<-"CBIV"
output
}
## [source: CBPS/R/CBIV.R]
library(numDeriv)
library(MASS)
#' Covariate Balancing Propensity Score (CBPS) for Marginal Structural Models
#'
#' \code{CBMSM} estimates propensity scores such that both covariate balance
#' and prediction of treatment assignment are maximized. With longitudinal
#' data, the method returns marginal structural model weights that can be
#' entered directly into a linear model. The method also handles multiple
#' binary treatments administered concurrently.
#'
#' Fits covariate balancing propensity scores for marginal structural models.
#'
#' ### @aliases CBMSM CBMSM.fit
#' @param formula A formula of the form treat ~ X. The same covariates are used
#' in each time period. At default values, a single set of coefficients is estimated
#' across all time periods. To allow a different set of coefficients for each
#' time period, set \code{time.vary = TRUE}. Data should be sorted by time.
#' @param id A vector which identifies the unit associated with each row of
#' treat and X.
#' @param time A vector which identifies the time period associated with each
#' row of treat and X. All data should be sorted by time.
#' @param data An optional data frame, list or environment (or object coercible
#' by as.data.frame to a data frame) containing the variables in the model. If
#' not found in data, the variables are taken from \code{environment(formula)},
#' typically the environment from which \code{CBMSM} is called. Data should be
#' sorted by time.
#' @param twostep Set to \code{TRUE} to use a two-step estimator, which will
#' run substantially faster than continuous-updating. Default is \code{FALSE},
#' which uses the continuous-updating estimator described by Imai and Ratkovic
#' (2014).
#' @param msm.variance Default is \code{FALSE}, which uses the low-rank
#' approximation of the variance described in Imai and Ratkovic (2014). Set to
#' \code{TRUE} to use the full variance matrix.
#' @param time.vary Default is \code{FALSE}, which uses the same coefficients
#' across time period. Set to \code{TRUE} to fit one set per time period.
#' @param type "MSM" for a marginal structural model, with multiple time
#' periods or "MultiBin" for multiple binary treatments at the same time
#' period.
#' @param init Default is \code{"opt"}, which uses CBPS and logistic regression
#' starting values, and chooses the one that achieves the best balance. Other options
#' are "glm" and "CBPS"
#' @param ... Other parameters to be passed through to \code{optim()}
#' @return \item{weights}{The optimal weights.} \item{fitted.values}{The fitted
#' propensity score for each observation.} \item{y}{The treatment vector used.}
#' \item{x}{The covariate matrix.} \item{id}{The vector id used in CBMSM.fit.}
#' \item{time}{The vector time used in CBMSM.fit.} \item{model}{The model
#' frame.} \item{call}{The matched call.} \item{formula}{The formula supplied.}
#' \item{data}{The data argument.} \item{treat.hist}{A matrix of the treatment
#' history, with each observation in rows and time in columns.}
#' \item{treat.cum}{A vector of the cumulative treatment history, by
#' individual.}
#' @author Marc Ratkovic, Christian Fong, and Kosuke Imai; The CBMSM function
#' is based on the code for version 2.15.0 of the glm function implemented in
#' the stats package, originally written by Simon Davies. This documenation is
#' likewise modeled on the documentation for glm and borrows its language where
#' the arguments and values are the same.
#' @seealso \link{plot.CBMSM}
#' @references
#'
#' Imai, Kosuke and Marc Ratkovic. 2014. ``Covariate Balancing Propensity
#' Score.'' Journal of the Royal Statistical Society, Series B (Statistical
#' Methodology). \url{http://imai.princeton.edu/research/CBPS.html}
#'
#' Imai, Kosuke and Marc Ratkovic. 2015. ``Robust Estimation of Inverse
#' Probability Weights for Marginal Structural Models.'' Journal of the
#' American Statistical Association.
#' \url{http://imai.princeton.edu/research/MSM.html}
#' @examples
#'
#'
#' ##Load Blackwell data
#'
#' data(Blackwell)
#'
#' ## Quickly fit a short model to test
#' form0 <- "d.gone.neg ~ d.gone.neg.l1 + camp.length"
#' fit0<-CBMSM(formula = form0, time=Blackwell$time,id=Blackwell$demName,
#' data=Blackwell, type="MSM", iterations = NULL, twostep = TRUE,
#' msm.variance = "approx", time.vary = FALSE)
#'
#' \dontrun{
#' ##Fitting the models in Imai and Ratkovic (2014)
#' ##Warning: may take a few minutes; setting time.vary to FALSE
#' ##results in a quicker fit but with poorer balance.
#' ##Usually, it is best to use time.vary = TRUE
#' form1<-"d.gone.neg ~ d.gone.neg.l1 + d.gone.neg.l2 + d.neg.frac.l3 +
#' camp.length + camp.length + deminc + base.poll + year.2002 +
#' year.2004 + year.2006 + base.und + office"
#'
#' ##Note that init="glm" gives the published results but the default is now init="opt"
#' fit1<-CBMSM(formula = form1, time=Blackwell$time,id=Blackwell$demName,
#' data=Blackwell, type="MSM", iterations = NULL, twostep = TRUE,
#' msm.variance = "full", time.vary = TRUE, init="glm")
#'
#' fit2<-CBMSM(formula = form1, time=Blackwell$time,id=Blackwell$demName,
#' data=Blackwell, type="MSM", iterations = NULL, twostep = TRUE,
#' msm.variance = "approx", time.vary = TRUE, init="glm")
#'
#'
#' ##Assessing balance
#'
#' bal1<-balance.CBMSM(fit1)
#' bal2<-balance.CBMSM(fit2)
#'
#' ##Effect estimation: Replicating Effect Estimates in
#' ##Table 3 of Imai and Ratkovic (2014)
#'
#' lm1<-lm(demprcnt[time==1]~fit1$treat.hist,data=Blackwell,
#' weights=fit1$glm.weights)
#' lm2<-lm(demprcnt[time==1]~fit1$treat.hist,data=Blackwell,
#' weights=fit1$weights)
#' lm3<-lm(demprcnt[time==1]~fit1$treat.hist,data=Blackwell,
#' weights=fit2$weights)
#'
#' lm4<-lm(demprcnt[time==1]~fit1$treat.cum,data=Blackwell,
#' weights=fit1$glm.weights)
#' lm5<-lm(demprcnt[time==1]~fit1$treat.cum,data=Blackwell,
#' weights=fit1$weights)
#' lm6<-lm(demprcnt[time==1]~fit1$treat.cum,data=Blackwell,
#' weights=fit2$weights)
#'
#'
#'
#' ### Example: Multiple Binary Treatments Administered at the Same Time
#' n<-200
#' k<-4
#' set.seed(1040)
#' X1<-cbind(1,matrix(rnorm(n*k),ncol=k))
#'
#' betas.1<-betas.2<-betas.3<-c(2,4,4,-4,3)/5
#' probs.1<-probs.2<-probs.3<-(1+exp(-X1 %*% betas.1))^-1
#'
#' treat.1<-rbinom(n=length(probs.1),size=1,probs.1)
#' treat.2<-rbinom(n=length(probs.2),size=1,probs.2)
#' treat.3<-rbinom(n=length(probs.3),size=1,probs.3)
#' treat<-c(treat.1,treat.2,treat.3)
#' X<-rbind(X1,X1,X1)
#' time<-c(rep(1,nrow(X1)),rep(2,nrow(X1)),rep(3,nrow(X1)))
#' id<-c(rep(1:nrow(X1),3))
#' y<-cbind(treat.1,treat.2,treat.3) %*% c(2,2,2) +
#' X1 %*% c(-2,8,7,6,2) + rnorm(n,sd=5)
#'
#' multibin1<-CBMSM(treat~X,id=id,time=time,type="MultiBin",twostep=TRUE)
#' summary(lm(y~-1+treat.1+treat.2+treat.3+X1, weights=multibin1$w))
#' }
#'
#' @export CBMSM
#'
CBMSM<-function(formula, id, time, data, type="MSM", twostep = TRUE, msm.variance = "approx", time.vary = FALSE, init="opt",...){
if (missing(data))
data <- environment(formula)
call <- match.call()
family <- binomial()
mf <- match.call(expand.dots = FALSE)
m <- match(c("formula", "data"), names(mf), 0L)
mf <- mf[c(1L, m)]
mf$drop.unused.levels <- TRUE
mf[[1L]] <- as.name("model.frame")
mf <- eval(mf, parent.frame())
mt <- attr(mf, "terms")
Y <- model.response(mf, "any")
if (length(dim(Y)) == 1L) {
nm <- rownames(Y)
dim(Y) <- NULL
if (!is.null(nm))
names(Y) <- nm
}
X <- if (!is.empty.model(mt)) model.matrix(mt, mf)#[,-2]
else matrix(, NROW(Y), 0L)
##Format treatment matrix
id<-as.numeric(as.factor(id))
unique.id<-sort(unique(id))
treat.hist<-matrix(NA,nrow=length(unique.id),ncol=length(unique(time)))
colnames(treat.hist)<-sort(unique(time))
rownames(treat.hist)<-unique.id
for(i in 1:length(unique(unique.id))) for(j in sort(unique(time)))
{{
treat.hist[i,j]<-Y[id==unique.id[i] & time==j]
}}
#treat.hist.fac<-apply(treat.hist,1,function(x) paste(x, collapse="+"))
cm.treat<-rowSums(treat.hist)
if(type=="MSM") {
MultiBin.fit<-FALSE
}
if(type=="MultiBin"){
MultiBin.fit<-TRUE
X<-cbind(1,X[,apply(X,2,sd)>0])
names.X<-c("Intercept",colnames(X)[-1])
}
fit <- eval(call("CBMSM.fit", treat = Y, X = X, id = id, time=time,
MultiBin.fit = MultiBin.fit, twostep = twostep, msm.variance = msm.variance,
time.vary = time.vary, init = init))
fit$call<-call
fit$formula<-formula
fit$y<-Y
fit$x<-X
fit$id<-id
fit$time<-time
fit$model<-mf
fit$data<-data
fit$treat.hist<-treat.hist
fit$treat.cum<-rowSums(treat.hist)
fit$weights<-fit$weights[time==min(time)]
fit
}
########################
###Calls loss function
########################
#' CBMSM.fit
#'
#' @param treat A vector of treatment assignments. For N observations over T
#' time periods, the length of treat should be N*T.
#' @param X A covariate matrix. For N observations over T time periods, X
#' should have N*T rows.
#' @param id A vector which identifies the unit associated with each row of
#' treat and X.
#' @param time A vector which identifies the time period associated with each
#' row of treat and X.
#' @param MultiBin.fit A parameter for whether the multiple binary treatments
#' occur concurrently (\code{FALSE}) or over consecutive time periods
#' (\code{TRUE}) as in a marginal structural model. Setting type = "MultiBin"
#' when calling \code{CBMSM} will set MultiBin.fit to \code{TRUE} when
#' CBMSM.fit is called.
#' @param twostep Set to \code{TRUE} to use a two-step estimator, which will
#' run substantially faster than continuous-updating. Default is \code{FALSE},
#' which uses the continuous-updating estimator described by Imai and Ratkovic
#' (2014).
#' @param msm.variance Default is \code{FALSE}, which uses the low-rank
#' approximation of the variance described in Imai and Ratkovic (2014). Set to
#' \code{TRUE} to use the full variance matrix.
#' @param time.vary Default is \code{FALSE}, which uses the same coefficients
#' across time period. Set to \code{TRUE} to fit one set per time period.
#' @param init Default is \code{"opt"}, which uses CBPS and logistic regression
#' starting values, and chooses the one that achieves the best balance. Other options
#' are "glm" and "CBPS"
#' @param ... Other parameters to be passed through to \code{optim()}
#'
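#' @return A list with elements \code{weights}, \code{fitted.values}, \code{id},
#'   \code{glm.g}, \code{msm.g}, and \code{glm.weights}, as assembled in the
#'   \code{out} object at the end of the function body.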
CBMSM.fit<-function(treat, X, id, time, MultiBin.fit, twostep, msm.variance, time.vary, init, ...){
id0<-id
id<-as.numeric(as.factor(id0))
if(msm.variance=="approx") full.var<-FALSE
if(msm.variance=="full") full.var<-TRUE
X.mat<-X
X.mat<-X.mat[,apply(X.mat,2,sd)>0, drop = FALSE]
##Format design matrix, run glm
glm1<-glm(treat~X.mat,family="binomial")
glm1$coefficients<-CBPS(treat~X.mat, ATT=0,method="exact")$coefficients
##################
##Make SVD matrix of covariates
##and matrix of treatment history
##################
#if(time.vary==FALSE){
X.svd<-X.mat
#X.svd<-apply(X.svd,2,FUN=function(x) (x-mean(x))/sd(x), drop=FALSE)
X.svd<-scale(X.svd) # Edit by Christian; this was causing an error
#X.svd[,c(1,2,7)]<-X.svd[,c(1,2,7)]*10
X.svd<-svd(X.svd)$u%*%diag(svd(X.svd)$d>0.0001)
X.svd<-X.svd[,apply(X.svd,2,sd)>0,drop=FALSE]
glm1<-glm(treat~X.svd,family="binomial")
glm.cb<-glm1
glm.cb<-CBPS(treat~X.svd, ATT=0,method="exact")$coefficients
glm1<-glm1$coefficients
if(time.vary==TRUE){
#} else{
X.svd<-NULL
for(i in sort(unique(time))){
X.sub<-X.mat[time==i,,drop=FALSE]
#X.sub<-apply(X.sub,2,FUN=function(x) (x-mean(x))/sd(x))
X.sub <- scale(X.sub) # Edit by Christian; this was causing an error
X.sub[is.na(X.sub)]<-0
X.sub<-svd(X.sub)$u%*%diag(svd(X.sub)$d>0.0001)
X.sub<-X.sub[,apply(X.sub,2,sd)>0,drop=FALSE]
X.svd<-rbind(X.svd,X.sub)
}
##Make matrix of time-varying glm starting vals
cbps.coefs<-glm.coefs<-NULL
n.time<-length(unique(time))
for(i in 1:n.time){
glm1<-summary(glm(treat~X.svd, subset=(time==i)))$coefficients[,1]
glm.cb<-glm1
glm.cb<-CBPS(treat[time==i]~X.svd[time==i,], ATT=0,method="exact")$coefficients
glm.coefs<-cbind(glm.coefs,glm1)
cbps.coefs<-cbind(cbps.coefs,glm.cb)
}
glm.coefs[is.na(glm.coefs)]<-0
cbps.coefs[is.na(cbps.coefs)]<-0
glm1<-as.vector(glm.coefs)
glm.cb<-as.vector(cbps.coefs)
}
##################
## Start optimization
##################
#Twostep is true
msm.loss1<-function(x,...) msm.loss.func(betas=x, X=cbind(1,X.svd), treat=treat, time=time,...)$loss
glm.fit<-msm.loss.func(glm1,X=cbind(1,X.svd),time=time,treat=treat,full.var=full.var,twostep=FALSE)
cb.fit<-msm.loss.func(glm.cb,X=cbind(1,X.svd),time=time,treat=treat,full.var=full.var,twostep=FALSE)
type.fit<-"Returning Estimates from Logistic Regression\n"
if((cb.fit$loss<glm.fit$loss & init=="opt")|init=="CBPS") {
glm1<-glm.cb
glm.fit<-cb.fit
type.fit<-"Returning Estimates from CBPS\n"
}
##Twostep is true; full variance option is passed
#Run twostep regardless for starting vals
#if(twostep==TRUE){
Vcov.inv<-glm.fit
msm.opt<-optim(glm1,msm.loss1,full.var=full.var,Vcov.inv=Vcov.inv$V,bal.only=TRUE,twostep=TRUE,method="BFGS")
msm.twostep<-msm.fit<-msm.loss.func(msm.opt$par,X=cbind(1,X.svd), treat=treat, time=time, full.var=full.var,Vcov.inv=Vcov.inv$V,bal.only=TRUE,twostep=TRUE)
l3<-msm.loss.func(glm1,X=cbind(1,X.svd), treat=treat, time=time, full.var=full.var,Vcov.inv=Vcov.inv$V,bal.only=TRUE,twostep=TRUE)
if((l3$loss<msm.fit$loss) & init=="opt") {
msm.fit<-l3
warning("Warning: Optimization did not improve over initial estimates\n")
cat(type.fit)
}
if(twostep==FALSE) {
if(init=="opt") msm.opt<-optim(msm.fit$par,msm.loss1,full.var=full.var,bal.only=TRUE,twostep=FALSE,method="BFGS")
if(init!="opt") msm.opt<-optim(msm.opt$par,msm.loss1,full.var=full.var,bal.only=TRUE,twostep=FALSE,method="BFGS")
msm.fit<-msm.loss.func(msm.opt$par,X=cbind(1,X.svd), treat=treat, time=time, full.var=full.var,Vcov.inv=Vcov.inv$V,bal.only=TRUE,twostep=FALSE)
l3<-msm.loss.func(glm1,X=cbind(1,X.svd), treat=treat, time=time, full.var=full.var,
Vcov.inv=Vcov.inv$V,bal.only=TRUE,twostep=FALSE)
if((l3$loss<msm.fit$loss) & init=="opt") {
msm.fit<-l3
cat("\nWarning: Optimization did not improve over initial estimates\n")
cat(type.fit)
}
}
##################
## Calculate unconditional probs and treatment matrix
##################
n.obs<-length(unique(id))
n.time<-length(unique(time))
treat.hist<-matrix(NA,nrow=n.obs,ncol=n.time)
name.cands<-sort(unique(id))
for(i in 1:n.obs) for(j in 1:n.time) treat.hist[i,j]<-treat[id==name.cands[i] & time==j ]
treat.hist.unique<-unique(treat.hist,MAR=1)
treat.unique<-rep(NA,n.obs)
for(i in 1:n.obs) treat.unique[i]<- which(apply(treat.hist.unique,1,FUN=function(x) sum((x-treat.hist[i,])^2) )==0)
treat.unique<-as.factor(treat.unique)
uncond.probs.cand<-rep(0,n.obs)
for(i in 1:n.obs) {for(j in 1:n.obs) {
check<-mean(treat.hist[j,]==treat.hist[i,])==1
if(check) uncond.probs.cand[i]<-uncond.probs.cand[i]+1
}
}
uncond.probs.cand<-uncond.probs.cand/n.obs
###########
##Produce Weights
###########
wts.out<-rep(uncond.probs.cand/msm.fit$pr,n.time)[time==1]
probs.out<-msm.fit$pr
uncond.probs<-uncond.probs.cand
loss.glm<-glm.fit$loss
loss.msm<-msm.fit$loss
if(loss.glm<loss.msm){
warning("CBMSM fails to improve covariate balance relative to MLE. \n GLM loss: ", glm.fit$loss, "\n CBMSM loss: ", msm.fit$loss, "\n")
}
# I know I'm putting probs.out in the weights and wts.out in the fitted values, but that
# is what Marc said to do
out<-list("weights"=probs.out,"fitted.values"=wts.out,"id"=id0[1:n.obs],"glm.g"=glm.fit$g.all,"msm.g"=msm.fit$g.all,"glm.weights"=(uncond.probs/glm.fit$pr)[time==1])
class(out)<-c("CBMSM","list")
return(out)
}
########################
###Loss function for MSM
########################
msm.loss.func<-function(betas,X=X,treat=treat,time=time,bal.only=F,time.sub=0,twostep=FALSE, Vcov.inv=NULL,full.var=FALSE,
constant.var=FALSE){
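##Loss for the MSM fit: stacks the logistic score conditions (g.prop, zeroed out
##when bal.only=TRUE) with the weighted balance conditions (g.wt) and returns
##n times the GMM quadratic form t(g.all) %*% var.X.inv %*% g.all.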
if((length(betas)==dim(X)[2]) ) betas<-rep(betas, dim(X)[2]/length(betas))
time<-time-min(time)+1
unique.time<-sort(unique(time))
n.t<-length(unique.time)
n<-dim(X)[1]/n.t
treat.use<-betas.use<-NULL
X.t<-NULL
for(i in 1:n.t){
betas.use<-cbind(betas.use,betas[1:dim(X)[2]+(i-1)*dim(X)[2] ])
treat.use<-cbind(treat.use,treat[time==unique.time[i]])
X.t<-cbind(X.t,X[time==unique.time[i],])
}
betas<-betas.use
betas[is.na(betas)]<-0
treat<-treat.use
thetas<-NULL
for(i in 1:n.t)
thetas<-cbind(thetas,X[time==i,]%*%betas[,i] )
probs.trim<-.0001
probs<-(1+exp(-thetas))^(-1)
probs<-pmax(probs,probs.trim)
probs<-pmin(probs,1-probs.trim)
probs.obs<-treat*probs+(1-treat)*(1-probs)
w.each<-treat/probs+(1-treat)/(1-probs)#+(treat-probs)^2/(probs*(1-probs))
w.all<-apply(w.each,1,prod)#*probs.uncond
bin.mat<-matrix(0,nrow=(2^n.t-1),ncol=n.t)
for(i in 1:(2^n.t-1)) bin.mat[i,(n.t-length(integer.base.b(i))+1):n.t]<-
integer.base.b(i)
num.valid.outer<-constr.mat.outer<-NULL
for(i.time in 1:n.t){
num.valid<-rep(0,dim(treat)[1])
constr.mat.prop<-constr.mat<-matrix(0,nrow=dim(treat)[1],ncol=dim(bin.mat)[1])
for(i in 1:dim(bin.mat)[1]){
is.valid<-sum(bin.mat[i,(i.time):dim(bin.mat)[2]])>0
if(is.valid){
#for(i.wt in i.time:n.t) w.all.now<-w.all.now*1/(1+3*probs[,i.wt]*(1-probs[,i.wt]))
constr.mat[,i]<-(w.all*(-1)^(treat%*%bin.mat[i,]))
num.valid<-num.valid+1
}else{
constr.mat[,i]<-0
}
}
num.valid.outer<-c(num.valid.outer,num.valid)
constr.mat.outer<-rbind(constr.mat.outer,constr.mat)
}
if(twostep==FALSE){
if(full.var==TRUE){
var.big<-0
X.t.big<-matrix(NA,nrow=n.t*dim(X.t)[1],ncol=dim(X.t)[2])
for(i in 1:n.t){X.t.big[1:dim(X.t)[1]+(i-1)*dim(X.t)[1],]<-X.t}
for(i in 1:dim(X.t.big)[1]){
mat1<-(X.t.big[i,])%*%t(X.t.big[i,])
mat2<-constr.mat.outer[i,]%*%t(constr.mat.outer[i,])
var.big<-var.big+mat2 %x%mat1
}
}
}
X.wt<-X.prop<-g.wt<-g.prop<-NULL
for(i in 1:n.t){
g.prop<-c(g.prop, 1/n*t(X[time==i,])%*%(treat[,i]-probs[,i]))
g.wt<-rbind(g.wt,1/n*t(X[time==i,])%*%cbind(constr.mat.outer[time==i,])*(i>time.sub))
X.prop.curr<-matrix(0,ncol=n,nrow=dim(X)[2])
X.wt.curr<-matrix(0,ncol=n,nrow=dim(X)[2])
X.prop<-rbind(X.prop,1/n^.5*t((X[time==i,]*(probs.obs[,i]*(1-probs.obs[,i]))^.5)))
if(bal.only){
X.wt<-rbind(X.wt,1/n^.5*t(X[time==i,]*unique(num.valid.outer)[i]^.5)) } else{
X.wt<-rbind(X.wt,1/n^.5*t(X[time==i,]*w.all^.5*unique(num.valid.outer)[i]^.5))
}
}
mat.prop<-matrix(0,nrow=n, ncol=dim(X.wt)[2])
mat.prop[,1]<-1
g.prop.all<-0*g.wt
g.prop.all[,1]<-g.prop
#g.prop.all<-g.prop
if(bal.only==T) g.prop.all<-0*g.prop.all
g.all<-rbind(g.prop.all,g.wt)
X.all<-rbind(X.prop*(1-bal.only),X.wt)
if(twostep==TRUE){
var.X.inv<-Vcov.inv
if(constant.var==TRUE) var.X.inv<-Vcov.inv*0
} else{
if(full.var==FALSE){
var.X.inv<-ginv((X.all)%*%t(X.all))}else{
var.X.inv<-ginv(var.big/n)
}
}
length.zero<-dim(g.prop.all)[2]#length(g.prop)
#var.X[(length.zero+1):(2*length.zero),1:length.zero]<-0
#var.X[1:length.zero,(length.zero+1):(2*length.zero)]<-0
#print(dim(g.all))
#print(dim(var.X.inv))
if(full.var==TRUE) g.all<-as.vector(g.wt)
loss<-t(g.all)%*%var.X.inv%*%g.all
out=list("loss"=(sum(diag(loss)))*n,"Var.inv"=var.X.inv,"probs"=w.all,"g.all"=g.all)
#t(g.prop)%*%ginv(X.prop%*%t(X.prop))%*%g.prop +sum(diag(t(g.wt)%*%ginv(X.wt%*%t(X.wt))%*%g.wt ))
}#closes msm.loss.func
########################
###Makes binary representation
########################
integer.base.b <-
function(x, b=2){
xi <- as.integer(x)
if(any(is.na(xi) | ((x-xi)!=0)))
print(list(ERROR="x not integer", x=x))
N <- length(x)
xMax <- max(x)
ndigits <- (floor(logb(xMax, base=2))+1)
Base.b <- array(NA, dim=c(N, ndigits))
for(i in 1:ndigits){#i <- 1
Base.b[, ndigits-i+1] <- (x %% b)
x <- (x %/% b)
}
if(N ==1) Base.b[1, ] else Base.b
}
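## Example (illustrative): integer.base.b(5) returns c(1, 0, 1), the base-2 digits
## of 5; a vector input returns one row of digits per element, padded to the width
## of the largest value, e.g. integer.base.b(c(3, 6)) gives rows 0 1 1 and 1 1 0.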
#' @export
balance.CBMSM<-function(object, ...)
{
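# For each unique treatment history, computes the weighted covariate means (raw
# and standardized) under the CBMSM weights ("Balanced") and under the baseline
# GLM weights ("Unweighted").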
treat.hist<-matrix(NA,nrow=length(unique(object$id)),ncol=length(unique(object$time)))
ids<-sort(unique(object$id))
times<-sort(unique(object$time))
for(i in 1:length(ids)) {
for(j in 1:length(times)){
treat.hist[i,j]<-object$y[object$id== ids[i] & object$time==j]
}
}
treat.hist.fac<-apply(treat.hist,1,function(x) paste(x, collapse="+"))
bal<-matrix(NA,nrow=(ncol(object$x)-1),ncol=length(unique(treat.hist.fac))*2)
baseline<-matrix(NA,nrow=(ncol(object$x)-1),ncol=length(unique(treat.hist.fac))*2)
cnames<-array()
for (i in 1:length(unique(treat.hist.fac)))
{
for (j in 2:ncol(object$x))
{
bal[j-1,i]<-sum((treat.hist.fac==unique(treat.hist.fac)[i])*object$x[which(object$time == times[1]),j]*object$w)/sum(object$w*(treat.hist.fac == unique(treat.hist.fac)[i]))
#bal[j-1,i]<-sum((treat.hist.fac==unique(treat.hist.fac)[i])*object$x[,j]*object$w)/sum(object$w*(treat.hist.fac == unique(treat.hist.fac)[i]))
# print(c(j,i,bal[j-1,i]))
bal[j-1,i+length(unique(treat.hist.fac))]<-bal[j-1,i]/sd(object$w*object$x[which(object$time == times[1]),j])
#bal[j-1,i+length(unique(treat.hist.fac))]<-bal[j-1,i]/sd(object$w*object$x[,j])
baseline[j-1,i]<-sum((treat.hist.fac==unique(treat.hist.fac)[i])*object$x[which(object$time == times[1]),j]*object$glm.w)/sum(object$glm.w*(treat.hist.fac == unique(treat.hist.fac)[i]))
baseline[j-1,i+length(unique(treat.hist.fac))]<-bal[j-1,i]/sd(object$glm.w*object$x[which(object$time == times[1]),j])
#baseline[j-1,i]<-sum((treat.hist.fac==unique(treat.hist.fac)[i])*object$x[,j]*object$glm.w)/sum(object$glm.w*(treat.hist.fac == unique(treat.hist.fac)[i]))
#baseline[j-1,i+length(unique(treat.hist.fac))]<-bal[j-1,i]/sd(object$glm.w*object$x[,j])
}
bal[is.na(bal)]<-0
baseline[is.na(baseline)]<-0
cnames[i]<-paste0(unique(treat.hist.fac)[i],".mean")
cnames[i+length(unique(treat.hist.fac))]<-paste0(unique(treat.hist.fac)[i],".std.mean")
}
colnames(bal)<-cnames
rnames<-colnames(object$x)[-1]
rownames(bal)<-rnames
colnames(baseline)<-cnames
rownames(baseline)<-rnames
statbal<-sum((bal-bal[,1])*(bal!=0)^2)
statloh<-sum((baseline-baseline[,1])*(baseline!=0)^2)
list("Balanced"=bal, "Unweighted"=baseline, "StatBal")
}
#' Plotting CBPS Estimation for Marginal Structural Models
#'
#' Plots the absolute difference in standardized means before and after
#' weighting.
#'
#' Covariate balance is improved if the plot's points are below the plotted
#' line of y=x.
#'
#' @param x an object of class \dQuote{CBMSM}.
#' @param covars Indices of the covariates to be plotted (excluding the
#' intercept). For example, if only the first two covariates from
#' \code{balance} are desired, set \code{covars} to 1:2. The default is
#' \code{NULL}, which plots all covariates.
#' @param silent If set to \code{FALSE}, returns the absolute imbalance for
#' each treatment history pair before and after weighting. This helps the user
#' to create his or her own customized plot. Default is \code{TRUE}, which
#' returns nothing.
#' @param boxplot If set to \code{TRUE}, returns a boxplot summarizing the
#' imbalance on the covariates instead of a point for each covariate. Useful
#' if there are many covariates.
#' @param ... Additional arguments to be passed to plot.
#' @return The x-axis gives the imbalance for each covariate-treatment history
#' pair without any weighting, and the y-axis gives the imbalance for each
#' covariate-treatment history pair after CBMSM weighting. Imbalance is
#' measured as the absolute difference in standardized means for the two
#' treatment histories. Means are standardized by the standard deviation of
#' the covariate in the full sample.
#' @author Marc Ratkovic and Christian Fong
#' @seealso \link{CBMSM}, \link{plot}
#'
#' @export
#'
plot.CBMSM<-function(x, covars = NULL, silent = TRUE, boxplot = FALSE, ...)
{
bal.out<-balance.CBMSM(x)
bal<-bal.out$Balanced
baseline<-bal.out$Unweighted
no.treats<-ncol(bal)/2
if (is.null(covars))
{
covars<-1:nrow(bal)
}
covarlist<-c()
contrast<-c()
bal.std.diff<-c()
baseline.std.diff<-c()
treat.hist.names<-sapply(colnames(bal)[1:no.treats],function(s) substr(s, 1, nchar(s)-5))
for (i in covars)
{
for (j in 1:(no.treats-1))
{
for (k in (j+1):no.treats)
{
covarlist<-c(covarlist, rownames(bal)[i])
contrast<-c(contrast, paste(treat.hist.names[j],treat.hist.names[k],sep=":",collapse=""))
bal.std.diff<-c(bal.std.diff,abs(bal[i,no.treats+j] - bal[i,no.treats+k]))
baseline.std.diff<-c(baseline.std.diff,abs(baseline[i,no.treats+j] - baseline[i,no.treats+k]))
}
}
}
range.x<-range.y<-range(c(bal.std.diff,baseline.std.diff))
if (!boxplot){
plot(x=baseline.std.diff,y=bal.std.diff,asp=1,xlab="Unweighted Regression Imbalance",ylab="CBMSM Imbalance",
xlim=range.x, ylim = range.y, main = "Difference in Standardized Means", ...)
abline(0,1)
}
else{
boxplot(baseline.std.diff, bal.std.diff, horizontal = TRUE, yaxt = 'n', xlab = "Difference in Standardized Means", ...)
axis(side=2, at=c(1,2),c("CBMSM Weighted", "Unweighted"))
}
if(!silent) return(data.frame("Covariate" = covarlist, "Contrast"=contrast, "Unweighted"=baseline.std.diff, "Balanced"=bal.std.diff))
}
## [source: CBPS/R/CBMSM.R]
#' Blackwell Data for Covariate Balancing Propensity Score
#'
#' This data set gives the outcomes a well as treatment assignments and
#' covariates for the example from Blackwell (2013).
#'
#'
#' @name Blackwell
#' @docType data
#' @format A data frame consisting of 13 columns (including treatment
#' assignment, time, and identifier vectors) and 570 observations.
#' @references Blackwell, Matthew. (2013). A framework for dynamic causal
#' inference in political science. American Journal of Political Science 57, 2,
#' 504-619.
#' @source d.gone.neg is the treatment. d.gone.neg.l1, d.gone.neg.l2, and
#' d.gone.neg.l3 are lagged treatment variables. camp.length, deminc,
#' base.poll, base.und, and office are covariates. year is the year of the
#' particular race, and time goes from the first measurement (time = 1) to the
#' election (time = 5). demName is the identifier, and demprcnt is the outcome.
#' @keywords datasets
NULL
#' LaLonde Data for Covariate Balancing Propensity Score
#'
#' This data set gives the outcomes as well as treatment assignments and
#' covariates for the econometric evaluation of training programs in LaLonde
#' (1986).
#'
#'
#' @name LaLonde
#' @docType data
#' @format A data frame consisting of 12 columns (including a treatment
#' assignment vector) and 3212 observations.
#' @references LaLonde, R.J. (1986). Evaluating the econometric evaluations of
#' training programs with experimental data. American Economic Review 76, 4,
#' 604-620.
#' @source Data from the National Supported Work Study. A benchmark matching
#' dataset. Columns consist of an indicator for whether the observed unit was
#' in the experimental subset; an indicator for whether the individual received
#' the treatment; age in years; schooling in years; indicators for black and
#' Hispanic; an indicator for marriage status, equal to one if married; an
#' indicator for no high school degree; reported earnings in 1974, 1975, and
#' 1978; and whether the 1974 earnings variable is missing. Observations not
#' missing 1974 earnings form the Dehejia-Wahba subsample of the LaLonde data.
#' Missing values for 1974 earnings are set to zero. 1974 and 1975 earnings are
#' pre-treatment. 1978 earnings is taken as the outcome variable.
#' @keywords datasets
NULL
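## Illustrative loading of the LaLonde data used in the CBPS() examples (a sketch):
# data(LaLonde, package = "CBPS")
# str(LaLonde)     # 3212 observations; `treat` is the treatment indicator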
|
/scratch/gouwar.j/cran-all/cranData/CBPS/R/CBPS-package.R
|
CBPS.2Treat<-function(treat, X, method, k, XprimeX.inv, bal.only, iterations, ATT, standardize, twostep, sample.weights, ...){
# There is probably an X'X missing somewhere that is causing these variance problems.
probs.min<- 1e-6
treat.orig<-treat
treat<-sapply(treat,function(x) ifelse(x==levels(factor(treat))[2],1,0))
if(ATT == 2) treat<-1-treat
if (ATT == 1){
print(paste0("Finding ATT with T=",as.character(levels(factor(treat.orig))[2]),
" as the treatment. Set ATT=2 to find ATT with T=",
as.character(levels(factor(treat.orig))[1])," as the treatment"))
}
if (ATT == 2){
print(paste0("Finding ATT with T=",as.character(levels(factor(treat.orig))[1]),
" as the treatment. Set ATT=1 to find ATT with T=",
as.character(levels(factor(treat.orig))[2])," as the treatment"))
}
  ##Note: Sample weights are rescaled to sum to n; n.c and n.t are measured with respect to the sample weights
sample.weights<-sample.weights/mean(sample.weights)
n<-dim(X)[1]
n.c<-sum(sample.weights[treat==0])
n.t<-sum(sample.weights[treat==1])
##Generates ATT weights. Called by loss function, etc.
ATT.wt.func<-function(beta.curr,X.wt=X){
X<-as.matrix(X.wt)
n.c<-sum(sample.weights[treat==0])
n.t<-sum(sample.weights[treat==1])
n<-n.c+n.t
theta.curr<-as.vector(X%*%beta.curr)
probs.curr<-(1+exp(-theta.curr))^-1
probs.curr<-pmin(1-probs.min,probs.curr)
probs.curr<-pmax(probs.min,probs.curr)
w1<-n/n.t*(treat-probs.curr)/(1-probs.curr)
w1
}
##The gmm objective function--given a guess of beta, constructs the GMM J statistic. Used for vanilla binary treatments
gmm.func<-function(beta.curr,X.gmm=X,ATT.gmm=ATT,invV=NULL,sample.weights0=sample.weights){
sample.weights<-sample.weights0
##Designate a few objects in the function.
X<-as.matrix(X.gmm)
ATT<-ATT.gmm
##Designate sample size, number of treated and control observations,
##theta.curr, which are used to generate probabilities.
##Trim probabilities, and generate weights.
n.c<-sum(sample.weights[treat==0])
n.t<-sum(sample.weights[treat==1])
n<-n.c+n.t
theta.curr<-as.vector(X%*%beta.curr)
probs.curr<-(1+exp(-theta.curr))^-1
probs.curr<-pmin(1-probs.min,probs.curr)
probs.curr<-pmax(probs.min,probs.curr)
probs.curr<-as.vector(probs.curr)
if(ATT){
w.curr<-ATT.wt.func(beta.curr)}
else{
w.curr<-(probs.curr-1+treat)^-1}
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/(n)*t(sample.weights*X)%*%(w.curr)
w.curr.del<-as.vector(w.curr.del)
w.curr<-as.vector(w.curr)
##Generate g-bar, as in the paper.
gbar<-c( 1/n*t(sample.weights*X)%*%(treat-probs.curr),w.curr.del)
##Generate the covariance matrix used in the GMM estimate.
##Was for the initial version that calculates the analytic variances.
if(is.null(invV))
{
if(ATT){
X.1<-sample.weights^.5*X*((1-probs.curr)*probs.curr)^.5
X.2<-sample.weights^.5*X*(probs.curr/(1-probs.curr))^.5
X.1.1<-sample.weights^.5*X*(probs.curr)^.5
}
else{
X.1<-sample.weights^.5*X*((1-probs.curr)*probs.curr)^.5
X.2<-sample.weights^.5*X*(probs.curr*(1-probs.curr))^-.5
X.1.1<- sample.weights^.5*X
}
if (ATT){
V<-rbind(1/n*cbind(t(X.1)%*%X.1,t(X.1.1)%*%X.1.1)*n/sum(treat),
1/n*cbind(t(X.1.1)%*%X.1.1*n/sum(treat),t(X.2)%*%X.2*n^2/sum(treat)^2))
}
else{
V<-rbind(1/n*cbind(t(X.1)%*%X.1,t(X.1.1)%*%X.1.1),
1/n*cbind(t(X.1.1)%*%X.1.1,t(X.2)%*%X.2))
}
invV<-ginv(V)
}
##Calculate the GMM loss.
loss1<-as.vector(t(gbar)%*%invV%*%(gbar))
out1<-list("loss"=loss1, "invV"=invV)
out1
}
gmm.loss<-function(x,...) gmm.func(x,...)$loss
##Loss function for balance constraints, returns the squared imbalance along each dimension.
bal.loss<-function(beta.curr,sample.weights0=sample.weights){
sample.weights<-sample.weights0
##Generate theta and probabilities.
theta.curr<-as.vector(X%*%beta.curr)
probs.curr<-(1+exp(-theta.curr))^-1
probs.curr<-pmin(1-probs.min,probs.curr)
probs.curr<-pmax(probs.min,probs.curr)
##Generate weights.
if(ATT){
w.curr<-1/n*ATT.wt.func(beta.curr)
}
else{
w.curr<-1/n*(probs.curr-1+treat)^-1
}
##Generate mean imbalance.
Xprimew <- t(sample.weights*X)%*%(w.curr)
loss1<-abs(t(Xprimew)%*%XprimeX.inv%*%Xprimew)
loss1
}
##Does not work with ATT. Need to fix this at some point.
gmm.gradient<-function(beta.curr, invV, ATT.gmm=ATT,sample.weights0=sample.weights)
{
sample.weights<-sample.weights0
n.c<-sum(sample.weights[treat==0])
n.t<-sum(sample.weights[treat==1])
n<-n.c+n.t
ATT<-ATT.gmm
theta.curr<-as.vector(X%*%beta.curr)
probs.curr<-(1+exp(-theta.curr))^-1
probs.curr<-pmin(1-probs.min,probs.curr)
probs.curr<-pmax(probs.min,probs.curr)
##Generate the vector of mean imbalance by weights.
if (ATT){
w.curr<-ATT.wt.func(beta.curr)
}
else{
w.curr<-(probs.curr-1+treat)^-1
}
w.curr.del<-1/n*t(X*sample.weights)%*%(w.curr)
w.curr.del<-as.vector(w.curr.del)
w.curr<-as.vector(w.curr)
##Generate g-bar, as in the paper.
gbar<-c(1/n*t(X*sample.weights)%*%(treat-probs.curr),w.curr.del)
##Calculate derivative of g-bar
if (ATT){
# Need to update here
dw<- -n/n.t*probs.curr/(1 - probs.curr)
dw[treat==1]<-0
dgbar<-cbind(1/n*t(-X*sample.weights*probs.curr*(1-probs.curr))%*%X,
1/n.t*t(X*dw*sample.weights)%*%X)
}
else{
dgbar<-cbind(-1/n*t(X*sample.weights*probs.curr*(1-probs.curr))%*%X,
-1/n*t(X*sample.weights*(treat - probs.curr)^2/(probs.curr*(1-probs.curr)))%*%X)
}
out<-2*dgbar%*%invV%*%gbar
}
bal.gradient<-function(beta.curr,sample.weights0=sample.weights)
{
##Generate theta and probabilities.
sample.weights<-sample.weights0
theta.curr<-as.vector(X%*%beta.curr)
probs.curr<-(1+exp(-theta.curr))^-1
probs.curr<-pmin(1-probs.min,probs.curr)
probs.curr<-pmax(probs.min,probs.curr)
##Generate weights.
if(ATT) w.curr<-1/n*ATT.wt.func(beta.curr)
else w.curr<-1/n*(probs.curr-1+treat)^-1
if (ATT){
dw2<- -n/n.t*probs.curr/(1 - probs.curr)
dw2[treat==1]<-0
dw<-1/n*t(X*dw2)
}
else{
dw<-1/n*t(-X*(treat-probs.curr)^2/(probs.curr*(1-probs.curr)))
}
##Generate mean imbalance.
Xprimew <- t(X)%*%(w.curr*sample.weights)
loss1<-t(Xprimew)%*%XprimeX.inv%*%Xprimew
out<-sapply(2*dw%*%X%*%XprimeX.inv%*%Xprimew, function (x) ifelse((x > 0 & loss1 > 0) | (x < 0 & loss1 < 0), abs(x), -abs(x)))
out
}
n<-length(treat)
n.t<-sum(treat==1)
##GLM estimation
glm1<-suppressWarnings(glm(treat~X-1,family=binomial))
glm1$coef[is.na(glm1$coef)]<-0
probs.glm<-glm1$fit
glm1$fit<-probs.glm<-pmin(1-probs.min,probs.glm)
glm1$fit<-probs.glm<-pmax(probs.min,probs.glm)
beta.curr<-glm1$coef
beta.curr[is.na(beta.curr)]<-0
alpha.func<-function(alpha) gmm.loss(beta.curr*alpha)
beta.curr<-beta.curr*optimize(alpha.func,interval=c(.8,1.1))$min
##Generate estimates for balance and CBPSE
gmm.init<-beta.curr
this.invV<-gmm.func(gmm.init)$invV
if (twostep)
{
opt.bal<-optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="BFGS", gr = bal.gradient, hessian=TRUE)
}
else
{
opt.bal<-optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
}
beta.bal<-opt.bal$par
if(bal.only) opt1<-opt.bal
if(!bal.only)
{
if (twostep)
{
gmm.glm.init<-optim(gmm.init, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE, gr = gmm.gradient, invV = this.invV)
gmm.bal.init<-optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE, gr = gmm.gradient, invV = this.invV)
}
else
{
gmm.glm.init<-optim(gmm.init, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
gmm.bal.init<-optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
}
if(gmm.glm.init$val<gmm.bal.init$val) opt1<-gmm.glm.init else opt1<-gmm.bal.init
}
##Generate probabilities
beta.opt<-opt1$par
theta.opt<-as.vector(X%*%beta.opt)
probs.opt<-(1+exp(-theta.opt))^-1
probs.opt<-pmin(1-probs.min,probs.opt)
probs.opt<-pmax(probs.min,probs.opt)
##Generate weights
if(ATT){
w.opt<-abs(ATT.wt.func(beta.opt))
}else{
w.opt<-abs((probs.opt-1+treat)^-1)
}
if (standardize)
{
if (ATT)
{
norm1<-sum(treat*sample.weights*n/n.t)
norm2<-sum((1-treat)*sample.weights*n/n.t*(treat-probs.opt)/(1-probs.opt))
}
else
{
norm1<-sum(treat*sample.weights/probs.opt)
norm2<-sum((1-treat)*sample.weights/(1-probs.opt))
}
}
else {
norm1 <- 1
norm2 <- 1
}
if (ATT)
{
w.opt<-(treat == 1)*n/sum(treat == 1)/norm1 + abs((treat == 0)*n/sum(treat == 1)*((treat - probs.opt)/(1-probs.opt))/norm2)
}
else
{
w.opt<-(treat == 1)/probs.opt/norm1 + (treat == 0)/(1-probs.opt)/norm2
}
w.opt<-w.opt*sample.weights
J.opt<-ifelse(twostep, gmm.func(beta.opt, invV = this.invV)$loss, gmm.loss(beta.opt))
residuals<-treat-probs.opt
deviance <- -2*c(sum(treat*sample.weights*log(probs.opt)+(1-treat)*sample.weights*log(1-probs.opt)))
nulldeviance <- -2*c(sum(treat*sample.weights*log(mean(treat))+(1-treat)*sample.weights*log(1-mean(treat))))
XG.1<- -X*probs.opt*(1-probs.opt)*sample.weights
XW.1<- X*(treat-probs.opt)*sample.weights^.5
if(ATT){
XW.2<-X*ATT.wt.func(beta.opt)*sample.weights
dw2<- -n/n.t*probs.opt/(1 - probs.opt)
dw2[treat==1]<-0
XG.2 <- X*dw2*sample.weights
}
else{
XW.2 <- X*(probs.opt-1+treat)^-1*sample.weights^.5
XG.2 <- -X*(treat - probs.opt)^2/(probs.opt*(1-probs.opt))*sample.weights
}
if (bal.only){
G<-cbind(t(XG.2)%*%X)/n
W1<-rbind(t(XW.2))
W<-XprimeX.inv
}
else{
G<-cbind(t(XG.1)%*%X,t(XG.2)%*%X)/n
W1<-rbind(t(XW.1),t(XW.2))
if (twostep){
W <- this.invV
}
else{
W <- gmm.func(beta.opt)$invV
}
}
Omega<-(W1%*%t(W1)/n)
GWGinvGW <- ginv(G%*%W%*%t(G))%*%G%*%W
vcov<- GWGinvGW%*%Omega%*%t(GWGinvGW)
output<-list("coefficients"=matrix(beta.opt, ncol=1),"fitted.values"=probs.opt, "linear.predictor" = X%*%beta.opt,
"deviance"=deviance,"weights"=w.opt,
"y"=treat,"x"=X,"converged"=opt1$conv,"J"=J.opt,"var"=vcov,
"mle.J"=ifelse(twostep, gmm.func(glm1$coef, invV = this.invV)$loss, gmm.loss(glm1$coef)))
class(output)<- c("CBPS","glm","lm")
output
}
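## Sanity-check sketch (not package code) for the binary ATE weights used in
## CBPS.2Treat above: for a propensity score p and treatment t in {0, 1},
## 1/(p - 1 + t) equals 1/p for treated units and -1/(1 - p) for controls, so
## taking its absolute value recovers the usual inverse-probability weights.
# p <- c(0.2, 0.8); t <- c(1, 0)
# abs((p - 1 + t)^-1)      # 1/0.2 = 5 and 1/(1 - 0.8) = 5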
|
/scratch/gouwar.j/cran-all/cranData/CBPS/R/CBPSBinary.R
|
CBPS.Continuous<-function(treat, X, method, k, XprimeX.inv, bal.only, iterations, standardize, twostep, sample.weights, ...)
{
probs.min<-1e-6
sample.weights<-sample.weights/mean(sample.weights)
XprimeX.inv<-ginv(t(sample.weights^.5*X)%*%(sample.weights^.5*X))
##The gmm objective function--given a guess of beta, constructs the GMM J statistic.
gmm.func<-function(params.curr,invV=NULL){
##Generate probabilities.
##Trim probabilities, and generate weights.
beta.curr<-params.curr[-length(params.curr)]
sigmasq<-exp(params.curr[length(params.curr)])
probs.curr<-dnorm(Ttilde, mean = Xtilde%*%beta.curr, sd = sqrt(sigmasq), log = TRUE)
probs.curr<-pmin(log(1-probs.min),probs.curr)
probs.curr<-pmax(log(probs.min),probs.curr)
w.curr<-Ttilde*exp(stabilizers - probs.curr)
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/n*t(wtXilde)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
##Generate g-bar, as in the paper.
gbar<-c(1/n*t(wtXilde)%*%(Ttilde-Xtilde%*%beta.curr)/sigmasq,
w.curr.del,
1/n*t(sample.weights)%*%((Ttilde - Xtilde%*%beta.curr)^2/sigmasq - 1))
##Generate the covariance matrix used in the GMM estimate.
if (is.null(invV))
{
Xtilde.1.1<-1/sigmasq*t(wtXilde)%*%(Xtilde)
Xtilde.1.2<-t(wtXilde)%*%(Xtilde)/sigmasq
Xtilde.1.3<-t(wtXilde)%*%n.identity.vec*0
Xtilde.2.2<-t(wtXilde)%*%sweep(Xtilde,MARGIN=1,as.vector(exp((Xtilde%*%beta.curr)^2/sigmasq + log(sigmasq + (Xtilde%*%beta.curr)^2))),'*')
Xtilde.2.3<-t(wtXilde)%*%(-Xtilde%*%beta.curr)*-2/sigmasq
Xtilde.3.3<-t(sample.weights)%*%n.identity.vec*2
V<-rbind(1/n*cbind(Xtilde.1.1,Xtilde.1.2,Xtilde.1.3),
1/n*cbind(Xtilde.1.2,Xtilde.2.2,Xtilde.2.3),
1/n*cbind(t(Xtilde.1.3),t(Xtilde.2.3),Xtilde.3.3))
if(max(is.infinite(V))) stop('Encountered an infinite value in the weighting matrix. Use the just-identified version of CBPS instead by setting method = "exact".')
invV<-ginv(V)
}
##Calculate the GMM loss.
loss1<-t(gbar)%*%invV%*%(gbar)
out1<-list("loss"=loss1, "invV"=invV)
out1
}
gmm.loss<-function(x,...) gmm.func(x,...)$loss
##Loss function for balance constraints, returns the squared imbalance along each dimension.
bal.func<-function(params.curr){
beta.curr<-params.curr[-length(params.curr)]
sigmasq<-exp(params.curr[length(params.curr)])
probs.curr<-dnorm(Ttilde, mean = Xtilde%*%beta.curr, sd = sqrt(sigmasq), log = TRUE)
probs.curr<-pmin(log(1-probs.min),probs.curr)
probs.curr<-pmax(log(probs.min),probs.curr)
w.curr<-Ttilde*exp(stabilizers - probs.curr)
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/n*t(wtXilde)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
gbar <- c(w.curr.del,
1/n*t(sample.weights)%*%((Ttilde - Xtilde%*%beta.curr)^2/sigmasq - 1))
##Generate mean imbalance.
loss1<-t(gbar)%*%diag(k+1)%*%(gbar)
out1<-list("loss"=loss1)
out1
}
bal.loss<-function(x,...) bal.func(x,...)$loss
gmm.gradient<-function(params.curr, invV)
{
##Generate probabilities.
##Trim probabilities, and generate weights.
beta.curr<-params.curr[-length(params.curr)]
sigmasq<-exp(params.curr[length(params.curr)])
probs.curr<-dnorm(Ttilde, mean = Xtilde%*%beta.curr, sd = sqrt(sigmasq), log = TRUE)
probs.curr<-pmin(log(1-probs.min),probs.curr)
probs.curr<-pmax(log(probs.min),probs.curr)
w.curr<-Ttilde*exp(stabilizers - probs.curr)
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/n*t(wtXilde)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
##Generate g-bar, as in the paper.
gbar<-c(1/n*t(wtXilde)%*%(Ttilde-Xtilde%*%beta.curr)/sigmasq,
w.curr.del,
1/n*t(sample.weights)%*%((Ttilde - Xtilde%*%beta.curr)^2/sigmasq - 1))
dgbar.1.1<-t(-wtXilde)%*%Xtilde/sigmasq
dgbar.1.2<-matrix(-sample.weights*(Ttilde - Xtilde%*%beta.curr)/(sigmasq^2), nrow = 1)%*%Xtilde
dgbar.2.1<-sweep(t(wtXilde), MARGIN=2, -(Ttilde-Xtilde%*%beta.curr)/sigmasq*w.curr,'*')%*%Xtilde
dgbar.2.2<-matrix(w.curr*(1/(2*sigmasq) - (Ttilde - Xtilde%*%beta.curr)^2/(2*sigmasq^2)), nrow = 1)%*%Xtilde
dgbar.3.1<-t(wtXilde)%*%matrix(-2*(Ttilde - Xtilde%*%beta.curr)/sigmasq, ncol = 1)
dgbar.3.2<-t(sample.weights)%*%(-(Ttilde - Xtilde%*%beta.curr)^2/(sigmasq^2))
dgbar<-1/n*cbind(rbind(dgbar.1.1, dgbar.1.2*sigmasq),
rbind(dgbar.2.1, dgbar.2.2*sigmasq),
rbind(dgbar.3.1, dgbar.3.2*sigmasq))
out<-2*dgbar%*%invV%*%gbar
out
}
bal.gradient<-function(params.curr, invV=NULL)
{
##Generate probabilities.
##Trim probabilities, and generate weights.
beta.curr<-params.curr[-length(params.curr)]
sigmasq<-exp(params.curr[length(params.curr)])
probs.curr<-dnorm(Ttilde, mean = Xtilde%*%beta.curr, sd = sqrt(sigmasq), log = TRUE)
probs.curr<-pmin(log(1-probs.min),probs.curr)
probs.curr<-pmax(log(probs.min),probs.curr)
w.curr<-Ttilde*exp(stabilizers - probs.curr)
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/n*t(sample.weights*Xtilde)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
##Generate g-bar, as in the paper.
gbar<-c(w.curr.del,
1/n*t(sample.weights)%*%((Ttilde - Xtilde%*%beta.curr)^2/sigmasq - 1))
dgbar.2.1<-sweep(t(wtXilde), MARGIN=2, -(Ttilde-Xtilde%*%beta.curr)/sigmasq*w.curr,'*')%*%Xtilde
dgbar.2.2<-matrix(w.curr*(1/(2*sigmasq) - (Ttilde - Xtilde%*%beta.curr)^2/(2*sigmasq^2)), nrow = 1)%*%Xtilde
dgbar.3.1<-t(wtXilde)%*%matrix(-2*(Ttilde - Xtilde%*%beta.curr)/sigmasq, ncol = 1)
dgbar.3.2<-t(sample.weights)%*%(-(Ttilde - Xtilde%*%beta.curr)^2/(sigmasq^2))
dgbar<-1/n*cbind(rbind(dgbar.2.1, dgbar.2.2*sigmasq),
rbind(dgbar.3.1, dgbar.3.2*sigmasq))
out<-2*dgbar%*%diag(k+1)%*%gbar
out
}
n<-length(treat)
x.orig<-x<-cbind(as.matrix(X))
int.ind <- which(apply(X, 2, sd) <= 10^-10)
Xtilde<-cbind(X[,int.ind], scale(sample.weights*X[,-int.ind]%*%solve(chol(var(sample.weights*X[,-int.ind]))),
center = TRUE, scale = FALSE))
wtXilde <- sample.weights*Xtilde
XtildeprimeXtilde.inv<-ginv(t(sample.weights*Xtilde)%*%Xtilde)
Ttilde<-scale(sample.weights*treat)
n.identity.vec<-matrix(1,nrow=n,ncol=1)
##Run linear regression
lm1<-lm(Ttilde ~ -1 + Xtilde, weights = sample.weights)
mcoef<-coef(lm1)
mcoef[is.na(mcoef)]<-0
sigmasq<-mean((Ttilde - Xtilde%*%mcoef)^2)
Ttilde.hat<-apply(Xtilde, 1, function(x) x%*%mcoef)
stabilizers<-log(sapply(Ttilde, function(t) mean(pmin(pmax(dnorm(t, mean = 0, sd = 1), probs.min), 1-probs.min))))
probs.mle<-dnorm(Ttilde, mean = Xtilde%*%mcoef, sd = sqrt(sigmasq), log = TRUE)
probs.mle<-pmin(log(1-probs.min),probs.mle)
probs.mle<-pmax(log(probs.min),probs.mle)
params.curr<-c(mcoef, log(sigmasq))
mle.J <- NA
try(mle.J<-gmm.loss(params.curr))
mle.bal<-bal.loss(params.curr)
try({glm.invV<-gmm.func(params.curr)$invV
alpha.func<-function(alpha) gmm.loss(params.curr*alpha)
params.curr<-params.curr*optimize(alpha.func,interval=c(.8,1.1))$min
})
##Generate estimates for balance and CBPS
gmm.init<-params.curr
if (twostep)
{
opt.bal<-optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="BFGS", gr = bal.gradient,
hessian = TRUE)
}
else
{
opt.bal<-tryCatch({optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)},
error = function(err) {return(optim(gmm.init, bal.loss, control=list("maxit"=iterations),
method="Nelder-Mead", hessian=TRUE))})
}
params.bal<-opt.bal$par
if(bal.only) opt1<-opt.bal
pick.glm<-0
if(!bal.only)
{
if (twostep)
{
gmm.glm.init<-optim(gmm.init, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE, invV = glm.invV, gr = gmm.gradient)
gmm.bal.init<-optim(params.bal, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE, invV = glm.invV, gr = gmm.gradient)
}
else
{
gmm.glm.init<-tryCatch({optim(params.curr, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)},
error = function(err) {return(optim(params.curr, gmm.loss, control=list("maxit"=iterations), method="Nelder-Mead", hessian=TRUE))})
gmm.bal.init<-tryCatch({optim(params.bal, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)},
error = function(err) {return(optim(params.bal, gmm.loss, control=list("maxit"=iterations), method="Nelder-Mead", hessian=TRUE))})
}
if(gmm.glm.init$val<gmm.bal.init$val) opt1<-gmm.glm.init else opt1<-gmm.bal.init
pick.glm<-ifelse(gmm.glm.init$val<gmm.bal.init$val,1,0)
}
##Generate probabilities
params.opt<-opt1$par
beta.opt<-params.opt[-length(params.opt)]
sigmasq<-exp(params.opt[length(params.opt)])
probs.opt<-dnorm(Ttilde, mean = Xtilde%*%beta.opt, sd = sqrt(sigmasq), log = TRUE)
probs.opt<-pmin(log(1-probs.min),probs.opt)
probs.opt<-pmax(log(probs.min),probs.opt)
if (!bal.only){
J.opt<-ifelse(twostep, gmm.func(params.opt, invV = glm.invV)$loss, gmm.func(params.opt)$loss)
if ((J.opt > mle.J) & (bal.loss(params.opt) > mle.bal))
{
beta.opt<-mcoef
probs.opt<-probs.mle
J.opt <-mle.J
warning("Optimization failed. Results returned are for MLE.")
}
}
else{
J.opt<-bal.loss(params.opt)
}
##Generate weights
w.opt<-exp(stabilizers - probs.opt)
if(standardize) w.opt<-w.opt/sum(w.opt*sample.weights)
#How are residuals now defined?
residuals<- Ttilde - Xtilde%*%beta.opt
deviance <- -2*sum(probs.opt)
XG.1.1<-t(-wtXilde)%*%Xtilde/sigmasq
XG.2.1<-t(wtXilde)%*%matrix(-2*(Ttilde - Xtilde%*%beta.opt)/sigmasq, ncol = 1)
XG.3.1<-sweep(t(wtXilde), MARGIN=2, -(Ttilde-Xtilde%*%beta.opt)/sigmasq*Ttilde*w.opt,'*')%*%Xtilde
XG.1.2<-t(-wtXilde)%*%(Ttilde - Xtilde%*%beta.opt)/(sigmasq^2)
XG.2.2<-t(sample.weights)%*%(-(Ttilde - Xtilde%*%beta.opt)^2/(sigmasq^2))
XG.3.2<-matrix(-Ttilde*sample.weights*w.opt*((Ttilde - Xtilde%*%beta.opt)^2/(2*sigmasq^2) - 1/(2*sigmasq)), nrow = 1)%*%Xtilde
XW.1<-Xtilde*as.vector((Ttilde-Xtilde%*%beta.opt)/sigmasq*sample.weights^.5)
XW.2<-as.vector((Ttilde - Xtilde%*%beta.opt)^2/sigmasq - 1)*sample.weights^.5
XW.3<-Xtilde*as.vector(Ttilde*w.opt*sample.weights)
if (bal.only){
W <- diag(k+1)
G<-1/n*rbind(cbind(XG.3.1, t(XG.3.2)),
cbind(t(XG.2.1), XG.2.2))
W1<-rbind(t(XW.3), t(XW.2))
}
else{
W<-gmm.func(params.opt)$invV
G<-1/n*rbind(cbind(XG.1.1,XG.1.2),
cbind(XG.3.1, t(XG.3.2)),
cbind(t(XG.2.1), XG.2.2))
W1<-rbind(t(XW.1),t(XW.3),t(XW.2))
}
Omega<-1/n*(W1%*%t(W1))
GWGinvGW <- W%*%G%*%ginv(t(G)%*%W%*%G)
vcov.tilde<-(t(W%*%G%*%ginv(t(G)%*%W%*%G))%*%Omega%*%GWGinvGW)[1:k,1:k]
# Reverse the centering and Choleski decomposition from using Ttilde and Xtilde
beta.tilde<-beta.opt
beta.opt<-ginv(t(X)%*%X)%*%t(X)%*%(Xtilde%*%beta.tilde*sd(sample.weights*treat) + mean(sample.weights*treat))
sigmasq.tilde<-sigmasq
sigmasq<-sigmasq.tilde*var(sample.weights*treat)
vcov <- ginv(t(X)%*%X)%*%t(X)%*%(Xtilde%*%vcov.tilde%*%t(Xtilde)*var(sample.weights*treat))%*%X%*%ginv(t(X)%*%X)
class(beta.opt)<-"coef"
output<-list("coefficients"=beta.opt, "sigmasq"=sigmasq,
"fitted.values"=pmin(pmax(dnorm(Ttilde,Xtilde%*%beta.tilde,sigmasq.tilde),probs.min),1-probs.min),
"linear.predictor" = Xtilde%*%beta.tilde, "deviance"=deviance,
"weights"=w.opt*sample.weights,"y"=treat,"x"=X, "Ttilde" = Ttilde,
"Xtilde"=Xtilde, "beta.tilde" = beta.tilde, "sigmasq.tilde" = sigmasq.tilde,
"converged"=opt1$conv,"J"=J.opt,"var"=vcov, "mle.J"=mle.J)
class(output)<- c("CBPSContinuous","CBPS")
output
}
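## Sketch of the stabilized continuous-treatment weight used in CBPS.Continuous
## above: the weight is f(T)/f(T|X), assembled on the log scale as
## exp(stabilizers - probs), with a standard-normal marginal for the centered
## and scaled treatment. The numbers below are illustrative only.
# t.i <- 0.5; mu.i <- 0.2; sigmasq.i <- 1
# log.marginal    <- dnorm(t.i, mean = 0, sd = 1, log = TRUE)                  # stabilizer
# log.conditional <- dnorm(t.i, mean = mu.i, sd = sqrt(sigmasq.i), log = TRUE)
# exp(log.marginal - log.conditional)                                          # the weight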
####
|
/scratch/gouwar.j/cran-all/cranData/CBPS/R/CBPSContinuous.r
|
#' @title Covariate Balancing Propensity Score (CBPS) Estimation
#'
#' @description
#' \code{CBPS} estimates propensity scores such that both covariate balance and
#' prediction of treatment assignment are maximized. The method, therefore,
#' avoids an iterative process between model fitting and balance checking and
#' implements both simultaneously. For cross-sectional data, the method can
#' take continuous treatments and treatments with a control (baseline)
#' condition and either 1, 2, or 3 distinct treatment conditions.
#'
#' Fits covariate balancing propensity scores.
#'
#' ### @aliases CBPS CBPS.fit print.CBPS
#'
#' @importFrom MASS mvrnorm ginv
#' @importFrom nnet multinom
#' @importFrom numDeriv jacobian
#' @importFrom MatchIt matchit
#' @importFrom glmnet cv.glmnet
#' @importFrom graphics abline axis layout mtext par plot points
#' @importFrom stats .getXlevels as.formula binomial coef cor dnorm glm is.empty.model lm model.frame
#' @importFrom stats model.matrix model.response naprint optim optimize pnorm predict sd symnum var terms
#' @importFrom utils packageDescription
#'
#' @param formula An object of class \code{formula} (or one that can be coerced
#' to that class): a symbolic description of the model to be fitted.
#' @param data An optional data frame, list or environment (or object coercible
#' by as.data.frame to a data frame) containing the variables in the model. If
#' not found in data, the variables are taken from \code{environment(formula)},
#' typically the environment from which \code{CBPS} is called.
#' @param na.action A function which indicates what should happen when the data
#' contain NAs. The default is set by the na.action setting of options, and is
#' na.fail if that is unset.
#' @param ATT Default is 1, which finds the average treatment effect on the
#' treated interpreting the second level of the treatment factor as the
#' treatment. Set to 2 to find the ATT interpreting the first level of the
#' treatment factor as the treatment. Set to 0 to find the average treatment
#' effect. For non-binary treatments, only the ATE is available.
#' @param iterations An optional parameter for the maximum number of iterations
#' for the optimization. Default is 1000.
#' @param standardize Default is \code{TRUE}, which normalizes weights to sum
#' to 1 within each treatment group. For continuous treatments, normalizes
#' weights to sum up to 1 for the entire sample. Set to \code{FALSE} to return
#' Horvitz-Thompson weights.
#' @param method Choose "over" to fit an over-identified model that combines
#' the propensity score and covariate balancing conditions; choose "exact" to
#' fit a model that only contains the covariate balancing conditions.
#' @param twostep Default is \code{TRUE} for a two-step estimator, which will
#' run substantially faster than continuous-updating. Set to \code{FALSE} to
#' use the continuous-updating estimator described by Imai and Ratkovic (2014).
#' @param sample.weights Survey sampling weights for the observations, if
#' applicable. When left NULL, defaults to a sampling weight of 1 for each
#' observation.
#' @param baseline.formula Used only to fit iCBPS (see Fan et al). Currently
#' only works with binary treatments. A formula specifying the balancing
#' covariates in the baseline outcome model, i.e., E(Y(0)|X).
#' @param diff.formula Used only to fit iCBPS (see Fan et al). Currently only
#' works with binary treatments. A formula specifying the balancing covariates
#' in the difference between the treatment and baseline outcome model, i.e.,
#' E(Y(1)-Y(0)|X).
#' @param ... Other parameters to be passed through to \code{optim()}.
#'
#' @return \item{fitted.values}{The fitted propensity score}
#' \item{linear.predictor}{X * beta}
#'
#' \item{deviance}{Minus twice the log-likelihood of the CBPS fit}
#' \item{weights}{The optimal weights. Let \eqn{\pi_i = f(T_i | X_i)}{\pi_i =
#' f(T_i | X_i)}. For binary ATE, these are given by \eqn{\frac{T_i}{\pi_i} +
#' \frac{(1 - T_i)}{(1 - \pi_i)}}{T_i/\pi_i + (1 - T_i)/(1 - \pi_i)}. For
#' binary ATT, these are given by \eqn{\frac{n}{n_t} * \frac{T_i - \pi_i}{1 -
#' \pi_i}}{n/n_t * (T_i - \pi_i)/(1 - \pi_i)}. For multi-valued treatments,
#' these are given by \eqn{\sum_{j=0}^{J-1} T_{i,j} /
#' \pi_{i,j}}{\sum_{j=0}^{J-1} T_i,j / \pi_i,j}. For continuous treatments,
#' these are given by \eqn{\frac{f(T_i)}{f(T_i | X_i)}}{f(T_i) / f(T_i | X_i)
#' }. These expressions for weights are all before standardization (i.e. with
#' standardize=\code{FALSE}). Standardization will make weights sum to 1
#' within each treatment group. For continuous treatment, standardization will
#' make all weights sum to 1. If sampling weights are used, the weight for
#' each observation is multiplied by the survey sampling weight.} \item{y}{The
#' treatment vector used} \item{x}{The covariate matrix} \item{model}{The model
#' frame} \item{converged}{Convergence value. Returned from the call to
#' \code{optim()}.} \item{call}{The matched call} \item{formula}{The formula
#' supplied} \item{data}{The data argument} \item{coefficients}{A named vector
#' of coefficients} \item{sigmasq}{The sigma-squared value, for continuous
#' treatments only} \item{J}{The J-statistic at convergence} \item{mle.J}{The
#' J-statistic for the parameters from maximum likelihood estimation}
#' \item{var}{The covariance matrix for the coefficients.} \item{Ttilde}{For
#' internal use only.} \item{Xtilde}{For internal use only.}
#' \item{beta.tilde}{For internal use only.} \item{sigmasq.tilde}{For internal
#' use only.}
#' @author Christian Fong, Marc Ratkovic, Kosuke Imai, and Xiaolin Yang; The
#' CBPS function is based on the code for version 2.15.0 of the glm function
#' implemented in the stats package, originally written by Simon Davies. This
#' documentation is likewise modeled on the documentation for glm and borrows
#' its language where the arguments and values are the same.
#' @seealso \link{summary.CBPS}
#' @references Imai, Kosuke and Marc Ratkovic. 2014. ``Covariate Balancing
#' Propensity Score.'' Journal of the Royal Statistical Society, Series B
#' (Statistical Methodology).
#' \url{http://imai.princeton.edu/research/CBPS.html} \cr Fong, Christian, Chad
#' Hazlett, and Kosuke Imai. 2018. ``Covariate Balancing Propensity Score
#' for a Continuous Treatment.'' The Annals of Applied Statistics.
#' \url{http://imai.princeton.edu/research/files/CBGPS.pdf} \cr
#' Fan, Jianqing and Imai, Kosuke and Liu, Han and Ning, Yang and Yang,
#' Xiaolin. ``Improving Covariate Balancing Propensity Score: A Doubly Robust
#' and Efficient Approach.'' Unpublished Manuscript.
#' \url{http://imai.princeton.edu/research/CBPStheory.html}
#' @examples
#'
#' ###
#' ### Example: propensity score matching
#' ###
#'
#' ##Load the LaLonde data
#' data(LaLonde)
#' ## Estimate CBPS
#' fit <- CBPS(treat ~ age + educ + re75 + re74 +
#' I(re75==0) + I(re74==0),
#' data = LaLonde, ATT = TRUE)
#' summary(fit)
#' \dontrun{
#' ## matching via MatchIt: one to one nearest neighbor with replacement
#' library(MatchIt)
#' m.out <- matchit(treat ~ fitted(fit), method = "nearest",
#' data = LaLonde, replace = TRUE)
#'
#' ### Example: propensity score weighting
#' ###
#' ## Simulation from Kang and Shafer (2007).
#' set.seed(123456)
#' n <- 500
#' X <- mvrnorm(n, mu = rep(0, 4), Sigma = diag(4))
#' prop <- 1 / (1 + exp(X[,1] - 0.5 * X[,2] +
#' 0.25*X[,3] + 0.1 * X[,4]))
#' treat <- rbinom(n, 1, prop)
#' y <- 210 + 27.4*X[,1] + 13.7*X[,2] + 13.7*X[,3] + 13.7*X[,4] + rnorm(n)
#'
#' ##Estimate CBPS with a misspecified model
#' X.mis <- cbind(exp(X[,1]/2), X[,2]*(1+exp(X[,1]))^(-1)+10,
#' (X[,1]*X[,3]/25+.6)^3, (X[,2]+X[,4]+20)^2)
#' fit1 <- CBPS(treat ~ X.mis, ATT = 0)
#' summary(fit1)
#'
#' ## Horvitz-Thompson estimate
#' mean(treat*y/fit1$fitted.values)
#' ## Inverse propensity score weighting
#' sum(treat*y/fit1$fitted.values)/sum(treat/fit1$fitted.values)
#'
#' rm(list=c("y","X","prop","treat","n","X.mis","fit1"))
#'
#' ### Example: Continuous Treatment as in Fong, Hazlett,
#' ### and Imai (2018). See
#' ### https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/AIF4PI
#' ### for a real data example.
#' set.seed(123456)
#' n <- 1000
#' X <- mvrnorm(n, mu = rep(0,2), Sigma = diag(2))
#' beta <- rnorm(ncol(X)+1, sd = 1)
#' treat <- cbind(1,X)%*%beta + rnorm(n, sd = 5)
#'
#' treat.effect <- 1
#' effect.beta <- rnorm(ncol(X))
#' y <- rbinom(n, 1, (1 + exp(-treat.effect*treat -
#' X%*%effect.beta))^-1)
#'
#' fit2 <- CBPS(treat ~ X)
#' summary(fit2)
#' summary(glm(y ~ treat + X, weights = fit2$weights,
#' family = "quasibinomial"))
#'
#' rm(list=c("n", "X", "beta", "treat", "treat.effect",
#' "effect.beta", "y", "fit2"))
#'
#' ### Simulation example: Improved CBPS (or iCBPS) from Fan et al
#' set.seed(123456)
#' n <- 500
#' X <- mvrnorm(n, mu = rep(0, 4), Sigma = diag(4))
#' prop <- 1 / (1 + exp(X[,1] - 0.5 * X[,2] + 0.25*X[,3] + 0.1 * X[,4]))
#' treat <- rbinom(n, 1, prop)
#' y1 <- 210 + 27.4*X[,1] + 13.7*X[,2] + 13.7*X[,3] + 13.7*X[,4] + rnorm(n)
#' y0 <- 210 + 13.7*X[,2] + 13.7*X[,3] + 13.7*X[,4] + rnorm(n)
#' ##Estimate iCBPS with a misspecified model
#' X.mis <- cbind(exp(X[,1]/2), X[,2]*(1+exp(X[,1]))^(-1)+10,
#' (X[,1]*X[,3]/25+.6)^3, (X[,2]+X[,4]+20)^2)
#' fit1 <- CBPS(treat ~ X.mis, baseline.formula=~X.mis[,2:4],
#' diff.formula=~X.mis[,1], ATT = FALSE)
#' summary(fit1)
#' }
#'
#' @export CBPS
#'
CBPS <- function(formula, data, na.action, ATT=1, iterations=1000, standardize=TRUE, method="over", twostep=TRUE,
sample.weights=NULL, baseline.formula=NULL, diff.formula=NULL, ...) {
if (missing(data))
data <- environment(formula)
call <- match.call()
family <- binomial()
mf <- match.call(expand.dots = FALSE)
m <- match(c("formula", "data", "na.action"), names(mf), 0L)
mf <- mf[c(1L, m)]
mf$drop.unused.levels <- TRUE
mf[[1L]] <- as.name("model.frame")
mf <- eval(mf, parent.frame())
mt <- attr(mf, "terms")
Y <- model.response(mf, "any")
if (length(dim(Y)) == 1L) {
nm <- rownames(Y)
dim(Y) <- NULL
if (!is.null(nm))
names(Y) <- nm
}
X <- if (!is.empty.model(mt)) model.matrix(mt, mf)#[,-2]
else matrix(NA, NROW(Y), 0L)
X<-cbind(1,X[,apply(X,2,sd)>0])
#Handle sample weights
if(is.null(sample.weights)) sample.weights<-rep(1,nrow(X))
# Parse formulae 2 and 3, if they are necessary
if (xor(is.null(baseline.formula), is.null(diff.formula))){
stop("Either baseline.formula or diff.formula not specified. Both must be specified to use CBPSOptimal. Otherwise, leave both NULL.")
}
if(!is.null(baseline.formula))
{
baselineX<-model.matrix(terms(baseline.formula))
baselineX<-baselineX[,apply(baselineX,2,sd)>0]
diffX<- model.matrix(terms(diff.formula))
diffX<-diffX[,apply(as.matrix(diffX),2,sd)>0]
}
else
{
baselineX <- NULL
diffX <- NULL
}
fit <- eval(call("CBPS.fit", X = X, treat = Y, ATT=ATT,
intercept = attr(mt, "intercept") > 0L, method=method, iterations=iterations,
standardize = standardize, twostep = twostep,
baselineX = baselineX, diffX = diffX,sample.weights=sample.weights))
fit$na.action <- attr(mf, "na.action")
xlevels <- .getXlevels(mt, mf)
fit$data<-data
fit$call <- call
fit$formula <- formula
fit$terms<-mt
fit
}
#' CBPS.fit determines the proper routine (what kind of treatment) and calls the
#' appropriate function. It also pre- and post-processes the data.
#'
#'
#' @param ATT Default is 1, which finds the average treatment effect on the
#' treated interpreting the second level of the treatment factor as the
#' treatment. Set to 2 to find the ATT interpreting the first level of the
#' treatment factor as the treatment. Set to 0 to find the average treatment
#' effect. For non-binary treatments, only the ATE is available.
#' @param iterations An optional parameter for the maximum number of iterations
#' for the optimization. Default is 1000.
#' @param standardize Default is \code{TRUE}, which normalizes weights to sum
#' to 1 within each treatment group. For continuous treatments, normalizes
#' weights to sum up to 1 for the entire sample. Set to \code{FALSE} to return
#' Horvitz-Thompson weights.
#' @param method Choose "over" to fit an over-identified model that combines
#' the propensity score and covariate balancing conditions; choose "exact" to
#' fit a model that only contains the covariate balancing conditions.
#' @param twostep Default is \code{TRUE} for a two-step estimator, which will
#' run substantially faster than continuous-updating. Set to \code{FALSE} to
#' use the continuous-updating estimator described by Imai and Ratkovic (2014).
#' @param treat A vector of treatment assignments. Binary or multi-valued
#' treatments should be factors. Continuous treatments should be numeric.
#' @param X A covariate matrix.
#' @param sample.weights Survey sampling weights for the observations, if
#' applicable. When left NULL, defaults to a sampling weight of 1 for each
#' observation.
#' @param baselineX Similar to \code{baseline.formula}, but in matrix form.
#' @param diffX Similar to \code{diff.formula}, but in matrix form.
#' @param ... Other parameters to be passed through to \code{optim()}.
#'
#' @return CBPS.fit object
#'
#' @export
#'
CBPS.fit<-function(treat, X, baselineX, diffX, ATT, method, iterations, standardize, twostep, sample.weights=sample.weights,...){
# Special clause interprets T = 1 or 0 as a binary treatment, even if it is numeric
if ((levels(factor(treat))[1] %in% c("FALSE","0",0)) & (levels(factor(treat))[2] %in% c("TRUE","1",1))
& (length(levels(factor(treat))) == 2))
{
treat<-factor(treat)
}
# Declare some constants and orthogonalize Xdf.
k=0
if(method=="over") bal.only=FALSE
if(method=="exact") bal.only=TRUE
names.X<-colnames(X)
names.X[apply(X,2,sd)==0]<-"(Intercept)"
# Only preprocess if not doing CBPS Optimal
X.orig<-X
if(is.null(baselineX)){
x.sd<-apply(as.matrix(X[,-1]),2,sd)
Dx.inv<-diag(c(1,x.sd))
x.mean<-apply(as.matrix(X[,-1]),2,mean)
X[,-1]<-apply(as.matrix(X[,-1]),2,FUN=function(x) (x-mean(x))/sd(x))
svd1<-svd(X)
X<-svd1$u
}
k<-qr(X)$rank
if (k < ncol(X)) stop("X is not full rank")
# When you take the svd, this is the identity matrix. Perhaps
# we forgot to work this in somewhere
XprimeX.inv<-ginv(t(sample.weights^.5*X)%*%(sample.weights^.5*X))
# Determine the number of treatments
if (is.factor(treat)) {
no.treats<-length(levels(treat))
if (no.treats > 4) stop("Parametric CBPS is not implemented for more than 4 treatment values. Consider using a continuous value.")
if (no.treats < 2) stop("Treatment must take more than one value")
if (no.treats == 2)
{
if (!is.null(baselineX) && !is.null(diffX))
{
if(ATT==1)
{
message("CBPSOptimal does not support ATT=1 for now. Try ATT=0.")
}
output<-CBPSOptimal.2Treat(treat, X, baselineX, diffX, iterations, ATT=0, standardize = standardize)
}
else
{
output<-CBPS.2Treat(treat, X, method, k, XprimeX.inv, bal.only, iterations, ATT, standardize = standardize, twostep = twostep, sample.weights=sample.weights)
}
}
if (no.treats == 3)
{
output<-CBPS.3Treat(treat, X, method, k, XprimeX.inv, bal.only, iterations, standardize = standardize, twostep = twostep, sample.weights=sample.weights)
}
if (no.treats == 4)
{
output<-CBPS.4Treat(treat, X, method, k, XprimeX.inv, bal.only, iterations, standardize = standardize, twostep = twostep, sample.weights=sample.weights)
}
# Reverse the svd, centering and scaling
if (is.null(baselineX)){
d.inv<- svd1$d
d.inv[d.inv> 1e-5]<-1/d.inv[d.inv> 1e-5]
d.inv[d.inv<= 1e-5]<-0
beta.opt<-svd1$v%*%diag(d.inv)%*%coef(output)
beta.opt[-1,]<-beta.opt[-1,]/x.sd
beta.opt[1,]<-beta.opt[1,]-matrix(x.mean%*%beta.opt[-1,])
}
else{ #added by Xiaolin to deal with cases when baselineX is not null. (12/26/2016)
beta.opt<-as.matrix(coef(output))
}
output$coefficients<-beta.opt
output$x<-X.orig
rownames(output$coefficients)<-names.X
# Calculate the variance
variance<-output$var
if (no.treats == 2){
colnames(output$coefficients)<-c("Treated")
if (is.null(baselineX)){
output$var<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%variance%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
}
else{
output$var<-variance
}
colnames(output$var)<-names.X
rownames(output$var)<-colnames(output$var)
}
if (no.treats == 3){
colnames(output$coefficients)<-levels(as.factor(treat))[c(2,3)]
var.1.1<-variance[1:k,1:k]
var.1.2<-variance[1:k,(k+1):(2*k)]
var.2.1<-variance[(k+1):(2*k),1:k]
var.2.2<-variance[(k+1):(2*k),(k+1):(2*k)]
trans.var.1.1<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.1.1%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.1.2<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.1.2%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.2.1<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.2.1%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.2.2<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.2.2%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
output$var<-rbind(cbind(trans.var.1.1,trans.var.1.2),cbind(trans.var.2.1,trans.var.2.2))
colnames(output$var)<-c(paste0(levels(as.factor(treat))[2],": ", names.X),paste0(levels(as.factor(treat))[3], ": ", names.X))
rownames(output$var)<-colnames(output$var)
}
if (no.treats == 4)
{
colnames(output$coefficients)<-levels(as.factor(treat))[c(2,3,4)]
var.1.1<-variance[1:k,1:k]
var.1.2<-variance[1:k,(k+1):(2*k)]
var.1.3<-variance[1:k,(2*k+1):(3*k)]
var.2.1<-variance[(k+1):(2*k),1:k]
var.2.2<-variance[(k+1):(2*k),(k+1):(2*k)]
var.2.3<-variance[(k+1):(2*k),(2*k+1):(3*k)]
var.3.1<-variance[(2*k+1):(3*k),1:k]
var.3.2<-variance[(2*k+1):(3*k),(k+1):(2*k)]
var.3.3<-variance[(2*k+1):(3*k),(2*k+1):(3*k)]
trans.var.1.1<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.1.1%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.1.2<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.1.2%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.1.3<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.1.3%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.2.1<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.2.1%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.2.2<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.2.2%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.2.3<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.2.3%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.3.1<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.3.1%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.3.2<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.3.2%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
trans.var.3.3<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.3.3%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
output$var<-rbind(cbind(trans.var.1.1,trans.var.1.2,trans.var.1.3),cbind(trans.var.2.1,trans.var.2.2,trans.var.2.3),cbind(trans.var.3.1,trans.var.3.2,trans.var.3.3))
colnames(output$var)<-c(paste0(levels(as.factor(treat))[2],": ", names.X),paste0(levels(as.factor(treat))[3], ": ", names.X),paste0(levels(as.factor(treat))[4], ": ", names.X))
rownames(output$var)<-colnames(output$var)
}
} else if (is.numeric(treat)) {
# Warn if it seems like the user meant to input a categorical treatment
if (length(unique(treat)) <= 4) warning("Treatment vector is numeric. Interpreting as a continuous treatment. To solve for a binary or multi-valued treatment, make treat a factor.")
output<-CBPS.Continuous(treat, X, method, k, XprimeX.inv, bal.only, iterations, standardize = standardize, twostep = twostep, sample.weights=sample.weights)
# Reverse svd, centering, and scaling
d.inv<- svd1$d
d.inv[d.inv> 1e-5]<-1/d.inv[d.inv> 1e-5]
d.inv[d.inv<= 1e-5]<-0
beta.opt<-svd1$v%*%diag(d.inv)%*%coef(output)
beta.opt[-1,]<-beta.opt[-1,]/x.sd
beta.opt[1,]<-beta.opt[1,]-matrix(x.mean%*%beta.opt[-1,])
output$coefficients<-as.matrix(beta.opt)
output$x<-X.orig
rownames(output$coefficients)<-c(names.X)
# Calculate variance
var.1<-output$var
output$var<-Dx.inv%*%ginv(t(X.orig)%*%X.orig)%*%t(X.orig)%*%X%*%svd1$v%*%ginv(diag(svd1$d))%*%var.1%*%ginv(diag(svd1$d))%*%t(svd1$v)%*%t(X)%*%X.orig%*%ginv(t(X.orig)%*%X.orig)%*%Dx.inv
rownames(output$var)<-names.X
colnames(output$var)<-rownames(output$var)
} else {
stop("Treatment must be either a factor or numeric")
}
output$method<-method
output
}
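## Illustrative direct call to CBPS.fit (a sketch; most users should call
## CBPS(), which builds the design matrix from a formula and then dispatches
## here). The covariate choice below is for illustration only.
# data(LaLonde)
# X <- model.matrix(treat ~ age + educ + re74 + re75, data = LaLonde)
# fit <- CBPS.fit(treat = factor(LaLonde$treat), X = X, baselineX = NULL,
#                 diffX = NULL, ATT = 1, method = "over", iterations = 1000,
#                 standardize = TRUE, twostep = TRUE,
#                 sample.weights = rep(1, nrow(X)))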
#' Print coefficients and model fit statistics
#' @param x an object of class \dQuote{CBPS} or \dQuote{npCBPS}, usually, a result of a call to \code{CBPS} or \code{npCBPS}.
#' @param digits the number of digits to keep for the numerical quantities.
#' @param ... Additional arguments to be passed to summary.
#'
#' @export
#'
print.CBPS <- function(x, digits = max(3, getOption("digits") - 3), ...) {
cat("\nCall: ", paste(deparse(x$call), sep = "\n", collapse = "\n"),
"\n\n", sep = "")
if (length(coef(x))) {
cat("Coefficients:\n")
print.default(format(x$coefficients, digits = digits),
print.gap = 2, quote = FALSE)
}
else cat("No coefficients\n\n")
  if (max(class(x) == "CBPSContinuous"))
cat("\nSigma-Squared: ",x$sigmasq)
if (nzchar(mess <- naprint(x$na.action)))
cat(" (", mess, ")\n", sep = "")
cat("Residual Deviance:\t", format(signif(x$deviance,
digits)), "\n")
cat("J-Statistic:\t ", format(signif(x$J)),"\n")
cat("Log-Likelihood:\t ",-0.5*x$deviance, "\n")
invisible(x)
}
# Expands on print by including uncertainty for coefficient estimates
#' Summarizing Covariate Balancing Propensity Score Estimation
#'
#' Prints a summary of a fitted CBPS object.
#'
#' Prints a summary of a CBPS object, in a format similar to glm. The
#' variance matrix is calculated from the numerical Hessian at convergence of
#' CBPS.
#'
#' @param object an object of class \dQuote{CBPS}, usually, a result of a call
#' to CBPS.
#' @param ... Additional arguments to be passed to summary.
#' @return \item{call}{The matched call.} \item{deviance.residuals}{The five
#' number summary and the mean of the deviance residuals.}
#' \item{coefficients}{A table including the estimate for each coefficient
#' and the standard error, z-value, and two-sided p-value for these estimates.}
#' \item{J}{Hansen's J-Statistic for the fitted model.}
#' \item{Log-Likelihood}{The log-likelihood of the fitted model.}
#' @author Christian Fong, Marc Ratkovic, and Kosuke Imai.
#' @seealso \link{CBPS}, \link{summary}
#'
#' @export
#'
summary.CBPS<-function(object, ...){
##x <- summary.glm(object, dispersion = dispersion, correlation = correlation, symbolic.cor = symbolic.cor, ...)
x<-NULL
names.X<-as.vector(names(object$coefficients))
sd.coef <- diag(object$var)^.5
coef.table<-(cbind(as.vector(object$coefficients),as.vector(sd.coef),as.vector(object$coefficients/sd.coef),as.vector(2-2*pnorm(abs(object$coefficients/sd.coef)))))
colnames(coef.table)<-c("Estimate", "Std. Error", "z value", "Pr(>|z|)")
if (ncol(coef(object)) == 1)
{
rownames(coef.table)<-rownames(object$coefficients)#names.X
}
if (ncol(coef(object)) > 1)
{
rnames<-array()
for (i in 1:ncol(coef(object)))
{
rnames[((i-1)*nrow(coef(object))+1):(i*nrow(coef(object)))]<-paste0(levels(as.factor(object$y))[i],": ",rownames(coef(object)))
}
rownames(coef.table)<-rnames
}
pval <- coef.table[,4]
symp <- symnum(pval, corr=FALSE,
cutpoints = c(0, .001,.01,.05, .1, 1),
symbols = c("***","**","*","."," "))
coef.print<-cbind(signif(coef.table,3),as.vector(symp))
coef.print[coef.print=="0"]<-"0.000"
cat("\nCall: \n", paste(deparse(object$call), sep = "\n", collapse = "\n"),
"\n", sep = "")
cat("\nCoefficients:\n")
print(noquote(coef.print))
cat("---\n")
cat("Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 \n")
#cat("\n Null J: ",object$J)
  if(max(class(object)=="CBPSContinuous")){
cat("\nSigma-Squared: ",object$sigmasq)
}
cat("\nJ - statistic: ",object$J)
cat("\nLog-Likelihood: ",-0.5*object$deviance, "\n")
out<-list("call"=object$call,"coefficients"=coef.table,"J"=object$J)
invisible(out)
}
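## Illustrative reconstruction of the coefficient table printed by summary.CBPS
## (a sketch; `fit` stands in for a fitted binary-treatment CBPS object):
# est <- as.vector(coef(fit)); se <- sqrt(diag(vcov(fit)))
# z <- est / se; p <- 2 * pnorm(-abs(z))
# cbind(Estimate = est, "Std. Error" = se, "z value" = z, "Pr(>|z|)" = p)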
#' Calculate Variance-Covariance Matrix for a Fitted CBPS Object
#'
#' \code{vcov.CBPS} Returns the variance-covariance matrix of the main
#' parameters of a fitted CBPS object.
#'
#' This is the CBPS implementation of the generic function vcov().
#'
#' @param object A fitted CBPS object (the result of a call to \code{CBPS}).
#' @param ... Additional arguments to be passed to vcov.CBPS
#' @return A matrix of the estimated covariances between the parameter
#' estimates in the linear or non-linear predictor of the model.
#' @author Christian Fong, Marc Ratkovic, and Kosuke Imai.
#' @seealso \link{vcov}
#' @references This documentation is modeled on the documentation of the
#' generic \link{vcov}.
#' @examples
#'
#' ###
#' ### Example: Variance-Covariance Matrix
#' ###
#'
#' ##Load the LaLonde data
#' data(LaLonde)
#' ## Estimate CBPS via logistic regression
#' fit <- CBPS(treat ~ age + educ + re75 + re74 + I(re75==0) + I(re74==0),
#' data = LaLonde, ATT = TRUE)
#' ## Get the variance-covariance matrix.
#' vcov(fit)
#'
#'
#' @export
#'
vcov.CBPS<-function(object,...){
return(object$var)
}
# Plot binary and multi-valued CBPS. Plots the standardized difference in means for each contrast
# before and after weighting. Defined for an arbitrary number of discrete treatments.
#' Plotting Covariate Balancing Propensity Score Estimation
#'
#'
#' This function plots the absolute difference in standardized means before and after
#' weighting. To access more sophisticated graphics for assessing covariate balance,
#' consider using Noah Greifer's \code{cobalt} package.
#'
#' The "Before Weighting" plot gives the balance before weighting, and the
#' "After Weighting" plot gives the balance after weighting.
#'
#' ### @aliases plot.CBPS plot.npCBPS
#' @param x an object of class \dQuote{CBPS} or \dQuote{npCBPS}, usually, a
#' result of a call to \code{CBPS} or \code{npCBPS}.
#' @param covars Indices of the covariates to be plotted (excluding the
#' intercept). For example, if only the first two covariates from
#' \code{balance} are desired, set \code{covars} to 1:2. The default is
#' \code{NULL}, which plots all covariates.
#' @param silent If set to \code{FALSE}, returns the imbalances used to
#' construct the plot. Default is \code{TRUE}, which returns nothing.
#' @param boxplot If set to \code{TRUE}, returns a boxplot summarizing the
#' imbalance on the covariates instead of a point for each covariate. Useful
#' if there are many covariates.
#' @param ... Additional arguments to be passed to plot.
#' @return For binary and multi-valued treatments, plots the absolute
#' difference in standardized means by contrast for all covariates before and
#' after weighting. This quantity for a single covariate and a given pair of
#' treatment conditions is given by \eqn{\frac{\sum_{i=1}^{n} w_i * (T_i == 1)
#' * X_i}{\sum_{i=1}^{n} (T_i == 1) * w_i} - \frac{\sum_{i=1}^{n} w_i * (T_i ==
#' 0) * X_i}{\sum_{i=1}^{n} (T_i == 0) * w_i}}{[\sum w_i * (T_i == 1) *
#' X_i]/[\sum w_i * (T_i == 1)] - [\sum w_i * (T_i == 0) * X_i]/[\sum w_i *
#' (T_i == 0)]}. For continuous treatments, plots the weighted absolute
#' Pearson correlation between the treatment and each covariate. See
#' \url{https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#Weighted_correlation_coefficient}.
#' @author Christian Fong, Marc Ratkovic, and Kosuke Imai.
#' @seealso \link{CBPS}, \link{plot}
#'
#' @export
#'
plot.CBPS<-function(x, covars = NULL, silent = TRUE, boxplot = FALSE, ...){
bal.x<-balance(x)
if(is.null(covars))
{
covars<-1:nrow(bal.x[["balanced"]])
}
no.treats<-length(levels(as.factor(x$y)))
balanced.std.mean<-bal.x[["balanced"]][covars,]
original.std.mean<-bal.x[["original"]][covars,]
no.contrasts<-ifelse(no.treats == 2, 1, ifelse(no.treats == 3, 3, 6))
abs.mean.ori.contrasts<-matrix(rep(0,no.contrasts*length(covars)),length(covars),no.contrasts)
abs.mean.bal.contrasts<-matrix(rep(0,no.contrasts*length(covars)),length(covars),no.contrasts)
contrast.names<-array()
true.contrast.names<-array()
contrasts<-c()
covarlist<-c()
ctr<-1
for (i in 1:(no.treats-1))
{
for (j in (i+1):no.treats)
{
abs.mean.ori.contrasts[,ctr]<-abs(original.std.mean[,i+no.treats]-original.std.mean[,j+no.treats])
abs.mean.bal.contrasts[,ctr]<-abs(balanced.std.mean[,i+no.treats]-balanced.std.mean[,j+no.treats])
contrast.names[ctr]<-paste0(i,":",j)
true.contrast.names[ctr]<-paste0(levels(as.factor(x$y))[i],":",levels(as.factor(x$y))[j])
contrasts<-c(contrasts, rep(true.contrast.names[ctr],length(covars)))
covarlist<-c(covarlist, rownames(balanced.std.mean))
ctr<-ctr+1
}
}
max.abs.contrast<-max(max(abs.mean.ori.contrasts),max(abs.mean.bal.contrasts))
m <- matrix(c(1,1,1,2,2,2,3,3,3),nrow = 3,ncol = 3,byrow = TRUE)
layout(mat = m,heights = c(0.4,0.4,0.3))
par(mfrow=c(2,1))
if (!boxplot){
plot(1, type="n", xlim=c(0,max.abs.contrast), ylim=c(1,no.contrasts),xlab="",ylab="",main="",yaxt='n', ...)
axis(side=2, at=seq(1,no.contrasts),contrast.names)
mtext("Absolute Difference of Standardized Means",side=1,line=2.25)
mtext("Contrasts",side=2,line=2)
mtext("Before Weighting",side=3,line=0.5,font=2)
for (i in 1:no.contrasts)
{
for (j in 1:length(covars))
{
points(abs.mean.ori.contrasts[j,i],i, ...)
}
}
plot(1, type="n", xlim=c(0,max.abs.contrast), ylim=c(1,no.contrasts), xlab="", ylab="", main="", yaxt='n', ...)
axis(side=2, at=seq(1,no.contrasts),contrast.names)
mtext("Absolute Difference of Standardized Means",side=1,line=2.25)
mtext("Contrasts",side=2,line=2)
mtext("After Weighting",side=3,line=0.5,font=2)
for (i in 1:no.contrasts)
{
for (j in 1:length(covars))
{
points(abs.mean.bal.contrasts[j,i],i, ...)
}
}
}
else{
boxplot(abs.mean.ori.contrasts, horizontal = TRUE, ylim = c(0,max.abs.contrast), xlim=c(1-0.5,no.contrasts+0.5),
xlab="", ylab="", main="", yaxt='n', ...)
axis(side=2, at=seq(1,no.contrasts),contrast.names)
mtext("Absolute Difference of Standardized Means",side=1,line=2.25)
mtext("Contrasts",side=2,line=2)
mtext("Before Weighting",side=3,line=0.5,font=2)
boxplot(abs.mean.bal.contrasts, horizontal = TRUE, ylim = c(0,max.abs.contrast), xlim=c(1-0.5,no.contrasts+0.5),
xlab="", ylab="", main="", yaxt='n', ...)
axis(side=2, at=seq(1,no.contrasts),contrast.names)
mtext("Absolute Difference of Standardized Means",side=1,line=2.25)
mtext("Contrasts",side=2,line=2)
mtext("After Weighting",side=3,line=0.5,font=2)
}
par(mfrow=c(1,1))
if(is.null(rownames(balanced.std.mean))) rownames(balanced.std.mean)<-paste0("X",covars)
if(!silent) return(data.frame("contrast" = contrasts, "covariate" = covarlist,
"balanced"=abs.mean.bal.contrasts,
"original"=abs.mean.ori.contrasts))
}
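## Sketch of the quantity plotted for a single covariate and a binary treatment
## (illustrative; `fit` stands in for a fitted CBPS object). The weighted group
## means are standardized by the full-sample standard deviation, as in balance().
# w <- fit$weights; tr <- fit$y; x1 <- fit$x[, 2]
# wmean1 <- sum(w * (tr == 1) * x1) / sum(w * (tr == 1))
# wmean0 <- sum(w * (tr == 0) * x1) / sum(w * (tr == 0))
# abs(wmean1 - wmean0) / sd(x1)    # absolute difference in standardized means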
#' Plot the pre- and post-weighting correlations between X and T
#' @param x an object of class \dQuote{CBPS} or \dQuote{npCBPS}, usually, a
#' result of a call to \code{CBPS} or \code{npCBPS}.
#' @param covars Indices of the covariates to be plotted (excluding the intercept). For example,
#' if only the first two covariates from \code{balance} are desired, set \code{covars} to 1:2.
#' The default is \code{NULL}, which plots all covariates.
#' @param silent If set to \code{FALSE}, returns the imbalances used to
#' construct the plot. Default is \code{TRUE}, which returns nothing.
#' @param boxplot If set to \code{TRUE}, returns a boxplot summarizing the
#' imbalance on the covariates instead of a point for each covariate. Useful
#' if there are many covariates.
#' @param ... Additional arguments to be passed to balance.
#'
#' @export
#'
plot.CBPSContinuous<-function(x, covars = NULL, silent = TRUE, boxplot = FALSE, ...){
bal.x<-balance(x)
if (is.null(covars))
{
covars<-1:nrow(bal.x[["balanced"]])
}
balanced.abs.cor<-abs(bal.x[["balanced"]][covars])
original.abs.cor<-abs(bal.x[["unweighted"]][covars])
max.abs.cor<-max(max(original.abs.cor),max(balanced.abs.cor))
if(!boxplot){
plot(1, type="n", xlim=c(0,max.abs.cor), ylim=c(1.5,3.5), xlab = "Absolute Pearson Correlation", ylab = "", yaxt = "n", ...)
axis(side=2, at=seq(2,3),c("CBPS Weighted", "Unweighted"))
points(x=original.abs.cor, y=rep(3, length(covars)), pch=19)
points(x=balanced.abs.cor, y=rep(2, length(covars)), pch=19)
}
else{
boxplot(balanced.abs.cor, original.abs.cor, horizontal = TRUE, yaxt = 'n', xlab = "Absolute Pearson Correlation", ...)
axis(side=2, at=c(1,2),c("CBPS Weighted", "Unweighted"))
}
if(!silent) return(data.frame("covariate"=rownames(bal.x[["balanced"]])[covars],"balanced"=balanced.abs.cor,
"original"=original.abs.cor))
}
#' Optimal Covariate Balance
#'
#' Returns the mean and standardized mean associated with each treatment group,
#' before and after weighting. To access more comprehensive diagnostics for
#' assessing covariate balance, consider using Noah Greifer's \code{cobalt} package.
#'
#' For binary and multi-valued treatments as well as marginal structural
#' models, each matrix has one row per covariate; its columns give the
#' weighted mean and the standardized mean associated with each treatment
#' group. The standardized mean is the weighted mean divided by the standard
#' deviation of the covariate for the whole population. For continuous
#' treatments, returns the absolute Pearson correlation between the treatment
#' and each covariate.
#'
#' ### @aliases balance balance.npCBPS balance.CBPS balance.CBMSM
#' @param object A CBPS, npCBPS, or CBMSM object.
#' @param ... Additional arguments to be passed to balance.
#' @return Returns a list of two matrices, "original" (before weighting) and
#' "balanced" (after weighting).
#' @author Christian Fong, Marc Ratkovic, and Kosuke Imai.
#' @examples
#'
#' ###
#' ### Example: Assess Covariate Balance
#' ###
#' data(LaLonde)
#' ## Estimate CBPS
#' fit <- CBPS(treat ~ age + educ + re75 + re74 +
#' I(re75==0) + I(re74==0),
#' data = LaLonde, ATT = TRUE)
#' balance(fit)
#'
#' @export
#'
balance<-function(object, ...)
{
UseMethod("balance")
}
#' Calculates the pre- and post-weighting difference in standardized means for each covariate within each contrast
#' @param object A CBPS, npCBPS, or CBMSM object.
#' @param ... Additional arguments to be passed to balance.
#'
#' @export
#'
balance.CBPS<-function(object, ...){
treats<-as.factor(object$y)
treat.names<-levels(treats)
X<-object$x
bal<-matrix(rep(0,(ncol(X)-1)*2*length(treat.names)),ncol(X)-1,2*length(treat.names))
baseline<-matrix(rep(0,(ncol(X)-1)*2*length(treat.names)),ncol(X)-1,2*length(treat.names))
w<-object$weights
cnames<-array()
jinit<-ifelse(class(object)[1] == "npCBPS", 1, 2)
for (i in 1:length(treat.names))
{
for (j in jinit:ncol(X))
{
bal[j-1,i]<-sum((treats==treat.names[i])*X[,j]*w)/sum(w*(treats==treat.names[i]))
bal[j-1,i+length(treat.names)]<-bal[j-1,i]/sd(X[,j])
baseline[j-1,i]<-mean(X[which(treats==treat.names[i]),j])
baseline[j-1,i+length(treat.names)]<-baseline[j-1,i]/sd(X[,j])
}
cnames[i]<-paste0(treat.names[i],".mean")
cnames[length(treat.names)+i]<-paste0(treat.names[i],".std.mean")
}
colnames(bal)<-cnames
rownames(bal)<-colnames(X)[-1]
colnames(baseline)<-cnames
rownames(baseline)<-colnames(X)[-1]
out<-list(balanced=bal,original=baseline)
out
}
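## Hedged sketch of balance() with a multi-valued treatment (not executed at
## load time). The three-level factor `arm`, covariates `x1`-`x3`, and data
## frame `dat` are hypothetical; balance() dispatches to the method above for
## binary and multi-valued CBPS fits.
if (FALSE) {
  fit.multi <- CBPS(factor(arm) ~ x1 + x2 + x3, data = dat)
  bal <- balance(fit.multi)
  bal$balanced   # weighted means and standardized means, one pair of columns per level
  bal$original   # the same quantities before weighting
}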
#' Calculates the pre- and post-weighting Pearson correlations between each covariate and the treatment
#' @param object A CBPS, npCBPS, or CBMSM object.
#' @param ... Additional arguments to be passed to balance.
#'
#' @export
#'
balance.CBPSContinuous<-function(object, ...){
treat<-object$y
X<-object$x
w<-object$weights
n<-length(w)
cnames<-array()
if ("npCBPS" %in% class(object)){
jinit<-1
bal<-matrix(rep(0,ncol(X)),ncol(X),1)
baseline<-matrix(rep(0,ncol(X)),ncol(X),1)
for (j in 1:ncol(X))
{
bal[j,1]<-(mean(w*X[,j]*treat) - mean(w*X[,j])*mean(w*treat)*n/sum(w))/(sqrt(mean(w*X[,j]^2) - mean(w*X[,j])^2*n/sum(w))*sqrt(mean(w*treat^2) - mean(w*treat)^2*n/sum(w)))
baseline[j,1]<-cor(treat, X[,j], method = "pearson")
}
rownames(bal)<-colnames(X)
rownames(baseline)<-colnames(X)
}
else{
bal<-matrix(rep(0,(ncol(X)-1)),ncol(X)-1,1)
baseline<-matrix(rep(0,(ncol(X)-1)),ncol(X)-1,1)
for (j in 2:ncol(X))
{
bal[j-1,1]<-(mean(w*X[,j]*treat) - mean(w*X[,j])*mean(w*treat)*n/sum(w))/(sqrt(mean(w*X[,j]^2) - mean(w*X[,j])^2*n/sum(w))*sqrt(mean(w*treat^2) - mean(w*treat)^2*n/sum(w)))
baseline[j-1,1]<-cor(treat, X[,j], method = "pearson")
}
rownames(bal)<-colnames(X)[-1]
rownames(baseline)<-colnames(X)[-1]
}
colnames(bal)<-"Pearson Correlation"
colnames(baseline)<-"Pearson Correlation"
out<-list(balanced=bal,unweighted=baseline)
out
}
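## Hedged sketch for the continuous-treatment balance method (not executed at
## load time). The numeric treatment `dose` and data frame `dat` are
## hypothetical; the returned matrices hold Pearson correlations between the
## treatment and each covariate, after and before weighting.
if (FALSE) {
  fit.cont <- CBPS(dose ~ x1 + x2, data = dat)
  bal <- balance(fit.cont)
  cbind(unweighted = bal$unweighted, weighted = bal$balanced)
}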
#########################################
## ---- End of source file: CBPS/R/CBPSMain.R ----
CBPS.3Treat<-function(treat, X, method, k, XprimeX.inv, bal.only, iterations, standardize, twostep, sample.weights, ...)
{
probs.min<-1e-6
no.treats<-length(levels(as.factor(treat)))
treat.names<-levels(as.factor(treat))
T1<-as.numeric(treat==treat.names[1])
T2<-as.numeric(treat==treat.names[2])
T3<-as.numeric(treat==treat.names[3])
sample.weights<-sample.weights/mean(sample.weights)
wtX <- sample.weights*X
XprimeX.inv<-ginv(t(sample.weights^.5*X)%*%(sample.weights^.5*X))
##The gmm objective function--given a guess of beta, constructs the GMM J statistic.
gmm.func<-function(beta.curr,X.gmm=X,invV=NULL){
##Designate a few objects in the function.
beta.curr<-matrix(beta.curr,k,no.treats-1)
X<-as.matrix(X.gmm)
##Designate sample size, number of treated and control observations,
##theta.curr, which are used to generate probabilities.
##Trim probabilities, and generate weights.
theta.curr<-X%*%beta.curr
baseline.prob<-apply(theta.curr,1,function(x) (1+sum(exp(x)))^-1)
probs.curr<-cbind(baseline.prob, exp(theta.curr[,1])*baseline.prob, exp(theta.curr[,2])*baseline.prob)
probs.curr[,1]<-pmax(probs.min,probs.curr[,1])
probs.curr[,2]<-pmax(probs.min,probs.curr[,2])
probs.curr[,3]<-pmax(probs.min,probs.curr[,3])
norms<-apply(probs.curr,1,sum)
probs.curr<-probs.curr/norms
w.curr<-cbind(2*T1/probs.curr[,1] - T2/probs.curr[,2] - T3/probs.curr[,3],
T2/probs.curr[,2] - T3/probs.curr[,3])
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/n*t(wtX)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
##Generate g-bar, as in the paper.
gbar<-c(1/n*t(wtX)%*%(T2-probs.curr[,2]),
1/n*t(wtX)%*%(T3-probs.curr[,3]),
w.curr.del)
if(is.null(invV))
{
##Generate the covariance matrix used in the GMM estimate.
##Was for the initial version that calculates the analytic variances.
X.1.1<-wtX*(probs.curr[,2]*(1-probs.curr[,2]))
X.1.2<-wtX*(-probs.curr[,2]*probs.curr[,3])
X.1.3<-wtX*-1
X.1.4<-wtX*1
X.2.2<-wtX*(probs.curr[,3]*(1-probs.curr[,3]))
X.2.3<-wtX*-1
X.2.4<-wtX*-1
X.3.3<-wtX*(4*probs.curr[,1]^-1+probs.curr[,2]^-1+probs.curr[,3]^-1)
X.3.4<-wtX*(-probs.curr[,2]^-1+probs.curr[,3]^-1)
X.4.4<-wtX*(probs.curr[,2]^-1+probs.curr[,3]^-1)
V<-1/n*rbind(cbind(t(X.1.1)%*%X,t(X.1.2)%*%X,t(X.1.3)%*%X,t(X.1.4)%*%X),
cbind(t(X.1.2)%*%X,t(X.2.2)%*%X,t(X.2.3)%*%X,t(X.2.4)%*%X),
cbind(t(X.1.3)%*%X,t(X.2.3)%*%X,t(X.3.3)%*%X,t(X.3.4)%*%X),
cbind(t(X.1.4)%*%X,t(X.2.4)%*%X,t(X.3.4)%*%X,t(X.4.4)%*%X))
invV<-ginv(V)
}
##Calculate the GMM loss.
loss1<-as.vector(t(gbar)%*%invV%*%(gbar))
out1<-list("loss"=loss1, "invV"=invV)
out1
}
gmm.loss<-function(x,...) gmm.func(x,...)$loss
##Loss function for balance constraints, returns the squared imbalance along each dimension.
bal.loss<-function(beta.curr){
beta.curr<-matrix(beta.curr,k,no.treats-1)
theta.curr<-X%*%beta.curr
baseline.prob<-apply(theta.curr,1,function(x) (1+sum(exp(x)))^-1)
probs.curr<-cbind(baseline.prob, exp(theta.curr[,1])*baseline.prob, exp(theta.curr[,2])*baseline.prob)
probs.curr[,1]<-pmax(probs.min,probs.curr[,1])
probs.curr[,2]<-pmax(probs.min,probs.curr[,2])
probs.curr[,3]<-pmax(probs.min,probs.curr[,3])
norms<-apply(probs.curr,1,sum)
probs.curr<-probs.curr/norms
w.curr<-cbind(2*T1/probs.curr[,1] - T2/probs.curr[,2] - T3/probs.curr[,3],
T2/probs.curr[,2] - T3/probs.curr[,3])/n
##Generate mean imbalance.
wtXprimew <- t(wtX)%*%(w.curr)
loss1<-sum(diag(t(wtXprimew)%*%XprimeX.inv%*%wtXprimew))
loss1
}
gmm.gradient<-function(beta.curr, X.gmm=X, invV)
{
##Designate a few objects in the function.
beta.curr<-matrix(beta.curr,k,no.treats-1)
X<-as.matrix(X.gmm)
##Designate sample size, number of treated and control observations,
##theta.curr, which are used to generate probabilities.
##Trim probabilities, and generate weights.
theta.curr<-X%*%beta.curr
baseline.prob<-apply(theta.curr,1,function(x) (1+sum(exp(x)))^-1)
probs.curr<-cbind(baseline.prob, exp(theta.curr[,1])*baseline.prob, exp(theta.curr[,2])*baseline.prob)
probs.curr[,1]<-pmax(probs.min,probs.curr[,1])
probs.curr[,2]<-pmax(probs.min,probs.curr[,2])
probs.curr[,3]<-pmax(probs.min,probs.curr[,3])
norms<-apply(probs.curr,1,sum)
probs.curr<-probs.curr/norms
w.curr<-cbind(2*T1/probs.curr[,1] - T2/probs.curr[,2] - T3/probs.curr[,3],
T2/probs.curr[,2] - T3/probs.curr[,3])
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/n*t(wtX)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
##Generate g-bar, as in the paper.
gbar<-c(1/n*t(wtX)%*%(T2-probs.curr[,2]),
1/n*t(wtX)%*%(T3-probs.curr[,3]),
w.curr.del)
dgbar<-rbind(cbind(1/n*t(-wtX*probs.curr[,2]*(1-probs.curr[,2]))%*%X,
1/n*t(wtX*probs.curr[,2]*probs.curr[,3])%*%X,
1/n*t(wtX*(2*T1*probs.curr[,2]/probs.curr[,1] + T2*(1-probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3]))%*%X,
1/n*t(wtX*(-T2*(1-probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3]))%*%X),
cbind(1/n*t(wtX*probs.curr[,2]*probs.curr[,3])%*%X,
1/n*t(-wtX*probs.curr[,3]*(1-probs.curr[,3]))%*%X,
1/n*t(wtX*(2*T1*probs.curr[,3]/probs.curr[,1] - T2*probs.curr[,3]/probs.curr[,2] + T3*(1-probs.curr[,3])/probs.curr[,3]))%*%X,
1/n*t(wtX*(T2*probs.curr[,3]/probs.curr[,2] + T3*(1-probs.curr[,3])/probs.curr[,3]))%*%X))
out<-2*dgbar%*%invV%*%gbar
out
}
bal.gradient<-function(beta.curr)
{
beta.curr<-matrix(beta.curr,k,no.treats-1)
theta.curr<-X%*%beta.curr
baseline.prob<-apply(theta.curr,1,function(x) (1+sum(exp(x)))^-1)
probs.curr<-cbind(baseline.prob, exp(theta.curr[,1])*baseline.prob, exp(theta.curr[,2])*baseline.prob)
probs.curr[,1]<-pmax(probs.min,probs.curr[,1])
probs.curr[,2]<-pmax(probs.min,probs.curr[,2])
probs.curr[,3]<-pmax(probs.min,probs.curr[,3])
norms<-apply(probs.curr,1,sum)
probs.curr<-probs.curr/norms
w.curr<-cbind(2*T1/probs.curr[,1] - T2/probs.curr[,2] - T3/probs.curr[,3],
T2/probs.curr[,2] - T3/probs.curr[,3])/n
dw.beta1<-cbind(t(wtX*(2*T1*probs.curr[,2]/probs.curr[,1] + T2*(1-probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3])),
t(wtX*(-T2*(1-probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3])))/n
dw.beta2<-cbind(t(wtX*(2*T1*probs.curr[,3]/probs.curr[,1] - T2*probs.curr[,3]/probs.curr[,2] + T3*(1-probs.curr[,3])/probs.curr[,3])),
t(wtX*(T2*probs.curr[,3]/probs.curr[,2] + T3*(1-probs.curr[,3])/probs.curr[,3])))/n
##Generate mean imbalance.
wtXprimew <- t(wtX)%*%(w.curr)
loss1<-diag(t(wtXprimew)%*%XprimeX.inv%*%wtXprimew)
out.1<-2*dw.beta1[,1:n]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,1]) +
2*dw.beta1[,(n+1):(2*n)]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,2])
out.2<-2*dw.beta2[,1:n]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,1]) +
2*dw.beta2[,(n+1):(2*n)]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,2])
out<-c(out.1, out.2)
out
}
n<-length(treat)
##Run multinomial logit
dat.dummy<-data.frame(treat=treat,X)
#Need to generalize for different dimensioned X's
xnam<- colnames(dat.dummy[,-1])
fmla <- as.formula(paste("as.factor(treat) ~ -1 + ", paste(xnam, collapse= "+")))
mnl1<-multinom(fmla, data=dat.dummy, weights = sample.weights, trace=FALSE)
mcoef<-t(coef(mnl1))
mcoef[is.na(mcoef[,1]),1]<-0
mcoef[is.na(mcoef[,2]),2]<-0
probs.mnl<-cbind(1/(1+exp(X%*%mcoef[,1])+exp(X%*%mcoef[,2])),
exp(X%*%mcoef[,1])/(1+exp(X%*%mcoef[,1])+exp(X%*%mcoef[,2])),
exp(X%*%mcoef[,2])/(1+exp(X%*%mcoef[,1])+exp(X%*%mcoef[,2])))
colnames(probs.mnl)<-c("p1","p2","p3")
probs.mnl[,1]<-pmax(probs.min,probs.mnl[,1])
probs.mnl[,2]<-pmax(probs.min,probs.mnl[,2])
probs.mnl[,3]<-pmax(probs.min,probs.mnl[,3])
norms<-apply(probs.mnl,1,sum)
probs.mnl<-probs.mnl/norms
mnl1$fit<-matrix(probs.mnl,nrow=n,ncol=no.treats)
beta.curr<-matrix(mcoef, ncol = 1)
beta.curr[is.na(beta.curr)]<-0
alpha.func<-function(alpha) gmm.loss(beta.curr*alpha)
beta.curr<-beta.curr*optimize(alpha.func,interval=c(.8,1.1))$min
##Generate estimates for balance and CBPS
gmm.init<-beta.curr
this.invV<-gmm.func(gmm.init)$invV
if (twostep)
{
opt.bal<-optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="BFGS", gr = bal.gradient, hessian=TRUE)
}
else
{
opt.bal<-tryCatch({
optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
},
error = function(err)
{
return(optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="Nelder-Mead", hessian=TRUE))
})
}
beta.bal<-opt.bal$par
if(bal.only) opt1<-opt.bal
if(!bal.only)
{
if (twostep)
{
gmm.glm.init<-optim(gmm.init, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE, gr = gmm.gradient, invV = this.invV)
gmm.bal.init<-optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE, gr = gmm.gradient, invV = this.invV)
}
else
{
gmm.glm.init<-tryCatch({
optim(gmm.init,gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
},
error = function(err)
{
return(optim(gmm.init,gmm.loss, control=list("maxit"=iterations), method="Nelder-Mead", hessian=TRUE))
})
gmm.bal.init<-tryCatch({
optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
},
error = function(err)
{
return(optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method="Nelder-Mead", hessian=TRUE))
})
}
if(gmm.glm.init$val<gmm.bal.init$val) opt1<-gmm.glm.init else opt1<-gmm.bal.init
}
##Generate probabilities
beta.opt<-matrix(opt1$par,nrow=k,ncol=no.treats-1)
theta.opt<-X%*%beta.opt
baseline.prob<-apply(theta.opt,1,function(x) (1+sum(exp(x)))^-1)
probs.opt<-cbind(baseline.prob, exp(theta.opt[,1])*baseline.prob, exp(theta.opt[,2])*baseline.prob)
probs.opt[,1]<-pmax(probs.min,probs.opt[,1])
probs.opt[,2]<-pmax(probs.min,probs.opt[,2])
probs.opt[,3]<-pmax(probs.min,probs.opt[,3])
norms<-apply(probs.opt,1,sum)
probs.opt<-probs.opt/norms
J.opt<-ifelse(twostep, gmm.func(beta.opt, invV = this.invV)$loss, gmm.func(beta.opt)$loss)
if ((J.opt > gmm.loss(mcoef)) & (bal.loss(beta.opt) > bal.loss(mcoef)))
{
beta.opt<-mcoef
probs.opt<-probs.mnl
J.opt <- gmm.loss(mcoef)
warning("Optimization failed. Results returned are for MLE.")
}
residuals<-cbind(T1-probs.opt[,1],T2-probs.opt[,2],T3-probs.opt[,3])
deviance <- -2*c(sum(T1*log(probs.opt[,1])+T2*log(probs.opt[,2])+T3*log(probs.opt[,3])))
nulldeviance <- -2*c(sum(T1*log(mean(T1))+T2*log(mean(T2))+T3*log(mean(T3))))
##Generate weights
norm1<-norm2<-norm3<-1
if (standardize)
{
norm1<-sum(T1*sample.weights/probs.opt[,1])
norm2<-sum(T2*sample.weights/probs.opt[,2])
norm3<-sum(T3*sample.weights/probs.opt[,3])
}
w.opt<-T1/probs.opt[,1]/norm1 + T2/probs.opt[,2]/norm2 + T3/probs.opt[,3]/norm3
W<-gmm.func(beta.opt)$invV
XG.1.1<-t(-wtX*probs.opt[,2]*(1-probs.opt[,2]))%*%X
XG.1.2<-t(wtX*probs.opt[,2]*probs.opt[,3])%*%X
XG.1.3<-t(wtX*(2*T1*probs.opt[,2]/probs.opt[,1] + T2*(1-probs.opt[,2])/probs.opt[,2] - T3*probs.opt[,2]/probs.opt[,3]))%*%X
XG.1.4<-t(wtX*(-T2*(1-probs.opt[,2])/probs.opt[,2] - T3*probs.opt[,2]/probs.opt[,3]))%*%X
XG.2.1<-t(wtX*probs.opt[,2]*probs.opt[,3])%*%X
XG.2.2<-t(-wtX*probs.opt[,3]*(1-probs.opt[,3]))%*%X
XG.2.3<-t(wtX*(2*T1*probs.opt[,3]/probs.opt[,1] - T2*probs.opt[,3]/probs.opt[,2] + T3*(1-probs.opt[,3])/probs.opt[,3]))%*%X
XG.2.4<-t(wtX*(T2*probs.opt[,3]/probs.opt[,2] + T3*(1-probs.opt[,3])/probs.opt[,3]))%*%X
G<-1/n*rbind(cbind(XG.1.1,XG.1.2,XG.1.3,XG.1.4),cbind(XG.2.1,XG.2.2,XG.2.3,XG.2.4))
XW.1<-X*(T2-probs.opt[,2])*sample.weights^.5
XW.2<-X*(T3-probs.opt[,3])*sample.weights^.5
XW.3<-X*(2*T1/probs.opt[,1] - T2/probs.opt[,2] - T3/probs.opt[,3])*sample.weights^.5
XW.4<-X*(T2/probs.opt[,2] - T3/probs.opt[,3])*sample.weights^.5
W1<-rbind(t(XW.1),t(XW.2),t(XW.3),t(XW.4))
Omega<-1/n*(W1%*%t(W1))
GWGinvGW <- ginv(G%*%W%*%t(G))%*%G%*%W
vcov <- GWGinvGW%*%Omega%*%t(GWGinvGW)
colnames(probs.opt)<-treat.names
class(beta.opt) <- "coef"
output<-list("coefficients"=beta.opt,"fitted.values"=probs.opt,"linear.predictor" = theta.opt,
"deviance"=deviance,"weights"=w.opt*sample.weights,
"y"=treat,"x"=X,"converged"=opt1$conv,"J"=J.opt,"var"=vcov,
"mle.J"=ifelse(twostep, gmm.func(mcoef, invV = this.invV)$loss, gmm.loss(mcoef)))
class(output)<- c("CBPS")
output
}
CBPS.4Treat<-function(treat, X, method, k, XprimeX.inv, bal.only, iterations, standardize, twostep, sample.weights, ...)
{
probs.min<-1e-6
no.treats<-length(levels(as.factor(treat)))
treat.names<-levels(as.factor(treat))
T1<-as.numeric(treat==treat.names[1])
T2<-as.numeric(treat==treat.names[2])
T3<-as.numeric(treat==treat.names[3])
T4<-as.numeric(treat==treat.names[4])
sample.weights<-sample.weights/mean(sample.weights)
wtX <- sample.weights*X
XprimeX.inv<-ginv(t(sample.weights^.5*X)%*%(sample.weights^.5*X))
##The gmm objective function--given a guess of beta, constructs the GMM J statistic.
gmm.func<-function(beta.curr,X.gmm=X,invV=NULL){
##Designate a few objects in the function.
beta.curr<-matrix(beta.curr,k,no.treats-1)
X<-as.matrix(X.gmm)
##Designate sample size, number of treated and control observations,
##theta.curr, which are used to generate probabilities.
##Trim probabilities, and generate weights.
theta.curr<-X%*%beta.curr
baseline.prob<-apply(theta.curr,1,function(x) (1+sum(exp(x)))^-1)
probs.curr<-cbind(baseline.prob, exp(theta.curr[,1])*baseline.prob, exp(theta.curr[,2])*baseline.prob, exp(theta.curr[,3])*baseline.prob)
probs.curr[,1]<-pmax(probs.min,probs.curr[,1])
probs.curr[,2]<-pmax(probs.min,probs.curr[,2])
probs.curr[,3]<-pmax(probs.min,probs.curr[,3])
probs.curr[,4]<-pmax(probs.min,probs.curr[,4])
norms<-apply(probs.curr,1,sum)
probs.curr<-probs.curr/norms
w.curr<-cbind(T1/probs.curr[,1] + T2/probs.curr[,2] - T3/probs.curr[,3] - T4/probs.curr[,4],
T1/probs.curr[,1] - T2/probs.curr[,2] - T3/probs.curr[,3] + T4/probs.curr[,4],
-T1/probs.curr[,1] + T2/probs.curr[,2] - T3/probs.curr[,3] + T4/probs.curr[,4])
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/n*t(wtX)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
##Generate g-bar, as in the paper.
gbar<-c(1/n*t(wtX)%*%(T2-probs.curr[,2]),
1/n*t(wtX)%*%(T3-probs.curr[,3]),
1/n*t(wtX)%*%(T4-probs.curr[,4]),
w.curr.del)
if(is.null(invV))
{
##Generate the covariance matrix used in the GMM estimate.
##Was for the initial version that calculates the analytic variances.
X.1.1<-wtX*(probs.curr[,2]*(1-probs.curr[,2]))
X.1.2<-wtX*(-probs.curr[,2]*probs.curr[,3])
X.1.3<-wtX*(-probs.curr[,2]*probs.curr[,4])
X.1.4<-wtX
X.1.5<-wtX*(-1)
X.1.6<-wtX
X.2.2<-wtX*(probs.curr[,3]*(1-probs.curr[,3]))
X.2.3<-wtX*(-probs.curr[,3]*probs.curr[,4])
X.2.4<-wtX*(-1)
X.2.5<-wtX*(-1)
X.2.6<-wtX*(-1)
X.3.3<-wtX*(probs.curr[,4]*(1-probs.curr[,4]))
X.3.4<-wtX*(-1)
X.3.5<-wtX
X.3.6<-wtX
X.4.4<-wtX*(probs.curr[,1]^-1+probs.curr[,2]^-1+probs.curr[,3]^-1+probs.curr[,4]^-1)
X.4.5<-wtX*(probs.curr[,1]^-1-probs.curr[,2]^-1+probs.curr[,3]^-1-probs.curr[,4]^-1)
X.4.6<-wtX*(-probs.curr[,1]^-1+probs.curr[,2]^-1+probs.curr[,3]^-1-probs.curr[,4]^-1)
X.5.5<-X.4.4
X.5.6<-wtX*(-probs.curr[,1]^-1-probs.curr[,2]^-1+probs.curr[,3]^-1+probs.curr[,4]^-1)
X.6.6<-X.4.4
V<-1/n*rbind(cbind(t(X.1.1)%*%X,t(X.1.2)%*%X,t(X.1.3)%*%X,t(X.1.4)%*%X,t(X.1.5)%*%X,t(X.1.6)%*%X),
cbind(t(X.1.2)%*%X,t(X.2.2)%*%X,t(X.2.3)%*%X,t(X.2.4)%*%X,t(X.2.5)%*%X,t(X.2.6)%*%X),
cbind(t(X.1.3)%*%X,t(X.2.3)%*%X,t(X.3.3)%*%X,t(X.3.4)%*%X,t(X.3.5)%*%X,t(X.3.6)%*%X),
cbind(t(X.1.4)%*%X,t(X.2.4)%*%X,t(X.3.4)%*%X,t(X.4.4)%*%X,t(X.4.5)%*%X,t(X.4.6)%*%X),
cbind(t(X.1.5)%*%X,t(X.2.5)%*%X,t(X.3.5)%*%X,t(X.4.5)%*%X,t(X.5.5)%*%X,t(X.5.6)%*%X),
cbind(t(X.1.6)%*%X,t(X.2.6)%*%X,t(X.3.6)%*%X,t(X.4.6)%*%X,t(X.5.6)%*%X,t(X.6.6)%*%X))
invV<-ginv(V)
}
##Calculate the GMM loss.
loss1<-as.vector(t(gbar)%*%invV%*%(gbar))
out1<-list("loss"=loss1, "invV"=invV)
out1
}
gmm.loss<-function(x,...) gmm.func(x,...)$loss
##Loss function for balance constraints, returns the squared imbalance along each dimension.
bal.loss<-function(beta.curr){
beta.curr<-matrix(beta.curr,k,no.treats-1)
theta.curr<-X%*%beta.curr
baseline.prob<-apply(theta.curr,1,function(x) (1+sum(exp(x)))^-1)
probs.curr<-cbind(baseline.prob, exp(theta.curr[,1])*baseline.prob, exp(theta.curr[,2])*baseline.prob, exp(theta.curr[,3])*baseline.prob)
probs.curr[,1]<-pmax(probs.min,probs.curr[,1])
probs.curr[,2]<-pmax(probs.min,probs.curr[,2])
probs.curr[,3]<-pmax(probs.min,probs.curr[,3])
probs.curr[,4]<-pmax(probs.min,probs.curr[,4])
norms<-apply(probs.curr,1,sum)
probs.curr<-probs.curr/norms
w.curr<-cbind(T1/probs.curr[,1] + T2/probs.curr[,2] - T3/probs.curr[,3] - T4/probs.curr[,4],
T1/probs.curr[,1] - T2/probs.curr[,2] - T3/probs.curr[,3] + T4/probs.curr[,4],
-T1/probs.curr[,1] + T2/probs.curr[,2] - T3/probs.curr[,3] + T4/probs.curr[,4])/n
##Generate mean imbalance.
wtXprimew <- t(sample.weights*X)%*%(w.curr)
loss1<-sum(diag(t(wtXprimew)%*%XprimeX.inv%*%wtXprimew))
loss1
}
gmm.gradient<-function(beta.curr, X.gmm=X, invV)
{
##Designate a few objects in the function.
beta.curr<-matrix(beta.curr,k,no.treats-1)
X<-as.matrix(X.gmm)
##Designate sample size, number of treated and control observations,
##theta.curr, which are used to generate probabilities.
##Trim probabilities, and generate weights.
theta.curr<-X%*%beta.curr
baseline.prob<-apply(theta.curr,1,function(x) (1+sum(exp(x)))^-1)
probs.curr<-cbind(baseline.prob, exp(theta.curr[,1])*baseline.prob, exp(theta.curr[,2])*baseline.prob, exp(theta.curr[,3])*baseline.prob)
probs.curr[,1]<-pmax(probs.min,probs.curr[,1])
probs.curr[,2]<-pmax(probs.min,probs.curr[,2])
probs.curr[,3]<-pmax(probs.min,probs.curr[,3])
probs.curr[,4]<-pmax(probs.min,probs.curr[,4])
norms<-apply(probs.curr,1,sum)
probs.curr<-probs.curr/norms
w.curr<-cbind(T1/probs.curr[,1] + T2/probs.curr[,2] - T3/probs.curr[,3] - T4/probs.curr[,4],
T1/probs.curr[,1] - T2/probs.curr[,2] - T3/probs.curr[,3] + T4/probs.curr[,4],
-T1/probs.curr[,1] + T2/probs.curr[,2] - T3/probs.curr[,3] + T4/probs.curr[,4])
##Generate the vector of mean imbalance by weights.
w.curr.del<-1/n*t(wtX)%*%w.curr
w.curr.del<-as.matrix(w.curr.del)
w.curr<-as.matrix(w.curr)
##Generate g-bar, as in the paper.
gbar<-c(1/n*t(wtX)%*%(T2-probs.curr[,2]),
1/n*t(wtX)%*%(T3-probs.curr[,3]),
1/n*t(wtX)%*%(T4-probs.curr[,4]),
w.curr.del)
dgbar<-rbind(cbind(1/n*t(-wtX*probs.curr[,2]*(1-probs.curr[,2]))%*%X,
1/n*t(wtX*probs.curr[,2]*probs.curr[,3])%*%X,
1/n*t(wtX*probs.curr[,2]*probs.curr[,4])%*%X,
1/n*t(wtX*(T1*probs.curr[,2]/probs.curr[,1] - T2*(1 - probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3] - T4*probs.curr[,2]/probs.curr[,4]))%*%X,
1/n*t(wtX*(T1*probs.curr[,2]/probs.curr[,1] + T2*(1 - probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3] + T4*probs.curr[,2]/probs.curr[,4]))%*%X,
1/n*t(wtX*(-T1*probs.curr[,2]/probs.curr[,1] - T2*(1 - probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3] + T4*probs.curr[,2]/probs.curr[,4]))%*%X
),
cbind(1/n*t(wtX*probs.curr[,2]*probs.curr[,3])%*%X,
1/n*t(-wtX*probs.curr[,3]*(1-probs.curr[,3]))%*%X,
1/n*t(wtX*probs.curr[,3]*probs.curr[,4])%*%X,
1/n*t(wtX*(T1*probs.curr[,3]/probs.curr[,1] + T2*probs.curr[,3]/probs.curr[,2] + T3*(1 - probs.curr[,3])/probs.curr[,3] - T4*probs.curr[,3]/probs.curr[,4]))%*%X,
1/n*t(wtX*(T1*probs.curr[,3]/probs.curr[,1] - T2*probs.curr[,3]/probs.curr[,2] + T3*(1 - probs.curr[,3])/probs.curr[,3] + T4*probs.curr[,3]/probs.curr[,4]))%*%X,
1/n*t(wtX*(-T1*probs.curr[,3]/probs.curr[,1] + T2*probs.curr[,3]/probs.curr[,2] + T3*(1 - probs.curr[,3])/probs.curr[,3] + T4*probs.curr[,3]/probs.curr[,4]))%*%X
),
cbind(1/n*t(wtX*probs.curr[,2]*probs.curr[,4])%*%X,
1/n*t(wtX*probs.curr[,3]*probs.curr[,4])%*%X,
1/n*t(-wtX*probs.curr[,4]*(1-probs.curr[,4]))%*%X,
1/n*t(wtX*(T1*probs.curr[,4]/probs.curr[,1] + T2*probs.curr[,4]/probs.curr[,2] - T3*probs.curr[,4]/probs.curr[,3] + T4*(1 - probs.curr[,4])/probs.curr[,4]))%*%X,
1/n*t(wtX*(T1*probs.curr[,4]/probs.curr[,1] - T2*probs.curr[,4]/probs.curr[,2] - T3*probs.curr[,4]/probs.curr[,3] - T4*(1 - probs.curr[,4])/probs.curr[,4]))%*%X,
1/n*t(wtX*(-T1*probs.curr[,4]/probs.curr[,1] + T2*probs.curr[,4]/probs.curr[,2] - T3*probs.curr[,4]/probs.curr[,3] - T4*(1 - probs.curr[,4])/probs.curr[,4]))%*%X
))
out<-2*dgbar%*%invV%*%gbar
out
}
bal.gradient<-function(beta.curr)
{
beta.curr<-matrix(beta.curr,k,no.treats-1)
theta.curr<-X%*%beta.curr
baseline.prob<-apply(theta.curr,1,function(x) (1+sum(exp(x)))^-1)
probs.curr<-cbind(baseline.prob, exp(theta.curr[,1])*baseline.prob, exp(theta.curr[,2])*baseline.prob, exp(theta.curr[,3])*baseline.prob)
probs.curr[,1]<-pmax(probs.min,probs.curr[,1])
probs.curr[,2]<-pmax(probs.min,probs.curr[,2])
probs.curr[,3]<-pmax(probs.min,probs.curr[,3])
probs.curr[,4]<-pmax(probs.min,probs.curr[,4])
norms<-apply(probs.curr,1,sum)
probs.curr<-probs.curr/norms
w.curr<-cbind(T1/probs.curr[,1] + T2/probs.curr[,2] - T3/probs.curr[,3] - T4/probs.curr[,4],
T1/probs.curr[,1] - T2/probs.curr[,2] - T3/probs.curr[,3] + T4/probs.curr[,4],
-T1/probs.curr[,1] + T2/probs.curr[,2] - T3/probs.curr[,3] + T4/probs.curr[,4])/n
dw.beta1<-cbind(t(X*(T1*probs.curr[,2]/probs.curr[,1] - T2*(1 - probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3] - T4*probs.curr[,2]/probs.curr[,4])),
t(X*(T1*probs.curr[,2]/probs.curr[,1] + T2*(1 - probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3] + T4*probs.curr[,2]/probs.curr[,4])),
t(X*(-T1*probs.curr[,2]/probs.curr[,1] - T2*(1 - probs.curr[,2])/probs.curr[,2] - T3*probs.curr[,2]/probs.curr[,3] + T4*probs.curr[,2]/probs.curr[,4])))/n
dw.beta2<-cbind(t(X*(T1*probs.curr[,3]/probs.curr[,1] + T2*probs.curr[,3]/probs.curr[,2] + T3*(1 - probs.curr[,3])/probs.curr[,3] - T4*probs.curr[,3]/probs.curr[,4])),
t(X*(T1*probs.curr[,3]/probs.curr[,1] - T2*probs.curr[,3]/probs.curr[,2] + T3*(1 - probs.curr[,3])/probs.curr[,3] + T4*probs.curr[,3]/probs.curr[,4])),
t(X*(-T1*probs.curr[,3]/probs.curr[,1] + T2*probs.curr[,3]/probs.curr[,2] + T3*(1 - probs.curr[,3])/probs.curr[,3] + T4*probs.curr[,3]/probs.curr[,4])))/n
dw.beta3<-cbind(t(X*(T1*probs.curr[,4]/probs.curr[,1] + T2*probs.curr[,4]/probs.curr[,2] - T3*probs.curr[,4]/probs.curr[,3] + T4*(1 - probs.curr[,4])/probs.curr[,4])),
t(X*(T1*probs.curr[,4]/probs.curr[,1] - T2*probs.curr[,4]/probs.curr[,2] - T3*probs.curr[,4]/probs.curr[,3] - T4*(1 - probs.curr[,4])/probs.curr[,4])),
t(X*(-T1*probs.curr[,4]/probs.curr[,1] + T2*probs.curr[,4]/probs.curr[,2] - T3*probs.curr[,4]/probs.curr[,3] - T4*(1 - probs.curr[,4])/probs.curr[,4])))/n
Xprimew <- t(wtX)%*%w.curr
loss1<-diag(t(Xprimew)%*%XprimeX.inv%*%Xprimew)
out.1<-2*dw.beta1[,1:n]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,1]) +
2*dw.beta1[,(n+1):(2*n)]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,2]) +
2*dw.beta1[,(2*n+1):(3*n)]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,3])
out.2<-2*dw.beta2[,1:n]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,1]) +
2*dw.beta2[,(n+1):(2*n)]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,2]) +
2*dw.beta2[,(2*n+1):(3*n)]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,3])
out.3<-2*dw.beta3[,1:n]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,1]) +
2*dw.beta3[,(n+1):(2*n)]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,2]) +
2*dw.beta3[,(2*n+1):(3*n)]%*%(wtX)%*%XprimeX.inv%*%t(wtX)%*%(w.curr[,3])
out<-c(out.1, out.2, out.3)
out
}
n<-length(treat)
##Run multinomial logit
dat.dummy<-data.frame(treat=treat,X)
#Need to generalize for different dimensioned X's
xnam<- colnames(dat.dummy[,-1])
fmla <- as.formula(paste("as.factor(treat) ~ -1 + ", paste(xnam, collapse= "+")))
mnl1<-multinom(fmla, data=dat.dummy,trace=FALSE,weights=sample.weights)
mcoef<-t(coef(mnl1))
mcoef[is.na(mcoef[,1]),1]<-0
mcoef[is.na(mcoef[,2]),2]<-0
mcoef[is.na(mcoef[,3]),3]<-0
probs.mnl<-cbind(1/(1+exp(X%*%mcoef[,1])+exp(X%*%mcoef[,2])+exp(X%*%mcoef[,3])),
exp(X%*%mcoef[,1])/(1+exp(X%*%mcoef[,1])+exp(X%*%mcoef[,2])+exp(X%*%mcoef[,3])),
exp(X%*%mcoef[,2])/(1+exp(X%*%mcoef[,1])+exp(X%*%mcoef[,2])+exp(X%*%mcoef[,3])),
exp(X%*%mcoef[,3])/(1+exp(X%*%mcoef[,1])+exp(X%*%mcoef[,2])+exp(X%*%mcoef[,3])))
colnames(probs.mnl)<-c("p1","p2","p3","p4")
probs.mnl[,1]<-pmax(probs.min,probs.mnl[,1])
probs.mnl[,2]<-pmax(probs.min,probs.mnl[,2])
probs.mnl[,3]<-pmax(probs.min,probs.mnl[,3])
probs.mnl[,4]<-pmax(probs.min,probs.mnl[,4])
norms<-apply(probs.mnl,1,sum)
probs.mnl<-probs.mnl/norms
mnl1$fit<-matrix(probs.mnl,nrow=n,ncol=no.treats)
beta.curr<-matrix(mcoef, ncol = 1)
beta.curr[is.na(beta.curr)]<-0
alpha.func<-function(alpha) gmm.loss(beta.curr*alpha)
beta.curr<-beta.curr*optimize(alpha.func,interval=c(.8,1.1))$min
##Generate estimates for balance and CBPS
gmm.init<-beta.curr
this.invV<-gmm.func(gmm.init)$invV
if (twostep)
{
opt.bal<-optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="BFGS", gr = bal.gradient, hessian=TRUE)
}
else
{
opt.bal<-tryCatch({
optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
},
error = function(err)
{
return(optim(gmm.init, bal.loss, control=list("maxit"=iterations), method="Nelder-Mead", hessian=TRUE))
})
}
beta.bal<-opt.bal$par
if(bal.only) opt1<-opt.bal
if(twostep)
{
if (gmm.loss(gmm.init) < gmm.loss(beta.bal))
{
this.invV<-gmm.func(gmm.init)$invV
}
else
{
this.invV<-gmm.func(beta.bal)$invV
}
if(bal.only)
{
this.invV<-gmm.func(beta.bal)$invV
}
}
if(!bal.only)
{
if (twostep)
{
gmm.glm.init<-optim(gmm.init, gmm.loss, control=list("maxit"=iterations), method="BFGS", gr = gmm.gradient, hessian=TRUE, invV = this.invV)
gmm.bal.init<-optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method="BFGS", gr = gmm.gradient, hessian=TRUE, invV = this.invV)
}
else
{
gmm.glm.init<-tryCatch({
optim(gmm.init,gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
},
error = function(err)
{
return(optim(gmm.init,gmm.loss, control=list("maxit"=iterations), method="Nelder-Mead", hessian=TRUE))
})
gmm.bal.init<-tryCatch({
optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
},
error = function(err)
{
return(optim(beta.bal, gmm.loss, control=list("maxit"=iterations), method="Nelder-Mead", hessian=TRUE))
})
}
if(gmm.glm.init$val<gmm.bal.init$val) opt1<-gmm.glm.init else opt1<-gmm.bal.init
}
##Generate probabilities
beta.opt<-matrix(opt1$par,nrow=k,ncol=no.treats-1)
theta.opt<-X%*%beta.opt
baseline.prob<-apply(theta.opt,1,function(x) (1+sum(exp(x)))^-1)
probs.opt<-cbind(baseline.prob, exp(theta.opt[,1])*baseline.prob, exp(theta.opt[,2])*baseline.prob, exp(theta.opt[,3])*baseline.prob)
probs.opt[,1]<-pmax(probs.min,probs.opt[,1])
probs.opt[,2]<-pmax(probs.min,probs.opt[,2])
probs.opt[,3]<-pmax(probs.min,probs.opt[,3])
probs.opt[,4]<-pmax(probs.min,probs.opt[,4])
norms<-apply(probs.opt,1,sum)
probs.opt<-probs.opt/norms
J.opt<-ifelse(twostep, gmm.func(beta.opt, invV = this.invV)$loss, gmm.func(beta.opt)$loss)
if ((J.opt > gmm.loss(mcoef)) & (bal.loss(beta.opt) > bal.loss(mcoef)))
{
beta.opt<-mcoef
probs.opt<-probs.mnl
J.opt <- gmm.loss(mcoef)
warning("Optimization failed. Results returned are for MLE.")
}
#How are residuals now defined?
residuals<-cbind(T1-probs.opt[,1],T2-probs.opt[,2],T3-probs.opt[,3],T4-probs.opt[,4])
deviance <- -2*c(sum(T1*log(probs.opt[,1])+T2*log(probs.opt[,2])+T3*log(probs.opt[,3])+T4*log(probs.opt[,4])))
nulldeviance <- -2*c(sum(T1*log(mean(T1))+T2*log(mean(T2))+T3*log(mean(T3))+T4*log(mean(T4))))
##Generate weights
norm1<-norm2<-norm3<-norm4<-1
if (standardize)
{
norm1<-sum((sample.weights/probs.opt)[which(T1==1),1])
norm2<-sum((sample.weights/probs.opt)[which(T2==1),2])
norm3<-sum((sample.weights/probs.opt)[which(T3==1),3])
norm4<-sum((sample.weights/probs.opt)[which(T4==1),4])
}
w.opt<-(T1 == 1)/probs.opt[,1]/norm1 + (T2 == 1)/probs.opt[,2]/norm2 + (T3 == 1)/probs.opt[,3]/norm3 + (T4 == 1)/probs.opt[,4]/norm4
W<-gmm.func(beta.opt)$invV
X.G.1.1<-t(-wtX*probs.opt[,2]*(1-probs.opt[,2]))%*%X
X.G.1.2<-t(wtX*probs.opt[,2]*probs.opt[,3])%*%X
X.G.1.3<-t(wtX*probs.opt[,2]*probs.opt[,4])%*%X
X.G.1.4<-t(wtX*probs.opt[,2]*(T1/probs.opt[,1] - T2*(1-probs.opt[,2])/probs.opt[,2]^2 - T3/probs.opt[,3] - T4/probs.opt[,4]))%*%X
X.G.1.5<-t(wtX*probs.opt[,2]*(T1/probs.opt[,1] + T2*(1-probs.opt[,2])/probs.opt[,2]^2 - T3/probs.opt[,3] + T4/probs.opt[,4]))%*%X
X.G.1.6<-t(wtX*probs.opt[,2]*(-T1/probs.opt[,1] - T2*(1-probs.opt[,2])/probs.opt[,2]^2 - T3/probs.opt[,3] + T4/probs.opt[,4]))%*%X
X.G.2.1<-t(wtX*probs.opt[,2]*probs.opt[,3])%*%X
X.G.2.2<-t(-wtX*probs.opt[,3]*(1-probs.opt[,3]))%*%X
X.G.2.3<-t(wtX*probs.opt[,3]*probs.opt[,4])%*%X
X.G.2.4<-t(wtX*probs.opt[,3]*(T1/probs.opt[,1] + T2/probs.opt[,2] + T3*(1-probs.opt[,3])/probs.opt[,3]^2 - T4/probs.opt[,4]))%*%X
X.G.2.5<-t(wtX*probs.opt[,3]*(T1/probs.opt[,1] - T2/probs.opt[,2] + T3*(1-probs.opt[,3])/probs.opt[,3]^2 + T4/probs.opt[,4]))%*%X
X.G.2.6<-t(wtX*probs.opt[,3]*(-T1/probs.opt[,1] + T2/probs.opt[,2] + T3*(1-probs.opt[,3])/probs.opt[,3]^2 + T4/probs.opt[,4]))%*%X
X.G.3.1<-t(wtX*probs.opt[,2]*probs.opt[,4])%*%X
X.G.3.2<-t(wtX*probs.opt[,3]*probs.opt[,4])%*%X
X.G.3.3<-t(-wtX*probs.opt[,4]*(1-probs.opt[,4]))%*%X
X.G.3.4<-t(wtX*probs.opt[,4]*(T1/probs.opt[,1] + T2/probs.opt[,2] - T3/probs.opt[,3] + T4*(1-probs.opt[,4])/probs.opt[,4]^2))%*%X
X.G.3.5<-t(wtX*probs.opt[,4]*(T1/probs.opt[,1] - T2/probs.opt[,2] - T3/probs.opt[,3] - T4*(1-probs.opt[,4])/probs.opt[,4]^2))%*%X
X.G.3.6<-t(wtX*probs.opt[,4]*(-T1/probs.opt[,1] + T2/probs.opt[,2] - T3/probs.opt[,3] - T4*(1-probs.opt[,4])/probs.opt[,4]^2))%*%X
XW.1<-X*(T2-probs.opt[,2])*sample.weights^.5
XW.2<-X*(T3-probs.opt[,3])*sample.weights^.5
XW.3<-X*(T4-probs.opt[,4])*sample.weights^.5
XW.4<-X*( T1/probs.opt[,1] + T2/probs.opt[,2] - T3/probs.opt[,3] - T4/probs.opt[,4])*sample.weights^.5
XW.5<-X*( T1/probs.opt[,1] - T2/probs.opt[,2] - T3/probs.opt[,3] + T4/probs.opt[,4])*sample.weights^.5
XW.6<-X*(-T1/probs.opt[,1] + T2/probs.opt[,2] - T3/probs.opt[,3] + T4/probs.opt[,4])*sample.weights^.5
G<-1/n*rbind(cbind(X.G.1.1,X.G.1.2,X.G.1.3,X.G.1.4,X.G.1.5,X.G.1.6),
cbind(X.G.2.1,X.G.2.2,X.G.2.3,X.G.2.4,X.G.2.5,X.G.2.6),
cbind(X.G.3.1,X.G.3.2,X.G.3.3,X.G.3.4,X.G.3.5,X.G.3.6))
W1<-rbind(t(XW.1),t(XW.2),t(XW.3),t(XW.4),t(XW.5),t(XW.6))
Omega<-1/n*(W1%*%t(W1))
GWGinvGW <- ginv(G%*%W%*%t(G))%*%G%*%W
vcov <- GWGinvGW%*%Omega%*%t(GWGinvGW)
colnames(probs.opt)<-treat.names
class(beta.opt) <- "coef"
output<-list("coefficients"=beta.opt,"fitted.values"=probs.opt, "linear.predictor" = theta.opt,
"deviance"=deviance,"weights"=sample.weights*w.opt,
"y"=treat,"x"=X,"converged"=opt1$conv,"J"=J.opt,"var"=vcov,
"mle.J"=ifelse(twostep, gmm.func(mcoef, invV = this.invV)$loss, gmm.loss(mcoef)))
class(output)<- c("CBPS")
output
}
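## CBPS.3Treat() and CBPS.4Treat() are internal workhorses; in normal use they
## are reached by calling CBPS() with a factor treatment that has three or four
## levels. A hedged sketch (not executed at load time; `arm4`, the covariates,
## and `dat` are hypothetical):
if (FALSE) {
  dat$arm4 <- factor(dat$arm4)              # four-level treatment
  fit4 <- CBPS(arm4 ~ x1 + x2 + x3, data = dat)
  head(fit4$fitted.values)                  # one generalized propensity score column per level
  summary(fit4$weights)                     # the balancing weights
}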
## ---- End of source file: CBPS/R/CBPSMultiTreat.r ----
CBPSOptimal.2Treat<-function(treat, X, baselineX, diffX, iterations, ATT, standardize = standardize)
{
#will add ATT=1 later.
probs.min<- 1e-6
n<-dim(X)[1]
X1=baselineX
X1new=cbind(X[,1],X1)
treat.orig<-treat
treat<-sapply(treat,function(x) ifelse(x==levels(factor(treat))[2],1,0))
baselineX=as.matrix(baselineX)
diffX=as.matrix(diffX)
if(dim(baselineX)[2]+dim(diffX)[2]+1 > dim(X)[2])
{
bal.only=3
xcov=NULL
}else if(dim(baselineX)[2]+dim(diffX)[2]+1 == dim(X)[2]){
xcov = rep(1,dim(baselineX)[2]+dim(diffX)[2]+1)
xcov = diag(xcov)
bal.only=1
}else{
stop("Invalid baseline and diff models.")
}
ATT.wt.func1<-function(beta.curr,X.wt=X){
X<-as.matrix(X.wt)
n<-dim(X)[1]
n.c<-sum(treat==0)
n.t<-sum(treat==1)
theta.curr<-as.vector(X%*%beta.curr)
probs.curr<-(1+exp(-theta.curr))^-1
probs.curr<-pmin(1-probs.min,probs.curr)
probs.curr<-pmax(probs.min,probs.curr)
w1<-(n/n.t*(treat-probs.curr)/(1-probs.curr))
w1[treat==1] <- n/n.t
w1
}
gmm.func1<-function(beta.curr,invV=NULL,option=NULL){
probs.min<- 1e-6
n<-dim(X)[1]
if(ATT == 1)
{
n.t<-sum(treat==1)
w.curr.del1.att = 1/n.t*t(X1new)%*%ATT.wt.func1(beta.curr,X)
w.curr.del3.att = 1/n.t*t(addX)%*%(n/n.t*treat-1)
gbar = c(w.curr.del1.att,w.curr.del3.att)
}else{
theta.curr<-as.vector(X%*%beta.curr)
probs.curr<-(1+exp(-theta.curr))^-1
probs.curr<-pmin(1-probs.min,probs.curr)
probs.curr<-pmax(probs.min,probs.curr)
probs.curr<-as.vector(probs.curr)
w.curr<-treat/probs.curr-(1-treat)/(1-probs.curr) #(probs.curr-1+treat)^-1 #need to check
w.curr<-as.vector(w.curr)
w.curr.del<-1/n * t(X)%*%(w.curr)
w.curr.del<-as.vector(w.curr.del)
##Generate the vector of mean imbalance by weights.
w.curr.del1<-1/n * t(X1new)%*%(w.curr)
w.curr.del1<-as.vector(w.curr.del1)
### Generate the vector of mean imbalance by weights for h2()
addX=diffX #as.vector(rep(1,length(X[,1])))
w.curr3 = treat/probs.curr - 1 #(1-probs.curr)/(probs.curr-1+treat) #need to check
w.curr.del3 = 1/n*t(addX)%*%(w.curr3)
w.curr.del3<-as.vector(w.curr.del3)
w.curr3 = as.vector(w.curr3)
##Generate g-bar, as in the paper.
if(is.null(option))
{
gbar<-c(w.curr.del1,w.curr.del3)
}else if(option == "CBPS")
{
gbar <- w.curr.del
}
}
##Generate the covariance matrix used in the GMM estimate.
if(is.null(invV))
{
if(ATT==1)
{
#need to fill in this part
}else{
X.1<-X1new*((1-probs.curr)*probs.curr)^-.5
X.2<-addX*(1/probs.curr-1)^.5
X.1.1<- X1new*probs.curr^-.5
X.1.2 <-addX*probs.curr^-.5
V<-rbind(1/n*cbind(t(X.1)%*%X.1,t(X.1.1)%*%X.1.2),
1/n*cbind(t(X.1.2)%*%X.1.1,t(X.2)%*%X.2))
}
invV.g<-ginv(V)
}else{
invV.g <-invV
}
##Calculate the GMM loss.
loss1<-as.vector(t(gbar)%*%invV.g%*%(gbar))
out1<-list("loss"=loss1, "invV"=invV.g)
out1
}
gmm.loss1<-function(x,...) gmm.func1(x,...)$loss
#initial value of beta by fitting glm
glm1<-suppressWarnings(glm(treat~X-1,family=binomial))
glm1$coef[is.na(glm1$coef)]<-0
probs.glm<-glm1$fit
glm1$fit<-probs.glm<-pmin(1-probs.min,probs.glm)
glm1$fit<-probs.glm<-pmax(probs.min,probs.glm)
beta.curr<-glm1$coef
beta.curr[is.na(beta.curr)]<-0
glm.beta.curr<-glm(treat~X-1,family=binomial)$coefficients
#initial value of beta by fitting CBPS
invV2<-ginv(t(X)%*%X)
cbps.beta.curr=optim(glm.beta.curr,gmm.loss1,invV=invV2,option="CBPS")$par
gmm.init = glm.beta.curr
if(bal.only ==1) #exact constrained
{
opt.bal<-optim(gmm.init, gmm.loss1,method="BFGS",invV=xcov)
#beta.bal <- opt.bal$par
opt1<-opt.bal
}
if(bal.only==3) #solve gmm
{
gmm.glm.init<-optim(glm.beta.curr, gmm.loss1, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
gmm.cbps.init<-optim(cbps.beta.curr, gmm.loss1, control=list("maxit"=iterations), method="BFGS", hessian=TRUE)
if(gmm.glm.init$val<gmm.cbps.init$val) opt1<-gmm.glm.init else opt1<-gmm.cbps.init
}
##Generate probabilities
beta.opt<-opt1$par
theta.opt<-as.vector(X%*%beta.opt)
probs.opt<-(1+exp(-theta.opt))^-1
probs.opt<-pmin(1-probs.min,probs.opt)
probs.opt<-pmax(probs.min,probs.opt)
beta.opt<-opt1$par
##Generate weights
w.opt<-abs((probs.opt-1+treat)^-1)
norm1<-norm2<-1
if (standardize)
{
norm1<-sum(treat/probs.opt)
norm2<-sum((1-treat)/(1-probs.opt))
}
w.opt<-(treat == 1)/probs.opt/norm1 + (treat == 0)/(1-probs.opt)/norm2
J.opt<-gmm.func1(beta.opt,invV=NULL)$loss
residuals<-treat-probs.opt
deviance <- -2*c(sum(treat*log(probs.opt)+(1-treat)*log(1-probs.opt)))
nulldeviance <- -2*c(sum(treat*log(mean(treat))+(1-treat)*log(1-mean(treat))))
#get vcov
#G
if(ATT==1)
{
#fill in this part later
}else{
XG.1 <- -sqrt(abs(treat-probs.opt)/(probs.opt*(1-probs.opt)))*X
XG.12 <- -sqrt(abs(treat-probs.opt)/(probs.opt*(1-probs.opt)))*X1new
XW.1 <- X1new*(probs.opt-1+treat)^-1
XG.2 <- -sqrt(treat*(1-probs.opt)/probs.opt)*X
XG.22 <- -sqrt(treat*(1-probs.opt)/probs.opt)*diffX
XW.2 <- diffX*(treat/probs.opt-1)
}
W1<-rbind(t(XW.1),t(XW.2))
G<-cbind(t(XG.1)%*%XG.12,t(XG.2)%*%XG.22)/n
Omega <- (W1%*%t(W1)/n)
#g
W<-gmm.func1(beta.opt,invV=NULL)$invV
vcov<-ginv(G%*%W%*%t(G))%*%G%*%W%*%Omega%*%W%*%t(G)%*%ginv(G%*%W%*%t(G))
output<-list("coefficients"=beta.opt,"fitted.values"=probs.opt,"deviance"=deviance,"weights"=w.opt,
"y"=treat,"x"=X,"converged"=opt1$conv,"J"=J.opt,"var"=vcov,
"mle.J"=gmm.loss1(glm1$coef))
class(output)<- c("CBPS","glm","lm")
output
}
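## CBPSOptimal.2Treat() estimates the "optimal CBPS" moment conditions for a
## binary treatment from user-supplied baseline and difference models. The
## sketch below (not executed at load time) assumes the exported CBPS()
## interface forwards baseline.formula and diff.formula to this routine,
## mirroring the baselineX and diffX arguments above; all variable names are
## hypothetical.
if (FALSE) {
  fit.opt <- CBPS(treat ~ x1 + x2 + x3, data = dat,
                  baseline.formula = ~ x1 + x2,
                  diff.formula     = ~ x3)
  summary(fit.opt$weights)
}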
## ---- End of source file: CBPS/R/CBPSOptimalBinary.R ----
#' @title Calculate Variance-Covariance Matrix for Outcome Model
#'
#' @description
#' \code{vcov_outcome} Returns the variance-covariance matrix of the main
#' parameters of a fitted CBPS object.
#'
#' This adjusts the standard errors of the weighted regression of Y on Z for
#' uncertainty in the weights.
#'
#' ### @aliases vcov_outcome vcov_outcome.CBPSContinuous
#' @param object A fitted CBPS object.
#' @param Y The outcome.
#' @param Z The covariates (including the treatment and an intercept term) that
#' predict the outcome.
#' @param delta The coefficients from regressing Y on Z, weighting by the
#' cbpsfit$weights.
#' @param tol Tolerance for choosing whether to improve conditioning of the "M"
#' matrix prior to conversion. Equal to 1/(condition number), i.e. the
#' smallest eigenvalue divided by the largest.
#' @param lambda The amount to be added to the diagonal of M if the condition
#' of the matrix is worse than tol.
#' @return A matrix of the estimated covariances between the parameter
#' estimates in the weighted outcome regression, adjusted for uncertainty in
#' the weights.
#' @author Christian Fong, Chad Hazlett, and Kosuke Imai.
#' @references Lunceford and Davidian 2004.
#' @examples
#'
#' ###
#' ### Example: Variance-Covariance Matrix
#' ###
#'
#' ##Load the LaLonde data
#' data(LaLonde)
#' ## Estimate CBPS via logistic regression
#' fit <- CBPS(treat ~ age + educ + re75 + re74 + I(re75==0) + I(re74==0),
#' data = LaLonde, ATT = TRUE)
#' ## Get the variance-covariance matrix.
#' vcov(fit)
#'
#' @export vcov_outcome
#'
vcov_outcome<-function(object, Y, Z, delta, tol=10^(-5), lambda=0.01)
{
UseMethod("vcov_outcome")
}
#' vcov_outcome
#' @param object A fitted CBPS object.
#' @param Y The outcome.
#' @param Z The covariates (including the treatment and an intercept term) that predict the outcome.
#' @param delta The coefficients from regressing Y on Z, weighting by the cbpsfit$weights.
#' @param tol Tolerance for choosing whether to improve conditioning of the "M"
#' matrix prior to conversion. Equal to 1/(condition number), i.e. the
#' smallest eigenvalue divided by the largest.
#' @param lambda The amount to be added to the diagonal of M if the condition of the matrix is worse than tol.
#' @return Variance-Covariance Matrix for Outcome Model
#'
#' @export
#'
vcov_outcome.CBPSContinuous <- function(object, Y, Z, delta, tol=10^(-5), lambda=0.01){
Xtilde <- object$Xtilde
Ttilde <- object$Ttilde
w <- object$weights
beta.tilde <- object$beta.tilde
sigmasq.tilde <- object$sigmasq.tilde
N <- length(Y)
K <- ncol(Xtilde)
P <- ncol(Z)
Sdelta <- matrix(0, nrow = P, ncol = P)
Stheta <- matrix(0, nrow = P, ncol = K+1)
# Precompute residuals since we use them a lot
eps.beta <- as.vector(Ttilde - Xtilde%*%beta.tilde)
eps.delta <- as.vector(Y - Z%*%delta)
M11 <- apply(-2/sigmasq.tilde*eps.beta*Xtilde, 2, mean)
M12 <- mean(-1/sigmasq.tilde^2*eps.beta^2)
M22 <- apply(as.vector(1/(2*sigmasq.tilde)*w*(1 - 1/sigmasq.tilde*eps.beta^2)*Ttilde)*Xtilde, 2, mean)
M21 <- matrix(0, nrow = K, ncol = K)
for (i in 1:N){
# Just added a -1 to Sdelta, I think it's correct. Doesn't actually make a difference in V
Sdelta <- Sdelta - w[i]*Z[i,]%*%t(Z[i,])/N
M21 <- M21 + as.vector(-1/sigmasq.tilde*w[i]*Ttilde[i]*eps.beta[i])*Xtilde[i,]%*%t(Xtilde[i,])/N
Stheta <- Stheta + cbind(-1/sigmasq.tilde*w[i]*eps.beta[i]*eps.delta[i]*Z[i,]%*%t(Xtilde[i,]),
1/(2*sigmasq.tilde)*w[i]*(1 - 1/sigmasq.tilde*eps.beta[i]^2)*eps.delta[i]*Z[i,])/N
}
M <- rbind(c(M11, M12), cbind(M21,M22))
#Improve conditioning of M if necessary
cond.num=svd(M)$d[1]/svd(M)$d[nrow(M)]
if (cond.num>(1/tol)){M = M+lambda*diag(rep(1,nrow(M)))}
s <- as.vector(w*eps.delta)*Z
mtheta <- cbind(1/sigmasq.tilde*(eps.beta)^2 - 1,
as.vector(w*Ttilde)*Xtilde)
M.inv = solve(M)
inner <- matrix(0, nrow = P, ncol = P)
for (i in 1:N){
inner.part <- s[i,] - Stheta%*%M.inv%*%mtheta[i,]
inner <- inner + inner.part%*%t(inner.part)/N
}
Sdelta.inv = solve(Sdelta)
V <- Sdelta.inv %*% inner %*% t(Sdelta.inv)/N
return(V)
}
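## Hedged usage sketch for vcov_outcome with a continuous treatment (not
## executed at load time). The data frame `dat` with outcome `y`, treatment
## `dose`, and covariates `x1`, `x2` is hypothetical. The idea is to fit a
## weighted outcome regression of y on an intercept and the treatment, then
## adjust its standard errors for uncertainty in the CBPS weights.
if (FALSE) {
  fit.cont <- CBPS(dose ~ x1 + x2, data = dat)
  Z     <- cbind(1, dat$dose)                                      # intercept + treatment
  delta <- coef(lm(y ~ dose, data = dat, weights = fit.cont$weights))
  V     <- vcov_outcome(fit.cont, Y = dat$y, Z = Z, delta = delta)
  sqrt(diag(V))                                                    # adjusted standard errors
}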
## ---- End of source file: CBPS/R/analytic_vcov.R ----
#' hdCBPS: high-dimensional CBPS. Parses the formula object and passes the result to hdCBPS.fit, which calculates the ATE using the CBPS method in a high-dimensional setting.
#'
#' @aliases hdCBPS hdCBPS.fit
#' @param formula An object of class formula (or one that can be coerced to
#' that class): a symbolic description of the model to be fitted.
#' @param data An optional data frame, list or environment (or object coercible
#' by as.data.frame to a data frame) containing the variables in the model. If
#' not found in data, the variables are taken from environment(formula),
#' typically the environment from which CBPS is called.
#' @param na.action A function which indicates what should happen when the data
#' contain NAs. The default is set by the na.action setting of options, and is
#' na.fail if that is unset.
#' @param y An outcome variable.
#' @param ATT Option to calculate ATT
#' @param iterations An optional parameter for the maximum number of iterations
#' for the optimization. Default is 1000.
#' @param method Choose among "linear", "binomial", and "poisson".
#' @return
#' \item{ATT}{Average treatment effect on the treated.}
#' \item{ATE}{Average treatment effect.}
#' \item{s}{Standard Error.}
#' \item{fitted.values}{The fitted propensity score}
#' \item{coefficients1}{Coefficients for the treated propensity score}
#' \item{coefficients0}{Coefficients for the untreated propensity score}
#' \item{model}{The model frame}
#' @author Sida Peng
#'
#' @export hdCBPS
#'
hdCBPS <- function(formula, data, na.action, y, ATT = 0, iterations=1000, method="linear") {
if (missing(data))
data <- environment(formula)
call <- match.call()
family <- binomial()
mf <- match.call(expand.dots = FALSE)
m <- match(c("formula", "data", "na.action"), names(mf), 0L)
mf <- mf[c(1L, m)]
mf$drop.unused.levels <- TRUE
mf[[1L]] <- as.name("model.frame")
mf <- eval(mf, parent.frame())
mt <- attr(mf, "terms")
T <- model.response(mf, "any")
if (length(dim(T)) == 1L) {
nm <- rownames(T)
dim(T) <- NULL
if (!is.null(nm))
names(T) <- nm
}
X <- if (!is.empty.model(mt)) model.matrix(mt, mf)#[,-2]
else matrix(, NROW(T), 0L)
X<-cbind(1,X[,apply(X,2,sd)>0])
fit <- eval(call("hdCBPS.fit", x = X, y = y, treat = T, Att = ATT, iterations=iterations, methd=method))
fit$na.action <- attr(mf, "na.action")
xlevels <- .getXlevels(mt, mf)
fit$data<-data
fit$call <- call
fit$formula <- formula
fit$terms<-mt
fit
}
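## Hedged usage sketch for hdCBPS (not executed at load time). The simulated
## data below are purely illustrative: a sparse logistic treatment model and a
## linear outcome with many irrelevant covariates, the high-dimensional setting
## the glmnet-based fitting in hdCBPS.fit targets.
if (FALSE) {
  set.seed(1)
  n <- 500; p <- 200
  X  <- matrix(rnorm(n * p), n, p)
  tr <- rbinom(n, 1, plogis(X[, 1] - X[, 2]))
  y  <- 2 * tr + X[, 1] + rnorm(n)
  fit.hd <- hdCBPS(tr ~ X, y = y, ATT = 0, method = "linear")
  fit.hd$ATE   # estimated average treatment effect
  fit.hd$s     # its standard error
}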
hdCBPS.fit <- function(x,y,treat, Att, iterations=1000, methd="linear") {
n = dim(x)[1]
p = dim(x)[2]
y1hat = y[treat==1]
x1hat = x[treat==1,]
y0hat = y[treat==0]
x0hat = x[treat==0,]
if (methd =="linear"){
cov1 = cv.glmnet(x1hat,y1hat)
cov0 = cv.glmnet(x0hat,y0hat)
} else if (methd=="binomial") {
cov1 = cv.glmnet(x1hat,y1hat,family = "binomial",intercept=FALSE)
cov0 = cv.glmnet(x0hat,y0hat,family = "binomial",intercept=FALSE)
} else if (methd=="poisson") {
cov1 = cv.glmnet(x1hat,y1hat,family = "poisson",intercept=FALSE)
cov0 = cv.glmnet(x0hat,y0hat,family = "poisson",intercept=FALSE)
}
covb = cv.glmnet(x,treat,family="binomial")
S1 = which(as.logical(coef(cov1)!=0))
S0 = which(as.logical(coef(cov0)!=0))
##Generates ATE weights. Called by loss function, etc.
ATE.wt.func<-function(beta.curr, S, tt, X.wt, beta.ini= coef(covb)){
x2<-as.matrix(X.wt)
n2<-dim(x2)[1]
X2 = cbind(rep(1,n2),x2)
beta.all = beta.ini
beta.all[S] = beta.curr
theta.curr<-as.vector(X2%*%beta.all)
probs.curr<-1-(1+exp(theta.curr))^-1
if (tt == 0){
W<- (1-((1-treat)/(1-probs.curr)))
}else{
W<- (treat/probs.curr-1)
}
out<-list("W"=W)
out
}
##Generates ATE weights, nonlinear case. Called by loss function, etc.
ATE.wt.nl.func<-function(beta.curr, S, tt, X.wt, beta.ini= coef(covb)){
x2<-as.matrix(X.wt)
n2<-dim(x2)[1]
X2 = cbind(rep(1,n2),x2)
SS = c(1,S)
beta.all = beta.ini
beta.all[SS] = beta.curr
#beta.all = as.vector(beta.all)
theta.curr<-as.vector(X2%*%beta.all)
probs.curr<-1-(1+exp(theta.curr))^-1
if (tt == 0){
W<- (1-((1-treat)/(1-probs.curr)))
}else{
W<- (treat/probs.curr-1)
}
out<-list("W"=W)
out
}
##Generates ATT weights. Called by loss function, etc.
ATT.wt.func<-function(beta.curr, S, X.wt, beta.ini= coef(covb)){
x2<-as.matrix(X.wt)
n2<-dim(x2)[1]
X2 = cbind(rep(1,n2),x2)
beta.all = beta.ini
beta.all[S] = beta.curr
#beta.all = as.vector(beta.all)
theta.curr<-as.vector(X2%*%beta.all)
probs.curr<-1-(1+exp(theta.curr))^-1
W<- (treat-(((1-treat)*probs.curr)/(1-probs.curr)))
out<-list("W"=W)
out
}
##Generates ATT weights, nonlinear case. Called by loss function, etc.
ATT.wt.nl.func<-function(beta.curr, S, X.wt, beta.ini= coef(covb)){
x2<-as.matrix(X.wt)
n2<-dim(x2)[1]
X2 = cbind(rep(1,n2),x2)
SS = c(1,S)
beta.all = beta.ini
beta.all[SS] = beta.curr
#beta.all = as.vector(beta.all)
theta.curr<-as.vector(X2%*%beta.all)
probs.curr<-1-(1+exp(theta.curr))^-1
W<- (treat-(((1-treat)*probs.curr)/(1-probs.curr)))
out<-list("W"=W)
out
}
gmm.func<-function(beta.curr, S, tt, X.gmm, methd){
##Designate a few objects in the function.
x1<-as.matrix(X.gmm)
n1<-dim(x1)[1]
##Generate the vector of mean imbalance by weights.
if (methd =="linear"){
w.curr<-ATE.wt.func(beta.curr, S, tt, x1)
X1 = cbind(rep(1,n1),x1)
if (length(S) != 0){
w.curr.del<- t(X1[,S])%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}else{
w.curr.del = matrix(0, nrow = n1, ncol = 0)
}
}else if (methd=="poisson") {
w.curr<-ATE.wt.nl.func(beta.curr, S, tt, x1)
X1 = cbind(rep(1,n1),x1)
if (tt==1){
pweight = exp(as.vector(X1%*%coef(cov1)))
}else{
pweight = exp(as.vector(X1%*%coef(cov0)))
}
##Generate the vector of mean imbalance by weights.
if (length(S) != 0){
w.curr.del<- t(cbind(as.matrix(pweight), matrix(rep(pweight, length(S)),ncol=length(S))*X1[,S]))%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}else{
w.curr.del = t(as.matrix(pweight))%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}
}else if (methd=="binomial") {
w.curr<-ATE.wt.nl.func(beta.curr, S, tt, x1)
X1 = cbind(rep(1,n1),x1)
if (tt==1){
pweight1 = exp(as.vector(X1%*%coef(cov1)))/(1+exp(as.vector(X1%*%coef(cov1))))
pweight2 = exp(as.vector(X1%*%coef(cov1)))/(1+exp(as.vector(X1%*%coef(cov1))))^2
}else{
pweight1 = exp(as.vector(X1%*%coef(cov0)))/(1+exp(as.vector(X1%*%coef(cov0))))
pweight2 = exp(as.vector(X1%*%coef(cov0)))/(1+exp(as.vector(X1%*%coef(cov0))))^2
}
##Generate the vector of mean imbalance by weights.
if (length(S) != 0){
w.curr.del<- t(cbind(as.matrix(pweight1), matrix(rep(pweight2, length(S)),ncol=length(S))*X1[,S]))%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}else{
w.curr.del = t(as.matrix(pweight1))%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}
}
##Generate g-bar, as in the paper.
gbar<- w.curr.del
##Calculate the GMM loss.
loss1<-as.vector(t(gbar)%*%(gbar))
out1<-list("loss"=loss1)
out1
}
ATT.gmm.func<-function(beta.curr, S, X.gmm, methd){
##Designate a few objects in the function.
x1<-as.matrix(X.gmm)
n1<-dim(x1)[1]
##Generate the vector of mean imbalance by weights.
if (methd =="linear"){
w.curr<-ATT.wt.func(beta.curr, S, x1)
X1 = cbind(rep(1,n1),x1)
if (length(S) != 0){
w.curr.del<- t(X1[,S])%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}else{
w.curr.del = matrix(0, nrow = n1, ncol = 0)
}
}else if (methd=="poisson") {
w.curr<-ATT.wt.nl.func(beta.curr, S, x1)
X1 = cbind(rep(1,n1),x1)
pweight = exp(as.vector(X1%*%coef(cov0)))
##Generate the vector of mean imbalance by weights.
if (length(S) != 0){
w.curr.del<- t(cbind(as.matrix(pweight), matrix(rep(pweight, length(S)),ncol=length(S))*X1[,S]))%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}else{
w.curr.del = t(as.matrix(pweight))%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}
}else if (methd=="binomial") {
w.curr<-ATT.wt.nl.func(beta.curr, S, x1)
X1 = cbind(rep(1,n1),x1)
pweight1 = exp(as.vector(X1%*%coef(cov0)))/(1+exp(as.vector(X1%*%coef(cov0))))
pweight2 = exp(as.vector(X1%*%coef(cov0)))/(1+exp(as.vector(X1%*%coef(cov0))))^2
##Generate the vector of mean imbalance by weights.
if (length(S) != 0){
w.curr.del<- t(cbind(as.matrix(pweight1), matrix(rep(pweight2, length(S)),ncol=length(S))*X1[,S]))%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}else{
w.curr.del = t(as.matrix(pweight1))%*%(w.curr$W)
w.curr.del<- as.vector(w.curr.del)
}
}
##Generate g-bar, as in the paper.
gbar<- w.curr.del
##Calculate the GMM loss.
loss1<-as.vector(t(gbar)%*%(gbar))
out1<-list("loss"=loss1)
out1
}
tol = 1e-5
kk0 =0
kk1 =0
ATT.kk0 =0
diff0 = 1
diff1 = 1
ATT.diff0 = 1
gmm.loss0<-function(xx,...) gmm.func(xx, S=S0, tt=0, X.gmm = x, methd = methd)$loss
gmm.loss1<-function(xx,...) gmm.func(xx, S=S1, tt=1, X.gmm = x, methd = methd)$loss
if (methd =="linear"){
beta0 = optim(coef(covb)[S0], gmm.loss0, method = "Nelder-Mead")
while (diff0>tol & kk0<iterations){
beta0 = optim(beta0$par, gmm.loss0, method = "Nelder-Mead")
diff0 = beta0$value
kk0 = kk0+1
}
beta1 = optim(coef(covb)[S1], gmm.loss1, method = "Nelder-Mead")
while (diff1>tol & kk1<iterations){
beta1 = optim(beta1$par, gmm.loss1, method = "Nelder-Mead")
diff1 = beta1$value
kk1 = kk1+1
}
w.curr1<-ATE.wt.func(beta1$par, S1, 1, x)
w.curr0<-ATE.wt.func(beta0$par, S0, 0, x)
} else{
beta0 = optim(coef(covb)[c(1,S0)], gmm.loss0, method = "Nelder-Mead")
while (diff0>tol & kk0<iterations){
beta0 = optim(beta0$par, gmm.loss0, method = "Nelder-Mead")
diff0 = beta0$value
kk0 = kk0+1
}
beta1 = optim(coef(covb)[c(1,S1)], gmm.loss1, method = "Nelder-Mead")
while (diff1>tol & kk1<iterations){
beta1 = optim(beta1$par, gmm.loss1, method = "Nelder-Mead")
diff1 = beta1$value
kk1 = kk1+1
}
w.curr1<-ATE.wt.nl.func(beta1$par, S1, 1, x)
w.curr0<-ATE.wt.nl.func(beta0$par, S0, 0, x)
}
ATE = 1/(n)*(t(y1hat)%*%(w.curr1$W[treat==1]+1)+t(y0hat)%*%(w.curr0$W[treat==0]-1))
ATT = NULL
w = NULL
if (Att==1){
if (methd =="linear"){
ATT.gmm.loss<-function(xx,...) ATT.gmm.func(xx, S=S0, X.gmm = x, methd = methd)$loss
#beta0 = optim(coef(covb)[S0], gmm.loss0,method="BFGS")
ATT.beta0 = optim(coef(covb)[S0], ATT.gmm.loss, method = "Nelder-Mead")
while (ATT.diff0>tol & ATT.kk0<iterations){
ATT.beta0 = optim(ATT.beta0$par, ATT.gmm.loss, method = "Nelder-Mead")
ATT.diff0 = ATT.beta0$value
ATT.kk0 = ATT.kk0+1
}
ATT.w.curr0<-ATT.wt.func(ATT.beta0$par, S0, x)
X = cbind(rep(1,n),x)
ATT.beta.0 = coef(covb)
ATT.beta.0[S0] = ATT.beta0$par
ATT.beta.0 = as.matrix(ATT.beta.0)
ATT.theta.0<-as.vector(X%*%ATT.beta.0)
ATT.r_yhatb0<-1-(1+exp(ATT.theta.0))^-1
}else{
ATT.gmm.loss<-function(xx,...) ATT.gmm.func(xx, S=S0, X.gmm = x, methd = methd)$loss
#beta0 = optim(coef(covb)[S0], gmm.loss0,method="BFGS")
ATT.beta0 = optim(coef(covb)[c(1,S0)], ATT.gmm.loss, method = "Nelder-Mead")
while (ATT.diff0>tol & ATT.kk0<iterations){
ATT.beta0 = optim(ATT.beta0$par, ATT.gmm.loss, method = "Nelder-Mead")
ATT.diff0 = ATT.beta0$value
ATT.kk0 = ATT.kk0+1
}
ATT.w.curr0<-ATT.wt.nl.func(ATT.beta0$par, S0, x)
X = cbind(rep(1,n),x)
ATT.beta.0 = coef(covb)
ATT.beta.0[c(1,S0)] = ATT.beta0$par
ATT.beta.0 = as.matrix(ATT.beta.0)
ATT.theta.0<-as.vector(X%*%ATT.beta.0)
ATT.r_yhatb0<-1-(1+exp(ATT.theta.0))^-1
}
ATT = 1/(sum(treat))*sum(y1hat)-1/sum(ATT.w.curr0$W[treat==0])*(t(y0hat)%*%(ATT.w.curr0$W[treat==0]))
}
if (methd =="linear"){
X = cbind(rep(1,n),x)
beta.0 = coef(covb)
beta.0[S0] = beta0$par
beta.0 = as.matrix(beta.0)
theta.0<-as.vector(X%*%beta.0)
r_yhatb0<-1-(1+exp(theta.0))^-1
beta.1 = coef(covb)
beta.1[S1] = beta1$par
beta.1 = as.matrix(beta.1)
theta.1<-as.vector(X%*%beta.1)
r_yhatb1<-1-(1+exp(theta.1))^-1
r_yhat1 <- predict(cov1,newx=x1hat,s='lambda.min')
r_yhat0 <- predict(cov0,newx=x0hat,s='lambda.min')
r_yhat1full = as.vector(predict(cov1,newx=x,s='lambda.min'))
r_yhat0full = as.vector(predict(cov0,newx=x,s='lambda.min'))
} else{
X = cbind(rep(1,n),x)
beta.0 = coef(covb)
beta.0[c(1,S0)] = beta0$par
beta.0 = as.matrix(beta.0)
theta.0<-as.vector(X%*%beta.0)
r_yhatb0<-1-(1+exp(theta.0))^-1
beta.1 = coef(covb)
beta.1[c(1,S1)] = beta1$par
beta.1 = as.matrix(beta.1)
theta.1<-as.vector(X%*%beta.1)
r_yhatb1<-1-(1+exp(theta.1))^-1
r_yhat1 <- predict(cov1,newx=x1hat,s='lambda.min',type = "response")
r_yhat0 <- predict(cov0,newx=x0hat,s='lambda.min',type = "response")
r_yhat1full = as.vector(predict(cov1,newx=x,s='lambda.min', type = "response"))
r_yhat0full = as.vector(predict(cov0,newx=x,s='lambda.min', type = "response"))
}
delta_K <- sum((r_yhat1full-r_yhat0full-rep(ATE,n))^2)
sigma_1 <- sum((r_yhat1-y1hat)^2/r_yhatb1[treat==1])/n
sigma_0 <- sum((r_yhat0-y0hat)^2/(1-r_yhatb0[treat==0]))/n
s = sqrt((delta_K+sum(sigma_1/r_yhatb1)+sum(sigma_0/r_yhatb0))/n)/sqrt(n)
if (Att==1){
ATT.delta_K <- sum(ATT.r_yhatb0*(r_yhat1full-r_yhat0full-rep(ATT,n))^2)
w = (n/sum(treat))*sqrt((ATT.delta_K+sum(ATT.r_yhatb0[treat==1]*(r_yhat1-y1hat)^2)+sum(ATT.r_yhatb0[treat==0]^2*(r_yhat0-y0hat)^2/(1-ATT.r_yhatb0[treat==0])))/n)/sqrt(n)
}
fitted.values = rep(1,n)
fitted.values[treat==1] = 1/(w.curr1$W[treat==1]+1)
fitted.values[treat==0] = 1-1/(1-w.curr0$W[treat==0])
output =list()
output$ATE = ATE
output$ATT = ATT
output$s = s
output$w = w
output$test1 = w.curr1$W
output$test0 = w.curr0$W
output$coefficients1 = beta.1
output$coefficients0 = beta.0
output$fitted.values = fitted.values
output$fitted.y = y
output$fitted.x = x
output
}
## ---- End of source file: CBPS/R/hdCBPS.R ----
#' @title Non-Parametric Covariate Balancing Propensity Score (npCBPS) Estimation
#'
#' @description
#' \code{npCBPS} is a method to estimate weights interpretable as (stabilized)
#' inverse generalized propensity score weights, w_i = f(T_i)/f(T_i|X), without
#' actually estimating a model for the treatment to arrive at f(T|X) estimates.
#' In brief, this works by maximizing the empirical likelihood of observing the
#' values of treatment and covariates that were observed, while constraining
#' the weights to be those that (a) ensure balance on the covariates, and (b)
#' maintain the original means of the treatment and covariates.
#'
#' In the continuous treatment context, this balance on covariates means zero
#' correlation of each covariate with the treatment. In binary or categorical
#' treatment contexts, balance on covariates implies equal means on the
#' covariates for observations at each level of the treatment. When given a
#' numeric treatment, the software handles it continuously. To handle the
#' treatment as binary or categorical, it must be given as a factor.
#'
#' Furthermore, we apply a Bayesian variant that allows the correlation of each
#' covariate with the treatment to be slightly non-zero, as might be expected
#' in a given finite sample.
#'
#' Estimates non-parametric covariate balancing propensity score weights.
#'
#' ### @aliases npCBPS npCBPS.fit
#' @param formula An object of class \code{formula} (or one that can be coerced
#' to that class): a symbolic description of the model to be fitted.
#' @param data An optional data frame, list or environment (or object coercible
#' by as.data.frame to a data frame) containing the variables in the model. If
#' not found in data, the variables are taken from \code{environment(formula)},
#' typically the environment from which \code{CBPS} is called.
#' @param na.action A function which indicates what should happen when the data
#' contain NAs. The default is set by the na.action setting of options, and is
#' na.fail if that is unset.
#' @param corprior Prior hyperparameter controlling the expected amount of
#' correlation between each covariate and the treatment. Specifically, the
#' amount of correlation between the k-dimensional covariates, X, and the
#' treatment T after weighting is assumed to have prior distribution
#' MVN(0,sigma^2 I_k). We conceptualize sigma^2 as a tuning parameter to be
#' used pragmatically. Its default of 0.1 ensures that the balance constraints
#' are not too harsh, and that a solution is likely to exist. Once the
#' algorithm works at such a high value of sigma^2, the user may wish to
#' attempt values closer to 0 to get finer balance.
#' @param print.level Controls verbosity of output to the screen while npCBPS
#' runs. At the default of print.level=0, little output is produced. If
#' print.level>0, it outputs diagnostics including the log posterior
#' (log_post), the log empirical likelihood associated with the weights
#' (log_el), and the log prior probability of the (weighted) correlation of
#' treatment with the covariates.
#' @param ... Other parameters to be passed.
#' @return \item{weights}{The optimal weights} \item{y}{The treatment vector
#' used} \item{x}{The covariate matrix} \item{model}{The model frame}
#' \item{call}{The matched call} \item{formula}{The formula supplied}
#' \item{data}{The data argument} \item{log.p.eta}{The log density for the
#' (weighted) correlation of the covariates with the treatment, given the
#' choice of prior (\code{corprior})} \item{log.el}{The log empirical
#' likelihood of the observed data at the chosen set of IPW weights.}
#' \item{eta}{A vector describing the correlation between the treatment and
#' each covariate on the weighted data at the solution.} \item{sumw0}{The sum
#' of weights, provided as a check on convergence. This is always 1 when
#' convergence occurs unproblematically. If it differs from 1 substantially, no
#' solution perfectly satisfying the conditions was found, and the user may
#' consider a larger value of \code{corprior}.}
#' @author Christian Fong, Chad Hazlett, and Kosuke Imai
#' @references Fong, Christian, Chad Hazlett, and Kosuke Imai. ``Parametric
#' and Nonparametric Covariate Balancing Propensity Score for General Treatment
#' Regimes.'' Unpublished Manuscript.
#' \url{http://imai.princeton.edu/research/files/CBGPS.pdf}
#' @examples
#'
#' ##Generate data
#' data(LaLonde)
#'
#' ## Restricted to only two covariates so that it will run quickly.
#' ## Performance will remain good if the full LaLonde specification is used
#' fit <- npCBPS(treat ~ age + educ, data = LaLonde, corprior=.1/nrow(LaLonde))
#' plot(fit)
#'
#' @export npCBPS
#'
npCBPS <- function(formula, data, na.action, corprior=.01, print.level=0, ...) {
if (missing(data))
data <- environment(formula)
call <- match.call()
family <- binomial()
mf <- match.call(expand.dots = FALSE)
m <- match(c("formula", "data", "na.action"), names(mf), 0L)
mf <- mf[c(1L, m)]
mf$drop.unused.levels <- TRUE
mf[[1L]] <- as.name("model.frame")
mf <- eval(mf, parent.frame())
mt <- attr(mf, "terms")
Y <- model.response(mf, "any")
if (length(dim(Y)) == 1L) {
nm <- rownames(Y)
dim(Y) <- NULL
if (!is.null(nm))
names(Y) <- nm
}
X <- if (!is.empty.model(mt)) model.matrix(mt, mf)#[,-2]
else matrix(, NROW(Y), 0L)
X<-X[,apply(X,2,sd)>0]
fit <- eval(call("npCBPS.fit", X = X, treat = Y, corprior = corprior,
print.level = print.level))
fit$na.action <- attr(mf, "na.action")
xlevels <- .getXlevels(mt, mf)
fit$data<-data
fit$call <- call
fit$formula <- formula
fit$terms<-mt
fit
}
#' npCBPS.fit
#'
#' @param treat A vector of treatment assignments. Binary or multi-valued
#' treatments should be factors. Continuous treatments should be numeric.
#' @param X A covariate matrix.
#' @param corprior Prior hyperparameter controlling the expected amount of
#' correlation between each covariate and the treatment. Specifically, the
#' amount of correlation between the k-dimensional covariates, X, and the
#' treatment T after weighting is assumed to have prior distribution
#' MVN(0,sigma^2 I_k). We conceptualize sigma^2 as a tuning parameter to be
#' used pragmatically. Its default of 0.1 ensures that the balance constraints
#' are not too harsh, and that a solution is likely to exist. Once the
#' algorithm works at such a high value of sigma^2, the user may wish to
#' attempt values closer to 0 to get finer balance.
#' @param print.level Controls verbosity of output to the screen while npCBPS
#' runs. At the default of print.level=0, little output is produced. If
#' print.level>0, it outputs diagnostics including the log posterior
#' (log_post), the log empirical likelihood associated with the weights
#' (log_el), and the log prior probability of the (weighted) correlation of
#' treatment with the covariates.
#' @param ... Other parameters to be passed.
#'
npCBPS.fit=function(treat, X, corprior, print.level, ...){
D=treat
rescale.orig=TRUE
orig.X=X
#pre-processing:
X=X%*%solve(chol(var(X)))
X=scale(X,center=TRUE, scale=TRUE)
n=nrow(X)
eps=1/n
#Constraint matrix
if (is.numeric(D)){
print("Estimating npCBPS as a continuous treatment. To estimate for a binary or multi-valued treatment, use a factor.")
#re-orient each X to have positive correlation with T
X=X%*%diag(as.vector(sign(cor(X,D))),nrow=ncol(X))
D=scale(D,center=TRUE, scale=TRUE)
z=X*as.vector(D)
z=cbind(z,X,D)
ncon=ncol(z)
ncon_cor=ncol(X)*ncol(D)
}
if(is.factor(D)){
#For factor treatments
Td=as.matrix(model.matrix(~D-1))
conds=dim(Td)[2]
dimX=dim(X)[2]
#Now divide each column of Td by its sum
colsums=apply(Td,2,sum)
Td=Td%*%diag(1/colsums)
#Now subtract the last column from each of the others, and remove the last
subtractMat=Td[,conds]%*%t(as.matrix(rep(1, conds)))
Td=Td-subtractMat
Td=Td[,1:(conds-1)]
#Center and rescale Td now
Td=scale(x=Td, center = TRUE, scale=TRUE)
#form matrix z that will be needed to setup contrasts
z=matrix(NA,nrow=n,ncol=dimX*(conds-1))
z=t(sapply(seq(1:n),function(x) t(kronecker(Td[x,],X[x,]))))
#Check that correlation of Td with X is very close to colMeans of z
cor.init=as.vector(t(apply(X = X,MARGIN = 2,function(x) cor(Td,x))))
rescale.factors=cor.init/colMeans(z)
if (print.level>0){print(rescale.factors)}
#Add additional constraints that E[wX*]=0, if desired
#NB: I think we need another constraint to ensure something like E[wT*]=0
ncon_cor=dim(z)[2] #keep track of number of constraints not including the additional mean constraint
z=cbind(z,X)
ncon=dim(z)[2] #num constraints including mean constraints
#rm(Td)
}
#-----------------------------------------------
# Functions we will need
#-----------------------------------------------
llog = function(z, eps){
ans = z
avoidNA = !is.na(z)
lo = (z < eps) & avoidNA
ans[lo] = log(eps) - 1.5 + 2 * z[lo]/eps - 0.5 * (z[lo]/eps)^2
ans[!lo] = log(z[!lo])
ans
}
llogp = function(z, eps){
ans = z
avoidNA = !is.na(z)
lo = (z < eps) & avoidNA
ans[lo] = 2/eps - z[lo]/eps^2
ans[!lo] = 1/z[!lo]
ans
}
log_elgiven_eta=function(par,eta,z,eps,ncon_cor){
ncon=ncol(z)
gamma=par
eta_long=as.matrix(c(eta, rep(0,ncon-ncon_cor)))
#matrix version of eta for vectorization purposes
eta_mat=eta_long%*%c(rep(1,nrow(z)))
arg = (n + t(gamma)%*%(eta_mat-t(z)))
#used to be: arg = (1 + t(gamma)%*%(t(z)-eta_mat))
log_el=-sum(llog(z=arg,eps=eps))
return(log_el)
}
get.w=function(eta,z, sumw.tol=0.05, eps){
gam.init=rep(0, ncon)
opt.gamma.given.eta=optim(par=gam.init, eta=eta, method="BFGS", fn=log_elgiven_eta, z=z, eps=eps, ncon_cor=ncon_cor, control=list(fnscale=1))
gam.opt.given.eta=opt.gamma.given.eta$par
eta_long=as.matrix(c(eta, rep(0,ncon-ncon_cor)))
#matrix version of eta for vectorization purposes
eta_mat=eta_long%*%c(rep(1,nrow(z)))
arg_temp = (n + t(gam.opt.given.eta)%*%(eta_mat-t(z)))
#just use 1/x instead of the derivative of the pseudo-log
w=as.numeric(1/arg_temp)
sum.w=sum(w)
#scale: should sum to 1 when actually applied:
w_scaled=w/sum.w
if (abs(1-sum.w)<=sumw.tol){log_el=-sum(log(w_scaled))}
if (abs(1-sum.w)>=sumw.tol){log_el=-sum(log(w_scaled))-10^4*(1+abs(1-sum.w))}
R=list()
R$w=w
R$sumw=sum.w
R$log_el=log_el
R$el.gamma=gam.opt.given.eta[1:ncon_cor]
#R$grad.gamma=w*(eta_mat-t(z)) #gradient w.r.t. gamma
return(R)
}
#------------------------
# Some diagnostics:
# (a) is the eta really the cor(X,T) you end up with?
# (b) is balance on X and T (at 0) is maintained
#------------------------
## eta= (.1,.1,...,.1) should produce w's that produce weighted cov = (.1,.1,...)
#test.w=get.w(eta=rep(.1,ncon_cor),z)
##check convergence: is sumw near 1?
#test.w$sumw
##get w
#wtest=test.w$w
##check weighted covariances: are they near 0.10?
#sapply(seq(1,5), function(x) sum(X[,x]*T*wtest))
##means of X and T: are they near 0?
#sapply(seq(1,5), function(x) sum(X[,x]*wtest))
#sum(T*wtest)
log_post = function(par,eta.to.be.scaled,eta_prior_sd,z, eps=eps, sumw.tol=.001){
#get log(p(eta))
eta_now=par*eta.to.be.scaled
log_p_eta=sum(-.5*log(2*pi*eta_prior_sd^2) - (eta_now^2)/(2*eta_prior_sd^2))
#get best log_el for this eta
el.out=get.w(eta=eta_now,z=z, sumw.tol=sumw.tol, eps=eps)
#el.gamma=el.out$el.gamma
#put it together into log(post)
c=1 #in case we want to rescale the log(p(eta)), as sigma/c would.
log_post=el.out$log_el+c*log_p_eta
if(print.level>0){print(c(log_post, el.out$log_el, log_p_eta))}
return(log_post)
}
###-----------------------------------------------------------
### The main event
###-----------------------------------------------------------
#Now the outer optimization over eta
#setup the prior
eta_prior_sd=rep(corprior,ncon_cor)
#get original correlations
if (is.numeric(D)){eta.init=sapply(seq(1:ncon_cor), function(x) cor(X[,x],D))}
#for factor treatment, there is probably a better analog to the initial correlation,
if (is.factor(D)){
eta.init=cor.init
}
#get vector of 1's long enough to be our dummy that gets rescaled to form eta if we want
#constant etas:
eta.const=rep(1, ncon_cor)
#note that as currently implemented, these are only the non-zero elements of eta that correspond
# to cor(X,T). For additional constraints that hold down the mean of X and T we are assuming
# eta=0 effectively. They get padded in within the optimization.
#Determine if we want to rescale 1's or rescale the original correlations
#rescale.orig=FALSE
if(rescale.orig==TRUE){eta.to.be.scaled=eta.init}else{eta.to.be.scaled=eta.const}
eta.optim.out=optimize(f=log_post, interval=c(-1,1), eta.to.be.scaled=eta.to.be.scaled,
eps=eps, sumw.tol=.001, eta_prior_sd=eta_prior_sd,z=z, maximum=TRUE)
#Some useful values:
par.opt=eta.optim.out$maximum
eta.opt=par.opt*eta.to.be.scaled
log.p.eta.opt=sum(-.5*log(2*pi*eta_prior_sd^2) - (eta.opt^2)/(2*eta_prior_sd^2))
el.out.opt=get.w(eta=eta.opt,z=z, eps=eps)
sumw0=sum(el.out.opt$w)
w=el.out.opt$w/sumw0
log.el.opt=el.out.opt$log_el
R=list()
R$par=par.opt
R$log.p.eta=log.p.eta.opt
R$log.el=log.el.opt
R$eta=eta.opt
R$sumw0=sumw0 #sum of original w prior to any corrective rescaling
R$weights=w
R$y=D
R$x=orig.X
class(R)<-"npCBPS"
return(R)
}
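# Illustrative low-level call (a hedged sketch on simulated data; the formula
# interface npCBPS() above is the intended entry point, and every object name
# here is hypothetical):
# set.seed(1)
# X  <- cbind(age = rnorm(200), educ = rnorm(200))
# Tr <- 0.5 * X[, 1] + rnorm(200)                   # continuous treatment
# fit <- npCBPS.fit(treat = Tr, X = X, corprior = 0.1, print.level = 0)
# fit$sumw0            # should be close to 1 when a solution was found
# summary(fit$weights) # stabilized inverse generalized propensity score weights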
#' Calls the appropriate plot function, based on the number of treatments
#' @param x an object of class \dQuote{CBPS} or \dQuote{npCBPS}, usually, a
#' result of a call to \code{CBPS} or \code{npCBPS}.
#' @param covars Indices of the covariates to be plotted (excluding the intercept). For example,
#' if only the first two covariates from \code{balance} are desired, set \code{covars} to 1:2.
#' The default is \code{NULL}, which plots all covariates.
#' @param silent If set to \code{FALSE}, returns the imbalances used to
#' construct the plot. Default is \code{TRUE}, which returns nothing.
#' @param ... Additional arguments to be passed to balance.
#'
#' @export
#'
plot.npCBPS<-function(x, covars = NULL, silent = TRUE, ...){
bal.x<-balance(x)
if(is.numeric(x$y)) {out<-plot.CBPSContinuous(x, covars, silent, ...)}
else {out<-plot.CBPS(x, covars, silent, ...)}
if(!is.null(out)) return(out)
}
#' Calls the appropriate balance function based on the number of treatments
#'
#' @param object A CBPS, npCBPS, or CBMSM object.
#' @param ... Other parameters to be passed.
#'
#' @export
#'
balance.npCBPS<-function(object, ...){
if(is.numeric(object$y)) {out<-balance.CBPSContinuous(object, ...)}
else {out<-balance.CBPS(object, ...)}
out
}
|
/scratch/gouwar.j/cran-all/cranData/CBPS/R/npCBPS.R
|
".onAttach" <- function(lib, pkg) {
mylib <- dirname(system.file(package = pkg))
title <- packageDescription(pkg, lib.loc = mylib)$Title
ver <- packageDescription(pkg, lib.loc = mylib)$Version
author <- packageDescription(pkg, lib.loc = mylib)$Author
packageStartupMessage(pkg, ": ", title, "\nVersion: ", ver, "\nAuthors: ", author, "\n")
}
|
/scratch/gouwar.j/cran-all/cranData/CBPS/R/onAttach.R
|
#' CBS_ITC
#'
#' Fit either a 1-piece or 2-piece CBS latent utility function to binary intertemporal choice data.
#'
#' The input data has n choices (ideally n > 100) between two reward options.
#' Option 1 is receiving \code{Amt1} in \code{Delay1} and Option 2 is receiving \code{Amt2} in \code{Delay2} (e.g., $40 in 20 days vs. $20 in 3 days).
#' One of the two options may be immediate (i.e., delay = 0; e.g., $40 in 20 days vs. $20 today).
#' \code{choice} should be 1 if option 1 is chosen, 0 if option 2 is chosen.
#'
#' @param choice Vector of 0s and 1s. 1 if the choice was option 1, 0 if the choice was option 2.
#' @param Amt1 Vector of positive real numbers. Reward amount of choice 1.
#' @param Delay1 Vector of positive real numbers. Delay until the reward of choice 1.
#' @param Amt2 Vector of positive real numbers. Reward amount of choice 2.
#' @param Delay2 Vector of positive real numbers. Delay until the reward of choice 2.
#' @param numpiece Either 1 or 2. Number of CBS pieces to use.
#' @param numfit Number of model fits to perform from different starting points. If not provided, numfit = 10*numpiece
#' @return A list containing the following:
#' \itemize{
#' \item \code{type}: either 'CBS1' or 'CBS2' depending on the number of pieces
#' \item \code{LL}: log likelihood of the model
#' \item \code{numparam}: number of total parameters in the model
#' \item \code{scale}: scaling factor of the logit model
#' \item \code{xpos}: x coordinates of the fitted CBS function
#' \item \code{ypos}: y coordinates of the fitted CBS function
#' \item \code{AUC}: area under the curve of the fitted CBS function. Normalized to be between 0 and 1.
#' \item \code{normD} : The domain of CBS function runs from 0 to \code{normD}. Specifically, this is the constant used to normalize all delays between 0 and 1, since CBS is fitted in a unit square first and then scaled up.
#' }
#' @examples
#' # Fit example ITC data with 2-piece CBS function.
#' # Load example data (included with package).
#' # Each row is a choice between option 1 (Amt at Delay) vs option 2 (20 now).
#' Amount1 = ITCdat$Amt1
#' Delay1 = ITCdat$Delay1
#' Amount2 = 20
#' Delay2 = 0
#' Choice = ITCdat$Choice
#'
#' # Fit the model
#' out = CBS_ITC(Choice,Amount1,Delay1,Amount2,Delay2,2)
#'
#' # Plot the choices (x = Delay, y = relative amount : 20 / delayed amount)
#' plot(Delay1[Choice==1],20/Amount1[Choice==1],type = 'p',col="blue",xlim=c(0, 180), ylim=c(0, 1))
#' points(Delay1[Choice==0],20/Amount1[Choice==0],type = 'p',col="red")
#'
#' # Plot the fitted CBS
#' x = 0:out$normD
#' lines(x,CBSfunc(out$xpos,out$ypos,x),col="black")
#' @export
CBS_ITC <- function(choice,Amt1,Delay1,Amt2,Delay2,numpiece,numfit=NULL){
CBS_error(choice,Amt1,Delay1,Amt2,Delay2,numpiece,numfit) # error checking
minpad = 1e-04; maxpad = 1-minpad # pad around bounds because the solving algorithm tests values around the bounds
if(is.null(numfit)){numfit = 10*numpiece} # if not provided, use default number of numfit
# normalizing delay to [0 1] for easier parameter search
if(any(Delay1 > 1) | any(Delay2 > 1)){
nD <- max(Delay1,Delay2); Delay1 <- Delay1/nD; Delay2 <- Delay2/nD
}
else{ nD <- 1 }
# parameter bounds
lb <- c(-36,rep(minpad,6*numpiece-1)); ub <- c(36,rep(maxpad,6*numpiece-1))
if (numpiece == 1){ # active parameters (6): logbeta, x2, x3, y2, y3, y4
A = rbind(c(0,0,0,0,-1,1),c(0,0,0,-1,0,1)); B = rep(-minpad,2) # linear constraints: y4-y3<0, y4-y2<0
confun = NULL # no non-linear constraints
} else if (numpiece == 2){ # active parameters (12): logbeta, x2,x3,x4,x5,x6, y2,y3,y4,y5,y6,y7
# linear constraints:
A = rbind(c(0,1,0,-1,0,0, numeric(6)), # x2-x4<0
c(0,0,1,-1,0,0, numeric(6)), # x3-x4<0
c(0,0,0,1,-1,0, numeric(6)), # x4-x5<0
c(0,0,0,1,0,-1, numeric(6)), # x4-x6<0
c(numeric(6), -1,0,1,0,0,0), # y4-y2<0
c(numeric(6), 0,-1,1,0,0,0), # y4-y3<0
c(numeric(6), 0,0,-1,1,0,0), # y5-y4<0
c(numeric(6), 0,0,-1,0,1,0), # y6-y4<0
c(numeric(6), 0,0,0,-1,0,1), # y7-y5<0
c(numeric(6), 0,0,0,0,-1,1)) # y7-y6<0
B = rep(-minpad,10)
confun = twopiece_nonlincon
}
# optimizer input and options
funcinput <- list(objfun = function(x) ITCnegLL(x,Amt1,Delay1,Amt2,Delay2,choice,3*numpiece), confun = confun, A = A, B = B,
Aeq = NULL, Beq = NULL, lb = lb, ub = ub, tolX = 1e-04, tolFun = 1e-04, tolCon = minpad, maxnFun = 1e+07, maxIter = 400)
# fitting
mdl <- CBS_fitloop(funcinput,ITCrandstartpoint(numpiece,numfit))
# organizing output
LL <- -mdl$fn*length(choice)
xpos <- c(0,mdl$par[2:(3*numpiece)],1)
ypos <- c(1,mdl$par[(3*numpiece+1):(6*numpiece)])
return( list("type" = paste("CBS",numpiece,sep=""), "LL" = LL, "numparam" = 6*numpiece, "scale" = exp(mdl$par[1]),"xpos"= nD*xpos, "ypos"=ypos, "AUC" = CBSfunc(xpos,ypos),"normD"=nD) )
}
#' ITCnegLL
#'
#' Calculates per-trial neg log-likelihood of a CBS ITC model.
#' \code{cutoff} marks the index of x that corresponds to last xpos
#' @noRd
ITCnegLL <- function(x,A1,V1,A2,V2,Ch,cutoff){
yhat1 <- CBSfunc(c(0,x[2:cutoff],1), c(1,x[(cutoff+1):length(x)]), V1)
yhat2 <- CBSfunc(c(0,x[2:cutoff],1), c(1,x[(cutoff+1):length(x)]), V2)
return(negLL_logit(x[1],A1,yhat1,A2,yhat2,Ch))
}
#' ITCrandstartpoint
#'
#' Provides starting points for the CBS fitting function.
#' @noRd
ITCrandstartpoint <- function(numpiece,numpoints){
sp <- seq(0.12,0.88,length.out=numpoints)
if(numpiece == 1){return( cbind(numeric(numpoints),sp,sp,sp,sp,rep(0.01,numpoints)) )}
else {return( cbind(numeric(numpoints),sp-0.11,sp-0.11,sp,sp+0.11,sp+0.11,sp+0.11,sp+0.11,sp,sp-0.11,sp-0.11,rep(0.01,numpoints)) )}
}
|
/scratch/gouwar.j/cran-all/cranData/CBSr/R/CBS_ITC.R
|
#' CBS_RC
#'
#' Fit either a 1-piece or 2-piece CBS latent utility function to binary risky choice data.
#'
#' The input data has n choices (ideally n > 100) between two reward options.
#' Option 1 is receiving \code{Amt1} with probability \code{Prob1} and Option 2 is receiving \code{Amt2} with probability \code{Prob2} (e.g., $40 with 53\% chance vs. $20 with 90\% chance).
#' One of the two options may be certain (i.e., prob = 1; e.g., $40 with 53\% chance vs. $20 for sure).
#' \code{choice} should be 1 if option 1 is chosen, 0 if option 2 is chosen.
#'
#' @param choice Vector of 0s and 1s. 1 if the choice was option 1, 0 if the choice was option 2.
#' @param Amt1 Vector of positive real numbers. Reward amount of choice 1.
#' @param Prob1 Vector of positive real numbers between 0 and 1. Probability of winning the reward of choice 1.
#' @param Amt2 Vector of positive real numbers. Reward amount of choice 2.
#' @param Prob2 Vector of positive real numbers between 0 and 1. Probability of winning the reward of choice 2.
#' @param numpiece Either 1 or 2. Number of CBS pieces to use.
#' @param numfit Number of model fits to perform from different starting points. If not provided, numfit = 10*numpiece
#' @return A list containing the following:
#' \itemize{
#' \item \code{type}: either 'CBS1' or 'CBS2' depending on the number of pieces
#' \item \code{LL}: log likelihood of the model
#' \item \code{numparam}: number of total parameters in the model
#' \item \code{scale}: scaling factor of the logit model
#' \item \code{xpos}: x coordinates of the fitted CBS function
#' \item \code{ypos}: y coordinates of the fitted CBS function
#' \item \code{AUC}: area under the curve of the fitted CBS function. Normalized to be between 0 and 1.
#' }
#' @examples
#' # Fit example Risky choice data with 2-piece CBS function.
#' # Load example data (included with package).
#' # Each row is a choice between option 1 (Amt with prob) vs option 2 (20 for 100\%).
#' Amount1 = RCdat$Amt1
#' Prob1 = RCdat$Prob1
#' Amount2 = 20
#' Prob2 = 1
#' Choice = RCdat$Choice
#'
#' # Fit the model
#' out = CBS_RC(Choice,Amount1,Prob1,Amount2,Prob2,2)
#'
#' # Plot the choices (x = Delay, y = relative amount : 20 / risky amount)
#' plot(Prob1[Choice==1],20/Amount1[Choice==1],type = 'p',col="blue",xlim=c(0, 1), ylim=c(0, 1))
#' points(Prob1[Choice==0],20/Amount1[Choice==0],type = 'p',col="red")
#'
#' # Plot the fitted CBS
#' x = seq(0,1,.01)
#' lines(x,CBSfunc(out$xpos,out$ypos,x))
#' @export
CBS_RC <- function(choice,Amt1,Prob1,Amt2,Prob2,numpiece,numfit=NULL){
CBS_error(choice,Amt1,Prob1,Amt2,Prob2,numpiece,numfit) # error checking
minpad = 1e-04; maxpad = 1-minpad # pad around bounds because the solving algorithm tests values around the bounds
if(is.null(numfit)){numfit = 10*numpiece} # if not provided, use default number of numfit
# checking to make sure probability is within 0 and 1
if(any(Prob1>1) | any(Prob2>1)){stop("prob not within [0 1]")}
# number of parameters in the model
numparam <- 6*numpiece-1
# parameter bounds
lb <- c(-36,rep(minpad,numparam-1)); ub <- c(36,rep(maxpad,numparam-1))
if (numpiece == 1){ # active parameters (5): logbeta, x2, x3, y2, y3
A = NULL; B = NULL; # no linear constraints
confun = NULL # no non-linear constraints
} else if (numpiece == 2){ # active parameters (11): logbeta, x2,x3,x4,x5,x6, y2,y3,y4,y5,y6
# linear constraints:
A = rbind(c(0,1,0,-1,0,0, numeric(5)), # x2-x4<0
c(0,0,1,-1,0,0, numeric(5)), # x3-x4<0
c(0,0,0,1,-1,0, numeric(5)), # x4-x5<0
c(0,0,0,1,0,-1, numeric(5)), # x4-x6<0
c(numeric(6), 1,0,-1,0,0), # y2-y4<0
c(numeric(6), 0,1,-1,0,0), # y3-y4<0
c(numeric(6), 0,0,1,-1,0), # y4-y5<0
c(numeric(6), 0,0,1,0,-1)) # y4-y6<0
B = rep(-minpad,8)
confun = twopiece_nonlincon
}
# optimizer input and options
funcinput <- list(objfun = function(x) RCnegLL(x,Amt1,Prob1,Amt2,Prob2,choice,(numparam+1)/2), confun = confun, A = A, B = B,
Aeq = NULL, Beq = NULL, lb = lb, ub = ub, tolX = 1e-04,tolFun = 1e-04, tolCon = minpad, maxnFun = 1e+07, maxIter = 400)
# fitting
mdl <- CBS_fitloop(funcinput,RCrandstartpoint(numpiece,numfit))
# organizing output
LL <- -mdl$fn*length(choice)
xpos <- c(0,mdl$par[2:((numparam+1)/2)],1)
ypos <- c(0,mdl$par[((numparam+1)/2 +1):numparam],1)
return( list("type"=paste("CBS",numpiece,sep=""), "LL" = LL, "numparam" = numparam, "scale" = exp(mdl$par[1]),"xpos"=xpos, "ypos"=ypos, "AUC" = CBSfunc(xpos,ypos)) )
}
#' RCnegLL
#'
#' Calculates per-trial neg log-likelihood of a CBS RC model.
#' \code{cutoff} marks the index of x that corresponds to last xpos
#' @noRd
RCnegLL <- function(x,A1,V1,A2,V2,Ch,cutoff){
yhat1 <- CBSfunc(c(0,x[2:cutoff],1), c(0,x[(cutoff+1):length(x)],1), V1)
yhat2 <- CBSfunc(c(0,x[2:cutoff],1), c(0,x[(cutoff+1):length(x)],1), V2)
return(negLL_logit(x[1],A1,yhat1,A2,yhat2,Ch))
}
#' RCrandstartpoint
#'
#' Provides starting points for the CBS fitting function.
#' @noRd
RCrandstartpoint <- function(numpiece,numpoints){
sp <- seq(0.12,0.88,length.out=numpoints)
if(numpiece == 1){return( cbind(numeric(numpoints),1-sp,1-sp,sp,sp) )}
else {return( cbind(numeric(numpoints),1-sp-0.11,1-sp-0.11,1-sp,1-sp+0.11,1-sp+0.11,sp-0.11,sp-0.11,sp,sp+0.11,sp+0.11) )}
}
|
/scratch/gouwar.j/cran-all/cranData/CBSr/R/CBS_RC.R
|
#' CBSfunc
#'
#' Calculate either the Area Under the Curve (AUC) of a CBS function, or calculate the y coordinates of CBS function given x.
#' @param xpos Vector of real numbers of length 1+3n (n = 1, 2, 3, ...), corresponding to Bezier points' x-coordinates of a CBS function
#' @param ypos Vector of real numbers of length 1+3n (n = 1, 2, 3, ...), corresponding to Bezier points' y-coordinates of a CBS function
#' @param x Vector of real numbers, corresponding to x-coordinates of a CBS function. Default value is Null.
#' @return If x is provided, return y coordinates corresponding to x. If x is not provided, return AUC.
#' @examples
#' CBSfunc(c(0,0.3,0.6,1),c(0.5, 0.2, 0.7, 0.9))
#' CBSfunc(c(0,0.3,0.6,1),c(0.5, 0.2, 0.7, 0.9),seq(0,1,0.1))
#' @export
CBSfunc <- function(xpos,ypos,x = NULL){
xpos <- as.double(xpos); ypos <- as.double(ypos)
if (length(xpos) != length(ypos)){stop("length of xpos and ypos different!")}
if (length(xpos) < 4){stop("length of xpos and ypos too short. They must have at least 4 elements")}
if (length(xpos)%%3 != 1){stop("unexpected length of xpos and ypos. They should be 3n+1 (n = 1, 2, ...)")}
if (is.null(x)){ #x is not provided. hence calculating AUC
return(CBSAUC(xpos,ypos))
} else { # x is provided. hence calculating yhat
return(rJava::.jcall("CBScalc", returnSig = "[D","getyhat",xpos,ypos,rJava::.jarray(as.double(x))))
}
}
#' CBSAUC
#'
#' calculates area under the curve of the entire CBS chain by calculating local AUCs for each piece and adding them up
#' @noRd
CBSAUC <- function(xpos,ypos){
AUC = 0;
for(i in seq(1,length(xpos)-1,3)){
AUC = AUC + partialAUC(xpos[i],xpos[i+1],xpos[i+2],xpos[i+3],ypos[i],ypos[i+1],ypos[i+2],ypos[i+3])
}
return(AUC)
}
#' partialAUC
#'
#' calculates area under the curve of a single piece of CBS using an analytic formula
#' checks for CBS function constraint.
#' @noRd
partialAUC <- function(x1,x2,x3,x4,y1,y2,y3,y4){
check1 = -sqrt((x4-x3)*(x2-x1)) < (x3-x2)
check2 = x1 <= x2
check3 = x3 <= x4
if (!check1 || !check2 || !check3){
warning("CBS x coordinates not a monotonic function of t. Multiple y for x may exist. AUC may be inaccurate")
}
return((6*x2*y1-6*x1*y2-10*x1*y1-3*x1*y3+3*x3*y1-x1*y4-3*x2*y3+3*x3*y2+x4*y1-3*x2*y4+3*x4*y2-6*x3*y4+6*x4*y3+10*x4*y4)/20)
}
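# Sanity check (illustrative only): a cubic Bezier whose control points all lie on
# the identity line, (0,0), (1/3,1/3), (2/3,2/3), (1,1), traces y = x, so its area
# under the curve is exactly 0.5.
# partialAUC(0, 1/3, 2/3, 1, 0, 1/3, 2/3, 1)      # 0.5
# CBSAUC(c(0, 1/3, 2/3, 1), c(0, 1/3, 2/3, 1))    # same value via the piecewise wrapper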
|
/scratch/gouwar.j/cran-all/cranData/CBSr/R/CBSfunc.R
|
#' Sample participant data from a binary intertemporal choice task (aka delay discounting task)
#'
#' A dataset containing one sample participant's 120 binary choices between a delayed monetary option (\code{Amt1} in \code{Delay1}) and an immediate monetary option ($20 now).
#' The immediate monetary option was always '$20 now' across all trials
#'
#' @format A data frame with 120 rows and 3 variables:
#' \describe{
#' \item{Amt1}{Delayed reward amount, in dollars}
#' \item{Delay1}{Delay until the receipt of \code{Amt1}, in days}
#' \item{Choice}{Choice between binary options. \code{Choice==1} means participant chose the delayed option (i.e., \code{Amt1} in \code{Delay1} days). \code{Choice==0} means participant chose the immediate option (i.e., $20 now)}
#' }
#' @source Kable, J. W., Caulfield, M. K., Falcone, M., McConnell, M., Bernardo, L., Parthasarathi, T., ... & Diefenbach, P. (2017). No effect of commercial cognitive training on brain activity, choice behavior, or cognitive performance. Journal of Neuroscience, 37(31), 7390-7402.
"ITCdat"
#' Sample participant data from a binary risky choice task (aka risk aversion task)
#'
#' A dataset containing one sample participant's 120 binary choices between a probabilistic monetary option (\code{Amt1} with \code{Prob1} chance of winning) and a certain monetary option ($20 for sure).
#' The certain monetary option was always '$20 for sure' across all trials
#'
#' @format A data frame with 120 rows and 3 variables:
#' \describe{
#' \item{Amt1}{Probabilistic reward amount, in dollars}
#' \item{Prob1}{Probability of winning \code{Amt1}, if it were to be chosen}
#' \item{Choice}{Choice between binary options. \code{Choice==1} means participant chose the probabilistic option (i.e., \code{Amt1} with \code{Prob1} chance of winning). \code{Choice==0} means participant chose the certain option (i.e., $20 for sure)}
#' }
#' @source Kable, J. W., Caulfield, M. K., Falcone, M., McConnell, M., Bernardo, L., Parthasarathi, T., ... & Diefenbach, P. (2017). No effect of commercial cognitive training on brain activity, choice behavior, or cognitive performance. Journal of Neuroscience, 37(31), 7390-7402.
"RCdat"
|
/scratch/gouwar.j/cran-all/cranData/CBSr/R/data.R
|
#' twopiece_nonlincon
#'
#' Provides non-linear equality and inequality constraint for a 2-piece CBS.
#' @noRd
twopiece_nonlincon <- function(x){
minhandle = 0.1
x3 = x[3]; x4 = x[4]; x5 = x[5]; y3 = x[8]; y4 = x[9]; y5 = x[10]
# non-linear inequalities: 0.1^2 -(x4-x3)^2 -(y4-y3)^2 < 0, 0.1^2-(x5-x4)^2-(y5-y4)^2 < 0
c_ineq = c(minhandle^2 -(x4-x3)^2 -(y4-y3)^2, minhandle^2 -(x5-x4)^2 -(y5-y4)^2)
# non-linear equalities: (x4-x3)/(y4-y3) = (x5-x4)/(y5-y4)
ceq = (x4-x3)*(y5-y4)-(x5-x4)*(y4-y3)
return(list(c=c_ineq,ceq=ceq))
}
#' negLL_logit
#'
#' Calculates per-trial negative log-likelihood of a binary logit model: logit(p(y=1)) = exp(scale) x (A1 x yhat1 - A2 x yhat2)
#' @noRd
negLL_logit <- function(scale,A1,yhat1,A2,yhat2,Ch){
DV <- A1*yhat1 - A2*yhat2 # diff between utilities
DV[Ch==0] = -DV[Ch==0] # utility difference toward choice
reg = -exp(scale)*DV # scale parameter
logp = -log(1+exp(reg)) # directly calculating logp
logp[reg>709] = -reg[reg>709]; # log(realmax) is about 709.7827. (e.g., try log(exp(709)) vs. log(exp(710)))
return(-mean(logp)) # making per-trial LL
}
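# Quick check (illustrative): when the two options have equal utility the choice
# probability is 0.5 whatever the scale parameter, so the per-trial negative
# log-likelihood is log(2) ~ 0.6931.
# negLL_logit(scale = 0, A1 = 1, yhat1 = 0.5, A2 = 1, yhat2 = 0.5, Ch = 1)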
#' CBS_error
#'
#' Checks for input errors on CBS_ITC and CBS_RC
#' @noRd
CBS_error <- function(choice,Amt1,Var1,Amt2,Var2,numpiece,numfit){
if(any(choice != 0 & choice != 1)){stop("Choice should be a vector of 0 or 1")}
if(any(Amt1 < 0) | any(Amt2 < 0)){stop("Negative amounts are not allowed")}
if(any(Var1 < 0) | any(Var2 < 0)){stop("Negative delays or probabilities are not allowed")}
if(numpiece != 1 && numpiece !=2){stop("Sorry! Only 1-piece and 2-piece CBS functions are supported at the moment.")}
if(!is.null(numfit) && numfit<2){stop("Too few starting points (numfit)")}
}
#' CBS_fitloop
#'
#' Loop through multiple starting points and returns the best model.
#' This code is basically my substitute for MATLAB's multistart feature, which NlcOptim package does not have.
#' Also, the NlcOptim package causes errors in certain situations that don't seem to be due to my misuse.
#' It seems that it fails when numerical derivatives and/or constraint matrices are close to singular, which is not something I can know/control a priori.
#' Hence, until I figure out a better way to deal with it (or NlcOptim package is updated to be more robust), we just try a different starting point.
#' From extensive testing on all data I have, it seems to happen once or twice every few hundred fittings.
#' I also tried other optimization packages that support non-linear inequality AND equality constraints, but they were all much slower than NlcOptim, which is already slower than MATLAB's fmincon.
#' Other developers who want to see what errors I'm talking about can set 'silent=TRUE' to FALSE in the try statement below and run the CBS_RC example script.
#' @noRd
CBS_fitloop <- function(inputlist,startingpoints){
start_time <- Sys.time()
successcounter = 0
bestmdl <- NULL
for(i in 1:dim(startingpoints)[1]){ # looping through starting points. Each row of the matrix is a starting point.
newmdl <- NULL
inputlist$X = startingpoints[i,] # new starting point
try(newmdl <- do.call(NlcOptim::solnl,inputlist),silent=TRUE) # try fitting
if(!is.null(newmdl)){ # if the fitting succeeded
successcounter = successcounter+1
if(is.null(bestmdl)){bestmdl <- newmdl} # if current best model was Null, change it to the new fit
else{ # current best model exists
if(newmdl$fn < bestmdl$fn){ bestmdl <- newmdl } # if new fit is better, change current model to new fit
}
}
}
if(is.null(bestmdl)){stop("No convergence! Consider using more starting points (numfit)")}
else{message(paste(successcounter,"out of",dim(startingpoints)[1],"models converged with a local solution"))}
message(paste("Fitting Time :",round(Sys.time() - start_time,2),"seconds"))
return(bestmdl)
}
|
/scratch/gouwar.j/cran-all/cranData/CBSr/R/utils.R
|
.onLoad = function(libname, pkgname) {
rJava::.jpackage(pkgname, lib.loc = libname)
rJava::.jaddClassPath(system.file("java",package="CBSr"))
}
|
/scratch/gouwar.j/cran-all/cranData/CBSr/R/zzz.R
|
Uniform_Prior = function(){
mu = runif(1,min = 0, max = 1)
return(mu)
}
Sine_Prior = function(){
u = runif(1,min = 0 ,max = 1)
mu = acos(1-2*u)/pi
return(mu)
}
Cosine_Prior = function(){
u = runif(1,min = 0 ,max = 1)
f <- function(mu) {mu-1/pi* sin(pi*mu)-u}
  mu = uniroot(f,c(0,1))$root
  return(mu)
}
CBT = function(n, prior, bn = log(log(n)), cn = log(log(n))){
if(prior == "Uniform"){
newmu <- Uniform_Prior
mu_star = sqrt(2/n)
}else if(prior == "Sine"){
newmu <- Sine_Prior
mu_star = (2*2*(2+1)/(pi^2*n))^(1/(2+1))
}else if(prior == "Cosine"){
newmu <- Cosine_Prior
mu_star = (2*3*(3+1)/(pi^2*n))^(1/(3+1))
}
count = 0
K = 1
regret = 0
sum = 0
sum_sq = 0
t = 0
mu = newmu()
while(count < n){
x = rgeom(1,prob = mu)
if(count + x + 1 >= n){
if(count + x + 1 == n){
regret = regret + 1
return(list("regret" = regret, "K" = K))
}else{
return(list("regret" = regret, "K" = K))
}
}
count = count + x + 1
regret = regret + 1
t = t + x + 1
sum = sum + 1
sum_sq = sum_sq + 1
x_bar = sum/t
sigma = sqrt(sum_sq/t - (sum/t)^2)
L = max(x_bar/bn , x_bar-cn*sigma/sqrt(t))
if(L > mu_star){
sum = 0
sum_sq = 0
t = 0
K = K + 1
mu = newmu()
}
}
}
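# Illustrative simulation call (a sketch; each arm's observations are geometric with
# a success probability drawn from the chosen prior, and the defaults use
# bn = cn = log(log(n))):
# set.seed(2024)
# CBT(n = 10000, prior = "Uniform")   # list with total regret and number of arms tried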
Emp_CBT = function(n, prior, bn = log(log(n)), cn = log(log(n))){
if(prior == "Uniform"){
newmu <- Uniform_Prior
}else if(prior == "Sine"){
newmu <- Sine_Prior
}else if(prior == "Cosine"){
newmu <- Cosine_Prior
}
count = 0
regret = 0
sum = 0
sum_sq = 0
  t = 0
  L = c() # lower confidence bounds, one entry per arm tried so far (grows with K)
mu = newmu()
K = 1
x = rgeom(1,prob = mu)
if(x+1 >= n){
if(x+1 == n){
return(list("regret" = 1, "K" = 1))
}else{
return(list("regret" = 0, "K" = 1))
}
}
count = count + x + 1
regret = regret + 1
t = t + x + 1
sum = sum + 1
sum_sq = sum_sq + 1
indi = TRUE
wml = 1
x_bar = sum/t
sigma = sqrt(sum_sq/t - (sum/t)^2)
L_min = max(x_bar/bn , x_bar-cn*sigma/sqrt(t))
while(count < n){
if(indi){
x_bar = sum[K]/t[K]
sigma = sqrt(sum_sq[K]/t[K] - (sum[K]/t[K])^2)
L[K] = max(x_bar/bn , x_bar-cn*sigma/sqrt(t[K]))
if(L[K]<L_min){
wml = K
L_min = L[K]
}
}else{
x_bar = sum[wml]/t[wml]
sigma = sqrt(sum_sq[wml]/t[wml] - (sum[wml]/t[wml])^2)
L[wml] = max(x_bar/bn , x_bar-cn*sigma/sqrt(t[wml]))
if(L[wml]<= L_min){
L_min = L[wml]
}else{
wml = which.min(L)
L_min = L[wml]
}
}
if(L_min <= regret/n){
indi = FALSE
x = rgeom(1,prob = mu[wml])
if(count + x + 1 >= n){
if(count + x + 1 == n){
regret = regret + 1
return(list("regret" = regret, "K" = K))
}else{
return(list("regret" = regret, "K" = K))
}
}
count = count + x + 1
regret = regret + 1
t[wml] = t[wml] + x + 1
sum[wml] = sum[wml] + 1
sum_sq[wml] = sum_sq[wml] + 1
}else{
indi = TRUE
K = K +1
mu[K] = newmu()
x = rgeom(1,prob = mu[K])
if(count + x + 1 >= n){
if(count + x + 1 == n){
regret = regret + 1
return(list("regret" = regret, "K" = K))
}else{
return(list("regret" = regret, "K" = K))
}
}
count = count + x + 1
regret = regret + 1
t[K] = x + 1
sum[K] = 1
sum_sq[K] = 1
}
}
}
Ana_CBT = function(n, data, bn = log(log(n)), cn = log(log(n))){
mK = ncol(data)
##shuffle the data
data <- data[,sample(mK)]
count = 0
regret = 0
sum = 0
sum_sq = 0
  t = 0
  L = c() # lower confidence bounds, one entry per arm tried so far (grows with K)
K = 1
x = sample(data[,1],size = 1)
count = count + 1
regret = regret + x
t = t + 1
sum = sum + x
sum_sq = sum_sq + x^2
indi = TRUE
wml = 1
x_bar = sum/t
sigma = sqrt(sum_sq/t - (sum/t)^2)
L_min = max(x_bar/bn , x_bar-cn*sigma/sqrt(t))
while(count < n){
if(indi){
x_bar = sum[K]/t[K]
sigma = sqrt(sum_sq[K]/t[K] - (sum[K]/t[K])^2)
L[K] = max(x_bar/bn , x_bar-cn*sigma/sqrt(t[K]))
if(L[K]<L_min){
wml = K
L_min = L[K]
}
}else{
x_bar = sum[wml]/t[wml]
sigma = sqrt(sum_sq[wml]/t[wml] - (sum[wml]/t[wml])^2)
L[wml] = max(x_bar/bn , x_bar-cn*sigma/sqrt(t[wml]))
if(L[wml]<= L_min){
L_min = L[wml]
}else{
wml = which.min(L)
L_min = L[wml]
}
}
if(L_min <= regret/n){
indi = FALSE
x = sample(data[,wml],size = 1)
count = count + 1
regret = regret + x
t[wml] = t[wml] + 1
sum[wml] = sum[wml] + x
sum_sq[wml] = sum_sq[wml] + x^2
}else{
indi = TRUE
K = K +1
x = sample(data[,K],size = 1)
count = count + 1
regret = regret + x
t[K] = 1
sum[K] = x
sum_sq[K] = x^2
}
if(K == mK){
x_bar = sum[K]/t[K]
sigma = sqrt(sum_sq[K]/t[K] - (sum[K]/t[K])^2)
L[K] = max(x_bar/bn , x_bar-cn*sigma/sqrt(t[K]))
if(L[K]<L_min){
wml = K
L_min = L[K]
}
for(i in (count+1):n){
x = sample(data[,wml],size = 1)
count = count + 1
regret = regret + x
t[wml] = t[wml] + 1
sum[wml] = sum[wml] + x
sum_sq[wml] = sum_sq[wml] + x^2
x_bar = sum[wml]/t[wml]
sigma = sqrt(sum_sq[wml]/t[wml] - (sum[wml]/t[wml])^2)
L[wml] = max(x_bar/bn , x_bar-cn*sigma/sqrt(t[wml]))
if(L[wml]<= L_min){
L_min = L[wml]
}else{
wml = which.min(L)
L_min = L[wml]
}
}
count = n
}
}
return(list("regret" = regret, "K" = K))
}
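# Sketch of the data-driven variant (hypothetical cost matrix: each column holds one
# arm's observed costs, and the columns are shuffled internally before sampling):
# set.seed(1)
# cost_mat <- matrix(rexp(5000, rate = 2), ncol = 10)
# Ana_CBT(n = 500, data = cost_mat)   # list with accumulated cost ("regret") and arms used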
|
/scratch/gouwar.j/cran-all/cranData/CBT/R/CBT.R
|
# ordmat: MCMC output
network_graphs = function(ordmat, gamma=c(0.5, 0.75, 0.9, 0.95, 0.99)){
# gamma is a vector of probability thresholds so that all pairwise edges
# in a directed graph have posterior probability of at least gamma
#################################
# Network with highest post prob
##################################
# Find network with highest prob
ordmatcollapse = sapply(ordmat, function(aux)paste(aux, collapse = "|"))
Postordmat = sort(table(ordmatcollapse),decreasing = T)
prob = Postordmat[1]/sum(Postordmat)
Network = ordmat[[which(names(prob)==ordmatcollapse)[1]]]
mode_graph <- Network
post_prob_mode_graph <- as.vector(prob)
##################################
# Network with pairwise post prob above threshold (Probpair)
##################################
# Find network with highest pairwise post prob
Network = ordmat[[1]]-ordmat[[1]]
pairProbs = ordmat[[1]]-ordmat[[1]]
for(i in 1:(ncol(Network)-1))
{
for(j in (i+1):ncol(Network))
{
prob = (table(c(sapply(ordmat,function(x) x[i,j]),c(-1,0,1)))-1)/length(ordmat)
Network[i,j] = as.numeric(names(which.max(prob)))
pairProbs[i,j] = max(prob)
# if(max(prob)<=Probpair)
# Network[i,j] = -1111
}
}
# Find Network0 which is the coherent graph in the mcmc closest to Network
weightl1 = sapply(1:length(ordmat), function(j1) sum(abs(ordmat[[j1]]-Network)*pairProbs))
Network0 = ordmat[[sample(which(weightl1==min(weightl1)),1)]]
out <- list()
cnt <- 1
for(Probpair in gamma){
# Remove from Network the edges with pairwise probability less than Probpair
Network = ifelse(pairProbs <= Probpair, -1111, Network0)
Network = ifelse(lower.tri(Network, diag = T), 0, Network)
out[[cnt]] <- Network
cnt <- cnt+1
}
names(out) <- gamma
list(mode_graph= mode_graph, post_prob_mode_graph=post_prob_mode_graph,out)
}
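# Typical use (a sketch; assumes `fit` is the list returned by networkMA() in
# networkMA_wrapper.R, so fit$ordmat holds the posterior draws of the pairwise
# comparison matrix):
# ng <- network_graphs(fit$ordmat, gamma = c(0.9, 0.95))
# ng$mode_graph            # comparison matrix of the highest-posterior-probability graph
# ng$post_prob_mode_graph  # its estimated posterior probability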
|
/scratch/gouwar.j/cran-all/cranData/CBnetworkMA/R/Network_postprocess.R
|
# Function to find the longest simple path
find_longest_simple_path <- function(graph) {
longest_path <- NULL
max_length <- 0
for (node in igraph::V(graph)) {
paths <- igraph::all_simple_paths(graph, from=node)
max_path_length <- max(lengths(paths))
if (max_path_length > max_length) {
longest_path <- paths[[which.max(lengths(paths))]]
max_length <- max_path_length
}
}
return(longest_path)
}
# Function to find the NMA path within a clique
path_NMA <- function(v, graph) {
# Extract a subnetwork with the given nodes
subgraph <- igraph::induced_subgraph(graph, v)
igraph::V(subgraph)$label <- igraph::V(graph)$label[v]
# Find the longest simple path
m <- igraph::get.adjacency(subgraph, sparse = FALSE)
if (is.null(colnames(m))) colnames(m) <- rownames(m) <- igraph::V(subgraph)$label
largest_path <- names(sort(rowSums(m), decreasing = TRUE))
size_path <- length(largest_path)
i <- 1
aux <- sum(sapply(igraph::all_simple_paths(subgraph, from=which(colnames(m) %in% largest_path[i]),
to=which(colnames(m) %in% largest_path[i+1])),
length) == 2) ==
sum(sapply(igraph::all_simple_paths(subgraph, from=which(colnames(m) %in% largest_path[i+1]),
to=which(colnames(m) %in% largest_path[i])),
length) == 2)
if (aux) final_path <- paste(colnames(m)[which(colnames(m) %in% largest_path[i])], "=",
colnames(m)[which(colnames(m) %in% largest_path[i+1])])
if (!aux) final_path <- paste(colnames(m)[which(colnames(m) %in% largest_path[i])], "<",
colnames(m)[which(colnames(m) %in% largest_path[i+1])])
if (size_path > 2) {
for (i in 2:(size_path-1)) {
aux <- sum(sapply(igraph::all_simple_paths(subgraph, from=which(colnames(m) %in% largest_path[i]),
to=which(colnames(m) %in% largest_path[i+1])),
length) == 2) ==
sum(sapply(igraph::all_simple_paths(subgraph, from=which(colnames(m) %in% largest_path[i+1]),
to=which(colnames(m) %in% largest_path[i])),
length) == 2)
if (aux) final_path <- paste(final_path, "=", colnames(m)[which(colnames(m) %in% largest_path[i+1])])
if (!aux) final_path <- paste(final_path, "<", colnames(m)[which(colnames(m) %in% largest_path[i+1])])
}
}
final_path
}
|
/scratch/gouwar.j/cran-all/cranData/CBnetworkMA/R/Rutils.R
|
clique_extract = function(ordmat,
type = "Highest_Post_Prob",
clique_size = NULL,
gamma = 0.95,
plot_graph = FALSE){
K <- nrow(ordmat[[1]])
if(type == "Highest_Post_Prob"){
# Find network with highest prob
ordmatcollapse = sapply(ordmat, function(aux)paste(aux, collapse = "|"))
Postordmat = sort(table(ordmatcollapse),decreasing = TRUE)
prob = Postordmat[1]/sum(Postordmat)
# Plot network
Network = ordmat[[which(names(prob)==ordmatcollapse)[1]]]
out = cbind(from=1:ncol(Network),to=1:ncol(Network),color=0)
for(i in 1:(ncol(Network)-1))
{
for(j in (i+1):ncol(Network)){
if(Network[i,j]==1)
out = rbind(out,c(i,j,2))
if(Network[i,j]==-1)
out = rbind(out,c(j,i,2))
if(Network[i,j]==0)
{
out = rbind(out,c(i,j,1),c(j,i,1))
}
}
}
# out <- out[,-3]
# out <- out[!(out[,1] == out[,2]),]
# Blue, one direction, orange equal (both direction)
mynet <- igraph::graph_from_data_frame(out,directed = TRUE)
graph <- mynet
igraph::V(graph)$label <- 1:K
# Plot network
if(plot_graph){
plot(graph, layout=igraph::layout.circle)
}
# Find the NMA path for each large clique
if(is.null(clique_size)){
v <- suppressWarnings(igraph::largest.cliques(graph))
} else {
# Find the NMA path for smaller clique
v <- suppressWarnings(igraph::cliques(graph, min = 2, max = clique_size))
}
}
if(type == "Highest_Pairwise_Post_Prob"){
# Find network with highest pairwise post prob
Network = ordmat[[1]]-ordmat[[1]]
pairProbs = ordmat[[1]]-ordmat[[1]]
for(i in 1:(ncol(Network)-1))
{
for(j in (i+1):ncol(Network))
{
prob = (table(c(sapply(ordmat,function(x) x[i,j]),c(-1,0,1)))-1)/length(ordmat)
Network[i,j] = as.numeric(names(which.max(prob)))
pairProbs[i,j] = max(prob)
# if(max(prob)<=Probpair)
# Network[i,j] = -1111
}
}
# Find Network0 which is the coherent graph in the mcmc closest to Network
weightl1 = sapply(1:length(ordmat), function(j1) sum(abs(ordmat[[j1]]-Network)*pairProbs))
Network0 = ordmat[[sample(which(weightl1==min(weightl1)),1)]]
Probpair <- gamma ########## THRESHOLD
# Remove from Network the edges with pairwise probability less than Probpair
Network = ifelse(pairProbs <= Probpair, -1111, Network0)
Network = ifelse(lower.tri(Network, diag = TRUE), 0, Network)
# Plot network
out = cbind(from=1:ncol(Network),to=1:ncol(Network),color=0)
for(i in 1:(ncol(Network)-1))
{
for(j in (i+1):ncol(Network)){
if(Network[i,j]==1)
out = rbind(out,c(i,j,2))
if(Network[i,j]==-1)
out = rbind(out,c(j,i,2))
if(Network[i,j]==0)
{
out = rbind(out,c(i,j,1),c(j,i,1))
}
}
}
out
# out <- out[,-3]
# out <- out[!(out[,1] == out[,2]),]
mynet <- igraph::graph_from_data_frame(out, directed = TRUE)
graph <- mynet
igraph::V(graph)$label <- 1:K
if(plot_graph){
plot(graph, layout=igraph::layout.circle)
}
# Find the NMA path for each large clique
if(is.null(clique_size)){
v <- suppressWarnings(igraph::largest.cliques(graph))
} else {
# Find the NMA path for smaller clique
v <- suppressWarnings(igraph::cliques(graph, min = 2, max = clique_size))
}
}
cl_list <- rep(0, length(v))
count <- 1
for(i in 1:length(v)){
for(j in 1:length(v)){
if (j != i){
if(all(v[[i]] %in% v[[j]])){
cl_list[i] <- 1
break
}
}
}
}
suppressWarnings(lapply(v[cl_list==0], path_NMA, graph = graph))
}
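# Typical use (a sketch; `fit` again denotes output from networkMA()): extract the
# treatment orderings implied by cliques of the selected posterior graph.
# clique_extract(fit$ordmat, type = "Highest_Post_Prob")
# clique_extract(fit$ordmat, type = "Highest_Pairwise_Post_Prob", gamma = 0.9, clique_size = 3)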
|
/scratch/gouwar.j/cran-all/cranData/CBnetworkMA/R/clique_extract.R
|
# Wrapper that executes C code to sample from the posterior distribution
# from network meta analysis model
# mb=0; sb=1; md=0; sd=1; tau_prior = "lognormal"; tau_max=5; tau_lm = -2.34; tau_lsd = 1.62; alpha=1; aw=1; bw=1; v0=0.1; kap=1; scale=1; nu=1; mh=c(0.5, 0.5, 0.1, 0.5); H=20; verbose=FALSE
networkMA <- function(data, model="gaussian",
niter=1100, nburn=100, nthin=1,
mb=0, sb=1, md=0, sd=1,
tau_prior = "uniform", tau_max=5, tau_lm = -2.34, tau_lsd = 1.62,
alpha=1, aw=1, bw=1, v0=0.1, scale=1, nu=1,
mh=c(0.5, 0.5, 0.1, 0.5), H=20, verbose=FALSE){
# data must have the following columns .
# sid - study id (must an integer beginning with 1)
# tid - treatment id (must be an integer beginning with 1)
# r - number of "success" associated with treatment x study
# n - number of "trials" associated with treatment x study
N <- max(data$sid) # number of studies
K <- length(unique(data$tid)) # total number of treatments across studies
nobs <- length(data$sid) # total number of observations
# Create matrices that indicate comparisons in each study and the number of
# trials and successes.
tid_mat <- r_mat <- n_mat <- tid_logical <- matrix(FALSE, nrow=N, ncol=choose(K,2)+1)
colnames(tid_mat) <- colnames(r_mat) <- colnames(n_mat) <- colnames(tid_logical) <-
c("baseline", apply(combn(1:K,2), 2, function(x) paste(x, collapse="_")))
tmp1 <- tapply(data$tid, data$sid, function(x) c(paste(x[1],x[1],sep="_"), paste(x[1], x[-1],sep="_")))
tmp2 <- tapply(data$r, data$sid, function(x) x)
tmp3 <- tapply(data$n, data$sid, function(x) x)
tmp4 <- tapply(data$tid, data$sid, function(x) x)
for(i in 1:N){
tid_logical[i,tmp1[[i]][-1]] <- TRUE
r_mat[i, c("baseline", tmp1[[i]][-1])] <- tmp2[[i]]
n_mat[i, c("baseline", tmp1[[i]][-1])] <- tmp3[[i]]
tid_mat[i, c("baseline", tmp1[[i]][-1])] <- tmp4[[i]]
}
tid_logical[,1] <- 1
## non-local inverse moment prior
dnlp <- function(x,x0, kap,scale,nu,logd=FALSE){
ld <- log(kap) + (nu/2)*log(scale) - lgamma(nu/(2*kap)) +
-0.5*(nu+1)*log((x - x0)^2) - ((x-x0)^2/scale)^(-kap)
if(x == x0){ld <- -Inf}
if(logd){out <- ld}
if(!logd){out <- exp(ld)}
out
}
# Calibrate the spike and slab nlp prior
xx <- seq(-9*v0, 0, length=100001)
xx0 <- xx[which.min(abs(dnorm(xx, 0, v0/3) - 0.01))]
kap0 <- seq(0.05,5, length=10001)
tmp <- which.min(abs(dnlp(xx0, 0, kap=kap0, scale=1,nu=1) - dnorm(xx0, 0, v0/3)))
kap0 <- kap0[tmp]
# priors
# mb - mean of mu_{i, b_i}
# sb - standard deviation of mu_{i, b_i}
# md - mean of d*_j
# sd - standard deviation of d*_j
# tau_max - upper-bound of tau
# alpha - precision/scale parameter of DP
# aw - shape 1 parameter for omega
# bw - shape 2 parameter for omega
# v0 - constant to which sd is multiplied in the spike. Determines the similarity of comparisons necessary
# to conlcude that two treatments are equal
# H - truncation of the infinite mixture prior
modelPriors <- c(mb, sb, tau_max, tau_lm, tau_lsd, md, sd, alpha, kap0, scale, nu, v0, aw, bw)
if(tau_prior == "uniform")tauprior <- 1
if(tau_prior == "lognormal") tauprior <- 2
if(model=="gaussian"){
run <- .Call("NETWORK_MA",
as.double(t(r_mat)), as.double(t(n_mat)), as.integer(t(tid_mat)), as.integer(t(tid_logical)),
as.integer(nobs), as.integer(N), as.integer(K), as.integer(H),
as.integer(1), as.double(modelPriors), as.integer(tauprior), as.double(mh),
as.integer(verbose),
as.integer(niter), as.integer(nburn), as.integer(nthin))
if(tau_prior == "uniform"){
priors <- round(c(mb, sb, tau_max, md, sd),2)
names(priors) <- c("mb", "sb","tau_max","md","sd")
}
if(tau_prior == "lognormal"){
priors <- round(c(mb, sb, tau_lm, tau_lsd, md, sd),2)
names(priors) <- c("mb", "sb","tau_lm","tau_lsd","md","sd")
}
run <- run[c(1,2,3,4,8)]
}
if(model=="dp_gaussian"){
run <- .Call("NETWORK_MA",
as.double(t(r_mat)), as.double(t(n_mat)), as.integer(t(tid_mat)), as.integer(t(tid_logical)),
as.integer(nobs), as.integer(N), as.integer(K), as.integer(H),
as.integer(2), as.double(modelPriors), as.integer(tauprior), as.double(mh),
as.integer(verbose),
as.integer(niter), as.integer(nburn), as.integer(nthin))
if(tau_prior == "uniform"){
priors <- round(c(mb, sb, tau_max, md, sd, alpha),2)
names(priors) <- c("mb", "sb","tau_max","md","sd", "alpha")
}
if(tau_prior == "lognormal"){
priors <- round(c(mb, sb, tau_lm, tau_lsd, md, sd, alpha),2)
names(priors) <- c("mb", "sb","tau_lm","tau_lsd","md","sd", "alpha")
}
run <- run[c(1,2,3,4,5,8)]
}
if(model=="dp_spike_slab"){
# cat("kap0 = ", kap0, "\n")
run <- .Call("NETWORK_MA",
as.double(t(r_mat)), as.double(t(n_mat)), as.integer(t(tid_mat)), as.integer(t(tid_logical)),
as.integer(nobs), as.integer(N), as.integer(K), as.integer(H),
as.integer(3), as.double(modelPriors), as.integer(tauprior), as.double(mh),
as.integer(verbose),
as.integer(niter), as.integer(nburn), as.integer(nthin))
if(tau_prior == "uniform"){
priors <- round(c(mb, sb, tau_max, alpha, v0, aw, bw, kap0, scale, nu),2)
names(priors) <- c("mb", "sb","tau_max","alpha", "v0","aw","bw","kap","scale","nu")
}
if(tau_prior == "lognormal"){
priors <- round(c(mb, sb, tau_lm, tau_lsd, alpha, v0, aw, bw, kap0, scale, nu),2)
names(priors) <- c("mb", "sb","tau_lm","tau_lsd", "alpha", "v0","aw","bw","kap","scale","nu")
}
}
  # Create the ordered pairwise comparison matrix
nout <- (niter - nburn)/nthin
ordmat <- list()
for(t in 1:nout){
# Here I create all the pairwise differences
if(model=="gaussian"){
if(K > 2) dtilde <- c(run$d1[t,-1], apply(combn(run$d1[t,-1],2), 2, diff))
}
if(model=="dp_gaussian"){
# Here I create all the pairwise differences
if(K > 2) dtilde <- c(run$d1[t,-1], apply(combn(run$d1[t,-1],2), 2, diff))
}
if(model=="dp_spike_slab"){
      # Here I create all the pairwise differences and adjust for those that are allocated
# to slab and those that are allocated to spike
if(K > 2){
dtilde <- c(run$d1[t,-1]*(run$sh[t,-1]==1),
apply(combn(run$d1[t,-1],2)*(combn(run$sh[t,-1],2)==1), 2, diff))
}
}
# Here I create a pairwise comparison matrix with upper triangle containing zero if treatments
# are equal
if(K==2) dtilde <- run$d1[t,2]
tmp <- matrix(0, nrow=K, ncol=K)
tmp[lower.tri(tmp)] <- dtilde
tmp[tmp < 0] <- -1
tmp[tmp > 0] <- 1
ordmat[[t]] <- t(tmp)
}
# list of possible treatments
run$ordmat <- ordmat
run$prior_values <- priors
run[-(length(run)-2)]
}
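# Minimal sketch of the expected input layout (hypothetical two-study data with three
# treatments; the sid/tid/r/n columns must follow the scheme described above):
# dat <- data.frame(sid = c(1, 1, 2, 2, 2),       # study id, integers starting at 1
#                   tid = c(1, 2, 1, 2, 3),       # treatment id, integers starting at 1
#                   r   = c(12, 15, 10, 14, 9),   # successes per study x treatment
#                   n   = c(50, 50, 40, 40, 40))  # trials per study x treatment
# fit <- networkMA(dat, model = "dp_spike_slab", niter = 2000, nburn = 1000, nthin = 1)
# fit$ordmat[[1]]                                 # one posterior draw of the comparison matrix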
if(FALSE){
# This code will be used to run function to debug etc.
library(TeachingDemos)
library(NetworkMA)
coverage <- matrix(NA, nrow=100, ncol=4)
colnames(coverage) <- c("mu","delta","d1","tau2")
source("~/Research/BYU/NetworkMetaAnalysis/analysis/dataGeneration.R")
load("~/Research/BYU/NetworkMetaAnalysis/analysis/ReducedCipriani_dataAnalysis.RData")
mu_true <- apply(m3$mu,2,mean)
tau_true <- 0.1
for(ii in 1:100){
cat("dataset number = ", ii, "\n")
set.seed(ii)
synth.data <- dat.gen.cipriani(ci=c(1,2,2),
mu_true=mu_true,
tmt_keep=c(2,4,5,9,11,12),
d1_true=(c(0,0.306,0,0,0.306,0.306)*mult),
tau2_true=tau_true^2,
tau_max=0.1, mb=0, sb=1, md=0, sd=1)
out <- networkMA(synth.data$data, model="gaussian", niter=30000, nburn=20000, nthin=10, mb=0, sb=10, md=0, sd=5,
tau_prior = "lognormal", tau_max=5, tau_lm = -2.34, tau_lsd = 1.62,
alpha=1, aw=1, bw=1, v0=0.1, kap=1, scale=1, nu=1)
# check mus
coverage[ii,1] <- mean(apply(t(apply(out$mu, 2, emp.hpd)) - synth.data$mu, 1, prod) < 0)
# check d1
coverage[ii,3] <- mean(apply(t(apply(out$d1, 2, emp.hpd))[-1,] - synth.data$d1[-1], 1, prod) < 0)
# check tau2
coverage[ii,4] <- prod(apply(out$tau2,2,emp.hpd) - tau_true^2) < 0
# check deltas
tmp <- apply(out$delta,2,emp.hpd)
tmpkeep <- apply(t(tmp),1,diff) != 0
coverage[ii,2] <- mean(apply(t(tmp)[tmpkeep,] - c(t(synth.data$delta))[tmpkeep],1,prod) < 0)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CBnetworkMA/R/networkMA_wrapper.R
|
LRCC <- function (x, sigma, plotit = FALSE) {
mu <- lowess(x)$y
LR <- abs(x - mu)
if (!missing(sigma)) {
UCL <- sigma*(sqrt(2/pi) + 3*sqrt(1-2/pi))
LCL <- 0
out.of.control <- (LR > UCL)
LRbar <- mean(LR[!out.of.control])
} else {
number.ooc <- 1
out.of.control <- rep(FALSE, length(LR))
while (number.ooc > 0) {
LRbar <- mean(LR[!out.of.control])
sigma <- LRbar * sqrt(pi/2)
UCL <- sigma*(sqrt(2/pi) + 3*sqrt(1-2/pi))
LCL <- 0
number.ooc <- sum(LR > UCL) - sum(out.of.control)
out.of.control <- (LR > UCL)
}
}
if (plotit) {
plot(LR, type = "l", ylim = range(c(UCL, LR, 0)), xlab = "t")
abline(UCL, 0, col = "red")
abline(LCL, 0, col = "red")
}
  list(CL = LRbar, UCL = UCL, LCL = LCL, mu = mu, sigma = sigma,
       LR = LR, ooc = which(LR>UCL))
}
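# Example sketch (simulated in-control individuals chart): the centre line is a lowess
# smooth of the series and LR = |x - mu| gets half-normal control limits.
# set.seed(1)
# x <- rnorm(100, mean = 10, sd = 2)
# cc <- LRCC(x, plotit = TRUE)
# cc$ooc   # indices of points above the UCL (ideally none for in-control data)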
|
/scratch/gouwar.j/cran-all/cranData/CC/R/LRCC.R
|
"RCC" <-
function (R, n, k=3, sigma)
{
d2n <- d_2(n)
d3n <- d_3(n)
if (!missing(sigma)) {
Rbar <- d2n*sigma
UCL <- (d2n + d3n * k) * sigma
LCL <- max(0, (d2n - d3n * k) * sigma)
out.of.control <- (R > UCL)
}
else {
number.ooc <- 1
out.of.control <- rep(FALSE, length(R))
while (number.ooc > 0) {
Rbar <- mean(R[!out.of.control])
sigma <- Rbar/d2n
UCL <- (d2n + d3n * k) * sigma
LCL <- max(0, (d2n - d3n * k) * sigma)
number.ooc <- sum(R > UCL) - sum(out.of.control)
out.of.control <- (R > UCL)
}
}
list(CL = Rbar, UCL = UCL, LCL = LCL, sigma = sigma, ooc = out.of.control)
}
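# Example sketch: ranges of 20 subgroups of size n = 5 from a standard normal; the
# sigma estimated by RCC() should be close to 1.
# set.seed(1)
# R <- apply(matrix(rnorm(100), ncol = 5), 1, diffrange)
# RCC(R, n = 5)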
|
/scratch/gouwar.j/cran-all/cranData/CC/R/RCC.R
|
"d_2" <-
function (n)
{
if ((length(n) > 1) || (mode(n) != "numeric")) {
stop("Argument must be a scalar integer.")
}
if (n < 2) {
approx <- NA
}
else {
x <- 6 - ((seq(1, (6^(50/49)), length = 51)))^(49/50)
x <- c(-x, rev(x[-1]))
x.diff <- diff(x)
x.101 <- x[-101] + diff(x)/2
approx <- 2 * n * sum(dnorm(x.101) * x.101 * (pnorm(x.101)^(n -
1)) * x.diff)
}
approx
}
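# Spot check against standard control-chart tables (approximate): d_2(2) should be
# close to 1.128 and d_2(5) close to 2.326.
# c(d_2(2), d_2(5))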
|
/scratch/gouwar.j/cran-all/cranData/CC/R/d_2.R
|
"d_3" <-
function (n)
{
lower <- -3 + (n - 1)/8
upper <- 5 - (n - 1)/120
f <- function(x1) {
(xn - x1)^2 * dnorm(x1) * (pnorm(xn) - pnorm(x1))^(n -
2)
}
fn <- numeric(150)
x.inner <- seq(lower, upper, length = length(fn))
for (i in 1:length(fn)) {
xn <- x.inner[i]
fn[i] <- integrate(f, -Inf, xn)$value
}
f1 <- approxfun(x.inner, n * (n - 1) * dnorm(x.inner) * fn)
round(sqrt(integrate(f1, lower, upper)$value - d_2(n)^2),
3)
}
|
/scratch/gouwar.j/cran-all/cranData/CC/R/d_3.R
|
"diffrange" <-
function(x) {
diff(range(x))
}
|
/scratch/gouwar.j/cran-all/cranData/CC/R/diffrange.R
|
"plot.CC" <- function (x, start = 1, ...) {
chart.obj <- x
R <- chart.obj$R
m <- length(R)
time <- seq(start, m)
R <- R[time]
xbar <- chart.obj$xbar[time]
oldpar <- par(mfrow = c(2, 1), mar=c(4, 3, 2.6, 1))
on.exit(par(oldpar))
plot(R ~ time, ylim = range(
c(chart.obj$R.par$LCL, chart.obj$R.par$UCL, R)),
ylab = chart.obj$R.ylabel,
pch = 16)
mtext(chart.obj$R.chart.label, side=3, line = 0.5)
abline(chart.obj$R.par$LCL, 0, col = "red", lwd = 2)
abline(chart.obj$R.par$CL, 0, col = "blue")
abline(chart.obj$R.par$UCL, 0, col = "red", lwd = 2)
lines(time, R)
plot(xbar ~ time, ylim = range(c(chart.obj$xbar.par$LCL,
chart.obj$xbar.par$UCL, xbar)),
ylab = chart.obj$x.ylabel, pch = 16)
mtext(chart.obj$x.chart.label, side=3, line = 0.5)
abline(chart.obj$xbar.par$LCL, 0, col = "red", lwd = 2)
abline(chart.obj$xbar.par$mu, 0, col = "blue")
abline(chart.obj$xbar.par$UCL, 0, col = "red", lwd = 2)
lines(time, xbar)
}
|
/scratch/gouwar.j/cran-all/cranData/CC/R/plot.CC.R
|
rrCC <-
function (RR, k = 3, revise = TRUE, newdata) {
averages <- RR[,1]
STDs <- RR[,2]
if (!missing(newdata)) {
newx <- newdata[, 1]
newSTD <- newdata[, 2]
avg.xLR <- xLRCC(averages, k = k, revise = revise)
STD.xLR <- xLRCC(1/STDs, k = k, revise = revise)
} else {
avg.xLR <- xLRCC(averages, k = k, revise = revise)
STD.xLR <- xLRCC(1/STDs, k = k, revise = revise)
newx <- NULL
newSTD <- NULL
}
avg.xLR$xbar <- c(avg.xLR$xbar, newx)
avg.xLR$R <- 1/c(STDs, newSTD)
avg.xLR$R.par <- STD.xLR$xbar.par
avg.xLR$R.chart.label <- "Daily RR-Variability"
avg.xLR$R.ylabel <- "precision"
avg.xLR$x.ylabel <- "avg. rr"
avg.xLR$x.chart.label <- "Daily Baseline RR"
avg.xLR
}
|
/scratch/gouwar.j/cran-all/cranData/CC/R/rrCC.R
|
xCC <- function (x, sigma, k = 3, mu, newdata) {
if (!missing(mu)) { # control limits are revised
UCL <- mu + k * sigma
LCL <- mu - k * sigma
if (!missing(newdata)) {
out.of.control <- ((newdata > UCL) | (newdata < LCL))
} else {
out.of.control <- ((x > UCL) | (x < LCL))
}
}
else { # revise control limits
number.ooc <- 1
out.of.control <- rep(FALSE, length(x))
while (number.ooc > 0) {
mu <- mean(x[!out.of.control])
UCL <- mu + k * sigma
LCL <- mu - k * sigma
number.ooc <- sum(((x > UCL) | (x < LCL))) -
sum(out.of.control)
out.of.control <- ((x > UCL) | (x < LCL))
}
}
list(CL = mu, UCL = UCL, LCL = LCL, ooc = which(out.of.control))
}
|
/scratch/gouwar.j/cran-all/cranData/CC/R/xCC.R
|
xLRCC <-
function (qc.obj, k = 3, sigma, mu, revise = TRUE, newdata)
{
if (revise) {
if (!inherits(qc.obj, "CC")) {
x <- qc.obj
LR <- LRCC(x)
qc.obj <- list(R = LR$LR, xbar = x, k = k, n = 1,
R.chart.label = "LR-chart", x.chart.label = "x-chart",
R.ylabel = "LR", x.ylabel = "x")
class(qc.obj) <- c("CC")
}
if (!missing(sigma)) {
R.par <- LRCC(qc.obj$xbar, sigma=sigma)
} else {
R.par <- LRCC(qc.obj$xbar)
}
if (!missing(mu)) {
if (length(R.par$ooc) > 0) {
xbar.par <- xCC(qc.obj$xbar[-R.par$ooc],
R.par$sigma, qc.obj$k, mu)
} else {
xbar.par <- xCC(qc.obj$xbar,
R.par$sigma, qc.obj$k, mu)
}
}
else {
if (length(R.par$ooc) > 0) {
xbar.par <- xCC(qc.obj$xbar[-R.par$ooc],
R.par$sigma, qc.obj$k)
} else {
xbar.par <- xCC(qc.obj$xbar,
                    R.par$sigma, qc.obj$k)
}
}
qc.obj$R.par <- R.par
qc.obj$xbar.par <- xbar.par
}
mu <- mean(qc.obj$R.par$mu)
qc.obj$xbar.par$mu <- mu
if (!missing(newdata)) {
LR.new <- abs(newdata - mu)
xbar.new <- newdata
qc.obj$R <- c(qc.obj$R, LR.new)
qc.obj$xbar <- c(qc.obj$xbar, xbar.new)
}
qc.obj
}
|
/scratch/gouwar.j/cran-all/cranData/CC/R/xLRCC.R
|
"xbarCC" <-
function (xbar, n, sigma, k, mu)
{
if (!missing(mu)) {
UCL <- mu + k * sigma/sqrt(n)
LCL <- mu - k * sigma/sqrt(n)
}
else {
number.ooc <- 1
out.of.control <- rep(FALSE, length(xbar))
while (number.ooc > 0) {
mu <- mean(xbar[!out.of.control])
UCL <- mu + k * sigma/sqrt(n)
LCL <- mu - k * sigma/sqrt(n)
number.ooc <- sum(((xbar > UCL) | (xbar < LCL))) - sum(out.of.control)
out.of.control <- ((xbar > UCL) | (xbar < LCL))
}
}
list(CL = mu, UCL = UCL, LCL = LCL, mu = mu)
}
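# Example sketch: xbar chart for 20 subgroups of size 5 with known sigma = 1;
# control limits are mu +/- k * sigma / sqrt(n).
# set.seed(1)
# xbars <- rowMeans(matrix(rnorm(100), ncol = 5))
# xbarCC(xbars, n = 5, sigma = 1, k = 3)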
|
/scratch/gouwar.j/cran-all/cranData/CC/R/xbarCC.R
|