---
title: "C5.0 Classification Models"
vignette: >
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{C5.0 Classification Models}
output:
knitr:::html_vignette:
toc: yes
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(echo = TRUE)
library(C50)
library(modeldata)
```
The `C50` package contains an interface to the C5.0 classification model. The two main modes for this model are:
* a basic tree-based model
* a rule-based model
Many of the details of this model can be found in [Quinlan (1993)](https://books.google.com/books?id=b3ujBQAAQBAJ&lpg=PP1&ots=sQ2nTTEpC1&dq=C4.5%3A%20Programs%20for%20Machine%20Learning&lr&pg=PR6#v=onepage&q=C4.5:%20Programs%20for%20Machine%20Learning&f=false) although the model has new features that are described in [Kuhn and Johnson (2013)](http://appliedpredictivemodeling.com/). The main public resource on this model comes from the [RuleQuest website](http://www.rulequest.com/see5-info.html).
To demonstrate a simple model, we'll use the credit data that can be accessed in the [`modeldata` package](https://github.com/tidymodels/modeldata):
```{r credit-data}
library(modeldata)
data(credit_data)
```
The outcome is in a column called `Status` and, to demonstrate a simple model, the `Home` and `Seniority` predictors will be used.
```{r credit-vars}
vars <- c("Home", "Seniority")
str(credit_data[, c(vars, "Status")])
# a simple split
set.seed(2411)
in_train <- sample(1:nrow(credit_data), size = 3000)
train_data <- credit_data[ in_train,]
test_data <- credit_data[-in_train,]
```
## Classification Trees
To fit a simple classification tree model, we can start with the non-formula method:
```{r tree-mod}
library(C50)
tree_mod <- C5.0(x = train_data[, vars], y = train_data$Status)
tree_mod
```
To understand the model, the `summary` method can be used to get the default `C5.0` command-line output:
```{r tree-summ}
summary(tree_mod)
```
A graphical method for examining the model can be generated by the `plot` method:
```{r tree-plot, fig.width = 10}
plot(tree_mod)
```
A variety of options are outlined in the documentation for the `C5.0Control` function. Another option that can be used is the `trials` argument, which enables a boosting procedure. This method is more similar to AdaBoost than to more statistical approaches such as stochastic gradient boosting.
For example, using three iterations of boosting:
```{r tree-boost}
tree_boost <- C5.0(x = train_data[, vars], y = train_data$Status, trials = 3)
summary(tree_boost)
```
Note that the counting is zero-based. The `plot` method can also show a specific tree in the ensemble using the `trial` option.
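For example, to display the second tree in the ensemble (recall that the trials are numbered from zero, so this is `trial = 1`):
```{r tree-boost-plot, fig.width = 10}
plot(tree_boost, trial = 1)
```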
# Rule-Based Models
C5.0 can create an initial tree model then decompose the tree structure into a set of mutually exclusive rules. These rules can then be pruned and modified into a smaller set of _potentially_ overlapping rules. The rules can be created using the `rules` option:
```{r rule-mod}
rule_mod <- C5.0(x = train_data[, vars], y = train_data$Status, rules = TRUE)
rule_mod
summary(rule_mod)
```
Note that no pruning was warranted for this model.
There is no `plot` method for rule-based models.
# Predictions
The `predict` method can be used to get hard class predictions or class probability estimates (aka "confidence values" in the documentation).
```{r pred}
predict(rule_mod, newdata = test_data[1:3, vars])
predict(tree_boost, newdata = test_data[1:3, vars], type = "prob")
```
# Cost-Sensitive Models
A cost matrix can also be used to emphasize certain classes over others. For example, to get more of the "bad" samples correct:
```{r cost}
cost_mat <- matrix(c(0, 2, 1, 0), nrow = 2)
rownames(cost_mat) <- colnames(cost_mat) <- c("bad", "good")
cost_mat
cost_mod <- C5.0(x = train_data[, vars], y = train_data$Status,
costs = cost_mat)
summary(cost_mod)
# more samples predicted as "bad"
table(predict(cost_mod, test_data[, vars]))
# than previously
table(predict(tree_mod, test_data[, vars]))
```
|
/scratch/gouwar.j/cran-all/cranData/C50/vignettes/C5.0.Rmd
|
---
title: "Class Probability Calculations"
vignette: >
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{Class Probability Calculations}
output:
knitr:::html_vignette:
toc: yes
---
```{r, echo = FALSE, results = "hide",message=FALSE,warning=FALSE}
library(C50)
library(knitr)
opts_chunk$set(comment = NA, digits = 3, prompt = TRUE, tidy = FALSE)
```
This document describes exactly how the model computes class probabilities using the data in the terminal nodes. Here is an example model using the iris data:
```{r}
library(C50)
mod <- C5.0(Species ~ ., data = iris)
summary(mod)
```
Suppose that we are predicting the sample in row 130 with a petal length of `r iris[130,"Petal.Length"]` and a petal width of `r iris[130,"Petal.Width"]`. From this tree, the terminal node shows `virginica (6/2)` which means a predicted class of the virginica species with a probability of 4/6 = 0.66667. However, we get a different predicted probability:
```{r}
predict(mod, iris[130,], type = "prob")
```
When we wanted to describe the technical aspects of the [C5.0](https://www.rulequest.com/see5-info.html) and [cubist](https://www.rulequest.com/cubist-info.html) models, the main source of information on these models was the raw C source code from the [RuleQuest website](https://www.rulequest.com/download.html). For many years, both of these models were proprietary commercial products and were only recently open-sourced. Our intuition is that Quinlan quietly evolved these models from the versions described in the most recent publications to what they are today. For example, it would not be unreasonable to assume that C5.0 uses [AdaBoost](https://en.wikipedia.org/wiki/AdaBoost). From the sources, a similar reweighting scheme is used but it does not appear to be the same.
For classifying new samples, the C sources have
```c
ClassNo PredictTreeClassify(DataRec Case, Tree DecisionTree){
ClassNo c, C;
double Prior;
/* Save total leaf count in ClassSum[0] */
ForEach(c, 0, MaxClass) {
ClassSum[c] = 0;
}
PredictFindLeaf(Case, DecisionTree, Nil, 1.0);
C = SelectClassGen(DecisionTree->Leaf, (Boolean)(MCost != Nil), ClassSum);
/* Set all confidence values in ClassSum */
ForEach(c, 1, MaxClass){
Prior = DecisionTree->ClassDist[c] / DecisionTree->Cases;
ClassSum[c] = (ClassSum[0] * ClassSum[c] + Prior) / (ClassSum[0] + 1);
}
Confidence = ClassSum[C];
return C;
}
```
Here:
* The predicted probability is the "confidence" value
* The prior is the class probabilities from the training set. For the iris data, this value is 1/3 for each of the classes
* The array `ClassSum` is the probabilities of each class in the terminal node although `ClassSum[0]` is the number of samples in the terminal node (which, if there are missing values, can be fractional).
For sample 130, the virginica values are:
```
(ClassSum[0] * ClassSum[c] + Prior) / (ClassSum[0] + 1)
= ( 6 * (4/6) + (1/3)) / ( 6 + 1)
= 0.6190476
```
Why does the model do this? It tends to avoid class predictions that are exactly zero or one.
Basically, this can be viewed as _similar_ to how Bayesian methods operate, where the simple probability estimates are "shrunken" towards the prior probabilities. Note that, as the number of samples in the terminal node (`ClassSum[0]`) becomes large, this operation has less effect on the final results. Suppose `ClassSum[0] = 10000`; then the predicted virginica probability would be 0.6666333, which is closer to the simple estimate.
This is very much related to the [Laplace Correction](https://en.wikipedia.org/wiki/Additive_smoothing). Traditionally, we would add a value of one to the numerator of the simple estimate and add the number of classes to the denominator, resulting in `(4+1)/(6+3) = 0.5555556`. C5.0 is substituting the prior probabilities and their sum (always one) into this equation instead.
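These quantities are simple enough to reproduce directly in R (a quick sanity check using the counts from the terminal node above):
```{r prob-check}
n_node <- 6    # number of samples in the terminal node (ClassSum[0])
prior  <- 1/3  # training set prior for each iris species

# C5.0's shrunken estimate for virginica
(n_node * (4/6) + prior) / (n_node + 1)

# the same calculation for a very large terminal node
(10000 * (4/6) + prior) / (10000 + 1)

# the traditional Laplace correction
(4 + 1) / (6 + 3)
```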
To be fair, there are well-known Bayesian estimates of the sample proportions under different prior distributions for the two-class case. For example, if there were two classes, the estimate of the class probability under a uniform prior would be the same as the basic Laplace correction (using the integers and not the fractions). A more flexible Bayesian approach is the [Beta-Binomial model](https://en.wikipedia.org/wiki/Beta-binomial_distribution), which uses a Beta prior instead of the uniform. The downside here is that two extra parameters need to be estimated (and it is only defined for two classes).
|
/scratch/gouwar.j/cran-all/cranData/C50/vignettes/Class_Probability_Calcs.Rmd
|
CA3variants<-
function(Xdata, dims=c(p,q,r), ca3type = "CA3", test = 10^-6, resp="row", norder=3, sign=TRUE){
#------------------------------------------
#Xdata can be an array or raw data
#--------------------------------------
if ( missing(dims)) {stop("'dims' has no default and must be given\n the number of dimensions for rows, columns and tubes must be given\n
for example: dims=c(p=2,q=2,r=2). For guidance, use the 'tunelocal' function\n")
}
if (is.array(Xdata)==FALSE){
#Xdata=dget("Xdata.txt")
Xdata1<-Xdata
#attach(Xdata1)
nam<-names(Xdata1)
ndims<-dim(Xdata1)[[2]]
if (ndims<3) {stop("number of vars for output must be at least 3\n\n")}
Xdata2=table(Xdata1[[1]],Xdata1[[2]],Xdata1[[3]])
if(is.numeric(Xdata1)==TRUE){
dimnames(Xdata2)=list(paste(nam[[1]],1:max(Xdata[[1]]),sep=""),paste(nam[[2]],1:max(Xdata[[2]]),sep=""),paste(nam[[3]],1:max(Xdata[[3]]),sep=""))
}
#paste("This is your given array\n\n")
Xdata<-Xdata2
#print(Xdata)
}
if (is.array(Xdata)==FALSE) {stop("number of variables for three-way analysis must be at least 3\n\n")}
# READ DATA FILE
#Xdata <- read.table(file = datafile, header=header)
#if (header==FALSE) {
#for (i in 1:dim(Xdata)[1]) rownames(Xdata)[i] <- paste("r",i,sep="")
#for (i in 1:dim(Xdata)[2]) colnames(Xdata)[i] <- paste("c",i,sep="")
#}
X <- as.array(Xdata)
if (resp=="row"){
p<-dims[[1]]
q<-dims[[2]]
r<-dims[[3]]
}
#-------------------------------------------------------------------------------
if ((resp=="col")||(resp=="column")){
X<-aperm(X,c(2,1,3))
p<-dims[[2]]
q<-dims[[1]]
r<-dims[[3]]}
if (resp=="tube"){
X<-aperm(X,c(3,2,1))
p<-dims[[3]]
q<-dims[[2]]
r<-dims[[1]]}
#------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
#------------------------------------------------------------------------------
ni <- dim(X)[1]
nj <- dim(X)[2]
nk <-dim(X)[3]
labelgjk <- NULL
labelgik <- NULL
labelgij <- NULL
nomi <- dimnames(X)[[1]]
nomj <- dimnames(X)[[2]]
nomk <- dimnames(X)[[3]]
n <- sum(X)
pii <- apply(X/sum(X), 1, sum)
pj <- apply(X/sum(X), 2, sum)
pk <- apply(X/sum(X), 3, sum)
#---------------------------
S <- switch(ca3type, "CA3" = ca3basic(X, p=p, q=q, r=r, sign=sign), "NSCA3" =
nsca3basic(X, p=p, q=q, r=r, sign=sign),"OCA3" = oca3basic(X, p=p, q=q, r=r,norder=norder, sign=sign),
"ONSCA3" = onsca3basic(X, p=p, q=q, r=r,norder=norder, sign=sign))
######################################################################################
# #
# Defines the analysis to be performed: CA3 or NSCA3 for nominal variables; OCA3 or ONSCA3 for ordinal variables #
# #
#####################################################################################
if(ca3type == "CA3"){
pi <- apply(X/n, 1, sum)
index3res<-chi3(X)
index3 <- index3res$z
cost<-n
#indexij<-NULL
#indexik<-NULL
#indexjk<-NULL
}
if(ca3type == "OCA3"){
pi <- apply(X/n, 1, sum)
index3res<-chi3ordered(X)
index3 <- index3res$zijk
cost<-n
#indexij<-chi3ordered(X)$zij
#indexik<-chi3ordered(X)$zik
#indexjk<-chi3ordered(X)$zjk
}
if(ca3type == "NSCA3") {
index3res<-tau3(X)
index3 <- tau3(X)$z
cost<-index3res$cost
pi <- rep(1, ni)
#indexij<-NULL
#indexik<-NULL
#indexjk<-NULL
}
if(ca3type == "ONSCA3"){
# S <- oca3basic(X, p, q, r)
pi <- rep(1, ni)
#tauMres<-tau3ordered(X)
#print(chi2res)
index3res<-tau3ordered(X)
cost<-index3res$cost
index3 <- index3res$zijk
#indexij<-tau3ordered(X)$zij
#indexik<-tau3ordered(X)$zik
#indexjk<-tau3ordered(X)$zjk
}
#---------------------two-way margin tables
pij<-index3res$pij
pik<-index3res$pik
pjk<-index3res$pjk
####################################################################
# #
# Calculation of coordinates for CA3, NSCA3 and all ordered variants #
# #
####################################################################
#----sign core issue---
#g<-sqrt(S$g^2) #the sign of core values will be always positive...
g<-S$g
#----------------------------------------------------------------------for biptype row and column-tube
fiStandard <- S$a
# Standard row coordinates
ncore<-dim(S$g)
fidim<-ncore[1]
fiCdim<-ncore[2]*ncore[3]
fi <- S$a %*% flatten(g)
#------------------------------------core sign issue!!!
#for (i in 1:fidim){
#for (j in 1:fiCdim){
#if (flatten(g)[i,j]<0) fiStandard[i,j]<-(-1)*fiStandard[i,j]
#}}
#------------------------------------------------------------
# Principal row coordinates
gjkStandard <- Kron(S$b,S$cc )
gjk <- Kron(S$b,S$cc) %*% t(flatten(g))
# Calculating the column-tube principal coordinates
#------------------------------------core sign issue!!!
#for (i in 1:fidim){
#for (j in 1:fiCdim){
#if (flatten(g)[i,j]<0)
#gjkStandard[i,j]<-(-1)*gjkStandard[i,j] #change the sign
# fi[i,j]<-(-1)*fi[i,j] #change the sign
#}}
#------------------------------------------------------------
labelfi<-nomi
for (i in 1:nk){
labelgjk <- c(labelgjk, paste(nomj, nomk[i], sep = ""))
}
nr <- dim(gjk)[[1]]
nc <- dim(gjk)[[2]]
nc2 <- dim(gjkStandard)[[2]]
productfigjk <- fiStandard %*% t(gjk)
productfigjk <- round(productfigjk, digits = 2) # Inner product
dimnames(gjk) <- list(labelgjk, paste("ax", 1:nc, sep = ""))
dimnames(gjkStandard) <- list(labelgjk, paste("ax", 1:nc2, sep = ""))
dimnames(productfigjk) <- list(labelfi, labelgjk)
dimnames(fi) <- list(labelfi, paste("ax", 1:fiCdim, sep = ""))
dimnames(fiStandard) <- list(labelfi, paste("ax", 1:dim(S$a)[[2]], sep = ""))
#---------------------------------------------------------------------------------------biptype =col and row-tube
fjStandard <- S$b # Standard col coordinates
fj <- S$b %*% flatten(aperm(g,c(2,1,3))) # Principal col coordinates JxPR
gikStandard <- Kron(S$a,S$cc ) # interactive coordinates IKxPR
gik <- Kron(S$a,S$cc) %*% t(flatten(aperm(g,c(2,1,3)))) # interactive principal coordinates
# Calculating the labels for the column-tube principal coordinates
labelfj<-nomj
for (i in 1:nk){
labelgik <- c(labelgik, paste(nomi, nomk[i], sep = ""))
}
fjdim<-ncore[2]
#fjCdim<-p*r
fjCdim<-ncore[1]*ncore[3]
nr <- dim(gik)[[1]]
nc <- dim(gik)[[2]]
nc2 <- dim(gikStandard)[[2]]
productfjgik <- fjStandard %*% t(gik)
productfjgik <- round(productfjgik, digits = 2) # Inner product
dimnames(gik) <- list(labelgik, paste("ax", 1:nc, sep = ""))
dimnames(gikStandard) <- list(labelgik, paste("ax", 1:nc2, sep = ""))
dimnames(productfjgik) <- list(labelfj, labelgik)
dimnames(fj) <- list(labelfj, paste("ax", 1:fjCdim, sep = ""))
dimnames(fjStandard) <- list(labelfj, paste("ax", 1:dim(fjStandard)[[2]], sep = ""))
#--------------------------------------
#---------------------------------------------------------------------------------------biptype tube and row-col
fkStandard <- S$cc # Standard col coordinates
fk <- S$cc %*% flatten(aperm(g,c(3,1,2))) # Principal col coordinates JxPR
gijStandard <- Kron(S$a,S$b ) # interactive coordinates IKxPR
gij <- Kron(S$a,S$b) %*% t(flatten(aperm(g,c(3,1,2)))) # interactive principal coordinates
# Calculating the labels for the column-tube principal coordinates
labelfk<-nomk
for (i in 1:nj){
labelgij <- c(labelgij, paste(nomi, nomj[i], sep = ""))
}
fkdim<-ncore[3]
#fkCdim<-p*q
fkCdim<-ncore[1]*ncore[2]
nr <- dim(gij)[[1]]
nc <- dim(gij)[[2]]
nc2 <- dim(gijStandard)[[2]]
productfkgij <- fkStandard %*% t(gij)
productfkgij <- round(productfkgij, digits = 2) # Inner product
dimnames(gij) <- list(labelgij, paste("ax", 1:nc, sep = ""))
dimnames(gijStandard) <- list(labelgij, paste("ax", 1:nc2, sep = ""))
dimnames(productfkgij) <- list(labelfk, labelgij)
dimnames(fk) <- list(labelfk, paste("ax", 1:fkCdim, sep = ""))
dimnames(fkStandard) <- list(labelfk, paste("ax", 1:dim(fkStandard)[[2]], sep = ""))
#--------------------------------------
####################################################################
# #
# Calculation of inertia values #
# #
####################################################################
inertiaorig <- sum(S$xs^2) #total inertia of the table (I,J,K) equal to the related index
inertiatot<-sum(S$g^2) #inertia reconstructed when p,q and r are different from I, J and K dimensions
prp <- sum(S$g^2)/inertiaorig
inertiapc <- (S$g^2/inertiaorig*100)
# inertiaRSS <- (inertiaorig-inertiatot)
#----------------------------------------------------------for row and col-tube biplot
inertiacoltub <- apply(inertiapc,1,sum)
inertiarow <- c(apply(inertiapc,c(2,3),sum))
names(inertiarow)<-paste("ax", 1:fiCdim, sep = "")
#----------------------------------------------------------for col and row-tube biplot
inertiarowtub <- apply(inertiapc,2,sum)
inertiacol <- c(apply(inertiapc,c(1,3),sum))
names(inertiacol)<-paste("ax", 1:fjCdim, sep = "")
#----------------------------------------------------------for tube and row-col biplot
inertiarowcol <- apply(inertiapc,3,sum)
inertiatube <- c(apply(inertiapc,c(1,2),sum))
names(inertiatube)<-paste("ax", 1:fkCdim, sep = "")
#-----------------------------------------------------------------------------------------------------------------------
#ca3corporateresults<-list(Data = X, ca3type = ca3type, rows = ni, cols = nj, tubes = nk, labelfi =labelfi, labelgjk = labelgjk,labelfj =labelfj, labelgik = labelgik, labelfk =labelfk, labelgij = #labelgij, iteration = S$iteration, pi=pii, pj=pj, pk=pk, pij=pij, pik=pik, pjk=pjk,
#prp = S$prp, a=S$a,b=S$b,cc=S$cc,
#fi = fi, fiStandard= fiStandard, gjk = gjk,gjkStandard=gjkStandard, fj = fj, fjStandard= fjStandard, gik = gik,gikStandard=gikStandard,
#fk = fk, fkStandard= fkStandard, gij = gij,gijStandard=gijStandard,
#inertiaorig = inertiaorig,inertiatot=inertiatot, inertiaRSS=inertiaRSS, inertiapc=inertiapc,
#inertiapcsum = inertiapcsum, inertiacoltub = inertiacoltub,inertiarow=inertiarow, inertiarowtub = inertiarowtub,inertiacol=inertiacol,
#inertiarowcol = inertiarowcol,inertiatube=inertiatube, iproductijk = productfigjk,iproductjik = productfjgik,iproductkij = productfkgij,
#g = S$g, index3res = index3res, index3=index3,
#gammai=gammai,gammajk=gammajk,
#gammaj=gammaj,gammaik=gammaik,gammak=gammak,gammaij=gammaij)
ca3results<-list(Data = X, ca3type = ca3type, resp=resp, pi=pii, pj=pj, pk=pk, pij=pij, pik=pik, pjk=pjk,
prp = prp, a=S$a,b=S$b,cc=S$cc,
fi = fi, fiStandard= fiStandard, gjk = gjk,gjkStandard=gjkStandard, fj = fj, fjStandard= fjStandard, gik = gik,gikStandard=gikStandard,
fk = fk, fkStandard= fkStandard, gij = gij,gijStandard=gijStandard,
inertiaorig = inertiaorig,inertiatot=inertiatot, inertiapc=inertiapc, inertiacoltub = inertiacoltub,inertiarow=inertiarow, inertiarowtub = inertiarowtub,inertiacol=inertiacol,
inertiarowcol = inertiarowcol,inertiatube=inertiatube, iproductijk = productfigjk,iproductjik = productfjgik,iproductkij = productfkgij,
g = S$g, index3res = index3res, index3=index3, cost=cost)
class(ca3results)<-"CA3variants"
invisible(return(ca3results))
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/CA3variants.R
|
Kron <-
function(a, b){
a <- as.matrix(a)
b <- as.matrix(b)
ni <- nrow(a)
nj <- nrow(b)
p <- ncol(a)
q <- ncol(b)
ab <- a %o% b # ab has dimensions I x p x J x q
ab <- aperm(ab, c(1, 3, 2, 4)) # ab has dimensions I x J x p x q
dim(ab) <- c(ni * nj, p * q)
ab
}
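# Quick illustrative check (an added sketch): with the column-major unfolding
# used above, Kron(a, b) should coincide with base::kronecker() applied with
# its arguments swapped.
a_chk <- matrix(1:6, nrow = 3)   # 3 x 2
b_chk <- matrix(1:8, nrow = 4)   # 4 x 2
stopifnot(isTRUE(all.equal(Kron(a_chk, b_chk), kronecker(b_chk, a_chk))))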
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/Kron.R
|
ca3basic <-
function(x, p, q, r, test = 10^-6, ctr = T, std = T, sign = TRUE){
nnom <- dimnames(x)
I<-dim(x)[1]
J<-dim(x)[2]
K<-dim(x)[3]
nomi <- nnom[[1]]
nomj <- nnom[[2]]
nomk <- nnom[[3]]
# np <- paste("p", 1:p, sep = "")
# nq <- paste("q", 1:q, sep = "")
# nr <- paste("r", 1:r, sep = "")
xs <- standtab(x, ctr = ctr, std = std) * sqrt(sum(x)) #to get chi3
if (sign==FALSE){
res <- tucker(xs, p, q, r, test)
}
if (sign==TRUE){
res <- tucker(xs, p, q, r, test)
res<-signscore(res$a,res$b,res$cc,I,J,K,p,q,r,core=res$g,IFIXA=0,IFIXB=0,IFIXC=0) #given negative core, change signs in components
}
ncore<-dim(res$g)
np <- paste("p", 1:ncore[1], sep = "")
nq <- paste("q", 1:ncore[2], sep = "")
nr <- paste("r", 1:ncore[3], sep = "")
dimnames(res$g) <- list(np, nq, nr)
res$xs <- xs
xhat <- reconst3(res)
nx2 <- sum(xs^2)
res$tot <- nx2
nxhat2 <- sum(xhat^2)
ng2 <- sum(res$g^2)
prp <- ng2/nx2
res$ctr <- list(cti = res$a^2, ctj = res$b^2, ctk = res$cc^2)
comp <- coord(res, x)
res$xinit <- x
iteration <- res$cont
dimnames(comp$a) <- list(nomi, np)
dimnames(comp$b) <- list(nomj, nq)
dimnames(comp$cc) <- list(nomk, nr)
dimnames(xhat) <- list(nomi, nomj, nomk)
ca3results<-list( x = x, xs = xs, xhat = xhat, nxhat2 =
nxhat2, prp = prp, a = comp$a, b = comp$b, cc = comp$cc, g = res$g,
iteration = iteration)
#class(ca3results)<-"ca3basicresults"
return(ca3results)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/ca3basic.R
|
ca3plot<-function(frows,gcols,firstaxis,lastaxis,inertiapc,size1,size2,biptype,addlines){
#biplots for nominal variables
categ<-NULL
attnam<-NULL
slp<-NULL
if ((biptype == "resp")||(biptype == "row")||(biptype == "col")||(biptype == "column")||(biptype == "tube")) {
#rglines=data.frame(d1=gcols$d1,d2=gcols$d2,attnam=gcols$categ)
dimcord<-dim(gcols)[[2]]
if (dimcord<4) {stop("number of axis for graphing must be at least 2\n\n")}
if ((dimcord-2)<lastaxis) {stop("unsuitable number of axis for graphing\n\n")}
rglines=data.frame(d1=gcols[,firstaxis],d2=gcols[,lastaxis],attnam=gcols$categ)
rglines$slp=rglines$d2/rglines$d1
}
if ((biptype == "pred")||(biptype == "col-tube")||(biptype == "column-tube")||(biptype == "row-tube")||(biptype == "row-column")||(biptype == "row-col")) {
#rglines=data.frame(d1=frows$d1,d2=frows$d2,attnam=frows$categ)
dimcord<-dim(frows)[[2]]
if (dimcord<4) {stop("number of axis for graphing must be at least 2\n\n")}
if ((dimcord-2)<lastaxis) {stop("unsuitable number of axis for graphing\n\n")}
rglines=data.frame(d1=frows[,firstaxis],d2=frows[,lastaxis],attnam=frows$categ)
rglines$slp=rglines$d2/rglines$d1
}
FGcord <- rbind(frows, gcols)
xmin <-min(FGcord[,firstaxis],FGcord[,lastaxis])
xmax <- max(FGcord[,firstaxis],FGcord[,lastaxis])
ymin <- min(FGcord[,lastaxis],FGcord[,firstaxis])
ymax <- max(FGcord[,lastaxis],FGcord[,firstaxis])
CAplot <- ggplot(FGcord, aes(x=FGcord[,firstaxis], y=FGcord[,lastaxis]),type="b") +
# geom_point(aes(color=categ, shape=categ), size=size1) +
geom_point(aes(color=categ), size=size1) +
scale_shape_manual(values=categ) +
geom_vline(xintercept = 0, linetype=2, color="gray") +
geom_hline(yintercept = 0, linetype=2, color="gray") +
labs(x=paste0("Dimension",firstaxis,sep=" ", round(inertiapc[firstaxis],1), "%"),y=paste0("Dimension",lastaxis,sep= " ", round(inertiapc[lastaxis],1),"%")) +
scale_x_continuous(limits = c(xmin, xmax)) +
scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill="white", colour="black")) +
# scale_colour_manual(values=c("red", "blue")) +
# scale_colour_manual(values=c(col1, col2)) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
geom_text_repel(data=FGcord, aes(colour=categ, label = labels), size = size2, max.overlaps =Inf) +
theme(legend.position="none")+
# ggtitle(" ")
# grid.arrange(CAplot, ncol=1)
if (addlines==TRUE){
geom_abline(data=rglines,aes(intercept=0,slope=slp,colour=attnam),alpha=.5)
}#endif
grid.arrange(CAplot, ncol=1)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/ca3plot.R
|
caplot3d<-function(coordR, coordC, inertiaper, firstaxis = 1, lastaxis = 2, thirdaxis = 3){
#------------------------------------
# 3Dim plot
# coordR= row coordinates
# coordC= column coordinates
# inertiaper= percentage of explained inertia per dimension
#------------------------------------
rc = round(coordR,2)
I<-dim(rc)[2]
cc = round(coordC,2)
iner<-paste(round(inertiaper,1),"%",sep="")
colnames(rc)<-paste("Dim",1:I,sep="")
iner1<-paste("(",iner,")",sep="")
namecol<-paste(colnames(rc),iner1,sep=" ")
colnames(rc)<-namecol
p = plot_ly(type="scatter3d",mode = 'text')
p = add_trace(p, x = rc[,firstaxis], y = rc[,lastaxis], z = rc[,thirdaxis],
mode = 'text', text = rownames(rc),
textfont = list(color = "red"), showlegend = FALSE)
p = add_trace(p, x = cc[,firstaxis], y = cc[,lastaxis], z = cc[,thirdaxis],
mode = "text", text = rownames(cc),
textfont = list(color = "blue"), showlegend = FALSE)
p <- config(p, displayModeBar = FALSE)
#p <- layout(p, scene = list(xaxis = list(title = colnames(rc)[1]),
# yaxis = list(title = colnames(rc)[2]),
# zaxis = list(title = colnames(rc)[3]),
# aspectmode = "data"),
# margin = list(l = 0, r = 0, b = 0, t = 0))
p$sizingPolicy$browser$padding <- 0
my.3d.plot = p
my.3d.plot
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/caplot3d.R
|
chi3 <-
function(f3,digits=3){
nn <- dim(f3)
ni <- nn[1]
nj <- nn[2]
nk <- nn[3]
n <- sum(f3)
p3 <- f3/n
if(length(dim(f3)) != 3){
stop("f3 is not a 3 way table\n")
}
pi <- apply(p3, 1, sum)
pj <- apply(p3, 2, sum)
pk <- apply(p3, 3, sum)
pijk <- pi %o% pj %o% pk
khi3 <- n * (sum(p3^2/pijk) - 1)
pij <- apply(p3, c(1, 2), sum)
pik <- apply(p3, c(1, 3), sum)
pjk <- apply(p3, c(2, 3), sum)
#alpha<- (pij%o%pk)+aperm(pik%o%pj,c(1,3,2))+aperm(pjk%o%pi,c(3,1,2))-2*pijk
#pijktab<-((p3-alpha))
#pijkint<-n*(sum((p3-alpha)^2/pijk)) #good
khij <- n * (sum(pij^2/(pi %o% pj)) - 1)
khik <- n * (sum(pik^2/(pi %o% pk)) - 1)
khjk <- n * (sum(pjk^2/(pj %o% pk)) - 1)
khin3 <- khi3 - khij - khik - khjk
nom <- c("Term-IJ", "Term-IK", "Term-JK", "Term-IJK",
"Term-total")
xphi <- c(khij/n, khik/n, khjk/n, khin3/n, khi3/n)
x <- c(khij, khik, khjk, khin3, khi3)
y <- (100 * x)/khi3
dijk <- (ni - 1) * (nj - 1) * (nk - 1)
dij <- (ni - 1) * (nj - 1)
dik <- (ni - 1) * (nk - 1)
djk <- (nj - 1) * (nk - 1)
dtot<-dij+dik+djk+dijk
df <- c(dij, dik, djk, dijk, dtot)
pvalue= 1 - pchisq(x, df)
x2<-x/df
z <- rbind(x,xphi, y, df,pvalue,x2)
nomr <- c("Chi-squared","Phi-squared", "% of Inertia", "df","p-value", "X2/df")
dimnames(z) <- list(nomr, nom)
z <- round(z, digits = digits)
list(z = z,pij=pij,pik=pik,pjk=pjk)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/chi3.R
|
chi3ordered <-
function (f3, digits = 3)
{
####################################
#f3 three-way absolute frequency table
# ordered response variable
#ordered predictor variables X1 and X2
#it returns the three-way Pearson index, called chiIJK, and allow to partition this index
#without (result in table format) and with the orthogonal polynomials of Emerson
###################################
nn <- dim(f3)
ni <- nn[1]
ui <- rep(1, ni)
nj <- nn[2]
nk <- nn[3]
n <- sum(f3)
p3 <- f3/n
if (length(dim(f3)) != 3)
stop("f3 is not a 3 way table \n")
pi <- apply(p3, 1, sum)
pj <- apply(p3, 2, sum)
pk <- apply(p3, 3, sum)
pijk <- pi %o% pj %o% pk #to get the product
p1jk <- ui %o% pj %o% pk #to get the product
khi3 <-n* sum(((p3 - pijk)^2/pijk)) #khi3 index
pij <- apply(p3, c(1, 2), sum)
pik <- apply(p3, c(1, 3), sum)
pjk <- apply(p3, c(2, 3), sum)
p2ij <- pi %o% pj
p2ik <- pi %o% pk
p2jk<-pj %o% pk
khiij <- sum(((pij - p2ij)^2/p2ij))*n
khiik <- sum(((pik - p2ik)^2/p2ik))*n
khijk <- n* (sum((pjk - p2jk)^2/p2jk))
khin3 <- khi3 - khiij - khiik - khijk
# cat("Values of partial and total indices\n")
#nom <- c("chi2IJ", "chi2IK", "chi2JK", "chi2IJK", "chi2")
nom <- c("Term-IJ", "Term-IK", "Term-JK", "Term-IJK", "Term-total")
dres <- (ni - 1) * (nj - 1) * (nk - 1)
dij <- (ni - 1) * (nj - 1)
dik <- (ni - 1) * (nk - 1)
djk <- (nj - 1) * (nk - 1)
dtot <- dij + dik + djk + dres
# zz <- c(khiij, khiik, khijk, khin3, khi3)
# zz3 <- c(dij, dik, djk, dres, dtot)
#z <- rbind(zz, zz3)
#nomr <- c("values of partial terms", "degree of freedom")
#dimnames(z) <- list(nomr, nom)
# print(round(z, digits = digits))
############################################ computing polynomials
mi<-c(1:ni)
Apoly <- emerson.poly(mi, pi) # Emerson orthogonal polynomials
poli <- diag(sqrt(pi)) %*% Apoly[,-1] #poly with weight Di
mj <- c(1:nj)
Bpoly <- emerson.poly(mj, pj)
polj <- diag(sqrt(pj)) %*%(Bpoly[,-1]) #poly with weight Dj
mk <- c(1:nk)
Cpoly <- emerson.poly(mk, pk)
polk <- diag(sqrt(pk)) %*%(Cpoly[,-1]) #poly with weight Dk
##################################################partial term IJ
fij <- (pij - p2ij)/sqrt(p2ij)
z3par<- t(poli[,-1])%*%fij%*%polj[,-1]
z3par2<- t(poli)%*%fij%*%polj
chi2ij<-sum(z3par^2)*n
pvalijtot<-1 - pchisq(chi2ij, (ni-1)*(nj-1))
#cat("index chi2_ij reconstructed by 2 polynomials without the 0th order poly \n")
chi2ijcol<-apply(z3par^2*n,2,sum)
chi2ijrow<-apply(z3par^2*n,1,sum)
pval<-c()
for (i in 1:(ni-1)){
pval[i]<-1 - pchisq(chi2ijrow[i], nj-1)
}
pvalchi2ijrow<-pval
pval<-c()
for (i in 1:(nj-1)){
pval[i]<-1 - pchisq(chi2ijcol[i], ni-1)
}
pvalchi2ijcol<-pval
#row and column poly component
dfi<-rep((nj-1),(ni-1))
perci<-chi2ijrow/chi2ij*100
zi<-cbind(chi2ijrow,perci,dfi,pvalchi2ijrow)
zitot<-c(apply(zi[,1:3],2,sum),pvalijtot)
zi<-rbind(zi,zitot)
dfj<-rep((ni-1),(nj-1))
percj<-chi2ijcol/chi2ij*100
zj<-cbind(chi2ijcol,percj,dfj,pvalchi2ijcol)
zjtot<-c(apply(zj[,1:3],2,sum),pvalijtot)
zj<-rbind(zj,zjtot)
zij<-rbind(zi,zj)
nomi<-paste("poly-row",1:(ni-1),sep="")
nomj<-paste("poly-col",1:(nj-1),sep="")
dimnames(zij)<-list(c(nomi,"Chi2-IJ",nomj,"Chi2-IJ"),c("Term-IJ-poly","%inertia","df","p-value"))
#################################################partial term IK
fik <- (pik - p2ik)/sqrt(p2ik)
z3par<- t(poli[,-1])%*%fik%*%polk[,-1]
chi2ik<-sum(z3par^2)*n
pvaliktot<-1 - pchisq(chi2ik, (ni-1)*(nk-1))
z3par2<- t(poli)%*%fik%*%polk
chi2ikcol<-apply(z3par^2*n,2,sum)
chi2ikrow<-apply(z3par^2*n,1,sum)
pval<-c()
for (i in 1:(ni-1)){
pval[i]<-1 - pchisq(chi2ikrow[i], nk-1)
}
pvalchi2ikrow<-pval
pval<-c()
for (i in 1:(nk-1)){
pval[i]<-1 - pchisq(chi2ikcol[i], ni-1)
}
pvalchi2ikcol<-pval
#row and column poly component
dfi<-rep((nk-1),(ni-1))
perci<-chi2ikrow/chi2ik*100
zi<-cbind(chi2ikrow,perci,dfi,pvalchi2ikrow)
zitot<-c(apply(zi[,1:3],2,sum),pvaliktot)
zi<-rbind(zi,zitot)
dfk<-rep((ni-1),(nk-1))
perck<-chi2ikcol/chi2ik*100
zk<-cbind(chi2ikcol,perck,dfk,pvalchi2ikcol)
zktot<-c(apply(zk[,1:3],2,sum),pvaliktot)
zk<-rbind(zk,zktot)
zik<-rbind(zi,zk)
nomi<-paste("poly-row",1:(ni-1),sep="")
nomk<-paste("poly-tube",1:(nk-1),sep="")
dimnames(zik)<-list(c(nomi,"Chi2-IK",nomk,"Chi2-IK"),c("Term-IK-poly","%inertia","df","p-value"))
################################################partial term
fjk <- (pjk - p2jk)/sqrt(p2jk)
z3par<- t(polj[,-1])%*%fjk%*%polk[,-1]
z3par2<- t(polj)%*%fjk%*%polk
chi2jk<-sum(z3par^2)*n
pvaljktot<-1 - pchisq(chi2jk, (nj-1)*(nk-1))
chi2jkcol<-apply(z3par^2*n,2,sum)
chi2jkrow<-apply(z3par^2*n,1,sum)
pval<-c()
for (i in 1:(nk-1)){
pval[i]<-1 - pchisq(chi2jkcol[i], nj-1)
}
pvalchi2jkcol<-pval
pval<-c()
for (i in 1:(nj-1)){
pval[i]<-1 - pchisq(chi2jkrow[i], nk-1)
}
pvalchi2jkrow<-pval
#row and column poly component
dfj<-rep((nk-1),(nj-1))
percj<-chi2jkrow/chi2jk*100
zj<-cbind(chi2jkrow,percj,dfj,pvalchi2jkrow)
zjtot<-c(apply(zj[,1:3],2,sum),pvaljktot)
zj<-rbind(zj,zjtot)
dfk<-rep((nj-1),(nk-1))
perck<-chi2jkcol/chi2jk*100
zk<-cbind(chi2jkcol,perck,dfk,pvalchi2jkcol)
zktot<-c(apply(zk[,1:3],2,sum),pvaljktot)
zk<-rbind(zk,zktot)
zjk<-rbind(zj,zk)
nomj<-paste("poly-col",1:(nj-1),sep="")
nomk<-paste("poly-tube",1:(nk-1),sep="")
dimnames(zjk)<-list(c(nomj,"Chi2-JK",nomk,"Chi2-JK"),c("Term-JK-poly","%inertia","df","p-value"))
#################################three ordered variables and interaction term
fijk=(p3 - pijk)/sqrt(pijk)
z3n<- t(poli[,-1])%*%flatten(fijk)%*%Kron(polj[,-1],polk[,-1])
chi2int<-sum(z3n^2)*n
pvalijktot<-1 - pchisq(chi2int, (ni-1)*(nj-1)*(nk-1))
z3n2<- t(poli)%*%flatten(fijk)%*%Kron(polj,polk)
chi2tot<-sum(z3n2^2)*n
dim(z3n)<-c(ni-1,nj-1,nk-1)
chi2introw<-apply(z3n^2*n,1,sum)
pval<-c()
for (i in 1:(ni-1)){
pval[i]<-1 - pchisq(chi2introw[i], ni-1)
}
pvalchi2introw<-pval
dfi<-rep((nj-1)*(nk-1),(ni-1))
#----------------------------------
chi2intcol<-apply(z3n^2*n,2,sum)
pval<-c()
for (i in 1:(nj-1)){
pval[i]<-1 - pchisq(chi2intcol[i], nj-1)
}
pvalchi2intcol<-pval
#-----------------------------------
chi2inttub<-apply(z3n^2*n,3,sum)
pval<-c()
for (i in 1:(nk-1)){
pval[i]<-1 - pchisq(chi2inttub[i], nk-1)
}
pvalchi2inttub<-pval
#-----------------------------------------
#row, column and tube poly components on the three-way interaction term
perci<-chi2introw/chi2int*100
zi<-cbind(chi2introw,perci,dfi,pvalchi2introw)
zitot<-c(apply(zi[,1:3],2,sum),pvalijktot)
zi<-rbind(zi,zitot)
#----------
dfj<-rep((ni-1)*(nk-1),(nj-1))
percj<-chi2intcol/chi2int*100
zj<-cbind(chi2intcol,percj,dfj,pvalchi2intcol)
zjtot<-c(apply(zj[,1:3],2,sum),pvalijktot)
zj<-rbind(zj,zjtot)
#--------
dfk<-rep((ni-1)*(nj-1),(nk-1))
perck<-chi2inttub/chi2int*100
zk<-cbind(chi2inttub,perck,dfk,pvalchi2inttub)
zktot<-c(apply(zk[,1:3],2,sum),pvalijktot)
zk<-rbind(zk,zktot)
zijk<-rbind(zi,zj,zk)
nomi<-paste("poly-row",1:(ni-1),sep="")
nomj<-paste("poly-col",1:(nj-1),sep="")
nomk<-paste("poly-tube",1:(nk-1),sep="")
#browser()
dimnames(zijk)<-list(c(nomi, "Chi2-IJK",nomj,"Chi2-IJK",nomk,"Chi2-IJK"),c("Term-IJK-poly","%inertia","df","p-value"))
#============================================================
zznew <- c(chi2ij, chi2ik, chi2jk, chi2int, chi2tot)
zphi <- c(chi2ij/n, chi2ik/n, chi2jk/n, chi2int/n, chi2tot/n)
df<- c(dij, dik, djk, dres, dtot)
pvalue= 1 - pchisq(zznew, df)
perc<-zznew/chi2tot*100
x2<-zznew/df
znew <- rbind(zznew, zphi, perc, df, pvalue,x2)
nomr <- c("Chi-squared", "Phi-squared","%inertia",
"df","p-value","X2/df")
dimnames(znew) <- list(nomr, nom)
znew<-round(znew,digits=digits)
return(list(z=znew,zij=zij,zik=zik,zjk=zjk,zijk=zijk,pij=pij,pik=pik,pjk=pjk))
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/chi3ordered.R
|
chkneg<-
function(comp,nr,nc)
# Subroutine to check the negativity of the columns of an array
# AND the positivity of the columns of an array
# If NegPtr = 1 then there is an entirely negative component
# If PosPtr = 1 then there is an entirely positive component
# If BigPtr = 1 then the maximum negative absolute value exceeds the maximum positive value
#---------------------------
{
#posptr=NULL
#negptr=NULL
#bigptr=NULL
posptr=1
negptr=1
bigptr=1
ic=0
for (j in 1:nc){
posbig=0
negbig=0
for (i in 1:nr){
ic=ic+1
posind=which(comp[i,j]>0)
negind=which(comp[i,j]<0)
if ((comp[i,j]>=0)){negptr[j]=0}
if ((comp[i,j]>posbig)) {posbig=comp[i,j]}
#if ((comp[ic]>=0)){negptr[j]=0}
#if ((comp[ic]>posbig)) {posbig=comp[ic]}
else {posptr[j]=0}
#if ((comp[ic]<negbig)) {negbig=comp[ic]}
if ((comp[i,j]<negbig)) {negbig=comp[i,j]}
#}
else {negptr[j]=0}
}#end for
if (length(posind)>length(negind)) {bigptr[j]=0}
bigptr[j]=1
}
#print(posptr)
#print(negptr)
#print(bigptr)
list(posptr=posptr,negptr=negptr,bigptr=bigptr)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/chkneg.R
|
coord <- function(res, x){
x <- x/sum(x)
pi <- margI(x)
pj <- margJ(x)
pk <- margK(x)
res$a <- diag(1/sqrt(pi)) %*% res$a
res$b <- diag(1/sqrt(pj)) %*% res$b
res$cc <- diag(1/sqrt(pk)) %*% res$cc
res
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/coord.R
|
coordnsc3<-function(res, x)
{
x <- x/sum(x) # pi <- margI(x)
pj <- margJ(x)
pk <- margK(x)
res$a <- res$a
res$b <- diag(1/sqrt(pj)) %*% res$b
res$cc <- diag(1/sqrt(pk)) %*% res$cc
res
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/coordnsc3.R
|
criter <-
function(x, xhat){
sum((x - xhat)^2)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/criter.R
|
critera <-
function(aold, anew){
(sum((aold - anew)^2))^0.5
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/critera.R
|
crptrs <-
function(icore,p,q,r){
#given the pointer to an element of icore, it calculates IA, IB,IC pointing to components
IC=as.integer(1+(icore-1)/(p*q))
IB= as.integer(1+((icore-1)-as.integer((icore-1)/(p*q))*(p*q))/p)
IA=1+((icore-1)-as.integer((icore-1)/p)*p)
list(IA=IA,IB=IB,IC=IC)
}
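# Quick illustrative check (an added sketch): the pointer arithmetic above
# should agree with base::arrayInd() applied to a core array of dimension
# p x q x r.
p_chk <- 2; q_chk <- 3; r_chk <- 4
for (icore_chk in seq_len(p_chk * q_chk * r_chk)) {
  ptrs_chk <- crptrs(icore_chk, p_chk, q_chk, r_chk)
  stopifnot(all(c(ptrs_chk$IA, ptrs_chk$IB, ptrs_chk$IC) ==
                arrayInd(icore_chk, c(p_chk, q_chk, r_chk))))
}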
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/crptrs.R
|
emerson.poly <-function (mj, pj)
{
#################Emerson polynomials, recurrence formulas
#mj: natural scores, for example c(1,2,3,4)
#pj: marginal proportions used as weights
#####################
nc <- length(mj)
Dj <- diag(pj)
B <- matrix(1, (nc + 1), nc)
B[1, ] <- 0
Sh <- Th <- Vh <- NULL
for (i in 3:(nc + 1)) {
for (j in 1:nc) {
Th[i] <- mj %*% Dj %*% B[i - 1, ]^2
Vh[i] <- mj %*% Dj %*% (B[i - 1, ] * B[i - 2, ])
Sh[i] <- sqrt(mj^2 %*% Dj %*% B[i - 1, ]^2 - Th[i]^2 -
Vh[i]^2)^(-1)
B[i, j] <- Sh[i] * ((mj[j] - Th[i]) * B[i - 1, j] -
Vh[i] * B[i - 2, j])
}
}
B<-t(B)
#B1<-B[,-c(1,2)]
#B<-B[-c(1),]
return(B)
}
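# Quick illustrative check (an added sketch): after weighting by sqrt(pj), the
# returned polynomials (dropping the leading zero column) should be orthonormal
# with respect to the weights pj.
mj_chk  <- 1:4
pj_chk  <- c(0.1, 0.2, 0.3, 0.4)
pol_chk <- diag(sqrt(pj_chk)) %*% emerson.poly(mj_chk, pj_chk)[, -1]
stopifnot(isTRUE(all.equal(crossprod(pol_chk), diag(ncol(pol_chk)),
                           check.attributes = FALSE)))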
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/emerson.poly.R
|
flatten <-
function(x) {
nom <- dimnames(x)
dimnames(x) <- NULL
n <- dim(x)
dim(x) <- c(n[1], n[2] * n[3])
x
}
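# Quick illustrative check (an added sketch): flatten() unfolds an I x J x K
# array into an I x (J*K) matrix, with the J index running fastest across the
# columns (column-major order).
x_chk <- array(1:24, dim = c(2, 3, 4))
stopifnot(all(dim(flatten(x_chk)) == c(2, 12)),
          all(flatten(x_chk)[, 2] == x_chk[, 2, 1]))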
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/flatten.R
|
init3 <-
function(x, p, q, r){
nom <- dimnames(x)
n <- dim(x)
dimnames(x) <- NULL
y <- x
dim(y) <- c(n[1], n[2] * n[3])
p <- min(p, n[1], n[2] * n[3])
a <- svd(y)$u[, 1:p]
y <- aperm(x, c(2, 3, 1))
dim(y) <- c(n[2], n[3] * n[1])
q <- min(q, n[2], n[1] * n[3])
b <- svd(y)$u[, 1:q]
y <- aperm(x, c(3, 1, 2))
dim(y) <- c(n[3], n[1] * n[2])
r <- min(r, n[3], n[1] * n[2])
cc <- svd(y)$u[, 1:r]
dimnames(x) <- nom
list(a = as.matrix(a), b = as.matrix(b), cc = as.matrix(cc), g = NULL,
x = x)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/init3.R
|
init3ordered<-function (x, p, q, r, x0)
{
nom <- dimnames(x)
n <- dim(x)
dimnames(x) <- NULL
dimnames(x0) <- NULL
y <- x0
dimnames(y) <- NULL
dim(y) <- c(n[1], n[2] * n[3])
pii <- apply(y/sum(y), 1, sum)
p <- min(p, n[1])
mj <- c(1:n[1])
Bpoly <- emerson.poly(mj, pii)
Bpoly <- Bpoly[,-c(1,2) ]
# Bpoly <- t(Bpoly)
# cat("Bpoly \n")
# print(Bpoly)
a <- diag(sqrt(pii)) %*% Bpoly[,1:p]
# a <- Bpoly[,1:p]
#cat("Checking the orthonormality of polynomials a:\n")
#print(t(a) %*% (a))
y <- aperm(x0, c(2, 3, 1))
dim(y) <- c(n[2], n[3] * n[1])
pj <- apply(y/sum(y), 1, sum)
q <- min(q, n[2])
mj <- c(1:n[2])
Bpoly <- emerson.poly(mj, pj)
Bpoly <- Bpoly[,-c(1,2) ]
# cat("Bpoly flattened\n")
# print(Bpoly)
b <- diag(sqrt(pj)) %*% Bpoly[, 1:q]
# b <- Bpoly[,1:q]
# cat("Checking the orthonormality of polynomials b:\n")
#print(t(b) %*% (b))
y <- aperm(x0, c(3, 1, 2))
dim(y) <- c(n[3], n[1] * n[2])
pk <- apply(y/sum(y), 1, sum)
r <- min(r, n[3])
mj <- c(1:(n[3]))
Bpoly <- emerson.poly(mj, pk)
Bpoly <- Bpoly[,-c(1,2) ]
# Bpoly <- t(Bpoly)
#cat("Bpoly flattened\n")
#print(Bpoly)
cc <- diag(sqrt(pk)) %*% Bpoly[, 1:r]
# cc <- Bpoly[,1:r]
#cat("Checking the orthonormality of polynomials cc\n")
#print(t(cc) %*% (cc))
dimnames(x) <- nom
# cat("fine di init3\n")
# print(a)
# print(b)
# print(cc)
# cat("inerzia ricostruita con i polinomi ortogonali\n")
xsf <- flatten(x)
#print(dim(xsf))
bc <- Kron(b, cc)
#print((bc))
Z <- t(a) %*% xsf %*% bc
# Z <- t(a) %*%diag(sqrt(pii))%*% xsf%*%Kron(diag(sqrt(pj)),diag(sqrt(pk))) %*% bc
#cat("index ==inerzia of Z'Z and ZZ' and Z2\n")
#print(sum(diag(t(Z) %*% Z)))
#print(sum(diag((Z) %*% t(Z))))
# cat("Z table after Trivariate Moment Decomposition\n")
#print(Z)
#zij=t(a)%*%apply(xsf,2,sum)%*%b
#browser()
list(a = as.matrix(a), b = as.matrix(b), cc = as.matrix(cc),
g = NULL, x = x,pii=pii,pj=pj,pk=pk)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/init3ordered.R
|
init3ordered1<-function(x, p, q, r, x0)
{
#-------------------------------------------------
# Initialisation of TUCKER3 by
# Triple PCA for a three-way table on each
# way :
# dim(x) is IxJxK
# a is Ixp
# b is Jxq
# c is Kxr
# orthogonal polynomials computed only on the second mode
#-------------------------------------------------
nom <- dimnames(x)
n <- dim(x)
dimnames(x) <- NULL
dimnames(x0) <- NULL
y <- x0
#pk <- apply(y/sum(y), 3, sum)
dim(y) <- c(n[1], n[2] * n[3])
pii <- apply(y/sum(y), 1, sum)
p <- min(p, n[1])
a <- svd(y)$u[, 1:p]
#browser()
#-------------------------------------------------------------
y <- aperm(x0, c(2, 3, 1))
dim(y) <- c(n[2], n[3] * n[1])
pj <- apply(y/sum(y), 1, sum)
mj <- c(1:(n[2]))
Bpoly <- emerson.poly(mj, pj) # Emerson orthogonal polynomials
Bpoly <- Bpoly[, - c(1,2) ] #Bpoly <- Bpoly[1:p, ]
b <- diag(sqrt(pj)) %*% Bpoly[, 1:q] # polynomial with weights Dj
#cat("Checking the orthonormality of polynomials b:\n")
#print(t(b) %*% (b))
#-------------------------------------------------------------
# only the second variable is ordinal
dimnames(y) <- NULL
y <- aperm(x0, c(3, 1, 2))
dim(y) <- c(n[3], n[1] * n[2])
pk <- apply(y/sum(y), 1, sum)
r <- min(r, n[3])
cc <- svd(y)$u[, 1:r]
#####################################################################################
dimnames(x) <- nom
list(a = as.matrix(a), b = as.matrix(b), cc = as.matrix(cc), g = NULL, x = x,pii=pii,pj=pj,pk=pk)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/init3ordered1.R
|
init3ordered2<-function(x, p, q, r, x0)
{
#-------------------------------------------------
# Initialisation of TUCKER3 by
# Triple PCA for a three-way table on each
# way :
# dim(x) is IxJxK
# a is Ixp
# b is Jxq
# c is Kxr
# orthogonal polynomials computed on the second and third modes
#-------------------------------------------------
nom <- dimnames(x)
n <- dim(x)
dimnames(x) <- NULL
y <- x0
pii <- apply(y/sum(y), 1, sum)
pj <- apply(y/sum(y), 2, sum)
pk <- apply(y/sum(y), 3, sum)
y<-aperm(y,c(3,1,2))
dim(y) <- c(n[3], n[1] * n[2])
r <- min(r, n[3], n[1] * n[2])
cc <- svd(y)$u[, 1:r]
#---------------------------------------------------------------
# the first and second variables are ordinal
dimnames(y) <- NULL
mj <- c(1:n[2])
Bpoly <- emerson.poly(mj, pj) # Emerson orthogonal polynomials
Bpoly <- Bpoly[, - c(1,2) ] #
b <- diag(sqrt(pj)) %*% Bpoly[, 1:q] # polynomial with weights Dj
# cat("Checking the orthonormality of polynomials of the #second mode b:\n")
# print(t(b) %*% (b))
#-----------------------------------------------------
mi <- c(1:n[1])
Bpoly <- emerson.poly(mi, pii) # Emerson orthogonal polynomials
Bpoly <- Bpoly[, - c(1,2) ] #Bpoly <- Bpoly[1:p, ]
a <- diag(sqrt(pii)) %*% Bpoly[, 1:p] # polynomial with weights Di
# cat("Checking the orthonormality of polynomials a:\n")
# print(t(a) %*% (a))
#####################################################################################
list(a = as.matrix(a), b = as.matrix(b), cc = as.matrix(cc), g = NULL, x = x,pii=pii,pj=pj,pk=pk)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/init3ordered2.R
|
invcmp <-
function(comp,nr,nc,chgcomp){
k=nr*(chgcomp-1)+1
for (i in 1:nr){
comp[k]= (-1)*comp[k]
k=k+1}
list(comp=comp,chgcomp=chgcomp)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/invcmp.R
|
invcor <-
function(core,p,q,r,chgmode,chgcomp){
#to change the sign of the elements of core slice
#chgcomp of mode slice of the core matrix
#case 1 horizontal slice
if (chgmode==1){
k=chgcomp
for (i in 1:q*r)
core[k]=(-1)*core[k]
k=k+p}
#case 2 lateral slice
if (chgmode==2){
k=(chgcomp-1)*p+1
for (i in 1:p*r){
#for (i in 1:r){
#for (j in 1:p){
core[k]=(-1)*core[k]
k=k+1
#}
#k=k-p+p*q
}}
#case 3 frontal slice
if (chgmode==3){
k=(chgcomp-1)*p*q+1
for (i in 1:p*q){
core[k]=(-1)*core[k]
k=k+1}}
#list(core=core,chgmode=chgmode,chgcomp=chgcomp)
list(core=core)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/invcor.R
|
loss1.3 <-
function(param, comp.old){
xhat <- reconst3(param)
criter(param$x, xhat)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/loss1.3.R
|
loss1.3ordered<-function(param, comp.old)
{
#computes the general loss function (difference between x and xhat)
#cat("dentro loss1.3ordered\n")
xhat <- reconst3(param)
# criterordered(param$x, xhat)
criter(param$x, xhat)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/loss1.3ordered.R
|
loss2 <-
function(param, comp.old){
critera(comp.old, param$a)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/loss2.R
|
makeindicator<-function(X){
# Input a three-way table as can be used in CA3variants
# Output the N x total number of categories (rows+cols+tubs) indicator matrix
N<-sum(X)
rows <- dim(X)[1]
cols <- dim(X)[2]
tubs <- dim(X)[3]
margtubs<-apply(X,3,sum)
Z<-NULL # Create empty and start making the tubes:
for (k in 1:tubs){
Tk<-X[,,k]
n<-sum(Tk) # Number of observation. Can be replaced by margtubs[k]
for (i in 1:rows) {
for (j in 1:cols) {
rep<-Tk[i,j] # Number of observations in cell ij of tube k
if (rep>0) {
for (r in 1:rep) {
z<-matrix(0,1,rows+cols+tubs) # Make a 1 x (total # categories) vector of zeros
z[i]<-1 # column i corresponds to row i of tube k: the row category
z[rows+j]<-1 # column rows+j corresponds to column j of tube k: the column category
z[rows+cols+k]<-1 # column rows+cols+k corresponds to tube k: the tube category
Z<-rbind(Z,z)}
}
}
}
}
return(Z)
}
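# Quick illustrative check (an added sketch): the indicator matrix has one row
# per observation, one column per row/column/tube category, and exactly three
# ones in every row.
X_chk <- array(c(2, 1, 0, 3, 1, 2, 0, 1), dim = c(2, 2, 2))
Z_chk <- makeindicator(X_chk)
stopifnot(nrow(Z_chk) == sum(X_chk),
          ncol(Z_chk) == sum(dim(X_chk)),
          all(rowSums(Z_chk) == 3))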
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/makeindicator.R
|
margI <-
function(m){
n <- dim(m)
dimnames(m) <- NULL
if(length(n) == 3) {
n2 <- n[2] * n[3]
dim(m) <- c(n[1], n2)
mm <- apply(m, 1, sum)
dim(m) <- n
mm
} else {
cat(" Wrong dimension m")
return(0)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/margI.R
|
margJ <-
function(m)
{
n <- dim(m)
dimnames(m) <- NULL
if(length(n) == 3) {
m <- aperm(m, c(2, 1, 3))
n2 <- n[1] * n[3]
dim(m) <- c(n[2], n2)
mm <- apply(m, 1, sum)
mm
}
else {
cat(" Erreur dimension m")
return(0)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/margJ.R
|
margK <-
function(m){
n <- dim(m)
dimnames(m) <- NULL
if(length(n) == 3) {
n1 <- n[1] * n[2]
dim(m) <- c(n1, n[3])
mm <- apply(m, 2, sum)
} else {
cat(" Wrong dimension m")
return(0)
}
}
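# Quick illustrative check (an added sketch): margK() should agree with the
# third-way margin computed directly with apply().
m_chk <- array(1:24, dim = c(2, 3, 4))
stopifnot(isTRUE(all.equal(margK(m_chk), apply(m_chk, 3, sum))))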
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/margK.R
|
newcomp3<-function(param)
{
#----------------------
# estimation of the third component matrix given the two others b and c
# and the non-diagonal core g
# --------------------------------provisoire
#--------------------------------------------ind (externe)
# a = x f (f'x'x f)**-.5
#-------------------------------------------------------
a <- param$a
b <- param$b
cc <- param$cc
g <- param$g
pqr <- dim(g)
p <- pqr[1]
q <- pqr[2]
r <- pqr[3]
x <- param$x
f <- Kron(b, cc) # f has dimensions (JxK) x (qxr)
gf <- flatten(g) # g has dimensions p x (qxr)
fg <- f %*% t(gf) # fg has dimensions (JxK) x p
xf <- flatten(x) # xf has dimensions I x (JxK)
y <- xf %*% fg # y has dimensions I x p
#if(ind) {
#s <- t(y) %*% y
#w <- svd(s)
#d <- 1/sqrt(w$d)
#u <- as.matrix(w$u)
#a <- y %*% u %*% diag(d, p, p) %*% t(u) #rotation not needed!!
#a <- y %*% u %*% diag(d, p, p)
#}
#else {
#browser()
res <- svd(y)
#a <- res$u %*% t(res$v) #it implies a rotation not needed!!
a <- res$u
#}
#browser()
list(a = as.matrix(a), b = as.matrix(b), cc = as.matrix(cc), g = g, x = x)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/newcomp3.R
|
newcomp3ordered<-function(param)
{
n <- dim(param$x)
a <- param$a
b <- param$b
cc <- param$cc
pii<-param$pii
pj<-param$pj
pk<-param$pk
g <- param$g
pqr <- dim(g)
p <- pqr[1]
q <- pqr[2]
r <- pqr[3]
qr<-q*r
x <- param$x
f <- Kron(b[,1:q], cc[,1:r]) #dimension JK x qr
gf <- flatten(g) #dimension p x qr
fg <- f %*% t(gf) #dimension JK x p
xf <- flatten(x) #dimension Ix JK
y <- xf %*% fg #dimension Ixp
########
p <- min(p, n[1])
mj <- c(1:n[1])
#mj<-c(param$a[,2])
Apoly <- emerson.poly(mj, pii)
Apoly <- Apoly[,-c(1) ]
# cat("Apoly \n")
# print(Apoly)
a <- diag(sqrt(pii)) %*% Apoly
# a <- Apoly[, 1:p]
# cat("Checking the orthonormality of polynomials a:\n")
# print(t(a) %*% (a))
####
# browser()
res <- svd(y)
a <- res$u %*% t(res$v)
# a <- a %*% t(res$v) #dim I,p
#a<-a
#browser()
list(a = as.matrix(a), b = as.matrix(b), cc = as.matrix(cc),
g = g, xs = x,pii=pii,pj=pj,pk=pk)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/newcomp3ordered.R
|
newcomp3ordered1<-function(param)
{
n <- dim(param$x)
a <- param$a
b <- param$b
cc <- param$cc
pii<-param$pii
pj<-param$pj
pk<-param$pk
g <- param$g
pqr <- dim(g)
p <- pqr[1]
q <- pqr[2]
r <- pqr[3]
qr<-q*r
x <- param$x
f <- Kron(b[,1:q], cc[,1:r]) #dimension JK x qr
gf <- flatten(g) #dimension p x qr
fg <- f %*% t(gf) #dimension JK x p
xf <- flatten(x) #dimension Ix JK
y <- xf %*% fg #dimension Ixp
########
res <- svd(y)
a <- res$u %*% t(res$v) #implies rotation
# a <- a %*% t(res$v) #dim I,p
#a<-res$u
#browser()
list(a = as.matrix(a), b = as.matrix(b), cc = as.matrix(cc),
g = g, xs = x,pii=pii,pj=pj,pk=pk)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/newcomp3ordered1.R
|
newcomp3ordered2<-function(param)
{
n <- dim(param$x)
a <- param$a
b <- param$b
cc <- param$cc
pii<-param$pii
pj<-param$pj
pk<-param$pk
g <- param$g
pqr <- dim(g)
p <- pqr[1]
q <- pqr[2]
r <- pqr[3]
qr<-q*r
x <- param$x
f <- Kron(b[,1:q], cc[,1:r]) #dimension JK x qr
gf <- flatten(g) #dimension p x qr
fg <- f %*% t(gf) #dimension JK x p
xf <- flatten(x) #dimension Ix JK
y <- xf %*% fg #dimension Ixp
########
res <- svd(y)
a <- res$u %*% t(res$v)
# a <- a %*% t(res$v) #dim I,p
#a<-a
#browser()
list(a = as.matrix(a), b = as.matrix(b), cc = as.matrix(cc),
g = g, xs = x,pii=pii,pj=pj,pk=pk)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/newcomp3ordered2.R
|
nsca3basic <-
function(x, p, q, r, test = 10^-6, ctr = T, std = T,sign = TRUE){
#--------------------------------------NSCA3 basic
# response variable is the row variable
# otherwise you have to change the dimensionality
#------------------------------------------------
nnom <- dimnames(x)
I<-dim(x)[1]
J<-dim(x)[2]
K<-dim(x)[3]
nomi <- nnom[[1]]
nomj <- nnom[[2]]
nomk <- nnom[[3]]
tot <- sum(x)
n <- dim(x)
pi <- apply(x/tot, 1, sum)
devt <- 1 - sum(pi^2)
xs <- rstand3(x, ctr = ctr, std = std) * sqrt((tot - 1) * (n[1] - 1) * (1/devt))
if (sign==TRUE){
res <- tucker(xs, p, q, r, test)
res<-signscore(res$a,res$b,res$cc,I,J,K,p,q,r,core=res$g,IFIXA=0,IFIXB=0,IFIXC=0) #given negative core, change signs in components
}
if (sign==FALSE){
res <- tucker(xs, p, q, r, test)
}
ncore<-dim(res$g)
np <- paste("p", 1:ncore[1], sep = "")
nq <- paste("q", 1:ncore[2], sep = "")
nr <- paste("r", 1:ncore[3], sep = "")
dimnames(res$g) <- list(np, nq, nr)
res$xs <- xs
xhat <- reconst3(res)
nx2 <- sum(xs^2)
res$tot <- nx2
nxhat2 <- sum(xhat^2)
ng2 <- sum(res$g^2)
prp <- ng2/nx2
res$ctr <- list(cti = res$a^2, ctj = res$b^2, ctk = res$cc^2)
comp <- coordnsc3(res, x)
res$xinit <- x
dimnames(comp$a) <- list(nomi, np)
dimnames(comp$b) <- list(nomj, nq)
dimnames(comp$cc) <- list(nomk, nr)
dimnames(xhat) <- list(nomi, nomj, nomk)
nsca3results<-list( x = x, xs = xs, xhat = xhat, nxhat2 =
nxhat2, prp = prp, a = comp$a, b = comp$b, cc = comp$cc, g = res$g,
iteration = res$cont)
#class(nsca3results)<-"ca3basicresults"
#return(nsca3results)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/nsca3basic.R
|
oca3basic<-function(x, p,q,r, test = 10^-6, ctr = T, std = T,norder=3,sign = TRUE)
{
#-------------------------------------------------------------------------
# 3-way ordered correspondence analysis
#
# x 3-way contingency table
# ni nj nk original table dimensions
# p, q, r order of the decomposition
# test threshold used in the algorithm
# ctr (T or F) if F the analysis is not centered
#---------------------------------------------------------------------------
nnom <- dimnames(x)
I<-dim(x)[1]
J<-dim(x)[2]
K<-dim(x)[3]
if (p==I){p<-p-1}
if (q==J){q<-q-1}
if (r==K){r<-r-1}
nomi <- nnom[[1]]
nomj <- nnom[[2]]
nomk <- nnom[[3]]
tot<-sum(x)
n<-dim(x)
pi <- apply(x/tot, 1, sum)
devt <- 1 - sum(pi^2)
xs <- standtab(x, ctr = ctr, std = std)*sqrt(tot) # the original standtab function
if (sign==TRUE){
res <- tuckerORDERED(xs, p=p, q=q, r=r, x, test=test,norder=norder) #good
res<-signscore(res$a,res$b,res$cc,I,J,K,p=p,q=q,r=r,core=res$g,IFIXA=0,IFIXB=0,IFIXC=0) #given negative core, change signs in components
}
if (sign==FALSE){
res <- tuckerORDERED(xs, p=p, q=q, r=r, x, test=test,norder=norder) #good
}
ncore<-dim(res$g)
np <- paste("p", 1:ncore[1], sep = "")
nq <- paste("q", 1:ncore[2], sep = "")
nr <- paste("r", 1:ncore[3], sep = "")
dimnames(res$g) <- list(np, nq, nr)
res$xs <- xs # with standtab
#res$xs<-xsg # with standtabnew
#cat("Total Variance, Estimated Variance, Proportions\n")
xhat <- reconst3(res)
nx2 <- sum(xs^2)
res$tot <- nx2
nxhat2 <- sum(res$g^2)
res$prp <- nxhat2/nx2
#print(nx2, digit = 5)
#print(nxhat2, digit = 5)
#print(res$prp, digit = 3)
#cat("Squared Core Values")
#print(res$g^2, digit = 3)
#------------------------------------------------------
# Loading of the raw results for the contributions
#-------------------------------------------------------
#res$ctr <- list(cti = res$a^2, ctj = res$b^2, ctk = res$cc^2)
#cord <- coordord(res, x)# cat("Facteurs a\n")
res$xinit <- x
dimnames(res$a) <- list(nomi, np)
dimnames(res$b) <- list(nomj, nq)
dimnames(res$cc) <- list(nomk, nr)
oca3results<-list( x = x, xs = xs, xhat = xhat, nxhat2 =
nxhat2, prp = res$prp, a = res$a, b = res$b, cc = res$cc, g = res$g,
iteration = res$cont)
#class(oca3results)<-"ca3basicresults"
return(oca3results)
#oca3basic <- new("ca3basicresults", x = x, xs = xsg, xhat = xhat, nxhat2 =
# nxhat2, prp = res$prp, a = res$a, b = res$b, cc = res$cc, g = res$g,
# iteration = res$cont)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/oca3basic.R
|
onsca3basic<-function(x, p,q,r, test = 10^-6, ctr = T, std = T,norder=3,sign = TRUE)
{
#-------------------------------------------------------------------------
## 3-way ordered non-symmetrical correspondence analysis
#
# x 3-way contingency table
# ni nj nk original table dimensions
# p, q, r order of the decomposition
# test threshold used in the algorithm
# ctr (T or F) if F the analysis is not centered
#---------------------------------------------------------------------------
nnom <- dimnames(x)
I<-dim(x)[1]
J<-dim(x)[2]
K<-dim(x)[3]
if (p==I){p<-p-1}
if (q==J){q<-q-1}
if (r==K){r<-r-1}
nomi <- nnom[[1]]
nomj <- nnom[[2]]
nomk <- nnom[[3]]
tot<-sum(x)
n<-dim(x)
pi <- apply(x/tot, 1, sum)
devt <- 1 - sum(pi^2)
#cost<-(tot - 1) * (n[1] - 1) * (1/devt)
xs <- rstand3(x, ctr = ctr, std = std)* sqrt((tot - 1) * (n[1] - 1) *(1/devt)) # cf. the original standtab function
#xs <- standtabnew(x, ctr = ctr, std = std)*sqrt(tot)#la fun orig standtab
#browser()
#xsg<-xs$xg
#browser()
#res <- tuckerORDEREDnotrivial(xsg, p=p, q=q, r=r,x, test=test,order=order)
if (sign==TRUE){
res <- tuckerORDERED(xs, p=p, q=q, r=r, x, test=test,norder=norder) #good
res<-signscore(res$a,res$b,res$cc,I,J,K,p=p,q=q,r=r,core=res$g,IFIXA=0,IFIXB=0,IFIXC=0) #given negative core, change signs in components
}
if (sign==FALSE){
res <- tuckerORDERED(xs, p=p, q=q, r=r, x, test=test,norder=norder) #good
}
#browser()
ncore<-dim(res$g)
np <- paste("p", 1:ncore[1], sep = "")
nq <- paste("q", 1:ncore[2], sep = "")
nr <- paste("r", 1:ncore[3], sep = "")
dimnames(res$g) <- list(np, nq, nr)
res$xs <- xs # with standtab
#res$xs<-xsg # with standtabnew
#cat("Total Variance, Estimated Variance, Proportions\n")
xhat <- reconst3(res)
nx2 <- sum(xs^2)
res$tot <- nx2
nxhat2 <- sum(res$g^2)
res$prp <- nxhat2/nx2
#print(nx2, digit = 5)
#print(nxhat2, digit = 5)
#print(res$prp, digit = 3)
#cat("Squared Core Values")
#print(res$g^2, digit = 3)
#------------------------------------------------------
# Loading of the raw results for the contributions
#-------------------------------------------------------
#res$ctr <- list(cti = res$a^2, ctj = res$b^2, ctk = res$cc^2)
#res <- coordrnsc3(res, x)# cat("Facteurs a\n")
res$xinit <- x
dimnames(res$a) <- list(nomi, np)
dimnames(res$b) <- list(nomj, nq)
dimnames(res$cc) <- list(nomk, nr)
onsca3results<-list( x = x, xs = xs, xhat = xhat, nxhat2 =
nxhat2, prp = res$prp, a = res$a, b = res$b, cc = res$cc, g = res$g,
iteration = res$cont)
#class(oca3results)<-"ca3basicresults"
return(onsca3results)
#oca3basic <- new("ca3basicresults", x = x, xs = xsg, xhat = xhat, nxhat2 =
# nxhat2, prp = res$prp, a = res$a, b = res$b, cc = res$cc, g = res$g,
# iteration = res$cont)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/onsca3basic.R
|
p.ext <-
function(x, y)
{
#----------------------------------------------
# x matrix IxS
# y matrix JxS
# resultant matrix of dimension (I*J) x S
# with elements x[i,s] * y[j,s]
#-----------------------------------------------
x <- as.matrix(x)
y <- as.matrix(y)
xr <- rep(1:nrow(x), nrow(y))
yr <- rep(1:nrow(y), rep(nrow(x), nrow(y)))
z <- as.matrix(x[xr, ] * y[yr, ])
}
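# Quick illustrative check (an added sketch): row (i, j) of the result, stored
# with i running fastest, holds the element-wise products x[i, ] * y[j, ].
x_chk <- matrix(1:6, nrow = 3)   # 3 x 2
y_chk <- matrix(1:8, nrow = 4)   # 4 x 2
z_chk <- p.ext(x_chk, y_chk)
stopifnot(all(dim(z_chk) == c(12, 2)),
          all(z_chk[(2 - 1) * 3 + 1, ] == x_chk[1, ] * y_chk[2, ]))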
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/p.ext.R
|
plot.CA3variants<-
function(x, firstaxis = 1, lastaxis = 2, thirdaxis=3, cex = 0.8,biptype="column-tube",
scaleplot = NULL, plot3d=FALSE, pos=1, size1=1, size2=3, addlines=TRUE,...){
#library(ggplot2)
#library(ggrepel)
#library(gridExtra)
#library(plotly)
ndim<-dim(x$Data)
ni<-ndim[1]
nj<-ndim[2]
nk<-ndim[3]
ndimg<-dim(x$g)
lab<-dimnames(x$Data)
nomi<-lab[[1]]
nomj<-lab[[2]]
nomk<-lab[[3]]
#-----------------------------------------------------------------------------scaleplot
VV1<-sum(diag(t(x$gjkStandard)%*%x$gjkStandard)) #for row biplot
VV2<-sum(diag(t(x$fi)%*%x$fi))
gammai<-1/((ni/nj*nk)*VV1/VV2)^{1/4}
V1<-sum(diag(t(x$fiStandard)%*%x$fiStandard)) #for column-tube biplot
V2<-sum(diag(t(x$gjk)%*%x$gjk))
gammajk<-1/((nj*nk/ni)*V1/V2)^{1/4}
V1<-sum(diag(t(x$fjStandard)%*%x$fjStandard)) #for row-tube biplot
V2<-sum(diag(t(x$gik)%*%x$gik))
gammaik<-1/((ni*nk/nj)*V1/V2)^{1/4}
VV1<-sum(diag(t(x$gikStandard)%*%x$gikStandard)) #for col biplot
VV2<-sum(diag(t(x$fj)%*%x$fj))
gammaj<-1/((ni/nj*nk)*VV1/VV2)^{1/4}
V1<-sum(diag(t(x$fkStandard)%*%x$fkStandard)) #for row-tube biplot
V2<-sum(diag(t(x$gij)%*%x$gij))
gammaij<-1/((ni*nj/nk)*V1/V2)^{1/4}
VV1<-sum(diag(t(x$gijStandard)%*%x$gijStandard)) #for col biplot
VV2<-sum(diag(t(x$fk)%*%x$fk))
gammak<-1/((nk/nj*ni)*VV1/VV2)^{1/4}
#----------------------------------------------------------------------------------
categ<-NULL
categtub<-NULL
catc<-NULL
#------------------------------------------------------
if ((biptype == "row")||(biptype == "col-tube")||(biptype == "column-tube")||(biptype == "pred")||(biptype == "resp")) {
for (i in 1:ndim[[3]]){
categtub<-c(categtub, paste(lab[[2]],ndim[[3]],sep=""))
#for (i in 1:ndim[[2]]){
#categtub<-c(categtub, paste(lab[[3]],ndim[[2]],sep=""))
if ((x$ca3type == "OCA3")||(x$ca3type == "ONSCA3")) {for (i in 1:ndim[[3]]){
categtub<-c(categtub, paste(rep("coltub", ndim[[2]]),lab[[3]][i],sep=""))
}
}
}
}#end if biptype
#--------------------------------biplot choice only for symmetrical variants is limited to biptype="pred" and biptype="resp"-----------
if (biptype == "pred") {
if (is.null(scaleplot)==TRUE) {scaleplot<-gammajk}
cord1 <- x$fiStandard * scaleplot
I <- nrow(cord1)
cord2 <- x$gjk/scaleplot
J <- nrow(cord2)
rowlabels <- dimnames(x$fi)[[1]]
collabels <- dimnames(x$gjk)[[1]]
cord1<-as.matrix(cord1)
cord2<-as.matrix(cord2)
dimnames(cord1)[[1]] <- rowlabels
dimnames(cord2)[[1]] <- collabels
inertiapc <- round(x$inertiacoltub)
plotitle <- " "
catc=c(1,0)
if (I<2) {stop("number of axes for plotting must be at least 2\n\n")}
if ((x$ca3type=="CA3")||(x$ca3type=="OCA3")) {stop("WARNING: For symmetrical variants, biptype should not be equal to resp or pred \n\n")}
}
if (biptype == "resp") {
if (is.null(scaleplot)==TRUE) {scaleplot<-gammai}
cord1 <- x$fi/scaleplot
cord2 <- x$gjkStandard * scaleplot
I <- nrow(cord1)
J <- nrow(cord2)
Jaxis<-ncol(cord2)
rowlabels <- dimnames(x$fi)[[1]]
collabels <- dimnames(x$gjk)[[1]]
dimnames(cord2)[[1]] <- collabels
dimnames(cord1)[[1]] <- rowlabels
inertiapc <- round(x$inertiarow)
plotitle <- " "
catc=c(0,1)
if (Jaxis<2) {stop("number of axes for graphing must be at least 2\n\n")}
if ((x$ca3type=="CA3")||(x$ca3type=="OCA3")) {stop("WARNING: For symmetrical variants, biptype should be equal to row, column-tube, column, tube, etc. \n\n")}
}
#------------------------------------------------------------biplots for symmetrical variants
#--------------------------------------------------------------------------------------row and column-tube
if ((biptype == "column-tube")||(biptype == "col-tube")) {
if (is.null(scaleplot)==TRUE) {scaleplot<-gammajk}
cord1 <- x$fiStandard * scaleplot
I <- nrow(cord1)
Iaxis<-ncol(cord1)
cord2 <- x$gjk/scaleplot
J <- nrow(cord2)
rowlabels <- dimnames(x$fi)[[1]]
collabels <- dimnames(x$gjk)[[1]]
cord1<-as.matrix(cord1)
cord2<-as.matrix(cord2)
dimnames(cord1)[[1]] <- rowlabels
dimnames(cord2)[[1]] <- collabels
inertiapc <- round(x$inertiacoltub)
plotitle <- " "
catc=c(1,0)
if (Iaxis<2) {stop("number of axes for plotting must be at least 2\n\n")}
if ((x$ca3type=="NSCA3")||(x$ca3type=="ONSCA3")) {stop("WARNING: For non-symmetrical variants, biptype should be equal to resp or pred \n\n")}
}
if (biptype == "row") {
if (is.null(scaleplot)==TRUE) {scaleplot<-gammai}
cord1 <- x$fi/scaleplot
cord2 <- x$gjkStandard * scaleplot
I <- nrow(cord1)
J <- nrow(cord2)
Jaxis<-ncol(cord2)
rowlabels <- dimnames(x$fi)[[1]]
collabels <- dimnames(x$gjk)[[1]]
dimnames(cord2)[[1]] <- collabels
dimnames(cord1)[[1]] <- rowlabels
inertiapc <- round(x$inertiarow)
plotitle <- " "
catc=c(0,1)
if (Jaxis<2) {stop("number of axes for graphing must be at least 2\n\n")}
if ((x$ca3type=="NSCA3")||(x$ca3type=="ONSCA3")) {stop("WARNING: For non-symmetrical variants, biptype should be equal to resp or pred \n\n")}
}
#----------------------------------------------------------------------------------------------------------------
#--------------------------------------------------------------------------------------col and row-tub
if ((biptype == "row-tube")) {
if (is.null(scaleplot)==TRUE) {scaleplot<-gammaik}
cord1 <- x$fjStandard * scaleplot
I <- nrow(cord1)
Iaxis<-ncol(cord1)
cord2 <- x$gik/scaleplot
J <- nrow(cord2)
rowlabels <- dimnames(x$fj)[[1]]
collabels <- dimnames(x$gik)[[1]]
dimnames(cord1)[[1]] <- rowlabels
dimnames(cord2)[[1]] <- collabels
inertiapc <- round(x$inertiarowtub)
plotitle <- ""
catc=c(1,0)
if (Iaxis<2) {stop("number of axes for graphing must be at least 2\n\n")}
if ((x$ca3type=="NSCA3")||(x$ca3type=="ONSCA3")) {stop("WARNING: For non-symmetrical variants, biptype should be equal to resp or pred \n\n")}
}#end biptype
if ((biptype == "col")||(biptype == "column")) {
if (is.null(scaleplot)==TRUE) {scaleplot<-gammaj}
cord1 <- x$fj/scaleplot
cord2 <- x$gikStandard * scaleplot
I <- nrow(cord1)
J <- nrow(cord2)
Jaxis<-ncol(cord2)
rowlabels <- dimnames(x$fj)[[1]]
collabels <- dimnames(x$gik)[[1]]
dimnames(cord2)[[1]] <- collabels
dimnames(cord1)[[1]] <- rowlabels
inertiapc <- round(x$inertiacol)
plotitle <- " "
catc=c(0,1)
if (Jaxis<2) {stop("number of axes for graphing must be at least 2\n\n")}
if ((x$ca3type=="NSCA3")||(x$ca3type=="ONSCA3")) {stop("WARNING: For non-symmetrical variants, biptype should be equal to resp or pred \n\n")}
}
if (((biptype == "column")||(biptype == "row-tube")||(biptype == "col")||(biptype == "tube-row"))) {
#catall=NULL
for (i in 1:ndim[[3]]){
categtub<-c(categtub, paste(lab[[1]],ndim[[3]],sep=""))
}
catall <- rep("solid",ndim[[2]])
}#end if biptype
#----------------------------------------------------------------------------------------------------------------
#--------------------------------------------------------------------------------------tube and row-col
if ((biptype == "row-column")||(biptype == "row-col")||(biptype == "col-row")||(biptype == "column-row")) {
if (is.null(scaleplot)==TRUE) {scaleplot<-gammaij}
cord1 <- x$fkStandard * scaleplot
I <- nrow(cord1)
Iaxis<-ncol(cord1)
cord2 <- x$gij/scaleplot
J <- nrow(cord2)
rowlabels <- dimnames(x$fk)[[1]]
collabels <- dimnames(x$gij)[[1]]
dimnames(cord1)[[1]] <- rowlabels
dimnames(cord2)[[1]] <- collabels
inertiapc <- round(x$inertiarowcol)
plotitle <- " "
catc=c(1,0) #solid for standard coord and blank for principal ones
if (Iaxis<2) {stop("number of axes for graphing must be at least 2\n\n")}
if ((x$ca3type=="NSCA3")||(x$ca3type=="ONSCA3")) {stop("WARNING: For non-symmetrical variants, biptype should be equal to resp or pred \n\n")}
}
if (biptype == "tube") {
if (is.null(scaleplot)==TRUE) {scaleplot<-gammak}
cord1 <- x$fk/scaleplot
cord2 <- x$gijStandard * scaleplot
I <- nrow(cord1)
J <- nrow(cord2)
Jaxis<-ncol(cord2)
rowlabels <- dimnames(x$fk)[[1]]
collabels <- dimnames(x$gij)[[1]]
dimnames(cord2)[[1]] <- collabels
dimnames(cord1)[[1]] <- rowlabels
inertiapc <- round(x$inertiatube)
plotitle <- " "
catc=c(0,1) #solid for standard coord and blank for principal ones
if (Jaxis<2) {stop("number of axes for graphing must be at least 2\n\n")}
if ((x$ca3type=="NSCA3")||(x$ca3type=="ONSCA3")) {stop("WARNING: For non-symmetrical variants, biptype should be equal to resp or pred \n\n")}
}
if (( (biptype == "tube")||(biptype == "row-col")||(biptype == "row-column")||(biptype == "col-row")||(biptype == "column-row"))) {
for (i in 1:ndim[[1]]){
categtub<-c(categtub, paste(lab[[2]],ndim[[1]],sep=""))
}
}#end if biptype
#----------------------------------------------------------------------------------------------------------------
#--------------------------------------------------------------------------------------------------------------------
if ((x$ca3type=="CA3")||(x$ca3type=="NSCA3")) {
frows <- data.frame(cord1, labels=dimnames(cord1)[[1]], categ=rep("rows", I)) # build a dataframe to be used as input for plotting via ggplot2
gcols <- data.frame(cord2, labels=dimnames(cord2)[[1]], categ=categtub) # build a dataframe to be used as input for plotting via ggplot2
ca3plot(frows=frows,gcols=gcols,firstaxis=firstaxis,lastaxis=lastaxis,inertiapc=inertiapc,size1=size1,size2=size2,biptype=biptype,addlines=addlines)
}#end catype
linet<-NULL
#---------------------------------------------------------------------
if ((x$ca3type == "OCA3")||(x$ca3type == "ONSCA3")) {
categtub<-NULL
####
catall <- rep("solid",ndim[[3]])
for (i in 1:ndim[[3]]){
categtub<-c(categtub, paste(rep("coltub", ndim[[2]]),lab[[3]][i],sep=""))
}
####
if (( (biptype == "tube")||(biptype == "row-col")||(biptype == "row-column")||(biptype == "col-row")||(biptype == "column-row"))) {
Iaxis<- ncol(cord1)
Jaxis <- ncol(cord2)
categtub<-NULL
catall <- rep("solid",ndim[[1]])
for (i in 1:ndim[[2]]){
categtub<-c(categtub, paste(lab[[1]],ndim[[2]],sep=""))
}}
###
frows <- data.frame(coord = cord1, labels = dimnames(cord1)[[1]], categ = rep("rows", I), linet=rep("blank",I) )
gcols <- data.frame(coord = cord2, labels = dimnames(cord2)[[1]], categ = categtub, linet=rep("solid",J))
FGcord <- rbind(frows, gcols) # build a dataframe to be used as input for plotting via library(ggplot2)
xmin <- min(FGcord[, firstaxis], FGcord[, lastaxis])
xmax <- max(FGcord[, firstaxis], FGcord[, lastaxis])
ymin <- min(FGcord[, lastaxis], FGcord[, firstaxis])
ymax <- max(FGcord[, lastaxis], FGcord[, firstaxis])
if (addlines==TRUE){
# CA3plot <- ggplot(FGcord, aes(x = FGcord[, firstaxis], y = FGcord[, lastaxis]),group=categ) +
CA3plot <- ggplot(FGcord, aes(x = FGcord[, firstaxis], y = FGcord[, lastaxis])) +
geom_point(aes(colour = categ, shape = categ), size = size1) +
geom_vline(xintercept = 0, linetype = 2, color = "gray") +
geom_hline(yintercept = 0, linetype = 2, color = "gray") +
labs(x = paste0("Dimension", firstaxis, " ", inertiapc[firstaxis], "%"), y = paste0("Dimension",
lastaxis, " ", inertiapc[lastaxis], "%")) +
scale_x_continuous(limits = c(xmin, xmax)) +
scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill = "white", colour = "black")) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) +
geom_text_repel(data = FGcord, aes(colour = categ, label = labels), size = size2, max.overlaps =Inf) +
geom_path(aes(color=categ,linetype=linet),lwd=.5)+
scale_linetype_manual(values=c("blank",catall)) +
#scale_linetype_manual(values=catc) +
#scale_color_manual(values=c('red',rep('#E69F00',6)))+
expand_limits(y = 0) +
theme(legend.position = "none") + ggtitle(plotitle)
grid.arrange(CA3plot, ncol = 1)
}#end if addlines
else{
CA3plot <- ggplot(FGcord, aes(x = FGcord[, firstaxis], y = FGcord[, lastaxis])) +
geom_point(aes(colour = categ, shape = categ), size = size1) +
geom_vline(xintercept = 0, linetype = 2, color = "gray") +
geom_hline(yintercept = 0, linetype = 2, color = "gray") +
labs(x = paste0("Dimension", firstaxis, " ", inertiapc[firstaxis], "%"), y = paste0("Dimension",
lastaxis, " ", inertiapc[lastaxis], "%")) +
scale_x_continuous(limits = c(xmin, xmax)) +
scale_y_continuous(limits = c(ymin, ymax)) +
theme(panel.background = element_rect(fill = "white", colour = "black")) +
coord_fixed(ratio = 1, xlim = NULL, ylim = NULL, expand = TRUE) + geom_text_repel(data = FGcord, aes(colour = categ,
label = labels), size = size2) +
expand_limits(y = 0) +
theme(legend.position = "none") + ggtitle(plotitle)
grid.arrange(CA3plot, ncol = 1)
}#end else
}#end if oca3 or onsca3
#--------------------------------------------------
#------------------------------------------------
if (plot3d==TRUE) {
caplot3d(coordR = cord1, coordC = cord2, inertiaper = inertiapc)
}
}
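# Hedged usage sketch (assumptions: the package's 'happy' example table is available
# and the main CA3variants() function is used as in the rest of this source; this is
# not part of the original file):
res_demo <- CA3variants(happy, dims = c(p = 2, q = 2, r = 2), ca3type = "CA3")
plot(res_demo, firstaxis = 1, lastaxis = 2, biptype = "column-tube", addlines = TRUE)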
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/plot.CA3variants.R
|
plot.tunelocal<-
function(x,...){
#----------------------------plot tunelocal
cat("Convex hull for assessing the optimal model dimension -Goodness criterion-\n\n")
if (x$boots==TRUE){
plot(x$output1, type = "b")
title(sub="Bootstrap Data -Chi2 criterion-")
}
else{
plot(x$output1, type = "b")
title(sub="Original Data -Chi2 criterion-")
}#---
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/plot.tunelocal.R
|
print.CA3variants <-function(x, printall=FALSE, digits=3,...) {
if (printall==FALSE){
#--------------------------------------------
#cat("The three-way contingency table considered\n")
#print(x$Data)
#--------------------------------------------
if ((x$ca3type=="CA3")||(x$ca3type=="OCA3")) {
cat("Percentage contributions of the components to the total inertia for column-tube biplots\n")
print(round(x$inertiacoltub,digits=digits))
cat("\n")
#---------------------------------------------------
cat("Percentage contributions of the components to the total inertia for row-tube biplots\n")
print(round(x$inertiarowtub,digits=digits))
cat("\n")
#---------------------------------------------------
cat("Percentage contributions of the components to the total inertia for row-column biplots\n")
print(round(x$inertiarowcol,digits=digits))
cat("\n")
#---------------------------------------------------
}#end for symmetric
#-----------------------------------------------------------
if ((x$ca3type=="NSCA3")||(x$ca3type=="ONSCA3")) {
if (x$resp=="row"){
cat("Percentage contributions of the components to the total inertia for pred biplots\n")
print(round(x$inertiacoltub,digits=digits))
cat("\n")
}
#---------------------------------------------------
if ((x$resp=="column")||(x$resp=="col")){
cat("Percentage contributions of the components to the total inertia for pred biplots\n")
print(round(x$inertiarowtub,digits=digits))
cat("\n")
}
#---------------------------------------------------
if (x$resp=="tube"){
cat("Percentage contributions of the components to the total inertia for pred biplots\n")
print(round(x$inertiarowcol,digits=digits))
cat("\n")
}
#---------------------------------------------------
}#end for non-symmetric
#-----------------------------------index partition
if ((x$ca3type=="OCA3")||(x$ca3type=="ONSCA3")){
cat("\n Index partition\n\n")
print(round(x$index3res$z,digits=digits))
cat("\n")
cat("\n Partition of the Term-IJK using polynomials \n\n")
print(round(x$index3,digits=digits))
cat("\n")
cat("\n Partition of the Term-IJ using polynomials \n\n")
print(round(x$index3res$zij,digits=digits))
cat("\n")
cat("\n Partition of the Term-IK using polynomials \n\n")
print(round(x$index3res$zik,digits=digits))
cat("\n")
cat("\n Partition of the Term-JK using polynomials \n\n")
print(round(x$index3res$zjk,digits=digits))
cat("\n\n")
}
if ((x$ca3type=="CA3")||(x$ca3type=="NSCA3")){
cat("\n Index partition\n\n")
print(round(x$index3,digits=digits))
cat("\n\n")}
}#end printall false
###############################################################################
if (printall==TRUE){
#cat("The three-way contingency table \n\n")
#print(x$Data)
# cat("Number of iteration steps \n")
# print(x$iteration)
cat("\n RESULTS for 3-way Correspondence Analysis\n")
# if ((x$ca3type=="CA3")|(x$ca3type=="OCA3")){
# cat("Three-way Pearson standardised residuals \n")
# print(round(x$xs, digits = 2))
#}
#else {
# cat("Three-way (weighted) centered column profile table \n")
# print(round(x$xs, digits = 2))
# }
cat("\n Row marginals\n\n")
print(round(x$pi, digits = digits))
cat("\n Column marginals\n\n")
print(round(x$pj, digits = digits))
cat("\n Tube marginals\n\n")
print(round(x$pk, digits = digits))
cat("\n Row-Column marginals\n\n")
print(round(x$pij, digits = digits))
cat("\n Column-Tube marginals\n\n")
print(round(x$pjk, digits = digits))
cat("\n Row-Tube marginals\n\n")
print(round(x$pik, digits = digits))
exin<-x$inertiatot/x$n
orin<-x$inertiaorig/x$n
cat("Explained inertia (reduced dimensions)", exin, "\n\n")
cat("Total inertia (complete dimensions)", orin, "\n\n")
# cat("Percentage of explained inertia on total inertia \n\n")
# cat("Percentage contribution of components to the total variation\n\n")
# print(round(x$inertiapcsum,digits=digits))
# print(round(x$inertiapc,digits=digits))
#cat("Proportion of explained inertia (when reducing dimensions)\n\n")
#print(x$prp)
cat("\n Rows in principal coordinates\n\n")
print(round(x$fi, digits = digits))
cat("\n Rows in standard coordinates\n\n")
print(round(x$fiStandard, digits = digits))
cat("\n Column-tubes in principal coordinates\n\n")
print(round(x$gjk, digits = digits))
cat("\n Column-tubes in standard coordinates\n\n")
print(round(x$gjkStandard, digits = digits))
cat("\n Inner Product of coordinates (row x interactive)\n\n")
print(round(x$iproductijk, digits = digits))
cat("\n Inner Product of coordinates (col x interactive)\n\n")
print(round(x$iproductjik, digits = digits))
cat("\n Inner Product of coordinates (tube x interactive)\n\n")
print(round(x$iproductkij, digits = digits))
#--------------------------------------------
if ((x$ca3type=="CA3")||(x$ca3type=="OCA3")) {
cat("Core array i.e. Generalised singular values \n\n")
print(round(x$g/sqrt(x$cost), digits = digits))
cat("Percentage contributions of the components to the total inertia for row biplots\n\n")
print(round(x$inertiarow,digits=digits))
cat("Percentage contributions of the components to the total inertia for column-tube biplots\n\n")
print(round(x$inertiacoltub,digits=digits))
#---------------------------------------------------
cat("Percentage contributions of the components to the total inertia for column biplots\n\n")
print(round(x$inertiacol,digits=digits))
cat("Percentage contributions of the components to the total inertia for row-tube biplots\n\n")
print(round(x$inertiarowtub,digits=digits))
#---------------------------------------------------
cat("Percentage contributions of the components to the total inertia for tube biplots\n\n")
print(round(x$inertiatube,digits=digits))
cat("Percentage contributions of the components to the total inertia for row-column biplots\n\n")
print(round(x$inertiarowcol,digits=digits))
}#end for symmetric
#--------------------------------------
if ((x$ca3type=="NSCA3")||(x$ca3type=="ONSCA3")) {
if (x$resp=="row"){
cat("Core array i.e. Generalised singular values \n\n")
print(round(x$g/sqrt(x$cost), digits = digits))
cat("Percentage contributions of the components to the total inertia for resp biplots\n\n")
print(round(x$inertiarow,digits=digits))
cat("Percentage contributions of the components to the total inertia for pred biplots\n\n")
print(round(x$inertiacoltub,digits=digits))
}
#---------------------------------------------------
if ((x$resp=="column")||(x$resp=="col")){
cat("Percentage contributions of the components to the total inertia for resp biplots\n\n")
print(round(x$inertiacol,digits=digits))
cat("Percentage contributions of the components to the total inertia for pred biplots\n\n")
print(round(x$inertiarowtub,digits=digits))
}
#---------------------------------------------------
if (x$resp=="tube"){
cat("Percentage contributions of the components to the total inertia for resp biplots\n\n")
print(round(x$inertiatube,digits=digits))
cat("Percentage contributions of the components to the total inertia for pred biplots\n\n")
print(round(x$inertiarowcol,digits=digits))
}
#---------------------------------------------------
}#end for non-symmetric
#--------------------------------------
if ((x$ca3type=="OCA3")||(x$ca3type=="ONSCA3")){
cat("\n Index partition\n\n")
print(round(x$index3res$z,digits=digits))
cat("\n")
cat("\n Partition of the Term-IJK using polynomials \n\n")
print(round(x$index3,digits=digits))
cat("\n")
cat("\n Partition of the Term-IJ using polynomials \n\n")
print(round(x$index3res$zij,digits=digits))
cat("\n")
cat("\n Partition of the Term-IK using polynomials \n\n")
print(round(x$index3res$zik,digits=digits))
cat("\n")
cat("\n Partition of the Term-JK using polynomials \n\n")
print(round(x$index3res$zjk,digits=digits))
cat("\n\n")
}
if ((x$ca3type=="CA3")||(x$ca3type=="NSCA3")){
cat("\n Index partition\n\n")
print(round(x$index3,digits=digits))
cat("\n\n")}
}
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/print.CA3variants.R
|
print.tunelocal<-function(x, digits=3,...) {
cat("Note that when boots = T, the data samples generated are given in the object named 'XG' \n\n")
#print(x$XG)
#----
#cat("From tunelocal: Chi2 as goodness-of-fit criterion for #choosing the optimal model dimension\n\n")
#print(x$risCAchi2)
#----
#cat("From tunelocal: Degree of freedom (df) as measure of #the complexity of the model for choosing the optimal model #dimension\n\n")
#print(x$risCAdf)
#----
cat("Results for choosing the optimal model dimension\n\n")
print(x$output1)
#-----
#cat("From tunelocal: Variant of three-way CA considered\n\n")
#print(x$ca3type)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/print.tunelocal.R
|
prod3 <-
function(m, a1, a2, a3){
n <- dim(m)
nn <- dimnames(m)
dimnames(m) <- NULL
if(!missing(a1)){
if(length(a1) != n[1]){
stop("prod3: length(arg1) wrong")
}
dim(m) <- c(n[1], n[2] * n[3])
m <- diag(a1) %*% m
dim(m) <- n
}
if(!missing(a2)){
if(length(a2) != n[2]){
stop("prod3: length(arg2) wrong")
}
m <- aperm(m, c(2, 1, 3))
dim(m) <- c(n[2], n[1] * n[3])
m <- diag(a2) %*% m
dim(m) <- c(n[2], n[1], n[3])
m <- aperm(m, c(2, 1, 3))
}
if(!missing(a3)){
if(length(a3) != n[3]){
stop("prod3: length(arg3) wrong")
}
dim(m) <- c(n[1] * n[2], n[3])
m <- m %*% diag(a3)
dim(m) <- n
}
dimnames(m) <- nn
m
}
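# Hedged illustration (not part of the original source): prod3() rescales a 3-way
# array mode-wise, i.e. m[i, j, k] * a1[i] * a2[j] * a3[k] for whichever of the
# arguments a1, a2, a3 are supplied.
m_demo <- array(1, dim = c(2, 3, 4))
w_demo <- prod3(m_demo, a1 = c(1, 2), a3 = rep(0.5, 4))
w_demo[2, 1, 1]   # 1 * a1[2] * a3[1] = 2 * 0.5 = 1; mode 2 is left unscaled here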
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/prod3.R
|
reconst3 <-
function(param){
a <- param$a
b <- param$b
cc <- param$cc
g <- param$g
pqr <- dim(g)
y <- Kron(b, cc) # dim(y) is (JxK)x(qxr)
gf <- flatten(g) # dim(gf) is p x (qxr)
y <- y %*% t(gf) # dim(y) is (JxK) x p
z <- p.ext(a, y)
xhat <- apply(z, 1, sum)
dim(xhat) <- c(dim(a)[1], dim(b)[1], dim(cc)[1])
xhat
}
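# Hedged sketch (assumes the package helpers Kron() and flatten() used by reconst3();
# not part of the original source): reconstruct an I x J x K array from Tucker
# components a (I x p), b (J x q), cc (K x r) and core g (p x q x r).
set.seed(1)
param_demo <- list(a = matrix(rnorm(6), 3, 2), b = matrix(rnorm(4), 2, 2),
                   cc = matrix(rnorm(4), 2, 2), g = array(rnorm(8), c(2, 2, 2)))
xhat_demo <- reconst3(param_demo)
dim(xhat_demo)   # 3 2 2, the reconstructed array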
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/reconst3.R
|
rstand3 <-
function(x, std = T, ctr = T){
ntot <- sum(x)
if(length(dim(x)) != 3){
stop("standtab: this is not a 3-way array!\n")
}
if(ntot == 0){
stop("This matrix as for sum zero!")
}
x <- x/ntot
pii <- margI(x)
pj <- margJ(x)
pk <- margK(x)
ui <- rep(1, length(pii))
uj <- rep(1, length(pj))
y <- rep(1, length(x))
dim(y) <- dim(x)
if(ctr){
y <- prod3(y, pii, pj, pk)
} else {
y <- 0
}
x <- x - y
if(std){
prod3(x, ui, 1/sqrt(pj), 1/sqrt(pk))
} else {
prod3(x, ui, 1/pj, 1/pk)
}
}
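# Hedged usage sketch (assumes the package helpers margI(), margJ(), margK(); not part
# of the original source): rstand3() centers a three-way table with respect to the
# complete independence model p_i p_j p_k and weights it by 1/sqrt(p_j) and 1/sqrt(p_k)
# only (rows unweighted), the standardisation used by the non-symmetrical variants above.
x_demo <- array(c(8, 2, 5, 5, 3, 7, 6, 4, 9, 1, 2, 8), c(2, 3, 2))
xs_demo <- rstand3(x_demo, std = TRUE, ctr = TRUE)
dim(xs_demo)   # 2 3 2, same shape as the input table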
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/rstand3.R
|
selmod <-
function(aptr,bptr,cptr,posptrA,negptrA,bigptrA,posptrB,negptrB,bigptrB,posptrC,negptrC,bigptrC,
IA,IB,IC,I,J,K,p,q,r,longest)
{
#-------------------------------------------------------------------
# ai, bi, cci: columns of the component matrices a, b and cc
#
#IA, IB and IC pointers to the components associated with the greatest (in absolute value) core elements
#!------------------------------------------------------------------------------------
#! Select the mode in which the column has to be sign reversed.
#! Below is a heuristic algorithm, but a fully rational choice is hard to come by.
#!
#! 1,1,1 is always the largest element.
#!
#! maximal number of sign reversals = p+q+r-2, but this number can be much smaller
#!
#! *sign reverse a component
#! determine which if any of s, t and u is available for reversal
#! if one of them is wholly positieve way
#! never choose it
#! else if one is wholly negative
#! choose that one from A, B, C respectively
#! else if there is a component with a largest absolute value which is negative
#! choose that one, or the one from A,B,C in that order
#! else
#! choose the column of the longest mode
#! end if
#!
#! FreeA,FreeB,FreeC = 0 component is not available; = 1 component is available
#!------------------------------------------------------------------------------------
#-----------------------------------------------------------------------------------------------------
freeA=0
freeB=0
freeC=0
chgmode=0
chgcomp=0
success=0
#cat("longest is\n")
#print(longest)
#1. check if components are available for sign inversion
if (aptr==1) freeA=1
if (bptr==1) freeB=1
if (cptr==1) freeC=1
if ((freeA+freeB+freeC)==0) {success=0} #no components available to be reversed
if ((freeA+freeB+freeC)==1) {###success=1
if (freeA==1)
{chgmode=1
chgcomp=IA
#success=1
}
if (freeB==1)
{chgmode=2
chgcomp=IB
#success=1
}
if (freeC==1)
{chgmode=3
chgcomp=IC
#success=1
}
success=1
}#endif
#2. check whether there are only positive components
if (posptrA==1) freeA=0
if (posptrB==1) freeB=0
if (posptrC==1) freeC=0
if ((freeA+freeB+freeC)==0) {success=0} #no components available to be reversed
if ((freeA+freeB+freeC)==1) {#success=1-in pieter's goto999
if (freeA==1)
{chgmode=1
chgcomp=IA
#success=1
}
if (freeB==1)
{chgmode=2
chgcomp=IB
#success=1
}
if (freeC==1)
{chgmode=3
chgcomp=IC
#success=1
}
success=1
}#endif
#3. check whether there are only negative components
free2A=freeA*negptrA
free2B=freeB*negptrB
free2C=freeC*negptrC
if ((free2A+free2B+free2C)==0) {
freeA=0
freeB=0
freeC=0
success=0}
if ((free2A+free2B+free2C)==1)
{freeA=free2A
freeB=free2B
freeC=free2C
####success=1
if (freeA==1)
{chgmode=1
chgcomp=IA
#success=1
}
if (freeB==1)
{chgmode=2
chgcomp=IB
#success=1
}
if (freeC==1)
{chgmode=3
chgcomp=IC
#success=1
}
success=1
}#endif
if ((free2A+free2B+free2C)>1) {
if (free2A==1){
freeA=1
freeB=0
freeC=0}
if (free2B==1){
freeA=0
freeB=1
freeC=0}
if (free2C==1){
freeA=0
freeB=0
freeC=1}
###success=1
if (freeA==1)
{chgmode=1
chgcomp=IA
#success=1
}
if (freeB==1)
{chgmode=2
chgcomp=IB
#success=1
}
if (freeC==1)
{chgmode=3
chgcomp=IC
#success=1
}
success=1
}#endif
#4. no wholly negative components: check whether any component has a negative entry with a larger absolute value than any positive entry
free2A=freeA*bigptrA
free2B=freeB*bigptrB
free2C=freeC*bigptrC
if ((free2A+free2B+free2C)==0) {
freeA=0
freeB=0
freeC=0}
if ((free2A+free2B+free2C)==1){
freeA=free2A
freeB=free2B
freeC=free2C
######success=1
if (freeA==1)
{chgmode=1
chgcomp=IA
#success=1
}
if (freeB==1)
{chgmode=2
chgcomp=IB
#success=1
}
if (freeC==1)
{chgmode=3
chgcomp=IC
#success=1
}
success=1
}#endif
if ((free2A+free2B+free2C)>1) {
if (free2C==1)
{freeA=0
freeB=0
freeC=1}
if (free2B==1){
freeA=0
freeB=1
freeC=0}
if (free2A==1){
freeA=1
freeB=0
freeC=0
}
#####999 success=1
if (freeA==1)
{chgmode=1
chgcomp=IA
#success=1
}
if (freeB==1)
{chgmode=2
chgcomp=IB
#success=1
}
if (freeC==1)
{chgmode=3
chgcomp=IC
#success=1
}
success=1
}#endif
#
#5. no special reason for selection could be found, choose components of longest mode to be inverted
if (freeA==1){
if (freeB==1){
if (freeC==1){
if (longest==1){
freeA=1
freeB=0
freeC=0
}
if (longest==2){
freeA=0
freeB=1
freeC=0}
else{
freeA=0
freeB=0
freeC=1}
if (I>=J)
{freeA=1
freeB=0
freeC=0
}
else{
freeA=0
freeB=1
freeC=0
}
if (I>=K)
{
freeA=1
freeB=0
freeC=0
}
else {
freeA=0
freeB=0
freeC=1
}
if (J>=K)
{freeA=0
freeB=1
freeC=0
}
else{
freeA=0
freeB=0
freeC=1
}
}#end if freeA
}#end if freeB
}#end if freeC
#-----------------------
list(success=success,chgmode=chgmode,chgcomp=chgcomp)
}
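# Hedged sketch (assumption: this mirrors the per-column flags that chkneg() is expected
# to supply to selmod(); it is not part of the original source). For a single component
# column v, the three indicators consumed above would read:
v_demo <- c(-0.6, 0.2, -0.1)
posptr_demo <- as.numeric(all(v_demo >= 0))   # 1 if the column is wholly positive
negptr_demo <- as.numeric(all(v_demo <= 0))   # 1 if the column is wholly negative
bigptr_demo <- as.numeric(max(abs(v_demo)) == abs(min(v_demo)) && min(v_demo) < 0)
# bigptr flags that the entry with the largest absolute value is negative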
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/selmod.R
|
signscore <-
function(a,b,cc,I,J,K,p,q,r,core,IFIXA,IFIXB,IFIXC){
aptr=1
bptr=1
cptr=1
if (IFIXA==1) aptr=0
if (IFIXB==1) bptr=0
if (IFIXC==1) cptr=0
if (I==max(I,J,K)) longest=1
if (J==max(I,J,K)) longest=2
if (K==max(I,J,K)) longest=3
ressrtcor=srtcor(core,p,q,r)
coreord=core[ressrtcor$coreptr]
dim(coreord)=dim(core)
####-----------------------sort core check neg and pos and flag
achk=chkneg(a,nr=I,nc=p)
#print(achk)
bchk=chkneg(b,nr=J,nc=q)
cchk=chkneg(cc,nr=K,nc=r)
for (i in 1:(p*q*r)){
rescrptrs=crptrs(ressrtcor$coreptr[i],p,q)
aord=unique(a[,crptrs(ressrtcor$coreptr,p,q)$IA],MARGIN=2) #ordered component a
bord=unique(b[,crptrs(ressrtcor$coreptr,p,q)$IB],MARGIN=2) #ordered component b
ccord=unique(cc[,crptrs(ressrtcor$coreptr,p,q)$IC],MARGIN=2) #ordered component cc
if (core[ressrtcor$coreptr[i]]<0){
rselmod=selmod(aptr=1, bptr=1, cptr=1, posptrA=achk$posptr[rescrptrs$IA],negptrA=achk$negptr[rescrptrs$IA],bigptrA=achk$bigptr[rescrptrs$IA],posptrB=bchk$posptr[rescrptrs$IB],negptrB=bchk$negptr[rescrptrs$IB],bigptrB=bchk$bigptr[rescrptrs$IB],posptrC=cchk$posptr[rescrptrs$IC],negptrC=cchk$negptr[rescrptrs$IC],bigptrC=cchk$bigptr[rescrptrs$IC],
IA=rescrptrs$IA,IB=rescrptrs$IB,IC=rescrptrs$IC,I=I,J=J,K=K,p=p,q=q,r=r,longest=longest)
if (rselmod$success==1)
{
rinvcore=invcor(core,p,q,r,chgmode=rselmod$chgmode,chgcomp=rselmod$chgcomp)
if (rselmod$chgmode==1) {comp=a
nr=I
nc=p}
if (rselmod$chgmode==2) {comp=b
nr=J
nc=q}
if (rselmod$chgmode==3) {comp=cc
nr=K
nc=r}
rinvcmp=invcmp(comp,nr,nc,chgcomp=rselmod$chgcomp)
}#end if success
if (rselmod$chgmode==1) a=rinvcmp$comp
if (rselmod$chgmode==2) b=rinvcmp$comp
if (rselmod$chgmode==3) cc=rinvcmp$comp
}#end if core
aptr[rescrptrs$IA]=0
bptr[rescrptrs$IB]=0
cptr[rescrptrs$IC]=0
#if (sum(aptr)+sum(bptr)+sum(cptr)==0)
}#end for
list(g=core,gord=coreord,a=a,aord=aord,b=b,bord=bord,cc=cc,ccord=ccord)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/signscore.R
|
simulabootsimple<-function(Xtable,nboots=100,resamptype=1){
#----------------------------------------------------------
# BOOTSTRAPPING for three-way contingency table
#Xtable <- read.table(file = "Suicidedata.txt", header=TRUE)
#Xtable<-happy
#nboots=3
#resamptype=1
#----------------------------------------------------------
set.seed(1234)
X <- as.array(Xtable)
Xf<-flatten(X)
rows <- dim(X)[1]
cols <- dim(X)[2]
tubs <- dim(X)[3]
n <- sum(X)
#----------------------------------------------
# Number of bootstrap replicates
# set value of resamptype according to method
# resamptype 1 = multinomial
# resamptype 2 = Poisson
#-----------------------------------------------
XB<-Xslice<-margI<-margJ<-margK<-list()
for (b in 1:nboots) {
# vary if mntype not 0 or poisszero,poissone not 0
if (resamptype==1) Xr <- rmultinom(1,n,Xf) else Xr <- rpois(rows*cols*tubs,Xf) # one Poisson draw per cell of the full table
XB[[b]]<-array(unlist(Xr),c(rows,cols,tubs))
}#end bootstrap
XmargI<-apply(X/sum(X),1,sum)
XmargJ<-apply(X/sum(X),2,sum)
XmargK<-apply(X/sum(X),3,sum)
#list(XB,XmargI,XmargJ,XmargK)
return(XB)
}
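# Hedged usage sketch (assumes the package's flatten() helper; not part of the original
# source): five multinomial bootstrap replicates of a small 2 x 3 x 2 table. Under
# multinomial resampling every replicate keeps the same grand total as the input.
x_demo <- array(c(10, 6, 8, 4, 7, 9, 5, 12, 6, 9, 11, 3), c(2, 3, 2))
XB_demo <- simulabootsimple(x_demo, nboots = 5, resamptype = 1)
length(XB_demo)        # 5 resampled arrays
sapply(XB_demo, sum)   # all equal to sum(x_demo)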
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/simulabootsimple.R
|
simulabootstrat<-function(Xtable,nboots=100,resamptype=1){
# BOOTSTRAPPING for three-way contingency table
#Xtable <- read.table(file = "Suicidedata.txt", header=TRUE)
#Xtable<-happy
set.seed(1234)
X <- as.array(Xtable)
rows <- dim(X)[1]
cols <- dim(X)[2]
tubs <- dim(X)[3]
n<-c()
for (k in 1:tubs){
n[k] <- sum(X[,,k])
}
# Number of bootstrap replicates
nboots <- nboots
# set value of resamptype according to method
# resamptype 1 = multinomial
# resamptype 2 = Poisson
XB<-Xslice<-margI<-margJ<-margK<-list()
for (b in 1:nboots) {
# vary if mntype not 0 or poisszero,poissone not 0
for(k in 1:tubs){
if (resamptype==1) Xr <- rmultinom(1,n[k],X[,,k]) else Xr <- rpois(rows*cols,X[,,k])
Xslice[[k]] <- matrix(data=Xr,nrow=rows,ncol=cols)
}#end array
XB[[b]]<-array(unlist(Xslice),c(rows,cols,tubs))
}#end bootstrap
XmargI<-apply(X/sum(X),1,sum)
XmargJ<-apply(X/sum(X),2,sum)
XmargK<-apply(X/sum(X),3,sum)
#list(XB,XmargI,XmargJ,XmargK)
return(XB)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/simulabootstrat.R
|
srtcor <-
function(core,p,q,r)
{#give core pointers
#coreptr=array(sort(abs(core),index.return=T,decreasing=T)$ix,c(p,q,r)) #pointers to the greatest absolute value in core
coreptr=sort(abs(core),index.return=T,decreasing=T)$ix #pointers to the greatest absolute value in core
#coreord=array(sort(abs(core),index.return=T,decreasing=T)$x,c(p,q,r))
list(coreptr=coreptr)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/srtcor.R
|
standtab <-
function(x, std = T, ctr = T){
ntot <- sum(x)
if(length(dim(x)) != 3){
stop("standtab: this is not a 3-way array!\n")
}
if(ntot == 0){
stop("This array has for sum zero!")
}
x <- x/ntot
pii <- margI(x)
pj <- margJ(x)
pk <- margK(x)
ui <- rep(1, length(pii))
uj <- rep(1, length(pj))
y <- rep(1, length(x))
dim(y) <- dim(x)
y <- if(ctr) prod3(y, pii, pj, pk) else 0
x <- x - y
if(std){
prod3(x, 1/sqrt(pii), 1/sqrt(pj), 1/sqrt(pk))
} else {
prod3(x, 1/pii, 1/pj, 1/pk)
}
}
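# Hedged illustration (assumes the package helpers margI(), margJ(), margK(); not part
# of the original source): the squared standardised residuals from standtab(), multiplied
# by the grand total, give the Pearson statistic against complete three-way independence.
x_demo <- array(c(10, 6, 8, 4, 7, 9, 5, 12, 6, 9, 11, 3), c(2, 3, 2))
xs_demo <- standtab(x_demo, std = TRUE, ctr = TRUE)
sum(xs_demo^2) * sum(x_demo)   # Pearson X^2 for the model p_ijk = p_i p_j p_k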
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/standtab.R
|
step.g3 <-
function(param) {
a <- param$a
b <- param$b
cc <- param$cc
p <- ncol(a)
q <- ncol(b)
r <- ncol(cc)
x <- param$x
x <- flatten(x)
ax <- t(a) %*% x
bc <-Kron( b, cc)
g <- ax %*% bc
dim(g) <- c(p, q, r)
param$g <- g
param
}
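# Hedged sketch (assumes the package helpers flatten() and Kron(); not part of the
# original source): step.g3() computes the Tucker3 core as
# t(a) %*% flatten(x) %*% Kron(b, cc), reshaped to p x q x r.
set.seed(2)
a_demo <- qr.Q(qr(matrix(rnorm(9), 3, 3)))[, 1:2]   # 3 x 2, orthonormal columns
b_demo <- qr.Q(qr(matrix(rnorm(9), 3, 3)))[, 1:2]   # 3 x 2
cc_demo <- qr.Q(qr(matrix(rnorm(4), 2, 2)))         # 2 x 2
x_demo <- array(rnorm(18), c(3, 3, 2))
g_demo <- step.g3(list(a = a_demo, b = b_demo, cc = cc_demo, x = x_demo))$g
dim(g_demo)   # 2 2 2 core array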
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/step.g3.R
|
step.g3ordered<-function(param)
{
#-----------------------------------------------------------
# Compute the core g given the 3-way array x and the matrices A, B and C
# x of dimensions I x J x K
# A of dimensions I x p
# B of dimensions J x q
# C of dimensions K x r
# g of dimensions p x q x r
#-----------------------------------------------------------------------
Z <- param$Z
a <- param$a
b <- param$b
cc <- param$cc
pii<-param$pii
pj<-param$pj
pk<-param$pk
p <- ncol(a)
q <- ncol(b)
r <- ncol(cc)
x <- param$x
x <- flatten(x)# dim(x) is I x (JxK)
#cat("sono dentro spep.g3\n")#print(dim(a))
#print(dim(x))
ax <- t(a) %*% x# dim(ax) is p x (JxK)
#bc <- b %k% cc # bc of dimensions (J x K) x (q x r)
bc<-Kron(b,cc)
#print(bc)
g <- ax %*% bc # g of dimensions c(p, (q x r))
Z <- t(a) %*%diag(sqrt(pii))%*% x%*%Kron(diag(sqrt(pj)),diag(sqrt(pk))) %*% bc
dim(g) <- c(p, q, r)
dim(Z) <- c(p, q, r)
param$g <- g
param$Z <- Z #includes the metrics
#browser()
param
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/step.g3ordered.R
|
stepi3 <-
function(param){
xp <- aperm(param$x, c(2, 3, 1))
gp <- aperm(param$g, c(2, 3, 1))
paramp <- list(a = param$b, b = param$cc, cc = param$a, g = gp, x = xp)
newcomp3(paramp)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/stepi3.R
|
stepi3ordered<-function (param)
{
xp <- aperm(param$x, c(2, 3, 1))
gp <- aperm(param$g, c(2, 3, 1))
paramp <- list(a = param$b, b = param$cc, cc = param$a, g = gp,
x = xp,pii=param$pj,pj=param$pk,pk=param$pii)
#browser()
newcomp3ordered(paramp)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/stepi3ordered.R
|
summary.CA3variants <-function(object,digits=3,...){
#UseMethod("summary")
#--------------------------------------------
#---------------------------------------------------
# cat("\n RESULTS for 3-way Correspondence Analysis\n")
#if ((object$ca3type=="OCA3")||(object$ca3type=="ONSCA3")){
#cat("\n Index partionings\n\n")
#print(round(object$index3res$z,digits=digits))
#print(round(object$index3,digits=digits))
#print(round(object$index3res$zij,digits=digits))
# print(round(object$index3res$zik,digits=digits))
# print(round(object$index3res$zjk,digits=digits))
# cat("\n\n")
#}
#--------------------------------------
cat("Core table \n")
print(round(object$g/sqrt(object$cost),digits=digits))
cat("Squared core table\n")
print(round(object$g^2/object$cost,digits=digits))
#---------------------------------------------------
cat("Explained inertia (reduced dimensions)\n")
print(round(object$inertiatot/object$cost,digits=digits))
cat("\n")
cat("Total inertia (complete dimensions)\n")
print(round(object$inertiaorig/object$cost,digits=digits))
cat("\n")
cat("Percentage of explained inertia (when reducing dimensions)\n")
# print(round(object$inertiatot/object$inertiaorig), digits=digits)
print(object$prp*100, digits=digits)
#if ((object$ca3type=="OCA3")||(object$ca3type=="ONSCA3")){
#cat("\n Index partionings\n\n")
#print(round(object$index3res$z,digits=digits))
#print(round(object$index3,digits=digits))
#print(round(object$index3res$zij,digits=digits))
# print(round(object$index3res$zik,digits=digits))
# print(round(object$index3res$zjk,digits=digits))
# cat("\n\n")
#}
#if ((object$ca3type=="CA3")||(object$ca3type=="NSCA3")){
#cat("\n Index partionings\n\n")
# print(round(object$index3,digits=digits))
# cat("\n\n")}
# cat("\n OBJECTS of 3-way Correspondence Analysis\n")
#UseMethod("summary")
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/summary.CA3variants.R
|
tau3 <-
function(f3, digits = 3){
nn <- dim(f3)
ni <- nn[1]
ui <- rep(1, ni)
nj <- nn[2]
nk <- nn[3]
n <- sum(f3)
p3 <- f3/n
if(length(dim(f3)) != 3){
stop("f3 is not a 3 way table \n")
}
pi <- apply(p3, 1, sum)
pj <- apply(p3, 2, sum)
pk <- apply(p3, 3, sum)
pijk <- pi %o% pj %o% pk
p1jk <- ui %o% pj %o% pk
devt <- 1 - sum(pi^2)
tau3 <- sum(((p3 - pijk)^2/p1jk))
itau3 <- tau3/devt
pij <- apply(p3, c(1, 2), sum)
pik <- apply(p3, c(1, 3), sum)
pjk <- apply(p3, c(2, 3), sum)
p1j <- ui %o% pj
p1k <- ui %o% pk
p2ij <- pi %o% pj
p2ik <- pi %o% pk
tauij <- sum(((pij - p2ij)^2/p1j))
itauij <- tauij/devt
tauik <- sum(((pik - p2ik)^2/p1k))
itauik <- tauik/devt
khjk <- 1/ni * (sum((pjk - (pj %o% pk))^2/(pj %o% pk)))
ikhjk<-khjk/devt
khin3 <- tau3 - tauij - tauik - khjk
ikhin3 <-khin3/devt
# cat("Numerator Values of partial and total indices\n")
nom <- c("Term-IJ", "Term-IK", "Term-JK", "Term-IJK", "Term-total")
# nomI <- c("TauIJ", "TauIK", "TauJK", "TauIJK", "TauM")
x <- c(tauij, tauik, khjk, khin3, tau3)
y <- (100 * x)/tau3
dres<-(ni-1)*(nj-1)*(nk-1)
dij<-(ni-1)*(nj-1)
dik<-(ni-1)*(nk-1)
djk<-(nj-1)*(nk-1)
dtot<-dij+dik+djk+dres
cost<-(n - 1) * (ni - 1) * (1/devt)
CM <- (n - 1) * (ni - 1) * itau3
Cij<-(n-1)*(ni-1)*itauij
Cik<-(n-1)*(ni-1)*itauik
Cjk<-(n-1)*(ni-1)*ikhjk
Cijk<-(n-1)*(ni-1)*ikhin3
zz <- c(itauij, itauik, ikhjk, ikhin3,itau3)
zz2<-c(Cij,Cik,Cjk,Cijk,CM)
zz3<-c(dij,dik,djk,dres,dtot)
x2<-zz2/zz3
pvalue= 1 - pchisq(zz2, zz3)
z <- rbind(x, zz,y,zz2,zz3,pvalue,x2)
nomr <- c("Tau Numerator", "Tau", "% of Inertia",
"CM-Statistic","df","p-value","CM-Statistic/df")
dimnames(z) <- list(nomr, nom)
z <- round(z, digits = digits)
list(z = z, CM = CM,pij=pij,pik=pik,pjk=pjk, cost=cost)
}
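# Hedged usage sketch (not part of the original source): partition of the Marcotorchino
# index for a small 2 x 3 x 2 table, with the rows treated as the response variable.
x_demo <- array(c(10, 6, 8, 4, 7, 9, 5, 12, 6, 9, 11, 3), c(2, 3, 2))
res_demo <- tau3(x_demo)
res_demo$z    # numerators, tau values, % of inertia, C-statistics, df and p-values
res_demo$CM   # Marcotorchino's C-statistic for the full index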
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/tau3.R
|
tau3ordered<-
function (f3, digits = 3)
{
####################################
#f3 three-way absolute frequency table
# ordered response variable: orderedY = TRUE or FALSE
# number of ordered predictor variables: orderedX = 0, 1 or 2; if orderedX = 1, X1 is treated as ordered and X2 as not ordered
#########################
nn <- dim(f3)
ni <- nn[1]
ui <- rep(1, ni)
nj <- nn[2]
nk <- nn[3]
n <- sum(f3)
p3 <- f3/n
if (length(dim(f3)) != 3)
stop("f3 is not a 3 way table \n")
pi <- apply(p3, 1, sum)
pj <- apply(p3, 2, sum)
pk <- apply(p3, 3, sum)
pijk <- pi %o% pj %o% pk #to get the product
p1jk <- ui %o% pj %o% pk #to get the product
devt <- 1 - sum(pi^2) #tau3 denominator
tau3 <- sum(((p3 - pijk)^2/p1jk)) #tau3 numerator
itau3 <- tau3/devt #tau3 index
pij <- apply(p3, c(1, 2), sum)
pik <- apply(p3, c(1, 3), sum)
pjk <- apply(p3, c(2, 3), sum)
p1j <- ui %o% pj
p1k <- ui %o% pk
p2ij <- pi %o% pj
p2ik <- pi %o% pk
p2jk<-pj %o% pk
tauij <- sum(((pij - p2ij)^2/p1j))
itauij <- tauij/devt
tauik <- sum(((pik - p2ik)^2/p1k))
itauik <- tauik/devt
khjk <- 1/ni * (sum((pjk - (pj %o% pk))^2/(pj %o% pk)))
ikhjk <- khjk/devt
khin3 <- tau3 - tauij - tauik - khjk
ikhin3 <- khin3/devt
cost<-(n - 1) * (ni - 1) * (1/devt)
#cat("Numerator Values of partial and total indices\n")
nom <- c("Term-IJ", "Term-IK", "Term-JK", "Term-IJK", "Term-total")
# nomI <- c("TauIJ", "TauIK", "TauJK", "TauIJK", "TauM")
x <- c(tauij, tauik, khjk, khin3, tau3)
y <- (100 * x)/tau3
dres <- (ni - 1) * (nj - 1) * (nk - 1)
dij <- (ni - 1) * (nj - 1)
dik <- (ni - 1) * (nk - 1)
djk <- (nj - 1) * (nk - 1)
dtot <- dij + dik + djk + dres
CM <- (n - 1) * (ni - 1) * itau3
Cij <- (n - 1) * (ni - 1) * itauij
Cik <- (n - 1) * (ni - 1) * itauik
Cjk <- (n - 1) * (ni - 1) * ikhjk
Cijk <- (n - 1) * (ni - 1) * ikhin3
zz <- c(itauij, itauik, ikhjk, ikhin3, itau3)
zz2 <- c(Cij, Cik, Cjk, Cijk, CM)
zz3 <- c(dij, dik, djk, dres, dtot)
pvalue<-1-pchisq(zz2,zz3)
x2<-zz2/zz3
z <- rbind(x, zz, y, zz2, zz3,pvalue,x2)
nomr <- c("Tau-Numerator", "Tau-index",
"% of Inertia", "CM-Statistic", "df","p-value","CM-statistic/df")
dimnames(z) <- list(nomr, nom)
z<-round(z, digits = digits)
# cat("C-statistic of Marcotorchino\n")
# print(CM)
# cat("Tau3 denominator\n")
# print(devt)
############################################ computing polynomials
mi<-c(1:ni)
Apoly <- emerson.poly(mi, pi) # Emerson orthogonal polynomials
Apoly <- Apoly[, - c(1) ] #delete zero-vec but with trivial solution to reconstruct tau
poli <- diag(sqrt(pi)) %*% Apoly #poly with weight Di
mj <- c(1:nj)
Bpoly <- emerson.poly(mj, pj)
Bpoly <- Bpoly[,-c(1) ]
polj <- diag(sqrt(pj)) %*%(Bpoly) #poly with weight Dj
mk <- c(1:nk)
Cpoly <- emerson.poly(mk, pk)
Cpoly <- Cpoly[,-c(1) ]
polk <- diag(sqrt(pk)) %*%(Cpoly) #poly with weight Dk
#polk <- t(Cpoly)
#xs<-rstand3(f3) #standardized residuals
#xs<-p3
##########################################################partial term IJ
fij <- (pij - p2ij)/sqrt(p1j)
z3par<- t(poli)%*%fij%*%polj
tauij<-(sum(z3par^2)/(devt))*(ni-1)*(n-1)
pvalijtot<-1 - pchisq(tauij, (ni-1)*(nj-1))
# cat("index numerator of tau_ij reconstructed by 2 polynomials \n")
# print(tauij)
z3ij<-(z3par^2/(devt))*(ni-1)*(n-1)
#cat("partition of index numerator of tau_ij reconstructed by 2 polynomials \n")
tauijcol<-apply(z3ij,2,sum)
tauijrow<-apply(z3ij,1,sum)
#browser()
pval<-c()
for (i in 1:(ni)){
pval[i]<-1 - pchisq(tauijrow[i], nj-1)
}
pvaltauijrow<-pval
pval<-c()
for (i in 1:(nj)){
pval[i]<-1 - pchisq(tauijcol[i], ni-1)
}
pvaltauijcol<-pval
#row and column poly component
dfi<-rep((nj-1),(ni))
perci<-tauijrow/tauij*100
zi<-cbind(tauijrow,perci,dfi,pvaltauijrow)
zitot<-c(apply(zi[,1:3],2,sum),pvalijtot)
zi<-rbind(zi,zitot)
dfj<-rep((ni-1),(nj))
percj<-tauijcol/tauij*100
zj<-cbind(tauijcol,percj,dfj,pvaltauijcol)
zjtot<-c(apply(zj[,1:3],2,sum),pvalijtot)
zj<-rbind(zj,zjtot)
zij<-rbind(zi,zj)
nomi<-paste("poly-row",0:(ni-1),sep="")
nomj<-paste("poly-col",0:(nj-1),sep="")
dimnames(zij)<-list(c(nomi,"Tau-IJ",nomj,"Tau-IJ"),c("Term-IJ-poly","%inertia","df","p-value"))
zij<-round(zij,digits=digits)
#################################################partial term IK
fik <- (pik - p2ik)/sqrt(p1k)
# z3par<- t(poli[,-1])%*%fik%*%polk[,-1]
z3par<- t(poli)%*%fik%*%polk
tauik<-(sum(z3par^2)/devt)*(ni-1)*(n-1)
pvaliktot<-1 - pchisq(tauik, (ni-1)*(nk-1))
z3ik<- (z3par^2/devt)*(ni-1)*(n-1)
tauikcol<-apply(z3ik,2,sum)
tauikrow<-apply(z3ik,1,sum)
#---
pval<-c()
for (i in 1:(ni)){
pval[i]<-1 - pchisq(tauikrow[i], nk-1)
}
pvaltauikrow<-pval
pval<-c()
for (i in 1:(nk)){
pval[i]<-1 - pchisq(tauikcol[i], ni-1)
}
pvaltauikcol<-pval
#row and tube poly component
dfi<-rep((nk-1),(ni))
perci<-tauikrow/tauik*100
zi<-cbind(tauikrow,perci,dfi,pvaltauikrow)
zitot<-c(apply(zi[,1:3],2,sum),pvaliktot)
zi<-rbind(zi,zitot)
dfk<-rep((ni-1),(nk))
perck<-tauikcol/tauik*100
zk<-cbind(tauikcol,perck,dfk,pvaltauikcol)
zktot<-c(apply(zk[,1:3],2,sum),pvaliktot)
zk<-rbind(zk,zktot)
zik<-rbind(zi,zk)
nomi<-paste("poly-row",0:(ni-1),sep="")
nomk<-paste("poly-tube",0:(nk-1),sep="")
dimnames(zik)<-list(c(nomi,"Tau-IK",nomk,"Tau-IK"),c("Term-IK-poly","%inertia","df","p-value"))
zik<-round(zik,digits=digits)
################################################partial term JK
fjk <- (pjk - p2jk)/sqrt(p2jk)
z3par<- t(polj)%*%fjk%*%polk
z3jk<- 1/ni * (z3par^2/devt)*(ni-1)*(n-1)
taujk=(sum(z3par^2)/devt)*(ni-1)*(n-1)
pvaljktot<-1 - pchisq(taujk, (nj-1)*(nk-1))
taujkcol<-apply(z3jk,2,sum)
taujkrow<-apply(z3jk,1,sum)
#---
pval<-c()
for (i in 1:(nj)){
pval[i]<-1 - pchisq(taujkrow[i], nk-1)
}
pvaltaujkrow<-pval
pval<-c()
for (i in 1:(nk)){
pval[i]<-1 - pchisq(taujkcol[i], nj-1)
}
pvaltaujkcol<-pval
#row and column poly component
dfj<-rep((nk-1),(nj))
percj<-taujkrow/taujk*100
zj<-cbind(taujkrow,percj,dfj,pvaltaujkrow)
zjtot<-c(apply(zj[,1:3],2,sum),pvaljktot)
zj<-rbind(zj,zjtot)
dfk<-rep((nj-1),(nk))
perck<-taujkcol/taujk*100
zk<-cbind(taujkcol,perck,dfk,pvaltaujkcol)
zktot<-c(apply(zk[,1:3],2,sum),pvaljktot)
zk<-rbind(zk,zktot)
zjk<-rbind(zj,zk)
nomj<-paste("poly-col",0:(nj-1),sep="")
nomk<-paste("poly-tube",0:(nk-1),sep="")
dimnames(zjk)<-list(c(nomj,"Tau-JK",nomk,"Tau-JK"),c("Term-JK-poly","%inertia","df","p-value"))
zjk<-round(zjk,digits=digits)
#################################three ordered variables and interaction term
fijk=(p3 - pijk)/sqrt(p1jk)
#browser()
p2ijk<-p2ij%o%pk
p2ikj<-p1k%o%pj
p2ikj<-aperm(p2ikj,c(1,3,2))
p2jki<-p2jk%o%ui
p2jki<-aperm(p2jki,c(3,1,2))
#z3n<- sqrt((ni-1)*(n-1))* t(poli[,-1])%*%flatten(fijk)%*%Kron(polj[,-1],polk[,-1])/sqrt(devt)
z3n<-sqrt((ni-1)*(n-1))* ( t(poli)%*%flatten(fijk)%*%Kron(polj,polk))/sqrt(devt)
tauint<-(z3n^2)
dim(tauint)<-c(ni,nj,nk)
#dim(tauint)<-c(ni-1,nj-1,nk-1)
tautot<-(sum(z3n^2)/devt)*(ni-1)*(n-1)
pvalinttot<-1 - pchisq(tautot, (ni-1)*(nj-1)*(nk-1))
#browser()
tauintrow<-apply(tauint,1,sum)
dfi<-rep((nj-1)*(nk-1),(ni))
# dfi<-rep((nj-1)*(nk-1),(ni-1))
pvaltauintrow<-1 - pchisq(tauintrow, ni-1)
#----------------------------------
tauintcol<-apply(tauint,2,sum)
pvaltauintcol<-1 - pchisq(tauintcol, nj-1)
#-----------------------------------
tauinttub<-apply(tauint,3,sum)
pvaltauinttub<-1 - pchisq(tauinttub, nk-1)
#-----------------------------------------
#row, column and tube poly components of the three-way interaction term
perci<-tauintrow/tautot*100
zi<-cbind(tauintrow,perci,dfi,pvaltauintrow)
zitot<-c(apply(zi[,1:3],2,sum),pvalinttot)
zi<-rbind(zi,zitot)
#----------
dfj<-rep((ni-1)*(nk-1),(nj))
# dfj<-rep((ni-1)*(nk-1),(nj-1))
percj<-tauintcol/tautot*100
zj<-cbind(tauintcol,percj,dfj,pvaltauintcol)
zjtot<-c(apply(zj[,1:3],2,sum),pvalinttot)
zj<-rbind(zj,zjtot)
#--------
# dfk<-rep((ni-1)*(nj-1),(nk-1))
dfk<-rep((ni-1)*(nj-1),(nk))
perck<-tauinttub/tautot*100
zk<-cbind(tauinttub,perck,dfk,pvaltauinttub)
zktot<-c(apply(zk[,1:3],2,sum),pvalinttot)
zk<-rbind(zk,zktot)
zijk<-rbind(zi,zj,zk)
# nomi<-paste("poly",1:(ni-1),sep="")
# nomj<-paste("poly",1:(nj-1),sep="")
# nomk<-paste("poly",1:(nk-1),sep="")
nomi<-paste("poly-row",0:(ni-1),sep="")
nomj<-paste("poly-col",0:(nj-1),sep="")
nomk<-paste("poly-tube",0:(nk-1),sep="")
#browser()
dimnames(zijk)<-list(c(nomi, "Tau-IJK",nomj,"Tau-IJK",nomk,"Tau-IJK"),c("Term-IJK-poly","%inertia","df","p-value"))
zijk<-round(zijk,digits=digits)
#============================================================
return(list(z=z,zij=zij,zik=zik,zjk=zjk,zijk=zijk,pij=pij,pik=pik,pjk=pjk,cost=cost))
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/tau3ordered.R
|
threewayboot<-function(Xdata,nboots=100){
#------------------------------------------------------
# Perform nboots nonparametric bootstrap resamples of the observations x (rows+cols+tubs) categories indicator matrix.
# If X is a three-way table, first build the indicator matrix using makeindicator.
# The output is a list of three-way tables.
#------------------------------------------------------
set.seed(1234)
rows <- dim(Xdata)[[1]]
cols <- dim(Xdata)[[2]]
tubs <- dim(Xdata)[[3]]
Z<-makeindicator(Xdata)
N<-nrow(Z)
margtubs<-colSums(Z[,(rows+cols+1):(rows+cols+tubs)])
ZB<-Xslices<-XB<-list()
for (b in 1:nboots) {
ZB<-Z[sample(N,replace=TRUE),]
ind<-0
for (k in 1:tubs){
ZB1<-ZB[(ind+1):(ind+margtubs[k]),1:rows]
ZB2<-ZB[(ind+1):(ind+margtubs[k]),(rows+1):(rows+cols)]
Xslices[[k]] <-t(ZB1) %*% ZB2
ind<-ind+margtubs[k]
}
XB[[b]]<-array(unlist(Xslices),c(rows,cols,tubs))
}
return(XB)}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/threewayboot.R
|
tucker <-
function(x, p, q, r, test = 10^-6){
loss.old <- criter(x, 0)
loss.new <- rep(0, 4)
param <- init3(x, p, q, r)
param <- step.g3(param)
param <- newcomp3(param)
loss.new[1] <- loss1.3(param, 0)
paramp <- param
cont <- 0
while(abs(loss.old - loss.new[1]) > test) {
cont <- cont + 1
a.old <- paramp$a
b.old <- paramp$b
c.old <- paramp$cc
loss.old <- loss.new[1]
paramp <- stepi3(paramp)
paramp <- step.g3(paramp)
loss.new[3] <- loss2(paramp, b.old)
paramp <- stepi3(paramp)
paramp <- step.g3(paramp)
loss.new[4] <- loss2(paramp, c.old)
paramp <- stepi3(paramp)
paramp <- step.g3(paramp)
loss.new[2] <- loss2(paramp, a.old)
loss.new[1] <- loss1.3(paramp, a.old)
}
paramp$cont <- cont
paramp
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/tucker.R
|
tuckerORDERED<-function (x, p, q, r, test = 10^-6, xi, norder=3)
{
loss.old <- criter(x, 0)
loss.new <- rep(0, 4)
#browser()
param <- switch(norder, "1" = init3ordered1(x, p, q, r, xi), "2" =
init3ordered2(x, p, q, r, xi),"3" = init3ordered(x, p, q, r, xi))
#if (norder == "one") {
# param <- init3ordered1(x, p, q, r, xi)
#param <- step.g3ordered(param)
#param <- newcomp3ordered1(param)
# }
# if (norder == "two") {
# param <- init3ordered2(x, p, q, r, xi)
#param <- step.g3ordered(param)
#param <- newcomp3ordered(param)
# }
#if (norder =="three") {
# param <- init3ordered(x, p, q, r, xi)
#param <- step.g3ordered(param)
#param <- newcomp3ordered(param)
#}
param <- step.g3ordered(param)
param <- newcomp3ordered(param)
loss.new[1] <- loss1.3ordered(param, 0)
paramp <- param
#browser()
cont <- 0
while (abs(loss.old - loss.new[1]) > test) {
cont <- cont + 1
a.old <- paramp$a
b.old <- paramp$b
c.old <- paramp$cc
loss.old <- loss.new[1]
paramp <- stepi3ordered(paramp)
paramp <- step.g3ordered(paramp)
loss.new[3] <- loss2(paramp, b.old)
#cat(cont, "b ")
#print(loss.new, 6)
paramp <- stepi3ordered(paramp)
paramp <- step.g3ordered(paramp)
loss.new[4] <- loss2(paramp, c.old)
#cat(cont, "c ")
#print(loss.new, 6)
paramp <- stepi3ordered(paramp)
paramp <- step.g3ordered(paramp)
loss.new[2] <- loss2(paramp, a.old)
loss.new[1] <- loss1.3ordered(paramp, a.old)
#cat(cont, "a ")
#print(loss.new, 6)
}
paramp$cont<-cont
#cat("matrice Z\n")
#paramp$Z <- param$Z
#print(paramp$Z)
return(paramp)
}
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/tuckerORDERED.R
|
tunelocal<- function(Xdata, ca3type = "CA3", resp="row", norder = 3, digits = 3, boots = FALSE,
nboots = 0, boottype= "bootpsimple", resamptype = 1, PercentageFit = 0.01)
{
#-------------------------------------------------------------------------------------------------------------------------------------
# When boots = TRUE, bootstrap sampling is performed. Three kinds of bootstrap
# sampling are available. When boottype = "bootnp", non-parametric bootstrap sampling is used.
# When boottype = "bootpsimple", parametric (multinomial or Poisson) simple bootstrap
# sampling is used.
# When boottype = "bootpstrat", parametric stratified bootstrap sampling is used.
# library(multichull)
# library(CA3variants)
# source("simula-bootstrap-simpleSamp.R")
# source("simulaboot.R")
# ca3type="CA3"
# norder=2
# boots=T
# boottype="bootpsimple" #bootnp bootpsimple bootpstrat
# nboots<-2
# Xdata<-happy
# PercentageFit = 0.01, used in CHull function
#-----------------------------------------------------------------------------------------------------------------------------------------
if (is.array(Xdata)==FALSE){
#Xtable=dget("Xdata.txt")
Xdata1<-Xdata
nam<-names(Xdata1)
ndims<-dim(Xdata1)[[2]]
if (ndims<3) {stop("the input data must contain at least 3 variables\n\n")}
Xdata2=table(Xdata1[[1]],Xdata1[[2]],Xdata1[[3]])
if(is.numeric(Xdata1)==TRUE){
dimnames(Xdata2)=list(paste(nam[[1]],1:max(Xdata[[1]]),sep=""),paste(nam[[2]],1:max(Xdata[[2]]),sep=""),paste(nam[[3]],1:max(Xdata[[3]]),sep=""))
}
#paste("This is your given array\n\n")
Xdata<-Xdata2
#print(Xdata)
}
#-----
rows <- dim(Xdata)[[1]]
cols <- dim(Xdata)[[2]]
tubs <- dim(Xdata)[[3]]
pi <- apply(Xdata/sum(Xdata), 1, sum)
pj <- apply(Xdata/sum(Xdata), 2, sum)
pk <- apply(Xdata/sum(Xdata), 3, sum)
I <- rows
J <- cols
K <- tubs
XG <- Xrand<-list()
output1 <- NULL
nran <- sum(Xdata)
risCArss <- RSScrit <- list()
risCAgof <- CAgof <- list()
risCAchi2 <- list()
risCAfp <- risCAdf <- dfmodel <- CAinertia <- list()
fpmodel <- dimmodel <- list()
if (boots == T) {
if (nboots==0) {
nboots<-100
cat("Note that 'nboots' by default is 100\n You can change the parameter in input of 'tunelocal' function \n")
cat("Also, note that the number of different models that is assessed is based on the size of all models obtained from the combination of dimensions of the bootstrapped data\n
For example, for a 4 x 5 x 4 array, there are 800 different models that are assessed.\n \n")}
Xrand <- switch(boottype, "bootnp" = threewayboot(Xdata,nboots=nboots), "bootpsimple" =
simulabootsimple(Xdata, nboots=nboots, resamptype = resamptype),"bootpstrat" =simulabootstrat(Xdata, nboots=nboots, resamptype = resamptype) )
XG[[1]]<-Xdata
dimnames(XG[[1]]) <- dimnames(Xdata)
for (b in 1:nboots) {
XG[[b+1]]<-Xrand[[b]]
dimnames(XG[[b+1]]) <- dimnames(Xdata)
}
}
else{
nboots<-0
cat("Note that the number of different models that is assessed is based on the size of the original data being analysed.\n
For example, for a 4 x 5 x 4, there are 80 different models that are assessed.\n")
XG<-list(Xdata)
}
nsets <- nboots + 1
for (b in 1:nsets) {
ng <- 0
for (i in 1:rows) {
for (j in 1:cols) {
for (k in 1:tubs) {
ng <- ng + 1
res <- CA3variants(XG[[b]], dims=c(p= i, q= j, r =k),
ca3type = ca3type, resp=resp, norder = norder, sign = FALSE)
risCAchi2[[ng]] <- res$inertiatot
risCAdf[[ng]] <- ((i - 1) * (j - 1) * (k -
1) + (i - 1) * (j - 1) + (i - 1) * (k - 1) +
(j - 1) * (k - 1))
dimmodel[[ng]] <- c(i, j, k)
}
}
}
CAinertia[[b]] <- matrix(unlist(risCAchi2), ng, 1)
dfmodel[[b]] <- matrix(unlist(risCAdf), ng, 1)
}
dflabel <- matrix(unlist(as.character(dimmodel)), ng, 1)
dimmod <- dimmodel #newnew
CHIcritot <- matrix(unlist(CAinertia), ng, nsets)
dftot <- matrix(unlist(dfmodel), ng, nsets)
if (boots == TRUE) {
CHI_allrandom <- CHIcritot[, 2:nsets]
CHI_random <- as.matrix(rowMeans(CHIcritot[, 2:nsets]))
CHIplot <- cbind(dftot[, 1], CHIcritot[, 1], CHI_random)
CHIord <- CHIplot[order(CHIplot[, 1]), ]
dimnames(CHIord) <- list(paste(dflabel[order(CHIplot[,
1])]), c("df", "CHI_data", "CHI_Random"))
CHIord2 <- cbind(CHIord[, 1], CHIord[, 3])
}
CHIplot <- cbind(dftot[, 1], CHIcritot[, 1])
CHIord <- CHIplot[order(CHIplot[, 1]), ]
dimnames(CHIord) <- list(paste(dflabel[order(CHIplot[, 1])]),
c("df", "CHI_data"))
output1 <- CHull(CHIord, bound = "upper", PercentageFit = PercentageFit)
#plot(output1, type = "b")
#title(sub="Original Data -Chi2 criterion-")
if (boots == TRUE) {
output1 <- CHull(CHIord2, bound = "upper", PercentageFit = PercentageFit)
#plot(output1, type = "b")
#title(sub="Bootstrap Data -Chi2 criterion-")
}
#CHIord <- round(CHIord, digits = 3)
#---------------------export
# restunelocal<-list(XG = XG, risCAchi2 = risCAchi2, risCAdf = risCAdf, output1 = output1, ca3type = ca3type)
restunelocal<-list(XG = XG, output1 = output1, ca3type = ca3type, boots = boots)
class(restunelocal)<-"tunelocal"
invisible(return(restunelocal))
}
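# Hedged usage sketch (assumptions: the package's 'happy' three-way table and the CHull()
# function from the multichull package are available; not part of the original source).
# Without bootstrapping, every (p, q, r) combination is fitted on the observed table and
# the convex hull of the chi-squared/df pairs is returned for model selection.
res_demo <- tunelocal(happy, ca3type = "CA3", norder = 3, boots = FALSE)
print(res_demo)
plot(res_demo)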
|
/scratch/gouwar.j/cran-all/cranData/CA3variants/R/tunelocal.R
|
#' Identify conserved regulatory networks
#' @description Use Score of Conserved network to identify conserved regulatory
#' network modules based on a homologous gene database and the topology of the networks
#' @param OrthG ortholog genes database
#' @param Species1_GRN gene regulatory network of species 1
#' @param Species2_GRN gene regulatory network of species 2
#' @param Species_name1 character, indicating the species names of Species1_GRN
#' @param Species_name2 character, indicating the species names of Species2_GRN
#' @importFrom reshape2 dcast
#' @return list contains two df. First df contains details of conserved regulatory
#' network, second df contains NCS between module pairs
#' @export
#'
#' @examples load(system.file("extdata", "gene_network.rda", package = "CACIMAR"))
#' n1 <- Identify_ConservedNetworks(OrthG_Mm_Zf,mm_gene_network,zf_gene_network,'mm','zf')
Identify_ConservedNetworks <- function(OrthG,Species1_GRN,Species2_GRN,Species_name1,Species_name2){
### input check
validInput(Species_name1,'Species_name1','character')
validInput(Species_name2,'Species_name2','character')
required_cols <- c('Source', 'Target', 'SourceGroup', 'TargetGroup')
for (col1 in required_cols) {
  if (!col1 %in% colnames(Species1_GRN)) {
    stop(paste0('Species1_GRN should contain ', col1, ' column'))
  }
}
for (col1 in required_cols) {
  if (!col1 %in% colnames(Species2_GRN)) {
    stop(paste0('Species2_GRN should contain ', col1, ' column'))
  }
}
Species_name1 <- tolower(Species_name1)
Species_name2 <- tolower(Species_name2)
### species check
Spec1 <- colnames(OrthG)[2]
Spec2 <- colnames(OrthG)[4]
Spec1 <- gsub('_ID','',Spec1)
Spec2 <- gsub('_ID','',Spec2)
if (Spec1 == tolower(Species_name1) & Spec2 == tolower(Species_name2)) {
Species_name <- c(Spec1,Spec2)
RnList <- list(Species1_GRN,Species2_GRN)
}else if(Spec2 == tolower(Species_name1) & Spec1 == tolower(Species_name2)){
Species_name <- c(Spec1,Spec2)
RnList <- list(Species2_GRN,Species1_GRN)
}else{stop('please input correct Species name')}
### Extract genes in each group
RnGeneList <- list()
for (i in 1:length(RnList)) {
Species <- Species_name[i]
Rn <- RnList[[i]]
RnTFGroup <- dplyr::select(Rn,c('Source','SourceGroup'))
colnames(RnTFGroup) <- paste0(Species,c('Gene','Group'))
RnTargetGroup <- dplyr::select(Rn,c('Target','TargetGroup'))
colnames(RnTargetGroup) <- paste0(Species,c('Gene','Group'))
GeneGroup <- rbind(RnTFGroup,RnTargetGroup)
GeneGroup <- GeneGroup[!duplicated(GeneGroup),]
RnGeneList[[Species]] <- GeneGroup
}
### get orthology genes
Sp1Gene <- RnGeneList[[Species_name[1]]]
Sp2Gene <- RnGeneList[[Species_name[2]]]
Sp1Group <- sort(Sp1Gene$Group[!duplicated(Sp1Gene$Group)])
Sp2Group <- sort(Sp2Gene$Group[!duplicated(Sp2Gene$Group)])
Spec1_gene <- data.frame(rep(0,nrow(Sp1Gene)),
rep(1,nrow(Sp1Gene)))
rownames(Spec1_gene) <- Sp1Gene[,1]
Spec2_gene <- data.frame(rep(0,nrow(Sp2Gene)),
rep(1,nrow(Sp2Gene)))
rownames(Spec2_gene) <- Sp2Gene[,1]
Exp2 <- Get_OrthG(OrthG, Spec1_gene, Spec2_gene, Species_name)
Type1 <- paste0('Used_',Species_name[1],'_ID')
Type2 <- paste0('Used_',Species_name[2],'_ID')
  # use the species-specific gene columns rather than hard-coded mm/zf names
  Species1 <- Sp1Gene[match(Exp2[, Type1], Sp1Gene[[paste0(Species_name[1], 'Gene')]]), ]
  Species2 <- Sp2Gene[match(Exp2[, Type2], Sp2Gene[[paste0(Species_name[2], 'Gene')]]), ]
Exp3 <- cbind(Exp2, Species1, Species2)
Exp4 <- Exp3[!is.na(Exp3[, dim(Exp2)[2]+1]) &
!is.na(Exp3[,dim(Exp2)[2]+dim(Species1)[2]+1]), ]
### calculate orthology fraction
  # use the species-specific group columns rather than hard-coded mm/zf names
  Sp1Freq <- as.data.frame(table(Sp1Gene[[paste0(Species_name[1], 'Group')]]))
  Sp2Freq <- as.data.frame(table(Sp2Gene[[paste0(Species_name[2], 'Group')]]))
  OrthG_df <- as.data.frame(table(Exp4[[paste0(Species_name[1], 'Group')]],
                                  Exp4[[paste0(Species_name[2], 'Group')]]))
OrthG_df <- OrthG_df[OrthG_df$Freq>0,]
Sp1GeneNum <- Sp1Freq[match(OrthG_df$Var1,Sp1Freq$Var1),2]
Sp2GeneNum <- Sp2Freq[match(OrthG_df$Var2,Sp2Freq$Var1),2]
OrthG_df$Sp1GeneNum <- Sp1GeneNum
OrthG_df$Sp2GeneNum <- Sp2GeneNum
OrthGeneIdx <- match(paste0(Exp4[,9],Exp4[,11]),paste0(OrthG_df[,1],OrthG_df[,2]))
Sp1OrthGenesList <- list()
Sp2OrthGenesList <- list()
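  # For each ortholog pair, append the species-1 gene (column 8) and species-2 gene
  # (column 10) to the entry of its module pair, building ';'-separated gene lists;
  # the leading 'NULL;' produced by the first paste0() call is stripped just below.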
for (i in 1:length(OrthGeneIdx)) {
x1 <- OrthGeneIdx[i]
Sp1OrthGenesList[[x1]] <- paste0(Sp1OrthGenesList[x1],';',Exp4[i,8])
Sp2OrthGenesList[[x1]] <- paste0(Sp2OrthGenesList[x1],';',Exp4[i,10])
}
OrthG_df$Sp1OrthGenes <- gsub('NULL;','',unlist(Sp1OrthGenesList))
OrthG_df$Sp2OrthGenes <- gsub('NULL;','',unlist(Sp2OrthGenesList))
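  # Dice-style overlap: shared ortholog pairs counted for both species, divided by the
  # total number of genes in the two modules.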
OrthG_df$SharedGenesFraction <- (OrthG_df$Freq*2)/(OrthG_df$Sp1GeneNum+OrthG_df$Sp2GeneNum)
colnames(OrthG_df)[1:5] <- c('Group1','Group2','OrthGeneNum','Group1AllGenes','Group2AllGenes')
### calculate orthology edges fraction and topological orthology edges fraction
OrthGEdgeSum <- c()
AllEdgeNumSum <- c()
OrthGEdgeFraction <- c()
TopoOrthGEdgeSum <- c()
OrthGTopoEdgeFraction <- c()
for (i in 1:nrow(OrthG_df)) {
Sp1GeneRow <- unlist(strsplit(OrthG_df$Sp1OrthGenes[i],';'))
Sp2GeneRow <- unlist(strsplit(OrthG_df$Sp2OrthGenes[i],';'))
NetworkGroup1 <- RnList[[1]][RnList[[1]]$SourceGroup==OrthG_df[i,1] &
RnList[[1]]$TargetGroup==OrthG_df[i,1],]
NetworkGroup2 <- RnList[[2]][RnList[[2]]$SourceGroup==OrthG_df[i,2] &
RnList[[2]]$TargetGroup==OrthG_df[i,2],]
Sp1OrthGEdge <- NetworkGroup1[NetworkGroup1$Source%in%Sp1GeneRow &
NetworkGroup1$Target%in%Sp1GeneRow,]
    Sp2OrthGEdge <- NetworkGroup2[NetworkGroup2$Source%in%Sp2GeneRow &
                                    NetworkGroup2$Target%in%Sp2GeneRow,]
OrthEdgeGNum <- nrow(Sp1OrthGEdge)+nrow(Sp2OrthGEdge)
AllEdgeNum <- nrow(NetworkGroup1)+nrow(NetworkGroup2)
OrthGEdgeSum <- c(OrthGEdgeSum,OrthEdgeGNum)
AllEdgeNumSum <- c(AllEdgeNumSum,AllEdgeNum)
OrthGEdgeFraction <- c(OrthGEdgeFraction,(OrthEdgeGNum/AllEdgeNum))
### topological fraction
if (OrthG_df[i,3]>1) {
GeneRowmatch <- data.frame(Sp1GeneRow,Sp2GeneRow)
Sp1Regulation <- NetworkGroup1[NetworkGroup1$Source%in%Sp1GeneRow &
NetworkGroup1$Target%in%Sp1GeneRow,]
      Sp2Regulation <- NetworkGroup2[NetworkGroup2$Source%in%Sp2GeneRow &
                                       NetworkGroup2$Target%in%Sp2GeneRow,]
if (nrow(Sp1Regulation) > 0 & nrow(Sp2Regulation) > 0) {
OrthGRelationshipsNum <- 0
for (j in 1:nrow(Sp1Regulation)) {
          Sp2RelatedTF <- GeneRowmatch[GeneRowmatch$Sp1GeneRow == Sp1Regulation$Source[j],2]
          Sp2RelatedTarget <- GeneRowmatch[GeneRowmatch$Sp1GeneRow == Sp1Regulation$Target[j],2]
if (paste(Sp2RelatedTF,Sp2RelatedTarget)%in%paste(Sp2Regulation$Source,Sp2Regulation$Target)) {
OrthGRelationshipsNum <- OrthGRelationshipsNum+1
}
}
Fraction <- (OrthGRelationshipsNum*2)/(nrow(Sp1Regulation)+nrow(Sp2Regulation))
TopoOrthGEdgeSum <- c(TopoOrthGEdgeSum,OrthGRelationshipsNum*2)
OrthGTopoEdgeFraction <- c(OrthGTopoEdgeFraction,Fraction)
}else{
TopoOrthGEdgeSum <- c(TopoOrthGEdgeSum,0)
OrthGTopoEdgeFraction <- c(OrthGTopoEdgeFraction,0)
}
}else{
TopoOrthGEdgeSum <- c(TopoOrthGEdgeSum,0)
OrthGTopoEdgeFraction <- c(OrthGTopoEdgeFraction,0)
}
}
OrthG_df$OrthGEdgeNum <- OrthGEdgeSum
OrthG_df$EdgeNum <- AllEdgeNumSum
OrthG_df$OrthGEdgeFraction <- OrthGEdgeFraction
OrthG_df$TopologicalOrthGEdgeNum<-TopoOrthGEdgeSum
OrthG_df$TopologicalOrthGEdgeFraction <- OrthGTopoEdgeFraction
OrthG_df <- OrthG_df[,c(1:5,8:13,6,7)]
### calculate Score of Conserved Networks (SCN)
  Candidate_module <- OrthG_df[OrthG_df$OrthGEdgeNum > 0,]
  HNi <- mean(Candidate_module$OrthGeneNum)
  HNj <- mean(Candidate_module$OrthGeneNum)
  Ni <- mean(Candidate_module$Group1AllGenes)
  Nj <- mean(Candidate_module$Group2AllGenes)
MIU <- ((Ni+Nj-1)/(HNi+HNj-1))
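  # MIU rescales the edge-based terms by the ratio of average module size to the average
  # number of orthologous genes in candidate module pairs; NCS then combines the
  # shared-gene fraction (col 6), the orthologous-edge fraction (col 9, weight MIU) and
  # the topologically conserved edge fraction (col 11, weight 2*MIU).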
OrthG_df$NCS <- OrthG_df[,6] + (MIU*OrthG_df[,9]) + (MIU*2*OrthG_df[,11])
OrthG_df[,1] <- paste0(Species_name1,OrthG_df[,1])
OrthG_df[,2] <- paste0(Species_name2,OrthG_df[,2])
NCS_df <- reshape2::dcast(OrthG_df[,c(1,2,14)],Group1~Group2,fill = 0)
rownames(NCS_df) <- NCS_df[,1]
NCS_df <- NCS_df[,-1]
OrthG_list <- list(OrthG_df,NCS_df)
return(OrthG_list)
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/CSNetworks.R
|
#' Identify conserved cell types based on power of genes and orthologs database
#'
#' @param OrthG ortholog genes database
#' @param Species1_Marker_table data.frame of species 1, should contain three columns:
#' 'gene', 'cluster' and 'power'
#' @param Species2_Marker_table data.frame of species 2, should contain three columns:
#' 'gene', 'cluster' and 'power'
#' @param Species_name1 character, indicating the species name of Species1_Marker_table
#' @param Species_name2 character, indicating the species name of Species2_Marker_table
#'
#' @return list containing two elements: the first is the details of conserved cell types,
#' the second is the matrix of cell-type conservation scores
#' @export
#'
#' @examples load(system.file("extdata", "CellTypeAllMarkers.rda", package = "CACIMAR"))
#' expression <- Identify_ConservedCellTypes(OrthG_Mm_Zf,mm_Marker[1:30,],zf_Marker[1:30,],'mm','zf')
Identify_ConservedCellTypes <- function(OrthG,Species1_Marker_table,Species2_Marker_table,
Species_name1,Species_name2){
### inputs validation
if (!'power' %in% colnames(Species1_Marker_table)) {
stop('Species1_Marker_table should contain power column!')
}
if (!'power' %in% colnames(Species2_Marker_table)) {
stop('Species2_Marker_table should contain power column!')
}
if (!'cluster' %in% colnames(Species1_Marker_table)) {
stop('Species1_Marker_table should contain cluster column!')
}
if (!'cluster' %in% colnames(Species2_Marker_table)) {
stop('Species2_Marker_table should contain cluster column!')
}
if (!'gene' %in% colnames(Species1_Marker_table)) {
stop('Species1_Marker_table should contain gene column!')
}
if (!'gene' %in% colnames(Species2_Marker_table)) {
stop('Species2_Marker_table should contain gene column!')
}
validInput(OrthG,'OrthG','df')
validInput(Species_name1,'Species_name1','character')
validInput(Species_name2,'Species_name2','character')
Species_name1 <- tolower(Species_name1)
Species_name2 <- tolower(Species_name2)
Spec1 <- colnames(OrthG)[2]
Spec2 <- colnames(OrthG)[4]
Spec1 <- gsub('_ID','',Spec1)
Spec2 <- gsub('_ID','',Spec2)
if (Spec1 == Species_name1 & Spec2 == Species_name2) {
Species_name <- c(Spec1,Spec2)
Species1_Marker <- Species1_Marker_table
Species2_Marker <- Species2_Marker_table
}else if(Spec2 == Species_name1 & Spec1 == Species_name2){
Species_name <- c(Spec2,Spec1)
Species2_Marker <- Species1_Marker_table
Species1_Marker <- Species2_Marker_table
}else{stop('please input correct Species name')}
Species1_Marker_table$power <- as.numeric(Species1_Marker_table$power)
Species2_Marker_table$power <- as.numeric(Species2_Marker_table$power)
### fraction in same species
Species1_Marker_table$cluster <- paste0(Species_name1, Species1_Marker_table[, 'cluster'])
Species2_Marker_table$cluster <- paste0(Species_name2, Species2_Marker_table[, 'cluster'])
ShMarker1 <- Cal_SharedMarkers(Species1_Marker_table, Species_name1)
ShMarker2 <- Cal_SharedMarkers(Species2_Marker_table, Species_name2)
Frac1 <- rbind(ShMarker1[[3]], ShMarker2[[3]])
ShMarker3 <- rbind(ShMarker1[[1]],ShMarker2[[1]])
### fraction between different species
Sp1Gene <- Species1_Marker_table$gene
ExpInd01 <- t(apply(as.matrix(Sp1Gene), 1, function(x1){
if(length(grep(x1, OrthG[,2]))==1){ x3 <- c(x1, grep(x1, OrthG[,2])) }
else if(length(grep(x1, OrthG[,4]))==1){ x3 <- c(x1, grep(x1, OrthG[,4])) }else{ x3 <- c(x1, NA) }
}) )
Sp1Ind2 <- ExpInd01[!is.na(ExpInd01[, 2]), ]
colnames(Sp1Ind2) <- c('ID', 'RowNum')
Sp2Gene <- Species2_Marker_table$gene
ExpInd02 <- t(apply(as.matrix(Sp2Gene), 1, function(x1){
if(length(grep(x1, OrthG[,2]))==1){ x3 <- c(x1, grep(x1, OrthG[,2])) }
else if(length(grep(x1, OrthG[,4]))==1){ x3 <- c(x1, grep(x1, OrthG[,4])) }else{ x3 <- c(x1, NA) }
}) )
Sp2Ind2 <- ExpInd02[!is.na(ExpInd02[, 2]), ]
colnames(Sp2Ind2) <- c('ID', 'RowNum')
ShMarker2 <- Cal_SharedMarkers_Species(Species1_Marker_table, Species2_Marker_table,
Sp1Ind2, Sp2Ind2,
c(Species_name1,Species_name2))
ShMarker3 <- rbind(ShMarker3, ShMarker2[[1]])
Frac1 <- rbind(Frac1, ShMarker2[[3]])
AllCluster1 <- unique(c(Species1_Marker_table$cluster,Species2_Marker_table$cluster))
AllCluster12 <- sort(AllCluster1); Frac3 <- c()
for(i in 1:length(AllCluster1)){ Frac21 <- c()
for(j in 1:length(AllCluster1)){
Frac11 <- Frac1[Frac1[,1]==AllCluster1[i] & Frac1[,2]==AllCluster1[j], 3]
Frac12 <- Frac1[Frac1[,1]==AllCluster1[j] & Frac1[,2]==AllCluster1[i], 3]
if(length(Frac11)==1){ Frac2 <- as.numeric(Frac11)
}else if(length(Frac12)==1){ Frac2 <- as.numeric(Frac12)
      }else{ Frac2 <- NA
        print(paste('No',AllCluster1[i],AllCluster1[j])) }
Frac21 <- c(Frac21, Frac2)
}
Frac3 <- rbind(Frac3, Frac21)
}
rownames(Frac3) <- AllCluster1; colnames(Frac3) <- AllCluster1
ShMarker4 <- list(); ShMarker4[[1]] <- ShMarker3
Frac3[Frac3==1] <- NA
ShMarker4[[2]] <- Frac3
ShMarker4[[3]] <- Frac1
return(ShMarker4)
}
Cal_SharedMarkers <- function(mmExp1, Spec1='mm'){
mmCluster1 <- sort(unique(mmExp1$cluster))
ShMarker1 <- c(); Fraction3 <- c(); Fraction4 <- c()
for(i in 1:length(mmCluster1)){
zfExp2 <- mmExp1[mmExp1$cluster==mmCluster1[i], ]
zfPower1 <- sum(as.numeric(zfExp2$power))
Fraction2 <- c()
for(j in 1:length(mmCluster1)){
mmExp2 <- mmExp1[mmExp1$cluster==mmCluster1[j], ]
mmPower1 <- sum(mmExp2$power)
Exp3 <- intersect(zfExp2[, 'gene'], mmExp2[, 'gene'])
zfShMarker1 <- 'none'; zfShPower1 <-0;
mmShMarker1 <- 'none'; mmShPower1 <-0;
zfPower2 <- 0; mmPower2 <- 0
if(length(Exp3)>0){
zfExp3 <- zfExp2[match(Exp3, zfExp2[,'gene']), ]
mmExp3 <- mmExp2[match(Exp3, mmExp2[,'gene']), ]
zfPower2 <- sum(zfExp3$power); mmPower2 <- sum(mmExp3$power)
}
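    # Shared-marker fraction within one species: summed power of the markers shared by the
    # two clusters divided by the summed power of all markers in both clusters.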
Fraction1 <- (zfPower2 + mmPower2)/(zfPower1 + mmPower1)
Fraction2 <- c(Fraction2, Fraction1); Fraction4 <- rbind(Fraction4, c(mmCluster1[i], mmCluster1[j], Fraction1))
ShMarker1 <- rbind(ShMarker1, c(mmCluster1[i], mmCluster1[j], dim(zfExp2)[1], dim(mmExp2)[1], dim(zfExp2)[1], dim(mmExp2)[1], length(Exp3), Fraction1, zfShMarker1, zfShPower1, mmShMarker1, mmShPower1))
}
Fraction3 <- rbind(Fraction3, Fraction2)
}
rownames(Fraction3) <- paste0('C',mmCluster1); colnames(Fraction3) <- paste0('C',mmCluster1);
colnames(Fraction4) <- c('Cluster1', 'Cluster2', 'Fraction')
colnames(ShMarker1) <- paste0(Spec1, c('Cluster', 'Cluster', 'MarkersNum', 'MarkersNum', 'OrthMarkersNum', 'OrthMarkersNum', 'ShMarkersNum', 'SharedFraction', 'SharedMarkers', 'ShMarkersPower', 'SharedMarkers', 'ShMarkersPower'))
ShMarker2 <- list(); ShMarker2[[1]] <- ShMarker1; ShMarker2[[2]] <- Fraction3; ShMarker2[[3]] <- Fraction4;
return(ShMarker2)
}
Cal_SharedMarkers_Species <- function(mmExp1, zfExp1, mmExpInd1, zfExpInd1, Spec1=c('mm','zf')){
zfCluster1 <- sort(unique(zfExp1$cluster))
mmCluster1 <- sort(unique(mmExp1$cluster))
ShMarker1 <- c(); Fraction3 <- c(); Fraction4 <- c()
for(i in 1:length(zfCluster1)){
zfExp2 <- zfExp1[zfExp1$cluster==zfCluster1[i], ]
zfPower1 <- sum(as.numeric(zfExp2$power))
zfExp22 <- zfExpInd1[match(zfExp2[,'gene'], zfExpInd1[,'ID']), ]
Fraction2 <- c()
for(j in 1:length(mmCluster1)){
mmExp2 <- mmExp1[mmExp1$cluster==mmCluster1[j], ]
mmPower1 <- sum(mmExp2$power)
mmExp22 <- mmExpInd1[match(mmExp2[, 'gene'], mmExpInd1[,'ID']), ]
if (is.character(mmExp22)) {
mmExp22 <- matrix(mmExp22,ncol=2)
}
if (is.character(zfExp22)) {
zfExp22 <- matrix(zfExp22,ncol=2)
}
Exp3 <- intersect(zfExp22[, 2], mmExp22[, 2])
Exp32 <- Exp3[!is.na(Exp3)]
zfShMarker1 <- 'none'; zfShPower1 <-0;
mmShMarker1 <- 'none'; mmShPower1 <-0;
zfPower2 <- 0; mmPower2 <- 0
if(length(Exp32)>0){
Exp33 <- apply(as.matrix(Exp32), 1, function(x1){ x2 <- c()
for(k in 1:2){
if(k==1){ Exp23 <- zfExp22; Exp2 <- zfExp2
}else{ Exp23 <- mmExp22; Exp2 <- mmExp2 }
Exp41 <- Exp23[is.element(Exp23[,2], x1), ]
if(is.null(dim(Exp41))){ Exp42 <- Exp2[is.element(Exp2[,'gene'], Exp41[1]), ]
}else{ Exp42 <- Exp2[is.element(Exp2[,'gene'], Exp41[,1]), ] }
ShMarker1 <- paste(Exp42[, 'gene'], collapse=',')
ShPower1 <- paste(Exp42[, 'power'], collapse=',')
Power2 <- sum(Exp42[, 'power'])
x2 <- c(x2, ShMarker1, ShPower1, Power2)
}
return(x2)
})
zfShMarker1 <- paste(Exp33[1, ], collapse=';')
zfShPower1 <- paste(Exp33[2, ], collapse=';')
mmShMarker1 <- paste(Exp33[4, ], collapse=';')
mmShPower1 <- paste(Exp33[5, ], collapse=';')
zfPower2 <- sum(as.numeric(Exp33[3, ])); mmPower2 <- sum(as.numeric(Exp33[6, ]))
}
Fraction1 <- (zfPower2 + mmPower2)/(zfPower1 + mmPower1)
Fraction2 <- c(Fraction2, Fraction1); Fraction4 <- rbind(Fraction4, c(zfCluster1[i], mmCluster1[j], Fraction1))
ShMarker1 <- rbind(ShMarker1, c(zfCluster1[i], mmCluster1[j], dim(zfExp22)[1], dim(mmExp22)[1], dim(zfExp22)[1], dim(mmExp22)[1], length(Exp32), Fraction1, zfShMarker1, zfShPower1, mmShMarker1, mmShPower1))
}
Fraction3 <- rbind(Fraction3, Fraction2)
}
rownames(Fraction3) <- paste0('C',zfCluster1); colnames(Fraction3) <- paste0('C',mmCluster1);
colnames(Fraction4) <- c('Cluster1', 'Cluster2', 'Fraction')
colnames(ShMarker1) <- c(paste0(Spec1[1],'Cluster'), paste0(Spec1[2],'Cluster'), paste0(Spec1[1],'MarkersNum'), paste0(Spec1[2],'MarkersNum'), paste0(Spec1[1],'OrthMarkersNum'), paste0(Spec1[2],'OrthMarkersNum'), 'ShMarkersNum', 'SharedFraction', paste0(Spec1[1],'SharedMarkers'), paste0(Spec1[1],'ShMarkersPower'), paste0(Spec1[2],'mmSharedMarkers'), paste0(Spec1[2],'ShMarkersPower'))
ShMarker2 <- list(); ShMarker2[[1]] <- ShMarker1; ShMarker2[[2]] <- Fraction3; ShMarker2[[3]] <- Fraction4;
return(ShMarker2)
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/CellTypeFraction.R
|
#' Identify cell type of each cluster
#' @description This function has three steps to identify cell type of each cluster.
#' (1) Calculate the power of each known marker based on AUC
#' (area under the receiver operating characteristic curve of gene expression)
#' which indicates the capability of marker i from cell type m to distinguish
#' cluster j and the other clusters. (2) Calculate the united power (UP)
#' for cell type m across each cluster j. (3) For each cluster j we determine
#' the cell type according to UP. Generally, the cluster belongs to the cell
#' type which has the highest united power or a united power above a given
#' threshold (for example > 0.9).
#' @param seurat_object seurat object
#' @param Marker_gene_table data.frame, indicating marker gene and its
#' corresponding cell type. Marker_gene_table should contain two columns: 'CellType'
#' represents the corresponding cell type of each marker and 'Marker' represents the marker genes
#'
#' @return Cell type with the highest power in each cluster
#' @export
#'
#' @examples KnownMarker=data.frame(c('AIF1','BID','CCL5','CD79A','CD79B','MS4A6A'),c('a','a','a','b','b','b'))
#' data("pbmc_small")
#' colnames(KnownMarker)=c('Marker','CellType')
#' CT <- Identify_CellType(pbmc_small,KnownMarker)
Identify_CellType <- function(seurat_object, Marker_gene_table) {
validInput(seurat_object,'seurat_object','seuratobject')
if (!'CellType' %in% colnames(Marker_gene_table)) {
stop('Marker_gene_table should contain CellType column ')
}
if (!'Marker' %in% colnames(Marker_gene_table)) {
stop('Marker_gene_table should contain Marker column ')
}
  if (length(levels(as.factor(as.character([email protected]))))<2) {
stop('number of identities in seurat object should be more than 1')
}
MarkerRoc1 <- Identify_CellTypes1(seurat_object, Marker_gene_table)
ClusterCellT1 <- Identify_CellTypes2(MarkerRoc1)
return(ClusterCellT1)
}
Identify_CellTypes1 <- function(object, Marker1) {
## calculate the power of each marker on distinguishing cell types
CelltypeIdx <- grep('CellType',colnames(Marker1))
NumCellType <- apply(Marker1, 1, function(X1) {
length(strsplit(as.character(X1[CelltypeIdx]),",")[[1]])
})
Marker2 <- cbind(Marker1, NumCellType)
MarkerRoc1 <- Cal_MarkersRoc(object, Marker1$Marker)
MarkerRoc2 <- cbind(Marker2[order(Marker2$Marker), ], MarkerRoc1[order(rownames(MarkerRoc1)), ])
MarkerRoc2 <- MarkerRoc2[order(MarkerRoc2$CellType), ]
return(MarkerRoc2)
}
Identify_CellTypes2 <- function(MarkerRoc1) {
## Get the list of cell types
CellType1 <- strsplit(levels(as.factor(MarkerRoc1$CellType)), ",")
CellType2 <- c()
for (i in 1:length(CellType1)) {
CellType2 <- c(CellType2, CellType1[[i]])
}
uCellType2 <- unique(CellType2)
## Revise the power according to the number of cell types
  col_indx = grep('Cluster',colnames(MarkerRoc1))[1]
  # start at the NumCellType column (col_indx - 1) so that x2[1] is the number of
  # assigned cell types and positions 4, 7, ... are the per-cluster power values
  MarkerRoc2 <- cbind(MarkerRoc1[, 1:(col_indx-2)], t(apply(MarkerRoc1, 1, function(x1) {
    x2 <- as.numeric(x1[(col_indx-1):length(x1)])
    x2[seq(4, length(x2), 3)] <- x2[seq(4, length(x2), 3)] / x2[1]
    return(x2)
  })))
  colnames(MarkerRoc2) <- colnames(MarkerRoc1)
#write.table(MarkerRoc2, 'MarkerRoc2.txt')
## Calculate the joint power for each cluster
MarkerRoc5 <- c()
cell_type_out <- c()
for (i in 1:length(uCellType2)) {
uCellType3 <- uCellType2[i]
Ind1 <- c()
for (j in 1:length(uCellType3)) {
Ind11 <- grep(paste0("^", uCellType3[j], "$"), MarkerRoc2$CellType)
Ind12 <- grep(paste0(",", uCellType3[j], "$"), MarkerRoc2$CellType)
Ind13 <- grep(paste0("^", uCellType3[j], ","), MarkerRoc2$CellType)
Ind1 <- c(Ind1, Ind11, Ind12, Ind13)
}
MarkerRoc3 <- MarkerRoc2[unique(Ind1), ]
startidx <- grep('NumCellType',colnames(MarkerRoc3))
### add the power of gene in the same cell types, if gene difference <0 ,
### set power of that gene negative
if (nrow(MarkerRoc3)>0) {
MarkerRoc4 <- Cal_JointPower2(MarkerRoc3[, (startidx+1):ncol(MarkerRoc3)])
MarkerRoc4 <- t(as.matrix(MarkerRoc4))
colnames(MarkerRoc4) <- gsub("_power", "", colnames(MarkerRoc4))
MarkerRoc5 <- rbind(MarkerRoc5, MarkerRoc4)
cell_type_out <- c(cell_type_out,uCellType3)
}
}
rownames(MarkerRoc5) <- cell_type_out
  exclusive_cell_type <- setdiff(uCellType2, cell_type_out)
  if (length(exclusive_cell_type) > 0) {
    warning(paste0('No genes expressed in: ', paste(exclusive_cell_type, collapse = ', ')))
  }
## Sort assigned cell types according to the joint power for each cluster
MarkerRoc8 <- c()
for (i in 1:dim(MarkerRoc5)[2]) {
print(colnames(MarkerRoc5)[i])
MarkerRoc6 <- t(as.matrix(MarkerRoc5[, i]))
MarkerRoc7 <- Sort_MarkersPower(MarkerRoc6, 0)
MarkerRoc8 <- rbind(MarkerRoc8, MarkerRoc7)
}
rownames(MarkerRoc8) <- colnames(MarkerRoc5)
colnames(MarkerRoc8) <- c("Cell.types", "Corresponding.powers", 'Predicted.cell.type')
return(MarkerRoc8)
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/Identify_CellType.R
|
#' Identify markers of each cluster
#' @description This function first identifies marker genes in each cluster
#' with ROC power > PowerCutoff. Then, based on the marker genes identified above,
#' it calculates the difference and power of each marker gene across
#' clusters, and marker genes with difference > DifferenceCutoff are retained.
#' Next, each gene is assigned as a marker of the cluster in which it has the
#' largest power. Finally, a Fisher (or chi-square) test is performed for each marker,
#' and only markers with p.value < PvalueCutoff are retained for their cluster
#'
#' @param Seurat_object Seurat object, should contain cluster information
#' @param PowerCutoff numeric, indicating the cutoff of gene power to refine marker genes
#' @param DifferenceCutoff numeric, indicating the cutoff of difference in marker genes between
#' clusters to refine marker genes
#' @param PvalueCutoff numeric, indicating the p.value cutoff of the Fisher/chi-square test used to refine marker genes
#' @importFrom Seurat FindAllMarkers
#' @importFrom Seurat GetAssayData
#' @importFrom stats fisher.test
#' @importFrom stats chisq.test
#'
#' @return Data frame of marker genes for each cluster
#' @export
#'
#' @examples data("pbmc_small")
#' all.markers <- Identify_Markers(pbmc_small)
Identify_Markers<-function(Seurat_object, PowerCutoff=0.4,
DifferenceCutoff=0,PvalueCutoff=0.05 ){
validInput(Seurat_object,'seurat_object','seuratobject')
validInput(PowerCutoff,'PowerCutoff','numeric')
validInput(DifferenceCutoff,'DifferenceCutoff','numeric')
validInput(PvalueCutoff,'PvalueCutoff','numeric')
MarkerRoc<-Identify_Markers1(Seurat_object,PowerCutoff,DifferenceCutoff)
MarkerRoc<-as.data.frame(MarkerRoc)
Marker<-Identify_Markers2(Seurat_object,MarkerRoc,PowerThr1=DifferenceCutoff)
final_Markers<-Refine_Markers(Seurat_object,Marker,p.value = PvalueCutoff)
final_Markers <- final_Markers[,c(1:4,ncol(final_Markers),5:(ncol(final_Markers)-1))]
colnames(final_Markers)[2] <- 'Allpower'
return(final_Markers)
}
#' Format marker genes for plotting
#' @description Order the gene expression in each cluster to make the heatmap
#' look better
#' @param Marker_genes data.frame, generated by \code{\link{Identify_Markers}}
#' @export
#' @return Data frame of markers ordered by cluster, with a gene column added
#'
#' @examples data("pbmc_small")
#' all.markers <- Identify_Markers(pbmc_small)
#' all.markers2 <- Format_Markers_Frac(all.markers)
Format_Markers_Frac<-function(Marker_genes){
RNA1 <- Marker_genes
Cluster1 <- t(apply(RNA1, 1, function(x1){
x21 <- strsplit(x1[1],',')[[1]]; x22 <- strsplit(x1[2],',')[[1]]
x31 <- paste(x21[1:(length(x21)-1)], collapse=',')
x32 <- x22[1]
return(c(x21[1], x31, x32))
}))
colnames(Cluster1) <- c('cluster','Allcluster','power')
RNA2 <- cbind(Cluster1, RNA1[,5:ncol(RNA1)])
RNA3 <- RNA2[order(RNA2[, 'power'], decreasing=T), ]
RNA3 <- RNA3[order(RNA3[, 'Allcluster']), ]
RNA3 <- RNA3[order(RNA3[, 'cluster']), ]
RNA3$gene <- rownames(RNA3)
  RNA3 <- RNA3[, c(ncol(RNA3), 1:(ncol(RNA3)-1))]
return(RNA3)
}
Identify_Markers1<-function(Seurat_object, PowerThr1=1/3,diffthr1){
Marker0 <- Seurat::FindAllMarkers(object = Seurat_object, test.use ='roc', return.thresh = PowerThr1)
Marker1 <- as.data.frame(Marker0)
Marker2 <- Marker1[Marker1[,'avg_diff'] > diffthr1 & Marker1[,'power']>PowerThr1,]
uMarker2 <- unique(Marker2[,'gene'])
MarkerRoc1 <- Cal_MarkersRoc(Seurat_object, uMarker2)
MarkerRoc2 <- Select_MarkersPower2(MarkerRoc1)
return(MarkerRoc2)
}
Select_MarkersPower <- function(MarkerRoc01){
MarkerRoc02 <- t(Revise_MarkersPower(MarkerRoc01))
colnames(MarkerRoc02) <- gsub('_power','',colnames(MarkerRoc02))
Power02 <- c()
for(i in 1:dim(MarkerRoc02)[1]){
MarkerRoc03 <- t(as.matrix(MarkerRoc02[i,seq(3,dim(MarkerRoc02)[2],3)]))
Power01 <- Sort_MarkersPower(MarkerRoc03,0)
Power02 <- rbind(Power02,Power01)
}
rownames(Power02) <- rownames(MarkerRoc02)
colnames(Power02) <- c('Cluster','Power','Diff')
return(Power02)
}
Select_MarkersPower2 <- function(MarkerRoc01){
MarkerRoc02 <- t(Revise_MarkersPower(MarkerRoc01))
colnames(MarkerRoc02) <- gsub('_power','',colnames(MarkerRoc02))
Power02 <- c()
for(i in 1:dim(MarkerRoc02)[1]){
MarkerRoc03 <- t(as.matrix(MarkerRoc02[i,seq(3,dim(MarkerRoc02)[2],3)]))
Power01 <- Sort_MarkersPower2(MarkerRoc03,0)
Power02 <- rbind(Power02,Power01)
}
rownames(Power02) <- rownames(MarkerRoc02)
colnames(Power02) <- c('Cluster','Power','Diff')
return(Power02)
}
Revise_MarkersPower <- function(MarkerRoc01){
MarkerRoc02 <- apply(MarkerRoc01,1,function(x01){
for(i0 in seq(2,length(x01),3)){
if(x01[i0]<0){ x01[i0+1] <- -x01[i0+1] }
}
return(x01)
} )
return(MarkerRoc02)
}
Sort_MarkersPower2 <- function(MarkerRoc01,Thr01=0){
MarkerRoc02 <- sort.int(as.numeric(MarkerRoc01),index.return=T,decreasing=T)
Name01 <- colnames(MarkerRoc01)
if(length(MarkerRoc01)==0|max(MarkerRoc02$x)<=Thr01){
MarkerRoc03 <- c('', '', '');
}else if(length(MarkerRoc02$x)==1 & length(MarkerRoc02$x[MarkerRoc02$x>Thr01])==1 |
length(MarkerRoc02$x)!=1 & length(MarkerRoc02$x[MarkerRoc02$x>Thr01])==1 & MarkerRoc02$x[2]<0){
MarkerRoc03 <- c(paste(Name01[MarkerRoc02$ix[1]],paste0('Ctr',Thr01),sep=','), paste(MarkerRoc02$x[1],Thr01,sep=','), MarkerRoc02$x[1]);
}else{
if(length(MarkerRoc02$x)!=1 & length(MarkerRoc02$x[MarkerRoc02$x>Thr01])==1){
MarkerRoc02x <- MarkerRoc02$x[1:2]
MarkerRoc02ix <- MarkerRoc02$ix[1:2]
}else{
MarkerRoc02x <- c(MarkerRoc02$x[MarkerRoc02$x>Thr01],Thr01)
MarkerRoc02ix <- c(MarkerRoc02$ix[MarkerRoc02$x>Thr01],length(MarkerRoc02$ix)+1)
Name01 <- c(Name01,paste0('Ctr',Thr01))
}
Power01 <- Cal_RelativePower(MarkerRoc02x)
maxPower01 <- which.max(Power01)
if(maxPower01==length(MarkerRoc02x)){ maxPower01=length(Power01)-1; }
MarkerRoc03 <- c(paste(Name01[MarkerRoc02ix[1:(maxPower01+1)]],collapse=','), paste(MarkerRoc02x[1:(maxPower01+1)],collapse=','), max(Power01));
}
return(MarkerRoc03)
}
Identify_Markers2 <- function(pbmc, Marker, PowerThr1=1/3){
  TotalCluster <- length(unique([email protected]))
  MarkerRoc1 <- Marker
  NumCluster <- apply(MarkerRoc1,1,function(X1){length(strsplit(as.character(X1[1]),',')[[1]])})
  MarkerRoc2 <- cbind(MarkerRoc1,NumCluster)
  MarkerRoc3 <- MarkerRoc2[MarkerRoc2$Diff>PowerThr1 & MarkerRoc2$NumCluster>1 & MarkerRoc2$NumCluster<=(ceiling(TotalCluster/4)+1),]
Cluster <- apply(MarkerRoc3, 1, function(X1){strsplit(X1[1],',')[[1]][1]})
colnames(MarkerRoc3)[1]='AllCluster'
MarkerRoc3$Cluster <- Cluster
MarkerRoc3$gene <- rownames(MarkerRoc3)
MarkerRoc3 <- MarkerRoc3[,c(1,2,4,5)]
return(MarkerRoc3)
}
Refine_Markers<-function(Seurat_object,Marker,p.value = 0.05){
MarkerRoc3 <- Identify_Markers3(Seurat_object, Marker)
  MarkerRoc4 <- MarkerRoc3[MarkerRoc3[,'Pvalue'] < p.value,]
print(c(nrow(MarkerRoc3), nrow(MarkerRoc4)))
return(MarkerRoc4)
}
Identify_Markers3 <- function(pbmc, MarkerRoc2){
pbmc@assays$RNA@data <- GetAssayData(pbmc)[rownames(MarkerRoc2), ]
Frac1 <- Get_scRNA_AggExp(pbmc)
colnames(Frac1) <- gsub('_0', '', colnames(Frac1))
colnames(Frac1) <- gsub('FracC', 'Cluster', colnames(Frac1))
MarkerRoc3 <- cbind(MarkerRoc2, Frac1)
  AllCluster1 <- as.matrix(table([email protected]))
rownames(AllCluster1) <- paste0('Cluster', rownames(AllCluster1))
MarkerRoc31 <- apply(MarkerRoc3, 1, function(x1){
x11 <- strsplit(x1[1], ',')[[1]]
x12 <- x11[1:(length(x11)-1)]; x13 <- setdiff(rownames(AllCluster1), x12)
TNum11 <- as.numeric(AllCluster1[match(x12, rownames(AllCluster1)),1])
TNum12 <- as.numeric(AllCluster1[match(x13, rownames(AllCluster1)),1])
Frac11 <- as.numeric(x1[match(x12, names(x1))]); mFrac11 <- which.min(Frac11)
Frac12 <- as.numeric(x1[match(x13, names(x1))]); mFrac12 <- which.max(Frac12)
TNum21 <- TNum11[mFrac11]; TNum22 <- TNum12[mFrac12]
Frac21 <- Frac11[mFrac11]; Frac22 <- Frac12[mFrac12]
TNum31 <- round(TNum21*Frac21); TNum32 <- round(TNum22*Frac22)
Num4 <- matrix(c(TNum31, TNum21-TNum31, TNum32, TNum22-TNum32), nrow=2)
if (FALSE %in% (c(TNum31, TNum21-TNum31, TNum32, TNum22-TNum32)>5)) {
Test1 <- fisher.test(Num4)
}else{Test1 <- chisq.test(Num4)}
return(Test1$p.value)
})
MarkerRoc4 <- cbind(MarkerRoc3, MarkerRoc31)
colnames(MarkerRoc4)[ncol(MarkerRoc4)] <- 'Pvalue'
return(MarkerRoc4)
}
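# Get_scRNA_AggExp aggregates the expression matrix per cluster (idents): with the default
# ExpType1 = 'fraction' it returns, for every gene, the fraction of cells in each cluster
# with non-zero expression (columns named FracC<cluster>_<group>).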
Get_scRNA_AggExp <- function(pbmc){
Cluster1 <- 'idents'
ExpType1 <- 'fraction'
Group1 <- 'all'
  [email protected][,'all'] <- 0
  [email protected]$idents <- [email protected]
  Meta01 <- [email protected]
Cluster2 <- as.character(sort(unique(Meta01[, Cluster1[1]])))
Group2 <- sort(unique(Meta01[, Group1[1]]))
Name01 <- c(); flag01 <- 0;
for(i0 in 1:length(Cluster2)){
for(j0 in 1:length(Group2)){
Cell01 <- Meta01[Meta01[, Cluster1[1]]==Cluster2[i0] & Meta01[, Group1[1]]==Group2[j0], ]
Object02 <- as.matrix(GetAssayData(pbmc)[, rownames(Cell01)]); print(c(Cluster2[i0], as.character(Group2[j0]), ncol(Object02)))
if(ncol(Object02)==0){
}else{
Frac1 <- t(apply(Object02, 1, function(x1){
x12 <- length(x1[x1!=0])
x2 <- x12/length(x1)
return(c(x12, x2))
}))
if(is.element('expression', ExpType1) & is.element('fraction', ExpType1)){ mObject03 <- cbind(rowMeans(Object02), Frac1[,2])
Name01 <- c(Name01, paste0(c('Exp','Frac'),paste0('C',Cluster2[i0],'_',Group2[j0])))
}else if(is.element('expression', ExpType1)){ mObject03 <- rowMeans(Object02)
Name01 <- c(Name01, paste0('ExpC',Cluster2[i0],'_',Group2[j0]))
}else if(is.element('fraction', ExpType1)){ mObject03 <- Frac1[,2]
Name01 <- c(Name01, paste0('FracC',Cluster2[i0],'_',Group2[j0]))
}else if(is.element('number', ExpType1)){ mObject03 <- Frac1[,1]
Name01 <- c(Name01, paste0('NumC',Cluster2[i0],'_',Group2[j0]))
}else{ stop(paste0('Please choose ExpType1 for expression or fraction')) }
if(flag01==0){ mObject04 <- mObject03; flag01 <- 1;
}else{ mObject04 <- cbind(mObject04, mObject03) }
} } }
mObject04 <- as.matrix(mObject04); colnames(mObject04) <- Name01
return(mObject04)
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/Identify_Markers.R
|
Cal_MarkersRoc <- function(pbmc01, marker01) {
  cluster01 <- levels([email protected])
for (i in 1:length(cluster01)) {
print(cluster01[i])
markerRoc01 <- Markertest(pbmc01, cells.1 = Seurat::WhichCells(pbmc01, idents = cluster01[i]),
cells.2 = Seurat::WhichCells(pbmc01, idents = cluster01[i], invert = TRUE), genes.use = marker01)
if (i == 1) {
markerRoc02 <- markerRoc01[order(rownames(markerRoc01)), ]
} else {
markerRoc02 <- cbind(markerRoc02, markerRoc01[order(rownames(markerRoc01)), ])
}
colnames(markerRoc02)[(ncol(markerRoc02)-2):ncol(markerRoc02)] <- paste0(paste(paste0("Cluster", cluster01[i]), collapse = ""),
"_", colnames(markerRoc02)[(ncol(markerRoc02)-2):ncol(markerRoc02)])
}
return(markerRoc02)
}
Cal_JointPower2 <- function(MarkerRoc01) {
### revise power to negative if avg_diff < 0
MarkerRoc02 <- apply(MarkerRoc01, 1, function(x01) {
for (i0 in seq(2, length(x01), 3)) {
if (x01[i0] < 0) {
x01[i0 + 1] <- -x01[i0 + 1]
}
}
return(x01)
})
MarkerRoc021 <- as.matrix(MarkerRoc02[seq(3, dim(MarkerRoc02)[1], 3), ])
if (dim(MarkerRoc02)[1] == 1) {
MarkerRoc021 <- t(MarkerRoc021)
}
### calculate the joint power for each cluster
MarkerRoc03 <- apply(MarkerRoc021, 1, function(x01) {
x02 <- x01[x01 > 0]; x03 <- x01[x01 < 0]
if (length(x02) > 0) {
power01 <- Cal_JointPower(x02)
} else {
power01 <- 0
}
if (length(x03) > 0) {
power02 <- Cal_JointPower(-x03)
} else {
power02 <- 0
}
power03 <- power01 - power02
return(power03)
})
return(MarkerRoc03)
}
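# Cal_JointPower combines individual marker powers as 1 - prod(1 - p_i), i.e. the
# probability that at least one marker separates the cluster.
# For example, Cal_JointPower(c(0.6, 0.5)) = 1 - 0.4 * 0.5 = 0.8.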
Cal_JointPower <- function(power01) {
power02 <- 1 - power01[1]
if (length(power01) < 2) {
} else {
for (i in 2:length(power01)) {
power02 <- power02 * (1 - power01[i])
}
}
return(1 - power02)
}
Sort_MarkersPower <- function(MarkerRoc01, Thr01 = 0) {
MarkerRoc02 <- sort.int(as.numeric(MarkerRoc01), index.return = TRUE, decreasing = T)
Name01 <- colnames(MarkerRoc01)
if (length(MarkerRoc01) == 0 | max(MarkerRoc02$x) <= Thr01) {
MarkerRoc03 <- c("None", "None", "None")
} else if (length(MarkerRoc02$x) == 1 & length(MarkerRoc02$x[MarkerRoc02$x > Thr01]) == 1 |
length(MarkerRoc02$x) != 1 & length(MarkerRoc02$x[MarkerRoc02$x > Thr01]) == 1 & MarkerRoc02$x[2] < 0) {
MarkerRoc03 <- c(paste(Name01[MarkerRoc02$ix[1]], sep = ","),
paste(MarkerRoc02$x[1], sep = ","), Name01[MarkerRoc02$ix[1]])
} else {
if (length(MarkerRoc02$x) != 1 & length(MarkerRoc02$x[MarkerRoc02$x > Thr01]) == 1) {
MarkerRoc02x <- MarkerRoc02$x[1:2]
MarkerRoc02ix <- MarkerRoc02$ix[1:2]
} else {
MarkerRoc02x <- c(MarkerRoc02$x[MarkerRoc02$x > Thr01], Thr01)
MarkerRoc02ix <- c(MarkerRoc02$ix[MarkerRoc02$x > Thr01], length(MarkerRoc02$ix) + 1)
}
Power01 <- Cal_RelativePower(MarkerRoc02x)
maxPower01 <- which.max(Power01)
if (maxPower01 == length(MarkerRoc02x)) {
maxPower01 <- length(Power01) - 1
}
MarkerRoc03 <- c(paste(Name01[MarkerRoc02ix[1:(maxPower01 + 1)]], collapse = ","),
paste(MarkerRoc02x[1:(maxPower01 + 1)], collapse = ","), Name01[MarkerRoc02ix[1]])
}
return(MarkerRoc03)
}
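# Cal_RelativePower scores each prefix of the sorted power vector as
# sqrt(geometric_mean(p_1..p_i) * (geometric_mean(p_1..p_i) - p_(i+1))), so the score is
# large when the first i powers are high and drop sharply at position i + 1.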
Cal_RelativePower <- function(power01) {
power02 <- c()
if (length(power01) < 2) {
power02 <- power01
} else {
for (i in 1:(length(power01) - 1)) {
power01g <- psych::geometric.mean(power01[1:i])
if ((power01g - power01[i + 1]) < 0 & (power01g - power01[i + 1]) > -10e-10) {
power02[i] <- 0
} else {
power02[i] <- (power01g * (power01g - power01[i + 1]))^0.5
}
}
}
return(power02)
}
AUCMarkerTest <- function(data1, data2, mygenes, print.bar = TRUE) {
myAUC <- unlist(x = lapply(X = mygenes,
FUN = function(x) {
return(DifferentialAUC(
x = as.numeric(x = data1[x, ]),
y = as.numeric(x = data2[x, ])
))
}
))
myAUC[is.na(x = myAUC)] <- 0
iterate.fxn <- ifelse(test = print.bar, yes = pbapply::pblapply, no = lapply)
avg_diff <- unlist(x = iterate.fxn(
X = mygenes,
FUN = function(x) {
return(
Seurat::ExpMean(
x = as.numeric(x = data1[x, ])
) - Seurat::ExpMean(
x = as.numeric(x = data2[x, ])
)
)
}
))
toRet <- data.frame(cbind(myAUC, avg_diff), row.names = mygenes)
toRet <- toRet[rev(x = order(toRet$myAUC)), ]
return(toRet)
}
DifferentialAUC <- function(x, y) {
prediction.use <- ROCR::prediction(
predictions = c(x, y),
labels = c(rep(x = 1, length(x = x)), rep(x = 0, length(x = y))),
label.ordering = 0:1
)
perf.use <- ROCR::performance(prediction.obj = prediction.use, measure = "auc")
  auc.use <- round(x = [email protected][[1]], digits = 3)
return(auc.use)
}
Markertest <- function(object, cells.1, cells.2, genes.use = NULL, print.bar = TRUE,
assay.type = "RNA") {
data.test <- Seurat::GetAssayData(
object = object,
slot = "data"
)
if (is.null(genes.use)) {
genes.use <- rownames(x = data.test)
}
to.return <- AUCMarkerTest(
data1 = data.test[, cells.1],
data2 = data.test[, cells.2], mygenes = genes.use, print.bar = print.bar
)
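  # Rescale the AUC so that 0 means no discrimination (AUC = 0.5) and 1 means perfect
  # separation in either direction: power = 2 * |AUC - 0.5|.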
to.return$power <- abs(x = to.return$myAUC - 0.5) * 2
return(to.return)
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/Identify_markers_preprocess.R
|
Sort_MarkersPower <- function(MarkerRoc01, Thr01 = 0) {
MarkerRoc02 <- sort.int(as.numeric(MarkerRoc01), index.return = TRUE, decreasing = TRUE)
Name01 <- colnames(MarkerRoc01)
if (length(MarkerRoc01) == 0 | max(MarkerRoc02$x) <= Thr01) {
MarkerRoc03 <- c("None", "None", "None")
} else if (length(MarkerRoc02$x) == 1 & length(MarkerRoc02$x[MarkerRoc02$x > Thr01]) == 1 |
length(MarkerRoc02$x) != 1 & length(MarkerRoc02$x[MarkerRoc02$x > Thr01]) == 1 & MarkerRoc02$x[2] < 0) {
MarkerRoc03 <- c(paste(Name01[MarkerRoc02$ix[1]], sep = ","),
paste(MarkerRoc02$x[1], sep = ","), Name01[MarkerRoc02$ix[1]])
} else {
if (length(MarkerRoc02$x) != 1 & length(MarkerRoc02$x[MarkerRoc02$x > Thr01]) == 1) {
MarkerRoc02x <- MarkerRoc02$x[1:2]
MarkerRoc02ix <- MarkerRoc02$ix[1:2]
} else {
MarkerRoc02x <- c(MarkerRoc02$x[MarkerRoc02$x > Thr01], Thr01)
MarkerRoc02ix <- c(MarkerRoc02$ix[MarkerRoc02$x > Thr01], length(MarkerRoc02$ix) + 1)
}
Power01 <- Cal_RelativePower(MarkerRoc02x)
maxPower01 <- which.max(Power01)
if (maxPower01 == length(MarkerRoc02x)) {
maxPower01 <- length(Power01) - 1
}
MarkerRoc03 <- c(paste(Name01[MarkerRoc02ix[1:(maxPower01 + 1)]], collapse = ","),
paste(MarkerRoc02x[1:(maxPower01 + 1)], collapse = ","), Name01[MarkerRoc02ix[1]])
}
return(MarkerRoc03)
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/Sort_MarkersPower.R
|
#' Identify orthologs marker genes for two species
#' @description Identify orthologs marker genes for two species based on orthologs database
#' @param OrthG ortholog genes database
#' @param Species1_Marker_table data.frame of species 1, first column should be the gene name,
#' second column should be the cluster corresponding to each marker gene
#' @param Species2_Marker_table data.frame of species 2, first column should be the gene name,
#' second column should be the cluster corresponding to each marker gene
#' @param Species_name1 character, indicating the species name of Species1_Marker_table
#' @param Species_name2 character, indicating the species name of Species2_Marker_table
#' @param match_cell_name character pattern contained in the cell-type names of both
#' species, used to match similar cell types
#'
#' @return Data frame of conserved markers
#' @export
#'
#' @examples load(system.file("extdata", "CellMarkers.rda", package = "CACIMAR"))
#' o1 <- Identify_ConservedMarkers(OrthG_Mm_Zf,Mm_marker_cell_type,
#' Zf_marker_cell_type,Species_name1 = 'mm',Species_name2 = 'zf')
#' o2 <- Identify_ConservedMarkers(OrthG_Zf_Ch,Ch_marker_cell_type,
#' Zf_marker_cell_type,Species_name1 = 'ch',Species_name2 = 'zf')
Identify_ConservedMarkers <- function(OrthG,Species1_Marker_table,Species2_Marker_table,
Species_name1,Species_name2,
match_cell_name=NULL){
validInput(OrthG,'OrthG','df')
validInput(Species_name1,'Species_name1','character')
validInput(Species_name2,'Species_name2','character')
Species1_Marker_table <- Species1_Marker_table[!duplicated(Species1_Marker_table[,1]),]
Species2_Marker_table <- Species2_Marker_table[!duplicated(Species2_Marker_table[,1]),]
colnames(Species1_Marker_table)[2] <- 'cluster'
colnames(Species2_Marker_table)[2] <- 'cluster'
Species_name1 <- tolower(Species_name1)
Species_name2 <- tolower(Species_name2)
Spec1 <- colnames(OrthG)[2]
Spec2 <- colnames(OrthG)[4]
Spec1 <- gsub('_ID','',Spec1)
Spec2 <- gsub('_ID','',Spec2)
if (Spec1 == Species_name1 & Spec2 == Species_name2) {
Species_name <- c(Spec1,Spec2)
Species1_Marker <- Species1_Marker_table
Species2_Marker <- Species2_Marker_table
}else if(Spec2 == Species_name1 & Spec1 == Species_name2){
Species_name <- c(Spec2,Spec1)
Species2_Marker <- Species1_Marker_table
Species1_Marker <- Species2_Marker_table
}else{stop('please input correct Species name')}
colnames(Species1_Marker) <- paste0(Species_name[1],colnames(Species1_Marker))
colnames(Species2_Marker) <- paste0(Species_name[2],colnames(Species2_Marker))
Species12 <- Species1_Marker
Species22 <- Species2_Marker
Spec1_gene <- data.frame(rep(0,nrow(Species12)),
rep(1,nrow(Species12)))
rownames(Spec1_gene) <- Species12[,1]
Spec2_gene <- data.frame(rep(0,nrow(Species22)),
rep(1,nrow(Species22)))
rownames(Spec2_gene) <- Species22[,1]
Exp2 <- Get_OrthG(OrthG, Spec1_gene, Spec2_gene, Species_name)
if (grepl('ENS',rownames(Spec1_gene)[1])) {
Type1 <- paste0('Used_',Species_name[1],'_ID')
Type2 <- paste0('Used_',Species_name[2],'_ID')
}else{
Type1 <- paste0('Used_',Species_name[1],'_Symbol')
Type2 <- paste0('Used_',Species_name[2],'_Symbol')
}
Species1 <- Species1_Marker[match(Exp2[, Type1],Species1_Marker[,1]), ]
Species2 <- Species2_Marker[match(Exp2[, Type2],Species2_Marker[,1]), ]
Exp3 <- cbind(Exp2, Species1, Species2)
Exp4 <- Exp3[!is.na(Exp3[, dim(Exp2)[2]+1]) & !is.na(Exp3[,dim(Exp2)[2]+dim(Species1)[2]+1]), ]
if (nrow(Exp4)==0) {
stop('No homologous genes appear!')
}
Exp5 <- cbind(Exp4[,1:7], Species12[match(Exp4[,Type1],
Species12[,1]), ],
Species22[match(Exp4[,Type2], Species22[,1]), ])
Exp6 <- Refine_Used_OrthG(Exp5,Species_name,match_cell_name)
  # keep ortholog pairs assigned to the same cluster in both species (species-specific column names)
  Exp6 <- Exp6[Exp6[[paste0(Species_name[1], 'cluster')]] == Exp6[[paste0(Species_name[2], 'cluster')]], ]
return(Exp6)
}
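# Get_OrthG maps the two species' gene tables onto the ortholog database. For 0-to-1,
# 1-to-0 and 1-to-1 ortholog types the IDs are used directly; for N-to-N types the member
# with the highest mean value in the supplied table (optionally restricted by the pattern
# arguments) is chosen as the representative gene. The selected IDs are appended as
# 'Used_<species>_*' columns.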
Get_OrthG <- function(OrthG1, MmRNA1, ZfRNA1, Spec1, MmPattern1='', ZfPattern1=''){
tOrthG1 <- table(OrthG1$Type); print(tOrthG1)
if (grepl('ENS',rownames(MmRNA1)[1])) {
Ind1 <- c(grep(paste0(Spec1[1],'_ID'), colnames(OrthG1)), grep(paste0(Spec1[2],'_ID'), colnames(OrthG1)))
}else{
Ind1 <- c(grep(paste0(Spec1[1],'_Symbol'), colnames(OrthG1)), grep(paste0(Spec1[2],'_Symbol'), colnames(OrthG1)))
}
OrthG21 <- list()
for(i in 1:length(tOrthG1)){ print(names(tOrthG1)[i])
if(names(tOrthG1)[i]==paste(c(Spec1,'0T1'),collapse='_')){ OrthG2 <- OrthG1[OrthG1$Type==names(tOrthG1)[i], ]
OrthG3 <- cbind(OrthG2, OrthG2[,Ind1])
}else if(names(tOrthG1)[i]==paste(c(Spec1,'1T0'),collapse='_')){ OrthG2 <- OrthG1[OrthG1$Type==names(tOrthG1)[i], ]
OrthG3 <- cbind(OrthG2, OrthG2[,Ind1])
}else if(names(tOrthG1)[i]==paste(c(Spec1,'1T1'),collapse='_')){ OrthG2 <- OrthG1[OrthG1$Type==names(tOrthG1)[i], ]
OrthG3 <- cbind(OrthG2, OrthG2[,Ind1])
}else if(grepl(paste(c(Spec1,'.*N'),collapse='_'),names(tOrthG1)[i])){ OrthG2 <- OrthG1[OrthG1$Type==names(tOrthG1)[i], ]
OrthG21 <- apply(OrthG2, 1 ,function(x1){
for(j in 1:length(Ind1)) { x2 <- strsplit(as.character(x1[Ind1[j]]),'[;,]')[[1]]
if(j==1){ RNA21 <- MmRNA1[match(x2, rownames(MmRNA1)), ]
if(MmPattern1[1]!=''){ Pattern21 <- MmPattern1[match(x2, rownames(MmRNA1)), ] }
}else{ RNA21 <- ZfRNA1[match(x2, rownames(ZfRNA1)), ]
if(MmPattern1[1]!=''){ Pattern21 <- ZfPattern1[match(x2, rownames(ZfRNA1)), ] }
}
RNA2 <- RNA21[!is.na(RNA21[,1]), ];
if(MmPattern1[1]!=''){ Pattern2 <- Pattern21[!is.na(RNA21[,1])] }
if(is.null(dim(RNA2)) | dim(RNA2)[1]==1){
RNA3 <- RNA2; Gene1 <- rownames(RNA2)
}else if(dim(RNA2)[1]==0){
RNA3 <- RNA2[1, ]; Gene1 <- rownames(RNA2)[1]
}else{
if(MmPattern1[1]!=''){ RNA31 <- RNA2[grepl('[UD]', Pattern2), ]
if(dim(RNA31)[1]!=0){ RNA32 <- RNA31
}else{ RNA32 <- RNA2 }
}else{ RNA32 <- RNA2 }
mRNA32 <- which.max(rowMeans(RNA32))
RNA3 <- RNA32[mRNA32, ]; Gene1 <- rownames(RNA32)[mRNA32]
}
if(j==1){ RNA4 <- as.numeric(RNA3); Gene2 <- Gene1
}else{ RNA4 <- c(RNA4, as.numeric(RNA3)); Gene2 <- c(Gene2, Gene1) }
}
return(Gene2)
} )
rownames(OrthG21) <- colnames(OrthG1)[Ind1]
OrthG3 <- cbind(OrthG2, t(OrthG21))
    }else{ print(paste0('Not processed: ', names(tOrthG1)[i]))
      stop('Cannot match any ortholog marker genes, please check whether the
           input species names are correct') }
if(i==1){ OrthG4 <- as.matrix(OrthG3);
}else{ OrthG4 <- rbind(OrthG4, as.matrix(OrthG3)) }
}
colnames(OrthG4)[(ncol(OrthG1)+1):(ncol(OrthG1)+2)] <- paste0('Used_',colnames(OrthG4)[(ncol(OrthG1)+1):(ncol(OrthG1)+2)])
return(OrthG4)
}
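# Refine_Used_OrthG keeps only ortholog pairs whose cluster (cell type) labels overlap
# between the two species, either by an exact label match or, when smiliar_cell_name is
# supplied, because both labels contain that pattern; rows are then ordered by the first
# species' cluster column.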
Refine_Used_OrthG<-function(ShMarker1,Species,smiliar_cell_name){
Type1 <- paste0(Species[1],'.*\\cluster'); Type2 <- paste0(Species[2], '.*\\cluster')
Spec1Type1 <- grep(Type1, colnames(ShMarker1))
Spec1Type2 <- grep(Type2, colnames(ShMarker1))
if (is.null(smiliar_cell_name)) {
ShMarker2 <- apply(ShMarker1, 1, function(x1){
x11 <- x1[Spec1Type1]; x12 <- x1[Spec1Type2]; x2 <- F
for(i in 1:length(x11)){
if(!is.na(x11[i])){
x112 <- strsplit(x11[i], ',')[[1]]
for(i1 in 1:length(x112)){
for(j in 1:length(x12)){
if(!is.na(x12[j])){
x122 <- strsplit(x12[j], ',')[[1]]
for(j1 in 1:length(x122)){
if(x112[i1]==x122[j1]){
x2 <- T
} } } } } } }
return(x2)
})
}else{
ShMarker2 <- apply(ShMarker1, 1, function(x1){
x11 <- x1[Spec1Type1]; x12 <- x1[Spec1Type2]; x2 <- F
for(i in 1:length(x11)){
if(!is.na(x11[i])){
x112 <- strsplit(x11[i], ',')[[1]]
for(i1 in 1:length(x112)){
for(j in 1:length(x12)){
if(!is.na(x12[j])){
x122 <- strsplit(x12[j], ',')[[1]]
for(j1 in 1:length(x122)){
if(x112[i1]==x122[j1] | grepl(smiliar_cell_name,x112[i1]) & grepl(smiliar_cell_name,x122[j1])){
x2 <- T
} } } } } } }
return(x2)
})}
ShMarker3 <- ShMarker1[ShMarker2, ]
ShMarker4 <- ShMarker3[order(ShMarker3[, Spec1Type1[1]]), ]
return(ShMarker4)
}
Refine_Markers_Species<-function(Marker1,Species){
PowerTh1 <- 0.4; PowerTh2 <- gsub('\\.','',PowerTh1)
SpecInd1 <- length(Species);
Ind1 <- list(); Ind2 <- list();
for(i in 1:length(Species)){
Ind1[[i]] <- grep(paste0(Species[i],'.*\\.luster'), colnames(Marker1))
Ind2[[i]] <- grep(paste0(Species[i],'.*\\.Power'), colnames(Marker1))
}
Marker2 <- apply(Marker1, 1, function(x1){
x2 <- F; x14 <- list();
for(i in 1:length(Ind1)){
x12 <- x1[Ind1[[i]]]
for(i1 in 1:length(x12)){
if(!is.na(x12[i1])){
x13 <- strsplit(x12[i1], ',')[[1]]
for(i2 in 1:length(x13)){
x14[[x13[i2]]] <- x13[i2]
} }
} }
x41 <- list()
for(i in 1:length(x14)){
x15 <- rep(0, length(Ind1))
for(i1 in 1:length(Ind1)){
x12 <- x1[Ind1[[i1]]]; x22 <- x1[Ind2[[i1]]]
for(i2 in 1:length(x12)){
if(!is.na(x12[i2])){
x13 <- strsplit(x12[i2], ',')[[1]]
x23 <- strsplit(x22[i2], ',')[[1]]
for(i3 in 1:length(x13)){
if(x14[[i]]==x13[i3] | grepl('MG',x14[[i]]) & grepl('MG',x13[i3]) | grepl('BC',x14[[i]]) & grepl('BC',x13[i3])){
x15[i1] <- max(x15[i1], x23[i3])
} } } } }
x31 <- T; x32 <- F
for(j in 1:length(Ind1)){
if(as.numeric(x15[j])==0){ x31 <- F }
if(as.numeric(x15[j])>PowerTh1){ x32 <- T
} }
if(x31==T & x32==T){ x41[[x14[[i]]]] <- x15
} }
if(length(x41)>0){ x2 <- T }
return(x2)
})
Marker3 <- Marker1[Marker2, ]
print(c(nrow(Marker1), nrow(Marker3)))
return(Marker3)
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/cross-species_markers.R
|
#' Orthologs genes database for mus musculus and chicken
"OrthG_Mm_Ch"
#' Orthologs genes database for zebrafish and chicken
"OrthG_Zf_Ch"
#' Orthologs genes database for mus musculus and zebrafish
"OrthG_Mm_Zf"
#' Orthologs genes database for homo sapiens and chicken
"OrthG_Hs_Ch"
#' Orthologs genes database for homo sapiens and mus musculus
"OrthG_Hs_Mm"
#' Orthologs genes database for homo sapiens and zebrafish
"OrthG_Hs_Zf"
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/data.r
|
#' Plot the heatmap of cell-type correlation or conservation scores across species
#' @param RNA1 matrix of cell-type scores (e.g. correlation or conservation) to plot
#' @param RowType1 character, indicating the cell types that you want to show
#' on the row in heatmap. RowType1='' means show all cell types
#' @param ColType1 character, indicating the cell types that you want to show
#' on the column in heatmap. ColType1='' means show all cell types
#' @param cluster_cols boolean values determining if columns should be clustered
#' or hclust object
#' @param cluster_rows boolean values determining if rows should be clustered or
#' hclust object
#' @param Color1 vector of colors used in heatmap
#' @param ... parameter in pheatmap
#' @export
#' @importFrom pheatmap pheatmap
#' @importFrom grDevices rgb
#' @importFrom grDevices colorRampPalette
#' @importFrom methods is
#' @return pheatmap object
#'
#' @examples load(system.file("extdata", "network_example.rda", package = "CACIMAR"))
#' n1 <- Identify_ConservedNetworks(OrthG_Mm_Zf,mmNetwork,zfNetwork,'mm','zf')
#' Heatmap_Cor(n1[[2]],cluster_cols=TRUE, cluster_rows=FALSE)
Heatmap_Cor <- function(RNA1, RowType1='', ColType1='', cluster_cols=TRUE
, cluster_rows=FALSE, Color1=NULL, ...){
validInput(RowType1,'RowType1','character')
validInput(ColType1,'ColType1','character')
validInput(cluster_cols,'cluster_cols','logical')
validInput(cluster_rows,'cluster_rows','logical')
Ind21 <- c(); Ind22 <- c();
if(RowType1==''){ Ind21 <- 1:dim(RNA1)[1];
}else{
for(i in 1:length(RowType1)){
Ind1 <- grep(RowType1[i], rownames(RNA1))
Ind21 <- c(Ind21, Ind1)
}
}
if(ColType1==''){ Ind22 <- 1:dim(RNA1)[2]
}else{
for(i in 1:length(ColType1)){
Ind1 <- grep(ColType1[i], colnames(RNA1))
Ind22 <- c(Ind22, Ind1)
}
}
RNA2 <- RNA1[Ind21, Ind22]
#RNA2[RNA2==1] <- NA; RNA2[is.na(RNA2)] <- max(RNA2[!is.na(RNA2)])
white1 <- rgb(230/255,230/255,230/255); purple1 <- rgb(192/255,103/255,169/255)
purple2 <- rgb(148/255,43/255,112/255);
blue1 <- rgb(72/255,85/255,167/255); red1 <- rgb(239/255,58/255,37/255)
black1 <- rgb(71/255,71/255,71/255); yellow1 <- rgb(250/255,240/255,21/255);
if(is.null(Color1)){ Color1 <- c(blue1, 'white', red1) }
Hier1 <- pheatmap::pheatmap(as.matrix(RNA2), cluster_cols =cluster_cols, cluster_rows =
cluster_rows, color = colorRampPalette(Color1)(50),
border_color=rgb(200/255,200/255,200/255),...)
return(Hier1)
}
Seurat_SubsetData <- function(pbmc1, SubG1, SubS1=NULL, ExSubS1=NULL){
if(!is.null(SubS1)){ CellN1 <- c()
for(SubS2 in SubS1){
      CellN1 <- c(CellN1, rownames([email protected][[email protected][, SubG1]==SubS2, ]))
}
pbmc1 <- subset(pbmc1, cells=unique(CellN1) )
}
if(!is.null(ExSubS1)){ CellN2 <- c()
for(ExSubS2 in ExSubS1){
      CellN2 <- c(CellN2, rownames([email protected][[email protected][, SubG1]==ExSubS2, ]))
}
    CellN1 <- setdiff(rownames([email protected]), CellN2)
pbmc1 <- subset(pbmc1, cells=unique(CellN1) )
}
return(pbmc1)
}
#' CACIMAR colors palette
#'
#' @param color_number numeric, indicating used colors number
#'
#' @return vector of colors
#' @export
#'
#' @examples CACIMAR_cols(10)
#' CACIMAR_cols(20)
CACIMAR_cols <- function(color_number){
cols <- c("OrangeRed","SlateBlue3","DarkOrange","GreenYellow","Purple",
"DarkSlateGray","Gold","DarkGreen","DeepPink2","Red4","#4682B4",
"#FFDAB9","#708090","#836FFF","#CDC673","#CD9B1D","#FF6EB4","#CDB5CD"
,"#008B8B","#43CD80","#483D8B","#66CD00","#CDC673","#CDAD00","#CD9B9B"
,"#FF8247","#8B7355","#8B3A62","#68228B","#CDB7B5","#CD853F","#6B8E23"
,"#696969","#7B68EE","#9F79EE","#B0C4DE","#7A378B","#66CDAA","#EEE8AA"
,"#00FF00","#EEA2AD","#A0522D","#000080","#E9967A","#00CDCD","#8B4500"
,"#DDA0DD","#EE9572","#EEE9E9","#8B1A1A","#8B8378","#EE9A49","#EECFA1"
,"#8B4726","#8B8878","#EEB4B4","#C1CDCD","#8B7500","#0000FF","#EEEED1"
,"#4F94CD","#6E8B3D","#B0E2FF","#76EE00","#A2B5CD","#548B54","#BBFFFF"
,"#B4EEB4","#00C5CD","#008B8B","#7FFFD4","#8EE5EE","#43CD80","#68838B"
,"#00FF00","#B9D3EE","#9ACD32","#00688B","#FFEC8B","#1C86EE","#CDCD00"
,"#473C8B","#FFB90F","#EED5D2","#CD5555","#CDC9A5","#FFE7BA","#FFDAB9"
,"#CD661D","#CDC5BF","#FF8C69","#8A2BE2","#CD8500","#B03060","#FF6347"
,"#FF7F50","#CD0000","#F4A460","#FFB5C5","#DAA52")
cols_return <- cols[1:color_number]
return(cols_return)
}
#' Plot Markers in each cell type
#' @description This function integrate R package pheatmap to plot markers in each
#' cell type
#' @param ConservedMarker Markers table
#' @param start_col numeric, indicating the start column of marker power in each
#' cell type
#' @param module_colors vector, indicating colors of modules (annotation_colors)
#' @param heatmap_colors vector, indicating colors used in heatmap
#' @param cluster_rows boolean values determining if rows should be clustered or
#' hclust object
#' @param cluster_cols boolean values determining if columns should be clustered
#' or hclust object
#' @param show_rownames boolean specifying if row names should be shown
#' @param show_colnames boolean specifying if column names should be shown
#' @param cellwidth individual cell width in points. If left as NA, then the
#' values depend on the size of plotting window
#' @param cellheight individual cell height in points. If left as NA, then the
#' values depend on the size of plotting window
#' @param legend logical to determine if legend should be drawn or not
#' @param annotation_legend boolean value showing if the legend for annotation
#' tracks should be drawn
#' @param annotation_names_row boolean value showing if the names for row
#' annotation tracks should be drawn
#' @param ... parameter in pheatmap
#' @importFrom pheatmap pheatmap
#' @importFrom viridisLite viridis
#' @return pheatmap object
#' @export
#'
#' @examples data("pbmc_small")
#' all.markers <- Identify_Markers(pbmc_small)
#' all.markers <- Format_Markers_Frac(all.markers)
#' Plot_MarkersHeatmap(all.markers[,c(2,6,7,8)])
Plot_MarkersHeatmap <- function(ConservedMarker,start_col = 2,module_colors = NA,
heatmap_colors = NA, cluster_rows = FALSE,cluster_cols = FALSE,
show_rownames = FALSE, show_colnames = FALSE,cellwidth = NA,
cellheight = NA,legend = FALSE,annotation_legend=FALSE,
annotation_names_row = FALSE, ...){
  if (all(is.na(module_colors))) {
all_cell_type <- c(ConservedMarker[,1])
all_cell_type <- all_cell_type[!duplicated(all_cell_type)]
module_colors <- CACIMAR::CACIMAR_cols(length(all_cell_type))
names(module_colors) <- all_cell_type
module_colors <- list(module_colors)
names(module_colors) <- 'Celltype'
}
  if (all(is.na(heatmap_colors))) {
heatmap_colors <- viridisLite::viridis(100,option = 'D')
}
annotation_row <- as.data.frame(ConservedMarker[,start_col-1])
rownames(annotation_row) <- rownames(ConservedMarker)
colnames(annotation_row) <- 'Celltype'
gap_row <- c()
for (i in levels(as.factor(ConservedMarker[,1]))) {
row1 <- grep(i,ConservedMarker[,1])
gap_row <- c(gap_row,row1[length(row1)])
}
p1=pheatmap::pheatmap(ConservedMarker[,start_col:ncol(ConservedMarker)],annotation_row =
annotation_row,cluster_rows = cluster_rows,
show_rownames = show_rownames,cluster_cols = cluster_cols,
gaps_row = gap_row,show_colnames = show_colnames, color = heatmap_colors,
cellwidth = cellwidth,cellheight = cellheight,legend = legend,
annotation_colors = module_colors,annotation_legend = annotation_legend,
annotation_names_row = annotation_names_row,...)
return(p1)
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/plot.R
|
### from https://github.com/GreenleafLab/ArchR/blob/master/R/ValidationUtils.R
validInput <- function(input = NULL, name = NULL, valid = NULL){
valid <- unique(valid)
if(is.character(valid)){
valid <- tolower(valid)
}else{
stop("Validator must be a character!")
}
if(!is.character(name)){
stop("name must be a character!")
}
if("null" %in% tolower(valid)){
valid <- c("null", valid[which(tolower(valid) != "null")])
}
av <- FALSE
for(i in seq_along(valid)){
vi <- valid[i]
if(vi == "integer" | vi == "wholenumber"){
if(all(is.numeric(input))){
#https://stackoverflow.com/questions/3476782/check-if-the-number-is-integer
cv <- min(abs(c(input%%1, input%%1-1)), na.rm = TRUE) < .Machine$double.eps^0.5
}else{
cv <- FALSE
}
}else if(vi == "null"){
cv <- is.null(input)
}else if(vi == "bool" | vi == "boolean" | vi == "logical"){
cv <- is.logical(input)
}else if(vi == "numeric"){
cv <- is.numeric(input)
}else if(vi == "vector"){
cv <- is.vector(input)
}else if(vi == "matrix"){
cv <- is.matrix(input)
}else if(vi == "sparsematrix"){
cv <- is(input, "dgCMatrix")
}else if(vi == "character"){
cv <- is.character(input)
}else if(vi == "factor"){
cv <- is.factor(input)
}else if(vi == "rlecharacter"){
cv1 <- is(input, "Rle")
if(cv1){
cv <- is(input@values, "factor") || is(input@values, "character")
}else{
cv <- FALSE
}
}else if(vi == "timestamp"){
cv <- is(input, "POSIXct")
}else if(vi == "dataframe" | vi == "data.frame" | vi == "df"){
cv1 <- is.data.frame(input)
cv2 <- is(input, "DataFrame")
cv <- any(cv1, cv2)
}else if(vi == "fileexists"){
cv <- all(file.exists(input))
}else if(vi == "direxists"){
cv <- all(dir.exists(input))
}else if(vi == "granges" | vi == "gr"){
cv <- is(input, "GRanges")
}else if(vi == "list" | vi == "simplelist"){
cv1 <- is.list(input)
cv2 <- is(input, "SimpleList")
cv <- any(cv1, cv2)
}else if(vi == "se" | vi == "summarizedexperiment"){
cv <- is(input, "SummarizedExperiment")
}else if(vi == "seurat" | vi == "seuratobject"){
cv <- is(input, "Seurat")
}else if(vi == "txdb"){
cv <- is(input, "TxDb")
}else if(vi == "orgdb"){
cv <- is(input, "OrgDb")
}else if(vi == "bsgenome"){
cv <- is(input, "BSgenome")
}else if(vi == "parallelparam"){
cv <- is(input, "BatchtoolsParam")
}else if(vi == "archrproj" | vi == "archrproject"){
cv <- is(input, "ArchRProject")
###validObject(input) check this doesnt break anything if we
###add it. Useful to make sure all ArrowFiles exist! QQQ
}else{
stop("Validator is not currently supported by ArchR!")
}
if(cv){
av <- TRUE
break
}
}
if(av){
return(invisible(TRUE))
}else{
stop("Input value for '", name,"' is not a ", paste(valid, collapse="," ), ", (",name," = ",class(input),") please supply valid input!")
}
}
|
/scratch/gouwar.j/cran-all/cranData/CACIMAR/R/validation.R
|
CADFpvalues <- function(t0, rho2=0.5, type=c("trend", "drift", "none"))
{
# This procedure computes the p-values of the CADF test developed by Hansen (1995)
# t0: sample statistic
# rho2: value of the parameter rho^2
# type: CADF model type: "none" (no constant), "drift" (constant), "trend" (constant plus trend)
# The procedure is described in Costantini, Lupi & Popp (2007)
# Citation:
# @TECHREPORT{,
# author = {Costantini, Mauro and Lupi, Claudio and Popp, Stephan},
# title = {A Panel-{CADF} Test for Unit Roots},
# institution = {University of Molise},
# year = {2007},
# type = {Economics \& Statistics Discussion Paper},
# number = {39/07},
# timestamp = {2008.10.15},
# url = {http://econpapers.repec.org/paper/molecsdps/esdp07039.htm}}
type <- match.arg(type)
switch(type,
"trend" = coeffs <- CADFtest::coeffs_ct,
"drift" = coeffs <- CADFtest::coeffs_c,
"none" = coeffs <- CADFtest::coeffs_nc)
# the first column of coefs are the probabilities, the other columns are beta_0, ..., beta_3
# of Costantini, Lupi & Popp eqn (13).
L <- dim(coeffs)[1]
# compute the fitted quantiles for the given value of rho^2
fitted.q <- coeffs[,2] + coeffs[,3]*rho2 + coeffs[,4]*rho2^2 + coeffs[,5]*rho2^3
# find the position of the fitted quantile that is closest to the sample statistic
difference <- abs(fitted.q - t0)
position <- which(difference==min(difference))
if (length(position)>1) position <- position[1]
# interpolate locally using eq (10) in Costantini, Lupi & Popp (2007) using l observations (l must be an integer odd number)
l <- 11
if ( (position > ((l-1)/2)) & (position < (L - (l-1)/2)) ) range <- (position - (l-1)/2):(position + (l-1)/2)
if (position <= ((l-1)/2)) range <- 1:l
if (position >= (L-(l-1)/2)) range <- (L-(l-1)/2+1):L
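  # Map the tabulated probabilities to the probit scale and fit a local cubic in the fitted
  # quantiles; evaluating the polynomial at t0 and applying pnorm() gives the interpolated
  # p-value (eq. 10 in Costantini, Lupi & Popp, 2007).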
prob <- coeffs[range,1]
prob <- qnorm(prob)
x1 <- fitted.q[range]
x2 <- x1^2
x3 <- x1^3
local.interp <- lm(prob~x1+x2+x3)
model.summary <- summary(local.interp)
gamma <- model.summary$coefficients
vt0 <- c(1, t0, t0^2, t0^3)
vt0 <- vt0[1:dim(gamma)[1]]
p.value <- vt0%*%gamma[,1]
p.value <- as.vector(pnorm(p.value))
return(p.value)
}
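# --- Illustrative usage (a minimal sketch, not part of the package source; ---
# --- assumes the CADFtest package is installed) -------------------------------
# The calls below mirror the values used in the package vignette: a CADF
# statistic of -2.2 with an estimated rho^2 of 0.53, under the model with
# constant and trend. Setting rho2 = 1 corresponds to the no-covariates
# (plain ADF) limit.
library(CADFtest)
CADFpvalues(t0 = -2.2, rho2 = 0.53, type = "trend")
CADFpvalues(t0 = -2.2, rho2 = 1, type = "trend")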
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/CADFpvalues.R
|
CADFtest <- function(model, X=NULL, type=c("trend", "drift", "none"),
data=list(), max.lag.y=1, min.lag.X=0, max.lag.X=0, dname=NULL,
criterion=c("none", "BIC", "AIC", "HQC", "MAIC"), ...)
UseMethod("CADFtest")
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/CADFtest.R
|
CADFtest.default <- function(model, X=NULL, type=c("trend", "drift", "none"),
data=list(), max.lag.y=1, min.lag.X=0, max.lag.X=0, dname=NULL,
criterion=c("none", "BIC", "AIC", "HQC", "MAIC"), ...)
{
# Author: Claudio Lupi
# This version: July 22, 2009.
# This function computes Hansen's (1995) Covariate-Augmented Dickey-Fuller (CADF) test.
# The only required argument is model, the Tx1 time series to be tested (it can be a vector).
# If no time series of stationary covariates X is passed to the procedure, then an ordinary ADF test is performed.
# The test types are no-constant ("none"), constant ("drift"), constant plus trend ("trend", the default).
#
# max.lag.y >= 0
# min.lag.X <= 0
# max.lag.X >= 0
if (is.null(dname)){dname <- deparse(substitute(model))}
method <- "CADF test"
y <- model
if (is.null(X)) method <- "ADF test"
type <- match.arg(type)
# modify argument type to be used by punitroot
switch(type,
"trend" = urtype <- "ct",
"drift" = urtype <- "c",
"none" = urtype <- "nc")
criterion <- match.arg(criterion)
rho2 <- NULL # default value for rho^2
nX <- 0 # default number of covariates. The exact number is computed below
if (is.ts(y)==FALSE) y <- ts(y)
trnd <- ts(1:length(y), start=start(y), frequency=frequency(y))
#############################################################################################################
#############################################################################################################
if (criterion=="none") # no automatic model selection
{
test.results <- estmodel(y=y, X=X, trnd=trnd, type=type,
max.lag.y=max.lag.y, min.lag.X=min.lag.X, max.lag.X=max.lag.X,
dname=dname, criterion=criterion, obs.1=NULL, obs.T=NULL, ...)
}
#############################################################################################################
#############################################################################################################
#############################################################################################################
#############################################################################################################
if (criterion!="none") # automatic model selection
{
all.models <- expand.grid(max.lag.y:0, min.lag.X:0, max.lag.X:0) # all possible models
models.num <- dim(all.models)[1] # number of models to be estimated
ICmatrix <- matrix(NA, models.num, 7) # matrix to store lag orders, & inf. crit.
max.lag.y <- all.models[1, 1]
min.lag.X <- all.models[1, 2]
max.lag.X <- all.models[1, 3]
interm.res <- estmodel(y=y, X=X, trnd=trnd, type=type,
max.lag.y=max.lag.y, min.lag.X=min.lag.X, max.lag.X=max.lag.X,
dname=dname, criterion=criterion, obs.1=NULL, obs.T=NULL, ...)
ICmatrix[1, ] <- c(max.lag.y, min.lag.X, max.lag.X, interm.res$AIC, interm.res$BIC, interm.res$HQC, interm.res$MAIC)
t.1 <- interm.res$est.model$index[1]
t.T <- interm.res$est.model$index[length(interm.res$est.model$index)]
for (modeln in 2:models.num)
{
max.lag.y <- all.models[modeln, 1]
min.lag.X <- all.models[modeln, 2]
max.lag.X <- all.models[modeln, 3]
interm.res <- estmodel(y=y, X=X, trnd=trnd, type=type,
max.lag.y=max.lag.y, min.lag.X=min.lag.X, max.lag.X=max.lag.X,
dname=dname, criterion=criterion, obs.1=t.1, obs.T=t.T, ...)
ICmatrix[modeln, ] <- c(max.lag.y, min.lag.X, max.lag.X, interm.res$AIC, interm.res$BIC,
interm.res$HQC, interm.res$MAIC)
}
if (criterion=="AIC") selected.model <- which(ICmatrix[,4]==min(ICmatrix[,4]))
if (criterion=="BIC") selected.model <- which(ICmatrix[,5]==min(ICmatrix[,5]))
if (criterion=="HQC") selected.model <- which(ICmatrix[,6]==min(ICmatrix[,6]))
if (criterion=="MAIC") selected.model <- which(ICmatrix[,7]==min(ICmatrix[,7]))
if (length(selected.model) > 1) selected.model <- selected.model[length(selected.model)]
max.lag.y <- ICmatrix[selected.model, 1]
min.lag.X <- ICmatrix[selected.model, 2]
max.lag.X <- ICmatrix[selected.model, 3]
################################## ESTIMATION & TEST WITH THE SELECTED MODEL ##############################
test.results <- estmodel(y=y, X=X, trnd=trnd, type=type,
max.lag.y=max.lag.y, min.lag.X=min.lag.X, max.lag.X=max.lag.X,
dname=dname, criterion=criterion, obs.1=t.1, obs.T=t.T, ...)
}
class(test.results) <- c("CADFtest", "htest")
if (is.null(X)){names(test.results$statistic) <- paste("ADF(",max.lag.y,")",sep="")}
else{names(test.results$statistic) <- paste("CADF(",max.lag.y,",",max.lag.X,",",min.lag.X,")",sep="")}
test.results$estimate <- c("delta" = as.vector(test.results$est.model$coefficients[(2 - as.numeric(type=="none") +
as.numeric(type=="trend"))]))
test.results$null.value <- c("delta" = 0)
test.results$alternative <- "less"
test.results$type <- type
return(test.results)
}
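# --- Illustrative usage of the default interface (a minimal sketch, not part --
# --- of the package source; assumes the CADFtest and urca packages installed) -
# Passing a plain vector or time series (no covariates) performs an ordinary
# ADF test, as in the package vignette:
library(CADFtest)
data("npext", package = "urca")                  # extended Nelson-Plosser data
ADFt <- CADFtest(npext$gnpperca, max.lag.y = 3)  # ADF test on per-capita GNP
ADFt$p.value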
##############################################################################################################
##############################################################################################################
##############################################################################################################
estmodel <- function(y, X, trnd, type, max.lag.y, min.lag.X, max.lag.X, dname, criterion, obs.1, obs.T, ...)
{
method <- "CADF test"
if (is.null(X)) method <- "ADF test"
rho2 <- NULL
model <- "d(y) ~ "
if (type=="trend") model <- paste(model, "trnd +", sep="")
model <- paste(model, " L(y, 1)", sep="")
if (max.lag.y > 0)
{
for (i in 1:max.lag.y) model <- paste(model, " + L(d(y), ",i,")", sep="")
}
if (is.null(X)==FALSE)
{
if (is.ts(X)==FALSE) X <- ts(X, start=start(y), frequency=frequency(y))
nX <- 1; if (is.null(dim(X))==FALSE) nX <- dim(X)[2] # number of covariates
nX <- (max.lag.X - min.lag.X + 1)*nX # number of X's (including the lags)
if ((min.lag.X==0) & (max.lag.X==0)) model <- paste(model, " + L(X, 0)", sep="")
if ((min.lag.X!=0) | (max.lag.X!=0))
{
for (i in min.lag.X:max.lag.X) model <- paste(model, " + L(X, ",i,")", sep="")
}
}
if (type=="none") model <- paste(model, " -1", sep="")
est.model <- dynlm(formula=formula(model), start=obs.1, end=obs.T)
summ.est.model <- summary(est.model)
q <- summ.est.model$df[1]
TT <- q + summ.est.model$df[2]
sig2 <- sum(est.model$residuals^2)/TT
lsig2 <- log(sig2)
model.AIC <- lsig2 + 2*q/TT
model.BIC <- lsig2 + q*log(TT)/TT
model.HQC <- lsig2 + 2*q*log(log(TT))/TT
ytm1 <- est.model$model[, (2 + as.numeric(type=="trend"))]
if (type=="drift") ytm1 <- ytm1 - mean(ytm1)
if (type=="trend")
{
dtrmod <- lsfit((1:TT), ytm1)
ytm1 <- dtrmod$residuals
}
b0 <- est.model$coefficient[1 + as.numeric(type=="drift") + as.numeric(type=="trend")*2]
sy2 <- sum(ytm1^2)
tau <- b0^2 * sy2 / sig2
model.MAIC <- lsig2 + 2*(tau + q)/TT
t.value <- summ.est.model$coefficients[(2 - as.numeric(type=="none") + as.numeric(type=="trend")),3]
if (is.null(X))
{
switch(type,
"trend" = urtype <- "ct",
"drift" = urtype <- "c",
"none" = urtype <- "nc")
p.value <- punitroot(t.value, N=TT, trend=urtype, statistic = "t") # MacKinnon p-values
}
if (is.null(X)==FALSE)
{
# Compute Hansen's p-value
k <- length(est.model$coefficients)
series <- as.matrix(est.model$model) # explanatory variables considered in the model (excluding constant)
nseries <- dim(series)[2] # number of variables
Xseries <- series[,(nseries-nX+1):nseries] # the X's are the last nX columns of series
if (nX==1) Xseries <- Xseries - mean(Xseries) # demean the X's
if (nX>1) Xseries <- Xseries - apply(Xseries,2,mean)
e <- as.matrix(est.model$residuals)
if (nX==1) v <- Xseries * est.model$coefficients[k] + e
if (nX>1) v <- Xseries%*%est.model$coefficients[(k-nX+1):k] + e
V <- cbind(e,v)
mod <- lm(V~1)
LRCM <- (kernHAC(mod, ...))*nrow(V)
rho2 <- LRCM[1,2]^2/(LRCM[1,1]*LRCM[2,2])
p.value <- CADFpvalues(t.value, rho2, type)
}
return(list(statistic=t.value,
parameter=c("rho2" = rho2),
method=method,
p.value=as.vector(p.value),
data.name=dname,
max.lag.y=max.lag.y,
min.lag.X=min.lag.X,
max.lag.X=max.lag.X,
AIC=model.AIC,
BIC=model.BIC,
HQC=model.HQC,
MAIC=model.MAIC,
est.model=est.model,
call=match.call(CADFtest)))
}
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/CADFtest.default.R
|
CADFtest.formula <- function(model, X=NULL, type=c("trend", "drift", "none"),
data=list(), max.lag.y=1, min.lag.X=0, max.lag.X=0, dname=NULL,
criterion=c("none", "BIC", "AIC", "HQC", "MAIC"), ...)
{
# Author: Claudio Lupi
# This version: December 12, 2008
# This function is an interface to function CADFtest.default that computes Hansen's (1995) CADF test.
# Reference:
# @ARTICLE{,
# author = {Hansen, Bruce E.},
# title = {Rethinking the Univariate Approach to Unit Root Testing: {U}sing Covariates to Increase Power},
# journal = {Econometric Theory},
# year = {1995},
# volume = {11},
# pages = {1148--1171},
# number = {5},
# }
#
# Arguments:
# model: a formula. If model is a vector or a time series then the
# standard ADF test is performed on the series described by model. If a CADF test is desired, then
# model should be specified in the form z0 ~ z1 + z2 + ... where z0 is the variable to be tested,
# while z1 and z2 are the stationary covariates to be used in the test. Note that the model is stylized
# and all the variables are in levels. It is NOT the model equation on which the test is based.
# However, the covariates must be STATIONARY.
# type: it specifies if the underlying model must be with linear trend ("trend", the default),
# with constant ("drift") or without constant ("none").
# max.lag.y: it specifies the number of lags of the differenced dependent variable (\Delta y_t).
# min.lag.X: it specifies the maximum lead of the covariates (it must be negative or zero).
# max.lag.X: it specifies the maximum lag of the covariates (it must be positive or zero).
#            If automatic model selection is requested (see 'criterion' below), the test is performed
#            using the model that minimizes the selection criterion defined in 'criterion'. In this
#            case, the max and min orders serve as upper and lower bounds in the model selection.
# criterion: it can be either "none", "BIC", "AIC", "HQC" or "MAIC". If criterion="none", no automatic
#            model selection is performed. Otherwise, automatic model selection is performed using the
#            specified criterion.
#
# The procedure to compute the CADF test p-value is proposed in Costantini et al. (2007). Please cite the paper
# when you use the present function.
# Reference:
# @TECHREPORT{,
# author = {Costantini, Mauro and Lupi, Claudio and Popp, Stephan},
# title = {A Panel-{CADF} Test for Unit Roots},
# institution = {University of Molise},
# year = {2007},
# type = {Economics \& Statistics Discussion Paper},
# number = {39/07},
# url = {http://econpapers.repec.org/paper/molecsdps/esdp07039.htm}
# }
if (is.null(dname)){dname <- deparse(substitute(model))}
if ((model[3]==".()")|(model[3]=="1()"))
{
model[3] <- 1
mf <- model.frame(model, data=data)
y <- model.response(mf)
X <- NULL
}
else
{
mf <- model.frame(model, data=data)
y <- model.response(mf)
X <- model.matrix(model, data=data)
X <- X[,2:dim(X)[2]]
}
call <- match.call(CADFtest)
test.results <- CADFtest.default(model=y, X=X, type=type, max.lag.y=max.lag.y,
data=data, min.lag.X=min.lag.X, max.lag.X=max.lag.X, dname=dname,
criterion=criterion, ...)
test.results$call <- call
return(test.results)
}
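# --- Illustrative usage of the formula interface (a minimal sketch, not part --
# --- of the package source; assumes the CADFtest and urca packages installed) -
# Mirrors the package vignette: the left-hand side is the series under test (in
# levels); the right-hand side lists the stationary covariates.
library(CADFtest)
data("npext", package = "urca")
npext$unemrate <- exp(npext$unemploy)            # unemployment rate
L <- ts(npext, start = 1860)                     # levels
D <- diff(L)                                     # differences (stationary covariates)
S <- window(ts.intersect(L, D), start = 1909)    # same sample as Hansen's
CADFtest(L.gnpperca ~ 1, data = S, max.lag.y = 3)           # plain ADF
CADFtest(L.gnpperca ~ D.unemrate, data = S, max.lag.y = 3,  # CADF with covariate
         max.lag.X = 3, min.lag.X = -3, criterion = "BIC",
         kernel = "Parzen", prewhite = FALSE)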
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/CADFtest.formula.R
|
bread.mlm <-
function(x, ...) {
d <- length(coef(x))
rval <- diag(d)
colnames(rval) <- rownames(rval) <- colnames(coef(x))
return(rval)
}
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/bread.mlm.R
|
estfun.mlm <-
function(x, ...) {
psi <- t(t(model.response(model.frame(x))) - as.vector(coef(x)))
colnames(psi) <- colnames(coef(x))
return(psi)
}
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/estfun.mlm.R
|
plot.CADFtest <- function(x, plots=(1:4), ...)
{
# x : an x of class `CADFtest'
# plots: specify the plots to be produced
switch(length(plots),
par(mfrow=c(1,1)),
par(mfrow=c(2,1)),
layout(matrix(c(1,1,2,3),2,2, byrow=TRUE)),
par(mfrow=c(2,2))
)
r <- residuals(x)
if (1 %in% plots)
{
sr <- rstandard(x$est.model)
plot(sr, type="h", main="standardized residuals",
ylab="", xlab="Time", ...)
abline(h=0)
}
if (2 %in% plots)
{
jb <- jarque.bera.test(r)
plot(density(r), main="residuals density",
xlab=paste("p-value of the Jarque-Bera test = ", round(jb$p.value,4), sep=""), ...)
}
if (3 %in% plots)
{
acf(r, main="residuals ACF", ...)
}
if (4 %in% plots)
{
pacf(r, main="residuals PACF", ...)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/plot.CADFtest.R
|
print.CADFtestsummary <- function(x, ...)
{
# x is an object of class `CADFtestsummary'
ttype <- "Covariate Augmented DF test"
if (nrow(x$test.summary)==3) ttype <- "Augmented DF test"
cat(ttype, "\n")
print(x$test.summary, ...)
print(x$model.summary, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/print.CADFtestsummary.R
|
residuals.CADFtest <- function(object, ...)
{
# object is an object of class CADFtest
residuals.lm(object$est.model)
}
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/residuals.CADFtest.R
|
summary.CADFtest <- function(object, ...)
{
# object is an object of class CADFtest
rnames <-
c("t-test statistic: ",
"estimated rho^2: ",
"p-value: ",
"Max lag of the diff. dependent variable: ",
"Max lag of the stationary covariate(s): ",
"Max lead of the stationary covariate(s): ")
cnames <- "CADF test"
if (is.null(object$parameter))
{
rnames <- rnames[c(1,3:4)]
cnames <- "ADF test"
}
test.summary <- matrix(NA,(6-3*as.numeric(is.null(object$parameter))), 1,
dimnames=list(rnames,cnames))
test.summary[1] <- object$statistic
test.summary[3-as.numeric(is.null(object$parameter))] <- object$p.value
test.summary[4-as.numeric(is.null(object$parameter))] <- object$max.lag.y
if (!is.null(object$parameter))
{
test.summary[2] <- object$parameter
test.summary[5] <- object$max.lag.X
test.summary[6] <- object$min.lag.X
}
model.summary <- summary.lm(object$est.model)
pos.lag.dep <- which(rownames(model.summary$coefficients)=="L(y, 1)")
model.summary$coefficients[pos.lag.dep, 4] <- object$p.value
F <- NA
df.num <- NA
df.den <- NA
k1 <- 0; k0 <- 0
k <- dim(object$est.model$model)[2]-1
T <- dim(object$est.model$model)[1]
if ((object$type == "trend") & (k > 3))
{
reduced.model <- lm(object$est.model$model[,1] ~ object$est.model$model[,2] + object$est.model$model[,3])
k1 <- k + 1; k0 <- 3
}
if ((object$type=="drift") & (k > 2))
{
reduced.model <- lm(object$est.model$model[,1] ~ object$est.model$model[,2])
k1 <- k + 1; k0 <- 2
}
if ((object$type=="none") & (k > 2))
{
reduced.model <- lm(object$est.model$model[,1] ~ -1 + object$est.model$model[,2])
k1 <- k; k0 <- 1
}
if (k1 > k0)
{
s.reduced <- summary(reduced.model)
RSS1 <- sum(model.summary$residuals^2)
RSS0 <- sum(s.reduced$residuals^2)
df.num <- k1 - k0
df.den <- T - k1
F = ((RSS0 - RSS1)/df.num) / (RSS1/df.den)
}
model.summary$fstatistic[1:3] <- c(F, df.num, df.den)
CADFtestsummary <- list(test.summary=test.summary,
model.summary=model.summary)
class(CADFtestsummary) <- c("CADFtestsummary", "summary.dynlm", "summary.lm")
return(CADFtestsummary)
}
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/summary.CADFtest.R
|
update.CADFtest <- function(object, change, ...)
{
# object: is an object of class CADFtest
# change: list of character. It is the change to the model. Ex: list("+ x3 - x2", "kernel = 'Parzen'")
nc <- nchar(object$data.name)
if (substring(object$data.name, nc, nc)==".") substring(object$data.name, nc, nc) <- "1"
for (i in 1:length(change))
{
if ((substring(change[i], 1, 1)=="+")|(substring(change[i], 1, 1)=="-"))
{
# change formula
newformula <- update.formula(object$data.name, paste("~ .",change[i]))
object$call$model <- newformula
}
else
{
text <- paste("object$call$", change[i], sep="")
eval(parse(text=text))
}
}
eval(object$call)
}
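# --- Illustrative usage (a minimal sketch, not part of the package source; ---
# --- assumes objects S and the CADFtest package set up as in the vignette) ----
# Starting from an ADF fit, 'change' can both modify the model formula and
# add or replace arguments of the original call:
ADFt  <- CADFtest(L.gnpperca ~ 1, data = S, max.lag.y = 3)
CADFt <- update(ADFt, change = list("+ D.unemrate",
                                    "kernel = 'Parzen'",
                                    "prewhite = FALSE"))
print(CADFt)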
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/R/update.CADFtest.R
|
### R code from vignette source '/home/claudio/Documents/CADFtest20170531/CADFtest/inst/doc/CADFtest.Rnw'
###################################################
### code chunk number 1: CADFtest.Rnw:8-9
###################################################
options(prompt = "R> ", continue="+ ", useFancyQuotes=FALSE)
###################################################
### code chunk number 2: CADFtest.Rnw:124-126
###################################################
data("npext", package="urca") # load data
library("CADFtest")
###################################################
### code chunk number 3: CADFtest.Rnw:133-134
###################################################
ADFt <- CADFtest(npext$gnpperca, max.lag.y=3)
###################################################
### code chunk number 4: CADFtest.Rnw:139-140
###################################################
ADFt$p.value
###################################################
### code chunk number 5: CADFtest.Rnw:145-146
###################################################
CADFpvalues(ADFt$statistic, type="trend", rho2=1)
###################################################
### code chunk number 6: CADFtest.Rnw:153-154
###################################################
print(ADFt)
###################################################
### code chunk number 7: CADFtest.Rnw:161-162
###################################################
summary(ADFt)
###################################################
### code chunk number 8: CADFtest.Rnw:169-170
###################################################
res.ADFt <- residuals(ADFt)
###################################################
### code chunk number 9: CADFtest.Rnw:175-176
###################################################
plot(ADFt)
###################################################
### code chunk number 10: CADFtest.Rnw:181-182
###################################################
plot(ADFt)
###################################################
### code chunk number 11: CADFtest.Rnw:190-191
###################################################
plot(ADFt, plots=c(1,3,4))
###################################################
### code chunk number 12: CADFtest.Rnw:200-201
###################################################
plot(ADFt, plots=c(1,3,4))
###################################################
### code chunk number 13: CADFtest.Rnw:208-212
###################################################
npext$unemrate <- exp(npext$unemploy) # compute unemployment rate
L <- ts(npext, start=1860) # time series of levels
D <- diff(L) # time series of diffs
S <- window(ts.intersect(L,D), start=1909) # select same sample as Hansen's
###################################################
### code chunk number 14: CADFtest.Rnw:219-220
###################################################
ADFt <- CADFtest(L.gnpperca ~ 1, data = S, max.lag.y = 3)
###################################################
### code chunk number 15: CADFtest.Rnw:227-228
###################################################
CADFtest(L.gnpperca ~ 1, data = S, max.lag.y = 4, criterion = "BIC", dname = "Extended Nelson-Plosser data")
###################################################
### code chunk number 16: CADFtest.Rnw:233-234
###################################################
ADFt2 <- update(ADFt, change=list("max.lag.y = 4", "criterion = 'BIC'", "dname = 'Extended Nelson-Plosser data'"))
###################################################
### code chunk number 17: CADFtest.Rnw:241-242
###################################################
ADFt <- CADFtest(L.gnpperca ~ 1, data = S, max.lag.y = 3)
###################################################
### code chunk number 18: CADFtest.Rnw:247-249
###################################################
CADFt <- update(ADFt, change=list("+ D.unemrate", "kernel = 'Parzen'", "prewhite = FALSE"))
print(CADFt)
###################################################
### code chunk number 19: CADFtest.Rnw:256-258
###################################################
CADFt <- update(CADFt, change=list("max.lag.X = 3", "min.lag.X = -3", "criterion = 'BIC'"))
print(CADFt)
###################################################
### code chunk number 20: CADFtest.Rnw:263-264
###################################################
CADFt <- CADFtest(L.gnpperca ~ D.unemrate, data = S, max.lag.y = 3, max.lag.X = 3, min.lag.X = -3, criterion = "BIC", kernel = "Parzen", prewhite = FALSE)
###################################################
### code chunk number 21: CADFtest.Rnw:269-270
###################################################
CADFt <- CADFtest(L.gnpperca ~ D.unemrate + D.indprod, data = S, max.lag.y = 3, max.lag.X = 3, min.lag.X = -3, criterion = "BIC", kernel = "Parzen", prewhite = FALSE)
###################################################
### code chunk number 22: CADFtest.Rnw:331-333
###################################################
CADFpvalues(t0 = -2.2, rho2 = 0.53)
CADFpvalues(t0 = -1.7, rho2 = 0.20)
###################################################
### code chunk number 23: CADFtest.Rnw:340-341
###################################################
CADFpvalues(-0.44, type = "drift", rho2 = 1)
###################################################
### code chunk number 24: CADFtest.Rnw:357-359
###################################################
library("urca")
adf.urca <- ur.df(npext$gnpperca[-(1:49)], type = "trend", lags = 3)
###################################################
### code chunk number 25: CADFtest.Rnw:366-368
###################################################
library("tseries")
adf.tseries <- adf.test(npext$gnpperca[-(1:49)], k = 3)
###################################################
### code chunk number 26: CADFtest.Rnw:398-404
###################################################
CADFtest.version <- packageVersion("CADFtest")
R.version <- R.Version()$version.string
dynlm.version <- packageVersion("dynlm")
sandwich.version <- packageVersion("sandwich")
tseries.version <- packageVersion("tseries")
urca.version <- packageVersion("urca")
|
/scratch/gouwar.j/cran-all/cranData/CADFtest/inst/doc/CADFtest.R
|
#' @title Compute CAGR (Compound Annual Growth Rate)
#' @description Compute CAGR (Compound Annual Growth Rate)
#' @param data.1 data of the first year
#' @param data.n data of the last year
#' @param n number of years
#'
#' @return CAGR and between years values
#' @export
#' @usage CAGR(data.1, data.n, n)
#' @examples c.cagr<-CAGR(100, 189, 5)
#' @references Bardhan, D., Singh, S.R.K., Raut, A.A. and Athare, T.R. (2022). Livestock in Madhya Pradesh and Chhattisgarh: An Analysis for Some Policy Implications. Agricultural Science Digest. DOI:10.18805/ag.D-5418.
CAGR <- function(data.1, data.n, n){
r<-((data.n/data.1)^(1/n)-1)
### generate data ###
df<-data.frame()
for(i in 1:n)
{
output=(data.1*(1+r)^i)
df=rbind(df, output)
}
colnames(df)<-'y'
y<-as.vector(rbind(data.1,df))
y<-as.data.frame(y)
Output_CAGR <- list(CAGR = r*100, Values = y)
return(Output_CAGR)
}
#' @title Computing Last Year data
#' @description Computing last year data
#'
#'
#' @param data.1 data of the first year
#' @param r CAGR
#' @param n number of years
#'
#' @return Last year data and between years values
#' @export
#' @usage data.last(data.1, r, n)
#' @examples d.last<-data.last(100, 13.57751, 5)
#' @references Bardhan, D., Singh, S.R.K., Raut, A.A. and Athare, T.R. (2022). Livestock in Madhya Pradesh and Chhattisgarh: An Analysis for Some Policy Implications. Agricultural Science Digest. DOI:10.18805/ag.D-5418.
data.last<-function(data.1, r, n){
df <- data.frame()
for (i in 1:n) {
data.n = (data.1 * (1 + r/100)^n)
output = (data.1 * (1 + r/100)^i)
df = rbind(df, output)
}
colnames(df) <- "y"
y <- as.vector(rbind(data.1, df))
y <- as.data.frame(y)
Output_CAGR <- list(data.n, Values = y)
return(Output_CAGR)
}
#' @title Computing First Year data
#' @description Computing first year data
#'
#'
#' @param data.n data of the last year
#' @param r CAGR
#' @param n number of years
#'
#' @return First year data and between years values
#' @export
#' @usage data.first(data.n, r, n)
#' @examples d.first<-data.first(189, 13.57751, 5)
#' @references Bardhan, D., Singh, S.R.K., Raut, A.A. and Athare, T.R. (2022). Livestock in Madhya Pradesh and Chhattisgarh: An Analysis for Some Policy Implications. Agricultural Science Digest. DOI:10.18805/ag.D-5418.
data.first<-function(data.n, r, n){
df <- data.frame()
for (i in 1:n) {
data.1 = (data.n * (1 + r/100)^(-n))
output = (data.1 * (1 + r/100)^i)
df = rbind(df, output)
}
colnames(df) <- "y"
y <- as.vector(rbind(data.1, df))
y <- as.data.frame(y)
Output_CAGR <- list(data.1, Values = y)
return(Output_CAGR)
}
#' @title Computing Number of Years
#' @description Computing number of years
#'
#' @param data.1 data of the first year
#' @param data.n data of the last year
#' @param r CAGR
#'
#' @return Number of years and between years values
#' @export
#' @usage n.years(data.1, data.n, r)
#' @examples n.yrs<-n.years(100, 189, 13.57751)
#' @references Bardhan, D., Singh, S.R.K., Raut, A.A. and Athare, T.R. (2022). Livestock in Madhya Pradesh and Chhattisgarh: An Analysis for Some Policy Implications. Agricultural Science Digest. DOI:10.18805/ag.D-5418.
n.years<-function(data.1, data.n, r){
df <- data.frame()
n = (log10(data.n/data.1)/log10(1+(r/100)))
n = round(n, digits = 0)
for (i in 1:n) {
output = (data.1 * (1 + r/100)^i)
df = rbind(df, output)
}
colnames(df) <- "y"
y <- as.vector(rbind(data.1, df))
y <- as.data.frame(y)
Output_CAGR <- list(n, Values = y)
return(Output_CAGR)
}
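# --- Illustrative round trip (a minimal sketch, not part of the package source) ---
# CAGR is computed as (data.n / data.1)^(1/n) - 1; for 100 growing to 189 over
# 5 periods this gives about 13.58% per period. The remaining helpers invert
# the relationship, so the values below should be mutually consistent.
g <- CAGR(100, 189, 5)        # g$CAGR is approximately 13.58 (percent)
data.last(100, g$CAGR, 5)     # recovers (approximately) 189
data.first(189, g$CAGR, 5)    # recovers (approximately) 100
n.years(100, 189, g$CAGR)     # recovers (approximately) 5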
|
/scratch/gouwar.j/cran-all/cranData/CAGR/R/CAGR.R
|
#' Route length for a TSP instance (for testing/examples)
#'
#' Computes the total length of a closed route on a TSP instance defined by a
#' distance object. Adapted from the traveling-salesman example in
#' stats::optim(); check that documentation for details.
#'
#' @param x a valid closed route for the TSP instance
#' @param mydist object of class _dist_ defining the TSP instance
#'
#' @export
# TESTED: OK
TSP.dist <- function(x, mydist){
distmat <- as.matrix(mydist)
x2 <- stats::embed(x, 2)
y <- sum(distmat[cbind(x2[, 2], x2[, 1])])
return(y)
}
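# --- Illustrative usage (a minimal sketch, not part of the package source) ---
# Total length of a random closed route over ten points in the unit square.
# Note that the route must be closed, i.e., end where it starts.
set.seed(123)
coords <- matrix(runif(20), ncol = 2)   # ten random "cities"
mydist <- stats::dist(coords)           # distance object defining the instance
route  <- sample(1:10)                  # a random visiting order
route  <- c(route, route[1])            # close the tour
TSP.dist(route, mydist)                 # total route length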
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/TSP_dist.R
|
#' Bootstrap the sampling distribution of the mean
#'
#' Bootstraps the sampling distribution of the means for a given vector of observations
#'
#' @section References:
#' - A.C. Davison, D.V. Hinkley:
#' Bootstrap methods and their application. Cambridge University Press (1997)
#' - F. Campelo, F. Takahashi:
#' Sample size estimation for power and accuracy in the experimental
#' comparison of algorithms. Journal of Heuristics 25(2):305-338, 2019.
#'
#' @param x vector of observations
#' @param boot.R number of bootstrap resamples
#' @param ncpus number of cores to use
#' @param seed seed for the PRNG
#'
#' @return vector of bootstrap estimates of the sample mean
#'
#' @author Felipe Campelo (\email{fcampelo@@ufmg.br},
#' \email{f.campelo@@aston.ac.uk})
#'
#' @export
#'
#' @examples
#' x <- rnorm(15, mean = 4, sd = 1)
#' my.sdm <- boot_sdm(x)
#' hist(my.sdm, breaks = 30)
#' qqnorm(my.sdm, pch = 20)
#'
#' x <- runif(12)
#' my.sdm <- boot_sdm(x)
#' qqnorm(my.sdm, pch = 20)
#'
#' # Convergence of the SDM to a Normal distribution as sample size is increased
#' X <- rchisq(1000, df = 3)
#' x1 <- rchisq(10, df = 3)
#' x2 <- rchisq(20, df = 3)
#' x3 <- rchisq(40, df = 3)
#' par(mfrow = c(2, 2))
#' plot(density(X), main = "Estimated pop distribution");
#' hist(boot_sdm(x1), breaks = 25, main = "SDM, n = 10")
#' hist(boot_sdm(x2), breaks = 25, main = "SDM, n = 20")
#' hist(boot_sdm(x3), breaks = 25, main = "SDM, n = 40")
#' par(mfrow = c(1, 1))
# TESTED
boot_sdm <- function(x, # vector of observations
boot.R = 999, # number of bootstrap resamples
ncpus = 1, # number of cores to use
seed = NULL) # PRNG seed
{
# ========== Error catching ========== #
assertthat::assert_that(
is.numeric(x), length(x) > 1,
assertthat::is.count(boot.R), boot.R > 1,
assertthat::is.count(ncpus))
# ==================================== #
# set PRNG seed
if (!is.null(seed)) {
set.seed(seed)
}
# Perform bootstrap
if(ncpus > 1){
x.boot <- parallel::mclapply(1:boot.R,
function(i){
mean(sample(x,
size = length(x),
replace = TRUE))},
mc.cores = ncpus)
} else {
x.boot <- lapply(1:boot.R,
function(i){
mean(sample(x,
size = length(x),
replace = TRUE))})
}
# Return standard error
return(unlist(x.boot))
}
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/boot_sdm.R
|
#' Calculates number of instances for the comparison of multiple algorithms
#'
#' Calculates either the number of instances, or the power(s) of the
#' comparisons of multiple algorithms.
#'
#' The main use of this routine uses the closed formula of the t-test to
#' calculate the number of instances required for the comparison of pairs of
#' algorithms, given a desired power and standardized effect size of
#' interest. Significance levels of each comparison are adjusted using
#' Holm's step-down correction (the default). The routine also takes into
#' account whether the desired statistical power refers to the mean power
#' (the default), median, or worst-case (which is equivalent to
#' designing the experiment for the more widely-known Bonferroni correction).
#' See the reference by `Campelo and Wanner` for details.
#'
#' @section Sample Sizes for Nonparametric Methods:
#' If the parameter `test` is set to either `Wilcoxon` or `Binomial`, this
#' routine approximates the number of instances using the ARE of these tests
#' in relation to the paired t.test, using the formulas (see reference by
#' `Campelo and Takahashi` for details):
#'
#' \deqn{n.wilcox = n.ttest / 0.86 = 1.163 * n.ttest}
#' \deqn{n.binom = n.ttest / 0.637 = 1.570 * n.ttest}
#'
#' @param ncomparisons number of comparisons planned
#' @param ninstances the number of instances to be used in the experiment.
#' @param d minimally relevant effect size (MRES, expressed as a standardized
#' effect size, i.e., "deviation from H0" / "standard deviation")
#' @param power target power for the comparisons (see `Details`)
#' @param sig.level desired family-wise significance level (alpha) for the
#' experiment
#' @param alternative.side type of alternative hypothesis to be performed
#' ("two.sided" or "one.sided")
#' @param test type of test to be used
#' ("t.test", "wilcoxon" or "binomial")
#' @param power.target which comparison should have the desired \code{power}?
#' Accepts "mean", "median", or "worst.case" (this last one
#' is equivalent to the Bonferroni correction).
#'
#' @return a list object containing the following items:
#' \itemize{
#' \item \code{ninstances} - number of instances
#' \item \code{power} - the power of the comparison
#' \item \code{d} - the effect size
#' \item \code{sig.level} - significance level
#' \item \code{alternative.side} - type of alternative hypothesis
#' \item \code{test} - type of test
#' }
#'
#' @references
#' - P. Mathews.
#' Sample size calculations: Practical methods for engineers and scientists.
#' Mathews Malnar and Bailey, 2010.
#' - F. Campelo, F. Takahashi:
#' Sample size estimation for power and accuracy in the experimental
#' comparison of algorithms. Journal of Heuristics 25(2):305-338, 2019.
#' - F. Campelo, E. Wanner:
#' Sample size calculations for the experimental comparison of multiple
#' algorithms on multiple problem instances.
#' Submitted, Journal of Heuristics, 2019.
#'
#' @author Felipe Campelo (\email{fcampelo@@ufmg.br},
#' \email{f.campelo@@aston.ac.uk})
#'
#'
#' @examples
#' # Calculate sample size for mean-case power
#' K <- 10 # number of comparisons
#' alpha <- 0.05 # significance level
#' power <- 0.9 # desired power
#' d <- 0.5 # MRES
#'
#' out <- calc_instances(K, d,
#' power = power,
#' sig.level = alpha)
#'
#' # Plot power of each comparison to detect differences of magnitude d
#' plot(1:K, out$power,
#' type = "b", pch = 20, las = 1, ylim = c(0, 1), xlab = "comparison",
#' ylab = "power", xaxs = "i", xlim = c(0, 11))
#' grid(11, NA)
#' points(c(0, K+1), c(power, power), type = "l", col = 2, lty = 2, lwd = .5)
#' text(1, 0.93, sprintf("Mean power = %2.2f for N = %d",
#' out$mean.power, out$ninstances), adj = 0)
#'
#' # Check sample size if planning for Wilcoxon tests:
#' calc_instances(K, d,
#' power = power,
#' sig.level = alpha,
#' test = "wilcoxon")$ninstances
#'
#'
#' # Calculate power profile for predefined sample size
#' N <- 45
#' out2 <- calc_instances(K, d, ninstances = N, sig.level = alpha)
#'
#' points(1:K, out2$power, type = "b", pch = 19, col = 3)
#' text(6, .7, sprintf("Mean power = %2.2f for N = %d",
#' out2$mean.power, out2$ninstances), adj = 0)
#'
#' # Sample size for worst-case (Bonferroni) power of 0.8, using Wilcoxon
#' out3 <- calc_instances(K, d, power = 0.9, sig.level = alpha,
#' test = "wilcoxon", power.target = "worst.case")
#' out3$ninstances
#'
#' # For median power:
#' out4 <- calc_instances(K, d, power = 0.9, sig.level = alpha,
#' test = "wilcoxon", power.target = "median")
#' out4$ninstances
#' out4$power
#'
#' @export
# TESTED: OK
calc_instances <- function(ncomparisons, # number of comparisons
d, # MRES
ninstances = NULL, # number of instances
power = NULL, # power
sig.level = 0.05, # significance level
alternative.side = "two.sided", # type of H1
test = "t.test", # type of test
power.target = "mean") # target power design
{
test <- match.arg(tolower(test),
c("t.test", "wilcoxon", "binomial"))
alternative.side <- match.arg(tolower(alternative.side),
c("one.sided", "two.sided"))
power.target <- match.arg(tolower(power.target),
c("worst.case", "mean", "median"))
# ========== Error catching ========== #
assertthat::assert_that(
assertthat::is.count(ncomparisons),
is.null(ninstances) || (assertthat::is.count(ninstances) && ninstances > 1),
is.null(power) || (is.numeric(power) && power > 0 && power < 1),
is.null(d) || (is.numeric(d) && d > 0),
sum(c(is.null(ninstances), is.null(power), is.null(d))) == 1,
is.numeric(sig.level) && sig.level > 0 && sig.level < 1,
alternative.side %in% c("one.sided", "two.sided"),
test %in% c("t.test", "wilcoxon", "binomial"),
power.target %in% c("worst.case", "mean", "median"))
# ==================================== #
# Calculate correction multiplier depending on test type
# Based on the ARE of the tests (See Sheskin 1996)
corr.factor <- switch(test,
t.test = 1,
wilcoxon = 1 / 0.86,
binomial = 1 / 0.637)
if (is.null(ninstances)){ # Estimate sample size
if (power.target == "mean"){
# Start by calculating N without any correction:
N <- stats::power.t.test(delta = d, sd = 1,
sig.level = sig.level, power = power,
type = "paired",
alternative = alternative.side)$n - 1
p.mean <- 0 # mean power
while (p.mean < power){
N <- ceiling(N) + 1
p <- numeric(ncomparisons)
a <- numeric(ncomparisons)
for (i in seq_along(p)){
ss <- stats::power.t.test(delta = d, sd = 1, n = N,
sig.level = sig.level / (ncomparisons - i + 1),
type = "paired",
alternative = alternative.side)
p[i] <- ss$power
a[i] <- ss$sig.level
}
p.mean <- mean(p)
}
p.median <- stats::median(p)
#
} else {
if (power.target == "worst.case") r <- 1 # Bonferroni correction
if (power.target == "median") r <- floor(ncomparisons / 2)
N <- stats::power.t.test(delta = d, sd = 1,
sig.level = sig.level / (ncomparisons - r + 1),
power = power,
type = "paired", alternative = alternative.side)$n
# Calculate individual power of each comparison, + mean and median power
p <- numeric(ncomparisons)
a <- numeric(ncomparisons)
for (i in seq_along(p)){
ss <- stats::power.t.test(delta = d, sd = 1, n = ceiling(N),
sig.level = sig.level / (ncomparisons - i + 1),
type = "paired",
alternative = alternative.side)
p[i] <- ss$power
a[i] <- ss$sig.level
}
p.mean <- mean(p)
p.median <- stats::median(p)
}
# Adjust sample size depending on the type of test to be performed
N <- ceiling(N * corr.factor)
} else if (is.null(power)){ # calculate power profile of experiment
N <- ninstances
p <- numeric(ncomparisons)
a <- numeric(ncomparisons)
for (i in seq_along(p)){
ss <- stats::power.t.test(delta = d, sd = 1, n = ceiling(N / corr.factor),
sig.level = sig.level / (ncomparisons - i + 1),
type = "paired",
alternative = alternative.side)
p[i] <- ss$power
a[i] <- ss$sig.level
}
p.mean <- mean(p)
p.median <- stats::median(p)
}
output <- list(ninstances = N,
power = p,
mean.power = p.mean,
median.power = p.median,
d = d,
sig.level = a,
alternative = alternative.side,
test = test,
power.target = power.target)
return(output)
}
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/calc_instances.R
|
#' Determine sample sizes for a set of algorithms on a single problem instance
#'
#' Iteratively calculates the required sample sizes for K algorithms
#' on a given problem instance, so that the standard errors of the estimates of
#' the pairwise differences in performance are controlled at a predefined level.
#'
#' @section Instance:
#' Parameter `instance` must be a named list containing all relevant parameters
#' that define the problem instance. This list must contain at least the field
#' `instance$FUN`, with the name of the function implementing the problem
#' instance, that is, a routine that calculates y = f(x). If the instance
#' requires additional parameters, these must also be provided as named fields.
#'
#' @section Algorithms:
#' Object `algorithms` is a list in which each component is a named
#' list containing all relevant parameters that define an algorithm to be
#' applied for solving the problem instance. In what follows `algorithm[[k]]`
#' refers to any algorithm specified in the `algorithms` list.
#'
#' `algorithm[[k]]` must contain an `algorithm[[k]]$FUN` field, which is a
#' character object with the name of the function that calls the algorithm; as
#' well as any other elements/parameters that `algorithm[[k]]$FUN` requires
#' (e.g., stop criteria, operator names and parameters, etc.).
#'
#' The function defined by the routine `algorithm[[k]]$FUN` must have the
#' following structure: supposing that the list in `algorithm[[k]]` has
#' fields `algorithm[[k]]$FUN = "myalgo"`, `algorithm[[k]]$par1 = "a"` and
#' `algorithm$par2 = 5`, then:
#'
#' \preformatted{
#' myalgo <- function(par1, par2, instance, ...){
#' # do stuff
#' # ...
#' return(results)
#' }
#' }
#'
#' That is, it must be able to run if called as:
#'
#' \preformatted{
#' # remove '$FUN' and '$alias' fields from list of arguments
#' # and include the problem definition as field 'instance'
#' myargs <- algorithm[names(algorithm) != "FUN"]
#' myargs <- myargs[names(myargs) != "alias"]
#' myargs$instance <- instance
#'
#' # call function
#' do.call(algorithm$FUN,
#' args = myargs)
#' }
#'
#' The `algorithm$FUN` routine must return a list containing (at
#' least) the performance value of the final solution obtained, in a field named
#' `value` (e.g., `result$value`) after a given run.
#'
#' @section Initial Number of Observations:
#' In the **general case** the initial number of observations per algorithm
#' (`nstart`) should be relatively high. For the parametric case
#' we recommend between 10 and 20 if outliers are not expected, or between 30
#' and 50 if that assumption cannot be made. For the bootstrap approach we
#' recommend using at least 20. However, if some distributional assumptions can
#' be made (particularly low skewness of the population of algorithm results on
#' the test instances), then `nstart` can in principle be as small as 5 (if the
#' output of the algorithms were known to be normal, it could be 1).
#'
#' In general, higher sample sizes are the price to pay for abandoning
#' distributional assumptions. Use lower values of `nstart` with caution.
#'
#' @section Pairwise Differences:
#' Parameter `dif` informs the type of difference in performance to be used
#' for the estimation (\eqn{\mu_a} and \eqn{\mu_b} represent the mean
#' performance of any two algorithms on the test instance, and \eqn{mu}
#' represents the grand mean of all algorithms given in `algorithms`):
#'
#' - If `dif == "perc"` and `comparisons == "all.vs.first"`, the estimated quantity is
#' \eqn{\phi_{1b} = (\mu_1 - \mu_b) / \mu_1 = 1 - (\mu_b / \mu_1)}.
#'
#' - If `dif == "perc"` and `comparisons == "all.vs.all"`, the estimated quantity is
#' \eqn{\phi_{ab} = (\mu_a - \mu_b) / \mu}.
#'
#' - If `dif == "simple"` it estimates \eqn{\mu_a - \mu_b}.
#'
#' @param instance a list object containing the definitions of the problem
#' instance.
#' See Section `Instance` for details.
#' @param algorithms a list object containing the definitions of all algorithms.
#' See Section `Algorithms` for details.
#' @param se.max desired upper limit for the standard error of the estimated
#' difference between pairs of algorithms. See Section
#' `Pairwise Differences` for details.
#' @param dif type of difference to be used. Accepts "perc" (for percent
#' differences) or "simple" (for simple differences)
#' @param comparisons type of comparisons being performed. Accepts "all.vs.first"
#' (in which cases the first object in `algorithms` is considered to be
#' the reference algorithm) or "all.vs.all" (if there is no reference
#' and all pairwise comparisons are desired).
#' @param method method to use for estimating the standard errors. Accepts
#' "param" (for parametric) or "boot" (for bootstrap)
#' @param nstart initial number of algorithm runs for each algorithm.
#' See Section `Initial Number of Observations` for details.
#' @param nmax maximum **total** allowed number of runs to execute. Loaded
#' results (see `load.folder` below) do not count towards this
#' total.
#' @param seed seed for the random number generator
#' @param boot.R number of bootstrap resamples to use (if `method == "boot"`)
#' @param ncpus number of cores to use
#' @param force.balanced logical flag to force the use of balanced sampling for
#' the algorithms on each instance
#' @param load.folder name of folder to load results from. Use either "" or
#' "./" for the current working directory. Accepts relative paths.
#' Use `NA` for not loading. `calc_nreps()` will look for a .rds file
#' with the same name
#' @param save.folder name of folder to save the results. Use either "" or
#' "./" for the current working directory. Accepts relative paths.
#' Use `NA` for not saving.
#'
#' @return a list object containing the following items:
#' \itemize{
#' \item \code{instance} - alias for the problem instance considered
#' \item \code{Xk} - list of observed performance values for all `algorithms`
#' \item \code{Nk} - vector of sample sizes generated for each algorithm
#' \item \code{Diffk} - data frame with point estimates, standard errors and
#' other information for all algorithm pairs of interest
#' \item \code{seed} - seed used for the PRNG
#' \item \code{dif} - type of difference used
#' \item \code{method} - method used ("param" / "boot")
#' \item \code{comparisons} - type of pairings ("all.vs.all" / "all.vs.first")
#' }
#'
#' @author Felipe Campelo (\email{fcampelo@@gmail.com})
#'
#' @export
#'
#' @references
#' - F. Campelo, F. Takahashi:
#' Sample size estimation for power and accuracy in the experimental
#' comparison of algorithms. Journal of Heuristics 25(2):305-338, 2019.
#' - P. Mathews.
#' Sample size calculations: Practical methods for engineers and scientists.
#' Mathews Malnar and Bailey, 2010.
#' - A.C. Davison, D.V. Hinkley:
#' Bootstrap methods and their application. Cambridge University Press (1997)
#' - E.C. Fieller:
#' Some problems in interval estimation. Journal of the Royal Statistical
#' Society. Series B (Methodological) 16(2), 175–185 (1954)
#' - V. Franz:
#' Ratios: A short guide to confidence limits and proper use (2007).
#' https://arxiv.org/pdf/0710.2024v1.pdf
#' - D.C. Montgomery, C.G. Runger:
#' Applied Statistics and Probability for Engineers, 6th ed. Wiley (2013)
#'
#' @examples
#' # Example using dummy algorithms and instances. See ?dummyalgo for details.
#' # We generate dummy algorithms with true means 15, 10, 30, 15, 20; and true
#' # standard deviations 2, 4, 6, 8, 10.
#' algorithms <- mapply(FUN = function(i, m, s){
#' list(FUN = "dummyalgo",
#' alias = paste0("algo", i),
#' distribution.fun = "rnorm",
#' distribution.pars = list(mean = m, sd = s))},
#' i = c(alg1 = 1, alg2 = 2, alg3 = 3, alg4 = 4, alg5 = 5),
#' m = c(15, 10, 30, 15, 20),
#' s = c(2, 4, 6, 8, 10),
#' SIMPLIFY = FALSE)
#'
#' # Make a dummy instance with a centered (zero-mean) exponential distribution:
#' instance = list(FUN = "dummyinstance", distr = "rexp", rate = 5, bias = -1/5)
#'
#' # Set all other parameters explicitly (just this one time:
#' # most have reasonable default values)
#' myreps <- calc_nreps(instance = instance,
#' algorithms = algorithms,
#' se.max = 0.05, # desired (max) standard error
#' dif = "perc", # type of difference
#' comparisons = "all.vs.all", # differences to consider
#' method = "param", # method ("param", "boot")
#' nstart = 15, # initial number of samples
#' nmax = 1000, # maximum allowed sample size
#' seed = 1234, # seed for PRNG
#' boot.R = 499, # number of bootstrap resamples (unused)
#' ncpus = 1, # number of cores to use
#' force.balanced = FALSE, # force balanced sampling?
#' load.folder = NA, # file to load results from
#' save.folder = NA) # folder to save results
#' summary(myreps)
#' plot(myreps)
calc_nreps <- function(instance, # instance parameters
algorithms, # algorithm parameters
se.max, # desired (max) standard error
dif = "simple", # type of difference
comparisons = "all.vs.all", # differences to consider
method = "param", # method ("param", "boot")
nstart = 20, # initial number of samples
nmax = 1000, # maximum allowed sample size
seed = NULL, # seed for PRNG
boot.R = 499, # number of bootstrap resamples
ncpus = 1, # number of cores to use
force.balanced = FALSE, # force balanced sampling?
load.folder = NA, # folder to load results from
save.folder = NA) # folder to save results to
{
# ========== Error catching ========== #
assertthat::assert_that(
is.list(instance),
assertthat::has_name(instance, "FUN"),
is.list(algorithms),
all(sapply(X = algorithms, FUN = is.list)),
all(sapply(X = algorithms,
FUN = function(x){assertthat::has_name(x, "FUN")})),
is.numeric(se.max) && length(se.max) == 1,
dif %in% c("simple", "perc"),
comparisons %in% c("all.vs.all", "all.vs.first"),
method %in% c("param", "boot"),
assertthat::is.count(nstart),
is.infinite(nmax) || assertthat::is.count(nmax),
nmax >= length(algorithms) * nstart,
is.null(seed) || seed == seed %/% 1,
assertthat::is.count(boot.R), boot.R > 1,
is.logical(force.balanced), length(force.balanced) == 1,
is.na(save.folder) || (length(save.folder) == 1 && is.character(save.folder)),
is.na(load.folder) || (length(load.folder) == 1 && is.character(load.folder)))
# ==================================== #
# set PRNG seed
if (is.null(seed)) seed <- as.numeric(Sys.time())
set.seed(seed)
# Set instance alias if needed
if (!("alias" %in% names(instance))) {
instance$alias <- instance$FUN
}
# Set algorithm aliases if needed
for (i in seq_along(algorithms)){
if (!("alias" %in% names(algorithms[[i]]))) {
algorithms[[i]]$alias <- algorithms[[i]]$FUN
}
}
# Initialize vectors
Xk <- vector(mode = "list", length = length(algorithms))
Nk <- numeric(length(algorithms))
names(Xk) <- sapply(algorithms, function(x)x$alias)
names(Nk) <- names(Xk)
# Load results (if required)
if (!is.na(load.folder)){
if(load.folder == "") load.folder <- "./"
# Check that folder exists
if (dir.exists(load.folder)){
filepath <- paste0(normalizePath(load.folder),
"/", instance$alias, ".rds")
# Check that a file for this instance exists in the folder
if (file.exists(filepath)){
data.in.file <- readRDS(filepath)
algos.in.file <- names(data.in.file$Nk)
cat("\nExisting data loaded for instance:", instance$alias)
# Extract relevant observations from loaded results
for (i in seq_along(algos.in.file)){
if (algos.in.file[i] %in% names(Xk)){
indx <- which(algos.in.file[i] == names(Xk))
Xk[[indx]] <- data.in.file$Xk[[i]]
Nk[[indx]] <- data.in.file$Nk[[i]]
cat("\n", Nk[[indx]],
"observations retrieved for algorithm:", algos.in.file[i])
}
}
} else {
cat("\nNOTE: Instance file '", filepath, "' not found.")
}
} else {
cat("\nNOTE: folder '", normalizePath(load.folder), "' not found.")
}
}
n.loaded <- Nk
cat("\nSampling algorithms on instance", instance$alias, ": ")
# generate initial samples (if required)
n0 <- ifelse(rep(force.balanced, length(Nk)),
yes = max(c(Nk, nstart)) - Nk,
no = nstart - pmin(nstart, Nk))
newX <- parallel::mcmapply(FUN = get_observations,
algo = algorithms,
n = n0,
MoreArgs = list(instance = instance),
mc.cores = ncpus,
SIMPLIFY = FALSE)
# Append new observation to each algo list and update sample size counters
Xk <- mapply(FUN = c, Xk, newX,
SIMPLIFY = FALSE)
Nk <- sapply(Xk, length)
# Calculate point estimates, SEs, and sample size ratios (current x optimal)
Diffk <- calc_se(Xk = Xk,
dif = dif,
comparisons = comparisons,
method = method,
boot.R = boot.R)
while(any(Diffk$SE > se.max) & (sum(Nk) - sum(n.loaded) < nmax)){
# Echo something for the user
if (!(sum(Nk) %% nstart)) cat(".")
# Determine which algorithm(s) should get new observation
n <- numeric(length(algorithms))
if(force.balanced){
ind <- 1:length(algorithms)
} else {
# Get pair that has the worst SE
worst.se <- Diffk[which.max(Diffk$SE), ]
# Determine algorithm from worst.se that should receive a new observation
if (worst.se$r <= worst.se$ropt){
ind <- worst.se[1, 1]
} else {
ind <- worst.se[1, 2]
}
}
n[ind] <- 1
# Generate new observation(s)
newX <- parallel::mcmapply(FUN = get_observations,
algo = algorithms,
n = n,
MoreArgs = list(instance = instance),
mc.cores = ncpus,
SIMPLIFY = FALSE)
# Append new observation(s) and update sample size counters
Xk <- mapply(FUN = c, Xk, newX,
SIMPLIFY = FALSE)
Nk[ind] <- Nk[ind] + 1
# Recalculate point estimates, SEs, and sample size ratios
Diffk <- calc_se(Xk = Xk,
dif = dif,
comparisons = comparisons,
method = method,
boot.R = boot.R)
}
# Assemble output list
output <- list(instance = instance$alias,
Xk = Xk,
Nk = Nk,
n.loaded = n.loaded,
Diffk = Diffk,
dif = dif,
method = method,
comparisons = comparisons,
seed = seed)
class(output) <- c("nreps", "list")
# Save to file if required
if (!is.na(save.folder)){
# Check save folder
if(save.folder == "") save.folder <- "./"
save.folder <- normalizePath(save.folder)
if(!dir.exists(save.folder)) dir.create(save.folder)
# Prepare save filename
save.file <- paste0(save.folder, "/", instance$alias, ".rds")
# save output to file
cat("\nWriting file", basename(save.file))
saveRDS(output, file = save.file)
}
# Return output
return(output)
}
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/calc_nreps.R
|
#' Calculates the standard error for simple and percent differences
#'
#' Calculates the sample standard errors for the estimators of the differences
#' in performance between multiple algorithms on a given instance.
#'
#' - If `dif == "perc"` it returns the standard errors for the sample
#' estimates of pairs
#' \eqn{(mu2 - mu1) / mu}, where \eqn{mu1, mu2} are the means of the
#' populations that generated sample vectors \eqn{x1, x2}, and
#' - If `dif == "simple"` it returns the SE for the sample estimator of
#' \eqn{(mu2 - mu1)}
#'
#' @section References:
#' - F. Campelo, F. Takahashi:
#' Sample size estimation for power and accuracy in the experimental
#' comparison of algorithms. Journal of Heuristics 25(2):305-338, 2019.
#'
#' @param Xk list object where each position contains a vector of observations
#' of algorithm k on a given problem instance.
#' @param dif name of the difference for which the SEs are desired.
#' Accepts "perc" (for percent differences) or "simple" (for simple
#' differences)
#' @param comparisons standard errors to be calculated. Accepts "all.vs.first"
#' (in which cases the first object in `algorithms` is considered to be
#' the reference algorithm) or "all.vs.all" (if there is no reference
#' and all pairwise SEs are desired).
#' @param method method used to calculate the interval. Accepts "param"
#' (using parametric formulas based on normality of the sampling
#' distribution of the means) or "boot" (for bootstrap).
#' @param boot.R (optional) number of bootstrap resamples
#' (if `method == "boot"`)
#'
#' @return a list object containing the following items:
#' \itemize{
#' \item \code{Phi.est} - estimated values of the statistic of interest for
#' each pair of algorithms of interest (all pairs if `comparisons == "all.vs.all"`,
#' or all pairs containing the first algorithm if `comparisons == "all.vs.first"`).
#' \item \code{se} - standard error estimates
#' }
#'
#' @author Felipe Campelo (\email{fcampelo@@ufmg.br},
#' \email{f.campelo@@aston.ac.uk})
#'
#' @export
#'
#' @examples
#' # three vectors of normally distributed observations
#' set.seed(1234)
#' Xk <- list(rnorm(10, 5, 1), # mean = 5, sd = 1,
#' rnorm(20, 10, 2), # mean = 10, sd = 2,
#' rnorm(50, 15, 5)) # mean = 15, sd = 3
#'
#' calc_se(Xk, dif = "simple", comparisons = "all.vs.all", method = "param")
#' calc_se(Xk, dif = "simple", comparisons = "all.vs.all", method = "boot")
#'
#' calc_se(Xk, dif = "perc", comparisons = "all.vs.first", method = "param")
#' calc_se(Xk, dif = "perc", comparisons = "all.vs.first", method = "boot")
#'
#' calc_se(Xk, dif = "perc", comparisons = "all.vs.all", method = "param")
#' calc_se(Xk, dif = "perc", comparisons = "all.vs.all", method = "boot")
# TESTED: OK
calc_se <- function(Xk, # vector of observations
dif = "simple", # type of difference
comparisons = "all.vs.all", # standard errors to calculate
method = "param", # method for calculating CI
boot.R = 999) # number of bootstrap resamples
{
# ========== Error catching ========== #
assertthat::assert_that(
is.list(Xk),
all(sapply(Xk, is.numeric)),
all(sapply(Xk, function(x){length(x) >= 2})),
dif %in% c('simple', 'perc'),
comparisons %in% c("all.vs.all", "all.vs.first"),
method %in% c('param', 'boot'),
assertthat::is.count(boot.R))
# ==================================== #
# Calculate point estimates and standard errors
if (method == "param"){
Diffk <- se_param(Xk = Xk, dif = dif, comparisons = comparisons)
} else if (method == "boot"){
Diffk <- se_boot(Xk = Xk, dif = dif, comparisons = comparisons,
boot.R = boot.R)
}
# Fix NaN problem that happens if some variance = 0
Diffk$SE[is.nan(Diffk$SE)] <- 0
return(Diffk)
}
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/calc_se.R
|
#' Consolidate results from partial files
#'
#' Consolidates results from a set of partial files (each generated by an
#' individual call to [calc_nreps()]) into a single output structure, similar
#' (but not identical) to the output of [run_experiment()]. This is useful
#' e.g., to consolidate the results from interrupted experiments.
#'
#' @param Configuration a named list containing all parameters
#' required in a call to [run_experiment()] except
#' `instances` and `algorithms`. See the parameter list
#' and default values in [run_experiment()]. Notice that
#' this is always returned as part of the output structure
#' of [run_experiment()], so it generally easier to just
#' retrieve it from previously saved results.
#'
#' @param folder folder where the partial files are located.
#'
#' @return a list object containing the following fields:
#' \itemize{
#' \item \code{data.raw} - data frame containing all observations generated
#' \item \code{data.summary} - data frame summarizing the experiment.
#' \item \code{N} - number of instances sampled
#' \item \code{total.runs} - total number of algorithm runs performed
#' \item \code{instances.sampled} - names of the instances sampled
#' }
#'
#' @export
#'
consolidate_partial_results <- function(Configuration,
folder = "./nreps_files")
{
# Error checking
assertthat::assert_that(is.character(folder),
length(folder) == 1)
# Get filenames from folder
filenames <- dir(path = normalizePath(folder), pattern = ".rds")
if(length(filenames) == 0){
cat("No files to load in folder", normalizePath(folder))
return(invisible(-1))
}
# Read results from files
my.results <- vector(mode = "list", length = length(filenames))
for (i in seq(length(filenames))){
my.results[[i]] <- readRDS(file = paste0(folder, "/", filenames[i]))
}
# Consolidate raw results
data.raw <- lapply(X = my.results,
FUN = function(x){
inst <- x$instance
nj <- sum(x$Nk)
data.frame(Algorithm = do.call(what = c,
mapply(rep,
names(x$Nk),
x$Nk,
SIMPLIFY = FALSE)),
Instance = rep(inst, nj),
Observation = do.call(c, x$Xk),
stringsAsFactors = FALSE)})
data.raw <- do.call(rbind, data.raw)
# Consolidate summary data
data.summary <- lapply(X = my.results,
FUN = function(x){
cbind(Instance = rep(x$instance, nrow(x$Diffk)),
x$Diffk)})
data.summary <- do.call(rbind, data.summary)
# ============ Assemble output ============ #
n.algs <- length(unique(data.raw$Algorithm))
n.comparisons <- switch(Configuration$comparisons,
all.vs.all = n.algs * (n.algs - 1) / 2,
all.vs.first = n.algs - 1)
if ("alternative.side" %in% names(Configuration)){
altsid <- Configuration$alternative.side
} else {
altsid <- Configuration$alternative
if (altsid != "two.sided") altsid <- "one.sided"
}
ss.calc <- calc_instances(ncomparisons = n.comparisons,
d = Configuration$d,
ninstances = length(unique(data.raw$Instance)),
sig.level = Configuration$sig.level,
alternative.side = altsid,
test = Configuration$test,
power.target = Configuration$power.target)
output <- list(Configuration = Configuration,
data.raw = data.raw,
data.summary = data.summary,
N = length(unique(data.raw$Instance)),
N.star = NA,
Underpowered = NA,
total.runs = nrow(data.raw),
instances.sampled = unique(data.raw$Instance),
samplesize.calc = ss.calc)
# Return output
class(output) <- c("CAISEr", "list")
return(output)
}
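# Illustrative usage sketch (kept as a comment; folder and object names are
# examples). Assumes a previous call to run_experiment() saved per-instance
# partial files to "./nreps_files" and that its output object is available to
# supply the Configuration list:
#
# my.config    <- my.results$Configuration
# consolidated <- consolidate_partial_results(Configuration = my.config,
#                                             folder = "./nreps_files")
# str(consolidated$data.summary)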
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/consolidate_partial_results.R
|
#' Dummy algorithm routine to test the sampling procedures
#'
#' This is a dummy algorithm routine to test the sampling procedures, in
#' combination with [dummyinstance()].
#' `dummyalgo()` receives two parameters that determine the distribution of
#' performances it will exhibit on a hypothetical problem class:
#' `distribution.fun` (with the name of a random number generation function,
#' e.g. `rnorm`, `runif`, `rexp` etc.); and `distribution.pars`, a named list of
#' parameters to be passed on to `distribution.fun`.
#' The third parameter is an instance object (see [calc_nreps()] for details),
#' which is a named list with the following fields:
#' \itemize{
#' \item\code{FUN = "dummyinstance"} - must always be "dummyinstance" (will
#' be ignored otherwise).
#' \item\code{distr} - the name of a random number generation function.
#' \item\code{...} - other named fields with parameters to be passed down
#' to the function in `distr`.
#' }
#'
#' `distribution.fun` and `distribution.pars` regulate the mean performance of
#' the dummy algorithm on a given (hypothetical) problem class, and the
#' between-instances variance of performance. The instance specification in
#' `instance` regulates the within-instance variability of results. Ideally the
#' distribution parameters passed to the `instance` should result in a
#' within-instance distribution of values with zero mean, so that the mean of
#' the values returned by `dummyalgo` is regulated only by `distribution.fun`
#' and `distribution.pars`.
#'
#' The value returned by dummyalgo is sampled as follows:
#' \preformatted{
#' offset <- do.call(distribution.fun, args = distribution.pars)
#' y <- offset + do.call("dummyinstance", args = instance)
#' }
#'
#' @param distribution.fun name of a function that generates random values
#' according to a given distribution, e.g., "rnorm", "runif", "rexp" etc.
#' @param distribution.pars list of named parameters required by the function
#' in \code{distribution.fun}. Parameter \code{n} (number of points to
#' generate) is unnecessary (this routine always forces `n = 1`).
#' @param instance instance parameters (see `Details`).
#'
#' @return a list object with a single field \code{$value}, containing a scalar
#' numerical value distributed as described at the end of `Details`.
#'
#' @author Felipe Campelo (\email{fcampelo@@ufmg.br},
#' \email{f.campelo@@aston.ac.uk})
#'
#' @seealso [dummyinstance()]
#'
#' @export
#'
#' @examples
#' # Make a dummy instance with a centered (zero-mean) exponential distribution:
#' instance = list(FUN = "dummyinstance", distr = "rexp", rate = 5, bias = -1/5)
#'
#' # Simulate a dummy algorithm that has a uniform distribution of expected
#' # performance values, between -25 and 50.
#' dummyalgo(distribution.fun = "runif",
#' distribution.pars = list(min = -25, max = 50),
#' instance = instance)
#'
# TESTED: OK
dummyalgo <- function(distribution.fun = "rnorm",
distribution.pars = list(mean = 0, sd = 1),
instance = list(FUN = "dummyinstance",
distr = "rnorm",
mean = 0,
sd = 1))
{
# ========== Error catching ========== #
assertthat::assert_that(
is.character(distribution.fun),
length(distribution.fun) == 1,
is.list(distribution.pars),
is.list(instance),
all(c("FUN", "distr") %in% names(instance)))
# ==================================== #
distribution.pars$n <- 1
offset <- do.call(distribution.fun, args = distribution.pars)
y <- offset + do.call("dummyinstance", args = instance)
return(list(value = y))
}
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/dummyalgo.R
|
#' Dummy instance routine to test the sampling procedures
#'
#' This is a dummy instance routine to test the sampling procedures, in
#' combination with [dummyalgo()].
#' `dummyinstance()` receives a parameter `distr` containing the name of a
#' random number generation function (e.g. `rnorm`, `runif`, `rexp` etc.), plus
#' a variable number of arguments to be passed down to the function in `distr`.
#'
#' @param distr name of a function that generates random values
#' according to a given distribution, e.g., "rnorm", "runif", "rexp" etc.
#' @param ... additional parameters to be passed down to the function in
#' `distr`. Parameter \code{n} (number of points to generate) is unnecessary
#' (this routine always forces `n = 1`).
#' @param bias a bias term to add to the results of the distribution function
#' (e.g., to set the mean to zero).
#'
#' @return a single numeric value sampled from the desired distribution.
#'
#' @author Felipe Campelo (\email{fcampelo@@ufmg.br},
#' \email{f.campelo@@aston.ac.uk})
#'
#' @seealso [dummyalgo()]
#'
#' @export
#'
#' @examples
#' dummyinstance(distr = "rnorm", mean = 10, sd = 1)
#'
#' # Make a centered (zero-mean) exponential distribution:
#' lambda = 4
#'
#' # 10000 observations
#' set.seed(1234)
#' y <- numeric(10000)
#' for (i in 1:10000) y[i] <- dummyinstance(distr = "rexp", rate = lambda,
#' bias = -1/lambda)
#' mean(y)
#' hist(y, breaks = 50, xlim = c(-0.5, 2.5))
# TESTED: OK
dummyinstance <- function(distr, ..., bias = 0){
# ========== Error catching ========== #
assertthat::assert_that(
is.character(distr),
length(distr) == 1,
is.numeric(bias), length(bias) == 1)
# ==================================== #
args <- list(...)
if ("FUN" %in% names(args)) args$FUN <- NULL
if ("alias" %in% names(args)) args$alias <- NULL
args$n <- 1
return(bias + do.call(distr, args = args))
}
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/dummyinstance.R
|
#' Simulated annealing (for testing/examples)
#'
#' Adapted from stats::optim(). Check their documentation / examples for
#' details.
#'
#' @param Temp controls the "SANN" method. It is the starting temperature for
#' the cooling schedule.
#' @param budget stop criterion: number of function evaluations to execute
#' @param instance an instance object (see [calc_nreps()] for details)
#'
#' @export
#'
#' @examples
#' \dontrun{
#' instance <- list(FUN = "TSP.dist", mydist = datasets::eurodist)
#'
#' example_SANN(Temp = 2000, budget = 10000, instance = instance)
#' }
#'
#'
# TESTED: OK
example_SANN <- function(Temp, budget, instance){
dist.mat <- as.matrix(instance$mydist)
# Define perturbation function: 2-opt swap
gen.seq <- function(x, ...){ #
idx <- seq(2, nrow(dist.mat) - 1)
chgpt <- sample(idx, size = 2, replace = FALSE)
tmp <- x[chgpt[1]]
x[chgpt[1]] <- x[chgpt[2]]
x[chgpt[2]] <- tmp
return(x)
}
# random initial solution
sq <- sample.int(n = nrow(dist.mat), replace = FALSE)
sq <- c(sq, sq[1])
# run optimizer
res <- stats::optim(sq, fn = TSP.dist, gen.seq,
method = "SANN",
mydist = instance$mydist,
control = list(maxit = budget,
temp = Temp,
trace = FALSE,
REPORT = 500))
return(res)
}
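# Illustrative usage sketch (kept as a comment). example_SANN() can be wrapped
# as an algorithm definition for calc_nreps() / run_experiment(); TSP.dist()
# is assumed to be the distance routine used above, and the aliases are
# arbitrary examples:
#
# algorithm <- list(FUN = "example_SANN", alias = "SANN-T2000",
#                   Temp = 2000, budget = 10000)
# instance  <- list(FUN = "TSP.dist", alias = "eurodist",
#                   mydist = datasets::eurodist)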
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/example_SANN.R
|
#' Run an algorithm on a problem.
#'
#' Call algorithm routine for the solution of a problem instance
#'
#' @param instance a list object containing the definitions of the problem
#' instance. See [calc_nreps()] for details.
#' @param algo a list object containing the definitions of the algorithm.
#' See [calc_nreps()] for details.
#' @param n number of observations to generate.
#'
#' @return vector of observed performance values
#'
#' @seealso [calc_nreps()]
#'
#' @author Felipe Campelo (\email{fcampelo@@ufmg.br},
#' \email{f.campelo@@aston.ac.uk})
#'
#' @export
#'
#' @examples
#' # Make a dummy instance with a centered (zero-mean) exponential distribution:
#' instance <- list(FUN = "dummyinstance", distr = "rexp", rate = 5, bias = -1/5)
#'
#' # Simulate a dummy algorithm that has a uniform distribution of expected
#' # performance values, between -25 and 50.
#' algorithm <- list(FUN = "dummyalgo",
#' distribution.fun = "runif",
#' distribution.pars = list(min = -25, max = 50))
#' x <- get_observations(algorithm, instance, n = 1000)
#' hist(x)
# TESTED
get_observations <- function(algo, # algorithm parameters
instance, # problem parameters
n = 1) # number of observations to generate.
{
# ========== Error catching ========== #
# Most of error catching is already performed by the calling routine
# run_nreps(), so no need to do much here, except:
assertthat::assert_that(is.numeric(n) || is.integer(n),
n %% 1 == 0)
# ==================================== #
# remove '$FUN' and '$alias' fields from list of arguments
# and include the problem definition as field 'instance'
myargs <- algo[names(algo) != "FUN"]
myargs <- myargs[names(myargs) != "alias"]
myargs$instance <- instance
# Get observation(s)
f <- numeric(n)
if (n > 0){
for (i in 1:n){
result <- do.call(algo$FUN,
myargs)
f[i] <- result$value
}
}
return(f)
}
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/get_observations.R
|
.onAttach <- function(...) {
  packageStartupMessage("\nCAISEr version 1.0.16",
                        "\nNot compatible with code developed for 0.X.Y versions",
                        "\nIf needed, please visit https://git.io/fjFwf for version 0.3.3")
}
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/onAttach.R
|
#' plot.CAISEr
#'
#' S3 method for plotting _CAISEr_ objects output by [run_experiment()].
#'
#' @param x list object of class _CAISEr_.
#' @param y unused. Included for consistency with generic `plot` method.
#' @param ... other parameters to be passed down to specific
#' plotting functions (currently unused)
#' @param latex logical: should labels be formatted for LaTeX? (useful for
#' later saving using library `TikzDevice`)
#' @param reorder logical: should the comparisons be reordered alphabetically?
#' @param show.text logical: should text be plotted?
#' @param digits how many significant digits should be used in text?
#' @param layout optional parameter to override the layout of the plots (see
#'        `gridExtra::grid.arrange()` for details). The default layout is
#' `lay = rbind(c(1,1,1,1,1,1), c(1,1,1,1,1,1), c(2,2,2,3,3,3))`
#'
#' @return list containing (1) a list of `ggplot2` objects generated, and
#' (2) a list of data frames used for the creation of the plots.
#'
#' @method plot CAISEr
#'
#' @export
#'
plot.CAISEr <- function(x, y = NULL, ...,
latex = FALSE,
reorder = FALSE,
show.text = TRUE,
digits = 3,
layout = NULL)
{
assertthat::assert_that("CAISEr" %in% class(x),
is.logical(latex), length(latex) == 1,
is.logical(show.text), length(show.text) == 1,
is.logical(reorder), length(reorder) == 1,
is.null(layout) || is.matrix(layout))
plots.list <- vector("list", 3)
df.list <- vector("list", 2)
ignore <- utils::capture.output(x.summary <- summary(x))
CIs <- as.data.frame(t(sapply(x.summary$test.info,
FUN = function(y) y$test$conf.int)))
Est <- sapply(x.summary$test.info,
FUN = function(y) y$test$estimate)
Comps <- sapply(x.summary$test.info,
FUN = function(y) y$comparison)
pvals <- sapply(x.summary$test.info,
FUN = function(y) y$test$p.value)
alpha <- x$samplesize.calc$sig.level
df <- data.frame(Comparison = Comps,
Estimate = Est,
CIl = CIs[, 1],
CIu = CIs[, 2],
p.value = pvals,
alpha = alpha,
stringsAsFactors = FALSE)
if(reorder) df <- df[order(df$Comparison), ]
df[, -1] <- signif(df[, -1], digits = digits)
df$Reject <- df$p.value <= df$alpha
if (latex){
pvaltxt <- paste0("$p = ", df$p.value, "$")
alphatxt <- paste0("$\\alpha = ", df$alpha, "$")
CItxt <- paste0("$CI = [", df$CIl, ", ",
df$CIu, "]$")
ylabtxt <- "$\\hat{\\mu}_D$"
} else {
pvaltxt <- paste0("p = ", df$p.value)
alphatxt <- paste0("alpha = ", df$alpha)
    CItxt <- paste0("CI = [", df$CIl, ", ",
                    df$CIu, "]")
ylabtxt <- "Est. Difference"
}
df <- cbind(df, pvaltxt, alphatxt, CItxt)
mp <- ggplot2::ggplot(df,
ggplot2::aes_string(x = "Comparison",
y = "Estimate",
ymin = "CIl",
ymax = "CIu",
colour = "!Reject",
shape = "Reject")) +
ggplot2::theme_minimal() +
ggplot2::geom_abline(slope = 0, intercept = 0,
lty = 3, lwd = 1.4, alpha = .5) +
ggplot2::geom_pointrange(size = 1.1, fatten = 2,
show.legend = FALSE) +
ggplot2::ylab(ylabtxt) + ggplot2::xlab("") +
ggplot2::scale_shape_manual(values = c(16, 18)) +
ggplot2::coord_flip()
if(show.text){
mp <- mp +
ggplot2::geom_text(ggplot2::aes(label = CItxt),
nudge_x = .2, size = 2.5,
col = 1) +
ggplot2::geom_text(ggplot2::aes(label = alphatxt),
nudge_x = -.175, size = 2.5,
col = 1) +
ggplot2::geom_text(ggplot2::aes(label = pvaltxt),
nudge_x = -.35, size = 2.5,
col = 1)
}
plots.list[[1]] <- mp
df.list[[1]] <- df
df <- x$data.raw
df <- as.data.frame(t(table(df$Algorithm, df$Instance)))
names(df) <- c("Instance", "Algorithm", "n")
mp <- ggplot2::ggplot(df,
ggplot2::aes_string(x = "Algorithm",
y = "n",
fill = "Algorithm")) +
ggplot2::geom_boxplot(alpha = .3,
show.legend = FALSE,
outlier.shape = NA) +
ggplot2::geom_jitter(ggplot2::aes_string(colour = "Algorithm"),
width = .2, height = .1, alpha = .7,
show.legend = FALSE) +
    ggplot2::ylab("Runs/Instance") + ggplot2::xlab("") +
    ggplot2::theme_minimal() +
    ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 45, hjust = 1))
plots.list[[2]] <- mp
df.list[[2]] <- df
mp <- ggplot2::ggplot(df,
ggplot2::aes_string(x = "Algorithm",
y = "n",
fill = "Algorithm")) +
ggplot2::geom_col(alpha = .5, show.legend = FALSE) +
ggplot2::coord_flip() +
ggplot2::ylab("Total number of runs") + ggplot2::xlab("") +
ggplot2::theme_minimal()
plots.list[[3]] <- mp
if (!is.null(layout)) {
lay <- layout
} else {
lay <- rbind(c(1,1,1,1,1,1),
c(1,1,1,1,1,1),
c(2,2,2,3,3,3))
}
gridExtra::grid.arrange(grobs = plots.list, layout_matrix = lay)
invisible(list(ggplots = plots.list,
dfs = df.list))
}
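# Illustrative usage sketch (kept as a comment). `my.results` is assumed to be
# the output of run_experiment(); the custom layout below places the comparison
# panel (grob 1) on top and the two run-count panels (grobs 2 and 3) side by
# side underneath:
#
# plot(my.results, latex = FALSE, show.text = TRUE,
#      layout = rbind(c(1, 1),
#                     c(2, 3)))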
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/plot_caiser.R
|
#' plot.nreps
#'
#' S3 method for plotting _nreps_ objects output by [calc_nreps()].
#'
#' @param x list object of class _nreps_ (generated by [calc_nreps()])
#' or of class _CAISEr_ (in which case an `instance.name`
#' must be provided).
#' @param y unused. Included for consistency with generic `plot` method.
#' @param ... other parameters to be passed down to specific
#' plotting functions (currently unused)
#' @param instance.name name of the instance to be plotted if `x` is
#' of class _CAISEr_. Ignored otherwise.
#' @param latex logical: should labels be formatted for LaTeX? (useful for
#' later saving using library `TikzDevice`)
#' @param show.SE logical: should standard errors be plotted?
#' @param show.CI logical: should confidence intervals be plotted?
#' @param sig.level significance level for the confidence interval.
#' 0 < sig.level < 1
#' @param show.text logical: should text be plotted?
#'
#' @return `ggplot` object (invisibly)
#'
#' @method plot nreps
#'
#' @export
#'
plot.nreps <- function(x, y = NULL, ...,
instance.name = NULL,
latex = FALSE,
show.SE = TRUE,
show.CI = TRUE,
sig.level = 0.05,
show.text = TRUE)
{
object <- x
# Extract a single instance if plotting from CAISEr object
if ("CAISEr" %in% class(object)){
assertthat::assert_that(is.character(instance.name),
length(instance.name) == 1,
instance.name %in% unique(object$data.summary$Instance))
obj <- list()
obj$Diffk <- object$data.summary[object$data.summary$Instance == instance.name, ]
nk <- table(object$data.raw$Algorithm[object$data.raw$Instance == instance.name])
obj$Nk <- as.numeric(nk)
names(obj$Nk) <- names(nk)
obj$instance <- instance.name
object <- obj
class(object) <- "nreps"
}
assertthat::assert_that(all(c("Diffk", "Nk", "instance") %in% names(object)),
is.logical(latex), length(latex) == 1,
is.logical(show.SE), length(show.SE) == 1,
is.logical(show.CI), length(show.CI) == 1,
is.logical(show.text), length(show.text) == 1,
is.numeric(sig.level), length(sig.level) == 1,
sig.level > 0, sig.level < 1,
any(c("CAISEr", "nreps") %in% class(object)))
df <- object$Diffk
algs <- names(object$Nk)
df$Alg1 <- algs[df$Alg1]
df$Alg2 <- algs[df$Alg2]
df$CIHW <- df$SE * stats::qt(p = 1 - sig.level / 2,
df = df$N1 + df$N2)
if (latex){
pairx <- " $\\times$ "
ylabtxt <- "$\\phi_{ij}$"
setxt <- paste0("$SE_{ij} = ", signif(df$SE, 2), "$")
citxt <- paste0("$CI_{",
1 - sig.level,
"} = [", signif(df$Phi - df$CIHW, 2), ", ",
                    signif(df$Phi + df$CIHW, 2), "]$")
} else {
pairx <- " x "
ylabtxt <- "diff"
setxt <- paste0("SE = ", signif(df$SE, 2))
citxt <- paste0("CI(",
1 - sig.level,
") = [", signif(df$Phi - df$CIHW, 2), ", ",
signif(df$Phi + df$CIHW, 2), "]")
}
df$pair <- paste0(df$Alg1, pairx, df$Alg2)
mp <- ggplot2::ggplot(df,
ggplot2::aes_string(x = "pair",
y = "Phi",
ymin = "Phi - SE",
ymax = "Phi + SE")) +
ggplot2::theme_minimal() +
ggplot2::geom_abline(slope = 0, intercept = 0,
lty = 3, col = "red", lwd = 1.4,
alpha = .5)
if (show.CI){
mp <- mp +
ggplot2::geom_errorbar(ggplot2::aes_string(ymin = "Phi - CIHW",
ymax = "Phi + CIHW"),
alpha = .5, col = 2,
width = .12, size = 1.2)
}
if (show.SE){
mp <- mp +
ggplot2::geom_linerange(size = 1.8)
}
mp <- mp + ggplot2::geom_point(size = 2.5) +
ggplot2::coord_flip() +
ggplot2::xlab("Pair") +
ggplot2::ylab(ylabtxt) +
ggplot2::labs(caption = paste0("Instance: ", object$instance))
if(show.text & show.SE){
mp <- mp +
ggplot2::geom_text(ggplot2::aes(label = setxt),
nudge_x = .2, size = 2.5)
}
if(show.text & show.CI){
mp <- mp +
ggplot2::geom_text(ggplot2::aes(label = citxt),
nudge_x = -.2, size = 2.5)
}
return(mp)
}
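# Illustrative usage sketch (kept as a comment; object and instance names are
# examples). An "nreps" object from calc_nreps() can be plotted directly; for a
# CAISEr object, call this method explicitly and name the instance to extract:
#
# plot(my.nreps)                        # my.nreps of class "nreps"
# plot.nreps(my.results,                # my.results of class "CAISEr"
#            instance.name = "Inst.1",
#            show.SE = TRUE, show.CI = TRUE, sig.level = 0.05)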
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/plot_nreps.R
|
#' print.CAISEr
#'
#' S3 method for printing _CAISEr_ objects (the output of
#' [run_experiment()]).
#'
#' @param x list object of class _CAISEr_
#' (generated by [run_experiment()])
#' @param ... other parameters to be passed down to specific
#' summary functions (currently unused)
#' @param echo logical flag: should the print method actually print to screen?
#' @param digits the minimum number of significant digits to be used.
#' See [print.default()].
#' @param right logical, indicating whether or not strings should be
#' right-aligned.
#' @param breakrows logical, indicating whether to "widen" the output table by
#' placing the bottom half to the right of the top half.
#'
#' @examples
#' # Example using four dummy algorithms and 100 dummy instances.
#' # See [dummyalgo()] and [dummyinstance()] for details.
#' # Generating 4 dummy algorithms here, with means 15, 10, 30, 15 and standard
#' # deviations 2, 4, 6, 8.
#' algorithms <- mapply(FUN = function(i, m, s){
#' list(FUN = "dummyalgo",
#' alias = paste0("algo", i),
#' distribution.fun = "rnorm",
#' distribution.pars = list(mean = m, sd = s))},
#' i = c(alg1 = 1, alg2 = 2, alg3 = 3, alg4 = 4),
#' m = c(15, 10, 30, 15),
#' s = c(2, 4, 6, 8),
#' SIMPLIFY = FALSE)
#'
#' # Generate 100 dummy instances with centered exponential distributions
#' instances <- lapply(1:100,
#' function(i) {rate <- runif(1, 1, 10)
#' list(FUN = "dummyinstance",
#' alias = paste0("Inst.", i),
#' distr = "rexp", rate = rate,
#' bias = -1 / rate)})
#'
#' my.results <- run_experiment(instances, algorithms,
#' d = 1, se.max = .1,
#' power = .9, sig.level = .05,
#' power.target = "mean",
#' dif = "perc", comparisons = "all.vs.all",
#' seed = 1234, ncpus = 1)
#' my.results
#'
#'
#' @return data frame object containing the summary table (invisibly)
#'
#' @method print CAISEr
#' @export
#'
print.CAISEr <- function(x, ...,
echo = TRUE,
digits = 4,
right = TRUE,
breakrows = FALSE)
{
# Error checking
assertthat::assert_that("CAISEr" %in% class(x))
# ===========================================================================
my.table <- x$data.summary
if (breakrows){
ninst <- nrow(my.table)
breakpoint <- ceiling(ninst / 2)
tophalf <- my.table[1:breakpoint, ]
bottomhalf <- my.table[(breakpoint + 1):ninst, ]
    if(nrow(tophalf) > nrow(bottomhalf)) bottomhalf <- rbind(bottomhalf,
                                                             rep(NA, ncol(bottomhalf)))
my.table <- cbind(tophalf,
`|` = rep("|", breakpoint),
bottomhalf)
}
# Print summary
if(echo){
cat("#====================================")
cat("\n Summary table of CAISEr object\n")
print.data.frame(my.table[, 1:(ncol(my.table) - 2)],
digits = digits, right = right,
quote = FALSE, row.names = FALSE)
cat("\n#====================================")
}
invisible(my.table)
}
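# Illustrative usage sketch (kept as a comment). `my.results` is assumed to be
# the output of run_experiment(); breakrows = TRUE places the bottom half of
# the summary table beside the top half, which helps with long instance lists:
#
# print(my.results, digits = 3, breakrows = TRUE)
# tab <- print(my.results, echo = FALSE)   # retrieve the table invisibly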
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/print_caiser.R
|
#' Run a full experiment for comparing multiple algorithms using multiple
#' instances
#'
#' Design and run a full experiment - calculate the required number of
#' instances, run the algorithms on each problem instance using the iterative
#' approach based on optimal sample size ratios, and return the results of the
#' experiment. This routine builds upon [calc_instances()] and [calc_nreps()],
#' so refer to the documentation of these two functions for details.
#'
#' @section Instance List:
#' Parameter `instances` must contain a list of instance objects, where
#' each element is itself a named list, as defined in the documentation of function
#' [calc_nreps()]. In short, each element of `instances` is an `instance`, i.e.,
#' a named list containing all relevant parameters that define the problem
#' instance. This list must contain at least the field `instance$FUN`, with the
#' name of the problem instance function, that is, a routine that calculates
#' y = f(x). If the instance requires additional parameters, these must also be
#' provided as named fields.
#' An additional field, "instance$alias", can be used to provide the instance
#' with a unique identifier (e.g., when using an instance generator).
#'
#' @section Algorithm List:
#' Object `algorithms` is a list in which each component is a named
#' list containing all relevant parameters that define an algorithm to be
#' applied for solving the problem instance. In what follows `algorithms[[k]]`
#' refers to any algorithm specified in the `algorithms` list.
#'
#' `algorithms[[k]]` must contain an `algorithms[[k]]$FUN` field, which is a
#' character object with the name of the function that calls the algorithm; as
#' well as any other elements/parameters that `algorithms[[k]]$FUN` requires
#' (e.g., stop criteria, operator names and parameters, etc.).
#'
#' The function defined by the routine `algorithms[[k]]$FUN` must have the
#' following structure: supposing that the list in `algorithms[[k]]` has
#' fields `algorithms[[k]]$FUN = "myalgo"`, `algorithms[[k]]$par1 = "a"` and
#' `algorithms[[k]]$par2 = 5`, then:
#'
#' \preformatted{
#' myalgo <- function(par1, par2, instance, ...){
#' #
#' # <do stuff>
#' #
#' return(results)
#' }
#' }
#'
#' That is, it must be able to run if called as:
#'
#' \preformatted{
#' # remove '$FUN' and '$alias' field from list of arguments
#' # and include the problem definition as field 'instance'
#' myargs <- algorithm[names(algorithm) != "FUN"]
#' myargs <- myargs[names(myargs) != "alias"]
#' myargs$instance <- instance
#'
#' # call function
#' do.call(algorithm$FUN,
#' args = myargs)
#' }
#'
#' The `algorithm$FUN` routine must return a list containing (at
#' least) the performance value of the final solution obtained, in a field named
#' `value` (e.g., `result$value`) after a given run. In general it is easier to
#' write a small wrapper function around existing implementations.
#'
#' @section Initial Number of Observations:
#' In the _general case_ the initial number of observations / algorithm /
#' instance (`nstart`) should be relatively high. For the parametric case
#' we recommend 10~15 if outliers are not expected, and 30~40 (at least) if that
#' assumption cannot be made. For the bootstrap approach we recommend using at
#' least 15 or 20. However, if some distributional assumptions can be
#' made (particularly low skewness of the population of algorithm results on
#' the test instances), then `nstart` can in principle be as small as 5 (if the
#' output of the algorithm were known to be normal, it could be 1).
#'
#' In general, higher sample sizes are the price to pay for abandoning
#' distributional assumptions. Use lower values of `nstart` with caution.
#'
#' @section Pairwise Differences:
#' Parameter `dif` informs the type of difference in performance to be used
#' for the estimation (\eqn{\mu_a} and \eqn{\mu_b} represent the mean
#' performance of any two algorithms on the test instance, and \eqn{\mu}
#' represents the grand mean of all algorithms given in `algorithms`):
#'
#' - If `dif == "perc"` and `comparisons == "all.vs.first"`, the estimated
#' quantity is:
#' \eqn{\phi_{1b} = (\mu_1 - \mu_b) / \mu_1 = 1 - (\mu_b / \mu_1)}.
#'
#' - If `dif == "perc"` and `comparisons == "all.vs.all"`, the estimated
#' quantity is:
#' \eqn{\phi_{ab} = (\mu_a - \mu_b) / \mu}.
#'
#' - If `dif == "simple"` it estimates \eqn{\mu_a - \mu_b}.
#'
#' @section Sample Sizes for Nonparametric Methods:
#' If the parameter `test` is set to either `Wilcoxon` or `Binomial`, this
#' routine approximates the number of instances using the asymptotic relative
#' efficiency (ARE) of these tests in relation to the paired t-test, as
#' illustrated in the sketch after the list below:
#' - `n.wilcox = n.ttest / 0.86 = 1.163 * n.ttest`
#' - `n.binom = n.ttest / 0.637 = 1.570 * n.ttest`
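#'
#' As an illustrative sketch (the value of `n.ttest` below is made up), the
#' corrected sample sizes would be obtained as:
#' \preformatted{
#' n.ttest  <- 20                        # instances required by the t-test
#' n.wilcox <- ceiling(n.ttest / 0.86)   # 24 instances
#' n.binom  <- ceiling(n.ttest / 0.637)  # 32 instances
#' }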
#'
#' @inheritParams calc_nreps
#' @inheritParams calc_instances
#' @param instances list object containing the definitions of the
#' _available_ instances. This list may (or may not) be exhausted in the
#' experiment. To estimate the number of required instances,
#' see [calc_instances()]. For more details, see Section `Instance List`.
#' @param power (desired) test power. See [calc_instances()] for details.
#' Any value equal to or greater than one will force the method to use all
#' available instances in `instances`.
#' @param d minimally relevant effect size (MRES), expressed as a standardized
#' effect size, i.e., "deviation from H0" / "standard deviation".
#' See [calc_instances()] for details.
#' @param sig.level family-wise significance level (alpha) for the experiment.
#' See [calc_instances()] for details.
#' @param alternative type of alternative hypothesis ("two.sided" or
#' "less" or "greater"). See [calc_instances()] for details.
#' @param nmax maximum number of runs to execute on each instance (see
#' [calc_nreps()]). Loaded results (see `load.partial.results`
#' below) do not count towards this maximum.
#' @param save.partial.results should partial results be saved to files? Can be
#' either `NA` (do not save) or a character string
#' pointing to a folder. File names are generated
#' based on the instance aliases. **Existing files with
#' matching names will be overwritten.**
#' `run_experiment()` uses **.RDS** files for saving
#' and loading.
#' @param load.partial.results should partial results be loaded from files? Can
#'                          be either `NA` (do not load) or a character
#' string pointing to a folder containing the
#' file(s) to be loaded. `run_experiment()` will
#' use .RDS file(s) with a name(s) matching instance
#' `alias`es. `run_experiment()` uses **.RDS** files
#' for saving and loading.
#' @param save.final.result should the final results be saved to file? Can be
#' either `NA` (do not save) or a character string
#' pointing to a folder where the results will be
#' saved on a **.RDS** file starting with
#'                          `CAISEr_results_` and ending with a 14-digit
#'                          datetime tag in the format `YYYYMMDDhhmmss`.
#'
#' @return a list object containing the following fields:
#' \itemize{
#' \item \code{Configuration} - the full input configuration (for reproducibility)
#' \item \code{data.raw} - data frame containing all observations generated
#' \item \code{data.summary} - data frame summarizing the experiment.
#' \item \code{N} - number of instances sampled
#' \item \code{N.star} - number of instances required
#' \item \code{total.runs} - total number of algorithm runs performed
#' \item \code{instances.sampled} - names of the instances sampled
#' \item \code{Underpowered} - flag: TRUE if N < N.star
#' }
#'
#' @author Felipe Campelo (\email{fcampelo@@ufmg.br},
#' \email{f.campelo@@aston.ac.uk})
#'
#' @references
#' - F. Campelo, F. Takahashi:
#' Sample size estimation for power and accuracy in the experimental
#' comparison of algorithms. Journal of Heuristics 25(2):305-338, 2019.
#' - P. Mathews.
#' Sample size calculations: Practical methods for engineers and scientists.
#' Mathews Malnar and Bailey, 2010.
#' - A.C. Davison, D.V. Hinkley:
#' Bootstrap methods and their application. Cambridge University Press (1997)
#' - E.C. Fieller:
#' Some problems in interval estimation. Journal of the Royal Statistical
#' Society. Series B (Methodological) 16(2), 175–185 (1954)
#' - V. Franz:
#' Ratios: A short guide to confidence limits and proper use (2007).
#' https://arxiv.org/pdf/0710.2024v1.pdf
#' - D.C. Montgomery, C.G. Runger:
#' Applied Statistics and Probability for Engineers, 6th ed. Wiley (2013)
#' - D.J. Sheskin:
#' Handbook of Parametric and Nonparametric Statistical Procedures,
#' 4th ed., Chapman & Hall/CRC, 1996.
#'
#' @export
#'
#' @examples
#' # Example using four dummy algorithms and 100 dummy instances.
#' # See [dummyalgo()] and [dummyinstance()] for details.
#' # Generating 4 dummy algorithms here, with means 15, 10, 30, 15 and standard
#' # deviations 2, 4, 6, 8.
#' algorithms <- mapply(FUN = function(i, m, s){
#' list(FUN = "dummyalgo",
#' alias = paste0("algo", i),
#' distribution.fun = "rnorm",
#' distribution.pars = list(mean = m, sd = s))},
#' i = c(alg1 = 1, alg2 = 2, alg3 = 3, alg4 = 4),
#' m = c(15, 10, 30, 15),
#' s = c(2, 4, 6, 8),
#' SIMPLIFY = FALSE)
#'
#' # Generate 100 dummy instances with centered exponential distributions
#' instances <- lapply(1:100,
#' function(i) {rate <- runif(1, 1, 10)
#' list(FUN = "dummyinstance",
#' alias = paste0("Inst.", i),
#' distr = "rexp", rate = rate,
#' bias = -1 / rate)})
#'
#' my.results <- run_experiment(instances, algorithms,
#' d = .5, se.max = .1,
#' power = .9, sig.level = .05,
#' power.target = "mean",
#' dif = "perc", comparisons = "all.vs.all",
#' ncpus = 1, seed = 1234)
#'
#' # Take a look at the results
#' summary(my.results)
#' plot(my.results)
#'
run_experiment <- function(instances, algorithms, d, se.max,
power = 0.8, sig.level = 0.05,
power.target = "mean",
dif = "simple", comparisons = "all.vs.all",
alternative = "two.sided", test = "t.test",
method = "param",
nstart = 20, nmax = 100 * length(algorithms),
force.balanced = FALSE,
ncpus = 2, boot.R = 499, seed = NULL,
save.partial.results = NA,
load.partial.results = NA,
save.final.result = NA)
{
# ================ Preliminary bureaucracies ================ #
# one-sided tests only make sense for all-vs-one experiments
if (alternative %in% c("less", "greater")){
assertthat::assert_that(comparisons == "all.vs.first")
alternative.side <- "one.sided"
} else {
alternative.side <- "two.sided"
}
# Fix a common mistake
if (tolower(dif) == "percent") dif <- "perc"
# set PRNG seed
assertthat::assert_that(is.null(seed) || seed == seed %/% 1)
if (is.null(seed)) seed <- as.numeric(Sys.time())
set.seed(seed)
# Capture input parameters to be returned later
var.input.pars <- as.list(environment())
# Set up parallel processing
assertthat::assert_that(assertthat::is.count(ncpus))
if ((.Platform$OS.type == "windows") & (ncpus > 1)){
cat("\nAttention: multicore not currently available for Windows.\n
Forcing ncpus = 1.")
ncpus <- 1
} else {
available.cores <- parallel::detectCores()
if (ncpus >= available.cores){
cat("\nAttention: ncpus too large, we only have ", available.cores,
" cores.\nUsing ", available.cores - 1,
" cores for run_experiment().")
ncpus <- available.cores - 1
}
}
# Fill up instance aliases if needed
assertthat::assert_that(is.list(instances), length(instances) > 1)
for (i in 1:length(instances)){
if (!("alias" %in% names(instances[[i]]))) {
instances[[i]]$alias <- instances[[i]]$FUN
}
}
# Fill up algorithm aliases if needed
assertthat::assert_that(is.list(algorithms), length(algorithms) > 1)
for (i in 1:length(algorithms)){
if (!("alias" %in% names(algorithms[[i]]))) {
algorithms[[i]]$alias <- algorithms[[i]]$FUN
}
}
# ================ Start the actual method ================ #
# Calculate N*
n.available <- length(instances)
n.algs <- length(algorithms)
n.comparisons <- switch(comparisons,
all.vs.all = n.algs * (n.algs - 1) / 2,
all.vs.first = n.algs - 1)
if (power >= 1) {
ss.calc <- calc_instances(ncomparisons = n.comparisons,
d = d,
ninstances = n.available,
sig.level = sig.level,
alternative.side = alternative.side,
test = test,
power.target = power.target)
N.star <- n.available
} else {
ss.calc <- calc_instances(ncomparisons = n.comparisons,
d = d,
power = power,
sig.level = sig.level,
alternative.side = alternative.side,
test = test,
power.target = power.target)
N.star <- ceiling(ss.calc$ninstances)
if (N.star < n.available){
# Randomize order of presentation for available instances
instances <- instances[sample.int(n.available)]
}
}
inst.to.use <- min(N.star, n.available)
# Echo some information for the user
cat("CAISEr running")
cat("\n-----------------------------")
cat("\nRequired number of instances:", N.star)
cat("\nAvailable number of instances:", n.available)
cat("\nUsing", ncpus, "cores.")
cat("\n-----------------------------")
# Sample algorithms on instances
if(ncpus > 1){
my.results <- pbmcapply::pbmclapply(X = instances[1:inst.to.use],
FUN = calc_nreps,
# Arguments for calc_nreps:
algorithms = algorithms,
se.max = se.max,
dif = dif,
comparisons = comparisons,
method = method,
nstart = nstart,
nmax = nmax,
boot.R = boot.R,
force.balanced = force.balanced,
save.folder = save.partial.results,
load.folder = load.partial.results,
# other arguments for pbmclapply:
mc.cores = ncpus,
mc.preschedule = FALSE)
} else {
my.results <- lapply(X = instances[1:inst.to.use],
FUN = calc_nreps,
# Arguments for calc_nreps:
algorithms = algorithms,
se.max = se.max,
dif = dif,
comparisons = comparisons,
method = method,
nstart = nstart,
nmax = nmax,
boot.R = boot.R,
force.balanced = force.balanced,
save.folder = save.partial.results,
load.folder = load.partial.results)
}
# Consolidate raw data
data.raw <- lapply(X = my.results,
FUN = function(x){
inst <- x$instance
nj <- sum(x$Nk)
data.frame(Algorithm = do.call(what = c,
mapply(rep,
names(x$Nk),
x$Nk,
SIMPLIFY = FALSE)),
Instance = rep(inst, nj),
Observation = do.call(c, x$Xk))})
data.raw <- do.call(rbind, data.raw)
rownames(data.raw) <- NULL
# Consolidate summary data
data.summary <- lapply(X = my.results,
FUN = function(x){
cbind(Instance = rep(x$instance, nrow(x$Diffk)),
x$Diffk)})
data.summary <- do.call(rbind, data.summary)
algonames <- sapply(algorithms, function(x) x$alias)
rownames(data.summary) <- NULL
data.summary$Alg1 <- as.factor(algonames[data.summary$Alg1])
data.summary$Alg2 <- as.factor(algonames[data.summary$Alg2])
# Assemble output
output <- list(Configuration = var.input.pars,
data.raw = data.raw,
data.summary = data.summary,
N = min(N.star, n.available),
N.star = N.star,
total.runs = nrow(data.raw),
instances.sampled = unique(data.raw$Instance),
Underpowered = (N.star > n.available),
samplesize.calc = ss.calc)
class(output) <- c("CAISEr", "nreps", "list")
# Save output (if required)
assertthat::assert_that(is.na(save.final.result) ||
(is.character(save.final.result) &
length(save.final.result) == 1))
if(!is.na(save.final.result)){
# Check save folder
if(save.final.result == "") save.final.result <- "./"
save.folder <- normalizePath(save.final.result)
if(!dir.exists(save.folder)) dir.create(save.folder)
# Prepare save filename
filepath <- paste0(save.folder, "/CAISEr_results_",
gsub("[- ::alpha::]", "", Sys.time()),
".rds")
# save output to file
cat("\nWriting file", basename(filepath))
saveRDS(output, file = filepath)
}
return(output)
}
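# Illustrative usage sketch (kept as a comment; folder names are examples).
# Saving per-instance partial files lets an interrupted experiment be resumed
# (via load.partial.results) or consolidated later with
# consolidate_partial_results():
#
# my.results <- run_experiment(instances, algorithms,
#                              d = .5, se.max = .1,
#                              power = .9, sig.level = .05,
#                              dif = "perc", comparisons = "all.vs.all",
#                              save.partial.results = "./nreps_files",
#                              load.partial.results = "./nreps_files",
#                              save.final.result = "./results",
#                              ncpus = 1, seed = 1234)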
|
/scratch/gouwar.j/cran-all/cranData/CAISEr/R/run_experiment.R
|