---
title: "A Comprehensive Monte Carlo Valuation of Variable Annuities"
author: "Hengxin Li, Mingbin Feng, Mingyi Jiang"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{A Comprehensive Monte Carlo Valuation of Variable Annuities}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

# Package Info

This package uses Monte Carlo simulation to estimate the fair market value of a large portfolio of synthetic variable annuities. The portfolio of variable annuities under consideration is generated based on realistic features of common types of guarantee riders in North America. The Monte Carlo simulation engine generates sample paths of asset prices based on the Black-Scholes model. In this vignette, we will demonstrate the functionalities provided in this package. For illustrative purposes, we will use a few scenarios to valuate a pool of two variable annuities. Users may obtain a more robust valuation result by increasing the number of risk-neutral scenarios.

## Yield Curve Generation

In this step, we use the Secant method to calculate discount factors and forward rates at the different tenors from the given swap rates, via `buildCurve()`.

```{r, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
library(vamc)
```

```{r}
# Initialize required inputs to bootstrap a curve
swap <- c(0.69, 0.77, 0.88, 1.01, 1.14, 1.38, 1.66, 2.15)*0.01
tenor <- c(1, 2, 3, 4, 5, 7, 10, 30)
fixFreq <- 6
fixDCC <- "Thirty360"
fltFreq <- 6
fltDCC <- "ACT360"
calendar <- "General"
bdc <- "Modified_Foll"
curveDate <- "2016-02-08"
numSetDay <- 2
yieldCurveDCC <- "Thirty360"
holidays <- NULL

# Bootstrap a forward curve
buildCurve(swap, tenor, fixFreq, fixDCC, fltFreq, fltDCC, calendar, bdc,
           curveDate, numSetDay, yieldCurveDCC, holidays)
```

## Generate index scenarios and fund scenarios

In the following example, we first simulate the index movements using `genIndexScen()`. Three of the inputs to `genIndexScen()` are stored as default data under the variable names "mCov", "indexNames", and "cForwardCurve" respectively. For illustration purposes, we will simulate 100 scenarios for 360 steps with a step length dT = 1/12 and seed = 1. The underlying model is the __multivariate Black-Scholes model__. All the simulated index movements are stored in a __3D-array__ with dimensions [number of scenarios, number of steps, number of indices].

### Default covariance matrix

```{r, echo=FALSE, results='asis'}
# Default randomly generated covariance matrix
knitr::kable(mCov, col.names = indexNames)
```

### Default index names

```{r, echo=FALSE, results='asis'}
# Default index names
knitr::kable(indexNames, col.names = c("Index Names"))
```

### Risk-neutral path simulation for 5 indices

```{r}
# We will show the index simulated path for five months of the first scenario
indexScen <- genIndexScen(mCov, 100, 360, indexNames, 1 / 12, cForwardCurve, 1)
indexScen[1, 1:5, ]
```
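For intuition, under the multivariate Black-Scholes model each index follows a correlated geometric Brownian motion. A standard risk-neutral discretization over a step of length $dT$ (shown here only as a sketch; the exact scheme is internal to `genIndexScen()`) is

$$ S^{(i)}_{t + dT} = S^{(i)}_t \exp\!\left[\left(r_t - \tfrac{1}{2}\Sigma_{ii}\right) dT + \sqrt{dT}\, Z^{(i)}_t\right], $$

where $r_t$ is the forward rate for the step, $\Sigma_{ii}$ is the $i$-th diagonal entry of the covariance matrix `mCov`, and the $Z_t$ are jointly normal with covariance $\Sigma$.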
Then we use `genFundScen()` to map the index movements to funds, according to different allocations of capital, using a fund map (stored as default data under the variable "fundMap"). The fund movements are also stored in a 3D-array with dimensions [number of scenarios, number of steps, number of funds].

### Risk-neutral path simulation for 10 funds

```{r}
# Again, we show the fund simulated path for five months of the first scenario
fundScen <- genFundScen(fundMap, indexScen)
fundScen[1, 1:5, ]
```

## Generate a synthetic portfolio of variable annuities

Perhaps the most value-added step in this package is the generation of a synthetic portfolio of variable annuities with realistic characteristic features. Using the function `genPortInception()`, users can generate a synthetic variable annuity portfolio of the desired size. The function `genPortInception()` has certain predetermined default values based on the research in the package reference. We recommend that users change these default values, such as the maturity and issue ranges, to match their portfolio characteristics. In the current version, there are a few constraints on the generated portfolio: the issue range must start after the first date of the historical scenarios, and the maturity range must lie after the valuation date to be meaningful.

```{r}
# For illustration purposes, we will only simulate one guarantee contract for each of
# the 19 guarantee types. Please note that due to randomness the generated portfolio
# under this code block may not align with the default VAPort under lazy data.
if(capabilities("long.double")) {
  VAport <- genPortInception(issueRng = c("2001-08-01", "2014-01-01"), numPolicy = 1)
}
```

## Monte Carlo Valuation

After generating the above required elements for Monte Carlo valuation, we can now proceed to calculate the fair market price of the portfolio by calling the function `valuatePortfolio()`. Under the current version of the package, all the annuity contracts in the portfolio are assumed to be valuated on the same date, i.e. the first date of our simulated fund scenario. Users can either use the default mortality table, stored under "mortTable", or input their own mortality table to project liability cash flows.

```{r}
# In this vignette, we will arbitrarily use the first two scenarios from fundScen to
# valuate a portfolio of two guarantees to speed up the execution of the example.
# The input cForwardCurve is a vector of 0.02 with dimension 360.
valuatePortfolio(VAPort[1:2, ], mortTable, fundScen[1:2, , ], 1 / 12, cForwardCurve)
```

Note that users can also "age" the portfolio to a particular valuation date by calling the function `agePortfolio()`, which incorporates the historical fund movements prior to that date.

```{r}
# Again, we will arbitrarily age a portfolio of two guarantees to speed up the execution.
targetDate <- "2016-01-01"

# Here we generate historical fund scenarios using default index data stored under "histIdxScen"
histFundScen <- genFundScen(fundMap, histIdxScen)

# Perform aging
agePortfolio(VAPort[1:2, ], mortTable, histFundScen, histDates, dT = 1 / 12,
             targetDate, cForwardCurve)
```
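As noted in the introduction, a larger number of risk-neutral scenarios yields a more robust Monte Carlo estimate. A minimal sketch (not evaluated here; it reuses the same default inputs as above, and assumes `valuatePortfolio()` averages over all scenarios in the array it receives):

```{r, eval = FALSE}
# Revalue the same two guarantees under 1000 risk-neutral scenarios (slower to run)
indexScenBig <- genIndexScen(mCov, 1000, 360, indexNames, 1 / 12, cForwardCurve, 1)
fundScenBig <- genFundScen(fundMap, indexScenBig)
valuatePortfolio(VAPort[1:2, ], mortTable, fundScenBig, 1 / 12, cForwardCurve)
```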
## Closing Remarks

Though the primary purpose of this package is to valuate a portfolio of variable annuities, users can also use the `valuateOnePolicy()` and `ageOnePolicy()` functions to perform fair market valuation of a single variable annuity, as demonstrated below.

### Valuation of one variable annuity

```{r}
exPolicy <- VAPort[1, ]
valuateOnePolicy(exPolicy, mortTable, fundScen[1:2, , ], 1 / 12, cForwardCurve)
```

### Aging of one variable annuity

```{r}
# Similarly, users can age this single policy before pricing it. We use the same
# target date and historical fund scenario as generated before
exPolicy <- VAPort[1, ]
ageOnePolicy(exPolicy, mortTable, histFundScen, histDates, dT = 1 / 12,
             targetDate, cForwardCurve)
```
/scratch/gouwar.j/cran-all/cranData/vamc/vignettes/my-vignette.Rmd
ControlResponseBias<-function(x,content_factors,SD_items,unbalanced_items,contSD = FALSE,contAC = FALSE,corr = "Pearson",rotat = "promin",target,factor_scores = FALSE, PA = FALSE, path = FALSE, display = TRUE){ ###################################################################### # x : Raw sample scores ###################################################################### if (missing(x)){ stop("The argument x is not optional, please provide a valid raw sample scores") } else { N<-size(x)[1] m<-size(x)[2] if (N==m){ corr<-0 #square matrix, take it as a covariance/correlation matrix R <- x factor_scores = FALSE PA = FALSE } buff_na <- is.na(x) if (any(buff_na)){ #check if missing data stop("The data contains missing values, please provide a full dataset to be analyzed.") } x_o<-x } ###################################################################### # content_factors : Number of content factors to be retained ###################################################################### if (missing(content_factors)){ stop("The argument content_factors is not optional, please provide the number of content factors to be retained") } else { content_factors<-round(content_factors) if (content_factors>(m/4)){ stop("The argument content_factors has to be greater than the number of items / 4") } } r<-content_factors ###################################################################### # SD_items : A vector containing the Social Desirability markers ###################################################################### ###################################################################### # unbalanced_items : A vector containing the the items that wouldn't be part of the balanced core ###################################################################### ###################################################################### # contSD : logical variable determining if the method for controlling Social Desirability will be used. ###################################################################### if (is.logical(contSD)!=TRUE){ stop("contSD argument should be a logical variable.") } ###################################################################### # factor_scores : logical variable determining if factor scores wil be computed ###################################################################### if (is.logical(factor_scores)!=TRUE){ stop("factor_scores argument should be a logical variable.") } if (missing(contSD)) { if (missing(SD_items)){ n_SD_items<-0 contSD<-FALSE SD_items<-0 } else { #check the number of SD items n_SD_items<-size(SD_items)[2] if (n_SD_items<4){ stop("The argument SD_items has to be a vector with at least 4 items") } contSD<-TRUE } } else { if (contSD==TRUE){ if (missing(SD_items)){ stop("When contSD is TRUE, the argument SD_items can not be missing") } else { #check the number of SD items n_SD_items<-size(SD_items)[2] if (n_SD_items<4){ stop("The argument SD_items has to be a vector with at least 4 items") } } contSD<-TRUE } else if (contSD==FALSE){ n_SD_items<-0 contSD<-FALSE SD_items<-0 } } if (factor_scores!=TRUE && factor_scores!=FALSE){ stop("The argument factor_scores has to be a logical value") } ###################################################################### # contAC : logical variable determining if the method for controlling Acquiescence will be used. 
###################################################################### if (missing(contAC)){ if (missing(unbalanced_items)){ n_unbalanced_items<-0 contAC<-FALSE unbalanced_items<-0 n_unbalanced_items<-0 } else { n_unbalanced_items<-size(unbalanced_items)[2] contAC<-TRUE } } else { if (contAC==TRUE){ #doesn't matter if unbalanced_items is missing if (missing(unbalanced_items)){ unbalanced_items<-0 n_unbalanced_items<-0 } else { n_unbalanced_items<-size(unbalanced_items)[2] } contAC<-TRUE } else if (contAC==FALSE){ n_unbalanced_items<-0 contAC<-FALSE } } ###################################################################### # corr: Determine if Pearson or Polychoric matrix will be used "Pearson" or "Polychoric" ###################################################################### if (missing(corr)){ corr<-1 #Pearson by default } else{ if (corr!=0){ if (corr!="Pearson" && corr!="Polychoric"){ if (corr=="pearson"){ corr<-1 } else if (corr=="polychoric"){ corr<-2 } else if (corr==0){ corr<-0 } else { stop("corr argument has to be 'Pearson' for computing Pearson correlation or 'Polychoric' for computing Polychoric/Tetrachoric correlation)") } } else { if (corr=="Pearson"){ corr<-1 } else if (corr=="Polychoric"){ corr<-2 } } } } ###################################################################### # target : The semi-specified target if procustes rotations are selected ###################################################################### if (missing(target)==FALSE){ #check size f1<-size(target)[1] f2<-size(target)[2] N <- size(x)[1] m <- size(x)[2] if (missing(target)==F){ target=as.matrix(target) if ((f1==m)==F){ if (contSD==T){ if ((f1==(m-n_SD_items))){ #the target contains content items only } } } else { #the target contains all the items, has to be adjusted to the content items } } } ###################################################################### # rotat: Determine if the factor loading matrix will be rotated, and using which method ###################################################################### if (missing(target)==FALSE){ #target has been provided, the rotations available are: rotat_list_target<-c("targetT","targetQ","pstT","pstQ","bentlerT","bentlerQ","cfT","cfQ","bifactorT","bifactorQ") if (is.na(match(rotat,rotat_list_target))){ stop("When providing a target, the rotation methods available are: targetT, targetQ, pstT, pstQ, bentlerT, bentlerQ, cfT, cfQ, bifactorT and bifactorQ") } else { #ok rotat_package<-1 } } rotat_list_GPArotation<-c("oblimin","quartimin","targetT","targetQ","pstT","pstQ","oblimax","entropy","quartimax", "Varimax","simplimax","bentlerT","bentlerQ","tandemI","tandemII","geominT","geominQ", "cfT","cfQ","infomaxT","infomaxQ","mccammon","bifactorT","bifactorQ","vgQ.oblimin", "vgQ.quartimin","vgQ.target","vgQ.pst","vgQ.oblimax","vgQ.entropy","vgQ.quartimax", "vgQ.varimax","vgQ.simplimax","vgQ.bentler","vgQ.tandemI","vgQ.tandemII","vgQ.geomin", "vgQ.cf","vgQ.infomax","vgQ.mccammon","vgQ.bifactor") rotat_list_stats<-c("varimax","promax") rotat_list_PCovR<-c("promin","wvarim") is_GPArotation<-match(rotat,rotat_list_GPArotation) if(is.na(is_GPArotation)){ is_GPArotation<-0 } is_stats<-match(rotat,rotat_list_stats) if(is.na(is_stats)){ is_stats<-0 } is_PCovR<-match(rotat,rotat_list_PCovR) if(is.na(is_PCovR)){ is_PCovR<-0 } if(rotat=="none"){ rotat_package<-0 } else{ if(r==1){ # message('\nRotation methods are only available when retaining more than one content factor. 
The obtained loading matrix will not be rotated.') rotat<-"none" rotat_package<-0 } else{ if(is_GPArotation>0){ rotat_package<-1 #the rotation is available on GPArotation package } else { if(is_stats>0){ rotat_package<-2 #the rotation is available on stats package } else { if(is_PCovR>0){ rotat_package<-3 #the rotation is available on PCovR package } else{ rotat<-"none" rotat_package<-0 message("\nrotat argument has to be available through GPArotation, stats or PCovR packages (case sensitive). rotat has been switch to none") } } } } } ###################################################################### # PA: If Parallel Analysis will be computed ###################################################################### if (is.logical(PA)!=TRUE){ stop("PA argument should be a logical variable.") } ###################################################################### # path: If path diagram will be plotted ###################################################################### if (is.logical(path)!=TRUE){ stop("path argument should be a logical variable.") } if (path == TRUE){ if (m > 40 || r >5){ path = FALSE message("\npath argument is limited to a maximum of 40 items and 5 content factors. The path diagram will not be plotted") } } ###################################################################### # display: What should be displayed in console # - TRUE: (default) display the complete output # - FALSE: no output will be printed, the result will be passed silently # - Available options (multiple selection available): # "Summary", "Descriptives", "Adequacy", "GOF" (Goodness of Fit indices), "Loadings" and "PA" ###################################################################### # true logical display if (is.logical(display)){ if (isTRUE(display)){ displayL <- TRUE display_summary <- TRUE display_descriptives <- TRUE display_adequacy <- TRUE display_GOF <- TRUE display_loadings <- TRUE if (factor_scores == TRUE){ display_scores <- TRUE } else { display_scores <- FALSE } if (PA == TRUE){ display_PA <- TRUE } else { display_PA <- FALSE } } else {#FALSE displayL <- FALSE display_summary <- FALSE display_descriptives <- FALSE display_adequacy <- FALSE display_GOF <- FALSE display_loadings <- FALSE display_scores <- FALSE display_PA <- FALSE } } else { #check selected options displayL <- TRUE display_summary <- FALSE display_descriptives <- FALSE display_adequacy <- FALSE display_GOF <- FALSE display_loadings <- FALSE display_scores <- FALSE display_PA <- FALSE if (is.character(display)){ #char or char list if (length(grep("Summary",display)==1) > 0){ display_summary <- TRUE } if (length(grep("Descriptives",display)==1) > 0){ display_descriptives <- TRUE } if (length(grep("Adequacy",display)==1) > 0){ display_adequacy <- TRUE } if (length(grep("GOF",display)==1) > 0){ display_GOF <- TRUE } if (length(grep("Loadings",display)==1) > 0){ display_loadings <- TRUE } if (length(grep("Scores",display)==1) > 0){ if (factor_scores == TRUE){ display_scores <- TRUE } else { display_scores <- FALSE } } if (length(grep("PA",display)==1) > 0){ if (PA == TRUE){ display_PA <- TRUE } else { display_PA <- FALSE } } } else { stop("The display argument should be or a logical value, or a character array containing the sections to be printed (see documentation).") } } ################ BEGIN ################## n_content_items<-m-n_SD_items G<-c(n_SD_items,n_content_items) content_items<-c() buff1<-1 buff2<-1 cont<-0 token<-1 ib<-c() iub<-c() headnames<-character(length = r+contSD+contAC) for (i in 
1:(r+contSD+contAC)){ if (contSD==TRUE && contAC==TRUE){ if (i==1){ headnames[i]=("Factor SD") } if (i==2){ headnames[i]=("Factor AC") } if (i>2) { headnames[i]=sprintf("Factor %.0f",i-2) } } if (contSD==TRUE && contAC==FALSE){ if (i==1){ headnames[i]=("Factor SD") } else { headnames[i]=sprintf("Factor %3.0f",i-1) } } if (contSD==FALSE && contAC==TRUE){ if (i==1){ headnames[i]=("Factor AC") } else { headnames[i]=sprintf("Factor %.0f",i-1) } } if (contSD==FALSE && contAC==FALSE){ headnames[i]=sprintf("Factor %.0f",i) } } itemnames<-character(length = m) #define the subscripts for SD items, content items, balanced subset and unbalanced subset for (i in 1:m){ if (SD_items[buff1]==i){ cont=cont+1 itemnames[buff1]<-sprintf("Item %3.0f",i) if (buff1<n_SD_items){ buff1=buff1+1 } else { token<-0 } } else { content_items<-c(content_items,i) itemnames[i+n_SD_items-buff1+token]<-sprintf("Item %3.0f",i) if (buff2<=n_unbalanced_items){ if (i==unbalanced_items[buff2]){ buff2=buff2+1 iub=c(iub,i-cont) } else { ib=c(ib,i-cont) } } else { ib=c(ib,i-cont) } } } buff_SD<-x[,SD_items] buff_content<-x[,content_items] #adjust target if (missing(target)==F){ target <- target[content_items,] } if (corr==1){ #new matrix with the SD items at the beginning x<-cbind(buff_SD,buff_content) R<-cor(x) } else if (corr==2){ #new matrix with the SD items at the beginning x<-cbind(buff_SD,buff_content) #Polychoric matrix R<-(psych::polychoric(x,smooth = FALSE, correct = FALSE))$rho #check if the matrix is positive definite D<-eigen(R)$values smoothing_done <- FALSE if (min(D)<0){ #R<-psych::cor.smooth(R,eig.tol = 0) # Bentler and Yuan (2011) smoothing: R_smooth <- fungible::smoothBY(R, const = 1)$RBY # smooth_indices = R_smooth != R # smooth_v = 0 # for (i in 1:m){ # if (R_smooth[i,i] != R[i,i]){ # smooth_v = smooth_v + 1 # } # } # # R_smooth[R!=R] smoothing_done <- TRUE R <- R_smooth } #Thresholds xmin<-apply(x,2,min) xmax<-apply(x,2,max) THRES<-thresholds(x,xmin,xmax) m<-size(THRES)[1] k<-size(THRES)[2] THRES<-t(cbind(THRES,matrix(0,m,1))) } else { new_order<-c(SD_items,content_items) R<-R[new_order,new_order] } #ib<-ib-G[1] #iub<-iub-G[1] m<-dim(R) m<-m[2] Rf<-EFA.MRFA::mrfa(R,dimensionality = 1,display = FALSE)$Matrix dRr<-diag(Rf) ##########REDUCED MATRIX########### Rr<-R-diag(diag(R))+diag(dRr) ##########Social Desirability########### if (contSD==TRUE){ Rds<-Rr[1:G[1],1:G[1]] out_fa<-suppressMessages(psych::fa(Rds,nfactors=1,rotate="none", covar=TRUE)) Ads<-out_fa$loadings k<-Ads[1] rm<-transpose(Rds[2:G[1]]) prod<-solve(transpose(rm)%*%rm)*k lambda<-matrix(0,G[2],1) j=1 i=G[1]+1 for(i in 1:G[2]+G[1]){ ri<-Rr[2:G[1],i] lambda[j]=ri%*%rm*prod j=j+1 } #SOCIAL DESIRABILITY FACTOR# Pds<-rbind(Ads,lambda) Rr1<-Pds%*%transpose(Pds) #Rae: Matrix without SD Rae<-Rr-Rr1 a<-G[1]+1 b<-G[1]+G[2] Rae<-Rae[a:b,a:b] } else { Rae<-Rr } if (contAC==TRUE){ #BALANCED CORE# m_b<-length(ib) Rae_b<-Rae[ib,ib] #Ten Berge's Method #Computing centroid using balanced core (Acquiescence) u<-matrix(1,m_b,1) a1<-Rae_b%*%u a2<-(transpose(u)%*%Rae_b%*%u)^(1/2) a<-a1 %*% (1/a2) #a<-Rae_b%*%u/(transpose(u)%*%Rae_b%*%u)^(1/2) #a<-matrix(0,m_b,1) #ACQUIESCENCE FACTOR# a_g<-matrix(0,m_b,1) m_k<-length(iub) a_g[ib]<-a if(m_k>0){ for(i in 1:m_k){ buff1<-transpose(Rae[iub[i],ib]) buff2<-(transpose(u)%*%Rae_b%*%u)^(1/2) a_g[iub[i]]<-sum(buff1)/buff2 } } Rr2<-matrix(a_g)%*%t(a_g) #Re <- matrix without Acquiescence and SD Re<-Rae-Rr2 } else { Re<-Rae } # CONTENT FACTORS out_eigen<-eigen(Re) VV<-out_eigen$vectors[,1:r] if (r==1){ VV<-transpose(VV) } 
LL<-diag(out_eigen$values[1:r]) if (r==1){ LL<-out_eigen$values[1:r] } A<-VV%*%sqrt(LL) #FINAL FACTORIAL PATTERN# buff<-matrix(0,G[1],contAC+r) if (contAC==TRUE){ a_gA<-cbind(a_g,A) buff<-rbind(buff,a_gA) } else { buff<-rbind(buff,A) } if (contSD==TRUE){ P<-cbind(Pds,buff) } else { P<-buff } #ROTATION Pc<-P[(G[1]+1):(G[1]+G[2]),(1+contSD+contAC):(contSD+contAC+content_factors)] # Content factors #Explained Common Variance proportion Rf<-EFA.MRFA::mrfa(R,dimensionality = r+contSD+contAC,display = FALSE)$Matrix dRr<-transpose(diag(Rf)) EV<-transpose(as.numeric(transpose(diag(t(P)%*%P))/sum(dRr))) #Produced final matrix Rpf<-P%*%t(P) Rpf2<-Rpf + diag(1,size(R)[1]) - diag(diag(Rpf)) #Residual final matrix Ref<-R-Rpf #KMO adeq<-EFA.MRFA::adequacymatrix(R,N) if (adeq$kmo_index < 0.5 && corr == 2){ stop(sprintf("The KMO obtained using Polychoric correlation is inacceptable. Please, select Pearson Correlation. KMO = %7.5f", adeq$kmo_index)) } ### GOF INDICES: Deprecated, instead using robust fit indices by lavaan # # #GFI # RES<-R-Rpf # RES<-RES-diag(diag(RES)) # R0<-R-diag(diag(R)) # GFI<-1-(sum(diag(RES%*%RES)))/(sum(diag(R0%*%R0))) # # #AGFI # k<-3 # p<-m # Dk<-(1/2 * ((p-k) * (p-k+1)) - p) # AGFI<-1 - ((1 - GFI)*( (p*(p-1)/2)/Dk)) # #RMSR A<-matrix(0,m*(m-1)/2,1) h<-1 for (i in 1:m){ for (j in (i+1):m){ if (j>m){ break } A[h]<-Ref[j,i] h<-h+1 } } me<-mean(A) s<-apply(A,2,sd) RMSR<-sqrt(s*s+me*me) kelley<-sqrt(mean(diag(R))/(N-1)) Pc<-P[(G[1]+1):m,(contSD+contAC+1):(contSD+contAC+r)] if (rotat=="none"){ #no rotation buffT<-Pc } else { if (rotat_package==1){ #GPArotation #check if there is a rotation target if (missing(target)==FALSE){ if (rotat=="pstT"||rotat=="pstQ"){ #Weights required W=(array(1, dim(Pc))) W[target>0]=0 W[target<0]=0 W[target==0]=1 buffT<-eval(parse(text=paste("GPArotation::",as.name(rotat),"(Pc, W=W, Target=target)",sep=""))) Pc<-buffT$loadings } else { buffT<-eval(parse(text=paste("GPArotation::",as.name(rotat),"(Pc, Target=target)",sep=""))) Pc<-buffT$loadings } } else { buffT<-eval(parse(text=paste("GPArotation::",as.name(rotat),"(Pc)",sep=""))) Pc<-buffT$loadings } } else if(rotat_package==2){ #stats buffT<-eval(parse(text=paste("stats::",as.name(rotat),"(Pc)",sep=""))) Pc<-buffT$loadings } else if (rotat_package==3){ #PCovR buffT<-eval(parse(text=paste("PCovR::",as.name(rotat),"(Pc)",sep=""))) Pc<-buffT$loadings if (rotat=="promin"){ U<-buffT$U } } } P[(G[1]+1):(G[1]+G[2]),(1+contSD+contAC):(contSD+contAC+content_factors)]<-Pc #round to 8 decimals P<-round(P,8) #Correlation between content factors PHI_total=diag(r+contSD+contAC) if (rotat=="promin"){ #promin PHI<-t(U)%*%U PHI_total[(contSD+contAC+1):(contSD+contAC+r),(contSD+contAC+1):(contSD+contAC+r)]<-PHI } else { if ("Phi" %in% names(buffT)){ #oblique rotations PHI<-buffT$Phi PHI_total[(contSD+contAC+1):(contSD+contAC+r),(contSD+contAC+1):(contSD+contAC+r)]<-PHI } else { #none or orthogonal rotation, no PHI PHI<-diag(r) } } if (r==1 && (size(P)[1]==1)){ if (contSD==0 && contAC == 0){ P <- transpose(P) } Pc<-transpose(Pc) } if (r==1){ if (any(sum(P[,(contSD+contAC+1):(contSD+contAC+r)])<0)){ if (sum(P[,1+contSD+contAC])<0){ P[,1+contSD+contAC] <- -P[,1+contSD+contAC] PHI[,1] <- -PHI[,1] PHI[1,] <- -PHI[1,] } } } else { if (any(colSums(P[,(contSD+contAC+1):(contSD+contAC+r)])<0)){ for (i in 1:(r)){ if (sum(P[,i+contSD+contAC])<0){ P[,i+contSD+contAC] <- -P[,i+contSD+contAC] PHI[,i] <- -PHI[,i] PHI[i,] <- -PHI[i,] } } PHI_total[(contSD+contAC+1):(contSD+contAC+r),(contSD+contAC+1):(contSD+contAC+r)]<-PHI } } if (r==1){ 
bent<-1 ls_index<-1 } else { # Bentler's simplicity C <- Pc*Pc D <- sqrt(diag(diag(t(C)%*%C))) D2 <- diag(diag(D)^(-1)) S <- D2 %*% transpose(C) %*% C %*% D2 bent <- det(S) # Lorenzo-Seva simplicity index L <- Pc D <- diag(diag(t(L)%*%L)) D2 <- diag(diag(D)^(-1)) H <- diag(diag(L%*%D2%*%t(L))) D3 <- diag(diag(D)^(-1/2)) H3 <- diag(diag(H)^(-1/2)) B <- H3 %*% L %*% D3 C <- B*B p <- size(C)[1] r <- size(C)[2] s <- 0 for (i in 1:p) { for (j in 1:r) { s <- s + (C[i,j]+0.000001)^(10*C[i,j]) } } s <- s/(p*r) e <- 0 for (i in 1:p) { for (j in 1:r) { e <- e + ((1/r)+0.000001)^(10*(1/r)) } } e <- e/(p*r) ls_index <- (s-e)/(1-e) } ################################################### ## Robust fit indices using lavaan testing ## ## We are going to use lavaan for computing the chi squares and degrees of freedom, ## since the indices are estimated using the wrong number of dof (like all the values are fixed) #TESTING P_lavaan <- zapsmall(matrix(round(P,2),nrow = m, (r+contSD+contAC))) rownames(P_lavaan)<- colnames(as.data.frame(x)) # preventing issues with numerical matrices for (i in 1:(r+contSD+contAC)){ #for each factor if (i ==1){ if (contSD==TRUE){ buff_names <- c("SD") } else{ if (contSD==FALSE && contAC==TRUE){ buff_names <- c("AC") } else { buff_names <- paste("F", toString(i-contSD-contAC), sep ="") } } } else { if (i == 2 && contSD==TRUE && contAC==TRUE){ buff_names <- c(buff_names,"AC") } else { buff_names <- c(buff_names,paste("F", toString(i-contSD-contAC), sep ="")) } } } colnames(P_lavaan)=buff_names terms <- vector() for (i in 1:(r+contSD+contAC)) { terms[i] <- paste0(colnames(P_lavaan)[i],"=~ ", paste0(c(P_lavaan[,i]), "*", names(P_lavaan[,1]), collapse = "+")) } terms <- paste(terms, collapse = "\n") #define correlations between factors if (rotat == "promin" || "Phi" %in% names(buffT)){ #oblique rotations, restrict SD and AC and fix PHI buff_terms <- vector() if (contSD == TRUE && contAC == FALSE){ for (i in 1:(r)){ buff_terms[i] <- paste0("SD ~~ 0*", colnames(P_lavaan)[i+1]) } } if (contSD == TRUE && contAC == TRUE){ for (i in 1:(r+contAC)){ buff_terms[i] <- paste0("SD ~~ 0*", colnames(P_lavaan)[i+1]) } for (j in 1:(r)){ buff_terms[j+r+contAC] <- paste0("AC ~~ 0*", colnames(P_lavaan)[j+2]) } } if (contSD == FALSE && contAC == TRUE){ for (i in 1:(r)){ buff_terms[i] <- paste0("AC ~~ 0*", colnames(P_lavaan)[i+1]) } } if (contSD == FALSE && contAC == FALSE){ #dont restrict anything } buff_terms2 <- vector() # fix the ones for (i in 1:(r+contSD+contAC)){ buff_terms2[i] <- paste0(colnames(P_lavaan)[i]," ~~ 1*" ,colnames(P_lavaan)[i]) } buff_terms3 <- vector() # the correlations between content factors b <- 0 for (i in 1:(r-1)){ for (j in i:(r-1)){ buff_terms3[b+1] <- paste0(colnames(P_lavaan)[i+contSD+contAC], "~~", PHI[j+1,i], "*", colnames(P_lavaan)[j+contSD+contAC+1]) b <- b+1 } } buff_terms <- paste(buff_terms, collapse = "\n") buff_terms2 <- paste(buff_terms2, collapse = "\n") buff_terms3 <- paste(buff_terms3, collapse = "\n") terms <- paste(terms, "\n", buff_terms, "\n", buff_terms2, "\n", buff_terms3) if (corr == 2){ suppressWarnings(fit <- lavaan::cfa(terms, data=x, std.lv=T, ordered = c(colnames(x)), estimator = "ULSM")) } else { suppressWarnings(fit <- lavaan::cfa(terms, data=x, std.lv=T, estimator = "ULSM")) } } else { #none or orthogonal rotation, no PHI if (rotat == "none"){ if (corr == 2){ suppressWarnings(fit <- lavaan::cfa(terms, data=x, std.lv=T, ordered = c(colnames(x)), estimator = "ULSM")) } else { suppressWarnings(fit <- lavaan::cfa(terms, data=x, std.lv=T, estimator = 
"ULSM")) } } else { #orthogonal if (corr == 2){ suppressWarnings(fit <- lavaan::cfa(terms, data=x, std.lv=T, ordered = c(colnames(x)), estimator = "ULSM", orthogonal = TRUE)) } else { suppressWarnings(fit <- lavaan::cfa(terms, data=x, std.lv=T, estimator = "ULSM", orthogonal = TRUE)) } } } gof_lavaan <- lavaan::fitmeasures(fit) # we need the chi squared scaled, the baseline chi square scaled and the degrees of freedom chi_0 <- as.numeric(gof_lavaan["baseline.chisq.scaled"]) df_0 <- as.numeric(gof_lavaan["baseline.df.scaled"]) chi_model <- as.numeric(gof_lavaan["chisq.scaled"]) if (contSD == TRUE){ #not all parameters are free free_parameters <- ((contSD+contAC+r) * m) - (n_SD_items * (r + contAC)) } else { #all parameters are free free_parameters <- ((contSD+contAC+r) * m) } # calculate the degrees of freedom from the model df_model <- df_0 - free_parameters gof_warning <- FALSE if (df_model <=0){ warning("The model can not be properly identified with that few items, goodness of fit indices are not available.") TLI <- NaN CFI <- NaN RMSEA <- NaN GFI <- NaN gof_warning <- TRUE } else { # ROBUST TLI TLI <- ((chi_0 / df_0) - (chi_model / df_model)) / ((chi_0 / df_0) - 1) if (TLI >= 1){ TLI <- 0.999 } # ROBUST CFI CFI <- ((chi_0 - df_0) - (chi_model - df_model)) / (chi_0 - df_0) if (CFI >= 1){ CFI <- 0.999 } # ROBUST RMSEA #check if chi_model - df_model < 1 if ((chi_model - df_model) > 1){ RMSEA <- (sqrt(chi_model - df_model)) / (sqrt(df_model * (N - 1))) } else { RMSEA <- 0.0001 } # GFI using the model chi GFI <- 1 - (chi_model / chi_0) if (GFI >= 1){ GFI <- 0.999 } } ################################################### # FACTOR SCORES ESTIMATION if (factor_scores==TRUE){ if (corr==1){ #continuous out<-eap_continuous_obli(x,P,PHI_total) th<-out$th th_li<-out$th_li th_ls<-out$th_ls se<-diag(out$se) reli<-as.vector(out$reli) } else { sigj <- diag(sqrt(diag(diag(R - P%*%PHI_total%*%t(P))))) g2 <- size(P)[2] x<-as.matrix(x) out<-reap_grad_obli(x,P,PHI_total,THRES,sigj,displayL) th<-out$th th_li<-out$th_li th_ls<-out$th_ls se<-out$se reli<-out$reli incons <- out$incons } } #set the proper headers and rownames to all the matrices rownames(P)<-itemnames colnames(P)<-headnames rownames(Ref)<-itemnames colnames(Ref)<-itemnames rownames(Rpf2)<-itemnames colnames(Rpf2)<-itemnames if (factor_scores==TRUE){ colnames(th)<-headnames colnames(th_li)<-headnames colnames(th_ls)<-headnames if (corr==2){ colnames(se)<-headnames colnames(reli)<-headnames } } colnames(EV)<-c("ECV") rownames(EV)<-headnames colnames(dRr)<-c("Comunalities") rownames(dRr)<-itemnames colnames(PHI)<-headnames[(contSD+contAC+1):(r+contSD+contAC)] rownames(PHI)<-headnames[(contSD+contAC+1):(r+contSD+contAC)] if (factor_scores==TRUE){ if (corr == 2){ precision_matrix <- array(0,dim=c(N,6,(r+contSD+contAC))) colnames(precision_matrix) <- c("Factor Score","90% lower CI","90% upper CI","PSD","Reliability", "Inconsistent") for (j in 1:(r+contSD+contAC)){ precision_matrix[,,j] <- cbind(th[,j],th_li[,j],th_ls[,j],se[,j],reli[,j],incons[,j]) } matrices<-list("loadings"=P,"Phi"=PHI_total,"Factor_scores"=th,"Precision_scores" =precision_matrix,"comunalities"=dRr,"ECV"=EV,"reduced"=Ref,"produced"=Rpf2,"RMSEA" = RMSEA,"Chi"=chi_model,"TLI"=TLI,"CFI"=CFI,"GFI"=GFI,"RMSR"=RMSR,"kelley"=kelley) } else { matrices<-list("loadings"=P,"Phi"=PHI_total,"Factor_scores"=th,"comunalities"=dRr,"ECV"=EV,"reduced"=Ref,"produced"=Rpf2,"RMSEA" = RMSEA,"Chi"=chi_model,"TLI"=TLI,"CFI"=CFI,"GFI"=GFI,"RMSR"=RMSR,"kelley"=kelley) } } else { 
matrices<-list("loadings"=P,"Phi"=PHI_total,"comunalities"=dRr,"ECV"=EV,"reduced"=Ref,"produced"=Rpf2,"RMSEA" = RMSEA,"Chi"=chi_model,"TLI"=TLI,"CFI"=CFI,"GFI"=GFI,"RMSR"=RMSR,"kelley"=kelley) } ################## PATH DIAGRAM ################## if (path == T){ lambdas <- fit@Model@GLIST$lambda lambdas[which(lambdas > -.2 & lambdas < .2)] = 0.000 fit@Model@GLIST$lambda <- lambdas lambdas2 <- fit@ParTable$est lambdas3 <- lambdas2[1:(m*(r+contSD+contAC))] # REARRANGE ITEMS ORDER # lambdas4 <- matrix(lambdas3, ncol = (r+contSD+contAC)) # lambdas_cont <- lambdas4[,((1+contSD+contAC) : (r+contSD+contAC))] # # max_loading <- max.col(abs(lambdas_cont)) # # n_max_factor <- vector() # pos <- matrix(nrow = m, ncol = r) # new_order <- vector() # # for (i in 1:r){ # buff_sum <- sum(max_loading == i) # # pos[,i] <- order(lambdas_cont[,i],decreasing = T) # } # # for (i in 1:m){ # max_loading <- which.max(abs(lambdas_cont[i,])) # new_order[i] <- pos[i,max_loading] # } # # lambdas5 <- lambdas4[new_order,] lambdas5 <- lambdas3 #lambdas3[which(lambdas3 > -.2 & lambdas3 < .2)] = 0.00 AC_index <- c((contSD*m+1):(m*(contSD+contAC))) for (i in 1:(m*(r+contSD+contAC))){ #different criteria for AC if (any(i == AC_index) ){ if (lambdas5[i] > -.2 && lambdas5[i] < .2){ lambdas5[i] <- 0.00 } } else { if (lambdas5[i] > -.3 && lambdas5[i] < .3){ lambdas5[i] <- 0.00 } } } fit@ParTable$est[1:(m*(r+contSD+contAC))] <- lambdas5 if (contAC == T){ base_plot <- c("semPaths(fit, what = 'est', layout = 'tree3', intercepts = F, residuals = F, rotation = 1, fade = F, exoCov = F, edge.width = 0.25, edge.label.cex = 1, ask = FALSE, mar = c(3,1,3,1), thresholds = FALSE,") } else { base_plot <- c("semPaths(fit, what = 'est', layout = 'tree3', intercepts = F, residuals = F, rotation = 1, fade = F, exoCov = F, edge.width = 0.25, edge.label.cex = 1, ask = FALSE, mar = c(3,3,3,3), thresholds = FALSE,") } if (contAC == T){ base_plot <- paste(base_plot, " bifactor = 'AC',") } if (m < 20){ base_plot <- paste(base_plot, " sizeMan = 5,") } if (m > 20 && m <= 30){ base_plot <- paste(base_plot, " sizeMan = 4,") } if (m > 30 && m <= 40){ base_plot <- paste(base_plot, " sizeMan = 3,") } ############ NODE COLOR buff_color <- character(5) buff_color[1] <- 'color = list( lat = c(' if (contSD == T && contAC == T){ buff_color[1] <- 'color = list( lat = c(rgb(255,200,135, maxColorValue = 255),' buff_color[2] <- 'rgb(204,110,110, maxColorValue = 255),' } if (contSD == T && contAC == F){ buff_color[1] <- 'color = list( lat = c(rgb(255,200,135, maxColorValue = 255),' } if (contSD == F && contAC == T){ buff_color[1] <- 'color = list( lat = c(rgb(204,110,110, maxColorValue = 255),' #buff_color[2] <- 'rgb(255,200,135, maxColorValue = 255),' } if (contSD == F && contAC == F){ buff_color[1] <- 'color = list( lat = c(' } n_colors <- vector() n_colors[1] <- 'rgb(180,235,215, maxColorValue = 255),' #GREEN n_colors[2] <- 'rgb(200,205,235, maxColorValue = 255),' #BLUE n_colors[3] <- 'rgb(255,180,180, maxColorValue = 255),' #PINK n_colors[4] <- 'rgb(128,206,225, maxColorValue = 255),' #AZUL ANA n_colors[5] <- 'rgb(120,220,120, maxColorValue = 255),' #VERDE MANZANA n_colors2 <- vector() n_colors2[1] <- 'rgb(14,165,135, maxColorValue = 255),' #GREEN n_colors2[2] <- 'rgb(130,105,165, maxColorValue = 255),' #BLUE n_colors2[3] <- 'rgb(205,100,110, maxColorValue = 255),' #PINK n_colors2[4] <- 'rgb(40,80,180, maxColorValue = 255),' #AZUL ANA n_colors2[5] <- 'rgb(60,140,60, maxColorValue = 255),' #VERDE MANZANA for (i in (1+contSD+contAC):(r+contSD+contAC)){ #for each 
factor, change the colors of the nodes if (i == 1){ #no SD or AC buff_color[1] <- sprintf('color = list( lat = c(%s',n_colors[i-(contSD+contAC)]) } else { buff_color[i] <- n_colors[i-(contSD+contAC)] } } #remove last ',' buff_color[i] <- substr(buff_color[i],1,nchar(buff_color[i])-1) buff_color[i+1] <- '),' buff_color <- paste(buff_color, collapse = '') ########### END NODE COLOR ########### ITEM COLOR if (contSD == T){ #SD markers with different color, loop for each item buff_color2 <- vector() for (i in 1:(size(SD_items)[2])){ if (i == 1){ buff_color2[1] <- ' man = c(rgb(255,200,135, maxColorValue = 255),' } else { buff_color2[i] <- 'rgb(255,200,135, maxColorValue = 255),' } } for (j in 1:(m-i)){ buff_color2[j+i] <- 'rgb(230,230,230, maxColorValue = 255),' } #remove last ',' buff_color2[j+i] <- substr(buff_color2[j],1,nchar(buff_color2[j])-1) buff_color2[j+i+1] <- ')),' buff_color2 <- paste(buff_color2,collapse = '') } else { buff_color2 <- 'man = rgb(230,230,230, maxColorValue = 255)),' } ########## END ITEM COLOR ########## EDGE COLOR buff_color3 <- vector() if (contSD == T && contAC == T){ for (i in 1:m){ ### SD ### # SD loadings if (i == 1){ buff_color3[1] <- 'edge.color = c(rgb(225,140,75, maxColorValue = 255),' } else { buff_color3[i] <- 'rgb(225,140,75, maxColorValue = 255),' } } for (j in 1:m){ ### AC ### buff_color3[j+i] <- 'rgb(180,90,90, maxColorValue = 255),' } for (k in 1:(m*r)){ ## content ### if (k <= m){ # 1st content factor buff_color3[k+i+j] <- n_colors2[1] } if ((k <= (m*2)) && (k > m)){ buff_color3[k+i+j] <- n_colors2[2] } if ((k <= (m*3)) && (k > (m*2))){ buff_color3[k+i+j] <- n_colors2[3] } if ((k <= (m*4)) && (k > (m*3))){ buff_color3[k+i+j] <- n_colors2[4] } if ((k <= (m*5)) && (k > (m*4))){ buff_color3[k+i+j] <- n_colors2[5] } } } if (contSD == T && contAC == F){ for (i in 1:m){ ### SD ### # SD loadings if (i == 1){ buff_color3[1] <- 'edge.color = c(rgb(225,140,75, maxColorValue = 255),' } else { buff_color3[i] <- 'rgb(225,140,75, maxColorValue = 255),' } } j <- 0 for (k in 1:(m*r)){ ## content ### if (k <= m){ # 1st content factor buff_color3[k+i+j] <- n_colors2[1] } if ((k <= (m*2)) && (k > m)){ buff_color3[k+i+j] <- n_colors2[2] } if ((k <= (m*3)) && (k > (m*2))){ buff_color3[k+i+j] <- n_colors2[3] } if ((k <= (m*4)) && (k > (m*3))){ buff_color3[k+i+j] <- n_colors2[4] } if ((k <= (m*5)) && (k > (m*4))){ buff_color3[k+i+j] <- n_colors2[5] } #buff_color3[k+i+j] <- 'rgb(40,160,60, maxColorValue = 255),' } } if (contSD == F && contAC == T){ i <- 0 for (j in 1:m){ ### AC ### if (j == 1){ buff_color3[1] <- 'edge.color = c(rgb(180,90,90, maxColorValue = 255),' } else { buff_color3[j+i] <- 'rgb(180,90,90, maxColorValue = 255),' } } for (k in 1:(m*r)){ ## content ### if (k <= m){ # 1st content factor buff_color3[k+i+j] <- n_colors2[1] } if ((k <= (m*2)) && (k > m)){ buff_color3[k+i+j] <- n_colors2[2] } if ((k <= (m*3)) && (k > (m*2))){ buff_color3[k+i+j] <- n_colors2[3] } if ((k <= (m*4)) && (k > (m*3))){ buff_color3[k+i+j] <- n_colors2[4] } if ((k <= (m*5)) && (k > (m*4))){ buff_color3[k+i+j] <- n_colors2[5] } #buff_color3[k+i+j] <- 'rgb(40,160,60, maxColorValue = 255),' } } if (contSD == F && contAC == F){ i <- 0 j <- 0 for (k in 1:(m*r)){ ## content ### if (k == 1){ buff_color3[1] <- sprintf('edge.color = c(%s',n_colors2[1]) } else { if (k <= m){ # 1st content factor buff_color3[k+i+j] <- n_colors2[1] } if ((k <= (m*2)) && (k > m)){ buff_color3[k+i+j] <- n_colors2[2] } if ((k <= (m*3)) && (k > (m*2))){ buff_color3[k+i+j] <- n_colors2[3] } if ((k <= (m*4)) && (k > 
(m*3))){ buff_color3[k+i+j] <- n_colors2[4] } if ((k <= (m*5)) && (k > (m*4))){ buff_color3[k+i+j] <- n_colors2[5] } #buff_color3[k+i+j] <- 'rgb(40,160,60, maxColorValue = 255),' } } } #remove last ',' buff_color3[i+j+k] <- substr(buff_color3[i+j+k],1,nchar(buff_color3[i+j+k])-1) buff_color3[i+j+k+1] <-'),' buff_color3 <- paste(buff_color3,collapse = '') ########## END EDGE COLOR ########## LABEL POSITION # Indistinctly of SD and content, AC independent labels_index <- .8/(r+contSD) if (labels_index == .8){ labels_index <- .6 } buff_labels_pos <- vector() # if ((r+contSD) == 1){ # buff_labels_pos <- 'edge.label.position = 0.5' # } # else{ buff <- 0 buff2 <- 0.0 incr <- signif(labels_index/4,2) n_jumps <- 1 for (i in 1:(r+contSD+contAC)){ for (j in 1:m){ if (i == 1 && j == 1){ buff_labels_pos[1] <- sprintf(' edge.label.position = c(%.2f ,',(1-labels_index+buff2)) pos <- 2 if (n_jumps != 3){ buff2 <- buff2 + incr n_jumps <- n_jumps + 1 } else { buff2 <- 0.0 n_jumps <- 1 } #buff2 <- - buff2 } else { if (i == 2 && contAC == T) {#AC buff_labels_pos[pos] <- sprintf('%.2f,',0.6+buff2) if (n_jumps != 3){ buff2 <- buff2 + incr n_jumps <- n_jumps + 1 } else { buff2 <- 0.0 n_jumps <- 1 } #buff2 <- - buff2 pos <- pos + 1 buff <- 1 } else { buff_labels_pos[pos] <- sprintf('%.2f ,', (1-labels_index*(i-buff) + buff2)) pos <- pos + 1 if (n_jumps != 3){ buff2 <- buff2 + incr n_jumps <- n_jumps + 1 } else { buff2 <- 0.0 n_jumps <- 1 } #buff2 <- - buff2 } } } } #remove last ',' buff_labels_pos[i*j] <- substr(buff_labels_pos[i*j],1,nchar(buff_labels_pos[i*j])-1) buff_labels_pos[(i*j)+1] <-')' buff_labels_pos <- paste(buff_labels_pos,collapse = '') #} ######### END LABEL POSITION plot_final <- paste(base_plot,'',buff_color,'',buff_color2,'',buff_color3,buff_labels_pos,')') result_path <- tryCatch({eval(parse(text=plot_final))}, error = function(e) message("\nPath diagram failed.")) } ##################### Printing time ##################### if (displayL==F){ invisible(matrices) } else if (displayL==T){ if (display_summary == T){ cat('\n\n') cat('DETAILS OF ANALYSIS\n\n') cat(sprintf('Number of participants : %5.0f \n',N)) cat(sprintf('Number of items : %5.0f \n',m)) if (contSD==1){ buff<-'' f2<-size(SD_items)[2] for (i in 1:f2){ if (i==1){ buff2<-sprintf('%.0f',SD_items[i]) } else { buff2<-sprintf(', %.0f',SD_items[i]) } buff<-paste(buff,buff2,sep = "") } cat(sprintf('Items selected as SD items : %s',buff)) cat('\n') } if (contAC==1){ f2=size(unbalanced_items)[2] } if (contAC==1 && f2>0){ buff<-'' for (i in 1:f2){ if (i==1){ buff2<-sprintf('%.0f',unbalanced_items[i]) } else { buff2<-sprintf(', %.0f',unbalanced_items[i]) } buff<-paste(buff,buff2,sep = "") } cat(sprintf('Items selected as unbalanced : %s',buff)) cat('\n') } if (corr==0){ cat('Dispersion Matrix : User Defined') } else if (corr==1){ cat('Dispersion Matrix : Pearson Correlations') } else if (corr==2){ cat('Dispersion Matrix : Polychoric Correlations') } cat('\n') cat('Method for factor extraction : Unweighted Least Squares (ULS)') cat('\n') cat(sprintf('Rotation Method : %s',rotat)) cat('\n\n') cat('-----------------------------------------------------------------------') } if (display_descriptives == T){ Kur<-moments::kurtosis(x_o) Skew<-moments::skewness(x_o) cat('\n\n') cat('Univariate item descriptives') cat('\n\n') cat('Item Mean Variance Skewness Kurtosis (Zero centered)\n\n') for (i in 1:m){ buff<-sprintf('Item %3.0f ',i) buff1<-sprintf('% 2.3f % 2.3f % 2.3f % 2.3f',mean(as.matrix(x_o[,i])),(sd(as.matrix(x_o[,i]))^2),Skew[i],Kur[i]-3) 
cat(paste(buff,buff1)) cat('\n') } cat('\n') kurmax<-max(Kur)-3 kurmin<-min(Kur)-3 if (kurmax>1 || kurmin<(-1)){ cat('Polychoric correlation is advised when the univariate distributions of ordinal items are\n') cat('asymmetric or with excess of kurtosis. If both indices are lower than one in absolute value,\n') cat('then Pearson correlation is advised. You can read more about this subject in:\n\n') cat('Muthen, B., & Kaplan D. (1985). A Comparison of Some Methodologies for the Factor Analysis of\n') cat('Non-Normal Likert Variables. British Journal of Mathematical and Statistical Psychology, 38, 171-189.\n\n') cat('Muthen, B., & Kaplan D. (1992). A Comparison of Some Methodologies for the Factor Analysis of\n') cat('Non-Normal Likert Variables: A Note on the Size of the Model. British Journal of Mathematical\n') cat('and Statistical Psychology, 45, 19-30. \n\n') } cat('-----------------------------------------------------------------------') } if (display_adequacy == T && display_PA == FALSE){ cat('\n\n') cat('Adequacy of the dispersion matrix') cat('\n\n') cat(sprintf('Determinant of the matrix = %17.15f\n',adeq$d)) cat(sprintf('Bartlett\'s statistic = %7.1f (df = %5.0f; P = %7.6f)\n',adeq$chisq,adeq$df,adeq$p_value)) cat(sprintf('Kaiser-Meyer-Olkin (KMO) test = %7.5f ',adeq$kmo_index)) if (adeq$kmo_index >= 0.9){ cat(sprintf('(very good)'))} else if (adeq$kmo_index >= 0.8){ cat(sprintf('(good)'))} else if (adeq$kmo_index >= 0.7){ cat(sprintf('(fair)'))} else if (adeq$kmo_index >= 0.6){ cat(sprintf('(mediocre)'))} else if (adeq$kmo_index >= 0.5){ cat(sprintf('(bad)'))} else { cat(sprintf('(inaceptable)'))} cat('\n\n') cat('-----------------------------------------------------------------------') } if (PA == TRUE){ if (display_PA == TRUE){ cat('\n\n') if (corr == 1){ out_PA <- EFA.MRFA::parallelMRFA(x) } else { out_PA <- EFA.MRFA::parallelMRFA(x, Ndatsets = 100, corr = "Polychoric") } cat('-----------------------------------------------------------------------') } else { if (corr == 1){ out_PA <- EFA.MRFA::parallelMRFA(x, display = FALSE, graph = FALSE) } else { out_PA <- EFA.MRFA::parallelMRFA(x, Ndatsets = 100, corr = "Polychoric", display = FALSE, graph = FALSE) } } # change matrices list if (factor_scores==TRUE){ if (corr == 2){ matrices<-list("loadings"=P,"Phi"=PHI_total,"Factor_scores"=th,"Precision_scores" =precision_matrix,"comunalities"=dRr,"ECV"=EV,"reduced"=Ref,"produced"=Rpf2,"RMSEA" = RMSEA,"Chi"=chi_model,"TLI"=TLI,"CFI"=CFI,"GFI"=GFI,"RMSR"=RMSR,"kelley"=kelley,"PA_Real_Data"=out_PA$Real_Data,"PA_Mean_Random"=out_PA$Mean_random,"PA_Percentile_Random"=out_PA$Percentile_random,"N_factors_mean"=out_PA$N_factors_mean,"N_factors_percentiles"=out_PA$N_factors_percentiles) } else { matrices<-list("loadings"=P,"Phi"=PHI_total,"Factor_scores"=th,"comunalities"=dRr,"ECV"=EV,"reduced"=Ref,"produced"=Rpf2,"RMSEA" = RMSEA,"Chi"=chi_model,"TLI"=TLI,"CFI"=CFI,"GFI"=GFI,"RMSR"=RMSR,"kelley"=kelley,"PA_Real_Data"=out_PA$Real_Data,"PA_Mean_Random"=out_PA$Mean_random,"PA_Percentile_Random"=out_PA$Percentile_random,"N_factors_mean"=out_PA$N_factors_mean,"N_factors_percentiles"=out_PA$N_factors_percentiles) } } else { matrices<-list("loadings"=P,"Phi"=PHI_total,"comunalities"=dRr,"ECV"=EV,"reduced"=Ref,"produced"=Rpf2,"RMSEA" = 
RMSEA,"Chi"=chi_model,"TLI"=TLI,"CFI"=CFI,"GFI"=GFI,"RMSR"=RMSR,"kelley"=kelley,"PA_Real_Data"=out_PA$Real_Data,"PA_Mean_Random"=out_PA$Mean_random,"PA_Percentile_Random"=out_PA$Percentile_random,"N_factors_mean"=out_PA$N_factors_mean,"N_factors_percentiles"=out_PA$N_factors_percentiles) } ##################### Printing time ##################### if (displayL==F){ invisible(matrices) } } if (display_loadings == T || display_GOF == T){ # FACTOR ANALYSIS cat('\n') if (contSD==F && contAC==F){ cat('EXPLORATORY FACTOR ANALYSIS') } if (contSD==T && contAC==F){ cat('EXPLORATORY FACTOR ANALYSIS CONTROLLING SOCIAL DESIRABILITY') } if (contSD==F && contAC==T){ cat('EXPLORATORY FACTOR ANALYSIS CONTROLLING ACQUIESCENCE') } if (contSD==T && contAC==T){ cat('EXPLORATORY FACTOR ANALYSIS CONTROLLING SOCIAL DESIRABILITY AND ACQUIESCENCE') } cat('\n') cat('-----------------------------------------------------------------------') } if (display_GOF == T && gof_warning == FALSE){ cat('\n\n') cat('Robust Goodness of Fit statistics') cat('\n\n') cat(sprintf(' Root Mean Square Error of Approximation (RMSEA) = %4.3f',RMSEA)) cat('\n\n') cat(sprintf(' Robust Mean-Scaled Chi Square with %.0f degrees of freedom = %.3f',df_model,chi_model)) cat('\n\n') cat(sprintf(' Non-Normed Fit Index (NNFI; Tucker & Lewis) = %4.3f', TLI)) cat('\n') cat(sprintf(' Comparative Fit Index (CFI) = %4.3f', CFI)) cat('\n') cat(sprintf(' Goodness of Fit Index (GFI) = %4.3f',GFI)) cat('\n\n') cat('-----------------------------------------------------------------------') cat('\n\n') cat(sprintf(' Root Mean Square Residuals (RMSR) = %5.4f',RMSR)) cat('\n') cat(sprintf('Expected mean value of RMSR for an acceptable model = %5.4f (Kelley\'s criterion)',kelley)) if (RMSR > kelley){ cat('\n(Kelley, 1935,page 146; see also Harman, 1962, page 21 of the 2nd edition)') cat('\nNote: if the value of RMSR is much larger than Kelley\'s criterion value the model cannot be considered as good') } cat('\n\n') cat('-----------------------------------------------------------------------') } if (display_loadings == T){ cat('\n\n') if (rotat=="none"){ cat('Unrotated loading matrix\n\n') } else { cat('Rotated loading matrix\n\n') } prmatrix(round(P,5)) cat('\n') if (r!=1){ if (PHI[1,2]!=0){ cat('Inter-factor correlations\n\n') prmatrix(PHI) cat('\n\n') } cat(sprintf('Bentler\'s simplicity = % 5.4f \n\n',bent)) cat(sprintf('Lorenzo-Seva\'s simplicity = % 5.4f \n\n',ls_index)) } if (display_scores == TRUE){ cat('-----------------------------------------------------------------------') } } if (factor_scores==TRUE){ if (display_scores == TRUE){ if (corr ==2){ reli_good <- matrix(nrow = N,ncol = contSD+contAC+r) j = 1 for (i in 1:N){ if (sum(reli[i,] > 0.10) == (r+contSD+contAC)){ reli_good[j,] <- reli[i,] j = j + 1 } } reli_good <- reli_good[1:(j-1),] } cat('\n\n') cat(sprintf('RELIABILITY OF EAP SCORES\n\n')) prov <- contSD + contAC cat(sprintf('Factor EAP Reliability estimate\n\n')) for (i in 1:(r+prov)){ buff <- sprintf('Factor %.0f :',i-prov) if (i ==1 && contSD == T){ buff <- 'SD :' } if (i == 1 && contSD == F && contAC == T){ buff <- 'Acquiescence :' } if (i == 2 && contSD == T && contAC == T){ buff <- 'Acquiescence :' } if (corr == 1){ buff2 <- sprintf(' %.4f', reli[i]) } else { buff2 <- sprintf(' %.4f',psych::harmonic.mean(reli_good[,i])) } cat(buff, buff2,'\n') } cat(sprintf('\nPARTICIPANTS\'S SCORES ON FACTORS:\n')) cat(sprintf('Rescaled to mean = 50 and standard deviation = 10 in the sample\n\n')) prmatrix(th*10+50) if (corr == 2){ # If printing precision 
matrices for each factor, the output will be too large, the precision matrix are being saved in output$Precision_matrix cat(sprintf('\nNOTE: The precision matrices for the %2.0f factors were not printed for preventing console spacing issues.\n',contSD+contAC+r)) cat(sprintf('These matrices are stored in $Precision_matrix in the output variable.\n\n')) # for (j in 1:(r+contSD+contAC)){ # # cat('\n\n') # buff <- sprintf('FACTOR: CONTENT %1.0f ',j-prov); # # if (j ==1 && contSD == T){ # buff <- 'FACTOR: SOCIAL DESIRABILITY ' # } # if (j == 1 && contSD == F && contAC == T){ # buff <- 'FACTOR: ACQUIESCENCE ' # } # if (j == 2 && contSD == T && contAC == T){ # buff <- 'FACTOR: ACQUIESCENCE ' # } # # cat(buff, '\n\n') # # cat(sprintf('RELIABILITY OF EAP FACTOR SCORES: % .3f \n\n',psych::harmonic.mean(reli[,j]))) # cat('Participant Estimate Approximate 90% Posterior Reliability\n') # cat(' factor score confidence interval SD (PSD)\n\n') # # for (i in 1:N){ # buff <- sprintf('%4.0f % 7.3f ',i,th[i,j]) # buff1 <- sprintf('(% 7.3f % 7.3f) ',th_li[i,j],th_ls[i,j]) # if (reli[i,j] > 0.10){ # buff2 <- sprintf('% .3f % .3f',se[i,j],reli[i,j]) # } # else{ # buff2 <- sprintf('% .3f Inconsistent responder',se[i,j]) # } # cat(buff,buff1,buff2,'\n') # } # # precision_matrix <- cbind(th[i,],th_li[i,],th_ls[i,],se[i,],reli[i,]) # # # } } } } if (corr == 2 && smoothing_done == TRUE){ # Smoothing performed # cat(sprintf('\nNOTE: The Polychoric/Tetrachoric matrix was not positive definite. An smoothing procedure was applied for %2.0f variables.\n',smoothing_v)) message('\nNOTE: The Polychoric/Tetrachoric matrix was not positive definite. An smoothing procedure was applied (Bentler and Yuan, 2011).\n\n') } invisible(matrices) } }
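## Usage sketch (hypothetical data and item numbers; shown as comments so nothing
## runs at package load). Controls social desirability (at least 4 SD marker items)
## and acquiescence while extracting 2 content factors from the raw scores `myData`:
## ControlResponseBias(myData, content_factors = 2,
##                     SD_items = c(1, 5, 9, 13),    # at least 4 SD markers
##                     unbalanced_items = c(3, 7),   # items outside the balanced core
##                     contSD = TRUE, contAC = TRUE,
##                     corr = "Polychoric", rotat = "promin")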
/scratch/gouwar.j/cran-all/cranData/vampyr/R/ControlResponseBias.R
Nseriesk <- function(N, k){
  # Enumerate all N^k tuples of indices in 1..N (used to index quadrature nodes).
  # Returns an (N^k x k) matrix whose rows run in lexicographic order.
  r <- k
  nod <- N
  contador <- matrix(1, nod^r, r)
  c <- size(contador)[1]
  tmp <- size(contador)[2]
  for (i in 2:c){
    pos <- 1
    again <- 1
    while (again){
      if (contador[i-1, pos] < nod){
        if (pos == 1){
          contador[i, ] <- contador[i-1, ]
        } else {
          contador[i, pos:tmp] <- contador[i-1, pos:tmp]
        }
        contador[i, pos] <- contador[i, pos] + 1
        again <- 0
      } else {
        pos <- pos + 1
      }
    }
  }
  tmp <- numeric()
  for (i in r:1){
    tmp <- rbind(tmp, contador[, i])
  }
  contador <- transpose(tmp)
}
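## Usage sketch (not run):
## Nseriesk(2, 2)
## #      [,1] [,2]
## # [1,]    1    1
## # [2,]    1    2
## # [3,]    2    1
## # [4,]    2    2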
/scratch/gouwar.j/cran-all/cranData/vampyr/R/Nseriesk.R
eap_continuous_obli <- function(X, P, PHI){
  # EAP factor score estimation for continuous responses with oblique factors.
  X <- as.matrix(X)
  P <- as.matrix(P)
  n <- size(X)[1]
  m <- size(P)[1]
  r <- size(P)[2]
  M <- apply(X, 2, mean)
  Sx <- apply(X, 2, sd)
  DIF <- X - (matrix(1, n, 1) %*% M)
  Z <- DIF / (matrix(1, n, 1) %*% Sx)   # standardized scores
  R <- cor(X)
  # Matrix power by repeated squaring (kept as defined; mpower() is used below)
  "%^%" <- function(mat, power){
    base <- mat
    out <- diag(nrow(mat))
    while (power > 1){
      if (power %% 2 == 1){
        out <- out %*% base
      }
      base <- base %*% base
      power <- power %/% 2
    }
    out %*% base
  }
  th <- t(PHI %*% t(P) %*% mpower(R, -1) %*% t(Z))   # EAP factor scores
  U <- diag(diag(R - P %*% PHI %*% t(P)))            # unique variances
  VAR_E <- mpower((mpower(PHI, -1) + t(P) %*% mpower(U, -1) %*% P), -1)
  if (r != 1){
    se <- sqrt(diag(diag(VAR_E)))
  } else {
    se <- sqrt(VAR_E)
  }
  z90 <- 1.64485362695147   # 90% approximate confidence interval
  th_li <- matrix(0, n, r)
  th_ls <- matrix(0, n, r)
  for (i in 1:n){
    for (j in 1:r){
      th_li[i, j] <- th[i, j] - z90*se[j, j]
      th_ls[i, j] <- th[i, j] + z90*se[j, j]
    }
  }
  reli <- diag(1 - VAR_E)
  OUT <- list("th" = th, "th_li" = th_li, "th_ls" = th_ls, "se" = se, "reli" = reli)
  return(OUT)
}
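## Usage sketch (hypothetical inputs; not run): given raw scores X, a loading
## matrix P and factor correlations PHI, returns EAP scores with 90% intervals:
## out <- eap_continuous_obli(X, P, PHI)
## out$th    # N x r matrix of factor scores
## out$reli  # marginal reliability per factor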
/scratch/gouwar.j/cran-all/cranData/vampyr/R/eap_continuous_obli.R
eap_grad_obli<-function(X, LAM, PHI, THRES, sigj,grid,index_nodos){ ni <- length(X) r <- ncol(cbind(LAM)) # Works in case LAM is vector or matrix nnod <- length(grid) hnod <- nrow(index_nodos) d<-grid[2]-grid[1] nume_th <- matrix(0,r,1) nume_se <- matrix(0,r,1) # Computations vectorized from now on: zi.mat <- t(sapply(1:hnod, function(h) grid[index_nodos[h, ]])) # p1: p1.mat <- matrix(0, hnod, ni) ind <- which(X != 1) if (r==1){ tmp1 <- t(zi.mat) %*% LAM[ind] } else { if ((size(ind)[2])==1){ tmp1 <- zi.mat %*% (LAM[ind, ]) } else { tmp1 <- zi.mat %*% t(LAM[ind, ]) } } tmp2 <- tmp1 - matrix(rep(THRES[cbind(X[ind] - 1, ind)], hnod), nrow = hnod, byrow = TRUE) tmp3 <- tmp2 / matrix(rep(sigj[ind], hnod), nrow = hnod, byrow = TRUE) tmp4 <- exp(1.702 * tmp3) tmp5 <- tmp4 / (1+tmp4) tmp6 <- matrix(rep(X[ind]-1, hnod), nrow = hnod, byrow = TRUE) * log(tmp5) p1.mat[, ind] <- tmp6 rm(list=ls(pattern = c("tmp"))) # p2: p2.mat <- matrix(0, hnod, ni) ind <- which(THRES[cbind(X,1:ni)] != 0) if (r==1){ tmp1 <- t(zi.mat) %*% LAM[ind] } else { if ((size(ind)[2])==1){ tmp1 <- zi.mat %*% (LAM[ind, ]) } else { tmp1 <- zi.mat %*% t(LAM[ind, ]) } } tmp2 <- tmp1 - matrix(rep(THRES[cbind(X[ind], ind)], hnod), nrow = hnod, byrow = TRUE) tmp3 <- tmp2 / matrix(rep(sigj[ind], hnod), nrow = hnod, byrow = TRUE) tmp4 <- exp(1.702 * tmp3) tmp5 <- tmp4 / (1+tmp4) tmp6 <- matrix(rep(X[ind], hnod), nrow = hnod, byrow = TRUE) * log(1-tmp5) p2.mat[, ind] <- tmp6 rm(list=ls(pattern = c("tmp"))) # p, L1: p.mat <- p1.mat + p2.mat L1.vec <- rowSums(p.mat) # term2: dospi <-6.2832 tmp1 <-(dospi)^(r/2) # JNT - 'n' in ordnormulti is r, here, right? if (r == 1) tmp2 <- sqrt(PHI) else tmp2 <- sqrt(det(PHI)) tmp3 <- 1 / (tmp1*tmp2) PHIinv <- solve(PHI) tmp4 <- rep(NA, hnod) if (r==1){ for (h in 1:hnod) tmp4[h] <- t(cbind(zi.mat[h])) %*% PHIinv %*% cbind(zi.mat[h]) } else { for (h in 1:hnod) tmp4[h] <- t(cbind(zi.mat[h,])) %*% PHIinv %*% cbind(zi.mat[h,]) } tmp5 <- as.vector(tmp3) * exp(-.5* tmp4) #tmp5 = term1 term2.vec <- tmp5 * d * d rm(list=ls(pattern = c("tmp"))) pp.vec <- exp(L1.vec)*term2.vec deno <- sum(pp.vec) if (r==1){ nume_th <- sum(pp.vec*zi.mat) nume_se <- sum(pp.vec*zi.mat^2) } else { nume_th <- colSums(matrix(rep(pp.vec, r), nrow = hnod, byrow = FALSE) * zi.mat) nume_se <- colSums(matrix(rep(pp.vec, r), nrow = hnod, byrow = FALSE) * (zi.mat^2)) } TH <- nume_th / deno #SE <- sqrt(nume_se/deno - (TH^2)) #RELI <- 1 - SE^2 IPHI <- solve(PHI) VAR_E <- nume_se / deno - (TH^2) VAR_EE <- 1 - VAR_E SE <- sqrt(VAR_E / (1 - IPHI %*% VAR_E)) RELI <- (VAR_EE - SE^2) / VAR_EE OUT <- list("th_i"=TH, "se_i"=SE, "reli_i"=RELI) return(OUT) }
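## Internal worker: EAP estimation for a single response pattern on a fixed
## quadrature grid. Usage sketch mirroring how reap_grad_obli() calls it
## (hypothetical inputs; not run):
## grid <- transpose(seq(-4, 4, 8/7))
## index_nodos <- Nseriesk(7, ncol(LAM))   # all node-index combinations
## eap_grad_obli(X[1, ], LAM, PHI, THRES, sigj, grid, index_nodos)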
/scratch/gouwar.j/cran-all/cranData/vampyr/R/eap_grad_obli.R
ordnormulti <- function(z, PHI){
  # Density of a zero-mean multivariate normal with covariance PHI, evaluated at z.
  n <- nrow(cbind(z))
  dospi <- 6.2832   # 2*pi, as used elsewhere in the package
  tmp1 <- (dospi)^(n/2)
  if (n == 1){
    tmp2 <- sqrt(PHI)
  } else {
    tmp2 <- sqrt(det(PHI))
  }
  tmp3 <- 1/(tmp1*tmp2)
  # An alternative for solve() is chol2inv(chol()); it saves time when PHI is large
  # (not the case here).
  quad <- t(z) %*% solve(PHI) %*% z
  orde <- tmp3*exp(-.5*(quad))
}
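## Usage sketch (not run): at the origin with identity covariance in two
## dimensions the density is 1/(2*pi):
## ordnormulti(c(0, 0), diag(2))   # ~0.1592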
/scratch/gouwar.j/cran-all/cranData/vampyr/R/ordnormulti.R
pconmul <- function(t, lam, thres, sige){
  # Conditional probability of exceeding a threshold in the graded response model,
  # using the logistic approximation to the normal ogive (scaling constant 1.702).
  nume <- sum(lam * t) - thres
  term <- nume/sige
  n <- exp(1.702*term)
  p <- n / (1+n)
}
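## Usage sketch (hypothetical values; not run):
## pconmul(t = 0, lam = c(0.7), thres = 0, sige = 1)   # 0.5 exactly at the threshold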
/scratch/gouwar.j/cran-all/cranData/vampyr/R/pconmul.R
ppnd <- function(P){
  ## PPND produces the normal deviate value corresponding to lower tail area = P.
  #
  #  Licensing:
  #
  #    This code is distributed under the GNU LGPL license.
  #
  #  Modified:
  #
  #    7 November 2017
  #
  #  Author:
  #
  #    Original FORTRAN77 version by J Beasley, S Springer.
  #    R version by David Navarro-Gonzalez.
  #
  #  Reference:
  #
  #    J Beasley, S Springer,
  #    Algorithm AS 111:
  #    The Percentage Points of the Normal Distribution,
  #    Applied Statistics,
  #    Volume 26, Number 1, 1977, pages 118-121.
  #
  #  Parameters:
  #
  #    Input, real P, the value of the cumulative probability
  #    density function. 0 < P < 1.
  #
  #    Output, real VALUE, the normal deviate value with the property that
  #    the probability of a standard normal deviate being less than or
  #    equal to PPND is P.

  SPLIT = 0.42
  A0 = 2.50662823884
  A1 = -18.61500062529
  A2 = 41.39119773534
  A3 = -25.44106049637
  B1 = -8.47351093090
  B2 = 23.08336743743
  B3 = -21.06224101826
  B4 = 3.13082909833
  C0 = -2.78718931138
  C1 = -2.29796479134
  C2 = 4.85014127135
  C3 = 2.32121276858
  D1 = 3.54388924762
  D2 = 1.63706781897
  ZERO = 0.0
  ONE = 1.0
  HALF = 0.5
  IER = 0

  Q = P - HALF
  # Central region: rational approximation in Q*Q
  if (abs(Q) <= SPLIT){
    R <- Q*Q
    PPND <- Q*(((A3*R + A2)*R + A1)*R + A0)/((((B4*R + B3)*R + B2)*R + B1)*R + ONE)
    return(PPND)
  }
  # Tail region: rational approximation in sqrt(-log(R))
  R <- P
  if (Q > ZERO){
    R <- ONE - P
  }
  if (R > ZERO){
    R <- sqrt(-log(R))
    PPND <- (((C3*R + C2)*R + C1)*R + C0)/((D2*R + D1)*R + ONE)
    if (Q < ZERO){
      PPND <- -PPND
    }
    return(PPND)
  }
  PPND = ZERO
  return(PPND)
}
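## Usage sketch (not run): ppnd() mirrors stats::qnorm() to the precision of AS 111:
## ppnd(0.975)   # ~1.96, cf. qnorm(0.975)
## ppnd(0.5)     # 0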
/scratch/gouwar.j/cran-all/cranData/vampyr/R/ppnd.R
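Since `ppnd()` reimplements AS 111 rather than calling `qnorm()`, a quick comparison (assuming `ppnd()` is sourced) shows the two agree to at least four decimal places over the usual range:

```r
ppnd(0.975)   # ~1.95996 (tail branch, since |P - 0.5| > 0.42)
qnorm(0.975)  # 1.959964
ppnd(0.5)     # exactly 0 (central rational approximation)
```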
reap_grad_obli<-function(X, LAM, PHI, THRES, sigj,disp){ m<-size(LAM)[1] r<-size(LAM)[2] #7 nodes grid<-transpose(seq(-4,4,(8/7))) # Number of nodes, for now 7 ni<-7 tmp<-size(grid)[2] index_nodos <- Nseriesk(ni,r) ######### n<-size(X)[1] m<-size(X)[2] th<-numeric() se<-numeric() reli<-numeric() incons <- numeric() #X<-as.numeric(X) ptm_one <- proc.time() for (i in 1:n){ #if (disp==TRUE){ #if (i==1){ #calculate the time elapsed for computing the first one for estimating the total elapsed time # ptm_one <- proc.time() #} #if (i==2){ #waitbar #cat('Computing EAP scores: Please wait \n') #pb <- txtProgressBar(min = 0, max = n-1, style = 3) #} #} if (min(X)==0){ X=X+1 } out <- eap_grad_obli(X[i,],LAM,PHI,THRES,sigj,grid,index_nodos) th <- rbind(th,t(out$th_i)) se <- rbind(se,t(out$se_i)) reli <- rbind(reli,t(out$reli_i)) incons <- rbind(incons,t(as.numeric(out$reli_i < 0.10))) #if (any(out$reli_i < 0.10)){ incons <- rbind(incons, 0)} else {incons <- rbind(incons,1)} #if any value <0.10 if (disp==TRUE){ # if (i==1){ compT <- proc.time() - ptm_one compT<-compT[3] compT<-compT*(n-i)/i secondsInAMinute = 60 secondsInAnHour = 60 * secondsInAMinute secondsInADay = 24 * secondsInAnHour days <- floor(compT / secondsInADay) hourSeconds <- compT %% secondsInADay hours <- floor(hourSeconds / secondsInAnHour) minuteSeconds <- hourSeconds %% secondsInAnHour minutes <- floor(minuteSeconds / secondsInAMinute) remainingSeconds <- minuteSeconds %% secondsInAMinute seconds <- ceiling(remainingSeconds) if (compT > 3600){ if (days >= 1){ #Very very rare, but just to be sure cat("Computing EAP scores Time remaining: +24 hours \r") flush.console() } else { if (hours == 1){ cat("Computing EAP scores. Time remaining: ", hours,"hour, ",minutes, "minutes and ",seconds, "seconds \r") flush.console() } else { cat("Computing EAP scores. Time remaining: ", hours,"hours, ",minutes, "minutes and ",seconds, "seconds \r") flush.console() } } } else{ if (compT >= 60){ cat("Computing EAP scores. Time remaining: ", minutes, "minutes and ",seconds,"seconds \r") flush.console() } if (compT < 60) { cat("Computing EAP scores. Time remaining ",seconds,"seconds \r") flush.console() } } #Sys.sleep(0.01) #if (et.minutes<=1){ # cat('Estimated time for the analysis: less than a minute') #} #if (et.minutes>1 && et.minutes<=1.5) { # cat(sprintf('Estimated time for the analysis: %3.0f minute',round(et.minutes))) #} #if (et.minutes>1.5) { # cat(sprintf('Estimated time for the analysis: %3.0f minutes',round(et.minutes))) #} #cat('\n\n') #} #else{ #Sys.sleep(0.1) # update progress bar #setTxtProgressBar(pb, i-1) #} } } if (disp==TRUE){ cat("\r"," ","\r") #close(pb) } z90<-1.64485362695147 th_li<-matrix(0,n,r) th_ls<-matrix(0,n,r) for (i in 1:n){ for (j in 1:r){ th_li[i,j] <- th[i,j] - z90*se[i,j] th_ls[i,j] <- th[i,j] + z90*se[i,j] } } OUT<-list("th"=th, "th_li"=th_li, "th_ls"=th_ls,"se"=se,"reli"=reli,"incons" = incons) return(OUT) }
/scratch/gouwar.j/cran-all/cranData/vampyr/R/reap_grad_obli.R
size <- function(x = NULL, n = NULL){
  if (is.list(x) & !is.data.frame(x)){
    # x is a list
    return(lapply(x, size))
  } else {
    if (is.null(x)){
      # x is NULL
      if (is.null(n)){
        return(c(0, 0))
      } else {
        return(c(0, 0)[n])
      }
    } else {
      if (is.null(dim(x))){
        # x is a vector
        if (is.null(n)){
          return(c(1, length(x)))
        } else {
          return(c(1, length(x))[n])
        }
      } else {
        # x is a matrix, an array or a data.frame
        if (is.null(n)){
          return(dim(x))
        } else {
          return(dim(x)[n])
        }
      }
    }
  }
}
/scratch/gouwar.j/cran-all/cranData/vampyr/R/size.R
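`size()` mimics MATLAB's size semantics rather than base R's `dim()`, which returns `NULL` for plain vectors. A few illustrative calls:

```r
size(matrix(1:6, 2, 3))  # 2 3
size(1:5)                # 1 5  (vectors are treated as 1 x n row vectors)
size(1:5, 2)             # 5
size(NULL)               # 0 0
size(list(a = 1:2, b = diag(3)))  # lists (not data frames) are sized elementwise
```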
thresholds <- function(X, Xmin, Xmax){
  N <- size(X)[1]
  m <- size(X)[2]
  Ki <- (Xmax - Xmin + 1)
  Kmax <- max(Ki)
  THRES <- matrix(0, m, (max(Ki) - 1))
  FR <- matrix(0, m, Kmax)
  for (i in 1:m){
    Xmi <- min(X[, i])
    for (k in 1:N){
      jth <- 1
      for (j in 1:Kmax){
        if (X[k, i] == (Xmi - 1 + j)){
          jth <- j
        }
      }
      FR[i, jth] <- FR[i, jth] + 1
    }
    FR[i, ] <- FR[i, ] / N
    for (k in 1:(Kmax - 1)){ # parentheses needed: 1:Kmax-1 would evaluate as 0:(Kmax-1)
      tmp1 <- sum(t(FR[i, 1:k]))
      THRES[i, k] <- ppnd(tmp1)
    }
  }
  return(THRES)
}
/scratch/gouwar.j/cran-all/cranData/vampyr/R/thresholds.R
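`thresholds()` is the empirical-CDF inversion step of the graded model: each cut point is the normal deviate of a cumulative category proportion. A small check against `qnorm()` (this assumes `size()` and `ppnd()` from this package are loaded; agreement is up to AS 111 precision):

```r
set.seed(1)
X <- cbind(sample(1:4, 200, replace = TRUE))  # one 4-category item
thresholds(X, Xmin = 1, Xmax = 4)
qnorm(cumsum(table(X) / 200))[1:3]            # same three cut points
```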
transpose <- function(object = NULL){
  if (is.null(object)) stop('transpose: input is not a matrix or a vector.', call. = FALSE)
  if (!is.matrix(object)){
    if (is.atomic(object)){
      mat <- matrix(object, ncol = length(object), byrow = TRUE)
    } else {
      stop('transpose: input is not a matrix or a vector.', call. = FALSE)
    }
  } else {
    mat <- object
  }
  return(t(mat))
}
/scratch/gouwar.j/cran-all/cranData/vampyr/R/transpose.R
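`transpose()` differs from base `t()` only in how it treats plain vectors, coercing them to 1 x n row vectors before transposing:

```r
transpose(c(1, 2, 3))         # 3 x 1 column matrix; t(c(1, 2, 3)) gives 1 x 3
transpose(matrix(1:4, 2, 2))  # identical to t(matrix(1:4, 2, 2))
```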
#' @title Calculates the uniform \emph{AUC} and uniform \emph{Se*} #' #' @description This function computes the uniform \emph{AUC} (\emph{uAUC}) and #' uniform \emph{Se*} (\emph{uSe*}) following Jiménez-Valverde (2022). #' @param mat A matrix with two columns. The first column must contain the #' suitability values (i.e., the classification rule); the second column must #' contain the presences and absences. #' @param rep Number of sampling replications. By default, \code{rep} = 100. #' @param by Size of the suitability intervals (i.e., bins). By default, #' \code{by} = 0.1. #' @param deleteBins A vector (e.g., from 1 to 10 if \code{by} = 0.1) with the #' bins that have to be excluded (1 for [0,0.1), 10 for [0.9,1]) from the #' resampling procedure (trimming); \code{NULL} by default. #' @param plot Logical. Indicates whether or not the observed ROC curve is #' plotted. #' @param plot.adds Logical. Indicates whether or not the negative diagonal and #' the point of equivalence are added to the observed ROC plot. #' @details This function performs the stratified weighted bootstrap to #' calculate the uniform \emph{AUC} (\emph{uAUC}) and uniform \emph{Se*} #' (\emph{uSe*}) as suggested in Jiménez-Valverde (2022). A warning message #' will be shown if the sample size of any bin is zero. Another warning message #' will be shown if the sample size of any bin is lower than 15. In such case, #' trimming should be considered. The \emph{AUC} (non-uniform) is estimated #' non-parametrically (Bamber 1975). \emph{Se*} is calculated by selecting the #' point that minimizes the absolute difference between sensitivity and #' specificity and by doing the mean of those values (Jiménez-Valverde 2020). #' @return A list with the following elements: #' @return \code{AUC}: the \emph{AUC} value (non-uniform), a numeric value #' between 0 and 1. #' @return \code{Se}: the \emph{Se*} value (non-uniform), a numeric value #' between 0 and 1. #' @return \code{bins}: a table with the sample size of each bin. #' @return \code{suit.sim}: a matrix with the bootstrapped suitability values. #' @return \code{sp.sim}: a matrix with the bootstrapped presence-absence data. #' @return \code{uAUC}: a numeric vector with the (\emph{uAUC}) values for each #' replication. #' @return \code{uAUC.95CI}: a numeric vector with the sample (\emph{uAUC}) #' quantiles corresponding to the probabilities 0.025, 0.5 and 0.975. #' @return \code{uSe}: a numeric vector with the (\emph{uSe*}) values for each #' replication. #' @return \code{uSe.95CI}: a numeric vector with the sample (\emph{uSe*}) #' quantiles corresponding to the probabilities 0.025, 0.5 and 0.975. #' @examples #' suit<-rbeta(100, 2, 2) #Generate suitability values #' random<-runif(100) #' sp<-ifelse(random < suit, 1, 0) #Generate presence-absence data #' result<-AUCuniform(cbind(suit, sp), plot = TRUE, plot.adds = TRUE) #' result$uAUC.95CI[2] #Get the uAUC #' @encoding UTF-8 #' @references Bamber, D. (1975). The Area above the Ordinal Dominance Graph and #' the Area below the Receiver Operating Characteristic Graph. #' \emph{J. Math. Psychol}., 12, 387-415. #' #' Jiménez-Valverde, A. (2020). Sample size for the evaluation of #' presence-absence models. \emph{Ecol. Indic}., 114, 106289. #' #' Jiménez-Valverde, A. (2022). The uniform AUC: dealing with the #' representativeness effect in presence-absence models. \emph{Methods Ecol. #' Evol.}, accepted on 28 January 2022. 
#' @importFrom stats quantile wilcox.test
#' @importFrom graphics plot
#' @importFrom graphics abline
#' @importFrom graphics axis
#' @importFrom graphics points
#' @export
AUCuniform <- function(mat, rep = 100, by = 0.1, deleteBins = NULL, plot = FALSE,
                       plot.adds = FALSE) {
    # Non-uniform:
    Wilcox <- wilcox.test(mat[, 1] ~ mat[, 2], exact = FALSE)
    AUC <- 1 - (Wilcox["statistic"]$statistic/((length(mat[, 2]) -
        (sum(mat[, 2]))) * sum(mat[, 2])))
    cosa <- ROCR::prediction(mat[, 1], mat[, 2])
    temporal <- ROCR::performance(cosa, "sens", "spec")
    # y.values holds sensitivity, x.values holds specificity (ROCR slots):
    diferencia <- abs(temporal@y.values[[1]] - temporal@x.values[[1]])
    SE <- (temporal@y.values[[1]][which.min(diferencia)] +
           temporal@x.values[[1]][which.min(diferencia)])/2
    if (plot == TRUE) {
        plot(1 - temporal@x.values[[1]], temporal@y.values[[1]], pch = 16,
             xlab = "false positive rate", ylab = "sensitivity",
             main = "ROC curve", yaxt = "n", cex.lab = 1.3, cex.axis = 1)
        axis(side = 2, las = 2, mgp = c(3, 0.75, 0))
        abline(a = 0, b = 1, lty = 2)
        if (plot.adds == TRUE) {
            abline(a = 1, b = -1, col = "darkgrey", lty = 2)
            points(1 - SE, SE, col = "red", pch = 16)
        }
    }
    # Uniform:
    bins <- seq(0, 1, by)
    intervals <- cut(mat[, 1], bins, include.lowest = T, right = F)
    probs <- table(intervals)[intervals]
    if (dim(mat)[1] < 30)
        warning("Your sample size is low, results must be interpreted with caution.")
    if (sum(table(intervals) < 15) > 0)
        warning("At least one suitability interval has n < 15, results must be interpreted with caution.")
    if (sum(table(intervals) == 0) > 0)
        warning(paste("There are", sum(table(intervals) == 0),
            "interval(s) with zero data, results must be interpreted with caution."))
    if (is.null(deleteBins) == FALSE) {
        toDelete <- levels(intervals)[deleteBins]
        mat <- mat[(intervals %in% toDelete) == FALSE, ]
        probs <- probs[(intervals %in% toDelete) == FALSE]
    }
    uAUC <- c()
    uSE <- c()
    HS <- matrix(nrow = nrow(mat), ncol = rep)
    SP <- matrix(nrow = nrow(mat), ncol = rep)
    for (i in 1:rep) {
        newdata <- mat[sample(nrow(mat), nrow(mat), replace = T, prob = 1/probs), ]
        HS[, i] <- newdata[, 1]
        SP[, i] <- newdata[, 2]
        Wilcox <- wilcox.test(newdata[, 1] ~ newdata[, 2], exact = FALSE)
        uAUC <- c(uAUC, 1 - (Wilcox["statistic"]$statistic/((dim(newdata)[1] -
            (sum(newdata[, 2]))) * sum(newdata[, 2])))[[1]])
        cosa <- ROCR::prediction(newdata[, 1], newdata[, 2])
        temporal <- ROCR::performance(cosa, "sens", "spec")
        diferencia <- abs(temporal@y.values[[1]] - temporal@x.values[[1]])
        uSE <- c(uSE, (temporal@y.values[[1]][which.min(diferencia)] +
                       temporal@x.values[[1]][which.min(diferencia)])/2)
    }
    return(list(AUC = AUC[[1]], Se = SE, bins = table(intervals), suit.sim = HS,
        sp.sim = SP, uAUC = uAUC,
        uAUC.95CI = quantile(uAUC, c(0.025, 0.5, 0.975)), uSe = uSE,
        uSe.95CI = quantile(uSE, c(0.025, 0.5, 0.975))))
}
/scratch/gouwar.j/cran-all/cranData/vandalico/R/AUCuniform.R
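The core of `AUCuniform()` above is the stratified weighted bootstrap: observations are resampled with probability inversely proportional to the frequency of their suitability bin, which flattens the suitability distribution across bins. The stripped-down sketch below (base R only, simulated data) isolates just that resampling step:

```r
set.seed(42)
suit <- rbeta(500, 2, 5)  # skewed suitability values
intervals <- cut(suit, seq(0, 1, 0.1), include.lowest = TRUE, right = FALSE)
probs <- table(intervals)[intervals]  # each observation's bin frequency
resampled <- suit[sample(length(suit), replace = TRUE, prob = 1 / probs)]
table(cut(resampled, seq(0, 1, 0.1), include.lowest = TRUE, right = FALSE))
# counts are now roughly equal across the occupied bins
```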
#' @title Calibration graph
#'
#' @description A function to plot a calibration graph.
#' @param mat A matrix with two columns. The first column must contain the
#' suitability values (i.e., the classification rule); the second column must
#' contain the presences and absences.
#' @param by Size of the suitability intervals (bins). By default,
#' \code{by} = 0.1.
#' @details Dots for bins with 15 or more cases are shown in solid black; dots
#' for bins with less than 15 cases are shown empty (see Jiménez-Valverde et
#' al. 2013). This way, by plotting the calibration graph before running
#' \code{\link{AUCuniform}}, one can get a glimpse of how reliable \emph{uAUC}
#' or \emph{uSe*} can be expected to be.
#' @return This function returns a calibration plot
#' @examples
#' suit<-rbeta(100, 2, 2) #Generate suitability values
#' random<-runif(100)
#' sp<-ifelse(random < suit,1 , 0) #Generate presence-absence data
#' CALplot(cbind(suit, sp))
#' @encoding UTF-8
#' @references Jiménez-Valverde, A., Acevedo, P., Barbosa, A. M., Lobo, J. M. &
#' Real, R. (2013). Discrimination capacity in species distribution models
#' depends on the representativeness of the environmental domain.
#' \emph{Global Ecol. Biogeogr.}, 22, 508-516.
#' @export
CALplot <- function(mat, by = 0.1) {
    bins <- cut(mat[, 1], seq(0, 1, by), include.lowest = T, right = F)
    tableBins <- table(mat[, 2], bins)
    prevalBins <- as.matrix(tableBins[2, ]/colSums(tableBins))
    prevalBins.2 <- as.matrix(tapply(mat[, 1], bins, mean))
    # pch 21 (empty) for bins with fewer than 15 cases, pch 19 (solid) otherwise,
    # matching the documented cut-off of 15:
    colDots <- ifelse(colSums(tableBins) < 15, 21, 19)
    plot(prevalBins.2, prevalBins, main = "Calibration plot", pch = colDots,
        ylab = "observed probability", xlab = "predicted probability",
        xlim = c(0, 1), ylim = c(0, 1), yaxt = "n", cex.lab = 1.3, cex = 1.1)
    axis(side = 2, las = 2, mgp = c(3, 0.75, 0))
    abline(a = 0, b = 1, col = "black", lty = 2)
}
/scratch/gouwar.j/cran-all/cranData/vandalico/R/CALplot.R
#' Complete list of palettes: #' #' Use \code{\link{vangogh_palette}} to construct palettes of desired length. #' #' @export vangogh_palettes <- list( StarryNight = c("#0b1e38", "#4988BF", "#82C9D9", "#F2E96B", "#D9851E"), StarryRhone = c("#0D4373", "#27668C", "#5A98BF", "#637340", "#D9C873"), SelfPortrait = c("#27708C", "#B4D9CE", "#85A693", "#BFA575", "#A6511F"), CafeTerrace = c("#024873", "#A2A637", "#D9AA1E", "#D98825", "#BF4F26"), Eglise = c("#1D2759", "#27418C", "#9E6635", "#BFB95E", "#D9CA9C"), Irises = c("#819BB4", "#30588C", "#72A684", "#CDA124", "#A68863"), SunflowersMunich = c("#77A690", "#304020", "#BF7E06", "#401506", "#A63D17"), SunflowersLondon = c("#d49f2d", "#9D743B", "#cda35b", "#88925D", "#5E3E34"), Rest = c("#54778C", "#BF7315", "#F29B30", "#8C4303", "#F2AA6B"), Bedroom = c("#A2A63F", "#A67F0A", "#A6710F", "#8C6D46", "#731D0A"), CafeDeNuit = c("#467326", "#8AA676", "#D9B23D", "#BF793B", "#A63333"), Chaise = c("#3F858C", "#707322", "#F2D43D", "#D9814E", "#731A12"), Shoes = c("#8C7345", "#594A36", "#D9BD9C", "#8C4A32", "#AA9B89"), Landscape = c("#606B81", "#1E4359", "#58838C", "#A8BFBB", "#D7CBB3"), Cypresses = c("#A7CFF2", "#428C5C", "#D9A648", "#BF8136", "#0D0D0D") ) #' A Van Gogh color palette generator. #' #' These are some color palettes from a selection of Vincent van Gogh's paintings. #' #' @param name Name of desired palette. Choices are: #' \code{StarryNight}, \code{StarryRhone}, \code{SelfPortrait}, #' \code{CafeTerrace}, \code{Eglise}, \code{Irises}, #' \code{SunflowersMunich}, \code{SunflowersLondon}, \code{Rest} ,\code{Bedroom} , #' \code{CafeDeNuit}, \code{Chaise}, \code{Shoes}, \code{Landscape}, #' \code{Cypresses} #' @param n Number of colors desired. All palettes have a standard of 5 colors. #' If omitted, uses all colors. #' @param type Either "continuous" or "discrete". #' Use "continuous" to automatically interpolate between colours. #' @importFrom graphics rgb rect par image text #' @return A vector of colors. #' @export #' @keywords colors #' @examples #' vangogh_palette("StarryNight") #' vangogh_palette("SelfPortrait") #' vangogh_palette("Cypresses") #' vangogh_palette("Cypresses", 3) #' #' # If you want a continous paletted based on the colors already found in the preset #' # palettes, you can interpolate between existing colours accordingly. #' pal <- vangogh_palette(21, name = "StarryRhone", type = "continuous") vangogh_palette <- function(name, n, type = c("discrete", "continuous")) { type <- match.arg(type) pal <- vangogh_palettes[[name]] if (is.null(pal)) { stop("Palette not found.") } if (missing(n)) { n <- length(pal) } if (type == "discrete" && n > length(pal)) { stop("Number of requested colors is greater than what palette can offer") } out <- switch(type, continuous = grDevices::colorRampPalette(pal)(n), discrete = pal[1:n] ) structure(out, class = "palette", name = name) } #' @export #' @importFrom graphics rect par image text #' @importFrom grDevices rgb print.palette <- function(x, ...) { n <- length(x) old <- par(mar = c(0.5, 0.5, 0.5, 0.5)) on.exit(par(old)) image(1:n, 1, as.matrix(1:n), col = x, ylab = "", xaxt = "n", yaxt = "n", bty = "n" ) rect(0, 0.9, n + 1, 1.1, col = rgb(1, 1, 1, 0.8), border = NA) text((n + 1) / 2, 1, labels = attr(x, "name"), cex = 1, family = "serif") }
/scratch/gouwar.j/cran-all/cranData/vangogh/R/colors.R
#' vangogh palette with ramped colours #' #' @param palette Choose from 'vangogh_palettes' list #' #' @param alpha transparency #' #' @param reverse If TRUE, the direction of the colours is reversed. #' #' @examples #' library(scales) #' show_col(vangogh_pal()(10)) #' #' filled.contour(volcano, color.palette = vangogh_pal(), asp = 1) #' @return Palettes with ramped colors from predefined palettes #' @export vangogh_pal <- function(palette = "StarryRhone", alpha = 1, reverse = FALSE) { pal <- vangogh_palettes[[palette]] if (reverse) { pal <- rev(pal) } return(colorRampPalette(pal, alpha)) } #' Setup colour palette for ggplot2 #' #' @rdname scale_color_vangogh #' #' @param palette Choose from 'vangogh_palettes' list #' #' @param reverse logical, Reverse the order of the colours? #' #' @param alpha transparency #' #' @param discrete whether to use a discrete colour palette #' #' @param ... additional arguments to pass to scale_color_gradientn #' #' @inheritParams viridis::scale_color_viridis #' #' @importFrom ggplot2 scale_colour_manual #' #' @examples #' library(ggplot2) #' ggplot(mtcars, aes(mpg, wt)) + #' geom_point(aes(colour = factor(cyl))) + #' scale_colour_vangogh(palette = "StarryNight") #' ggplot(mtcars, aes(mpg, wt)) + #' geom_point(aes(colour = hp)) + #' scale_colour_vangogh(palette = "StarryNight", discrete = FALSE) #' ggplot(data = mpg) + #' geom_point(mapping = aes(x = displ, y = hwy, color = class)) + #' scale_colour_vangogh(palette = "StarryRhone") #' ggplot(diamonds) + #' geom_bar(aes(x = cut, fill = clarity)) + #' scale_fill_vangogh() #' @return A scale_color_vangogh function #' @export #' #' @importFrom ggplot2 discrete_scale scale_color_gradientn scale_color_vangogh <- function(..., palette = "StarryNight", discrete = TRUE, alpha = 1, reverse = FALSE) { if (discrete) { discrete_scale("colour", "vangogh", palette = vangogh_pal(palette, alpha = alpha, reverse = reverse)) } else { scale_color_gradientn(colours = vangogh_pal(palette, alpha = alpha, reverse = reverse, ...)(256)) } # scale_colour_manual(values=ochre_palettes[[palette]]) } #' @rdname scale_color_vangogh #' @export scale_colour_vangogh <- scale_color_vangogh #' #' Setup fill palette for ggplot2 #' #' @param palette Choose from 'vangogh_palettes' list #' #' @inheritParams viridis::scale_fill_viridis #' @inheritParams vangogh_pal #' #' @param discrete whether to use a discrete colour palette #' #' @param ... additional arguments to pass to scale_color_gradientn #' #' @importFrom ggplot2 scale_fill_manual discrete_scale scale_fill_gradientn #' @return A scale_fill_vangogh function #' @export scale_fill_vangogh <- function(..., palette = "StarryNight", discrete = TRUE, alpha = 1, reverse = TRUE) { if (discrete) { discrete_scale("fill", "vangogh", palette = vangogh_pal(palette, alpha = alpha, reverse = reverse)) } else { scale_fill_gradientn(colours = vangogh_pal(palette, alpha = alpha, reverse = reverse, ...)(256)) } }
/scratch/gouwar.j/cran-all/cranData/vangogh/R/scale.R
#' Show a single palette
#'
#' Display a single palette to see whether it meets your needs.
#' If no \code{num} parameter is given,
#' all the colours in the palette will be displayed.
#' If \code{num} is less than the number of colours in the palette,
#' then only the first \code{num} colours will be displayed.
#' If \code{num} is greater than the number of colours in the palette,
#' then that many colours will be generated by linear interpolation
#' over the vector of colours in the chosen palette.
#' @param pal character, vector of (hexadecimal) colours representing a palette
#' @param ttl character, title to be displayed (the name of the palette)
#' @param num numeric, the number of colours to display
#' @examples
#' viz_palette(vangogh_palettes$StarryNight)
#' viz_palette(vangogh_palettes$StarryNight, "StarryNight")
#' viz_palette(vangogh_palettes$StarryNight, "StarryNight first 4", num = 4)
#' viz_palette(vangogh_palettes$StarryNight, "StarryNight interpolated to 25", num = 25)
#' @return A vector of colors from a single palette
#' @export
#' @importFrom graphics image
#' @importFrom grDevices colorRampPalette
#'
viz_palette <- function(pal, ttl = deparse(substitute(pal)), num = length(pal)) {
  if (num <= 0) {
    stop("'num' should be > 0")
  }
  pal_func <- colorRampPalette(pal)
  image(seq_len(num), 1, as.matrix(seq_len(num)), col = pal_func(num),
    main = paste0(ttl, " (", length(pal), " colours in palette, ", num, " displayed)"),
    xlab = "", ylab = "", xaxt = "n", yaxt = "n", bty = "n"
  )
}
/scratch/gouwar.j/cran-all/cranData/vangogh/R/utils.R
#' @title vangogh
#' @name vangogh
#' @docType package
#' @details list of palettes generated from Vincent van Gogh's paintings
#' @description list of palettes generated from Vincent van Gogh's paintings
NULL
/scratch/gouwar.j/cran-all/cranData/vangogh/R/vangogh.R
## ---- echo = FALSE------------------------------------------------------------ knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "figure/", fig.height = 1 ) ## ---- palettes_dummy---------------------------------------------------------- library("vangogh") # See all palettes names(vangogh_palettes) # See all functions lsf.str("package:vangogh") ## ---- StarryNight------------------------------------------------------------- vangogh_palette("StarryNight") ## ---- StarryRhone------------------------------------------------------------- vangogh_palette("StarryRhone") ## ---- SelfPortrait------------------------------------------------------------ vangogh_palette("SelfPortrait") ## ---- CafeTerrace------------------------------------------------------------- vangogh_palette("CafeTerrace") ## ---- Eglise------------------------------------------------------------------ vangogh_palette("Eglise") ## ---- Irises------------------------------------------------------------------ vangogh_palette("Irises") ## ---- SunflowersMunich-------------------------------------------------------- vangogh_palette("SunflowersMunich") ## ---- SunflowersLondon-------------------------------------------------------- vangogh_palette("SunflowersLondon") ## ---- Rest-------------------------------------------------------------------- vangogh_palette("Rest") ## ---- Bedroom----------------------------------------------------------------- vangogh_palette("Bedroom") ## ---- CafeDeNuit-------------------------------------------------------------- vangogh_palette("CafeDeNuit") ## ---- Chaise------------------------------------------------------------------ vangogh_palette("Chaise") ## ---- Shoes------------------------------------------------------------------- vangogh_palette("Shoes") ## ---- Landscape--------------------------------------------------------------- vangogh_palette("Landscape") ## ---- Cypresses--------------------------------------------------------------- vangogh_palette("Cypresses") ## ----------------------------------------------------------------------------- library("ggplot2") ggplot(mtcars, aes(factor(cyl), fill=factor(vs))) + geom_bar() + scale_fill_manual(values = vangogh_palette("SelfPortrait")) ggplot(mtcars, aes(mpg, disp)) + geom_point(aes(col = factor(gear)), size = 4) + scale_color_manual(values = vangogh_palette("Cypresses")) ggplot(iris) + aes(Sepal.Length, Sepal.Width, color = Species) + geom_point(size = 3) + scale_color_manual(values = vangogh_palette("CafeDeNuit")) ## ----------------------------------------------------------------------------- x <- vangogh_palette("Chaise", 1000, "continuous") x ## ----------------------------------------------------------------------------- oldpar <- par(mar = c(1, 1, 1, 1)) par(mar = c(1, 1, 1, 1)) pal <- vangogh_palette("SunflowersLondon", 21, type = "continuous") image(volcano, col = pal) par(oldpar) ## ----------------------------------------------------------------------------- ggplot(faithfuld) + aes(waiting, eruptions, fill = density) + geom_tile() + scale_color_manual(values = vangogh_palette("CafeTerrace")) ## ----------------------------------------------------------------------------- ggplot(data = mpg) + geom_point(mapping = aes(x = displ, y = hwy, color = class)) + scale_colour_vangogh(palette="StarryRhone") ggplot(diamonds) + geom_bar(aes(x = cut, fill = clarity)) + scale_fill_vangogh(palette = "StarryNight")
/scratch/gouwar.j/cran-all/cranData/vangogh/inst/doc/vangogh.R
--- title: "vangogh" author: "Cheryl Isabella" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{vignette} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, echo = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "figure/", fig.height = 1 ) ``` # Vincent van Gogh Color Palettes ## Introduction The vangogh package (for use in R) consist of color scales extracted by Cheryl Isabella from a selection of Vincent van Gogh's paintings. ## Installation ```r install.packages("vangogh") ``` __Or the development version__ ``` r devtools::install_github("cherylisabella/vangogh") ``` ## Usage ```{r, palettes_dummy} library("vangogh") # See all palettes names(vangogh_palettes) # See all functions lsf.str("package:vangogh") ``` ## Palettes and their associated artworks ### The Starry Night (1889) ```{r, StarryNight} vangogh_palette("StarryNight") ``` ### Starry Night Over the Rhône / La Nuit étoilée (1888) ```{r, StarryRhone} vangogh_palette("StarryRhone") ``` ### Self-portrait (1889) ```{r, SelfPortrait} vangogh_palette("SelfPortrait") ``` ### Café Terrace at Night (1888) ```{r, CafeTerrace} vangogh_palette("CafeTerrace") ``` ### The Church at Auvers (1890) ```{r, Eglise} vangogh_palette("Eglise") ``` ### Irises / Les Iris (1889) ```{r, Irises} vangogh_palette("Irises") ``` ### Sunflowers - Munich version (1888) ```{r, SunflowersMunich} vangogh_palette("SunflowersMunich") ``` ### Sunflowers - London version (1888) ```{r, SunflowersLondon} vangogh_palette("SunflowersLondon") ``` ### Noon – Rest from Work (1890) ```{r, Rest} vangogh_palette("Rest") ``` ### Bedroom in Arles / Slaapkamer te Arles (1888) ```{r, Bedroom} vangogh_palette("Bedroom") ``` ### The Night Café / Le Café de nuit (1888) ```{r, CafeDeNuit} vangogh_palette("CafeDeNuit") ``` ### Van Gogh's Chair (1888) ```{r, Chaise} vangogh_palette("Chaise") ``` ### Shoes (1886) ```{r, Shoes} vangogh_palette("Shoes") ``` ### Landscape with Houses (1890) ```{r, Landscape} vangogh_palette("Landscape") ``` ### Wheat Field with Cypresses (1889) ```{r, Cypresses} vangogh_palette("Cypresses") ``` ## Examples ### Discrete palette examples using ggplot2 ```{r} library("ggplot2") ggplot(mtcars, aes(factor(cyl), fill=factor(vs))) + geom_bar() + scale_fill_manual(values = vangogh_palette("SelfPortrait")) ggplot(mtcars, aes(mpg, disp)) + geom_point(aes(col = factor(gear)), size = 4) + scale_color_manual(values = vangogh_palette("Cypresses")) ggplot(iris) + aes(Sepal.Length, Sepal.Width, color = Species) + geom_point(size = 3) + scale_color_manual(values = vangogh_palette("CafeDeNuit")) ``` - Discrete palettes pick 1:n colors from the palette vector (n=5 for this package as 5 colors were curated for each palette) - Default color selection starts from colors at the extreme left and ends at the extreme right of the color palette. ### Continuous palette examples #### Generate a continuous palette from the given discrete palettes ```{r} x <- vangogh_palette("Chaise", 1000, "continuous") x ``` - `colorRampPalette()`is used to a set of colors to create a new continuous palette. 
#### Heatmap examples ```{r} oldpar <- par(mar = c(1, 1, 1, 1)) par(mar = c(1, 1, 1, 1)) pal <- vangogh_palette("SunflowersLondon", 21, type = "continuous") image(volcano, col = pal) par(oldpar) ``` ```{r} ggplot(faithfuld) + aes(waiting, eruptions, fill = density) + geom_tile() + scale_color_manual(values = vangogh_palette("CafeTerrace")) ``` ### scale_colour_vangogh() and scale_fill_vangogh() examples ```{r} ggplot(data = mpg) + geom_point(mapping = aes(x = displ, y = hwy, color = class)) + scale_colour_vangogh(palette="StarryRhone") ggplot(diamonds) + geom_bar(aes(x = cut, fill = clarity)) + scale_fill_vangogh(palette = "StarryNight") ```
/scratch/gouwar.j/cran-all/cranData/vangogh/inst/doc/vangogh.Rmd
--- title: "vangogh" author: "Cheryl Isabella" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{vignette} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, echo = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "figure/", fig.height = 1 ) ``` # Vincent van Gogh Color Palettes ## Introduction The vangogh package (for use in R) consist of color scales extracted by Cheryl Isabella from a selection of Vincent van Gogh's paintings. ## Installation ```r install.packages("vangogh") ``` __Or the development version__ ``` r devtools::install_github("cherylisabella/vangogh") ``` ## Usage ```{r, palettes_dummy} library("vangogh") # See all palettes names(vangogh_palettes) # See all functions lsf.str("package:vangogh") ``` ## Palettes and their associated artworks ### The Starry Night (1889) ```{r, StarryNight} vangogh_palette("StarryNight") ``` ### Starry Night Over the Rhône / La Nuit étoilée (1888) ```{r, StarryRhone} vangogh_palette("StarryRhone") ``` ### Self-portrait (1889) ```{r, SelfPortrait} vangogh_palette("SelfPortrait") ``` ### Café Terrace at Night (1888) ```{r, CafeTerrace} vangogh_palette("CafeTerrace") ``` ### The Church at Auvers (1890) ```{r, Eglise} vangogh_palette("Eglise") ``` ### Irises / Les Iris (1889) ```{r, Irises} vangogh_palette("Irises") ``` ### Sunflowers - Munich version (1888) ```{r, SunflowersMunich} vangogh_palette("SunflowersMunich") ``` ### Sunflowers - London version (1888) ```{r, SunflowersLondon} vangogh_palette("SunflowersLondon") ``` ### Noon – Rest from Work (1890) ```{r, Rest} vangogh_palette("Rest") ``` ### Bedroom in Arles / Slaapkamer te Arles (1888) ```{r, Bedroom} vangogh_palette("Bedroom") ``` ### The Night Café / Le Café de nuit (1888) ```{r, CafeDeNuit} vangogh_palette("CafeDeNuit") ``` ### Van Gogh's Chair (1888) ```{r, Chaise} vangogh_palette("Chaise") ``` ### Shoes (1886) ```{r, Shoes} vangogh_palette("Shoes") ``` ### Landscape with Houses (1890) ```{r, Landscape} vangogh_palette("Landscape") ``` ### Wheat Field with Cypresses (1889) ```{r, Cypresses} vangogh_palette("Cypresses") ``` ## Examples ### Discrete palette examples using ggplot2 ```{r} library("ggplot2") ggplot(mtcars, aes(factor(cyl), fill=factor(vs))) + geom_bar() + scale_fill_manual(values = vangogh_palette("SelfPortrait")) ggplot(mtcars, aes(mpg, disp)) + geom_point(aes(col = factor(gear)), size = 4) + scale_color_manual(values = vangogh_palette("Cypresses")) ggplot(iris) + aes(Sepal.Length, Sepal.Width, color = Species) + geom_point(size = 3) + scale_color_manual(values = vangogh_palette("CafeDeNuit")) ``` - Discrete palettes pick 1:n colors from the palette vector (n=5 for this package as 5 colors were curated for each palette) - Default color selection starts from colors at the extreme left and ends at the extreme right of the color palette. ### Continuous palette examples #### Generate a continuous palette from the given discrete palettes ```{r} x <- vangogh_palette("Chaise", 1000, "continuous") x ``` - `colorRampPalette()`is used to a set of colors to create a new continuous palette. 
#### Heatmap examples ```{r} oldpar <- par(mar = c(1, 1, 1, 1)) par(mar = c(1, 1, 1, 1)) pal <- vangogh_palette("SunflowersLondon", 21, type = "continuous") image(volcano, col = pal) par(oldpar) ``` ```{r} ggplot(faithfuld) + aes(waiting, eruptions, fill = density) + geom_tile() + scale_color_manual(values = vangogh_palette("CafeTerrace")) ``` ### scale_colour_vangogh() and scale_fill_vangogh() examples ```{r} ggplot(data = mpg) + geom_point(mapping = aes(x = displ, y = hwy, color = class)) + scale_colour_vangogh(palette="StarryRhone") ggplot(diamonds) + geom_bar(aes(x = cut, fill = clarity)) + scale_fill_vangogh(palette = "StarryNight") ```
/scratch/gouwar.j/cran-all/cranData/vangogh/vignettes/vangogh.Rmd
#' Uniform Crime Reports, 2015 (County-Level) #' #' This subset of data comes from one iteration of the \emph{Uniform Crime Reporting Program}, administered in 2015. These data were collected by the Federal Bureau of Investigation under the United States Department of Justice. While the original data cover every \emph{reported} crime event that took place in 2015, these data are aggregated to the county level. Additionally, these data are combined with (a subset of) county-level demographic data from the 2005-2009 (5-year estimates) iteration of the \emph{American Community Survey}. Information about the data set can be found in the UCR2015 Codebook at: \url{https://burrelvannjr.com/docs/UCR2015_Codebook.pdf}. #' #' @format A data frame with 3108 observations and 102 variables. #' \tabular{ll}{ \cr #' id \tab State and County Identifier \cr #' statefips \tab FIPS Code for State \cr #' countyfips \tab FIPS Code for County \cr #' state \tab State Name \cr #' county \tab County Name \cr #' totalpop \tab Total County Population \cr #' pct_unemp \tab Percent of Total County Population who are Unemployed \cr #' pct_homeowners \tab Percent of Total County Population who are Homeowners \cr #' pct_college \tab Percent of Total County Population who are over 25 years old and hold a Bachelor's Degree \cr #' med_fam_inc \tab Median Family Income (in Thousands of Dollars) \cr #' pop_density \tab Population Density (Population over Land Area in County) \cr #' pct_poverty \tab Percent of Total County Population who are below the Poverty Line \cr #' pct_white \tab Percent of Total County Population who are White \cr #' pct_black \tab Percent of Total County Population who are Black \cr #' pct_latino \tab Percent of Total County Population who are Latinx/e/a/o \cr #' income_inequality \tab Gini Coefficient of Income Inequality -- The distribution of income across the county population. High scores indicate greater inequality, with high-income individuals receiving much larger percentages of the total income made in the county. \cr #' rape \tab Forcible rape (Count) \cr #' robbery \tab Robbery (Count) \cr #' agg_assault \tab Aggravated assault (Count) \cr #' burglary \tab Burglary-breaking or entering (Count) \cr #' larceny \tab Larceny-theft (not motor vehicles) (Count) \cr #' mv_theft \tab Motor vehicle theft (Count) \cr #' other_assault \tab Other assaults (Count) \cr #' arson \tab Arson (Count) \cr #' forgery \tab Forgery and counterfeiting (Count) \cr #' fraud \tab Fraud (Count) \cr #' embezzlement \tab Embezzlement (Count) \cr #' stolen_property \tab Stolen property-buy, receive, poss. (Count) \cr #' vandalism \tab Vandalism (Count) \cr #' weapons \tab Weapons-carry, posses, etc. 
(Count) \cr #' sex_offense \tab Sex offenses (not rape or prostitution) (Count) \cr #' drug_abuse \tab Total drug abuse violations (Count) \cr #' drug_sale \tab Sale/manufacture (subtotal) (Count) \cr #' drug_possession \tab Possession (subtotal) (Count) \cr #' drug_sale_coke \tab Sale/mfg-Opium, coke, and their derivatives (Count) \cr #' drug_sale_mj \tab Sale/mfg-Marijuana (Count) \cr #' drug_possession_coke \tab Possession-Opium, coke, and their derivatives (Count) \cr #' drug_possession_mj \tab Possession-Marijuana (Count) \cr #' drug_possession_narc \tab Possession-Truly addicting synthetic narcotics (Count) \cr #' drug_possession_other \tab Possession-Other dangerous non-narc drugs (Count) \cr #' domestic_offenses \tab Offenses against family and children (Count) \cr #' dui \tab Driving under the influence (Count) \cr #' liquor_violation \tab Liquor laws (Count) \cr #' disorderly_conduct \tab Disorderly conduct (Count) \cr #' other_nontraffic_violation \tab All other non-traffic offenses (Count) \cr #' murder \tab Murder and non-negligent manslaughter (Count) \cr #' drug_sale_other \tab Sale/mfg-Other dangerous non-narc drugs (Count) \cr #' prostitution \tab Prostitution and commercialized vice (Count) \cr #' drug_sale_narc \tab Sale/mfg-Truly addicting synthetic narcotics (Count) \cr #' vagrancy \tab Vagrancy (Count) \cr #' drunkenness \tab Drunkenness (Count) \cr #' curfew_loitering \tab Curfew and loitering violations (Count) \cr #' runaway \tab Runaways (Count) \cr #' manslaughter_negligence \tab Manslaughter by negligence (Count) \cr #' gambling_all \tab Gambling (total) (Count) \cr #' suspicion \tab Suspicion (Count) \cr #' gambling_bookmaking \tab Bookmaking (horse and sports) (Count) \cr #' gambling_other \tab All other gambling (Count) \cr #' gambling_lottery \tab Number and lottery (Count) \cr #' rape_pct \tab Forcible rape (as percent of total county population) \cr #' robbery_pct \tab Robbery (as percent of total county population) \cr #' agg_assault_pct \tab Aggravated assault (as percent of total county population) \cr #' burglary_pct \tab Burglary-breaking or entering (as percent of total county population) \cr #' larceny_pct \tab Larceny-theft (not motor vehicles) (as percent of total county population) \cr #' mv_theft_pct \tab Motor vehicle theft (as percent of total county population) \cr #' other_assault_pct \tab Other assaults (as percent of total county population) \cr #' arson_pct \tab Arson (as percent of total county population) \cr #' forgery_pct \tab Forgery and counterfeiting (as percent of total county population) \cr #' fraud_pct \tab Fraud (as percent of total county population) \cr #' embezzlement_pct \tab Embezzlement (as percent of total county population) \cr #' stolen_property_pct \tab Stolen property-buy, receive, poss. (as percent of total county population) \cr #' vandalism_pct \tab Vandalism (as percent of total county population) \cr #' weapons_pct \tab Weapons-carry, posses, etc. 
(as percent of total county population) \cr #' sex_offense_pct \tab Sex offenses (not rape or prostitution) (as percent of total county population) \cr #' drug_abuse_pct \tab Total drug abuse violations (as percent of total county population) \cr #' drug_sale_pct \tab Sale/manufacture (subtotal) (as percent of total county population) \cr #' drug_possession_pct \tab Possession (subtotal) (as percent of total county population) \cr #' drug_sale_coke_pct \tab Sale/mfg-Opium, coke, and their derivatives (as percent of total county population) \cr #' drug_sale_mj_pct \tab Sale/mfg-Marijuana (as percent of total county population) \cr #' drug_possession_coke_pct \tab Possession-Opium, coke, and their derivatives (as percent of total county population) \cr #' drug_possession_mj_pct \tab Possession-Marijuana (as percent of total county population) \cr #' drug_possession_narc_pct \tab Possession-Truly addicting synthetic narcotics (as percent of total county population) \cr #' drug_possession_other_pct \tab Possession-Other dangerous non-narc drugs (as percent of total county population) \cr #' domestic_offenses_pct \tab Offenses against family and children (as percent of total county population) \cr #' dui_pct \tab Driving under the influence (as percent of total county population) \cr #' liquor_violation_pct \tab Liquor laws (as percent of total county population) \cr #' disorderly_conduct_pct \tab Disorderly conduct (as percent of total county population) \cr #' other_nontraffic_violation_pct \tab All other non-traffic offenses (as percent of total county population) \cr #' murder_pct \tab Murder and non-negligent manslaughter (as percent of total county population) \cr #' drug_sale_other_pct \tab Sale/mfg-Other dangerous non-narc drugs (as percent of total county population) \cr #' prostitution_pct \tab Prostitution and commercialized vice (as percent of total county population) \cr #' drug_sale_narc_pct \tab Sale/mfg-Truly addicting synthetic narcotics (as percent of total county population) \cr #' vagrancy_pct \tab Vagrancy (as percent of total county population) \cr #' drunkenness_pct \tab Drunkenness (as percent of total county population) \cr #' curfew_loitering_pct \tab Curfew and loitering violations (as percent of total county population) \cr #' runaway_pct \tab Runaways (as percent of total county population) \cr #' manslaughter_negligence_pct \tab Manslaughter by negligence (as percent of total county population) \cr #' gambling_all_pct \tab Gambling (total) (as percent of total county population) \cr #' suspicion_pct \tab Suspicion (as percent of total county population) \cr #' gambling_bookmaking_pct \tab Bookmaking (horse and sports) (as percent of total county population) \cr #' gambling_other_pct \tab All other gambling (as percent of total county population) \cr #' gambling_lottery_pct \tab Number and lottery (as percent of total county population) \cr #' } #' @source Data: \url{https://www.icpsr.umich.edu/web/NACJD/studies/36794} and \url{https://data.census.gov/mdat/#/search?ds=ACSPUMS5Y2009} #' @source Codebook: \url{https://burrelvannjr.com/docs/UCR2015_Codebook.pdf} #' "UCR2015"
/scratch/gouwar.j/cran-all/cranData/vannstats/R/UCR2015.R
#' Well-Being and Basic Needs Survey, 2019 (Individual-Level) #' #' This subset of data comes from one iteration of the \emph{Well-Being and Basic Needs Survey}, administered in 2019. These data were collected by the Urban Institute. Information about the data set can be found in the WBBN2019 Codebook at: \url{https://burrelvannjr.com/docs/WBBN2019_Codebook.pdf}. #' #' @format A data frame with 7694 observations and 23 variables. #' \tabular{ll}{ \cr #' subsidized_housing \tab Is your household paying lower rent because the federal, state, or local government is paying part of the cost? \cr #' food_last \tab Food did not last \cr #' nervous \tab During the past 30 days, about how often did you feel: nervous? \cr #' hopeless \tab During the past 30 days, about how often did you feel: hopeless? \cr #' restless \tab During the past 30 days, about how often did you feel: restless or fidgety? \cr #' no_cheer \tab During the past 30 days, about how often did you feel: so sad that nothing could cheer you up \cr #' worthless \tab During the past 30 days, about how often did you feel: worthless? \cr #' insured \tab Thinking about your health insurance coverage over the past 12 months, how many months were you insured? \cr #' med_notafford \tab Thinking about your health care experiences over the past 12 months, was there any time when you needed medical care but did not get it because you couldn't afford it? \cr #' working \tab Are you currently working for pay or self-employed? \cr #' unexp_400 \tab How confident are you that you could come up with $400 if an unexpected expense arose within the next month? \cr #' educ \tab Education level \cr #' race_eth \tab Race/ethnicity \cr #' sex_gender \tab Sex/Gender \cr #' head_household \tab Head of Household? \cr #' internet \tab Internet access \cr #' children_in_house \tab Number of children age 0-18 in household \cr #' food_insecure \tab Household was food insecure in past 12 months \cr #' utility_suspend \tab Gas or electric company turned off service or oil company would not deliver in oil past 12 months \cr #' utility_problems_paying \tab Household was not able to pay full amount of gas, oil, or electricity bills in past 12 months \cr #' mortgage_cost \tab How much is the regular monthly payment on this property, including mortgage payments, second mortgage or home equity loan payments, real estate taxes, insurance, and condominium fees? \cr #' rent_cost \tab What is the monthly rent for the place where you live? \cr #' electricity_cost \tab In a typical month, what is the total cost of electricity, gas, and any other fuel used in the place where you live? \cr #' } #' @source Data: \url{https://www.icpsr.umich.edu/web/ICPSR/studies/38044} #' @source Codebook: \url{https://burrelvannjr.com/docs/WBBN2019_Codebook.pdf} #' "WBBN2019"
/scratch/gouwar.j/cran-all/cranData/vannstats/R/WBBN2019.R
#' Simplified Bar Chart #' #' This function plots a bar chart (bar.chart) on a given data frame. #' @import ggplot2 dplyr purrr ggrepel #' @importFrom stats na.omit #' @param df data frame to read in. #' @param var1 the dependent/outcome variable, \eqn{Y}. The variable of interest that should be plotted. #' @param lab logical (default set to \code{FALSE}). When set to \code{lab = TRUE}, will add frequency label for each bar in chart. #' @return This function returns the bar chart for \code{var1} in data frame \code{df}. #' @examples #' data <- mtcars #' #' bar.chart(data,cyl) #' @export bar.chart <- function(df, var1, lab=FALSE){ #options(warn=-1) #suppressWarnings() v1 <- NULL #necessary for removing the "undefined global function" warning bygroups <- length(match.call())#-3 n_call3 <- as.character(match.call()[3]) n_call4 <- as.character(match.call()[4]) n1 <- deparse(substitute(var1)) n1 <- as.character(n1) if(bygroups==3 & n_call3 == "F" | bygroups==3 & n_call3 == "FALSE" | bygroups==3 & n_call3 == "TRUE" | bygroups==3 & n_call3 == "T"){ bygroups <- 2 } if(bygroups==3 & n_call3 != "NULL"){ bygroups <- 3 } if(bygroups==4 & n_call4 != "NULL"){ bygroups <- 3 } if(bygroups==2) { title <- paste0("Bar Chart of '", deparse(substitute(df)), "'") labx <- deparse(substitute(df)) df <- as.factor(df) df <- as.data.frame(df) df <- df %>% group_by(df) %>% summarise(Frequency = n()) %>% na.omit() df <- as.data.frame(df) p <- ggplot(data = df, aes(x = df, y = .data$Frequency)) + geom_bar(stat = "identity") + scale_fill_brewer(palette="Blues") + #geom_text(stat='identity', aes(label=paste0("n = ",Frequency)), vjust=-1) + ggtitle(title) + xlab(labx) + theme_classic() + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(), panel.background = element_blank(), axis.line = element_line(colour = "black"), axis.text.x = element_text(vjust=0.5, colour="#000000"), axis.text.y = element_text(face="bold", colour="#000000"), plot.title = element_text(hjust = 0.5, lineheight=1.5, face="bold")) } if(bygroups==3) { title <- paste0("Bar Chart of '", deparse(substitute(var1)), "'") labx <- deparse(substitute(var1)) df <- as.data.frame(df) df <- df %>% group_by({{ var1 }}) %>% summarise(Frequency = n()) %>% na.omit() df <- as.data.frame(df) p <- ggplot2::ggplot(data = df, aes(x=as.factor({{ var1 }}), y = .data$Frequency)) + geom_bar(stat = "identity") + scale_fill_brewer(palette="Blues") + #geom_text(stat='identity', aes(label=paste0("n = ",Frequency)), vjust=-1) + ggtitle(title) + xlab(labx) + theme_classic() + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(), panel.background = element_blank(), axis.line = element_line(colour = "black"), axis.text.x = element_text(vjust=0.5, colour="#000000"), axis.text.y = element_text(face="bold", colour="#000000"), plot.title = element_text(hjust = 0.5, lineheight=1.5, face="bold")) } if(lab==TRUE) { p <- p + #geom_text(stat='identity', aes(label=paste0("n = ",Frequency)), vjust=-1) geom_label_repel(stat='identity', aes(label = paste0("n = ",.data$Frequency)), size = 4, nudge_x = 1, show.legend = FALSE) } return(p) }
/scratch/gouwar.j/cran-all/cranData/vannstats/R/bar.chart.R
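A usage sketch for the `lab` argument (values beyond the roxygen example; `mtcars` is an arbitrary choice):

```r
data <- mtcars
bar.chart(data, cyl)              # plain frequency bars
bar.chart(data, cyl, lab = TRUE)  # adds "n = ..." labels via ggrepel
```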
#' Simplified Boxplot
#'
#' This function plots a Box-and-Whisker (box) on a given data frame, and uses simplified calls within the function to parse the boxplot by up to 2 variables.
#' @importFrom graphics boxplot
#' @param df data frame to read in.
#' @param var1 the dependent/outcome variable, \eqn{Y}. The variable of interest that should be plotted.
#' @param by1 the main independent/predictor variable, \eqn{X_1}. A grouping variable by which the boxplot for \code{var1} should be parsed.
#' @param by2 a potential second independent/predictor variable, \eqn{X_2}. A second grouping variable by which the boxplot for \code{var1} (already parsed by \code{by1}) should be parsed.
#' @examples
#' data <- mtcars
#'
#' box(data,mpg,cyl)
#' @export
box <- function(df, var1, by1, by2){
  bygroups <- length(match.call()) - 3
  if (bygroups == -1) {
    main <- paste0("Boxplot of '", deparse(substitute(df)), "'")
    laby <- deparse(substitute(df))
    boxplot(df, main = main, ylab = laby)
    #boxplot({{ var1 }}, data = df, main = main)
  }
  if (bygroups == 0) {
    main <- paste0("Boxplot of '", deparse(substitute(var1)), "'")
    laby <- deparse(substitute(var1))
    boxplot(eval(substitute(var1), df), main = main, ylab = laby) # a way of calling values within df$var1
    #boxplot({{ var1 }}, data = df, main = main)
  }
  if (bygroups == 1) {
    main <- paste0("Boxplot of '", deparse(substitute(var1)), "' by '", deparse(substitute(by1)), "'")
    labx <- deparse(substitute(by1))
    laby <- deparse(substitute(var1))
    boxplot(eval(substitute(var1), df) ~ eval(substitute(by1), df), main = main, xlab = labx, ylab = laby)
  }
  if (bygroups == 2) {
    main <- paste0("Boxplot of '", deparse(substitute(var1)), "' by '", deparse(substitute(by1)), "' and '", deparse(substitute(by2)), "'")
    labx2 <- paste0(deparse(substitute(by1)), " by ", deparse(substitute(by2)))
    laby2 <- deparse(substitute(var1))
    boxplot(eval(substitute(var1), df) ~ eval(substitute(by1), df) + eval(substitute(by2), df), main = main, xlab = labx2, ylab = laby2)
  }
  #return(p)
}
/scratch/gouwar.j/cran-all/cranData/vannstats/R/box.R
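`box()` dispatches on how many grouping variables it receives, so all three of these illustrative calls are valid:

```r
data <- mtcars
box(data, mpg)           # one overall boxplot
box(data, mpg, cyl)      # mpg split by cyl
box(data, mpg, cyl, am)  # mpg split by cyl within levels of am
```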
#' Simplified Chi Square #' #' This function simplifies the call for Pearson's Chi Square test (chi.sq) on a given data frame. #' @importFrom stats chisq.test pchisq qchisq qnorm p.adjust #' @param df data frame to read in. #' @param var1 the dependent/outcome variable, \eqn{Y}. #' @param var2 the main independent/predictor variable, \eqn{X}. #' @param correct logical (default set to \code{F}). When set to \code{correct = T}, will employ Yates' continuity correction (for data that violate the normality assumption). #' @param post logical (default set to \code{F}). When set to \code{post = T}, will return results of post-hoc (Z) tests using Bonferroni's alpha adjustment. #' @return This function returns the summary results table for a Pearson's Chi Square test, examining the relationship between \code{var1} from data frame \code{df}, and \code{var2}. #' @examples #' data <- mtcars #' #' chi.sq(data,vs,am) #' @export chi.sq <- function(df, var1, var2, correct = FALSE, post = FALSE){ #options(scipen=999) #suppressWarnings({}) v1 <- deparse(substitute(var1)) v2 <- deparse(substitute(var2)) {suppressWarnings(model <- chisq.test(eval(substitute(var1), df), eval(substitute(var2), df), correct = FALSE))} modname <- model$method df_1 <- model$parameter[[1]] crit <- qchisq(p=.95, df=df_1) ch_text <- "\u03C7\U00B2" ch_c_text <- paste0("Critical ",ch_text) model$statistic[2] <- round(crit, 3) names(model$statistic) <- c(ch_text,ch_c_text) if(correct == TRUE){ {suppressWarnings(model <- chisq.test(eval(substitute(var1), df), eval(substitute(var2), df), correct = TRUE))} modname <- model$method df_1 <- model$parameter[[1]] crit <- qchisq(p=.95, df=df_1) ch_text <- "\u03C7\U00B2" ch_c_text <- paste0("Critical ",ch_text) model$statistic[2] <- round(crit, 3) names(model$statistic) <- c(ch_text,ch_c_text) } model$data.name <- paste0(deparse(substitute(var1))," and ", deparse(substitute(var2))) #print(model$residuals) #print(model$stdres) if(post == TRUE){ method = "bonferroni" round = 2 stdres <- as.matrix(model$stdres) #get dim num of matrix dim(stdres) #get dim num of matrix rows (dv) v1num <- dim(stdres)[1] #dv #get dim num of matrix cols (iv) v2num <- dim(stdres)[2] #iv #get attributes of rows, in order (this is DV) v1names <- dimnames(stdres)[[1]] #dv #get attributes of columns, in order (this is IV) v2names <- dimnames(stdres)[[2]] #iv #get chi square values from standardized residuals, by squaring the values in the matrix chisq_values <- stdres^2 #get pvalues for chi square p_vals <- pchisq(chisq_values, 1, lower.tail = FALSE) #crit <- qchisq(p=.95, df=1) #crit_round <- round(crit, 2) #z_crit <- sqrt(crit) #print(z_crit) adj_p_vals <- p_vals #print(p_vals) for (i in 1:nrow(adj_p_vals)) { adj_p_vals[i, ] <- p.adjust( adj_p_vals[i, ], method = method, n = ncol(adj_p_vals) * nrow(adj_p_vals) ) } z_crit <- qnorm(p=1-((.05/2)/( ncol(adj_p_vals) * nrow(adj_p_vals) ))) z_crit_round <- round(z_crit, 2) crit <- z_crit^2 crit_round <- round(crit, 2) #print(z_crit_round) #print(crit_round) v1_names <- rep(v1names, v2num) #dv v2_names <- NULL #iv for(v2 in v2names){ v2_names_pre <- rep(v2, v1num) #iv v2_names <- c(v2_names,v2_names_pre) } bonf <- as.data.frame(matrix( data = NA, #nrow = nrow(adj_p_vals) * 2, #ncol = ncol(adj_p_vals) + 2 nrow = nrow(adj_p_vals) * ncol(adj_p_vals), ncol = 4 )) stdresvals <- NULL adjpvals <- NULL chi2vals <- NULL for(i in 1:v2num){ for(j in 1:v1num){ stdresvals <- c(stdresvals,stdres[j,i]) adjpvals <- c(adjpvals,adj_p_vals[j,i]) chi2vals <- c(chi2vals,chisq_values[j,i]) } } 
chi2vals_round <- round(chi2vals, 2) stdresvals_round <- round(stdresvals, 2) bonf[,1] <- v2_names #iv bonf[,2] <- v1_names #dv bonf[,3] <- stdresvals #bonf[,4] <- chi2vals #bonf[,5] <- adjpvals bonf[,4] <- adjpvals #colnames(bonf) <- c(deparse(substitute(var2)), deparse(substitute(var1)), "Adj. Standardized Residual (Z)", "Chi Square", "Adj. p-value") colnames(bonf) <- c(deparse(substitute(var2)), deparse(substitute(var1)), "Adj. Standardized Residual (Z)", "Adj. p-value") bonf2 <- as.data.frame(matrix( data = NA, #nrow = nrow(adj_p_vals) * 2, #ncol = ncol(adj_p_vals) + 2 nrow = nrow(adj_p_vals) * ncol(adj_p_vals), ncol = 2 )) grouping <- paste0(v2_names, " * ", v1_names) bonf2[,1] <- grouping #group bonf2[,2] <- chi2vals #chi2 bonf2[,3] <- adjpvals bonf3 <- bonf2 bonf3[,4] <- chi2vals_round bonf3[,5] <- stdresvals_round #ch_text <- "\u03C7\U00B2" newList <- list(model = model, "Post-Hoc Test w/ Bonferroni Adjustment" = bonf) return(newList) #if(plot == TRUE){ #plotmax <- nrow(bonf3) #title <- paste0("Bar Chart of ", ch_text, " Values") #labx <- "Intersection of Categories" #laby <- paste0(ch_text," Value") #bonf2 <- as.data.frame(bonf3) #p <- ggplot2::ggplot(data = bonf3, aes(x=as.factor(grouping), y = chi2vals)) + ##ggplot2::ggplot(data = bonf3, aes(x=as.factor(grouping), y = chi2vals)) + # geom_bar(stat = "identity") + # scale_fill_brewer(palette="Blues") + # #geom_text(stat='identity', aes(label=paste0("n = ",Frequency)), vjust=-1) + # ggtitle(title) + xlab(labx) + ylab(laby) + # geom_hline(yintercept = crit, color="red") + # geom_text(aes(plotmax, crit, label = paste0("Critical ", ch_text, " = ", crit_round), vjust = -1), color = "red") + # geom_label_repel(stat='identity', aes(label = paste0(ch_text, " = ", chi2vals_round, "\nZ = ", stdresvals_round)), # size = 4, nudge_x = 1, show.legend = FALSE) + # theme_classic() + # theme(axis.text.x = element_text(angle = 65)) + # theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(), panel.background = element_blank(), # axis.line = element_line(colour = "black"), axis.text.x = element_text(vjust=0.5, colour="#000000"), # axis.text.y = element_text(face="bold", colour="#000000"), plot.title = element_text(hjust = 0.5, lineheight=1.5, face="bold")) #newList <- list(model = model, "Post-Hoc Test w/ Bonferroni Adjustment" = bonf, plot = p) #return(newList) #print(p) #return(p) #} } return(model) }
/scratch/gouwar.j/cran-all/cranData/vannstats/R/chi.sq.R
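Both optional behaviours of `chi.sq()` are opt-in flags; a hypothetical sketch (variables chosen only for illustration):

```r
data <- mtcars
chi.sq(data, vs, am)                  # Pearson chi-square
chi.sq(data, vs, am, correct = TRUE)  # with Yates' continuity correction
chi.sq(data, vs, gear, post = TRUE)   # adds Bonferroni-adjusted post-hoc (Z) tests
```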
#' Simplified Confidence Interval Calculation
#'
#' This function calculates the confidence interval (for a given confidence level) for a variable in a given data frame.
#' @importFrom gdata nobs
#' @importFrom stats qt sd
#' @param df data frame to read in.
#' @param var1 the variable of interest for which the CI will be calculated.
#' @param cl the desired confidence level (in percentages, ranging from \eqn{1} to \eqn{100}).
#' @return This function returns the mean, lower bound, upper bound, and standard error.
#' @examples
#' data <- mtcars
#'
#' ci.calc(data,mpg,95)
#' @export
ci.calc <- function (df, var1, cl){
  #cl <- (cl/100)
  #alpha <- 1 - cl
  calls <- length(match.call()) - 3
  if (calls == 0){
    newcl <- gsub("\\s*\\([^\\)]+\\)", "", as.character(match.call()[3]))
    newcl <- as.numeric(newcl)
    newcl <- (newcl/100)
    alpha <- 1 - newcl
    #CI(eval(substitute(var1), df), ci=cl)
    xbar <- mean(df, na.rm = TRUE)
    se <- sd(df, na.rm = TRUE)/sqrt(nobs(df))
    ci_low <- xbar + round(qt(alpha/2, 100000000000), 3) * se
    ci_high <- xbar - round(qt(alpha/2, 100000000000), 3) * se
    out <- c(Mean = xbar, `CI lower` = ci_low, `CI upper` = ci_high, `Std. Error` = se)
    #print(abs(round(qt(alpha/2, 100000000000), 3)))
  } else {
    #CI(eval(substitute(var1), df), ci=cl)
    newcl <- gsub("\\s*\\([^\\)]+\\)", "", as.character(match.call()[4]))
    newcl <- as.numeric(newcl)
    newcl <- (newcl/100)
    alpha <- 1 - newcl
    xbar <- mean(eval(substitute(var1), df), na.rm = TRUE)
    se <- sd(eval(substitute(var1), df), na.rm = TRUE)/sqrt(nobs(eval(substitute(var1), df)))
    ci_low <- xbar + round(qt(alpha/2, 100000000000), 3) * se
    ci_high <- xbar - round(qt(alpha/2, 100000000000), 3) * se
    out <- c(Mean = xbar, `CI lower` = ci_low, `CI upper` = ci_high, `Std. Error` = se)
    #print(abs(round(qt(alpha/2, 100000000000), 3)))
  }
  return(out)
}
/scratch/gouwar.j/cran-all/cranData/vannstats/R/ci.calc.R
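Because `ci.calc()` passes an enormous degrees-of-freedom value to `qt()`, its multiplier is effectively the standard-normal quantile; a manual check of the 95% interval it reports:

```r
data <- mtcars
ci.calc(data, mpg, 95)
xbar <- mean(mtcars$mpg)
se <- sd(mtcars$mpg) / sqrt(length(mtcars$mpg))
c(xbar - 1.96 * se, xbar + 1.96 * se)  # matches CI lower / CI upper
```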
#' Simplified Correlation Matrix
#'
#' This function creates a correlation matrix (cormat) from the variables in an equation, using a given data frame.
#' @importFrom formula.tools get.vars
#' @importFrom stats cor
#' @param df data frame to read in.
#' @param formula the variables in the regression model, \eqn{Y = X_1 + X_2 + ... + X_m}, written as \code{Y ~ X1 + X2}...
#' @return This function returns a correlation matrix for the variables provided in the formula.
#' @examples
#' data <- mtcars
#'
#' cormat(data, mpg ~ wt + am)
#' @export
cormat <- function(df, formula){
  vars <- get.vars(formula, data = df)
  frame <- df %>% dplyr::select(all_of(vars))
  frame <- as.data.frame(frame)
  # find variables/columns with characters/factors and convert to numeric,
  # then report recode values as.numeric(frame$nom/ord)-1
  cormatrix <- round((cor(frame, use = "complete.obs")), 2)
  #cormatrix <- data.frame(cormatrix)
  cormatrix[upper.tri(cormatrix)] <- ""
  cormatrix[cormatrix == 1.00] <- 1
  cormatrix <- as.data.frame(cormatrix)
  #cormatrix
  return(cormatrix)
}
/scratch/gouwar.j/cran-all/cranData/vannstats/R/cormat.R
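# Editor's illustration (not part of the package source). What cormat() returns,
# reproduced in base R: pairwise complete-observation correlations rounded to
# two decimals, with the redundant upper triangle blanked.
m <- round(cor(mtcars[, c("mpg", "wt", "am")], use = "complete.obs"), 2)
m[upper.tri(m)] <- ""
as.data.frame(m)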
#' Simplified Crosstabs
#'
#' This function returns a crosstab (tab) on a given data frame, using simplified calls within the function for two variables, returning the observed and expected frequencies.
#' @import ggplot2 dplyr purrr
#' @importFrom stats chisq.test
#' @param df data frame to read in.
#' @param var1 a first grouping variable.
#' @param var2 a second grouping variable.
#' @return This function returns the observed and expected frequencies of a bivariate relationship between \code{var1} and \code{var2} in data frame \code{df}.
#' @examples
#' data <- mtcars
#'
#' tab(data,mpg,cyl)
#' @export
tab <- function(df, var1, var2){
  #options(warn=-1) # suppress warnings for chi square run
  #v1 <- paste0(substitute(df),"$",substitute(var1)) # how to use $ operator in dataframe$variable
  #v2 <- paste0(substitute(df),"$",substitute(var2)) # how to use $ operator in dataframe$variable
  v1 <- (eval(substitute(var1), df))
  v2 <- (eval(substitute(var2), df))
  suppressWarnings(crosstab <- chisq.test(v1, v2, correct=FALSE))
  #options(warn=0) # unsuppress warnings for chi square
  crosstab$data.name <- paste0(deparse(substitute(var1)), " and ", deparse(substitute(var2)))
  names(dimnames(crosstab$observed)) <- c(deparse(substitute(var1)),deparse(substitute(var2)))
  names(dimnames(crosstab$expected)) <- c(deparse(substitute(var1)),deparse(substitute(var2)))
  # row marginals
  rowmarg <- NULL
  for(i in 1:dim(crosstab$observed)[1]){
    i_rmarg <- sum(crosstab$observed[i,])
    rowmarg <- c(rowmarg,i_rmarg)
  }
  # column marginals, plus the grand total
  colmarg <- NULL
  for(i in 1:dim(crosstab$observed)[2]){
    i_cmarg <- sum(crosstab$observed[,i])
    colmarg <- c(colmarg,i_cmarg)
  }
  tot_tot <- sum(colmarg)
  colmarg <- c(colmarg,tot_tot)
  # append the marginals to the observed and expected tables
  obs <- as.matrix(crosstab$observed)
  ex <- as.matrix(crosstab$expected)
  endcol <- dim(crosstab$observed)[2] + 1
  endrow <- dim(crosstab$observed)[1] + 1
  obs <- cbind(obs,rowmarg)
  obs <- rbind(obs,colmarg)
  ex <- cbind(ex,rowmarg)
  ex <- rbind(ex,colmarg)
  dimnames(obs)[[1]][endrow] <- "Total"
  dimnames(obs)[[2]][endcol] <- "Total"
  dimnames(obs)[[1]][1] <- paste0(deparse(substitute(var1)), ": ", dimnames(obs)[[1]][1])
  dimnames(obs)[[2]][1] <- paste0(deparse(substitute(var2)), ": ", dimnames(obs)[[2]][1])
  dimnames(ex)[[1]][endrow] <- "Total"
  dimnames(ex)[[2]][endcol] <- "Total"
  dimnames(ex)[[1]][1] <- paste0(deparse(substitute(var1)), ": ", dimnames(ex)[[1]][1])
  dimnames(ex)[[2]][1] <- paste0(deparse(substitute(var2)), ": ", dimnames(ex)[[2]][1])
  tab <- list(obs, ex)
  names(tab) <- c("Observed Frequencies","Expected Frequencies")
  return(tab)
}
/scratch/gouwar.j/cran-all/cranData/vannstats/R/crosstabs.R
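# Editor's illustration (not part of the package source). tab() appends the
# row/column totals by hand; base R's addmargins() does the same for the
# observed and expected tables that chisq.test() produces.
xt <- suppressWarnings(chisq.test(mtcars$cyl, mtcars$am, correct = FALSE))
addmargins(xt$observed)            # observed frequencies with totals
addmargins(round(xt$expected, 2))  # expected frequencies with totals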
#' General Social Survey, 2014 #' #' This subset of data comes from one iteration of the \emph{General Social Survey}, administered in 2014. These data were collected by the National Opinion Research Center (NORC) at the University of Chicago. The observations represent individuals' responses to survey questions. Information about the data set can be found in the GSS Codebook at: \url{https://burrelvannjr.com/docs/GSS_Codebook.pdf}. #' #' @format A data frame with 2538 observations and 676 variables. #' \tabular{ll}{ \cr #' id \tab respondent id number \cr #' age \tab age of respondent \cr #' sex \tab respondents sex \cr #' race \tab race of respondent \cr #' educ \tab highest year of school completed \cr #' dipged \tab diploma, ged, or other \cr #' paeduc \tab highest year school completed, father \cr #' maeduc \tab highest year school completed, mother \cr #' speduc \tab highest year school completed, spouse \cr #' sei10 \tab r's socioeconomic index (2010) \cr #' conrinc \tab respondent income in constant dollars \cr #' coninc \tab family income in constant dollars \cr #' degree \tab rs highest degree \cr #' padeg \tab fathers highest degree \cr #' madeg \tab mothers highest degree \cr #' spdeg \tab spouses highest degree \cr #' citizen \tab are you a citizen of america? \cr #' born \tab was r born in this country \cr #' year \tab gss year for this respondent \cr #' cohort \tab year of birth \cr #' spsei10 \tab r's spouse's socioeconomic index (2010) \cr #' pasei10 \tab r's father's socioeconomic index (2010) \cr #' masei10 \tab r's mother's socioeconomic index (2010) \cr #' childs \tab number of children \cr #' immcrime \tab immigrants increase crime rates \cr #' abany \tab abortion if woman wants for any reason \cr #' abdefect \tab strong chance of serious defect \cr #' abhlth \tab womans health seriously endangered \cr #' abnomore \tab married--wants no more children \cr #' abpoor \tab low income--cant afford more children \cr #' abrape \tab pregnant as result of rape \cr #' absingle \tab not married \cr #' accptoth \tab r accept others even when they do things wrong \cr #' acqntsex \tab r had sex with acquaintance last year \cr #' actassoc \tab how important to be active on soc or pol association \cr #' actlaw \tab how likely r to do something if unjust law being cons \cr #' adults \tab household members 18 yrs and older \cr #' advfront \tab sci rsch is necessary and should be supported by federal govt \cr #' affctlaw \tab how lliely congress give serious attention to rs dema \cr #' affrmact \tab favor preference in hiring blacks \cr #' aged \tab should aged live with their children \cr #' aidscndm \tab condom can reduce aids \cr #' aidslook \tab a health-look person may have aids \cr #' amancstr \tab how important to have american ancestry \cr #' ambetter \tab agree america is a better country \cr #' ambornin \tab how important to have been born in america \cr #' amchrstn \tab how important to be a christian \cr #' amcit \tab how important to have american citizenship \cr #' amcitizn \tab agree i would rather be a citizen of america \cr #' amcult \tab it is impossible to become fully american \cr #' amenglsh \tab how important to be able to speak english \cr #' amfeel \tab how important to feel american \cr #' amgovt \tab how important to respect america's laws etc \cr #' amlived \tab how important to have lived in america for life \cr #' amownway \tab america should follow its own interests \cr #' amproud1 \tab how proud being american \cr #' amshamed \tab agree there are things 
make me ashamed \cr #' amsports \tab agree sports makes me proud to be an american \cr #' amtv \tab tv should give preference to american films \cr #' arthrtis \tab told have arthritis or rheumatism \cr #' astrolgy \tab ever read a horscope or persoanl astrology report \cr #' astrosci \tab astrology is scientific \cr #' attend \tab how often r attends religious services \cr #' attrally \tab attended a political meeting or rally \cr #' avoidbuy \tab boycotted products for pol reasons \cr #' babies \tab household members less than 6 yrs old \cr #' backpain \tab r had back pain in the past 12 months \cr #' balneg \tab sci research is strongly in favor of harmful results \cr #' balpos \tab sci research is strongly in favor of benefits \cr #' befair \tab how often do you think people take advantage \cr #' belikeus \tab agree better if people were more like americans \cr #' bettrlfe \tab science makes our lives better \cr #' betrlang \tab which language r speaks more fluent \cr #' bible \tab feelings about the bible \cr #' bigbang \tab sci knowledge:the universe began with a huge explosion \cr #' boyorgrl \tab sci knowledge:father gene decides sex of baby \cr #' buypol \tab how important to choose products for pol reasons \cr #' buyvalue \tab percent of company stock r bought from own money \cr #' cantrust \tab poeple can be trusted or cant be too careful \cr #' cappun \tab favor or oppose death penalty for murder \cr #' careself \tab those in need have to take care of themselves \cr #' carried \tab r carried a stranger's belongings \cr #' chldidel \tab ideal number of children \cr #' chngeoth \tab how often r try to persuade other to share views \cr #' chngtme \tab how often r allowed change schedule \cr #' choices \tab political parties dont give real policy choices \cr #' citworld \tab i am a citizen of the world \cr #' class \tab subjective class identification \cr #' closeblk \tab how close feel to blacks \cr #' closewht \tab how close feel to whites \cr #' clsenoam \tab how close do you feel to north america \cr #' clsestat \tab how close do you feel to your state \cr #' clsetown \tab how close do you feel to your town or city \cr #' clseusa \tab how close do you feel to america \cr #' cntctgov \tab contacted politician or civil servant to express view \cr #' colath \tab allow anti-religionist to teach \cr #' colcom \tab should communist teacher be fired \cr #' coldeg1 \tab the highest degree r have earned \cr #' colhomo \tab allow homosexual to teach \cr #' colmil \tab allow militarist to teach \cr #' colmslm \tab allow anti-american muslim clergymen teaching in college \cr #' colrac \tab allow racist to teach \cr #' colsci \tab r has taken any college-level sci course \cr #' colscinm \tab number of college-level sci courses r have taken \cr #' compperf \tab size of perf based pay depend on profits \cr #' comprend \tab rs understanding of questions \cr #' compuse \tab r use computer \cr #' conarmy \tab confidence in military \cr #' conbus \tab confidence in major companies \cr #' conclerg \tab confidence in organized religion \cr #' condemnd \tab r free from conflicting demands \cr #' condom \tab used condom last time \cr #' condrift \tab sci knowledge:the continents have been moving \cr #' coneduc \tab confidence in education \cr #' confed \tab confid. in exec branch of fed govt \cr #' confinan \tab confid in banks & financial institutions \cr #' conjudge \tab confid. 
in united states supreme court \cr #' conlabor \tab confidence in organized labor \cr #' conlegis \tab confidence in congress \cr #' conmedic \tab confidence in medicine \cr #' conpress \tab confidence in press \cr #' consci \tab confidence in scientific community \cr #' contv \tab confidence in television \cr #' corruptn \tab how widespread corruption is in pub service in americ \cr #' courts \tab courts dealing with criminals \cr #' cowrkhlp \tab coworkers can be relied on when r needs help \cr #' cowrkint \tab coworkers take a personal interest in r \cr #' crack30 \tab r last use crack cocaine \cr #' crimlose \tab people convicted of serious crimes lose citizen rights \cr #' cutahead \tab r allowed a stranger to go ahead of you in line \cr #' decsorgs \tab america should follow decision of intl org \cr #' defpensn \tab r has defined benefit pension plan \cr #' dem10fut \tab how well will democracy work in america in ten yrs \cr #' dem10pst \tab how well did democracy work in america ten yrs ago \cr #' demtoday \tab how well democracy work in america \cr #' denom \tab specific denomination \cr #' denom16 \tab denomination in which r was raised \cr #' depress \tab told have depression \cr #' diabetes \tab told have diabetes \cr #' directns \tab r has given directions to a stranger \cr #' discaff \tab whites hurt by aff. action \cr #' discaffm \tab a man won't get a job or promotion \cr #' discaffw \tab a woman won't get a job or promotion \cr #' discpol \tab how often r discuss politics \cr #' divlaw \tab divorce laws \cr #' divorce \tab ever been divorced or separated \cr #' dwelown \tab does r own or rent home? \cr #' earnrs \tab how many in family earned money \cr #' earthsun \tab sci knowledge:the earth goes around the sun \cr #' effctsup \tab supervisor effective solve work/personal conflicts \cr #' elecfair \tab how fair last natl election:opprtunities of candidate \cr #' electron \tab sci knowledge:electrons are smaller than atoms \cr #' elecvote \tab how honest last natl election:counting of votes \cr #' emailhr \tab email hours per week \cr #' emailmin \tab email minutes per week \cr #' empinput \tab r involved in any task force for decision-making \cr #' emptrain \tab received formal training from employer \cr #' eqwlth \tab should govt reduce income differences \cr #' esop \tab r is member of esop \cr #' ethnic \tab country of family origin \cr #' evcrack \tab r ever use crack cocaine \cr #' evidu \tab r ever inject drugs \cr #' evolved \tab sci knowledge:human beings developed from animals \cr #' evpaidsx \tab ever have sex paid for or being paid since 18 \cr #' evstray \tab have sex other than spouse while married \cr #' evwork \tab ever work as long as one year \cr #' excldimm \tab america should exclude illegal immigrants \cr #' expdesgn \tab better way to test drug btw control and non-control \cr #' exptext \tab why is it better to test drug this way \cr #' extrapay \tab eligible for performance based pay \cr #' extrayr \tab year of the most recent perf based payments \cr #' fair \tab people fair or try to take advantage \cr #' fairearn \tab how fair is what r earn on the job \cr #' famgen \tab number of family generations in household \cr #' family16 \tab living with parents when 16 yrs old \cr #' famvswk \tab how often fam life interfere job \cr #' famwkoff \tab how hard to take time off \cr #' fear \tab afraid to walk at night in neighborhood \cr #' fechld \tab mother working doesnt hurt children \cr #' feelevel \tab amount of fees paid \cr #' feeused \tab fee given to 
get case \cr #' fefam \tab better for man to work, woman tend home \cr #' fehire \tab should hire and promote women \cr #' fejobaff \tab for or against preferential hiring of women \cr #' fepol \tab women not suited for politics \cr #' fepresch \tab preschool kids suffer if mother works \cr #' finalter \tab change in financial situation \cr #' finrela \tab opinion of family income \cr #' forland \tab foreigners should not be allowed to buy land \cr #' form \tab form of split questionnaire asked \cr #' freetrde \tab free trade leads to better products \cr #' fringeok \tab fringe benefits are good \cr #' frndsex \tab r had sex with friend last year \cr #' fucitzn \tab is r planning/appling for us citizenship or not \cr #' fund \tab how fundamentalist is r currently \cr #' fund16 \tab how fundamentalist was r at age 16 \cr #' getahead \tab opinion of how people get ahead \cr #' givblood \tab r donated blood during the past 12 months \cr #' givchrty \tab r has given money to a charity \cr #' givhmlss \tab r has given food or money to a homeless person \cr #' givseat \tab r offered seat to a stranger during past 12 months \cr #' god \tab rs confidence in the existence of god \cr #' goodlife \tab standard of living of r will improve \cr #' govdook \tab we can trust people in govt \cr #' granborn \tab how many grandparents born outside u.s. \cr #' grass \tab should marijuana be made legal \cr #' grpother \tab r belongs to another voluntary association \cr #' grpparty \tab r belongs to a political party \cr #' grprelig \tab r belongs to a church or othr religious organization \cr #' grpsprts \tab r belongs to a sports, leisure, or cultural grp \cr #' grpwork \tab r belongs to a trade union or professtional associati \cr #' gunlaw \tab favor or oppose gun permits \cr #' gvtrghts \tab (on a scale of 1 to 7, where 1 is not at all important and 7 is very important \cr #' handmove \tab r perform forceful hand movements \cr #' hapcohab \tab happiness of relt with partner \cr #' hapmar \tab happiness of marriage \cr #' happy \tab general happiness \cr #' haveinfo \tab enough info to get the job done \cr #' health \tab condition of health \cr #' health1 \tab rs health in general \cr #' hefinfo \tab number of hef informant \cr #' height \tab r is how tall \cr #' helpaway \tab r looked after plant or pet of others while away \cr #' helpblk \tab should govt aid blacks? \cr #' helpful \tab people helpful or looking out for selves \cr #' helphwrk \tab helped someone with hwork during past 12 months \cr #' helpjob \tab helped somebody to find a job past 12 months \cr #' helpnot \tab should govt do more or less? \cr #' helpoth \tab to help others \cr #' helppoor \tab should govt improve standard of living? \cr #' helpsick \tab should govt help pay for medical care? 
\cr #' helpusa \tab how important to help worse off ppl in america \cr #' helpwrld \tab how important to help worse off ppl in rest of world \cr #' hhtype \tab household type \cr #' hhtype1 \tab household type (condensed) \cr #' hispanic \tab hispanic specified \cr #' hivkiss \tab kiss can spread hiv \cr #' hivtest \tab have you ever been tested for hiv \cr #' hivtest1 \tab in what month and year was your last hiv test \cr #' hivtest2 \tab where did you have your last hiv test \cr #' hivvac \tab there is a vaccine that can prevent hiv \cr #' hlpequip \tab enough help and equip to ge the job done \cr #' hlthall \tab healthcare provided for everyone \cr #' hlthdays \tab days of activity limitation past 30 days \cr #' homosex \tab homosexual sex relations \cr #' hompop \tab number of persons in household \cr #' hotcore \tab sci knowledge: the center of earth is very hot \cr #' hrs1 \tab number of hours worked last week \cr #' hrs2 \tab number of hours usually work a week \cr #' hrsrelax \tab hours per day r have to relax \cr #' hsbio \tab r ever took a high school biology course \cr #' hschem \tab r ever took a high school chemistry course \cr #' hsmath \tab the highest level of math r completed in high school \cr #' hsphys \tab r ever took a high school physics course \cr #' hunt \tab does r or spouse hunt \cr #' hurtatwk \tab number of injuries on the job past 12 months \cr #' hvylift \tab r do repeated lifting \cr #' hyperten \tab told have hypertension or high blood pressure \cr #' idu30 \tab r inject drugs in past 30 days \cr #' if08who \tab who you would have voted for \cr #' if12who \tab who would r have voted for in 2012 election \cr #' ifwrong \tab agree people should support their country \cr #' immameco \tab immigrants good for america \cr #' immassim \tab what statement about immigrants matches view \cr #' immcult \tab immigrants undermine american culture \cr #' immeduc \tab legal immigrants should have same education as americans \cr #' immideas \tab immigrants make america more open \cr #' immjobs \tab immigrants take jobs away \cr #' immrghts \tab legal immigrants should have same right as american \cr #' imports \tab america should limit the import \cr #' incom16 \tab rs family income when 16 yrs old \cr #' income \tab total family income \cr #' income06 \tab total family income \cr #' indperf \tab size of perf based pay depend on individual \cr #' intecon \tab interested in economic issues \cr #' inteduc \tab interested in local school issues \cr #' intenvir \tab interested in environmental issues \cr #' interpol \tab joined an internet political forum \cr #' intfarm \tab interested in farm issues \cr #' intintl \tab interested in international issues \cr #' intlblks \tab unintelligent - intelligent \cr #' intlincs \tab largee intl company damage to local business \cr #' intlwhts \tab unintelligent -intelligent \cr #' intmed \tab interested in medical discoveries \cr #' intmil \tab interested in military policy \cr #' intrhome \tab internet access in r's home \cr #' intsci \tab interested in new scientific discoveries \cr #' intspace \tab interested in space exploration \cr #' inttech \tab interested in technologies \cr #' jobfind \tab could r find equally good job \cr #' jobfind1 \tab how easy for r to find a same job \cr #' jobhour \tab short working hours \cr #' jobinc \tab high income \cr #' joblose \tab is r likely to lose job \cr #' jobmeans \tab work important and feel accomplishment \cr #' jobpromo \tab chances for advancement \cr #' jobsec \tab no danger of being 
fired \cr #' jobsecok \tab the job security is good \cr #' joindem \tab took part in a demonstration \cr #' kidssol \tab rs kids living standard compared to r \cr #' knowschd \tab how far in advance know work schedule \cr #' knowwhat \tab r knows what's expected on job \cr #' laidoff \tab r was laid off main job last year \cr #' lasers \tab sci knowledge:lasers work by focusing sound waves \cr #' learnnew \tab job requires r to learn new things \cr #' leftrght \tab how left or right in politics \cr #' lentto \tab lent money to another person past 12 months \cr #' lessprd \tab agree often less proud of america \cr #' letdie1 \tab allow incurable patients to die \cr #' letin1 \tab number of immigrants to america nowadays should be \cr #' letin1a \tab number of immigrants nowadays should be \cr #' libath \tab allow anti-religious book in library \cr #' libcom \tab allow communists book in library \cr #' libhomo \tab allow homosexuals book in library \cr #' libmil \tab allow militarists book in library \cr #' libmslm \tab allow anti-american muslim clergymen's books in library \cr #' librac \tab allow racists book in library \cr #' life \tab is life exciting or dull \cr #' liveblks \tab neighborhood half black \cr #' livewhts \tab r favors living in half white neighborhood \cr #' loanitem \tab r has let someone borrow a item of some value \cr #' localnum \tab number of employees: rs work site \cr #' maind10 \tab mothers industry code (naics 2007) \cr #' major1 \tab college major 1 \cr #' major2 \tab college major 2 \cr #' majorcol \tab the field of degree r earned \cr #' manvsemp \tab relations bw management and employees \cr #' maocc10 \tab mothers census occupation code (2010) \cr #' marasian \tab close relative marry asian \cr #' marblk \tab close relative marry black \cr #' marhisp \tab close relative marry hispanic \cr #' marhomo \tab homosexuals should have right to marry \cr #' marital \tab marital status \cr #' martype \tab marital type \cr #' marwht \tab r favor close relative marrying white person \cr #' matesex \tab was 1 of rs partners spouse or regular \cr #' mawrkgrw \tab mothers employment when r was 16 \cr #' mawrkslf \tab mother self-emp. or worked for somebody \cr #' meltpot1 \tab better to maintain distinct cultures \cr #' meovrwrk \tab men hurt family when focus on work too much \cr #' mincult \tab ethnic minorities should be given gov assistance \cr #' misswork \tab miss work for health past 30 days \cr #' mntlhlth \tab days of poor mental health past 30 days \cr #' mobile16 \tab geographic mobility since age 16 \cr #' mode \tab interview done in-person or over the phone \cr #' moredays \tab days per month r work extra hours \cr #' mustwork \tab mandatory to work extra hours \cr #' nafta1 \tab how much heard or read about nafta? \cr #' nafta2a \tab america benefits from being a member of nafta? 
\cr #' nataid \tab foreign aid \cr #' nataidy \tab assistance to other countries -- ver y \cr #' natarms \tab military, armaments, and defense \cr #' natarmsy \tab national defense -- version y \cr #' natchld \tab assistance for childcare \cr #' natcity \tab solving problems of big cities \cr #' natcityy \tab assistance to big cities -- version y \cr #' natcrime \tab halting rising crime rate \cr #' natcrimy \tab law enforcement -- verison y \cr #' natdrug \tab dealing with drug addiction \cr #' natdrugy \tab drug rehabilitation -- version y \cr #' nateduc \tab improving nations education system \cr #' nateducy \tab education -- version y \cr #' natenrgy \tab developing alternative energy sources \cr #' natenvir \tab improving & protecting environment \cr #' natenviy \tab the environment -- version y \cr #' natfare \tab welfare \cr #' natfarey \tab assistance to the poor -- version y \cr #' natheal \tab improving & protecting nations health \cr #' nathealy \tab health -- version y \cr #' natmass \tab mass transportation \cr #' natpark \tab parks and recreation \cr #' natrace \tab improving the conditions of blacks \cr #' natracey \tab assistance to blacks -- version y \cr #' natroad \tab highways and bridges \cr #' natsci \tab supporting scientific research \cr #' natsoc \tab social security \cr #' natspac \tab space exploration program \cr #' natspacy \tab space exploration -- version y \cr #' news \tab how often does r read newspaper \cr #' newsfrom \tab main source of information about events in the news \cr #' nextgen \tab science & tech. give more opportunities to next generation \cr #' notvote \tab citizens have right not to vote \cr #' ntcitvte \tab long-term residents should vote \cr #' ntwkhard \tab past week not work hard enough \cr #' numemps \tab number of employee for the self-employed \cr #' nummen \tab number of male sex partners since 18 \cr #' numorg \tab number of people working in organization at all locations \cr #' numwomen \tab number of female sex partners since 18 \cr #' obey \tab to obey \cr #' obeylaws \tab how important always to abey laws \cr #' opdevel \tab opportunity to develop my abilities \cr #' oppsegov \tab how important:citizen engage in acts of civil disobed \cr #' oth16 \tab other protestant denominations \cr #' other \tab other protestant denominations \cr #' othersex \tab r had sex with some other last year \cr #' othjew \tab consider self to be jewish \cr #' othlang \tab can r speak language other than english \cr #' othlang1 \tab what other languages does r speak \cr #' othlang2 \tab what other languages does r speak \cr #' othreasn \tab how important to try to undrstnd reasonings of othr o \cr #' othshelp \tab people should help less fortunate others \cr #' oversamp \tab weights for black oversamples \cr #' overwork \tab r has too much work to do well \cr #' owngun \tab have gun in home \cr #' ownstock \tab r has stock in rs company \cr #' paidsex \tab r had sex for pay last year \cr #' painarms \tab r had pain in the arms in the past 12 months \cr #' paind10 \tab fathers industry code (2010) \cr #' paocc10 \tab fathers census occupation code (2010) \cr #' parborn \tab were rs parents born in this country \cr #' parcit \tab were your parents citizens of america? \cr #' parsol \tab rs living standard compared to parents \cr #' partfull \tab was r's work part-time or full-time? 
\cr #' partners \tab how many sex partners r had in last year \cr #' partnrs5 \tab how many sex partners r had in last 5 years \cr #' partteam \tab r work as part of a team \cr #' partyid \tab political party affiliation \cr #' patriot1 \tab patriotic feelings strengthen america's place in world \cr #' patriot2 \tab patriotic feelings lead to intolerance in america \cr #' patriot3 \tab patriotic feelings are needed for america to remain united \cr #' patriot4 \tab patriotic feelings lead to negative feelings towards immigrants \cr #' pawrkslf \tab father self-emp. or worked for somebody \cr #' paytaxes \tab how important never to try to evade taxes \cr #' peocntct \tab how many people in contact in a typical weekday \cr #' peoptrbl \tab assisting people in trouble is very important \cr #' phase \tab subsampling: two-phase design. \cr #' phone \tab does r have telephone \cr #' physhlth \tab days of poor physical health past 30 days \cr #' pikupsex \tab r had sex with casual date last year \cr #' pillok \tab birth control to teenagers 14-16 \cr #' pistol \tab pistol or revolver in home \cr #' polabuse \tab citizen said vulgar or obscene things \cr #' polactve \tab pol party encourge ppl to be active in politics in am \cr #' polattak \tab citizen attacking policeman with fists \cr #' poleff11 \tab don't have any say about what the government does \cr #' poleff18 \tab govt do not care much what ppl like r think \cr #' poleff19 \tab r have a good understanding of pol issues \cr #' poleff20 \tab most ppl are better informed about politics than r is \cr #' polescap \tab citizen attempting to escape custody \cr #' polfunds \tab donated money or raised funds for soc or pol activity \cr #' polgreed \tab most politicians are only for what get out of politics \cr #' polhitok \tab ever approve of police striking citizen \cr #' polint1 \tab how interested in politics \cr #' polinter \tab expressed political views on internet past year \cr #' polmurdr \tab citizen questioned as murder suspect \cr #' polnews \tab how often use media to get political news \cr #' polopts \tab how important:ppl given chance to participate in deci \cr #' polviews \tab think of self as liberal or conservative \cr #' popespks \tab pope is infallible on matters of faith or morals \cr #' popular \tab to be well liked or popular \cr #' pornlaw \tab feelings about pornography laws \cr #' posslq \tab does r have marital partner \cr #' posslqy \tab relationship status and cohabitation or not \cr #' postlife \tab belief in life after death \cr #' powrorgs \tab intl orgs take away much power from american govt \cr #' pray \tab how often does r pray \cr #' prayer \tab bible prayer in public schools \cr #' premarsx \tab sex before marriage \cr #' pres08 \tab vote obama or mccain \cr #' pres12 \tab vote obama or romney \cr #' preteen \tab household members 6 thru 12 yrs old \cr #' prodctiv \tab work conditions allow productivity \cr #' promtefr \tab promotions are handled fairly \cr #' promteok \tab rs chances for promotion good \cr #' proudart \tab how proud its achievements in the arts & lit. 
\cr #' prouddem \tab how proud the way democracy works \cr #' proudeco \tab how proud america's economic achievements \cr #' proudemp \tab r proud to work for employer \cr #' proudgrp \tab how proud its fair and equal treatment \cr #' proudhis \tab how proud its history \cr #' proudmil \tab how proud america's armed forces \cr #' proudpol \tab how proud its political influence in the world \cr #' proudsci \tab how proud its scientific and tech achievements \cr #' proudspt \tab how proud its achievements in sports \cr #' proudsss \tab how proud its social security system \cr #' racdif1 \tab differences due to discrimination \cr #' racdif2 \tab differences due to inborn disability \cr #' racdif3 \tab differences due to lack of education \cr #' racdif4 \tab differences due to lack of will \cr #' raclive \tab any opp. race in neighborhood \cr #' racmeet \tab allowed to hold pub meeting for racist \cr #' racopen \tab vote on open housing law \cr #' racwork \tab racial makeup of workplace \cr #' radioact \tab sci knowledge:all radioactivity is man-made \cr #' rank \tab rs self ranking of social position \cr #' ratetone \tab r's facial coloring by interviewer \cr #' realinc \tab family income in constant $ \cr #' realrinc \tab rs income in constant $ \cr #' reborn \tab has r ever had a 'born again' experience \cr #' refrndms \tab referendum are good way to decide important pol quest \cr #' reg16 \tab region of residence, age 16 \cr #' relactiv \tab how often does r take part in relig activities \cr #' relatsex \tab relation to last sex partner \cr #' relig \tab rs religious preference \cr #' relig16 \tab religion in which raised \cr #' reliten \tab strength of affiliation \cr #' relmeet \tab allowed to hold pub meeting for religious extremist \cr #' relpersn \tab r consider self a religious person \cr #' res16 \tab type of place lived in when 16 yrs old \cr #' respect \tab r treated with respect at work \cr #' respnum \tab number in family of r \cr #' retchnge \tab r returned money after getting too much change \cr #' revmeet \tab allowed to hold pub meeting for ppl who want overthro \cr #' rghtsmin \tab how important:govt protect right of minorities \cr #' richwork \tab if rich, continue or stop working \cr #' rifle \tab rifle in home \cr #' rincblls \tab income alone is enough \cr #' rincom06 \tab respondents income \cr #' rincome \tab respondents income \cr #' rowngun \tab does gun belong to r \cr #' safefrst \tab no shortcuts on worker safety \cr #' safehlth \tab safety and health condition good at work \cr #' safetywk \tab worker safety priority at work \cr #' satfin \tab satisfaction with financial situation \cr #' satjob \tab job or housework \cr #' satjob1 \tab job satisfaction in general \cr #' savesoul \tab tried to convince others to accept jesus \cr #' scibnfts \tab benefits of sci research outweight harmful results \cr #' scifrom \tab main source of information about science and technology \cr #' scinews1 \tab newspaper printed or online \cr #' scinews2 \tab magazine printed or online \cr #' scinews3 \tab where online get info \cr #' scistudy \tab r has clear understanding of scientific study \cr #' scitext \tab what it means to r to study scienfically \cr #' secondwk \tab r has job other than main \cr #' sector \tab type of college respondent attended \cr #' seeksci \tab probable source of information about scientific issues \cr #' selffrst \tab people need not overly worry about others \cr #' selfless \tab r feels like a selfless caring for others \cr #' servepeo \tab how committed 
govt admnstrators are to serve people \cr #' sexeduc \tab sex education in public schools \cr #' sexfreq \tab frequency of sex during last year \cr #' sexornt \tab sexual orientation \cr #' sexsex \tab sex of sex partners in last year \cr #' sexsex5 \tab sex of sex partners last five years \cr #' shortcom \tab world better if america acknowledged shortcomings \cr #' shotgun \tab shotgun in home \cr #' sibs \tab number of brothers and sisters \cr #' signdpet \tab signed a petition \cr #' size \tab size of place in 1000s \cr #' slpprblm \tab trouble sleeping last 12 months \cr #' socbar \tab spend evening at bar \cr #' socfrend \tab spend evening with friends \cr #' socommun \tab spend evening with neighbor \cr #' socrel \tab spend evening with relatives \cr #' solarrev \tab sci knowledge:how long the earth goes around the sun \cr #' solok \tab how important:citizens have adequate standard of livi \cr #' spanking \tab favor spanking to discipline child \cr #' spden \tab specific denomination, spouse \cr #' spdipged \tab spouse diploma, ged, or other \cr #' spevwork \tab spouse ever work as long as a year \cr #' spfund \tab how fundamentalist is spouse currently \cr #' sphrs1 \tab number of hrs spouse worked last week \cr #' sphrs2 \tab no. of hrs spouse usually works a week \cr #' spind10 \tab spouses industry code (naics 2007) \cr #' spkath \tab allow anti-religionist to speak \cr #' spkcom \tab allow communist to speak \cr #' spkhomo \tab allow homosexual to speak \cr #' spklang \tab how well does r speak other language \cr #' spkmil \tab allow militarist to speak \cr #' spkmslm \tab allow muslim clergymen preaching hatred of the us \cr #' spkrac \tab allow racist to speak \cr #' spocc10 \tab spouse census occupation code (2010) \cr #' spother \tab other protestant denominations \cr #' sprel \tab spouses religious preference \cr #' sprtprsn \tab r consider self a spiritual person \cr #' spsector \tab type of college spouse attended \cr #' spvtrfair \tab supervisor is fair \cr #' spwrkslf \tab spouse self-emp. 
or works for somebody \cr #' spwrksta \tab spouse labor force status \cr #' stockops \tab r hold any stock options of rs company \cr #' stockval \tab total dollar value of rs stock \cr #' stress \tab how often does r find work stressful \cr #' stress12 \tab stress management program last 12 months \cr #' strredpg \tab access to stress management \cr #' suicide1 \tab suicide if incurable disease \cr #' suicide2 \tab suicide if bankrupt \cr #' suicide3 \tab suicide if dishonored family \cr #' suicide4 \tab suicide if tired of living \cr #' supcares \tab supervisor concerned about welfare \cr #' suprvsjb \tab does r supervise others at work \cr #' suphelp \tab supervisor helpful to r in getting job done \cr #' talkedto \tab talked with someone depressed past 12 months \cr #' talkspvs \tab comfortable talking with supervisor about personal \cr #' tax \tab rs federal income tax \cr #' teamsafe \tab mgt and employees work together re safety \cr #' teens \tab household members 13 thru 17 yrs old \cr #' teensex \tab sex before marriage -- teens 14-16 \cr #' thnkself \tab to think for ones self \cr #' toofast \tab science makes our way of life change too fast \cr #' toofewwk \tab how often not enough staff \cr #' trdestck \tab company stock publicly traded \cr #' trdunion \tab workers need strong unions \cr #' trust \tab can people be trusted \cr #' trustman \tab r trust management at work \cr #' trynewjb \tab how likely r make effort for new job next year \cr #' tvhours \tab hours per day watching tv \cr #' unemp \tab ever unemployed in last ten yrs \cr #' union \tab does r or spouse belong to union \cr #' unrelat \tab number in household not related \cr #' uscitzn \tab is r us citizen \cr #' usedup \tab how often during past month r felt used up \cr #' usemedia \tab contacted in the media to express view \cr #' useskill \tab how much past skills can you make use in present \cr #' usetech \tab percentage of time use tech \cr #' usewww \tab r use www other than email \cr #' uswar \tab expect u.s. in war within 10 years \cr #' uswary \tab expect u.s. in world war in 10 years \cr #' valgiven \tab total donations past year r and immediate family \cr #' vetyears \tab years in armed forces \cr #' viruses \tab sci knowledge:antiviotics kill viruses as well as bacteria \cr #' visitors \tab number of visitors in household \cr #' voedcol \tab non-college postsecondary education (voednme1) \cr #' voednme1 \tab postsecondary institution attended for credit \cr #' voedncol \tab non-college postsecondary education (voednme2) \cr #' voednme2 \tab postsecondary institution attended for credit \cr #' volchrty \tab r done volunteer work for a charity \cr #' volmonth \tab volunteer in last month \cr #' vote08 \tab did r vote in 2008 election \cr #' vote12 \tab did r vote in 2012 election \cr #' voteelec \tab how important always to vote in elections \cr #' watchgov \tab how important to keep watch on action of govt \cr #' waypaid \tab how paid in main job \cr #' wealth \tab total wealth of respondent \cr #' webmob \tab r uses home internet through mobile device \cr #' weekswrk \tab weeks r. 
worked last year \cr #' weight \tab r weighs how much \cr #' whencol \tab when received college degree \cr #' whenhs \tab when received hs degree \cr #' whoelse1 \tab presence of others:children under six \cr #' whoelse2 \tab presence of others:older children \cr #' whoelse3 \tab presence of others:spouse partner \cr #' whoelse4 \tab presence of others:other relatives \cr #' whoelse5 \tab presence of others:other adults \cr #' whoelse6 \tab presence of others:no one \cr #' whywkhme \tab usual reason r work at home \cr #' widowed \tab ever been widowed \cr #' wkageism \tab r feels discriminated because of age \cr #' wkcontct \tab how often contacted about work when not working \cr #' wkdecide \tab how often r take part in decisions \cr #' wkfreedm \tab a lot of freedom to decide how to do job \cr #' wkharoth \tab r threatened on the job last 12 months \cr #' wkharsex \tab r sexually harassed on the job last 12 months \cr #' wkpraise \tab r is likely to be praised by supervisor \cr #' wkracism \tab r feels discriminated because of race \cr #' wksexism \tab r feels discriminated because of gender \cr #' wksmooth \tab workplace runs in smooth manner \cr #' wksub \tab does r or spouse have supervisor \cr #' wksubs \tab does supervisor have supervisor \cr #' wksup \tab does r or spouse supervise anyone \cr #' wksups \tab does subordinate supervise anyone \cr #' wkvsfam \tab how often job interferes fam life \cr #' wlthblks \tab rich - poor \cr #' wlthwhts \tab rich - poor \cr #' workblks \tab hard working - lazy \cr #' workdiff \tab r does numerous things on job \cr #' workfast \tab job requires r to work fast \cr #' workfor1 \tab r work for whom \cr #' workhard \tab to work hard \cr #' workwhts \tab hard working - lazy \cr #' wrkgovt \tab govt or private employee \cr #' wrkhome \tab how often r works at home \cr #' wrksched \tab usual work schedule \cr #' wrkslf \tab r self-emp or works for somebody \cr #' wrkstat \tab labor force status \cr #' wrktime \tab r has enough time to get the job done \cr #' wrktype \tab work arrangement at main job \cr #' wrkwayup \tab blacks overcome prejudice without favors \cr #' wrldgovt \tab international bodies should enforce environment \cr #' wwwhr \tab www hours per week \cr #' wwwmin \tab www minutes per week \cr #' xmarsex \tab sex with person other than spouse \cr #' xmovie \tab seen x-rated movie in last year \cr #' xnorcsiz \tab expanded n.o.r.c. size code \cr #' yearsjob \tab time at current job \cr #' yearval \tab total dollar value of payments in that year \cr #' } #' @source Data: \url{https://sda.berkeley.edu/sdaweb/analysis/?dataset=gss14} #' @source Codebook: \url{https://burrelvannjr.com/docs/GSS_Codebook.pdf} #' "GSS2014"
/scratch/gouwar.j/cran-all/cranData/vannstats/R/data.R
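# Editor's illustration (not part of the package source). Loading the bundled
# GSS extract documented above; the guard keeps this runnable even where the
# vannstats package is not installed.
if (requireNamespace("vannstats", quietly = TRUE)) {
  data("GSS2014", package = "vannstats")
  dim(GSS2014)                       # 2538 observations, 676 variables, as documented
  table(GSS2014$sex, GSS2014$degree) # labels are in the linked codebook
}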
#' Creating Dummy-Code Columns for Values of a Variable
#'
#' This function applies dummy-coding to a variable of interest, enabling the creation of \emph{n} or \emph{n-1} columns/variables based on \emph{n} number of attributes for the variable.
#' @import dplyr
#' @importFrom rlang .data
#' @param df data frame to read in.
#' @param var the variable to be dummy-coded. Is automatically converted to a character string.
#' @param remove logical (default set to \code{F}). When set to \code{remove = T}, will return a data frame using the true number of dummy-coded columns (i.e. \emph{n-1}).
#' @return This function updates the data frame with new variables (columns) representing unique values of a selected variable, and a binary score (0/1) for the absence or presence of a column's represented value for each observation.
#' @examples
#' data <- howell_aids_long
#'
#' dummy(data, student)
#' @export
dummy <- function(df, var, remove=FALSE){
  df2 <- as.data.frame(df)
  col <- deparse(substitute(var))
  vals <- unique(eval(substitute(var), df2))
  vals <- as.character(vals)
  # build one all-zero indicator column per unique value, then bind to the data
  mat <- data.frame(matrix(ncol = length(vals), nrow = nrow(df2)))
  colnames(mat) <- vals
  mat[is.na(mat)] <- 0
  df2 <- df2 %>% dplyr::bind_cols(mat)
  # flag each row's own value with a 1
  for(row in 1:nrow(df2)){
    code <- df2[row, col]
    code <- as.character(code)
    df2[row, code] <- 1
  }
  # remove = TRUE drops the last indicator column, leaving n-1 dummies
  if(remove==TRUE){
    df2 <- df2[1:(length(df2)-1)]
  }
  # write the modified data frame back to the caller's environment under its original name
  dfname <- deparse(substitute(df))
  pos <- 1
  envir = as.environment(pos)
  assign(dfname, df2, envir = envir)
}
/scratch/gouwar.j/cran-all/cranData/vannstats/R/dummy.R
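# Editor's illustration (not part of the package source). dummy() builds one
# 0/1 indicator per unique value; base R's model.matrix() gives the n-1 version
# that remove = TRUE emulates (one level serves as the reference).
cyl_f <- factor(mtcars$cyl)
head(sapply(levels(cyl_f), function(l) as.integer(cyl_f == l)))  # all n indicators
head(model.matrix(~ cyl_f))                                      # intercept + n-1 indicators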
#' Howell Student AIDS Knowledge Data (Long Form)
#'
#' This data set, from Howell, measures students' knowledge at three time points, in long form.
#'
#' @format A data frame with 12 observations and 3 variables.
#' \tabular{ll}{ \cr
#'  student \tab student id \cr
#'  time \tab time point measured \cr
#'  knowledge \tab student AIDS knowledge score (at various time points) \cr
#' }
#'
"howell_aids_long"
/scratch/gouwar.j/cran-all/cranData/vannstats/R/howell_aids_long.R
#' Howell Student AIDS Knowledge Data (Wide Form)
#'
#' This data set, from Howell, measures students' knowledge at three time points, in wide form.
#'
#' @format A data frame with 4 observations and 4 variables.
#' \tabular{ll}{ \cr
#'  student \tab student id \cr
#'  t1 \tab student AIDS knowledge score at time 1 \cr
#'  t2 \tab student AIDS knowledge score at time 2 \cr
#'  t3 \tab student AIDS knowledge score at time 3 \cr
#' }
#'
"howell_aids_wide"
/scratch/gouwar.j/cran-all/cranData/vannstats/R/howell_aids_wide.R
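# Editor's illustration (not part of the package source). The long form
# (howell_aids_long) can be recovered from the wide form with tidyr; a sketch
# assuming both packages are available.
if (requireNamespace("vannstats", quietly = TRUE) && requireNamespace("tidyr", quietly = TRUE)) {
  data("howell_aids_wide", package = "vannstats")
  tidyr::pivot_longer(howell_aids_wide, cols = c("t1", "t2", "t3"),
                      names_to = "time", values_to = "knowledge")
}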
#' Simplified Histogram
#'
#' This function plots a histogram (hst) on a given data frame, and uses simplified calls within the function to parse the histogram by up to 2 variables.
#' @import ggplot2 dplyr purrr
#' @importFrom stats IQR density
#' @param df data frame to read in.
#' @param var1 the dependent/outcome variable, \eqn{Y}. The variable of interest that should be plotted.
#' @param by1 the main independent/predictor variable, \eqn{X_1}. A grouping variable by which the histogram for \code{var1} should be parsed.
#' @param by2 a potential second independent/predictor variable, \eqn{X_2}. A second grouping variable by which the histogram for \code{var1} (already parsed by \code{by1}) should be parsed.
#' @return This function returns the histogram for \code{var1} in data frame \code{df}. Can be split to return a histogram for \code{var1} in data frame \code{df}, broken out by \code{by1} (and \code{by2}).
#' @examples
#' data <- mtcars
#'
#' hst(data,mpg,cyl)
#' @export
hst <- function(df, var1, by1, by2){
  #options(warn=-1) #suppressWarnings()
  v1 <- NULL # necessary for removing the "undefined global function" warning
  density1 <- NULL
  group <- NULL
  # bygroups: -1 = bare vector, 0 = variable only, 1 = one grouping variable, 2 = two
  bygroups <- length(match.call())-3
  n1 <- deparse(substitute(var1))
  n1 <- as.character(n1)
  if(bygroups==-1) {
    # no variable named: df itself is treated as the vector to plot
    title <- paste0("Histogram of '", deparse(substitute(df)), "'")
    labx <- deparse(substitute(df))
    df <- as.data.frame(df)
    df <- df %>% mutate(group = "group")
    names(df) <- c("v1","group")
    n0 <- "v1"
    n0 <- as.character(n0)
    # normal curve evaluated over +/- 3.291 SD around the group mean
    # (the ifelse guard falls back to sd = 0)
    dens = split(df, df$group) %>%
      map_df(~ tibble(v1 = seq(((mean(.x[[n0]], na.rm=T))-(3.291*(ifelse(length(.x)>1, sd(.x[[n0]], na.rm=T), 0)))),
                               ((mean(.x[[n0]], na.rm=T))+(3.291*(ifelse(length(.x)>1, sd(.x[[n0]], na.rm=T), 0)))),
                               length=1000),
                      density1 = dnorm(x=v1, mean=mean(.x[[n0]],na.rm=T),
                                       sd=(ifelse(length(.x)>1, sd(.x[[n0]], na.rm=T), 0)))),
             .id="group")
    b1 <- df
    b1 <- b1[,1]
    bins <- diff(range(b1, na.rm=T)) / (2 * IQR(b1, na.rm=T) / length(b1)^(1/3)) # (computed but unused; bw is used directly)
    # Freedman-Diaconis binwidth, floored at 1
    bw <- ((2 * IQR(b1, na.rm=T)) / length(b1)^(1/3))
    if(bw<1){
      bw <- 1
    }
    minx <- 0
    if(min(b1, na.rm = T)<minx){
      minx <- min(b1, na.rm = T)
    }
    maxx <- max(b1, na.rm = T) + 1
    df2 <- df %>% dplyr::count(group)
    df2$group <- as.character(df2$group)
    dens <- dens %>% left_join(df2, by="group")
    dens <- dens %>% mutate(density=(density1*bw*n)) # rescale density to the count scale: height * binwidth * n
    p <- ggplot2::ggplot(data = df, aes(x=v1)) +
      geom_histogram(color="black", fill="lightgrey", binwidth = bw, boundary = 0, closed = "left", na.rm = TRUE) +
      facet_null() +
      geom_line(data=dens, aes(x=v1, y=(density)), colour="black", na.rm = TRUE) +
      ggtitle(title) + xlab(labx) + xlim(c(minx,maxx)) +
      theme_classic() +
      theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),
            panel.background = element_blank(), axis.line = element_line(colour = "black"),
            axis.text.x = element_text(vjust=0.5, colour="#000000"),
            axis.text.y = element_text(face="bold", colour="#000000"),
            plot.title = element_text(hjust = 0.5, lineheight=1.5, face="bold"))
  }
  if(bygroups==0) {
    title <- paste0("Histogram of '", deparse(substitute(var1)), "'")
    df <- df %>% mutate(group = "group")
    dens = split(df, df$group) %>%
      map_df(~ tibble(var1 = seq(((mean(.x[[n1]], na.rm=T))-(3.291*(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
                                 ((mean(.x[[n1]], na.rm=T))+(3.291*(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
                                 length=1000),
                      density1 = dnorm(x=var1, mean=mean(.x[[n1]],na.rm=T),
                                       sd=(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
             .id="group")
    b1 <- df %>% dplyr::select({{ var1 }})
    b1 <- b1[,1]
    bins <- diff(range(b1, na.rm=T)) / (2 * IQR(b1, na.rm=T) / length(b1)^(1/3))
    bw <- ((2 * IQR(b1, na.rm=T)) / length(b1)^(1/3))
    if(bw<1){
      bw <- 1
    }
    minx <- 0
    if(min(b1, na.rm = T)<minx){
      minx <- min(b1, na.rm = T)
    }
    maxx <- max(b1, na.rm = T) + 1
    df2 <- df %>% dplyr::count(group)
    df2$group <- as.character(df2$group)
    dens <- dens %>% left_join(df2, by="group")
    dens <- dens %>% mutate(density=(density1*bw*n)) # rescale density to the count scale
    p <- ggplot2::ggplot(data = df, aes(x={{ var1 }})) +
      geom_histogram(color="black", fill="lightgrey", binwidth = bw, boundary = 0, closed = "left", na.rm = TRUE) +
      facet_null() +
      geom_line(data=dens, aes(x=var1, y=(density)), colour="black", na.rm = TRUE) +
      ggtitle(title) + xlim(c(minx,maxx)) +
      theme_classic() +
      theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),
            panel.background = element_blank(), axis.line = element_line(colour = "black"),
            axis.text.x = element_text(vjust=0.5, colour="#000000"),
            axis.text.y = element_text(face="bold", colour="#000000"),
            plot.title = element_text(hjust = 0.5, lineheight=1.5, face="bold"))
  }
  if(bygroups==1) {
    df <- df %>%
      dplyr::filter(!is.na({{ by1 }})) %>% #can change this to include NAs if want to compare missingness
      dplyr::mutate(group = {{ by1 }}) #%>% dplyr::filter(!is.na(group))
    title <- paste0("Histogram of '", deparse(substitute(var1)),"' by '", deparse(substitute(by1)), "'")
    dens = split(df, df$group) %>%
      map_df(~ tibble(var1 = seq(((mean(.x[[n1]], na.rm=T))-(3.291*(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
                                 ((mean(.x[[n1]], na.rm=T))+(3.291*(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
                                 length=1000),
                      density1 = dnorm(x=var1, mean=mean(.x[[n1]],na.rm=T),
                                       sd=(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
             .id="group")
    b1 <- df %>% dplyr::select({{ var1 }})
    b1 <- b1[,1]
    bins <- diff(range(b1, na.rm=T)) / (2 * IQR(b1, na.rm=T) / length(b1)^(1/3))
    bw <- ((2 * IQR(b1, na.rm=T)) / length(b1)^(1/3))
    if(bw<1){
      bw <- 1
    }
    minx <- 0
    if(min(b1, na.rm = T)<minx){
      minx <- min(b1, na.rm = T)
    }
    maxx <- max(b1, na.rm = T) + 1
    df2 <- df %>% dplyr::count(group)
    df2$group <- as.character(df2$group)
    dens <- dens %>% left_join(df2, by="group")
    dens <- dens %>% mutate(density=(density1*bw*n)) # rescale density to the count scale
    p <- ggplot2::ggplot(data = df, aes(x={{ var1 }})) +
      geom_histogram(color="black", fill="lightgrey", binwidth = bw, boundary = 0, closed = "left", na.rm = TRUE) +
      facet_wrap(~group) +
      geom_line(data=dens, aes(x=var1, y=(density)), colour="black", na.rm = TRUE) +
      ggtitle(title) + xlim(c(minx,maxx)) +
      theme_classic() +
      theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),
            panel.background = element_blank(), axis.line = element_line(colour = "black"),
            axis.text.x = element_text(vjust=0.5, colour="#000000"),
            axis.text.y = element_text(face="bold", colour="#000000"),
            plot.title = element_text(hjust = 0.5, lineheight=1.5, face="bold"))
  }
  if(bygroups==2) {
    df <- df %>%
      dplyr::filter(!is.na({{ by1 }})) %>% #can change this to include NAs if want to compare missingness
      dplyr::filter(!is.na({{ by2 }})) %>% #can change this to include NAs if want to compare missingness
      mutate(group = paste0({{ by1 }},", ",{{ by2 }}))
    title <- paste0("Histogram of '", deparse(substitute(var1)),"' by '", deparse(substitute(by1)),"' and '", deparse(substitute(by2)), "'")
    dens = split(df, df$group) %>%
      map_df(~ tibble(var1 = seq(((mean(.x[[n1]], na.rm=T))-(3.291*(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
                                 ((mean(.x[[n1]], na.rm=T))+(3.291*(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
                                 length=1000),
                      density1 = dnorm(x=var1, mean=mean(.x[[n1]],na.rm=T),
                                       sd=(ifelse(length(.x)>1, sd(.x[[n1]], na.rm=T), 0)))),
             .id="group")
    b1 <- df %>% dplyr::select({{ var1 }})
    b1 <- b1[,1]
    bins <- diff(range(b1, na.rm=T)) / (2 * IQR(b1, na.rm=T) / length(b1)^(1/3))
    bw <- ((2 * IQR(b1, na.rm=T)) / length(b1)^(1/3))
    if(bw<1){
      bw <- 1
    }
    minx <- 0
    if(min(b1, na.rm = T)<minx){
      minx <- min(b1, na.rm = T)
    }
    maxx <- max(b1, na.rm = T) + 1
    df2 <- df %>% dplyr::count(group)
    df2$group <- as.character(df2$group)
    dens <- dens %>% left_join(df2, by="group")
    dens <- dens %>% mutate(density=(density1*bw*n)) # rescale density to the count scale
    p <- ggplot2::ggplot(data = df, aes(x={{ var1 }})) +
      geom_histogram(color="black", fill="lightgrey", binwidth = bw, boundary = 0, closed = "left", na.rm = TRUE) +
      facet_wrap(~group) +
      geom_line(data=dens, aes(x=var1, y=(density)), colour="black", na.rm = TRUE) +
      ggtitle(title) + xlim(c(minx,maxx)) +
      theme_classic() +
      theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),
            panel.background = element_blank(), axis.line = element_line(colour = "black"),
            axis.text.x = element_text(vjust=0.5, colour="#000000"),
            axis.text.y = element_text(face="bold", colour="#000000"),
            plot.title = element_text(hjust = 0.5, lineheight=1.5, face="bold"))
  }
  #df$group
  #print(df)
  return(p)
}
/scratch/gouwar.j/cran-all/cranData/vannstats/R/hst.R
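# Editor's illustration (not part of the package source). hst() chooses its
# binwidth with the Freedman-Diaconis rule, bw = 2*IQR(x)/n^(1/3), floored at 1,
# and rescales the overlaid normal curve to the count scale (density * bw * n).
# A quick check of the binwidth rule on mtcars$mpg:
x <- mtcars$mpg
bw <- (2 * IQR(x)) / length(x)^(1/3)
max(bw, 1)  # the value hst() would use as binwidth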
#' Mode Function
#'
#' This function returns the mode (most frequent value) for a variable within a data frame or a list of values.
#' @param x variable within data frame or a list of values.
#' @param na.rm remove the NAs, default is FALSE.
#' @return This function returns the mode for a variable within a data frame or a list of values.
#' @examples
#'
#' data <- mtcars
#'
#' mode(data$mpg)
#' @export
mode <- function(x, na.rm = FALSE) {
  if(na.rm){
    x = x[!is.na(x)]
  }
  ux <- unique(x)
  # count occurrences of each unique value; ties return the value encountered first
  return(ux[which.max(tabulate(match(x, ux)))])
}
/scratch/gouwar.j/cran-all/cranData/vannstats/R/mode.R
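# Editor's illustration (not part of the package source). These calls assume
# the mode() defined above has been sourced (it masks base::mode):
mode(c(1, 2, 2, 3, 3))                   # 2: both 2 and 3 appear twice; 2 comes first
mode(c(NA, 5, 5, NA, NA))                # NA: it is the most frequent value
mode(c(NA, 5, 5, NA, NA), na.rm = TRUE)  # 5: NAs dropped before counting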
#' Simplified One-Way Analysis of Variance
#'
#' This function simplifies the call for one-way ANOVA (ow.anova) on a given data frame. Also allows calls for Tukey's Honestly Significant Difference Post-Hoc Comparisons Test (hsd), as well as a means plot (plot).
#' @import gplots
#' @importFrom stats TukeyHSD aov
#' @param df data frame to read in.
#' @param var1 the dependent/outcome variable, \eqn{Y}.
#' @param by1 the main independent/predictor variable, \eqn{X}. A grouping variable by which \code{var1} should be parsed.
#' @param hsd logical (default set to \code{F}). When set to \code{hsd = T}, will return results of Tukey's Honestly Significant Difference Post-Hoc Comparisons Test.
#' @param plot logical (default set to \code{F}). When set to \code{plot = T}, will return a means plot with 95 percent confidence intervals, broken out by each group (\code{by1}).
#' @return This function returns the summary results table for a one-way ANOVA, examining mean differences in \code{var1} from data frame \code{df}, across \code{by1} groups.
#' @examples
#' data <- mtcars
#'
#' ow.anova(data,mpg,cyl,plot=TRUE)
#' @export
ow.anova <- function(df, var1, by1, plot = FALSE, hsd = FALSE){
  #options(scipen=999)
  labx <- deparse(substitute(by1))
  laby <- deparse(substitute(var1))
  btwn <- paste0("Between Groups (",deparse(substitute(by1)),")")
  witn <- paste0("Within Groups (",deparse(substitute(by1)),")")
  model <- summary(aov(eval(substitute(var1), df) ~ factor(eval(substitute(by1), df))))
  #rownames(model[[1]]) <- c(deparse(substitute(by1)), "Residuals")
  # relabel the summary table with italic F and p (Unicode mathematical italics)
  italic_p <- "\U1D45D"
  italic_F <- "\U1D46D"
  #colnames(model[[1]]) <- c("df", "SS", "MS", "F", "p")
  colnames(model[[1]]) <- c("df", "SS", "MS", italic_F, italic_p)
  rownames(model[[1]]) <- c(btwn, witn)
  if(plot==TRUE){
    {suppressWarnings(plotmeans(eval(substitute(var1), df) ~ eval(substitute(by1), df),
                                main = "Plot of Group Means with 95% CI",
                                xlab = labx, ylab = laby))}
  }
  if(hsd==TRUE){
    v2 <- as.factor(eval(substitute(by1), df))
    tukey <- TukeyHSD(aov(eval(substitute(var1), df) ~ v2))
    #names(tukey) <- "Tukey's HSD (Honestly Significant Difference)"
    b <- tukey$v2
    #b <- format(tukey$v2, scientific=F)
    #sig <- rep(NA, length(b[,4]))
    #b <- cbind(b,sig)
    #b[,4] <- as.numeric(b[,4])
    #b[,5][b[,4]<=.05] <- "*"
    #b[,5][b[,4]<=.01] <- "**"
    #b[,5][b[,4]<=.001] <- "***"
    #colnames(b) <- c("diff", "lwr", "upr", italp)
    #return(b)
    list <- c(model, tukey)
    names(list) <- c("Analysis of Variance (ANOVA)","Tukey's HSD (Honestly Significant Difference)")
    return(list)
  }
  return(model)
}
/scratch/gouwar.j/cran-all/cranData/vannstats/R/ow.anova.R
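# Illustrative sketch (not part of the package): with hsd = TRUE the return value
# is a two-element list (ANOVA table plus Tukey comparisons) rather than the
# ANOVA summary alone.
res <- ow.anova(mtcars, mpg, cyl, hsd = TRUE)
names(res) # "Analysis of Variance (ANOVA)" and "Tukey's HSD (Honestly Significant Difference)"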
#' Simplified Correlation
#'
#' This function simplifies the call for Pearson's Product-Moment Correlation Coefficient (p.corr) on a given data frame.
#' @importFrom stats cor.test qt
#' @param df data frame to read in.
#' @param var1 the dependent/outcome variable, \eqn{Y}.
#' @param var2 the main independent/predictor variable, \eqn{X}.
#' @return This function returns the summary results table for a Pearson's correlation, examining the relationship between \code{var1} from data frame \code{df}, and \code{var2}.
#' @examples
#' data <- mtcars
#'
#' p.corr(data,mpg,wt)
#' @export
p.corr <- function(df, var1, var2){
  model <- cor.test(eval(substitute(var1), df), eval(substitute(var2), df))
  # swap the statistic and estimate slots so r prints as the headline statistic
  t <- model$statistic
  model$statistic <- model$estimate
  names(model$statistic) <- "\U1D493" # bold italic r
  model$estimate <- t
  names(model$estimate) <- "\U1D495" # bold italic t
  model$data.name <- paste0(deparse(substitute(var1)), " and ", deparse(substitute(var2)))
  # critical r at alpha = .05 (two-tailed), computed but not returned
  df_1 <- model$parameter[[1]]
  crit_t <- (-qt(0.025, df_1))
  crit_r2 <- ((crit_t^2) / ((crit_t^2) + df_1))
  crit_r <- sqrt(crit_r2)
  return(model)
}

# file: vannstats/R/p.corr.R
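# Illustrative sketch (not part of the package): the critical r computed (but not
# returned) above uses the t-to-r identity r^2 = t^2 / (t^2 + df). For mtcars
# (n = 32, so df = 30):
df_1 <- 30
crit_t <- -qt(0.025, df_1)                        # two-tailed critical t, alpha = .05
crit_r <- sqrt((crit_t^2) / ((crit_t^2) + df_1))  # ~0.349; any |r| above it is significant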
#' Simplified Normal (Q-Q) Plot
#'
#' This function plots a Q-Q/Quantile-Quantile plot (qq) on a given data frame, and uses simplified calls within the function to parse the Q-Q plot by up to 2 variables.
#' @import dplyr ggpubr
#' @param df data frame to read in.
#' @param var1 the dependent/outcome variable, \eqn{Y}. The variable of interest that should be plotted.
#' @param by1 the main independent/predictor variable, \eqn{X_1}. A grouping variable by which the Q-Q plot for \code{var1} should be parsed.
#' @param by2 a potential second independent/predictor variable, \eqn{X_2}. A second grouping variable by which the Q-Q plot for \code{var1} (already parsed by \code{by1}) should be parsed.
#' @return This function returns the quantile-quantile plot for \code{var1} in data frame \code{df}. Can be split to return a quantile-quantile plot for \code{var1} in data frame \code{df}, broken out by \code{by1} (and \code{by2}).
#' @examples
#' data <- mtcars
#'
#' qq(data,mpg,cyl)
#' @export
qq <- function(df, var1, by1, by2){
  v1 <- NULL # necessary for removing the "undefined global function" warning
  bygroups <- length(match.call()) - 3
  if (bygroups == -1) {
    # df was passed as a bare vector rather than a data frame
    title <- paste0("Normal Q-Q Plot of '", deparse(substitute(df)), "'")
    laby <- paste0("Observed Value of '", deparse(substitute(df)), "'")
    df <- as.data.frame(df)
    names(df) <- "v1"
    df <- df %>% dplyr::filter(!is.na(v1))
    p <- ggplot2::ggplot(data = df, aes(sample = v1)) +
      geom_point(stat = "qq", shape = 1) +
      geom_qq_line() +
      geom_exec(.stat_qq_confint, alpha = 0.2, conf.int.level = .95) +
      facet_null() +
      ggtitle(title) +
      labs(x = "Expected (Theoretical) Normal", y = laby) +
      theme_classic() +
      theme(panel.grid.major = element_blank(),
            panel.grid.minor = element_blank(),
            panel.background = element_blank(),
            axis.line = element_line(colour = "black"),
            axis.text.x = element_text(vjust = 0.5, colour = "#000000"),
            axis.text.y = element_text(face = "bold", colour = "#000000"),
            plot.title = element_text(hjust = 0.5, lineheight = 1.5, face = "bold"))
  }
  if (bygroups == 0) {
    df <- df %>% dplyr::filter(!is.na({{ var1 }}))
    title <- paste0("Normal Q-Q Plot of '", deparse(substitute(var1)), "'")
    laby <- paste0("Observed Value of '", deparse(substitute(var1)), "'")
    p <- ggplot2::ggplot(data = df, aes(sample = {{ var1 }})) +
      geom_point(stat = "qq", shape = 1) +
      geom_qq_line() +
      geom_exec(.stat_qq_confint, alpha = 0.2, conf.int.level = .95) +
      facet_null() +
      ggtitle(title) +
      labs(x = "Expected (Theoretical) Normal", y = laby) +
      theme_classic() +
      theme(panel.grid.major = element_blank(),
            panel.grid.minor = element_blank(),
            panel.background = element_blank(),
            axis.line = element_line(colour = "black"),
            axis.text.x = element_text(vjust = 0.5, colour = "#000000"),
            axis.text.y = element_text(face = "bold", colour = "#000000"),
            plot.title = element_text(hjust = 0.5, lineheight = 1.5, face = "bold"))
  }
  if (bygroups == 1) {
    df <- df %>%
      dplyr::filter(!is.na({{ var1 }})) %>%
      dplyr::filter(!is.na({{ by1 }}))
    df <- df %>% mutate(group = {{ by1 }})
    title <- paste0("Normal Q-Q Plot of '", deparse(substitute(var1)), "' by '",
                    deparse(substitute(by1)), "'")
    laby <- paste0("Observed Value of '", deparse(substitute(var1)), "'")
    p <- ggplot2::ggplot(data = df, aes(sample = {{ var1 }})) +
      geom_point(stat = "qq", shape = 1) +
      geom_qq_line() +
      geom_exec(.stat_qq_confint, alpha = 0.2, conf.int.level = .95) +
      facet_wrap(~group) +
      ggtitle(title) +
      labs(x = "Expected (Theoretical) Normal", y = laby) +
      theme_classic() +
      theme(panel.grid.major = element_blank(),
            panel.grid.minor = element_blank(),
            panel.background = element_blank(),
            axis.line = element_line(colour = "black"),
            axis.text.x = element_text(vjust = 0.5, colour = "#000000"),
            axis.text.y = element_text(face = "bold", colour = "#000000"),
            plot.title = element_text(hjust = 0.5, lineheight = 1.5, face = "bold"))
  }
  if (bygroups == 2) {
    df <- df %>%
      dplyr::filter(!is.na({{ var1 }})) %>%
      dplyr::filter(!is.na({{ by1 }})) %>%
      dplyr::filter(!is.na({{ by2 }}))
    df <- df %>% mutate(group = paste0({{ by1 }}, ", ", {{ by2 }}))
    title <- paste0("Normal Q-Q Plot of '", deparse(substitute(var1)), "' by '",
                    deparse(substitute(by1)), "' and '", deparse(substitute(by2)), "'")
    laby <- paste0("Observed Value of '", deparse(substitute(var1)), "'")
    p <- ggplot2::ggplot(data = df, aes(sample = {{ var1 }})) +
      geom_point(stat = "qq", shape = 1) +
      geom_qq_line() +
      geom_exec(.stat_qq_confint, alpha = 0.2, conf.int.level = .95) +
      facet_wrap(~group) +
      ggtitle(title) +
      labs(x = "Expected (Theoretical) Normal", y = laby) +
      theme_classic() +
      theme(panel.grid.major = element_blank(),
            panel.grid.minor = element_blank(),
            panel.background = element_blank(),
            axis.line = element_line(colour = "black"),
            axis.text.x = element_text(vjust = 0.5, colour = "#000000"),
            axis.text.y = element_text(face = "bold", colour = "#000000"),
            plot.title = element_text(hjust = 0.5, lineheight = 1.5, face = "bold"))
  }
  return(p)
}

# slope and intercept of the line through the first and third sample quartiles
.qq_line <- function(data, qf, na.rm) {
  q.sample <- stats::quantile(data, c(0.25, 0.75), na.rm = na.rm)
  q.theory <- qf(c(0.25, 0.75))
  slope <- diff(q.sample) / diff(q.theory)
  intercept <- q.sample[1] - slope * q.theory[1]
  list(slope = slope, intercept = intercept)
}

StatQQLine <- ggproto("StatQQLine", Stat,
  # http://docs.ggplot2.org/current/vignettes/extending-ggplot2.html
  # https://github.com/hadley/ggplot2/blob/master/R/stat-qq.r
  required_aes = c('sample'),
  compute_group = function(data, scales, distribution = stats::qnorm, dparams = list(),
                           conf.int.level = 0.95, na.rm = FALSE) {
    qf <- function(p) do.call(distribution, c(list(p = p), dparams))
    n <- length(data$sample)
    P <- stats::ppoints(n)
    theoretical <- qf(P)
    qq <- .qq_line(data$sample, qf = qf, na.rm = na.rm)
    line <- qq$intercept + theoretical * qq$slope
    # pointwise confidence interval around the quartile line
    zz <- stats::qnorm(1 - (1 - conf.int.level)/2)
    SE <- (qq$slope/stats::dnorm(theoretical)) * sqrt(P * (1 - P)/n)
    fit.value <- qq$intercept + qq$slope * theoretical
    ymax <- fit.value + zz * SE
    ymin <- fit.value - zz * SE
    data.frame(sample = line, x = theoretical, y = line, ymin = ymin, ymax = ymax)
  }
)

.stat_qqline <- function(mapping = NULL, data = NULL, geom = "line", position = "identity", ...,
                         distribution = stats::qnorm, dparams = list(), na.rm = FALSE,
                         show.legend = NA, inherit.aes = TRUE, conf.int.level = 0.95) {
  layer(stat = StatQQLine, data = data, mapping = mapping, geom = geom, position = position,
        show.legend = show.legend, inherit.aes = inherit.aes,
        params = list(distribution = distribution, dparams = dparams, na.rm = na.rm,
                      conf.int.level = conf.int.level, ...))
}

.stat_qq_confint <- function(mapping = NULL, data = NULL, geom = "ribbon", position = "identity", ...,
                             distribution = stats::qnorm, dparams = list(), na.rm = FALSE,
                             show.legend = NA, inherit.aes = TRUE, conf.int.level = 0.95) {
  layer(stat = StatQQLine, data = data, mapping = mapping, geom = geom, position = position,
        show.legend = show.legend, inherit.aes = inherit.aes,
        params = list(distribution = distribution, dparams = dparams, na.rm = na.rm,
                      conf.int.level = conf.int.level, ...))
}

# file: vannstats/R/qq.R
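# Illustrative sketch (not part of the package): the ribbon .stat_qq_confint()
# draws is a pointwise band around the quartile line, with
# SE = (slope / dnorm(q)) * sqrt(P * (1 - P) / n), as in StatQQLine$compute_group.
x <- sort(rnorm(100))
P <- ppoints(length(x))
q <- qnorm(P)
slope <- diff(quantile(x, c(.25, .75))) / diff(qnorm(c(.25, .75)))
intercept <- quantile(x, .25)[[1]] - slope * qnorm(.25)
fit <- intercept + slope * q
SE <- (slope / dnorm(q)) * sqrt(P * (1 - P) / length(x))
band <- data.frame(lower = fit - 1.96 * SE, upper = fit + 1.96 * SE)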
#' Simplified Residuals Plot
#'
#' This function creates a residual plot (residplot) on a data frame of the variables in an equation.
#' @importFrom formula.tools get.vars
#' @importFrom stats lm predict residuals
#' @importFrom graphics abline plot
#' @param df data frame to read in.
#' @param formula the variables in the regression model, \eqn{Y = X_1 + X_2 + ... + X_m}, written as \code{Y ~ X1 + X2}...
#' @examples
#' data <- mtcars
#'
#' residplot(data, mpg ~ wt + am)
#' @export
residplot <- function(df, formula){
  y <- get.vars(formula, data = df)[1]
  main <- paste0("Residual Plot Predicting '", y, "'")
  reg <- lm(formula, data = df)
  predicted <- predict(reg)
  residuals <- residuals(reg)
  df2 <- data.frame(predicted, residuals)
  plot(df2$predicted, df2$residuals, xlab = "Predicted Y Value", ylab = "Residuals", main = main)
  abline(0, 0) # reference line at zero residual
}

# file: vannstats/R/residplot.R
#' Reverse Coding for Scales
#'
#' This function applies reverse-coding to a variable of interest.
#' @import dplyr
#' @importFrom rlang .data
#' @param df data frame to read in.
#' @param var the variable to be recoded.
#' @param missing a list of values in the variable that are ``missing'' values.
#' @return This function updates the data frame with a new variable with the recoded values.
#' @examples
#' data <- GSS2014
#'
#' revcode(data, amcult)
#' @export
revcode <- function(df, var, missing = c("")){
  df2 <- as.data.frame(df)
  max <- max(eval(substitute({{ var }}), df2), na.rm = TRUE)
  min <- min(eval(substitute({{ var }}), df2), na.rm = TRUE)
  # mirror the scale around its midpoint: rev.var = (max - var) + min
  df2 <- df2 %>% dplyr::mutate(newvar = ((max - {{ var }}) + min))
  if (length(missing) > 1) {
    for (i in {{ missing }}) {
      i <- as.numeric(i)
      df2 <- df2 %>% dplyr::mutate(newvar = replace(.data$newvar, {{ var }} == i, NA))
    }
  }
  names(df2)[names(df2) == "newvar"] <- paste0("rev.", substitute(var))
  # output is an updated data frame, assigned back under the caller's name
  dfname <- deparse(substitute(df))
  pos <- 1
  envir <- as.environment(pos)
  assign(dfname, data.frame(df2), envir = envir)
}

# file: vannstats/R/rev.code.R
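# Illustrative sketch (not part of the package): the recode rule
# rev.var = (max - var) + min mirrors a scale around its midpoint, so on a
# 1-5 Likert item 1 -> 5, 2 -> 4, 3 -> 3, 4 -> 2, 5 -> 1.
likert <- data.frame(item = c(1, 2, 3, 4, 5))
revcode(likert, item)
likert$rev.item # 5 4 3 2 1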
#' Simplified One-Way Repeated Measures Analysis of Variance
#'
#' This function simplifies the call for repeated measures ANOVA (rm.anova) on a given data frame. Also allows calls for sphericity correction (correct), as well as a sphericity test table (sph).
#' @import dplyr rstatix stringr
#' @importFrom stats time
#' @importFrom rlang .data
#' @param df data frame to read in.
#' @param id the main grouping variable by which \code{times} will be analyzed
#' @param times dependent variable values at the time points measured. If data are in wide form (where time points are listed as separate variables for each observation), read in as a list of time point variables (e.g. \code{c("t1", "t2", "t3", ..., "tn")}), where the values represent the scores at the various time points. Read in as list if data are in wide form. If data are in long form, the \code{times} variable is one column (rather than multiple columns) in the data frame, and must be paired with the \code{scores} variable for actual values (listed below).
#' @param scores if data are in long form (where each group has multiple observations), a \code{scores} variable must be read in, which represents the values at the specific time points represented in the \code{times} variable.
#' @param correct logical (default set to \code{T}). Corrects the results in the repeated measures ANOVA table -- adjusts the degrees of freedom (\eqn{df}) by multiplying the sphericity assumed degrees of freedom (\eqn{df}) by the Greenhouse-Geisser Epsilon value. When set to \code{correct = F}, will print results of repeated measures ANOVA with sphericity assumed.
#' @param sph logical (default set to \code{F}). When set to \code{sph = T}, will print a sphericity tests table with Mauchly's W, as well as two Epsilon values (Greenhouse-Geisser and Huynh-Feldt).
#' @param phc logical (default set to \code{F}). When set to \code{phc = T}, will print a post-hoc comparisons table with Bonferroni's adjusted alpha levels (and p-values).
#' @examples
#' data <- howell_aids_wide
#' rm.anova(data, student, c("t1","t2","t3"))
#'
#' data2 <- howell_aids_long
#' rm.anova(data2, student, time, scores=knowledge)
#' @export
rm.anova <- function(df, id, times, scores = NULL, correct = TRUE, sph = FALSE, phc = FALSE){
  df <- as.data.frame(df)
  # flag duplicated ids: none means wide form, any means long form
  df <- df %>% dplyr::mutate(dupe = ifelse(duplicated({{ id }}), 1,
                                           ifelse(!duplicated({{ id }}), 0, NA)))
  if (sum(df$dupe) == 0) { # in wide form; convert to long form
    df <- df %>%
      dplyr::select(-.data$dupe) %>%
      gather(key = "time", value = "score", times) %>% # time is the time point, score is the outcome
      convert_as_factor({{ id }}, time) %>%
      dplyr::select({{ id }}, time, .data$score)
    model <- anova_test(data = df, dv = .data$score, wid = {{ id }}, within = time,
                        effect.size = "pes")
  }
  if (sum(df$dupe) > 0) { # already in long form
    df <- df %>%
      dplyr::select(-.data$dupe) %>%
      convert_as_factor({{ id }}, {{ times }}) %>%
      dplyr::select({{ id }}, {{ times }}, {{ scores }}) %>%
      dplyr::rename(time = {{ times }}, score = {{ scores }})
    model <- anova_test(data = df, dv = .data$score, wid = {{ id }}, within = time,
                        effect.size = "pes")
  }
  # add significance stars, then rebuild the ANOVA table with an error row
  model$ANOVA[[6]] <- ""
  model$ANOVA[[6]][model$ANOVA[[5]] <= .05] <- "*"
  model$ANOVA[[6]][model$ANOVA[[5]] <= .01] <- "**"
  model$ANOVA[[6]][model$ANOVA[[5]] <= .001] <- "***"
  model$ANOVA <- rbind(model$ANOVA, model$ANOVA)
  model$ANOVA[[1]][2] <- "error"
  model$ANOVA[[2]][2] <- model$ANOVA[[3]][2]
  model$ANOVA <- model$ANOVA[-3]
  model$ANOVA[[3]][2] <- ""
  model$ANOVA[[4]][2] <- ""
  model$ANOVA[[5]][2] <- ""
  model$ANOVA[[6]][2] <- ""
  model$ANOVA[[7]] <- paste(model$ANOVA[[4]], model$ANOVA[[5]])
  model$ANOVA <- model$ANOVA[, c(1, 2, 3, 7, 6, 4, 5)]
  model$ANOVA <- model$ANOVA[, -c(6, 7)]
  names(model$ANOVA) <- c("", "df", "F", "p-value", "eta^2")
  # assemble the sphericity table (Mauchly's W plus the two epsilons)
  model$sph <- model[2]
  model$eps <- model[3]
  names(model$sph) <- "Sphericity Test"
  names(model$eps) <- "Corrections"
  model$eps[[1]][5][[1]] <- ""
  model$eps[[1]][5][[1]][model$eps[[1]][4][[1]] <= .05] <- "*"
  model$eps[[1]][5][[1]][model$eps[[1]][4][[1]] <= .01] <- "**"
  model$eps[[1]][5][[1]][model$eps[[1]][4][[1]] <= .001] <- "***"
  model$sph[[1]][5] <- paste(model$sph[[1]][3], model$sph[[1]][4])
  model$sph[[1]] <- model$sph[[1]][, c(1, 2, 5, 3, 4)]
  model$sph[[1]] <- model$sph[[1]][, -c(4, 5)]
  model$sph[[1]][4] <- model$eps[[1]][2]
  model$sph[[1]][5] <- model$eps[[1]][6]
  names(model$sph[[1]]) <- c("", "Mauchly's W", "p-value", "Greenhouse-Geisser", "Huynh-Feldt")
  c_df <- str_split(model$eps[[1]][3][[1]], ", ")
  note <- "---\nSignif. codes: '***' 0.001 '**' 0.01 '*' 0.05\nNote: Model with Sphericity Assumed"
  if (correct == TRUE) {
    # apply the Greenhouse-Geisser correction when epsilon falls below .700
    if (as.numeric(model$sph[[1]][4][1]) < .700) {
      model$ANOVA[[2]][1] <- as.numeric(c_df[[1]][1][1]) # df1
      model$ANOVA[[2]][2] <- as.numeric(c_df[[1]][2][1]) # df2
      model$eps[[1]][10] <- paste(model$eps[[1]][4], model$eps[[1]][5])
      model$ANOVA[[4]][1] <- model$eps[[1]][10][[1]]
      model$eps <- model$eps[[1]][, -10]
      note <- "---\nSignif. codes: '***' 0.001 '**' 0.01 '*' 0.05\nNote: Model with Greenhouse-Geisser Adjusted df"
    }
  }
  if (sph == TRUE) {
    cat("\nSphericity Tests\n\n")
    print(model$sph[[1]], row.names = FALSE)
    cat("\n")
  }
  cat("\nRepeated Measures ANOVA\n\n")
  print(model$ANOVA, row.names = FALSE)
  cat(note)
  cat("\n")
  if (phc == TRUE) {
    cat("\nPost-Hoc Comparisons\n\n")
    pwc <- df %>% pairwise_t_test(score ~ time, paired = TRUE, p.adjust.method = "bonferroni")
    pwc <- as.data.frame(pwc)
    pwc[[10]] <- ""
    pwc[[10]][pwc[[9]] <= .05] <- "*"
    pwc[[10]][pwc[[9]] <= .01] <- "**"
    pwc[[10]][pwc[[9]] <= .001] <- "***"
    pwc[[11]] <- paste0(pwc[[9]], " ", pwc[[10]])
    pwc <- pwc[, c(1, 2, 3, 4, 5, 6, 7, 11, 8, 9, 10)]
    pwc <- pwc[, -c(1, 9, 10, 11)]
    names(pwc) <- c("Group 1", "Group 2", "N (Group 1)", "N (Group 2)", "t", "df", "p-value")
    print(pwc, row.names = FALSE)
    pwc_note <- "---\nSignif. codes: '***' 0.001 '**' 0.01 '*' 0.05\nNote: p-values adjusted based on Bonferroni's alpha"
    cat(pwc_note)
    cat("\n")
  }
}

# file: vannstats/R/rm.anova.R
#' Simplified Scatterplot
#'
#' This function plots a scatterplot (scatter) on a given data frame, and adds a fit-line to the data.
#' @importFrom graphics abline plot text
#' @importFrom stats cor.test lm
#' @param df data frame to read in.
#' @param var1 the dependent/outcome variable, \eqn{Y}.
#' @param var2 the independent/predictor variable, \eqn{X}.
#' @param lab logical (default set to \code{FALSE}). When set to \code{lab = TRUE}, will add Pearson's correlation coefficient (\eqn{r}) value to the plot.
#' @examples
#' data <- mtcars
#'
#' scatter(data,mpg,wt)
#' @export
scatter <- function(df, var1, var2, lab = FALSE){
  main <- paste0("Scatterplot of '", deparse(substitute(var1)), "' and '",
                 deparse(substitute(var2)), "'")
  labx <- deparse(substitute(var2))
  laby <- deparse(substitute(var1))
  yvals <- eval(substitute(var1), df)
  xvals <- eval(substitute(var2), df)
  # place the optional r label at 80% of each axis range
  ycoord <- min(yvals, na.rm = TRUE) + (max(yvals, na.rm = TRUE) - min(yvals, na.rm = TRUE)) * .80
  xcoord <- min(xvals, na.rm = TRUE) + (max(xvals, na.rm = TRUE) - min(xvals, na.rm = TRUE)) * .80
  model <- cor.test(yvals, xvals)
  r_val <- model$estimate[[1]]
  r_val_round <- round(r_val, 4)
  r_text <- "\u0072" # italic r
  r_text2 <- paste0(r_text, " = ", r_val_round)
  plot(xvals, yvals, main = main, xlab = labx, ylab = laby)
  abline(lm(yvals ~ xvals), col = "Blue")
  if (lab == TRUE) {
    text(xcoord, ycoord, r_text2, cex = 1.35, col = "red")
  }
}

# file: vannstats/R/scatter.R
#' Simplified STATA Predictive Margins
#'
#' This function returns a data frame with interactive margins and standard errors similar to those returned in the STATA margins call. The function can also return a margins plot.
#'
#' @import ggplot2 purrr dplyr
#' @importFrom rlang .data
#' @importFrom stats vcov lm sd
#' @importFrom stringr str_extract
#' @importFrom plm plm fixef
#' @param mod a plm model object.
#' @param plot logical (default set to \code{FALSE}). When set to \code{plot = TRUE}, will return a margins plot of the interaction terms.
#' @param error the number of standard deviation units at which the margins will be calculated (default set to 2).
#' @return This function creates a data frame of predictive margins for the dependent variable, given values of the variables in the interaction.
#' @examples
#' library(plm)
#' data <- UCR2015
#' summary(mod <- plm(dui_pct ~ pct_poverty*pct_unemp +
#'   income_inequality, data=data, index=c("state","county"),
#'   model="within"))
#'
#' stata.plm.margins(mod)
#' @export
stata.plm.margins <- function(mod, plot = FALSE, error = NULL){
  if (is.null(error)) {
    error <- 2
  }
  # recover the outcome and the two interacted variables from the model call
  formula <- as.list(mod$call)[2][[1]]
  df <- eval(mod$call$data)
  df <- as.data.frame(df)
  formula_s <- toString(formula)
  formula_s <- gsub("~", "", formula_s)
  y <- str_extract(formula_s, "(?<=\\, )(\\w+)")   # look after first comma
  formula_s <- gsub(",", "", formula_s)
  x1 <- str_extract(formula_s, "(\\w+)(?= \\*)")   # look before interaction asterisk
  x2 <- str_extract(formula_s, "(?<=\\* )(\\w+)")  # look after interaction asterisk
  factorvar <- as.character(as.list(mod$call)[5][[1]][2])
  factorvar2 <- paste0(" + factor(", factorvar, ")")
  formula2 <- as.character(formula)
  formula2 <- paste0(formula2[2], " ", formula2[1], " ", formula2[3], factorvar2)
  # refit as lm() with fixed-effect dummies to recover a full vcov matrix
  mod2 <- lm(formula2, eval(mod$call$data))
  lm_names <- names(mod2$coefficients)
  plm_names <- names(mod$coefficients)
  diff_names <- setdiff(lm_names, plm_names)
  diff_names_no_intercept <- diff_names[-c(1)]
  # collapse the dummy rows/columns of the lm vcov into an intercept row/column
  vcov2 <- vcov(mod2)
  vcov2_intercept_rows <- vcov2[row.names(vcov2) %in% diff_names, ]
  int_data_rows <- data.frame()
  for (i in 1:ncol(vcov2_intercept_rows)) {
    othermeans <- mean(vcov2_intercept_rows[-c(1), i])
    sum1 <- sum(c(vcov2_intercept_rows[1, i], othermeans))
    int_data_rows[1, i] <- sum1
  }
  colnames(int_data_rows) <- colnames(vcov2_intercept_rows)
  row.names(int_data_rows) <- row.names(vcov2_intercept_rows)[1]
  int_data_rows <- int_data_rows[, !(colnames(int_data_rows) %in% diff_names_no_intercept), drop = FALSE]
  vcov2_intercept_cols <- vcov2[, colnames(vcov2) %in% diff_names]
  int_data_cols <- data.frame()
  for (i in 1:nrow(vcov2_intercept_cols)) {
    othermeans2 <- mean(vcov2_intercept_cols[i, 2:ncol(vcov2_intercept_cols)])
    sum2 <- sum(c(vcov2_intercept_cols[i, 1], othermeans2))
    int_data_cols[i, 1] <- sum2
  }
  row.names(int_data_cols) <- row.names(vcov2_intercept_cols)
  colnames(int_data_cols) <- colnames(vcov2_intercept_cols)[1]
  int_data_cols <- int_data_cols[!(row.names(int_data_cols) %in% diff_names_no_intercept), , drop = FALSE]
  vcov_new <- vcov2
  vcov_new <- vcov_new[!(row.names(vcov_new) %in% diff_names_no_intercept), ]
  vcov_new <- vcov_new[, !(colnames(vcov_new) %in% diff_names_no_intercept)]
  vcov_new[1, ] <- as.matrix(int_data_rows)
  vcov_new[, 1] <- as.matrix(int_data_cols)
  var1 <- df[, x1]
  var2 <- df[, x2]
  # per-unit fixed effects and constants for the error terms
  constants <- fixef(mod)
  constants <- as.data.frame(constants)
  constants$factorvar <- rownames(constants)
  colnames(constants) <- c("constants", factorvar)
  u_id <- fixef(mod, type = "dmean") # fixed effects for each id
  u_id <- as.data.frame(u_id)
  u_id$u_id <- (-1 * u_id$u_id)
  u_id$factorvar <- rownames(u_id)
  colnames(u_id) <- c("u_id", factorvar)
  error_terms <- df %>% dplyr::select(factorvar)
  if (is.character(constants[, 2])) {
    error_terms[, 1] <- as.character(error_terms[, 1])
  }
  error_terms <- error_terms %>% dplyr::left_join(u_id, by = factorvar)
  error_terms <- error_terms %>% dplyr::left_join(constants, by = factorvar)
  mean_u_id <- mean(error_terms$u_id, na.rm = TRUE)
  mean_constants <- mean(error_terms$constants, na.rm = TRUE)
  # prediction grid: each interacted variable at -error, 0, and +error sd units
  pred_data <- data.frame(
    var1_at = c(rep(-error * (sd(var1)), 3), rep(0, 3), rep((error * (sd(var1))), 3)),
    var2_at = rep(c((-error * (sd(var2))), 0, (error * (sd(var2)))), 3)
  )
  names(pred_data) <- c("x1", "x2")
  pred_data$id <- 1
  y_hat_mod <- as.numeric(mod$model[[1]] - mod$residuals)
  l1 <- as.list(mod$coefficients)
  vars_in_model <- names(mod$coefficients)                 # variables in the model
  vars_in_model <- vars_in_model[-c(NROW(vars_in_model))]  # drop the interaction term
  df2 <- df %>%
    dplyr::select(all_of(vars_in_model)) %>%
    summarise(across(everything(), list(mean)))
  names(df2) <- vars_in_model
  df2$id <- 1
  pred_data <- pred_data %>% left_join(df2, by = "id")
  pred_data <- pred_data[-c(3, 4, 5)]
  pred_data$x1x2 <- (pred_data$x1) * (pred_data$x2)
  calc_y <- data.frame()
  for (i in 1:nrow(pred_data)) {
    for (j in 1:ncol(pred_data)) {
      val <- (l1[j][[1]] * pred_data[i, j]) # multiply each covariate value by its coefficient
      calc_y[i, j] <- val
    }
  }
  calc_y <- calc_y %>% dplyr::rowwise() %>%
    dplyr::mutate(calc_y_no_errors = sum(across(starts_with("V")), na.rm = TRUE))
  calc_y <- calc_y %>% dplyr::rowwise() %>%
    dplyr::mutate(y_hat = sum(.data$calc_y_no_errors, mean_constants, mean_u_id))
  y_hat <- calc_y %>% dplyr::select(y_hat)
  margins_vals <- pred_data %>% dplyr::select(x1, x2)
  margins_plot <- dplyr::bind_cols(margins_vals, y_hat)
  numrows <- nrow(margins_plot)
  v <- vcov_new
  # delta-method standard errors: se = sqrt(diag(J %*% V %*% t(J)))
  jac_intercept <- data.frame(rep(1, numrows))
  names(jac_intercept) <- "(Intercept)"
  j <- as.data.frame(jac_intercept)
  j <- j %>% dplyr::bind_cols(pred_data)
  j <- as.matrix(j)
  delta_mat <- j %*% v %*% t(j)
  se <- as.data.frame(sqrt(diag(delta_mat)))
  names(se) <- "se"
  names(margins_plot) <- c(x1, x2, "y_hat")
  margins_plot <- as.data.frame(margins_plot)
  margins_plot <- margins_plot %>% dplyr::bind_cols(se)
  margins_plot[, 1][margins_plot[, 1] == min(margins_plot[, 1])] <- -1 * error
  margins_plot[, 1][margins_plot[, 1] == 0] <- 0
  margins_plot[, 1][margins_plot[, 1] == max(margins_plot[, 1])] <- 1 * error
  margins_plot[, 1] <- as.numeric(margins_plot[, 1])
  margins_plot[, 2][margins_plot[, 2] == 0] <- "Mean"
  margins_plot[, 2][margins_plot[, 2] == min(margins_plot[, 2])] <- "Low"
  # after the first relabel the column is character, so the only number left is
  # the former max; it is now the minimum of the character sort order
  margins_plot[, 2][margins_plot[, 2] == min(margins_plot[, 2])] <- "High"
  margins_df <- "margins_df"
  pos <- 1
  envir <- as.environment(pos)
  assign(margins_df, margins_plot, envir = envir)
  if (plot == TRUE) {
    ylabel <- paste0("Predicted Outcome\n", y)
    margins_df <- as.data.frame(margins_plot)
    v1 <- margins_df[, x1]
    v2 <- margins_df[, x2]
    plotlimits <- c((margins_df$y_hat + (-1.96 * margins_df$se)),
                    (margins_df$y_hat + (1.96 * margins_df$se)))
    low <- min(plotlimits)
    low <- floor(low/5) * 5
    high <- max(plotlimits)
    high <- ceiling(high/5) * 5
    ggplot2::ggplot(margins_df, aes(x = v1, y = y_hat, shape = v2, linetype = v2)) +
      geom_line() +
      geom_point(size = 2) +
      labs(x = x1, y = ylabel, shape = x2, linetype = x2) +
      scale_linetype_discrete(breaks = c('Low', 'Mean', 'High')) +
      scale_shape_discrete(breaks = c('Low', 'Mean', 'High')) +
      geom_errorbar(aes(ymin = y_hat + (-1.96 * se), ymax = y_hat + (1.96 * se)),
                    data = margins_df, width = 0.2) +
      theme(plot.title = element_text(size = 12)) +
      ylim(low, high) +
      theme(panel.grid.major = element_blank(),
            panel.grid.minor = element_blank(),
            panel.background = element_blank()) +
      theme(panel.grid.major.y = element_line(color = "grey", size = 0.25, linetype = 1)) +
      theme(axis.line.x = element_line(color = "black", size = .75),
            axis.line.y = element_line(color = "black", size = .75),
            plot.title = element_text(hjust = 0.5))
  }
}

# file: vannstats/R/stata.plm.margins.R
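# Illustrative sketch (not part of the package): the standard errors above are
# delta-method SEs, se = sqrt(diag(J %*% V %*% t(J))), where each row of J holds
# one prediction's covariate values and V is the coefficient covariance matrix.
# V and J below are made-up values for a two-coefficient model.
V <- matrix(c(0.25, 0.02, 0.02, 0.09), nrow = 2) # hypothetical vcov(intercept, x)
J <- cbind(1, c(-2, 0, 2))                       # predictions at x = -2, 0, 2
se <- sqrt(diag(J %*% V %*% t(J)))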
#' Simplified Z Scores
#'
#' This function calculates the Z score for a given value, relative to the mean and standard deviation for a variable in a given data frame.
#' @importFrom stats sd pnorm
#' @param df data frame to read in.
#' @param var1 the variable of interest for which the mean and standard deviations will be calculated.
#' @param raw the desired raw score to compare with the mean and standard deviation of \code{var1}.
#' @param tails to report a p-value (level of significance) for the reported Z score, user must select a desired number of tails (either \code{tails = 1} for a one-tailed test, or \code{tails = 2} for a two-tailed test). Default set to \code{NULL}, and does not report a p-value.
#' @return This function returns the raw score, mean, and z-score for a given raw score.
#' @examples
#' data <- mtcars
#'
#' z.calc(data,mpg,12)
#' @export
z.calc <- function(df, var1, raw, tails = NULL){
  calls <- length(match.call()) - 3
  if (calls == 0) {
    # df was passed as a bare vector; recover the raw score as typed,
    # stripping any call syntax from the argument text
    newx <- gsub("\\s*\\([^\\)]+\\)", "", as.character(match.call()[3]))
    newx <- as.numeric(newx)
    xbar <- mean(df, na.rm = TRUE)
    sd <- sd(df, na.rm = TRUE)
    z <- (newx - xbar)/(sd)
    if (is.null(tails)) {
      out <- c(`Raw Score` = newx, `Mean` = xbar, `Z Score` = z)
    } else {
      if (tails == 1) {
        pval <- pnorm(-abs(z)) # one-tailed p from the standard normal
        out <- c(`Raw Score` = newx, `Mean` = xbar, `Z Score` = z, `p-value (1-tailed)` = pval)
      }
      if (tails == 2) {
        pval <- 2 * pnorm(-abs(z)) # two-tailed p from the standard normal
        out <- c(`Raw Score` = newx, `Mean` = xbar, `Z Score` = z, `p-value (2-tailed)` = pval)
      }
      if (!((tails == 1) || (tails == 2))) {
        out <- c(`Raw Score` = newx, `Mean` = xbar, `Z Score` = z)
      }
    }
  } else {
    newx <- gsub("\\s*\\([^\\)]+\\)", "", as.character(match.call()[4]))
    newx <- as.numeric(newx)
    xbar <- mean(eval(substitute(var1), df), na.rm = TRUE)
    sd <- sd(eval(substitute(var1), df), na.rm = TRUE)
    z <- (newx - xbar)/(sd)
    if (is.null(tails)) {
      out <- c(`Raw Score` = newx, `Mean` = xbar, `Z Score` = z)
    } else {
      if (tails == 1) {
        pval <- pnorm(-abs(z)) # one-tailed p from the standard normal
        out <- c(`Raw Score` = newx, `Mean` = xbar, `Z Score` = z, `p-value (1-tailed)` = pval)
      }
      if (tails == 2) {
        pval <- 2 * pnorm(-abs(z)) # two-tailed p from the standard normal
        out <- c(`Raw Score` = newx, `Mean` = xbar, `Z Score` = z, `p-value (2-tailed)` = pval)
      }
      if (!((tails == 1) || (tails == 2))) {
        out <- c(`Raw Score` = newx, `Mean` = xbar, `Z Score` = z)
      }
    }
  }
  return(out)
}

# file: vannstats/R/z.calc.R
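# Illustrative sketch (not part of the package): the statistic z.calc() reports
# is z = (raw - mean) / sd. For mtcars mpg and a raw score of 12:
z <- (12 - mean(mtcars$mpg)) / sd(mtcars$mpg) # ~ -1.34
2 * pnorm(-abs(z))                            # two-tailed p, as with tails = 2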
#' Default parameters of config.
#'
#' A dataframe containing default parameters.
#'
#' @format A data frame with 12 variables:
#' \describe{
#'   \item{\code{threshold}}{Threshold for allele frequency}
#'   \item{\code{skew}}{Skewness for allele frequency}
#'   \item{\code{lower}}{Lower bound for allele frequency region}
#'   \item{\code{upper}}{Upper bound for allele frequency region}
#'   \item{\code{ldpthred}}{Threshold to determine low depth}
#'   \item{\code{hom_mle}}{Hom MLE of p in Beta-Binomial model}
#'   \item{\code{het_mle}}{Het MLE of p in Beta-Binomial model}
#'   \item{\code{Hom_thred}}{Threshold between hom and high}
#'   \item{\code{High_thred}}{Threshold between high and het}
#'   \item{\code{Het_thred}}{Threshold between het and low}
#'   \item{\code{hom_rho}}{Hom MLE of rho in Beta-Binomial model}
#'   \item{\code{het_rho}}{Het MLE of rho in Beta-Binomial model}
#' }
#' @source Created by Tao Jiang
"config_df"

#' Default svm classification model.
#'
#' An svm object containing default svm classification model.
#'
#' @format An svm object:
#' @source Created by Tao Jiang
"svm_class_model"

#' Default svm regression model.
#'
#' An svm object containing default svm regression model.
#'
#' @format An svm object:
#' @source Created by Tao Jiang
"svm_regression_model"

#' VCF example file.
#'
#' An example containing a list of 4 data frames.
#'
#' @format A list of 4 data frames:
#' @source Created by Tao Jiang
"vcf_example"

# file: vanquish/R/data.R
#' @title DEtection of Frequency CONtamination
#' @description Detects whether a sample is contaminated by another sample of its same species. The input file should be in vcf format.
#'
#' @param file VCF input object
#' @param rmCNV Remove CNV regions, default is FALSE
#' @param cnvobj CNV object, default is NULL
#' @param config config information of parameters. A default set (\code{config_df}) is included with the package.
#' @param class_model An SVM classification model
#' @param regression_model An SVM regression model
#'
#' @return A list containing (1) stat: a data frame with all statistics for contamination estimation; (2) result: contamination estimation (Class = 0, pure; Class = 1, contaminated)
#'
#' @export
#' @import e1071
#' @import utils
#'
#' @examples
#' data(vcf_example)
#' result <- defcon(file = vcf_example)
defcon <- function(file, rmCNV = FALSE, cnvobj = NULL, config = NULL,
                   class_model = NULL, regression_model = NULL) {
  config_df <- svm_class_model <- svm_regression_model <- NULL
  data("config_df", envir = environment())
  data("svm_class_model", envir = environment())
  data("svm_regression_model", envir = environment())
  if (is.null(config)) {
    config <- config_df # config_df contains the default parameters
  }
  vcf <- file$VCF
  vcf <- update_vcf(rmCNV = rmCNV, vcf = vcf, cnvobj = cnvobj,
                    config$threshold, config$skew, config$lower, config$upper)
  Name <- file$file_sample_name[1]
  # feature set: zygosity ratio, AF variances, AF-region rates, and average log-likelihood
  LOH <- nrow(vcf[vcf$ZG == "het", ])/nrow(vcf[vcf$ZG == "hom", ])
  HomVar <- getVar(df = vcf, state = "hom", config$hom_mle, config$het_mle)
  HetVar <- getVar(df = vcf, state = "het", config$hom_mle, config$het_mle)
  HomRate <- nrow(vcf[(vcf$AF > config$Hom_thred), ]) / nrow(vcf)
  HighRate <- nrow(vcf[((vcf$AF > config$High_thred) & (vcf$AF < config$Hom_thred)), ]) / nrow(vcf)
  HetRate <- nrow(vcf[((vcf$AF > config$Het_thred) & (vcf$AF < config$High_thred)), ]) / nrow(vcf)
  LowRate <- nrow(vcf[(vcf$AF < config$Het_thred), ]) / nrow(vcf)
  AvgLL <- getAvgLL(vcf, config$hom_mle, config$het_mle, config$hom_rho, config$het_rho)
  res_df <- data.frame(Name, LOH, HomVar, HetVar, HomRate, HighRate, HetRate, LowRate, AvgLL)
  if (is.null(class_model)) {
    Class <- predict(object = svm_class_model, newdata = res_df) # default classification model
  } else {
    Class <- predict(object = class_model, newdata = res_df)
  }
  if (is.null(regression_model)) {
    Regression <- predict(object = svm_regression_model, newdata = res_df) # default regression model
  } else {
    Regression <- predict(object = regression_model, newdata = res_df)
  }
  res <- data.frame(Name, Class, Regression)
  res_list <- list("stat" = res_df, "result" = res)
  return(res_list)
}

# file: vanquish/R/defcon.R
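# Illustrative sketch (not part of the package): reading defcon() output with the
# bundled example data and default models.
library(vanquish)
data(vcf_example)
res <- defcon(file = vcf_example)
res$stat              # one row of features: LOH, variances, AF-region rates, AvgLL
res$result$Class      # 0 = pure, 1 = contaminated
res$result$Regression # continuous contamination score from the regression SVM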
#' Second alternative allele percentage
#' @param f Input raw file
#' @return Percent of the second alternative allele
#' @export
getAlt2 <- function(f) {
  tmp <- f[grep(",", f$Alt), ]
  percentage <- nrow(tmp)/nrow(f)
  return(percentage)
}

#' Annotation rate
#' @param f Input raw file
#' @return Percentage of annotation locus
getAnnoRate <- function(f) {
  tmp <- f[f$dbID != ".", ]
  percentage <- nrow(tmp)/nrow(f)
  return(percentage)
}

#' Low depth percentage
#' @param f Input raw file
#' @param ldpthred Threshold to determine low depth, default is 20
#' @return Percentage of low depth
getLowDepth <- function(f, ldpthred) {
  tmp <- f[which(f$DP < ldpthred), ]
  percentage <- nrow(tmp)/nrow(f)
  return(percentage)
}

#' SNV percentage
#' @param df Input raw file
#' @return Percentage of SNV
getSNVRate <- function(df) {
  tmp_df <- df[nchar(df$Ref) == 1 & nchar(df$Alt) == 1, ]
  return(nrow(tmp_df)/nrow(df))
}

#' Calculate zygosity variable
#' @param df Input modified file
#' @param state Zygosity state
#' @param hom_mle MLE in hom model
#' @param het_mle MLE in het model
#' @return Zygosity variable
getVar <- function(df, state, hom_mle, het_mle) {
  df_sub <- df[df$ZG == state, ]
  if (state == "hom") {
    expected <- hom_mle
  } else if (state == "het") {
    expected <- het_mle
  }
  df_sub <- df_sub[(abs(df_sub$AF - expected) < 0.49), ]
  # depth-weighted variance of allele frequency around the expected value
  tmp <- sum(((df_sub$AF - expected)^2) * df_sub$DP) / sum(df_sub$DP)
  return(tmp)
}

#' Calculate average log-likelihood
#' @param df Input modified file
#' @param hom_mle Hom MLE of p in Beta-Binomial model, default is 0.9981416 from NA12878_1_L5
#' @param het_mle Het MLE of p in Beta-Binomial model, default is 0.4737897 from NA12878_1_L5
#' @param hom_rho Hom MLE of rho in Beta-Binomial model, default is 0.04570275 from NA12878_1_L5
#' @param het_rho Het MLE of rho in Beta-Binomial model, default is 0.02224098 from NA12878_1_L5
#' @importFrom VGAM dbetabinom
#' @return meanLL
getAvgLL <- function(df, hom_mle, het_mle, hom_rho, het_rho) {
  df_hom <- df[df$ZG == "hom", ]
  df_het <- df[df$ZG == "het", ]
  df_hom$LL <- dbetabinom(x = df_hom$AC, size = df_hom$DP, prob = hom_mle, rho = hom_rho, log = TRUE)
  df_het$LL <- dbetabinom(x = df_het$AC, size = df_het$DP, prob = het_mle, rho = het_rho, log = TRUE)
  df <- rbind(df_hom, df_het)
  meanLL <- mean(df$LL)
  return(meanLL)
}

#' @title Feature Generation for Contamination Detection Model
#' @description Generates features from each input VCF object for training the contamination detection model.
#'
#' @param file VCF input object
#' @param hom_p The initial value for p in Homozygous Beta-Binomial model, default is 0.999
#' @param het_p The initial value for p in Heterozygous Beta-Binomial model, default is 0.5
#' @param hom_rho The initial value for rho in Homozygous Beta-Binomial model, default is 0.005
#' @param het_rho The initial value for rho in Heterozygous Beta-Binomial model, default is 0.1
#' @param mixture A vector of whether the sample is contaminated: 0 for pure; 1 for contaminated
#' @param homcut Cutoff allele frequency value between hom and high, default is 0.99
#' @param highcut Cutoff allele frequency value between high and het, default is 0.7
#' @param hetcut Cutoff allele frequency value between het and low, default is 0.3
#'
#' @return A data frame with all features for training model of contamination detection
#' @export
generate_feature <- function(file, hom_p = 0.999, het_p = 0.5, hom_rho = 0.005, het_rho = 0.1,
                             mixture, homcut = 0.99, highcut = 0.7, hetcut = 0.3) {
  Name <- file$file_sample_name[1]
  if ((mixture != 0) & (mixture != 1)) {
    stop("Use 0 and 1 for mixture!")
  }
  Mixture <- mixture
  vcf <- file$VCF
  LOH <- nrow(vcf[vcf$ZG == "het", ])/nrow(vcf[vcf$ZG == "hom", ])
  HomVar <- getVar(df = vcf, state = "hom", hom_mle = hom_p, het_mle = het_p)
  HetVar <- getVar(df = vcf, state = "het", hom_mle = hom_p, het_mle = het_p)
  HomRate <- nrow(vcf[(vcf$AF > homcut), ]) / nrow(vcf)
  HighRate <- nrow(vcf[((vcf$AF > highcut) & (vcf$AF < homcut)), ]) / nrow(vcf)
  HetRate <- nrow(vcf[((vcf$AF > hetcut) & (vcf$AF < highcut)), ]) / nrow(vcf)
  LowRate <- nrow(vcf[(vcf$AF < hetcut), ]) / nrow(vcf)
  AvgLL <- getAvgLL(df = vcf, hom_mle = hom_p, het_mle = het_p, hom_rho = hom_rho, het_rho = het_rho)
  res_df <- data.frame(Name, Mixture, LOH, HomVar, HetVar, HomRate, HighRate, HetRate, LowRate, AvgLL)
  return(res_df)
}

# file: vanquish/R/generate_feature.R
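# Illustrative sketch (not part of the package): assembling a training set for
# train_ct(); `pure_vcfs` and `mixed_vcfs` are hypothetical lists of read_vcf()
# results with known contamination status.
# pure_feats  <- lapply(pure_vcfs,  generate_feature, mixture = 0)
# mixed_feats <- lapply(mixed_vcfs, generate_feature, mixture = 1)
# feature_list <- c(pure_feats, mixed_feats)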
#' Check input filename
#' @param fn Exact full file name of input file, including directory
#' @param extension Expected input file extension: vcf & txt
#'
#' @return Valid directory
locateFile <- function(fn, extension) {
  if (!file.exists(fn)) {
    stop('Input file does NOT exist!')
  }
  if (file.access(names = fn, mode = 4) != 0) {
    stop('Input file does not have read permission!')
  }
  ext <- strsplit(x = fn, split = '\\.')[[1]]
  if ((ext[length(ext)] != extension) & (ext[length(ext) - 1] != extension)) {
    stop('Invalid input file extension!')
  }
  return(fn)
}

#' Read in input vcf data in GATK format for Contamination detection
#' @param dr A valid input object
#' @param dbOnly Use dbSNP as filter, default is FALSE, passed from read_vcf
#' @param depCut Use a threshold for min depth, default is FALSE
#' @param thred Threshold for min depth, default is 20
#' @param content Column names in VCF files
#' @param extnum The column number or numbers to be extracted from vcf, default is 10; 0 for not extracting any columns
#' @param keepall Keep unextracted column in output, default is TRUE, passed from read_vcf
#'
#' @return Dataframe from VCF file
readGATK <- function(dr, dbOnly, depCut, thred, content, extnum, keepall) {
  vcf <- read.table(file = dr, header = FALSE, sep = "\t", quote = "", as.is = TRUE, skip = 0)
  if (dbOnly) {
    tmp_f <- vcf[vcf$V3 != ".", ] # filter unannotated variants
    message(paste0(100 * nrow(tmp_f)/nrow(vcf), "% of variants are annotated."))
    if (nrow(tmp_f) == 0) {
      message("No variants are annotated, so dbOnly function is off!")
      tmp_f <- vcf
    }
    vcf <- tmp_f
  }
  names(vcf) <- content
  infoname <- lapply(X = vcf$FORMAT[1], FUN = function(x) {
    y <- strsplit(x = x, split = ":")[[1]]
  })
  if (extnum != 0) {
    # split the extracted sample column into one column per FORMAT field
    tmp <- lapply(X = vcf[, extnum], FUN = function(x) {
      y <- strsplit(x = x, split = ":")[[1]]
    })
    for (i in 1:length(infoname[[1]])) {
      name <- infoname[[1]][i]
      vcf[, name] <- unlist(lapply(X = tmp, FUN = function(x) {x[i]}))
    }
    vcf$DP <- as.numeric(vcf$DP)
    if (any(grep(pattern = ",", x = vcf$ALT))) print("This sample contains more than one alternative allele.")
    vcf$AC <- as.numeric(lapply(X = vcf$AD, FUN = function(x) {
      y <- strsplit(x = x, split = ",")[[1]][2]})) # allele depth
    vcf$RC <- as.numeric(lapply(X = vcf$AD, FUN = function(x) {
      y <- strsplit(x = x, split = ",")[[1]][1]})) # reference depth
    vcf$AF <- as.numeric(round(vcf$AC/as.numeric(vcf$DP), 4))
    vcf$ZG <- "Complex" # zygosity
    vcf$ZG[grep(pattern = "^1[/|]1", x = vcf[, extnum])] <- "hom"
    vcf$ZG[grep(pattern = "^0[/|]1", x = vcf[, extnum])] <- "het"
    if (depCut) {
      vcf <- vcf[vcf$DP > thred, ]
    }
  }
  if (!keepall) {
    remove_list <- c(10:length(content))[!(c(10:length(content)) %in% extnum)]
    vcf <- vcf[, -remove_list]
  }
  return(vcf)
}

#' Read in input vcf data in VarPROWL format
#' @param dr A valid input object
#' @param dbOnly Use dbSNP as filter, default is FALSE, passed from read_vcf
#' @param depCut Use a threshold for min depth, default is FALSE
#' @param thred Threshold for min depth, default is 20
#' @param content Column names in VCF files
#' @param extnum The column number or numbers to be extracted from vcf, default is 10; 0 for not extracting any columns
#' @param keepall Keep unextracted column in output, default is TRUE, passed from read_vcf
#'
#' @return vcf Dataframe from VCF file
readVarPROWL <- function(dr, dbOnly, depCut, thred, content, extnum, keepall) {
  vcf <- read.table(file = dr, header = FALSE, sep = "\t", quote = "", as.is = TRUE, skip = 0)
  if (dbOnly) {
    tmp_f <- vcf[vcf$V3 != ".", ] # filter unannotated variants
    message(paste0(100 * nrow(tmp_f)/nrow(vcf), "% of variants are annotated."))
    if (nrow(tmp_f) == 0) {
      message("No variants are annotated, so dbOnly function is off!")
      tmp_f <- vcf
    }
    vcf <- tmp_f
  }
  vcf$V1 <- gsub("chr", "", vcf$V1) # to get a numeric chromosome column
  vcf$V1 <- gsub("Chr", "", vcf$V1) # the same reason
  names(vcf) <- content
  infoname <- lapply(X = vcf$FORMAT[1], FUN = function(x) {
    y <- strsplit(x = x, split = ":")[[1]]
  })
  if (extnum != 0) {
    tmp <- lapply(X = vcf[, extnum], FUN = function(x) {
      y <- strsplit(x = x, split = ":")[[1]]
    })
    for (i in 1:length(infoname[[1]])) {
      name <- infoname[[1]][i]
      vcf[, name] <- unlist(lapply(X = tmp, FUN = function(x) {x[i]}))
    }
    vcf$GT <- gsub("|", "/", vcf$GT, fixed = TRUE) # genotype at that locus
    vcf$DP <- as.numeric(vcf$DP)
    if (any(grep(pattern = ",", x = vcf$ALT))) print("This sample contains more than one alternative allele.")
    vcf$AF <- as.numeric(unlist(lapply(X = vcf$AF, FUN = function(x) {
      y <- strsplit(x = x, split = ",")[[1]][1]
    })))
    vcf$AC <- as.numeric(unlist(lapply(X = vcf$AD, FUN = function(x) {
      y <- strsplit(x = x, split = ",")[[1]][1]
    }))) # alternate depth
    vcf$RC <- vcf$DP - vcf$AC # reference depth
    vcf$ZG <- "Complex" # zygosity
    vcf$ZG[grep(pattern = "^1[/|]1", x = vcf[, extnum])] <- "hom"
    vcf$ZG[grep(pattern = "^0[/|]1", x = vcf[, extnum])] <- "het"
    if (depCut) {
      vcf <- vcf[vcf$DP > thred, ]
    }
  }
  if (!keepall) {
    remove_list <- c(10:length(content))[!(c(10:length(content)) %in% extnum)]
    vcf <- vcf[, -remove_list]
  }
  return(vcf)
}

#' Read in input vcf data in VarDict format for Contamination detection
#' @param dr A valid input object
#' @param dbOnly Use dbSNP as filter, default is FALSE, passed from read_vcf
#' @param depCut Use a threshold for min depth, default is FALSE
#' @param thred Threshold for min depth, default is 20
#' @param content Column names in VCF files
#' @param extnum The column number to be extracted from vcf, default is 10; 0 for not extracting any column
#' @param keepall Keep unextracted column in output, default is TRUE, passed from read_vcf
#'
#' @return Dataframe from VCF file
readVarDict <- function(dr, dbOnly, depCut, thred, content, extnum, keepall) {
  vcf <- read.table(file = dr, header = FALSE, sep = "\t", quote = "", as.is = TRUE, skip = 0)
  if (dbOnly) {
    tmp_f <- vcf[vcf$V3 != ".", ] # filter unannotated variants
    message(paste0(100 * nrow(tmp_f)/nrow(vcf), "% of variants are annotated."))
    if (nrow(tmp_f) == 0) {
      message("No variants are annotated, so dbOnly function is off!")
      tmp_f <- vcf
    }
    vcf <- tmp_f
  }
  names(vcf) <- content
  infoname <- lapply(X = vcf$FORMAT[1], FUN = function(x) {
    y <- strsplit(x = x, split = ":")[[1]]
  })
  if (extnum != 0) {
    tmp <- lapply(X = vcf[, extnum], FUN = function(x) {
      y <- strsplit(x = x, split = ":")[[1]]
    })
    for (i in 1:length(infoname[[1]])) {
      name <- infoname[[1]][i]
      vcf[, name] <- unlist(lapply(X = tmp, FUN = function(x) {x[i]}))
    }
    vcf$DP <- as.numeric(vcf$DP)
    if (any(grep(pattern = ",", x = vcf$ALT))) print("This sample contains more than one alternative allele.")
    vcf$AC <- as.numeric(vcf$VD) # VarDict reports the variant depth directly
    vcf$AF <- as.numeric(vcf$AF)
    vcf$RC <- as.numeric(unlist(lapply(X = vcf$AD, FUN = function(x) {
      y <- strsplit(x = x, split = ",")[[1]][1]
    })))
    vcf$ZG <- "Complex" # zygosity
    vcf$ZG[grep(pattern = "^1[/|]1", x = vcf[, extnum])] <- "hom"
    vcf$ZG[grep(pattern = "^0[/|]1", x = vcf[, extnum])] <- "het"
    if (depCut) {
      vcf <- vcf[vcf$DP > thred, ]
    }
  }
  if (!keepall) {
    remove_list <- c(10:length(content))[!(c(10:length(content)) %in% extnum)]
    vcf <- vcf[, -remove_list]
  }
  return(vcf)
}

#' Read in input vcf data in strelka2 format for Contamination detection
#' @param dr A valid input object
#' @param dbOnly Use dbSNP as filter, default is FALSE, passed from read_vcf
#' @param depCut Use a threshold for min depth, default is FALSE
#' @param thred Threshold for min depth, default is 20
#' @param content Column names in VCF files
#' @param extnum The column number or numbers to be extracted from vcf, default is 10; 0 for not extracting any columns
#' @param keepall Keep unextracted column in output, default is TRUE, passed from read_vcf
#'
#' @return Dataframe from VCF file
readStrelka <- function(dr, dbOnly, depCut, thred, content, extnum, keepall) {
  vcf <- read.table(file = dr, header = FALSE, sep = "\t", quote = "", as.is = TRUE, skip = 0)
  if (dbOnly) {
    tmp_f <- vcf[vcf$V3 != ".", ] # filter unannotated variants
    message(paste0(100 * nrow(tmp_f)/nrow(vcf), "% of variants are annotated."))
    if (nrow(tmp_f) == 0) {
      message("No variants are annotated, so dbOnly function is off!")
      tmp_f <- vcf
    }
    vcf <- tmp_f
  }
  names(vcf) <- content
  infoname <- lapply(X = vcf$FORMAT[1], FUN = function(x) {
    y <- strsplit(x = x, split = ":")[[1]]
  })
  if (extnum != 0) {
    tmp <- lapply(X = vcf[, extnum], FUN = function(x) {
      y <- strsplit(x = x, split = ":")[[1]]
    })
    for (i in 1:length(infoname[[1]])) {
      name <- infoname[[1]][i]
      vcf[, name] <- unlist(lapply(X = tmp, FUN = function(x) {x[i]}))
    }
    vcf$DP <- as.numeric(vcf$DP)
    if (any(grep(pattern = ",", x = vcf$ALT))) print("This sample contains more than one alternative allele.")
    vcf$RC <- as.numeric(unlist(lapply(X = vcf$AD, FUN = function(x) {
      y <- strsplit(x = x, split = ",")[[1]][1]
    })))
    vcf$AC <- as.numeric(unlist(lapply(X = vcf$AD, FUN = function(x) {
      y <- strsplit(x = x, split = ",")[[1]][2]
    })))
    vcf$AF <- (vcf$AC) * (1/vcf$DP)
    vcf$ZG <- "Complex" # zygosity
    vcf$ZG[grep(pattern = "^1[/|]1", x = vcf[, extnum])] <- "hom"
    vcf$ZG[grep(pattern = "^0[/|]1", x = vcf[, extnum])] <- "het"
    if (depCut) {
      vcf <- vcf[vcf$DP > thred, ]
    }
  }
  if (!keepall) {
    remove_list <- c(10:length(content))[!(c(10:length(content)) %in% extnum)]
    vcf <- vcf[, -remove_list]
  }
  return(vcf)
}

#' @title VCF Data Input
#' @description Reads a file in vcf or vcf.gz file and creates a list containing Content, Meta, VCF and file_sample_name
#'
#' @param fn Input vcf file name
#' @param vcffor Input vcf data format: 1) GATK; 2) VarPROWL; 3) VarDict; 4) Strelka (strelka2)
#' @param dbOnly Use dbSNP as filter, default is FALSE
#' @param depCut Use a threshold for min depth, default is FALSE
#' @param thred Threshold for min depth, default is 20
#' @param metaline Number of head lines to read in (better to be large enough), the lines will be checked if they contain meta information, default is 200
#' @param extnum The column number to be extracted from vcf, default is 10; 0 for not extracting any column; extnum should be between 10 and total column number
#' @param keepall Keep unextracted column in output, default is TRUE
#' @param filter Whether to select "PASS" variants for analyses if they contain unfiltered variants, default is FALSE
#'
#' @return A list containing (1) Content: a vector showing what is contained; (2) Meta: a data frame containing meta-information of the file;
#' (3) VCF: a data frame, the main part of VCF file; (4) file_sample_name: the file name and sample name,
#' in case when multiple samples exist in one file, file and sample names might be different
#'
#' @export
#' @importFrom utils read.table
#'
#' @examples
#' file.name <- system.file("extdata", "example.vcf.gz", package = "vanquish")
#' example <- read_vcf(fn = file.name, vcffor = "VarPROWL")
read_vcf <- function(fn, vcffor, dbOnly = FALSE, depCut = FALSE, thred = 20,
                     metaline = 200, extnum = 10, keepall = TRUE, filter = FALSE) {
  vcfobj <- locateFile(fn = fn, extension = 'vcf')
  meta_lines <- readLines(con = vcfobj, n = metaline)
  content <- meta_lines[startsWith(meta_lines, "#CH")]
  if (!is.numeric(extnum)) {
    stop('extnum must be numeric!')
  } else if ((extnum < 10) & (extnum != 0)) {
    stop('extnum must >= 10 or 0!')
  }
  if (length(content) != 0) {
    contents <- strsplit(content, '\t')[[1]]
    contents[1] <- gsub("#", "", contents[1])
    if (extnum > length(contents)) {
      stop('extnum cannot beyond column number!')
    }
  } else {
    contents <- NULL
  }
  meta <- as.data.frame(meta_lines[startsWith(meta_lines, "##")])
  colnames(meta) <- "Meta"
  if (vcffor == 'VarDict') {
    df <- readVarDict(dr = vcfobj, dbOnly, depCut, thred, content = contents, extnum, keepall)
  } else if (vcffor == 'GATK') {
    df <- readGATK(dr = vcfobj, dbOnly, depCut, thred, content = contents, extnum, keepall)
  } else if (vcffor == 'VarPROWL') {
    df <- readVarPROWL(dr = vcfobj, dbOnly, depCut, thred, content = contents, extnum, keepall)
  } else if (vcffor == 'Strelka') {
    df <- readStrelka(dr = vcfobj, dbOnly, depCut, thred, content = contents, extnum, keepall)
  } else {
    stop('Invalid VCF type!')
  }
  if (filter) {
    if ("FILTER" %in% colnames(df)) {
      print('Select PASS variants only for analysis.')
      df <- df[df$FILTER == 'PASS', ]
    } else {
      print('No FILTER column in VCF file!')
    }
  }
  res_list <- list("Content" = contents, "Meta" = meta, "VCF" = df,
                   "file_sample_name" = c(basename(vcfobj), contents[extnum]))
  return(res_list)
}

# file: vanquish/R/read_vcf.R
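# Illustrative sketch (not part of the package): reading a GATK-style VCF and
# dropping unextracted sample columns; "sample.vcf" is a hypothetical path.
# gatk <- read_vcf(fn = "sample.vcf", vcffor = "GATK", extnum = 10,
#                  keepall = FALSE, filter = TRUE)
# head(gatk$VCF[, c("CHROM", "POS", "DP", "AC", "RC", "AF", "ZG")])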
#' @title Negative Log Likelihood
#' @description Calculates negative log likelihood for beta binomial distribution.
#'
#' @param x Depth of alternative allele
#' @param size Total depth
#' @param prob Theoretical probability for heterozygous is 0.5, for homozygous is 0.999
#' @param rho Rho parameter of Beta-Binomial distribution of alternative allele
#'
#' @importFrom VGAM dbetabinom
#'
negll <- function(x, size, prob, rho){
  -sum(dbetabinom(x, size, prob, rho, log = TRUE))
}

#' @title Estimate Rho for Alternative Allele Frequency
#' @description Estimates Rho parameter in beta binomial distribution for alternative allele frequency
#'
#' @param vl A list of vcf objects from read_vcf function.
#' @importFrom stats optim
#'
#' @return A list containing (1) het_rho: Rho parameter of heterozygous location; (2) hom_rho: Rho parameter homozygous location;
#' @export
#'
#' @examples
#' data("vcf_example")
#' vcf_list <- list()
#' vcf_list[[1]] <- vcf_example$VCF
#' res <- rho_est(vl = vcf_list)
#' res$het_rho[[1]]$par
#' res$hom_rho[[1]]$par
rho_est <- function(vl) {
  n <- length(vl)
  locinum <- lapply(X = vl, FUN = nrow)
  # alternative allele counts and depths, split by zygosity
  het_ac <- lapply(X = vl, FUN = function(x) {x$AC[x$ZG == 'het']})
  hom_ac <- lapply(X = vl, FUN = function(x) {x$AC[x$ZG == 'hom']})
  het_dp <- lapply(X = vl, FUN = function(x) {x$DP[x$ZG == 'het']})
  hom_dp <- lapply(X = vl, FUN = function(x) {x$DP[x$ZG == 'hom']})
  het_rho <- list()
  hom_rho <- list()
  for (i in 1:n) {
    het_objfn <- function(rho) {
      negll(x = het_ac[[i]], size = het_dp[[i]], prob = 0.5, rho)
    }
    het_rho[[i]] <- optim(par = c(0.03), fn = het_objfn, method = "L-BFGS-B",
                          lower = 0, upper = 0.999)
    hom_objfn <- function(rho) {
      negll(x = hom_ac[[i]], size = hom_dp[[i]], prob = 0.999, rho)
    }
    hom_rho[[i]] <- optim(par = c(0.03), fn = hom_objfn, method = "L-BFGS-B",
                          lower = 0, upper = 0.999)
  }
  res_list <- list("het_rho" = het_rho, "hom_rho" = hom_rho)
  return(res_list)
}

# file: vanquish/R/rho_est.R
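# Illustrative sketch (not part of the package): negll() is the negative
# beta-binomial log-likelihood that rho_est() minimizes; rho measures the
# overdispersion of allele counts around prob. Simulated het-like data:
library(VGAM)
set.seed(1)
dp <- rep(100, 200)                                      # depths
ac <- rbetabinom(200, size = dp, prob = 0.5, rho = 0.02) # alt allele counts
negll_het <- function(rho) -sum(dbetabinom(ac, dp, prob = 0.5, rho, log = TRUE))
optim(par = 0.03, fn = negll_het, method = "L-BFGS-B", lower = 0, upper = 0.999)$par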
#' @title VCF Data Summary
#' @description Summarizes allele frequency information in scatter and density plots
#'
#' @param vcf VCF object from read_vcf function
#' @param ZG zygosity: (1) null, for both het and hom, default; (2) het; (3) hom
#' @param CHR chromosome number: (1) null, all chromosome, default; (2) any specific number
#'
#' @return A list containing (1) scatter: allele frequency scatter plot; (2) density: allele frequency density plot
#'
#' @export
#' @import stats
#' @import ggplot2
#'
#' @examples
#' data("vcf_example")
#' tmp <- summary_vcf(vcf = vcf_example, ZG = 'het', CHR = c(1,2))
#' plot(tmp$scatter)
#' plot(tmp$density)
summary_vcf <- function(vcf, ZG = NULL, CHR = NULL) {
  AF <- group <- location <- NULL
  table <- vcf$VCF
  detail <- NULL
  if (!is.null(ZG)) {
    table <- table[table$ZG == ZG, ]
  }
  if (!is.null(CHR)) {
    table <- table[table$CHROM %in% CHR, ]
  }
  density_plot <- density(table$AF)
  table$location <- as.numeric(rownames(table))
  # alternate point colors between odd and even chromosomes
  s <- split(unique(table$CHROM), rep(1:2, length = length(unique(table$CHROM))))
  table$group <- table$CHROM
  table$group[table$CHROM %in% s[[1]]] <- 'Odd'
  if (length(CHR) != 1) {
    table$group[table$CHROM %in% s[[2]]] <- 'Even'
  }
  scatter_plot <- ggplot(table, aes(x = location, y = AF, color = group)) +
    geom_point(size = 1) +
    scale_color_manual(values = rep(c('red', 'blue')), guide = FALSE) +
    ylim(c(0, 1)) +
    labs(y = "Allele Frequency", x = "Position",
         title = paste("Scatter Plot of Allele Frequency", detail, ZG, sep = " ")) +
    theme_classic()
  res_list <- list("scatter" = scatter_plot, "density" = density_plot)
  return(res_list)
}

# file: vanquish/R/summary_vcf.R
#' @title Train Contamination Detection Model #' @description Trains two SVM models (classification and regression) to detects whether a sample is contaminated another sample of its same species. #' #' @param feature Feature list objects from generate_feature() #' #' @import e1071 #' @return A list contains two trained svm models: regression & classification #' @export train_ct <- function(feature) { df <- as.data.frame(do.call(rbind, feature)) df <- df[,-1] #The first column is file name, so it is not needed for model training. svm_regression_model <- e1071::svm(Mixture ~ ., data = df, type = 'eps-regression', kernel = 'radial', cost = 16, gamma = 0.25) svm_class_model <- e1071::svm(Mixture ~ ., data = df, type = 'C-classification', kernel = 'radial', cost = 16, gamma = 0.25) return (list(regression=svm_regression_model, class=svm_class_model)) }
## file: vanquish/R/train_ct.R
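## A hypothetical sketch of using the fitted models returned by train_ct().
## The feature names 'F1'/'F2', the simulated mixtures, and the 0.05 cutoff
## are invented here; real features come from generate_feature().
library(e1071)
set.seed(1)
df <- data.frame(Mixture = runif(50, 0, 0.2), F1 = rnorm(50), F2 = rnorm(50))
reg_model <- svm(Mixture ~ ., data = df, type = "eps-regression",
                 kernel = "radial", cost = 16, gamma = 0.25)
df$Label <- factor(df$Mixture > 0.05, labels = c("clean", "contaminated"))
cls_model <- svm(Label ~ F1 + F2, data = df, type = "C-classification",
                 kernel = "radial", cost = 16, gamma = 0.25)
predict(reg_model, df[1:3, ])  # estimated contamination fraction
predict(cls_model, df[1:3, ])  # contaminated or not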
#' Get the ratio of allele frequencies within a region
#' @param subdf Dataframe with calculated statistics
#' @param lower Lower bound for allele frequency region
#' @param upper Upper bound for allele frequency region
#' @return Ratio of allele frequencies within a region
getRatio <- function(subdf, lower, upper) {
  ratio <- nrow(subdf[((subdf$AF>lower) & (subdf$AF<upper)),])/nrow(subdf)
  return (ratio)
}

#' Get absolute value of skewness
#' @param subdf Input dataframe
#' @import e1071
#' @return Absolute value of skewness
getSkewness <- function(subdf) {
  value <- abs(skewness(subdf$AF))
  return (value)
}

#' Remove CNV regions within VCF files by the change point method
#' @param vcf Input VCF files
#' @param threshold Threshold for allele frequency
#' @param skew Skewness for allele frequency
#' @param lower Lower bound for allele frequency region
#' @param upper Upper bound for allele frequency region
#' @importFrom changepoint cpt.var
#' @return VCF object without change point region
rmChangePoint <- function(vcf, threshold, skew, lower, upper) {
  vcf_Het <- vcf[vcf$ZG == "het",]
  vcf_Hom <- vcf[vcf$ZG == "hom",]
  res <- cpt.var(data = vcf_Het$AF, penalty = "CROPS", pen.value = c(100, 101),
                 method = "PELT",
                 class = FALSE, # Logical. If TRUE then an object of class cpt is returned.
                 param.estimates = TRUE) # Logical. (1) If TRUE and class=TRUE then parameter estimates are returned.
  # (2) If FALSE or class=FALSE no parameter estimates are returned.
  # Here class=FALSE & param.estimates=TRUE, so an object of class list (not cpt) is returned.
  if (!is.na(res$changepoints[[length(res$changepoints)]][1])) {
    # This only looks at the last value.
    # For most cases, if change points exist, NA values will not show up.
    # In very few situations, possible change point output can be:
    # 1,2,3,4,5
    # 1,3,4,5,NA
    # 1,3,5,NA,NA
    # NA values are all on the right. Here we choose the first one.
    res$changepoints <- c(1, res$changepoints[[length(res$changepoints)]], nrow(vcf_Het))
    f <- NULL
    for (i in 1:(length(res$changepoints)-1)) {
      tmp <- vcf_Het[(res$changepoints[i]:res$changepoints[i+1]),]
      if ((getRatio(subdf = tmp, lower, upper) > threshold) | (getSkewness(subdf = tmp) > skew)) {
        f <- rbind(f, tmp) # The region which is NOT considered as CNV
      }
    }
    vcf <- rbind(f, vcf_Hom)
    return (vcf)
  }
  return (vcf)
}

#' Remove CNV regions within VCF files given a CNV file
#' @param vcf Input VCF files
#' @param cnvobj CNV object
#' @return VCF object without change point region
rmCNVinVCF <- function(vcf, cnvobj) {
  if (nrow(cnvobj) > 0) {
    for (i in 1:nrow(cnvobj)) {
      index <- which((vcf$Chr == cnvobj$Chr[i])&((vcf$Pos > cnvobj$Start[i])&(vcf$Pos < cnvobj$Stop[i])))
      if (length(index) > 0) {
        vcf <- vcf[-index,]
      }
    }
  }
  return (vcf)
}

#' Remove CNV regions within VCF files
#' @param rmCNV Remove CNV regions, default is FALSE
#' @param vcf Input VCF files
#' @param cnvobj CNV object, default is NULL
#' @param threshold Threshold for allele frequency, default is 0.1
#' @param skew Skewness for allele frequency, default is 0.5
#' @param lower Lower bound for allele frequency region, default is 0.45
#' @param upper Upper bound for allele frequency region, default is 0.55
#' @return VCF file without CNV region
#' @export
update_vcf <- function(rmCNV = FALSE, vcf, cnvobj = NULL, threshold = 0.1,
                       skew = 0.5, lower = 0.45, upper = 0.55) {
  if (rmCNV) {
    if (is.null(cnvobj)) {
      vcf <- rmChangePoint(vcf, threshold, skew, lower, upper)
    } else {
      vcf <- rmCNVinVCF(vcf, cnvobj)
    }
  }
  return (vcf)
}
## file: vanquish/R/update_vcf.R
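## A simulated sketch of the change-point step inside rmChangePoint():
## heterozygous allele frequencies sit near 0.5, and a CNV-like segment shows
## inflated spread, which cpt.var() (called with the same arguments as above)
## segments out. The simulated values below are invented for illustration.
library(changepoint)
set.seed(7)
af <- c(rnorm(200, 0.5, 0.03),   # ordinary het band
        rnorm(80,  0.5, 0.12),   # CNV-like segment with inflated variance
        rnorm(200, 0.5, 0.03))
res <- cpt.var(data = af, penalty = "CROPS", pen.value = c(100, 101),
               method = "PELT", class = FALSE, param.estimates = TRUE)
res$changepoints  # candidate segment boundaries near positions 200 and 280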
#' Create raster file
#'
#' This is an incomplete interface to raster writing, for exploring.
#'
#' If GeoTIFF is used (`driver = "GTiff"`, recommended) then the output is tiled 512x512, has DEFLATE compression, and
#' is sparse when created (no values are initialized, so the file is tiny).
#'
#' Note that there is no restriction on where you can read or write from; the responsibility is yours. In future we will
#' allow control of output tiling and data type etc.
#'
#' @param filename filename/path to create
#' @param driver GDAL driver to use (GTiff is default, and recommended)
#' @param extent xmin,xmax,ymin,ymax 4-element vector
#' @param dimension dimension of the output, X * Y
#' @param projection projection of the output, best to use a full WKT but any string accepted
#' @param n_bands number of bands in the output, default is 1
#' @param overwrite not TRUE by default
#'
#' @return the file path that was created
#' @export
#'
#' @examples
#' tfile <- tempfile(fileext = ".tif")
#' if (!file.exists(tfile)) {
#'   vapour_create(tfile, extent = c(-1, 1, -1, 1) * 1e6,
#'                 dimension = c(128, 128),
#'                 projection = "+proj=laea")
#'   file.remove(tfile)
#' }
vapour_create <- function(filename, driver = "GTiff", extent = c(-180, 180, -90, 90),
                          dimension = c(2048, 1024), projection = "OGC:CRS84",
                          n_bands = 1L, overwrite = FALSE) {
  if (!overwrite && file.exists(filename)) stop("'filename' exists")
  vapour_create_cpp(filename, driver, extent, dimension, projection, n_bands)
}

vapour_create_copy <- function(dsource, filename, overwrite = FALSE, driver = "GTiff") {
  ##tf <- tempfile(fileext = ".tif")
  # f <- "inst/extdata/sst.tif"
  ## FIXME: we need to remove the SourceFilename element, else the data gets copied
  #vapour:::vapour_create_copy_cpp(gsub(f, "", vrt), tf, driver = "GTiff")
  ## tif has empty size
  #file.info(tf)$size
  #[1] 824
  ## but not (FIXME) cleared metadata, I think this is where we need to clear the PAM
  # terra::rast(tf)
  # class       : SpatRaster
  # dimensions  : 286, 143, 1  (nrow, ncol, nlyr)
  # resolution  : 0.07, 0.07000389  (x, y)
  # extent      : 140, 150.01, -60.01833, -39.99722  (xmin, xmax, ymin, ymax)
  # coord. ref. : lon/lat WGS 84 (EPSG:4326)
  # source      : filefaae639737aad.tif
  # name        : filefaae639737aad
  # min value   : 271.35
  # max value   : 289.859
  if (!overwrite && file.exists(filename)) stop("'filename' exists")
  .check_dsn_single(dsource)
  ## 1) convert to VRT
  ## vrt <- vapour_vrt(dsource)
  ## 2) clear SourceFilename
  ## vrt <- vapour_create_copy_cpp(gsub(f, "", vrt), tf, driver = "GTiff")
  ##vapour_create_copy_cpp(vrt, tf, driver = driver)
  stop("not implemented")
}

#' Read or write raster block
#'
#' Read a 'block' from raster.
#'
#' @param dsource file name to read from, or write to
#' @param offset position x,y to start writing (0-based, y-top)
#' @param dimension window size to read from, or write to
#' @param band_output_type numeric type of band to apply (else the native type if ''), can be one of 'Byte', 'Int32', or 'Float64'
#' @param band which band to read (1-based)
#' @param unscale default is `TRUE` so native values will be converted by offset and scale to floating point
#'
#' @return a list with a vector of data from the band read
#' @export
#'
#' @examples
#' f <- system.file("extdata", "sst.tif", package = "vapour")
#' v <- vapour_read_raster_block(f, c(0L, 0L), dimension = c(2L, 3L), band = 1L)
vapour_read_raster_block <- function(dsource, offset, dimension, band = 1L,
                                     band_output_type = "", unscale = TRUE) {
  dsource <- .check_dsn_single(dsource)
  if (anyNA(band) || length(band) < 1L) stop("missing band value")
  if (file.exists(dsource)) {
    dsource <- normalizePath(dsource)
  }
  vapour_read_raster_block_cpp(dsource, as.integer(rep(offset, length.out = 2L)),
                               as.integer(rep(dimension, length.out = 2L)),
                               band = as.integer(band[1L]),
                               band_output_type = band_output_type, unscale = unscale)
}

#' Write data to a block *in an existing file*.
#'
#' Be careful! The write function doesn't create a file, you have to use an existing one.
#' Don't write to a file you don't want to update by mistake.
#' @export
#' @param data data vector, length should match `prod(dimension)` or length 1 allowed
#' @param overwrite set to FALSE as a safety valve to not overwrite an existing file
#' @param dsource data source name
#' @param offset offset to start
#' @param dimension dimension to write
#' @param band which band to write to (1-based)
#'
#' @return a logical value indicating success (or failure) of the write
#' @examples
#' f <- system.file("extdata", "sst.tif", package = "vapour")
#' v <- vapour_read_raster_block(f, c(0L, 0L), dimension = c(2L, 3L), band = 1L)
#' file.copy(f, tf <- tempfile(fileext = ".tif"))
#' try(vapour_write_raster_block(tf, data = v[[1]], offset = c(0L, 0L),
#'                               dimension = c(2L, 3L), band = 1L))
#' if (file.exists(tf)) file.remove(tf)
vapour_write_raster_block <- function(dsource, data, offset, dimension, band = 1L, overwrite = FALSE) {
  if (!file.exists(dsource)) stop("file dsource must exist")
  if (!overwrite) stop(sprintf("set 'overwrite' to TRUE if you really mean to write to file %s", dsource))
  if (anyNA(band) || length(band) < 1L) stop("missing band value")
  band <- as.integer(band[1L])
  offset <- as.integer(rep(offset, length.out = 2L))
  dimension <- as.integer(rep(dimension, length.out = 2L))
  if (length(data) < 1 || prod(dimension) < 1 || length(data) != prod(dimension)) {
    ## allow a single value to write to the entire block
    if (length(data) == 1L && !is.na(data)) {
      data <- rep(data, length.out = prod(dimension))
    } else {
      if (length(data) == 1L && anyNA(data)) stop("data is NA singleton but prod(dimension) > 1, please explicitly input NA for every element not just a single value")
      stop("mismatched data and dimension")
    }
  }
  vapour_write_raster_block_cpp(dsource, data, offset, dimension, band = band)
}
## file: vapour/R/00_read_block.R
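## A sketch combining the functions above: create a sparse GTiff, write a
## single block into it, and read that block back. Wrapped in try() because
## the outcome depends on the local GDAL build.
tf <- tempfile(fileext = ".tif")
try({
  vapour_create(tf, extent = c(0, 10, 0, 10), dimension = c(10L, 10L),
                projection = "EPSG:4326")
  vapour_write_raster_block(tf, data = as.numeric(1:25), offset = c(0L, 0L),
                            dimension = c(5L, 5L), band = 1L, overwrite = TRUE)
  str(vapour_read_raster_block(tf, offset = c(0L, 0L), dimension = c(5L, 5L),
                               band = 1L))
})
unlink(tf)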
## this is ongoing work to replace the innards of GDALInfo for vapour_raster_info and others
## for example, instead of
## f1 <- system.file("extdata/gcps", "volcano_gcp.tif", package = "vapour")
## vapour_raster_gcp(f1)
## we can now do
## jsonlite::fromJSON(vapour:::gdalinfo_internal(f1))$gcps
gdalinfo_internal <- function(x, json = TRUE, stats = FALSE, sd = 0, checksum = FALSE,
                              wkt_format = "WKT2", oo = character(), initial_format = character(),
                              options = character()) {
  rep_zip <- function(x, y) {
    as.vector(t(cbind(rep(x, length(y)), y)))
  }
  if (length(sd) > 1) message("'sd' argument cannot be vectorized over 'dsn', ignoring all but first value")
  version <- vapour_gdal_version()
  v3 <- TRUE
  if (grepl("GDAL 2", version)) v3 <- FALSE
  extra <- c(if (json) "-json",
             if (is.numeric(sd) && sd[1L] > 0) c("-sd", sd[1L]),
             if (stats) "-stats",
             if (checksum) "-checksum",
             if (nchar(wkt_format[1]) > 0 && v3) c("-wkt_format", wkt_format[1L]),
             if (length(oo) > 0 && any(nchar(oo) > 0)) rep_zip("-oo", oo[nchar(oo) > 0]),
             if (length(initial_format) > 0 && any(nchar(initial_format) > 0)) rep_zip("-if", initial_format[nchar(initial_format) > 0]))
  options <- c(options, "-proj4", "-listmdd", extra)
  options <- options[!is.na(options)]
  ## can't do unique() because repeated arguments are possible, e.g. "-if" "GTiff" "-if" "netCDF"
  info <- raster_gdalinfo_app_cpp(x, options)
  info
}
## file: vapour/R/00_utils.R
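## Following the note at the top of this file, a small sketch of pulling
## structured fields out of the JSON report (this is an internal function,
## so the exact field names are subject to change):
f <- system.file("extdata", "sst.tif", package = "vapour")
info <- jsonlite::fromJSON(gdalinfo_internal(f))
info$size        # x, y dimensions
info$bands$type  # band data type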
#' General raster read and convert
#'
#' The warper is used to convert source/s to an output file or to data in memory.
#'
#' Two functions 'gdal_raster_data' and 'gdal_raster_dsn' act like the gdalwarp command line
#' tool; a convenient third function 'gdal_raster_image()' works especially for image data.
#'
#' @param dsn data sources, files, urls, db strings, vrt, etc
#' @param target_crs projection of the target grid
#' @param target_dim dimension of the target grid
#' @param target_ext extent of the target grid
#' @param target_res resolution of the target grid
#' @param resample resampling algorithm used
#' @param bands band or bands to include, default is first band only (use NULL or a value less than one to obtain all bands)
#' @param band_output_type specify the band type, see [vapour_read_raster]
#' @param options general options passed to gdal warper
#' @param out_dsn use with [gdal_raster_dsn] optionally set the output file name (or one will be generated)
#' @export
#' @returns pixel values in a list vector per band, or a list of file paths
#'
#' @examples
#' dsn <- system.file("extdata/sst.tif", package = "vapour")
#' par(mfrow = c(2, 2))
#' ## do nothing, get native
#' X <- gdal_raster_data(dsn)
#'
#' ## set resolution (or dimension, extent, crs, or combination thereof - GDAL
#' ## will report/resolve incompatible opts)
#' X1 <- gdal_raster_data(dsn, target_res = 1)
#'
#' ## add a cutline, and cut to it using gdal warp args
#' cutline <- system.file("extdata/cutline_sst.gpkg", package = "vapour")
#' X1c <- gdal_raster_data(dsn, target_res = .1, options = c("-cutline", cutline, "-crop_to_cutline"))
#'
#' ## warp whole grid to given res
#' X2 <- gdal_raster_data(dsn, target_res = 25000, target_crs = "EPSG:32755")
#'
#' ## specify exactly (as per vapour originally)
#' X3 <- gdal_raster_data(dsn, target_ext = c(-1, 1, -1, 1) * 8e6,
#'                        target_dim = c(512, 678), target_crs = "+proj=stere +lon_0=147 +lat_0=-90")
#'
#' X4 <- gdal_raster_dsn(dsn, out_dsn = tempfile(fileext = ".tif"))
gdal_raster_data <- function(dsn, target_crs = NULL, target_dim = NULL, target_ext = NULL,
                             target_res = NULL, resample = "near", bands = 1L,
                             band_output_type = NULL, options = character()) {
  if (is.null(target_crs)) target_crs <- ""
  if (is.null(target_ext)) {
    target_ext <- numeric()
  } else {
    if (!length(target_ext) == 4L) stop("'target_ext' must be of length 4 (xmin, xmax, ymin, ymax)")
    if (anyNA(target_ext)) stop("NA values in 'target_ext'")
    dif <- diff(target_ext)[c(1L, 3L)]
    if (any(!dif > 0)) stop("all 'target_ext' values must satisfy xmin < xmax, ymin < ymax")
  }
  if (is.null(target_dim)) {
    target_dim <- integer()  #info$dimension
  } else {
    if (length(target_dim) > 0) target_dim <- as.integer(rep(target_dim, length.out = 2L))
    if (anyNA(target_dim)) stop("NA values in 'target_dim'")
    if (any(target_dim < 0)) stop("all 'target_dim' values must be >= 0")
    if (all(target_dim < 1)) stop("one 'target_dim' value must be > 0")
  }
  if (is.null(target_res)) {
    target_res <- numeric()  ## TODO
  } else {
    ## check res is above zero
    if (length(target_res) > 0) target_res <- as.numeric(rep(target_res, length.out = 2L))
    if (anyNA(target_res)) stop("NA values in 'target_res'")
    if (any(target_res <= 0)) stop("all 'target_res' values must be > 0")
  }
  if (is.null(band_output_type)) band_output_type <- "Float64"
  if (is.null(bands)) bands <- -1
  warp_general_cpp(dsn, target_crs, as.numeric(target_ext), as.integer(target_dim),
                   as.numeric(target_res), bands = as.integer(bands), resample = resample,
                   silent = FALSE, band_output_type = band_output_type,
                   options = options, dsn_outname = "")
}

#' @name gdal_raster_data
#' @export
gdal_raster_dsn <- function(dsn, target_crs = NULL, target_dim = NULL, target_ext = NULL,
                            target_res = NULL, resample = "near", bands = NULL,
                            band_output_type = NULL, options = character(),
                            out_dsn = tempfile(fileext = ".tif")) {
  if (is.null(target_crs)) target_crs <- ""
  if (is.null(target_ext)) {
    target_ext <- numeric()
  } else {
    if (!length(target_ext) == 4L) stop("'target_ext' must be of length 4 (xmin, xmax, ymin, ymax)")
    if (anyNA(target_ext)) stop("NA values in 'target_ext'")
    dif <- diff(target_ext)[c(1L, 3L)]
    if (any(!dif > 0)) stop("all 'target_ext' values must satisfy xmin < xmax, ymin < ymax")
  }
  if (is.null(target_dim)) {
    target_dim <- integer()  #info$dimension
  } else {
    if (length(target_dim) > 0) target_dim <- as.integer(rep(target_dim, length.out = 2L))
    if (anyNA(target_dim)) stop("NA values in 'target_dim'")
    if (any(target_dim < 0)) stop("all 'target_dim' values must be >= 0")
    if (all(target_dim < 1)) stop("one 'target_dim' value must be > 0")
  }
  if (is.null(target_res)) {
    target_res <- numeric()  ## TODO
  } else {
    ## check res is above zero
    if (length(target_res) > 0) target_res <- as.numeric(rep(target_res, length.out = 2L))
    if (anyNA(target_res)) stop("NA values in 'target_res'")
    if (any(target_res <= 0)) stop("all 'target_res' values must be > 0")
  }
  if (is.null(band_output_type)) band_output_type <- "Float64"
  #if (grepl("tif$", out_dsn)) {
  ## we'll have to do some work here
  ## currently always COG
  options <- c(options, "-of", "COG")
  if (!is.null(bands) || (is.integer(bands) && !length(bands) == 1 && bands[1] > 0)) {
    stop("bands cannot be set for gdal_raster_dsn, please use an upfront call to 'vapour_vrt(dsn, bands = )' to create the dsn")
  } else {
    bands <- -1
  }
  warp_general_cpp(dsn, target_crs, target_ext, target_dim, target_res,
                   bands = bands, resample = resample, silent = FALSE,
                   band_output_type = band_output_type, options = options,
                   dsn_outname = out_dsn[1L])
}

#' @name gdal_raster_data
#' @export
gdal_raster_image <- function(dsn, target_crs = NULL, target_dim = NULL, target_ext = NULL,
                              target_res = NULL, resample = "near", bands = NULL,
                              band_output_type = NULL, options = character()) {
  if (length(target_res) > 0) target_res <- as.numeric(rep(target_res, length.out = 2L))
  if (is.null(target_crs)) target_crs <- ""
  if (is.null(target_ext)) target_ext <- numeric()
  if (is.null(target_dim)) target_dim <- integer()  #info$dimension
  if (is.null(target_res)) target_res <- numeric()  ## TODO
  if (is.null(band_output_type)) band_output_type <- "UInt8"
  if (is.null(bands)) {
    nbands <- vapour_raster_info(dsn[1])$bands
    bands <- seq(min(c(nbands, 4L)))
  }
  bytes <- warp_general_cpp(dsn, target_crs, target_ext, target_dim, target_res,
                            bands = bands, resample = resample, silent = FALSE,
                            band_output_type = band_output_type, options = options,
                            dsn_outname = "")
  atts <- attributes(bytes)
  out <- list(as.vector(grDevices::as.raster(array(unlist(bytes, use.names = FALSE),
                                                   c(length(bytes[[1]]), 1, max(c(3, length(bytes))))))))
  attributes(out) <- atts
  out
}
## file: vapour/R/00_warpgeneral.R
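## gdal_raster_image() has no roxygen example above; a minimal sketch using
## the package's sample raster, reading hex colour strings at a reduced size
## and plotting them (the byrow fill assumes row-major raster order):
dsn <- system.file("extdata", "sst.tif", package = "vapour")
dm <- c(186, 143)  # x, y
img <- gdal_raster_image(dsn, target_dim = dm)
plot(as.raster(matrix(img[[1]], dm[2], dm[1], byrow = TRUE)))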
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393

gdal_dsn_read_vector_stream <- function(stream_xptr, dsn, layer, sql, options, quiet, drivers, wkt_filter, dsn_exists, dsn_isdb, fid_column_name, width) {
    .Call('_vapour_gdal_dsn_read_vector_stream', PACKAGE = 'vapour', stream_xptr, dsn, layer, sql, options, quiet, drivers, wkt_filter, dsn_exists, dsn_isdb, fid_column_name, width)
}

warp_general_cpp <- function(dsn, target_crs, target_extent, target_dim, target_res, bands, resample, silent, band_output_type, options, dsn_outname) {
    .Call('_vapour_warp_general_cpp', PACKAGE = 'vapour', dsn, target_crs, target_extent, target_dim, target_res, bands, resample, silent, band_output_type, options, dsn_outname)
}

warp_suggest_cpp <- function(dsn, target_crs) {
    .Call('_vapour_warp_suggest_cpp', PACKAGE = 'vapour', dsn, target_crs)
}

set_gdal_config_cpp <- function(option, value) {
    .Call('_vapour_set_gdal_config_cpp', PACKAGE = 'vapour', option, value)
}

get_gdal_config_cpp <- function(option) {
    .Call('_vapour_get_gdal_config_cpp', PACKAGE = 'vapour', option)
}

cleanup_gdal_cpp <- function() {
    .Call('_vapour_cleanup_gdal_cpp', PACKAGE = 'vapour')
}

driver_gdal_cpp <- function(dsn) {
    .Call('_vapour_driver_gdal_cpp', PACKAGE = 'vapour', dsn)
}

driver_id_gdal_cpp <- function(dsn) {
    .Call('_vapour_driver_id_gdal_cpp', PACKAGE = 'vapour', dsn)
}

layer_names_gdal_cpp <- function(dsn) {
    .Call('_vapour_layer_names_gdal_cpp', PACKAGE = 'vapour', dsn)
}

drivers_list_gdal_cpp <- function() {
    .Call('_vapour_drivers_list_gdal_cpp', PACKAGE = 'vapour')
}

proj_to_wkt_gdal_cpp <- function(proj4string) {
    .Call('_vapour_proj_to_wkt_gdal_cpp', PACKAGE = 'vapour', proj4string)
}

crs_is_lonlat_cpp <- function(input_string) {
    .Call('_vapour_crs_is_lonlat_cpp', PACKAGE = 'vapour', input_string)
}

register_gdal_cpp <- function() {
    .Call('_vapour_register_gdal_cpp', PACKAGE = 'vapour')
}

version_gdal_cpp <- function() {
    .Call('_vapour_version_gdal_cpp', PACKAGE = 'vapour')
}

vsi_list_gdal_cpp <- function(dsn) {
    .Call('_vapour_vsi_list_gdal_cpp', PACKAGE = 'vapour', dsn)
}

feature_count_gdal_cpp <- function(dsn, layer, sql, ex) {
    .Call('_vapour_feature_count_gdal_cpp', PACKAGE = 'vapour', dsn, layer, sql, ex)
}

projection_info_gdal_cpp <- function(dsn, layer, sql) {
    .Call('_vapour_projection_info_gdal_cpp', PACKAGE = 'vapour', dsn, layer, sql)
}

report_fields_gdal_cpp <- function(dsn, layer, sql) {
    .Call('_vapour_report_fields_gdal_cpp', PACKAGE = 'vapour', dsn, layer, sql)
}

vapour_geom_name_cpp <- function(dsource, layer, sql, ex) {
    .Call('_vapour_vapour_geom_name_cpp', PACKAGE = 'vapour', dsource, layer, sql, ex)
}

vapour_layer_extent_cpp <- function(dsource, layer, sql, ex) {
    .Call('_vapour_vapour_layer_extent_cpp', PACKAGE = 'vapour', dsource, layer, sql, ex)
}

gdal_dsn_read_geom_all <- function(dsn, layer, sql, ex, format) {
    .Call('_vapour_gdal_dsn_read_geom_all', PACKAGE = 'vapour', dsn, layer, sql, ex, format)
}

gdal_dsn_read_geom_ij <- function(dsn, layer, sql, ex, format, ij) {
    .Call('_vapour_gdal_dsn_read_geom_ij', PACKAGE = 'vapour', dsn, layer, sql, ex, format, ij)
}

gdal_dsn_read_geom_ia <- function(dsn, layer, sql, ex, format, ia) {
    .Call('_vapour_gdal_dsn_read_geom_ia', PACKAGE = 'vapour', dsn, layer, sql, ex, format, ia)
}

gdal_dsn_read_geom_fa <- function(dsn, layer, sql, ex, format, fa) {
    .Call('_vapour_gdal_dsn_read_geom_fa', PACKAGE = 'vapour', dsn, layer, sql, ex, format, fa)
}

gdal_dsn_read_fields_all <- function(dsn, layer, sql, ex, fid_column_name) {
    .Call('_vapour_gdal_dsn_read_fields_all', PACKAGE = 'vapour', dsn, layer, sql, ex, fid_column_name)
}

gdal_dsn_read_fields_ij <- function(dsn, layer, sql, ex, fid_column_name, ij) {
    .Call('_vapour_gdal_dsn_read_fields_ij', PACKAGE = 'vapour', dsn, layer, sql, ex, fid_column_name, ij)
}

gdal_dsn_read_fields_ia <- function(dsn, layer, sql, ex, fid_column_name, ia) {
    .Call('_vapour_gdal_dsn_read_fields_ia', PACKAGE = 'vapour', dsn, layer, sql, ex, fid_column_name, ia)
}

gdal_dsn_read_fields_fa <- function(dsn, layer, sql, ex, fid_column_name, fa) {
    .Call('_vapour_gdal_dsn_read_fields_fa', PACKAGE = 'vapour', dsn, layer, sql, ex, fid_column_name, fa)
}

gdal_dsn_read_fids_all <- function(dsn, layer, sql, ex) {
    .Call('_vapour_gdal_dsn_read_fids_all', PACKAGE = 'vapour', dsn, layer, sql, ex)
}

gdal_dsn_read_fids_ij <- function(dsn, layer, sql, ex, ij) {
    .Call('_vapour_gdal_dsn_read_fids_ij', PACKAGE = 'vapour', dsn, layer, sql, ex, ij)
}

gdal_dsn_read_fids_ia <- function(dsn, layer, sql, ex, ia) {
    .Call('_vapour_gdal_dsn_read_fids_ia', PACKAGE = 'vapour', dsn, layer, sql, ex, ia)
}

read_fields_gdal_cpp <- function(dsn, layer, sql, limit_n, skip_n, ex, fid_column_name) {
    .Call('_vapour_read_fields_gdal_cpp', PACKAGE = 'vapour', dsn, layer, sql, limit_n, skip_n, ex, fid_column_name)
}

read_geometry_gdal_cpp <- function(dsn, layer, sql, what, textformat, limit_n, skip_n, ex) {
    .Call('_vapour_read_geometry_gdal_cpp', PACKAGE = 'vapour', dsn, layer, sql, what, textformat, limit_n, skip_n, ex)
}

read_fids_gdal_cpp <- function(dsn, layer, sql, limit_n, skip_n, ex) {
    .Call('_vapour_read_fids_gdal_cpp', PACKAGE = 'vapour', dsn, layer, sql, limit_n, skip_n, ex)
}

raster_gcp_gdal_cpp <- function(dsn) {
    .Call('_vapour_raster_gcp_gdal_cpp', PACKAGE = 'vapour', dsn)
}

raster_has_geolocation_gdal_cpp <- function(dsn, sds) {
    .Call('_vapour_raster_has_geolocation_gdal_cpp', PACKAGE = 'vapour', dsn, sds)
}

raster_info_gdal_cpp <- function(dsn, min_max) {
    .Call('_vapour_raster_info_gdal_cpp', PACKAGE = 'vapour', dsn, min_max)
}

raster_extent_cpp <- function(dsn) {
    .Call('_vapour_raster_extent_cpp', PACKAGE = 'vapour', dsn)
}

raster_io_gdal_cpp <- function(dsn, window, band, resample, band_output_type, unscale) {
    .Call('_vapour_raster_io_gdal_cpp', PACKAGE = 'vapour', dsn, window, band, resample, band_output_type, unscale)
}

sds_list_gdal_cpp <- function(dsn) {
    .Call('_vapour_sds_list_gdal_cpp', PACKAGE = 'vapour', dsn)
}

sds_list_list_gdal_cpp <- function(dsn) {
    .Call('_vapour_sds_list_list_gdal_cpp', PACKAGE = 'vapour', dsn)
}

warp_in_memory_gdal_cpp <- function(dsn, source_WKT, target_WKT, target_extent, target_dim, bands, source_extent, resample, silent, band_output_type, options, nomd, overview) {
    .Call('_vapour_warp_in_memory_gdal_cpp', PACKAGE = 'vapour', dsn, source_WKT, target_WKT, target_extent, target_dim, bands, source_extent, resample, silent, band_output_type, options, nomd, overview)
}

vapour_read_raster_block_cpp <- function(dsource, offset, dimension, band, band_output_type, unscale) {
    .Call('_vapour_vapour_read_raster_block_cpp', PACKAGE = 'vapour', dsource, offset, dimension, band, band_output_type, unscale)
}

vapour_write_raster_block_cpp <- function(dsource, data, offset, dimension, band) {
    .Call('_vapour_vapour_write_raster_block_cpp', PACKAGE = 'vapour', dsource, data, offset, dimension, band)
}

vapour_create_copy_cpp <- function(dsource, dtarget, driver) {
    .Call('_vapour_vapour_create_copy_cpp', PACKAGE = 'vapour', dsource, dtarget, driver)
}

vapour_create_cpp <- function(filename, driver, extent, dimension, projection, n_bands) {
    .Call('_vapour_vapour_create_cpp', PACKAGE = 'vapour', filename, driver, extent, dimension, projection, n_bands)
}

vapour_read_raster_value_cpp <- function(dsource, col, row, band, band_output_type) {
    .Call('_vapour_vapour_read_raster_value_cpp', PACKAGE = 'vapour', dsource, col, row, band, band_output_type)
}

blocks_cpp1 <- function(dsource, iblock, read) {
    .Call('_vapour_blocks_cpp1', PACKAGE = 'vapour', dsource, iblock, read)
}

blocks_cpp <- function(dsource, iblock, read) {
    .Call('_vapour_blocks_cpp', PACKAGE = 'vapour', dsource, iblock, read)
}

raster_gdalinfo_app_cpp <- function(dsn, options) {
    .Call('_vapour_raster_gdalinfo_app_cpp', PACKAGE = 'vapour', dsn, options)
}

raster_vrt_cpp <- function(dsn, extent, projection, sds, bands, geolocation, nomd, overview) {
    .Call('_vapour_raster_vrt_cpp', PACKAGE = 'vapour', dsn, extent, projection, sds, bands, geolocation, nomd, overview)
}

raster_warp_file_cpp <- function(source_filename, target_crs, target_extent, target_dim, target_filename, bands, resample, silent, band_output_type, warp_options, transformation_options) {
    .Call('_vapour_raster_warp_file_cpp', PACKAGE = 'vapour', source_filename, target_crs, target_extent, target_dim, target_filename, bands, resample, silent, band_output_type, warp_options, transformation_options)
}
## file: vapour/R/RcppExports.R
validate_limit_ia <- function(x) {
  if (is.null(x)) stop("invalid ia, is NULL; must be between 0 and one less than the number of features")
  if (anyNA(x)) stop("missing values ia")
  if (!all(x >= 0)) stop("ia values < 0")
  x
}
validate_limit_ij <- function(x) {
  if (is.null(x)) stop("invalid ij, is NULL")
  if (length(x) < 2) stop("ij values not of length 2")
  if (length(x) > 2) {
    message("ij values longer than 2, only first two used")
    x <- x[1:2]
  }
  if (anyNA(x)) stop("missing values ij")
  if (!all(x >= 0)) stop("ij values < 0")
  if (x[2] < x[1]) stop("ij values must be increasing")
  if (as.integer(x[1]) == as.integer(x[2])) stop("ij values must not be duplicated (use vapour_read_geometry_ia() to read a single geometry)")
  x
}
validate_limit_fa <- function(x) {
  if (is.null(x)) stop("invalid fa, is NULL")
  if (length(x) < 1) stop("no valid fa value")
  if (anyNA(x) | any(!is.finite(x))) stop("missing values fa")
  x
}

#' @name vapour_read_geometry
#' @export
vapour_read_geometry_ia <- function(dsource, layer = 0L, sql = "", extent = NA, ia = NULL) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  ia <- validate_limit_ia(ia)
  extent <- validate_extent(extent, sql)
  gdal_dsn_read_geom_ia(dsn = dsource, layer = layer, sql = sql, ex = extent, format = "wkb", ia = ia)
}

#' @name vapour_read_geometry
#' @export
vapour_read_geometry_ij <- function(dsource, layer = 0L, sql = "", extent = NA, ij = NULL) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  ij <- validate_limit_ij(ij)
  extent <- validate_extent(extent, sql)
  gdal_dsn_read_geom_ij(dsn = dsource, layer = layer, sql = sql, ex = extent, format = "wkb", ij = ij)
}
## file: vapour/R/index_input_geometry.R
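## A sketch of the 0-based index conventions enforced above, using a vector
## file assumed to ship with vapour:
f <- system.file("extdata/tab", "list_locality_postcode_meander_valley.tab",
                 package = "vapour")
g_ia <- vapour_read_geometry_ia(f, ia = c(0, 2, 2))  # arbitrary indexes, repeats allowed
g_ij <- vapour_read_geometry_ij(f, ij = c(0, 2))     # contiguous range first:last
length(g_ia); length(g_ij)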
sds_boilerplate_checks <- function(x, sds = NULL) {
  ## don't pass relative paths to GDAL
  ## but also don't prevent access to non-files
  if (file.exists(x)) x <- base::normalizePath(x, mustWork = FALSE)
  ## use sds wrapper to target the first by default
  subdatasets <- try(vapour_sds_names(x), silent = TRUE)
  if (inherits(subdatasets, "try-error")) stop("GDAL was unable to open ^^")
  datavars <- data.frame(datsource = x, subdataset = subdatasets, stringsAsFactors = FALSE)
  ## catch for l1b where we end up with the GCP conflated with the data set #48
  if (nrow(datavars) < 2) return(x)  ## shortcut to avoid #48
  wasnull <- is.null(sds)
  if (wasnull) sds <- 1
  if (!is.numeric(sds)) stop("sds must be specified by number, starting from 1")
  if (wasnull && nrow(datavars) > 1L) {
    varnames <- unlist(lapply(strsplit(datavars$subdataset, ":"), utils::tail, 1L), use.names = FALSE)
    message(sprintf("subdataset (variable) used is '%s'\n", varnames[1]))
    message("If that is not correct (or to suppress this message) choose 'sds' by number from ",
            paste(sprintf("\n%i: '%s'", seq_len(nrow(datavars)), varnames), collapse = ", "))
  }
  stopifnot(length(sds) == 1L)
  if (sds < 1) stop("sds must be 1 at minimum")
  if (sds > nrow(datavars)) stop(sprintf("'sds' must not exceed number of subdatasets (%i)", nrow(datavars)))
  datavars$subdataset[sds]
}

.gt_dim_to_extent <- function(x, dim) {
  xx <- c(x[1], x[1] + dim[1] * x[2])
  yy <- c(x[4] + dim[2] * x[6], x[4])
  c(xx, yy)
}

#' Raster information
#'
#' Return the basic structural metadata of a raster source understood by GDAL.
#' Subdatasets may be specified by number, starting at 1. See
#' [vapour_sds_names] for more.
#'
#' The structural metadata are
#'
#' \describe{
#' \item{extent}{the extent of the data, xmin, xmax, ymin, ymax - these are the lower left and upper right corners of pixels}
#' \item{geotransform}{the affine transform}
#' \item{dimension}{dimensions x-y, columns*rows}
#' \item{minmax}{numeric values of the computed min and max from the first band (optional)}
#' \item{block}{dimensions x-y of internal tiling scheme}
#' \item{projection}{text version of map projection parameter string}
#' \item{bands}{number of bands in the dataset}
#' \item{projstring}{the proj string version of 'projection'}
#' \item{nodata_value}{not implemented}
#' \item{overviews}{the number and size of any available overviews}
#' \item{filelist}{the list of files involved (may be none, and so will be a single NA character value)}
#' \item{datatype}{the band type name, in GDAL form 'Byte', 'Int16', 'Float32', etc.}
#' \item{subdatasets}{any subdataset DSNs if present, otherwise `NULL`}
#' \item{corners}{corner coordinates of the data, for non-zero skew geotransforms a 2-column matrix with rows upperLeft, lowerLeft, lowerRight, upperRight, and center}
#' }
#'
#' Note that the geotransform is a kind of obscure combination of the extent and dimension, I don't find it
#' useful and modern GDAL is moving away from needing it so much. Extent is more sensible and used in many places in
#' a straightforward way.
#'
#' On access vapour functions will report on the existence of subdatasets while
#' defaulting to the first subdataset found.
#'
#' @section Subdatasets:
#'
#' Some sources provide multiple data sets, where a dataset is described by a 2-
#' (or more) dimensional grid whose structure is described by the metadata
#' described above. Note that _subdataset_ is a different concept to _band or
#' dimension_.
#' Sources that may have multiple data sets are HDF4/HDF5 and
#' NetCDF, and they are loosely analogous to the concept of *layer* in GDAL
#' vector data. Variables are usually seen as distinct data but in GDAL and
#' related 2D-interpretations this concept is leveraged as a 3rd dimension (and
#' higher). In a GeoTIFF a third dimension might be implicit across bands, i.e.
#' to express time varying data, and so each band is not properly a variable.
#' Similarly in NetCDF, the data may be any dimensional but there's only an
#' implicit link for other variables that exist in that same dimensional space.
#' When using GDAL you are always traversing this confusing realm.
#'
#' If subdatasets are present but not specified the first is queried. The choice
#' of subdataset is analogous to the way that the `raster` package behaves, and
#' uses the argument `varname`. Variables in NetCDF correspond to subdatasets,
#' but a single data set might have multiple variables in different bands or in
#' dimensions, so this guide does not hold across various systems.
#'
#' @section The Geo Transform:
#'
#' From \url{https://gdal.org/user/raster_data_model.html}.
#'
#' The affine transform consists of six coefficients returned by
#' `GDALDataset::GetGeoTransform()` which map pixel/line coordinates into
#' georeferenced space using the following relationship:
#'
#' \code{Xgeo = GT(0) + Xpixel*GT(1) + Yline*GT(2)}
#'
#' \code{Ygeo = GT(3) + Xpixel*GT(4) + Yline*GT(5)}
#'
#' They are
#' \describe{
#' \item{GT0, xmin}{the x position of the upper left corner of the upper left pixel}
#' \item{GT1, xres}{the scale of the x-axis, the width of the pixel in x-units}
#' \item{GT2, yskew}{y component of the pixel width}
#' \item{GT3, ymax}{the y position of the upper left corner of the upper left pixel}
#' \item{GT4, xskew}{x component of the pixel height}
#' \item{GT5, yres}{the scale of the y-axis, the height of the pixel in *negative* y-units}
#' }
#'
#' Please note that these coefficients are equivalent to the contents of a
#' *world file* but that the order is not the same and the world file uses cell
#' centre convention rather than edge.
#' \url{https://en.wikipedia.org/wiki/World_file}
#'
#' Usually the skew components are zero, and so only four coefficients are
#' relevant and correspond to the offset and scale used to position the raster -
#' in combination with the number of rows and columns of data they provide the
#' spatial extent and the pixel size in each direction. Very rarely an actual
#' affine raster will be used with this _rotation_ specified within the transform
#' coefficients.
#'
#' Calculation of 'minmax' can take a significant amount of time, so it's not done by default. Use
#' 'minmax = TRUE' to do it. (It does perform well, but may be prohibitive for very large or remote sources.)
#' @seealso vapour_sds_info
#'
#' @section Overviews:
#'
#' If there are no overviews this element will simply be a single-element vector
#' of value 0. If there are overviews, the first value will give the number of overviews and
#' their dimensions will be listed as pairs of x,y values.
#'
#' @param x data source string (i.e. file name or URL or database connection string)
#' @param sds a subdataset number, if necessary
#' @param min_max logical, control computing min and max values in source ('FALSE' by default)
#' @param ... currently unused
#'
#' @return list with vectors 'geotransform', 'dimension', 'dimXY', 'minmax', 'block', 'projection', 'bands', 'projstring',
#' 'nodata_value', 'overviews', 'filelist'; see sections in Details for more on each element
#' @export
#' @examples
#' f <- system.file("extdata", "sst.tif", package = "vapour")
#' vapour_raster_info(f)
vapour_raster_info <- function(x, ..., sds = NULL, min_max = FALSE) {
  sd <- if (is.null(sds)) 0 else sds
  x <- .check_dsn_single(x)
  info <- gdalinfo_internal(x[1L], json = TRUE, stats = min_max, sd = sd, ...)
  if (is.na(info)) {
    stop("GDAL was unable to open ^^")
  }
  json <- jsonlite::fromJSON(info)
  sds <- NULL
  if (!is.null(json$metadata$SUBDATASETS)) {
    sds <- unlist(json$metadata$SUBDATASETS[grep("NAME$", names(json$metadata$SUBDATASETS))], use.names = FALSE)
  }
  corners <- do.call(rbind, json$cornerCoordinates)
  extent <- c(range(corners[,1]), range(corners[,2]))
  if (is.null(json$geoTransform)) {
    geoTransform <- c(extent[1], diff(extent[c(1,2)])/json$size[1], 0,
                      extent[4], 0, diff(extent[c(4:3)])/json$size[2])
  } else {
    geoTransform <- json$geoTransform
  }
  list(geotransform = geoTransform,
       dimension = json$size,  ## or/and dimXY
       dimXY = json$size,
       ## this needs to be per band
       minmax = c(json$bands$min[1L], json$bands$max[1L]),
       block = json$bands$block[[1L]],  ## or/and dimXY
       projection = json$coordinateSystem$wkt,
       bands = dim(json$bands)[1L],
       projstring = json$coordinateSystem$proj4,
       nodata_value = json$bands$noDataValue[1L],
       overviews = unlist(json$bands$overviews[[1]], use.names = FALSE),  ## NULL if there aren't any (was integer(0))
       filelist = json$files,
       datatype = json$bands$type[1L],
       extent = extent,
       subdatasets = sds,
       corners = corners)
}

old_vapour_raster_info <- function(x, ..., sds = NULL, min_max = FALSE) {
  x <- .check_dsn_single(x)
  datasourcename <- sds_boilerplate_checks(x, sds = sds)
  info <- raster_info_gdal_cpp(dsn = datasourcename, min_max = min_max)
  info[["extent"]] <- .gt_dim_to_extent(info$geotransform, info$dimXY)
  info
}

#' Raster ground control points
#'
#' Return any ground control points for a raster data set, if they exist.
#'
#' Pixel and Line coordinates do not correspond to cells in the underlying raster grid, they
#' refer to the index space of that array in 0, ncols and 0, nrows. They are usually a subsample of
#' the grid and may not align with the grid spacing itself (though they often do in satellite remote sensing products).
#'
#' The coordinate system of the GCPs is currently not read.
#'
#' @param x data source string (i.e. file name or URL or database connection string)
#' @param ... ignored currently
#'
#' @return list with
#' \itemize{
#'  \item \code{Pixel} the pixel coordinate
#'  \item \code{Line} the line coordinate
#'  \item \code{X} the X coordinate of the GCP
#'  \item \code{Y} the Y coordinate of the GCP
#'  \item \code{Z} the Z coordinate of the GCP (usually zero)
#' }
#' @export
#' @examples
#' ## this file has no ground control points
#' ## they are rare, and tend to be in large files
#' f <- system.file("extdata", "sst.tif", package = "vapour")
#' vapour_raster_gcp(f)
#'
#' ## a very made-up example with no real use
#' f1 <- system.file("extdata/gcps", "volcano_gcp.tif", package = "vapour")
#' vapour_raster_gcp(f1)
#'
vapour_raster_gcp <- function(x, ...) {
  x <- .check_dsn_single(x)
  if (file.exists(x)) x <- normalizePath(x)
  raster_gcp_gdal_cpp(x)
}

#' GDAL raster subdatasets (variables)
#'
#' A **subdataset** is a collection abstraction for a number of **variables**
#' within a single GDAL source.
#' If there's only one variable the datasource and
#' the variable have the same data source string. If there is more than one the
#' subdatasets have the form **DRIVER:"datasourcename":varname**. Each
#' subdataset name can stand in place of a data source name that has only one
#' variable, so we always treat a source as a subdataset, even if there's only
#' one.
#'
#' Returns a character vector of 'subdatasets'. In the case of a normal data
#' source, with no subdatasets the value is simply the `datasource`.
#'
#' If the raw SDS names contain spaces these are replaced by '%20' escape strings. A specific example is
#' "WCS:https://elevation.nationalmap.gov:443" with request
#' "arcgis/services/3DEPElevation/ImageServer/WCSServer?version=2.0.1&coverage=DEP3Elevation_Hillshade Gray".
#' This function will return "..DEP3Elevation_Hillshade%20Gray".
#' See [wiki post](https://github.com/hypertidy/vapour/wiki/Examples-of-subdatasets) for more details.
#'
#' @param x a data source string, filename, database connection string, or other URL
#'
#' @return character vector of subdataset names, or just the source itself if no SDS are present
#' @export
#'
#' @examples
#' f <- system.file("extdata/gdal", "sds.nc", package = "vapour")
#' ## protect from error with netcdf problems
#' result <- try(vapour_sds_names(f), silent = TRUE)
#' if (!inherits(result, "try-error")) {
#'   print(result)
#' }
#' vapour_sds_names(system.file("extdata", "sst.tif", package = "vapour"))
#'
vapour_sds_names <- function(x) {
  x <- .check_dsn_single(x)
  info <- gdalinfo_internal(x[1L], json = TRUE)
  if (is.na(info)) stop("GDAL was unable to open ^^")
  json <- jsonlite::fromJSON(info)
  if (!is.null(json$metadata$SUBDATASETS)) {
    sources <- unlist(json$metadata$SUBDATASETS[grep("NAME$", names(json$metadata$SUBDATASETS))], use.names = FALSE)
  } else {
    sources <- x[1L]  ## should return 0-vector, or NULL I think
  }
  sources
}
## file: vapour/R/raster-info.R
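## A worked sketch of the geotransform arithmetic documented above: recover
## the extent from 'geotransform' + 'dimension' (as .gt_dim_to_extent() does)
## and compare with the reported 'extent'.
f <- system.file("extdata", "sst.tif", package = "vapour")
ri <- vapour_raster_info(f)
gt <- ri$geotransform
dm <- ri$dimension
c(gt[1], gt[1] + dm[1] * gt[2],    # xmin, xmax
  gt[4] + dm[2] * gt[6], gt[4])    # ymin, ymax (gt[6] is negative for north-up)
ri$extent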
.r_to_gdal_datatype <- function(x) {
  if (nchar(x) == 0 || is.na(x)) return("")
  xout <- toupper(x[1L])
  xout <- c("RAW" = "Byte",
            "INTEGER" = "Int32",
            "DOUBLE" = "Float64",
            "NUMERIC" = "Float64",
            "BYTE" = "Byte",
            "UINT16" = "Int32",
            "INT16" = "Int32",
            "UINT32" = "Int32",
            "INT32" = "Int32",
            "FLOAT32" = "Float32",
            "FLOAT64" = "Float64")[xout]
  if (is.na(xout)) {
    message(sprintf("unknown 'band_output_type = \'%s\'', ignoring", x))
    xout <- ""
  }
  xout
}

#' Raster IO (read)
#'
#' Read a window of data from a GDAL raster source. The first argument is the source
#' name and the second is a 6-element `window` of offset, source dimension, and output dimension.
#'
#' The value of `window` may be input as only 4 elements, in which case the source dimension
#' will be used as the output dimension.
#'
#' This is analogous to the `rgdal` function `readGDAL` with its arguments `offset`, `region.dim`
#' and `output.dim`. There's no semantic wrapper for this in vapour, but see
#' `https://github.com/hypertidy/lazyraster` for one approach.
#'
#' Resampling options will depend on GDAL version, but currently 'NearestNeighbour' (default),
#' 'Average', 'Bilinear', 'Cubic', 'CubicSpline', 'Gauss', 'Lanczos', 'Mode' are potentially
#' available. These are compared internally by converting to lower-case. Detailed use of this is barely tried or tested with vapour, but is
#' a standard facility used in GDAL. The easiest way to compare results is with gdal_translate.
#'
#' There is no write support in vapour.
#'
#' Currently the `window` argument is required. If this argument is unspecified and `native = TRUE` then
#' the default window specification will be used, the entire extent at native resolution. If 'window'
#' is specified and `native = TRUE` then the window is used as-is, with a warning (native is ignored).
#'
#' 'band_output_type' can be 'raw', 'integer', 'double', or case-insensitive versions of the GDAL types
#' 'Byte', 'UInt16', 'Int16', 'UInt32', 'Int32', 'Float32', or 'Float64'. These are mapped to one of the
#' supported types 'Byte' ('== raw'), 'Int32' ('== integer'), or 'Float64' ('== double').
#'
#' @param x data source
#' @param band index of which band to read (1-based)
#' @param window src_offset, src_dim, out_dim
#' @param resample resampling method used (see details)
#' @param ... reserved
#' @param native apply the full native window for read, `FALSE` by default
#' @param sds index of subdataset to read (usually 1)
#' @param set_na specify whether NA values should be set for the NODATA
#' @param band_output_type numeric type of band to apply (else the native type if ''), is mapped to one of 'Byte', 'Int32', or 'Float64'
#' @param unscale default is `TRUE` so native values will be converted by offset and scale to floating point
#' @export
#' @return list of numeric vectors (only one for 'band')
#' @examples
#' f <- system.file("extdata", "sst.tif", package = "vapour")
#' ## a 5*5 window from a 10*10 region
#' vapour_read_raster(f, window = c(0, 0, 10, 10, 5, 5))
#' vapour_read_raster(f, window = c(0, 0, 10, 10, 5, 5), resample = "Lanczos")
#' ## find the information first
#' ri <- vapour_raster_info(f)
#' str(matrix(vapour_read_raster(f, window = c(0, 0, ri$dimXY, ri$dimXY)), ri$dimXY[1]))
#' ## the method can be used to up-sample as well
#' str(matrix(vapour_read_raster(f, window = c(0, 0, 10, 10, 15, 25)), 15))
#'
vapour_read_raster <- function(x, band = 1, window, resample = "nearestneighbour", ..., sds = NULL, native = FALSE, set_na = TRUE, band_output_type = "", unscale = TRUE) {
  x <- .check_dsn_single(x)
  if (!length(native) == 1L || is.na(native[1]) || !is.logical(native)) {
    stop("'native' must be a single 'TRUE' or 'FALSE'")
  }
  band_output_type <- .r_to_gdal_datatype(band_output_type)
  datasourcename <- sds_boilerplate_checks(x, sds = sds)
  resample <- tolower(resample)  ## ensure check internally is lower case
  if (!resample %in% c("nearestneighbour", "average", "bilinear", "cubic", "cubicspline",
                       "gauss", "lanczos", "mode")) {
    warning(sprintf("resample mode '%s' is unknown", resample))
  }
  ri <- vapour_raster_info(x, sds = sds)
  if (native && !missing(window)) warning("'window' is specified, so 'native = TRUE' is ignored")
  if (native && missing(window)) window <- c(0, 0, ri$dimXY, ri$dimXY)
  if (!is.numeric(band) || length(band) < 1 || anyNA(band) || any(band < 1)) {
    stop("'band' must be one or more integers, all greater than 0")
  }
  if (any(band > ri$bands)) stop(sprintf("specified 'band = %i', but maximum band number is %i", max(band), ri$bands))
  ## turn these warning cases into errors here, + tests
  ## rationale is that dev can still call the internal R wrapper function to
  ## get these errors, but not the R user
  stopifnot(length(window) %in% c(4L, 6L))
  ## use src dim as out dim by default
  if (length(window) == 4L) window <- c(window, window[3:4])
  ## these error at the GDAL level
  if (any(window[1:2] < 0)) stop("window cannot index lower than 0")
  if (any(window[1:2] > (ri$dimXY - 1))) stop("window offset cannot index higher than grid dimension")
  ## this does not error in GDAL, gives an out of bounds value
  if (any(window[3:4] < 1)) stop("window size cannot be less than 1")
  ## GDAL error
  if (any((window[1:2] + window[3:4]) > ri$dimXY)) stop("window size cannot exceed grid dimension")
  ## GDAL error
  if (any(window[5:6] < 1)) stop("requested output dimension cannot be less than 1")
  ## pull a swifty here with [[ to return numeric or integer
  ##vals <- raster_io_gdal_cpp(dsn = datasourcename, window = window, band = band, resample = resample[1L], band_output_type = band_output_type)
  vals <- lapply(band, function(iband) {
    raster_io_gdal_cpp(dsn = datasourcename, window = window, band = iband,
                       resample = resample[1L], band_output_type = band_output_type,
                       unscale = unscale)[[1L]]
  })
  if (set_na && !is.raw(vals[[1L]][1L])) {
    for (i in seq_along(vals)) {
      vals[[i]][vals[[i]] == ri$nodata_value] <- NA  ## hardcode to 1 for now
    }
  }
  names(vals) <- sprintf("Band%i", band)
  vals
}

#' type safe(r) raster read
#'
#' These wrappers around [vapour_read_raster()] guarantee single vector output of the nominated type.
#'
#' `*_hex` and `*_chr` are aliases of each other.
#' @inheritParams vapour_read_raster
#' @aliases vapour_read_raster_raw vapour_read_raster_int vapour_read_raster_dbl vapour_read_raster_chr vapour_read_raster_hex
#' @export
#' @return atomic vector of the nominated type raw, int, dbl, or character (hex)
#' @examples
#' f <- system.file("extdata", "sst.tif", package = "vapour")
#' vapour_read_raster_int(f, window = c(0, 0, 5, 4))
#' vapour_read_raster_raw(f, window = c(0, 0, 5, 4))
#' vapour_read_raster_chr(f, window = c(0, 0, 5, 4))
#' plot(vapour_read_raster_dbl(f, native = TRUE), pch = ".", ylim = c(273, 300))
vapour_read_raster_raw <- function(x, band = 1, window, resample = "nearestneighbour", ..., sds = NULL, native = FALSE, set_na = TRUE) {
  if (length(band) > 1) message("_raw output implies one band, using only the first")
  vapour_read_raster(x, band = band, window = window, resample = resample, ..., sds = sds,
                     native = native, set_na = set_na, band_output_type = "Byte")[[1L]]
}

#' @name vapour_read_raster_raw
#' @export
vapour_read_raster_int <- function(x, band = 1, window, resample = "nearestneighbour", ..., sds = NULL, native = FALSE, set_na = TRUE) {
  if (length(band) > 1) message("_int output implies one band, using only the first")
  vapour_read_raster(x, band = band, window = window, resample = resample, ..., sds = sds,
                     native = native, set_na = set_na, band_output_type = "Int32")[[1L]]
}

#' @name vapour_read_raster_raw
#' @export
vapour_read_raster_dbl <- function(x, band = 1, window, resample = "nearestneighbour", ..., sds = NULL, native = FALSE, set_na = TRUE) {
  if (length(band) > 1) message("_dbl output implies one band, using only the first")
  vapour_read_raster(x, band = band, window = window, resample = resample, ..., sds = sds,
                     native = native, set_na = set_na, band_output_type = "Float64")[[1L]]
}

#' @name vapour_read_raster_raw
#' @export
vapour_read_raster_chr <- function(x, band = 1, window, resample = "nearestneighbour", ..., sds = NULL, native = FALSE, set_na = TRUE) {
  ## band must be length 1, 3 or 4
  if (length(band) == 2 || length(band) > 4) message("_chr output implies one, three or four bands ...")
  if (length(band) == 2L) band <- band[1L]
  if (length(band) > 4) band <- band[1:4]
  bytes <- vapour_read_raster(x, band = band, window = window, resample = resample, ..., sds = sds,
                              native = native, set_na = set_na, band_output_type = "Byte")
  ## pack into character with as.raster ...
  ## note that we replicate out *3 if we only have one band ... (annoying of as.raster)
  as.vector(grDevices::as.raster(array(unlist(bytes, use.names = FALSE), c(length(bytes[[1]]), 1, max(c(3, length(bytes)))))))
}

#' @name vapour_read_raster_raw
#' @export
vapour_read_raster_hex <- function(x, band = 1, window, resample = "nearestneighbour", ..., sds = NULL, native = FALSE, set_na = TRUE) {
  vapour_read_raster_chr(x, band = band, window = window, resample = resample, sds = sds,
                         native = native, set_na = set_na, ...)
}

#' Raster warper (reprojection)
#'
#' Read a window of data from a GDAL raster source through a warp specification.
#' The warp specification is provided by 'extent', 'dimension', and 'projection'
#' properties of the transformed output.
#'
#' Any bands may be read, including repeats.
#' #' This function is not memory safe, the source is left on disk but the output #' raster is all computed in memory so please be careful with very large values #' for 'dimension'. `1000 * 1000 * 8` for 1000 columns, 1000 rows and floating #' point double type will be 8Mb. #' #' There's control over the output type, and is auto-detected from the source #' (raw/Byte, integer/Int32, numeric/Float64) or can be set with #' 'band_output_type'. #' #' 'projection' refers to any projection string for a CRS understood by GDAL. #' This includes the full Well-Known-Text specification of a coordinate #' reference system, PROJ strings, "AUTH:CODE" types, and others. See #' [vapour_srs_wkt()] for conversion from PROJ.4 string to WKT, and #' [vapour_raster_info()] and [vapour_layer_info()] for various formats #' available from a data source. Any string accepted by GDAL may be used for #' 'projection' or 'source_projection', including EPSG strings, PROJ4 strings, #' and file names. Note that this argument was named 'wkt' up until version #' 0.8.0. #' #' 'extent' is the four-figure xmin,xmax,ymin,ymax outer corners of corner pixels #' #' 'dimension' is the pixel dimensions of the output, x (ncol) then y (nrow). #' #' Options for missing data are not yet handled, just returned as-is. Note that #' there may be regions of "zero data" in a warped output, separate from #' propagated missing "NODATA" values in the source. #' #' Argument 'source_projection' may be used to assign the projection of the #' source, 'source_extent' to assign the extent of the source. Sometimes both #' are required. Note, this is now better done by creating 'VRT', see [vapour_vrt()] #' for assigning the source projection, extent, and some other options. #' #' If multiple sources are specified via 'x' and either 'source_projection' or #' 'source_extent' are provided, these are applied to every source even if they #' have valid values already. If this is not sensible please use VRT to wrap the #' multiple sources first. #' #' Wild combinations of 'source_extent' and/or 'extent' may be used for #' arbitrary flip orientations, scale and offset. For expert usage only. Old #' versions allowed transform input for target and source but this is now #' disabled (maybe we'll write a new wrapper for that). #' #' @section Options: #' #' The various options are convenience arguments for 'warp options -wo', #' transformation options -to', 'open options -oo', and 'options' for any other #' arguments in gdalwarp. There are no 'creation options -co' or 'dataset output #' options -doo', because these are not supported by the MEM driver. #' #' All 'warp_options' are paired with a '-wo' declaration and similarly for '-to', and '-oo', #' this is purely a convenience, since 'options' itself can be used for these as well but we recommend using #' the individual arguments. #' An example for warp options is `warp_options = c("SAMPLE_GRID=YES", "SAMPLE_STEPS=30")` and one for #' general arguments might be #' 'options = c("-ovr", "AUTO", "-nomd", "-cutline", "/path/to/cut.gpkg", "-crop_to_cutline")'. If they would #' be separated by spaces on the command line then include as separate elements in the options character vector. #' #' #' See [GDALWarpOptions](https://gdal.org/api/gdalwarp_cpp.html#_CPPv4N15GDALWarpOptions16papszWarpOptionsE) for '-wo'. #' #' See [GDAL transformation options](https://gdal.org/api/gdal_alg.html#_CPPv432GDALCreateGenImgProjTransformer212GDALDatasetH12GDALDatasetHPPc) for '-to'. 
#' #' See [GDALWARP command line app](https://gdal.org/programs/gdalwarp.html) for further details. #' #' Note we already apply the following gdalwarp arguments based on input R #' arguments to this function. #' #' \describe{ #' \item{-of}{MEM is hardcoded, but may be extended in future} #' \item{-t_srs}{set via 'projection'} #' \item{-s_srs}{set via 'source_projection'} #' \item{-te}{set via 'extent'} #' \item{-ts}{set via 'dimension'} #' \item{-r}{set via 'resample'} #' \item{-ot}{set via 'band_output_type'} #' \item{-te_srs}{ not supported} #' \item{-a_ullr}{(not a gdalwarp argument, but we do analog) set via 'source_extent' use [vapour_vrt()] instead} #' } #' #' In future all 'source_*' arguments may be deprecated in favour of #' augmentation by 'vapour_vrt()'. #' #' Common inputs for `projection` are WKT variants, 'AUTH:CODE's e.g. #' 'EPSG:3031', the 'OGC:CRS84' for lon,lat WGS84, 'ESRI:code' and other #' authority variants, and datum names such as 'WGS84','NAD27' recognized by #' PROJ itself. #' #' See help for 'SetFromUserInput' in 'OGRSpatialReference', and #' 'proj_create_crs_to_crs'. #' #' [c.proj_create_crs_to_crs](https://proj.org/development/reference/functions.html#c.proj_create_crs_to_crs) #' #' [c.proj_create](https://proj.org/development/reference/functions.html#c.proj_create) #' #' [SetFromUserInput](https://gdal.org/doxygen/classOGRSpatialReference.html#aec3c6a49533fe457ddc763d699ff8796) #' #' @param x vector of data source names (file name or URL or database connection string) #' @param bands index of band/s to read (1-based), may be new order or replicated, or NULL (all bands used, the default) #' @param extent extent of the target warped raster 'c(xmin, xmax, ymin, ymax)' #' @param source_extent extent of the source raster, used to override/augment incorrect source metadata #' @param dimension dimensions in pixels of the warped raster (x, y) #' @param projection projection of warped raster (in Well-Known-Text, or any projection string accepted by GDAL) #' @param set_na NOT IMPLEMENTED logical, should 'NODATA' values be set to `NA` #' @param resample resampling method used (see details in [vapour_read_raster]) #' @param source_projection optional, override or augment the projection of the source (in Well-Known-Text, or any projection string accepted by GDAL) #' @param silent `TRUE` by default, set to `FALSE` to report messages #' @param band_output_type numeric type of band to apply (else the native type if '') can be one of 'Byte', 'Int32', or 'Float64' but see details in [vapour_read_raster()] #' @param ... 
unused #' @param warp_options character vector of options, as in gdalwarp -wo - see Details #' @param transformation_options character vector of options, as in gdalwarp -to see Details #' @param open_options character vector of options, as in gdalwarp -oo - see Details #' @param options character vectors of options as per the gdalwarp command line #' @param nomd if `TRUE` the Metadata tag is removed from the resulting VRT (it can be quite substantial) #' @param overview pick an integer overview from the source (0L is highest resolution, default -1L does nothing) #' @export #' @seealso vapour_read_raster vapour_read_raster_raw vapour_read_raster_int vapour_read_raster_dbl vapour_read_raster_chr vapour_read_raster_hex #' @return list of vectors (only 1 for 'band') of numeric values, in raster order #' @examples #' b <- 4e5 #' f <- system.file("extdata", "sst.tif", package = "vapour") #' prj <- "+proj=aeqd +lon_0=147 +lat_0=-42" #' vals <- vapour_warp_raster(f, extent = c(-b, b, -b, b), #' dimension = c(186, 298), #' bands = 1, #' projection = vapour_srs_wkt(prj), #' warp_options = c("SAMPLE_GRID=YES")) #' #' #' image(list(x = seq(-b, b, length.out = 187), y = seq(-b, b, length.out = 298), #' z = matrix(unlist(vals, use.names = FALSE), 186)[,298:1]), asp = 1) vapour_warp_raster <- function(x, bands = NULL, extent = NULL, dimension = NULL, projection = "", set_na = TRUE, source_projection = NULL, source_extent = 0.0, resample = "near", silent = TRUE, ..., band_output_type = "", warp_options = "", transformation_options = "", open_options = "", options = "", nomd = FALSE, overview = -1L) { x <- .check_dsn_multiple(x) if (!is.null(bands) && (anyNA(bands) || length(bands) < 1 || !is.numeric(bands))) { stop("'bands' must be a valid set of band integers (1-based)") } if (is.null(projection) || is.na(projection[1]) || length(projection) != 1L || !is.character(projection)) { stop("'projection' must be a valid character string for a GDAL/PROJ coordinate reference system (map projection)") } ## deprecated arguments if (!is.null(list(...)$source_wkt)) stop("'source_wkt' is defunct, please use 'source_projection'") if(!is.null(list(...)$source_geotransform)) stop("'source_geotransform' is defunct and now ignored, please use 'source_extent'") if (!is.null(list(...)$geotransform)) stop("'geotransform' is defunct and now ignored, used 'extent'") if (band_output_type == "vrt") { ## special case, do nothing } else { band_output_type <- .r_to_gdal_datatype(band_output_type) } args <- list(...) 
if (projection == "" && "wkt" %in% names(args)) { projection <- args$wkt message("please use 'projection = ' rather than 'wkt = ', use of 'wkt' is deprecated and will be removed in a future version") } ## bands if (is.numeric(bands) && any(bands < 1)) stop("all 'bands' index must be >= 1") if (is.null(bands)) bands <- 0 if(length(bands) < 1 || anyNA(bands) || !is.numeric(bands)) stop("'bands' must be numeric (integer), start at 1") bands <- as.integer(bands) ##if ("projection" %in% names(list(...))) message("argument 'projection' input is ignored, warper functions use 'wkt = ' to specify target projection (any format is ok)") # dud_extent <- FALSE # if (is.null(extent)) { # ## set a dummy and then pass in dud after following tests # extent <- c(0, 1, 0, 1) # dud_extent <- TRUE # } if(!is.numeric(extent)) { if (isS4(extent)) { extent <- c(extent@xmin, extent@xmax, extent@ymin, extent@ymax) } else if (is.matrix(extent)) { extent <- extent[c(1, 3, 2, 4)] } else { stop("'extent' must be numeric 'c(xmin, xmax, ymin, ymax)'") } } if(!length(extent) == 4L) stop("'extent' must be of length 4") if (any(diff(extent)[c(1, 3)] == 0)) stop("'extent' expected to be 'c(xmin, xmax, ymin, ymax)', zero x or y range not permitted") if (length(source_extent) > 1 && any(diff(source_extent)[c(1, 3)] == 0)) stop("'extent' expected to be 'c(xmin, xmax, ymin, ymax)', zero x or y range not permitted") if (!all(diff(extent)[c(1, 3)] > 0)) message("'extent' expected to be 'c(xmin, xmax, ymin, ymax)', negative values detected (ok for expert use)") if (length(source_extent) > 1 && !all(diff(source_extent)[c(1, 3)] > 0)) message("'extent' expected to be 'c(xmin, xmax, ymin, ymax)', negative values detected (ok for expert use)") ## if (dud_extent) extent <- 0.0 ## hmm, we can't rely on gdalwarp to give a sensibleish dimension if not specified, it goes for the native-res dud_dimension <- FALSE ## we dud it if no target projection is set, so you get native from the extent if (is.null(dimension) && nchar(projection) < 1) { ## NO. 
We can't heuristic dimension or extent because we don't have a format to return those values with ## we make a simple raster, the image() thing and go with that ## FIXME: move this hardcode to C, and override with min(native_dimension, dimension) #dimension <- c(512, 512) ## we could ## ## hardcode a default options(vapour.warp.default.dimension = c(512, 512)) ## ## modify hardcode based on extent aspect ratio (not lat corrected) ## ## modify harcode to not exceed the native (I think I like this the best, because it reduces logic churn and delays when ## ## that has to be set in the C++, but we need to send down a message that the default is used (so do it all in C is the summ)) ## set it to native with a max ## set it to native with a warn/override dud_dimension <- TRUE dimension <- c(2, 2) } if(!is.numeric(dimension)) stop("'dimension' must be numeric") if(!length(dimension) == 2L) stop("'dimension must be of length 2'") if(!all(dimension > 0)) stop("'dimension' values must be greater than 0") if(!all(is.finite(dimension))) stop("'dimension' values must be finite and non-missing") if (dud_dimension) dimension <- 0 if (length(source_extent) > 1) { if (!is.numeric(source_extent)) { stop("'source_extent' must be numeric, of length 4 c(xmin, xmax, ymin, ymax)") } if (!all(is.finite(source_extent))) stop("'source_extent' values must be finite and non missing") } if(!is.null(source_projection)) { if (!is.character(source_projection)) stop("source_projection must be character") if(!silent) { if(!nchar(source_projection) > 10) message("short 'source_projection', possibly invalid?") } } if (!silent) { if(!nchar(projection) > 0) message("target 'projection' not provided, read will occur from from source in native projection") } if (is.null(source_projection)) source_projection <- "" resample <- tolower(resample[1L]) if (resample == "gauss") { warning("Gauss resampling not available for warper, using NearestNeighbour") resample <- "near" } rso <- c("near", "bilinear", "cubic", "cubicspline", "lanczos", "average", "mode", "max", "min", "med", "q1", "q3", "sum") #, "rms") if (!resample %in% rso) { warning(sprintf("%s resampling not available for warper, using near", resample)) resample <- "near" } warp_options <- warp_options[!is.na(warp_options)] #if (length(warp_options) < 1) warp_options <- "" transformation_options <- transformation_options[!is.na(transformation_options)] #if (length(transformation_options) < 1) transformation_options <- "" open_options <- open_options[!is.na(open_options)] # if (length(open_options) < 1) open_options <- "" # dataset_output_options <- dataset_output_options[!is.na(dataset_output_options)] ## process all options into one big string list if (any(grepl("-wo", options) | grepl("-to", options) | grepl("-oo", options))) { ##message("manually setting -wo, -to, -oo options detected, prefer use of 'warp_options', 'transformation_options', 'open_options'") } if (nchar(warp_options)[1L] > 0) options <- c(options, rbind("-wo", warp_options)) if (nchar(transformation_options)[1L] > 0) options <- c(rbind("-to", transformation_options)) if (nchar(open_options)[1L] > 0) options <- c(options, rbind("-oo", open_options)) #if (nchar(dataset_output_options)[1L] > 0) options <- c(rbind("-doo", dataset_output_options)) options <- options[!is.na(options)] options <- options[nchar(options) > 0] if (length(options) < 1) options <- "" ## no -r, -te, -t_srs, -ts, -of, -s_srs, -ot, -te_srs we set them manually if (any(grepl("-r", options) | grepl("-te", options) | grepl("-t_srs", options) | 
grepl("-ts", options) | grepl("-of", options) | grepl("-s_srs", options) | grepl("-ot", options))) { stop("manually setting -r, -te, -t_srs, -of, -s_srs, -ot options not allowed \n ( these controlled by arguments 'resample', 'target_extent', 'target_projection', '<MEM>', 'source_projection', 'band_output_type')") } if (any(grepl("-te_srs", options))) stop("setting '-te_srs' projection of target extent is not supported") vals <- warp_in_memory_gdal_cpp(x, source_WKT = source_projection, target_WKT = projection, target_extent = as.numeric(extent), target_dim = as.integer(dimension), bands = as.integer(bands), source_extent = as.numeric(source_extent), resample = resample, silent = silent, band_output_type = band_output_type, options = options, nomd = nomd, overview) # ##// if we Dataset->RasterIO we don't have separated bands' # nbands <- length(vals[[1L]]) / prod(as.integer(dimension)) # if (nbands > 1) vals <- split(vals[[1L]], rep(seq_len(nbands), each = prod(as.integer(dimension)))) # # names(vals) <- make.names(sprintf("Band%i",seq_len(nbands)), unique = TRUE) # if (band_output_type == "vrt") return(vals[[1L]]) bands <- seq_along(vals) names(vals) <- make.names(sprintf("Band%i",bands), unique = TRUE) vals } #' type safe(r) raster warp #' #' These wrappers around [vapour_warp_raster()] guarantee single vector output of the nominated type. #' #' _hex and _chr are aliases of each other. #' @inheritParams vapour_warp_raster #' @aliases vapour_warp_raster_raw vapour_warp_raster_int vapour_warp_raster_dbl vapour_warp_raster_chr vapour_warp_raster_hex #' @name vapour_warp_raster_raw #' @export #' @return atomic vector of the nominated type raw, int, dbl, or character (hex) #' @examples #' b <- 4e5 #' f <- system.file("extdata", "sst.tif", package = "vapour") #' prj <- "+proj=aeqd +lon_0=147 +lat_0=-42" #' bytes <- vapour_warp_raster_raw(f, extent = c(-b, b, -b, b), #' dimension = c(18, 2), #' bands = 1, #' projection = prj) #' # not useful given source type floating point, but works #' str(bytes) vapour_warp_raster_raw <- function(x, bands = NULL, extent = NULL, dimension = NULL, projection = "", set_na = TRUE, source_projection = NULL, source_extent = 0.0, resample = "near", silent = TRUE, ..., warp_options = "", transformation_options = "", open_options = "", options = "") { if (length(bands) > 1 ) message("_raw output implies one band, ignoring all but the first") vapour_warp_raster(x, bands = bands, extent = extent, dimension = dimension, projection = projection, set_na = set_na, source_projection = source_projection, source_extent = source_extent, resample = resample, silent = silent, band_output_type = "Byte", warp_options = warp_options, transformation_options = transformation_options, open_options = open_options, options = options,...)[[1L]] } vapour_warp_raster_vrt <- function(x, bands = NULL, extent = NULL, dimension = NULL, projection = "", set_na = TRUE, source_projection = NULL, source_extent = 0.0, resample = "near", silent = TRUE, ..., warp_options = "", transformation_options = "", open_options = "", options = "") { vapour_warp_raster(x, bands = bands, extent = extent, dimension = dimension, projection = projection, set_na = set_na, source_projection = source_projection, source_extent = source_extent, resample = resample, silent = silent, band_output_type = "vrt", warp_options = warp_options, transformation_options = transformation_options, open_options = open_options, options = options,...)[[1L]] } #' @name vapour_warp_raster_raw #' @export vapour_warp_raster_int <- 
function(x, bands = NULL, extent = NULL, dimension = NULL, projection = "", set_na = TRUE, source_projection = NULL, source_extent = 0.0, resample = "near", silent = TRUE, ..., warp_options = "", transformation_options = "", open_options = "", options = "") { if (length(bands) > 1 ) message("_int output implies one band, ignoring all but the first") vapour_warp_raster(x, bands = bands, extent = extent, dimension = dimension, projection = projection, set_na = set_na, source_projection = source_projection, source_extent = source_extent, resample = resample, silent = silent, band_output_type = "Int32", warp_options = warp_options, transformation_options = transformation_options, open_options = open_options, options = options,...)[[1L]] } #' @name vapour_warp_raster_raw #' @export vapour_warp_raster_dbl <- function(x, bands = NULL, extent = NULL, dimension = NULL, projection = "", set_na = TRUE, source_projection = NULL, source_extent = 0.0, resample = "near", silent = TRUE, ..., warp_options = "", transformation_options = "", open_options = "", options = "") { if (length(bands) > 1 ) message("_dbl output implies one band, ignoring all but the first") vapour_warp_raster(x, bands = bands, extent = extent, dimension = dimension, projection = projection, set_na = set_na, source_projection = source_projection, source_extent = source_extent, resample = resample, silent = silent, band_output_type = "Float64", warp_options = warp_options, transformation_options = transformation_options, open_options = open_options, options = options,...)[[1L]] } #' @name vapour_warp_raster_raw #' @export vapour_warp_raster_chr <- function(x, bands = NULL, extent = NULL, dimension = NULL, projection = "", set_na = TRUE, source_projection = NULL, source_extent = 0.0, resample = "near", silent = TRUE, ..., warp_options = "", transformation_options = "", open_options = "", options = "") { ## band must be length 1, 3 or 4 if (length(bands) == 2 || length(bands) > 4) message("_chr output implies one, three or four bands ...") if (length(bands) == 2L) bands <- bands[1L] if (length(bands) > 4) bands <- bands[1:4] bytes <- vapour_warp_raster(x, bands = bands, extent = extent, dimension = dimension, projection = projection, set_na = set_na, source_projection = source_projection, source_extent = source_extent, resample = resample, silent = silent, band_output_type = "Byte", warp_options = warp_options, transformation_options = transformation_options, open_options = open_options, options = options,...) ## note that we replicate out *3 if we only have one band ... (annoying of as.raster) as.vector(grDevices::as.raster(array(unlist(bytes, use.names = FALSE), c(length(bytes[[1]]), 1, max(c(3, length(bytes))))))) } #' @name vapour_warp_raster_raw #' @export vapour_warp_raster_hex <- function(x, bands = NULL, extent = NULL, dimension = NULL, projection = "", set_na = TRUE, source_projection = NULL, source_extent = 0.0, resample = "near", silent = TRUE, ..., warp_options = "", transformation_options = "", open_options = "", options = "") { vapour_warp_raster_chr(x, bands = bands, extent = extent, dimension = dimension, projection = projection, set_na = set_na, source_projection = source_projection, source_extent = source_extent, resample = resample, silent = silent, warp_options = warp_options, transformation_options = transformation_options, open_options = open_options, options = options,...) }
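
## Illustrative sketch (commented so nothing runs at load time): the typed
## wrappers above are thin shims over vapour_warp_raster(), fixing
## 'band_output_type' and taking the first band. The extent/dimension values
## here are arbitrary example choices.
# f <- system.file("extdata", "sst.tif", package = "vapour")
# ex <- c(-4e5, 4e5, -4e5, 4e5)
# prj <- "+proj=aeqd +lon_0=147 +lat_0=-42"
# d1 <- vapour_warp_raster_dbl(f, extent = ex, dimension = c(64, 64), projection = prj)
# d2 <- vapour_warp_raster(f, extent = ex, dimension = c(64, 64), projection = prj,
#                          band_output_type = "Float64")[[1L]]
# identical(d1, d2)  ## expect TRUE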
/scratch/gouwar.j/cran-all/cranData/vapour/R/raster-input.R
.read_stream <- function(dsn, layer, ..., sql = NA, options = NULL, quiet = FALSE,
                         fid_column_name = character(0),
                         drivers = character(0), ## replace with vrt://..?if=
                         wkt_filter = character(0), ## extent
                         optional = FALSE, return_stream = FALSE) {
  ## vapourize this
  layer <- if (missing(layer)) {
    character()
  } else {
    enc2utf8(layer)
  }

  if (nchar(dsn) < 1L) {
    stop("`dsn` must describe a valid data source name for GDAL (input was an empty string).", call. = FALSE)
  }

  dsn_exists <- file.exists(dsn) ## good to see this finally stuck in sf
  if (length(dsn) == 1 && dsn_exists) {
    dsn = enc2utf8(normalizePath(dsn))
  }

  stream = nanoarrow::nanoarrow_allocate_array_stream()
  info = gdal_dsn_read_vector_stream(stream, dsn, layer, sql, as.character(options), quiet,
                                     drivers, wkt_filter, dsn_exists, dsn_isdb = FALSE, fid_column_name, 80L)
  if (return_stream) return(stream)
  ##// layer has been freed at this point
  # geometry_column <- unlist(lapply(
  #  stream$get_schema()$children, function(s) identical(s$metadata[["ARROW:extension:name"]], "ogc.wkb")
  # ))
  crs <- info[[1L]]
  num_features <- info[[2L]]
  if (num_features == -1) {
    ## -1 signals an unknown count; NULL lets convert_array_stream() read to the end
    num_features <- NULL
  }
  list(data = suppressWarnings(nanoarrow::convert_array_stream(stream, size = num_features)), crs = crs)
}
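
## Internal-use sketch (commented, not run): .read_stream() returns the layer
## data as a data frame (via nanoarrow) plus the layer CRS, or the raw Arrow
## stream when return_stream = TRUE. The gpkg below is a package example file.
# f <- system.file("extdata/sst_c.gpkg", package = "vapour")
# res <- .read_stream(f)
# str(res$data)
# res$crs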
/scratch/gouwar.j/cran-all/cranData/vapour/R/read_stream_internal.R
.check_dsn_single <- function(x) {
  mess <- "%s\n 'x' must be a valid character vector (of length 1) for a GDAL-accessible source (file, '/vsi*/' url, database string, etc)"
  if (is.null(x)) stop(sprintf(mess, "NULL: "))
  if (length(x) < 1L) stop(sprintf(mess, "Missing (empty): "))
  if (is.na(x[1L])) stop(sprintf(mess, "Missing (NA): "))
  if (length(x) > 1L) stop(sprintf(mess, "Longer than 1 element:"))
  if (!is.character(x)) stop(sprintf(mess, "Not a character string:"))
  if (nchar(x) < 1) stop(sprintf(mess, "Not a valid character string:"))
  if (file.exists(x)) {
    ## we only want to normalize out the "~"
    if (grepl("^~", x)) {
      x <- normalizePath(x)
    }
  }
  x
}

.check_dsn_multiple <- function(x) {
  mess <- "%s\n 'x' must be a valid character vector (NAs not allowed) for a GDAL-accessible source (file, '/vsi*/' url, database string, etc)"
  if (is.null(x)) stop(sprintf(mess, "NULL: "))
  if (length(x) < 1L) stop(sprintf(mess, "Missing (empty): "))
  if (anyNA(x)) stop(sprintf(mess, "Missing (NA): "))
  ## if (length(x) > 1L) stop(sprintf(mess, "Longer than 1 element:"))
  if (!is.character(x)) stop(sprintf(mess, "Not a character vector:"))
  if (any(nchar(x) < 1)) stop(sprintf(mess, "Not a valid character vector:"))
  ## as for the single case, we only want to normalize out a leading "~"
  normthem <- grepl("^~", x) & file.exists(x)
  if (any(normthem)) x[normthem] <- normalizePath(x[normthem])
  x
}

.check_dsn_multiple_naok <- function(x) {
  mess <- "%s\n 'x' must be a valid character vector (NAs are allowed) for a GDAL-accessible source (file, '/vsi*/' url, database string, etc)"
  if (is.null(x)) stop(sprintf(mess, "NULL: "))
  if (length(x) < 1L) stop(sprintf(mess, "Missing (empty): "))
  ## if (anyNA(x[1L])) stop(sprintf(mess, "Missing (NA): "))
  ## if (length(x) > 1L) stop(sprintf(mess, "Longer than 1 element:"))
  if (!is.character(x)) stop(sprintf(mess, "Not a character vector:"))
  if (any(nchar(x) < 1, na.rm = TRUE)) stop(sprintf(mess, "Not a valid character vector:"))
  x
}
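
## Behaviour sketch (commented, not run): each checker either returns the
## (possibly tilde-expanded) input or stops with an explanatory message.
# .check_dsn_single("~/data/file.gpkg")        ## normalized if the file exists
# try(.check_dsn_single(c("a.tif", "b.tif")))  ## error: longer than 1 element
# .check_dsn_multiple(c("a.tif", "b.tif"))     ## multiple sources allowed here
# .check_dsn_multiple_naok(c("a.tif", NA))     ## NAs tolerated by this variant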
/scratch/gouwar.j/cran-all/cranData/vapour/R/utils.R
#' Set and query GDAL configuration options
#'
#' These functions can get and set configuration options for GDAL, for fine
#' control over specific GDAL behaviours.
#'
#' Configuration options may also be set as environment variables.
#'
#' See [GDAL config options](https://trac.osgeo.org/gdal/wiki/ConfigOptions) for
#' details on available options.
#'
#' @param option GDAL config name (see Details), character string
#' @param value value for config option, character string
#'
#' @return character string for `vapour_get_config`, integer 1 for successful `vapour_set_config()`
#' @export
#'
#' @examples
#' \dontrun{
#' (orig <- vapour_get_config("GDAL_CACHEMAX"))
#' vapour_set_config("GDAL_CACHEMAX", "64")
#' vapour_get_config("GDAL_CACHEMAX")
#' vapour_set_config("GDAL_CACHEMAX", orig)
#' }
vapour_set_config <- function(option, value) {
  option <- as.character(option[1L])
  value <- as.character(value[1L])
  if (length(option) < 1 || is.na(option) || nchar(option) < 1) {
    stop(sprintf("invalid 'option': %s - must be a valid character string", option))
  }
  if (length(value) < 1 || is.na(value) || nchar(value) < 1) {
    stop(sprintf("invalid 'value': %s - must be a valid character string", value))
  }
  set_gdal_config_cpp(option, value)
}
#' @export
#' @name vapour_set_config
vapour_get_config <- function(option) {
  if (is.null(option) || length(option) < 1 || is.na(option[1L]) || nchar(option[1L]) < 1) {
    stop(sprintf("invalid 'option': %s - must be a valid character string", option))
  }
  get_gdal_config_cpp(option)
}

#' PROJ4 string to WKT
#'
#' Convert a PROJ string to Well Known Text.
#'
#' The function is vectorized because why not, but probably only ever will be
#' used on single element vectors of character strings.
#'
#' Note that no sanitizing is done on inputs; we literally just 'OGRSpatialReference.SetFromUserInput(crs)' and
#' give the output as WKT. If it's an error in GDAL it's an error in R.
#'
#' Common inputs are WKT variants,
#' 'AUTH:CODE's e.g. 'EPSG:3031', the 'OGC:CRS84' for long,lat WGS84, 'ESRI:code' and other authority variants, and
#' datum names such as 'WGS84','NAD27' recognized by PROJ itself.
#'
#' See help for 'SetFromUserInput' in 'OGRSpatialReference', and 'proj_create_crs_to_crs'.
#'
#' [c.proj_create_crs_to_crs](https://proj.org/development/reference/functions.html#c.proj_create_crs_to_crs)
#'
#' [c.proj_create](https://proj.org/development/reference/functions.html#c.proj_create)
#'
#' [SetFromUserInput](https://gdal.org/doxygen/classOGRSpatialReference.html#aec3c6a49533fe457ddc763d699ff8796)
#'
#' @param crs projection string, see Details.
#' @export
#' @return WKT2 projection string
#' @examples
#' vapour_srs_wkt("+proj=laea +datum=WGS84")
vapour_srs_wkt <- function(crs) {
  do.call(c, lapply(crs, proj_to_wkt_gdal_cpp))
}

#' Is the CRS string representative of angular coordinates
#'
#' Returns `TRUE` if this is longitude latitude data. Missing, malformed, zero-length values are disallowed.
#'
#' @param crs character string of length 1
#'
#' @return logical value `TRUE` for lonlat, `FALSE` otherwise
#' @export
#'
#' @examples
#' vapour_crs_is_lonlat("+proj=laea")
#' vapour_crs_is_lonlat("+proj=longlat")
#' vapour_crs_is_lonlat("+init=EPSG:4326")
#' vapour_crs_is_lonlat("OGC:CRS84")
#' vapour_crs_is_lonlat("WGS84")
#' vapour_crs_is_lonlat("NAD27")
#' vapour_crs_is_lonlat("EPSG:3031")
vapour_crs_is_lonlat <- function(crs) {
  crs_in <- crs[1L]
  if (length(crs) > 1L) message("multiple crs input is not supported, using the first only")
  if (is.na(crs_in) || is.null(crs_in) || length(crs_in) < 1L || !nzchar(crs_in)) stop(sprintf("problem with input crs: %s", crs_in))
  crs_is_lonlat_cpp(crs_in)
}

#' Summary of available geometry
#'
#' Read properties of geometry from a source, optionally after SQL execution.
#'
#' Use `limit_n` to arbitrarily limit the number of features queried.
#' @inheritParams vapour_read_geometry
#'
#' @return list containing the following
#' * `FID` the feature id value (an integer, usually sequential)
#' * `valid_geometry` logical value if a non-empty geometry is available
#' * `type` integer value of geometry type from [GDAL enumeration](https://gdal.org/doxygen/ogr__core_8h.html#a800236a0d460ef66e687b7b65610f12a)
#' * `xmin, xmax, ymin, ymax` numeric values of the extent (bounding box) of each geometry
#' @export
#'
#' @examples
#' file <- "list_locality_postcode_meander_valley.tab"
#' mvfile <- system.file(file.path("extdata/tab", file), package="vapour")
#' vapour_geom_summary(mvfile, limit_n = 3L)
#'
#' gsum <- vapour_geom_summary(mvfile)
#' plot(NA, xlim = range(c(gsum$xmin, gsum$xmax), na.rm = TRUE),
#'          ylim = range(c(gsum$ymin, gsum$ymax), na.rm = TRUE))
#' rect(gsum$xmin, gsum$ymin, gsum$xmax, gsum$ymax)
#' text(gsum$xmin, gsum$ymin, labels = gsum$FID)
vapour_geom_summary <- function(dsource, layer = 0L, sql = "", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(x = dsource, layername = layer)
  extent <- validate_extent(extent, sql, warn = FALSE)
  extents <- vapour_read_extent(dsource = dsource, layer = layer, sql = sql, limit_n = limit_n, skip_n = skip_n, extent = extent)
  fids <- vapour_read_names(dsource = dsource, layer = layer, sql = sql, limit_n = limit_n, skip_n = skip_n, extent = extent)
  types <- vapour_read_type(dsource = dsource, layer = layer, sql = sql, limit_n = limit_n, skip_n = skip_n, extent = extent)
  na_geoms <- unlist(lapply(extents, anyNA), use.names = FALSE)
  list(FID = fids, valid_geometry = !na_geoms, type = types,
       xmin = unlist(lapply(extents, "[", 1L), use.names = FALSE),
       xmax = unlist(lapply(extents, "[", 2L), use.names = FALSE),
       ymin = unlist(lapply(extents, "[", 3L), use.names = FALSE),
       ymax = unlist(lapply(extents, "[", 4L), use.names = FALSE))
}

#' GDAL version and drivers.
#'
#' Return information about the GDAL library in use.
#'
#' `vapour_gdal_version` returns the version of GDAL as a string. This corresponds to the "--version"
#' as described for "GDALVersionInfo". [GDAL documentation](https://gdal.org/).
#'
#' `vapour_all_drivers` returns the names and capabilities of all available drivers, in a list. This contains:
#' * `driver` the driver (short) name
#' * `name` the (long) description name
#' * `vector` logical vector indicating a vector driver
#' * `raster` logical vector indicating a raster driver
#' * `create` driver can create (note vapour provides no write capacity)
#' * `copy` driver can copy (note vapour provides no write capacity)
#' * `virtual` driver has virtual capabilities ('vsi')
#'
#' `vapour_driver()` returns the short name of the driver, e.g. 'GPKG' or 'GTiff'; to get the
#' long name and other properties use `vapour_all_drivers()` and match on 'driver'.
#'
#' @export
#' @aliases vapour_all_drivers vapour_driver
#' @rdname GDAL-library
#' @return please see Details, character vectors or lists of character vectors
#' @examples
#' vapour_gdal_version()
#'
#' drv <- vapour_all_drivers()
#'
#' f <- system.file("extdata/sst_c.gpkg", package = "vapour")
#' vapour_driver(f)
#'
#' as.data.frame(drv)[match(vapour_driver(f), drv$driver), ]
vapour_gdal_version <- function() {
  version_gdal_cpp()
}
#' @rdname GDAL-library
#' @export
vapour_all_drivers <- function() {
  drivers_list_gdal_cpp()
}
#' @rdname GDAL-library
#' @export
#' @param dsource data source string (i.e. file name or URL or database connection string)
vapour_driver <- function(dsource) {
  dsource <- .check_dsn_single(dsource)
  driver_id_gdal_cpp(dsource)
}
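
## Quick sketch (commented, not run): the helpers above compose, e.g. a PROJ
## string round-trips through WKT2 and can be tested for angular coordinates.
# wkt <- vapour_srs_wkt("+proj=longlat +datum=WGS84")
# vapour_crs_is_lonlat(wkt)                            ## expect TRUE
# vapour_crs_is_lonlat(vapour_srs_wkt("+proj=laea"))   ## expect FALSE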
/scratch/gouwar.j/cran-all/cranData/vapour/R/vapour-gdal-library.R
#' vapour
#'
#' A lightweight GDAL API package for R.
#'
#'
#' Provides low-level access to 'GDAL' functionality for R packages. The aim is
#' to minimize the level of interpretation put on the 'GDAL' facilities, to
#' enable direct use of it for a variety of purposes. 'GDAL' is the 'Geospatial
#' Data Abstraction Library', a translator for raster and vector geospatial data
#' formats that presents a single raster abstract data model and single vector
#' abstract data model to the calling application for all supported formats
#' <https://gdal.org/>.
#'
#' Lightweight means we access parts of the GDAL API as near as possible to
#' their native usage. GDAL is not a lightweight library, but provides a very
#' nice abstraction over format details for a very large number of different
#' formats.
#'
#' Functions for raster and vector sources are included.
#'
#' \tabular{ll}{
#' \code{\link{vapour_all_drivers}} \tab list of all available drivers, with type and features \cr
#' \code{\link{vapour_driver}} \tab report short name of driver that will be used for a data source \cr
#' \code{\link{vapour_gdal_version}} \tab report version of GDAL in use \cr
#' \code{\link{vapour_srs_wkt}} \tab produce WKT projection string from various projection string inputs \cr
#' \code{\link{vapour_vsi_list}} \tab report contents of VSI sources \cr
#' }
#'
#' \tabular{ll}{
#' \code{\link{vapour_raster_gcp}} \tab return internal ground control points, if present \cr
#' \code{\link{vapour_raster_info}} \tab structural metadata of a source \cr
#' \code{\link{vapour_read_raster}} \tab read data direct from a window of a raster band source \cr
#' \code{\link{vapour_sds_names}} \tab list individual raster sources in a source containing subdatasets \cr
#' \code{\link{vapour_warp_raster}} \tab read data direct from a raster source into a specific window \cr
#' }
#'
#' \tabular{ll}{
#' \code{\link{vapour_driver}} \tab report name of the driver used for a given source \cr
#' \code{\link{vapour_geom_name}} \tab report attribute name of geometry \cr
#' \code{\link{vapour_geom_summary}} \tab report simple properties of each feature geometry \cr
#' \code{\link{vapour_layer_names}} \tab list names of vector layers in a data source \cr
#' \code{\link{vapour_layer_info}} \tab list of data source, driver, layer name/s, fields, feature count, projection \cr
#' \code{\link{vapour_read_extent}} \tab read the extent, or bounding box, of geometries in a layer \cr
#' \code{\link{vapour_read_fields}} \tab read attributes of features in a layer, the columnar data associated with each geometry \cr
#' \code{\link{vapour_read_geometry}} \tab read geometry in binary (blob, WKB) form \cr
#' \code{\link{vapour_read_geometry_ia}} \tab read geometry by index, arbitrary \cr
#' \code{\link{vapour_read_geometry_ij}} \tab read geometry by sequential index, i to j \cr
#' \code{\link{vapour_read_geometry_text}} \tab read geometry in text form, various formats \cr
#' \code{\link{vapour_read_names}} \tab read the 'names' of features in a layer, the 'FID' \cr
#' \code{\link{vapour_read_type}} \tab read the GDAL types of attributes \cr
#' \code{\link{vapour_report_fields}} \tab report internal type of each attribute by name \cr
#' }
#'
#' As far as possible vapour aims to minimize the level of interpretation
#' provided for the functions, so that developers can choose how things are
#' implemented. Functions return raw lists or vectors rather than data frames or
#' classed types.
#'
#' @section options:
#'
#' The following options can be set to control global behaviour.
#'
#' \tabular{ll}{
#' \code{Sys.getenv("vapour.sql.dialect")} \tab the current SQL dialect in use \cr
#' }
#'
#' @section SQL dialect:
#'
#' The SQL dialect can be set to "" (empty string), "OGRSQL", or "SQLITE".
#'
#' The empty string indicates that the native dialect will be used; see
#' [OGRSQL and SQLITE for GDAL, accessed
#' 2022-11-11](https://gdal.org/user/ogr_sql_sqlite_dialect.html) and the
#' [GDAL_DMD_SUPPORTED_SQL_DIALECTS development documentation (since GDAL
#' 3.6)](https://gdal.org/api/raster_c_api.html#c.GDAL_DMD_SUPPORTED_SQL_DIALECTS).
#'
#' Setting "NATIVE" as an alias for "" is quite recent and has not been tested with vapour. Similarly, no testing has been done
#' with non OGRSQL-native or SQLITE-native drivers yet.
#'
#' @name vapour-package
#' @aliases vapour
#' @docType package
#' @useDynLib vapour
#' @importFrom Rcpp sourceCpp
NULL

#' SST contours
#'
#' Southern Ocean GHRSST contours in sf data frame from 2017-07-28, read from
#'
#' podaac-ftp.jpl.nasa.gov/allData/ghrsst/data/GDS2/L4
#' GLOB/JPL/MUR/v4.1/2017/209/
#' 20170728090000-JPL-L4_GHRSST-SSTfnd-MUR-GLOB-v02.0-fv04.1.nc
#'
#' See data-raw/sst_c.R for the derivation of the column \code{sst_c}, in Celsius.
#'
#' Also stored in GeoPackage format in
#' \code{system.file("extdata/sst_c.gpkg", package = "vapour")}
#' @docType data
#' @name sst_c
#' @examples
#' f <- system.file("extdata/sst_c.gpkg", package = "vapour")
#'
#' ## create a class-less form of the data in the 'sst_c.gpkg' file with GeoJSON geometry
#' atts <- vapour_read_fields(f)
#' dat <- as.data.frame(atts, stringsAsFactors = FALSE)
#' dat[["json"]] <- vapour_read_geometry_text(f)
#' names(dat)
#' names(sst_c)
NULL

#' Example WKT coordinate reference system
#'
#' A Lambert Azimuthal Equal Area Well-Known-Text string for a region
#' centred on Tasmania.
#'
#' Created from '+proj=laea +lon_0=147 +lat_0=-42 +datum=WGS84'.
#' For use in a future warping example.
#' @docType data
#' @name tas_wkt
NULL
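
## Usage sketch (commented, not run): the SQL dialect documented above is an
## environment variable, so it can be switched at any time in a session.
# Sys.setenv("vapour.sql.dialect" = "SQLITE")  ## use the SQLite dialect
# Sys.getenv("vapour.sql.dialect")
# Sys.setenv("vapour.sql.dialect" = "")        ## back to the native dialect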
/scratch/gouwar.j/cran-all/cranData/vapour/R/vapour-package.R
## find index of a layer name, or error
index_layer <- function(x, layername) {
  if (is.factor(layername)) {
    warning("layer is a factor, converting to character")
    layername <- levels(layername)[layername]
  }
  available_layers <- try(vapour_layer_names(x), silent = TRUE)
  if (inherits(available_layers, "try-error")) stop(sprintf("cannot open data source: %s", x))
  idx <- match(layername, available_layers)
  if (length(idx) != 1 || !is.numeric(idx)) stop(sprintf("cannot find layer: %s", layername))
  if (is.na(idx) || idx < 1 || idx > length(available_layers)) stop(sprintf("layer index not found for: %s \n\nto determine, compare 'vapour_layer_names(dsource)'", layername))
  idx - 1L ## layer is 0-based
}
validate_limit_n <- function(x) {
  if (is.null(x)) {
    x <- 0L
  } else {
    if (x < 1) stop("limit_n must be 1 or greater (or NULL)")
  }
  stopifnot(is.numeric(x))
  x
}
validate_extent <- function(extent, sql, warn = TRUE) {
  if (length(extent) > 1) {
    if (is.matrix(extent) && all(colnames(extent) == c("min", "max")) && all(rownames(extent) == c("x", "y"))) {
      extent <- as.vector(t(extent))
    }
    if (inherits(extent, "bbox")) extent <- extent[c("xmin", "xmax", "ymin", "ymax")]
    if (!length(extent) == 4) stop("'extent' must be length 4 'c(xmin, xmax, ymin, ymax)'")
  } else {
    if (inherits(extent, "Extent")) extent <- c(xmin = extent@xmin, xmax = extent@xmax, ymin = extent@ymin, ymax = extent@ymax)
  }
  if (is.na(extent[1])) extent = 0.0
  if (warn && length(extent) == 4L && nchar(sql) < 1) warning("'extent' given but 'sql' query is empty, extent clipping will be ignored")
  if (!is.numeric(extent)) stop("extent must be interpretable as xmin, xmax, ymin, ymax")
  extent
}

#' Read GDAL layer names
#'
#' Obtain the names of available layers from a GDAL vector source.
#'
#' Some vector sources have multiple layers while many have only one. Shapefiles
#' for example have only one, and the single layer gets the file name with no path
#' and no extension. GDAL provides a quirk for shapefiles in that a directory may
#' act as a data source, and any shapefile in that directory acts like a layer of that
#' data source. This is a little like the one-or-many sleight that exists for raster
#' data sources with subdatasets (there's no way to virtualize single rasters into
#' a data source with multiple subdatasets, oh except by using VRT....)
#'
#' See [vapour_sds_names] for more on the multiple topic.
#'
#' @inheritParams vapour_read_geometry
#' @param ... arguments ignored, for deprecated compatibility (no 'sql' argument any longer)
#' @return character vector of layer names
#'
#' @examples
#' file <- "list_locality_postcode_meander_valley.tab"
#' mvfile <- system.file(file.path("extdata/tab", file), package="vapour")
#' vapour_layer_names(mvfile)
#' @export
vapour_layer_names <- function(dsource, ...) {
  dsource <- .check_dsn_single(dsource)
  if ("sql" %in% names(list(...))) {
    message("old 'sql' argument is unused")
  }
  layer_names_gdal_cpp(dsn = dsource)
}

#' Read geometry column name
#'
#' There might be one or more geometry column names, or it might be an empty string.
#'
#' It might be "", or "geom", or "_ogr_geometry_" - the last is a default name
#' given when SQL is executed by GDAL but there was no geometry name, and 'SELECT * ' or
#' equivalent was used.
#'
#' This feature is required by the DBI backend work in RGDALSQL, so that when `SELECT * ` is used
#' we can give a reasonable name to the geometry column which is obtained separately.
#'
#' @inheritParams vapour_read_geometry
#' @export
#' @return character vector of geometry column name/s
#' @examples
#' file <- system.file("extdata/tab/list_locality_postcode_meander_valley.tab", package = "vapour")
#' vapour_geom_name(file) ## empty string
vapour_geom_name <- function(dsource, layer = 0L, sql = "") {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  vapour_geom_name_cpp(dsource = dsource, layer = layer, sql = sql, ex = 0)
}

#' Read feature names
#'
#' Obtains the internal 'Feature ID (FID)' for a data source.
#'
#' This may be virtual (created by GDAL for the SQL interface) and may be 0- or
#' 1-based. Some drivers have actual names, and they are persistent and
#' arbitrary. Please use with caution; this function can return the current
#' FIDs, but there's no guarantee of what it represents for subsequent access.
#'
#' An earlier version used 'OGRSQL' to obtain these names, which was slow for some
#' drivers and also clashed with independent use of the `sql` argument.
#' [vapour_read_names()] is an older name, aliased to [vapour_read_fids()].
#' @inheritParams vapour_read_geometry
#' @export
#' @return character vector of geometry id 'names'
#' @examples
#' file <- "list_locality_postcode_meander_valley.tab"
#' mvfile <- system.file(file.path("extdata/tab", file), package="vapour")
#' range(fids <- vapour_read_names(mvfile))
#' length(fids)
vapour_read_fids <- function(dsource, layer = 0L, sql = "", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  limit_n <- validate_limit_n(limit_n)
  skip_n <- skip_n[1L]
  if (skip_n < 0) stop("skip_n must be 0, or higher")
  extent <- validate_extent(extent, sql)
  fids <- read_fids_gdal_cpp(dsource, layer = layer, sql = sql, limit_n = limit_n, skip_n = skip_n, ex = extent)
  fids
}
#' @name vapour_read_fids
#' @export
vapour_read_names <- function(dsource, layer = 0L, sql = "", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  vapour_read_fids(dsource, layer, sql, limit_n, skip_n, extent)
}

#' Read feature field types.
#'
#' Obtains the internal type-constant name for the data attributes in a source.
#'
#' Use this to compare the interpreted versions converted into R types by
#' `vapour_read_fields`.
#'
#' This and [vapour_read_fields()] are aliased to older versions named 'vapour_report_attributes()' and
#' 'vapour_read_attributes()', but "field" is a clearer and more sensible name (in our opinion).
#'
#' These are defined for the enum OGRFieldType in GDAL itself.
#' \url{https://gdal.org/doxygen/ogr__core_8h.html#a787194bea637faf12d61643124a7c9fc}
#'
#' @inheritParams vapour_read_geometry
#' @export
#' @return named character vector of the GDAL types for each field
#' @examples
#' file <- "list_locality_postcode_meander_valley.tab"
#' mvfile <- system.file(file.path("extdata/tab", file), package="vapour")
#' vapour_report_fields(mvfile)
#'
#' ## modified by sql argument
#' vapour_report_fields(mvfile,
#'   sql = "SELECT POSTCODE, NAME FROM list_locality_postcode_meander_valley")
vapour_report_fields <- function(dsource, layer = 0L, sql = "") {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  report_fields_gdal_cpp(dsource, layer, sql = sql)
}

#' @name vapour_report_fields
#' @export
vapour_report_attributes <- function(dsource, layer = 0L, sql = "") {
  dsource <- .check_dsn_single(dsource)
  vapour_report_fields(dsource, layer, sql)
}

#' Read feature field data
#'
#' Read feature fields (attributes), optionally after SQL execution.
#'
#' Internal types are not fully supported; there are straightforward conversions
#' for numeric, integer (32-bit) and string types. Date, Time, DateTime are
#' returned as character, and Integer64 is returned as numeric.
#'
#' @inheritParams vapour_read_geometry
#'
#' @return list of vectors, one for each field in the source, each will be the same length which will
#' depend on the values of 'skip_n', 'limit_n', 'sql', and the available records in the source. The
#' types will be raw, numeric, integer, character, logical depending on the available mapping to the types
#' in the source for the data there to R's native vectors.
#'
#' @examples
#' file <- "list_locality_postcode_meander_valley.tab"
#' mvfile <- system.file(file.path("extdata/tab", file), package="vapour")
#' att <- vapour_read_fields(mvfile)
#' str(att)
#' sq <- "SELECT * FROM list_locality_postcode_meander_valley WHERE FID < 5"
#' (att <- vapour_read_fields(mvfile, sql = sq))
#' pfile <- "list_locality_postcode_meander_valley.tab"
#' dsource <- system.file(file.path("extdata/tab", pfile), package="vapour")
#' SQL <- "SELECT NAME FROM list_locality_postcode_meander_valley WHERE POSTCODE < 7300"
#' vapour_read_fields(dsource, sql = SQL)
#' @export
vapour_read_fields <- function(dsource, layer = 0L, sql = "", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  limit_n <- validate_limit_n(limit_n)
  extent <- validate_extent(extent, sql)
  read_fields_gdal_cpp(dsn = dsource, layer = layer, sql = sql, limit_n = limit_n, skip_n = skip_n, ex = extent, fid_column_name = character(0))
}

#' @name vapour_read_fields
#' @export
vapour_read_attributes <- function(dsource, layer = 0L, sql = "", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  vapour_read_fields(dsource, layer, sql, limit_n, skip_n, extent)
}
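
## Paging sketch (commented, not run): 'skip_n' applies before 'limit_n' (and
## after any 'sql'), so field reads can be chunked from a large layer.
# f <- system.file("extdata/tab/list_locality_postcode_meander_valley.tab", package = "vapour")
# page1 <- vapour_read_fields(f, limit_n = 10)
# page2 <- vapour_read_fields(f, limit_n = 10, skip_n = 10)
# lengths(page1); lengths(page2)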
/scratch/gouwar.j/cran-all/cranData/vapour/R/vapour_input_attributes.R
#' Read GDAL layer info
#'
#' Read GDAL layer information for a vector data source.
#'
#' Set `extent` and/or `count` to `FALSE` to avoid calculating them if not needed, as this might take some time.
#'
#' The layer information elements are
#'
#' \describe{
#' \item{dsn}{the data source name}
#' \item{driver}{the short name of the driver used}
#' \item{layer}{the name of the layer queried}
#' \item{layer_names}{the name/s of all available layers (see [vapour_layer_names])}
#' \item{fields}{a named vector of field types (see [vapour_report_fields])}
#' \item{count}{the number of features in this data source (can be turned off to avoid the extra work, see `count`)}
#' \item{extent}{the extent of all features xmin, xmax, ymin, ymax (can be turned off to avoid the extra work, see `extent`)}
#' \item{projection}{a list of character strings, see next}
#' }
#'
#' `$projection` is a list of various formats of the projection metadata.
#' Use `$projection$Wkt` as most authoritative, but we don't enter into the discussion or limit what
#' might be done with this (that's up to you). Currently we see
#' `c("Proj4", "MICoordSys", "PrettyWkt", "Wkt", "EPSG", "XML")` as names of this `$projection` element.
#'
#' To get the geometry type/s of a layer see [vapour_read_type()].
#'
#' @inheritParams vapour_read_geometry
#' @param ... unused, reserved for future use
#' @param count logical to control if count is calculated and returned, TRUE by default (set to FALSE to avoid the extra calculation, a missing value is then returned)
#' @return list with a list of character vectors of projection metadata, see details
#' @export
#' @seealso vapour_geom_name vapour_layer_names vapour_report_fields vapour_read_fields vapour_driver vapour_read_names
#' @examples
#' file <- "list_locality_postcode_meander_valley.tab"
#' ## A MapInfo TAB file with polygons
#' mvfile <- system.file(file.path("extdata/tab", file), package="vapour")
#' info <- vapour_layer_info(mvfile)
#' names(info$projection)
#'
#' ## info depends on the query/spatial-filter
#' vapour_layer_info(mvfile, extent = c(412000, 420000, 5352612.8, 5425154.3),
#'                   sql = "SELECT * FROM list_locality_postcode_meander_valley")$count
vapour_layer_info <- function(dsource, layer = 0L, sql = "", extent = NA, count = TRUE, ...)
{
  dsource <- .check_dsn_single(dsource)
  layer_names <- try(vapour_layer_names(dsource))
  if (inherits(layer_names, "try-error") && grepl("rest.*FeatureServer/0", dsource)) {
    message("looks like dsource is an ESRI Feature Service, try constructing the DSN as \n https://<_>rest/services/<sources>/<layerset>/FeatureServer/0/query?where=objectid+%3D+objectid&outfields=*&f=json\n see the ESRIJSON document at gdal.org and this example https://gist.github.com/mdsumner/c54bdc119accf95138f5ad7ab574337d\n")
  }
  layer_name <- layer
  if (!is.numeric(layer)) layer <- match(layer_name, layer_names) - 1
  if (is.numeric(layer_name)) layer_name <- layer_names[layer + 1]
  if (is.na(layer)) stop(sprintf("layer: %s not found", layer_name))
  driver <- vapour_driver(dsource)
  geom_name <- vapour_geom_name(dsource, layer, sql)
  fields <- vapour_report_fields(dsource, layer, sql)
  if (count) {
    cnt <- feature_count_gdal_cpp(dsource, layer, sql, extent)
  } else {
    cnt <- NA_integer_
  }
  ext <- vapour_layer_extent(dsource, layer, sql, extent = extent)
  list(dsn = dsource, driver = driver, layer = layer_names[layer + 1], layer_names = layer_names,
       fields = fields,
       count = cnt,
       extent = ext,
       projection = projection_info_gdal_cpp(dsource, layer = layer, sql = sql)[c("Wkt", "Proj4", "EPSG")])
}

#' Read layer extent
#'
#' Extent of all features in an entire layer, possibly after execution of a sql query and
#' input extent filter.
#'
#' @inheritParams vapour_read_geometry
#' @param extent optional extent (xmin,xmax,ymin,ymax)
#' @param ... unused
#'
#' @return vector of numeric values xmin,xmax,ymin,ymax
#' @seealso vapour_read_extent vapour_layer_info
#' @export
#'
#' @examples
#' file <- "list_locality_postcode_meander_valley.tab"
#' ## A MapInfo TAB file with polygons
#' mvfile <- system.file(file.path("extdata/tab", file), package="vapour")
#' vapour_layer_extent(mvfile)
vapour_layer_extent <- function(dsource, layer = 0L, sql = "", extent = 0, ...) {
  dsource <- .check_dsn_single(dsource)
  layer_names <- vapour_layer_names(dsource)
  layer_name <- layer
  if (!is.numeric(layer)) layer <- match(layer_name, layer_names) - 1
  if (is.numeric(layer_name)) layer_name <- layer_names[layer + 1]
  if (is.na(layer)) stop(sprintf("layer: %s not found", layer_name))
  extent <- validate_extent(extent, sql)
  vapour_layer_extent_cpp(dsource, layer, sql, extent)
}

#' Read GDAL feature geometry
#'
#' Read GDAL geometry as binary blob, text, or numeric extent.
#'
#' `vapour_read_geometry` will read features as binary WKB, `vapour_read_geometry_text` as various text formats (geo-json, wkt, kml, gml), and
#'
#' `vapour_read_extent` as a numeric extent, which is the native bounding box, the four numbers (in this order) `xmin, xmax, ymin, ymax`.
#' For each function an optional SQL string will be evaluated against the data source before reading.
#'
#' `vapour_read_geometry_ia` will read features by *arbitrary index*, so any integer between 0 and one less than the number of
#' features. These may be duplicated. If 'ia' is greater than the highest index NULL is returned, but if less than 0 the function will error.
#'
#' `vapour_read_geometry_ij` will read features by *index range*, so two numbers to read every feature between those limits inclusively.
#' 'i' and 'j' must be increasing.
#'
#' `vapour_read_type` will read the (wkb) type of the geometry as an integer.
#' These are
#' `0` unknown, `1` Point, `2` LineString, `3` Polygon, `4` MultiPoint, `5` MultiLineString,
#' `6` MultiPolygon, `7` GeometryCollection, and the other more exotic types listed in "api/vector_c_api.html" from the
#' GDAL home page (as at October 2020). A missing value 'NA' indicates an empty geometry.
#'
#' Note that `limit_n` and `skip_n` interact with the effect of `sql`: first the query is executed on the data source, then
#' while looping through available features `skip_n` features are ignored, and then a feature-count begins and the loop
#' is stopped if `limit_n` is reached.
#'
#' Note that `extent` applies to the 'SpatialFilter' of 'ExecuteSQL': https://gdal.org/user/ogr_sql_dialect.html#executesql.
#' @param dsource data source name (path to file, connection string, URL)
#' @param layer integer of layer to work with, defaults to the first (0), or the name of the layer
#' @param sql if not empty this is executed against the data source (layer will be ignored)
#' @param textformat indicate text output format, available are "json" (default), "gml", "kml", "wkt"
#' @param limit_n an arbitrary limit to the number of features scanned
#' @param skip_n an arbitrary number of features to skip
#' @param extent apply an arbitrary extent, only when 'sql' used (must be 'ex = c(xmin, xmax, ymin, ymax)' but sp bbox, sf bbox, and raster extent also accepted)
#' @param ia an arbitrary index, integer vector with values between 0 and one less the number of features, duplicates allowed and arbitrary order is ok
#' @param ij a range index, integer vector of length two with values between 0 and one less the number of features, this range of geometries is returned
#' @return for [vapour_read_geometry()], [vapour_read_geometry_ia()] and [vapour_read_geometry_ij()] a list of raw vectors of
#' geometry, for [vapour_read_extent()] a list of numeric vectors each with 'xmin,xmax,ymin,ymax' respectively for each geometry,
#' for [vapour_read_type()] an integer vector. See Details for more information.
#' @examples
#' file <- "list_locality_postcode_meander_valley.tab"
#' ## A MapInfo TAB file with polygons
#' mvfile <- system.file(file.path("extdata/tab", file), package="vapour")
#' ## A shapefile with points
#' pfile <- system.file("extdata/point.shp", package = "vapour")
#'
#' ## raw binary WKB points in a list
#' ptgeom <- vapour_read_geometry(pfile)
#' ## create a filter query to ensure data read is small
#' SQL <- "SELECT FID FROM list_locality_postcode_meander_valley WHERE FID < 3"
#' ## polygons in raw binary (WKB)
#' plgeom <- vapour_read_geometry(mvfile, sql = SQL)
#' ## polygons in raw text (GeoJSON)
#' txtjson <- vapour_read_geometry_text(mvfile, sql = SQL)
#'
#' ## polygon extents in a list xmin, xmax, ymin, ymax
#' exgeom <- vapour_read_extent(mvfile)
#'
#' ## points in raw text (GeoJSON)
#' txtpointjson <- vapour_read_geometry_text(pfile)
#' ## points in raw text (WKT)
#' txtpointwkt <- vapour_read_geometry_text(pfile, textformat = "wkt")
#' @export
#' @name vapour_read_geometry
vapour_read_geometry <- function(dsource, layer = 0L, sql = "", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  limit_n <- validate_limit_n(limit_n)
  extent <- validate_extent(extent, sql)
  read_geometry_gdal_cpp(dsn = dsource, layer = layer, sql = sql, what = "wkb", textformat = "", limit_n = limit_n, skip_n = skip_n, ex = extent)
}

#' @export
#' @rdname vapour_read_geometry
vapour_read_geometry_text <- function(dsource, layer = 0L, sql = "", textformat = "json", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  textformat <- match.arg(tolower(textformat), c("json", "gml", "kml", "wkt"))
  limit_n <- validate_limit_n(limit_n)
  extent <- validate_extent(extent, sql)
  read_geometry_gdal_cpp(dsn = dsource, layer = layer, sql = sql, what = textformat, textformat = "", limit_n = limit_n, skip_n = skip_n, ex = extent)
}

#' @rdname vapour_read_geometry
#' @export
vapour_read_extent <- function(dsource, layer = 0L, sql = "", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  limit_n <- validate_limit_n(limit_n)
  extent <- validate_extent(extent, sql)
  out <- read_geometry_gdal_cpp(dsn = dsource, layer = layer, sql = sql, what = "extent", textformat = "", limit_n = limit_n, skip_n = skip_n, ex = extent)
  nulls <- unlist(lapply(out, is.null), use.names = FALSE)
  if (any(nulls)) out[nulls] <- replicate(sum(nulls), rep(NA_real_, 4L), simplify = FALSE)
  out
}

#' @rdname vapour_read_geometry
#' @export
vapour_read_type <- function(dsource, layer = 0L, sql = "", limit_n = NULL, skip_n = 0, extent = NA) {
  dsource <- .check_dsn_single(dsource)
  if (!is.numeric(layer)) layer <- index_layer(dsource, layer)
  limit_n <- validate_limit_n(limit_n)
  extent <- validate_extent(extent, sql)
  out <- read_geometry_gdal_cpp(dsn = dsource, layer = layer, sql = sql, what = "type", textformat = "", limit_n = limit_n, skip_n = skip_n, ex = extent)
  nulls <- unlist(lapply(out, is.null), use.names = FALSE)
  if (any(nulls)) out[nulls] <- list(NA_integer_)
  unlist(out, use.names = FALSE)
}
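
## Combined sketch (commented, not run): extents and types read in feature
## order can be zipped together, much as vapour_geom_summary() assembles them.
# f <- system.file("extdata/tab/list_locality_postcode_meander_valley.tab", package = "vapour")
# ex <- vapour_read_extent(f, limit_n = 5)
# tp <- vapour_read_type(f, limit_n = 5)
# cbind(type = tp, do.call(rbind, ex))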
/scratch/gouwar.j/cran-all/cranData/vapour/R/vapour_input_geometry.R
#' Tile servers as VRT string
#'
#' The first argument is a tile server template such as 'https://tile.openstreetmap.org/${z}/${x}/${y}.png'.
#' GDAL expects the '${z}' and so forth pattern. See the GDAL doc for the 'WMS' driver, the TMS minidriver
#' specification. This is just a helper function to put the right Mercator extent on.
#'
#' You might want to modify the band count, projection, xmin,xmax,ymin,ymax for non-global or
#' non-Mercator tile servers.
#' @param x tile server template, see Details
#' @param user_agent user agent string sent with HTTP requests (the 'UserAgent' element)
#' @param bands_count number of bands in the tile imagery (the 'BandsCount' element, 3 for RGB)
#' @param block_size tile block size in pixels, x then y ('BlockSizeX', 'BlockSizeY')
#' @param projection projection string of the tile grid (the 'Projection' element)
#' @param y_origin tile row origin, "top" or "bottom" (the 'YOrigin' element)
#' @param tile_count tile counts x then y ('TileCountX', 'TileCountY')
#' @param tile_level tile zoom level (the 'TileLevel' element)
#' @param xmin extent value, left
#' @param xmax extent value, right
#' @param ymin extent value, bottom
#' @param ymax extent value, top
#' @param silent if `TRUE` suppress messages
#'
#' @return GDAL xml text string for a tiled xyz service
#' @noRd
#'
#' @examples
#' osm_src <- vapour:::vapour_tilexyz()
#' bm_src <- vapour:::vapour_tilexyz("http://s3.amazonaws.com/com.modestmaps.bluemarble/${z}-r${y}-c${x}.jpg",
#'                                   tile_level = 9L)
#' ## these are tile server sources useable by raster,stars,terra,python rasterio, etc
#' writeLines(bm_src, tfile <- tempfile(fileext = ".vrt"))
vapour_tilexyz <- function(x,
                           user_agent = getOption("HTTPUserAgent"),
                           bands_count = 3L,
                           block_size = c(256L, 256L),
                           projection = "EPSG:3857",
                           y_origin = "top",
                           tile_count = c(1L, 1L),
                           tile_level = 18L,
                           xmin = -20037508.34,
                           xmax = 20037508.34,
                           ymin = -20037508.34,
                           ymax = 20037508.34,
                           silent = FALSE) {
  if (missing(x)) {
    if (!silent) {
      message("no tile server provided, using OSM as a default (assuming global mercator, 3 band image, etc. - see args to vapour_tilexyz())")
      message("\n\n")
    }
    x <- "https://tile.openstreetmap.org/${z}/${x}/${y}.png"
  }
  sprintf('<GDAL_WMS>
  <Service name="TMS">
    <ServerUrl>%s</ServerUrl>
  </Service>
  <DataWindow>
    <UpperLeftX>%f</UpperLeftX>
    <UpperLeftY>%f</UpperLeftY>
    <LowerRightX>%f</LowerRightX>
    <LowerRightY>%f</LowerRightY>
    <TileLevel>%i</TileLevel>
    <TileCountX>%i</TileCountX>
    <TileCountY>%i</TileCountY>
    <YOrigin>%s</YOrigin>
  </DataWindow>
  <Projection>%s</Projection>
  <BlockSizeX>%i</BlockSizeX>
  <BlockSizeY>%i</BlockSizeY>
  <BandsCount>%i</BandsCount>
  <UserAgent>%s</UserAgent>
</GDAL_WMS>',
  x, xmin, ymax, xmax, ymin,
  tile_level, tile_count[1L], tile_count[2L], y_origin,
  projection, block_size[1L], block_size[2L], bands_count, user_agent)
}
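
## Usage sketch (commented, not run): the returned XML is a GDAL 'WMS' data
## source, so it can be written to a .vrt file and opened by any GDAL reader.
# osm_src <- vapour_tilexyz()
# writeLines(osm_src, tf <- tempfile(fileext = ".vrt"))
# ## e.g. vapour_raster_info(tf) should report the global Mercator grid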
/scratch/gouwar.j/cran-all/cranData/vapour/R/vapour_tilexyz.R
#' Virtual raster
#'
#' Simple VRT creation of a GDAL virtual raster. The data source string is
#' **augmented** by input of the other optional arguments. That means they **override**
#' values provided by the source data, or stand in place of this information if
#' it is missing.
#'
#' Create a GDAL data source string (to be used like a filename) with various helpers.
#' VRT stands for 'ViRTual'. A VRT string then acts as a representative of a data source for
#' further use (to read or warp it).
#'
#' An input string will be converted to a single subdataset; use the 'sds' argument to select.
#'
#' If 'extent' and/or 'projection' is provided this is applied to override the source's extent and/or projection.
#' (These might be invalid, or missing, so we facilitate correcting this).
#'
#' If 'bands' is provided this is used to select a set of bands (numbered from 1), which might be
#' in any order and may contain repetitions.
#'
#' `vapour_vrt()` is vectorized: it will return multiple VRT strings for multiple inputs in
#' a "length > 1" character vector. These are all independent; this is different to the function
#' `vapour_warp_raster()` where multiple inputs are merged (possibly by sequential overlapping).
#' @section Rationale:
#'
#' For a raster, the basic essentials we can specify or modify for a source are
#' 1) the source, 2) the extent, 3) the projection, 4) what subdataset (these are
#' variables from NetCDF and the like that contain multiple datasets) and 5)
#' which band/s to provide. For extent and projection we are simply providing
#' or correcting complete information about how to interpret the georeferencing,
#' with subdatasets and bands this is more like a query of which ones we want.
#' If we only wanted band 5, then the output data would have one band only (and
#' when we read it we need `band = 1`).
#'
#' We don't provide the ability to override the dimension, but that is possible as
#' well. More features may come with a 'VRTBuilder' interface.
#'
#' @section Projections:
#' Common inputs for `projection` are WKT variants, "AUTH:CODE"s e.g.
#' "EPSG:3031", the "OGC:CRS84" for long,lat WGS84, "ESRI:code" and other
#' authority variants, and datum names such as 'WGS84','NAD27' recognized by
#' PROJ itself.
#'
#' See the following links to GDAL and PROJ documentation:
#'
#' [PROJ documentation: c.proj_create_crs_to_crs](https://proj.org/development/reference/functions.html#c.proj_create_crs_to_crs)
#'
#' [PROJ documentation: c.proj_create](https://proj.org/development/reference/functions.html#c.proj_create)
#'
#' [GDAL documentation: SetFromUserInput](https://gdal.org/doxygen/classOGRSpatialReference.html#aec3c6a49533fe457ddc763d699ff8796)
#'
#' @param x data source name, filepath, url, database connection string, or VRT text
#' @param extent (optional) numeric extent, xmin,xmax,ymin,ymax
#' @param projection (optional) character string, projection string ("auth:code", proj4, or WKT, or anything understood by PROJ, see Details)
#' @param bands (optional) which band/s to include from the source
#' @param sds which subdataset to select from a source with more than one
#' @param ... ignored
#' @param relative_to_vrt default `FALSE`, if `TRUE` input strings that identify as files on the system are left as-is (by default they are made absolute at the R level)
#' @param geolocation vector of 2 dsn to longitude, latitude geolocation array sources
#' @return VRT character string (for use by GDAL-capable tools, i.e. reading raster)
#' @param nomd if `TRUE` the Metadata tag is removed from the resulting VRT (it can be quite substantial)
#' @param overview pick an integer overview from the source (0L is highest resolution, default -1L does nothing)
#' @export
#'
#' @examples
#' tif <- system.file("extdata", "sst.tif", package = "vapour")
#' vapour_vrt(tif)
#'
#' vapour_vrt(tif, bands = c(1, 1))
#'
vapour_vrt <- function(x, extent = NULL, projection = NULL, sds = 1L, bands = NULL, geolocation = NULL, ..., relative_to_vrt = FALSE, nomd = FALSE, overview = -1L) {
  x <- .check_dsn_multiple_naok(x)
  if (!relative_to_vrt) {
    ## GDAL does its hardest to use relative paths, so either implement Even's proposal or force always absolute (but allow turning that off)
    ## https://lists.osgeo.org/pipermail/gdal-dev/2022-April/055739.html
    ## relativeToVRT doesn't really make sense for an in-memory VRT string, so depends what you're doing and we'll review over time :)
    fe <- file.exists(x)
    if (any(fe)) x[fe] <- normalizePath(x[fe])
  }
  if (is.character(sds)) {
    sdsnames <- vapour_sds_names(x)
    sds <- which(unlist(lapply(sdsnames, function(.x) grepl(sprintf("%s$", sds[1L]), .x))))[1]
  }
  if (is.na(sds[1]) || length(sds) > 1L || !is.numeric(sds) || sds < 1) {
    stop("'sds' must be a valid name or integer for a GDAL-subdataset (1-based)")
  }
  ## FIXME: we can't do bands atm, also what else does boilerplate checks do that the new C++ in gdalraster doesn't do?
  # x <- sds_boilerplate_checks(x, sds)
  # if (!is.null(bands)) {
  #   version <- substr(vapour::vapour_gdal_version(), 6, 8)
  #   if (version < 3.1) {
  #     message("gdal 3.1 or above required for 'bands', ignoring")
  #   } else {
  #   srcbands <- vapour_raster_info(x)$bands
  #   if (any(bands) > srcbands) stop(sprintf("source has only %i bands" ,srcbands))
  #   x <- sprintf("vrt://%s?bands=%s", x, paste0(bands, collapse = ","))
  #   }
  # }
  if (is.null(extent)) {
    extent <- 1.0
  }
  if (is.null(geolocation)) {
    geolocation <- ""
  }
  if (!is.numeric(extent)) {
    #message("vapour_vrt(): extent must be a valid numeric vector of xmin,xmax,ymin,ymax")
    extent <- 1.0
  }
  if (is.null(projection) || is.na(projection)) {
    projection <- ""
  }
  if (!is.character(projection)) {
    message("vapour_vrt(): projection must be a valid projection string for PROJ")
    projection <- ""
  }
  if (is.null(bands) || !is.numeric(bands)) {
    bands <- 0
  }
  if (length(sds) < 1 || !is.numeric(sds) || is.null(sds) || anyNA(sds)) {
    sds <- 1L
  }
  if (!is.null(bands) && (length(bands) < 1 || !is.numeric(bands) || anyNA(bands))) {
    bands <- 1L
  }
  if (!is.null(geolocation) && (length(geolocation) != 2 || !is.character(geolocation) || anyNA(geolocation))) {
    geolocation <- ""
  }
  if (is.null(overview)) overview <- -1L
  overview <- as.integer(overview[1])
  if (is.na(overview)) overview <- -1L
  raster_vrt_cpp(x, extent, projection[1L], sds, bands, geolocation, nomd, overview)
}
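
## Augmentation sketch (commented, not run): assign an extent and projection to
## a source whose georeferencing is missing or wrong; the values here are
## illustrative only.
# tif <- system.file("extdata", "sst.tif", package = "vapour")
# vrt <- vapour_vrt(tif, extent = c(140, 150, -60, -30), projection = "OGC:CRS84")
# ## 'vrt' is plain text, usable anywhere a GDAL dsn is accepted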
/scratch/gouwar.j/cran-all/cranData/vapour/R/vapour_vrt.R
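A quick usage sketch of the augment-and-read workflow described above, using the package's own `sst.tif` (the override values here are arbitrary, purely for illustration):

```r
library(vapour)
tif <- system.file("extdata", "sst.tif", package = "vapour")

## pretend the source georeferencing was missing or wrong: override it in the VRT string
src <- vapour_vrt(tif, extent = c(140, 150, -60, -40), projection = "OGC:CRS84")

## the VRT string acts like a filename for further use
vapour_raster_info(src)[c("extent", "projection")]
```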
#' Read GDAL virtual source contents #' #' Obtain the names of available items in a virtual file source. #' #' The `dsource` must begin with a valid form of the special `vsiPREFIX`, for details #' see [GDAL Virtual File Systems](https://gdal.org/user/virtual_file_systems.html). #' #' #' Note that the listing is not recursive, and so cannot be used for automation. One would #' use this function interactively to determine a useable `/vsiPREFIX/dsource` data #' source string. #' #' @param dsource data source name (path to file, connection string, URL) with virtual prefix, see Details #' @param ... ignored #' @return character vector listing of items #' #' @examples #' pointzipfile <- system.file("extdata/vsi/point_shp.zip", package = "vapour") #' vapour_vsi_list(sprintf("/vsizip/%s", pointzipfile)) #' \donttest{ #' \dontrun{ #' ## example from https://github.com/hypertidy/vapour/issues/55 #' #file <- "http/radmap_v3_2015_filtered_dose/radmap_v3_2015_filtered_dose.ers.zip" #' #url <- "http://dapds00.nci.org.au/thredds/fileServer/rr2/national_geophysical_compilations" #' #u <- sprintf("/vsizip//vsicurl/%s", file.path(url, file)) #' #vapour_vsi_list(u) #' #[1] "radmap_v3_2015_filtered_dose" "radmap_v3_2015_filtered_dose.ers" #' #[3] "radmap_v3_2015_filtered_dose.isi" "radmap_v3_2015_filtered_dose.txt" #' #gdalinfo /vsitar//home/ubuntu/LT05_L1GS_027026_20060116_20160911_01_T2.tar.gz #' #vapour_vsi_list("/vsitar//home/ubuntu/LT05_L1GS_027026_20060116_20160911_01_T2.tar.gz") #' #"LT05_L1TP_027026_20061218_20160911_01_T1_ANG.txt" #' #"LT05_L1TP_027026_20061218_20160911_01_T1_B1.TIF" #' #"LT05_L1TP_027026_20061218_20160911_01_T1_B2.TIF" #' #"LT05_L1TP_027026_20061218_20160911_01_T1_B3.TIF" #' #... #' }} #' @export vapour_vsi_list <- function(dsource, ...) { vsi_list_gdal_cpp(dsource) }
/scratch/gouwar.j/cran-all/cranData/vapour/R/vapour_vsi_list.R
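A minimal sketch of the interactive workflow described above, composing a full `/vsizip/` data source name from the listing (the item name and the `vapour_read_fields()` follow-up are assumptions based on the rest of the package):

```r
library(vapour)
zipfile <- system.file("extdata/vsi/point_shp.zip", package = "vapour")
dsn0 <- sprintf("/vsizip/%s", zipfile)

vapour_vsi_list(dsn0)                 ## inspect the contents first
dsn <- file.path(dsn0, "point.shp")   ## assumed: the shapefile item seen in the listing
vapour_read_fields(dsn)
```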
## 2020-05-08 GDAL up/down stuff cribbed from sf
## TODO: see mdsumner/dirigible, we'll have a gdalheaders package
.vapour_cache <- new.env(FALSE, parent=globalenv())

.onLoad = function(libname, pkgname) {
  vapour_load_gdal()
}
.onUnload = function(libname, pkgname) {
  vapour_unload_gdal()
}
.onAttach = function(libname, pkgname) {
  ## provided by R. Bivand #183
  rver <- version_gdal_cpp()
  if (strsplit(strsplit(rver, ",")[[1]][1], " ")[[1]][2] == "3.6.0") {
    warning("{vapour} package: GDAL 3.6.0 is in use but has been officially retracted; check here for whether its use might affect your work:\nhttps://github.com/OSGeo/gdal/blob/v3.6.1/NEWS.md\n Need help? Contact the maintainer of {vapour}.")
  }
}
vapour_getenv_sql_dialect <- function() {
  Sys.getenv("vapour.sql.dialect")
}
vapour_load_gdal <- function() {
  ## data only on
  ## - windows because tools/winlibs.R
  ## - macos because CRAN mac binary libs, and configure --with-data-copy=yes --with-proj-data=/usr/local/share/proj
  sql <- Sys.getenv("vapour.sql.dialect")
  ## Sys.getenv() returns "" (not NULL) for an unset variable
  if (!nzchar(sql)) Sys.setenv("vapour.sql.dialect" = "")

  ## PROJ data, only if the files are in package (will fix in gdalheaders)
  if (file.exists(system.file("proj/nad.lst", package = "vapour"))) {
    prj = system.file("proj", package = "vapour")[1L]
    assign(".vapour.PROJ_LIB", Sys.getenv("PROJ_LIB"), envir=.vapour_cache)
    Sys.setenv("PROJ_LIB" = prj)
  }
  if (file.exists(system.file("gdal/epsg.wkt", package = "vapour"))) {
    assign(".vapour.GDAL_DATA", Sys.getenv("GDAL_DATA"), envir=.vapour_cache)
    gdl = system.file("gdal", package = "vapour")[1]
    Sys.setenv("GDAL_DATA" = gdl)
  }
  # here we start calling vapour functions (this function is called by .onLoad)
  out <- register_gdal_cpp()
  out
}
# todo
vapour_unload_gdal <- function() {
  ## PROJ data, only if the files are in package (will fix in gdalheaders)
  if (file.exists(system.file("proj/alaska", package = "vapour")[1L])) {
    Sys.setenv("PROJ_LIB"=get(".vapour.PROJ_LIB", envir=.vapour_cache))
  }
  if (file.exists(system.file("gdal/epsg.wkt", package = "vapour"))) {
    Sys.setenv("GDAL_DATA"=get(".vapour.GDAL_DATA", envir=.vapour_cache))
  }
}
/scratch/gouwar.j/cran-all/cranData/vapour/R/zzz.R
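The load/unload hooks above follow a save-and-restore pattern for process environment variables; a stripped-down sketch of that pattern (the names here are illustrative, not part of vapour's API):

```r
.cache <- new.env(parent = emptyenv())

set_env_with_backup <- function(var, value) {
  ## remember the old value so an unload hook can put it back
  assign(paste0(".backup.", var), Sys.getenv(var), envir = .cache)
  do.call(Sys.setenv, stats::setNames(list(value), var))
}

restore_env <- function(var) {
  old <- get(paste0(".backup.", var), envir = .cache)
  do.call(Sys.setenv, stats::setNames(list(old), var))
}
```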
library(dplyr) f <- raadfiles::thelist_files(format = "") %>% filter(grepl("parcel", fullname), grepl("shp$", fullname)) library(vapour) library(blob) system.time(purrr::map(f$fullname, sf::read_sf)) # user system elapsed # 43.124 2.857 39.386 system.time({ d <- purrr::map(f$fullname, read_gdal_table) d <- dplyr::bind_rows(d) g <- purrr::map(f$fullname, read_gdal_geometry) d[["wkb"]] <- new_blob(unlist(g, recursive = FALSE)) }) # user system elapsed # 16.400 2.882 23.227 pryr::object_size(d) ## 359 MB
/scratch/gouwar.j/cran-all/cranData/vapour/inst/benchmarks/gdb-tab-shp.R
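The benchmark above uses early function names (`read_gdal_table`, `read_gdal_geometry`); with the readers used elsewhere in this package, the same attribute-plus-WKB pattern would look roughly like this sketch (not a rerun of the benchmark):

```r
library(vapour)
library(blob)

read_one <- function(dsn) {
  d <- as.data.frame(vapour_read_fields(dsn), stringsAsFactors = FALSE)
  d[["wkb"]] <- new_blob(vapour_read_geometry(dsn))  ## list of raw WKB vectors
  d
}
```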
--- title: "vapour benchmarks" author: "Michael Sumner" date: "30/07/2017" output: html_document editor_options: chunk_output_type: console --- ```{r setup, include=FALSE} knitr::opts_chunk$set(echo = TRUE) ``` ## Vapour for GDAL vector read Here we explore the read times of different formats, the raw read time for the attribute data and the GDAL raw WKB or formatted text geometry from various formats. These files are Local Government Area multipolygon layers of **cadastral parcels**, property boundaries within the state of Tasmania. There are three formats, "gdb" is the so-called GeoDatabase format by ESRI which is a complicated and opaque replacement for the old ArcInfo binary "ADF" formats, "tab" is the classic MapInfo TAB format also a sophisticated and full-featured propietary vector format, and "shp" that lowest-common denominator interchange workaround, and poor cousin of the the real GIS vector formats. ```{r} library(dplyr) f <- raadfiles::thelist_files(format = "") %>% filter(grepl("parcel", fullname)) gdb <- f %>% filter(grepl("gdb$", fullname)) tab <- f %>% filter(grepl("tab$", fullname)) shp <- f %>% filter(grepl("shp$", fullname)) gdb %>% transmute(basename(file)) tab %>% transmute(basename(file)) shp %>% transmute(basename(file)) ``` Let's read all the feature metadata attributes from the first format so we know what we are dealing with. 400,000 or so multi polygon features spread over 29 file types and represents "several hundred Mb". (It might be interesting to determine the file footprint of each type here, but each has a relativly complicated relationship between "file set" and "the data set", so we leave it for now). ```{r} system.time(l <- lapply(gdb$fullname, vapour::read_gdal_table)) (gdb$nrows <- purrr::map_int(l, nrow)) sum(gdb$nrows) ```
/scratch/gouwar.j/cran-all/cranData/vapour/inst/benchmarks/gdb-tab-shp.Rmd
library(rbenchmark)
library(sf)
library(tibble)
library(vapour)
## f: a path to a vector data source (not defined in this scratch file)

benchmark(sf = read_sf(f),
          v = {d <- vapour_read_attributes(f)
               d$geometry <- st_as_sfc(vapour_read_geometry(f), crs = vapour:::vapour_projection_info_cpp(f))
               st_as_sf(as_tibble(d))},
          replications = 200)
#   test replications elapsed relative user.self sys.self user.child sys.child
# 1   sf          200   4.122    1.000     3.744    0.380          0         0
# 2    v          200   4.921    1.194     4.840    0.084          0         0

benchmark(sf = read_sf(f)[1:10, ],
          v = {d <- vapour_read_attributes(f, limit_n = 10)
               d$geometry <- st_as_sfc(vapour_read_geometry(f, limit_n = 10), crs = vapour:::vapour_projection_info_cpp(f))
               st_as_sf(as_tibble(d))},
          replications = 200)
#   test replications elapsed relative user.self sys.self user.child sys.child
# 1   sf          200   3.263     3.34     3.204    0.056          0         0
# 2    v          200   0.977     1.00     0.956    0.020          0         0
/scratch/gouwar.j/cran-all/cranData/vapour/inst/benchmarks/limit_n.R
## ---- include = FALSE--------------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" )
/scratch/gouwar.j/cran-all/cranData/vapour/inst/doc/feature-access.R
--- title: "feature-access" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{feature-access} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` The gdalheaders API uses the following (pseudo-code) schemes for feature access. The different feature loops, see https://github.com/dis-organization/dirigible/issues/5 ### ALL Get all the feature thingys. ```C++ double nFeature = gdalheaders::force_layer_feature_count(poLayer); List/Vector out (nFeature); double ii = 0; while( (poFeature = poLayer->GetNextFeature()) != NULL ) { out[ii] = poFeature-><something>; OGRFeature::DestroyFeature(poFeature); ii++; } ``` ### IJ Get a subset of feature thingys i:j - their positional index. For `c(0, 0)` get the first feature, for `c(0, n - 1)` get all of them. For `c(0, -1)` that is a special case, there are no features. ```C++ ListVector out(ij[1] - ij[0] + 1); double cnt = 0; double ii = 0; while( (poFeature = poLayer->GetNextFeature()) != NULL ) { if (ii == ij[0] || (ii > ij[0] && ii <= ij[1])) { out[cnt] = poFeature-><something>; cnt++; } ii++; OGRFeature::DestroyFeature(poFeature); } ``` ### IA Get a subset of feature things, *arbitrary i* (in order) - their positional index ```C++ ListVector out(ia.length()); double ii = 0; double cnt = 0; while( (poFeature = poLayer->GetNextFeature()) != NULL ) { if (ii == ia[cnt]) { out[cnt] = poFeature-><something>; cnt++; } ii++; OGRFeature::DestroyFeature(poFeature); } ``` ## FA Get a subset of feature thingys *arbitrary FID* (order irrelevant) - their FID *unique names* ```C++ List out(fa.length()); for (double ii = 0; ii < fa.length(); ii++) { GIntBig feature_id = (GIntBig)fa[ii]; poFeature = poLayer->GetFeature(feature_id); ```
/scratch/gouwar.j/cran-all/cranData/vapour/inst/doc/feature-access.Rmd
## ----setup, include = FALSE--------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>", out.width = "100%" ) ## ----------------------------------------------------------------------------- pfile <- system.file("extdata", "point.shp", package = "vapour") library(vapour) vapour_read_fields(pfile) ## ----------------------------------------------------------------------------- mvfile <- system.file("extdata", "tab", "list_locality_postcode_meander_valley.tab", package="vapour") dat <- as.data.frame(vapour_read_fields(mvfile), stringsAsFactors = FALSE) dim(dat) head(dat) ## ----------------------------------------------------------------------------- vapour_read_geometry(pfile)[5:6] ## format = "WKB" vapour_read_geometry_text(pfile)[5:6] ## format = "json" vapour_read_geometry_text(pfile, textformat = "gml")[2] ## don't do this with a non-longlat data set like cfile vapour_read_geometry_text(pfile, textformat = "kml")[1:2] cfile <- system.file("extdata/sst_c.gpkg", package = "vapour") str(vapour_read_geometry_text(cfile, textformat = "wkt")[1:2]) ## ----------------------------------------------------------------------------- dat <- as.data.frame(vapour_read_fields(cfile), stringsAsFactors = FALSE) dat$wkt <- vapour_read_geometry_text(cfile, textformat = "wkt") head(dat) ## ----------------------------------------------------------------------------- mvfile <- system.file("extdata", "tab", "list_locality_postcode_meander_valley.tab", package="vapour") str(head(vapour_read_extent(mvfile))) ## ----------------------------------------------------------------------------- vapour_geom_summary(mvfile) ## ----skip-limit--------------------------------------------------------------- vapour_geom_summary(mvfile, limit_n = 4)$FID vapour_geom_summary(mvfile, skip_n = 2, limit_n = 6)$FID vapour_geom_summary(mvfile, skip_n = 6)$FID ## ----raster------------------------------------------------------------------- f <- system.file("extdata", "sst.tif", package = "vapour") vapour_raster_info(f) ## ----raster-read-------------------------------------------------------------- vapour_read_raster(f, window = c(0, 0, 6, 5)) ## the final two arguments specify up- or down-sampling ## controlled by resample argument vapour_read_raster(f, window = c(0, 0, 6, 5, 8, 9)) ## if window is not included, and native TRUE then we get the entire window str(vapour_read_raster(f, native = TRUE)) ## notice this is the length of the dimXY above prod(vapour_raster_info(f)$dimXY) ## ----gdal-flex---------------------------------------------------------------- mm <- matrix(vapour_read_raster(f, native = TRUE)[[1]], vapour_raster_info(f)$dimXY) mm[mm < -1e6] <- NA image(mm[,ncol(mm):1], asp = 2) ## ----------------------------------------------------------------------------- ## note, this code assumes OGRSQL dialect current_dialect <- Sys.getenv("vapour.sql.dialect") Sys.unsetenv("vapour.sql.dialect") ## ensure that default dialect is used ## (in this case it's 'OGRSQL' but we only want to record the state and reset after) ## later on, when done do Sys.setenv(vapour.sql.dialect = current_dialect) vapour_read_fields(mvfile, sql = "SELECT NAME, PLAN_REF FROM list_locality_postcode_meander_valley WHERE POSTCODE < 7291") vapour_read_fields(mvfile, sql = "SELECT NAME, PLAN_REF, FID FROM list_locality_postcode_meander_valley WHERE POSTCODE = 7306") ## ----------------------------------------------------------------------------- library(vapour) file0 <- 
"list_locality_postcode_meander_valley.tab" mvfile <- system.file("extdata/tab", file0, package="vapour") layer <- gsub(".tab$", "", basename(mvfile)) ## get the number of features by FID (DISTINCT should be redundant here) vapour_read_fields(mvfile, sql = sprintf("SELECT COUNT(DISTINCT FID) AS nfeatures FROM %s", layer)) ## note how TAB is 1-based vapour_read_fields(mvfile, sql = sprintf("SELECT COUNT(*) AS n FROM %s WHERE FID < 2", layer)) ## but SHP is 0-based shp <- system.file("extdata/point.shp", package="vapour") vapour_read_fields(shp, sql = sprintf("SELECT COUNT(*) AS n FROM %s WHERE FID < 2", "point")) ## ----restore------------------------------------------------------------------ Sys.setenv(vapour.sql.dialect = current_dialect) ## ----dialect------------------------------------------------------------------ ## good citizenry current_dialect <- Sys.getenv("vapour.sql.dialect") Sys.setenv(vapour.sql.dialect = "SQLITE") ## now we can use SQLITE dialect with TAB vapour_read_fields(mvfile, sql = sprintf("SELECT st_area(GEOMETRY) AS area FROM %s LIMIT 1 ", layer)) ## but with OGRSQL we need Sys.setenv(vapour.sql.dialect = "OGRSQL") vapour_read_fields(mvfile, sql = sprintf("SELECT OGR_GEOM_AREA AS area FROM %s LIMIT 1 ", layer)) ## ----GDAL-info---------------------------------------------------------------- vapour_gdal_version() str(vapour_all_drivers()) ## ----GDAL-driver-------------------------------------------------------------- vapour_driver(mvfile)
/scratch/gouwar.j/cran-all/cranData/vapour/inst/doc/vapour.R
--- title: "Vapour - lightweight GDAL" author: "Michael D. Sumner" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Vapour - lightweight GDAL} %\VignetteEngine{knitr::rmarkdown} editor_options: chunk_output_type: console --- ```{r setup, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", out.width = "100%" ) ``` The vapour package provides access to some GDAL functionality with minimal overhead. This includes read geometry and data ('attributes') There's a function `vapour_read_fields` that returns the fields (attributes) as list of vectors. ```{r} pfile <- system.file("extdata", "point.shp", package = "vapour") library(vapour) vapour_read_fields(pfile) ``` ```{r} mvfile <- system.file("extdata", "tab", "list_locality_postcode_meander_valley.tab", package="vapour") dat <- as.data.frame(vapour_read_fields(mvfile), stringsAsFactors = FALSE) dim(dat) head(dat) ``` A low-level function will return a character vector of JSON, GML, KML or WKT. ```{r} vapour_read_geometry(pfile)[5:6] ## format = "WKB" vapour_read_geometry_text(pfile)[5:6] ## format = "json" vapour_read_geometry_text(pfile, textformat = "gml")[2] ## don't do this with a non-longlat data set like cfile vapour_read_geometry_text(pfile, textformat = "kml")[1:2] cfile <- system.file("extdata/sst_c.gpkg", package = "vapour") str(vapour_read_geometry_text(cfile, textformat = "wkt")[1:2]) ``` Combine these together to get a custom data set. ```{r} dat <- as.data.frame(vapour_read_fields(cfile), stringsAsFactors = FALSE) dat$wkt <- vapour_read_geometry_text(cfile, textformat = "wkt") head(dat) ``` There is a function `vapour_read_extent` to return a straightforward bounding box vector for every feature, so that we can flexibly build an index of a data set for later use. ```{r} mvfile <- system.file("extdata", "tab", "list_locality_postcode_meander_valley.tab", package="vapour") str(head(vapour_read_extent(mvfile))) ``` This makes for a very lightweight summary data set that will scale to hundreds of large inputs. There is a `vapour_geom_summary()` function to read only the information about each geometry. ```{r} vapour_geom_summary(mvfile) ``` Each function that relates to geometry includes arguments `skip_n` and `limit_n` to first specify the number of features to ignore, and second to set a maximum number of features visited. These interact, and so can be used to scan through a source. Both are applied after the `sql` argument. ```{r skip-limit} vapour_geom_summary(mvfile, limit_n = 4)$FID vapour_geom_summary(mvfile, skip_n = 2, limit_n = 6)$FID vapour_geom_summary(mvfile, skip_n = 6)$FID ``` Each geometry function also includes an `extent` argument, which takes a simple vector of four values `xmin, xmax, ymin, ymax` or sp bbox, sf bbox, or raster extent. This is only applied if the sql argument is non empty, and corresponds to the [SpatialFilter argument of ExecuteSQL](https://gdal.org/user/ogr_sql_dialect.html#executesql). ## Raster data Find raster info. ```{r raster} f <- system.file("extdata", "sst.tif", package = "vapour") vapour_raster_info(f) ``` Read raster data (requires explicit setting of `window` argument, and is not useful without being used in the context of the raster dimensions). 
```{r raster-read} vapour_read_raster(f, window = c(0, 0, 6, 5)) ## the final two arguments specify up- or down-sampling ## controlled by resample argument vapour_read_raster(f, window = c(0, 0, 6, 5, 8, 9)) ## if window is not included, and native TRUE then we get the entire window str(vapour_read_raster(f, native = TRUE)) ## notice this is the length of the dimXY above prod(vapour_raster_info(f)$dimXY) ``` By chaining together what we know about how raster data works we can get exactly what we want from GDAL. (Note that `vapour_read_raster()` returns a list of one band, new behaviour since version 0.4.0). ```{r gdal-flex} mm <- matrix(vapour_read_raster(f, native = TRUE)[[1]], vapour_raster_info(f)$dimXY) mm[mm < -1e6] <- NA image(mm[,ncol(mm):1], asp = 2) ``` An example of using this facility interactively is in [lazyraster](https://github.com/hypertidy/lazyraster). ## OGRSQL SQL is available for general GDAL vector data. Note that each lower-level function accepts a `sql` argument, which sends a query to the GDAL library to be executed against the data source, this can create custom layers and so is independent of and ignores the `layer` argument. Note that the same sql statement can be passed to the geometry readers, so we get the matching sets of information. `vapour_read_geometry` will return NULL for each missing geometry if the statement doesn't include geometry explicitly or implicitly, but `vapour_read_geometry`, `vapour_read_geometry_text` and `vapour_read_extent` all explicitly modify the statement "SELECT *". (We are also assuming the data source hasn't changed between accesses ... let me know if this causes you problems!). ```{r} ## note, this code assumes OGRSQL dialect current_dialect <- Sys.getenv("vapour.sql.dialect") Sys.unsetenv("vapour.sql.dialect") ## ensure that default dialect is used ## (in this case it's 'OGRSQL' but we only want to record the state and reset after) ## later on, when done do Sys.setenv(vapour.sql.dialect = current_dialect) vapour_read_fields(mvfile, sql = "SELECT NAME, PLAN_REF FROM list_locality_postcode_meander_valley WHERE POSTCODE < 7291") vapour_read_fields(mvfile, sql = "SELECT NAME, PLAN_REF, FID FROM list_locality_postcode_meander_valley WHERE POSTCODE = 7306") ``` Also note that FID is a special row number value, to be used a as general facility for selecting by structural row. This FID is driver-dependent, it can be 0- or 1-based, or completely arbitrary. Variously, drivers (GDAL's formats) are 0- or 1- based with the FID. Others (such as OSM) are arbitrary, and have non-sequential (and presumably persistent) FID values. ```{r} library(vapour) file0 <- "list_locality_postcode_meander_valley.tab" mvfile <- system.file("extdata/tab", file0, package="vapour") layer <- gsub(".tab$", "", basename(mvfile)) ## get the number of features by FID (DISTINCT should be redundant here) vapour_read_fields(mvfile, sql = sprintf("SELECT COUNT(DISTINCT FID) AS nfeatures FROM %s", layer)) ## note how TAB is 1-based vapour_read_fields(mvfile, sql = sprintf("SELECT COUNT(*) AS n FROM %s WHERE FID < 2", layer)) ## but SHP is 0-based shp <- system.file("extdata/point.shp", package="vapour") vapour_read_fields(shp, sql = sprintf("SELECT COUNT(*) AS n FROM %s WHERE FID < 2", "point")) ``` See https://gdal.org/user/ogr_sql_dialect.html Now we are done with our SQL, so in case something else was using `current_dialect`, restore it. 
```{r restore} Sys.setenv(vapour.sql.dialect = current_dialect) ``` There are many useful higher level operations that can be used with this. The simplest is the ability to use GDAL as a database-like connection to attribute tables. # SQL Dialect In earlier versions (pre 0.9.1) we couldn't control the SQL dialect in use, so we variously had available 'OGRSQL' or 'SQLITE', or whatever native dialect the format defaults to. We couldn't use 'OGRSQL' with a Geopackage, and neither could we use 'SQLITE' with a shapefile, MapInfo TAB, or ESRI file Geodatabase (these last three aren't "real" databases so don't have their own native SQL, and GDAL fills the gap with OGRSQL. Geopackage defaults to SQLITE, because that's what it is built upon. So for example. ```{r dialect} ## good citizenry current_dialect <- Sys.getenv("vapour.sql.dialect") Sys.setenv(vapour.sql.dialect = "SQLITE") ## now we can use SQLITE dialect with TAB vapour_read_fields(mvfile, sql = sprintf("SELECT st_area(GEOMETRY) AS area FROM %s LIMIT 1 ", layer)) ## but with OGRSQL we need Sys.setenv(vapour.sql.dialect = "OGRSQL") vapour_read_fields(mvfile, sql = sprintf("SELECT OGR_GEOM_AREA AS area FROM %s LIMIT 1 ", layer)) ``` # GDAL information Find the GDAL version and drivers available. ```{r GDAL-info} vapour_gdal_version() str(vapour_all_drivers()) ``` Find the driver that will be used for a given data source. ```{r GDAL-driver} vapour_driver(mvfile) ```
/scratch/gouwar.j/cran-all/cranData/vapour/inst/doc/vapour.Rmd
f <- "~/albania-latest.osm.pbf" vapour_layer_names(f) td <- "~" u = "http://download.geofabrik.de/europe/albania-latest.osm.pbf" f <- normalizePath(file.path(td, basename(u))) #download.file(u, f, mode = "wb") g <- vapour::vapour_read_geometry(f, 0) d <- vapour::vapour_read_attributes(f, layer = 0) x <- sf::read_sf(f, "points") # Driver: OSM # Available layers: # layer_name geometry_type features fields # 1 points Point 56653 10 # 2 lines Line String 83161 9 # 3 multilinestrings Multi Line String 127 4 # 4 multipolygons Multi Polygon 112854 25 # 5 other_relations Geometry Collection 713 4 ## the multipolygons layer has 112854 features and 25 fields, so d <- vapour::vapour_read_attributes(f, sql = "SELECT osm_id, type FROM multipolygons") g <- vapour::vapour_read_extent(f, sql = "SELECT osm_id FROM multipolygons WHERE type LIKE 'City_municipality'") d <- vapour::vapour_read_attributes(f, sql = "SELECT * FROM multilinestrings WHERE type LIKE 'route'") g <- vapour::vapour_read_geometry(f, sql = "SELECT * FROM multilinestrings WHERE type LIKE 'route'") #f <- list.files(basename(tempdir()), recursive = TRUE, pattern = "pbf$") read_pbf_geometry <- function(file) { layers <- sf::st_layers(file)$name setNames(purrr::map(layers, ~sf::read_sf(file, .x)), layers) } x <- read_pbf_geometry(f) purrr::map_int(x, nrow) purrr::map_int(x, ncol) download.file(u, f, mode = "wb") msg = "osmconvert albania.osh.pbf >albania.osm") system(msg) osmdata_xml(input_file = "albania.osh.pbf", filename = "albania.osm") q = add_feature(opq("Albania"), "highway") albanian_roads = osmdata_sf(q, doc = "albania.osm")
/scratch/gouwar.j/cran-all/cranData/vapour/inst/pbf/pbf.R
library(vapour)
## the extent of this one is a bit off, so we correct it
srcfile <- "/rdsi/PUBLIC/raad/data/www.ngdc.noaa.gov/mgg/global/relief/ETOPO1/data/ice_surface/grid_registered/netcdf/ETOPO1_Ice_g_gdal.grd"
src <- vapour_vrt(srcfile, extent = c(-180, 180, -90, 90))
## it is otherwise ok
info <- vapour_raster_info(src)
info[c("extent", "dimXY", "projection")]

## create a raster, here assume we match our input raster (but it could be different)
ex <- info$extent
prj <- info$projection
dm <- info$dimXY

## now we create a file, with 1 Float64 band (hardcoded for now)
rfile <- vapour:::vapour_create(tempfile(fileext = ".tif"), extent = ex, dimension = dm,
                                projection = prj, n_bands = 1L, driver = "GTiff")

#remotes::install_github("hypertidy/grout")
outinfo <- vapour::vapour_raster_info(rfile)
## for now the tiling is also hardcoded to 512x512

## now get the tiling index, gives us everything we need to read a tile for our target,
## either by the warper or by RasterIO
## the native tiling is tiny so we up it
info$tilesXY
BLOCKSIZE <- info$tilesXY * 1  ## 2, 4, 8, 16
index <- grout::tile_index(grout:::grout(outinfo, BLOCKSIZE))
index
library(dplyr)
system.time({
  for (i in seq_len(dim(index)[1L])) {
    tile <- as.matrix(index[i, ])
    vals <- vapour_warp_raster_dbl(src,
                                   extent = tile[, c("xmin", "xmax", "ymin", "ymax"), drop = TRUE],
                                   dimension = tile[, c("ncol", "nrow"), drop = TRUE],
                                   projection = outinfo$projection)
    vals <- vals + runif(1, -8000, 8000)  ## so we know we achieved something
    ## here we might read from multiple rasters or bands and coalesce, do calcs etc
    vapour:::vapour_write_raster_block(rfile, vals,
                                       tile[, c("offset_x", "offset_y"), drop = TRUE],
                                       tile[, c("ncol", "nrow"), drop = TRUE],
                                       overwrite = TRUE)
  }
})

library(terra)
plot(rast(rfile), col = grey.colors(256))
/scratch/gouwar.j/cran-all/cranData/vapour/inst/readwrite/readwriteexample.R
library(vapour) #srcfile <- "D:\\data\\topography\\etopo2\\subset.tif" #srcfile <- raadtools::topofile("etopo1") srcfile <- "/rdsi/PRIVATE/raad/data_local/www.bodc.ac.uk/gebco/GRIDONE_2D.nc" #srcfile <- "/rdsi/PUBLIC/raad/data/www.ngdc.noaa.gov/mgg/global/relief/ETOPO1/data/ice_surface/grid_registered/netcdf/ETOPO1_Ice_g_gdal.grd" #srcfile <- "/rdsi/PUBLIC/raad/data/www.ngdc.noaa.gov/mgg/global/relief/ETOPO2/ETOPO2v2-2006/ETOPO2v2c/netCDF/ETOPO2v2c_f4.nc" library(gladr) if (FALSE) plot_raster(srcfile) ui <- fluidPage( # Some custom CSS for a smaller font for preformatted text tags$head( tags$style(HTML(" pre, table.table { font-size: smaller; } ")) ), fluidRow( column(width = 4, wellPanel( radioButtons("plot_type", "Plot type", c("base", "ggplot2") ) )), column(width = 12, # In a plotOutput, passing values for click, dblclick, hover, or brush # will enable those interactions. plotOutput("plot1", height = 750, # Equivalent to: click = clickOpts(id = "plot_click") click = "plot_click", dblclick = dblclickOpts( id = "plot_dblclick" ), hover = hoverOpts( id = "plot_hover" ), ## see here for reset https://stackoverflow.com/questions/30588472/is-it-possible-to-clear-the-brushed-area-of-a-plot-in-shiny brush = brushOpts( id = "plot_brush" ) ) )) ) server <- function(input, output) { get_extent <- reactive({ if (is.null(input$plot_brush)) NULL else raster::extent(input$plot_brush$xmin, input$plot_brush$xmax, input$plot_brush$ymin, input$plot_brush$ymax) }) output$plot1 <- renderPlot({ plot_raster(srcfile, ext = get_extent()) }) } shinyApp(ui, server)
/scratch/gouwar.j/cran-all/cranData/vapour/inst/shiny/gdal_map/app.R
## moved to hypertidy/discrete
/scratch/gouwar.j/cran-all/cranData/vapour/inst/stars/very-early-stars-overview.R
f <- "/rdsi/PUBLIC/raad/data/www.ngdc.noaa.gov/mgg/global/relief/ETOPO2/ETOPO2v2-2006/ETOPO2v2c/netCDF/ETOPO2v2c_f4.nc" f <- system.file("extdata", "sst.tif", package = "vapour") par(mfrow = c(2, 2)) zipurl <- "https://www.ngdc.noaa.gov/mgg/global/relief/ETOPO2/ETOPO2v2-2006/ETOPO2v2c/netCDF/ETOPO2v2c_f4_netCDF.zip" vsiurl <- file.path("/vsizip//vsicurl", zipurl) (u <- file.path(vsiurl, vapour_vsi_list(vsiurl)[1L])) ## neither gets the extent (because the file is bad) stars::read_stars(u) terra::rast(u) ## but we only need four numbers and a string source_ex <- c(-180, 180, -90, 90) source_proj <- "EPSG:4326" ## choose any grid, literally an extent, dimension, projection (this just a concise way to centre an example) b <- 4e5 dm <- c(512, 512) prj <- "+proj=aeqd +lon_0=147 +lat_0=-42" v <- vapour_warp_raster(f, extent = c(-b, b, -b, b), dimension = dm, bands = 1, ## we have to *augment* the source, because the tools don't know the extent or its projection) source_extent = source_ex, source_wkt = source_proj, projection = prj, band_output_type = "vrt") ## put the filename back in (will hide this step soon ..) vrt <- gsub("\"0\"></SourceDataset>", sprintf("\"0\">%s</SourceDataset>", f), v[[1]]) ## and we good terra::plot(tr <- terra::rast(vrt)) tr library(stars) image(st <- stars::read_stars(vrt, proxy = F, normalize_path = FALSE), col = grey.colors(256)) axis(1);axis(2) st ## what if we want to match the source? that works too! v <- vapour_warp_raster(f, extent = c(140, 154, -49, -35), dimension = dm, bands = 1, ## we have to *augment* the source, because the tools don't know the extent or its projection) source_extent = source_ex, source_wkt = source_proj, projection = "OGC:CRS84", band_output_type = "vrt") ## put the filename back in (will hide this step soon ..) vrt <- gsub("\"0\"></SourceDataset>", sprintf("\"0\">%s</SourceDataset>", f), v[[1]]) ## and we good terra::plot(tr <- terra::rast(vrt)) image(st <- stars::read_stars(vrt, proxy = F, normalize_path = FALSE), col = grey.colors(256)) axis(1);axis(2) library(vapour) zipurl <- "https://www.ngdc.noaa.gov/mgg/global/relief/ETOPO2/ETOPO2v2-2006/ETOPO2v2c/netCDF/ETOPO2v2c_f4_netCDF.zip" vsiurl <- file.path("/vsizip//vsicurl", zipurl) (u <- file.path(vsiurl, vapour_vsi_list(vsiurl)[1L])) ## helper fun rinfo <- function(x) setNames(vapour_raster_info(x)[c("extent", "dimXY", "projection")], c("extent", "dimension", "projection")) rinfo(u) ## relies on dev build of GDAL, with local patch to vrt:// connection support (adding a_ullr and a_srs) u_augment <- sprintf("vrt://%s?a_ullr=-180 90 180 -90&a_srs=OGC:4326", u) u_augment rinfo(u_augment)
/scratch/gouwar.j/cran-all/cranData/vapour/inst/warpsandbox/vrt_stars-terra_compare.R
## IBCSO GeoTIFF hs.pangaea.de/Maps/bathy/IBCSO_v1/ibcso_v1_is.tif f <- "/vsizip/vsicurl/https://hs.pangaea.de/Maps/bathy/IBCSO_v1/IBCSO_v1_bed_PS65_500m_tif.zip" # vapour_vsi_list(f) # [1] "ibcso_v1_bed.tfw" "ibcso_v1_bed.tif" "ibcso_v1_bed.tif.aux.xml" "ibcso_v1_bed.tif.ovr" # [5] "ibcso_v1_bed.tif.xml" library(vapour) vsiurl <- file.path(f, vapour_vsi_list(f)[2]) f <- raadtools::topofile("etopo2") dm <- as.integer(c(512, 512)) b <- 4e6 prj <- "+proj=laea +lon_0=180" ex <- c(-180 * 60 * 1852, 180 * 60 * 1852, -90 * 60 * 1852, 90 * 60 * 1852) dm <- c(1024, 512) v <- vapour_warp_raster(f, band = 1, extent = ex, dimension = dm, wkt = vapour_srs_wkt(prj), source_wkt = vapour_srs_wkt("+proj=longlat"), resample = "bilinear") image( list(x = seq(-b, b, length.out = dm[1] + 1), y = seq(-b, b, length.out = dm[2] + 1), z = matrix(unlist(v, use.names = FALSE), dm[1])[,dm[2]:1]), asp = 1, col = hcl.colors(256))
/scratch/gouwar.j/cran-all/cranData/vapour/inst/warpsandbox/warping.R
if(getRversion() < "3.3.0") { stop("Your version of R is too old. This package requires R-3.3.0 or newer on Windows.") } # For details see: https://github.com/rwinlib/gdal3 VERSION <- commandArgs(TRUE) testfile <- sprintf("../windows/gdal3-%s/include/gdal-%s/gdal.h", VERSION, VERSION) if(!file.exists(testfile)) { if(getRversion() < "3.3.0") setInternet2() download.file(sprintf("https://github.com/rwinlib/gdal3/archive/v%s.zip", VERSION), "lib.zip", quiet = TRUE) dir.create("../windows", showWarnings = FALSE) unzip("lib.zip", exdir = "../windows") unlink("lib.zip") }
/scratch/gouwar.j/cran-all/cranData/vapour/tools/winlibs.R
--- title: "feature-access" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{feature-access} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` The gdalheaders API uses the following (pseudo-code) schemes for feature access. The different feature loops, see https://github.com/dis-organization/dirigible/issues/5 ### ALL Get all the feature thingys. ```C++ double nFeature = gdalheaders::force_layer_feature_count(poLayer); List/Vector out (nFeature); double ii = 0; while( (poFeature = poLayer->GetNextFeature()) != NULL ) { out[ii] = poFeature-><something>; OGRFeature::DestroyFeature(poFeature); ii++; } ``` ### IJ Get a subset of feature thingys i:j - their positional index. For `c(0, 0)` get the first feature, for `c(0, n - 1)` get all of them. For `c(0, -1)` that is a special case, there are no features. ```C++ ListVector out(ij[1] - ij[0] + 1); double cnt = 0; double ii = 0; while( (poFeature = poLayer->GetNextFeature()) != NULL ) { if (ii == ij[0] || (ii > ij[0] && ii <= ij[1])) { out[cnt] = poFeature-><something>; cnt++; } ii++; OGRFeature::DestroyFeature(poFeature); } ``` ### IA Get a subset of feature things, *arbitrary i* (in order) - their positional index ```C++ ListVector out(ia.length()); double ii = 0; double cnt = 0; while( (poFeature = poLayer->GetNextFeature()) != NULL ) { if (ii == ia[cnt]) { out[cnt] = poFeature-><something>; cnt++; } ii++; OGRFeature::DestroyFeature(poFeature); } ``` ## FA Get a subset of feature thingys *arbitrary FID* (order irrelevant) - their FID *unique names* ```C++ List out(fa.length()); for (double ii = 0; ii < fa.length(); ii++) { GIntBig feature_id = (GIntBig)fa[ii]; poFeature = poLayer->GetFeature(feature_id); ```
/scratch/gouwar.j/cran-all/cranData/vapour/vignettes/feature-access.Rmd
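At the R level, the IJ scheme corresponds to the `skip_n`/`limit_n` arguments documented in the main vignette; a rough correspondence (sketch only):

```r
library(vapour)
pfile <- system.file("extdata", "point.shp", package = "vapour")

## IJ with ij = c(2, 4): skip the first two features, then visit three
vapour_read_geometry(pfile, skip_n = 2, limit_n = 3)
```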
--- title: "Vapour - lightweight GDAL" author: "Michael D. Sumner" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Vapour - lightweight GDAL} %\VignetteEngine{knitr::rmarkdown} editor_options: chunk_output_type: console --- ```{r setup, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", out.width = "100%" ) ``` The vapour package provides access to some GDAL functionality with minimal overhead. This includes read geometry and data ('attributes') There's a function `vapour_read_fields` that returns the fields (attributes) as list of vectors. ```{r} pfile <- system.file("extdata", "point.shp", package = "vapour") library(vapour) vapour_read_fields(pfile) ``` ```{r} mvfile <- system.file("extdata", "tab", "list_locality_postcode_meander_valley.tab", package="vapour") dat <- as.data.frame(vapour_read_fields(mvfile), stringsAsFactors = FALSE) dim(dat) head(dat) ``` A low-level function will return a character vector of JSON, GML, KML or WKT. ```{r} vapour_read_geometry(pfile)[5:6] ## format = "WKB" vapour_read_geometry_text(pfile)[5:6] ## format = "json" vapour_read_geometry_text(pfile, textformat = "gml")[2] ## don't do this with a non-longlat data set like cfile vapour_read_geometry_text(pfile, textformat = "kml")[1:2] cfile <- system.file("extdata/sst_c.gpkg", package = "vapour") str(vapour_read_geometry_text(cfile, textformat = "wkt")[1:2]) ``` Combine these together to get a custom data set. ```{r} dat <- as.data.frame(vapour_read_fields(cfile), stringsAsFactors = FALSE) dat$wkt <- vapour_read_geometry_text(cfile, textformat = "wkt") head(dat) ``` There is a function `vapour_read_extent` to return a straightforward bounding box vector for every feature, so that we can flexibly build an index of a data set for later use. ```{r} mvfile <- system.file("extdata", "tab", "list_locality_postcode_meander_valley.tab", package="vapour") str(head(vapour_read_extent(mvfile))) ``` This makes for a very lightweight summary data set that will scale to hundreds of large inputs. There is a `vapour_geom_summary()` function to read only the information about each geometry. ```{r} vapour_geom_summary(mvfile) ``` Each function that relates to geometry includes arguments `skip_n` and `limit_n` to first specify the number of features to ignore, and second to set a maximum number of features visited. These interact, and so can be used to scan through a source. Both are applied after the `sql` argument. ```{r skip-limit} vapour_geom_summary(mvfile, limit_n = 4)$FID vapour_geom_summary(mvfile, skip_n = 2, limit_n = 6)$FID vapour_geom_summary(mvfile, skip_n = 6)$FID ``` Each geometry function also includes an `extent` argument, which takes a simple vector of four values `xmin, xmax, ymin, ymax` or sp bbox, sf bbox, or raster extent. This is only applied if the sql argument is non empty, and corresponds to the [SpatialFilter argument of ExecuteSQL](https://gdal.org/user/ogr_sql_dialect.html#executesql). ## Raster data Find raster info. ```{r raster} f <- system.file("extdata", "sst.tif", package = "vapour") vapour_raster_info(f) ``` Read raster data (requires explicit setting of `window` argument, and is not useful without being used in the context of the raster dimensions). 
```{r raster-read} vapour_read_raster(f, window = c(0, 0, 6, 5)) ## the final two arguments specify up- or down-sampling ## controlled by resample argument vapour_read_raster(f, window = c(0, 0, 6, 5, 8, 9)) ## if window is not included, and native TRUE then we get the entire window str(vapour_read_raster(f, native = TRUE)) ## notice this is the length of the dimXY above prod(vapour_raster_info(f)$dimXY) ``` By chaining together what we know about how raster data works we can get exactly what we want from GDAL. (Note that `vapour_read_raster()` returns a list of one band, new behaviour since version 0.4.0). ```{r gdal-flex} mm <- matrix(vapour_read_raster(f, native = TRUE)[[1]], vapour_raster_info(f)$dimXY) mm[mm < -1e6] <- NA image(mm[,ncol(mm):1], asp = 2) ``` An example of using this facility interactively is in [lazyraster](https://github.com/hypertidy/lazyraster). ## OGRSQL SQL is available for general GDAL vector data. Note that each lower-level function accepts a `sql` argument, which sends a query to the GDAL library to be executed against the data source, this can create custom layers and so is independent of and ignores the `layer` argument. Note that the same sql statement can be passed to the geometry readers, so we get the matching sets of information. `vapour_read_geometry` will return NULL for each missing geometry if the statement doesn't include geometry explicitly or implicitly, but `vapour_read_geometry`, `vapour_read_geometry_text` and `vapour_read_extent` all explicitly modify the statement "SELECT *". (We are also assuming the data source hasn't changed between accesses ... let me know if this causes you problems!). ```{r} ## note, this code assumes OGRSQL dialect current_dialect <- Sys.getenv("vapour.sql.dialect") Sys.unsetenv("vapour.sql.dialect") ## ensure that default dialect is used ## (in this case it's 'OGRSQL' but we only want to record the state and reset after) ## later on, when done do Sys.setenv(vapour.sql.dialect = current_dialect) vapour_read_fields(mvfile, sql = "SELECT NAME, PLAN_REF FROM list_locality_postcode_meander_valley WHERE POSTCODE < 7291") vapour_read_fields(mvfile, sql = "SELECT NAME, PLAN_REF, FID FROM list_locality_postcode_meander_valley WHERE POSTCODE = 7306") ``` Also note that FID is a special row number value, to be used a as general facility for selecting by structural row. This FID is driver-dependent, it can be 0- or 1-based, or completely arbitrary. Variously, drivers (GDAL's formats) are 0- or 1- based with the FID. Others (such as OSM) are arbitrary, and have non-sequential (and presumably persistent) FID values. ```{r} library(vapour) file0 <- "list_locality_postcode_meander_valley.tab" mvfile <- system.file("extdata/tab", file0, package="vapour") layer <- gsub(".tab$", "", basename(mvfile)) ## get the number of features by FID (DISTINCT should be redundant here) vapour_read_fields(mvfile, sql = sprintf("SELECT COUNT(DISTINCT FID) AS nfeatures FROM %s", layer)) ## note how TAB is 1-based vapour_read_fields(mvfile, sql = sprintf("SELECT COUNT(*) AS n FROM %s WHERE FID < 2", layer)) ## but SHP is 0-based shp <- system.file("extdata/point.shp", package="vapour") vapour_read_fields(shp, sql = sprintf("SELECT COUNT(*) AS n FROM %s WHERE FID < 2", "point")) ``` See https://gdal.org/user/ogr_sql_dialect.html Now we are done with our SQL, so in case something else was using `current_dialect`, restore it. 
```{r restore} Sys.setenv(vapour.sql.dialect = current_dialect) ``` There are many useful higher level operations that can be used with this. The simplest is the ability to use GDAL as a database-like connection to attribute tables. # SQL Dialect In earlier versions (pre 0.9.1) we couldn't control the SQL dialect in use, so we variously had available 'OGRSQL' or 'SQLITE', or whatever native dialect the format defaults to. We couldn't use 'OGRSQL' with a Geopackage, and neither could we use 'SQLITE' with a shapefile, MapInfo TAB, or ESRI file Geodatabase (these last three aren't "real" databases so don't have their own native SQL, and GDAL fills the gap with OGRSQL. Geopackage defaults to SQLITE, because that's what it is built upon. So for example. ```{r dialect} ## good citizenry current_dialect <- Sys.getenv("vapour.sql.dialect") Sys.setenv(vapour.sql.dialect = "SQLITE") ## now we can use SQLITE dialect with TAB vapour_read_fields(mvfile, sql = sprintf("SELECT st_area(GEOMETRY) AS area FROM %s LIMIT 1 ", layer)) ## but with OGRSQL we need Sys.setenv(vapour.sql.dialect = "OGRSQL") vapour_read_fields(mvfile, sql = sprintf("SELECT OGR_GEOM_AREA AS area FROM %s LIMIT 1 ", layer)) ``` # GDAL information Find the GDAL version and drivers available. ```{r GDAL-info} vapour_gdal_version() str(vapour_all_drivers()) ``` Find the driver that will be used for a given data source. ```{r GDAL-driver} vapour_driver(mvfile) ```
/scratch/gouwar.j/cran-all/cranData/vapour/vignettes/vapour.Rmd
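The `extent` spatial filter mentioned in the vignette has no worked example there; a sketch (the bounding-box values are arbitrary, and recall the filter is only applied when `sql` is non-empty):

```r
library(vapour)
mvfile <- system.file("extdata", "tab", "list_locality_postcode_meander_valley.tab", package = "vapour")

vapour_read_extent(mvfile,
                   sql = "SELECT * FROM list_locality_postcode_meander_valley",
                   extent = c(440000, 480000, 5400000, 5440000))  ## xmin, xmax, ymin, ymax (assumed values)
```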
#' @title Variance Estimation with Bootstrap-RCV
#'
#' @description Estimation of error variance using the Bootstrap-Refitted cross validation method in an ultrahigh dimensional dataset.
#'
#' @param x a matrix of covariates (n rows, p columns)
#' @param y a numeric response vector
#' @param a the elastic-net mixing parameter passed to glmnet (used when method = "lasso")
#' @param b the number of bootstrap samples
#' @param d the number of variables retained by the screening step
#' @param method variable screening method, one of "spam", "lasso" or "lsr"
#'
#' @return Error variance
#'
#' @export
bsrcv <- function(x, y, a, b, d, method = c("spam", "lasso", "lsr")) {
  method = match.arg(method)
  p <- ncol(x)
  n <- nrow(x)
  ## draw b bootstrap samples of the rows of x
  bs <- vector("list", b)
  xb <- vector("list", b)
  for (i in 1:b) {
    bs[[i]] <- sample(1:n, n, replace = TRUE)
    xb[[i]] <- x[bs[[i]], ]
  }
  if (method == "spam") {
    spam_var_rcv <- function(x, y, d) {
      p <- ncol(x)
      n <- nrow(x)
      k <- floor(n/2)
      x1 <- x[1:k, ]
      y1 <- y[1:k]
      x2 <- x[(k + 1):n, ]
      y2 <- y[(k + 1):n]
      n1 <- k
      n2 <- n - k
      requireNamespace("SAM")
      spam_fit_n1 <- samQL(x1, y1, p = 1)
      w_n1 <- row(as.matrix(spam_fit_n1$w[, 30]))[which(spam_fit_n1$w[, 30] != 0)]
      w_order_n1 <- head(order(spam_fit_n1$w[, 30], decreasing = TRUE), d)
      w_value_n1 <- as.matrix(spam_fit_n1$w[, 30])[w_order_n1, ]
      spam_selected_feature_n1 <- w_order_n1
      M1 <- length(spam_selected_feature_n1)
      selected_x2 <- x2[, spam_selected_feature_n1]
      fit_x2 <- lm(y2 ~ selected_x2 - 1)
      var1 <- sum((fit_x2$resid)^2)/(n - k - M1)
      spam_fit_n2 <- samQL(x2, y2, p = 1)
      w_n2 <- row(as.matrix(spam_fit_n2$w[, 30]))[which(spam_fit_n2$w[, 30] != 0)]
      w_order_n2 <- head(order(spam_fit_n2$w[, 30], decreasing = TRUE), d)
      w_value_n2 <- as.matrix(spam_fit_n2$w[, 30])[w_order_n2, ]
      spam_selected_feature_n2 <- w_order_n2
      M2 <- length(spam_selected_feature_n2)
      selected_x1 <- x1[, spam_selected_feature_n2]
      fit_x1 <- lm(y1 ~ selected_x1 - 1)
      var2 <- sum((fit_x1$resid)^2)/(k - M2)
      var_rcv <- (var1 + var2)/2
      return(var_rcv)
    }
    spam_var <- numeric()
    for (i in 1:b) {
      spam_var[[i]] <- spam_var_rcv(xb[[i]], y, d)
    }
    return(mean(spam_var))
  }
  if (method == "lasso") {
    lasso_var_rcv <- function(x, y, a, d) {
      p <- ncol(x)
      n <- nrow(x)
      k <- floor(n/2)
      x1 <- x[1:k, ]
      y1 <- y[1:k]
      x2 <- x[(k + 1):n, ]
      y2 <- y[(k + 1):n]
      n1 <- k
      n2 <- n - k
      requireNamespace("glmnet")
      lasso_fit_n1 <- glmnet(y = y1, x = x1, alpha = a, intercept = FALSE)
      lambda_n1 = tail(lasso_fit_n1$lambda, 1)
      beta_n1 = coef(lasso_fit_n1, s = lambda_n1)
      beta_n1 <- as.matrix(beta_n1)
      beta1_n1 <- as.matrix(beta_n1[-1])
      select_beta_lasso_n1 <- head(order(abs(beta1_n1), decreasing = TRUE), d)
      M1 <- length(select_beta_lasso_n1)
      selected_x2 <- x2[, select_beta_lasso_n1]
      fit_x2 <- lm(y2 ~ selected_x2 - 1)
      var1 <- sum((fit_x2$resid)^2)/(n - k - M1)
      lasso_fit_n2 <- glmnet(y = y2, x = x2, alpha = a, intercept = FALSE)
      lambda_n2 = tail(lasso_fit_n2$lambda, 1)
      beta_n2 = coef(lasso_fit_n2, s = lambda_n2)
      beta_n2 <- as.matrix(beta_n2)
      beta1_n2 <- as.matrix(beta_n2[-1])
      select_beta_lasso_n2 <- head(order(abs(beta1_n2), decreasing = TRUE), d)
      M2 <- length(select_beta_lasso_n2)
      selected_x1 <- x1[, select_beta_lasso_n2]
      fit_x1 <- lm(y1 ~ selected_x1 - 1)
      var2 <- sum((fit_x1$resid)^2)/(k - M2)
      var_rcv <- (var1 + var2)/2
      return(var_rcv)
    }
    lasso_var <- numeric()
    for (i in 1:b) {
      lasso_var[[i]] <- lasso_var_rcv(xb[[i]], y, a, d)
    }
    return(mean(lasso_var))
  }
  if (method == "lsr") {
    lsr_var_rcv <- function(x, y, d) {
      p <- ncol(x)
      n <- nrow(x)
      k <- floor(n/2)
      x1 <- x[1:k, ]
      y1 <- y[1:k]
      x2 <- x[(k + 1):n, ]
      y2 <- y[(k + 1):n]
      n1 <- k
      n2 <- n - k
      pValues_n1 <- numeric()
      for (i in 1:p) {
        fitlm_n1 <- lm(y1 ~ x1[, i])
        pValues_n1[i] <- summary(fitlm_n1)$coeff[2, 4]
      }
      ranking_n1 <- order(pValues_n1)
      pindex_n1 <- head(ranking_n1, 100)
      fitlm1_n1 <- lm(y1 ~ x1[, pindex_n1] - 1)
      requireNamespace("lm.beta")
      lmbeta_n1 <- lm.beta(fitlm1_n1)
      reg_beta_n1 <- lmbeta_n1$standardized.coefficients
      reg_beta_n1 <- as.matrix(reg_beta_n1)
      reg_beta_n1[is.na(reg_beta_n1)] <- 0
      reg_beta_select_n1 <- head(order(abs(reg_beta_n1), decreasing = TRUE), d)
      reg_feature_name_n1 <- reg_beta_n1[reg_beta_select_n1, 0]
      final_p_n1 <- summary(fitlm1_n1)$coeff[, 4]
      final_ranking_n1 <- order(final_p_n1)
      final_p_n1 <- as.matrix(final_p_n1)
      final_select_p_n1 <- row(final_p_n1)[which(final_p_n1 <= 0.05)]
      new_x2 <- x2[, pindex_n1]
      M1 <- length(reg_beta_select_n1)
      selected_x2 <- new_x2[, reg_beta_select_n1]
      fit_x2 <- lm(y2 ~ selected_x2 - 1)
      var1 <- sum((fit_x2$resid)^2)/(n - k - M1)
      ## second set
      pValues_n2 <- numeric()
      for (i in 1:p) {
        fitlm_n2 <- lm(y2 ~ x2[, i])
        pValues_n2[i] <- summary(fitlm_n2)$coeff[2, 4]
      }
      ranking_n2 <- order(pValues_n2)
      pindex_n2 <- head(ranking_n2, 100)
      fitlm1_n2 <- lm(y2 ~ x2[, pindex_n2] - 1)
      requireNamespace("lm.beta")
      lmbeta_n2 <- lm.beta(fitlm1_n2)
      reg_beta_n2 <- lmbeta_n2$standardized.coefficients
      reg_beta_n2 <- as.matrix(reg_beta_n2)
      reg_beta_n2[is.na(reg_beta_n2)] <- 0
      reg_beta_select_n2 <- head(order(abs(reg_beta_n2), decreasing = TRUE), d)
      reg_feature_name_n2 <- reg_beta_n2[reg_beta_select_n2, 0]
      final_p_n2 <- summary(fitlm1_n2)$coeff[, 4]
      final_ranking_n2 <- order(final_p_n2)
      final_p_n2 <- as.matrix(final_p_n2)
      final_select_p_n2 <- row(final_p_n2)[which(final_p_n2 <= 0.05)]
      new_x1 <- x1[, pindex_n2]
      M2 <- length(reg_beta_select_n2)
      selected_x1 <- new_x1[, reg_beta_select_n2]
      fit_x1 <- lm(y1 ~ selected_x1 - 1)
      var2 <- sum((fit_x1$resid)^2)/(n - k - M2)
      var_rcv <- (var1 + var2)/2
      return(var_rcv)
    }
    lsr_var <- numeric()
    for (i in 1:b) {
      lsr_var[[i]] <- lsr_var_rcv(xb[[i]], y, d)
    }
    lsr_bs_var <- mean(lsr_var)
    return(lsr_bs_var)
  }
}
/scratch/gouwar.j/cran-all/cranData/varEst/R/bsrcv.R
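A small simulated-data sketch of calling `bsrcv()` (the values of `a`, `b` and `d` are arbitrary; glmnet is required for the "lasso" method; the true error variance here is 1):

```r
set.seed(1)
n <- 50
p <- 200
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1] + 0.5 * x[, 2] - x[, 3] + rnorm(n)

bsrcv(x, y, a = 1, b = 10, d = 5, method = "lasso")
```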
#' @title Variance Estimation with Ensemble method
#'
#' @description Estimation of error variance using an ensemble method which combines bootstrapping and sampling with srswor in an ultrahigh dimensional dataset.
#'
#' @param x a matrix of covariates (n rows, p columns)
#' @param y a numeric response vector
#' @param a the elastic-net mixing parameter passed to glmnet (used when method = "lasso")
#' @param b the number of bootstrap samples
#' @param d the number of variables retained by the screening step
#' @param method variable screening method, one of "spam", "lasso" or "lsr"
#'
#' @return Error variance
#'
#' @export
ensemble <- function(x, y, a, b, d, method = c("spam", "lasso", "lsr")) {
  method = match.arg(method)
  p <- ncol(x)
  n <- nrow(x)
  if (method == "spam") {
    requireNamespace("SAM")
    spam_fit_n1 <- samQL(x, y, p = 1)
    w_n1 <- row(as.matrix(spam_fit_n1$w[, 30]))[which(spam_fit_n1$w[, 30] != 0)]
    w_order_n1 <- head(order(spam_fit_n1$w[, 30], decreasing = TRUE), d)
    w_value_n1 <- as.matrix(spam_fit_n1$w[, 30])[w_order_n1, ]
    spam_selected_feature_n1 <- w_order_n1
    srswor_index <- combn(spam_selected_feature_n1, floor(d/2), FUN = NULL, simplify = FALSE)
    s <- dim(combn(d, floor(d/2)))[2]
    M1 <- dim(combn(d, floor(d/2)))[1]
    final_var <- vector("list", b)
    bs <- vector("list", b)
    xb <- vector("list", b)
    for (i in 1:b) {
      bs[[i]] <- sample(1:n, n, replace = TRUE)
      xb[[i]] <- x[bs[[i]], ]
      var <- numeric()
      selected_x <- vector("list", s)
      for (j in 1:s) {
        selected_x[[j]] <- xb[[i]][, srswor_index[[j]]]
        fit_x2 <- lm(y ~ selected_x[[j]] - 1)
        var[[j]] <- sum((fit_x2$resid)^2)/(n - M1)
      }
      final_var[[i]] <- var
    }
    final_spam_var2 <- numeric()
    final_spam_var1 <- vector("list", b)
    for (i in 1:b) {
      final_spam_var <- numeric()
      for (j in 1:b) {
        final_spam_var[[j]] <- mean(final_var[[i]][[j]])
      }
      final_spam_var1[[i]] <- final_spam_var
      final_spam_var2[[i]] <- mean(final_spam_var1[[i]])
    }
    final_spam_var3 <- mean(final_spam_var2)
    return(final_spam_var3)
  }
  if (method == "lasso") {
    requireNamespace("glmnet")
    lasso_fit_n1 <- glmnet(y = y, x = x, alpha = a, intercept = FALSE)
    lambda_n1 = tail(lasso_fit_n1$lambda, 1)
    beta_n1 = coef(lasso_fit_n1, s = lambda_n1)
    beta_n1 <- as.matrix(beta_n1)
    beta1_n1 <- as.matrix(beta_n1[-1])
    select_beta_lasso_n1 <- head(order(abs(beta1_n1), decreasing = TRUE), d)
    srswor_index <- combn(select_beta_lasso_n1, floor(d/2), FUN = NULL, simplify = FALSE)
    s <- dim(combn(d, floor(d/2)))[2]
    M1 <- dim(combn(d, floor(d/2)))[1]
    final_var <- vector("list", b)
    bs <- vector("list", b)
    xb <- vector("list", b)
    for (i in 1:b) {
      bs[[i]] <- sample(1:n, n, replace = TRUE)
      xb[[i]] <- x[bs[[i]], ]
      var <- numeric()
      selected_x <- vector("list", s)
      for (j in 1:s) {
        selected_x[[j]] <- xb[[i]][, srswor_index[[j]]]
        fit_x2 <- lm(y ~ selected_x[[j]] - 1)
        var[[j]] <- sum((fit_x2$resid)^2)/(n - M1)
      }
      final_var[[i]] <- var
    }
    final_lasso_var2 <- numeric()
    final_lasso_var1 <- vector("list", b)
    for (i in 1:b) {
      final_lasso_var <- numeric()
      for (j in 1:b) {
        final_lasso_var[[j]] <- mean(final_var[[i]][[j]])
      }
      final_lasso_var1[[i]] <- final_lasso_var
      final_lasso_var2[[i]] <- mean(final_lasso_var1[[i]])
    }
    final_lasso_var3 <- mean(final_lasso_var2)
    return(final_lasso_var3)
  }
  if (method == "lsr") {
    pValues_n <- numeric()
    for (i in 1:p) {
      fitlm_n <- lm(y ~ x[, i])
      pValues_n[i] <- summary(fitlm_n)$coeff[2, 4]
    }
    ranking_n <- order(pValues_n)
    pindex_n <- head(ranking_n, 100)
    fitlm1_n <- lm(y ~ x[, pindex_n] - 1)
    requireNamespace("lm.beta")
    lmbeta_n <- lm.beta(fitlm1_n)
    reg_beta_n <- lmbeta_n$standardized.coefficients
    reg_beta_n <- as.matrix(reg_beta_n)
    reg_beta_n[is.na(reg_beta_n)] <- 0
    reg_beta_select_n <- head(order(abs(reg_beta_n), decreasing = TRUE), d)
    reg_feature_name_n <- reg_beta_n[reg_beta_select_n, 0]
    final_p_n <- summary(fitlm1_n)$coeff[, 4]
    final_ranking_n <- order(final_p_n)
    final_p_n <- as.matrix(final_p_n)
    final_select_p_n <- row(final_p_n)[which(final_p_n <= 0.05)]
    new_x <- x[, pindex_n]
    srswor_index <- combn(reg_beta_select_n, floor(d/2), FUN = NULL, simplify = FALSE)
    s <- dim(combn(d, floor(d/2)))[2]
    M1 <- dim(combn(d, floor(d/2)))[1]
    final_var <- vector("list", b)
    bs <- vector("list", b)
    xb <- vector("list", b)
    for (i in 1:b) {
      bs[[i]] <- sample(1:n, n, replace = TRUE)
      xb[[i]] <- x[bs[[i]], ]
      var <- numeric()
      selected_x <- vector("list", s)
      for (j in 1:s) {
        selected_x[[j]] <- xb[[i]][, srswor_index[[j]]]
        fit_x2 <- lm(y ~ selected_x[[j]] - 1)
        var[[j]] <- sum((fit_x2$resid)^2)/(n - M1)
      }
      final_var[[i]] <- var
    }
    final_lsr_var2 <- numeric()
    final_lsr_var1 <- vector("list", b)
    for (i in 1:b) {
      final_lsr_var <- numeric()
      for (j in 1:b) {
        final_lsr_var[[j]] <- mean(final_var[[i]][[j]])
      }
      final_lsr_var1[[i]] <- final_lsr_var
      final_lsr_var2[[i]] <- mean(final_lsr_var1[[i]])
    }
    final_lsr_var3 <- mean(final_lsr_var2)
    return(final_lsr_var3)
  }
}
/scratch/gouwar.j/cran-all/cranData/varEst/R/ensemble.R
#' @title Variance Estimation with kfold-RCV
#'
#' @description Estimation of error variance using k-fold Refitted cross validation in an ultrahigh dimensional dataset.
#'
#' @param x a matrix of covariates (n rows, p columns)
#' @param y a numeric response vector
#' @param a the elastic-net mixing parameter passed to glmnet (used when method = "lasso")
#' @param k the number of folds
#' @param d the number of variables retained by the screening step
#' @param method variable screening method, one of "spam", "lasso" or "lsr"
#'
#' @return Error variance
#'
#' @export
krcv <- function(x, y, a, k, d, method = c("spam", "lasso", "lsr")) {
  method = match.arg(method)
  p <- ncol(x)
  n <- nrow(x)
  requireNamespace("caret")
  flds <- createFolds(y, k, list = TRUE, returnTrain = FALSE)
  ## split the response and the covariates into the k folds
  split_up <- lapply(flds, function(ind, dat) dat[ind, ], dat = y)
  split_upx <- lapply(flds, function(ind, dat) dat[ind, ], dat = x)
  if (method == "spam") {
    w_n <- vector("list", k)
    w_order_n <- vector("list", k)
    w_value_n <- vector("list", k)
    spam_selected_feature_n <- vector("list", k)
    M <- numeric()
    requireNamespace("SAM")
    for (j in 1:k) {
      spam_fit_n <- samQL(split_upx[[j]], split_up[[j]], p = 1)
      w_n[[j]] <- row(as.matrix(spam_fit_n$w[, 30]))[which(spam_fit_n$w[, 30] != 0)]
      w_order_n[[j]] <- head(order(spam_fit_n$w[, 30], decreasing = TRUE), d)
      w_value_n[[j]] <- as.matrix(spam_fit_n$w[, 30])[w_order_n[[j]], ]
      spam_selected_feature_n[[j]] <- w_order_n[[j]]
      M[[j]] <- length(spam_selected_feature_n[[j]])
    }
    final_selected_xM <- vector("list", k)
    final_var <- vector("list", k)
    for (t in 1:k) {
      selected_xM <- vector("list", k)
      var <- numeric()
      for (l in 1:k) {
        if (t != l) {
          selected_xM[[l]] <- split_upx[[t]][, spam_selected_feature_n[[l]]]
          fit_xM <- lm(split_up[[t]] ~ selected_xM[[l]] - 1)
          var[[l]] <- sum((fit_xM$resid)^2)/(n/k - M[[t]])
        }
      }
      final_selected_xM[[t]] <- selected_xM
      final_var[[t]] <- var
    }
    var_krcv <- numeric()
    for (i in 1:k) {
      var_krcv[[i]] <- mean(final_var[[i]], na.rm = TRUE)
    }
  }
  if (method == "lasso") {
    lambda_n <- numeric()
    beta_n <- vector("list", k)
    beta1_n <- vector("list", k)
    select_beta_lasso_n <- vector("list", k)
    M <- numeric()
    requireNamespace("glmnet")
    for (j in 1:k) {
      lasso_fit_n <- glmnet(y = split_up[[j]], x = split_upx[[j]], alpha = a, intercept = FALSE)
      lambda_n <- tail(lasso_fit_n$lambda, 1)
      beta_n <- coef(lasso_fit_n, s = lambda_n)
      beta_n <- as.matrix(beta_n)
      beta1_n[[j]] <- as.matrix(beta_n[-1])
      select_beta_lasso_n[[j]] <- head(order(abs(beta1_n[[j]]), decreasing = TRUE), d)
      M[[j]] <- length(select_beta_lasso_n[[j]])
    }
    final_selected_xM <- vector("list", k)
    final_var <- vector("list", k)
    for (t in 1:k) {
      selected_xM <- vector("list", k)
      var <- numeric()
      for (l in 1:k) {
        if (t != l) {
          selected_xM[[l]] <- split_upx[[t]][, select_beta_lasso_n[[l]]]
          fit_xM <- lm(split_up[[t]] ~ selected_xM[[l]] - 1)
          var[[l]] <- sum((fit_xM$resid)^2)/(n/k - M[[t]])
        }
      }
      final_selected_xM[[t]] <- selected_xM
      final_var[[t]] <- var
    }
    var_krcv <- numeric()
    for (i in 1:k) {
      var_krcv[[i]] <- mean(final_var[[i]], na.rm = TRUE)
    }
  }
  if (method == "lsr") {
    pindex_n <- vector("list", k)
    reg_beta_select_n <- vector("list", k)
    reg_feature_name_n <- vector("list", k)
    new_split_upx <- vector("list", k)
    M <- numeric()
    for (j in 1:k) {
      pValues_n <- numeric()
      for (i in 1:p) {
        fitlm_n <- lm(split_up[[j]] ~ split_upx[[j]][, i])
        pValues_n[i] <- summary(fitlm_n)$coeff[2, 4]
      }
      ranking_n <- order(pValues_n)
      pindex_n[[j]] <- head(ranking_n, 100)
      fitlm1_n <- lm(split_up[[j]] ~ split_upx[[j]][, pindex_n[[j]]] - 1)
      requireNamespace("lm.beta")
      lmbeta_n <- lm.beta(fitlm1_n)
      reg_beta_n <- lmbeta_n$standardized.coefficients
      reg_beta_n <- as.matrix(reg_beta_n)
      reg_beta_n[is.na(reg_beta_n)] <- 0
      reg_beta_select_n[[j]] <- head(order(abs(reg_beta_n), decreasing = TRUE), d)
      reg_feature_name_n[[j]] <- reg_beta_n[reg_beta_select_n[[j]], 0]
      final_p_n <- summary(fitlm1_n)$coeff[, 4]
      final_ranking_n <- order(final_p_n)
      final_p_n <- as.matrix(final_p_n)
      final_select_p_n <- row(final_p_n)[which(final_p_n <= 0.05)]
      new_split_upx[[j]] <- split_upx[[j]][, pindex_n[[j]]]
      M[[j]] <- length(reg_beta_select_n[[j]])
    }
    final_selected_xM <- vector("list", k)
    final_var <- vector("list", k)
    for (t in 1:k) {
      selected_xM <- vector("list", k)
      var <- numeric()
      for (l in 1:k) {
        if (t != l) {
          selected_xM[[l]] <- new_split_upx[[t]][, reg_beta_select_n[[l]]]
          fit_xM <- lm(split_up[[t]] ~ selected_xM[[l]] - 1)
          var[[l]] <- sum((fit_xM$resid)^2)/(n/k - M[[t]])
        }
      }
      final_selected_xM[[t]] <- selected_xM
      final_var[[t]] <- var
    }
    var_krcv <- numeric()
    for (i in 1:k) {
      var_krcv[[i]] <- mean(final_var[[i]], na.rm = TRUE)
    }
  }
  final_var_krcv <- mean(var_krcv)
  return(final_var_krcv)
}
/scratch/gouwar.j/cran-all/cranData/varEst/R/krcv.R
#' @title Variance Estimation with Refitted Cross Validation (RCV)
#'
#' @description Estimation of error variance using refitted cross validation in an ultrahigh dimensional dataset.
#'
#' @param x Matrix of predictors; rows are observations and columns are features.
#' @param y Numeric response vector of length nrow(x).
#' @param a Elastic-net mixing parameter (alpha) passed to glmnet; only used when method = "lasso".
#' @param d Number of top-ranked features retained in each half for the refitting step.
#' @param method Screening method applied to each half: "spam", "lasso", or "lsr".
#'
#' @return The estimated error variance.
#'
#' @export
rcv <- function(x, y, a, d, method = c("spam", "lasso", "lsr")) {
  method <- match.arg(method)
  p <- ncol(x)
  n <- nrow(x)
  k <- floor(n / 2)
  ## split the sample into two halves
  x1 <- x[1:k, ]
  y1 <- y[1:k]
  x2 <- x[(k + 1):n, ]
  y2 <- y[(k + 1):n]
  n1 <- k
  n2 <- n - k

  if (method == "spam") {
    requireNamespace("SAM")
    ## screen on the first half with a sparse additive model
    spam_fit_n1 <- samQL(x1, y1, p = 1)
    w_n1 <- row(as.matrix(spam_fit_n1$w[, 30]))[which(spam_fit_n1$w[, 30] != 0)]
    w_order_n1 <- head(order(spam_fit_n1$w[, 30], decreasing = TRUE), d)
    w_value_n1 <- as.matrix(spam_fit_n1$w[, 30])[w_order_n1, ]
    spam_selected_feature_n1 <- w_order_n1
    M1 <- length(spam_selected_feature_n1)
    ## refit on the second half using the features selected on the first half
    selected_x2 <- x2[, spam_selected_feature_n1]
    fit_x2 <- lm(y2 ~ selected_x2 - 1)
    var1 <- sum((fit_x2$resid)^2) / (n - k - M1)

    ## repeat with the roles of the two halves swapped
    spam_fit_n2 <- samQL(x2, y2, p = 1)
    w_n2 <- row(as.matrix(spam_fit_n2$w[, 30]))[which(spam_fit_n2$w[, 30] != 0)]
    w_order_n2 <- head(order(spam_fit_n2$w[, 30], decreasing = TRUE), d)
    w_value_n2 <- as.matrix(spam_fit_n2$w[, 30])[w_order_n2, ]
    spam_selected_feature_n2 <- w_order_n2
    M2 <- length(spam_selected_feature_n2)
    selected_x1 <- x1[, spam_selected_feature_n2]
    fit_x1 <- lm(y1 ~ selected_x1 - 1)
    var2 <- sum((fit_x1$resid)^2) / (k - M2)
  }

  if (method == "lasso") {
    requireNamespace("glmnet")
    ## screen on the first half with the lasso (smallest lambda on the path)
    lasso_fit_n1 <- glmnet(y = y1, x = x1, alpha = a, intercept = FALSE)
    lambda_n1 <- tail(lasso_fit_n1$lambda, 1)
    beta_n1 <- coef(lasso_fit_n1, s = lambda_n1)
    beta_n1 <- as.matrix(beta_n1)
    beta1_n1 <- as.matrix(beta_n1[-1])
    select_beta_lasso_n1 <- head(order(abs(beta1_n1), decreasing = TRUE), d)
    M1 <- length(select_beta_lasso_n1)
    selected_x2 <- x2[, select_beta_lasso_n1]
    fit_x2 <- lm(y2 ~ selected_x2 - 1)
    var1 <- sum((fit_x2$resid)^2) / (n - k - M1)

    lasso_fit_n2 <- glmnet(y = y2, x = x2, alpha = a, intercept = FALSE)
    lambda_n2 <- tail(lasso_fit_n2$lambda, 1)
    beta_n2 <- coef(lasso_fit_n2, s = lambda_n2)
    beta_n2 <- as.matrix(beta_n2)
    beta1_n2 <- as.matrix(beta_n2[-1])
    select_beta_lasso_n2 <- head(order(abs(beta1_n2), decreasing = TRUE), d)
    M2 <- length(select_beta_lasso_n2)
    selected_x1 <- x1[, select_beta_lasso_n2]
    fit_x1 <- lm(y1 ~ selected_x1 - 1)
    var2 <- sum((fit_x1$resid)^2) / (k - M2)
  }

  if (method == "lsr") {
    ## marginal least-squares screening: rank features by univariate p-value
    pValues_n1 <- numeric()
    for (i in 1:p) {
      fitlm_n1 <- lm(y1 ~ x1[, i])
      pValues_n1[i] <- summary(fitlm_n1)$coeff[2, 4]
    }
    ranking_n1 <- order(pValues_n1)
    pindex_n1 <- head(ranking_n1, 100)
    fitlm1_n1 <- lm(y1 ~ x1[, pindex_n1] - 1)
    requireNamespace("lm.beta")
    lmbeta_n1 <- lm.beta(fitlm1_n1)
    reg_beta_n1 <- lmbeta_n1$standardized.coefficients
    reg_beta_n1 <- as.matrix(reg_beta_n1)
    reg_beta_n1[is.na(reg_beta_n1)] <- 0
    reg_beta_select_n1 <- head(order(abs(reg_beta_n1), decreasing = TRUE), d)
    reg_feature_name_n1 <- reg_beta_n1[reg_beta_select_n1, 0]
    final_p_n1 <- summary(fitlm1_n1)$coeff[, 4]
    final_ranking_n1 <- order(final_p_n1)
    final_p_n1 <- as.matrix(final_p_n1)
    final_select_p_n1 <- row(final_p_n1)[which(final_p_n1 <= 0.05)]
    new_x2 <- x2[, pindex_n1]
    M1 <- length(reg_beta_select_n1)
    selected_x2 <- new_x2[, reg_beta_select_n1]
    fit_x2 <- lm(y2 ~ selected_x2 - 1)
    var1 <- sum((fit_x2$resid)^2) / (n - k - M1)

    pValues_n2 <- numeric()
    for (i in 1:p) {
      fitlm_n2 <- lm(y2 ~ x2[, i])
      pValues_n2[i] <- summary(fitlm_n2)$coeff[2, 4]
    }
    ranking_n2 <- order(pValues_n2)
    pindex_n2 <- head(ranking_n2, 100)
    fitlm1_n2 <- lm(y2 ~ x2[, pindex_n2] - 1)
    requireNamespace("lm.beta")
    lmbeta_n2 <- lm.beta(fitlm1_n2)
    reg_beta_n2 <- lmbeta_n2$standardized.coefficients
    reg_beta_n2 <- as.matrix(reg_beta_n2)
    reg_beta_n2[is.na(reg_beta_n2)] <- 0
    reg_beta_select_n2 <- head(order(abs(reg_beta_n2), decreasing = TRUE), d)
    reg_feature_name_n2 <- reg_beta_n2[reg_beta_select_n2, 0]
    final_p_n2 <- summary(fitlm1_n2)$coeff[, 4]
    final_ranking_n2 <- order(final_p_n2)
    final_p_n2 <- as.matrix(final_p_n2)
    final_select_p_n2 <- row(final_p_n2)[which(final_p_n2 <= 0.05)]
    new_x1 <- x1[, pindex_n2]
    M2 <- length(reg_beta_select_n2)
    selected_x1 <- new_x1[, reg_beta_select_n2]
    fit_x1 <- lm(y1 ~ selected_x1 - 1)
    ## fit_x1 is fitted on the k observations of the first half, so the
    ## residual degrees of freedom are k - M2 (was n - k - M2, matching
    ## neither half when n is odd; the other branches use k - M2)
    var2 <- sum((fit_x1$resid)^2) / (k - M2)
  }

  var_rcv <- (var1 + var2) / 2
  return(var_rcv)
}
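
## Example (not part of the original file): a minimal sketch of calling rcv()
## on simulated high-dimensional data; the values of a and d are arbitrary
## illustration choices. Kept commented out so it does not run at load time.
# set.seed(1)
# n <- 100; p <- 400
# x <- matrix(rnorm(n * p), n, p)
# y <- x[, 1] + 0.8 * x[, 2] + rnorm(n)      # true error variance is 1
# rcv(x, y, a = 1, d = 5, method = "lasso")  # estimate should be close to 1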
/scratch/gouwar.j/cran-all/cranData/varEst/R/rcv.R
# create_cond_list
#
# Create the list of variables to condition on for the current variable of
# interest, xname.
create_cond_list = function(cond, threshold, xname, input) {
  stopifnot(is.logical(cond))
  if (!cond) return(NULL)
  if (threshold > 0 & threshold < 1) {
    ctrl = ctree_control(teststat = "quad", testtype = "Univariate", stump = TRUE)
    xnames = names(input)
    xnames = xnames[xnames != xname]
    ct = ctree(as.formula(paste(xname, "~", paste(xnames, collapse = "+"), collapse = "")),
               data = input, controls = ctrl)
    crit = ct@tree$criterion[[2]]
    crit[which(is.na(crit))] = 0
    return(xnames[crit > threshold])
  }
  stop("threshold must lie strictly between 0 and 1")
}

### extract IDs of _all_ variables the tree uses for splitting
varIDs = function(node) {
  v = c()
  foo = function(node) {
    if (node[[4]]) return(NULL)  # terminal node
    v <<- c(v, node[[5]][[1]])
    foo(node[[8]])
    foo(node[[9]])
  }
  foo(node)
  return(v)
}

conditional_perm = function(cond, xnames, input, tree, oob) {
  ## initial partitioning => all observations in one partition
  parts = rep(1, length(oob))
  ## develop the partitioning by going over all the conditioning variables
  for (condVar in cond) {
    ## varID is the variable index or column number of input (predictor
    ## matrix), not the variable name!
    varID = which(xnames == condVar)
    ## if the conditioning variable is not used for splitting in the current
    ## tree, proceed with the next conditioning variable
    cl = cutpoints_list(tree, varID)
    if (is.null(cl)) next
    ## process cutpoints for the different types of variables
    x = input[, varID]
    xclass = class(x)[1]
    if (xclass == "integer") xclass = "numeric"
    block = switch(xclass,
      "numeric" = cut(x, breaks = c(-Inf, sort(unique(cl)), Inf)),
      "ordered" = cut(as.numeric(x), breaks = c(-Inf, sort(unique(cl)), Inf)),
      "factor" = {
        CL = matrix(as.logical(cl), nrow = nlevels(x))
        rs = rowSums(CL)
        dlev = (1:nrow(CL))[rs %in% rs[duplicated(rs)]]
        fuse = c()
        for (ii in dlev) {
          for (j in dlev[dlev > ii]) {
            if (all(CL[ii, ] == CL[j, ])) fuse = rbind(fuse, c(ii, j))
          }
        }
        xlev = 1:nlevels(x)
        newl = nlevels(x) + 1
        block = as.integer(x)
        for (l in xlev) {
          if (NROW(fuse) == 0) break
          if (any(fuse[, 1] == l)) {
            f = c(l, fuse[fuse[, 1] == l, 2])
            fuse = fuse[!fuse[, 1] %in% f, , drop = FALSE]
            block[block %in% f] = newl
            newl = newl + 1
          }
        }
        as.factor(block)
      })
    ## add the partitioning based on the split points of the variable to the
    ## current partitioning
    parts = interaction(parts, as.numeric(block), drop = TRUE, sep = "")
  }
  ## if none of the conditioning variables are used in the tree
  if (!length(levels(parts)) > 1) {
    perm = sample(which(oob))
    return(perm)
  } else {
    ## one conditional permutation
    perm = 1:nrow(input)
    for (part in levels(parts)) {
      index = which(parts == part & oob)
      if (length(index) > 1) perm[index] = index[sample.int(length(index))]
    }
    return(perm[oob])
  }
}

# cutpoints_list() returns:
# - a vector of cutpoints (length = number of cutpoints)
#   if the variable is continuous
# - a vector of indicators (length = number of categories x number of cutpoints)
#   if the variable is categorical (nominal or ordered)
cutpoints_list = function(tree, variableID) {
  cutp = function(node) {
    if (node[[4]]) return(NULL)
    cp = NULL
    if (node[[5]][[1]] == variableID) cp = node[[5]][[3]]
    nl = cutp(node[[8]])
    nr = cutp(node[[9]])
    return(c(cp, nl, nr))
  }
  return(cutp(tree))
}
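
## Illustration (not part of the original file): these helpers walk the S4
## internals of a party::cforest ensemble, as used by varImp() below. A
## minimal sketch, assuming party is attached; the small ntree is arbitrary.
# library(party)
# cf = cforest(Species ~ ., data = iris,
#              control = cforest_unbiased(mtry = 2, ntree = 5))
# tree1 = cf@ensemble[[1]]
# unique(varIDs(tree1))     # column indices of the predictors used for splits
# cutpoints_list(tree1, 4)  # cutpoints of predictor 4; NULL if it is unused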
/scratch/gouwar.j/cran-all/cranData/varImp/R/helpers.R
#' varImp
#'
#' Computes the variable importance for arbitrary measures from the 'measures' package.
#'
#' @param object An object as returned by cforest.
#' @param mincriterion The value of the test statistic or 1 - p-value that must be exceeded in order to include a
#' split in the computation of the importance. The default mincriterion = 0 guarantees that all splits are included.
#' @param conditional A logical determining whether unconditional or conditional computation of the importance is performed.
#' @param threshold The threshold value for (1 - p-value) of the association between the variable of interest and a
#' covariate, which must be exceeded in order to include the covariate in the conditioning scheme for the variable of
#' interest (only relevant if conditional = TRUE). A threshold value of zero includes all covariates.
#' @param nperm The number of permutations performed.
#' @param OOB A logical determining whether the importance is computed from the out-of-bag sample or the learning
#' sample (not suggested).
#' @param pre1.0_0 Prior to party version 1.0-0, the actual data values were permuted according to the original
#' permutation importance suggested by Breiman (2001). Now the assignments to child nodes of splits in the variable
#' of interest are permuted as described by Hapfelmeier et al. (2012), which allows for missing values in the
#' explanatory variables and is more efficient with respect to memory consumption and computing time. This method does not
#' apply to conditional variable importances.
#' @param measure The name of the measure of the 'measures' package that should be used for the variable importance calculation.
#' @param ... Further arguments (like positive or negative class) that are needed by the measure.
#' @details Many measures have not been tested for the usefulness of random forests variable importance. Use at your own risk.
#' @return Vector with computed permutation importance for each variable.
#' @importFrom utils lsf.str
#' @importFrom measures multiclass.Brier listAllMeasures
#' @export
#'
#' @examples
#' # multiclass case
#' data(iris)
#' iris.cf = cforest(Species ~ ., data = iris, control = cforest_unbiased(mtry = 2, ntree = 50))
#' set.seed(123)
#' vimp = varImp(object = iris.cf, measure = "multiclass.Brier")
#' vimp
varImp = function(object, mincriterion = 0, conditional = FALSE, threshold = 0.2,
                  nperm = 1, OOB = TRUE, pre1.0_0 = conditional,
                  measure = "multiclass.Brier", ...) {
  # cf. Janitza
  # Some tests
  measureList = listAllMeasures()
  if (!(measure %in% measureList[, 1]))
    stop("measure should be a measure of the measures package")

  # Test the class of the response
  response = object@responses
  CLASS = all(response@is_nominal | response@is_ordinal)
  PROB = measureList$probabilities[measureList[, 1] == measure]
  MEASURECLASS = measureList$task[measureList[, 1] == measure]
  if (CLASS & (MEASURECLASS %in% c("regression", "multilabel")))
    stop("Measure is not suitable for classification")
  if (!CLASS & !(MEASURECLASS %in% "regression"))
    stop("Measure is not suitable for regression")
  MEASUREMINIMIZE = measureList$minimize[measureList[, 1] == measure]

  input = object@data@get("input")
  xnames = colnames(input)
  inp = initVariableFrame(input, trafo = NULL)
  y = object@responses@variables[[1]]
  if (length(response@variables) != 1)
    stop("cannot compute variable importance measure for multivariate response")
  if (conditional || pre1.0_0) {
    if (!all(complete.cases(inp@variables)))
      stop("cannot compute variable importance measure with missing values")
  }

  # error() computes the chosen measure on the out-of-bag predictions
  if (CLASS) {
    if (PROB) {
      error = function(x, oob, ...) {
        xoob = t(sapply(x, function(x) x))[oob, ]
        colnames(xoob) = levels(y)
        if (measure == "AUC") xoob = xoob[, 2]
        yoob = y[oob]
        return(do.call(measure, list(xoob, yoob, ...)))
      }
    } else {
      error = function(x, oob, ...) {
        xoob = t(sapply(x, function(x) x))[oob, ]
        colnames(xoob) = levels(y)
        if (measure == "AUC") xoob = xoob[, 2]
        xoob = colnames(xoob)[max.col(xoob, ties.method = "first")]
        yoob = y[oob]
        return(do.call(measure, list(yoob, xoob, ...)))
      }
    }
  } else {
    error = function(x, oob, ...) {
      xoob = unlist(x)[oob]
      yoob = y[oob]
      return(do.call(measure, list(xoob, yoob, ...)))
    }
  }

  w = object@initweights
  if (max(abs(w - 1)) > sqrt(.Machine$double.eps))
    warning(sQuote("varImp"), " with non-unity weights might give misleading results")

  perror = matrix(0, nrow = nperm * length(object@ensemble), ncol = length(xnames))
  colnames(perror) = xnames
  for (b in 1:length(object@ensemble)) {
    tree = object@ensemble[[b]]
    if (OOB) {
      oob = object@weights[[b]] == 0
    } else {
      oob = rep(TRUE, length(xnames))
    }
    p = party_intern(tree, inp, mincriterion, -1L, fun = "R_predict")
    eoob = error(p, oob, ...)
    for (j in unique(varIDs(tree))) {
      for (per in 1:nperm) {
        if (conditional || pre1.0_0) {
          tmp = inp
          ccl = create_cond_list(conditional, threshold, xnames[j], input)
          if (is.null(ccl)) {
            perm = sample(which(oob))
          } else {
            perm = conditional_perm(ccl, xnames, input, tree, oob)
          }
          tmp@variables[[j]][which(oob)] = tmp@variables[[j]][perm]
          p = party_intern(tree, tmp, mincriterion, -1L, fun = "R_predict")
        } else {
          p = party_intern(tree, inp, mincriterion, as.integer(j), fun = "R_predict")
        }
        minSign = ifelse(MEASUREMINIMIZE, 1, -1)
        perror[(per + (b - 1) * nperm), j] = minSign * (error(p, oob, ...) - eoob)
      }
    }
  }
  perror = as.data.frame(perror)
  return(MeanDecrease = colMeans(perror, na.rm = TRUE))
}
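
## Sketch (not part of the original file): regression usage, assuming the
## 'measures' package provides the "MSE" measure (a minimized measure, so a
## larger importance still means a larger error increase after permutation).
# data(airquality)
# aq = airquality[complete.cases(airquality), ]
# aq.cf = cforest(Ozone ~ ., data = aq,
#                 control = cforest_unbiased(mtry = 2, ntree = 50))
# varImp(aq.cf, measure = "MSE")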
/scratch/gouwar.j/cran-all/cranData/varImp/R/varImp.R
#' varImpACC
#'
#' Computes the variable importance regarding the accuracy (ACC).
#'
#' @param object An object as returned by cforest.
#' @param mincriterion The value of the test statistic or 1 - p-value that must be exceeded in order to include a
#' split in the computation of the importance. The default mincriterion = 0 guarantees that all splits are included.
#' @param conditional A logical determining whether unconditional or conditional computation of the importance is performed.
#' @param threshold The threshold value for (1 - p-value) of the association between the variable of interest and a
#' covariate, which must be exceeded in order to include the covariate in the conditioning scheme for the variable of
#' interest (only relevant if conditional = TRUE). A threshold value of zero includes all covariates.
#' @param nperm The number of permutations performed.
#' @param OOB A logical determining whether the importance is computed from the out-of-bag sample or the learning
#' sample (not suggested).
#' @param pre1.0_0 Prior to party version 1.0-0, the actual data values were permuted according to the original
#' permutation importance suggested by Breiman (2001). Now the assignments to child nodes of splits in the variable
#' of interest are permuted as described by Hapfelmeier et al. (2012), which allows for missing values in the
#' explanatory variables and is more efficient with respect to memory consumption and computing time. This method does not
#' apply to conditional variable importances.
#'
#' @return Vector with computed permutation importance for each variable
#' @export
#'
#' @examples
#' data(iris)
#' iris2 = iris
#' iris2$Species = factor(iris$Species == "versicolor")
#' iris.cf = cforest(Species ~ ., data = iris2, control = cforest_unbiased(mtry = 2, ntree = 50))
#' set.seed(123)
#' a = varImpACC(object = iris.cf)
varImpACC = function(object, mincriterion = 0, conditional = FALSE, threshold = 0.2,
                     nperm = 1, OOB = TRUE, pre1.0_0 = conditional) {
  return(varImp(object, mincriterion = mincriterion, conditional = conditional,
                threshold = threshold, nperm = nperm, OOB = OOB,
                pre1.0_0 = pre1.0_0, measure = "ACC"))
}
/scratch/gouwar.j/cran-all/cranData/varImp/R/varImpACC.R
#' varImpAUC
#'
#' Computes the variable importance regarding the AUC. Ties are not taken into account in the AUC definition, as the
#' tie-free version provided better results in the paper of Janitza et al. (2013) (see References section).
#'
#' For using the original AUC definition and multiclass AUC you can use the varImp function and specify the particular measure.
#'
#' @param object An object as returned by cforest.
#' @param mincriterion The value of the test statistic or 1 - p-value that must be exceeded in order to include a
#' split in the computation of the importance. The default mincriterion = 0 guarantees that all splits are included.
#' @param conditional A logical determining whether unconditional or conditional computation of the importance is performed.
#' @param threshold The threshold value for (1 - p-value) of the association between the variable of interest and a
#' covariate, which must be exceeded in order to include the covariate in the conditioning scheme for the variable of
#' interest (only relevant if conditional = TRUE). A threshold value of zero includes all covariates.
#' @param nperm The number of permutations performed.
#' @param OOB A logical determining whether the importance is computed from the out-of-bag sample or the learning
#' sample (not suggested).
#' @param pre1.0_0 Prior to party version 1.0-0, the actual data values were permuted according to the original
#' permutation importance suggested by Breiman (2001). Now the assignments to child nodes of splits in the variable
#' of interest are permuted as described by Hapfelmeier et al. (2012), which allows for missing values in the
#' explanatory variables and is more efficient with respect to memory consumption and computing time. This method does not
#' apply to conditional variable importances.
#' @references https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-14-119
#'
#' @return Vector with computed permutation importance for each variable
#' @export
#' @importFrom stats as.formula complete.cases
#' @importFrom party ctree_control initVariableFrame ctree initVariableFrame party_intern
#'
#' @examples
#' # binary case
#' data(iris)
#' iris2 = iris
#' iris2$Species = factor(iris$Species == "versicolor")
#' iris.cf = cforest(Species ~ ., data = iris2, control = cforest_unbiased(mtry = 2, ntree = 50))
#' set.seed(123)
#' varImpAUC(object = iris.cf)
varImpAUC = function(object, mincriterion = 0, conditional = FALSE, threshold = 0.2,
                     nperm = 1, OOB = TRUE, pre1.0_0 = conditional) {
  # cf. Janitza
  response = object@responses
  input = object@data@get("input")
  xnames = colnames(input)
  inp = initVariableFrame(input, trafo = NULL)
  y = object@responses@variables[[1]]
  if (length(response@variables) != 1)
    stop("cannot compute variable importance measure for multivariate response")
  if (conditional || pre1.0_0) {
    if (!all(complete.cases(inp@variables)))
      stop("cannot compute variable importance measure with missing values")
  }
  CLASS = all(response@is_nominal)
  ORDERED = all(response@is_ordinal)
  if (!CLASS & !ORDERED)
    stop("only calculable for classification")
  if (CLASS) {
    if (nlevels(y) > 2) {
      stop("varImpAUC() is only usable for binary classification. For multiclass classification please use the standard varImp() function.")
      # MULTICLASS
      # if (method == "ova") { # one-versus-all approach
      #   error = function(x, oob) {
      #     xoob = t(sapply(x, function(x) x))[oob, ]
      #     yoob = y[oob]
      #     return(measures::multiclass.AUNU(xoob, yoob))
      #   }
      # } else if (method == "ovo") { # one-versus-one, pairwise approach (Hand & Till)
      #   error = function(x, oob) {
      #     xoob = t(sapply(x, function(x) x))[oob, ]
      #     yoob = y[oob]
      #     return(measures::multiclass.AU1U(xoob, yoob))
      #   }
      # }
    } else {
      # AUC computation for a binary response (see Janitza)
      # error = function(x, oob) {
      #   xoob = sapply(x, function(x) x[1])[oob]
      #   yoob = y[oob]
      #   pos = levels(y)[1]
      #   which1 = which(yoob == levels(y)[1])
      #   noob1 = length(which1)
      #   noob = length(yoob)
      #   if (noob1 == 0 | noob1 == noob) { return(NA) } # AUC cannot be computed if all OOB observations are from one class
      #   return(measures::AUC(xoob, yoob, positive = pos))
      # }
      error = function(x, oob) {
        xoob = sapply(x, function(x) x[1])[oob]
        yoob = y[oob]
        which1 = which(yoob == levels(y)[2])
        noob1 = length(which1)
        noob = length(yoob)
        # AUC cannot be computed if all OOB observations are from one class
        if (noob1 == 0 | noob1 == noob) { return(NA) }
        # calculate the (tie-free) AUC
        return(1 - sum(kronecker(xoob[which1], xoob[-which1], ">")) / (noob1 * (length(yoob) - noob1)))
      }
    }
  } else {
    if (ORDERED) {
      error = function(x, oob) mean((sapply(x, which.max) != y)[oob])
    } else {
      error = function(x, oob) mean((unlist(x) - y)[oob]^2)
    }
  }
  w = object@initweights
  if (max(abs(w - 1)) > sqrt(.Machine$double.eps))
    warning(sQuote("varimp"), " with non-unity weights might give misleading results")
  perror = matrix(0, nrow = nperm * length(object@ensemble), ncol = length(xnames))
  colnames(perror) = xnames
  for (b in 1:length(object@ensemble)) {
    tree = object@ensemble[[b]]
    if (OOB) {
      oob = object@weights[[b]] == 0
    } else {
      oob = rep(TRUE, length(xnames))
    }
    p = party_intern(tree, inp, mincriterion, -1L, fun = "R_predict")
    eoob = error(p, oob)
    for (j in unique(varIDs(tree))) {
      for (per in 1:nperm) {
        if (conditional || pre1.0_0) {
          tmp = inp
          ccl = create_cond_list(conditional, threshold, xnames[j], input)
          if (is.null(ccl)) {
            perm = sample(which(oob))
          } else {
            perm = conditional_perm(ccl, xnames, input, tree, oob)
          }
          tmp@variables[[j]][which(oob)] = tmp@variables[[j]][perm]
          p = party_intern(tree, tmp, mincriterion, -1L, fun = "R_predict")
        } else {
          p = party_intern(tree, inp, mincriterion, as.integer(j), fun = "R_predict")
        }
        perror[(per + (b - 1) * nperm), j] = -(error(p, oob) - eoob)
      }
    }
  }
  perror = as.data.frame(perror)
  return(MeanDecrease = colMeans(perror, na.rm = TRUE))
}
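
## Worked check (illustrative, not part of the original file): the kronecker
## expression above counts, over all pairs of observations from the two
## classes, how often the class-1 score of a class-2 observation strictly
## exceeds that of a class-1 observation; one minus that fraction is the AUC
## with ties credited as correct. The scores below are made up.
# s1 = c(0.9, 0.8, 0.4)  # class-1 scores of observations truly in class 1
# s2 = c(0.3, 0.7)       # class-1 scores of observations truly in class 2
# 1 - sum(kronecker(s2, s1, ">")) / (length(s2) * length(s1))
# # = 1 - 1/6: only the pair (0.7 vs 0.4) is misranked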
/scratch/gouwar.j/cran-all/cranData/varImp/R/varImpAUC.R
#' varImpRanger
#'
#' Computes the variable importance for ranger models and for arbitrary measures from the 'measures' package.
#'
#' @param object An object as returned by \code{\link[ranger]{ranger}} with option \code{keep.inbag = TRUE}.
#' @param data Original data that was used for training the random forest.
#' @param target Target variable as used in the trained model.
#' @param nperm The number of permutations performed.
#' @param measure The name of the measure of the 'measures' package that should be used for the variable importance calculation.
#'
#' @importFrom stats predict
#' @return Vector with computed permutation importance for each variable.
#' @export
#'
#' @examples
#' \dontrun{
#' library(ranger)
#' iris.rg = ranger(Species ~ ., data = iris, keep.inbag = TRUE, probability = TRUE)
#' vimp.ranger = varImpRanger(object = iris.rg, data = iris, target = "Species")
#' vimp.ranger
#' }
varImpRanger = function(object, data, target, nperm = 1, measure = "multiclass.Brier") {
  # Some tests
  if (!("ranger" %in% class(object)))
    stop("Object is not a 'ranger' model")
  measureList = listAllMeasures()
  if (!(measure %in% measureList[, 1]))
    stop("measure should be a measure of the measures package")
  measure.minimize = measureList$minimize[measureList[, 1] == measure]

  pred_cols = which(colnames(data) != target)
  num.trees = object$num.trees
  inbag = do.call(cbind, object$inbag.counts)
  pred_levels = levels(data[, target])
  truth = data[, target]

  # Calculate the original out-of-bag performance of each tree
  old_predis = predict(object, data = data, predict.all = TRUE)$predictions
  colnames(old_predis) = pred_levels
  res_old = numeric(num.trees)
  for (i in 1:num.trees)
    res_old[i] = do.call(measure, list(old_predis[inbag[, i] == 0, , i], truth[inbag[, i] == 0]))

  # Calculate the performance after permuting each predictor column
  res_new = matrix(NA, num.trees, length(pred_cols))
  for (j in pred_cols) {
    for (i in 1:num.trees) {
      print(paste("column", j, "tree", i))  # progress output
      data_new = data[inbag[, i] == 0, ]
      data_new[, j] = sample(data_new[, j], replace = FALSE)
      # move tree i to the front so that predict(..., num.trees = 1) uses it
      object2 = swap_trees(object, 1, i)
      predis = predict(object2, data = data_new, num.trees = 1)$predictions
      colnames(predis) = pred_levels
      truth = data_new[, target]
      perf_new = do.call(measure, list(predis, truth))
      res_new[i, j] = perf_new
    }
  }
  minSign = ifelse(measure.minimize, 1, -1)
  res = minSign * (res_new - res_old)
  return(colMeans(res))
}

# swap_trees: exchange the trees at positions i and j of a ranger forest
swap_trees = function(rf, i, j) {
  res = rf
  res$forest$child.nodeIDs[[i]] = rf$forest$child.nodeIDs[[j]]
  res$forest$split.varIDs[[i]] = rf$forest$split.varIDs[[j]]
  res$forest$split.values[[i]] = rf$forest$split.values[[j]]
  res$forest$child.nodeIDs[[j]] = rf$forest$child.nodeIDs[[i]]
  res$forest$split.varIDs[[j]] = rf$forest$split.varIDs[[i]]
  res$forest$split.values[[j]] = rf$forest$split.values[[i]]
  if (!is.null(rf$forest$terminal.class.counts)) {
    res$forest$terminal.class.counts[[i]] = rf$forest$terminal.class.counts[[j]]
    res$forest$terminal.class.counts[[j]] = rf$forest$terminal.class.counts[[i]]
  }
  res
}
/scratch/gouwar.j/cran-all/cranData/varImp/R/varImpRanger.R
#' @title Bhattacharyya distance among classes
#' @author Michele Dalponte and Hans Ole Oerka
#' @description Bhattacharyya distance.
#' @param g A column vector of the labels. length(g) is equal to nrow(X).
#' @param X A dataframe of the features. ncol(X) is equal to the total number of features, and nrow(X) is equal to the number of available training samples. nrow(X) is equal to length(g).
#' @return A list containing a matrix of the class combinations and a vector of the Bhattacharyya distances of all the class combinations.
#' @export BHATdist
#' @references Dalponte, M., Oerka, H.O., Gobakken, T., Gianelle, D. & Naesset, E. (2013). Tree Species Classification in Boreal Forests With Hyperspectral Data. IEEE Transactions on Geoscience and Remote Sensing, 51, 2632-2645.
###############################################################################
BHATdist <- function(g, X) {
  X <- as.matrix(X)
  nfeat <- ncol(X)
  nclass <- length(unique(g))
  ## per-class mean vectors and covariance matrices
  mu <- by(X, g, colMeans)
  Cov <- by(X, g, stats::cov)
  ## all pairwise class combinations
  ncomb <- t(utils::combn(unique(g), 2))
  Bhat <- c()
  for (j in 1:nrow(ncomb)) {
    mu.i <- mu[[ncomb[j, 1]]]
    cov.i <- Cov[[ncomb[j, 1]]]
    mu.j <- mu[[ncomb[j, 2]]]
    cov.j <- Cov[[ncomb[j, 2]]]
    if (nfeat == 1) {
      Bhat[j] <- (1/8) * t(mu.i - mu.j) %*% (solve((cov.i + cov.j)/2)) %*% (mu.i - mu.j) +
        0.5 * log((((cov.i + cov.j)/2)) / (sqrt(((cov.i)) * ((cov.j)))), base = exp(1))
    } else {
      Bhat[j] <- (1/8) * t(mu.i - mu.j) %*% (solve((cov.i + cov.j)/2)) %*% (mu.i - mu.j) +
        0.5 * log(det(((cov.i + cov.j)/2)) / (sqrt((det(cov.i)) * (det(cov.j)))), base = exp(1))
    }
  }
  return(list(classComb = ncomb, bhatdist = Bhat))
}
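
## Example (not part of the original file): a minimal sketch using the 'dat'
## dataset shipped with this package (65 hyperspectral bands B1..B65 plus a
## class column SP); the band subset is arbitrary and keeps the per-class
## covariance matrices well conditioned.
# data(dat)
# bd <- BHATdist(g = dat$SP, X = dat[, 1:10])
# bd$classComb[1, ]  # first class pair
# bd$bhatdist[1]     # its Bhattacharyya distance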
/scratch/gouwar.j/cran-all/cranData/varSel/R/BHATdist.R
#' @title Jeffries-Matusita distance among classes
#' @author Michele Dalponte and Hans Ole Oerka
#' @description Jeffries-Matusita distance.
#' @param g A column vector of the labels. length(g) is equal to nrow(X).
#' @param X A dataframe of the features. ncol(X) is equal to the total number of features, and nrow(X) is equal to the number of available training samples. nrow(X) is equal to length(g).
#' @return A list containing a matrix of the class combinations and a vector of the JM distances of all the class combinations.
#' @export JMdist
#' @references Dalponte, M., Oerka, H.O., Gobakken, T., Gianelle, D. & Naesset, E. (2013). Tree Species Classification in Boreal Forests With Hyperspectral Data. IEEE Transactions on Geoscience and Remote Sensing, 51, 2632-2645.
###############################################################################
JMdist <- function(g, X) {
  X <- as.matrix(X)
  nfeat <- ncol(X)
  nclass <- length(unique(g))
  mu <- by(X, g, colMeans)
  Cov <- by(X, g, stats::cov)
  ncomb <- t(utils::combn(unique(g), 2))
  Bhat <- c()
  jm <- c()
  for (j in 1:nrow(ncomb)) {
    mu.i <- mu[[ncomb[j, 1]]]
    cov.i <- Cov[[ncomb[j, 1]]]
    mu.j <- mu[[ncomb[j, 2]]]
    cov.j <- Cov[[ncomb[j, 2]]]
    if (nfeat == 1) {
      Bhat[j] <- (1/8) * t(mu.i - mu.j) %*% (solve((cov.i + cov.j)/2)) %*% (mu.i - mu.j) +
        0.5 * log((((cov.i + cov.j)/2)) / (sqrt(((cov.i)) * ((cov.j)))), base = exp(1))
    } else {
      Bhat[j] <- (1/8) * t(mu.i - mu.j) %*% (solve((cov.i + cov.j)/2)) %*% (mu.i - mu.j) +
        0.5 * log(det(((cov.i + cov.j)/2)) / (sqrt((det(cov.i)) * (det(cov.j)))), base = exp(1))
    }
    ## JM distance from the Bhattacharyya distance: JM = sqrt(2 * (1 - exp(-B)))
    jm[j] <- sqrt(2 * (1 - exp(-Bhat[j])))
  }
  return(list(classComb = ncomb, jmdist = jm))
}
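
## Example (not part of the original file): JM distances saturate at sqrt(2)
## for perfectly separable class pairs, which varSelSFFS() below exploits as
## a stopping criterion. A minimal sketch on the bundled 'dat' dataset:
# data(dat)
# jd <- JMdist(g = dat$SP, X = dat[, 1:10])
# cbind(jd$classComb, round(jd$jmdist, 3))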
/scratch/gouwar.j/cran-all/cranData/varSel/R/JMdist.R
#' Hyperspectral data acquired over a forest area
#'
#' A dataset containing 3230 samples with 65 hyperspectral bands and 8 classes.
#'
#' \itemize{
#'   \item B1...B65. Hyperspectral bands.
#'   \item SP. Classes.
#' }
#'
#' @docType data
#' @keywords datasets
#' @name dat
#' @usage data(dat)
#' @format A data frame with 3230 rows and 66 variables
"dat"
/scratch/gouwar.j/cran-all/cranData/varSel/R/dat.R
#' @title Sequential Forward Floating Selection using Jeffries-Matusita Distance
#' @author Michele Dalponte and Hans Ole Oerka
#' @description Feature selection using the Sequential Forward Floating Selection search strategy and the Jeffries-Matusita distance.
#' @param g A column vector of the labels. length(g) is equal to nrow(X).
#' @param X A dataframe of the features. ncol(X) is equal to the total number of features, and nrow(X) is equal to the number of available training samples. nrow(X) is equal to length(g).
#' @param strategy string indicating the multiclass strategy to adopt: 'minimum' or 'mean'.
#' @param n integer indicating the number of features to select. The algorithm will stop at n+1 features selected.
#' @return A list containing a vector of the JM distances on the individual bands, a matrix with the set of features selected, and a vector containing the distances for each feature set from 1 to N-1, where N is equal to ncol(X).
#' @export varSelSFFS
#' @references Dalponte, M., Oerka, H.O., Gobakken, T., Gianelle, D. & Naesset, E. (2013). Tree Species Classification in Boreal Forests With Hyperspectral Data. IEEE Transactions on Geoscience and Remote Sensing, 51, 2632-2645.
#' @examples
#' \dontrun{
#' data(dat)
#'
#' se <- varSelSFFS(g = dat$SP, X = dat[, c(1:65)], strategy = "mean", n = 4)
#' summary(se)
#'
#' }
###############################################################################
varSelSFFS <- function(g, X, strategy = "mean", n = ncol(X)) {

  if (strategy %in% c("minimum", "mean")) {

    if (n != ncol(X)) {
      STOP <- n + 1
    } else {
      STOP <- n
    }

    nfeat <- ncol(X)
    nclass <- length(unique(g))
    mu <- by(X, g, colMeans)
    Cov <- by(X, g, stats::cov)

    # Jeffries-Matusita (JM) distance for a single feature
    JM1 <- function(fe) {
      f <- fe
      ncomb <- t(utils::combn(1:nclass, 2))
      Bhat <- c()
      jm <- c()
      for (j in 1:nrow(ncomb)) {
        mu.i <- mu[[ncomb[j, 1]]]
        cov.i <- Cov[[ncomb[j, 1]]]
        mu.j <- mu[[ncomb[j, 2]]]
        cov.j <- Cov[[ncomb[j, 2]]]
        Bhat[j] <- (1/8) * t(mu.i[f] - mu.j[f]) %*% (solve((cov.i[f, f] + cov.j[f, f])/2)) %*% (mu.i[f] - mu.j[f]) +
          0.5 * log((((cov.i[f, f] + cov.j[f, f])/2)) / (sqrt(((cov.i[f, f])) * ((cov.j[f, f])))), base = exp(1))
        jm[j] <- sqrt(2 * (1 - exp(-Bhat[j])))
      }
      jm.dist <- c(min(jm), mean(jm))
      return(jm.dist)
    }

    # JM distance for a feature subset
    JM <- function(fe) {
      f <- fe
      ncomb <- t(utils::combn(1:nclass, 2))
      Bhat <- c()
      jm <- c()
      for (j in 1:nrow(ncomb)) {
        mu.i <- mu[[ncomb[j, 1]]]
        cov.i <- Cov[[ncomb[j, 1]]]
        mu.j <- mu[[ncomb[j, 2]]]
        cov.j <- Cov[[ncomb[j, 2]]]
        Bhat[j] <- (1/8) * t(mu.i[f] - mu.j[f]) %*% (solve((cov.i[f, f] + cov.j[f, f])/2)) %*% (mu.i[f] - mu.j[f]) +
          0.5 * log(det(((cov.i[f, f] + cov.j[f, f])/2)) / (sqrt((det(cov.i[f, f])) * (det(cov.j[f, f])))), base = exp(1))
        jm[j] <- sqrt(2 * (1 - exp(-Bhat[j])))
      }
      jm.dist <- c(min(jm), mean(jm))
      return(jm.dist)
    }

    # Backward step: index of the feature whose removal maximizes the JM
    # distance, or -1 if no removal helps (the value of the last assignment
    # is what the function returns)
    backword <- function(fs) {
      distance <- 1:length(fs)
      for (j in 1:length(fs)) {
        dist <- tryCatch(JM(fs[-j])[1], error = function(e) { NA })
        if (!is.na(dist)) {
          distance[j] <- dist
        } else {
          return <- -1
        }
      }
      if (which.max(distance) == length(fs)) {
        return <- -1
      } else {
        return <- which.max(distance)
      }
    }

    features <- matrix(STOP, STOP, data = NA)
    distances <- matrix(1, STOP, data = NA)

    ###########################################################################
    # START
    ###########################################################################

    # computing JM for single features
    JMsingle <- matrix(nfeat, 2, data = 0)
    for (f in 1:nfeat) {
      JMsingle[f, ] <- JM1(f)
    }

    if (strategy == "minimum") { ref <- 1 }
    if (strategy == "mean") { ref <- 2 }

    features[1, 1] <- which.max(JMsingle[, ref])
    distances[1] <- JMsingle[which.max(JMsingle[, ref]), 1]

    print("Feature to select: 1")
    print(paste("JM distance: ", distances[1]))
    print("Features selected: ")
    print(features[1, 1])

    if (STOP > 1) {
      # Two features
      i <- 2
      C <- utils::combn(nfeat, 2)
      dist <- apply(C, 2, FUN = JM)
      FS <- C[, which.max(dist[ref, ])]
      features[2, c(1, 2)] <- C[, which.max(dist[ref, ])]
      distances[2] <- dist[ref, which.max(dist[ref, ])]

      print("Feature to select: 2")
      print(paste("JM distance: ", dist[ref, which.max(dist[ref, ])]))
      print("Features selected: ")
      print(C[, which.max(dist[ref, ])])

      if (STOP > 2) {
        # More than two features
        i <- 2
        while (i < (nfeat - 1) & (distances[i] <= round(sqrt(2), 4)) & (i < STOP)) {
          C <- matrix(i + 1, (nfeat - i), data = NA)
          C[c(1:i), c(1:(nfeat - i))] <- replicate((nfeat - i), FS)
          ff <- 1:nfeat
          ff <- ff[-FS]
          C[i + 1, ] <- ff
          dist <- tryCatch(apply(C, 2, FUN = JM), error = function(e) { NA })
          if (is.na(dist)[1]) {
            print("ERROR: singularity of the covariance matrix")
            return(list(JMsingle = JMsingle[, ref], features = features, distances = distances,
                        strategy = strategy, distance = "Jeffries-Matusita distance"))
          }
          FS <- C[, which.max(dist[ref, ])]
          bw <- backword(FS)
          if (bw != -1) {
            FS <- FS[-bw]
            features[i, c(1:(i))] <- FS
            distTmp <- tryCatch(JM(FS), error = function(e) { NA })
            if (is.na(distTmp)[1]) {
              print("ERROR: singularity of the covariance matrix")
              return(list(JMsingle = JMsingle[, ref], features = features, distances = distances,
                          strategy = strategy, distance = "Jeffries-Matusita distance"))
            } else {
              distances[i] <- distTmp[ref]
            }
            print(paste("Feature to select: ", i))
            print(paste("JM distance: ", distances[i]))
            print("Features selected: ")
            print(features[i, c(1:(i))])
          } else {
            features[i + 1, c(1:(i + 1))] <- C[, which.max(dist[ref, ])]
            distances[i + 1] <- dist[ref, which.max(dist[ref, ])]
            i <- i + 1
            print(paste("Feature to select: ", i))
            print(paste("JM distance: ", distances[i]))
            print("Features selected: ")
            print(features[i, c(1:(i))])
          }
        }
        return(list(JMsingle = JMsingle[, ref], features = features, distances = distances,
                    strategy = strategy, distance = "Jeffries-Matusita distance"))
      }
    }
  } else {
    print("ERROR: wrong multiclass strategy")
  }
}
/scratch/gouwar.j/cran-all/cranData/varSel/R/varSelSFFS.R
#### watch out use of "..." inside Boot function ##### in bootSimplifyRF maybe allow allBootRuns = NULL so that object is small. varSelRF <- function(xdata, Class, c.sd = 1, mtryFactor = 1, ntree = 5000, ntreeIterat = 2000, vars.drop.num = NULL, vars.drop.frac = 0.2, whole.range = TRUE, recompute.var.imp = FALSE, verbose = FALSE, returnFirstForest = TRUE, fitted.rf = NULL, keep.forest = FALSE) { if(!is.factor(Class)) stop("Class should be a factor") if( (is.null(vars.drop.num) & is.null(vars.drop.frac)) | (!is.null(vars.drop.num) & !is.null(vars.drop.frac))) stop("One (and only one) of vars.drop.frac and vars.drop.num must be NULL and the other set") max.num.steps <- dim(xdata)[2] num.subjects <- dim(xdata)[1] if(is.null(colnames(xdata))) colnames(xdata) <- paste("v", 1:dim(xdata)[2], sep ="") ##oversize the vectors; will prune later. n.vars <- vars <- OOB.rf <- OOB.sd <- rep(NA, max.num.steps) oobError <- function(rf) { ## should not be exported in the namespace. ## The out of bag error ooo <- rf$confusion[, -dim(rf$confusion)[2]] s.ooo <- sum(ooo) diag(ooo) <- 0 sum(ooo)/s.ooo } if(!is.null(fitted.rf)) { if(ncol(fitted.rf$importance) < 2) stop("The fitted rf was not fitted with importance = TRUE") n.ntree <- fitted.rf$ntree mtry <- fitted.rf$mtry n.mtryFactor <- mtry/sqrt(ncol(xdata)) if((n.ntree != ntree) | (n.mtryFactor != mtryFactor)) warning("Using as ntree and mtry the parameters obtained from fitted.rf", immediate.= TRUE) ntree <- n.ntree mtryFactor <- n.mtryFactor rm(n.ntree, n.mtryFactor) rf <- fitted.rf } else { mtry <- floor(sqrt(ncol(xdata)) * mtryFactor) rf <- randomForest(x = xdata, y = Class, ntree = ntree, mtry = mtry, importance = TRUE, keep.forest = keep.forest) } if(returnFirstForest) FirstForest <- rf else FirstForest <- NULL m.iterated.ob.error <- m.initial.ob.error <- oobError(rf) sd.iterated.ob.error <- sd.initial.ob.error <- sqrt(m.iterated.ob.error * (1 - m.iterated.ob.error) * (1/num.subjects)) if(verbose) { print(paste("Initial OOB error: mean = ", round(m.initial.ob.error, 4), "; sd = ", round(sd.initial.ob.error, 4), sep = "")) } importances <- importance(rf, type = 1, scale = FALSE) selected.vars <- order(importances, decreasing = TRUE) ordered.importances <- importances[selected.vars] initialImportances <- importances initialOrderedImportances <- ordered.importances j <- 1 n.vars[j] <- dim(xdata)[2] vars[j] <- paste(colnames(xdata), collapse = " + ") OOB.rf[j] <- m.iterated.ob.error OOB.sd[j] <- sd.iterated.ob.error var.simplify <- TRUE while(var.simplify) { if (verbose){ print("gc inside loop of varSelRF") print(gc()) } else { gc() } last.rf <- rf last.vars <- selected.vars previous.m.error <- m.iterated.ob.error previous.sd.error <- sd.iterated.ob.error ## If this is left as only ## "if(length(selected.vars) <= 2) var.simplify <- FALSE" ## under the if((length(selected.vars) < 2) | (any(selected.vars < 1))), ## as it used to be, then we fit a 2 model var, which might be ## better or as good as others, but as we never re-enter, ## we cannot return it, even if we see it in the history. 
## Alternatively, we cannot just convert ## "if((length(selected.vars) < 2) | (any(selected.vars < 1))" ## to the <= 2, as we then would re-enter many times because ## of the way selected.vars <- selected.vars[1:2] when ## num.vars < (vars.drop + 2) ## This way, we enter just to set last.rf, last.vars and ## we bail out if(length(selected.vars) <= 2) { var.simplify <- FALSE break } if(recompute.var.imp & (j > 1)) { importances <- importance(rf, type = 1, scale = FALSE) tmp.order <- order(importances, decreasing = TRUE) selected.vars <- selected.vars[tmp.order] ordered.importances <- importances[tmp.order] } num.vars <- length(selected.vars) if(is.null(vars.drop.num)) vars.drop <- round(num.vars * vars.drop.frac) else vars.drop <- vars.drop.num if(num.vars >= (vars.drop + 2)) { ## prevent infinite looping when num.vars is, say, 3 and ## vars.drop.frac < 0.17 ## We must drop a variable for sure, since > 2 and we ## are still simplifying ## Alternatively, use ceiling instead of round, above. if(vars.drop == 0) { vars.drop <- 1 if( (num.vars - vars.drop) < 1) stop("vars.drop = 0 and num.vars -vars.drop < 1!") } selected.vars <- selected.vars[1: (num.vars - vars.drop)] ordered.importances <- ordered.importances[1: (num.vars - vars.drop)] } else { selected.vars <- selected.vars[1:2] ordered.importances <- ordered.importances[1:2] } ## couldn't we eliminate the following? if((length(selected.vars) < 2) | (any(selected.vars < 1))) { var.simplify <- FALSE break } mtry <- floor(sqrt(length(selected.vars)) * mtryFactor) if(mtry > length(selected.vars)) mtry <- length(selected.vars) if(recompute.var.imp) rf <- randomForest(x = xdata[, selected.vars], y = Class, importance= TRUE, ntree = ntree, mtry = mtry, keep.forest = keep.forest) else rf <- randomForest(x = xdata[, selected.vars], y = Class, importance= FALSE, ntree = ntreeIterat, mtry = mtry, keep.forest = keep.forest) m.iterated.ob.error <- oobError(rf) sd.iterated.ob.error <- sqrt(m.iterated.ob.error * (1 - m.iterated.ob.error) * (1/num.subjects)) if(verbose) { print(paste("..... iteration ", j, "; OOB error: mean = ", round(m.iterated.ob.error, 4), "; sd = ", round(sd.iterated.ob.error, 4), "; num. vars = ", length(selected.vars), sep = "")) } j <- j + 1 n.vars[j] <- length(selected.vars) vars[j] <- paste(colnames(xdata)[selected.vars], collapse = " + ") OOB.rf[j] <- m.iterated.ob.error OOB.sd[j] <- sd.iterated.ob.error if(!whole.range & ( (m.iterated.ob.error > (m.initial.ob.error + c.sd*sd.initial.ob.error)) | (m.iterated.ob.error > (previous.m.error + c.sd*previous.sd.error))) ) var.simplify <- FALSE } if (!whole.range) { if(!is.null(colnames(xdata))) selected.vars <- sort(colnames(xdata)[last.vars]) else selected.vars <- last.vars out <- list(selec.history = data.frame( Number.Variables = n.vars, Vars.in.Forest = vars, OOB = OOB.rf, sd.OOB = OOB.sd)[1:j,], rf.model = last.rf, selected.vars = selected.vars, selected.model = paste(selected.vars, collapse = " + "), best.model.nvars = length(selected.vars), initialImportances = initialImportances, initialOrderedImportances = initialOrderedImportances, ntree = ntree, ntreeIterat = ntreeIterat, mtryFactor = mtryFactor, # mtry = mtry, firstForest = FirstForest) class(out) <- "varSelRF" return(out) } else { ## Prune the too long vectors created at begin. ## not needed above, because we select the 1:j rows ## of the return matrix selec.history. 
n.vars <- n.vars[1:j] vars <- vars[1:j] OOB.rf<- OOB.rf[1:j] OOB.sd <- OOB.sd[1:j] min.oob.ci <- min(OOB.rf) + c.sd * OOB.sd[which.min(OOB.rf)] best.pos <- which(OOB.rf <= min.oob.ci)[which.min(n.vars[which(OOB.rf <= min.oob.ci)])] selected.vars <- sort(unlist(strsplit(vars[best.pos], " + ", fixed = TRUE))) out <- list(selec.history = data.frame( Number.Variables = n.vars, Vars.in.Forest = vars, OOB = OOB.rf, sd.OOB = OOB.sd), rf.model = NA, selected.vars = selected.vars, selected.model = paste(selected.vars, collapse = " + "), best.model.nvars = n.vars[best.pos], initialImportances = initialImportances, initialOrderedImportances = initialOrderedImportances, ntree = ntree, ntreeIterat = ntreeIterat, mtryFactor = mtryFactor, ##mtry = mtry, firstForest = FirstForest) class(out) <- "varSelRF" return(out) } } print.varSelRF <- function(x, ...) { cat("\nBackwards elimination on random forest; ") cat(paste("ntree = ", x$ntree,"; mtryFactor = ", x$mtryFactor, "\n"), sep ="") cat("\n Selected variables:\n") print(x$selected.vars) cat("\n Number of selected variables:", x$best.model.nvars, "\n\n") } #summary.varSelRF <- print.varSelRF plot.varSelRF <- function(x, nvar = NULL, which = c(1, 2), ...) { if (length(which) == 2 && dev.interactive()) { op <- par(ask = TRUE, las = 1) } else { op <- par(las = 1) } on.exit(par(op)) if(is.null(nvar)) nvar <- min(30, length(x$initialOrderedImportances)) show <- c(FALSE, FALSE) show[which] <- TRUE if (show[1]){ dotchart(rev(x$initialOrderedImportances[1:nvar]), main = "Initial importances", xlab = "Importances (unscaled)") } if (show[2]){ ylim <- c(0, max(0.50, x$selec.history$OOB)) plot(x$selec.history$Number.Variables, x$selec.history$OOB, type = "b", xlab = "Number of variables used", ylab = "OOB error", log = "x", ylim = ylim, ...) ## if(max(x$selec.history$Number.Variables) > 300) ## axis(1, at = c(1, 2, 3, 5, 8, 15, 25, 50, 75, 150, 200, 300), ## labels = c(1, 2, 3, 5, 8, 15, 25, 50, 75, 150, 200, 300)) lines(x$selec.history$Number.Variables, x$selec.history$OOB + 2 * x$selec.history$sd.OOB, lty = 2) lines(x$selec.history$Number.Variables, x$selec.history$OOB - 2 * x$selec.history$sd.OOB, lty = 2) } } ## We could also write a varSelRFCV zz: move al TODO varSelRFBoot <- function(xdata, Class, c.sd = 1, mtryFactor = 1, ntree = 5000, ntreeIterat = 2000, vars.drop.frac = 0.2, bootnumber = 200, whole.range = TRUE, recompute.var.imp = FALSE, usingCluster = TRUE, TheCluster = NULL, srf = NULL, verbose = TRUE, ...) { ## beware there is a lot of data copying... pass by reference, or ## minimize copying or something. if(is.null(colnames(xdata))) colnames(xdata) <- paste("v", 1:dim(xdata)[2], sep ="") if(!is.null(srf)) { ## we are passing a simplified rf object. if(class(srf) != "varSelRF") stop("srf must be the results of a run of varSelRF") n.ntree <- srf$ntree n.ntreeIterat <- srf$ntreeIterat n.mtryFactor <- srf$mtryFactor if((n.ntree != ntree) | (n.mtryFactor != mtryFactor) | (n.ntreeIterat != ntreeIterat)) warning("Using as ntree and mtryFactor the parameters obtained from srf", immediate.= TRUE) ntree <- n.ntree mtryFactor <- n.mtryFactor rm(n.ntree, n.mtryFactor) all.data.run <- srf } else { ## we are simplifying the random forest all.data.run <- varSelRF(Class = Class, xdata = xdata, c.sd = c.sd, mtryFactor = mtryFactor, ntree = ntree, ntreeIterat = ntreeIterat, vars.drop.frac = vars.drop.frac, whole.range = whole.range, recompute.var.imp = recompute.var.imp) ### ...) 
} columns.data <- which(colnames(xdata) %in% all.data.run$selected.vars) all.data.rf.mtry <- floor(mtryFactor * sqrt(length(columns.data))) if(all.data.rf.mtry > length(columns.data)) all.data.rf.mtry <- length(columns.data) all.data.rf.predict <- randomForest(y = Class, x = xdata[, columns.data], ntree = all.data.run$ntree, mtry = all.data.rf.mtry, xtest = xdata[, columns.data], ytest = Class, keep.forest = FALSE) full.pred <- all.data.rf.predict$test$predicted all.data.selected.vars <- all.data.run$selected.vars ## all.data.selected.model <- all.data.run$selected.model all.data.best.model.nvars <- all.data.run$best.model.nvars N <- length(Class) solution.sizes <- rep(NA, bootnumber) overlap.with.full <- rep(NA, bootnumber) vars.in.solutions <- vector() ## solutions <- rep(NA, bootnumber) bootTrainTest <- function(dummy, xdataTheCluster, ClassTheCluster, c.sd, mtryFactor, ntree, ntreeIterat, whole.range, recompute.var.imp, ...) { N <- length(ClassTheCluster) sample.again <- TRUE while(sample.again) { bootsample <- unlist(tapply(1:N, ClassTheCluster, function(x) sample(x, size = length(x), replace = TRUE))) ## sure, this isn't the fastest, but will do for now. nobootsample <- setdiff(1:N, bootsample) if(!length(nobootsample)) sample.again <- TRUE else sample.again <- FALSE } ## this is an ugly hack to prevent nobootsamples ## of size 0. train.data <- xdataTheCluster[bootsample, , drop = FALSE] test.data <- xdataTheCluster[nobootsample, , drop = FALSE] train.class <- ClassTheCluster[bootsample] test.class <- ClassTheCluster[nobootsample] boot.run <- varSelRF(Class = train.class, xdata = train.data, c.sd = c.sd, mtryFactor = mtryFactor, ntree = ntree, ntreeIterat = ntreeIterat, whole.range = whole.range, recompute.var.imp = recompute.var.imp, vars.drop.frac = vars.drop.frac) ### ...) output.cl <- list() output.cl$best.model.nvars <- boot.run$best.model.nvars output.cl$selected.model <- boot.run$selected.model output.cl$selected.vars <- boot.run$selected.vars output.cl$nobootsample <- nobootsample output.cl$bootsample <- bootsample output.cl$initialImportances <- boot.run$initialImportances output.cl$initialOrderedImportances <- boot.run$initialOrderedImportances output.cl$selec.history <- boot.run$selec.history boot.col.data <- which(colnames(xdataTheCluster) %in% boot.run$selected.vars) run.test.mtry <- floor(mtryFactor * sqrt(length(boot.col.data))) if(run.test.mtry > length(boot.col.data)) run.test.mtry <- length(boot.col.data) boot.run.test <- randomForest(y = train.class, x = train.data[, boot.col.data, drop = FALSE], ntree = boot.run$ntree, mtry = run.test.mtry, keep.forest = FALSE, xtest = test.data[, boot.col.data, drop = FALSE])$test ## ytest = test.class)$test output.cl$class.pred.array <- boot.run.test$predicted output.cl$prob.pred.array <- boot.run.test$votes rm(boot.run.test) gc() return(output.cl) } ## bootTrainTest; this function is defined to be used in the cluster. ## we use it too if not in the cluster. There is a bit too much ## copying, because we define a pair of new objects ## xdataTheCluster and ClassTheCluster that need not be defined ## if we just copied the code. But that is ugly and hard to follow, ## and the non-cluster is slow for other reasons. 
if(usingCluster) { if (verbose){ print("gc inside varSelRFBoot papply") print(gc()) } else { gc() } cat("\n Running bootstrap iterations using cluster (can take a while)\n") boot.runs <- clusterApplyLB(TheCluster, 1:bootnumber, bootTrainTest, xdataTheCluster = xdata, ClassTheCluster = Class, c.sd = c.sd, mtryFactor = mtryFactor, ntree = ntree, ntreeIterat = ntreeIterat, whole.range = whole.range, recompute.var.imp = recompute.var.imp) ## clean up before leaving clusterEvalQ(TheCluster, rm(list = c("xdataTheCluster", "ClassTheCluster"))) } else { ## Not using Cluster boot.runs <- list() xdataTheCluster <- xdata ClassTheCluster <- Class cat("\n Running bootstrap iterations") for(nboot in 1:bootnumber) { cat(".") boot.runs[[nboot]] <- bootTrainTest(nboot, xdataTheCluster = xdata, ClassTheCluster = Class, c.sd = c.sd, mtryFactor = mtryFactor, ntree = ntree, ntreeIterat = ntreeIterat, whole.range = whole.range, recompute.var.imp = recompute.var.imp) } cat("\n") } solutions <- unlist(lapply(boot.runs, function(z) { paste(sort(z$selected.vars), collapse = " + ")})) vars.in.solutions <- unlist(lapply(boot.runs, function(z) z$selected.vars)) solution.sizes <- unlist(lapply(boot.runs, function(z) z$best.model.nvars)) overlap.with.full <- unlist(lapply(boot.runs, function(x) { length(intersect(x$selected.vars, all.data.selected.vars))/ sqrt(all.data.best.model.nvars * x$best.model.nvars)})) prob.pred.array <- array(NA, dim = c(N, nlevels(Class), bootnumber), dimnames = list(1:N, levels(Class), paste("BootstrapReplication.", 1:bootnumber, sep = ""))) ## to store class predictions as data frame class.pred.array <- data.frame(Class) for(nb in 1:bootnumber) { nobootsample <- boot.runs[[nb]]$nobootsample class.pred.array[[nb]] <- factor(NA, levels = levels(Class)) class.pred.array[nobootsample, nb] <- boot.runs[[nb]]$class.pred.array prob.pred.array[nobootsample, , nb] <- boot.runs[[nb]]$prob.pred.array } names(class.pred.array) <- paste("BootstrapReplication.", 1:bootnumber, sep = "") ############## The .632+ estimate of prediction error ################## ## one: \hat{Err}^{(1)} in Efron & Tibshirani, 1997, p. 550, ## or leave-one-out bootstrap error. ## resubst: \bar{err}, the apparent error rate, resubstitution rate, ## or "in sample" error rate. ## full.pred: the "in sample" prediction from full model; only used ## to obtain gamma. ## gamma: gamma in p. 552 ## r: R in p. 552 (bounding in [0, 1]). ## err632: the .632 error ## errprime: the one prime; \hat{Err}^{(1)}' ## err: the .632+ error ## This is how I find one one <- mean(apply(cbind(class.pred.array, Class), 1, function(x) {mean(x[-(bootnumber + 1)] != x[bootnumber + 1], na.rm = TRUE)}), na.rm = TRUE) ## this is equivalent to what Torsten Hothorn does in ipred resubst <- mean(full.pred != Class) ## The following code I take directly from the function ## bootest.factor, by Torsten Hothorn, in package ipred. err632 <- 0.368 * resubst + 0.632 * one gamma <- sum(outer(as.numeric(Class), as.numeric(full.pred), function(x, y) ifelse(x == y, 0, 1)))/(length(Class)^2) r <- (one - resubst)/(gamma - resubst) r <- ifelse(one > resubst & gamma > resubst, r, 0) if((r > 1) | (r < 0)) { ## just debugging; remove later? 
print(paste("r outside of 0, 1 bounds: one", one, "resubst", resubst, "gamma", gamma)) if(r > 1) { r <- 1 print("setting r to 1") } else if(r < 0) { r <- 0 print("setting r to 0") } } errprime <- min(one, gamma) err <- err632 + (errprime - resubst) * (0.368 * 0.632 * r)/(1 - 0.368 * r) cat("\n .632+ prediction error ", round(err, 4), "\n") out <- list(number.of.bootsamples = bootnumber, bootstrap.pred.error = err, resubstitution.error = resubst, leave.one.out.bootstrap.error = one, all.data.randomForest = all.data.rf.predict, all.data.vars = all.data.selected.vars, all.data.run = all.data.run, class.predictions = class.pred.array, prob.predictions = prob.pred.array, number.of.vars = solution.sizes, overlap = overlap.with.full, all.vars.in.solutions = vars.in.solutions, all.solutions = solutions, class = Class, allBootRuns = boot.runs) class(out) <- "varSelRFBoot" return(out) } print.varSelRFBoot <- function(x, ...) { cat("\n\n Variable selection with random forest \n") cat(" ------------------------------\n") ## cat("\n\n randomForest summary \n") ## print(object$all.data.randomForest) cat("\n Variables used \n") print(x$all.data.vars) cat("\n \n Number of variables used: ", length(x$all.data.vars), "\n") cat("\n\n Bootstrap results\n") cat(" ------------------\n") cat("\n Bootstrap (.632+) estimate of prediction error: \n", " (using", x$number.of.bootsamples, "bootstrap iterations): \n", x$bootstrap.pred.error) cat("\n\n Number of vars in bootstrapped forests: \n") print(summary(x$number.of.vars)) } ## zz: pass gene names and subject names?? ## look at examples from ~/Proyectos/Signatures/Symposum/boot.pamr.knn.dlda.R ## o similar ## Or not, because I think I am ussing column names ## but look at that code anyway for improvements #### in summaryBoot: a plot of error rate vs. #### number of variables #### requires saving el "all.data.run" en el Boot, which should ALWAYS be done!! summary.varSelRFBoot <- function(object, return.model.freqs = FALSE, return.class.probs = TRUE, return.var.freqs.b.models = TRUE, ...) { cat("\n\n Variable selection using all data \n") cat(" ------------------------------\n") ### cat("\n\n randomForest summary \n") ### print(object$all.data.randomForest) cat("\n \n variables used \n") print(object$all.data.vars) cat("\n \n Number of variables used: ", length(object$all.data.vars), "\n") cat("\n\n Bootstrap results\n") cat(" ------------------\n") cat("\n\n Bootstrap (.632+) estimate of prediction error:", object$bootstrap.pred.error, " (using", object$number.of.bootsamples, "bootstrap iterations).\n") cat("\n\n Resubstitution error: ", object$resubstitution.error, "\n") cat("\n\n Leave-one-out bootstrap error: ", object$leave.one.out.bootstrap.error, "\n") nk <- as.vector(table(object$class)) cat("\n\n Error rate at random: ", 1 - (max(nk)/sum(nk)), "\n") cat("\n\n Number of vars in bootstrapped forests: \n") print(summary(object$number.of.vars)) cat cat("\n") cat("\n Overlapp of bootstrapped forests with forest from all data\n") print(summary(object$overlap)) if(return.var.freqs.b.models) { cat("\n\n Variable freqs. in bootstrapped models \n") print(sort(table(object$all.vars.in.solutions), decreasing = TRUE)/object$number.of.bootsamples) } in.all.data <- which(names(table(object$all.vars.in.solutions)) %in% object$all.data.vars) cat("\n\n Variable freqs. 
of variables in forest from all data, and summary \n") print(table(object$all.vars.in.solutions)[in.all.data]/object$number.of.bootsamples) cat("\n") print(summary(table(object$all.vars.in.solutions)[in.all.data]/object$number.of.bootsamples)) if(return.model.freqs) { tmp.table <- sort(table(object$all.solutions), decreasing = TRUE)/object$number.of.bootsamples n.tmp.table <- names(tmp.table) dim(tmp.table) <- c(dim(tmp.table), 1) rownames(tmp.table) <- n.tmp.table colnames(tmp.table) <- "Freq." cat("\n\n Solutions frequencies in bootstrapped models \n") print(tmp.table) } if(return.class.probs) cat("\n\n Mean class membership probabilities from out of bag samples\n") if(return.class.probs) { mean.class.probs <- apply(object$prob.predictions, c(1, 2), function(x) mean(x, na.rm = TRUE)) colnames(mean.class.probs) <- levels(object$class) } if(return.class.probs) return(mean.class.probs) } plot.varSelRFBoot <- function(x, oobProb = TRUE, oobProbBoxPlot = FALSE, ErrorNum = TRUE, subject.names = NULL, class.to.plot = NULL, ...) { if(oobProb | oobProbBoxPlot) { mean.class.probs <- apply(x$prob.predictions, c(1, 2), function(x) mean(x, na.rm = TRUE)) colnames(mean.class.probs) <- levels(x$class) rainbow.col <- rainbow(nlevels(x$class)) if(dev.interactive()) { op <- par(ask = TRUE, las = 1) } else { op <- par(las = 1) } on.exit(par(op)) #### for(i in 1:ncol(mean.class.probs)) { if(is.null(class.to.plot)) class.to.plot <- 1:ncol(mean.class.probs) for(i in class.to.plot) { if(oobProbBoxPlot) { boxplot(data.frame(t(x$prob.predictions[ , i, ])), xlab = "Samples", ylab = "Out of bag probability of membership", main = paste("Class", colnames(mean.class.probs)[i]), type = "p", col = rainbow.col[as.numeric(x$class)], pch = 19, ylim = c(0, 1.3), axes = FALSE) } else { ### dotchart(mean.class.probs[, i], ### xlab = "Samples", ### ylab = "(Average) Out of bag probability of membership", ### main = paste("Class", colnames(mean.class.probs)[i]), ### type = "p", ### col = rainbow.col[as.numeric(x$class)], ### pch = 19, ylim = c(0, 1.2), ### axes = FALSE) plot(mean.class.probs[, i], xlab = "Samples", ylab = "(Average) Out of bag probability of membership", main = paste("Class", colnames(mean.class.probs)[i]), type = "p", col = rainbow.col[as.numeric(x$class)], pch = 19, ylim = c(0, 1.3), axes = FALSE) } if(!is.null(subject.names)) text(mean.class.probs[ , i], labels = subject.names, pos = 2, cex = 0.8) box() ##axis(1) par(las = 2) axis(1, at = seq(1:dim(mean.class.probs)[1]), labels = seq(1:dim(mean.class.probs)[1])) segments(x0 = seq(1:dim(mean.class.probs)[1]), y0 = 0, x1 = seq(1:dim(mean.class.probs)[1]), y1 = 1, lty = 2, col = "grey") axis(2, at = seq(from = 0, to = 1, by = 0.2)) legend(y = c(1.29, 1.15), x = c(1, dim(mean.class.probs)[1]), legend = paste("Class", levels(x$class)), col = rainbow.col, pch = 19, bty = "n" ) abline( h = 1.05, lty = 1) abline(h = seq(from = 0, to = 1, length = 1 + nlevels(x$class))[-c(1, nlevels(x$class) + 1)], lty = 2) } } if(ErrorNum) { par(las = 2) all.data.errors <- x$all.data.run$selec.history$OOB ngenes <- x$all.data.run$selec.history$Number.Variables maxplot <- max( c(unlist(lapply(x$allBootRuns, function(x) max(x$selec.history$OOB))), all.data.errors)) minplot <- min( c(unlist(lapply(x$allBootRuns, function(x) min(x$selec.history$OOB))), all.data.errors)) minplot <- minplot * (1 - 0.1) maxplot <- maxplot * (1 + 0.2) plot(ngenes, all.data.errors, type = "l", axes = TRUE, xlab = "Number of variables", ylab = "OOB Error rate", ylim = c(minplot, maxplot), lty = 1, log = "x", 
col = "red", lwd = 2, main = "OOB Error rate vs. Number of variables in predictor", xlim = c(2, max(ngenes)*1.1)) ### plot(num.points.plot:1, ### x$allBootRuns[[1]]$other$trained.pam.cv$error, ### type = "l", axes = FALSE, xlab = "Number of genes", ### ylab = "CV Error rate", ylim = c(minplot, maxplot), lty = 2, ### main = "CV Error rate vs. Number of genes in predictor.") legend(x = 10, y = maxplot, legend = c("Bootstrap samples", "Original sample"), lty = c(2, 1), lwd = c(1, 3), col = c("Black", "Red"), bty = "n") ## box() ## axis(2) ## axis(1) if(max(ngenes) > 300) axis(1, at = c(2, 3, 5, 8, 15, 20, 25, 35, 50, 75, 150, 200, 300), labels = c(2, 3, 5, 8, 15, 10, 25, 35, 50, 75, 150, 200, 300)) for(nb in 1:x$number.of.bootsamples) lines(x$allBootRuns[[nb]]$selec.history$Number.Variables, x$allBootRuns[[nb]]$selec.history$OOB, lty = 2) lines(ngenes, all.data.errors, col = "red", lwd = 4) } } randomVarImpsRF <- function(xdata, Class, forest, numrandom = 100, whichImp = "impsUnscaled", usingCluster = TRUE, TheCluster = NULL, ...) { if(!all(whichImp %in% c("impsScaled", "impsUnscaled", "impsGini"))) stop("whichImp contains a non-valid option; should be one or more \n", "of impsScaled, impsUnscaled, impsGini") ontree <- forest$ntree omtry <- forest$mtry nodesize <- 1 if(usingCluster) { iRF2.cluster <- function(dummy, xdataTheCluster, ClassTheCluster, ontree, omtry, nodesize, ...) { rf <- randomForest(x = xdataTheCluster, y = sample(ClassTheCluster), ntree = ontree, mtry = omtry, nodesize = nodesize, importance = TRUE, keep.forest = FALSE, ...) ## If we specify the importance measure, only that var is evaluated. ## if we say "ALL", all three. impsScaled <- NULL impsUnscaled <- NULL impsGini <- NULL if("impsUnscaled" %in% whichImp) impsUnscaled <- importance(rf, type = 1, scale = FALSE) if("impsScaled" %in% whichImp) impsScaled <- importance(rf, type = 1, scale = TRUE) if("impsGini" %in% whichImp) impsGini <- importance(rf, type = 2, scale = TRUE) ## impsGini <- rf$importance[, ncol(rf$importance)] return(list(impsScaled, impsUnscaled, impsGini)) } outCl <- clusterApplyLB(TheCluster, 1:numrandom, iRF2.cluster, xdataTheCluster = xdata, ClassTheCluster = Class, ontree = ontree, omtry = omtry, nodesize = nodesize) } else { outCl <- list() cat("\n Obtaining random importances ") for(nriter in 1:numrandom) { cat(".") rf <- randomForest(x = xdata, y = sample(Class), ntree = ontree, mtry = omtry, nodesize = nodesize, importance = TRUE, keep.forest = FALSE, ...) 
impsScaled <- NULL impsUnscaled <- NULL impsGini <- NULL if("impsUnscaled" %in% whichImp) impsUnscaled <- importance(rf, type = 1, scale = FALSE) if("impsScaled" %in% whichImp) impsScaled <- importance(rf, type = 1, scale = TRUE) if("impsGini" %in% whichImp) impsGini <- importance(rf, type = 2, scale = TRUE) ## impsGini <- rf$importance[, ncol(rf$importance)] outCl[[nriter]] <- list(impsScaled, impsUnscaled, impsGini) } cat("\n") } ##</else> randomVarImps <- list() if("impsScaled" %in% whichImp) { randomVarImps$impsScaled <- matrix(unlist(lapply(outCl, function(x) x[[1]])), ncol = length(outCl)) colnames(randomVarImps$impsScaled) <- 1:numrandom rownames(randomVarImps$impsScaled) <- rownames(forest$importance) } if("impsUnscaled" %in% whichImp){ randomVarImps$impsUnscaled <- matrix(unlist(lapply(outCl, function(x) x[[2]])), ncol = length(outCl)) colnames(randomVarImps$impsUnscaled) <- 1:numrandom rownames(randomVarImps$impsUnscaled) <- rownames(forest$importance) } if("impsGini" %in% whichImp) { randomVarImps$impsGini <- matrix(unlist(lapply(outCl, function(x) x[[3]])), ncol = length(outCl)) colnames(randomVarImps$impsGini) <- 1:numrandom rownames(randomVarImps$impsGini) <- rownames(forest$importance) } class(randomVarImps) <- c(class(randomVarImps), "randomVarImpsRF") return(randomVarImps) ## Hopefully, we are returning a list with only the components selected.zz } randomVarImpsRFplot <- function(randomImportances, forest, whichImp = "impsUnscaled", nvars = NULL, show.var.names = FALSE, vars.highlight = NULL, main = NULL, screeRandom = TRUE, lwdBlack = 1.5, lwdRed = 2, lwdLightblue = 1, cexPoint = 1, overlayTrue = FALSE, xlab = NULL, ylab = NULL, ...) { if(ncol(forest$importance) < 2) stop("The fitted rf", deparse(substitute(forest)), "was not fitted with importance = TRUE") randomImportances <- switch(whichImp, "impsUnscaled" = randomImportances$impsUnscaled, "impsScaled" = randomImportances$impsScaled, "impsGini" = randomImportances$impsGini, ) originalForestImportance <- switch(whichImp, "impsUnscaled" = importance(forest, type = 1, scale = FALSE), "impsScaled" = importance(forest, type = 1, scale = TRUE), "impsGini" = importance(forest, type = 2, scale = TRUE) ## forest$importance[, ncol(forest$importance)] ) if(is.null(originalForestImportance)) stop("\n Not valid 'whichImp' \n") if(is.null(xlab)) xlab <- "(Ordered) Variable" if(is.null(ylab)) ylab <- switch(whichImp, "impsUnscaled" = "Importance (unscaled)", "impsScaled" = "Importance (scaled)", "impsGini" = "Importance (Gini)", ) nvars <- min(nvars, dim(randomImportances)[1]) ylim <- range(originalForestImportance, randomImportances) plottingOrder <- order(originalForestImportance, decreasing = TRUE)[1:nvars] ## This is really (something changed in randomForest?) a matrix. ## Explictily make it a vector ## check first if(ncol(originalForestImportance) != 1) stop("originalForestImportance ncol != 1") orderedOriginalImps <- originalForestImportance[plottingOrder, 1, drop = TRUE] plot(orderedOriginalImps, type = "n", axes = FALSE, xlab = xlab, ylab = ylab, main = main, ylim = ylim, ...) 
    abline(h = 0, lty = 2, col = "blue")
    axis(2)
    box()
    if(show.var.names) {
        axis(1, labels = names(orderedOriginalImps),
             at = 1:length(orderedOriginalImps))
    } else {
        axis(1)
    }
    if(!overlayTrue)
###        points(x = 1:nvars, orderedOriginalImps, lwd = lwdBlack,
###               col = "black", type = "b", cex = cexPoint)
        lines(x = 1:nvars, orderedOriginalImps, lwd = lwdBlack,
              col = "black", type = "b", cex = cexPoint)

    if(!is.null(vars.highlight)) {
        if(length(vars.highlight) > nvars) {
            warning("Not all vars. to highlight will be shown; increase nvars\n")
            cat("\n Not all vars. to highlight will be shown; increase nvars\n")
        }
        pos.selected <- which(names(orderedOriginalImps) %in% vars.highlight)
        if(!length(pos.selected)) {
            warning("No selected vars. among those to show\n")
            cat("\nNo selected vars. among those to show\n")
        }
        else segments(pos.selected, 0,
                      pos.selected, orderedOriginalImps[pos.selected],
                      col = "blue", lwd = 2)
        if(length(pos.selected) < length(vars.highlight)) {
            warning("Not shown ", length(vars.highlight) - length(pos.selected),
                    " of the 'to-highlight' variables\n")
            cat("Not shown ", length(vars.highlight) - length(pos.selected),
                " of the 'to-highlight' variables\n")
        }
    }

    column.of.mean <- dim(randomImportances)[2] + 1
    if(screeRandom) {
        randomImportances <- apply(randomImportances, 2, sort, decreasing = TRUE)
        randomImportances <- randomImportances[1:nvars, ]
        randomImportances <- cbind(randomImportances,
                                   apply(randomImportances, 1, mean))
    } else {
        randomImportances <- randomImportances[plottingOrder, ]
        randomImportances <- cbind(randomImportances,
                                   apply(randomImportances, 1, mean))
    }
    matlines(randomImportances[, -column.of.mean],
             col = "lightblue", lwd = lwdLightblue, lty = 1)
    lines(x = 1:nvars, y = randomImportances[, column.of.mean],
          lwd = lwdRed, col = "red")
    if(overlayTrue) {
        points(x = 1:nvars, orderedOriginalImps, lwd = lwdBlack,
               col = "black", type = "b", cex = cexPoint)
        lines(x = 1:nvars, orderedOriginalImps, lwd = lwdBlack,
              col = "black", type = "l")
    }
}

varSelImpSpecRF <- function(forest, xdata = NULL, Class = NULL,
                            randomImps = NULL,
                            threshold = 0.10,
                            numrandom = 20,
                            whichImp = "impsUnscaled",
                            usingCluster = TRUE,
                            TheCluster = NULL, ...) {
    if((is.null(xdata) | is.null(Class)) & is.null(randomImps))
        stop("You must specify a randomVarImpsRF object OR ",
             "valid covariates and class objects.\n")
    if((!is.null(xdata) & !is.null(Class)) & !is.null(randomImps))
        warning("Using only the randomVarImpsRF object. ",
                "Covariates and class objects discarded.\n",
                immediate. = TRUE)
    if(length(whichImp) > 1)
        stop("You can only use one importance measure")

    originalImps <- switch(whichImp,
                           "impsUnscaled" =
                           importance(forest, type = 1, scale = FALSE),
                           "impsScaled" =
                           importance(forest, type = 1, scale = TRUE),
                           "impsGini" =
                           importance(forest, type = 2, scale = TRUE)
                           ## forest$importance[, ncol(forest$importance)]
                           )
    if(is.null(originalImps)) stop("\n Not valid 'whichImp' \n")
    originalImpsOrder <- order(originalImps, decreasing = TRUE)

    if(is.null(randomImps)) {
        randomImps <- switch(whichImp,
                             "impsUnscaled" =
                             randomVarImpsRF(xdata, Class, forest,
                                             numrandom = numrandom,
                                             whichImp = "impsUnscaled",
                                             usingCluster = usingCluster,
                                             TheCluster = TheCluster,
                                             ...)$impsUnscaled,
                             "impsScaled" =
                             randomVarImpsRF(xdata, Class, forest,
                                             numrandom = numrandom,
                                             whichImp = "impsScaled",
                                             usingCluster = usingCluster,
                                             TheCluster = TheCluster,
                                             ...)$impsScaled,
                             "impsGini" =
                             randomVarImpsRF(xdata, Class, forest,
                                             numrandom = numrandom,
                                             whichImp = "impsGini",
                                             usingCluster = usingCluster,
                                             TheCluster = TheCluster,
                                             ...)$impsGini)
    } else {
        elemento <- match(whichImp, names(randomImps))
        if(is.na(elemento))
            stop("The requested importance was not calculated\n",
                 "for the randomImps ", deparse(substitute(randomImps)), "\n",
                 "object.\n")
        cat("\n Using the randomVarImpsRF", deparse(substitute(randomImps)),
            "object. xdata, Class, numrandom ignored.\n")
        randomImps <- randomImps[[elemento]]
    }

    randomImps <- apply(randomImps, 2, sort, decreasing = TRUE)
    thresholds <- apply(randomImps, 1,
                        function(x) quantile(x, probs = 1 - threshold, type = 8))
    largest.value <- which(originalImps[originalImpsOrder] <= thresholds)[1]
    if(is.na(largest.value)) {
        selected.vars <- originalImpsOrder
        warning("All variables selected; could signal a problem")
    } else if(largest.value == 1) {
        selected.vars <- NA
        warning("No variables selected; could signal a problem")
    } else selected.vars <- originalImpsOrder[1:(largest.value - 1)]
    return(selected.vars)
}

selProbPlot <- function(object, k = c(20, 100),
                        color = TRUE,
                        legend = FALSE, xlegend = 68, ylegend = 0.93,
                        cexlegend = 1.4,
                        main = NULL,
                        xlab = "Rank of gene",
                        ylab = "Selection probability",
                        pch = 19, ...) {
    ## selection probability plots, such as in Pepe
    ## et al. 2003 (ROC paper).
    if(!inherits(object, "varSelRFBoot"))
        stop("This function only works with objects created\n",
             "with the varSelRFBoot function.\n")
    nboot <- object$number.of.bootsamples
    if(nboot < 100)
        warning("You only used ", nboot,
                " bootstrap samples. Might be too few.", immediate. = TRUE)
    original.imps <- object$all.data.run$initialImportances
    original.ranks <- rank(-original.imps)
    boot.ranks <- lapply(object$allBootRuns,
                         function(x) {rank(-x$initialImportances)})
    boot.ranks <- matrix(unlist(boot.ranks), ncol = nboot)
    k1 <- apply(boot.ranks, 1, function(z) {sum(z <= k[1])/nboot})
    k2 <- apply(boot.ranks, 1, function(z) {sum(z <= k[2])/nboot})
    if(color) {
        plot(original.ranks[original.ranks <= k[2]],
             k2[original.ranks <= k[2]],
             xlim = c(1, k[2]), col = "red", pch = pch,
             xlab = xlab, ylab = ylab, main = main, ylim = c(0, 1), ...)
        points(original.ranks[original.ranks <= k[1]],
               k1[original.ranks <= k[1]], col = "blue", pch = pch)
        if(legend)
            legend(x = xlegend, y = ylegend,
                   legend = c(paste("Top", k[2]), paste("Top", k[1])),
                   col = c("red", "blue"), pch = pch, cex = cexlegend)
    } else {
        plot(original.ranks[original.ranks <= k[2]],
             k2[original.ranks <= k[2]],
             xlim = c(1, k[2]), pch = pch,
             xlab = "Rank of gene", ylab = "Selection probability",
             main = main, ylim = c(0, 1), ...)
        points(original.ranks[original.ranks <= k[1]],
               k1[original.ranks <= k[1]], pch = 21)
        if(legend)
            legend(x = xlegend, y = ylegend,
                   legend = c(paste("Top", k[2]), paste("Top", k[1])),
                   pch = c(19, 21), cex = (cexlegend + 0.2))
    }
}

###################################################################
###################################################################
##########                                           ##############
##########      Custom code used in paper            ##############
##########      (unlikely to be of interest          ##############
##########       for anybody else)                   ##############
##########                                           ##############
###################################################################
###################################################################

figureSummary.varSelRFBoot <- function(object) {
    ## to create data for figures of paper with random data
    errorrate <- object$bootstrap.pred.error
    nvused <- length(object$all.data.vars)
    return(c(errorrate, nvused))
}

figureSummary2.varSelRFBoot <- function(object) {
    ## to create data for figures of paper with random data
    errorrate <- object$bootstrap.pred.error
    nvused <- length(object$all.data.vars)
    median.nvars <- median(object$number.of.vars)
    iq1.nvars <- quantile(object$number.of.vars, p = 0.25)
    iq3.nvars <- quantile(object$number.of.vars, p = 0.75)
    string1 <- paste(formatC(round(nvused, 0), width = 6), " (",
                     round(iq1.nvars, 0), ", ",
                     round(median.nvars, 0), ", ",
                     round(iq3.nvars, 0), "), ",
                     formatC(round(errorrate, 3), width = 4), " \\\\ ",
                     sep = "")
    string1
}

tableSummary.varSelRFBoot <- function(object, name) {
    ### to create data ready for the LaTeX tables of the paper
    neatname <- switch(name,
                       "gl.boot" = "Leukemia ",
                       "vv.boot" = "Breast 2 cl.",
                       "vv3.boot" = "Breast 3 cl.",
                       "nci.boot" = "NCI 60 ",
                       "ra.boot" = "Adenocar. ",
                       "brain.boot" = "Brain ",
                       "colon.boot" = "Colon ",
                       "lymphoma.boot" = "Lymphoma ",
                       "prostate.boot" = "Prostate ",
                       "srbct.boot" = "Srbct ")
    errorrate <- object$bootstrap.pred.error
    nvused <- length(object$all.data.vars)
    median.nvars <- median(object$number.of.vars)
    iq1.nvars <- quantile(object$number.of.vars, p = 0.25)
    iq3.nvars <- quantile(object$number.of.vars, p = 0.75)
    in.all.data <- which(names(table(object$all.vars.in.solutions)) %in%
                         object$all.data.vars)
    tmp1 <- table(object$all.vars.in.solutions)[in.all.data]/object$number.of.bootsamples
    median.freq <- median(tmp1)
    if(nvused == 2) {
        iq1.freq <- min(tmp1)
        iq3.freq <- max(tmp1)
    } else {
        iq1.freq <- quantile(tmp1, p = 0.25)
        iq3.freq <- quantile(tmp1, p = 0.75)
    }
    if(nvused > 2)
        string1 <- paste(neatname, "& ",
                         formatC(round(errorrate, 3), width = 4), " & ",
                         formatC(round(nvused, 0), width = 4), " & ",
                         formatC(round(median.nvars, 0), width = 4), " (",
                         round(iq1.nvars, 0), ", ", round(iq3.nvars, 0), ") & ",
                         formatC(round(median.freq, 2), width = 4), " (",
                         round(iq1.freq, 2), ", ", round(iq3.freq, 2), ")\\\\",
                         sep = "")
    else
        string1 <- paste(neatname, "& ",
                         formatC(round(errorrate, 3), width = 4), " & ",
                         formatC(round(nvused, 0), width = 4), " & ",
                         formatC(round(median.nvars, 0), width = 4), " (",
                         round(iq1.nvars, 0), ", ", round(iq3.nvars, 0), ") & ",
                         formatC(round(median.freq, 2), width = 4), " (",
                         round(iq1.freq, 2), ", ",
                         round(iq3.freq, 2), ")\\footnotemark[1]\\\\",
                         sep = "")
    cat(string1, "\n")
}

###################################################################
###################################################################
##########                                           ##############
##########      Miscellaneous code                   ##############
##########                                           ##############
###################################################################
################################################################### ## rVI <- function(xdata, ydata, forest, numrandom = 40, ## whichImp = "impsUnscaled", ## ...) { ## ## from randomVarImpsRF, but simplified to use only ## ## one tytpe of importance ## ontree <- forest$ntree ## omtry <- forest$mtry ## if(usingCluster) { ## iRF.cluster <- function(dummy, ontree, omtry, ...) { ## rf <- randomForest(x = xdataTheCluster, ## y = sample(ClassTheCluster), ## ntree = ontree, ## mtry = omtry, ## importance = TRUE, keep.forest = FALSE, ## ...) ## imps <- switch(whichImp, ## "impsUnscaled" = ## importance(rf, type = 1, scale = FALSE), ## "impsScaled" = ## importance(rf, type = 1, scale = TRUE), ## "impsGini" = ## rf$importance[, ncol(rf$importance)] ## ) ## if(is.null(imps)) ## stop("\n Not valid 'whichImp' \n") ## return(imps) ## } ## clusterEvalQ(TheCluster, ## rm(list = c("xdataTheCluster", "ClassTheCluster"))) ## xdataTheCluster <<- xdata ## ClassTheCluster <<- ydata ## clusterExport(TheCluster, ## c("xdataTheCluster", "ClassTheCluster")) ## outCl <- clusterApplyLB(TheCluster, ## 1:numrandom, ## iRF.cluster, ## ontree = ontree, ## omtry = omtry) ## } else { ## outCl <- list() ## for(nriter in 1:numrandom) { ## rf <- randomForest(x = xdata, ## y = sample(ydata), ## ntree = ontree, ## mtry = omtry, ## importance = TRUE, ## keep.forest = FALSE, ## ...) ## imps <- switch(whichImp, ## "impsUnscaled" = ## importance(rf, type = 1, scale = FALSE), ## "impsScaled" = ## importance(rf, type = 1, scale = TRUE), ## "impsGini" = ## rf$importance[, ncol(rf$importance)] ## ) ## if(is.null(imps)) ## stop("\n Not valid 'whichImp' \n") ## outCl[[nriter]] <- imps ## } ## } ## randomVarImps <- matrix(unlist(outCl), ncol = numrandom) ## rownames(randomVarImps) <- rownames(forest$importance) ## colnames(randomVarImps) <- 1:numrandom ## class(randomVarImps) <- c(class(randomVarImps), ## "rVI") ## return(randomVarImps) ## } ##### old stuff ####nr <- 10; nc <- 20; x <- matrix(rnorm(2* nr*nc), ncol = nc) ####colnames(x) <- paste("v", 1:nc, sep ="") ####Class <- factor(c(rep("A", nr), rep("B", nr))) ####xdata <- x ####library(randomForest) ####rf1 <- randomForest(x = x, y = Class, importance = TRUE, #### keep.forest = FALSE, ntree = 2000) ####usingCluster <- TRUE ####if(usingCluster) { #### library(snow) #### library(Rmpi) #### clusterNumberNodes <- 4 #### typeCluster <- "MPI" #### TheCluster <- makeCluster(clusterNumberNodes, #### type = typeCluster) #### clusterSetupSPRNG(TheCluster) #### clusterEvalQ(TheCluster, library(randomForest)) ####} ####screePlotRF <- function(forest, randomImportances, #### nvars = 50, #### show.var.names = FALSE, #### vars.highlight = NULL, #### main = NULL, scale = TRUE) { #### nvars <- min(nvars, dim(randomImportances)[1]) #### if(scale) { #### original.imps <- importance(forest, type = 1, scale = TRUE) #### } else { #### original.imps <- importance(forest, type = 1, scale = FALSE) #### } #### ordered.imps <- sort(original.imps, decreasing = TRUE)[1:nvars] #### plot(ordered.imps, type = "b", axes = FALSE, lwd = 1.5, #### xlab = "Variable", ylab = "Importance", #### main = main) #### abline(h = 0, lty = 2, col = "blue") #### axis(2) #### box() #### if(show.var.names) { #### axis(1, labels = names(ordered.imps), #### at = 1:length(ordered.imps)) #### } else { #### axis(1) #### } #### if(!is.null(vars.highlight)) { #### if(length(vars.highlight) > nvars) { #### warning("Not all vars. to highlight will be shown; increase nvars\n") #### cat("\n Not all vars. 
to highlight will be shown; increase nvars\n") #### } #### pos.selected <- which(names(ordered.imps) %in% vars.highlight) #### if(!length(pos.selected)){ #### warning("No selected vars. among those to show\n") #### cat("\nNo selected vars. among those to show\n") #### } #### else segments(pos.selected, 0, #### pos.selected, ordered.imps[pos.selected], #### col = "blue", lwd = 2) #### if(length(pos.selected) < length(vars.highlight)) { #### warning("Not shown ", length(vars.highlight) - length(pos.selected), #### " of the 'to-highlight' variables\n") #### cat("Not shown ", length(vars.highlight) - length(pos.selected), #### " of the 'to-highlight' variables\n") #### } #### } #### randomImportances <- randomImportances[1:nvars, ] #### column.mean <- dim(randomImportances)[2] #### matlines(randomImportances[, -column.mean], #### col = "green", lty = 1) #### lines(x = 1:nvars, #### y = randomImportances[, column.mean], #### lwd = 1.3, col = "red") ####} ####screePlotRF(rf1, ii1, nvars = 20, vars.highligh = paste("v", c(8, 4), sep =""), show.var.names = TRUE) ####i1 <- randomVarImpsRF(xdata, Class, rf1, numrandom = 3) ####screePlotRF(rf1, i1) ### This version allows optimizing mtry and ntree; leave code here, ### but don't use. ## FIXME: uncomment, but fix global function uses. And add examples. ## ExperimentalvarSelRF <- function(xdata, Class, vars.drop.num = NULL, ## vars.drop.frac = 0.5, ## c.sd = 1, ## whole.range = TRUE, ## recompute.var.imp = FALSE, ## verbose = FALSE, ## returnFirstForest = TRUE, ## ## next are all for tune2RF ## ## but this is a mess; should pass them as ## ## control. ## tuneMtry = FALSE, tuneNtree = FALSE, ## startNtree = 1000, startMtryFactor = 1, ## stepFactorMtry = 1.25, ## stepFactorNtree = 1.75, ## minCorNtree = 0.975, ## quantNtree = 0.025, ## returnForest = TRUE, ## ntreeTry = 2000) { ## if( (is.null(vars.drop.num) & is.null(vars.drop.frac)) | ## (!is.null(vars.drop.num) & !is.null(vars.drop.frac))) ## stop("One (and only one) of vars.drop.frac and vars.drop.num must be NULL and the other set") ## max.num.steps <- dim(xdata)[2] ## num.subjects <- dim(xdata)[1] ## if(is.null(colnames(xdata))) ## colnames(xdata) <- paste("v", 1:dim(xdata)[2], sep ="") ## ##oversize the vectors; will prune later. 
## n.vars <- vars <- OOB.rf <- OOB.sd <- rep(NA, max.num.steps) ## ## First get optimal values for mtry and ntree, and get first run: ## rf <- tune2RF(x = xdata, y = Class, ## tuneMtry = tuneMtry, tuneNtree = tuneNtree, ## startNtree = startNtree, mtryFactor = mtryFactor, ## stepFactorMtry = stepFactorMtry, ## stepFactorNtree = stepFactorNtree, ## minCorNtree = minCorNtree, ## quantNtree = quantNtree, ## returnForest = TRUE, ## ntreeTry = ntreeTry, ## verbose = verbose) ## ## We'll need it for future runs ## ntree <- rf$ntree ## mtry <- rf$mtry ## if(returnFirstForest) ## FirstForest <- rf ## else ## FirstForest <- NULL ## # rf <- randomForest(x = xdata, y = Class, importance= TRUE, ## # ntree = ntree, keep.forest = FALSE) ## m.iterated.ob.error <- m.initial.ob.error <- oobError(rf) ## sd.iterated.ob.error <- sd.initial.ob.error <- ## sqrt(m.iterated.ob.error * (1 - m.iterated.ob.error) * (1/num.subjects)) ## if(verbose) { ## print(paste("Initial OOB error: mean = ", round(m.initial.ob.error, 4), ## "; sd = ", round(sd.initial.ob.error, 4), sep = "")) ## } ## ### previous code (before rF < 4.3.1) ## #selected.vars <- order(rf$importance[, (ncol(rf$importance) - 1)], decreasing = TRUE) ## # ordered.importance <- rf$importance[selected.vars,(ncol(rf$importance) -1)] ## importances <- importance(rf, type = 1, scale = FALSE) ## selected.vars <- order(importances, decreasing = TRUE) ## ordered.importances <- importances[selected.vars] ## initialImportances <- importances ## initialOrderedImportances <- ordered.importances ## j <- 1 ## n.vars[j] <- dim(xdata)[2] ## vars[j] <- paste(colnames(xdata), collapse = " + ") ## OOB.rf[j] <- m.iterated.ob.error ## OOB.sd[j] <- sd.iterated.ob.error ## var.simplify <- TRUE ## while(var.simplify) { ## last.rf <- rf ## last.vars <- selected.vars ## # print(paste(".........Number of variables before selection", ## # dim(xdata)[2])) ## debug ## previous.m.error <- m.iterated.ob.error ## previous.sd.error <- sd.iterated.ob.error ## if(recompute.var.imp & (j > 1)) { ## ## need to set indexes as absolute w.r.t. original data ## #### tmp.order <- order(rf$importance[, (ncol(rf$importance) - 1)], ## #### decreasing = TRUE) ## #### selected.vars <- selected.vars[tmp.order] ## #### ordered.importance <- rf$importance[tmp.order, (ncol(rf$importance) -1)] ## importances <- importance(rf, type = 1, scale = FALSE) ## tmp.order <- order(importances, decreasing = TRUE) ## selected.vars <- selected.vars[tmp.order] ## ordered.importances <- importances[tmp.order] ## } ## num.vars <- length(selected.vars) ## #### if(any(is.na(ordered.importances))) { ## #### print("********** Nas in ordered.importances ******") ## #### browser() ## #### } ## if(any(ordered.importances < 0)) { ## selected.vars <- selected.vars[-which(ordered.importances < 0)] ## ordered.importances <- ordered.importances[-which(ordered.importances < 0)] ## } else { ## if(is.null(vars.drop.num)) ## vars.drop <- round(num.vars * vars.drop.frac) ## else vars.drop <- vars.drop.num ## if(num.vars >= (vars.drop + 2)) { ## selected.vars <- selected.vars[1: (num.vars - vars.drop)] ## ordered.importances <- ordered.importances[1: (num.vars - vars.drop)] ## } ## else { ## selected.vars <- selected.vars[1:2] ## ordered.importances <- ordered.importances[1:2] ## } ## } ## ## couldn't we eliminate the following? 
## if((length(selected.vars) < 2) | (any(selected.vars < 1))) { ## var.simplify <- FALSE ## break ## } ## if(length(selected.vars) <= 2) var.simplify <- FALSE ## if(recompute.var.imp) ## rf <- randomForest(x = xdata[, selected.vars], y = Class, importance= TRUE, ## ntree = ntree, mtry = mtry, keep.forest = FALSE) ## else ## rf <- randomForest(x = xdata[, selected.vars], y = Class, importance= FALSE, ## ntree = ntree, mtry = mtry, keep.forest = FALSE) ## m.iterated.ob.error <- oobError(rf) ## sd.iterated.ob.error <- ## sqrt(m.iterated.ob.error * (1 - m.iterated.ob.error) * (1/num.subjects)) ## if(verbose) { ## print(paste("..... iteration ", j, "; OOB error: mean = ", ## round(m.iterated.ob.error, 4), ## "; sd = ", round(sd.iterated.ob.error, 4), sep = "")) ## } ## j <- j + 1 ## n.vars[j] <- length(selected.vars) ## vars[j] <- paste(colnames(xdata)[selected.vars], ## collapse = " + ") ## OOB.rf[j] <- m.iterated.ob.error ## OOB.sd[j] <- sd.iterated.ob.error ## if(!whole.range & ## ( ## (m.iterated.ob.error > ## (m.initial.ob.error + c.sd*sd.initial.ob.error)) ## | ## (m.iterated.ob.error > ## (previous.m.error + c.sd*previous.sd.error))) ## ) ## var.simplify <- FALSE ## } ## if (!whole.range) { ## if(!is.null(colnames(xdata))) ## selected.vars <- sort(colnames(xdata)[last.vars]) ## else ## selected.vars <- last.vars ## out <- list(selec.history = data.frame( ## Number.Variables = n.vars, ## Vars.in.Forest = vars, ## OOB = OOB.rf, ## sd.OOB = OOB.sd)[1:j,], ## rf.model = last.rf, ## selected.vars = selected.vars, ## selected.model = paste(selected.vars, collapse = " + "), ## best.model.nvars = length(selected.vars), ## initialImportances = initialImportances, ## initialOrderedImportances = initialOrderedImportances, ## ntree = ntree, ## mtry = mtry, ## firstForest = FirstForest) ## class(out) <- "varSelRF" ## return(out) ## } ## else { ## n.vars <- n.vars[1:j] ## vars <- vars[1:j] ## OOB.rf<- OOB.rf[1:j] ## OOB.sd <- OOB.sd[1:j] ## ##browser() ## min.oob.ci <- min(OOB.rf) + c.sd * OOB.sd[which.min(OOB.rf)] ## best.pos <- ## which(OOB.rf <= min.oob.ci)[which.min(n.vars[which(OOB.rf <= min.oob.ci)])] ## selected.vars <- sort(unlist(strsplit(vars[best.pos], ## " + ", fixed = TRUE))) ## out <- list(selec.history = data.frame( ## Number.Variables = n.vars, ## Vars.in.Forest = vars, ## OOB = OOB.rf, ## sd.OOB = OOB.sd), ## rf.model = NA, ## selected.vars = selected.vars, ## selected.model = paste(selected.vars, collapse = " + "), ## best.model.nvars = n.vars[best.pos], ## initialImportances = initialImportances, ## initialOrderedImportances = initialOrderedImportances, ## ntree = ntree, ## mtry = mtry, ## firstForest = FirstForest) ## class(out) <- "varSelRF" ## return(out) ## } ## } ## FIXME: uncomment, but fix global function uses. And add examples. ## tune2RF <- function(x, y, tuneMtry = TRUE, tuneNtree = TRUE, ## startNtree = 1000, startMtryFactor = 1, ## stepFactorMtry = 1.25, ## stepFactorNtree = 1.75, ## minCorNtree = 0.9, ## quantNtree = 0.025, ## with 4000, look at first 100 ## returnForest = TRUE, ## ntreeTry = 2000, ## verbose = TRUE, ## plot = FALSE, ## ...) 
## {
##     ## if tuneMtry = TRUE and tuneNtree = TRUE,
##     ##    ntree is used for tuneRF, mtryFactor is ignored
##     ## if tuneMtry = TRUE and tuneNtree = FALSE
##     ##    ntree used for tuneRF and final ntree, and mtry factor is ignored
##     ## if tuneMtry = FALSE and tuneRF = TRUE,
##     ##    ntree is used only as starting value for search of ntree and
##     ##    mtryFactor is used for mtry
##     ## if tuneMtry = FALSE and tuneRF = FALSE,
##     ##    mtryFactor and ntree are the ntree and mtry used.
##     if(plot) {
##         op <- par(mfrow = c(1,2))
##         on.exit(par(op))
##     }
##     if(tuneMtry) {
##         tunedMtry <- tuneRF(x, y, stepFactor = stepFactorMtry,
##                             ntreeTry = ntreeTry,
##                             mtryStart = floor(sqrt(ncol(x)) * startMtryFactor),
##                             doBest = FALSE,
##                             plot = plot,
##                             trace = verbose)
##         tunedMtry <-
##             tunedMtry[which.min(tunedMtry[, 2]), 1]
##     } else {
##         tunedMtry <- floor(sqrt(ncol(x)) * startMtryFactor)
##     }
##     tunedNtree <- startNtree
##     if(tuneNtree) {
##         if(verbose)
##             cat("\n Tuning ntree: initial forest construction \n")
##         f1 <- randomForest(x, y, mtry = tunedMtry,
##                            importance = TRUE,
##                            keep.forest = FALSE,
##                            ntree = tunedNtree)
##         repeat {
##             f2 <- randomForest(x, y, mtry = tunedMtry,
##                                importance = TRUE,
##                                keep.forest = FALSE,
##                                ntree = round(stepFactorNtree * tunedNtree))
##             m1 <- cbind(
##                 importance(f1, type = 1, scale = FALSE),
##                 importance(f2, type = 1, scale = FALSE))
##             m1[m1 < quantile(m1, 1-quantNtree)] <- NA
##             m1 <- na.omit(m1)
##             m2 <- m1
##             m2[m2 <= 0] <- NA
##             m2 <- na.omit(m2)
##             mc1 <- cor(m1)
##             mc.rob <- cov.rob(m2, method = "mcd", cor = TRUE)$cor
##             if(verbose) {
##                 cat("\n ... tuning ntree; cor. importances successive ntrees\n")
##                 colnames(mc1) <- c(tunedNtree, round(stepFactorNtree * tunedNtree))
##                 rownames(mc1) <- c(tunedNtree, round(stepFactorNtree * tunedNtree))
##                 colnames(mc.rob) <- c(tunedNtree, round(stepFactorNtree * tunedNtree))
##                 rownames(mc.rob) <- c(tunedNtree, round(stepFactorNtree * tunedNtree))
##                 cat("\n correlation matrix\n")
##                 print(round(mc1, 4))
##                 cat("\n robust correlation matrix\n")
##                 print(round(mc.rob, 4))
##                 cat(" \n using ", dim(m2)[1], "observations\n")
##                 cat("\n")
##             }
##             if((min(mc1) > minCorNtree) &
##                (min(mc.rob) > minCorNtree)) {
##                 plot(m1[, 1], m1[, 2], xlab = paste("Ntree", tunedNtree),
##                      ylab = paste("Ntree", round(stepFactorNtree * tunedNtree)),
##                      main =
##                      paste("Upper ", quantNtree,
##                            "th quantile of importances", sep = ""))
##                 break
##             }
##             tunedNtree <- round(stepFactorNtree * tunedNtree)
##             f1 <- f2
##         }
##     }
##     if(returnForest) {
##         if(tuneNtree)
##             return(f1)
##         else
##             return(randomForest(x, y, mtry = tunedMtry,
##                                 importance = TRUE,
##                                 keep.forest = FALSE,
##                                 ntree = ntree)
##                    )
##     }
##     else {
##         return(c(tunedMtry = tunedMtry, tunedNtree = tunedNtree))
##     }
## }

boot.imp <- function(data, class, ntree = 20000, B = 200) {
    ## just checking the output from the pg.plots. is correct.
    N <- length(class)
    mat.out <- matrix(NA, nrow = dim(data)[2], ncol = B)
    for(i in 1:B) {
        sample.again <- TRUE
        ## this is an ugly hack to prevent nobootsamples of size 0.
        while(sample.again) {
            bootsample <- unlist(tapply(1:N, class,
                                        function(x) sample(x,
                                                           size = length(x),
                                                           replace = TRUE)))
            ## sure, this isn't the fastest, but will do for now.
            nobootsample <- setdiff(1:N, bootsample)
            if(!length(nobootsample)) sample.again <- TRUE
            else sample.again <- FALSE
        }
train.data <- data[bootsample, , drop = FALSE] train.class <- class[bootsample] rftmp <- randomForest(x = train.data, y = train.class, ntree = ntree, keep.forest = FALSE, importance = TRUE) mat.out[, i] <- importance(rftmp, type = 1, scale = FALSE) } return(mat.out) } ####gl.b.i <- boot.imp(gl.data, gl.class, B = 20) ####pairs(cbind(gl.b.i, importance(gl.20000.rf, type = 1, scale = FALSE)), #### pch = ".") ####pairs(cbind(gl.b.i[, 1:10], importance(gl.20000.rf, type = 1, scale = FALSE)), #### pch = ".") ####summary(cbind(gl.b.i, importance(gl.20000.rf, type = 1, scale = FALSE))) ####boxplot(data.frame(cbind(gl.b.i, importance(gl.20000.rf, type = 1, scale = FALSE)))) #####f.mtr <- function(x, mf = 3) { ##### x2 <- floor(sqrt(x) * mf) ##### x2[x2 > x] <- x[x2 > x] ##### return(x2) #####} #####plot(x = sqrt(2:200), f.mtr(2:200, mf = 5), type = "l")
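## A minimal, hypothetical usage sketch of the permutation-based importance
## functions defined above (randomVarImpsRF, randomVarImpsRFplot,
## varSelImpSpecRF). The data and object names below ("x", "Class", "rf1",
## "ri") are illustrative assumptions, not part of the package API; the
## block is guarded by if (FALSE) so that sourcing this file does not run it.
if (FALSE) {
    library(randomForest)
    nr <- 25; nc <- 30
    x <- matrix(rnorm(2 * nr * nc), ncol = nc)
    colnames(x) <- paste("v", 1:nc, sep = "")
    Class <- factor(c(rep("A", nr), rep("B", nr)))
    ## The forest must be fitted with importance = TRUE
    rf1 <- randomForest(x = x, y = Class, ntree = 500,
                        importance = TRUE, keep.forest = FALSE)
    ## Null importances from forests refitted on permuted class labels
    ri <- randomVarImpsRF(x, Class, rf1, numrandom = 20,
                          usingCluster = FALSE)
    ## Observed importances plotted against the permutation null
    randomVarImpsRFplot(ri, rf1, nvars = 20)
    ## Keep variables whose importance exceeds the null 90% threshold
    varSelImpSpecRF(rf1, randomImps = ri, threshold = 0.1)
}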
#' @title Chi-bar-square degrees of freedom computation
#'
#' @description Computation of the degrees of freedom of the chi-bar-square
#'
#' @param msdata a list containing the structure of the model and data, as an output from
#' \code{extractStruct.<package_name>} functions
#' @return a list containing the vector of the degrees of freedom of the chi-bar-square and the dimensions
#' of the cone of the chi-bar-square distribution
#'
#' @name dfChiBarSquare
# #' @export dfChiBarSquare
dfChiBarSquare <- function(msdata){
  orthan <- (msdata$struct == "diag")

  if (msdata$struct == "diag"){
    dims <- list(dimBeta  = list(dim0 = msdata$dims$nbFE0,
                                 dimR = msdata$dims$nbFE1 - msdata$dims$nbFE0),
                 dimGamma = list(dim0 = msdata$dims$nbRE0,
                                 dimR = 0,
                                 dimSplus = msdata$dims$nbRE1 - msdata$dims$nbRE0),
                 dimSigma = msdata$dims$dimSigma)
  }else if (msdata$struct == "full"){
    # nb of covariances tested without the corresponding variances being tested
    nbCovTestedAlone <- (msdata$dims$nbRE0 == msdata$dims$nbRE1)*(msdata$dims$nbCov1 - msdata$dims$nbCov0)
    dims <- list(dimBeta  = list(dim0 = msdata$dims$nbFE0,
                                 dimR = msdata$dims$nbFE1 - msdata$dims$nbFE0),
                 dimGamma = list(dim0 = msdata$dims$nbRE0*(msdata$dims$nbRE0+1)/2,
                                 dimR = (msdata$dims$nbRE0)*(msdata$dims$nbRE1 - msdata$dims$nbRE0) + nbCovTestedAlone,
                                 dimSplus = msdata$dims$nbRE1 - msdata$dims$nbRE0),
                 dimSigma = msdata$dims$dimSigma)
  }else{
    dd <- msdata$detailStruct
    dim0 <- nrow(dd[!dd$tested,]) - msdata$dims$nbFE1
    #dimRplus <- length(dd$names[dd$tested & dd$diagBlock])
    dimR <- length(dd$names[dd$covInBlock != 1 & dd$type == "co" & dd$tested])
    ddSp <- dd[dd$tested & dd$covInBlock,]
    dimSplus <- floor(sqrt(2*as.vector(table(ddSp$block))))
    rm(ddSp)
    dims <- list(dimBeta  = list(dim0 = msdata$dims$nbFE0,
                                 dimR = msdata$dims$nbFE1 - msdata$dims$nbFE0),
                 dimGamma = list(dim0 = dim0,
                                 dimR = dimR,
                                 dimSplus = dimSplus),
                 dimSigma = msdata$dims$dimSigma)
  }

  # Identify the components of the mixture
  # dimLSincluded : dimension of the biggest linear space included in the cone
  # dimLScontaining : dimension of the smallest linear space containing the cone
  q <- sum(unlist(dims)) # total dimension
  # weights of chi-bar with df=0, ..., dimLSincluded-1 are null
  dimLSincluded <- dims$dimBeta$dimR + dims$dimGamma$dimR
  # weights of chi-bar with df=dimLScontaining+1, ..., q are null
  if (orthan)  dimLScontaining <- dims$dimBeta$dimR + dims$dimGamma$dimR +
                 sum(dims$dimGamma$dimSplus)
  if (!orthan) dimLScontaining <- dims$dimBeta$dimR + dims$dimGamma$dimR +
                 sum(dims$dimGamma$dimSplus*(dims$dimGamma$dimSplus+1)/2)

  return(list(df = seq(dimLSincluded, dimLScontaining, 1), dimsCone = dims))
}

#' @title Monte Carlo approximation of chi-bar-square weights
#'
#' @description The function provides a method to approximate the weights of the mixture components,
#' when the number of components is known as well as the degrees of freedom of each chi-square distribution
#' in the mixture, and given a vector of simulated values from the target \eqn{\bar{\chi}^2(V,C)}
#' distribution. Note that the estimation is based on (pseudo)-random Monte Carlo samples. For reproducible
#' results, one should fix the seed of the (pseudo)-random number generator.
#'
#' @name weightsChiBarSquare
# #' @export weightsChiBarSquare
#'
#' @param df a vector with the degrees of freedom of the chi-square components of the chi-bar-square distribution
#' @param V a positive semi-definite matrix
#' @param dimsCone a list with the dimensions of the cone C, expressed on the parameter space scale
#' @param orthan a boolean specifying whether the cone is an orthan
#' @param control (optional) a list of control options for the computation of the chi-bar-weights, containing
#' two elements: \code{parallel} a boolean indicating whether computation should be done in parallel (FALSE
#' by default), \code{nb_cores} the number of cores for parallel computing (if \code{parallel=TRUE} but no value is given
#' for \code{nb_cores}, it is set to number of detected cores minus 1), and \code{M} the Monte Carlo sample
#' size for the computation of the weights.
#'
#' @return A list containing the estimated weights, the standard deviations of the estimated weights and the
#' random sample of \code{M} realizations from the chi-bar-square distribution
#'
#' @importFrom foreach %dopar%
#' @importFrom stats na.omit
weightsChiBarSquare <- function(df, V, dimsCone, orthan, control){
  # Initialize vector of weights
  w <- rep(0, length(df))
  b <- sum(unlist(dimsCone$dimBeta))   # nb of fixed effects
  rf <- dimsCone$dimBeta$dimR          # nb of tested fixed effects
  r <- sum(dimsCone$dimGamma$dimSplus) # nb of variances tested
  ds <- dimsCone$dimSigma              # size of residual
  # Initialize vector of simulated chi-bar-square
  chibarsquare <- numeric()

  if (length(df) == 2){
    w <- c(0.5, 0.5)
    sdw <- rep(0, 2)
  }else{
    if (orthan){
      R <- cbind(matrix(0, ncol = b + dimsCone$dimGamma$dim0, nrow = r),
                 diag(r),
                 matrix(0, nrow = r, ncol = dimsCone$dimSigma))
      W <- R %*% V %*% t(R)
      invW <- chol2inv(chol(W))
      if (r == 2){
        C <- stats::cov2cor(W)
        w[1] <- acos(C[1,2])/(2*pi)
        w[2] <- 0.5
        w[3] <- 0.5 - w[1]
        sdw <- rep(0, 3)
      }else if (r == 3){
        C <- stats::cov2cor(W)
        pC <- corpcor::cor2pcor(C)
        w[4] <- (3*pi - acos(pC[1,2]) - acos(pC[1,3]) - acos(pC[2,3]))/(4*pi)
        w[3] <- (2*pi - acos(C[1,2]) - acos(C[1,3]) - acos(C[2,3]))/(4*pi)
        w[2] <- 0.5 - w[4]
        w[1] <- 0.5 - w[3]
        sdw <- rep(0, 4)
      }else{
        message("Simulating chi-bar-square weights ...")
        Z <- mvtnorm::rmvnorm(control$M, mean = rep(0, nrow(W)), sigma = W)
        projZ <- t(sapply(1:control$M, FUN = function(i){
          quadprog::solve.QP(invW, Z[i,] %*% invW, diag(ncol(Z)),
                             rep(0, ncol(Z)), meq = 0,
                             factorized = FALSE)$solution}))
        nbPosComp <- apply(projZ, 1, FUN = function(x){length(x[x > 1e-6])})
        w <- summary(as.factor(nbPosComp), maxsum = length(df))/control$M
        sdw <- sqrt(w*(1 - w)/control$M)
      }
    }else{
      invV <- chol2inv(chol(V))
      Z <- mvtnorm::rmvnorm(control$M, mean = rep(0, nrow(V)), sigma = V)
      chibarsquare <- rep(0, control$M)
      message("Simulating chi-bar-square weights ...")
      nb_cores <- ifelse(control$parallel, max(1, parallel::detectCores() - 1), 1)
      # Initiate cluster
      doParallel::registerDoParallel(nb_cores)
      i <- 0
      chibarsquare <- foreach::foreach(i = 1:control$M,
                                       .packages = 'varTestnlme',
                                       .combine = c) %dopar% {
        x0 <- Z[i,]
        constantes <- list(Z = Z[i,], invV = invV, orthan = orthan,
                           dimsCone = dimsCone)
        projZ <- alabama::auglag(par = x0, fn = objFunction,
                                 gr = gradObjFunction,
                                 hin = ineqCstr, heq = eqCstr,
                                 hin.jac = jacobianIneqCstr,
                                 heq.jac = jacobianEqCstr,
                                 cst = constantes,
                                 control.outer = list(trace = FALSE,
                                                      method = "BFGS",
                                                      eps = 1e-5))
        # due to the tolerance threshold in auglag, we end up with equality
        # constraints verified up to 1e-5; we set those values to 0 otherwise
        # it results in incorrect projected Z
        projZzero <- projZ$par
        projZzero[abs(projZzero) < 1e-5] <- 0
        Z[i,] %*% invV %*% Z[i,] - objFunction(projZzero, constantes)
      }
      chibarsquare <- chibarsquare[chibarsquare > -1e-04]
      doParallel::stopImplicitCluster()

      # Approximation of chi-bar-square weights
      empQuant <- seq(0.001, 1, 0.001)
      w_covw <- lapply(empQuant,
                       FUN = function(q){try(approxWeights(chibarsquare, df, q))})
      w <- na.omit(t(sapply(1:length(empQuant),
                            FUN = function(i){
                              if (is.null(names(w_covw[[i]]))) {NA*df}
                              else {w_covw[[i]]$w}})))
      covw <- na.omit(t(sapply(1:length(empQuant),
                               FUN = function(i){
                                 if (is.null(names(w_covw[[i]]))) {NA*df}
                                 else {sqrt(diag(w_covw[[i]]$covw))}})))
      # Take into account the constraints on the weights: they should be
      # between 0 and 1/2
      minW <- apply(w, 1, min)
      maxW <- apply(w, 1, max)
      admissibleWeights <- w[minW > 0 & maxW < 0.5,]
      sdAdmissibleWeights <- covw[minW > 0 & maxW < 0.5,]
      # NCOL and NROW can be used on vectors
      eps <- 0
      while (min(NROW(admissibleWeights), NCOL(admissibleWeights)) == 0){
        # if no combination is admissible, relax the constraints
        admissibleWeights <- w[minW > -(0.05 + eps) & maxW < (0.55 + eps),]
        sdAdmissibleWeights <- covw[minW > -(0.05 + eps) & maxW < (0.55 + eps),]
        eps <- eps + 0.01
      }
      w <- apply(admissibleWeights, 2, mean)
      sdw <- sqrt(apply(sdAdmissibleWeights, 2, mean)/nrow(admissibleWeights))
    }
  }
  return(list(weights = w, sdWeights = sdw, randomCBS = chibarsquare))
}

#' @title Monte Carlo approximation of chi-bar-square weights
#'
#' @description The chi-bar-square distribution \eqn{\bar{\chi}^2(I,C)} is a mixture of chi-square distributions. The function provides
#' a method to approximate the weights of the mixture components, when the number of components is known as well as the
#' degrees of freedom of each chi-square distribution in the mixture, and given a vector of simulated values from the target
#' \eqn{\bar{\chi}^2(I,C)} distribution. Note that the estimation is based on (pseudo)-random Monte Carlo samples. For reproducible
#' results, one should fix the seed of the (pseudo)-random number generator.
#'
#' @details Let us assume that there are \eqn{p} components in the mixture, with degrees of
#' freedom between \eqn{n_1} and \eqn{n_p}. By definition of a mixture distribution, we have:
#' \deqn{ P(\bar{\chi}^2(I,C) \leq c) = \sum_{i=n_1}^{n_p} w_i P(\chi^2_{i} \leq c)}
#' Choosing \eqn{p-2} values \eqn{c_1, \dots, c_{p-2}}, the function will generate a system of \eqn{p-2} equations
#' according to the above relationship, and add two additional relationships stating that the sum of all the weights is
#' equal to 1, and that the sum of odd weights and of even weights is equal to 1/2, so that we end up with a system of \eqn{p}
#' equations with \eqn{p} variables.
#'
#' @name approxWeights
#' @export approxWeights
#'
#' @param x a vector of i.i.d. random realizations of the target chi-bar-square distribution
#' @param df a vector containing the degrees of freedom of the chi-squared components
#' @param q the empirical quantile of \code{x} used to choose the \eqn{p-2} values \eqn{c_1, \dots, c_{p-2}} (see Details)
#' @return A vector containing the estimated weights, as well as their covariance matrix.
#' @author Charlotte Baey <\email{[email protected]}>
#'
#' @importFrom stats quantile pchisq qchisq cov
approxWeights <- function(x, df, q){
  maxcbs <- max(0, quantile(x, q))
  epsilon <- pchisq(maxcbs, df = max(df))

  # Note: 'c' below is a local numeric vector that shadows base::c inside
  # this function; calls like c(1,2) still dispatch to the base function.
  c <- numeric()
  if (length(df) > 2){
    c <- sapply(df[-c(1,2)],
                FUN = function(i){qchisq(epsilon, i, lower.tail = TRUE)})
  }else{
    c <- sapply(df, FUN = function(i){qchisq(epsilon, i, lower.tail = TRUE)})
  }

  aij <- sapply(df, FUN = function(i){pchisq(c, df = i)})
  phatcbs <- sapply(1:length(c), FUN = function(i){x <= c[i]})
  pj <- apply(phatcbs, 2, mean)
  covpj <- cov(phatcbs)/length(x)

  # We add constraints on the weights: sum of weights is equal to 1 and sums
  # of even and of odd weights are equal to 1/2
  aij <- rbind(aij, rep(1, length(df)), rep(c(1,0), length.out = length(df)))
  pj <- c(pj, 1, 0.5)
  invAij <- solve(aij)
  w <- as.vector(invAij %*% pj)
  covpj2 <- matrix(0, nrow = length(df), ncol = length(df))
  covpj2[1:(length(df)-2), 1:(length(df)-2)] <- covpj
  covw <- invAij %*% covpj2 %*% t(invAij)
  return(list(w = w, covw = covw))
}
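
## A minimal sketch (guarded by if (FALSE), with hypothetical inputs) of how
## approxWeights() can be used on its own: simulate a chi-bar-square sample
## for the cone {x >= 0} in two dimensions with V = I, whose mixture has
## components with df = 0, 1, 2 and known weights (1/4, 1/2, 1/4), and check
## that the recovered weights are close to these values.
if (FALSE) {
  set.seed(1)
  # The projection of a N(0, I2) draw on the positive orthan is pmax(Z, 0);
  # a chi-bar-square realization is the squared norm of that projection
  Z <- matrix(rnorm(2e4), ncol = 2)
  cbs <- rowSums(pmax(Z, 0)^2)
  approxWeights(cbs, df = 0:2, q = 0.9)
}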