--- title: "Getting Started with Achilles" date: "`r Sys.Date()`" output: pdf_document: number_sections: yes toc: yes html_document: number_sections: yes toc: yes vignette: > %\VignetteIndexEntry{Getting Started with Achilles} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} --- ## Installation 1. Achilles currently supports version 5.3 and 5.4 of the OMOP CDM. (https://github.com/OHDSI/CommonDataModel). 1. This package makes use of rJava. Make sure that you have Java installed. If you don't have Java already installed on your computer (on most computers it already is installed), go to [java.com](https://java.com) to get the latest version. If you are having trouble with rJava, [this Stack Overflow post](https://stackoverflow.com/questions/7019912/using-the-rjava-package-on-win7-64-bit-with-r) may assist you when you begin troubleshooting. 1. In R, use the following commands to install Achilles. ```r if (!require("remotes")) install.packages("remotes") # To install the master branch remotes::install_github("OHDSI/Achilles") # To install latest release (if master branch contains a bug for you) # remotes::install_github("OHDSI/Achilles@*release") # To avoid Java 32 vs 64 issues # remotes::install_github("OHDSI/Achilles", args="--no-multiarch") ``` ## Running Achilles The analyses are run in one SQL session and all intermediate results are written to temp tables before finally being combined into the final results tables. Temp tables are dropped once the package is finished running. See the [DatabaseConnector](https://github.com/OHDSI/DatabaseConnector) package for details on settings the connection details for your database: ```r library(Achilles) connectionDetails <- createConnectionDetails( dbms="redshift", server="server.com", user="secret", password='secret', port="5439") ``` ```r Achilles::achilles( cdmVersion = "5.4", connectionDetails = connectionDetails, cdmDatabaseSchema = "yourCdmSchema", resultsDatabaseSchema = "yourResultsSchema" ) ``` The cdmDatabaseSchema parameter, and resultsDatabaseSchema parameter, are the fully qualified names of the schemas holding the CDM data, and targeted for result writing, respectively. The SQL platforms supported by [DatabaseConnector](https://github.com/OHDSI/DatabaseConnector) and [SqlRender](https://github.com/OHDSI/SqlRender) are the **only** ones supported here in Achilles as `dbms`. ## Developers: How to Add or Modify Analyses Please refer to the [README-developers.md file](README-developers.md). ## License Achilles is licensed under Apache License 2.0
/scratch/gouwar.j/cran-all/cranData/Achilles/inst/doc/GettingStarted.Rmd
## ---- echo = FALSE, message = FALSE, warning = FALSE-------------------------- library(Achilles) knitr::opts_chunk$set( cache = FALSE, comment = "#>", error = FALSE, tidy = FALSE) ## ----tidy = FALSE, eval = FALSE----------------------------------------------- # connectionDetails <- createConnectionDetails(dbms = "postgresql", # server = "localhost/synpuf", # user = "cdm_user", # password = "cdm_password") # # achilles(connectionDetails = connectionDetails, # cdmDatabaseSchema = "cdm", # resultsDatabaseSchema = "results", # outputFolder = "output") ## ----tidy = FALSE, eval = FALSE----------------------------------------------- # connectionDetails <- createConnectionDetails(dbms = "postgresql", # server = "localhost/synpuf", # user = "cdm_user", # password = "cdm_password") # # achilles(connectionDetails = connectionDetails, # cdmDatabaseSchema = "cdm", # resultsDatabaseSchema = "results", # scratchDatabaseSchema = "scratch", # numThreads = 5, # outputFolder = "output") ## ----tidy = FALSE, eval = FALSE----------------------------------------------- # connectionDetails <- createConnectionDetails(dbms = "postgresql", # server = "localhost/synpuf", # user = "cdm_user", # password = "cdm_password") # # createIndices(connectionDetails = connectionDetails, # resultsDatabaseSchema = "results", # outputFolder = "output") ## ----tidy = FALSE, eval = FALSE----------------------------------------------- # connectionDetails <- createConnectionDetails(dbms = "postgresql", # server = "localhost/synpuf", # user = "cdm_user", # password = "cdm_password") # # dropAllScratchTables(connectionDetails = connectionDetails, # scratchDatabaseSchema = "scratch", numThreads = 5) ## ----tidy = TRUE, eval = TRUE------------------------------------------------- citation("Achilles")
/scratch/gouwar.j/cran-all/cranData/Achilles/inst/doc/RunningAchilles.R
--- title: "Running Achilles on Your CDM" date: "`r Sys.Date()`" output: pdf_document: number_sections: yes toc: yes html_document: number_sections: yes toc: yes vignette: > %\VignetteIndexEntry{Running Achilles on Your CDM} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} --- ```{r, echo = FALSE, message = FALSE, warning = FALSE} library(Achilles) knitr::opts_chunk$set( cache = FALSE, comment = "#>", error = FALSE, tidy = FALSE) ``` # Introduction In this vignette we cover how to run the Achilles package on your Common Data Model (CDM) database in order to characterize the dataset. The characterizations can help you learn more about your dataset's features and limitations. It is a best practice for all OHDSI sites to run Achilles on their CDM datasets to ensure researchers can evaluate study feasibility and contextualize study results. # General Approach The Achilles package consists of: 1. The **achilles** function runs a set of SQL scripts to characterize the domains and concepts of the CDM. 2. The **createIndices** function creates table indices for the achilles tables, which can help improve query performance. 3. The **getAnalysisDetails** function provides descriptions about the full set of Achilles analyses. 4. The **dropAllScratchTables** function is useful only for multi-threaded mode. It can clear any leftover staging tables. ## SQL Only Mode In most Achilles functions, you can specify `sqlOnly = TRUE` in order to produce the SQL without executing it, which can be useful if you'd like to examine the SQL closely or debug something. The SQL files are stored in the `outputFolder`. ## Logging File and console logging is enabled across most Achilles functions. The status of each step is logged into files in the `outputFolder`. You can review the files in a common text editor. ## Verbose Mode The `verboseMode` parameter can be set to FALSE if you'd like less details about the function execution to appear in the console. Either way, all details are written to the log files. By default, this is set to TRUE. ## Preparation for running Achilles In order to run the package, you will need to determine if you'd like the Achilles tables and staging tables to be stored in schemas that are separate from your CDM's schema (recommended), or within the same schema as the CDM. ### Multi-Threaded vs Single-Threaded As the **achilles** functions can run independently, we have added a multi-threaded mode to allow for more than 1 SQL script to execute at a time. This is particularly useful for massively parallel processing (MPP) platforms such as Amazon Redshift and Microsoft PDW. It may not be beneficial for traditional SQL platforms, so only use the multi-threaded mode if confident it can be useful. Further, while multiple threads can help performance in MPP platforms, there can be diminishing returns as the cluster has a finite number of concurrency slots to handle the queries. A rule of thumb: most likely you should not use more than 10. In the multi-threaded mode, all scripts produce permanent staging tables, whereas in the single-threaded mode, the scripts produce temporary staging tables. In both, the staging tables are merged to produce the final Achilles tables. # Achilles Parameters (Both Modes) The following sub-sections describe the optional parameters in **achilles** that can be configured, regardless of whether you run the function in single- or multi-threaded mode. 
## Staging Table Prefix To keep the staging tables organized, the **achilles** function will use a table prefix of "tmpach" by default, but you can choose a different one using the `tempAchillesPrefix` parameter. This is useful for database platforms like Oracle, which limit the length of table names. ## Source Name The `sourceName` parameter is used to assign the name of the dataset to the Achilles results. If you set this to `NULL`, the **achilles** function will try to obtain the source name from the CDM_SOURCE table. ## Create Table The `createTable` parameter, when set to `TRUE`, drops any existing Achilles results tables and builds new ones. If set to `FALSE`, these tables will persist, and the **achilles** function will just insert new data to them. ## Limiting the Analyses By default, the **achilles** function runs all default analyses detailed in the `getAnalysisDetails` function. However, it may be useful to focus on a subset of analyses rather than running the whole set. This can be accomplished by specifying analysis Ids in the `analysisIds` parameter. ## Cost Analyses By default, the **achilles** function does not run analyses on the COST table(s), as they can be very time-consuming, and are not critical to most OHDSI studies. However, you can choose to run these analyses by setting `runCostAnalysis` to `TRUE`. The cost analyses are conditional on the CDM version. If using CDM v5.0, then the older cost tables are queried. If using any version after 5.0, the unified cost table is queried. ## Small Cell Count To avoid patient identification, you can establish the minimum cell size that should be kept in the Achilles tables. Cells with small counts (less than or equal to the value of the `smallCellCount` parameter) are deleted. By default, this is set to 5. Set to 0 for complete summary without small cell count restrictions. ## Drop Scratch Tables *See the Post-Processing section to read about how to run this step separately* *This parameter is only necessary if running in multi-threaded mode* The `dropScratchTables` parameter, if set to `TRUE`, will drop all staging tables created during the execution of **achilles** in multi-threaded mode. ## Create Indices *See the Post-Processing section to read about how to run this step separately* The `createIndices` parameter, if set to `TRUE`, will result in indices on the Achilles results tables to be created in order to improve query performance. ## Return Value When running **achilles**, the return value, if you assign a variable to the function call, is a list object in which metadata about the execution and all of the SQL scripts executed are attributes. You can also run the function call without assigning a variable to it, so that no values are printed or returned. # Running Achilles: Single-Threaded Mode In single-threaded mode, there is no need to set a `scratchDatabaseSchema`, as temporary tables will be used. ```{r tidy = FALSE, eval = FALSE} connectionDetails <- createConnectionDetails(dbms = "postgresql", server = "localhost/synpuf", user = "cdm_user", password = "cdm_password") achilles(connectionDetails = connectionDetails, cdmDatabaseSchema = "cdm", resultsDatabaseSchema = "results", outputFolder = "output") ``` # Running Achilles: Multi-Threaded Mode In multi-threaded mode, you need to specify `scratchDatabaseSchema` and use > 1 for `numThreads`. 
```{r tidy = FALSE, eval = FALSE} connectionDetails <- createConnectionDetails(dbms = "postgresql", server = "localhost/synpuf", user = "cdm_user", password = "cdm_password") achilles(connectionDetails = connectionDetails, cdmDatabaseSchema = "cdm", resultsDatabaseSchema = "results", scratchDatabaseSchema = "scratch", numThreads = 5, outputFolder = "output") ``` # Post-Processing This section describes the usage of standalone functions for post-processing that can be invoked if you did not use them in the **achilles** function call. ## Creating Indices *Not supported by Amazon Redshift or IBM Netezza; function will skip this step if using those platforms* To improve query performance of the Achilles results tables, run the **createIndices** function. ```{r tidy = FALSE, eval = FALSE} connectionDetails <- createConnectionDetails(dbms = "postgresql", server = "localhost/synpuf", user = "cdm_user", password = "cdm_password") createIndices(connectionDetails = connectionDetails, resultsDatabaseSchema = "results", outputFolder = "output") ``` ## Dropping All Staging Tables (Multi-threaded only) If the **achilles** execution has errors, or if you did not enable this step in the call to these functions, use the `dropAllScratchTables` function. The `tableTypes` parameter can be used to specify which batch of staging tables to drop ("achilles"). ```{r tidy = FALSE, eval = FALSE} connectionDetails <- createConnectionDetails(dbms = "postgresql", server = "localhost/synpuf", user = "cdm_user", password = "cdm_password") dropAllScratchTables(connectionDetails = connectionDetails, scratchDatabaseSchema = "scratch", numThreads = 5) ``` # Acknowledgments Considerable work has been dedicated to provide the `Achilles` package. ```{r tidy = TRUE, eval = TRUE} citation("Achilles") ```
/scratch/gouwar.j/cran-all/cranData/Achilles/inst/doc/RunningAchilles.Rmd
--- title: "Getting Started with Achilles" date: "`r Sys.Date()`" output: pdf_document: number_sections: yes toc: yes html_document: number_sections: yes toc: yes vignette: > %\VignetteIndexEntry{Getting Started with Achilles} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} --- ## Installation 1. Achilles currently supports version 5.3 and 5.4 of the OMOP CDM. (https://github.com/OHDSI/CommonDataModel). 1. This package makes use of rJava. Make sure that you have Java installed. If you don't have Java already installed on your computer (on most computers it already is installed), go to [java.com](https://java.com) to get the latest version. If you are having trouble with rJava, [this Stack Overflow post](https://stackoverflow.com/questions/7019912/using-the-rjava-package-on-win7-64-bit-with-r) may assist you when you begin troubleshooting. 1. In R, use the following commands to install Achilles. ```r if (!require("remotes")) install.packages("remotes") # To install the master branch remotes::install_github("OHDSI/Achilles") # To install latest release (if master branch contains a bug for you) # remotes::install_github("OHDSI/Achilles@*release") # To avoid Java 32 vs 64 issues # remotes::install_github("OHDSI/Achilles", args="--no-multiarch") ``` ## Running Achilles The analyses are run in one SQL session and all intermediate results are written to temp tables before finally being combined into the final results tables. Temp tables are dropped once the package is finished running. See the [DatabaseConnector](https://github.com/OHDSI/DatabaseConnector) package for details on settings the connection details for your database: ```r library(Achilles) connectionDetails <- createConnectionDetails( dbms="redshift", server="server.com", user="secret", password='secret', port="5439") ``` ```r Achilles::achilles( cdmVersion = "5.4", connectionDetails = connectionDetails, cdmDatabaseSchema = "yourCdmSchema", resultsDatabaseSchema = "yourResultsSchema" ) ``` The cdmDatabaseSchema parameter, and resultsDatabaseSchema parameter, are the fully qualified names of the schemas holding the CDM data, and targeted for result writing, respectively. The SQL platforms supported by [DatabaseConnector](https://github.com/OHDSI/DatabaseConnector) and [SqlRender](https://github.com/OHDSI/SqlRender) are the **only** ones supported here in Achilles as `dbms`. ## Developers: How to Add or Modify Analyses Please refer to the [README-developers.md file](README-developers.md). ## License Achilles is licensed under Apache License 2.0
/scratch/gouwar.j/cran-all/cranData/Achilles/vignettes/GettingStarted.Rmd
--- title: "Running Achilles on Your CDM" date: "`r Sys.Date()`" output: pdf_document: number_sections: yes toc: yes html_document: number_sections: yes toc: yes vignette: > %\VignetteIndexEntry{Running Achilles on Your CDM} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} --- ```{r, echo = FALSE, message = FALSE, warning = FALSE} library(Achilles) knitr::opts_chunk$set( cache = FALSE, comment = "#>", error = FALSE, tidy = FALSE) ``` # Introduction In this vignette we cover how to run the Achilles package on your Common Data Model (CDM) database in order to characterize the dataset. The characterizations can help you learn more about your dataset's features and limitations. It is a best practice for all OHDSI sites to run Achilles on their CDM datasets to ensure researchers can evaluate study feasibility and contextualize study results. # General Approach The Achilles package consists of: 1. The **achilles** function runs a set of SQL scripts to characterize the domains and concepts of the CDM. 2. The **createIndices** function creates table indices for the achilles tables, which can help improve query performance. 3. The **getAnalysisDetails** function provides descriptions about the full set of Achilles analyses. 4. The **dropAllScratchTables** function is useful only for multi-threaded mode. It can clear any leftover staging tables. ## SQL Only Mode In most Achilles functions, you can specify `sqlOnly = TRUE` in order to produce the SQL without executing it, which can be useful if you'd like to examine the SQL closely or debug something. The SQL files are stored in the `outputFolder`. ## Logging File and console logging is enabled across most Achilles functions. The status of each step is logged into files in the `outputFolder`. You can review the files in a common text editor. ## Verbose Mode The `verboseMode` parameter can be set to FALSE if you'd like less details about the function execution to appear in the console. Either way, all details are written to the log files. By default, this is set to TRUE. ## Preparation for running Achilles In order to run the package, you will need to determine if you'd like the Achilles tables and staging tables to be stored in schemas that are separate from your CDM's schema (recommended), or within the same schema as the CDM. ### Multi-Threaded vs Single-Threaded As the **achilles** functions can run independently, we have added a multi-threaded mode to allow for more than 1 SQL script to execute at a time. This is particularly useful for massively parallel processing (MPP) platforms such as Amazon Redshift and Microsoft PDW. It may not be beneficial for traditional SQL platforms, so only use the multi-threaded mode if confident it can be useful. Further, while multiple threads can help performance in MPP platforms, there can be diminishing returns as the cluster has a finite number of concurrency slots to handle the queries. A rule of thumb: most likely you should not use more than 10. In the multi-threaded mode, all scripts produce permanent staging tables, whereas in the single-threaded mode, the scripts produce temporary staging tables. In both, the staging tables are merged to produce the final Achilles tables. # Achilles Parameters (Both Modes) The following sub-sections describe the optional parameters in **achilles** that can be configured, regardless of whether you run the function in single- or multi-threaded mode. 
## Staging Table Prefix To keep the staging tables organized, the **achilles** function will use a table prefix of "tmpach" by default, but you can choose a different one using the `tempAchillesPrefix` parameter. This is useful for database platforms like Oracle, which limit the length of table names. ## Source Name The `sourceName` parameter is used to assign the name of the dataset to the Achilles results. If you set this to `NULL`, the **achilles** function will try to obtain the source name from the CDM_SOURCE table. ## Create Table The `createTable` parameter, when set to `TRUE`, drops any existing Achilles results tables and builds new ones. If set to `FALSE`, these tables will persist, and the **achilles** function will just insert new data to them. ## Limiting the Analyses By default, the **achilles** function runs all default analyses detailed in the `getAnalysisDetails` function. However, it may be useful to focus on a subset of analyses rather than running the whole set. This can be accomplished by specifying analysis Ids in the `analysisIds` parameter. ## Cost Analyses By default, the **achilles** function does not run analyses on the COST table(s), as they can be very time-consuming, and are not critical to most OHDSI studies. However, you can choose to run these analyses by setting `runCostAnalysis` to `TRUE`. The cost analyses are conditional on the CDM version. If using CDM v5.0, then the older cost tables are queried. If using any version after 5.0, the unified cost table is queried. ## Small Cell Count To avoid patient identification, you can establish the minimum cell size that should be kept in the Achilles tables. Cells with small counts (less than or equal to the value of the `smallCellCount` parameter) are deleted. By default, this is set to 5. Set to 0 for complete summary without small cell count restrictions. ## Drop Scratch Tables *See the Post-Processing section to read about how to run this step separately* *This parameter is only necessary if running in multi-threaded mode* The `dropScratchTables` parameter, if set to `TRUE`, will drop all staging tables created during the execution of **achilles** in multi-threaded mode. ## Create Indices *See the Post-Processing section to read about how to run this step separately* The `createIndices` parameter, if set to `TRUE`, will result in indices on the Achilles results tables to be created in order to improve query performance. ## Return Value When running **achilles**, the return value, if you assign a variable to the function call, is a list object in which metadata about the execution and all of the SQL scripts executed are attributes. You can also run the function call without assigning a variable to it, so that no values are printed or returned. # Running Achilles: Single-Threaded Mode In single-threaded mode, there is no need to set a `scratchDatabaseSchema`, as temporary tables will be used. ```{r tidy = FALSE, eval = FALSE} connectionDetails <- createConnectionDetails(dbms = "postgresql", server = "localhost/synpuf", user = "cdm_user", password = "cdm_password") achilles(connectionDetails = connectionDetails, cdmDatabaseSchema = "cdm", resultsDatabaseSchema = "results", outputFolder = "output") ``` # Running Achilles: Multi-Threaded Mode In multi-threaded mode, you need to specify `scratchDatabaseSchema` and use > 1 for `numThreads`. 
```{r tidy = FALSE, eval = FALSE} connectionDetails <- createConnectionDetails(dbms = "postgresql", server = "localhost/synpuf", user = "cdm_user", password = "cdm_password") achilles(connectionDetails = connectionDetails, cdmDatabaseSchema = "cdm", resultsDatabaseSchema = "results", scratchDatabaseSchema = "scratch", numThreads = 5, outputFolder = "output") ``` # Post-Processing This section describes the usage of standalone functions for post-processing that can be invoked if you did not use them in the **achilles** function call. ## Creating Indices *Not supported by Amazon Redshift or IBM Netezza; function will skip this step if using those platforms* To improve query performance of the Achilles results tables, run the **createIndices** function. ```{r tidy = FALSE, eval = FALSE} connectionDetails <- createConnectionDetails(dbms = "postgresql", server = "localhost/synpuf", user = "cdm_user", password = "cdm_password") createIndices(connectionDetails = connectionDetails, resultsDatabaseSchema = "results", outputFolder = "output") ``` ## Dropping All Staging Tables (Multi-threaded only) If the **achilles** execution has errors, or if you did not enable this step in the call to these functions, use the `dropAllScratchTables` function. The `tableTypes` parameter can be used to specify which batch of staging tables to drop ("achilles"). ```{r tidy = FALSE, eval = FALSE} connectionDetails <- createConnectionDetails(dbms = "postgresql", server = "localhost/synpuf", user = "cdm_user", password = "cdm_password") dropAllScratchTables(connectionDetails = connectionDetails, scratchDatabaseSchema = "scratch", numThreads = 5) ``` # Acknowledgments Considerable work has been dedicated to provide the `Achilles` package. ```{r tidy = TRUE, eval = TRUE} citation("Achilles") ```
/scratch/gouwar.j/cran-all/cranData/Achilles/vignettes/RunningAchilles.Rmd
#' AcousticNDLCodeR-Package #' # #' @author Denis Arnold #' @docType package #' @name AcousticNDLCodeR #' @description Package to make acoustic cues to use with ndl or ndl2. #' @details #' The package's main function is \code{\link{makeCues}}. \code{\link{readTextGridFast}}, #' \code{\link{readTextGridRobust}}, \code{\link{readESPSAnnotation}} and #' \code{\link{readWavesurfer}} are helper functions that read the corresponding annotation #' files and return a data.frame. \code{\link{CorpusCoder}} codes a whole corpus given a vector #' with the path to and names of wave files and a vector for the annotation files. #' \code{\link{word_classification_data}} provides data from Arnold et al. 2017 #' https://doi.org/10.1371/journal.pone.0174623 #' #' @references Denis Arnold, Fabian Tomaschek, Konstantin Sering, Florence Lopez, and R. Harald Baayen (2017). Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit. PLoS ONE 12(4):e0174623. https://doi.org/10.1371/journal.pone.0174623 #' @examples #' \dontrun{ #' # assuming the corpus contains wave files and praat TextGrids #' #' setwd("~/Data/MyCorpus") # assuming everything is in one place #' #' # assuming you have one wav for each annotation #' #' Waves=list.files(pattern="*.wav",recursive=T) #' Annotations=list.files(pattern="*.TextGrid",recursive=T) # see above #' #' # Let's assume the annotation is in UTF-8 and you want everything from a tier called words #' # Let's assume that you want to dismiss everything in <|> #' # Let's assume that you have 4 cores available #' # Let's assume that you want the default settings for the parameters #' #' Data=CorpusCoder(Waves, Annotations, AnnotationType = "TextGrid", #' TierName = "words", Dismiss = "<|>", Encoding = "UTF-8", Fast = F, Cores = 4, #' IntensitySteps = 5, Smooth = 800) #' #' } "_PACKAGE"
/scratch/gouwar.j/cran-all/cranData/AcousticNDLCodeR/R/AcousticNDLCodeR_package.R
#' Codes a corpus for use with NDL with a vector of wave file names and a vector of TextGrid names provided #' @param Waves Vector with names (and full path to if not in wd) of the wave files. #' @param Annotations Vector with names (and full path to if not in wd) of the TextGrid files. #' @param AnnotationType Type of annotation files. Supported formats are praat TextGrids (set to "TextGrid") and ESPS/Wavesurfer (set to "ESPS") files. #' @param TierName Name of the tier in the TextGrid to be used. #' @param Dismiss Regular expression for Outcomes that should be removed. Uses grep. #' E.g. "<|>" would remove <noise>,<xxx>, etc. Default is NULL. #' @param Encoding Encoding of the annotation file. It is assumed that all annotation files have the same encoding. #' @param Fast Switches between a fast and a robust TextGrid parser. #' For Fast no "\\n" or "\\t" may be in the transcription. Default is FALSE. #' @param Cores Number of cores that the function may use. Default is 1. #' @param IntensitySteps Number of steps that the intensity gets compressed to. Default is 5. #' @param Smooth A parameter for using the kernel smooth function provided by the package zoo. #' @return A data.frame with $Cues and $Outcomes for use with ndl or ndl2. #' @examples #' \dontrun{ #' # assuming the corpus contains wave files and praat TextGrids #' #' setwd("~/Data/MyCorpus") # assuming everything is in one place #' #' # assuming you have one wav for each annotation #' #' Waves=list.files(pattern="*.wav",recursive=T) #' Annotations=list.files(pattern="*.TextGrid",recursive=T) # see above #' #' # Let's assume the annotation is in UTF-8 and you want everything from a tier called words #' # Let's assume that you want to dismiss everything in <|> #' # Let's assume that you have 4 cores available #' # Let's assume that you want the default settings for the parameters #' #' Data=CorpusCoder(Waves, Annotations, AnnotationType = "TextGrid", #' TierName = "words", Dismiss = "<|>", Encoding = "UTF-8", Fast = F, Cores = 4, #' IntensitySteps = 5, Smooth = 800) #' #' } #' @import tuneR #' @import seewave #' @import parallel #' @export #' @author Denis Arnold CorpusCoder=function(Waves,Annotations,AnnotationType=c("TextGrid","ESPS"),TierName=NULL,Dismiss=NULL,Encoding,Fast=F,Cores=1,IntensitySteps,Smooth){ WaveHandling=function(val,IntensitySteps,Smooth){ start=Part$start[val] end=Part$end[val] Cues=makeCues(Wave[(start*16000):(end*16000)],IntensitySteps,Smooth) return(Cues) } WholeData=data.frame(Outcomes=character(), start=numeric(), end=numeric(), file=character(), Cues=character(), stringsAsFactors=F) if(length(Waves)!=length(Annotations)){ stop("Length of lists does not match!") } for(i in 1:length(Waves)){ if(AnnotationType=="ESPS"){ Part=readESPSAnnotation(Annotations[i],Encoding) }else{ if(Fast){ TG=readTextGridFast(Annotations[i],Encoding) }else{ TG=readTextGridRobust(Annotations[i],Encoding) } if(!(TierName%in%TG[[1]])){ stop(paste0("TierName ",TierName," is not present in TextGrid:", Annotations[i])) } Part=TG[[which(TG[[1]]==TierName)+1]] if(length(Part$Outcomes)<2) next } Part$File=Waves[i] Part$Prev=c("<P>",Part$Outcomes[1:(length(Part$Outcomes)-1)]) if(!is.null(Dismiss)){ if(length(grep(Dismiss,Part$Outcomes))>0){ Part=Part[-grep(Dismiss,Part$Outcomes),] } } if(length(Part$Outcomes)==0) next Wave=readWave(Waves[i]) if(Wave@samp.rate!=16000){ if(Wave@samp.rate<16000){ warning("Sampling rate below 16 kHz!") } Wave=resamp(Wave,f=Wave@samp.rate,g=16000,output="Wave") }
X=mclapply(1:dim(Part)[1],WaveHandling,IntensitySteps,Smooth,mc.cores=Cores) Part$Cues=unlist(X) WholeData=rbind(WholeData,Part) } return(WholeData) }
/scratch/gouwar.j/cran-all/cranData/AcousticNDLCodeR/R/CorpusCoder.R
#' Creates a string with the cues for each frequency band and segment separated by "_" #' @param WAVE A Wave object (see \link{tuneR}). Currently it is implemented for use with a 16 kHz sampling rate. #' @param IntensitySteps Number of steps that the intensity gets compressed to. Default is 5. #' @param Smooth A parameter for using the kernel smooth function provided by the package zoo. #' @return A string containing the coding. Each band and part is separated by "_" #' @import zoo #' @import tuneR #' @import seewave #' @examples \dontrun{ #' #' library(tuneR) #' library(seewave) #' Wave=readWave("MyWaveFile.wav") #' if(Wave@samp.rate!=16000){ #' Wave=resamp(Wave,f=Wave@samp.rate,g=16000,output="Wave") #' } #' Cues=makeCues(Wave,IntensitySteps=5,Smooth=800) #' #' } #' @export #' @author Denis Arnold makeCues<-function(WAVE,IntensitySteps=5,Smooth=800){ CODED="tooShort" if(length(WAVE)>800){ ps=powspec(WAVE@left,sr=WAVE@samp.rate,wintime=0.005,steptime=0.005) SPEC=log(audspec(ps,sr=WAVE@samp.rate,fbtype="mel")$aspec+1) #SPEC=ceiling((SPEC-min(SPEC))/sd(SPEC)) SPEC=ceiling((SPEC-min(SPEC))*(IntensitySteps/abs(range(SPEC)[2]-range(SPEC)[1]))) BOUNDARY=getBoundary(WAVE,Smooth)/80 CODED="" if(length(BOUNDARY)>0){ PARTS=data.frame(start=c(1,ceiling(BOUNDARY)),stop=c(ceiling(BOUNDARY),dim(SPEC)[2])) for(i in 1:dim(PARTS)[1]){CODED=paste(CODED,CODE(SPEC[,(PARTS$start[i]:PARTS$stop[i])],i),sep="_")} } else{ CODED=CODE(SPEC,1) } } return(gsub("^_","",CODED)) } #' Helper function for makeCues that splits the signal based on the envelope of the signal #' @param Wave A Wave object (see tuneR) #' @param smooth A parameter for using the kernel smooth function provided by the package zoo. #' @return A vector with the sample numbers of the boundaries. #' @importFrom stats kernel #' @examples #' \dontrun{ #' library(tuneR) #' Wave=readWave("MyWaveFile.wav") #' Boundaries=getBoundary(Wave,800) #' } #' @export #' @author Denis Arnold getBoundary<-function(Wave,smooth=800){ if(length(Wave)>(1000+(2*smooth))){ ENV=env(Wave,ksmooth=kernel("daniell",smooth),plot=F) ENVzoo=as.zoo(ENV) MIN=rollapply(ENVzoo,1000, function(x) which.min(x)==500) INDEX=index(MIN[coredata(MIN)]) return(ceiling(INDEX*(length(Wave)/length(ENV))))} } #' Helper function for makeCues #' @param SPEC Spectrum representation made in makeCues() #' @param num Number of the part #' @return A string containing the coding. Each band is separated by "_". #' @export #' @importFrom stats median #' @author Denis Arnold CODE<-function(SPEC,num){ LINES=dim(SPEC)[1] END=dim(SPEC)[2] OUT=vector(mode="character",length=LINES) for(i in 1:LINES){ OUT[i]=paste("b",i,"start",SPEC[i,1],"median",median(SPEC[i,]),"min",min(SPEC[i,]),"max",max(SPEC[i,]),"end",SPEC[i,END],"part",num,sep="") } return(paste(OUT,collapse="_")) }
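The following standalone sketch shows how the cue string produced by makeCues can be inspected. The synthetic tone is only a stand-in for a real 16 kHz recording, and the parameter values simply mirror the defaults documented above:

```r
library(tuneR)
library(AcousticNDLCodeR)

# A synthetic one-second 440 Hz tone sampled at 16 kHz, standing in for real speech.
Wave <- sine(440, duration = 16000, samp.rate = 16000, xunit = "samples")

Cues <- makeCues(Wave, IntensitySteps = 5, Smooth = 800)

# The coding is a single "_"-separated string; split it to look at individual cues.
head(strsplit(Cues, "_")[[1]])
```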
/scratch/gouwar.j/cran-all/cranData/AcousticNDLCodeR/R/makeCues.R
#' Reads a TextGrid made with praat and returns a list with a vector of all tier names and a data.frame for each tier. #' @param File Name (with full path, if not in wd) of the TextGrid #' @param Encoding Encoding of the TextGrid. Typically encodings are "ACSII","UTF-8" or "UTF-16" #' @details This method has sometimes problems with certain sequences like "\\n" in the annotation file. #' If the method fails, try readTextGridRobust() #' @return A list containing a vectors with the names and data.frames for each tier in the TextGrid. #' @examples #' \dontrun{ #' # Assume that NameOfTextGrid is encoded in "UTF-8" #' Data=readTextGridFast("NameOfTextGrid","UTF-8") #' #' } #' @export #' @author Denis Arnold readTextGridFast<-function(File,Encoding){ File=file(File,encoding=Encoding) Data=readLines(File,-1) close(File) names=gsub("^[[:space:]]+name\\ =\\ |\"|[\\ ]+","",Data[grep("name\ =",Data)]) numberOfTiers=length(names) TierBorder=c(grep("IntervalTier|TextTier",Data),length(Data)) TierType=gsub("[[:space:]]+|class|\\=|\"|-","",Data[grep("IntervalTier|TextTier",Data)]) for(i in 1:numberOfTiers){ if(TierType[i]=="IntervalTier"){ Part=Data[TierBorder[i]:TierBorder[i+1]] Part=Part[-(1:5)] Part=gsub("^[[:space:]]+((text)|(xmin)|(xmax))\\ =\\ |\"|[\\ ]+$","",Part) Part=gsub("[\\ ]+$","",Part) if(length(grep("class\ =\ (IntervalTier)|(TextTier)",Part))>0){ Part=Part[-grep("class\ =\ (IntervalTier)|(TextTier)",Part)] } if(length(grep("item\ \\[[0-9]+\\]",Part))>0){ Part=Part[-grep("item\ \\[[0-9]+\\]",Part)] } PartDataFrame=data.frame(Outcomes=Part[seq(4,length(Part),4)], start=as.numeric(Part[seq(2,length(Part),4)]), end=as.numeric(Part[seq(3,length(Part),4)]), stringsAsFactors=F) } else{ Part=Data[TierBorder[i]:TierBorder[i+1]] Part=Part[-(1:5)] Part=gsub("^[[:space:]]+((mark)|(number))\\ =\\ |\"|[\\ ]+$","",Part) Part=gsub("[\\ ]+$","",Part) if(length(grep("class\ =\ (IntervalTier)|(TextTier)",Part))>0){ Part=Part[-grep("class\ =\ (IntervalTier)|(TextTier)",Part)] } if(length(grep("item\ \\[[0-9]+\\]",Part))>0){ Part=Part[-grep("item\ \\[[0-9]+\\]",Part)] } PartDataFrame=data.frame( Outcomes=Part[seq(3,length(Part),3)], point=as.numeric(Part[seq(2,length(Part),3)]), stringsAsFactors=F) } assign(names[i],PartDataFrame) } NewData=vector("list",length(names)+1) NewData[[1]]=names for(i in 2:length(NewData)){ NewData[[i]]=get(names[i-1]) } return(NewData) } #' Reads a TextGrid made with praat and returns a list with a vector of all tier names and a data.frame for each tier #' @param File Name (with full path, if not in wd) of the TextGrid #' @param Encoding Encoding of the TextGrid. Typically encodings are "ACSII","UTF-8" or "UTF-16" #' @importFrom utils read.csv #' @return A list containing a vectors with the names and data.frames for each tier in the TextGrid. 
#' @examples #' \dontrun{ #' # Assume that NameOfTextGrid is encoded in "UTF-8" #' Data=readTextGridRobust("NameOfTextGrid","UTF-8") #' #' } #' @export #' @author Denis Arnold readTextGridRobust<-function(File,Encoding){ Data=read.csv(file(File,encoding=Encoding),stringsAsFactors=F,header=F)$V1 names=gsub("^[[:space:]]+name\\ =\\ |\"|[\\ ]+","",Data[grep("name\ =",Data)]) numberOfTiers=length(names) TierBorder=c(grep("IntervalTier|TextTier",Data),length(Data)+1) TierType=gsub("[[:space:]]+|class|\\=|\"|-","",Data[grep("IntervalTier|TextTier",Data)]) for(i in 1:numberOfTiers){ if(TierType[i]=="IntervalTier"){ Part=Data[TierBorder[i]:(TierBorder[i+1]-1)] Part=Part[-(1:5)] Part=gsub("^[[:space:]]+((text)|(xmin)|(xmax))\\ =\\ |\"|[\\ ]+$","",Part) Part=gsub("[\\ ]+$","",Part) PartDataFrame=data.frame(Outcomes=character(),start=numeric(),end=numeric(),stringsAsFactors=F) for(j in 1:(length(Part)/4)){ PartDataFrame=rbind(PartDataFrame, data.frame(Outcomes=Part[(j*4)], start=as.numeric(Part[(j*4)-2]), end=as.numeric(Part[(j*4)-1]), stringsAsFactors=F), stringsAsFactors=F) } assign(names[i],PartDataFrame) } else{ Part=Data[TierBorder[i]:(TierBorder[i+1]-1)] Part=Part[-(1:5)] Part=gsub("^[[:space:]]+((mark)|(number))\\ =\\ |\"|[\\ ]+$","",Part) Part=gsub("[\\ ]+$","",Part) PartDataFrame=data.frame(Outcomes="",point=0,stringsAsFactors=F) for(j in 1:(length(Part)/3)){ PartDataFrame=rbind(PartDataFrame, data.frame(Outcomes=Part[(j*3)], point=as.numeric(Part[(j*3)-1]),stringsAsFactors=F), stringsAsFactors=F) } assign(names[i],PartDataFrame) } } NewData=vector("list",length(names)+1) NewData[[1]]=names for(i in 2:length(NewData)){ NewData[[i]]=get(names[i-1]) } return(NewData) } #' Reads a ESPS/Old Wavesurfer style annotation file and returns a data.frame with times and lables #' @param File Name (with full path, if not in wd) of the annotation file #' @param Encoding Encoding of the annotation file. Typically encodings are "ACSII","UTF-8" or "UTF-16" #' @return A data.frame with $Output for the lable $start and $end time of the lable. #' @examples #' \dontrun{ #' # Assume that NameOfAnnotation is encoded in "UTF-8" #' Data=readESPSAnnotation("NameOfTextGrid","UTF-8") #' } #' @export #' @author Denis Arnold readESPSAnnotation<-function(File,Encoding){ File=file(File,encoding=Encoding) Data=readLines(File,-1) close(File) r=grep("^#",Data) if(r>0){ Data=Data[-(1:r)] } Data=unlist(strsplit(Data,"\ [0-9]+\ ")) End=as.numeric(Data[seq(1,length(Data),2)]) DataFrame=data.frame( Outcomes=Data[seq(2,length(Data),2)], start=as.numeric(c(0,End[-length(End)])), end=as.numeric(End), stringsAsFactors=F) return(DataFrame) } #' Reads a New Wavesurfer style annotation file and returns a data.frame with times and lables #' @param File Name (with full path, if not in wd) of the annotation file #' @param Encoding Encoding of the annotation file. Typically encodings are "ACSII","UTF-8" or "UTF-16" #' @return A data.frame with $Output for the lable $start and $end time of the lable. #' @examples #' \dontrun{ #' # Assume that NameOfAnnotation is encoded in "UTF-8" #' Data=readWavesurfer("NameOfTextGrid","UTF-8") #' } #' @export #' @author Denis Arnold readWavesurfer<-function(File,Encoding){ File=file(File,encoding=Encoding) Data=readLines(File,-1) close(File) Data=gsub("^[\ ]+","",Data) Data=unlist(strsplit(Data,"\ ")) End=as.numeric(Data[seq(1,length(Data),3)]) DataFrame=data.frame( Outcomes=Data[seq(3,length(Data),3)], start=as.numeric(c(0,End[-length(End)])), end=as.numeric(End), stringsAsFactors=F) return(DataFrame) }
/scratch/gouwar.j/cran-all/cranData/AcousticNDLCodeR/R/readAnnotation.R
#' Data of PLoS ONE paper #' #' Dataset of subject responses and modeling data for an auditory word identification task. #' #' @name word_classification_data #' @docType data #' @usage data(word_classification_data) #' #' #' @format Data from the four experiments and model estimates #' \describe{ #' \item{\code{ExperimentNumber}}{Experiment identifier} #' \item{\code{PresentationMethod}}{Method of presentation in the experiment: loudspeaker, headphones} #' \item{\code{Trial}}{Trial number in the experimental list} #' \item{\code{TrialScaled}}{scaled Trial} #' \item{\code{Subject}}{anonymized subject identifier} #' \item{\code{Item}}{word identifier - German umlauts and special characters coded as 'ae', 'oe', 'ue' and 'ss'} #' \item{\code{Activation}}{NDL activation} #' \item{\code{LogActivation}}{log(activation+epsilon)} #' \item{\code{L1norm}}{L1-norm (lexicality)} #' \item{\code{LogL1norm}}{log of L1-norm} #' \item{\code{RecognitionDecision}}{recognition decision (yes/no)} #' \item{\code{RecognitionRT}}{latency for recognition decision} #' \item{\code{LogRecognitionRT}}{log recognition RT} #' \item{\code{DictationAccuracy}}{dictation accuracy (TRUE: correct word reported, FALSE otherwise)} #' \item{\code{DictationRT}}{response latency to typing onset} #'} #' #' @references #' #' Denis Arnold, Fabian Tomaschek, Konstantin Sering, Florence Lopez, and R. Harald Baayen (2017). #' Words from spontaneous conversational speech can be recognized with human-like accuracy by #' an error-driven learning algorithm that discriminates between meanings straight from smart #' acoustic features, bypassing the phoneme as recognition unit. PLoS ONE 12(4):e0174623. #' https://doi.org/10.1371/journal.pone.0174623 #' @keywords data NULL
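A minimal sketch of working with this dataset, using only the columns documented above; the simple linear model is purely illustrative and is not the analysis reported in the paper:

```r
library(AcousticNDLCodeR)
data(word_classification_data)

# Documented measures of the NDL model and the behavioural responses.
summary(word_classification_data[, c("LogActivation", "LogL1norm", "RecognitionRT")])

# Illustrative only: relate log recognition latency to the NDL-based predictors.
fit <- lm(LogRecognitionRT ~ LogActivation + LogL1norm + TrialScaled,
          data = word_classification_data)
summary(fit)
```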
/scratch/gouwar.j/cran-all/cranData/AcousticNDLCodeR/R/word_classification_data.R
plot.AcrossTic <- function (x, X.values, y, grp.cols = c(2, 4), grp.pch = c(16, 17), ...) { # # plot an AcrossTic object. This is intended for a two-column X and # a two-level y. # # If X is supplied, use it. Otherwise use the "X" component of the # AcrossTic object. If neither is supplied, that's an error. # if (missing(X.values)) { if (!any(names(x) == "X")) stop("Can't plot an AcrossTic object with no X", call. = FALSE) X <- x$X } else { X <- X.values } # # If y is supplied, use it. If not, use the one in x, if there is one, # and if there isn't, that's an error. # if (missing (y)) { y <- x$y if (is.null(y)) stop("This AcrossTic object has no 'y' entry", call. = FALSE) } # # This is for two-level y's only. # if (length(unique(y)) != 2) stop("This function only plots 'y' entries with two values", call. = FALSE) y <- c(0, 1)[as.numeric(as.factor(y))] if (ncol(X) == 1) stop("One-column X object?", call. = FALSE) if (ncol(X) > 2) warning("Plotting two columns of a >2-col X object") # # Plot! # M <- x$matches N <- x$nrow.X plot(X[, 1], X[, 2], col = grp.cols[y + 1], pch = grp.pch[y + 1], xlim = c(min(X[, 1]) - 0.5, max(X[, 1]) + 0.5), ylim = c(min(X[, 2]) - 0.5, max(X[, 2]) + 0.2), ...) legend("topleft", c("Group 1", "Group 2", "Within-group pairing", "Across-group pairing"), lty = c(0, 0, 2, 1), col = c(grp.cols, 1, 1), pch = c(grp.pch, NA, NA)) # # Draw lines to connect matched pairs. # x.from <- X[M[, 1], 1] y.from <- X[M[, 1], 2] x.to <- X[M[, 2], 1] y.to <- X[M[, 2], 2] cross.match <- y[M[, 1]] != y[M[, 2]] solid.inds <- which(cross.match) dashed.inds <- which(!cross.match) segments(x.from[solid.inds], y.from[solid.inds], x.to[solid.inds], y.to[solid.inds], lwd = 2) segments(x.from[dashed.inds], y.from[dashed.inds], x.to[dashed.inds], y.to[dashed.inds], lty = 2) }
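A minimal usage sketch with simulated two-group, two-dimensional data (relying on the package defaults; the group separation is arbitrary): rRegMatch computes the degree-r matching and this plot method draws it.

```r
library(AcrossTic)
set.seed(1)

# Two groups of 10 points each in two dimensions.
X <- rbind(matrix(rnorm(20), ncol = 2),
           matrix(rnorm(20, mean = 1.5), ncol = 2))
y <- rep(c("A", "B"), each = 10)

# Degree-1 regular matching (a perfect matching); keep X so it can be plotted.
m <- rRegMatch(X, r = 1, y = y, keep.X = TRUE)

# Solid lines join across-group pairs, dashed lines join within-group pairs.
plot(m)
```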
/scratch/gouwar.j/cran-all/cranData/AcrossTic/R/plot.AcrossTic.R
print.AcrossTic <- function (x, ...) { cat ("AcrossTic object\n") if (x$X.supplied) { cat (paste0 ("Data is ", x$nrow.X, " x ", x$ncol.X, ", r = ", x$r, "\n")) } else { cat (paste0 ("Dist of size ", x$nrow.X, ", r = ", x$r, "\n")) } if (any (names (x) == "cross.count")) cat (paste0 ("Solution ", signif (x$total.dist, 4), ", cross-count ", signif (x$cross.count, 4), "\n")) else cat (paste0 ("Solution ", signif (x$total.dist, 4), "\n")) # "computed in elapsed time", x$time.required["elapsed"], "\n") }
/scratch/gouwar.j/cran-all/cranData/AcrossTic/R/print.AcrossTic.R
print.AcrossTicPtest <- function (x, ...) { cat ("Observed cross-match statistic", x$observed, "\n") exceed.pct <- sprintf ("%2.1f", 100 * x$p.value) cat (paste0 ("This is >= the simulated ones ", exceed.pct, "% of the time (p = ", sprintf ("%0.3f", x$p.value), ")\n")) }
/scratch/gouwar.j/cran-all/cranData/AcrossTic/R/print.AcrossTicPtest.R
ptest <- function (acobj, y, edge.weights, n = 1000) { # # Normally acobj will be a list of class AcrossTic. But we will also # accept a (two-column) matrix of matches. In that case, we will need # edge weights. They default to all 1's, but if there has been partial # matching -- which we detect by a non-constant number of matchers -- # they're required. # if (class (acobj) != "AcrossTic" && class (acobj) != "matrix") stop ("First argument must be a matrix or an AcrossTic object") if (class (acobj) == "matrix") { if (ncol (acobj) != 2) stop ("Matrix argument must have two columns") mat <- acobj if (missing (edge.weights)) { if (length (table (table (mat))) != 1) stop ("Edge weights are required for partial matching") edge.weights <- rep (1, nrow (mat)) } acobj <- list (matches = mat, nrow.X = max (mat), edge.weights = edge.weights) } # # # If y is missing and acobj has no y either, fail. If y is missing, use the # y and the cross.count statistic from acobj. # if (missing (y)) { if (is.null (acobj$y)) stop ("Can't run without 'y' explicit or in AcrossTic object", call.=FALSE) y <- acobj$y observed.cross.count <- acobj$cross.count } else { # # If y is supplied, use it (unless it's the wrong length). Then compute # the cross.count statistic from this y (even if acobj already had a y). # if (length (y) != acobj$nrow.X) stop (paste ("Y has length", length (y), ", should be", acobj$nrow.X)) observed.cross.count <- sum (acobj$edge.weights[ y[acobj$matches[,1]] != y[acobj$matches[,2]]]) } if (!any (names (acobj) == "matches")) stop ("Input object didn't have matches.") result <- numeric (n) for (i in 1:n) { y.new <- sample (y) cross <- y.new[acobj$matches[,1]] != y.new[acobj$matches[,2]] result[i] <- sum (acobj$edge.weights[cross]) } out <- list (sims = result) class (out) <- "AcrossTicPtest" out$observed <- observed.cross.count out$p.value <- mean (out$sims <= out$observed) return (out) }
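A minimal sketch of the permutation test on simulated data (so no real group difference is expected); the matching is computed first and ptest then permutes the group labels:

```r
library(AcrossTic)
set.seed(2)

X <- matrix(rnorm(60), ncol = 2)
y <- rep(c("A", "B"), each = 15)

# 3-regular matching with group labels attached.
m <- rRegMatch(X, r = 3, y = y)

# Permute y to get the null distribution of the cross-match count;
# small p-values suggest the two groups differ in distribution.
pt <- ptest(m, n = 1000)
pt
```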
/scratch/gouwar.j/cran-all/cranData/AcrossTic/R/ptest.R
rRegMatch <- function(X, r, y = NULL, dister = "daisy", dist.args = list(), keep.X = nrow (X) < 100, keep.D = (dister == "treeClust.dist"), relax = (N >= 100), thresh = 1e-6) { # # rRegMatch 2.1: attempted efficiency gains; lpSolve # # Arguments: X, matrix of data or object inheriting from "dist" # r: integer, degree # y: factor, group assignment indicator # dister: distance computation function (default, daisy()) # dist.args: optional list of arguments to dister # keep.X: Should we keep data X in the output object? # keep.D: Should we keep distances D in the output object? # relax: Solve relaxed version of problem? Default FALSE unless N > 100 # thresh: Elements of solution > this are declared non-zero # # 1. Set up interpoint distance. If X is already a thing of class "dist," # just use that. Otherwise, use dister and dist.args to compute distances. # if (inherits (X, "dist")) { X.supplied <- FALSE D <- X keep.X <- FALSE N <- attributes(D)$Size } else { X.supplied <- TRUE dist.args$x <- X N <- nrow (X) D <- do.call (dister, dist.args) } # Check that r's value makes sense. It needs to be an integer between # 1 and N-1. If N is odd, it will be increased by one temporarily (below), # but that doesn't concern us here. # if ((r != as.integer (r)) || r < 1 || r >= N) { stop ("r must be an integer between 1 and (size - 1)") } # # N is the # of rows or size of D. It must be bigger than the degree r. # Moreover, if it's not even we need to take special steps. Add a dummy # entry whose distance to all other entries is zero. The easy way is to # turn D back into a matrix, add a row of zeros at the bottom and a column # of zeros at the right, then turn it back into a dist. This feels expensive. # N.WAS.ODD <- FALSE; N.WAS.EVEN <- TRUE if (N %% 2 != 0) { attr.save <- attributes (D) D <- as.dist (cbind (rbind (as.matrix (D), 0), 0)) # inefficient? attr.save$Size <- attr.save$Size + 1 if (any (names (attr.save) == "Labels")) attr.save$Labels <- c(attr.save$Labels, attr.save$Size) attributes (D) <- attr.save N.WAS.EVEN <- FALSE; N.WAS.ODD <- TRUE N <- attr (D, "Size") } # # Build a two-column matrix with all combinations (i, j) of two values from # 1, ..., N with i < j. const.txt is a text version with entries like "5.10". # So nrow (const.rows) = length (const.txt) = choose (N, 2). # const.rows <- cbind (rep (1:(N-1), (N-1):1), unlist (sapply (2:N, seq, to = N))) const.txt <- paste0 (const.rows[,1], ".", const.rows[,2]) # # This constructs a "dense" matrix suitable for lpSolve, but the # first two columns will also be useful in the call to Rsymphony. # This thing has first column = "constraint's row," second column is # "variable associated with this constraint" and the third is the # value -- which is 1. # const.mat.dense <- cbind (rep (1:N, each = N-1), c(sapply (1:N, function (x) match (const.txt[const.rows[,1] == x | const.rows[,2] == x], const.txt))), rep (1, N * (N-1))) # # Set up constraint directions, right-hand sides, and all.bin # const.dir <- rep ("=", N) const.rhs <- rep (r, N) if (relax) all.bin <- FALSE else all.bin <- TRUE # # For lpSolve, upper bounds need to be entered into the constraint # matrix. There are N-choose 2 constraints so far; we need to add that # many more. The ith new one says that variable i <= 1. # The first column is just the constraint number. 
# nc2 <- N * (N-1) / 2 uppers <- cbind ((N+1):(N + nc2), 1:nc2, 1) const.mat.dense <- rbind (const.mat.dense, uppers) const.dir <- c(const.dir, rep ("<", nc2)) const.rhs <- c(const.rhs, rep (1, nc2)) # # Call lpSolve solver # this.took <- system.time ( RsOut <- lp ("min", objective.in = c(D), dense.const = const.mat.dense, const.dir = const.dir, const.rhs = const.rhs, all.bin = all.bin) ) # # These are the matches, in text form. # assigns <- const.txt[RsOut$solution > thresh] # # Convert that to a two-column matrix of pairs. # S <- matrix (as.numeric (unlist (strsplit (assigns, "\\."))), ncol = 2, byrow=2) # # Build result. If this started with odd N, omit any mention of the # N-th observation, which is the dummy one. # if (N.WAS.ODD) { S <- S[S[,1] != N & S[,2] != N,] N <- N - 1 D <- as.dist ((as.matrix (D)[1:N,1:N])) } result <- list(matches = S, total.dist = RsOut$objval, status = RsOut$status, time.required = this.took) # # "Bug": if r = N-1, our problem is degenerate. The solution is correct, # but the status is TM_UNBOUNDED. Let's make it TM_OPTIMAL_SOLUTION_FOUND. # if (r == N-1) result$status <- c("TM_OPTIMAL_SOLUTION_FOUND" = 0) result$call <- match.call () result$r <- r result$dister <- dister result$dist.args <- dist.args result$X.supplied <- X.supplied result$relax <- relax if (X.supplied && keep.X) result$X <- X if (keep.D) result$D <- D if (!missing (y) && length (y) != N) warning ("Length of y was not equal to N") result$y <- y # Save the costs. Should this be a third column of S? result$edge.weights <- RsOut$solution[RsOut$solution > thresh] # # If y was supplied, compute two (for now) cross-count statistics. # One is the simple count of the number of cross-group matchings -- # only we have to account for weights if relax=TRUE. In that case, # extract only the weights > thresh. If that vector isn't the same # length as nrow(S), we have trouble. The second is the sum of the # distances associated with cross-group matching. That weird formula # in S.num comes from help(dist). # if (!is.null (y)) { S.num <- N * (S[,1] - 1) - (S[,1] * (S[,1] - 1)/2) + S[,2] - 1 result$cross.sum <- sum (D[S.num][y[S[,1]] != y[S[,2]]]) result$cross.count <- sum (result$edge.weights[y[S[,1]] != y[S[,2]]]) } if (X.supplied) { result$nrow.X <- nrow (X) result$ncol.X <- ncol (X) } else { result$nrow.X <- N } class (result) <- "AcrossTic" return (result) }
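A small sketch of the two input forms handled above: a raw data matrix (distances computed internally with the default dister) versus a precomputed "dist" object, here with the LP relaxation switched on explicitly:

```r
library(AcrossTic)
set.seed(3)
X <- matrix(rnorm(40), ncol = 2)

# From raw data: interpoint distances are computed internally.
m1 <- rRegMatch(X, r = 2)

# From a precomputed distance object; relax = TRUE solves the LP relaxation,
# so edge.weights may be fractional rather than strictly 0/1.
D <- dist(X)
m2 <- rRegMatch(D, r = 2, relax = TRUE)

m1$total.dist
m2$total.dist
```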
/scratch/gouwar.j/cran-all/cranData/AcrossTic/R/rRegMatch.R
#' @title Cosinor Model for Circadian Rhythmicity #' @description A parametric approach to study circadian rhythmicity assuming cosinor shape. #' #' #' @param x \code{vector} vector of dimension n*1440 which represents n days of 1440-minute activity data #' @param window The calculation needs the window size of the data. E.g., window = 1 means each epoch is a one-minute window. #' @param export_ts A Boolean to indicate whether time series should be exported #' #' #' @importFrom cosinor cosinor.lm #' @importFrom cosinor2 correct.acrophase #' #' @return A list with elements #' \item{mes}{MESOR, which is short for midline estimating statistic of rhythm, a rhythm-adjusted mean. This represents mean activity level.} #' \item{amp}{amplitude, a measure of half the extent of predictable variation within a cycle. This represents the highest activity one can achieve.} #' \item{acr}{acrophase, a measure of the time of the overall high values recurring in each cycle. Here it has a unit of radians. This represents the time to reach the peak.} #' \item{acrotime}{acrophase in the unit of time (hours)} #' \item{ndays}{Number of days modeled} #' \item{cosinor_ts}{Exported data frame with time, time over days, original time series, fitted time series using the cosinor model} #' #' #' @references Cornelissen, G. Cosinor-based rhythmometry. Theor Biol Med Model 11, 16 (2014). https://doi.org/10.1186/1742-4682-11-16 #' @export #' @examples #' count1 = c(t(example_activity_data$count[c(1:2),-c(1,2)])) #' cos_coeff = ActCosinor(x = count1, window = 1, export_ts = TRUE) ActCosinor = function( x, window = 1, export_ts = FALSE ){ if(1440 %% window != 0){ stop("Only use window size that is an integer factor of 1440") } if(length(x) %% (1440/window) != 0){ stop("Window size and length of input vector doesn't match. Only use window size that is an integer factor of 1440") } dim = 1440/window n.days = length(x)/dim tmp.dat = data.frame(time = rep(1:dim, n.days) / (60/window), Y = x) fit = cosinor.lm(Y ~ time(time) + 1, data = tmp.dat, period = 24) mesor = fit$coefficients[1] amp = fit$coefficients[2] # acr = fit$coefficients[3] acr = correct.acrophase(fit) acrotime = (-1) * acr * 24/(2 * pi) names(mesor) = names(amp) = names(acr) = names(acrotime) = NULL if (export_ts == TRUE) { # Put time series in data.frame and add it to output: fittedY = fitted(fit$fit) # fitted values from the cosinor model original = tmp.dat$Y # original data time = tmp.dat$time # time time2 = time drops = which(diff(time) < 0) + 1 for (k in drops) { time2[k:length(time)] = time2[k:length(time)] + 24 } time_across_days = time2 cosinor_ts = as.data.frame(cbind(time, time_across_days, original, fittedY)) # data.frame with all signals } else { cosinor_ts = NULL } params = list("mes" = mesor, "amp" = amp, "acr" = acr, "acrotime" = acrotime, "ndays" = n.days) ret = list("params" = params, "cosinor_ts" = cosinor_ts) return(ret) }
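Building on the example in the help text above, this sketch extracts the exported time series and overlays the fitted cosine curve on the raw counts (column names follow the cosinor_ts data frame constructed in the function):

```r
library(ActCR)

# Two days of minute-level counts from the bundled example data.
count1 <- c(t(example_activity_data$count[c(1:2), -c(1, 2)]))
fit <- ActCosinor(x = count1, window = 1, export_ts = TRUE)

# Rhythm-adjusted mean, amplitude, and peak time in hours.
fit$params[c("mes", "amp", "acrotime")]

# Overlay the fitted cosine on the raw counts across the two days.
ts <- fit$cosinor_ts
plot(ts$time_across_days, ts$original, type = "l", col = "grey",
     xlab = "Time (hours across days)", ylab = "Activity counts")
lines(ts$time_across_days, ts$fittedY, col = "red", lwd = 2)
```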
/scratch/gouwar.j/cran-all/cranData/ActCR/R/ActCosinor.R
#' @title Cosinor Model for Circadian Rhythmicity for the Whole Dataset #' @description A parametric approach to study circadian rhythmicity assuming cosinor shape. This function is a whole-dataset #' wrapper for \code{ActCosinor}. #' #' @param count.data \code{data.frame} of dimension n * (p+2) containing the #' p-dimensional activity data for all n subject days. #' The first two columns have to be ID and Day. ID can be #' either \code{character} or \code{numeric}. Day has to be \code{numeric} indicating #' the sequence of days within each subject. #' @param window The calculation needs the window size of the data. E.g., window = 1 means each epoch is a one-minute window. #' @param export_ts A Boolean to indicate whether time series should be exported (notice: it takes time and storage space to export time series data for all subject-days. Use this with caution. We suggest only exporting time series for selected subjects.) #' #' @importFrom stats na.omit reshape #' @importFrom dplyr group_by %>% do mutate #' #' #' @return A \code{data.frame} with the following 5 columns #' \item{ID}{ID} #' \item{ndays}{number of days} #' \item{mes}{MESOR, which is short for midline estimating statistic of rhythm, a rhythm-adjusted mean. This represents mean activity level.} #' \item{amp}{amplitude, a measure of half the extent of predictable variation within a cycle. This represents the highest activity one can achieve.} #' \item{acr}{acrophase, a measure of the time of the overall high values recurring in each cycle. Here it has a unit of radians. This represents the time to reach the peak.} #' \item{acrotime}{acrophase in the unit of time (hours)} #' \item{ndays}{Number of days modeled} #' and #' \item{cosinor_ts}{Exported data frame with time, time over days, original time series, fitted time series using the cosinor model} #' #' @export #' @examples #' counts_1 = example_activity_data$count[c(1:12),] #' cos_all_1 = ActCosinor_long(count.data = counts_1, window = 1,export_ts = TRUE) #' counts_10 = cbind(counts_1[,1:2], #' as.data.frame(t(apply(counts_1[,-c(1:2)], 1, #' FUN = bin_data, window = 10, method = "average")))) #' cos_all_10 = ActCosinor_long(count.data = counts_10, window = 10) ActCosinor_long = function( count.data, window = 1, export_ts = FALSE ){ ID = value = . 
= NULL rm(list = c("ID", "value", ".")) long.count = reshape(count.data, varying = names(count.data)[3:ncol(count.data)],direction = "long", timevar = "Time",idvar = c("ID","Day"),v.names = "values",new.row.names = c(1:((ncol(count.data)-2)*nrow(count.data)))) long.count = long.count[ with(long.count, order(ID, Day,Time)), ] result= long.count %>% group_by(ID) %>% do(out = ActCosinor(.$values, window = window, export_ts = export_ts)) # out = unlist(result$out) # # result$ndays = out[which(names(out) == "ndays")] # result$mes = out[which(names(out) == "mes")] # result$amp = out[which(names(out) == "amp")] # result$acr = out[which(names(out) == "acr")] # result$acrotime = out[which(names(out) == "acrotime")] # # result$out = NULL # names(result)[3:6] = paste0(names(result)[3:6],"_",window) # return(result) # ## Exporting the parameters out = unlist(sapply(result$out, function(x) x[1])) params = as.data.frame(matrix(out,ncol = 5,byrow = T)) names(params) = gsub("params.","", names(out)[1:5]) params = params %>% mutate(ID = result$ID) names(params)[1:4] = paste0(names(params)[1:4],"_",window) params = params[,c(6,5,1:4)] ## Exporting the parameters if(export_ts){ data_ts = sapply(result$out, function(x) x[2]) names(data_ts) = result$ID ret = list("params" = params,"cosinor_ts" = data_ts) }else{ ret = params } return(ret) }
/scratch/gouwar.j/cran-all/cranData/ActCR/R/ActCosinor_long.R
#' @title Extended Cosinor Model for Circadian Rhythmicity #' @description Extended cosinor model based on sigmoidally transformed cosine curve using anti-logistic transformation #' #' #' @param x \code{vector} vector of dimension n*1440 which represents n days of 1440 minute activity data #' @param window The calculation needs the window size of the data. E.g window = 1 means each epoch is in one-minute window. #' @param lower A numeric vector of lower bounds on each of the five parameters (in the order of minimum, amplitude, alpha, beta, acrophase) for the NLS. If not given, the default lower bound for each parameter is set to \code{-Inf}. #' @param upper A numeric vector of upper bounds on each of the five parameters (in the order of minimum, amplitude, alpha, beta, acrophase) for the NLS. If not given, the default lower bound for each parameter is set to \code{Inf} #' @param export_ts A Boolean to indicate whether time series should be exported #' #' @importFrom cosinor cosinor.lm #' @importFrom cosinor2 correct.acrophase #' @importFrom minpack.lm nls.lm nls.lm.control #' @importFrom stats coef residuals fitted #' #' @return A list with elements #' \item{minimum}{Minimum value of the of the function.} #' \item{amp}{amplitude, a measure of half the extend of predictable variation within a cycle. This represents the highest activity one can achieve.} #' \item{alpha}{It determines whether the peaks of the curve are wider than the troughs: when alpha is small, the troughs are narrow and the peaks are wide; when alpha is large, the troughs are wide and the peaks are narrow.} #' \item{beta}{It dertermines whether the transformed function rises and falls more steeply than the cosine curve: large values of beta produce curves that are nearly square waves.} #' \item{acrotime}{acrophase is the time of day of the peak in the unit of the time (hours)} #' \item{F_pseudo}{Measure the improvement of the fit obtained by the non-linear estimation of the transformed cosine model} #' \item{UpMesor}{Time of day of switch from low to high activity. Represents the timing of the rest- activity rhythm. Lower (earlier) values indicate increase in activity earlier in the day and suggest a more advanced circadian phase.} #' \item{DownMesor}{Time of day of switch from high to low activity. Represents the timing of the rest-activity rhythm. Lower (earlier) values indicate decline in activity earlier in the day, suggesting a more advanced circadian phase.} #' \item{MESOR}{A measure analogous to the MESOR of the cosine model (or half the deflection of the curve) can be obtained from mes=min+amp/2. However, it goes through the middle of the peak, and is therefore not equal to the MESOR of the cosine model, which is the mean of the data.} #' \item{ndays}{Number of days modeled.} #' \item{cosinor_ts}{Exported data frame with time, time over days, original time series, fitted time series using cosinor model from step 1, and fitted extended cosinor model from step 2} #' #' #' @references Marler MR, Gehrman P, Martin JL, Ancoli-Israel S. The sigmoidally transformed cosine curve: a mathematical model for circadian rhythms with symmetric non-sinusoidal shapes. Stat Med. 
#' @export #' @examples #' count1 = c(t(example_activity_data$count[c(1:2),-c(1,2)])) #' cos_coeff = ActExtendCosinor(x = count1, window = 1,export_ts = TRUE) ActExtendCosinor = function( x, window = 1, lower = c(0, 0, -1, 0, -3), ## min, amp, alpha, beta, phi upper = c(Inf, Inf, 1, Inf, 27), export_ts = FALSE ){ if(1440 %% window != 0){ stop("Only use window size that is an integer factor of 1440") } if(length(x) %% (1440/window) != 0){ stop("Window size and length of input vector doesn't match. Only use window size that is an integer factor of 1440") } dim = 1440/window n.days = length(x)/dim # Stage 1 ---- Cosinor Model tmp.dat = data.frame(time = rep(1:dim, n.days) / (60/window), Y = x) fit = cosinor.lm(Y ~ time(time) + 1, data = tmp.dat, period = 24) mesor = fit$coefficients[1] amp = fit$coefficients[2] acr = correct.acrophase(fit) acrotime = (-1) * acr * 24/(2 * pi) names(mesor) = names(amp) = names(acr) = names(acrotime) = NULL # Stage 2 ---- Transformation ## Set up the initial values e_min0 = max(mesor - amp, 0) e_amp0 = 2 * amp e_phi0 = acrotime e_par0 = c(e_min0, e_amp0, 0, 2, e_phi0) ## min, amp, alpha, beta, phi tmp.dat = tmp.dat[!is.na(tmp.dat$Y),] # ignore missing values fit_nls = nls.lm(e_par0, fn = fn_obj, lower = lower, upper = upper, tmp.dat = tmp.dat, control = nls.lm.control(maxiter = 1000)) if (export_ts == TRUE) { # Put time series in data.frame and add it to output: fittedYext = tmp.dat$Y - residuals(fit_nls) #extended cosinor model fittedY = fitted(fit$fit) #cosinor omodel original = tmp.dat$Y #original data time = tmp.dat$time #time time2 = time drops = which(diff(time) < 0) + 1 for (k in drops) { time2[k:length(time)] = time2[k:length(time)] + 24 } time_across_days = time2 cosinor_ts = as.data.frame(cbind(time, time_across_days, original, fittedY, fittedYext)) # data.frame with all signal } else { cosinor_ts = NULL } ## Estimated exteded cosinor parameters,in the order of ## minimum, amplitude, alpha, beta, acrophase coef.nls = coef(fit_nls) e_min = coef.nls[1] e_amp = coef.nls[2] e_alpha = coef.nls[3] e_beta = coef.nls[4] e_acrotime = coef.nls[5] ## Pseudo F statistics RSS_cos = sum((fit$fit$residuals)^2) RSS_ext = sum(residuals(fit_nls)^2) F_pseudo = ((RSS_cos - RSS_ext)/2)/(RSS_ext/(nrow(tmp.dat) - 5)) ## Derived metrics UpMesor = -acos(e_alpha)/(2*pi/24) + e_acrotime DownMesor = acos(e_alpha)/(2*pi/24) + e_acrotime MESOR = e_min + e_amp/2 params = list("minimum" = e_min, "amp" = e_amp, "alpha" = e_alpha, "beta" = e_beta, "acrotime" = e_acrotime, "F_pseudo" = F_pseudo, "UpMesor" = UpMesor, "DownMesor" = DownMesor, "MESOR" = MESOR, "ndays" = n.days) ret = list("params" = params, "cosinor_ts" = cosinor_ts) return(ret) } ## Objective function to optimize for extended cosinor model fn_obj = function(par, tmp.dat) { ct = cos((tmp.dat[, 1] - par[5]) * 2 * pi / 24) lct = exp(par[4] * (ct - par[3])) / (1 + exp(par[4] * (ct - par[3]))) rt = par[1] + par[2] * lct tmp.dat[, 2] - rt }
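## Illustrative sketch (not part of the package code): evaluate the sigmoidally transformed
## cosine curve that fn_obj() above fits, for a made-up parameter vector, to show how the
## five parameters (minimum, amplitude, alpha, beta, acrophase) shape the daily profile.
## The parameter values below are arbitrary examples, not estimates from any data.
transformed_cosine = function(time, minimum, amp, alpha, beta, phi) {
  ct = cos((time - phi) * 2 * pi / 24)          # cosine of time relative to the acrophase
  minimum + amp * exp(beta * (ct - alpha)) / (1 + exp(beta * (ct - alpha)))
}
hours = seq(0, 24, by = 0.1)
curve_vals = transformed_cosine(hours, minimum = 10, amp = 200, alpha = 0, beta = 2, phi = 14)
## With alpha = 0, the curve crosses its mid-level 6 hours before and after the acrophase,
## consistent with UpMesor = -acos(alpha)/(2*pi/24) + acrotime and
## DownMesor = acos(alpha)/(2*pi/24) + acrotime computed above.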
/scratch/gouwar.j/cran-all/cranData/ActCR/R/ActExtendCosinor.R
#' @title Extended Cosinor Model for Circadian Rhythmicity for the Whole Dataset
#' @description Extended cosinor model based on sigmoidally transformed cosine curve using anti-logistic transformation. This function is a whole dataset
#' wrapper for \code{ActExtendCosinor}.
#'
#' @param count.data \code{data.frame} of dimension n * (p+2) containing the
#' p dimensional activity data for all n subject days.
#' The first two columns have to be ID and Day. ID can be
#' either \code{character} or \code{numeric}. Day has to be \code{numeric} indicating
#' the sequence of days within each subject.
#' @param window The calculation needs the window size of the data. E.g. window = 1 means each epoch is in one-minute window.
#' @param lower A numeric vector of lower bounds on each of the five parameters (in the order of minimum, amplitude, alpha, beta, acrophase) for the NLS. If not given, the default lower bound for each parameter is set to \code{-Inf}.
#' @param upper A numeric vector of upper bounds on each of the five parameters (in the order of minimum, amplitude, alpha, beta, acrophase) for the NLS. If not given, the default upper bound for each parameter is set to \code{Inf}.
#' @param export_ts A Boolean to indicate whether time series should be exported (notice: it takes time and storage space to export time series data for all subject-days. Use this with caution. It is suggested to only export time series for selected subjects).
#'
#'
#' @importFrom stats na.omit reshape
#' @importFrom dplyr group_by %>% do mutate
#'
#'
#' @return A \code{data.frame} with the following 11 columns
#' \item{ID}{ID}
#' \item{ndays}{number of days}
#' \item{minimum}{Minimum value of the function.}
#' \item{amp}{amplitude, a measure of half the extent of predictable variation within a cycle. This represents the highest activity one can achieve.}
#' \item{alpha}{It determines whether the peaks of the curve are wider than the troughs: when alpha is small, the troughs are narrow and the peaks are wide; when alpha is large, the troughs are wide and the peaks are narrow.}
#' \item{beta}{It determines whether the transformed function rises and falls more steeply than the cosine curve: large values of beta produce curves that are nearly square waves.}
#' \item{acrotime}{acrophase, the time of day of the peak in the unit of time (hours)}
#' \item{F_pseudo}{Measure of the improvement of the fit obtained by the non-linear estimation of the transformed cosine model}
#' \item{UpMesor}{Time of day of switch from low to high activity. Represents the timing of the rest-activity rhythm. Lower (earlier) values indicate an increase in activity earlier in the day and suggest a more advanced circadian phase.}
#' \item{DownMesor}{Time of day of switch from high to low activity. Represents the timing of the rest-activity rhythm. Lower (earlier) values indicate a decline in activity earlier in the day, suggesting a more advanced circadian phase.}
#' \item{MESOR}{A measure analogous to the MESOR of the cosine model (or half the deflection of the curve) can be obtained from mes=min+amp/2. 
However, it goes through the middle of the peak, and is therefore not equal to the MESOR of the cosine model, which is the mean of the data.} #' \item{cosinor_ts}{Exported data frame with time, time over days, original time series, fitted time series using cosinor model from step 1, and fitted extended cosinor model from step 2} #' #' @export #' @examples #' counts_1 = example_activity_data$count[c(1:12),] #' cos_all_1 = ActExtendCosinor_long(count.data = counts_1, window = 1, export_ts = TRUE) #' counts_10 = cbind(counts_1[,1:2], #' as.data.frame(t(apply(counts_1[,-c(1:2)], 1, #' FUN = bin_data, window = 10, method = "average")))) #' cos_all_10 = ActExtendCosinor_long(count.data = counts_10, window = 10, export_ts = FALSE) #' ActExtendCosinor_long = function( count.data, window = 1, lower = c(0, 0, -1, 0, -3), ## min, amp, alpha, beta, phi upper = c(Inf, Inf, 1, Inf, 27), export_ts = FALSE ){ ID = value = . = NULL rm(list = c("ID", "value", ".")) long.count = reshape(count.data, varying = names(count.data)[3:ncol(count.data)],direction = "long", timevar = "Time",idvar = c("ID","Day"),v.names = "values",new.row.names = c(1:((ncol(count.data)-2)*nrow(count.data)))) long.count = long.count[ with(long.count, order(ID, Day,Time)), ] result= long.count %>% group_by(ID) %>% do(out = ActExtendCosinor(.$values, window = window, lower = lower, upper = upper,export_ts = export_ts)) ## Exporting the parameters out = unlist(sapply(result$out, function(x) x[1])) params = as.data.frame(matrix(out,ncol = 10,byrow = T)) names(params) = gsub("params.","", names(out)[1:10]) params = params %>% mutate(ID = result$ID) names(params)[1:9] = paste0(names(params)[1:9],"_",window) params = params[,c(11,10,1:9)] ## Exporting the parameters if(export_ts){ data_ts = sapply(result$out, function(x) x[2]) names(data_ts) = result$ID ret = list("params" = params,"cosinor_ts" = data_ts) }else{ ret = params } return(ret) }
/scratch/gouwar.j/cran-all/cranData/ActCR/R/ActExtendCosinor_long.R
#' @title Interdaily Stability
#' @description This function calculates interdaily stability, a nonparametric metric
#' of circadian rhythmicity
#'
#' @param x \code{data.frame} of dimension ndays by p, where p is the dimension of the data.
#' @return IS
#'
#' @references Junrui Di et al. Joint and individual representation of domains of physical activity, sleep, and circadian rhythmicity. Statistics in Biosciences.
#' @export
#'
#'
#' @examples
#' data(example_activity_data)
#' count1 = example_activity_data$count[c(1,2,3),-c(1,2)]
#' is = IS(x = count1)


IS = function(
  x
){
  p = ncol(x)
  xh = colMeans(x)
  v = c(t(x))
  n = length(v)

  numerator = sum((xh - mean(v))^2)/p
  denominator = sum((v - mean(v))^2)/n

  return(numerator/denominator)
}
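## Illustrative sketch (not part of the package code): IS compares the variance of the
## average daily profile to the overall variance. A profile that repeats exactly from day
## to day gives IS equal to 1; destroying the within-day timing by shuffling lowers IS.
## The toy matrix below uses 3 identical days of a 144-epoch profile.
profile = sin(2 * pi * (1:144) / 144)
x_repeat = matrix(rep(profile, 3), nrow = 3, byrow = TRUE)
IS(x_repeat)                   # 1: day-to-day timing is perfectly stable
set.seed(1)
x_shuffled = t(apply(x_repeat, 1, sample))
IS(x_shuffled)                 # much smaller: timing is no longer reproducible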
/scratch/gouwar.j/cran-all/cranData/ActCR/R/IS.R
#' @title Interdaily Stability for the Whole Dataset
#' @description This function calculates interdaily stability, a nonparametric metric
#' of circadian rhythmicity. This function is a whole dataset
#' wrapper for \code{IS}.
#'
#' @param count.data \code{data.frame} of dimension n * (1440+2) containing the
#' 1440 dimensional activity data for all n subject days.
#' The first two columns have to be ID and Day. ID can be
#' either \code{character} or \code{numeric}. Day has to be \code{numeric} indicating
#' the sequence of days within each subject.
#' @param window an \code{integer} indicating the window size used to bin the data before
#' the function is applied to the dataset. For details, see \code{bin_data}.
#' @param method \code{character} of "sum" or "average", function used to bin the data
#'
#' @return A \code{data.frame} with the following 2 columns
#' \item{ID}{ID}
#' \item{IS}{IS}
#'
#' @references Junrui Di et al. Joint and individual representation of domains of physical activity, sleep, and circadian rhythmicity. Statistics in Biosciences.
#' @export
#'
#'
#' @examples
#' data(example_activity_data)
#' count1 = example_activity_data$count
#' is_subj = IS_long(count.data = count1, window = 10, method = "average")


IS_long = function(
  count.data,
  window = 1,
  method = c("average","sum")
){
  x = count.data
  if(! "ID" %in% names(x)){
    stop("Please name the ID column with the name ID")
  }

  x = cbind(x[,1:2],
            as.data.frame(
              t(apply(x[,-c(1:2)], 1, FUN = bin_data, window = window, method = method))))

  a = split(x,f = x$ID)
  y = unlist(lapply(a, function(x) IS(x[,-c(1:2)])))

  is.out = data.frame(ID = names(y), IS = y)
  names(is.out)[2] = paste0("IS_",window)

  return(is.out)
}
/scratch/gouwar.j/cran-all/cranData/ActCR/R/IS_long.R
#' @title Intradaily Variability
#' @description This function calculates intradaily variability, a nonparametric metric
#' representing fragmentation of circadian rhythmicity
#'
#' @param x \code{vector} of activity data
#' @return IV
#'
#'
#' @export
#' @references Junrui Di et al. Joint and individual representation of domains of physical activity, sleep, and circadian rhythmicity. Statistics in Biosciences.
#'
#'
#' @examples
#' data(example_activity_data)
#' count1 = c(t(example_activity_data$count[1,-c(1,2)]))
#' iv = IV(x = count1)
#'
#'

IV = function(
  x
){
  mean.counts = mean(x)
  numerator = sum(diff(x)^2)/(length(x) - 1)
  denominator = sum((x - mean.counts)^2)/length(x)

  return(numerator/denominator)
}
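## Illustrative sketch (not part of the package code): IV is the mean squared successive
## difference scaled by the overall variance. A smooth 24-hour profile changes little from
## one epoch to the next (IV near 0), while independent noise gives IV close to 2, because
## successive differences of independent values have twice the variance of the values themselves.
smooth_day = sin(2 * pi * (1:1440) / 1440)
IV(smooth_day)          # close to 0
set.seed(1)
IV(rnorm(1440))         # close to 2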
/scratch/gouwar.j/cran-all/cranData/ActCR/R/IV.R
#' @title Intradaily Variability for the Whole Dataset
#' @description This function calculates intradaily variability, a nonparametric metric
#' representing fragmentation of circadian rhythmicity. This function is a whole dataset
#' wrapper for \code{IV}.
#'
#' @param count.data \code{data.frame} of dimension n * (1440+2) containing the
#' 1440 dimensional activity data for all n subject days.
#' The first two columns have to be ID and Day. ID can be
#' either \code{character} or \code{numeric}. Day has to be \code{numeric} indicating
#' the sequence of days within each subject.
#' @param window an \code{integer} indicating the window size used to bin the data before
#' the function is applied to the dataset. For details, see \code{bin_data}.
#' @param method \code{character} of "sum" or "average", function used to bin the data
#'
#'
#'
#' @return A \code{data.frame} with the following 3 columns
#' \item{ID}{ID}
#' \item{Day}{Day}
#' \item{IV}{IV}
#'
#'
#' @export
#' @references Junrui Di et al. Joint and individual representation of domains of physical activity, sleep, and circadian rhythmicity. Statistics in Biosciences.
#'
#'
#' @examples
#' data(example_activity_data)
#' count1 = example_activity_data$count
#' iv_subj = IV_long(count.data = count1, window = 10, method = "average")
#'

IV_long = function(
  count.data,
  window = 1,
  method = c("average","sum")
){
  x = count.data

  x_bin = cbind(x[,1:2],
                as.data.frame(
                  t(apply(x[,-c(1:2)], 1, FUN = bin_data, window = window, method = method))))

  iv_out = as.data.frame(cbind(x_bin[,c(1,2)], apply(x_bin[,-c(1,2)], 1, IV)))
  names(iv_out) = c("ID","Day",paste0("IV_",window))

  return(iv_out)
}
/scratch/gouwar.j/cran-all/cranData/ActCR/R/IV_long.R
#' @title Relative Amplitude
#' @description This function calculates relative amplitude, a nonparametric metric
#' of circadian rhythmicity
#'
#' @param x \code{vector} vector of activity data
#' @param window since the calculation of M10 and L5 depends on the dimension of data, we need to include
#' window size as an argument.
#' @param method \code{character} of "sum" or "average", function used to bin the data
#'
#' @return A list with elements
#' \item{M10}{Mean activity during the most active 10 hours}
#' \item{L5}{Mean activity during the least active 5 hours}
#' \item{RA}{Relative amplitude}
#'
#' @importFrom zoo rollapplyr
#'
#' @export
#' @references Junrui Di et al. Joint and individual representation of domains of physical activity, sleep, and circadian rhythmicity. Statistics in Biosciences.
#'
#' @examples
#' data(example_activity_data)
#' count1 = c(t(example_activity_data$count[1,-c(1,2)]))
#' ra = RA(x = count1, window = 10, method = "average")
#'
#'

RA = function(
  x,
  window = 1,
  method = c("average","sum")
){
  if(length(x) %% 1440/window != 0){
    stop("Window size and length of input vector doesn't match. Only use window size that is an integer factor of 1440")
  }

  x_bin = bin_data(x, window = window, method = method)

  M10 = max(roll(x_bin, 10 * 1440/window/24))
  L5 = min(roll(x_bin, 5 * 1440/window/24))

  relaamp = (M10 - L5)/(M10 + L5)

  params = list("M10" = M10,
                "L5" = L5,
                "RA" = relaamp)
  return(params)
}


roll = function(day.counts,k){
  kvec = rollapplyr(day.counts, k, function(x) mean(x,na.rm = T), fill = NA)
  kvec = kvec[!is.na(kvec)]
  return(kvec)
}
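## Illustrative sketch (not part of the package code): a toy day with zero counts for the
## first 14 hours and a constant count of 100 for the last 10 hours has its most active
## 10-hour window equal to the active block (M10 = 100) and its least active 5-hour window
## equal to zero (L5 = 0), so RA = (M10 - L5) / (M10 + L5) = 1.
toy_day = rep(c(0, 100), times = c(14 * 60, 10 * 60))   # 1440 minutes of made-up counts
RA(x = toy_day, window = 60, method = "average")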
/scratch/gouwar.j/cran-all/cranData/ActCR/R/RA.R
#' @title Relative Amplitude for the Whole Dataset
#' @description This function calculates relative amplitude, a nonparametric metric
#' of circadian rhythmicity. This function is a whole dataset
#' wrapper for \code{RA}.
#'
#' @param count.data \code{data.frame} of dimension n * (p+2) containing the
#' p dimensional activity data for all n subject days.
#' The first two columns have to be ID and Day. ID can be
#' either \code{character} or \code{numeric}. Day has to be \code{numeric} indicating
#' the sequence of days within each subject.
#' @param window since the calculation of M10 and L5 depends on the dimension of data, we need to include
#' window size as an argument.
#' @param method \code{character} of "sum" or "average", function used to bin the data
#'
#' @return A \code{data.frame} with the following 5 columns
#' \item{ID}{ID}
#' \item{Day}{Day}
#' \item{M10}{Mean activity during the most active 10 hours}
#' \item{L5}{Mean activity during the least active 5 hours}
#' \item{RA}{RA}
#'
#' @importFrom zoo rollapplyr
#'
#' @export
#'
#'
#' @examples
#' data(example_activity_data)
#' count1 = example_activity_data$count[1:12,]
#' ra_all = RA_long(count.data = count1, window = 10, method = "average")
#'

RA_long = function(
  count.data,
  window = 1,
  method = c("average","sum")
){
  x = count.data
  ra_out = apply(x[,-c(1,2)], 1, RA, window = window, method = method)

  out = unlist(ra_out)
  params = as.data.frame(matrix(out,ncol = 3,byrow = T))
  names(params) = c("M10","L5","RA")
  params = params %>% mutate(ID = x$ID,Day = x$Day)
  names(params)[1:3] = paste0(names(params)[1:3],"_",window)
  params = params[,c(4,5,1:3)]

  return(params)
}
/scratch/gouwar.j/cran-all/cranData/ActCR/R/RA_long.R
#' @title Bin data into longer windows
#' @description Bin minute level data into different time resolutions
#'
#' @param x \code{vector} of activity data.
#' @param method \code{character} of "sum" or "average", function used to bin the data
#' @param window window size used to bin the original 1440 dimensional data into. Window size
#' should be an integer factor of 1440
#' @return a vector of binned data
#'
#' @importFrom zoo rollapply
#'
#' @export
#'
#'
#'
#' @examples
#' data(example_activity_data)
#' count1 = c(t(example_activity_data$count[1,-c(1,2)]))
#' xbin = bin_data(x = count1, window = 10, method = "average")


bin_data = function(
  x = x,
  window = 1,
  method = c("average","sum")
){
  if(length(x) != 1440){
    stop("Please input 1440 dimensional minute-level activity data!")
  }

  if(1440 %% window != 0){
    stop("Only use window size that is an integer factor of 1440")
  }

  method = match.arg(method)

  if(method == "sum"){
    binx = rollapply(x, width = window, by = window, FUN = sum)
  }

  if(method == "average"){
    binx = rollapply(x, width = window, by = window, FUN = mean)
  }

  return(binx)
}
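## Illustrative sketch (not part of the package code): for complete (non-missing) data the
## two binning methods differ only by a factor of the window size, since a sum over a window
## equals the window length times the mean over the same window.
x = as.numeric(1:1440)
all.equal(bin_data(x, window = 10, method = "sum"),
          10 * bin_data(x, window = 10, method = "average"))   # TRUE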
/scratch/gouwar.j/cran-all/cranData/ActCR/R/bin_data.R
#' @title Activity/Wear Data from 50 Subjects from NHANES 2003 - 2006 #' #' @description A list of two \code{data.frames} containing the counts and the weartime #' for 50 NHANES subjects #' #' @format A list of two \code{data.frame}s with 1442 columns, which are in the following order: #' \describe{ #' \item{ID}{identifier of the person.} #' \item{Day}{\code{numeric} sequence 1,2,.. indicating the order of days within a subject.} #' \item{MIN1-MIN1440}{counts of activity of that specific minute.} #' } "example_activity_data"
/scratch/gouwar.j/cran-all/cranData/ActCR/R/example_activity_data.R
#' @title Activity/Wear Data from 50 Subjects from NHANES 2003 - 2006 #' #' @description A list of two \code{data.frames} containing the counts and the weartime #' for 50 NHANES subjects #' #' @format A list of two \code{data.frame}s with 1442 columns, which are in the following order: #' \describe{ #' \item{ID}{identifier of the person.} #' \item{Day}{\code{numeric} sequence 1,2,.. indicating the order of days within a subject.} #' \item{MIN1-MIN1440}{counts of activity of that specific minute.} #' } "example_activity_data"
/scratch/gouwar.j/cran-all/cranData/ActFrag/R/example_activity_data.R
#' @title Fragmentation Metrics #' @description Fragmentation methods to study the transition between two states, e.g. #' sedentary v.s. active. #' #' @param x \code{integer} \code{vector} of activity data. #' @param w \code{vector} of wear flag data with same dimension as \code{x}. #' @param thresh threshold to binarize the data. #' @param bout.length minimum duration of defining an active bout; defaults to 1. #' @param metrics What is the fragmentation metrics to exract. Can be #' "mean_bout","TP","Gini","power","hazard",or all the above metrics "all". #' #' @return A list with elements #' \item{mean_r}{mean sedentary bout duration} #' \item{mean_a}{mean active bout duration} #' \item{SATP}{sedentary to active transition probability} #' \item{ASTP}{bactive to sedentary transition probability} #' \item{Gini_r}{Gini index for active bout} #' \item{Gini_a}{Gini index for sedentary bout} #' \item{h_r}{hazard function for sedentary bout} #' \item{h_a}{hazard function for active bout} #' \item{alpha_r}{power law parameter for sedentary bout} #' \item{alpha_a}{power law parameter for active bout} #' #' @importFrom stats na.omit reshape #' @importFrom dplyr %>% as_data_frame filter #' @importFrom accelerometry bouts rle2 #' @importFrom survival survfit Surv #' @importFrom ineq Gini #' #' @export #' #' @references Junrui Di, Andrew Leroux, Jacek Urbanek, Ravi Varadhan, Adam P. Spira, Jennifer Schrack, Vadim Zipunnikov. #' Patterns of sedentary and active time accumulation are associated with mortality in US adults: The NHANES study. bioRxiv 182337; doi: https://doi.org/10.1101/182337 #' #' @details Metrics include #' mean_bout (mean bout duration), #' TP (between states transition probability), #' Gini (gini index), #' power (alapha parameter for power law distribution) #' hazard (average hazard function) #' #' #' @examples #' data(example_activity_data) #' count1 = c(t(example_activity_data$count[1,-c(1,2)])) #' wear1 = c(t(example_activity_data$wear[1,-c(1,2)])) #' frag = fragmentation(x = count1, w = wear1, thresh = 100, bout.length = 1, metrics = "mean_bout") #' #' fragmentation = function( x, w, thresh , bout.length = 1, metrics = c("mean_bout","TP","Gini","power","hazard","all") ){ value = NULL rm(list = c("value")) metrics = match.arg(metrics) if(!is.integer(x)){ stop("Activity counts has to be integers!") } if(missing(w)){ stop("Please input weartime flag vector w with same dimension!") } if(length(x) != length(w)){ stop("x and w should have the same length!") } uwear = as.integer(unique(c(w))) if (!all(uwear %in% c(0, 1, NA))) { stop("w has non 0-1 data!") } x = na.omit(as.integer(x)) w = na.omit(w) w[w == 0] = NA y = bouts(counts = x, thresh_lower = thresh, bout_length = bout.length) yw = y * w uy = unique(na.omit(yw)) if (length(uy) == 1) { #stop("Only one state found in the activity, no transition defined.") if(metrics == "mean_bout"){ frag = list(mean_r = NA, mean_a = NA) } if(metrics == "TP"){ frag = list(SATP = NA, ASTP = NA) } if(metrics == "Gini"){ frag = list(Gini_r = NA, Gini_a = NA) } if(metrics == "power"){ frag = list(alpha_r = NA, alpha_a = NA) } if(metrics == "hazard"){ frag = list(h_r = NA, h_a = NA) } if (metrics == "all"){ frag = list(mean_r = NA, mean_a = NA, SATP = NA, ASTP = NA, Gini_r = NA, Gini_a = NA, alpha_r = NA, alpha_a = NA, h_r = NA, h_a = NA ) } } if (length(uy) > 1) { mat = as_data_frame(rle2(yw)) %>% filter(!is.na(value)) A = mat$length[which(mat$value == 1)] R = mat$length[which(mat$value == 0)] if(metrics == "mean_bout"){ frag = list(mean_r = mean(R), 
mean_a = mean(A)) } if(metrics == "TP"){ frag = list(SATP = 1/mean(R), ASTP = 1/mean(A)) } if(metrics == "Gini"){ frag = list(Gini_r = Gini(R,corr = T), Gini_a = Gini(A,corr = T)) } if(metrics == "power"){ nr = length(R) na = length(A) rmin = min(R) amin = min(A) frag = list(alpha_r = 1+ nr/sum(log(R/(rmin-0.5))), alpha_a = 1+ na/sum(log(A/(amin-0.5)))) } if(metrics == "hazard"){ fitr = survfit(Surv(R,rep(1,length(R)))~1) fita = survfit(Surv(A,rep(1,length(A)))~1) frag = list(h_r = mean(fitr$n.event/fitr$n.risk), h_a = mean(fita$n.event/fita$n.risk)) } if(metrics == "all"){ nr = length(R) na = length(A) rmin = min(R) amin = min(A) fitr = survfit(Surv(R,rep(1,length(R)))~1) fita = survfit(Surv(A,rep(1,length(A)))~1) frag = list(mean_r = mean(R), mean_a = mean(A), SATP = 1/mean(R), ASTP = 1/mean(A), Gini_r = Gini(R,corr = T), Gini_a = Gini(A,corr = T), alpha_r = 1+ nr/sum(log(R/(rmin-0.5))), alpha_a = 1+ na/sum(log(A/(amin-0.5))), h_r = mean(fitr$n.event/fitr$n.risk), h_a = mean(fita$n.event/fita$n.risk) ) }} return(frag) }
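## Illustrative sketch (not part of the package code): on a toy count vector that alternates
## between 30-minute sedentary bouts (counts of 0) and 10-minute active bouts (counts of 200),
## the transition probabilities returned above are the reciprocals of the mean bout durations,
## i.e. SATP = 1/30 and ASTP = 1/10. The threshold of 100 and the full wear time are
## assumptions of this example only.
toy_counts = rep(rep(c(0L, 200L), times = c(30, 10)), 5)   # 200 minutes of integer counts
toy_wear = rep(1, length(toy_counts))
fragmentation(x = toy_counts, w = toy_wear, thresh = 100, bout.length = 1, metrics = "TP")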
/scratch/gouwar.j/cran-all/cranData/ActFrag/R/fragmentation.R
#' @title Fragmentation Metrics for Whole Dataset #' @description Fragmentation methods to study the transition between two states, e.g. #' sedentary v.s. active.This function is a whole dataset wrapper for \code{fragmentation} #' #' @param count.data \code{data.frame} of dimension n*1442 containing the 1440 minutes of activity data for all n subject days. #' The first two columns have to be ID and Day. ID can be either \code{character} or \code{numeric}. Day has to be \code{numeric} indicating #' the sequency of days within each subject. #' @param weartime \code{data.frame} with dimension of \code{count.data}. #' The first two columns have to be ID and Day.ID can be either \code{character} or \code{numeric}. Day has to be \code{numeric} indicating #' the sequencey of days within each subject. #' #' @param thresh threshold to define the two states. #' @param bout.length minimum duration of defining an active bout; defaults to 1. #' @param metrics What is the fragmentation metrics to exract. Can be #' "mean_bout","TP","Gini","power","hazard",or all the above metrics "all". #' @param by Determine whether fragmentation is calcualted by day or by subjects (i.e. aggregate bouts across days). #' by-subject is recommended to gain more power. #' #' #' #' #' @return A dataframe with some of the following columns #' \item{ID}{identifier of the person} #' \item{Day}{\code{numeric} vector indicating the sequencey of days within each subject. } #' \item{mean_r}{mean sedentary bout duration} #' \item{mean_a}{mean active bout duration} #' \item{SATP}{sedentary to active transition probability} #' \item{ASTP}{bactive to sedentary transition probability} #' \item{Gini_r}{Gini index for active bout} #' \item{Gini_a}{Gini index for sedentary bout} #' \item{h_r}{hazard function for sedentary bout} #' \item{h_a}{hazard function for active bout} #' \item{alpha_r}{power law parameter for sedentary bout} #' \item{alpha_a}{power law parameter for active bout} #' #' #' @importFrom stats na.omit reshape #' @importFrom dplyr group_by %>% #' @importFrom dplyr do as_data_frame filter #' @importFrom accelerometry bouts rle2 #' @importFrom survival survfit Surv #' @importFrom ineq Gini #' #' @export #' @details Metrics include #' mean_bout (mean bout duration), #' TP (between states transition probability), #' Gini (gini index), #' power (alapha parameter for power law distribution) #' hazard (average hazard function) #' #' #' @examples #' data(example_activity_data) #' count = example_activity_data$count #' wear = example_activity_data$wear #' frag_by_day = fragmentation_long(count.data = count, #' weartime = wear,thresh = 100,bout.length = 1, #' metrics = "all",by = "day") #' tp_by_subject = fragmentation_long(count.data = count, #' weartime = wear,thresh = 100,bout.length = 1, #' metrics = "TP",by = "subject") #' #' fragmentation_long = function( count.data, weartime, thresh, bout.length = 1, metrics = c("mean_bout","TP","Gini","power","hazard","all"), by = c("day","subject") ){ ID = value = . 
= NULL rm(list = c("ID", "value", ".")) metrics = match.arg(metrics) by = match.arg(by) if(missing(weartime)){ print("No weartime supplied, calculated based on defualt from 05:00 to 23:00") weartime = wear_flag(count.data = count.data) } if(by == "day"){ mat = cbind(as.matrix(count.data[,-c(1:2)]),as.matrix(weartime[,-c(1:2)])) result.list = apply(mat,1,function(x){ fragmentation(x[1:1440],x[1441:2880],thresh = thresh,bout.length = bout.length, metrics = metrics) }) vfrag = unlist(result.list) if(metrics == "all"){ frag_all = as.data.frame(cbind(count.data[,c(1,2)], vfrag[seq(1,length(vfrag),10)], vfrag[seq(2,length(vfrag),10)], vfrag[seq(3,length(vfrag),10)], vfrag[seq(4,length(vfrag),10)], vfrag[seq(5,length(vfrag),10)], vfrag[seq(6,length(vfrag),10)], vfrag[seq(7,length(vfrag),10)], vfrag[seq(8,length(vfrag),10)], vfrag[seq(9,length(vfrag),10)], vfrag[seq(10,length(vfrag),10)])) } if(metrics != "all"){ frag_all = as.data.frame(cbind(count.data[,c(1,2)], vfrag[seq(1,length(vfrag),2)], vfrag[seq(2,length(vfrag),2)])) } if(metrics == "mean_bout"){ names(frag_all) = c("ID","Day","mean_r","mean_a") } if(metrics == "TP"){ names(frag_all) = c("ID","Day","SATP","ASTP") } if(metrics == "Gini"){ names(frag_all) = c("ID","Day","Gini_r","Gini_a") } if(metrics == "power"){ names(frag_all) = c("ID","Day","alpha_r","alpha_a") } if(metrics == "hazard"){ names(frag_all) = c("ID","Day","h_r","h_a") } if(metrics == "all"){ names(frag_all) = c("ID","Day","mean_r","mean_a","SATP","ASTP", "Gini_r","Gini_a","alpha_r","alpha_a","h_r","h_a") } } if(by == "subject"){ long.count = reshape(count.data, varying = names(count.data)[3:1442],direction = "long", timevar = "MIN",idvar = c("ID","Day"),v.names = "values") long.count = long.count[ with(long.count, order(ID, Day, MIN)), ] long.wear = reshape(weartime, varying = names(weartime)[3:1442],direction = "long", timevar = "MIN",idvar = c("ID","Day"),v.names = "values") long.wear= long.wear[ with(long.wear, order(ID, Day,MIN)), ] longdata = data.frame(ID = long.count$ID, count = long.count$values, wear = long.wear$values) result= longdata %>% group_by(ID) %>% do(out = fragmentation(.$count,.$wear,thresh = thresh, bout.length = bout.length, metrics = metrics)) idlist = as.numeric(as.character(result$ID)) result.list = result$out vfrag = unlist(result.list) if(metrics == "all"){ frag_all = as.data.frame(cbind(idlist, vfrag[seq(1,length(vfrag),10)], vfrag[seq(2,length(vfrag),10)], vfrag[seq(3,length(vfrag),10)], vfrag[seq(4,length(vfrag),10)], vfrag[seq(5,length(vfrag),10)], vfrag[seq(6,length(vfrag),10)], vfrag[seq(7,length(vfrag),10)], vfrag[seq(8,length(vfrag),10)], vfrag[seq(9,length(vfrag),10)], vfrag[seq(10,length(vfrag),10)])) } if(metrics != "all"){ frag_all = as.data.frame(cbind(idlist, vfrag[seq(1,length(vfrag),2)], vfrag[seq(2,length(vfrag),2)])) } if(metrics == "mean_bout"){ names(frag_all) = c("ID","mean_r","mean_a") } if(metrics == "TP"){ names(frag_all) = c("ID","SATP","ASTP") } if(metrics == "Gini"){ names(frag_all) = c("ID","Gini_r","Gini_a") } if(metrics == "power"){ names(frag_all) = c("ID","alpha_r","alpha_a") } if(metrics == "hazard"){ names(frag_all) = c("ID","h_r","h_a") } if(metrics == "all"){ names(frag_all) = c("ID", "mean_r","mean_a","SATP","ASTP", "Gini_r","Gini_a","alpha_r","alpha_a","h_r","h_a") } row.names(frag_all) = c(1:length(idlist)) } return(frag_all) }
/scratch/gouwar.j/cran-all/cranData/ActFrag/R/fragmentation_long.R
#' @title Create Wear/Nonwear Flags
#' @description Determine during which time periods the subject should be wearing the device.
#' It is preferable that users provide their own wear/non-wear flag, which should have the same dimension
#' as the activity data. This function provides a wear/non-wear flag based on time of day.
#'
#' @param count.data \code{data.frame} of dimension n*1442 containing the 1440 minute activity data for all n subject days.
#' The first two columns have to be ID and Day.
#'
#' @param start start time, a string in the format of 24hr, e.g. "05:00"; defaults to "05:00".
#' @param end end time, a string in the format of 24hr, e.g. "23:00"; defaults to "23:00"
#'
#'
#'
#' @return A \code{data.frame} with the same dimension and column names as the \code{count.data}, with 1/0 as the elements
#' representing wear and non-wear respectively.
#' @export
#' @details Fragmentation metrics are usually defined when the subject is awake. The \code{weartime} provides the time periods on which those features should be extracted.
#' This can also be used as an indication of wake/sleep.
#'
#'
#' @examples
#' data(example_activity_data)
#' count = example_activity_data$count
#' weartime = wear_flag(count.data = count)
#'
#'
#'

wear_flag = function(
  count.data,
  start = "05:00",
  end = "23:00"
){
  if(grepl("(am)|(AM)|(pm)|(PM)",start) | grepl("(am)|(AM)|(pm)|(PM)",end)){
    stop("Please use 24hr format for start and end time without am/pm")
  }

  count.mat = as.matrix(count.data[,-c(1:2)])
  wear.mat = matrix(0,nrow = nrow(count.mat),ncol = ncol(count.mat))

  start.i = as.numeric(gsub(":.[0-9]",replacement = "",start)) * 60 +
    as.numeric(gsub("[0-9].:",replacement = "",start)) + 1
  end.i = as.numeric(gsub(":.[0-9]",replacement = "",end)) * 60 +
    as.numeric(gsub("[0-9].:",replacement = "",end)) + 1

  wear.mat[,c(start.i:end.i)] = 1

  weartime = as.data.frame(cbind(count.data[,c(1,2)],wear.mat))
  names(weartime) = names(count.data)

  return(weartime = weartime)
}
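## Illustrative note (a sketch, not part of the package code): with the defaults "05:00" and
## "23:00", the parsing above maps the start time to minute column 5*60 + 0 + 1 = 301 and the
## end time to 23*60 + 0 + 1 = 1381, so minutes MIN301 through MIN1381 are flagged as wear (1)
## and the remaining minutes as non-wear (0).
start_minute = 5 * 60 + 0 + 1    # 301
end_minute   = 23 * 60 + 0 + 1   # 1381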
/scratch/gouwar.j/cran-all/cranData/ActFrag/R/wear_flag.R
## ----setup, include = FALSE--------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ## ---- eval=FALSE-------------------------------------------------------------- # devtools::install_github("junruidi/ActFrag") ## ----------------------------------------------------------------------------- library(ActFrag) ## ---- eval=FALSE-------------------------------------------------------------- # data(example_activity_data) # count = example_activity_data$count # weartime = wear_flag(count.data = count, start = "06:00", end = "23:00") ## ----eval=FALSE--------------------------------------------------------------- # data(example_activity_data) # count1 = c(t(example_activity_data$count[1,-c(1,2)])) # wear1 = c(t(example_activity_data$wear[1,-c(1,2)])) # mb = fragmentation(x = count1, w = wear1, thresh = 100, metrics = "mean_bout",bout.length = 1) # tp = fragmentation(x = count1, w = wear1, thresh = 100, metrics = "TP",bout.length = 1) ## ----eval=FALSE--------------------------------------------------------------- # data(example_activity_data) # count = example_activity_data$count # wear = example_activity_data$wear # frag_by_subject = fragmentation_long(count.data = count, weartime = wear,thresh = 100, metrics = "all",by = "subject",bout.length = 1) # frag_by_day = fragmentation_long(count.data = count, weartime = wear,thresh = 100, metrics = "all",by = "day",bout.length = 1)
/scratch/gouwar.j/cran-all/cranData/ActFrag/inst/doc/ActFrag.R
---
title: "ActFrag"
author: "Junrui Di"
date: "6/24/2018"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Activity Fragmentation}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

This is the vignette for `ActFrag`. This package extracts commonly used fragmentation features from minute level actigraphy data. Recent studies have shown that, on top of total daily active/sedentary volumes, the time accumulation strategies provide more sensitive information. This package provides functions to extract commonly used fragmentation metrics to quantify such time accumulation strategies based on minute level actigraphy-measured activity counts data.

To download the package from Github

```{r, eval=FALSE}
devtools::install_github("junruidi/ActFrag")
```

And to load the package into the R environment
```{r}
library(ActFrag)
```

## 1. Data

The expected data should consist of at least one data frame of minute level activity counts, stored in a format of `data.frame` of dimension $(\sum_i d_i) \times 1442$, where $d_i$ is the number of available days for subject $i$. The order of the 1442 columns (and corresponding column names) should be "ID","Day","MIN1",..."MIN1440".

It is preferable that users also provide a `data.frame` of wear/non-wear flags with the same dimension as the activity counts. This flag data can serve the following purposes:

1. Define time regions where the subjects were wearing the devices. E.g. in NHANES 2003 - 2006, the protocol required subjects to remove the devices during sleep. Certain non-wear detection algorithms can be used (see package [`rnhanesdata`](https://github.com/andrew-leroux/rnhanesdata) ).

2. Separate sleep and wake periods to derive domain specific features. E.g. when the actigraphy record is paired with a sleep log, or when the device has built-in sleep detection algorithms.

3. Define regions where features should be calculated. E.g., we want features to be calculated only for 5:00AM to 11:00PM.

The wear/nonwear flag data should only consist of entries 0/1 representing nonwear/wear, sleep/wake, or outside/within the region of interest, respectively. This is especially crucial for calculating features like total sedentary time and fragmentation, because we are not supposed to mix sedentary time with sleep.

If no wear/nonwear flag data is supplied, users can create one using the `wear_flag` function providing the time region:

```{r, eval=FALSE}
data(example_activity_data)
count = example_activity_data$count
weartime = wear_flag(count.data = count, start = "06:00", end = "23:00")
```

In this version, we only incorporate purpose 3. For purposes 1 and 2, there are more appropriate software tools to use.

## 2. Fragmentation metrics

Frequently, studies extract total time spent in sedentary behavior (e.g., total sedentary minutes per day) or the proportion of waking hours spent sedentary. More advanced techniques have examined the effect of replacing sedentary time with active time spent either in light or moderate-vigorous intensity. For example, isotemporal substitution modeling and compositional data analysis examine the combined effects of time spent sedentary, in light and moderate-vigorous activity, and asleep, while taking into account the codependence due to the finite time during a day. Yet, most studies typically only use the total volume of sedentary time while ignoring the potential importance of accumulation patterns. 
Recent studies have suggested that such patterns (known as prolonged sedentary bouts) may provide additional sensitivity for predicting health outcomes and provide information beyond the total volume of sedentary time. 

Fragmentation metrics study the accumulation patterns of total sedentary time (TST) and total active time (TAT) by quantifying how sedentary and active bouts alternate, via summaries of the duration of, and frequency of switching between, sedentary and active bouts. They provide unique translatable insights into accumulation patterns for sedentary and active time and lead to additional findings of associations between those patterns and health outcomes on top of total sedentary volume.

Here is the list of available fragmentation metrics:

* average bout duration: bout/minute
* transition probability: re-expressed as the reciprocal of the average bout duration (see the quick check at the end of this vignette)
* Gini index: absolute variability normalized to the average bout duration
* average hazard
* power law distribution parameter

We can calculate the above-mentioned metrics for both sedentary and active bouts. Details about fragmentation can be found [here](https://www.biorxiv.org/content/early/2017/08/31/182337).

`fragmentation` and `fragmentation_long` calculate fragmentation features (for a single vector and the whole dataset, respectively). The argument `metrics`, which consists of "mean_bout", "TP", "Gini", "hazard", "power", and "all", decides which metrics to calculate. "all" will lead to all metrics.

For a single day of counts (a vector):
```{r,eval=FALSE}
data(example_activity_data)
count1 = c(t(example_activity_data$count[1,-c(1,2)]))
wear1 = c(t(example_activity_data$wear[1,-c(1,2)]))
mb = fragmentation(x = count1, w = wear1, thresh = 100, metrics = "mean_bout",bout.length = 1)
tp = fragmentation(x = count1, w = wear1, thresh = 100, metrics = "TP",bout.length = 1)
```

Given all the activity and wear/nonwear flag data for the whole dataset, users can choose to calculate fragmentation at the daily level, or aggregate bouts across all available days, by choosing either "subject" or "day" for the argument `by`:

```{r,eval=FALSE}
data(example_activity_data)
count = example_activity_data$count
wear = example_activity_data$wear
frag_by_subject = fragmentation_long(count.data = count, weartime = wear,thresh = 100, metrics = "all",by = "subject",bout.length = 1)
frag_by_day = fragmentation_long(count.data = count, weartime = wear,thresh = 100, metrics = "all",by = "day",bout.length = 1)
```
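The reciprocal relationship between the transition probabilities and the mean bout durations noted above can be checked directly on a toy count vector (a sketch; the threshold and bout structure below are illustrative only, not taken from the example data):

```{r, eval=FALSE}
toy_counts = rep(rep(c(0L, 200L), times = c(30, 10)), 5) # alternating 30-min sedentary and 10-min active bouts
toy_wear = rep(1, length(toy_counts))
fragmentation(x = toy_counts, w = toy_wear, thresh = 100, metrics = "TP", bout.length = 1)
# SATP should equal 1/30 and ASTP should equal 1/10
```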
/scratch/gouwar.j/cran-all/cranData/ActFrag/inst/doc/ActFrag.Rmd
---
title: "ActFrag"
author: "Junrui Di"
date: "6/24/2018"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Activity Fragmentation}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

This is the vignette for `ActFrag`. This package extracts commonly used fragmentation features from minute level actigraphy data. Recent studies have shown that, on top of total daily active/sedentary volumes, the time accumulation strategies provide more sensitive information. This package provides functions to extract commonly used fragmentation metrics to quantify such time accumulation strategies based on minute level actigraphy-measured activity counts data.

To download the package from Github

```{r, eval=FALSE}
devtools::install_github("junruidi/ActFrag")
```

And to load the package into the R environment
```{r}
library(ActFrag)
```

## 1. Data

The expected data should consist of at least one data frame of minute level activity counts, stored in a format of `data.frame` of dimension $(\sum_i d_i) \times 1442$, where $d_i$ is the number of available days for subject $i$. The order of the 1442 columns (and corresponding column names) should be "ID","Day","MIN1",..."MIN1440".

It is preferable that users also provide a `data.frame` of wear/non-wear flags with the same dimension as the activity counts. This flag data can serve the following purposes:

1. Define time regions where the subjects were wearing the devices. E.g. in NHANES 2003 - 2006, the protocol required subjects to remove the devices during sleep. Certain non-wear detection algorithms can be used (see package [`rnhanesdata`](https://github.com/andrew-leroux/rnhanesdata) ).

2. Separate sleep and wake periods to derive domain specific features. E.g. when the actigraphy record is paired with a sleep log, or when the device has built-in sleep detection algorithms.

3. Define regions where features should be calculated. E.g., we want features to be calculated only for 5:00AM to 11:00PM.

The wear/nonwear flag data should only consist of entries 0/1 representing nonwear/wear, sleep/wake, or outside/within the region of interest, respectively. This is especially crucial for calculating features like total sedentary time and fragmentation, because we are not supposed to mix sedentary time with sleep.

If no wear/nonwear flag data is supplied, users can create one using the `wear_flag` function providing the time region:

```{r, eval=FALSE}
data(example_activity_data)
count = example_activity_data$count
weartime = wear_flag(count.data = count, start = "06:00", end = "23:00")
```

In this version, we only incorporate purpose 3. For purposes 1 and 2, there are more appropriate software tools to use.

## 2. Fragmentation metrics

Frequently, studies extract total time spent in sedentary behavior (e.g., total sedentary minutes per day) or the proportion of waking hours spent sedentary. More advanced techniques have examined the effect of replacing sedentary time with active time spent either in light or moderate-vigorous intensity. For example, isotemporal substitution modeling and compositional data analysis examine the combined effects of time spent sedentary, in light and moderate-vigorous activity, and asleep, while taking into account the codependence due to the finite time during a day. Yet, most studies typically only use the total volume of sedentary time while ignoring the potential importance of accumulation patterns. 
Recent studies have suggested that such patterns (known as prolonged sedentary bouts) may provide additional sensitivity for predicting health outcomes and provide information beyond the total volume of sedentary time. 

Fragmentation metrics study the accumulation patterns of total sedentary time (TST) and total active time (TAT) by quantifying how sedentary and active bouts alternate, via summaries of the duration of, and frequency of switching between, sedentary and active bouts. They provide unique translatable insights into accumulation patterns for sedentary and active time and lead to additional findings of associations between those patterns and health outcomes on top of total sedentary volume.

Here is the list of available fragmentation metrics:

* average bout duration: bout/minute
* transition probability: re-expressed as the reciprocal of the average bout duration (see the quick check at the end of this vignette)
* Gini index: absolute variability normalized to the average bout duration
* average hazard
* power law distribution parameter

We can calculate the above-mentioned metrics for both sedentary and active bouts. Details about fragmentation can be found [here](https://www.biorxiv.org/content/early/2017/08/31/182337).

`fragmentation` and `fragmentation_long` calculate fragmentation features (for a single vector and the whole dataset, respectively). The argument `metrics`, which consists of "mean_bout", "TP", "Gini", "hazard", "power", and "all", decides which metrics to calculate. "all" will lead to all metrics.

For a single day of counts (a vector):
```{r,eval=FALSE}
data(example_activity_data)
count1 = c(t(example_activity_data$count[1,-c(1,2)]))
wear1 = c(t(example_activity_data$wear[1,-c(1,2)]))
mb = fragmentation(x = count1, w = wear1, thresh = 100, metrics = "mean_bout",bout.length = 1)
tp = fragmentation(x = count1, w = wear1, thresh = 100, metrics = "TP",bout.length = 1)
```

Given all the activity and wear/nonwear flag data for the whole dataset, users can choose to calculate fragmentation at the daily level, or aggregate bouts across all available days, by choosing either "subject" or "day" for the argument `by`:

```{r,eval=FALSE}
data(example_activity_data)
count = example_activity_data$count
wear = example_activity_data$wear
frag_by_subject = fragmentation_long(count.data = count, weartime = wear,thresh = 100, metrics = "all",by = "subject",bout.length = 1)
frag_by_day = fragmentation_long(count.data = count, weartime = wear,thresh = 100, metrics = "all",by = "day",bout.length = 1)
```
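The reciprocal relationship between the transition probabilities and the mean bout durations noted above can be checked directly on a toy count vector (a sketch; the threshold and bout structure below are illustrative only, not taken from the example data):

```{r, eval=FALSE}
toy_counts = rep(rep(c(0L, 200L), times = c(30, 10)), 5) # alternating 30-min sedentary and 10-min active bouts
toy_wear = rep(1, length(toy_counts))
fragmentation(x = toy_counts, w = toy_wear, thresh = 100, metrics = "TP", bout.length = 1)
# SATP should equal 1/30 and ASTP should equal 1/10
```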
/scratch/gouwar.j/cran-all/cranData/ActFrag/vignettes/ActFrag.Rmd
#' Read FASTA file as character vector. #' #' @param fname name of file to be read. #' #' @return character vector with names corresponding to annotations from FASTA. #' @export read_fasta = function(fname) { fas = readLines(fname) dat = grep(">", fas, value=TRUE) seq_coords = cbind(grep(">", fas)+1, c(c(grep(">", fas)-1)[-1], length(fas))) seqs = apply(seq_coords, 1, function(x) paste(fas[x[1]:x[2]], collapse="")) names(seqs) = dat names(seqs) = gsub(">", "", sapply(strsplit(names(seqs), split="\\|"), '[[', 1)) seqs } map_one_active_site = function(pos, residue, my_pep, gene, flank=7) { pos_end = pos+nchar(residue)-1 if (!all(c(pos, pos_end) %in% 1:length(my_pep))) { cat("warning: Indicated signaling sites of ", pos, pos_end, "not present in gene", gene, "of length", length(my_pep), "; skipping\n") return(NULL) } if (paste(my_pep[pos:pos_end], collapse="") != residue) { cat("warning: Indicated signaling sequence", residue, "not equal to wildtype sequence", paste(my_pep[pos:pos_end], collapse="") ,"of", gene, "at ", pos, pos_end, "; skipping\n") return(NULL) } regions = rep(FALSE, length(my_pep)) regions[intersect((pos-flank):(pos_end+flank), 1:length(my_pep))] = TRUE regions } map_all_active_sites = function(active_site, pep, gene, flank=7) { if (nrow(active_site)==0){ return(NULL) } my_pep = strsplit(pep, split="")[[1]] mat = do.call("rbind", lapply(1:nrow(active_site), function(i) map_one_active_site(active_site[i, "position"], as.character(active_site[i, "residue"]), my_pep, gene, flank=flank))) if (is.null(mat[[1]]) || nrow(mat)==0){ return(NULL) } active_site_reg = apply(mat, 2, any) active_site_rle_v = rle(active_site_reg)$values active_site_rle_p1 = cumsum(rle(active_site_reg)$lengths) active_site_rle_p0 = c(1, active_site_rle_p1[-length(active_site_rle_p1)]+1) counter = 1 for (i in 1:length(active_site_rle_v)) { active_site_reg[active_site_rle_p0[i]:active_site_rle_p1[i]] = ifelse(active_site_rle_v[i], counter, 0) counter = ifelse(active_site_rle_v[i], counter+1, counter) } active_site_reg } #pos = muts[i, "position"]; wt_residue = muts[i, "wt_residue"]; mut_residue = muts[i, "mut_residue"]; my_count = muts[i, "count"]; my_pep = strsplit(pep, split="")[[1]] map_one_mutation = function(pos, wt_residue, mut_residue, pep, gene, skip_mismatch, my_count) { if (!pos %in% 1:length(pep)) { cat("warning: proposed mutated position", pos, "not present in gene", gene, "of length", length(pep), "; skipping\n") return(NULL) } skip_word = ifelse(skip_mismatch, "", "NOT") if (pep[[pos]] != wt_residue) { cat("warning: proposed wildtype residue", wt_residue, "not equal to expected wildtype residue", pep[pos] ,"of", gene, "at position", pos, "; ", skip_word," skipping\n") if (skip_mismatch) { return(NULL) } } return(rep(pos, my_count)) } #muts=l$mutations_input; pep=l$protein_sequence; gene=l$gene; skip_mismatch=T map_all_mutations = function(muts, pep, gene, skip_mismatch) { if (nrow(muts)==0){ return(NULL) } my_pep = strsplit(pep, split="")[[1]] posi = unlist(lapply(1:nrow(muts), function(i) map_one_mutation(muts[i, "position"], muts[i, "wt_residue"], muts[i, "mut_residue"], my_pep, gene, skip_mismatch=skip_mismatch, muts[i, "count"]))) if (is.null(posi[[1]])) { return(NULL) } lapply(1:length(my_pep), function(x) which(x==posi)) } get_mutation_status = function(pos, active_site_pos, flank, mid_flank) { if (is.null(active_site_pos[[1]])) { return("") } stat = c() if (active_site_pos[pos]>0) { stat = c(stat, "DI") } proximal_flank_reg = intersect(setdiff((pos-mid_flank):(pos+mid_flank), pos), 
1:length(active_site_pos)) if (flank>0 & any(active_site_pos[proximal_flank_reg]>0)) { stat = c(stat, "N1") } distal_flank_reg = intersect(setdiff((pos-flank):(pos+flank), proximal_flank_reg), 1:length(active_site_pos)) if (flank>0 & any(active_site_pos[distal_flank_reg]>0)) { stat = c(stat, "N2") } paste(stat, collapse=",") } count_flanking_active_sites_in_sequence = function(pos, active_sites, f1, f2) { posi = c((pos-f2):(pos-f1), (pos+f1):(pos+f2)) posi = intersect(posi, 1:length(active_sites)) length(which(active_sites[posi]!=0)) } glm1 = function(form, data, type="poisson") { if (type=="nb") { return(MASS::glm.nb(stats::as.formula(form), data=data)) } else { return(stats::glm(stats::as.formula(form), family=stats::poisson, data=data)) } } # r = 1; active_site_regions = l$active_regions; mut_pos = l$mutations_per_position; active_site_pos = l$active_sites_in_sequence; dis = l$disorder assess_one_region = function(r, active_site_regions, mut_pos, active_site_pos, dis, flank, mid_flank, simplified=FALSE, type="poisson") { n_muts = sapply(mut_pos, length) rr = active_site_regions == r dfr = data.frame(rr, n_muts, dis=dis==1) h0 = try(glm1("n_muts~dis", data=dfr, type=type), T) if (simplified) { h1 = try(glm1("n_muts~rr+dis", data=dfr, type=type), T) } else { jp_sites_nearby = n2_sites_nearby = n1_sites_nearby = rep(0, length(active_site_pos)) n2_sites_nearby[active_site_regions == r] = sapply(which(active_site_regions == r), count_flanking_active_sites_in_sequence, active_site_pos, mid_flank+1, flank) n1_sites_nearby[active_site_regions == r] = sapply(which(active_site_regions == r), count_flanking_active_sites_in_sequence, active_site_pos, 1, mid_flank) jp_sites_nearby[active_site_regions == r] = sapply(which(active_site_regions == r), count_flanking_active_sites_in_sequence, active_site_pos, 0, 0) dfr = data.frame(rr, n_muts, n2_sites_nearby, n1_sites_nearby, jp_sites_nearby, dis=dis==1) h1_scope = paste("n_muts~", paste(collapse="+", setdiff(colnames(dfr), c("n_muts")))) h1 = NULL if (type=="nb") { h1 = try(MASS::stepAIC(MASS::glm.nb(n_muts~dis, data=dfr), trace=0, direction="forward", scope=stats::as.formula(h1_scope)), T) } else { h1 = try(MASS::stepAIC(stats::glm(n_muts~dis, data=dfr, family=stats::poisson), trace=0, direction="forward", scope=stats::as.formula(h1_scope)), T) } } if (any(c(class(h0)[[1]], class(h1)[[1]])=="try-error")) { return(data.frame(p=NA, low=NA, med=NA, high=NA, obs=NA, stringsAsFactors=FALSE)) } p = stats::anova(h0, h1, test="Chisq")[2, ifelse(type=="nb", "Pr(Chi)", "Pr(>Chi)")] h0_predicted_lambdas = stats::predict(h0, type="response")[rr] if (type=="nb") { exp_sampled = replicate(1000, sum(MASS::rnegbin(length(h0_predicted_lambdas), mu=h0_predicted_lambdas, theta=h0$theta), na.rm=T)) } else { exp_sampled = replicate(1000, sum(stats::rpois(n=length(h0_predicted_lambdas), lambda=h0_predicted_lambdas))) } exp_mean = mean(exp_sampled) exp_sd = stats::sd(exp_sampled) res = data.frame(p, low=exp_mean-exp_sd, med=exp_mean, high=exp_mean+exp_sd, obs=sum(n_muts[rr]), stringsAsFactors=FALSE) rm(h0,h1,dfr) gc() res } assess_all_regions = function(active_site_regions, mut_pos, active_site_pos, dis, flank, mid_flank, simplified=FALSE, all_sites_together=FALSE, type="poisson") { if (all_sites_together) { active_site_regions = (active_site_regions>0)+0 } all_reg = setdiff(unique(active_site_regions), 0) data.frame(region=all_reg, do.call("rbind", lapply(all_reg, assess_one_region, active_site_regions, mut_pos, active_site_pos, dis, flank, mid_flank, simplified, 
type=type)), stringsAsFactors=FALSE) } create_gene_record = function(gene, fasta, mut, pho, disorder, flank=7, mid_flank=2, simplified=FALSE, all_sites_together=FALSE, skip_mismatch=TRUE, type="poisson", enriched_only=TRUE) { l = list() l$gene = gene l$protein_sequence = fasta[[gene]] l$disorder = rep(0, nchar(l$protein_sequence)) if (!is.null(disorder[[gene]][[1]])) { l$disorder = as.numeric(strsplit(disorder[[gene]], split="")[[1]]) } l$mutations_input = mut[mut$gene==gene,,drop=FALSE] l$mutations_input = l$mutations_input[l$mutations_input$position %in% 1:nchar(l$protein_sequence),,drop=F] l$active_sites_input = pho[pho$gene==gene,,drop=FALSE] l$active_sites_input = l$active_sites_input[l$active_sites_input$position %in% 1:nchar(l$protein_sequence),,drop=F] if (simplified) { flank = mid_flank = 0 } l$active_regions = map_all_active_sites(l$active_sites_input, l$protein_sequence, l$gene, flank=flank) l$active_sites_in_sequence = map_all_active_sites(l$active_sites_input, l$protein_sequence, l$gene, flank=0) l$mutations_per_position = map_all_mutations(l$mutations_input, l$protein_sequence, l$gene, skip_mismatch=skip_mismatch) if (nrow(l$mutations_input)>0) { l$mutations_input$status = "" } if (nrow(l$mutations_input)>0) { l$mutations_input$active_region = 0 } if (nrow(l$active_sites_input)>0) { l$active_sites_input$active_region = 0 } if (!is.null(l$mutations_per_position[[1]])) { l$mutations_input$status = sapply(l$mutations_input$position, get_mutation_status, l$active_sites_in_sequence, flank, mid_flank) } if (!is.null(l$active_sites_in_sequence[[1]])) { regi = unique(setdiff(unique(l$active_regions), 0)) l$active_region_summary = do.call(rbind, lapply(regi, function(i) data.frame(gene, reg=i, t(summary(which(l$active_regions==i))[c("Min.", "Max.")])))) colnames(l$active_region_summary)[3:4] = c("pos0", "pos1") } if (!is.null(l$mutations_per_position[[1]]) & !is.null(l$active_sites_in_sequence[[1]])) { l$region_mutation_significance = assess_all_regions(l$active_regions, l$mutations_per_position, l$active_sites_in_sequence, l$disorder, flank, mid_flank, simplified, all_sites_together, type=type) tot = prod(l$region_mutation_significance[l$region_mutation_significance$p<0.05, "p"], na.rm=T) if (enriched_only) { tot = prod(l$region_mutation_significance[l$region_mutation_significance$p<0.05 & l$region_mutation_significance$obs>l$region_mutation_significance$med, "p"], na.rm=T) } l$total_mutation_significance = data.frame(p=tot, fdr=tot) l$active_sites_input$active_region = l$active_regions[l$active_sites_input$position] l$mutations_input$active_region = l$active_regions[l$mutations_input$position] } cat(".") gc() l } merge_report = function(all_active_sites, all_active_regions, all_region_based_pval, all_active_mutations, flank) { colnames(all_active_sites)[2] = "PTM_position" all_active_regions$actreg = paste(all_active_regions$gene, all_active_regions$reg, sep=":") all_region_based_pval$actreg = paste(all_region_based_pval$gene, all_region_based_pval$region, sep=":") all_active_sites$actreg = paste(all_active_sites$gene, all_active_sites$active_region, sep=":") all_active_mutations$actreg = paste(all_active_mutations$gene, all_active_mutations$active_region, sep=":") mat1 = merge(all_active_mutations, all_active_regions, by="actreg") mat2 = merge(mat1, all_active_sites, by="actreg") colnames(all_region_based_pval)[colnames(all_region_based_pval)=="gene"] = "gene1" mat3 = merge(mat2, all_region_based_pval, by="actreg") exclude_cols = c("actreg", "cancer_type", "gene.y", "reg", "pos0", 
"pos1", "gene.x", "active_region.y","gene.y","region", "low", "med", "high", "obs", "gene1") mat3 = mat3[,!colnames(mat3) %in% exclude_cols] colnames(mat3)[colnames(mat3)=="position"] = "mut_position" colnames(mat3)[colnames(mat3)=="active_region.x"] = "active_region" colnames(mat3)[colnames(mat3)=="p"] = "active_region_p" # exclude psites that are further away from mutations than +/- 7 # however keep all direct sites as all active regions are annotated as DI when simplified=T mat3 = mat3[mat3$status=="DI" | (mat3$PTM_position>=mat3$mut_position-flank & mat3$PTM_position<=mat3$mut_position+flank),] rownames(mat3) = NULL mat3 } check_mutations_and_sites = function(seqs_to_check, muts_to_check, sites_to_check, genes_to_check) { genes <- character() for (i in 1:length(muts_to_check$gene)) { if (muts_to_check$gene[i] %in% genes_to_check & !(muts_to_check$gene[i] %in% genes)) { seq_to_check <- seqs_to_check[muts_to_check$gene[i]] position <- muts_to_check$position[i] if (muts_to_check$wt_residue[i] == substring(seq_to_check, position, position)) { genes <- append(genes, muts_to_check$gene[i]) } } } for (i in 1:length(sites_to_check$gene)) { if (sites_to_check$gene[i] %in% genes) { seq_to_check <- seqs_to_check[sites_to_check$gene[i]] position <- sites_to_check$position[i] length <- nchar(sites_to_check$residue[i]) if (sites_to_check$residue[i] == substring(seq_to_check, position, position + length - 1)) { return(TRUE) } } } FALSE } #' Identification of active protein sites (post-translational modification sites, signalling domains, etc) with #' specific and significant mutations. #' #' @param sequences character vector of protein sequences, names are protein IDs. #' @param seq_disorder character vector of disorder in protein sequences, names are protein IDs and values are strings #' 1/0 for disordered/ordered protein residues. #' @param mutations data frame of mutations, with [gene, sample_id, position, wt_residue, mut_residue] as columns. #' @param active_sites data frame of active sites, with [gene, position, residue, kinase] as columns. Kinase field may #' be blank and is shown for informative purposes. #' @param flank numeric for selecting region size around active sites considered important for site activity. Default #' value is 7. Ignored in case of simplified analysis. #' @param mid_flank numeric for splitting flanking region size into proximal (<=X) and distal (>X). Default value is #' 2. Ignored in case of simplified analysis. #' @param mc.cores numeric for indicating number of computing cores dedicated to computation. Default value is 1. #' @param simplified true/false for selecting simplified analysis. Default value is FALSE. If TRUE, no flanking regions #' are considered and only indicated sites are tested for mutations. #' @param return_records true/false for returning a collection of gene records with more data regarding sites and #' mutations. Default value is FALSE. #' @param skip_mismatch true/false for skipping mutations whose reference protein residue does not match expected #' residue from FASTA sequence file. #' @param regression_type 'nb' for negative binomial, 'poisson' for poisson GLM. The latter is default. #' @param enriched_only true/false to indicate whether only sites with enriched active site mutations will be included #' in the final p-value estimation (TRUE is default). If FALSE, sites with less than expected mutations will be also #' included. 
#' #' @return list with the following components: #' @return all_active_mutations - table with mutations that hit or flank an active site. Additional columns of #' interest include Status (DI - direct active mutation; N1 - proximal flanking mutation; N2 - distal flanking #' mutation) and Active_region (region ID of active sites in that protein). #' @return all_active_sites - #' @return all_region_based_pval - p-values for regions of sites, statistics on observed mutations (obs) and expected #' mutations (exp, low, high based on mean and s.d. from Poisson sampling). The field Region identifies region in #' all_active_sites. # @return all_gene_based_fdr - gene-based uncorrected and FDR-corrected p-values aggregated over multiple sites. # @return gene_records - if return_records is TRUE, a list of gene-based records is returned (large size). #' @references Systematic analysis of somatic mutations in phosphorylation signaling predicts novel cancer drivers #' (2013, Molecular Systems Biology) by Juri Reimand and Gary Bader. #' @author Juri Reimand <juri.reimand@@utoronto.ca> #' @examples #' data(ActiveDriver_data) #' \donttest{ #' phos_results = ActiveDriver(sequences, sequence_disorder, mutations, phosphosites) #' ovarian_mutations = mutations[grep("ovarian", mutations$sample_id),] #' phos_results_ovarian = ActiveDriver(sequences, sequence_disorder, ovarian_mutations, phosphosites) #' GBM_muts = mutations[grep("glioblastoma", mutations$sample_id),] #' kin_rslt_GBM = ActiveDriver(sequences, sequence_disorder, GBM_muts, kinase_domains, simplified=TRUE) #' } #' kin_results = ActiveDriver(sequences, sequence_disorder, mutations, kinase_domains, simplified=TRUE) #' @export ActiveDriver = function(sequences, seq_disorder, mutations, active_sites, flank = 7, mid_flank = 2, mc.cores = 1, simplified = FALSE, return_records = FALSE, skip_mismatch = TRUE, regression_type = "poisson", enriched_only = TRUE) { # first initiate mutation counts if needed if(is.null(mutations$count)) {mutations$count=1} # then check that no counts are zero mutations = mutations[mutations$count>0,] # replace factors with characters mutations[] = lapply(mutations, as.character) active_sites[] = lapply(active_sites, as.character) # make sure positions are numeric mutations$position = as.numeric(mutations$position) active_sites$position = as.numeric(active_sites$position) # ensure case is uniform mutations$wt_residue = toupper(mutations$wt_residue) mutations$mut_residue = toupper(mutations$mut_residue) active_sites$residue = toupper(active_sites$residue) sequences = toupper(sequences) genes_to_test = Reduce(intersect, list(mutations$gene, active_sites$gene, names(sequences), names(seq_disorder))) if (length(genes_to_test)<1) { cat("Error: no genes matched in tables for mutations, active sites and sequences\n"); return(NULL) } # check if mutation and active site wt residues match with reference sequences if (check_mutations_and_sites(sequences, mutations, active_sites, genes_to_test) == FALSE) { cat("Error: wildtype residues do not match reference sequences\n") return(NULL) } cat("genes:", length(genes_to_test)) gene_records = parallel::mclapply(genes_to_test, create_gene_record, sequences, mutations, active_sites, seq_disorder, flank=flank, mid_flank=mid_flank, mc.cores=mc.cores, simplified=simplified, skip_mismatch=skip_mismatch, type=regression_type, enriched_only=enriched_only, mc.preschedule=F) names(gene_records) = sapply(gene_records, '[[', 'gene') all_gene_based_fdr = do.call("rbind", lapply(gene_records, function(gr) 
if(!is.null(gr$total_mutation_significance[[1]])) data.frame(gene=gr$gene, gr$total_mutation_significance, stringsAsFactors=FALSE))) if (!is.null(all_gene_based_fdr[[1]])) { all_gene_based_fdr$fdr = stats::p.adjust(all_gene_based_fdr$fdr, method="fdr") # update fdr values in gene records for (i in 1:nrow(all_gene_based_fdr)) { gene_records[[as.character(all_gene_based_fdr[i, "gene"])]]$total_mutation_significance[,"fdr"] = all_gene_based_fdr[i,"fdr"] } } all_region_based_pval = do.call("rbind", lapply(gene_records, function(gr) if(!is.null(gr$region_mutation_significance[[1]])) data.frame(gene=gr$gene, gr$region_mutation_significance, stringsAsFactors=FALSE))) all_active_mutations = do.call("rbind", lapply(gene_records, '[[', 'mutations_input')) all_active_mutations = all_active_mutations[all_active_mutations$status!="",] all_active_sites = do.call("rbind", lapply(gene_records, '[[', 'active_sites_input')) all_active_sites = all_active_sites[all_active_sites$active_region>0,,drop=F] all_active_regions = do.call("rbind", lapply(gene_records, '[[', 'active_region_summary')) rownames(all_active_mutations) = rownames(all_active_sites) = rownames(all_active_regions) = rownames(all_region_based_pval) = NULL ## assemble merged report merged_report = merge_report(all_active_sites, all_active_regions, all_region_based_pval, all_active_mutations, flank) results = list(all_active_mutations, all_active_sites, all_region_based_pval, all_gene_based_fdr, all_active_regions, merged_report) names(results) = c("all_active_mutations", "all_active_sites", "all_region_based_pval", "all_gene_based_fdr", "all_active_regions", "merged_report") if (return_records) { results$gene_records = gene_records } rm(gene_records) gc() results } #' Example protein sequences for ActiveDriver #' #' A dataset containing the sequences of four proteins. #' #' @docType data #' @keywords datasets #' @name sequences #' @usage data(ActiveDriver_data) #' @format A named character vector with 4 elements NULL #' Example protein disorder for ActiveDriver #' #' A dataset containing the disorder of four proteins. #' #' @docType data #' @keywords datasets #' @name sequence_disorder #' @usage data(ActiveDriver_data) #' @format A named character vector with 4 elements NULL #' Example mutations for ActiveDriver #' #' A dataset describing mis-sense mutations (i.e., substitutions in proteins). The variables are as follows: #' #' \itemize{ #' \item gene. the mutated gene #' \item sample_id. the sample where the mutation originates #' \item position. the position in the protein sequence where the mutation occurs #' \item wt_residue. the wild-type residue #' \item mut_residue. the mutant residue #' } #' #' @docType data #' @keywords datasets #' @name mutations #' @usage data(ActiveDriver_data) #' @format A data frame with 408 observations of 5 variables NULL #' Example phosphosites for ActiveDriver #' #' A dataset describing protein phosphorylation sites. The variables are as follows: #' #' \itemize{ #' \item gene. the gene symbol the phosphosite occurs in #' \item position. the position in the protein sequence where the phosphosite occurs #' \item residue. the phosphosite residue #' \item kinase. the kinase that phosphorylates this site #' } #' #' @docType data #' @keywords datasets #' @name phosphosites #' @usage data(ActiveDriver_data) #' @format A data frame with 131 observations of 4 variables NULL #' Example kinase domains for ActiveDriver #' #' A dataset describing kinase domains. The variables are as follows: #' #' \itemize{ #' \item gene. 
the gene symbol of the gene where the kinase domain occurs #' \item position. the position in the protein sequence where the kinase domain begins #' \item phos. TRUE #' \item residue. the kinase domain residues #' } #' #' @docType data #' @keywords datasets #' @name kinase_domains #' @usage data(ActiveDriver_data) #' @format A data frame with 1 observation of 4 variables NULL
# source file: ActiveDriver/R/ActiveDriver.R
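# Illustrative sketch (added for orientation, not part of the package source):
# a minimal run of the exported ActiveDriver() entry point defined above, using
# the example objects shipped with the package via data(ActiveDriver_data)
# (sequences, sequence_disorder, mutations, phosphosites, kinase_domains).
library(ActiveDriver)
data(ActiveDriver_data)

# site-centred analysis: phosphosites with the default +/-7 residue flanking regions
phos_results <- ActiveDriver(sequences, sequence_disorder, mutations, phosphosites)

# gene-level p-values, FDR-corrected across all tested genes
head(phos_results$all_gene_based_fdr)

# mutations that directly hit (DI) or flank (N1 proximal / N2 distal) an active region
head(phos_results$all_active_mutations)

# region-style analysis: whole kinase domains as sites, no flanking (simplified = TRUE)
kin_results <- ActiveDriver(sequences, sequence_disorder, mutations, kinase_domains,
                            simplified = TRUE)
head(kin_results$merged_report)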
#' ActivePathways #' #' @param scores A numerical matrix of p-values where each row is a gene and #' each column represents an omics dataset (evidence). Rownames correspond to the genes #' and colnames to the datasets. All values must be 0<=p<=1. We recommend converting #' missing values to ones. #' @param gmt A GMT object to be used for enrichment analysis. If a filename, a #' GMT object will be read from the file. #' @param background A character vector of gene names to be used as a #' statistical background. By default, the background is all genes that appear #' in \code{gmt}. #' @param geneset_filter A numeric vector of length two giving the lower and #' upper limits for the size of the annotated geneset to pathways in gmt. #' Pathways with a geneset shorter than \code{geneset_filter[1]} or longer #' than \code{geneset_filter[2]} will be removed. Set either value to NA #' to not enforce a minimum or maximum value, or set \code{geneset_filter} to #' \code{NULL} to skip filtering. #' @param cutoff A maximum merged p-value for a gene to be used for analysis. #' Any genes with merged, unadjusted \code{p > significant} will be discarded #' before testing. #' @param significant Significance cutoff for selecting enriched pathways. Pathways with #' \code{adjusted_p_val <= significant} will be selected as results. #' @param merge_method Statistical method to merge p-values. See section on Merging P-Values #' @param correction_method Statistical method to correct p-values. See #' \code{\link[stats]{p.adjust}} for details. #' @param cytoscape_file_tag The directory and/or file prefix to which the output files #' for generating enrichment maps should be written. If NA, files will not be written. #' @param color_palette Color palette from RColorBrewer::brewer.pal to color each #' column in the scores matrix. If NULL grDevices::rainbow is used by default. #' @param custom_colors A character vector of custom colors for each column in the scores matrix. #' @param color_integrated_only A character vector of length 1 specifying the color of the #' "combined" pathway contribution. #' @param scores_direction A numerical matrix of log2 transformed fold-change values where each row is a #' gene and each column represents a dataset (evidence). Rownames correspond to the genes #' and colnames to the datasets. We recommend converting missing values to zero. #' Must contain the same dimensions as the scores parameter. Datasets without directional information should be set to 0. #' @param constraints_vector A numerical vector of +1 or -1 values corresponding to the user-defined #' directional relationship between columns in scores_direction. Datasets without directional information should #' be set to 0. #' #' @return A data.table of terms (enriched pathways) containing the following columns: #' \describe{ #' \item{term_id}{The database ID of the term} #' \item{term_name}{The full name of the term} #' \item{adjusted_p_val}{The associated p-value, adjusted for multiple testing} #' \item{term_size}{The number of genes annotated to the term} #' \item{overlap}{A character vector of the genes enriched in the term} #' \item{evidence}{Columns of \code{scores} (i.e., omics datasets) that contributed #' individually to the enrichment of the term. 
Each input column is evaluated #' separately for enrichments and added to the evidence if the term is found.} #' } #' #' @section Merging P-values: #' To obtain a single p-value for each gene across the multiple omics datasets considered, #' the p-values in \code{scores} #' are merged row-wise using a data fusion approach of p-value merging. #' The eight available methods are: #' \describe{ #' \item{Fisher}{Fisher's method assumes p-values are uniformly #' distributed and performs a chi-squared test on the statistic sum(-2 log(p)). #' This method is most appropriate when the columns in \code{scores} are #' independent.} #' \item{Fisher_directional}{Fisher's method modification that allows for #' directional information to be incorporated with the \code{scores_direction} #' and \code{constraints_vector} parameters.} #' \item{Brown}{Brown's method extends Fisher's method by accounting for the #' covariance in the columns of \code{scores}. It is more appropriate when the #' tests of significance used to create the columns in \code{scores} are not #' necessarily independent. The Brown's method is therefore recommended for #' many omics integration approaches.} #' \item{DPM}{DPM extends Brown's method by incorporating directional information #' using the \code{scores_direction} and \code{constraints_vector} parameters.} #' \item{Stouffer}{Stouffer's method assumes p-values are uniformly distributed #' and transforms p-values into a Z-score using the cumulative distribution function of a #' standard normal distribution. This method is appropriate when the columns in \code{scores} #' are independent.} #' \item{Stouffer_directional}{Stouffer's method modification that allows for #' directional information to be incorporated with the \code{scores_direction} #' and \code{constraints_vector} parameters.} #' \item{Strube}{Strube's method extends Stouffer's method by accounting for the #' covariance in the columns of \code{scores}.} #' \item{Strube_directional}{Strube's method modification that allows for #' directional information to be incorporated with the \code{scores_direction} #' and \code{constraints_vector} parameters.} #' } #' #' @section Cytoscape: #' To visualize and interpret enriched pathways, ActivePathways provides an option #' to further analyse results as enrichment maps in the Cytoscape software. #' If \code{!is.na(cytoscape_file_tag)}, four files will be written that can be used #' to build enrichment maps. This requires the EnrichmentMap and enhancedGraphics apps. #' #' The four files written are: #' \describe{ #' \item{pathways.txt}{A list of significant terms and the #' associated p-value. Only terms with \code{adjusted_p_val <= significant} are #' written to this file.} #' \item{subgroups.txt}{A matrix indicating whether the significant terms (pathways) #' were also found to be significant when considering only one column from #' \code{scores}. A one indicates that term was found to be significant #' when only p-values in that column were used to select genes.} #' \item{pathways.gmt}{A Shortened version of the supplied GMT #' file, containing only the significantly enriched terms in pathways.txt. } #' \item{legend.pdf}{A legend with colours matching contributions #' from columns in \code{scores}.} #' } #' #' How to use: Create an enrichment map in Cytoscape with the file of terms #' (pathways.txt) and the shortened gmt file #' (pathways.gmt). Upload the subgroups file (subgroups.txt) as a table #' using the menu File > Import > Table from File. 
To paint nodes according #' to the type of supporting evidence, use the 'style' #' panel, set image/Chart1 to use the column `instruct` and the passthrough #' mapping type. Make sure the app enhancedGraphics is installed. #' Lastly, use the file legend.pdf as a reference for colors in the enrichment map. #' #' @examples #' fname_scores <- system.file("extdata", "Adenocarcinoma_scores_subset.tsv", #' package = "ActivePathways") #' fname_GMT = system.file("extdata", "hsapiens_REAC_subset.gmt", #' package = "ActivePathways") #' #' dat <- as.matrix(read.table(fname_scores, header = TRUE, row.names = 'Gene')) #' dat[is.na(dat)] <- 1 #' #' ActivePathways(dat, fname_GMT) #' #' @import data.table #' #' @export ActivePathways <- function(scores, gmt, background = makeBackground(gmt), geneset_filter = c(5, 1000), cutoff = 0.1, significant = 0.05, merge_method = c("Fisher", "Fisher_directional", "Brown", "DPM", "Stouffer", "Stouffer_directional", "Strube", "Strube_directional"), correction_method = c("holm", "fdr", "hochberg", "hommel", "bonferroni", "BH", "BY", "none"), cytoscape_file_tag = NA, color_palette = NULL, custom_colors = NULL, color_integrated_only = "#FFFFF0", scores_direction = NULL, constraints_vector = NULL) { merge_method <- match.arg(merge_method) correction_method <- match.arg(correction_method) ##### Validation ##### # scores if (!(is.matrix(scores) && is.numeric(scores))) stop("scores must be a numeric matrix") if (any(is.na(scores))) stop("scores cannot contain missing values, we recommend replacing NA with 1 or removing") if (any(scores < 0) || any(scores > 1)) stop("All values in scores must be in [0,1]") if (any(duplicated(rownames(scores)))) stop("scores matrix contains duplicated genes - rownames must be unique") # scores_direction and constraints_vector if (xor(!is.null(scores_direction),!is.null(constraints_vector))) stop("Both scores_direction and constraints_vector must be provided") if (!is.null(scores_direction) && !is.null(constraints_vector)){ if (!(is.numeric(constraints_vector) && is.vector(constraints_vector))) stop("constraints_vector must be a numeric vector") if (any(!constraints_vector %in% c(1,-1,0))) stop("constraints_vector must contain the values: 1, -1 or 0") if (!(is.matrix(scores_direction) && is.numeric(scores_direction))) stop("scores_direction must be a numeric matrix") if (any(is.na(scores_direction))) stop("scores_direction cannot contain missing values, we recommend replacing NA with 0 or removing") if (any(!rownames(scores_direction) %in% rownames(scores))) stop ("scores_direction gene names must match scores genes") if (is.null(colnames(scores)) || is.null(colnames(scores_direction))) stop("column names must be provided to scores and scores_direction") if (any(!colnames(scores_direction) %in% colnames(scores))) stop("scores_direction column names must match scores column names") if (length(constraints_vector) != length(colnames(scores_direction))) stop("constraints_vector should have the same number of entries as columns in scores_direction") if (merge_method %in% c("Fisher","Brown","Stouffer","Strube")) stop("Only DPM, Fisher_directional, Stouffer_directional, and Strube_directional methods support directional integration") if (any(constraints_vector %in% 0) && !all(scores_direction[,constraints_vector %in% 0] == 0)) stop("scores_direction entries must be set to 0's for columns that do not contain directional information") if (!is.null(names(constraints_vector))){ if (!all.equal(names(constraints_vector), colnames(scores_direction), 
colnames(scores)) == TRUE){ stop("the constraints_vector entries should match the order of scores and scores_direction columns") }}} # cutoff and significant stopifnot(length(cutoff) == 1) stopifnot(is.numeric(cutoff)) if (cutoff < 0 || cutoff > 1) stop("cutoff must be a value in [0,1]") stopifnot(length(significant) == 1) stopifnot(is.numeric(significant)) if (significant < 0 || significant > 1) stop("significant must be a value in [0,1]") # gmt if (!is.GMT(gmt)) gmt <- read.GMT(gmt) if (length(gmt) == 0) stop("No pathways in gmt made the geneset_filter") if (!(is.character(background) && is.vector(background))) { stop("background must be a character vector") } # geneset_filter if (!is.null(geneset_filter)) { if (!(is.numeric(geneset_filter) && is.vector(geneset_filter))) { stop("geneset_filter must be a numeric vector") } if (length(geneset_filter) != 2) stop("geneset_filter must be length 2") if (!is.numeric(geneset_filter)) stop("geneset_filter must be numeric") if (any(geneset_filter < 0, na.rm=TRUE)) stop("geneset_filter limits must be positive") } # custom_colors if (!is.null(custom_colors)){ if(!(is.character(custom_colors) && is.vector(custom_colors))){ stop("colors must be provided as a character vector") } if(length(colnames(scores)) != length(custom_colors)) stop("incorrect number of colors is provided") } if (!is.null(custom_colors) & !is.null(color_palette)){ stop("Both custom_colors and color_palette are provided. Specify only one of these parameters for node coloring.") } if (!is.null(names(custom_colors))){ if (!all(names(custom_colors) %in% colnames(scores))){ stop("names() of the custom colors vector should match the scores column names") } } # color_palette if (!is.null(color_palette)){ if (!(color_palette %in% rownames(RColorBrewer::brewer.pal.info))) stop("palette must be from the RColorBrewer package") } # color_integrated_only if(!(is.character(color_integrated_only) && is.vector(color_integrated_only))){ stop("color must be provided as a character vector") } if(1 != length(color_integrated_only)) stop("only a single color must be specified") # contribution contribution <- TRUE if (ncol(scores) == 1) { contribution <- FALSE message("scores matrix contains only one column. 
Column contributions will not be calculated") } ##### filtering and sorting #### # Remove any genes not found in the background orig_length <- nrow(scores) scores <- scores[rownames(scores) %in% background, , drop=FALSE] if(!is.null(scores_direction)){ scores_direction <- scores_direction[rownames(scores_direction) %in% background, , drop=FALSE] } if (nrow(scores) == 0) { stop("scores does not contain any genes in the background") } if (nrow(scores) < orig_length) { message(paste(orig_length - nrow(scores), "rows were removed from scores", "because they are not found in the background")) } # Filter the GMT if (!all(background %in% unique(unlist(sapply(gmt, "[", c(3)))))){ background_genes <- lapply(sapply(gmt, "[", c(3)), intersect, background) background_genes <- background_genes[lapply(background_genes,length) > 0] gmt <- gmt[names(sapply(gmt,"[",c(3))) %in% names(background_genes)] for (i in 1:length(gmt)) { gmt[[i]]$genes <- background_genes[[i]] } } if(!is.null(geneset_filter)) { orig_length <- length(gmt) if (!is.na(geneset_filter[1])) { gmt <- Filter(function(x) length(x$genes) >= geneset_filter[1], gmt) } if (!is.na(geneset_filter[2])) { gmt <- Filter(function(x) length(x$genes) <= geneset_filter[2], gmt) } if (length(gmt) == 0) stop("No pathways in gmt made the geneset_filter") if (length(gmt) < orig_length) { message(paste(orig_length - length(gmt), "terms were removed from gmt", "because they did not make the geneset_filter")) } } # merge p-values to get a single score for each gene and remove any genes # that don't make the cutoff merged_scores <- merge_p_values(scores, merge_method, scores_direction, constraints_vector) merged_scores <- merged_scores[merged_scores <= cutoff] if (length(merged_scores) == 0) stop("No genes made the cutoff") # Sort genes by p-value ordered_scores <- names(merged_scores)[order(merged_scores)] ##### enrichmentAnalysis and column contribution ##### res <- enrichmentAnalysis(ordered_scores, gmt, background) adjusted_p <- stats::p.adjust(res$adjusted_p_val, method = correction_method) res[, "adjusted_p_val" := adjusted_p] significant_indeces <- which(res$adjusted_p_val <= significant) if (length(significant_indeces) == 0) { warning("No significant terms were found") return() } if (contribution) { sig_cols <- columnSignificance(scores, gmt, background, cutoff, significant, correction_method, res$adjusted_p_val) res <- cbind(res, sig_cols[, -1]) } else { sig_cols <- NULL } # if significant result were found and cytoscape file tag exists # proceed with writing files in the working directory if (length(significant_indeces) > 0 & !is.na(cytoscape_file_tag)) { prepareCytoscape(res[significant_indeces, c("term_id", "term_name", "adjusted_p_val")], gmt[significant_indeces], cytoscape_file_tag, sig_cols[significant_indeces,], color_palette, custom_colors, color_integrated_only) } res[significant_indeces] } #' Perform pathway enrichment analysis on an ordered list of genes #' #' @param genelist character vector of gene names, in decreasing order #' of significance #' @param gmt GMT object #' @param background character vector of gene names. 
List of all genes being used #' as a statistical background #' #' @return a data.table of terms with the following columns: #' \describe{ #' \item{term_id}{The id of the term} #' \item{term_name}{The full name of the term} #' \item{adjusted_p_val}{The associated p-value adjusted for multiple testing} #' \item{term_size}{The number of genes annotated to the term} #' \item{overlap}{A character vector of the genes that overlap between the #' term and the query} #' } #' @keywords internal enrichmentAnalysis <- function(genelist, gmt, background) { dt <- data.table(term_id=names(gmt)) for (i in 1:length(gmt)) { term <- gmt[[i]] tmp <- orderedHypergeometric(genelist, background, term$genes) overlap <- genelist[1:tmp$ind] overlap <- overlap[overlap %in% term$genes] if (length(overlap) == 0) overlap <- c() set(dt, i, 'term_name', term$name) set(dt, i, 'adjusted_p_val', tmp$p_val) set(dt, i, 'term_size', length(term$genes)) set(dt, i, 'overlap', list(list(overlap))) } dt } #' Determine which terms are found to be significant using each column #' individually. #' #' @inheritParams ActivePathways #' @param pvals p-value for the pathways calculated by ActivePathways #' #' @return a data.table with columns 'term_id' and a column for each column #' in \code{scores}, indicating whether each term (pathway) was found to be #' significant or not when considering only that column. For each term, #' either report the list of related genes if that term was significant, or NA if not. columnSignificance <- function(scores, gmt, background, cutoff, significant, correction_method, pvals) { dt <- data.table(term_id=names(gmt), evidence=NA) for (col in colnames(scores)) { col_scores <- scores[, col, drop=TRUE] col_scores <- col_scores[col_scores <= cutoff] col_scores <- names(col_scores)[order(col_scores)] res <- enrichmentAnalysis(col_scores, gmt, background) set(res, i = NULL, "adjusted_p_val", stats::p.adjust(res$adjusted_p_val, correction_method)) set(res, i = which(res$adjusted_p_val > significant), "overlap", list(list(NA))) set(dt, i=NULL, col, res$overlap) } ev_names = colnames(dt[,-1:-2]) set_evidence <- function(x) { ev <- ev_names[!is.na(dt[x, -1:-2])] if(length(ev) == 0) { if (pvals[x] <= significant) { ev <- 'combined' } else { ev <- 'none' } } ev } evidence <- lapply(1:nrow(dt), set_evidence) set(dt, i=NULL, "evidence", evidence) colnames(dt)[-1:-2] = paste0("Genes_", colnames(dt)[-1:-2]) dt } #' Export the results from ActivePathways as a comma-separated values (CSV) file. #' #' @param res the data.table object with ActivePathways results. #' @param file_name location and name of the CSV file to write to. #' @export #' #' @examples #' fname_scores <- system.file("extdata", "Adenocarcinoma_scores_subset.tsv", #' package = "ActivePathways") #' fname_GMT = system.file("extdata", "hsapiens_REAC_subset.gmt", #' package = "ActivePathways") #' #' dat <- as.matrix(read.table(fname_scores, header = TRUE, row.names = 'Gene')) #' dat[is.na(dat)] <- 1 #' #' res <- ActivePathways(dat, fname_GMT) #'\donttest{ #' export_as_CSV(res, "results_ActivePathways.csv") #'} export_as_CSV = function (res, file_name) { data.table::fwrite(res, file_name) }
# source file: ActivePathways/R/ActivePathways.r
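# Illustrative sketch (not part of the package source): a minimal call to the
# exported ActivePathways() function above, using the example p-value matrix and
# GMT file shipped in the package's extdata directory, followed by export_as_CSV().
library(ActivePathways)

fname_scores <- system.file("extdata", "Adenocarcinoma_scores_subset.tsv",
                            package = "ActivePathways")
fname_GMT <- system.file("extdata", "hsapiens_REAC_subset.gmt",
                         package = "ActivePathways")

scores <- as.matrix(read.table(fname_scores, header = TRUE, row.names = "Gene"))
scores[is.na(scores)] <- 1   # missing p-values are treated as non-significant

# defaults: Fisher merging, holm correction; see the merge_method and
# correction_method arguments above for alternatives such as "Brown" and "fdr"
res <- ActivePathways(scores, fname_GMT)
res

# write the result table to disk (list columns are collapsed by data.table::fwrite)
export_as_CSV(res, file.path(tempdir(), "ActivePathways_results.csv"))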
#' Prepare files for building an enrichment map network visualization in Cytoscape #' #' This function writes four text files that are used to build an network using #' Cytoscape and the EnrichmentMap app. The files are prefixed with \code{cytoscape_file_tag}. #' The four files written are: #' \describe{ #' \item{pathways.txt}{A list of significant terms and the #' associated p-value. Only terms with \code{adjusted_p_val <= significant} are #' written to this file} #' \item{subgroups.txt}{A matrix indicating whether the significant #' pathways are found to be significant when considering only one column (i.e., type of omics evidence) from #' \code{scores}. A 1 indicates that that term is significant using only that #' column to test for enrichment analysis} #' \item{pathways.gmt}{A shortened version of the supplied GMT #' file, containing only the terms in pathways.txt.} #' \item{legend.pdf}{A legend with colours matching contributions #' from columns in \code{scores}} #' } #' #' @param terms A data.table object with the columns 'term_id', 'term_name', 'adjusted_p_val'. #' @param gmt An abridged GMT object containing only the pathways that were #' found to be significant in the ActivePathways analysis. #' @param cytoscape_file_tag The user-defined file prefix and/or directory defining the location of the files. #' @param col_significance A data.table object with a column 'term_id' and a column #' for each type of omics evidence indicating whether a term was also found to be significant or not #' when considering only the genes and p-values in the corresponding column of the \code{scores} matrix. #' If term was not found, NA's are shown in columns, otherwise the relevant lists of genes are shown. #' @param color_palette Color palette from RColorBrewer::brewer.pal to color each #' column in the scores matrix. If NULL grDevices::rainbow is used by default. #' @param custom_colors A character vector of custom colors for each column in the scores matrix. #' @param color_integrated_only A character vector of length 1 specifying the color of the "combined" pathway contribution. 
#' @import ggplot2 #' #' @return None prepareCytoscape <- function(terms, gmt, cytoscape_file_tag, col_significance, color_palette = NULL, custom_colors = NULL, color_integrated_only = "#FFFFF0") { if (!is.null(col_significance)) { # Obtain the name of each omics dataset and incorporate a 'combined' contribution tests <- colnames(col_significance)[3:length(colnames(col_significance))] tests <- substr(tests, 7, 100) tests <- append(tests, "combined") # Create a matrix of ones and zeros, where columns are omics datasets + 'combined' # and rows are enriched pathways rows <- 1:nrow(col_significance) evidence_columns = do.call(rbind, lapply(col_significance$evidence, function(x) 0+(tests %in% x))) colnames(evidence_columns) = tests col_significance = cbind(col_significance[,"term_id"], evidence_columns) # Acquire colours from grDevices::rainbow or RColorBrewer::brewer.pal if custom colors are not provided if(is.null(color_palette) & is.null(custom_colors)) { col_colors <- grDevices::rainbow(length(tests)) } else if (!is.null(custom_colors)){ if (!is.null(names(custom_colors))){ custom_colors <- custom_colors[order(match(names(custom_colors),tests))] } custom_colors <- append(custom_colors, color_integrated_only, after = match("combined",tests)) col_colors <- custom_colors } else { col_colors <- RColorBrewer::brewer.pal(length(tests),color_palette) } col_colors <- replace(col_colors, match("combined",tests),color_integrated_only) if (!is.null(names(col_colors))){ names(col_colors)[length(col_colors)] <- "combined" } instruct_str <- paste('piechart:', ' attributelist="', paste(tests, collapse=','), '" colorlist="', paste(col_colors, collapse=','), '" showlabels=FALSE', sep='') col_significance[, "instruct" := instruct_str] # Writing the Files utils::write.table(terms, file=paste0(cytoscape_file_tag, "pathways.txt"), row.names=FALSE, sep="\t", quote=FALSE) utils::write.table(col_significance, file=paste0(cytoscape_file_tag, "subgroups.txt"), row.names=FALSE, sep="\t", quote=FALSE) write.GMT(gmt, paste0(cytoscape_file_tag, "pathways.gmt")) # Making a Legend dummy_plot = ggplot(data.frame("tests" = factor(tests, levels = tests), "value" = 1), aes(tests, fill = tests)) + geom_bar() + scale_fill_manual(name = "Contribution", values=col_colors) grDevices::pdf(file = NULL) # Suppressing Blank Display Device from ggplot_gtable dummy_table = ggplot_gtable(ggplot_build(dummy_plot)) grDevices::dev.off() legend = dummy_table$grobs[[which(sapply(dummy_table$grobs, function(x) x$name) == "guide-box")]] # Estimating height & width legend_height = ifelse(length(tests) > 20, 5.5, length(tests)*0.25+1) legend_width = ifelse(length(tests) > 20, ceiling(length(tests)/20)*(max(nchar(tests))*0.05+1), max(nchar(tests))*0.05+1) ggsave(legend, device = "pdf", filename = paste0(cytoscape_file_tag, "legend.pdf"), height = legend_height, width = legend_width, scale = 1) } else { utils::write.table(terms, file=paste0(cytoscape_file_tag, "pathways.txt"), row.names=FALSE, sep="\t", quote=FALSE) write.GMT(gmt, paste0(cytoscape_file_tag, "pathways.gmt")) } }
# source file: ActivePathways/R/cytoscape.r
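# Illustrative sketch (not part of the package source): prepareCytoscape() above is
# internal and is driven through the cytoscape_file_tag argument of ActivePathways();
# when a file prefix is supplied, the four enrichment-map input files are written.
library(ActivePathways)

fname_scores <- system.file("extdata", "Adenocarcinoma_scores_subset.tsv",
                            package = "ActivePathways")
fname_GMT <- system.file("extdata", "hsapiens_REAC_subset.gmt",
                         package = "ActivePathways")
scores <- as.matrix(read.table(fname_scores, header = TRUE, row.names = "Gene"))
scores[is.na(scores)] <- 1

# any writable prefix works; here the files go into a temporary directory
out_prefix <- file.path(tempdir(), "enrichmentMap__")
res <- ActivePathways(scores, fname_GMT, cytoscape_file_tag = out_prefix)

# pathways.txt, subgroups.txt, pathways.gmt and legend.pdf, ready for the
# EnrichmentMap and enhancedGraphics apps in Cytoscape
list.files(tempdir(), pattern = "^enrichmentMap__")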
#' Read and Write GMT files #' #' Functions to read and write Gene Matrix Transposed (GMT) files and to test if #' an object inherits from GMT. #' #' A GMT file describes gene sets, such as biological terms and pathways. GMT files are #' tab delimited text files. Each row of a GMT file contains a single term with its #' database ID and a term name, followed by all the genes annotated to the term. #' #' @format #' A GMT object is a named list of terms, where each term is a list with the items: #' \describe{ #' \item{id}{The term ID.} #' \item{name}{The full name or description of the term.} #' \item{genes}{A character vector of genes annotated to this term.} #' } #' @rdname GMT #' @name GMT #' @aliases GMT gmt #' #' @param filename Location of the gmt file. #' @param gmt A GMT object. #' @param x The object to test. #' #' @return \code{read.GMT} returns a GMT object. \cr #' \code{write.GMT} returns NULL. \cr #' \code{is.GMT} returns TRUE if \code{x} is a GMT object, else FALSE. #' #' #' @examples #' fname_GMT <- system.file("extdata", "hsapiens_REAC_subset.gmt", package = "ActivePathways") #' gmt <- read.GMT(fname_GMT) #' gmt[1:10] #' gmt[[1]] #' gmt[[1]]$id #' gmt[[1]]$genes #' gmt[[1]]$name #' gmt$`REAC:1630316` #' @export read.GMT <- function(filename) { gmt <- strsplit(readLines(filename), '\t') names(gmt) <- sapply(gmt, `[`, 1) gmt <- lapply(gmt, function(x) { list(id=x[1], name=x[2], genes=x[-c(1,2)]) }) class(gmt) <- 'GMT' gmt } #' @rdname GMT #' @export write.GMT <- function(gmt, filename) { if (!is.GMT(gmt)) stop("gmt is not a valid GMT object") sink(filename) for (term in gmt) { cat(term$id, term$name, paste(term$genes, collapse="\t"), sep = "\t") cat("\n") } sink() } #' Make a background list of genes (i.e., the statistical universe) based on all the terms (gene sets, pathways) considered. #' #' Returns A character vector of all genes in a GMT object. #' #' @param gmt A \link{GMT} object. #' @return A character vector containing all genes in GMT. #' @export #' #' @examples #' fname_GMT <- system.file("extdata", "hsapiens_REAC_subset.gmt", package = "ActivePathways") #' gmt <- read.GMT(fname_GMT) #' makeBackground(gmt)[1:10] makeBackground <- function(gmt) { if (!is.GMT(gmt)) stop('gmt is not a valid GMT object') unlist(Reduce(function(x, y) union(x, y$genes), gmt, gmt[[1]]$genes)) } ##### Subsetting functions ##### # Treat as a list but return an object of "GMT" class #' @export `[.GMT` <- function(x, i) { x <- unclass(x) res <- x[i] class(res) <- c('GMT') res } #' @export `[[.GMT` <- function(x, i, exact = TRUE) { x <- unclass(x) x[[i, exact = exact]] } #' @export `$.GMT` <- function(x, i) { x[[i]] } #' @export #' @rdname GMT is.GMT <- function(x) inherits(x, 'GMT') # Print a GMT object #' @export print.GMT <- function(x, ...) { num_lines <- min(length(x), getOption("max.print", 99999)) num_trunc <- length(x) - num_lines cat(sapply(x[1:num_lines], function(a) paste(a$id, "-", a$name, "\n", paste(a$genes, collapse=", "), '\n\n'))) if (num_trunc == 1) { cat('[ reached getOption("max.print") -- omitted 1 term ]') } else if (num_trunc > 1) { cat(paste('[ reached getOption("max.print") -- ommitted', num_trunc, 'terms ]')) } }
# source file: ActivePathways/R/gmt.r
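# Illustrative sketch (not part of the package source): the GMT helpers defined
# above - reading a GMT file, deriving the statistical background, subsetting while
# keeping the GMT class, and writing a filtered copy back to disk.
library(ActivePathways)

fname_GMT <- system.file("extdata", "hsapiens_REAC_subset.gmt",
                         package = "ActivePathways")
gmt <- read.GMT(fname_GMT)

gmt[[1]]$id             # database ID of the first gene set
gmt[[1]]$name           # its full name
length(gmt[[1]]$genes)  # number of annotated genes

background <- makeBackground(gmt)   # union of all genes annotated in the GMT
length(background)

# keep only gene sets with 10 to 500 genes; `[.GMT` preserves the GMT class
sizes <- sapply(gmt, function(term) length(term$genes))
gmt_small <- gmt[sizes >= 10 & sizes <= 500]
is.GMT(gmt_small)
write.GMT(gmt_small, file.path(tempdir(), "filtered.gmt"))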
#' Merge a list or matrix of p-values #' #' @param scores Either a list/vector of p-values or a matrix where each column is a test. #' @param method Method to merge p-values. See 'methods' section below. #' @param scores_direction Either a vector of log2 transformed fold-change values or a matrix where each column is a test. #' Must contain the same dimensions as the scores parameter. Datasets without directional information should be set to 0. #' @param constraints_vector A numerical vector of +1 or -1 values corresponding to the user-defined #' directional relationship between the columns in scores_direction. Datasets without directional information should #' be set to 0. #' #' @return If \code{scores} is a vector or list, returns a number. If \code{scores} is a #' matrix, returns a named list of p-values merged by row. #' #' @section Methods: #' Eight methods are available to merge a list of p-values: #' \describe{ #' \item{Fisher}{Fisher's method (default) assumes that p-values are uniformly #' distributed and performs a chi-squared test on the statistic sum(-2 log(p)). #' This method is most appropriate when the columns in \code{scores} are #' independent.} #' \item{Fisher_directional}{Fisher's method modification that allows for #' directional information to be incorporated with the \code{scores_direction} #' and \code{constraints_vector} parameters.} #' \item{Brown}{Brown's method extends Fisher's method by accounting for the #' covariance in the columns of \code{scores}. It is more appropriate when the #' tests of significance used to create the columns in \code{scores} are not #' necessarily independent. Note that the "Brown" method cannot be used with a #' single list of p-values. However, in this case Brown's method is identical #' to Fisher's method and should be used instead.} #' \item{DPM}{DPM extends Brown's method by incorporating directional information #' using the \code{scores_direction} and \code{constraints_vector} parameters.} #' \item{Stouffer}{Stouffer's method assumes p-values are uniformly distributed #' and transforms p-values into a Z-score using the cumulative distribution function of a #' standard normal distribution. 
This method is appropriate when the columns in \code{scores} #' are independent.} #' \item{Stouffer_directional}{Stouffer's method modification that allows for #' directional information to be incorporated with the \code{scores_direction} #' and \code{constraints_vector} parameters.} #' \item{Strube}{Strube's method extends Stouffer's method by accounting for the #' covariance in the columns of \code{scores}.} #' \item{Strube_directional}{Strube's method modification that allows for #' directional information to be incorporated with the \code{scores_direction} #' and \code{constraints_vector} parameters.} #' #' } #' #' @examples #' merge_p_values(c(0.05, 0.09, 0.01)) #' merge_p_values(list(a=0.01, b=1, c=0.0015, d=0.025), method='Fisher') #' merge_p_values(matrix(data=c(0.03, 0.061, 0.48, 0.052), nrow = 2), method='Brown') #' #' @export merge_p_values <- function(scores, method = "Fisher", scores_direction = NULL, constraints_vector = NULL) { ##### Validation ##### # scores if (is.list(scores)) scores <- unlist(scores, recursive=FALSE) if (!(is.vector(scores) || is.matrix(scores))) stop("scores must be a matrix or vector") if (any(is.na(scores))) stop("scores cannot contain missing values, we recommend replacing NA with 1 or removing") if (!is.numeric(scores)) stop("scores must be numeric") if (any(scores < 0 | scores > 1)) stop("All values in scores must be in [0,1]") # scores_direction and constraints_vector if (xor(!is.null(scores_direction),!is.null(constraints_vector))) stop("Both scores_direction and constraints_vector must be provided") if (!is.null(scores_direction) && !is.null(constraints_vector)){ if (!(is.numeric(constraints_vector) && is.vector(constraints_vector))) stop("constraints_vector must be a numeric vector") if (any(!constraints_vector %in% c(1,-1,0))) stop("constraints_vector must contain the values: 1, -1 or 0") if (!(is.vector(scores_direction) || is.matrix(scores_direction))) stop("scores_direction must be a matrix or vector") if (!all(class(scores_direction) == class(scores))) stop("scores and scores_direction must be the same data type") if (any(is.na(scores_direction))) stop("scores_direction cannot contain missing values, we recommend replacing NA with 0 or removing") if (!is.numeric(scores_direction)) stop("scores_direction must be numeric") if (method %in% c("Fisher","Brown","Stouffer","Strube")) stop("Only DPM, Fisher_directional, Stouffer_directional, and Strube_directional methods support directional integration") if (is.matrix(scores_direction)){ if (any(!rownames(scores_direction) %in% rownames(scores))) stop ("scores_direction gene names must match scores genes") if (is.null(colnames(scores)) || is.null(colnames(scores_direction))) stop("column names must be provided to scores and scores_direction") if (any(!colnames(scores_direction) %in% colnames(scores))) stop("scores_direction column names must match scores column names") if (length(constraints_vector) != length(colnames(scores_direction))) stop("constraints_vector should have the same number of entries as columns in scores_direction") if (any(constraints_vector %in% 0) && !all(scores_direction[,constraints_vector %in% 0] == 0)) stop("scores_direction entries must be set to 0's for columns that do not contain directional information") if (!is.null(names(constraints_vector))){ if (!all.equal(names(constraints_vector), colnames(scores_direction), colnames(scores)) == TRUE){ stop("the constraints_vector entries should match the order of scores and scores_direction columns") }}} if 
(is.vector(scores_direction)){ if (length(constraints_vector) != length(scores_direction)) stop("constraints_vector should have the same number of entries as scores_direction") if (length(scores_direction) != length(scores)) stop("scores_direction should have the same number of entries as scores") if (any(constraints_vector %in% 0) && !all(scores_direction[constraints_vector %in% 0] == 0)) stop("scores_direction entries that do not contain directional information must be set to 0's") if (!is.null(names(constraints_vector))){ if (!all.equal(names(constraints_vector), names(scores_direction), names(scores)) == TRUE){ stop("the constraints_vector entries should match the order of scores and scores_direction") }}}} # method if (!method %in% c("Fisher", "Fisher_directional", "Brown", "DPM", "Stouffer", "Stouffer_directional", "Strube", "Strube_directional")){ stop("Only Fisher, Brown, Stouffer and Strube methods are currently supported for non-directional analysis. And only DPM, Fisher_directional, Stouffer_directional, and Strube_directional are supported for directional analysis") } if (method %in% c("Fisher_directional", "DPM", "Stouffer_directional", "Strube_directional") & is.null(scores_direction)){ stop("scores_direction and constraints_vector must be provided for directional analyses") } ##### Merge P-values ##### # Methods to merge p-values from a scores vector if (is.vector(scores)){ if (method == "Brown" || method == "Strube" || method == "DPM" || method == "Strube_directional") { stop("Brown's, DPM, Strube's, and Strube_directional methods cannot be used with a single list of p-values") } # Convert 0 or very small p-values to 1e-300 if(min(scores) < 1e-300){ message(paste('warning: p-values smaller than ', 1e-300, ' are replaced with ', 1e-300)) scores <- sapply(scores, function(x) ifelse (x < 1e-300, 1e-300, x)) } if (method == "Fisher"){ p_fisher <- stats::pchisq(fishersMethod(scores),2*length(scores), lower.tail = FALSE) return(p_fisher) } if (method == "Fisher_directional"){ p_fisher <- stats::pchisq(fishersDirectional(scores, scores_direction,constraints_vector), 2*length(scores), lower.tail = FALSE) return(p_fisher) } if (method == "Stouffer"){ p_stouffer <- 2*stats::pnorm(-1*abs(stouffersMethod(scores))) return(p_stouffer) } if (method == "Stouffer_directional"){ p_stouffer <- 2*stats::pnorm(-1*abs(stouffersDirectional(scores,scores_direction,constraints_vector))) return(p_stouffer) } } # If scores is a matrix with one column, then no p-value merging can be done if (ncol(scores) == 1) return (scores[, 1, drop=TRUE]) # If scores is a matrix with multiple columns, apply the following methods if(min(scores) < 1e-300){ message(paste('warning: p-values smaller than ', 1e-300, ' are replaced with ', 1e-300)) scores <- apply(scores, c(1,2), function(x) ifelse (x < 1e-300, 1e-300, x)) } if (method == "Fisher"){ fisher_merged <- c() for(i in 1:length(scores[,1])) { p_fisher <- stats::pchisq(fishersMethod(scores[i,]), 2*length(scores[i,]), lower.tail = FALSE) fisher_merged <- c(fisher_merged, p_fisher) } names(fisher_merged) <- rownames(scores) return(fisher_merged) } if (method == "Fisher_directional"){ fisher_merged <- c() for(i in 1:length(scores[,1])) { p_fisher <- stats::pchisq(fishersDirectional(scores[i,], scores_direction[i,], constraints_vector), 2*length(scores[i,]), lower.tail = FALSE) fisher_merged <- c(fisher_merged,p_fisher) } names(fisher_merged) <- rownames(scores) return(fisher_merged) } if (method == "Brown") { cov_matrix <- calculateCovariances(t(scores)) 
brown_merged <- brownsMethod(scores, cov_matrix = cov_matrix) return(brown_merged) } if (method == "DPM") { cov_matrix <- calculateCovariances(t(scores)) dpm_merged <- DPM(scores, cov_matrix = cov_matrix, scores_direction = scores_direction, constraints_vector = constraints_vector) return(dpm_merged) } if (method == "Stouffer"){ stouffer_merged <- c() for(i in 1:length(scores[,1])){ p_stouffer <- 2*stats::pnorm(-1*abs(stouffersMethod(scores[i,]))) stouffer_merged <- c(stouffer_merged,p_stouffer) } names(stouffer_merged) <- rownames(scores) return(stouffer_merged) } if (method == "Stouffer_directional"){ stouffer_merged <- c() for(i in 1:length(scores[,1])){ p_stouffer <- 2*stats::pnorm(-1*abs(stouffersDirectional(scores[i,], scores_direction[i,],constraints_vector))) stouffer_merged <- c(stouffer_merged,p_stouffer) } names(stouffer_merged) <- rownames(scores) return(stouffer_merged) } if (method == "Strube"){ strube_merged <- strubesMethod(scores) return(strube_merged) } if (method == "Strube_directional"){ strube_merged <- strubesDirectional(scores,scores_direction,constraints_vector) return(strube_merged) } } fishersMethod <- function(p_values) { chisq_values <- -2*log(p_values) sum(chisq_values) } fishersDirectional <- function(p_values, scores_direction, constraints_vector) { # Sum the directional chi-squared values directionality <- constraints_vector * scores_direction/abs(scores_direction) p_values_directional <- p_values[!is.na(directionality)] chisq_directional <- abs(-2 * sum(log(p_values_directional)*directionality[!is.na(directionality)])) # Sum the non-directional chi-squared values chisq_nondirectional <- abs(-2 * sum(log(p_values[is.na(directionality)]))) # Combine both sum(c(chisq_directional, chisq_nondirectional)) } #' Merge p-values using the Brown's method. #' #' @param p_values A matrix of m x n p-values. #' @param data_matrix An m x n matrix representing m tests and n samples. NA's are not allowed. #' @param cov_matrix A pre-calculated covariance matrix of \code{data_matrix}. This is more #' efficient when making many calls with the same data_matrix. #' Only one of \code{data_matrix} and \code{cov_matrix} must be given. If both are supplied, #' \code{data_matrix} is ignored. #' @return A p-value vector representing the merged significance of multiple p-values. #' @export # Based on the R package EmpiricalBrownsMethod # https://github.com/IlyaLab/CombiningDependentPvaluesUsingEBM/blob/master/R/EmpiricalBrownsMethod/R/ebm.R # Only significant differences are the removal of extra_info and allowing a # pre-calculated covariance matrix # brownsMethod <- function(p_values, data_matrix = NULL, cov_matrix = NULL) { if (missing(data_matrix) && missing(cov_matrix)) { stop ("Either data_matrix or cov_matrix must be supplied") } if (!(missing(data_matrix) || missing(cov_matrix))) { message("Both data_matrix and cov_matrix were supplied. 
Ignoring data_matrix") } if (missing(cov_matrix)) cov_matrix <- calculateCovariances(data_matrix) N <- ncol(cov_matrix) expected <- 2 * N cov_sum <- 2 * sum(cov_matrix[lower.tri(cov_matrix, diag=FALSE)]) var <- (4 * N) + cov_sum sf <- var / (2 * expected) df <- (2 * expected^2) / var if (df > 2 * N) { df <- 2 * N sf <- 1 } # Acquiring the unadjusted chi-squared values from Fisher's method fisher_chisq <- c() for(i in 1:length(p_values[,1])) { fisher_chisq <- c(fisher_chisq, fishersMethod(p_values[i,])) } # Adjusted p-value p_brown <- stats::pchisq(df=df, q=fisher_chisq/sf, lower.tail=FALSE) names(p_brown) <- rownames(p_values) p_brown } #' Merge p-values using the DPM method. #' #' @param p_values A matrix of m x n p-values. #' @param data_matrix An m x n matrix representing m tests and n samples. NA's are not allowed. #' @param cov_matrix A pre-calculated covariance matrix of \code{data_matrix}. This is more #' efficient when making many calls with the same data_matrix. #' Only one of \code{data_matrix} and \code{cov_matrix} must be given. If both are supplied, #' \code{data_matrix} is ignored. #' @param scores_direction A matrix of log2 fold-change values. Datasets without directional information should be set to 0. #' @param constraints_vector A numerical vector of +1 or -1 values corresponding to the user-defined #' directional relationship between columns in scores_direction. Datasets without directional information should #' be set to 0. #' @return A p-value vector representing the merged significance of multiple p-values. #' @export DPM <- function(p_values, data_matrix = NULL, cov_matrix = NULL, scores_direction, constraints_vector) { if (missing(data_matrix) && missing(cov_matrix)) { stop ("Either data_matrix or cov_matrix must be supplied") } if (!(missing(data_matrix) || missing(cov_matrix))) { message("Both data_matrix and cov_matrix were supplied. 
Ignoring data_matrix") } if (missing(cov_matrix)) cov_matrix <- calculateCovariances(data_matrix) N <- ncol(cov_matrix) expected <- 2 * N cov_sum <- 2 * sum(cov_matrix[lower.tri(cov_matrix, diag=FALSE)]) var <- (4 * N) + cov_sum sf <- var / (2 * expected) df <- (2 * expected^2) / var if (df > 2 * N) { df <- 2 * N sf <- 1 } # Acquiring the unadjusted chi-squared value from Fisher's method fisher_chisq <- c() for(i in 1:length(p_values[,1])) { fisher_chisq <- c(fisher_chisq, fishersDirectional(p_values[i,], scores_direction[i,],constraints_vector)) } # Adjusted p-value p_dpm <- stats::pchisq(df=df, q=fisher_chisq/sf, lower.tail=FALSE) names(p_dpm) <- rownames(p_values) p_dpm } stouffersMethod <- function (p_values){ k = length(p_values) z_values <- stats::qnorm(p_values/2) sum(z_values)/sqrt(k) } stouffersDirectional <- function (p_values, scores_direction, constraints_vector){ k = length(p_values) # Sum the directional z-values directionality <- constraints_vector * scores_direction/abs(scores_direction) p_values_directional <- p_values[!is.na(directionality)] z_directional <- abs(sum(stats::qnorm(p_values_directional/2)*directionality[!is.na(directionality)])) # Sum the non-directional z-values z_nondirectional <- abs(sum(stats::qnorm(p_values[is.na(directionality)]/2))) # Combine both z_values <- c(z_directional, z_nondirectional) sum(z_values)/sqrt(k) } strubesMethod <- function (p_values){ # Acquiring the unadjusted z-value from Stouffer's method stouffer_z <- c() for(i in 1:length(p_values[,1])){ stouffer_z <- c(stouffer_z,stouffersMethod(p_values[i,])) } # Correlation matrix cor_mtx <- stats::cor(p_values, use = "complete.obs") cor_mtx[is.na(cor_mtx)] <- 0 cor_mtx <- abs(cor_mtx) # Adjusted p-value k = length(p_values[1,]) adjusted_z <- stouffer_z * sqrt(k) / sqrt(sum(cor_mtx)) p_strube <- 2*stats::pnorm(-1*abs(adjusted_z)) names(p_strube) <- rownames(p_values) p_strube } strubesDirectional <- function (p_values, scores_direction, constraints_vector){ # Acquiring the unadjusted z-value from Stouffer's method stouffer_z <- c() for(i in 1:length(p_values[,1])){ stouffer_z <- c(stouffer_z,stouffersDirectional(p_values[i,], scores_direction[i,],constraints_vector)) } # Correlation matrix cor_mtx <- stats::cor(p_values, use = "complete.obs") cor_mtx[is.na(cor_mtx)] <- 0 cor_mtx <- abs(cor_mtx) # Adjusted p-value k = length(p_values[1,]) adjusted_z <- stouffer_z * sqrt(k) / sqrt(sum(cor_mtx)) p_strube <- 2*stats::pnorm(-1*abs(adjusted_z)) names(p_strube) <- rownames(p_values) p_strube } transformData <- function(dat) { # If all values in dat are the same (equal to y), return dat. The covariance # matrix will be the zero matrix, and brown's method gives the p-value as y # Otherwise (dat - dmv) / dvsd is NaN and ecdf throws an error if (isTRUE(all.equal(min(dat), max(dat)))) return(dat) dvm <- mean(dat, na.rm=TRUE) dvsd <- pop.sd(dat) s <- (dat - dvm) / dvsd distr <- stats::ecdf(s) sapply(s, function(a) -2 * log(distr(a))) } calculateCovariances <- function(data_matrix) { transformed_data_matrix <- apply(data_matrix, 1, transformData) stats::cov(transformed_data_matrix) } pop.var <- function(x) stats::var(x, na.rm=TRUE) * (length(x) - 1) / length(x) pop.sd <- function(x) sqrt(pop.var(x))
# source file: ActivePathways/R/merge_p.r
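# Illustrative sketch (not part of the package source): merging a small, made-up
# two-column p-value matrix with the exported merge_p_values() defined above,
# first non-directionally and then with directional information (DPM). The gene
# names, fold-change signs and constraints are invented for illustration only.
library(ActivePathways)

pvals <- matrix(c(0.01, 0.20, 0.03,
                  0.02, 0.80, 0.04),
                ncol = 2,
                dimnames = list(c("geneA", "geneB", "geneC"), c("rna", "protein")))

merge_p_values(pvals, method = "Fisher")   # assumes independent evidence columns
merge_p_values(pvals, method = "Brown")    # accounts for covariance between columns

# directional merging: constraints c(1, 1) means the two datasets are expected to
# agree in sign; geneB's discordant direction is penalised by DPM
direction <- matrix(c( 1,  1, 1,
                       1, -1, 1),
                    ncol = 2,
                    dimnames = dimnames(pvals))
merge_p_values(pvals, method = "DPM",
               scores_direction = direction, constraints_vector = c(1, 1))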
#' Hypergeometric test
#'
#' Perform a hypergeometric test, also known as Fisher's exact test, on a 2x2 contingency
#' table with the alternative hypothesis set to 'greater'. In this application, the test finds the
#' probability that counts[1, 1] or more genes would be found to be annotated to a term (pathway),
#' assuming the null hypothesis of genes being distributed randomly to terms.
#'
#' @param counts A 2x2 numerical matrix representing a contingency table.
#'
#' @return A p-value of enrichment of genes in a term or pathway.
hypergeometric <- function(counts) {
  if (any(counts < 0)) stop('counts contains negative values. Something went very wrong.')
  m <- counts[1, 1] + counts[2, 1]
  n <- counts[1, 2] + counts[2, 2]
  k <- counts[1, 1] + counts[1, 2]
  x <- counts[1, 1]
  stats::phyper(x-1, m, n, k, lower.tail=FALSE)
}


#' Ordered Hypergeometric Test
#'
#' Perform a series of hypergeometric tests (a.k.a. Fisher's exact tests) on a ranked list of genes
#' ordered by significance against a list of annotation genes. The hypergeometric tests are executed
#' with increasingly larger numbers of genes representing the top genes in order of decreasing scores.
#' The lowest p-value of the series is returned as the optimal enriched intersection of the ranked
#' list of genes and the biological term (pathway).
#'
#' @param genelist Character vector of gene names, assumed to be ordered by decreasing importance.
#'   For example, the genes could be ranked by decreasing significance of differential expression.
#' @param background Character vector of gene names. List of all genes used as a statistical
#'   background (i.e., the universe).
#' @param annotations Character vector of gene names. A gene set representing a functional term,
#'   process or biological pathway.
#'
#' @return A list with the items:
#' \describe{
#'   \item{p_val}{The lowest obtained p-value}
#'   \item{ind}{The index of \code{genelist} such that \code{genelist[1:ind]}
#'     gives the lowest p-value}
#' }
#' @export
#'
#' @examples
#' orderedHypergeometric(c('HERC2', 'SP100'), c('PHC2', 'BLM', 'XPC', 'SMC3', 'HERC2', 'SP100'),
#'     c('HERC2', 'PHC2', 'BLM'))
orderedHypergeometric <- function(genelist, background, annotations) {
  # Only test subsets of genelist that end with a gene in annotations since
  # these are the only tests for which the p-value can decrease
  which_in <- which(genelist %in% annotations)
  if (length(which_in) == 0) return(list(p_val=1, ind=1))

  # Construct the counts matrix for the first which_in[1] genes
  gl <- genelist[1:which_in[1]]
  cl <- setdiff(background, gl)
  genelist0 <- length(gl) - 1
  complement1 <- length(which(cl %in% annotations))
  complement0 <- length(cl) - complement1
  counts <- matrix(data=c(1, genelist0, complement1, complement0), nrow=2)
  scores <- hypergeometric(counts)
  if (length(which_in) == 1) return(list(p_val=scores, ind=which_in[1]))

  # Update counts and recalculate the score for the rest of the indices in which_in.
  # The genes in genelist[(which_in[i-1]+1):which_in[i]] are added to the genes
  # being tested and removed from the complement. Of these, 1 will always be
  # in annotations and the rest will not. Therefore we can just modify the
  # contingency table rather than recounting which genes are in annotations
  for (i in 2:length(which_in)) {
    diff <- which_in[i] - which_in[i-1]
    counts[1, 1] <- i
    counts[2, 1] <- counts[2, 1] + diff - 1
    counts[1, 2] <- counts[1, 2] - 1
    counts[2, 2] <- counts[2, 2] - diff + 1
    scores[i] <- hypergeometric(counts)
  }

  # Return the lowest p-value and the associated index
  min_score <- min(scores)
  ind = which_in[max(which(scores == min_score))]
  p_val = min_score
  list(p_val=p_val, ind=ind)
}
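# ---------------------------------------------------------------------------
# Illustrative sketch only: a worked example of the incremental contingency
# table logic above, using hypothetical gene identifiers (the first two genes
# come from the roxygen example; BLM is added to the ranked list here purely
# for illustration).
# ---------------------------------------------------------------------------
if (FALSE) {
  ranked_genes <- c('HERC2', 'SP100', 'BLM')
  background   <- c('PHC2', 'BLM', 'XPC', 'SMC3', 'HERC2', 'SP100')
  annotations  <- c('HERC2', 'PHC2', 'BLM')

  # Ranked test: scans prefixes of ranked_genes that end at an annotated gene
  # and returns the smallest p-value together with the prefix length
  orderedHypergeometric(ranked_genes, background, annotations)

  # For comparison, a single hypergeometric test on the full ranked list:
  # 2 of the 3 listed genes and 1 of the 3 remaining background genes are
  # annotated to the term
  hypergeometric(matrix(c(2, 1, 1, 2), nrow = 2))
}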
/scratch/gouwar.j/cran-all/cranData/ActivePathways/R/statistical_tests.r
## ----echo=FALSE--------------------------------------------------------------- knitr::opts_chunk$set(warning=FALSE, message=FALSE, width=500) options(max.print=35) library("ggplot2") library("data.table") ## ----------------------------------------------------------------------------- scores <- read.table( system.file('extdata', 'Adenocarcinoma_scores_subset.tsv', package = 'ActivePathways'), header = TRUE, sep = '\t', row.names = 'Gene') scores <- as.matrix(scores) scores ## ----------------------------------------------------------------------------- scores[is.na(scores)] <- 1 ## ----------------------------------------------------------------------------- library(ActivePathways) gmt_file <- system.file('extdata', 'hsapiens_REAC_subset.gmt', package = 'ActivePathways') ActivePathways(scores, gmt_file) ## ----------------------------------------------------------------------------- fname_data_matrix <- system.file('extdata', 'Differential_expression_rna_protein.tsv', package = 'ActivePathways') pvals_FCs <- read.table(fname_data_matrix, header = TRUE, sep = '\t') example_genes <- c('ACTN4','PIK3R4','PPIL1','NELFE','LUZP1','ITGB2') pvals_FCs[pvals_FCs$gene %in% example_genes,] ## ----------------------------------------------------------------------------- pval_matrix <- data.frame( row.names = pvals_FCs$gene, rna = pvals_FCs$rna_pval, protein = pvals_FCs$protein_pval) pval_matrix <- as.matrix(pval_matrix) pval_matrix[is.na(pval_matrix)] <- 1 pval_matrix[example_genes,] ## ----------------------------------------------------------------------------- dir_matrix <- data.frame( row.names = pvals_FCs$gene, rna = pvals_FCs$rna_log2fc, protein = pvals_FCs$protein_log2fc) dir_matrix <- as.matrix(dir_matrix) dir_matrix <- sign(dir_matrix) dir_matrix[is.na(dir_matrix)] <- 0 dir_matrix[example_genes,] ## ----------------------------------------------------------------------------- constraints_vector <- c(1,1) constraints_vector # constraints_vector <- c(1,-1) ## ----------------------------------------------------------------------------- directional_merged_pvals <- merge_p_values(pval_matrix, method = "DPM", dir_matrix, constraints_vector) merged_pvals <- merge_p_values(pval_matrix, method = "Brown") sort(merged_pvals)[1:5] sort(directional_merged_pvals)[1:5] ## ----------------------------------------------------------------------------- pvals_FCs[pvals_FCs$gene == "PIK3R4",] pval_matrix["PIK3R4",] dir_matrix["PIK3R4",] merged_pvals["PIK3R4"] directional_merged_pvals["PIK3R4"] ## ----------------------------------------------------------------------------- lineplot_df <- data.frame(original = -log10(merged_pvals), modified = -log10(directional_merged_pvals)) ggplot(lineplot_df) + geom_point(size = 2.4, shape = 19, aes(original, modified, color = ifelse(original <= -log10(0.05),"gray", ifelse(modified > -log10(0.05),"#1F449C","#F05039")))) + labs(title = "", x ="Merged -log10(P)", y = "Directional Merged -log10(P)") + geom_hline(yintercept = 1.301, linetype = "dashed", col = 'black', size = 0.5) + geom_vline(xintercept = 1.301, linetype = "dashed", col = "black", size = 0.5) + geom_abline(size = 0.5, slope = 1,intercept = 0) + scale_color_identity() ## ----------------------------------------------------------------------------- constraints_vector <- c(-1,1) constraints_vector <- c(1,-1) ## ----------------------------------------------------------------------------- constraints_vector <- c(0,0) ## ----------------------------------------------------------------------------- constraints_vector <- 
c(1,-1) ## ----------------------------------------------------------------------------- constraints_vector <- c(1,1,-1) ## ----------------------------------------------------------------------------- fname_GMT2 <- system.file("extdata", "hsapiens_REAC_subset2.gmt", package = "ActivePathways") ## ----------------------------------------------------------------------------- enriched_pathways <- ActivePathways( pval_matrix, gmt = fname_GMT2, cytoscape_file_tag = "Original_") constraints_vector <- c(1,1) constraints_vector dir_matrix[example_genes,] enriched_pathways_directional <- ActivePathways( pval_matrix, gmt = fname_GMT2, cytoscape_file_tag = "Directional_", merge_method = "DPM", scores_direction = dir_matrix, constraints_vector = constraints_vector) ## ----------------------------------------------------------------------------- pathways_lost_in_directional_integration = setdiff(enriched_pathways$term_id, enriched_pathways_directional$term_id) pathways_lost_in_directional_integration enriched_pathways[enriched_pathways$term_id %in% pathways_lost_in_directional_integration,] ## ----------------------------------------------------------------------------- wnt_pathway_id <- "REAC:R-HSA-3858494" enriched_pathway_genes <- unlist( enriched_pathways[enriched_pathways$term_id == wnt_pathway_id,]$overlap) enriched_pathway_genes ## ----------------------------------------------------------------------------- pathway_gene_pvals = pval_matrix[enriched_pathway_genes,] pathway_gene_directions = dir_matrix[enriched_pathway_genes,] directional_conflict_genes = names(which( pathway_gene_directions[,1] != pathway_gene_directions[,2] & pathway_gene_directions[,1] != 0 & pathway_gene_directions[,2] != 0)) pathway_gene_pvals[directional_conflict_genes,] pathway_gene_directions[directional_conflict_genes,] length(directional_conflict_genes) ## ----------------------------------------------------------------------------- nrow(ActivePathways(scores, gmt_file, significant = 0.05)) nrow(ActivePathways(scores, gmt_file, significant = 0.1)) ## ----------------------------------------------------------------------------- gmt <- read.GMT(gmt_file) names(gmt[[1]]) # Pretty-print the GMT gmt[1:3] # Look at the genes annotated to the first term gmt[[1]]$genes # Get the full name of Reactome pathway 2424491 gmt$`REAC:2424491`$name ## ----------------------------------------------------------------------------- gmt <- Filter(function(term) length(term$genes) >= 10, gmt) gmt <- Filter(function(term) length(term$genes) <= 500, gmt) ## ----------------------------------------------------------------------------- ActivePathways(scores, gmt) ## ----------------------------------------------------------------------------- ActivePathways(scores, gmt_file, geneset_filter = c(10, 500)) ## ----eval=FALSE--------------------------------------------------------------- # write.GMT(gmt, 'hsapiens_REAC_subset_filtered.gmt') ## ----------------------------------------------------------------------------- background <- makeBackground(gmt) background <- background[background != 'TP53'] ActivePathways(scores, gmt_file, background = background) ## ----------------------------------------------------------------------------- sort(merge_p_values(scores, 'Fisher'))[1:5] sort(merge_p_values(scores, 'Brown'))[1:5] sort(merge_p_values(scores, 'Stouffer'))[1:5] sort(merge_p_values(scores, 'Strube'))[1:5] ## ----------------------------------------------------------------------------- scores2 <- cbind(scores[, 'CDS'], merge_p_values(scores[, 
c('X3UTR', 'X5UTR', 'promCore')], 'Brown')) colnames(scores2) <- c('CDS', 'non_coding') scores[c(2179, 1760),] scores2[c(2179, 1760),] ActivePathways(scores, gmt_file) ActivePathways(scores2, gmt_file) ## ----------------------------------------------------------------------------- nrow(ActivePathways(scores, gmt_file)) nrow(ActivePathways(scores, gmt_file, cutoff = 0.01)) ## ----------------------------------------------------------------------------- nrow(ActivePathways(scores, gmt_file)) nrow(ActivePathways(scores, gmt_file, correction_method = 'none')) ## ----------------------------------------------------------------------------- res <- ActivePathways(scores, gmt_file) res ## ----------------------------------------------------------------------------- res$overlap[1:3] ## ----------------------------------------------------------------------------- unlist(res[res$term_id == "REAC:422475","evidence"]) ## ----eval = FALSE------------------------------------------------------------- # result_file <- paste('ActivePathways_results.csv', sep = '/') # export_as_CSV (res, result_file) # remove comment to run # read.csv(result_file, stringsAsFactors = F)[1:3,] ## ----eval=FALSE--------------------------------------------------------------- # result_file <- paste('ActivePathways_results2.txt', sep = '/') # data.table::fwrite(res, result_file, sep = '\t', sep2 = c('', ',', '')) # cat(paste(readLines(result_file)[1:2], collapse = '\n')) ## ----eval=FALSE--------------------------------------------------------------- # res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__") ## ----------------------------------------------------------------------------- files <- c(system.file('extdata', 'enrichmentMap__pathways.txt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__subgroups.txt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__pathways.gmt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__legend.pdf', package='ActivePathways')) ## ----eval=FALSE--------------------------------------------------------------- # gmt_file <- system.file('extdata', 'hsapiens_REAC_subset.gmt', package = 'ActivePathways') # scores_file <- system.file('extdata', 'Adenocarcinoma_scores_subset.tsv', package = 'ActivePathways') # # scores <- read.table(scores_file, header = TRUE, sep = '\t', row.names = 'Gene') # scores <- as.matrix(scores) # scores[is.na(scores)] <- 1 # # res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__") ## ----------------------------------------------------------------------------- cat(paste(readLines(files[1])[1:5], collapse='\n')) cat(paste(readLines(files[2])[1:5], collapse='\n')) cat(paste(readLines(files[3])[18:19], collapse='\n')) ## ----------------------------------------------------------------------------- res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__", color_palette = "Pastel1") ## ----------------------------------------------------------------------------- res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__", custom_colors = c("violet","green","orange","red"))
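## ----eval=FALSE---------------------------------------------------------------
# # Illustrative sketch, not purled from the vignette: the vignette notes that the
# # color of the 'combined' evidence can be set through the color_integrated_only
# # parameter; the color value used below is a hypothetical example.
# res <- ActivePathways(scores, gmt_file,
#                       cytoscape_file_tag = "enrichmentMap__",
#                       custom_colors = c("violet","green","orange","red"),
#                       color_integrated_only = "black")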
/scratch/gouwar.j/cran-all/cranData/ActivePathways/inst/doc/ActivePathways-vignette.R
--- title: "Analysing and visualising pathway enrichment in multi-omics data using ActivePathways" author: "Mykhaylo Slobodyanyuk, Jonathan Barenboim and Juri Reimand" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: toc: false vignette: > %\VignetteIndexEntry{ActivePathways} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, echo=FALSE} knitr::opts_chunk$set(warning=FALSE, message=FALSE, width=500) options(max.print=35) library("ggplot2") library("data.table") ``` # Multi-omics pathway enrichment analysis ## Introduction ActivePathways is a tool for multivariate pathway enrichment analysis that identifies gene sets, such as pathways or Gene Ontology terms, that are over-represented in a list or matrix of genes. ActivePathways uses a data fusion method to combine multiple omics datasets, prioritizes genes based on the significance and direction of signals from the omics datasets, and performs pathway enrichment analysis of these prioritized genes. We can find pathways and genes supported by single or multiple omics datasets, as well as additional genes and pathways that are only apparent through data integration and remain undetected in any single dataset alone. The new version of ActivePathways is described in our recent preprint. Mykhaylo Slobodyanyuk^, Alexander T. Bahcheli^, Zoe P. Klein, Masroor Bayati, Lisa J. Strug, Jüri Reimand. Directional integration and pathway enrichment analysis for multi-omics data. bioRxiv (2023-09-24) (^ - co-first authors) https://www.biorxiv.org/content/10.1101/2023.09.23.559116v1 The first version of ActivePathways was published in Nature Communications with the PCAWG Pan-Cancer project. Marta Paczkowska\^, Jonathan Barenboim\^, Nardnisa Sintupisut, Natalie S. Fox, Helen Zhu, Diala Abd-Rabbo, Miles W. Mee, Paul C. Boutros, PCAWG Drivers and Functional Interpretation Working Group, PCAWG Consortium, Juri Reimand. Integrative pathway enrichment analysis of multivariate omics data. Nature Communications 11 735 (2020) (\^ - co-first authors) <https://www.nature.com/articles/s41467-019-13983-9> <https://pubmed.ncbi.nlm.nih.gov/32024846/> ## Pathway enrichment analysis using the ranked hypergeometric test From a matrix of p-values, `ActivePathways` creates a ranked gene list where genes are prioritised based on their combined significance. The combined significance of each gene is determined by performing statistical data fusion on a series of omics datasets provided in the input matrix. The ranked gene list includes the most significant genes first. `ActivePathways` then performs a ranked hypergeometric test to determine if a pathway (i.e., a gene set with a common functional annotation) is enriched in the ranked gene list, by performing a series of hypergeometric tests (also known as Fisher's exact tests). In each such test, a larger set of genes from the top of the ranked gene list is considered. At the end of the series, the ranked hypergeometric test returns the top, most significant p-value from the series, corresponding to the point in the ranked gene list where the pathway enrichment reached the greatest significance of enrichment. This approach is useful when the genes in our ranked gene list have varying signals of biological importance in the input omics datasets, as the test identifies the top subset of genes that are the most relevant to the enrichment of the pathway. ## Using the package A basic example of using ActivePathways is shown below. 
We will analyse cancer driver gene predictions for a collection of cancer genomes. Each dataset (i.e., column in the matrix) contains a statistical significance score (P-value) where genes with small P-values are considered stronger candidates of cancer drivers based on the distribution of mutations in the genes. For each gene (i.e., row in the matrix), we have several predictions representing genomic elements of the gene, such as coding sequences (CDS), untranslated regions (UTR), and core promoters (promCore). To analyse these driver genes using existing knowledge of gene function, we will use gene sets corresponding to known molecular pathways from the Reactome database. These gene sets are commonly distributed in text files in the GMT format [(Gene Matrix Transposed)](https://software.broadinstitute.org/cancer/software/gsea/wiki/index.php/Data_formats#GMT:_Gene_Matrix_Transposed_file_format_.28.2A.gmt.29) file. Let's start by reading the data from the files embedded in the R package. For the p-value matrix, `ActivePathways` expects an object of the matrix class so the table has to be cast to the correct class after reading the file. ```{r} scores <- read.table( system.file('extdata', 'Adenocarcinoma_scores_subset.tsv', package = 'ActivePathways'), header = TRUE, sep = '\t', row.names = 'Gene') scores <- as.matrix(scores) scores ``` `ActivePathways` does not allow missing (NA) values in the matrix of P-values and these need to be removed. One conservative option is to re-assign all missing values as ones, indicating our confidence that the missing values are not indicative of cancer drivers. Alternatively, one may consider removing genes with NA values. ```{r} scores[is.na(scores)] <- 1 ``` ## Basic use The basic use of `ActivePathways` requires only two input parameters, the matrix of P-values with genes in rows and datasets in columns, as prepared above, and the path to the GMT file in the file system. Importantly, the gene IDs (symbols, accession numbers, etc) in the P-value matrix need to match those in the GMT file. Here we use a GMT file provided with the package. This GMT file is heavily filtered and outdated; thus users must provide their own GMT file when using the package. These GMT files can be acquired from multiple [sources](https://baderlab.org/GeneSets) such as Gene Ontology, Reactome and others. For better accuracy and statistical power these pathway databases should be combined. Acquiring an [up-to-date GMT file](http://download.baderlab.org/EM_Genesets/current_release/) is essential to avoid using unreliable outdated annotations [(see this paper)](https://www.nature.com/articles/nmeth.3963). ```{r} library(ActivePathways) gmt_file <- system.file('extdata', 'hsapiens_REAC_subset.gmt', package = 'ActivePathways') ActivePathways(scores, gmt_file) ``` ## ActivePathways 2.0 Directional integration of multi-omics data ActivePathways 2.0 extends our integrative pathway analysis framework significantly. Users can now provide directional assumptions of input omics datasets for more accurate analyses. This allows us to prioritise genes and pathways where certain directional assumptions are met and penalise those where the assumptions are violated. For example, fold-change in protein expression would be expected to associate positively with mRNA fold-change of the corresponding gene, while negative associations would be unexpected and indicate more-complex situations or potential false positives. 
We can instruct the pathway analysis to prioritise positively-associated protein/mRNA pairs and penalise negative associations (or vice versa). Two additional inputs are included in ActivePathways that allow diverse multi-omics analyses. These inputs are optional. The scores_direction and constraints_vector parameters are provided in the merge_p_values() and ActivePathways() functions to incorporate this directional penalty into the data fusion and pathway enrichment analyses. The parameter constraints_vector is a vector that allows the user to represent the expected relationship between the input omics datasets. The vector size is n_datasets. Values include +1, -1, and 0. The parameter scores_direction is a matrix that reflects the directions that the genes/transcripts/protein show in the data. The matrix size is n_genes \* n_datasets, that is the same size as the P-value matrix. This is a numeric matrix, but only the signs of the values are accounted for. ### Directional data integration at the gene level Load a dataset of P-values and fold-changes for mRNA and protein levels. This dataset is embedded in the package. Examine a few example genes. ```{r} fname_data_matrix <- system.file('extdata', 'Differential_expression_rna_protein.tsv', package = 'ActivePathways') pvals_FCs <- read.table(fname_data_matrix, header = TRUE, sep = '\t') example_genes <- c('ACTN4','PIK3R4','PPIL1','NELFE','LUZP1','ITGB2') pvals_FCs[pvals_FCs$gene %in% example_genes,] ``` Create a matrix of gene/protein P-values. Where the columns are different omics datasets (mRNA, protein) and the rows are genes. Examine a few genes in the P-value matrix, and convert missing values to P = 1. ```{r} pval_matrix <- data.frame( row.names = pvals_FCs$gene, rna = pvals_FCs$rna_pval, protein = pvals_FCs$protein_pval) pval_matrix <- as.matrix(pval_matrix) pval_matrix[is.na(pval_matrix)] <- 1 pval_matrix[example_genes,] ``` Create a matrix of gene/protein directions similarly to the P-value matrix (i.e., scores_direction). ActivePathways only uses the signs of the direction values (ie +1 or -1). If directions are missing (NA), we recommend setting the values to zero. Lets examine a few genes in the direction matrix. ```{r} dir_matrix <- data.frame( row.names = pvals_FCs$gene, rna = pvals_FCs$rna_log2fc, protein = pvals_FCs$protein_log2fc) dir_matrix <- as.matrix(dir_matrix) dir_matrix <- sign(dir_matrix) dir_matrix[is.na(dir_matrix)] <- 0 dir_matrix[example_genes,] ``` This matrix has to be accompanied by a vector that provides the expected relationship between the different datasets. Here, mRNA levels and protein levels are expected to have consistent directions: either both positive or both negative (eg log fold-change). Alternatively, we can use another vector to prioritise genes/proteins where the directions are the opposite. ```{r} constraints_vector <- c(1,1) constraints_vector # constraints_vector <- c(1,-1) ``` Now we merge the P-values of the two datasets using directional assumptions and compare these with the plain non-directional merging. The top 5 scoring genes differ if we penalise genes where this directional logic is violated: While 4 of 5 genes retain significance, the gene PIK3R4 is penalised. Interestingly, as a consequence of penalizing PIK3R4, other genes such as ITGB2 move up in rank. 
```{r} directional_merged_pvals <- merge_p_values(pval_matrix, method = "DPM", dir_matrix, constraints_vector) merged_pvals <- merge_p_values(pval_matrix, method = "Brown") sort(merged_pvals)[1:5] sort(directional_merged_pvals)[1:5] ``` PIK3R4 is penalised because the fold-changes of its mRNA and protein levels are significant and have the opposite signs: ```{r} pvals_FCs[pvals_FCs$gene == "PIK3R4",] pval_matrix["PIK3R4",] dir_matrix["PIK3R4",] merged_pvals["PIK3R4"] directional_merged_pvals["PIK3R4"] ``` To assess the impact of the directional penalty on gene merged P-value signals we create a plot showing directional results on the y axis and non-directional results on the x. Blue dots are prioritised hits, red dots are penalised. ```{r} lineplot_df <- data.frame(original = -log10(merged_pvals), modified = -log10(directional_merged_pvals)) ggplot(lineplot_df) + geom_point(size = 2.4, shape = 19, aes(original, modified, color = ifelse(original <= -log10(0.05),"gray", ifelse(modified > -log10(0.05),"#1F449C","#F05039")))) + labs(title = "", x ="Merged -log10(P)", y = "Directional Merged -log10(P)") + geom_hline(yintercept = 1.301, linetype = "dashed", col = 'black', size = 0.5) + geom_vline(xintercept = 1.301, linetype = "dashed", col = "black", size = 0.5) + geom_abline(size = 0.5, slope = 1,intercept = 0) + scale_color_identity() ``` ### Constraints vector intuition The constraints_vector parameter is provided in the merge_p_values() and ActivePathways() functions to incorporate directional penalty into the data fusion and pathway enrichment analyses. The constraints_vector parameter is a vector of size n_datasets that allows the user to represent the expected relationship between the input omics datasets, and includes the values +1, -1, and 0. The constraints_vector should reflect the expected *relative* directional relationship between datasets. For example, the two constraints_vector values shown below are functionally identical. ```{r} constraints_vector <- c(-1,1) constraints_vector <- c(1,-1) ``` We use different constraints_vector values depending on the type of data we are analyzing and the specific question we are asking. The simplest directional use case and the default for the ActivePathways package is to assume that the datasets have no directional constraints. This is useful when merging datasets where the relative impact of gene perturbations is not clear. For example, gene-level mutational burden and chromatin-immunoprecipitation sequencing (ChIP-seq) experiments can provide gene-level information, however, we cannot know from these datatypes whether gene function is increased or decreased. Therefore, we would set the constraints_vector for gene mutational burden and ChIP-seq datatypes to 0. ```{r} constraints_vector <- c(0,0) ``` When combining datasets that have directional information, we provide the expected relative directions between datasets in the constraints_vector. This is useful when measuring different data types or different conditions. To investigate the transcriptomic differences following gene knockdown or overexpression, we would provide constraints_vector values that reflect the expected opposing relationship between gene knockdown and gene overexpression. ```{r} constraints_vector <- c(1,-1) ``` The constraints_vector is also useful when merging different data types such as gene and protein expression, promoter methylation, and chromatin accessibility. 
Importantly, the expected relative direction between each datatype must be closely considered given the experimental conditions. For example, when combining gene expression, protein expression, and promoter methylation data measured in the same biological condition, we would provide a constraints_vector to reflect the expected agreement between gene and protein expression and disagreement with promoter methylation. ```{r} constraints_vector <- c(1,1,-1) ``` The intuition for merging these datasets is that direction of change in gene expression and protein expression tend to associate with each other according to the central dogma of biology while the direction of change of promoter methylation has the opposite effect as methylation insulates genes from transcription. ### Pathway-level insight To explore how changes on the individual gene level impact biological pathways, we can compare results before and after incorporating a directional penalty. Use the example GMT file embedded in the package. ```{r} fname_GMT2 <- system.file("extdata", "hsapiens_REAC_subset2.gmt", package = "ActivePathways") ``` First perform integrative pathway enrichment analysis with no directionality. Then perform directional integration and pathway enrichment analysis. For this analysis the directional coefficients and constraints_vector are from the gene-based analysis described above. ```{r} enriched_pathways <- ActivePathways( pval_matrix, gmt = fname_GMT2, cytoscape_file_tag = "Original_") constraints_vector <- c(1,1) constraints_vector dir_matrix[example_genes,] enriched_pathways_directional <- ActivePathways( pval_matrix, gmt = fname_GMT2, cytoscape_file_tag = "Directional_", merge_method = "DPM", scores_direction = dir_matrix, constraints_vector = constraints_vector) ``` Examine the pathways that are lost when directional information is incorporated in the data integration. ```{r} pathways_lost_in_directional_integration = setdiff(enriched_pathways$term_id, enriched_pathways_directional$term_id) pathways_lost_in_directional_integration enriched_pathways[enriched_pathways$term_id %in% pathways_lost_in_directional_integration,] ``` An example of a lost pathway is Beta-catenin independent WNT signaling. Out of the 32 genes that contribute to this pathway enrichment, 10 genes are in directional conflict. The enrichment is no longer identified when these genes are penalised due to the conflicting log2 fold-change directions. ```{r} wnt_pathway_id <- "REAC:R-HSA-3858494" enriched_pathway_genes <- unlist( enriched_pathways[enriched_pathways$term_id == wnt_pathway_id,]$overlap) enriched_pathway_genes ``` Examine the pathway genes that have directional disagreement and contribute to the lack of pathway enrichment in the directional analysis ```{r} pathway_gene_pvals = pval_matrix[enriched_pathway_genes,] pathway_gene_directions = dir_matrix[enriched_pathway_genes,] directional_conflict_genes = names(which( pathway_gene_directions[,1] != pathway_gene_directions[,2] & pathway_gene_directions[,1] != 0 & pathway_gene_directions[,2] != 0)) pathway_gene_pvals[directional_conflict_genes,] pathway_gene_directions[directional_conflict_genes,] length(directional_conflict_genes) ``` To visualise differences in biological pathways between ActivePathways analyses with or without a directional penalty, we combine both outputs into a single enrichment map for [plotting](#visualizing-directional-impact-with-node-borders). 
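The same check can be generalised to every pathway lost in the directional analysis. The short sketch below assumes the `enriched_pathways`, `dir_matrix` and `pathways_lost_in_directional_integration` objects created above; the small helper function is written here for illustration and is not part of the ActivePathways package.

```{r, eval=FALSE}
# Count overlap genes with conflicting mRNA/protein directions for one pathway
count_directional_conflicts <- function(term_id) {
  genes <- unlist(enriched_pathways[enriched_pathways$term_id == term_id,]$overlap)
  dirs <- dir_matrix[genes, , drop = FALSE]
  sum(dirs[,1] != dirs[,2] & dirs[,1] != 0 & dirs[,2] != 0)
}

# One count per pathway lost after adding the directional penalty
sapply(pathways_lost_in_directional_integration, count_directional_conflicts)
```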
## Significance threshold and returning all results A pathway is considered to be significantly enriched if it has `adjusted_p_val <= significant`. The parameter `significant` represents the maximum adjusted P-value for a resulting pathway to be considered statistically significant. Only the significant pathways are returned. P-values from pathway enrichment analysis are adjusted for multiple testing correction to provide a more conservative analysis (see below). ```{r} nrow(ActivePathways(scores, gmt_file, significant = 0.05)) nrow(ActivePathways(scores, gmt_file, significant = 0.1)) ``` ## GMT objects In the most common use case, a GMT file is downloaded from a database and provided directly to `ActivePathways` as a location in the file system. In some cases, it may be useful to load a GMT file separately for preprocessing. The ActivePathways package includes an interface for working with GMT objects. The GMT object can be read from a file using the `read.GMT` function. The GMT is structured as a list of terms (e.g., molecular pathways, biological processes, etc.). In the GMT object, each term is a list containing an id, a name, and the list of genes associated with this term. ```{r} gmt <- read.GMT(gmt_file) names(gmt[[1]]) # Pretty-print the GMT gmt[1:3] # Look at the genes annotated to the first term gmt[[1]]$genes # Get the full name of Reactome pathway 2424491 gmt$`REAC:2424491`$name ``` The most common processing step for GMT files is the removal of gene sets that are too large or small. Here we remove pathways (gene sets) that have less than 10 or more than 500 annotated genes. ```{r} gmt <- Filter(function(term) length(term$genes) >= 10, gmt) gmt <- Filter(function(term) length(term$genes) <= 500, gmt) ``` The new GMT object can now be used for analysis with `ActivePathways` ```{r} ActivePathways(scores, gmt) ``` This filtering can also be done automatically using the `geneset_filter` option to the `ActivePathways` function. By default, `ActivePathways` removes gene sets with less than five or more than a thousand genes from the GMT prior to analysis. In general, gene sets that are too large are likely not specific and less useful in interpreting the data and may also cause statistical inflation of enrichment scores in the analysis. Gene sets that are too small are likely too specific for most analyses and make the multiple testing corrections more stringent, potentially causing deflation of results. A stricter filter can be applied by running `ActivePathways` with the parameter `geneset_filter = c(10, 500)`. ```{r} ActivePathways(scores, gmt_file, geneset_filter = c(10, 500)) ``` This GMT object can be saved to a file ```{r, eval=FALSE} write.GMT(gmt, 'hsapiens_REAC_subset_filtered.gmt') ``` ## Background gene set for statistical analysis To perform pathway enrichment analysis, a global set of genes needs to be defined as a statistical background set. This represents the universe of all genes in the organism that the analysis can potentially consider. By default, this background gene set includes every gene that is found in the GMT file in any of the biological processes and pathways. Another option is to provide the full set of all protein-coding genes, however, this may cause statistical inflation of the results since a sizable fraction of all protein-coding genes still lack any known function. Sometimes the statistical background set needs to be considerably narrower than the GMT file or the full set of genes. 
Genes need to be excluded from the background if the analysis or experiment specifically excluded these genes initially. An example would be a targeted screen or sequencing panel that only considered a specific class of genes or proteins (e.g., kinases). In analysing such data, all non-kinase genes need to be excluded from the background set to avoid statistical inflation of all gene sets related to kinase signalling, phosphorylation and similar functions. To alter the background gene set in `ActivePathways`, one can provide a character vector of gene names that make up the statistical background set. In this example, we start from the original list of genes in the entire GMT and remove one gene, the tumor suppressor TP53. The new background set is then used for the ActivePathways analysis. ```{r} background <- makeBackground(gmt) background <- background[background != 'TP53'] ActivePathways(scores, gmt_file, background = background) ``` Note that only the genes found in the background set are used for testing enrichment. Any genes in the input data that are not in the background set will be automatically removed by `ActivePathways`. ## Merging p-values A key feature of `ActivePathways` is the integration of multiple complementary omics datasets to prioritise genes for the pathway analysis. In this approach, genes with significant scores in multiple datasets will get the highest priority, and certain genes with weak scores in multiple datasets may be ranked higher, highlighting functions that would be missed when only single datasets were analysed. `ActivePathways` accomplishes this by merging the series of p-values in the columns of the scores matrix for each gene into a single combined P-value. The four methods to merge P-values are Fisher's method (the default), Brown's method (extension of Fisher's), Stouffer's method and Strube's method (extension of Stouffer's). Each of these methods have been extended to account for the directional activity of genes across multi-omics datasets with Fisher_directional, DPM, Stouffer_directional, and Strube_directional methods. The Brown's and Strube's methods are more conservative in the case when the input datasets show some large-scale similarities (i.e., covariation), since they will take that into account when prioritising genes across similar datasets. The Brown's or Strube's method are recommended for most cases since omics datasets are often not statistically independent of each other and genes with high scores in one dataset are more likely to have high scores in another dataset just by chance. The following example compares the merged P-values of the first few genes between the four methods. Fisher's and Stouffer's method are two alternative strategies to merge p-values and as a result the top scoring genes and p-values may differ. The genes with the top scores for Brown's method are the same as Fisher's, but their P-values are more conservative. This is the case for Strube's method as well, in which the top scoring genes are the same as Stouffer's method, but the P-values are more conservative. ```{r} sort(merge_p_values(scores, 'Fisher'))[1:5] sort(merge_p_values(scores, 'Brown'))[1:5] sort(merge_p_values(scores, 'Stouffer'))[1:5] sort(merge_p_values(scores, 'Strube'))[1:5] ``` This function can be used to combine some of the data before the analysis for any follow-up analysis or visualisation. 
For example, we can merge the columns `X5UTR`, `X3UTR`, and `promCore` into a single `non_coding` column (these correspond to predictions of driver mutations in 5'UTRs, 3'UTRs, and core promoters of genes, respectively). This will consider the three non-coding regions as a single column, rather than giving them all equal weight to the `CDS` column. ```{r} scores2 <- cbind(scores[, 'CDS'], merge_p_values(scores[, c('X3UTR', 'X5UTR', 'promCore')], 'Brown')) colnames(scores2) <- c('CDS', 'non_coding') scores[c(2179, 1760),] scores2[c(2179, 1760),] ActivePathways(scores, gmt_file) ActivePathways(scores2, gmt_file) ``` ## Cutoff for filtering the ranked gene list for pathway enrichment analysis To perform pathway enrichment of the ranked gene list of merged P-values, `ActivePathways` defines a P-value cutoff to filter genes that have little or no significance in the series of omics datasets. This threshold represents the maximum p-value for a gene to be considered of interest in our analysis. The threshold is `0.1` by default but can be changed using the `cutoff` option. The default option considers raw P-values that have not been adjusted for multiple-testing correction. Therefore, the default option provides a relatively lenient approach to filtering the input data. This is useful for finding additional genes with weaker signals that associate with well-annotated and strongly significant genes in the pathway and functional context. ```{r} nrow(ActivePathways(scores, gmt_file)) nrow(ActivePathways(scores, gmt_file, cutoff = 0.01)) ``` ## Adjusting P-values using multiple testing correction Multiple testing correction is essential in the analysis of omics data since each analysis routinely considers thousands of hypotheses and apparently significant P-values will occur by chance alone. `ActivePathways` uses multiple testing correction at the level of pathways as P-values from the ranked hypergeometric test are adjusted for multiple testing (note that the ranked gene list provided to the ranked hypergeometric test remain unadjusted for multiple testing by design). The package uses the `p.adjust` function of base R to run multiple testing corrections and all methods in this function are available. By default, 'holm' correction is used. The option `correction_method = 'none'` can be used to override P-value adjustment (not recommended in most cases). ```{r} nrow(ActivePathways(scores, gmt_file)) nrow(ActivePathways(scores, gmt_file, correction_method = 'none')) ``` ## The results table of ActivePathways Consider the results object from the basic use case of `ActivePathways` ```{r} res <- ActivePathways(scores, gmt_file) res ``` The columns `term_id`, `term_name`, and `term_size` give information about each pathway detected in the enrichment analysis. The `adjusted_p_val` column with the adjusted P-value indicates the confidence that the pathway is enriched after multiple testing correction. The `overlap` column provides the set of genes from the integrated gene list that occur in the given enriched gene set (i.e., molecular pathway or biological process). These genes were quantified across multiple input omics datasets and prioritized based on their joint significance in the input data. Note that the genes with the strongest scores across the multiple datasets are listed first. 
```{r} res$overlap[1:3] ``` This column is useful for further data analysis, allowing the researcher to go from the space of enriched pathways back to the space of individual genes and proteins involved in pathways and their input omics datasets. The `evidence` column provides insights to which of the input omics datasets (i.e., columns in the scores matrix) contributed to the discovery of this pathway or process in the integrated enrichment analysis. To achieve this level of detail, `ActivePathways` also analyses the gene lists ranked by the individual columns of the input matrix to detect enriched pathways. The `evidence` column lists the name of a given column of the input matrix if the given pathway is detected both in the integrated analysis and the analysis of the individual column. For example, in this analysis the majority of the detected pathways have only 'CDS' as their evidence, since these pathways were found to be enriched in data fusion through P-value merging and also by analysing the gene scores in the column `CDS` (for reference, CDS corresponds to protein-coding sequence where the majority of known driver mutations have been found). As a counter-example, the record for the pathway `REAC:422475` in our results lists as evidence `list('X3UTR', 'promCore')`, meaning that the pathway was found to be enriched when considering either the `X3UTR` column, the `promCore` column, or the combined omics datasets. ```{r} unlist(res[res$term_id == "REAC:422475","evidence"]) ``` Finally, if a pathway is found to be enriched only with the combined data and not in any individual column, 'combined' will be listed as the evidence. This subset of results may be particularly interesting since it highlights complementary aspects of the analysis that would remain hidden in the analysis of any input omics dataset separately. The following columns named as `Genes_{column}` help interpret how each pathway was detected in the multi-omics data integration, as listed in the column `evidence`. These columns show the genes present in the pathway and any of the input omics datasets. If the given pathway was not identified using the scores of the given column of the input scores matrix, an `NA` value is shown. Again, the genes are ranked by the significance of their scores in the input data, to facilitate identification of the most relevant genes in the analysis. ## Writing results to a CSV file The results are returned as a `data.table` object due to some additional data structures needed to store lists of gene IDs and supporting evidence. The usual R functions `write.table` and `write.csv` will struggle with exporting the data unless the gene and evidence lists are manually transformed as strings. Fortunately, the `fwrite` function of `data.table` can be used to write the file directly and the ActivePathways package includes the function `export_as_CSV` as a shortcut that uses the vertical bar symbol to concatenate gene lists. ```{r, eval = FALSE} result_file <- paste('ActivePathways_results.csv', sep = '/') export_as_CSV (res, result_file) # remove comment to run read.csv(result_file, stringsAsFactors = F)[1:3,] ``` The `fwrite` can be called directly for customised output. 
```{r, eval=FALSE} result_file <- paste('ActivePathways_results2.txt', sep = '/') data.table::fwrite(res, result_file, sep = '\t', sep2 = c('', ',', '')) cat(paste(readLines(result_file)[1:2], collapse = '\n')) ``` # Visualising pathway enrichment results using enrichment maps in Cytoscape The Cytoscape software and the EnrichmentMap app provide powerful tools to visualise the enriched pathways from `ActivePathways` as a network (i.e., an Enrichment Map). To facilitate this visualisation step, `ActivePathways` provides the files needed for building enrichment maps. To create these files, a file prefix must be supplied to `ActivePathways` using the argument `cytoscape_file_tag`. The prefix can be a path to an existing writable directory. ```{r, eval=FALSE} res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__") ``` Four files are written using the prefix: - `enrichmentMap__pathways.txt` contains the table of significant terms (i.e. molecular pathways, biological processes, other gene sets) and the associated adjusted P-values. Note that only terms with `adjusted_p_val <= significant` are written. - `enrichmentMap__subgroups.txt` contains a matrix indicating the columns of the input matrix of P-values that contributed to the discovery of the corresponding pathways. These values correspond to the `evidence` evaluation of input omics datasets discussed above, where a value of one indicates that the pathway was also detectable using a specific input omics dataset. A value of zero indicates otherwise. This file will not be generated if a single-column matrix of scores corresponding to just one omics dataset is provided to `ActivePathways`. - `enrichmentMap__pathways.gmt` contains a shortened version of the supplied GMT file which consists of only the significant pathways detected by `ActivePathways`. - `enrichmentMap__legend.pdf` is a pdf file that displays a color legend of different omics datasets visualised in the enrichment map that can be used as a reference. ## Creating enrichment maps using the ActivePathways results The following sections will discuss how to create a pathway enrichment map using the results from `ActivePathways`. The datasets analysed earlier in the vignette will be used. To follow the steps, save the required files from `ActivePathways` in an accessible location. ## Required software 1. Cytoscape, see <https://cytoscape.org/download.html> 2. EnrichmentMap app of Cytoscape, see menu Apps\>App manager or <https://apps.cytoscape.org/apps/enrichmentmap> 3. EnhancedGraphics app of Cytoscape, see menu Apps\>App manager or <https://apps.cytoscape.org/apps/enhancedGraphics> ## Required files `ActivePathways` writes four files that are used to build enrichment maps in Cytoscape. ```{r} files <- c(system.file('extdata', 'enrichmentMap__pathways.txt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__subgroups.txt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__pathways.gmt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__legend.pdf', package='ActivePathways')) ``` The following commands will perform the basic analysis again and write output files required for generating enrichment maps into the current working directory of the R session. All file names use the prefix `enrichmentMap__`. The generated files are also available in the `ActivePathways` R package as shown above. 
```{r, eval=FALSE} gmt_file <- system.file('extdata', 'hsapiens_REAC_subset.gmt', package = 'ActivePathways') scores_file <- system.file('extdata', 'Adenocarcinoma_scores_subset.tsv', package = 'ActivePathways') scores <- read.table(scores_file, header = TRUE, sep = '\t', row.names = 'Gene') scores <- as.matrix(scores) scores[is.na(scores)] <- 1 res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__") ``` The four files written are: - `enrichmentMap__pathways.txt`, a table of significant pathways and the associated adjusted P-values. - `enrichmentMap__subgroups.txt`, a table of pathways and corresponding omics datasets supporting the enrichment of those pathways. This corresponds to the `evidence` column of the `ActivePathways` result object discussed above. - `enrichmentMap__pathways.gmt`, a shortened version of the supplied GMT file which consists of only the significant pathways detected by `ActivePathways`. - `enrichmentMap__legend.pdf`, a reference color legend of different omics datasets visualised in the enrichment map. The following code will examine a few lines of the files generated by `ActivePathways`. ```{r} cat(paste(readLines(files[1])[1:5], collapse='\n')) cat(paste(readLines(files[2])[1:5], collapse='\n')) cat(paste(readLines(files[3])[18:19], collapse='\n')) ``` ## Creating the enrichment map - Open the Cytoscape software. - Ensure that the apps *EnrichmentMap* and *enchancedGraphics* are installed. Apps may be installed by clicking in the menu *Apps -\> App Manager*. - Select *Apps -\> EnrichmentMap*. - In the following dialogue, click the button `+` *Add Data Set from Files* in the top left corner of the dialogue. - Change the Analysis Type to Generic/gProfiler/Enrichr. - Upload the files `enrichmentMap__pathways.txt` and `enrichmentMap__pathways.gmt` in the *Enrichments* and *GMT* fields, respectively. - Click the checkbox *Show Advanced Options* and set *Cutoff* to 0.6. - Then click *Build* in the bottom-right corner to create the enrichment map. ![](CreateEnrichmentMapDialogue_V2.png) ![](NetworkStep1_V2.png) ## Colour the nodes of the network to visualise supporting omics datasets To color nodes in the network (i.e., molecular pathways, biological processes) according to the omics datasets supporting the enrichments, the third file `enrichmentMap__subgroups.txt` needs to be imported to Cytoscape directly. To import the file, activate the menu option *File -\> Import -\> Table from File* and select the file `enrichmentMap__subgroups.txt`. In the following dialogue, select *To a Network Collection* in the dropdown menu *Where to Import Table Data*. Click OK to proceed. ![](ImportStep_V2.png) Next, Cytoscape needs to use the imported information to color nodes using a pie chart visualisation. To enable this, click the Style tab in the left control panel and select the Image/Chart1 Property in a series of dropdown menus (*Properties -\> Paint -\> Custom Paint 1 -\> Image/Chart 1*). ![](PropertiesDropDown2_V2.png) The *image/Chart 1* property now appears in the Style control panel. Click the triangle on the right, then set the *Column* to *instruct* and the *Mapping Type* to *Passthrough Mapping*. ![](StylePanel_V2.png) This step colours the nodes corresponding to the enriched pathways according to the supporting omics datasets, based on the scores matrix initially analysed in `ActivePathways`. 
![](NetworkStep2_V2.png) To allow better interpretation of the enrichment map, `ActivePathways` generates a color legend in the file `enrichmentMap__legend.pdf` that shows which colors correspond to which omics datasets. ![](LegendView.png) Note that one of the colors corresponds to a subset of enriched pathways with *combined* evidence that were only detected through data fusion and P-value merging and not when any of the input datasets were detected separately. This exemplifies the added value of integrative multi-omics pathway enrichment analysis. ## Visualizing directional impact with node borders {#visualizing-directional-impact-with-node-borders} From the drop-down Properties menu, select *Border Line Type*. ![](border_line_type.jpg) Set *Column* to *directional impact* and *Mapping Type* to *Discrete Mapping*. Now we can compare findings between a non-directional and a directional method. We highlight pathways that were shared (0), lost (1), and gained (2) between the approaches. Here, we have solid lines for the shared pathways, dots for the lost pathways, and vertical lines for the gained pathways. Border widths can be adjusted in the *Border Width* property, again with discrete mapping. ![](set_aesthetic.jpg) This step changes node borders in the aggregated enrichment map, depicting the additional information provided by directional impact. ![](new_map.png) ![](legend.png) ## Alternative node coloring For a more diverse range of colors, ActivePathways supports any color palette from RColorBrewer. The color_palette parameter must be provided. ```{r} res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__", color_palette = "Pastel1") ``` ![](LegendView_RColorBrewer.png) Instead, to manually input the color of each dataset the custom_colors parameter must be specified as a vector. This vector should contain the same number of colors as columns in the scores matrix. ```{r} res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__", custom_colors = c("violet","green","orange","red")) ``` ![](LegendView_Custom.png) To change the color of the *combined* contribution, a color must be provided to the color_integrated_only parameter. Tip: if the coloring of nodes did not work in Cytoscape after setting the options in the Style panel, check that the EnhancedGraphics Cytoscape app is installed. # References - Mykhaylo Slobodyanyuk^, Alexander T. Bahcheli^, Zoe P. Klein, Masroor Bayati, Lisa J. Strug, Jüri Reimand. Directional integration and pathway enrichment analysis for multi-omics data. bioRxiv (2023-09-24) (^ - co-first authors) <https://www.biorxiv.org/content/10.1101/2023.09.23.559116v1>. - Integrative Pathway Enrichment Analysis of Multivariate Omics Data. Paczkowska M, Barenboim J, Sintupisut N, Fox NS, Zhu H, Abd-Rabbo D, Mee MW, Boutros PC, PCAWG Drivers and Functional Interpretation Working Group; Reimand J, PCAWG Consortium. Nature Communications (2020) <https://pubmed.ncbi.nlm.nih.gov/32024846/> <https://doi.org/10.1038/s41467-019-13983-9>. - Pathway Enrichment Analysis and Visualization of Omics Data Using g:Profiler, GSEA, Cytoscape and EnrichmentMap. Reimand J, Isserlin R, Voisin V, Kucera M, Tannus-Lopes C, Rostamianfar A, Wadi L, Meyer M, Wong J, Xu C, Merico D, Bader GD. Nature Protocols (2019) <https://pubmed.ncbi.nlm.nih.gov/30664679/> <https://doi.org/10.1038/s41596-018-0103-9>.
/scratch/gouwar.j/cran-all/cranData/ActivePathways/inst/doc/ActivePathways-vignette.Rmd
--- title: "Analysing and visualising pathway enrichment in multi-omics data using ActivePathways" author: "Mykhaylo Slobodyanyuk, Jonathan Barenboim and Juri Reimand" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: toc: false vignette: > %\VignetteIndexEntry{ActivePathways} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, echo=FALSE} knitr::opts_chunk$set(warning=FALSE, message=FALSE, width=500) options(max.print=35) library("ggplot2") library("data.table") ``` # Multi-omics pathway enrichment analysis ## Introduction ActivePathways is a tool for multivariate pathway enrichment analysis that identifies gene sets, such as pathways or Gene Ontology terms, that are over-represented in a list or matrix of genes. ActivePathways uses a data fusion method to combine multiple omics datasets, prioritizes genes based on the significance and direction of signals from the omics datasets, and performs pathway enrichment analysis of these prioritized genes. We can find pathways and genes supported by single or multiple omics datasets, as well as additional genes and pathways that are only apparent through data integration and remain undetected in any single dataset alone. The new version of ActivePathways is described in our recent preprint. Mykhaylo Slobodyanyuk^, Alexander T. Bahcheli^, Zoe P. Klein, Masroor Bayati, Lisa J. Strug, Jüri Reimand. Directional integration and pathway enrichment analysis for multi-omics data. bioRxiv (2023-09-24) (^ - co-first authors) https://www.biorxiv.org/content/10.1101/2023.09.23.559116v1 The first version of ActivePathways was published in Nature Communications with the PCAWG Pan-Cancer project. Marta Paczkowska\^, Jonathan Barenboim\^, Nardnisa Sintupisut, Natalie S. Fox, Helen Zhu, Diala Abd-Rabbo, Miles W. Mee, Paul C. Boutros, PCAWG Drivers and Functional Interpretation Working Group, PCAWG Consortium, Juri Reimand. Integrative pathway enrichment analysis of multivariate omics data. Nature Communications 11 735 (2020) (\^ - co-first authors) <https://www.nature.com/articles/s41467-019-13983-9> <https://pubmed.ncbi.nlm.nih.gov/32024846/> ## Pathway enrichment analysis using the ranked hypergeometric test From a matrix of p-values, `ActivePathways` creates a ranked gene list where genes are prioritised based on their combined significance. The combined significance of each gene is determined by performing statistical data fusion on a series of omics datasets provided in the input matrix. The ranked gene list includes the most significant genes first. `ActivePathways` then performs a ranked hypergeometric test to determine if a pathway (i.e., a gene set with a common functional annotation) is enriched in the ranked gene list, by performing a series of hypergeometric tests (also known as Fisher's exact tests). In each such test, a larger set of genes from the top of the ranked gene list is considered. At the end of the series, the ranked hypergeometric test returns the top, most significant p-value from the series, corresponding to the point in the ranked gene list where the pathway enrichment reached the greatest significance of enrichment. This approach is useful when the genes in our ranked gene list have varying signals of biological importance in the input omics datasets, as the test identifies the top subset of genes that are the most relevant to the enrichment of the pathway. ## Using the package A basic example of using ActivePathways is shown below. 
We will analyse cancer driver gene predictions for a collection of cancer genomes. Each dataset (i.e., column in the matrix) contains a statistical significance score (P-value) where genes with small P-values are considered stronger candidates of cancer drivers based on the distribution of mutations in the genes. For each gene (i.e., row in the matrix), we have several predictions representing genomic elements of the gene, such as coding sequences (CDS), untranslated regions (UTR), and core promoters (promCore). To analyse these driver genes using existing knowledge of gene function, we will use gene sets corresponding to known molecular pathways from the Reactome database. These gene sets are commonly distributed in text files in the GMT format [(Gene Matrix Transposed)](https://software.broadinstitute.org/cancer/software/gsea/wiki/index.php/Data_formats#GMT:_Gene_Matrix_Transposed_file_format_.28.2A.gmt.29) file. Let's start by reading the data from the files embedded in the R package. For the p-value matrix, `ActivePathways` expects an object of the matrix class so the table has to be cast to the correct class after reading the file. ```{r} scores <- read.table( system.file('extdata', 'Adenocarcinoma_scores_subset.tsv', package = 'ActivePathways'), header = TRUE, sep = '\t', row.names = 'Gene') scores <- as.matrix(scores) scores ``` `ActivePathways` does not allow missing (NA) values in the matrix of P-values and these need to be removed. One conservative option is to re-assign all missing values as ones, indicating our confidence that the missing values are not indicative of cancer drivers. Alternatively, one may consider removing genes with NA values. ```{r} scores[is.na(scores)] <- 1 ``` ## Basic use The basic use of `ActivePathways` requires only two input parameters, the matrix of P-values with genes in rows and datasets in columns, as prepared above, and the path to the GMT file in the file system. Importantly, the gene IDs (symbols, accession numbers, etc) in the P-value matrix need to match those in the GMT file. Here we use a GMT file provided with the package. This GMT file is heavily filtered and outdated; thus users must provide their own GMT file when using the package. These GMT files can be acquired from multiple [sources](https://baderlab.org/GeneSets) such as Gene Ontology, Reactome and others. For better accuracy and statistical power these pathway databases should be combined. Acquiring an [up-to-date GMT file](http://download.baderlab.org/EM_Genesets/current_release/) is essential to avoid using unreliable outdated annotations [(see this paper)](https://www.nature.com/articles/nmeth.3963). ```{r} library(ActivePathways) gmt_file <- system.file('extdata', 'hsapiens_REAC_subset.gmt', package = 'ActivePathways') ActivePathways(scores, gmt_file) ``` ## ActivePathways 2.0 Directional integration of multi-omics data ActivePathways 2.0 extends our integrative pathway analysis framework significantly. Users can now provide directional assumptions of input omics datasets for more accurate analyses. This allows us to prioritise genes and pathways where certain directional assumptions are met and penalise those where the assumptions are violated. For example, fold-change in protein expression would be expected to associate positively with mRNA fold-change of the corresponding gene, while negative associations would be unexpected and indicate more-complex situations or potential false positives. 
We can instruct the pathway analysis to prioritise positively-associated protein/mRNA pairs and penalise negative associations (or vice versa). Two additional inputs are included in ActivePathways that allow diverse multi-omics analyses. These inputs are optional. The scores_direction and constraints_vector parameters are provided in the merge_p_values() and ActivePathways() functions to incorporate this directional penalty into the data fusion and pathway enrichment analyses. The parameter constraints_vector is a vector that allows the user to represent the expected relationship between the input omics datasets. The vector size is n_datasets. Values include +1, -1, and 0. The parameter scores_direction is a matrix that reflects the directions that the genes/transcripts/protein show in the data. The matrix size is n_genes \* n_datasets, that is the same size as the P-value matrix. This is a numeric matrix, but only the signs of the values are accounted for. ### Directional data integration at the gene level Load a dataset of P-values and fold-changes for mRNA and protein levels. This dataset is embedded in the package. Examine a few example genes. ```{r} fname_data_matrix <- system.file('extdata', 'Differential_expression_rna_protein.tsv', package = 'ActivePathways') pvals_FCs <- read.table(fname_data_matrix, header = TRUE, sep = '\t') example_genes <- c('ACTN4','PIK3R4','PPIL1','NELFE','LUZP1','ITGB2') pvals_FCs[pvals_FCs$gene %in% example_genes,] ``` Create a matrix of gene/protein P-values. Where the columns are different omics datasets (mRNA, protein) and the rows are genes. Examine a few genes in the P-value matrix, and convert missing values to P = 1. ```{r} pval_matrix <- data.frame( row.names = pvals_FCs$gene, rna = pvals_FCs$rna_pval, protein = pvals_FCs$protein_pval) pval_matrix <- as.matrix(pval_matrix) pval_matrix[is.na(pval_matrix)] <- 1 pval_matrix[example_genes,] ``` Create a matrix of gene/protein directions similarly to the P-value matrix (i.e., scores_direction). ActivePathways only uses the signs of the direction values (ie +1 or -1). If directions are missing (NA), we recommend setting the values to zero. Lets examine a few genes in the direction matrix. ```{r} dir_matrix <- data.frame( row.names = pvals_FCs$gene, rna = pvals_FCs$rna_log2fc, protein = pvals_FCs$protein_log2fc) dir_matrix <- as.matrix(dir_matrix) dir_matrix <- sign(dir_matrix) dir_matrix[is.na(dir_matrix)] <- 0 dir_matrix[example_genes,] ``` This matrix has to be accompanied by a vector that provides the expected relationship between the different datasets. Here, mRNA levels and protein levels are expected to have consistent directions: either both positive or both negative (eg log fold-change). Alternatively, we can use another vector to prioritise genes/proteins where the directions are the opposite. ```{r} constraints_vector <- c(1,1) constraints_vector # constraints_vector <- c(1,-1) ``` Now we merge the P-values of the two datasets using directional assumptions and compare these with the plain non-directional merging. The top 5 scoring genes differ if we penalise genes where this directional logic is violated: While 4 of 5 genes retain significance, the gene PIK3R4 is penalised. Interestingly, as a consequence of penalizing PIK3R4, other genes such as ITGB2 move up in rank. 
```{r} directional_merged_pvals <- merge_p_values(pval_matrix, method = "DPM", dir_matrix, constraints_vector) merged_pvals <- merge_p_values(pval_matrix, method = "Brown") sort(merged_pvals)[1:5] sort(directional_merged_pvals)[1:5] ``` PIK3R4 is penalised because the fold-changes of its mRNA and protein levels are significant and have the opposite signs: ```{r} pvals_FCs[pvals_FCs$gene == "PIK3R4",] pval_matrix["PIK3R4",] dir_matrix["PIK3R4",] merged_pvals["PIK3R4"] directional_merged_pvals["PIK3R4"] ``` To assess the impact of the directional penalty on gene merged P-value signals we create a plot showing directional results on the y axis and non-directional results on the x. Blue dots are prioritised hits, red dots are penalised. ```{r} lineplot_df <- data.frame(original = -log10(merged_pvals), modified = -log10(directional_merged_pvals)) ggplot(lineplot_df) + geom_point(size = 2.4, shape = 19, aes(original, modified, color = ifelse(original <= -log10(0.05),"gray", ifelse(modified > -log10(0.05),"#1F449C","#F05039")))) + labs(title = "", x ="Merged -log10(P)", y = "Directional Merged -log10(P)") + geom_hline(yintercept = 1.301, linetype = "dashed", col = 'black', size = 0.5) + geom_vline(xintercept = 1.301, linetype = "dashed", col = "black", size = 0.5) + geom_abline(size = 0.5, slope = 1,intercept = 0) + scale_color_identity() ``` ### Constraints vector intuition The constraints_vector parameter is provided in the merge_p_values() and ActivePathways() functions to incorporate directional penalty into the data fusion and pathway enrichment analyses. The constraints_vector parameter is a vector of size n_datasets that allows the user to represent the expected relationship between the input omics datasets, and includes the values +1, -1, and 0. The constraints_vector should reflect the expected *relative* directional relationship between datasets. For example, the two constraints_vector values shown below are functionally identical. ```{r} constraints_vector <- c(-1,1) constraints_vector <- c(1,-1) ``` We use different constraints_vector values depending on the type of data we are analyzing and the specific question we are asking. The simplest directional use case and the default for the ActivePathways package is to assume that the datasets have no directional constraints. This is useful when merging datasets where the relative impact of gene perturbations is not clear. For example, gene-level mutational burden and chromatin-immunoprecipitation sequencing (ChIP-seq) experiments can provide gene-level information, however, we cannot know from these datatypes whether gene function is increased or decreased. Therefore, we would set the constraints_vector for gene mutational burden and ChIP-seq datatypes to 0. ```{r} constraints_vector <- c(0,0) ``` When combining datasets that have directional information, we provide the expected relative directions between datasets in the constraints_vector. This is useful when measuring different data types or different conditions. To investigate the transcriptomic differences following gene knockdown or overexpression, we would provide constraints_vector values that reflect the expected opposing relationship between gene knockdown and gene overexpression. ```{r} constraints_vector <- c(1,-1) ``` The constraints_vector is also useful when merging different data types such as gene and protein expression, promoter methylation, and chromatin accessibility. 
Importantly, the expected relative direction between each datatype must be closely considered given the experimental conditions. For example, when combining gene expression, protein expression, and promoter methylation data measured in the same biological condition, we would provide a constraints_vector to reflect the expected agreement between gene and protein expression and disagreement with promoter methylation. ```{r} constraints_vector <- c(1,1,-1) ``` The intuition for merging these datasets is that direction of change in gene expression and protein expression tend to associate with each other according to the central dogma of biology while the direction of change of promoter methylation has the opposite effect as methylation insulates genes from transcription. ### Pathway-level insight To explore how changes on the individual gene level impact biological pathways, we can compare results before and after incorporating a directional penalty. Use the example GMT file embedded in the package. ```{r} fname_GMT2 <- system.file("extdata", "hsapiens_REAC_subset2.gmt", package = "ActivePathways") ``` First perform integrative pathway enrichment analysis with no directionality. Then perform directional integration and pathway enrichment analysis. For this analysis the directional coefficients and constraints_vector are from the gene-based analysis described above. ```{r} enriched_pathways <- ActivePathways( pval_matrix, gmt = fname_GMT2, cytoscape_file_tag = "Original_") constraints_vector <- c(1,1) constraints_vector dir_matrix[example_genes,] enriched_pathways_directional <- ActivePathways( pval_matrix, gmt = fname_GMT2, cytoscape_file_tag = "Directional_", merge_method = "DPM", scores_direction = dir_matrix, constraints_vector = constraints_vector) ``` Examine the pathways that are lost when directional information is incorporated in the data integration. ```{r} pathways_lost_in_directional_integration = setdiff(enriched_pathways$term_id, enriched_pathways_directional$term_id) pathways_lost_in_directional_integration enriched_pathways[enriched_pathways$term_id %in% pathways_lost_in_directional_integration,] ``` An example of a lost pathway is Beta-catenin independent WNT signaling. Out of the 32 genes that contribute to this pathway enrichment, 10 genes are in directional conflict. The enrichment is no longer identified when these genes are penalised due to the conflicting log2 fold-change directions. ```{r} wnt_pathway_id <- "REAC:R-HSA-3858494" enriched_pathway_genes <- unlist( enriched_pathways[enriched_pathways$term_id == wnt_pathway_id,]$overlap) enriched_pathway_genes ``` Examine the pathway genes that have directional disagreement and contribute to the lack of pathway enrichment in the directional analysis ```{r} pathway_gene_pvals = pval_matrix[enriched_pathway_genes,] pathway_gene_directions = dir_matrix[enriched_pathway_genes,] directional_conflict_genes = names(which( pathway_gene_directions[,1] != pathway_gene_directions[,2] & pathway_gene_directions[,1] != 0 & pathway_gene_directions[,2] != 0)) pathway_gene_pvals[directional_conflict_genes,] pathway_gene_directions[directional_conflict_genes,] length(directional_conflict_genes) ``` To visualise differences in biological pathways between ActivePathways analyses with or without a directional penalty, we combine both outputs into a single enrichment map for [plotting](#visualizing-directional-impact-with-node-borders). 
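Before building the combined map, it can be helpful to tabulate which pathways are shared, lost or gained between the two analyses. The sketch below is purely illustrative: it simply compares the `term_id` columns of the two result tables created above, and the shared/lost/gained grouping mirrors the node-border coding used later for the enrichment map.

```{r, eval=FALSE}
# Compare the non-directional and directional results created above.
shared_pathways <- intersect(enriched_pathways$term_id,
                             enriched_pathways_directional$term_id)
lost_pathways   <- setdiff(enriched_pathways$term_id,
                           enriched_pathways_directional$term_id)
gained_pathways <- setdiff(enriched_pathways_directional$term_id,
                           enriched_pathways$term_id)
length(shared_pathways); length(lost_pathways); length(gained_pathways)
```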
## Significance threshold and returning all results A pathway is considered to be significantly enriched if it has `adjusted_p_val <= significant`. The parameter `significant` represents the maximum adjusted P-value for a resulting pathway to be considered statistically significant. Only the significant pathways are returned. P-values from pathway enrichment analysis are adjusted for multiple testing correction to provide a more conservative analysis (see below). ```{r} nrow(ActivePathways(scores, gmt_file, significant = 0.05)) nrow(ActivePathways(scores, gmt_file, significant = 0.1)) ``` ## GMT objects In the most common use case, a GMT file is downloaded from a database and provided directly to `ActivePathways` as a location in the file system. In some cases, it may be useful to load a GMT file separately for preprocessing. The ActivePathways package includes an interface for working with GMT objects. The GMT object can be read from a file using the `read.GMT` function. The GMT is structured as a list of terms (e.g., molecular pathways, biological processes, etc.). In the GMT object, each term is a list containing an id, a name, and the list of genes associated with this term. ```{r} gmt <- read.GMT(gmt_file) names(gmt[[1]]) # Pretty-print the GMT gmt[1:3] # Look at the genes annotated to the first term gmt[[1]]$genes # Get the full name of Reactome pathway 2424491 gmt$`REAC:2424491`$name ``` The most common processing step for GMT files is the removal of gene sets that are too large or small. Here we remove pathways (gene sets) that have less than 10 or more than 500 annotated genes. ```{r} gmt <- Filter(function(term) length(term$genes) >= 10, gmt) gmt <- Filter(function(term) length(term$genes) <= 500, gmt) ``` The new GMT object can now be used for analysis with `ActivePathways` ```{r} ActivePathways(scores, gmt) ``` This filtering can also be done automatically using the `geneset_filter` option to the `ActivePathways` function. By default, `ActivePathways` removes gene sets with less than five or more than a thousand genes from the GMT prior to analysis. In general, gene sets that are too large are likely not specific and less useful in interpreting the data and may also cause statistical inflation of enrichment scores in the analysis. Gene sets that are too small are likely too specific for most analyses and make the multiple testing corrections more stringent, potentially causing deflation of results. A stricter filter can be applied by running `ActivePathways` with the parameter `geneset_filter = c(10, 500)`. ```{r} ActivePathways(scores, gmt_file, geneset_filter = c(10, 500)) ``` This GMT object can be saved to a file ```{r, eval=FALSE} write.GMT(gmt, 'hsapiens_REAC_subset_filtered.gmt') ``` ## Background gene set for statistical analysis To perform pathway enrichment analysis, a global set of genes needs to be defined as a statistical background set. This represents the universe of all genes in the organism that the analysis can potentially consider. By default, this background gene set includes every gene that is found in the GMT file in any of the biological processes and pathways. Another option is to provide the full set of all protein-coding genes, however, this may cause statistical inflation of the results since a sizable fraction of all protein-coding genes still lack any known function. Sometimes the statistical background set needs to be considerably narrower than the GMT file or the full set of genes. 
Genes need to be excluded from the background if the analysis or experiment specifically excluded these genes initially. An example would be a targeted screen or sequencing panel that only considered a specific class of genes or proteins (e.g., kinases). In analysing such data, all non-kinase genes need to be excluded from the background set to avoid statistical inflation of all gene sets related to kinase signalling, phosphorylation and similar functions. To alter the background gene set in `ActivePathways`, one can provide a character vector of gene names that make up the statistical background set. In this example, we start from the original list of genes in the entire GMT and remove one gene, the tumor suppressor TP53. The new background set is then used for the ActivePathways analysis. ```{r} background <- makeBackground(gmt) background <- background[background != 'TP53'] ActivePathways(scores, gmt_file, background = background) ``` Note that only the genes found in the background set are used for testing enrichment. Any genes in the input data that are not in the background set will be automatically removed by `ActivePathways`. ## Merging p-values A key feature of `ActivePathways` is the integration of multiple complementary omics datasets to prioritise genes for the pathway analysis. In this approach, genes with significant scores in multiple datasets will get the highest priority, and certain genes with weak scores in multiple datasets may be ranked higher, highlighting functions that would be missed when only single datasets were analysed. `ActivePathways` accomplishes this by merging the series of p-values in the columns of the scores matrix for each gene into a single combined P-value. The four methods to merge P-values are Fisher's method (the default), Brown's method (extension of Fisher's), Stouffer's method and Strube's method (extension of Stouffer's). Each of these methods have been extended to account for the directional activity of genes across multi-omics datasets with Fisher_directional, DPM, Stouffer_directional, and Strube_directional methods. The Brown's and Strube's methods are more conservative in the case when the input datasets show some large-scale similarities (i.e., covariation), since they will take that into account when prioritising genes across similar datasets. The Brown's or Strube's method are recommended for most cases since omics datasets are often not statistically independent of each other and genes with high scores in one dataset are more likely to have high scores in another dataset just by chance. The following example compares the merged P-values of the first few genes between the four methods. Fisher's and Stouffer's method are two alternative strategies to merge p-values and as a result the top scoring genes and p-values may differ. The genes with the top scores for Brown's method are the same as Fisher's, but their P-values are more conservative. This is the case for Strube's method as well, in which the top scoring genes are the same as Stouffer's method, but the P-values are more conservative. ```{r} sort(merge_p_values(scores, 'Fisher'))[1:5] sort(merge_p_values(scores, 'Brown'))[1:5] sort(merge_p_values(scores, 'Stouffer'))[1:5] sort(merge_p_values(scores, 'Strube'))[1:5] ``` This function can be used to combine some of the data before the analysis for any follow-up analysis or visualisation. 
For example, we can merge the columns `X5UTR`, `X3UTR`, and `promCore` into a single `non_coding` column (these correspond to predictions of driver mutations in 5'UTRs, 3'UTRs, and core promoters of genes, respectively). This will consider the three non-coding regions as a single column, rather than giving them all equal weight to the `CDS` column. ```{r} scores2 <- cbind(scores[, 'CDS'], merge_p_values(scores[, c('X3UTR', 'X5UTR', 'promCore')], 'Brown')) colnames(scores2) <- c('CDS', 'non_coding') scores[c(2179, 1760),] scores2[c(2179, 1760),] ActivePathways(scores, gmt_file) ActivePathways(scores2, gmt_file) ``` ## Cutoff for filtering the ranked gene list for pathway enrichment analysis To perform pathway enrichment of the ranked gene list of merged P-values, `ActivePathways` defines a P-value cutoff to filter genes that have little or no significance in the series of omics datasets. This threshold represents the maximum p-value for a gene to be considered of interest in our analysis. The threshold is `0.1` by default but can be changed using the `cutoff` option. The default option considers raw P-values that have not been adjusted for multiple-testing correction. Therefore, the default option provides a relatively lenient approach to filtering the input data. This is useful for finding additional genes with weaker signals that associate with well-annotated and strongly significant genes in the pathway and functional context. ```{r} nrow(ActivePathways(scores, gmt_file)) nrow(ActivePathways(scores, gmt_file, cutoff = 0.01)) ``` ## Adjusting P-values using multiple testing correction Multiple testing correction is essential in the analysis of omics data since each analysis routinely considers thousands of hypotheses and apparently significant P-values will occur by chance alone. `ActivePathways` uses multiple testing correction at the level of pathways as P-values from the ranked hypergeometric test are adjusted for multiple testing (note that the ranked gene list provided to the ranked hypergeometric test remain unadjusted for multiple testing by design). The package uses the `p.adjust` function of base R to run multiple testing corrections and all methods in this function are available. By default, 'holm' correction is used. The option `correction_method = 'none'` can be used to override P-value adjustment (not recommended in most cases). ```{r} nrow(ActivePathways(scores, gmt_file)) nrow(ActivePathways(scores, gmt_file, correction_method = 'none')) ``` ## The results table of ActivePathways Consider the results object from the basic use case of `ActivePathways` ```{r} res <- ActivePathways(scores, gmt_file) res ``` The columns `term_id`, `term_name`, and `term_size` give information about each pathway detected in the enrichment analysis. The `adjusted_p_val` column with the adjusted P-value indicates the confidence that the pathway is enriched after multiple testing correction. The `overlap` column provides the set of genes from the integrated gene list that occur in the given enriched gene set (i.e., molecular pathway or biological process). These genes were quantified across multiple input omics datasets and prioritized based on their joint significance in the input data. Note that the genes with the strongest scores across the multiple datasets are listed first. 
```{r} res$overlap[1:3] ``` This column is useful for further data analysis, allowing the researcher to go from the space of enriched pathways back to the space of individual genes and proteins involved in pathways and their input omics datasets. The `evidence` column provides insights to which of the input omics datasets (i.e., columns in the scores matrix) contributed to the discovery of this pathway or process in the integrated enrichment analysis. To achieve this level of detail, `ActivePathways` also analyses the gene lists ranked by the individual columns of the input matrix to detect enriched pathways. The `evidence` column lists the name of a given column of the input matrix if the given pathway is detected both in the integrated analysis and the analysis of the individual column. For example, in this analysis the majority of the detected pathways have only 'CDS' as their evidence, since these pathways were found to be enriched in data fusion through P-value merging and also by analysing the gene scores in the column `CDS` (for reference, CDS corresponds to protein-coding sequence where the majority of known driver mutations have been found). As a counter-example, the record for the pathway `REAC:422475` in our results lists as evidence `list('X3UTR', 'promCore')`, meaning that the pathway was found to be enriched when considering either the `X3UTR` column, the `promCore` column, or the combined omics datasets. ```{r} unlist(res[res$term_id == "REAC:422475","evidence"]) ``` Finally, if a pathway is found to be enriched only with the combined data and not in any individual column, 'combined' will be listed as the evidence. This subset of results may be particularly interesting since it highlights complementary aspects of the analysis that would remain hidden in the analysis of any input omics dataset separately. The following columns named as `Genes_{column}` help interpret how each pathway was detected in the multi-omics data integration, as listed in the column `evidence`. These columns show the genes present in the pathway and any of the input omics datasets. If the given pathway was not identified using the scores of the given column of the input scores matrix, an `NA` value is shown. Again, the genes are ranked by the significance of their scores in the input data, to facilitate identification of the most relevant genes in the analysis. ## Writing results to a CSV file The results are returned as a `data.table` object due to some additional data structures needed to store lists of gene IDs and supporting evidence. The usual R functions `write.table` and `write.csv` will struggle with exporting the data unless the gene and evidence lists are manually transformed as strings. Fortunately, the `fwrite` function of `data.table` can be used to write the file directly and the ActivePathways package includes the function `export_as_CSV` as a shortcut that uses the vertical bar symbol to concatenate gene lists. ```{r, eval = FALSE} result_file <- paste('ActivePathways_results.csv', sep = '/') export_as_CSV (res, result_file) # remove comment to run read.csv(result_file, stringsAsFactors = F)[1:3,] ``` The `fwrite` can be called directly for customised output. 
```{r, eval=FALSE} result_file <- paste('ActivePathways_results2.txt', sep = '/') data.table::fwrite(res, result_file, sep = '\t', sep2 = c('', ',', '')) cat(paste(readLines(result_file)[1:2], collapse = '\n')) ``` # Visualising pathway enrichment results using enrichment maps in Cytoscape The Cytoscape software and the EnrichmentMap app provide powerful tools to visualise the enriched pathways from `ActivePathways` as a network (i.e., an Enrichment Map). To facilitate this visualisation step, `ActivePathways` provides the files needed for building enrichment maps. To create these files, a file prefix must be supplied to `ActivePathways` using the argument `cytoscape_file_tag`. The prefix can be a path to an existing writable directory. ```{r, eval=FALSE} res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__") ``` Four files are written using the prefix: - `enrichmentMap__pathways.txt` contains the table of significant terms (i.e. molecular pathways, biological processes, other gene sets) and the associated adjusted P-values. Note that only terms with `adjusted_p_val <= significant` are written. - `enrichmentMap__subgroups.txt` contains a matrix indicating the columns of the input matrix of P-values that contributed to the discovery of the corresponding pathways. These values correspond to the `evidence` evaluation of input omics datasets discussed above, where a value of one indicates that the pathway was also detectable using a specific input omics dataset. A value of zero indicates otherwise. This file will not be generated if a single-column matrix of scores corresponding to just one omics dataset is provided to `ActivePathways`. - `enrichmentMap__pathways.gmt` contains a shortened version of the supplied GMT file which consists of only the significant pathways detected by `ActivePathways`. - `enrichmentMap__legend.pdf` is a pdf file that displays a color legend of different omics datasets visualised in the enrichment map that can be used as a reference. ## Creating enrichment maps using the ActivePathways results The following sections will discuss how to create a pathway enrichment map using the results from `ActivePathways`. The datasets analysed earlier in the vignette will be used. To follow the steps, save the required files from `ActivePathways` in an accessible location. ## Required software 1. Cytoscape, see <https://cytoscape.org/download.html> 2. EnrichmentMap app of Cytoscape, see menu Apps\>App manager or <https://apps.cytoscape.org/apps/enrichmentmap> 3. EnhancedGraphics app of Cytoscape, see menu Apps\>App manager or <https://apps.cytoscape.org/apps/enhancedGraphics> ## Required files `ActivePathways` writes four files that are used to build enrichment maps in Cytoscape. ```{r} files <- c(system.file('extdata', 'enrichmentMap__pathways.txt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__subgroups.txt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__pathways.gmt', package='ActivePathways'), system.file('extdata', 'enrichmentMap__legend.pdf', package='ActivePathways')) ``` The following commands will perform the basic analysis again and write output files required for generating enrichment maps into the current working directory of the R session. All file names use the prefix `enrichmentMap__`. The generated files are also available in the `ActivePathways` R package as shown above. 
```{r, eval=FALSE} gmt_file <- system.file('extdata', 'hsapiens_REAC_subset.gmt', package = 'ActivePathways') scores_file <- system.file('extdata', 'Adenocarcinoma_scores_subset.tsv', package = 'ActivePathways') scores <- read.table(scores_file, header = TRUE, sep = '\t', row.names = 'Gene') scores <- as.matrix(scores) scores[is.na(scores)] <- 1 res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__") ``` The four files written are: - `enrichmentMap__pathways.txt`, a table of significant pathways and the associated adjusted P-values. - `enrichmentMap__subgroups.txt`, a table of pathways and corresponding omics datasets supporting the enrichment of those pathways. This corresponds to the `evidence` column of the `ActivePathways` result object discussed above. - `enrichmentMap__pathways.gmt`, a shortened version of the supplied GMT file which consists of only the significant pathways detected by `ActivePathways`. - `enrichmentMap__legend.pdf`, a reference color legend of different omics datasets visualised in the enrichment map. The following code will examine a few lines of the files generated by `ActivePathways`. ```{r} cat(paste(readLines(files[1])[1:5], collapse='\n')) cat(paste(readLines(files[2])[1:5], collapse='\n')) cat(paste(readLines(files[3])[18:19], collapse='\n')) ``` ## Creating the enrichment map - Open the Cytoscape software. - Ensure that the apps *EnrichmentMap* and *enchancedGraphics* are installed. Apps may be installed by clicking in the menu *Apps -\> App Manager*. - Select *Apps -\> EnrichmentMap*. - In the following dialogue, click the button `+` *Add Data Set from Files* in the top left corner of the dialogue. - Change the Analysis Type to Generic/gProfiler/Enrichr. - Upload the files `enrichmentMap__pathways.txt` and `enrichmentMap__pathways.gmt` in the *Enrichments* and *GMT* fields, respectively. - Click the checkbox *Show Advanced Options* and set *Cutoff* to 0.6. - Then click *Build* in the bottom-right corner to create the enrichment map. ![](CreateEnrichmentMapDialogue_V2.png) ![](NetworkStep1_V2.png) ## Colour the nodes of the network to visualise supporting omics datasets To color nodes in the network (i.e., molecular pathways, biological processes) according to the omics datasets supporting the enrichments, the third file `enrichmentMap__subgroups.txt` needs to be imported to Cytoscape directly. To import the file, activate the menu option *File -\> Import -\> Table from File* and select the file `enrichmentMap__subgroups.txt`. In the following dialogue, select *To a Network Collection* in the dropdown menu *Where to Import Table Data*. Click OK to proceed. ![](ImportStep_V2.png) Next, Cytoscape needs to use the imported information to color nodes using a pie chart visualisation. To enable this, click the Style tab in the left control panel and select the Image/Chart1 Property in a series of dropdown menus (*Properties -\> Paint -\> Custom Paint 1 -\> Image/Chart 1*). ![](PropertiesDropDown2_V2.png) The *image/Chart 1* property now appears in the Style control panel. Click the triangle on the right, then set the *Column* to *instruct* and the *Mapping Type* to *Passthrough Mapping*. ![](StylePanel_V2.png) This step colours the nodes corresponding to the enriched pathways according to the supporting omics datasets, based on the scores matrix initially analysed in `ActivePathways`. 
![](NetworkStep2_V2.png) To allow better interpretation of the enrichment map, `ActivePathways` generates a color legend in the file `enrichmentMap__legend.pdf` that shows which colors correspond to which omics datasets. ![](LegendView.png) Note that one of the colors corresponds to a subset of enriched pathways with *combined* evidence that were only detected through data fusion and P-value merging and not when any of the input datasets were detected separately. This exemplifies the added value of integrative multi-omics pathway enrichment analysis. ## Visualizing directional impact with node borders {#visualizing-directional-impact-with-node-borders} From the drop-down Properties menu, select *Border Line Type*. ![](border_line_type.jpg) Set *Column* to *directional impact* and *Mapping Type* to *Discrete Mapping*. Now we can compare findings between a non-directional and a directional method. We highlight pathways that were shared (0), lost (1), and gained (2) between the approaches. Here, we have solid lines for the shared pathways, dots for the lost pathways, and vertical lines for the gained pathways. Border widths can be adjusted in the *Border Width* property, again with discrete mapping. ![](set_aesthetic.jpg) This step changes node borders in the aggregated enrichment map, depicting the additional information provided by directional impact. ![](new_map.png) ![](legend.png) ## Alternative node coloring For a more diverse range of colors, ActivePathways supports any color palette from RColorBrewer. The color_palette parameter must be provided. ```{r} res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__", color_palette = "Pastel1") ``` ![](LegendView_RColorBrewer.png) Instead, to manually input the color of each dataset the custom_colors parameter must be specified as a vector. This vector should contain the same number of colors as columns in the scores matrix. ```{r} res <- ActivePathways(scores, gmt_file, cytoscape_file_tag = "enrichmentMap__", custom_colors = c("violet","green","orange","red")) ``` ![](LegendView_Custom.png) To change the color of the *combined* contribution, a color must be provided to the color_integrated_only parameter. Tip: if the coloring of nodes did not work in Cytoscape after setting the options in the Style panel, check that the EnhancedGraphics Cytoscape app is installed. # References - Mykhaylo Slobodyanyuk^, Alexander T. Bahcheli^, Zoe P. Klein, Masroor Bayati, Lisa J. Strug, Jüri Reimand. Directional integration and pathway enrichment analysis for multi-omics data. bioRxiv (2023-09-24) (^ - co-first authors) <https://www.biorxiv.org/content/10.1101/2023.09.23.559116v1>. - Integrative Pathway Enrichment Analysis of Multivariate Omics Data. Paczkowska M, Barenboim J, Sintupisut N, Fox NS, Zhu H, Abd-Rabbo D, Mee MW, Boutros PC, PCAWG Drivers and Functional Interpretation Working Group; Reimand J, PCAWG Consortium. Nature Communications (2020) <https://pubmed.ncbi.nlm.nih.gov/32024846/> <https://doi.org/10.1038/s41467-019-13983-9>. - Pathway Enrichment Analysis and Visualization of Omics Data Using g:Profiler, GSEA, Cytoscape and EnrichmentMap. Reimand J, Isserlin R, Voisin V, Kucera M, Tannus-Lopes C, Rostamianfar A, Wadi L, Meyer M, Wong J, Xu C, Merico D, Bader GD. Nature Protocols (2019) <https://pubmed.ncbi.nlm.nih.gov/30664679/> <https://doi.org/10.1038/s41596-018-0103-9>.
/scratch/gouwar.j/cran-all/cranData/ActivePathways/vignettes/ActivePathways-vignette.Rmd
#' Activity Index calculation using raw accelerometry data. #' #' @name ActivityIndex #' @docType package #' @author Jiawei Bai #' @description This package contains functions to read raw accelerometry #' data and to calculate Activity Index based on the data. #require("matrixStats") #library("matrixStats") #require("data.table") #library("data.table") #data(TimeScale)
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/R/ActivityIndex-package.r
#' @title Read the raw tri-axial accelerometry data csv file by ActiGraph GT3X+.
#'
#' @description
#' \code{ReadGT3XPlus} reads the accelerometry data collected by ActiGraph GT3XPlus
#' in csv files generated by ActiLife software. It automatically parses the header
#' of the csv file to acquire the settings of the device.
#'
#' @details
#' The function has been tested on csv files generated by ActiLife6, which have exactly
#' 10 header lines, containing information about the device name, the starting
#' and ending date/time of data collection, the sample rate, and the downloading date/time,
#' etc. An 11th line is also skipped if it contains the column header for the tri-axial
#' acceleration time series. From that point on, only the first 3 columns are read,
#' even if more are present.
#'
#' @param filename
#' The name of the csv file.
#'
#' @param epoch
#' The epoch length (in second) of the Activity Index. Must
#' be a positive integer. The default is \code{1}.
#'
#' @return The \code{ReadGT3XPlus} returns an object of \link{class} "\code{GT3XPlus}".
#' This class of object is supported by the function \code{\link{computeActivityIndex}}.
#' An object of class "\code{GT3XPlus}" is a list containing at least the following components:
#'
#' \code{SN}: Serial Number of the accelerometer
#'
#' \code{StartTime}: Start time of the data collection
#'
#' \code{StartDate}: Start date of the data collection
#'
#' \code{Epoch}: The Epoch time of each observation. If sample rate \code{Hertz}>1, then \code{Epoch}=00:00:00
#'
#' \code{DownloadTime}: Download time of the data
#'
#' \code{DownloadDate}: Download date of the data
#'
#' \code{Hertz}: Sampling rate
#'
#' \code{Raw}: a data frame with 5 columns containing the date, time and acceleration in X, Y and Z axes
#' for each observation.
#' #' @export #' #' @import data.table #' @import matrixStats #' #' @importFrom R.utils gunzip #' @examples #' filename = system.file("extdata","sample_GT3X+.csv.gz",package="ActivityIndex") #' res = ReadGT3XPlus(filename) #' ReadGT3XPlus = function(filename, epoch=1) { stopifnot(length(filename) == 1) if (grepl("[.]gz$", filename)) { filename = R.utils::gunzip(filename, temporary = TRUE, remove = FALSE, overwrite = TRUE) } result=list(SN="",StartTime="",StartDate="",Epoch="",DownloadTime="",DownloadDate="",Hertz="",Raw="") ### Header ### result_Head=readLines(filename,10) result$Epoch=regmatches(result_Head[5],regexpr("(?<=Epoch\\sPeriod\\s\\(hh\\:mm\\:ss\\)\\s)(.+)(?=\\Z)",result_Head[5],perl=TRUE)) result$Hertz=ifelse(length(as.numeric(regmatches(result_Head[1],regexpr("(?<=\\s)(\\d+)(?=\\sHz)",result_Head[1],perl=TRUE))))>0, as.numeric(regmatches(result_Head[1],regexpr("(?<=\\s)(\\d+)(?=\\sHz)",result_Head[1],perl=TRUE))), 1) result$SN=regmatches(result_Head[2],regexpr("(?<=Serial Number:\\s)(.+)(?=\\Z)",result_Head[2],perl=TRUE)) result$StartTime=regmatches(result_Head[3],regexpr("(?<=Start\\sTime\\s)(.+)(?=\\Z)",result_Head[3],perl=TRUE)) result$StartDate=regmatches(result_Head[4],regexpr("(?<=Start\\sDate\\s)(.+)(?=\\Z)",result_Head[4],perl=TRUE)) result$StartDate=as.character(as.Date(result$StartDate,format="%m/%d/%Y")) result$DownloadTime=regmatches(result_Head[6],regexpr("(?<=Download\\sTime\\s)(.+)(?=\\Z)",result_Head[6],perl=TRUE)) result$DownloadDate=regmatches(result_Head[7],regexpr("(?<=Download\\sDate\\s)(.+)(?=\\Z)",result_Head[7],perl=TRUE)) result$DownloadDate=as.character(as.Date(result$DownloadDate,format="%m/%d/%Y")) ### Data ### result$Raw=read.csv(file=filename,skip=10,stringsAsFactors=FALSE,header=FALSE,nrows=1) if (is.character(result$Raw[,1])==TRUE) { row1=read.csv(file=filename,skip=11,stringsAsFactors=FALSE,header=FALSE,nrows=1) result$Raw=fread(filename,skip=11,sep=",",stringsAsFactors=FALSE, colClasses=rep("numeric",ncol(row1)),header=FALSE,fill=TRUE, showProgress=FALSE) } else { row1=read.csv(file=filename,skip=10,stringsAsFactors=FALSE,header=FALSE,nrows=1) result$Raw=fread(filename,skip=10,sep=",",stringsAsFactors=FALSE, colClasses=rep("numeric",ncol(row1)),header=FALSE,fill=TRUE, showProgress=FALSE) } TimeScale = ActivityIndex::TimeScale # Remove incomplete row if (any(is.na(result$Raw[nrow(result$Raw),]))) result$Raw=result$Raw[-nrow(result$Raw),] # Time Stamp # if (result$Epoch=="00:00:00") { Time_Temp_idx=which(TimeScale==result$StartTime):(which(TimeScale==result$StartTime)-1+nrow(result$Raw)%/%result$Hertz+1) Time_Temp_idx=Time_Temp_idx%%length(TimeScale) Time_Temp_idx=rep(Time_Temp_idx,each=result$Hertz) Time_Temp_idx=Time_Temp_idx[1:nrow(result$Raw)] Time_Temp_idx[which(Time_Temp_idx==0)]=length(TimeScale) } else { Time_Temp_idx=which(TimeScale==result$StartTime):(which(TimeScale==result$StartTime)-1+nrow(result$Raw)*epoch+1) Time_Temp_idx=Time_Temp_idx%%length(TimeScale) Time_Temp_idx=Time_Temp_idx[which((1:length(Time_Temp_idx))%%epoch==1)] Time_Temp_idx=Time_Temp_idx[1:nrow(result$Raw)] Time_Temp_idx[which(Time_Temp_idx==0)]=length(TimeScale) } # Combine # result$Raw=cbind(rep(result$StartDate,nrow(result$Raw)),TimeScale[Time_Temp_idx],result$Raw) if (ncol(result$Raw)>5) { colnames(result$Raw)=c("Date","Time","X","Y","Z",paste0("V",1:(ncol(result$Raw)-5))) } else { colnames(result$Raw)=c("Date","Time","X","Y","Z") } # Date Stamp # Change the Date if sample reaches midnight if (length(which(result$Raw$Time=="00:00:00"))>0) { 
date_idx_start=which(result$Raw$Time=="00:00:00")[(1:(length(which(result$Raw$Time=="00:00:00"))%/%result$Hertz)-1)*result$Hertz+1] date_idx_end=c(date_idx_start[-1]-1,nrow(result$Raw)) if (date_idx_start[1]==1) { date_follow=as.character(as.Date(result$StartDate)+1:length(date_idx_start)-1) } else { date_follow=as.character(as.Date(result$StartDate)+1:length(date_idx_start)) } for (i in 1:length(date_idx_start)) { result$Raw$Date[date_idx_start[i]:date_idx_end[i]]=rep(date_follow[i],length(date_idx_start[i]:date_idx_end[i])) } } # class(result)="GT3XPlus" return(result) }
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/R/ReadGT3XPlus.r
#' @title Read the raw tri-axial accelerometry data csv file. #' #' @description #' \code{ReadTable} reads the raw tri-axial accelerometry data in csv files. #' #' @details #' The function reads csv files containing only three columns: acceleration time series #' of x-, y- and z-axis. #' #' @param filename #' The name of the csv file. #' #' @return The \code{ReadTable} returns a data frame with 4 columns: Index, X, Y and Z. #' Index is the column for the indices of acceleration. X, Y and Z are for the acceleration #' time series in each direction. #' @export #' @importFrom utils read.csv #' @examples #' filename = system.file("extdata","sample_table.csv.gz",package="ActivityIndex") #' res = ReadTable(filename) #' ReadTable = function(filename) { if (ncol(read.csv(file = filename, stringsAsFactors = FALSE, header = FALSE, nrows = 1)) != 3) { stop(paste0(filename, " is not an appropriate 3-column data file!")) } result = fread( filename, sep = ",", stringsAsFactors = FALSE, colClasses = rep("numeric", 3), header = FALSE, showProgress = FALSE ) result = cbind(1:nrow(result), result) colnames(result) = c("Index", "X", "Y", "Z") return(result) }
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/R/ReadTable.r
#' @title \eqn{\bar{\sigma}_i} computing using raw accelerometry data #' @description \code{Sigma0} computes \eqn{\bar{\sigma}_i}, which is needed for #' the Activity Index computing in \code{\link{computeActivityIndex}} #' @param x A 4-column data frame containing the raw accelerometry #' data when the device is not worn. The 1st column has the record/index #' number. The 2nd to 4th columns contain the tri-axial raw acceleration. The #' data will be used to calculate \eqn{\bar{\sigma}_i}. #' @param hertz The sample rate of the data. #' @return \eqn{\bar{\sigma}_i}, a numeric vector of length one. #' @export #' @examples #' filename = system.file("extdata","sample_GT3X+.csv.gz",package="ActivityIndex") #' res = ReadGT3XPlus(filename) #' hertz = res$Hertz #' x = res$Raw[ 1:1000, c("Time", "X", "Y", "Z")] #' res = Sigma0(x, hertz = hertz) #' testthat::expect_equal(res, c(SD = 0.184321637135534)) Sigma0 = function(x, hertz = 30) { # stopifnot(is.data.table(x)) x = as.data.table(x) X = Y = Z = NULL rm(list = c("X", "Y", "Z")) n = nrow(x) %/% hertz*hertz result = mean(x[1:n, sqrt((rowVars(matrix(X, ncol = hertz, byrow=TRUE))+ rowVars(matrix(Y, ncol = hertz, byrow=TRUE))+ rowVars(matrix(Z, ncol = hertz, byrow=TRUE)))/3)] ) names(result) = c("SD") return(result) }
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/R/Sigma0.r
#' Time scale vector from 00:00:00 to 23:59:59. #' #' A vector of length 86400 containing the time scale characters from 00:00:00 #' to 23:59:59. #' @format A vector of characters with 86400 entries. #' @name TimeScale NULL
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/R/TimeScale.r
#' @title Compute Activity Index #' @description \code{computeActivityIndex} computes the Activity Index using raw #' accelerometry data, based on user specified parameters such as sample rate #' and epoch length. #' #' @param x An object containing raw accelerometry data, which could #' either be a 4-column data frame or "\code{GT3XPlus}" object. See "Details". #' @param x_sigma0 A 4-column data frame containing the raw accelerometry #' data when the device is not worn. The 1st column has the record/index #' number. The 2nd to 4th columns contain the tri-axial raw acceleration. The #' data will be used to calculate \eqn{\bar{\sigma}_i}. #' @param sigma0 Specify \eqn{\bar{\sigma}_i} directly. At least one of #' \code{x_sigma0} and \code{sigma0} must be specified. If both existed, #' \code{sigma0} will be used. #' @param epoch The epoch length (in second) of the Activity Index. Must #' be a positive integer. #' @param hertz The sample rate of the data. #' @return A data frame with two columns. The first column has the "record #' number" associated with each entry of Activity Index, while the second column #' has the actual value of Activity Index. #' @details \code{x} could be either of the following two types of objects: #' \enumerate{ #' \item A 4-column data frame containing the tri-axial raw accelerometry #' data in the 2nd to 4th column, and the associated record number (could be #' time) in the 1st column. \code{\link{ReadTable}} can be used to generate #' such data frame. #' \item An "\code{GT3XPlus}" object given by function #' \code{\link{ReadGT3XPlus}}. #' } #' @examples #' library(graphics) #' fname = system.file("extdata", "sample_table.csv.gz", #' package = "ActivityIndex") #' sampleTable = ReadTable(fname) #' AI_sampleTable_x = computeActivityIndex( #' sampleTable, #' x_sigma0 = sampleTable[1004700:1005600, ], #' epoch = 1, #' hertz = 30) #' AI_sampleTable_x #' plot(AI ~ RecordNo, data = AI_sampleTable_x, type = "l") #' @export computeActivityIndex=function(x,x_sigma0=NULL,sigma0=NULL,epoch=1,hertz) { if (epoch<1) stop("epoch must not be less than 1!") if (abs(round(epoch)-epoch)>0) stop("epoch must be an integer!") UseMethod("computeActivityIndex",x) } #' @rdname computeActivityIndex #' @export computeActivityIndex.default=function(x,x_sigma0=NULL,sigma0=NULL,epoch=1,hertz) { Index = NULL rm("Index") if (is.null(x_sigma0)&&is.null(sigma0)) stop("Either x_sigma0 or sigma0 needs to be specified!") if (is.null(sigma0)) { sigma0=Sigma0(x_sigma0,hertz) } x = as.data.table(x) n=nrow(x)%/%hertz*hertz result=array(0,c(n%/%hertz,2)) if (sigma0!=0) { result[,2]=x[1:n,(rowSds(matrix(x$X,ncol=hertz,byrow=TRUE))^2-sigma0^2)/sigma0^2+ (rowSds(matrix(x$Y,ncol=hertz,byrow=TRUE))^2-sigma0^2)/sigma0^2+ (rowSds(matrix(x$Z,ncol=hertz,byrow=TRUE))^2-sigma0^2)/sigma0^2] } else { result[,2]=x[1:n,rowSds(matrix(x$X,ncol=hertz,byrow=TRUE))^2+ rowSds(matrix(x$Y,ncol=hertz,byrow=TRUE))^2+ rowSds(matrix(x$Z,ncol=hertz,byrow=TRUE))^2] } result[which(result[,2]<0),2]=0 result[,2]=sqrt(result[,2]/3) result=as.data.frame(result,stringsAsFactors=FALSE) # is Index a column name or some other thing? 
result[,1]=x[(1:(n%/%hertz)-1)*hertz+1,Index] # fetch "Index" if (epoch>1) { L_AI=length(result[,2]) result0=as.data.frame(array(0,c(L_AI%/%epoch,2)),stringsAsFactors=FALSE) result0[,2]=as.numeric(rowSums(matrix(result[1:(L_AI-L_AI%%epoch),2],ncol=epoch,byrow=TRUE))) result0[,1]=result[(1:(L_AI%/%epoch)-1)*epoch+1,1] result=result0 } colnames(result)=c("RecordNo","AI") class(result) = c("ActivityIndex", class(result)) return(result) } #' @rdname computeActivityIndex #' @export computeActivityIndex.GT3XPlus=function(x,x_sigma0=NULL,sigma0=NULL,epoch=1,hertz) { if (x$Hertz != hertz) { stop("hertz must be equal to the sample rate of GT3XPlus!") } x = x$Raw cn = colnames(x) cn[ cn == "Time"] = "Index" colnames(x) = cn result = computeActivityIndex.default(x, x_sigma0 = x_sigma0, sigma0 = sigma0, epoch = epoch, hertz = hertz) return(result) }
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/R/computeActivityIndex.r
#' Print method for ActivityIndex objects
#'
#' Prints the first and last few rows of an Activity Index table.
#'
#' @param x an object used to select a method.
#' @param ... further arguments passed to or from other methods
#' @return No return value, called for side effects
#' @export
#'
#' @examples
#' x = data.frame(RecordNo = rnorm(100), AI = rnorm(100))
#' class(x) = c("ActivityIndex", class(x))
#' print(x)
#' @importFrom utils head tail
#' @method print ActivityIndex
print.ActivityIndex = function(x, ...) {
  # Coerce to a plain data.frame and show only the head and tail rows
  x = as.data.frame(x)
  cat("Showing head and tail rows\n")
  print(head(x), ...)
  print(tail(x), ...)
}
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/R/print.ActivityIndex.R
## ----"setup",echo=FALSE,cache=FALSE,warning=FALSE,message=FALSE---- library(ActivityIndex) library(knitr) opt_setup=options(width=52,scipen=1,digits=5) opts_chunk$set(tidy=TRUE) ## ----"AccessCSV",echo=TRUE,eval=FALSE------------- # system.file("extdata","sample_GT3X+.csv.gz",package="ActivityIndex") # system.file("extdata","sample_table.csv.gz",package="ActivityIndex") ## ----"GT3X+_CSV",echo=FALSE,eval=TRUE------------- fname = system.file("extdata", "sample_GT3X+.csv.gz", package = "ActivityIndex") unzipped = R.utils::gunzip(fname, temporary = TRUE, remove = FALSE, overwrite = TRUE) cat(readLines(unzipped, n = 15), sep = "\n") ## ----"Table_CSV",echo=FALSE,eval=TRUE------------- fname = system.file("extdata", "sample_table.csv.gz", package = "ActivityIndex") unzipped = R.utils::gunzip(fname, temporary = TRUE, remove = FALSE, overwrite = TRUE) cat(readLines(unzipped, n = 5), sep = "\n") ## ----"ReadData",echo=TRUE,warning=FALSE,message=FALSE---- sampleGT3XPlus=ReadGT3XPlus(system.file("extdata","sample_GT3X+.csv.gz",package="ActivityIndex")) sampleTable=ReadTable(system.file("extdata", "sample_table.csv.gz",package="ActivityIndex")) ## ----"str_sampleGT3XPlus",echo=TRUE,eval=TRUE----- str(sampleGT3XPlus) ## ----"head_sampleTable",echo=TRUE,eval=TRUE------- head(sampleTable,n=6) ## ----"computeAI_syntax",echo=TRUE,eval=FALSE------ # computeActivityIndex(x, x_sigma0 = NULL, sigma0 = NULL, epoch = 1, hertz) ## ----"computeAI_onthefly",echo=TRUE,eval=TRUE----- AI_sampleTable_x=computeActivityIndex(sampleTable, x_sigma0=sampleTable[1004700:1005600,], epoch=1, hertz=30) AI_sampleGT3XPlus_x=computeActivityIndex(sampleGT3XPlus, x_sigma0=sampleTable[1004700:1005600,], epoch=1, hertz=30) ## ----"compute_sigma0",echo=TRUE,eval=TRUE--------- sample_sigma0=Sigma0(sampleTable[1004700:1005600,],hertz=30) ## ----"computeAI_beforehand",echo=TRUE,eval=TRUE---- AI_sampleTable=computeActivityIndex(sampleTable, sigma0=sample_sigma0, epoch=1, hertz=30) AI_sampleGT3XPlus=computeActivityIndex(sampleGT3XPlus, sigma0=sample_sigma0, epoch=1, hertz=30) ## ----"head_AI",echo=TRUE,eval=TRUE---------------- head(AI_sampleGT3XPlus,n=10) ## ----"computeAI_minute",echo=TRUE,eval=TRUE------- AI_sampleTable_min=computeActivityIndex(sampleTable, sigma0=sample_sigma0, epoch=60, hertz=30) AI_sampleGT3XPlus_min=computeActivityIndex(sampleGT3XPlus, sigma0=sample_sigma0, epoch=60, hertz=30) ## ----"head_AI_min",echo=TRUE,eval=TRUE------------ head(AI_sampleGT3XPlus_min) ## ----"setup_cleanup",echo=FALSE,cache=FALSE,warning=FALSE,message=FALSE---- options(opt_setup)
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/inst/doc/ActivityIndexIntro.R
--- title: "Introduction to the `ActivityIndex` package in `R`" author: "Jiawei Bai" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction to the `ActivityIndex` package in `R`} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r "setup",echo=FALSE,cache=FALSE,warning=FALSE,message=FALSE} library(ActivityIndex) library(knitr) opt_setup=options(width=52,scipen=1,digits=5) opts_chunk$set(tidy=TRUE) ``` The **ActivityIndex** package contains functions to 1) read raw accelerometry data and 2) compute "Activity Index" (AI) using the raw data. This introduction provides step-by-step instructions on how to read data from `.csv` files and then compute AI. # Data description The sample data were collected by accelerometer GT3X+ (ActiGraph, ), downloaded from \url{https://help.theactigraph.com/entries/21688392-GT3X-ActiSleep-Sample-Data}. The data are available in the **ActivityIndex** package and their paths can be acquired using command: ```{r "AccessCSV",echo=TRUE,eval=FALSE} system.file("extdata","sample_GT3X+.csv.gz",package="ActivityIndex") system.file("extdata","sample_table.csv.gz",package="ActivityIndex") ``` `sample_GT3X+.csv.gz` is the standard output of GT3X+ accelerometer, with a $10$-line header containing the basic information of the data collection, followed by $3$-column raw acceleration data. `sample_table.csv.gz` contains the same $3$-column acceleration data without the $10$-line header. The first $15$ lines of `sample_GT3X+.csv.gz` are shown below: ```{r "GT3X+_CSV",echo=FALSE,eval=TRUE} fname = system.file("extdata", "sample_GT3X+.csv.gz", package = "ActivityIndex") unzipped = R.utils::gunzip(fname, temporary = TRUE, remove = FALSE, overwrite = TRUE) cat(readLines(unzipped, n = 15), sep = "\n") ``` while the first $5$ lines of `sample_table.csv.gz` are ```{r "Table_CSV",echo=FALSE,eval=TRUE} fname = system.file("extdata", "sample_table.csv.gz", package = "ActivityIndex") unzipped = R.utils::gunzip(fname, temporary = TRUE, remove = FALSE, overwrite = TRUE) cat(readLines(unzipped, n = 5), sep = "\n") ``` Users should follow the same format while preparing their own data. # Read the data `ReadGT3XPlus` and `ReadTable` functions read the GT3X+ `.csv.gz` file and the $3$-column acceleration table, respectively. To read the data, use the following code ```{r "ReadData",echo=TRUE,warning=FALSE,message=FALSE} sampleGT3XPlus=ReadGT3XPlus(system.file("extdata","sample_GT3X+.csv.gz",package="ActivityIndex")) sampleTable=ReadTable(system.file("extdata", "sample_table.csv.gz",package="ActivityIndex")) ``` Now that object `sampleGT3XPlus` has class `GT3XPlus`, which contains the raw data and header information. Function `ReadGT3XPlus` automatically applies time stamps to the acceleration time series using the information from the header. For example, our sample data look like this ```{r "str_sampleGT3XPlus",echo=TRUE,eval=TRUE} str(sampleGT3XPlus) ``` However, `sampleTable` is much simpler, since limited information was given. The first $6$ lines of it look like this ```{r "head_sampleTable",echo=TRUE,eval=TRUE} head(sampleTable,n=6) ``` # Compute AI AI is a metric to reflect the variability of the raw acceleration signals after removing systematic noise of the signals. 
Formally, its definition (a one-second AI) is $$ \text{AI}^{\text{new}}_i(t;H)=\sqrt{\max\left(\frac{1}{3}\left\{\sum_{m=1}^{3}{\frac{\sigma^2_{im}(t;H)-\bar{\sigma}^2_{i}}{\bar{\sigma}^2_{i}}}\right\},0\right)},\label{EQ: AI} $$ where $\sigma^2_{im}(t;H)$ ($m=1,2,3$) is axis-$m$'s moving variance during the window starting from time $t$ (of size $H$), and $\bar{\sigma}_i$ is the systematic noise of the signal when the device is placed steady. Function `computeActivityIndex` is used to compute AI. The syntax of the function is ```{r "computeAI_syntax",echo=TRUE,eval=FALSE} computeActivityIndex(x, x_sigma0 = NULL, sigma0 = NULL, epoch = 1, hertz) ``` `x` is the data used to compute AI. It can either be a `GT3XPlus` object, or a $4$-column data frame (tri-axial acceleration time series with an index column). Either `x_sigma0` or `sigma0` are used to determine the systematic noise $\bar{\sigma}_i$. More detailed example will follow to illustrate how to use them. `epoch` is the epoch length (in second) of the AI. For example, the default `epoch=1` yields to $1$-second AI, while minute-by-minute AI is given by `epoch=60`. `hertz` specifies the sample rate (in Hertz), which is usually $10$, $30$ or $80$, etc. We will continue our example of computing AI using our data `sampleGT3XPlus` and `sampleTable`. ## Find $\bar{\sigma}_i$ on-the-fly According to the definition of the systematic noise $\bar{\sigma}_i$, it changes with subject $i$. Therefore, strictly speaking, we are to compute $\bar{\sigma}_i$ every time we compute AI for a new subject $i$. Argument `x_sigma0` can be used to specify a $4$-column data frame (one column for indices and three columns for acceleration) which is used to calculate $\bar{\sigma}_i$. The $4$-column data frame should contain the raw accelerometry data collected while the accelerometer is not worn or kept steady. For example, if we say a segment of our sample data (`sampleTable[1004700:1005600,]`) meets such requirement, we could compute AI using the following code ```{r "computeAI_onthefly",echo=TRUE,eval=TRUE} AI_sampleTable_x=computeActivityIndex(sampleTable, x_sigma0=sampleTable[1004700:1005600,], epoch=1, hertz=30) AI_sampleGT3XPlus_x=computeActivityIndex(sampleGT3XPlus, x_sigma0=sampleTable[1004700:1005600,], epoch=1, hertz=30) ``` ## Find $\bar{\sigma}_i$ beforehand Sometimes we do not want to calculate $\bar{\sigma}_i$ whenever computing AI. For example, if $10$ accelerometers were used to collect data over $100$ subjects, there is no reason to calculate $\bar{\sigma}_i$ for $100$ times. One $\bar{\sigma}_i$ is only needed for one accelerometer. Furthermore, if we could verify the $\bar{\sigma}_i$'s of the $10$ accelerometers are close to each others, we could combine them into a single $\bar{\sigma}=\sum_{i=1}^{10}{\bar{\sigma}_i}/10$. In this case, $\bar{\sigma}$ will be used for all subjects in that study, which is crucial for fast processing of data collected by large studies. This can be achieved by using the argument `x_sigma0` to specify a pre-determined $\bar{\sigma}_i$. 
Still using the same segment of data (`sampleTable[1004700:1005600,]`) as an example, we calculate a `sample_sigma0` beforehand with code

```{r "compute_sigma0",echo=TRUE,eval=TRUE}
sample_sigma0=Sigma0(sampleTable[1004700:1005600,],hertz=30)
```

Then we could use this `sample_sigma0`=$`r sample_sigma0`$ to compute AI with code

```{r "computeAI_beforehand",echo=TRUE,eval=TRUE}
AI_sampleTable=computeActivityIndex(sampleTable, sigma0=sample_sigma0, epoch=1, hertz=30)
AI_sampleGT3XPlus=computeActivityIndex(sampleGT3XPlus, sigma0=sample_sigma0, epoch=1, hertz=30)
```

# Explore AI

Using either method to compute AI yields the same result. The output of the function `computeActivityIndex` has two columns: `RecordNo` saves the indices and `AI` stores the AI. The first $10$ lines of `AI_sampleGT3XPlus` are as follows

```{r "head_AI",echo=TRUE,eval=TRUE}
head(AI_sampleGT3XPlus,n=10)
```

We can also compute AI with a different epoch length. Say we want minute-by-minute AI; then we could use the following code

```{r "computeAI_minute",echo=TRUE,eval=TRUE}
AI_sampleTable_min=computeActivityIndex(sampleTable, sigma0=sample_sigma0, epoch=60, hertz=30)
AI_sampleGT3XPlus_min=computeActivityIndex(sampleGT3XPlus, sigma0=sample_sigma0, epoch=60, hertz=30)
```

According to the definition of AI, the minute-by-minute AIs are simply the sums of all $1$-second AIs within each minute. The AI values during the first $6$ minutes are

```{r "head_AI_min",echo=TRUE,eval=TRUE}
head(AI_sampleGT3XPlus_min)
```

```{r "setup_cleanup",echo=FALSE,cache=FALSE,warning=FALSE,message=FALSE}
options(opt_setup)
```
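As a final illustration (a sketch added for convenience; it is not evaluated when the vignette is built and assumes the minute-level objects computed above and the two-column output described in this section), the minute-by-minute AI can be plotted directly:

```{r "plot_AI_min",echo=TRUE,eval=FALSE}
plot(AI_sampleGT3XPlus_min$AI, type="h", xlab="Minute", ylab="AI",
     main="Minute-by-minute Activity Index")
```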
/scratch/gouwar.j/cran-all/cranData/ActivityIndex/inst/doc/ActivityIndexIntro.Rmd
#' @export
#' @import stats
eBellB12 <- function(p, a, b, k, lambda) {
  ## Expected shortfall at probability levels p: numerically average the
  ## value-at-risk function vBellB12 over (0, p].
  fn = function(x) {
    vBellB12(x, a, b, k, lambda)
  }
  ES = p
  for (i in 1:length(p)) {
    ES[i] = (1/p[i]) * integrate(fn, lower = 0, upper = p[i], stop.on.error = FALSE)$value
  }
  return(ES)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/eBellB12.R
#' @export
#' @import stats
eBellBX <- function(p, a, lambda) {
  ## Expected shortfall at probability levels p: numerically average the
  ## value-at-risk function vBellBX over (0, p].
  fn = function(x) {
    vBellBX(x, a, lambda)
  }
  ES = p
  for (i in 1:length(p)) {
    ES[i] = (1/p[i]) * integrate(fn, lower = 0, upper = p[i], stop.on.error = FALSE)$value
  }
  return(ES)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/eBellBX.R
#' @export
#' @import stats
eBellE <- function(p, alpha, lambda) {
  ## Expected shortfall at probability levels p: numerically average the
  ## value-at-risk function vBellE over (0, p].
  fn = function(x) {
    vBellE(x, alpha, lambda)
  }
  ES = p
  for (i in 1:length(p)) {
    ES[i] = (1/p[i]) * integrate(fn, lower = 0, upper = p[i], stop.on.error = FALSE)$value
  }
  return(ES)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/eBellE.R
#' @export
#' @import stats
eBellEE <- function(p, alpha, beta, lambda) {
  ## Expected shortfall at probability levels p: numerically average the
  ## value-at-risk function vBellEE over (0, p].
  fn = function(x) {
    vBellEE(x, alpha, beta, lambda)
  }
  ES = p
  for (i in 1:length(p)) {
    ES[i] = (1/p[i]) * integrate(fn, lower = 0, upper = p[i], stop.on.error = FALSE)$value
  }
  return(ES)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/eBellEE.R
#' @export
#' @import stats
eBellEW <- function(p, alpha, beta, theta, lambda) {
  ## Expected shortfall at probability levels p: numerically average the
  ## value-at-risk function vBellEW over (0, p].
  fn = function(x) {
    vBellEW(x, alpha, beta, theta, lambda)
  }
  ES = p
  for (i in 1:length(p)) {
    ES[i] = (1/p[i]) * integrate(fn, lower = 0, upper = p[i], stop.on.error = FALSE)$value
  }
  return(ES)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/eBellEW.R
#' @export
#' @import stats
eBellL <- function(p, b, q, lambda) {
  ## Expected shortfall at probability levels p: numerically average the
  ## value-at-risk function vBellL over (0, p].
  fn = function(x) {
    vBellL(x, b, q, lambda)
  }
  ES = p
  for (i in 1:length(p)) {
    ES[i] = (1/p[i]) * integrate(fn, lower = 0, upper = p[i], stop.on.error = FALSE)$value
  }
  return(ES)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/eBellL.R
#' @export
#' @import stats
eBellW <- function(p, alpha, beta, lambda) {
  ## Expected shortfall at probability levels p: numerically average the
  ## value-at-risk function vBellW over (0, p].
  fn = function(x) {
    vBellW(x, alpha, beta, lambda)
  }
  ES = p
  for (i in 1:length(p)) {
    ES[i] = (1/p[i]) * integrate(fn, lower = 0, upper = p[i], stop.on.error = FALSE)$value
  }
  return(ES)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/eBellW.R
#' @export
#' @import stats
vBellB12 <- function(p, a, b, k, lambda, log.p = FALSE, lower.tail = TRUE) {
  ## Value at risk (quantile function) at probability levels p: invert the outer
  ## Bell transformation to obtain the baseline probability t, then apply the
  ## baseline quantile function.
  if (log.p == TRUE)
    p = exp(p)
  if (lower.tail == FALSE)
    p = 1 - p
  t=-1/lambda*log(1-((log(1-p[p >= 0 & p <= 1]*(1-(exp(-exp(lambda)+1)))))/(-exp(lambda))))
  VaR=(a*(((1-t)^(-1/k) -1)^(1/b)))
  return(VaR)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/vBellB12.R
#' @export
#' @import stats
vBellBX <- function(p, a, lambda, log.p = FALSE, lower.tail = TRUE) {
  ## Value at risk (quantile function) evaluated at probability levels p.
  if (log.p == TRUE)
    p = exp(p)
  if (lower.tail == FALSE)
    p = 1 - p
  t=-1/lambda*log(1-((log(1-p[p >= 0 & p <= 1]*(1-(exp(-exp(lambda)+1)))))/(-exp(lambda))))
  VaR=(-1*log(1-(t)^(1/a)))^(0.5)
  return(VaR)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/vBellBX.R
#' @export
#' @import stats
vBellE <- function(p, alpha, lambda, log.p = FALSE, lower.tail = TRUE) {
  ## Value at risk (quantile function) evaluated at probability levels p.
  if (log.p == TRUE)
    p = exp(p)
  if (lower.tail == FALSE)
    p = 1 - p
  t=-1/lambda*log(1-((log(1-p[p >= 0 & p <= 1]*(1-(exp(-exp(lambda)+1)))))/(-exp(lambda))))
  VaR=(-1/alpha*(log(1-(t))))
  return(VaR)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/vBellE.R
#' @export
#' @import stats
vBellEE <- function(p, alpha, beta, lambda, log.p = FALSE, lower.tail = TRUE) {
  ## Value at risk (quantile function) evaluated at probability levels p.
  if (log.p == TRUE)
    p = exp(p)
  if (lower.tail == FALSE)
    p = 1 - p
  t=-1/lambda*log(1-((log(1-p[p >= 0 & p <= 1]*(1-(exp(-exp(lambda)+1)))))/(-exp(lambda))))
  VaR=(-1/alpha*log(1-(t)^(1/beta)))
  return(VaR)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/vBellEE.R
#' @export
#' @import stats
vBellEW <- function(p, alpha, beta, theta, lambda, log.p = FALSE, lower.tail = TRUE) {
  ## Value at risk (quantile function) evaluated at probability levels p.
  if (log.p == TRUE)
    p = exp(p)
  if (lower.tail == FALSE)
    p = 1 - p
  t=-1/lambda*log(1-((log(1-p[p >= 0 & p <= 1]*(1-(exp(-exp(lambda)+1)))))/(-exp(lambda))))
  VaR=((-1/alpha*log(1-(t)^(1/theta)))^(1/beta))
  return(VaR)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/vBellEW.R
#' @export
#' @import stats
vBellL <- function(p, b, q, lambda, log.p = FALSE, lower.tail = TRUE) {
  ## Value at risk (quantile function) evaluated at probability levels p.
  if (log.p == TRUE)
    p = exp(p)
  if (lower.tail == FALSE)
    p = 1 - p
  t=-1/lambda*log(1-((log(1-p[p >= 0 & p <= 1]*(1-(exp(-exp(lambda)+1)))))/(-exp(lambda))))
  VaR=b*(((1-t)^(-1/q))-1)
  return(VaR)
}
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/vBellL.R
#' @export
#' @import stats
vBellW <- function(p, alpha, beta, lambda, log.p = FALSE, lower.tail = TRUE) {
  ## Value at risk (quantile function) evaluated at probability levels p.
  if (log.p == TRUE)
    p = exp(p)
  if (lower.tail == FALSE)
    p = 1 - p
  t=-1/lambda*log(1-((log(1-p[p >= 0 & p <= 1]*(1-(exp(-exp(lambda)+1)))))/(-exp(lambda))))
  ## Note: parentheses added around 1/beta. The original '^1/beta' divided the whole
  ## quantity by beta instead of taking the beta-th root (cf. vBellB12 and vBellEW).
  VaR=(-1/alpha*(log(1-(t))))^(1/beta)
  return(VaR)
}
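## Illustrative usage sketch (added for clarity; not part of the original source,
## and the parameter values below are arbitrary). The block is wrapped in
## `if (FALSE)` so the body is never executed when the package is installed.
if (FALSE) {
  p <- c(0.70, 0.80, 0.90)
  vBellW(p, alpha = 0.5, beta = 1.5, lambda = 0.1)   # value at risk at levels p
  eBellW(p, alpha = 0.5, beta = 1.5, lambda = 0.1)   # corresponding expected shortfall
}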
/scratch/gouwar.j/cran-all/cranData/ActuarialM/R/vBellW.R
#'AcuityView #' #'This function provides a simple method for displaying a visual scene as it may appear to an animal with lower acuity. #'@param photo The photo you wish to alter; if NULL then a pop up window allows you to navigate to your photo, otherwise include the file path here #'@param distance The distance from the viewer to the object of interest in the image; can be in any units so long as it is in the same units as RealWidth #'@param realWidth The real width of the entire image; can be in any units as long as it is in the same units as distance #'@param eyeResolutionX The resolution of the viewer in degrees #'@param eyeResolutionY The resolution of the viewer in the Y direction, if different than ResolutionX; defaults to NULL, as it is uncommon for this to differ from eyeResolutionX #'@param plot Whether to plot the final image; defaults to T, but if F, the final image will still be saved to your working directory #'@param output The name of the output file, must be in the format of output="image_name.filetype"; acceptable filetypes are .bmp, .png, or .jpeg #'@return Returns an image in the specified format #'@section Image Format Requirements: Image must be in 3-channel format, either PNG, JPEG or BMP. Note: some PNG files have an alpha channel that makes them 4-channel images; this will not work with the code. The image must be 3-channel. #'@section Image size: Image must be square with each side a power of 2 pixels. Example: 512x512, 1024 x 1024, 2048 x 2048 pixels #'@section For Linux Users: You may need to install the fftw library in order for the R package "fftwtools" to install and perform correctly. #'The FFTW website and install information can be found here: http://www.fftw.org/ #'This library can easily be installed on Ubuntu with: apt-get install fftw3-dev #'@importFrom grDevices dev.list dev.new dev.off png #'@importFrom graphics par rasterImage #'@importFrom tools file_ext #'@importFrom imager load.image imsplit #'@importFrom plotrix rescale #'@importFrom fftwtools fftw2d #'@importFrom grid grid.raster #'@examples #'require(imager) #'photo<-system.file('extdata/reef.bmp', package='AcuityView') #'reef<-load.image(photo) #'AcuityView(photo = reef, distance = 2, realWidth = 2, eyeResolutionX = 2, #'eyeResolutionY = NULL, plot = TRUE, output="Example.jpeg") #'@export AcuityView <- function(photo = NULL, distance = 2, realWidth = 2, eyeResolutionX = 0.2, eyeResolutionY = NULL, plot = T, output="test.jpg"){ # Load the image. The image must be a 3-channel image. 
if (is.null(photo)) { photo = file.choose() #if (!is.element(file_ext(photo), c("png", "bmp", "jpeg", "jpg"))) stop("Input file must be png, bmp, or jpeg format!") image <- load.image(photo) } else { #if (!is.element(file_ext(photo), c("png", "bmp", "jpeg", "jpg"))) stop("Input file must be png, bmp, or jpeg format!") image<-photo #image <- load.image(photo) } if (missing(image)) stop("Failed to load the image file") # Check that a correct output format is provided if (!is.character(output)) stop("Output file must be a character string!") if (!is.element(file_ext(output), c("png", "bmp", "jpeg", "jpg"))) stop("Output file must be png, bmp, or jpeg format!") # Get image dimensions dimensions <- dim(image) # Check to make sure dimensions are a power of two or give error if (!is.element(dimensions[1], 2^c(1:100)) || !is.element(dimensions[2], 2^c(1:100))) { stop("Image dimensions must be a power of 2!!!") } # Plot the image if required if (plot) { Devices = dev.list() for (i in Devices) dev.off(i) dev.new(width = 7, height = 4) par(mfrow = c(1, 2), mar = c(1, 0.1, 2, 0.1)) plot(image, axes = FALSE, ylab = "", xlab = "", main = "Before") } # If the X and Y resolutions differ, check here if (is.null(eyeResolutionY)) eyeResolutionY <- eyeResolutionX # Calculate the image width in degrees widthInDegrees <- 57.2958*(2 * atan(realWidth / distance / 2)) # Extract image width in pixels widthInPixels <- dimensions[2] # Calculate the center of the image center <- round(widthInPixels / 2) + 1 pixelsPerDegree <- widthInPixels / widthInDegrees # Create a blur matrix, with the same dimensions as the image # Each element is based on the resolution of the eye, distance to the viewer, and size of the image # See main text for more details blur <- matrix(NA, nrow = widthInPixels, ncol = widthInPixels) for (i in 1:widthInPixels){ for (j in 1:widthInPixels) { x <- i - center y <- j - center freq <- round(sqrt(x^2 + y^2)) / widthInPixels * pixelsPerDegree mySin <- y / sqrt(x^2 + y^2) myCos <- x / sqrt(x^2 + y^2) eyeResolution <- eyeResolutionX * eyeResolutionY /sqrt((eyeResolutionY * myCos)^2 +(eyeResolutionX * mySin)^2) blur[i,j] <- exp(-3.56 * (eyeResolution * freq)^2) } } # Define the center pixel to have a value of 1 blur[center, center] = 1 blur<<-blur # Convert the original 3 color channels into linear RGB space # as opposed to sRGB space, which is how color images are usually encoded. # Each color channel must be linearized separately. splitimage <- imsplit(image,"c") channel <- splitimage[[1]][,] # Convert the data from matrix into array form array <- array(NA, dim = c(widthInPixels^2, length(splitimage))) for (i in 1:length(splitimage)){ matrix <- as.matrix(splitimage[[i]]) vector <- as.vector(rescale(matrix, newrange = c(0, 1))) array[,i] <- vector } # Convert red, green, and blue to linearized values # Begin by creating an empty array for your linearized values linearized_values <- array(NA, dim = c(widthInPixels, widthInPixels, 3)) dim_array <- dim(array) # Define the variable "a" for use in converting values to linearized color space a <- 0.055 # To find the equations for converting to linearized space, # see main text or: https://en.wikipedia.org/wiki/SRGB # Specifically, the section entitled "The reverse transformation." 
# Linearize the red color channel redlinear <- array(NA, dim = c(dim_array[1], 1)) for (i in 1:dim_array[1]){ if (array[i,1] <= 0.04045){ redlinear[i] <- (array[i,1] / 12.92) } else { redlinear[i] <- ((array[i,1] + a) /(1 + a))^2.4 }} dim(redlinear) <- dim(splitimage[[1]]) linearized_values[,,1] <- redlinear #red_linearized_values<<-linearized_values[,,1] # Linearize the green color channel greenlinear <- array(NA, dim = c(dim_array[1], 1)) for (i in 1:dim_array[1]){ if (array[i,2] <= 0.04045){ greenlinear[i] <- (array[i,2] / 12.92) } else { greenlinear[i] <- ((array[i,2] + a)/(1 + a))^2.4 } } dim(greenlinear) <- dim(splitimage[[2]]) linearized_values[,,2] <- greenlinear #green_linearized_values<<-linearized_values[,,2] # Linearize the blue color channel bluelinear <- array(NA, dim = c(dim_array[1], 1)) for (i in 1:dim_array[1]){ if (array[i,3] <= 0.04045){ bluelinear[i] <- (array[i,3]/12.92) } else { bluelinear[i] <- ((array[i,3] + a) / (1 + a))^2.4 } } dim(bluelinear) <- dim(splitimage[[3]]) linearized_values[,,3] <- bluelinear #blue_linearized_values<<-linearized_values[,,3] # Perform the 2-D Fourier Transform, blur matrix multiplication # and inverse fourier transform on the linearized color values: final <- array(NA, dim = c(widthInPixels, widthInPixels, length(splitimage))) for (i in 1:length(splitimage)){ matrix <- linearized_values[,,i] fft <- (1/widthInPixels) * fft_matrix_shift(fftw2d(matrix, inverse = 0)) transform <- fft * blur ifft <- (1/widthInPixels) * fftw2d(transform, inverse = 1) final[,,i] <- Mod(ifft) } #final_red<<-final[,,1] #final_green<<-final[,,2] #final_blue<<-final[,,3] # Now, for display purposes, we need to transform the colors from # linearized color space back into sRGB space sRGB_values <- array(NA, dim = c(widthInPixels, widthInPixels, 3)) # Each dimension from the three-dimensional "final" array is a color # channel that has been linearized, fourier transformed, blurred, and # inverse fourier transformed. Create a vector from each of these # so that you can do the calculations that retransform things back into # sRGB space red2 <- as.vector(final[,,1]) green2 <- as.vector(final[,,2]) blue2 <- as.vector(final[,,3]) # To see the equations for the transformation to sRGB space, # see main text or: https://en.wikipedia.org/wiki/SRGB # Speficially the section entitled "The forward transformation." 
# Calculate sRGB values for the red channel redsRGB <- array(NA, dim = c(dim_array[1])) for (i in 1:dim_array[1]){ if (red2[i] < 0.0031308){ redsRGB[i] <- (red2[i] * 12.92) } else { redsRGB[i] <- (((1 + a) * red2[i]^(1 / 2.4)) - a) } } dim(redsRGB) <- dim(splitimage[[1]]) sRGB_values[,,1] <- redsRGB #red_sRGB<<-sRGB_values[,,1] # Calculate sRGB values for the green channel greensRGB <- array(NA, dim = c(dim_array[1])) for (i in 1:dim_array[1]){ if (green2[i] < 0.0031308){ greensRGB[i] <- (green2[i] * 12.92) } else { greensRGB[i] <- (((1 + a) * green2[i]^(1 / 2.4)) - a) } } dim(greensRGB) <- dim(splitimage[[1]]) sRGB_values[,,2] <- greensRGB #green_sRGB<<-sRGB_values[,,2] # Calculate sRGB values for the blue channel bluesRGB <- array(NA, dim = c(dim_array[1])) for (i in 1:dim_array[1]){ if (blue2[i] < 0.0031308){ bluesRGB[i] <- (blue2[i] * 12.92) } else { bluesRGB[i] <- (((1 + a) * blue2[i]^(1 / 2.4)) - a) } } dim(bluesRGB) <- dim(splitimage[[1]]) sRGB_values[,,3] <- bluesRGB #blue_sRGB<<-sRGB_values[,,3] # Rescale the sRGB values so that the maximum is equal to 1, # for the purposes of displaying the image # Note: depending on the particular original image, the maximum # values may not be above 1; scaling is only necessary if the # maximum value is >1, hence the if/else statement if (max(sRGB_values[,,1]) > 1){ rsc <- rescale(sRGB_values[,,1], newrange = c(min(sRGB_values[,,1]), 1)) } else { rsc <- sRGB_values[,,1] } if (max(sRGB_values[,,2]) > 1){ gsc <- rescale(sRGB_values[,,2], newrange = c(min(sRGB_values[,,2]), 1)) } else { gsc <- sRGB_values[,,2] } if (max(sRGB_values[,,3]) > 1){ bsc <- rescale(sRGB_values[,,3], newrange = c(min(sRGB_values[,,3]), 1)) } else { bsc <- sRGB_values[,,3] } # Put the rescaled sRGB values into an array to plot as a raster, # which displays them as an image # This array should have the same dimensions as the original image rgbmatrix <- array(NA, dim = c(widthInPixels, widthInPixels, length(splitimage))) # Because of the fourier transform, the matrix needs to be transposed # or the final image will end up sideways rgbmatrix[,,1] <- t(rsc) rgbmatrix[,,2] <- t(gsc) rgbmatrix[,,3] <- t(bsc) #rgbmatrix_red<<-rgbmatrix[,,1] #rgbmatrix_green<<-rgbmatrix[,,2] #rgbmatrix_blue<<-rgbmatrix[,,3] # Save output file in the provided format if (file_ext(output) == "png") { png(filename = output, width = dimensions[2], height = dimensions[2], units = "px") grid.raster(rgbmatrix, interpolate = FALSE) dev.off() } if (file_ext(output) == "bmp") { png(filename = output, width = dimensions[2], height = dimensions[2], units = "px") grid.raster(rgbmatrix, interpolate = FALSE) dev.off() } if (file_ext(output) == "jpeg") { png(filename = output, width = dimensions[2], height = dimensions[2], units = "px") grid.raster(rgbmatrix, interpolate = FALSE) dev.off() } if (file_ext(output) == "jpg") { png(filename = output, width = dimensions[2], height = dimensions[2], units = "px") grid.raster(rgbmatrix, interpolate = FALSE) dev.off() } # Now, display the final image (represented in rgbmatrix) in a separate box if (plot) { #grid.raster(rgbmatrix, interpolate = FALSE) plot(c(0, ncol(rgbmatrix)), c(0, nrow(rgbmatrix)), type = "n", axes = F, xlab = "", ylab = "", main = "After") rasterImage(rgbmatrix, 1, 1, ncol(rgbmatrix), nrow(rgbmatrix), interpolate = FALSE) message(paste0('To save the side-by-side image, use a command like this before closing the device:\ndev.copy(jpeg,file="sidebyside.jpg")')) } message(paste0("The results are complete. 
The output file has been saved to ", output)) } # End of function #'FFTMatrixShift #' #'This function rearranges the output of the FFT by moving the zero frequency component to the center #'@param input_matrix the output of an FFT #'@param dim -1 gives the correct matrix shift for the AcuityView function #'@export fft_matrix_shift <- function(input_matrix, dim = -1) { rows <- dim(input_matrix)[1] cols <- dim(input_matrix)[2] # You need a check here for if dim != -1 or is NULL if (dim == -1) { input_matrix <- swap_up_down(input_matrix) return(swap_left_right(input_matrix)) } } swap_up_down <- function(input_matrix) { rows <- dim(input_matrix)[1] cols <- dim(input_matrix)[2] rows_half <- ceiling(rows / 2) return(rbind(input_matrix[((rows_half + 1):rows), (1:cols)], input_matrix[(1:rows_half), (1:cols)])) } swap_left_right <- function(input_matrix) { rows <- dim(input_matrix)[1] cols <- dim(input_matrix)[2] cols_half <- ceiling(cols / 2) return(cbind(input_matrix[1:rows, ((cols_half+1):cols)], input_matrix[1:rows, 1:cols_half])) } #'Sample image for use in example code #'@docType data #'@name reef.bmp #'@title Photograph of a coral reef #'@format a .bmp image #'@usage reef.bmp #'@description This photograph is copyright the authors, and is used for the example code #'@export
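## Illustrative check (added for clarity; not part of the original package code):
## on a small even-sized matrix, fft_matrix_shift() swaps the four quadrants so
## that the first element (the zero-frequency bin of an FFT) moves to the centre.
if (FALSE) {
  m <- matrix(1:16, nrow = 4)
  fft_matrix_shift(m)
}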
/scratch/gouwar.j/cran-all/cranData/AcuityView/R/AcuityView.R
## Function which fits an adaptive mixture of Student-t densities to a function KERNEL ## See Hoogerheide (2006, pp.46-49) ## __input__ ## KERNEL : [function] which computes the kernel !! must be vectorized !! ## mu0 : [kx1 vector] initial value for the location of the first component ## (or location vector of the first component) ## Sigma0 : [kxk matrix] scaling matrix of the first component (default: NULL, i.e. estimated by 'AdMit') ## control : [list] control parameters with the following components: ## $Ns : [integer>100] number of draws used in the simulation (default: 1e5) ## $Np : [integer>100] number of draws used when estimating the probabilities (default: 1e3) ## $Hmax : [integer>0] maximum number of components (default: 10) ## $df : [integer>0] degrees of freedom parameter (default: 1) ## $CVtol : [double] relative decrease of the coefficient of variation (default: 0.1, i.e. 10%) ## $IS : [logical] use importance sampling (default: FALSE) ## $ISpercent : [vector] of percentages of weights used to compute the scale matrix in IS sampling (default: c(.1, .15, .3)) ## $ISscale : [vector] of scaling coefficient for the covariance matrix in IS sampling (default: c(1,.25,4)) ## $trace : [logical] output printed during the fitting (default: FALSE) ## $trace.mu : [logical] output printed of the optimizer (default: 0, i.e. no output) ## $maxit.mu : [double] maximum number of iterations in the optimization (default: 1e4) ## $reltol.mu : [double] relative tolerance in the optimization (default: 1e-8) ## $trace.p : [logical] output printed of the optimizer (default: 0, i.e. no output) ## $weightNC : [double] weight of the new component (default: 0.1, i.e. 10%) ## $maxit.p : [double] maximum number of iterations in the optimization (default: 1e4) ## $reltol.p : [double] relative tolerance in the optimization (default: 1e-8) ## ... : additional parameters used by 'KERNEL' ## __output__ ## CV : [Hx1 vector] of coefficient of variation ## mit : [list] with the following components: ## $p : [Hx1 vector] of probabilities ## $mu : [Hxk matrix] of location vectors (in rows) ## $Sigma : [Hxk^2 matrix] of scale matrices (in rows) ## $df : [integer>0] degrees of freedom ## summary : [data.frame] containing information on optimization algorithm, time and CV over fitting process ## __20081223__ 'AdMit' <- function(KERNEL, mu0, Sigma0=NULL, control=list(), ...) 
{ if (missing(KERNEL)) stop ("'KERNEL' is missing in 'AdMit'") KERNEL <- match.fun(KERNEL) if (!any(names(formals(KERNEL))=="log")) stop ("'KERNEL' MUST have the logical argument 'log' which returns the (natural) logarithm of 'KERNEL' if 'log=TRUE'") if (missing(mu0)) stop ("'mu0' is missing in 'AdMit'") if (!is.vector(mu0)) stop ("'mu0' must be a vector") if (!is.null(Sigma0)) { ## if something is provided if (!is.matrix(Sigma0)) { ## check if its a square matrix stop ("'Sigma0' must be a matrix") } else { ## if square matrix is provided if (!all(Sigma0==t(Sigma0))) ## check if the matrix is symmetric stop ("'Sigma0' is not symmetric") if (fn.isSingular(Sigma0)) ## check if the matrix is singular stop ("'Sigma0' is a singular matrix") if (!fn.isPD(Sigma0)) ## check is the matrix is positive definite stop ("'Sigma0' is not positive definite") } } Sigma0 <- as.vector(Sigma0) ## change square matrix into a vector con <- list(Ns=1e5, Np=1e3, Hmax=10, df=1, CVtol=.1, ## general control parameters IS=FALSE, ISpercent=c(.05,.15,.3), ISscale=c(1,.25,4), ## importance sampling trace=FALSE, ## tracing information trace.mu=0, maxit.mu=5e2, reltol.mu=1e-8, ## mu optimization trace.p=0, weightNC=.1, maxit.p=5e2, reltol.p=1e-8) ## p optimization con[names(control)] <- control if (con$Ns<100) stop ("'Ns' far too small.") if (con$Np<100) stop ("'Np' far too small") if (con$Np>con$Ns) stop ("'Np' must be lower or equal than 'Ns'") if (con$Hmax>15) warning ("'Hmax' larger than 15. May take some time and pose difficulties in the optimization") if (con$df<1) stop ("'df' must be greater or equal than 1") if (con$CVtol<=0 | con$CVtol>=1) stop ("'CVtol' must belong to ]0,1[") if (con$weightNC<=0 | con$weightNC>=1) stop ("'weightNC' must belong to ]0,1[") if (!is.logical(con$trace)) stop ("'trace' must be logical") if (any(con$ISpercent<=0) | any(con$ISpercent>=1)) stop ("components of 'ISpercent' must belong to ]0,1[") if (any(con$ISscale<0)) stop ("components of 'ISscale' must be positive") controlIS <- list(percent=con$ISpercent, scale=con$ISscale) controloptmu <- list(trace=con$trace.mu, maxit=con$maxit.mu, reltol=con$reltol.mu) controloptp <- list(trace=con$trace.p, iter.max=con$maxit.p, rel.tol=con$reltol.p, weightNC=con$weightNC) fn.AdMit_sub(KERNEL, mu0, Sigma0, con$Ns, con$Np, con$Hmax, con$df, con$CVtol, con$trace, con$IS, controlIS, controloptmu, controloptp, ...) } 'fn.AdMit_sub' <- function(KERNEL, mu0, Sigma0, Ns, Np, Hmax, df, CVtol, trace, IS, controlIS, controloptmu, controloptp, ...) { ## initialisation lnD <- lnK <- lnd <- h <- ph <- muh <- Sigmah <- NULL mit <- list(p=NULL, mu=NULL, Sigma=NULL, df=df) k <- length(mu0) Theta <- drawsh <- matrix(NA, Ns, k) nsummary <- c("H","METHOD.mu","TIME.mu","METHOD.p","TIME.p","CV") ## first component if (is.null(Sigma0)) { ## optimize the first component optmu <- fn.optmu(FUN=KERNEL, mu0, control=controloptmu, ...) if (optmu$method=="IS") { stop ("Problem in the optimization. Try another starting values 'mu0'") } } else { ## use the input by the user optmu <- list(mu=mu0, Sigma=Sigma0, method="USER", time=0) } ## first component h <- mit$p <- 1 mit$mu <- matrix(optmu$mu, nrow=1) mit$Sigma <- matrix(optmu$Sigma, nrow=1) Theta <- drawsh <- fn.rmvt(Ns, mit$mu, mit$Sigma, df) lnK <- KERNEL(Theta, log=TRUE, ...) 
lnd <- fn.dmvt(Theta, mit$mu, mit$Sigma, df, log=TRUE) lnD <- as.matrix(lnd[1:Np]) w <- fn.computeexpw(lnK, lnd) CV <- fn.CV(w) summary <- data.frame(h, optmu$method, optmu$time, "NONE", 0, CV) names(summary) <- nsummary if (trace) print(summary) hstop <- FALSE while (h<Hmax & hstop==FALSE) { h <- h+1 if (!IS) ## usual optimization { mu00 <- drawsh[which.max(w),] tmp <- fn.muSigma(w, drawsh) theta <- fn.rmvt(Ns, tmp$mu, tmp$Sigma, 1) lnk <- KERNEL(theta, log=TRUE, ...) lnd <- fn.dmvt(theta, tmp$mu, tmp$Sigma, 1, log=TRUE) w1 <- fn.computeexpw(lnk, lnd) mu01 <- theta[which.max(w1),] mu0 <- rbind(mu00, mu01) tmpoptmu <- list() for (i in 1:2) { ## optimize using each starting value tmpoptmu[[i]] <- fn.optmu(FUN=fn.w, mu0[i,], control=controloptmu, KERNEL=KERNEL, mit=mit, ...) } tmptmpoptmu <- unlist(tmpoptmu) pos <- tmptmpoptmu[names(tmptmpoptmu)=="method"]!="IS" if (any(pos)) ## if at least one has converged { v <- tmptmpoptmu[names(tmptmpoptmu)=="value"][pos] optmu <- tmpoptmu[[which.max(v)]] } else { ## or use importance sampling if not converged optmu <- fn.wIS(drawsh, w, controlIS) } } else { ## use importance sampling optmu <- fn.wIS(drawsh, w, controlIS) } ## loop over scaling factors tmpCV <- tmpw <- tmpph <- tmptheta <- tmpTheta <- tmplnK <- tmplnD <- pos <- NULL muh <- optmu$mu tmpCV <- Inf for (m in 1:nrow(optmu$Sigma)) { ## loop over scaling matrix Sigma Sigmah <- optmu$Sigma[m,] ## draw from the new component tmptheta <- fn.rmvt(Ns, muh, Sigmah, df) tmpTheta <- cbind(Theta, tmptheta) tmplnK <- cbind(lnK, KERNEL(tmptheta, log=TRUE, ...)) ## form matrix used in optimization of probabilities tmplnD <- matrix(NA, Np, h^2) for (i in 1:(h-1)) { tmplnD[,(1+(i-1)*h):((h-1)+(i-1)*h)] <- lnD[,(1+(i-1)*(h-1)):((h-1)+(i-1)*(h-1))] } for (i in 1:h) { pos <- seq(from=1+(i-1)*k, length.out=k) tmplnD[,i*h] <- fn.dmvt(tmpTheta[1:Np,pos], muh, Sigmah, df, log=TRUE) } for (i in 1:(h-1)) { tmplnD[,(h-1)*h+i] <- fn.dmvt(tmptheta[1:Np,], mit$mu[i,], mit$Sigma[i,], df, log=TRUE) } ## optimization of the probabilities optp <- fn.optp(mit$p, tmplnK[1:Np,], tmplnD, control=controloptp) tmpph <- optp$p ## draw from the new mixture comp <- sample(1:h, Ns, prob=tmpph, replace=TRUE) tmpdrawsh <- tmptmplnK <- tmptmplnD <- tmpw <- pos <- NULL for (i in 1:h) { nh <- length(comp[comp==i]) if (nh>0) { pos <- seq(from=1+(i-1)*k, length.out=k) tmpdrawsh <- rbind(tmpdrawsh, matrix(tmpTheta[1:nh,pos], ncol=k)) tmptmplnK <- c(tmptmplnK, tmplnK[1:nh,i]) } } tmptmplnD <- dMit(tmpdrawsh, mit=list(p=tmpph, mu=rbind(mit$mu, muh), ## log of mixture for these draws Sigma=rbind(mit$Sigma, Sigmah), df=mit$df), log=TRUE) tmpw <- fn.computeexpw(tmptmplnK, tmptmplnD) tmptmpCV <- fn.CV(tmpw) if (tmptmpCV<tmpCV) { ## if coefficient of variation better than before newCV <- tmpCV <- tmptmpCV neww <- tmpw newph <- tmpph newSigmah <- Sigmah newdrawsh <- tmpdrawsh newTheta <- tmpTheta newlnK <- tmplnK newlnD <- tmplnD } summaryh <- data.frame(h, optmu$method[m], optmu$time, optp$method, optp$time, tmptmpCV) names(summaryh) <- nsummary if (trace) print(summaryh) summary <- rbind(summary, summaryh) } ## add the new component to the mixture CV[h] <- newCV w <- neww ph <- newph Sigmah <- newSigmah drawsh <- newdrawsh Theta <- newTheta lnK <- newlnK lnD <- newlnD mit$p <- ph mit$mu <- rbind(mit$mu, muh) mit$Sigma <- rbind(mit$Sigma, Sigmah) ## stopping criterion hstop <- (abs((CV[h]-CV[h-1])/CV[h-1]) <= CVtol) } ## form the output (add labels) names(mit$p) <- paste("cmp", 1:h, sep="") dimnames(mit$mu) <- list(paste("cmp", 1:h, sep=""), 
paste("k", 1:k, sep="")) lab <- NULL for (i in 1:k) for (j in 1:k) lab <- c(lab, paste("k", i, "k", j, sep="")) dimnames(mit$Sigma) <- list(paste("cmp", 1:h, sep=""), lab) ## output list(CV=as.vector(CV), mit=as.list(mit), summary=as.data.frame(summary)) }
/scratch/gouwar.j/cran-all/cranData/AdMit/R/AdMit.R
## Function which performs importance sampling using the ## adaptive mixture of Student-t densities as the importance density ## __input__ ## N : [integer>0] number of draws used in the importance sampling (default: 1e5) ## KERNEL : [function] which computes the kernel !! must be vectorized !! ## G : [function] used in the importance sampling !! must be vectorized !! (default: NULL, i.e. posterior mean) ## mit : [list] with mixture information ## ... : additional parameters used by 'KERNEL' or/and by 'G' ## __output__ ## [list] with the following components: ## $ghat : [vector] importance sampling estimator ## $NSE : [vector] numerical standard error estimated as in Geweke (1989, p.1321) ## $RNE : [vector] relative numerical efficiency estimated as in Geweke (1989, p.1321) ## __20080521__ 'AdMitIS' <- function(N=1e5, KERNEL, G=function(theta){theta}, mit=list(), ...) { if (N<1) stop ("'N' should be larger than 1") if (missing(KERNEL)) stop ("'KERNEL' is missing in 'AdMitIS'") KERNEL <- match.fun(KERNEL) if (!any(names(formals(KERNEL))=="log")) stop ("'KERNEL' MUST have the logical argument 'log' which returns the (natural) logarithm of 'KERNEL' if 'log=TRUE'") G <- match.fun(G) theta <- rMit(N, mit) ## arguments in list(...) args <- list(...) nargs <- names(args) ## arguments for KERNEL argsK <- formals(KERNEL) nargsK <- names(argsK) ## arguments for do.call('fn.w') argsw <- list(theta) argsw <- c(argsw, list(KERNEL=KERNEL, mit=mit, log=FALSE)) nargsargsK <- nargsK[charmatch(nargs, nargsK, nomatch=0)] argsw <- c(argsw, args[nargsargsK]) w <- do.call('fn.w', argsw) ## arguments for G argsG <- formals(G) nargsG <- names(argsG) ## arguments for do.call('G') argsg <- list(theta) nargsargsG <- nargsG[charmatch(nargs, nargsG, nomatch=0)] argsg <- c(argsg, args[nargsargsG]) g <- do.call('G', argsg) k <- ncol(g) w <- w/sum(w) ghat <- apply(w*g, 2, sum) tmp <- g-matrix(ghat, N, k, byrow=TRUE) NSE <- sqrt( apply(w^2*tmp^2, 2, sum) ) RNE <- (apply( w*tmp^2, 2, sum) / N) / NSE^2 ## naive variance / NSE^2 list(ghat=as.vector(ghat), NSE=as.vector(NSE), RNE=as.vector(RNE)) }
/scratch/gouwar.j/cran-all/cranData/AdMit/R/AdMitIS.R
## Function which performs (independence) Metropolis-Hastings (MH) sampling ## using an adaptive mixture of Student-t densities as the candidate density ## __input__ ## N : [integer>0] length of the MCMC output (default: 1e5) ## KERNEL : [function] which computes the kernel !! must be vectorized !! ## mit : [list] with mixture information (default: univariate Cauchy) ## ... : additional parameters used by 'KERNEL' ## __output__ ## [list] with the following components: ## $theta : [Nxk matrix] sample generated by the (independence) MH algorithm ## $accept : [double] acceptance rate in the (independence) MH algorithm ## __20080429__ 'AdMitMH' <- function(N=1e5, KERNEL, mit=list(), ...) { if (N<2) stop ("'N' should be at least larger than 2") if (missing(KERNEL)) stop ("'KERNEL' is missing in 'AdMitMH'") KERNEL <- match.fun(KERNEL) if (!any(names(formals(KERNEL))=="log")) stop ("'KERNEL' MUST have the logical argument 'log' which returns the (natural) logarithm of 'KERNEL' if 'log=TRUE'") theta <- rMit(N, mit) lnw <- fn.w(theta, KERNEL=KERNEL, mit=mit, log=TRUE, ...) k <- ncol(mit$mu) r <- .C('fnMH_C', theta = as.double(as.vector(t(theta))), N = as.integer(N), k = as.integer(k), lnw = as.double(lnw), u = as.double(stats::runif(N)), draws = vector('double',N*k), ns = as.integer(0), PACKAGE = 'AdMit', NAOK = TRUE) draws <- matrix(r$draws, N, k, byrow=TRUE) dimnames(draws) <- list(1:N, paste("k", 1:k, sep="")) list(draws=draws, accept=as.numeric(r$ns/N)) }
/scratch/gouwar.j/cran-all/cranData/AdMit/R/AdMitMH.R
## Function which computes the density of a mixture of Student-t densities ## __input__ ## theta : [Nxk matrix] of values ## mit : [list] of mixture information (default: univariate Cauchy) ## log : [logical] log output (default: TRUE) ## __output__ ## [Nx1 vector] of density evaluated at theta ## __20080329__ ## changes marked by # date 20120816 (updated mit definition: mit$df can be a double or vector) 'dMit' <- function(theta, mit=list(), log=TRUE) { if (missing(theta)) stop ("'theta' is missing in 'dMit'") H <- length(mit$p) if (H==0) { ## default for the mixture warning ("'mit' not well defined in 'dMit'; set to default") mit <- list(p=1, mu=as.matrix(0), Sigma=as.matrix(1), df=1) H <- 1 } if (is.vector(theta)) { ## if vector is supplied instead of a matrix if (ncol(mit$mu)==1) { ## univariate density theta <- as.matrix(theta) } else { ## multivariate density but evaluated at a single point theta <- matrix(theta, nrow=1) } } r <- tmp <- 0 # date 20120816: df can be a vector of size 1 or H, replicated in the former case if(length(mit$df)==1 & H >1) mit$df = rep(mit$df,H) for (h in 1:H) { # date 20120816: density evaluation of mixture with possibly different df parameters #tmp <- exp(log(mit$p[h]) + fn.dmvt(theta, mit$mu[h, ], # mit$Sigma[h, ], mit$df, log = TRUE)) tmp <- exp(log(mit$p[h]) + fn.dmvt(theta, mit$mu[h, ], mit$Sigma[h, ], mit$df[h], log = TRUE)) r <- r + tmp } if (log) r <- log(r) r <- as.vector(r) return(r) }
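## Illustrative usage sketch (added for clarity; not part of the original source).
## A two-component bivariate mixture: 'mu' stores the location vectors in rows and
## 'Sigma' stores the vectorized scale matrices in rows, as documented above.
if (FALSE) {
  mit <- list(p     = c(0.6, 0.4),
              mu    = rbind(c(0, 0), c(3, 3)),
              Sigma = rbind(c(1, 0, 0, 1), c(2, 0.5, 0.5, 2)),
              df    = 5)
  dMit(c(0, 0), mit = mit, log = FALSE)      # density at a single point
  dMit(rbind(c(0, 0), c(3, 3)), mit = mit)   # log-density at two points
}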
/scratch/gouwar.j/cran-all/cranData/AdMit/R/dMit.R
## Function which computes the coefficient of variation
## See Hoogerheide (2006, p.48)
## __input__
## w : [Nx1 vector] of weights
## __output__
## [double] coefficient of variation
## __20080429__
'fn.CV' <- function(w)
{
  r <- stats::sd(w)
  if (r==0)
    stop ("'w' is constant in 'fn.CV'")
  as.numeric(r/mean(w))
}
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.CV.R
## Function which computes the weights (exponential)
## __input__
## lnk : [Nx1 vector] of log kernel values
## lnd : [Nx1 vector] of log mixture values
## __output__
## [Nx1 vector] of weights
## __20080429__
'fn.computeexpw' <- function(lnk, lnd)
{
  r <- lnk-lnd
  r <- r-max(r) ## robustify
  r <- exp(r)
  r[is.na(r) | is.nan(r)] <- 0
  as.vector(r)
}
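## Worked illustration (added; not in the original source): subtracting max(r)
## before exponentiating rescales all weights by a common factor, so very negative
## log values do not underflow to zero.
if (FALSE) {
  fn.computeexpw(c(-1000, -1001), c(-998, -1000))   # exp(c(-1, 0)) = c(0.368, 1)
}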
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.computeexpw.R
## Function which computes the density values of a multivariate Student-t density
## __input__
## x : [Nxk matrix] of values
## mu : [kx1 vector] of mean
## Sigma : [k^2x1 matrix] scale matrix (in vector form)
## df : [integer>0] degrees of freedom parameter
## log : [logical] natural logarithm output
## __output__
## [Nx1 vector] of density values
## __20080429__
'fn.dmvt' <- function(x, mu, Sigma, df, log)
{
  k <- length(mu)
  r <- mvtnorm::dmvt(as.matrix(x), as.vector(mu), matrix(Sigma,k,k), df, log)
  as.vector(r)
}
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.dmvt.R
## Function which checks whether a square matrix is positive definite
## __input__
## A : [kxk matrix]
## __output__
## [logical] is the matrix positive definite?
## __20080427__
'fn.isPD' <- function(A)
{
  as.logical(all(eigen(A)$values>0))
}
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.isPD.R
## Function which checks whether a square matrix is singular
## __input__
## A : [kxk matrix]
## tol : [double] tolerance for the determinant (default: 1e25, matching the function definition below)
## __output__
## [logical] is the matrix singular?
## __20080502__
'fn.isSingular' <- function(A, tol=1e25)
{
  tmp <- abs(det(A))
  as.logical(tmp>=tol | tmp<=1/tol)
}
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.isSingular.R
## Function which computes the location vector and the scale matrix
## __input__
## w : [Nx1 vector] of weights
## theta : [Nxk matrix] of draws
## mu : [kx1 vector] of location (default: NULL, i.e., computed from w and theta)
## __output__
## [list] with the following components:
## $mu : [kx1 vector] of location
## $Sigma : [k^2x1 matrix] covariance matrix (in vector form)
## __20080429__
'fn.muSigma' <- function(w, theta, mu=NULL)
{
  theta <- as.matrix(theta) ## univariate
  wscale <- exp(log(w)-log(sum(w)))
  if (is.null(mu))
    {
      mu <- apply(wscale*theta, 2, sum)
    }
  tmp <- theta-matrix(mu, nrow(theta), ncol(theta), byrow=TRUE)
  Sigma <- t(tmp) %*% (wscale*tmp)
  list(mu=as.vector(mu), Sigma=as.vector(Sigma))
}
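## Worked check (added; not in the original source): with equal weights the
## function reduces to the ordinary sample mean and the (uncorrected) covariance.
if (FALSE) {
  theta <- matrix(rnorm(200), ncol = 2)
  out <- fn.muSigma(rep(1, 100), theta)
  out$mu                       # close to colMeans(theta)
  matrix(out$Sigma, 2, 2)      # equals cov(theta) * 99/100
}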
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.muSigma.R
## Function which optimizes the location of the new mixture component ## __input__ ## FUN : [function] to be optimized (either 'KERNEL' or 'fn.w') ## mu0 : [kx1 vector] starting vector ## control : [list] of optimization parameters (see optim) ## ... : additional parameters used by 'KERNEL' ## __output__ ## [list] with the following components: ## $mu : [kx1 vector] of location parameters ## $Sigma : [k^2x1 vector] of scale matrices (in vector form) ## $value : [double] value of the function at optimum ## $method : [character] method used in the optimization ("BFGS", "Nelder-Mead", "IS") ## $time : [double] time of the optimization ## __20080502__ 'fn.optmu' <- function(FUN, mu0, control, ...) { 'fn.optmu_sub' <- function(method) tmp <- optim(par=mu0, fn=FUN, method=method, control=list( trace=control$trace, maxit=control$maxit, reltol=control$reltol, fnscale=-1), ## maximization hessian=TRUE, log=TRUE, ...) ptm <- proc.time()[3] k <- length(mu0) method <- "BFGS" Sigma <- NA ## first pass optimization tmp <- fn.optmu_sub(method) if (tmp$convergence>1) { ## if bad convergence if (k>1) { ## and multivariate optimization, then use Nelder-Mead optimization method <- "Nelder-Mead" tmp <- fn.optmu_sub(method) } } if (tmp$convergence>1) { ## if still bad convervence method <- "IS" } if (fn.isSingular(tmp$hessian)) { ## or if the Hessian matrix is singular method <- "IS" } else { Sigma <- -solve(tmp$hessian) if (!fn.isPD(Sigma)) { ## or if the scale matrix is not positive definite method <- "IS" } } list(mu=as.vector(tmp$par), Sigma=matrix(Sigma, nrow=1), value=as.numeric(tmp$value), method=as.character(method), time=as.numeric(proc.time()[3]-ptm)) }
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.optimmu.R
## Function which optimizes the probabilities by minimizing ## the squared coefficient of variation ## __input__ ## ph : [(H-1)x1 vector] of past probabilities ## lnK : [NpxH matrix] of kernel values ## lnD : [NpxH^2 matrix] of Student-t densities components ## control : [list] of optimization parameters ## __output__ ## [list] with the following components ## $ph : [Hx1 vector] of optimized probabilities ## $method : [character] indicating the optimization method used ("NLMIN", "BFGS", "Nelder-Mead", "NONE") ## $time : [double] indicating the time of the optimization ## __20080427__ 'fn.optp' <- function(ph, lnK, lnD, control) { ## function which transforms the probabilities (positivity and summability) 'fn.lambdap' <- function(lambda) { e <- c(exp(lambda),1) as.vector(e/sum(e)) } ## objective function 'fn.lnf' <- function(lambda) { r <- .C('fnlnf_C', lnp = as.double(log(fn.lambdap(lambda))), lnk = as.double(as.vector(t(lnK))), lnd = as.double(as.vector(t(lnD))), Np = as.integer(Np), H = as.integer(H), f = as.double(0), grad = vector('double',H), PACKAGE = 'AdMit', NAOK = TRUE) assign('grad', r$grad, envir = memo) as.numeric(r$f) } ## gradient of 'fn.lnf' 'fn.gradlnf' <- function(lambda) { e <- c(exp(lambda),1) s <- sum(e) tmp <- -e %*% t(e) / s^2 diag(tmp) <- (e*s-e^2) / s^2 gradlambda <- as.matrix(tmp[1:H,1:(H-1)]) grad <- get('grad', envir = memo) ## take gradient from fn.lnf as.vector(t(gradlambda)%*%grad) } ptm <- proc.time()[3] Np <- nrow(lnK) H <- ncol(lnK) ph <- c((1-control$weightNC)*ph, control$weightNC) lambda0 <- log(ph[1:(H-1)]/ph[H]) memo <- new.env(hash=FALSE) method <- "NLMINB" tmp <- stats::nlminb(start=lambda0, objective=fn.lnf, gradient=fn.gradlnf, control=list( trace=control$trace, iter.max=control$iter.max, rel.tol=control$rel.tol, x.tol=1e-10)) if (tmp$convergence>0 & length(lambda0)==1) { ## if no convergence and univariate optimization method <- "BFGS" } if (tmp$convergence>0 & length(lambda0)>1) { ## if no convergence and multivariate optimization method <- "Nelder-Mead" } if (tmp$convergence>0) { ## run the optimization again with the changed methods tmp <- stats::optim(par=lambda0, fn=fn.lnf, gr=fn.gradlnf, method=method, control=list( trace=control$trace, maxit=control$iter.max, reltol=control$rel.tol)) } if (tmp$convergence>0) { ## if still no convergence, keep past values method <- "NONE" tmp$par <- ph } list(p=as.vector(fn.lambdap(tmp$par)), method=as.character(method), time=as.numeric(proc.time()[3]-ptm)) }
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.optimp.R
## Function which generates draws from a multivariate Student-t density
## __input__
## N : [integer>0] number of draws
## mu : [kx1 vector] location vector
## Sigma : [k^2x1 matrix] scale matrix (in vector form)
## df : [integer>0] degrees of freedom
## __output__
## [Nxk matrix] of draws
## __20080427__
'fn.rmvt' <- function(N, mu, Sigma, df)
{
  k <- length(mu)
  r <- matrix(mu, N, k, byrow=TRUE) + mvtnorm::rmvt(N, matrix(Sigma,k,k), df)
  as.matrix(r)
}
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.rmvt.R
## Function which computes the weights
## See Hoogerheide (2006,pp.46-47)
## __input__
## theta : [Nxk matrix] of draws
## KERNEL : [function] which computes the kernel !! must be vectorized !!
## mit : [list] containing mixture information
## log : [logical] natural logarithm output (default: TRUE)
## ... : additional parameters used by 'KERNEL'
## __output__
## [Nx1 vector] of weights
## __20080427__
'fn.w' <- function(theta, KERNEL, mit, log=TRUE, ...)
{
  lnk <- KERNEL(theta, log=TRUE, ...)
  lnd <- dMit(theta, mit, log=TRUE)
  if (log)
    {
      r <- lnk-lnd
      r[is.na(r) | is.nan(r)] <- -Inf
    }
  else
    {
      r <- fn.computeexpw(lnk, lnd)
    }
  as.vector(r)
}
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.w.R
## Function which computes mu and Sigma from the largest weights ## __input__ ## theta : [Nxk matrix] of draws ## w : [Nx1 vector] of weights ## control : [list] containing the following components: ## $ISpercent : [k1x1 vector] of percentages ## $ISscale : [k2x1 vector] of scaling factors ## __output__ ## [list] with the following components: ## mu : [kx1 vector] of location ## Sigma : [(k1*k2)xk^2 matrix] of scale matrics (in rows and in vector form) ## method : [(k1*k2)x1 character vector] of IS computations ## time : [double] computation time ## __20080429__ 'fn.wIS' <- function(theta, w, control) { ptm <- proc.time()[3] k <- ncol(theta) n <- nrow(theta) pos <- order(w, decreasing=TRUE) Sigma <- method <- NULL for (i in control$percent) { ## iterate over percentages of importance weights tmppos <- pos[1:floor(n*i)] tmp <- fn.muSigma(w[tmppos], theta[tmppos,], mu=theta[pos[1],]) for (j in control$scale) { ## iterate over scaling factors Sigma <- rbind(Sigma, j*tmp$Sigma) method <- c(method, paste("IS", paste(i,j,sep="-"))) } } list(mu=tmp$mu, Sigma=matrix(Sigma, ncol=k*k), method=as.character(method), time=as.numeric(proc.time()[3]-ptm)) }
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.wIS.R
## Function which computes mu and Sigma from the residuals weights ## See Hoogerheide (2006,Sect.2.3.1,p.48) ## __input__ ## theta : [Nxk matrix] of draws ## w : [Nx1 vector] of weights ## ISctild : [double] tuning parameter ## ISmax : [integer>0] maximum iteration allowed ## ISfactor : [double] factor by which ctild is multiplied ## __output__ ## [list] with the following components: ## $mu : [kx1 vector] of locations ## $Sigma : [k^2x1 vector] of scale matrices (in vector form) ## $time : [double] time of the optimization ## __20080429__ 'fn.wRes' <- function(theta, w, control) { ptm <- proc.time()[3] k <- ncol(theta) N <- nrow(theta) ctild <- 100*mean(w) ISstop <- FALSE while (ISstop==FALSE) { wres <- w-ctild wres[wres<0] <- 0 wrestild <- wres / sum(wres) wrestild <- matrix(wrestild, N, k) mu <- apply( wrestild * theta, 2, sum) tmp <- theta-matrix(mu, N, k, byrow=TRUE) tmpSigma <- t(tmp) %*% (wrestild*tmp) if (any(is.na(tmpSigma)) | any(is.nan(tmpSigma))) { ## if 'NA' of 'NaN' detected, scale ctild ctild <- .5*ctild } else if (!fn.isPD(tmpSigma)) { ## if the matrix is not positive definite, scale ctild ctild <- .5*ctild } else { ## otherwise, stop the iteration ISstop <- TRUE } } Sigma <- NULL for (j in control$scale) { ## iterate of the scaling factors Sigma <- rbind(Sigma, as.vector(j*tmpSigma)) } list(mu=as.vector(mu), Sigma=matrix(Sigma, ncol=k*k, byrow=TRUE), time=as.numeric(proc.time()[3]-ptm)) }
/scratch/gouwar.j/cran-all/cranData/AdMit/R/fn.wRes.R
## Function which generates draws from a mixture of Student-t densities ## __input__ ## N : [integer>0] number of draws (default: 1) ## mit : [list] of mixture information (default: univariate Cauchy) ## $p : [Hx1 vector] of probabilities ## $mu : [Hxk matrix] of mean vectors (in row) ## $Sigma : [Hxk^2 matrix] of scale matrices (in row) ## $df : [integer>0] degrees of freedom parameter ## __output__ ## [Nxk matrix] of draws ## __20080429__ ## changes marked by # date 20120816 (updated mit definition: mit$df can be a double or vector) 'rMit' <- function(N=1, mit=list()) { H <- length(mit$p) if (H==0) { ## default warning ("'mit' not well defined in 'rMit'; set to default") mit <- list(p=1, mu=as.matrix(0), Sigma=as.matrix(1), df=1) H <- 1 } comp <- sample(1:H, N, prob=mit$p, replace=TRUE) r <- NULL # date 20120816: df can be a vector of size 1 or H, replicated in the former case if(length(mit$df)==1 & H >1) mit$df = rep(mit$df,H) for (h in 1:H) { nh <- length(comp[comp == h]) tmp <- NULL if (nh > 0) # date 20120816: simulation from mixture with possibly different df parameters # tmp <- fn.rmvt(nh, mit$mu[h, ], mit$Sigma[h, ], mit$df) tmp <- fn.rmvt(nh, mit$mu[h, ], mit$Sigma[h, ], mit$df[h]) r <- rbind(r, tmp) } r <- as.matrix(r[sample(1:N), ]) if (ncol(r) == 1) r <- as.vector(r) return(r) }
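## Illustrative usage sketch (added for clarity; not part of the original source).
## Draws from a univariate two-component mixture; about half of the draws should
## fall above zero given the symmetric component locations and equal weights.
if (FALSE) {
  mit <- list(p = c(0.5, 0.5), mu = rbind(-2, 2), Sigma = rbind(0.5, 0.5), df = 4)
  set.seed(123)
  x <- rMit(1e4, mit)
  mean(x > 0)
}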
/scratch/gouwar.j/cran-all/cranData/AdMit/R/rMit.R
wait <- function() { t <- readline("\nPlease 'q' to quit the demo or any other key to continue...\n") if (t == "q") stop ("end of the demo") } ####################################################################### ## START OF THE DEMO ####################################################################### ##_____________________________________________________________________ ## GELMAN AND MENG (1991) EXAMPLE ## Initialization options(digits = 4, max.print = 40, prompt = "R> ") ## Gelman and Meng (1991) kernel function GelmanMeng <- function(x, A = 1, B = 0, C1 = 3, C2 = 3, log = TRUE) { if (is.vector(x)) x <- matrix(x, nrow = 1) r <- -.5 * (A * x[,1]^2 * x[,2]^2 + x[,1]^2 + x[,2]^2 - 2 * B * x[,1] * x[,2] - 2 * C1 * x[,1] - 2 * C2 * x[,2]) if (!log) r <- exp(r) as.vector(r) } wait() ## Contour plot of the Gelman and Meng (1991) kernel function. PlotGelmanMeng <- function(x1, x2) { GelmanMeng(cbind(x1, x2), log = FALSE) } x1 <- x2 <- seq(from = -1.0, to = 6.0, by = 0.1) z <- outer(x1, x2, FUN = PlotGelmanMeng) contour(x1, x2, z, nlevel = 20, las = 1, lwd = 2, col = rainbow(20), xlab = expression(X[1]), ylab = expression(X[2])) abline(a = 0, b = 1, lty = "dotted") wait() ## Use AdMit to the Gelman and Meng (1991) kernel function. set.seed(1234) outAdMit <- AdMit(GelmanMeng, mu0 = c(0.0, 0.1)) print(outAdMit) wait() ## Contour plot of the mixture approximation obtained with AdMit. PlotMit <- function(x1, x2, mit) { dMit(cbind(x1, x2), mit = mit, log = FALSE) } z <- outer(x1, x2, FUN = PlotMit, mit = outAdMit$mit) contour(x1, x2, z, nlevel = 20, las = 1, lwd = 2, col = rainbow(20), xlab = expression(X[1]), ylab = expression(X[2])) abline(a = 0, b = 1, lty = "dotted") wait() ## Contour plot of the components of the mixture approximation ## obtained with AdMit. par(mfrow = c(2,2)) for (h in 1:4) { mith <- list(p = 1, mu = outAdMit$mit$mu[h,,drop = FALSE], Sigma = outAdMit$mit$Sigma[h,,drop = FALSE], df = outAdMit$mit$df) z <- outer(x1, x2, FUN = PlotMit, mit = mith) contour(x1, x2, z, las = 1, nlevel = 20, lwd = 2, col = rainbow(20), xlab = expression(X[1]), ylab = expression(X[2])) abline(a = 0, b = 1, lty = "dotted") title(main = paste("component nr.", h)) } wait() ## Use importance sampling with the mixture approximation ## as the importance density. outAdMitIS <- AdMitIS(KERNEL = GelmanMeng, mit = outAdMit$mit) print(outAdMitIS) wait() ## Use an alternative 'G' function in importance sampling ## for computing the covariance matrix. G.cov <- function(theta, mu) { G.cov_sub <- function(x) (x-mu) %*% t(x-mu) theta <- as.matrix(theta) tmp <- apply(theta, 1, G.cov_sub) if (length(mu) > 1) t(tmp) else as.matrix(tmp) } outAdMitIS <- AdMitIS(KERNEL = GelmanMeng, G = G.cov, mit = outAdMit$mit, mu = c(1.459, 1.459)) print(outAdMitIS) V <- matrix(outAdMitIS$ghat, 2, 2) print(V) cov2cor(V) wait() ## Use independence Metropolis-Hastings algorithm with ## the mixture approximation as the candidate density. outAdMitMH <- AdMitMH(KERNEL = GelmanMeng, mit = outAdMit$mit) print(outAdMitMH) wait() ## Use some functions of the package 'coda' to obtain summaries from the MCMC output. library("coda") draws <- as.mcmc(outAdMitMH$draws[1001:1e5,]) colnames(draws) <- c("X1", "X2") summary(draws)$stat summary(draws)$stat[,3]^2 / summary(draws)$stat[,4]^2 plot(draws) wait() ##_____________________________________________________________________ ## SIMPLE ECONOMETRIC EXAMPLE ## Simple econometric model: y_t ~ i.i.d. 
N(mu,sigma^2) ## Jeffreys prior p(theta) prop 1/sigma for sigma > 0 KERNEL <- function(theta, y, log = TRUE) { if (is.vector(theta)) theta <- matrix(theta, nrow = 1) KERNEL_sub <- function(thetai) { if (thetai[2] > 0) { r <- - log(thetai[2]) + sum(dnorm(y, thetai[1], thetai[2], TRUE)) } else { r <- -Inf } as.numeric(r) } r <- apply(theta, 1, KERNEL_sub) if (!log) r <- exp(r) as.numeric(r) } ## Generate 20 draws for mu = 1 and sigma = 0.5 set.seed(1234) y <- rnorm(20, 2.0, 0.5) par(mfrow = c(1,1)) plot(y) wait() ## Run AdMit (with default values); pass the vector y ## of observations using the ... argument of AdMit and ## print steps of the fitting process outAdMit <- AdMit(KERNEL, mu0 = c(1.0, 1.0), y = y, control = list(trace = TRUE)) print(outAdMit) wait() ## Use independence Metropolis-Hastings algorithm with ## the mixture approximation as the candidate density; pass the ## vector y of observations using the ... argument of AdMitMH outAdMitMH <- AdMitMH(KERNEL = KERNEL, mit = outAdMit$mit, y = y) print(outAdMitMH) wait() ## Use some functions of the package 'coda' to obtain summaries from the MCMC output. draws <- as.mcmc(outAdMitMH$draws[1001:1e5,]) colnames(draws) <- c("mu", "sigma") summary(draws)$stat summary(draws)$stat[,3]^2 / summary(draws)$stat[,4]^2 plot(draws) wait() ##_____________________________________________________________________ ## BAYESIAN ESTIMATION OF A MIXTURE OF ARCH(1) MODEL ## Define the prior density ## The function outputs a Nx2 matrix. The first column indicates whether the ## prior constraint is satisfied, the second returns the value of the prior PRIOR <- function(omega1, omega2, alpha, p, log = TRUE) { c1 <- (omega1 > 0.0 & omega2 > 0.0 & alpha >= 0.0) ## positivity constraint c2 <- (alpha < 1.0) ## stationarity constraint c3 <- (p > 0.0 & p < 1.0) ## U(0,1) prior on p c4 <- (omega1 < omega2) ## identification constraint r1 <- c1 & c2 & c3 & c4 r2 <- rep.int(-Inf, length(omega1)) tmp <- dnorm(omega1[r1==TRUE], 0.0, 2.0, log = TRUE) ## prior on omega1 tmp <- tmp + dnorm(omega2[r1==TRUE], 0.0, 2.0, log = TRUE) ## prior on omega2 r2[r1==TRUE] <- tmp + dnorm(alpha[r1==TRUE], 0.2, 0.5, log = TRUE) ## prior on alpha if (!log) r2 <- exp(r2) cbind(r1, r2) } wait() ## Define the kernel function ## The function takes a Nx4 matrix of parameters (theta), a vector of log-returns (y) ## It outputs the kernel value for the N parameters KERNEL <- function(theta, y, log = TRUE) { if (is.vector(theta)) theta <- matrix(theta, nrow = 1) N <- nrow(theta) ## compute the prior for the parameters prior <- PRIOR(theta[,1], theta[,2], theta[,3], theta[,4]) ## the kernel function is implemented in C in order to speed up the estimation d <- .C(name = "fnKernelMixtureArch_C", theta = as.double( as.vector(t(theta)) ), N = as.integer(N), y = as.double(y), n = as.integer(length(y)), prior = as.double( as.vector(t(prior)) ), d = vector("double", N), PACKAGE = "AdMit", NAOK = TRUE, DUP = FALSE)$d if (!log) d <- exp(d) as.vector(d) } wait() ## Load the data set library("fEcofin") data("dem2gbp") y <- dem2gbp[1:250,1] par(mfrow = c(1,1)) plot(y, type = "l", las = 1, ylab = "log-returns", xlab = "time index") wait() ## Maximize to find the mode of the kernel function NLL <- function(..., log = TRUE) -KERNEL(...) 
start <- c(0.1, 0.5, 0.1, 0.5) outML <- optim(par = start, fn = NLL, y = y, method = "Nelder-Mead", control = list(trace = 1, maxit = 5000)) ## print the mode round(outML$par, 4) ## __Adaptive mixture approach___ set.seed(1234) system.time( outAdMit <- AdMit(KERNEL, mu0 = outML$par, y = y, control = list(IS = TRUE, trace = TRUE)) ) print(outAdMit) ## ___Naive (unimodal Student-t approach) approach___ set.seed(1234) system.time( outAdMit <- AdMit(KERNEL, mu0 = outML$par, y = y, control = list(Hmax = 1)) ) print(outAdMit) ## Then, use the output of 'AdMit' to perform importance sampling or independence MH sampling ## __Importance sampling approach (for estimating the posterior mean)__ set.seed(1234) outAdMitIS <- AdMitIS(N = 50000, KERNEL = KERNEL, mit = outAdMit$mit, y = y) print(outAdMitIS) ## __Importance sampling approach (for estimating the posterior covariance matrix)__ set.seed(1234) ## !!! compile the G.cov function above (section 3) !!! outAdMitIS <- AdMitIS(N = 50000, KERNEL = KERNEL, G = G.cov, mit = outAdMit$mit, y = y, mu = outAdMitIS$ghat) print(outAdMitIS) ## posterior standard deviations sqrt(diag(matrix(outAdMitIS$ghat, 4, 4))) ## __Independence chain Metropolis-Hasting algorithm__ set.seed(1234) outAdMitMH <- AdMitMH(N = 51000, KERNEL = KERNEL, mit = outAdMit$mit, y = y) print(outAdMitMH$accept) draws <- outAdMitMH$draws[1001:nrow(outAdMitMH$draws),] colnames(draws) <- c("omega1", "omega2", "alpha", "p") ## ACF plots of the MCMC output par(mfrow = c(2,2)) par(cex.axis = 1.2, cex.lab = 1.2) acf(draws[,"omega1"], lag.max = 30, las = 1, main = expression(omega[1])) acf(draws[,"omega2"], lag.max = 30, las = 1, main = expression(omega[2])) acf(draws[,"alpha"], lag.max = 30, las = 1, main = expression(alpha)) acf(draws[,"p"], lag.max = 30, las = 1, main=expression(p)) ## ACF up to lag 10 apply(draws, 2, acf, plot = FALSE, lag.max = 10) ## use summary from package coda draws <- as.mcmc(draws) summary(draws)$stat ## RNE summary(draws)$stat[,3]^2 / summary(draws)$stat[,4]^2 ####################################################################### ## END OF THE DEMO ## Additional code in ./AdMitJSS.R and ./AdMitRnews.R files #######################################################################
/scratch/gouwar.j/cran-all/cranData/AdMit/demo/AdMit.R
#' Implementation of AdaSampling for positive unlabelled and label noise
#' learning.
#'
#' \code{adaSample()} applies the AdaSampling procedure to reduce noise
#' in the training set, and subsequently trains a classifier from
#' the new training set. For each row (observation) in the test set, it
#' returns the probabilities of it being a positive ("P") or negative
#' ("N") instance, as a two column data frame.
#'
#' \code{adaSample()} is an adaptive sampling-based noise reduction method
#' to deal with noisy class labelled data, which acts as a wrapper for
#' traditional classifiers, such as support vector machines,
#' k-nearest neighbours, logistic regression, and linear discriminant
#' analysis.
#'
#' This process builds up a noise-minimized training set by iteratively
#' resampling the training set (\code{train.mat}) based on probabilities
#' derived after its classification.
#'
#' This sampled training set is then used to train a classifier, which
#' is then executed on the test set. \code{adaSample()} returns a series of
#' predictions for each row of the test set.
#'
#' Note that this function does not evaluate the quality of the model
#' and thus does not compare its output to true values of the test set.
#' To assess model performance, please see \code{adaSvmBenchmark()}.
#'
#' @section References:
#' Yang, P., Liu, W., Yang, J. (2017) Positive unlabeled learning via wrapper-based
#' adaptive sampling. \emph{International Joint Conferences on Artificial Intelligence (IJCAI)}, 3272-3279
#'
#' Yang, P., Ormerod, J., Liu, W., Ma, C., Zomaya, A., Yang, J. (2018)
#' AdaSampling for positive-unlabeled and label noise learning with bioinformatics applications.
#' \emph{IEEE Transactions on Cybernetics}, doi:10.1109/TCYB.2018.2816984
#'
#' @param Ps names (each instance in the data has to be named) of positive examples
#' @param Ns names (each instance in the data has to be named) of negative examples
#' @param train.mat training data matrix, without class labels.
#' @param test.mat test data matrix, without class labels.
#' @param classifier classification algorithm to be used for learning. Current options are
#' support vector machine, \code{"svm"}, k-nearest neighbour, \code{"knn"}, logistic regression \code{"logit"},
#' linear discriminant analysis \code{"lda"}, and feature weighted knn, \code{"wKNN"}.
#' @param s sets the seed.
#' @param C sets how many times to run the classifier; C>1 induces an ensemble learning model.
#' @param sampleFactor provides a control on the sample size for resampling.
#' @param weights feature weights, required when using weighted knn.
#'
#' @return a two column matrix providing classification probabilities of each sample
#' with respect to positive and negative classes
#' @export
#' @examples
#' # Load the example dataset
#' data(brca)
#' head(brca)
#'
#' # First, clean up the dataset to transform into the required format.
#' brca.mat <- apply(X = brca[,-10], MARGIN = 2, FUN = as.numeric) #' brca.cls <- sapply(X = brca$cla, FUN = function(x) {ifelse(x == "malignant", 1, 0)}) #' rownames(brca.mat) <- paste("p", 1:nrow(brca.mat), sep="_") #' #' # Introduce 40% noise to positive class and 30% noise to the negative class #' set.seed(1) #' pos <- which(brca.cls == 1) #' neg <- which(brca.cls == 0) #' brca.cls.noisy <- brca.cls #' brca.cls.noisy[sample(pos, floor(length(pos) * 0.4))] <- 0 #' brca.cls.noisy[sample(neg, floor(length(neg) * 0.3))] <- 1 #' #' # Identify positive and negative examples from the noisy dataset #' Ps <- rownames(brca.mat)[which(brca.cls.noisy == 1)] #' Ns <- rownames(brca.mat)[which(brca.cls.noisy == 0)] #' #' # Apply AdaSampling method on the noisy data #' brca.preds <- adaSample(Ps, Ns, train.mat=brca.mat, test.mat=brca.mat, classifier = "knn") #' head(brca.preds) #' #' # Orignal accuracy from the labels #' accuracy <- sum(brca.cls.noisy == brca.cls) / length(brca.cls) #' accuracy #' #' # Accuracy after applying AdaSampling method #' accuracyWithAdaSample <- sum(ifelse(brca.preds[,"P"] > 0.5, 1, 0) == brca.cls) / length(brca.cls) #' accuracyWithAdaSample #' adaSample <- function(Ps, Ns, train.mat, test.mat, classifier="svm", s=1, C=1, sampleFactor=1, weights=NULL) { # checking the input if(ncol(train.mat) != ncol(test.mat)) {stop("train.mat and test.mat do not have the same number of columns")} # initialize sampling probablity pos.probs <- rep(1, length(Ps)) una.probs <- rep(1, length(Ns)) names(pos.probs) <- Ps names(una.probs) <- Ns i <- 0 while (i < 5) { # update count i <- i + 1 # training the predictive model model <- singleIter(Ps=Ps, Ns=Ns, dat=train.mat, pos.probs=pos.probs, una.probs=una.probs, seed=i, classifier=classifier, sampleFactor=sampleFactor, weights=weights) # update probability arrays pos.probs <- model[Ps, "P"] una.probs <- model[Ns, "N"] } pred <- singleIter(Ps=Ps, Ns=Ns, dat=train.mat, test=test.mat, pos.probs=pos.probs, una.probs=una.probs, seed=s, classifier=classifier, sampleFactor=sampleFactor, weights=weights) # if C is greater than 1, create an ensemble if (C > 1){ for (j in 2:C){ pred <- pred + singleIter(Ps=Ps, Ns=Ns, dat=train.mat, test=test.mat, pos.probs=pos.probs, una.probs=una.probs, seed=j, classifier=classifier, sampleFactor=sampleFactor, weights=weights) } pred <- pred/C } return(pred) }
/scratch/gouwar.j/cran-all/cranData/AdaSampling/R/adaSample.R
#' Benchmarking AdaSampling efficacy on noisy labelled data.
#'
#' \code{adaSvmBenchmark()} allows a comparison between the performance
#' of an AdaSampling-enhanced SVM (support vector machine)
#' classifier against the SVM classifier on its
#' own. It requires a matrix of features (extracted from a labelled dataset),
#' and two vectors of true labels and labels with noise added as desired.
#' It runs an SVM classifier and returns a matrix which displays the sensitivity
#' (Se), specificity (Sp) and F1 score for each of four conditions:
#' "Original" (classifying with true labels), "Baseline" (classifying with
#' noisy labels), "AdaSingle" (classifying using AdaSampling) and
#' "AdaEnsemble" (classifying using AdaSampling in conjunction with
#' an ensemble of models).
#'
#' AdaSampling is an adaptive sampling-based noise reduction method
#' to deal with noisy class labelled data, which acts as a wrapper for
#' traditional classifiers, such as support vector machines,
#' k-nearest neighbours, logistic regression, and linear discriminant
#' analysis. For more details see \code{?adaSample()}.
#'
#' This function evaluates the AdaSampling procedure by adding noise
#' to a labelled dataset, and then running support vector machines on
#' the original and the noisy dataset. Note that this function is for
#' benchmarking AdaSampling performance using what is assumed to be
#' a well-labelled dataset. In order to run AdaSampling on a noisy dataset,
#' please see \code{adaSample()}.
#'
#' @section References:
#' Yang, P., Liu, W., Yang, J. (2017) Positive unlabeled learning via wrapper-based
#' adaptive sampling. \emph{International Joint Conferences on Artificial Intelligence (IJCAI)}, 3272-3279
#'
#' Yang, P., Ormerod, J., Liu, W., Ma, C., Zomaya, A., Yang, J. (2018)
#' AdaSampling for positive-unlabeled and label noise learning with bioinformatics applications.
#' \emph{IEEE Transactions on Cybernetics}, doi:10.1109/TCYB.2018.2816984
#'
#' @import stats
#'
#' @param data.mat a rectangular matrix or data frame that can be
#' coerced to a matrix, containing the
#' features of the dataset, without class labels. Rownames (possibly
#' containing unique identifiers) will be ignored.
#' @param data.cls a numeric vector containing class labels for the dataset
#' with added noise.
#' Must be in the same order and of the same length as \code{data.mat}. Labels
#' must be 1 for positive observations, and 0 for negative observations.
#' @param data.cls.truth a numeric vector of true class labels for
#' the dataset. Must be in the same order and of the same length as \code{data.mat}. Labels must
#' be 1 for positive observations, and 0 for negative observations.
#' @param cvSeed sets the seed for cross-validation.
#' @param C sets how many times to run the classifier, for the AdaEnsemble
#' condition. See Description above.
#' @param sampleFactor provides a control on the sample size for resampling.
#' @return performance matrix
#' @export
#' @examples
#' # Load the example dataset
#' data(brca)
#' head(brca)
#'
#' # First, clean up the dataset to transform into the required format.
#' brca.mat <- apply(X = brca[,-10], MARGIN = 2, FUN = as.numeric) #' brca.cls <- sapply(X = brca$cla, FUN = function(x) {ifelse(x == "malignant", 1, 0)}) #' rownames(brca.mat) <- paste("p", 1:nrow(brca.mat), sep="_") #' #' # Introduce 40% noise to positive class and 30% noise to the negative class #' set.seed(1) #' pos <- which(brca.cls == 1) #' neg <- which(brca.cls == 0) #' brca.cls.noisy <- brca.cls #' brca.cls.noisy[sample(pos, floor(length(pos) * 0.4))] <- 0 #' brca.cls.noisy[sample(neg, floor(length(neg) * 0.3))] <- 1 #' #' # benchmark classification performance with different approaches #' \donttest{ #' adaSvmBenchmark(data.mat = brca.mat, data.cls = brca.cls.noisy, data.cls.truth = brca.cls, cvSeed=1) #' } #' adaSvmBenchmark <- function(data.mat, data.cls, data.cls.truth, cvSeed, C=50, sampleFactor=1){ ##############------------Helper functions--------------################################## ### evaluation function evaluate <- function(TN, FP, TP, FN, psd=TRUE, print=FALSE) { mat <- rbind(TN, FP, TP, FN) if (print == TRUE) { cat(round(mean(Se(mat)), digits=3)) cat(" ") if (psd==TRUE) { cat(round(sd(Se(mat)), digits=3)) cat(" ") } cat(round(mean(Sp(mat)), digits=3)) cat(" ") if (psd==TRUE) { cat(round(sd(Sp(mat)), digits=3)) cat(" ") } cat(round(mean(F1(mat)), digits=3)) cat(" ") if (psd==TRUE) { cat(round(sd(F1(mat)), digits=3)) cat(" ") } } return(c(round(mean(Se(mat)), digits=3), round(mean(Sp(mat)), digits=3), round(mean(F1(mat)), digits=3))) } ### Evaluation matrices # sensitivity Se <- function(mat) { apply(mat, 2, function(x) { TN <- x[1] FP <- x[2] TP <- x[3] FN <- x[4] TP/(TP+FN) }) } # specificity Sp <- function(mat) { apply(mat, 2, function(x) { TN <- x[1] FP <- x[2] TP <- x[3] FN <- x[4] TN/(FP+TN) }) } # F1 score F1 <- function(mat) { apply(mat, 2, function(x){ TN <- x[1] FP <- x[2] TP <- x[3] FN <- x[4] 2*TP/(2*TP+FP+FN) }) } ##############------------------------------------------################################## #Convert to factors eval <- matrix(NA, nrow=4, ncol=3) colnames(eval) <- c("Se", "Sp", "F1") rownames(eval) <- c("Original", "Baseline", "AdaSingle", "AdaEnsemble") k <- 5 set.seed(cvSeed) fold <- createFolds(data.cls.truth, k); # gold standard (orignal data) TP <- TN <- FP <- FN <- c() for(i in 1:length(fold)){ model <- svm(data.mat[-fold[[i]],], data.cls.truth[-fold[[i]]]) preds <- ifelse(predict(model, data.mat[fold[[i]],])> 0.5, 1, 0) TP <- c(TP, sum((data.cls.truth[fold[[i]]] == preds)[data.cls.truth[fold[[i]]] == "1"])) TN <- c(TN, sum((data.cls.truth[fold[[i]]] == preds)[data.cls.truth[fold[[i]]] == "0"])) FP <- c(FP, sum((data.cls.truth[fold[[i]]] != preds)[preds == "1"])) FN <- c(FN, sum((data.cls.truth[fold[[i]]] != preds)[preds == "0"])) } eval[1,] <- evaluate(TN, FP, TP, FN, psd=FALSE) # without correction TP <- TN <- FP <- FN <- c() for(i in 1:length(fold)){ model <- svm(data.mat[-fold[[i]],], data.cls[-fold[[i]]]) preds <- ifelse(predict(model, data.mat[fold[[i]],])> 0.5, 1, 0) TP <- c(TP, sum((data.cls.truth[fold[[i]]] == preds)[data.cls.truth[fold[[i]]] == "1"])) TN <- c(TN, sum((data.cls.truth[fold[[i]]] == preds)[data.cls.truth[fold[[i]]] == "0"])) FP <- c(FP, sum((data.cls.truth[fold[[i]]] != preds)[preds == "1"])) FN <- c(FN, sum((data.cls.truth[fold[[i]]] != preds)[preds == "0"])) } eval[2,] <- evaluate(TN, FP, TP, FN, psd=FALSE) # single classifier AdaSampling TP <- TN <- FP <- FN <- c() for (i in 1:length(fold)) { Ps <- rownames(data.mat[-fold[[i]],])[which(data.cls[-fold[[i]]] == 1)] Ns <- 
rownames(data.mat[-fold[[i]],])[which(data.cls[-fold[[i]]] == 0)] pred <- adaSample(Ps, Ns, data.mat[-fold[[i]],], test.mat=data.mat[fold[[i]],], classifier="svm", sampleFactor)[,"P"] TP <- c(TP, sum(pred > 0.5 & data.cls.truth[fold[[i]]] == 1)) TN <- c(TN, sum(pred < 0.5 & data.cls.truth[fold[[i]]] == 0)) FP <- c(FP, sum(pred > 0.5 & data.cls.truth[fold[[i]]] == 0)) FN <- c(FN, sum(pred < 0.5 & data.cls.truth[fold[[i]]] == 1)) } eval[3,] <- evaluate(TN, FP, TP, FN, psd=FALSE) # ensemble classifier AdaSampling TP <- TN <- FP <- FN <- c() for (i in 1:length(fold)) { Ps <- rownames(data.mat[-fold[[i]],])[which(data.cls[-fold[[i]]] == 1)] Ns <- rownames(data.mat[-fold[[i]],])[which(data.cls[-fold[[i]]] == 0)] pred <- adaSample(Ps, Ns, data.mat[-fold[[i]],], test.mat=data.mat[fold[[i]],], classifier="svm", C=C, sampleFactor)[,"P"] TP <- c(TP, sum(pred > 0.5 & data.cls.truth[fold[[i]]] == 1)) TN <- c(TN, sum(pred < 0.5 & data.cls.truth[fold[[i]]] == 0)) FP <- c(FP, sum(pred > 0.5 & data.cls.truth[fold[[i]]] == 0)) FN <- c(FN, sum(pred < 0.5 & data.cls.truth[fold[[i]]] == 1)) } eval[4,] <- evaluate(TN, FP, TP, FN, psd=FALSE) return(eval) }
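## Explanatory note (added; not part of the original source): the entries of the
## returned 'eval' matrix are derived from the fold-wise confusion counts via the
## helper functions defined above,
##   Se = TP / (TP + FN),   Sp = TN / (TN + FP),   F1 = 2*TP / (2*TP + FP + FN),
## and then averaged over the k = 5 cross-validation folds by 'evaluate'.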
/scratch/gouwar.j/cran-all/cranData/AdaSampling/R/adaSvmBenchmark.R
#' Wisconsin Breast Cancer Database (1991) #' #' A cleaned version of the original Wisconsin Breast Cancer #' dataset containing histological information about 683 #' breast cancer samples collected from patients at the #' University of Wisconsin Hospitals, Madison by #' Dr. William H. Wolberg between January #' 1989 and November 1991. #' #' @format A data frame with 683 rows and 10 variables: #' \describe{ #' \item{clt}{Clump thickness, 1 - 10} #' \item{ucs}{Uniformity of cell size, 1 - 10} #' \item{uch}{Uniformity of cell shape, 1 - 10} #' \item{mad}{Marginal adhesion, 1 - 10} #' \item{ecs}{Single epithelial cell size, 1 - 10} #' \item{nuc}{Bare nuclei, 1 - 10} #' \item{chr}{Bland chromatin, 1 - 10} #' \item{ncl}{Normal nucleoli, 1 - 10} #' \item{mit}{Mitoses, 1 - 10} #' \item{cla}{Class, benign or malignant} #' #' } #' @source #' \url{https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data} #' #'@references O. L. Mangasarian and W. H. Wolberg: "Cancer diagnosis via linear #' programming", \emph{SIAM News}, Volume 23, Number 5, September 1990, pp 1 & 18. #' "brca"
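## Illustrative sketch (added; not part of the original source): a quick look at the
## documented structure of 'brca' after loading it from the package.
# data(brca)
# dim(brca)         # 683 rows, 10 columns, as documented above
# table(brca$cla)   # counts of the 'benign' and 'malignant' classes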
/scratch/gouwar.j/cran-all/cranData/AdaSampling/R/data.R
#' \code{singleIter()} applies a single iteration of the AdaSampling procedure. It
#' returns the probabilities of all samples as being a positive (P) or negative
#' (N) instance, as a two column data frame.
#'
#' Classification algorithms included are support vector machines (svm),
#' k-nearest neighbours (knn), logistic regression (logit), linear discriminant
#' analysis (lda), feature weighted knn (wKNN).
#' @section References:
#' Yang, P., Liu, W., Yang, J. (2017) Positive unlabeled learning via wrapper-based
#' adaptive sampling. \emph{International Joint Conferences on Artificial Intelligence (IJCAI)}, 3272-3279
#'
#' Yang, P., Ormerod, J., Liu, W., Ma, C., Zomaya, A., Yang, J. (2018)
#' AdaSampling for positive-unlabeled and label noise learning with bioinformatics applications.
#' \emph{IEEE Transactions on Cybernetics}, doi:10.1109/TCYB.2018.2816984
#'
#' @import class
#' @import e1071
#' @import MASS
#' @import caret
#'
#' @param Ps names (name as index) of positive examples
#' @param Ns names (name as index) of negative examples
#' @param dat training data matrix, without class labels.
#' @param test test data matrix, without class labels.
#' Training data matrix will be used for testing if this is NULL (default).
#' @param pos.probs a numeric vector containing the probability of each positive example being positive
#' @param una.probs a numeric vector containing the probability of each negative or unannotated example being negative
#' @param classifier classification algorithm to be used for learning. Current options are
#' support vector machine, \code{"svm"}, k-nearest neighbour, \code{"knn"}, logistic regression \code{"logit"},
#' linear discriminant analysis \code{"lda"}, and feature weighted knn, \code{"wKNN"}.
#' @param sampleFactor provides a control on the sample size for resampling.
#' @param seed sets the seed.
#' @param weights feature weights, required when using weighted knn.
#' @export singleIter <- function(Ps, Ns, dat, test=NULL, pos.probs=NULL, una.probs=NULL, classifier="svm", sampleFactor, seed, weights) { set.seed(seed); positive.train <- c() positive.cls <- c() # determine the proper sample size for creating a balanced dataset sampleN <- ifelse(length(Ps) < length(Ns), length(Ps), length(Ns)) # bootstrap sampling to build the positive training set (labeled as 'P') idx.pl <- unique(sample(x=Ps, size=sampleFactor*sampleN, replace=TRUE, prob=pos.probs[Ps])) positive.train <- dat[idx.pl,] positive.cls <- rep("P", nrow(positive.train)) # bootstrap sampling to build the "unannotate" or "negative" training set (labeled as 'N') idx.dl <- unique(sample(x=Ns, size=sampleFactor*sampleN, replace=TRUE, prob=una.probs[Ns])) unannotate.train <- dat[idx.dl,] unannotate.cls <- rep("N", nrow(unannotate.train)) # combine data train.sample <- rbind(positive.train, unannotate.train) rownames(train.sample) <- NULL; cls <- as.factor(c(positive.cls, unannotate.cls)) # training svm classifier if (classifier == "svm") { model.svm <- svm(train.sample, cls, probability=TRUE, scale=TRUE); svm.pred <- c(); if (is.null(test)) { svm.pred <- predict(model.svm, dat, decision.values=TRUE, probability=TRUE); } else { svm.pred <- predict(model.svm, test, decision.values=TRUE, probability=TRUE); } return(attr(svm.pred,"probabilities")); } else if (classifier == "knn") { # training knn classifier if (is.null(test)) { knn.fit <- knn(train.sample, dat, cl=cls, k=5, prob=TRUE) p <- attr(knn.fit, "prob") idx <- which(knn.fit == "N") p[idx] <- 1- p[idx] knn.pred <- cbind(p, 1 - p) colnames(knn.pred) <- c("P", "N") rownames(knn.pred) <- rownames(dat) return(knn.pred) } else { test.mat <- test rownames(test.mat) <- NULL knn.fit <- knn(train.sample, test.mat, cl=cls, k=5, prob=TRUE) p <- attr(knn.fit, "prob") idx <- which(knn.fit == "N") p[idx] <- 1- p[idx] knn.pred <- cbind(p, 1 - p) colnames(knn.pred) <- c("P", "N") rownames(knn.pred) <- rownames(test) return(knn.pred) } } else if (classifier == "logit") { logit.model <- glm(cls~., family=binomial(link='logit'), data=data.frame(train.sample, cls)) if (is.null(test)) { p <- predict(logit.model, newdata=data.frame(dat), type='response') logit.pred <- cbind(p, 1-p) colnames(logit.pred) <- c("P", "N") rownames(logit.pred) <- rownames(dat) return(logit.pred) } else { test.mat <- data.frame(test) rownames(test.mat) <- NULL colnames(test.mat) <- colnames(dat) p <- predict(logit.model, newdata=test.mat, type='response') logit.pred <- cbind(p, 1-p) colnames(logit.pred) <- c("P", "N") rownames(logit.pred) <- rownames(test) return(logit.pred) } } else if (classifier == "lda") { lda.model <- MASS::lda(cls~., data=data.frame(train.sample, cls)) if (is.null(test)) { lda.pred <- predict(lda.model, data.frame(dat))$posterior colnames(lda.pred) <- c("N", "P") rownames(lda.pred) <- rownames(dat) return(lda.pred) } else { test.mat <- data.frame(test) rownames(test.mat) <- NULL colnames(test.mat) <- colnames(dat) lda.pred <- predict(lda.model, test.mat)$posterior colnames(lda.pred) <- c("N", "P") rownames(lda.pred) <- rownames(test) return(lda.pred) } } else if (classifier == "wKNN") { # training a modified knn classifier if (is.null(weights)) { stop("need to specify weights for using weighted knn!"); } if (is.null(test)) { wKNN.pred <- weightedKNN(train.sample, dat, cl=cls, k=3, weights) rownames(wKNN.pred) <- rownames(dat) return(wKNN.pred) } else { test.mat <- test wKNN.pred <- weightedKNN(train.sample, test.mat, cl=cls, k=3, weights) rownames(wKNN.pred) <- 
rownames(test) return(wKNN.pred) } } }
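## Illustrative sketch (added; not part of the original source): one AdaSampling
## iteration called directly with uniform initial probabilities. 'feat.mat' and
## 'cls' are hypothetical stand-ins for a named feature matrix and its 0/1 labels.
# Ps <- rownames(feat.mat)[cls == 1]
# Ns <- rownames(feat.mat)[cls == 0]
# pos.probs <- rep(1, length(Ps)); names(pos.probs) <- Ps
# una.probs <- rep(1, length(Ns)); names(una.probs) <- Ns
# probs <- singleIter(Ps = Ps, Ns = Ns, dat = feat.mat,
#                     pos.probs = pos.probs, una.probs = una.probs,
#                     classifier = "knn", sampleFactor = 1, seed = 1, weights = NULL)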
/scratch/gouwar.j/cran-all/cranData/AdaSampling/R/singleIter.R
#' Implementation of a feature weighted k-nearest neighbour classifier.
#'
#' @param train.mat training data matrix, without class labels.
#' @param test.mat test data matrix, without class labels.
#' @param cl class labels for training data.
#' @param k number of nearest neighbours to be used.
#' @param weights weights to be assigned to each feature.
#'
#' @export
#'
weightedKNN <- function(train.mat, test.mat, cl, k=3, weights){

  # Calculate the feature-weighted squared-distance matrix between training (rows) and test (columns) instances
  Ds <- (train.mat^2)%*%weights%*%t(rep(1,nrow(test.mat))) + t((test.mat^2)%*%weights%*%t(rep(1,nrow(train.mat)))) -  2*(train.mat*(rep(1,nrow(train.mat)))%*%t(sqrt(weights)))%*%t(test.mat*(rep(1,nrow(test.mat)))%*%t(sqrt(weights)))

  # Calculate prediction: for each test instance, vote among its k nearest training
  # instances and return the per-class vote proportions
  u <- sort(unique(cl))
  preds <- t(apply(Ds, 2, function(x)table(cl[order(x)][1:k])[as.character(u)]/k))
  colnames(preds) = u
  return(preds)
}
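## Illustrative sketch (added; not part of the original source): with equal weights the
## weighted distance reduces to an ordinary squared-Euclidean kNN vote. 'train.mat',
## 'test.mat' and 'train.cls' are hypothetical inputs in the format described above.
# w <- rep(1, ncol(train.mat))   # one weight per feature
# preds <- weightedKNN(train.mat, test.mat, cl = train.cls, k = 3, weights = w)
# head(preds)                    # per-class vote fractions for each test row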
/scratch/gouwar.j/cran-all/cranData/AdaSampling/R/weightedKNN.R
## ----setup, include = FALSE---------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(AdaSampling) data(brca) ## ----preview------------------------------------------------------------- head(brca) ## ----prelim-------------------------------------------------------------- brca.mat <- apply(X = brca[,-10], MARGIN = 2, FUN = as.numeric) brca.cls <- sapply(X = brca$cla, FUN = function(x) {ifelse(x == "malignant", 1, 0)}) rownames(brca.mat) <- paste("p", 1:nrow(brca.mat), sep="_") ## ----examinedata--------------------------------------------------------- table(brca.cls) brca.cls ## ----noise--------------------------------------------------------------- set.seed(1) pos <- which(brca.cls == 1) neg <- which(brca.cls == 0) brca.cls.noisy <- brca.cls brca.cls.noisy[sample(pos, floor(length(pos) * 0.4))] <- 0 brca.cls.noisy[sample(neg, floor(length(neg) * 0.3))] <- 1 ## ----examinenoisy-------------------------------------------------------- table(brca.cls.noisy) brca.cls.noisy ## ----ada----------------------------------------------------------------- Ps <- rownames(brca.mat)[which(brca.cls.noisy == 1)] Ns <- rownames(brca.mat)[which(brca.cls.noisy == 0)] brca.preds <- adaSample(Ps, Ns, train.mat=brca.mat, test.mat=brca.mat, classifier = "knn", C= 1, sampleFactor = 1) head(brca.preds) accuracy <- sum(brca.cls.noisy == brca.cls) / length(brca.cls) accuracy accuracyWithAdaSample <- sum(ifelse(brca.preds[,"P"] > 0.5, 1, 0) == brca.cls) / length(brca.cls) accuracyWithAdaSample ## ------------------------------------------------------------------------ adaSvmBenchmark(data.mat = brca.mat, data.cls = brca.cls.noisy, data.cls.truth = brca.cls, cvSeed=1)
/scratch/gouwar.j/cran-all/cranData/AdaSampling/inst/doc/vignette.R
--- title: "Breast cancer classification with AdaSampling" author: "Pengyi Yang (original version by Dinuka Perera)" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Breast cancer classification with AdaSampling} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(AdaSampling) data(brca) ``` Here we will examine how AdaSampling works on the Wisconsin Breast Cancer dataset, `brca`, from the UCI Machine Learning Repository and included as part of this package. For more information about the variables, try `?brca`. This dataset contains ten features, with an eleventh column containing the class labels, *malignant* or *benign*. ```{r preview} head(brca) ``` First, clean up the dataset to transform into the required format. ```{r prelim} brca.mat <- apply(X = brca[,-10], MARGIN = 2, FUN = as.numeric) brca.cls <- sapply(X = brca$cla, FUN = function(x) {ifelse(x == "malignant", 1, 0)}) rownames(brca.mat) <- paste("p", 1:nrow(brca.mat), sep="_") ``` Examining this dataset shows balanced proportions of classes. ```{r examinedata} table(brca.cls) brca.cls ``` In order to demonstrate how AdaSampling eliminates noisy class label data it will be necessary to introduce some noise into this dataset, by randomly flipping a selected number of class labels. More noise will be added to the positive observations. ```{r noise} set.seed(1) pos <- which(brca.cls == 1) neg <- which(brca.cls == 0) brca.cls.noisy <- brca.cls brca.cls.noisy[sample(pos, floor(length(pos) * 0.4))] <- 0 brca.cls.noisy[sample(neg, floor(length(neg) * 0.3))] <- 1 ``` Examining the noisy class labels reveals noise has been added: ```{r examinenoisy} table(brca.cls.noisy) brca.cls.noisy ``` We can now run AdaSampling on this data. For more information use `?adaSample()`. ```{r ada} Ps <- rownames(brca.mat)[which(brca.cls.noisy == 1)] Ns <- rownames(brca.mat)[which(brca.cls.noisy == 0)] brca.preds <- adaSample(Ps, Ns, train.mat=brca.mat, test.mat=brca.mat, classifier = "knn", C= 1, sampleFactor = 1) head(brca.preds) accuracy <- sum(brca.cls.noisy == brca.cls) / length(brca.cls) accuracy accuracyWithAdaSample <- sum(ifelse(brca.preds[,"P"] > 0.5, 1, 0) == brca.cls) / length(brca.cls) accuracyWithAdaSample ``` The table gives the prediction probability for both a positive ("P") and negative ("N") class label for each row of the test set. In order to compare the improvement in performance of adaSample against learning without resampling, use the `adaSvmBenchmark()` function. In order to see how effective `adaSample()` is at removing noise, we will use the `adaSvmBenchmark()` function to compare its performance to a regular classification process. This procedure compares classification across four conditions, firstly using the original dataset (with correct label information), the second with the noisy dataset (but without AdaSampling), the third with AdaSampling, and the fourth utilising AdaSampling multiple times in the form of an ensemble learning model. ```{r} adaSvmBenchmark(data.mat = brca.mat, data.cls = brca.cls.noisy, data.cls.truth = brca.cls, cvSeed=1) ```
/scratch/gouwar.j/cran-all/cranData/AdaSampling/inst/doc/vignette.Rmd
--- title: "Breast cancer classification with AdaSampling" author: "Pengyi Yang (original version by Dinuka Perera)" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Breast cancer classification with AdaSampling} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(AdaSampling) data(brca) ``` Here we will examine how AdaSampling works on the Wisconsin Breast Cancer dataset, `brca`, from the UCI Machine Learning Repository and included as part of this package. For more information about the variables, try `?brca`. This dataset contains ten features, with an eleventh column containing the class labels, *malignant* or *benign*. ```{r preview} head(brca) ``` First, clean up the dataset to transform into the required format. ```{r prelim} brca.mat <- apply(X = brca[,-10], MARGIN = 2, FUN = as.numeric) brca.cls <- sapply(X = brca$cla, FUN = function(x) {ifelse(x == "malignant", 1, 0)}) rownames(brca.mat) <- paste("p", 1:nrow(brca.mat), sep="_") ``` Examining this dataset shows balanced proportions of classes. ```{r examinedata} table(brca.cls) brca.cls ``` In order to demonstrate how AdaSampling eliminates noisy class label data it will be necessary to introduce some noise into this dataset, by randomly flipping a selected number of class labels. More noise will be added to the positive observations. ```{r noise} set.seed(1) pos <- which(brca.cls == 1) neg <- which(brca.cls == 0) brca.cls.noisy <- brca.cls brca.cls.noisy[sample(pos, floor(length(pos) * 0.4))] <- 0 brca.cls.noisy[sample(neg, floor(length(neg) * 0.3))] <- 1 ``` Examining the noisy class labels reveals noise has been added: ```{r examinenoisy} table(brca.cls.noisy) brca.cls.noisy ``` We can now run AdaSampling on this data. For more information use `?adaSample()`. ```{r ada} Ps <- rownames(brca.mat)[which(brca.cls.noisy == 1)] Ns <- rownames(brca.mat)[which(brca.cls.noisy == 0)] brca.preds <- adaSample(Ps, Ns, train.mat=brca.mat, test.mat=brca.mat, classifier = "knn", C= 1, sampleFactor = 1) head(brca.preds) accuracy <- sum(brca.cls.noisy == brca.cls) / length(brca.cls) accuracy accuracyWithAdaSample <- sum(ifelse(brca.preds[,"P"] > 0.5, 1, 0) == brca.cls) / length(brca.cls) accuracyWithAdaSample ``` The table gives the prediction probability for both a positive ("P") and negative ("N") class label for each row of the test set. In order to compare the improvement in performance of adaSample against learning without resampling, use the `adaSvmBenchmark()` function. In order to see how effective `adaSample()` is at removing noise, we will use the `adaSvmBenchmark()` function to compare its performance to a regular classification process. This procedure compares classification across four conditions, firstly using the original dataset (with correct label information), the second with the noisy dataset (but without AdaSampling), the third with AdaSampling, and the fourth utilising AdaSampling multiple times in the form of an ensemble learning model. ```{r} adaSvmBenchmark(data.mat = brca.mat, data.cls = brca.cls.noisy, data.cls.truth = brca.cls, cvSeed=1) ```
/scratch/gouwar.j/cran-all/cranData/AdaSampling/vignettes/vignette.Rmd
#' Adaptive Rejection Sampling Algorithm #' #' rARS generates a sequence of random numbers using the adaptive rejection sampling algorithm. #' #' @param n Desired sample size; #' @param formula Kernal of the target density; #' @param min,max Domain including positive and negative infinity of the target distribution; #' @param sp Supporting set. #' @export #' @author Dong Zhang <\url{[email protected]}> #' #' @examples #' #' # Example 1: Standard normal distribution #' x1 <- rARS(100,"exp(-x^2/2)",-Inf,Inf,c(-2,2)) #' #' # Example 2: Truncated normal distribution #' x2 <- rARS(100,"exp(-x^2/2)",-2.1,2.1,c(-2,2)) #' #' # Example 3: Normal distribution with mean=2 and sd=2 #' x3 <- rARS(100,"exp(-(x-2)^2/(2*4))",-Inf,Inf,c(-3,3)) #' #' # Example 4: Exponential distribution with rate=3 #' x4 <- rARS(100,"exp(-3*x)",0,Inf,c(2,3,100)) #' #' # Example 5: Beta distribution with alpha=3 and beta=4 #' x5 <- rARS(100,"x^2*(1-x)^3",0,1,c(0.4,0.6)) #' #' # Example 6: Gamma distribution with alpha=5 and lambda=2 #' x6 <- rARS(100,"x^(5-1)*exp(-2*x)",0,Inf,c(1,10)) #' #' # Example 7: Student distribution with df=10 #' x7 <- rARS(100,"(1+x^2/10)^(-(10+1)/2)",-Inf,Inf,c(-10,2)) #' #' # Example 8: F distribution with m=10 and n=5 #' x8 <- rARS(100,"x^(10/2-1)/(1+10/5*x)^(15/2)",0,Inf,c(3,10)) #' #' # Example 9:Cauchy distribution #' x9 <- rARS(100,"1/(1+(x-1)^2)",-Inf,Inf,c(-2,2,10)) #' #' # Example 10:Rayleigh distribution with lambda=1 #' x10 <- rARS(100,"2*x*exp(-x^2)",0,Inf,c(0.01,10)) #' rARS <- function(n,formula,min=-Inf,max=Inf,sp){ sp <- sort(sp) if(!is.character(formula)) stop("Unsuitable density function.") if (n<=0) stop("Unsuitable sample size.") if(min >= max) stop("Unsuitable domain.") p <- function(x){eval(parse(text=formula))} V <- function(x){-log(p(x))} x_final <- numeric(n) for(j in 1:n){ Support <- sp if (!identical(Support,sort(Support))) stop("Put the supporting points in ascending order.") u=0 compareprop=-1 while(u>compareprop){ tangent <- fderiv(V,Support,1) crosspoint=numeric(length(Support)+1) crosspoint[1]=min crosspoint[length(crosspoint)]=max crossvalue=numeric(length(Support)-1) for( i in 1:(length(Support)-1)){ A=matrix(c(tangent[i],-1,tangent[i+1],-1),nrow=2,byrow=T) b=c(tangent[i]*Support[i]-V(Support)[i],tangent[i+1]*Support[i+1]-V(Support)[i+1]) solve(A,b) crosspoint[i+1]=solve(A,b)[1] crossvalue[i]=solve(A,b)[2] } IntSum <- numeric(length(Support)) for (i in 1:length(IntSum)){ expfun=function(x){ exp(-tangent[i]*(x-Support[i])-V(Support)[i]) } IntSum[i]= integrate(expfun,crosspoint[i],crosspoint[i+1])[[1]] } rdm <- runif(1) cum=c(0, cumsum(IntSum/sum(IntSum))) idx <- which(rdm<cumsum(IntSum/sum(IntSum)))[1] x_star <- log((rdm-cum[idx]+exp(tangent[idx]*Support[idx]-V(Support)[idx])* exp(-tangent[idx]*crosspoint[idx])/sum(IntSum)/(-tangent[idx]))* sum(IntSum)*(-tangent[idx])/exp(tangent[idx]*Support[idx]-V(Support)[idx]))/(-tangent[idx]) u <- runif(1) compareprop <- p(x_star)/exp(-tangent[idx]*(x_star-Support[idx])-V(Support)[idx]) Support <- sort(c(Support,x_star)) } x_final[j]=x_star } x_final }
/scratch/gouwar.j/cran-all/cranData/AdapSamp/R/rARS.R
#' Adaptive Slice Sampling Algorithm With Stepping-Out Procedures
#'
#' rASS generates a sequence of random numbers by the adaptive slice sampling algorithm with stepping-out procedures.
#' @param n Desired sample size;
#' @param x0 Initial value;
#' @param formula Target density function p(x);
#' @param w Length of the coverage interval.
#' @references Neal R M. Slice sampling - Rejoinder[J]. Annals of Statistics, 2003, 31(3):758-767.
#' @author Dong Zhang <\url{[email protected]}>
#'
#' @export
#'
#' @examples
#'
#' # Example 1: Sampling from a bimodal density of exponential form with effectively bounded domain
#' x<-rASS(100,-1,"1.114283*exp(-(4-x^2)^2)",3)
#' plot(density(x))
#'
rASS <- function(n,x0=0,formula,w=3){
  f <- function(x){eval(parse(text=formula))}
  x_final=NULL
  Slice=NULL
  x_final[1]=x0
  for (i in 1:n){
    # draw the auxiliary "slice" level uniformly under the density at the current point
    Slice[i]=runif(1,0,f(x_final[i]))
    # place an interval of width w randomly around the current point
    left=x_final[i]-runif(1,0,w)
    right=left+w
    # stepping-out: expand the interval until both endpoints lie outside the slice
    while(!((f(left)<Slice[i])&(f(right)<Slice[i]))){
      left=left-w
      right=right+w
    }
    # shrinkage: sample uniformly within the interval, shrinking it towards the
    # current point whenever the proposal falls outside the slice
    x=runif(1,left,right)
    while(f(x)<Slice[i]){
      if(x>x_final[i]) {right=x} else {left=x}
      x=runif(1,left,right)
    }
    if(f(x)>Slice[i]){x_final[i+1]=x}
  }
  return(x_final)
}
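## Illustrative sketch (added; not part of the original source): the same sampler
## applied to a standard-normal kernel, started at 0 with window width w = 2;
## the specific values are arbitrary.
# y <- rASS(1000, x0 = 0, formula = "exp(-x^2/2)", w = 2)
# hist(y, probability = TRUE, breaks = 30)
# curve(dnorm(x), add = TRUE, col = "red")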
/scratch/gouwar.j/cran-all/cranData/AdapSamp/R/rASS.R
#' Concave-Convex Adaptive Rejection Sampling Algorithm
#'
#' rCCARS generates a sequence of random numbers by the concave-convex adaptive rejection sampling algorithm from target distributions with bounded domain.
#'
#' @param n Desired sample size;
#' @param cvformula,ccformula Convex and concave decompositions for -ln(p(x)) where p(x) is the kernel of the target density;
#' @param min,max Domain, excluding positive and negative infinity;
#' @param sp Supporting set
#' @details Strictly speaking, the concave-convex adaptive rejection sampling algorithm can generate samples from target distributions that have bounded domains. For distributions with unbounded domain, rCCARS can still be used for approximate sampling. For example, suppose we want to draw a sequence from N(0,1) by the concave-convex adaptive rejection sampling algorithm. X~N(0,1) has so little probability mass in its tails that the parts beyond both ends can be ignored: Pr(X>20) = Pr(X<-20) = 2.753624e-89, so we can draw random numbers approximately from N(0,1) with the bound [-20,20]. You can also make this bound large enough to reduce the sampling error.
#' @author Dong Zhang <\url{[email protected]}>
#' @references Teh Y W. Concave-Convex Adaptive Rejection Sampling[J]. Journal of Computational & Graphical Statistics, 2011, 20(3):670-691.
#' @export
#' @examples
#'
#' # Example 1: Bounded generalized inverse Gaussian distribution with lambda=-1 and a=b=2
#' x<-rCCARS(100,"x+x^-1","2*log(x)",0.001,100,1)
#' hist(x,breaks=20,probability=TRUE);lines(density(x,bw=0.1),col="red",lwd=2,lty=2)
#' f <- function(x) {x^(-2)*exp(-x-x^(-1))/0.2797318}
#' lines(seq(0,5,0.01),f(seq(0,5,0.01)),lwd=2,lty=3,col="blue")
#'
#' #The following examples are also available;
#' #But it may take a few minutes to run them.
#' #' # Example 2: Expontional bounded distribution #' # x<-rCCARS(1000,"x^4","-8*x^2+16",-3,4,c(-2,1)) #' # hist(x,breaks=30,probability=TRUE);lines(density(x,bw=0.05),col="blue",lwd=2,lty=2) #' # f <- function(x) exp(-(x^2-4)^2)/ 0.8974381 #' # lines(seq(-3,4,0.01),f(seq(-3,4,0.01)),col="red",lty=3,lwd=2) #' #' # Example 3: Makeham bounded distribution #' # x<-rCCARS(1000,"x+1/log(2)*(2^x-1)","-log(1+2^x)",0,5,c(1,2,3)) #' # hist(x,breaks=30,probability=TRUE);lines(density(x,bw=0.05),col="blue",lwd=2,lty=2) #' # f <- function(x){(1+2^x)*exp(-x-1/log(2)*(2^x-1))} #' # lines(seq(0,5,0.01),f(seq(0,5,0.01)),col="red",lty=3,lwd=2,type="l") #' rCCARS <- function(n,cvformula,ccformula,min,max,sp){ p <- function(x){eval(parse(text=paste("exp(-(",cvformula,")-(",ccformula,"))",sp="")))} x_final <- numeric(n) for( k in 1:n){ support <- sp xrange <- c(min,max) u=0 prop=-1 while(u>prop){ allpt <- sort(c(xrange,support)) convex <- function(x){eval(parse(text=cvformula))} drv1or<- function(x){eval(D(parse(text=cvformula),"x"))} der <- drv1or(allpt) crossx <- c() crossy <- c() for(i in 1:(length(allpt)-1)){ A <- matrix(c(der[i],-1,der[i+1],-1),nrow=2,byrow=1) b <- c(der[i]*allpt[i]-convex(allpt)[i],der[i+1]*allpt[i+1]-convex(allpt)[i+1]) crossx[i] <- solve(A,b)[1] crossy[i] <- solve(A,b)[2] } rubbish1 <- data.frame(X=c(crossx,allpt),Y=c(crossy,convex(allpt))) xconvex <- c(rubbish1[order(rubbish1$X),][1])$X yconvex <- c(rubbish1[order(rubbish1$X),][2])$Y tan1<- numeric(length(xconvex)-1) int1<- numeric(length(xconvex)-1) for (i in 1:length(tan1)){ tan1[i] <- (yconvex[i+1]-yconvex[i])/(xconvex[i+1]-xconvex[i]) int1[i] <- (yconvex[i+1]-yconvex[i])/(xconvex[i+1]-xconvex[i])*(-xconvex[i])+yconvex[i] } concave <- function(x){eval(parse(text=ccformula))} tan2 <- numeric(length(allpt)-1) for(i in 1:length(tan2)){ tan2[i] <- (concave(allpt[i+1])-concave(allpt[i]))/(allpt[i+1]-allpt[i]) } int2<- numeric(length(allpt)-1) for(i in 1:length(tan2)){ int2[i] <- -tan2[i]*allpt[i]+concave(allpt[i]) } xconcave <- rep(allpt[1:length(allpt)-1],rep(2,length(allpt)-1)) yconcave <- concave(xconcave) tan2 <- rep(tan2,rep(2,length(tan2))) int2 <- rep(int2,rep(2,length(int2))) IntSum <- numeric(length(tan2)) for(i in 1:length(tan2)){ fun <- function(x){ exp(-(tan1[i]+tan2[i])*x-int1[i]-int2[i]) } IntSum[i] <- integrate(fun,xconvex[i],xconvex[i+1])[[1]] } cum=c(0,cumsum(IntSum/sum(IntSum))) rdm <- runif(1) idx <- which(rdm<cumsum(IntSum/sum(IntSum)))[1] x_star <- log(((rdm-cum[idx])*(-tan1[idx]-tan2[idx])*sum(IntSum))*exp(int1[idx]+int2[idx])+exp(-(tan1[idx]+tan2[idx])*xconvex[idx]))/(-tan2[idx]-tan1[idx]) u <- runif(1) prop=p(x_star)/exp(-(tan1[idx]+tan2[idx])*x_star-int1[idx]-int2[idx]) support <- sort(c(x_star,support)) } x_final[k]=x_star } x_final }
/scratch/gouwar.j/cran-all/cranData/AdapSamp/R/rCCARS.R
#' Modified Adaptive Rejection Sampling Algorithm #' #' rMARS generates a sequence of random numbers using the modified adaptive rejection sampling algorithm. #' #' @param n Desired sample size; #' @param formula Kernel of the target distribution; #' @param min,max Domain including positive and negative infinity of the target distribution; #' @param sp Supporting set; #' @param infp Inflexion set; #' @param m A parameter for judging concavity and convexity in a certain interval. #' @author Dong Zhang <\url{[email protected]}> #' @references Martino L, Miguez J. A generalization of the adaptive rejection sampling algorithm[J]. Statistics & Computing, 2011, 21(4):633-647. #' #' @export #' #' @examples #' # Example 1: Exponential distribution #' x <- rMARS(100,"exp(-(4-x^2)^2)",-Inf,Inf, c(-2.5,0,2.5),c(-2/sqrt(3),2/sqrt(3))) #' hist(x,probability=TRUE,xlim=c(-3,3),ylim=c(0,1.2),breaks=20) #' lines(density(x,bw=0.05),col="blue") #' f <- function(x)(exp(-(4-x^2)^2)) #' lines(seq(-3,3,0.01),f(seq(-3,3,0.01))/integrate(f,-3,3)[[1]],lwd=2,lty=2,col="red") #' #' #The following examples are also available; #' #But it may take a few minutes to run them. #' #' # Example 2: Distribution with bounded domain #' # x <- rMARS(1000,"exp(-(x^2-x^3))",-3,2,c(-1,1),1/3) #' # hist(x,probability=TRUE,xlim=c(-3,2.5),ylim=c(0,1.2),breaks=20) #' # lines(density(x,bw=0.2),col="blue") #' # f <- function(x) exp(-(x^2-x^3)) #' # lines(seq(-3,2,0.01),f(seq(-3,2,0.01))/integrate(f,-3,2)[[1]],lwd=2,lty=2,col="red",type="l") #' #' #' # Example 3: Weibull distribution with k=3 and lambda=1 #' # x <- rMARS(100,"3*x^2*exp(-x^3)",10^-15,Inf,c(0.01,1),(1/3)^(1/3),m=10^-4) #' # hist(x,probability=TRUE,breaks=20,xlim=c(0,2)) #' # lines(density(x,bw=0.15),col="blue") #' # f <- function(x) 3*x^2*exp(-x^3) #' # lines(seq(0,2,0.01),f(seq(0,2,0.01)),lwd=2,lty=2,col="red",type="l") #' #' #' # Example 4: Mixed normal distribution with p=0.3,m1=2,m2=8,sigma1=1,sigma2=2 #' # x <- rMARS(100,"0.3/sqrt(2*pi)*exp(-(x-2)^2/2)+(1-0.3)/sqrt(2*pi)/2*exp(-(x-8)^2/8)",-Inf,Inf, #' # c(-6,-4,0,3,6,15),c(-5.120801,-3.357761,3.357761,5.120801),m=10^-8) #' # hist(x,breaks=20,probability=TRUE);lines(density(x,bw=0.45),col="blue",lwd=2) #' # f <- function(x)0.3/sqrt(2*pi)*exp(-(x-2)^2/2)+(1-0.3)/sqrt(2*pi)/2*exp(-(x-8)^2/8) #' # lines(seq(0,14,0.01),f(seq(0,14,0.01)),lty=3,col="red",lwd=2 ) #' rMARS <- function(n,formula,min=-Inf,max=Inf,sp,infp,m=10^(-4)){ sp <- sort(sp);infp <- sort(infp) if(!is.character(formula)) stop("Density function is inappropriate, please look up examples for help") if (n<=0) stop("Length of sequence shouble be larger than 0") if (min>=max) stop("Minimum of domain shouble be larger than maximum") ltuInf <- function(x){ result<- list() tg=int=crv=crp=c() usepoint <- c(pandl[x,2][[1]],infp[1]) tg <- deriv1(usepoint) int <- V(usepoint)-tg*usepoint crp <- numeric(pandl[x,3][[1]]) crv <- numeric(pandl[x,3][[1]]) for(i in 1:pandl[x,3][[1]]){ A=matrix(c(tg[i],-1,tg[i+1],-1),nrow=2,byrow=T) b=-c(int[i],int[i+1]) crp[i]=solve(A,b)[1] crv[i]=solve(A,b)[2] } result$tg <- tg result$int <- int result$crp <- crp result$crv <- crv result } ltuFin <- function(x){ result<- list() tg=int=crv=crp=c() usepoint <- c(min,pandl[x,2][[1]],infp[1]) tg <- deriv1(usepoint) int <- V(usepoint)-tg*usepoint crp <- numeric(pandl[x,3][[1]]) crv <- numeric(pandl[x,3][[1]]) for(i in 1:pandl[x,3][[1]]){ A=matrix(c(tg[i],-1,tg[i+1],-1),nrow=2,byrow=T) b=-c(int[i],int[i+1]) crp[i]=solve(A,b)[1] crv[i]=solve(A,b)[2] } result$tg <- tg result$int <- int result$crp <- 
crp result$crv <- crv result } laoFin <- function(x){ result<- list() tg=int=crv=crp=c() usepoint <- c(min,pandl[x,2][[1]],infp[1]) crp <- c(min,pandl[x,2][[1]]) crv <- V(crp) tg=int=numeric(pandl[x,3][[1]]) for(i in 1:pandl[x,3][[1]]){ tg[i]=(V(usepoint[i+1])-V(usepoint[i]))/(usepoint[i+1]-usepoint[i]) int[i]=V(usepoint[i])-tg[i]*usepoint[i] } result$tg <- tg result$int <- int result$crp <- crp[-1] result$crv <- crv[-1] result } rtuInf <- function(x){ result<- list() tg=int=crv=crp=c() usepoint <- c(tail(infp,1),pandl[x,2][[1]]) tg <- deriv1(usepoint) int <- V(usepoint)-tg*usepoint crp <- numeric(pandl[x,3][[1]]) crv <- numeric(pandl[x,3][[1]]) for(i in 1:pandl[x,3][[1]]){ A=matrix(c(tg[i],-1,tg[i+1],-1),nrow=2,byrow=T) b=-c(int[i],int[i+1]) crp[i]=solve(A,b)[1] crv[i]=solve(A,b)[2] } result$tg <- tg result$int <- int result$crp <- crp result$crv <- crv result } rtuFin <- function(x){ result<- list() tg=int=crv=crp=c() usepoint <- c(tail(infp,1),pandl[x,2][[1]],max) tg <- deriv1(usepoint) int <- V(usepoint)-tg*usepoint crp <- numeric(pandl[x,3][[1]]) crv <- numeric(pandl[x,3][[1]]) for(i in 1:pandl[x,3][[1]]){ A=matrix(c(tg[i],-1,tg[i+1],-1),nrow=2,byrow=T) b=-c(int[i],int[i+1]) crp[i]=solve(A,b)[1] crv[i]=solve(A,b)[2] } result$tg <- tg result$int <- int result$crp <- crp result$crv <- crv result } raoFin <- function(x){ result<- list() tg=int=crv=crp=c() usepoint <- c(tail(infp,1),pandl[x,2][[1]],max) crp <- c(pandl[x,2][[1]],max) crv <- V(crp) tg=int=numeric(pandl[x,3][[1]]) for(i in 1:pandl[x,3][[1]]){ tg[i]=(V(usepoint[i+1])-V(usepoint[i]))/(usepoint[i+1]-usepoint[i]) int[i]=V(usepoint[i])-tg[i]*usepoint[i] } result$tg <- tg result$int <- int result$crp <- crp[-length(crp)] result$crv <- crv[-length(crv)] result } ao <- function(x){ result<- list() tg=int=crv=crp=c() usepoint <- c(infp[x-1],pandl[x,2][[1]],infp[x]) crp <- pandl[x,2][[1]] crv <- V(crp) tg = int = numeric(pandl[x,4][[1]]) for(i in 1:pandl[x,4][[1]]){ tg[i]=(V(usepoint[i+1])-V(usepoint[i]))/(usepoint[i+1]-usepoint[i]) int[i]=V(usepoint[i])-tg[i]*usepoint[i] } result$tg <- tg result$int <- int result$crp <- crp result$crv <- crv result } tu <- function(x){ result<- list() tg=int=crv=crp=c() usepoint <- c(infp[x-1],pandl[x,2][[1]],infp[x]) tg <- deriv1(usepoint) int <- V(usepoint)-tg*usepoint crp <- numeric(pandl[x,3][[1]]) crv <- numeric(pandl[x,3][[1]]) for(i in 1:pandl[x,3][[1]]){ A=matrix(c(tg[i],-1,tg[i+1],-1),nrow=2,byrow=T) b=-c(int[i],int[i+1]) crp[i]=solve(A,b)[1] crv[i]=solve(A,b)[2] } result$tg <- tg result$int <- int result$crp <- crp result$crv <- crv result } x_final<- numeric(n) for(N in 1:n){ p <- function(x){eval(parse(text=formula))} V <- function(x){-log(p(x))} u=0 rate =-1 while(u>=rate){ allpt <- sort(c(sp,infp)) deriv2<- function(x){eval(D(D(parse(text=paste("-log(",formula,")",sp="")),"x"),"x"))} deriv1<- function(x){eval(D(parse(text=paste("-log(",formula,")",sp="")),"x"))} corc <- numeric(length(infp)+1) if (deriv2(infp[length(infp)]+m)>0 & max==Inf) corc[length(infp)+1]="rtuInf" if (deriv2(infp[length(infp)]+m)>0 & max!=Inf) corc[length(infp)+1]="rtuFin" if (deriv2(infp[length(infp)]+m)<0 & max!=Inf) corc[length(infp)+1]="raoFin" if (deriv2(infp[1]-m)>0 & min==-Inf ) corc[1]="ltuInf" if (deriv2(infp[1]-m)>0 & min!=-Inf ) corc[1]="ltuFin" if (deriv2(infp[1]-m)<0 & min!=-Inf ) corc[1]="laoFin" if(length(infp)>1){ for(i in 2:length(infp)){ if (deriv2(infp[i-1]+m)<0) corc[i]="ao" if (deriv2(infp[i-1]+m)>0) corc[i]="tu" } } parsp <- list() parsp[[1]]=sp[which(sp<infp[1])] 
parsp[[length(infp)+1]]=sp[which(sp>infp[length(infp)])] if(length(infp)>1){ for(i in 2:length(infp)){ parsp[[i]]=sp[which(sp>infp[i-1]&sp<infp[i])] } } pandl <- cbind(corc,parsp,pt=numeric((length(corc))),lns=numeric((length(corc)))) for(i in 1:nrow(pandl)){ if(pandl[i,1][[1]]=="ao"){ pandl[i,3][[1]]=length(pandl[i,2][[1]]) pandl[i,4][[1]]=length(pandl[i,2][[1]])+1 } if(pandl[i,1][[1]]=="tu"){ pandl[i,3][[1]]=length(pandl[i,2][[1]])+1 pandl[i,4][[1]]=length(pandl[i,2][[1]])+2 } if(pandl[i,1][[1]]=="ltuInf"){ pandl[i,3][[1]]=length(pandl[i,2][[1]]) pandl[i,4][[1]]=length(pandl[i,2][[1]])+1 } if(pandl[i,1][[1]]=="ltuFin"){ pandl[i,3][[1]]=length(pandl[i,2][[1]])+1 pandl[i,4][[1]]=length(pandl[i,2][[1]])+2 } if(pandl[i,1][[1]]=="laoFin"){ pandl[i,3][[1]]=length(pandl[i,2][[1]])+1 pandl[i,4][[1]]=length(pandl[i,2][[1]])+1 } if(pandl[i,1][[1]]=="rtuInf"){ pandl[i,3][[1]]=length(pandl[i,2][[1]]) pandl[i,4][[1]]=length(pandl[i,2][[1]])+1 } if(pandl[i,1][[1]]=="rtuFin"){ pandl[i,3][[1]]=length(pandl[i,2][[1]])+1 pandl[i,4][[1]]=length(pandl[i,2][[1]])+2 } if(pandl[i,1][[1]]=="raoFin"){ pandl[i,3][[1]]=length(pandl[i,2][[1]])+1 pandl[i,4][[1]]=length(pandl[i,2][[1]])+1 } } tg_total <- c() int_total <- c() crp_total <- c() crv_total <- c() for( i in 1:nrow(pandl) ){ if(pandl[i,1][[1]]=="ao") tsf <- ao(i) if(pandl[i,1][[1]]=="tu") tsf <- tu(i) if(pandl[i,1][[1]]=="raoFin") tsf <- raoFin(i) if(pandl[i,1][[1]]=="rtuFin") tsf <- rtuFin(i) if(pandl[i,1][[1]]=="rtuInf") tsf <- rtuInf(i) if(pandl[i,1][[1]]=="laoFin") tsf <- laoFin(i) if(pandl[i,1][[1]]=="ltuFin") tsf <- ltuFin(i) if(pandl[i,1][[1]]=="ltuInf") tsf <- ltuInf(i) tg_total <- c(tg_total,tsf$tg) int_total <- c(int_total,tsf$int) crp_total <- c(crp_total,tsf$crp) crv_total <- c(crv_total,tsf$crv) } fdtfram <-rbind(cbind(crp_total,crv_total),c(min,V(min)),c(max,V(max))) fdtfram <- rbind(fdtfram,matrix(c(infp,V(infp)),nrow=length(infp),byrow=F)) fdtfram <- fdtfram[order(fdtfram[,1]),] intsum <- c() for(i in 1:length(tg_total)){ integ <- function(x){ exp(-(tg_total[i]*x+int_total[i])) } intsum[i] <- integrate(integ,fdtfram[i],fdtfram[i+1,1])[[1]] } rdm <- runif(1) cum=c(0, cumsum(intsum/sum(intsum))) idx <- which(rdm<cumsum(intsum/sum(intsum)))[1] ifelse(idx>1,x_star <- (log(-(rdm-cum[idx])*sum(intsum)*tg_total[idx]+exp(-tg_total[idx]*fdtfram[idx,1][[1]]-int_total[idx]))+int_total[idx])/(-tg_total[idx]),x_star <-(log(-rdm*sum(intsum)*tg_total[1]+exp(-tg_total[1]*fdtfram[2,1][[1]]-int_total[1]))+int_total[1])/(-tg_total[1])) u <- runif(1) rate <- p(x_star)/exp(-tg_total[idx]*x_star-int_total[idx]) sp=sort(c(sp,x_star)) } x_final[N] <- x_star } x_final }
/scratch/gouwar.j/cran-all/cranData/AdapSamp/R/rMARS.R
AdaptGauss = function(Data,Means=NaN,SDs=NaN,Weights=NaN,ParetoRadius=NaN,LB=NaN,HB=NaN,ListOfAdaptGauss,fast=T){ # V=AdaptGauss(Data,Means,SDs,Weights,ParetoRadius,LB,HB); # Means <- V$Means # SDs <- V$SDs # Weights <- V$Weights # Pareto_Radius <- V$ParetoRadius # RMS <- V$RMS # BayesBoundaries <- V$BB # # adapt interactively a Gaussians Mixture Model GMM to the empirical PDF of the data such that N(M,S)*W is a model for Data # # INPUT # Data(1:n) Vector of Data, may contain NaN # # OPTIONAL # Means(1:c) The means of the distribution; default: nanmean(Data) # SDs(1:c) The StandardDeviatons of the distributions default: nanstd(Data) # Weights(1:c) The weights of the distributions default: 1 # ParetoRadius the ParetoRadius for PDE on Data, # It is calculated, if not given # LB,HB limits where the adaptation is done, default: [min(Data) max(Data)] # ListOfAdaptGauss if given in return Format of AdaptGauss, the modell can be edited # # OUTPUT # V$Means(1:L) The adapted means of the distributions # V$SDs(1:L) The adapted sdev of the distributions # V$Weights(1:L) The adapted weights of the distributions # V$ParetoRadius Pareto Radius used for empirical PDE # V$RMS Root Mean Square Distance between empirical PDE and pdf(GMM) # RMS == sqrt(sum(empirical PDE - pdf(GMM))^2) # Out$BayesBoundaries Bayes Boundaries between gaussians # author: Onno-Hansen Goos # 1.Editor: MT 08/2015 Data; #Bricht bei nicht existierendem Bezeichner ab ## Starte Shiny # library(shiny) if(!missing(ListOfAdaptGauss)){ if(is.list(ListOfAdaptGauss)){ Means<-ListOfAdaptGauss$Means if(is.null(Means)) Means=NaN SDs<-ListOfAdaptGauss$SDs if(is.null(SDs)) SDs=NaN Weights<-ListOfAdaptGauss$Weights if(is.null(Weights)) Weights=NaN } } if (!(hasArg(Data))){ stop("No Data Input.") } if(!missing(Data)){ if(!is.vector(Data)){ stop("Data has to be a vector, maybe use as.vector(Data)") } if(!is.numeric(Data)){ stop("Data has to be numeric, maybe use as.numeric(Data)") } } outputApp=runApp(list( ## ui.R --> stellt oberfl?che her ui = fluidPage( #Ueberschrift-Panel #headerPanel("Adapt Gauss"), # oh! die Kommas sind wichtig! 
^^ fluidRow( column(3, fluidRow( wellPanel(# Legt aktuellen Gauss fest (welcher gerade bearbeitet werden kann) tags$div(class = "row", uiOutput("sliderCurrGauss"), actionButton("AddGaussButton", "Add", icon = icon("plus-square")), actionButton("DeleteGaussButton", "Delete Current",icon = icon("bitbucket"))#, ) ) ) ), column(2,offset = 1, fluidRow( h6("Expectation Maximation Algorithm with Iterations:"), numericInput("numIterations", label ="", value = 10), actionButton("expMax", "", icon = icon("calculator")), actionButton("restore", " ", icon = icon("undo")) ) ), column(3, #wellPanel(# weights h6("Weights"), actionButton("normAll", "Normalize All", icon = icon("balance-scale")), actionButton("normOth", "Normalize Other", icon = icon("angle-double-down")), h6("Options"), checkboxInput("showComponents", "Show Components", TRUE), checkboxInput("showBayes", "Show Bayes Boundaries", FALSE) #) ), column(3, wellPanel( h6("General"), actionButton("PlotFig", "Plot Figure", icon = icon("area-chart")), actionButton("RestoreBestRMS", "Restore Best RMS", icon = icon("backward")), actionButton("ChiSquareAnalysis", "Chi Square Analysis"), h6("AdaptGauss:"), actionButton("CloseButton", "Close", icon = icon("close")) ) ), fluidRow( column(12,offset=1, plotOutput('PDE',width = 700, height = 450) ) ), fluidRow( column(2, wellPanel(h5(textOutput('gaussianNumber'))), uiOutput("minMean"), uiOutput("minSdev"), uiOutput("minWeight") ), column(6, uiOutput("sliderM"), uiOutput("sliderS"), uiOutput("sliderW") ), column(2, uiOutput("numericM"), uiOutput("numericS"), uiOutput("numericW") ), column(2, wellPanel(h5(textOutput('ScreenMessage'))), uiOutput("maxMean"), uiOutput("maxSdev"), uiOutput("maxWeight") ) ) )), server = function(input, output, session){ ## server.R --> Ab hier keine Oberflaeche mehr, ist aber mit ui.R (Oberflaeche) verkn?pft # Default values for Data if no input data <- Data #?bertrage Input Werte (koennen sonst von Shiny nicht korrekt verwendet werden) GM <- Means GS <- SDs GW <- Weights ParetoRadius <- ParetoRadius LB <- LB HB <- HB nSignif <- 4 # Auf wie viele "echte" Stellen soll nach dem Komma gerundet werden (zB. RMS) numGaussSave <- NULL MLimit <- NULL GSSave <- NULL GSBestRMS <- NULL GMSave <- NULL GMBestRMS <- NULL GWSave <- NULL GWBestRMS <- NULL MonitorStopReactions <- FALSE # Index f?r Befehle (siehe interactive value befehl) iBefehl <- 0 # Lada Daten, definiere Variablen #observe({ # Wird ein mal am Anfang ausgef?hrt. Erzeugt means, stds usw. print("Start Session") #print("Load Data") if(length(data)==0) return("Error: Data could not be loaded"); # Wie viele Gauss sollen getestet werden? noch nicht implentiert! suggestedMaxGauss <- 5 # Eliminate NaNs in Data dataNew <- 0 j <- 1 for (i in 1:length(data)){ if (!is.nan(data[i]) && !is.na(data[i])){ dataNew[j] <- data[i] j <- j+1 } } data <- dataNew # Setzte LB (low Boundary) und HB (high boundary) Falls nicht im Input if(is.nan(LB)) LB <- min(data) if(is.nan(HB)) HB <- max(data) # Eliminate Values below LB and above HB in Data dataNew <- 0 j <- 1 for (i in 1:length(data)){ if (data[i]>=LB && data[i]<=HB){ dataNew[j] <- data[i] j <- j+1 } } data <- dataNew # Reduce Data to 25000 Elements, if larger than 25000 (ueberschuessige Datenpunkte werden randomisiert entfernt) if (length(data)>25000){ data <- data[rsampleAdaptGauss(25000,length(data))]; print("Reducing to 25000 datapoints"); } # Bestimme Pareto Density inkl. 
Kernels if (is.nan(ParetoRadius)){ ParetoRadius<-DataVisualizations::ParetoRadius(data) nRow=length(data) #MT: Halte ich nicht fuer plausibel #if (nRow>1024){ # ParetoRadius = ParetoRadius * 4 /(nRow^0.2); #} } ParetoDensityEstimationVar <- DataVisualizations::ParetoDensityEstimation(data,paretoRadius=ParetoRadius); ParetoDensity <- ParetoDensityEstimationVar$paretoDensity; Kernels <- ParetoDensityEstimationVar$kernels; # Setze Werte f?r Means, deviations und weights, falls nicht ?bergeben if (is.nan(sum(GM)) || is.nan(sum(GS)) || is.nan(sum(GW)) || length(GM)!=length(GS) || length(GM)!=length(GW) ){ vars=getOptGauss(Data=data, Kernels, ParetoDensity,fast=fast) GM <- vars$means GS <- vars$deviations GW <- vars$weights } BB <- NaN # Bayes Boundaries (wird berechnet, bevor es das erste mal ausgegeben wird) #GM <<- means #Mean der Gaussians #GS <<- deviations #StdDev der Gaussians #GW <<- weights #Weight der Gaussians numGauss <- length(GM)#Anzahl der Gaussians numIterations <- 10 # Default value for number of Iterations (in EMGauss) RMS <- 99 # Root Mean Sqare (wird berechnet, bevor es das erste mal ausgegeben wird) meanRMS0 <- mean(data) DeviationRMS0 <- sd(data) Fi <- dnorm(Kernels,meanRMS0,DeviationRMS0) RMS0 <- sqrt(sum((Fi-ParetoDensity)^2)) currGauss <- 1 # Current Gauss (der Gauss, welcher gerade bearbeitet werden kann) # Speicher Werte f?r "restore Best RMS GMBestRMS <- GM GSBestRMS <- GS GWBestRMS <- GW BestRMS <- RMS numGaussBestRMS <- numGauss currGaussBestRMS <- currGauss xlimit <- c(min(Kernels), max(Kernels)); ylimit <- c(0,max(ParetoDensity*1.2)) numKernels <- length(Kernels) MDefault <- mean(data) # Default Werte f?r neu erzeugte Gauss SDefault <- sd(data) WDefault <- 0.5 MLimit <- xlimit # Legt bereiche fest, in welchen sich die Werte f?r die Gaussians befinden k?nnen SLimit <- c(0,max(SDefault*2,max(GS))) WLimit <- c(0,1) #print("[DONE]"); #}) ## Output Part: lege slider, Felder und Buttons fest (sind im ui.R part eingebunden) # Text Output output$Header <- renderText({'PDE Plot of uploaded Data'}) output$gaussianNumber <- renderText({ befehl$updateCurrGauss paste0("GaussianNo.",currGauss) }) # Slider Output output$sliderCurrGauss <- renderUI({ if (numGauss > 1){ sliderInput('currGauss', 'Gaussian No.',ticks=FALSE,width='50%', sep="" ,min = 1, max = numGauss, value = currGauss, step= 1) } }) output$sliderM <- renderUI({ befehl$drawSliderMSW sliderInput("M", h5("Mean"), width='150%', min=MLimit[1], max=MLimit[2], value = GM[currGauss], step= (MLimit[2]-MLimit[1])*0.001) }) output$sliderS <- renderUI({ befehl$drawSliderMSW sliderInput("S", h5("SD"), width='150%', min=SLimit[1], max=SLimit[2], value = GS[currGauss], step=(SLimit[2]-SLimit[1])*0.001) }) output$sliderW <- renderUI({ befehl$drawSliderMSW sliderInput("W", h5("Weight"), width='150%',min=WLimit[1], max=WLimit[2], value = GW[currGauss], step=0.0001) }) output$numericM <- renderUI({ #numericInput("numericM", h5("Mean"), min=MLimit[1], max=MLimit[2], value = GM[currGauss], step=(MLimit[2]-MLimit[1])*0.001) numericInput("numericM", h5("Mean"), min=MLimit[1], max=MLimit[2], value = GM[currGauss], step=1) }) output$numericS <- renderUI({ numericInput("numericS", h5("SD"), min=MLimit[1], max=MLimit[2], value = GM[currGauss], step=(SLimit[2]-SLimit[1])*0.001) }) output$numericW <- renderUI({ numericInput("numericW", h5("Weight"), min=MLimit[1], max=MLimit[2], value = GM[currGauss], step=0.0001) }) output$minMean <- renderUI({ befehl$updateMinMax numericInput("minMean", label = h6("Min Mean"), value = 
signif(MLimit[1],digits=nSignif) ) }) output$maxMean <- renderUI({ befehl$updateMinMax numericInput("maxMean", label = h6("Max Mean"), value = signif(MLimit[2],digits=nSignif) ) }) output$minSdev <- renderUI({ befehl$updateMinMax numericInput("minSdev", label = h6("Min SD"), value = signif(SLimit[1],digits=nSignif) ) }) output$maxSdev <- renderUI({ befehl$updateMinMax numericInput("maxSdev", label = h6("Max SD"), value = signif(SLimit[2],digits=nSignif) ) }) output$minWeight <- renderUI({ befehl$updateMinMax numericInput("minWeight", label = h6("Min Weight"), value = signif(WLimit[1],digits=nSignif) ) }) output$maxWeight <- renderUI({ befehl$updateMinMax numericInput("maxWeight", label = h6("Max Weight"), value = signif(WLimit[2],digits=nSignif) ) }) ## Interaktive Variablen: befehl$... fungiert als Funktionsaufruf. befehl <- reactiveValues(plot = 0, updateSlider=0, updateSliderCurrGauss=0, drawSliderMSW=0, updateRMS=0, updateMinMax=0) #Signal zum update des Plots / der Slider # die numerischen eingabekaesten (numericM, ...) werden aktualisiert, # sollte sich der Wert eines sliders veraendern observe({ updateNumericInput(session, "numericM", value=input$M) updateNumericInput(session, "numericS", value=input$S) updateNumericInput(session, "numericW", value=input$W) }) # die Slider werden aktualisiert, sollte sich der Wert # eines numerischen eingabekaesten veraendern observe({ input$numericM input$numericS input$numericW # der numeric input darf nicht zu scnell andere werte updaten sonst gibt es eine endlosschleife mit dem slider if(MonitorStopReactions==F){ MonitorStopReactions<<-T if(is.numeric(input$numericM)) updateSliderInput(session, "M", value=input$numericM) if(is.numeric(input$numericS)) updateSliderInput(session, "S", value=input$numericS) if(is.numeric(input$numericW)) updateSliderInput(session, "W", value=input$numericW) # MonitorStopReactions<<-F } }) # alle 500ms darf der numeric input wieder einen neuen wert setzen # solange MonitorStopReactions auf False steht, kann ein Input System seinen Wert # veraendern und auf das analoge gegenstueck uebertragen (slider <-> kaesten) # sobald ein system etwas aendert ist das erste was es tut, den Flag zu aktivieren validationTimer <- reactiveTimer(500) observe({ validationTimer() MonitorStopReactions <<- F }) ## Observe Part: warte auf Input observe({ # Grenzen f?r die Werte von means, sdevs und weights #print("AdaptGauss: Enforce Limits for Mean, Sdev and Weight") if (!is.null(input$minMean)){ if ( is.numeric(input$minMean) && input$minMean<min(GM) ) MLimit[1] <<- input$minMean if ( is.numeric(input$maxMean) && input$maxMean>max(GM) ) MLimit[2] <<- input$maxMean if ( is.numeric(input$minSdev) && input$minSdev<min(GS) ) SLimit[1] <<- input$minSdev if ( is.numeric(input$maxSdev) && input$maxSdev>max(GS) ) SLimit[2] <<- input$maxSdev if ( is.numeric(input$minWeight) && input$minWeight<min(GW) ) WLimit[1] <<- input$minWeight if ( is.numeric(input$maxWeight) && input$maxWeight>max(GW) ) WLimit[2] <<- input$maxWeight iBefehl <<- iBefehl+1 befehl$drawSliderMSW <- iBefehl } }) # Normalisiert alle Gaussians observe({ #print("AdaptGauss: Normalize All") #print("Normalize All") if (input$normAll>0){ sumWeight <- sum(GW) for (i in 1:numGauss){ GW[i] <<- GW[i]/sumWeight } iBefehl <<- iBefehl+1 befehl$plot <- iBefehl befehl$updateSlider <- iBefehl } }) # Normalisiert andere Gaussians (alle ausser dem aktuellen) observe({ #print("AdaptGauss: Normalize Other") #print("Normalize Other") if (input$normOth>0){ sumWeight <- sum(GW) sumWeightOther <- 
sumWeight-GW[currGauss] zielSum <- 1-GW[currGauss] for (i in 1:numGauss){ if (i!=currGauss){ GW[i] <<- GW[i]/sumWeightOther*zielSum } } iBefehl <<- iBefehl+1 befehl$plot <- iBefehl befehl$updateSlider <- iBefehl } }) # wechsel den aktuellen Gauss observe({ #print("AdaptGauss: Update CurGauss") #print("Update currGauss") if (!is.null(input$currGauss)){ currGauss <<- input$currGauss iBefehl <<- iBefehl+1 befehl$updateCurrGauss <- iBefehl } }) #Erneuert Slider f?r M/S/W, (zB. wenn der Gauss gewechselt wurde) observe({ #print("AdaptGauss: Update Slider for M, S and W") befehl$updateSlider befehl$updateCurrGauss #print("Update Slieder M/S/W") updateSliderInput(session, "M", value = GM[currGauss]) updateSliderInput(session, "S", value = GS[currGauss]) updateSliderInput(session, "W", value = GW[currGauss]) }) # Erneuert slider f?r den aktuellen Gauss (falls numGauss==0 --> lehrer output --> slider wird nicht angezeigt) observe({ #print("AdaptGauss:Update CurGauss Slider") befehl$updateSliderCurrGauss if (numGauss>1){ output$sliderCurrGauss <- renderUI({ sliderInput('currGauss', h6('Gaussian No.'), min = 1, max = numGauss, value = currGauss, step= 1) }) updateSliderInput(session, "currGauss", value = currGauss) } else { output$sliderCurrGauss <- renderUI({ }) } }) #Erneuert Werte f?r M/S/W, wenn der Slider bewegt wird observe({ #print("Refresh Values for M") #print("AdaptGauss:RefreshM") if (!is.null(input$M)){ GM[currGauss] <<- input$M iBefehl <<- iBefehl+1 befehl$plot <- iBefehl } }) observe({ #print("AdaptGauss:RefreshS") #print("Refresh Values for S") if (!is.null(input$S)){ GS[currGauss] <<- input$S iBefehl <<- iBefehl+1 befehl$plot <- iBefehl } }) observe({ #print("AdaptGauss:RefreshW") #print("Refresh Values for W") if (!is.null(input$W)){ GW[currGauss] <<- input$W iBefehl <<- iBefehl+1 befehl$plot <- iBefehl } }) observe({ #print("Refresh numIterations") numIterations <<- input$numIterations if (numIterations<1) {numIterations <<- 1} }) # Starte EMGauss() observe({ #print("AdaptGauss:EMGauss") GMSave <<- GM GSSave <<- GS GWSave <<- GW numGaussSave <<- numGauss if (input$expMax>0){ print("Expectation Maximation") Var <- EMGauss(data,length(GM),GM,GS,GW,numIterations,fast=fast) GM <<- Var$Means GS <<- Var$SDs GW <<- round(Var$Weights,4) } # L?sche Gauss mit Weight==0 for (i in length(GW):1){ if (GW[i]==0){ if (i<numGauss){ for (j in i:(numGauss-1)){ GM[j] <<- GM[j+1] GS[j] <<- GS[j+1] GW[j] <<- GW[j+1] } } GM <<- GM[1:numGauss-1] GS <<- GS[1:numGauss-1] GW <<- GW[1:numGauss-1] numGauss <<- numGauss-1 if (currGauss>i) currGauss <<- currGauss-1 if (currGauss==i) currGauss <<- 1 } } for (i in 1:numGauss){ if (GM[i]<MLimit[1]) MLimit[1] <<- GM[i]*1.01^(sign(-GM[i])) #Faktor 1.01^(sign(-GM[i]), weil nach dem runden von GM oder MLimit sonst Fehler auftreten k?nnen if (GM[i]>MLimit[2]) MLimit[2] <<- GM[i]*1.01^(sign(GM[i])) if (GS[i]<SLimit[1]) SLimit[1] <<- GS[i]*1.01^(sign(-GS[i])) if (GS[i]>SLimit[2]) SLimit[2] <<- GS[i]*1.01^(sign(GS[i])) if (GW[i]<WLimit[1]) WLimit[1] <<- GW[i]*1.01^(sign(-GW[i])) if (GW[i]>WLimit[2]) WLimit[2] <<- GW[i]*1.01^(sign(GW[i])) } iBefehl <<- iBefehl+1 befehl$updateCurrGauss <- iBefehl befehl$updateSliderCurrGauss <- iBefehl befehl$drawSliderMSW <- iBefehl befehl$updateSlider <- iBefehl befehl$plot <- iBefehl befehl$updateMinMax <- iBefehl }) # Lade Werte (wurden vor Expectation Maximation gespeichert) observe({ print("AdaptGauss: Restore Previous Values") if (input$restore>0){ print("Restore previous Values") GM <<- GMSave GS <<- GSSave GW <<- GWSave numGauss <<- 
numGaussSave currGauss <<- 1 } iBefehl <<- iBefehl+1 befehl$updateSliderCurrGauss <- iBefehl befehl$updateSlider <- iBefehl befehl$plot <- iBefehl }) #Lade Werte f?r best RMS observe({ #print("AdaptGauss:RestoreBestRMS") if (input$RestoreBestRMS>0){ print("Restore Values of Best RMS") GM <<- GMBestRMS GS <<- GSBestRMS GW <<- GWBestRMS numGauss <<- numGaussBestRMS currGauss <<- 1 } iBefehl <<- iBefehl+1 befehl$updateSliderCurrGauss <- iBefehl befehl$updateSlider <- iBefehl befehl$plot <- iBefehl }) observeEvent(input$ChiSquareAnalysis,{ Chi2testMixtures(Data, GM, GS, GW, PlotIt = T,NoRepetitions=50) }) # Plotten der Grafik (nur bei Befehl (befehl$plot)) output$PDE <- renderPlot({ befehl$plot #print("AdaptGauss:renderPlot") #print("PDE estimation using DataVisualizations::ParetoDensityEstimation... "); #Plotte Pareto Density plot(Kernels, ParetoDensity,xlim=xlimit,ylim=ylimit, col="black", axes = FALSE, xlab = "Data", ylab = "Pareto Density Estimation", type="l", lwd=3,xaxs='i',yaxs='i') axis(1,xlim=xlimit,col="black",las=1) #x-Achse axis(2,ylim=ylimit,col="black",las=1) #y-Achse par(xaxs='i') par(yaxs='i') u <- par("usr") arrows(u[1], u[3], 1.05*u[2],u[3], code = 2, xpd = TRUE,lwd=2) arrows(u[1], u[3], u[1], 1.1*u[4], code = 2, xpd = TRUE,lwd=2) #box() #Kasten um Graphen par(new = TRUE) # der befehl das der n?chste den alten nicht ?bermalt # plotte gaussians (in Schwarz) FSum=0 for (i in 1:numGauss){ Fi=dnorm(Kernels,GM[i],GS[i])*GW[i] FSum=FSum+Fi if (input$showComponents){ if (i==currGauss){ points(Kernels, Fi,xlim=xlimit,ylim=ylimit, col="green", type="l", lwd=3) } else { points(Kernels, Fi,xlim=xlimit,ylim=ylimit, col="blue", type="l", lwd=3) } par(new = TRUE) # der befehl das der n?chste den alten nicht ?bermalt } } # Plotte Summe ?ber alle gaussians (in Rot) points(Kernels, FSum,xlim=xlimit,ylim=ylimit, col="red", type="l", lwd=3) par(new = TRUE) # der befehl das der n?chste den alten nicht ?bermalt #Plot Ende #Berechne RMS RMS <<- sqrt(sum((FSum-ParetoDensity)^2))/RMS0 #output$RMS <- renderText({ # paste("RMS = ",signif(RMS, digits=nSignif)) #}) str=paste('AdaptGauss(): GMM with',"RMS = ",signif(RMS, digits=nSignif)) if (RMS<BestRMS){ # Speichere Werte, falls RMS optimiert wurde GMBestRMS <<- GM GSBestRMS <<- GS GWBestRMS <<- GW BestRMS <<- RMS numGaussBestRMS <<- numGauss currGaussBestRMS <<- currGauss } # Berechne (immer) und Plotte (nur wenn angeklickt) Bayes Boundaries if (numGauss>1 && sum(GW>0)>1){ # sum(GW>0)>1: Sicherstellen, dass mindestens 2 Gauss ein weight von ?ber 0 haben, sonst ist Berechnung von BayesBoundaries nicht m?glich BayesBoundaries <- BayesDecisionBoundaries(GM[GW>0],GS[GW>0],GW[GW>0]) #BB <<- BayesBoundaries$DecisionBoundaries; BB <<- BayesBoundaries if (input$showBayes){ for (i in 1:length(BB)){ abline(v=BB[i],col="Magenta") #plot(c(BB[i],BB[i]), ylimit,xlim=xlimit,ylim=ylimit, col="Magenta", axes = FALSE, xlab = " ", , ylab = " ", type="l", lwd=3) par(new = TRUE) # der befehl das der n?chste den alten nicht ?bermalt } BBText=paste(round(BB[1],digits=5-round(log(xlimit[2]-xlimit[1],10)))) #BB wird angepasst an den betrachteten X-Bereich (xlimit[2]-xlimit[1]) gerundet. if (length(BB)>1){ for (i in 2:length(BB)){ BBText=paste(BBText,", ",round(BB[i],digits=5-round(log(xlimit[2]-xlimit[1],10)))) #BB wird wieder angepasst an den betrachteten X-Bereich (xlimit[2]-xlimit[1]) gerundet. 
} } title(paste(str,", BayesBoundaries = ",BBText)) # output$Bayes <- renderText({ # paste("BayesBoundaries = ",BBText) # }) } else { #output$Bayes <- renderText({ }) title(str) } } else{ BB <<- NaN title(str) #output$Bayes <- renderText({ }) } }) # Fuege Gauss hinzu (bei Knopfdruck) observe({ if (input$AddGaussButton > 0){ print("Add Gauss") numGauss <<- numGauss+1 GM[numGauss] <<- MDefault GS[numGauss] <<- SDefault GW[numGauss] <<- WDefault currGauss <<- numGauss } iBefehl <<- iBefehl+1 befehl$updateSliderCurrGauss <- iBefehl befehl$plot <- iBefehl }) # L?sche aktuellen Gauss: alle Gauss mit gr??erem Index rutschen eins nach link; neuer aktueller Gauss wird der, welcher vorher rechts vom gel?schten gauss stand. currGauss bleibt also gleich. Ausnahme: der Gaus mit dem gr??ten Index wird gel?scht, dann currGauss -> currGauss-1 observe({ if (input$DeleteGaussButton > 0 && numGauss>1){ print("DeleteGauss") if (currGauss<numGauss){ for (i in currGauss:(numGauss-1)){ GM[i] <<- GM[i+1] GS[i] <<- GS[i+1] GW[i] <<- GW[i+1] } } if (currGauss==numGauss){ currGauss <<- numGauss-1 } GM <<- GM[1:numGauss-1] GS <<- GS[1:numGauss-1] GW <<- GW[1:numGauss-1] numGauss <<- numGauss-1 } iBefehl <<- iBefehl+1 befehl$updateCurrGauss <- iBefehl befehl$updateSlider <- iBefehl befehl$updateSliderCurrGauss <- iBefehl befehl$plot <- iBefehl }) #Closing APP oder plot figure observe({ input$CloseButton input$PlotFig if (input$CloseButton+input$PlotFig > 0){ print("Plot Figure") # Plotte Ergebnis plot(Kernels, ParetoDensity,xlim=xlimit,ylim=ylimit, col="black", axes = FALSE, xlab = "Data", ylab = "Pareto Density Estimation", main='GMM with AdaptGauss()',type="l", lwd=3) axis(1,xlim=xlimit,col="black",las=1) #x-Achse axis(2,ylim=ylimit,col="black",las=1) #y-Achse u <- par("usr") arrows(u[1], u[3], 1.05*u[2],u[3], code = 2, xpd = TRUE,lwd=2) arrows(u[1], u[3], u[1], 1.1*u[4], code = 2, xpd = TRUE,lwd=2) #box() #Kasten um Graphen par(new = TRUE) # der befehl das der n?chste den alten nicht ?bermalt FSum=0 for (i in 1:numGauss){ Fi=dnorm(Kernels,GM[i],GS[i])*GW[i] FSum=FSum+Fi if (isolate(input$showComponents)){ points(Kernels, Fi,xlim=xlimit,ylim=ylimit, col="blue", type="l", lwd=3) par(new = TRUE) # der befehl das der n?chste den alten nicht ?bermalt } } if (input$showBayes && numGauss>1){ for (i in 1:length(BB)){ abline(v=BB[i], col="Magenta") #plot(c(BB[i],BB[i]), ylimit,xlim=xlimit,ylim=ylimit, col="Magenta", axes = FALSE, xlab = " ", , ylab = " ", type="l", lwd=3) par(new = TRUE) # der befehl das der n?chste den alten nicht ?bermalt } } #Fi=dnorm(Kernels,GM[currGauss],GS[currGauss])*GW[currGauss] #FSum=FSum+Fi #plot(Kernels, Fi,xlim=xlimit,ylim=ylimit, col="green", axes = FALSE, xlab = " ", , ylab = " ", type="l", lwd=3) #par(new = TRUE) # der befehl das der n?chste den alten nicht ?bermalt plot(Kernels, FSum,xlim=xlimit,ylim=ylimit, col="red", axes = FALSE, xlab = " ", , ylab = " ", type="l", lwd=3) # Plot Ende } if (input$CloseButton > 0){ print("close App") output <- list(Means=GM,SDs=GS,Weights=GW,ParetoRadius=ParetoRadius,RMS=RMS,BayesBoundaries=BB) stopApp(output) } })# end observe CloseButton and PlotFig session$onSessionEnded(function() { print("close App") output <- list(Means=GM,SDs=GS,Weights=GW,ParetoRadius=ParetoRadius,RMS=RMS,BayesBoundaries=BB) stopApp(output) # write out everything into files }) }# end function server )) #outputApp=runApp(paste0(dbtDirectory(),'/dbt/dbt.EMforGauss/R/AdaptGauss2')); return(outputApp) } ## Benutzte Funktionen: Subversion\PUB\dbt\... 
# dbt.EMforGauss\EMGauss.R # dbt.ClusteringAlgorithms\KmeansCluster.R # dbt.BayesDecision\BayesDecisionBoundaries.R
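## Example (minimal illustrative sketch; not run): interactive fit of a GMM to
## bimodal sample data. The names SampleData and GMM are arbitrary placeholders.
# SampleData <- c(rnorm(1500, mean = 0, sd = 1), rnorm(500, mean = 5, sd = 0.8))
# GMM <- AdaptGauss(SampleData)      # opens the interactive shiny app
# GMM$Means; GMM$SDs; GMM$Weights    # adapted model parameters on close
# GMM$RMS                            # root mean square distance to the empirical PDE
# GMM$BayesBoundaries                # Bayes boundaries between the Gaussian modes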
Bayes4Mixtures <- function(Data, Means, SDs, Weights, IsLogDistribution = 0*Means, PlotIt = FALSE, CorrectBorders = FALSE,Color=NULL,xlab='Data',lwd=4){
  # V = Bayes4Mixtures(Data, Means, SDs, Weights, IsLogDistribution, PlotIt, CorrectBorders)
  # Posteriors = V$Posteriors(1:N,1:C)               # posteriors corresponding to Data
  # NormalizationFactor = V$NormalizationFactor(1:N) # denominator of Bayes' theorem corresponding to Data
  #
  # INPUT
  # Data(1:N)                          vector of data, may contain NaN
  # Means(1:C),SDs(1:C),Weights(1:C)   parameters of the Gaussians (mean, standard deviation, weight)
  #
  # OPTIONAL
  # IsLogDistribution(1:C)  1 or 0, indicates whether the respective distribution is a log-normal distribution (default == 0*(1:C))
  # PlotIt                  ==TRUE: distributions and posteriors are plotted (default == 0)
  # CorrectBorders          ==TRUE: data beyond the outermost modes are assigned to the respective border distributions
  #                         (default == 0), i.e. plain Bayes with all its known problems
  # OUTPUT
  # Posteriors = V$Posteriors(1:N,1:C)               # posteriors corresponding to Data
  # NormalizationFactor = V$NormalizationFactor(1:N) # denominator of Bayes' theorem corresponding to Data
  #
  #
  # AUTHOR: CL
  # 1.Editor: MT 08/2015: plotting rewritten, variables unified

  # Map the data onto its unique values (Kernels) and remember the original order
  AnzMixtures <- length(Means)
  Kernels <- unique(Data)
  UNsortInd <- match(Data, Kernels)
  AnzKernels <- length(Kernels)

  # Compute the conditional probability p(x|ci) with x = Data[1:N] and ci = class i, i from 1 to C
  PDataGivenClass <- matrix(0,AnzKernels,AnzMixtures);
  for(i in c(1:AnzMixtures)){
    if( IsLogDistribution[i] == 1 ){ # log-normal
      PDataGivenClass[,i] <- Symlognpdf(Kernels,Means[i],SDs[i]); # log-normal density; note: has to be mirrored for negative values
    }else{
      PDataGivenClass[,i] <- dnorm(Kernels,Means[i],SDs[i]); # Gaussian
    }#end if(IsLogDistribution[i] == 1)
  }#end for(i in c(1:AnzMixtures))

  NormalizationFactor <- PDataGivenClass %*% Weights; # weighted sum of the priors
  # On the normalization factor:
  # We need column 1 of PDataGivenClass times entry 1 of Weights + column 2 times entry 2 of Weights etc.,
  # which requires matrix multiplication.
  # PDataGivenClass * Weights would instead multiply row 1 of P by value 1 of Weights, row 2 by value 2 etc.,
  # which is not what is intended.
  Pmax = max(NormalizationFactor);
  # to prevent division error:
  ZeroInd <- which(NormalizationFactor==0);
  if(length(ZeroInd) > 0){
    NormalizationFactor[ZeroInd] =10^(-7)
  }#end if(length(ZeroInd) > 0)

  # Posteriors according to Bayes: p(c|x) with c = class (over-, under- or not expressed) and x the data.
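  # Each column i of PClassGivenData (computed below) holds the Bayes posterior
  #   p(class_i | x) = Weights[i] * p(x | class_i) / sum_j( Weights[j] * p(x | class_j) ),
  # where the denominator is the NormalizationFactor computed above.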
  PClassGivenData <- matrix(0, AnzKernels, AnzMixtures);
  for(i in c(1:AnzMixtures)){
    PClassGivenData[,i] <- PDataGivenClass[,i]*Weights[i] / NormalizationFactor
  }#end for(i in c(1:AnzMixtures))

  if(CorrectBorders == TRUE & (sum(IsLogDistribution)==0)){ # apply border corrections
    # data smaller than the smallest mode are assigned to that mode
    KleinsterModus <- min(Means)
    SmallModInd <- which.min(Means)
    LowerInd <- which(Kernels<KleinsterModus);
    for(i in c(1:AnzMixtures)){
      PClassGivenData[LowerInd,i] <- 0;
    }#end for(i in c(1:AnzMixtures))
    PClassGivenData[LowerInd,SmallModInd] <- 1;
    # data larger than the largest mode are assigned to that mode
    GroessterModus <- max(Means)
    BigModInd <- which.max(Means)
    HigherInd <- which(Kernels>GroessterModus);
    for(i in c(1:AnzMixtures)){
      PClassGivenData[HigherInd,i] <- 0;
    }#end for(i in c(1:AnzMixtures))
    PClassGivenData[HigherInd,BigModInd] <- 1;
  }#end if(CorrectBorders == TRUE & (sum(IsLogDistribution)==0))

  # now map the posteriors back onto the data
  Posteriors = matrix(0, length(Data), AnzMixtures)#;zeros(length(Data),AnzMixtures);
  for(i in c(1:AnzMixtures)){
    Posteriors[,i] <- PClassGivenData[UNsortInd,i]; # adapt to the data
  }#end for(i in c(1:AnzMixtures))
  # also adapt the normalization factor to the size of the data
  Nenner <- NormalizationFactor;
  NormalizationFactor <- NormalizationFactor[UNsortInd];

  ## MT: plotting rewritten
  if(PlotIt==TRUE){
    def.par <- par(no.readonly = TRUE) # save default, for resetting...
    on.exit(par(def.par))
    if(is.null(Color)) color <- rainbow(AnzMixtures) else color=Color
    xlim=c(min(Data,na.rm=T),max(Data,na.rm=T))
    ylim=c(0,1.05)
    plot.new()
    par(xaxs='i')
    par(yaxs='i')
    par(usr=c(xlim,ylim))
    ind=order(Data)
    if(CorrectBorders){
      for(i in 1:AnzMixtures){
        points(Data[ind], Posteriors[ind,i], col = color[i],type='l',lwd=lwd)
      }#end for(i in 1:AnzMixtures)
    }else{
      for(i in 1:AnzMixtures){
        points(Data[ind], Posteriors[ind,i], col = color[i],type='l',lwd=lwd)
      }#end for(i in 1:AnzMixtures)
    }
    axis(1,xlim=xlim,col="black",las=1) # x-axis
    axis(2,ylim=ylim,col="black",las=1) # y-axis
    #box() # box around the plot
    title(ylab='Posteriori',xlab=xlab)
  }#end if(PlotIt==TRUE)
  ##
  res <- list(Posteriors = Posteriors, NormalizationFactor=NormalizationFactor, PClassGivenData = PClassGivenData)
  return (res)
}
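## Example (minimal illustrative sketch; not run): posteriors of a two-component
## GMM; the parameter values and names below are arbitrary placeholders.
# SampleData <- c(rnorm(1000, 0, 1), rnorm(1000, 4, 1))
# V <- Bayes4Mixtures(SampleData, Means = c(0, 4), SDs = c(1, 1),
#                     Weights = c(0.5, 0.5), PlotIt = TRUE)
# head(V$Posteriors)            # per-datum posterior for each mixture component
# head(V$NormalizationFactor)   # denominator of Bayes' theorem per datum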