---
title: "Using Binary Dosage files"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Using Binary Dosage files}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(BinaryDosage)
```
The following routines are available for accessing information contained in binary dosage files
- <span style="font-family:Courier">getbdinfo</span>
- <span style="font-family:Courier">bdapply</span>
- <span style="font-family:Courier">getsnp</span>
## getbdinfo
The <span style="font-family:Courier">getbdinfo</span> routine returns information about a binary dosage file. For more information about the data returned see [Genetic File Information](geneticfileinfo.html). This information needs to be passed to the <span style="font-family:Courier">bdapply</span> and <span style="font-family:Courier">getsnp</span> routines so they can read the binary dosage file.
The only parameter used by <span style="font-family:Courier">getbdinfo</span> is <span style="font-family:Courier">bdfiles</span>. This parameter is a character vector. If the format of the binary dosage file is 1, 2, or 3, it must be a character vector of length 3 containing the binary dosage, family, and map file names. If the format of the binary dosage file is 4, it is a single character value with the name of the binary dosage file.
The following code gets the information about the binary dosage file *vcf1a.bdose*.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
bd1afile <- system.file("extdata", "vcf1a.bdose", package = "BinaryDosage")
bd1ainfo <- getbdinfo(bdfiles = bd1afile)
```
## bdapply
The <span style="font-family:Courier">bdapply</span> routine applies a function to all SNPs in the binary dosage file. The routine returns a list with length equal to the number of SNPs in the binary dosage file. Each element in the list is the value returned by the user supplied function. The routine takes the following parameters.
- <span style="font-family:Courier">bdinfo</span> - list with information about the binary dosage file returned by <span style="font-family:Courier">getbdinfo</span>.
- <span style="font-family:Courier">func</span> - user supplied function to be applied to each SNP in the binary dosage file.
- <span style="font-family:Courier">...</span> - additional parameters needed by the user supplied function
The user supplied function must have the following parameters.
- <span style="font-family:Courier">dosage</span> - A numeric vector with the dosage values for each subject.
- <span style="font-family:Courier">p0</span> - A numeric vector with the probabilities the subject has no alternate alleles for each subject.
- <span style="font-family:Courier">p1</span> - A numeric vector with the probabilities the subject has one alternate allele for each subject.
- <span style="font-family:Courier">p2</span> - A numeric vector with the probabilities the subject has two alternate alleles for each subject.
The user supplied function can have other parameters. These parameters need to be passed to the <span style="font-family:Courier">bdapply</span> routine.
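For illustration, the following sketch (not part of the package) defines a function with the required <span style="font-family:Courier">dosage</span>, <span style="font-family:Courier">p0</span>, <span style="font-family:Courier">p1</span>, and <span style="font-family:Courier">p2</span> parameters plus an extra <span style="font-family:Courier">cutoff</span> parameter that is passed through the <span style="font-family:Courier">...</span> argument of <span style="font-family:Courier">bdapply</span>. The function name and cutoff value are arbitrary.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
# Count the subjects whose expected dosage exceeds a cutoff;
# cutoff is supplied through the ... argument of bdapply
countabove <- function(dosage, p0, p1, p2, cutoff) {
  sum(dosage > cutoff, na.rm = TRUE)
}
nabove <- unlist(bdapply(bdinfo = bd1ainfo, func = countabove, cutoff = 1))
head(nabove)
```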
The <span style="font-family:Courier">BinaryDosage</span> package includes a function named <span style="font-family:Courier">getaaf</span> that calculates the alternate allele frequency and is in the format required by the <span style="font-family:Courier">bdapply</span> routine. The following uses <span style="font-family:Courier">getaaf</span> to calculate the alternate allele frequency for each SNP in the *vcf1a.bdose* file using the <span style="font-family:Courier">bdapply</span> routine.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
aaf <- unlist(bdapply(bdinfo = bd1ainfo, func = getaaf))
altallelefreq <- data.frame(SNP = bd1ainfo$snps$snpid, aafcalc = aaf)
knitr::kable(altallelefreq, caption = "Calculated aaf", digits = 3)
```
## getsnp
The <span style="font-family:Courier">getsnp</span> routine returns the dosage and genotype probabilities for each subject for a given SNP in a binary dosage file.
The routine takes the following parameters.
- <span style="font-family:Courier">bdinfo</span> - list with information about the binary dosage file returned by <span style="font-family:Courier">getbdinfo</span>.
- <span style="font-family:Courier">snp</span> - the SNP to return information about. This can either be the index of the SNP in the binary dosage file or its SNP ID.
- <span style="font-family:Courier">dosageonly</span> - a logical value indicating if only the dosage values are returned without the genotype probabilities. The default value is TRUE indicating that only the dosage values are returned.
The following code returns the dosage values and the genotype probabilities for SNP 1:12000:T:C from the *vcf1a.bdose* binary dosage file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
snp3 <- data.frame(getsnp(bdinfo = bd1ainfo, "1:12000:T:C", FALSE))
knitr::kable(snp3[1:20,], caption = "SNP 1:12000:T:C", digits = 3)
```
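When only the dosages are needed, the SNP can also be requested by its index and the default <span style="font-family:Courier">dosageonly = TRUE</span> can be used. The following sketch assumes only that the returned value can be flattened with <span style="font-family:Courier">unlist</span>.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
# Dosage values only, requesting the first SNP by its index
snp1dosage <- unlist(getsnp(bdinfo = bd1ainfo, snp = 1L))
summary(snp1dosage)
```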
---
title: "Using GEN Files"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Using GEN Files}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette documents the functions in the BinaryDosage package that convert GEN files to binary dosage files.
**Note:** The examples below use functions to access information in binary dosage files. Information about these functions can be found in the vignette [Using Binary Dosage Files](usingbdfiles.html). Data returned by the function <span style="font-family:Courier">getbdinfo</span> contains information about a binary dosage file. Information on the data returned by <span style="font-family:Courier">getbdinfo</span> can be found in the vignette [Genetic File Information](geneticfileinfo.html).
```{r setup, echo=FALSE}
library(BinaryDosage)
```
# Introduction
GEN files are a useful way to store genetic data. They are text files and can be easily parsed. The output files from the imputation software [Impute2](http://mathgen.stats.ox.ac.uk/impute/impute_v2.html) are in this format.
Uncompressed GEN files can be very large, 100s of GB. Because of this they are quite often compressed. This makes the files much smaller but greatly increases the read time. The BinaryDosage package supports reading of gzip compressed GEN files.
The GEN file format appears to have changed over the years, and GEN-like file formats have also been created. The <span style="font-family:Courier">BinaryDosage</span> package supports many of these GEN-like formats.
The BinaryDosage package has a routine to convert GEN files into a binary format that maintains the dosages and genotype probabilities. This results in a file about 10-15% the size of the uncompressed GEN file with much faster, 200-300x, read times. In comparison, using gzip to compress the GEN file reduces the size of the file to about 5% of its original size but makes run times slower.
Routines were written to help debug the conversion routine. It was discovered these routines were quite useful for accessing data in the GEN file and are now included in the package. This document contains instructions on how to use these routines.
# GEN file formats
The GEN file may have a header. If it does, the first N entries must be the column names for the SNP information columns. The remaining entries identify the subjects and can have either of the following formats, ordered by subject:
- The family ID followed by the subject ID
- The subject ID only
If the GEN file does not have a header, the subject information must be in a sample file that can be read with <span style="font-family:Courier">read.table</span>. If there is only one column, the subject ID is set to this value and the family ID is set to "". Otherwise, the family ID is set to the value of the first column and the subject ID is set to the value of the second column. If the first values of the family ID and subject ID are both "0", they are deleted. If the family ID and subject ID are equal for all subjects, the family ID is set to "".
**Note:** If a sample file is provided, the header is ignored.
The body of the GEN file must have the following format. The first N columns must contain information about the SNP. These columns must contain the following values:
- SNP ID
- Location
- Alternate allele
- Reference allele
The chromosome number may also be in the first N columns.
**Note:** The first three columns of the GEN file used to be snp_id, rs_id, and position. In many cases these values were changed to chromosome, snp_id, and position.
The remaining columns must contain the genotype probabilities sorted by subject. The genotype probabilities can be in any of the following formats.
- The dosage value only
- The probability the subject has no alternate alleles followed by the probability the subject has one alternate allele.
- The probability the subject has no alternate alleles, the probability the subject has one alternate allele, and the probability the subject has two alternate alleles.
**Note:** The number of genotype probabilities must agree with the number of subjects identified in the header or sample file.
# Example files
There are several sample files included with the BinaryDosage package. Their file names are obtained with the <span style="font-family:Courier">system.file</span> command, which is used many times in the examples.
The binary dosage files created are temporary files, created with the <span style="font-family:Courier">tempfile</span> command, which is also used many times in the examples. All output files use the default format of 4. For information on other formats see the vignette [Binary Dosage Formats](bdformats.html).
# Converting a GEN file to the Binary Dosage format
The <span style="font-family:Courier">gentobd</span> routine converts GEN files to the binary dosage format. Many different GEN file formats are supported by the BinaryDosage package. The following sections show how to convert GEN files in various formats to the binary dosage format.
The <span style="font-family:Courier">gentobd</span> routine takes the following parameters:
- <span style="font-family:Courier">genfiles</span> - Name of GEN file and the optional sample file.
- <span style="font-family:Courier">snpcolumns</span> - Columns containing the values for chromosome, SNP ID, location, reference allele, and alternate allele.
- <span style="font-family:Courier">startcolumn</span> - Column where genotype probabilities start.
- <span style="font-family:Courier">impformat</span> - Number of genotype probabilities for each subject.
- <span style="font-family:Courier">chromosome</span> - Optional chromosome value, provided if the chromosome is not included in the GEN file.
- <span style="font-family:Courier">header</span> - Vector of one or two logical values indicating if GEN and sample files have headers respectively.
- <span style="font-family:Courier">gz</span> - Logical value indicating if GEN file is compressed.
- <span style="font-family:Courier">sep</span> - Separator used in GEN file.
- <span style="font-family:Courier">bdfiles</span> - Vector of character values giving the names of the binary dosage files. If the binary dosage format is 3 or less there are three file names: the binary dosage, family, and map file names. For format 4 there is only the binary dosage file name.
- <span style="font-family:Courier">format</span> - Format of the binary dosage file.
- <span style="font-family:Courier">subformat</span> - Subformat of the binary dosage file.
- <span style="font-family:Courier">snpidformat</span> - Format to store the SNP ID in.
- <span style="font-family:Courier">bdoptions</span> - Options for calculating additional SNP data.
## Default options
The default values for <span style="font-family:Courier">gentobd</span> require a sample file, meaning there is no header in the GEN file, and require the first five columns to be chromosome, SNP ID, location, reference allele, and alternate allele, respectively. The genotype data must contain all three genotype probabilities and the file must not be compressed.
The following code reads the GEN file *set3b.chr.imp* using the *set3b.sample* sample file. These files are in the format described above.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bchrfile <- system.file("extdata", "set3b.chr.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
bdfile3b_chr <- tempfile()
gentobd(genfiles = c(gen3bchrfile, sample3bfile), bdfiles = bdfile3b_chr)
bdinfo3b_chr <- getbdinfo(bdfiles = bdfile3b_chr)
```
## snpcolumns
The <span style="font-family:Courier">snpcolumns</span> parameter lists the column numbers for the chromosome, SNP ID, location, reference allele, and alternate allele, respectively.
The following code reads in *set1b.imp*. This file has the SNP data in the following order, chromosome, location, SNP ID, reference allele, alternate allele. The file also has a header so there is no sample file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen1bfile <- system.file("extdata", "set1b.imp", package = "BinaryDosage")
bdfile1b <- tempfile()
gentobd(genfiles = gen1bfile,
bdfiles = bdfile1b,
snpcolumns = c(1L, 3L, 2L, 4L, 5L),
header = TRUE)
bdinfo1b <- getbdinfo(bdfiles = bdfile1b)
```
Quite often the chromosome is not part of the GEN file and the first column has the value '\-\-'. In this case the SNP ID is often in the format <span style="font-family:Courier">\<chromosome\>:\<additional SNP data\></span>, and the chromosome column number (the first value in snpcolumns) can be set to 0L so the <span style="font-family:Courier">gentobd</span> routine extracts the chromosome from the SNP ID.
The following code reads in *set3b.imp*. This file is in the same format as *set3b.chr.imp* except that there is no column for the chromosome value. A value of 0L is used for the chromosome column to indicate that the chromosome should be taken from the SNP ID.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bfile <- system.file("extdata", "set3b.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
bdfile3b <- tempfile()
gentobd(genfiles = c(gen3bfile, sample3bfile),
bdfiles = bdfile3b,
snpcolumns = c(0L,2L:5L))
bdinfo3b <- getbdinfo(bdfiles = bdfile3b)
```
## startcolumn
Sometimes the GEN file has more SNP information than the five values mentioned earlier. In this case the genotype probabilities start in a column other than 6. The value of <span style="font-family:Courier">startcolumn</span> is the column number in which the genotype probabilities start.
The following code reads in *set4b.imp*. It has an extra column of SNP data in column 2. The <span style="font-family:Courier">snpcolumns</span> and <span style="font-family:Courier">startcolumn</span> parameters have been set to handle this. The value of <span style="font-family:Courier">impformat</span> has also been set since there are only two genotype probabilities per subject in the file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen4bfile <- system.file("extdata", "set4b.imp", package = "BinaryDosage")
sample4bfile <- system.file("extdata", "set4b.sample", package = "BinaryDosage")
bdfile4b <- tempfile()
gentobd(genfiles = c(gen4bfile, sample4bfile),
bdfiles = bdfile4b,
snpcolumns = c(1L,2L,4L,5L,6L),
startcolumn = 7L,
impformat = 2L)
bdinfo4b <- getbdinfo(bdfiles = bdfile4b)
```
## impformat
The <span style="font-family:Courier">impformat</span> parameter is an integer from 1 to 3 that indicates how many genotype values are in the file for each subject. A value of 1 indicates that only the dosage value is provided for each subject.
The following code reads in the file *set2b.imp*. This file contains only the dosage values for the subjects. The SNP information is not in the default order, so <span style="font-family:Courier">snpcolumns</span> has to be specified (see above).
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen2bfile <- system.file("extdata", "set2b.imp", package = "BinaryDosage")
sample2bfile <- system.file("extdata", "set2b.sample", package = "BinaryDosage")
bdfile2b <- tempfile()
gentobd(genfiles = c(gen2bfile, sample2bfile),
bdfiles = bdfile2b,
snpcolumns = c(1L,3L,2L,4L,5L),
impformat = 1L)
bdinfo2b <- getbdinfo(bdfiles = bdfile2b)
```
## chromosome
The <span style="font-family:Courier">chromosome</span> parameter is a character value that is used when the chromosome column value in <span style="font-family:Courier">snpcolumns</span> is set to -1L.
The following code reads in *set3b.imp*, setting the chromosome value to 1.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bfile <- system.file("extdata", "set3b.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
bdfile3bm1 <- tempfile()
gentobd(genfiles = c(gen3bfile, sample3bfile),
bdfiles = bdfile3bm1,
snpcolumns = c(-1L,2L:5L),
chromosome = "1")
bdinfo3bm1 <- getbdinfo(bdfiles = bdfile3bm1)
```
## header parameter
The <span style="font-family:Courier">header</span> parameter is a logical vector of length 1 or 2. The values indicate whether the GEN file and sample file have headers, respectively. If the first value is <span style="font-family:Courier">TRUE</span>, the second value is ignored because the subject IDs are in the header of the GEN file.
The following code reads in *set3b.imp* using the sample file *set3bnh.sample* which has no header.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bfile <- system.file("extdata", "set3b.imp", package = "BinaryDosage")
sample3bnhfile <- system.file("extdata", "set3bnh.sample", package = "BinaryDosage")
bdfile3bnh <- tempfile()
gentobd(genfiles = c(gen3bfile, sample3bnhfile),
bdfiles = bdfile3bnh,
snpcolumns = c(0L,2L:5L),
header = c(FALSE, FALSE))
bdinfo3bnh <- getbdinfo(bdfiles = bdfile3bnh)
```
## gz
The <span style="font-family:Courier">gz</span> parameter is a logical value that indicates if the GEN file is compressed using gzip. The sample file is always assumed to be uncompressed.
The following code reads in the *set4b.imp.gz* file using the sample file *set4b.sample*.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen4bgzfile <- system.file("extdata", "set4b.imp.gz", package = "BinaryDosage")
sample4bfile <- system.file("extdata", "set4b.sample", package = "BinaryDosage")
bdfile4bgz <- tempfile()
gentobd(genfiles = c(gen4bgzfile, sample4bfile),
bdfiles = bdfile4bgz,
snpcolumns = c(1L,2L,4L,5L,6L),
startcolumn = 7L,
impformat = 2L,
gz = TRUE)
bdinfo4bgz <- getbdinfo(bdfiles = bdfile4bgz)
```
## sep
The <span style="font-family:Courier">sep</span> parameter is a character value giving the character that separates the columns in the GEN file. Multiple consecutive separators are treated as a single separator.
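As a sketch, a comma separated GEN-like file could be converted as shown below. The file names *mydata.gen* and *mydata.sample* are hypothetical, the remaining defaults are assumed to apply, and the chunk is not evaluated.
``` {r, eval = FALSE}
# Hypothetical comma separated GEN-like file; sep tells gentobd which
# character separates the columns
bdfile_csv <- tempfile()
gentobd(genfiles = c("mydata.gen", "mydata.sample"),
        bdfiles = bdfile_csv,
        sep = ",")
```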
## bdfiles
The <span style="font-family:Courier">bdfiles</span> parameter is a character vector of length 1 or 3. These are the names of the binary dosage, family, and map files. If the format of the binary dosage file is 4, the only value needed is the name of the binary dosage file.
## format
The <span style="font-family:Courier">format</span> parameter determines the format of the binary dosage files. Formats 1, 2 and 3 consist of three files, binary dosage, family, and map. Format 4 combines all of these into one file.
## subformat
The <span style="font-family:Courier">subformat</span> parameter determines what information is in the binary dosage files. All formats can have subformats 1 and 2. A <span style="font-family:Courier">subformat</span> value of 1 indicates that only the dosage values are written to the binary dosage file and a value of 2 indicates that the dosage and genotype probabilities are written to the binary dosage file. Formats 3 and 4 can also have <span style="font-family:Courier">subformat</span> values of 3 and 4. These values have the same meaning as 1 and 2 respectively but have a slightly reordered header in the binary dosage file to improve read speed.
## snpidformat
The <span style="font-family:Courier">snpidformat</span> option specifies how the SNP ID is written to the binary dosage file. The default value is 0, which tells the code to use the SNP IDs that are in the GEN file. Other values create a SNP ID from the chromosome, location, reference allele, and alternate allele values.
When the snpidformat is set to 1, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location</span>
When the snpidformat is set to 2, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location:Reference Allele:Alternate Allele</span>
When the snpidformat is set to 3, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location_Reference Allele_Alternate Allele</span>
When the snpidformat is set to -1, the SNP ID is not written to the binary dosage file. When the binary dosage file is read, the SNP ID is generated using the format for snpidformat equal to 2. This reduces the size of the binary dosage file.
## bdoptions
When using binary dosage format 4.x it is possible to store additional information about the SNPs in the file. This information consists of the following values:
- Alternate allele frequency
- Minor allele frequency
- Imputation r-squared
It is possible to calculate the alternate and minor allele frequency without the imputation information file. It is also possible to estimate the imputation r-squared. See the vignette [Estimating Imputed R-squares](r2estimates.html) for more information on the r-squared estimate.
The value for bdoptions is a vector of character values that can be "aaf", "maf", "rsq", or any combination of these values. These indicate that the alternate allele frequency, minor allele frequency, and imputation r-squared, respectively, should be calculated.
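As a sketch, the following converts *set3b.chr.imp* (which is in the default GEN format) while calculating the alternate and minor allele frequencies and then displays them. The object names are arbitrary.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
# Convert a GEN file and calculate aaf and maf during the conversion
gen3bchrfile <- system.file("extdata", "set3b.chr.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
bdfile3b_opt <- tempfile()
gentobd(genfiles = c(gen3bchrfile, sample3bfile),
        bdfiles = bdfile3b_opt,
        bdoptions = c("aaf", "maf"))
bdinfo3b_opt <- getbdinfo(bdfiles = bdfile3b_opt)
knitr::kable(data.frame(SNP = bdinfo3b_opt$snps$snpid,
                        aaf = bdinfo3b_opt$snpinfo$aaf,
                        maf = bdinfo3b_opt$snpinfo$maf),
             caption = "Calculated aaf and maf", digits = 3)
```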
# Additional routines
The following routines are available for accessing information contained in GEN files:
- <span style="font-family:Courier">getgeninfo</span>
- <span style="font-family:Courier">genapply</span>
## getgeninfo
The <span style="font-family:Courier">getgeninfo</span> routine returns information about a GEN file. For more information about the data returned see [Genetic File Information](geneticfileinfo.html). This information needs to be passed to genapply so it can efficiently read the GEN file.
The parameters passed to <span style="font-family:Courier">getgeninfo</span> are
- <span style="font-family:Courier">genfiles</span> - Name of GEN file and the optional sample file.
- <span style="font-family:Courier">snpcolumns</span> - Columns containing the values for chromosome, SNP ID, location, reference allele, and alternate allele.
- <span style="font-family:Courier">startcolumn</span> - Column where genotype probabilities start.
- <span style="font-family:Courier">impformat</span> - Number of genotype probabilities for each subject.
- <span style="font-family:Courier">chromosome</span> - Optional chromosome value, provided if the chromosome is not included in the GEN file.
- <span style="font-family:Courier">header</span> - Vector of one or two logical values indicating if GEN and sample files have headers respectively.
- <span style="font-family:Courier">gz</span> - Logical value indicating if GEN file is compressed.
- <span style="font-family:Courier">index</span> - Logical value indicating if the GEN file is to be indexed.
- <span style="font-family:Courier">snpidformat</span> - Format to create the SNP ID in.
- <span style="font-family:Courier">sep</span> - Separator used in GEN file.
All of these parameters have the same meaning as in the <span style="font-family:Courier">gentobd</span> routine above. There is one additional parameter, <span style="font-family:Courier">index</span>. This is a logical value indicating whether the GEN file should be indexed for quicker reading. This is useful when using <span style="font-family:Courier">genapply</span>. However, the <span style="font-family:Courier">index</span> parameter cannot be TRUE if the file is compressed.
## genapply
The <span style="font-family:Courier">genapply</span> routine applies a function to all SNPs in the GEN file. The routine returns a list with length equal to the number of SNPs in the GEN file. Each element in the list is the value returned by the user supplied function. The routine takes the following parameters.
- <span style="font-family:Courier">geninfo</span> - list with information about the GEN file returned by <span style="font-family:Courier">getgeninfo</span>.
- <span style="font-family:Courier">func</span> - user supplied function to be applied to each SNP in the GEN file.
- <span style="font-family:Courier">...</span> - additional parameters needed by the user supplied function
The user supplied function must have the following parameters.
- <span style="font-family:Courier">dosage</span> - A numeric vector with the dosage values for each subject.
- <span style="font-family:Courier">p0</span> - A numeric vector with the probabilities the subject has no alternate alleles for each subject.
- <span style="font-family:Courier">p1</span> - A numeric vector with the probabilities the subject has one alternate allele for each subject.
- <span style="font-family:Courier">p2</span> - A numeric vector with the probabilities the subject has two alternate alleles for each subject.
The user supplied function can have other parameters. These parameters need to be passed to the <span style="font-family:Courier">genapply</span> routine.
The <span style="font-family:Courier">BinaryDosage</span> package includes a function named <span style="font-family:Courier">getaaf</span> that calculates the alternate allele frequency and is in the format required by the <span style="font-family:Courier">genapply</span> routine. The following uses <span style="font-family:Courier">getaaf</span> to calculate the alternate allele frequency for each SNP in the *set3b.chr.imp* file using the <span style="font-family:Courier">genapply</span> routine.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bchrfile <- system.file("extdata", "set3b.chr.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
geninfo <- getgeninfo(genfiles = c(gen3bchrfile, sample3bfile), index = TRUE)
aaf <- unlist(genapply(geninfo = geninfo, getaaf))
altallelefreq <- data.frame(SNP = geninfo$snps$snpid, aafcalc = aaf)
knitr::kable(altallelefreq, caption = "Calculated aaf", digits = 3)
```
---
title: "Using VCF files"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Using VCF Files}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette documents the functions in the BinaryDosage package that convert VCF files to binary dosage files.
**Note:** The examples below use functions to access information in binary dosage files. Information about these functions can be found in the vignette [Using Binary Dosage Files](usingbdfiles.html). Data returned by the function <span style="font-family:Courier">getbdinfo</span> contains information about a binary dosage file. Information on the data returned by <span style="font-family:Courier">getbdinfo</span> can be found in the vignette [Genetic File Information](geneticfileinfo.html).
```{r setup, echo=FALSE}
library(BinaryDosage)
```
# Introduction
VCF files are a useful way to store genetic data. They have a well defined format and can be easily parsed. The output files from the imputation software [minimac](https://genome.sph.umich.edu/wiki/Minimac) are in this format. The minimac software also returns an information file that is supported by the BinaryDosage package. The [Michigan Imputation Server](https://imputationserver.sph.umich.edu/index.html) uses minimac for imputation and returns VCF and information files. Functions in the BinaryDosage package have default settings to use the files returned from minimac.
Uncompressed VCF files are text files and can be very large, 100s of GB. Because of this they are quite often compressed. Files returned from the Michigan Imputation Server are compressed using gzip. This makes the files much smaller but greatly increases the read time. The BinaryDosage package supports reading of gzip compressed VCF files.
The BinaryDosage package was originally designed for use with files returned from the Michigan Imputation Server. It was quickly learned that if any manipulation was done to these files by various tools such as [vcftools](https://vcftools.github.io), the conversion routine in the BinaryDosage package would not work. The routine was modified to support additional VCF file formats.
The BinaryDosage package has a routine to convert VCF files into a binary format that maintains the dosages, genotype probabilities, and imputation statistics. This results in a file about 10-15% the size of the uncompressed VCF file with much faster, 200-300x, read times. In comparison, using gzip to compress the VCF file reduces the size of the file to about 5% of its original size but makes run times slower.
Routines were written to help debug the conversion routine. It was discovered these routines were quite useful for accessing data in the VCF file and are now included in the package. This document contains instructions on how to use these routines.
# Example files
There are several sample files included with the BinaryDosage package. Their file names are obtained with the <span style="font-family:Courier">system.file</span> command, which is used many times in the examples.
The binary dosage files created are temporary files, created with the <span style="font-family:Courier">tempfile</span> command, which is also used many times in the examples. All output files use the default format of 4. For information on other formats see the vignette [Binary Dosage Formats](bdformats.html).
# Converting a VCF file to the Binary Dosage format
The <span style="font-family:Courier">vcftobd</span> routine converts VCF files to the binary dosage format. Many different VCF file formats are supported by the BinaryDosage package. The following sections show how to convert VCF files in various formats to the binary dosage format.
## Minimac files
Since the binary dosage format was initially created for use with files produced by minimac, the default options for calling the routine to convert a VCF file to a binary dosage format are for this type of file.
### Uncompressed VCF files
Uncompressed VCF files are the easiest to convert to the binary dosage format since the default values for <span style="font-family:Courier">vcftobd</span> are set for this format.
#### No imputation information file
An imputation information file is not required to convert a VCF file into the binary dosage format. In this case, the parameter value for <span style="font-family:Courier">vcffiles</span> is set to the VCF file name.
The following commands convert the VCF file *set1a.vcf* into the binary dosage format.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
bdfile1a_woinfo <- tempfile()
vcftobd(vcffiles = vcf1afile, bdfiles = bdfile1a_woinfo)
```
#### Using the imputation information file
The minimac program returns an imputation information file that can be passed to the <span style="font-family:Courier">vcftobd</span> routine. This is done by setting the parameter <span style="font-family:Courier">vcffiles</span> to a vector of characters containing the VCF and the imputation information file names.
The following commands convert the VCF file *set1a.vcf* into the binary dosage format using the information file *set1a.info*.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
vcf1ainfo <- system.file("extdata", "set1a.info", package = "BinaryDosage")
bdfile1a_winfo <- tempfile()
vcftobd(vcffiles = c(vcf1afile, vcf1ainfo), bdfiles = bdfile1a_winfo)
```
The differences between the two binary dosage datasets can be checked by running the <span style="font-family:Courier">getbdinfo</span> routine on both files. The value of <span style="font-family:Courier">snpinfo</span> in the list returned from <span style="font-family:Courier">getbdinfo</span> will be empty for the first file and will contain the imputation information for the second file.
The following commands show that the first file does not contain any imputation information and the second one does. The imputation information for the second file is converted to a table for easier display.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
bdinfo1a_woinfo <- getbdinfo(bdfiles = bdfile1a_woinfo)
bdinfo1a_woinfo$snpinfo
bdinfo1a_winfo <- getbdinfo(bdfiles = bdfile1a_winfo)
knitr::kable(data.frame(bdinfo1a_winfo$snpinfo), caption = "bdinfo1a_winfo$snpinfo")
```
### Compressed VCF files
VCF files can be quite large, 100s of GB. Because of this they are often compressed. The function <span style="font-family:Courier">vcftobd</span> supports VCF files compressed using gzip by adding the option <span style="font-family:Courier">gz = TRUE</span> to the function call. The compressed file can be converted with or without an imputation information file. The imputation information file must **NOT** be compressed.
The following code reads in the data from the compressed VCF file *set1a.vcf.gz*. This is the *set1a.vcf* file after it has been compressed using gzip. The file is read in twice, once without the imputation information file and once with the imputation information file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile_gz <- system.file("extdata", "set1a.vcf.gz", package = "BinaryDosage")
vcf1ainfo <- system.file("extdata", "set1a.info", package = "BinaryDosage")
bdfile1a_woinfo_gz <- tempfile()
vcftobd(vcffiles = vcf1afile_gz, bdfiles = bdfile1a_woinfo_gz)
bdfile1a_winfo_gz <- tempfile()
vcftobd(vcffiles = c(vcf1afile_gz, vcf1ainfo), bdfiles = bdfile1a_winfo_gz)
```
### Checking the files
The four binary dosage files created above should all have the same dosage and genotype probabilities in them. The following code calculates the alternate allele frequencies for each of the binary dosage files using the <span style="font-family:Courier">bdapply</span> function. The results are then displayed in a table showing that the alternate allele frequencies are the same for each file. The value for SNPID was taken from the list returned from <span style="font-family:Courier">getbdinfo</span>.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
bdinfo1a_woinfo_gz <- getbdinfo(bdfiles = bdfile1a_woinfo_gz)
bdinfo1a_winfo_gz <- getbdinfo(bdfiles = bdfile1a_winfo_gz)
aaf1a_woinfo <- unlist(bdapply(bdinfo = bdinfo1a_woinfo, getaaf))
aaf1a_winfo <- unlist(bdapply(bdinfo = bdinfo1a_winfo, getaaf))
aaf1a_woinfo_gz <- unlist(bdapply(bdinfo = bdinfo1a_woinfo_gz, getaaf))
aaf1a_winfo_gz <- unlist(bdapply(bdinfo = bdinfo1a_winfo_gz, getaaf))
aaf1a <- data.frame(SNPID = bdinfo1a_woinfo$snps$snpid,
aaf1a_woinfo = aaf1a_woinfo,
aaf1a_winfo = aaf1a_winfo,
aaf1a_woinfo_gz = aaf1a_woinfo_gz,
aaf1a_winfo_gz = aaf1a_winfo_gz)
knitr::kable(aaf1a, caption = "Alternate Allele Frequencies", digits = 4)
```
## Other VCF file formats
The <span style="font-family:Courier">vcftobd</span> function can support VCF files in formats other than those returned from minimac. This is done by examining the value of FORMAT for each SNP in the VCF file. The routine looks for the values "DS" and "GP", dosage and genotype probabilities, in the FORMAT column. If one or both of these values are found, the appropriate information is written to the binary dosage file.
The file *set2a.vcf* contains only the dosage values. The following code converts it to a binary dosage file. The <span style="font-family:Courier">getsnp</span> function is then used to extract the first SNP and display the values for the first 10 subjects.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf2afile <- system.file("extdata", "set2a.vcf", package = "BinaryDosage")
bdfile2a <- tempfile()
vcftobd(vcffiles = vcf2afile, bdfiles = bdfile2a)
bdinfo2a <- getbdinfo(bdfiles = bdfile2a)
snp1_2a <- data.frame(getsnp(bdinfo = bdinfo2a, snp = 1L, dosageonly = FALSE))
snp1 <- cbind(SubjectID = bdinfo2a$samples$sid, snp1_2a)
knitr::kable(snp1[1:10,], caption = "Dosage and Genotype Probabilities")
```
## Other vcftobd options
There are other options for <span style="font-family:Courier">vcftobd</span>. These options affect how the information is written to the binary dosage file.
### format and subformat options
The format and subformat options determine the format of the binary dosage files. These formats are documented in the vignette [Binary Dosage Formats](bdformats.html).
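As a sketch, the following writes *set1a.vcf* in format 3, subformat 2. Format 3 produces three output files (binary dosage, family, and map), which are then passed together to <span style="font-family:Courier">getbdinfo</span>. The object names are arbitrary.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
# Convert a VCF file to binary dosage format 3, subformat 2;
# formats 1-3 require three output file names
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
bdfiles1a_f3 <- c(tempfile(), tempfile(), tempfile())
vcftobd(vcffiles = vcf1afile,
        bdfiles = bdfiles1a_f3,
        format = 3L,
        subformat = 2L)
bdinfo1a_f3 <- getbdinfo(bdfiles = bdfiles1a_f3)
```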
### snpidformat
The <span style="font-family:Courier">snpidformat</span> option specifies how the SNP ID is written to the binary dosage file. The default value is 0, which tells the code to use the SNP IDs that are in the VCF file. Other values create a SNP ID from the chromosome, location, reference allele, and alternate allele values.
When the snpidformat is set to 1, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location</span>
When the snpidformat is set to 2, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location:Reference Allele:Alternate Allele</span>
When the snpidformat is set to -1, the SNP ID is not written to the binary dosage file. When the binary dosage file is read, the SNP ID is generated using the format for snpidformat equal to 2. This reduces the size of the binary dosage file.
**Note:** If the SNP IDs in the VCF file are already in snpidformat 1 or 2, the code recognizes this and writes the smaller binary dosage file.
**Note:** If the SNP IDs in the VCF file are in snpidformat 2 and the snpidformat option is set to 1, an error is returned. This is because information could be lost and the binary dosage file would not change.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1brsfile <- system.file("extdata", "set1b_rssnp.vcf", package = "BinaryDosage")
bdfile1b.snpid0 <- tempfile()
bdfile1b.snpid1 <- tempfile()
bdfile1b.snpid2 <- tempfile()
bdfile1b.snpidm1 <- tempfile()
vcftobd(vcffiles = vcf1brsfile, bdfiles = bdfile1b.snpid0)
vcftobd(vcffiles = vcf1brsfile, bdfiles = bdfile1b.snpid1, snpidformat = 1)
vcftobd(vcffiles = vcf1brsfile, bdfiles = bdfile1b.snpid2, snpidformat = 2)
vcftobd(vcffiles = vcf1brsfile, bdfiles = bdfile1b.snpidm1, snpidformat = -1)
bdinfo1b.snpid0 <- getbdinfo(bdfiles = bdfile1b.snpid0)
bdinfo1b.snpid1 <- getbdinfo(bdfiles = bdfile1b.snpid1)
bdinfo1b.snpid2 <- getbdinfo(bdfiles = bdfile1b.snpid2)
bdinfo1b.snpidm1 <- getbdinfo(bdfiles = bdfile1b.snpidm1)
snpnames <- data.frame(format0 = bdinfo1b.snpid0$snps$snpid,
format1 = bdinfo1b.snpid1$snps$snpid,
format2 = bdinfo1b.snpid2$snps$snpid,
formatm1 = bdinfo1b.snpidm1$snps$snpid)
knitr::kable(snpnames, caption = "SNP Names by Format")
```
### bdoptions
When using binary dosage format 4.x it is possible to store additional information about the SNPs in the file. This information consists of the following values:
- Alternate allele frequency
- Minor allele frequency
- Average call
- Imputation r-squared
These values are normally provided in the imputation information file. However, it is possible to calculate the alternate and minor allele frequencies without the imputation information file. This can be useful when a subset of subjects is extracted from the VCF file that was returned from minimac. It is also possible to estimate the imputation r-squared. See the vignette [Estimating Imputed R-squares](r2estimates.html) for more information on the r-squared estimate.
The value for bdoptions is a vector of character values that can be "aaf", "maf", "rsq", or any combination of these values. These indicate that the alternate allele frequency, minor allele frequency, and imputation r-squared, respectively, should be calculated.
The following code converts the *set1a.vcf* file into a binary dosage file and calculates the alternate allele frequency and the minor allele frequency, and estimates the imputed r-squared. These values are then compared to the values in the binary dosage file that was generated by using the *set1a.info* information file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
bdfile1a_calcinfo <- tempfile()
vcftobd(vcffiles = vcf1afile, bdfiles = bdfile1a_calcinfo, bdoptions = c("aaf", "maf", "rsq"))
bdcalcinfo <- getbdinfo(bdfile1a_calcinfo)
snpinfo <- data.frame(aaf_info = bdinfo1a_winfo$snpinfo$aaf,
aaf_calc = bdcalcinfo$snpinfo$aaf,
maf_info = bdinfo1a_winfo$snpinfo$maf,
maf_calc = bdcalcinfo$snpinfo$maf,
rsq_info = bdinfo1a_winfo$snpinfo$rsq,
rsq_calc = bdcalcinfo$snpinfo$rsq)
knitr::kable(snpinfo, caption = "Information vs Calculated Information", digits = 3)
```
# Additional routines
The following routines are available for accessing information contained in VCF files
- <span style="font-family:Courier">getvcfinfo</span>
- <span style="font-family:Courier">vcfapply</span>
## getvcfinfo
The <span style="font-family:Courier">getvcfinfo</span> routine returns information about a VCF file. For more information about the data returned see [Genetic File Information](geneticfileinfo.html). This information needs to be passed to vcfapply so it can efficiently read the VCF file.
The parameters passed to <span style="font-family:Courier">getvcfinfo</span> are
- <span style="font-family:Courier">filenames</span>
- <span style="font-family:Courier">gz</span>
- <span style="font-family:Courier">index</span>
- <span style="font-family:Courier">snpidformat</span>
<span style="font-family:Courier">filenames</span> is a character vector that can contain up to two values. The first value is the name of the VCF file. The second value is optional and is the name of the imputation information file.
<span style="font-family:Courier">gz</span> is a logical value that indicates if the VCF file has been compressed using gzip. This only applies to the VCF file. Compression of the imputation information file is not supported.
<span style="font-family:Courier">index</span> is a logical value that indicates if the VCF file should be indexed. Indexing the VCF file takes time but greatly reduces the time needed to read the file. Indexing can only be done on uncompressed VCF files.
<span style="font-family:Courier">snpidformat</span> is an integer value from 0 to 2 that indicates how the SNP ID should be formatted. The value indicates which of the following formats to use.
- 0 - Value in VCF file
- 1 - <span style="font-family:Courier">Chromosome:Location(bp)</span>
- 2 - <span style="font-family:Courier">Chromosome:Location(bp):Reference Allele:Alternate Allele</span>
## vcfapply
The <span style="font-family:Courier">vcfapply</span> routine applies a function to all SNPs in the VCF file. The routine returns a list with length equal to the number of SNPs in the VCF file. Each element in the list is the value returned by the user supplied function. The routine takes the following parameters.
- <span style="font-family:Courier">vcfinfo</span> - list with information about the VCF file returned by <span style="font-family:Courier">getvcfinfo</span>.
- <span style="font-family:Courier">func</span> - user supplied function to be applied to each SNP in the VCF file.
- <span style="font-family:Courier">...</span> - additional parameters needed by the user supplied function
The user supplied function must have the following parameters.
- <span style="font-family:Courier">dosage</span> - A numeric vector with the dosage values for each subject.
- <span style="font-family:Courier">p0</span> - A numeric vector with the probabilities the subject has no alternate alleles for each subject.
- <span style="font-family:Courier">p1</span> - A numeric vector with the probabilities the subject has one alternate allele for each subject.
- <span style="font-family:Courier">p2</span> - A numeric vector with the probabilities the subject has two alternate alleles for each subject.
The user supplied function can have other parameters. These parameters need to be passed to the <span style="font-family:Courier">vcfapply</span> routine.
The <span style="font-family:Courier">BinaryDosage</span> package includes a function named <span style="font-family:Courier">getaaf</span> that calculates the alternate allele frequency and is in the format required by the <span style="font-family:Courier">vcfapply</span> routine. The following uses <span style="font-family:Courier">getaaf</span> to calculate the alternate allele frequency for each SNP in the *set1a.vcf* file using the <span style="font-family:Courier">vcfapply</span> routine and compares it to the aaf in the information file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
vcfinfo <- getvcfinfo(vcf1afile, index = TRUE)
aaf <- unlist(vcfapply(vcfinfo = vcfinfo, getaaf))
altallelefreq <- data.frame(SNP = vcfinfo$snps$snpid, aafinfo = aaf1a_winfo, aafcalc = aaf)
knitr::kable(altallelefreq, caption = "Information vs Calculated aaf", digits = 3)
```
---
title: "Binary Dosage Formats"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Binary Dosage Formats}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
There are currently 4 formats for the binary dosage file.
The first three formats consist of three files: binary dosage, family, and map. The family and map files contain data frames with information about the subjects and SNPs in the binary dosage file, respectively. These data frames are saved with the <span style="font-family:Courier">saveRDS</span> command.
# Format 1
Format 1 has a header that begins with a magic word and is followed by a number indicating whether it is in format 1.1 or 1.2. It is then followed by the genotype information. The total header length is 8 bytes.
## Format 1.1
In format 1.1 the only value stored is the dosage. The dosage values are multiplied by $2^{15} - 2$ (0x7ffe) and stored as short integers. If a value is missing it is stored as $2^{16} - 1$ (0xffff). Each subject requires 2 bytes per SNP. The total size of the data section is 2 times the number of subjects times the number of SNPs, in bytes.
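The arithmetic below illustrates this scaling (it is not the package's internal code): dosages are rounded to scaled short integers on write and rescaled on read, so values are recovered only approximately.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
# Round-trip a few dosages through the format 1.1 scaling
dosage  <- c(0, 0.5003, 1.25, 2)
stored  <- round(dosage * 0x7ffe)   # written as short integers
decoded <- stored / 0x7ffe          # rescaled on read
rbind(original = dosage, decoded = decoded)
```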
## Format 1.2
In format 1.2 the only values stored are $\Pr(g=1)$ and $\Pr(g=2)$. These values are multiplied by $2^{16} - 2$ (0xfffe) and stored as short integers. A value of $2^{16} - 1$ (0xffff) indicates a missing value. The total size of the data section is 4 times the number of subjects times the number of SNPs, in bytes.
# Format 2
Format 2 has the same header as format 1.
## Format 2.1
The format of the data section is the same as format 1.1 except the dosage values are multiplied by 20,000 (0x4e20). The missing value is still 0xffff (65,535).
## Format 2.2
The format of the data section is the same as format 1.2 except the genotype probabilities are multiplied by 20,000 (0x4e20). The missing value is still 0xffff (65,535).
**Note:** Format 2 was adopted when it was discovered that the values returned from the imputation programs were limited to 3 or 4 digits past the decimal point. When results from fitting models were compared between the binary dosage file and the original VCF or GEN file, there were slight but unimportant differences. It was considered desirable to be able to return the values exactly as they appear in the original imputation file.
# Format 3
Formats 3.1 and 3.2 have a header similar to formats 1 and 2, but the number of subjects and SNPs was added to the header to avoid problems associating the wrong family or map file with the binary dosage file.
Formats 3.3 and 3.4 have a header similar to formats 1 and 2, but the md5 hashes of the family and map data frames were added to the header to avoid problems associating the wrong family or map file with the binary dosage file.
## Format 3.1 and 3.3
The data section of formats 3.1 and 3.3 is the same as in format 2.1.
## Format 3.2
Each SNP in the data section begins with an integer value identifying how long the section is for that SNP. The data is then stored as described below under minimizing the data stored.
## Format 3.4
Format 3.4 stores the data in a format similar to 3.2, but the data section begins with the lengths of all the SNP sections and is then followed by the genotype information.
# Format 4
Format 4 takes the data that is in the family and map files and moves it into the header of the binary dosage file. The first section of the header has the magic word and the format. This is followed by information on where the family, map, and genotype data are stored in the file. After the header there is the family data, followed by the map data, and then the imputation data.
## Format 4.1 and 4.3
The data section of formats 4.1 and 4.3 is the same as in format 2.1.
## Format 4.2 and 4.4
The data sections of formats 4.2 and 4.4 are the same as in formats 3.2 and 3.4, respectively.
# Minimizing the data stored
A lot is known about the imputation data. We know the following
$$\Pr(g=0) + \Pr(g=1) + \Pr(g=2) = 1 $$
$$ d = \Pr(g=1) + 2\Pr(g=2)$$
where $d$ is the dosage. This means we only need to know two of the values to calculate the other two. In the <span style="font-family:Courier">BinaryDosage</span> package, the dosage and $\Pr(g=1)$ are used.
It is quite often the case that either $\Pr(g=0)$ or $\Pr(g=2)$ is 0. In this case, knowing the dosage is enough.
$$
\Pr(g = 1) = \left\{\begin{array}{ll}%
d & \; \Pr(g=2)=0, d \leq 1\\%
2 - d & \; \Pr(g=0) = 0, d > 1 %
\end{array}\right.
$$
Once the dosage and $\Pr(g=1)$ are known, the other values can be quickly calculated.
$$\Pr(g=2) = \frac{d - \Pr(g=1)}{2}$$
$$\Pr(g=0) = 1 - \Pr(g=1) - \Pr(g=2)$$
These formulae work well, but sometimes there is round off error in the imputation values. In these cases the above equations can't be used to get the exact imputation values, and all four imputation values, $d$, $\Pr(g=0)$, $\Pr(g=1)$, and $\Pr(g=2)$, have to be saved. Fortunately this is not a common occurrence.
Since the values stored are short integers of 2 bytes in length, only the last 15 bits are used for the value itself. This allows the 16th bit to be used as an indicator. For each SNP and each subject the first value saved is the dosage. If the 16th bit of the dosage is 0, either $\Pr(g=0)$ or $\Pr(g=2)$ is 0 and the other values can be calculated as described above. If the 16th bit of the dosage is set to 1, the value of $\Pr(g=1)$ follows. If the 16th bit of $\Pr(g=1)$ is 0, the above equations can be used to calculate $\Pr(g=0)$ and $\Pr(g=2)$. If it is set to 1, the next two values are $\Pr(g=0)$ and $\Pr(g=2)$, respectively.
**Note:** Use of this method generally results in 2.2 to 2.4 bytes needed to store each SNP for each subject.
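To make the reconstruction described above concrete, the following sketch (not part of the package and not evaluated here) recovers the genotype probabilities from a dosage value when either $\Pr(g=0)$ or $\Pr(g=2)$ is 0:
``` {r, eval = FALSE}
# Sketch: recover Pr(g = 0), Pr(g = 1), and Pr(g = 2) from a dosage value d,
# assuming either Pr(g = 0) or Pr(g = 2) is exactly 0 (no round off error).
probsfromdosage <- function(d) {
  p1 <- ifelse(d <= 1, d, 2 - d)   # Pr(g = 1)
  p2 <- (d - p1) / 2               # Pr(g = 2)
  p0 <- 1 - p1 - p2                # Pr(g = 0)
  data.frame(p0 = p0, p1 = p1, p2 = p2)
}
probsfromdosage(c(0.25, 1.5))
```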
---
title: "Genetic File Information"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Genetic File Information}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(BinaryDosage)
```
The routines <span style="font-family:Courier">getbdinfo</span>, <span style="font-family:Courier">getvcfinfo</span>, and <span style="font-family:Courier">getgeninfo</span> return a list with information about the data in the files. The list returned by each of these routines has a section common to them all and a list <span style="font-family:Courier">additionalinfo</span> that is specific to the file type.
## Common section
The common section has the following elements
- filename - Character value with the complete path and file name of the file with the genetic data
- usesfid - Logical value indicating if the subject data has family IDs.
- samples - Data frame containing the following information about the subjects
+ fid - Character value with family IDs
+ sid - Character value with the individual IDs
- onchr - Logical value indicating if all the SNPs are on the same chromosome
- snpidformat - Integer indicating the format of the SNP IDs as follows
+ 0 - Unknown for VCF and GEN files or user specified for binary dosage files
+ 1 - chromosome:location
+ 2 - chromosome:location:referenceallele:alternateallele
+ 3 - chromosome:location_referenceallele_alternateallele
- snps - Data frame containing the following values
+ chromosome - Character value indicating what chromosome the SNP is on
+ location - Integer value with the location of the SNP on the chromosome
+ snpid - Character value with the ID of the SNP
+ reference - Character value of the reference allele
+ alternate - Character value of the alternate allele
- snpinfo - List that contains the following information
+ aaf - numeric vector with the alternate allele frequencies
+ maf - numeric vector with the minor allele frequencies
+ avgcall - Numeric vector with the imputation average call
+ rsq - Numeric vector with the imputation r squared value
- datasize - Numeric vector indicating the size of data in the file for each SNP
- indices - Numeric vector indicating the starting location in the file for each SNP
The list returned has its class value set to "genetic-info".
The <span style="font-family:Courier">datasize</span> and <span style="font-family:Courier">indices</span> values are only returned if the parameter <span style="font-family:Courier">index</span> is set equal to <span style="font-family:Courier">TRUE</span>.
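For example, the common elements can be inspected directly on the list returned by <span style="font-family:Courier">getbdinfo</span>. The following sketch uses the *vcf1a.bdose* example file included with the package and is not evaluated here.
``` {r, eval = FALSE}
bd1afile <- system.file("extdata", "vcf1a.bdose", package = "BinaryDosage")
bd1ainfo <- getbdinfo(bdfiles = bd1afile)
names(bd1ainfo)          # common elements plus additionalinfo
head(bd1ainfo$samples)   # family and individual IDs
head(bd1ainfo$snps)      # chromosome, location, SNP ID, and alleles
```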
## Binary Dosage Additional Information
The additional information returned for binary dosage files contains the following information.
- format - numeric value with the format of the binary dosage file
- subformat - numeric value with the subformat of the binary dosage file
- headersize - integer value with the size of the header in the binary dosage file
- numgroups - integer value of the number of groups of subjects in the binary dosage file. This is usually the number of binary dosage files merged together to form the file
- groups - integer vector with size of each of the groups
This list has its class value set to "bdose-info".
## VCF File Additional Information
The additional information returned for VCF files contains the following information.
- gzipped - Logical value indicating if the file has been compressed using gzip
- headerlines - Integer value indicating the number of lines in the header
- headersize - Numeric value indicating the size of the header in bytes
- quality - Character vector containing the values in the QUALITY column
- filter - Character vector containing the values in the FILTER column
- info - Character vector containing the values in the INFO column
- format - Character vector containing the values in the FORMAT column
- datacolumns - Data frame summarizing the entries in the FORMAT value containing the following information
+ numcolumns - Integer value indicating the number of values in the FORMAT value
+ dosage - Integer value indicating the column containing the dosage value
+ genotypeprob - Integer value indicating the column containing the genotype probabilities
+ genotype - Integer value indicating the column containing the genotype call
This list has its class value set to "vcf-info".
The values for quality, filter, info, and format can have a length of 0 if all the values are missing. They will have a length of 1 if all the values are equal. The number of rows in the datacolumns data frame will be equal to the length of the format value.
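For example, assuming the element names listed above, the FORMAT handling for the *set1a.vcf* example file could be inspected as follows (a sketch, not evaluated here).
``` {r, eval = FALSE}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
vcf1ainfo <- getvcfinfo(vcf1afile)
vcf1ainfo$additionalinfo$format       # FORMAT values found in the file
vcf1ainfo$additionalinfo$datacolumns  # columns holding dosage and probabilities
```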
## GEN File Additional Information
The additional information returned for GEN files contains the following information.
- gzipped - Logical value indicating if the GEN file is compressed using gz
- headersize - Integer value indicating the size of the header in bytes
- format - Integer value indicating the number of genotype probabilities for each subject with the following meanings
+ 1 - Dosage only
+ 2 - $\Pr(g=0)$ and $\Pr(g=1)$
+ 3 - $\Pr(g=0)$, $\Pr(g=1)$, and $\Pr(g=2)$
- startcolumn - Integer value indicating in which column the genetic data starts
- sep - Character value indicating what value separates the columns
$g$ indicates the number of alternate alleles the subject has.
This list has its class value set to "gen-info".
---
title: "Merging Files"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Merging Files}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(BinaryDosage)
```
Quite often subjects have their genotypes imputed in batches. The files returned by these imputations can be converted into binary dosage files. If the resulting binary dosage files contain the same SNPs but different subjects, they can be merged into a single file using the <span style="font-family:Courier">bdmerge</span> routine.
## bdmerge
The <span style="font-family:Courier">bdmerge</span> routine takes the following parameters
- mergefiles - A character vector of the binary dosage file, family file, and map file names
- format - Integer value indicating which format of the binary dosage file should be used for the merged files
- subformat - Integer value indicating which subformat should be used for the merged files
- bdfiles - A character vector of the binary dosage files to merge
- famfiles - Character vector of the family files associated with the binary dosage files to merge
- mapfiles - Character vector of the map files associated with the binary dosage files to merge
- onegroup - Logical value indicating if the binary dosage file saves SNP summary information about each merged file
- bdoptions - Character vector indicating which SNP information should be calculated for the merged files. This cannot be used if onegroup is set to FALSE
- snpjoin - Character value indicating if an inner or outer join is done for the SNPs
The following code merges *vcf1a.bdose* and *vcf1b.bdose* into one binary dosage file. It then displays the number of subjects in each file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
bd1afile <- system.file("extdata", "vcf1a.bdose", package = "BinaryDosage")
bd1bfile <- system.file("extdata", "vcf1b.bdose", package = "BinaryDosage")
bd1file <- tempfile()
bdmerge(mergefiles = bd1file, bdfiles = c(bd1afile, bd1bfile))
bd1ainfo <- getbdinfo(bd1afile)
bd1binfo <- getbdinfo(bd1bfile)
bd1info <- getbdinfo(bd1file)
nrow(bd1ainfo$samples)
nrow(bd1binfo$samples)
nrow(bd1info$samples)
```
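Because the merge combines subjects while keeping the same set of SNPs, the number of SNPs in the merged file should match the input files. This can be checked with the information returned above (a sketch, not evaluated here).
``` {r, eval = FALSE}
nrow(bd1ainfo$snps)
nrow(bd1binfo$snps)
nrow(bd1info$snps)
```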
---
title: "Estimating Imputed R-squares"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Estimating Imputed R-squares}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
A common way to measure the imputation $r^2$ is to calculate the variance of the imputed allele probabilities and divide that by the variance that would be obtained if the alleles were perfectly imputed. An allele is perfectly imputed if $\Pr(a_i = 1)$ equals 0 or 1 for all $i$.
The variance of the alleles when they are perfectly imputed is $q(1-q)$, where $q$ is the alternate allele frequency. Given only the imputation data, we do not know what $q$ is in the general population. However, we can estimate it using the dosage values for each subject.
$$
\hat q = \sum_{i = 1}^{N}\frac{d_i}{2N}
$$
where the dosage is calculated as
$$
d_i = \Pr(g_i = 1) + 2\Pr(g_i=2)
$$
Another problem with the dosage data is that we don't have the probabilities for each allele. Instead we have $\Pr(g_i=0), \Pr(g_i=1),$ and $\Pr(g_i=2)$. If we assume that a subject's two allelic probabilities, $q_1$ and $q_2$, are imputed independently, we know the following
$$
q_1(1-q_2) + (1-q_1)q_2 = \Pr(g=1)
$$
and
$$
q_1 q_2 = \Pr(g=2)
$$
These equations can be solved resulting in the following values
$$
q_1 = \frac{d - \sqrt{d^2 - 4\Pr(g = 2)}}{2}\\%
q_2 = \frac{d + \sqrt{d^2 - 4\Pr(g = 2)}}{2}
$$
There can be some problems using the above equations. Sometimes the value inside the radical can be negative. This can be caused by roundoff error. If the value is negative and close to zero, the value can be set to zero.
**Note:** The documentation for minimac and Impute 2 indicates that the imputation values for the two alleles are imputed independently.
Since each subject has two alleles we can let $q_1$ to $q_N$ represent the first allele of each subject and $q_{N+1}$ to $q_{2N}$ represent the second allele. Given this we can calculate all the $q$'s as follows
$$
q_i = \left\{\begin{array}{ll}%
\frac{d_i - \sqrt{d_i^2 - 4\Pr(g_i = 2)}}{2} & \; 0<i\leq N\\%
\frac{d_{i-N} + \sqrt{d_{i-N}^2 - 4\Pr(g_{i-N} = 2)}}{2} & \; N<i\leq 2N %
\end{array}\right.
$$
Once the $q$'s have been calculated, the imputation $r^2$ can be estimated as follows
$$
\hat r^2 = \frac{\sum_{i = 1}^{2N}\frac{(q_i - \hat q)^2}{2N}}{\hat q(1 - \hat q)}
$$
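The following sketch (not part of the package and not evaluated here) implements this estimate as an R function. The arguments follow the dosage, $\Pr(g=0)$, $\Pr(g=1)$, $\Pr(g=2)$ convention used elsewhere in the package; only the dosage and $\Pr(g=2)$ are needed.
``` {r, eval = FALSE}
# Sketch: estimate the imputation r-squared for one SNP.
# Negative radicands caused by round off error are set to zero.
estimatersq <- function(dosage, p0, p1, p2) {
  radicand <- pmax(dosage * dosage - 4 * p2, 0)
  q <- c((dosage - sqrt(radicand)) / 2,
         (dosage + sqrt(radicand)) / 2)
  qhat <- mean(q)                          # estimated alternate allele frequency
  mean((q - qhat)^2) / (qhat * (1 - qhat))
}
```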
---
title: "Using Binary Dosage files"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Using Binary Dosage files}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(BinaryDosage)
```
The following routines are available for accessing information contained in binary dosage files
- <span style="font-family:Courier">getbdinfo</span>
- <span style="font-family:Courier">bdapply</span>
- <span style="font-family:Courier">getsnp</span>
## getbdinfo
The <span style="font-family:Courier">getbdinfo</span> routine returns information about a binary dosage file. For more information about the data returned see [Genetic File Information](geneticfileinfo.html). This information needs to be passed to the <span style="font-family:Courier">bdapply</span> and <span style="font-family:Courier">getsnp</span> routines so they can read the binary dosage file.
The only parameter used by <span style="font-family:Courier">getbdinfo</span> is <span style="font-family:Courier">bdfiles</span>. This parameter is a character vector. If the format of the binary dosage file is 1, 2, or 3, this must be a character vector of length 3 with the following values, binary dosage file name, family file name, and map file name. If the format of the binary dosage file is 4 then the parameter value is a single character value with the name of the binary dosage file.
The following code gets the information about the binary dosage file *vcf1a.bdose*.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
bd1afile <- system.file("extdata", "vcf1a.bdose", package = "BinaryDosage")
bd1ainfo <- getbdinfo(bdfiles = bd1afile)
```
## bdapply
The <span style="font-family:Courier">bdapply</span> routine applies a function to all SNPs in the binary dosage file. The routine returns a list with length equal to the number of SNPs in the binary dosage file. Each element in the list is the value returned by the user supplied function. The routine takes the following parameters.
- <span style="font-family:Courier">bdinfo</span> - list with information about the binary dosage file returned by <span style="font-family:Courier">getbdinfo</span>.
- <span style="font-family:Courier">func</span> - user supplied function to be applied to each SNP in the binary dosage file.
- <span style="font-family:Courier">...</span> - additional parameters needed by the user supplied function
The user supplied function must have the following parameters.
- <span style="font-family:Courier">dosage</span> - A numeric vector with the dosage values for each subject.
- <span style="font-family:Courier">p0</span> - A numeric vector with the probabilities the subject has no alternate alleles for each subject.
- <span style="font-family:Courier">p1</span> - A numeric vector with the probabilities the subject has one alternate allele for each subject.
- <span style="font-family:Courier">p2</span> - A numeric vector with the probabilities the subject has two alternate alleles for each subject.
The user supplied function can have other parameters. These parameters need to be passed to the <span style="font-family:Courier">bdapply</span> routine.
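For example, a user supplied function with one extra parameter might look like the following sketch. The function and its <span style="font-family:Courier">cutoff</span> parameter are hypothetical, and the chunk is not evaluated here.
``` {r, eval = FALSE}
# Hypothetical user supplied function: count subjects whose dosage exceeds a cutoff.
# The first four parameters must be dosage, p0, p1, and p2; cutoff is passed
# through the ... argument of bdapply.
countabove <- function(dosage, p0, p1, p2, cutoff) {
  sum(dosage > cutoff, na.rm = TRUE)
}
nabove <- unlist(bdapply(bdinfo = bd1ainfo, func = countabove, cutoff = 1))
```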
There is a function in the <span style="font-family:Courier">BinaryDosage</span> package named <span style="font-family:Courier">getaaf</span> that calculates the alternate allele frequencies and is in the format needed by the <span style="font-family:Courier">bdapply</span> routine. The following uses <span style="font-family:Courier">getaaf</span> to calculate the alternate allele frequency for each SNP in the *vcf1a.bdose* file using the <span style="font-family:Courier">bdapply</span> routine.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
aaf <- unlist(bdapply(bdinfo = bd1ainfo, func = getaaf))
altallelefreq <- data.frame(SNP = bd1ainfo$snps$snpid, aafcalc = aaf)
knitr::kable(altallelefreq, caption = "Calculated aaf", digits = 3)
```
## getsnp
The <span style="font-family:Courier">getsnp</span> routine returns the dosage and genotype probabilities for each subject for a given SNP in a binary dosage file.
The routine takes the following parameters.
- <span style="font-family:Courier">bdinfo</span> - list with information about the binary dosage file returned by <span style="font-family:Courier">getbdinfo</span>.
- <span style="font-family:Courier">snp</span> - the SNP to return information about. This can either be the index of the SNP in the binary dosage file or its SNP ID.
- <span style="font-family:Courier">dosageonly</span> - a logical value indicating if only the dosage values are returned without the genotype probabilities. The default value is TRUE indicating that only the dosage values are returned.
The following code returns the dosage values and the genotype probabilities for SNP 1:12000:T:C from the *vcf1a.bdose* binary dosage file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
snp3 <- data.frame(getsnp(bdinfo = bd1ainfo, "1:12000:T:C", FALSE))
knitr::kable(snp3[1:20,], caption = "SNP 1:12000:T:C", digits = 3)
```
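The same routine can also be called with the SNP index and the default value of <span style="font-family:Courier">dosageonly</span>, in which case only the dosage values are returned (a sketch, not evaluated here).
``` {r, eval = FALSE}
# Retrieve only the dosage values for the first SNP in the file
dosage1 <- unlist(getsnp(bdinfo = bd1ainfo, snp = 1L))
summary(dosage1)
```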
---
title: "Using GEN Files"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Using GEN Files}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette documents the functions in the BinaryDosage package that convert GEN files to binary dosage files.
**Note:** The examples below use functions to access information in binary dosage files. Information about these functions can be found in the vignette [Using Binary Dosage Files](usingbdfiles.html). Data returned by the function <span style="font-family:Courier">getbdinfo</span> contains information about a binary dosage file. Information on the data returned by <span style="font-family:Courier">getbdinfo</span> can be found in the vignette [Genetic File Information](geneticfileinfo.html).
```{r setup, echo=FALSE}
library(BinaryDosage)
```
# Introduction
GEN files are a useful way to store genetic data. They are text files and can be easily parsed. The output files returned from the imputation software [Impute2](http://mathgen.stats.ox.ac.uk/impute/impute_v2.html) are returned in this format.
Uncompressed GEN files can be very large, 100s of GB. Because of this they are quite often compressed. This makes the files much smaller but greatly increases the read time. The BinaryDosage package supports reading of gzip compressed GEN files.
There appear to have been changes to the GEN file format over the years, and it also appears that people have created GEN-like file formats. The <span style="font-family:Courier">BinaryDosage</span> package can support many GEN-like file formats.
The BinaryDosage package has a routine to convert GEN files into a binary format that maintains the dosages and genotype probabilities. This results in a file about 10-15% the size of the uncompressed GEN file with much faster, 200-300x, read times. In comparison, using gzip to compress the GEN file reduces the size of the file to about 5% of its original size but makes run times slower.
Routines were written to help debug the conversion routine. It was discovered these routines were quite useful for accessing data in the GEN file and are now included in the package. This document contains instructions on how to use these routines.
# GEN file formats
The GEN file may have a header. If it does have a header, the first N entries must be the column names for the SNP information variables. The remaining values identify the subjects and can have either of the following formats, ordered by subject
- The family ID followed by the subject ID
- The subject ID only
If the GEN file does not have a header, the subject information must be in a sample file that can be read with <span style="font-family:Courier">read.table</span>. If there is only one column the subject ID is set to this value and the family ID is set to "". Otherwise, the family ID value is set to the value of the first column and the subject ID value is set to the second column. If the first value of the subject ID and family ID are both "0", they are deleted. If family ID and subject ID are equal for all subjects, the family ID value is set to "".
**Note:** If a sample file is provided, the header is ignored.
The body of the GEN file must have the following format. The first N columns must contain information about the SNP. These columns must contain the following values
- SNP ID
- Location
- Alternate allele
- Reference allele
The chromosome number may also be in the first N columns.
**Note:** The first three columns of the GEN file used to be snp_id, rs_id, and position. In many cases these values got changed to chromosome, snp_id, and position.
The remaining columns must contain the genotype probabilities sorted by subject. The genotype probabilities can be in any of the following formats.
- The dosage value only
- Probability subject has no alternate alleles, probability subject has one alternate allele.
- Probability subject has no alternate alleles, probability subject has one alternate allele, probability subject has two alternate alleles.
**Note:** The number of genotype probabilities must agree with the number of subjects identified in the header or sample file.
# Example files
There are several sample files included with the BinaryDosage package. The file names will be found by using the <span style="font-family:Courier">system.file</span> command in the examples. This will be used many times in the examples.
The binary dosage files created will be temporary files. They will be created using the <span style="font-family:Courier">tempfile</span> command. This will also be used many times in the examples. All output files will use the default format of 4. For information on other formats see the vignette [Binary Dosage Formats](bdformats.html).
# Converting a GEN file to the Binary Dosage format
The <span style="font-family:Courier">gentobd</span> routine converts GEN files to the binary dosage format. Many different formats of GEN files are supported by the BinaryDosage package. The following sections show how to convert GEN files in various formats to the binary dosage format.
The <span style="font-family:Courier">gentobd</span> routine takes the following parameters
- <span style="font-family:Courier">genfiles</span> - Name of GEN file and the optional sample file.
- <span style="font-family:Courier">snpcolumns</span> - Columns containing the values for chromosome, SNP ID, location, reference allele, and alternate allele.
- <span style="font-family:Courier">startcolumn</span> - Column where genotype probabilities start.
- <span style="font-family:Courier">impformat</span> - Number of genotype probabilities for each subject.
- <span style="font-family:Courier">chromosome</span> - Optional chromosome value, provided if the chromosome is not included in the GEN file
- <span style="font-family:Courier">header</span> - Vector of one or two logical values indicating if GEN and sample files have headers respectively.
- <span style="font-family:Courier">gz</span> - Logical value indicating if GEN file is compressed.
- <span style="font-family:Courier">sep</span> - Separator used in GEN file.
- <span style="font-family:Courier">bdfiles</span> - Vector of character values giving the names of the binary dosage files. If the binary dosage format is 3 or less there are three file names: the binary dosage, family, and map file names. For format 4 there is only the binary dosage file name.
- <span style="font-family:Courier">format</span> - Format of the binary dosage file.
- <span style="font-family:Courier">subformat</span> - Subformat of the binary dosage file.
- <span style="font-family:Courier">snpidformat</span> - Format to store the SNP ID in.
- <span style="font-family:Courier">bdoptions</span> - Options for calculating additional SNP data.
## Default options
The default values for <span style="font-family:Courier">gentobd</span> require a sample file, meaning there is no header in the GEN file, and require the first five columns to be chromosome, SNP ID, location, reference allele, and alternate allele, respectively. The genotype data must have the three genotype probabilities, and the file must not be compressed.
The following code reads the GEN file *set3b.chr.imp* using the *set3b.sample* sample file. These files are in the format mentioned above.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bchrfile <- system.file("extdata", "set3b.chr.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
bdfile3b_chr <- tempfile()
gentobd(genfiles = c(gen3bchrfile, sample3bfile), bdfiles = bdfile3b_chr)
bdinfo3b_chr <- getbdinfo(bdfiles = bdfile3b_chr)
```
## snpcolumns
The <span style="font-family:Courier">snpcolumns</span> parameter lists the column numbers for the chromosome, SNP ID, location, reference allele, and alternate allele, respectively.
The following code reads in *set1b.imp*. This file has the SNP data in the following order, chromosome, location, SNP ID, reference allele, alternate allele. The file also has a header so there is no sample file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen1bfile <- system.file("extdata", "set1b.imp", package = "BinaryDosage")
bdfile1b <- tempfile()
gentobd(genfiles = gen1bfile,
bdfiles = bdfile1b,
snpcolumns = c(1L, 3L, 2L, 4L, 5L),
header = TRUE)
bdinfo1b <- getbdinfo(bdfiles = bdfile1b)
```
Quite often the chromosome is not part of the GEN file and the first column has the value '\-\-'. In this case the SNP ID is often in the format <span style="font-family:Courier">\<chromosome\>:\<additional SNP data\></span>, and the chromosome column number (the first value in snpcolumns) can be set to 0L so the <span style="font-family:Courier">gentobd</span> routine will extract the chromosome from the SNP ID value.
The following code reads in *set3b.imp*. This is in the same format as *set3b.chr.imp* except that there is no column for the chromosome value. The chromosome column is set to 0L to indicate that the chromosome value should be taken from the SNP ID.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bfile <- system.file("extdata", "set3b.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
bdfile3b <- tempfile()
gentobd(genfiles = c(gen3bfile, sample3bfile),
bdfiles = bdfile3b,
snpcolumns = c(0L,2L:5L))
bdinfo3b <- getbdinfo(bdfiles = bdfile3b)
```
## startcolumn
Sometimes the GEN file has more SNP information than the five values mentioned earlier. In this case the genotype probabilities start in a column other than column 6. The value of <span style="font-family:Courier">startcolumn</span> is the column number in which the genotype probabilities start.
The following code reads in *set4b.imp*. It has an extra column of SNP data in column 2. The <span style="font-family:Courier">snpcolumns</span> and <span style="font-family:Courier">startcolumn</span> parameters have been set to handle this. The value of <span style="font-family:Courier">impformat</span> has also been set since there are only 2 genotype probabilities in the file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen4bfile <- system.file("extdata", "set4b.imp", package = "BinaryDosage")
sample4bfile <- system.file("extdata", "set4b.sample", package = "BinaryDosage")
bdfile4b <- tempfile()
gentobd(genfiles = c(gen4bfile, sample4bfile),
bdfiles = bdfile4b,
snpcolumns = c(1L,2L,4L,5L,6L),
startcolumn = 7L,
impformat = 2L)
bdinfo4b <- getbdinfo(bdfiles = bdfile4b)
```
## impformat
The <span style="font-family:Courier">impformat</span> parameter is an integer from 1 to 3 that indicates how many genotype probabilities are in the file for each person. The value of 1 indicates that the value is the dosage value for the subject.
The following code reads in the file *set2b.imp*. This file contains only the dosage values for the subjects. The SNP information is not in the default order, so the value of snpcolumns has to be specified (see above).
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen2bfile <- system.file("extdata", "set2b.imp", package = "BinaryDosage")
sample2bfile <- system.file("extdata", "set2b.sample", package = "BinaryDosage")
bdfile2b <- tempfile()
gentobd(genfiles = c(gen2bfile, sample2bfile),
bdfiles = bdfile2b,
snpcolumns = c(1L,3L,2L,4L,5L),
impformat = 1L)
bdinfo2b <- getbdinfo(bdfiles = bdfile2b)
```
## chromosome
The <span style="font-family:Courier">chromosome</span> parameter is a character value that is used when the chromosome column value in <span style="font-family:Courier">snpcolumns</span> is set to -1L.
The following code reads in *set3b.imp*, setting the chromosome value to 1.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bfile <- system.file("extdata", "set3b.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
bdfile3bm1 <- tempfile()
gentobd(genfiles = c(gen3bfile, sample3bfile),
bdfiles = bdfile3bm1,
snpcolumns = c(-1L,2L:5L),
chromosome = "1")
bdinfo3bm1 <- getbdinfo(bdfiles = bdfile3bm1)
```
## header parameter
The <span style="font-family:Courier">header</span> parameter is a logical vector of length 1 or 2. These values indicate if the GEN file and sample file have headers, respectively. If the first value is <span style="font-family:Courier">TRUE</span>, the second value is ignored since the subject IDs are in the header of the GEN file.
The following code reads in *set3b.imp* using the sample file *set3bnh.sample* which has no header.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bfile <- system.file("extdata", "set3b.imp", package = "BinaryDosage")
sample3bnhfile <- system.file("extdata", "set3bnh.sample", package = "BinaryDosage")
bdfile3bnh <- tempfile()
gentobd(genfiles = c(gen3bfile, sample3bnhfile),
bdfiles = bdfile3bnh,
snpcolumns = c(0L,2L:5L),
header = c(FALSE, FALSE))
bdinfo3bnh <- getbdinfo(bdfiles = bdfile3bnh)
```
## gz
The <span style="font-family:Courier">gz</span> parameter is a logical value that indicates if the GEN file is compressed using gzip. The sample file is always assumed to be uncompressed.
The following code reads in *set4b.imp.gz* file using the sample file *set4b.sample*.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen4bgzfile <- system.file("extdata", "set4b.imp.gz", package = "BinaryDosage")
sample4bfile <- system.file("extdata", "set4b.sample", package = "BinaryDosage")
bdfile4bgz <- tempfile()
gentobd(genfiles = c(gen4bgzfile, sample4bfile),
bdfiles = bdfile4bgz,
snpcolumns = c(1L,2L,4L,5L,6L),
startcolumn = 7L,
impformat = 2L,
gz = TRUE)
bdinfo4bgz <- getbdinfo(bdfiles = bdfile4bgz)
```
## separator
The <span style="font-family:Courier">sep</span> parameter is a character value. This character separates the columns in the GEN file. Multiple adjacent copies of the separator are treated as a single separator.
## bdfiles
The <span style="font-family:Courier">bdfiles</span> parameter is a character vector of length 1 or 3. These are the names of the binary dosage, family, and map files. If the format of the binary dosage file is 4, the only value needed is the name of the binary dosage file.
## format
The <span style="font-family:Courier">format</span> parameter determines the format of the binary dosage files. Formats 1, 2 and 3 consist of three files, binary dosage, family, and map. Format 4 combines all of these into one file.
## subformat
The <span style="font-family:Courier">subformat</span> parameter determines what information is in the binary dosage files. All formats can have subformats 1 and 2. A <span style="font-family:Courier">subformat</span> value of 1 indicates that only the dosage values are written to the binary dosage file and a value of 2 indicates that the dosage and genotype probabilities are written to the binary dosage file. Formats 3 and 4 can also have <span style="font-family:Courier">subformat</span> values of 3 and 4. These values have the same meaning as 1 and 2 respectively but have a slightly reordered header in the binary dosage file to improve read speed.
## snpidformat
The <span style="font-family:Courier">snpidformat</span> option specifies how the SNP ID is written to the binary dosage file. The default value is 0. This tells the code to use the SNP IDs that are in the GEN file. Other values create a SNP ID from the chromosome, location, reference allele, and alternate allele values.
When the snpidformat is set to 1, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location</span>
When the snpidformat is set to 2, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location:Reference Allele:Alternate Allele</span>
When the snpidformat is set to 3, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location_Reference Allele_Alternate Allele</span>
When the snpidformat is set to -1, the SNP ID is not written to the binary dosage file. When the binary dosage file is read, the SNP ID is generated using the format for snpidformat equal to 2. This reduces the size of the binary dosage file.
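As a sketch (not evaluated here), the following would convert *set3b.chr.imp* again, this time writing the SNP IDs in format 2. The file and sample names are those used in the earlier examples.
``` {r, eval = FALSE}
bdfile3b_snpid2 <- tempfile()
gentobd(genfiles = c(gen3bchrfile, sample3bfile),
        bdfiles = bdfile3b_snpid2,
        snpidformat = 2)
getbdinfo(bdfiles = bdfile3b_snpid2)$snps$snpid
```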
## bdoptions
When using binary dosage format 4.x it is possible to store additional information about the SNPs in the file. This information consists of the following values
- Alternate allele frequency
- Minor allele frequency
- Imputation r-squared
It is possible to calculate the alternate and minor allele frequency without the imputation information file. It is also possible to estimate the imputation r-squared. See the vignette [Estimating Imputed R-squares](r2estimates.html) for more information on the r-squared estimate.
The value for bdoptions is a vector of character values that can be "aaf", "maf", "rsq", or any combination of these values. The values indicate to calculate the alternate allele frequency, minor allele frequency, and imputation r-squared, respectively.
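As a sketch (not evaluated here), the following would convert *set3b.chr.imp* and calculate all three values during the conversion. The calculated values appear in the <span style="font-family:Courier">snpinfo</span> element returned by <span style="font-family:Courier">getbdinfo</span>.
``` {r, eval = FALSE}
bdfile3b_opts <- tempfile()
gentobd(genfiles = c(gen3bchrfile, sample3bfile),
        bdfiles = bdfile3b_opts,
        bdoptions = c("aaf", "maf", "rsq"))
getbdinfo(bdfiles = bdfile3b_opts)$snpinfo
```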
# Additional routines
The following routines are available for accessing information contained in GEN files
- <span style="font-family:Courier">getgeninfo</span>
- <span style="font-family:Courier">genapply</span>
## getgeninfo
The <span style="font-family:Courier">getgeninfo</span> routine returns information about a GEN file. For more information about the data returned see [Genetic File Information](geneticfileinfo.html). This information needs to be passed to genapply so it can efficiently read the GEN file.
The parameters passed to <span style="font-family:Courier">getgeninfo</span> are
- <span style="font-family:Courier">genfiles</span> - Name of GEN file and the optional sample file.
- <span style="font-family:Courier">snpcolumns</span> - Columns containing the values for chromosome, SNP ID, location, reference allele, and alternate allele.
- <span style="font-family:Courier">startcolumn</span> - Column where genotype probabilities start.
- <span style="font-family:Courier">impformat</span> - Number of genotype probabilities for each subject.
- <span style="font-family:Courier">chromosome</span> - Optional chromosome value, provided if the chromosome is not included in the GEN file
- <span style="font-family:Courier">header</span> - Vector of one or two logical values indicating if GEN and sample files have headers respectively.
- <span style="font-family:Courier">gz</span> - Logical value indicating if GEN file is compressed.
- <span style="font-family:Courier">index</span> - Logical value indicating if the GEN file is to be indexed.
- <span style="font-family:Courier">snpidformat</span> - Format to create the SNP ID in.
- <span style="font-family:Courier">sep</span> - Separator used in GEN file.
All of these parameters have the same meaning as in the <span style="font-family:Courier">gentobd</span> routine above. There is one additional parameter, <span style="font-family:Courier">index</span>. This is a logical value indicating if the GEN file should be indexed for quicker reading. This is useful when using <span style="font-family:Courier">genapply</span>. However, the <span style="font-family:Courier">index</span> parameter cannot be TRUE if the file is compressed.
## genapply
The <span style="font-family:Courier">genapply</span> routine applies a function to all SNPs in the GEN file. The routine returns a list with length equal to the number of SNPs in the GEN file. Each element in the list is the value returned by the user supplied function. The routine takes the following parameters.
- <span style="font-family:Courier">geninfo</span> - list with information about the GEN file returned by <span style="font-family:Courier">getgeninfo</span>.
- <span style="font-family:Courier">func</span> - user supplied function to be applied to each SNP in the GEN file.
- <span style="font-family:Courier">...</span> - additional parameters needed by the user supplied function
The user supplied function must have the following parameters.
- <span style="font-family:Courier">dosage</span> - A numeric vector with the dosage values for each subject.
- <span style="font-family:Courier">p0</span> - A numeric vector with the probabilities the subject has no alternate alleles for each subject.
- <span style="font-family:Courier">p1</span> - A numeric vector with the probabilities the subject has one alternate allele for each subject.
- <span style="font-family:Courier">p2</span> - A numeric vector with the probabilities the subject has two alternate alleles for each subject.
The user supplied function can have other parameters. These parameters need to be passed to the <span style="font-family:Courier">genapply</span> routine.
There is a function in the <span style="font-family:Courier">BinaryDosage</span> package named <span style="font-family:Courier">getaaf</span> that calculates the alternate allele frequencies and is in the format needed by the <span style="font-family:Courier">genapply</span> routine. The following uses <span style="font-family:Courier">getaaf</span> to calculate the alternate allele frequency for each SNP in the *set3b.chr.imp* file using the <span style="font-family:Courier">genapply</span> routine.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
gen3bchrfile <- system.file("extdata", "set3b.chr.imp", package = "BinaryDosage")
sample3bfile <- system.file("extdata", "set3b.sample", package = "BinaryDosage")
geninfo <- getgeninfo(genfiles = c(gen3bchrfile, sample3bfile), index = TRUE)
aaf <- unlist(genapply(geninfo = geninfo, getaaf))
altallelefreq <- data.frame(SNP = geninfo$snps$snpid, aafcalc = aaf)
knitr::kable(altallelefreq, caption = "Calculated aaf", digits = 3)
```
---
title: "Using VCF files"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Using VCF Files}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette documents the functions in the BinaryDosage package that convert VCF files to binary dosage files.
**Note:** The examples below use functions to access information in binary dosage files. Information about these functions can be found in the vignette [Using Binary Dosage Files](usingbdfiles.html). Data returned by the function <span style="font-family:Courier">getbdinfo</span> contains information about a binary dosage file. Information on the data returned by <span style="font-family:Courier">getbdinfo</span> can be found in the vignette [Genetic File Information](geneticfileinfo.html).
```{r setup, echo=FALSE}
library(BinaryDosage)
```
# Introduction
VCF files are a useful way to store genetic data. They have a well defined format and can be easily parsed. The output files returned from the imputation software [minimac](https://genome.sph.umich.edu/wiki/Minimac) are in this format. The minimac software also returns an information file that is supported by the BinaryDosage package. The [Michigan Imputation Server](https://imputationserver.sph.umich.edu/index.html) uses minimac for imputation and returns VCF and information files. Functions in the BinaryDosage package have default settings to use the files returned from minimac.
Uncompressed VCF files are text files and can be very large, 100s of GB. Because of this they are quite often compressed. Files returned from the Michigan Imputation Server are compressed using gzip. This makes the files much smaller but greatly increases the read time. The BinaryDosage package supports reading of gzip compressed VCF files.
The BinaryDosage package was originally designed for use with files returned from the Michigan Imputation Server. It was quickly learned that if any manipulation was done to these files by various tools such as [vcftools](https://vcftools.github.io), the conversion routine in the BinaryDosage package would not work. The routine was modified to support additional VCF file formats.
The BinaryDosage package has a routine to convert VCF files into a binary format that maintains the dosages, genotype probabilities, and imputation statistics. This results in a file about 10-15% the size of the uncompressed VCF file with much faster, 200-300x, read times. In comparison, using gzip to compress the VCF file reduces the size of the file to about 5% of its original size but makes run times slower.
Routines were written to help debug the conversion routine. It was discovered these routines were quite useful for accessing data in the VCF file and are now included in the package. This document contains instructions on how to use these routines.
# Example files
There are several sample files included with the BinaryDosage package. The file names will be found by using the <span style="font-family:Courier">system.file</span> command in the examples. This will be used many times in the examples.
The binary dosage files created will be temporary files. They will be created using the <span style="font-family:Courier">tempfile</span> command. This will also be used many times in the examples. All output files will use the default format of 4. For information on other formats see the vignette [Binary Dosage Formats](bdformats.html).
# Converting a VCF file to the Binary Dosage format
The <span style="font-family:Courier">vcftobd</span> routine converts VCF files to the binary dosage format. Many different formats of VCF files are supported by the BinaryDosage package. The following sections show how to convert VCF files in various formats to the binary dosage format.
## Minimac files
Since the binary dosage format was initially created for use with files produced by minimac, the default options for calling the routine to convert a VCF file to a binary dosage format are for this type of file.
### Uncompressed VCF files
Uncompressed VCF files are the easiest to convert to the binary dosage format since the default values for <span style="font-family:Courier">vcftobd</span> are set for this format.
#### No imputation information file
An imputation information file is not required to convert a VCF file into the binary dosage format. In this case, the parameter value for <span style="font-family:Courier">vcffiles</span> is set to the VCF file name.
The following commands convert the VCF file *set1a.vcf* into the binary dosage format.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
bdfile1a_woinfo <- tempfile()
vcftobd(vcffiles = vcf1afile, bdfiles = bdfile1a_woinfo)
```
#### Using the imputation information file
The minimac program returns an imputation information file that can be passed to the <span style="font-family:Courier">vcftobd</span> routine. This is done by setting the parameter <span style="font-family:Courier">vcffiles</span> to a vector of characters containing the VCF and the imputation information file names.
The following commands convert the VCF file *set1a.vcf* into the binary dosage format using the information file *set1a.info*.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
vcf1ainfo <- system.file("extdata", "set1a.info", package = "BinaryDosage")
bdfile1a_winfo <- tempfile()
vcftobd(vcffiles = c(vcf1afile, vcf1ainfo), bdfiles = bdfile1a_winfo)
```
The differences between the two binary dosage datasets can be checked by running the <span style="font-family:Courier">getbdinfo</span> routine on both files. The value of <span style="font-family:Courier">snpinfo</span> in the list returned from <span style="font-family:Courier">getbdinfo</span> will be empty for the first file and will contain the imputation information for the second file.
The following commands show that the first file does not contain any imputation information and the second one does. The imputation information for the second file is converted to a table for easier display.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
bdinfo1a_woinfo <- getbdinfo(bdfiles = bdfile1a_woinfo)
bdinfo1a_woinfo$snpinfo
bdinfo1a_winfo <- getbdinfo(bdfiles = bdfile1a_winfo)
knitr::kable(data.frame(bdinfo1a_winfo$snpinfo), caption = "bdinfo1a_winfo$snpinfo")
```
### Compressed VCF files
VCF files can be quite large, 100s of GB. Because of this they are often compressed. The function <span style="font-family:Courier">vcftobd</span> supports VCF files compressed using gzip by adding the option <span style="font-family:Courier">gz = TRUE</span> to the function call. The compressed file can also be converted along with an imputation information file. The imputation information file must **NOT** be compressed.
The following code reads in the data from the compressed VCF file *set1a.vcf.gz*. This is the *set1a.vcf* file after it has been compressed using gzip. The file will be read in twice, once without the imputation information file and once with the imputation information file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile_gz <- system.file("extdata", "set1a.vcf.gz", package = "BinaryDosage")
vcf1ainfo <- system.file("extdata", "set1a.info", package = "BinaryDosage")
bdfile1a_woinfo_gz <- tempfile()
vcftobd(vcffiles = vcf1afile_gz, bdfiles = bdfile1a_woinfo_gz)
bdfile1a_winfo_gz <- tempfile()
vcftobd(vcffiles = c(vcf1afile_gz, vcf1ainfo), bdfiles = bdfile1a_winfo_gz)
```
### Checking the files
The four binary dosage files created above should all have the same dosage and genotype probabilities in them. The following code calculates the alternate allele frequencies for each of the binary dosage files using the <span style="font-family:Courier">bdapply</span> function. The results are then displayed in a table showing the alternate allele frequencies are the same for each file. The value for SNPID was taken from the list returned from <span style="font-family:Courier">getbdinfo</span>.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
bdinfo1a_woinfo_gz <- getbdinfo(bdfiles = bdfile1a_woinfo_gz)
bdinfo1a_winfo_gz <- getbdinfo(bdfiles = bdfile1a_winfo_gz)
aaf1a_woinfo <- unlist(bdapply(bdinfo = bdinfo1a_woinfo, getaaf))
aaf1a_winfo <- unlist(bdapply(bdinfo = bdinfo1a_winfo, getaaf))
aaf1a_woinfo_gz <- unlist(bdapply(bdinfo = bdinfo1a_woinfo_gz, getaaf))
aaf1a_winfo_gz <- unlist(bdapply(bdinfo = bdinfo1a_winfo_gz, getaaf))
aaf1a <- data.frame(SNPID = bdinfo1a_woinfo$snps$snpid,
aaf1a_woinfo = aaf1a_woinfo,
aaf1a_winfo = aaf1a_winfo,
aaf1a_woinfo_gz = aaf1a_woinfo_gz,
aaf1a_winfo_gz = aaf1a_winfo_gz)
knitr::kable(aaf1a, caption = "Alternate Allele Frequencies", digits = 4)
```
## Other VCF file formats
The <span style="font-family:Courier">vcftobd</span> function can support VCF files in formats other than those returned from minimac. This is done by examining the value of FORMAT for each SNP in the VCF file. The routine looks for the values "DS" and "GP", dosage and genotype probabilities, in the FORMAT column. If one or both of these values are found, the appropriate information is written to the binary dosage file.
The file *set2a.vcf* contains only the dosage values. The following code converts it to a binary dosage file. The <span style="font-family:Courier">getsnp</span> function is then used to extract the first SNP and display the values for the first 10 subjects.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf2afile <- system.file("extdata", "set2a.vcf", package = "BinaryDosage")
bdfile2a <- tempfile()
vcftobd(vcffiles = vcf2afile, bdfiles = bdfile2a)
bdinfo2a <- getbdinfo(bdfiles = bdfile2a)
snp1_2a <- data.frame(getsnp(bdinfo = bdinfo2a, snp = 1L, dosageonly = FALSE))
snp1 <- cbind(SubjectID = bdinfo2a$samples$sid, snp1_2a)
knitr::kable(snp1[1:10,], caption = "Dosage and Genotype Probabilities")
```
## Other vcftobd options
There are other options for <span style="font-family:Courier">vcftobd</span>. These options affect how the information is written to the binary dosage file.
### format and subformat options
The format and subformat options determine the format of the binary dosage files. These formats are documented in [Binary Dosage Formats](bdformats.html).
### snpidformat
The <span style="font-family:Courier">snpidformat</span> option specifies how the SNP ID is written to the binary dosage file. The default value is 0. This tells the code to use the SNP IDs that are in the VCF file. Other values create a SNP ID from the chromosome, location, reference allele, and alternate allele values.
When the snpidformat is set to 1, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location</span>
When the snpidformat is set to 2, the SNP ID is written in the format
<span style="font-family:Courier">Chromosome:Location:Reference Allele:Alternate Allele</span>
When the snpidformat is set to -1, the SNP ID is not written to the binary dosage file. When the binary dosage file is read, the SNP ID is generated using the format for snpidformat equal to 2. This reduces the size of the binary dosage file.
**Note:** If the SNP IDs in the VCF are in snpidformat 1 or 2, the code recognizes this and writes the smaller binary dosage files.
**Note:** If the SNP IDs in the VCF are in snpidformat 2, and the snpidformat option is set to 1, an error will be returned. This is because of possible information loss and the binary dosage file does not change.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1brsfile <- system.file("extdata", "set1b_rssnp.vcf", package = "BinaryDosage")
bdfile1b.snpid0 <- tempfile()
bdfile1b.snpid1 <- tempfile()
bdfile1b.snpid2 <- tempfile()
bdfile1b.snpidm1 <- tempfile()
vcftobd(vcffiles = vcf1brsfile, bdfiles = bdfile1b.snpid0)
vcftobd(vcffiles = vcf1brsfile, bdfiles = bdfile1b.snpid1, snpidformat = 1)
vcftobd(vcffiles = vcf1brsfile, bdfiles = bdfile1b.snpid2, snpidformat = 2)
vcftobd(vcffiles = vcf1brsfile, bdfiles = bdfile1b.snpidm1, snpidformat = -1)
bdinfo1b.snpid0 <- getbdinfo(bdfiles = bdfile1b.snpid0)
bdinfo1b.snpid1 <- getbdinfo(bdfiles = bdfile1b.snpid1)
bdinfo1b.snpid2 <- getbdinfo(bdfiles = bdfile1b.snpid2)
bdinfo1b.snpidm1 <- getbdinfo(bdfiles = bdfile1b.snpidm1)
snpnames <- data.frame(format0 = bdinfo1b.snpid0$snps$snpid,
format1 = bdinfo1b.snpid1$snps$snpid,
format2 = bdinfo1b.snpid2$snps$snpid,
formatm1 = bdinfo1b.snpidm1$snps$snpid)
knitr::kable(snpnames, caption = "SNP Names by Format")
```
### bdoptions
When using binary dosage format 4.x it is possible to store additional information about the SNPs in the file. This information consists of the following values
- Alternate allele frequency
- Minor allele frequency
- Average call
- Imputation r-squared
These values are normally provided in the imputation information file. However, it is possible to calculate the alternate and minor allele frequencies without the imputation information file. This can be useful when a subset of subjects is extracted from the VCF file that was returned from minimac. It is also possible to estimate the imputation r-squared. See the vignette [Estimating Imputed R-squares](r2estimates.html) for more information on the r-squared estimate.
The value for bdoptions is a vector of character values that can be "aaf", "maf", "rsq", or any combination of these values. The values indicate to calculate the alternate allele frequency, minor allele frequency, and imputation r-squared, respectively.
The following code converts the *set1a.vcf* file into a binary dosage file and calculates the alternate allele frequency and the minor allele frequency, and estimates the imputed r-squared. These values are then compared to the values in the binary dosage file that was generated by using the *set1a.info* information file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
bdfile1a_calcinfo <- tempfile()
vcftobd(vcffiles = vcf1afile, bdfiles = bdfile1a_calcinfo, bdoptions = c("aaf", "maf", "rsq"))
bdcalcinfo <- getbdinfo(bdfile1a_calcinfo)
snpinfo <- data.frame(aaf_info = bdinfo1a_winfo$snpinfo$aaf,
aaf_calc = bdcalcinfo$snpinfo$aaf,
maf_info = bdinfo1a_winfo$snpinfo$maf,
maf_calc = bdcalcinfo$snpinfo$maf,
rsq_info = bdinfo1a_winfo$snpinfo$rsq,
rsq_calc = bdcalcinfo$snpinfo$rsq)
knitr::kable(snpinfo, caption = "Information vs Calculated Information", digits = 3)
```
# Additional routines
The following routines are available for accessing information contained in VCF files
- <span style="font-family:Courier">getvcfinfo</span>
- <span style="font-family:Courier">vcfapply</span>
## getvcfinfo
The <span style="font-family:Courier">getvcfinfo</span> routine returns information about a VCF file. For more information about the data returned see [Genetic File Information](geneticfileinfo.html). This information needs to be passed to vcfapply so it can efficiently read the VCF file.
The parameters passed to <span style="font-family:Courier">getvcfinfo</span> are
- <span style="font-family:Courier">filenames</span>
- <span style="font-family:Courier">gz</span>
- <span style="font-family:Courier">index</span>
- <span style="font-family:Courier">snpidformat</span>
<span style="font-family:Courier">filenames</span> is a character vector that can contain up to two values. The first value is the name of the VCF file. The second value is optional and is the name of the imputation information file.
<span style="font-family:Courier">gz</span> is a logical value that indicates if the VCF file has been compressed using gzip. This only applies to the VCF file. Compression of the imputation information file is not supported.
<span style="font-family:Courier">index</span> is a logical value that indicates if the VCF file should be indexed. Indexing the VCF file takes time but greatly reduces the time needed to read the file. Indexing can only be done on uncompressed VCF files.
<span style="font-family:Courier">snpidformat</span> is an integer value from 0 to 2 that indicates which of the following formats to use for the SNP IDs (a short comparison sketch follows the list below).
- 0 - Value in VCF file
- 1 - <span style="font-family:Courier">Chromosome:Location(bp)</span>
- 2 - <span style="font-family:Courier">Chromosome:Location(bp):Reference Allele:Alternate Allele</span>
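The following short example reads the *set1a.vcf* file (also used later in this vignette) with two of the <span style="font-family:Courier">snpidformat</span> values and compares the resulting SNP IDs. This is only an illustrative sketch; depending on how the IDs are stored in the VCF file, the two formats may produce identical IDs.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
# Read the same file twice, keeping the IDs as stored (0) and rewriting them
# as chromosome:location:reference allele:alternate allele (2)
vcfinfosnpid0 <- getvcfinfo(vcf1afile, index = TRUE, snpidformat = 0)
vcfinfosnpid2 <- getvcfinfo(vcf1afile, index = TRUE, snpidformat = 2)
knitr::kable(head(data.frame(format0 = vcfinfosnpid0$snps$snpid,
                             format2 = vcfinfosnpid2$snps$snpid)),
             caption = "SNP IDs by snpidformat")
```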
## vcfapply
The <span style="font-family:Courier">vcfapply</span> routine applies a function to all SNPs in the VCF file. The routine returns a list with length equal to the number of SNPs in the VCF file. Each element in the list is the value returned by the user supplied function. The routine takes the following parameters.
- <span style="font-family:Courier">vcfinfo</span> - list with information about the VCF file returned by <span style="font-family:Courier">getvcfinfo</span>.
- <span style="font-family:Courier">func</span> - user supplied function to be applied to each SNP in the VCF file.
- <span style="font-family:Courier">...</span> - additional parameters needed by the user supplied function
The user supplied function must have the following parameters.
- <span style="font-family:Courier">dosage</span> - A numeric vector with the dosage values for each subject.
- <span style="font-family:Courier">p0</span> - A numeric vector with the probabilities the subject has no alternate alleles for each subject.
- <span style="font-family:Courier">p1</span> - A numeric vector with the probabilities the subject has one alternate allele for each subject.
- <span style="font-family:Courier">p2</span> - A numeric vector with the probabilities the subject has two alternate alleles for each subject.
The user supplied function can have other parameters. These parameters need to be passed to the <span style="font-family:Courier">vcfapply</span> routine; a sketch of passing an extra parameter this way appears after the example below.
There is a function in the <span style="font-family:Courier">BinaryDosage</span> package named <span style="font-family:Courier">getaaf</span> that calculates the alternate allele frequency and is in the format required by the <span style="font-family:Courier">vcfapply</span> routine. The following uses <span style="font-family:Courier">getaaf</span> to calculate the alternate allele frequency for each SNP in the *set1a.vcf* file using the <span style="font-family:Courier">vcfapply</span> routine and compares it to the aaf values in the information file.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
vcf1afile <- system.file("extdata", "set1a.vcf", package = "BinaryDosage")
vcfinfo <- getvcfinfo(vcf1afile, index = TRUE)
aaf <- unlist(vcfapply(vcfinfo = vcfinfo, getaaf))
altallelefreq <- data.frame(SNP = vcfinfo$snps$snpid, aafinfo = aaf1a_winfo, aafcalc = aaf)
knitr::kable(altallelefreq, caption = "Information vs Calculated aaf", digits = 3)
```
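The extra-parameter mechanism mentioned above can be sketched as follows. The function <span style="font-family:Courier">countabove</span> and its <span style="font-family:Courier">cutoff</span> argument are hypothetical names introduced only for this illustration; the cutoff is forwarded to the user supplied function through the <span style="font-family:Courier">...</span> argument of <span style="font-family:Courier">vcfapply</span>. The <span style="font-family:Courier">vcfinfo</span> object created above is reused.
``` {r, eval = T, echo = T, message = F, warning = F, tidy = T}
# Hypothetical user supplied function: count the subjects whose dosage
# exceeds a cutoff for each SNP. The cutoff value is passed through ...
countabove <- function(dosage, p0, p1, p2, cutoff) {
  sum(dosage > cutoff, na.rm = TRUE)
}
ncarriers <- unlist(vcfapply(vcfinfo = vcfinfo, func = countabove, cutoff = 1.5))
knitr::kable(data.frame(SNP = vcfinfo$snps$snpid, ncarriers = ncarriers),
             caption = "Subjects with dosage above 1.5")
```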
|
/scratch/gouwar.j/cran-all/cranData/BinaryDosage/vignettes/usingvcffiles.Rmd
|
#' Variable Selection For Binary Data Using The EM Algorithm
#'
#' Conducts EMVS analysis
#'
#' @param y responses in 0-1 coding
#' @param x X matrix
#' @param type probit or logit model
#' @param epsilon convergence tolerance for the EM algorithm
#' @param v0s spike variance tuning parameter, can be a vector
#' @param nu.1 slab variance tuning parameter
#' @param nu.gam tuning parameter
#' @param lambda.var tuning parameter
#' @param a tuning parameter for the prior on theta
#' @param b tuning parameter for the prior on theta
#' @param beta.initial starting values for the regression coefficients
#' @param sigma.initial starting value for sigma
#' @param theta.inital starting value for theta
#' @param temp temperature parameter applied as an exponent in the E-step
#' @param p number of predictor variables, defaults to ncol(x)
#' @param n sample size, defaults to nrow(x)
#' @param SDCD.length number of passes through the data used by the CSDCD.logistic solver (logit model only)
#'
#' @return A list containing the MAP coefficient estimates (\code{betas}), the posterior inclusion probabilities (\code{posts}), and the fitted \code{sigmas}, \code{thetas}, \code{intersects}, \code{niters}, and \code{v0s}
#'
#' @examples
#' #Generate data
#' set.seed(1)
#' n=25;p=500;pr=10;cor=.6
#' X=data.sim(n,p,pr,cor)
#'
#' #Randomly generate related beta coefficients from U(-1,1)
#' beta.Vec=rep(0,times=p)
#' beta.Vec[1:pr]=runif(pr,-1,1)
#'
#' y=scale(X%*%beta.Vec+rnorm(n,0,sd=sqrt(3)),center=TRUE,scale=FALSE)
#' prob=1/(1+exp(-y))
#' y.bin=t(t(ifelse(rbinom(n,1,prob)>0,1,0)))
#'
#' result.probit=BinomialEMVS(y=y.bin,x=X,type="probit")
#' result.logit=BinomialEMVS(y=y.bin,x=X,type="logit")
#'
#' which(result.probit$posts>.5)
#' which(result.logit$posts>.5)
#'
#' @export
BinomialEMVS=function(y,x,type="probit",epsilon=.0005,
v0s=ifelse(type=="probit",.025,5),
nu.1=ifelse(type=="probit",100,1000),
nu.gam=1,lambda.var=.001,a=1,b=ncol(x),
beta.initial=NULL,
sigma.initial=1,theta.inital=.5,temp=1,p=ncol(x),n=nrow(x),SDCD.length=50){
if(type=="probit")
{
result=
EMVS.probit(y=y,x=x,epsilon=epsilon,v0s=v0s,nu.1=nu.1,nu.gam=nu.gam,a=a,b=b,
beta.initial=beta.initial,sigma.initial=sigma.initial,theta.inital=theta.inital,
temp=temp,p=p,n=n)
}
if(type=="logit")
{
y[y==0]=-1
result=
EMVS.logit(y=y,x=x,epsilon=epsilon,v0s=v0s,nu.1=nu.1,nu.gam=nu.gam,a=a,b=b,
beta.initial=beta.initial,sigma.initial=sigma.initial,theta.inital=theta.inital,
temp=temp,p=p,n=n,lambda.var=lambda.var,SDCD.length=SDCD.length)
}
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/BinaryEMVS/R/BinomialEMVS.R
|
EMVS.logit=function(y,x,epsilon=.0005,v0s=5,nu.1=1000,nu.gam=1,lambda.var=.001,a=1,b=ncol(x),
beta.initial=rep(1,p),sigma.initial=1,theta.inital=.5,temp=1,p=ncol(x),n=nrow(x),SDCD.length=50){
if(length(beta.initial)==0){
beta.initial=rep(1,p)
}
L=length(v0s)
cat("\n")
cat("\n","Running Logit across v0's","\n")
cat(rep("",times=(L+1)),sep="|")
cat("\n")
intersects=numeric(L) # intersection points between posterior weighted spike and slab
log_post=numeric(L) # logarithm of the g-function models associated with v0s
sigma.Vec=numeric(L)
theta.Vec=numeric(L)
index.Vec=numeric(L)
beta.Vec=matrix(0,L,p) # L x p matrix of MAP beta estimates for each spike
p.Star.Vec=matrix(0,L,p) # L x p matrix of conditional posterior inclusion probabilities
for (i in (1:L)){
nu.0=v0s[i]
beta.Current=beta.initial
beta.new=beta.initial
sigma.EM=sigma.initial
theta.EM=theta.inital
eps=epsilon+1
iter.index=1
while(eps>epsilon && iter.index<20){
d.Star=rep(NA,p)
p.Star=rep(NA,p)
for(j in 1:p){
gam.one=dnorm(beta.Current[j],0,sigma.EM*sqrt(nu.1))**temp*theta.EM**temp
gam.zero=dnorm(beta.Current[j],0,sigma.EM*sqrt(nu.0))**temp*(1-theta.EM)**temp
p.Star[j]=gam.one/(gam.one+gam.zero)
d.Star[j]=((1-p.Star[j])/nu.0)+(p.Star[j]/nu.1)
}
#cat("max p.Star", max(p.Star),"\n")
#cat("d.Star.EM: ", d.Star[1:5],"\n")
############### M STEP #######################
d.Star.Mat=diag(d.Star,p)
beta.Current=rep(NA,p)
count.while=0
while(is.na(min(beta.Current))){
beta.Current=CSDCD.logistic(p,n,x,y,d.Star,SDCD.length)
count.while=count.while+1
#cat("This is count.while:",count.while,"\n")
}
######## VARIANCE FORMULA IS DIFFERENT FROM CONTINUOUS AND PROBIT CASE ###########
#sigma.EM[i]=sqrt((sum(log(1+exp(-y*x%*%beta.EM[i,])))+sum((sqrt(d.Star.Mat)%*%beta.EM[i,])**2)+lambda.var*nu.gam)/(n+p+nu.gam))
sigma.EM=sqrt((sum((sqrt(d.Star.Mat)%*%beta.Current)**2)+lambda.var*nu.gam)/(n+p+nu.gam+2))
theta.EM=(sum(p.Star)+a-1)/(a+b+p-2)
eps=max(abs(beta.new-beta.Current))
#print(eps)
beta.new=beta.Current
iter.index=iter.index+1
}
p.Star.Vec[i,]=p.Star
beta.Vec[i,]=beta.new
sigma.Vec[i]=sigma.EM
theta.Vec[i]=theta.EM
index.Vec[i]=iter.index
index=p.Star>0.5
c=sqrt(nu.1/v0s[i])
w=(1-theta.Vec[i])/theta.Vec[i]
if (w>0){
intersects[i]=sigma.Vec[i]*sqrt(v0s[i])*sqrt(2*log(w*c)*c^2/(c^2-1))}else{
intersects[i]=0}
cat("|",sep="")
}
list=list(betas=beta.Vec,intersects=intersects,sigmas=sigma.Vec,
niters=index.Vec,posts=p.Star.Vec,thetas=theta.Vec,v0s=v0s)
return(list)
}
|
/scratch/gouwar.j/cran-all/cranData/BinaryEMVS/R/EMVS.logit.R
|
EMVS.probit=function(y,x,epsilon=.0005,v0s=.025,nu.1=100,nu.gam=1,a=1,b=ncol(x),beta.initial=NULL,
sigma.initial=1,theta.inital=.5,temp=1,p=ncol(x),n=nrow(x)){ #set reasonable default for v0
if(length(beta.initial)==0){
beta.initial=rep(NaN,times=ncol(x))
while(sum(is.nan(beta.initial))>0)
{
beta.initial=CSDCD.logistic(p,n,x,ifelse(y==1,1,-1),rep(.0001,p),75)
}
}
scrap=0
cat("\n")
L=length(v0s)
cat("\n","Running Probit across v0's","\n")
cat(rep("",times=(L+1)),sep="|")
cat("\n")
intersects=numeric(L) # intersection points between posterior weighted spike and slab
log_post=numeric(L) # logarithm of the g-function models associated with v0s
sigma.Vec=numeric(L)
theta.Vec=numeric(L)
index.Vec=numeric(L)
beta.Vec=matrix(0,L,p) # L x p matrix of MAP beta estimates for each spike
p.Star.Vec=matrix(0,L,p) # L x p matrix of conditional posterior inclusion probabilities
for (i in (1:L)){
bail.count=0
nu.0=v0s[i]
beta.Current=beta.initial
beta.new=beta.initial
sigma.EM=sigma.initial
theta.EM=theta.inital
eps=epsilon+1
iter.index=1
while(eps>epsilon && iter.index<20){ #
if(bail.count>=3)
{
cat("\n","Iteration scrapped!","\n",sep="")
list=list(betas=NULL,intersects=NULL,sigmas=NULL,
niters=NULL,posts=NULL,thetas=NULL,v0s=NULL,scrap=1)
return(list)
}
#print(iter.index)
############### E STEP #######################
d.Star=rep(NA,p)
p.Star=rep(NA,p)
for(j in 1:p){
gam.one=dnorm(beta.Current[j],0,sqrt(nu.1))**temp*theta.EM**temp
gam.zero=dnorm(beta.Current[j],0,sqrt(nu.0))**temp*(1-theta.EM)**temp
p.Star[j]=gam.one/(gam.one+gam.zero)
d.Star[j]=((1-p.Star[j])/nu.0)+(p.Star[j]/nu.1)
}
############# LATENT VARIABLE FOR PROBIT ##################
M=rep(NA,n)
y.Star=rep(NA,n)
for(j in 1:n){
M[j]=ifelse(y[j]==0,-dnorm(-x[j,]%*%beta.Current,0,1)/pnorm(-x[j,]%*%beta.Current,0,1),
dnorm(-x[j,]%*%beta.Current,0,1)/(1-pnorm(-x[j,]%*%beta.Current,0,1)))
y.Star[j]=x[j,]%*%beta.Current+M[j]
}
############### M STEP #######################
d.Star.Mat=diag(d.Star,p)
beta.old=beta.new
######## Sherman-Morrison-Woodbury Formula ####################
#beta.EM[i,]=(solve(v.Star.Mat)-(solve(v.Star.Mat)%*%t(x))%*%solve(diag(1,n)+x%*%solve(v.Star.Mat)%*%t(x))%*%(x%*%solve(v.Star.Mat)))%*%(t(x)%*%y.Star)
#beta.Current=solve(t(x)%*%x+d.Star.Mat)%*%t(x)%*%y.Star
beta.Current=CSDCD.random(p,n,x,y.Star,d.Star)
#print(beta.Current[1:10])
if(sum(is.nan(beta.Current))==0)
{
sigma.EM=1
theta.EM=(sum(p.Star)+a-1)/(a+b+p-2)
eps=max(abs(beta.new-beta.Current))
beta.new=beta.Current
iter.index=iter.index+1
}
if(sum(is.nan(beta.Current))>0){
beta.Current=beta.old
bail.count=bail.count+1
}
}
p.Star.Vec[i,]=p.Star
beta.Vec[i,]=beta.new
sigma.Vec[i]=sigma.EM
theta.Vec[i]=theta.EM
index.Vec[i]=iter.index
index=p.Star>0.5
c=sqrt(nu.1/v0s[i])
w=(1-theta.Vec[i])/theta.Vec[i]
if (w>0){
intersects[i]=sigma.Vec[i]*sqrt(v0s[i])*sqrt(2*log(w*c)*c^2/(c^2-1))}else{
intersects[i]=0}
cat("|",sep="")
}
list=list(betas=beta.Vec,intersects=intersects,sigmas=sigma.Vec,
niters=index.Vec,posts=p.Star.Vec,thetas=theta.Vec,v0s=v0s,scrap=0)
return(list)
}
|
/scratch/gouwar.j/cran-all/cranData/BinaryEMVS/R/EMVS.probit.R
|
######SCDA ALGORITHM#############
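# Stochastic dual coordinate descent approximation to the ridge-type M-step
# solve (an alternative to inverting t(x)%*%x + diag(d.Star) directly; see the
# commented-out direct solve at the call site in EMVS.probit). Returns the
# average of the second half of the primal iterates. Interpretation inferred
# from the call sites; not part of the original documentation.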
CSDCD.random=function(num.var,num.sample,x.mat,y.vec,d.Star){
p=num.var
n=num.sample
design.Mat=x.mat
y.sim=y.vec
lambda=d.Star
################ DEFINE PARAMETERS ########################
thru.data=75
tot.it=thru.data*n
alpha=matrix(NA,tot.it,n)
nu.mat=matrix(NA,tot.it,p)
################ INITIAL VALUES ############################
alpha[1,]=rep(0,n)
nu.mat[1,]=(alpha[1,]%*%design.Mat*(1/lambda))/n
for(r in 1:thru.data){
random.seq=sample(1:n, n,replace = FALSE)
for(k in 1:n){
t=(r-1)*n+k
if(r==1){
t=(r-1)*n+k+1
}
delta.alpha=-(alpha[t-1,random.seq[k]]+t(nu.mat[t-1,])%*%design.Mat[random.seq[k],]-y.sim[random.seq[k]])/(1+(sum(design.Mat[random.seq[k],]**2*(1/lambda))/(2*n)))
alpha[t,]=alpha[t-1,]+delta.alpha
nu.mat[t,]=nu.mat[t-1,]+(delta.alpha/n)*design.Mat[random.seq[k],]*(1/lambda)
}
}
colMeans(nu.mat[(tot.it/2):tot.it,])
}
|
/scratch/gouwar.j/cran-all/cranData/BinaryEMVS/R/SCDA.R
|
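# Stochastic dual coordinate solver for the weighted logistic M-step used by
# EMVS.logit (and to generate starting values in EMVS.probit). it.num sets the
# number of passes through the data (tot.it = it.num * n single-observation
# updates); the average of the second half of the iterates is returned.
# Interpretation inferred from the call sites; not part of the original
# documentation.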
CSDCD.logistic=function(num.var,num.sample,x.mat,y.vec,d.Star,it.num){
save.count=0
p=num.var
n=num.sample
delta.lam=0
design.Mat=x.mat
y.sim=y.vec
lambda=d.Star
thru.data=it.num
tot.it=thru.data*n
alpha=matrix(NA,tot.it,n)
nu.mat=matrix(NA,tot.it,p)
################ INITIAL VALUES ############################
alpha[1,]=rep(-.0001,n)
initial.x.i=matrix(NA,n,p)
#initial.x.i=t(design.Mat)%*%(-y.sim)
#nu.mat[1,]=(sum(alpha[1,]*initial.x.i)*(1/lambda))/n
phi.star=function(b){b*log(b)+(1-b)*log(1-b)}
for(i in 1:n){
initial.x.i[i,]=design.Mat[i,]*y.sim[i]
}
nu.mat[1,]=t(((alpha[1,]%*%initial.x.i)*(1/lambda))/n)
for(t in 2:tot.it){
random.seq=sample(1:n, 1)
if(is.nan(delta.lam) | is.na(alpha[t-1,random.seq])){
t=t-1
}else{
x.i=-y.sim[random.seq]*design.Mat[random.seq,]
p.rand=t(x.i)%*%nu.mat[t-1,]
if(alpha[t-1,random.seq]>0 | is.nan(alpha[t-1,random.seq]) | alpha[t-1,random.seq]<(-1) ){
#print(alpha[t-1,random.seq])
alpha[t-1,random.seq]=-.0001
##print("alpha fixed")
}
q=-1/(1+exp(-p.rand))-alpha[t-1,random.seq]
min.val=(log(1+exp(p.rand))+phi.star(-alpha[t-1,random.seq])+p.rand*alpha[t-1,random.seq]+2*q^2)/(q^2*((4+(sum(x.i^2*(1/lambda))/n))))
s=min(1,min.val)
# if(is.nan(s)==TRUE){
# delta.lam=0
#save.count=save.count+1
# cat("saved ",save.count," times"," q = ",q,"\n",sep="")
# alpha[t-1,random.seq]=-.0001
# #t=t-1
# }else{
delta.lam=s*q
#}
#cat("delta = ",delta.lam,", s = ",s,", q = ",q,"\n",sep="")
alpha[t,-random.seq]=alpha[t-1,-random.seq]
alpha[t,random.seq]=alpha[t-1,random.seq]+delta.lam
nu.mat[t,]=nu.mat[t-1,]+((delta.lam*x.i)/(lambda*n))
}
}
colMeans(nu.mat[(tot.it/2):tot.it,])
#nu.mat[tot.it,]
}
|
/scratch/gouwar.j/cran-all/cranData/BinaryEMVS/R/SCDA.logistic.R
|
#' High Dimensional Correlated Data Generation
#'
#' Generates an high dimensional dataset with a subset of columns being related to the response, while
#' controlling the maximum correlation between related and unrelated variables.
#'
#' @param n sample size
#' @param p total number of variables
#' @param pr the number of variables related to the response
#' @param cor the maximum correlation between related and unrelated variables
#'
#' @return Returns an nxp matrix with the first pr columns having maximum correlation cor with
#' the remaining p-pr columns
#'
#' @examples
#' data=data.sim(n=100,p=1000,pr=10,cor=.6)
#' max(abs(cor(data))[abs(cor(data))<1])
#'
#' @export
data.sim=function(n=100,p=1000,pr=3,cor=.6)
{
beta.Vec=matrix(0,p,1)
k=pr #dimension of diagonal matrix D used to construct T
#INCLUDING LARGEST EIGENVALUE
alpha=0 #intercept term=0 w.l.o.g
A = matrix(runif(n*p),nrow=n,ncol=p)
related=A[,1:pr]
lamsq=cor^2/(1-cor^2) #largest eigenvalue of T'T
# D is the diagonal part of T
D = diag(c(sqrt(lamsq),sort(runif(k-1,0,sqrt(lamsq)),decreasing=TRUE)))
T=rbind(cbind(D,matrix(0,nrow=k,ncol=n-pr-k)),
cbind(matrix(0,nrow=pr-k,ncol=k),matrix(0,nrow=pr-k,ncol=n-pr-k)))
#T is block diagonal
# [ D 0 ]
# [ 0 0 ]
A.qr <- qr(A)
Q <- qr.Q(A.qr)
al=Q[,1:pr] #take the first pr columns of the G-S orthonormalization
be=Q[,(pr+1):n] #take the remaining n-pr columns of the G-S orthonormalization
#first construct the linear space that will generate the unrelated p-pr
#variables, then get the basis for this space
C=be+al%*%T
ro=nrow(C) #number of rows of this basis, "k" in our notes
co=ncol(C)
# my explanation for this is the picture I attached in the email.
b=kronecker(matrix(runif((p-pr)*(co),-.5,.5),p-pr,co),matrix(1,nrow=n,ncol=1))
C.rep=C[rep(1:n,times=p-pr),]
variables= apply(b * C.rep,1,sum)
unrelated=matrix(variables,nrow=n,ncol=p-pr,byrow=F)
design.Mat = cbind(related, unrelated)
########## STANDARDIZE DESIGN MATRIX################
design.Mat=scale(design.Mat,center=TRUE,scale=TRUE)
return(design.Mat)
}
|
/scratch/gouwar.j/cran-all/cranData/BinaryEMVS/R/data.sim.R
|
#' @title Binarybalancedcut
#'
#' @description Supports the data scientist in determining the optimal threshold for a binary classification problem by visualizing the sensitivity and specificity of the given model across candidate thresholds.
#'
#' @param probability,class Vectors of predicted probabilities and the corresponding true class labels.
#'
#' @return NULL
#'
#' @examples
#' set.seed(100)
#' disease <- sample(c("yes", "no"), 1000, replace = TRUE)
#' Probabilities <- sample(seq(0, 1, by = 0.01), 1000, replace = TRUE)
#' Binary_threshold(Probabilities, disease)
#'
#' @export
globalVariables(c("Probability", "Percentage","Legends"))
Binary_threshold<-function(probability,class){
Unique_Prob<-sort(unique(probability))
Unique_Prob<-Unique_Prob[-1]
df<-data.frame()
for(i in Unique_Prob){
cut<-ifelse(probability<i,0,1)
cm<-table(cut,class)
df<-rbind(df,data.frame(Sensitivity=as.numeric(cm[1]/(cm[1]+cm[2])),Specificity=as.numeric(cm[4]/(cm[3]+cm[4]))))
}
df$Probability<-Unique_Prob
test_data_long <- melt(df, id="Probability") # convert to long format
test_data_long$Legends<-test_data_long$variable
test_data_long$Percentage<-test_data_long$value
P1<-ggplot(data=test_data_long,
aes(x=Probability, y=Percentage, colour=Legends)) +
geom_line()+ggtitle("Binary Cut-Off Plot")
print(P1)
}
|
/scratch/gouwar.j/cran-all/cranData/BinarybalancedCut/R/BinarybalancedCut.R
|
#' BioCircos
#'
#' Interactive circular visualization of genomic data using ‘htmlwidgets’ and ‘BioCircos.js’
#'
#' @import htmlwidgets
#' @import RColorBrewer
#' @import plyr
#' @import jsonlite
#' @import grDevices
#'
#' @export
#' BioCircos widget
#'
#' Interactive circular visualization of genomic data using 'htmlwidgets' and 'BioCircos.js'
#'
#' @param tracklist A list of tracks to display.
#' @param genome A list of chromosome lengths to be used as reference for the visualization or 'hg19' to use
#' the chromosomes 1 to 22 and the sex chromosomes according to the hg19 reference.
#' @param yChr A logical stating if the Y chromosome should be displayed. Used only when genome is set to 'hg19'.
#' @param genomeFillColor The color to display in each chromosome. Can be a RColorBrewer palette name used to
#' generate one color per chromosome, or a character object or vector of character objects stating RGB values in hexadecimal
#' format or base R colors. If the vector is shorter than the reference genome, values will be repeated.
#' @param chrPad Distance between chromosomes.
#'
#' @param displayGenomeBorder,genomeBorderColor,genomeBorderSize Should the reference genome have borders?
#' If yes specify the color, in RGB hexadecimal format, and the thickness.
#'
#' @param genomeTicksDisplay,genomeTicksLen,genomeTicksColor,genomeTicksTextSize,genomeTicksTextColor,genomeTicksScale
#' Should the reference genome have ticks? If so, of what length and color (in hexadecimal RGB format), with labels
#' of what font size and color, and spaced by how many bases?
#' @param genomeLabelDisplay,genomeLabelTextSize,genomeLabelTextColor,genomeLabelDx,genomeLabelDy,genomeLabelOrientation
#' Should the reference genome have labels on each chromosome, in which font size and color? Moreover rotation
#' and radius shifts for the label texts can be added, and the angle between the radius and the label changed.
#'
#' @param zoom Is zooming and moving in the visualization allowed?
#'
#' @param TEXTModuleDragEvent Are text annotations draggable?
#'
#' @param SNPMouseOverDisplay Display the tooltip when the mouse hovers over a SNP point.
#' @param SNPMouseOverColor Color of the SNP point when hovered by the mouse, in hexadecimal RGB format.
#' @param SNPMouseOverCircleSize Size of the SNP point when hovered by the mouse.
#' @param SNPMouseOverCircleOpacity Opacity of the SNP point when hovered by the mouse.
#'
#' @param SNPMouseOutDisplay Hide tooltip when mouse is not hovering a SNP point anymore.
#' @param SNPMouseOutColor Color of the SNP point when mouse is not hovering a SNP point anymore, in hexadecimal
#' RGB format. To revert back to original color, use the value "none".
#'
#' @param SNPMouseOverTooltipsHtml01 Label displayed in tooltip in first position, before chromosome number.
#' @param SNPMouseOverTooltipsHtml02 Label displayed in tooltip in second position, before genomic position.
#' @param SNPMouseOverTooltipsHtml03 Label displayed in tooltip in third position, before value.
#' @param SNPMouseOverTooltipsHtml04 Label displayed in tooltip in fourth position, before SNP labels if any.
#' @param SNPMouseOverTooltipsHtml05 Label displayed in tooltip in fifth position, after SNP labels if any.
#' @param SNPMouseOverTooltipsBorderWidth The thickness of the tooltip borders, with units specified (such as em or px).
#'
#' @param ARCMouseOverDisplay Display the tooltip when the mouse hovers over an arc.
#' @param ARCMouseOverColor Color of the arc when hovered by the mouse, in hexadecimal RGB format.
#' @param ARCMouseOverArcOpacity Opacity of the arc when hovered by the mouse.
#'
#' @param ARCMouseOutDisplay Hide tooltip when mouse is not hovering an arc anymore.
#' @param ARCMouseOutColor Color of the arc when mouse is not hovering an arc anymore, in hexadecimal
#' RGB format. To revert back to original color, use the value "none".
#'
#' @param ARCMouseOverTooltipsHtml01 Label displayed in tooltip in first position, before chromosome number.
#' @param ARCMouseOverTooltipsHtml02 Label displayed in tooltip in second position, before genomic position.
#' @param ARCMouseOverTooltipsHtml03 Label displayed in tooltip in third position, before value.
#' @param ARCMouseOverTooltipsHtml04 Label displayed in tooltip in fourth position, before ARC labels if any.
#' @param ARCMouseOverTooltipsHtml05 Label displayed in tooltip in fifth position, after ARC labels if any.
#' @param ARCMouseOverTooltipsBorderWidth The thickness of the tooltip borders, with units specified (such as em or px).
#'
#' @param CNVMouseOutDisplay Hide tooltip when mouse is not hovering an arc anymore.
#' @param CNVMouseOutColor Color of the line when mouse is not hovering anymore, in hexadecimal
#' RGB format. To revert back to original color, use the value "none".
#' @param CNVMouseOutArcOpacity Opacity of the arc when not hovered by the mouse anymore.
#' @param CNVMouseOutArcStrokeColor Color of the arc's stroke when not hovered by the mouse anymore.
#' @param CNVMouseOutArcStrokeWidth Width of the arc's stroke when not hovered by the mouse anymore.
#'
#' @param CNVMouseOverDisplay Display the tooltip when the mouse hovers over an arc.
#' @param CNVMouseOverColor Color of the arc when hovered by the mouse, in hexadecimal RGB format.
#' @param CNVMouseOverArcOpacity Opacity of the arc when hovered by the mouse.
#' @param CNVMouseOverArcStrokeColor Color of the arc's stroke when hovered by the mouse, in hexadecimal RGB format.
#' @param CNVMouseOverArcStrokeWidth Width of the arc's stroke when hovered by the mouse.
#' @param CNVMouseOverTooltipsHtml01 Label displayed in tooltip in first position, before chromosome number.
#' @param CNVMouseOverTooltipsHtml02 Label displayed in tooltip in second position, before starting position.
#' @param CNVMouseOverTooltipsHtml03 Label displayed in tooltip in third position, before end position.
#' @param CNVMouseOverTooltipsHtml04 Label displayed in tooltip in fourth position, before value.
#' @param CNVMouseOverTooltipsHtml05 Label displayed in tooltip in fifth position, after value.
#' @param CNVMouseOverTooltipsBorderWidth The thickness of the tooltip borders, with units specified (such as em or px).
#'
#' @param LINEMouseOverDisplay Display the tooltip when the mouse hovers over a line.
#' @param LINEMouseOverLineOpacity Opacity of the line when hovered by the mouse.
#' @param LINEMouseOverLineStrokeColor Color of the line when hovered by the mouse, in hexadecimal RGB format.
#' @param LINEMouseOverLineStrokeWidth Width of the line when hovered by the mouse.
#' @param LINEMouseOverTooltipsHtml01 Label displayed in tooltip.
#' @param LINEMouseOverTooltipsBorderWidth The thickness of the tooltip borders, with units specified (such as em or px).
#'
#' @param LINEMouseOutDisplay Hide tooltip when mouse is not hovering a line anymore.
#' @param LINEMouseOutLineOpacity Opacity of the line when mouse is not hovering a link anymore.
#' @param LINEMouseOutLineStrokeColor Color of the line when mouse is not hovering anymore, in hexadecimal
#' RGB format. To revert back to original color, use the value "none".
#' @param LINEMouseOutLineStrokeWidth Thickness of the line when mouse is not hovering a link anymore.
#'
#' @param LINKMouseOverDisplay Display the tooltip when the mouse hovers over a link.
#' @param LINKMouseOverStrokeColor Color of the link when hovered.
#' @param LINKMouseOverOpacity Opacity of the link when hovered.
#' @param LINKMouseOverStrokeWidth Thickness of the link when hovered.
#'
#' @param LINKMouseOutDisplay Hide tooltip when mouse is not hovering a link anymore.
#' @param LINKMouseOutStrokeColor Color of the link when mouse is not hovering anymore, in hexadecimal
#' RGB format. To revert back to original color, use the value "none".
#' @param LINKMouseOutStrokeWidth Thickness of the link when mouse is not hovering a link anymore.
#'
#' @param LINKMouseOverTooltipsHtml01 Label displayed in tooltip in first position, before label.
#' @param LINKMouseOverTooltipsHtml02 Label displayed in tooltip in second position, after label.
#' @param LINKMouseOverTooltipsBorderWidth The thickness of the tooltip borders, with units specified (such as em or px).
#'
#' @param BARMouseOverDisplay Display the tooltip when the mouse hovers over a bar.
#' @param BARMouseOverColor Color of the bar when hovered.
#' @param BARMouseOverOpacity Opacity of the bar when hovered.
#' @param BARMouseOverTooltipsHtml01 Label displayed in tooltip in first position, before chromosome number.
#' @param BARMouseOverTooltipsHtml02 Label displayed in tooltip in second position, before start position.
#' @param BARMouseOverTooltipsHtml03 Label displayed in tooltip in third position, before end position.
#' @param BARMouseOverTooltipsHtml04 Label displayed in tooltip in fourth position, before labels if any.
#' @param BARMouseOverTooltipsHtml05 Label displayed in tooltip in fifth position, before values.
#' @param BARMouseOverTooltipsHtml06 Label displayed in tooltip in sixth position, after values.
#' @param BARMouseOverTooltipsBorderWidth The thickness of the tooltip borders, with units specified (such as em or px).
#'
#' @param BARMouseOutDisplay Hide tooltip when mouse is not hovering a bar anymore.
#' @param BARMouseOutColor Color of the bar when mouse is not hovering anymore, in hexadecimal
#' RGB format. To revert back to original color, use the value "none".
#'
#' @param HEATMAPMouseOutDisplay Hide tooltip when mouse is not hovering a box anymore.
#' @param HEATMAPMouseOutColor Color of the box when mouse is not hovering anymore, in hexadecimal
#' RGB format. To revert back to original color, use the value "none".
#'
#' @param HEATMAPMouseOverDisplay Display the tooltip when the mouse hovers over a box.
#' @param HEATMAPMouseOverColor Color of the box when hovered.
#' @param HEATMAPMouseOverOpacity Opacity of the box when hovered.
#' @param HEATMAPMouseOverTooltipsHtml01 Label displayed in tooltip in first position, before chromosome number.
#' @param HEATMAPMouseOverTooltipsHtml02 Label displayed in tooltip in second position, before start position.
#' @param HEATMAPMouseOverTooltipsHtml03 Label displayed in tooltip in third position, before end position.
#' @param HEATMAPMouseOverTooltipsHtml04 Label displayed in tooltip in fourth position, before labels if any.
#' @param HEATMAPMouseOverTooltipsHtml05 Label displayed in tooltip in fifth position, before values.
#' @param HEATMAPMouseOverTooltipsHtml06 Label displayed in tooltip in sixth position, after values.
#' @param HEATMAPMouseOverTooltipsBorderWidth The thickness of the tooltip borders, with units specified (such as em or px).
#'
#' @param width,height Must be a valid CSS unit (like \code{'100\%'},
#' \code{'400px'}, \code{'auto'}) or a number, which will be coerced to a
#' string and have \code{'px'} appended.
#'
#' @param elementId the name of the HTML id to be used to contain the visualization.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(yChr = FALSE, chrPad = 0, genomeFillColor = "Blues")
#'
#' @export
BioCircos <- function(tracklist = BioCircosTracklist(),
genome = "hg19", yChr = TRUE,
genomeFillColor = "Spectral",
chrPad = 0.04,
displayGenomeBorder = TRUE, genomeBorderColor = "#000", genomeBorderSize = 0.5,
genomeTicksDisplay = TRUE, genomeTicksLen = 5, genomeTicksColor = "#000",
genomeTicksTextSize = "0.6em", genomeTicksTextColor = "#000", genomeTicksScale = 30000000,
genomeLabelDisplay = TRUE, genomeLabelTextSize = "10pt", genomeLabelTextColor = "#000",
genomeLabelDx = 0.0, genomeLabelDy = 10, genomeLabelOrientation = 0,
zoom = TRUE, TEXTModuleDragEvent = FALSE,
SNPMouseOverDisplay = TRUE, SNPMouseOverColor = "#FF0000", SNPMouseOverCircleSize = 3,
SNPMouseOverCircleOpacity = 0.9,
SNPMouseOutDisplay = TRUE, SNPMouseOutColor = "none",
SNPMouseOverTooltipsHtml01 = "Chromosome: ", SNPMouseOverTooltipsHtml02 = "<br/>Position: ",
SNPMouseOverTooltipsHtml03 = "<br/>Value: ", SNPMouseOverTooltipsHtml04 = "<br/>", SNPMouseOverTooltipsHtml05 = "",
SNPMouseOverTooltipsBorderWidth = "1px",
ARCMouseOverDisplay = TRUE, ARCMouseOverColor = "#FF0000",
ARCMouseOverArcOpacity = 0.9,
ARCMouseOutDisplay = TRUE, ARCMouseOutColor = "none",
ARCMouseOverTooltipsHtml01 = "Chromosome: ", ARCMouseOverTooltipsHtml02 = "<br/>Start: ",
ARCMouseOverTooltipsHtml03 = "<br/>End: ", ARCMouseOverTooltipsHtml04 = "<br/>", ARCMouseOverTooltipsHtml05 = "",
ARCMouseOverTooltipsBorderWidth = "1px",
LINKMouseOverDisplay = TRUE, LINKMouseOverStrokeColor = "#FF00FF",
LINKMouseOverOpacity = 0.9,
LINKMouseOutDisplay = TRUE, LINKMouseOutStrokeColor = "none",
LINKMouseOverTooltipsHtml01 = "Fusion: ", LINKMouseOverTooltipsHtml02 = "",
LINKMouseOverTooltipsBorderWidth = "1px", LINKMouseOverStrokeWidth = 5, LINKMouseOutStrokeWidth = "none",
BARMouseOutDisplay = TRUE, BARMouseOutColor = "none", BARMouseOverDisplay = TRUE, BARMouseOverColor = "#FF0000",
BARMouseOverOpacity = 0.9,
BARMouseOverTooltipsHtml01 = "Chromosome: ", BARMouseOverTooltipsHtml02 = "<br/>Start: ",
BARMouseOverTooltipsHtml03 = " End: ", BARMouseOverTooltipsHtml04 = "<br/>", BARMouseOverTooltipsHtml05 = "<br/>Value: ",
BARMouseOverTooltipsHtml06 = "", BARMouseOverTooltipsBorderWidth = "1px",
HEATMAPMouseOutDisplay = TRUE, HEATMAPMouseOutColor = "none", HEATMAPMouseOverDisplay = TRUE, HEATMAPMouseOverColor = "#FF0000",
HEATMAPMouseOverOpacity = 0.9,
HEATMAPMouseOverTooltipsHtml01 = "Chromosome: ", HEATMAPMouseOverTooltipsHtml02 = "<br/>Start: ",
HEATMAPMouseOverTooltipsHtml03 = " End: ", HEATMAPMouseOverTooltipsHtml04 = "<br/>", HEATMAPMouseOverTooltipsHtml05 = "<br/>Value: ",
HEATMAPMouseOverTooltipsHtml06 = "", HEATMAPMouseOverTooltipsBorderWidth = "1px",
LINEMouseOutDisplay = TRUE, LINEMouseOutLineOpacity = "none", LINEMouseOutLineStrokeColor = "none",
LINEMouseOutLineStrokeWidth = "none", LINEMouseOverDisplay = T, LINEMouseOverLineOpacity = 1,
LINEMouseOverLineStrokeColor = "#FF0000", LINEMouseOverLineStrokeWidth = "none", LINEMouseOverTooltipsHtml01 = "Line",
LINEMouseOverTooltipsBorderWidth = 0, CNVMouseOutDisplay = TRUE, CNVMouseOutColor = "none",
CNVMouseOutArcOpacity = 1.0, CNVMouseOutArcStrokeColor = "none", CNVMouseOutArcStrokeWidth = 0,
CNVMouseOverDisplay = TRUE, CNVMouseOverColor = "#FF0000", CNVMouseOverArcOpacity = 0.9, CNVMouseOverArcStrokeColor = "#F26223",
CNVMouseOverArcStrokeWidth = 3, CNVMouseOverTooltipsHtml01 = "Chromosome: ", CNVMouseOverTooltipsHtml02 = "<br>Start: ",
CNVMouseOverTooltipsHtml03 = "<br>End: ", CNVMouseOverTooltipsHtml04 = "<br>Value: ", CNVMouseOverTooltipsHtml05 = "",
CNVMouseOverTooltipsBorderWidth = "1px", width = NULL, height = NULL, elementId = NULL, ...) {
# If genome is a string, convert to corresponding chromosome lengths
if(is.character(genome)){
if(genome == "hg19"){
genome = list("1" = 249250621, #Hg19
"2" = 243199373,
"3" = 198022430,
"4" = 191154276,
"5" = 180915260,
"6" = 171115067,
"7" = 159138663,
"8" = 146364022,
"9" = 141213431,
"10" = 135534747,
"11" = 135006516,
"12" = 133851895,
"13" = 115169878,
"14" = 107349540,
"15" = 102531392,
"16" = 90354753,
"17" = 81195210,
"18" = 78077248,
"19" = 59128983,
"20" = 63025520,
"21" = 48129895,
"22" = 51304566,
"X" = 155270560)
if(yChr){ genome$"Y" = 59373566 }
}
else{
stop("\'genome\' parameter should be either a list of chromosome lengths or \'hg19\'.")
}
}
# If genomeFillColor is a palette, create corresponding color vector
genomeFillColor = .BioCircosColorCheck(genomeFillColor, length(genome), "genomeFillColor")
# forward options using x
x = list(
message = message,
tracklist = tracklist,
genome = genome,
genomeFillColor = genomeFillColor,
chrPad = chrPad,
displayGenomeBorder = displayGenomeBorder,
genomeBorderColor = genomeBorderColor,
genomeBorderSize = genomeBorderSize,
genomeTicksDisplay = genomeTicksDisplay,
genomeTicksLen = genomeTicksLen,
genomeTicksColor = genomeTicksColor,
genomeTicksTextSize = genomeTicksTextSize,
genomeTicksTextColor = genomeTicksTextColor,
genomeTicksScale = genomeTicksScale,
genomeLabelDisplay = genomeLabelDisplay,
genomeLabelTextSize = genomeLabelTextSize,
genomeLabelTextColor = genomeLabelTextColor,
genomeLabelDx = genomeLabelDx,
genomeLabelDy = genomeLabelDy,
genomeLabelOrientation = genomeLabelOrientation,
SNPMouseEvent = T,
SNPMouseClickDisplay = F,
SNPMouseClickColor = "red",
SNPMouseClickCircleSize = 4,
SNPMouseClickCircleOpacity = 1.0,
SNPMouseClickCircleStrokeColor = "#F26223",
SNPMouseClickCircleStrokeWidth = 0,
SNPMouseClickTextFromData = "fourth",
SNPMouseClickTextOpacity = 1.0,
SNPMouseClickTextColor = "red",
SNPMouseClickTextSize = 8,
SNPMouseClickTextPostionX = 1.0,
SNPMouseClickTextPostionY = 10.0,
SNPMouseClickTextDrag = T,
SNPMouseDownDisplay = F,
SNPMouseDownColor = "green",
SNPMouseDownCircleSize = 4,
SNPMouseDownCircleOpacity = 1.0,
SNPMouseDownCircleStrokeColor = "#F26223",
SNPMouseDownCircleStrokeWidth = 0,
SNPMouseEnterDisplay = F,
SNPMouseEnterColor = "yellow",
SNPMouseEnterCircleSize = 4,
SNPMouseEnterCircleOpacity = 1.0,
SNPMouseEnterCircleStrokeColor = "#F26223",
SNPMouseEnterCircleStrokeWidth = 0,
SNPMouseLeaveDisplay = F,
SNPMouseLeaveColor = "pink",
SNPMouseLeaveCircleSize = 4,
SNPMouseLeaveCircleOpacity = 1.0,
SNPMouseLeaveCircleStrokeColor = "#F26223",
SNPMouseLeaveCircleStrokeWidth = 0,
SNPMouseMoveDisplay = F,
SNPMouseMoveColor = "red",
SNPMouseMoveCircleSize = 2,
SNPMouseMoveCircleOpacity = 1.0,
SNPMouseMoveCircleStrokeColor = "#F26223",
SNPMouseMoveCircleStrokeWidth = 0,
SNPMouseOutDisplay = SNPMouseOutDisplay,
SNPMouseOutAnimationTime = 500,
SNPMouseOutColor = SNPMouseOutColor,
SNPMouseOutCircleSize = 2,
SNPMouseOutCircleOpacity = "none",
SNPMouseOutCircleStrokeColor = "red",
SNPMouseOutCircleStrokeWidth = 0,
SNPMouseUpDisplay = F,
SNPMouseUpColor = "grey",
SNPMouseUpCircleSize = 4,
SNPMouseUpCircleOpacity = 1.0,
SNPMouseUpCircleStrokeColor = "#F26223",
SNPMouseUpCircleStrokeWidth = 0,
SNPMouseOverDisplay = SNPMouseOverDisplay,
SNPMouseOverColor = SNPMouseOverColor,
SNPMouseOverCircleSize = SNPMouseOverCircleSize,
SNPMouseOverCircleOpacity = SNPMouseOverCircleOpacity,
SNPMouseOverCircleStrokeColor = "#F26223",
SNPMouseOverCircleStrokeWidth = 1,
SNPMouseOverTooltipsHtml01 = SNPMouseOverTooltipsHtml01,
SNPMouseOverTooltipsHtml02 = SNPMouseOverTooltipsHtml02,
SNPMouseOverTooltipsHtml03 = SNPMouseOverTooltipsHtml03,
SNPMouseOverTooltipsHtml04 = SNPMouseOverTooltipsHtml04,
SNPMouseOverTooltipsHtml05 = SNPMouseOverTooltipsHtml05,
SNPMouseOverTooltipsBorderWidth = SNPMouseOverTooltipsBorderWidth,
zoom = zoom,
TEXTModuleDragEvent = TEXTModuleDragEvent,
ARCMouseEvent = T,
ARCMouseClickDisplay = F,
ARCMouseClickColor = "red",
ARCMouseClickArcOpacity = 1.0,
ARCMouseClickArcStrokeColor = "#F26223",
ARCMouseClickArcStrokeWidth = 1,
ARCMouseClickTextFromData = "fourth",
ARCMouseClickTextOpacity = 1,
ARCMouseClickTextColor = "red",
ARCMouseClickTextSize = 8,
ARCMouseClickTextPostionX = 0,
ARCMouseClickTextPostionY = 0,
ARCMouseClickTextDrag = T,
ARCMouseDownDisplay = F,
ARCMouseDownColor = "green",
ARCMouseDownArcOpacity = 1.0,
ARCMouseDownArcStrokeColor = "#F26223",
ARCMouseDownArcStrokeWidth = 0,
ARCMouseEnterDisplay = F,
ARCMouseEnterColor = "yellow",
ARCMouseEnterArcOpacity = 1.0,
ARCMouseEnterArcStrokeColor = "#F26223",
ARCMouseEnterArcStrokeWidth = 0,
ARCMouseLeaveDisplay = F,
ARCMouseLeaveColor = "pink",
ARCMouseLeaveArcOpacity = 1.0,
ARCMouseLeaveArcStrokeColor = "#F26223",
ARCMouseLeaveArcStrokeWidth = 0,
ARCMouseMoveDisplay = F,
ARCMouseMoveColor = "red",
ARCMouseMoveArcOpacity = 1.0,
ARCMouseMoveArcStrokeColor = "#F26223",
ARCMouseMoveArcStrokeWidth = 0,
ARCMouseOutDisplay = ARCMouseOutDisplay,
ARCMouseOutAnimationTime = 500,
ARCMouseOutColor = ARCMouseOutColor,
ARCMouseOutArcOpacity = "none",
ARCMouseOutArcStrokeColor = "red",
ARCMouseOutArcStrokeWidth = 0,
ARCMouseUpDisplay = F,
ARCMouseUpColor = "grey",
ARCMouseUpArcOpacity = 1.0,
ARCMouseUpArcStrokeColor = "#F26223",
ARCMouseUpArcStrokeWidth = 0,
ARCMouseOverDisplay = ARCMouseOverDisplay,
ARCMouseOverColor = ARCMouseOverColor,
ARCMouseOverArcOpacity = ARCMouseOverArcOpacity,
ARCMouseOverArcStrokeColor = "#F26223",
ARCMouseOverArcStrokeWidth = 3,
ARCMouseOverTooltipsHtml01 = ARCMouseOverTooltipsHtml01,
ARCMouseOverTooltipsHtml02 = ARCMouseOverTooltipsHtml02,
ARCMouseOverTooltipsHtml03 = ARCMouseOverTooltipsHtml03,
ARCMouseOverTooltipsHtml04 = ARCMouseOverTooltipsHtml04,
ARCMouseOverTooltipsHtml05 = ARCMouseOverTooltipsHtml05,
ARCMouseOverTooltipsBorderWidth = ARCMouseOverTooltipsBorderWidth,
HISTOGRAMMouseEvent = T,
HISTOGRAMMouseClickDisplay = F,
HISTOGRAMMouseClickColor = "red",
HISTOGRAMMouseClickOpacity = 1.0,
HISTOGRAMMouseClickStrokeColor = "none",
HISTOGRAMMouseClickStrokeWidth = "none",
HISTOGRAMMouseDownDisplay = F,
HISTOGRAMMouseDownColor = "red",
HISTOGRAMMouseDownOpacity = 1.0,
HISTOGRAMMouseDownStrokeColor = "none",
HISTOGRAMMouseDownStrokeWidth = "none",
HISTOGRAMMouseEnterDisplay = F,
HISTOGRAMMouseEnterColor = "red",
HISTOGRAMMouseEnterOpacity = 1.0,
HISTOGRAMMouseEnterStrokeColor = "none",
HISTOGRAMMouseEnterStrokeWidth = "none",
HISTOGRAMMouseLeaveDisplay = F,
HISTOGRAMMouseLeaveColor = "red",
HISTOGRAMMouseLeaveOpacity = 1.0,
HISTOGRAMMouseLeaveStrokeColor = "none",
HISTOGRAMMouseLeaveStrokeWidth = "none",
HISTOGRAMMouseMoveDisplay = F,
HISTOGRAMMouseMoveColor = "red",
HISTOGRAMMouseMoveOpacity = 1.0,
HISTOGRAMMouseMoveStrokeColor = "none",
HISTOGRAMMouseMoveStrokeWidth = "none",
HISTOGRAMMouseOutDisplay = BARMouseOutDisplay,
HISTOGRAMMouseOutAnimationTime = 500,
HISTOGRAMMouseOutColor = BARMouseOutColor,
HISTOGRAMMouseOutOpacity = 1.0,
HISTOGRAMMouseOutStrokeColor = "none",
HISTOGRAMMouseOutStrokeWidth = "none",
HISTOGRAMMouseUpDisplay = F,
HISTOGRAMMouseUpColor = "red",
HISTOGRAMMouseUpOpacity = 1.0,
HISTOGRAMMouseUpStrokeColor = "none",
HISTOGRAMMouseUpStrokeWidth = "none",
HISTOGRAMMouseOverDisplay = BARMouseOverDisplay,
HISTOGRAMMouseOverColor = BARMouseOverColor,
HISTOGRAMMouseOverOpacity = BARMouseOverOpacity,
HISTOGRAMMouseOverStrokeColor = "none",
HISTOGRAMMouseOverStrokeWidth = "none",
HISTOGRAMMouseOverTooltipsHtml01 = BARMouseOverTooltipsHtml01,
HISTOGRAMMouseOverTooltipsHtml02 = BARMouseOverTooltipsHtml02,
HISTOGRAMMouseOverTooltipsHtml03 = BARMouseOverTooltipsHtml03,
HISTOGRAMMouseOverTooltipsHtml04 = BARMouseOverTooltipsHtml04,
HISTOGRAMMouseOverTooltipsHtml05 = BARMouseOverTooltipsHtml05,
HISTOGRAMMouseOverTooltipsHtml06 = BARMouseOverTooltipsHtml06,
HISTOGRAMMouseOverTooltipsPosition = "absolute",
HISTOGRAMMouseOverTooltipsBackgroundColor = "white",
HISTOGRAMMouseOverTooltipsBorderStyle = "solid",
HISTOGRAMMouseOverTooltipsBorderWidth = BARMouseOverTooltipsBorderWidth,
HISTOGRAMMouseOverTooltipsPadding = "3px",
HISTOGRAMMouseOverTooltipsBorderRadius = "3px",
HISTOGRAMMouseOverTooltipsOpacity = 0.8,
HEATMAPMouseEvent = T,
HEATMAPMouseClickDisplay = F,
HEATMAPMouseClickColor = "green",
HEATMAPMouseClickOpacity = 1.0,
HEATMAPMouseClickStrokeColor = "none",
HEATMAPMouseClickStrokeWidth = "none",
HEATMAPMouseDownDisplay = F,
HEATMAPMouseDownColor = "green",
HEATMAPMouseDownOpacity = 1.0,
HEATMAPMouseDownStrokeColor = "none",
HEATMAPMouseDownStrokeWidth = "none",
HEATMAPMouseEnterDisplay = F,
HEATMAPMouseEnterColor = "green",
HEATMAPMouseEnterOpacity = 1.0,
HEATMAPMouseEnterStrokeColor = "none",
HEATMAPMouseEnterStrokeWidth = "none",
HEATMAPMouseLeaveDisplay = F,
HEATMAPMouseLeaveColor = "green",
HEATMAPMouseLeaveOpacity = 1.0,
HEATMAPMouseLeaveStrokeColor = "none",
HEATMAPMouseLeaveStrokeWidth = "none",
HEATMAPMouseMoveDisplay = F,
HEATMAPMouseMoveColor = "green",
HEATMAPMouseMoveOpacity = 1.0,
HEATMAPMouseMoveStrokeColor = "none",
HEATMAPMouseMoveStrokeWidth = "none",
HEATMAPMouseOutDisplay = HEATMAPMouseOutDisplay,
HEATMAPMouseOutAnimationTime = 500,
HEATMAPMouseOutColor = HEATMAPMouseOutColor,
HEATMAPMouseOutOpacity = 1.0,
HEATMAPMouseOutStrokeColor = "none",
HEATMAPMouseOutStrokeWidth = "none",
HEATMAPMouseUpDisplay = F,
HEATMAPMouseUpColor = "green",
HEATMAPMouseUpOpacity = 1.0,
HEATMAPMouseUpStrokeColor = "none",
HEATMAPMouseUpStrokeWidth = "none",
HEATMAPMouseOverDisplay = HEATMAPMouseOverDisplay,
HEATMAPMouseOverColor = HEATMAPMouseOverColor,
HEATMAPMouseOverOpacity = HEATMAPMouseOverOpacity,
HEATMAPMouseOverStrokeColor = "none",
HEATMAPMouseOverStrokeWidth = "none",
HEATMAPMouseOverTooltipsHtml01 = HEATMAPMouseOverTooltipsHtml01,
HEATMAPMouseOverTooltipsHtml02 = HEATMAPMouseOverTooltipsHtml02,
HEATMAPMouseOverTooltipsHtml03 = HEATMAPMouseOverTooltipsHtml03,
HEATMAPMouseOverTooltipsHtml04 = HEATMAPMouseOverTooltipsHtml04,
HEATMAPMouseOverTooltipsHtml05 = HEATMAPMouseOverTooltipsHtml05,
HEATMAPMouseOverTooltipsHtml06 = HEATMAPMouseOverTooltipsHtml06,
HEATMAPMouseOverTooltipsPosition = "absolute",
HEATMAPMouseOverTooltipsBackgroundColor = "white",
HEATMAPMouseOverTooltipsBorderStyle = "solid",
HEATMAPMouseOverTooltipsBorderWidth = HEATMAPMouseOverTooltipsBorderWidth,
HEATMAPMouseOverTooltipsPadding = "3px",
HEATMAPMouseOverTooltipsBorderRadius = "3px",
HEATMAPMouseOverTooltipsOpacity = 0.8,
LINKMouseEvent = T,
LINKMouseClickDisplay = F,
LINKMouseClickColor = "red",
LINKMouseClickTextFromData = "fourth",
LINKMouseClickTextOpacity = 1,
LINKMouseClickTextColor = "red",
LINKMouseClickTextSize = 8,
LINKMouseClickTextPostionX = 0,
LINKMouseClickTextPostionY = 0,
LINKMouseClickTextDrag = T,
LINKMouseDownDisplay = F,
LINKMouseDownColor = "green",
LINKMouseEnterDisplay = F,
LINKMouseEnterColor = "yellow",
LINKMouseLeaveDisplay = F,
LINKMouseLeaveColor = "pink",
LINKMouseMoveDisplay = F,
LINKMouseMoveColor = "red",
LINKMouseOutDisplay = LINKMouseOutDisplay,
LINKMouseOutAnimationTime = 500,
LINKMouseOutStrokeColor = LINKMouseOutStrokeColor,
LINKMouseUpDisplay = F,
LINKMouseUpColor = "grey",
LINKMouseOverOpacity = LINKMouseOverOpacity,
LINKMouseOverDisplay = LINKMouseOverDisplay,
LINKMouseOverStrokeColor = LINKMouseOverStrokeColor,
LINKMouseOverTooltipsHtml01 = LINKMouseOverTooltipsHtml01,
LINKMouseOverTooltipsHtml02 = LINKMouseOverTooltipsHtml02,
LINKMouseOverStrokeWidth = LINKMouseOverStrokeWidth,
LINKMouseOutStrokeWidth = LINKMouseOutStrokeWidth,
LINKMouseOverTooltipsBorderWidth = LINKMouseOverTooltipsBorderWidth,
LINEMouseEvent = T,
LINEMouseClickDisplay = F,
LINEMouseClickLineOpacity = 1,
LINEMouseClickLineStrokeColor = "red",
LINEMouseClickLineStrokeWidth = "none",
LINEMouseDownDisplay = F,
LINEMouseDownLineOpacity = 1,
LINEMouseDownLineStrokeColor = "red",
LINEMouseDownLineStrokeWidth = "none",
LINEMouseEnterDisplay = F,
LINEMouseEnterLineOpacity = 1,
LINEMouseEnterLineStrokeColor = "red",
LINEMouseEnterLineStrokeWidth = "none",
LINEMouseLeaveDisplay = F,
LINEMouseLeaveLineOpacity = 1,
LINEMouseLeaveLineStrokeColor = "red",
LINEMouseLeaveLineStrokeWidth = "none",
LINEMouseMoveDisplay = F,
LINEMouseMoveLineOpacity = 1,
LINEMouseMoveLineStrokeColor = "red",
LINEMouseMoveLineStrokeWidth = "none",
LINEMouseOutDisplay = LINEMouseOutDisplay,
LINEMouseOutAnimationTime = 500,
LINEMouseOutLineOpacity = LINEMouseOutLineOpacity,
LINEMouseOutLineStrokeColor = LINEMouseOutLineStrokeColor,
LINEMouseOutLineStrokeWidth = LINEMouseOutLineStrokeWidth,
LINEMouseUpDisplay = F,
LINEMouseUpLineOpacity = 1,
LINEMouseUpLineStrokeColor = "red",
LINEMouseUpLineStrokeWidth = "none",
LINEMouseOverDisplay = LINEMouseOverDisplay,
LINEMouseOverLineOpacity = LINEMouseOverLineOpacity,
LINEMouseOverLineStrokeColor = LINEMouseOverLineStrokeColor,
LINEMouseOverLineStrokeWidth = LINEMouseOverLineStrokeWidth,
LINEMouseOverTooltipsHtml01 = LINEMouseOverTooltipsHtml01,
LINEMouseOverTooltipsPosition = "absolute",
LINEMouseOverTooltipsBackgroundColor = "white",
LINEMouseOverTooltipsBorderStyle = "solid",
LINEMouseOverTooltipsBorderWidth = LINEMouseOverTooltipsBorderWidth,
LINEMouseOverTooltipsPadding = "3px",
LINEMouseOverTooltipsBorderRadius = "3px",
LINEMouseOverTooltipsOpacity = 0.8,
CNVMouseEvent = T,
CNVMouseClickDisplay = F,
CNVMouseClickColor = "red",
CNVMouseClickArcOpacity = 1.0,
CNVMouseClickArcStrokeColor = "#F26223",
CNVMouseClickArcStrokeWidth = 0,
CNVMouseClickTextFromData = "fourth",
CNVMouseClickTextOpacity = 1,
CNVMouseClickTextColor = "red",
CNVMouseClickTextSize = 8,
CNVMouseClickTextPostionX = 0,
CNVMouseClickTextPostionY = 0,
CNVMouseClickTextDrag = T,
CNVMouseDownDisplay = F,
CNVMouseDownColor = "green",
CNVMouseDownArcOpacity = 1.0,
CNVMouseDownArcStrokeColor = "#F26223",
CNVMouseDownArcStrokeWidth = 0,
CNVMouseEnterDisplay = F,
CNVMouseEnterColor = "yellow",
CNVMouseEnterArcOpacity = 1.0,
CNVMouseEnterArcStrokeColor = "#F26223",
CNVMouseEnterArcStrokeWidth = 0,
CNVMouseLeaveDisplay = F,
CNVMouseLeaveColor = "pink",
CNVMouseLeaveArcOpacity = 1.0,
CNVMouseLeaveArcStrokeColor = "#F26223",
CNVMouseLeaveArcStrokeWidth = 0,
CNVMouseMoveDisplay = F,
CNVMouseMoveColor = "red",
CNVMouseMoveArcOpacity = 1.0,
CNVMouseMoveArcStrokeColor = "#F26223",
CNVMouseMoveArcStrokeWidth = 0,
CNVMouseOutDisplay = CNVMouseOutDisplay,
CNVMouseOutAnimationTime = 500,
CNVMouseOutColor = CNVMouseOutColor,
CNVMouseOutArcOpacity = CNVMouseOutArcOpacity,
CNVMouseOutArcStrokeColor = CNVMouseOutArcStrokeColor,
CNVMouseOutArcStrokeWidth = CNVMouseOutArcStrokeWidth,
CNVMouseUpDisplay = F,
CNVMouseUpColor = "grey",
CNVMouseUpArcOpacity = 1.0,
CNVMouseUpArcStrokeColor = "#F26223",
CNVMouseUpArcStrokeWidth = 0,
CNVMouseOverDisplay = CNVMouseOverDisplay,
CNVMouseOverColor = CNVMouseOverColor,
CNVMouseOverArcOpacity = CNVMouseOverArcOpacity,
CNVMouseOverArcStrokeColor = CNVMouseOverArcStrokeColor,
CNVMouseOverArcStrokeWidth = CNVMouseOverArcStrokeWidth,
CNVMouseOverTooltipsHtml01 = CNVMouseOverTooltipsHtml01,
CNVMouseOverTooltipsHtml02 = CNVMouseOverTooltipsHtml02,
CNVMouseOverTooltipsHtml03 = CNVMouseOverTooltipsHtml03,
CNVMouseOverTooltipsHtml04 = CNVMouseOverTooltipsHtml04,
CNVMouseOverTooltipsHtml05 = CNVMouseOverTooltipsHtml05,
CNVMouseOverTooltipsPosition = "absolute",
CNVMouseOverTooltipsBackgroundColor = "white",
CNVMouseOverTooltipsBorderStyle = "solid",
CNVMouseOverTooltipsBorderWidth = CNVMouseOverTooltipsBorderWidth,
CNVMouseOverTooltipsPadding = "3px",
CNVMouseOverTooltipsBorderRadius = "3px",
CNVMouseOverTooltipsOpacity = 0.8
)
# create widget
htmlwidgets::createWidget(
name = 'BioCircos',
x,
width = width,
height = height,
package = 'BioCircos',
elementId = elementId
)
}
#' Shiny bindings for BioCircos
#'
#' Output and render functions for using BioCircos within Shiny
#' applications and interactive Rmd documents.
#'
#' @param outputId output variable to read from
#' @param width,height Must be a valid CSS unit (like \code{'100\%'},
#' \code{'400px'}, \code{'auto'}) or a number, which will be coerced to a
#' string and have \code{'px'} appended.
#' @param expr An expression that generates a BioCircos
#' @param env The environment in which to evaluate \code{expr}.
#' @param quoted Is \code{expr} a quoted expression (with \code{quote()})? This
#' is useful if you want to save an expression in a variable.
#'
#' @name BioCircos-shiny
#'
#' @export
BioCircosOutput <- function(outputId, width = '100%', height = '400px'){
htmlwidgets::shinyWidgetOutput(outputId, 'BioCircos', width, height, package = 'BioCircos')
}
#' @rdname BioCircos-shiny
#' @export
renderBioCircos <- function(expr, env = parent.frame(), quoted = FALSE) {
if (!quoted) { expr <- substitute(expr) } # force quoted
htmlwidgets::shinyRenderWidget(expr, BioCircosOutput, env, quoted = TRUE)
}
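# A minimal, illustrative Shiny sketch using the bindings above (kept as a
# comment so it is not executed when the package is loaded; assumes the shiny
# package is available):
# library(shiny)
# ui <- fluidPage(BioCircosOutput("circosPlot"))
# server <- function(input, output) {
#   output$circosPlot <- renderBioCircos(BioCircos(genomeFillColor = "Blues"))
# }
# shinyApp(ui, server)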
#' Create a background track to be added to a BioCircos tracklist
#'
#' Simple background to display behind another track
#'
#' @param trackname The name of the new track.
#'
#' @param fillColors The color of the background element, in hexadecimal RGB format.
#' @param borderColors The color of the background borders, in hexadecimal RGB format.
#'
#' @param minRadius,maxRadius Where the track should begin and end, in proportion of the inner radius of the plot.
#' @param borderSize The thickness of the background borders.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(BioCircosBackgroundTrack('bgTrack', fillColors="#FFEEEE", borderSize = 1))
#'
#' @export
BioCircosBackgroundTrack <- function(trackname,
fillColors = "#EEEEFF", borderColors = "#000000",
maxRadius = 0.9, minRadius = 0.5, borderSize = 0.3, ...){
track1 = paste("BACKGROUND", trackname, sep="_")
track2 = list(BgouterRadius = maxRadius, BginnerRadius = minRadius,
BgFillColor = fillColors,
BgborderColor = borderColors,
BgborderSize = borderSize)
track = BioCircosTracklist() + list(list(track1, track2))
return(track)
}
#' Create a Text track to be added to a BioCircos tracklist
#'
#' Simple text annotation displayed in the visualization
#'
#' @param trackname The name of the new track.
#'
#' @param text The text to be displayed.
#'
#' @param x,y Coordinates of the lower left corner of the annotation, in proportion of the inner radius of the plot.
#' @param size Font size, with units specified (such as em or px).
#' @param color Font color, in hexadecimal RGB format.
#' @param weight Font weight. Can be "normal", "bold", "bolder" or "lighter".
#' @param opacity Font opacity.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(BioCircosTextTrack('textTrack', 'Annotation', color = '#DD2222', x = -0.3))
#'
#' @export
BioCircosTextTrack <- function(trackname, text,
x = -0.15, y = 0, size = "1.2em", weight = "bold", opacity = 1, color = "#000000", ...){
track1 = paste("TEXT", trackname, sep="_")
track2 = list(x = x,
y = y,
textSize = size,
textWeight = weight,
textColor = color,
textOpacity = opacity,
text = text)
track = BioCircosTracklist() + list(list(track1, track2))
return(track)
}
#' Create a track with SNPs to be added to a BioCircos tracklist
#'
#' SNPs are defined by genomic coordinates and associated with a numerical value
#'
#' @param trackname The name of the new track.
#'
#' @param chromosomes A vector containing the chromosomes on which each SNP are found.
#' Values should match the chromosome names given in the genome parameter of the BioCircos function.
#' @param positions A vector containing the coordinates on which each SNP are found.
#' Values should be inferior to the chromosome lengths given in the genome parameter of the BioCircos function.
#' @param values A vector of numerical values associated with each SNPs, used to determine the
#' radial coordinates of each point on the visualization.
#'
#' @param colors The colors for each point. Can be a RColorBrewer palette name used to
#' generate one color per point, or a character object or vector of character objects stating RGB values in hexadecimal
#' format or base R colors. If the vector is shorter than the number of points, values will be repeated.
#' @param labels One or multiple character objects to label each point.
#' @param opacities One or multiple opacity values for the points, between 0 and 1.
#'
#' @param size The size of each point.
#' @param shape Shape of the points. Can be "circle" or "rect".
#'
#' @param minRadius,maxRadius Where the track should begin and end, in proportion of the inner radius of the plot.
#' @param range Vector of values to be mapped to the minimum and maximum radii of the track.
#' Default to 0, mapping the minimal and maximal values input in the values parameter.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(BioCircosSNPTrack('SNPTrack', chromosomes = 1:3, positions = 1e+7*2:4,
#' values = 1:3, colors = "Accent", labels = c('A', 'B', 'C')) + BioCircosBackgroundTrack('BGTrack'))
#'
#' @export
BioCircosSNPTrack <- function(trackname, chromosomes, positions, values,
colors = "#40B9D4", labels = "", size = 2, shape = "circle", opacities = 1,
maxRadius = 0.9, minRadius = 0.5, range = 0, ...){
# If colors is a palette, create corresponding color vector
colors = .BioCircosColorCheck(colors, length(positions), "colors")
track1 = paste("SNP", trackname, sep="_")
track2 = list(maxRadius = maxRadius, minRadius = minRadius,
SNPFillColor = "#9400D3",
PointType = shape,
circleSize = size,
rectWidth = size,
rectHeight = size,
range = range)
tabSNP = suppressWarnings(rbind(unname(chromosomes), unname(positions), unname(values), unname(colors),
unname(labels), unname(opacities)))
rownames(tabSNP) = c("chr", "pos", "value", "color", "des", "opacity")
track3 = unname(alply(tabSNP, 2, as.list))
track = BioCircosTracklist() + list(list(track1, track2, track3))
return(track)
}
#' Create a track with lines to be added to a BioCircos tracklist
#'
#' Lines are defined by genomic coordinates and values of an ordered set of points,
#' that will define the edges of the segments.
#'
#' @param trackname The name of the new track.
#'
#' @param chromosomes A vector containing the chromosomes on which each vertex is found.
#' Values should match the chromosome names given in the genome parameter of the BioCircos function.
#' @param positions A vector containing the coordinates on which each vertex are found.
#' Values should be inferior to the chromosome lengths given in the genome parameter of the BioCircos function.
#' @param values A vector of numerical values associated with each vertex, used to determine the
#' radial coordinate of each vertex on the visualization.
#'
#' @param color The color of the line in hexadecimal RGB format.
#' @param width The line width.
#'
#' @param minRadius,maxRadius Where the track should begin and end, in proportion of the inner radius of the plot.
#' @param range Vector of values to be mapped to the minimum and maximum radii of the track.
#' Default to 0, mapping the minimal and maximal values input in the values parameter.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(BioCircosLineTrack('LnId', rep(1,30), 2e+6*(1:100), log(1:100))
#' + BioCircosBackgroundTrack('BGId'))
#'
#' @export
BioCircosLineTrack <- function(trackname, chromosomes, positions, values, color = "#40B9D4",
width = 2, maxRadius = 0.9, minRadius = 0.5, range = 0, ...){
track1 = paste("LINE", trackname, sep="_")
track2 = list(maxRadius = maxRadius, minRadius = minRadius,
LineColor = color,
LineWidth = width,
range = range)
tabSNP = suppressWarnings(rbind(unname(chromosomes), unname(positions), unname(values)))
rownames(tabSNP) = c("chr", "pos", "value")
track3 = unname(alply(tabSNP, 2, as.list))
track = BioCircosTracklist() + list(list(track1, track2, track3))
return(track)
}
#' Create a track with a bar plot to be added to a BioCircos tracklist
#'
#' Bins are defined by a genomic range and associated with a numerical value
#'
#' @param trackname The name of the new track.
#'
#' @param chromosomes A vector containing the chromosomes on which each bar is found.
#' Values should match the chromosome names given in the genome parameter of the BioCircos function.
#' @param starts,ends Vectors containing the coordinates on which each bin begins or ends.
#' @param values A vector of numerical values associated with each bin, used to determine the
#' height of each bar on the track.
#'
#' @param labels One or multiple character objects to label each bar.
#'
#' @param range Vector of values to be mapped to the minimum and maximum radii of the track.
#' Default to 0, mapping the minimal and maximal values input in the values parameter.
#' @param color The color for the bars, in hexadecimal RGB format.
#'
#' @param minRadius,maxRadius Where the track should begin and end, in proportion of the inner radius of the plot.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(BioCircosBarTrack('BarTrack', chromosomes = 1:3, starts = 1e+7*2:4, ends = 2.5e+7*2:4,
#' values = 1:3, labels = c('A ', 'B ', 'C '), range = c(0,4)) + BioCircosBackgroundTrack('BGTrack'))
#'
#' @export
BioCircosBarTrack <- function(trackname, chromosomes, starts, ends, values,
labels = "", maxRadius = 0.9, minRadius = 0.5, color = "#40B9D4", range = 0, ...){
track1 = paste("HISTOGRAM", trackname, sep="_")
track2 = list(maxRadius = maxRadius, minRadius = minRadius,
histogramFillColor = color,
range = range)
tabHist = suppressWarnings(rbind(unname(chromosomes), unname(starts), unname(ends), unname(labels), unname(values)))
rownames(tabHist) = c("chr", "start", "end", "name", "value")
track3 = unname(alply(tabHist, 2, as.list))
track = BioCircosTracklist() + list(list(track1, track2, track3))
return(track)
}
#' Create a track with concentric arcs to be added to a BioCircos tracklist
#'
#' Arcs are defined by a genomic range and radially associated with a numerical value
#'
#' @param trackname The name of the new track.
#'
#' @param chromosomes A vector containing the chromosomes on which each arc is found.
#' Values should match the chromosome names given in the genome parameter of the BioCircos function.
#' @param starts,ends Vectors containing the coordinates on which each arc begins or ends.
#' @param values A vector of numerical values associated with each arc, used to determine the
#' radial position of each arc on the track.
#'
#' @param width The thickness of the arcs.
#' @param color The color for the arcs, in hexadecimal RGB format.
#' @param range Vector of values to be mapped to the minimum and maximum radii of the track.
#' Defaults to 0, mapping the minimal and maximal values given in the values parameter.
#'
#' @param minRadius,maxRadius Where the track should begin and end, in proportion of the inner radius of the plot.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(BioCircosCNVTrack('CNVTrack', chromosomes = 1:3, starts = 1e+7*2:4, ends = 2.5e+7*2:4,
#' values = 1:3, color = "#BB0000", maxRadius = 0.85, minRadius = 0.55)
#' + BioCircosBackgroundTrack('BGTrack'))
#' @export
BioCircosCNVTrack <- function(trackname, chromosomes, starts, ends, values,
maxRadius = 0.9, minRadius = 0.5, width = 1, color = "#40B9D4", range = 0, ...){
track1 = paste("CNV", trackname, sep="_")
track2 = list(maxRadius = maxRadius, minRadius = minRadius,
CNVColor = color, CNVwidth = width, range = range)
tabHist = suppressWarnings(rbind(unname(chromosomes), unname(starts), unname(ends), unname(values)))
rownames(tabHist) = c("chr", "start", "end", "value")
track3 = unname(alply(tabHist, 2, as.list))
track = BioCircosTracklist() + list(list(track1, track2, track3))
return(track)
}
#' Create a heatmap track to be added to a BioCircos tracklist
#'
#' Heatmaps are defined by the genomic range and the color-associated numerical value
#' of each box of the heatmap layer
#'
#' @param trackname The name of the new track.
#'
#' @param chromosomes A vector containing the chromosomes on which each box is found.
#' Values should match the chromosome names given in the genome parameter of the BioCircos function.
#' @param starts,ends Vectors containing the coordinates on which each box begins or ends.
#' @param values A vector of numerical values associated with each box, used to determine the
#' color of each box on the track.
#'
#' @param labels One or multiple character objects to label each box.
#'
#' @param range A vector of the values to be mapped to the minimum and maximum colors of the track.
#' Defaults to 0, mapping the minimal and maximal values given in the values parameter.
#' @param color A vector of the colors in hexadecimal RGB format to be mapped to the minimum and
#' maximum values of the track.
#' Colors of intermediate values will be linearly interpolated between these two colors.
#'
#' @param minRadius,maxRadius Where the track should begin and end, in proportion of the inner radius of the plot.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(BioCircosHeatmapTrack('HmTrack', chromosomes = 1:3, starts = 1e+7*2:4, ends = 2.5e+7*2:4,
#' values = 1:3, labels = c('A ', 'B ', 'C ')))
#'
#' @export
BioCircosHeatmapTrack <- function(trackname, chromosomes, starts, ends, values,
labels = "", maxRadius = 0.9, minRadius = 0.5, color = c("#40B9D4", "#F8B100"), range = 0, ...){
track1 = paste("HEATMAP", trackname, sep="_")
track2 = list(outerRadius = maxRadius - 8/7, innerRadius = minRadius - 1,
minColor = color[1], maxColor = color[2], range = range) # In JS lib the innerRadius and outerRadius are
# based on the inner and outer radii of the chromosome. Here we convert the arc coordinates to percentage of the space
# inside the chromosome, based on the assumption that the inner and outer radii of the chromosome are respectively at 70
# and 80 percents of the widget minimal dimension. The conversion to absolute values is performed on the JavaScript side.
tabHeat = suppressWarnings(rbind(unname(chromosomes), unname(starts), unname(ends), unname(labels), unname(values)))
rownames(tabHeat) = c("chr", "start", "end", "name", "value")
track3 = unname(alply(tabHeat, 2, as.list))
track = BioCircosTracklist() + list(list(track1, track2, track3))
return(track)
}
#' Create a track with arcs to be added to a BioCircos tracklist
#'
#' Arcs are defined by beginning and ending genomic coordinates
#'
#' @param trackname The name of the new track.
#'
#' @param chromosomes A vector containing the chromosomes on which each arc is found.
#' Values should match the chromosome names given in the genome parameter of the BioCircos function.
#' @param starts,ends Vectors containing the coordinates on which each arc begins or ends.
#' Values should not exceed the chromosome lengths given in the genome parameter of the BioCircos function.
#'
#' @param colors The colors for each arc. Can be a RColorBrewer palette name used to
#' generate one color per arc, or a character object or vector of character objects stating RGB values in hexadecimal
#' format or base R colors. If the vector is shorter than the number of arcs, values will be repeated.
#' @param labels One or multiple character objects to label each arc.
#' @param opacities One or multiple opacity values for the arcs, between 0 and 1.
#'
#' @param minRadius,maxRadius Where the track should begin and end, in proportion of the inner radius of the plot.
#'
#' @param ... Ignored
#'
#' @examples
#' BioCircos(BioCircosArcTrack('ArcTrack', chromosomes = 1:5, starts = 2e+7*1:5, ends = 2.5e+7*2:6))
#'
#' @export
BioCircosArcTrack <- function(trackname, chromosomes, starts, ends,
colors = "#40B9D4", labels = "", opacities = 1,
maxRadius = 0.9, minRadius = 0.5, ...){
# If colors is a palette, create corresponding color vector
colors = .BioCircosColorCheck(colors, length(starts), "colors")
track1 = paste("ARC", trackname, sep="_")
track2 = list(outerRadius = maxRadius - 8/7, innerRadius = minRadius - 1) # In JS lib the innerRadius and outerRadius are
# based on the inner and outer radii of the chromosome. Here we convert the arc coordinates to percentage of the space
# inside the chromosome, based on the assumption that the inner and outer radii of the chromosome are respectively at 70
# and 80 percents of the widget minimal dimension. The conversion to absolute values is performed on the JavaScript side.
tabSNP = suppressWarnings(rbind(unname(chromosomes), unname(starts), unname(ends), unname(colors), unname(labels), unname(opacities)))
rownames(tabSNP) = c("chr", "start", "end", "color", "des", "opacity")
track3 = unname(alply(tabSNP, 2, as.list))
track = BioCircosTracklist() + list(list(track1, track2, track3))
return(track)
}
#' Create an inner track with links to be added to a BioCircos tracklist
#'
#' Links are defined by the beginning and ending genomic coordinates of the two regions to be linked,
#' such as the positions joined in genomic fusions.
#'
#' @param trackname The name of the new track.
#'
#' @param gene1Chromosomes,gene1Starts,gene1Ends,gene1Names,gene2Chromosomes,gene2Starts,gene2Ends,gene2Names
#' Vectors with the chromosomes, genomic coordinates of beginning and end, and names of both genes to link.
#' Chromosomes and positions should respect the chromosome names and lengths given in the genome parameter of
#' the BioCircos function.
#'
#' @param color The color for the links, in hexadecimal RGB format.
#' @param width The thickness of the links.
#'
#' @param labels A vector of character objects to label each link.
#'
#' @param displayAxis Display an additional axis (i.e. a circle) around the track.
#' @param axisColor,axisWidth,axisPadding Color, thickness and padding of the additional axis.
#'
#' @param displayLabel Display labels of the track.
#' @param labelColor,labelSize,labelPadding Color, font size and padding of the labels around the track.
#'
#' @param maxRadius Where the track should end, in proportion of the inner radius of the plot.
#'
#' @param ... Ignored
#'
#' @examples
#' start_chromosomes <- 1:5
#' end_chromosomes <- 2*10:6
#' start_pos <- 2.5e+7*2:6
#' end_pos <- 2e+7*1:5
#' BioCircos(BioCircosLinkTrack('LinkTrack', start_chromosomes, start_pos, start_pos+1,
#' end_chromosomes, end_pos, end_pos+1, color = '#FF00FF'))
#'
#' @export
BioCircosLinkTrack <- function(trackname, gene1Chromosomes, gene1Starts, gene1Ends,
gene2Chromosomes, gene2Starts, gene2Ends, color = "#40B9D4", labels = "",
maxRadius = 0.4, width = "0.1em",
gene1Names = "", gene2Names = "", displayAxis = TRUE, axisColor = "#B8B8B8", axisWidth = 0.5,
axisPadding = 0, displayLabel = TRUE, labelColor = "#000000",
labelSize = "1em", labelPadding = 3, ...){
track1 = paste("LINK", trackname, sep="_")
track2 = list(LinkRadius = maxRadius, LinkFillColor = color, LinkWidth = width,
displayLinkAxis = displayAxis, LinkAxisColor = axisColor, LinkAxisWidth = axisWidth,
LinkAxisPad = axisPadding, displayLinkLabel = displayLabel, LinkLabelColor = labelColor,
LinkLabelSize = labelSize, LinkLabelPad = labelPadding)
tabSNP = suppressWarnings(rbind(unname(labels), unname(gene1Chromosomes), unname(gene1Starts), unname(gene1Ends),
unname(gene1Names), unname(gene2Chromosomes), unname(gene2Starts), unname(gene2Ends), unname(gene2Names)))
rownames(tabSNP) = c("fusion", "g1chr", "g1start", "g1end", "g1name", "g2chr", "g2start", "g2end", "g2name")
track3 = unname(alply(tabSNP, 2, as.list))
track = BioCircosTracklist() + list(list(track1, track2, track3))
return(track)
}
#' Create a list of BioCircos tracks
#'
#' This allows the use of the '+' and '-' operators on these lists.
#'
#' @name BioCircosTracklist
#'
#' @param x The tracklist on which other tracks should be added or removed.
#' @param ... The tracks to add (as tracklists) or to remove (as track names).
#'
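#' @examples
#' # Minimal usage sketch (illustrative): start from an empty tracklist,
#' # add a background track with '+', then remove it again by name with '-'.
#' tracks = BioCircosTracklist() + BioCircosBackgroundTrack("bg")
#' tracks = tracks - "bg"
#'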
#' @export
BioCircosTracklist <- function(){
x = list()
class(x) <- c("BioCircosTracklist")
return(x)
}
#' @rdname BioCircosTracklist
#' @export
"+.BioCircosTracklist" <- function(x,...) {
x <- append(x,...)
if(!inherits(x, "BioCircosTracklist")){
class(x) <- c("BioCircosTracklist")
}
return(x)
}
#' @rdname BioCircosTracklist
#' @export
"-.BioCircosTracklist" <- function(x,...) {
indicesToDelete = list()
for (i in seq_along(x)){
if(paste(strsplit(x[[i]][[1]], '_')[[1]][-1], collapse = "_") %in% c(...)){
indicesToDelete = append(indicesToDelete, i)
}
}
y <- x
y[unlist(indicesToDelete)] <- NULL
return(y)
}
.BioCircosColorCheck <- function(colVar, colLength, varName = "Color") {
# If colVar is a single palette name, create the corresponding color vector
colorError = paste0("\'", varName,
"\' parameter should be either a vector of colors or the name of an RColorBrewer palette.")
if(is.character(colVar)){
if(all(colVar %in% rownames(RColorBrewer::brewer.pal.info))&(length(colVar) == 1)) { # RColorBrewer's brewer
colVar = grDevices::colorRampPalette(RColorBrewer::brewer.pal(8, colVar))(colLength)
}
else if(!all(grepl("^#", colVar))){ # Not RGB values
if(all(colVar %in% grDevices::colors())){
colVar = grDevices::rgb(t(grDevices::col2rgb(colVar))/255)
}
else{ # Unknown format
stop(colorError)
}
}
}
else{
stop(colorError)
}
return(colVar)
}
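# Illustrative sketch of the helper above, kept as comments so that nothing is executed at load time:
# .BioCircosColorCheck("YlOrBr", 4)   # palette name -> 4 colors interpolated from the YlOrBr palette
# .BioCircosColorCheck("tomato2", 1)  # base R color name -> "#EE5C42"
# .BioCircosColorCheck("#FF0000", 1)  # hexadecimal RGB values are returned unchanged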
/scratch/gouwar.j/cran-all/cranData/BioCircos/R/BioCircos.R
## ----eval=FALSE, screenshot.force = FALSE--------------------------------
# install.packages('BioCircos')
## ----eval=FALSE, screenshot.force = FALSE--------------------------------
# # You need devtools for that
# if (!require('devtools')){install.packages('devtools')}
#
# devtools::install_github('lvulliard/BioCircos.R', build_vignettes = TRUE)
## ------------------------------------------------------------------------
library(BioCircos)
BioCircos()
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
BioCircos(genome = "hg19", yChr = FALSE, genomeFillColor = "Reds", chrPad = 0,
displayGenomeBorder = FALSE, genomeTicksDisplay = FALSE, genomeLabelDy = 0)
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
myGenome = list("A" = 10560,
"B" = 8808,
"C" = 12014,
"D" = 7664,
"E" = 9403,
"F" = 8661)
BioCircos(genome = myGenome, genomeFillColor = c("tomato2", "darkblue"),
genomeTicksScale = 4e+3)
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
tracklist = BioCircosTextTrack('myTextTrack', 'Some text', size = "2em", opacity = 0.5,
x = -0.67, y = -0.5)
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0, displayGenomeBorder = FALSE,
genomeTicksLen = 2, genomeTicksTextSize = 0, genomeTicksScale = 1e+8,
genomeLabelTextSize = "9pt", genomeLabelDy = 0)
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
tracklist = BioCircosBackgroundTrack("myBackgroundTrack", minRadius = 0.5, maxRadius = 0.8,
borderColors = "#AAAAAA", borderSize = 0.6, fillColors = "#FFBBBB")
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.05, displayGenomeBorder = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = "9pt", genomeLabelDy = 0)
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
# Chromosomes on which the points should be displayed
points_chromosomes = c('X', '2', '7', '13', '9')
# Coordinates on which the points should be displayed
points_coordinates = c(102621, 140253678, 98567307, 28937403, 20484611)
# Values associated with each point, used as radial coordinate
# on a scale going to minRadius for the lowest value to maxRadius for the highest value
points_values = 0:4
tracklist = BioCircosSNPTrack('mySNPTrack', points_chromosomes, points_coordinates,
points_values, colors = c("tomato2", "darkblue"), minRadius = 0.5, maxRadius = 0.9)
# Background are always placed below other tracks
tracklist = tracklist + BioCircosBackgroundTrack("myBackgroundTrack",
minRadius = 0.5, maxRadius = 0.9,
borderColors = "#AAAAAA", borderSize = 0.6, fillColors = "#B3E6FF")
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.05, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = 18, genomeLabelDy = 0)
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
arcs_chromosomes = c('X', 'X', '2', '9') # Chromosomes on which the arcs should be displayed
arcs_begin = c(1, 45270560, 140253678, 20484611)
arcs_end = c(155270560, 145270560, 154978472, 42512974)
tracklist = BioCircosArcTrack('myArcTrack', arcs_chromosomes, arcs_begin, arcs_end,
minRadius = 1.18, maxRadius = 1.25, opacities = c(0.4, 0.4, 1, 0.8))
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.02, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = 0)
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
links_chromosomes_1 = c('X', '2', '9') # Chromosomes on which the links should start
links_chromosomes_2 = c('3', '18', '9') # Chromosomes on which the links should end
links_pos_1 = c(155270560, 154978472, 42512974)
links_pos_2 = c(102621477, 140253678, 20484611)
links_labels = c("Link 1", "Link 2", "Link 3")
tracklist = BioCircosBackgroundTrack("myBackgroundTrack", minRadius = 0, maxRadius = 0.55,
borderSize = 0, fillColors = "#EEFFEE")
tracklist = tracklist + BioCircosLinkTrack('myLinkTrack', links_chromosomes_1, links_pos_1,
links_pos_1 + 50000000, links_chromosomes_2, links_pos_2, links_pos_2 + 750000,
maxRadius = 0.55, labels = links_labels)
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.02, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = "8pt", genomeLabelDy = 0)
## ----figBarTrack, fig.width=4, fig.height=4, fig.align = 'center', screenshot.force = FALSE----
library(BioCircos)
library(RColorBrewer)
library(grDevices)
# Define a custom genome
genomeChr = LETTERS
lengthChr = 5*1:length(genomeChr)
names(lengthChr) <- genomeChr
tracks = BioCircosTracklist()
# Add one track for each chromosome
for (i in 1:length(genomeChr)){
# Define histogram/bars to be displayed
nbBars = lengthChr[i] - 1
barValues = sapply(1:nbBars, function(x) 10 + nbBars%%x)
barColor = colorRampPalette(brewer.pal(8, "YlOrBr"))(length(genomeChr))[i]
# Add a track with bars on the i-th chromosome
tracks = tracks + BioCircosBarTrack(paste0("bars", i), chromosome = genomeChr[i],
starts = (1:nbBars) - 1, ends = 1:nbBars, values = barValues, color = barColor,
range = c(5,75))
}
# Add background
tracks = tracks + BioCircosBackgroundTrack("bars_background", colors = "#2222EE")
BioCircos(tracks, genomeFillColor = "YlOrBr", genome = as.list(lengthChr),
genomeTicksDisplay = F, genomeLabelDy = 0)
## ----figCNVTrack, screenshot.force = FALSE-------------------------------
library(BioCircos)
# Arcs coordinates
snvChr = rep(4:9, 3)
snvStart = c(rep(1,6), rep(40000000,6), rep(100000000,6))
snvEnd = c(rep(39999999,6), rep(99999999,6),
191154276, 180915260, 171115067, 159138663, 146364022, 141213431)
# Values associated with each point, used as radial coordinate
# on a scale going to minRadius for the lowest value to maxRadius for the highest value
snvValues = (1:18%%5)+1
# Create CNV track
tracks = BioCircosCNVTrack('cnv_track', as.character(snvChr), snvStart, snvEnd, snvValues,
color = "#CC0000", range = c(0,6))
# Add background
tracks = tracks + BioCircosBackgroundTrack("arcs_background", colors = "#2222EE")
BioCircos(tracks, genomeFillColor = "YlOrBr", genomeTicksDisplay = F, genomeLabelDy = 0)
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
# Define a custom genome
genomeChr = LETTERS[1:10]
lengthChr = 5*1:length(genomeChr)
names(lengthChr) <- genomeChr
# Define boxes positions
boxPositions = unlist(sapply(lengthChr, seq))
boxChromosomes = rep(genomeChr, lengthChr)
# Define values for two heatmap tracks
boxVal1 = boxPositions %% 13 / 13
boxVal2 = (7 + boxPositions) %% 17 / 17
tracks = BioCircosHeatmapTrack("heatmap1", boxChromosomes, boxPositions - 1, boxPositions,
boxVal1, minRadius = 0.6, maxRadius = 0.75)
tracks = tracks + BioCircosHeatmapTrack("heatmap1", boxChromosomes, boxPositions - 1,
boxPositions, boxVal2, minRadius = 0.75, maxRadius = 0.9, color = c("#FFAAAA", "#000000"))
BioCircos(tracks, genome = as.list(lengthChr), genomeTicksDisplay = F, genomeLabelDy = 0,
HEATMAPMouseOverColor = "#F3C73A")
## ---- screenshot.force = FALSE-------------------------------------------
chrVert = rep(c(1, 3, 5), c(20,10,5))
posVert = c(249250621*log(c(20:1, 10:1, 5:1), base = 20))
tracks = BioCircosLineTrack('LineTrack', as.character(chrVert), posVert, values = cos(posVert))
tracks = tracks + BioCircosLineTrack('LineTrack2', as.character(chrVert+1), 0.95*posVert,
values = sin(posVert), color = "#40D4B9")
tracks = tracks + BioCircosBackgroundTrack('Bg', fillColors = '#FFEEBB', borderSize = 0)
BioCircos(tracks, chrPad = 0.05, displayGenomeBorder = FALSE, LINEMouseOutDisplay = FALSE,
LINEMouseOverTooltipsHtml01 = "Pretty lines<br/>This tooltip won't go away!")
## ---- screenshot.force = FALSE-------------------------------------------
library(BioCircos)
# Create a tracklist with a text annotation and backgrounds
tracklist = BioCircosTextTrack('t1', 'hi')
tracklist = tracklist + BioCircosBackgroundTrack('b1')
# Remove the text annotation and display the result
BioCircos(tracklist - 't1')
## ----figMultiTrack, fig.width=5, fig.height=5, fig.align = 'center', screenshot.force = FALSE----
library(BioCircos)
# Fix random generation for reproducibility
set.seed(3)
# SNP tracks
tracks = BioCircosSNPTrack("testSNP1", as.character(rep(1:10,10)),
round(runif(100, 1, 135534747)),
runif(100, 0, 10), colors = "Spectral", minRadius = 0.3, maxRadius = 0.45)
tracks = tracks + BioCircosSNPTrack("testSNP2", as.character(rep(1:15,5)),
round(runif(75, 1, 102531392)),
runif(75, 2, 12), colors = c("#FF0000", "#DD1111", "#BB2222", "#993333"),
maxRadius = 0.8, range = c(2,12))
# Overlap point of interest on previous track, fix range to use a similar scale
tracks = tracks + BioCircosSNPTrack("testSNP3", "7", 1, 9, maxRadius = 0.8, size = 6,
range = c(2,12))
# Background and text tracks
tracks = tracks + BioCircosBackgroundTrack("testBGtrack1", minRadius = 0.3, maxRadius = 0.45,
borderColors = "#FFFFFF", borderSize = 0.6)
tracks = tracks + BioCircosBackgroundTrack("testBGtrack2", borderColors = "#FFFFFF",
fillColor = "#FFEEEE", borderSize = 0.6, maxRadius = 0.8)
tracks = tracks + BioCircosTextTrack("testText", 'BioCircos!', weight = "lighter",
x = - 0.17, y = - 0.87)
# Arc track
arcsEnds = round(runif(7, 50000001, 133851895))
arcsLengths = round(runif(7, 1, 50000000))
tracks = tracks + BioCircosArcTrack("fredTestArc", as.character(sample(1:12, 7, replace=T)),
starts = arcsEnds - arcsLengths, ends = arcsEnds, labels = 1:7,
maxRadius = 0.97, minRadius = 0.83)
# Link tracks
linkPos1 = round(runif(5, 1, 50000000))
linkPos2 = round(runif(5, 1, 50000000))
chr1 = sample(1:22, 5, replace = T)
chr2 = sample(1:22, 5, replace = T)
linkPos3 = round(runif(5, 1, 50000000))
linkPos4 = round(runif(5, 1, 50000000))
chr3 = sample(1:22, 5, replace = T)
chr4 = sample(1:22, 5, replace = T)
tracks = tracks + BioCircosLinkTrack("testLink", gene1Chromosomes = chr1,
gene1Starts = linkPos1, gene1Ends = linkPos1+1, gene2Chromosomes = chr2, axisPadding = 6,
color = "#EEEE55", width = "0.3em", labels = paste(chr1, chr2, sep = "*"), displayLabel = F,
gene2Starts = linkPos2, gene2Ends = linkPos2+1, maxRadius = 0.42)
tracks = tracks + BioCircosLinkTrack("testLink2", gene1Chromosomes = chr3,
gene1Starts = linkPos3, gene1Ends = linkPos3+5000000, axisPadding = 6, displayLabel = F,
color = "#FF6666", labels = paste(chr3, chr4, sep = "-"), gene2Chromosomes = chr4,
gene2Starts = linkPos4, gene2Ends = linkPos4+2500000, maxRadius = 0.42)
# Display the BioCircos visualization
BioCircos(tracks, genomeFillColor = "Spectral", yChr = T, chrPad = 0, displayGenomeBorder = F,
genomeTicksLen = 3, genomeTicksTextSize = 0, genomeTicksScale = 50000000,
genomeLabelTextSize = 18, genomeLabelDy = 0)
## ----sessionINFO, screenshot.force = FALSE-------------------------------
sessionInfo()
/scratch/gouwar.j/cran-all/cranData/BioCircos/inst/doc/BioCircos.R
---
title: "BioCircos: Generating circular multi-track plots"
output:
rmarkdown::html_vignette:
toc: true
toc_depth: 2
vignette: >
%\VignetteIndexEntry{BioCircos: Generating circular multi-track plots}
%\VignetteEngine{knitr::rmarkdown}
%\usepackage[utf8]{inputenc}
date: "`r Sys.Date()`"
author: "Loan Vulliard"
---
## Introduction
This package allows Circos-like visualizations of genomic data to be implemented in R, as proposed by the BioCircos.js JavaScript library, which is based on the jQuery and D3 technologies.
We will demonstrate here how to easily generate such plots and which main parameters can be used to customize them. Each example can be run independently of the others.
For a complete list of all the available parameters, please refer to the package documentation.
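For instance, the documentation index and the help page of the main function can be opened from the R console (a minimal sketch, assuming the package is already installed):
```{r eval=FALSE, screenshot.force = FALSE}
# Browse the package documentation and the help page of the main function
help(package = "BioCircos")
?BioCircos
```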
## Motivation
The amount of data produced nowadays in many different fields underscores the relevance of reactive analyses and interactive display of the results. This is especially true in biology, where the cost of sequencing has dropped much faster than Moore's law would predict. New ways of integrating different levels of information and accelerating their interpretation are therefore needed.
The integration challenge is of major importance, as it allows a deeper understanding of the underlying biological phenomena, which cannot be observed in any of the single analyses independently.
This package aims at offering an easy way of producing Circos-like visualizations to face distinct challenges:
* On the one hand, data integration and visualization: Circos is a popular tool to combine different biological information on a single plot.
* On the other hand, reactivity and interactivity: thanks to the *htmlwidgets* framework, the figures produced by this package are responsive to mouse events and display useful tooltips, and they can be integrated in shiny apps. Once the analyses have been performed and the shiny app coded, it is possible for the end-user to explore a massive amount of biological data without any programming or bioinformatics knowledge.
The terminology used here arises from genomics, but this tool may be of interest in other situations where positional or temporal information from different sources must be combined.
## Installation
To install this package, you can use CRAN (the central R package repository) to get the latest stable release, or build the latest development version directly from the GitHub repository.
### From CRAN
```{r eval=FALSE, screenshot.force = FALSE}
install.packages('BioCircos')
```
### From Github
```{r eval=FALSE, screenshot.force = FALSE}
# You need devtools for that
if (!require('devtools')){install.packages('devtools')}
devtools::install_github('lvulliard/BioCircos.R', build_vignettes = TRUE)
```
## Generating Circos-like visualizations
### Principle
To produce a BioCircos visualization, you need to call the *BioCircos* method, which accepts a *tracklist* containing the different *tracks* to be displayed, the genome of interest and plotting parameters.
By default, an empty *tracklist* is used, and the genome is automatically set to use the chromosome sizes of the reference genome hg19 (GRCh37).
```{r}
library(BioCircos)
BioCircos()
```
### Genome configuration
A genome needs to be set in order to map all the coordinates of the tracks on it.
For now, the only pre-configured genome available is *hg19* (GRCh37), for which the lengths of the 22 autosomal chromosome pairs and of the sex chromosomes are available. The Y chromosome can be removed using the *yChr* parameter. Visual parameters are also available, such as the colors of each chromosome, set with a vector of colors or an *RColorBrewer* palette (parameter *genomeFillColor*), the space between chromosomes (*chrPad*) or their borders (*displayGenomeBorder*).
The ticks, displaying the scale on each chromosome, can be removed with *genomeTicksDisplay*, and the genome labels (chromosome names) can be brought closer or further away from the chromosomes with *genomeLabelDy*.
```{r, screenshot.force = FALSE}
library(BioCircos)
BioCircos(genome = "hg19", yChr = FALSE, genomeFillColor = "Reds", chrPad = 0,
displayGenomeBorder = FALSE, genomeTicksDisplay = FALSE, genomeLabelDy = 0)
```
To use your own reference genome, you need to define a named list of chromosomal lengths and use it as the *genome* parameter. The names and lengths should match the coordinates you plan on using later for your tracks.
You may want to change the scale of the ticks on the chromosomes to fit your reference genome, with the *genomeTicksScale* parameter.
```{r, screenshot.force = FALSE}
library(BioCircos)
myGenome = list("A" = 10560,
"B" = 8808,
"C" = 12014,
"D" = 7664,
"E" = 9403,
"F" = 8661)
BioCircos(genome = myGenome, genomeFillColor = c("tomato2", "darkblue"),
genomeTicksScale = 4e+3)
```
Another use of a custom genome can be seen in the [Bar tracks section](#barSection).
### Tracklists
The different levels of information will be displayed on different *tracks* of different types and located at different radii on the visualization. All the track-generating functions of this package return tracklists that can be added together into a single tracklist, to be given as the *tracks* argument of the *BioCircos* method.
The different kinds of tracks are presented in the following sections.
All tracks need to be named.
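As a minimal sketch of how tracklists combine (using the text and background tracks introduced in the next sections), the tracklists returned by two track-generating functions can simply be added together before being passed to *BioCircos*:
```{r, screenshot.force = FALSE}
library(BioCircos)
# Two tracklists returned by track-generating functions, combined with '+'
tracklist = BioCircosBackgroundTrack("myBackground", minRadius = 0.4, maxRadius = 0.8)
tracklist = tracklist + BioCircosTextTrack("myLabel", "Combined tracks", x = -0.3, y = 0)
BioCircos(tracklist, genomeTicksDisplay = FALSE, genomeLabelDy = 0)
```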
## Text track
A first track type simply corresponds to text annotations. The required parameters are the track name and the text to be displayed.
Some parameters such as the size, the opacity and the coordinates can be customized.
```{r, screenshot.force = FALSE}
library(BioCircos)
tracklist = BioCircosTextTrack('myTextTrack', 'Some text', size = "2em", opacity = 0.5,
x = -0.67, y = -0.5)
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0, displayGenomeBorder = FALSE,
genomeTicksLen = 2, genomeTicksTextSize = 0, genomeTicksScale = 1e+8,
genomeLabelTextSize = "9pt", genomeLabelDy = 0)
```
## Background track
Another simple track type corresponds to backgrounds, displayed under the other tracks in a given radius interval.
```{r, screenshot.force = FALSE}
library(BioCircos)
tracklist = BioCircosBackgroundTrack("myBackgroundTrack", minRadius = 0.5, maxRadius = 0.8,
borderColors = "#AAAAAA", borderSize = 0.6, fillColors = "#FFBBBB")
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.05, displayGenomeBorder = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = "9pt", genomeLabelDy = 0)
```
## SNP track
To map punctual information associated with a single numerical value on the reference genome, such as a variant or an SNP associated with a confidence score, SNP tracks can be used.
You therefore need to specify the chromosome and coordinate where each point is mapped, as well as the corresponding value, which will be used to compute the radial coordinate of each point.
By default, points display a tooltip when hovered over with the mouse.
```{r, screenshot.force = FALSE}
library(BioCircos)
# Chromosomes on which the points should be displayed
points_chromosomes = c('X', '2', '7', '13', '9')
# Coordinates on which the points should be displayed
points_coordinates = c(102621, 140253678, 98567307, 28937403, 20484611)
# Values associated with each point, used as radial coordinate
# on a scale going to minRadius for the lowest value to maxRadius for the highest value
points_values = 0:4
tracklist = BioCircosSNPTrack('mySNPTrack', points_chromosomes, points_coordinates,
points_values, colors = c("tomato2", "darkblue"), minRadius = 0.5, maxRadius = 0.9)
# Background are always placed below other tracks
tracklist = tracklist + BioCircosBackgroundTrack("myBackgroundTrack",
minRadius = 0.5, maxRadius = 0.9,
borderColors = "#AAAAAA", borderSize = 0.6, fillColors = "#B3E6FF")
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.05, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = 18, genomeLabelDy = 0)
```
## Arc track
Arc tracks display arcs along the genomic circle, between the radii given by the *minRadius* and *maxRadius* parameters. As for SNP tracks, the chromosomes and coordinates (here corresponding to the beginning and end of each arc) should be specified.
By default, arcs display a tooltip when hovered over with the mouse.
```{r, screenshot.force = FALSE}
library(BioCircos)
arcs_chromosomes = c('X', 'X', '2', '9') # Chromosomes on which the arcs should be displayed
arcs_begin = c(1, 45270560, 140253678, 20484611)
arcs_end = c(155270560, 145270560, 154978472, 42512974)
tracklist = BioCircosArcTrack('myArcTrack', arcs_chromosomes, arcs_begin, arcs_end,
minRadius = 1.18, maxRadius = 1.25, opacities = c(0.4, 0.4, 1, 0.8))
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.02, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = 0)
```
## Link track
Link tracks represent links between different genomic positions. They are displayed at the center of the visualization and extend out to the radius specified by the *maxRadius* parameter. The chromosomes and the beginning and end positions of the regions to be linked are required, and labels can be added.
By default, links display a tooltip when hovered over with the mouse.
```{r, screenshot.force = FALSE}
library(BioCircos)
links_chromosomes_1 = c('X', '2', '9') # Chromosomes on which the links should start
links_chromosomes_2 = c('3', '18', '9') # Chromosomes on which the links should end
links_pos_1 = c(155270560, 154978472, 42512974)
links_pos_2 = c(102621477, 140253678, 20484611)
links_labels = c("Link 1", "Link 2", "Link 3")
tracklist = BioCircosBackgroundTrack("myBackgroundTrack", minRadius = 0, maxRadius = 0.55,
borderSize = 0, fillColors = "#EEFFEE")
tracklist = tracklist + BioCircosLinkTrack('myLinkTrack', links_chromosomes_1, links_pos_1,
links_pos_1 + 50000000, links_chromosomes_2, links_pos_2, links_pos_2 + 750000,
maxRadius = 0.55, labels = links_labels)
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.02, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = "8pt", genomeLabelDy = 0)
```
## Bar tracks {#barSection}
Bar plots can be added as another type of track. The start and end coordinates of each bar, as well as the associated value, need to be specified.
By default, the radial range of the track will stretch from the minimum to the maximum value of the track, but other boundaries may be specified with the *range* parameter.
Here, to add a track to the tracklist at each iteration of the loop, we initialize the *tracks* tracklist with an empty *BioCircosTracklist* object.
```{r figBarTrack, fig.width=4, fig.height=4, fig.align = 'center', screenshot.force = FALSE}
library(BioCircos)
library(RColorBrewer)
library(grDevices)
# Define a custom genome
genomeChr = LETTERS
lengthChr = 5*1:length(genomeChr)
names(lengthChr) <- genomeChr
tracks = BioCircosTracklist()
# Add one track for each chromosome
for (i in 1:length(genomeChr)){
# Define histogram/bars to be displayed
nbBars = lengthChr[i] - 1
barValues = sapply(1:nbBars, function(x) 10 + nbBars%%x)
barColor = colorRampPalette(brewer.pal(8, "YlOrBr"))(length(genomeChr))[i]
# Add a track with bars on the i-th chromosome
tracks = tracks + BioCircosBarTrack(paste0("bars", i), chromosome = genomeChr[i],
starts = (1:nbBars) - 1, ends = 1:nbBars, values = barValues, color = barColor,
range = c(5,75))
}
# Add background
tracks = tracks + BioCircosBackgroundTrack("bars_background", colors = "#2222EE")
BioCircos(tracks, genomeFillColor = "YlOrBr", genome = as.list(lengthChr),
genomeTicksDisplay = F, genomeLabelDy = 0)
```
## CNV tracks {#cnvSection}
Conceptually close to bar tracks, and commonly used for purposes such as the representation of copy number variants, CNV tracks consist of arcs whose radial position reflects the value associated with a stretch of the genome.
The start and end coordinates of each arc, as well as the associated value need to be specified.
```{r figCNVTrack, screenshot.force = FALSE}
library(BioCircos)
# Arcs coordinates
snvChr = rep(4:9, 3)
snvStart = c(rep(1,6), rep(40000000,6), rep(100000000,6))
snvEnd = c(rep(39999999,6), rep(99999999,6),
191154276, 180915260, 171115067, 159138663, 146364022, 141213431)
# Values associated with each point, used as radial coordinate
# on a scale going to minRadius for the lowest value to maxRadius for the highest value
snvValues = (1:18%%5)+1
# Create CNV track
tracks = BioCircosCNVTrack('cnv_track', as.character(snvChr), snvStart, snvEnd, snvValues,
color = "#CC0000", range = c(0,6))
# Add background
tracks = tracks + BioCircosBackgroundTrack("arcs_background", colors = "#2222EE")
BioCircos(tracks, genomeFillColor = "YlOrBr", genomeTicksDisplay = F, genomeLabelDy = 0)
```
## Heatmap tracks {#heatSection}
For a given genomic stretch, heatmaps linearly map numerical values onto a color range.
For two-dimensional heatmaps, you can stack up *heatmap tracks*, as done in the following example.
```{r, screenshot.force = FALSE}
library(BioCircos)
# Define a custom genome
genomeChr = LETTERS[1:10]
lengthChr = 5*1:length(genomeChr)
names(lengthChr) <- genomeChr
# Define boxes positions
boxPositions = unlist(sapply(lengthChr, seq))
boxChromosomes = rep(genomeChr, lengthChr)
# Define values for two heatmap tracks
boxVal1 = boxPositions %% 13 / 13
boxVal2 = (7 + boxPositions) %% 17 / 17
tracks = BioCircosHeatmapTrack("heatmap1", boxChromosomes, boxPositions - 1, boxPositions,
boxVal1, minRadius = 0.6, maxRadius = 0.75)
tracks = tracks + BioCircosHeatmapTrack("heatmap1", boxChromosomes, boxPositions - 1,
boxPositions, boxVal2, minRadius = 0.75, maxRadius = 0.9, color = c("#FFAAAA", "#000000"))
BioCircos(tracks, genome = as.list(lengthChr), genomeTicksDisplay = F, genomeLabelDy = 0,
HEATMAPMouseOverColor = "#F3C73A")
```
## Line tracks
*Line tracks* display segments along a track. They are defined by the set of vertices that will be joined to produce the segments.
If the vertices provided span multiple chromosomes, the segments between the last point on a chromosome and the first point on the next chromosome will be discarded.
```{r, screenshot.force = FALSE}
chrVert = rep(c(1, 3, 5), c(20,10,5))
posVert = c(249250621*log(c(20:1, 10:1, 5:1), base = 20))
tracks = BioCircosLineTrack('LineTrack', as.character(chrVert), posVert, values = cos(posVert))
tracks = tracks + BioCircosLineTrack('LineTrack2', as.character(chrVert+1), 0.95*posVert,
values = sin(posVert), color = "#40D4B9")
tracks = tracks + BioCircosBackgroundTrack('Bg', fillColors = '#FFEEBB', borderSize = 0)
BioCircos(tracks, chrPad = 0.05, displayGenomeBorder = FALSE, LINEMouseOutDisplay = FALSE,
LINEMouseOverTooltipsHtml01 = "Pretty lines<br/>This tooltip won't go away!")
```
## Removing track
Tracks can be removed from a tracklist by subtracting the name of the corresponding track.
```{r, screenshot.force = FALSE}
library(BioCircos)
# Create a tracklist with a text annotation and backgrounds
tracklist = BioCircosTextTrack('t1', 'hi')
tracklist = tracklist + BioCircosBackgroundTrack('b1')
# Remove the text annotation and display the result
BioCircos(tracklist - 't1')
```
## Multi-track example
You can combine and overlap as many tracks as you want.
```{r figMultiTrack, fig.width=5, fig.height=5, fig.align = 'center', screenshot.force = FALSE}
library(BioCircos)
# Fix random generation for reproducibility
set.seed(3)
# SNP tracks
tracks = BioCircosSNPTrack("testSNP1", as.character(rep(1:10,10)),
round(runif(100, 1, 135534747)),
runif(100, 0, 10), colors = "Spectral", minRadius = 0.3, maxRadius = 0.45)
tracks = tracks + BioCircosSNPTrack("testSNP2", as.character(rep(1:15,5)),
round(runif(75, 1, 102531392)),
runif(75, 2, 12), colors = c("#FF0000", "#DD1111", "#BB2222", "#993333"),
maxRadius = 0.8, range = c(2,12))
# Overlap point of interest on previous track, fix range to use a similar scale
tracks = tracks + BioCircosSNPTrack("testSNP3", "7", 1, 9, maxRadius = 0.8, size = 6,
range = c(2,12))
# Background and text tracks
tracks = tracks + BioCircosBackgroundTrack("testBGtrack1", minRadius = 0.3, maxRadius = 0.45,
borderColors = "#FFFFFF", borderSize = 0.6)
tracks = tracks + BioCircosBackgroundTrack("testBGtrack2", borderColors = "#FFFFFF",
fillColor = "#FFEEEE", borderSize = 0.6, maxRadius = 0.8)
tracks = tracks + BioCircosTextTrack("testText", 'BioCircos!', weight = "lighter",
x = - 0.17, y = - 0.87)
# Arc track
arcsEnds = round(runif(7, 50000001, 133851895))
arcsLengths = round(runif(7, 1, 50000000))
tracks = tracks + BioCircosArcTrack("fredTestArc", as.character(sample(1:12, 7, replace=T)),
starts = arcsEnds - arcsLengths, ends = arcsEnds, labels = 1:7,
maxRadius = 0.97, minRadius = 0.83)
# Link tracks
linkPos1 = round(runif(5, 1, 50000000))
linkPos2 = round(runif(5, 1, 50000000))
chr1 = sample(1:22, 5, replace = T)
chr2 = sample(1:22, 5, replace = T)
linkPos3 = round(runif(5, 1, 50000000))
linkPos4 = round(runif(5, 1, 50000000))
chr3 = sample(1:22, 5, replace = T)
chr4 = sample(1:22, 5, replace = T)
tracks = tracks + BioCircosLinkTrack("testLink", gene1Chromosomes = chr1,
gene1Starts = linkPos1, gene1Ends = linkPos1+1, gene2Chromosomes = chr2, axisPadding = 6,
color = "#EEEE55", width = "0.3em", labels = paste(chr1, chr2, sep = "*"), displayLabel = F,
gene2Starts = linkPos2, gene2Ends = linkPos2+1, maxRadius = 0.42)
tracks = tracks + BioCircosLinkTrack("testLink2", gene1Chromosomes = chr3,
gene1Starts = linkPos3, gene1Ends = linkPos3+5000000, axisPadding = 6, displayLabel = F,
color = "#FF6666", labels = paste(chr3, chr4, sep = "-"), gene2Chromosomes = chr4,
gene2Starts = linkPos4, gene2Ends = linkPos4+2500000, maxRadius = 0.42)
# Display the BioCircos visualization
BioCircos(tracks, genomeFillColor = "Spectral", yChr = T, chrPad = 0, displayGenomeBorder = F,
genomeTicksLen = 3, genomeTicksTextSize = 0, genomeTicksScale = 50000000,
genomeLabelTextSize = 18, genomeLabelDy = 0)
```
## Contact
To report bugs, request features or for any question or remark regarding this package, please use the <a href="https://github.com/lvulliard/BioCircos.R">GitHub page</a> or contact <a href="mailto:[email protected]">Loan Vulliard</a>.
## Credits
The creation and implementation of the **BioCircos.js** JavaScript library is an independent work attributed to <a href="mailto:[email protected]">Ya Cui</a> and <a href="mailto:[email protected]">Xiaowei Chen</a>.
This work is described in the following scientific article: BioCircos.js: an Interactive Circos JavaScript Library for Biological Data Visualization on Web Applications. Cui, Y., et al. Bioinformatics. (2016).
This package relies on several open source projects and other R packages, and is made possible thanks to **shiny** and **htmlwidgets**.
The package **heatmaply** was used as a model for this vignette, as well as for the **htmlwidgets** configuration.
## Session info
```{r sessionINFO, screenshot.force = FALSE}
sessionInfo()
```
/scratch/gouwar.j/cran-all/cranData/BioCircos/inst/doc/BioCircos.Rmd
---
title: "BioCircos: Generating circular multi-track plots"
output:
rmarkdown::html_vignette:
toc: true
toc_depth: 2
vignette: >
%\VignetteIndexEntry{BioCircos: Generating circular multi-track plots}
%\VignetteEngine{knitr::rmarkdown}
%\usepackage[utf8]{inputenc}
date: "`r Sys.Date()`"
author: "Loan Vulliard"
---
## Introduction
This package allows to implement in 'R' Circos-like visualizations of genomic data, as proposed by the BioCircos.js JavaScript library, based on the JQuery and D3 technologies.
We will demonstrate here how to generate easily such plots and what are the main parameters to customize them. Each example can be run independently of the others.
For a complete list of all the parameters available, please refer to the package documentation.
## Motivation
The amount of data produced nowadays in a lot of different fields assesses the relevance of reactive analyses and interactive display of the results. This especially true in biology, where the cost of sequencing data has dropped must faster than the Moore's law prediction. New ways of integrating different level of information and accelerating the interpretation are therefore needed.
The integration challenge appears to be of major importance, as it allows a deeper understanding of the biological phenomena happening, that cannot be observed in the single analyses independently.
This package aims at offering an easy way of producing Circos-like visualizations to face distinct challenges :
* On the one hand, data integration and visualization: Circos is a popular tool to combine different biological information on a single plot.
* On the other hand, reactivity and interactivity: thanks to the *htmlwidgets* framework, the figures produced by this package are responsive to mouse events and display useful tooltips, and they can be integrated in shiny apps. Once the analyses have been performed and the shiny app coded, it is possible for the end-user to explore a massive amount of biological data without any programming or bioinformatics knowledge.
The terminology used here arises from genomics but this tool may be of interest for different situations where different positional or temporal informations must be combined.
## Installation
To install this package, you can use CRAN (the central R package repository) to get the last stable release or build the last development version directly from the GitHub repository.
### From CRAN
```{r eval=FALSE, screenshot.force = FALSE}
install.packages('BioCircos')
```
### From Github
```{r eval=FALSE, screenshot.force = FALSE}
# You need devtools for that
if (!require('devtools')){install.packages('devtools')}
devtools::install_github('lvulliard/BioCircos.R', build_vignettes = TRUE)
```
## Generating Circos-like visualizations
### Principle
To produce a BioCircos visualization, you need to call the *BioCircos* method, that accepts a *tracklist* containing the different *tracks* to be displayed, the genome to be displayed and plotting parameters.
By default, an empty *tracklist* is used, and the genome is automatically set to use the chromosome sizes of the reference genome hg19 (GRCh37).
```{r}
library(BioCircos)
BioCircos()
```
### Genome configuration
A genome needs to be set in order to map all the coordinates of the tracks on it.
For now, the only pre-configured genome available is *hg19* (GRCh37), for which the length of the main 22 genomic autosomal chromosome pairs and of the sexual chromosomes are available. The Y chromosome can be removed using the *ychr* parameter. Visual parameters are also available, such as by giving a vector of colors or a *RColorBrewer* palette to change the colors of each chromosome (parameter *genomeFillColor*), the space between each chromosome (*chrPad*) or their borders (*displayGenomeBorder*).
The ticks, displaying the scale on each chromosome, can be removed with *genomeTicksDisplay*, and the genome labels (chromosome names) can be brought closer or further away from the chromosomes with *genomeLabelDy*.
```{r, screenshot.force = FALSE}
library(BioCircos)
BioCircos(genome = "hg19", yChr = FALSE, genomeFillColor = "Reds", chrPad = 0,
displayGenomeBorder = FALSE, genomeTicksDisplay = FALSE, genomeLabelDy = 0)
```
To use your own reference genome, you need to define a named list of chromosomal lengths and use it as the *genome* parameter. The names and lengths should match the coordinates you plan on using later for your tracks.
You may want to change the scale of the ticks on the chromosomes, to fit to your reference genome, with the *genomeTickScale* parameters.
```{r, screenshot.force = FALSE}
library(BioCircos)
myGenome = list("A" = 10560,
"B" = 8808,
"C" = 12014,
"D" = 7664,
"E" = 9403,
"F" = 8661)
BioCircos(genome = myGenome, genomeFillColor = c("tomato2", "darkblue"),
genomeTicksScale = 4e+3)
```
Another use of a custom genome can be seen in the [Bar tracks section](#barSection).
### Tracklists
The different levels of information will be displayed on different *tracks* of different types and located at different radii on the visualization. All the track-generating functions of this package return tracklists that can be added together into a single tracklist, to be given as the *tracks* argument of the *BioCircos* method.
The different kinds of tracks are presented in the following sections.
All tracks need to be named.
## Text track
A first track simply corresponds to text annotations. The obligatory parameters are the track name and the text to be displayed.
Some parameters such as the size, the opacity and the coordinates can be customized.
```{r, screenshot.force = FALSE}
library(BioCircos)
tracklist = BioCircosTextTrack('myTextTrack', 'Some text', size = "2em", opacity = 0.5,
x = -0.67, y = -0.5)
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0, displayGenomeBorder = FALSE,
genomeTicksLen = 2, genomeTicksTextSize = 0, genomeTicksScale = 1e+8,
genomeLabelTextSize = "9pt", genomeLabelDy = 0)
```
## Background track
Another simple track type correspond to backgrounds, displayed under other tracks, in a given radius interval.
```{r, screenshot.force = FALSE}
library(BioCircos)
tracklist = BioCircosBackgroundTrack("myBackgroundTrack", minRadius = 0.5, maxRadius = 0.8,
borderColors = "#AAAAAA", borderSize = 0.6, fillColors = "#FFBBBB")
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.05, displayGenomeBorder = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = "9pt", genomeLabelDy = 0)
```
## SNP track
To map punctual information associated with a single-dimensional value on the reference genome, such as a variant or an SNP associated with a confidence score, SNP tracks can be used.
It is therefore needed to specify the chromosome and coordinates where each points are mapped, as well as the corresponding value, which will be used to compute the radial coordinate of the points.
By default, points display a tooltip when hovered by the mouse.
```{r, screenshot.force = FALSE}
library(BioCircos)
# Chromosomes on which the points should be displayed
points_chromosomes = c('X', '2', '7', '13', '9')
# Coordinates on which the points should be displayed
points_coordinates = c(102621, 140253678, 98567307, 28937403, 20484611)
# Values associated with each point, used as radial coordinate
# on a scale going to minRadius for the lowest value to maxRadius for the highest value
points_values = 0:4
tracklist = BioCircosSNPTrack('mySNPTrack', points_chromosomes, points_coordinates,
points_values, colors = c("tomato2", "darkblue"), minRadius = 0.5, maxRadius = 0.9)
# Background are always placed below other tracks
tracklist = tracklist + BioCircosBackgroundTrack("myBackgroundTrack",
minRadius = 0.5, maxRadius = 0.9,
borderColors = "#AAAAAA", borderSize = 0.6, fillColors = "#B3E6FF")
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.05, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = 18, genomeLabelDy = 0)
```
## Arc track
Arc tracks are displaying arcs along the genomic circle, between the radii given as the *minRadius* and *maxRadius* parameters. As for an SNP track, the chromosome and coordinates (here corresponding to the beginning and end of each arc) should be specified.
By default, arcs display a tooltip when hovered by the mouse.
```{r, screenshot.force = FALSE}
library(BioCircos)
arcs_chromosomes = c('X', 'X', '2', '9') # Chromosomes on which the arcs should be displayed
arcs_begin = c(1, 45270560, 140253678, 20484611)
arcs_end = c(155270560, 145270560, 154978472, 42512974)
tracklist = BioCircosArcTrack('myArcTrack', arcs_chromosomes, arcs_begin, arcs_end,
minRadius = 1.18, maxRadius = 1.25, opacities = c(0.4, 0.4, 1, 0.8))
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.02, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = 0)
```
## Link track
Links track represent links between different genomic position. They are displayed at the center of the visualization, and out to a radius specified by the *maxRadius* parameter. The chromosomes and beginning and end positions of the regions to be linked are necessary, and labels can be added.
By default, links display a tooltip when hovered by the mouse.
```{r, screenshot.force = FALSE}
library(BioCircos)
links_chromosomes_1 = c('X', '2', '9') # Chromosomes on which the links should start
links_chromosomes_2 = c('3', '18', '9') # Chromosomes on which the links should end
links_pos_1 = c(155270560, 154978472, 42512974)
links_pos_2 = c(102621477, 140253678, 20484611)
links_labels = c("Link 1", "Link 2", "Link 3")
tracklist = BioCircosBackgroundTrack("myBackgroundTrack", minRadius = 0, maxRadius = 0.55,
borderSize = 0, fillColors = "#EEFFEE")
tracklist = tracklist + BioCircosLinkTrack('myLinkTrack', links_chromosomes_1, links_pos_1,
links_pos_1 + 50000000, links_chromosomes_2, links_pos_2, links_pos_2 + 750000,
maxRadius = 0.55, labels = links_labels)
BioCircos(tracklist, genomeFillColor = "PuOr",
chrPad = 0.02, displayGenomeBorder = FALSE, yChr = FALSE,
genomeTicksDisplay = FALSE, genomeLabelTextSize = "8pt", genomeLabelDy = 0)
```
## Bar tracks {#barSection}
Bar plots may be added on another type of tracks. The start and end coordinates of each bar, as well as the associated value need to be specified.
By default, the radial range of the track will stretch from the minimal to the maximum value of the track, but other boundaries may be specified with the *range* parameter.
Here, to add a track to the tracklist at each iteration of the loop, we initialize the *tracks* tracklist with an empty *BioCircosTracklist* object.
```{r figBarTrack, fig.width=4, fig.height=4, fig.align = 'center', screenshot.force = FALSE}
library(BioCircos)
library(RColorBrewer)
library(grDevices)
# Define a custom genome
genomeChr = LETTERS
lengthChr = 5*1:length(genomeChr)
names(lengthChr) <- genomeChr
tracks = BioCircosTracklist()
# Add one track for each chromosome
for (i in 1:length(genomeChr)){
# Define histogram/bars to be displayed
nbBars = lengthChr[i] - 1
barValues = sapply(1:nbBars, function(x) 10 + nbBars%%x)
barColor = colorRampPalette(brewer.pal(8, "YlOrBr"))(length(genomeChr))[i]
# Add a track with bars on the i-th chromosome
tracks = tracks + BioCircosBarTrack(paste0("bars", i), chromosome = genomeChr[i],
starts = (1:nbBars) - 1, ends = 1:nbBars, values = barValues, color = barColor,
range = c(5,75))
}
# Add background
tracks = tracks + BioCircosBackgroundTrack("bars_background", colors = "#2222EE")
BioCircos(tracks, genomeFillColor = "YlOrBr", genome = as.list(lengthChr),
genomeTicksDisplay = F, genomeLabelDy = 0)
```
## CNV tracks {#cnvSection}
Conceptually close to bar tracks, and commonly used for purposes such as representation of copy number variants, the CNV tracks consist of arcs at a given radial distance showing a value associated with a genome stretch.
The start and end coordinates of each arc, as well as the associated value need to be specified.
```{r figCNVTrack, screenshot.force = FALSE}
library(BioCircos)
# Arcs coordinates
snvChr = rep(4:9, 3)
snvStart = c(rep(1,6), rep(40000000,6), rep(100000000,6))
snvEnd = c(rep(39999999,6), rep(99999999,6),
191154276, 180915260, 171115067, 159138663, 146364022, 141213431)
# Values associated with each point, used as radial coordinate
# on a scale going to minRadius for the lowest value to maxRadius for the highest value
snvValues = (1:18%%5)+1
# Create CNV track
tracks = BioCircosCNVTrack('cnv_track', as.character(snvChr), snvStart, snvEnd, snvValues,
color = "#CC0000", range = c(0,6))
# Add background
tracks = tracks + BioCircosBackgroundTrack("arcs_background", colors = "#2222EE")
BioCircos(tracks, genomeFillColor = "YlOrBr", genomeTicksDisplay = F, genomeLabelDy = 0)
```
## Heatmap tracks {#heatSection}
For a given genome stretch, heatmaps associate linearly numerical values with a color range.
For two-dimensional heatmaps, you can stack up *heatmap tracks*, as done in the following example.
```{r, screenshot.force = FALSE}
library(BioCircos)
# Define a custom genome
genomeChr = LETTERS[1:10]
lengthChr = 5*1:length(genomeChr)
names(lengthChr) <- genomeChr
# Define boxes positions
boxPositions = unlist(sapply(lengthChr, seq))
boxChromosomes = rep(genomeChr, lengthChr)
# Define values for two heatmap tracks
boxVal1 = boxPositions %% 13 / 13
boxVal2 = (7 + boxPositions) %% 17 / 17
tracks = BioCircosHeatmapTrack("heatmap1", boxChromosomes, boxPositions - 1, boxPositions,
boxVal1, minRadius = 0.6, maxRadius = 0.75)
tracks = tracks + BioCircosHeatmapTrack("heatmap1", boxChromosomes, boxPositions - 1,
boxPositions, boxVal2, minRadius = 0.75, maxRadius = 0.9, color = c("#FFAAAA", "#000000"))
BioCircos(tracks, genome = as.list(lengthChr), genomeTicksDisplay = F, genomeLabelDy = 0,
HEATMAPMouseOverColor = "#F3C73A")
```
## Line tracks
The *Line tracks* display segments on a track. They are defined by the set of vertices that will be joined to produce the segments.
If the vertices provided span multiple chromosomes, the segments between the last point on a chromosome and the first point on the next chromosome will be discarded.
```{r, screenshot.force = FALSE}
chrVert = rep(c(1, 3, 5), c(20,10,5))
posVert = c(249250621*log(c(20:1, 10:1, 5:1), base = 20))
tracks = BioCircosLineTrack('LineTrack', as.character(chrVert), posVert, values = cos(posVert))
tracks = tracks + BioCircosLineTrack('LineTrack2', as.character(chrVert+1), 0.95*posVert,
values = sin(posVert), color = "#40D4B9")
tracks = tracks + BioCircosBackgroundTrack('Bg', fillColors = '#FFEEBB', borderSize = 0)
BioCircos(tracks, chrPad = 0.05, displayGenomeBorder = FALSE, LINEMouseOutDisplay = FALSE,
LINEMouseOverTooltipsHtml01 = "Pretty lines<br/>This tooltip won't go away!")
```
## Removing track
Tracks can be removed from a track list by substracting the name of the corresponding track.
```{r, screenshot.force = FALSE}
library(BioCircos)
# Create a tracklist with a text annotation and backgrounds
tracklist = BioCircosTextTrack('t1', 'hi')
tracklist = tracklist + BioCircosBackgroundTrack('b1')
# Remove the text annotation and display the result
BioCircos(tracklist - 't1')
```
## Multi-track example
You can combine and overlap as many tracks as you want.
```{r figMultiTrack, fig.width=5, fig.height=5, fig.align = 'center', screenshot.force = FALSE}
library(BioCircos)
# Fix random generation for reproducibility
set.seed(3)
# SNP tracks
tracks = BioCircosSNPTrack("testSNP1", as.character(rep(1:10,10)),
round(runif(100, 1, 135534747)),
runif(100, 0, 10), colors = "Spectral", minRadius = 0.3, maxRadius = 0.45)
tracks = tracks + BioCircosSNPTrack("testSNP2", as.character(rep(1:15,5)),
round(runif(75, 1, 102531392)),
runif(75, 2, 12), colors = c("#FF0000", "#DD1111", "#BB2222", "#993333"),
maxRadius = 0.8, range = c(2,12))
# Overlap point of interest on previous track, fix range to use a similar scale
tracks = tracks + BioCircosSNPTrack("testSNP3", "7", 1, 9, maxRadius = 0.8, size = 6,
range = c(2,12))
# Background and text tracks
tracks = tracks + BioCircosBackgroundTrack("testBGtrack1", minRadius = 0.3, maxRadius = 0.45,
borderColors = "#FFFFFF", borderSize = 0.6)
tracks = tracks + BioCircosBackgroundTrack("testBGtrack2", borderColors = "#FFFFFF",
fillColor = "#FFEEEE", borderSize = 0.6, maxRadius = 0.8)
tracks = tracks + BioCircosTextTrack("testText", 'BioCircos!', weight = "lighter",
x = - 0.17, y = - 0.87)
# Arc track
arcsEnds = round(runif(7, 50000001, 133851895))
arcsLengths = round(runif(7, 1, 50000000))
tracks = tracks + BioCircosArcTrack("fredTestArc", as.character(sample(1:12, 7, replace=T)),
starts = arcsEnds - arcsLengths, ends = arcsEnds, labels = 1:7,
maxRadius = 0.97, minRadius = 0.83)
# Link tracks
linkPos1 = round(runif(5, 1, 50000000))
linkPos2 = round(runif(5, 1, 50000000))
chr1 = sample(1:22, 5, replace = T)
chr2 = sample(1:22, 5, replace = T)
linkPos3 = round(runif(5, 1, 50000000))
linkPos4 = round(runif(5, 1, 50000000))
chr3 = sample(1:22, 5, replace = T)
chr4 = sample(1:22, 5, replace = T)
tracks = tracks + BioCircosLinkTrack("testLink", gene1Chromosomes = chr1,
gene1Starts = linkPos1, gene1Ends = linkPos1+1, gene2Chromosomes = chr2, axisPadding = 6,
color = "#EEEE55", width = "0.3em", labels = paste(chr1, chr2, sep = "*"), displayLabel = F,
gene2Starts = linkPos2, gene2Ends = linkPos2+1, maxRadius = 0.42)
tracks = tracks + BioCircosLinkTrack("testLink2", gene1Chromosomes = chr3,
gene1Starts = linkPos3, gene1Ends = linkPos3+5000000, axisPadding = 6, displayLabel = F,
color = "#FF6666", labels = paste(chr3, chr4, sep = "-"), gene2Chromosomes = chr4,
gene2Starts = linkPos4, gene2Ends = linkPos4+2500000, maxRadius = 0.42)
# Display the BioCircos visualization
BioCircos(tracks, genomeFillColor = "Spectral", yChr = T, chrPad = 0, displayGenomeBorder = F,
genomeTicksLen = 3, genomeTicksTextSize = 0, genomeTicksScale = 50000000,
genomeLabelTextSize = 18, genomeLabelDy = 0)
```
## Contact
To report bugs, request features or for any question or remark regarding this package, please use the <a href="https://github.com/lvulliard/BioCircos.R">GitHub page</a> or contact <a href="mailto:[email protected]">Loan Vulliard</a>.
## Credits
The creation and implementation of the **BioCircos.js** JavaScript library is an independent work attributed to <a href="mailto:[email protected]">Ya Cui</a> and <a href="mailto:[email protected]">Xiaowei Chen</a>.
This work is described in the following scientific article: BioCircos.js: an Interactive Circos JavaScript Library for Biological Data Visualization on Web Applications. Cui, Y., et al. Bioinformatics. (2016).
This package relies on several open source projects and other R packages, and is made possible thanks to **shiny** and **htmlwidgets**.
The package **heatmaply** was used as a model for this vignette, as well as for the **htmlwidgets** configuration.
## Session info
```{r sessionINFO, screenshot.force = FALSE}
sessionInfo()
```
/scratch/gouwar.j/cran-all/cranData/BioCircos/vignettes/BioCircos.Rmd
##' @title describeRNA
##' @description This function provides filterByExpr with two customized options. See 'Arguments'. Different strategies can be useful if you want to compare different normalization approaches, or if you want to analyze a particular biotype for which a different variation of expression is expected under certain conditions. A BioInsight data frame is returned with your new (filtered) count matrix, with which you can proceed to your Differential Expression Analysis.
##' @param counts data.frame where you have gene counts
##' @param biotypes data.frame where you have a gene_biotype column
##' @param groups factor with groups and samples (see Examples)
##' @param report if TRUE, a .pdf report is written to a temporary folder ('wordcloud' and 'RColorBrewer' are required)
##' @param verbose if TRUE, the result is printed to the console ('knitr' is required)
##' @param filter Filtering stringency: 1, 2, or 3 (see edgeR::filterByExpr and 'Details')
##' @import edgeR wordcloud knitr RColorBrewer grDevices graphics stats limma
##' @details For filter, use "1" for the default thresholds of filterByExpr, "2" for thresholds slightly above the default (min.count = 15, min.total.count = 25), or "3" for a more restrictive filtering (min.count = 25, min.total.count = 40).
##' @note You can use trace(describeRNA, edit = TRUE) to set different threshold values for the "filter" option.
##' @references Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010;26(1):139-140. doi:10.1093/bioinformatics/btp616
##' @examples
##' counts = system.file("extdata", "count_matrix.tsv", package="BioInsight")
##' counts = read.table(counts, row.names=1, header=TRUE)
##' biotypes = system.file("extdata", "Rattus_Norvegicus_biomart.tsv", package="BioInsight")
##' biotypes = read.table(biotypes, row.names=1, header=TRUE)
##'
##' groups = rep(as.factor(c("1","2")), each=5)
##'
##' describeRNA(counts=counts,
##' biotypes=biotypes,
##' groups=groups,
##' filter=2)
#' @export describeRNA
describeRNA = function(counts, biotypes, groups, report = FALSE, verbose = FALSE, filter = 1)
{
table = table(biotypes$gene_biotype, exclude = NA)
target = c("miRNA", "protein_coding", "lincRNA", "pseudogene",
"snoRNA", "snRNA", "ribozyme")
table = data.frame(table)
index <- table$Var1 %in% target
barplot = table[index, ]
if (report) {
File <- tempfile(fileext = ".pdf")
warning("\n Temporary report at ", File, call. = FALSE,
immediate. = TRUE)
dir.create(dirname(File), showWarnings=FALSE)
pdf(File, width = 15, height = 15)
oldpar <- par(no.readonly = TRUE)
on.exit(par(oldpar))
par(mfrow = c(2, 2))
plotMDS(counts, main = "Multidimensional Scaling")
sortbar = barplot[order(barplot$Freq, decreasing = TRUE),
]
barplot(sortbar$Freq, names.arg = sortbar$Var1, col = 1:6,
main = "Absolute Quantity")
x = t(counts)
x = hclust(dist(x))
plot(x, main = "Cluster Dendrogram")
wordcloud(table$Var1, table$Freq, colors = brewer.pal(5,"Dark2"), min.freq = 10)
dev.off()
}
if (filter == 1) {
data_filtered = filterByExpr(counts, group = groups)
data_filtered = counts[data_filtered, ]
}
if (filter == 2) {
data_filtered = filterByExpr(counts, group = groups,
min.count = 15, min.total.count = 25)
data_filtered = counts[data_filtered, ]
}
if (filter == 3) {
data_filtered = filterByExpr(counts, group = groups,
min.count = 25, min.total.count = 40)
data_filtered = counts[data_filtered, ]
}
if (verbose) {
print(knitr::kable(table))
cat("\nGENES", sep = "\n")
cat("Total number of genes:", nrow(counts))
cat("\nGenes remaining:", dim(data_filtered)[1])
}
pos <- 1
envir = as.environment(pos)
BioInsight <- assign("BioInsight", data_filtered, envir = envir)
}
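## Hedged usage sketch (not part of the package): after describeRNA() runs, the
## filtered count matrix is assigned to 'BioInsight' in the global environment.
## The edgeR calls below are an illustrative assumption of a downstream workflow;
## 'counts', 'biotypes' and 'groups' are the objects built in the example above.
if (FALSE) {
  describeRNA(counts = counts, biotypes = biotypes, groups = groups, filter = 2)
  dge <- edgeR::DGEList(counts = BioInsight, group = groups)
  dge <- edgeR::calcNormFactors(dge) # continue with a standard edgeR analysis
}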
/scratch/gouwar.j/cran-all/cranData/BioInsight/R/describeRNA.R
#' Prediction by Machine Learning
#'
#' @description
#' Prediction by Machine Learning with different learners ( From 'mlr3' )
#' @param trainData The input training dataset. The first column
#' is the label or the output. For binary classes,
#' 0 and 1 are used to indicate the class member.
#' @param testData The input test dataset. The first column
#' is the label or the output. For binary classes,
#' 0 and 1 are used to indicate the class member.
#' @param predMode The prediction mode. Available options are
#' c('probability', 'regression').
#' @param classifier Learner name in mlr3 (the 'classif.'/'regr.' prefix is added automatically)
#' @param paramlist Learner parameters, given as a tuning search space (optional)
#' @param inner_folds Number of folds for k-fold cross-validation (only used when testData = NULL)
#'
#' @return The predicted output for the test data.
#' @import mlr3verse
#' @importFrom mlr3 as_task_classif lrn rsmp msr resample as_task_regr
#' @export
#' @author Shunjie Zhang
#' @examples
#'library(mlr3verse)
#'library(caret)
#'library(BioM2)
#'data=MethylData_Test
#'set.seed(1)
#'part=unlist(createDataPartition(data$label,p=0.8))#Split data
#'predict=baseModel(trainData=data[part,1:10],
#' testData=data[-part,1:10],
#' classifier = 'svm')#Use 10 features to make predictions,Learner uses svm
#'
#'
baseModel=function ( trainData, testData, predMode = "probability",
classifier,paramlist=NULL, inner_folds=10){
predMode=match.arg(predMode, c("probability", "regression"))
if(!is.null(testData)){
if (colnames(trainData)[1] != "label") {
stop("The first column of the 'trainData' must be the 'label'!")
}
if (colnames(testData)[1] != "label") {
stop("The first column of the 'testData' must be the 'label'!")
}
if( predMode == 'probability'){
classifier=paste0('classif.',classifier,'')
trainData[,1]=as.factor(trainData[,1])
testData[,1]=as.factor(testData[,1])
trainData=as_task_classif(trainData,target='label')
testData=as_task_classif(testData,target='label')
model=lrn(classifier,predict_type = "prob")
if(!is.null(paramlist)){
at = auto_tuner(
tuner = tnr("grid_search", resolution = 10, batch_size = 5),
learner = model,
search_space = paramlist,
resampling =rsmp("cv", folds =5),
measure = msr("classif.acc")
)
at$train(trainData)
model$param_set$values = at$tuning_result$learner_param_vals[[1]]
model$train(trainData)
predict=model$predict(testData)$prob[,2]
return(predict)
}else{
sink(nullfile())
model$train(trainData)
sink()
predict=model$predict(testData)$prob[,2]
return(predict)
}
}else if( predMode == 'regression'){
classifier=paste0('regr.',classifier,'')
trainData[,1]=as.numeric(trainData[,1])
testData[,1]=as.numeric(testData[,1])
trainData=as_task_regr(trainData,target='label')
testData=as_task_regr(testData,target='label')
model=lrn(classifier)
if(!is.null(paramlist)){
at = auto_tuner(
tuner = tnr("grid_search", resolution = 5, batch_size = 5),
learner = model,
search_space = paramlist,
resampling = rsmp("cv", folds =5),
measure = msr("regr.mae")
)
at$train(trainData)
model$param_set$values = at$tuning_result$learner_param_vals[[1]]
model$train(trainData)
predict=model$predict(testData)$response
return(predict)
}else{
model$train(trainData)
predict=model$predict(testData)$response
return(predict)
}
}
}
else if(is.null(testData)){
if (colnames(trainData)[1] != "label") {
stop("The first column of the 'trainData' must be the 'label'!")
}
if( predMode == 'probability'){
classifier=paste0('classif.',classifier,'')
trainData[,1]=as.factor(trainData[,1])
trainData=as_task_classif(trainData,target='label')
model=lrn(classifier,predict_type = "prob")
#set.seed(seed)
sink(nullfile())
rr=resample(trainData, model, rsmp("cv", folds = inner_folds))$prediction()
sink()
re=as.data.frame(as.data.table(rr))[,c(1,5)]
re=re[order(re$row_ids),][,2]
return(re)
}else if(predMode == 'regression'){
classifier=paste0('regr.',classifier,'')
trainData=as_task_regr(trainData,target='label')
model=lrn(classifier)
#set.seed(seed)
sink(nullfile())
rr=resample(trainData, model, rsmp("cv", folds = inner_folds))$prediction()
sink()
re=as.data.frame(as.data.table(rr))[,c(1,3)]
re=re[order(re$row_ids),][,2]
return(re)
}
}
}
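## Hedged sketch of the inner cross-validation mode: when testData = NULL,
## baseModel() returns out-of-fold probabilities from a k-fold CV on the
## training data. 'data' refers to the MethylData_Test example object shipped
## with BioM2; the other object names below are illustrative.
if (FALSE) {
  library(mlr3verse)
  library(BioM2)
  data = MethylData_Test
  cv_pred = baseModel(trainData = data[, 1:10], testData = NULL,
                      classifier = 'svm', inner_folds = 5)
  length(cv_pred) # one out-of-fold probability per training sample
}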
#' Stage 1 Feature Selection
#'
#' @param Stage1_FeartureSelection_Method Feature selection methods. Available options are
#' c(NULL, 'cor', 'wilcox.test', 'cor_rank', 'wilcox.test_rank').
#' @param data The input training dataset. The first column is the label.
#' @param cutoff The cutoff used for feature selection threshold. It can be any value
#' between 0 and 1. Commonly used cutoffs are c(0.5, 0.1, 0.05, 0.01, etc.).
#' @param featureAnno The annotation data stored in a data.frame for probe
#' mapping. It must have at least two columns named 'ID' and 'entrezID'.
#' (For details, please refer to data("MethylAnno").)
#' @param pathlistDB_sub A list of pathways with pathway IDs and their
#' corresponding genes ('entrezID' is used).
#' For details, please refer to data("GO2ALLEGS_BP").
#' @param cores The number of cores used for computation.
#' @param verbose Whether to print running process information to the console
#'
#' @return A list of matrices with pathway IDs as the associated list member
#' names.
#' @import parallel
#' @importFrom stats wilcox.test
#' @export
#' @author Shunjie Zhang
#' @examples
#'
#' library(parallel)
#' data=MethylData_Test
#' feature_pathways=Stage1_FeartureSelection(Stage1_FeartureSelection_Method='cor',
#' data=data,cutoff=0,
#' featureAnno=MethylAnno,pathlistDB_sub=GO2ALLEGS_BP,cores=1)
#'
Stage1_FeartureSelection=function(Stage1_FeartureSelection_Method='cor',data=NULL,cutoff=NULL,
featureAnno=NULL,pathlistDB_sub=NULL,cores=1,verbose=TRUE){
if(Sys.info()[1]=="Windows"){
cores=1
}
if(Stage1_FeartureSelection_Method=='cor'){
if(verbose)print(paste0(' Using << correlation >>',' ,and you choose cutoff:',cutoff))
Cor=stats::cor(data$label,data)
Cor=ifelse(Cor>0,Cor,-Cor)
names(Cor)=colnames(data)
Cor_names=names(Cor)
Cor_cutoff=Cor[which(Cor>cutoff)]
Cor_cutoff_names=names(Cor_cutoff)
feature_pathways=mclapply(1:length(pathlistDB_sub),function(x){
id=c('label',featureAnno$ID[which(featureAnno$entrezID %in% pathlistDB_sub[[x]])])
if(length(id)>10){
id2=id[which(id %in% Cor_cutoff_names)]
if(length(id2)<11){
a=Cor[id]
id2=names(a)[order(a,decreasing = T)[1:11]]
return(id2)
}else{
return(id2)
}
}else{
return(id)
}
} ,mc.cores=cores)
return(feature_pathways)
}else if(Stage1_FeartureSelection_Method=='wilcox.test'){
if(verbose)print(paste0(' Using << wilcox.test >>',' ,and you choose cutoff:',cutoff))
data_0=data[which(data$label==unique(data$label)[1]),]
data_1=data[which(data$label==unique(data$label)[2]),]
Cor=unlist(mclapply(1:ncol(data),function(x) wilcox.test(data_0[,x],data_1[,x])$p.value,mc.cores=cores))
names(Cor)=colnames(data)
Cor_names=names(Cor)
Cor_cutoff=Cor[which(Cor<cutoff)]
Cor_cutoff_names=names(Cor_cutoff)
feature_pathways=mclapply(1:length(pathlistDB_sub),function(x){
id=c('label',featureAnno$ID[which(featureAnno$entrezID %in% pathlistDB_sub[[x]])])
if(length(id)>10){
id2=id[which(id %in% Cor_cutoff_names)]
if(length(id2)<11){
a=Cor[id]
id2=names(a)[order(a,decreasing = T)[1:11]]
return(id2)
}else{
return(id2)
}
}else{
return(id)
}
} ,mc.cores=cores)
return(feature_pathways)
}else if(Stage1_FeartureSelection_Method=='cor_rank'){
if(verbose)print(paste0(' Using << correlation_rank >>',' ,and you choose cutoff:',cutoff))
Cor=stats::cor(data$label,data)
Cor=ifelse(Cor>0,Cor,-Cor)
len=length(Cor)*cutoff
names(Cor)=colnames(data)
Cor_names=names(Cor)
Cor_cutoff=order(Cor,decreasing=T)[1:len]
Cor_cutoff_names=names(Cor_cutoff)
feature_pathways=mclapply(1:length(pathlistDB_sub),function(x){
id=c('label',featureAnno$ID[which(featureAnno$entrezID %in% pathlistDB_sub[[x]])])
if(length(id)>10){
id2=id[which(id %in% Cor_cutoff_names)]
if(length(id2)<11){
a=Cor[id]
id2=names(a)[order(a,decreasing = T)[1:11]]
return(id2)
}else{
return(id2)
}
}else{
return(id)
}
} ,mc.cores=cores)
return(feature_pathways)
}else if(Stage1_FeartureSelection_Method=='wilcox.test_rank'){
if(verbose)print(paste0(' Using << wilcox.test_rank >>',' ,and you choose cutoff:',cutoff))
data_0=data[which(data$label==unique(data$label)[1]),]
data_1=data[which(data$label==unique(data$label)[2]),]
Cor=unlist(mclapply(1:ncol(data),function(x) wilcox.test(data_0[,x],data_1[,x])$p.value,mc.cores=cores))
len=length(Cor)*cutoff
names(Cor)=colnames(data)
Cor_names=names(Cor)
Cor_cutoff=order(Cor)[1:len]
Cor_cutoff_names=names(Cor_cutoff)
feature_pathways=mclapply(1:length(pathlistDB_sub),function(x){
id=c('label',featureAnno$ID[which(featureAnno$entrezID %in% pathlistDB_sub[[x]])])
if(length(id)>10){
id2=id[which(id %in% Cor_cutoff_names)]
if(length(id2)<11){
a=Cor[id]
id2=names(a)[order(a,decreasing = T)[1:11]]
return(id2)
}else{
return(id2)
}
}else{
return(id)
}
} ,mc.cores=cores)
return(feature_pathways)
}else{
feature_pathways=mclapply(1:length(pathlistDB_sub),function(x){
id=c('label',featureAnno$ID[which(featureAnno$entrezID %in% pathlistDB_sub[[x]])])
return(id)
} ,mc.cores=cores)
return(feature_pathways)
}
}
#' Stage 2 Feature Selection
#'
#' @param Stage2_FeartureSelection_Method Feature selection methods. Available options are
#' c(NULL, 'cor', 'wilcox.test', 'RemoveHighcor', 'RemoveLinear').
#' @param data The input training dataset. The first column is the label.
#' @param label The label of dataset
#' @param cutoff The cutoff used for feature selection threshold. It can be any value
#' between 0 and 1.
#' @param preMode The prediction mode. Available options are
#' c('probability', 'classification').
#' @param classifier Learners in mlr3
#' @param cores The number of cores used for computation.
#' @param verbose Whether to print running process information to the console
#'
#' @return Column index of feature
#' @export
#' @import parallel
#' @importFrom stats wilcox.test
#' @import caret
#' @author Shunjie Zhang
#'
#'
Stage2_FeartureSelection=function(Stage2_FeartureSelection_Method='RemoveHighcor',data=NULL,label=NULL,cutoff=NULL,preMode=NULL,classifier=NULL,verbose=TRUE,cores=1){
if(Sys.info()[1]=="Windows"){
cores=1
}
if(preMode=='probability' | preMode=='classification'){
if(Stage2_FeartureSelection_Method=='cor'){
up=ifelse(classifier=='lda',0.999,100)
corr=sapply(1:length(data),function(x) stats::cor(data[[x]],label,method='pearson'))
if(verbose)print(paste0(' |> Final number of pathways >>>',length(which(corr>cutoff & corr < up)),'......Min correlation of pathways>>>',round(min(corr[which(corr > cutoff & corr < up)]),digits = 3)))
index=which(corr>cutoff & corr < up )
return(index)
}else if(Stage2_FeartureSelection_Method=='wilcox.test'){
data=do.call(cbind,data)
data=cbind(label=label,data)
data=as.data.frame(data)
data_0=data[which(data$label==unique(data$label)[1]),]
data_1=data[which(data$label==unique(data$label)[2]),]
pvalue=unlist(mclapply(2:ncol(data),function(x) wilcox.test(data_0[,x],data_1[,x])$p.value,mc.cores=cores))
if(cutoff < length(pvalue)){
index=order(pvalue)[1:cutoff]
}else{
index=order(pvalue)
}
#index=which(pvalue < cutoff)
if(verbose)print(paste0(' |> Final number of pathways >>>',length(index),'......Max p-value of pathways>>>',round(max(pvalue[index]),digits = 3)))
return(index)
}else if(Stage2_FeartureSelection_Method=='RemoveHighcor'){
if(!is.null(label)){
corr=sapply(1:length(data),function(x) stats::cor(data[[x]],label,method='pearson'))
data=do.call(cbind,data)
corm=stats::cor(data)
unindex=caret::findCorrelation(corm,cutoff =cutoff)
index=which(corr > 0)
index=setdiff(index,unindex)
if(verbose)print(paste0(' |> Final number of pathways >>> ',length(index),'......Min correlation of pathways>>>',round(min(corr[index]),digits = 3)))
return(index)
}else{
data=do.call(rbind,data)
label=data[,1]
corr=sapply(2:ncol(data),function(x) stats::cor(data[,x],label,method='pearson'))
index=which(corr > 0)
data=data[,-1]
corm=stats::cor(data)
unindex=caret::findCorrelation(corm,cutoff =cutoff)
index=setdiff(index,unindex)
index=index+1
return(index)
}
}else if(Stage2_FeartureSelection_Method=='RemoveLinear'){
if(!is.null(label)){
corr=sapply(1:length(data),function(x) stats::cor(data[[x]],label,method='pearson'))
data=do.call(cbind,data)
unindex=caret::findLinearCombos(data)$remove
index=which(corr > 0)
index=setdiff(index,unindex)
if(verbose)print(paste0(' |> Final number of pathways >>>',length(index),'......Min correlation of pathways>>>',round(min(corr[index]),digits = 3)))
return(index)
}else{
data=do.call(rbind,data)
label=data[,1]
corr=sapply(2:ncol(data),function(x) stats::cor(data[,x],label,method='pearson'))
index=which(corr > 0)
data=data[,-1]
unindex=caret::findLinearCombos(data)$remove
index=setdiff(index,unindex)
index=index+1
return(index)
}
}else{
if(!is.null(label)){
up=ifelse(classifier=='lda',0.99,100)
corr=sapply(1:length(data),function(x) stats::cor(data[[x]],label,method='pearson'))
if(verbose)print(paste0(' |> Final number of pathways >>>',length(order(corr,decreasing=T)[which(corr > 0 & corr < up)]),'......Min correlation of pathways>>>',round(min(corr[which(corr > 0 & corr < up)]),digits = 3)))
index=which(corr > 0 & corr < up )
return(index)
}else{
data=do.call(rbind,data)
label=data[,1]
corr=sapply(2:ncol(data),function(x) stats::cor(data[,x],label,method='pearson'))
index=which(corr > 0)
index=index+1
return(index)
}
}
}
if(preMode=='regression'){
}
}
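## Hedged usage sketch: 'train' is assumed to be a list of stage-1 pathway-level
## prediction vectors (as built inside BioM2()) and 'y' the 0/1 training labels;
## both names are illustrative, not part of the package API.
if (FALSE) {
  idx = Stage2_FeartureSelection(Stage2_FeartureSelection_Method = 'RemoveHighcor',
                                 data = train, label = y, cutoff = 0.85,
                                 preMode = 'probability', classifier = 'svm', cores = 1)
  newtrain = cbind(label = y, do.call(cbind, train[idx])) # pathway-level design matrix
}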
#' Add unmapped probe
#'
#' @param train The input training dataset. The first column
#' is the label or the output. For binary classes,
#' 0 and 1 are used to indicate the class member.
#' @param test The input test dataset. The first column
#' is the label or the output. For binary classes,
#' 0 and 1 are used to indicate the class member.
#' @param Unmapped_num The number of unmapped probes to add.
#' @param Add_FeartureSelection_Method Feature selection methods. Available options are
#' c('cor', 'wilcox.test').
#' @param anno The annotation data stored in a data.frame for probe
#' mapping. It must have at least two columns named 'ID' and 'entrezID'.
#' (For details, please refer to data("MethylAnno").)
#' @param len The number of pathway-level features already selected; used as the default for Unmapped_num when it is NULL
#' @param cores The number of cores used for computation.
#' @param verbose Whether to print running process information to the console
#'
#' @return Matrix of unmapped probes
#' @export
#' @import parallel
#' @importFrom stats wilcox.test
#'
AddUnmapped=function(train=NULL,test=NULL,Unmapped_num=NULL,Add_FeartureSelection_Method='wilcox.test',anno=NULL,len=NULL,verbose=TRUE,cores=1){
requireNamespace("parallel")
if(Sys.info()[1]=="Windows"){
cores=1
}
if(is.null(Add_FeartureSelection_Method) | Add_FeartureSelection_Method=='wilcox.test'){
Unmapped_Train=train[,setdiff(colnames(train),anno$ID)]
Unmapped_Test=test[,setdiff(colnames(train),anno$ID)]
Unmapped_0=Unmapped_Train[which(Unmapped_Train$label==unique(Unmapped_Train$label)[1]),]
Unmapped_1=Unmapped_Train[which(Unmapped_Train$label==unique(Unmapped_Train$label)[2]),]
Unmapped_pvalue=unlist(mclapply(2:ncol(Unmapped_Train),function(x) wilcox.test(Unmapped_0[,x],Unmapped_1[,x])$p.value,mc.cores=cores))
if(is.null(Unmapped_num)){
Unmapped_num=len-1
}
if(length(Unmapped_pvalue)<Unmapped_num){
Unmapped_num=length(Unmapped_pvalue)
}
Unmapped_id=order(Unmapped_pvalue)[1:Unmapped_num]
Unmapped_id=Unmapped_id+1
Unmapped_Train=Unmapped_Train[,Unmapped_id]
Unmapped_Test=Unmapped_Test[,Unmapped_id]
Unmapped_Data=list('train'= Unmapped_Train,'test'= Unmapped_Test)
if(verbose)print(paste0(' |> Add Unmapped features==>>',length(Unmapped_id)))
return(Unmapped_Data)
}else if(Add_FeartureSelection_Method=='cor'){
Unmapped_Train=train[,setdiff(colnames(train),anno$ID)]
Unmapped_Test=test[,setdiff(colnames(train),anno$ID)]
Unmapped_pvalue=unlist(mclapply(2:ncol(Unmapped_Train),function(x) stats::cor(Unmapped_Train$label,Unmapped_Train[,x]),mc.cores=cores))
Unmapped_pvalue=ifelse(Unmapped_pvalue>0,Unmapped_pvalue,-Unmapped_pvalue)
if(is.null(Unmapped_num)){
Unmapped_num=len-1
}
if(length(Unmapped_pvalue)<Unmapped_num){
Unmapped_num=length(Unmapped_pvalue)
}
Unmapped_id=order(Unmapped_pvalue,decreasing=TRUE)[1:Unmapped_num] # keep the most strongly correlated probes
Unmapped_id=Unmapped_id+1
Unmapped_Train=Unmapped_Train[,Unmapped_id]
Unmapped_Test=Unmapped_Test[,Unmapped_id]
Unmapped_Data=list('train'= Unmapped_Train,'test'= Unmapped_Test)
if(verbose)print(paste0(' |> Add Unmapped features==>>',length(Unmapped_id)))
return(Unmapped_Data)
}
}
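## Hedged usage sketch mirroring how BioM2() calls AddUnmapped() internally:
## 'Train' and 'Test' are assumed data.frames whose first column is 'label',
## and MethylAnno is the probe annotation shipped with the package.
if (FALSE) {
  extra = AddUnmapped(train = Train, test = Test, Unmapped_num = 300,
                      Add_FeartureSelection_Method = 'wilcox.test',
                      anno = MethylAnno, len = NULL, cores = 1)
  str(extra$train) # unmapped-probe columns to cbind onto the pathway-level features
}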
#' Biologically Explainable Machine Learning Framework
#'
#' @param TrainData The input training dataset. The first column
#' is the label or the output. For binary classes,
#' 0 and 1 are used to indicate the class member.
#' @param TestData The input test dataset. The first column
#' is the label or the output. For binary classes,
#' 0 and 1 are used to indicate the class member.
#' @param pathlistDB A list of pathways with pathway IDs and their
#' corresponding genes ('entrezID' is used).
#' For details, please refer to data("GO2ALLEGS_BP").
#' @param FeatureAnno The annotation data stored in a data.frame for probe
#' mapping. It must have at least two columns named 'ID' and 'entrezID'.
#' (For details, please refer to data("MethylAnno").)
#' @param resampling Resampling in mlr3verse.
#' @param nfolds k-fold cross validation ( Only supported when TestData = NULL )
#' @param classifier Learners in mlr3
#' @param predMode The prediction mode. Available options are
#' c('probability', 'classification').
#' @param PathwaySizeUp The upper bound of the number of genes in each
#' biological pathway.
#' @param PathwaySizeDown The lower bound of the number of genes in each
#' biological pathway.
#' @param MinfeatureNum_pathways The minimal pathway size (number of mapped features) after mapping your
#' own data to pathlistDB (KEGG/GO database).
#' @param Add_UnMapped Whether to add unmapped probes for prediction
#' @param Unmapped_num The number of unmapped probes
#' @param Add_FeartureSelection_Method Feature selection methods.
#' @param Inner_CV Whether to perform a k-fold verification on the training set.
#' @param inner_folds k-fold verification on the training set.
#' @param Stage1_FeartureSelection_Method Feature selection methods.
#' @param cutoff The cutoff used for feature selection threshold. It can be any value
#' between 0 and 1.
#' @param Stage2_FeartureSelection_Method Feature selection methods.
#' @param cutoff2 The cutoff used for feature selection threshold. It can be any value
#' between 0 and 1.
#' @param classifier2 Learner for stage 2 prediction(if classifier2==NULL,then it is the same as the learner in stage 1.)
#' @param target Is it used to predict or explore potential biological mechanisms?
#' Available options are c('predict', 'pathways').
#' @param p.adjust.method p-value adjustment method ("holm", "hochberg", "hommel",
#' "bonferroni", "BH", "BY", "fdr", "none")
#' @param save_pathways_matrix Whether to output the path matrix file
#' @param cores The number of cores used for computation.
#' @param verbose Whether to print running process information to the console
#'
#' @return A list containing prediction results and prediction result evaluation
#' @export
#' @import ROCR
#' @import caret
#' @importFrom utils head data
#' @importFrom stats wilcox.test p.adjust cor.test
#' @examples
#'
#'
#'
#' library(mlr3verse)
#' library(caret)
#' library(parallel)
#' library(BioM2)
#' data=MethylData_Test
#' set.seed(1)
#' part=unlist(createDataPartition(data$label,p=0.8))
#' Train=data[part,]
#' Test=data[-part,]
#' pathlistDB=GO2ALLEGS_BP
#' FeatureAnno=MethylAnno
#'
#'
#' pred=BioM2(TrainData = Train,TestData = Test,
#' pathlistDB=pathlistDB,FeatureAnno=FeatureAnno,
#' classifier='svm',nfolds=5,
#' PathwaySizeUp=25,PathwaySizeDown=20,MinfeatureNum_pathways=10,
#' Add_UnMapped='Yes',Unmapped_num=300,
#' Inner_CV='None',inner_folds=5,
#' Stage1_FeartureSelection_Method='cor',cutoff=0.3,
#' Stage2_FeartureSelection_Method='None',
#' target='predict',cores=1
#' )#(To explore biological mechanisms, set target='pathways')
#'
#'
#'
BioM2=function(TrainData=NULL,TestData=NULL,pathlistDB=NULL,FeatureAnno=NULL,resampling=NULL,nfolds=5,classifier='liblinear', predMode = "probability",
PathwaySizeUp=200,PathwaySizeDown=20,MinfeatureNum_pathways=10,
Add_UnMapped=TRUE,Unmapped_num=300,Add_FeartureSelection_Method='wilcox.test',
Inner_CV=TRUE,inner_folds=10,
Stage1_FeartureSelection_Method='cor',cutoff=0.3,
Stage2_FeartureSelection_Method='RemoveHighcor',cutoff2=0.85,classifier2=NULL,
target='predict',p.adjust.method='fdr',save_pathways_matrix=FALSE,cores=1,verbose=TRUE){
if(verbose)print('===================BioM2==================')
if(Sys.info()[1]=="Windows"){
cores=1
}
prediction=list()
FeatureAnno$ID=gsub('[\\.\\_\\-]','',FeatureAnno$ID)
colnames(TrainData)=gsub('[\\.\\_\\-]','',colnames(TrainData))
if(is.null(TestData)){
list_pathways=list()
Record=data.frame(resampling_id=1:nfolds,learner_name=1:nfolds,AUC=1:nfolds,ACC=1:nfolds,PCCs=1:nfolds)
Resampling=caret::createFolds(TrainData$label,k=nfolds)
if(!is.null(resampling)){
nfolds=resampling$param_set$values$folds
}
T1=Sys.time()
for(xxx in 1:nfolds){
if(verbose)print(paste0('<<<<<-----Start-----','Resampling(CV',',folds=',nfolds,')-No.',xxx,'----->>>>>'))
if(verbose)print('Step1: ReadData')
t1=Sys.time()
if(is.null(resampling)){
trainData=TrainData[unlist(Resampling[-xxx]),]
testData=TrainData[unlist(Resampling[xxx]),]
}else{
trainData=TrainData[resampling$train_set(xxx),]
testData=TrainData[resampling$test_set(xxx),]
}
geneNum_pathways=sapply(1:length(pathlistDB),function(i) length(pathlistDB[[i]]))
pathlistDB_sub=pathlistDB[which(geneNum_pathways > PathwaySizeDown & geneNum_pathways < PathwaySizeUp )]
featureAnno=FeatureAnno[FeatureAnno$ID %in% colnames(trainData),]
if(verbose)print(paste0(' |>Total number of pathways==>>',length(pathlistDB_sub)))
if(verbose)print('Step2: FeartureSelection-features')
feature_pathways=Stage1_FeartureSelection(Stage1_FeartureSelection_Method=Stage1_FeartureSelection_Method,data=trainData,cutoff=cutoff,
featureAnno=featureAnno,pathlistDB_sub=pathlistDB_sub,cores=cores,verbose=verbose)
lens=sapply(1:length(feature_pathways),function(x) length(feature_pathways[[x]]))
if(verbose)print('Step3: MergeData')
trainDataList=mclapply(1:length(feature_pathways),function(x) trainData[,feature_pathways[[x]]] ,mc.cores=cores)
testDataList=mclapply(1:length(feature_pathways),function(x) testData[,feature_pathways[[x]]] ,mc.cores=cores)
names(trainDataList)=names(pathlistDB_sub)
names(testDataList)=names(pathlistDB_sub)
trainDataList=trainDataList[which(lens>MinfeatureNum_pathways)]
testDataList=testDataList[which(lens>MinfeatureNum_pathways)]
featureNum_pathways=sapply(1:length(trainDataList),function(i2) length(trainDataList[[i2]]))
if(verbose)print(paste0(' |> Total number of selected pathways==>>',length(trainDataList)))
if(verbose)print(paste0(' |> Min features number of pathways==>>',min(featureNum_pathways)-1,'.......','Max features number of pathways==>>',max(featureNum_pathways)-1))
#(PredictPathways)
if(target=='pathways'){
if(verbose)print('Step4: PredictPathways')
test=mclapply(1:length(testDataList),function(i6) baseModel(trainData =trainDataList[[i6]],testData =testDataList[[i6]],predMode = predMode,classifier = classifier),mc.cores=cores)
corr=sapply(1:length(testDataList),function(x) stats::cor(test[[x]],testDataList[[x]]$label,method='pearson'))
newtest=do.call(cbind, test)
colnames(newtest)=names(testDataList)
newtest=cbind(label=testDataList[[1]]$label,newtest)
list_pathways[[xxx]]=newtest
if(verbose)print(paste0(' |>min correlation of pathways=====>>>',round(min(corr),digits = 3),'......','max correlation of pathways===>>>',round(max(corr),digits = 3)))
if(verbose)print(' <<< PredictPathways Done! >>> ')
t2=Sys.time()
if(verbose)print(t2-t1)
if(verbose)print('---------------------####################------------------')
}else{
#(Reconstruction )
if(verbose)print('Step4: Reconstruction')
if(Inner_CV==TRUE){
if(verbose)print(' |> Using Inner CV ~ ~ ~')
train=mclapply(1:length(trainDataList),function(i4) baseModel(trainData =trainDataList[[i4]],testData =NULL,predMode =predMode,classifier = classifier,inner_folds=inner_folds),mc.cores=cores)
test=mclapply(1:length(testDataList),function(i5) baseModel(trainData =trainDataList[[i5]],testData =testDataList[[i5]],predMode =predMode,classifier = classifier),mc.cores=cores)
}else{
train=mclapply(1:length(trainDataList),function(i4) baseModel(trainData =trainDataList[[i4]],testData =trainDataList[[i4]],predMode = predMode ,classifier = classifier),mc.cores=cores)
test=mclapply(1:length(testDataList),function(i5) baseModel(trainData =trainDataList[[i5]],testData =testDataList[[i5]],predMode = predMode ,classifier = classifier),mc.cores=cores)
}
if(verbose)print(' <<< Reconstruction Done! >>> ')
#(FeartureSelection-pathways)
if(verbose)print('Step5: FeartureSelection-pathways')
index=Stage2_FeartureSelection(Stage2_FeartureSelection_Method=Stage2_FeartureSelection_Method,data=train,
label=trainDataList[[1]]$label,cutoff=cutoff2,preMode='probability',classifier =classifier,verbose=verbose,cores=cores)
newtrain=do.call(cbind,train[index])
colnames(newtrain)=names(trainDataList)[index]
newtrain=cbind(label=trainDataList[[1]]$label,newtrain)
newtest=do.call(cbind,test[index])
colnames(newtest)=names(trainDataList)[index]
newtest=cbind(label=testDataList[[1]]$label,newtest)
colnames(newtest)=gsub(':','',colnames(newtest))
colnames(newtrain)=gsub(':','',colnames(newtrain))
if(Add_UnMapped==TRUE){
Unmapped_Data=AddUnmapped(train=trainData,test=testData,Add_FeartureSelection_Method=Add_FeartureSelection_Method,
Unmapped_num=Unmapped_num,len=ncol(newtrain),anno=featureAnno,verbose=verbose,cores=cores)
newtrain=cbind(newtrain,Unmapped_Data$train)
newtest=cbind(newtest,Unmapped_Data$test)
if(verbose)print(paste0(' |> Merge PathwayFeature and AddFeature ==>>',ncol(newtrain)))
}
#(Predict and Metric)
if(verbose)print('Step6: Predict and Metric')
if(is.null(classifier2)){
classifier2=classifier
}
result=baseModel(trainData=newtrain,testData=newtest,predMode ='probability',classifier = classifier2)
prediction_part=data.frame(sample=rownames(testData),prediction=result)
prediction[[xxx]]=prediction_part
names(prediction)[xxx]=paste('Resample No.',xxx)
testDataY=testDataList[[1]]$label
pre=ifelse(result>0.5,1,0)
Record[xxx,1]=xxx
Record[xxx,2]=classifier2
Record[xxx,5]=stats::cor(testDataY,result,method='pearson')
Record[xxx,3]=ROCR::performance(ROCR::prediction(result,testDataY),'auc')@y.values[[1]]
testDataY=as.factor(testDataY)
pre=as.factor(pre)
Record[xxx,4]=confusionMatrix(pre, testDataY)$overall['Accuracy'][[1]]
if(verbose)print(paste0('######Resampling NO.',xxx,'~~~~',classifier2,'==>','AUC:',round(Record[xxx,3],digits = 3),' ','ACC:',round(Record[xxx,4],digits = 3),' ','PCCs:',round(Record[xxx,5],digits = 3)))
t2=Sys.time()
if(verbose)print(t2-t1)
if(verbose)print('---------------------####################------------------')
}
#(Comprehensive Assessment)
if(xxx==nfolds){
if(verbose)print('-----------------------------------------------------------')
if(verbose)print('------------========<<<< Completed! >>>>======-----------')
if(verbose)print('-----------------------------------------------------------')
if(target=='pathways'){
ColN=Reduce(intersect,lapply(1:nfolds,function(x) colnames(list_pathways[[x]])))
list_pathways=lapply(1:nfolds,function(x) list_pathways[[x]][,ColN])
index=Stage2_FeartureSelection(Stage2_FeartureSelection_Method=Stage2_FeartureSelection_Method,data=list_pathways,
label=NULL,cutoff=cutoff2,preMode='probability',classifier =classifier,cores=cores)
matrix_pathways=do.call(rbind,list_pathways)
matrix_pathways=matrix_pathways[,c(1,index)]
rownames(matrix_pathways)=rownames(TrainData)[unlist(Resampling)]
if(save_pathways_matrix==TRUE){
saveRDS(matrix_pathways,'pathways_matrix.rds')
if(verbose)print(' |||==>>> Save the Pathways-Matrix ')
}
pathways_result=data.frame(
id=colnames(matrix_pathways)[2:ncol(matrix_pathways)],
cor=sapply(2:ncol(matrix_pathways),function(i) cor.test(matrix_pathways[,1],matrix_pathways[,i])$estimate),
p.value=sapply(2:ncol(matrix_pathways),function(i) cor.test(matrix_pathways[,1],matrix_pathways[,i])$p.value)
)
pathways_result$adjust_p.value=p.adjust(pathways_result$p.value,method=p.adjust.method)
GO_Ancestor=NA
data("GO_Ancestor",envir = environment())
GO_anno=GO_Ancestor[,1:2]
GO_anno=GO_anno[-which(duplicated(GO_anno)),]
colnames(GO_anno)=c('id','term')
pathways_result=merge(pathways_result,GO_anno,by='id')
pathways_result=pathways_result[order(pathways_result$cor,decreasing = TRUE),]
if(verbose)print(head(pathways_result))
final=list('PathwaysMatrix'= matrix_pathways,'PathwaysResult'= pathways_result)
T2=Sys.time()
if(verbose)print(T2-T1)
if(verbose)print('######------- Well Done!!!-------######')
return(final)
}else{
T2=Sys.time()
if(verbose)print(paste0('{|>>>=====','Learner: ',classifier,'---Performance Metric---==>>','AUC:',round(mean(Record$AUC),digits = 3),' ','ACC:',round(mean(Record$ACC),digits = 3),' ','PCCs:',round(mean(Record$PCCs),digits = 3),'======<<<|}'))
if(verbose)print(Record)
final=list('Prediction'=prediction,'Metric'=Record,'TotalMetric'=c('AUC'=round(mean(Record$AUC),digits = 3),'ACC'=round(mean(Record$ACC),digits = 3),'PCCs'=round(mean(Record$PCCs),digits = 3)))
if(verbose)print(T2-T1)
if(verbose)print('######------- Well Done!!!-------######')
return(final)
}
}
}
}else{
colnames(TestData)=colnames(TrainData)
Record=data.frame(learner_name=1,AUC=1,ACC=1,PCCs=1)
if(verbose)print('Step1: ReadData')
T1=Sys.time()
trainData=TrainData
testData=TestData
geneNum_pathways=sapply(1:length(pathlistDB),function(i) length(pathlistDB[[i]]))
pathlistDB_sub=pathlistDB[which(geneNum_pathways > PathwaySizeDown & geneNum_pathways < PathwaySizeUp )]
featureAnno=FeatureAnno[FeatureAnno$ID %in% colnames(trainData),]
if(verbose)print(paste0(' |>Total number of pathways==>>',length(pathlistDB_sub)))
if(verbose)print('Step2: FeartureSelection-features')
feature_pathways=Stage1_FeartureSelection(Stage1_FeartureSelection_Method=Stage1_FeartureSelection_Method,data=trainData,cutoff=cutoff,
featureAnno=featureAnno,pathlistDB_sub=pathlistDB_sub,cores=cores,verbose=verbose)
lens=sapply(1:length(feature_pathways),function(x) length(feature_pathways[[x]]))
if(verbose)print('Step3: MergeData')
trainDataList=mclapply(1:length(feature_pathways),function(x) trainData[,feature_pathways[[x]]] ,mc.cores=cores)
testDataList=mclapply(1:length(feature_pathways),function(x) testData[,feature_pathways[[x]]] ,mc.cores=cores)
names(trainDataList)=names(pathlistDB_sub)
names(testDataList)=names(pathlistDB_sub)
trainDataList=trainDataList[which(lens>MinfeatureNum_pathways)]
testDataList=testDataList[which(lens>MinfeatureNum_pathways)]
featureNum_pathways=sapply(1:length(trainDataList),function(i2) length(trainDataList[[i2]]))
if(verbose)print(paste0(' |> Total number of selected pathways==>>',length(trainDataList)))
if(verbose)print(paste0(' |> Min features number of pathways==>>',min(featureNum_pathways)-1,'.......','Max features number of pathways==>>',max(featureNum_pathways)-1))
#(PredictPathways)
if(target=='pathways'){
if(verbose)print('Step4: PredictPathways')
if(Inner_CV ==TRUE){
if(verbose)print(' |> Using Inner CV ~ ~ ~')
train=mclapply(1:length(trainDataList),function(i4) baseModel(trainData =trainDataList[[i4]],testData =NULL,predMode =predMode,classifier = classifier,inner_folds=inner_folds),mc.cores=cores)
test=mclapply(1:length(testDataList),function(i5) baseModel(trainData =trainDataList[[i5]],testData =testDataList[[i5]],predMode =predMode,classifier = classifier),mc.cores=cores)
}else{
train=mclapply(1:length(trainDataList),function(i4) baseModel(trainData =trainDataList[[i4]],testData =trainDataList[[i4]],predMode = predMode ,classifier = classifier),mc.cores=cores)
test=mclapply(1:length(testDataList),function(i5) baseModel(trainData =trainDataList[[i5]],testData =testDataList[[i5]],predMode = predMode ,classifier = classifier),mc.cores=cores)
}
if(verbose)print('Step5: FeartureSelection-pathways')
index=Stage2_FeartureSelection(Stage2_FeartureSelection_Method=Stage2_FeartureSelection_Method,data=train,
label=trainDataList[[1]]$label,cutoff=cutoff2,preMode='probability',classifier =classifier,verbose=verbose,cores=cores)
corr=sapply(1:length(testDataList),function(x) stats::cor(test[[x]],testDataList[[x]]$label,method='pearson'))
newtest=do.call(cbind, test)
colnames(newtest)=names(testDataList)
newtest=cbind(label=testDataList[[1]]$label,newtest)
matrix_pathways=newtest
rownames(matrix_pathways)=rownames(TestData)
if(verbose)print(paste0(' |>min correlation of pathways=====>>>',round(min(corr),digits = 3),'......','max correlation of pathways===>>>',round(max(corr),digits = 3)))
if(verbose)print(' <<< PredictPathways Done! >>> ')
if(save_pathways_matrix==TRUE){
saveRDS(matrix_pathways,'pathways_matrix.rds')
if(verbose)print(' |||==>>> Save the Pathways-Matrix ')
}
pathways_result=data.frame(
id=colnames(matrix_pathways)[2:ncol(matrix_pathways)],
cor=sapply(2:ncol(matrix_pathways),function(i) cor.test(matrix_pathways[,1],matrix_pathways[,i])$estimate),
p.value=sapply(2:ncol(matrix_pathways),function(i) cor.test(matrix_pathways[,1],matrix_pathways[,i])$p.value)
)
pathways_result$adjust_p.value=p.adjust(pathways_result$p.value,method=p.adjust.method)
GO_Ancestor=NA
data("GO_Ancestor",envir = environment())
GO_anno=GO_Ancestor[,1:2]
GO_anno=GO_anno[-which(duplicated(GO_anno)),]
colnames(GO_anno)=c('id','term')
pathways_result=merge(pathways_result,GO_anno,by='id')
pathways_result=pathways_result[order(pathways_result$cor,decreasing = TRUE),]
if(verbose)print(head(pathways_result))
final=list('PathwaysMatrix'= matrix_pathways,'PathwaysResult'= pathways_result)
T2=Sys.time()
if(verbose)print(T2-T1)
if(verbose)print('######------- Well Done!!!-------######')
return(final)
}else{
#(Reconstruction )
if(verbose)print('Step4: Reconstruction')
if(Inner_CV ==TRUE){
if(verbose)print(' |> Using Inner CV ~ ~ ~')
train=mclapply(1:length(trainDataList),function(i4) baseModel(trainData =trainDataList[[i4]],testData =NULL,predMode =predMode,classifier = classifier,inner_folds=inner_folds),mc.cores=cores)
test=mclapply(1:length(testDataList),function(i5) baseModel(trainData =trainDataList[[i5]],testData =testDataList[[i5]],predMode =predMode,classifier = classifier),mc.cores=cores)
}else{
train=mclapply(1:length(trainDataList),function(i4) baseModel(trainData =trainDataList[[i4]],testData =trainDataList[[i4]],predMode = predMode ,classifier = classifier),mc.cores=cores)
test=mclapply(1:length(testDataList),function(i5) baseModel(trainData =trainDataList[[i5]],testData =testDataList[[i5]],predMode = predMode ,classifier = classifier),mc.cores=cores)
}
if(verbose)print(' <<< Reconstruction Done! >>> ')
#(FeartureSelection-pathways)
if(verbose)print('Step5: FeartureSelection-pathways')
index=Stage2_FeartureSelection(Stage2_FeartureSelection_Method=Stage2_FeartureSelection_Method,data=train,
label=trainDataList[[1]]$label,cutoff=cutoff2,preMode='probability',classifier =classifier,verbose=verbose,cores=cores)
newtrain=do.call(cbind,train[index])
colnames(newtrain)=names(trainDataList)[index]
newtrain=cbind(label=trainDataList[[1]]$label,newtrain)
newtest=do.call(cbind,test[index])
colnames(newtest)=names(trainDataList)[index]
newtest=cbind(label=testDataList[[1]]$label,newtest)
colnames(newtest)=gsub(':','',colnames(newtest))
colnames(newtrain)=gsub(':','',colnames(newtrain))
if(Add_UnMapped==TRUE){
Unmapped_Data=AddUnmapped(train=trainData,test=testData,Add_FeartureSelection_Method=Add_FeartureSelection_Method,
Unmapped_num=Unmapped_num,len=ncol(newtrain),anno=featureAnno,verbose=verbose,cores=cores)
newtrain=cbind(newtrain,Unmapped_Data$train)
newtest=cbind(newtest,Unmapped_Data$test)
if(verbose)print(paste0(' |> Merge PathwayFeature and AddFeature ==>>',ncol(newtrain)))
}
#(Predict and Metric)
if(verbose)print('Step6: Predict and Metric')
if(is.null(classifier2)){
classifier2=classifier
}
result=baseModel(trainData=newtrain,testData=newtest,predMode ='probability',classifier = classifier2)
predict=data.frame(sample=rownames(testData),prediction=result)
testDataY=testDataList[[1]]$label
pre=ifelse(result>0.5,1,0)
Record[1,1]=classifier
Record[1,4]=stats::cor(testDataY,result,method='pearson')
Record[1,2]=ROCR::performance(ROCR::prediction(result,testDataY),'auc')@y.values[[1]]
testDataY=as.factor(testDataY)
pre=as.factor(pre)
Record[1,3]=confusionMatrix(pre, testDataY)$overall['Accuracy'][[1]]
if(verbose)print(paste0('######~~~~',classifier2,'==>','AUC:',round(Record[1,2],digits = 3),' ','ACC:',round(Record[1,3],digits = 3),' ','PCCs:',round(Record[1,4],digits = 3)))
final=list('Prediction'=predict,'Metric'=Record)
T2=Sys.time()
if(verbose)print(T2-T1)
if(verbose)print('######------- Well Done!!!-------######')
return(final)
}
}
}
#' Find suitable parameters for partitioning pathways modules
#'
#' @param pathways_matrix A pathway matrix generated by the BioM2( target='pathways') function.
#' @param control_label The label of the control group ( A single number, factor, or character )
#' @param minModuleSize minimum module size for module detection. Detail for WGCNA::blockwiseModules()
#' @param mergeCutHeight dendrogram cut height for module merging. Detail for WGCNA::blockwiseModules()
#' @param minModuleNum Minimum total number of modules detected
#' @param power soft-thresholding power for network construction. Detail for WGCNA::blockwiseModules()
#' @param exact Whether to divide GO pathways more accurately
#'
#' @return A list containing recommended parameters
#' @export
#' @importFrom utils data
#' @importFrom stats sd aggregate
#' @importFrom WGCNA pickSoftThreshold blockwiseModules
#'
#'
#'
#'
#'
FindParaModule=function(pathways_matrix=NULL,control_label=NULL,minModuleSize = seq(10,20,5),mergeCutHeight=seq(0,0.3,0.1),minModuleNum=20,power=NULL,exact=TRUE){
if('package:WGCNA' %in% search()){
final=list()
if(exact==FALSE){
GO_Ancestor=NA
data("GO_Ancestor",envir = environment())
anno=GO_Ancestor
}else{
GO_Ancestor_exact=NA
data("GO_Ancestor_exact",envir = environment())
anno=GO_Ancestor_exact
}
data=as.data.frame(pathways_matrix)
label=data.frame(ID=rownames(data),label=data$label)
data=data[data$label==control_label,]
data=data[,which(colnames(data) %in% anno$GO)]
if(is.null(power)){
powers = c(c(1:10), seq(from = 12, to=20, by=1))
sink(nullfile())
sft = pickSoftThreshold(data, powerVector = powers, verbose = 5)
sink()
if(is.na(sft$powerEstimate)){
stop('Could not find a proper power. Please supply a power yourself.')
}else{
power=sft$powerEstimate
message('Find the proper power!')
}
}
if(length(minModuleSize)==1 & length(mergeCutHeight)==1){
stop('At least one of minModuleSize and mergeCutHeight must have length greater than 1')
}else{
Num_module=minModuleSize
result_list=list()
for(xxx in 1:length(Num_module)){
cutoff=mergeCutHeight
n=length(cutoff)
result=data.frame(mergeCutHeight=1:n,Number_clusters=1:n,Mean_number_pathways=1:n,Mean_Fraction=1:n,Sd_Fraction=1:n,minModuleSize=1:n)
for(ii in 1:length(cutoff)){
sink(nullfile())
net = blockwiseModules(data, power = power,
TOMType = "unsigned", minModuleSize = Num_module[xxx],
reassignThreshold = 0, mergeCutHeight = cutoff[ii],
numericLabels = TRUE, pamRespectsDendro = FALSE,
saveTOMs = F,
saveTOMFileBase = "femaleMouseTOM",
verbose = 3)
sink()
cluster=data.frame(ID=names(net$colors),cluster=net$colors)
cluster$cluster=cluster$cluster+1
cluster_list=list()
faction=vector()
numder_pathways=vector()
for(i in 1:max(cluster$cluster)){
new_anno=anno[which(anno$GO %in% cluster[which(cluster$cluster==i),]$ID),]
sum=length(which(cluster$cluster==i))
numder_pathways[i]=sum
term=unique(new_anno$Ancestor)
prop=sapply(1:length(term),function(x) length(which(new_anno$Ancestor==term[x]))*100/sum)
a=data.frame(Term=term[which.max(prop)],Fraction=max(prop))
faction[i]=a$Fraction
a$Fraction=paste0(round(a$Fraction,2),'%')
if(length(which(cluster$cluster==i))>5 & length(which(cluster$cluster==i))<300 ){
cluster_list[[i]]=a
}else{
cluster_list[[i]]=NA
}
names(cluster_list)[i]=paste0('cluster_NO.',i,'==>> ','There are ',sum,' pathways')
}
result[ii,1]=cutoff[ii]
result[ii,4]=mean(faction[which(!is.na(cluster_list))])
result[ii,5]=sd(faction[which(!is.na(cluster_list))])
result[ii,3]=mean(numder_pathways[which(numder_pathways<300)])
result[ii,2]=length(which(!is.na(cluster_list)))
result[ii,6]= Num_module[xxx]
}
result_list[[xxx]]=result
}
result=do.call(rbind,result_list)
result=result[which(result$Number_clusters > minModuleNum),]
d=aggregate(result$Mean_Fraction,by=list(minModuleSize=result$minModuleSize),max)
colnames(d)[2]='Mean_Fraction'
}
best_minModuleSize=d[which.max(d$Mean_Fraction),]$minModuleSize
message(paste0('The best minModuleSize is:',best_minModuleSize))
size=result[result$minModuleSize==best_minModuleSize,]
best_mergeCutHeight=size[which.max(size$Mean_Fraction),]$mergeCutHeight
message(paste0('The best mergeCutHeight is:', best_mergeCutHeight))
Para=c(power,best_minModuleSize,best_mergeCutHeight)
names(Para)=c('power','ModuleSize','mergeCutHeight')
final[[1]]=result
final[[2]]=Para
names(final)=c('TotalResult','BestParameter')
message('Completed!')
return(final)
}else{
message('If you want to use this function, please install and load the WGCNA package')
}
}
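## Hedged usage sketch: 'pm' is assumed to be the PathwaysMatrix returned by
## BioM2(..., target = 'pathways'); WGCNA must be installed and attached.
if (FALSE) {
  library(WGCNA)
  para = FindParaModule(pathways_matrix = pm, control_label = 0,
                        minModuleSize = seq(10, 20, 5),
                        mergeCutHeight = seq(0, 0.3, 0.1))
  para$BestParameter # named vector: power, ModuleSize, mergeCutHeight
}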
#' Delineate differential pathway modules with high biological interpretability
#'
#' @param pathways_matrix A pathway matrix generated by the BioM2( target='pathways') function.
#' @param control_label The label of the control group ( A single number, factor, or character )
#' @param power soft-thresholding power for network construction. Detail for WGCNA::blockwiseModules()
#' @param minModuleSize minimum module size for module detection. Detail for WGCNA::blockwiseModules()
#' @param mergeCutHeight dendrogram cut height for module merging. Detail for WGCNA::blockwiseModules()
#' @param cutoff Thresholds for Biological Interpretability Difference Modules
#' @param MinNumPathways Minimum number of pathways included in the biologically interpretable difference module
#' @param p.adjust.method p-value adjustment method ("holm", "hochberg", "hommel",
#' "bonferroni", "BH", "BY", "fdr", "none")
#' @param exact Whether to divide GO pathways more accurately
#'
#' @return A list containing differential module results that are highly biologically interpretable
#' @export
#' @importFrom WGCNA pickSoftThreshold blockwiseModules moduleEigengenes
#' @importFrom stats wilcox.test p.adjust
#' @importFrom utils data
PathwaysModule=function(pathways_matrix=NULL,control_label=NULL,power=NULL,minModuleSize=NULL,mergeCutHeight=NULL,
cutoff=70,MinNumPathways=5,p.adjust.method='fdr',exact=TRUE){
if('package:WGCNA' %in% search()){
if(exact==FALSE){
GO_Ancestor=NA
data("GO_Ancestor",envir = environment())
anno=GO_Ancestor
}else{
GO_Ancestor_exact=NA
data("GO_Ancestor_exact",envir = environment())
anno=GO_Ancestor_exact
}
final=list()
data=as.data.frame(pathways_matrix)
label=data.frame(ID=rownames(data),label=data$label)
data=data[data$label==control_label,]
data=data[,which(colnames(data) %in% anno$GO)]
if(is.null(power)){
powers = c(1:30)
sink(nullfile())
sft = pickSoftThreshold(data, powerVector = powers, verbose = 5)
sink()
if(is.na(sft$powerEstimate)){
stop('Could not find a proper power. Please supply a power yourself.')
}else{
power=sft$powerEstimate
message('Find the proper power!')
}
}
sink(nullfile())
net = blockwiseModules(data, power = power,
TOMType = "unsigned", minModuleSize = minModuleSize,
reassignThreshold = 0, mergeCutHeight = mergeCutHeight,
numericLabels = TRUE, pamRespectsDendro = FALSE,
saveTOMs = F,
verbose = 3)
sink()
cluster=data.frame(ID=names(net$colors),cluster=net$colors)
cluster_list=list()
faction=vector()
for(i in 0:max(cluster$cluster)){
new_anno=anno[which(anno$GO %in% cluster[which(cluster$cluster==i),]$ID),]
sum=length(which(cluster$cluster==i))
term=unique(new_anno$Ancestor)
prop=sapply(1:length(term),function(x) length(which(new_anno$Ancestor==term[x]))*100/sum)
a=data.frame(Term=term[which.max(prop)],Fraction=max(prop))
faction[i+1]=a$Fraction
a$Fraction=paste0(round(a$Fraction,2),'%')
if(length(which(cluster$cluster==i))>5 & length(which(cluster$cluster==i))<1000 ){
cluster_list[[i+1]]=a
}else{
cluster_list[[i+1]]=NA
}
names(cluster_list)[i+1]=paste0('Module_NO.',i,'==>> ','There are ',sum,' pathways')
}
data=as.data.frame(pathways_matrix)
label=data.frame(ID=rownames(data),label=data$label)
data=data[,c(1,which(colnames(data) %in% anno$GO))]
ALL_eigengene=moduleEigengenes(data[,-1],net$colors)$eigengenes
ALL_eigengene$label=data$label
ALL_eigengene=ALL_eigengene[,c(ncol(ALL_eigengene),(1:(ncol(ALL_eigengene)-1)))]
Cor=data.frame(module=rownames(as.data.frame(stats::cor(ALL_eigengene)))[-1],cor=as.data.frame(stats::cor(ALL_eigengene))[-1,1])
ALL_eigengene_0=ALL_eigengene[ALL_eigengene$label==0,]
ALL_eigengene_1=ALL_eigengene[ALL_eigengene$label==1,]
pvalue=unlist(lapply(1:ncol(ALL_eigengene),function(x) wilcox.test(ALL_eigengene_0[,x],ALL_eigengene_1[,x])$p.value))
n=ncol(ALL_eigengene)-1
adjust=data.frame(module=colnames(ALL_eigengene),pvalue=pvalue,adjust_pvalue=p.adjust(pvalue,method=p.adjust.method))
adjust=adjust[-which(adjust$module=='label'),]
Num_pathways=as.vector(table(net$colors))
names(Num_pathways)=NULL
module=data.frame(module=paste0('ME',names(table(net$colors))),
Num_pathways=Num_pathways,
Fraction=faction)
result=merge(module,adjust,by='module')
result=merge(result,Cor,by='module')
message(paste0('Filtering out modules with Fraction less than ',cutoff))
id=which(result$adjust_pvalue < 0.05 & result$Fraction >= cutoff & result$Num_pathways >= MinNumPathways )
Result=result[id,]
Result=Result[order(Result$adjust_pvalue),]
final[[1]]=cluster
final[[2]]=result
final[[3]]=Result
final[[4]]=pathways_matrix
names(final)=c('ModuleResult','RAW_PathwaysModule','DE_PathwaysModule','Matrix')
message('Completed!')
return(final)
}else{
message('If you want to use this function, please install and load the WGCNA package')
}
}
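## Hedged usage sketch, continuing from FindParaModule(): 'pm' and 'para' are
## illustrative names, not package objects.
if (FALSE) {
  library(WGCNA)
  mods = PathwaysModule(pathways_matrix = pm, control_label = 0,
                        power = para$BestParameter['power'],
                        minModuleSize = para$BestParameter['ModuleSize'],
                        mergeCutHeight = para$BestParameter['mergeCutHeight'])
  head(mods$DE_PathwaysModule) # significant, biologically coherent modules
}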
#' Display biological information within each pathway module
#'
#' @param obj Results produced by PathwaysModule()
#' @param ID_Module ID of the diff module
#' @param exact Whether to divide GO pathways more accurately
#'
#' @return List containing biologically specific information within the module
#' @export
#' @importFrom utils data
#'
ShowModule=function(obj=NULL,ID_Module=NULL,exact=TRUE){
i=ID_Module
if(exact==FALSE){
GO_Ancestor=NA
data("GO_Ancestor",envir = environment())
anno=GO_Ancestor
}else{
GO_Ancestor_exact=NA
data("GO_Ancestor_exact",envir = environment())
anno=GO_Ancestor_exact
}
cluster=obj$ModuleResult
final=list()
for(x in 1:length(i)){
new_anno=anno[which(anno$GO %in% cluster[which(cluster$cluster==i[x]),]$ID),]
sum=length(which(cluster$cluster==i[x]))
term=unique(new_anno$Ancestor)
prop=sapply(1:length(term),function(x) length(which(new_anno$Ancestor==term[x]))*100/sum)
a=new_anno[new_anno$Ancestor==term[which.max(prop)],]
b=setdiff(cluster[which(cluster$cluster==i[x]),]$ID,a$GO)
if(length(b)>0){
b2=anno[anno$GO %in% b,]
if(length(which(duplicated(b2$GO)))==0){
a=rbind(a,b2)
}else{
b2=b2[-which(duplicated(b2$GO)),]
a=rbind(a,b2)
}
}
final[[x]]=a
names(final)[x]=paste0('ME',i[x])
}
return(final)
}
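## Hedged usage sketch: 'mods' is assumed to be a PathwaysModule() result; the
## module IDs below are illustrative.
if (FALSE) {
  ShowModule(obj = mods, ID_Module = c(1, 3)) # GO terms grouped within each requested module
}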
#' Visualisation of the results of the analysis of the pathway modules
#'
#' @param BioM2_pathways_obj Results produced by BioM2(,target='pathways')
#' @param FindParaModule_obj Results produced by FindParaModule()
#' @param ShowModule_obj Results produced by ShowModule()
#' @param PathwaysModule_obj Results produced by PathwaysModule()
#' @param exact Whether to divide GO pathways more accurately
#' @param type_text_table Whether to display it in a table
#' @param text_table_theme The theme of the table. See ggtexttable() in ggpubr.
#' @param n_neighbors The size of local neighborhood (in terms of number of neighboring sample points) used for manifold approximation.
#' Larger values result in more global views of the manifold, while smaller values result in more local data being preserved.
#' In general values should be in the range 2 to 100.
#' @param spread The effective scale of embedded points. In combination with min_dist, this determines how clustered/clumped the embedded points are.
#' @param min_dist The effective minimum distance between embedded points.
#' Smaller values will result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together,
#' while larger values will result on a more even dispersal of points.
#' The value should be set relative to the spread value,
#' which determines the scale at which embedded points will be spread out.
#' @param size Scatter plot point size
#' @param target_weight Weighting factor between data topology and target topology.
#' A value of 0.0 weights entirely on data, a value of 1.0 weights entirely on target.
#' The default of 0.5 balances the weighting equally between data and target.
#' Only applies if y is non-NULL.
#' @param alpha Alpha for ellipse specifying the transparency level of fill color. Use alpha = 0 for no fill color.
#' @param ellipse logical value. If TRUE, draws ellipses around points.
#' @param ellipse.alpha Alpha for ellipse specifying the transparency level of fill color. Use alpha = 0 for no fill color.
#' @param theme Default:theme_base(base_family = "serif")
#' @param width image width
#' @param height image height
#' @param save_pdf Whether to save images in PDF format
#' @param volin Whether to draw a violin plot of module eigengenes. Can only be used when PathwaysModule_obj is supplied
#' @param control_label Label of the control group. Can only be used when PathwaysModule_obj is supplied
#' @param module ID of the pathway module to plot. Can only be used when PathwaysModule_obj is supplied
#' @param cols palette (vector of colour names)
#'
#' @return a ggplot2 object
#' @export
#' @import ggplot2
#' @import htmlwidgets
#' @import jiebaR
#' @import ggsci
#' @import CMplot
#' @import uwot
#' @import webshot
#' @import wordcloud2
#' @import ggpubr
#' @import ggthemes
#' @importFrom utils data
#' @importFrom stats aggregate quantile
#' @importFrom ggstatsplot ggbetweenstats
#'
VisMultiModule=function(BioM2_pathways_obj=NULL,FindParaModule_obj=NULL,ShowModule_obj=NULL,PathwaysModule_obj=NULL,exact=TRUE,
type_text_table=FALSE,text_table_theme=ttheme('mOrange'),
volin=FALSE,control_label=0,module=NULL,cols=NULL,
n_neighbors = 8,spread=1,min_dist =2,target_weight = 0.5,
size=1.5,alpha=1,ellipse=TRUE,ellipse.alpha=0.2,theme=ggthemes::theme_base(base_family = "serif"),
save_pdf=FALSE,width =7, height=7){
if(is.null(cols)){
cols = pal_d3("category20",alpha=alpha)(20)
}
if(!is.null(BioM2_pathways_obj)){
if(exact==FALSE){
GO_Ancestor=NA
data("GO_Ancestor",envir = environment())
anno=GO_Ancestor
}else{
GO_Ancestor_exact=NA
data("GO_Ancestor_exact",envir = environment())
anno=GO_Ancestor_exact
}
if(type_text_table){
Result=BioM2_pathways_obj$PathwaysResult
Result=Result[1:10,]
colnames(Result)=c('ID','Correlation','Pvalue','P-adjusted','Description')
pic=ggtexttable(Result, rows = NULL,theme=text_table_theme)
if(save_pdf){
pic
ggsave('PathwaysResult_Table.pdf',width =width, height =height)
return(pic)
}else{
pic
return(pic)
}
}else{
pathways=BioM2_pathways_obj$PathwaysResult[,c('id','p.value')]
colnames(pathways)=c('SNP','pvalue')
new_anno=anno[anno$GO %in% pathways$SNP,]
tb=table(new_anno$Ancestor)
if(length(which(tb>20))>10){
Names= names(tb[order(tb,decreasing=T)[1:8]])
}else{
Names=names(which(table(new_anno$Ancestor)>20))
}
new_anno=new_anno[which(new_anno$Ancestor %in% Names),]
new_anno=new_anno[,c('GO','Ancestor')]
colnames(new_anno)=c('SNP','Chromosome')
Anno=merge(new_anno,pathways)
a=names(which(table(Anno$Chromosome)>250))
a2=names(which(table(Anno$Chromosome)<250))
if(length(a)==0){
Anno=Anno[Anno$Chromosome %in% a2,]
}else{
a2=names(which(table(Anno$Chromosome)<250))
A=lapply(1:length(a),function(x){
Anno2=Anno[Anno$Chromosome %in% a[x],]
c1<- quantile(Anno2$pvalue, probs = 0.03)
Anno3=Anno2[Anno2$pvalue < c1 ,]
Anno4=Anno2[Anno2$pvalue > c1 ,]
Anno4=Anno4[sample(1:nrow(Anno4),150,replace = F),]
o=rbind(Anno3,Anno4)
})
A=do.call(rbind,A)
A2=Anno[Anno$Chromosome %in% a2,]
Anno=rbind(A,A2)
}
Anno$Position=sample(1:100000,nrow(Anno))
Anno=Anno[,c(1,2,4,3)]
if(save_pdf){
pic=CMplot(Anno,plot.type="c",
threshold=c(0.001,0.05)/nrow(pathways),threshold.col=c('red','orange'),
multracks=FALSE, H=2,axis.cex=2,chr.den.col=NULL,col=cols,
r=2.5,lab.cex=1.7,
file.output=T,file='pdf',height=height, width=width)
return(pic)
}else{
pic=CMplot(Anno,plot.type="c",
threshold=c(0.001,0.05)/nrow(pathways),threshold.col=c('red','orange'),
r=2,lab.cex=1.7,
multracks=FALSE, chr.den.col=NULL,H=2,axis.cex=1.7,col=cols,file.output=F)
return(pic)
}
}
}
if(!is.null(FindParaModule_obj)){
if(type_text_table){
Result=FindParaModule_obj$TotalResult
Result[,3:5]=round(Result[,3:5],2)
pic=ggtexttable(Result, rows = NULL, theme=text_table_theme)
if(save_pdf){
pic
ggsave('ParameterSelection_Table.pdf',width =width, height =height)
return(pic)
}else{
pic
return(pic)
}
}else{
result=FindParaModule_obj$TotalResult
result$minModuleSize=as.character(result$minModuleSize)
pic=ggpubr::ggline(result,
size=size,
x = "mergeCutHeight",
y = "Mean_Fraction",
linetype = "minModuleSize",
shape = "minModuleSize",
color = "minModuleSize",
title = "Parameter Selection",
xlab = "mergeCutHeight",
ylab = "Mean_Fraction",
palette = cols)+theme
if(save_pdf){
pic
ggsave('ParameterSelection.pdf',width =width, height =height)
return(pic)
}else{
pic
return(pic)
}
}
}
if(!is.null(ShowModule_obj)){
if(type_text_table){
NAME=names(ShowModule_obj)
output=paste0(NAME,'_Table.pdf')
p<-"
Result=ShowModule_obj[[xxx]]
colnames(Result)=c('GO','Description','Ancestor','AncestorGO')
pic=ggtexttable(Result, rows = NULL, theme=text_table_theme)
if(save_pdf){
pic
ggsave(output[xxx],width =width, height =height)
return(pic)
}else{
pic
return(pic)
}
"
y=sapply(1:length(NAME),function(x) gsub('xxx',x,p))
pic=eval(parse(text =y))
return(pic)
}else{
NAME=names(ShowModule_obj)
output=paste0(NAME,'_WordCloud.png')
p<-"
words=ShowModule_obj[[NAME[xxx]]]$Name
engine <- worker()
segment <- segment(words, engine)
wordfreqs <- freq(segment)
wordf <- wordfreqs[order(wordfreqs$freq,decreasing = T),]
rm=c('of','in','by','for','via','process','regulation','lengthening','to')
wordf=wordf[-which(wordf$char %in% rm),]
#colors=rep('skyblue', nrow(wordf))
colors=rep('darkseagreen', nrow(wordf))
colors[1:5]='darkorange'
my_graph <-wordcloud2(wordf,shape = 'circle',color = colors)
if(save_pdf){
my_graph
saveWidget(my_graph,'tmp.html',selfcontained = F)
webshot('tmp.html',output[xxx], delay =6)
#return(my_graph)
}else{
my_graph
return(my_graph)
}
"
y=sapply(1:length(NAME),function(x) gsub('xxx',x,p))
pic=eval(parse(text =y))
return(pic)
}
}
if(!is.null(PathwaysModule_obj)){
if(type_text_table){
Result=PathwaysModule_obj$DE_PathwaysModule[,-4]
Result$Fraction=round(Result$Fraction,2)
Result$cor=round(Result$cor,2)
colnames(Result)=c('Modules','Num_Pathways','Fraction','P-adjusted','Correlation')
pic=ggtexttable(Result, rows = NULL, theme=text_table_theme)
if(save_pdf){
pic
ggsave('PathwaysModule_Table.pdf',width =width, height =height)
return(pic)
}else{
pic
return(pic)
}
}else if(volin){
data=PathwaysModule_obj$Matrix[,PathwaysModule_obj$ModuleResult$ID]
data=moduleEigengenes(data,PathwaysModule_obj$ModuleResult$cluster)$eigengenes
data$label=PathwaysModule_obj$Matrix[,'label']
data$label=ifelse(data$label==control_label,'Control','Case')
colnames(data)[which(colnames(data)==paste0('ME',module))]='y'
label=NA
pic=ggstatsplot::ggbetweenstats(
data=data,
x = label,
y = y,
type = "nonparametric",
p.adjust.method ='fdr'
)+ labs(
x = "Phenotype",
y = "Module EigenPathways",
#title = 'Distribution of Module Eigengenes across Phenotype',
title = paste0('Module',module)
) +
theme(
# This is the new default font in the plot
text = element_text(family = "serif", size = 8, color = "black"),
plot.title = element_text(
family = "serif",
size = 20,
face = "bold",
color = "#2a475e"
),
plot.subtitle = element_text(
family = "serif",
size = 15,
face = "bold",
color="#1b2838"
),
plot.title.position = "plot", # slightly different from default
axis.text = element_text(size = 10, color = "black"),
axis.title = element_text(size = 12)
)+
theme(
axis.ticks = element_blank(),
axis.line = element_line(colour = "grey50"),
panel.grid = element_line(color = "#b4aea9"),
panel.grid.minor = element_blank(),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(linetype = "dashed"),
panel.background = element_rect(fill = "#fbf9f4", color = "#fbf9f4"),
plot.background = element_rect(fill = "#fbf9f4", color = "#fbf9f4")
)
if(save_pdf){
pic
ggsave(paste0('PathwaysModule_ME',module,'_VolinPlot.pdf'),width =width, height =height)
return(pic)
}else{
pic
return(pic)
}
}else{
cluster=PathwaysModule_obj$ModuleResult
cluster$cluster=paste0('ME',cluster$cluster)
Result=PathwaysModule_obj$DE_PathwaysModule
if(nrow(Result)>10){
Result=Result[1:10,]
}
meta=cluster[cluster$cluster %in% Result$module,]
data=as.data.frame(PathwaysModule_obj$Matrix)
pdata=data[,meta$ID]
test=as.data.frame(t(pdata))
test$ID=rownames(test)
test=merge(test,meta,by='ID')
rownames(test)=test$ID
test=test[,-1]
test$cluster=as.factor(test$cluster)
test_umap <- uwot::umap(test, n_neighbors = n_neighbors,spread=spread,min_dist = min_dist,
y = test$cluster, target_weight = target_weight)
test_umap <- as.data.frame(test_umap)
test_umap$Modules=test$cluster
pic=ggpubr::ggscatter(test_umap,
x='V1',
y='V2',
size = size,
title ="Highly Biologically Explainable Differential Module",
subtitle="A UMAP visualization",
color = "Modules",
alpha = alpha,
ellipse = ellipse,
ellipse.alpha=ellipse.alpha,
ellipse.type="norm",
palette =cols,
xlab = "UMAP_1",
ylab = "UMAP_2",
)+theme
if(save_pdf){
pic
ggsave('PathwaysModule_UMAP.pdf',width =width, height =height)
return(pic)
}else{
pic
return(pic)
}
}
}
}
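# Usage sketch (not run): `pm` is assumed to come from PathwaysModule() and
# `sm` from ShowModule(); both objects are hypothetical placeholders.
# VisMultiModule(PathwaysModule_obj = pm)                          # UMAP of differential modules
# VisMultiModule(PathwaysModule_obj = pm, type_text_table = TRUE)  # module summary table
# VisMultiModule(PathwaysModule_obj = pm, volin = TRUE, module = 1, control_label = 0)
# VisMultiModule(ShowModule_obj = sm)                              # word cloud per module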
#' Visualisation of significant pathway-level features
#'
#' @param BioM2_pathways_obj Results produced by BioM2(..., target = 'pathways')
#' @param pathlistDB A list of pathways with pathway IDs and their corresponding genes ('entrezID' is used).
#' For details, please refer to ( data("GO2ALLEGS_BP") )
#' @param top Number of significant pathway-level features visualised
#' @param p.adjust.method p-value adjustment method ("holm", "hochberg", "hommel",
#' "bonferroni", "BH", "BY", "fdr", "none")
#' @param alpha The alpha transparency, a number in (0,1). See scale_fill_viridis() for details
#' @param begin The (corrected) hue in (0,1) at which the color map begins. See scale_fill_viridis() for details
#' @param end The (corrected) hue in (0,1) at which the color map ends. See scale_fill_viridis() for details
#' @param option A character string indicating the color map option to use. See scale_fill_viridis() for details
#' @param seq Interval between breaks on the x-axis
#'
#' @return a ggplot2 object
#' @export
#' @import ggplot2
#' @import viridis
#' @importFrom stats p.adjust
PlotPathFearture=function(BioM2_pathways_obj=NULL,pathlistDB=NULL,top=10,p.adjust.method='none',begin=0.1,end=0.9,alpha=0.9,option='C',seq=1){
data=BioM2_pathways_obj$PathwaysResult
geneNum_pathways=sapply(1:length(pathlistDB),function(i) length(pathlistDB[[i]]))
pathlistDB=pathlistDB[which(geneNum_pathways > 20 & geneNum_pathways < 200 )]
data_top=data[1:top,]
if(p.adjust.method=='none'){
data_top$val=-log10(data_top$p.value)
}else{
data_top$val=-log10(p.adjust(data_top$p.value,method = p.adjust.method))
}
data_top=data_top[order(data_top$val),]
data_top$size=sapply(1:nrow(data_top),function(x) length(pathlistDB[[data_top$id[x]]]))
data_top$id=factor(data_top$id,levels = data_top$id)
max=max(data_top$val)+1
val=NA
id=NA
size=NA
term=NA
pic=ggplot(data_top) +
geom_col(aes(val, id,fill=size), width = 0.6)+
scale_fill_viridis(alpha=alpha,begin=begin,end=end,direction = -1,
name='size',option = option)+
labs(
x = '-log(p) ',
y = NULL,
title = paste0('Top ',top,' Pathway-Level Features'),
shape='-log(P-value)'
)+
scale_x_continuous(
limits = c(0, max),
breaks = seq(0, max, by = seq),
expand = c(0, 0),
position = "top"
) +
scale_y_discrete(expand = expansion(add = c(0, 0.5))) +
theme(
panel.background = element_rect(fill = "white"),
panel.grid.major.x = element_line(color = "#A8BAC4", linewidth = 0.3),
axis.ticks.length = unit(0, "mm"),
#axis.title = element_blank(),
axis.line.y.left = element_line(color = "black"),
axis.text.y = element_text(family = "serif", size = 12,face = 'bold',colour='black'),
axis.text.x = element_text(family = "serif", size = 13,colour='black'),
axis.title = element_text(size = 18,family = "serif",face = 'bold.italic',vjust = 5),
plot.title = element_text(size = 20,family = "serif",face = 'bold'),
legend.text = element_text(family = "serif",face = 'bold'),
legend.title = element_text(size=13,family = "serif",face = 'bold'),
)+
geom_text(
data = data_top,
aes(0, y = id, label = term),
hjust = 0,
nudge_x = 0.5,
colour = "white",
family = "serif",
size = 6
)
return(pic)
}
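# Usage sketch (not run): `res` stands for the output of
# BioM2(..., target = 'pathways') and is a hypothetical placeholder;
# GO2ALLEGS_BP ships with the package.
# data("GO2ALLEGS_BP")
# PlotPathFearture(BioM2_pathways_obj = res, pathlistDB = GO2ALLEGS_BP,
#                  top = 10, p.adjust.method = 'fdr')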
#' Visualisation of the original features that make up a pathway
#'
#' @param data The input omics data
#' @param pathlistDB A list of pathways with pathway IDs and their corresponding genes ('entrezID' is used).
#' For details, please refer to ( data("GO2ALLEGS_BP") )
#' @param FeatureAnno The annotation data stored in a data.frame for probe mapping.
#' It must have at least two columns named 'ID' and 'entrezID'.
#' (For details, please refer to data("MethylAnno"))
#' @param PathNames A character vector containing the names of pathways
#' @param p.adjust.method p-value adjustment method ("holm", "hochberg", "hommel",
#' "bonferroni", "BH", "BY", "fdr", "none")
#' @param save_pdf Whether to save images in PDF format
#' @param alpha The alpha transparency, a number in (0,1).
#' @param cols palette (vector of colour names)
#'
#' @return a plot object
#' @export
#' @import ggplot2
#' @import CMplot
#' @import ggsci
#' @importFrom stats p.adjust
PlotPathInner=function(data=NULL,pathlistDB=NULL,FeatureAnno=NULL,PathNames=NULL,
p.adjust.method='none',save_pdf=FALSE,alpha=1,cols=NULL){
Result=list()
if(is.null(cols)){
cols = pal_d3("category20",alpha=alpha)(20)
}
featureAnno=FeatureAnno[FeatureAnno$ID %in% colnames(data),]
for(i in 1:length(PathNames)){
cpg_id=featureAnno$ID[which(featureAnno$entrezID %in% pathlistDB[[PathNames[i]]])]
cpg_data=data[,c('label',cpg_id)]
cpg_0=cpg_data[which(cpg_data$label==unique(cpg_data$label)[1]),]
cpg_1=cpg_data[which(cpg_data$label==unique(cpg_data$label)[2]),]
pvalue=unlist(lapply(2:ncol(cpg_data),function(x) wilcox.test(cpg_0[,x],cpg_1[,x])$p.value))
if(p.adjust.method=='none'){
result=data.frame(SNP=cpg_id,Chromosome=rep(PathNames[i],length(cpg_id)),
Position=sample(1:100000,length(cpg_id)),pvalue=pvalue)
}else{
result=data.frame(SNP=cpg_id,Chromosome=rep(PathNames[i],length(cpg_id)),
Position=sample(1:100000,length(cpg_id)),pvalue=p.adjust(pvalue,method = p.adjust.method))
}
Result[[i]]=result
}
Result=do.call(rbind,Result)
pic=CMplot(Result,plot.type="c",
threshold=c(0.001,0.05),threshold.col=c('red','orange'),
multracks=FALSE, H=2,axis.cex=2,chr.den.col=NULL,col=cols,
r=2.5,lab.cex=2,outward = TRUE,signal.cex=2, signal.pch = 18,
file.output=save_pdf,file='pdf',height=13, width=13)
return(pic)
}
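# Usage sketch (not run): `dat` stands for an omics data frame whose first
# column is 'label'; the pathway names used here are placeholders.
# data("MethylAnno"); data("GO2ALLEGS_BP")
# PlotPathInner(data = dat, pathlistDB = GO2ALLEGS_BP, FeatureAnno = MethylAnno,
#               PathNames = names(GO2ALLEGS_BP)[1:3], p.adjust.method = 'fdr')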
#' Correlalogram for Biological Differences Modules
#'
#' @param PathwaysModule_obj Results produced by PathwaysModule()
#' @param alpha The alpha transparency, a number in (0,1). See scale_fill_viridis() for details
#' @param begin The (corrected) hue in (0,1) at which the color map begins. See scale_fill_viridis() for details
#' @param end The (corrected) hue in (0,1) at which the color map ends. See scale_fill_viridis() for details
#' @param option A character string indicating the color map option to use. See scale_fill_viridis() for details
#' @param family Font family used for text elements (e.g. "serif")
#' @return a ggplot object
#' @export
#' @import ggplot2
#' @importFrom ggstatsplot ggcorrmat
#' @import viridis
#'
#'
#'
PlotCorModule=function(PathwaysModule_obj=NULL,
alpha=0.7,begin=0.2,end=0.9,option="C",family="serif"){
colors=PathwaysModule_obj$ModuleResult$cluster
names(colors)=PathwaysModule_obj$ModuleResult$ID
ALL_eigengene=moduleEigengenes(PathwaysModule_obj$Matrix[,names(colors)],colors)$eigengenes
data=ALL_eigengene[,PathwaysModule_obj$DE_PathwaysModule$module]
pic=ggcorrmat(
data = data,
type = "nonparametric",
ggcorrplot.args = list(show.legend =T,pch.cex=10),
title = "Correlalogram for Biological Differences Modules",
subtitle = " ",
caption = " "
)+theme(
# This is the new default font in the plot
text = element_text(family = family, size = 8, color = "black"),
plot.title = element_text(
family = family,
size = 20,
face = "bold",
color = "black"
),
plot.subtitle = element_text(
family = family,
size = 15,
face = "bold",
color="#1b2838"
),
plot.title.position = "plot", # slightly different from default
axis.text.x = element_text(size = 12, color = "black"),
axis.text.y = element_text(size = 12, color = "black"),
axis.title = element_text(size = 15)
)+scale_fill_viridis(alpha=alpha,begin=begin,end=end,direction = -1,
name='correlation',option = option)+
theme(legend.title =element_text(size = 10, color = "black"),
legend.text = element_text(size = 10,color = "black"))
pic$labels$caption=NULL
return(pic)
}
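# Usage sketch (not run): `pm` is assumed to be a PathwaysModule() result and
# is a hypothetical placeholder.
# PlotCorModule(PathwaysModule_obj = pm, alpha = 0.7, option = "C")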
#' Network diagram of pathways-level features
#'
#' @param data The input omics data
#' @param pathlistDB A list of pathways with pathway IDs and their corresponding genes ('entrezID' is used).
#' For details, please refer to ( data("GO2ALLEGS_BP") )
#' @param FeatureAnno The annotation data stored in a data.frame for probe mapping.
#' It must have at least two columns named 'ID' and 'entrezID'.
#' (For details, please refer to data("MethylAnno"))
#' @param PathNames A character vector containing the names of pathways
#' @param cutoff Threshold for correlation between features within a pathway
#' @param num Number of internal features per pathway, taken as those most correlated with the phenotype
#' @param BioM2_pathways_obj Results produced by BioM2()
#'
#' @return a ggplot object
#' @export
#' @import ggplot2
#' @import ggnetwork
#' @import igraph
#' @import ggsci
#' @import ggforce
#'
PlotPathNet=function(data=NULL,BioM2_pathways_obj=NULL,FeatureAnno=NULL,pathlistDB=NULL,PathNames=NULL,
cutoff=0.2,num=20){
featureAnno=FeatureAnno[FeatureAnno$ID %in% colnames(data),]
sub=list()
i=1
for(i in 1:length(PathNames)){
cpg_id=featureAnno$ID[which(featureAnno$entrezID %in% pathlistDB[[PathNames[i]]])]
cpg_data=data[,c('label',cpg_id)]
COR=stats::cor(cpg_data$label,cpg_data[,-1])
COR=ifelse(COR>0,COR,-COR)
names(COR)=cpg_id
n=names(COR)[order(COR,decreasing = T)][1:num]
sub[[i]]=data[,n]
names(sub)[i]=PathNames[i]
}
result=list()
a=lapply(1:length(PathNames), function(x){
data.frame(
label=colnames(sub[[x]]),
value=rep(names(sub)[x],length(sub[[x]]))
)
})
result$vertices=do.call(rbind,a)
comid=unique(result$vertices$label[duplicated(result$vertices$label)])
result$vertices=result$vertices[!duplicated(result$vertices$label),]
rownames(result$vertices)=result$vertices$label
nonid=setdiff(result$vertices$label,comid)
x=NA
y=NA
xend=NA
yend=NA
same.conf=NA
conf=NA
names(sub)=NULL
dd=do.call(cbind,sub)
dd=dd[,!duplicated(colnames(dd))]
cor_matrix <- stats::cor(dd)
upper_tri <- cor_matrix[upper.tri(cor_matrix)]
n <- nrow(cor_matrix)
upper_tri_matrix <- matrix(0, n, n)
upper_tri_matrix[upper.tri(upper_tri_matrix)] <- upper_tri
cor_matrix=upper_tri_matrix
colnames(cor_matrix)=colnames(dd)
rownames(cor_matrix)=colnames(dd)
df <- data.frame(from = character(n^2), to = character(n^2), Correlation = numeric(n^2))
count <- 1
for (i in 1:n) {
for (j in i:n) {
df[count, "from"] <- rownames(cor_matrix)[i]
df[count, "to"] <- rownames(cor_matrix)[j]
df[count, "Correlation"] <- cor_matrix[i, j]
count <- count + 1
}
}
df$Correlation=ifelse(df$Correlation>0,df$Correlation,-df$Correlation)
df=df[df$Correlation>0,]
DF=df[df$Correlation> cutoff,]
DF$same.conf=ifelse(result$vertices[DF$from,"value"]==result$vertices[DF$to,"value"],1,0)
DF1=DF[DF$same.conf==1,]
pname=BioM2_pathways_obj$PathwaysResult$id[1:10]
cor_matrix <- stats::cor(BioM2_pathways_obj$PathwaysMatrix[,pname])
upper_tri <- cor_matrix[upper.tri(cor_matrix)]
n <- nrow(cor_matrix)
upper_tri_matrix <- matrix(0, n, n)
upper_tri_matrix[upper.tri(upper_tri_matrix)] <- upper_tri
cor_matrix=upper_tri_matrix
colnames(cor_matrix)=colnames(BioM2_pathways_obj$PathwaysMatrix[,pname])
rownames(cor_matrix)=colnames(BioM2_pathways_obj$PathwaysMatrix[,pname])
df <- data.frame(from = character(n^2), to = character(n^2), Correlation = numeric(n^2))
count <- 1
for (i in 1:n) {
for (j in i:n) {
df[count, "from"] <- rownames(cor_matrix)[i]
df[count, "to"] <- rownames(cor_matrix)[j]
df[count, "Correlation"] <- cor_matrix[i, j]
count <- count + 1
}
}
df$Correlation=ifelse(df$Correlation>0,df$Correlation,-df$Correlation)
DF0=df[df$Correlation> 0,]
DF0=DF0[DF0$Correlation>quantile(DF0$Correlation, probs = 0.75),]
map=result$vertices[which(!duplicated(result$vertices$value)),]
rownames(map)=map$value
DF0$from=map[DF0$from,]$label
DF0$to=map[DF0$to,]$label
DF0$same.conf=rep(0,nrow(DF0))
DF=rbind(DF1,DF0)
result$edges=DF
fb.igra=graph_from_data_frame(result$edges[,1:2],directed = FALSE)
V(fb.igra)$conf=result$vertices[V(fb.igra)$name, "value"]
E(fb.igra)$same.conf=result$edges$same.conf
E(fb.igra)$lty=ifelse(E(fb.igra)$same.conf == 1, 1, 2)
pic<-ggplot(ggnetwork(fb.igra), aes(x = x, y = y, xend = xend, yend = yend)) +
geom_edges(aes(linetype= as.factor(same.conf)),
#arrow = arrow(length = unit(6, "pt"), type = "closed") #if directed
color = "grey50",
curvature = 0.2,
alpha=0.8,
ncp=10,
linewidth=0.7
) +
geom_nodes(aes(color = conf),
size = 7,
alpha=0.5) +
scale_color_brewer("Pathways",
palette = 'Paired') +
scale_linetype_manual(values = c(2,1)) +
guides(linetype = "none") +
theme_blank()+
geom_nodes(aes(color = conf),
size = 4)+labs(title = 'Network Diagram of TOP 10 Pathway-Level Features')+
theme(legend.text = element_text(family = 'serif',face = 'bold.italic',color = 'grey15'),
legend.title = element_text(family = 'serif',face = 'bold'),
plot.title = element_text(family = 'serif',face = 'bold'))+
geom_mark_ellipse(
aes(fill=conf,label =conf),
alpha = 0.2,
show.legend = F
)+scale_fill_brewer(palette = 'Paired')+xlim(-0.05,1.05)+ylim(-0.05,1.05)
return(pic)
}
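# Usage sketch (not run): `dat` is an omics data frame with a 'label' column
# and `res` a BioM2(..., target = 'pathways') result; both are placeholders.
# data("MethylAnno"); data("GO2ALLEGS_BP")
# PlotPathNet(data = dat, BioM2_pathways_obj = res, FeatureAnno = MethylAnno,
#             pathlistDB = GO2ALLEGS_BP, PathNames = res$PathwaysResult$id[1:5],
#             cutoff = 0.2, num = 20)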
|
/scratch/gouwar.j/cran-all/cranData/BioM2/R/BioM2.R
|
#' @docType data
#' @name GO_Ancestor
#'
#' @title Pathways in the GO database and their Ancestor
#' @description Inclusion relationships between pathways
#' @details In the GO database, each pathway will have its own ancestor pathway.
#' Map pathways in GO database to about 20 common ancestor pathways.
#'
#' @format A data frame :
#' \describe{
#' ...
#' }
#' @source From GO.db
NULL
#' @docType data
#' @name GO_Ancestor_exact
#'
#' @title Pathways in the GO database and their Ancestor
#' @description Inclusion relationships between pathways
#' @details In the GO database, each pathway will have its own ancestor pathway.
#' Map pathways in GO database to about 400 common ancestor pathways.
#'
#' @format A data frame :
#' \describe{
#' ...
#' }
#' @source From GO.db
NULL
#' @docType data
#' @name MethylAnno
#'
#' @title An example about FeatureAnno for methylation data
#' @description An example about FeatureAnno for methylation data
#' @details The annotation data stored in a data.frame for probe
#' mapping. It must have at least two columns named 'ID' and 'entrezID'.
#'
#' @format A data frame :
#' \describe{
#' ...
#' }
NULL
#' @docType data
#' @name TransAnno
#'
#' @title An example about FeatureAnno for gene expression
#' @description An example about FeatureAnno for gene expression
#' @details The annotation data stored in a data.frame for probe
#' mapping. It must have at least two columns named 'ID' and 'entrezID'.
#'
#' @format A data frame :
#' \describe{
#' ...
#' }
NULL
#' @docType data
#' @name MethylData_Test
#'
#' @title An example about TrainData/TestData for methylation data
#' @description An example about TrainData/TestData for methylation data
#' MethylData_Test.
#' @details The first column
#' is the label or the output. For binary classes,
#' 0 and 1 are used to indicate the class member.
#' @format A data frame :
#' \describe{
#' ...
#' }
NULL
#' @docType data
#' @name TransData_Test
#'
#' @title An example about TrainData/TestData for gene expression
#' @description An example about TrainData/TestData for gene expression
#' TransData_Test.
#' @details The first column
#' is the label or the output. For binary classes,
#' 0 and 1 are used to indicate the class member.
#' @format A data frame :
#' \describe{
#' ...
#' }
NULL
#' @docType data
#' @name GO2ALLEGS_BP
#'
#' @title An example about pathlistDB
#' @description An example about pathlistDB
#' @details A list of pathways with pathway IDs and their
#' corresponding genes ('entrezID' is used).
#' @format A list :
#' \describe{
#' ...
#' }
NULL
|
/scratch/gouwar.j/cran-all/cranData/BioM2/R/data.R
|
## returns indices of HC-selected variables in the order of the p
## values (small to large)
HCthresh <- function(pvec, alpha = .1, plotit = FALSE)
{
N <- length(pvec)
p.order <- order(pvec)
pivar <- (1 - (1:N/N))*(1:N)/(N^2)
HCi <- ((1:N)/N - pvec[p.order])/sqrt(pivar)
cutoff <- round(N * alpha)
nvar <- which.min(-HCi[1:cutoff])
if (plotit) {
plot(1:cutoff/N, HCi[1:cutoff], type = "b",
xlab = "Ordered scores", ylab = "HCi",
main = paste("HC thresholding, alpha =", alpha))
abline(v = nvar/N, col = 2, lty = 2)
}
p.order[1:nvar]
}
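## Minimal sketch of HC thresholding on simulated p values: most are uniform,
## a handful are very small "true" signals (no extra packages assumed).
## set.seed(1)
## pvec <- c(runif(5, 0, 1e-4), runif(995))
## HCthresh(pvec, alpha = 0.1) # indices of the HC-selected variables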
### functions to generate individual null distributions for all
### variables. Two separate functions for PCR, and PLS/VIP.
### Contrary to an earlier implementation for the spiked apple data,
### here we allow ncomp to be a vector. We do not allow subsets of
### variables in X.
### Added June 23.
### Further speed improvement: June 24.
pval.pcr <- function(X, Y, ncomp, scale.p, npermut) {
result <- matrix(0, ncol(X), length(ncomp))
dimnames(result) <- list(colnames(X), ncomp)
maxcomp <- max(ncomp)
Y <- matrix(as.integer(factor(Y)), ncol = 1)
Y <- Y - mean(Y)
FUN <- scalefun(scale.p)
huhn <- La.svd(FUN(X))
DD <- huhn$d[1:maxcomp]
DD2 <- DD^2
TT <- huhn$u[, 1:maxcomp, drop = FALSE] %*%
diag(DD[1:maxcomp], nrow = maxcomp) # if maxcomp = 1
PP <- t(huhn$vt[1:maxcomp, , drop = FALSE])
fit.fun <- function(Tp, Pp, DD2p, Yp, ap) {
Pp[, 1:ap, drop = FALSE] %*% (crossprod(Tp[, 1:ap], Yp) / DD2p[1:ap])
}
for (aa in seq(along = ncomp)) {
a <- ncomp[aa]
real.coefs <- abs(fit.fun(TT, PP, DD2, Y, a))
for (i in 1:npermut) {
nulls <- abs(fit.fun(TT, PP, DD2, sample(Y), a))
coef.bigger <- as.numeric((real.coefs - nulls < 0)) ## 0/1
result[,aa] <- result[,aa] + coef.bigger
}
}
result / npermut
}
## huhn0 <- pval.pcr(spikedApples$dataMatrix, rep(0:1, each = 10),
## ncomp = 2:3, scale.p = "auto", npermut = 1000)
pval.plsvip <- function(X, Y, ncomp, scale.p, npermut,
smethod = c("both", "pls", "vip")) {
smethod <- match.arg(smethod)
nmethods <- ifelse(smethod == "both", 2, 1)
if (smethod == "both") smethod = c("pls", "vip")
result <- array(0, c(ncol(X), length(ncomp), nmethods))
dimnames(result) <- list(colnames(X), ncomp, smethod)
maxcomp <- max(ncomp)
Y <- matrix(as.integer(factor(Y)), ncol = 1)
Y <- Y - mean(Y)
FUN <- scalefun(scale.p)
Xsc <- FUN(X)
get.vip <- function(plsmod) {
ww <- loading.weights(plsmod)
result <- matrix(NA, ncol(X), plsmod$ncomp)
for (i in 1:plsmod$ncomp) {
var.exp <- diff(c(0, R2(plsmod, estimate = "train", ncomp = 1:i,
intercept = FALSE)$val))
result[, i] <- sqrt(ncol(X) * ww[, 1:i, drop = FALSE]^2 %*%
var.exp/sum(var.exp))
}
result
}
huhn <- plsr(Y ~ Xsc, maxcomp, method = "widekernelpls")
if ("pls" %in% smethod)
pls.coefs <- abs(huhn$coefficients[,1,ncomp]) ## absolute size matters
if ("vip" %in% smethod)
vip.coefs <- get.vip(huhn)[,ncomp] ## always positive
for (i in 1:npermut) {
huhn <- plsr(sample(Y) ~ Xsc, maxcomp, method = "widekernelpls")
if ("pls" %in% smethod) {
nulls <- abs(huhn$coefficients[,1,ncomp])
coef.bigger <- as.numeric((pls.coefs - nulls < 0)) ## 0/1
result[,,"pls"] <- result[,,"pls"] + coef.bigger
}
if ("vip" %in% smethod) {
nulls <- get.vip(huhn)[,ncomp]
coef.bigger <- as.numeric((vip.coefs - nulls < 0)) ## 0/1
result[,,"vip"] <- result[,,"vip"] + coef.bigger
}
}
result / npermut
}
## huhn0 <- pval.plsvip(spikedApples$dataMatrix, rep(0:1, each = 10),
## ncomp = 2:3, scale.p = "auto", npermut = 1000)
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/biom.HC.R
|
pcr.coef <- function(X, Y, ncomp, scale.p, ...)
{
if (nlevels(Y) > 2)
stop("multi-class discrimination not implemented for PCR")
Y <- as.numeric(Y)
FUN <- scalefun(scale.p)
matrix(svdpc.fit(FUN(X), Y, ncomp = max(ncomp),
stripped = TRUE)$coefficients[, 1, ncomp],
ncol(X), length(ncomp))
}
## Changed to widekernelpls.fit because this probably is the most
## relevant situation
pls.coef <- function(X, Y, ncomp, scale.p, ...)
{
if (nlevels(Y) > 2)
stop("multi-class discrimination not implemented for PLS")
Y <- as.numeric(Y)
FUN <- scalefun(scale.p)
matrix(widekernelpls.fit(FUN(X), Y, ncomp = max(ncomp),
stripped = TRUE)$coefficients[, 1, ncomp],
ncol(X), length(ncomp))
}
vip.coef <- function(X, Y, ncomp, scale.p, ...)
{
if (nlevels(Y) > 2)
stop("multi-class discrimination not implemented for VIP")
Y <- as.numeric(Y)
FUN <- scalefun(scale.p)
plsmod <- plsr(Y ~ FUN(X), ncomp = max(ncomp), method = "widekernelpls")
ww <- loading.weights(plsmod)
result <- matrix(NA, ncol(X), length(ncomp))
for (i in 1:length(ncomp)) {
var.exp <- diff(c(0, R2(plsmod, estimate = "train",
ncomp = 1:ncomp[i], intercept = FALSE)$val))
result[,i] <- sqrt(ncol(X) * ww[,1:ncomp[i],drop = FALSE]^2 %*%
var.exp / sum(var.exp))
}
result
}
studentt.coef <- function(X, Y, scale.p, ...)
{
if (nlevels(Y) > 2)
stop("only two-class discrimination implemented for studentt")
FUN <- scalefun(scale.p)
TFUN <- studentt.fun(Y)
matrix(TFUN(FUN(X)), ncol = 1)
}
shrinkt.coef <- function(X, Y, scale.p, ...)
{
if (nlevels(Y) > 2)
stop("only two-class discrimination implemented for shrinkt")
FUN <- scalefun(scale.p)
TFUN <- shrinkt.fun(L = Y, var.equal = FALSE, verbose = FALSE)
matrix(TFUN(FUN(X)), ncol = 1)
}
## Nov 21, 2011: inclusion of the lasso. For classification, Y should
## be a factor!
lasso.coef <- function(X, Y, scale.p, lasso.opt = biom.options()$lasso, ...)
{
## check whether family and character of Y agree
fam <- lasso.opt$family
if (!is.null(fam)) {
if (!is.factor(Y)) {
if (fam != "gaussian")
stop("Attempt of regression with a family different than 'gaussian'")
} else {
if (fam != "binomial")
stop("Attempt of binary classification with a family different than 'binomial'")
}
} else {
if (!is.factor(Y)) {
lasso.opt$family <- "gaussian"
} else {
lasso.opt$family <- "binomial"
}
}
## browser()
FUN <- scalefun(scale.p)
glmargs <- c(list(x = FUN(X), y = Y, standardize = FALSE,
dfmax = ncol(X)), lasso.opt)
huhn <- do.call(glmnet, glmargs)
x.coef <- as.matrix(huhn$beta)
colnames(x.coef) <- huhn$lambda
x.coef
}
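## Usage sketch (not run): X and Y below are simulated placeholders, not data
## shipped with the package.
## X <- matrix(rnorm(20 * 50), 20, 50)
## Y <- factor(rep(c("control", "treated"), each = 10))
## studentt.coef(X, Y, scale.p = "auto")         # one column of t statistics
## pls.coef(X, Y, ncomp = 1:2, scale.p = "auto") # one column per ncomp value
## lasso.coef(X, Y, scale.p = "auto")            # one column per lambda value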
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/biom.coef.R
|
## biom.options.R: modeled after pls.options.R.
## New version August 4, 2014. (Thanks for the example BH!)
## The list of initial options, for the moment all pertaining to
## stability-based biomarkers selection
## 21-11-2011: added "lasso" as possibility for fmethods. We assume
## that the number of observations is smaller than the number of
## variables. Eventually we also take lasso to be an empty list. Let's see.
biom.options <- function(..., reset = FALSE) {
if (reset) {
.biom.Options$options <-
list(max.seg = 100, oob.size = NULL, oob.fraction = .3,
variable.fraction = .5, ntop = 10, min.present = .1,
fmethods = c("studentt", "shrinkt", "pcr", "pls", "vip", "lasso"),
univ.methods = c("studentt", "shrinkt"),
lasso = list(alpha = 1, nlambda = 100),
nset = 10000, HCalpha = .1)
.biom.Options$options
}
temp <- list(...)
if (length(temp) == 0)
.biom.Options$options
current <- .biom.Options$options
if (length(temp) == 1 && is.null(names(temp))) {
arg <- temp[[1]]
switch(mode(arg),
list = temp <- arg,
character = return(.biom.Options$options[arg]),
stop("invalid argument: ", sQuote(arg)))
}
if (length(temp) == 0) return(current)
n <- names(temp)
if (is.null(n)) stop("options must be given by name")
changed <- current[n]
current[n] <- temp
.biom.Options$options <- current
invisible(current)
}
.biom.Options <- new.env(parent = emptyenv())
biom.options(reset = TRUE)
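## Usage sketch (not run): options are queried and set by name.
## biom.options("max.seg")                # inspect a single option
## biom.options(max.seg = 200, ntop = 20) # change options for this session
## biom.options(reset = TRUE)             # restore the defaults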
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/biom.options.R
|
lasso.stab <- function(X, Y, scale.p = NULL,
segments = NULL, variables = NULL,
...)
{
lasso.opt <- biom.options()$lasso
## do one run to obtain lambda sequence
## this one also will give a warning in case of an inappropriate family
lambdas <- as.numeric(colnames(lasso.coef(X, Y, scale.p, ...)))
if (is.null(lasso.opt$lambda))
lasso.opt$lambda <- lambdas
## get all coefficients - note that they usually are calculated for only
## a part of the variables in each iteration
all.coefs <- lapply(1:ncol(segments),
function(i, xx, ss, vv, yy) {
huhn <- lasso.coef(xx[-ss[,i], vv[,i]],
yy[-ss[,i]], scale.p = scale.p,
lasso.opt = lasso.opt, ...)
(huhn != 0) + 0 ## + 0 to convert to nrs
},
X, segments, variables, Y)
x.coef <- matrix(0, ncol(X), length(lambdas))
dimnames(x.coef) <- list(NULL, lambdas)
## count how often non-zero - this can probably be written more elegantly
for (i in 1:ncol(segments))
x.coef[variables[,i],] <- x.coef[variables[,i],] + all.coefs[[i]]
## finally, correct for the number of times every variable has had the
## chance to be selected. Maximum value in x.coef can be 1.
noccur <- tabulate(variables, nbins = ncol(X))
x.coef.sc <- sweep(x.coef, 1, noccur, FUN = "/")
x.coef.sc
}
########################################################################
### Functions below select by taking the ntop biggest coefficients,
### or, alternatively the ntop fraction of biggest coefficients.
########################################################################
## New selection function for the ntop highest coefs. The last
## dimension will always disappear, and the result is a matrix,
## possibly with only one column. The number of rows is always equal
## to the number of variables
select.aux <- function(object, variables) {
ntop <- biom.options()$ntop
nvar <- dim(object)[2]
## if ntop is a fraction between 0 and 1, it is taken to mean the
## fraction of variables to be selected. Typically 0.1.
if (ntop > 0 & ntop < 1)
ntop <- round(ntop * nvar)
if (is.matrix(object))
object <- array(object, c(nrow(object), ncol(object), 1))
gooduns <- apply(object, c(1,3),
function(x) sort.list(abs(x), decreasing = TRUE)[1:ntop])
x.coef <- apply(gooduns, 3, function(x) tabulate(x, nbins = nvar))
## finally, correct for the number of times every variable has had the
## chance to be selected. Maximum value in the result can be 1.
noccur <- tabulate(variables, nbins = nvar)
sweep(x.coef, 1, noccur, FUN = "/")
}
pcr.stab <- function(X, Y, ncomp = 2, scale.p = NULL,
segments = NULL, variables = NULL, ...)
{
x.coef <- array(NA, c(ncol(segments), ncol(X), length(ncomp)))
for (i in 1:ncol(segments))
x.coef[i,variables[,i],] <-
pcr.coef(X[-segments[,i], variables[,i]], Y[-segments[,i]],
ncomp = ncomp, scale.p = scale.p, ...)
select.aux(x.coef, variables)
}
pls.stab <- function(X, Y, ncomp = 2, scale.p = NULL,
segments = NULL, variables = NULL, ...)
{
x.coef <- array(NA, c(ncol(segments), ncol(X), length(ncomp)))
for (i in 1:ncol(segments))
x.coef[i,variables[,i],] <-
pls.coef(X[-segments[,i], variables[,i]], Y[-segments[,i]],
ncomp = ncomp, scale.p = scale.p, ...)
select.aux(x.coef, variables)
}
vip.stab <- function(X, Y, ncomp = 2, scale.p = NULL,
segments = NULL, variables = NULL, ...)
{
x.coef <- array(NA, c(ncol(segments), ncol(X), length(ncomp)))
for (i in 1:ncol(segments))
x.coef[i,variables[,i],] <-
vip.coef(X[-segments[,i], variables[,i]], Y[-segments[,i]],
ncomp = ncomp, scale.p = scale.p, ...)
select.aux(x.coef, variables)
}
### the dots in the shrinkt.stab and studentt.stab functions are
### necessary to catch extra arguments to other functions.
shrinkt.stab <- function(X, Y, scale.p = NULL,
segments = NULL, variables = NULL, ...)
{
x.coef <- matrix(NA, ncol(segments), ncol(X))
cat("\n")
for (i in 1:ncol(segments)) {
x.coef[i,variables[,i]] <- shrinkt.coef(X[-segments[,i], variables[,i]],
Y[-segments[,i]],
scale.p = scale.p, ...)
}
select.aux(x.coef, variables)
}
studentt.stab <- function(X, Y, scale.p = NULL,
segments = NULL, variables = NULL, ...)
{
x.coef <- matrix(NA, ncol(segments), ncol(X))
for (i in 1:ncol(segments))
x.coef[i,variables[,i]] <- studentt.coef(X[-segments[,i], variables[,i]],
Y[-segments[,i]],
scale.p = scale.p, ...)
select.aux(x.coef, variables)
}
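## Usage sketch (not run): the *.stab functions are normally called via
## get.biom(); a direct call needs out-of-bag segments and a variable map.
## X and Y are simulated placeholders.
## X <- matrix(rnorm(20 * 50), 20, 50)
## Y <- factor(rep(c("control", "treated"), each = 10))
## segs <- get.segments(Y, oob.size = 2, max.seg = 50)
## vars <- matrix(1:ncol(X), ncol(X), ncol(segs))
## studentt.stab(X, Y, scale.p = "auto", segments = segs, variables = vars)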
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/biom.stab.R
|
gen.data <- function(ncontrol, ntreated = ncontrol, nvar, nbiom = 5,
group.diff = .5, nsimul = 100,
means = rep(0, nvar), cormat = diag(nvar))
{
nobj <- ncontrol + ntreated
X <- array(0, c(nobj, nvar, nsimul))
diffvec <- rep(c(group.diff, 0), c(nbiom, nvar-nbiom))
means.treated <- means + diffvec
for (i in 1:nsimul)
X[,,i] <- rbind(mvrnorm(ncontrol, means, cormat),
mvrnorm(ntreated, means.treated, cormat))
list(X = X,
Y = factor(rep(c("control", "treated"), c(ncontrol, ntreated))),
nbiomarkers = nbiom)
}
gen.data2 <- function(X, ncontrol, nbiom, spikeI,
type = c("multiplicative", "additive"),
nsimul = 100, stddev = .05) {
dimnames(X) <- NULL
ntreated <- nrow(X) - ncontrol
X <- X[sample(1:nrow(X)),]
X.control <- X[1:ncontrol,]
X.treated.orig <- X[(ncontrol+1):nrow(X),]
## if more than one data set is to be simulated, there should be
## differences between the simulations. In this implementation, the
## differences are generated by choosing different levels for the
## first nbiom elements. We check that different settings indeed are
## possible. Naturally, if only one set is simulated
## (but who wants to do that?) this is less interesting.
if (length(unique(spikeI)) < 3 & nsimul > 1)
stop("spikeI should contain at least three different elements")
if (length(unique(spikeI))^nbiom < nsimul)
stop("number of simulations exceeds number of possible data sets")
newI <- matrix(sample(spikeI, nbiom * nsimul, replace = TRUE),
ncol = nsimul)
newI <- newI + matrix(rnorm(prod(dim(newI)), sd = stddev*mean(newI)),
nrow(newI), ncol(newI))
nvar <- ncol(X)
X.output <- array(0, c(nrow(X), nvar, nsimul))
type <- match.arg(type)
if (type == "multiplicative") {
for (i in 1:nsimul) {
Bmat <- rep(1, ncol(X))
Bmat[1:nbiom] <- newI[,i]
X.output[,,i] <- rbind(X.control, X.treated.orig %*% diag(Bmat))
}
} else {
zeromat <- matrix(0, ntreated, ncol(X) - nbiom)
for (i in 1:nsimul) {
## add the spike offsets for simulation i to the first nbiom variables
## of every treated sample
spikemat <- matrix(newI[, i], nrow = ntreated, ncol = nbiom, byrow = TRUE)
X.output[,,i] <- rbind(X.control,
X.treated.orig + cbind(spikemat, zeromat))
}
}
dimnames(X.output) <- list(c(paste("Control", 1:ncontrol),
paste("Treated", 1:ntreated)),
c(paste("Biom", 1:nbiom),
paste("Var", 1:(nvar - nbiom))))
list(X = X.output,
Y = factor(rep(c("control", "treated"), c(ncontrol, ntreated))),
n.biomarkers = nbiom)
}
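## Usage sketch (not run): simulate three data sets with 5 spiked variables;
## the default identity correlation matrix is used for brevity.
## sim <- gen.data(ncontrol = 10, nvar = 100, nbiom = 5,
##                 group.diff = 1, nsimul = 3)
## dim(sim$X) # 20 x 100 x 3
## gen.data2() works analogously but spikes an existing data matrix
## supplied by the user.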
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/gen.data.R
|
## New version of get.biom using aux functions, because get.biom was
## becoming too big to also include new things like the lasso
## stability path.
get.biom <- function(X, Y, fmethod = "all",
type = c("stab", "HC", "coef"),
ncomp = 2, biom.opt = biom.options(),
scale.p = "auto", ...)
{
## allow for older fmethods pclda and plsda
if ("plsda" %in% fmethod) {
fmethod[fmethod == "plsda"] <- "pls"
warning("fmethod 'plsda' is obsolete - please use 'pls'")
}
if ("pclda" %in% fmethod) {
fmethod[fmethod == "pclda"] <- "pcr"
warning("fmethod 'pclda' is obsolete - please use 'pcr'")
}
## Which biomarker selection methods should we consider?
fmethod <- match.arg(fmethod, c("all", biom.opt$fmethods),
several.ok = TRUE)
if ("all" %in% fmethod)
fmethod <- biom.opt$fmethods
multiv <- fmethod[!(fmethod %in% biom.opt$univ.methods)]
nmultiv <- length(multiv)
univ <- fmethod[(fmethod %in% biom.opt$univ.methods)]
nuniv <- length(univ)
fmethod <- c(univ, multiv) # do univariate methods first
nncomp <- rep(c(1, length(ncomp)), c(nuniv, nmultiv))
type <- match.arg(type)
if (type == "HC") fmethod <- fmethod[fmethod != "lasso"]
fname <- paste(fmethod, ifelse(type == "stab", "stab", "coef"), sep = ".")
## every list element consists of one fmethod, with one or more
## sublists corresponding to different settings
result <- vector(length(fmethod), mode = "list")
names(result) <- fmethod
if (is.factor(Y)) {
Y <- factor(Y) ## get rid of extra levels
} else {
if (length(table(Y)) == 2) {
warning("Y has only two values: assuming discrimination!")
Y <- factor(Y)
}
}
## Get settings from the biom.opt argument, mostly for stability-based BS
if (type == "stab") {
oob.size <- biom.opt$oob.size
oob.fraction <- biom.opt$oob.fraction
min.present <- biom.opt$min.present
if (is.factor(Y)) { ## classification
if (nlevels(Y) > 2)
stop("Only binary classification implemented")
smallest.class.fraction <- min(table(Y) / length(Y))
## for equal class sizes this is .5
if (is.null(oob.size))
oob.size <- round(smallest.class.fraction * oob.fraction * length(Y))
segments <- get.segments(Y, oob.size = oob.size,
max.seg = biom.opt$max.seg)
} else { ## we assume regression
if (is.null(oob.size))
oob.size <- round(oob.fraction * length(Y))
segments <- get.segments(1:length(Y), 1:length(Y),
oob.size = oob.size,
max.seg = biom.opt$max.seg)
}
variable.fraction <- biom.opt$variable.fraction
if (variable.fraction < 1) { # use different subsets of variables
nvar <- round(variable.fraction * ncol(X))
variables <- sapply(1:ncol(segments),
function(i) sample(ncol(X), nvar))
nvars <- table(variables)
if (length(nvars) < ncol(X))
stop(paste(c("Too few variables in resampling scheme:,\ntry",
"with a larger variable.fraction or use",
"more segments.")))
} else {
variables <- matrix(1:ncol(X), nrow = ncol(X), ncol = ncol(segments))
nvars <- rep(biom.opt$max.seg, ncol(X))
}
} else {
variables <- NULL
if (type == "HC") {
nset <- biom.opt$nset
HCalpha <- biom.opt$HCalpha
}
}
## Compared to earlier versions: treat HC separately because of the
## expensive evaluation of null distributions
## Temporary solution - not pretty though - is to do HC in the same
## way only for the univariate methods and to treat the multivariate
## methods separately. Take care that with future versions this may
## have to be revised.
for (m in seq(along = fmethod)) {
## Here the real work is done: call the modelling functions
huhn.models <- do.call(fname[m],
list(X = X, Y = Y,
segments = segments,
ncomp = ncomp,
scale.p = scale.p,
variables = variables, ...))
## huhn.models is always a matrix, possibly with one column
switch(type, ## extract relevant info
coef = {
## relevant info: coefficients
woppa <- huhn.models
},
stab = {
## relevant info:
## those coefficients occuring more often than the
## threshold min.present
orderfun <- function(xx) {
selection <- which(xx > min.present)
sel.order <- order(xx[selection], decreasing = TRUE)
list(biom.indices = selection[sel.order],
fraction.selected = xx)
}
woppa <- lapply(1:dim(huhn.models)[2],
function(i, x) orderfun(x[,i]),
huhn.models)
},
HC = {
## relevant info:
## pvals, and the ones selected by the HC criterion
if (m <= nuniv) {
huhn.pvals <-
apply(huhn.models, 2,
function(x) 2*(1 - pt(abs(x), nrow(X) - 2)))
woppa <-
lapply(1:ncol(huhn.models),
function(i)
list(biom.indices =
HCthresh(huhn.pvals[,i], alpha = HCalpha,
plotit = FALSE),
pvals = huhn.pvals[,i]))
} else { # just return something, real calcs later
woppa <- lapply(1:ncol(huhn.models),
function(i)
list(biom.indices = NULL,
pvals = huhn.models[,i]))
}
})
if (type == "coef") { ## result is a matrix, possibly with 1 column
colnames(woppa) <- switch(fmethod[m],
studentt = ,
shrinkt = NULL, # was:fmethod[m]
pls = ,
vip = ,
pcr = ncomp,
lasso =
round(as.numeric(colnames(huhn.models)), 4))
} else { ## result is a list
names(woppa) <- switch(fmethod[m],
studentt = ,
shrinkt = NULL, # was:fmethod[m],
pls = ,
vip = ,
pcr = ncomp,
lasso =
round(as.numeric(colnames(huhn.models)), 4))
}
result[[m]] <- woppa
}
if (type == "HC" & nmultiv > 0) {
## Possible PCR, PLS and VIP calculations for HC are done here
which.pcr <- which(substr(names(result), 1, 5) == "pcr")
if (length(which.pcr) > 0) {
huhn.models <- pval.pcr(X, Y, ncomp, scale.p, nset)
result[[which.pcr]] <-
lapply(1:ncol(huhn.models),
function(i)
list(biom.indices =
HCthresh(huhn.models[,i], alpha = HCalpha,
plotit = FALSE),
pvals = huhn.models[,i]))
}
which.pls <- which(substr(names(result), 1, 5) == "pls")
which.vip <- which(substr(names(result), 1, 3) == "vip")
if (length(which.pls) > 0 | length(which.vip) > 0) {
if (length(which.pls) > 0) {
if (length(which.vip) > 0) {
smethod <- "both"
} else {
smethod <- "pls"
}
} else {
smethod <- "vip"
}
## next statement takes some time...
huhn.models <- pval.plsvip(X, Y, ncomp, scale.p, nset, smethod)
if (length(which.pls) > 0) {
result[[which.pls]] <-
lapply(1:dim(huhn.models)[2],
function(i)
list(biom.indices =
HCthresh(huhn.models[, i, "pls"], alpha = HCalpha,
plotit = FALSE),
pvals = huhn.models[, i, "pls"]))
}
}
if (length(which.vip) > 0) {
result[[which.vip]] <-
lapply(1:dim(huhn.models)[2],
function(i)
list(biom.indices =
HCthresh(huhn.models[, i, "vip"], alpha = HCalpha,
plotit = FALSE),
pvals = huhn.models[, i, "vip"]))
}
}
if("lasso" %in% fmethod) {
info.lst <- list(call = match.call(),
type = type, fmethod = fmethod,
nvar = ncol(X),
lasso = biom.options()$lasso)
} else {
info.lst <- list(call = match.call(),
type = type, fmethod = fmethod,
nvar = ncol(X))
}
result2 <- c(result, list(info = info.lst))
class(result2) <- "BMark"
result2
}
print.BMark <- function(x, ...) {
type <- x$info$type
switch(type,
coef = cat("Result of coefficient-based biomarker selection using ",
length(x)-1, " modelling method",
ifelse(length(x) > 2, "s", ""), ".\n", sep = ""),
HC = cat("Result of HC-based biomarker selection using ",
length(x)-1, " modelling method",
ifelse(length(x) > 2, "s", ""), ".\n", sep = ""),
cat("Result of stability-based biomarker selection using ",
length(x)-1, " modelling method",
ifelse(length(x) > 2, "s", ""), ".\n", sep = ""))
}
summary.BMark <- function(object, ...) {
type <- object$info$type
nslots <- length(object)
infoslot <- which(names(object) == "info")
switch(type,
coef = {
nsett <- sapply(object[-infoslot], ncol)
names(nsett) <- names(object)[-infoslot]
cat("Result of coefficient-based biomarker selection using ",
nslots-1, " modelling method",
ifelse(length(object) > 2, "s", ""), ".\n", sep = "")
cat("Number of different settings for each method:\n")
print(nsett)
cat("\nTotal number of variables in the X matrix:",
object[[infoslot]]$nvar, "\n")
},
{
nsett <- sapply(object[-infoslot], length)
names(nsett) <- names(object)[-infoslot]
typestr <- ifelse(type == "HC", "HC-based", "stability-based")
cat("Result of ", typestr, " biomarker selection using ",
nslots-1, " modelling method",
ifelse(length(object) > 2, "s", ""), ".\n", sep = "")
cat("Number of different settings for each method:\n")
print(nsett)
cat("\nTotal number of variables in the X matrix:",
object[[infoslot]]$nvar, "\n")
cat("Number of variables selected:\n")
nsel <- sapply(object[-infoslot],
function(xx)
sapply(xx, function(yy) length(yy$biom.indices)))
print(nsel)
}
)
}
## returns "coefficients", which are only real coefficients when type
## == "coef", otherwise they are either stabilities or p values (for
## stability selection and HC, respectively)
coef.BMark <- function(object, ...) {
huhn <- object[(names(object) != "info")]
switch(object$info$type,
coef = {
huhn
## lapply(huhn,
## function(x) x)
},
HC = {
lapply(huhn,
function(x)
lapply(x, function(xx) xx$pvals))
},
stab = {
lapply(huhn,
function(x)
lapply(x, function(xx) xx$fraction.selected))
})
}
selection <- function(object, ...) {
huhn <- object[(names(object) != "info")]
if (object$info$type == "coef") {
stop("no selection made when type == 'coef'")
} else {
lapply(huhn,
function(x) lapply(x, function(xx) xx$biom.indices))
}
}
## utility function to plot the lasso trace as a function of
## lambda. Can be used either for coefficients, or for the stability
## trace.
traceplot <- function(object, ...) {
if (length(lasso.idx <- which(names(object) == "lasso")) == 0)
stop("No lasso results present")
switch(object$info$type,
coef = {
cfs <- coef(object)[[lasso.idx]]
lambdas <- as.numeric(colnames(cfs))
matplot(lambdas, t(cfs), type = "l",
ylab = "Coefficient size", xlab = expression(lambda),
main = "Lasso/elastic net coefficient trace", ...)
mtext(bquote(alpha == .(object$info$lasso$alpha)), line = .25)
},
stab = {
cfs <- do.call(cbind,
lapply(object[[lasso.idx]],
function(x) x$fraction.selected))
lambdas <- names(object[[lasso.idx]])
matplot(lambdas, t(cfs), type = "l",
ylab = expression(Pi), xlab = expression(lambda),
main = "Lasso/elastic net stability trace", ...)
abline(h = biom.options()$min.present, col = 4, lty = 3)
mtext(bquote(alpha == .(object$info$lasso$alpha)), line = .25)
})
}
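## Usage sketch (not run): a small stability-based selection run on simulated
## data; all objects below are placeholders.
## sim <- gen.data(ncontrol = 10, nvar = 100, nbiom = 5, group.diff = 1, nsimul = 1)
## bm <- get.biom(X = sim$X[, , 1], Y = sim$Y, fmethod = c("studentt", "pls"),
##                type = "stab", ncomp = 2)
## summary(bm)
## selection(bm) # indices of the selected variables per method and setting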
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/get.biom.R
|
### ROC S3 class from Thomas Lumley, RNews 2004.
### TestResult in every case is the statistic of interest, e.g. a t-statistic
### or a regression coefficient, and D is the 0-1 vector indicating
### whether it is a control (0) or a true finding (1).
print.ROC <- function(x,...){
cat("ROC curve: ")
print(x$call)
}
plot.ROC <- function(x, type = "b", null.line = TRUE,
xlab = "False Pos. Rate", ylab = "True Pos. Rate",
xlim = c(0, 1), ylim = c(0, 1), main = "ROC", ...)
{
plot(x$mspec, x$sens, type = type, xlab = xlab, ylab = ylab,
main = main, xlim = xlim, ylim = ylim, ...)
if(null.line) abline(0, 1, lty = 3, col = "gray")
invisible()
}
lines.ROC <- function(x,...){
lines(x$mspec, x$sens, ...)
}
points.ROC <- function(x,...){
points(x$mspec, x$sens, ...)
}
identify.ROC <- function(x, labels = NULL, ..., digits = 1)
{
if (is.null(labels))
labels <- round(x$test,digits)
identify(x$mspec, x$sens, labels = labels,...)
}
ROC <- function(TestResult, ...) UseMethod("ROC")
ROC.default <- function(TestResult, D, take.abs = TRUE, ...){
## addition: D can also be a vector of indices
if (length(D) < length(TestResult)) {
D2 <- rep(0, length(TestResult))
D2[D] <- 1
D <- D2
}
if (take.abs) TestResult <- abs(TestResult)
TT <- rev(sort(unique(TestResult)))
DD <- table(-TestResult,D)
sens <- cumsum(DD[,2])/sum(DD[,2])
mspec <- cumsum(DD[,1])/sum(DD[,1])
rval <- list(sens = sens, mspec = mspec,
test = TT, call = sys.call())
class(rval) <- "ROC"
rval
}
AUC <- function(x, max.mspec = 1) {
huhn <- aggregate(x$sens, list(x$mspec), max)
mean(huhn[huhn[,1] <= max.mspec, 2])
}
roc.value <- function(found, true, totalN)
{
TPR <- sum(found %in% true)
FPR <- (length(found) - TPR) / (totalN - length(true))
rval <- list(sens = TPR / length(true), mspec = FPR,
test = NULL,
call = sys.call())
class(rval) <- "ROC"
rval
}
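## Usage sketch (not run): an ROC curve from scores and 0/1 labels, plus a
## single ROC point for a fixed selection; data are simulated placeholders.
## scores <- c(rnorm(50), rnorm(50, mean = 1))
## truth <- rep(0:1, each = 50)
## rc <- ROC(scores, truth, take.abs = FALSE)
## plot(rc); AUC(rc)
## roc.value(found = c(1, 3, 7), true = 1:5, totalN = 100)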
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/roc.R
|
scalefun <- function(sc.p = c("none", "log", "sqrt", "pareto", "auto"))
{
if (is.null(sc.p)) sc.p <- "none"
sc.p <- match.arg(sc.p)
function(X) {
switch(sc.p,
"none" = return(scale(X, scale = FALSE)),
"log" = return(scale(log(X), scale = FALSE)),
"sqrt" = return(scale(sqrt(X), scale = FALSE)),
"pareto" = return(scale(X, scale = sqrt(apply(X, 2, sd)))),
"auto" = return(scale(X, scale = TRUE)))
}
}
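## Usage sketch (not run): scalefun() returns a function that is then applied
## to the data matrix.
## X <- matrix(rexp(20 * 5), 20, 5)
## autoscale <- scalefun("auto")
## Xs <- autoscale(X) # mean-centred, unit-variance columns
## round(colMeans(Xs), 10); apply(Xs, 2, sd)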
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/scale.R
|
get.segments <- function(i1, i2 = NULL, oob.size = 1, max.seg = 100)
{
if (is.null(i2)) {
Y <- i1
i1 <- which(Y == names(table(Y))[1])
i2 <- which(Y == names(table(Y))[2])
}
n1 <- length(i1)
n2 <- length(i2)
if (oob.size == 1) {
segments <- rbind(rep.int(i1, n2), rep.int(i2, rep(n1, n2)))
if (!is.null(max.seg) & ncol(segments) > max.seg)
segments <- segments[,sample(ncol(segments), max.seg)]
} else {
if (is.null(max.seg))
stop("max.seg cannot be NULL when oob.size is larger than 1")
segments <- sapply(1:max.seg,
function(i) c(sample(i1, oob.size),
sample(i2, oob.size)))
}
segments
}
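## Usage sketch (not run): leave-pair-out segments for a two-class factor;
## each column holds the indices left out in one resampling round.
## Y <- factor(rep(c("control", "treated"), each = 10))
## segs <- get.segments(Y, oob.size = 1, max.seg = 25)
## dim(segs) # 2 x 25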
|
/scratch/gouwar.j/cran-all/cranData/BioMark/R/segments.R
|
#' Simulated dataset for package 'DecisionCurve'
#'
#' Simulated cohort data containing demographic variables,
#' marker values and cancer outcome.
#'
#' @format A data frame with 500 rows and 6 variables:
#' \itemize{
#' \item Age: Age in years.
#' \item Female: Indicator for female gender.
#' \item Smokes: Indicator for smoking status.
#' \item Marker1: simulated biomarker.
#' \item Marker2: simulated biomarker.
#' \item Cancer: Indicator for cancer.
#' }
"dcaData"
|
/scratch/gouwar.j/cran-all/cranData/BioPET/R/data.R
|
#' Prognostic Enrichment with Real Data
#'
#' Evaluating biomarkers for prognostic enrichment of clinical trials using real data
#'
#' @param formula Object of class "formula", in the form "outcome ~ predictors", where the outcome is a binary indicator with a value of 1 in cases and a value of 0 in controls.
#' @param data Data frame containing the outcome and predictors specified in the ``formula'' argument. Observations with a missing value of the outcome or of any predictor are dropped.
#' @param family Character object or call to the family() function specifying the link function that is passed to 'glm' to estimate a risk score when more than one predictor is specified. Defaults to binomial(link = "logit"), which yields logistic regression.
#' @param reduction.under.treatment Number between 0 and 1 indicating the percent reduction in event rate under treatment that the trial should be able to detect with the specified power
#' @param cost.screening Number indicating the cost of screening a patient to determine trial eligibility. This argument is optional; if both the ``cost.screening'' and ``cost.keeping'' arguments are specified, then the total cost of the trial based on each screening threshold is estimated and returned.
#' @param cost.keeping Number indicating the cost of retaining a patient in the trial after enrolling. This argument is optional; if both the ``cost.screening'' and ``cost.keeping'' arguments are specified, then the total cost of the trial based on each screening threshold is estimated and returned.
#' @param do.bootstrap Logical indicating whether bootstrap 95\% confidence intervals should be computed. Defaults to TRUE.
#' @param n.bootstrap Number of bootstrap samples to draw, if ``do.bootstrap'' is set to TRUE. Defaults to 1000.
#' @param power Number between 0 and 1 giving the power the trial should have to reject the null hypothesis that there is no treatment effect. Defaults to 0.9.
#' @param smooth.roc Logical indicating the ``smooth'' argument passed to the roc() function from the `pROC' package when a single biomarker is given. Defaults to FALSE.
#' @param alpha Number between 0 and 1 giving the type I error rate for testing the null hypothesis that there is no treatment effect. Defaults to 0.025.
#' @param alternative Character specifying whether the alternative hypothesis is one-sided with a higher event rate in the treatment group (``one.sided'') or two-sided (``two.sided''). Defaults to ``one.sided''.
#' @param selected.biomarker.quantiles Numeric vector specifying the quantiles of the biomarker measured in controls that will be used to screen trial participants. Defaults to 0, 0.05, ..., 0.95. All entries must be at least 0 and less than 1.
#' @return A list with components
#' \itemize{
#' \item estimates: A data frame with the following summary measures for each biomarker threshold that is used to screen trial participants: `selected.biomarker.quantiles': quantiles of observed biomarker values used for screening. `biomarker.screening.thresholds': the values of the biomarker corresponding to the quantiles, `event.rate': post-screening event rate, `NNS': The estimated number of patients needed to screen to identify one patient eligible for the trial, `SS': The sample size in a clinical trial enrolling only patients whose biomarker-based disease risk is above the level used for screening, `N.screen': The total number of individuals whose biomarker values are screened to determine whether they should be enrolled in the trial, `N.screen.increase.percentage': Percentage increase in N.screen relative to a trial that does not screen based on the biomarker. `total.cost': The estimated total cost of running the trial if the biomarker were used for prognostic enrichment (if cost.screening and cost.keeping are specified), `cost.reduction.percentage': The reduction in total cost relative to a trial that does not screen based on the biomarker.
#' \item estimates.min.total.cost: The row of the estimates data frame corresponding to the screening strategy that results in the lowest total cost.
#' \item bootstrap.CIs: 95\% bootstrap-based CIs for reported summary measures (if do.bootstrap=TRUE).
#' \item simulation: A logical indicating whether data were simulated.
#' \item biomarker: Biomarker from the given dataset (either the single biomarker specified or the predicted values from logistic regression if multiple biomarkers are specified).
#' \item response: Response variable specified in the dataset.
#' }
#'
#' @seealso \code{\link{enrichment_simulation}}, \code{\link{plot_enrichment_summaries}}
#' @examples
#' data(dcaData)
#'
#' ## using a single biomarker in the dataset
#' analysis.single.marker <- enrichment_analysis(Cancer ~ Marker1,
#' data=dcaData,
#' reduction.under.treatment=0.3,
#' cost.screening=100,
#' cost.keeping=1000)
#' head(analysis.single.marker$estimates)
#' head(analysis.single.marker$bootstrap.CIs)
#'
#' ## combining two biomarkers in the dataset
#' analysis.two.markers <- enrichment_analysis(Cancer ~ Marker1 + Marker2,
#' data=dcaData,
#' reduction.under.treatment=0.3,
#' cost.screening=100,
#' cost.keeping=1000)
#' head(analysis.two.markers$estimates)
#' head(analysis.two.markers$bootstrap.CIs)
#' @export
enrichment_analysis <- function(formula,
data,
family=binomial(link=logit),
reduction.under.treatment,
cost.screening=NULL,
cost.keeping=NULL,
do.bootstrap=TRUE,
n.bootstrap=1000,
smooth.roc=FALSE,
power=0.9,
alpha=0.025,
alternative=c("one.sided", "two.sided"),
selected.biomarker.quantiles=seq(from=0, to=0.95, by=0.05)) {
alternative <- match.arg(alternative)
stopifnot(class(formula) == "formula")
if (!is.data.frame(data)) {
stop("dataset must be a data frame")
}
formula.vars <- all.vars(formula)
count.vars <- length(formula.vars)
if (!all(formula.vars %in% names(data))) {
stop(paste("Variables named", paste(formula.vars[!(formula.vars %in% names(data))], collapse=", "), "were not found in your data"))
}
# drop rows with missing values before extracting the response and features,
# so that the response, biomarker, and event-rate calculations below all refer
# to the same set of observations
ind.missing <- apply(data[, formula.vars], 1, function(x) sum(is.na(x)) > 0)
count.any.missing <- sum(ind.missing)
if (count.any.missing > 0) {
data <- data[!ind.missing, ]
warning(paste(count.any.missing, "observation(s) with missing data were removed"))
}
response.name <- formula.vars[1]
response <- data[, response.name]
features.names <- formula.vars[2:count.vars]
features <- as.matrix(data[, features.names])
if (!all(response == 0 | response == 1)) {
stop("The response variable should be binary")
}
baseline.event.rate <- mean(response)
if (!(power > 0 & power < 1)) {
stop("power should be between 0 and 1")
}
if (!(alpha > 0 & alpha < 1)) {
stop("alpha should be between 0 and 1")
}
if (!(reduction.under.treatment > 0 & reduction.under.treatment < 1)) {
stop("reduction.under.treatment should be between 0 and 1")
}
if (!all(selected.biomarker.quantiles >= 0 & selected.biomarker.quantiles < 1)) {
stop("quantiles of the biomarker measured in controls must be at least 0 and less than 1")
}
##########################################################
## If a dataset is provided, calculate ROC directly (1 biomarker) or estimate with logistic model (> 1 biomarker) ##
##########################################################
if (!is.null(data)) {
if (count.vars == 2) {
biomarker.name <- formula.vars[2]
biomarker <- data[, biomarker.name]
my.roc <- do.call(what=roc, args=list("formula"=formula, "data"=data, "smooth"=smooth.roc))
} else if (count.vars > 2) {
glm.multiple.markers <- do.call(what=glm, args=list("formula"=formula, "data"=data, "family"=family))
biomarker <- predict(glm.multiple.markers, type="response")
}
}
biomarker.screening.thresholds <- quantile(biomarker, prob=selected.biomarker.quantiles)
N <- nrow(data)
NNS <- sapply(biomarker.screening.thresholds, function(x) N / sum(biomarker > x))
event.rate <- sapply(biomarker.screening.thresholds, function(x) sum(response[biomarker > x]) / sum(biomarker > x))
# check for NaN/NA values in event rate (indicates no events for observations above a certain level of biomarker)
idx.missing.event.rate <- is.na(event.rate)
if (any(idx.missing.event.rate) == TRUE) {
warning(paste("There were no events for observations with biomarker values above the following quantile(s): ", do.call(paste, c(as.list(selected.biomarker.quantiles[idx.missing.event.rate]), sep=", ")), ". As a result, we removed ", length(sum(idx.missing.event.rate)), " quantile(s) from the analysis.", sep=""))
selected.biomarker.quantiles <- selected.biomarker.quantiles[-which(idx.missing.event.rate)]
biomarker.screening.thresholds <- quantile(biomarker, prob=selected.biomarker.quantiles)
NNS <- sapply(biomarker.screening.thresholds, function(x) N / sum(biomarker > x))
event.rate <- sapply(biomarker.screening.thresholds, function(x) sum(response[biomarker > x]) / sum(biomarker > x))
}
SS <- sample_size(event.rate=event.rate, reduction.under.treatment=reduction.under.treatment, alpha=alpha, power=power, alternative=alternative)
N.screen <- SS * NNS
N.screen.increase.percentage <- ((N.screen - N.screen[1]) / N.screen[1]) * 100
if (!is.null(cost.screening) & !is.null(cost.keeping)) {
total.cost <- SS * (cost.keeping + cost.screening * NNS)
total.cost[1] <- SS[1] * cost.keeping
cost.reduction.percentage <- ((total.cost[1] - total.cost) / total.cost[1]) * 100
total.cost.no.screening <- cost.keeping * SS[1]
ind.min.total.cost <- which.min(total.cost)
}
# also allow for bootstrap to estimate standard errors
if (do.bootstrap == TRUE) {
n.quantiles <- length(biomarker.screening.thresholds)
NNS.boot <- matrix(NA, nrow=n.quantiles, ncol=n.bootstrap)
event.rate.boot <- matrix(NA, nrow=n.quantiles, ncol=n.bootstrap)
SS.boot <- matrix(NA, nrow=n.quantiles, ncol=n.bootstrap)
N.screen.boot <- matrix(NA, nrow=n.quantiles, ncol=n.bootstrap)
N.screen.increase.percentage.boot <- matrix(NA, nrow=n.quantiles, ncol=n.bootstrap)
total.cost.boot <- matrix(NA, nrow=n.quantiles, ncol=n.bootstrap)
cost.reduction.percentage.boot <- matrix(NA, nrow=n.quantiles, ncol=n.bootstrap)
total.cost.no.screening.boot <- rep(NA, n.bootstrap)
ind.min.total.cost.boot <- rep(NA, n.bootstrap)
zero.event.rate.count <- 0
for (b in 1:n.bootstrap) {
zero.event.rates <- 100 ## just initializing
while (zero.event.rates > 0) {
idx.bootstrap <- sample(1:N, size=N, replace=TRUE)
data.boot <- data[idx.bootstrap, ]
response.boot <- data.boot[, response.name]
if (count.vars == 2) {
biomarker.boot <- data.boot[, biomarker.name]
} else if (count.vars > 2) {
biomarker.boot <- predict(glm.multiple.markers, newdata=data.boot, type="link")
}
biomarker.screening.thresholds.boot <- quantile(biomarker.boot, prob=selected.biomarker.quantiles)
event.rate.boot[, b] <- sapply(biomarker.screening.thresholds.boot, function(x) sum(response.boot[biomarker.boot > x]) / sum(biomarker.boot > x))
zero.event.rates <- sum(event.rate.boot[, b] == 0 | is.na(event.rate.boot[, b]))
if (zero.event.rates > 0) {
zero.event.rate.count <- zero.event.rate.count + 1
}
}
NNS.boot[, b] <- sapply(biomarker.screening.thresholds.boot, function(x) N / sum(biomarker.boot > x))
SS.boot[, b] <- sample_size(event.rate=event.rate.boot[, b], reduction.under.treatment=reduction.under.treatment, alpha=alpha, power=power, alternative=alternative)
N.screen.boot[, b] <- SS.boot[, b] * NNS.boot[, b] # total number of patients needed to be screened
N.screen.increase.percentage.boot[, b] <- ((N.screen.boot[, b] - N.screen.boot[1, b]) / N.screen.boot[1, b]) * 100
if (!is.null(cost.screening) & !is.null(cost.keeping)) {
total.cost.boot[, b] <- SS.boot[, b] * (cost.keeping + cost.screening * NNS.boot[, b]) # total cost
total.cost.boot[1, b] <- SS.boot[1, b] * cost.keeping # no screening
cost.reduction.percentage.boot[ , b] <- ((total.cost.boot[1, b] - total.cost.boot[, b]) / total.cost.boot[1, b]) * 100
total.cost.no.screening.boot[b] <- cost.keeping * SS.boot[1, b]
ind.min.total.cost.boot[b] <- which.min(total.cost.boot[, b])
}
}
## get bootstrap CIs
boot.ci.SS <- matrix(NA, nrow=n.quantiles, ncol=2)
colnames(boot.ci.SS) <- c("SS.LB", "SS.UB")
boot.ci.NNS <- matrix(NA, nrow=n.quantiles, ncol=2)
colnames(boot.ci.NNS) <- c("NNS.LB", "NNS.UB")
boot.ci.event.rate <- matrix(NA, nrow=n.quantiles, ncol=2)
colnames(boot.ci.event.rate) <- c("event.rate.LB", "event.rate.UB")
boot.ci.N.screen <- matrix(NA, nrow=n.quantiles, ncol=2)
colnames(boot.ci.N.screen) <- c("N.screen.LB", "N.screen.UB")
boot.ci.N.screen.increase.percentage <- matrix(NA, nrow=n.quantiles, ncol=2)
colnames(boot.ci.N.screen.increase.percentage) <- c("N.screen.increase.percentage.LB", "N.screen.increase.percentage.UB")
boot.ci.total.cost <- matrix(NA, nrow=n.quantiles, ncol=2)
colnames(boot.ci.total.cost) <- c("total.cost.LB", "total.cost.UB")
boot.ci.cost.reduction.percentage <- matrix(NA, nrow=n.quantiles, ncol=2)
colnames(boot.ci.cost.reduction.percentage) <- c("cost.reduction.percentage.LB", "cost.reduction.percentage.UB")
for (r in 1:n.quantiles) {
boot.ci.SS[r, ] <- quantile(SS.boot[r, ], probs=c(0.025, 0.975))
boot.ci.NNS[r, ] <- quantile(NNS.boot[r, ], probs=c(0.025, 0.975))
boot.ci.event.rate[r, ] <- quantile(event.rate.boot[r, ], probs=c(0.025, 0.975))
boot.ci.N.screen[r, ] <- quantile(N.screen.boot[r, ], probs=c(0.025, 0.975))
boot.ci.N.screen.increase.percentage[r, ] <- quantile(N.screen.increase.percentage.boot[r, ], probs=c(0.025, 0.975))
if (!is.null(cost.screening) & !is.null(cost.keeping)) {
boot.ci.total.cost[r, ] <- quantile(total.cost.boot[r, ], probs=c(0.025, 0.975))
boot.ci.cost.reduction.percentage[r, ] <- quantile(cost.reduction.percentage.boot[r, ], probs=c(0.025, 0.975))
}
}
if (zero.event.rate.count > 0) {
warning(paste(zero.event.rate.count, "bootstrap replications had zero events above a specified biomarker threshold and were re-sampled."))
}
if (!is.null(cost.screening) & !is.null(cost.keeping)) {
estimates <- as.data.frame(cbind(selected.biomarker.quantiles, biomarker.screening.thresholds, event.rate, NNS, SS, N.screen, N.screen.increase.percentage, total.cost, cost.reduction.percentage), row.names=NULL)
estimates$selected.biomarker.quantiles <- estimates$selected.biomarker.quantiles * 100
rownames(estimates) <- NULL
bootstrap.CIs <- as.data.frame(cbind(boot.ci.event.rate, boot.ci.NNS, boot.ci.SS, boot.ci.N.screen, boot.ci.N.screen.increase.percentage, boot.ci.total.cost, boot.ci.cost.reduction.percentage))
return(list("biomarker"=biomarker, "response"=response, "simulation"=FALSE, "estimates"=estimates, "bootstrap.CIs"=bootstrap.CIs, "estimates.min.total.cost"=estimates[ind.min.total.cost, ]))
} else {
estimates <- as.data.frame(cbind(selected.biomarker.quantiles, biomarker.screening.thresholds, event.rate, NNS, SS, N.screen, N.screen.increase.percentage), row.names=NULL)
estimates$selected.biomarker.quantiles <- estimates$selected.biomarker.quantiles * 100
bootstrap.CIs <- as.data.frame(cbind(boot.ci.event.rate, boot.ci.NNS, boot.ci.SS, boot.ci.N.screen, boot.ci.N.screen.increase.percentage))
return(list("biomarker"=biomarker, "response"=response, "simulation"=FALSE, "estimates"=estimates, "bootstrap.CIs"=bootstrap.CIs))
}
} else {
if (!is.null(cost.screening) & !is.null(cost.keeping)) {
estimates <- as.data.frame(cbind(selected.biomarker.quantiles, biomarker.screening.thresholds, event.rate, NNS, SS, N.screen, N.screen.increase.percentage, total.cost, cost.reduction.percentage), row.names=NULL)
estimates$selected.biomarker.quantiles <- estimates$selected.biomarker.quantiles * 100
return(list("biomarker"=biomarker, "response"=response, "simulation"=FALSE, "estimates"=estimates, "estimates.min.total.cost"=estimates[ind.min.total.cost, ]))
} else {
estimates <- as.data.frame(cbind(selected.biomarker.quantiles, biomarker.screening.thresholds, event.rate, NNS, SS, N.screen, N.screen.increase.percentage), row.names=NULL)
estimates$selected.biomarker.quantiles <- estimates$selected.biomarker.quantiles * 100
return(list("biomarker"=biomarker, "response"=response, "simulation"=FALSE, "estimates"=estimates))
}
}
}
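## Note: the bootstrap intervals returned in `bootstrap.CIs' are simple percentile
## intervals, i.e. the 2.5% and 97.5% quantiles of the n.bootstrap re-estimates of
## each summary measure at each screening threshold (see the quantile(..., probs =
## c(0.025, 0.975)) calls above).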
## Source file: /scratch/gouwar.j/cran-all/cranData/BioPET/R/enrichment_analysis.R
#' Prognostic Enrichment with Simulated Data
#'
#' Evaluating biomarkers for prognostic enrichment of clinical trials using simulated data
#'
#' @param baseline.event.rate A number between 0 and 1 indicating the prevalence of the event in the study population.
#' @param reduction.under.treatment A number between 0 and 1 indicating the percent reduction in event rate under treatment that the trial should be able to detect with the specified power.
#' @param estimated.auc A numeric vector, with each entry between 0.5 and 1, that specifies the AUC for each biomarker to use in simulations.
#' @param cost.screening A positive number indicating the cost of screening a patient to determine trial eligibility. This argument is optional; if both cost.screening and cost.keeping are specified, then the total cost of the trial based on each screening threshold is estimated and returned.
#' @param cost.keeping A positive number indicating the cost of retaining a patient in the trial after enrolling. This argument is optional; if both cost.screening and cost.keeping are specified, then the total cost of the trial based on each screening threshold is estimated and returned.
#' @param roc.type A character vector with the same length as the estimated.auc argument. Each entry must be one of "symmetric", "right.shifted", or "left.shifted", which describes the general shape of the ROC curve to use for simulated data. Defaults to "symmetric" for each biomarker.
#' @param simulation.sample.size A positive number giving the sample size to use for simulated data. Defaults to 500,000 (to help see trends).
#' @param power Number between 0 and 1 giving the power the trial should have to reject the null hypothesis that there is no treatment effect. Defaults to 0.9.
#' @param alpha Number between 0 and 1 giving the type I error rate for testing the null hypothesis that there is no treatment effect. Defaults to 0.025.
#' @param alternative Character specifying whether the alternative hypothesis is one-sided (``one.sided''), with a lower outcome probability in the treatment group, or two-sided (``two.sided''). Defaults to ``one.sided''.
#' @param selected.biomarker.quantiles Numeric vector specifying the quantiles (in percent) of the biomarker measured in controls that will be used to screen trial participants. Defaults to 0, 5, ..., 95. All entries must be at least 0 and less than 100.
#' @return A list with components
#' \itemize{
#' \item estimates: A data frame with the following summary measures for each biomarker threshold that is used to screen trial participants: `selected.biomarker.quantiles': quantiles of observed biomarker values used for screening, `biomarker.screening.thresholds': the values of the biomarker corresponding to the quantiles, `event.rate': post-screening event rate, `NNS': the estimated number of patients needed to screen to identify one patient eligible for the trial, `SS': the sample size in a clinical trial enrolling only patients whose biomarker-based disease risk is above the level used for screening, `N.screen': the total number of individuals whose biomarker values are screened to determine whether they should be enrolled in the trial, `N.screen.increase.percentage': the percentage change in N.screen relative to a trial that does not screen based on the biomarker, `total.cost': the estimated total cost of running the trial if the biomarker were used for prognostic enrichment (if cost.screening and cost.keeping are specified), `cost.reduction.percentage': the reduction in total cost relative to a trial that does not screen based on the biomarker, `Biomarker': label for the biomarker.
#' \item roc.data: Data frame with three columns -- `FPR', `TPR', and `Biomarker' -- that are used to make the ROC plots for each biomarker.
#' \item simulation: Logical indicating whether data were simulated (always TRUE for the \code{\link{enrichment_simulation}} function).
#' }
#'
#' @seealso \code{\link{enrichment_analysis}}, \code{\link{plot_enrichment_summaries}}
#' @examples
#' ## three biomarkers with symmetric ROC curves
#' simulation.three.markers <- enrichment_simulation(baseline.event.rate=0.2,
#' reduction.under.treatment=0.3,
#' estimated.auc=c(0.72, 0.82, 0.85),
#' roc.type=c("symmetric", "symmetric", "symmetric"),
#' cost.screening=1,
#' cost.keeping=10,
#' simulation.sample.size=1e+5)
#' head(simulation.three.markers$estimates)
#'
#' @import pROC
#' @import VGAM
#' @importFrom graphics abline lines par
#' @importFrom stats binomial glm qnorm quantile rbinom rnorm
#' @export
enrichment_simulation <- function(baseline.event.rate,
reduction.under.treatment,
estimated.auc,
roc.type=NULL,
cost.screening=NULL,
cost.keeping=NULL,
simulation.sample.size=5e+5,
alternative=c("one.sided", "two.sided"),
power=0.9,
alpha=0.025,
selected.biomarker.quantiles=seq(from=0, to=95, by=5)) {
#############
## Check arguments ##
#############
stopifnot(is.numeric(baseline.event.rate))
stopifnot(baseline.event.rate > 0 & baseline.event.rate < 1)
updated.auc <- estimated.auc[!is.na(estimated.auc)]
n.auc <- length(updated.auc)
for (i in 1:n.auc) {
stopifnot(updated.auc[i] >= 0.5 & updated.auc[i] <= 1)
}
if (is.null(roc.type)) {
roc.type <- rep("symmetric", n.auc)
}
n.roc.type <- length(roc.type)
if (n.auc != n.roc.type) {
stop("estimated.auc and roc.type need to have the same length")
}
if (is.null(cost.screening) & !is.null(cost.keeping)) {
stop("if cost.keeping is specified, then cost.screening must also be specified")
}
if (!is.null(cost.screening) & is.null(cost.keeping)) {
stop("if cost.screening is specified, then cost.keeping must also be specified")
}
if (!is.null(cost.screening) & !is.null(cost.keeping) & (cost.screening < 0 | cost.keeping < 0)) {
stop("cost.keeping and cost.keeping should both be positive numbers")
}
for (i in 1:n.roc.type) {
if (!(roc.type[i] %in% c("symmetric", "right.shifted", "left.shifted"))) {
stop("each entry of the roc.type argument needs to be either symmetric, right.shifted, or left.shifted")
}
}
updated.roc.type <- roc.type[!is.na(estimated.auc)]
alternative <- match.arg(alternative)
stopifnot(is.numeric(power))
if (!(power > 0 & power < 1)) {
stop("power should be between 0 and 1")
}
if (!(alpha > 0 & alpha < 1)) {
stop("alpha should be between 0 and 1")
}
if (!(reduction.under.treatment > 0 & reduction.under.treatment < 1)) {
stop("reduction.under.treatment should be between 0 and 1")
}
if (!all(selected.biomarker.quantiles >= 0 & selected.biomarker.quantiles < 100)) {
stop("quantiles of the biomarker measured in controls must be at least 0 and less than 100")
}
if (simulation.sample.size < 0) {
stop("simulation.sample.size should be a positive number")
}
N <- simulation.sample.size
estimates.check <- NULL
roc.data.check <- NULL
for (i in 1:n.auc) {
simulation.data <- user_auc_and_roc_type_to_data(N=N, baseline.event.rate=baseline.event.rate,
auc=updated.auc[i], roc.type=updated.roc.type[i],
selected.biomarker.quantiles=selected.biomarker.quantiles / 100)
roc.data <- user_auc_to_plots(auc=updated.auc[i], baseline.event.rate=baseline.event.rate, roc.type=updated.roc.type[i], prototypical=FALSE)
biomarker <- simulation.data$biomarker
response <- simulation.data$response
# use the thresholds computed in user_auc_and_roc_type_to_data(): for right-shifted
# ROC curves they are quantiles of -biomarker, matching the "-biomarker > x"
# screening rule in the else-branch below
biomarker.screening.thresholds <- simulation.data$biomarker.screening.thresholds
if (updated.roc.type[i] %in% c("symmetric", "left.shifted")) {
NNS <- sapply(biomarker.screening.thresholds, function(x) N / sum(biomarker > x))
event.rate <- sapply(biomarker.screening.thresholds, function(x) sum(response[biomarker > x]) / sum(biomarker > x))
} else {
NNS <- sapply(biomarker.screening.thresholds, function(x) N / sum(-biomarker > x))
event.rate <- sapply(biomarker.screening.thresholds, function(x) sum(response[-biomarker > x]) / sum(-biomarker > x))
}
SS <- sample_size(event.rate=event.rate, reduction.under.treatment=reduction.under.treatment, alpha=alpha, power=power, alternative=alternative)
N.screen <- SS * NNS
N.screen.increase.percentage <- ((N.screen - N.screen[1]) / N.screen[1]) * 100
cost.missing <- is.null(cost.screening) | is.null(cost.keeping)
if (cost.missing == TRUE) {
estimates <- as.data.frame(cbind(selected.biomarker.quantiles, biomarker.screening.thresholds, event.rate, NNS, SS, N.screen, N.screen.increase.percentage))
table.data <- as.data.frame(cbind(paste(selected.biomarker.quantiles, "%", sep=""),
round(event.rate, 2),
round(SS, 0),
round(NNS, 1),
round(N.screen, 0),
round(N.screen.increase.percentage, 1)))
table.data <- as.data.frame(cbind(apply(table.data[, 1:5], 2, function(x) as.character(x)),
paste(as.character(table.data[, 6]), "%", sep="")))
names(table.data) <- c("Percent of Patients Screened from Trial", "Event Rate Among Biomarker-Positive Patients", "Sample Size", "NNS", "Total Screened", "Percent Change in Total Screened")
rownames(table.data) <- NULL
} else {
total.cost <- SS * (cost.keeping + cost.screening * NNS) # total cost
total.cost[1] <- SS[1] * cost.keeping
cost.reduction.percentage <- ((total.cost[1] - total.cost) / total.cost[1]) * 100
estimates <- as.data.frame(cbind(selected.biomarker.quantiles, biomarker.screening.thresholds, event.rate, NNS, SS, N.screen, N.screen.increase.percentage, total.cost, cost.reduction.percentage))
table.data <- as.data.frame(cbind(paste(selected.biomarker.quantiles , "%", sep=""),
round(event.rate, 2),
round(SS, 0),
round(NNS, 1),
round(N.screen, 0),
round(N.screen.increase.percentage, 1),
round(total.cost, 0),
round(cost.reduction.percentage, 1)))
table.data <- cbind(apply(table.data[, 1:5], 2, function(x) as.character(x)),
paste(as.character(table.data[, 6]), "%", sep=""),
as.character(table.data[, 7]),
paste(as.character(table.data[, 8]), "%", sep=""))
table.data <- as.data.frame(table.data)
names(table.data) <- c("Percent of Patients Screened from Trial", "Event Rate Among Biomarker-Positive Patients", "Sample Size", "NNS", "Total Screened", "Percent Change in Total Screened", "Total Costs for Screening and Patients in Trial", "Percent Reduction in Total Cost")
rownames(table.data) <- NULL
}
estimates$Biomarker <- paste("Biomarker", as.character(i), sep=" ")
estimates.check <- rbind(estimates.check, estimates)
roc.df <- as.data.frame(cbind(roc.data$fpr.vec, roc.data$tpr.vec))
roc.df$Biomarker <- paste("Biomarker", as.character(i), sep=" ")
roc.data.check <- rbind(roc.data.check, roc.df)
}
names(roc.data.check) <- c("FPR", "TPR", "Biomarker")
return(list("simulation"=TRUE, "estimates"=estimates.check,"roc.data"=roc.data.check))
}
user_auc_and_roc_type_to_data <- function(N, baseline.event.rate, auc, roc.type, selected.biomarker.quantiles) {
if (roc.type == "right.shifted") {
# need to flip case/control labels and eventually call a test "positive" if it is below the threshold, rather than above
response <- rbinom(n=N, size=1, prob=baseline.event.rate)
biomarker <- numeric(N)
biomarker[response == 1] <- rlomax(n=sum(response==1), scale=1, shape3.q=1)
biomarker[response == 0] <- rlomax(n=sum(response==0), scale=1, shape3.q=(1-auc)/auc)
biomarker.screening.thresholds <- quantile(-biomarker, prob=selected.biomarker.quantiles)
}
if (roc.type == "symmetric") {
response <- rbinom(n=N, size=1, prob=baseline.event.rate)
sd.cases <- 1
b <- 1 / sd.cases
a <- sqrt(1 + b^2) * qnorm(auc)
mean.cases <- a * sd.cases
biomarker <- numeric(N)
biomarker[response == 1] <- rnorm(n=sum(response==1), mean=mean.cases, sd=sd.cases)
biomarker[response == 0] <- rnorm(n=sum(response==0), mean=0, sd=1)
biomarker.screening.thresholds <- quantile(biomarker, prob=selected.biomarker.quantiles)
} else if (roc.type == "left.shifted") {
response <- rbinom(n=N, size=1, prob=baseline.event.rate)
biomarker <- numeric(N)
biomarker[response == 0] <- rlomax(n=sum(response==0), scale=1, shape3.q=1)
biomarker[response == 1] <- rlomax(n=sum(response==1), scale=1, shape3.q=(1-auc)/auc)
biomarker.screening.thresholds <- quantile(biomarker, prob=selected.biomarker.quantiles)
}
return(list("selected.biomarker.quantiles"=selected.biomarker.quantiles,
"biomarker.screening.thresholds"=biomarker.screening.thresholds,
"response"=response, "biomarker"=biomarker))
}
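## Note on the "symmetric" case above: with controls ~ N(0, 1) and cases
## ~ N(mean.cases, sd.cases), the binormal AUC is
## pnorm(mean.cases / sqrt(1 + sd.cases^2)), so choosing
## a = sqrt(1 + b^2) * qnorm(auc) with b = 1 / sd.cases reproduces the requested AUC.
## Quick sanity check (illustrative only, kept as a comment; `chk' is a hypothetical
## object name, and pROC is already imported by this package):
# chk <- user_auc_and_roc_type_to_data(N = 2e+4, baseline.event.rate = 0.2,
#                                      auc = 0.75, roc.type = "symmetric",
#                                      selected.biomarker.quantiles = seq(0, 0.95, by = 0.05))
# pROC::roc(chk$response, chk$biomarker)$auc   # should be close to 0.75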
user_auc_to_plots <- function(auc, baseline.event.rate, roc.type=NULL, prototypical=TRUE, n=5e+4, n.thresholds=1000, verbose=TRUE) {
response <- rbinom(n, size=1, prob=baseline.event.rate)
## high TPR earlier (orange)
x.left.shifted <- rep(NA, n)
x.left.shifted[response == 0] <- rlomax(n=sum(response==0), scale=1, shape3.q=1)
x.left.shifted[response == 1] <- rlomax(n=sum(response==1), scale=1, shape3.q=(1-auc)/auc)
result.left.shifted <- get_roc(x=x.left.shifted, response=response, test.positive="higher", n.thresholds=n.thresholds, verbose=verbose)
## high TPR later (cyan)
x.right.shifted <- rep(NA, n)
x.right.shifted[response == 1] <- rlomax(n=sum(response==1), scale=1, shape3.q=1)
x.right.shifted[response == 0] <- rlomax(n=sum(response==0), scale=1, shape3.q=(1-auc)/auc)
result.right.shifted <- get_roc(x=x.right.shifted, response=response, test.positive="lower", n.thresholds=n.thresholds, verbose=verbose)
## symmetric (black)
x.symmetric <- rep(NA, n)
x.symmetric[response == 0] <- rnorm(n=sum(response==0), mean=0, sd=1)
x.symmetric[response == 1] <- rnorm(n=sum(response==1), mean=sqrt(2) * qnorm(auc), sd=1)
result.symmetric <- get_roc(x=x.symmetric, response=response, test.positive="higher", n.thresholds=n.thresholds, verbose=verbose)
# make plots
if (prototypical == TRUE) {
par(pty="s")
plot(result.left.shifted$fpr.vec, result.left.shifted$tpr.vec, type="l", lty=2, lwd=3, col="orange", ylim=c(0, 1), xlim=c(0, 1),
xlab="FPR (%)", ylab="TPR (%)",
main=paste("Prototypical ROC Curves \n (Example for AUC=", auc, ")", sep=""))
lines(result.right.shifted$fpr.vec, result.right.shifted$tpr.vec, type="l", lty=3, lwd=3, col="cyan")
lines(result.symmetric$fpr.vec, result.symmetric$tpr.vec, type="l", lty=1, lwd=3, col="black")
abline(a=0, b=1, lwd=2, lty="dashed", col="gray")
} else {
if (roc.type == "symmetric") {
fpr.vec <- result.symmetric$fpr.vec
tpr.vec <- result.symmetric$tpr.vec
} else if (roc.type == "right.shifted") {
fpr.vec <- result.right.shifted$fpr.vec
tpr.vec <- result.right.shifted$tpr.vec
} else if (roc.type == "left.shifted") {
fpr.vec <- result.left.shifted$fpr.vec
tpr.vec <- result.left.shifted$tpr.vec
}
return(list("fpr.vec"=fpr.vec, "tpr.vec"=tpr.vec))
}
}
get_roc <- function(x, response, test.positive=c("higher","lower"), n.thresholds=1000, verbose=TRUE) {
test.positive <- match.arg(test.positive)
n.healthy <- sum(response == 0)
n.disease <- sum(response == 1)
if (test.positive == "lower") {
x <- -x
}
thresholds <- as.numeric(sort(quantile(x=x[response==0], prob= seq(from=0, to=0.99, length.out=n.thresholds), decreasing=FALSE)))
tpr <- rep(NA, n.thresholds)
fpr <- rep(NA, n.thresholds)
x.cases <- x[response==1]
x.controls <- x[response==0]
for (i in 1:n.thresholds) {
tpr[i] <- sum(x.cases > thresholds[i]) / n.disease
fpr[i] <- sum(x.controls > thresholds[i]) / n.healthy
}
if (verbose == TRUE) {
auc.estimate <- mean(sample(x.cases, size=5e+5, replace=TRUE) > sample(x.controls, size=5e+5, replace=TRUE))
return(list("auc.estimate"=auc.estimate, "fpr.vec"=fpr, "tpr.vec"=tpr))
} else {
return(list("fpr.vec"=fpr, "tpr.vec"=tpr))
}
}
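## Illustrative use of get_roc() (kept as a comment so nothing runs at load time;
## the *.demo object names are hypothetical). A marker distributed N(1, 1) in cases
## and N(0, 1) in controls has true AUC = pnorm(1 / sqrt(2)), roughly 0.76, which
## auc.estimate should approximate:
# set.seed(1)
# y.demo <- rbinom(5000, size = 1, prob = 0.3)
# x.demo <- rnorm(5000, mean = y.demo, sd = 1)
# roc.demo <- get_roc(x = x.demo, response = y.demo, test.positive = "higher")
# roc.demo$auc.estimate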
## Source file: /scratch/gouwar.j/cran-all/cranData/BioPET/R/enrichment_simulation.R
#' Plotting Prognostic Enrichment Estimates
#'
#' Plot summaries of prognostic enrichment of clinical trials estimated by the \code{\link{enrichment_analysis}} and \code{\link{enrichment_simulation}} functions.
#' @param x Object returned by either the \code{\link{enrichment_analysis}} or the \code{\link{enrichment_simulation}} function.
#' @param text.size.x.axis Size of text for the x-axis of plots. Defaults to 10.
#' @param text.size.y.axis Size of text for the y-axis of plots. Defaults to 10.
#' @param text.size.plot.title Size of text for the plot titles. Defaults to 10.
#' @param text.size.axis.ticks Size of axis tick marks for plots. Defaults to 10.
#' @param annotate.no.screening.cost Logical indicating whether to annotate the total cost curve at the point where no biomarker screening occurs. Defaults to FALSE.
#' @param smooth.roc Logical indicating whether the ROC curves (plotted with the roc() function in the `pROC' package) should be smoothed. Defaults to TRUE.
#' @return A grid of either 4 or 6 plots, summarizing the results of either the \code{\link{enrichment_analysis}} or the \code{\link{enrichment_simulation}} function.
#' @seealso \code{\link{enrichment_analysis}}, \code{\link{enrichment_simulation}}
#' @examples
#'
#' data(dcaData)
#' # one marker
#' analysis.single.marker <- enrichment_analysis(Cancer ~ Marker1,
#' data=dcaData,
#' reduction.under.treatment=0.3,
#' cost.screening=100, cost.keeping=1000)
#' plot_enrichment_summaries(analysis.single.marker)
#'
#' # two markers
#' analysis.two.markers <- enrichment_analysis(Cancer ~ Marker1 + Marker2,
#' data=dcaData,
#' reduction.under.treatment=0.3,
#' cost.screening=100,
#' cost.keeping=1000)
#' plot_enrichment_summaries(analysis.two.markers)
#' @import ggplot2
#' @import pROC
#' @import gridExtra
#' @export
plot_enrichment_summaries <- function(x,
text.size.x.axis=10,
text.size.y.axis=10,
text.size.plot.title=10,
text.size.axis.ticks=10,
annotate.no.screening.cost=FALSE,
smooth.roc=TRUE) {
## Determine whether bootstrap CIs are available and store data used for plotting
bootstrap.indicator <- "bootstrap.CIs" %in% names(x)
if (bootstrap.indicator == TRUE) {
estimates <- cbind(x$estimates, x$bootstrap.CIs)
} else {
estimates <- x$estimates
}
## Determine whether we are using real or simulated data
using.simulated.data <- x$simulation
if (using.simulated.data == TRUE) {
roc.data <- x$roc.data
roc.data$FPR <- roc.data$FPR * 100
roc.data$TPR <- roc.data$TPR * 100
} else {
real.roc <- roc(response=x$response, predictor=x$biomarker, smooth=smooth.roc)
roc.data <- as.data.frame(cbind((1 - real.roc$specificities) * 100, real.roc$sensitivities * 100))
names(roc.data) <- c("FPR", "TPR")
}
## Determine whether the total cost information is in the data frame (i.e. whether the user specified cost.screening and cost.keeping earlier)
cost.indicator <- "total.cost" %in% names(estimates) & "cost.reduction.percentage" %in% names(estimates)
## Plot ROC curves for specified biomarkers (simulated data only)
if (using.simulated.data == TRUE) {
plot.ROC <- ggplot(roc.data, aes_string(x="FPR", y="TPR", group="Biomarker", shape="Biomarker", col="Biomarker", linetype="Biomarker", fill="Biomarker")) + geom_line(size=0.9) + geom_abline(intercept = 0, slope = 1, linetype="dashed", colour="gray") + coord_fixed() +
labs(x="FPR (%)", y="TPR (%)") +
ggtitle("ROC Curve for Specified Biomarkers")
} else {
plot.ROC <- ggplot(roc.data, aes_string(x="FPR", y="TPR")) + geom_line(size=0.9) + geom_abline(intercept = 0, slope = 1, linetype="dashed", colour="gray") + coord_fixed() +
labs(x="FPR (%)", y="TPR (%)") +
ggtitle("ROC Curve for Specified Biomarker")
}
theme_update(plot.title=element_text(hjust=0.5))
plot.ROC <- plot.ROC + expand_limits(y=0) +
theme(axis.title.x = element_text(size=text.size.x.axis)) +
theme(axis.title.y = element_text(size=text.size.y.axis)) +
theme(axis.text= element_text(size=text.size.axis.ticks)) +
theme(plot.title = element_text(size=text.size.plot.title)) + scale_x_continuous(expand = c(0, 0)) +
theme(legend.title=element_blank())
if (cost.indicator == TRUE) {
plot.ROC <- plot.ROC + theme(legend.text=element_text(size=10), legend.key.size = unit(0.45, "cm"), legend.position=c(0.6, 0.25))
} else {
plot.ROC <- plot.ROC + theme(legend.text=element_text(size=13), legend.key.size = unit(0.6, "cm"), legend.position=c(0.7, 0.2))
}
## Biomarker percentile vs. sample size
if (using.simulated.data == TRUE) {
plot.sample.size <- ggplot(estimates, aes_string(x="selected.biomarker.quantiles", y="SS", group="Biomarker", shape="Biomarker", col="Biomarker", linetype="Biomarker", fill="Biomarker")) +
labs(x="Percent of Patients Screened from Trial", y="Sample Size") +
ggtitle("Clinical Trial Total Sample Size")
} else {
plot.sample.size <- ggplot(estimates, aes_string(x="selected.biomarker.quantiles", y="SS")) +
labs(x="Percent of Patients Screened from Trial", y="Sample Size") +
ggtitle("Clinical Trial Total Sample Size")
}
plot.sample.size <- plot.sample.size + geom_line(size=0.9) + geom_point(size=1.5) + theme(axis.title.x = element_text(size=text.size.x.axis)) +
theme(axis.title.y = element_text(size=text.size.y.axis)) +
theme(axis.text= element_text(size=text.size.axis.ticks)) +
theme(plot.title = element_text(size=text.size.plot.title)) + scale_x_continuous(expand = c(0, 0)) + theme(legend.position = 'none')
if (cost.indicator == TRUE) {
plot.sample.size <- plot.sample.size + labs(x="", y="Total Screened")
} else {
plot.sample.size <- plot.sample.size + labs(x="Percent of Patients Screened from Trial", y="Total Screened")
}
## Biomarker percentile vs. event rate after screening
if (using.simulated.data == TRUE) {
plot.event.rate <- ggplot(estimates, aes_string(x="selected.biomarker.quantiles", y="event.rate", group="Biomarker", shape="Biomarker", col="Biomarker", linetype="Biomarker", fill="Biomarker")) +
labs(x="", y="Event Rate") +
ggtitle("Event Rate Among \n Biomarker-Positive Patients")
} else {
plot.event.rate <- ggplot(estimates, aes_string(x="selected.biomarker.quantiles", y="event.rate")) +
labs(x="", y="Post-Screening Event Rate") +
ggtitle("Event Rate Among \n Biomarker-Positive Patients")
}
plot.event.rate <- plot.event.rate + geom_line(size=0.9) + geom_point(size=1.5) + expand_limits(y=0) +
theme(axis.title.x = element_text(size=text.size.x.axis)) +
theme(axis.title.y = element_text(size=text.size.y.axis)) +
theme(axis.text= element_text(size=text.size.axis.ticks)) +
theme(plot.title = element_text(size=text.size.plot.title)) + scale_x_continuous(expand = c(0, 0)) + theme(legend.position = 'none')
## Biomarker percentile vs. total # needing to be screened
if (using.simulated.data == TRUE) {
plot.N.screen <- ggplot(estimates, aes_string(x="selected.biomarker.quantiles", y="N.screen", group="Biomarker", shape="Biomarker", col="Biomarker", linetype="Biomarker", fill="Biomarker")) +
ggtitle("Total Number of Patients \n Screened to Enroll Trial")
} else {
plot.N.screen <- ggplot(estimates, aes_string(x="selected.biomarker.quantiles", y="N.screen")) + geom_line() + geom_point(size=1.5) +
ggtitle("Total Number of Patients \n Screened to Enroll Trial")
}
if (cost.indicator == TRUE) {
plot.N.screen <- plot.N.screen + labs(x="", y="Total Screened")
} else {
plot.N.screen <- plot.N.screen + labs(x="Percent of Patients Screened from Trial", y="Total Screened")
}
plot.N.screen <- plot.N.screen + geom_line(size=0.9) + geom_point(size=1.5) + expand_limits(y=0) +
theme(axis.title.x = element_text(size=text.size.x.axis)) +
theme(axis.title.y = element_text(size=text.size.y.axis)) +
theme(axis.text= element_text(size=text.size.axis.ticks)) +
theme(plot.title = element_text(size=text.size.plot.title)) + scale_x_continuous(expand = c(0, 0)) + theme(legend.position = 'none')
if (bootstrap.indicator == TRUE) {
## store bootstrap limits
limits.sample.size <- aes_string(ymin="SS.LB", ymax="SS.UB")
limits.event.rate <- aes_string(ymin="event.rate.LB", ymax="event.rate.UB")
limits.N.screen <- aes_string(ymin="N.screen.LB", ymax="N.screen.UB")
## plot bootstrap CIs
plot.sample.size <- plot.sample.size + geom_errorbar(limits.sample.size, width=5, linetype="dashed", colour="darkblue") + coord_cartesian(ylim=c(0, estimates$SS.UB[1]*1.05))
plot.event.rate <- plot.event.rate + geom_errorbar(limits.event.rate, width=5, linetype="dashed", colour="darkblue") + coord_cartesian(ylim=c(0, max(estimates$event.rate.UB)*1.05))
plot.N.screen <- plot.N.screen + geom_errorbar(limits.N.screen, width=5, linetype="dashed", colour="darkblue") + coord_cartesian(ylim=c(0, max(estimates$N.screen)*1.05))
}
## Biomarker percentile vs. total cost
if (cost.indicator == TRUE) {
ind.total.cost <- 1:nrow(estimates)
if (using.simulated.data == TRUE) {
plot.total.cost <- ggplot(estimates[ind.total.cost, ], aes_string(x="selected.biomarker.quantiles", y="total.cost", group="Biomarker", shape="Biomarker", col="Biomarker", linetype="Biomarker", fill="Biomarker")) + geom_line(size=0.9) + geom_point(size=2) + geom_hline(yintercept=0, linetype=2) +
labs(x="Percent of Patients Screened from Trial", y="Total Cost") +
ggtitle("Total Costs for Screening \n and Patients in Trial")
} else {
plot.total.cost <- ggplot(estimates[ind.total.cost, ], aes_string(x="selected.biomarker.quantiles", y="total.cost")) + geom_line() + geom_hline(yintercept=0, linetype=2) + geom_point(size=1.5) +
labs(x="Percent of Patients Screened from Trial", y="Total Cost") +
ggtitle("Total Costs for Screening \n and Patients in Trial")
}
plot.total.cost <- plot.total.cost + expand_limits(y=0)+
theme(axis.title.x = element_text(size=text.size.x.axis)) +
theme(axis.title.y = element_text(size=text.size.y.axis)) +
theme(axis.text= element_text(size=text.size.axis.ticks)) +
theme(plot.title = element_text(size=text.size.plot.title)) + scale_x_continuous(expand = c(0, 0)) + theme(legend.position = 'none')
if (annotate.no.screening.cost == TRUE) {
plot.total.cost <- plot.total.cost + annotate("text", x=estimates[1, "selected.biomarker.quantiles"], y=0.99 * estimates[1, "total.cost"],
angle=90, color="blue", size=2, label="no screening") +
annotate("point", x=estimates[1, "selected.biomarker.quantiles"], y=estimates[1, "total.cost"],
size=2, color="blue")
}
## Biomarker percentile vs. percentage reduction in total cost (relative to no screening scenario)
if (using.simulated.data == TRUE) {
plot.cost.reduction.percentage <- ggplot(estimates[ind.total.cost, ], aes_string(x="selected.biomarker.quantiles", y="cost.reduction.percentage", group="Biomarker", shape="Biomarker", col="Biomarker", linetype="Biomarker", fill="Biomarker")) + geom_line(size=0.9) + geom_point(size=1.5) + geom_hline(yintercept=0, linetype=2) +
labs(x="Percent of Patients Screened from Trial", y="% Reduction in Total Cost") +
ggtitle("Percent Reduction \n in Total Cost")
} else {
plot.cost.reduction.percentage <- ggplot(estimates[ind.total.cost, ], aes_string(x="selected.biomarker.quantiles", y="cost.reduction.percentage")) + geom_line() + geom_hline(yintercept=0, linetype=2) + geom_point(size=1.5) +
labs(x="Percent of Patients Screened from Trial", y="% Reduction in Total Cost") +
ggtitle("Percent Reduction \n in Total Cost")
}
plot.cost.reduction.percentage <- plot.cost.reduction.percentage + expand_limits(y=0) +
theme(axis.title.x = element_text(size=text.size.x.axis)) +
theme(axis.title.y = element_text(size=text.size.y.axis)) +
theme(axis.text= element_text(size=text.size.axis.ticks)) +
theme(plot.title = element_text(size=text.size.plot.title)) + scale_x_continuous(expand = c(0, 0)) + theme(legend.position = 'none')
if (bootstrap.indicator == TRUE) {
## bootstrap limits
limits.total.cost <- aes_string(ymin="total.cost.LB", ymax="total.cost.UB")
limits.cost.reduction.percentage <- aes_string(ymin="cost.reduction.percentage.LB", ymax="cost.reduction.percentage.UB")
lower.limit.cost.reduction.percentage <- max(min(estimates$cost.reduction.percentage.LB, 0), -20)
## plotting bootstrap CIs
plot.total.cost <- plot.total.cost + geom_errorbar(limits.total.cost, width=5, linetype="dashed", colour="darkblue") + coord_cartesian(ylim=c(0, estimates$total.cost.UB[1]*1.05))
plot.cost.reduction.percentage <- plot.cost.reduction.percentage + geom_errorbar(limits.cost.reduction.percentage, width=5, linetype="dashed", colour="darkblue") + coord_cartesian(ylim=c(lower.limit.cost.reduction.percentage, max(estimates$cost.reduction.percentage.UB) * 1.05))
}
args.to.plot <- list("plot.ROC"=plot.ROC, "plot.event.rate"=plot.event.rate, "plot.sample.size"=plot.sample.size, "plot.N.screen"=plot.N.screen, "plot.total.cost"=plot.total.cost, "plot.cost.reduction.percentage"=plot.cost.reduction.percentage)
} else {
args.to.plot <- list("plot.ROC"=plot.ROC, "plot.event.rate"=plot.event.rate, "plot.sample.size"=plot.sample.size, "plot.N.screen"=plot.N.screen)
}
## Show the plots the user wants to see in a grid
do.call(grid.arrange, c(args.to.plot, list(ncol=2)))
}
## Source file: /scratch/gouwar.j/cran-all/cranData/BioPET/R/plot_enrichment_summaries.R
sample_size <- function(event.rate, reduction.under.treatment,
alpha=0.025, power=0.9,
alternative=c("one.sided", "two.sided")) {
alternative <- match.arg(alternative)
p1 <- event.rate
p2 <- event.rate * (1 - reduction.under.treatment)
z1.one.sided <- qnorm(1 - alpha)
z1.two.sided <- qnorm(1 - alpha/2)
z2 <- qnorm(power)
if (alternative == "one.sided") {
SS <- 2 * ( (z1.one.sided * sqrt((p1 + p2) * (1 - (p1 + p2)/2) ) +
z2 * sqrt(p1 * (1 - p1) + p2 * (1 - p2)) )^2 / (p1 - p2)^2)
} else if (alternative == "two.sided") {
SS <- 2 * ( (z1.two.sided * sqrt((p1 + p2) * (1 - (p1 + p2)/2) ) +
z2 * sqrt(p1 * (1 - p1) + p2 * (1 - p2)) )^2 / (p1 - p2)^2)
}
return(SS)
}
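## The expression above is the usual normal-approximation sample size for comparing
## two proportions, returned as the total over both arms under 1:1 allocation
## (hence the leading factor of 2); note that
## (p1 + p2) * (1 - (p1 + p2) / 2) = 2 * pbar * (1 - pbar) with pbar = (p1 + p2) / 2.
## Worked example (illustrative only, kept as a comment): p1 = 0.30, p2 = 0.21
## (a 30% relative reduction), one-sided alpha = 0.025 and 90% power give a total
## of roughly 980 patients:
# sample_size(event.rate = 0.3, reduction.under.treatment = 0.3,
#             alpha = 0.025, power = 0.9, alternative = "one.sided")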
## Source file: /scratch/gouwar.j/cran-all/cranData/BioPET/R/sample_size.R
#' Example dataset for package 'BioPETsurv'
#'
#' A dataset containing values of two biomarkers and survival outcomes of 1533 individuals.
#'
#' @docType data
#'
#' @usage data(SurvMarkers)
#'
#' @format A data frame with 1533 rows and 4 variables:
#' \describe{
#' \item{time}{observed times of event or censoring}
#' \item{event}{indicator of event; 0 means censored and 1 means event}
#' \item{x1}{A modestly prognostic biomarker (concordance index=0.64)}
#' \item{x2}{A strongly prognostic biomarker (concordance index=0.82)}
#' }
"SurvMarkers"
## Source file: /scratch/gouwar.j/cran-all/cranData/BioPETsurv/R/SurvMarkers.R
# simulating data with biomarker and survival observations
if(getRversion() >= "2.15.1")
utils::globalVariables(c("surv", "level.enrichment"))
sim_data <- function(n = 500, biomarker = "normal", effect.size = 1.25,
baseline.hazard = "constant", end.time = 10, end.survival = 0.5, shape = NULL,
seed = 2333){
# effect size is log(HR) when sd(biomarker)=1
if (!baseline.hazard %in% c("constant","increasing","decreasing")){
stop("Invalid type of baseline hazard (should be constant/increasing/decreasing)")
}
if (!biomarker %in% c("normal","lognormal")){
stop("Invalid distribution of biomarker (should be normal/lognormal)")
}
if (end.survival <= 0 | end.survival >= 1){
stop("end.survival should be between 0 and 1")
}
##### simulating the data ######
set.seed(seed)
biom <- rnorm(n)
if (biomarker=="lognormal") biom <- (exp(biom)-mean(exp(biom)))/sd(exp(biom))
X <- biom
b <- log(effect.size)
Xb <- X*b
hr <- exp(Xb)
if (baseline.hazard=="constant"){
lambda.surv <- -log(end.survival)/end.time
#lambda.cens <- prob.censor*lambda.surv/(1-prob.censor)
lambda <- lambda.surv*hr
t.surv <- rexp(n, rate = lambda)
#t.cens <- rexp(n, rate = lambda.cens)
t.cens <- rep(end.time, n)
} else{
if (baseline.hazard=="increasing"){
if (is.null(shape)){
message("No Weibull shape parameter specified; defaults to shape = 2")
k <- 2
} else if (shape <= 1){
stop("Weibull shape should >1 for an increasing baseline hazard")
}
else k <- shape
}
if (baseline.hazard=="decreasing"){
if (is.null(shape)){
k <- 0.5
message("No Weibull shape parameter specified; defaults to shape = 0.5")
} else if (shape >= 1){
stop("Weibull shape should <1 for a decreasing baseline hazard")
}
else k <- shape
}
b.bl <- end.time*(-log(end.survival))^(-1/k)
lambda.surv <- b.bl^(-k)
lambda <- lambda.surv*hr
b <- lambda^(-1/k)
t.surv <- rweibull(n, shape = k, scale = b)
t.cens <- rep(end.time, n)
}
t.obs <- apply(cbind(t.surv,t.cens), 1, min)
event <- apply(cbind(t.surv,t.cens), 1, function(vec) as.numeric(vec[1]<vec[2]))
t.obs[t.obs>end.time] <- end.time
event[t.obs==end.time] <- 0
dat <- cbind(X,t.obs,event,t.surv)
dat <- as.data.frame(dat)
colnames(dat) <- c("biomarker","time.observed","event","time.event")
# plotting the K-M curve
cols <- gray.colors(7)
km.quantiles <- c(0, 0.25, 0.5, 0.75)
km.all <- survfit(Surv(dat$time.observed, dat$event)~1, error="greenwood")
dat1 <- as.data.frame(seq(0,max(km.all$time),by=max(km.all$time)/500))
colnames(dat1) <- "time"
for (j in 1:length(km.quantiles)){
q <- quantile(dat$biomarker,prob=km.quantiles[j])
sobj <- Surv(dat$time.observed, dat$event)[dat$biomarker>=q]
km <- survfit(sobj~1,error="greenwood")
survfun <- stepfun(km$time, c(1, km$surv))
dat1 <- cbind(dat1, survfun(dat1[,1]))
colnames(dat1)[j+1] <- paste(j,"surv",sep=".")
}
dat1 <- reshape(dat1, direction = 'long', timevar = 'level.enrichment',
varying=list(grep("surv", colnames(dat1), value=TRUE)),
times = as.character(km.quantiles),
v.names = c("surv"),
idvar='time')
g <- ggplot(dat1,aes(x=time, y=surv, colour=level.enrichment)) +
geom_line(size=1) + ylim(0,1) +
labs(title ="Kaplan-Meier survival curves",
x = "time", y = "survival estimate", color = "enrichment level") +
scale_color_manual(labels = as.character(km.quantiles), values = cols[1:4]) +
theme(plot.title = element_text(hjust = 0.5), legend.position="bottom")
plot(g)
return(list(data = dat, km.plot = g))
}
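## Illustrative call (kept as a comment; `sim.demo' is a hypothetical object name).
## With baseline.hazard = "constant", event times are exponential with the rate chosen
## so that survival at end.time equals end.survival for a subject at the biomarker
## mean (biomarker = 0), and follow-up is administratively censored at end.time:
# sim.demo <- sim_data(n = 300, biomarker = "normal", effect.size = 1.5,
#                      baseline.hazard = "constant", end.time = 5,
#                      end.survival = 0.6, seed = 1)
# head(sim.demo$data)   # columns: biomarker, time.observed, event, time.event
# sim.demo$km.plot      # Kaplan-Meier curves by enrichment level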
## Source file: /scratch/gouwar.j/cran-all/cranData/BioPETsurv/R/sim_data.R
survROC <- function (Stime, status, marker, entry = NULL, predict.time,
cut.values = NULL, method = "NNE", lambda = NULL, span = NULL,
window = "symmetric")
{
times = Stime
x <- marker
if (is.null(entry))
entry <- rep(0, length(times))
bad <- is.na(times) | is.na(status) | is.na(x) | is.na(entry)
entry <- entry[!bad]
times <- times[!bad]
status <- status[!bad]
x <- x[!bad]
if (sum(bad) > 0)
message(paste("\n", sum(bad), "records with missing values dropped. \n"))
if (is.null(cut.values))
cut.values <- unique(x)
cut.values <- cut.values[order(cut.values)]
ncuts <- length(cut.values)
ooo <- order(times)
times <- times[ooo]
status <- status[ooo]
x <- x[ooo]
s0 <- 1
unique.t0 <- unique(times)
unique.t0 <- unique.t0[order(unique.t0)]
n.times <- sum(unique.t0 <= predict.time)
if (method == "NNE") {
if (is.null(lambda) & is.null(span)) {
message("method = NNE requires either lambda or span! \n")
stop(0)
}
x.unique <- unique(x)
x.unique <- x.unique[order(x.unique)]
S.t.x <- rep(0, length(x.unique))
t.evaluate <- unique(times[status == 1])
t.evaluate <- t.evaluate[order(t.evaluate)]
t.evaluate <- t.evaluate[t.evaluate <= predict.time]
for (j in 1:length(x.unique)) {
if (!is.null(span)) {
if (window == "symmetric") {
ddd <- (x - x.unique[j])
n <- length(x)
ddd <- ddd[order(ddd)]
index0 <- sum(ddd < 0) + 1
index1 <- index0 + trunc(n * span + 0.5)
if (index1 > n)
index1 <- n
lambda <- ddd[index1]
wt <- as.integer(((x - x.unique[j]) <= lambda) &
((x - x.unique[j]) >= 0))
index0 <- sum(ddd <= 0)
index2 <- index0 - trunc(n * span/2)
if (index2 < 1)
index2 <- 1
lambda <- abs(ddd[index1])
set.index <- ((x - x.unique[j]) >= -lambda) &
((x - x.unique[j]) <= 0)
wt[set.index] <- 1
}
}
else {
wt <- exp(-(x - x.unique[j])^2/lambda^2)
}
s0 <- 1
for (k in 1:length(t.evaluate)) {
n <- sum(wt * (entry <= t.evaluate[k]) & (times >= t.evaluate[k]))
d <- sum(wt * (entry <= t.evaluate[k]) & (times == t.evaluate[k]) * (status == 1))
if (n > 0)
s0 <- s0 * (1 - d/n)
}
S.t.x[j] <- s0
}
S.all.x <- S.t.x[match(x, x.unique)]
Sx <- sum(S.all.x[x > cut.values])/sum(x > cut.values)
}
return(Sx)
}
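## survROC() above follows the nearest-neighbour (NNE) conditional Kaplan-Meier idea
## used for time-dependent ROC estimation (in the spirit of Heagerty, Lumley and
## Pepe, 2000), but it returns only Sx, the estimated probability of surviving past
## predict.time among subjects with marker > cut.values; 1 - Sx is the event rate in
## the screened-in group. Hypothetical illustration (kept as a comment; `dat.demo'
## is not a real dataset):
# dat.demo <- data.frame(time = rexp(400), event = rbinom(400, 1, 0.7), x = rnorm(400))
# 1 - survROC(Stime = dat.demo$time, status = dat.demo$event, marker = dat.demo$x,
#             predict.time = 1, cut.values = median(dat.demo$x),
#             method = "NNE", span = 0.25)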
`%:::%` <- function(pkg, fun) get(fun, envir = asNamespace(pkg),
inherits = FALSE)
surv.mean <- 'survival' %:::% 'survmean'
surv_enrichment <- function (formula, data, hr = 0.8, end.of.trial=NULL, a=NULL, f=NULL,
method = "KM", lambda = 0.05,
cost.screening = NULL, cost.keeping = NULL, cost.unit.keeping = NULL,
power = 0.9, alpha = 0.05, one.sided = FALSE,
selected.biomarker.quantiles = seq(from = 0, to = 0.95, by = 0.05),
do.bootstrap = FALSE, n.bootstrap = 1000, seed = 2333,
print.summary.tables = FALSE){
##### check arguments #####
if (class(formula) != "formula"){
stop("invalid formula")
}
if (!is.data.frame(data)) {
stop("dataset must be a data frame")
}
#if (length(cost.keeping)!=length(end.of.trial)){
# stop("length of trial costs does not match the length of trial duration")
#}
if (is.null(end.of.trial) & (is.null(a) | is.null(f))){
stop("must specify either accrual + follow-up time or length of trial")
}
acc.fu <- FALSE
if (!is.null(a) & !is.null(f))
acc.fu <- TRUE
if (method=="NNE") acc.fu <- FALSE
comp.cost <- TRUE # indicator for whether to compute costs
if (is.null(cost.screening) | (is.null(cost.keeping) & is.null(cost.unit.keeping)))
comp.cost <- FALSE
if (method=="NNE" & (is.null(cost.keeping) | is.null(cost.screening)))
comp.cost <- FALSE
formula.vars <- all.vars(formula)
count.vars <- length(formula.vars)
data <- data[,which(colnames(data) %in% formula.vars)]
#ind.missing <- apply(data[, formula.vars], 1, function(x) sum(is.na(x)) > 0)
ind.missing <- apply(as.matrix(data), 1, function(x) sum(is.na(x)) > 0)
count.any.missing <- sum(ind.missing)
if (count.any.missing > 0) {
data <- data[!ind.missing, ]
warning(paste(count.any.missing, "observation(s) with missing values were removed"))
}
if (!all(formula.vars %in% names(data))) {
stop(paste("Variable(s) named", formula.vars[which(formula.vars %in%
names(data) == FALSE)], "were not found in the data"))
}
response.name <- formula.vars[1]
response <- data[, response.name]
#if (class(response)!="Surv"){
# stop("response must be a survival object as returned by the Surv function")
#}
features.names <- formula.vars[2:count.vars]
features <- as.matrix(data[, features.names])
if (!(power > 0 & power < 1)) {
stop("power should be between 0 and 1")
}
if (!(alpha > 0 & alpha < 1)) {
stop("alpha should be between 0 and 1")
}
if (!(hr > 0 & hr < 1)) {
stop("hazard ratio should be between 0 and 1")
}
if (!all(selected.biomarker.quantiles >= 0 & selected.biomarker.quantiles < 1)) {
stop("quantiles of the biomarker measured in controls must be at least 0 and less than 1")
}
selected.biomarker.quantiles <- c(0, selected.biomarker.quantiles) # no enrichment for reference
if (one.sided == TRUE) alpha <- alpha*2
if (method!="KM" & method!="NNE")
stop("method can only be 'KM' or 'NNE'")
if (method=="NNE" & (is.null(end.of.trial)))
stop("trial length missing; NNE can only deal with fixed length trial")
if (method=="NNE" & !is.null(cost.unit.keeping))
message("NNE does not deal with unit cost; 'cost.unit.keeping' will be ignored")
# "pseudo-biomarker" if multiple biomarkers are used
if(count.vars == 2){
biomarker.name <- formula.vars[2]
biomarker <- data[, biomarker.name]
}
if (count.vars > 2) {
biomarker.name <- "combined biomarker"
coxfit <- do.call(what = coxph, args = list(formula = formula, data = data))
biomarker <- as.numeric(features%*%coxfit$coefficients)
}
##### separate functions #####
simps <- function(vec) (vec[1]+4*vec[2]+vec[3])/6 # Simpson's rule
nevent.calc <- function(hr) 4*(qnorm(1-alpha/2)+qnorm(power))^2/(log(hr))^2 # Schoenfeld formula: required total number of events under 1:1 randomization
event.rate <- function(enr.level, eot){
q <- quantile(biomarker,prob=enr.level)
sobj <- response[biomarker>q]
km <- survfit(sobj~1,error="greenwood")
survest <- stepfun(km$time, c(1, km$surv))
eprob <- 1-survest(eot)
reftm <- tryCatch(
{
max(summary(km)$time[summary(km)$time<=eot]) # find a point where the sd of event rate is equal to that at t0
},
error=function(cond) {
message("length of trial is too short; cannot compute sd of event rate for all enrichment levels (NA returned)")
},
warning=function(cond) {
message("length of trial is too short; cannot compute sd of event rate for all enrichment levels (NA returned)")
},
finally={
}
)
esd <- NULL
esd <- summary(km)$std.err[which(summary(km)$time==reftm)]
#rmst.obj <- rmsth(y=sobj[,1],d=sobj[,2],tcut=eot,eps=1.0e-06)
#tmean <- rmst.obj$rmst
tmean <- surv.mean(km, rmean=eot) [[1]]["*rmean"]
#tmean.sd <- sqrt(rmst.obj$var)
return(list(eprob = eprob, esd = esd, tmean = tmean))
}
event.rate.acc.fu <- function(enr.level,biomarker,response){
q <- quantile(biomarker,prob=enr.level)
sobj <- response[biomarker>q]
km <- survfit(sobj~1,error="greenwood")
survest <- stepfun(km$time, c(1, km$surv))
p <- c(survest(a),survest(a+f/2),survest(a+f))
d.ctrl <- 1-simps(p)
d.trt <- 1-simps(p^hr)
d <- (d.ctrl+d.trt)/2
eprob <- d.ctrl
tmean <- rep(NA, 3)
tmean[1] <- surv.mean(km, rmean=f) [[1]]["*rmean"]
tmean[2] <- surv.mean(km, rmean=a/2+f) [[1]]["*rmean"]
tmean[3] <- surv.mean(km, rmean=a+f) [[1]]["*rmean"]
avgtm <- simps(tmean)
return(list(eprob = eprob, tmean = avgtm, d=d))
}
boot.acc.fu <- function(idx){
rslt.boot <- sapply(selected.biomarker.quantiles, event.rate.acc.fu,
biomarker=biomarker.orig[idx], response=response.orig[idx,])
eprob.boot <- as.numeric(rslt.boot[1,])
tmean.boot <- as.numeric(rslt.boot[2,])
d.boot <- as.numeric(rslt.boot[3,])
return(list(eprob.boot = eprob.boot, tmean.boot = tmean.boot, d.boot=d.boot))
}
event.rate.NNE <- function(enr.level, eot){
q <- quantile(biomarker,prob=enr.level)
nne <- survROC(Stime=response[,1], status=response[,2], marker=biomarker,
predict.time=eot, cut.values=q,
method = method, lambda = lambda)
return(eprob = 1-nne)
}
##### the main function #####
# 1. K-M left for "plot_surv_enrichment_summaries"
# 2. event rate and sd
if (!acc.fu){
arg <- expand.grid(selected.biomarker.quantiles, end.of.trial)
if (method!="NNE"){
rslt <- mapply(event.rate, arg[,1], arg[,2])
eprob <- matrix(as.numeric(rslt[1,]), ncol=length(end.of.trial))
esd <- matrix(as.numeric(rslt[2,]), ncol=length(end.of.trial))
tmean <- matrix(as.numeric(rslt[3,]), ncol=length(end.of.trial))
#tmeansd <- matrix(as.numeric(rslt[4,]), ncol=length(end.of.trial))
} else {
rslt <- mapply(event.rate.NNE, arg[,1], arg[,2])
eprob <- matrix(as.numeric(rslt), ncol=length(end.of.trial))
esd <- NULL
tmean <- NULL
}
} else {
rslt <- sapply(selected.biomarker.quantiles, event.rate.acc.fu,
biomarker = biomarker, response = response)
eprob <- as.numeric(rslt[1,])
tmean <- as.numeric(rslt[2,])
esd <- NULL
d <- as.numeric(rslt[3,])
biomarker.orig <- biomarker
response.orig <- response
if (do.bootstrap){
set.seed(seed)
idx.boot <- replicate(n.bootstrap,
sample(seq(1,length(biomarker)), length(biomarker),replace=TRUE))
idx.boot <- split(idx.boot, rep(1:n.bootstrap, each = length(biomarker)))
stat.boot <- lapply(idx.boot, boot.acc.fu)
eprob.boot <- do.call(rbind, lapply(stat.boot, '[[', 1))
tmean.boot <- do.call(rbind, lapply(stat.boot, '[[', 2))
d.boot <- do.call(rbind, lapply(stat.boot, '[[', 3))
esd <- apply(eprob.boot, 2, sd)
}
}
# 3. total sample size
nevent <- nevent.calc(hr)
if (!acc.fu){
prob.all <- 1-(1-eprob)^hr+eprob
npat <- ceiling(nevent*2/prob.all)
sd.npat <- NULL
if (method!="NNE"){
sd.all <- (1+hr*(1-eprob)^(hr-1))*esd
sd.npat <- npat/prob.all*sd.all
}
} else {
nevent <- nevent.calc(hr)
npat<-ceiling(nevent/d)
sd.npat <- NULL
if (do.bootstrap){
npat.boot <- ceiling(nevent/d.boot)
sd.npat <- apply(npat.boot, 2, sd)
}
}
# 4. total patients screened = trial ss/(1-p)
sd.nscr <-NULL
nscr <- npat/(1-selected.biomarker.quantiles)
if (!is.null(sd.npat)){
sd.nscr <- sd.npat/(1-selected.biomarker.quantiles)
}
nscr.orig <- nscr
sd.nscr.orig <- sd.nscr
## 5 & 6: costs
if (!comp.cost){
cost <- sd.cost <- reduc <- sd.reduc <- NULL
} else {
# 5. total costs for screening and trial
nscr[selected.biomarker.quantiles == 0] <- 0
if (!is.null(sd.npat)){
sd.nscr[selected.biomarker.quantiles == 0] <- 0
}
if (!acc.fu){
cost <- sd.cost <- matrix(NA, ncol = length(end.of.trial), nrow = length(selected.biomarker.quantiles))
if (is.null(cost.keeping)){#cost.keeping <- end.of.trial*cost.unit.keeping
# follow-up to expected occurrence of event
sd.cost <- NULL
for (k in 1:length(end.of.trial)){
cost[,k] <- cost.unit.keeping[k]*npat[,k]*tmean[,k]+cost.screening*nscr[,k]
#sd.cost[,k] <- (cost.keeping[k]+cost.screening/(1-selected.biomarker.quantiles))*sd.npat[,k]
}
idx <- which(selected.biomarker.quantiles==0) # no screening needed
if (length(idx)!=0){
for (k in 1:length(end.of.trial)){
cost[idx,k] <- cost.unit.keeping[k]*npat[idx,k]*tmean[idx,k]
#sd.cost[idx,k] <- sd.npat[idx,k]*cost.keeping[k]
}
}
}
if (!is.null(cost.keeping)){
if (is.null(sd.npat)) sd.cost <- NULL
# follow-up to the end of trial
for (k in 1:length(end.of.trial)){
cost[,k] <- cost.keeping[k]*npat[,k]+cost.screening*nscr[,k]
if (!is.null(sd.npat))
sd.cost[,k] <- (cost.keeping[k]+cost.screening/(1-selected.biomarker.quantiles))*sd.npat[,k]
}
idx <- which(selected.biomarker.quantiles==0) # no screening needed
if (length(idx)!=0){
for (k in 1:length(end.of.trial)){
cost[idx,k] <- cost.keeping[k]*npat[idx,k]
if (!is.null(sd.npat))
sd.cost[idx,k] <- sd.npat[idx,k]*cost.keeping[k]
}
}
}
}
if (acc.fu){
sd.cost <- NULL
if (!is.null(cost.unit.keeping)){
cost <- cost.screening*nscr + cost.unit.keeping*tmean*npat
if (do.bootstrap){
nscr.boot <- t(apply(npat.boot, 1, function(vec) vec/(1-selected.biomarker.quantiles)))
cscr.boot <- cost.screening*nscr.boot
cscr.boot[,selected.biomarker.quantiles == 0] <- 0
ckeep.boot <- cost.unit.keeping*tmean.boot*npat.boot
cost.boot <- cscr.boot+ckeep.boot
sd.cost <- apply(cost.boot, 2, sd)
}
}
if (is.null(cost.unit.keeping)){
cost <- cost.screening*nscr + cost.keeping*npat
if (do.bootstrap){
nscr.boot <- t(apply(npat.boot, 1, function(vec) vec/(1-selected.biomarker.quantiles)))
cscr.boot <- cost.screening*nscr.boot
cscr.boot[,selected.biomarker.quantiles == 0] <- 0
ckeep.boot <- cost.keeping*npat.boot
cost.boot <- cscr.boot+ckeep.boot
sd.cost <- apply(cost.boot, 2, sd)
}
}
}
# 6. % reduction in total cost
sd.reduc <- NULL
if (!acc.fu){
reduc <- matrix(NA, ncol = length(end.of.trial), nrow = length(selected.biomarker.quantiles))
for (k in 1:length(end.of.trial)){
if (!is.null(cost.keeping))
reduc[,k] <- (cost.keeping[k]*npat[1,k]-cost[,k])/cost.keeping[k]/npat[1,k]
if (is.null(cost.keeping))
reduc[,k] <- (cost.unit.keeping[k]*npat[1,k]*tmean[1,k]-cost[,k])/cost.unit.keeping[k]/npat[1,k]/tmean[1,k]
}
}
if (acc.fu){
reduc <- (cost[1]-cost)/cost[1]
if (do.bootstrap){
reduc.boot <- apply(cost.boot, 2, function(vec) (cost.boot[,1]-vec)/cost.boot[,1])
sd.reduc <- apply(reduc.boot, 2,sd)
}
}
}
# remove reference values
#ind.matrix <- class(eprob)=="matrix"
if (class(eprob)=="matrix"){
eprob <- eprob[-1,]
esd <- esd[-1,]
npat <- npat[-1,]
sd.npat <- sd.npat[-1,]
nscr.orig <- nscr.orig[-1,]
sd.nscr.orig <- sd.nscr.orig[-1,]
cost <- cost[-1,]
sd.cost <- sd.cost[-1,]
reduc <- reduc[-1,]
} else {
eprob <- eprob[-1]
esd <- esd[-1]
npat <- npat[-1]
sd.npat <- sd.npat[-1]
nscr.orig <- nscr.orig[-1]
sd.nscr.orig <- sd.nscr.orig[-1]
cost <- cost[-1]
sd.cost <- sd.cost[-1]
reduc <- reduc[-1]
sd.reduc <- sd.reduc[-1]
}
selected.biomarker.quantiles <- selected.biomarker.quantiles[-1]
# rounding
nscr.orig <- ceiling(nscr.orig)
# print tables
cnames <- c("level.enrichment","event.prob","event.prob.se",
"n.patients","n.patients.se",
"n.screened","n.screened.se",
"cost","cost.se",
"reduction","reduction.se")
if (acc.fu){
tab <- cbind(selected.biomarker.quantiles, eprob, esd, npat, sd.npat, nscr.orig, sd.nscr.orig,
cost, sd.cost, reduc, sd.reduc)
if (comp.cost & do.bootstrap)
colnames(tab) <- cnames
if (comp.cost & !do.bootstrap)
colnames(tab) <- cnames[c(1,2,4,6,8,10)]
if (!comp.cost & do.bootstrap)
colnames(tab) <- cnames[1:7]
if (!comp.cost & !do.bootstrap)
colnames(tab) <- cnames[c(1,2,4,6)]
} else {
tab <- rep(list(NULL),length(end.of.trial))
if (length(end.of.trial) > 1){
for (j in 1:length(end.of.trial)){
tab[[j]] <- cbind(selected.biomarker.quantiles, eprob[,j], esd[,j],
npat[,j], sd.npat[,j],
nscr.orig[,j], sd.nscr.orig[,j],
cost[,j], sd.cost[,j], reduc[,j])
if ((comp.cost & !is.null(cost.keeping)) & !is.null(esd)) colnames(tab[[j]]) <- cnames[1:10]
if ((comp.cost & is.null(cost.keeping)) & !is.null(esd)) colnames(tab[[j]]) <- cnames[c(1:8,10)]
if (!comp.cost & !is.null(esd)) colnames(tab[[j]]) <- cnames[1:7]
if (comp.cost & is.null(esd)) colnames(tab[[j]]) <- cnames[c(1,2,4,6,8,10)]
if (!comp.cost & is.null(esd)) colnames(tab[[j]]) <- cnames[c(1,2,4,6)]
}
names(tab) <- paste("trial length=", end.of.trial)
}
if (length(end.of.trial) == 1){
tab <- cbind(selected.biomarker.quantiles, eprob, esd, npat, sd.npat, nscr.orig, sd.nscr.orig,
cost, sd.cost, reduc)
if (comp.cost & !is.null(esd)) colnames(tab) <- cnames[1:10]
if (!comp.cost & !is.null(esd)) colnames(tab) <- cnames[1:7]
if (comp.cost & is.null(esd)) colnames(tab) <- cnames[c(1,2,4,6,8,10)]
if (!comp.cost & is.null(esd)) colnames(tab) <- cnames[c(1,2,4,6)]
}
}
if(print.summary.tables) print(tab)
return(list(summary.table=tab,
event.prob=eprob,event.prob.se=esd,
n.patients=npat, n.patients.se=sd.npat,
n.screened=nscr.orig, n.screened.se=sd.nscr.orig,
cost=cost, cost.se=sd.cost,
cost.reduction=reduc, cost.reduction.se=sd.reduc,
response=response, biomarker=biomarker, biomarker.name=biomarker.name,
selected.biomarker.quantiles=selected.biomarker.quantiles,
end.of.trial=end.of.trial, a=a, f=f, acc.fu=acc.fu,
method=method))
}
|
/scratch/gouwar.j/cran-all/cranData/BioPETsurv/R/surv_enrichment.R
|
if(getRversion() >= "2.15.1")
utils::globalVariables(c("ci","cost","cost.se","event.prob","n.patients","n.patients.se",
"n.screened","n.screened.se","reduction","reduction.se"))
surv_plot_enrichment <- function (x, km.quantiles = c(0,0.25,0.5,0.75),
km.range = NULL, alt.color = NULL){
plot.error.bar <- as.numeric(!is.null(x$event.prob.se))
reduc.error.bar <- as.numeric(!is.null(x$cost.reduction.se))
if (is.null(x$event.prob.se))
x$event.prob.se <- x$n.patients.se <- x$n.screened.se <- x$cost.se <-
x$cost.reduction.se <- rep(0,length(x$selected.biomarker.quantiles))
if (x$acc.fu)
x$end.of.trial <- x$a+x$f
if (x$acc.fu==FALSE){
x$cost.reduction.se <- matrix(0,nrow=length(x$selected.biomarker.quantiles),ncol=length(x$end.of.trial))
if (is.null(x$cost.se))
x$cost.se <- matrix(0,nrow=length(x$selected.biomarker.quantiles),ncol=length(x$end.of.trial))
}
end.of.trial <- x$end.of.trial
len.pos <- "bottom"
if (length(end.of.trial)==1) len.pos <- "none"
comp.cost <- !is.null(x$cost)
#colors
gg_color_hue <- function(n) {
if (n==1) return("black")
hues = seq(15, 375, length = n + 1)
hcl(h = hues, l = 65, c = 100)[1:n]
}
if (!is.null(alt.color) & length(alt.color)==length(end.of.trial))
gg_color_hue <- function(n) alt.color
# 1. K-M curve ################
cols <- gray.colors(length(km.quantiles)+3)
km.all <- survfit(x$response~1, error="greenwood")
dat <- as.data.frame(seq(0,max(km.all$time),by=max(km.all$time)/500))
if (!is.null(km.range)){
if (km.range > max(km.all$time)) km.range <- max(km.all$time) # cap the plotting range at the last observed time
dat <- as.data.frame(seq(0,km.range,by=km.range/500))
}
colnames(dat) <- "time"
for (j in 1:length(km.quantiles)){
q <- quantile(x$biomarker,prob=km.quantiles[j])
sobj <- x$response[x$biomarker>=q]
km <- survfit(sobj~1,error="greenwood")
survfun <- stepfun(km$time, c(1, km$surv))
dat <- cbind(dat, survfun(dat[,1]))
colnames(dat)[j+1] <- paste(j,"surv",sep=".")
}
dat <- reshape(dat, direction = 'long', timevar = 'level.enrichment',
varying=list(grep("surv", colnames(dat), value=TRUE)),
times = as.character(km.quantiles),
v.names = c("surv"),
idvar='time')
g <- ggplot(dat,aes(x=time, y=surv, colour=level.enrichment)) +
geom_line(size=1) + ylim(0,1) +
labs(title ="Kaplan-Meier survival curves",
x = "time", y = "survival estimate", color = "enrichment level") +
scale_color_manual(labels = as.character(km.quantiles), values = cols[1:j]) +
theme(plot.title = element_text(hjust = 0.5), legend.position="bottom")
if (x$acc.fu==FALSE){
for (k in 1:length(x$end.of.trial))
g <- g + geom_vline(xintercept = x$end.of.trial[k], colour = gg_color_hue(length(x$end.of.trial))[k])
}
if (x$acc.fu==TRUE){
g <- g + geom_vline(xintercept = x$f, colour = cols[j])
#g <- g + geom_vline(xintercept = x$a/2+x$f, colour = cols[j])
g <- g + geom_vline(xintercept = x$a+x$f, colour = cols[j])
}
# 2. event rate and sd ###############
dat <- as.data.frame(cbind(x$selected.biomarker.quantiles, x$event.prob, x$event.prob.se))
colnames(dat)[1] <- "level.enrichment"
for (j in 1:length(x$end.of.trial)){
colnames(dat)[1+j] <- paste(j,"prob", sep=".")
colnames(dat)[(1+length(x$end.of.trial)+j)] <- paste(j, "sd", sep=".")
}
dat <- reshape(dat, direction = 'long', timevar = 'end.of.trial',
varying=list(grep("prob", colnames(dat), value=TRUE), grep("sd", colnames(dat), value=TRUE)),
times = as.character(seq(1,length(x$end.of.trial))),
v.names = c("event.prob", "event.prob.se"),
idvar='level.enrichment')
dat$ci <- 1.96*dat$event.prob.se
#pd <- position_dodge(0) # move them .05 to the left and right
pd <- position_jitter()
if (x$acc.fu == FALSE){
tt <- "Event rate"
}
if (x$acc.fu == TRUE)
tt <- "Average event rate"
g2 <- ggplot(dat, aes(x=100*level.enrichment, y=event.prob, colour=end.of.trial)) +
geom_errorbar(aes(ymin=event.prob-ci, ymax=event.prob+ci),
width=.05*length(x$end.of.trial)*sd(x$selected.biomarker.quantiles*100)*plot.error.bar) +
#position=pd) +
geom_line() +#position=pd) +
geom_point() +#position=pd) +
labs(title = tt, x = "level of enrichment", y = "event rate") +
labs(color = "end of trial") +
scale_color_manual(labels = as.character(x$end.of.trial), values = gg_color_hue(length(x$end.of.trial))) +
ylim(0, 1) +
theme(plot.title = element_text(hjust = 0.5), legend.position=len.pos)
# 3. total sample size
dat <- as.data.frame(cbind(x$selected.biomarker.quantiles, x$n.patients, x$n.patients.se))
colnames(dat)[1] <- "level.enrichment"
for (j in 1:length(x$end.of.trial)){
colnames(dat)[1+j] <- paste(j,"num", sep=".")
colnames(dat)[(1+length(x$end.of.trial)+j)] <- paste(j, "sd", sep=".")
}
dat <- reshape(dat, direction = 'long', timevar = 'end.of.trial',
#varying = colnames(dat)[-1],
varying=list(grep("num", colnames(dat), value=TRUE), grep("sd", colnames(dat), value=TRUE)),
times = as.character(seq(1,length(end.of.trial))),
v.names = c("n.patients", "n.patients.se"),
idvar='level.enrichment')
g3 <- ggplot(dat, aes(x=level.enrichment*100, y=n.patients, colour=end.of.trial)) +
geom_errorbar(aes(ymin=n.patients-1.96*n.patients.se, ymax=n.patients+1.96*n.patients.se),
width=.05*length(end.of.trial)*sd(x$selected.biomarker.quantiles*100)*plot.error.bar) +
#position=pd) +
geom_line() +#position=pd) +
geom_point() +#position=pd) +
expand_limits(y=0) +
labs(title ="Clinical trial sample size",
x = "level of enrichment", y = "total sample size", color = "end of trial") +
scale_color_manual(labels = as.character(x$end.of.trial), values = gg_color_hue(length(end.of.trial))) +
theme(plot.title = element_text(hjust = 0.5), legend.position=len.pos)
# 4. total patients screened
dat <- as.data.frame(cbind(x$selected.biomarker.quantiles, x$n.screened, x$n.screened.se))
colnames(dat)[1] <- "level.enrichment"
for (j in 1:length(x$end.of.trial)){
colnames(dat)[1+j] <- paste(j,"num", sep=".")
colnames(dat)[(1+length(x$end.of.trial)+j)] <- paste(j, "sd", sep=".")
}
dat <- reshape(dat, direction = 'long', timevar = 'end.of.trial',
varying=list(grep("num", colnames(dat), value=TRUE), grep("sd", colnames(dat), value=TRUE)),
times = as.character(seq(1,length(end.of.trial))),
v.names = c("n.screened", "n.screened.se"),
idvar='level.enrichment')
g4 <- ggplot(dat, aes(x=level.enrichment*100, y=n.screened, colour=end.of.trial)) +
geom_errorbar(aes(ymin=n.screened-1.96*n.screened.se, ymax=n.screened+1.96*n.screened.se),
width=.05*length(end.of.trial)*sd(x$selected.biomarker.quantiles*100)*plot.error.bar) +
#position=pd) +
geom_line() +#position=pd) +
geom_point() +#position=pd) +
labs(title ="Number of patients screened",
x = "level of enrichment", y = "total # screened", color = "end of trial") +
scale_color_manual(labels = as.character(x$end.of.trial), values = gg_color_hue(length(end.of.trial))) +
expand_limits(y=0) +
theme(plot.title = element_text(hjust = 0.5), legend.position=len.pos)
# plots for cost
g5 <- g6 <- NULL
if (comp.cost){
# 5. total costs for screening and trial
dat <- as.data.frame(cbind(x$selected.biomarker.quantiles, x$cost, x$cost.se))
colnames(dat)[1] <- "level.enrichment"
for (j in 1:length(x$end.of.trial)){
colnames(dat)[1+j] <- paste(j,"num", sep=".")
colnames(dat)[(1+length(x$end.of.trial)+j)] <- paste(j, "sd", sep=".")
}
dat <- reshape(dat, direction = 'long', timevar = 'end.of.trial',
varying=list(grep("num", colnames(dat), value=TRUE), grep("sd", colnames(dat), value=TRUE)),
times = as.character(seq(1,length(end.of.trial))),
v.names = c("cost", "cost.se"),
idvar='level.enrichment')
g5 <- ggplot(dat, aes(x=level.enrichment*100, y=cost, colour=end.of.trial)) +
geom_errorbar(aes(ymin=cost-1.96*cost.se, ymax=cost+1.96*cost.se),
width=.05*length(end.of.trial)*sd(x$selected.biomarker.quantiles*100)*plot.error.bar) +
#position=pd) +
geom_line() +#position=pd) +
geom_point() +#position=pd) +
labs(title ="Total screening + trial cost",
x = "level of enrichment", y = "total cost", color = "end of trial") +
scale_color_manual(labels = as.character(x$end.of.trial), values = gg_color_hue(length(end.of.trial))) +
expand_limits(y=0) +
theme(plot.title = element_text(hjust = 0.5), legend.position=len.pos)
# 6. % reduction in total cost
dat <- as.data.frame(cbind(x$selected.biomarker.quantiles, x$cost.reduction, x$cost.reduction.se))
colnames(dat)[1] <- "level.enrichment"
for (j in 1:length(x$end.of.trial)){
colnames(dat)[1+j] <- paste(j,"num", sep=".")
colnames(dat)[(1+length(x$end.of.trial)+j)] <- paste(j, "sd", sep=".")
}
dat <- reshape(dat, direction = 'long', timevar = 'end.of.trial',
varying=list(grep("num", colnames(dat), value=TRUE), grep("sd", colnames(dat), value=TRUE)),
times = as.character(seq(1,length(end.of.trial))),
v.names = c("reduction", "reduction.se"),
idvar='level.enrichment')
g6 <- ggplot(dat, aes(x=level.enrichment*100, y=reduction*100, colour=end.of.trial)) +
geom_errorbar(aes(ymin=100*(reduction-1.96*reduction.se), ymax=100*(reduction+1.96*reduction.se)),
width=.05*length(end.of.trial)*sd(x$selected.biomarker.quantiles*100)*reduc.error.bar) +
#position=pd) +
geom_line() +#position=pd) +
geom_point() +#position=pd) +
labs(title ="Reduction (%) in total cost",
x = "level of enrichment", y = "% reduction in cost", color = "end of trial") +
scale_color_manual(labels = as.character(x$end.of.trial), values = gg_color_hue(length(x$end.of.trial))) +
expand_limits(y=0) +
theme(plot.title = element_text(hjust = 0.5), legend.position=len.pos)
}
# plot and return
if (comp.cost)
summary <- arrangeGrob(grid.arrange(g,g2,g3,g4,g5,g6, nrow=3))
if (!comp.cost)
summary <- arrangeGrob(grid.arrange(g,g2,g3,g4, nrow=2))
return(list(km.plot=g, prob.plot=g2, ss.plot=g3,
screen.plot=g4, cost.plot=g5, reduction.cost.plot=g6,
summary=summary))
}
|
/scratch/gouwar.j/cran-all/cranData/BioPETsurv/R/surv_plot_enrichment.R
|
input1<-function(){
message("The input of this function must be a non-zero column matrix of dimension 2x2",'\n')
A<-matrix(0,nrow=2,ncol=2)
A[1,1]<-as.numeric(readline("People with the disease have a positive result\n"))
A[1,2]<-as.numeric(readline("People without the disease have a positive result\n"))
A[2,1]<-as.numeric(readline("People with the disease have a negative result\n"))
A[2,2]<-as.numeric(readline("People without the disease have a negative result\n"))
return(A)
}
|
/scratch/gouwar.j/cran-all/cranData/BioProbability/R/input1.R
|
input2<-function(){
message("The input of this function must be a non-zero column matrix of dimension 2x2",'\n')
A<-matrix(0,nrow=2,ncol=2)
A[1,1]<-as.numeric(readline("People have been exposed to the factor and they present the disease\n"))
A[1,2]<-as.numeric(readline("People have been exposed to the factor and they do not present the disease\n"))
A[2,1]<-as.numeric(readline("People have not been exposed to the factor and they present the disease\n"))
A[2,2]<-as.numeric(readline("People have not been exposed to the factor and they do not present the disease\n"))
return(A)
}
|
/scratch/gouwar.j/cran-all/cranData/BioProbability/R/input2.R
|
odds<-function(p,name="Prevalence"){ #p is the a probability of success (prevalence, incidence)
while(sum(p>=1)>0|sum(p<=0)>0){
message("Prevalence of a disease is a value or a vector of values between 0 and 1","\n")
pStr<-readline("What is the value/vector of prevalences?\n")
p<-as.numeric(unlist(strsplit(pStr, ",")))
}
message(paste(name, "Odds"),"\n")
p0=sort(p)
result<-cbind(p0,p0/(1-p0))
colnames(result)<-c(paste(name),"Odds")
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/BioProbability/R/odds.R
|
odds.ratio<-function(A=NULL,show.matrix=FALSE,conf.int=FALSE,level=0.05){ #IC for large n
c1<-is.matrix(A)
if(!c1){
A<-input2()
}else{
c2<-nrow(A)==2
c3<-ncol(A)==2
if(!c2|!c3){A<-input2()}
}
c4<-any(apply(A,2,sum)==0)
while(c4){
message("Sum of rows can not be zero",'\n')
A<-input2()
c4<-any(apply(A,2,sum)==0)
}
if(show.matrix==TRUE){
rownames(A)=c("Exposed","Non-exposed")
colnames(A)=c("Disease","Without disease")
print(A)
message('\n')
}
OR=A[1,1]*A[2,2]/(A[1,2]*A[2,1])
if(conf.int==TRUE){
while((level>=1)|(level<=0)){
message("Level of the confidence interval is a value between 0 and 1",'\n')
level<-as.numeric(readline("What is the value of the level?\n"))
}
LI<-qnorm(level/2)*sqrt(sum(1/A))
CI<-sort(exp(c(log(OR)-LI,log(OR)+LI)))
L<-list(OR,CI)
names(L)=c("Odds Ratio",paste("Confidence Interval of level ",level*100,"%",sep=""))
return(L)
}else{
message("Odds ratio for a 2x2 contingency table",'\n')
return(OR)
}
}
|
/scratch/gouwar.j/cran-all/cranData/BioProbability/R/odds.ratio.R
|
predictive.value<-function(p,Spe,Sen,plot.it=FALSE){
while(sum(p>=1)>0|sum(p<=0)>0){
message("Prevalence of a disease is a value or a vector of values between 0 and 1","\n")
pStr<-readline("What is the value/vector of prevalences?\n")
p<-as.numeric(unlist(strsplit(pStr, ",")))
}
while(((Sen==0)&(Spe==1))|((Spe==0)&(Sen==1))|(Spe<0)|(Sen<0)|(Spe>1)|(Sen>1)) {
message("Sensitivity and specificity are probabilities. They take values between 0 an 1","\n")
Sen<-as.numeric(readline("What is the sensitivity value?\n"))
Spe<-as.numeric(readline("What is the specificity value?\n"))
}
message("Computation of the predictive values (+ and -) from the prevalence","\n")
p<-sort(p)
p.p.v<-p*Sen/(p*Sen+(1-p)*(1-Spe))
p.n.v<-(1-p)*Spe/((1-p)*Spe+p*(1-Sen))
if((length(p)>1)&(plot.it==TRUE)){
oldpar<-par(mfrow=c(1,2))
on.exit(par(oldpar))
plot(p,p.p.v,xlab="Prevalence",ylab="+ predictive value",pch=19)
plot(p,p.n.v,pch=19,xlab="Prevalence",ylab="- predictive value")
}
result<-cbind(p,p.p.v,p.n.v)
colnames(result)<-c("Prevalence","+ predictive value","- predictive value")
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/BioProbability/R/predictive.value.R
|
relative.risk<-function(A=NULL,show.matrix=FALSE,conf.int=FALSE,level=0.05){ #IC for n large
c1<-is.matrix(A)
if(!c1){
A<-input2()
}else{
c2<-nrow(A)==2
c3<-ncol(A)==2
if(!c2|!c3){A<-input2()}
}
c4<-any(apply(A,2,sum)==0)
while(c4){
message("Sum of rows can not be zero",'\n')
A<-input2()
c4<-any(apply(A,2,sum)==0)
}
message("Relative risk for a 2x2 contingency table",'\n')
if(show.matrix==TRUE){
rownames(A)<-c("Presence","Absence")
colnames(A)<-c("Disease","Without disease")
print(A)
}
R1=A[1,1]/(A[1,1]+A[1,2])
R2=A[2,1]/(A[2,1]+A[2,2])
RR=R1/R2
if(conf.int==TRUE){
while((level>=1)|(level<=0)){
message("Level of the confidence interval is a value between 0 and 1",'\n')
level<-as.numeric(readline("What is the value of the level?\n"))
}
LI<-qnorm(level/2)*sqrt((1-R1)/(sum(A[1,])*R1)+(1-R2)/(sum(A[2,])*R2))
CI<-sort(exp(c(log(RR)-LI, log(RR)+LI)))
L<-list(RR,CI);names(L)=c("Relative Risk",paste("Confidence Interval of level ",(1-level)*100,"%",sep=""))
return(L)
}else{
return(RR)
}
}
|
/scratch/gouwar.j/cran-all/cranData/BioProbability/R/relative.risk.R
|
sensitivity.specificity<-function(A=NULL,show.matrix=FALSE){ # compute sensitivity and specificity
c1<-is.matrix(A)
if(!c1){
A<-input2()
}else{
c2<-nrow(A)==2
c3<-ncol(A)==2
if(!c2|!c3){A<-input2()}
}
c4<-any(apply(A,2,sum)==0)
while(c4){
message("Sum of rows can not be zero",'\n')
A<-input2()
c4<-any(apply(A,2,sum)==0)
}
message("Sensitivity and Specificity of a diagnostic test",'\n')
if(show.matrix==TRUE){
rownames(A)=c("+","-")
colnames(A)=c("Disease","Without disease")
print(A)
}
S<-A[1,1]/sum(A[,1]) # sensitivity: P(+ | disease)
E<-A[2,2]/sum(A[,2]) # specificity: P(- | no disease)
output=c(S,E)
names(output)=c("Sensitivity","Specificity")
return(output)
}
|
/scratch/gouwar.j/cran-all/cranData/BioProbability/R/sensitivity.specificity.R
|
#' Example bioassay data set
#'
#' @docType data
#' @usage data(bioassay)
#' @keywords datasets
#'
#' @examples
#' data(bioassay)
#' head(bioassay$assay1)
'bioassay'
|
/scratch/gouwar.j/cran-all/cranData/BioRssay/R/data.R
|
#' Test validity of the probit model
#' @noRd
validity<-function(strains,data){
ndataf<-do.call(rbind,lapply(strains,function(ss,data){
tmp<-data[data$strain==ss,]
dss<-unique(tmp$dose)
dd<-do.call(rbind,lapply(dss,function(x,tmp){
tt<-tmp[tmp$dose==x,]
cbind(sum(tt$dead),sum(tt$total))
},tmp))
tx<-cbind(ss,dss,dd)
},data=data))
ndataf<-data.frame(strain=ndataf[,1],apply(ndataf[,-1],2,as.numeric))
colnames(ndataf)<-c("strain","dose","dead","total")
d.ratio<-ifelse(ndataf$dead/ndataf$total==0,0.006,
ifelse(ndataf$dead/ndataf$total==1,1-0.006,ndataf$dead/ndataf$total))
p.mortality<-sapply(d.ratio,qnorm)
ndataf<-cbind(ndataf,d.ratio,p.mortality)
return(ndataf)
}
#' Legend assembly
#' @noRd
lgas<-function(legend.par, ll,strains){llg<-as.list(legend.par)
lpos<-c("bottomleft","bottomright","topleft","topright","top","bottom","center")
ps<-match(lpos,legend.par)
#if(any(lpos==llg[[1]])) {llg$x<-llg[[1]]} else {llg$x <-"bottomleft"}
if(sum(ps,na.rm = TRUE)>0){x<-unlist(llg[na.omit(ps)]);llg<-llg[-na.omit(ps)];llg$x<-x} else {llg$x <-"bottomleft"}
if(any(names(llg)=="y")) llg$y<-llg$y
if(!any(names(llg)=="legend")) llg$legend<-strains
if(!is.null(ll$col)){llg$col<-ll$col}
#if(is.null(llg$col)) llg$col=rainbow_hcl(length(strains))
#if(is.null(llg$pch)) {if(length(strains)<=6)llg$pch=15:20 else llg$pch=1:20}
if(!is.null(ll$pch)){llg$pch<-ll$pch}
if(is.null(llg$lwd)) llg$lwd=1.5
llg$lwd<-as.numeric(llg$lwd)
if(is.null(llg$cex)) llg$cex=0.8
llg$cex<-as.numeric(llg$cex)
if(any(is.na(ll$pch))) {llg$lty=1;llg$pch=NA}
llg$lty<-as.numeric(llg$lty)
if(is.null(llg$bg)) llg$bg="grey60"
if(is.null(llg$bty)) llg$bty="o"
if(is.null(llg$box.col)) llg$box.col=NA
lnames<-names(formals(legend))
do.call("legend",llg[names(llg)%in%lnames])}
#' Plot dose-mortality response for each strain
#'
#' This function plots the probit-transformed mortalities (from the
#' \code{probit.trans()} function) as a function of the log10 of the dose,
#' together with the regressions predicted by the \code{resist.ratio()}
#' function, with or without confidence intervals, provided the
#' dose-mortality responses are linear (optional).
#'
#' @param data a data frame of probit transformed mortality data using the
#' function \code{probit.trans()}
#' @param strains character. list of test strains to be plotted. If not
#' provided, the function will plot all the strains in the data set.
#' @param plot.conf logical. Whether to plot the confidence intervals for
#' each strain, default TRUE
#' @param conf.level numerical. The confidence interval to be plotted
#' @param LD.value numerical. Level of lethal dose to be tested.
#' default=c(25,50,95)
#' @param test.validity logical. When TRUE (default), if a strain
#' mortality-dose response fails the chi-square test for linearity in the
#' \code{resist.ratio()} function, no regression will be plotted, only the
#' observed data.
#' @param legend.par multi-type. Arguments to be passed to the legend as in
#' \code{\link[graphics]{legend}}. default position \code{bottomleft}.
#' If no legend desired use FALSE. Note: if pch, lty, and col are passed to
#' the plot, they don't need to be passed to \code{legend()}
#' @param ... parameters to be passed on to graphics for the plot
#' (e.g. col, pch)
#'
#' @importFrom graphics points layout par plot.default title
#' @importFrom colorspace rainbow_hcl
#'
#' @return A plot of dose-mortality responses for bioassays
#'
#' @author Piyal Karunarathne, Pascal Milesi, Pierrick Labbé
#'
#' @examples
#' data(bioassay)
#' transd<-probit.trans(bioassay$assay2)
#' data<-transd$tr.data
#' strains<-levels(data$strain)
#' mort.plot(data,strains)
#'
#' @export
mort.plot<-function(data,strains=NULL,plot.conf=TRUE,conf.level=0.95,
LD.value=c(25,50,95),test.validity=TRUE,legend.par=c("bottomleft"),...){
#opars<-par(no.readonly = TRUE)
#on.exit(par(opars))
data$strain<-factor(data$strain)
if(is.null(strains)){
strains<-levels(data$strain)
} else {data<-data[data$strain==strains,]}
dmin<-floor(log10(min(data$dose)))
dmax<-ceiling(log10(max(data$dose)))
dose_min<- 10^(dmin)
dose_max<- 10^(dmax)
pmort_min<- qnorm(0.006)
pmort_max<- qnorm(1-0.006)
ll<-list(...)
if(is.null(ll$col)) ll$col=rainbow_hcl(length(strains))
if(is.null(ll$pch)) {if(length(strains)<=6)ll$pch=15:20 else ll$pch=1:20}
if(is.null(ll$conf.level)) ll$conf.level=0.95
if(is.null(ll$lwd)) ll$lwd=1.5
if(is.null(ll$cex)) ll$cex=1
if(is.null(ll$xlim)) {ll$xlim=c(dose_min,dose_max)}
if(is.null(ll$ylim)) {ll$ylim=c(floor(pmort_min*100)/100,ceiling(pmort_max*100)/100)}
#if(is.null(ll$ylab)) {ll$ylab="mortality"}
if(is.null(ll$yaxt)) {ll$yaxt="n"}
if(is.null(ll$xaxt)) {ll$xaxt="n"}
if(is.null(ll$log)) {ll$log="x"}
if(is.null(ll$ann)) {ll$ann=FALSE}
cl<-ll$col
ph<-ll$pch
ll<-ll[-which(names(ll)=="pch")]
ll<-ll[-which(names(ll)=="col")]
pnames<-c(names(formals(plot.default)),names(par()))
dxt<-get.dxt(strains,data,ll$conf.level,LD.value=LD.value)
do.call("plot",c(list(x=data$dose,y=data$probmort),col=list(cl[data$strain]),pch=list(ph[data$strain]),ll[names(ll)%in%pnames],typ=list("n")))
if(!is.null(ll$main)){title(ll$main)}
ll$col<-cl
ll$pch<-ph
abline(v = dose_min, col = "grey95", lwd = 180000)
points(data$dose,data$probmort,col=ll$col[factor(data$strain)],
pch=ll$pch[factor(data$strain)])
labely<-c(1,5,seq(10,90,10),95,99)
axis(2, at=qnorm(labely/100),labels=labely,las=2, adj=0)
axis(4, at=qnorm(labely/100),labels=FALSE)
mtext(ifelse(is.null(ll$ylab),"Mortality (%)",ll$ylab), side=2, line=3)
for (i in dmin:dmax) {
axis(1,at=10^i,labels=substitute(10^k,list(k=i)))
}
axis.at <- 10 ^ c(dmin:dmax)
axis(1, at = 2:9 * rep(axis.at[-1] / 10, each = 8),
tcl = -0.5, labels = FALSE)
mtext(ifelse(is.null(ll$xlab),expression(Dose (mg.L^-1)),ll$xlab), side=1, line=3)
abline(h=pmort_min, lty=3)
abline(h=pmort_max, lty=3)
if(plot.conf){
if(test.validity){
ndataf<-validity(strains,data)
for(i in 1:length(strains)){
if(dxt[[i]][[2]][[length(dxt[[i]][[2]])]]>0.05){ # dxt[[i]][[2]][[15]]
abline(dxt[[i]][[1]], col=ll$col[i],lwd=ll$lwd)
CIfit<-CIplot(dxt[[i]][[1]],pmort_min,pmort_max,conf.level=conf.level)
lines(CIfit[,1],CIfit[,2],type="l", lty=3, col=ll$col[i],lwd=ll$lwd)
lines(CIfit[,1],CIfit[,3],type="l", lty=3, col=ll$col[i],lwd=ll$lwd)
} else {
points(sort(ndataf$dose[ndataf$strain==strains[i]]),
sort(ndataf$p.mortality[ndataf$strain==strains[i]]),type="l", col=ll$col[i])
}
}
} else {
for(i in 1:length(strains)){
abline(dxt[[i]][[1]], col=ll$col[i],lwd=ll$lwd)
CIfit<-CIplot(dxt[[i]][[1]],pmort_min,pmort_max,conf.level=conf.level)
lines(CIfit[,1],CIfit[,2],type="l", lty=3, col=ll$col[i],lwd=ll$lwd)
lines(CIfit[,1],CIfit[,3],type="l", lty=3, col=ll$col[i],lwd=ll$lwd)
}
}
} else {
for(i in 1:length(strains)){
abline(dxt[[i]][[1]], col=ll$col[i],lwd=ll$lwd)
}
}
if(length(legend.par)<2){
if(!isFALSE(legend.par)){
lgas(legend.par,ll,strains)
}
} else {
lgas(legend.par,ll,strains)
}
}
mort.plot0<-function(data,strains=NULL,plot.conf=TRUE,conf.level=0.95,
LD.value=c(25,50,95),test.validity=TRUE,legend.par=c("bottomleft"),...){
#opars<-par(no.readonly = TRUE)
#on.exit(par(opars))
data$strain<-as.factor(data$strain)
if(is.null(strains)){
strains<-levels(data$strain)
}
dmin<-floor(log10(min(data$dose)))
dmax<-ceiling(log10(max(data$dose)))
dose_min<- 10^(dmin)
dose_max<- 10^(dmax)
pmort_min<- qnorm(0.006)
pmort_max<- qnorm(1-0.006)
ll<-list(...)
if(is.null(ll$col)) ll$col=rainbow_hcl(length(strains))
if(is.null(ll$pch)) {if(length(strains)<=6)ll$pch=15:20 else ll$pch=1:20}
if(is.null(ll$conf.level)) ll$conf.level=0.95
if(is.null(ll$lwd)) ll$lwd=1.5
if(is.null(ll$cex)) ll$cex=1
if(is.null(ll$xlim)) {ll$xlim=c(dose_min,dose_max)}
if(is.null(ll$ylim)) {ll$ylim=c(floor(pmort_min*100)/100,ceiling(pmort_max*100)/100)}
if(is.null(ll$ylab)) {ll$ylab="mortality"}
if(is.null(ll$yaxt)) {ll$yaxt="n"}
if(is.null(ll$xaxt)) {ll$xaxt="n"}
if(is.null(ll$log)) {ll$log="x"}
if(is.null(ll$ann)) {ll$ann=FALSE}
cl<-ll$col
ph<-ll$pch
ll<-ll[-which(names(ll)=="pch")]
ll<-ll[-which(names(ll)=="col")]
pnames<-c(names(formals(plot.default)),names(par()))
dxt<-get.dxt(strains,data,ll$conf.level,LD.value=LD.value)
do.call("plot",c(list(x=data$dose,y=data$probmort),col=list(cl[data$strain]),pch=list(ph[data$strain]),ll[names(ll)%in%pnames]))
if(!is.null(ll$main)){title(ll$main)}
ll$col<-cl
ll$pch<-ph
abline(v = dose_min, col = "grey95", lwd = 180000)
points(data$dose,data$probmort,col=ll$col[data$strain],
pch=ll$pch[data$strain])
labely<-c(1,5,seq(10,90,10),95,99)
axis(2, at=qnorm(labely/100),labels=labely,las=2, adj=0)
axis(4, at=qnorm(labely/100),labels=FALSE)
mtext("Mortality (%)", side=2, line=3)
for (i in dmin:dmax) {
axis(1,at=10^i,labels=substitute(10^k,list(k=i)))
}
axis.at <- 10 ^ c(dmin:dmax)
axis(1, at = 2:9 * rep(axis.at[-1] / 10, each = 8),
tcl = -0.5, labels = FALSE)
mtext(expression(Dose (mg.L^-1) ), side=1, line=3)
abline(h=pmort_min, lty=3)
abline(h=pmort_max, lty=3)
if(plot.conf){
if(test.validity){
ndataf<-validity(strains,data)
for(i in 1:length(strains)){
if(dxt[[i]][[2]][[length(dxt[[i]][[2]])]]>0.05){ # dxt[[i]][[2]][[15]]
abline(dxt[[i]][[1]], col=ll$col[i],lwd=ll$lwd)
CIfit<-CIplot(dxt[[i]][[1]],pmort_min,pmort_max,conf.level=conf.level)
lines(CIfit[,1],CIfit[,2],type="l", lty=3, col=ll$col[i],lwd=ll$lwd)
lines(CIfit[,1],CIfit[,3],type="l", lty=3, col=ll$col[i],lwd=ll$lwd)
} else {
points(ndataf$dose[ndataf$strain==strains[i]],
ndataf$p.mortality[ndataf$strain==strains[i]],type="l", col=ll$col[i])
}
}
} else {
for(i in 1:length(strains)){
abline(dxt[[i]][[1]], col=ll$col[i],lwd=ll$lwd)
CIfit<-CIplot(dxt[[i]][[1]],pmort_min,pmort_max,conf.level=conf.level)
lines(CIfit[,1],CIfit[,2],type="l", lty=3, col=ll$col[i],lwd=ll$lwd)
lines(CIfit[,1],CIfit[,3],type="l", lty=3, col=ll$col[i],lwd=ll$lwd)
}
}
} else {
for(i in 1:length(strains)){
abline(dxt[[i]][[1]], col=ll$col[i],lwd=ll$lwd)
}
}
if(length(legend.par)<2){
if(!isFALSE(legend.par)){
lgas(legend.par,ll,strains)
}
} else {
lgas(legend.par,ll,strains)
}
}
|
/scratch/gouwar.j/cran-all/cranData/BioRssay/R/plots.R
|
#' Calculate confidence range for regressions
#' @noRd
CIplot<-function(mods,pmort_min,pmort_max,conf.level){
summ<-summary(mods)
zz<-qnorm((1-conf.level)/2,lower.tail=FALSE)
a<-summ$coefficient[2] # mods slope
b<-summ$coefficient[1] # mods intercept
minldose<-((pmort_min-b)/a) # predicted dose for 0% mortality
maxldose<-((pmort_max-b)/a) # predicted dose for 100% mortality
datalfit<-seq(minldose-0.2,maxldose+0.2,0.01) # generates a set of doses
datafit<-data.frame(dose=10^datalfit)
pred<-predict.glm(mods,newdata=datafit,type="response",se.fit=TRUE)
#generates the predicted mortality for the set of doses and SE
ci<-cbind(pred$fit-zz*pred$se.fit,pred$fit+zz*pred$se.fit) #CI for each dose
probitci<-cbind(datafit$dose,suppressWarnings(qnorm(ci)))
# apply probit transformation to CI
return(probitci)
}
#' Lethal dose glm test
#' @noRd
LD <- function(mod, conf.level,LD.value=c(25,50,95)) {
p <- LD.value # leathal dose
het = deviance(mod)/df.residual(mod)
if(het < 1){het = 1} # Heterogeneity cannot be less than 1
m.stats <- summary(mod, dispersion=het, cor = F)
b0<-m.stats$coefficients[1] # Intercept (alpha)
b1<-m.stats$coefficients[2] # Slope (beta)
interceptSE <- m.stats$coefficients[3]
slopeSE <- m.stats$coefficients[4]
z.value <- m.stats$coefficients[6]
vcov = summary(mod)$cov.unscaled
var.b0<-vcov[1,1] # Intercept variance
var.b1<-vcov[2,2] # Slope variance
cov.b0.b1<-vcov[1,2] # Slope intercept covariance
alpha<-1-conf.level
if(het > 1) {
talpha <- -qt(alpha/2, df=df.residual(mod))
} else {
talpha <- -qnorm(alpha/2)
}
g <- het * ((talpha^2 * var.b1)/b1^2)
eta = family(mod)$linkfun(p/100) #probit distribution curve
theta.hat <- (eta - b0)/b1
const1 <- (g/(1-g))*(theta.hat - cov.b0.b1/var.b1)
const2a <- var.b0 - 2*cov.b0.b1*theta.hat + var.b1*theta.hat^2 - g*(var.b0 - (cov.b0.b1^2/var.b1))
const2 <- talpha/((1-g)*b1) * sqrt(het * (const2a))
#Calculate the confidence intervals LCL=lower,
#UCL=upper (Finney, 1971, p. 78-79. eq. 4.35)
LCL <- (theta.hat + const1 - const2)
UCL <- (theta.hat + const1 + const2)
#Calculate variance for theta.hat (Robertson et al., 2007, pg. 27)
var.theta.hat <- (1/(theta.hat^2)) * ( var.b0 + 2*cov.b0.b1*theta.hat + var.b1*theta.hat^2 )
ECtable <- c(10^c(rbind(theta.hat,LCL,UCL,var.theta.hat)),
b1,slopeSE,b0,interceptSE,het,g)
return(ECtable)
}
#' Test the significance of model pairs of strains
#' @noRd
reg.pair0<-function(data){
mortality<-cbind(data$dead,data$total-data$dead)
mod1<-glm(mortality~log10(data$dose)*data$strain,
family = quasibinomial(link=probit))
mod2<-glm(mortality~log10(data$dose),
family = quasibinomial(link=probit))
return(anova(mod2,mod1,test="Chi"))
}
reg.pair<-function(data){
mortality<-cbind(data$dead,data$total-data$dead)
mod1<-glm(mortality~log10(data$dose)*data$strain,
family = quasibinomial(link=probit))
mod2<-glm(mortality~log10(data$dose),
family = quasibinomial(link=probit))
Test<-anova(mod2,mod1,test="Chi")
an<-as.data.frame(anova(mod1,test="F"))
return(list(pairT=Test,fullM=an))
}
#' Get LD and RR values for each strain
#' @noRd
get.dxt<-function(strains,data,conf.level,LD.value){
dxt<-lapply(strains,function(ss,data,conf.level,LD.value){
tmp<-data[data$strain == ss,]
y<-with(tmp,cbind(dead,total-dead))
mods<-glm(y~log10(dose),data=tmp,family = quasibinomial(link=probit))
dat <- LD(mods, conf.level,LD.value=LD.value)
E<-mods$fitted.values*tmp$total # expected dead
chq<-sum(((E-tmp$dead)^2)/(ifelse(E<1,1,E))) #if E is lower than 1 chi-sq
#fails to detect the significance, ~change denominator to 1
df<-length(tmp$dead)-1
dat<-c(dat,chq,df,pchisq(q=chq,df=df,lower.tail=FALSE))
return(list(mods,dat))
},data=data,conf.level=conf.level,LD.value=LD.value)
return(dxt)
}
###########
#' Calculate lethal dosage, resistance ratios, and regression coefficients
#' and tests for linearity
#'
#' Using a generalized linear model (GLM, logit link function), this function
#' computes the lethal doses for 25%, 50% and 95% (unless otherwise provided)
#' of the population (LD25, LD50 and LD95, resp.), and their confidence
#' intervals (LDmax and LDmin, 0.95 by default). See details for more info.
#'
#' @param data a data frame of probit-transformed mortality data using the
#' function probit.trans()
#' @param conf.level numerical. level for confidence intervals to be applied
#' to the models (default 0.95)
#' @param LD.value numerical. Level of lethal dose to be tested.
#' default=c(25,50,95)
#' @param ref.strain character. name of the reference strain if present
#' (see details)
#' @param plot logical. Whether to draw the plot. Default FALSE
#' @param plot.conf logical. If plot=TRUE, whether to plot the 95 percent
#' confidence intervals. Default TRUE
#' @param test.validity logical. If plot=TRUE (default), the regression for a
#' strain that failed the linearity test is not plotted
#' @param legend.par arguments to be passed on to \code{legend()} as
#' in \code{mort.plot()}
#' @param ... parameters to be passed on to graphics for the plot
#' (e.g. col, pch)
#'
#' @importFrom graphics abline axis legend lines mtext
#' @importFrom stats deviance df.residual family pchisq predict.glm qt qnorm
#'
#' @details If a name is provided in ref.strain=, it will be used as the
#' reference to compute the resistance ratios (RR). Alternatively, the
#' function will look for a strain with the suffix "-ref" in the dataset.
#' If this returns NULL, the strain with the lowest LD50 will be considered as reference.
#'
#' In addition to LD values, the function in a nutshell uses a script modified
#' from Johnson et al (2013), which allows taking the g factor into account
#' ("With almost all good sets of data, g will be substantially smaller than
#' 1.0 and seldom greater than 0.4." Finney, 1971) and the heterogeneity (h)
#' of the data (Finney, 1971) to calculate the confidence intervals (i.e. a
#' larger heterogeneity will increase the confidence intervals). It also
#' computes the corresponding resistance ratios (RR), i.e. the ratios between
#' a given strain and the strain with the lower LD50 and LD95, respectively for
#' RR50 and RR95 (usually, it is the susceptible reference strain), with their
#' 95% confidence intervals (RRmin and RRmax), calculated according to
#' Robertson and Preisler (1992). Finally, it also computes the coefficients
#' (slope and intercept, with their standard error) of the linear
#' regressions) and tests for the linearity of the dose-mortality response
#' using a chi-square test (Chi(p)) between the observed dead numbers (data)
#' and the dead numbers predicted by the regression (the test is significant
#' if the data is not linear, e.g. mixed populations).
#'
#' @return Returns a data frame with the various estimates mentioned above.
#' If plot=TRUE, plots the mortality on a probit-transformed scale against
#' the log_10 doses.
#'
#' @author Pascal Milesi, Piyal Karunarathne, Pierrick Labbé
#'
#' @references Finney DJ (1971). Probit analysis. Cambridge: Cambridge
#' University Press. 350 p.
#'
#' Hommel G (1988). A stagewise rejective multiple test procedure based on
#' a modified Bonferroni test. Biometrika 75, 383-6.
#'
#' Johnson RM, Dahlgren L, Siegfried BD, Ellis MD (2013). Acaricide, fungicide
#' and drug interactions in honeybees (Apis mellifera). PLoS ONE 8(1): e54092.
#'
#' Robertson, J. L., and H.K. Preisler.1992. Pesticide bioassays with
#' arthropods. CRC, Boca Raton, FL.
#'
#' @examples
#' data(bioassay)
#' transd<-probit.trans(bioassay$assay2)
#' data<-transd$tr.data
#' resist.ratio(data,plot=TRUE)
#'
#' @export
resist.ratio<-function(data,conf.level=0.95,LD.value=c(25,50,95),
ref.strain=NULL,plot=FALSE,plot.conf=TRUE,
test.validity=TRUE,legend.par=c("bottomright"),...) {
if(!any(LD.value==50)){LD.value<-sort(c(LD.value,50))}
data$strain<-factor(data$strain)
strains<-levels(data$strain)
dxt<-get.dxt(strains,data,conf.level,LD.value=LD.value)
dat<-do.call(rbind,lapply(dxt,function(x){x[[2]]}))
colnames(dat)<-c(paste0(paste0("LD",rep(LD.value,each=4)),
rep(c("","min","max","var"),2)),"Slope", "SlopeSE",
"Intercept", "InterceptSE", "h", "g","Chi2","df","Chi(p)")
rownames(dat)<-strains
if(is.null(ref.strain)){
ref <- which(strains == strains[grep("-ref$",as.character(strains))],
arr.ind=TRUE)
} else {
ref=ref.strain
}
if (length(ref)==0) {
refrow <- which(dat[,"LD50"]==min(dat[,"LD50"]),arr.ind=TRUE)
} else {
refrow <-ref
}
for(l in seq_along(LD.value)){
assign(paste0("rr",LD.value[l]),
dat[,paste0("LD",LD.value[l])]/dat[refrow,paste0("LD",LD.value[l])])
assign(paste0("CI",LD.value[l]),
1.96*sqrt(log10(dat[,paste0("LD",LD.value[l],"var")])+log10(dat[refrow,paste0("LD",LD.value[l],"var")])))
assign(paste0("rr",LD.value[l],"max"),
10^(log10(get(paste0("rr",LD.value[l])))+get(paste0("CI",LD.value[l]))))
assign(paste0("rr",LD.value[l],"min"),
10^(log10(get(paste0("rr",LD.value[l])))-get(paste0("CI",LD.value[l]))))
ggl<-get(paste0("rr",LD.value[l],"max"))
ggl[refrow]<-0
ggl2<-get(paste0("rr",LD.value[l],"min"))
ggl2[refrow]<-0
}
RR<-mget(c(paste0("rr",rep(LD.value,each=3),c("","min","max"))))
RR<-do.call(cbind,RR)
if(plot){
mort.plot(data,strains=NULL,plot.conf,test.validity=test.validity,
conf.level=conf.level,legend.par=legend.par,...)
}
nm<-colnames(dat)
dat<-rbind.data.frame(dat[,-(grep("var",colnames(dat)))])
nm<-nm[-c(grep("var",nm))]
nm<-nm[c((ncol(dat)-8):ncol(dat),1:(ncol(dat)-9))]
dat<-as.matrix(cbind(dat[,(ncol(dat)-8):ncol(dat)],dat[,1:(ncol(dat)-9)],RR))
colnames(dat)<-c(nm,colnames(RR))
dat<-ifelse(dat>10,round(dat,0),ifelse(dat>1,round(dat,2),round(dat,4)))
return(dat)
}
#' Test the significance of dose-mortality response differences
#'
#' This function is used when comparing at least two strains. It tests whether
#' the mortality-dose regressions are similar for different strains, using a
#' likelihood ratio test (LRT). If there are more than two strains, it also
#' computes pairwise tests, using sequential Bonferroni correction
#' (Hommel, 1988) to account for multiple testing.
#'
#' @param data a data frame of probit transformed mortality data using the
#' function probit.trans
#' @importFrom stats anova na.omit
#' @importFrom utils combn
#'
#' @return a list with model outputs: a chi-square test if there are only two
#' strains or if there are more than two strains, first an overall model
#' assessment (i.e. one strain vs. all) and given overall model is significant,
#' then a bonferroni test of significance from a pairwise model comparison.
#'
#' @details A global LRT test assesses a strain’s effect, by comparing two
#' models, one with and one without this effect (i.e. comparing a model with
#' several strains to a model where all the data originate from a single
#' strain).
#' If there are more than two strains, pairwise tests are computed, and
#' p-values of significance are assessed using sequential Bonferroni correction
#' (Hommel, 1988) to account for multiple testing.
#'
#' Warning: We strongly encourage users to not use this function when the
#' dose-mortality response for at least one strain significantly deviates
#' from linearity (see resist.ratio() function for more details): in such
#' cases the test cannot be interpreted.
#'
#' @author Pascal Milesi, Piyal Karunarathne, Pierrick Labbé
#'
#' @examples
#' data(bioassay)
#' transd<-probit.trans(bioassay$assay2)
#' data<-transd$tr.data
#' model.signif(data)
#'
#' @export
model.signif<-function(data){
data$strain<-as.factor(data$strain)
strains<-levels(data$strain)
if (length(strains)>=2) {
Test<-reg.pair(data)
if(length(strains)>2 & Test$pairT$`Pr(>Chi)`[2]>0.05){
message("effect on strains are non-significant \n all strains come from the same population")
} else if(length(strains)>2 & Test$pairT$`Pr(>Chi)`[2]<=0.05){
print(Test$pairT)
message("complete model is significant against a NULL model \n continueing to pair-wise comparison")
Test<-sapply(strains, function(x,data) sapply(strains, function(y,data){
if(x!=y){
dat<-data[data$strain==x | data$strain==y,]
reg.pair(dat)$pairT$Pr[2]
}
},data=data),data=data)
dv<-sapply(strains, function(x,data){
sapply(strains, function(y,data){
if(x!=y){
dat<-data[data$strain==x | data$strain==y,]
reg.pair(dat)$pairT$Deviance[2]
}
},data=data)
},data=data)
rdl<-combn(strains,2,function(x,data){
dat<-data[data$strain==x[1] | data$strain==x[2],]
tmp<-reg.pair(dat)$fullM
list(c(round(tmp$`Resid. Dev`[1],2),round(as.numeric(tmp$`Resid. Dev`[3:4]),3),round(as.numeric(tmp$`Pr(>F)`[3:4]),3)))
},data=data)
rdl<-do.call(rbind,rdl)
rk<-(length(strains)*(length(strains)-1)/2):1
rk<-0.05/rk
toget<-t(combn(rownames(Test),2))
pval<-unlist(Test[toget])
pval<-cbind(1:length(pval),pval)
pval<-pval[order(pval[,2],decreasing = T),]
pval<-cbind(pval,rk)
pval<-pval[order(pval[,1]),]
Test<-data.frame(cbind(toget,round(unlist(Test[toget]),5),
ifelse(pval[,2]<pval[,3],"sig","non-sig")))
rdl[which(Test[,3]>0.05),]<-NA
### bonferr for pvals
bp<-cbind(1:(nrow(Test)*2),as.numeric(unlist(rdl[,4:5])))
bp<-bp[order(unlist(bp[,2])),]
bp0<-length(na.omit(unlist(bp[,2])))
th<-unlist(lapply(1:bp0,function(x){0.05/(bp0-x+1)}))
tmp<-suppressWarnings(cbind(bp,th))
tmp[which(is.na(tmp[,2])),3]<-NA
tmp<-round(tmp[order(tmp[,1]),3],5)
tmp<-split(tmp,cut(seq_along(tmp),2,labels = F))
Test<-cbind(Test,rdl[,1:3],rdl[,4],tmp$`1`,rdl[,5],tmp$`2`)
colnames(Test)<-c("strain1","strain2","model.pval","bonferroni","res.Dv.Null","res.Dv.str","res.Dv.int","str.pval","str.thr","int.pval","int.thr")
}
} else {
message("Only one strain present; check your data")
}
cat("Output details
model.pval - significance value of ANOVA on the binomial GLM test of the strain pair
bonferroni - significance of the model.pval with bonferroni correction
res.Dv - residual deviance
thr - threshold for the significance of the pvalue
str - values for the strains
int - values for the interaction between the strain and the dose
")
return(list(model=Test))
}
|
/scratch/gouwar.j/cran-all/cranData/BioRssay/R/ratios.R
|
#' Apply Abbott's correction
#'
#' Apply Abbott's correction to mortality data
#' @noRd
probit_C <- function(Cx,ii,dataf,x){
datac<-dataf[dataf$dose==0,]
data<-dataf[dataf$dose>0,]
data$mort[ii]<-(data$mort[ii]-Cx)/(1-Cx)
dataf$dead[ii]<-dataf$mort[ii]*dataf$total[ii]
y<-cbind(data$dead[ii],data$total[ii]-data$dead[ii])
moda<-glm(y~log10(data$dose[ii]),family = quasibinomial(link=probit))
L<--moda$deviance/2
j<-datac$strain==x
L<-L+sum(datac$dead[j]*log10(Cx)+(datac$total[j]-datac$dead[j])*log10(1-Cx))
return(L)
}
#' Probit-transform the data and apply Abbott's correction
#'
#' This function applies probit transformation to the data, after applying
#' Abbott's correction (see reference) when control groups (e.g. unexposed
#' susceptible strain) show non-negligible mortality.
#'
#' @param dataf a data frame of mortality data containing four mandatory
#' columns "strain", "dose", "total", "dead" (not necessarily in that order).
#' @param conf numerical. Threshold for the mortality in the controls above
#' which the correction should be applied (default=0.05)
#'
#' @importFrom stats glm optim qnorm quasibinomial runif
#'
#' @return Returns a list. convrg: with correction values and convergence
#' (NULL if mortality in the controls is below conf.), tr.data: transformed
#' data
#'
#' @author Pascal Milesi, Piyal Karunarathne, Pierrick Labbé
#'
#' @references Abbott, WS (1925). A method of computing the effectiveness of
#' an insecticide. J. Econ. Entomol.;18:265‐267.
#'
#' @examples
#' data(bioassay)
#' transd<-probit.trans(bioassay)
#' @export
probit.trans<-function(dataf,conf=0.05){
mort<-ifelse(dataf$dead/dataf$total==0,0.006,ifelse(dataf$dead/dataf$total==1,
1-0.006,dataf$dead/dataf$total))
dataf<-cbind(dataf,mort)
if(any(dataf$dose==0)){
if(any(dataf[dataf$dose==0,"mort"]>conf)){
st<-unique(as.character(dataf$strain))
tt<-lapply(st,function(x,dataf){
data<-dataf[dataf$dose>0,]
sig<-data$strain==x
if(!any(data$mort[sig]==0)){
bottom<-1e-12;top<-min(data$mort[sig])
pin<-runif(1,min=bottom,max=top)
opz<-optim(pin,probit_C,ii=sig,dataf=dataf,x=x,control=list(fnscale=-1,trace=1),
method="L-BFGS-B",lower=bottom,upper=top)
val<-c(opz$par,opz$convergence)
} else {
val<-c(0,0)
}
return(c(x,val))
},dataf=dataf)
tt<-data.frame(do.call(rbind,tt))
colnames(tt)<-c("Strain","ControlMortality","Convergence(OK if 0)")
data<-dataf[dataf$dose>0,]
for(i in seq_along(tt[,1])){
data$mort[data[,"strain"]==tt[i,1]]<-(data$mort[data[,"strain"]==tt[i,1]]-as.numeric(tt[i,2]))/(1-as.numeric(tt[i,2]))
data$dead[data[,"strain"]==tt[i,1]]<-data$mort[data[,"strain"]==tt[i,1]]*data$total[data[,"strain"]==tt[i,1]]
}
} else {
data<-dataf[dataf$dose>0,]
tt<-NULL
}
} else {
data<-dataf
tt<-NULL
}
probmort<-sapply(data$mort,qnorm) # apply probit transformation to the data
data<-cbind(data,probmort)
data$strain<-as.factor(data$strain)
outdata<-list(convrg=tt,tr.data=data)
return(outdata)
}
|
/scratch/gouwar.j/cran-all/cranData/BioRssay/R/transform.R
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup--------------------------------------------------------------------
library(BioRssay)
## ----eval=FALSE---------------------------------------------------------------
# #1. CRAN version
# install.packages("BioRssay")
# #2. Developmental version
# if (!requireNamespace("devtools", quietly = TRUE))
# install.packages("devtools")
# devtools::install_github("milesilab/BioRssay", build_vignettes = TRUE)
## -----------------------------------------------------------------------------
data(bioassay)
head(bioassay$assay2)
## -----------------------------------------------------------------------------
file <- paste0(path.package("BioRssay"), "/Test.BioRssay.txt")
test<-read.table(file,header=TRUE)
head(test)
## -----------------------------------------------------------------------------
assays<-bioassay
exm1<-assays$assay2
head(exm1)
unique(as.character(exm1$strain))
## -----------------------------------------------------------------------------
dataT<-probit.trans(exm1) #additionally an acceptable threshold for controls' mortality can be set as desired with "conf="; default is 0.05.
dataT$convrg
head(dataT$tr.data)
## -----------------------------------------------------------------------------
data<-dataT$tr.data #probit-transformed data
RR<-resist.ratio(data)
RR
## -----------------------------------------------------------------------------
model.signif(dataT$tr.data)
## ----echo=FALSE---------------------------------------------------------------
oldpar<-par(no.readonly = TRUE)
## ----fig.dim=c(8,4)-----------------------------------------------------------
strains<-levels(data$strain)
par(mfrow=c(1,2)) # set plot rows
# plot without confidence intervals and test of validity of the model
mort.plot(data,plot.conf=FALSE,test.validity=FALSE)
# plot only the regression lines
mort.plot(data,plot.conf=FALSE,test.validity=FALSE,pch=NA)
# same plots with confidence level
par(mfrow=c(1,2))
mort.plot(data,plot.conf=TRUE,test.validity=FALSE)
mort.plot(data,plot.conf=TRUE,test.validity=FALSE,pch=NA)
## ----echo=FALSE---------------------------------------------------------------
par(oldpar)
## -----------------------------------------------------------------------------
head(test)
unique(test$insecticide)
bend<-test[test$insecticide=="bendiocarb",]
head(bend)
## ----fig.dim=c(6,4)-----------------------------------------------------------
dataT.b<-probit.trans(bend)
data.b<-dataT.b$tr.data
RR.b<-resist.ratio(data.b,plot = T,ref.strain = "Kisumu",plot.conf = T, test.validity = T)
head(RR.b)
## -----------------------------------------------------------------------------
#To then test the difference in dose-mortality response between the strains
t.models<-model.signif(data.b)
t.models
## ----fig.dim=c(6,4)-----------------------------------------------------------
file <- paste0(path.package("BioRssay"), "/Example3.txt") #import the example file from the package
exm3<-read.table(file,header=TRUE)
trnd<-probit.trans(exm3) #probit transformation and correction of data
resist.ratio(trnd$tr.data,LD.value = c(50,95),plot = T) #get LD and RR values with the mortality plot
model.signif(trnd$tr.data) # test the models significance for each strain
|
/scratch/gouwar.j/cran-all/cranData/BioRssay/inst/doc/BioRssay.R
|
---
title: "BioRssay"
date: "`r format(Sys.time(), '%d %B, %Y')`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{BioRssay}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(BioRssay)
```
# **An R package for analyses of bioassays and probit graphs**
**Piyal Karunarathne, Nicolas Pocquet, Pascal Milesi, and Pierrick Labbé**
This package is designed to analyze mortality data from bioassays of one or several strains/lines/populations. As of now, the functions in the package allow adjusting for mortality in the controls with Abbott’s correction. For each strain, functions are available to generate a mortality-dose regression using a generalized linear model (which takes over-dispersion into account and allows mortalities of 0 or 1), and to plot the regressions with or without the desired confidence interval (e.g. 95%).
The package also provides functions to test the linearity of the log-dose response using a chi-square test between model predictions and observed data (significant deviations from linearity may reflect mixed populations for example).
The package also allows determining the lethal doses for 25%, 50% and 95% of the population (LD25, LD50 and LD95 respectively), or the levels specified by the user, with their 95% confidence intervals (CI) and the variance of each (e.g., LD25var, LD50var, etc.), following the approach of Johnson et al. (2013), which allows taking the heterogeneity of the data into account (*Finney 1971*) to calculate the CI (i.e. a larger heterogeneity will increase the CI).
The methods implemented here use likelihood ratio tests (LRT) to test for differences in resistance levels among different strains. Finally, resistance ratios (RR) at LD25, LD50 and LD95, i.e. the ratios of the LDs between a given strain and the strain with the lowest LD50 (usually the susceptible reference), with their 95% confidence intervals (RRmin and RRmax), are calculated according to Robertson and Preisler (1992).
* Installing `BioRssay`
```{r,eval=FALSE}
#1. CRAN version
install.packages("BioRssay")
#2. Developmental version
if (!requireNamespace("devtools", quietly = TRUE))
install.packages("devtools")
devtools::install_github("milesilab/BioRssay", build_vignettes = TRUE)
```
# 1. **DATA PREPARATION**
BioRssay can import data in any format that is compatible with base R data import functions (e.g. read.table, read.csv). However, for the functions in BioRssay to work, the data **must** have at least the following columns (other columns won’t be used, but are no hindrance).
* strain: a column containing the strains tested
* dose: dosage tested on each strain/sample (controls should be entered as 0)
* total: total number of samples tested
* dead: number of dead (or knock down) samples
See the examples below.
**Example 1**
```{r}
data(bioassay)
head(bioassay$assay2)
```
Also download the test data at <https://github.com/milesilab/DATA/blob/main/BioAssays/Test.BioRssay.txt>
and find more example data sets at <https://github.com/milesilab/DATA/tree/main/BioAssays>
**Example 2**
```{r}
file <- paste0(path.package("BioRssay"), "/Test.BioRssay.txt")
test<-read.table(file,header=TRUE)
head(test)
```
NOTE: It is also possible to include a reference strain/population with the suffix "ref" in the strain column (see example 1), or the reference strain can be specified later in the function `resist.ratio` to obtain the resistance ratios for each strain (see below).
# 2. **Analysis**
The workflow is only succinctly described here, for more information on the functions and their options, see individual one in the reference index.
## **Example 1**
Let's have a quick look at the data again.
```{r}
assays<-bioassay
exm1<-assays$assay2
head(exm1)
unique(as.character(exm1$strain))
```
This example contains the mortality data of three strains (KIS-ref, DZOU, and DZOU2); KIS is used as the reference, as indicated by the “ref” suffix.
The first step is to check whether the controls have a non-negligible mortality, in which case a correction should be applied to the data, before probit transformation. This is easily achieved with the function `probit.trans()`.
```{r}
dataT<-probit.trans(exm1) #additionally an acceptable threshold for controls' mortality can be set as desired with "conf="; default is 0.05.
dataT$convrg
head(dataT$tr.data)
```
The output of probit.trans is a list of which the first element (`convrg`) contains the results of Abbott’s correction and the convergence values.
However, since the mortality in the controls (dose=0) is below 5% (`conf=0.05`) in the present example, `dataT$convrg` is NULL and thus no correction is applied to the data. The second element of the list dataT is the probit-transformed data with two additional columns: *mort*, the observed mortalities, and *probmort*, the observed probit-transformed mortalities. This data frame is what we’ll use in the next steps of the analysis.
*If you set the threshold to conf=0.01 with example 1, you can assess the effects of Abbott’s correction: all mortalities are slightly reduced to take the base control mortality into account.*
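A minimal sketch of that check (illustration only; `exm1` as loaded above, `dataT.strict` is just a throwaway name):
```{r,eval=FALSE}
# Illustration only: lower the control-mortality threshold to force the correction
dataT.strict <- probit.trans(exm1, conf = 0.01)
dataT.strict$convrg        # correction and convergence values are now reported
head(dataT.strict$tr.data) # corrected, probit-transformed data
```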
The second step is to compute the lethal dose values (25%, 50% and 95%, LD25, LD50 and LD95 respectively) and the corresponding resistance ratios. The function `resist.ratio` allows you to do just that (user also has the option to calculate these values for different LD values). If no reference strain has been specified in the data file (using the suffix “ref” as mentioned above), it can be specified in `ref.strain=`. Otherwise, the strain with the lowest LD50 will be considered as such. By default, the LDs’ 95% confidence intervals are computed (the min and max values are reported); you can adjust this using `conf.level=`.
```{r}
data<-dataT$tr.data #probit-transformed data
RR<-resist.ratio(data)
RR
```
Note that we did not specify the reference strain here, as it is already labeled in the data.
For each strain, you have first the LD25, LD50 and LD95 and their upper and lower limits (the default is the 95% CI), then the slope and intercept of the regression (with their standard errors), the heterogeneity (h) and the g factor (“With almost all good sets of data, g will be substantially smaller than 1.0 and seldom greater than 0.4.” Finney, 1971).
The result of the chi-square test (`Chi(p)`) is then indicated to judge whether the data follow a linear regression: here all the p-values are over 0.05, so the fits are acceptable. Finally, the resistance ratios are indicated for LD25, LD50 and LD95 (RR25, RR50 and RR95), as well as their upper and lower limits.
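If other lethal-dose levels or confidence levels are needed, the same call accepts the `LD.value=` and `conf.level=` arguments; a minimal sketch (values chosen purely for illustration):
```{r,eval=FALSE}
# Illustration only: other LD levels and a 90% confidence interval
resist.ratio(data, LD.value = c(10, 50, 90), conf.level = 0.90)
```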
The third step, when analyzing more than one strain, is now to test for difference in dose-mortality responses between strains using the `model.signif()` function.
```{r}
model.signif(dataT$tr.data)
```
As there are 3 strains, the function first tests whether all strains are similar (i.e. equivalent to 1 strain) or not (i.e. at least one is different from others), using a likelihood ratio test. Here, the test is highly significant, some strains are thus different in terms of dose response.
Pairwise tests are then performed and reported below. Here, the KIS strain is different from DZOU and from DZOU2 strains (model.pval <0.05). DZOU and DZOU2 are not different (model.pval >0.05). The `bonferroni` column indicates whether the p-values <0.05 remain significant (sig vs non-sig) after correction for multiple testing.
Further, the function outputs seven more columns with statistical outputs from the model evaluation between strains and strain-dose to a null model. The abbreviations are as follows:
`res.Dv` - residual deviance
`thr` - threshold for the significance of the pvalue
`str` - values for the strains
`int` - values for the interaction between the strain and the dose
*Note: the p-values for the strain and the strain-dose interaction are from an F-test on a binomial model.*
***Data Visualization***
The data and the regression can be plotted with confidence levels using the `mort.plot()` function. It is also possible to take the validity of the linearity test into account for the plots using the `test.validity=` option. The probit-transformed mortalities (`probit.trans()` function) are plotted as a function of the log10 of the doses.
```{r,echo=FALSE}
oldpar<-par(no.readonly = TRUE)
```
```{r,fig.dim=c(8,4)}
strains<-levels(data$strain)
par(mfrow=c(1,2)) # set plot rows
# plot without confidence intervals and test of validity of the model
mort.plot(data,plot.conf=FALSE,test.validity=FALSE)
# plot only the regression lines
mort.plot(data,plot.conf=FALSE,test.validity=FALSE,pch=NA)
# same plots with confidence level
par(mfrow=c(1,2))
mort.plot(data,plot.conf=TRUE,test.validity=FALSE)
mort.plot(data,plot.conf=TRUE,test.validity=FALSE,pch=NA)
```
```{r,echo=FALSE}
par(oldpar)
```
It is also possible to plot different confidence intervals with the `conf.level=` option (the default is 0.95), and to plot only a subset of strains by listing the desired strains in the `strains=` option; if it is not provided, all the strains are plotted.
Note that the plots can be generated directly from the `resist.ratio()` function using the `plot=TRUE` option.
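For instance, the following sketch (not run) plots only the DZOU and DZOU2 strains with 90% confidence bands; the strain labels are assumed to match those in `data$strain`.
```{r,eval=FALSE}
# Not run: subset of strains with 90% confidence bands
mort.plot(data, strains = c("DZOU", "DZOU2"), plot.conf = TRUE,
          conf.level = 0.90, test.validity = FALSE)
```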
## **Example 2**
We follow the same workflow (using the plot option in `resist.ratio()`). However, more than one insecticide was tested in this experiment, so we need to subset the data by insecticide and carry out the analysis as before.
```{r}
head(test)
unique(test$insecticide)
bend<-test[test$insecticide=="bendiocarb",]
head(bend)
```
We will use a subset of the data for the insecticide "bendiocarb" only.
```{r,fig.dim=c(6,4)}
dataT.b<-probit.trans(bend)
data.b<-dataT.b$tr.data
RR.b<-resist.ratio(data.b,plot = T,ref.strain = "Kisumu",plot.conf = T, test.validity = T)
head(RR.b)
```
Note that we have enabled the arguments `plot=`, `plot.conf=` and `test.validity=`. When the log-dose response is not linear for a strain (chi-square p-value < 0.05), it is plotted without forcing linearity, as for the “Acerkis” and “AgRR5” strains in this example.
```{r}
#To then test the difference in dose-mortality response between the strains
t.models<-model.signif(data.b)
t.models
```
Note that because at least one of the strains failed the linearity test, the validity of the pairwise dose-mortality response tests is, at best, highly questionable; we do not recommend relying on them.
If many strains are present and only one (or a few) fails the linearity test, we recommend removing those strains from the analysis.
These steps can be repeated for the different insecticides, either one by one or in a loop (e.g. a `for` loop), as sketched below.
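A minimal sketch of such a loop (not run), assuming the `test` data frame from Example 2 is available:
```{r,eval=FALSE}
# Not run: repeat the workflow for every insecticide in the data set
for (ins in unique(test$insecticide)) {
  sub.data <- probit.trans(test[test$insecticide == ins, ])$tr.data
  print(resist.ratio(sub.data))
  print(model.signif(sub.data))
}
```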
## **Example 3**
```{r,fig.dim=c(6,4)}
file <- paste0(path.package("BioRssay"), "/Example3.txt") #import the example file from the package
exm3<-read.table(file,header=TRUE)
trnd<-probit.trans(exm3) #probit transformation and correction of data
resist.ratio(trnd$tr.data,LD.value = c(50,95),plot = T) #get LD and RR values with the mortality plot
model.signif(trnd$tr.data) # test the models significance for each strain
```
# 3. **REFERENCES**
1. Finney DJ (1971). Probit Analysis. Cambridge: Cambridge University Press. 350 p.
1. Hommel G (1988). A stagewise rejective multiple test procedure based on a modified Bonferroni test. Biometrika 75, 383-386.
1. Johnson RM, Dahlgren L, Siegfried BD, Ellis MD (2013). Acaricide, fungicide and drug interactions in honey bees (Apis mellifera). PLoS ONE 8(1): e54092.
1. Robertson JL, Preisler HK (1992). Pesticide Bioassays with Arthropods. CRC Press, Boca Raton, FL.
|
/scratch/gouwar.j/cran-all/cranData/BioRssay/inst/doc/BioRssay.Rmd
|
#' BioStatR
#'
#' Motivation: Companion package for the book Initiation à la statistique avec R.
#' It contains the code for the book's chapters and the solutions to the
#' exercises, as well as further supplementary material.
#'
#'
#' @name BioStatR
#' @docType package
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, 4ème édition, ISBN:9782100847945, Dunod, Paris, 2023
#' @references \emph{Initiation à la Statistique avec R}, Frédéric Bertrand,
#' Myriam Maumy-Bertrand, 2023,
#' \url{https://www.dunod.com/sciences-techniques/initiation-statistique-avec-r-cours-exemples-exercices-et-problemes-corriges-1},
#' \url{https://github.com/fbertran/BioStatR/} et
#' \url{https://fbertran.github.io/BioStatR/}
#'
#' @importFrom grDevices colorRampPalette gray
#' @importFrom graphics hist par persp rect
#' @importFrom stats as.formula lm qchisq qf qnorm quantile sd
#' @import ggplot2
#'
#' @examples
#' set.seed(314)
#'
NULL
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/BioStatR-package.R
|
#' Confidence intervals for a proportion
#'
#' This function computes several types of confidence intervals
#' for a proportion.
#'
#'
#' @param x Number of successes
#' @param n Number of trials
#' @param conf.level Desired confidence level for the interval
#' @param method Type of confidence interval to compute: the "Wilson" score
#' interval, the "exact" Clopper-Pearson interval, the asymptotic
#' "Wald" interval, or all three ("all")
#' @return \item{matrix}{Bounds of the requested confidence intervals.}
#' @author Frédéric Bertrand\cr \email{frederic.bertrand@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~fbertran/}\cr
#' Maumy-Bertrand\cr \email{myriam.maumy@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~mmaumy/}
#' @seealso \code{\link{binom.test}}, \code{\link{binom.ci}},
#' \code{\link{poi.ci}}
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords univar
#' @examples
#'
#' binom.ci(5,10,method="all")
#'
#' @export binom.ci
binom.ci <- function (x, n, conf.level = 0.95, method = c("Wilson", "exact", "Wald",
"all"))
{
method <- match.arg(method)
bc <- function(x, n, conf.level, method) {
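        # Internal helper for a single (x, n) pair: it computes, in order, the
        # exact Clopper-Pearson interval (via quantiles of the F distribution),
        # the Wilson score interval and the asymptotic Wald interval, and
        # returns the one(s) selected by `method`.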
nu1 <- 2 * (n - x + 1)
nu2 <- 2 * x
ll <- if (x > 0)
x/(x + qf(1/2 + conf.level/2, nu1, nu2) * (n - x + 1))
else 0
nu1p <- nu2 + 2
nu2p <- nu1 - 2
pp <- if (x < n)
qf(1/2 + conf.level/2, nu1p, nu2p)
else 1
ul <- ((x + 1) * pp)/(n - x + (x + 1) * pp)
zcrit <- -qnorm((1-conf.level)/2)
z2 <- zcrit * zcrit
p <- x/n
cl <- (p + z2/2/n + c(-1, 1) * zcrit * sqrt((p * (1 -
p) + z2/4/n)/n))/(1 + z2/n)
if (x == 1)
cl[1] <- -log(conf.level)/n
if (x == (n - 1))
cl[2] <- 1 + log(conf.level)/n
asymp.lcl <- x/n - qnorm(1/2+conf.level/2) * sqrt(((x/n) *
(1 - x/n))/n)
asymp.ucl <- x/n + qnorm(1/2+conf.level/2) * sqrt(((x/n) *
(1 - x/n))/n)
res <- rbind(c(ll, ul), cl, c(asymp.lcl, asymp.ucl))
res <- cbind(rep(x/n, 3), res)
switch(method, Wilson = res[2, ], exact = res[1, ], Wald = res[3,
], all = res, res)
}
if ((length(x) != length(n)) & length(x) == 1)
x <- rep(x, length(n))
if ((length(x) != length(n)) & length(n) == 1)
n <- rep(n, length(x))
if ((length(x) > 1 | length(n) > 1) & method == "all") {
method <- "Wilson"
warning("method=all will not work with vectors...setting method to Wilson")
}
if (method == "all" & length(x) == 1 & length(n) == 1) {
mat <- bc(x, n, conf.level, method)
dimnames(mat) <- list(c("Exact", "Wilson", "Wald"),
c("PointEst", "Lower", "Upper"))
return(mat)
}
mat <- matrix(ncol = 3, nrow = length(x))
for (i in 1:length(x)) mat[i, ] <- bc(x[i], n[i], conf.level = conf.level,
method = method)
dimnames(mat) <- list(rep("", dim(mat)[1]), c("PointEst",
"Lower", "Upper"))
mat
}
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/binom.ci.R
|
#' Coefficient of variation
#'
#' Computes the coefficient of variation of a statistical series
#'
#' The coefficient of variation is the corrected (sample) standard deviation
#' divided by the mean. It is expressed as a percentage.
#'
#' @param x A numeric vector
#' @return \item{num}{Value of the coefficient of variation, expressed as a percentage}
#' @author Frédéric Bertrand\cr \email{frederic.bertrand@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~fbertran/}\cr
#' Maumy-Bertrand\cr \email{myriam.maumy@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~mmaumy/}
#' @seealso \code{\link{mean}}, \code{\link{sd}}
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords univar
#' @examples
#'
#' data(Europe)
#' cvar(Europe[,2])
#'
#' @export cvar
cvar <- function(x){100*sd(x)/mean(x)}
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/cvar.R
|
#' Working hours in Europe
#'
#' This dataset provides mean weekly cumulated work durations for several
#' European countries.
#'
#' The duration is given in hours
#'
#' @name Europe
#' @docType data
#' @format A data frame with 25 observations on the following 2 variables.
#' \describe{ \item{Pays}{a factor with some of the European
#' countries as levels} \item{Duree}{weekly cumulative work duration} }
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords datasets
#' @examples
#'
#' data(Europe)
#'
NULL
#' Measurements of shrub fruits
#'
#' This dataset contains measurements of several features of the fruits of small
#' trees, such as their mass (in g) or their length (in cm).
#'
#' This dataset was made during the summer 2009 in the south of France. It
#' provides measurements of several features of the fruits of small trees such
#' as their mass or their length.
#'
#' @name Extrait_Taille
#' @docType data
#' @format A data frame with 252 observations on the following 5 variables.
#' \describe{ \item{masse}{a numeric vector} \item{taille}{a
#' numeric vector} \item{espece}{a factor with levels \code{bignone},
#' \code{glycine blanche}, \code{glycine violette} and \code{lauriers roses}} }
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords datasets
#' @examples
#'
#' data(Extrait_Taille)
#'
NULL
#' Measurements of shrub fruits
#'
#' This dataset contains measurements of several features of the fruits of small
#' trees, such as their mass or their length.
#'
#' This dataset was made during the summer 2009 in the south of France. It
#' provides measurements of several features of the fruits of small trees such
#' as their mass or their length.
#'
#' @name Mesures
#' @docType data
#' @format A data frame with 252 observations on the following 3 variables.
#' \describe{ \item{masse}{a numeric vector} \item{taille}{a
#' numeric vector} \item{espece}{a factor with levels \code{bignone},
#' \code{glycine blanche}, \code{glycine violette} and \code{lauriers roses}} }
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords datasets
#' @examples
#'
#' data(Mesures)
#'
NULL
#' Measurements of shrub fruits
#'
#' This dataset contains measurements of several features of the fruits of small
#' trees, such as their mass or their length.
#'
#' This dataset was made during the summer 2009 in the south of France. It
#' provides measurements of several features of the fruits of small trees such
#' as their mass or their length.
#'
#' @name Mesures5
#' @docType data
#' @format A data frame with 252 observations on the following 5 variables.
#' \describe{ \item{masse}{a numeric vector} \item{taille}{a
#' numeric vector} \item{graines}{a numeric vector}
#' \item{masse_sec}{a numeric vector} \item{espece}{a factor
#' with levels \code{bignone}, \code{glycine blanche}, \code{glycine violette}
#' and \code{lauriers roses}} }
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords datasets
#' @examples
#'
#' data(Mesures5)
#'
NULL
#' Quetelet indices
#'
#' This dataset contains weight and height measurements intended for computing
#' the body mass index (also known as the Quetelet index).
#'
#' The weight is given in kg and the height in cm
#'
#' @name Quetelet
#' @docType data
#' @format A data frame with 66 observations of 3 variables. \describe{
#' \item{sexe}{a factor giving the subject's sex}
#' \item{poids}{the subject's weight} \item{taille}{the
#' subject's height} }
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords datasets
#' @examples
#'
#' data(Quetelet)
#'
NULL
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/datasets.R
|
#' Computation of the correlation ratio eta squared
#'
#' This function computes the correlation ratio \eqn{\eta^2}, an important
#' measure of association between a quantitative variable and a
#' qualitative variable.
#'
#'
#' @param x A vector associated with the quantitative variable
#' @param y A factor associated with the qualitative variable
#' @return \item{num}{The value of the empirical correlation ratio}
#' @author Frédéric Bertrand\cr \email{frederic.bertrand@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~fbertran/}\cr
#' Maumy-Bertrand\cr \email{myriam.maumy@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~mmaumy/}
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords univar
#' @examples
#'
#' eta2(Mesures5$taille,Mesures5$espece)
#'
#' @export eta2
eta2 <- function(x,y) {
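  # eta^2 is the R^2 of the one-way ANOVA linear model of x on the factor y,
  # i.e. the between-group sum of squares divided by the total sum of squares.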
return(summary(lm(as.formula(x~y)))$r.squared)
}
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/eta2.R
|
#' Quantile-quantile plot and interquartile line
#'
#' Draws the quantile-quantile plot (\code{\link{qqplot}}) and the
#' interquartile line (passing through the first and third quartiles, in the
#' manner of the \code{\link{qqline}} function) with the \code{ggplot2}
#' graphics library.
#'
#'
#' @param df A data frame
#' @param var The name of a variable of df
#' @param qdist The quantile function of a (family of) distribution(s). The
#' default is the quantile function of the normal family.
#' @param params A list of parameters specifying the distribution to use. The
#' default is the standard normal distribution. The parameters can be
#' estimated with the \code{\link[MASS]{fitdistr}} function of the MASS
#' package.
#' @param qq.line A logical value. Shows or hides the interquartile
#' line.
#' @param color The name of a colour. Specifies the colour used for the
#' interquartile line.
#' @param alpha Transparency index. Specifies the transparency used to
#' display the sample values.
#' @return \item{ggplot}{A plot built with the ggplot2 library.
#' Prints the theoretical quartile values through which the line passes,
#' as well as its intercept and slope, when the line is
#' drawn.}
#' @author Frédéric Bertrand\cr \email{frederic.bertrand@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~fbertran/}\cr
#' Maumy-Bertrand\cr \email{myriam.maumy@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~mmaumy/}
#' @seealso \code{\link{qqplot}}, \code{\link{qqline}}
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 3e, 2018.
#' @keywords univar
#' @examples
#'
#' glycine.blanche<-subset(Mesures,subset=(Mesures$espece=="glycine blanche"))
#' gg_qqplot(glycine.blanche,"taille")
#'
#' #bonus: fit with another distribution (here Student's t (since dist = qt), whose df are estimated)
#' lauriers.roses<-subset(Mesures,subset=(Mesures$espece=="laurier rose"))
#' shapiro.test(lauriers.roses$taille)
#' #not drawn from a normal distribution at the alpha=5% level
#' gg_qqplot(lauriers.roses,"taille")
#' gg_qqplot(lauriers.roses,"taille",qq.line=FALSE)
#' #let us try a qqplot with a Student distribution
#' \dontrun{
#' require(MASS)
#' params <- as.list(fitdistr(lauriers.roses$taille, "t")$estimate)
#' #with the line
#' gg_qqplot(lauriers.roses,"taille",qt,params)
#' #let us try a qqplot with a gamma distribution
#' params <- as.list(fitdistr(lauriers.roses$taille,"gamma")$estimate)
#' #with the line
#' gg_qqplot(lauriers.roses,"taille",qgamma,params)
#' #let us try a qqplot with a chi-square distribution
#' params <- list(df=fitdistr(lauriers.roses$taille,"chi-squared",start=list(df=5),
#' method="Brent",lower=1,upper=40)$estimate)
#' #with the line
#' gg_qqplot(lauriers.roses,"taille",qchisq,params)
#' }
#'
#' @export gg_qqplot
gg_qqplot <- function(df,var,qdist=qnorm,params=list(),qq.line=TRUE,color="red",alpha=.5)
{
requireNamespace("ggplot2")
force(params)
y <- quantile((df[var])[!is.na(df[var])], c(0.25, 0.75))
mf <- names(formals(qdist))
m <- match(names(formals(qdist)), names(params), 0L)
uparams <- params[m]
x <- do.call("qdist",c(list(p=c(0.25, 0.75)),uparams))
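  # The reference line passes through the points whose x-coordinates are the
  # theoretical first and third quartiles and whose y-coordinates are the
  # corresponding sample quartiles, as qqline() does.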
if(qq.line){
slope <- diff(y)/diff(x)
int <- y[1L] - slope * x[1L]
}
p <- ggplot2::ggplot(df, aes_string(sample=var)) + ggplot2::stat_qq(alpha = alpha,distribution=qdist,dparams=params)
if(qq.line){
p <- p + ggplot2::geom_abline(slope = slope, intercept = int, color=color)
cat(paste(c("1st quartile : ",x[1],"\n3rd quartile : ",x[2],"\nIntercept : ",int,"\nSlope : ",slope,"\n"),sep=""))
}
return(p)
}
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/gg_qqplot.R
|
#' Histograms
#'
#' Used to draw histograms in scatterplot matrices
#'
#' This function is meant to be used with the pairs graphics function.
#'
#' @param x A numeric vector
#' @param \dots Arguments passed on to the function that draws the
#' histograms
#' @author Frédéric Bertrand\cr \email{frederic.bertrand@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~fbertran/}\cr
#' Maumy-Bertrand\cr \email{myriam.maumy@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~mmaumy/}
#' @seealso \code{\link{pairs}}, \code{\link{hist}}
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords univar
#' @examples
#'
#' data(Mesures5)
#' pairs(Mesures5,diag.panel="panel.hist")
#'
#' @export panel.hist
panel.hist <- function(x, ...)
{
usr <- par("usr"); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
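    # Keep the panel's x-range but set the y-range to [0, 1.5] so that the
    # bar heights below, rescaled to y/max(y), fit inside the diagonal panel.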
h <- hist(x, plot = FALSE)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col="cyan", ...)
}
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/panel.hist.R
|
#' Bivariate representation of discrete variables or of continuous variables
#' grouped into classes.
#'
#' This function builds a stereogram that helps assess the association
#' between two discrete variables or two variables grouped into classes.
#'
#'
#' @param x Observed values or levels of the first discrete variable
#' @param y Observed values or levels of the second discrete variable
#' @param f If \code{f=0} (a single value), \code{x} and \code{y} are two
#' statistical series. If \code{length(f)>1}, f is a frequency table and
#' \code{x} and \code{y} are the row and column names of
#' \code{f}.
#' @param xaxe Label of the x axis
#' @param yaxe Label of the y axis
#' @param col Colour of the stereogram
#' @param border Should the mesh of the plot be displayed?
#' @param Nxy Step of the mesh for each axis
#' @param theme The theme determines the colour palette that is used. There
#' are four possible colour choices, "0", "1", "2", "3", and one in shades of
#' grey, "bw"
#' @return A stereogram of the two grouped statistical series or of the two
#' discrete variables under study.
#' @author Frédéric Bertrand\cr \email{frederic.bertrand@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~fbertran/}\cr
#' Maumy-Bertrand\cr \email{myriam.maumy@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~mmaumy/}
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords univar
#' @examples
#'
#' xx=c(1.83,1.72,1.65,1.70,2.05,1.92,1.85,1.70,1.75,1.9)
#' yy=c(75,70,70,60,90,92,75,68,71,87)
#' plotcdf2(xx,yy,f=0,"taille en m","poids en kg")
#'
#' xx=seq(2,12)
#' yy=seq(1,6)
#' p=c(1/36,0,0,0,0,0,
#' 2/36,0,0,0,0,0,
#' 2/36,1/36,0,0,0,0,
#' 2/36,2/36,0,0,0,0,
#' 2/36,2/36,1/36,0,0,0,
#' 2/36,2/36,2/36,0,0,0,
#' 0,2/36,2/36,1/36,0,0,
#' 0,0,2/36,2/36,0,0,
#' 0,0,0,2/36,1/36,0,
#' 0,0,0,0,2/36,0,
#' 0,0,0,0,0,1/36)
#' p=matrix(p,byrow=TRUE,ncol=6)
#' plotcdf2(xx,yy,p,"somme des dés","valeur du plus petit")
#'
#' @export plotcdf2
plotcdf2 <- function (x, y, f, xaxe, yaxe, col = NULL, border = FALSE, Nxy = 200,
theme = "0")
{
if (length(f) > 1) {
xi = sort(x)
yj = sort(y)
k = length(x)
l = length(y)
}
else {
xi = as.numeric(levels(as.factor(x)))
yj = as.numeric(levels(as.factor(y)))
f = table(x, y)
k = length(xi)
l = length(yj)
}
if (sum(sum(f)) > 1) {
f = f/sum(sum(f))
}
F = matrix(0,ncol=l,nrow=k)
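    # Build the joint cumulative distribution F from the joint frequencies f
    # using the inclusion-exclusion recursion
    # F[i,j] = f[i,j] + F[i-1,j] + F[i,j-1] - F[i-1,j-1].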
F[1, ] = cumsum(f[1, ])
F[, 1] = cumsum(f[, 1])
for (i in 2:k) {
for (j in 2:l) {
F[i, j] = f[i, j] + F[i - 1, j] + F[i, j - 1] - F[i -
1, j - 1]
}
}
deltax = (max(xi) - min(xi))/Nxy
deltay = (max(yj) - min(yj))/Nxy
x = seq(min(xi) - deltax, max(xi) + deltax, deltax)
y = seq(min(yj) - deltay, max(yj) + deltay, deltay)
n1 = length(x)
n2 = length(y)
z = matrix(rep(0, n1 * n2), ncol = n2)
for (i in 1:n1) {
for (j in 1:n2) {
i1 = (x[i] >= xi)
i2 = (y[j] >= yj)
if (sum(i1) == 0 | sum(i2) == 0) {
z[i, j] = 0
}
if (sum(i1) >= k & sum(i2) >= l) {
z[i, j] = 1
}
if (sum(i1) >= k & sum(i2) < l & sum(i2) > 0) {
z[i, j] = F[k, sum(i2)]
}
if (sum(i1) < k & sum(i2) >= l & sum(i1) > 0) {
z[i, j] = F[sum(i1), l]
}
if (sum(i1) < k & sum(i2) < l & sum(i1) > 0 & sum(i2) >
0) {
z[i, j] = F[sum(i1), sum(i2)]
}
}
}
if (is.null(col)) {
nrz <- nrow(z)
ncz <- ncol(z)
jet.colors <- colorRampPalette(c("blue", "red"))
if (theme == "1") {
jet.colors <- colorRampPalette(c("#BDFF00", "#FF00BD",
"#00BDFF"))
}
if (theme == "2") {
jet.colors <- colorRampPalette(c("#FF8400", "#8400FF",
"#00FF84"))
}
if (theme == "3") {
jet.colors <- colorRampPalette(c("#84FF00", "#FF0084",
"#0084FF"))
}
if (theme == "bw") {
jet.colors <- function(nbcols) {
gray(seq(0.1, 0.9, length.out = nbcols))
}
}
nbcol <- 100
color <- jet.colors(nbcol)
zfacet <- z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz,
-ncz]
facetcol <- cut(zfacet, nbcol)
persp(x, y, z, theta = -30, phi = 15, col = color[facetcol],
shade = 0.15, main = "St\u00E9r\u00E9ogramme des deux variables",
xlab = xaxe, ylab = yaxe, zlab = "", cex.axis = 0.75,
ticktype = "detailed", border = border)
}
else {
persp(x, y, z, theta = -30, phi = 15, col = col, shade = 0.15,
main = "St\u00E9r\u00E9ogramme des deux variables", xlab = xaxe,
ylab = yaxe, zlab = "", cex.axis = 0.75, ticktype = "detailed",
border = border)
}
invisible(list(F=F,z=z,x=x,y=y))
}
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/plotcdf2.R
|
#' Confidence interval for the parameter of a Poisson distribution
#'
#' Builds a confidence interval for the parameter of a Poisson distribution.
#'
#'
#' @param x A data vector
#' @param conf.level Confidence level of the interval
#' @return \item{matrix}{Bounds of the requested confidence intervals.}
#' @author Frédéric Bertrand\cr \email{frederic.bertrand@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~fbertran/}\cr
#' Maumy-Bertrand\cr \email{myriam.maumy@@utt.fr}\cr
#' \url{http://www-irma.u-strasbg.fr/~mmaumy/}
#' @seealso \code{\link{binom.test}}, \code{\link{binom.ci}},
#' \code{\link{poi.ci}}
#' @references F. Bertrand, M. Maumy-Bertrand, Initiation à la Statistique avec
#' R, Dunod, 4ème édition, 2023.
#' @keywords univar
#' @examples
#'
#' poi.ci(rpois(20,10))
#'
#' @export poi.ci
poi.ci <- function (x, conf.level = 0.95)
{
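    # Exact (Garwood-type) interval for the Poisson mean: the chi-square
    # quantiles give an exact interval for the total count sum(x), which is
    # then divided by the number of observations nn to get an interval for
    # the mean.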
nn <- length(x)
LCI <- qchisq((1 - conf.level)/2, 2 * sum(x))/2/nn
UCI <- qchisq(1 - (1 - conf.level)/2, 2 * (sum(x) + 1))/2/nn
res <- cbind(mean(x), LCI, UCI)
ci.prefix <- paste(round(100 * conf.level, 1), "%", sep = "")
colnames(res) <- c("PointEst", paste(ci.prefix, "LCI"), paste(ci.prefix,
"UCI"))
res
}
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/R/poi.ci.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 1"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 1
#page 9
#q()
?read.table
#page 10
help(read.table)
#help(package="package")
example(plot)
help("read.table",help_type="html")
#page 11
help("read.table",help_type="text")
help.start()
options(help_type="html")
options(help_type="text")
2+8
#page 12
2+8
120:155
sqrt(4)
#page 13
#source(file="C://chemin//vers//nomdefichier//fichier.R",echo=TRUE)
#source(file=".../repertoire/fichier.R",echo=TRUE)
#source(file="fichier.R",echo=TRUE)
## Si "fichier.R" est dans le r\'epertoire de travail
# Exercice 1.1
#page 18
#install.packages("BioStatR")
help(package="BioStatR")
#install.packages("devtools")
#library(devtools)
#install_github("fbertran/BioStatR")
# Exercice 1.2
# 1)
10:25
#page 19
seq(from=10,to=25,by=1)
seq(10,25,1)
# 2)
seq(from=20,to=40,by=5)
seq(20,40,5)
# 3)
rep(x=28,times=10)
#page 20
rep(28,10)
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre1.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 10"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 10
require(BioStatR)
#page 403
foret<-rep(1:3,c(10,10,10))
hauteur<-c(23.4,24.4,24.6,24.9,25,26.2,26.1,24.8,25.5,25.8,18.9,21.1,21.1,
22.1,22.5,23.5,22.7,21.3,22.2,21.7,22.5,22.9,23.7,24,24,24.5,24.3,24.2,
23.4,23.9)
foret<-factor(foret)
arbre<-data.frame(foret,hauteur)
rm(foret)
rm(hauteur)
arbre
moyennes<-tapply(arbre$hauteur,arbre$foret,mean)
moyennes
#page 404
variances<-tapply(arbre$hauteur,arbre$foret,var)
variances
#page 405
moy.g<-mean(arbre$hauteur)
moy.g
mean(moyennes)
#page 406
plot(arbre$foret,arbre$hauteur)
points(1:3,moyennes,pch="@")
abline(h=moy.g)
pdf("ch11fig101.pdf")
plot(arbre$foret,arbre$hauteur)
points(1:3,moyennes,pch="@")
abline(h=moy.g)
dev.off()
#page 409
options(contrasts=c("contr.sum","contr.poly"))
modele1<-lm(hauteur~foret,data=arbre)
anova(modele1)
modele1_aov<-aov(hauteur~foret,data=arbre)
summary(modele1_aov)
#page 410
options(contrasts=c("contr.sum","contr.poly"))
#page 411
residus<-residuals(modele1)
shapiro.test(residus)
length(residus)
#En plus : les r\'esidus des deux mod\`eles sont \'egaux
all(residuals(modele1)==residuals(modele1_aov))
#page 413
bartlett.test(residus~foret,data=arbre)
coef(modele1)
#En plus : les coefficients des deux mod\`eles sont \'egaux
all(coef(modele1)==coef(modele1_aov))
#page 414
-sum(coef(modele1)[2:3])
dummy.coef(modele1)
#En plus : fonctionne aussi avec le mod\`ele aov et introduction de la
#fonction model.tables
dummy.coef(modele1_aov)
model.tables(modele1_aov)
if(!("granova" %in% rownames(installed.packages()))){
install.packages("granova")}
library(granova)
granova.1w(arbre$hauteur,arbre$foret)
pdf("chap10fig102.pdf")
print(granova.1w(arbre$hauteur,arbre$foret))
dev.off()
#page 416
if(!("granovaGG" %in% rownames(installed.packages()))){
install.packages("granovaGG")}
library(granovaGG)
granovagg.1w(arbre$hauteur,arbre$foret)
pdf("chap10fig103.pdf")
print(granovagg.1w(arbre$hauteur,arbre$foret))
dev.off()
#page 419
modele2<-aov(hauteur~foret,data=arbre)
model.tables(modele2)
TukeyHSD(modele2)
plot(TukeyHSD(modele2))
pdf("chap10fig104.pdf")
plot(TukeyHSD(modele2))
dev.off()
#En plus : export des graphiques en niveaux de gris et aux formats .png ou .ps
png("chap10fig102.png")
granova.1w(arbre$hauteur,arbre$foret)
dev.off()
postscript("chap10fig102.ps")
granova.1w(arbre$hauteur,arbre$foret)
dev.off()
pdf("chap10fig102bw.pdf",colormodel="gray")
granova.1w(arbre$hauteur,arbre$foret)
dev.off()
postscript("chap10fig102bw.ps",colormodel="gray")
granova.1w(arbre$hauteur,arbre$foret)
dev.off()
png("chap10fig103.png")
granovagg.1w(arbre$hauteur,arbre$foret)
dev.off()
postscript("chap10fig103.ps")
granovagg.1w(arbre$hauteur,arbre$foret)
dev.off()
pdf("chap10fig103bw.pdf",colormodel="gray")
granovagg.1w(arbre$hauteur,arbre$foret)
dev.off()
postscript("chap10fig103bw.ps",colormodel="gray")
granovagg.1w(arbre$hauteur,arbre$foret)
dev.off()
#page 426
#Exercice 10.1
#1)
options(contrasts=c(unordered="contr.sum", ordered="contr.poly"))
#page 427
#2)
variete<-rep(1:6,c(5,5,5,5,5,5))
vitamine<-c(93.6,95.3,96,93.7,96.2,95.3,96.9,95.8,97.3,97.7,94.5,97,97.8,97,
98.3,98.8,98.2,97.8,97.2,97.9,94.6,97.8,98,95,98.9,93.2,94.4,93.8,95.6,94.8)
variete<-factor(variete)
exo1<-data.frame(variete,vitamine)
modele1<-aov(vitamine~variete,data=exo1)
residus1<-residuals(modele1)
shapiro.test(residus1)
length(residus1)
bartlett.test(residus1~variete,data=exo1)
#page 428
#3)
modele1
summary(modele1)
#page 429
#4)
granovagg.1w(vitamine,group=variete)
pdf("chap10fig105.pdf")
granovagg.1w(vitamine,group=variete)
dev.off()
#page 431
#6)
Tukey1 <- TukeyHSD(modele1, conf.level = 0.95)
Tukey1
#page 432
#4)
if(!("multcomp" %in% rownames(installed.packages()))){
install.packages("multcomp")}
library(multcomp)
wht = glht(modele1, linfct = mcp(variete = "Tukey"))
cld(wht)
plot(Tukey1)
pdf("chap10fig106.pdf")
plot(Tukey1)
dev.off()
#page 433
CI <- confint(wht)
fortify(CI)
ggplot(CI,aes(lhs,estimate,ymin=lwr,ymax=upr))+geom_pointrange()+
geom_hline(yintercept = 0)
pdf("chap10fig107.pdf")
print(ggplot(CI,aes(lhs,estimate,ymin=lwr,ymax=upr))+geom_pointrange()+
geom_hline(yintercept = 0))
dev.off()
ggplot(aes(lhs,estimate),data=fortify(summary(wht))) +
geom_linerange(aes(ymin=lwr,ymax=upr),data=CI) +
geom_text(aes(y=estimate+1,label=round(p,3)))+geom_hline(yintercept = 0) +
geom_point(aes(size=p),data=summary(wht)) +scale_size(trans="reverse")
pdf("chap10fig108.pdf")
ggplot(aes(lhs,estimate),data=fortify(summary(wht))) +
geom_linerange(aes(ymin=lwr,ymax=upr),data=CI) +
geom_text(aes(y=estimate+1,label=round(p,3)))+geom_hline(yintercept = 0) +
geom_point(aes(size=p),data=summary(wht)) +scale_size(trans="reverse")
dev.off()
#page 434
if(!("multcompView" %in% rownames(installed.packages()))){
install.packages("multcompView")}
library(multcompView)
if(!("plyr" %in% rownames(installed.packages()))){install.packages("plyr")}
library(plyr)
generate_label_df <- function(HSD,flev){
Tukey.levels <- HSD[[flev]][,4]
Tukey.labels <- multcompLetters(Tukey.levels)['Letters']
plot.labels <- names(Tukey.labels[['Letters']])
boxplot.df <- ddply(exo1, flev, function (x) max(fivenum(x$vitamine)) + 0.2)
plot.levels <- data.frame(plot.labels, labels = Tukey.labels[['Letters']],
stringsAsFactors = FALSE)
labels.df <- merge(plot.levels, boxplot.df, by.x = 'plot.labels', by.y = flev,
sort = FALSE)
return(labels.df)
}
#page 435
p_base <- ggplot(exo1,aes(x=variete,y=vitamine)) + geom_boxplot() +
geom_text(data = generate_label_df(Tukey1, 'variete'), aes(x = plot.labels,
y = V1, label = labels))
p_base
pdf("chap10fig109.pdf")
print(p_base)
dev.off()
#page 436
#Exercice 10.1
#2)
traitement<-rep(1:5,c(7,7,7,7,7))
taux<-c(4.5,2.5,6,4.5,3,5.5,3.5,7.5,3,2.5,4,2,4,5.5,8,6.5,6,3.5,5,
7,5,2,7.5,4,2.5,5,3.5,6.5,6.5,5.5,6,4.5,4,7,5.5)
traitement<-factor(traitement)
exo2<-data.frame(traitement,taux)
modele2<-aov(taux~traitement,data=exo2)
residus2<-residuals(modele2)
shapiro.test(residus2)
length(residus2)
bartlett.test(residus2~traitement,data=exo2)
#page 437
#3)
modele1<-lm(taux~traitement,data=exo2)
anova(modele1)
#4)
power.anova.test(5,7,19.043,76.42857)
#page 438
power.anova.test(groups=5,between.var=19.043,within.var=76.42857,power=.80)
granovagg.1w(taux,group=traitement)
pdf("chap10fig1010.pdf",colormodel="gray")
granovagg.1w(taux,group=traitement)
dev.off()
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre10.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 2"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 2
#page 22
data(package="datasets")
?iris
#page 23
help(iris)
iris
#page 24
n<-28
N<-20
#page 25
m=1973
m
n
N+n
#page 26
rm(m)
rm(n,N)
rm(list = ls())
#page 27
class(iris)
mode(iris)
names(iris)
length(iris)
dim(iris)
#page 29
serie1<-c(1.2,36,5.33,-26.5)
serie1
mode(serie1)
class(serie1)
c(1.2,36,5.33,-26.5)
(serie1<-c(1.2,36,5.33,-26.5))
#page 30
serie2<-c("bleu","vert","marron")
serie2
mode(serie2)
#serie2<-c(bleu,vert,marron)
serie3<-c(T,T,F,F,T)
serie3
#page 31
serie3<-c(TRUE,TRUE,FALSE,FALSE,TRUE)
serie3
mode(serie3)
serie1[3]
serie1[3:4]
#page 32
head(serie1,n=2)
tail(serie1,n=2)
v<-c(2.3,3.5,6,14,12)
w<-c(3.2,5,0.7,1,3.5)
#page 33
x<-c(v,w)
x
y<-c(w,v)
y
v[c(2,5)]
v[-c(2,3)]
#page 34
v[v>4]
w[v>4]
(v+w)/2
20+5*v
z<-c(2.8,3,19.73)
z
#page 35
v+z
length(v)
length(z)
s<-1:10
s
#page 36
s[3]<-35
s
s[s==1]<-25
s
s[s>=5]<-20
s
donnees<-c(1,2,3)
donnees
#page 37
rep(x=donnees,times=2)
rep(x=donnees,2)
rep(1,50)
rep("chien",4)
#page 38
notes.Guillaume<-c(Anglais=12,Informatique=19.5,Biologie=14)
notes.Guillaume
matiere<-c("Anglais","Informatique","Biologie")
matiere
note<-c(12,19.5,14)
note
names(note)<-matiere
note
names(note)<-NULL
note
#page 39
sort(note)
rev(sort(note))
rev(note)
serie4<-c(1.2,36,NA,-26.5)
serie4
#page 40
mode(serie4)
is.na
is.na(serie4)
matrice1<-matrix(1:12,ncol=3)
matrice1
#page 41
matrice2<-matrix(1:12,ncol=3,byrow=TRUE)
matrice2
class(matrice2)
length(matrice2)
#page 42
dim(matrice2)
matrice3<-matrix(1:12,nrow=4,ncol=4)
matrice3
matrice3[3,3]
#page 43
matrice3[3,]
matrice3[,3]
matrice3[,3,drop=FALSE]
#page 44
(matrice4<-matrice3[,c(2,4)])
(matrice5<-matrice3[,-1])
nrow(matrice5)
#page 45
ncol(matrice5)
dim(matrice5)
rbind(matrice5,c(13:15))
cbind(matrice5,c(13:16))
#page 46
matrice6<-matrix(1:6,ncol=3)
matrice6
matrice7<-matrix(1:12,ncol=4)
matrice7
matrice8<-matrice6 %*% matrice7
matrice8
#page 47
try(matrice6 * matrice7)
matrice9<-matrix(7:12,ncol=3)
matrice9
matrice10<-matrice6 * matrice9
matrice10
matrice11<-matrice9 * matrice6
#page 48
matrice11<-matrice9 * matrice6
matrice11
try(matrice12<-matrice7 %*% matrice6)
#page 49
mode
#page 50
args(matrix)
#page 51
aov(Sepal.Length~Species,data=iris)
#jeu1<-scan()
#1.2
#36
#5.33
#-26.5
#
#page 52
#jeu1
#matrix(scan(),nrow=2,byrow=TRUE)
#1 3 4
#5 2 1
mat<-c(19.6,17.6,18.2,16.0)
phy<-c(19.1,17.8,18.7,16.1)
#page 53
res<-data.frame(mat,phy)
res
res2<-data.frame(mat,phy,row.names=c("Guillaume","Val\'erie","Thomas","Julie"))
res2
#page 54
getwd()
#setwd("C:\\Data")
#setwd("C:/Data")
#page 55
Chemin<-"/Users/fbertran/Documents/GitHub/R3ed_complements/"
Chemin
pH<-c(1.2,3.5,11.0,7.1,8.2)
#page 56
pH
setwd(Chemin)
save(pH,file="FichierpH.RData")
#page 55
rm(pH)
try(pH)
load("FichierpH.RData")
pH
#page 57
read.table(paste(Chemin,"table1.txt",sep=""))
read.table("table1.txt")
#read.table(file.choose())
#page 58
read.table("https://fbertran.github.io/homepage/BioStatR/table1.txt")
table1<-read.table("table1.txt")
table1
table1$V1
#page 59
table1[1,1]
table1[c(1),c(1)]
table1[1:2,1]
table1[1:2,1:2]
masse<-table1$V1
taille<-table1$V2
masse
#page 60
taille
read.table("table2.txt",header=TRUE)
read.table("table3.txt",dec=",")
read.table("table4.txt",sep=";")
#page 61
#write.table(table1,file=file.choose())
read.csv("table6.csv")
read.csv2("table5.csv")
#write.csv(table1,file=file.choose())
#write.csv2(table1,file=file.choose())
#page 63
if(!("xlsx" %in% rownames(installed.packages()))){install.packages("xlsx")}
library(xlsx)
(data<-read.xlsx("table7.xls",1))
args(read.xlsx)
#page 65
data$BMI<-data$Masse/(data$Taille/100)^2
write.xlsx(x=data,file="table10.xlsx",sheetName="FeuilleTest",row.names=FALSE)
write.xlsx(x=data,file="table10.xlsx",sheetName="AutreFeuilleTest",row.names=FALSE,append=TRUE)
#page 66
args(write.xlsx)
wb<-loadWorkbook("table10.xlsx")
feuilles <- getSheets(wb)
feuille <- feuilles[[1]]
#page 67
feuille <- createSheet(wb, sheetName="ajout1")
addDataFrame(x=data,sheet=feuille,row.names = FALSE, startRow = 1, startColumn = 5)
feuille2 <- createSheet(wb, sheetName="graphique")
png(filename = "matplotdata.png", width=6, height=6, units= "in", pointsize=12, res=120)
plot(data)
dev.off()
addPicture("matplotdata.png", feuille2, scale=1, startRow =2, startColumn=2)
png(filename = "matplotdata2.png", width=6, height=8, units= "in", pointsize=12, res=300)
plot(data)
dev.off()
addPicture("matplotdata2.png", feuille2, scale=.4, startRow =62, startColumn=1)
addPicture("matplotdata2.png", feuille2, scale=1, startRow =62, startColumn=14)
#page 68
saveWorkbook(wb,"table8bis.xlsx")
#if(!("RODBC" %in% rownames(installed.packages()))){install.packages("RODBC")}
#library(RODBC)
#connexion<-odbcConnectExcel()
# sqlTables(connexion)
#data<-sqlFetch(connexion,"Feuil1")
#close(connexion)
#data
#page 69
#connexion<-odbcConnectExcel(,readOnly=FALSE)
#data<-sqlFetch(connexion,"Feuil1")
#data$BMI<-data$Masse/(data$Taille/100)^2
#sqlSave(connexion,data,rownames=FALSE)
#close(connexion)
#connexion<-odbcConnectExcel(,readOnly=FALSE)
#data<-sqlFetch(connexion,"Feuil2")
#data$BMI<-data$Masse/(data$Taille/100)^2
#sqlUpdate(connexion,data,"Feuil2",index="F1")
#close(connexion)
#page 70
if(!("gdata" %in% rownames(installed.packages()))){install.packages("gdata")}
library(gdata)
read.xls("table7.xls")
#Pas de donn\'ees dans la feuille 2 donc erreur lors de la lecture
#read.xls("table7.xls",sheet=2)
#page 71
#read.xls("https://fbertran.github.io/homepage/BioStatR/table7.xls",sheet=1)
if(!("XLConnect" %in% rownames(installed.packages()))){install.packages("XLConnect")}
#vignette("XLConnect")
#vignette("XLConnectImpatient")
#page 77
u<-1:10
v<-1:8
outer(u,v,"*")
x<-c(NA,FALSE,TRUE)
names(x)<-as.character(x)
!x
outer(x,x,"&")
#page 78
outer(x,x,"|")
outer(x,x,"xor")
#page 79
#Exercice 2.1
v<-101:112
v
#page 80
v<-seq(101,112)
v
w<-rep(c(4,6,3),4)
w
length(w)
x<-c(rep(4,8),rep(6,7),rep(3,5))
x
length(x)
x<-rep(c(4,6,3),c(8,7,5))
x
#page 81
#Exercice 2.2
masse<-c(28,27.5,27,28,30.5,30,31,29.5,30,31,31,31.5,32,30,30.5)
masse
masse1<-c(40,39,41,37.5,43)
masse1
nouvelle.masse<-c(rep(masse1,2),masse[6:15])
nouvelle.masse
length(nouvelle.masse)
#page 82
(nouvelle.masse<-c(rep(masse1,2),tail(masse,n=10)))
nouvelle.masse
library(xlsx)
write.xlsx(nouvelle.masse,file="Masse.xlsx")
write.xlsx(data.frame(masse=nouvelle.masse),file="Masse.xlsx")
#massedf<-data.frame(nouvelle.masse)
#library(RODBC)
#connexion<-odbcConnectExcel("Resultat.xls",readOnly = FALSE)
#sqlSave(connexion,massedf)
#close(connexion)
#page 83
#Exercice 2.3
nom<-c("Guillaume","Val\'erie","Thomas","Julie","S\'ebastien","St\'ephanie","Gr\'egory","Ambre",
"Jean-S\'ebastien","Camille")
nom
age<-c(25,24,23,22,41,40,59,58,47,56)
names(age)<-nom
age
str(age)
masse
c("Guillaume"=25,"Val\'erie"=24,"Thomas"=23,"Julie"=22,"S\'ebastien"=41,
"St\'ephanie"=40,"Gr\'egory"=59,"Ambre"=58,"Jean-S\'ebastien"=47,"Camille"=56)
#page 84
age<-data.frame(age,row.names=nom)
age
masse<-c(66.5,50.5,67.5,52,83,65,79,64,81,53)
names(masse)<-nom
masse
#page 85
masse<-data.frame(masse,row.names=nom)
masse
taille<-c(1.86,1.62,1.72,1.67,1.98,1.77,1.83,1.68,1.92,1.71)
names(taille)<-nom
taille
taille<-data.frame(taille,row.names=nom)
taille
#page 86
masse.lourde<-masse[masse>80]
masse.lourde
masse<-data.frame(masse,row.names=nom)
masse.lourde<-masse[masse>80]
masse.lourde
str(masse.lourde)
#page 87
masse.lourde<-masse[masse>80,,drop=FALSE]
masse.lourde
masse.lourde<-masse[masse>80]
taille.masse.lourde<-taille[masse>=80]
taille.masse.lourde
taille.masse.lourde<-taille[masse>=80,,drop=FALSE]
taille.masse.lourde
#page 88
taille.vieux.masse.lourde<-taille[masse>=80 & age>=30]
taille.vieux.masse.lourde
taille.vieux.masse.lourde<-taille[masse>=80 & age>=30,,drop=FALSE]
taille.vieux.masse.lourde
ensemble<-cbind(age,masse,taille)
ensemble
#page 89
suite<-1:12
suite
suite>6
suite<6
!(suite>=6)
suite==6
#page 90
suite<=6 & suite>=6
suite<=8 && suite>=4
suite<=4 | suite>=8
suite<=4||suite>=8
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre2.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 3"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 3
#page 95
library(BioStatR)
Mesures
head(Mesures)
#page 96
head(Mesures,10)
tail(Mesures)
#page 97
str(Mesures)
class(Mesures$espece)
names(Mesures$espece)
names(Mesures)
#page 98
levels(Mesures$espece)
?factor
str(Mesures5)
Mesures5
#page 101
table_graines<-table(Mesures5$graines)
table_graines
effcum_graines<-cumsum(table_graines)
effcum_graines
#page 102
table(Mesures5$espece)
freq_table_graines<-table_graines/sum(table_graines)
options(digits=3)
freq_table_graines
freq_table_graines<-prop.table(table(Mesures5$graines))
freq_table_graines
#page 103
freqcum_table_graines<-cumsum(table_graines/sum(table_graines))
freqcum_table_graines
freqcum_table_graines<-cumsum(prop.table((table(Mesures5$graines))))
freqcum_table_graines
#page 104
?hist
#page 105
minmax<-c(min(Mesures$masse),max(Mesures$masse))
minmax
histo<-hist(Mesures$masse)
classes<-histo$breaks
classes
#page 106
effectifs<-histo$counts
effectifs
effectifs<-histo$counts
cumsum(effectifs)
frequences<-effectifs/sum(effectifs)
print(frequences,digits=3)
sum(frequences)
#page 107
print(cumsum(frequences),digits=3)
table(Mesures$espece)
plot(taille~masse,data=Mesures)
#ggplot est une biblioth\`eque graphique \`a conna^itre
if(!("ggplot2" %in%
rownames(installed.packages()))){install.packages("ggplot2")}
library(ggplot2)
#ggplot(Mesures, aes(x = masse)) + geom_histogram()
#Pas le m^eme calcul de la largeur des classes par d\'efaut. Dans ggplot2, la
#largeur des classes (binwidth) est \'egale \`a l'\'etendue divis\'ee par 30.
ggplot(Mesures,aes(x=masse,y=taille))+geom_point()
pdf("figure32Bggplot.pdf")
print(ggplot(Mesures, aes(x = masse,y=taille)) + geom_point())
dev.off()
#page 109
args(plot.default)
names(par())
#page 110
plot(taille~masse,pch=19,main="Taille vs. Masse",xlab="Masse",ylab="Taille",data=Mesures)
ggplot(Mesures, aes(x = masse,y=taille)) + geom_point(pch=19) + xlab("Masse") +
ylab("Taille") + ggtitle("Taille vs. Masse")
#Autre mani\`ere de sp\'ecifier le titre et le noms des axes
ggplot(Mesures, aes(x = masse,y=taille)) + geom_point(pch=19) + labs(title =
"Taille vs. Masse", x = "Masse", y = "Taille")
#page 111
pdf("figure33Bggplot.pdf")
print(ggplot(Mesures, aes(x = masse,y=taille)) + geom_point(pch=19) +
xlab("Masse") + ylab("Taille") + ggtitle("Taille vs. Masse"))
dev.off()
ggplot(Mesures, aes(x = masse,y=taille)) + geom_point(pch=19) + xlab("Masse") +
ylab("Taille") + ggtitle("Taille vs. Masse")+theme(plot.title=element_text(hjust = 0.5))
#Titre au centre
theme_update(plot.title = element_text(hjust = 0.5))
ggplot(Mesures, aes(x = masse,y=taille)) + geom_point(pch=19) + labs(title =
"Taille vs. Masse", x = "Masse", y = "Taille")
#Titre \`a gauche
theme_update(plot.title = element_text(hjust = 0))
ggplot(Mesures, aes(x = masse,y=taille)) + geom_point(pch=19) + labs(title =
"Taille vs. Masse", x = "Masse", y = "Taille")
#page 112
#Titre \`a droite
theme_update(plot.title = element_text(hjust = 1))
ggplot(Mesures, aes(x = masse,y=taille)) + geom_point(pch=19) + labs(title =
"Taille vs. Masse", x = "Masse", y = "Taille")
pdf("figure33Cggplot.pdf")
theme_update(plot.title = element_text(hjust = 0.5))
print(ggplot(Mesures, aes(x = masse,y=taille)) + geom_point(pch=19) +
xlab("Masse") + ylab("Taille") + ggtitle("Taille vs. Masse"))
dev.off()
#page 113
pairs(Mesures5)
pdf("figure34.pdf")
pairs(Mesures5)
dev.off()
pairs(Mesures5,diag.panel=panel.hist)
pdf("figure35A.pdf")
pairs(Mesures5,diag.panel=panel.hist)
dev.off()
#page 114
if(!("GGally" %in% rownames(installed.packages()))){install.packages("GGally")}
library(GGally)
#Noir et blanc
ggpairs(Mesures5)
pdf("figure35Bggplot.pdf")
print(ggpairs(Mesures5))
dev.off()
#Si besoin, cr\'eer des abr\'eviations pour les noms des variables
Mesures5abbr <- Mesures5
Mesures5abbr$espece <- abbreviate(Mesures5$espece)
ggpairs(Mesures5abbr, axisLabels='show')
pdf("figure35abbrggplot.pdf")
print(ggpairs(Mesures5abbr, axisLabels='show'))
dev.off()
#Couleur et groupes
ggpairs(Mesures5abbr, ggplot2::aes(colour=espece, alpha=0.4), axisLabels='show')
pdf("figure35couleurggplot.pdf")
print(ggpairs(Mesures5abbr, ggplot2::aes(colour=espece, alpha=0.4),
axisLabels='show'))
dev.off()
#En plus
#Noir et blanc
Mesuresabbr <- Mesures
Mesuresabbr$espece <- abbreviate(Mesures$espece)
ggpairs(Mesuresabbr, diag=list(continuous="bar"), axisLabels='show')
ggpairs(Mesures5abbr, diag=list(continuous="bar"), axisLabels='show')
pdf("figure35Mesuresggplot.pdf")
print(ggpairs(Mesuresabbr, diag=list(continuous="bar"), axisLabels='show'))
dev.off()
pdf("figure35Mesures5ggplot.pdf")
print(ggpairs(Mesures5abbr, diag=list(continuous="bar"), axisLabels='show'))
dev.off()
#Couleur
ggpairs(Mesuresabbr, ggplot2::aes(colour=espece, alpha=0.4),
diag=list(continuous="bar"), axisLabels='show')
pdf("figure35MesuresCouleurggplot.pdf")
print(ggpairs(Mesuresabbr, ggplot2::aes(colour=espece, alpha=0.4),
diag=list(continuous="bar"), axisLabels='show'))
dev.off()
ggpairs(Mesures5abbr, ggplot2::aes(colour=espece, alpha=0.4),
diag=list(continuous="bar"), axisLabels='show')
pdf("figure35Mesures5Couleurggplot.pdf")
print(ggpairs(Mesures5abbr, ggplot2::aes(colour=espece, alpha=0.4),
diag=list(continuous="bar"), axisLabels='show'))
dev.off()
#page 116
plot(table(Mesures5$graines),type="h",lwd=4,col="red",xlab="Nombre de graines",ylab="Effectif")
pdf("figure36Aggplot.pdf")
plot(table(Mesures5$graines),type="h",lwd=4,col="red",xlab="Nombre de graines",ylab="Effectif")
dev.off()
#page 117
table(Mesures5$graines)
#page 118
ggplot(Mesures5, aes(x = graines)) + geom_bar(fill=I("red")) +
xlab("Nombre de graines") + ylab("Effectif")
ggplot(Mesures5, aes(x = graines)) + geom_histogram(binwidth=.1,fill=I("red")) +
xlab("Nombre de graines") + ylab("Effectif")
pdf("figure36Bggplot.pdf")
ggplot(Mesures5, aes(x = graines)) + geom_histogram(binwidth=.1,fill=I("red")) +
xlab("Nombre de graines") + ylab("Effectif")
dev.off()
#page 119
ggplot(Mesures5, aes(x = graines)) + geom_histogram(binwidth=.1,fill=I("red")) +
xlab("Nombre de graines") + ylab("Effectif") + facet_grid(.~espece)
ggplot(Mesures5, aes(x = graines)) + geom_histogram(binwidth=.1,fill=I("red")) +
xlab("Nombre de graines") + ylab("Effectif") + facet_grid(espece~.)
ggplot(Mesures5, aes(x = graines)) + geom_histogram(binwidth=.1,fill=I("red")) +
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)
pdf("figure36Cggplot.pdf")
ggplot(Mesures5, aes(x = graines)) + geom_histogram(binwidth=.1,fill=I("red")) +
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)
dev.off()
tapply(Mesures5$graines,Mesures5$espece,table)
#En plus avec ggplot
data.graines_espece<-as.data.frame(table(Mesures5$graines,Mesures5$espece))
colnames(data.graines_espece)<-c("nbr.graines","espece","effectif")
ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines))+geom_bar(stat=
"identity")+ facet_grid(espece~.)
ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines))+geom_bar(stat=
"identity")+ facet_grid(~espece)
ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines))+geom_bar(stat=
"identity")+ facet_wrap(~espece)
ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines,fill=espece))+geom_bar(
stat="identity")+ facet_wrap(~espece)
ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines,fill=espece))+geom_bar(
stat="identity")+ facet_wrap(~espece) + scale_fill_grey() + theme_bw()
ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))+
geom_bar(stat="identity")+ facet_wrap(~espece)
ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))+
geom_bar(stat="identity")+ facet_wrap(~espece) + scale_fill_grey() + theme_bw()
pdf("figure36Dggplot.pdf")
print(ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines))+geom_bar(stat=
"identity")+ facet_grid(espece~.))
dev.off()
pdf("figure36Eggplot.pdf")
print(ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines))+geom_bar(stat=
"identity")+ facet_grid(~espece))
dev.off()
pdf("figure36Fggplot.pdf")
print(ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines))+geom_bar(stat=
"identity")+ facet_wrap(~espece))
dev.off()
pdf("figure36Gggplot.pdf")
print(ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines,fill=espece))+
geom_bar(stat="identity")+ facet_wrap(~espece))
dev.off()
pdf("figure36Hbwggplot.pdf")
print(ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines,fill=espece))+
geom_bar(stat="identity")+ facet_wrap(~espece) + scale_fill_grey() + theme_bw())
dev.off()
pdf("figure36Iggplot.pdf")
print(ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_wrap(~espece))
dev.off()
pdf("figure36Jbwggplot.pdf")
print(ggplot(data.graines_espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_wrap(~espece) + scale_fill_grey() +
theme_bw())
dev.off()
#page 120
tapply(Mesures5$graines,Mesures5$espece,table)
if(!("lattice" %in%
rownames(installed.packages()))){install.packages("lattice")}
library("lattice")
data.graines_espece<-as.data.frame(table(Mesures5$graines,Mesures5$espece))
colnames(data.graines_espece)<-c("nbr.graines","espece","effectif")
barchart(effectif~nbr.graines|espece,data=data.graines_espece,layout=c(1,4))
#page 121
as.data.frame(table(Mesures5$graines,Mesures5$espece))
(table.graines.espece <-
table(Mesures5$graines,Mesures5$espece,dnn=c("nbr.graines","espece")))
print(table.graines.espece,zero.print=".")
(data.graines.espece <-
as.data.frame(table.graines.espece,responseName="effectif"))
barchart(effectif~nbr.graines|espece,data= data.graines.espece)
pdf("figure38lattice.pdf")
barchart(effectif~nbr.graines|espece,data= data.graines.espece)
dev.off()
#Additionally, with ggplot2
ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))+
geom_bar(stat="identity")+ facet_wrap(~espece)
pdf("figure38ggplot.pdf")
print(ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_wrap(~espece))
dev.off()
#page 122
(table.graines.espece <-
table(factor(Mesures5$graines),Mesures5$espece,dnn=c("nbr.graines","espece"),
exclude=c("bignone","laurier rose")))
#Additionally, to drop the <NA> level
(table.graines.espece <-
table(factor(Mesures5$graines),Mesures5$espece,dnn=c("nbr.graines","espece"),
exclude=c("bignone","laurier rose"), useNA="no"))
#page 123
(data.graines.espece<-as.data.frame(table.graines.espece,responseName="effectif"
))
pdf("figure39lattice.pdf")
barchart(effectif~nbr.graines|espece,data=data.graines.espece)
dev.off()
#Additionally, with ggplot
ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))+
geom_bar(stat="identity")+ facet_grid(~espece)
pdf("figure39ggplot.pdf")
print(ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_grid(~espece))
dev.off()
print(ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_grid(~espece) + scale_fill_grey() +
theme_bw())
pdf("figure39bwggplot.pdf")
print(ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_grid(~espece) + scale_fill_grey() +
theme_bw())
dev.off()
barchart(effectif~nbr.graines|espece,data=data.graines.espece,layout=c(1,2))
pdf("figure310lattice.pdf")
barchart(effectif~nbr.graines|espece,data=data.graines.espece,layout=c(1,2))
dev.off()
#Additionally
ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))+
geom_bar(stat="identity")+ facet_grid(espece~.)
pdf("figure310ggplot.pdf")
print(ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_grid(espece~.))
dev.off()
print(ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_grid(espece~.) + scale_fill_grey() +
theme_bw())
pdf("figure310bwggplot.pdf")
print(ggplot(data.graines.espece,aes(y=effectif,x=nbr.graines,fill=nbr.graines))
+geom_bar(stat="identity")+ facet_grid(espece~.) + scale_fill_grey() +
theme_bw())
dev.off()
#page 125
xyplot(effectif~nbr.graines|espece,data=data.graines.espece,type="h",lwd=4)
pdf("figure311lattice.pdf")
xyplot(effectif~nbr.graines|espece,data=data.graines.espece,type="h",lwd=4)
dev.off()
#Additionally, with ggplot
ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif,group=espece),size=1.2,color=I("blue"))+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)
pdf("figure311ggplot.pdf")
print(ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif,group=espece),size=1.2,color=I("blue"))+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece))
dev.off()
xyplot(effectif~nbr.graines|espece,data=data.graines.espece,type="h",layout=c(1,2),lwd=4)
pdf("figure312lattice.pdf")
xyplot(effectif~nbr.graines|espece,data=data.graines.espece,type="h",layout=c(1,2),lwd=4)
dev.off()
ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif,group=espece),size=1.2,color=I("blue"))+
xlab("Nombre de graines") + ylab("Effectif") + facet_grid(espece~.)
pdf("figure312ggplot.pdf")
print(ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif,group=espece),size=1.2,color=I("blue"))+
xlab("Nombre de graines") + ylab("Effectif") + facet_grid(espece~.))
dev.off()
#page 126
barplot(table.graines.espece,beside=TRUE,legend=rownames(table.graines.espece))
pdf("figure313.pdf")
barplot(table.graines.espece,beside=TRUE,legend=rownames(table.graines.espece))
dev.off()
#Additionally, with ggplot
ggplot(data.graines.espece, aes(x = nbr.graines, y= effectif, fill =
nbr.graines)) + geom_bar(stat="identity") + xlab("Nombre de graines") +
ylab("Effectif") + facet_wrap(~espece) + scale_fill_grey() + theme_bw()
pdf("figure313ggplot.pdf")
print(ggplot(data.graines.espece, aes(x = nbr.graines, y= effectif, fill =
nbr.graines)) + geom_bar(stat="identity") + xlab("Nombre de graines") +
ylab("Effectif") + facet_wrap(~espece) + scale_fill_grey() + theme_bw())
dev.off()
plot(table(Mesures5$graines),lwd=4,col="red",xlab="Nombre de graines",ylab="Effectif")
lines(table(Mesures5$graines),type="l",lwd=4)
pdf("figure314.pdf")
plot(table(Mesures5$graines),lwd=4,col="red",xlab="Nombre de graines",ylab="Effectif")
lines(table(Mesures5$graines),type="l",lwd=4)
dev.off()
#Additionally, with ggplot
df.table_graines<-as.data.frame(table(Mesures5$graines,dnn="nbr.graines"),
responseName="effectif")
ggplot(df.table_graines, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif),size=1.8,color=I("red"))+
xlab("Nombre de graines") + ylab("Effectif")
pdf("figure314ggplot.pdf")
ggplot(df.table_graines, aes(x = nbr.graines)) + geom_linerange(aes(ymin=0,ymax=effectif),
size=1.8,color=I("red"))+ xlab("Nombre de graines") + ylab("Effectif")
dev.off()
ggplot(df.table_graines, aes(x = nbr.graines)) + geom_linerange(aes(ymin=0, ymax=effectif),
size=1.2,color=I("red"))+ geom_line(aes(y=effectif,group=""),size=1.2,color=I("black"))+
xlab("Nombre de graines") + ylab("Effectif")
pdf("figure314aggplot.pdf")
print(ggplot(df.table_graines, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0, ymax=effectif), size=1.2,color=I("red"))+
geom_line(aes(y=effectif,group=""), size=1.2,color=I("black"))+
xlab("Nombre de graines") + ylab("Effectif"))
dev.off()
ggplot(df.table_graines, aes(x = nbr.graines))+
geom_line(aes(y=effectif,group=""), size=1.2,color=I("black")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""),
size=1.2,color=I("red"))+ xlab("Nombre de graines") + ylab("Effectif")
pdf("figure314bggplot.pdf")
print(ggplot(df.table_graines, aes(x = nbr.graines))+
geom_line(aes(y=effectif,group=""), size=1.2,color=I("black")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""),
size=1.2,color=I("red"))+ xlab("Nombre de graines") + ylab("Effectif"))
dev.off()
ggplot(df.table_graines, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=""),fill=I("red"),alpha=.5)+
geom_line(aes(y=effectif,group=""), size=1, color="red")+
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""), size=1,
color="blue")+ xlab("Nombre de graines") + ylab("Effectif") +theme_bw()
pdf("figure314cggplot.pdf")
print(ggplot(df.table_graines, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=""),fill=I("red"),alpha=.5)+
geom_line(aes(y=effectif,group=""), size=1, color="red")+
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""), size=1,
color="blue")+ xlab("Nombre de graines") + ylab("Effectif") +theme_bw())
dev.off()
ggplot(df.table_graines, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=""),fill=I("gray80"))+
geom_line(aes(y=effectif,group=""), size=1, color=I("gray40")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""), size=1)+
xlab("Nombre de graines") + ylab("Effectif") +theme_bw()
pdf("figure314dggplot.pdf")
print(ggplot(df.table_graines, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=""),fill=I("gray80"))+
geom_line(aes(y=effectif,group=""), size=1, color=I("gray40")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""), size=1)+
xlab("Nombre de graines") + ylab("Effectif") +theme_bw())
dev.off()
#Additionally, ggplot by group
ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif,group=espece), size=1.2,color=I("red"))+
geom_line(aes(y=effectif,group=espece), size=1.2,color=I("black"))+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)
pdf("figure314groupeAggplot.pdf")
print(ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif,group=espece), size=1.2,color=I("red"))+
geom_line(aes(y=effectif,group=espece), size=1.2,color=I("black"))+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece))
dev.off()
ggplot(data.graines.espece, aes(x = nbr.graines))+
geom_line(aes(y=effectif,group=espece), size=1.2,color=I("red")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece),
size=1.2,color=I("blue"))+ xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece)
pdf("figure314groupeAggplot.pdf")
print(ggplot(data.graines.espece, aes(x = nbr.graines))+
geom_line(aes(y=effectif,group=espece), size=1.2,color=I("red")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece),
size=1.2,color=I("blue"))+ xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece))
dev.off()
ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=espece),fill=I("red"),alpha=.5)+
geom_line(aes(y=effectif,group=espece), size=1, color="red")+
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece), size=1,
color="blue")+ xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece)+theme_bw()
pdf("figure314groupeAggplot.pdf")
print(ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=espece),fill=I("red"),alpha=.5)+
geom_line(aes(y=effectif,group=espece), size=1, color="red")+
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece), size=1,
color="blue")+ xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece)+theme_bw())
dev.off()
ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=espece),fill=I("gray80"))+
geom_line(aes(y=effectif,group=espece), size=1, color=I("gray40")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece), size=1)+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)+theme_bw()
pdf("figure314groupeAggplot.pdf")
print(ggplot(data.graines.espece, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=espece),fill=I("gray80"))+
geom_line(aes(y=effectif,group=espece), size=1, color=I("gray40")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece), size=1)+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)+theme_bw())
dev.off()
#page 128
plot(cumsum(table(Mesures5$graines)),type="h",lwd=4,col="red",xlab="Nombre de graines",
ylab="Effectif")
lines(cumsum(table(Mesures5$graines)),lwd=4)
pdf("figure315.pdf")
plot(cumsum(table(Mesures5$graines)),type="h",lwd=4,col="red",xlab="Nombre de graines",
ylab="Effectif")
lines(cumsum(table(Mesures5$graines)),lwd=4)
dev.off()
df.cumsum.table_graines<-df.table_graines; df.cumsum.table_graines[,2] <-
cumsum(df.table_graines[,2])
ggplot(df.cumsum.table_graines, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0, ymax=effectif), size=1.2,color=I("red"))+
geom_line(aes(y=effectif,group=""), size=1.2,color=I("black"))+
xlab("Nombre de graines") + ylab("Effectif")
pdf("figure315ggplot.pdf")
print(ggplot(df.cumsum.table_graines, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0, ymax=effectif), size=1.2,color=I("red"))+
geom_line(aes(y=effectif,group=""), size=1.2,color=I("black"))+
xlab("Nombre de graines") + ylab("Effectif"))
dev.off()
ggplot(df.cumsum.table_graines, aes(x = nbr.graines))+
geom_line(aes(y=effectif,group=""), size=1.2,color=I("black")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""),
size=1.2,color=I("red"))+ xlab("Nombre de graines") + ylab("Effectif")
pdf("figure315bggplot.pdf")
print(ggplot(df.cumsum.table_graines, aes(x = nbr.graines))+
geom_line(aes(y=effectif,group=""), size=1.2,color=I("black")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""),
size=1.2,color=I("red"))+ xlab("Nombre de graines") + ylab("Effectif"))
dev.off()
ggplot(df.cumsum.table_graines, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=""),fill=I("red"),alpha=.5)+
geom_line(aes(y=effectif,group=""), size=1, color="red")+
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""), size=1,
color="blue")+ xlab("Nombre de graines") + ylab("Effectif") +theme_bw()
pdf("figure315cggplot.pdf")
print(ggplot(df.cumsum.table_graines, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=""),fill=I("red"),alpha=.5)+
geom_line(aes(y=effectif,group=""), size=1, color="red")+
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""), size=1,
color="blue")+ xlab("Nombre de graines") + ylab("Effectif") +theme_bw())
dev.off()
ggplot(df.cumsum.table_graines, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=""),fill=I("gray80"))+
geom_line(aes(y=effectif,group=""), size=1, color=I("gray40")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""), size=1)+
xlab("Nombre de graines") + ylab("Effectif") +theme_bw()
pdf("figure315dggplot.pdf")
print(ggplot(df.cumsum.table_graines, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=""),fill=I("gray80"))+
geom_line(aes(y=effectif,group=""), size=1, color=I("gray40")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=""), size=1)+
xlab("Nombre de graines") + ylab("Effectif") +theme_bw())
dev.off()
#By group
data.cumsum.graines.espece<-data.graines.espece
data.cumsum.graines.espece[,3] <- unlist(tapply(data.graines.espece[,3],
data.graines.espece[,2],cumsum))
ggplot(data.cumsum.graines.espece, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif,group=espece), size=1.2,color=I("red"))+
geom_line(aes(y=effectif,group=espece), size=1.2,color=I("black"))+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)
pdf("figure315eggplot.pdf")
print(ggplot(data.cumsum.graines.espece, aes(x = nbr.graines)) +
geom_linerange(aes(ymin=0,ymax=effectif,group=espece), size=1.2,color=I("red"))+
geom_line(aes(y=effectif,group=espece), size=1.2,color=I("black"))+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece))
dev.off()
ggplot(data.cumsum.graines.espece, aes(x = nbr.graines))+
geom_line(aes(y=effectif,group=espece), size=1.2,color=I("red")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece),
size=1.2,color=I("blue"))+ xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece)
pdf("figure315fggplot.pdf")
print(ggplot(data.cumsum.graines.espece, aes(x = nbr.graines))+
geom_line(aes(y=effectif,group=espece), size=1.2,color=I("red")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece),
size=1.2,color=I("blue"))+ xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece))
dev.off()
ggplot(data.cumsum.graines.espece, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=espece),fill=I("red"),alpha=.5)+
geom_line(aes(y=effectif,group=espece), size=1, color="red")+
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece), size=1,
color="blue")+ xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece)+theme_bw()
pdf("figure315gggplot.pdf")
print(ggplot(data.cumsum.graines.espece, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=espece),fill=I("red"),alpha=.5)+
geom_line(aes(y=effectif,group=espece), size=1, color="red")+
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece), size=1,
color="blue")+ xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece)+theme_bw())
dev.off()
ggplot(data.cumsum.graines.espece, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=espece),fill=I("gray80"))+
geom_line(aes(y=effectif,group=espece), size=1, color=I("gray40")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece), size=1)+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)+theme_bw()
pdf("figure315hggplot.pdf")
print(ggplot(data.cumsum.graines.espece, aes(x = nbr.graines)) +
geom_ribbon(aes(ymin=0,ymax=effectif,group=espece),fill=I("gray80"))+
geom_line(aes(y=effectif,group=espece), size=1, color=I("gray40")) +
geom_pointrange(aes(ymin=0,ymax=effectif,y=effectif,group=espece), size=1)+
xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)+theme_bw())
dev.off()
pie.graines<-c(0.1000,0.3727,0.2455,0.1455,0.0909,0.0182,0.0273)
#page 129
names(pie.graines)<-c("1 graine","2 graines","3 graines","4 graines",
"5 graines","6 graines","7 graines")
pie(pie.graines,col=c("red","purple","cyan","blue","green","cornsilk","orange"))
pie(table(Mesures5$graines),labels=c("1 graine",paste(2:7,"graines")),
col=rainbow(7))
pdf("figure316.pdf")
pie(table(Mesures5$graines),labels=c("1 graine",paste(2:7,"graines")),
col=rainbow(7))
dev.off()
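#Note (a sketch, not from the book): the proportions hard-coded in pie.graines
#above appear to be the relative frequencies of table(Mesures5$graines),
#rounded to four decimal places.
round(prop.table(table(Mesures5$graines)), 4)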
#ggplot pie is only a polar coord change from geom_bar
p=ggplot(data.graines.espece, aes(x="", y= effectif, fill = nbr.graines)) +
geom_bar(stat="identity",position="fill") + xlab("Nombre de graines") +
ylab("Effectif") + facet_wrap(~espece) + scale_fill_grey() + theme_bw()
p
q <- p+coord_polar(theta="y")
q
q + scale_fill_hue()
q + scale_fill_brewer()
pdf("figure316aggplot.pdf")
print(q)
dev.off()
pdf("figure316bggplot.pdf")
print(q + scale_fill_hue())
dev.off()
pdf("figure316cggplot.pdf")
print(q + scale_fill_brewer())
dev.off()
#page 130
hist(Mesures$masse)
histo<-hist(Mesures$masse,ylab="Effectif",xlab="Masse",
main="Histogramme des masses")
#Additionally, with ggplot
g=ggplot(Mesures,aes(x=masse))+geom_histogram()
g
pdf("figure317aggplot.pdf")
g
dev.off()
g1 = g +
geom_histogram(binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse)
) #R\`egle de Sturges
g1
pdf("figure317bggplot.pdf")
g1
dev.off()
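#Aside (sketch): Sturges' rule used for the binwidth above takes
#ceiling(log2(n) + 1) classes, so the binwidth is the range divided by that count.
n.masse <- length(Mesures$masse)
c(sturges = nclass.Sturges(Mesures$masse), manual = ceiling(log2(n.masse) + 1))
diff(range(Mesures$masse)) / nclass.Sturges(Mesures$masse)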
ggplot(Mesures,aes(x=masse))+geom_histogram(aes(fill=..count..))+
scale_fill_gradient("Count", low = "green", high = "red")
pdf("figure317cggplot.pdf")
print(ggplot(Mesures,aes(x=masse))+geom_histogram(aes(fill=..count..))+
scale_fill_gradient("Count", low = "green", high = "red"))
dev.off()
g+geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse))+scale_fill_gradient("Count",low = "green", high ="red")
pdf("figure317dggplot.pdf")
print(g+geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse))+scale_fill_gradient("Count", low = "green", high = "red"))
dev.off()
ggplot(Mesures,aes(x=masse))+geom_histogram(aes(fill=..count..))+
scale_fill_gradient("Count", low = "grey80", high = "black")
pdf("figure317eggplot.pdf")
print(ggplot(Mesures,aes(x=masse))+geom_histogram(aes(fill=..count..))+
scale_fill_gradient("Count", low = "grey80", high = "black"))
dev.off()
g+geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse))+scale_fill_gradient("Count", low = "grey80", high = "black")
pdf("figure317fggplot.pdf")
print(g+geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse))+scale_fill_gradient("Count", low = "grey80", high = "black"))
dev.off()
#page 131
histo<-hist(Mesures$masse)
histo
#page 133
library(lattice)
histogram(~masse|espece,data=Mesures)
pdf("figure318lattice.pdf")
histogram(~masse|espece,data=Mesures)
dev.off()
#Additionally
ggplot(Mesures, aes(x = masse)) +
geom_histogram(binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse)
) + xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece)
pdf("figure318ggplot.pdf")
print(ggplot(Mesures, aes(x = masse)) +
geom_histogram(binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse)
) + xlab("Nombre de graines") + ylab("Effectif") + facet_wrap(~espece))
dev.off()
ggplot(Mesures, aes(x = masse)) +
geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse)) + xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece) + scale_fill_gradient("Count", low = "green", high = "red")
pdf("figure318aggplot.pdf")
print(ggplot(Mesures, aes(x = masse)) +
geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse)) + xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece) + scale_fill_gradient("Count", low = "green", high = "red"))
dev.off()
g=ggplot(Mesures, aes(x = masse)) +
geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse)) + xlab("Nombre de graines") + ylab("Effectif") +
facet_wrap(~espece)
g
pdf("figure318bggplot.pdf")
print(g)
dev.off()
histo<-hist(Mesures$masse,ylab="Effectif",xlab="Masse",
main="Polygone des effectifs des masses")
lines(histo$mids,histo$counts,lwd=2)
points(histo$mids,histo$counts,cex=1.2,pch=19)
pdf("figure319.pdf")
histo<-hist(Mesures$masse,ylab="Effectif",xlab="Masse",
main="Polygone des effectifs des masses")
lines(histo$mids,histo$counts,lwd=2)
points(histo$mids,histo$counts,cex=1.2,pch=19)
dev.off()
#Additionally, with ggplot
g=ggplot(Mesures, aes(x = masse)) +
geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse),boundary=0) + xlab("Nombre de graines") + ylab("Effectif")
g
pdf("figure319ggplot.pdf")
print(g)
dev.off()
g1=g+geom_line(binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),
size=2,alpha=.60,color="blue",stat="bin",boundary=0)
g1
pdf("figure319aggplot.pdf")
g1
dev.off()
g+stat_bin(binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),
size=2,alpha=.60,color="blue",geom="line",boundary=0)
pdf("figure319bggplot.pdf")
print(g+stat_bin(binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse),size=2,alpha=.60,color="blue",geom="line",boundary=0))
dev.off()
g1+ scale_fill_gradient(low="white", high="black")
pdf("figure319cggplot.pdf")
print(g1+ scale_fill_gradient(low="white", high="black"))
dev.off()
if(!("scales" %in% rownames(installed.packages()))){install.packages("scales")}
library(scales)
g1+ scale_fill_gradient2(low=muted("red"), mid="white",
high=muted("blue"),midpoint=40)
pdf("figure319dggplot.pdf")
g1+ scale_fill_gradient2(low=muted("red"), mid="white",
high=muted("blue"),midpoint=40)
dev.off()
g1+ scale_fill_gradientn(colours = c("darkred", "orange", "yellow", "white"))
pdf("figure319eggplot.pdf")
g1+ scale_fill_gradientn(colours = c("darkred", "orange", "yellow", "white"))
dev.off()
#By group
g=ggplot(Mesures, aes(x = masse)) +
geom_histogram(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse),boundary=0) + xlab("Nombre de graines") +
ylab("Effectif") + facet_wrap(~espece)
g
pdf("figure319fggplot.pdf")
print(g)
dev.off()
g+geom_freqpoly(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse),size=2,alpha=.60,color="blue")+
scale_fill_gradientn(colours = c("darkred", "orange", "yellow", "white"))
pdf("figure319fggplot.pdf")
print(g+geom_freqpoly(aes(fill=..count..),binwidth=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse),size=2,alpha=.60,color="blue")+
scale_fill_gradientn(colours = c("darkred", "orange", "yellow", "white")))
dev.off()
#page 135
histo<-hist(Mesures$masse,plot=FALSE)
barplot<-barplot(cumsum(histo$counts),ylab="Effectif",xlab="Masse",
main="Polygone des effectifs cumul\'es des masses")
lines(barplot,cumsum(histo$counts),lwd=2)
points(barplot,cumsum(histo$counts),cex=1.2,pch=19)
pdf("figure320.pdf")
barplot<-barplot(cumsum(histo$counts),ylab="Effectif",xlab="Masse",
main="Polygone des effectifs cumul\'es des masses")
lines(barplot,cumsum(histo$counts),lwd=2)
points(barplot,cumsum(histo$counts),cex=1.2,pch=19)
dev.off()
#Counts and cumulative frequency polygon
library(qcc)
pareto.chart(table(Mesures5$graines))
pdf("figure320qcc.pdf")
pareto.chart(table(Mesures5$graines))
dev.off()
consmw=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse)
consmw
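#Note (assumption): the hard-coded binwidth 5.355556 that appears in the plots
#below looks like this value of consmw; reusing consmw would avoid the magic number.
round(consmw, 6)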
consmw.espece=cbind(espece=names(unlist(lapply(split(Mesures$masse,Mesures$
espece),function(xxx) return(diff(range(xxx))/nclass.Sturges(xxx))))),
consmw.espece=unlist(lapply(split(Mesures$masse,Mesures$espece),function(xxx)
return(diff(range(xxx))/nclass.Sturges(xxx)))))
consmw.espece
Mesures.binw<-merge(cbind(Mesures,consmw=diff(range(Mesures$masse))/
nclass.Sturges(Mesures$masse)),consmw.espece)
g=ggplot(Mesures.binw, aes(x = masse))
g +
geom_histogram(data=Mesures.binw,aes(y=5.355556*..density..,fill=..density..),
binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),boundary =
min(Mesures$masse)) + xlab("Masse") + ylab("Fr\'equence")
pdf("figure320ggplot.pdf")
print(g +
geom_histogram(data=Mesures.binw,aes(y=5.355556*..density..,fill=..density..),
binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),boundary =
min(Mesures$masse)) + xlab("Masse") + ylab("Fr\'equence"))
dev.off()
g +
geom_histogram(data=Mesures.binw,aes(y=5.355556*..count..,fill=..count..),
binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),boundary =
min(Mesures$masse)) + xlab("Masse") + ylab("D\'enombrement")
pdf("figure320aggplot.pdf")
print(g +
geom_histogram(data=Mesures.binw,aes(y=5.355556*..count..,fill=..count..),
binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),boundary =
min(Mesures$masse)) + xlab("Masse") + ylab("D\'enombrement"))
dev.off()
g + stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,direction="vh") +
stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,direction="vh",geom="linerange",ymin
=0,aes(ymax=..y..))
pdf("figure320bggplot.pdf")
print(g + stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,direction="vh") +
stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,direction="vh",geom="linerange",ymin
=0,aes(ymax=..y..)))
dev.off()
g +
geom_histogram(aes(y=5.355556*..density..,fill=..density..),binwidth=diff(range(
Mesures$masse))/nclass.Sturges(Mesures$masse),boundary = min(Mesures$masse)) +
xlab("Masse") + ylab("Fr\'equence") +
stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,direction="vh") +
stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,direction="vh",geom="linerange",ymin
=0,aes(ymax=..y..))
pdf("figure320cggplot.pdf")
print(g +
geom_histogram(aes(y=5.355556*..density..,fill=..density..),binwidth=diff(range(
Mesures$masse))/nclass.Sturges(Mesures$masse),boundary = min(Mesures$masse)) +
xlab("Masse") + ylab("Fr\'equence") +
stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,direction="vh") +
stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,direction="vh",geom="linerange",ymin
=0,aes(ymax=..y..)))
dev.off()
#since ggplot2 2.0, freqpoly is no longer an accepted geom
#g+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("rect"),fill="blue",aes(
#ymax=..y..,ymin=0,xmax=..x..,xmin=..x..-diff(range(BioStatR::Mesures$masse))/
#grDevices::nclass.Sturges(BioStatR::Mesures$masse)),alpha=.5,colour="blue")+
#stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("freqpoly"),fill="blue",aes(
#x=masse-5.355556/2,y=..y..))+geom_histogram(aes(y=5.355556*..density..,fill=..
#density..),binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),
#alpha=.35,boundary = min(Mesures5$masse))
#pdf("figure320dggplot.pdf")
#print(g+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("rect"),fill="blue",
#aes(ymax=..y..,ymin=0,xmax=..x..,xmin=..x..-diff(range(BioStatR::Mesures$masse)
#)/grDevices::nclass.Sturges(BioStatR::Mesures$masse)),alpha=.5,colour="blue")+
#stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("freqpoly"),fill="blue",aes(
#x=masse-5.355556/2,y=..y..))+geom_histogram(aes(y=5.355556*..density..,fill=..
#density..),binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),
#alpha=.35,boundary = min(Mesures5$masse)))
#dev.off()
#
g+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("bar"),fill="blue",
aes(x=masse-5.355556/2,width=5.355556),alpha=.5,colour="blue")+stat_ecdf(n=
nclass.Sturges(Mesures$masse)+1,geom=c("line"),fill="blue",aes(x=masse-5.355556/2,
y=..y..))+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("point"),fill="blue",
aes(x=masse-5.355556/2,y=..y..))+geom_histogram(aes(y=5.355556*..density..,
fill=..density..),binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),
alpha=.35,boundary = min(Mesures5$masse))
pdf("figure320eggplot.pdf")
print(g+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("bar"),fill="blue",
aes(x=masse-5.355556/2,width=5.355556),alpha=.5,colour="blue")+stat_ecdf(n=
nclass.Sturges(Mesures$masse)+1,geom=c("line"),fill="blue",aes(x=masse-5.355556/2,
y=..y..))+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("point"),fill="blue",
aes(x=masse-5.355556/2,y=..y..))+geom_histogram(aes(y=5.355556*..density..,
fill=..density..),binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),
alpha=.35,boundary = min(Mesures5$masse)))
dev.off()
g+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("bar"),fill="grey50",aes(x=masse-
5.355556/2,width=5.355556),alpha=.5,colour="black")+stat_ecdf(n=
nclass.Sturges(Mesures$masse)+1,geom=c("line"),fill="grey50",aes(x=masse-5.355556/2,
y=..y..))+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("point"),fill="black",
aes(x=masse-5.355556/2,y=..y..))+geom_histogram(aes(y=5.355556*..density..,
fill=..density..),binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),
alpha=.35,boundary = min(Mesures5$masse))+ scale_fill_gradient(low="white", high="black")
pdf("figure320fggplot.pdf")
print(g+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("bar"),fill="grey50", aes(x=
masse-5.355556/2,width=5.355556),alpha=.5,colour="black")+stat_ecdf(n=
nclass.Sturges(Mesures$masse)+1,geom=c("line"),fill="grey50",aes(x=masse-5.355556/2,
y=..y..))+stat_ecdf(n=nclass.Sturges(Mesures$masse)+1,geom=c("point"),
fill="black",aes(x=masse-5.355556/2,y=..y..))+geom_histogram(aes(y=5.355556*..density..,
fill=..density..),binwidth=diff(range(Mesures$masse))/nclass.Sturges(Mesures$masse),
alpha=.35,boundary = min(Mesures5$masse))+scale_fill_gradient(low="white", high="black"))
dev.off()
#By group
g+stat_ecdf(n=9+1,geom=c("bar"),fill="blue",aes(x=masse-5.355556/2,width=5.355556/2),
alpha=.5,colour="blue",binwidth=5.355556)+stat_ecdf(n=9+1,geom=c("line"),fill="blue",
aes(x=masse-5.355556/2,y=..y..))+stat_ecdf(n=9+1,geom=c("point"),fill="blue",
aes(x=masse-5.355556/2,y=..y..))+facet_wrap(~espece)+geom_histogram(aes(y=
5.355556*..density..,fill=..density..),binwidth=5.355556,alpha=.35)
pdf("figure320gggplot.pdf")
print(g+stat_ecdf(n=9+1,geom=c("bar"),fill="blue",aes(x=masse-5.355556/2,width=5.355556/2),
alpha=.5,colour="blue",binwidth=5.355556)+stat_ecdf(n=9+1,geom=c("line"),fill="blue",
aes(x=masse-5.355556/2,y=..y..))+stat_ecdf(n=9+1,geom=c("point"),fill="blue",
aes(x=masse-5.355556/2,y=..y..))+facet_wrap(~espece)+geom_histogram(aes(y=5.355556*..density..,
fill=..density..),binwidth=5.355556,alpha=.35))
dev.off()
g+stat_ecdf(n=9+1,geom=c("bar"),fill="blue",aes(x=masse-5.355556/2,width=5.355556/2),
alpha=.5,colour="blue")+stat_ecdf(n=9+1,geom=c("line"),fill="blue",aes(x=masse-5.355556/2,
y=..y..))+stat_ecdf(n=9+1,geom=c("point"),fill="blue",aes(x=masse-5.355556/2,y=..y..))+
facet_wrap(~espece,scales="free_x")+geom_histogram(aes(y=5.355556*..density..,
fill=..density..),binwidth=5.355556,alpha=.35)
pdf("figure320hggplot.pdf")
print(g+stat_ecdf(n=9+1,geom=c("bar"),fill="blue",aes(x=masse-5.355556/2,width=5.355556/2),
alpha=.5,colour="blue")+stat_ecdf(n=9+1,geom=c("line"),fill="blue",aes(x=masse-5.355556/2,
y=..y..))+stat_ecdf(n=9+1,geom=c("point"),fill="blue",aes(x=masse-5.355556/2,y=..y..))+
facet_wrap(~espece,scales="free_x")+geom_histogram(aes(y=5.355556*..density..,
fill=..density..),binwidth=5.355556,alpha=.35))
dev.off()
#page 137
boxplot(Mesures$masse)
title("Bo^ite \`a moustaches de la variable masse")
pdf("figure321.pdf")
boxplot(Mesures$masse)
title("Bo^ite \`a moustaches de la variable masse")
dev.off()
#Additionally, with ggplot
ggplot(Mesures, aes(x="",y=masse)) + geom_boxplot()
pdf("figure321ggplot.pdf")
print(ggplot(Mesures, aes(x="",y=masse)) + geom_boxplot())
dev.off()
#remove the x-axis label
ggplot(Mesures, aes(x="",y=masse)) + geom_boxplot() + xlab("")
pdf("figure321aggplot.pdf")
print(ggplot(Mesures, aes(x="",y=masse)) + geom_boxplot() + xlab(""))
dev.off()
ggplot(Mesures, aes(x="",y=masse)) + geom_boxplot() + coord_flip() + xlab("")
pdf("figure321bggplot.pdf")
print(ggplot(Mesures, aes(x="",y=masse)) + geom_boxplot() + coord_flip() +
xlab(""))
dev.off()
ggplot(Mesures, aes(x="", y=masse)) + geom_boxplot(width=.5) +
stat_summary(fun.y="mean", geom="point", shape=23, size=3, fill="white") +
xlab("")
pdf("figure321cggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) + geom_boxplot(width=.5) +
stat_summary(fun.y="mean", geom="point", shape=23, size=3, fill="white") +
xlab(""))
dev.off()
ggplot(Mesures, aes(x="", y=masse)) + geom_violin() + geom_boxplot(width=.1,
fill="black") + stat_summary(fun.y=mean, geom="point", fill="white", shape=21, size=2.5)
pdf("figure321dggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) + geom_violin() +
geom_boxplot(width=.1, fill="black") + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5))
dev.off()
#Without extreme values
ggplot(Mesures, aes(x="", y=masse)) + geom_violin() + geom_boxplot(width=.1,
fill="black", outlier.colour=NA) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)
pdf("figure321eggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) + geom_violin() +
geom_boxplot(width=.1, fill="black", outlier.colour=NA) +
stat_summary(fun.y=mean, geom="point", fill="white", shape=21, size=2.5))
dev.off()
#Gaussian kernel is the default and very (too) smooth for a finite population
ggplot(Mesures, aes(x="", y=masse)) + geom_violin(kernel="rectangular") +
geom_boxplot(width=.1, fill="black") + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)
pdf("figure321fggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) + geom_violin(kernel="rectangular") +
geom_boxplot(width=.1, fill="black") + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5))
dev.off()
#Without extreme values
ggplot(Mesures, aes(x="", y=masse)) + geom_violin(kernel="rectangular") +
geom_boxplot(width=.1, fill="black", outlier.colour=NA) +
stat_summary(fun.y=mean, geom="point", fill="white", shape=21, size=2.5)
pdf("figure321gggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) + geom_violin(kernel="rectangular") +
geom_boxplot(width=.1, fill="black", outlier.colour=NA) +
stat_summary(fun.y=mean, geom="point", fill="white", shape=21, size=2.5))
dev.off()
#page 138
boxplot.stats(Mesures$masse)
boxplot(Mesures$masse~Mesures$espece)
pdf("figure322.pdf")
boxplot(Mesures$masse~Mesures$espece)
dev.off()
#page 139
pdf("figure322color.pdf")
boxplot(Mesures$masse~Mesures$espece,col=rainbow(4))
dev.off()
#Additionally, lattice by group
bwplot(masse~espece,data=Mesures,pch="|")
bwplot(~masse|espece,data=Mesures,pch="|")
pdf("figure322lattice.pdf")
bwplot(masse~espece,data=Mesures,pch="|")
dev.off()
pdf("figure322latticegroupe.pdf")
bwplot(~masse|espece,data=Mesures,pch="|")
dev.off()
#Additionally, ggplot by group
ggplot(Mesures, aes(x=espece,y=masse)) + geom_boxplot()
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x=espece,y=masse)) + geom_boxplot())
dev.off()
ggplot(Mesures, aes(x=espece,y=masse)) + geom_boxplot() + coord_flip()
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x=espece,y=masse)) + geom_boxplot() + coord_flip())
dev.off()
ggplot(Mesures, aes(x=espece,y=masse,fill=espece)) + geom_boxplot()
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x=espece,y=masse,fill=espece)) + geom_boxplot())
dev.off()
ggplot(Mesures, aes(x=espece,y=masse,fill=espece)) + geom_boxplot() +
coord_flip() + scale_fill_brewer(palette="Set1")
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x=espece,y=masse,fill=espece)) + geom_boxplot() +
coord_flip() + scale_fill_brewer(palette="Set1"))
dev.off()
ggplot(Mesures, aes(x="", y=masse)) + geom_violin(kernel="rectangular") +
geom_boxplot(width=.1, fill="black", outlier.colour="black") +
stat_summary(fun.y=mean, geom="point", fill="white", shape=21,
size=2.5)+facet_wrap(~espece)
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) + geom_violin(kernel="rectangular") +
geom_boxplot(width=.1, fill="black", outlier.colour="black") +
stat_summary(fun.y=mean, geom="point", fill="white", shape=21,
size=2.5)+facet_wrap(~espece))
dev.off()
ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece),kernel="rectangular",alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece)
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece),kernel="rectangular",alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece))
dev.off()
ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece),kernel="rectangular",alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece)+scale_fill_brewer(palette="Set1")
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece),kernel="rectangular",alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece)+scale_fill_brewer(palette="Set1"))
dev.off()
ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece,kernel="rectangular"),alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece)+scale_fill_brewer(palette="Set2")
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece,kernel="rectangular"),alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece)+scale_fill_brewer(palette="Set2"))
dev.off()
ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece,kernel="rectangular"),alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece)+scale_fill_brewer(palette="Set3")
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece,kernel="rectangular"),alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece)+scale_fill_brewer(palette="Set3"))
dev.off()
#Without extreme values and with Gaussian kernel
ggplot(Mesures, aes(x="", y=masse)) + geom_violin() + geom_boxplot(width=.1,
fill="black", outlier.colour=NA) + stat_summary(fun.y=mean, geom="point",
fill="white", shape=21, size=2.5)+facet_wrap(~espece)
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) + geom_violin() +
geom_boxplot(width=.1, fill="black", outlier.colour=NA) +
stat_summary(fun.y=mean, geom="point", fill="white", shape=21,
size=2.5)+facet_wrap(~espece))
dev.off()
ggplot(Mesures, aes(x="", y=masse)) + geom_violin(aes(fill=espece),alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1,outlier.color=NA) +
stat_summary(fun.y=mean, geom="point", fill="white", shape=21,
size=2.5)+facet_wrap(~espece)+ scale_fill_brewer(palette="Set3")
pdf("figure322ggplot.pdf")
print(ggplot(Mesures, aes(x="", y=masse)) +
geom_violin(aes(fill=espece),alpha=.2) +
geom_boxplot(aes(fill=espece),width=.1,outlier.color=NA) +
stat_summary(fun.y=mean, geom="point", fill="white", shape=21,
size=2.5)+facet_wrap(~espece)+ scale_fill_brewer(palette="Set3"))
dev.off()
#page 140
stem(Mesures$masse)
#page 142
hist(Mesures$masse,ylab="Effectif",xlab="Masse",main="Histogramme des masses")
histo<-hist(Mesures$masse,plot=FALSE)
classes<-histo$breaks
classes
#page 143
effectifs<-histo$counts
effectifs
which(histo$density==max(histo$density))
median(Mesures$masse)
quantile(Mesures$masse,0.5,type=6)
#page 144
quantile(Mesures$masse,0.25,type=6)
quantile(Mesures$masse,0.75,type=6)
quantile(Mesures$masse,c(0.25,0.5,0.75),type=6)
#page 145
quantile(Mesures$masse,type=6)
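#Aside (sketch): the type= argument picks one of the nine quantile algorithms in R;
#type=6 is the convention used here, while type=7 is R's default and can differ slightly.
quantile(Mesures$masse, c(0.25, 0.5, 0.75), type = 7)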
#page 146
options(digits=7)
mean(Mesures$masse)
summary(Mesures$masse)
#page 147
max(Mesures$masse)-min(Mesures$masse)
diff(range(Mesures$masse))
IQR(Mesures$masse,type=6)
#page 149
var(Mesures$masse)
var(Mesures$masse)*length(Mesures$masse)/(length(Mesures$masse)-1)
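#Note (sketch): var() divides by n-1 (corrected sample variance); the uncorrected
#variance of the series, dividing by n, is obtained with the factor (n-1)/n,
#as in Problem 3.1 below.
var(Mesures$masse) * (length(Mesures$masse) - 1) / length(Mesures$masse)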
#page 150
sd(Mesures$masse)
#page 151
mad(Mesures$masse,constant=1)
mad(Mesures$masse,quantile(Mesures$masse,type=1,probs=.5),constant=1)
median(abs(Mesures$masse-quantile(Mesures$masse,type=1,probs=.5)))
mad(Mesures$masse,constant=1,low=TRUE)
#page 152
quantile(abs(Mesures$masse-median(Mesures$masse)),type=1,probs=.5)
mad(Mesures$masse,quantile(Mesures$masse,type=1,probs=.5),constant=1,low=TRUE)
quantile(abs(Mesures$masse-quantile(Mesures$masse,type=1,probs=.5)),type=1,probs
=.5)
#MADs computed with respect to another reference
mad(Mesures$masse,quantile(Mesures$masse,type=4,probs=.5),constant=1)
mad(Mesures$masse,quantile(Mesures$masse,type=6,probs=.5),constant=1)
mad(Mesures$masse,quantile(Mesures$masse,type=7,probs=.5),constant=1)
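#Aside (sketch): mad() defaults to constant = 1.4826, which rescales the MAD to be
#consistent with the standard deviation under a normal model; constant = 1 above
#gives the raw median absolute deviation.
c(raw = mad(Mesures$masse, constant = 1), scaled = mad(Mesures$masse))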
#Another example of computations on a small sample
x <- c(1,2,3,5,7,8)
sort(abs(x - median(x)))
c(mad(x, constant = 1),
mad(x, constant = 1, low = TRUE),
mad(x, constant = 1, high = TRUE))
quantile(x,type=1,probs=.5)
quantile(x,type=2,probs=.5)
mad(x,constant=1,low = TRUE)
sort(abs(x-quantile(x,type=1,probs=.5)))
quantile(abs(x-quantile(x,type=1,probs=.5)),type=1,probs=.5)
library(BioStatR)
cvar(Mesures$masse)
#page 154
# Skewness and kurtosis of a sample
if(!("agricolae" %in%
rownames(installed.packages()))){install.packages("agricolae")}
library(agricolae)
skewness(Mesures$masse)
kurtosis(Mesures$masse)
#To unload the agricolae package from R's memory before loading e1071
detach(package:agricolae)
if(!("e1071" %in% rownames(installed.packages()))){install.packages("e1071")}
library(e1071)
# Skewness and kurtosis of a statistical series (= population)
skewness(Mesures$masse,type=1)
kurtosis(Mesures$masse,type=1)
# Skewness and kurtosis of a sample (as in agricolae)
skewness(Mesures$masse,type=2)
kurtosis(Mesures$masse,type=2)
detach(package:e1071)
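#Sketch (not from the book): the type = 1 estimators above are the plain moment
#estimators; for instance the type = 1 skewness is m3/m2^(3/2), which can be
#checked by hand.
xm <- Mesures$masse
m2 <- mean((xm - mean(xm))^2); m3 <- mean((xm - mean(xm))^3)
m3 / m2^(3/2)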
#Exercise 3.1
#page 164
#1)
Variete<-c(rep(1,4),rep(2,4),rep(3,4))
Variete
Jutosite<-c(4,6,3,5,7,8,7,6,8,6,5,6)
Jutosite
Pommes<-data.frame(Variete,Jutosite)
Pommes
#page 165
#2)
str(Pommes)
class(Pommes$Variete)
#3)
Variete<-factor(Variete)
Pommes<-data.frame(Variete,Jutosite)
rm(Variete)
rm(Jutosite)
str(Pommes)
#page 166
class(Pommes$Variete)
Pommes
#4)
Variete<-factor(c(rep(1,4),rep(2,4),rep(3,4)))
Jutosite<-c(4,6,3,5,7,8,7,6,8,6,5,6)
Pommes<-data.frame(Variete,Jutosite)
str(Pommes)
#5)
Variete<-factor(c(rep(1,4),rep(2,4),rep(3,4)),labels=c("V1","V2","V3"))
Jutosite<-c(4,6,3,5,7,8,7,6,8,6,5,6)
Pommes<-data.frame(Variete,Jutosite)
Pommes
#page 167
str(Pommes)
#6)
Variete<-as.factor(c(rep(1,4),rep(2,4),rep(3,4)))
Jutosite<-c(4,6,3,5,7,8,7,6,8,6,5,6)
Pommes<-data.frame(Variete,Jutosite)
Pommes
str(Pommes)
#page 168
#7)
tapply(Jutosite,Variete,mean)
tapply(Jutosite,Variete,sd)
tapply(Jutosite,Variete,quantile,type=6)
tapply(Jutosite,Variete,summary)
#Exercise 3.2
#page 169
#1)
options(digits=3)
hist(Mesures$masse,breaks=5,plot=FALSE)
#page 170
#2)
hist(Mesures$masse,breaks=c(0,5,10,15,20,50),plot=FALSE)
#page 171
#3)
brk <- c(0,5,10,15,20,50)
table(cut(Mesures$masse, brk))
head(cut(Mesures$masse,brk))
data.frame(table(cut(Mesures$masse, brk)))
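#Aside (sketch): cut() builds right-closed intervals (a,b] by default, whereas
#cut2() used in 4) builds left-closed intervals; right = FALSE makes cut() comparable.
table(cut(Mesures$masse, brk, right = FALSE))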
#4)
if(!("Hmisc" %in% rownames(installed.packages()))){install.packages("Hmisc")}
library(Hmisc)
brk <- c(0,5,10,15,20,50)
res <- cut2(Mesures$masse, brk)
head(res)
#page 172
table(res)
table(cut2(Mesures$masse, g=10))
table(cut2(Mesures$masse, m=50))
#Exercise 3.3
#1)
library(BioStatR)
head(Mesures$masse)
#head(masse)
#
#page 173
#2)
attach(Mesures)
head(masse)
detach(Mesures)
#head(masse)
#
#Exercise 3.4
options(digits=7)
#1)
head(Europe)
#2)
str(Europe)
#page 174
#3)
class(Europe)
dim(Europe)
#4)
summary(Europe$Duree)
#page 175
histo<-hist(Europe$Duree,xlab="Dur\'ee en heures",ylab="Nombre de pays",
main="Histogramme de la variable Duree")
histo<-hist(Europe$Duree)
classe<-histo$breaks
classe
#page 176
which(histo$density==max(histo$density))
#5)
sd(Europe$Duree)
cvar(Europe$Duree)
diff(range(Europe$Duree))
#6)
boxplot(Europe$Duree,ylab="Dur\'ee en heures")
points(1,mean(Europe$Duree),pch=1)
#page 177
#7)
pdf(file="boxplot.pdf")
boxplot(Europe$Duree,ylab="Dur\'ee en heures")
points(1,mean(Europe$Duree),pch=1)
dev.off()
#page 178
postscript(file="boxplot.ps")
boxplot(Europe$Duree,ylab="Dur\'ee en heures")
points(1,mean(Europe$Duree),pch=1)
dev.off()
#Problem 3.1
#1)
Femmes<-c(105,110,112,112,118,119,120,120,125,126,127,128,130,132,133,
134,135,138,138,138,138,142,145,148,148,150,151,154,154,158)
Femmes
#page 179
Hommes<-c(141,144,146,148,149,150,150,151,153,153,153,154,155,156,156,
160,160,160,163,164,164,165,166,168,168,170,172,172,176,179)
Hommes
#2)
histo.fem<-hist(Femmes,breaks=c(104,114,124,134,144,154,164,174,184))
effectif.fem<-histo.fem$counts
effectif.fem
sum(effectif.fem)
histo.fem<-hist(Femmes,breaks=c(104,114,124,134,144,154,164,174,184))
frequence.fem<-effectif.fem/sum(effectif.fem)
print(frequence.fem,digits=3)
#page 180
histo.hom<-hist(Hommes,breaks=c(104,114,124,134,144,154,164,174,184))
effectif.hom<-histo.hom$counts
effectif.hom
histo.hom<-hist(Hommes,breaks=c(104,114,124,134,144,154,164,174,184))
frequence.hom<-effectif.hom/sum(effectif.hom)
print(frequence.hom,digits=3)
#page 181
#3)
histo<-hist(Femmes,breaks=c(104,114,124,134,144,154,164,174,184),
main="Histogramme de la variable taux d'h\'emoglobine pour les
Femmes",
xlab="Taux d'h\'emoglobine",ylab="Effectif")
#page 182
histo<-hist(Hommes,breaks=c(104,114,124,134,144,154,164,174,184),
main="Histogramme de la variable taux d'h\'emoglobine pour les
Hommes",
xlab="Taux d'h\'emoglobine",ylab="Effectif")
library(lattice)
Ensemble.df <- make.groups(Femmes,Hommes)
colnames(Ensemble.df) <- c("Taux","Sexe")
histogram(~Taux|Sexe,xlab="Taux d'h\'emoglobine",data=Ensemble.df,
breaks=c(104,114,124,134,144,154,164,174,184),layout=c(1,2))
#page 183
histogram(~Taux|Sexe,xlab="Taux d'h\'emoglobine",data=Ensemble.df,
breaks=c(104,114,124,134,144,154,164,174,184))
#page 184
#4)
Ensemble<-c(Femmes,Hommes)
Ensemble
mean(Ensemble)
mean(Femmes)
mean(Hommes)
#5)
histo.ens<-hist(Ensemble,breaks=c(104,114,124,134,144,154,164,174,184))
sum(histo.ens$counts*histo.ens$mids)/length(Ensemble)
#page 185
sum(histo.fem$counts*histo.fem$mids)/length(Femmes)
sum(histo.hom$counts*histo.hom$mids)/length(Hommes)
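#Sketch: the grouped-data means above are weighted means of the class midpoints,
#so weighted.mean() gives the same results.
weighted.mean(histo.ens$mids, histo.ens$counts)
weighted.mean(histo.fem$mids, histo.fem$counts)
weighted.mean(histo.hom$mids, histo.hom$counts)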
#6)
quantile(Ensemble,0.50,type=6)
quantile(Femmes,0.50,type=6)
quantile(Hommes,0.50,type=6)
#Same results with the median function
median(Ensemble)
median(Femmes)
median(Hommes)
#page 186
#7)
IQR(Ensemble,type=6)
IQR(Femmes,type=6)
IQR(Hommes,type=6)
#8)
var(Ensemble)*(length(Ensemble)-1)/length(Ensemble)
var(Femmes)*(length(Femmes)-1)/length(Femmes)
#page 187
var(Hommes)*(length(Hommes)-1)/length(Hommes)
sd(Ensemble)*sqrt((length(Ensemble)-1)/length(Ensemble))
sd(Femmes)*sqrt((length(Femmes)-1)/length(Femmes))
sd(Hommes)*sqrt((length(Hommes)-1)/length(Hommes))
#9)
# Skewness and kurtosis of a statistical series (= population)
if(!("e1071" %in% rownames(installed.packages()))){install.packages("e1071")}
library(e1071)
skewness(Femmes,type=1)
#page 188
kurtosis(Femmes,type=1)
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre3.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 4"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapter 4
#page 208
#Exercise 4.2
#2)
dnorm(0)
pnorm(2.58)
qnorm(0.975)
rnorm(50)
rnorm(20,mean=10,sd=2)
x=seq(-5,5,0.1) ;pdf=dnorm(x) ;plot(x,pdf,type="l",
main="Densit\'e d'une loi normale centr\'ee et r\'eduite")
library(ggplot2)
ggplot(data.frame(x=c(-5,5)),aes(x))+stat_function(fun=dnorm)+
ggtitle("Densit\'e d'une loi normale centr\'ee et r\'eduite")+ylab("Densit\'e")
runif(10)
rt(10,20)
#Exercise 4.1
#page 211
#1)
#couleurs <- gray(c(0,.25,.5,.75)) #In shades of grey, as in the book
couleurs<-c("black","red","green","blue") #In colour
fd<-function(x) {dbinom(x,5,0.5)}
plot(cbind(0:5,sapply(0:5,fd)),xlim=c(0,20),ylim=c(0,.40),type="p",ylab="",xlab="",
pch=15,cex=2,lwd=3,col=couleurs[1],cex.axis=2)
fd<-function(x) {dbinom(x,10,0.5)}
points(cbind(0:10,sapply(0:10,fd)),xlim=c(0,20),ylim=c(0,.40),type="p",ylab="",xlab="",
pch=16,cex=2,lwd=3,col=couleurs[2])
#The new=TRUE option is not needed for the points function to add the points
# to the already existing plot
fd<-function(x) {dbinom(x,20,0.5)}
points(cbind(0:20,sapply(0:20,fd)),xlim=c(0,20),ylim=c(0,.40),type="p",ylab="",xlab="",
pch=17,cex=2,lwd=3,col=couleurs[3])
#The new=TRUE option is not needed for the points function to add the points
# to the already existing plot
legtxt<-c(expression(paste(italic(n)," = 5",sep="")),expression(paste(italic(n)," = 10",
sep="")),expression(paste(italic(n)," = 20",sep="")))
legend("topright",legtxt,title=expression(paste(italic(p)," = 0,5",sep="")),pch=c(15,16,
17),col=c(couleurs[1],couleurs[2],couleurs[3]),cex=2,bg="white",inset=.075)
#page 212
#2)
dhypergeom<-function(x,N,n,p) (choose(N*p,x)*choose(N*(1-p),n-x)/choose(N,n))
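#Sketch (assuming the stats::dhyper parameterisation): this hand-written density
#matches dhyper(x, m = N*p, n = N*(1-p), k = n).
all.equal(dhypergeom(0:10, 14, 10, 0.5), dhyper(0:10, m = 7, n = 7, k = 10))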
fd<-function(x) {dhypergeom(x,14,10,0.5)}
plot(cbind(0:10,sapply(0:10,fd)),xlim=c(0,10),ylim=c(0,.5),type="p",ylab="",xlab="",
pch=15,cex=2,lwd=3,col=couleurs[4],cex.axis=2)
fd<-function(x) {dhypergeom(x,20,10,0.5)}
points(cbind(0:10,sapply(0:10,fd)),xlim=c(0,10),ylim=c(0,.5),type="p",ylab="",xlab="",
pch=16,cex=2,lwd=3,col=couleurs[3],new=T)
fd<-function(x) {dhypergeom(x,50,10,0.5)}
points(cbind(0:10,sapply(0:10,fd)),xlim=c(0,10),ylim=c(0,.5),type="p",ylab="",xlab="",
pch=17,cex=2,lwd=3,col=couleurs[2],new=T)
fd<-function(x) {dbinom(x,10,0.5)}
points(cbind(0:10,sapply(0:10,fd)),xlim=c(0,10),ylim=c(0,.5),type="p",ylab="",xlab="",
pch=18,cex=2,lwd=3,col=couleurs[1],new=T)
legtxt<-c(expression(paste(italic(N)," = 14",sep="")),expression(paste(italic(N)," = 20",
sep="")),expression(paste(italic(N)," = 50",sep="")),expression(paste(italic(B),
"(10;0,5)",sep="")))
legend("topright",legtxt,title=expression(paste(italic(n)," = 10 et ",italic(p)," = 0,5",
sep="")),pch=c(15,16,17,18),col=c(couleurs[4],couleurs[3],couleurs[2],couleurs[1]),
cex=1.6,bg="white",inset=.0)
#3)
fr<-function(x) {pchisq(x,1)}
curve(fr,from=-1,to=9,ylab="",xlab="",lty=1,lwd=3,col=couleurs[1],type="n",cex.axis=2)
curve(fr,from=-1,to=-0.000001,ylab="",xlab="",lty=5,lwd=3,add=TRUE,col=couleurs[1])
curve(fr,from=0.000001,to=9,ylab="",xlab="",lty=5,lwd=3,add=TRUE,col=couleurs[1])
fr<-function(x) {pchisq(x,3)}
curve(fr,from=-1,to=-0.000001,ylab="",xlab="",lty=1,lwd=3,col=couleurs[3],add=TRUE)
curve(fr,from=0.000001,to=9,ylab="",xlab="",lty=4,lwd=3,col=couleurs[3],add=TRUE)
fr<-function(x) {pchisq(x,2)}
curve(fr,from=-1,to=-0.000001,ylab="",xlab="",lty=2,lwd=3,add=TRUE,col=couleurs[2])
curve(fr,from=0.000001,to=9,ylab="",xlab="",lty=2,lwd=3,add=TRUE,col=couleurs[2])
fr<-function(x) {pchisq(x,6)}
curve(fr,from=-1,to=-0.000001,ylab="",xlab="",lty=4,lwd=3,add=TRUE,col=couleurs[1])
#the end of this instruction is on page 212
curve(fr,from=0.000001,to=9,ylab="",xlab="",lty=1,lwd=3,add=TRUE,col=couleurs[4])
#page 213
legtxt<-c(expression(paste(italic(p)," = 1",sep="")),expression(paste(italic(p)," = 2",
sep="")),expression(paste(italic(p)," = 3",sep="")),expression(paste(italic(p)," = 6",
sep="")))
legend("bottomright",legtxt,lty=c(5,2,4,1),lwd=3,col=c(couleurs[1],couleurs[2],
couleurs[3],couleurs[4]),cex=2,bg="white",inset=.0375)
#4)
fd<-function(x) {dnorm(x)}
curve(fd,from=-4,to=4,ylab="",xlab="",lty=5,lwd=3,add=FALSE,col=couleurs[1],cex.axis=2)
fd<-function(x) {dt(x,1)}
curve(fd,from=-4,to=4,ylab="",xlab="",lty=1,lwd=3,add=TRUE,col=couleurs[2])
fd<-function(x) {dt(x,2)}
curve(fd,from=-4,to=4,ylab="",xlab="",lty=2,lwd=3,add=TRUE,col=couleurs[3])
fd<-function(x) {dt(x,5)}
curve(fd,from=-4,to=4,ylab="",xlab="",lty=4,lwd=3,add=TRUE,col=couleurs[4])
legtxt<-c(expression(paste(italic(n)," = 1",sep="")),expression(paste(italic(n)," = 2",
sep="")),expression(paste(italic(n)," = 5",sep="")),expression(paste(italic(N),"(0;1)",
sep="")))
legend("topleft",legtxt,lty=c(1,2,4,5),lwd=3,col=c(couleurs[2],couleurs[3],couleurs[4],
couleurs[1]),cex=1.6,bg="white",inset=.0375)
#Exercise 4.2
dnorm(0)
#page 214
1/sqrt(2*pi)
pnorm(2.58)
qnorm(0.975)
rnorm(50)
rnorm(20,mean=10,sd=2)
#page 215
x=seq(-5,5,0.1) ;pdf=dnorm(x) ;plot(x,pdf,type="l",
main="Densit\'e de la loi normale centr\'ee et r\'eduite")
runif(10)
#page 216
rt(10,20)
#Exercise 4.3
#1)
dbinom(5,150,0.02)
#page 217
pbinom(3,150,0.02)
qbinom(0.99,150,0.02)
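#Sketch: qbinom returns the smallest k with P(X <= k) >= 0.99, which can be
#checked directly with pbinom.
k <- qbinom(0.99, 150, 0.02)
c(k = k, below = pbinom(k - 1, 150, 0.02), at = pbinom(k, 150, 0.02))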
#page 218
#Exercise 4.4
#1)
qbinom(0.95,230,0.85,lower.tail = FALSE)
qbinom(0.95,240,0.85,lower.tail = FALSE)
qbinom(0.95,246,0.85,lower.tail = FALSE)
plot(230:250,qbinom(0.95,230:250,0.85,lower.tail = FALSE))
abline(h=200)
abline(v=246)
#page 219
#2)
which.max(dbinom(0:330,330,.85))
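#Note (sketch): which.max() returns a 1-based position along 0:330, so the modal
#value itself is recovered by indexing back into the support.
(0:330)[which.max(dbinom(0:330, 330, .85))]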
plot(0:330,dbinom(0:330,330,.85),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(270:285,dbinom(270:285,330,.85),xlab="n",ylab="Probabilit\'e",lwd=2)
#Additionally: code for figure 424
old.par <- par(no.readonly = TRUE)
layout(t(1:2))
plot(0:330,dbinom(0:330,330,.85),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(270:285,dbinom(270:285,330,.85),xlab="n",ylab="Probabilit\'e",lwd=2)
abline(v=281)
layout(1)
par(old.par)
old.par <- par(no.readonly = TRUE)
pdf("figure424.pdf",h=6,w=9)
layout(t(1:2))
par(oma=rep(0,4));par(mar=c(4, 4, 2, 2) + 0.1)
plot(0:330,dbinom(0:330,330,.85),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(270:285,dbinom(270:285,330,.85),xlab="n",ylab="Probabilit\'e",lwd=2)
abline(v=281)
layout(1)
dev.off()
par(old.par)
#page 220
#Exercise 4.5
#1)
1-pnorm(80,92,8)
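#Sketch: the upper-tail probability can also be requested directly.
pnorm(80, 92, 8, lower.tail = FALSE)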
#page 221
#2)
(1-pnorm(80,92,8))*6000
#3)
which.max(dbinom(0:6000,6000,.9331928))
plot(0:6000,dbinom(0:6000,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(5500:5700,dbinom(5500:5700,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(5590:5610,dbinom(5590:5610,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
dbinom(5599,6000,.9331928)
#page 222
dbinom(5600,6000,.9331928)
#Additionally: code for figure 425
old.par <- par(no.readonly = TRUE)
layout(matrix(c(1,2,1,3),nrow=2))
par(oma=rep(0,4));par(mar=c(4, 4, 2, 2) + 0.1)
plot(0:6000,dbinom(0:6000,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(5500:5700,dbinom(5500:5700,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(5590:5610,dbinom(5590:5610,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
layout(1)
par(old.par)
old.par <- par(no.readonly = TRUE)
pdf("figure425.pdf",h=6,w=9)
layout(matrix(c(1,2,1,3),nrow=2))
par(oma=rep(0,4));par(mar=c(4, 4, 2, 2) + 0.1)
plot(0:6000,dbinom(0:6000,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(5500:5700,dbinom(5500:5700,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
plot(5590:5610,dbinom(5590:5610,6000,.9331928),xlab="n",ylab="Probabilit\'e",lwd=2)
layout(1)
dev.off()
par(old.par)
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre4.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 5"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 5
#page 225
library(BioStatR)
attach(Mesures5)
table(graines,espece)
table(graines,espece,useNA="ifany")
(table.cont<-table(factor(graines),espece,dnn=c("nbr.graines","espece"),
exclude=c("bignone","laurier rose")))
#page 226
(table.cont<-table(factor(graines),espece,dnn=c("nbr.graines","espece"),
exclude=c("bignone","laurier rose"),useNA="no"))
#Extra: a second way to do it
(table.cont<-table(factor(graines),espece,dnn=c("nbr.graines","espece"),
exclude=c("bignone","laurier rose",NA)))
library(ggplot2)
#Couleur
ggplot(Mesures,aes(x=masse,y=taille,color=espece))+geom_point(size=3,shape=19)+
ggtitle("Taille en fonction de la masse par esp\`ece")
#Noir et blanc
ggplot(Mesures,aes(x=masse,y=taille,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()
#page 228
#En plus : code figure 51A
pdf("fig51A.pdf")
print(ggplot(Mesures,aes(x=masse,y=taille,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw())
dev.off()
addmargins(table.cont)
#page 227
print(prop.table(table.cont),digits=3)
#page 228
margin.table(table.cont,1)
margin.table(table.cont,2)
margin.table(prop.table(table.cont),1)
margin.table(prop.table(table.cont),2)
#page 229
prop.table(table.cont,1)
#page 230
prop.table(table.cont,2)
#page 233
cov(masse,taille)
#page 234
cor(masse,taille)
#page 235
require(BioStatR)
eta2(Mesures5$taille,Mesures5$espece)
#page 236
#Couleur
plot(taille~masse,col=rainbow(4)[espece],pch=19,data=Mesures)
legend("bottomright",levels(Mesures$espece),pch=19,col=rainbow(4))
title("Taille en fonction de la masse par esp\`ece")
#Noir et blanc
plot(taille~masse,pch=1:4,data=Mesures)
legend("bottomright",levels(Mesures$espece),pch=1:4)
title("Taille en fonction de la masse par esp\`ece")
#En plus : code figure 51B
pdf("fig51B.pdf")
plot(taille~masse,pch=1:4,data=Mesures)
legend("bottomright",levels(Mesures$espece),pch=1:4)
title("Taille en fonction de la masse par esp\`ece")
dev.off()
#The same figures with ggplot2 (code from pages 225 and 226)
library(ggplot2)
#Couleur
ggplot(Mesures,aes(x=masse,y=taille,color=espece))+geom_point(size=3,shape=19)+
ggtitle("Taille en fonction de la masse par esp\`ece")
#Noir et blanc
ggplot(Mesures,aes(x=masse,y=taille,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()
#page 238
library(lattice)
show.settings()
show.settings(x=standard.theme(color=FALSE))
lattice.options(default.theme=standard.theme(color=FALSE))
#page 239
lattice.options(default.theme=NULL)
trellis.device(theme=standard.theme(color=FALSE))
dev.off()
trellis.device(color=FALSE)
dev.off()
#To obtain the graphics in black and white
trellis.device(theme=standard.theme(color=FALSE),new=FALSE)
#page 240
xyplot(taille~masse,groups=espece,auto.key = list(corner = c(1, 0)),
main="Taille en fonction de la masse par esp\`ece",data=Mesures)
xyplot(taille~masse|espece,groups=espece,data=Mesures)
xyplot(taille~masse|espece,groups=espece,scales="free",data=Mesures)
#page 242
xyplot(taille~masse|espece,data=Mesures,groups=espece,
prepanel=function(x,y) prepanel.loess(x,y,span=1),
panel=function(x,y,subscripts,groups) {
panel.grid(h=-1,v=2)
panel.xyplot(x,y,pch=groups[subscripts])
panel.loess(x,y,span=1,lwd=2,pch=groups[subscripts])})
#begins on page 242 and ends on page 243
xyplot(taille~masse|espece,data=Mesures,groups=espece,scales="free",
prepanel=function(x,y) prepanel.loess(x,y,span=1),
panel=function(x,y,subscripts,groups) {
panel.grid(h=-1,v=2)
panel.xyplot(x,y,pch=groups[subscripts])
panel.loess(x,y,span=1,lwd=2,pch=groups[subscripts])
}
)
#page 243
xyplot(masse+masse_sec~taille|espece,data=Mesures5,scales="free",
layout=c(2,2),auto.key=list(x=-.01,y=.37,corner=c(0,0)))
#begins on page 243 and ends on page 244
xyplot(masse+masse_sec~taille|espece,data=Mesures5,scales="free",
layout=c(2,2),auto.key=list(x=-.01,y=.37,points=FALSE,
col=c("black","grey50"),font=2,corner=c(0,0)),
panel=function(x,y,subscripts,groups) {
panel.grid(h=-1,v= 2)
panel.xyplot(x,y,pch=19,col=c("black","grey50")[groups[subscripts]])
}
)
#page 244
#construction of the previous graphics in colour
trellis.device(theme=NULL,color = TRUE,new=FALSE)
#ceux de la page 240
xyplot(taille~masse,groups=espece,auto.key = list(corner = c(1, 0)),
main="Taille en fonction de la masse par esp\`ece",data=Mesures)
xyplot(taille~masse|espece,groups=espece,data=Mesures,pch=19)
xyplot(taille~masse|espece,groups=espece,scales="free",data=Mesures,pch=19)
#ceux de la page 242
xyplot(taille~masse|espece,data=Mesures,groups=espece,prepanel=function(x,y) prepanel.loess(x,y,span=1),
panel=function(x,y,subscripts,groups) {
panel.grid(h=-1,v=2)
panel.xyplot(x,y,pch=19,col=groups[subscripts])
panel.loess(x,y,span=1,lwd=2,col=groups[subscripts])})
xyplot(taille~masse|espece,data=Mesures,groups=espece,scales="free",
prepanel=function(x,y) prepanel.loess(x,y,span=1),
panel=function(x,y,subscripts,groups) {
panel.grid(h=-1,v=2)
panel.xyplot(x,y,pch=19,col=groups[subscripts])
panel.loess(x,y,span=1,lwd=2,col=groups[subscripts])})
#ceux de la page 243
xyplot(masse+masse_sec~taille|espece,data=Mesures5,scales="free",layout=c(2,2),
auto.key=list(x=-.01,y=.37,corner=c(0,0)))
xyplot(masse+masse_sec~taille|espece,data=Mesures5,scales="free",layout=c(2,2),
auto.key=list(x=-.01,y=.37,points=FALSE,col=c("black","red"),corner=c(0,0)),
panel=function(x,y,subscripts,groups) {
panel.grid(h=-1,v= 2)
panel.xyplot(x,y,pch=19,col=groups[subscripts])
}
)
#page 246
#Construction of the previous graphics with ggplot2
#Noir et blanc
library(ggplot2)
#ceux de la page 240
ggplot(Mesures,aes(x=masse,y=taille,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
theme(legend.position="bottom")
ggplot(Mesures,aes(x=masse,y=taille,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece)
ggplot(Mesures,aes(x=masse,y=taille,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece,scales = "free")
#ceux de la page 242
ggplot(Mesures,aes(x=masse,y=taille,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece)+stat_smooth(color="grey50")
ggplot(Mesures,aes(x=masse,y=taille,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece,scales = "free")+stat_smooth(color="grey50")
#ceux de la page 243
ggplot(Mesures5,aes(x=taille,y=masse,shape=espece))+geom_point(aes(x=taille,y=masse_sec),
color="gray50")+geom_point()+ggtitle("Taille en fonction de la masse par esp\`ece")+
theme_bw()+facet_wrap(~espece,scales = "free")+stat_smooth(color="black")+
stat_smooth(aes(x=taille,y=masse_sec),color="grey50")
#To make both variables appear in the legend in addition to the groups linked to the species
if(!("reshape" %in% rownames(installed.packages()))){install.packages("reshape")}
library(reshape)
Mesures5.long <- melt(Mesures5, id = c("taille","espece"),
measure = c("masse", "masse_sec"))
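#Hedged alternative sketch (not in the book): the same long format can be
#built with tidyr::pivot_longer instead of reshape::melt.
if(!("tidyr" %in% rownames(installed.packages()))){install.packages("tidyr")}
Mesures5.long2 <- tidyr::pivot_longer(Mesures5[,c("taille","espece","masse","masse_sec")],
cols=c("masse","masse_sec"),names_to="variable",values_to="value")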
ggplot(Mesures5.long,aes(x=taille,y=value,shape=espece,color=variable))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece,scales = "free")+stat_smooth(aes(color=variable))+
scale_color_grey(start=.1,end=.5)
pdf("chap5fig511ggplot.pdf")
print(ggplot(Mesures5.long,aes(x=taille,y=value,shape=espece,color=variable))+
geom_point()+ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece,scales = "free")+stat_smooth(aes(color=variable))+
scale_color_grey(start=.1,end=.5)
)
dev.off()
#page 247
#Couleur
#ceux de la page 240
ggplot(Mesures,aes(x=masse,y=taille,color=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()
ggplot(Mesures,aes(x=masse,y=taille,color=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece)
ggplot(Mesures,aes(x=masse,y=taille,color=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece,scales = "free")
#ceux de la page 242
ggplot(Mesures,aes(x=masse,y=taille,color=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece)+stat_smooth()
ggplot(Mesures,aes(x=masse,y=taille,color=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece,scales = "free")+stat_smooth()
#ceux de la page 243
ggplot(Mesures5,aes(x=taille,y=masse,color=espece))+
geom_point(aes(x=taille,y=masse_sec,color=espece),shape=22)+geom_point(shape=19)+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece,scales = "free")+stat_smooth()+
stat_smooth(aes(x=taille,y=masse_sec))
#page 248
#To make both variables appear in the legend in addition to the groups
#linked to the species
if(!("reshape" %in% rownames(installed.packages()))){install.packages("reshape")}
library(reshape)
ggplot(Mesures5.long,aes(x=taille,y=value,color=variable,shape=espece))+geom_point()+
ggtitle("Taille en fonction de la masse par esp\`ece")+theme_bw()+
facet_wrap(~espece,scales = "free")+stat_smooth(aes(color=variable))
#Exercice 5.1
#page 249
#2)
outer(1:6,1:6,"+")
outer(1:6,1:6,pmin)
(effs<-table(outer(1:6,1:6,"+"),outer(1:6,1:6,pmin)))
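#effs is the 11 x 6 table of joint counts of (sum of the two dice, minimum of
#the two dice) over the 36 equally likely outcomes, so effs/36, used below, is
#the joint probability table.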
#page 251
#1)
require(BioStatR)
plotcdf2(Mesures5$taille,Mesures5$masse,f=0,"taille","poids",theme="bw")
#Extra: other options for plotcdf2
plotcdf2(Mesures5$taille,Mesures5$masse,f=0,"taille","poids",col="gray50")
plotcdf2(Mesures5$taille,Mesures5$masse,f=0,"taille","poids")
plotcdf2(Mesures5$taille,Mesures5$masse,f=0,"taille","poids",theme="1")
plotcdf2(Mesures5$taille,Mesures5$masse,f=0,"taille","poids",theme="2")
plotcdf2(Mesures5$taille,Mesures5$masse,f=0,"taille","poids",theme="3")
#page 252
#2)
margin.table(effs)
plotcdf2(2:12,1:6,f=effs/36,"somme des d\'es","valeur du plus petit",the="bw")
#Extra: other options for plotcdf2
plotcdf2(2:12,1:6,f=effs/36,"somme des d\'es","valeur du plus petit",col="gray50")
plotcdf2(2:12,1:6,f=effs/36,"somme des d\'es","valeur du plus petit")
plotcdf2(2:12,1:6,f=effs/36,"somme des d\'es","valeur du plus petit",theme="1")
plotcdf2(2:12,1:6,f=effs/36,"somme des d\'es","valeur du plus petit",theme="2")
plotcdf2(2:12,1:6,f=effs/36,"somme des d\'es","valeur du plus petit",theme="3")
#3)
margin.table(effs,1)
#page 253
margin.table(effs,2)
print(prop.table(margin.table(effs,1)),3)
print(prop.table(margin.table(effs,2)),3)
#4)
print(prop.table(effs,1),digit=3)
#page 254
print(prop.table(effs,2),digit=3)
#Exercice 5.2
#1)
cov(Mesures5[,1:4])
cor(Mesures5[,1:4])
#page 255
#2)
cov(Mesures5[,1:4],use="pairwise.complete.obs")
cor(Mesures5[,1:4],use="pairwise.complete.obs")
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre5.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 6"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 6
#page 261
require(BioStatR)
glycine.blanche<-subset(Mesures,subset=(Mesures$espece=="glycine blanche"))
mean(glycine.blanche$taille)
#page 262
var(glycine.blanche$taille)
#page 263
(var(glycine.blanche$taille))*((length(glycine.blanche$taille)-1)/
length(glycine.blanche$taille))
glycine.blanche<-subset(Mesures5,subset=(Mesures5$espece=="glycine blanche"))
#page 264
effectif.cumule<-cumsum(table(glycine.blanche$graines))
effectif.cumule
37/54
#page 265
qnorm(0.975)
#page 266
glycine.blanche<-subset(Mesures,subset=(Mesures$espece=="glycine blanche"))
shapiro.test(glycine.blanche$taille)
#page 267
length((glycine.blanche$taille))
qqnorm(glycine.blanche$taille)
qqline(glycine.blanche$taille)
pdf("figch61A.pdf")
qqnorm(glycine.blanche$taille)
qqline(glycine.blanche$taille)
dev.off()
#arguments: a data frame and the name of a variable
gg_qqplot(glycine.blanche,"taille")
library(ggplot2)
pdf("figch61B.pdf")
gg_qqplot(glycine.blanche,"taille")
dev.off()
#Extra: another way to build the quantile-quantile plot
#based on the standard normal distribution
ggplot(glycine.blanche, aes(sample = taille)) + stat_qq()
ggplot(glycine.blanche, aes(sample = taille)) + geom_point(stat = "qq")
#or with the previous function and the option qq.line=FALSE
gg_qqplot(glycine.blanche,"taille",qq.line=FALSE)
#page 268
lauriers.roses<-subset(Mesures,subset=(Mesures$espece=="laurier rose"))
shapiro.test(lauriers.roses$taille)
#not drawn from a normal distribution at the alpha=5% level
gg_qqplot(lauriers.roses,"taille")
#page 269
#let us try a qqplot with another distribution, here Student's t (hence dist = qt), whose degrees of freedom we estimate
if(!("MASS" %in% rownames(installed.packages()))){install.packages("MASS")}
library(MASS)
params <- as.list(fitdistr(lauriers.roses$taille, "t")$estimate)
gg_qqplot(lauriers.roses,"taille",qt,list(df=params$df))
#Extra: another way to build the quantile-quantile plot
#based on Student's t distribution
ggplot(lauriers.roses, aes(sample = taille)) + stat_qq(distribution = stats::qt,
dparams = list(df=params[[3]]))
#Extra: let us try a qqplot with a gamma distribution
params <- as.list(fitdistr(lauriers.roses$taille,"gamma")$estimate)
ggplot(lauriers.roses, aes(sample = taille)) + stat_qq(distribution = stats::qgamma,
dparams = params)
#with the reference line
gg_qqplot(lauriers.roses,"taille",qgamma,params)
#let us try a qqplot with a chi-squared distribution
params <- list(df=fitdistr(lauriers.roses$taille,"chi-squared",start=list(df=5),
method="Brent",lower=1,upper=40)$estimate)
ggplot(lauriers.roses, aes(sample = taille)) + stat_qq(distribution = qchisq,
dparams = params)
#with the reference line
gg_qqplot(lauriers.roses,"taille",qchisq,params)
if(!("gridExtra" %in% rownames(installed.packages()))){install.packages("gridExtra")}
library(gridExtra)
params <- as.list(fitdistr(lauriers.roses$taille, "t")$estimate)
p1=gg_qqplot(lauriers.roses,"taille",qt,list(df=params$df))
params <- list(df=fitdistr(lauriers.roses$taille,"chi-squared",start=list(df=5),
method="Brent",lower=1,upper=40)$estimate)
p2=gg_qqplot(lauriers.roses,"taille",qchisq,params)
pdf("fig61Cggplot")
grid.arrange(p1, p2, nrow = 1)
dev.off()
#Extra: figure with the four qqplots
p0=gg_qqplot(lauriers.roses,"taille")+ggtitle("qqplot normal")
params <- as.list(fitdistr(lauriers.roses$taille,"gamma")$estimate)
p3=gg_qqplot(lauriers.roses,"taille",qgamma,params)+ggtitle("qqplot gamma")
grid.arrange(p1+ggtitle("qqplot student"), p2+ggtitle("qqplot chi-deux"), p0, p3, nrow=2)
#page 270
(moyenne<-mean(glycine.blanche$taille))
(quantile<-qt(0.975,53))
(ecart.type<-sd(glycine.blanche$taille))
moyenne-quantile*(ecart.type/sqrt(length(glycine.blanche$taille)))
moyenne+quantile*(ecart.type/sqrt(length(glycine.blanche$taille)))
#page 271
t.test(glycine.blanche$taille)
#page 272
glycine.blanche<-subset(Mesures,subset=(Mesures$espece=="glycine blanche"))
shapiro.test(glycine.blanche$taille)
length(glycine.blanche$taille)
#page 273
(variance<-var(glycine.blanche$taille))
qchisq(0.975,53)
qchisq(0.025,53)
((length(glycine.blanche$taille)-1)*variance)/qchisq(0.975,53)
((length(glycine.blanche$taille)-1)*variance)/qchisq(0.025,53)
#page 274
binom.test(x=5,n=10,p=0.5,alternative=c("two.sided","less","greater"),conf.level=0.95)
#page 275
binom.ci(x=5,n=10,conf.level=0.95,method="exact")
prop.test(x=5,n=10,p=0.5,alternative=c("two.sided","less","greater"),conf.level=0.95)
#page 276
binom.ci(x=5,n=10,conf.level=0.95,method="Wilson")
binom.ci(x=5,n=10,conf.level=0.95,method="Wald")
#page 284
#Exercice 6.1
#1)
toxine<-c(1.2,0.8,0.6,1.1,1.2,0.9,1.5,0.9,1.0)
str(toxine)
mean(toxine)
sd(toxine)
#2)
t.test(toxine)
#page 285
#4)
variance<-var(toxine)
((length(toxine)-1)*variance)/qchisq(0.975,8)
((length(toxine)-1)*variance)/qchisq(0.025,8)
sqrt(((length(toxine)-1)*variance)/qchisq(0.975,8))
#page 286
sqrt(((length(toxine)-1)*variance)/qchisq(0.025,8))
#Exercice 6.3
#page 287
#1)
lambda_n<-(1*11+2*41+3*27+4*16+5*10+6*2+7*3)/110
lambda_n
#2)
echantillon<-rep(0:8,c(0,11,41,27,16,10,2,3,0))
echantillon
poi.ci(echantillon)
#Probl\`eme 6.1
#page 288
library(BioStatR)
#1)
glycine<-subset(Mesures,subset=(Mesures$espece=="glycine blanche"))
#2)
layout(t(1:2))
histo<-hist(glycine$taille,ylab="Nombre de gousses de glycine blanche",
main="Histogramme de la taille\n d'une gousse de glycine blanche",
xlab="Taille d'une gousse de glycine blanche en cm")
boxplot(glycine$taille,ylab="Taille d'une gousse de glycine blanche en cm",
main="Bo^ite \`a moustaches de la taille\n d'une gousse de glycine blanche")
pdf("chap5fig62.pdf")
layout(t(1:2))
histo<-hist(glycine$taille,ylab="Nombre de gousses de glycine blanche",
main="Histogramme de la taille\n d'une gousse de glycine blanche",
xlab="Taille d'une gousse de glycine blanche en cm")
boxplot(glycine$taille,ylab="Taille d'une gousse de glycine blanche en cm",
main="Bo^ite \`a moustaches de la taille\n d'une gousse de glycine blanche")
dev.off()
#page 289
#4)
shapiro.test(glycine$taille)
#page 290
length(glycine$taille)
#5)
classes<-histo$breaks
classes
effectifs<-histo$counts
effectifs
#6)
mean(glycine$taille)
#page 291
sd(glycine$taille)
#7)
t.test(glycine$taille)
#8)
15.67395-13.87050
#page 292
1.80345/2
(8*1.96/((15.67395-13.87050)/2))^2
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre6.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 7"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 7
require(BioStatR)
#page 300
gaz<-c(52.0,60.2,68.8,46.8,62.2,53.5,50.9,44.9,73.2,60.4,61.9,
67.8,30.5,52.5,40.4,29.6,58.3,62.6,53.6,64.6,54.4,53.8,49.8,
57.4,63.1,53.4,59.4,48.6,40.7,51.9)
shapiro.test(gaz)
length(gaz)
#page 301
(z<-(sqrt(30)*(mean(gaz)-50))/10)
qnorm(0.95)
if(!("TeachingDemos" %in% rownames(installed.packages()))){
install.packages("TeachingDemos")}
#page 302
library(TeachingDemos)
z.test(gaz,mu=50,sd=10,alternative="greater",conf.level=0.95)
#page 303
glycine<-subset(Mesures,subset=(Mesures$espece=="glycine blanche"))
shapiro.test(glycine$taille)
#page 304
length(glycine$taille)
t.test(glycine$taille,mu=15)
power.t.test(n=54,delta=mean(glycine$taille)-15,
sd=sd(glycine$taille),type="one.sample",alternative="two.sided")
#page 305
power.t.test(power=.8,delta=mean(glycine$taille)-15,
sd=sd(glycine$taille),type="one.sample",alternative="two.sided")
#page 307
pesee<-c(2.53,1.51,1.52,1.44,4.32,2.36,2.41,2.06,1.57,1.68,
3.09,0.54,2.32,0.19,2.66,2.20,1.04,1.02,0.74,1.01,
0.35,2.42,2.66,1.11,0.56,1.75,1.51,3.80,2.22,2.88)
shapiro.test(pesee)
length(pesee)
((length(pesee)-1)*var(pesee))/4
#page 308
qchisq(0.95,29)
library(TeachingDemos)
sigma.test(pesee,sigma=2,alternative="greater")
if(!("OneTwoSamples" %in% rownames(installed.packages()))){
install.packages("OneTwoSamples")}
library(OneTwoSamples)
var_test1(pesee,sigma2=4)
#page 310
binom.test(507,988,0.5)
#page 317
pipit<-c(17.0,16.9,16.9,17.3,16.8,16.8,17.0,16.5,16.9,16.5,
17.0,17.0,16.8,17.0,16.9,17.0,17.0,17.3,16.8,17.1,16.9,16.8,
17.1,17.0,17.1,17.2,16.7,16.6,17.2,17.0,17.0)
fauvette<-c(16.0,16.1,16.3,16.5,16.2,15.2,15.6,15.6,16.6,16.0,
16.2,16.8,16.0,17.0,17.9,16.0,16.4,16.3,16.9,17.1,17.0,16.1,
16.5,16.5,16.1,16.5,17.9,16.5,16.7,16.8)
shapiro.test(pipit)
length(pipit)
shapiro.test(fauvette)
length(fauvette)
#page 318
var.test(pipit,fauvette)
t.test(pipit,fauvette,var.equal=FALSE)
t.test(pipit,fauvette)
#page 325
#Probl\`eme 7.1
#2)
glycines<-subset(Mesures,subset=(Mesures$espece=="glycine violette"
|Mesures$espece=="glycine blanche"))
glycines$espece<-factor(glycines$espece)
tapply(glycines$taille,glycines$espece,summary)
tapply(glycines$taille,glycines$espece,sd)
#page 326
#4)
layout(matrix(c(1,2,1,3),nrow=2,ncol=2,byrow=F))
boxplot(taille~espece,data=glycines)
glycine_blanche<-glycines[glycines$espece=="glycine blanche",]
qqnorm(glycine_blanche$taille,ylab="Taille des glycines blanches")
qqline(glycine_blanche$taille)
glycine_violette<-glycines[glycines$espece=="glycine violette",]
qqnorm(glycine_violette$taille,ylab="Taille des glycines violettes")
qqline(glycine_violette$taille)
#Page 327
#7)
wilcox.test(taille~espece,data=glycines,conf.int=TRUE)
#Page 330
#Exercice 7.1
#1)
jus_orange=c(8.2,9.4,9.6,9.7,10.0,14.5,15.2,16.1,17.6,21.5,14.0,13.8,
12.8,15.0,9.5,10.9,12.4,14.7,10.7,11.1,13.8,13.1,8.6,13.9,15.2,13.6,13.4,
12.3,15.2,11.2,19.6,7.8,14.1,12.5,14.1,17.6,13.5,12.4,12.6,14.6,15.5,11.6,
11.8,12.9,8.1,11.8,18.7,12.6,16.0,15.8,17.2,16.4,11.2,10.2,13.6,13.2,15.9,
9.8,8.8,12.0)
acide_ascorbique=c(4.2,5.2,5.8,6.4,7.0,7.3,10.1,11.2,11.3,11.5,7.1,9.8,
5.3,4.8,11.9,10.1,12.5,14.6,4.9,9.7,7.0,3.8,5.0,9.3,8.7,8.7,8.7,9.5,2.5,
6.6,13.6,6.6,9.4,12.1,13.1,4.1,12.1,8.8,7.0,7.5)
#2)
shapiro.test(jus_orange)
length(jus_orange)
#Page 331
shapiro.test(acide_ascorbique)
length(acide_ascorbique)
#4)
var.test(jus_orange,acide_ascorbique)
#Page 332
t.test(jus_orange,acide_ascorbique,alternative="greater",var.equal=TRUE)
#Exercice 7.2
#1)
avnt<-c(15,18,17,20,21,18,17,15,19,16,19,17,19,15,14,16,21,20,21,18,17,17,
17,15,17,18,16,10,17,18,14,15,15,17,17,20,17)
aprs<-c(12,16,17,18,17,15,18,14,16,18,20,16,15,17,18,16,15,14,11,13,13,15,
14,15,19,14,16,14,14,15,19,19,16,19,15,17,16)
mode(avnt)
#Page 333
mode(aprs)
length(avnt)
length(aprs)
#2)
diff<-aprs-avnt
diff
#4)
shapiro.test(diff)
#Page 334
length(diff)
#5)
t.test(diff)
#page 335
#Probl\`eme 7.1
glycines<-subset(Mesures,subset=(Mesures$espece=="glycine violette"|Mesures$espece=="glycine blanche"))
glycines$espece<-factor(glycines$espece)
#2)
tapply(glycines$taille,glycines$espece,summary)
tapply(glycines$taille,glycines$espece,sd)
#page 336
#4)
layout(matrix(c(1,2,1,3),nrow=2,ncol=2,byrow=F))
boxplot(taille~espece,data=glycines,main="Bo^ites \`a moustaches")
glycine_blanche<-glycines[glycines$espece=="glycine blanche",]
qqnorm(glycine_blanche$taille,ylab="Taille des glycines blanches")
qqline(glycine_blanche$taille)
glycine_violette<-glycines[glycines$espece=="glycine violette",]
qqnorm(glycine_violette$taille,ylab="Taille des glycines violettes")
qqline(glycine_violette$taille)
#page 337
#6)
tapply(glycines$taille,glycines$espece,shapiro.test)
tapply(glycines$taille,glycines$espece,length)
#page 338
#8)
wilcox.test(taille~espece,data=glycines,conf.int=TRUE)
#Probl\`eme 7.2
#1)
lauriers<-subset(Mesures5,subset=(Mesures5$espece=="laurier rose"))
#2)
str(lauriers)
#page 339
#3)
la_masse<-lauriers$masse
la_masse_sec<-lauriers$masse_sec
diff_laurier<-(la_masse-la_masse_sec)
#4)
layout(matrix(c(1,2),nrow=1,ncol=2,byrow=F))
boxplot(diff_laurier,ylab="Diff\'erence entre la masse et la masse s\`eche pour une graine de
laurier",main="Bo^ite \`a moustaches")
abline(h=0, lty=2)
qqnorm(diff_laurier,ylab="Diff\'erence entre la masse et la masse s\`eche")
qqline(diff_laurier)
#page 340
#6)
shapiro.test(diff_laurier)
length(diff_laurier)
#7)
t.test(diff_laurier)
#page 341
#9)
wilcox.test(diff_laurier)
t.test(lauriers$masse,lauriers$masse_sec,paired=TRUE)
wilcox.test(lauriers$masse,lauriers$masse_sec,paired=TRUE)
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre7.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 8"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 8
require(BioStatR)
#page 348
fisher.test(matrix(c(5,1,0,14),ncol=2,byrow=TRUE))
#page 357
#Exercice 8.1
#1)
Rhesus<-matrix(c(3620,3805,934,172,631,676,165,30),nrow=2,byrow=TRUE)
rownames(Rhesus)<-c("Rh+","Rh-")
colnames(Rhesus)<-c("O","A","B","AB")
#2)
Rhesus
#3)
class(Rhesus)
Rhesus<-as.table(Rhesus)
class(Rhesus)
#4)
plot(Rhesus,main="D\'enombrements")
pdf("figexo81.pdf")
plot(Rhesus,main="D\'enombrements")
dev.off()
#page 358
#5)
margin.table(Rhesus)
margin.table(Rhesus,margin=1)
margin.table(Rhesus,margin=2)
#6)
chisq.test(Rhesus,simulate.p.value=FALSE)$expected
chisq.test(Rhesus,simulate.p.value=FALSE)
#7)
chisq.test(Rhesus,simulate.p.value=TRUE,B=50000)
#page 359
#8)
fisher.test(Rhesus)
#9)
fisher.test(Rhesus,simulate.p.value=TRUE,B=50000)
#Exercice 8.2
#1)
flor<-matrix(c(34,73,63,16,12,12),nrow=2,byrow=T)
rownames(flor)<-c("Fleuri","Pas fleuri")
colnames(flor)<-c("Engrais A","Engrais B","Engrais C")
flor<-as.table(flor)
#page 360
#2)
flor
#3)
dim(flor)
#4)
plot(flor,main="D\'enombrements")
#5)
chisq.test(flor)$expected
chisq.test(flor)
#Extra: computing the p-value by simulation
chisq.test(flor,simulate.p.value=T,B=100000)
#page 361
#6)
chisq.test(flor)$residuals
#page 362
#7)
if(!("vcd" %in% rownames(installed.packages()))){install.packages("vcd")}
library(vcd)
assoc(flor,shade=TRUE)
assoc(t(flor),shade=TRUE)
pdf("figexo82.pdf")
assoc(flor,shade=TRUE)
dev.off()
pdf("figexo82transpose.pdf")
assoc(t(flor),shade=TRUE)
dev.off()
#Exercice 8.3
res.test<-chisq.test(c(100,18,24,18),p=c(90,30,30,10),rescale.p=TRUE)
res.test$expected
res.test
chisq.test(c(100,18,24,18),p=c(90,30,30,10),rescale.p=TRUE,simulate=TRUE)
#page 363
#Exercice 8.4
#1)
radio<-matrix(c(103,12,18,35),nrow=2,byrow=T)
rownames(radio)<-c("Bras cass\'e","Bras normal")
colnames(radio)<-c("Bras cass\'e","Bras normal")
radio<-as.table(radio)
#2)
radio
#4)
mcnemar.test(radio)
#page 364
#5)
binom.test(radio[2],n=sum(radio[c(2,3)]))
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre8.R
|
#' ---
#' title: "Initiation \u00e0 la statistique avec R, code et compl\u00e9ments chapitre 9"
#' author: "Fr\u00e9d\u00e9ric Bertrand et Myriam Maumy-Bertrand"
#' date: "20 mars 2023"
#' ---
#Chapitre 9
require(BioStatR)
#page 378
#Exercice 9.1
#1)
lauriers<-subset(Mesures,subset=(Mesures$espece=="laurier rose"))
plot(taille~masse,data=lauriers,pch=19)
#page 379
#3)
droite_lauriers<-lm(taille~masse,data=lauriers)
coef(droite_lauriers)
#4)
fitted(droite_lauriers)
#page 380
#5)
abline(coef(droite_lauriers),col="red",lwd=2)
#6)
predict(droite_lauriers,data.frame(masse=4.8))
#works the same way as predict(droite_lauriers,list(masse=4.8))
#7)
residuals(droite_lauriers)[lauriers$masse==4.8]
#page 381
#8)
mean(lauriers$taille)
6.413523+1.700114*mean(lauriers$masse)
coef(droite_lauriers)[1]+coef(droite_lauriers)[2]*mean(lauriers$masse)
#9)
summary(droite_lauriers)
#page 382
#10)
anova(droite_lauriers)
#11)
summary(droite_lauriers)
#page 383
#12)
residus<-residuals(droite_lauriers)
shapiro.test(residus)
#page 384
plot(lauriers$masse,residus)
pdf("residusmasse.pdf")
plot(lauriers$masse,residus)
dev.off()
#The residuals look fine => homoscedasticity of the errors is OK and
#there is no systematic effect
#The permutation approach is therefore valid
#13)
if(!("lmPerm" %in% rownames(installed.packages()))){install.packages("lmPerm")}
library(lmPerm)
lmp(taille~masse,lauriers)
#page 385
perm_laurier<-lmp(taille~masse,lauriers,center=FALSE)
summary(perm_laurier)
#page 386
#14)
confint(droite_lauriers)
predict(droite_lauriers,list(masse=c(4.8)),interval="confidence")
predict(droite_lauriers,list(masse=c(4.8)),interval="prediction")
#page 387
#Exercice 9.2
#1)
bignones<-subset(Mesures5,subset=(Mesures5$espece=="bignone"))[,c(1,4)]
plot(masse~masse_sec,data=bignones,pch=19)
pdf("figure94.pdf")
plot(masse~masse_sec,data=bignones,pch=19)
dev.off()
#3)a)
droite_bignones<-lm(masse~masse_sec,data=bignones)
coef(droite_bignones)
#page 388
residus<-residuals(droite_bignones)
plot(bignones$masse_sec,residus)
pdf("figure95.pdf")
plot(bignones$masse_sec,residus)
dev.off()
#The residuals do not look right: they show a trumpet (fan) shape, which
#calls the homoscedasticity of the errors into question. We will run a test
#below to check whether this defect is significant at the \alpha=5% level.
#On the other hand, the residuals appear to be scattered at random above and
#below the x-axis. Note also the absence of a systematic effect, which would
#show up, for example, as a banana shape. The independence assumption is
#therefore not called into question.
#Despite the inhomogeneity of the variances, the estimates of the slope and
#of the intercept remain unbiased. It will, however, be necessary to take the
#heteroscedasticity of the errors into account when carrying out the test
#procedures and building the confidence intervals.
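#Hedged illustration (not in the book): the trumpet shape can also be checked
#with the standard lm diagnostic plots (residuals vs fitted values and the
#scale-location plot).
plot(droite_bignones,which=1)
plot(droite_bignones,which=3)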
#page 389
#4)
fitted(droite_bignones)
#5)
plot(masse~masse_sec,data=bignones,pch=19)
abline(coef(droite_bignones),col="red",lwd=2)
pdf("figure96.pdf")
plot(masse~masse_sec,data=bignones,pch=19)
abline(coef(droite_bignones),col="red",lwd=2)
dev.off()
#6)
predict(droite_bignones,data.frame(masse_sec=2.5))
plot(masse~masse_sec,data=bignones,pch=19)
abline(coef(droite_bignones),col="red",lwd=2)
points(2.5,predict(droite_bignones,data.frame(masse_sec=2.5)),pch=17,col="blue")
segments(2.5, bignones$masse[bignones$masse_sec==2.5],2.5,
predict(droite_bignones,data.frame(masse_sec=2.5)),lty=2,lwd=2)
pdf("figure96residusmasselinepoint.pdf")
plot(masse~masse_sec,data=bignones,pch=19)
abline(coef(droite_bignones),col="red",lwd=2)
points(2.5,predict(droite_bignones,data.frame(masse_sec=2.5)),pch=17,col="blue")
segments(2.5, bignones$masse[bignones$masse_sec==2.5],2.5,
predict(droite_bignones,data.frame(masse_sec=2.5)),lty=2,lwd=2)
dev.off()
#page 390
#7)
residuals(droite_bignones)[bignones$masse_sec==2.5]
#8)
mean(bignones$masse)
-0.5391407+4.8851935*mean(bignones$masse_sec)
coef(droite_bignones)[1]+coef(droite_bignones)[2]*mean(bignones$masse_sec)
#page 391
#9)
summary(droite_bignones)
#10)
anova(droite_bignones)
#page 392
#12) et 13)
residus<-residuals(droite_bignones)
shapiro.test(residus)
length(residus)
#There are 70 residuals, which is at least 30, so the normality test is
#reliable. The $p$-value of the test is strictly greater than \alpha=5%; the
#test is not significant. We therefore keep, by default, the null hypothesis
#H0 of normality of the errors.
#page 393
#The White test is a special case of the Breusch-Pagan test, which is
#available in the lmtest package
if(!("lmtest" %in% rownames(installed.packages()))){install.packages("lmtest")}
library(lmtest)
bptest(droite_bignones, ~ masse_sec + I(masse_sec^2), data = bignones)
#The White test addresses the two hypotheses:
#"H0: the errors are homoscedastic"
#against
#"H1: the errors are heteroscedastic".
#The normality of the errors was not called into question, so the White test
#is reliable. The $p$-value of the test is less than or equal to \alpha=5%;
#the test is significant. We reject the null hypothesis H0 of
#homoscedasticity of the errors and decide that the alternative hypothesis of
#heteroscedasticity of the errors is true.
#As we had seen graphically, the errors are not homoscedastic; this
#inhomogeneity of the variances must be taken into account when estimating
#the model parameters and then when carrying out the Student tests or the
#global Fisher test for the regression.
if(!("sandwich" %in% rownames(installed.packages()))){install.packages("sandwich")}
library(sandwich)
vcovHC(droite_bignones)
#Estimate of the variance-covariance matrix of the estimators \hat\beta_0 and
#\hat\beta_1, taking the inhomogeneity of the variances into account.
coeftest(droite_bignones, df="inf", vcov=vcovHC)
#Student tests of the coefficients \beta_0 and \beta_1.
#page 394
waldtest(droite_bignones, vcov=vcovHC)
#Global Fisher test of the simple linear regression model.
#To build confidence intervals for the parameters, you can use the hcci
#package.
if(!("hcci" %in% rownames(installed.packages()))){install.packages("hcci")}
library(hcci)
?hcci
#The help of the hcci package tells you that several procedures are available
#to account for heteroscedasticity. The vcovHC function uses the HC3 method
#by default; the HC function uses the HC4 method with the parameter k=0.7 by
#default. The HC3, HC4 and HC5 methods are the recommended ones. Comparing
#their results, you can see that they all lead to the same conclusions at the
#\alpha=5% level: keeping, by default, "H0 : \beta_0=0" for the test of the
#intercept, and deciding that "H1 : \beta_1<>0" is true for the slope.
HC(droite_bignones,method=3)
coeftest(droite_bignones, df="inf", vcov=HC(droite_bignones,method=3))
#page 395
vcovHC(droite_bignones,type="HC4")
coeftest(droite_bignones, df="inf", vcov=vcovHC(droite_bignones,type="HC4"))
vcovHC(droite_bignones,type="HC4m")
coeftest(droite_bignones, df="inf", vcov=vcovHC(droite_bignones,type="HC4m"))
#page 396
HC(droite_bignones,method=4,k=0.7)
coeftest(droite_bignones, df="inf", vcov=HC(droite_bignones,method=4,k=0.7))
vcovHC(droite_bignones,type="HC5")
coeftest(droite_bignones, df="inf", vcov=vcovHC(droite_bignones,type="HC5"))
HC(droite_bignones,method=5)
#page 397
coeftest(droite_bignones, df="inf", vcov=HC(droite_bignones,method=5))
#Let us now build confidence intervals for the parameters \beta_0 and \beta_1
#of the simple linear regression. We need to rewrite the model in this way to
#be able to use the Pboot and Tboot functions of the hcci package.
y = bignones$masse
x = bignones$masse_sec
model = lm(y ~ x)
#The seed of the random number generator can be "fixed" with the set.seed
#function to obtain reproducible results.
set.seed(123456)
#Start with a simple bootstrap technique.
#Simple percentile bootstrap.
Pboot(model, significance = 0.05, double = FALSE, J=1000, K = 100,
distribution = "rademacher")
#page 398
#Simple bootstrap-t.
Tboot(model, significance = 0.05, double = FALSE, J=1000, K = 100,
distribution = "rademacher")
#Now use a double bootstrap technique.
#Double percentile bootstrap.
Pboot(model, significance = 0.05, double = TRUE, J=1000, K = 100,
distribution = "rademacher")
#Double bootstrap-t.
Tboot(model, significance = 0.05, double = TRUE, J=1000, K = 100,
distribution = "rademacher")
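#Hedged alternative sketch (not in the book): Wald-type confidence intervals
#based on a heteroscedasticity-consistent (sandwich) covariance matrix can be
#obtained directly with lmtest::coefci, using the packages already loaded
#above.
coefci(droite_bignones, vcov. = vcovHC)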
#Since the model is heteroscedastic, the construction of prediction intervals
#is not reliable
|
/scratch/gouwar.j/cran-all/cranData/BioStatR/demo/Chapitre9.R
|
#' Draw an area-proportional Venn diagram of 2 or 3 circles
#'
#' This function creates an area-proportional Venn diagram of 2 or 3 circles, based on lists of (biological) identifiers.
#' It requires three parameters: input lists X, Y and Z. For a 2-circle Venn diagram, one of these lists
#' should be left empty. Duplicate identifiers are removed automatically, and a mapping from Entrez and/or
#' Affymetrix to Ensembl IDs is available. BioVenn is case-sensitive. In SVG mode, text and numbers can be dragged and dropped.
#'
#' When using a BioVenn diagram for a publication, please cite:
#' BioVenn - an R and Python package for the comparison and visualization of biological lists using area-proportional Venn diagrams
#' T. Hulsen, Data Science 2021, 4 (1): 51-61
#' https://dx.doi.org/10.3233/DS-210032
#'
#' @param list_x (Required) List with IDs from dataset X
#' @param list_y (Required) List with IDs from dataset Y
#' @param list_z (Required) List with IDs from dataset Z
#' @param title (Optional) The title of the Venn diagram (default is "BioVenn")
#' @param t_f (Optional) The font of the main title (default is "serif")
#' @param t_fb (Optional) The font "face" of the main title (1=plain, 2=bold, 3=italic, 4=bold-italic; default is 2)
#' @param t_s (Optional) The size of the main title (cex; relative to the standard size; default is 1.5)
#' @param t_c (Optional) The colour of the main title (default is "black")
#' @param subtitle (Optional) The subtitle of the Venn diagram (default is "(C) 2007-2020 Tim Hulsen")
#' @param st_f (Optional) The font of the subtitle (default is "serif")
#' @param st_fb (Optional) The font "face" of the subtitle (1=plain, 2=bold, 3=italic, 4=bold-italic; default is 2)
#' @param st_s (Optional) The size of the subtitle (cex; relative to the standard size; default is 1.2)
#' @param st_c (Optional) The colour of the subtitle (default is "black")
#' @param xtitle (Optional) The X title of the Venn diagram (default is "ID set X")
#' @param xt_f (Optional) The font of the X title (default is "serif")
#' @param xt_fb (Optional) The font "face" of the X title (1=plain, 2=bold, 3=italic, 4=bold-italic; default is 2)
#' @param xt_s (Optional) The size of the X title (cex; relative to the standard size; default is 1)
#' @param xt_c (Optional) The colour of the X title (default is "black")
#' @param ytitle (Optional) The Y title of the Venn diagram (default is "ID set Y")
#' @param yt_f (Optional) The font of the Y title (default is "serif")
#' @param yt_fb (Optional) The font "face" of the Y title (1=plain, 2=bold, 3=italic, 4=bold-italic; default is 2)
#' @param yt_s (Optional) The size of the Y title (cex; relative to the standard size; default is 1)
#' @param yt_c (Optional) The colour of the Y title (default is "black")
#' @param ztitle (Optional) The Z title of the Venn diagram (default is "ID set Z")
#' @param zt_f (Optional) The font of the Z title (default is "serif")
#' @param zt_fb (Optional) The font "face" of the Z title (1=plain, 2=bold, 3=italic, 4=bold-italic; default is 2)
#' @param zt_s (Optional) The size of the Z title (cex; relative to the standard size; default is 1)
#' @param zt_c (Optional) The colour of the Z title (default is "black")
#' @param nrtype (Optional) The type of the numbers to be displayed: absolute (abs) numbers or percentages (pct) (default is "abs")
#' @param nr_f (Optional) The font of the numbers (default is "serif")
#' @param nr_fb (Optional) The font "face" of the numbers (1=plain, 2=bold, 3=italic, 4=bold-italic; default is 2)
#' @param nr_s (Optional) The size of the numbers (cex; relative to the standard size; default is 1)
#' @param nr_c (Optional) The colour of the numbers (default is "black")
#' @param x_c (Optional) The colour of the X circle (default is "red")
#' @param y_c (Optional) The colour of the Y circle (default is "green")
#' @param z_c (Optional) The colour of the Z circle (default is "blue")
#' @param bg_c (Optional) The background colour (default is "white")
#' @param width (Optional) The width of the output file (in pixels for BMP/JPEG/PNG/TIF or in hundredths of an inch for PDF/SVG; default is 1000)
#' @param height (Optional) The height of the output file (in pixels for BMP/JPEG/PNG/TIF or in hundredths of an inch for PDF/SVG; default is 1000)
#' @param output (Optional) Output format: "bmp","jpg","pdf","png","svg" or "tif" (anything else writes to the screen; default is "screen")
#' @param filename (Optional) The name of the output file (default is "biovenn" + extension of the selected output format)
#' @param map2ens (Optional) Map from Entrez or Affymetrix IDs to Ensembl IDs (default is FALSE)
#' @return An image of the Venn diagram is generated in the desired output format.
#' @return Also returns an object with thirteen lists: X, Y, Z, X only, Y only, Z only, XY, XZ, YZ, XY only, XZ only, YZ only, XYZ.
#' @import biomaRt graphics grDevices plotrix svglite
#' @examples
#' list_x <- c("1007_s_at","1053_at","117_at","121_at","1255_g_at","1294_at")
#' list_y <- c("1255_g_at","1294_at","1316_at","1320_at","1405_i_at")
#' list_z <- c("1007_s_at","1405_i_at","1255_g_at","1431_at","1438_at","1487_at","1494_f_at")
#' biovenn <- draw.venn(list_x, list_y, list_z, subtitle="Example diagram", nrtype="abs")
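#' # Hedged extra sketch (not part of the original example): as noted in the
#' # description, a 2-circle diagram can be drawn by leaving one list empty.
#' biovenn2 <- draw.venn(list_x, list_y, character(0), subtitle="Two-circle example", nrtype="abs")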
#' @export
draw.venn <- function(list_x, list_y, list_z, title="BioVenn", t_f="serif", t_fb=2, t_s=1.5, t_c="black", subtitle="(C) 2007-2020 Tim Hulsen", st_f="serif", st_fb=2, st_s=1.2, st_c="black", xtitle="ID Set X", xt_f="serif", xt_fb=2, xt_s=1, xt_c="black", ytitle="ID Set Y", yt_f="serif", yt_fb=2, yt_s=1, yt_c="black", ztitle="ID Set Z", zt_f="serif", zt_fb=2, zt_s=1, zt_c="black", nrtype="abs", nr_f="serif", nr_fb=2, nr_s=1, nr_c="black", x_c="red", y_c="green", z_c="blue", bg_c="white", width=1000, height=1000, output="screen", filename=NULL, map2ens=FALSE){
# Make input lists unique
list_x <- unique(list_x)
list_y <- unique(list_y)
list_z <- unique(list_z)
# Convert to Ensembl IDs
if(map2ens)
{
mart <- biomaRt::useMart(dataset="hsapiens_gene_ensembl",biomart="ensembl")
if(length(list_x)>0)
{
list_x_1 <- biomaRt::select(mart, keys=list_x, columns=c("ensembl_gene_id"),keytype="affy_hg_u133a")$ensembl_gene_id
list_x_2 <- biomaRt::select(mart, keys=list_x, columns=c("ensembl_gene_id"),keytype="entrezgene_id")$ensembl_gene_id
list_x <- unique(c(list_x_1,list_x_2))
}
if(length(list_y)>0)
{
list_y_1 <- biomaRt::select(mart, keys=list_y, columns=c("ensembl_gene_id"),keytype="affy_hg_u133a")$ensembl_gene_id
list_y_2 <- biomaRt::select(mart, keys=list_y, columns=c("ensembl_gene_id"),keytype="entrezgene_id")$ensembl_gene_id
list_y <- unique(c(list_y_1,list_y_2))
}
if(length(list_z)>0)
{
list_z_1 <- biomaRt::select(mart, keys=list_z, columns=c("ensembl_gene_id"),keytype="affy_hg_u133a")$ensembl_gene_id
list_z_2 <- biomaRt::select(mart, keys=list_z, columns=c("ensembl_gene_id"),keytype="entrezgene_id")$ensembl_gene_id
list_z <- unique(c(list_z_1,list_z_2))
}
}
# Generate lists and calculate numbers
x <- length(list_x)
y <- length(list_y)
z <- length(list_z)
list_xy <- intersect(list_x, list_y)
xy <- length(list_xy)
list_xz <- intersect(list_x, list_z)
xz <- length(list_xz)
list_yz <- intersect(list_y, list_z)
yz <- length(list_yz)
list_xyz <- intersect(list_xy, list_z)
xyz <- length(list_xyz)
list_xy_only <- setdiff(list_xy, list_xyz)
xy_only <- length(list_xy_only)
list_xz_only <- setdiff(list_xz, list_xyz)
xz_only <- length(list_xz_only)
list_yz_only <- setdiff(list_yz, list_xyz)
yz_only <- length(list_yz_only)
list_x_only <- setdiff(list_x, c(list_xy, list_xz))
x_only <- length(list_x_only)
list_y_only <- setdiff(list_y, c(list_xy,list_yz))
y_only <- length(list_y_only)
list_z_only <- setdiff(list_z, c(list_xz,list_yz))
z_only <- length(list_z_only)
# Print numerical output
print(paste("x total:",x))
print(paste("y total:",y))
print(paste("z total:",z))
print(paste("x only:",x_only))
print(paste("y only:",y_only))
print(paste("z only:",z_only))
print(paste("x-y total overlap:",xy))
print(paste("x-z total overlap:",xz))
print(paste("y-z total overlap:",yz))
print(paste("x-y only overlap:",xy_only))
print(paste("x-z only overlap:",xz_only))
print(paste("y-z only overlap:",yz_only))
print(paste("x-y-z overlap:",xyz))
# Define sq function
sq <- function(nr)
{
nr=nr^2
return(nr)
}
# Define sqr function
sqr <- function(nr)
{
nr=sqrt(round(abs(nr)))
return(nr)
}
# Define arccos function
arccos <- function(nr)
{
nr=acos(min(max(-1,round(nr,5)),1))
return(nr)
}
# Set width and height of plotting area
width_p=1000
height_p=1000
# Amplification
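# The raw counts are rescaled so that the total union area (by
# inclusion-exclusion) equals 100000 units; the *_text copies keep the
# original counts for the printed labels.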
amp=100000/(x+y+z-xy-xz-yz+xyz)
x_text=x
x=x*amp
y_text=y
y=y*amp
z_text=z
z=z*amp
xy_text=xy
xy=xy*amp
xz_text=xz
xz=xz*amp
yz_text=yz
yz=yz*amp
xyz_text=xyz
xyz=xyz*amp
total=x+y+z-xy-xz-yz+xyz
total_text=x_text+y_text+z_text-xy_text-xz_text-yz_text+xyz_text
# Radius calculation
x_r=sqr(x/pi)
y_r=sqr(y/pi)
z_r=sqr(z/pi)
# Distance calculation (with 0.001 error margin)
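# For each pair of circles the centre distance starts at the sum of the radii
# (tangent, no overlap) and is decreased step by step until the lens-shaped
# intersection area, given by the standard two-circle formula
#   A = r1^2*acos((d^2+r1^2-r2^2)/(2*d*r1)) + r2^2*acos((d^2+r2^2-r1^2)/(2*d*r2))
#       - 0.5*sqrt((d+r1+r2)*(d+r1-r2)*(d-r1+r2)*(-d+r1+r2)),
# reaches the required overlap area (to within the 0.1% margin encoded by the
# factor 0.999 below).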
xy_d=x_r+y_r
if(x&&y)
{
while(xy*0.999>sq(x_r)*arccos((sq(xy_d)+sq(x_r)-sq(y_r))/(2*xy_d*x_r))+sq(y_r)*arccos((sq(xy_d)+sq(y_r)-sq(x_r))/(2*xy_d*y_r))-0.5*sqr(round((xy_d+x_r+y_r)*(xy_d+x_r-y_r)*(xy_d-x_r+y_r)*(-xy_d+x_r+y_r),5)))
{
xy_d=xy_d-min(x_r,y_r)/1000.0
}
}
xz_d=x_r+z_r
if(x&&z)
{
while(xz*0.999>sq(x_r)*arccos((sq(xz_d)+sq(x_r)-sq(z_r))/(2*xz_d*x_r))+sq(z_r)*arccos((sq(xz_d)+sq(z_r)-sq(x_r))/(2*xz_d*z_r))-0.5*sqr(round((xz_d+x_r+z_r)*(xz_d+x_r-z_r)*(xz_d-x_r+z_r)*(-xz_d+x_r+z_r),5)))
{
xz_d=xz_d-min(x_r,z_r)/1000.0
}
}
yz_d=y_r+z_r
if(y&&z)
{
while(yz*0.999>sq(y_r)*arccos((sq(yz_d)+sq(y_r)-sq(z_r))/(2*yz_d*y_r))+sq(z_r)*arccos((sq(yz_d)+sq(z_r)-sq(y_r))/(2*yz_d*z_r))-0.5*sqr(round((yz_d+y_r+z_r)*(yz_d+y_r-z_r)*(yz_d-y_r+z_r)*(-yz_d+y_r+z_r),5)))
{
yz_d=yz_d-min(y_r,z_r)/1000.0
}
}
# Distance calculation for horizontally plotted diagrams
if(xy_d>xz_d+yz_d)
{
xy_d=xz_d+yz_d
}
if(xz_d>xy_d+yz_d)
{
xz_d=xy_d+yz_d
}
if(yz_d>xy_d+xz_d)
{
yz_d=xy_d+xz_d
}
# Angle calculation
x_a=arccos((sq(xy_d)+sq(xz_d)-sq(yz_d))/(2*xy_d*xz_d))
y_a=arccos((sq(xy_d)+sq(yz_d)-sq(xz_d))/(2*xy_d*yz_d))
z_a=arccos((sq(xz_d)+sq(yz_d)-sq(xy_d))/(2*xz_d*yz_d))
x_yz=xz_d*sin(z_a)
y_yz=xy_d*cos(y_a)
# PPU calculation
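# ppu ("pixels per unit") maps the diagram's coordinate units onto the fixed
# 1000 x 1000 plotting area; the smaller of the horizontal and vertical scale
# factors is kept so that the whole diagram fits.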
width_h=max(y_r+y_yz,x_r,z_r-yz_d+y_yz)+max(x_r,y_r-y_yz,z_r+yz_d-y_yz)
ppu_h=width_p/width_h
width_v=max(x_r+x_yz,y_r,z_r)+max(y_r,z_r,x_r-x_yz)
ppu_v=height_p/width_v
ppu=min(ppu_h,ppu_v)
# Circle center calculation
x_h=max(x_r,y_r+y_yz,z_r-yz_d+y_yz)
x_v=max(x_r,y_r-x_yz,z_r-x_yz)
y_h=max(x_r-y_yz,y_r,z_r-yz_d)
y_v=max(x_r+x_yz,y_r,z_r)
z_h=max(x_r+yz_d-y_yz,y_r+yz_d,z_r)
z_v=max(x_r+x_yz,y_r,z_r)
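# The next three blocks compute the two intersection points of each pair of
# circles with the usual construction: part1 is the point on the line joining
# the centres (the midpoint corrected for the difference in squared radii)
# and part2 is the half-chord offset perpendicular to that line.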
# Calculate intersection points X-Y (first inner, then outer)
xy_i_h_part1=(x_h+y_h)/2+((y_h-x_h)*(sq(x_r)-sq(y_r)))/(2*sq(xy_d))
xy_i_v_part1=(x_v+y_v)/2+((y_v-x_v)*(sq(x_r)-sq(y_r)))/(2*sq(xy_d))
xy_i_h_part2=2*((x_v-y_v)/sq(xy_d))*sqr((xy_d+x_r+y_r)*(xy_d+x_r-y_r)*(xy_d-x_r+y_r)*(-xy_d+x_r+y_r))/4
xy_i_v_part2=2*((x_h-y_h)/sq(xy_d))*sqr((xy_d+x_r+y_r)*(xy_d+x_r-y_r)*(xy_d-x_r+y_r)*(-xy_d+x_r+y_r))/4
xy_i1_h=xy_i_h_part1-xy_i_h_part2
xy_i1_v=xy_i_v_part1+xy_i_v_part2
xy_i2_h=xy_i_h_part1+xy_i_h_part2
xy_i2_v=xy_i_v_part1-xy_i_v_part2
# Calculate intersection points X-Z (first inner, then outer)
xz_i_h_part1=(x_h+z_h)/2+((z_h-x_h)*(sq(x_r)-sq(z_r)))/(2*sq(xz_d))
xz_i_v_part1=(x_v+z_v)/2+((z_v-x_v)*(sq(x_r)-sq(z_r)))/(2*sq(xz_d))
xz_i_h_part2=2*((x_v-z_v)/sq(xz_d))*sqr((xz_d+x_r+z_r)*(xz_d+x_r-z_r)*(xz_d-x_r+z_r)*(-xz_d+x_r+z_r))/4
xz_i_v_part2=2*((x_h-z_h)/sq(xz_d))*sqr((xz_d+x_r+z_r)*(xz_d+x_r-z_r)*(xz_d-x_r+z_r)*(-xz_d+x_r+z_r))/4
xz_i1_h=xz_i_h_part1+xz_i_h_part2
xz_i1_v=xz_i_v_part1-xz_i_v_part2
xz_i2_h=xz_i_h_part1-xz_i_h_part2
xz_i2_v=xz_i_v_part1+xz_i_v_part2
# Calculate intersection points Y-Z (first inner, then outer)
yz_i_h_part1=(y_h+z_h)/2+((z_h-y_h)*(sq(y_r)-sq(z_r)))/(2*sq(yz_d))
yz_i_v_part1=(y_v+z_v)/2+((z_v-y_v)*(sq(y_r)-sq(z_r)))/(2*sq(yz_d))
yz_i_h_part2=2*((y_v-z_v)/sq(yz_d))*sqr((yz_d+y_r+z_r)*(yz_d+y_r-z_r)*(yz_d-y_r+z_r)*(-yz_d+y_r+z_r))/4
yz_i_v_part2=2*((y_h-z_h)/sq(yz_d))*sqr((yz_d+y_r+z_r)*(yz_d+y_r-z_r)*(yz_d-y_r+z_r)*(-yz_d+y_r+z_r))/4
yz_i1_h=yz_i_h_part1-yz_i_h_part2
yz_i1_v=yz_i_v_part1+yz_i_v_part2
yz_i2_h=yz_i_h_part1+yz_i_h_part2
yz_i2_v=yz_i_v_part1-yz_i_v_part2
# Number fill point calculation of XYZ
if(x&&y&&z)
{
xyz_f_h=(xy_i1_h+xz_i1_h+yz_i1_h)/3
xyz_f_v=(xy_i1_v+xz_i1_v+yz_i1_v)/3
}
# Number fill point calculation of X-only
# For XYZ diagrams
if(x&&y&&z&&xy&&xz)
{
xyz_yz_i1=sqr(sq(xyz_f_h-yz_i1_h)+sq(xyz_f_v-yz_i1_v))
x_ratio_h=(xyz_f_h-yz_i1_h)/xyz_yz_i1
x_ratio_v=(xyz_f_v-yz_i1_v)/xyz_yz_i1
x_out_h=x_h-x_r*x_ratio_h
x_out_v=x_v-x_r*x_ratio_v
x_f_h=(x_out_h+yz_i1_h)/2
x_f_v=(x_out_v+yz_i1_v)/2
}
# For XY diagrams or XYZ diagrams without XZ overlap
else if(x&&y&&!z||x&&y&&z&&!xz)
{
xy_f_h=(xy_i1_h+xy_i2_h)/2
xy_f_v=(xy_i1_v+xy_i2_v)/2
x_in_h=y_h+cos(y_a)*y_r
x_in_v=y_v-sin(y_a)*y_r
x_out_h=x_h+cos(y_a)*x_r
x_out_v=x_v-sin(y_a)*x_r
x_f_h=(x_out_h+x_in_h)/2
x_f_v=(x_out_v+x_in_v)/2
}
# For XZ diagrams or XYZ diagrams without XY overlap
else if(x&&!y&&z||x&&y&&z&&!xy)
{
xz_f_h=(xz_i1_h+xz_i2_h)/2
xz_f_v=(xz_i1_v+xz_i2_v)/2
x_in_h=z_h-cos(z_a)*z_r
x_in_v=z_v-sin(z_a)*z_r
x_out_h=x_h-cos(z_a)*x_r
x_out_v=x_v-sin(z_a)*x_r
x_f_h=(x_out_h+x_in_h)/2
x_f_v=(x_out_v+x_in_v)/2
}
# Number fill point calculation of Y-only
# For XYZ diagrams
if(x&&y&&z&&xy&&yz)
{
xyz_xz_i1=sqr(sq(xyz_f_h-xz_i1_h)+sq(xyz_f_v-xz_i1_v))
y_ratio_h=(xyz_f_h-xz_i1_h)/xyz_xz_i1
y_ratio_v=(xyz_f_v-xz_i1_v)/xyz_xz_i1
y_out_h=y_h-y_r*y_ratio_h
y_out_v=y_v-y_r*y_ratio_v
y_f_h=(y_out_h+xz_i1_h)/2
y_f_v=(y_out_v+xz_i1_v)/2
}
# For XY diagrams or XYZ diagrams without YZ overlap
else if(x&&y&&!z||x&&y&&z&&!yz)
{
xy_f_h=(xy_i1_h+xy_i2_h)/2
xy_f_v=(xy_i1_v+xy_i2_v)/2
y_in_h=x_h-cos(y_a)*x_r
y_in_v=x_v+sin(y_a)*x_r
y_out_h=y_h-cos(y_a)*y_r
y_out_v=y_v+sin(y_a)*y_r
y_f_h=(y_out_h+y_in_h)/2
y_f_v=(y_out_v+y_in_v)/2
}
# For YZ diagrams or XYZ diagrams without XY overlap
else if(!x&&y&&z||x&&y&&z&&!xy)
{
yz_f_h=(yz_i1_h+yz_i2_h)/2
yz_f_v=(yz_i1_v+yz_i2_v)/2
y_in_h=z_h-z_r
y_in_v=z_v
y_out_h=y_h-y_r
y_out_v=y_v
y_f_h=(y_out_h+y_in_h)/2
y_f_v=(y_out_v+y_in_v)/2
}
# Number fill point calculation of Z-only
# For XYZ diagrams
if(x&&y&&z&&xz&&yz)
{
xyz_xy_i1=sqr(sq(xyz_f_h-xy_i1_h)+sq(xyz_f_v-xy_i1_v))
z_ratio_h=(xyz_f_h-xy_i1_h)/xyz_xy_i1
z_ratio_v=(xyz_f_v-xy_i1_v)/xyz_xy_i1
z_out_h=z_h-z_r*z_ratio_h
z_out_v=z_v-z_r*z_ratio_v
z_f_h=(z_out_h+xy_i1_h)/2
z_f_v=(z_out_v+xy_i1_v)/2
}
# For XZ diagrams or XYZ diagrams without YZ overlap
else if(x&&!y&&z||x&&y&&z&&!yz)
{
xz_f_h=(xz_i1_h+xz_i2_h)/2
xz_f_v=(xz_i1_v+xz_i2_v)/2
z_in_h=x_h+cos(z_a)*x_r
z_in_v=x_v+sin(z_a)*x_r
z_out_h=z_h+cos(z_a)*z_r
z_out_v=z_v+sin(z_a)*z_r
z_f_h=(z_out_h+z_in_h)/2
z_f_v=(z_out_v+z_in_v)/2
}
# For YZ diagrams or XYZ diagrams without XZ overlap
else if(!x&&y&&z||x&&y&&z&&!xz)
{
yz_f_h=(yz_i1_h+yz_i2_h)/2
yz_f_v=(yz_i1_v+yz_i2_v)/2
z_in_h=y_h+z_r
z_in_v=y_v
z_out_h=z_h+y_r
z_out_v=z_v
z_f_h=(z_out_h+z_in_h)/2
z_f_v=(z_out_v+z_in_v)/2
}
# Number fill point calculation of XY-only
if(x&&y&&z)
{
dh=(xyz_f_h-z_h)-(xy_i2_h-z_h)
dv=(xyz_f_v-z_v)-(xy_i2_v-z_v)
dr=sqr(sq(dh)+sq(dv))
D=(xy_i2_h-z_h)*(xyz_f_v-z_v)-(xyz_f_h-z_h)*(xy_i2_v-z_v)
z_in_h=z_h+(D*dv-dh*sqr(sq(z_r)*sq(dr)-sq(D)))/sq(dr)
z_in_v=z_v+(-D*dh-abs(dv)*sqr(sq(z_r)*sq(dr)-sq(D)))/sq(dr)
xy_f_h=(z_in_h+xy_i2_h)/2
xy_f_v=(z_in_v+xy_i2_v)/2
}
# Number fill point calculation of XZ-only
if(x&&y&&z)
{
dh=(xyz_f_h-y_h)-(xz_i2_h-y_h)
dv=(xyz_f_v-y_v)-(xz_i2_v-y_v)
dr=sqr(sq(dh)+sq(dv))
D=(xz_i2_h-y_h)*(xyz_f_v-y_v)-(xyz_f_h-y_h)*(xz_i2_v-y_v)
y_in_h=y_h+(D*dv-dh*sqr(sq(y_r)*sq(dr)-sq(D)))/sq(dr)
y_in_v=y_v+(-D*dh-abs(dv)*sqr(sq(y_r)*sq(dr)-sq(D)))/sq(dr)
xz_f_h=(y_in_h+xz_i2_h)/2
xz_f_v=(y_in_v+xz_i2_v)/2
}
# Number fill point calculation of YZ-only
if(x&&y&&z)
{
dh=(xyz_f_h-x_h)-(yz_i2_h-x_h)
dv=(xyz_f_v-x_v)-(yz_i2_v-x_v)
dr=sqr(sq(dh)+sq(dv))
D=(yz_i2_h-x_h)*(xyz_f_v-x_v)-(xyz_f_h-x_h)*(yz_i2_v-x_v)
x_in_h=x_h+(D*dv-dh*sqr(sq(x_r)*sq(dr)-sq(D)))/sq(dr)
x_in_v=x_v+(-D*dh+abs(dv)*sqr(sq(x_r)*sq(dr)-sq(D)))/sq(dr)
yz_f_h=(x_in_h+yz_i2_h)/2
yz_f_v=(x_in_v+yz_i2_v)/2
}
# Number fill point calculation for horizontally plotted diagrams
if(xy_d==xz_d+yz_d||xz_d==xy_d+yz_d||yz_d==xy_d+xz_d)
{
# No X-only and no Y-only
if(x&&!x_only&&y&&!y_only)
{
xz_f_v=yz_f_v=xyz_f_v=x_v
xz_f_h=(max(y_h+y_r,x_h-x_r)+(x_h+x_r))/2
yz_f_h=((y_h-y_r)+min(y_h+y_r,x_h-x_r))/2
#z_f_h, z_f_v stay the same
}
# No X-only and no Z-only
else if(x&&!x_only&&z&&!z_only)
{
xy_f_v=yz_f_v=xyz_f_v=x_v
xy_f_h=((x_h-x_r)+min(x_h+x_r,z_h-z_r))/2
yz_f_h=(max(x_h+x_r,z_h-z_r)+(z_h+z_r))/2
#y_f_h, y_f_v stay the same
}
# No Y-only and no Z-only
else if(y&&!y_only&&z&&!z_only)
{
yz_f_v=xz_f_v=xyz_f_v=x_v
yz_f_h=(max(x_h+x_r,y_h-y_r)+(y_h+y_r))/2
xz_f_h=(max(y_h+y_r,z_h-z_r)+(z_h+z_r))/2
#x_f_h, x_f_v stay the same
}
# No X-only
else if(x&&!x_only)
{
yz_f_v=xyz_f_v=x_v
# X is subset of Y
if(!xz_only)
{
z_f_h=(max(x_h+x_r,y_h+y_r)+(z_h+z_r))/2
z_f_v=x_v
xy_f_h=(max(y_h-y_r,x_h-x_r)+(z_h-z_r))/2
xy_f_v=x_v
yz_f_h=(max(x_h+x_r,z_h-z_r)+(y_h+y_r))/2
#y_f_h, y_f_v stay the same
}
# X is subset of Z
else if(!xy_only)
{
y_f_h=((y_h-y_r)+min(z_h-z_r,x_h-x_r))/2
y_f_v=x_v
xz_f_h=(max(y_h+y_r,x_h-x_r)+(x_h+x_r))/2
xz_f_v=x_v
yz_f_h=((z_h-z_r)+min(y_h+y_r,x_h-x_r))/2
#z_f_h, z_f_v stay the same
}
}
# No Y-only
else if(y&&!y_only)
{
xz_f_v=xyz_f_v=x_v
# Y is subset of X
if(!yz_only)
{
z_f_h=(max(y_h+y_r,x_h+x_r)+(z_h+z_r))/2
z_f_v=x_v
xy_f_h=((y_h-y_r)+min(y_h+y_r,z_h-z_r))/2
xy_f_v=x_v
xz_f_h=(max(y_h+y_r,z_h-z_r)+(x_h+x_r))/2
#x_f_h, x_f_v stay the same
}
# Y is subset of Z
else if(!xy_only)
{
x_f_h=(max(y_h+y_r,z_h+z_r)+(x_h+x_r))/2
x_f_v=x_v
yz_f_h=((y_h-y_r)+max(y_h+y_r,x_h-x_r))/2
yz_f_v=x_v
xz_f_h=(max(y_h+y_r,x_h-x_r)+(z_h+z_r))/2
#z_f_h, z_f_v stay the same
}
}
# No Z-only
else if(z&&!z_only)
{
xy_f_v=xyz_f_v=x_v
# Z is subset of X
if(!yz_only)
{
y_f_h=((y_h-y_r)+min(x_h-x_r,z_h-z_r))/2
y_f_v=x_v
xz_f_h=(max(y_h+y_r,z_h-z_r)+(z_h+z_r))/2
xz_f_v=x_v
xy_f_h=((x_h-x_r)+min(y_h+y_r,z_h-z_r))/2
#x_f_h, x_f_v stay the same
}
# Z is subset of Y
else if(!xz_only)
{
x_f_h=((x_h-x_r)+min(y_h-y_r,z_h-z_r))/2
x_f_v=x_v
yz_f_h=(max(x_h+x_r,z_h-z_r)+(z_h+z_r))/2
yz_f_v=x_v
xy_f_h=((y_h-y_r)+min(x_h+x_r,z_h-z_r))/2
#y_f_h, y_f_v stay the same
}
}
xyz_f_h=(max(x_h-x_r,y_h-y_r,z_h-z_r)+min(x_h+x_r,y_h+y_r,z_h+z_r))/2
}
# Output to file or screen
if(output=="bmp")
{
if(is.null(filename))
{
filename="biovenn.bmp"
}
grDevices::bmp(filename,width=width,height=height,units="px")
}
else if(output=="jpg")
{
if(is.null(filename))
{
filename="biovenn.jpg"
}
grDevices::jpeg(filename,width=width,height=height,units="px")
}
else if(output=="pdf")
{
if(is.null(filename))
{
filename="biovenn.pdf"
}
grDevices::pdf(filename,width=width/100,height=height/100)
}
else if(output=="png")
{
if(is.null(filename))
{
filename="biovenn.png"
}
grDevices::png(filename,width=width,height=height,units="px")
}
else if(output=="svg")
{
if(is.null(filename))
{
filename="biovenn.svg"
}
svglite::svglite("biovenn_temp.svg",width=width/100,height=height/100)
}
else if(output=="tif")
{
if(is.null(filename))
{
filename="biovenn.tif"
}
grDevices::tiff(filename,width=width,height=height,units="px")
}
# Draw circles
opar<-graphics::par(no.readonly=TRUE)
on.exit(graphics::par(opar))
graphics::par(pty="s",bg=bg_c)
graphics::plot(0,type="n",axes=FALSE,xlim=c(0,width_p),ylim=c(height_p,0),xlab="",ylab="",xaxt="none",yaxt="none")
graphics::par(family=t_f)
graphics::title(main=title,line=1,font.main=t_fb,cex.main=t_s,col.main=t_c)
graphics::par(family=st_f)
graphics::title(sub=subtitle,line=1,font.sub=st_fb,cex.sub=st_s,col.sub=st_c)
plotrix::draw.circle(ppu*x_h,ppu*x_v,ppu*x_r,lty=0,col=grDevices::rgb(grDevices::col2rgb(x_c)[,1][1],grDevices::col2rgb(x_c)[,1][2],grDevices::col2rgb(x_c)[,1][3],maxColorValue=255,alpha=128))
plotrix::draw.circle(ppu*y_h,ppu*y_v,ppu*y_r,lty=0,col=grDevices::rgb(grDevices::col2rgb(y_c)[,1][1],grDevices::col2rgb(y_c)[,1][2],grDevices::col2rgb(y_c)[,1][3],maxColorValue=255,alpha=128))
plotrix::draw.circle(ppu*z_h,ppu*z_v,ppu*z_r,lty=0,col=grDevices::rgb(grDevices::col2rgb(z_c)[,1][1],grDevices::col2rgb(z_c)[,1][2],grDevices::col2rgb(z_c)[,1][3],maxColorValue=255,alpha=128))
# Print numbers
if(length(nrtype)>0)
{
if(nrtype=="abs")
{
if(x_only)
{
graphics::text(ppu*x_f_h,ppu*x_f_v,x_only,adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(y_only)
{
graphics::text(ppu*y_f_h,ppu*y_f_v,y_only,adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(z_only)
{
graphics::text(ppu*z_f_h,ppu*z_f_v,z_only,adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(xy_only)
{
graphics::text(ppu*xy_f_h,ppu*xy_f_v,xy_only,adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(xz_only)
{
graphics::text(ppu*xz_f_h,ppu*xz_f_v,xz_only,adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(yz_only)
{
graphics::text(ppu*yz_f_h,ppu*yz_f_v,yz_only,adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(xyz)
{
graphics::text(ppu*xyz_f_h,ppu*xyz_f_v,xyz_text,adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
}
else if(nrtype=="pct")
{
if(x_only)
{
graphics::text(ppu*x_f_h,ppu*x_f_v,paste0(round(x_only/total_text*100,2),"%"),adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(y_only)
{
graphics::text(ppu*y_f_h,ppu*y_f_v,paste0(round(y_only/total_text*100,2),"%"),adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(z_only)
{
graphics::text(ppu*z_f_h,ppu*z_f_v,paste0(round(z_only/total_text*100,2),"%"),adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(xy_only)
{
graphics::text(ppu*xy_f_h,ppu*xy_f_v,paste0(round(xy_only/total_text*100,2),"%"),adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(xz_only)
{
graphics::text(ppu*xz_f_h,ppu*xz_f_v,paste0(round(xz_only/total_text*100,2),"%"),adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(yz_only)
{
graphics::text(ppu*yz_f_h,ppu*yz_f_v,paste0(round(yz_only/total_text*100,2),"%"),adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
if(xyz)
{
graphics::text(ppu*xyz_f_h,ppu*xyz_f_v,paste0(round(xyz_text/total_text*100,2),"%"),adj=c(0.5,0.5),col=nr_c,family=nr_f,font=nr_fb,cex=nr_s)
}
}
}
# Print texts
if(x)
{
graphics::text(ppu*x_h,ppu*x_v,xtitle,adj=c(0.5,0.5),col=xt_c,family=xt_f,font=xt_fb,cex=xt_s)
}
if(y)
{
graphics::text(ppu*y_h,ppu*y_v,ytitle,adj=c(0.5,0.5),col=yt_c,family=yt_f,font=yt_fb,cex=yt_s)
}
if(z)
{
graphics::text(ppu*z_h,ppu*z_v,ztitle,adj=c(0.5,0.5),col=zt_c,family=zt_f,font=zt_fb,cex=zt_s)
}
# Write to file
if(output %in% c("bmp","jpg","pdf","png","svg","tif"))
{
grDevices::dev.off()
}
# Create drag-and-drop functionality for SVG file
if(output=="svg")
{
svg_temp <- file("biovenn_temp.svg", "r")
svg <- file(filename, "w")
id=1
while (length(oneLine <- readLines(svg_temp, n=1, warn=FALSE)) > 0) {
if(substr(oneLine,1,4)=="<svg")
{
oneLine=sub("viewBox='0 0 (\\d*\\.\\d*) (\\d*\\.\\d*)'","viewBox='0 0 \\1 \\2' height='\\1' width='\\2'",oneLine)
}
if(substr(oneLine,1,5)=="<rect")
{
oneLine=sub("<rect","<script>
<![CDATA[
var Root=document.documentElement
standardize(Root)
function standardize(R){
var Attr={
'onmouseup':'add(evt)',
'onmousedown':'grab(evt)',
'onmousemove':null
}
assignAttr(R,Attr)
}
function grab(evt){
var O=evt.target
var Attr={
'onmousemove':'slide(evt,\"'+O.id+'\")',
'onmouseup':'standardize(Root)'
}
assignAttr(Root,Attr)
}
function slide(evt,id){
if(id!='rect'&&id!='polygon'){
var o=document.getElementById(id)
o.setAttributeNS(null, 'x', evt.clientX)
o.setAttributeNS(null, 'y', evt.clientY)
}
}
function assignAttr(O,A){
for (i in A) O.setAttributeNS(null,i, A[i])
}
]]>
</script>
<rect id='rect'",oneLine)
}
else if(substr(oneLine,1,5)=="<text")
{
oneLine=paste0(substr(oneLine,1,5)," id='t",id,"'",substr(oneLine,6,nchar(oneLine)))
oneLine=sub("style='","style='cursor:move;",oneLine)
id=id+1
}
else if(substr(oneLine,61,65)=="<text")
{
oneLine=paste0(substr(oneLine,1,65)," id='t",id,"'",substr(oneLine,66,nchar(oneLine)))
oneLine=sub("style='","style='cursor:move;",oneLine)
id=id+1
}
else if(substr(oneLine,1,8)=="<polygon")
{
oneLine=paste0(substr(oneLine,1,8)," id='polygon'",substr(oneLine,9,nchar(oneLine)))
}
write(oneLine,svg)
}
close(svg_temp)
file.remove("biovenn_temp.svg")
close(svg)
}
# Return lists
return(list("x"=list_x,"y"=list_y,"z"=list_z,"x_only"=list_x_only,"y_only"=list_y_only,"z_only"=list_z_only,"xy"=list_xy,"xz"=list_xz,"yz"=list_yz,"xy_only"=list_xy_only,"xz_only"=list_xz_only,"yz_only"=list_yz_only,"xyz"=list_xyz))
}
|
/scratch/gouwar.j/cran-all/cranData/BioVenn/R/draw.venn.R
|
## ---- echo = FALSE, message = FALSE-------------------------------------------
library("BioVenn")
## -----------------------------------------------------------------------------
list_x <- c("1007_s_at","1053_at","117_at","121_at","1255_g_at","1294_at")
list_y <- c("1255_g_at","1294_at","1316_at","1320_at","1405_i_at")
list_z <- c("1007_s_at","1405_i_at","1255_g_at","1431_at","1438_at","1487_at","1494_f_at")
## ---- fig.dim = c(10, 10), out.width="100%"-----------------------------------
biovenn <- draw.venn(list_x, list_y, list_z, subtitle="Example diagram 1", nrtype="abs")
## -----------------------------------------------------------------------------
biovenn
## -----------------------------------------------------------------------------
list_x <- NULL
list_y <- c("ENSG00000070778","ENSG00000271503","ENSG00000126351","ENSG00000182179","ENSG00000283726","ENSG00000048545","ENSG00000287363","ENSG00000274233")
list_z <- c("ENSG00000130649","ENSG00000173153","ENSG00000215572","ENSG00000271503","ENSG00000255974","ENSG00000198077","ENSG00000182580","ENSG00000204580","ENSG00000048545","ENSG00000287363","ENSG00000274233","ENSG00000137332","ENSG00000230456","ENSG00000234078","ENSG00000215522")
## ---- fig.dim = c(10, 10), out.width="100%"-----------------------------------
biovenn <- draw.venn(list_x, list_y, list_z, subtitle="Example diagram 2", nrtype="pct")
## -----------------------------------------------------------------------------
biovenn
## -----------------------------------------------------------------------------
list_x <- c("1007_s_at","1053_at","117_at","121_at","1255_g_at","1294_at")
list_y <- c("1255_g_at","1294_at","1316_at","1320_at","1405_i_at")
list_z <- c("1007_s_at","1405_i_at","1255_g_at","1431_at","1438_at","1487_at","1494_f_at")
## ---- fig.dim = c(10, 10), out.width="100%"-----------------------------------
biovenn <- draw.venn(list_x, list_y, list_z, t_c="#FFFFFF", subtitle="Example diagram 3", st_c="#FFFFFF", xt_c="#FFFFFF", yt_c="#FFFFFF", zt_c="#FFFFFF", nrtype="abs", nr_c="#FFFFFF", x_c="#FFFF00", y_c="#FF00FF", z_c="#00FFFF", bg_c="#000000")
## -----------------------------------------------------------------------------
biovenn
|
/scratch/gouwar.j/cran-all/cranData/BioVenn/inst/doc/BioVenn.R
|
---
title: "BioVenn Tutorial"
output: rmarkdown::html_vignette
description: >
Start here if this is your first time using BioVenn. You'll learn how to
create area-proportional Venn diagrams from two or three circles.
vignette: >
%\VignetteIndexEntry{BioVenn Tutorial}
%\VignetteEngine{knitr::rmarkdown}
\usepackage[utf8]{inputenc}
---
```{r, echo = FALSE, message = FALSE}
library("BioVenn")
```
## Example diagram 1: 3-circle diagram with absolute numbers
Create three lists of Affymetrix IDs.
```{r}
list_x <- c("1007_s_at","1053_at","117_at","121_at","1255_g_at","1294_at")
list_y <- c("1255_g_at","1294_at","1316_at","1320_at","1405_i_at")
list_z <- c("1007_s_at","1405_i_at","1255_g_at","1431_at","1438_at","1487_at","1494_f_at")
```
Create the BioVenn diagram, using the three lists as input. The subtitle is set to "Example diagram 1", and absolute numbers will be displayed.
The function prints the resulting numbers.
```{r, fig.dim = c(10, 10), out.width="100%"}
biovenn <- draw.venn(list_x, list_y, list_z, subtitle="Example diagram 1", nrtype="abs")
```
The returned object contains the thirteen lists (the sets and their overlaps).
```{r}
biovenn
```
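The individual sets and overlaps can also be accessed by name; for example, the IDs found only in the overlap of X and Y, and the IDs shared by all three sets:
```{r}
biovenn$xy_only
biovenn$xyz
```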
## Example diagram 2: 2-circle diagram with percentages
Create two lists of Ensembl IDs.
```{r}
list_x <- NULL
list_y <- c("ENSG00000070778","ENSG00000271503","ENSG00000126351","ENSG00000182179","ENSG00000283726","ENSG00000048545","ENSG00000287363","ENSG00000274233")
list_z <- c("ENSG00000130649","ENSG00000173153","ENSG00000215572","ENSG00000271503","ENSG00000255974","ENSG00000198077","ENSG00000182580","ENSG00000204580","ENSG00000048545","ENSG00000287363","ENSG00000274233","ENSG00000137332","ENSG00000230456","ENSG00000234078","ENSG00000215522")
```
Create the BioVenn diagram, using the two lists as input. The subtitle is set to "Example diagram 2", and percentages will be displayed.
The function prints the resulting numbers.
```{r, fig.dim = c(10, 10), out.width="100%"}
biovenn <- draw.venn(list_x, list_y, list_z, subtitle="Example diagram 2", nrtype="pct")
```
The returned object contains the thirteen lists (the sets and their overlaps).
```{r}
biovenn
```
## Example diagram 3: 3-circle diagram with altered colours
Create three lists of Affymetrix IDs.
```{r}
list_x <- c("1007_s_at","1053_at","117_at","121_at","1255_g_at","1294_at")
list_y <- c("1255_g_at","1294_at","1316_at","1320_at","1405_i_at")
list_z <- c("1007_s_at","1405_i_at","1255_g_at","1431_at","1438_at","1487_at","1494_f_at")
```
Create the BioVenn diagram, using the three lists as input. The subtitle is set to "Example diagram 3", and absolute numbers will be displayed.
The background colour will be black, with different circle colours and white text. The function prints the resulting numbers.
```{r, fig.dim = c(10, 10), out.width="100%"}
biovenn <- draw.venn(list_x, list_y, list_z, t_c="#FFFFFF", subtitle="Example diagram 3", st_c="#FFFFFF", xt_c="#FFFFFF", yt_c="#FFFFFF", zt_c="#FFFFFF", nrtype="abs", nr_c="#FFFFFF", x_c="#FFFF00", y_c="#FF00FF", z_c="#00FFFF", bg_c="#000000")
```
The returned object contains the thirteen lists (the sets and their overlaps).
```{r}
biovenn
```
|
/scratch/gouwar.j/cran-all/cranData/BioVenn/inst/doc/BioVenn.Rmd
|
---
title: "BioVenn Tutorial"
output: rmarkdown::html_vignette
description: >
Start here if this is your first time using BioVenn. You'll learn how to
create area-proportional Venn diagrams from two or three circles.
vignette: >
%\VignetteIndexEntry{BioVenn Tutorial}
%\VignetteEngine{knitr::rmarkdown}
\usepackage[utf8]{inputenc}
---
```{r, echo = FALSE, message = FALSE}
library("BioVenn")
```
## Example diagram 1: 3-circle diagram with absolute numbers
Create three lists of Affymetrix IDs.
```{r}
list_x <- c("1007_s_at","1053_at","117_at","121_at","1255_g_at","1294_at")
list_y <- c("1255_g_at","1294_at","1316_at","1320_at","1405_i_at")
list_z <- c("1007_s_at","1405_i_at","1255_g_at","1431_at","1438_at","1487_at","1494_f_at")
```
Create the BioVenn diagram, using the three lists as input. The subtitle is set to "Example diagram 1", and absolute numbers will be displayed.
The function prints the resulting numbers.
```{r, fig.dim = c(10, 10), out.width="100%"}
biovenn <- draw.venn(list_x, list_y, list_z, subtitle="Example diagram 1", nrtype="abs")
```
The returned object contains the thirteen lists (the sets and their overlaps).
```{r}
biovenn
```
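The individual sets and overlaps can also be accessed by name; for example, the IDs found only in the overlap of X and Y, and the IDs shared by all three sets:
```{r}
biovenn$xy_only
biovenn$xyz
```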
## Example diagram 2: 2-circle diagram with percentages
Create two lists of Ensembl IDs.
```{r}
list_x <- NULL
list_y <- c("ENSG00000070778","ENSG00000271503","ENSG00000126351","ENSG00000182179","ENSG00000283726","ENSG00000048545","ENSG00000287363","ENSG00000274233")
list_z <- c("ENSG00000130649","ENSG00000173153","ENSG00000215572","ENSG00000271503","ENSG00000255974","ENSG00000198077","ENSG00000182580","ENSG00000204580","ENSG00000048545","ENSG00000287363","ENSG00000274233","ENSG00000137332","ENSG00000230456","ENSG00000234078","ENSG00000215522")
```
Create the BioVenn diagram, using the two lists as input. The subtitle is set to "Example diagram 2", and percentages will be displayed.
The function prints the resulting numbers.
```{r, fig.dim = c(10, 10), out.width="100%"}
biovenn <- draw.venn(list_x, list_y, list_z, subtitle="Example diagram 2", nrtype="pct")
```
The returned object contains the thirteen lists (the sets and their overlaps).
```{r}
biovenn
```
## Example diagram 3: 3-circle diagram with altered colours
Create three lists of Affymetrix IDs.
```{r}
list_x <- c("1007_s_at","1053_at","117_at","121_at","1255_g_at","1294_at")
list_y <- c("1255_g_at","1294_at","1316_at","1320_at","1405_i_at")
list_z <- c("1007_s_at","1405_i_at","1255_g_at","1431_at","1438_at","1487_at","1494_f_at")
```
Create the BioVenn diagram, using the three lists as input. The subtitle is set to "Example diagram 3", and absolute numbers will be displayed.
The background colour will be black, with different circle colours and white text. The function prints the resulting numbers.
```{r, fig.dim = c(10, 10), out.width="100%"}
biovenn <- draw.venn(list_x, list_y, list_z, t_c="#FFFFFF", subtitle="Example diagram 3", st_c="#FFFFFF", xt_c="#FFFFFF", yt_c="#FFFFFF", zt_c="#FFFFFF", nrtype="abs", nr_c="#FFFFFF", x_c="#FFFF00", y_c="#FF00FF", z_c="#00FFFF", bg_c="#000000")
```
The returned object contains the thirteen lists (the sets and their overlaps).
```{r}
biovenn
```
|
/scratch/gouwar.j/cran-all/cranData/BioVenn/vignettes/BioVenn.Rmd
|
#' @importFrom utils packageVersion contrib.url head
#' installed.packages sessionInfo tail
NULL
#' Install or update Bioconductor, CRAN, or GitHub packages
#'
#' This package provides tools for managing _Bioconductor_ and other
#' packages in a manner consistent with _Bioconductor_'s package
#' versioning and release system.
#'
#' @details
#'
#' Main functions are as follows; additional help is available for
#' each function, e.g., `?BiocManager::version`.
#'
#' - `BiocManager::install()`: Install or update packages from
#' _Bioconductor_, CRAN, and GitHub.
#'
#' - `BiocManager::version()`: Report the version of _Bioconductor_ in
#' use.
#'
#' - `BiocManager::available()`: Return a `character()` vector of
#' package names available (at `BiocManager::repositories()`) for
#' installation.
#'
#' - `BiocManager::valid()`: Determine whether installed packages are
#' from the same version of _Bioconductor_.
#'
#' - `BiocManager::repositories()`: _Bioconductor_ and other
#' repository URLs to discover packages for installation.
#'
#' The version of _Bioconductor_ in use is determined by the installed
#' version of a second package, BiocVersion. BiocVersion is installed
#' automatically during first use of `BiocManager::install()`. If
#' BiocVersion has not yet been installed, the version is determined
#' by code in base R.
#'
#' Options influencing package behavior (see `?options`, `?getOption`)
#' include:
#'
#' - `"repos"`, `"BiocManager.check_repositories"`: URLs of additional
#' repositories for use by `BiocManager::install()`. See `?repositories`.
#'
#' - `"pkgType"`: The default type of packages to be downloaded and
#' installed; see `?install.packages`.
#'
#' - `"timeout"`: The maximum time allowed for download of a single
#' package, in seconds. _BiocManager_ increases this to 300 seconds
#' to accommodate download of large BSgenome and other packages.
#'
#' System environment variables influencing package behavior include:
#'
#' - \env{BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS} advanced
#' configuration to avoid _Bioconductor_ version checks. See
#' `?install`.
#'
#' - \env{BIOCONDUCTOR_CONFIG_FILE} for offline use of BiocManager
#' versioning functionality. See `?install`.
#'
#' - \env{BIOCONDUCTOR_USE_CONTAINER_REPOSITORY} opt out of binary package
#' installations. See `?containerRepository`.
#'
#' - \env{BIOCMANAGER_CHECK_REPOSITORIES} silence messages regarding
#' non-standard CRAN or Bioconductor repositories. See `?repositories`.
#'
#' - \env{BIOCMANAGER_SITE_REPOSITORY} configure a more permanent
#' `site_repository` input to `repositories()`. See `?repositories`.
#'
#' @md
#' @name BiocManager-pkg
#' @aliases BiocManager
#' @docType package
#'
#' @examples
#' R.version.string
#' packageVersion("BiocManager")
#' if (requireNamespace("BiocVersion", quietly = TRUE))
#' packageVersion("BiocVersion")
#' BiocManager::version()
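#'
#' ## options influencing BiocManager behavior (see Details); these
#' ## return NULL unless explicitly set by the user:
#' getOption("BiocManager.check_repositories")
#' getOption("BiocManager.site_repository")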
"_PACKAGE"
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/BiocManager-pkg.R
|
#' Discover packages available for installation.
#'
#' The function lists all packages available from \code{repositories()} when
#' no pattern is provided. This usually includes CRAN and Bioconductor
#' packages. The function can also be used to check for package name
#' availability. Common use cases include annotation package lookups by
#' organism short name (e.g., "hsapiens").
#'
#' @param pattern character(1) pattern to filter (via
#' `grep(pattern=...)`) available packages; the filter is not case
#' sensitive.
#'
#' @param include_installed logical(1) When `TRUE`, include installed
#' packages in list of available packages; when `FALSE`, exclude
#' installed packages.
#'
#' @return `character()` vector of package names available for
#' installation.
#'
#' @examples
#' if (interactive()) {
#' avail <- BiocManager::available()
#' length(avail)
#'
#' BiocManager::available("bs.*hsapiens")
#' }
#' @md
#' @export
available <-
function(pattern = "", include_installed = TRUE)
{
stopifnot(
is.character(pattern), length(pattern) == 1L, !is.na(pattern),
is.logical(include_installed), length(include_installed) == 1L,
!is.na(include_installed)
)
answer <- character()
repos <- .repositories_filter(repositories())
if (length(repos))
answer <- rownames(.inet_available.packages(repos = repos))
answer <- sort(grep(pattern, answer, value = TRUE, ignore.case = TRUE))
if (!include_installed)
answer <- setdiff(answer, rownames(installed.packages()))
answer
}
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/available.R
|
#' @importFrom utils available.packages install.packages old.packages
#' update.packages
NULL
.inet_warning <-
function(w)
{
if (.is_CRAN_check()) {
.message(conditionMessage(w))
} else {
warning(w)
}
invokeRestart("muffleWarning")
}
.inet_error <-
function(e)
{
if (.is_CRAN_check()) {
.message(conditionMessage(e))
} else {
stop(e)
}
}
.inet_readChar <-
function(...)
{
withCallingHandlers({
tryCatch({
readChar(...)
}, error = function(e) {
.inet_error(e)
character()
})
}, warning = .inet_warning)
}
.inet_readLines <-
function(...)
{
withCallingHandlers({
tryCatch({
readLines(...)
}, error = function(e) {
.inet_error(e)
e
})
}, warning = .inet_warning)
}
.inet_available.packages <-
function(...)
{
withCallingHandlers({
tryCatch({
available.packages(...)
}, error = function(e) {
.inet_error(e)
colnames <- c(
"Package", "Version", "Priority", "Depends",
"Imports", "LinkingTo", "Suggests", "Enhances",
"License", "License_is_FOSS", "License_restricts_use",
"OS_type", "Archs", "MD5sum", "NeedsCompilation",
"File", "Repository"
)
matrix(character(0), ncol = 17, dimnames = list(NULL, colnames))
})
}, warning = .inet_warning)
}
.inet_install.packages <-
function(...)
{
## More generous timeout for large package download, see
## `?download.file` and, for instance,
## https://stat.ethz.ch/pipermail/bioc-devel/2020-November/017448.html
if (identical(as.integer(getOption("timeout")), 60L)) { # change default only
otimeout <- options(timeout = 300L)
on.exit(options(otimeout))
}
withCallingHandlers({
tryCatch({
install.packages(...)
}, error = function(e) {
.inet_error(e)
invisible(NULL)
})
}, warning = function(w) {
msg <- conditionMessage(w)
if (grepl("not available", msg)) {
msg <- gsub(
"this version of R",
paste0("Bioconductor version ", "'", version(), "'"),
msg
)
w <- simpleWarning(msg, conditionCall(w))
}
.inet_warning(w)
})
}
.inet_old.packages <-
function(...)
{
withCallingHandlers({
tryCatch({
old.packages(...)
}, error = function(e) {
.inet_error(e)
invisible(NULL)
})
}, warning = .inet_warning)
}
.inet_update.packages <-
function(...)
{
## see .inet_install.packages for the timeout implementation note
if (identical(as.integer(getOption("timeout")), 60L)) {
otimeout <- options(timeout = 300L)
on.exit(options(otimeout))
}
withCallingHandlers({
tryCatch({
update.packages(...)
}, error = function(e) {
.inet_error(e)
invisible(NULL)
})
}, warning = .inet_warning)
}
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/inet.R
|
.package_filter_masked <-
function(pkgs)
{
path0 <- normalizePath(pkgs[, "LibPath"], winslash="/")
path1 <- normalizePath(.libPaths(), winslash="/")
idx <- order(match(path0, path1))
dup <- duplicated(pkgs[idx,"Package"])[order(idx)]
pkgs[!dup,, drop=FALSE]
}
.package_filter_unwriteable <-
function(pkgs, instlib=NULL)
{
if (!nrow(pkgs))
return(pkgs)
libs <-
if (is.null(instlib)) {
pkgs[,"LibPath"]
} else instlib
ulibs <- unique(libs)
status <- dir.exists(ulibs)
if (.Platform$OS.type == "windows") {
status[status] <- vapply(ulibs[status], function(lib) {
## from tools::install.R: file.access() unreliable on
## Windows
fn <- file.path(lib, paste0("_test_dir", Sys.getpid()))
unlink(fn, recursive = TRUE) # precaution
res <- try(dir.create(fn, showWarnings = FALSE))
if (inherits(res, "try-error") || !res) {
FALSE
} else {
unlink(fn, recursive = TRUE)
TRUE
}
}, logical(1))
} else
status[status] <- file.access(ulibs[status], 2L) == 0
status <- status[match(libs, ulibs)]
if (!all(status)) {
failed_pkgs <- pkgs[!status, "Package"]
failed_lib <- pkgs[!status, "LibPath"]
failed <- split(failed_pkgs, failed_lib)
detail <- paste(
mapply(function(lib, pkg) {
paste0(
" path: ", lib, "\n",
" packages:\n",
.msg(paste(pkg, collapse = ", "), indent = 4, exdent = 4)
)
}, names(failed), unname(failed), USE.NAMES = FALSE),
collapse = "\n"
)
message(
.msg("Installation paths not writeable, unable to update packages"),
"\n",
detail
)
}
pkgs[status,, drop=FALSE]
}
.install_filter_r_repos <-
function(pkgs, invert = FALSE)
{
grep("^(https?://.*|[^/]+)$", pkgs, invert = invert, value=TRUE)
}
.install_filter_up_to_date <-
function(pkgs, instPkgs, old_pkgs, force)
{
if (!force) {
noInst <- !pkgs %in% rownames(old_pkgs) & pkgs %in% rownames(instPkgs)
if (any(noInst))
.warning(
paste(
"package(s) not installed when version(s) same as or",
"greater than current; use `force = TRUE` to re-install: ",
"\n'%s'"
),
paste(pkgs[noInst], collapse = "' '")
)
pkgs <- pkgs[!noInst]
}
pkgs
}
.install_filter_github_repos <-
function(pkgs)
{
pkgs <- .install_filter_r_repos(pkgs, invert = TRUE)
grep("^[^/]+/.+", pkgs, value=TRUE)
}
.install_github_load_remotes <-
function(pkgs, lib.loc = NULL)
{
if (!"remotes" %in% rownames(installed.packages(lib.loc))) {
if (is.null(lib.loc))
lib.loc <- .libPaths()
.stop(
"%s\n %s\n%s",
"package 'remotes' not installed in library path(s)",
paste(lib.loc, collapse="\n "),
"install with 'BiocManager::install(\"remotes\")'",
call. = FALSE,
wrap. = FALSE
)
}
tryCatch({
loadNamespace("remotes", lib.loc)
}, error=function(e) {
.stop(
"'loadNamespace(\"remotes\")' failed:\n %s",
conditionMessage(e),
call. = FALSE,
wrap. = FALSE
)
})
TRUE
}
.install_repos <-
function(pkgs, old_pkgs, instPkgs, lib, repos, force, ...)
{
doing <- .install_filter_up_to_date(
pkgs = pkgs, instPkgs = instPkgs, old_pkgs = old_pkgs, force = force
)
up_to_date <- setdiff(pkgs, doing)
doing <- .install_filter_r_repos(doing)
if (length(doing)) {
pkgNames <- paste(.sQuote(doing), collapse=", ")
.message("Installing package(s) %s", pkgNames)
.inet_install.packages(pkgs = doing, lib = lib, repos = repos, ...)
}
setdiff(pkgs, c(doing, up_to_date))
}
.install_github <-
function(pkgs, lib, lib.loc, repos, update, ask, force, ...)
{
doing <- .install_filter_github_repos(pkgs)
ask <- if (!update) "never" else if (update && !ask) "always" else "default"
oopts <- options(repos = repos) # required by remotes::
on.exit(options(oopts))
if (length(doing)) {
pkgNames <- paste(.sQuote(doing), collapse=", ")
.message("Installing github package(s) %s", pkgNames)
.install_github_load_remotes(pkgs, lib.loc = lib.loc)
for (repo in doing)
remotes::install_github(
repo, lib = lib, upgrade = ask, force = force, ...
)
}
setdiff(pkgs, doing)
}
.install_validate_dots <-
function(..., repos)
{
if (!missing(repos))
.stop("'repos' argument to 'install()' not allowed")
args <- list(...)
nms <- sum(nzchar(names(args)))
if (nms != length(args))
.stop("all '...' arguments to 'install()' must be named")
TRUE
}
.install_n_invalid_pkgs <- function(valid) {
if (isTRUE(valid))
0L
else
sum(nrow(valid$too_new), nrow(valid$out_of_date))
}
.install_ask_up_or_down_grade <-
function(version, npkgs, cmp, ask)
{
action <- if (cmp < 0) "Downgrade" else "Upgrade"
txt <- sprintf("%s %d packages to Bioconductor version '%s'? [y/n]: ",
action, npkgs, version)
!ask || .getAnswer(txt, allowed = c("y", "Y", "n", "N")) == "y"
}
.install <-
function(pkgs, old_pkgs, instPkgs, repos, lib.loc=NULL, lib=.libPaths()[1],
update, ask, force, ...)
{
requireNamespace("utils", quietly=TRUE) ||
.stop("failed to load package 'utils'")
todo <- .install_repos(
pkgs, old_pkgs, instPkgs = instPkgs, lib = lib, repos = repos,
force = force, ...
)
todo <- .install_github(
todo, lib = lib, lib.loc = lib.loc, repos = repos,
update = update, ask = ask, force = force, ...
)
if (length(todo))
.warning(
"packages not installed (unknown repository)\n '%s'",
paste(.sQuote(todo), collapse = "' '")
)
setdiff(pkgs, todo)
}
.install_update <-
function(repos, ask, lib.loc = NULL, instlib = NULL, checkBuilt, ...)
{
old_pkgs <- .inet_old.packages(lib.loc, repos, checkBuilt = checkBuilt)
if (is.null(old_pkgs))
return()
old_pkgs <- .package_filter_masked(old_pkgs)
old_pkgs <- .package_filter_unwriteable(old_pkgs, instlib)
if (!nrow(old_pkgs))
return()
pkgs <- paste(old_pkgs[,"Package"], collapse="', '")
.message("Old packages: '%s'", pkgs)
if (ask) {
answer <- .getAnswer(
"Update all/some/none? [a/s/n]: ",
allowed = c("a", "A", "s", "S", "n", "N")
)
if (answer == "n")
return()
ask <- answer == "s"
}
.inet_update.packages(
lib.loc, repos, oldPkgs = old_pkgs, ask = ask, instlib = instlib, ...
)
}
.install_updated_version <-
function(valid, update, old_pkgs, instPkgs, repos, ask, force, ...)
{
if (isTRUE(valid))
return(valid)
else
pkgs <- c(rownames(valid$too_new), rownames(valid$out_of_date))
if (is.null(pkgs) || !update)
return(pkgs)
.install(
pkgs, old_pkgs, instPkgs, repos, update = update,
ask = ask, force = force, ...
)
pkgs
}
#' @name install
#' @aliases BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS
#' @md
#'
#' @title Install or update Bioconductor, CRAN, and GitHub packages
#'
#' @description The `BiocManager::install()` function installs or
#' updates _Bioconductor_ and CRAN packages in a _Bioconductor_
#' release. Upgrading to a new _Bioconductor_ release may require
#' additional steps; see \url{https://bioconductor.org/install}.
#'
#' @details
#'
#' Installation of _Bioconductor_ and CRAN packages uses R's standard
#' functions for library management -- `install.packages()`,
#' `available.packages()`, `update.packages()`. Installation of GitHub
#' packages uses `remotes::install_github()`.
#'
#' When installing CRAN or _Bioconductor_ packages, typical arguments
#' include: `lib.loc`, passed to \code{\link{old.packages}()} and used to
#' determine the library location of installed packages to be updated;
#' and `lib`, passed to \code{\link{install.packages}()} to determine the
#' library location where `pkgs` are to be installed.
#'
#' When installing GitHub packages, `...` is passed to the
#' \pkg{remotes} package functions \code{\link[remotes]{install_github}()}
#' and `remotes:::install()`. A typical use is to build vignettes, via
#' `dependencies=TRUE, build_vignettes=TRUE`.
#'
#' See `?repositories` for additional detail on customizing where
#' BiocManager searches for package installation.
#'
#' \env{BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS} is an environment
#' variable or global `options()` which, when set to `FALSE`, allows
#' organizations and their users to use offline repositories with BiocManager
#' while enforcing appropriate version checks between _Bioconductor_ and R.
#' Setting \env{BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS} to `FALSE` can speed
#' package loading when internet access is slow or non-existent, but may
#' result in out-of-date information regarding the current release and
#' development versions of _Bioconductor_. In addition, offline
#' organizations and their users should set the \env{BIOCONDUCTOR_CONFIG_FILE}
#' environment variable or option to a `.yaml` file similar to
#' \url{https://bioconductor.org/config.yaml} for full offline use and
#' version validation.
#'
#' @param pkgs `character()` vector of package names to install or
#' update. A missing value updates installed packages according
#' to `update =` and `ask =`. Package names containing a '/' are
#' treated as GitHub repositories and installed using
#' `remotes::install_github()`.
#' @param ... Additional arguments used by `install.packages()`.
#' @param site_repository (Optional) `character(1)` vector
#' representing an additional repository in which to look for
#' packages to install. This repository will be prepended to the
#' default repositories (which you can see with
#' \code{BiocManager::\link{repositories}()}).
#' @param update `logical(1)`. When `FALSE`, `BiocManager::install()`
#' does not attempt to update old packages. When `TRUE`, update
#' old packages according to `ask`.
#' @param ask `logical(1)` indicating whether to prompt user before
#' installed packages are updated. If TRUE, user can choose
#' whether to update all outdated packages without further
#' prompting, to pick packages to update, or to cancel updating
#' (in a non-interactive session, no packages will be updated
#' unless `ask = FALSE`).
#' @param checkBuilt `logical(1)`. If `TRUE` a package built under an
#' earlier major.minor version of R (e.g., 3.4) is considered to
#' be old.
#' @param force `logical(1)`. If `TRUE` re-download a package that is
#' currently up-to-date.
#' @param version `character(1)` _Bioconductor_ version to install,
#' e.g., `version = "3.8"`. The special symbol `version = "devel"`
#' installs the current 'development' version.
#'
#' @return `BiocManager::install()` returns the `pkgs` argument, invisibly.
#' @seealso
#'
#' \code{BiocManager::\link{repositories}()} returns the _Bioconductor_ and
#' CRAN repositories used by `install()`.
#'
#' \code{\link{install.packages}()} installs the packages themselves (used by
#' `BiocManager::install` internally).
#'
#' \code{\link{update.packages}()} updates all installed packages (used by
#' `BiocManager::install` internally).
#'
#' \code{\link{chooseBioCmirror}()} allows choice of a mirror from all
#' public _Bioconductor_ mirrors.
#'
#' \code{\link{chooseCRANmirror}()} allows choice of a mirror from all
#' public CRAN mirrors.
#'
#' @keywords environment
#' @examples
#'
#' \dontrun{
#' ## update previously installed packages
#' BiocManager::install()
#'
#' ## install Bioconductor packages, and prompt to update all
#' ## installed packages
#' BiocManager::install(c("GenomicRanges", "edgeR"))
#'
#' ## install a CRAN and Bioconductor packages:
#' BiocManager::install(c("survival", "SummarizedExperiment"))
#'
#' ## install a package from source:
#' BiocManager::install("IRanges", type="source")
#' }
#'
#' @export
install <-
function(pkgs = character(), ..., site_repository = character(),
update = TRUE, ask = TRUE, checkBuilt = FALSE, force = FALSE,
version = BiocManager::version())
{
stopifnot(
is.character(pkgs), !anyNA(pkgs),
.install_validate_dots(...),
is.logical(update), length(update) == 1L, !is.na(update),
is.logical(ask), length(ask) == 1L, !is.na(ask),
is.logical(checkBuilt), length(checkBuilt) == 1L, !is.na(checkBuilt),
length(version) == 1L || inherits(version, "version_sentinel")
)
site_repository <- .repositories_site_repository(site_repository)
version <- .version_validate(version)
inst <- installed.packages()
if (!"BiocVersion" %in% rownames(inst)) {
pkgs <- unique(c("BiocVersion", pkgs))
}
cmp <- .version_compare(version, version())
action <- if (cmp < 0) "Downgrade" else "Upgrade"
repos <- .repositories(site_repository, version = version, ...)
vout <- .valid_out_of_date_pkgs(pkgs = inst,
repos = repos, ..., checkBuilt = checkBuilt,
site_repository = site_repository)
if (cmp != 0L) {
pkgs <- unique(c("BiocVersion", pkgs))
valist <- .valid_result(vout, pkgs = inst)
npkgs <- .install_n_invalid_pkgs(valist) + length(pkgs)
if (!length(pkgs)-1L) {
.install_ask_up_or_down_grade(version, npkgs, cmp, ask) ||
.stop(paste0(
"Bioconductor version not changed by 'install()'",
if (!interactive() && isTRUE(ask))
"; in non-interactive sessions use 'ask = FALSE'"
))
} else {
fmt <- paste(c(
"To use Bioconductor version '%s', first %s %d packages with",
"\n BiocManager::install(version = '%s')"),
collapse="")
action <- tolower(action)
.stop(fmt, version, action, npkgs, version, wrap.=FALSE)
}
}
.message(.version_string(version))
pkgs <- .install(
pkgs, vout[["out_of_date"]], instPkgs = inst, repos = repos,
update = update, ask = ask, force = force, ...
)
if (update && cmp == 0L) {
.install_update(repos, ask, checkBuilt = checkBuilt, ...)
} else if (cmp != 0L) {
.install_updated_version(
valist, update, vout[["out_of_date"]], inst, repos, ask = ask,
force = force, ...
)
}
invisible(pkgs)
}
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/install.R
|
BINARY_BASE_URL <- "https://bioconductor.org/packages/%s/container-binaries/%s"
.repositories_check_repos_envopt <-
function()
{
opt <- Sys.getenv("BIOCMANAGER_CHECK_REPOSITORIES", TRUE)
opt <- getOption("BiocManager.check_repositories", opt)
isTRUE(as.logical(opt))
}
.repositories_site_repository <-
function(site_repository = character())
{
stopifnot(
length(site_repository) == 0L || .is_scalar_character(site_repository)
)
if (!length(site_repository) || !nzchar(site_repository)) {
site_repository <- Sys.getenv("BIOCMANAGER_SITE_REPOSITORY", "")
site_repository <-
getOption("BiocManager.site_repository", site_repository)
if (!nzchar(site_repository))
site_repository <- character()
}
site_repository
}
.repositories_check_repos <-
function(repos)
{
conflict <-
names(repos) %in% c(names(.repositories_bioc(version())), "CRAN")
conflict <- conflict & repos != "@CRAN@"
conflicts <- repos[conflict]
if (length(conflicts)) {
txt <- paste(
"'getOption(\"repos\")' replaces Bioconductor standard ",
"repositories, see ",
"'help(\"repositories\", package = \"BiocManager\")' for details."
)
fmt <- paste0(
.msg(txt, exdent = 0),
"\nReplacement repositories:",
"\n %s\n"
)
repos_string <- paste0(
names(conflicts), ": ", unname(conflicts),
collapse = "\n "
)
if (.repositories_check_repos_envopt())
.message(
fmt, repos_string,
call. = FALSE, wrap. = FALSE, appendLF = FALSE
)
}
repos
}
.repositories_base <-
function()
{
repos <- getOption("repos")
repos <- .repositories_check_repos(repos)
rename <- repos == "@CRAN@"
repos[rename] <- "https://cloud.r-project.org"
repos
}
#' @importFrom stats setNames
.repositories_bioc <-
function(version, ..., type = NULL)
{
mirror <- getOption("BioC_mirror", "https://bioconductor.org")
paths <- c(
BioCsoft = "bioc",
BioCann = "data/annotation",
BioCexp = "data/experiment",
BioCworkflows = "workflows",
BioCbooks = if (version() >= "3.12") "books" else character()
)
bioc_repos <- paste(mirror, "packages", version, paths, sep="/")
c(
containerRepository(version = version, type = type),
setNames(bioc_repos, names(paths))
)
}
.repositories_filter <-
function(repos)
{
ext <- c(".rds", ".gz", "")
pkg_files <- paste0("/PACKAGES", ext)
online <- logical(length(repos))
for (pkg_file in pkg_files) {
if (all(online))
next
urls <- paste0(contrib.url(repos[!online]), pkg_file)
online[!online] <- vapply(urls, .url_exists, logical(1))
}
repos[online]
}
.repositories <-
function(site_repository, version, ...)
{
base <- .repositories_base()
bioc <- .repositories_bioc(version, ...)
repos <- c(site_repository = site_repository, bioc, base)
repos[!duplicated(names(repos))]
}
#' @title Display current Bioconductor and CRAN repositories.
#'
#' @aliases BiocManager.check_repositories
#'
#' @description `repositories()` reports the URLs from which to
#' install _Bioconductor_ and CRAN packages. It is used by
#' `BiocManager::install()` and other functions.
#'
#' @param site_repository (Optional) `character(1)` representing an
#' additional repository (e.g., a URL to an organization's
#' internally maintained repository) in which to look for packages
#' to install. This repository will be prepended to the default
#' repositories returned by the function.
#'
#' @param version (Optional) `character(1)` or `package_version`
#' indicating the _Bioconductor_ version (e.g., "3.8") for which
#' repositories are required.
#'
#' @param ... Additional parameters passed to lower level functions, not
#' used.
#'
#' @param type (Optional) `character(1)` indicating the type of package
#' repository to retrieve (default: "both"). Setting `type` to "source" will
#' disable any Bioconductor binary packages specifically built for the
#' containers.
#'
#' @details
#'
#' `repositories()` returns the appropriate software package
#' repositories for your version of _Bioconductor_.
#'
#' _Bioconductor_ has a 'release' and a 'devel' semi-annual release
#' cycle. Packages within a release have been tested against each
#' other and the current version of packages on CRAN. _Bioconductor_
#' best practice is to use packages from the same release, and from
#' the appropriate CRAN repository.
#'
#' To install binary packages on containerized versions of Bioconductor,
#' a default binary package location URL is set as a package constant,
#' see `BiocManager:::BINARY_BASE_URL`. Binary package installations
#' are enabled by default for Bioconductor Docker containers. Anyone
#' wishing to opt out of the binary package installation can set either the
#' variable or the option, \env{BIOCONDUCTOR_USE_CONTAINER_REPOSITORY}, to
#' `FALSE`. Note that the availability of Bioconductor package binaries is
#' experimental and binary installations are intended to be used with
#' `bioconductor/bioconductor_docker` images where such installations
#' correspond to specific versions of Linux / Ubuntu.
#'
#' If alternative default repositories are known to provide appropriate
#' versions of CRAN or _Bioconductor_ packages, the message may be silenced
#' by setting either the option or the variable to `FALSE`, i.e.,
#' `options(BiocManager.check_repositories = FALSE)` or
#' \env{BIOCMANAGER_CHECK_REPOSITORIES=FALSE}. Alternative default
#' repositories are not guaranteed to work without issues related to
#' incompatible package installations and are used at the user's own risk.
#'
#' The intended use of `site_repository =` is to enable installation of
#' packages not available in the default repositories, e.g., packages
#' internal to an organization and not yet publicly available. A
#' secondary use might provide alternative versions (e.g., compiled
#' binaries) of packages available in the default repositories. Note
#' that _R_'s standard rules of package selection apply, so the most
#' recent version of candidate packages is selected independent of the
#' location of the repository in the vector returned by `repositories()`.
#' To set a more permanent `site_repository`, one can use either the
#' \env{BIOCMANAGER_SITE_REPOSITORY} environment variable or the
#' `options(BiocManager.site_repository = ...)` option.
#'
#' For greater flexibility in installing packages while still adhering
#' as much as possible to _Bioconductor_ best practices, use
#' `repositories()` as a basis for constructing the `repos =` argument
#' to `install.packages()` and related functions.
#'
#' @return `repositories()`: named `character()` of repositories.
#'
#' @seealso
#'
#' \code{BiocManager::\link{install}()} Installs or updates Bioconductor,
#' CRAN, and GitHub packages.
#'
#' \code{\link{chooseBioCmirror}()} choose an alternative Bioconductor
#' mirror; not usually necessary.
#'
#' \code{\link{chooseCRANmirror}()} choose an alternative CRAN mirror; not
#' usually necessary.
#'
#' \code{\link{setRepositories}()} Select additional repositories for
#' searching.
#'
#' @keywords environment
#'
#' @examples
#' BiocManager::repositories()
#' \dontrun{
#' BiocManager::repositories(version="3.8")
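#'
#' ## a sketch of using repositories() as the 'repos' argument of
#' ## install.packages(), as suggested in Details; the package name
#' ## 'somePackage' is hypothetical:
#' install.packages("somePackage", repos = BiocManager::repositories())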
#' }
#'
#' @md
#' @export repositories
repositories <- function(
site_repository = character(),
version = BiocManager::version(),
...,
type = "both"
) {
site_repository <- .repositories_site_repository(site_repository)
stopifnot(
length(site_repository) <= 1L,
is.character(site_repository), !anyNA(site_repository)
)
version <- .version_validate(version)
.repositories(site_repository, version, ..., type = type)
}
## is the docker container configured correctly?
.repository_container_version_test <-
function(bioconductor_version, container_version)
{
bioconductor_version <- package_version(bioconductor_version)
docker_version <- package_version(container_version)
(bioconductor_version$major == docker_version$major) &&
(bioconductor_version$minor == docker_version$minor)
}
## are we running on a docker container?
.repository_container_version <-
function()
{
container_version <- Sys.getenv("BIOCONDUCTOR_DOCKER_VERSION")
if (nzchar(container_version)) {
platform <- "bioconductor_docker"
} else {
platform <- Sys.getenv("TERRA_R_PLATFORM")
container_version <- Sys.getenv("TERRA_R_PLATFORM_BINARY_VERSION")
}
## platform and container_version are empty strings
## when not running on a container
list(platform = platform, container_version = container_version)
}
.repositories_use_container_repo <-
function()
{
opt <- Sys.getenv("BIOCONDUCTOR_USE_CONTAINER_REPOSITORY", TRUE)
opt <- getOption("BIOCONDUCTOR_USE_CONTAINER_REPOSITORY", opt)
isTRUE(as.logical(opt))
}
#' @rdname repositories
#'
#' @aliases BINARY_BASE_URL
#'
#' @description `containerRepository()` reports the location of the repository
#' of binary packages for fast installation within containerized versions
#' of Bioconductor, if available.
#'
#' @details
#'
#' The unexported URL to the base repository is available with
#' `BiocManager:::BINARY_BASE_URL`.
#'
#' \env{BIOCONDUCTOR_USE_CONTAINER_REPOSITORY} is an environment
#' variable or global `options()` which, when set to `FALSE`, avoids
#' the fast installation of binary packages within containerized
#' versions of Bioconductor.
#'
#' @return `containerRepository()`: character(1) location of binary repository,
#' if available, or character(0) if not.
#'
#' @examples
#' containerRepository() # character(0) if not within a Bioconductor container
#'
#' @importFrom utils contrib.url
#'
#' @md
#' @export
containerRepository <-
function(
version = BiocManager::version(), type = "binary"
)
{
if (identical(type, "source"))
return(character())
platform_docker <- .repository_container_version()
container_version <- platform_docker$container_version
platform <- platform_docker$platform
## are we running on a known container?
if (!nzchar(container_version))
return(character())
## do the versions of BiocManager::version() and the container match?
versions_match <- .repository_container_version_test(
version, container_version
)
if (!versions_match)
return(character())
if (!.repositories_use_container_repo())
return(character())
## does the binary repository exist?
binary_repos0 <- sprintf(BINARY_BASE_URL, version, platform)
packages <- paste0(contrib.url(binary_repos0), "/PACKAGES.gz")
url <- url(packages)
tryCatch({
suppressWarnings(open(url, "rb"))
close(url)
setNames(binary_repos0, "BioCcontainers")
}, error = function(...) {
close(url)
character()
})
}
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/repositories.R
|
.is_CRAN_check <-
function()
{
!interactive() && ("CheckExEnv" %in% search())
}
.is_character <-
function(x, na.ok = FALSE, zchar = FALSE)
{
is.character(x) &&
(na.ok || all(!is.na(x))) &&
(zchar || all(nzchar(x)))
}
.is_scalar_character <- function(x, na.ok = FALSE, zchar = FALSE)
length(x) == 1L && .is_character(x, na.ok, zchar)
.is_scalar_logical <- function(x, na.ok = FALSE)
is.logical(x) && length(x) == 1L && (na.ok || !is.na(x))
.getAnswer <- function(msg, allowed)
{
if (interactive()) {
repeat {
cat(msg)
answer <- readLines(n = 1)
if (answer %in% allowed)
break
}
tolower(answer)
} else {
"n"
}
}
.sQuote <- function(x)
sprintf("'%s'", as.character(x))
.url_exists <-
function(url)
{
suppressWarnings(tryCatch({
identical(nchar(.inet_readChar(url, 1L)), 1L)
}, error = function(...) {
FALSE
}))
}
.msg <-
function(
fmt, ...,
width=getOption("width"), indent = 0, exdent = 2, wrap. = TRUE
)
## Use this helper to format all error / warning / message text
{
txt <- sprintf(fmt, ...)
if (wrap.) {
txt <- strwrap(
sprintf(fmt, ...), width=width, indent = indent, exdent=exdent
)
paste(txt, collapse="\n")
} else {
txt
}
}
.message <-
function(..., call. = FALSE, domain = NULL, appendLF=TRUE)
{
## call. = FALSE provides compatibility with .stop(), but is ignored
message(.msg(...), domain = domain, appendLF=appendLF)
invisible(TRUE)
}
.packageStartupMessage <-
function(..., domain = NULL, appendLF = TRUE)
{
packageStartupMessage(.msg(...), domain = domain, appendLF = appendLF)
invisible(TRUE)
}
.stop <-
function(..., call.=FALSE)
{
stop(.msg(...), call.=call.)
}
.warning <-
function(..., call.=FALSE, immediate.=FALSE)
{
warning(.msg(...), call.=call., immediate.=immediate.)
invisible(TRUE)
}
isDevel <-
function()
{
version() == .version_bioc("devel")
}
isRelease <-
function()
{
version() == .version_bioc("release")
}
## testthat helper functions
.skip_if_misconfigured <-
function()
{
R_version <- getRversion()
bioc_version <- version()
test_ver <- tryCatch({
.version_validity(bioc_version)
}, error = function(err) {
conditionMessage(err)
})
if (!isTRUE(test_ver)) {
msg <- sprintf(
"mis-configuration, R %s, Bioconductor %s", R_version, bioc_version
)
testthat::skip(msg)
}
}
.skip_if_BiocVersion_not_available <-
function()
{
if (!"BiocVersion" %in% rownames(installed.packages()))
testthat::skip("BiocVersion not installed")
}
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/utilities.R
|
.valid_pkgs_too_new <-
function(instPkgs, availPkgs)
{
idx <- rownames(availPkgs) %in% rownames(instPkgs)
vers <- availPkgs[idx, "Version"]
idx <- package_version(vers) <
package_version(instPkgs[names(vers), "Version"])
too_new <- names(vers)[idx]
instPkgs[too_new, c("Version", "LibPath"), drop=FALSE]
}
.valid_out_of_date_pkgs <-
function(pkgs = installed.packages(lib.loc, priority=priority), repos,
lib.loc=NULL, priority="NA", type=getOption("pkgType"),
filters=NULL, ..., checkBuilt, site_repository)
{
contribUrl <- contrib.url(repos, type=type)
available <- out_of_date <- too_new <- character()
result <- FALSE
available <- .inet_available.packages(
contribUrl, type=type, filters=filters
)
out_of_date <- .inet_old.packages(
lib.loc, repos=repos, instPkgs=pkgs,
available=available, checkBuilt=checkBuilt, type=type
)
list(
available = available,
out_of_date = out_of_date,
noRepos = !length(repos)
)
}
.valid_result <-
function(avail_out, pkgs = installed.packages(lib.loc, priority=priority),
lib.loc=NULL, priority="NA")
{
too_new <- .valid_pkgs_too_new(pkgs, avail_out[["available"]])
out_of_date <- avail_out[["out_of_date"]]
result <- !nrow(too_new) && is.null(out_of_date)
if (!result || avail_out[["noRepos"]]) {
result <- structure(
list(out_of_date = out_of_date, too_new = too_new),
class="biocValid"
)
}
result
}
#' Validate installed package versions against correct versions.
#'
#' Check that installed packages are consistent (neither out-of-date
#' nor too new) with the version of R and _Bioconductor_ in use.
#'
#' @details This function compares the version of installed packages
#' to the version of packages associated with the version of _R_
#' and _Bioconductor_ currently in use.
#'
#' Packages are reported as 'out-of-date' if a more recent version
#' is available at the repositories specified by
#' `BiocManager::repositories()`. Usually, `BiocManager::install()` is
#' sufficient to update packages to their most recent version.
#'
#' Packages are reported as 'too new' if the installed version is
#' more recent than the most recent available in the
#' `BiocManager::repositories()`. It is possible to down-grade by
#' re-installing a too new package "PkgA" with
#' `BiocManager::install("PkgA")`. It is important for the user to
#' understand how their installation became too new, and to avoid
#' this in the future.
#'
#' @param pkgs A character() vector of package names for checking, or
#' a matrix as returned by \code{\link{installed.packages}()}`.
#' @param lib.loc A character() vector of library location(s) of
#' packages to be validated; see \code{\link{installed.packages}()}.
#' @param priority character(1) Check validity of all, "base", or
#' "recommended" packages; see \code{\link{installed.packages}()}.
#' @param type character(1) The type of available package (e.g.,
#' binary, source) to check validity against; see
#' \code{\link{available.packages}()}.
#' @param filters character(1) Filter available packages to check
#' validity against; see \code{\link{available.packages}()}.
#' @param \dots Additional arguments, passed to
#' \code{BiocManager::\link{install}()} when `fix=TRUE`.
#' @param checkBuilt `logical(1)`. If `TRUE` a package built under an
#' earlier major.minor version of R (e.g., 3.4) is considered to
#' be old.
#' @param site_repository `character(1)`. See `?install`.
#' @return `biocValid` list object with elements `too_new` and
#' `out_of_date` containing `data.frame`s with packages and their
#' installed locations that are too new or out-of-date for the
#' current version of _Bioconductor_. When internet access
#' is unavailable, an empty 'biocValid' list is returned. If all
#' packages ('pkgs') are up to date, then TRUE is returned.
#' @author Martin Morgan \email{martin.morgan@@roswellpark.org}
#' @seealso \code{BiocManager::\link{install}()} to update installed
#' packages.
#' @keywords environment
#' @examples
#' if (interactive()) {
#' BiocManager::valid()
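#'
#' ## a sketch of inspecting problem packages, if any; 'v' is TRUE
#' ## when the installation is valid (see Value)
#' v <- BiocManager::valid()
#' if (!isTRUE(v)) v$out_of_date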
#' }
#' @md
#' @export valid
valid <-
function(pkgs = installed.packages(lib.loc, priority=priority),
lib.loc=NULL, priority="NA", type=getOption("pkgType"),
filters=NULL, ..., checkBuilt = FALSE,
site_repository = character())
{
stopifnot(
is.logical(checkBuilt), length(checkBuilt) == 1L, !is.na(checkBuilt)
)
site_repository <- .repositories_site_repository(site_repository)
if (!is.matrix(pkgs)) {
if (is.character(pkgs)) {
pkgs <- installed.packages(pkgs, lib.loc=lib.loc)
} else {
.stop(
"'pkgs' must be a character vector of package names,
or a matrix like that returned by 'installed.packages()'"
)
}
}
version <- .version_validate(version())
repos <- .repositories(site_repository, version = version)
repos <- .repositories_filter(repos)
vout <- .valid_out_of_date_pkgs(pkgs = pkgs, lib.loc = lib.loc,
repos = repos, priority = priority, type = type, filters=filters, ...,
checkBuilt = checkBuilt, site_repository = site_repository)
result <-
.valid_result(vout, pkgs = pkgs, lib.loc = lib.loc, priority = priority)
if (!isTRUE(result)) {
out_of_date <- result$out_of_date
too_new <- result$too_new
if (NROW(out_of_date) + NROW(too_new) != 0L) {
.warning(
"%d packages out-of-date; %d packages too new",
NROW(out_of_date), NROW(too_new)
)
}
}
result
}
#' @rdname valid
#' @param x A `biocValid` object returned by `BiocManager::valid()`.
#' @return `print()` is invoked for its side effect.
#' @export
print.biocValid <-
function(x, ...)
{
cat("\n* sessionInfo()\n\n")
print(sessionInfo())
cat(
"\nBioconductor version '", as.character(version()), "'",
"\n",
"\n * ", NROW(x$out_of_date), " packages out-of-date",
"\n * ", NROW(x$too_new), " packages too new",
sep = ""
)
n <- NROW(x$too_new) + NROW(x$out_of_date)
if (n == 0L) {
cat("\n\nInstallation valid\n")
return()
}
fmt <-' BiocManager::install(%s, update = TRUE, ask = FALSE, force = TRUE)'
if (n == 1L) {
fmt <- sprintf(fmt, '"%s"')
} else {
fmt <- sprintf(fmt, 'c(\n "%s"\n )')
}
pkgs0 <- sort(unique(c(rownames(x$too_new), rownames(x$out_of_date))))
pkgs <- paste(strwrap(
paste(pkgs0, collapse='", "'),
width = getOption("width") - 4L
), collapse="\n ")
cat(
"\n\ncreate a valid installation with",
"\n\n", sprintf(fmt, pkgs), "\n\n",
sep = ""
)
cat("more details: BiocManager::valid()$too_new, BiocManager::valid()$out_of_date\n\n")
}
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/valid.R
|
.VERSION_HELP <- "see https://bioconductor.org/install"
.VERSION_UNKNOWN <-
"Bioconductor version cannot be determined; no internet connection?
See #troubleshooting section in vignette"
.VERSION_MAP_UNABLE_TO_VALIDATE <-
"Bioconductor version cannot be validated; no internet connection?
See #troubleshooting section in vignette"
.VERSION_MAP_MISCONFIGURATION <-
"Bioconductor version map cannot be validated; is it misconfigured?
See #troubleshooting section in vignette"
.VERSION_TYPE_MISSPECIFICATION <-
"Bioconductor version cannot be validated; is type input misspecified?
See #troubleshooting section in vignette"
.NO_ONLINE_VERSION_DIAGNOSIS <-
"Bioconductor online version validation disabled;
see ?BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS"
.LEGACY_INSTALL_CMD <-
"source(\"https://bioconductor.org/biocLite.R\")"
.VERSION_TAGS <-
c("out-of-date", "release", "devel", "future")
.VERSION_MAP_SENTINEL <- data.frame(
Bioc = package_version(character()),
R = package_version(character()),
BiocStatus = factor(
factor(),
levels = .VERSION_TAGS
)
)
.version_sentinel <-
function(msg)
{
version <- package_version(NA_character_, strict = FALSE)
structure(
unclass(version),
msg = msg,
class = c("version_sentinel", class(version))
)
}
.version_sentinel_msg <-
function(x)
{
attr(x, "msg")
}
#' @export
format.version_sentinel <-
function(x, ...)
{
paste0("unknown version: ", .version_sentinel_msg(x))
}
.version_compare <-
function(v1, v2)
{
## return -1, 0, or 1 when v1 is <, ==, or > v2
if (v1 < v2)
-1L
else if (v1 > v2)
1L
else 0L
}
.VERSION_MAP <- local({
WARN_NO_ONLINE_CONFIG <- TRUE
environment()
})
.version_validity_online_check <-
function()
{
opt <- Sys.getenv("BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS", TRUE)
opt <- getOption("BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS", opt)
opt <- isTRUE(as.logical(opt))
if (.VERSION_MAP$WARN_NO_ONLINE_CONFIG && !opt) {
.VERSION_MAP$WARN_NO_ONLINE_CONFIG <- FALSE
.warning(.NO_ONLINE_VERSION_DIAGNOSIS)
}
opt
}
.version_map_get_online_config <-
function(config)
{
txt <- tryCatch(.inet_readLines(config), error = identity)
if (inherits(txt, "error") && startsWith(config, "https://")) {
config <- sub("https", "http", config)
txt <- tryCatch(.inet_readLines(config), error = identity)
}
txt
}
.version_map_config_element <-
function(txt, tag)
{
grps <- grep("^[^[:blank:]]", txt)
start <- match(grep(tag, txt), grps)
if (!length(start))
return(setNames(character(), character()))
end <- ifelse(length(grps) < start + 1L, length(txt), grps[start + 1] - 1L)
map <- txt[seq(grps[start] + 1, end)]
map <- trimws(gsub("\"", "", sub(" #.*", "", map)))
pattern <- "(.*): (.*)"
key <- sub(pattern, "\\1", map)
value <- sub(pattern, "\\2", map)
setNames(value, key)
}
.version_map_get_online <-
function(config)
{
toggle_warning <- FALSE
withCallingHandlers({
txt <- .version_map_get_online_config(config)
}, warning = function(w) {
if (!.VERSION_MAP$WARN_NO_ONLINE_CONFIG)
invokeRestart("muffleWarning")
toggle_warning <<- TRUE
})
if (toggle_warning)
.VERSION_MAP$WARN_NO_ONLINE_CONFIG <- FALSE
if (!length(txt) || inherits(txt, "error"))
return(.VERSION_MAP_SENTINEL)
bioc_r_map <- .version_map_config_element(txt, "r_ver_for_bioc_ver")
if (!length(bioc_r_map))
return(.VERSION_MAP_SENTINEL)
bioc <- package_version(names(bioc_r_map))
r <- package_version(unname(bioc_r_map))
pattern <- "^release_version: \"(.*)\""
release <- package_version(
sub(pattern, "\\1", grep(pattern, txt, value=TRUE))
)
pattern <- "^devel_version: \"(.*)\""
devel <- package_version(
sub(pattern, "\\1", grep(pattern, txt, value=TRUE))
)
status <- rep("out-of-date", length(bioc))
status[bioc == release] <- "release"
status[bioc == devel] <- "devel"
## append final version for 'devel' R
bioc <- c(
bioc, max(bioc)
## package_version(paste(unlist(max(bioc)) + 0:1, collapse = "."))
)
if (max(r) == package_version("3.6")) {
future_r <- package_version("4.0")
} else {
future_r <- package_version(paste(unlist(max(r)) + 0:1, collapse = "."))
}
r <- c(r, future_r)
status <- c(status, "future")
rbind(.VERSION_MAP_SENTINEL, data.frame(
Bioc = bioc, R = r,
BiocStatus = factor(
status,
levels = .VERSION_TAGS
)
))
}
.version_map_get_offline <-
function()
{
bioc <- .version_BiocVersion()
if (is.na(bioc))
return(.VERSION_MAP_SENTINEL)
r <- .version_R_version()[,1:2]
status <- .VERSION_TAGS
rbind(.VERSION_MAP_SENTINEL, data.frame(
Bioc = bioc, R = r,
BiocStatus = factor(
NA,
levels = status
)
))
}
.version_map_get <-
function(config = NULL)
{
if (!.version_validity_online_check())
.version_map_get_offline()
else {
if (is.null(config) || !nchar(config))
config <- "https://bioconductor.org/config.yaml"
.version_map_get_online(config)
}
}
.version_map <- local({
version_map <- .VERSION_MAP_SENTINEL
function() {
config <- Sys.getenv("BIOCONDUCTOR_CONFIG_FILE")
config <- getOption("BIOCONDUCTOR_CONFIG_FILE", config)
if (identical(version_map, .VERSION_MAP_SENTINEL))
version_map <<- .version_map_get(config)
version_map
}
})
.version_field <-
function(field)
{
map <- .version_map()
if (identical(map, .VERSION_MAP_SENTINEL))
return(NA)
idx <- match(version(), map[["Bioc"]])
map[idx, field]
}
.version_R_version <- function()
getRversion()
.version_BiocVersion_installed <- function()
nzchar(system.file(package = "BiocVersion"))
.version_BiocVersion <-
function()
{
if (.version_BiocVersion_installed())
packageVersion("BiocVersion")[, 1:2]
else
.version_sentinel("BiocVersion is not installed")
}
.version_string <-
function(bioc_version = version())
{
sprintf(
"Bioconductor version %s (BiocManager %s), %s",
bioc_version, packageVersion("BiocManager"),
sub(" version", "", R.version.string)
)
}
## .version_validity() returns TRUE if the version is valid for this
## version of R, or a text string (created with sprintf()) explaining why
## the version is invalid. It does NOT call message / warning / etc
## directly.
.version_validity <-
function(version, map = .version_map(), r_version = .version_R_version(),
check_future = FALSE)
{
if (identical(version, "devel"))
version <- .version_bioc("devel")
version <- .package_version(version)
if (inherits(version, "version_sentinel"))
return(.version_sentinel_msg(version))
if (version[, 1:2] != version)
return(sprintf(
"version '%s' must have two components, e.g., '3.7'", version
))
if (identical(map, .VERSION_MAP_SENTINEL))
return(.VERSION_MAP_UNABLE_TO_VALIDATE)
if (!all(.VERSION_TAGS %in% map$BiocStatus))
return(.VERSION_MAP_MISCONFIGURATION)
if (!version %in% map$Bioc)
return(sprintf(
"unknown Bioconductor version '%s'; %s", version, .VERSION_HELP
))
required <- map$R[map$Bioc == version & !map$BiocStatus %in% "future"]
r_version <- r_version[, 1:2]
if (!r_version %in% required) {
rec <- map[map$R == r_version, , drop = FALSE]
one_up <- required
one_up[, 2] <- as.integer(required[, 2]) + 1L
if (r_version == one_up && "future" %in% rec$BiocStatus) {
if (check_future) {
return(sprintf(
"Bioconductor does not yet build and check packages for R
version %s, using unsupported Bioconductor version %s; %s",
r_version, version, .VERSION_HELP
))
}
} else {
rec_fun <- ifelse("devel" %in% rec$BiocStatus, head, tail)
rec_msg <- sprintf(
"use `version = '%s'` with R version %s",
rec_fun(rec$Bioc, 1), r_version
)
return(sprintf(
"Bioconductor version '%s' requires R version '%s'; %s; %s",
version, head(required, 1), rec_msg, .VERSION_HELP
))
}
}
TRUE
}
.version_validate <-
function(version)
{
if (identical(version, "devel"))
version <- .version_bioc("devel")
version <- .package_version(version)
txt <- .version_validity(version)
isTRUE(txt) || ifelse(.is_CRAN_check(), .message(txt), .stop(txt))
version
}
.r_version_lt_350 <-
function()
{
getRversion() < package_version("3.5.0")
}
.version_recommend <-
function(version)
{
release <- .version_bioc("release")
if (is.package_version(release) && version < release) {
if (.r_version_lt_350())
return(sprintf(
"Bioconductor version '%s' is out-of-date; BiocManager does
not support R version '%s'. For older installations of
Bioconductor, use '%s' and refer to the 'BiocInstaller'
vignette on the Bioconductor website",
version, getRversion(), .LEGACY_INSTALL_CMD
))
else
return(sprintf(
"Bioconductor version '%s' is out-of-date; the current release
version '%s' is available with R version '%s'; %s",
version, release, .version_R("release"), .VERSION_HELP
))
}
TRUE
}
.version_choose_best <-
function()
{
map <- .version_map()
if (identical(map, .VERSION_MAP_SENTINEL))
return(.version_sentinel(.VERSION_MAP_UNABLE_TO_VALIDATE))
if (!all(.VERSION_TAGS %in% map$BiocStatus))
return(.version_sentinel(.VERSION_MAP_MISCONFIGURATION))
map <- map[map$R == getRversion()[, 1:2],]
if ("release" %in% map$BiocStatus)
idx <- map$BiocStatus == "release"
else if ("devel" %in% map$BiocStatus)
idx <- map$BiocStatus == "devel"
else if ("out-of-date" %in% map$BiocStatus)
idx <- map$BiocStatus == "out-of-date"
else
idx <- map$BiocStatus == "future"
tail(map$Bioc[idx], 1)
}
.version_bioc <-
function(type)
{
map <- .version_map()
if (identical(map, .VERSION_MAP_SENTINEL))
return(.VERSION_MAP_UNABLE_TO_VALIDATE)
if (!all(.VERSION_TAGS %in% map$BiocStatus))
return(.VERSION_MAP_MISCONFIGURATION)
if (!type %in% .VERSION_TAGS)
return(.VERSION_TYPE_MISSPECIFICATION)
version <- map$Bioc[map$BiocStatus == type]
if (!length(version) || is.na(version))
version <- .VERSION_UNKNOWN
version
}
.version_R <-
function(type)
{
map <- .version_map()
if (identical(map, .VERSION_MAP_SENTINEL))
return(.VERSION_MAP_UNABLE_TO_VALIDATE)
if (!all(.VERSION_TAGS %in% map$BiocStatus))
return(.VERSION_MAP_MISCONFIGURATION)
if (!type %in% .VERSION_TAGS)
return(.VERSION_TYPE_MISSPECIFICATION)
version <- map$R[map$BiocStatus == type]
if (!length(version) || is.na(version))
version <- .VERSION_UNKNOWN
version
}
#' Version of Bioconductor currently in use.
#'
#' `version()` reports the version of _Bioconductor_ appropriate
#' for this version of R, or the version of _Bioconductor_ requested
#' by the user.
#'
#' `version()` (and all functions requiring version information) fails
#' when version cannot be validated e.g., because internet access is
#' not available.
#'
#' @return A two-digit version, e.g., `3.8`, of class
#' `package_version` describing the version of _Bioconductor_ in
#' use.
#'
#' @md
#' @examples
#' BiocManager::version()
#'
#' @export
version <-
function()
{
bioc <- .version_BiocVersion()
if (is.na(bioc))
bioc <- .version_choose_best()
bioc
}
.package_version <-
function(x)
{
if (!inherits(x, "package_version")) # preserve full class attributes
x <- package_version(x)
x
}
#' @rdname version
#'
#' @param x An `unknown_version` instance used to represent the
#' situation when the version of Bioconductor in use cannot be
#' determined.
#'
#' @param ... Additional arguments, ignored.
#'
#' @md
#' @export
print.version_sentinel <-
function(x, ...)
{
cat(format(x), "\n", sep = "")
}
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/version.R
|
.onAttach <-
function(libname, pkgname)
{
version <- version()
validity <- .version_validity(version, check_future = TRUE)
isTRUE(validity) || .packageStartupMessage(validity)
if (interactive() && isTRUE(validity))
.packageStartupMessage(.version_string(version))
recommend <- .version_recommend(version)
isTRUE(recommend) || .packageStartupMessage(recommend)
}
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/R/zzz.R
|
## ----setup, include=FALSE-----------------------------------------------------
knitr::opts_chunk$set(echo = TRUE, eval = interactive())
## ---- eval = FALSE------------------------------------------------------------
# install.packages("BiocManager", repos = "https://cloud.r-project.org")
## ---- eval = FALSE------------------------------------------------------------
# BiocManager::install(c("GenomicRanges", "Organism.dplyr"))
## ---- eval = FALSE------------------------------------------------------------
# BiocManager::install()
## -----------------------------------------------------------------------------
# BiocManager::version()
## -----------------------------------------------------------------------------
# BiocManager::valid()
## -----------------------------------------------------------------------------
# avail <- BiocManager::available()
# length(avail) # all CRAN & Bioconductor packages
# BiocManager::available("BSgenome.Hsapiens") # BSgenome.Hsapiens.* packages
## ---- eval = FALSE------------------------------------------------------------
# BiocManager::install(version="3.7")
## ---- eval = FALSE------------------------------------------------------------
# .libPaths()
## ---- eval = FALSE------------------------------------------------------------
# options(
# repos = c(CRAN_mirror = "file:///path/to/CRAN-mirror"),
# BioC_mirror = "file:///path/to/Bioc-mirror"
# )
## ---- eval = FALSE------------------------------------------------------------
# options(
# BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS = FALSE
# )
## ---- eval = FALSE------------------------------------------------------------
# install.packages(c("BiocManager", "BiocVersion"))
## ---- eval = FALSE------------------------------------------------------------
# options(
# BIOCONDUCTOR_CONFIG_FILE = "file:///path/to/config.yaml"
# )
## ----out.width = '100%', echo = FALSE, eval = TRUE----------------------------
knitr::include_graphics("img/badges.png")
## ----out.width = '100%', echo = FALSE, eval = TRUE----------------------------
knitr::include_graphics("img/archives.png")
## ---- eval = FALSE------------------------------------------------------------
# BiocManager::install()
## ---- eval = FALSE------------------------------------------------------------
# BiocManager::valid()
## ---- eval = TRUE-------------------------------------------------------------
sessionInfo()
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/inst/doc/BiocManager.R
|
---
title: "Installing and Managing _Bioconductor_ Packages"
author:
- name: Marcel Ramos
affiliation: Roswell Park Comprehensive Cancer Center, Buffalo, NY
- name: Martin Morgan
affiliation: Roswell Park Comprehensive Cancer Center, Buffalo, NY
output:
html_document:
toc: true
vignette: |
%\VignetteIndexEntry{Installing and Managing Bioconductor Packages}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, eval = interactive())
```
# Introduction
Use the [BiocManager][1] package to install and manage packages from the
_[Bioconductor][2]_ project for the statistical analysis and comprehension of
high-throughput genomic data.
Current _Bioconductor_ packages are available on a 'release' version intended
for every-day use, and a 'devel' version where new features are introduced. A
new release version is created every six months. Using the [BiocManager][1]
package helps users install packages from the same release.
# Basic use
## Installing _R_
We recommend using the current 'release' version of _R_. [Follow
instructions][6] for installing _R_.
## Installing _BiocManager_
Use standard _R_ installation procedures to install the
[BiocManager][1] package. This command is required only once per _R_
installation.
```{r, eval = FALSE}
install.packages("BiocManager", repos = "https://cloud.r-project.org")
```
## Installing _Bioconductor_, _CRAN_, or GitHub packages
Install _Bioconductor_ (or CRAN) packages with
```{r, eval = FALSE}
BiocManager::install(c("GenomicRanges", "Organism.dplyr"))
```
Installed packages can be updated to their current version with
```{r, eval = FALSE}
BiocManager::install()
```
## Previous releases
To install CRAN package versions consistent with previous releases of
Bioconductor, use the [BiocArchive][BiocArchive] package. BiocArchive enables
contemporary installations of CRAN packages with out-of-date _Bioconductor_
releases using [Posit Package Manager][RSPM].
[BiocArchive]: https://github.com/Bioconductor/BiocArchive
[RSPM]: https://packagemanager.posit.co/client/#/repos/2/overview
## Version and validity of installations
Use `version()` to discover the version of _Bioconductor_ currently in
use.
```{r}
BiocManager::version()
```
_Bioconductor_ packages work best when they are all from the same release. Use
`valid()` to identify packages that are out-of-date or from unexpected
versions.
```{r}
BiocManager::valid()
```
`valid()` returns an object that can be queried for detailed
information about invalid packages, as illustrated in the following
screen capture
```
> v <- valid()
Warning message:
6 packages out-of-date; 0 packages too new
> names(v)
[1] "out_of_date" "too_new"
> head(v$out_of_date, 2)
Package LibPath
bit "bit" "/home/mtmorgan/R/x86_64-pc-linux-gnu-library/3.5-Bioc-3.8"
ff "ff" "/home/mtmorgan/R/x86_64-pc-linux-gnu-library/3.5-Bioc-3.8"
Installed Built ReposVer Repository
bit "1.1-12" "3.5.0" "1.1-13" "https://cloud.r-project.org/src/contrib"
ff "2.2-13" "3.5.0" "2.2-14" "https://cloud.r-project.org/src/contrib"
>
```
## Available packages
Packages available for your version of _Bioconductor_ can be
discovered with `available()`; the first argument can be used to
filter package names based on a regular expression, e.g., 'BSgenome'
packages available for _Homo sapiens_
```{r}
avail <- BiocManager::available()
length(avail) # all CRAN & Bioconductor packages
BiocManager::available("BSgenome.Hsapiens") # BSgenome.Hsapiens.* packages
```
Questions about installing and managing _Bioconductor_ packages should
be addressed to the [_Bioconductor_ support site][3].
# Advanced use
## Changing version
Use the `version=` argument to update all packages to a specific _Bioconductor_
version
```{r, eval = FALSE}
BiocManager::install(version="3.7")
```
_Bioconductor_ versions are associated with specific _R_ versions, as
summarized [here][5]. Attempting to install a version of
_Bioconductor_ that is not supported by the version of _R_ in use
leads to an error; using the most recent version of _Bioconductor_ may
require installing a new version of _R_.
```
> BiocManager::install(version="3.9")
Error: Bioconductor version '3.9' requires R version '3.6'; see
https://bioconductor.org/install
```
A special version, `version="devel"`, allows use of _Bioconductor_
packages that are under development.
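For example, the following call (not evaluated here) switches an installation
to the development version; the same `version=` argument accepts `"devel"` in
place of a numeric version.

```{r, eval = FALSE}
BiocManager::install(version = "devel")
```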
## Managing multiple versions {#multiple-versions}
It is possible to have multiple versions of _Bioconductor_ installed on the
same computer. A best practice is to [create an initial _R_ installation][6].
Then create and use a library for each version of _Bioconductor_. The library
will contain all _Bioconductor_, CRAN, and other packages for that version of
_Bioconductor_. We illustrate the process assuming use of _Bioconductor_
version 3.7, available using _R_ version 3.5
Create a directory to contain the library (replace `USER_NAME` with your user
name on Windows)
- Linux: `~/R/3.5-Bioc-3.7`
- macOS: `~/Library/R/3.5-Bioc-3.7/library`
- Windows: `C:\Users\USER_NAME\Documents\R\3.5-Bioc-3.7`
Set the environment variable `R_LIBS_USER` to this directory, and invoke _R_.
Command line examples for Linux are
- Linux: `R_LIBS_USER=~/R/3.5-Bioc-3.7 R`
- macOS: `R_LIBS_USER=~/Library/R/3.5-Bioc-3.7/library R`
- Windows: `cmd /C "set R_LIBS_USER=C:\Users\USER_NAME\Documents\R\3.5-Bioc-3.7 && R"`
Once in _R_, confirm that the version-specific library path has been set
```{r, eval = FALSE}
.libPaths()
```
On Linux and macOS, create a bash alias to save typing, e.g.,
- Linux: `alias Bioc3.7='R_LIBS_USER=~/R/3.5-Bioc-3.7 R'`
- macOS: `alias Bioc3.7='R_LIBS_USER=~/Library/R/3.5-Bioc-3.7/library R'`
Invoke these from the command line as `Bioc3.7`.
On Windows, create a shortcut. Go to My Computer and navigate to a directory
that is in your PATH. Then right-click and choose New->Shortcut.
In the "type the location of the item" box, put:
```
cmd /C "set R_LIBS_USER=C:\Users\USER_NAME\Documents\R\3.5-Bioc-3.7 && R"
```
Click "Next". In the "Type a name for this shortcut" box, type `Bioc-3.7`.
## Offline use
Offline use of _BiocManager_ is possible for organizations and users that would
like to provide access to internal repositories of _Bioconductor_ packages
while enforcing appropriate version checks between Bioconductor and R.
For offline use, organizations and users require the following steps:
1. Use `rsync` to create local repositories of [CRAN][8] and
[Bioconductor][7]. Tell _R_ about these repositories using (e.g.,
in a site-wide `.Rprofile`, see `?.Rprofile`).
```{r, eval = FALSE}
options(
repos = c(CRAN_mirror = "file:///path/to/CRAN-mirror"),
BioC_mirror = "file:///path/to/Bioc-mirror"
)
```
Validate the repository settings by reviewing the output of `repositories()`, as in the sketch after this list.
2. Create an environment variable or option, e.g.,
```{r, eval = FALSE}
options(
BIOCONDUCTOR_ONLINE_VERSION_DIAGNOSIS = FALSE
)
```
3. Use `install.packages()` to bootstrap the BiocManager installation.
```{r, eval = FALSE}
install.packages(c("BiocManager", "BiocVersion"))
```
BiocManager can then be used for subsequent installations, e.g.,
`BiocManager::install(c("ggplot2", "GenomicRanges"))`.
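As a quick check of the offline setup sketched above (the mirror paths are
placeholders), review the repositories that `install()` will use and the
_Bioconductor_ version in effect:

```{r, eval = FALSE}
## repositories should point at the local CRAN and Bioconductor mirrors
BiocManager::repositories()
## the Bioconductor version reported by the installed BiocVersion package
BiocManager::version()
```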
### Offline config.yaml
_BiocManager_ also expects to reference an online configuration yaml
file for _Bioconductor_ version validation at
https://bioconductor.org/config.yaml. With offline use, users are
expected to either host this file locally or provide their
`config.yaml` version. The package allows either an environment
variable or R-specific option to locate this file, e.g.,
```{r, eval = FALSE}
options(
BIOCONDUCTOR_CONFIG_FILE = "file:///path/to/config.yaml"
)
```
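Equivalently, the location can be supplied as an environment variable of the
same name, set before _BiocManager_ is first used (the path below is a
placeholder):

```{r, eval = FALSE}
Sys.setenv(BIOCONDUCTOR_CONFIG_FILE = "file:///path/to/config.yaml")
```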
# How it works
BiocManager's job is to make sure that all packages are installed from
the same _Bioconductor_ version, using compatible _R_ and _CRAN_
packages. However, _R_ has an annual release cycle, whereas
_Bioconductor_ has a twice-yearly release cycle. Also, _Bioconductor_
has a 'devel' branch where new packages and features are introduced,
and a 'release' branch where bug fixes and relative stability are
important; _CRAN_ packages do not distinguish between devel and
release branches.
In the past, one would install a _Bioconductor_ package by evaluating
the command `source("https://.../biocLite.R")` to read a file from the
web. The file contained an installation script that was smart enough
to figure out what version of _R_ and _Bioconductor_ were in use or
appropriate for the person invoking the script. Sourcing an executable
script from the web is an obvious security problem.
Our solution is to use a CRAN package BiocManager, so that users
install from pre-configured CRAN mirrors rather than typing in a URL
and sourcing from the web.
But how does a CRAN package know what version of _Bioconductor_ is in
use? Can we use BiocManager? No, because we don't have enough control
over the version of BiocManager available on CRAN, e.g., everyone using
the same version of _R_ would get the same version of BiocManager and
hence of _Bioconductor_. But there are two _Bioconductor_ versions per R
version, so that does not work!
BiocManager could write information to a cache on the user disk, but
this is not a robust solution for a number of reasons. Is there any
other way that _R_ could keep track of version information? Yes, by
installing a _Bioconductor_ package (BiocVersion) whose sole purpose is
to indicate the version of _Bioconductor_ in use.
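A minimal illustration of this mechanism: the first two components of the
installed BiocVersion package version are the _Bioconductor_ version in use.

```{r, eval = FALSE}
packageVersion("BiocVersion")         # e.g., '3.9.0'
packageVersion("BiocVersion")[, 1:2]  # '3.9', the Bioconductor version in use
```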
By default, BiocManager installs the BiocVersion package corresponding
to the most recent released version of _Bioconductor_ for the version
of _R_ in use. At the time this section was written, the most recent
version of R was R-3.6.1, associated with _Bioconductor_ release
version 3.9. Hence on first use of `BiocManager::install()` we see
BiocVersion version 3.9.0 being installed.
```
> BiocManager::install()
Bioconductor version 3.9 (BiocManager 1.30.4), R 3.6.1 Patched (2019-07-06
r76792)
Installing package(s) 'BiocVersion'
trying URL 'https://bioconductor.org/packages/3.9/bioc/src/contrib/\
BiocVersion_3.9.0.tar.gz'
...
```
Requesting a specific version of _Bioconductor_ updates, if possible,
the BiocVersion package.
```
> ## 3.10 is available for R-3.6
> BiocManager::install(version="3.10")
Upgrade 3 packages to Bioconductor version '3.10'? [y/n]: y
Bioconductor version 3.10 (BiocManager 1.30.4), R 3.6.1 Patched (2019-07-06
r76792)
Installing package(s) 'BiocVersion'
trying URL 'https://bioconductor.org/packages/3.10/bioc/src/contrib/\
BiocVersion_3.10.1.tar.gz'
...
> ## back down again...
> BiocManager::install(version="3.9")
Downgrade 3 packages to Bioconductor version '3.9'? [y/n]: y
Bioconductor version 3.9 (BiocManager 1.30.4), R 3.6.1 Patched (2019-07-06
r76792)
Installing package(s) 'BiocVersion'
trying URL 'https://bioconductor.org/packages/3.9/bioc/src/contrib/\
BiocVersion_3.9.0.tar.gz'
...
```
Answering `n` to the prompt to up- or downgrade packages leaves the
installation unchanged, since this would immediately create an
inconsistent installation.
# Troubleshooting
## Package not available
(An initial draft of this section was produced by ChatGPT on 25 May 2023)
A user failed to install the 'celldex' package on 25 May 2023. A
transcript of the _R_ session is as follows:
```
> BiocManager::version()
[1] '3.18'
> BiocManager::install("celldex")
Bioconductor version 3.18 (BiocManager 1.30.20), R 4.3.0 Patched (2023-05-01
r84362)
Installing package(s) 'celldex'
Warning message:
package 'celldex' is not available for Bioconductor version '3.18'
A version of this package for your version of R might be available elsewhere,
see the ideas at
https://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages
```
The availability of specific packages within _Bioconductor_ can depend
on various factors, including simple errors in entering the package
name, the package's development status, maintenance, and compatibility
with the latest version of _Bioconductor_, as well as the availability
of CRAN packages that the _Bioconductor_ package depends on.
Package Name: _R_ package names are case sensitive and must be spelt
correctly, so using `BiocManager::install("Celldex")` (with a capital
`C`) or `BiocManager::install("celdex")` (with only one `l`) would
both fail to install `celldex`; _R_ will sometimes suggest the correct
name.
_CRAN_ Packages: `BiocManager::install()` tries to install packages
from CRAN and from _Bioconductor_. Check that the package is not a CRAN
package by trying to visit the CRAN 'landing page'
- `https://cran.R-project.org/package=celldex`
If this page is found, then the package is a CRAN package; see the
[R-admin][9] manual section on troubleshooting CRAN package
installations.
Check also that the package is not a CRAN package that has been
'archived' and no longer available by trying to visit
- `https://cran.R-project.org/src/contrib/Archive/celldex/`
If this page exists but the 'landing page' does not, this means that
the package has been removed from CRAN. While it is possible to
install archived packages, usually the best course of action is to
identify alternative packages to accomplish the task you are
interested in. This is especially true if the 'Last modified' date of
the most recent archived package is more than several months ago.
Compatibility: A _Bioconductor_ package must be available for the
specific version of _Bioconductor_ you are using. Try visiting the
'landing page' of the package for your version of _Bioconductor_,
e.g., for _Bioconductor_ version 3.18 and package celldex
- https://bioconductor.org/packages/3.18/celldex
If this landing page does not exist, then the package is not available
for your version of _Bioconductor_.
Users may sometimes have an out-of-date version of _R_ or
_Bioconductor_ installed; this may be intentional (e.g., to ensure
reproducibility of existing analyses) or simply because _Bioconductor_
has not yet been updated. Try visiting the current release landing
page
- https://bioconductor.org/packages/release/celldex
If the release landing page exists, and it is not important that you
continue using the out-of-date version of _Bioconductor_, consider
updating _R_ (if necessary) and _Bioconductor_ to the current release
versions using instructions at the top of this document.
Packages recently contributed to _Bioconductor_ are added to the
'devel' branch, whereas most users are configured to use the 'release'
branch. Try visiting the 'devel' landing page
- https://bioconductor.org/packages/devel/celldex
If only the devel landing page exists, then consider updating your
installation to use the development version of _Bioconductor_. Note
that the development version is not as stable as the release version,
so should not be used for time-critical or 'production' analysis.
It may be that the package you are interested in has been removed from
_Bioconductor_. Check this by visiting
- https://bioconductor.org/about/removed-packages/
If the package has been removed, the best course of action is to
identify alternative packages to accomplish the task you are
interested in.
Maintenance and Operating System Availability: A package may be
included in the release or devel version of _Bioconductor_, but
currently unavailable because it requires maintenance. This might be
indicated by a red 'build' badge as in the image below (details of the
build problem are available by clicking on the badge). The build error
usually requires that the package maintainer correct an issue with
their package; the maintainer and email address are listed on the
package landing page.
```{r out.width = '100%', echo = FALSE, eval = TRUE}
knitr::include_graphics("img/badges.png")
```
A small number of _Bioconductor_ packages are not available on all
operating systems. An orange 'platforms' badge indicates this. Click
on the badge to be taken to the 'Package Archives' section of the
landing page; BGmix is not supported on Windows, and not available on
'Intel' macOS because of build-related errors. Consider using an
alternative operating system if the package is essential to your work.
```{r out.width = '100%', echo = FALSE, eval = TRUE}
knitr::include_graphics("img/archives.png")
```
Packages with landing pages from older releases but not available for
your operating system cannot be updated by the maintainer. If the
package is available in the current release and for your operating
system, consider updating to the current release of _Bioconductor_.
## Cannot load _BiocManager_
After updating _R_ (e.g., from _R_ version 3.5.x to _R_ version 3.6.x
at the time of writing this) and trying to load `BiocManager`, _R_
replies
```
Error: .onLoad failed in loadNamespace() for 'BiocManager', details:
call: NULL
error: Bioconductor version '3.8' requires R version '3.5'; see
https://bioconductor.org/install
```
This problem arises because `BiocManager` uses a second package,
`BiocVersion`, to indicate the version of _Bioconductor_ in use. In
the original installation, `BiocManager` had installed `BiocVersion`
appropriate for _R_ version 3.5. With the update, the version of
_Bioconductor_ indicated by `BiocVersion` is no longer valid -- you'll
need to update `BiocVersion` and all _Bioconductor_ packages to the
most recent version available for your new version of _R_.
The recommended practice is to maintain a separate library for each
_R_ and _Bioconductor_ version. So instead of installing packages into
_R_'s system library (e.g., as 'administrator'), install only base _R_
into the system location. Then use aliases or other approaches to
create _R_ / _Bioconductor_ version-specific installations. This is
described in the section on [maintaining multiple
versions](#multiple-versions) of _R_ and _Bioconductor_.
Alternatively, one could update all _Bioconductor_ packages in the
previous installation directory. The problem with this is that the
previous version of _Bioconductor_ is removed, compromising the
ability to reproduce earlier results. Update all _Bioconductor_
packages in the previous installation directory by removing _all_
versions of `BiocVersion`
```
remove.packages("BiocVersion") # repeat until all instances removed
```
Then install the updated `BiocVersion`, and update all _Bioconductor_
packages; answer 'yes' when you are asked to update a potentially
large number of _Bioconductor_ packages.
```{r, eval = FALSE}
BiocManager::install()
```
Confirm that the updated _Bioconductor_ is valid for your version of
_R_
```{r, eval = FALSE}
BiocManager::valid()
```
## Timeout during package download
Large packages can take a long time to download over poor internet
connections. The BiocManager package sets the time limit to 300 seconds,
using `options(timeout = 300)`. Only part of a package may download,
e.g., only 15.1 of 79.4 MB in the example below
```
trying URL 'https://bioconductor.org/packages/3.12/data/annotation/src/contrib/org.Hs.eg.db_3.12.0.tar.gz'
Content type 'application/x-gzip' length 83225518 bytes (79.4 MB)
=========
downloaded 15.1 MB
```
or perhaps with a warning (often difficult to see in the output)
```
Error in download.file(url, destfile, method, mode = "wb", ...) :
...
...: Timeout of 300 seconds was reached
...
```
Try increasing the download timeout, e.g., `options(timeout = 600)`.
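A sketch of a more lasting work-around is to raise the limit before
installing (or in a start-up file such as `~/.Rprofile`); the package name
below is the one from the example above:

```{r, eval = FALSE}
options(timeout = 600)   # seconds; increase further on very slow connections
BiocManager::install("org.Hs.eg.db")
```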
## Multiple `BiocVersion` installations
One potential problem occurs when there are two or more `.libPaths()`,
with more than one BiocVersion package installed. This might occur for
instance if a 'system administrator' installed BiocVersion, and then a
user installed their own version. In this circumstance, it seems
appropriate to standardize the installation by repeatedly calling
`remove.packages("BiocVersion")` until all versions are removed, and
then installing the desired version.
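A minimal sketch of that clean-up; the version number is a placeholder for
the release you actually want:

```{r, eval = FALSE}
## remove every copy of BiocVersion, whichever library it is installed in
libs <- .libPaths()[file.exists(file.path(.libPaths(), "BiocVersion"))]
for (lib in libs)
    remove.packages("BiocVersion", lib = lib)
## then install the desired version
BiocManager::install(version = "3.18")
```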
## Errors determining _Bioconductor_ version
An essential task for _BiocManager_ is to determine that the version
of _Bioconductor_ is appropriate for the version of _R_. Several
errors can occur when this task fails.
- Bioconductor version cannot be determined; no internet connection?
When the _Bioconductor_ version cannot be obtained from the version
map hosted at https://bioconductor.org/config.yaml, this error will
occur. It may be a result of poor internet connectivity or offline
use. See the [offline config.yaml](#offline-config.yaml) section
above.
- Bioconductor version cannot be validated; no internet connection?
Usually occurs when the map cannot be downloaded, possibly due to a
missing `BIOCONDUCTOR_CONFIG_FILE`. For offline use, a copy of the
configuration file should be downloaded and its location supplied via
the environment variable or option.
- Bioconductor version map cannot be validated; is it misconfigured?
On _rare_ occasion, the version map hosted at
https://bioconductor.org/config.yaml may be misconfigured. The check
ensures that all the version name tags, i.e., out-of-date, release,
devel, and future are in the map.
- Bioconductor version cannot be validated; is type input
mis-specified? The type input refers to the version name inputs,
mainly release and devel. This error is chiefly due to internal
logic and is not due to user error. Please open a [GitHub
issue][10].
# Session information
```{r, eval = TRUE}
sessionInfo()
```
[1]: https://cran.r-project.org/package=BiocManager
[2]: https://bioconductor.org
[3]: https://support.bioconductor.org
[5]: https://bioconductor.org/about/release-announcements/
[6]: https://cran.R-project.org/
[7]: https://bioconductor.org/about/mirrors/mirror-how-to/
[8]: https://cran.r-project.org/mirror-howto.html
[9]: https://cran.r-project.org/doc/manuals/r-patched/R-admin.html#Installing-packages
[10]: https://github.com/Bioconductor/BiocManager/issues
|
/scratch/gouwar.j/cran-all/cranData/BiocManager/inst/doc/BiocManager.Rmd
|
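# CalculateHUM_Ex: for every combination of 'amountL' class labels drawn from
# 'allLabel', compute the HUM value of each feature in 'indexF' via the
# compiled CalcGene routine and record the corresponding label ordering.
# Returns a list with a matrix ('HUM') holding, for each label combination,
# the labels and the per-feature HUM values, plus the per-combination
# orderings ('seq').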
CalculateHUM_Ex<-function(data,indexF,indexClass,allLabel,amountL)
{
#library(Rcpp)
#library(gtools)
dataEach=NULL
class.index=NULL
for(i in 1:length(allLabel))
{
vrem=which(data[,indexClass]==allLabel[i])
dataEach=c(dataEach,list(data[vrem,indexF,drop = FALSE]))
class.index=c(class.index,list(vrem))
}
indexLabel<-combn(allLabel,amountL)
output<-matrix(ncol=(length(indexF)+amountL),nrow=ncol(indexLabel))
seqAll=NULL
#cycle for different label combinations
for(j in 1:ncol(indexLabel))
{
indexUnion=NULL
indexL=NULL
for(i in 1:amountL)
{
v.class=which(allLabel==indexLabel[i,j])
indexL=c(indexL,v.class)
output[j,i]<-indexLabel[i,j]
indexUnion=union(indexUnion,class.index[[v.class]])
}
seqMax=NULL
seq=gtools::permutations(amountL,amountL,1:amountL)
for(i in 1:length(indexF))
{
s_data=NULL
dataV=data[,indexF[i]]
prodValue=1
for (k in 1:amountL)
{
vrem=sort(dataEach[[indexL[k]]][,i])
s_data=c(s_data,list(vrem))
prodValue = prodValue*length(vrem)
}
#calculate the threshold values for the 2D ROC and 3D ROC plots
thresholds <- sort(unique(dataV[indexUnion]))
thresholds=(c(-Inf, thresholds) + c(thresholds, +Inf))/2
out=CalcGene(s_data,seq, prodValue,thresholds)
output[j,(amountL+i)]<-out$HUM
seqMax=cbind(seqMax,out$seq)
}
colnames(seqMax)=names(data[,indexF,drop = FALSE])
seqAll=c(seqAll,list(seqMax))
}
name<-NULL
for(i in 1:amountL)
{
name<-c(name,paste("Diagnosis",i,sep=""))
}
colnames(output)<-c(name,indexF)
return(list(HUM=output,seq=seqAll))
}
|
/scratch/gouwar.j/cran-all/cranData/Biocomb/R/CalculateHUM_Ex.R
|
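# CalculateHUM_ROC: for the class labels in 'indexLabel', compute the ROC
# coordinates of each feature in 'indexF' (sensitivity, specificity and, for
# the three-class case, a third coordinate) via the compiled CalcROC routine,
# together with the optimal operating point for each feature.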
CalculateHUM_ROC<-function(data,indexF,indexClass,indexLabel,seq)
{
indexL=NULL
label=levels(data[,indexClass])
if(is.null(label)) return()
for(i in 1:length(indexLabel))
{
indexL=c(indexL,which(label==indexLabel[i]))
}
Sp=NULL
Sn=NULL
S3=NULL
optSp=NULL
optSn=NULL
optS3=NULL
indexEach=NULL
indexUnion=NULL
for(i in 1:length(label))
{
vrem=which(data[,indexClass]==label[i])
indexEach=c(indexEach,list(vrem))
if(label[i]%in%indexLabel)
indexUnion=union(indexUnion,vrem)
}
for(i in 1:length(indexF))
{
s_data=NULL
dataV=data[,indexF[i]]
prodValue=1
for (j in 1:length(indexLabel))
{
vrem=sort(dataV[indexEach[[indexL[j]]]])
s_data=c(s_data,list(vrem))
prodValue = prodValue*length(vrem)
}
#calculate the threshold values for the 2D ROC and 3D ROC plots
thresholds <- sort(unique(dataV[indexUnion]))
thresholds=(c(-Inf, thresholds) + c(thresholds, +Inf))/2
out=CalcROC(s_data,seq[,indexF[i]], thresholds)
#out=CalcROC(s_data,seq[,i], thresholds)
Sp=c(Sp,list(out$Sp))
Sn=c(Sn,list(out$Sn))
optSp=c(optSp,out$optSp)
optSn=c(optSn,out$optSn)
if(length(indexLabel)==3)
{
S3=c(S3,list(out$S3))
optS3=c(optS3,out$optS3)
}
}
names(optSp)=names(data[,indexF,drop = FALSE])
names(optSn)=names(data[,indexF,drop = FALSE])
names(Sp)=names(data[,indexF,drop = FALSE])
names(Sn)=names(data[,indexF,drop = FALSE])
if(length(indexLabel)==3)
{
names(S3)=names(data[,indexF,drop = FALSE])
names(optS3)=names(data[,indexF,drop = FALSE])
}
if(length(indexLabel)==3)
{
return(list(Sp=Sp,Sn=Sn,S3=S3,thresholds=thresholds,optSn=optSn,optSp=optSp,optS3=optS3))
}
else
{
return(list(Sp=Sp,Sn=Sn,optSn=optSn,optSp=optSp))
}
}
|
/scratch/gouwar.j/cran-all/cranData/Biocomb/R/CalculateHUM_ROC.R
|
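# CalculateHUM_seq: compute the HUM value of each feature in 'indexF' over the
# permutations of the selected class labels ('indexLabel') via the compiled
# CalcGene routine, returning a named vector of HUM values ('HUM') and the
# corresponding label orderings ('seq').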
CalculateHUM_seq<-function(data,indexF,indexClass,indexLabel)
{
indexL=NULL
label=levels(data[,indexClass])
if(is.null(label)) return()
for(i in 1:length(indexLabel))
{
indexL=c(indexL,which(label==indexLabel[i]))
}
output=NULL
seqMax=NULL
indexEach=NULL
indexUnion=NULL
for(i in 1:length(label))
{
vrem=which(data[,indexClass]==label[i])
indexEach=c(indexEach,list(vrem))
if(label[i]%in%indexLabel)
indexUnion=union(indexUnion,vrem)
}
len=length(indexL)
seq=gtools::permutations(len,len,1:len)
for(i in 1:length(indexF))
{
s_data=NULL
dataV=data[,indexF[i]]
prodValue=1
for (j in 1:length(indexLabel))
{
vrem=sort(dataV[indexEach[[indexL[j]]]])
s_data=c(s_data,list(vrem))
prodValue = prodValue*length(vrem)
}
#seq=sort(d_median,index.return=TRUE)
#out=CalcGene(s_data,seq$ix, prodValue)
#calculate the threshold values for plot of 2D ROC and 3D ROC
thresholds <- sort(unique(dataV[indexUnion]))
thresholds=(c(-Inf, thresholds) + c(thresholds, +Inf))/2
out=CalcGene(s_data,seq,prodValue,thresholds)
output=c(output,out$HUM)
seqMax=cbind(seqMax,out$seq)
}
names(output)=names(data[,indexF,drop = FALSE])
colnames(seqMax)=names(output)
#return(output)
return(list(HUM=output,seq=seqMax))
}
|
/scratch/gouwar.j/cran-all/cranData/Biocomb/R/CalculateHUM_seq.R
|
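# CalculateHUM_Plot: draw the 2D ROC curve for the feature 'sel', shade the
# area under it, annotate it with the AUC/HUM value and, when 'print.optim'
# is TRUE, mark the optimal operating point.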
CalculateHUM_Plot<-function(sel,Sn,Sp,optSn,optSp,HUM,print.optim=TRUE)
{
x_coor=1-Sn[[sel]]
plot(x_coor, Sp[[sel]], xlab="1 - Specificity", ylab="Sensitivity", type="n", xlim=c(0,1), ylim=c(0,1), lwd=2, asp=1)
polygon(c(1,x_coor, 0,1), c(0,Sp[[sel]], 0,0), col="gainsboro",angle=45,border=NULL)
text(0.5,0.5, sprintf("AUC= %.3f", HUM[sel]), adj=c(0,1), cex=1, col="red")
if(print.optim)
{
points(1-optSn[sel],optSp[sel],cex=1, col="blue")
text(1-optSn[sel],optSp[sel], sprintf("(%.3f,%.3f)",optSp[sel],optSn[sel]), adj=c(0,1), cex=1, col="blue")
}
lines(x_coor, Sp[[sel]], type="l", lwd=2, col="red")
}
Calculate3D<-function(sel,Sn,Sp,S3,optSn,optSp,optS3,thresholds,HUM,name,print.optim=TRUE)
{
len=length(thresholds)
z=matrix(S3[[sel]],len,len,byrow=TRUE)
ym=matrix(Sp[[sel]],len,len,byrow=TRUE)
vrem=seq(1,length(Sn[[sel]]),by=len)
x=Sn[[sel]][vrem]
y=Sp[[sel]][1:len]
xrow=unique(x)
yrow=unique(y)
zz=matrix(0,length(xrow),length(yrow))
indexrow=NULL
indexcol=NULL
for(i in 1:length(xrow))
{
rr=which(x==xrow[i])
zr=z[rr,]
for(j in 1:length(yrow))
{
index=which(ym[rr,]==yrow[j])
if(length(index)>0)
{
zz[i,j]=max(zr[index])
}
}
}
for(j in 1:length(xrow))
{
# insert to exclude zeros in matrix zz
index=which(zz[j,]==0)
k=1
if(length(index)!=0)
{
if(length(index)!=1)
{
for(i in 2:length(index))
{
if(index[i]!=(index[i-1]+1))
{
for(ll in k:(i-1))
{
zz[j,index[ll]]=zz[j,index[i-1]+1]
}
k=i
}
}
if(index[length(index)]<length(yrow))
{
for(ll in k:length(index))
{
zz[j,index[ll]]=zz[j,index[length(index)]+1]
}
}
}
else
{
if(index[1]<length(yrow))
{
zz[j,index[1]]=zz[j,index[1]+1]
}
}
}
}
#--------
if(length(which(loadedNamespaces()=="rgl"))!=0)
{
clear3d()
out=persp3d(xrow,yrow,zz,theta = 120, phi = 10, expand = 0.5,ticktype = "detailed",col="#CC00FFFF",
ltheta = -120, shade = 0.75, border = NA, main=sel,xlab = name[1], ylab = name[2], zlab = name[3],
xlim=c(0,1), ylim=c(0,1), zlim=c(0,1))
if(print.optim)
{
points3d(optSn[sel],optSp[sel],optS3[sel],col="red",size=8)
text3d(optSn[sel],optSp[sel],optS3[sel]+0.2,sprintf("(%.3f,%.3f,%.3f)",optSn[sel],optSp[sel],optS3[sel]),col="blue")
}
rgl.bringtotop()
}
persp(xrow,yrow,zz,theta = 50, phi = 15, expand = 0.5,ticktype = "detailed",col="#CC00FFFF",
ltheta = -120, shade = 0.75, border = NA, main=sel,xlab = name[1], ylab =name[2], zlab = name[3],xlim=c(0,1), ylim=c(0,1), zlim=c(0,1),
cex.lab=2, cex.main=2, cex.axis=1.5)
}
|
/scratch/gouwar.j/cran-all/cranData/Biocomb/R/ROC_Plot.R
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
fun1_chi <- function(data, classI) {
.Call(`_Biocomb_fun1_chi`, data, classI)
}
fun2_chi <- function(int_l, mat_int) {
.Call(`_Biocomb_fun2_chi`, int_l, mat_int)
}
fun3_chi <- function(chi_s, int_l, datain, chi_value, mat_int) {
.Call(`_Biocomb_fun3_chi`, chi_s, int_l, datain, chi_value, mat_int)
}
check_incons <- function(data, vrem_nom, cl) {
.Call(`_Biocomb_check_incons`, data, vrem_nom, cl)
}
fun4_chi <- function(chi_s, int_l, datain, vrem_nominal, chi_attr, sig_attr, cl, mat_int, threshold, df, step, delta, shag) {
.Call(`_Biocomb_fun4_chi`, chi_s, int_l, datain, vrem_nominal, chi_attr, sig_attr, cl, mat_int, threshold, df, step, delta, shag)
}
forward_path <- function(features, m3) {
.Call(`_Biocomb_forward_path`, features, m3)
}
CalcGene <- function(s_data, seqAll, prodValue, thresholds) {
.Call(`_Biocomb_CalcGene`, s_data, seqAll, prodValue, thresholds)
}
CalcROC <- function(s_data, seq, thresholds) {
.Call(`_Biocomb_CalcROC`, s_data, seq, thresholds)
}
|
/scratch/gouwar.j/cran-all/cranData/Biocomb/R/RcppExports.R
|
Sub.filename<-function(filename){
if(grepl(".xls",filename)){
name<-sub(".xls","",filename)
}
if(grepl(".xlsx",filename)){
name<-sub(".xlsx","",filename)
}
if(grepl(".csv",filename)){
name<-sub(".csv","",filename)
}
return(name)
}
|
/scratch/gouwar.j/cran-all/cranData/Biocomb/R/Sub.filename.R
|
do.numeric<-function(char){
num<-NULL
if(grepl("^\\d+\\,\\d+$", char)){
num<-as.numeric(sub(",",".",char))
}
if(grepl("^\\d+\\/\\d+$", char)){
str<-unlist(strsplit(char,"[ /]"))
num<-as.numeric(str[1])/as.numeric(str[2])
}
if(grepl("^\\d+$", char)|grepl("^\\d+\\.\\d+$",char)){
num<-as.numeric(char)
}
return(num)
}
char.to.numeric<-function(char){
if(substring(char,1,1)=="-"){
char<-substring(char,2)
num<-do.numeric(char)
num<-num*(-1)
}else{
num<-do.numeric(char)
}
return(num)
}
|
/scratch/gouwar.j/cran-all/cranData/Biocomb/R/char.to.numeric.R
|