---
title: "Mosaic plots"
author: "Michael Friendly"
date: "`r Sys.Date()`"
package: vcdExtra
output:
  rmarkdown::html_vignette:
    fig_caption: yes
bibliography: ["vcd.bib", "vcdExtra.bib"]
csl: apa.csl
vignette: >
  %\VignetteIndexEntry{Mosaic plots}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  warning = FALSE,
  fig.height = 6,
  fig.width = 7,
  fig.path = "fig/tut04-",
  dev = "png",
  comment = "##"
)
# save some typing
knitr::set_alias(w = "fig.width",
                 h = "fig.height",
                 cap = "fig.cap")
# Load packages
set.seed(1071)
library(vcd)
library(vcdExtra)
library(ggplot2)
library(seriation)
data(HairEyeColor)
data(PreSex)
data(Arthritis, package="vcd")
art <- xtabs(~Treatment + Improved, data = Arthritis)
if(!file.exists("fig")) dir.create("fig")
```
Mosaic plots provide an ideal method both for visualizing contingency tables and for
visualizing the fit, or more importantly, the **lack of fit**, of a loglinear model.
For a two-way table, `mosaic()`, by default, fits a model of independence, $[A][B]$
or `~A + B` as an R formula. The `vcdExtra` package extends this to models fit
using `glm(..., family=poisson)`, which can include specialized models for
ordered factors, or square tables that are intermediate between the saturated model,
$[A B]$ = `A * B`, and the independence model $[A][B]$.
For $n$-way tables, `vcd::mosaic()` can fit any loglinear model, and can also be
used to plot a model fit with `MASS::loglm()`. The `vcdExtra` package extends this
to models fit using `stats::glm()` and, by extension, to non-linear models fit
using the [gnm package](https://cran.r-project.org/package=gnm).
See @vcd:Friendly:1994, @vcd:Friendly:1999 for the statistical ideas behind these
uses of mosaic displays in connection with loglinear models. Our book @FriendlyMeyer:2016:DDAR
gives a detailed discussion of mosaic plots and many more examples.
The essential ideas are to:
* recursively sub-divide a unit square into rectangular "tiles" for the
cells of the table, such that the area of each tile is proportional to the cell frequency. Tiles are split in a sequential order:
+ First according to the **marginal** proportions of a first variable, V1
+ Next according to the **conditional** proportions of a 2nd variable, V2 | V1
+ Next according to the **conditional** proportions of a 3rd variable, V3 | {V1, V2}
+ ...
* The tiles can then be shaded in various ways to reflect
the residuals (lack of fit) for a given loglinear model.
* The pattern of residuals can then
be used to suggest a better model or understand *where* a given model fits or
does not fit.
`mosaic()` provides a wide range of options for the directions of splitting,
the specification of shading, labeling, spacing, legend and many other details.
It is actually implemented as a special case of a more general
class of displays for $n$-way tables called `strucplot`, which also includes
sieve diagrams, association plots, and double-decker plots.
For details, see `help(strucplot)` and the "See also" links therein,
and also @vcd:Meyer+Zeileis+Hornik:2006b, which is available as
an R vignette via `vignette("strucplot", package="vcd")`.
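To make the connection concrete, the same `art` table created in the setup chunk can be passed to the other strucplot front ends; a minimal sketch (not evaluated here):

```{r strucplot-family, eval=FALSE}
# Two other members of the strucplot framework, applied to the same table
sieve(art, shade = TRUE)   # sieve diagram
assoc(art, shade = TRUE)   # association plot
```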
***Example***:
A mosaic plot for the Arthritis treatment data fits the model of independence,
`~ Treatment + Improved` and displays the association in the pattern of
residual shading. The goal is to visualize the difference in the proportions
of `Improved` for the two levels of `Treatment` : "Placebo" and "Treated".
The plot below is produced with the following call to `mosaic()`.
With the first split by `Treatment` and the shading used, it is easy to see
that more people given the placebo experienced no improvement, while more
people given the active treatment reported marked improvement.
```{r}
#| Arthritis1,
#| fig.height = 6,
#| fig.width = 7,
#| fig.cap = "Mosaic plot for the `Arthritis` data, using `shading_max`"
data(Arthritis, package="vcd")
art <- xtabs(~Treatment + Improved, data = Arthritis)
mosaic(art, gp = shading_max,
split_vertical = TRUE,
main="Arthritis: [Treatment] [Improved]")
```
`gp = shading_max` specifies that color in the plot signals a residual significant at the 90% or 99% level,
with the more intense shade for 99%.
Note that the residuals for the independence model were not large
(as shown in the legend),
yet the association between `Treatment` and `Improved`
is highly significant.
```{r, art1}
summary(art)
```
In contrast, one of the other shading schemes, from @vcd:Friendly:1994
(use: `gp = shading_Friendly`),
uses fixed cutoffs of $\pm 2$ and $\pm 4$
to shade cells which are *individually* significant
at approximately the $\alpha = 0.05$ and $\alpha = 0.001$ levels, respectively.
The plot below uses `gp = shading_Friendly`.
```{r}
#| Arthritis2,
#| fig.height = 6,
#| fig.width = 7,
#| fig.cap = "Mosaic plot for the `Arthritis` data, using `shading_Friendly`"
mosaic(art, gp = shading_Friendly,
split_vertical = TRUE,
main="Arthritis: gp = shading_Friendly")
```
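The Pearson residuals that these shading schemes encode can also be computed directly; a minimal sketch using `chisq.test()` (not evaluated here):

```{r art-resid, eval=FALSE}
# Pearson residuals of the independence model, as encoded by the shading
chisq.test(art)$residuals
```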
## Permuting variable levels
Mosaic plots using tables or frequency data frames as input typically take the levels of the
table variables in the order presented in the dataset. For character variables, this is often
alphabetical order. That might be helpful for looking up a value, but is unhelpful for seeing
and understanding the pattern of association.
It is usually much better to order the levels of the row and column variables to help reveal
the nature of their association. This is an example of **effect ordering for data display**
[@FriendlyKwan:02:effect].
***Example***:
Data from @Glass:54 gave this 5 x 5 table on the occupations of 3500 British fathers and their sons, where the occupational categories are listed in alphabetic order.
```{r glass}
data(Glass, package="vcdExtra")
(glass.tab <- xtabs(Freq ~ father + son, data=Glass))
```
The mosaic display shows very strong association, but aside from the
diagonal cells, the pattern is unclear. Note the use of
`set_varnames` to give more descriptive labels for the variables,
`abbreviate` to shorten the occupational category labels,
and `interpolate` to set the shading levels for the mosaic.
```{r glass-mosaic1}
largs <- list(set_varnames=list(father="Father's Occupation",
son="Son's Occupation"),
abbreviate=10)
gargs <- list(interpolate=c(1,2,4,8))
mosaic(glass.tab,
shade=TRUE,
labeling_args=largs,
gp_args=gargs,
main="Alphabetic order",
legend=FALSE,
rot_labels=c(20,90,0,70))
```
The occupational categories differ in **status**, and can be reordered correctly as follows,
from `Professional` down to `Unskilled`.
```{r glass-order}
# reorder by status
ord <- c(2, 1, 4, 3, 5)
row.names(glass.tab)[ord]
```
The revised mosaic plot can be produced by indexing the rows and columns of
the table using `ord`.
```{r glass-mosaic2}
mosaic(glass.tab[ord, ord],
shade=TRUE,
labeling_args=largs,
gp_args=gargs,
main="Effect order",
legend=FALSE,
rot_labels=c(20,90,0,70))
```
From this, and for the examples in the next section, it is useful to re-define
`father` and `son` as **ordered** factors in the original `Glass` frequency data.frame.
```{r glass-ord}
Glass.ord <- Glass
Glass.ord$father <- ordered(Glass.ord$father, levels=levels(Glass$father)[ord])
Glass.ord$son <- ordered(Glass.ord$son, levels=levels(Glass$son)[ord])
str(Glass.ord)
```
## Square tables
For mobility tables such as this, where the rows and columns refer to the same
occupational categories, it comes as no surprise that there is a strong association
in the diagonal cells: most often, sons remain in the same occupational categories
as their fathers.
However, the re-ordered mosaic display also reveals something subtler:
when a son differs in occupation from the father, it is more likely that
he will appear in a category one-step removed than more steps removed.
The residuals seem to decrease with the number of steps from the diagonal.
For such tables, specialized loglinear models provide interesting cases
intermediate between the independence model, [A] [B],
and the saturated model, [A B]. These can be fit using `glm()`, with the data
in frequency form,
```
glm(Freq ~ A + B + assoc, data = ..., family = poisson)
```
where `assoc` is a special term to handle a restricted form of association, different from
`A:B` which specifies the saturated model in this notation.
* **Quasi-independence**: Asserts independence, but ignores the diagonal cells by
fitting them exactly. The loglinear model is:
$\log m_{ij} = \mu + \lambda^A_i + \lambda^B_j + \delta_i I(i = j)$,
where $I()$ is the indicator function.
* **Symmetry**: This model asserts that the joint distribution of the row and column variables
is symmetric, that is $\pi_{ij} = \pi_{ji}$:
A son is equally likely to move from their father's occupational category $i$ to another
category, $j$, as the reverse, moving from $j$ to $i$.
Symmetry is quite strong, because it also implies
**marginal homogeneity**, that the marginal probabilities of the row and column variables
are equal,
$\pi_{i+} = \sum_j \pi_{ij} = \sum_j \pi_{ji} = \pi_{+i}$ for all $i$.
* **Quasi-symmetry**: This model uses the standard main-effect terms in the loglinear
model, but asserts that the association parameters are symmetric,
$\log m_{ij} = \mu + \lambda^A_i + \lambda^B_j + \lambda^{AB}_{ij}$,
where $\lambda^{AB}_{ij} = \lambda^{AB}_{ji}$.
The [gnm package](https://cran.r-project.org/package=gnm) provides functions for such terms:
`gnm::Diag()` and `gnm::Symm()` construct diagonal and symmetric association terms, and `gnm::Topo()` handles a general interaction factor specified by an array of levels, which may be arbitrarily structured.
For example, the following generates a term for a diagonal factor in a $4 \times 4$
table. The diagonal values reflect parameters fitted for each diagonal cell. Off-diagonal
values, shown as ".", are ignored.
```{r diag}
rowfac <- gl(4, 4, 16)
colfac <- gl(4, 1, 16)
diag4by4 <- Diag(rowfac, colfac)
matrix(Diag(rowfac, colfac, binary = FALSE), 4, 4)
```
`Symm()` constructs parameters for symmetric cells. The particular values don't matter.
All that matters is that the same value, e.g., `1:2`, appears in both the (1,2) and
(2,1) cells.
```{r symm}
symm4by4 <- Symm(rowfac, colfac)
matrix(symm4by4, 4, 4)
```
***Example***:
To illustrate, we fit the four models below, starting with the independence model
`Freq ~ father + son` and then adding terms to reflect the restricted
forms of association, e.g., `Diag(father, son)` for diagonal terms and
`Symm(father, son)` for symmetry.
```{r glass-models}
library(gnm)
glass.indep <- glm(Freq ~ father + son,
data = Glass.ord, family=poisson)
glass.quasi <- glm(Freq ~ father + son + Diag(father, son),
data = Glass.ord, family=poisson)
glass.symm <- glm(Freq ~ Symm(father, son),
data = Glass.ord, family=poisson)
glass.qsymm <- glm(Freq ~ father + son + Symm(father, son),
data = Glass.ord, family=poisson)
```
We can visualize these using the `vcdExtra::mosaic.glm()` method, which extends
mosaic displays to handle fitted `glm` objects. *Technical note*: for
models fitted using `glm()`, standardized residuals (`residuals_type="rstandard"`)
have better statistical properties than the default Pearson residuals in mosaic
plots and analysis.
```{r glass-quasi}
mosaic(glass.quasi,
residuals_type="rstandard",
shade=TRUE,
labeling_args=largs,
gp_args=gargs,
main="Quasi-Independence",
legend=FALSE,
rot_labels=c(20,90,0,70)
)
```
Mosaic plots for the other models would give further visual assessment of these models;
however, we can also test differences among them. For nested models, `anova()` gives tests
of how much better a more complex model is compared to the previous one.
```{r glass-anova}
# model comparisons: for *nested* models
anova(glass.indep, glass.quasi, glass.qsymm, test="Chisq")
```
Alternatively, `vcdExtra::LRstats()` gives model summaries for a collection of
models, not necessarily nested, with AIC and BIC statistics reflecting
model parsimony.
```{r glass-lrstats}
models <- glmlist(glass.indep, glass.quasi, glass.symm, glass.qsymm)
LRstats(models)
```
By all criteria, the model of quasi symmetry fits best. The residual deviance $G^2$
is not significant. The mosaic is largely unshaded, indicating a good fit, but there
are a few shaded cells that indicate the remaining positive and negative residuals.
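Because symmetry differs from quasi-symmetry only by the main-effect (marginal) terms, comparing these two nested models gives a test of marginal homogeneity. A sketch, reusing the models fitted above (not evaluated here):

```{r glass-mh, eval=FALSE}
# symmetry = quasi-symmetry + marginal homogeneity,
# so this deviance difference tests marginal homogeneity
anova(glass.symm, glass.qsymm, test = "Chisq")
```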
For comparative mosaic displays, it is sometimes useful to show the $G^2$ statistic
in the main title, using `vcdExtra::modFit()` for this purpose.
```{r glass-qsymm}
mosaic(glass.qsymm,
residuals_type="rstandard",
shade=TRUE,
labeling_args=largs,
gp_args=gargs,
main = paste("Quasi-Symmetry", modFit(glass.qsymm)),
legend=FALSE,
rot_labels=c(20,90,0,70)
)
```
## Correspondence analysis ordering
When natural orders for row and column levels are not given a priori, we can find
orderings that make more sense using correspondence analysis.
The general ideas are that:
* Correspondence analysis assigns scores to the row and column variables to best account for the association in 1, 2, ... dimensions
* The first CA dimension accounts for the largest proportion of the Pearson $\chi^2$
* Therefore, permuting the levels of the row and column variables by the CA Dim1 scores gives a more coherent mosaic plot, more clearly showing the nature of the association.
* The [seriation package](https://cran.r-project.org/package=seriation) now has a method to order variables in frequency tables using CA.
***Example***:
As an example, consider the `HouseTasks` dataset, a 13 x 4 table of frequencies of household tasks performed by couples, recorded as done by the `Husband`, the `Wife`, `Alternating`, or `Jointly`.
You can see from the table that some tasks (Repairs) are done largely by the husband; some (laundry, main meal)
are largely done by the wife, while others are done jointly or alternate between husband and
wife. But the `Task` and `Who` levels are both in alphabetical order.
```{r housetasks}
data("HouseTasks", package = "vcdExtra")
HouseTasks
```
The naive mosaic plot for this dataset is shown below, splitting first by `Task` and then by `Who`. Due to the length of the factor labels, some features of `labeling` were used to make the display more readable.
```{r housetasks-mos1}
require(vcd)
mosaic(HouseTasks, shade = TRUE,
labeling = labeling_border(rot_labels = c(45,0, 0, 0),
offset_label =c(.5,5,0, 0),
varnames = c(FALSE, TRUE),
just_labels=c("center","right"),
tl_varnames = FALSE),
legend = FALSE)
```
Correspondence analysis, using the [ca package](https://cran.r-project.org/package=ca),
shows that nearly 89% of the $\chi^2$ can be accounted for in two dimensions.
```{r housetasks-ca}
require(ca)
HT.ca <- ca(HouseTasks)
summary(HT.ca, rows=FALSE, columns=FALSE)
```
The CA plot has a fairly simple interpretation: Dim1 is largely the distinction between
tasks primarily done by the wife vs. the husband. Dim2 distinguishes tasks that are done
singly vs. those that are done jointly.
```{r housetasks-ca-plot}
plot(HT.ca, lines = TRUE)
```
So, we can use the `CA` method of `seriation::seriate()` to find the permutation of the
`Task` and `Who` levels along the CA dimensions.
```{r housetasks-seriation}
require(seriation)
order <- seriate(HouseTasks, method = "CA")
# the permuted row and column labels
rownames(HouseTasks)[order[[1]]]
colnames(HouseTasks)[order[[2]]]
```
Now, use `seriation::permute()` to apply `order` to permute the levels of `Task` and `Who`,
and plot the resulting mosaic:
```{r housetasks-mos2}
# do the permutation
HT_perm <- permute(HouseTasks, order, margin=1)
mosaic(HT_perm, shade = TRUE,
labeling = labeling_border(rot_labels = c(45,0, 0, 0),
offset_label =c(.5,5,0, 0),
varnames = c(FALSE, TRUE),
just_labels=c("center","right"),
tl_varnames = FALSE),
legend = FALSE)
```
It is now easy to see the cluster of tasks (laundry and cooking) done largely by the wife
at the top, and those (repairs, driving) done largely by the husband at the bottom.
## References
---
title: "Tests of Independence"
author: "Michael Friendly"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    fig_caption: yes
bibliography: ["vcd.bib", "vcdExtra.bib"]
vignette: >
  %\VignetteIndexEntry{Tests of Independence}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  warning = FALSE,
  fig.height = 6,
  fig.width = 7,
  fig.path = "fig/tut02-",
  dev = "png",
  comment = "##"
)
# save some typing
knitr::set_alias(w = "fig.width",
                 h = "fig.height",
                 cap = "fig.cap")
set.seed(1071)
library(vcd)
library(vcdExtra)
library(ggplot2)
data(HairEyeColor)
data(PreSex)
data(Arthritis, package="vcd")
art <- xtabs(~Treatment + Improved, data = Arthritis)
if(!file.exists("fig")) dir.create("fig")
```
OK, now we're ready to do some analyses. This vignette focuses on relatively simple non-parametric
tests and measures of association.
## CrossTable
For tabular displays,
the `CrossTable()` function in the `gmodels` package produces cross-tabulations
modeled after `PROC FREQ` in SAS or `CROSSTABS` in SPSS.
It has a wealth of options for the quantities that can be shown in each cell.
Recall the GSS data used earlier.
```{r, GSStab}
# Agresti (2002), table 3.11, p. 106
GSS <- data.frame(
expand.grid(sex = c("female", "male"),
party = c("dem", "indep", "rep")),
count = c(279,165,73,47,225,191))
(GSStab <- xtabs(count ~ sex + party, data=GSS))
```
Generate a cross-table showing cell frequency and the cell contribution to $\chi^2$.
```{r, xtabs-ex2}
# 2-Way Cross Tabulation
library(gmodels)
CrossTable(GSStab, prop.t=FALSE, prop.r=FALSE, prop.c=FALSE)
```
There are options to report percentages (row, column, cell), specify decimal
places, produce Chi-square, Fisher, and McNemar tests of independence, report
expected and residual values (Pearson, standardized, adjusted standardized),
include missing values as valid, annotate with row and column titles, and format
as SAS or SPSS style output! See `help(CrossTable)` for details.
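For example, a sketch turning on a few of these options (expected frequencies, the chi-square test, and SPSS-style formatting), not evaluated here:

```{r, xtabs-ex3, eval=FALSE}
CrossTable(GSStab, expected = TRUE, chisq = TRUE, format = "SPSS")
```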
## Chi-square test
For 2-way tables you can use `chisq.test()` to test independence of the row
and column variable. By default, the $p$-value is calculated from the asymptotic
chi-squared distribution of the test statistic. Optionally, the $p$-value can be
derived via Monte Carlo simulation.
```{r, chisq}
(HairEye <- margin.table(HairEyeColor, c(1, 2)))
chisq.test(HairEye)
chisq.test(HairEye, simulate.p.value = TRUE)
```
## Fisher Exact Test {#sec:Fisher}
`fisher.test(X)` provides an **exact test** of independence. `X` must be a two-way
contingency table in table form. Another form,
`fisher.test(X, Y)` takes two
categorical vectors of the same length.
For tables larger than $2 \times 2$ the method can be computationally intensive (or can fail) if
the frequencies are not small.
```{r fisher}
fisher.test(GSStab)
```
Fisher's test is meant for tables with small total sample size.
It generates an error for the `HairEye` data with $n$=592 total frequency.
```{r fisher-error, error=TRUE}
fisher.test(HairEye)
```
## Mantel-Haenszel test and conditional association {#sec:mantel}
Use the `mantelhaen.test(X)` function to perform a Cochran-Mantel-Haenszel
$\chi^2$ test of the null hypothesis that two nominal variables are
*conditionally independent*, $A \perp B \; | \; C$, in each stratum, assuming that there is no three-way
interaction. `X` is a 3 dimensional contingency table, where the last dimension
refers to the strata.
The `UCBAdmissions` data serves as an example of a $2 \times 2 \times 6$ table,
with `Dept` as the stratifying variable.
```{r, mantel1}
# UC Berkeley Student Admissions
mantelhaen.test(UCBAdmissions)
```
The results show no evidence for association between admission and gender
when adjusted for department. However, we can easily see that the assumption
of equal association across the strata (no 3-way association) is probably
violated. For $2 \times 2 \times k$ tables, this can be examined
from the odds ratios for each $2 \times 2$ table (`oddsratio()`), and
tested by using `woolf_test()` in `vcd`.
```{r, mantel2}
oddsratio(UCBAdmissions, log=FALSE)
lor <- oddsratio(UCBAdmissions) # capture log odds ratios
summary(lor)
woolf_test(UCBAdmissions)
```
## Some plot methods
### Fourfold displays
We can visualize the odds ratios of Admission for
each department with fourfold displays using `fourfold()`. The cell
frequencies $n_{ij}$ of each $2 \times 2$ table are shown as a quarter circle whose
radius is proportional to $\sqrt{n_{ij}}$, so that its area is proportional to the
cell frequency.
```{r, reorder3}
UCB <- aperm(UCBAdmissions, c(2, 1, 3))
dimnames(UCB)[[2]] <- c("Yes", "No")
names(dimnames(UCB)) <- c("Sex", "Admit?", "Department")
```
Confidence rings for the odds ratio allow a visual test of the null of no association;
the rings for adjacent quadrants overlap *iff* the observed counts are consistent
with the null hypothesis. In the extended version (the default), brighter colors
are used where the odds ratio is significantly different from 1.
The following lines produce \@ref(fig:fourfold1).
<!-- \footnote{The color values `col[3:4]` were modified from their default values -->
<!-- to show a greater contrast between significant and insignificant associations here.} -->
```{r}
#| fourfold1,
#| h=5, w=7.5,
#| cap = "Fourfold display for the `UCBAdmissions` data. Where the odds ratio differs
#| significantly from 1.0, the confidence bands do not overlap, and the circle quadrants are
#| shaded more intensely."
col <- c("#99CCFF", "#6699CC", "#F9AFAF", "#6666A0", "#FF0000", "#000080")
fourfold(UCB, mfrow=c(2,3), color=col)
```
Another `vcd` function, `cotabplot()`, provides a more general approach
to visualizing conditional associations in contingency tables,
similar to trellis-like plots produced by `coplot()` and lattice graphics.
The `panel` argument supplies a function used to render each conditional
subtable. The following gives a display (not shown) similar to \@ref(fig:fourfold1).
```{r fourfold2, eval=FALSE}
cotabplot(UCB, panel = cotab_fourfold)
```
### Doubledecker plots
When we want to view the conditional
probabilities of a response variable (e.g., `Admit`)
in relation to several factors,
an alternative visualization is a `doubledecker()` plot.
This plot is a specialized version of a mosaic plot, which
highlights the levels of a response variable (plotted vertically)
in relation to the factors (shown horizontally). The following
call produces \@ref(fig:doubledecker), where we use indexing
on the first factor (`Admit`) to make `Admitted`
the highlighted level.
In this plot, the
association between `Admit` and `Gender` is shown
where the heights of the highlighted conditional probabilities
do not align. The excess of females admitted in Dept A stands out here.
```{r}
#| doubledecker,
#| h=5, w=8,
#| out.width = "75%",
#| cap = "Doubledecker display for the `UCBAdmissions` data. The heights
#| of the highlighted bars show the conditional probabilities of `Admit`,
#| given `Dept` and `Gender`."
doubledecker(Admit ~ Dept + Gender, data=UCBAdmissions[2:1,,])
```
### Odds ratio plots
Finally, there is a `plot()` method for `oddsratio` objects.
By default, it shows the 95% confidence interval for the log odds ratio.
\@ref(fig:oddsratio) is produced by:
```{r}
#| oddsratio,
#| h=6, w=6,
#| out.width = "60%",
#| cap = "Log odds ratio plot for the `UCBAdmissions` data."
plot(lor,
xlab="Department",
ylab="Log Odds Ratio (Admit | Gender)")
```
## Cochran-Mantel-Haenszel tests for ordinal factors {#sec:CMH}
The standard $\chi^2$ tests for association in a two-way table
treat both table factors as nominal (unordered) categories.
When one or both factors of a two-way table are
quantitative or ordinal, more powerful tests of association
may be obtained by taking ordinality into account, using
row and/or column scores to test for linear trends or differences
in row or column means.
More general versions of the CMH tests [@Landis-etal:1978] are provided by assigning
numeric scores to the row and/or column variables.
For example, with two ordinal factors (assumed to be equally spaced), assigning
integer scores, `1:R` and `1:C`, tests the linear $\times$ linear component
of association. This is statistically equivalent to the Pearson correlation between the
integer-scored table variables, giving $\chi^2 = (n-1) r^2$ on only 1 $df$
rather than $(R-1)\times(C-1)$ $df$ for the test of general association.
When only one table
variable is ordinal, these general CMH tests are analogous to an ANOVA, testing
whether the row mean scores or column mean scores are equal, again consuming
fewer $df$ than the test of general association.
The `CMHtest()` function in `vcdExtra` calculates these various
CMH tests for two possibly ordered factors, optionally stratified by other factor(s).
***Example***:
```{r, table-form2, include=FALSE}
## A 4 x 4 table Agresti (2002, Table 2.8, p. 57) Job Satisfaction
JobSat <- matrix(c(1,2,1,0, 3,3,6,1, 10,10,14,9, 6,7,12,11), 4, 4)
dimnames(JobSat) = list(income=c("< 15k", "15-25k", "25-40k", "> 40k"),
satisfaction=c("VeryD", "LittleD", "ModerateS", "VeryS"))
JobSat <- as.table(JobSat)
```
Recall the $4 \times 4$ table, `JobSat` introduced in \@ref(sec:creating),
```{r, jobsat}
JobSat
```
Treating the `satisfaction` levels as equally spaced, but using
midpoints of the `income` categories as row scores gives the following results:
```{r, cmh1}
CMHtest(JobSat, rscores=c(7.5,20,32.5,60))
```
Note that, with the relatively small cell frequencies, the test for general
association gives no evidence of association. However, the `cor` test for linear $\times$ linear
association on 1 $df$ is nearly significant. The `coin` package contains the
functions `cmh_test()` and `lbl_test()`
for CMH tests of general association and linear $\times$ linear association, respectively.
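A sketch of both `coin` tests, together with a direct check of the $\chi^2 = (n-1) r^2$ identity using integer scores (assuming the `coin` package is installed; not evaluated here):

```{r, cmh-coin, eval=FALSE}
library(coin)
cmh_test(JobSat)   # general association
lbl_test(JobSat)   # linear x linear association

# check chi^2 = (n-1) r^2 with integer scores, expanding the table to case form
JS <- as.data.frame(JobSat)
x <- rep(as.numeric(JS$income), JS$Freq)
y <- rep(as.numeric(JS$satisfaction), JS$Freq)
(length(x) - 1) * cor(x, y)^2
```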
## Measures of Association
There are a variety of statistical measures of *strength* of association for
contingency tables, similar in spirit to $r$ or $r^2$ for continuous variables.
With a large sample size, even a small degree of association can show a
significant $\chi^2$, as in the example below for the `GSS` data.
The `assocstats()` function in `vcd` calculates the $\phi$ coefficient,
the contingency coefficient, and Cramer's V for an $r \times c$ table.
The input must be in table form, a two-way $r \times c$ table.
It won't work with `GSS` in frequency form, but by now you should know how
to convert.
```{r, assoc1}
assocstats(GSStab)
```
For tables with ordinal variables, like `JobSat`, some people prefer the
Goodman-Kruskal $\gamma$ statistic
[@vcd:Agresti:2002, Sec. 2.4.3]
based on a comparison of concordant
and discordant pairs of observations in the case-form equivalent of a two-way table.
```{r, gamma}
GKgamma(JobSat)
```
A web article by Richard Darlington,
<http://node101.psych.cornell.edu/Darlington/crosstab/TABLE0.HTM>,
gives further description of these and other measures of association.
## Measures of Agreement
The `Kappa()` function in the `vcd` package calculates Cohen's $\kappa$ and weighted
$\kappa$ for a square two-way table with the same row and column categories [@Cohen:60].^[Don't
confuse this with `kappa()` in base R, which computes something
entirely different (the condition number of a matrix).]
Normal-theory $z$-tests are obtained by dividing $\kappa$ by its asymptotic standard
error (ASE). A `confint()` method for `Kappa` objects provides confidence intervals.
```{r, kappa}
data(SexualFun, package = "vcd")
(K <- Kappa(SexualFun))
confint(K)
```
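The `weights` argument of `Kappa()` selects the weighting scheme for partial agreement; a sketch using Fleiss-Cohen weights instead of the default Equal-Spacing weights (not evaluated here):

```{r, kappa-fc, eval=FALSE}
Kappa(SexualFun, weights = "Fleiss-Cohen")
```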
A visualization of agreement [@Bangdiwala:87], both unweighted and weighted for degree of departure
from exact agreement, is provided by the `agreementplot()` function.
\@ref(fig:agreesex) shows the agreement plot for the `SexualFun` data,
produced as shown below.
The Bangdiwala measures (returned by the function)
represent the proportion of the
shaded areas of the diagonal rectangles, using weights $w_1$ for exact agreement,
and $w_2$ for partial agreement one step from the main diagonal.
```{r}
#| agreesex,
#| h=6, w=7,
#| out.width = "70%",
#| cap = "Agreement plot for the `SexualFun` data."
agree <- agreementplot(SexualFun, main="Is sex fun?")
unlist(agree)
```
In other examples, the agreement plot can help to show *sources*
of disagreement. For example, when the shaded boxes are above or below the diagonal
(red) line, a lack of exact agreement can be attributed in part to
different frequency of use of categories by the two raters, i.e., a lack of
*marginal homogeneity*.
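Marginal homogeneity can be checked directly by comparing the row and column margins; a minimal sketch (not evaluated here):

```{r, sexfun-margins, eval=FALSE}
margin.table(SexualFun, 1)  # Husband's ratings
margin.table(SexualFun, 2)  # Wife's ratings
```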
## Correspondence analysis
Correspondence analysis is a technique for visually exploring relationships
between rows and columns in contingency tables. The `ca` package gives one implementation.
For an $r \times c$ table,
the method provides a breakdown of the Pearson $\chi^2$ for association in up to $M = \min(r-1, c-1)$
dimensions, and finds scores for the row ($x_{im}$) and column ($y_{jm}$) categories
such that the observations have the maximum possible correlations.^[Related methods are the non-parametric CMH tests using assumed row/column scores (\@ref(sec:CMH)),
the analogous `glm()` model-based methods (\@ref(sec:CMH)), and the more general RC models which can be fit using `gnm()`. Correspondence analysis differs in that it is primarily a descriptive/exploratory method (no significance tests), but it is directly tied to informative graphic displays of the row/column categories.]
Here, we carry out a simple correspondence analysis of the `HairEye` data.
The printed results show that nearly 99% of the association between hair color and eye color
can be accounted for in 2 dimensions, of which the first dimension accounts for 90%.
```{r, ca1}
library(ca)
ca(HairEye)
```
The resulting `ca` object can be plotted with its `plot()` method, giving the result in
\@ref(fig:ca-haireye). `plot.ca()` does not allow labels for the dimensions;
these can be added with `title()`.
It can be seen that most of the association is accounted for by the ordering
of both hair color and eye color along Dimension 1, a dark to light dimension.
```{r ca-haireye, cap = "Correspondence analysis plot for the `HairEye` data"}
plot(ca(HairEye), main="Hair Color and Eye Color")
```
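For example, dimension labels can be added after the plot with `title()`; a sketch (the label text here is illustrative; not evaluated):

```{r ca-labels, eval=FALSE}
plot(ca(HairEye), main="Hair Color and Eye Color")
title(xlab = "Dimension 1", ylab = "Dimension 2")
```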
## References
#### Class definitions. ####
##### ##### ##### ##### #####
#
# Class vcfR
#
##### ##### ##### ##### #####
#' @title vcfR class
#'
#' @name vcfR-class
#' @rdname vcfR-class
#' @aliases vcfR-class
#'
#' @description
#' An S4 class for storing VCF data.
#'
#'
#' @slot meta character vector for the meta information
#' @slot fix matrix for the fixed information
#' @slot gt matrix for the genotype information
#'
#'
#' @details Defines a class for variant call format data.
#' A vcfR object contains three slots.
#' The first slot is a character vector which holds the meta data.
#' The second slot is an eight column matrix which holds the fixed data.
#' The third slot is a matrix which holds the genotype data.
#' The genotype data is optional according to the VCF definition.
#' When it is missing the gt slot should consist of a character matrix with zero rows and columns.
#'
#'
#' See \code{vignette('vcf_data')} for more information.
#' See the \href{http://samtools.github.io/hts-specs/}{VCF specification} for the file specification.
#'
#' @export
#' @import methods
setClass(
Class="vcfR",
representation=representation(
meta="character",
fix="matrix",
gt="matrix"
),
prototype=prototype(
meta=character(),
fix = matrix(
ncol=8, nrow=0,
dimnames=list(
c(),
c('CHROM','POS','ID','REF','ALT','QUAL','FILTER','INFO')
)
),
gt=matrix("a", ncol=0, nrow=0)
)
)
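# Usage sketch (illustrative; not part of the original source):
# an empty vcfR object has the slot structure described above.
# vcf <- new("vcfR")
# vcf@meta      # character(0)
# dim(vcf@fix)  # 0 x 8, with the standard VCF column names
# dim(vcf@gt)   # 0 x 0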
##### ##### ##### ##### #####
#
# Class chromR
#
##### ##### ##### ##### #####
#### Class definition. ####
setOldClass("DNAbin")
#' @title chromR class
#'
#' @name chromR-class
#' @rdname chromR-class
#'
#' @description
#' A class for representing chromosomes (or supercontigs, contigs, scaffolds, etc.).
#'
#' @details
#' Defines a class for chromosomal or contig data.
#'
#' This object has a number of slots.
#'
#' \itemize{
#' \item \strong{names} name of the object (character)
#' \item \strong{len} length of the sequence (integer)
#' \item \strong{window_size} window size for windowing analyses (integer)
#'
#' \item \strong{seq} object of class ape::DNAbin
#' \item \strong{vcf} object of class vcfR
#' \item \strong{ann} annotation data in a gff-like data.frame
#'
#' \item \strong{var.info} a data.frame containing information on variants
#' \item \strong{win.info} a data.frame containing information on windows
#' \item \strong{seq.info} a list containing information on the sequence
#'
# \item gt.m matrix of genotypes
#
# \item mask a logical vector to indicate masked variants
#' }
#'
# More descriptions can be put here.
#'
#' The \strong{seq} slot contains an object of class ape::DNAbin.
#' A DNAbin object is typically either a matrix or list of DNAbin objects.
#' The matrix form appears to be better behaved than the list form.
#' Because of this behavior, this slot should contain the matrix form.
#' When this slot is not populated it is of class "NULL" instead of "DNAbin".
#' Note that characters need to be lower case when inserted into an object of class DNAbin.
#' The function \code{\link[base]{tolower}} can facilitate this.
#'
#'
#' The \strong{vcf} slot is an object of class vcfR \code{\link{vcfR-class}}.
#'
#' The \strong{ann} slot is a data.frame containing \href{https://github.com/The-Sequence-Ontology/Specifications/blob/master/gff3.md}{gff format} data.
#' When this slot is not populated it has nrows equal to zero.
#'
#' The \strong{var.info} slot contains a data.frame containing information about variants.
#' Every row of this data.frame is a variant.
#' Columns will typically contain the chromosome name, the position of the variant (POS), the mask as well as any other per variant information.
#'
#' The \strong{win.info} slot contains a data.frame containing information about windows.
#' For example, window, start, end, length, A, C, G, T, N, other, variants and genic fields are stored here.
#'
#' The \strong{seq.info} slot is a list containing two matrices.
#' The first matrix describes rectangles for called nucleotides and the second describes rectangles for 'N' calls.
#' Within each matrix, the first column indicates the start position and the second column indicates the end position of each rectangle.
#'
#'
#'
#' @seealso \code{\link{vcfR-class}}, \code{\link[ape]{DNAbin}},
# \href{http://www.1000genomes.org/wiki/analysis/variant\%20call\%20format/vcf-variant-call-format-version-41}{vcf format},
#' \href{https://github.com/samtools/hts-specs}{VCF specification},
#' \href{https://github.com/The-Sequence-Ontology/Specifications/blob/master/gff3.md}{gff3 format}
#'
#'
#' @import methods
#' @import ape
#'
#' @export
setClass(
Class="chromR",
representation=representation(
names = "character",
len = "integer",
window_size = "integer",
vcf = "vcfR",
seq = "DNAbin",
ann = "data.frame",
#
var.info = "data.frame",
win.info = "data.frame",
seq.info = "list",
#
gt.m = "matrix"
),
prototype=prototype(
names = "Chromosome",
len = as.integer(0),
window_size = as.integer(1e3),
vcf = new(Class="vcfR"),
# seq = ape::as.DNAbin('n'),
seq = NULL,
ann = data.frame(
matrix(ncol=9, nrow=0,
dimnames=list(
c(),
c("seqid", "source", "type", "start",
"end", "score", "strand", "phase",
"attributes"))
),
stringsAsFactors=FALSE)
)
)
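# Usage sketch (illustrative; not part of the original source):
# a fresh chromR object starts with an empty vcfR object and a
# 1 kb default window size.
# chrom <- new("chromR")
# chrom@names        # "Chromosome"
# chrom@window_size  # 1000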
#' @title Reformat INFO data as a data.frame
#' @name INFO2df
#' @rdname INFO2df
#'
#' @description
#' Reformat INFO data as a data.frame and handle class when possible.
#'
#' @param x an object of class vcfR or chromR.
#'
#' @details
#' The INFO column of VCF data contains descriptors for each variant.
#' Because this column may contain many comma-delimited descriptors, it may be difficult to interpret.
#' The function INFO2df converts the data into a data.frame.
#' The function metaINFO2df extracts the information in the meta section that describes the INFO descriptors.
#' This function is called by INFO2df to help it handle the class of the data.
#'
#' @return
#' A data.frame
#'
#'
#' @examples
#' data(vcfR_test)
#' metaINFO2df(vcfR_test)
#' getINFO(vcfR_test)
#' INFO2df(vcfR_test)
#'
#'
#' @export
#'
INFO2df <- function(x){
if( inherits(x, "chromR") ){
x <- x@vcf
}
metaINFO <- metaINFO2df(x)
# Initialize a data.frame for the INFO data
INFOdf <- data.frame( matrix( nrow=nrow(x@fix), ncol=nrow(metaINFO) ) )
names(INFOdf) <- metaINFO[,'ID']
for( i in 1:nrow(metaINFO) ){
tmp <- extract.info(x, element = metaINFO[,'ID'][i])
if( metaINFO[,'Type'][i] == "Integer" & metaINFO[,'Number'][i] == "1" ){
tmp <- as.integer( tmp )
}
if( metaINFO[,'Type'][i] == "Float" & metaINFO[,'Number'][i] == "1" ){
tmp <- as.numeric( tmp )
}
INFOdf[,metaINFO[,'ID'][i]] <- tmp
}
return(INFOdf)
}
#' @rdname INFO2df
#'
#' @param field should either the INFO or FORMAT data be returned?
#'
#' @export
#'
metaINFO2df <- function(x, field = "INFO"){
if( inherits(x, "chromR") ){
x <- x@vcf
}
field <- match.arg( field, choices = c("INFO","FORMAT") )
# Isolate INFO from meta
if( field == "INFO" ){
INFO <- x@meta[grep("##INFO=", x@meta)]
INFO <- sub("##INFO=<", "", INFO)
}
if( field == "FORMAT" ){
INFO <- x@meta[grep("##FORMAT=", x@meta)]
INFO <- sub("##FORMAT=<", "", INFO)
}
# Clean things up a bit.
INFO <- sub(">$", "", INFO)
INFO <- sub('\"', "", INFO)
INFO <- sub('\"$', "", INFO)
INFO <- strsplit(INFO, split = ",")
ID <- unlist( lapply(INFO, function(x){ grep("^ID=", x, value=TRUE) }) )
Number <- unlist( lapply(INFO, function(x){ grep("^Number=", x, value=TRUE) }) )
Type <- unlist( lapply(INFO, function(x){ grep("^Type=", x, value=TRUE) }) )
Description <- unlist( lapply(INFO, function(x){ grep("^Description=", x, value=TRUE) }) )
Source <- unlist( lapply(INFO, function(x){ grep("^Source=", x, value=TRUE) }) )
Version <- unlist( lapply(INFO, function(x){ grep("^Version=", x, value=TRUE) }) )
ID <- unlist(lapply(strsplit(ID, split = "=" ), function(x){x[2]}))
Number <- unlist(lapply(strsplit(Number, split = "=" ), function(x){x[2]}))
Type <- unlist(lapply(strsplit(Type, split = "=" ), function(x){x[2]}))
Description <- unlist(lapply(strsplit(Description, split = "=" ), function(x){x[2]}))
Source <- unlist(lapply(strsplit(Source, split = "=" ), function(x){x[2]}))
Version <- unlist(lapply(strsplit(Version, split = "=" ), function(x){x[2]}))
# INFO.Type <- cbind(ID, Number, Type, Description, Source, Version)
INFO.Type <- data.frame( ID = ID, Number = Number, Type = Type,
Description = Description, stringsAsFactors = FALSE)
if( !is.null(Source) ) { INFO.Type$Source = Source }
if( !is.null(Version) ){ INFO.Type$Version = Version }
return( INFO.Type )
}
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
.NM2winNM <- function(x, pos, maxbp, winsize = 100L, depr = 1L) {
.Call(`_vcfR_NM2winNM`, x, pos, maxbp, winsize, depr)
}
.windowize_NM <- function(x, pos, starts, ends, summary = "mean", depr = 1L) {
.Call(`_vcfR_windowize_NM`, x, pos, starts, ends, summary, depr)
}
#' @title AD_frequency
#' @name AD_frequency
#' @rdname AD_frequency
#'
#' @description
#' Create allele frequencies from matrices of allelic depths (AD)
#'
#' @param ad a matrix of allele depths (e.g., "7,2")
#' @param allele which (1-based) allele to report frequency for
#' @param sum_type type of sum to calculate, see details
#' @param delim character that delimits values
#' @param decreasing should the values be sorted decreasing (1) or increasing (0)?
#'
#' @details
#' Files containing VCF data frequently include data on allelic depth (e.g., AD).
#' This is the number of times each allele has been sequenced.
#' Our naive assumption for diploids is that these alleles should be observed at a frequency of 1 or zero for homozygous positions and near 0.5 for heterozygous positions.
#' Deviations from this expectation may indicate allelic imbalance or ploidy differences.
#' This function is intended to facilitate the exploration of allele frequencies for all positions in a sample.
#'
#' The alleles are sorted by their frequency within the function.
#' The user can then specify whether they would like to calculate the frequency of the most frequent allele (allele = 1), the second most frequent allele (allele = 2), and so on.
#' If an allele is requested that does not exist it should result in NA for that position and sample.
#'
#' There are two methods to calculate a sum for the denominator of the frequency.
#' When sum_type = 0 the alleles are sorted in decreasing order and the first two allele counts are used for the sum.
#' This may be useful when a state of diploidy may be known to be appropriate and other alleles may be interpreted as erroneous.
#' When sum_type = 1 a sum is taken over all the observed alleles for a variant.
#'
#' @return A numeric matrix of frequencies
#'
#' @examples
#' set.seed(999)
#' x1 <- round(rnorm(n=9, mean=10, sd=2))
#' x2 <- round(rnorm(n=9, mean=20, sd=2))
#' ad <- matrix(paste(x1, x2, sep=","), nrow=3, ncol=3)
#' colnames(ad) <- paste('Sample', 1:3, sep="_")
#' rownames(ad) <- paste('Variant', 1:3, sep="_")
#' ad[1,1] <- "9,23,12"
#' AD_frequency(ad=ad)
#'
#'
#' @export
AD_frequency <- function(ad, delim = ",", allele = 1L, sum_type = 0L, decreasing = 1L) {
.Call(`_vcfR_AD_frequency`, ad, delim, allele, sum_type, decreasing)
}
.write_fasta <- function(seq, seqname, filename, rowlength = 80L, verbose = 1L, depr = 1L) {
invisible(.Call(`_vcfR_write_fasta`, seq, seqname, filename, rowlength, verbose, depr))
}
.elementNumber <- function(x, element = "GT") {
.Call(`_vcfR_elementNumber`, x, element)
}
.extract_GT_to_CM <- function(fix, gt, element = "DP", alleles = 0L, extract = 1L, convertNA = 1L) {
.Call(`_vcfR_extract_GT_to_CM`, fix, gt, element, alleles, extract, convertNA)
}
.CM_to_NM <- function(x) {
.Call(`_vcfR_CM_to_NM`, x)
}
.extract_haps <- function(ref, alt, gt, unphased_as_NA, verbose) {
.Call(`_vcfR_extract_haps`, ref, alt, gt, unphased_as_NA, verbose)
}
.grepa <- function() {
invisible(.Call(`_vcfR_grepa`))
}
.shankaR <- function() {
invisible(.Call(`_vcfR_shankaR`))
}
#'
#' @rdname freq_peak
#'
#' @title freq_peak
#' @description Find density peaks in frequency data.
#'
#' @param myMat a matrix of frequencies [0-1].
#' @param pos a numeric vector describing the position of variants in myMat.
#' @param winsize sliding window size.
#' @param bin_width Width of bins to summarize frequencies in (0-1].
#' @param lhs logical specifying whether the search for the bin of greatest density should favor values from the left hand side.
#'
#' @details
#' Noisy data, such as genomic data, lack a clear consensus.
#' Summaries may be made in an attempt to 'clean it up.'
#' Common summaries, such as the mean, rely on an assumption of normality,
#' an assumption that can frequently be violated.
#' This leaves a conundrum as to how to effectively summarize these data.
#'
#'
#' Here we implement an attempt to summarize noisy data through binning the data and selecting the bin containing the greatest density of data.
#' The data are first divided into windows of a parameterized size.
#' Next, the data are categorized by parameterizable bin widths.
#' Finally, the bin with the greatest density, the greatest count of data, is used as a summary.
#' Because this method is based on binning the data it does not rely on a distributional assumption.
#'
#'
#' The parameter \code{lhs} specifies whether the search for the bin of greatest density should be performed from the left hand side.
#' The default value of TRUE starts at the left hand side, or zero, and selects a new bin as having the greatest density only if a new bin has a greater density.
#' If the new bin has an equal density then no update is made.
#' This causes the analysis to select lower frequencies.
#' When this parameter is set to FALSE ties result in an update of the bin of greatest density.
#' This causes the analysis to select higher frequencies.
#' When testing the most abundant allele (typically [0.5-1]) it is recommended to use the default of TRUE so that a low value is preferred.
#' Similarly, when testing the less abundant alleles it is recommended to set this value to FALSE to preferentially select high values.
#'
#'
#' @return
#' A freq_peak object (a list) containing:
#' \itemize{
#' \item The window size
#' \item The binwidth used for peak binning
#' \item a matrix containing window coordinates
#' \item a matrix containing peak locations
#' \item a matrix containing the counts of variants for each sample in each window
#' }
#'
#' The window matrix contains start and end coordinates for each window, the rows of the original matrix that demarcate each window and the position of the variants that begin and end each window.
#'
#' The matrix of peak locations contains the midpoint for the bin of greatest density for each sample and each window.
#' Alternatively, if `count = TRUE` the number of non-missing values in each window is reported.
#' The number of non-missing values in each window may be used to censor windows containing low quantities of data.
#'
#' @seealso
#' peak_to_ploid,
#' freq_peak_plot
#'
#' @examples
#' data(vcfR_example)
#' gt <- extract.gt(vcf)
#' hets <- is_het(gt)
#' # Censor non-heterozygous positions.
#' is.na(vcf@gt[,-1][!hets]) <- TRUE
#' # Extract allele depths.
#' ad <- extract.gt(vcf, element = "AD")
#' ad1 <- masplit(ad, record = 1)
#' ad2 <- masplit(ad, record = 2)
#' freq1 <- ad1/(ad1+ad2)
#' freq2 <- ad2/(ad1+ad2)
#' myPeaks1 <- freq_peak(freq1, getPOS(vcf))
#' is.na(myPeaks1$peaks[myPeaks1$counts < 20]) <- TRUE
#' myPeaks2 <- freq_peak(freq2, getPOS(vcf), lhs = FALSE)
#' is.na(myPeaks2$peaks[myPeaks2$counts < 20]) <- TRUE
#' myPeaks1
#'
#' # Visualize
#' mySample <- "P17777us22"
#' myWin <- 2
#' hist(freq1[myPeaks1$wins[myWin,'START_row']:myPeaks1$wins[myWin,'END_row'], mySample],
#' breaks=seq(0,1,by=0.02), col="#A6CEE3", main="", xlab="", xaxt="n")
#' hist(freq2[myPeaks2$wins[myWin,'START_row']:myPeaks2$wins[myWin,'END_row'], mySample],
#' breaks=seq(0,1,by=0.02), col="#1F78B4", main="", xlab="", xaxt="n", add = TRUE)
#' axis(side=1, at=c(0,0.25,0.333,0.5,0.666,0.75,1),
#' labels=c(0,'1/4','1/3','1/2','2/3','3/4',1), las=3)
#' abline(v=myPeaks1$peaks[myWin,mySample], col=2, lwd=2)
#' abline(v=myPeaks2$peaks[myWin,mySample], col=2, lwd=2)
#'
#' # Visualize #2
#' mySample <- "P17777us22"
#' plot(getPOS(vcf), freq1[,mySample], ylim=c(0,1), type="n", yaxt='n',
#' main = mySample, xlab = "POS", ylab = "Allele balance")
#' axis(side=2, at=c(0,0.25,0.333,0.5,0.666,0.75,1),
#' labels=c(0,'1/4','1/3','1/2','2/3','3/4',1), las=1)
#' abline(h=c(0.25,0.333,0.5,0.666,0.75), col=8)
#' points(getPOS(vcf), freq1[,mySample], pch = 20, col= "#A6CEE3")
#' points(getPOS(vcf), freq2[,mySample], pch = 20, col= "#1F78B4")
#' segments(x0=myPeaks1$wins[,'START_pos'], y0=myPeaks1$peaks[,mySample],
#' x1=myPeaks1$wins[,'END_pos'], lwd=3)
#' segments(x0=myPeaks1$wins[,'START_pos'], y0=myPeaks2$peaks[,mySample],
#' x1=myPeaks1$wins[,'END_pos'], lwd=3)
#'
#'
#'
#' @export
freq_peak <- function(myMat, pos, winsize = 10000L, bin_width = 0.02, lhs = TRUE) {
.Call(`_vcfR_freq_peak`, myMat, pos, winsize, bin_width, lhs)
}
.gt_to_popsum <- function(var_info, gt) {
.Call(`_vcfR_gt_to_popsum`, var_info, gt)
}
#' @rdname is_het
#' @name is_het
#'
#'
#'
#' @export
is_het <- function(x, na_is_false = TRUE) {
.Call(`_vcfR_is_het`, x, na_is_false)
}
#'
#' @rdname masplit
#'
#' @title masplit
#' @description Split a matrix of delimited strings.
#'
#' @param myMat a matrix of delimited strings (e.g., "7,2").
#' @param delim character that delimits values.
#' @param count return the count of delimited records.
#' @param record which (1-based) record to return.
#' @param sort should the records be sorted prior to selecting the element (0,1)?
#' @param decreasing should the values be sorted decreasing (1) or increasing (0)?
#'
#'
#' @details
#' Split a matrix of delimited strings that represent numerics into numerics.
#' The parameter \strong{count} returns a matrix of integers indicating how many delimited records exist in each element.
#' This is intended to help if you do not know how many records are in each element, particularly if there is a mixture of numbers of records.
#' The parameter \strong{record} indicates which record to return (first, second, third, ...).
#' The parameter \strong{sort} indicates whether the records in each element should be sorted (1) or not (0) prior to selection.
#' When sorting has been selected \strong{decreasing} indicates if the sorting should be performed in a decreasing (1) or increasing (0) manner prior to selection.
#'
#'
#'
#'
#' @return A numeric matrix
#'
#'
#' @examples
#' set.seed(999)
#' x1 <- round(rnorm(n=9, mean=10, sd=2))
#' x2 <- round(rnorm(n=9, mean=20, sd=2))
#' ad <- matrix(paste(x1, x2, sep=","), nrow=3, ncol=3)
#' colnames(ad) <- paste('Sample', 1:3, sep="_")
#' rownames(ad) <- paste('Variant', 1:3, sep="_")
#' ad[1,1] <- "9,23,12"
#' is.na(ad[3,1]) <- TRUE
#'
#' ad
#' masplit(ad, count = 1)
#' masplit(ad, sort = 0)
#' masplit(ad, sort = 0, record = 2)
#' masplit(ad, sort = 0, record = 3)
#' masplit(ad, sort = 1, decreasing = 0)
#'
#'
#' @export
masplit <- function(myMat, delim = ",", count = 0L, record = 1L, sort = 1L, decreasing = 1L) {
.Call(`_vcfR_masplit`, myMat, delim, count, record, sort, decreasing)
}
pair_sort <- function() {
.Call(`_vcfR_pair_sort`)
}
.rank_variants <- function(variants, ends, score) {
.Call(`_vcfR_rank_variants`, variants, ends, score)
}
.vcf_stats_gz <- function(x, nrows = -1L, skip = 0L, verbose = 1L) {
.Call(`_vcfR_vcf_stats_gz`, x, nrows, skip, verbose)
}
.read_meta_gz <- function(x, stats, verbose) {
.Call(`_vcfR_read_meta_gz`, x, stats, verbose)
}
.read_body_gz <- function(x, stats, nrows = -1L, skip = 0L, cols = 0L, convertNA = 1L, verbose = 1L) {
.Call(`_vcfR_read_body_gz`, x, stats, nrows, skip, cols, convertNA, verbose)
}
.seq_to_rects <- function(seq, targets) {
.Call(`_vcfR_seq_to_rects`, seq, targets)
}
.window_init <- function(window_size, max_bp) {
.Call(`_vcfR_window_init`, window_size, max_bp)
}
.windowize_fasta <- function(wins, seq) {
.Call(`_vcfR_windowize_fasta`, wins, seq)
}
.windowize_variants <- function(windows, variants) {
.Call(`_vcfR_windowize_variants`, windows, variants)
}
.windowize_annotations <- function(wins, ann_starts, ann_ends, chrom_length) {
.Call(`_vcfR_windowize_annotations`, wins, ann_starts, ann_ends, chrom_length)
}
.write_vcf_body <- function(fix, gt, filename = "myFile.vcf.gz", mask = 0L) {
invisible(.Call(`_vcfR_write_vcf_body`, fix, gt, filename, mask))
}
#' @title Populate the ID column of VCF data
#' @name addID
#' @rdname addID
#'
#' @description
#' Populate the ID column of VCF data by concatenating the chromosome, position and optionally an index.
#'
#' @param x an object of class vcfR or chromR.
#' @param sep a character string to separate the terms.
#'
#' @details
#' Variant callers typically leave the ID column empty in VCF data.
#' However, the VCF data may potentially include variants with IDs as well as variants without.
#' This function populates the missing elements by concatenating the chromosome and position.
#' When this concatenation results in non-unique names, an index is added to force uniqueness.
#'
#'
#' @examples
#' data(vcfR_test)
#' head(vcfR_test)
#' vcfR_test <- addID(vcfR_test)
#' head(vcfR_test)
#'
#'
#' @export
#'
addID <- function(x, sep="_"){
if( inherits(x, "chromR") ){
ID <- x@vcf@fix[,'ID']
CHROM <- x@vcf@fix[,'CHROM']
POS <- x@vcf@fix[,'POS']
} else if( inherits(x,'vcfR') ){
ID <- x@fix[,'ID']
CHROM <- x@fix[,'CHROM']
POS <- x@fix[,'POS']
} else {
stop("expecting an object of class vcfR or chromR.")
}
if( sum(!is.na(ID)) < length(ID) ){
ID[ is.na(ID) ] <- paste( CHROM[ is.na(ID) ], POS[ is.na(ID) ], sep=sep )
if( length(unique(ID)) < length(ID) ){
ID <- paste( ID, 1:length(ID), sep=sep )
}
}
if( inherits(x, 'chromR') ){
x@vcf@fix[,'ID'] <- ID
} else if( inherits(x, 'vcfR') ){
x@fix[,'ID'] <- ID
} else {
stop("expecting an object of class vcfR or chromR.")
}
return(x)
}
#### check_keys ####
#' @rdname check_keys
#' @aliases check_keys
#'
#' @title Check that INFO and FORMAT keys are unique
#'
#' @param x an object of class vcfR
#'
#' @description
#' The INFO and FORMAT columns contain information in key-value pairs.
#' If for some reason a key is not unique, it will create issues in retrieving this information.
#' This function checks the keys defined in the meta section to make sure they are unique.
#' Note that it does not actually check the INFO and FORMAT columns, just their definitions in the meta section.
#' This is because each variant can have different information in its INFO and FORMAT cells.
#' Checking these on large files would therefore come with a performance cost.
#'
#' @seealso queryMETA()
#'
#' @examples
#' data(vcfR_test)
#' check_keys(vcfR_test)
#' queryMETA(vcfR_test)
#' queryMETA(vcfR_test, element = 'DP')
#' # Note that DP occurs as unique in INFO and FORMAT but they may be different.
#'
#'
#' @export
check_keys <- function(x) {
if( !inherits(x, 'vcfR') ){
stop( paste('Expecting a vcfR object, instead received:', class(x)) )
}
# First check INFO.
myKeys <- grep('INFO', x@meta, value = TRUE)
myKeys <- sub('##INFO=<ID=','',myKeys)
myKeys <- unlist(lapply(strsplit(myKeys, ','), function(x){x[1]}))
myKeys <- table(myKeys)
myKeys <- myKeys[myKeys > 1]
if( length(myKeys) > 0){
warning(paste("The following INFO key occurred more than once:", names(myKeys), '\n'))
}
# Check FORMAT.
myKeys <- grep('FORMAT', x@meta, value = TRUE)
myKeys <- sub('##FORMAT=<ID=','',myKeys)
myKeys <- unlist(lapply(strsplit(myKeys, ','), function(x){x[1]}))
myKeys <- table(myKeys)
myKeys <- myKeys[myKeys > 1]
if( length(myKeys) > 0){
warning(paste("The following FORMAT key occurred more than once:", names(myKeys), '\n'))
}
}
#'
#' @rdname chromR-method
#' @title chromR-method
#'
#' @aliases chromR,chromR-method
#'
#' @description Methods that act on objects of class chromR
#'
#'
#' @param x an object of class chromR
#' @param y not currently used
#' @param object an object of class chromR
#' @param value a character containing a name
#' @param n integer indicating the number of elements to be printed from an object
#' @param ... Arguments to be passed to methods
#'
#'
#' @details
#' Methods that act on objects of class chromR.
#'
#' @importFrom utils object.size
#'
#'
##### Generic methods. #####
setMethod( f="show",
signature = "chromR",
definition=function(object){
#1234567890123456789012345678901234567890
cat( "***** Class chromR, method Show *****\n" )
# cat( "\n" )
cat( paste("Name:", object@names, "\n") )
# cat( "\n" )
cat( paste("Chromosome length:", format(object@len, big.mark=","), "bp\n") )
cat( " Chromosome labels: ")
if( length( labels(object@seq) ) > 0 ){
cat( paste( labels(object@seq), sep = ",") )
} else {
cat( "None" )
}
cat( "\n" )
cat( paste("Annotation (@ann) count:", format(nrow(object@ann), big.mark=","), "\n") )
cat( " Annotation chromosome names: " )
if( length( unique( object@ann[,1] ) ) > 0 ){
cat( paste( unique( object@ann[,1] ) ), sep = "," )
} else {
cat( "None" )
}
cat( "\n" )
cat( paste("Variant (@vcf) count:", format(nrow(object@vcf), big.mark=","), "\n") )
cat( " Variant (@vcf) chromosome names: " )
if( length( unique(getCHROM(object@vcf)) ) > 0 ){
cat( paste( unique(getCHROM(object@vcf)), sep = "," ) )
} else {
cat( "None" )
}
cat( "\n" )
# cat( "\n" )
cat( "Object size: ")
print( utils::object.size(object), units="MB" )
cat( "Use head(object) for more details.\n" )
# cat( "\n" )
cat( "***** End Show (chromR) *****\n" )
}
)
#' @rdname chromR-method
# ' @aliases plot
#' @aliases plot,chromR-method
#' @export
#'
setMethod( f="plot",
signature= "chromR",
definition=function (x,y,...){
DP <- x@var.info$DP[x@var.info$mask]
MQ <- x@var.info$MQ[x@var.info$mask]
QUAL <- as.numeric(x@vcf@fix[x@var.info$mask, 'QUAL'])
if( nrow(x@win.info) > 0 ){
SNPS <- x@win.info$variants/x@win.info$length
} else {
SNPS <- NULL
}
graphics::par(mfrow=c(2,2))
if( length(stats::na.omit(DP)) > 0 ){
graphics::hist(DP, col=3, main="Read depth (DP)", xlab="")
graphics::rug(DP)
} else {
plot(1:2,1:2, type='n', xlab="", ylab="")
graphics::title(main="No depths found")
}
if( length(stats::na.omit(MQ)) > 0 ){
graphics::hist(MQ, col=4, main="Mapping quality (MQ)", xlab="")
graphics::rug(MQ)
} else {
plot(1:2,1:2, type='n', xlab="", ylab="")
graphics::title(main="No mapping qualities found")
}
if( length(stats::na.omit(QUAL)) > 0 ){
graphics::hist(QUAL, col=5, main="Quality (QUAL)", xlab="")
graphics::rug(QUAL)
} else {
plot(1:2,1:2, type='n', xlab="", ylab="")
graphics::title(main="No qualities found")
}
if( length(SNPS) > 0 ){
graphics::hist( SNPS, col=6, main="Variant count (per window)", xlab="")
graphics::rug( SNPS )
} else {
plot(1:2,1:2, type='n', xlab="", ylab="")
graphics::title(main="No SNP densities found")
}
graphics::par(mfrow=c(1,1))
return(invisible(NULL))
}
)
##### ##### ##### ##### #####
#' @rdname chromR-method
#' @export
#'
setMethod( f="print",
signature="chromR",
definition=function (x,y,...){
cat("***** Object of class 'chromR' *****\n")
cat(paste("Name: ", x@names, "\n"))
cat(paste("Length: ", format(x@len, big.mark=","), "\n"))
cat("\nVCF fixed data:\n")
cat("Last column (info) omitted.\n")
cat("\nVCF variable data:\n")
cat(paste("Columns: ", ncol(x@vcf@gt), "\n"))
cat(paste("Rows: ", nrow(x@vcf@gt), "\n"))
cat("(First column is format.)\n")
cat("\nAnnotation data:\n")
if(length(x@ann)>0){
print(head(x@ann[,1:8], n=4))
cat("Last column (attributes) omitted.\n")
} else {
cat("Empty slot.\n")
}
cat("***** End print (chromR) ***** \n")
}
)
#' @rdname chromR-method
#' @export
#'
setMethod( f="head",
signature = "chromR",
definition=function(x, n = 6){
#1234567890123456789012345678901234567890
cat("***** Class chromR, method head *****")
cat("\n")
cat(paste("Name: ", x@names))
cat("\n")
cat( paste("Length: ", format(x@len, big.mark=",")) )
cat("\n")
cat("\n")
#1234567890123456789012345678901234567890
cat("***** Sample names (chromR) *****")
cat("\n")
if(ncol(x@vcf@gt) <= 2 * n){
print(colnames(x@vcf@gt)[-1])
} else {
print(head(colnames(x@vcf@gt)[-1]))
print("...")
print(utils::tail(colnames(x@vcf@gt)[-1]))
}
cat("\n")
#1234567890123456789012345678901234567890
cat("***** VCF fixed data (chromR) *****")
cat("\n")
if(nrow(x@vcf@gt) <= 2 * n){
print(x@vcf@fix[,1:7])
} else {
print(head(x@vcf@fix[,1:7]))
print("...")
print(utils::tail(x@vcf@fix[,1:7]))
}
cat("\n")
cat("INFO column has been suppressed, first INFO record:")
cat("\n")
print(unlist(strsplit(as.character(x@vcf@fix[1, 'INFO']), split=";")))
cat("\n")
#1234567890123456789012345678901234567890
cat("***** VCF genotype data (chromR) *****")
cat("\n")
if(ncol(x@vcf@gt)>=6){
#1234567890123456789012345678901234567890
cat("***** First 6 columns *********")
cat("\n")
if(nrow(x@vcf@gt) <= 2 * n){
print(x@vcf@gt[,1:6])
} else {
print(head(x@vcf@gt[,1:6]))
# print("...")
# print(tail(x@vcf@gt[,1:6]))
}
} else {
print(x@vcf@gt[1:min(n, nrow(x@vcf@gt)), , drop = FALSE])
}
cat("\n")
#1234567890123456789012345678901234567890
cat("***** Var info (chromR) *****")
cat("\n")
if(ncol(x@var.info)>=6){
#1234567890123456789012345678901234567890
cat("***** First 6 columns *****")
cat("\n")
print(x@var.info[1:n,1:6])
} else {
print(x@var.info[1:n,])
}
cat("\n")
#1234567890123456789012345678901234567890
cat("***** VCF mask (chromR) *****")
cat("\n")
cat( paste("Percent unmasked:", round(100*(sum([email protected]$mask)/length([email protected]$mask)), digits=2 ) ) )
cat("\n")
cat("\n")
#1234567890123456789012345678901234567890
cat("***** End head (chromR) *****")
cat("\n")
}
)
#' @rdname chromR-method
#'
setMethod(f="names<-",
signature( x = "chromR", value = "character" ),
function(x, value){
if( length(value) >=1 ){
x@names <- value[1]
} else {
x@names <- character()
}
return(x)
}
)
#' @rdname chromR-method
#' @export
#'
setMethod( f="length",
signature = "chromR",
definition=function(x){
return(x@len)
}
)
# EOF.
##### End of R/chromR-method.R #####
#' Example chromR object.
#'
#' An example chromR object containing parts of the *Phytophthora infestans* genome.
#'
#'
#' This data is a subset of the pinfsc50 dataset.
#' It has been subset to positions between 500 and 600 kbp.
#' The coordinate systems of the vcf and gff file have been altered by subtracting 500,000.
#' This results in a 100 kbp section of supercontig_1.50 that has positional data ranging from 1 to 100 kbp.
#'
#'
#' @examples
#' data(chromR_example)
#'
#'
#'
#'
#' @docType data
#' @keywords datasets
#' @format A chromR object
#' @name chromR_example
#' @aliases chrom
NULL
##### End of R/chromR_example.R #####
#' @title chromR_functions
#' @name chromR functions
# @aliases chromR functions
#' @rdname chromR_functions
#' @description Functions which act on chromR objects
##### Set a mask #####
# ' @rdname chromR_functions
#' @export
#' @aliases masker
#'
#' @param min_QUAL minimum variant quality
#' @param min_DP minimum cumulative depth
#' @param max_DP maximum cumulative depth
#' @param min_MQ minimum mapping quality
#' @param max_MQ maximum mapping quality
#' @param preserve a logical indicating whether or not to preserve the state of
#' the current mask field. Defaults to \code{FALSE}
#' @param ... arguments to be passed to methods
#'
#' @details
#' The function \strong{masker} creates a logical vector that determines which variants are masked.
#' By masking certain variants, instead of deleting them, it preserves the dimensions of the data structure until a change needs to be committed.
#' Variants can be masked based on the value of the QUAL column of the vcf object (min_QUAL).
#' Experience seems to show that this value is either at its maximum (999) or a rather low value.
#' The minimum and maximum cumulative sequence depth (min_DP and max_DP) can also be used.
#' The minimum and maximum mapping qualities (min_MQ and max_MQ) can also be used.
#' The defaults are intentionally permissive; thresholds should be chosen to suit each dataset.
#'
#'
#' This vector is stored in the var.info$mask slot of a chromR object.
#'
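#' @examples
#' # A minimal sketch using the example chromR object; these thresholds come
#' # from the create.chromR() example and should be tuned to each dataset:
#' data(chromR_example)
#' chrom <- masker(chrom, min_QUAL = 1, min_DP = 300, max_DP = 700)
#' sum(chrom@var.info$mask)
#'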
#masker <- function(x, min_QUAL=999, min_DP=0.25, max_DP=0.75, minmq=20, maxmq=50, ...){
masker <- function(x, min_QUAL=1, min_DP=1, max_DP=1e4, min_MQ=20, max_MQ=100, preserve=FALSE, ...){
quals <- getQUAL(x)
# quals <- x@var.info$QUAL
# info <- x@var.info[,grep("DP|MQ",names(x@var.info)), drop=FALSE]
# mask <- rep(TRUE, times=nrow(info))
if (preserve){
mask <- x@var.info$mask
} else {
mask <- rep(TRUE, times=nrow(x@var.info))
}
# Mask on QUAL
if(sum(is.na(quals)) < length(quals)){
# mask[quals < min_QUAL] <- FALSE
mask <- mask & quals >= min_QUAL
}
# Mask on DP
if( !is.null( x@var.info$DP ) ){
if(sum(is.na(x@var.info$DP)) < length(x@var.info$DP)){
# mask[x@var.info$DP < min_DP] <- FALSE
# mask[x@var.info$DP > max_DP] <- FALSE
mask <- mask &
x@var.info$DP >= min_DP &
x@var.info$DP <= max_DP
}
}
if( !is.null( x@var.info$MQ ) ){
if(sum(is.na(x@var.info$MQ)) < length(x@var.info$MQ)){
# mask[x@var.info$MQ < min_MQ] <- FALSE
# mask[x@var.info$MQ > max_MQ] <- FALSE
mask <- mask &
x@var.info$MQ >= min_MQ &
x@var.info$MQ <= max_MQ
}
}
x@var.info$mask <- mask
return(x)
}
#' @rdname chromR_functions
#'
#' @param x object of class chromR
#'
#' @export
#' @aliases variant.table
#'
#' @details
#' The function \strong{variant.table} creates a data.frame containing information about variants.
#'
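#' @examples
#' # Sketch: tabulate the variants that pass the mask (example data assumed):
#' data(chromR_example)
#' tab <- variant.table(chrom)
#' head(tab)
#'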
variant.table <- function(x){
tab <- x@var.info[x@var.info$mask,]
# tab <- cbind(rep(x@names, times=nrow(tab)), x@var.info$QUAL[x@var.info$mask], tab)
# names(tab)[1] <- "chrom"
# names(tab)[2] <- "QUAL"
tab
}
#' @rdname chromR_functions
#' @export
#' @aliases win.table
#' @details
#' The function \strong{win.table} creates a data.frame containing information summarized by window.
#'
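#' @examples
#' # Sketch: per-window summaries; assumes the object has been processed with
#' # proc.chromR() so that the win.info slot is populated:
#' data(chromR_example)
#' chrom <- proc.chromR(chrom, win.size = 1000)
#' win <- win.table(chrom)
#' head(win)
#'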
win.table <- function(x){
tab <- x@win.info
tab <- cbind(rep(x@names, times=nrow(tab)), tab)
names(tab)[1] <- "chrom"
tab
}
# EOF.
##### End of R/chromR_functions.R #####
#' @title Convert chrom objects to vcfR objects
#' @rdname chrom_to_vcfR
#' @export
#'
#' @description
#' Convert chrom objects to vcfR objects.
#'
#' @param x Object of class chrom
#' @param use.mask Logical, determine if mask from chrom object should be used to subset vcf data
#'
#' @details
#' The chrom object is subset and recast as a vcfR object. When use.mask is set
#' to TRUE, the object is subset to only the variants (rows) the mask indicates
#' should be included. When use.mask is set to FALSE (the default), all variants
#' (rows) from the chrom object are included in the new vcfR object.
#'
#' @return Returns an object of class vcfR.
#'
#'
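#' @examples
#' # Sketch: recover the vcfR object held in a chromR object, keeping only
#' # the variants that pass the mask (example data assumed):
#' data(chromR_example)
#' vcf <- chromR2vcfR(chrom, use.mask = TRUE)
#'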
chromR2vcfR <- function(x, use.mask=FALSE){
# if(class(x) != "chromR"){
if( !inherits(x, "chromR") ){
stop("Unexpected class! Expecting an object of class chromR.")
}
mask <- x@var.info$mask
vcf <- x@vcf
if(use.mask == TRUE){
vcf <- vcf[mask,]
}
return(vcf)
}
##### End of R/chromR_to_vcfR.R #####
#' @rdname chromo_plot
#' @title Plot chromR object
#' @name chromo_plot
#' @export
#' @aliases chromo
#'
#' @description plot chromR objects
#'
#' @param chrom an object of class chrom.
#' @param boxp logical specifying whether marginal boxplots should be plotted [T/F].
#' @param dp.alpha degree of transparency applied to points in dot plots [0-255].
#' @param chrom.s start position for the chromosome. (Deprecated. use xlim)
#' @param chrom.e end position for the chromosome. (Deprecated. use xlim)
#' @param drlist1 a named list containing elements to create a drplot
#' @param drlist2 a named list containing elements to create a drplot
#' @param drlist3 a named list containing elements to create a drplot
#' @param ... arguments to be passed to other methods.
#'
#'
#' @details
#' Each \strong{drlist} parameter is a list containing elements necessary to plot a dr.plot.
#' This list should contain up to seven elements named title, dmat, rlist, dcol, rcol, rbcol and bwcol.
#' These elements are documented in the dr.plot page where they are presented as individual parameters.
#' The one exception is bwcol which is a vector of colors for the marginal box and whisker plot.
#' This is provided so that different colors may be used in the dot plot and the box and whisker plot.
#' For example, transparency may be desired in the dot plot but not the box and whisker plot.
#' When one (or more) of these elements is omitted an attempt to use default values is made.
#'
#' @return Returns an invisible NULL.
#'
#'
#' @seealso \code{\link{dr.plot}}
#'
#'
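#' @examples
#' # Sketch: a quality-control plot of the example chromR object; assumes it
#' # has been processed so that var.info and win.info are populated:
#' data(chromR_example)
#' chromoqc(chrom)
#'
#' # Sketch of a custom drlist: a table of positions and values plus a dot
#' # color, handed to chromo() as drlist1:
#' myList <- list(title = "Read Depth (DP)",
#'   dmat = chrom@var.info[, c("POS", "DP")],
#'   dcol = "#1F78B4")
#' chromo(chrom, drlist1 = myList)
#'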
chromo <- function( chrom,
boxp = TRUE,
dp.alpha = 255,
chrom.s = 1,
chrom.e = NULL,
drlist1 = NULL, drlist2 = NULL, drlist3 = NULL,
# title1, dmat1, rlist1, dcol1, rcol1, rbcol1,
# title2, dmat2, rlist2, dcol2, rcol2, rbcol2,
# title3, dmat3, rlist3, dcol3, rcol3, rbcol3,
# verbose=TRUE,
# nsum=TRUE,
...){
# if( class(chrom) != "chromR" ){
if( !inherits(chrom, "chromR") ){
stop("Expecting object of class chromR")
}
if( chrom.s != 1 | !is.null(chrom.e) ){
stop("The parameters 'chrom.s' and 'chrom.e' were deprecated in vcfR v1.5.0. Please use 'xlim' instead")
}
myDots <- list(...)
if( !is.null( myDots$xlim ) ){
chrom.s <- myDots$xlim[1]
chrom.e <- myDots$xlim[2]
} else {
chrom.s <- 1
chrom.e <- length(chrom)
}
# Test to see if the mask is populated.
# if( length(grep('mask', colnames(chrom@var.info))) < 1 ){
# chrom@var.info$mask <- rep( TRUE, times=nrow(chrom@vcf@fix) )
# }
# Save original parameters.
orig.oma <- graphics::par('oma')
orig.mar <- graphics::par('mar')
# Get user's par(), ignoring the read-only variables.
userpar <- graphics::par(no.readonly = TRUE)
# Promise to reset graphics device
on.exit({
graphics::par(userpar)
})
# Initialize parameters.
mwidth <- 8
ncols <- 1
nrows <- 0
mheight <- 0.3 # Minor plot height
heights <- c()
##### ##### ##### ##### #####
#
# Determine the layout of the plot.
#
##### ##### ##### ##### #####
# Plot title
# if( length(chrom@name) > 0 ){
if( length(chrom@names) > 0 ){
graphics::par( oma = c(3,0,1,0) )
} else {
graphics::par( oma = c(3,0,0,0) )
}
# Marginal boxplots.
if( boxp == TRUE ){
ncols <- ncols + 1
mwidth <- c(mwidth, 1)
}
# drlist1
if( !is.null(drlist1) ){
nrows <- nrows + 1
heights <- c( heights, 1)
}
# drlist2
if( !is.null(drlist2) ){
nrows <- nrows + 1
heights <- c( heights, 1)
}
# drlist3
if( !is.null(drlist3) ){
nrows <- nrows + 1
heights <- c( heights, 1)
}
# Variant plot.
if( !is.null(chrom@win.info$variants) ){
nrows <- nrows + 1
heights <- c( heights, 1)
}
# Nucleotide content and sequence plots.
if( length( grep("^A$", colnames([email protected]) ) ) == 1 ){
nrows <- nrows + 2
heights <- c( heights, 1, mheight)
}
# Annotation plot.
if( nrow(chrom@ann) > 0 ){
nrows <- nrows + 1
heights <- c( heights, mheight)
}
if( nrows == 0 ){
stop("no data has been included!")
}
##### ##### ##### ##### #####
#
# Establish layout for plot
#
##### ##### ##### ##### #####
graphics::layout( matrix( 1:c( ncols * nrows ),
nrow=nrows,
ncol=ncols,
byrow = TRUE ),
widths = mwidth,
heights = heights
)
##### ##### ##### ##### #####
#
# Plot
#
##### ##### ##### ##### #####
##### ##### ##### ##### #####
#
# drplots
#
##### ##### ##### ##### #####
# drplot1
if( !is.null(drlist1) ){
graphics::par( mar = c(0,4,0,0) )
bdim <- dr.plot( dmat = drlist1$dmat,
rlst = drlist1$rlst,
chrom.s = chrom.s,
chrom.e = chrom.e,
title = drlist1$title,
dcol = drlist1$dcol,
rcol = drlist1$rcol,
rbcol = drlist1$rbcol,
... )
graphics::par( mar = orig.mar )
if( boxp == TRUE ){
graphics::par( mar = c(0,0,0,0) )
if( is.null(drlist1$bwcol) ){
drlist1$bwcol <- drlist1$dcol
}
graphics::boxplot( x = drlist1$dmat[,-1],
ylim = bdim,
xaxt = "n",
yaxt = "n",
col = drlist1$bwcol
)
graphics::par( mar = orig.mar )
}
}
# drplot2
if( !is.null(drlist2) ){
graphics::par( mar = c(0,4,0,0) )
bdim <- dr.plot( dmat = drlist2$dmat,
rlst = drlist2$rlst,
chrom.s = chrom.s,
chrom.e = chrom.e,
title = drlist2$title,
dcol = drlist2$dcol,
rcol = drlist2$rcol,
rbcol = drlist2$rbcol,
... )
graphics::par( mar = orig.mar )
if( boxp == TRUE ){
graphics::par( mar = c(0,0,0,0) )
if( is.null(drlist2$bwcol) ){
drlist2$bwcol <- drlist2$dcol
}
graphics::boxplot( x = drlist2$dmat[,-1],
ylim = bdim,
xaxt = "n",
yaxt = "n",
col = drlist2$bwcol
)
graphics::par( mar = orig.mar )
}
}
# drplot3
if( !is.null(drlist3) ){
graphics::par( mar = c(0,4,0,0) )
bdim <- dr.plot( dmat = drlist3$dmat,
rlst = drlist3$rlst,
chrom.s = chrom.s,
chrom.e = chrom.e,
title = drlist3$title,
dcol = drlist3$dcol,
rcol = drlist3$rcol,
rbcol = drlist3$rbcol,
... )
graphics::par( mar = orig.mar )
if( boxp == TRUE ){
graphics::par( mar = c(0,0,0,0) )
if( is.null(drlist3$bwcol) ){
drlist3$bwcol <- drlist3$dcol
}
graphics::boxplot( x = drlist3$dmat[,-1],
ylim = bdim,
xaxt = "n",
yaxt = "n",
col = drlist3$bwcol
)
graphics::par( mar = orig.mar )
}
}
##### ##### ##### ##### #####
#
# chromR plots
#
##### ##### ##### ##### #####
# Variant plot
if( !is.null(chrom@win.info$variants) ){
rmat <- cbind(chrom@win.info[,'start'] ,
0,
chrom@win.info[,'end'],
chrom@win.info[,'variants'] / c(chrom@win.info[,'end'] - chrom@win.info[,'start'])
)
graphics::par( mar = c(0,4,0,0) )
bdim <- dr.plot( dmat = NULL, rlst = list( rmat ), chrom.s = 1, chrom.e = chrom@len,
title = "Variants per Site", hline = NULL,
dcol = NULL,
rcol = grDevices::rgb( red=178, green=34, blue=34, alpha=255, maxColorValue = 255 ),
rbcol = grDevices::rgb( red=178, green=34, blue=34, alpha=255, maxColorValue = 255 ),
... )
if( length( grep("^A$", colnames([email protected])) ) == 0 & nrow(chrom@ann) == 0 ){
graphics::axis( side = 1, line = 0 )
}
graphics::par( mar = c(5,4,4,2) + 0.1 )
if( boxp == TRUE ){
graphics::par( mar = c(0,0,0,0) )
graphics::boxplot( rmat[,4], ylim=bdim, yaxt = "n",
col = grDevices::rgb( red=178, green=34, blue=34, alpha=255, maxColorValue = 255 ) )
graphics::par( mar = c(5,4,4,2) + 0.1 )
}
}
# Nucleotide plot
if( length( grep("^A$", colnames([email protected])) ) == 1 ){
rmat1 <- cbind([email protected][,'start'],
0,
[email protected][,'end'],
rowSums([email protected][,c('A', 'T')]) / c([email protected][,'end'] - [email protected][,'start'] )
)
rmat2 <- cbind([email protected][,'start'],
rmat1[,4],
[email protected][,'end'],
rmat1[,4] + rowSums([email protected][,c('C', 'G')]) / c([email protected][,'end'] - [email protected][,'start'] )
)
graphics::par( mar = c(0,4,0,0) )
bdim <- dr.plot( dmat = NULL, rlst = list(rmat1, rmat2), chrom.s = 1, chrom.e = chrom@len,
title = "Nucleotide Content", hline = NULL,
dcol = NULL,
rcol = c(grDevices::rgb( red=000, green=034, blue=205, maxColorValue = 255),
grDevices::rgb( red=255, green=235, blue=000, maxColorValue = 255)),
rbcol = c(grDevices::rgb( red=000, green=034, blue=205, maxColorValue = 255),
grDevices::rgb( red=255, green=235, blue=000, maxColorValue = 255)),
... )
graphics::par( mar = c(5,4,4,2) + 0.1 )
if( boxp == TRUE ){
graphics::par( mar = c(0,0,0,0) )
rmat1 <- cbind(rmat1[,4], rmat2[,4])
rmat1[,2] <- rmat1[,2] - rmat1[,1]
graphics::boxplot( rmat1, ylim=bdim, yaxt = "n",
col = c(grDevices::rgb( red=000, green=034, blue=205, maxColorValue = 255),
grDevices::rgb( red=255, green=235, blue=000, maxColorValue = 255)
),
border = c(grDevices::rgb( red=000, green=034, blue=205, maxColorValue = 255),
grDevices::rgb( red=255, green=235, blue=000, maxColorValue = 255)
),
xaxt = "n"
)
graphics::par( mar = c(5,4,4,2) + 0.1 )
}
# Sequence plot.
if( nrow(chrom@seq.info$nuc.win) > 0 ){
rmat1 <- cbind(chrom@seq.info$nuc.win[,1], -1, chrom@seq.info$nuc.win[,2], 1)
} else {
rmat1 <- NULL
}
if( nrow(chrom@seq.info$N.win) > 0 ){
rmat2 <- cbind(chrom@seq.info$N.win[,1], -0.5, chrom@seq.info$N.win[,2], 0.5)
} else {
rmat2 <- NULL
}
graphics::par( mar = c(0,4,0,0) )
# Create a list from the sequence data.
if( !is.null(rmat1) & !is.null(rmat2) ){
rlist <- list( rmat1, rmat2 )
}
if( !is.null(rmat1) & is.null(rmat2) ){
rlist <- list( rmat1 )
}
if( is.null(rmat1) & !is.null(rmat2) ){
rlist <- list( rmat2 )
}
if( is.null(rmat1) & is.null(rmat2) ){
rlist <- NULL
}
dr.plot( rlst = rlist, chrom.s = 1, chrom.e = chrom@len,
title = "Nucleotides", hline = NULL,
dcol = NULL,
rcol = c('green', 'red'),
rbcol = c('green', 'red'),
yaxt = "n",
#frame.plot = FALSE,
... )
if( nrow(chrom@ann) == 0 ){
graphics::axis( side = 1, line = 0 )
}
graphics::par( mar = c(5,4,4,2) + 0.1 )
if( boxp == TRUE){
null.plot()
}
}
# Annotation plot.
if( nrow(chrom@ann) > 0 ){
rmat <- cbind( chrom@ann[,4], -1, chrom@ann[,5], 1)
graphics::par( mar=c(0,4,0,0) )
dr.plot( rlst = list( rmat ), chrom.e = chrom@len, title = "Annotations",
rcol = grDevices::rgb(178,34,34, maxColorValue = 255),
rbcol = grDevices::rgb(178,34,34, maxColorValue = 255),
hline = 0,
yaxt = "n",
...)
graphics::axis( side = 1, line = 0 )
graphics::par( mar = c(5,4,4,2) + 0.1 )
if( boxp == TRUE){
null.plot()
}
}
graphics::title( xlab = "Base pairs", line = 1.6, outer = TRUE )
# if( length(chrom@name) > 0 ){
if( length(chrom@names) > 0 ){
graphics::title( main = chrom@names, line = 0.2, outer = TRUE )
# graphics::title( main = chrom@name, line = 0.2, outer = TRUE )
}
##### ##### ##### ##### #####
#
# Reset graphics parameters to defaults.
#
##### ##### ##### ##### #####
# graphics::par( mar = orig.mar )
# graphics::par( oma = orig.oma )
# graphics::par( mfrow = c(1,1) )
invisible(NULL)
}
##### ##### ##### ##### #####
#
# End chromo
#
##### ##### ##### ##### #####
##### ##### ##### ##### #####
#
# Begin chromoqc
#
##### ##### ##### ##### #####
#' @rdname chromo_plot
#' @export
#' @aliases chromoqc
#'
chromoqc <- function( chrom,
boxp = TRUE,
dp.alpha = 255,
...){
# if( class(chrom) != "chromR" ){
if( !inherits(chrom, "chromR") ){
stop( paste("expecting an object of class chromR, got", class(chrom), "instead.") )
}
# Read depth
myList1 <- list(title = "Read Depth (DP)",
dmat = chrom@var.info[ chrom@var.info[,"mask"] , c("POS","DP") ],
dcol = grDevices::rgb( red=30, green=144, blue=255, alpha=dp.alpha, maxColorValue = 255),
bwcol = grDevices::rgb( red=30, green=144, blue=255, maxColorValue = 255)
)
# Mapping Quality (MQ)
if( !is.null(chrom@var.info$MQ) ){
myList2 <- list(title = "Mapping Quality (MQ)",
dmat = chrom@var.info[ chrom@var.info[,"mask"] , c("POS","MQ") ],
dcol = grDevices::rgb( red=46, green=139, blue=87, alpha=dp.alpha, maxColorValue = 255),
bwcol = grDevices::rgb( red=46, green=139, blue=87, maxColorValue = 255)
)
} else {
myList2 <- NULL
}
# Phred-Scaled Quality (QUAL)
dmat <- as.matrix( cbind(chrom@var.info[,"POS"],
as.numeric( chrom@vcf@fix[,"QUAL"] ) ) )
dmat <- dmat[ chrom@var.info[,"mask"], , drop = FALSE]
myList3 <- list(title = "Phred-Scaled Quality (QUAL)",
dmat = dmat,
dcol = grDevices::rgb(red=139, green=0, blue=139, alpha=dp.alpha, maxColorValue = 255),
bwcol = grDevices::rgb(red=139, green=0, blue=139, maxColorValue = 255)
)
chromo( chrom, boxp = boxp,
# chrom.e = chrom@len,
drlist1 = myList1,
drlist2 = myList2,
drlist3 = myList3,
...
)
}
##### ##### ##### ##### #####
#
# End chromoqc
#
##### ##### ##### ##### #####
##### ##### ##### ##### #####
# EOF.
##### End of R/chromo_plot.R #####
#' @title Create chromR object
#'
#' @name create.chromR
#' @rdname create_chromR
#' @export
#' @aliases create.chromR
#'
#' @description
#' Creates and populates an object of class chromR.
#'
#' @param vcf an object of class vcfR
#' @param name a name for the chromosome (for plotting purposes)
#' @param seq a sequence as a DNAbin object
#' @param ann an annotation file (gff-like)
#' @param verbose should verbose output be printed to the console?
#' @param x an object of class chromR
#' @param gff a data.frame containing annotation data in the gff format
# @param ... arguments
#'
#' @details
#' Creates and names a chromR object from a name, a chromosome (an ape::DNAbin object), variant data (a vcfR object) and annotation data (gff-like).
#' The function \strong{create.chromR} is a wrapper which calls functions to populate the slots of the chromR object.
#'
#' The function \strong{vcfR2chromR} is called by create.chromR and transfers the data from the slots of a vcfR object to the slots of a chromR object.
#' It also tries to extract the 'DP' and 'MQ' fields (when present) from the fix slot's INFO column.
#' It is not anticipated that a user would need to use this function directly, but it is placed here in case they do.
#'
#' The function \strong{seq2chromR} is currently defined as a generic function.
#' This may change in the future.
#' This function takes an object of class DNAbin and assigns it to the 'seq' slot of a chromR object.
#'
#' The function \strong{ann2chromR} is called by create.chromR and transfers the information from a gff-like object to the 'ann' slot of a chromR object.
#' It is not anticipated that a user would need to use this function directly, but it is placed here in case they do.
#'
#'
#' @seealso
# \code{\link{seq2chromR}},
# \code{\link{vcf2chromR}},
#' \code{\link{chromR-class}},
#' \code{\link{vcfR-class}},
#' \code{\link[ape]{DNAbin}},
# \href{http://www.1000genomes.org/wiki/analysis/variant\%20call\%20format/vcf-variant-call-format-version-41}{vcf format},
#' \href{https://github.com/samtools/hts-specs}{VCF specification}
#' \href{https://github.com/The-Sequence-Ontology/Specifications/blob/master/gff3.md}{gff3 format}
#'
#' @examples
#' library(vcfR)
#' data(vcfR_example)
#' chrom <- create.chromR('sc50', seq=dna, vcf=vcf, ann=gff)
#' head(chrom)
#' chrom
# colnames(chrom)
#' plot(chrom)
#'
#' chrom <- masker(chrom, min_QUAL = 1, min_DP = 300, max_DP = 700, min_MQ = 59, max_MQ = 61)
#' chrom <- proc.chromR(chrom, win.size=1000)
#'
#' plot(chrom)
#
#' chromoqc(chrom)
# chromoqc(pinf_mt, xlim=c(25e+03, 3e+04), dot.alpha=99)
#
# set.seed(10)
# x1 <- as.integer(runif(n=20, min=1, max=39000))
# y1 <- runif(n=length(x1), min=1, max=100)
# chromodot(pinf_mt, x1=x1, y1=y1)
#'
# 1 2 3 4 5
# 12345678901234567890123456789012345678901234567890
# chromodot(pinf_mt, x1=x1, y1=y1, label1='My data',
# x2=x1, y2=y1, label2='More of my data',
# dot.alpha='ff')
#'
# chromohwe(pinf_mt, dot.alpha='ff')
#'
# chromopop(pinf_mt)
# gt <- extract.gt(pinf_mt)
# head(gt)
# tab <- variant.table(pinf_mt)
# head(tab)
# win <- window_table(pinf_mt)
# head(win)
# hist(tab$Ho - tab$He, col=5)
# # Note that this example is a mitochondrion, so this is a bit silly.
#'
create.chromR <- function(vcf, name="CHROM", seq=NULL, ann=NULL, verbose=TRUE){
# Determine whether we received the expected classes.
#stopifnot(class(vcf) == "vcfR")
stopifnot( inherits(vcf, "vcfR") )
if( length( unique( getCHROM(vcf) ) ) > 1 ){
myChroms <- unique( getCHROM(vcf) )
message('vcfR object includes more than one chromosome (CHROM).')
message( paste(myChroms, collapse = ", ") )
message("Subsetting to the first chromosome")
vcf <- vcf[ getCHROM(vcf) == myChroms[1],]
}
if( length( names(seq) ) > 1 ){
mySeqs <- names(seq)
message('DNAbin object includes more than one chromosome.')
message( paste(mySeqs, collapse = ", ") )
message("Subsetting to the first chromosome")
seq <- seq[ mySeqs[1] ]
}
if( length( unique( ann[,1] ) ) > 1 ){
myChroms <- unique( ann[,1] )
message('Annotations include more than one chromosome.')
message( paste(myChroms, collapse = ", ") )
message("Subsetting to the first chromosome")
ann <- ann[ann[,1] == myChroms[1], , drop = FALSE]
}
# Initialize chromR object.
x <- new(Class="chromR")
names(x) <- name
# setName(x) <- name
# Insert vcf into Chom.
if(length(vcf)>0){
# x <- vcf2chromR(x, vcf)
x@vcf <- vcf
}
# Insert seq into chromR
# Needs to handle lists and matrices of DNAbin
# Matrices are better behaved.
#
if(is.null(seq)){
POS <- getPOS(x)
x@len <- POS[length(POS)]
# x@len <- x@var.info$POS[length(x@var.info$POS)]
#} else if (class(seq)=="DNAbin"){
} else if ( inherits(seq, "DNAbin") ){
x <- seq2chromR(x, seq)
} else {
#stopifnot( class(seq)=="DNAbin" )
stopifnot( inherits(seq, "DNAbin") )
}
# Annotations.
if( !is.null(ann) ){
if( nrow(ann) > 0 ){
# if(nrow(ann) > 0){
#stopifnot(class(ann) == "data.frame")
stopifnot( inherits(ann, "data.frame") )
#if(class(ann[,4]) == "factor"){ann[,4] <- as.character(ann[,4])}
if( inherits(ann[,4], "factor") ){ann[,4] <- as.character(ann[,4])}
#if(class(ann[,5]) == "factor"){ann[,5] <- as.character(ann[,5])}
if( inherits(ann[,5], "factor") ){ann[,5] <- as.character(ann[,5])}
#if(class(ann[,4]) == "character"){ann[,4] <- as.numeric(ann[,4])}
if( inherits(ann[,4], "character") ){ann[,4] <- as.numeric(ann[,4])}
#if(class(ann[,5]) == "character"){ann[,5] <- as.numeric(ann[,5])}
if( inherits(ann[,5], "character") ){ann[,5] <- as.numeric(ann[,5])}
x@ann <- ann
# Manage length
if( max(as.integer(as.character(ann[,4]))) > x@len ){
x@len <- max(as.integer(as.character(ann[,4])))
}
if( max(as.integer(as.character(ann[,5]))) > x@len ){
x@len <- max(as.integer(as.character(ann[,5])))
}
}
}
# Report names of objects to user.
if(verbose == TRUE){
# Print names of elements to see if they match.
message("Names in vcf:")
chr_names <- unique(getCHROM(x))
message(paste(' ', chr_names, sep=""))
# message(paste(' ', unique(as.character(x@var.info$CHROM)), sep=""))
#if(class(x@seq) == "DNAbin"){
if( inherits(x@seq, "DNAbin") ){
message("Names of sequences:")
message(paste(' ', unique(labels(x@seq)), sep=""))
# if(unique(as.character(x@var.info$CHROM)) != unique(labels(x@seq))){
if(chr_names != unique(labels(x@seq))){
warning("
Names in variant data and sequence data do not match perfectly.
If you choose to proceed, we'll do our best to match the data.
But prepare yourself for unexpected results.")
# message("Names in variant file and sequence file do not match perfectly.")
# message("If you choose to proceed, we'll do our best to match data.")
# message("But prepare yourself for unexpected results.")
}
}
if(nrow(x@ann) > 0){
message("Names in annotation:")
message(paste(' ', unique(as.character(x@ann[,1])), sep=""))
# if(unique(as.character(x@var.info$CHROM)) != unique(as.character(x@ann[,1]))){
if( length( unique(as.character(x@ann[,1])) ) > 1 ){
warning("The annotation data appear to include more than one chromosome.\nUsing only the first.\n")
firstChrom <- unique(as.character(x@ann[,1]))[1]
x@ann <- x@ann[ x@ann[,1] == firstChrom, , drop = FALSE]
myChrom <- unique( x@ann[,1] )
warning( paste('Using annotation chromosome:', myChrom, '\n') )
}
if(chr_names != unique(as.character(x@ann[,1]))){
warning("
Names in variant data and annotation data do not match perfectly.
If you choose to proceed, we'll do our best to match the data.
But prepare yourself for unexpected results.")
# message("Names in variant file and annotation file do not match perfectly.\n")
# message("If you choose to proceed, we'll do our best to match data.\n")
# message("But prepare yourself for unexpected results.\n")
}
}
}
# Check to see if annotation positions exceed seq position.
if( nrow(x@ann) > 0 ){
if( max(as.integer(as.character(x@ann[,4]))) > x@len | max(as.integer(as.character(x@ann[,5]))) > x@len ){
stop("Annotation positions exceed chromosome positions. Is this the correct set of annotations?")
}
}
if( verbose == TRUE ){
message("Initializing var.info slot.")
}
x@var.info <- data.frame( CHROM = x@vcf@fix[,"CHROM"] , POS = as.integer(x@vcf@fix[,"POS"]) )
# mq <- getINFO(x, element="MQ")
mq <- extract.info(x, element = 'MQ', as.numeric = TRUE)
if( length(mq) > 0 ){ x@var.info$MQ <- mq }
# dp <- getDP(x)
dp <- extract.info(x, element = 'DP', as.numeric = TRUE)
if( length(dp) > 0 ){ x@var.info$DP <- dp }
if( nrow(x@var.info) > 0 ){
x@var.info$mask <- TRUE
}
if( verbose == TRUE ){
message("var.info slot initialized.")
}
return(x)
}
##### ##### ##### ##### #####
#
# chromR data loading functions
#
##### ##### ##### ##### #####
#' @rdname create_chromR
#' @export
#' @aliases vcfR2chromR
# @aliases chromR-methods vcf2chromR
#'
#'
# @description
# Methods to work with objects of the chromR class
# Reads in a vcf file and stores it in a vcf class.
#'
# @param x an object of class chromR
#'
#'
vcfR2chromR <- function(x, vcf){
x@vcf.fix <- as.data.frame(vcf@fix)
# colnames(x@vcf.fix) <- c('CHROM','POS','ID','REF','ALT','QUAL','FILTER','INFO')
# x@vcf.fix[,2] <- as.numeric(x@vcf.fix[,2])
# x@vcf.fix[,6] <- as.numeric(x@vcf.fix[,6])
#
for(i in 1:ncol(vcf@gt)){
vcf@gt[,i] <- as.character(vcf@gt[,i])
}
x@vcf.gt <- vcf@gt
#
x@vcf.meta <- vcf@meta
#
# Initialize var.info slot
x@var.info <- data.frame(matrix(ncol=5, nrow=nrow(vcf@fix)))
names(x@var.info) <- c('CHROM', 'POS', 'mask', 'DP','MQ')
# names(x@var.info) <- c('DP','MQ', 'mask')
#
x@var.info$CHROM <- x@vcf.fix$CHROM
x@var.info$POS <- x@vcf.fix$POS
x@var.info$mask <- rep(TRUE, times=nrow(x@vcf.fix))
#
if(length(grep("DP=", vcf@fix[,8])) > 0){
x@var.info$DP <- unlist(lapply(strsplit(unlist(lapply(strsplit(as.character(vcf@fix[,8]), ";"), function(x){grep("^DP=", x, value=TRUE)})),"="),function(x){as.numeric(x[2])}))
}
if(length(grep("MQ=", vcf@fix[,8])) > 0){
x@var.info$MQ <- unlist(lapply(strsplit(unlist(lapply(strsplit(as.character(vcf@fix[,8]), ";"), function(x){grep("^MQ=", x, value=TRUE)})),"="),function(x){as.numeric(x[2])}))
}
#
# assign may be more efficient.
return(x)
}
# Needs to handle lists and matrices of DNAbin.
# Matrices appear better behaved.
#
#' @rdname create_chromR
#' @export
#' @aliases seq2chromR
#'
seq2chromR <- function(x, seq=NULL){
# A DNAbin will store in a list when the fasta contains
# multiple sequences, but as a matrix when the fasta
# only contains one sequence.
if(is.list(seq)){
#stopifnot(length(seq)==1)
if( length(seq) != 1 ){
stop("seq2chromR expects a DNAbin object with only one sequence in it.")
}
x@seq <- as.matrix(seq)
x@len <- length(x@seq)
} else if (is.matrix(seq)){
stopifnot(nrow(seq)==1)
# x@seq <- ape::as.DNAbin(as.character(seq)[1,])
# dimnames(pinf_dna)[[1]][1]
x@seq <- seq
x@len <- length(x@seq)
} else {
stop("DNAbin is neither a list or matrix")
}
return(x)
}
#' @rdname create_chromR
#' @export
#' @aliases ann2chromR
#'
ann2chromR <- function(x, gff){
x@ann <- as.data.frame(gff)
colnames(x@ann) <- c('seqid','source','type','start','end','score','strand','phase','attributes')
x@ann$start <- as.numeric(as.character(x@ann$start))
x@ann$end <- as.numeric(as.character(x@ann$end))
return(x)
}
##### ##### ##### ##### #####
# EOF.
##### End of R/create_chromR.R #####
#' @title dr.plot elements
#' @name dr.plot elements
#'
#' @description Plot chromR objects and their components
#' @rdname drplot
#'
#' @param dmat a numeric matrix for dot plots where the first column is position (POS) and subsequent columns are y-values.
#' @param chrom.s start position for the chromosome
#' @param chrom.e end position for the chromosome
#' @param rlst a list containing numeric matrices containing rectangle coordinates.
#' @param title optional string to be used for the plot title.
#' @param hline vector of positions to be used for horizontal lines.
#' @param dcol vector of colors to be used for dot plots.
#' @param rcol vector of colors to be used for rectangle plots.
#' @param rbcol vector of colors to be used for rectangle borders.
#' @param ... arguments to be passed to other methods.
#'
#'
#' @details Plot details
#' The parameter \strong{rlst} is a list of numeric matrices containing rectangle coordinates.
#' The first column of each matrix contains the left positions, the second column the bottom coordinates, the third column the right coordinates and the fourth column the top coordinates.
#'
#' @return Returns the y-axis minimum and maximum values invisibly.
#'
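#' @examples
#' # Sketch with made-up coordinates: one dot series and one rectangle.
#' # dmat holds position/value pairs; rmat holds left, bottom, right and top:
#' dmat <- cbind(seq(1, 1e4, by = 100), rnorm(100, mean = 10))
#' rmat <- matrix(c(2e3, 0, 4e3, 5), nrow = 1)
#' dr.plot(dmat = dmat, rlst = list(rmat), chrom.e = 1e4, title = "demo")
#'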
#' @seealso
#' \code{\link[graphics]{rect}}
#' \code{\link{chromo}}
#'
#' @export
dr.plot <- function( dmat = NULL, rlst = NULL,
chrom.s = 1, chrom.e = NULL,
title = NULL, hline = NULL,
dcol = NULL,
rcol = NULL, rbcol = NULL,
... ){
# Attempt to handle rlst.
# Not a list but a matrix
# if( class(rlst) != "list" & class(rlst) == "matrix" ){
if( !inherits(rlst, "list") & inherits(rlst, "matrix") ){
rlst <- list( rlst )
}
# if( class(rlst) != "list" & !is.null(rlst) ){
if( !inherits(rlst, "list") & !is.null(rlst) ){
stop( paste("parameter rlst is of type", class(rlst), "instead of type list.") )
}
# Determine x max.
if( is.null(chrom.e) ){
stop("chrom.e (end chromosome position) must be specified.")
}
# Determine y min and max.
if( !is.null(dmat) ){
ymin <- min( dmat[,-1], na.rm = TRUE )
ymax <- max( dmat[,-1], na.rm = TRUE )
} else {
ymin <- 0.0
ymax <- 0.1
}
if( !is.null(rlst) ){
rmin <- min( unlist( lapply( rlst, function(x){ min(x[,c(2,4)], na.rm = TRUE) } ) ) )
rmax <- max( unlist( lapply( rlst, function(x){ max(x[,c(2,4)], na.rm = TRUE) } ) ) )
} else {
rmin <- 0
rmax <- 0
}
ymin <- min( c(ymin, rmin), na.rm = TRUE )
ymax <- max( c(ymax, rmax), na.rm = TRUE )
ymin <- ymin * 1.1
ymax <- ymax * 1.1
# if( ymin == 0 ){ ymin <- -0.9 }
# Color palettes.
if( is.null(dcol) ){
dcol <- 1:8
}
if( is.null(rcol) ){
rcol <- 1:8
}
if( is.null(rbcol) ){
rbcol <- 1:8
}
# Initialize the plot.
plot( c(chrom.s, chrom.e), c(0,0), type="n",
xaxt = "n", xlab="",
ylab="", ylim = c(ymin, ymax),
las=1,
... )
# Horizontal lines
if( !is.null(hline) ){
graphics::abline( h = hline, lty = 2, col = "#808080" )
}
# Rect plot.
if( length(rlst) > 1 ){
rcol <- rep( rcol, times=length(rlst))
rbcol <- rep(rbcol, times=length(rlst))
}
if( !is.null(rlst) ){
for( i in 1:length(rlst) ){
rmat <- rlst[[i]]
graphics::rect( xleft = rmat[,1], ybottom = rmat[,2],
xright = rmat[,3], ytop = rmat[,4],
col = rcol[i], border = rbcol[i],
... )
}
}
# palette("default")
# Dot plot.
if( !is.null(ncol(dmat)) ){
dcol <- rep( dcol, times=ncol(dmat) )
}
if( !is.null(dmat) ){
POS <- dmat[ ,1 ]
dmat <- dmat[ ,-1 , drop=FALSE ]
for( i in 1:ncol(dmat) ){
graphics::points( POS, dmat[,i], pch = 20, col = dcol[i] )
}
}
graphics::title( main = title, line = -1.2 )
return( invisible( c(ymin, ymax) ) )
}
#' @rdname drplot
#'
#'
#' @export
null.plot <- function(){
org.mar <- graphics::par("mar")
graphics::par( mar=c(0,4,0,0) )
plot( 1:10, 1:10, type = "n", axes = FALSE, frame.plot = FALSE, xlab="", ylab="" )
graphics::par( mar=org.mar)
}
##### ##### ##### ##### #####
# EOF.
##### End of R/drplot.R #####
#' @title Extract elements from vcfR objects
#'
#' @rdname extract_gt
#'
#' @description
#' Extract elements from the 'gt' slot, convert extracted genotypes to their allelic state, extract indels from the data structure or extract elements from the INFO column of the 'fix' slot.
#'
#' @param x An object of class chromR or vcfR
#' @param element element to extract from vcf genotype data. Common options include "DP", "GT" and "GQ"
#' @param mask a logical indicating whether to apply the mask (TRUE) or return all variants (FALSE). Alternatively, a vector of logicals may be provided.
# @param as.matrix attempt to recast as a numeric matrix
#' @param verbose should verbose output be generated
#' @param as.numeric logical, should the matrix be converted to numerics
#' @param return.alleles logical indicating whether to return the genotypes (0/1) or alleles (A/T)
#' @param IDtoRowNames logical specifying whether to use the ID column from the FIX region as rownames
# @param allele.sep character which delimits the alleles in a genotype (/ or |), here this is not used for a regex (as it is in other functions)
#' @param extract logical indicating whether to return the extracted element or the remaining string
#' @param convertNA logical indicating whether to convert "." to NA.
#'
#' @details
#'
#' The function \strong{extract.gt} isolates elements from the 'gt' portion of vcf data.
#' Fields available for extraction are listed in the FORMAT column of the 'gt' slot.
#' Because different vcf producing software produce different fields the options will vary by software.
#' The mask parameter allows the mask to be implemented when using a chromR object.
#' The 'as.numeric' option will convert the results from a character to a numeric.
#' Note that if the data is not actually numeric, it will result in a numeric result which may not be interpretable.
#' The 'return.alleles' option allows the default behavior of numerically encoded genotypes (e.g., 0/1) to be converted to their nucleic acid representation (e.g., A/T).
# The allele.sep parameter allows the genotype delimiter to be specified.
#' The 'extract' option allows the user to return just the specified element (TRUE) or everything except the specified element (FALSE).
#'
#' Note that when 'as.numeric' is set to 'TRUE' but the data are not actually numeric, unexpected results will likely occur.
#' For example, the genotype field will typically be populated with values such as "0/1" or "1|0".
#' Although these may appear numeric, they contain a delimiter (the forward slash or the pipe) that is non-numeric.
#' This means that there is no straightforward conversion to a numeric, and unexpected values should be expected.
#'
#'
#' @seealso
#' \code{is.polymorphic}
#'
#'
#' @examples
#' data(vcfR_test)
#' gt <- extract.gt(vcfR_test)
#' gt <- extract.gt(vcfR_test, return.alleles = TRUE)
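#' # Sketch: depths as a numeric matrix; DP is truly numeric, so as.numeric
#' # is safe here (it would not be for GT, see Details):
#' dp <- extract.gt(vcfR_test, element = "DP", as.numeric = TRUE)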
#'
#' @export
extract.gt <- function(x, element="GT",
mask=FALSE,
as.numeric=FALSE,
return.alleles=FALSE,
IDtoRowNames = TRUE,
# allele.sep="/",
extract = TRUE,
convertNA = TRUE ){
# Validate that we have an expected data structure
#if( class(x) != "chromR" & class(x) != "vcfR" ){
if( !inherits(x, "chromR") & !inherits(x, "vcfR") ){
stop( "Expected an object of class chromR or vcfR" )
}
# Catch unreasonable mask specification.
#if(class(x) == "vcfR"){
if( inherits(x, "vcfR") ){
if(length(mask) == 1 && mask == TRUE){
# This condition does not appear to make
# sense and should be overridden.
mask <- FALSE
}
}
# If of class chromR, extract the vcf
#if(class(x) == "chromR"){
if( inherits(x, "chromR") ){
tmpMask <- x@var.info$mask
x <- x@vcf
}
# If a mask was specified in the call,
# override the one from var.info
if(length(mask) > 1){
tmpMask <- mask
mask <- TRUE
}
# Validate that the gt slot is a matrix
# if( class(x@gt) != "matrix" ){
if( !inherits(x@gt, "matrix") ){
stop( paste("gt slot expected to be of class matrix. Instead found class", class(x@gt)) )
}
if(as.numeric == TRUE & return.alleles == TRUE ){
stop("Invalid parameter choice, as.numeric and return.alleles can't both be true, alleles are characters!")
}
# If of class vcfR, call compiled code to extract field.
#if(class(x) == "vcfR"){
if( inherits(x, "vcfR") ){
if(colnames(x@gt)[1] != "FORMAT"){
stop("First column is not named 'FORMAT', this is essential information.")
}
outM <- .extract_GT_to_CM(x@fix,
x@gt,
element,
return.alleles,
as.integer(extract),
convertNA = as.numeric(convertNA) )
}
# If as.numeric is true, convert to a numeric matrix.
if(as.numeric == TRUE){
outM <- .CM_to_NM(outM)
}
#
if( IDtoRowNames == TRUE ){
if( sum(is.na(x@fix[,'ID'])) > 0 ){
x <- addID(x)
}
if( length(unique(x@fix[,'ID'])) != nrow(x@fix) ){
stop('ID column contains non-unique names')
}
rownames(outM) <- x@fix[,'ID']
}
# Apply mask.
if(mask == TRUE){
outM <- outM[tmpMask,]
}
return(outM)
}
#' @rdname extract_gt
#' @aliases extract.haps
# @param gt.split character which delimits alleles in genotypes
#' @param unphased_as_NA logical specifying how to handle unphased genotypes
#'
#' @details
#' The function \strong{extract.haps} uses extract.gt to isolate genotypes.
#' It then uses the information in the REF and ALT columns as well as an allele delimiter (gt_split) to split genotypes into their allelic state.
#' Ploidy is determined by the first non-NA genotype in the first sample.
#'
#' The VCF specification allows for genotypes to be delimited with a '|' when they are phased and a '/' when unphased.
#' This becomes important when dividing a genotype into two haplotypes.
#' When the alleles are phased this is straightforward.
#' When the alleles are unphased it presents a decision.
#' The default (unphased_as_NA = TRUE) is to handle unphased data by converting them to NAs.
#' When unphased_as_NA is set to FALSE the alleles will be returned in the order they appear in the genotype.
#' This does not assign each allele to its correct chromosome.
#' It becomes the user's responsibility to make informed decisions at this point.
#'
#'
#' @export
#extract.haps <- function(x, mask=FALSE, gt.split="|",verbose=TRUE){
extract.haps <- function(x,
mask=FALSE,
unphased_as_NA = TRUE,
verbose=TRUE ){
#if(class(x) == "chromR"){
if( inherits(x, "chromR") ){
if(length(mask) == 1 && mask==TRUE){
x <- chromR2vcfR(x, use.mask = TRUE)
} else {
# x <- chrom_to_vcfR(x)
x <- x@vcf
}
}
if(length(mask) > 1){
x <- x[mask,]
}
# Determine ploidy
first.gt <- unlist(strsplit(x@gt[,-1][!is.na(x@gt[,-1])][1], ":"))[1]
# ploidy <- length(unlist(strsplit(first.gt, split = gt.split, fixed = TRUE )))
ploidy <- length(unlist(strsplit(first.gt, split = "[\\|/]" )))
if( nrow( x@fix ) == 0 ){
# No variants, return empty matrix.
haps <- x@gt[ 0, -1 ]
} else if ( ploidy == 1 ){
haps <- extract.gt( x, return.alleles = TRUE )
} else if ( ploidy > 1 ) {
gt <- extract.gt( x )
haps <- .extract_haps(x@fix[,'REF'], x@fix[,'ALT'],
gt, as.numeric(unphased_as_NA), as.numeric(verbose))
} else {
stop('Oops, we should never arrive here!')
}
haps
}
#' @rdname extract_gt
#'
#' @aliases is.indel
#'
#'
#' @details
#' The function \strong{is.indel} returns a logical vector indicating which variants are indels (variants where an allele is greater than one character).
#'
#'
#' @examples
#' data(vcfR_test)
#' is.indel(vcfR_test)
#'
#'
#' @export
is.indel <- function(x){
#if(class(x) == 'chromR'){
if( inherits(x, 'chromR') ){
x <- x@vcf
}
#if(class(x) != "vcfR"){
if( !inherits(x, "vcfR") ){
stop("Unexpected class! Expecting an object of class vcfR or chromR.")
}
# Create an evaluation matrix
isIndel <- matrix(FALSE, nrow=nrow(x), ncol = 2)
colnames(isIndel) <- c('REF','ALT')
# Check reference for indels
isIndel[,'REF'] <- nchar(x@fix[,'REF']) > 1
# Check alternate for indels
checkALT <- function(x){
x <- stats::na.omit(x)
x <- x[ x != "<NON_REF>" ]
if( length(x) > 0 ){
max(nchar(x)) > 1
} else {
FALSE
}
}
isIndel[,'ALT'] <- unlist( lapply( strsplit(x@fix[,'ALT'], split=","), checkALT) )
mask <- rowSums(isIndel)
mask <- mask > 0
return(mask)
}
#' @rdname extract_gt
#'
#' @aliases extract.indels
#'
#' @param return.indels logical indicating whether to return indels or not
#'
#' @details
#' The function \strong{extract.indels} is used to remove indels from SNPs.
#' The function queries the 'REF' and 'ALT' columns of the 'fix' slot to see if any alleles are greater than one character in length.
#' When the parameter return.indels is FALSE only SNPs will be returned.
#' When the parameter return.indels is TRUE only indels will be returned.
#'
#'
#' @examples
#' data(vcfR_test)
#' getFIX(vcfR_test)
#' vcf <- extract.indels(vcfR_test)
#' getFIX(vcf)
#' vcf@fix[nrow(vcf@fix),'ALT'] <- ".,A"
#' vcf <- extract.indels(vcf)
#' getFIX(vcf)
#'
#' data(vcfR_test)
#' vcfR_test@fix[1,'ALT'] <- "<NON_REF>"
#' vcf <- extract.indels(vcfR_test)
#' getFIX(vcf)
#'
#' data(vcfR_test)
#' extract.haps(vcfR_test, unphased_as_NA = FALSE)
#' extract.haps(vcfR_test)
#'
#'
#' @export
extract.indels <- function(x, return.indels=FALSE){
#if(class(x) == 'chromR'){
if( inherits(x, 'chromR') ){
x <- x@vcf
}
#if(class(x) != "vcfR"){
if( !inherits(x, "vcfR") ){
stop("Unexpected class! Expecting an object of class vcfR or chromR.")
}
mask <- is.indel(x)
if(return.indels == FALSE){
x <- x[ !mask, , drop = FALSE ]
} else {
x <- x[ mask, , drop = FALSE ]
}
return(x)
}
#' @rdname extract_gt
#' @aliases extract.info
#'
#' @details
#' The function \strong{extract.info} is used to isolate elements from the INFO column of vcf data.
#'
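#' @examples
#' # Sketch: isolate the DP element of the INFO column as numerics:
#' data(vcfR_test)
#' extract.info(vcfR_test, element = "DP", as.numeric = TRUE)
#'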
#' @export
extract.info <- function(x, element, as.numeric=FALSE, mask=FALSE){
#if( class(x) == 'chromR' ){
if( inherits(x, 'chromR') ){
mask <- x@var.info$mask
x <- x@vcf
}
#if( class(x) != 'vcfR' ){
if( !inherits(x, 'vcfR') ){
stop("Expecting an object of class vcfR or chromR.")
}
# values <- unlist(
# lapply(strsplit(unlist(
# lapply(strsplit(x@fix[,'INFO'], split=";"),
# function(x){grep(paste("^", element, "=", sep=""), x, value=TRUE)})),
# split="="), function(x){x[2]})
# )
values <- strsplit(x@fix[,'INFO'], split=";")
values <- lapply(values, function(x){grep(paste("^", element, "=", sep=""), x, value=TRUE)})
values <- lapply(values, function(x){ unlist( strsplit(x, split="=") ) })
values <- lapply(values, function(x){x[2]})
values <- lapply(values, function(x){ if(is.null(x)){NA}else{x} })
values <- unlist(values)
if(as.numeric == TRUE){
values <- as.numeric(values)
}
# if( mask != FALSE & !is.null(mask) ){
# values <- values[x@var.info$mask]
# values <- values[mask]
# }
values
}
##### ##### ##### ##### #####
# EOF.
##### End of R/extract_gt.R #####
#'
#'
#' @title Create fasta format output
#' @rdname fasta_output
#' @aliases write.fasta
#'
#' @description Generate fasta format output
#'
#' @param x object of class chromR
#' @param file name for output file
# @param gt_split character which delimits alleles in genotype
#' @param rowlength number of characters each row should not exceed
#' @param tolower convert all characters to lowercase (T/F)
#' @param verbose should verbose output be generated (T/F)
#' @param APPEND should data be appended to an existing file (T/F)
#' @param depr logical (T/F), this function has been deprecated, set to FALSE to override.
#'
#'
#' @details
#' The function \strong{write.fasta} takes an object of class chromR and writes it to a fasta.gz (gzipped text) format file.
#' The sequence in the seq slot of the chromR object is used to fill in the invariant sites.
#' The parameter 'tolower', when set to TRUE, converts all the characters in the sequence to lower case.
#' This is important because some software, such as ape's DNAbin representation, expects sequences to be in lower case.
#'
#'
#' @export
#'
write.fasta <- function(x, file = "", rowlength=80, tolower=TRUE, verbose=TRUE, APPEND = FALSE, depr = TRUE){
#write.fasta <- function(x, file = "", gt_split = "|", rowlength=80, tolower=TRUE, verbose=TRUE, APPEND = FALSE){
if( depr ){
myMsg <- "The function write.fasta was deprecated in vcfR version 1.6.0. If you use this function and would like to advocate for its persistence, please contact the maintainer of vcfR. The maintainer can be contacted at maintainer('vcfR')"
stop(myMsg)
}
# if(class(x) != "chromR"){
if( !inherits(x, "chromR") ){
stop("Expected object of class chromR")
}
if(APPEND == FALSE){
if(file.exists(file)){
file.remove(file)
}
}
# haps <- extract_haps(x, gt_split = gt_split)
haps <- .extract_haps(x)
if(tolower == TRUE){
haps <- apply(haps, MARGIN=2, tolower)
}
for(i in 1:ncol(haps)){
seq <- as.character(x@seq)[1,]
# seq[x@var.info$POS] <- haps[,i]
seq[x@var.info$POS] <- haps[,i]
# invisible(.Call('vcfR_write_fasta', PACKAGE = 'vcfR', seq, colnames(haps)[i], file, rowlength, as.integer(verbose)))
invisible(.write_fasta(seq, colnames(haps)[i], file, rowlength, as.integer(verbose)))
}
#invisible(.Call('vcfR_write_fasta', PACKAGE = 'vcfR', seq, seqname, filename, rowlength, verbose))
}
##### End of R/fasta_output.R #####
#' @title Plot freq_peak object
#' @name freq_peak_plot
#' @rdname freq_peak_plot
#'
#' @description
#' Creates a visualization of allele balance data produced by \code{freq_peak()}: a dot plot of allele balance (frequency) by position, window peak indicators, and an optional marginal histogram.
#'
#' @param pos chromosomal position of variants
#' @param posUnits units ('bp', 'Kbp', 'Mbp', 'Gbp') for `pos` to be converted to in the main plot
#' @param ab1 matrix of allele balances for allele 1
#' @param ab2 matrix of allele balances for allele 2
#' @param fp1 freq_peak object for allele 1
#' @param fp2 freq_peak object for allele 2
#' @param mySamp sample indicator
#' @param col1 color 1
#' @param col2 color 2
#' @param alpha sets the transparency for dot plot (0-255)
#' @param main main plot title.
#' @param mhist logical indicating to include a marginal histogram
#' @param layout call layout
#' @param ... parameters passed on to other functions
#'
#' @details
#'
#' Creates a visualization of allele balance data consisting of a dot plot with position as the x-axis and frequency on the y-axis and an optional marginal histogram.
#' The only required information is a vector of chromosomal positions; on its own, however, this is probably not going to create an interesting plot.
#'
#'
#' @return An invisible NULL.
#'
#' @seealso
#' freq_peak,
#' peak_to_ploid
#'
#' @examples
#'
#' # An empty plot.
#' freq_peak_plot(pos=1:40)
#'
#' data(vcfR_example)
#' gt <- extract.gt(vcf)
#' hets <- is_het(gt)
#' # Censor non-heterozygous positions.
#' is.na(vcf@gt[,-1][!hets]) <- TRUE
#' # Extract allele depths.
#' ad <- extract.gt(vcf, element = "AD")
#' ad1 <- masplit(ad, record = 1)
#' ad2 <- masplit(ad, record = 2)
#' freq1 <- ad1/(ad1+ad2)
#' freq2 <- ad2/(ad1+ad2)
#' myPeaks1 <- freq_peak(freq1, getPOS(vcf))
#' is.na(myPeaks1$peaks[myPeaks1$counts < 20]) <- TRUE
#' myPeaks2 <- freq_peak(freq2, getPOS(vcf), lhs = FALSE)
#' is.na(myPeaks2$peaks[myPeaks2$counts < 20]) <- TRUE
#' freq_peak_plot(pos = getPOS(vcf), ab1 = freq1, ab2 = freq2, fp1 = myPeaks1, fp2=myPeaks2)
#'
#'
#'
#' @export
freq_peak_plot <- function(pos,
posUnits = 'bp',
ab1 = NULL,
ab2 = NULL,
fp1 = NULL,
fp2 = NULL,
mySamp = 1,
col1 = "#A6CEE3",
col2 = "#1F78B4",
alpha = 44,
main = NULL,
mhist = TRUE,
layout = TRUE,
...){
if( !inherits(fp1, "freq_peak") & !is.null(fp1) ){
msg <- "fp1 does not appear to be a freq_peak object"
stop(msg)
}
if( !inherits(fp2, "freq_peak") & !is.null(fp2) ){
msg <- "fp2 does not appear to be a freq_peak object"
stop(msg)
}
# Handle x-axis units.
if( posUnits == 'bp'){
# Don't need to do anything.
} else if(posUnits == 'Kbp'){
pos <- pos/1e3
fp1$wins[,'START_pos'] <- fp1$wins[,'START_pos']/1e3
fp1$wins[,'END_pos'] <- fp1$wins[,'END_pos']/1e3
fp2$wins[,'START_pos'] <- fp2$wins[,'START_pos']/1e3
fp2$wins[,'END_pos'] <- fp2$wins[,'END_pos']/1e3
} else if(posUnits == 'Mbp'){
pos <- pos/1e6
fp1$wins[,'START_pos'] <- fp1$wins[,'START_pos']/1e6
fp1$wins[,'END_pos'] <- fp1$wins[,'END_pos']/1e6
fp2$wins[,'START_pos'] <- fp2$wins[,'START_pos']/1e6
fp2$wins[,'END_pos'] <- fp2$wins[,'END_pos']/1e6
} else if(posUnits == 'Gbp'){
pos <- pos/1e9
fp1$wins[,'START_pos'] <- fp1$wins[,'START_pos']/1e9
fp1$wins[,'END_pos'] <- fp1$wins[,'END_pos']/1e9
fp2$wins[,'START_pos'] <- fp2$wins[,'START_pos']/1e9
fp2$wins[,'END_pos'] <- fp2$wins[,'END_pos']/1e9
}
xlabel <- paste('Position (', posUnits, ')', sep = '')
# Handle color
col1 <- grDevices::col2rgb(col1, alpha = FALSE)
col2 <- grDevices::col2rgb(col2, alpha = FALSE)
col1 <- grDevices::rgb(col1[1,1], col1[2,1], col1[3,1], maxColorValue=255)
col2 <- grDevices::rgb(col2[1,1], col2[2,1], col2[3,1], maxColorValue=255)
col1d <- paste(col1, alpha, sep = "")
col2d <- paste(col2, alpha, sep = "")
# Store original par.
orig_par <- graphics::par(no.readonly = TRUE)
# Determine plot geometry.
if( mhist == TRUE & layout == TRUE){
graphics::layout(matrix(1:2, nrow=1), widths = c(4,1))
}
graphics::par(mar=c(5,4,4,0))
# Initialize plot
plot( range(pos, na.rm = TRUE), c(0,1), ylim=c(0,1), type="n", yaxt='n',
main = "", xlab = xlabel, ylab = "Allele balance")
graphics::axis(side=2, at=c(0,0.25,0.333,0.5,0.666,0.75,1),
labels=c(0,'1/4','1/3','1/2','2/3','3/4',1), las=1)
graphics::abline(h=c(0.2,0.25,0.333,0.5,0.666,0.75,0.8), col=8)
# Add dot plots
if( !is.null(ab1) ){
graphics::points(pos, ab1[,mySamp], pch = 20, col= col1d)
}
if( !is.null(ab2) ){
graphics::points(pos, ab2[,mySamp], pch = 20, col= col2d)
}
# Add window peak indicators
if( !is.null(fp1) ){
graphics::segments(x0=fp1$wins[,'START_pos'], y0=fp1$peaks[,mySamp],
x1=fp1$wins[,'END_pos'], lwd=3)
}
if( !is.null(fp2) ){
graphics::segments(x0=fp2$wins[,'START_pos'], y0=fp2$peaks[,mySamp],
x1=fp2$wins[,'END_pos'], lwd=3)
}
# Title
if( !is.null(main) ){
graphics::title(main = main)
}
# if( !is.null(ab1) ){
# graphics::title(main = colnames(ab1[, mySamp, drop = F]))
# } else if( !is.null(ab2) ){
# graphics::title(main = colnames(ab2[, mySamp, drop = F]))
# }
# Marginal histogram
if( mhist == TRUE){
graphics::par(mar=c(5,1,4,2))
if( !is.null(fp1$bin_width) ){
hsbrk <- seq(0,1,by=fp1$bin_width)
} else if( !is.null(fp2$bin_width) ){
hsbrk <- seq(0,1,by=fp2$bin_width)
} else {
hsbrk <- seq(0,1,by=0.02)
}
# Ensure floating point comparisons don't exclude the boundary values.
hsbrk[1] <- -0.001
hsbrk[length(hsbrk)] <- 1.001
if( is.null(ab1) & is.null(ab2) ){
# Null marginal histogram
graphics::barplot(height=0.01, width=0.02, space = 0, horiz = T, add = FALSE, col="#000000", xlim = c(0,1.0))
}
if ( !is.null(ab1) & is.null(ab2) ){
bp1 <- graphics::hist(ab1[,mySamp], breaks = hsbrk, plot = FALSE)
graphics::barplot(height=bp1$counts, width=fp1$bin_width, space = 0, horiz = T, add = FALSE, col=col1)
}
if ( is.null(ab1) & !is.null(ab2) ){
bp2 <- graphics::hist(ab2[,mySamp], breaks = hsbrk, plot = FALSE)
graphics::barplot(height=bp2$counts, width=fp2$bin_width, space = 0, horiz = T, add = FALSE, col=col2)
}
if ( !is.null(ab1) & !is.null(ab2) ){
bp1 <- graphics::hist(ab1[,mySamp], breaks = hsbrk, plot = FALSE)
graphics::barplot(height=bp1$counts, width=fp1$bin_width, space = 0, horiz = T, add = FALSE, col=col1)
bp2 <- graphics::hist(ab2[,mySamp], breaks = hsbrk, plot = FALSE)
graphics::barplot(height=bp2$counts, width=fp2$bin_width, space = 0, horiz = T, add = TRUE, col=col2)
}
graphics::title(xlab="Count")
graphics::par(mar=c(5,4,4,2))
}
if(layout == TRUE){
graphics::par(orig_par)
}
return( invisible(NULL) )
}
##### End of R/freq_peak_plot.R #####
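# calc_jost() and calc_nei() are internal helpers used by genetic_diff().
# From per-population allele count tables, calc_jost() computes per-locus
# heterozygosities (Hs), the Chao estimator of Jost's D (Dest_Chao) and the
# diversity partition Da, Dg and Db; calc_nei() computes total and
# subpopulation heterozygosities (Ht, Hs), Gst, its maximum (Gstmax) and
# G'st (Gst/Gstmax).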
calc_jost <- function(x){
# x is a list of subpopulations.
# Each list element contains a data.frame with columns
# CHROM, POS, mask, n, Allele_counts, He, Ne.
nPop <- length(x)
nLoci <- nrow(x[[1]])
# A matrix for heterozygosities.
Hs <- matrix(nrow = nrow(x[[1]]), ncol = nPop)
colnames(Hs) <- paste("Hs", names(x), sep = "_")
# Find the maximum number of alleles.
# We'll use this so we can store data in matrices.
maxAlleles <- 0
for(j in 1:nPop){
tmp <- strsplit(as.character(x[[j]]$Allele_counts), split = ",")
tmp <- max(unlist(lapply(tmp, function(x){length(x)})))
if(tmp > maxAlleles){
maxAlleles <- tmp
}
}
# Hs is the heterozygosities for population j (created above).
# subPop.l is a list of matricies that hold allele counts for each population.
# Nj is the count or number of each allele in population j.
subPop.l <- vector(mode = 'list', length = nPop)
Nj <- matrix(nrow = nLoci, ncol = nPop)
for(j in 1:nPop){
subPop.l[[j]] <- matrix(0, nrow = nLoci, ncol = maxAlleles)
ps <- strsplit(as.character(x[[j]]$Allele_counts), split = ",")
lapply(as.list(1:nLoci), function(x){ subPop.l[[j]][x,1:length(ps[[x]])] <<- as.numeric(ps[[x]])})
Nj[,j] <- rowSums(subPop.l[[j]])
ps <- lapply(ps, function(x){as.numeric(x)/sum(as.numeric(x), na.rm = TRUE)})
ps <- lapply(ps , function(x){1- sum(x^2)})
Hs[,j] <- unlist(ps)
# Hs[,j] <- unlist( lapply(ps , function(x){1- sum(x^2)}) )
}
#
Dg <- lapply(subPop.l, function(x){sweep(x, MARGIN = 1, STATS = rowSums(x, na.rm = TRUE), FUN = "/")})
Dg <- Reduce('+', Dg)
# Dg <- Reduce('+', lapply(subPop.l, function(x){x/sum(x)}))
Dg <- Dg/nPop
Dg <- Dg^2
Dg <- 1/rowSums(Dg)
# Hs2 <- Hs^2
# Hs2[Hs2 < 0] <- 0
Ha <- rowMeans(Hs, na.rm = TRUE)
Da <- 1/(1-Ha)
Db <- Dg/Da
a <- matrix(0, nrow = nLoci, ncol = maxAlleles)
b <- matrix(0, nrow = nLoci, ncol = maxAlleles)
# Calculate a
sum1 <- matrix(0, nrow = nLoci, ncol = maxAlleles)
sum2 <- matrix(0, nrow = nLoci, ncol = maxAlleles)
for(j in 1:nPop){
tmp <- sweep(subPop.l[[j]], MARGIN = 1, STATS = Nj[,j], FUN = "/")
sum1 <- sum1 + tmp
sum2 <- sum2 + tmp^2
}
sum1 <- sum1^2
a <- (sum1 - sum2)/(nPop-1)
a <- rowSums(a)
# Calculate b
for(j in 1:nPop){
tmp <- subPop.l[[j]] * (subPop.l[[j]] - 1)
tmp[tmp<0] <- 0
myDenom <- Nj[,j] * (Nj[,j] - 1)
tmp <- sweep(tmp, MARGIN = 1, STATS = myDenom, FUN = "/")
b <- b + tmp
}
b <- rowSums(b)
Dest_Chao <- 1 - (a/b)
myRet <- data.frame(Hs)
myRet$a <- a
myRet$b <- b
myRet$Dest_Chao <- Dest_Chao
myRet$Da <- Da
myRet$Dg <- Dg
myRet$Db <- Db
return(myRet)
}
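# calc_jost() is an internal helper; it is typically reached through
# genetic_diff(vcf, pops, method = "jost"), which builds the per-population
# list from .gt_to_popsum() (see genetic_diff() below).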
calc_nei <- function(x1, x2){
# x1 is a data.frame for the total population.
# x2 is a list of subpopulations.
nPop <- length(x2)
ps <- strsplit(as.character(x1$Allele_counts), split = ",")
nAllele <- unlist(lapply(ps, function(x){ sum(as.numeric(x)) }))
ps <- lapply(ps, function(x){as.numeric(x)/sum(as.numeric(x), na.rm = TRUE)})
Ht <- unlist(lapply(ps , function(x){1- sum(x^2)}))
# nAllele <- x1$n
nAlleles <- matrix(nrow = length(nAllele), ncol = nPop)
Hs <- matrix(nrow = nrow(x2[[1]]), ncol = nPop)
colnames(Hs) <- paste("Hs", names(x2), sep = "_")
Htmax <- vector("character", length = nrow(Hs))
Hsize <- matrix(nrow=nrow(Hs), ncol=nPop)
colnames(Hsize) <- paste("n", names(x2), sep = "_")
for(i in 1:nPop){
Htmax <- paste(Htmax, as.character(x2[[i]]$Allele_counts), sep = ",")
ps <- strsplit(as.character(x2[[i]]$Allele_counts), split = ",")
nAlleles[,i] <- unlist(lapply(ps, function(x){ sum(as.numeric(x)) }))
Hsize[,i] <- unlist(lapply(ps, function(x){sum(as.numeric(x), na.rm = TRUE)}))
ps <- lapply(ps, function(x){as.numeric(x)/sum(as.numeric(x), na.rm = TRUE)})
ps <- lapply(ps , function(x){1- sum(x^2)})
Hs[,i] <- unlist(ps)
# nAlleles[,i] <- x2[[i]]$n
}
Htmax <- substring(Htmax, 2)
ps <- strsplit(Htmax, split = ",")
ps <- lapply(ps, function(x){as.numeric(x)/sum(as.numeric(x), na.rm = TRUE)})
ps <- lapply(ps , function(x){1- sum(x^2)})
Htmax <- unlist(ps)
# Gst <- (Ht - rowMeans(Hs))/Ht
Gst <- (Ht - rowSums(Hs * nAlleles)/nAllele)/Ht
# Gstmax <- (Htmax - rowMeans(Hs))/ Htmax
Gstmax <- (Htmax - rowSums(Hs * nAlleles)/nAllele)/ Htmax
Gprimest <- Gst/Gstmax
Hs <- cbind(Hs, Ht, Hsize, Gst, Htmax, Gstmax, Gprimest)
return(Hs)
}
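# calc_nei() is likewise internal; genetic_diff(vcf, pops, method = "nei")
# supplies x1 from .gt_to_popsum() on all samples and x2 as the list of
# per-population summaries.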
#' @title Genetic differentiation
#'
#' @name genetic_diff
#' @rdname genetic_diff
#' @aliases genetic_diff
#' @export
#'
#' @description
#' Calculate measures of genetic differentiation.
#'
#' @param vcf a vcfR object
#' @param pops factor indicating populations
#' @param method the method to measure differentiation
#'
#' @details Measures of genetic differentiation, or fixation indices, are commonly reported population genetic parameters.
#' This function reports genetic differentiation for all variants presented to it.
#'
#' The method \strong{nei} returns Nei's Gst as well as Hedrick's G'st, a correction for high allelism (Hedrick 2005).
#' Here it is calculated as in equation 2 from Hedrick (2005) with the exception that the heterozygosities are weighted by the number of alleles observed in each subpopulation.
#' This is similar to \code{hierfstat::pairwise.fst()} but by using the number of alleles instead of the number of individuals it avoids making an assumption about how many alleles are contributed by each individual.
#' G'st is calculated as in equation 4b from Hedrick (2005).
#' This method is based on heterozygosity, where all of the alleles in a population are used to calculate allele frequencies.
#' This may make it a good choice when the sample includes a mixture of ploidies.
#'
#' The method \strong{jost} returns Jost's D as a measure of differentiation.
#' This is calculated as in equation 13 from Jost (2008).
#' Examples are available at Jost's website: \url{http://www.loujost.com}.
#'
#' A nice review of Fst and some of its analogues can be found in Holsinger and Weir (2009).
#'
#' @seealso poppr.amova in \href{https://cran.r-project.org/package=poppr}{poppr}, amova in \href{https://cran.r-project.org/package=ade4}{ade4}, amova in \href{https://cran.r-project.org/package=pegas}{pegas}, \href{https://cran.r-project.org/package=hierfstat}{hierfstat}, \href{https://cran.r-project.org/package=DEMEtics}{DEMEtics}, and \href{https://cran.r-project.org/package=mmod}{mmod}.
#'
#'
#' @references
#' Hedrick, Philip W. "A standardized genetic differentiation measure." Evolution 59.8 (2005): 1633-1638.
#'
#' Holsinger, Kent E., and Bruce S. Weir. "Genetics in geographically structured populations: defining, estimating and interpreting FST." Nature Reviews Genetics 10.9 (2009): 639-650.
#'
#' Jost, Lou. "GST and its relatives do not measure differentiation." Molecular ecology 17.18 (2008): 4015-4026.
#'
#' Whitlock, Michael C. "G'ST and D do not replace FST." Molecular Ecology 20.6 (2011): 1083-1091.
#'
#'
#' @examples
#' data(vcfR_example)
#' myPops <- as.factor(rep(c('a','b'), each = 9))
#' myDiff <- genetic_diff(vcf, myPops, method = "nei")
#' colMeans(myDiff[,c(3:8,11)], na.rm = TRUE)
#' hist(myDiff$Gprimest, xlab = expression(italic("G'"["ST"])),
#' col='skyblue', breaks = seq(0, 1, by = 0.01))
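#'
#' # Jost's D for the same data (illustrative, using the objects created above):
#' myDiffJost <- genetic_diff(vcf, myPops, method = "jost")
#' head(myDiffJost)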
#'
#'
genetic_diff <- function(vcf, pops, method = "nei"){
# if( class(vcf) != "vcfR" ){
if( !inherits(vcf, "vcfR") ){
stop( paste("Expecting an object of class vcfR, instead received:", class(vcf)) )
}
# if( class(pops) != "factor" ){
if( !inherits(pops, "factor") ){
stop( paste("Expecting a factor, instead received:", class(pops)) )
}
method <- match.arg(method, choices = c('jost', 'nei'))
nPop <- length(levels(pops))
subpop.l <- vector('list', length = nPop)
names(subpop.l) <- levels(pops)
# Extract genotypes.
gt <- extract.gt(vcf, element = "GT")
# Assemble data for gt_to_popsum.
var_info <- as.data.frame(vcf@fix[,1:2, drop = FALSE])
if( is.null(var_info$mask) ){
var_info$mask <- TRUE
}
# Get allele counts for total and subs.
for(i in 1:nPop){
subpop.l[[i]] <- .gt_to_popsum(var_info = var_info,
gt = gt[,pops == levels(pops)[i], drop = FALSE]
)
}
if( method == "nei" ){
tot <- .gt_to_popsum(var_info = var_info, gt = gt)
gdiff <- calc_nei(tot, subpop.l)
} else if( method == "jost" ){
gdiff <- calc_jost(subpop.l)
# warning('This methd is not currently implemented')
}
gdiff <- as.data.frame(gdiff)
gdiff <- cbind(vcf@fix[,1:2, drop = FALSE], gdiff)
return(gdiff)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/genetic_diff.R
|
#' @title Genotype matrix functions
#' @name Genotype matrix functions
#'
#' @description Functions which modify a matrix or vector of genotypes.
#'
#' @rdname genotype_matrix
#' @aliases alleles2consensus
#'
#' @param x a matrix of alleles as genotypes (e.g., A/A, C/G, etc.)
#' @param sep a character which delimits the alleles in a genotype (/ or |)
#' @param NA_to_n logical indicating whether NAs should be scored as 'n'
#'
#' @details
#' The function \strong{alleles2consensus} converts genotypes to a single consensus allele using IUPAC ambiguity codes for heterozygotes.
#' Note that some functions, such as ape::seg.sites, do not recognize ambiguity characters (other than 'n').
#' This means that these functions, as well as functions that depend on them (e.g., pegas::tajima.test), will produce unexpected results.
#'
#' Missing data are handled in a number of steps.
#' First, when both alleles are missing ('.') the genotype is converted to NA.
#' Second, if only one of the alleles is missing ('.') the genotype is also converted to NA.
#' Lastly, NAs can optionally be converted to 'n' for compatibility with DNAbin objects.
#'
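#'
#' @examples
#' # A minimal sketch: a small, made-up matrix of genotypes.
#' gt <- matrix(c("A/A", "A/T", "C|G", "./."), ncol = 2)
#' alleles2consensus(gt)
#'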
#' @export
alleles2consensus <- function( x, sep = "/", NA_to_n = TRUE ){
# lookup <- cbind(paste(c('A','C','G','T', 'A','T','C','G', 'A','C','G','T', 'A','G','C','T'),
# c('A','C','G','T', 'T','A','G','C', 'C','A','T','G', 'G','A','T','C'),
# sep=sep),
# c('a','c','g','t', 'w','w','s','s', 'm','m','k','k', 'r','r','y','y'))
lookup1 <- cbind(paste(c('A','C','G','T', 'A','T','C','G', 'A','C','G','T', 'A','G','C','T'),
c('A','C','G','T', 'T','A','G','C', 'C','A','T','G', 'G','A','T','C'),
sep="/"),
c('a','c','g','t', 'w','w','s','s', 'm','m','k','k', 'r','r','y','y'))
lookup2 <- cbind(paste(c('A','C','G','T', 'A','T','C','G', 'A','C','G','T', 'A','G','C','T'),
c('A','C','G','T', 'T','A','G','C', 'C','A','T','G', 'G','A','T','C'),
sep="|"),
c('a','c','g','t', 'w','w','s','s', 'm','m','k','k', 'r','r','y','y'))
# Both alleles missing, set to NA.
# x <- gsub( paste(".", ".", sep=sep), NA, x, fixed=TRUE)
x <- gsub( paste(".", ".", sep="/"), NA, x, fixed=TRUE)
x <- gsub( paste(".", ".", sep="|"), NA, x, fixed=TRUE)
# One of the alleles missing set to NA.
x <- gsub( ".", NA, x, fixed=TRUE)
for(i in 1:nrow( lookup1 ))
{
x[ x == lookup1[i,1] ] <- lookup1[i,2]
x[ x == lookup2[i,1] ] <- lookup2[i,2]
}
if( NA_to_n == TRUE )
{
x[ is.na(x) ] <- 'n'
}
x
}
#' @rdname genotype_matrix
#' @aliases get.alleles
#'
#' @param x2 a vector of genotypes
#' @param split character passed to strsplit to split the genotype into alleles
#' @param na.rm logical indicating whether to remove NAs
#' @param as.numeric logical specifying whether to convert to a numeric
#'
#' @details
#' The function \strong{get.alleles} takes a vector of genotypes and returns the unique alleles.
#'
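#'
#' @examples
#' # A minimal sketch using a made-up vector of genotypes.
#' get.alleles(c("0/1", "1/1", "NA/NA"), na.rm = TRUE)
#'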
#' @export
get.alleles <- function( x2, split="/", na.rm = FALSE, as.numeric = FALSE ){
x2 <- unlist(strsplit(x2, split))
if(na.rm == TRUE){
x2 <- x2[ x2 != "NA" ]
}
if(as.numeric == TRUE){
x2 <- as.numeric(x2)
}
x2 <- unique(x2)
x2
}
##### ##### ##### ##### #####
# EOF.
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/genotype_matrix_functions.R
|
#' Get elements from the fixed region of a VCF file
#'
#' Both chromR objects and vcfR objects contain a region with fixed variables.
#' These accessors allow you to isolate these variables from these objects.
#'
#' @param x a vcfR or chromR object
#' @param getINFO logical specifying whether getFIX should return the INFO column
#'
#' @return a vector or data frame
#' @rdname getFIX
#' @export
#' @aliases getFIX,chromR-method getFIX,vcfR-method
#' @examples
#' library("vcfR")
#' data("vcfR_example")
#' data("chromR_example")
# ' chrom <- create.chromR('sc50', seq=dna, vcf=vcf, ann=gff)
#' getFIX(vcf) %>% head
#' getFIX(chrom) %>% head
#'
#' getCHROM(vcf) %>% head
#' getCHROM(chrom) %>% head
#'
#' getPOS(vcf) %>% head
#' getPOS(chrom) %>% head
#'
#' getID(vcf) %>% head
#' getID(chrom) %>% head
#'
#' getREF(vcf) %>% head
#' getREF(chrom) %>% head
#'
#' getALT(vcf) %>% head
#' getALT(chrom) %>% head
#'
#' getQUAL(vcf) %>% head
#' getQUAL(chrom) %>% head
#'
#' getFILTER(vcf) %>% head
#' getFILTER(chrom) %>% head
#'
#' getINFO(vcf) %>% head
#' getINFO(chrom) %>% head
#'
getFIX <- function(x, getINFO = FALSE) standardGeneric("getFIX")
#' @export
setGeneric("getFIX")
setMethod(
f = "getFIX",
signature(x = "chromR"),
definition = function(x, getINFO = FALSE) {
if(getINFO == TRUE){
return(x@vcf@fix)
} else {
return(x@vcf@fix[,-8])
}
})
setMethod(
f = "getFIX",
signature(x = "vcfR"),
definition = function(x, getINFO = FALSE) {
if(getINFO == TRUE){
return(x@fix)
} else {
return(x@fix[,-8])
}
})
#' @rdname getFIX
#' @export
#' @aliases getCHROM,chromR-method
#' getCHROM,vcfR-method
getCHROM <- function(x) standardGeneric("getCHROM")
#' @export
setGeneric("getCHROM")
setMethod(
f = "getCHROM",
signature(x = "chromR"),
definition = function(x) {
x@vcf@fix[,"CHROM"]
})
setMethod(
f = "getCHROM",
signature(x = "vcfR"),
definition = function(x) {
x@fix[,"CHROM"]
})
#' @rdname getFIX
#' @export
#' @aliases getPOS,chromR-method
#' getPOS,vcfR-method
getPOS <- function(x) standardGeneric("getPOS")
#' @export
setGeneric("getPOS")
setMethod(
f = "getPOS",
signature(x = "chromR"),
definition = function(x) {
as.integer(x@vcf@fix[,"POS"])
})
setMethod(
f = "getPOS",
signature(x = "vcfR"),
definition = function(x) {
as.integer(x@fix[,"POS"])
})
#' @rdname getFIX
#' @export
#' @aliases getQUAL,chromR-method
#' getQUAL,vcfR-method
getQUAL <- function(x) standardGeneric("getQUAL")
#' @export
setGeneric("getQUAL")
setMethod(
f = "getQUAL",
signature(x = "chromR"),
definition = function(x) {
as.numeric(x@vcf@fix[,"QUAL"])
})
setMethod(
f = "getQUAL",
signature(x = "vcfR"),
definition = function(x) {
as.numeric(x@fix[,"QUAL"])
})
#' @rdname getFIX
#' @export
#' @aliases getALT,chromR-method
#' getALT,vcfR-method
getALT <- function(x) standardGeneric("getALT")
#' @export
setGeneric("getALT")
setMethod(
f = "getALT",
signature(x = "chromR"),
definition = function(x) {
x@vcf@fix[,"ALT"]
})
setMethod(
f = "getALT",
signature(x = "vcfR"),
definition = function(x) {
x@fix[,"ALT"]
})
#' @rdname getFIX
#' @export
#' @aliases getREF,chromR-method
#' getREF,vcfR-method
getREF <- function(x) standardGeneric("getREF")
#' @export
setGeneric("getREF")
setMethod(
f = "getREF",
signature(x = "chromR"),
definition = function(x) {
x@vcf@fix[,"REF"]
})
setMethod(
f = "getREF",
signature(x = "vcfR"),
definition = function(x) {
x@fix[,"REF"]
})
#' @rdname getFIX
#' @export
#' @aliases getID,chromR-method
#' getID,vcfR-method
getID <- function(x) standardGeneric("getID")
#' @export
setGeneric("getID")
setMethod(
f = "getID",
signature(x = "chromR"),
definition = function(x) {
x@vcf@fix[,"ID"]
})
setMethod(
f = "getID",
signature(x = "vcfR"),
definition = function(x) {
x@fix[,"ID"]
})
#' @rdname getFIX
#' @export
#' @aliases getFILTER,chromR-method
#' getFILTER,vcfR-method
getFILTER <- function(x) standardGeneric("getFILTER")
#' @export
setGeneric("getFILTER")
setMethod(
f = "getFILTER",
signature(x = "chromR"),
definition = function(x) {
x@vcf@fix[,"FILTER"]
})
setMethod(
f = "getFILTER",
signature(x = "vcfR"),
definition = function(x) {
x@fix[,"FILTER"]
})
#' @rdname getFIX
#' @export
#' @aliases getINFO,chromR-method
#' getINFO,vcfR-method
getINFO <- function(x) standardGeneric("getINFO")
#' @export
setGeneric("getINFO")
setMethod(
f = "getINFO",
signature(x = "chromR"),
definition = function(x) {
x@vcf@fix[,"INFO"]
})
setMethod(
f = "getINFO",
signature(x = "vcfR"),
definition = function(x) {
x@fix[,"INFO"]
})
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/get_methods.R
|
thetas <- function(x){
# print(x)
rnum <- x[1]
anum <- x[2]
if(is.na(rnum)){return(c(NA,NA,NA))}
n <- rnum + anum
Si <- vector(mode="numeric", length=n)
Si[anum] <- 1
theta_w <- sum(1/1:(rnum+anum-1))^-1 * 1
theta_pi <- (2*anum*rnum)/(n*(n-1))
theta_h <- (2*1*anum^2)/(n*(n-1))
return(c(theta_pi, theta_w, theta_h))
}
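# A quick sketch: for a site with 6 reference and 4 alternate alleles,
# thetas(c(6, 4)) returns c(theta_pi, theta_w, theta_h) for that single site.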
#' @rdname pop_gen_sum
#' @export
#' @aliases gt2popsum
#'
#' @param deprecated logical specifying whether to run the function (FALSE) or present deprecation message (TRUE).
#'
#' @details
#' The function `gt2popsum` was deprecated in vcfR 1.8.0.
#' This was because it was written entirely in R and did not perform well.
#' Users should use `gt.to.popsum()` instead because it has similar
#' functionality but includes calls to C++ to increase its performance.
#'
gt2popsum <- function(x, deprecated = TRUE){
#if(class(x) != "chromR"){
if( !inherits(x, "chromR") ){
stop("Object is not of class chromR")
}
# stopifnot(class(x) == "chromR")
# gt <- extract.gt(x, element = "GT", mask = [email protected]$mask)
# stopifnot(length(grep("(1/1|0/0|0/1)", unique(as.vector(gt)))) == 3)
# gt <- [email protected]
#
if(deprecated == TRUE){
msg <- "This function has been deprecated since vcfR 1.8.0."
msg <- paste(msg, "If you would like to advocate to have this function included in future versions of vcfR please contact the maintainer.")
msg <- paste(msg, "Contact information for package maintainers can be found with maintainer('vcfR').")
stop(msg)
}
hwe <- function(x){
# Genotype counts
n11 <- x[1]
n1i <- x[2]
nii <- x[3]
n <- sum(n11, n1i, nii)
#
# Allele count and frequency
n1 <- 2*n11 + n1i
p1 <- n1/(2*n)
#
# Probability
num <- (factorial(n) * factorial(n1) * factorial(2*n - n1) * 2^(n1i))
den <- (factorial((n1 - n1i)/2) * factorial(n1i) * factorial(n-(n1+n1i)/2) * factorial(2*n))
prob <- num/den
#
# Disequilibrium
Da <- n11/n - (n1/(2*n))^2
# Chi-square
chisq <- ((n*Da)^2)/(n*p1^2) + ((-2*n*Da)^2)/(2*n*p1*(1-p1)) + ((n*Da)^2)/(n*(1-p1)^2)
p <- 1 - stats::pchisq(chisq, df=1)
return(c(prob, Da, chisq, p))
}
# tmp[gt == "0/0"] <- 0
# tmp[gt == "0/1"] <- 1
# tmp[gt == "1/0"] <- 1
# tmp[gt == "1/1"] <- 2
# gt <- extract.gt(x, element = "GT", mask = rep(TRUE, times=nrow([email protected])))
gt <- extract.gt(x, element = "GT")
tmp <- matrix(ncol=ncol(gt), nrow=nrow(gt))
tmp[gt == "0/0" | gt == "0|0"] <- 0
tmp[gt == "0/1" | gt == "0|1"] <- 1
tmp[gt == "1/0" | gt == "1|0"] <- 1
tmp[gt == "1/1" | gt == "1|1"] <- 2
#
gt <- tmp
rm(tmp)
#
mask <- [email protected]$mask
summ <- matrix(ncol=19, nrow=nrow(gt),
dimnames=list(c(),
c('n', 'RR','RA','AA','nAllele','nREF','nALT','Ho','He',
'hwe.prob', 'hwe.Da', 'hwe.chisq', 'hwe.p',
'Ne','theta_pi','theta_w','theta_h','tajimas_d', 'faywu_h'))
)
#
# Homozygous for reference allele.
summ[mask,'RR'] <- unlist(apply(gt[mask, , drop=FALSE], MARGIN=1,
function(x){sum(x==0, na.rm=TRUE)}))
# Heterozygote.
summ[mask,'RA'] <- unlist(apply(gt[mask, , drop=FALSE], MARGIN=1,
function(x){sum(x==1, na.rm=TRUE)}))
# Homozygous for alternate allele.
summ[mask,'AA'] <- unlist(apply(gt[mask, , drop=FALSE], MARGIN=1,
function(x){sum(x==2, na.rm=TRUE)}))
#
summ[mask, 'n'] <- unlist(apply(gt[mask, , drop=FALSE], MARGIN=1,
function(x){sum(!is.na(x))}))
#
  # nREF = 2 * (non-missing genotypes) - nALT; na.rm on the inner sum keeps
  # rows with missing genotypes from collapsing to NA (and then to zero).
  summ[mask,'nREF'] <- unlist(apply(gt[mask, , drop=FALSE], MARGIN=1,
                         function(x){2*length(stats::na.omit(x)) - sum(x, na.rm=TRUE)})
  )
summ[mask,'nALT'] <- rowSums(gt[mask, , drop=FALSE], na.rm=TRUE)
summ[,'nAllele'] <- summ[,'nREF']+summ[,'nALT']
#
# Observed heterozygosity
summ[mask,'Ho'] <- unlist(apply(gt[mask, , drop=FALSE], MARGIN=1,
function(x){sum(x==1, na.rm=TRUE)/length(stats::na.omit(x))}))
#
# Expected heterozygosity
summ[,'He'] <- 1 - ((summ[,'nREF']/summ[,'nAllele'])^2 + (summ[,'nALT']/summ[,'nAllele'])^2)
#
summ[,'Ne'] <- 1/(1-summ[,'He'])
#
# Hardy-Weinberg Disequilibrium
summ[mask,c('hwe.prob', 'hwe.Da', 'hwe.chisq', 'hwe.p')] <- t(apply(summ[mask,c('RR', 'RA', 'AA'), drop=FALSE], MARGIN=1, FUN=hwe))
#
# Thetas.
summ[,c('theta_pi','theta_w','theta_h')] <- t(apply(summ[,c('nREF','nALT'), drop=FALSE], MARGIN=1,thetas))
#summ[,7:9] <- t(apply(summ[,c('nREF','nALT'), drop=FALSE], MARGIN=1,thetas))
#
summ[,'tajimas_d'] <- summ[,'theta_pi'] - summ[,'theta_w']
summ[,'faywu_h'] <- summ[,'theta_pi'] - summ[,'theta_h']
# summ[,10] <- summ[,7] - summ[,8]
# summ[,11] <- summ[,7] - summ[,9]
#
# print(head(summ))
# [email protected] <- as.data.frame(summ)
[email protected] <- cbind([email protected], as.data.frame(summ))
return(x)
}
#' @title Population genetics summaries
#' @name Population genetics summaries
#' @rdname pop_gen_sum
#' @aliases gt.to.popsum
#'
#' @description Functions that make population genetics summaries
#'
#' @param x object of class chromR or vcfR
#'
#' @details
#' This function creates common population genetic summaries from either a chromR or vcfR object.
#' The default is to return a matrix containing allele counts, He, and Ne.
#' \strong{Allele_counts} is a comma delimited string of counts.
#' The first position is the count of reference alleles, the second position is the count of the first alternate allele, the third is the count of the second alternate allele, and so on.
#' \strong{He} is the gene diversity, or heterozygosity, of the population.
#' This is \eqn{1 - \sum x^{2}_{i}}, or the probability that two alleles sampled from the population are different, following Nei (1973).
#' \strong{Ne} is the effective number of alleles in the population.
#' This is \eqn{1/\sum x^{2}_{i}}, the reciprocal of the homozygosity, from Nei (1987) equation 8.17.
#'
#' Nei, M., 1973. Analysis of gene diversity in subdivided populations. Proceedings of the National Academy of Sciences, 70(12), pp.3321-3323.
#'
#' Nei, M., 1987. Molecular evolutionary genetics. Columbia University Press.
#'
#' @examples
#' data(vcfR_test)
#' # Check the genotypes.
#' extract.gt(vcfR_test)
#' # Summarize the genotypes.
#' gt.to.popsum(vcfR_test)
#'
#' @export
gt.to.popsum <- function(x){
# if(class(x) != "chromR" | class(x) != "vcfR"){stop("Object is not of class chromR or vcfR")}
if(!inherits(x, c('chromR', 'vcfR'))){
stop("Object is not of class chromR or vcfR")
}
#if(class(x) == "chromR"){
if( inherits(x, "chromR") ){
var.info <- [email protected]
# If summaries already exist, we'll remove them.
[email protected] <- [email protected][,grep("^n$|^Allele_counts$|^He$|^Ne$", colnames([email protected]), invert = TRUE)]
}
#if(class(x) == "vcfR"){
if( inherits(x, "vcfR") ){
# var.info <- matrix(nrow = nrow(x@fix), ncol = 5)
# colnames(var.info) <- c('mask', "n", "Allele_counts", "He", "Ne")
# var.info <- matrix(nrow = nrow(x@fix), ncol = 1)
# colnames(var.info) <- c('mask')
# var.info[,'mask', drop = FALSE] <- TRUE
var.info <- matrix(TRUE, ncol=1, nrow=nrow(x@fix), dimnames = list(NULL, 'mask'))
}
# Extract genotypes from vcf.gt
gt <- extract.gt(x, element="GT")
var.info <- .gt_to_popsum(var_info=var.info, gt=gt)
#if(class(x) == 'chromR'){
if( inherits(x, 'chromR') ){
[email protected] <- var.info
return(x)
} else {
return(var.info[,-1])
}
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/gt_to_popsum.R
|
#### Misc functions ####
#' @title Heatmap with barplots
#'
#' @name heatmap.bp
#' @rdname heatmap_bp
#' @aliases heatmap.bp
#' @export
#'
#' @description
#' Heatmap of a numeric matrix with barplots summarizing columns and rows.
#'
#'
#' @param x a numeric matrix.
#' @param cbarplot a logical indicating whether the columns should be summarized with a barplot.
#' @param rbarplot a logical indicating whether the rows should be summarized with a barplot.
#' @param legend a logical indicating whether a legend should be plotted.
#' @param clabels a logical indicating whether column labels should be included.
#' @param rlabels a logical indicating whether row labels should be included.
#' @param na.rm a logical indicating whether missing values should be removed.
#' @param scale character indicating if the values should be centered and scaled in either the row direction or the column direction, or none. The default is "none".
#' @param col.ramp vector of colors to be used for the color ramp.
#' @param ... additional arguments to be passed on.
#'
#' @details The function heatmap.bp creates a heatmap from a numeric matrix with optional barplots to summarize the rows and columns.
#'
#' @seealso \code{\link[stats]{heatmap}}, \code{\link[graphics]{image}}, heatmap.2 in \href{https://cran.r-project.org/package=gplots}{gplots}, \href{https://cran.r-project.org/package=pheatmap}{pheatmap}.
#'
#' @examples
#' library(vcfR)
#'
#' x <- as.matrix(mtcars)
#'
#' heatmap.bp(x)
#' heatmap.bp(x, scale="col")
#' # Use an alternate color ramp
#' heatmap.bp(x, col.ramp = colorRampPalette(c("red", "yellow", "#008000"))(100))
# library(viridis)
#' heatmap.bp(x)
#'
#' \dontrun{
#' heatmap.bp(x, cbarplot = FALSE, rbarplot = FALSE, legend = FALSE)
#' heatmap.bp(x, cbarplot = FALSE, rbarplot = TRUE, legend = FALSE)
#' heatmap.bp(x, cbarplot = FALSE, rbarplot = FALSE, legend = TRUE)
#' heatmap.bp(x, cbarplot = FALSE, rbarplot = TRUE, legend = TRUE)
#'
#' heatmap.bp(x, cbarplot = TRUE, rbarplot = FALSE, legend = FALSE)
#' heatmap.bp(x, cbarplot = TRUE, rbarplot = TRUE, legend = FALSE)
#' heatmap.bp(x, cbarplot = TRUE, rbarplot = FALSE, legend = TRUE)
#' heatmap.bp(x, cbarplot = TRUE, rbarplot = TRUE, legend = TRUE)
#' }
#'
#'
# data(vcfR_example)
# pinf_mt <- create_chrom('pinf_mt', seq=pinf_dna, vcf=pinf_vcf, ann=pinf_gff)
# pinf_mt <- masker(pinf_mt)
# pinf_gq <- extract.gt(pinf_mt, element="GQ", as.numeric=TRUE)
# heatmap.bp(pinf_gq)
# heatmap.bp(pinf_gq, scale="col")
# heatmap.bp(pinf_gq, col.ramp = colorRampPalette(c("red", "yellow", "#008000"))(100))
# heatmap.bp(pinf_gq, col.ramp = colorRampPalette(c("#D55E00", "#F0E442", "#009E73"))(100))
#' @importFrom viridisLite viridis
heatmap.bp <- function(x, cbarplot = TRUE, rbarplot = TRUE,
legend = TRUE, clabels = TRUE, rlabels = TRUE,
na.rm = TRUE, scale = c("row", "column", "none"),
# col.ramp = colorRampPalette(c("red", "yellow", "#008000"))(100),
# col.ramp = viridis::viridis(n = 100, alpha=1),
col.ramp = viridisLite::viridis(n = 100, alpha=1),
...){
# require(viridis)
# viridisLite::viridis(n=4)
# stopifnot(class(x) == 'matrix')
stopifnot(inherits(x, 'matrix'))
scale <- if(missing(scale))
"none"
else match.arg(scale)
#
# Determine the geometry of the plot.
nrows <- 1
ncols <- 1
if(cbarplot == TRUE){ nrows <- nrows + 1 }
if(rbarplot == TRUE){ ncols <- ncols + 1 }
if(legend == TRUE){ ncols <- ncols + 1 }
# Scale the data as appropriate.
if (scale == "row") {
x <- sweep(x, 1L, rowMeans(x, na.rm = na.rm), check.margin = FALSE)
sx <- apply(x, 1L, stats::sd, na.rm = na.rm)
x <- sweep(x, 1L, sx, "/", check.margin = FALSE)
}
else if (scale == "column") {
x <- sweep(x, 2L, colMeans(x, na.rm = na.rm), check.margin = FALSE)
sx <- apply(x, 2L, stats::sd, na.rm = na.rm)
x <- sweep(x, 2L, sx, "/", check.margin = FALSE)
}
# Handle column names.
if(is.null(colnames(x))){colnames(x) <- 1:ncol(x)}
if(is.null(rownames(x))){rownames(x) <- 1:nrow(x)}
# Get user's par(), ignoring the read-only variables.
userpar <- graphics::par(no.readonly = TRUE)
# Promise to reset graphics device
on.exit({
graphics::par(userpar)
})
# Set plot geometry.
if( cbarplot == FALSE & rbarplot == FALSE & legend == FALSE ){
# One panel.
}
if( cbarplot == FALSE & rbarplot == TRUE & legend == FALSE ){
graphics::layout(matrix(1:2, nrow=nrows, ncol=ncols, byrow = TRUE),
widths=c(4, 0.6))
}
if( cbarplot == FALSE & rbarplot == FALSE & legend == TRUE ){
graphics::layout(matrix(1:2, nrow=nrows, ncol=ncols, byrow = TRUE),
widths=c(4, 0.2))
}
if( cbarplot == FALSE & rbarplot == TRUE & legend == TRUE ){
graphics::layout(matrix(1:3, nrow=nrows, ncol=ncols, byrow = TRUE),
widths=c(4, 0.6, 0.2))
}
if( cbarplot == TRUE & rbarplot == FALSE & legend == FALSE ){
graphics::layout(matrix(1:2, nrow=nrows, ncol=ncols, byrow = TRUE),
heights=c(1, 4))
}
if( cbarplot == TRUE & rbarplot == TRUE & legend == FALSE ){
graphics::layout(matrix(1:4, nrow=nrows, ncol=ncols, byrow = TRUE),
widths=c(4, 0.6), heights=c(1, 4))
}
if( cbarplot == TRUE & rbarplot == FALSE & legend == TRUE ){
graphics::layout(matrix(1:4, nrow=nrows, ncol=ncols, byrow = TRUE),
widths=c(4, 0.2), heights=c(1, 4))
}
if( cbarplot == TRUE & rbarplot == TRUE & legend == TRUE ){
graphics::layout(matrix(1:6, nrow=nrows, ncol=ncols, byrow = TRUE),
widths=c(4, 0.6, 0.2), heights=c(1, 4))
}
# Global parameters.
graphics::par(mar=c(0,0,0,0))
graphics::par(oma=c(1,1,1,1))
if( cbarplot == TRUE ){
graphics::barplot(colSums(x, na.rm=na.rm),
space=0, border=NA, axes=FALSE,
names.arg="",
col=c("#808080", "#c0c0c0"), xaxs="i")
if(clabels == TRUE & scale == 'none'){
graphics::text(c(1:ncol(x))-0.5, 0.0, colnames(x), adj=c(0.0,0.5), srt=90)
} else if (clabels == TRUE & scale != 'none'){
graphics::text(c(1:ncol(x))-0.5, min(colSums(x, na.rm=na.rm), na.rm=na.rm), colnames(x), adj=c(0.0,0.5), srt=90)
}
if( rbarplot == TRUE ){
plot(1, 1, type="n", axes=FALSE, xlab="", ylab="")
}
if( legend == TRUE ){
plot(1, 1, type="n", axes=FALSE, xlab="", ylab="")
}
}
# Plot image matrix.
graphics::image(t(x), col = col.ramp,
axes=FALSE, frame.plot=TRUE)
# Row barplot.
if( rbarplot == TRUE ){
graphics::barplot(rowSums(x, na.rm=na.rm),
space=0, border=NA,
horiz=TRUE, axes=FALSE, names.arg="",
col=c("#808080", "#c0c0c0"),
yaxs="i")
if(rlabels == TRUE & scale == 'none'){
graphics::text(0, c(1:nrow(x))-0.5, rownames(x), adj=c(0.0, 0.5), srt=0)
} else if(rlabels == TRUE & scale != 'none'){
graphics::text(min(rowSums(x, na.rm=na.rm), na.rm=na.rm), c(1:nrow(x))-0.5, rownames(x), adj=c(0.0,0.5), srt=0)
}
}
# Legend.
if( legend == TRUE ){
mp <- graphics::barplot(rep(1, times=length(col.ramp)), space=0, border=NA, horiz = TRUE,
col = col.ramp, axes=FALSE)
# graphics::text(0.5, 5, "Low", col="#FFFFFF")
# graphics::text(0.5, 95, "High", col="#FFFFFF")
if ( mp[nrow(mp),1] - mp[1,1] >= 1 ){
graphics::text(0.5, mp[1,1], "Low", col="#FFFFFF", adj=c(0.5,0))
graphics::text(0.5, mp[nrow(mp),1], "High", col="#FFFFFF", adj=c(0.5,1))
}
}
invisible(NULL)
}
##### ##### ##### ##### #####
# EOF
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/heatmap_bp.R
|
#' @title Read and write vcf format files
#' @rdname io_vcfR
#' @name VCF input and output
#'
#' @export
#'
#' @description
#' Read files in the *.vcf structured text format, as well as the compressed *.vcf.gz format.
#' Write objects of class vcfR to *.vcf.gz.
#'
#' @param file A filename for a variant call format (vcf) file.
#' @param limit amount of memory (in bytes) not to exceed when reading in a file.
#' @param nrows integer specifying the maximum number of rows (variants) to read in.
#' @param skip integer specifying the number of rows (variants) to skip before beginning to read data.
#' @param cols vector of column numbers to extract from file.
#' @param x An object of class vcfR or chromR.
# @param vfile an output filename.
#' @param mask logical vector indicating rows to use.
#' @param APPEND logical indicating whether to append to existing vcf file or write a new file.
#' @param convertNA logical specifying whether to convert VCF missing data to NA.
#' @param checkFile logical specifying whether to test if the first line follows the VCF specification.
#' @param check_keys logical determining if \code{check_keys()} is called to test if INFO and FORMAT keys are unique.
#'
#' @param verbose report verbose progress.
#'
#' @details
#' The function \strong{read.vcfR} reads in files in *.vcf (text) and *.vcf.gz (gzipped text) format and returns an object of class vcfR.
#' The parameter 'limit' is an attempt to keep the user from trying to read in a file which contains more data than there is memory to hold.
#' Based on the dimensions of the data matrix, an estimate of how much memory is needed is made.
#' If this estimate exceeds the value of 'limit' an error is thrown and execution stops.
#' The user may increase this limit to any value, but is encouraged to compare that value to the amount of available physical memory.
#'
#'
#' It is possible to input part of a VCF file by using the parameters nrows, skip and cols.
#' The first eight columns (the fix region) are part of the definition and will always be included.
#' Any columns beyond eight are optional (the gt region).
#' You can specify which of these columns you would like to input by setting the cols parameter.
#' If you want a usable vcfR object you will want to always include column nine (the FORMAT column).
#' If you do not include column nine you may experience reduced functionality.
#'
#'
#' According to the VCF specification \strong{missing data} are encoded by a period (".").
#' Within the R language, missing data can be encoded as NA.
#' The parameter `convertNA` allows the user to either retain the VCF representation or the R representation of missing data.
#' Note that the conversion only takes place when the entire value can be determined to be missing.
#' For example, ".|.:48:8:51,51" would be retained because the missing genotype is accompanied by other delimited information.
#' In contrast, ".|." should be converted to NA when \code{convertNA = TRUE}.
#'
#'
#' If file begins with http://, https://, ftp://, or ftps:// it is interpreted as a link.
#' When this happens, file is split on the delimiter '/' and the last element is used as the filename.
#' A check is performed to determine if this file exists in the working directory.
#' If a local file is found it is used.
#' If a local file is not found the remote file is downloaded to the working directory and read in.
#'
#' The function \strong{write.vcf} takes an object of either class vcfR or chromR and writes the vcf data to a vcf.gz file (gzipped text).
#' If the parameter 'mask' is set to FALSE, the entire object is written to file.
#' If the parameter 'mask' is set to TRUE and the object is of class chromR (which has a mask slot), this mask is used to subset the data.
#' If an index is supplied as 'mask', then this index is used, and recycled as necessary, to subset the data.
#'
#' Because vcfR provides the opportunity to manipulate VCF data, it also provides the opportunity for the user to create invalid VCF files.
#' If there is a question regarding the validity of a file you have created one option is the \href{https://vcftools.github.io/perl_module.html#vcf-validator}{VCF validator} from VCF tools.
#'
#'
#' @return read.vcfR returns an object of class \code{\link{vcfR-class}}.
#' See the \strong{vignette:} \code{vignette('vcf_data')}.
#' The function write.vcf creates a gzipped VCF file.
#'
#' @seealso
# \code{\link[PopGenome]{readVCF}}
# \code{\link[pegas]{read.vcf}}
# \link[pegas]{read.vcf}
#'
#' CRAN:
#' \href{https://cran.r-project.org/package=pegas}{pegas}::read.vcf,
#' \href{https://cran.r-project.org/package=PopGenome}{PopGenome}::readVCF,
#' \href{https://cran.r-project.org/package=data.table}{data.table}::fread
#'
#' Bioconductor:
#' \href{https://www.bioconductor.org/packages/release/bioc/html/VariantAnnotation.html}{VariantAnnotation}::readVcf
#'
#' Use: browseVignettes('vcfR') to find examples.
#'
#'
#' @examples
#' data(vcfR_test)
#' vcfR_test
#' head(vcfR_test)
#' # CRAN requires developers to use a tempdir when writing to the filesystem.
#' # You may want to implement this example elsewhere.
#' orig_dir <- getwd()
#' temp_dir <- tempdir()
#' setwd( temp_dir )
#' write.vcf( vcfR_test, file = "vcfR_test.vcf.gz" )
#' vcf <- read.vcfR( file = "vcfR_test.vcf.gz", verbose = FALSE )
#' vcf
#' setwd( orig_dir )
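#'
#' # Sketches of further usage; file names here are hypothetical:
#' # Read only the first 100 variants and a subset of sample columns.
#' # vcf <- read.vcfR("myFile.vcf.gz", nrows = 100, cols = c(9, 10:12))
#' # Remote files are downloaded to the working directory and reused.
#' # vcf <- read.vcfR("https://example.com/myFile.vcf.gz")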
#'
#'
# ' @rdname io_vcfR
#' @aliases read.vcfR
#' @export
#'
read.vcfR <- function(file,
limit=1e7,
nrows = -1,
skip = 0,
cols = NULL,
convertNA = TRUE,
checkFile = TRUE,
check_keys = TRUE,
verbose = TRUE){
# require(memuse)
if( !is.character(file) ){
stop('The parameter file is expected to be a character.')
}
if( grepl('^http://|^https://|^ftp://|^ftps://', file) ){
    # We have a link instead of a file.
file_name <- unlist(strsplit(file, split = "/"))
file_name <- file_name[[length(file_name)]]
if(file.exists(file_name)){
message(paste("Local file", file_name, "found."))
message('Using this local copy instead of retrieving a remote copy.')
} else {
message(paste("Downloading remote file", file))
utils::download.file(url = file, destfile = file_name, quiet = FALSE)
message("File downloaded.")
message("It will probably be faster to use this local file in the future instead of re-downloading it.")
}
file <- file_name
}
# gzopen does not appear to deal well with tilde expansion.
if( grepl("^~", file) ){
file <- path.expand(file)
}
# Test that this is a VCF file.
if(checkFile == TRUE){
vcf <- scan(file=file, what = character(), nmax=1, sep="\n", quiet = TRUE, comment.char = "")
if(substr(vcf,start=1, stop=17) != "##fileformat=VCFv"){
msg <- paste("File:", file, "does not appear to be a VCF file.\n")
msg <- paste(msg, " First line of file:\n", file)
msg <- paste(msg, "\n")
msg <- paste(msg, " Should begin with:\n##fileformat=VCFv")
msg <- paste(msg, "\n")
stop(msg)
}
}
vcf <- new(Class="vcfR")
stats <- .vcf_stats_gz(file, nrows=nrows, skip = skip, verbose = as.integer(verbose) )
# stats should be a named vector containing "meta", "header_line", "variants", "columns", and "last_line".
  # They should have been initialized to zero.
if( stats['columns'] > 0 & stats['last_line'] > 0 & stats['columns'] != stats['last_line']){
msg <- paste("Your file appears to have", stats['columns'], "header elements")
msg <- paste(msg, "and", stats['last_line'], "columns in the body.\n")
msg <- paste(msg, "This should never happen!")
stop(msg)
}
if( stats['columns'] == 0 & stats['last_line'] > 0 ){
stats['columns'] <- stats['last_line']
}
if(verbose == TRUE){
cat("File attributes:")
cat("\n")
cat( paste(" meta lines:", stats['meta']) )
cat("\n")
cat( paste(" header_line:", stats['header_line']) )
cat("\n")
cat( paste(" variant count:", stats['variants']) )
cat("\n")
cat( paste(" column count:", stats['columns']) )
cat("\n")
}
utils::flush.console()
if( stats['meta'] < 0 ){
stop( paste("stats['meta'] less than zero:", stats['meta'], ", this should never happen.") )
}
if( stats['header_line'] < 0 ){
stop( paste("stats['header_line'] less than zero:", stats['header_line'], ", this should never happen.") )
}
if( stats['variants'] < 0 ){
stop( paste("stats['variants'] less than zero:", stats['variants'], ", this should never happen.") )
}
if( stats['columns'] < 0 ){
stop( paste("stats['columns'] less than zero:", stats['columns'], ", this should never happen.") )
}
if( is.null(cols) ){
cols <- 1:stats['columns']
}
# Make sure we include the first nine columns.
cols <- sort( unique( c(1:8, cols) ) )
# ram_est <- stats['variants'] * stats['columns'] * 8 + 248
ram_est <- memuse::howbig(stats['variants'], stats['columns'])
if(ram_est@size > limit){
message(paste("The number of variants in your file is:", prettyNum(stats['variants'], big.mark=",")))
message(paste("The number of samples in your file is:", prettyNum(stats['columns'] - 1, big.mark=",")))
message(paste("This will result in an object of approximately:", ram_est, "in size"))
stop("Object size limit exceeded")
}
# Read meta
vcf@meta <- .read_meta_gz(file, stats, as.numeric(verbose))
# Read body
body <- .read_body_gz(file, stats = stats,
nrows = nrows, skip = skip, cols = cols,
convertNA = as.numeric(convertNA), verbose = as.numeric(verbose))
vcf@fix <- body[ ,1:8, drop=FALSE ]
if( ncol(body) > 8 ){
vcf@gt <- body[ , -c(1:8), drop=FALSE ]
} else {
vcf@gt <- matrix("a", nrow=0, ncol=0)
}
# Check if keys in meta section are unique.
if( check_keys == TRUE ){
check_keys(vcf)
}
return(vcf)
}
#' @rdname io_vcfR
#' @export
#' @aliases write.vcf
#'
write.vcf <- function(x, file = "", mask = FALSE, APPEND = FALSE){
#if(class(x) == "chromR"){
if( inherits(x, "chromR") ){
if( mask == TRUE ){
is.na( x@vcf@fix[,'FILTER'] ) <- TRUE
x@vcf@fix[,'FILTER'][ [email protected][,'mask'] ] <- 'PASS'
}
x <- x@vcf
}
# if(class(x) != "vcfR"){
if( !inherits(x, "vcfR") ){
stop("Unexpected class! Expecting an object of class vcfR or chromR.")
}
# gzopen does not appear to deal well with tilde expansion.
file <- path.expand(file)
if(APPEND == FALSE){
gz <- gzfile(file, "w")
if( length(x@meta) > 0 ){
write(x@meta, gz)
}
header <- c(colnames(x@fix), colnames(x@gt))
header[1] <- "#CHROM"
header <- paste(header, collapse="\t")
write(header, gz)
close(gz)
}
if(mask == FALSE){
test <- .write_vcf_body(fix = x@fix, gt = x@gt, filename = file, mask = 0)
} else if (mask == TRUE){
test <- .write_vcf_body(fix = x@fix, gt = x@gt, filename = file, mask = 1)
}
}
##### ##### ##### ##### #####
# EOF
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/io_vcfR.R
|
#' @title Query genotypes for heterozygotes
#' @name is.het
#' @rdname is_het
#'
#' @description Query a matrix of genotypes for heterozygotes
#'
#'
#' @aliases is.het
#'
#' @param x a matrix of genotypes
#' @param na_is_false logical indicating whether missing data should be returned as NA (FALSE) or as FALSE (TRUE)
#'
#' @details
#'
#' This function was designed to identify heterozygous positions in a matrix of genotypes.
#' The matrix of genotypes can be created with \code{\link{extract.gt}}.
#' Because the goal was to identify heterozygotes it may be reasonable to ignore missing values by setting na_is_false to TRUE so that the resulting matrix will consist of only TRUE and FALSE.
#' In order to preserve missing data as missing, na_is_false can be set to FALSE, in which case NA is returned when at least one allele is missing.
#'
#'
#' @seealso
#' \code{\link{extract.gt}}
#'
#' @examples
#' data(vcfR_test)
#' gt <- extract.gt(vcfR_test)
#' hets <- is_het(gt)
#' # Censor non-heterozygous positions.
#' is.na(vcfR_test@gt[,-1][!hets]) <- TRUE
#'
#' @export
is.het <- function(x, na_is_false = TRUE){
# if( class(x) != 'matrix' ){
if( !inherits(x, 'matrix') ){
stop( paste( "Expecting a matrix, received a", class(x) ) )
}
test_gt <- function(x, na_is_false = na_is_false){
is.na( x[ x=="." ] ) <- TRUE
if( sum( is.na(x) ) > 0 & na_is_false == FALSE ){
return(NA)
} else {
x <- unique(x)
if( length(x) > 1 ){
return(TRUE)
} else {
return(FALSE)
}
}
}
proc_gt <- function(x, na_is_false = na_is_false){
x <- strsplit(x, split="[/\\|]")
x <- lapply(x, test_gt, na_is_false)
unlist(x)
}
x2 <- apply( x, MARGIN=2, proc_gt, na_is_false = na_is_false )
return(x2)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/is_het.R
|
#' @title Minor allele frequency
#' @name maf
#' @rdname maf
#'
#' @description
#' Calculate the minor (or other) allele frequency.
#'
#' @param x an object of class vcfR or chromR
#' @param element specify the allele number to return
#'
#' @details
#' The function maf() calculates the counts and frequency for an allele.
#' A variant may contain more than two alleles.
#' Rare alleles may be true rare alleles or the result of genotyping error.
#' In an attempt to address these competing issues we sort the alleles by their frequency and then report statistics based on their position.
#' For example, setting element=1 would return information about the major (most common) allele.
#' Setting element=2 returns information about the second allele.
#'
#' @return
#' a matrix of four columns.
#' The first column is the total number of alleles, the second is the number of NA genotypes, the third is the count, and the fourth is the frequency.
#'
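#' @examples
#' # An illustrative run on the example data shipped with vcfR.
#' data(vcfR_test)
#' head(maf(vcfR_test, element = 2))
#'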
#' @export
maf <- function(x, element=2){
get_maf <- function(x, element=2){
maf <- vector(mode='numeric', length=4)
names(maf) <- c('nAllele', 'NA', 'Count', 'Frequency')
x <- unlist(strsplit(x, split="[|/]"))
maf['NA'] <- sum( is.na(x) )
x <- table(x, useNA = "no")
x <- sort(x, decreasing = TRUE, na.last = TRUE)
# if( nrow(x) == 0 ){
if( length(x) == 0 ){
is.na(maf) <- TRUE
} else {
maf['nAllele'] <- sum(x)
if( !is.na(x[element]) ){
maf['Count'] <- x[element]
maf['Frequency'] <- x[element]/sum(x)
}
}
return(maf)
}
gt <- extract.gt(x, element = "GT")
maf <- t(apply(gt, MARGIN=1, get_maf, element=element))
return(maf)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/maf.R
|
.onUnload <- function (libpath) {
library.dynam.unload("vcfR", libpath)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/onUnload.R
|
#'
#'
#' @title Ordinate a sample's data
#' @name ordisample
#' @rdname ordisample
#'
#' @description
#' Ordinate information from a sample's GT region and INFO column.
#'
#' @param x an object of class vcfR or chromR.
#' @param sample a sample number where the first sample (column) is 2
#' @param distance metric to be used for ordination, options are in \code{\link[vegan]{vegdist}}
#' @param plot logical specifying whether to plot the ordination
#' @param alpha alpha channel (transparency) ranging from 0-255
#' @param verbose logical specifying whether to produce verbose output
#' @param ... parameters to be passed to child processes
#'
#'
#' @details
#' The INFO column of VCF data contains descriptors for each variant.
#' Each sample typically includes several descriptors of each variant in the GT region as well.
#' This can present an overwhelming amount of information.
#' Ordination is used in this function to reduce this complexity.
#'
#' The ordination procedure can be rather time consuming depending on how much data is used.
#' A good recommendation is to always start with a small subset of your full dataset and slowly scale up.
#' There are several steps in this function that attempt to eliminate variants or characters that have missing values in them.
#' This means that while starting with a small number is good, you will need enough variants that a substantial amount of the data makes it to the ordination step.
#' In the example I use 100 variants which appears to be a reasonable compromise.
#'
#' The data contained in VCF files can frequently contain a large fraction of missing data.
#' I advocate censoring data that does not meet quality control thresholds as missing, which compounds the problem.
#' An attempt is made to omit these missing data by querying the GT and INFO data for missingness and omitting the missing variants.
#' The data may also include characters (columns) that contain all missing values which are omitted as well.
#' When verbose == TRUE these omissions are reported as messages.
#'
#' Some data may contain multiple values.
#' For example, AD is the sequence depth for each observed allele.
#' In these instances the values are sorted and the largest value is used.
#'
#' Several of the steps of this ordination make distributional assumptions.
#' That is, they assume the data to be normally distributed.
#' There is no real reason to expect this assumption to hold for VCF data.
#' It has been my experience that this assumption is frequently violated with VCF data.
#' It is therefore suggested to use this function as an exploratory tool that may help inform other decisions.
#' The analyst may be able to address these issues through data transformation or other approaches beyond the scope of this function.
#' This function is intended to provide a rapid assessment of the data which may help determine if more elegant handling of the data may be required.
#' Interpretation of the results of this function needs to take into account that assumptions may have been violated.
#'
#'
#'
#' @return
#' A list consisting of two objects.
#' \itemize{
#' \item an object of class 'metaMDS' created by the function vegan::metaMDS
#' \item an object of class 'envfit' created by the function vegan::envfit
#' }
#' This list is returned invisibly.
#'
#' @seealso
#' \code{\link[vegan]{metaMDS}},
#' \code{\link[vegan]{vegdist}},
#' \code{\link[vegan]{monoMDS}},
#' \code{\link[MASS]{isoMDS}}
#'
#'
#' @examples
#' \dontrun{
# data(vcfR_test)
#'
#' # Example of normally distributed, random data.
#' set.seed(9)
#' x1 <- rnorm(500)
#' set.seed(99)
#' y1 <- rnorm(500)
#' plot(x1, y1, pch=20, col="#8B451388", main="Normal, random, bivariate data")
#'
#' data(vcfR_example)
#' ordisample(vcf[1:100,], sample = "P17777us22")
#'
#' vars <- 1:100
#' myOrd <- ordisample(vcf[vars,], sample = "P17777us22", plot = FALSE)
#' names(myOrd)
#' plot(myOrd$metaMDS, type = "n")
#' points(myOrd$metaMDS, display = "sites", pch=20, col="#8B451366")
#' text(myOrd$metaMDS, display = "spec", col="blue")
#' plot(myOrd$envfit, col = "#008000", add = TRUE)
#' head(myOrd$metaMDS$points)
#' myOrd$envfit
#' pairs(myOrd$data1)
#'
#' # Separate heterozygotes and homozygotes.
#' gt <- extract.gt(vcf)
#' hets <- is_het(gt, na_is_false = FALSE)
#' vcfhe <- vcf
#' vcfhe@gt[,-1][ !hets & !is.na(hets) ] <- NA
#' vcfho <- vcf
#' vcfho@gt[,-1][ hets & !is.na(hets) ] <- NA
#'
#' myOrdhe <- ordisample(vcfhe[vars,], sample = "P17777us22", plot = FALSE)
#' myOrdho <- ordisample(vcfho[vars,], sample = "P17777us22", plot = FALSE)
#' pairs(myOrdhe$data1)
#' pairs(myOrdho$data1)
#' hist(myOrdho$data1$PL, breaks = seq(0,9000, by=100), col="#8B4513")
#' }
#'
#'
#' @import vegan
#' @export
#'
ordisample <- function(x, sample, distance = "bray", plot = TRUE, alpha = 88, verbose = TRUE, ...){
# require(vegan, quietly = verbose)
#if( class(sample) == "character" ){
if( inherits(sample, "character") ){
sample <- grep( sample, colnames(x@gt), fixed = TRUE )
}
if( length(sample) !=1 ){
stop( "Invalid specification of 'sample.' Please use either an integer or a character." )
}
x <- x[,c(1,sample)]
# INFO data
myINFO <- INFO2df(x)
myMETA <- metaINFO2df(x)
for(i in 1:ncol(myINFO)){
tmp <- myINFO[,i]
#if( class(myINFO[,i]) == "character" ){
if( inherits(myINFO[,i], "character") ){
tmp <- strsplit(tmp, split = ",")
tmp <- lapply(tmp, function(x){x[1]})
tmp <- unlist(tmp)
if( myMETA$Type[i] == "Integer"){
tmp <- as.integer(tmp)
}
if( myMETA$Type[i] == "Float"){
tmp <- as.numeric(tmp)
}
}
myINFO[,i] <- tmp
}
# Get FORMAT fields
myFORMAT <- metaINFO2df(x, field = "FORMAT")
myFORMAT <- myFORMAT[grep("^GT$", myFORMAT$ID, invert = TRUE),]
myGT <- data.frame( matrix( nrow=nrow(x), ncol=nrow(myFORMAT) ) )
names(myGT) <- myFORMAT$ID
tmp <- extract.gt( x, element = colnames(myGT)[1] )
rownames(myGT) <- rownames(tmp)
for(i in 1:ncol(myGT)){
tmp <- extract.gt( x, element = colnames(myGT)[i] )
# First handle reserved words.
# if( colnames(myGT)[i] == "AD" ){
# tmp <- AD_frequency(tmp)
# AD_frequency may help distinguish heterozygotes
# from homozygotes but seems dubious here.
# }
if( colnames(myGT)[i] == "PL" ){
tmp <- AD_frequency(tmp, decreasing = 0)
} else {
tmp <- strsplit(tmp, split = ",")
# tmp <- lapply(tmp, function(x){x[1]})
tmp <- lapply(tmp, function(x){ sort(x, decreasing = TRUE)[1] })
tmp <- unlist(tmp)
if( myFORMAT$Type[i] == "Integer"){
tmp <- as.integer(tmp)
}
if( myFORMAT$Type[i] == "Float"){
tmp <- as.numeric(tmp)
}
}
myGT[,i] <- tmp
}
# Manage NAs.
# GT missingness.
badVars <- apply(myGT, MARGIN=1, function(x){ sum( is.na(x) ) > 0 })
if( verbose == TRUE & sum(badVars) > 0 ){
message(paste( sum(badVars), "variants containing missing values removed." ))
}
myGT <- myGT[!badVars,]
myINFO <- myINFO[!badVars,]
badChars <- apply(myINFO, MARGIN=2, function(x){ sum( is.na(x) ) == length(x) })
if( verbose == TRUE & sum(badChars) > 0 ){
message(paste("INFO character: ", names(myINFO)[badChars], " omitted due to missingness." ))
}
myINFO <- myINFO[,!badChars]
badChars <- apply(myINFO, MARGIN=2, function(x){ length( unique(x) ) == 1 })
if( verbose == TRUE & sum(badChars) > 0 ){
message(paste("INFO character: ", names(myINFO)[badChars], " omitted as monomorphic." ))
}
myINFO <- myINFO[,!badChars]
# INFO missingness.
badVars <- apply(myINFO, MARGIN=1, function(x){ sum( is.na(x) ) > 0 })
if( verbose == TRUE & sum(badVars) > 0 ){
message(paste( sum(badVars), "variants containing missing values removed." ))
}
myGT <- myGT[!badVars,]
myINFO <- myINFO[!badVars,]
# Ordination
mds1 <- vegan::metaMDS(myGT, distance = distance, k = 2)
# Covariates
ord.fit <- vegan::envfit(mds1, env=myINFO, perm=999, na.rm = TRUE)
# Plot
if( plot == TRUE ){
graphics::plot(mds1, type = "n")
graphics::points(mds1, display = "sites", cex = 0.8, pch=20, col=grDevices::rgb(139,69,19, alpha=alpha, maxColorValue = 255))
vegan::ordiellipse(mds1, groups = factor( rep(1, times=nrow(myGT)) ), kind = "sd", conf = 0.68, col = "#808080" )
graphics::text(mds1, display = "spec", col="blue", ...)
graphics::title( main = colnames(x@gt)[2] )
graphics::plot(ord.fit, choices = c(1,2), at = c(0,0),
axis = FALSE,
# p.max = 0.05,
col = "#008000", add = TRUE, ... )
}
invisible( list( metaMDS = mds1,
envfit = ord.fit,
data1 = myGT,
data2 = myINFO) )
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/ordisample.R
|
#' @title Pairwise genetic differentiation across populations
#'
#' @aliases pairwise_genetic_diff
#'
#' @description
#' \code{pairwise_genetic_diff} Calculate measures of genetic differentiation across all population pairs.
#'
#' @param vcf a vcfR object
#' @param pops factor indicating populations
#' @param method the method to measure differentiation
#'
#' @author Javier F. Tabima
#'
#' @return a data frame containing the pairwise population differentiation indices of interest across all pairs of populations in the population factor.
#'
#' @examples
#' data(vcfR_example)
#' pops <- as.factor(rep(c('a','b'), each = 9))
#' myDiff <- pairwise_genetic_diff(vcf, pops, method = "nei")
#' colMeans(myDiff[,c(4:ncol(myDiff))], na.rm = TRUE)
#' pops <- as.factor(rep(c('a','b','c'), each = 6))
#' myDiff <- pairwise_genetic_diff(vcf, pops, method = "nei")
#' colMeans(myDiff[,c(4:ncol(myDiff))], na.rm = TRUE)
#'
#' @seealso \code{\link{genetic_diff}} in \code{\link{vcfR}}
#'
#' @export
pairwise_genetic_diff <- function (vcf, pops, method="nei"){
var_info <- as.data.frame(vcf@fix[, 1:2, drop = FALSE])
if (is.null(var_info$mask)) {
var_info$mask <- TRUE
}
# Create a list of pairwise comparisons.
combination.df <- utils::combn(x = as.character(unique(pops)), m = 2, simplify = FALSE)
# Function to make pairwise comparisons
pwDiff <- function (x) {
# x contains the names of two populations to be compared.
# pops is a factor of population designations for each sample.
# vcf is the vcfR object.
# method is the method to be used in the comparison.
vcf.temp <- vcf
pop.tem <- as.factor(as.character(pops[pops %in% x]))
samples.temp <- colnames(vcf.temp@gt)[-1][pops %in% x]
vcf.temp@gt <- vcf.temp@gt[, c(TRUE, colnames(vcf.temp@gt)[-1] %in% samples.temp)]
temp.gendif <- genetic_diff(vcf.temp, pop.tem, method = method)
if (method == "nei") {
temp.genind <- temp.gendif[,colnames(temp.gendif) %in% c("Gst","Gprimest")]
colnames(temp.genind) <- paste0(colnames(temp.genind),"_", paste0(levels(pop.tem), collapse = "_"))
} else if (method == "jost") {
temp.genind <- temp.gendif[,colnames(temp.gendif) %in% c("Dest_Chao","Db")]
colnames(temp.genind) <- paste0(colnames(temp.genind),"_", paste0(levels(pop.tem), collapse = "_"))
}
return(temp.genind)
}
test <- lapply(combination.df, pwDiff)
pop.diff <- as.data.frame(do.call(cbind,test))
pop.diff <- cbind(var_info, pop.diff)
return(pop.diff)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/pairwise_genetic_diff.R
|
#' @title Convert allele balance peaks to ploidy
#' @name peak_to_ploid
#' @rdname peak_to_ploid
#'
#' @description
#' Converts allele balance data produced by \code{freq_peak()} to a copy number by assigning the allele balance data (frequencies) to its closest expected ratio.
#'
#' @param x an object produced by \code{freq_peak()}.
#'
#' @details
#' Converts allele balance data produced by \code{freq_peak()} to copy number.
#' See the examples section for a graphical representation of the expectations and the bins around them.
#' Once a copy number has been called, a distance from expectation (dfe) is calculated as a form of confidence.
#' The bins around different copy numbers are of different width, so the dfe is scaled by its respective bin width.
#' This results in a dfe that is 0 when the peak is exactly at our expectation (high confidence) and 1 when it is halfway between two expectations (low confidence).
#'
#'
#' @seealso \code{freq_peak}, \code{freq_peak_plot}
#'
#'
#' @return A list consisting of two matrices containing the calls and the distance from expectation (i.e., confidence).
#'
#'
#' @examples
#' # Thresholds.
#' plot(c(0.0, 1), c(0,1), type = "n", xaxt = "n", xlab = "Expectation", ylab = "Allele balance")
#' myCalls <- c(1/5, 1/4, 1/3, 1/2, 2/3, 3/4, 4/5)
#' axis(side = 1, at = myCalls, labels = c('1/5', '1/4', '1/3','1/2', '2/3', '3/4', '4/5'), las=2)
#' abline(v=myCalls)
#' abline(v=c(7/40, 9/40, 7/24, 5/12), lty=3, col ="#B22222")
#' abline(v=c(7/12, 17/24, 31/40, 33/40), lty=3, col ="#B22222")
#' text(x=7/40, y=0.1, labels = "7/40", srt = 90)
#' text(x=9/40, y=0.1, labels = "9/40", srt = 90)
#' text(x=7/24, y=0.1, labels = "7/24", srt = 90)
#' text(x=5/12, y=0.1, labels = "5/12", srt = 90)
#' text(x=7/12, y=0.1, labels = "7/12", srt = 90)
#' text(x=17/24, y=0.1, labels = "17/24", srt = 90)
#' text(x=31/40, y=0.1, labels = "31/40", srt = 90)
#' text(x=33/40, y=0.1, labels = "33/40", srt = 90)
#'
#' # Prepare data and visualize
#' data(vcfR_example)
#' gt <- extract.gt(vcf)
#' # Censor non-heterozygous positions.
#' hets <- is_het(gt)
#' is.na(vcf@gt[,-1][!hets]) <- TRUE
#' # Extract allele depths.
#' ad <- extract.gt(vcf, element = "AD")
#' ad1 <- masplit(ad, record = 1)
#' ad2 <- masplit(ad, record = 2)
#' freq1 <- ad1/(ad1+ad2)
#' freq2 <- ad2/(ad1+ad2)
#' myPeaks1 <- freq_peak(freq1, getPOS(vcf))
#' # Censor windows with fewer than 20 heterozygous positions
#' is.na(myPeaks1$peaks[myPeaks1$counts < 20]) <- TRUE
#' # Convert peaks to ploidy call
#' peak_to_ploid(myPeaks1)
#'
#'
#' @export
peak_to_ploid <- function(x){
# Validate our input
# if( class(x) != "list" | sum(names(x) == c("wins", "peaks", "counts")) != 3 ){
# msg <- "expecting a list with three elements named 'wins', 'peaks', and 'counts'"
# stop(msg)
# }
  if( !inherits(x, "freq_peak") ){
    msg <- "expecting a freq_peak object."
    stop(msg)
  }
# Initialize a result data structure.
# gmat <- matrix(nrow=nrow(x$peaks), ncol=ncol(x$peaks))
# colnames(gmat) <- colnames(x$peaks)
# rownames(gmat) <- rownames(x$peaks)
gmat <- x$peaks
# Allele balance expectation
abe <- matrix(ncol=ncol(gmat), nrow = nrow(gmat))
# Bin to ploidy
# critical <- 1/4 - (1/3-1/4)/2
# critical <- c(critical, 1/4 + (1/3-1/4)/2)
# critical <- c(critical, 1/2 - (2/3 - 1/2)/2)
# critical <- c(critical, 1/2 + (2/3 - 1/2)/2)
# critical <- c(critical, 3/4 - (1/3-1/4)/2)
# critical <- c(critical, 3/4 + (1/3-1/4)/2)
critical <- c( 9/40, 7/24, 5/12, 7/12, 17/24, 31/40)
abe[ gmat <= 1 & gmat > critical[6] ] <- 4/5
gmat[ gmat <= 1 & gmat > critical[6] ] <- 5
abe[ gmat <= critical[6] & gmat > critical[5] ] <- 3/4
gmat[ gmat <= critical[6] & gmat > critical[5] ] <- 4
abe[ gmat <= critical[5] & gmat > critical[4] ] <- 2/3
gmat[ gmat <= critical[5] & gmat > critical[4] ] <- 3
abe[ gmat <= critical[4] & gmat >= critical[3] ] <- 1/2
gmat[ gmat <= critical[4] & gmat >= critical[3] ] <- 2
abe[ gmat < critical[3] & gmat >= critical[2] ] <- 1/3
gmat[ gmat < critical[3] & gmat >= critical[2] ] <- 3
abe[ gmat < critical[2] & gmat >= critical[1] ] <- 1/4
gmat[ gmat < critical[2] & gmat >= critical[1] ] <- 4
abe[ gmat < critical[1] & gmat >= 0 ] <- 1/5
gmat[ gmat < critical[1] & gmat >= 0 ] <- 5
is.na(gmat[x$peaks < 7/40 & !is.na(x$peaks)]) <- TRUE
is.na(gmat[x$peaks > 33/40 & !is.na(x$peaks)]) <- TRUE
is.na(abe[x$peaks < 7/40 & !is.na(x$peaks)]) <- TRUE
is.na(abe[x$peaks > 33/40 & !is.na(x$peaks)]) <- TRUE
# Distance from expectation
dfe <- x$peaks - abe
# Scale dfe by bin width
dfe[ abe == 4/5 & !is.na(dfe) ] <- dfe[ abe == 4/5 & !is.na(abe) ] / (33/40 - 4/5)
dfe[ abe == 3/4 & !is.na(dfe) & dfe > 0 ] <- dfe[ abe == 3/4 & !is.na(dfe) & dfe > 0 ] / (31/40 - 3/4)
dfe[ abe == 3/4 & !is.na(dfe) & dfe < 0 ] <- dfe[ abe == 3/4 & !is.na(dfe) & dfe < 0 ] / (3/4 - 17/24)
dfe[ abe == 2/3 & !is.na(dfe) & dfe > 0 ] <- dfe[ abe == 2/3 & !is.na(dfe) & dfe > 0 ] / (17/24 - 2/3)
dfe[ abe == 2/3 & !is.na(dfe) & dfe < 0 ] <- dfe[ abe == 2/3 & !is.na(dfe) & dfe < 0 ] / (2/3 - 7/12)
dfe[ abe == 1/2 & !is.na(dfe) ] <- dfe[ abe == 1/2 & !is.na(dfe) ] / (7/12 - 1/2)
dfe[ abe == 1/3 & !is.na(dfe) & dfe > 0 ] <- dfe[ abe == 1/3 & !is.na(dfe) & dfe > 0 ] / (5/12 - 1/3)
dfe[ abe == 1/3 & !is.na(dfe) & dfe < 0 ] <- dfe[ abe == 1/3 & !is.na(dfe) & dfe < 0 ] / (1/3 - 7/24)
dfe[ abe == 1/4 & !is.na(dfe) & dfe > 0 ] <- dfe[ abe == 1/4 & !is.na(dfe) & dfe > 0 ] / (7/24 - 1/4)
dfe[ abe == 1/4 & !is.na(dfe) & dfe < 0 ] <- dfe[ abe == 1/4 & !is.na(dfe) & dfe < 0 ] / (1/4 - 9/40)
dfe[ abe == 1/5 & !is.na(dfe) ] <- dfe[ abe == 1/5 & !is.na(dfe) ] / (9/40 - 1/5)
#return(gmat)
list( calls = gmat,
#abe = abe,
dfe = dfe)
}
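# A minimal standalone sketch (not part of the package API; the helper name
# 'ab_to_copy' is hypothetical) illustrating how the critical values above
# bin a single allele balance value and how the distance from expectation
# (dfe) is scaled by the width between the expectation and the crossed bin edge.
ab_to_copy <- function(ab){
  expectation <- c(1/5, 1/4, 1/3, 1/2, 2/3, 3/4, 4/5)
  copy_number <- c(5, 4, 3, 2, 3, 4, 5)
  edges <- c(7/40, 9/40, 7/24, 5/12, 7/12, 17/24, 31/40, 33/40)
  bin <- findInterval(ab, edges)
  # Values below 7/40 or at/above 33/40 are censored, as in peak_to_ploid().
  if( bin == 0 || bin == length(edges) ){ return( c(call = NA, dfe = NA) ) }
  abe <- expectation[bin]
  half_width <- if( ab >= abe ) edges[bin + 1] - abe else abe - edges[bin]
  c(call = copy_number[bin], dfe = (ab - abe) / half_width)
}
ab_to_copy(0.26) # near 1/4, so call = 4 with a dfe of about 0.24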
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/peak_to_ploid.R
|
#### Import the pipe operator from magrittr ####
#' Pipe operator
#'
#' @name %>%
#' @rdname pipe
#' @keywords internal
#' @export
#' @importFrom magrittr %>%
#' @usage lhs \%>\% rhs
NULL
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/pipe.R
|
#' @title Process chromR object
#' @name Process chromR objects
#' @rdname proc_chromR
#' @description Functions which process chromR objects
#'
#' @param x object of class chromR
#' @param win.size integer indicating size for windowing processes
#' @param verbose logical indicating whether verbose output should be reported
# ' @param ... arguments to be passed to methods
#' @param max.win maximum window size
#' @param regex a regular expression to indicate nucleotides to be searched for
#'
#' @details
#' The function \strong{proc_chromR()} calls helper functions to process the data present in a chromR object into summary statistics.
#'
#' The function \strong{regex.win()} is used to generate coordinates to define rectangles to represent regions of the chromosome containing called nucleotides (acgtwsmkrybdhv).
#' It is then called a second time to generate coordinates to define rectangles to represent regions of uncalled nucleotides (n, but not gaps).
#'
#' The function \strong{gt2popsum} is called to create summaries of the variant data.
#'
#' The function \strong{var.win} is called to create windowized summaries of the chromR object.
#'
#' Each \strong{window} receives a \strong{name} and its coordinates.
#' Several attempts are made to name the windows appropriately.
#' First, the CHROM column of vcfR@fix is queried for a name.
#' Next, the label of the sequence is queried for a name.
#' Next, the first cell of the annotation matrix is queried.
#' If an appropriate name was not found in the above locations the chromR object's 'name' slot is used.
#' Note that the 'name' slot has a default value.
#' If this default value is not updated then all of your windows may receive the same name.
#'
#'
# ' @rdname proc_chromR
#' @export
#' @aliases proc.chromR
#'
proc.chromR <- function(x, win.size = 1e3, verbose=TRUE){
#stopifnot(class(x) == "chromR")
stopifnot( inherits(x, "chromR") )
if( is.null( x@seq ) & verbose == TRUE ){
warning( "seq slot is NULL." )
}
if( nrow(x@ann) == 0 & verbose == TRUE ){
warning( "annotation slot has no rows." )
}
#if(class(x@seq) == "DNAbin"){
if( inherits(x@seq, "DNAbin") ){
ptime <- system.time(x@seq.info$nuc.win <- seq2rects(x))
if(verbose==TRUE){
message("Nucleotide regions complete.")
message(paste(" elapsed time: ", round(ptime[3], digits=4)))
}
} else if ( is.null( x@seq ) & verbose == TRUE ){
warning( "seq slot is NULL, chromosome representation not made (seq2rects)." )
}
#if(class(x@seq) == "DNAbin"){
if( inherits(x@seq, "DNAbin") ){
ptime <- system.time(x@seq.info$N.win <- seq2rects(x, chars="n"))
if(verbose==TRUE){
message("N regions complete.")
message(paste(" elapsed time: ", round(ptime[3], digits=4)))
}
} else if ( is.null( x@seq ) & verbose == TRUE ){
warning( "seq slot is NULL, chromosome representation not made (seq2rects, chars=n)." )
}
# Population summary
if( nrow(x@vcf@gt) > 0 ){
if( nrow( x@vcf@gt[ x@var.info$mask, , drop = FALSE ] ) > 0 ){
ptime <- system.time(x <- gt.to.popsum(x))
if(verbose==TRUE){
message("Population summary complete.")
message(paste(" elapsed time: ", round(ptime[3], digits=4)))
}
}
}
# if(nrow(x@var.info[x@var.info$mask,])>0){
# Initialize windows.
if( length(x@len) > 0 ){
ptime <- system.time(x@win.info <- .window_init(window_size=win.size, max_bp=x@len))
# Name of windows based on chromosome name.
if( !is.na(x@var.info$CHROM[1]) ){
x@win.info <- cbind(rep(x@var.info$CHROM[1], times=nrow(x@win.info)), x@win.info)
names(x@win.info)[1] <- "CHROM"
} else if( !is.null(x@seq) ){
x@win.info <- cbind(rep( labels(x@seq)[1], times=nrow(x@win.info)), x@win.info)
names(x@win.info)[1] <- "CHROM"
} else if( nrow(x@ann) > 0 ){
x@win.info <- cbind(rep( x@ann[1,1], times=nrow(x@win.info)), x@win.info)
names(x@win.info)[1] <- "CHROM"
} else {
x@win.info <- cbind(rep( x@name, times=nrow(x@win.info)), x@win.info)
names(x@win.info)[1] <- "CHROM"
}
}
if(verbose==TRUE){
# print("window_init complete.")
# print(paste(" elapsed time: ", round(ptime[3], digits=4)))
message("window_init complete.")
message(paste(" elapsed time: ", round(ptime[3], digits=4)))
}
}
# }
#if(class(x@seq) == "DNAbin"){
if( inherits(x@seq, "DNAbin") ){
# if( nrow( x@vcf@gt[x@var.info$mask, , drop = FALSE ] ) > 0 ){
ptime <- system.time(x@win.info <- .windowize_fasta(x@win.info,
seq=as.character(x@seq)[1,]
))
if(verbose==TRUE){
# print("windowize_fasta complete.")
# print(paste(" elapsed time: ", round(ptime[3], digits=4)))
message("windowize_fasta complete.")
message(paste(" elapsed time: ", round(ptime[3], digits=4)))
}
# }
} else if ( is.null( x@seq ) & verbose == TRUE ){
warning( "seq slot is NULL, windowize_fasta not run." )
}
# Windowize annotations.
# if(nrow(x@var.info[x@var.info$mask,])>0){
if( nrow(x@ann) > 0 ){
#if( nrow( x@vcf@gt[x@var.info$mask, , drop = FALSE] ) > 0 ){
ptime <- system.time(x@win.info <- .windowize_annotations(x@win.info,
ann_starts=as.numeric(as.character(x@ann[,4])),
ann_ends=as.numeric(as.character(x@ann[,5])),
chrom_length=x@len)
)
if(verbose==TRUE){
# print("windowize_annotations complete.")
# print(paste(" elapsed time: ", round(ptime[3], digits=4)))
message("windowize_annotations complete.")
message(paste(" elapsed time: ", round(ptime[3], digits=4)))
}
#}
} else if ( nrow(x@ann) == 0 ){
if ( verbose == TRUE ){
warning( "ann slot has zero rows." )
}
if( nrow(x@win.info) > 0 ){
x@win.info$genic <- 0
}
}
# Windowize variants.
# if(nrow(x@var.info[x@var.info$mask,])>0){
if( nrow( x@vcf@fix[x@var.info$mask, , drop = FALSE ] ) > 0 ){
ptime <- system.time(x@win.info <- .windowize_variants(x@win.info, x@var.info[c('POS','mask')]))
if(verbose==TRUE){
# print("windowize_variants complete.")
# print(paste(" elapsed time: ", round(ptime[3], digits=4)))
message("windowize_variants complete.")
message(paste(" elapsed time: ", round(ptime[3], digits=4)))
}
} else {
if( nrow(x@win.info) > 0 ){
x@win.info$variants <- 0
}
}
return(x)
}
##### ##### seq.info functions #####
#' @rdname proc_chromR
#' @export
#' @aliases regex.win
#'
#acgt.win <- function(x, max.win=1000, regex="[acgtwsmkrybdhv]"){
regex.win <- function(x, max.win=1000, regex="[acgtwsmkrybdhv]"){
# A DNAbin will store in a list when the fasta contains
# multiple sequences, but as a matrix when the fasta
# only contains one sequence.
if(is.matrix(as.character(x@seq))){
seq <- as.character(x@seq)[1:length(x@seq)]
}
if(is.list(as.character(x@seq))){
seq <- as.character(x@seq)[[1]]
}
# Subset to nucleotides of interest.
seq <- grep(regex, seq, ignore.case=T, perl=TRUE)
if(length(seq) == 0){
return(matrix(NA, ncol=2))
}
#
bp.windows <- matrix(NA, ncol=2, nrow=max.win)
bp.windows[1,1] <- seq[1]
i <- 1
# Scroll through the sequence looking for
# gaps (nucleotides not in the regex).
# When you find them make a window.
# Sequences with no gaps will have no
# windows.
for(j in 2:length(seq)){
if(seq[j]-seq[j-1] > 1){
bp.windows[i,2] <- seq[j-1]
i <- i+1
bp.windows[i,1] <- seq[j]
}
}
bp.windows[i,2] <- seq[j]
if(i == 1){
# If there is one row we get an integer.
# We need a matrix.
bp.windows <- bp.windows[1:i,]
bp.windows <- matrix(bp.windows, ncol=2)
} else {
bp.windows <- bp.windows[1:i,]
}
# x@acgt.w <- bp.windows
# return(x)
return(bp.windows)
}
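# A minimal sketch (base R only, an illustration rather than package code) of
# the run-finding logic used above: runs of matched positions are broken
# wherever consecutive positions differ by more than one.
pos <- c(1, 2, 3, 7, 8, 12)      # positions that matched the regex
breaks <- which(diff(pos) > 1)   # indices where a run ends
starts <- pos[c(1, breaks + 1)]
ends <- pos[c(breaks, length(pos))]
cbind(starts, ends)              # rows are the runs: 1-3, 7-8, 12-12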
#' @rdname proc_chromR
#' @aliases seq2rects
#'
#' @description
#' Create representation of a sequence.
#' Beginning and end points are determined for stretches of nucleotides.
#' Stretches are determined by querying each nucleotide in a sequence to determine whether it is represented in the database of characters (chars).
#'
#'
#' @param chars a vector of characters to be used as a database for inclusion in rectangles
#' @param lower converts the sequence and database to lower case, making the search case insensitive
#'
#'
#' @export
#'
seq2rects <- function(x, chars="acgtwsmkrybdhv", lower=TRUE){
if(is.matrix(as.character(x@seq))){
# seq <- as.character(x@seq)[1:length(x@seq)]
seq <- as.character(x@seq)[1,]
}
if(lower == TRUE){
seq <- tolower(seq)
chars <- tolower(chars)
}
rects <- .seq_to_rects(seq, targets=chars)
return(rects)
}
#' @rdname proc_chromR
#' @export
#' @aliases var.win
#'
#var.win <- function(x, win.size=1e3){
var.win <- function(x, win.size=1e3){
# A DNAbin will store in a list when the fasta contains
# multiple sequences, but as a matrix when the fasta
# only contains one sequence.
# Convert DNAbin to string of chars.
#if(class(x@seq) == "DNAbin"){
if( inherits(x@seq, "DNAbin") ){
if(is.matrix(as.character(x@seq))){
seq <- as.character(x@seq)[1:length(x@seq)]
} else if(is.list(as.character(x@seq))){
seq <- as.character(x@seq)[[1]]
}
}
# Create a vector of 0 and 1 marking genic sites.
if(nrow(x@ann) > 0){
genic_sites <- rep(0, times=x@len)
genic_sites[unlist(apply(x@ann[, 4:5], MARGIN=1, function(x){seq(from=x[1], to=x[2], by=1)}))] <- 1
}
# Initialize data.frame of windows.
win.info <- seq(1, x@len, by=win.size)
win.info <- cbind(win.info, c(win.info[-1]-1, x@len))
win.info <- cbind(1:nrow(win.info), win.info)
win.info <- cbind(win.info, win.info[,3]-win.info[,2]+1)
# win.info <- cbind(win.info, matrix(ncol=7, nrow=nrow(win.info)))
# Declare a function to count nucleotide classes.
win.proc <- function(y, seq){
seq <- seq[y[2]:y[3]]
a <- length(grep("[aA]", seq, perl=TRUE))
c <- length(grep("[cC]", seq, perl=TRUE))
g <- length(grep("[gG]", seq, perl=TRUE))
t <- length(grep("[tT]", seq, perl=TRUE))
n <- length(grep("[nN]", seq, perl=TRUE))
o <- length(grep("[^aAcCgGtTnN]", seq, perl=TRUE))
count <- sum(x@var.info$POS[x@var.info$mask] >= y[2] & x@var.info$POS[x@var.info$mask] <= y[3])
genic <- sum(genic_sites[y[2]:y[3]])
#
c(a,c,g,t,n,o, count, genic)
}
# Implement function to count nucleotide classes.
#if(class(x@seq) == "DNAbin"){
if( inherits(x@seq, "DNAbin") ){
win.info <- cbind(win.info, t(apply(win.info, MARGIN=1, win.proc, seq=seq)))
win.info <- as.data.frame(win.info)
names(win.info) <- c('window','start','end','length','A','C','G','T','N','other','variants', 'genic')
}
win.info
}
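# A minimal sketch (toy values, not package data) of the window boundary
# construction used in var.win() above; the final window is truncated at
# the chromosome length.
win.size <- 1e3
len <- 2533
starts <- seq(1, len, by = win.size)
ends <- c(starts[-1] - 1, len)
cbind(window = seq_along(starts), start = starts, end = ends,
      length = ends - starts + 1) # windows 1-1000, 1001-2000, 2001-2533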
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/proc_chromR.R
|
#' @title Query the META section of VCF data
#' @name queryMETA
#' @rdname queryMETA
#'
#' @description
#' Query the META section of VCF data for information about acronyms.
#'
#' @param x an object of class vcfR or chromR.
#' @param element an acronym to search for in the META portion of the VCF data.
#' @param nice logical indicating whether to format the data in a 'nice' manner.
#'
#' @details
#' The META portion of VCF data defines acronyms that are used elsewhere in the data.
#' In order to better understand these acronyms, they should be referenced.
#' This function facilitates looking up acronyms and presenting their relevant information.
#' When 'element' is 'NULL' (the default), all acronyms from the META region are returned.
#' When 'element' is specified an attempt is made to return information about the provided element.
#' The function \code{grep} is used to perform this query.
#' If 'nice' is set to FALSE then the data is presented as it was in the file.
#' If 'nice' is set to TRUE the data is processed to make it appear more 'nice'.
#'
#' @seealso \code{\link[base]{grep}}, \code{\link[base]{regex}}.
#'
#' @examples
#' data(vcfR_test)
#' queryMETA(vcfR_test)
#' queryMETA(vcfR_test, element = "DP")
#'
#'
#' @export
#'
queryMETA <- function(x, element = NULL, nice = TRUE){
if( inherits(x, "chromR") ){
x <- x@vcfR
}
if( is.null(element) ){
ID <- grep("=<ID=", x@meta, value = TRUE)
ID <- grep("contig=<ID", ID, value = TRUE, invert = TRUE)
if( nice ){
ID <- nice(ID)
ID <- lapply( ID, function(x){ x[1] } )
ID <- unlist(ID)
}
myContigs <- grep("contig=<ID", x@meta)
if( length(myContigs) > 0 ){
ID <- c(ID, paste(length(myContigs), "contig=<IDs omitted from queryMETA"))
}
return(ID)
}
ID <- grep(element, x@meta, value = TRUE)
if( nice ){
ID <- nice(ID)
}
return(ID)
}
nice <- function(x){
x <- sub("^##", "", x)
x <- sub("<", "", x)
x <- sub(">$", "", x)
x <- sub("\"", "", x)
x <- sub("\"$", "", x)
x <- strsplit(x, split = ",")
return(x)
}
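# A brief illustration (an assumed, typical INFO line rather than one from the
# package tests) of the cleanup performed by the internal helper nice():
myMeta <- '##INFO=<ID=DP,Number=1,Type=Integer,Description="Total Depth">'
nice(myMeta)
# [[1]]
# [1] "INFO=ID=DP" "Number=1" "Type=Integer" "Description=Total Depth"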
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/queryMETA.R
|
#'
#' @title Query the gt slot
#' @name query.gt
#' @rdname query_gt
#'
#' @description Query the 'gt' slot of objects of class vcfR
#'
#'
#' @aliases is.polymorphic
#'
#' @param x an object of class vcfR
#' @param na.omit logical to omit missing data
#'
#' @details
#' The function \strong{is_polymorphic} returns a vector of logicals indicating whether a variant is polymorphic.
#' Only variable sites are reported in vcf files.
#' However, once someone manipulates a vcfR object, a site may become invariant.
#' For example, if a sample is removed it may result in a site becoming invariant.
#' This function queries the sites in a vcfR object and returns a vector of logicals (TRUE/FALSE) to indicate if they are actually variable.
#'
#' @seealso
#' \code{\link{extract.gt}}
#'
#'
#' @export
is.polymorphic <- function(x, na.omit=FALSE){
if( !inherits(x, "vcfR") ){
stop("Expected an object of class vcfR")
}
x <- extract.gt(x)
test.poly <- function(x, na.omit=na.omit){
if(na.omit == TRUE){
x <- stats::na.omit(x)
}
sum(x[1] == x[-1]) < (length(x) - 1)
}
apply(x, MARGIN=1, test.poly, na.omit=na.omit)
}
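# A minimal sketch of the polymorphism test above (illustration only):
# a site is polymorphic when at least one genotype differs from the first.
gts <- c("0/0", "0/1", "0/0")              # hypothetical genotypes at one site
sum(gts[1] == gts[-1]) < (length(gts) - 1) # TRUE: the site is polymorphic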
#' @rdname query_gt
#' @aliases is_biallelic
#'
#' @details
#' The function \strong{is_biallelic} returns a vector of logicals indicating whether a variant is biallelic.
#' Some analyses or downstream analyses only work with biallelic loci.
#' This function can help manage this.
#'
#' Note that \strong{is_biallelic} queries the ALT column in the fix slot to count alleles.
#' If you remove samples from the gt slot you may invalidate the information in the fix slot.
#' For example, if you remove the samples with the alternate allele you will make the position invariant and this function will provide inaccurate information.
#' So use caution if you've made many modifications to your data.
#'
#' @export
is.biallelic <- function(x){
# x <- as.character(x@fix$ALT)
x <- as.character(x@fix[,'ALT'])
x <- strsplit(x, split=",")
lapply(x, length) == 1
}
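# A minimal sketch of the allele counting logic above (illustration only):
# a variant is biallelic when its ALT field holds exactly one allele.
alt <- c("A", "A,T", NA)                        # hypothetical ALT values
sapply(strsplit(alt, split = ","), length) == 1 # TRUE FALSE TRUE
# Note that a missing ALT also has length one here.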
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/query_gt.R
|
#' @title Ranking variants within windows
#' @name Ranking
#' @rdname ranking
#'
#' @description
#' Rank variants within windows.
#'
#' @param x an object of class chromR or a data.frame containing...
# @param ends a vector containing the position of the end of each window
#' @param scores a vector of scores for each variant to be used to rank the data
#'
#'
#'
# ' @rdname ranking
#' @aliases rank.variants.chromR
#'
#' @export
rank.variants.chromR <- function(x, scores){
# if( class(x) != "chromR" ){
if( !inherits(x, "chromR") ){
stop("expecting object of class chromR or data.frame")
}
stopifnot(class(x@win.info) == 'data.frame')
stopifnot(is.vector(x@win.info$end))
stopifnot(class(x@win.info$end) == 'numeric')
# stopifnot(is.vector(x@win.info['end']))
# stopifnot(class(x@win.info['end']) == 'numeric')
stopifnot(is.vector(scores))
stopifnot(class(scores) == 'numeric')
if( nrow(x@vcf) != length(scores) ){
msg <- "The number of variants and scores do not match."
msg <- paste(msg, " nrow(x@vcf): ", nrow(x@vcf), sep = "")
msg <- paste(msg, ", length(scores): ", length(scores), sep = "")
stop(msg)
}
x@var.info <- .rank_variants(x@var.info, x@win.info$end, scores)
return(x)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/ranking.R
|
#' @title Create non-overlapping positions (POS) for VCF data
#' @name rePOS
#' @rdname rePOS
#'
#' @description
#' Creates a new, non-overlapping coordinate system so that variants from multiple chromosomes can be plotted together.
#'
#' @param x a vcfR object
#' @param lens a data.frame describing the reference
#' @param ret.lens logical specifying whether lens should be returned
#' @param buff an integer indicating buffer length
#'
#' @details
#' Each chromosome in a genome typically begins with position one.
#' This creates a problem when plotting the data associated with each chromosome because the information will overlap.
#' This function uses the information in the data.frame \code{lens} to create a new coordinate system where chromosomes do not overlap.
#'
#' The data.frame \strong{lens} should have a row for each chromosome and two columns.
#' The first column is the name of each chromosome as it appears in the vcfR object.
#' The second column is the length of each chromosome.
#'
#' The parameter \strong{buff} indicates the length of a buffer to put in between each chromosome.
#' This buffer may help distinguish chromosomes from one another.
#'
#' In order to create the new coordinates the \code{lens} data.frame is updated with the new start positions.
#' The parameter \strong{ret.lens} determines whether this updated version of \code{lens} is returned along with the new positions.
#'
#'
#' @return Either a vector of integers that represent the new coordinate system or a list containing the vector of integers and the lens data.frame.
#'
#'
#' @examples
#' # Create some VCF data.
#' data(vcfR_example)
#' vcf1 <- vcf[1:500, ]
#' vcf2 <- vcf[500:1500, ]
#' vcf3 <- vcf[1500:2533, ]
#' vcf1@fix[,'CHROM'] <- 'chrom1'
#' vcf2@fix[,'CHROM'] <- 'chrom2'
#' vcf3@fix[,'CHROM'] <- 'chrom3'
#' vcf2@fix[,'POS'] <- as.character(getPOS(vcf2) - 21900)
#' vcf3@fix[,'POS'] <- as.character(getPOS(vcf3) - 67900)
#' vcf <- rbind2(vcf1, vcf2)
#' vcf <- rbind2(vcf, vcf3)
#' rm(vcf1, vcf2, vcf3)
#'
#' # Create lens
#' lens <- data.frame(matrix(nrow=3, ncol=2))
#' lens[1,1] <- 'chrom1'
#' lens[2,1] <- 'chrom2'
#' lens[3,1] <- 'chrom3'
#' lens[1,2] <- 22000
#' lens[2,2] <- 47000
#' lens[3,2] <- 32089
#'
#' # Illustrate the issue.
#' dp <- extract.info(vcf, element="DP", as.numeric=TRUE)
#' plot(getPOS(vcf), dp, col=as.factor(getCHROM(vcf)))
#'
#' # Resolve the issue.
#' newPOS <- rePOS(vcf, lens)
#' dp <- extract.info(vcf, element="DP", as.numeric=TRUE)
#' plot(newPOS, dp, col=as.factor(getCHROM(vcf)))
#'
#' # Illustrate the buffer
#' newPOS <- rePOS(vcf, lens, buff=10000)
#' dp <- extract.info(vcf, element="DP", as.numeric=TRUE)
#' plot(newPOS, dp, col=as.factor(getCHROM(vcf)))
#'
#'
#' @export
rePOS <- function(x, lens, ret.lens = FALSE, buff = 0){
#if( class(x) == 'chromR' ){
if( inherits(x, 'chromR') ){
x <- x@vcfR
}
#if( class(x) != 'vcfR' ){
if( !inherits(x, 'vcfR') ){
msg <- paste('expecting a chromR or vcfR object, received instead a', class(x))
stop(msg)
}
# Check CHROM names.
# if( sum(lens[,1] %in% getCHROM(x)) != nrow(lens) ){
if( sum(unique(getCHROM(x)) %in% lens[,1]) != length(unique(getCHROM(x))) ){
# msg <- "chromosome (CHROM) names in vcfR object and lens do not appear to match"
msg <- "chromosome (CHROM) names in vcfR object is not the same or a subset of those in lens"
stop(msg)
}
# Update lens with new starts.
colnames(lens)[1:2] <- c('chrom', 'length')
lens$new_start <- 0
lens$new_start[1] <- 1
lens$mids <- 0
lens$mids[1] <- round(lens$length[1]/2)
for(i in 2:nrow(lens)){
lens$new_start[i] <- lens$new_start[i-1] + lens$length[i-1]
# Apply buffer
lens$new_start[i] <- lens$new_start[i] + buff
# Midpoint
lens$mids[i] <- lens$new_start[i] + lens$length[i]/2
}
# Apply new start to POS.
oldPOS <- getPOS(x)
oldCHROM <- getCHROM(x)
# table converts our character vector to a factor.
# This tends to sort things.
# We want to retain the order so let's recast this ourselves.
# oldCHROM <- factor(oldCHROM, levels=unique(oldCHROM))
oldCHROM <- factor(oldCHROM, levels=lens$chrom)
myM <- as.matrix(table(oldCHROM))
newPOS <- oldPOS + rep(lens$new_start, times=myM[,1]) - 1
if(ret.lens == TRUE){
return(list(newPOS=newPOS, lens=lens))
} else {
return(newPOS)
}
}
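# A minimal sketch (toy values, hypothetical chromosome names) of the
# cumulative offset computed by the loop above: each chromosome starts after
# the lengths of all preceding chromosomes plus any buffer.
lens <- data.frame(chrom = c("chr1", "chr2", "chr3"),
                   length = c(100, 150, 120))
buff <- 10
new_start <- c(1, utils::head(cumsum(lens$length + buff), -1) + 1)
new_start # 1 111 271: chr2 begins after chr1's 100 bp plus the 10 bp buffer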
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/rePOS.R
|
linkage <- function(x){
gt <- x@gt.m
mask <- x@mask
link.m <- matrix(ncol=8, nrow=nrow(gt)-1,
dimnames=list(c(), c('pos', 'len', 'bigD', 'Delta', 'Dprime', 'delta', 'd', 'Q'))
)
link <- function(x){
n1 <- length(!is.na(gt[x,]))
n2 <- length(!is.na(gt[x+1,]))
# print(x)
}
lapply(1:nrow(link.m), link)
# print(head(gt))
return(x)
}
##### ##### Set populations #####
# @rdname Chrom-methods
# @export
# @aliases set.pop1
#
# @param pop1 a numeric vector indicating the samples in population 1
#
set.pop1 <- function(x, pop1){
x@pop1 <- pop1
return(x)
}
# @rdname Chrom-methods
# @export
# @aliases set.pop2
#
# @param pop2 a numeric vector indicating the samples in population 2
#
set.pop2 <- function(x, pop2){
x@pop2 <- pop2
return(x)
}
##### ##### gt.m2sfs #####
gt.m2sfs <- function(x){
# cat(x@pop1)
# cat(length(x@pop1))
# cat('\n')
# if(length(x@pop1) < 1 | length(x@pop2) < 1 | is.na(x@pop1) | is.na(x@pop2)){
# cat("One or both populations are not defined\n")
# cat("Creating arbitrary populations\n")
# x@pop1 <- 1:floor(ncol(x@gt.m[,-1])/2)
# x@pop2 <- c(1+max(1:floor(ncol(x@gt.m[,-1])/2))):ncol(x@gt.m)
# }
pop1 <- x@gt.m[x@mask, x@pop1]
pop2 <- x@gt.m[x@mask, x@pop2]
sfs <- matrix(ncol=ncol(pop1)*2+1, nrow=ncol(pop2)*2+1)
sfs1d <- cbind(rowSums(pop2)+1, rowSums(pop1)+1)
sfs1d[,1] <- nrow(sfs) + 1 - sfs1d[,1]
apply(sfs1d, MARGIN=1, function(x){
if(is.na(sfs[x[1],x[2]])){
sfs[x[1],x[2]] <<- 1
}else{
sfs[x[1],x[2]] <<- sfs[x[1],x[2]] +1
}}
)
x@sfs <- sfs
return(x)
}
#### Graphic functions ####
plot.sfs <- function(x, log10=TRUE, ...){
sfs <- x@sfs
if(log10){sfs <- log10(sfs)}
#
graphics::layout(matrix(c(1,2), nrow=1), widths=c(4,1))
graphics::image(t(sfs)[,nrow(sfs):1], col=grDevices::rainbow(100, end=0.85),
axes=FALSE, frame.plot=TRUE)
# axis(side=1, at=seq(1,ncol(sfs), by=1)/ncol(sfs), labels=NA)
graphics::axis(side=1, at=seq(0, ncol(sfs)-1, by=1)/(ncol(sfs)-1), labels=NA)
graphics::axis(side=1, at=seq(0, ncol(sfs)-1, by=5)/(ncol(sfs)-1), labels=seq(0, ncol(sfs)-1, by=5), las=1, tcl=-0.7)
graphics::axis(side=3, at=seq(0, ncol(sfs)-1, by=1)/(ncol(sfs)-1), labels=NA)
graphics::axis(side=2, at=seq(0, nrow(sfs)-1, by=1)/(nrow(sfs)-1), labels=NA)
graphics::axis(side=2, at=seq(0, nrow(sfs)-1, by=5)/(nrow(sfs)-1), labels=seq(0, nrow(sfs)-1, by=5), las=1, tcl=-0.7)
graphics::axis(side=4, at=seq(0, nrow(sfs)-1, by=1)/(nrow(sfs)-1), labels=NA)
graphics::abline(a=0, b=1)
graphics::title(main=paste("SFS for", x@name))
#
graphics::par(mar=c(5,0,4,3))
graphics::barplot(height=rep(1, times=100), width=1, space=0,
col=grDevices::rainbow(100, start=0, end=0.85), border=NA, horiz=TRUE, axes=FALSE)
graphics::axis(side=4, at=seq(0,100, length.out=2),
labels=format(seq(0, 10^max(sfs, na.rm=TRUE), length.out=2), digits=3),
las=1)
graphics::axis(side=4, at=seq(1, max(sfs, na.rm=TRUE), by=1)*(100/max(sfs, na.rm=TRUE)),
labels=10^seq(1, max(sfs, na.rm=TRUE), by=1), las=1
)
#
graphics::par(mar=c(5,4,4,2), mfrow=c(1,1))
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/sandbox.R
|
#'
#'
#' @title Write summary tables from chromR objects
#' @rdname summary_tables
#' @export
#'
#' @description
#' Write summary tables from chromR objects.
#'
#' @param file A filename for the output file
#' @param x An object of class chromR
#' @param mask logical vector indicating rows to use
#' @param APPEND logical indicating whether to append to existing file (omitting the header) or write a new file
#'
#' @details
#' The function \strong{write.var.info} takes the variant information table from a chromR object and writes it as a comma delimited file.
#'
#' The function \strong{write.win.info} takes the window information table from a chromR object and writes it as a comma delimited file.
#'
#'
#' @seealso
#' \code{\link{write.vcf}}
#'
# CRAN:
# \href{https://cran.r-project.org/web/packages/pegas/index.html}{pegas}::read.vcf,
# \href{https://cran.r-project.org/web/packages/PopGenome/index.html}{PopGenome}::readVCF,
# \href{https://cran.r-project.org/web/packages/data.table/index.html}{data.table}::fread
#
# Bioconductor:
# \href{http://www.bioconductor.org/packages/release/bioc/html/VariantAnnotation.html}{VariantAnnotation}::readVcf
#
# ' @rdname Summary.tables
#' @aliases write.var.info
#'
#' @export
#'
write.var.info <- function(x, file = "", mask = FALSE, APPEND = FALSE){
#if(class(x) == "vcfR"){
if( inherits(x, "vcfR") ){
stop("Unexpected class! Detected class vcfR. This class does not contain variant summaries.")
}
#if(class(x) != "chromR"){
if( !inherits(x, "chromR") ){
stop("Unexpected class! Expecting an object of class chromR.")
}
if(mask == FALSE){
utils::write.table(x@var.info, file = file, append = APPEND, sep = ",", row.names = FALSE, col.names = !APPEND)
} else if(mask == TRUE){
utils::write.table(x@var.info[x@var.info$mask,], file = file, append = APPEND, sep = ",", row.names = FALSE, col.names = !APPEND)
}
}
#' @rdname summary_tables
#' @aliases write.win.info
#'
#' @export
#'
write.win.info <- function(x, file = "", APPEND = FALSE){
#if(class(x) == "vcfR"){
if( inherits(x, "vcfR") ){
stop("Unexpected class! Detected class vcfR. This class does not contain window summaries.")
}
#if(class(x) != "chromR"){
if( !inherits(x, "chromR") ){
stop("Unexpected class! Expecting an object of class chromR.")
}
utils::write.table(x@win.info, file = file, append = APPEND, sep = ",", row.names = FALSE, col.names = !APPEND)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/summary_tables.R
|
##### Method show #####
#'
#' @rdname vcfR-method
# ' @aliases show.vcfR,show,vcfR-method
#' @aliases show,vcfR-method
#' @title show
#'
#' @description
#' Display a summary of a vcfR object.
#'
#' @param object a vcfR object
#'
#' @details
#' The method \strong{show} is used to display an object.
#' Because VCF data are relatively large, the display has been abbreviated to a summary:
#' the number of samples, chromosomes (CHROMs) and variants, the object size, and the percentage of missing genotype data.
#'
setMethod(
f="show",
signature = "vcfR",
definition=function(object){
if( ncol(object@gt) > 1 ){
nsamp <- ncol(object@gt) - 1
} else {
nsamp <- 0
}
nchrom <- length( unique( getCHROM( object ) ) )
nvar <- nrow(object@fix)
nna <- sum( is.na(object@gt[,-1]) )
pna <- nna / c( nsamp * nvar )
cat("***** Object of Class vcfR *****\n")
cat( paste( nsamp, "samples\n") )
cat( paste( nchrom, "CHROMs\n") )
cat( paste( format(nvar, big.mark=","), "variants\n") )
cat( "Object size: ")
print(object.size(object), units="MB")
cat( paste( format(pna * 100, digits = 4), "percent missing data\n") )
cat("***** ***** *****\n")
# message("***** --*-- *****")
}
)
#### Method head ####
#'
#' @name head
#' @rdname vcfR-method
#' @title head
#' @aliases head,vcfR-method
#' @docType methods
#'
#' @param x object of class vcfR
#' @param n number of rows to print
#' @param maxchar maximum number of characters to print per line
#' @param ... arguments to be passed to other methods
#'
#' @description \strong{head} returns the first parts of an object of class vcfR.
#'
#' @details
#' The method \strong{head} is similar to show, but is more flexible.
#' The number of rows displayed is parameterized by the variable n,
#' and the maximum number of characters to print per line (row) is also parameterized.
#' In contrast to show, head includes a summary of the gt portion of the vcfR object.
#'
#'
setMethod(
f="head",
signature="vcfR",
definition=function (x, n=6, maxchar=80){
print("***** Object of class 'vcfR' *****")
print("***** Meta section *****")
if(length(x@meta) > n){
for( i in 1:n ){
if( nchar(x@meta[i]) <= maxchar ){
print(x@meta[i])
} else {
print( paste( substr(x@meta[i], 1, maxchar-12 ), "[Truncated]" ) )
}
}
print(paste("First", n, "rows."))
} else {
print(x@meta)
}
print("", quote=FALSE)
print("***** Fixed section *****")
if(nrow(x@fix) >= n){
print(x@fix[1:n,1:7])
} else {
print(x@fix[,1:7])
}
print("", quote=FALSE)
print("***** Genotype section *****")
if(nrow(x@gt) >= n){
if(ncol(x@gt)<6){
print(x@gt[1:n,])
} else {
print(x@gt[1:n,1:6])
print("First 6 columns only.")
}
} else {
if(ncol(x@gt)<6){
print(x@gt)
} else {
print(x@gt[,1:6])
}
}
print("", quote=FALSE)
print("Unique GT formats:")
if( nrow(x@gt) == 0 ){
print("No gt slot present")
print("", quote=FALSE)
} else {
print(unique(as.character(x@gt[,1])))
print("", quote=FALSE)
}
}
)
#### Method [] ####
#'
#' @rdname vcfR-method
#' @title Brackets
#' @description The brackets ('[]') subset objects of class vcfR
#' @details
#' The \strong{square brackets ([])} are used to subset objects of class vcfR.
#' Rows are subset by providing a vector i to specify which rows to use.
#' The columns in the fix slot will not be subset by j.
#' The parameter j is a vector used to subset the columns of the gt slot.
#' Note that it is essential to include the first column here (FORMAT) or downstream processes will encounter trouble.
#'
#' The \strong{samples} parameter allows another way to select samples.
#' Because the first column of the gt section is the FORMAT column, you typically need to include that column, and sample numbers therefore begin at two.
#' Use of the samples parameter allows you to select columns by a vector of numerics, logicals or characters.
#' When numerics are used the samples can be selected starting at one.
#' The function will then add one to this vector and prepend the value one so that the desired samples and the FORMAT column are selected.
#' When a vector of characters is used it should contain the desired sample names.
#' The function will add the FORMAT column if it is not the first element.
#' When a vector of logicals is used a TRUE will be added to the vector to ensure the FORMAT column is selected.
#' Note that specification of samples will override specification of j.
#'
#'
# @export
# @aliases []
#'
#' @aliases [,vcfR-method
#'
#' @param i vector of rows (variants) to include
#' @param j vector of columns (samples) to include
#' @param samples vector (numeric, character or logical) specifying samples, see details
#' @param drop delete the dimensions of an array which only has one level
#'
setMethod(
f= "[",
signature(x = "vcfR"),
# signature(x = "vcfR", i = "ANY", j = "ANY"),
# signature(x = "vcfR", i = "ANY", j = "ANY", samples = "ANY"),
definition=function(x, i, j, samples = NULL, ..., drop){
# definition=function(x, i, j, ..., drop){
if( !is.null(samples) ){
if( inherits(samples, what = c("numeric", "integer") ) ){
samples <- samples + 1
j <- c(1, samples)
} else if( inherits(samples, what = "character") ){
if( samples[1] != "FORMAT" ){
j <- c("FORMAT", samples)
} else {
j <- samples
}
} else if( inherits(samples, what = "logical") ){
j <- c(TRUE, samples)
} else {
stop(paste("samples specified, expecting a numeric, character or logical but received", class(samples)))
}
}
if(nrow(x@gt) == nrow(x@fix)){
x@gt <- x@gt[ i, j, drop = FALSE ]
} else if (nrow(x@gt) == 0){
# Do nothing.
} else {
msg <- paste("The fix slot has", nrow(x@fix), "rows while the gt slot has", nrow(x@gt), "rows, this should never happen.")
stop(msg)
}
x@fix <- x@fix[ i, , drop = FALSE ]
if(nrow(x@gt) > 0){
if(colnames(x@gt)[1] != 'FORMAT'){
warning("You have chosen to omit the FORMAT column, this is typically undesireable.")
}
}
return(x)
}
)
setGeneric("plot")
#### Method plot ####
#'
#' @rdname vcfR-method
#' @aliases plot,vcfR-method
#'
#' @title plot.vcfR
#' @description The \strong{plot} method visualizes objects of class vcfR
# @export
# ' @aliases plot.vcfR
# ' @aliases vcfR,vcfR-method
#'
#' @param y not used
#'
#' @details
#' The \strong{plot} method generates a histogram from data found in the 'QUAL' column from the 'fix' slot.
#'
setMethod(
f="plot",
signature= "vcfR",
definition=function(x, y, ...){
x <- as.numeric(x@fix[,'QUAL'])
graphics::hist(x, col=5, main='Histogram of qualities', xlab='QUAL')
graphics::rug(x)
}
)
##### ##### ##### ##### #####
#
# rbind
#
##### ##### ##### ##### #####
setMethod("rbind",
signature( "vcfR" ),
function (..., deparse.level = 0)
{
## store arguments
dots <- list(...)
## extract arguments which are vcfR objects
myList <- dots[sapply(dots, inherits, "vcfR")]
if(!all(sapply(myList, class)=="vcfR")) stop("some objects are not vcfR objects")
## keep the rest in 'dots'
dots <- dots[!sapply(dots, inherits, "vcfR")]
# Initialize
x <- myList[[1]]
# Implement
x@fix <- do.call( rbind, lapply( myList, function(x){ x@fix } ) )
x@gt <- do.call( rbind, lapply( myList, function(x){ x@gt } ) )
return(x)
}
)
#' @rdname vcfR-method
#' @aliases rbind2.vcfR
#'
setMethod("rbind2",
signature(x = "vcfR", y = "missing"),
function (x, y, ...)
{
# message("y is missing.")
return(x)
}
)
#' @rdname vcfR-method
#' @aliases rbind2.vcfR
#'
setMethod("rbind2",
signature(x = "vcfR", y = "ANY"),
function (x, y, ...)
{
# message("y is ANY.")
return(x)
}
)
#setGeneric("rbind2")
#' @rdname vcfR-method
#' @aliases rbind2.vcfR
#' @export
#'
setMethod("rbind2",
signature( x="vcfR", y="vcfR" ),
function (x, y, ...)
{
# message("rbind2.vcfR")
# browser()
x@fix <- rbind( x@fix, y@fix )
x@gt <- rbind( x@gt, y@gt )
return(x)
}
)
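# A brief usage sketch of rbind2() on vcfR objects (shown commented because it
# requires the vcfR_example dataset used elsewhere in this package):
# data(vcfR_example)
# vcf1 <- vcf[1:500, ]
# vcf2 <- vcf[501:1000, ]
# vcf12 <- rbind2(vcf1, vcf2)
# nrow(vcf12) # 1000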
##### ##### ##### ##### #####
#
# nrow
#
##### ##### ##### ##### #####
#' @rdname vcfR-method
#' @aliases dim.vcfR
#' @export
#'
setMethod("dim",
signature(x = "vcfR"),
function (x)
{
x <- c( nrow(x@fix), ncol(x@fix), ncol(x@gt) )
names(x) <- c( 'variants', 'fix_cols', 'gt_cols')
return(x)
}
)
#' @rdname vcfR-method
#' @aliases nrow.vcfR
#' @export
#'
setMethod("nrow",
signature(x = "vcfR"),
function (x)
{
rows <- nrow(x@fix)
return(rows)
}
)
##### ##### ##### ##### #####
# EOF.
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR-method.R
|
#' @keywords internal
"_PACKAGE"
## usethis namespace: start
## usethis namespace: end
NULL
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR-package.R
|
# vcf.R.
#' Variant call format files processed with vcfR.
#'
#' vcfR provides a suite of tools for input and output of variant call format (VCF) files, manipulation of their content and visualization.
#'
#' @details
#'
#'
#' \strong{File input and output} is facilitated with the functions \code{read.vcfR} and \code{write.vcf}.
#' Input of vcf format data results in an S4 \code{\link{vcfR-class}} object.
# ' Objects of class vcfR can be manipulated with \code{\link{vcfR-method}} and \code{\link{extract.gt}}.
#' Objects of class vcfR can be manipulated with \link[vcfR:vcfR-method]{vcfR-method} and \code{extract.gt}.
#' Contents of the vcfR object can be visualized with the \code{\link{plot}} method.
#' More complex visualizations can be created using a series of functions.
#' See \code{vignette(topic="sequence_coverage")} for an example.
#' Once manipulations are complete the object may be written to a *.vcf.gz format file using \code{write.vcf} or exported to objects supported by other R packages with \code{vcfR2genind} or \code{vcfR2loci}.
#'
#'
#' More complex visualization can be accomplished by converting a vcfR object to a \code{\link{chromR-class}} object.
#' An example exists on the \code{create.chromR} man page.
#'
#'
#'
#' A \strong{complete list of functions} can be displayed with: library(help = vcfR).
#'
#' \strong{Vignettes} (documentation) can be listed with: \code{browseVignettes('vcfR')}.
#'
#'
#' Several example \strong{datasets} are included in vcfR.
#' \strong{vcfR_test} comes from the VCF specification and provides a vcfR object with a diversity of examples in a small dataset.
#' \strong{vcfR_example} is a subset of the pinfsc50 dataset that includes VCF, GFF and FASTA data for moderately sized testing.
#' The \href{https://cran.r-project.org/package=pinfsc50}{pinfsc50} dataset is available as a separate package and includes VCF, GFF and FASTA data for testing and benchmarking.
#'
#' @seealso
#' More documentation for vcfR can be found at the \href{https://knausb.github.io/vcfR_documentation/}{vcfR documentation} website.
#'
#'
#'
#' @import pinfsc50
#' @import ape
#' @docType package
#' @name vcfR-package
# ' @rdname vcfR
#' @useDynLib vcfR, .registration = TRUE
#' @importFrom Rcpp sourceCpp
#' @importFrom stats setNames
#'
#'
NULL
#### #### #### #### ####
# EOF
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR.R
|
#' @title Convert vcfR to DNAbin
#' @name vcfR2DNAbin
#'
#' @rdname vcfR2DNAbin
#' @aliases vcfR2DNAbin
#'
#' @description
#' Convert objects of class vcfR to objects of class ape::DNAbin
#'
#' @param x an object of class chromR or vcfR
#' @param extract.indels logical indicating to remove indels (TRUE) or to include them while retaining alignment
#' @param consensus logical, indicates whether an IUPAC ambiguity code should be used for diploid heterozygotes
#' @param extract.haps logical specifying whether to separate each genotype into alleles based on a delimiting character
# @param gt.split character to delimit alleles within genotypes
#' @param unphased_as_NA logical indicating how to handle alleles in unphased genotypes
#' @param asterisk_as_del logical indicating that the asterisk allele should be converted to a deletion (TRUE) or NA (FALSE)
#' @param ref.seq reference sequence (DNAbin) for the region being converted
#' @param start.pos chromosomal position for the start of the ref.seq
#' @param verbose logical specifying whether to produce verbose output
#'
#' @details
#' Objects of class \strong{DNAbin}, from the package ape, store nucleotide sequence information.
#' Typically, nucleotide sequence information contains all the nucleotides within a region, for example, a gene.
#' Because most sites are typically invariant, this results in a large amount of redundant data.
#' This is why files in the vcf format only contain information on variant sites, it results in a smaller file.
#' Nucleotide sequences can be generated which only contain variant sites.
#' However, some applications require the invariant sites.
#' For example, inference of phylogeny based on maximum likelihood or Bayesian methods requires invariant sites.
#' The function vcfR2DNAbin therefore includes a number of options in an attempt to accommodate various scenarios.
#'
#'
#' The presence of indels (insertions or deletions) in a sequence typically presents a data analysis problem.
#' Mutation models typically do not accommodate this data well.
#' For now, the only option is for indels to be omitted from the conversion of vcfR to DNAbin objects.
#' The option \strong{extract.indels} was included to remind us of this, and to provide a placeholder in case we wish to address this in the future.
#'
#'
#' The \strong{ploidy} of the samples is inferred from the first non-missing genotype.
# The option \code{gt.split} is used to split this genotype into alleles and these are counted.
# Values for \code{gt.split} are typically '|' for phased data or '/' for unphased data.
# Note that this option is an exact match and not used in a regular expression, as the 'sep' parameter in \code{\link{vcfR2genind}} is used.
#' All samples and all variants within each sample are assumed to be of the same ploidy.
#'
#'
#' Conversion of \strong{haploid data} is fairly straightforward.
#' The options \code{consensus} and \code{extract.haps} are not relevant here.
#' When vcfR2DNAbin encounters missing data in the vcf data (NA) it is coded as an ambiguous nucleotide (n) in the DNAbin object.
#' When no reference sequence is provided (option \code{ref.seq}), a DNAbin object consisting only of variant sites is created.
#' When a reference sequence and a starting position are provided the entire sequence, including invariant sites, is returned.
#' The reference sequence is used as a starting point and variable sites are added to this.
#' Because the data in the vcfR object will be using a chromosomal coordinate system, we need to tell the function where on this chromosome the reference sequence begins.
#'
#'
#' Conversion of \strong{diploid data} presents a number of scenarios.
#' When the option \code{consensus} is TRUE and \code{extract.haps} is FALSE, each genotype is split into two alleles and the two alleles are converted into their IUPAC ambiguity code.
#' This results in one sequence for each diploid sample.
#' This may be an appropriate path when you have unphased data.
#' Note that functions called downstream of this choice may handle IUPAC ambiguity codes in unexpected manners.
#' When extract.haps is set to TRUE, each genotype is split into two alleles.
#' These alleles are inserted into two sequences.
#' This results in two sequences per diploid sample.
#' Note that this really only makes sense if you have phased data.
#' The options ref.seq and start.pos are used as in haploid data.
#'
#'
#' When a variant overlaps a deletion it may be encoded by an \strong{asterisk allele (*)}.
#' The GATK site covers this in a post on \href{https://gatk.broadinstitute.org/hc/en-us/articles/360035531912-Spanning-or-overlapping-deletions-allele-}{Spanning or overlapping deletions}.
#' This is handled in vcfR by allowing the user to decide how it is handled with the parameter \code{asterisk_as_del}.
#' When \code{asterisk_as_del} is TRUE this allele is converted into a deletion ('-').
#' When \code{asterisk_as_del} is FALSE the asterisk allele is converted to NA.
#' If \code{extract.indels} is set to FALSE it should override this decision.
#'
#'
#' Conversion of \strong{polyploid data} is currently not supported.
#' However, I have made some attempts at accommodating polyploid data.
#' If you have polyploid data and are interested in giving this a try, feel free.
#' But be prepared to scrutinize the output to make sure it appears reasonable.
#'
#'
#' Creation of DNAbin objects from large chromosomal regions may result in objects which occupy large amounts of memory.
#' If in doubt, begin by subsetting your data and then scale up to ensure you do not run out of memory.
#'
#'
#'
#'
#' @seealso
#' \href{https://cran.r-project.org/package=ape}{ape}
#'
#'
#' @examples
#' library(ape)
#' data(vcfR_test)
#'
#' # Create an example reference sequence.
#' nucs <- c('a','c','g','t')
#' set.seed(9)
#' myRef <- as.DNAbin(matrix(nucs[round(runif(n=20, min=0.5, max=4.5))], nrow=1))
#'
#' # Recode the POS data for a smaller example.
#' set.seed(99)
#' vcfR_test@fix[,'POS'] <- sort(sample(10:20, size=length(getPOS(vcfR_test))))
#'
#' # Just vcfR
#' myDNA <- vcfR2DNAbin(vcfR_test)
#' seg.sites(myDNA)
#' image(myDNA)
#'
#' # ref.seq, no start.pos
#' myDNA <- vcfR2DNAbin(vcfR_test, ref.seq = myRef)
#' seg.sites(myDNA)
#' image(myDNA)
#'
#' # ref.seq, start.pos = 4.
#' # Note that this is the same as the previous example but the variants are shifted.
#' myDNA <- vcfR2DNAbin(vcfR_test, ref.seq = myRef, start.pos = 4)
#' seg.sites(myDNA)
#' image(myDNA)
#'
#' # ref.seq, no start.pos, unphased_as_NA = FALSE
#' myDNA <- vcfR2DNAbin(vcfR_test, unphased_as_NA = FALSE, ref.seq = myRef)
#' seg.sites(myDNA)
#' image(myDNA)
#'
#'
#'
#' @export
vcfR2DNAbin <- function( x,
extract.indels = TRUE,
consensus = FALSE,
extract.haps = TRUE,
unphased_as_NA = TRUE,
asterisk_as_del = FALSE,
ref.seq = NULL,
start.pos = NULL,
verbose = TRUE )
{
# Sanitize input.
#if( class(x) == 'chromR' ){ x <- x@vcf }
if( inherits(x, 'chromR') ){
x <- x@vcf
}
#if( class(x) != 'vcfR' ){
if( !inherits(x, 'vcfR') ){
stop( "Expecting an object of class chromR or vcfR" )
}
if( consensus == TRUE & extract.haps == TRUE){
stop("consensus and extract_haps both set to TRUE. These options are incompatible. A haplotype should not be ambiguous.")
}
#if( !is.null(start.pos) & class(start.pos) == "character" ){
if( !is.null(start.pos) & inherits(start.pos, "character") ){
start.pos <- as.integer(start.pos)
}
if( extract.indels == FALSE & consensus == TRUE ){
msg <- "invalid selection: extract.indels set to FALSE and consensus set to TRUE."
msg <- c(msg, "There is no IUPAC ambiguity code for indels")
stop(msg)
}
# Check and sanitize ref.seq.
#if( class(ref.seq) != 'DNAbin' & !is.null(ref.seq) ){
if( !inherits(ref.seq, 'DNAbin') & !is.null(ref.seq) ){
stop( paste("expecting ref.seq to be of class DNAbin but it is of class", class(ref.seq)) )
}
if( is.list(ref.seq) ){
#ref.seq <- as.matrix(ref.seq)
ref.seq <- ref.seq[[1]]
}
if( is.matrix(ref.seq) ){
ref.seq <- ref.seq[1,]
ref.seq <- ref.seq[1:ncol(ref.seq)]
}
# If vector
# dna <- as.matrix(t(dna))
# Check start.pos
if( is.null(start.pos) & !is.null(ref.seq) ){
if( verbose == TRUE ){
warning("start.pos == NULL, this means that I do not know where the variants are located in the ref.seq. I'll try start.pos == 1, but results may be unexpected")
}
start.pos <- 1
}
# Extract indels.
# Currently the only option is TRUE.
if( extract.indels == TRUE ){
x <- extract.indels(x)
if( verbose == TRUE ){
message(paste("After extracting indels,", nrow(x), "variants remain."))
}
} else {
# stop("extract.indels == FALSE is not currently implemented.")
# Make alleles at each locus the same length
# https://stackoverflow.com/a/36136878
equal_allele_len <- function(x){
alleles <- c(myRef[x], unlist(strsplit(myAlt[x], split = ",")))
alleles[is.na(alleles)] <- 'n'
alleles <- format(alleles, width=max(nchar(alleles)))
alleles <- gsub("\\s", "-", alleles)
myRef[x] <<- alleles[1]
myAlt[x] <<- paste(alleles[2:length(alleles)], collapse=",")
invisible()
}
myRef <- getREF(x)
myAlt <- getALT(x)
invisible(lapply(1:length(myRef), equal_allele_len))
x@fix[,'REF'] <- myRef
x@fix[,'ALT'] <- myAlt
}
# Save POS in case we need it.
# i.e., for inserting variants into a matrix.
# pos <- as.numeric(x@fix[,'POS'])
pos <- getPOS(x)
# If we think we have variants we should extract them.
# Our GT matrix may contain zero rows, all NA or data.
# Check for zero rows.
if( nrow(x@fix) == 0 ){
# Create an empty matrix.
x <- x@gt[ 0, -1 ]
} else if( sum(!is.na(x@gt[,-1])) == 0 ){
# Check for all NA.
# Case of zero rows will sum to zero here.
# Create an empty matrix.
x <- x@gt[ 0, -1 ]
} else {
# If x is still of class vcfR, we should process it.
# if( class(x) == "vcfR" ){
# first.gt <- x@gt[ ,-1 ][ !is.na(x@gt[,-1]) ][1]
if( consensus == TRUE & extract.haps == FALSE ){
# x <- extract.gt( x, return.alleles = TRUE, allele.sep = gt.split )
# x <- alleles2consensus( x, sep = gt.split )
x <- extract.gt( x, return.alleles = TRUE )
x <- alleles2consensus( x )
} else {
# x <- extract.haps( x, gt.split = gt.split, verbose = verbose )
x <- extract.haps( x, unphased_as_NA = unphased_as_NA, verbose = verbose )
}
}
if( asterisk_as_del == TRUE){
x[ x == "*" & !is.na(x) ] <- '-'
} else {
x[ x == "*" & !is.na(x) ] <- 'n'
}
# Data could be haploid, diploid or higher ploid.
# x should be a matrix of variants by here.
# Return full sequence when ref.seq is not NULL
if( is.null(ref.seq) == FALSE ){
# Create a matrix of nucleotides.
# The number of columns should match the number
# of columns in x (i.e., number of haplotypes).
# The number of rows should match the reference
# sequence length.
# The matrix will be initialized with the
# reference and will have no variants.
variants <- x
x <- matrix( as.character(ref.seq),
nrow = length(ref.seq),
ncol = length(colnames(x)),
byrow = FALSE
)
colnames(x) <- colnames(variants)
# Populate matrix of reference sequences with variants.
# We need to subset the variant data to the region of interest.
# First we remove variants above the region.
# Then we remove variants below this region.
# Then we rescale the region to be one-based.
# variants <- variants[ pos < start.pos + dim(ref.seq)[2], , drop = FALSE]
# pos <- pos[ pos < start.pos + dim(ref.seq)[2] ]
variants <- variants[ pos < start.pos + length(ref.seq), , drop = FALSE]
pos <- pos[ pos < start.pos + length(ref.seq) ]
variants <- variants[ pos >= start.pos, , drop = FALSE]
pos <- pos[ pos >= start.pos ]
pos <- pos - start.pos + 1
x[pos,] <- variants
}
# Convert NA to n
x[ is.na(x) ] <- 'n'
# DNAbin characters must be lower case.
# tolower requires dim(X) to be positive.
if( nrow(x) > 0 ){
# x <- apply( x, MARGIN=2, tolower )
x <- tolower( x )
}
# Convert matrix to DNAbin
if( extract.indels == FALSE ){
# Indel strings need to be split into characters
x <- apply(x, MARGIN=2, function(x){ unlist(strsplit(x,"")) })
}
x <- ape::as.DNAbin(t(x))
return(x)
}
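# A minimal standalone sketch (toy data, hypothetical names) of the ref.seq
# logic above: a matrix is initialized from the reference and variant rows are
# overwritten at their positions, rescaled so that start.pos becomes row one.
ref <- c("a", "c", "g", "t", "a", "c")  # reference at chromosomal pos 4..9
start.pos <- 4
pos <- c(5, 8)                          # chromosomal positions of variants
variants <- matrix(c("t", "g"), ncol = 1, dimnames = list(NULL, "sample1"))
x <- matrix(ref, nrow = length(ref), ncol = 1,
            dimnames = list(NULL, "sample1"))
x[pos - start.pos + 1, ] <- variants
x[, 1] # "a" "t" "g" "t" "g" "c"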
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR2DNAbin.R
|
#'
#' @title Convert a vcfR object to hapmap
#'
#' @description Converts a vcfR object to hapmap
#'
#' @param vcf a vcfR object.
# ' @param out_file name of output file.
# ' @param method should 'N' or 'H' format data be generated?
#'
#' @details
#' Converts a vcfR object to a hapmap format.
#'
#' @return a data.frame that can be used as an input for GAPIT.
#'
#' @author Brian J. Knaus
#'
# ' @seealso \href{http://popgen.sc.fsu.edu/Migrate/Migrate-n.html}{Migrate-N} website.
#' @examples
#' data(vcfR_test)
#' myHapMap <- vcfR2hapmap(vcfR_test)
#' class(myHapMap)
#' \dontrun{
#' # Example of how to create a (GAPIT compliant) HapMap file.
#' write.table(myHapMap,
#' file = "myHapMap.hmp.txt",
#' sep = "\t",
#' row.names = FALSE,
#' col.names = FALSE)
#' }
#'
#' @export
vcfR2hapmap <- function(vcf) {
# print("vcfR2hapmap works!")
vcf <- vcf[!is.indel(vcf), ]
vcf <- vcf[is.biallelic(vcf), ]
gt <- extract.gt(vcf, return.alleles = TRUE)
gt <- sub("/|\\|", "", gt, fixed = FALSE)
gt[ is.na(gt) ] <- "NN"
gt[ gt == "." ] <- "NN"
hapMap <- matrix(data = NA, nrow = nrow(gt), ncol = ncol(gt) + 11)
hapMap <- as.data.frame(hapMap)
colnames(hapMap) <- c(
c("rs", "alleles", "chrom", "pos", "strand", "assembly", "center",
"protLSID", "assayLSID", "panel", "QCcode"),
colnames(gt)
)
hapMap[,1] <- rownames(gt)
hapMap[,3] <- getCHROM(vcf)
hapMap[,4] <- getPOS(vcf)
hapMap[, 12:ncol(hapMap)] <- gt
class(hapMap) <- c("hapMap", class(hapMap))
# GAPIT compatibility
hapMap <- rbind(colnames(hapMap), hapMap)
return(hapMap)
}
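# A minimal sketch of the genotype recoding used above (illustration only):
# allele calls such as "A/T" or "A|T" collapse to two-letter hapmap codes,
# and missing data is recoded as "NN".
gt <- c("A/T", "C|C", NA, ".")
gt <- sub("/|\\|", "", gt)
gt[is.na(gt)] <- "NN"
gt[gt == "."] <- "NN"
gt # "AT" "CC" "NN" "NN"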
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR2hapmap.R
|
#' @title Convert a vcfR object to MigrateN input file
#' @description The function converts a vcfR object to a text format that can be used as an infile for MigrateN.
#'
#' @param vcf a vcfR object.
#' @param pop factor indicating population membership for each sample.
#' @param in_pop vector of population names indicating which population to include in migrate output file.
#' @param out_file name of output file.
#' @param method should 'N' or 'H' format data be generated?
#'
#' @return a text file that can be used as an input for MigrateN software (SNP format).
#'
#' @details
#' This function converts a vcfR object to a text file which can be used as input for MigrateN.
#' The function will remove loci with missing data, indels, and loci that are not bialleleic (loci with more than two alleles).
#' Thus, only SNP data analysed where the length of each locus (inmutational steps) is 1 (as opposed to microsatellites or indels).
#'
#' The output file should contain Unix line endings ("\\n").
#' Note that opening the output file in a Windows text editor (just to validate number of markers, individuals or populations) might change the end of line character (eol) to a Windows line ending ("\\r\\n").
#' This may produce an error running migrate-n.
#' Because these are typically non-printing characters, this may be a difficult problem to troubleshoot.
#' The easiest way to circumvent the problem is to transfer the output file to Unix machine and view it there.
#' If you do introduce Windows line endings you can convert them back to Unix with a program such as `dos2unix` or `fromdos` to change the line endings.
#'
#' @author Shankar Shakya and Brian J. Knaus
#'
#' @seealso \href{http://popgen.sc.fsu.edu/Migrate/Migrate-n.html}{Migrate-N} website.
#'
#' @examples
#' \dontrun{
# ' pkg <- "pinfsc50"
# ' my_vcf <- system.file("extdata", "pinf_sc50.vcf.gz", package = pkg)
# ' my_vcf <- read.vcfR( my_vcf, verbose = FALSE )
# '
#' data(vcfR_example)
#' my_pop <- as.factor(paste("pop_", rep(c("A", "B", "C"), each = 6), sep = ""))
#' vcfR2migrate(vcf = vcf , pop = my_pop , in_pop = c("pop_A","pop_C"),
#' out_file = "my2pop.txt", method = 'H')
#' }
#'
#'
#' @export
vcfR2migrate <- function(vcf, pop, in_pop, out_file = "MigrateN_infile.txt", method = c('N','H') ) {
method <- match.arg(method, c('N','H'), several.ok = FALSE)
# Validate the input.
# if( class(vcf) != "vcfR"){
if( !inherits(vcf, "vcfR") ){
stop(paste("Expecting an object of class vcfR, received a", class(vcf), "instead"))
}
# if( class(pop) != "factor"){
if( !inherits(pop, "factor") ){
stop(paste("Expecting population vector, received a", class(pop), "instead"))
}
# Remove indels and non-biallelic loci
vcf <- extract.indels(vcf, return.indels = FALSE)
vcf <- vcf[is.biallelic(vcf),]
# Remove loci containing missing genotypes.
gt <- extract.gt(vcf, convertNA = TRUE)
vcf <- vcf[!rowSums((is.na(gt))),]
# FORMAT <- vcf@gt[1:nrow(gt),1]
# vcf@gt <- cbind(FORMAT, gt)
# Subset VCF data to populations.
# vcf_list <- lapply(levels(my_pop), function(x){ vcf[,c(TRUE, x == my_pop)] })
# names(vcf_list) <- levels(pop)
vcf_list <- lapply(in_pop, function(x){ vcf[,c(TRUE, x == pop)] })
names(vcf_list) <- in_pop
# for (i in (1:length(vcf_list))) {
# temp_pop <- names(vcf_list[i])
# temp_vcf <- vcf
# FORMAT <- vcf@gt[,1]
# gt <- temp_vcf@gt[, -1]
# cols <- gt[ , which(names(vcf_list[i]) == pop)]
# temp_vcf@gt <- cbind(FORMAT, cols)
# vcf_list[[i]] <- temp_vcf
# }
if(method == 'N'){
myHeader <- c('N', length(vcf_list), nrow(vcf_list[[1]]))
pop_list <- vector(mode = 'list', length=length(vcf_list))
names(pop_list) <- names(vcf_list)
# Extract alleles
for(i in 1:length(vcf_list)){
gt <- extract.gt(vcf_list[[i]], return.alleles = T)
allele1 <- apply(gt, MARGIN = 2, function(x){ substr(x, 1, 1) })
rownames(allele1) <- NULL
allele1 <- t(allele1)
rownames(allele1) <- paste(rownames(allele1), "_1", sep = "")
allele2 <- apply(gt, MARGIN = 2, function(x){ substr(x, 3, 3) })
rownames(allele2) <- NULL
allele2 <- t(allele2)
rownames(allele2) <- paste(rownames(allele2), "_2", sep = "")
pop_list[[i]][[1]] <- allele1
pop_list[[i]][[2]] <- allele2
}
# Write to file
write(myHeader, file = out_file, ncolumns = length(myHeader), sep = "\t")
write(rep(1, times = ncol(pop_list[[1]][[1]])), file = out_file, ncolumns = ncol(pop_list[[1]][[1]]), append = TRUE, sep = "\t")
for(i in 1:length(pop_list)){
popName <- c(2*nrow(pop_list[[i]][[1]]), names(pop_list)[i])
write(popName, file = out_file, ncolumns = length(popName), append = TRUE, sep = "\t")
for(j in 1:ncol(pop_list[[i]][[1]])){
utils::write.table(pop_list[[i]][[1]][,j], file = out_file, append = TRUE, quote = FALSE,
sep = "\t", row.names = TRUE, col.names = FALSE)
utils::write.table(pop_list[[i]][[2]][,j], file = out_file, append = TRUE, quote = FALSE,
sep = "\t", row.names = TRUE, col.names = FALSE)
}
}
} else if(method == 'H'){
myHeader <- c('H', length(vcf_list), nrow(vcf_list[[1]]))
# Summarize populations
pop_list <- vector(mode = 'list', length=length(vcf_list))
names(pop_list) <- names(vcf_list)
for(i in 1:length(vcf_list)){
# Matrix to hold the summary
myMat <- matrix(nrow = nrow(vcf_list[[i]]), ncol = 6)
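    # Column key (inferred from the assignments below):
    # 1 = locus name (CHROM_POS), 2 = REF allele, 3 = REF allele count,
    # 4 = ALT allele, 5 = ALT allele count, 6 = total allele count.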
# Population summary
var_info <- as.data.frame(vcf_list[[i]]@fix[,1:2, drop = FALSE])
var_info$mask <- TRUE
gt <- extract.gt(vcf_list[[i]])
popSum <- .gt_to_popsum(var_info = var_info, gt = gt)
# Populate matrix
myMat[,1] <- paste(vcf_list[[i]]@fix[,'CHROM'], vcf_list[[i]]@fix[,'POS'], sep = "_")
myMat[,2] <- vcf_list[[i]]@fix[,'REF']
myMat[,4] <- vcf_list[[i]]@fix[,'ALT']
myMat[,3] <- unlist(lapply(strsplit(as.character(popSum$Allele_counts), split = ",", fixed = TRUE), function(x){x[1]}))
myMat[,3][is.na(myMat[,3])] <- 0
myMat[,5] <- unlist(lapply(strsplit(as.character(popSum$Allele_counts), split = ",", fixed = TRUE), function(x){x[2]}))
myMat[,5][is.na(myMat[,5])] <- 0
myMat[,6] <- as.numeric(myMat[,3]) + as.numeric(myMat[,5])
pop_list[[i]] <- myMat
}
# Write to file
write(myHeader, file = out_file, ncolumns = length(myHeader), sep = "\t")
for(i in 1:length(pop_list)){
popName <- c(pop_list[[i]][1,6], names(pop_list[i]))
write(popName, file = out_file, ncolumns = length(popName), append = TRUE, sep = "\t")
utils::write.table(pop_list[[i]], file = out_file, append = TRUE, quote = FALSE,
sep = "\t", row.names = FALSE, col.names = FALSE)
}
} else {
stop("You should never get here!")
}
return( invisible(NULL) )
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR2migrate.R
|
#' @title Convert vcfR objects to other formats
#' @name Format conversion
#' @rdname vcfR_conversion
#' @description
#' Convert vcfR objects to objects supported by other R packages
#'
#' @param x an object of class chromR or vcfR
#' @param return.alleles should the VCF encoding of the alleles be returned (FALSE) or the actual alleles (TRUE).
#'
#' @details
#' After processing vcf data in vcfR, one will likely proceed to an analysis step.
#' Within R, three obvious choices are:
#' \href{https://cran.r-project.org/package=pegas}{pegas},
#' \href{https://cran.r-project.org/package=adegenet}{adegenet}
#' and \href{https://cran.r-project.org/package=poppr}{poppr}.
#' The package pegas uses objects of type loci.
#' The function \strong{vcfR2loci} calls extract.gt to create a matrix of genotypes which is then converted into an object of type loci.
#'
#' The packages adegenet and poppr use the genind object.
#' The function \strong{vcfR2genind} uses extract.gt to create a matrix of genotypes and uses the adegenet function df2genind to create a genind object.
#' The package poppr additionally uses objects of class genclone, which can be created from genind objects with the function poppr::as.genclone.
#'
#'
#' The function vcfR2genlight calls the 'new' method for the genlight object.
#' This method implements multi-threading through calls to the function \code{parallel::mclapply}.
#' Because 'forks' do not exist in the Windows environment, this will only work for Windows users when n.cores=1.
#' In the Unix environment, users may increase this number to allow the use of multiple threads (i.e., cores).
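#'
#' As a minimal sketch (the object names are hypothetical; multi-threading
#' requires a Unix-alike system):
#' \preformatted{
#' my_genlight <- vcfR2genlight(my_vcf, n.cores = 4)
#' }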
#'
#' @note \subsection{For users of \pkg{poppr}}{
#' If you wish to use \code{vcfR2genind()}, it is \strong{strongly recommended} to use it with the option \code{return.alleles = TRUE}.
#' This is because the \pkg{poppr} package accommodates mixed-ploidy data by interpreting "0" alleles \emph{in genind objects} to be NULL alleles in both \code{poppr::poppr.amova()} and \code{poppr::locus_table()}.
#' }
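#'
#' A minimal sketch of the recommended call (the object names are hypothetical):
#' \preformatted{
#' my_genind <- vcfR2genind(my_vcf, return.alleles = TRUE)
#' }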
#'
#'
#' @seealso
#' \code{extract.gt},
#' \code{alleles2consensus},
#' \code{adegenet::df2genind},
#' \code{adegenet::genind},
#' \href{https://cran.r-project.org/package=pegas}{pegas},
#' \href{https://cran.r-project.org/package=adegenet}{adegenet},
#' and
#' \href{https://cran.r-project.org/package=poppr}{poppr}.
#' To convert to objects of class \strong{DNAbin} see \code{vcfR2DNAbin}.
#'
# ' @rdname vcfR_conversion
#' @aliases vcfR2genind
#'
#' @param sep character (to be used in a regular expression) to delimit the alleles of genotypes
#' @param ... pass other parameters to adegenet::df2genind
#'
#' @details
#' The parameter \strong{...} is used to pass parameters to other functions.
#' In \code{vcfR2genind} it is used to pass parameters to \code{adegenet::df2genind}.
#' For example, setting \code{check.ploidy=FALSE} may improve the performance of \code{adegenet::df2genind}, as long as you know the ploidy.
#' See \code{?adegenet::df2genind} to see these options.
#'
#' @export
vcfR2genind <- function(x, sep="[|/]", return.alleles = FALSE, ...) {
  x <- extract.gt(x, return.alleles = return.alleles)
  # Set genotypes containing a "." (missing data) to NA.
  x[grep('.', x, fixed = TRUE)] <- NA
  # adegenet likes to delimit on periods, so recode them in the locus names.
  rownames(x) <- sub(".", "_", rownames(x), fixed = TRUE)
  if( requireNamespace('adegenet') ){
    x <- adegenet::df2genind(t(x), sep = sep, ...)
  } else {
    warning("adegenet not installed")
  }
x
}
#' @rdname vcfR_conversion
#' @aliases vcfR2loci
#'
#' @export
vcfR2loci <- function(x, return.alleles = FALSE)
{
  x <- extract.gt(x, return.alleles = return.alleles)
# modified from pegas::as.loci.genind
x <- as.data.frame(t(x))
icol <- 1:ncol(x)
for (i in icol) x[, i] <- factor(x[, i] )
class(x) <- c("loci", "data.frame")
attr(x, "locicol") <- icol
x
}
#' @rdname vcfR_conversion
#' @aliases vcfR2genlight
#'
#' @param n.cores integer specifying the number of cores to use.
#'
#' @examples
#' adegenet_installed <- require("adegenet")
#' if (adegenet_installed) {
#' data(vcfR_test)
#' # convert to genlight (preferred method with bi-allelic SNPs)
#' gl <- vcfR2genlight(vcfR_test)
#'
#' # convert to genind, keeping information about allelic state
#' # (slightly slower, but preferred method for use with the "poppr" package)
#' gid <- vcfR2genind(vcfR_test, return.alleles = TRUE)
#'
#' # convert to genind, returning allelic states as 0, 1, 2, etc.
#' # (not preferred, but slightly faster)
#' gid2 <- vcfR2genind(vcfR_test, return.alleles = FALSE)
#' }
#'
#' @export
vcfR2genlight <- function(x, n.cores=1){
bi <- is.biallelic(x)
if(sum(!bi) > 0){
msg <- paste("Found", sum(!bi), "loci with more than two alleles.")
msg <- c(msg, "\n", paste("Objects of class genlight only support loci with two alleles."))
msg <- c(msg, "\n", paste(sum(!bi), 'loci will be omitted from the genlight object.'))
warning(msg)
x <- x[bi,]
}
x <- addID(x)
CHROM <- x@fix[,'CHROM']
POS <- x@fix[,'POS']
ID <- x@fix[,'ID']
x <- extract.gt(x)
x[x=="0|0"] <- 0
x[x=="0|1"] <- 1
x[x=="1|0"] <- 1
x[x=="1|1"] <- 2
x[x=="0/0"] <- 0
x[x=="0/1"] <- 1
x[x=="1/0"] <- 1
x[x=="1/1"] <- 2
  if( requireNamespace('adegenet') ){
    x <- new('genlight', t(x), n.cores = n.cores)
    adegenet::chromosome(x) <- CHROM
    adegenet::position(x) <- POS
    adegenet::locNames(x) <- ID
  } else {
    warning("adegenet not installed")
  }
  return(x)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR_conversion.R
|
#' Example data for vcfR.
#'
#' An example dataset containing parts of the *Phytophthora infestans* genome.
#'
#' \itemize{
#' \item dna DNAbin object
#' \item gff gff format data.frame
#' \item vcf vcfR object
#' }
#'
#'
#' This data is a subset of the pinfsc50 dataset.
#' It has been subset to positions between 500 and 600 kbp.
#' The coordinate systems of the vcf and gff file have been altered by subtracting 500,000.
#' This results in a 100 kbp section of supercontig_1.50 that has positional data ranging from 1 to 100 kbp.
#'
#' Note that it is encouraged to keep package contents small to facilitate easy
#' downloading and installation. This is why a small subset of a supercontig was
#' chosen as an example. In practice I've used this package on entire supercontigs.
#' This package was designed with much larger datasets in mind than this example.
#'
#' @examples
#' data(vcfR_example)
#'
#'
#'
#'
#' @docType data
#' @keywords datasets
#' @format A DNAbin object, a data.frame and a vcfR object
#' @name vcfR_example
#' @aliases dna gff vcf
NULL
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR_example.R
|
#' Test data for vcfR.
#'
#' A test file containing a diversity of examples intended to test functionality.
#'
#' \itemize{
#' \item vcfR_test vcfR object
#' }
#'
#'
#' This data set began as the example (section 1.1) from The Variant Call Format Specification \href{http://samtools.github.io/hts-specs/}{VCFv4.3} .
#' This data consisted of 3 samples and 5 variants.
#' As I encounter examples that challenge the code in vcfR, they are added to this data set.
#'
#'
#'
#' @examples
#' data(vcfR_test)
#'
#'
#' \dontrun{
#' # When I add data it can be saved with this command.
#' save(vcfR_test, file="data/vcfR_test.RData")
#' }
#'
#'
#' @docType data
#' @keywords datasets
#' @format A vcfR object
#' @name vcfR_test
#' @aliases vcf_test
NULL
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR_test.R
|
#### Document all the vcf2tidy related functions together ####
#' @title Convert vcfR objects to tidy data frames
#' @name Convert to tidy data frames
#' @rdname vcfR_to_tidy_conversion
#' @description
#' Convert the information in a vcfR object to a long-format data frame
#' suitable for analysis or use with Hadley Wickham's packages,
#' \href{https://cran.r-project.org/package=dplyr}{dplyr},
#' \href{https://cran.r-project.org/package=tidyr}{tidyr}, and
#' \href{https://cran.r-project.org/package=ggplot2}{ggplot2}.
#' These packages have been
#' optimized for operation on large data frames, and, though they can bog down
#' with very large data sets, they provide a good framework for handling and filtering
#' large variant data sets. For some background
#' on the benefits of such "tidy" data frames, see
#' \doi{10.18637/jss.v059.i10}.
#'
#' For some filtering operations, such as those where one wants to filter genotypes
#' upon GT fields in combination with INFO fields, or more complex
#' operations in which one wants to filter
#' loci based upon the number of individuals having greater than a certain quality score,
#' it will be advantageous to put all the information into a long format data frame
#' and use \code{dplyr} to perform the operations. Additionally, a long data format is
#' required for using \code{ggplot2}. These functions convert vcfR objects to long format
#' data frames.
#'
#' @param x an object of class vcfR
#'
#' @details
#' The function \strong{vcfR2tidy} is the main function in this series. It takes a vcfR
#' object and converts the information to a list of long-format data frames. The user can
#' specify whether only the INFO or both the INFO and the FORMAT columns should be extracted, and also
#' which INFO and FORMAT fields to extract. If no specific INFO or FORMAT fields are asked
#' for, then they will all be returned. If \code{single_frame == FALSE} and
#' \code{info_only == FALSE} (the default),
#' the function returns a list with three components: \code{fix}, \code{gt}, and \code{meta} as follows:
#' \enumerate{
#' \item \code{fix} A data frame of the fixed information columns and the parsed INFO columns, and
#' an additional column, \code{ChromKey}---an integer identifier
#' for each locus, ordered by their appearance in the original data frame---that serves
#' together with POS as a key back to rows in \code{gt}.
#' \item \code{gt} A data frame of the genotype-related fields. Column names are the names of the
#' FORMAT fields with \code{gt_column_prepend} (by default, "gt_") prepended to them. Additionally
#' there are columns \code{ChromKey}, and \code{POS} that can be used to associate
#' each row in \code{gt} with a row in \code{fix}.
#' \item\code{meta} The meta-data associated with the columns that were extracted from the INFO and FORMAT
#' columns in a tbl_df-ed data frame.
#' }
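#'
#' As a sketch (where \code{Z} is the list returned by \code{vcfR2tidy}), the
#' two data frames can be rejoined with:
#' \preformatted{
#' dplyr::left_join(Z$gt, Z$fix, by = c("ChromKey", "POS"))
#' }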
#' This is the default return object because it might be space-inefficient to
#' return a single tidy data frame if there are many individuals and the CHROM names are
#' long and/or there are many INFO fields. However, if
#' \code{single_frame = TRUE}, then the results are returned as a list with component \code{meta}
#' as before, but rather than having \code{fix} and \code{gt} as before, both those data frames
#' have been joined into component \code{dat} and a ChromKey column is not returned, because
#' the CHROM column is available.
#'
#' If \code{info_only == TRUE}, then just the fixed columns and the parsed INFO columns are
#' returned, and the FORMAT fields are not parsed at all. The return value is a list with
#' components \code{fix} and \code{meta}. No column ChromKey appears.
#'
#' The following functions are called by \strong{vcfR2tidy} but are documented below because
#' they may be useful individually.
#'
#' The function \strong{extract_info_tidy} lets you pass in a vector of the INFO fields that
#' you want extracted to a long format data frame. If you don't tell it which fields to
#' extract it will extract all the INFO columns detailed in the VCF meta section.
#' The function returns a tbl_df data frame of the INFO fields along with an additional
#' integer column \code{Key} that associates
#' each row in the output data frame with each row (i.e. each CHROM-POS combination)
#' in the original vcfR object \code{x}.
#'
#' The function \strong{extract_gt_tidy} lets you pass in a vector of the FORMAT fields that
#' you want extracted to a long format data frame. If you don't tell it which fields to
#' extract it will extract all the FORMAT columns detailed in the VCF meta section.
#' The function returns a tbl_df data frame of the FORMAT fields with an additional
#' integer column \code{Key} that associates
#' each row in the output data frame with each row (i.e. each CHROM-POS combination),
#' in the original vcfR object \code{x}, and an additional column \code{Indiv} that gives
#' the name of the individual.
#'
#' The function \strong{vcf_field_names} is a helper function that
#' parses information from the metadata section of the
#' VCF file to return a data frame with the \emph{metadata} information about either the INFO
#' or FORMAT tags. It
#' returns a \code{tbl_df}-ed data frame with column names: "Tag", "ID", "Number","Type",
#' "Description", "Source", and "Version".
#'
#' @return An object of class \code{tbl_df} (a tibble) or a list where every element is of class \code{tbl_df}.
#'
#' @note To run all the examples, you can issue this:
#' \code{example("vcfR2tidy")}
#'
#' @author Eric C. Anderson <eric.anderson@@noaa.gov>
#' @seealso
#' \href{https://cran.r-project.org/package=dplyr}{dplyr},
#' \href{https://cran.r-project.org/package=tidyr}{tidyr}.
#'
#' @examples
#' # load the data
#' data("vcfR_test")
#' vcf <- vcfR_test
#'
#'
#' # extract all the INFO and FORMAT fields into a list of tidy
#' # data frames: fix, gt, and meta. Here we don't coerce columns
#' # to integer or numeric types...
#' Z <- vcfR2tidy(vcf)
#' names(Z)
#'
#'
#' # here is the meta data in a table
#' Z$meta
#'
#'
#' # here is the fixed info
#' Z$fix
#'
#'
#' # here are the GT fields. Note that ChromKey and POS are keys
#' # back to Z$fix
#' Z$gt
#'
#'
#' # Note that if you wanted to tidy this data set even further
#' # you could break up the comma-delimited columns easily
#' # using tidyr::separate
#'
#'
#'
#'
#' # here we put the data into a single, joined data frame (list component
#' # dat in the returned list) and the meta data. Let's just pick out a
#' # few fields:
#' vcfR2tidy(vcf,
#' single_frame = TRUE,
#' info_fields = c("AC", "AN", "MQ"),
#' format_fields = c("GT", "PL"))
#'
#'
#' # note that the "gt_GT_alleles" column is always returned when any
#' # FORMAT fields are extracted.
#'
#'
#'
#'
#' # Here we extract a single frame with all fields but we automatically change
#' # types of the columns according to the entries in the metadata.
#' vcfR2tidy(vcf, single_frame = TRUE, info_types = TRUE, format_types = TRUE)
#'
#'
#'
#'
#' # for comparison, here note that all the INFO and FORMAT fields that were
#' # extracted are left as character ("chr" in the dplyr summary)
#' vcfR2tidy(vcf, single_frame = TRUE)
#'
#'
#'
#'
#'
#' # Below are some examples with the vcfR2tidy "subfunctions"
#'
#'
#' # extract the AC, AN, and MQ fields from the INFO column into
#' # a data frame and convert the AN values integers and the MQ
#' # values into numerics.
#' extract_info_tidy(vcf, info_fields = c("AC", "AN", "MQ"), info_types = c(AN = "i", MQ = "n"))
#'
#' # extract all fields from the INFO column but leave
#' # them as character vectors
#' extract_info_tidy(vcf)
#'
#' # extract all fields from the INFO column and coerce
#' # types according to metadata info
#' extract_info_tidy(vcf, info_types = TRUE)
#'
#' # get the INFO field metadata in a data frame
#' vcf_field_names(vcf, tag = "INFO")
#'
#' # get the FORMAT field metadata in a data frame
#' vcf_field_names(vcf, tag = "FORMAT")
#'
#'
#'
#### vcfR2tidy ####
#' @aliases vcfR2tidy
#'
#' @param info_only if TRUE return a list with only a \code{fix} component
#' (a single data frame that has the parsed INFO information) and
#' a \code{meta} component. Don't extract any of the FORMAT fields.
#' @param single_frame return a single tidy data frame in list component
#' \code{dat} rather than returning it in components
#' \code{fix} and/or \code{gt}.
#' @param toss_INFO_column if TRUE (the default) the INFO column will be removed from output as
#' its constituent parts will have been parsed into separate columns.
#' @param ... more options to pass to \code{\link{extract_info_tidy}} and
#' \code{\link{extract_gt_tidy}}. See parameters listed below.
#'
# @importFrom dplyr everything
# @import dplyr
#'
#' @export
vcfR2tidy <- function(x,
info_only = FALSE,
single_frame = FALSE,
toss_INFO_column = TRUE,
...) {
INFO <- Key <- ID <- CHROM <- ChromKey <- POS <- NULL
#### Some Error Checking and Preliminaries ####
  if(single_frame == TRUE && info_only == TRUE)
    stop("You cannot pass both single_frame and info_only as TRUE")
#check to make sure that the user didn't pass in unacceptable params in ...
dotslist <- list(...)
unk_parm <- setdiff(
names(dotslist),
c("info_fields", "info_types", "info_sep", "format_fields", "format_types", "dot_is_NA",
"alleles", "allele.sep", "gt_column_prepend", "verbose")
)
if(length(unk_parm) > 0){
stop("Unknown \"...\" parameters ",
paste(unk_parm, collapse = " "),
" to function vcfR2tidy"
)
}
info_dots <- dotslist[names(dotslist) %in% c("info_fields", "info_types", "info_sep")]
info_dots$x = x
format_dots <- dotslist[names(dotslist) %in% c("format_fields", "format_types", "dot_is_NA",
"alleles", "allele.sep", "gt_column_prepend", "verbose")]
format_dots$x = x
# klugie hack for dealing with the gt_column_prepend
if(!is.null(format_dots[["gt_column_prepend"]])) {
gt_prep <- format_dots[["gt_column_prepend"]]
} else {
gt_prep = "gt_"
}
  #### extract the INFO data and return if that is all that is requested ####
# get the base fix data as a data frame
  base <- as.data.frame(x@fix, stringsAsFactors = FALSE) %>% tibble::as_tibble()
base$POS <- as.integer(base$POS)
base$QUAL <- as.numeric(base$QUAL)
  if(toss_INFO_column == TRUE) {
    base <- base %>% dplyr::select(-INFO)
  }
# also get the full meta data for all the INFO fields
info_meta_full <- vcf_field_names(x, tag = "INFO")
fix <- do.call(what = extract_info_tidy, args = info_dots)
# Handle zero INFO records
  if( nrow(fix) == 0 ){
    fix <- data.frame(
      Key = 1:nrow(base)
    )
  }
  if(info_only == TRUE) {
    ret <- dplyr::bind_cols(base, fix) %>%
      tibble::as_tibble() %>%
      dplyr::select( -Key)
    # only retain meta info for the fields that we are returning
    info_meta <- info_meta_full %>%
      dplyr::filter(info_meta_full$ID %in% names(ret))
    return(list(fix = ret, meta = info_meta))
  }
}
#### Extract the GT data, and return what is appropriate ####
# if you got here then we need to extract some gt fields, too
gt <- do.call(what = extract_gt_tidy, args = format_dots)
# get the full FORMAT meta data and add the gt_column_prepend to them
  gt_meta_full <- vcf_field_names(x, tag = "FORMAT") %>%
    dplyr::mutate(ID = paste(gt_prep, ID, sep = ""))
# if the user is asking for a single data frame we give it to them here:
  if(single_frame == TRUE) {
    ret <- dplyr::bind_cols(base, fix) %>%
      dplyr::left_join(gt, by = "Key") %>%
      tibble::as_tibble() %>%
      dplyr::select( -Key) # no point in keeping Key around at this point
    info_meta <- info_meta_full %>%
      dplyr::filter(ID %in% names(ret))
    gt_meta <- gt_meta_full %>%
      dplyr::filter(ID %in% names(ret))
    return(list(dat = ret, meta = dplyr::bind_rows(info_meta, gt_meta)))
  }
# if the user is not asking for a single data frame then we return a list
# which has appropriate keys for getting the fix and the gt associated
# appropriately.
  retfix <- dplyr::bind_cols(base, fix) %>%
    tibble::as_tibble() %>%
    dplyr::mutate(ChromKey = as.integer(factor(CHROM, levels = unique(CHROM)))) %>%
    dplyr::select(ChromKey, dplyr::everything()) # note that we will drop Key from this after we have used it
  retgt <- gt %>%
    dplyr::left_join(dplyr::select(retfix, ChromKey, Key, POS), by = "Key") %>%
    dplyr::select(ChromKey, POS, dplyr::everything()) %>%
    dplyr::select( -Key)
  info_meta <- info_meta_full %>%
    dplyr::filter(ID %in% names(retfix))
  gt_meta <- gt_meta_full %>%
    dplyr::filter(ID %in% names(retgt))
  # return the list
  list(
    fix = retfix %>%
      dplyr::select( -Key),
    gt = retgt,
    meta = dplyr::bind_rows(info_meta, gt_meta)
  )
}
#### extract_info_tidy ####
#' @rdname vcfR_to_tidy_conversion
#' @aliases extract_info_tidy
#'
#' @param info_fields names of the fields to be extracted from the INFO column
#' into a long format data frame. If this is left as NULL (the default) then
#' the function returns a column for every INFO field listed in the metadata.
#' @param info_types named vector of "i" or "n" if you want the fields extracted from the INFO column to be converted to integer or numeric types, respectively.
#' When set to NULL they will be characters.
#' The names have to be the exact names of the fields.
#' For example \code{info_types = c(AF = "n", DP = "i")} will convert column AF to numeric and DP to integer.
#' If you would like the function to try to figure out the conversion from the metadata information, then set \code{info_types = TRUE}.
#' Anything with Number == 1 and (Type == Integer or Type == Numeric) will then be converted accordingly.
#' @param info_sep the delimiter used in the data portion of the INFO fields to
#' separate different entries. By default it is ";", but earlier versions of the VCF
#' standard apparently used ":" as a delimiter.
#' @export
extract_info_tidy <- function(x, info_fields = NULL, info_types = TRUE, info_sep = ";") {
  if(!is.null(info_fields) && any(duplicated(info_fields))) stop("Requesting extraction of duplicate info_field names")
  if( !inherits(x, "vcfR") ) stop("Expecting x to be a vcfR object, not a ", class(x))
  ID <- NULL
  vcf <- x
  x <- as.data.frame(x@fix, stringsAsFactors = FALSE) %>%
    tibble::as_tibble()
# if info_fields is NULL then we try to do all of them
if(is.null(info_fields)) {
info_df <- vcfR::vcf_field_names(vcf, tag = "INFO")
info_fields <- info_df$ID
}
# if info_types == TRUE
# then we try to discern the fields amongst info_fields that should be coerced to integer and
# numeric
  if(!is.null(info_types) && length(info_types) == 1 && info_types[1] == TRUE) {
    info_df <- vcfR::vcf_field_names(vcf, tag = "INFO") %>%
      dplyr::filter(ID %in% info_fields)
    info_types <- guess_types(info_df)
  }
# here is where the action is
# first split into a list of vectors and then make them named vectors of values and
# pick them out in order using info_fields
ret <- stringr::str_split(string = x$INFO, pattern = info_sep) %>%
lapply(function(x) {
y <- stringr::str_split(x, pattern = "=", n = 2)
vals <- unlist(lapply(y, function(z) z[2]))
names(vals) <- unlist(lapply(y, function(z) z[1]))
unname(vals[info_fields])
}) %>%
unlist
# If there were no variants ret will be NULL.
if(is.null(ret)){
ret <- matrix(nrow = 0, ncol = length(info_fields), byrow = TRUE)
} else {
ret <- matrix(ret, ncol = length(info_fields), byrow = TRUE)
}
ret <- as.data.frame(ret, stringsAsFactors = FALSE) %>%
setNames(info_fields) %>%
tibble::as_tibble()
if(!is.null(info_types)) {
ns <- info_types[!is.na(info_types) & info_types == "n"]
is <- info_types[!is.na(info_types) & info_types == "i"]
fs <- info_types[!is.na(info_types) & info_types == "f"]
if(length(ns) > 0) {
ret[names(ns)] <- lapply(ret[names(ns)], as.numeric)
}
if(length(is) > 0) {
ret[names(is)] <- lapply(ret[names(is)], as.integer)
}
proc_flag <- function(x, INFO){
x2 <- rep(FALSE, times = length(INFO))
x2[ grep(x, INFO) ] <- TRUE
x2
}
if(length(fs) > 0) {
ret[names(fs)] <- lapply(names(fs), proc_flag, x$INFO)
}
}
if(nrow(ret) > 0){
ret <- cbind(Key = 1:nrow(ret), ret) %>% tibble::as_tibble()
} else {
ret <- cbind(Key = vector(mode = 'integer', length = 0), ret) %>% tibble::as_tibble()
}
ret
}
#### extract_gt_tidy ####
#' @rdname vcfR_to_tidy_conversion
#' @aliases extract_gt_tidy
#'
#' @param format_fields names of the fields in the FORMAT column to be extracted from
#' each individual in the vcfR object into
#' a long format data frame. If left as NULL, the function will extract all the FORMAT
#' columns that were documented in the meta section of the VCF file.
#' @param format_types named vector of "i" or "n" if you want the fields extracted according to the FORMAT column to be converted to integer or numeric types, respectively.
#' When set to TRUE an attempt to determine their type will be made from the meta information.
#' When set to NULL they will be characters.
#' The names have to be the exact names of the format_fields.
#' Works equivalently to the \code{info_types} argument in
#' \code{\link{extract_info_tidy}}, i.e., if you set it to TRUE then it uses the information in the
#' meta section of the VCF to coerce to types as indicated.
#' @param dot_is_NA if TRUE then a single "." in a character field will be set to NA. If FALSE
#' no conversion is done. Note that "." in a numeric or integer field
#' (according to format_types) with Number == 1 is always
#' going to be set to NA.
#' @param alleles if TRUE (the default) then this will return a column, \code{gt_GT_alleles} that
#' has the genotype of the individual expressed as the alleles rather than as 0/1.
#' @param allele.sep character which delimits the alleles in a genotype (/ or |) to be passed to
#' \code{\link{extract.gt}}. Here this is not used for a regex (as it is in other functions), but merely
#' for output formatting.
#' @param gt_column_prepend string to prepend to the names of the FORMAT columns
#' in the output so that they
#' do not conflict with any INFO columns in the output. Default is "gt_". Should be a
#' valid R name. (i.e. don't start with a number, have a space in it, etc.)
#' @param verbose logical to specify if verbose output should be produced
#' @export
extract_gt_tidy <- function(x,
format_fields = NULL,
format_types = TRUE,
dot_is_NA = TRUE,
alleles = TRUE,
allele.sep = "/",
gt_column_prepend = "gt_",
verbose = TRUE) {
if(!is.null(format_fields) && any(duplicated(format_fields))){
stop("Requesting extraction of duplicate format_field names")
}
# if(class(x) != "vcfR"){
if( !inherits(x, "vcfR") ){
stop("Expecting x to be a vcfR object, not a ", class(x))
}
# https://www.r-bloggers.com/no-visible-binding-for-global-variable/
ID <- Key <- Indiv <- NULL
vcf <- x # Rename it.
# Get this, because we may need it.
# Extracts FORMAT acronyms from the meta region.
format_df <- vcfR::vcf_field_names(vcf, tag = "FORMAT")
# If format_fields is NULL then we try to do all of them
if(is.null(format_fields)) {
format_fields <- format_df$ID
}
# If info_types == TRUE
# then we try to discern the fields amongst info_fields that should be coerced to integer and
# numeric.
  if(!is.null(format_types) && length(format_types) == 1 && format_types[1] == TRUE) {
    format_types <- guess_types(format_df %>% dplyr::filter(ID %in% format_fields))
  }
# Make a parallel vector that indicates which fields should be numeric or not
# so we can tell extract.gt to take care of it.
coerce_numeric <- rep(FALSE, length(format_fields))
coerce_numeric[format_fields %in% names(format_types)] <- TRUE
# Now get all the gt fields
ex <- 1:length(format_fields)
names(ex) <- format_fields
get_gt <- function(i, ...){
if(verbose == TRUE){
message("Extracting gt element ", names(ex)[i])
}
ret <- extract.gt(x = vcf, element = format_fields[i], as.numeric = coerce_numeric[i])
ret <- as.vector(ret)
ret
}
geno_info <- lapply(ex, get_gt)
  geno_info <- dplyr::as_tibble(geno_info)
  if( nrow(geno_info) > 0 ){
    # Key indexes variants (rows of @fix); each sample's genotypes are stacked
    # column-wise, so Key repeats once per sample and Indiv holds sample names.
    geno_info <- dplyr::mutate(geno_info,
                               Key = rep(1:nrow(vcf@fix), times = ncol(vcf@gt) - 1),
                               Indiv = rep(colnames(vcf@gt)[-1], each = nrow(vcf@fix))
    )
  } else {
    geno_info <- dplyr::mutate(geno_info,
                               Key = vector(mode = 'integer', length = 0),
                               Indiv = vector(mode = 'character', length = 0)
    )
  }
  geno_info <- dplyr::select(geno_info, Key, Indiv, dplyr::everything())
  # now coerce numerics that should be integers to ints:
  if( sum( format_types == "i" ) > 0 ){
    geno_info[names(format_types)[format_types == "i"]] <-
      lapply(geno_info[names(format_types)[format_types == "i"]], as.integer)
  }
# and now, if alleles == TRUE, get the GT column expressed as alleles
if(alleles == TRUE) {
geno_info$GT_alleles <- as.vector(extract.gt(x = vcf, element = "GT", return.alleles = TRUE))
}
# now prepend gt_ to every column name except the Key and Indiv columns:
names(geno_info)[-c(1,2)] <- paste(gt_column_prepend, names(geno_info)[-c(1,2)], sep = "")
geno_info
}
# given a data frame of INFO or FORMAT info, this sets everything
# that has Number = 1 to Integer or Numeric as appropriate. Returns
# a named vector suitable for passing to, for example, info_types.
# this is not exported
guess_types <- function(D) {
Number <- Type <- tt <- ID <- NULL
  tmp <- D %>%
    dplyr::filter(Number == 1) %>%
    dplyr::mutate(tt = dplyr::if_else(Type == "Integer", "i", dplyr::if_else(Type == "Numeric" | Type == "Float", "n", ""))) %>%
    dplyr::filter(tt %in% c("n", "i")) %>%
    dplyr::select(ID, Number, Type, tt)
  tmp <- D %>% dplyr::filter(Number == 0 & Type == 'Flag') %>%
    dplyr::mutate(tt = dplyr::if_else(Type == "Flag", "f", "")) %>%
    dplyr::filter(tt %in% c("f")) %>%
    dplyr::select(ID, Number, Type, tt) %>%
    dplyr::bind_rows(tmp)
ret <- tmp$tt
names(ret) <- tmp$ID
ret
}
#### vcf_field_names ####
#' @rdname vcfR_to_tidy_conversion
#' @aliases vcf_field_names
#'
#' @param tag name of the lines in the metadata section of the VCF file to parse out.
#' Default is "INFO". The only other one tested and supported, currently is, "FORMAT".
#'
#' @export
vcf_field_names <- function(x, tag = "INFO") {
# if(class(x) != "vcfR") stop("Expecting x to be a vcfR object, not a ", class(x))
if( !inherits(x, "vcfR") ) stop("Expecting x to be a vcfR object, not a ", class(x))
if( tag != 'INFO' & tag != 'FORMAT') stop("Expecting tag to either be INFO or FORMAT")
# Subset to tag.
x <- x@meta
left_regx <- paste("^##", tag, "=<", sep = "") # regex to match and replace
x <- x[grep(left_regx, x)]
# Handle zero tags.
if( length(x) == 0 ){
nullReturn <- structure(
list(Tag = character(0), ID = character(0), Number = character(0),
Type = character(0), Description = character(0)),
row.names = integer(0),
class = c("tbl_df", "tbl", "data.frame"))
return( nullReturn )
}
# Clean up the string ends.
x <- sub(left_regx, "", x)
x <- sub(">$", "", x)
# Delimit on quote protected commas.
x <- lapply(x, function(x){scan(text=x, what="character", sep=",", quiet = TRUE)})
# Get unique keys.
myKeys <- unique(unlist(lapply(strsplit(unlist(x), split = "="), function(x){x[1]})))
# Omit default keys so we can make them first.
myKeys <- grep("^ID$|^Number$|^Type$|^Description$", myKeys, invert = TRUE, value = TRUE)
myKeys <- c("ID", "Number", "Type", "Description", myKeys)
myReturn <- data.frame(matrix(ncol=length(myKeys) + 1, nrow=length(x)))
colnames(myReturn) <- c("Tag", myKeys)
myReturn[,'Tag'] <- tag
getValue <- function(x){
myValue <- grep(paste("^", myKeys[i], "=", sep=""), x, value = TRUE)
if(length(myValue) == 0){
is.na(myValue) <- TRUE
} else {
myValue <- sub(".*=", "", myValue)
}
myValue
}
for(i in 1:length(myKeys)){
myReturn[,i+1] <- unlist(lapply(x, function(x){ getValue(x) }))
}
tibble::as_tibble(myReturn)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vcfR_to_tidy_functions.R
|
#' Example data from the Variant Effect Predictor (VEP).
#'
#' Example data to use with unit tests.
#'
#' \itemize{
#' \item vep vcfR object
#' }
#'
#'
#' Output from the \href{https://useast.ensembl.org/info/docs/tools/vep/index.html}{VEP} may include values with multiple equals signs.
#' This does not appear to conform with the VCF specification (at the time of writing this \href{http://samtools.github.io/hts-specs/}{VCF v4.3}).
#' But it appears fairly easy to accomodate.
#' This example data can be used to make unit tests to validate functionality.
#'
#'
#' @examples
#' data(vep)
#' vcfR2tidy(vep, info_only = TRUE)$fix
#'
#'
#' @docType data
#' @keywords datasets
#' @format A vcfR object
#' @name vep
#' @aliases vep
NULL
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/vep.R
|
#' @title Create window summaries of data
#' @name Windowing
#' @rdname windowing
#'
#'
#' @description
#' Create windows of non-overlapping data and summarize.
#'
#' @param x A NumericMatrix
#' @param pos A vector of chromosomal positions for each row of data (variants)
#' @param maxbp Length of chromosome
#' @param winsize Size (in bp) for windows
#' @param depr logical (T/F), this function has been deprecated, set to FALSE to override.
#'
#' @details
#' The numeric matrix should have samples in columns and variant data in rows.
#' The windowing process therefore occurs along columns of data.
#' This matrix could be created with \code{\link{extract.gt}}.
#'
#' The chromosome is expected to contain positions 1 through maxbp.
#' If maxbp is not specified this can be inferred from the last element in pos.
#'
#'
#' @param starts integer vector of starting positions for windows
#' @param ends integer vector of ending positions for windows
#' @param summary string indicating type of summary (mean, median, sum)
#'
# ' @rdname windowing
# @aliases windowing alias NM2winNM
#'
#' @export
#'
NM2winNM <- function(x, pos, maxbp, winsize = 100L, depr = TRUE) {
if( depr ){
myMsg <- "The function NM2winNM was deprecated in vcfR version 1.6.0. If you use this function and would like to advocate for its persistence, please contact the maintainer of vcfR. The maintainer can be contacted at maintainer('vcfR')"
stop(myMsg)
}
.NM2winNM(x, pos, maxbp, winsize)
}
#' @rdname windowing
#' @export
#'
z.score <- function(x){
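  # Compute column-wise z-scores: center each column (window summary) on its
  # mean and scale by its standard deviation, ignoring missing values.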
winave <- apply(x, MARGIN=2, mean, na.rm=TRUE)
winsd <- apply(x, MARGIN=2, stats::sd, na.rm=TRUE)
zsc <- sweep(x, MARGIN=2, STATS=winave, FUN="-")
zsc <- sweep(zsc, MARGIN=2, STATS=winsd, FUN="/")
zsc
}
#' @rdname windowing
#'
#'
#' @export
#'
windowize.NM <- function(x, pos, starts, ends, summary="mean", depr = TRUE){
if( depr ){
myMsg <- "The function windowizeNM was deprecated in vcfR version 1.6.0. If you use this function and would like to advocate for its persistence, please contact the maintainer of vcfR. The maintainer can be contacted at maintainer('vcfR')"
stop(myMsg)
}
.windowize_NM(x, pos, starts, ends, summary=summary)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/windowing.R
|
.onAttach <- function(libname, pkgname){
  pkg.version <- utils::packageVersion("vcfR")
startup.txt <- paste("\n",
" ***** *** vcfR *** *****\n",
# " ***** ***** ***** *****\n",
" This is vcfR ", pkg.version, " \n",
" browseVignettes('vcfR') # Documentation\n",
" citation('vcfR') # Citation\n",
# " > To cite: citation('vcfR')\n",
# " > Documentation: browseVignettes('vcfR')\n",
" ***** ***** ***** *****",
"\n",
sep="")
packageStartupMessage(startup.txt)
}
|
/scratch/gouwar.j/cran-all/cranData/vcfR/R/zzz.R
|
## -----------------------------------------------------------------------------
library(vcfR)
data(vcfR_example)
## ----write.vcf, eval=FALSE----------------------------------------------------
# write.vcf(vcf, "test.vcf.gz")
# unlink("test.vcf.gz") # Clean up after our example is done.
## ----genind, eval=TRUE--------------------------------------------------------
my_genind <- vcfR2genind(vcf)
class(my_genind)
my_genind
## ----genclone, eval=TRUE------------------------------------------------------
my_genclone <- poppr::as.genclone(my_genind)
class(my_genclone)
my_genclone
## ----genlight, eval=TRUE------------------------------------------------------
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = "pinfsc50")
vcf <- read.vcfR(vcf_file, verbose = FALSE)
x <- vcfR2genlight(vcf)
x
## ----snpclone-----------------------------------------------------------------
library(poppr)
x <- as.snpclone(x)
x
## ----load vcf dna gff---------------------------------------------------------
# Find the files.
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = "pinfsc50")
dna_file <- system.file("extdata", "pinf_sc50.fasta", package = "pinfsc50")
gff_file <- system.file("extdata", "pinf_sc50.gff", package = "pinfsc50")
# Read in data.
vcf <- read.vcfR(vcf_file, verbose = FALSE)
dna <- ape::read.dna(dna_file, format="fasta")
gff <- read.table(gff_file, sep="\t", quote = "")
## ----vcfR2DNAbin, tidy=TRUE---------------------------------------------------
record <- 130
#my_dnabin1 <- vcfR2DNAbin(vcf, consensus = TRUE, extract.haps = FALSE, gt.split="|", ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
my_dnabin1 <- vcfR2DNAbin(vcf, consensus = TRUE, extract.haps = FALSE, ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
my_dnabin1
## ----image_DNAbin1, fig.align='center', fig.width=7, fig.height=7-------------
par(mar=c(5,8,4,2))
ape::image.DNAbin(my_dnabin1[,ape::seg.sites(my_dnabin1)])
par(mar=c(5,4,4,2))
## ----vcfR2DNAbin_2, tidy=TRUE-------------------------------------------------
#my_dnabin1 <- vcfR2DNAbin(vcf, consensus=FALSE, extract.haps=TRUE, gt.split="|", ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
my_dnabin1 <- vcfR2DNAbin(vcf, consensus=FALSE, extract.haps=TRUE, ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
## ----image_DNAbin_2, fig.align='center', fig.width=7, fig.height=7------------
par(mar=c(5,8,4,2))
ape::image.DNAbin(my_dnabin1[,ape::seg.sites(my_dnabin1)])
par(mar=c(5,4,4,2))
## ----eval=FALSE---------------------------------------------------------------
# write.dna( my_dnabin1, file = 'my_gene.fasta', format = 'fasta' )
# unlink('my_gene.fasta') # Clean up after we're done with the example.
## ----vcfR2loci, eval=FALSE----------------------------------------------------
# system.time( my_loci <- vcfR2loci(vcf) )
# class(my_loci)
|
/scratch/gouwar.j/cran-all/cranData/vcfR/inst/doc/converting_data.R
|
---
title: "Converting vcfR objects to other forms"
author: "Brian J. Knaus"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Converting data}
%\VignetteEngine{knitr::rmarkdown}
---
Once we have finished examining our data in vcfR, we'll want to format it so that other software can utilize it.
A straightforward path is to create a *.vcf.gz format file.
One downside to this path is that it creates an intermediate file.
When working on large datasets this intermediate file may be rather large.
If your path remains in R, it may be preferable to convert your vcfR objects to objects defined by other packages.
Here we explore examples of these paths.
## Data import
We'll use two datasets to illustrate data conversion.
The function vcfR2genind calls adegenet::df2genind, a function which predates high throughput sequencing.
This path currently doesn't scale well to large datasets.
So we'll begin with the vcfR example dataset.
This dataset consists of 19 samples with 2,533 variants.
Later we'll use the example dataset from the package pinfsc50 which includes the same samples, but with 22,031 variants.
```{r}
library(vcfR)
data(vcfR_example)
```
## Creating *.vcf.gz format files.
The function **write.vcf()** can be used to create *.vcf.gz files (gzipped VCF files) from objects of class vcfR or chromR.
These VCF files can be used for any downstream analysis which uses VCF files as input.
```{r write.vcf, eval=FALSE}
write.vcf(vcf, "test.vcf.gz")
unlink("test.vcf.gz") # Clean up after our example is done.
```
## Creating genind objects
The packages **adegenet** and **poppr** use objects of class **genind**.
We can create genind objects with the function **vcfR2genind()**.
```{r genind, eval=TRUE}
my_genind <- vcfR2genind(vcf)
class(my_genind)
my_genind
```
The warning is because our example dataset has uninteresting locus names (they're all NULL).
Adegenet replaces these names with slightly more interesting, unique names.
The function vcfR2genind calls extract.gt to create a matrix of genotypes.
This matrix is converted into a genind object with the adegenet function df2genind.
Currently, this function does not scale well to large quantities of data.
This appears to be due to a call to the function adegenet::df2genind (a function which predates high throughput sequencing).
## Creating genclone objects
The package **poppr** uses objects of class genclone as well as genind.
Once a genind object has been created, it is fairly straightforward to create a genclone object.
```{r genclone, eval=TRUE}
my_genclone <- poppr::as.genclone(my_genind)
class(my_genclone)
my_genclone
```
## Creating genlight objects
The **genlight** object is used by **adegenet** and **poppr**.
It was designed specifically to handle high-throughput genotype data.
At present it appears to only support two alleles at a locus, but varying levels of ploidy.
Variant callers such as FreeBayes and the GATK's haplotype caller currently support more than two alleles per locus.
To address this incompatibility, vcfR2genlight omits loci that include more than two alleles.
The benefit of the genlight object is that it is much more efficient to use than the genind object, as it was designed with high throughput sequencing in mind.
When verbose is set to TRUE the function vcfR2genlight will throw a warning and report how many loci it has omitted.
When verbose is set to FALSE the loci will be omitted silently.
```{r genlight, eval=TRUE}
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = "pinfsc50")
vcf <- read.vcfR(vcf_file, verbose = FALSE)
x <- vcfR2genlight(vcf)
x
```
## Creating snpclone objects
The **genlight** object is extended by the **snpclone** object for analysis of clonal and partially clonal populations in **poppr**.
The genlight object can be converted to a snpclone object with functions in the poppr package.
```{r snpclone}
library(poppr)
x <- as.snpclone(x)
x
```
Note that we now have a **mlg** slot to hold multilocus genotype indicators.
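If you want to inspect those indicators directly, the slot can be accessed like any other S4 slot (a minimal sketch; the result depends on your data):

```{r mlg, eval=FALSE}
# Peek at the multilocus genotype indicators, one per sample.
x@mlg
```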
## Creating DNAbin objects
The package **ape** handles sequence data using objects of class **DNAbin**.
The VCF file only contains information on variant positions.
Omitting invariant data provides for a more efficient representation of the data than including the invariant sites.
Converting VCF data to sequence data presents a challenge in that these invariant sites may need to be included.
This means that these objects can easily occupy large amounts of memory, and may exceed the physical memory when long sequences with many samples are included.
In order to accommodate these issues, we've taken an approach which attempts to create DNAbin objects from portions of a chromosome, such as a gene.
This means we'll need a little more information than we've needed for other conversions.
First, we'll need to locate and read in our VCF file, a reference sequence and a gff file that has the coordinates for a gene.
```{r load vcf dna gff}
# Find the files.
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = "pinfsc50")
dna_file <- system.file("extdata", "pinf_sc50.fasta", package = "pinfsc50")
gff_file <- system.file("extdata", "pinf_sc50.gff", package = "pinfsc50")
# Read in data.
vcf <- read.vcfR(vcf_file, verbose = FALSE)
dna <- ape::read.dna(dna_file, format="fasta")
gff <- read.table(gff_file, sep="\t", quote = "")
```
We can use information from the annotation file (gff) to extract a gene.
Here we have specifically chosen one which has variants.
We can use IUPAC ambiguity codes to convert heterozygous sites into a one character encoding.
This results in a single sequence per individual.
Alternatively, we can create two haplotypes for each diploid sample, resulting in two sequences per individual.
```{r vcfR2DNAbin, tidy=TRUE}
record <- 130
#my_dnabin1 <- vcfR2DNAbin(vcf, consensus = TRUE, extract.haps = FALSE, gt.split="|", ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
my_dnabin1 <- vcfR2DNAbin(vcf, consensus = TRUE, extract.haps = FALSE, ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
my_dnabin1
```
We can visualize the variable sites using tools from the package 'ape.'
```{r image_DNAbin1, fig.align='center', fig.width=7, fig.height=7}
par(mar=c(5,8,4,2))
ape::image.DNAbin(my_dnabin1[,ape::seg.sites(my_dnabin1)])
par(mar=c(5,4,4,2))
```
Here, the ambiguous sites are visualized as 'other.'
While the DNAbin object can include the ambiguity codes, not all downstream software handle these codes well.
So the user should exercise prudence when using this option.
If we instead create two haplotypes for each diploid sample, it results in a DNAbin object which includes only unambiguous nucleotides (A, C, G and T).
This typically requires the data to be phased (I use [beagle4](https://faculty.washington.edu/browning/beagle/beagle.html)).
In VCF files this is indicated by delimiting the alleles of the genotype with a pipe ('|') for phased data, while unphased data are delimited with a forward slash ('/').
```{r vcfR2DNAbin_2, tidy=TRUE}
#my_dnabin1 <- vcfR2DNAbin(vcf, consensus=FALSE, extract.haps=TRUE, gt.split="|", ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
my_dnabin1 <- vcfR2DNAbin(vcf, consensus=FALSE, extract.haps=TRUE, ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
```
```{r image_DNAbin_2, fig.align='center', fig.width=7, fig.height=7}
par(mar=c(5,8,4,2))
ape::image.DNAbin(my_dnabin1[,ape::seg.sites(my_dnabin1)])
par(mar=c(5,4,4,2))
```
Once we have a DNAbin object, it can be analysed in a number of R packages, such as ape and pegas.
We can also output a fasta file for other softwares to use.
```{r, eval=FALSE}
write.dna( my_dnabin1, file = 'my_gene.fasta', format = 'fasta' )
unlink('my_gene.fasta') # Clean up after we're done with the example.
```
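As one example of a downstream analysis (a minimal sketch; the model choice here is arbitrary), we could compute a pairwise distance matrix with ape:

```{r dist.dna, eval=FALSE}
# Pairwise distances among the haplotypes in the DNAbin object.
my_dist <- ape::dist.dna(my_dnabin1, model = "raw")
```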
Also see:
- Heng Li's [seqtk](https://github.com/lh3/seqtk)
- [GATK's](https://software.broadinstitute.org/gatk/) FastaAlternateReferenceMaker
## Creating loci objects
The package **pegas** uses objects of class **loci**.
We can use the function vcfR2loci to convert our vcfR object to one of class loci.
```{r vcfR2loci, eval=FALSE}
system.time( my_loci <- vcfR2loci(vcf) )
class(my_loci)
```
This takes a noticeable amount of time to execute but is effective.
We can now proceed to downstream analyses.
## Conclusion
The use of vcfR is an intermediary point in an analysis.
Once VCF data are obtained, vcfR provides an interactive way to scrutinize and filter the data.
A number of paths have been provided that take VCF data from exploration and filtering to downstream analyses, either by other software that uses VCF files as input or by several R packages.
|
/scratch/gouwar.j/cran-all/cranData/vcfR/inst/doc/converting_data.Rmd
|
## -----------------------------------------------------------------------------
pkg <- "pinfsc50"
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = pkg)
dna_file <- system.file("extdata", "pinf_sc50.fasta", package = pkg)
gff_file <- system.file("extdata", "pinf_sc50.gff", package = pkg)
## ----read.vcfR----------------------------------------------------------------
library(vcfR)
vcf <- read.vcfR( vcf_file, verbose = FALSE )
## ----read.dna-----------------------------------------------------------------
dna <- ape::read.dna(dna_file, format = "fasta")
## ----gff----------------------------------------------------------------------
gff <- read.table(gff_file, sep="\t", quote="")
## ----create.chromR------------------------------------------------------------
library(vcfR)
chrom <- create.chromR(name='Supercontig', vcf=vcf, seq=dna, ann=gff)
## ----plot chrom, fig.align='center', fig.height=7, fig.width=7----------------
plot(chrom)
## ----masker, fig.align='center', fig.height=7, fig.width=7--------------------
chrom <- masker(chrom, min_QUAL = 1, min_DP = 300, max_DP = 700, min_MQ = 59.9, max_MQ = 60.1)
plot(chrom)
## ----proc.chromR, fig.align='center', fig.height=7, fig.width=7---------------
chrom <- proc.chromR(chrom, verbose=TRUE)
plot(chrom)
## ----chromoqc1, fig.align='center', fig.height=7, fig.width=7-----------------
chromoqc(chrom, dp.alpha=20)
## ----chromoqc2, fig.align='center', fig.height=7, fig.width=7-----------------
chromoqc(chrom, xlim=c(5e+05, 6e+05))
|
/scratch/gouwar.j/cran-all/cranData/vcfR/inst/doc/intro_to_vcfR.R
|
---
title: "Introduction to vcfR"
author: "Brian J. Knaus"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to vcfR}
%\VignetteEngine{knitr::rmarkdown}
---
vcfR is a package intended to help visualize, manipulate and quality filter data in VCF files.
> More documentation for vcfR can be found at the [vcfR documentation](https://knausb.github.io/vcfR_documentation/) website.
## Preliminaries
Input files frequently present challenges to analysis.
A common problem I encounter is that chromosome names are not standardized among VCF, FASTA and GFF files.
This presents work for the analyst.
I suggest reading these files into R, synchronizing the names in R and then proceeding with downstream analyses (a sketch follows below).
The other option I see is to create a set of files where the data is identical to the initial files, but the names have been synchronized.
This latter choice results in the creation of files which are largely redundant, something I feel is unnecessary.
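Here is a minimal sketch of that first path; the chromosome name is hypothetical, and the objects are assumed to have been read in as shown later in this vignette:

```{r synchronize names, eval=FALSE}
unique(vcf@fix[, "CHROM"])       # name(s) used in the VCF
gff[, 1] <- "Supercontig_1.50"   # recode the GFF seqid column to match
```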
Memory use is another important consideration when using vcfR.
A strength of R is that it was intended to read entire datasets into memory.
This allows for visualization, manipulation and analyses to be performed on the entire dataset at once.
The size of genomic datasets, particularly the VCF data, present a challenge in that they may be too large for practical use in R.
This presents us with the challenge of reading enough data into memory so that we can explore a large amount of it, but not so much that we exceed our existing resources.
It has been my experience that R does not perform well when memory use approaches 1 GB of RAM, so simply investing in a workstation with a large amount of memory may not be a solution.
(This may change in the future as R is under continual development.)
My solution to finding this balance is to work on single chromosomes.
Actually, the 'chromosomes' in my projects are typically supercontigs, scaffolds or contigs.
This is one situation where not having complete chromosomes actually can be an advantage.
## Data input
The vcfR package is designed to work with data from [VCF](https://www.internationalgenome.org/node/101) files.
The use of a sequence file ([FASTA format](https://en.wikipedia.org/wiki/FASTA_format)) and an annotation file ([GFF format](https://github.com/The-Sequence-Ontology/Specifications/blob/master/gff3.md)) can provide helpful context, but are not required.
We'll begin our example by locating the data files from the package 'pinfsc50.'
```{r}
pkg <- "pinfsc50"
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = pkg)
dna_file <- system.file("extdata", "pinf_sc50.fasta", package = pkg)
gff_file <- system.file("extdata", "pinf_sc50.gff", package = pkg)
```
Then read in the VCF file with vcfR.
```{r read.vcfR}
library(vcfR)
vcf <- read.vcfR( vcf_file, verbose = FALSE )
```
The function `read.vcfR()` takes the filename you specify and reads it into R where it is stored as a **vcfR** object.
The **vcfR** object is an S4 class object with three slots containing the metadata, the fixed data and the genotype data.
More information on VCF data can be found in the vignette 'vcf data.'
This object provides a known organization for the data so that downstream functions can easily access it.
Genomic reference sequence files are typically in FASTA format files.
These can be read in using the package ape.
```{r read.dna}
dna <- ape::read.dna(dna_file, format = "fasta")
```
Annotation files (we currently support [GFF](https://github.com/The-Sequence-Ontology/Specifications/blob/master/gff3.md)), files which contain coordinates for annotations such as start and end points of genes, are tabular and can be read in with typical R functions.
```{r gff}
gff <- read.table(gff_file, sep="\t", quote="")
```
In my experience, GFF files typically do not surround text with quotes.
This can present a challenge to reading these files into R.
Disabling quotes in the call to `read.table()` typically helps handle this.
Once the data has been read into memory, modifications can be made to chromosome names or any other inconsistencies and one can proceed.
vcfR was designed to work on an individual chromosome, supercontig or contig, depending on the state of your genome.
Reading an entire genome into memory may present a technical challenge when there is a lot of data, for example, when the genome is large or there are many samples.
Attempting to read large datasets into memory may exhaust all available memory and result in an unresponsive computer.
Working on chromosomes appears to be a natural way to decompose this problem.
Once you have read an object into R (e.g., an annotation or sequence file) you may need to subset it to the data for a single chromosome.
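A minimal sketch of this subsetting (not run; it assumes a hypothetical chromosome named 'chr1' that occurs in all three objects):
```{r, eval=FALSE}
# Not run: keep only the data for one hypothetical chromosome.
vcf  <- vcf[vcf@fix[, "CHROM"] == "chr1", ]
dna2 <- dna["chr1", ]
gff2 <- gff[gff[, 1] == "chr1", ]
```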
## Creating chromR objects
Once the data are in memory we can use them to create a **chromR** object with the function `create.chromR()`.
The `create.chromR()` function creates a new chromR object and populates it with the data you provide.
```{r create.chromR}
library(vcfR)
chrom <- create.chromR(name='Supercontig', vcf=vcf, seq=dna, ann=gff)
```
Note that the names of our three data sources are not identical.
This results in a warning.
When we examine the output we see that the name in the VCF file is 'Supercontig_1.50,' while the name in the FASTA is 'Supercontig_1.50 of Phytophthora infestans T30-4.'
We know that these are synonyms so we can ignore the warning and proceed.
The parameter 'name' is a name you can assign to your object.
This information is used when plotting the chromR object.
The vcfR object should be of class vcfR, which was most likely created with the function `read.vcfR()`.
This object is inserted into the chromR object with the function `vcfR2chromR()`.
The parameter 'seq' should be a DNAbin object (see the R package ape) with one sequence in it.
This sequence will be inserted into the chromR object with the function `seq2chromR()`.
If a sequence is not provided, `seq2chromR()` will infer the length of your chromosome from the maximum position in the vcfR object.
The parameter 'ann' should be a [GFF](https://github.com/The-Sequence-Ontology/Specifications/blob/master/gff3.md) and be of class data.frame.
These can typically be read in with base R functions such as `read.table()`.
This table will then be inserted into the chromR object by `ann2chromR()`.
This function will check to see if columns 4 and 5 ("start" and "end") are numeric.
If not, it will attempt to recast them as numeric.
## Processing chromR objects
Once the chromR object has been created, a few processing steps are needed.
First, you may want to get a quick look at some of your data.
This can be done with the plot function.
```{r plot chrom, fig.align='center', fig.height=7, fig.width=7}
plot(chrom)
```
The distribution of read depth (DP) stands out.
Presumably, there is some base ploidy level at which most of each genome is sequenced.
Here we see a peak, which may represent that base ploidy, but we also see a long tail which may represent copy number variants.
Because genotypers typically expect a constant level of ploidy, variant calls in copy number variants may be suspect.
We can see that mapping qualities (MQ) are all rather peaked around a value of 60.
Because of this, if we would like to filter on this parameter we now know that we would have to employ a narrow threshold.
Interpretation of the qualities (QUAL) appears less straightforward and may be clinal from a value of zero.
You may conclude that this is not an ideal parameter to filter on.
No SNP densities are shown at this point because these data result from the windowing analyses performed by `proc.chromR()` (see below).
Filtering on other parameters may reveal a more straightforward path.
Note that VCF data created by different variant calling software may or may not have these fields or their ranges may be different.
For example, here the mapping quality is peaked at 60.
Software from other sources may create files where mapping quality peaks at 20 or some other value.
This is why it is important to visualize the distribution of your data so you understand its properties.
We can use the `masker()` function to try to filter out data that we do not have high confidence in.
The `masker()` function uses quality, depth and mapping quality to try to select high quality variants.
See `?masker` for default values.
When using masker, variants deemed to be of low quality are not deleted from the dataset.
Instead, a logical vector is created to indicate which variants have or have not been filtered.
This maintains the geometry of the data matrices throughout the analysis and allows the user to easily undo any changes.
```{r masker, fig.align='center', fig.height=7, fig.width=7}
chrom <- masker(chrom, min_QUAL = 1, min_DP = 300, max_DP = 700, min_MQ = 59.9, max_MQ = 60.1)
plot(chrom)
```
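If you want to see how many variants survived masking, the logical vector can be inspected directly; a small sketch (not run), assuming the mask is stored in the `var.info` slot of the chromR object:
```{r, eval=FALSE}
# Not run: count unmasked (TRUE) and masked (FALSE) variants.
table(chrom@var.info$mask)
```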
Once we're satisfied with which variants we want to consider to be of high quality we can process the chromR object with `proc.chromR()`.
This function calls several helper functions to process the variant, sequence and annotation data for visualization.
The function `regex.win()` defines rectangles for where called sequence (A, C, G and T) occurs, as well as where ambiguous nucleotides (N) occur; these are used later for plotting.
This function also defines rectangles for annotated features, which are also used for plotting.
The function `var.win()` performs windowing analyses on the data.
Currently it summarizes variant count per window as well as G/C content per window.
```{r proc.chromR, fig.align='center', fig.height=7, fig.width=7}
chrom <- proc.chromR(chrom, verbose=TRUE)
plot(chrom)
```
Now that we've processed our chromR object, we have variant counts per window.
We're also ready to move on to more complex plots.
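The per-window summaries themselves can be inspected directly; a brief sketch (not run), assuming they are stored in the `win.info` slot:
```{r, eval=FALSE}
# Not run: examine the windowed summaries created by proc.chromR().
head(chrom@win.info)
```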
## Visualizing data
At this point we've input three types of data (variant, sequence and annotation), inserted them into a chromR object, masked variants we feel were not of high quality and processed some summaries of these data.
We can now move on to visualizing these data.
The function `chromoqc()` uses the R function `layout()` to make composite plots of the data. These plots can include barplots as well as scatterplots which have chromosomal coordinates.
```{r chromoqc1, fig.align='center', fig.height=7, fig.width=7}
chromoqc(chrom, dp.alpha=20)
```
We can also zoom in on a feature of interest by using the `xlim` parameter.
```{r chromoqc2, fig.align='center', fig.height=7, fig.width=7}
chromoqc(chrom, xlim=c(5e+05, 6e+05))
```
## Output of data
One of the goals of the package vcfR is to help investigators understand and explore their data.
Once they've gained an understanding of these data, they will likely want to act upon it.
One way to act upon this understanding is to use their acquired comprehension of the data to filter it down to what they feel is of adequate quality.
### Output to VCF file
Within the framework of the package vcfR, the filtering and output of variants determined to be of adequate quality can be accomplished with the function `write.vcf()`.
This function takes a vcfR object and optionally subsets it using the mask, created in previous steps, and outputs it to a (gzipped) VCF file.
This file should be usable by any VCF-compliant software for downstream analyses.
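A brief sketch of this step (not run; the output filename is hypothetical):
```{r, eval=FALSE}
# Not run: write only the variants that passed masking to a gzipped VCF file.
write.vcf(chrom, file = "good_variants.vcf.gz", mask = TRUE)
```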
### Conversion to other R objects
Conversion of vcfR and chromR to objects supported in other R packages is covered in the vignette 'Converting data.'
|
/scratch/gouwar.j/cran-all/cranData/vcfR/inst/doc/intro_to_vcfR.Rmd
|
---
title: "VCF data"
author: "Brian J. Knaus"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{vcf data}
%\VignetteEngine{knitr::rmarkdown}
---
Most variant calling pipelines result in files containing variant information.
The [variant call format (VCF)](http://samtools.github.io/hts-specs/ "VCF format at hts-specs") is a popular format for this data.
Variant callers typically attempt to aggressively call variants with the perspective that a downstream quality control step will remove low quality variants.
A first step in working with these data is to understand their contents.
## Three sections
A VCF file can be thought of as having three sections: a **meta region**, a **fix region** and a **gt region**.
The meta region is at the top of the file.
The information in the meta region defines the abbreviations used elsewhere in the file.
It may also document software used to create the file as well as parameters used by this software.
Below the meta region, the data are tabular.
The first eight columns of this table contain information about each variant.
These data may describe each variant individually, such as its chromosomal position, or summarize over all samples, such as quality metrics.
These data are fixed, or the same, over all samples.
The fix region is required in a VCF file; subsequent columns are optional, but in my experience they are common.
Beginning at column ten is a column for every sample.
The values in these columns are information for each sample and each variant.
The organization of each cell containing a genotype and associated information is specified in column nine.
The location of these three regions within a file can be represented by the cartoon below.
```{r, fig.cap="Cartoon representation of VCF file organization", echo=FALSE, fig.height=4, fig.width=4, fig.align='center', }
par(mar=c(0.1,0.1,0.1,0.1))
plot(c(0,5), c(0,5), type="n", frame.plot=FALSE, axes=FALSE, xlab="", ylab="")
rect(xleft=0, ybottom=4, xright=3, ytop=5)
rect(xleft=0, ybottom=0, xright=2, ytop=4)
rect(xleft=2, ybottom=0, xright=5, ytop=4)
text(1.5, 4.7, "Meta information", cex=1)
text(1.5, 4.4, "(@meta)", cex=1)
text(1.0, 2.5, "Fixed information", cex=1)
text(1.0, 2.2, "(@fix)", cex=1)
text(3.5, 2.5, "Genotype information", cex=1)
text(3.5, 2.2, "(@gt)", cex=1)
par(mar=c(5,4,4,2))
```
The VCF file specification is flexible.
This means that there are slots for certain types of data, but any particular software which creates a VCF file does not necessarily use them all.
Similarly, authors have the opportunity to include new forms of data, forms which may not have been foreseen by the authors of the VCF specification.
The result is that not all VCF files contain the same information.
For this example, we'll use example data provided with vcfR.
```{r}
library(vcfR)
data(vcfR_example)
vcf
```
The function `library()` loads libraries, in this case the package vcfR.
The function `data()` loads datasets that were included with R and its packages.
Our usage of `data()` loads the objects 'gff', 'dna' and 'vcf' from the 'vcfR_example' dataset.
Here we're only interested in the object 'vcf' which contains example VCF data.
When we call the object name with no function it invokes the 'show' method which prints some summary information to the console.
## The meta region
The meta region contains information about the file, its creation, as well as information to interpret abbreviations used elsewhere in the file.
Each line of the meta region begins with a double pound sign ('##').
The example which comes with vcfR is shown below.
(Only the first seven lines are shown for brevity.)
```{r, echo=TRUE}
strwrap(vcf@meta[1:7])
```
The first line contains the version of the VCF format used in the file.
This line is required.
The second line specifies the software which created the VCF file.
This is not required, so not all VCF files include it.
When they do, the file becomes self-documenting.
Note that the alignment software is not included here because it was used upstream of the VCF file's creation (aligners typically create \*.SAM or \*.BAM format files).
Because the file can only include information about the software that created it, the entire pipeline does not get documented.
Some VCF files may contain a line for every chromosome (or supercontig or contig depending on your genome), so they may become rather long.
Here, the remaining lines contain INFO and FORMAT specifications which define abbreviations used in the fix and gt portions of the file.
The meta region may include long lines that may not be easy to view.
In vcfR we've created a function to help process these data.
```{r}
queryMETA(vcf)
```
When the function `queryMETA()` is called with only a vcfR object as a parameter, it attempts to summarize the meta information.
Not all of the information is returned.
For example, 'contig' elements are not returned.
This is an attempt to summarize information that may be most useful for comprehension of the file's contents.
```{r}
queryMETA(vcf, element = 'DP')
```
When an element parameter is included, only the information about that element is returned.
In this example the element 'DP' is returned.
We see that this acronym is defined as both a 'FORMAT' and 'INFO' acronym.
We can narrow down our query by including more information in the element parameter.
```{r}
queryMETA(vcf, element = 'FORMAT=<ID=DP')
```
Here we've isolated the definition of 'DP' as a 'FORMAT' element.
Note that the function `queryMETA()` includes the parameter `nice` which by default is TRUE and attempts to present the data in a nicely formatted manner.
However, our query is performed on the actual information in the 'meta' region.
It is therefore sometimes appropriate to set `nice = FALSE` so that we can see the raw data.
In the above example the angle bracket ('<') is omitted from the `nice = TRUE` representation but is essential to distinguishing the 'FORMAT' element from the 'INFO' element.
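For example (a small sketch using the same query as above):
```{r, eval=FALSE}
queryMETA(vcf, element = 'FORMAT=<ID=DP', nice = FALSE)
```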
## The fix region
The fix region contains information for each variant which is sometimes summarized over all samples.
The first eight columns of the fixed region are titled CHROM, POS, ID, REF, ALT, QUAL, FILTER and INFO.
This is per variant information which is 'fixed', or the same, over all samples.
The first two columns indicate the location of the variant by chromosome and position within that chromosome.
Here, the ID field has not been used, so it consists of missing data (NA).
The REF and ALT columns indicate the reference and alternate allelic states.
When multiple alternate allelic states are present they are delimited with commas.
The QUAL column attempts to summarize the quality of each variant over all samples.
The FILTER field is not used here but could contain information on whether a variant has passed some form of quality assessment.
```{r, echo=TRUE}
head(getFIX(vcf))
```
The eighth column, titled INFO, is a semicolon delimited list of information.
It can be rather long and cumbersome.
The function `getFIX()` will suppress this column by default.
Each abbreviation in the INFO column should be defined in the meta section.
We can validate this by querying the meta portion, as we did in the 'meta' section above.
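Individual INFO elements can also be pulled out per variant; a brief sketch (not run) using vcfR's `extract.info()`:
```{r, eval=FALSE}
# Not run: extract the per-variant DP values from the INFO column.
dp <- extract.info(vcf, element = "DP", as.numeric = TRUE)
head(dp)
```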
## The gt region
The gt (genotype) region contains information about each variant for each sample.
The values for each variant and each sample are colon delimited.
Multiple types of data for each genotype may be stored in this manner.
The format of the data is specified by the FORMAT column (column nine).
Here we see that we have information for GT, AD, DP, GQ and PL.
The definition of these acronyms can be referenced by querying the meta region, as demonstrated previously.
Not every variant necessarily has the same information (e.g., SNPs and indels may be handled differently), so the rows are best treated independently.
Different variant callers may include different information in this region.
```{r, echo=TRUE}
vcf@gt[1:6, 1:4]
```
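To work with one of these colon-delimited elements across all samples, vcfR provides the function `extract.gt()`; a brief sketch (not run):
```{r, eval=FALSE}
# Not run: per-sample depths as a numeric matrix (variants in rows).
dp <- extract.gt(vcf, element = "DP", as.numeric = TRUE)
dp[1:4, 1:3]
```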
## vcfR
Using the R package vcfR, we can read VCF format files into memory using the function `read.vcfR()`.
Once in memory we can use the `head()` method to summarize the information in the three VCF regions.
```{r}
head(vcf)
```
We now have a summary of our VCF file which we can use to help understand what forms of information are contained within it.
This information can be further explored with plotting functions and used to filter the VCF file for high quality variants.
|
/scratch/gouwar.j/cran-all/cranData/vcfR/inst/doc/vcf_data.Rmd
|
---
title: "vcfR workflow"
author: "Brian J. Knaus"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{vcfR workflow}
%\VignetteEngine{knitr::rmarkdown}
---
## Workflow
Work begins with a variant call format file (VCF).
A reference file (FASTA) and an annotation file are also suggested.
These latter two files are not necessary, but I feel they contribute substantially.
The flowchart below outlines the major steps involved in using vcfR.

The green rectangles contain functions that the user will want to become familiar with.
There are effectively three phases in the workflow: reading in data, processing the objects in memory and plotting the data graphically.
Reading in the data frequently presents a bottleneck.
Unfortunately, this is typically due to the time it takes to read from a drive and therefore may not be anything I can improve on.
Once the data are in memory we can manipulate it.
vcfR uses Rcpp to implement functions in C++ in an effort to improve their performance.
Lastly, we visualize the objects.
Visualization relies on R's base graphics package.
This is also something that I am not likely to be able to improve on if it becomes a bottleneck.
By dividing the analysis into these three phases, I've separated the workflow into things I may be able to improve through my code and things over which I have little control.
|
/scratch/gouwar.j/cran-all/cranData/vcfR/inst/doc/workflow.Rmd
|
---
title: "Converting vcfR objects to other forms"
author: "Brian J. Knaus"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Converting data}
%\VignetteEngine{knitr::rmarkdown}
---
Once we have finished examining our data in vcfR, we'll want to format it so that other software can use it.
A straightforward path is to create a *.vcf.gz format file.
One downside to this path is that it creates an intermediate file.
When working on large datasets this intermediate file may be rather large.
If your path remains in R, it may be preferable to convert your vcfR objects to objects defined by other packages.
Here we explore examples of these paths.
## Data import
We'll use two datasets to illustrate data conversion.
The function `vcfR2genind()` calls `adegenet::df2genind()`, a function which predates high throughput sequencing.
This path currently doesn't scale well to large datasets, so we'll begin with the vcfR example dataset.
This dataset consists of 19 samples with 2,533 variants.
Later we'll use the example dataset from the package pinfsc50, which includes the same samples but with 22,031 variants.
```{r}
library(vcfR)
data(vcfR_example)
```
## Creating *.vcf.gz format files.
The function **write.vcf()** can be used to create *.vcf.gz files (gzipped VCF files) from objects of class vcfR or chromR.
These VCF files can be used for any downstream analysis which uses VCF files as input.
```{r write.vcf, eval=FALSE}
write.vcf(vcf, "test.vcf.gz")
unlink("test.vcf.gz") # Clean up after our example is done.
```
## Creating genind objects
The packages **adegenet** and **poppr** use objects of class **genind**.
We can create genind objects with the function **vcfR2genind()**.
```{r genind, eval=TRUE}
my_genind <- vcfR2genind(vcf)
class(my_genind)
my_genind
```
The warning is because our example dataset has uninteresting locus names (they're all NULL).
Adegenet replaces these names with slightly more interesting, unique names.
The function `vcfR2genind()` calls `extract.gt()` to create a matrix of genotypes.
This matrix is converted into a genind object with the adegenet function `df2genind()`.
Currently, this step does not scale well to large quantities of data.
This appears to be due to the call to `adegenet::df2genind()` (a function produced prior to high throughput sequencing).
## Creating genclone objects
The package **poppr** uses objects of class genclone as well as genind.
Once a genind object has been created, it is fairly straightforward to create a genclone object.
```{r genclone, eval=TRUE}
my_genclone <- poppr::as.genclone(my_genind)
class(my_genclone)
my_genclone
```
## Creating genlight objects
The **genlight** object is used by **adegenet** and **poppr**.
It was designed specifically to handle high-throughput genotype data.
At present it appears to only support two alleles at a locus, but varying levels of ploidy.
Variant callers such as FreeBayes and the GATK's haplotype caller currently support more than two alleles per locus.
To address this incompatibility, `vcfR2genlight()` omits loci that include more than two alleles.
The benefit of the genlight object is that it is much more efficient to use than the genind object, as it was designed with high throughput sequencing in mind.
When verbose is set to TRUE the function vcfR2genlight will throw a warning and report how many loci it has omitted.
When verbose is set to FALSE the loci will be omitted silently.
```{r genlight, eval=TRUE}
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = "pinfsc50")
vcf <- read.vcfR(vcf_file, verbose = FALSE)
x <- vcfR2genlight(vcf)
x
```
## Creating snpclone objects
The **genlight** object is extended by the **snpclone** object for analysis of clonal and partially clonal populations in **poppr**.
The genlight object can be converted to a snpclone object with functions in the poppr package.
```{r snpclone}
library(poppr)
x <- as.snpclone(x)
x
```
Note that we now have a **mlg** slot to hold multilocus genotype indicators.
## Creating DNAbin objects
The package **ape** handles sequence data using objects of class **DNAbin**.
The VCF file only contains information on variant positions.
Omitting invariant data provides for a more efficient representation of the data than including the invariant sites.
Converting VCF data to sequence data presents a challenge in that these invariant sites may need to be included.
This means that these objects can easily occupy large amounts of memory, and may exceed the physical memory when long sequences with many samples are included.
In order to accommodate these issues, we've taken an approach which attempts to create DNAbin objects from portions of a chromosome, such as a gene.
This means we'll need a little more information than we've needed for other conversions.
First, we'll need to locate and read in our VCF file, a reference sequence and a gff file that has the coordinates for a gene.
```{r load vcf dna gff}
# Find the files.
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = "pinfsc50")
dna_file <- system.file("extdata", "pinf_sc50.fasta", package = "pinfsc50")
gff_file <- system.file("extdata", "pinf_sc50.gff", package = "pinfsc50")
# Read in data.
vcf <- read.vcfR(vcf_file, verbose = FALSE)
dna <- ape::read.dna(dna_file, format="fasta")
gff <- read.table(gff_file, sep="\t", quote = "")
```
We can use information from the annotation file (gff) to extract a gene.
Here we have specifically chosen one which has variants.
We can use IUPAC ambiguity codes to convert heterozygous sites into a one character encoding.
This results in a single sequence per individual.
Alternatively, we can create two haplotypes for each diploid sample, resulting in two sequences per individual.
```{r vcfR2DNAbin, tidy=TRUE}
record <- 130
my_dnabin1 <- vcfR2DNAbin(vcf, consensus = TRUE, extract.haps = FALSE, ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
my_dnabin1
```
We can visualize the variable sites using tools from the package 'ape.'
```{r image_DNAbin1, fig.align='center', fig.width=7, fig.height=7}
par(mar=c(5,8,4,2))
ape::image.DNAbin(my_dnabin1[,ape::seg.sites(my_dnabin1)])
par(mar=c(5,4,4,2))
```
Here, the ambiguous sites are visualized as 'other.'
While the DNAbin object can include the ambiguity codes, not all downstream software handles these codes well.
So the user should exercise prudence when using this option.
If we instead create two haplotypes for each diploid sample, it results in a DNAbin object which includes only unambiguous nucleotides (A, C, G and T).
This typically requires the data to be phased (I use [beagle4](https://faculty.washington.edu/browning/beagle/beagle.html)).
In VCF files this is indicated by delimiting the alleles of the genotype with a pipe ('|') for phased data, while unphased data are delimited with a forward slash ('/').
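If you are unsure whether your genotypes are phased, a quick sketch (not run) to check the delimiter:
```{r, eval=FALSE}
# Not run: the proportion of genotypes delimited with '|' (phased).
gt <- extract.gt(vcf, element = "GT")
mean(grepl("|", gt, fixed = TRUE), na.rm = TRUE)
```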
```{r vcfR2DNAbin_2, tidy=TRUE}
my_dnabin1 <- vcfR2DNAbin(vcf, consensus=FALSE, extract.haps=TRUE, ref.seq=dna[,gff[record,4]:gff[record,5]], start.pos=gff[record,4], verbose=FALSE)
```
```{r image_DNAbin_2, fig.align='center', fig.width=7, fig.height=7}
par(mar=c(5,8,4,2))
ape::image.DNAbin(my_dnabin1[,ape::seg.sites(my_dnabin1)])
par(mar=c(5,4,4,2))
```
Once we have a DNAbin object, it can be analysed in a number of R packages, such as ape and pegas.
We can also output a FASTA file for other software to use.
```{r, eval=FALSE}
ape::write.dna( my_dnabin1, file = 'my_gene.fasta', format = 'fasta' )
unlink('my_gene.fasta') # Clean up after we're done with the example.
```
Also see:
- Heng Li's [seqtk](https://github.com/lh3/seqtk)
- [GATK's](https://software.broadinstitute.org/gatk/) FastaAlternateReferenceMaker
## Creating loci objects
The package **pegas** uses objects of class **loci**.
We can use the function `vcfR2loci()` to convert our vcfR object to one of class loci.
```{r vcfR2loci, eval=FALSE}
system.time( my_loci <- vcfR2loci(vcf) )
class(my_loci)
```
This takes a noticeable amount of time to execute but is effective.
We can now proceed to downstream analyses.
## Conclusion
The use of vcfR is an intermediate point in an analysis.
Once VCF data are obtained, vcfR provides an interactive way to scrutinize and filter the data.
A number of paths have been provided that take the results of exploration and filtering on to downstream analyses, either by other software that uses VCF files as input or by several R packages.
|
/scratch/gouwar.j/cran-all/cranData/vcfR/vignettes/converting_data.Rmd
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
#' calculate the number of heterozygous SNPs for each sample
#' @param vcffile path to the VCF file with index
#' @param region region to extract, default "" for all
#' @param pass restrict to variants with FILTER==PASS
#' @param qual restrict to variants with QUAL > qual.
#' @param samples samples to extract, default "-" for all
#' @return A list of heterozygosity counts for each sample along with its id in the vcf header
#' @export
heterozygosity <- function(vcffile, region = "", samples = "-", pass = FALSE, qual = 0) {
.Call(`_vcfppR_heterozygosity`, vcffile, region, samples, pass, qual)
}
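# Example usage (a sketch, not run; it uses the example VCF shipped with vcfppR):
# vcffile <- system.file("extdata", "raw.gt.vcf.gz", package = "vcfppR")
# het <- heterozygosity(vcffile)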
#' @name vcfreader
#' @title API for manipulating the VCF/BCF.
#' @description Type the name of the class to see the details and methods
#' @return A C++ class with the following fields/methods for manipulating the VCF/BCF
#' @field new Constructor given a vcf file \itemize{
#' \item Parameter: vcffile - The path of a vcf file
#' }
#' @field new Constructor given a vcf file and the region \itemize{
#' \item Parameter: vcffile - The path of a vcf file
#' \item Parameter: region - The region to be constrained
#' }
#' @field new Constructor given a vcf file, the region and the samples \itemize{
#' \item Parameter: vcffile - The path of a vcf file
#' \item Parameter: region - The region to be constrained
#' \item Parameter: samples - The samples to be constrained. Comma separated list of samples to include (or exclude with "^" prefix).
#' }
#' @field variant Try to get the next variant record. Returns FALSE if there are no more variants or the end of the file is reached, otherwise TRUE.
#' @field chr Return the CHROM field of current variant
#' @field pos Return the POS field of current variant
#' @field id Return the ID field of current variant
#' @field ref Return the REF field of current variant
#' @field alt Return the ALT field of current variant
#' @field qual Return the QUAL field of current variant
#' @field filter Return the FILTER field of current variant
#' @field info Return the INFO field of current variant
#' @field infoInt Return the tag value of integer type in INFO field of current variant \itemize{ \item Parameter: tag - The tag name to retrieve in INFO}
#' @field infoFloat Return the tag value of float type in INFO field of current variant \itemize{ \item Parameter: tag - The tag name to retrieve in INFO}
#' @field infoStr Return the tag value of string type in INFO field of current variant \itemize{ \item Parameter: tag - The tag name to retrieve in INFO}
#' @field infoIntVec Return the tag value in a vector of integer type in INFO field of current variant \itemize{ \item Parameter: tag - The tag name to retrieve in INFO}
#' @field infoFloatVec Return the tag value in a vector of float type in INFO field of current variant \itemize{ \item Parameter: tag - The tag name to retrieve in INFO}
#' @field genotypes Return the genotype values in a vector of integers \itemize{ \item Parameter: collapse - Boolean value indicating whether to collapse the size of genotypes, e.g., return diploid genotypes.}
#' @field formatInt Return the tag value of integer type for each sample in FORMAT field of current variant \itemize{ \item Parameter: tag - The tag name to retrieve in FORMAT}
#' @field formatFloat Return the tag value of float type for each sample in FORMAT field of current variant \itemize{ \item Parameter: tag - The tag name to retrieve in FORMAT}
#' @field formatStr Return the tag value of string type for each sample in FORMAT field of current variant \itemize{ \item Parameter: tag - The tag name to retrieve in FORMAT}
#' @field isSNP Test if current variant is exclusively a SNP or not
#' @field isIndel Test if current variant is exclusively an INDEL or not
#' @field isSV Test if current variant is exclusively an SV or not
#' @field isMultiAllelics Test if current variant is exclusively multiallelic or not
#' @field isMultiAllelicSNP Test if current variant is exclusively a multiallelic SNP or not
#' @field hasSNP Test if current variant has a SNP or not
#' @field hasINDEL Test if current variant has an INDEL or not
#' @field hasINS Test if current variant has an INS or not
#' @field hasDEL Test if current variant has a DEL or not
#' @field hasMNP Test if current variant has an MNP or not
#' @field hasBND Test if current variant has a BND or not
#' @field hasOTHER Test if current variant has an OTHER or not
#' @field hasOVERLAP Test if current variant has an OVERLAP or not
#' @field nsamples Return the number of samples
#' @field samples Return a vector of samples id
#' @field header Return the raw string of the vcf header
#' @field string Return the raw string of current variant including newline
#' @field line Return the raw string of current variant without newline
#' @field output Init an output object for streaming out the variants to another vcf
#' @field write Stream out the current variant to the output vcf
#' @field close Close the connection to the output vcf
#' @field setCHR Modify the CHR of current variant \itemize{ \item Parameter: s - A string for CHR}
#' @field setID Modify the ID of current variant \itemize{ \item Parameter: s - A string for ID}
#' @field setPOS Modify the POS of current variant \itemize{ \item Parameter: pos - An integer for POS}
#' @field setRefAlt Modify the REF and ALT of current variant \itemize{ \item Parameter: s - A comma-separated string}
#' @field setInfoInt Modify the given tag of INT type in the INFO of current variant
#' \itemize{
#' \item Parameter: tag - A string for the tag name
#' \item Parameter: v - An integer for the tag value}
#' @field setInfoFloat Modify the given tag of FLOAT type in the INFO of current variant
#' \itemize{
#' \item Parameter: tag - A string for the tag name
#' \item Parameter: v - A double for the tag value}
#' @field setInfoStr Modify the given tag of STRING type in the INFO of current variant
#' \itemize{
#' \item Parameter: tag - A string for the tag name
#' \item Parameter: s - A string for the tag value}
#' @field setPhasing Modify the phasing status of each sample
#' \itemize{\item Parameter: v - An integer vector with size of the number of samples. only 1s and 0s are valid.}
#' @field setGenotypes Modify the genotypes of current variant
#' \itemize{\item Parameter: v - An integer vector for genotypes. Use NA or -9 for missing value.}
#' @field setFormatInt Modify the given tag of INT type in the FORMAT of current variant
#' \itemize{
#' \item Parameter: tag - A string for the tag name
#' \item Parameter: v - An integer for the tag value}
#' @field setFormatFloat Modify the given tag of FLOAT type in the FORMAT of current variant
#' \itemize{
#' \item Parameter: tag - A string for the tag name
#' \item Parameter: v - A double for the tag value}
#' @field setFormatStr Modify the given tag of STRING type in the FORMAT of current variant
#' \itemize{
#' \item Parameter: tag - A string for the tag name
#' \item Parameter: s - A string for the tag value}
#' @field rmInfoTag Remove the given tag from the INFO of current variant
#' \itemize{\item Parameter: s - A string for the tag name}
#' @field rmFormatTag Remove the given tag from the FORMAT of current variant
#' \itemize{\item Parameter: s - A string for the tag name}
#' @field setVariant Modify current variant given a vcf line
#' \itemize{\item Parameter: s - A string for one line in the VCF}
#' @field addINFO Add an INFO field to the header of the vcf
#' \itemize{
#' \item Parameter: id - A string for the tag name
#' \item Parameter: number - A string for the number
#' \item Parameter: type - A string for the type
#' \item Parameter: desc - A string for description of what it means}
#' @field addFORMAT Add a FORMAT field to the header of the vcf
#' \itemize{
#' \item Parameter: id - A string for the tag name
#' \item Parameter: number - A string for the number
#' \item Parameter: type - A string for the type
#' \item Parameter: desc - A string for description of what it means}
#' @examples
#' vcffile <- system.file("extdata", "raw.gt.vcf.gz", package="vcfppR")
#' br <- vcfreader$new(vcffile)
#' res <- rep(0L, br$nsamples())
#' while(br$variant()) {
#' if(br$isSNP()) {
#' gt <- br$genotypes(TRUE) == 1
#' gt[is.na(gt)] <- FALSE
#' res <- res + gt
#' }
#' }
NULL
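# The wrappers below call into compiled C++ routines. This file (RcppExports.R)
# is auto-generated by Rcpp::compileAttributes(), so changes belong in the C++ sources.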
summaryVariants <- function(vcffile, region = "", samples = "-", filter_pass = FALSE, qual = 0) {
.Call(`_vcfppR_summaryVariants`, vcffile, region, samples, filter_pass, qual)
}
summarySVs <- function(vcffile, region = "", samples = "-", filter_pass = FALSE, qual = 0) {
.Call(`_vcfppR_summarySVs`, vcffile, region, samples, filter_pass, qual)
}
tableGT <- function(vcffile, region, samples, format, ids, qualval, pass, INFO, snps, indels, multiallelics, multisnps, svs) {
.Call(`_vcfppR_tableGT`, vcffile, region, samples, format, ids, qualval, pass, INFO, snps, indels, multiallelics, multisnps, svs)
}
tableFormat <- function(vcffile, region, samples, format, ids, qualval, pass, INFO, snps, indels, multiallelics, multisnps, svs) {
.Call(`_vcfppR_tableFormat`, vcffile, region, samples, format, ids, qualval, pass, INFO, snps, indels, multiallelics, multisnps, svs)
}
#' @name vcfwriter
#' @title API for writing the VCF/BCF.
#' @description Type the name of the class to see the details and methods
#' @return A C++ class with the following fields/methods for writing the VCF/BCF
#' @field new Constructor given a vcf file \itemize{
#' \item Parameter: vcffile - The path of a vcf file; the path must not start with "~"
#' \item Parameter: version - The version of VCF specification
#' }
#' @field addContig Add a Contig in the header of the vcf
#' \itemize{ \item Parameter: str - A string for the CONTIG name }
#' @field addFILTER Add a FILTER in the header of the vcf
#' \itemize{
#' \item Parameter: id - A string for the FILTER name
#' \item Parameter: desc - A string for description of what it means}
#' @field addINFO Add an INFO field to the header of the vcf
#' \itemize{
#' \item Parameter: id - A string for the tag name
#' \item Parameter: number - A string for the number
#' \item Parameter: type - A string for the type
#' \item Parameter: desc - A string for description of what it means}
#' @field addFORMAT Add a FORMAT field to the header of the vcf
#' \itemize{
#' \item Parameter: id - A string for the tag name
#' \item Parameter: number - A string for the number
#' \item Parameter: type - A string for the type
#' \item Parameter: desc - A string for description of what it means}
#' @field addSample Add a SAMPLE in the header of the vcf
#' \itemize{ \item Parameter: str - A string for a SAMPLE name }
#' @field addLine Add a line in the header of the vcf
#' \itemize{ \item Parameter: str - A string for a line in the header of VCF }
#' @field writeline Write a variant record given a line
#' \itemize{ \item Parameter: line - A string for one variant line of the VCF, without a trailing newline }
#' @field close Close and save the vcf file
#' @examples
#' outvcf <- file.path(paste0(tempfile(), ".vcf.gz"))
#' bw <- vcfwriter$new(outvcf, "VCF4.1")
#' bw$addContig("chr20")
#' bw$addFORMAT("GT", "1", "String", "Genotype");
#' bw$addSample("NA12878")
#' s1 <- "chr20\t2006060\t.\tG\tC\t100\tPASS\t.\tGT\t1|0"
#' bw$writeline(s1)
#' bw$close()
NULL
## file: /scratch/gouwar.j/cran-all/cranData/vcfppR/R/RcppExports.R
concordance_by_freq <- function(truthG, testDS, breaks, af, FUN,
which_snps = NULL,
flip = FALSE,
per_snp = FALSE,
per_ind = FALSE) {
if (!is.null(which_snps)) {
af <- af[which_snps]
truthG <- truthG[which_snps, ]
testDS <- testDS[which_snps, ]
}
truthG <- as.matrix(truthG)
testDS <- as.matrix(testDS)
af <- as.numeric(af)
if (flip) {
w <- af > 0.5
af[w] <- 1 - af[w]
truthG[w, ] <- 2 - truthG[w, ]
testDS[w, ] <- 2 - testDS[w, ]
}
x <- cut(af, breaks = breaks, include.lowest = TRUE)
if (per_ind) {
cors_per_af <- tapply(1:length(x), x, function(w) {
list(
n = length(w),
nA = sum(truthG[w, ], na.rm = TRUE),
concordance = unlist(sapply(1:ncol(truthG), function(ind) {
FUN(truthG[w, ind], testDS[w, ind])
}))
)
})
} else if (ncol(truthG) > 1 && per_snp) {
# for multiple sample, calculate r2 per snp then average them
cors_per_af <- tapply(1:length(x), x, function(w) {
c(
n = length(w),
nA = sum(truthG[w, ], na.rm = TRUE),
concordance = mean(sapply(w, function(ww) {
FUN(truthG[ww, ], testDS[ww, ])
}), na.rm = TRUE)
)
})
} else {
cors_per_af <- tapply(1:length(x), x, function(w) {
c(
n = length(w),
nA = sum(truthG[w, ], na.rm = TRUE),
concordance = FUN(truthG[w,], testDS[w,])
)
})
}
# fill with NA for AF bins without SNPs
cors_per_af <- t(sapply(cors_per_af, function(a) {
if (is.null(a[1])) {
return(c(n = NA, nA = NA, concordance = NA))
}
a
}))
return(cors_per_af)
}
## sugar r2
R2 <- function(a, b) {
cor(as.vector(a), as.vector(b), use = "pairwise.complete")**2
}
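## Example (hypothetical vectors): squared Pearson correlation between true
## genotypes (0/1/2) and imputed dosages
## g <- c(0, 1, 2, 1, 0)
## ds <- c(0.1, 0.9, 1.8, 1.2, 0.0)
## R2(g, ds)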
## follow hap.py
## truth\imputed
## 0 1 2
## 0 ignore FP FP
## 1 FN TP FP/FN
## 2 FN FP/FN TP
## f1 = 2 * TP / (2 * TP + FP + FN)
F1 <- function(a, b) {
o <- table(as.vector(a), as.vector(b), useNA = "always")
## make table square
if(nrow(o)!=ncol(o)){
if(nrow(o) == ncol(o)+1){
o <- o[-nrow(o),]
} else if (nrow(o)+1==ncol(o)){
o <- o[,-ncol(o)]
} else{
warning("ONLY homozygous (0) found in either truth or test data")
return(NA)
}
}
if(all(dim(o)==c(4,4)))
o <- o[1:3,1:3]
if(all(dim(o)!=c(3,3))) {
warning("F1 should be used only for a sample with genotypes of all types, hom ref(0), het(1) and hom alt(2)")
return(NA)
}
TP <- o[2,2] + o[3,3]
FP <- o[1,2] + o[1, 3] + o[2, 3] + o[3,2]
FN <- o[2,1] +o[2,3] + o[3,1] + o[3,2]
res <- 2 * TP / (2 * TP + FP + FN)
return(res)
}
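## Example (hypothetical vectors): both inputs need hom ref (0), het (1) and
## hom alt (2) calls, otherwise F1() warns and returns NA
## a <- c(0, 1, 2, 1, 0, 2)
## b <- c(0, 1, 2, 0, 0, 2)
## F1(a, b)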
## follow GLIMPSE2_concordance
## Non-Reference Concordance = 1 - (e0 + e1 + e2) / (e0 + e1 + e2 + m1 + m2)
## truth\imputed
## 0 1 2
## 0 ignore e0 e0
## 1 e1 m1 e1
## 2 e2 e2 m2
## a <- c(1, 2, 0, 1,1)
## b <- c(1, 1, 0, 0,1)
## NRC(a, b)
##
NRC <- function(a, b) {
o <- table(as.vector(a), as.vector(b), useNA = "always")
## make table square
if(nrow(o)!=ncol(o)){
if(nrow(o) == ncol(o)+1){
o <- o[-nrow(o),]
} else if (nrow(o)+1==ncol(o)){
o <- o[,-ncol(o)]
} else{
warning("ONLY homozygous (0) found in either truth or test data")
return(NA)
}
}
if(all(dim(o)==c(4,4)))
o <- o[1:3,1:3]
if(all(dim(o)!=c(3,3))) {
warning("NRC should be used only for a sample with genotypes of all types, hom ref(0), het(1) and hom alt(2)")
return(NA)
}
mismatches <- sum(c(o[1,2:3], o[2,1], o[2,3], o[3,1:2]))
matches <- sum(c(o[2,2], o[3,3]))
res <- mismatches / (mismatches+matches)
return(1-res)
}
## file: /scratch/gouwar.j/cran-all/cranData/vcfppR/R/common.R
#' @title
#' Compare two VCF/BCF files reporting various statistics
#'
#' @details
#' \code{vcfcomp} implements various statistics to compare two VCF/BCF files,
#' e.g. reporting genotype concordance and correlation stratified by allele frequency.
#'
#' @param test path to the first VCF/BCF file, referred to as the test.
#'
#' @param truth path to the second VCF/BCF file, referred to as the truth, or a saved RDS file.
#'
#' @param formats character vector. the FORMAT tags to extract for the test and truth respectively.
#' default c("DS", "GT") extracts 'DS' of the target and 'GT' of the truth.
#'
#' @param stats the statistics to be calculated. supports the following:
#' "r2": squared Pearson correlation coefficient.
#' "f1": F1-score, a good balance between sensitivity and precision.
#' "nrc": Non-Reference Concordance rate.
#'
#' @param by.sample logical. calculate concordance for each sample, then average by bins.
#'
#' @param by.variant logical. calculate concordance for each variant, then average by bins.
#' if both by.sample and by.variant are TRUE, then average over all samples first.
#' if both by.sample and by.variant are FALSE, then average over all samples and variants.
#'
#' @param flip logical. flip the REF and ALT alleles for sites with allele frequency > 0.5
#'
#' @param names character vector. reset samples' names in the test VCF.
#'
#' @param bins numeric vector. break statistics into allele frequency bins.
#'
#' @param af file path to allele frequency text file or saved RDS file.
#'
#' @param out output prefix for saving objects into RDS file
#'
#' @param ... options passed to \code{vcftable}
#'
#' @return a list of various statistics
#' @author Zilong Li \email{[email protected]}
#'
#' @examples
#' library('vcfppR')
#' test <- system.file("extdata", "imputed.gt.vcf.gz", package="vcfppR")
#' truth <- system.file("extdata", "imputed.gt.vcf.gz", package="vcfppR")
#' samples <- "HG00133,HG00143,HG00262"
#' res <- vcfcomp(test, truth, stats="f1", formats=c('GT','GT'), samples=samples)
#' str(res)
#' @export
vcfcomp <- function(test, truth,
stats = "all",
formats = c("DS", "GT"),
by.sample = FALSE,
by.variant = FALSE,
flip = FALSE,
names = NULL,
bins = NULL,
af = NULL,
out = NULL,
...) {
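  ## default AF bins: 0 plus a roughly logarithmic grid (1e-5 to 0.05) and linear steps from 0.1 to 0.5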
if(is.null(bins)){
bins <- sort(unique(c(
c(0, 0.01 / 1000, 0.02 / 1000, 0.05 / 1000),
c(0, 0.01 / 100, 0.02 / 100, 0.05 / 100),
c(0, 0.01 / 10, 0.02 / 10, 0.05 / 10),
c(0, 0.01 / 1, 0.02 / 1, 0.05 / 1),
seq(0.1, 0.5, length.out = 5)
)))
}
if((stats=="f1" | stats == "nrc") & (formats[1] != "GT") & (stats != "all")) {
message("F1 score or NRC rate use GT format")
formats[1] <- "GT"
}
d1 <- vcftable(test, format = formats[1], setid = TRUE, ...)
if(!is.null(names) & is.vector(names)) d1$samples <- names
d2 <- tryCatch( { suppressWarnings(readRDS(truth)) }, error = function(e) {
vcftable(truth, format = formats[2], setid = TRUE, ...)
} )
sites <- intersect(d1$id, d2$id)
## chr pos ref alt af
if(!is.null(af)){
af <- tryCatch( { suppressWarnings(readRDS(af)) }, error = function(e) {
af <- read.table(af, header = TRUE)
af$id <- paste0(af[,"chr"], "_", af[,"pos"], "_", af[,"ref"], "_", af[,"alt"])
subset(af, select = c(id, af))
} )
sites <- intersect(af[,"id"], sites) ## use intersect sites only
}
  ## save some useful objects for reuse in later runs
  if(!is.null(out)){
    saveRDS(af, file.path(paste0(out, ".af.rds")))
    saveRDS(d2, file.path(paste0(out, ".truth.rds")))  ## save the parsed truth data, not the file path
}
ord <- match(d1$samples, d2$samples)
ds <- d1[[10]]
ds <- ds[match(sites, d1$id), ]
gt <- d2[[10]]
gt <- gt[match(sites, d2$id), ord]
rownames(gt) <- sites
rownames(ds) <- sites
if(is.null(af)){
af <- rowMeans(gt, na.rm = TRUE) / 2
} else {
af <- af[match(sites, af[,"id"]), "af"]
}
names(af) <- sites
if(stats == "all") {
## F2
res.r2 <- concordance_by_freq(gt, ds, bins, af, R2, which_snps = sites,
flip = flip, per_ind = FALSE, per_snp = by.variant)
if(stats == "r2")
return(list(samples = d1$samples, r2=res.r2))
## F1 and NRC
d1 <- vcftable(test, format = "GT", setid = TRUE, ...)
ds <- d1[[10]]
ds <- ds[match(sites, d1$id), ]
rownames(ds) <- sites
res.f1 <- concordance_by_freq(gt, ds, bins, af, F1, which_snps = sites,
flip = flip, per_ind = by.sample, per_snp = by.variant)
res.nrc <- concordance_by_freq(gt, ds, bins, af, NRC, which_snps = sites,
flip = flip, per_ind = by.sample, per_snp = by.variant)
return(list(samples = d1$samples, r2=res.r2, f1=res.f1, nrc=res.nrc))
} else {
res <- switch(stats,
r2 = concordance_by_freq(gt, ds, bins, af, R2, which_snps = sites,
flip = flip, per_ind = by.sample, per_snp = by.variant),
f1 = concordance_by_freq(gt, ds, bins, af, F1, which_snps = sites,
flip = flip, per_ind = by.sample, per_snp = by.variant),
nrc = concordance_by_freq(gt, ds, bins, af, NRC, which_snps = sites,
flip = flip, per_ind = by.sample, per_snp = by.variant))
return(list(samples = d1$samples, stats = res))
}
}
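## A minimal sketch of reusing the saved RDS objects, assuming a previous run
## was called with out = "cmp" so that "cmp.truth.rds" and "cmp.af.rds" exist:
## res <- vcfcomp(test, truth = "cmp.truth.rds", af = "cmp.af.rds", stats = "r2")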
## file: /scratch/gouwar.j/cran-all/cranData/vcfppR/R/vcf-compare.R
#' @title
#' count the heterozygous sites per sample in the VCF/BCF
#'
#' @param vcffile path to the VCF/BCF file
#'
#' @param region region to subset like bcftools
#'
#' @param samples samples to subset like bcftools
#'
#' @param pass restrict to variants with FILTER==PASS
#'
#' @param qual restrict to variants with QUAL > qual.
#'
#' @param fun which popgen function to run. available functions are
#' "heterozygosity".
#'
#' @return \code{vcfpopgen} a list containing the following components:
#'\describe{
#'\item{samples}{: character vector; \cr
#' the samples ids in the VCF file after subsetting
#'}
#'
#'\item{hets}{: integer vector; \cr
#' the counts of heterozygous sites of each sample in the same order as \code{samples}
#'}
#'
#'}
#' @author Zilong Li \email{[email protected]}
#'
#' @examples
#' library('vcfppR')
#' vcffile <- system.file("extdata", "raw.gt.vcf.gz", package="vcfppR")
#' res <- vcfpopgen(vcffile)
#' str(res)
#' @export
vcfpopgen <- function(vcffile,
region = "",
samples = "-",
pass = FALSE,
qual = 0,
fun = "heterozygosity") {
return(heterozygosity(vcffile, region, samples, pass, qual))
}
## file: /scratch/gouwar.j/cran-all/cranData/vcfppR/R/vcf-popgen.R
#' @title
#' summarize the various variant types at both variant level and sample level.
#'
#' @details
#' bcftools view -s "id01,id02" input.bcf.gz chr1:100000-200000
#'
#' @param vcffile path to the VCF/BCF file
#'
#' @param region region to subset like bcftools
#'
#' @param samples samples to subset like bcftools
#'
#' @param pass restrict to variants with FILTER==PASS
#'
#' @param qual restrict to variants with QUAL > qual.
#'
#' @param svtype summarize the variants with SVTYPE
#'
#' @return \code{vcfsummary} a list containing the following components:
#'\describe{
#'\item{summary}{: named integer vector; \cr
#' summarize the counts of each variant type
#'}
#'
#'\item{samples}{: character vector; \cr
#' the samples ids in the VCF file after subsetting
#'}
#'
#'\item{vartype}{: integer vector; \cr
#' the counts of the variant type at sample level in the same order as \code{samples}
#'}
#'
#'}
#' @author Zilong Li \email{[email protected]}
#'
#' @examples
#' library('vcfppR')
#' svfile <- system.file("extdata", "sv.vcf.gz", package="vcfppR")
#' res <- vcfsummary(svfile, region = "chr21:1-10000000", svtype = TRUE)
#' str(res)
#' @export
vcfsummary <- function(vcffile,
region = "",
samples = "-",
pass = FALSE,
qual = 0,
svtype = FALSE) {
if(svtype) {
return(summarySVs(vcffile, region, samples, pass, qual))
} else {
return(summaryVariants(vcffile, region, samples, pass, qual))
}
}
## file: /scratch/gouwar.j/cran-all/cranData/vcfppR/R/vcf-summary.R
#' @title
#' read VCF/BCF contents into R data structure
#'
#' @description
#' The swiss army knife for reading VCF/BCF into R data types rapidly and easily.
#'
#' @details
#' \code{vcftable} uses the C++ API of vcfpp, which is a wrapper over htslib, to read VCF/BCF files.
#' Thus, it has the full functionality of htslib, such as restricting to specific variant types,
#' samples and regions. For memory efficiency, \code{vcftable} is designed
#' to parse only one tag at a time in the FORMAT column of the VCF. By default, only the matrix of genotypes,
#' i.e. the "GT" tag, is returned by \code{vcftable}, but many other tags are supported via the \code{format} option.
#'
#' @param vcffile path to the VCF/BCF file
#'
#' @param region region to subset in bcftools-like style: "chr1", "chr1:1-10000000"
#'
#' @param samples samples to subset in bcftools-like style.
#' comma separated list of samples to include (or exclude with "^" prefix).
#' e.g. "id01,id02", "^id01,id02".
#'
#' @param vartype restrict to a specific type of variants. supports "snps", "indels", "sv", "multisnps", "multiallelics"
#' @param format the FORMAT tag to extract. default "GT" is extracted.
#'
#' @param ids character vector. restrict to sites with ID in the given vector. default NULL won't filter any sites.
#'
#' @param qual numeric. restrict to variants with QUAL > qual.
#'
#' @param pass logical. restrict to variants with FILTER = "PASS".
#'
#' @param info logical. whether to keep the INFO column in the returned list. set to FALSE to drop it.
#'
#' @param collapse logical. It acts on the FORMAT. If the FORMAT to extract is "GT", the dim of the raw genotype matrix for diploids is (M, 2 * N),
#' where M is the number of markers and N is the number of samples. The default TRUE collapses the genotypes of each sample so that the matrix is (M, N).
#' Set this to FALSE to retain the phasing order, e.g. "1|0" is parsed as c(1, 0) with collapse=FALSE.
#' If the FORMAT to extract is not "GT", then collapse=TRUE will try to turn the list of extracted vectors into a matrix.
#' However, this raises issues when a variant is multiallelic, resulting in more values than the others.
#'
#' @param setid logical. reset ID column as CHR_POS_REF_ALT.
#'
#' @return Return a list containing the following components:
#'\describe{
#'\item{samples}{: character vector; \cr
#' the samples ids in the VCF file after subsetting
#'}
#'
#'\item{chr}{: character vector; \cr
#' the CHR column in the VCF file
#'}
#'
#'\item{pos}{: character vector; \cr
#' the POS column in the VCF file
#'}
#'
#'\item{id}{: character vector; \cr
#' the ID column in the VCF file
#'}
#'
#'\item{ref}{: character vector; \cr
#' the REF column in the VCF file
#'}
#'
#'\item{alt}{: character vector; \cr
#' the ALT column in the VCF file
#'}
#'
#'\item{qual}{: character vector; \cr
#' the QUAL column in the VCF file
#'}
#'
#'\item{filter}{: character vector; \cr
#' the FILTER column in the VCF file
#'}
#'
#'\item{info}{: character vector; \cr
#' the INFO column in the VCF file
#'}
#'
#'\item{format}{: matrix of either integer or numeric values depending on the tag to extract; \cr
#'   the specified tag extracted from the FORMAT column
#'}
#'}
#' @author Zilong Li \email{[email protected]}
#'
#' @examples
#' library('vcfppR')
#' vcffile <- system.file("extdata", "raw.gt.vcf.gz", package="vcfppR")
#' res <- vcftable(vcffile, "chr21:1-5050000", vartype = "snps")
#' str(res)
#' @export
vcftable <- function(vcffile,
region = "",
samples = "-",
vartype = "all",
format = "GT",
ids = NULL,
qual = 0,
pass = FALSE,
info = TRUE,
collapse = TRUE,
setid = FALSE) {
snps <- FALSE
indels <- FALSE
svs <- FALSE
multiallelics <- FALSE
multisnps <- FALSE
if(vartype == "snps") snps <- TRUE
else if(vartype == "indels") indels <- TRUE
else if(vartype == "sv") svs <- TRUE
else if(vartype == "multisnps") multisnps <- TRUE
else if(vartype == "multiallelics") multiallelics <- TRUE
else if(vartype != "all") stop("Invaild variant type!")
if(is.null(ids)) ids <- c("")
res <- NULL
if(format == "GT") {
res <- tableGT(vcffile, region, samples, "GT", ids, qual, pass, info, snps, indels, multiallelics, multisnps, svs)
if(length(res$gt)==0) return(res)
res[[10]] <- do.call("rbind", res[[10]])
n <- ncol(res$gt)
ploidy <- n / length(res$samples)
if(ploidy == 2 && collapse) {
res$gt <- res$gt[, seq(1, n, 2)] + res$gt[, seq(2, n, 2)]
res$gt[res$gt < 0] <- NA
} else {
res$gt[res$gt < 0] <- NA
}
} else {
res <- tableFormat(vcffile, region, samples, format, ids, qual, pass, info, snps, indels, multiallelics, multisnps, svs)
if(is.list(res[[10]]) && collapse) res[[10]] <- do.call("rbind", res[[10]])
}
if(setid) res$id <- paste(res$chr, res$pos, res$ref, res$alt, sep = "_")
return(res)
}
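## A minimal sketch of extracting a non-GT tag, assuming the file carries a
## per-sample "DP" FORMAT field:
## vcffile <- system.file("extdata", "raw.gt.vcf.gz", package = "vcfppR")
## dp <- vcftable(vcffile, format = "DP")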
## file: /scratch/gouwar.j/cran-all/cranData/vcfppR/R/vcf-tables.R
## usethis namespace: start
#' @useDynLib vcfppR, .registration = TRUE
#' @import methods Rcpp stats
#' @importFrom Rcpp loadModule
#' @importFrom stats cor
#' @importFrom utils read.table
## usethis namespace: end
"_PACKAGE"
## file: /scratch/gouwar.j/cran-all/cranData/vcfppR/R/vcfppR-package.R
# Export the "vcfreader" C++ class by explicitly requesting vcfreader be
# exported via roxygen2's export tag.
#' @export vcfreader
#' @export vcfwriter
loadModule("vcfreader", TRUE)
loadModule("vcfwriter", TRUE)
## file: /scratch/gouwar.j/cran-all/cranData/vcfppR/R/zzz.R
# meta.ave.mean2 ==========================================================
#' Confidence interval for an average mean difference from 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average mean difference from two or more 2-group studies. A Satterthwaite
#' adjustment to the degrees of freedom is used to improve the accuracy of the
#' confidence intervals. Equality of variances within or across studies is not
#' assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @examples
#' m1 <- c(7.4, 6.9)
#' m2 <- c(6.3, 5.7)
#' sd1 <- c(1.72, 1.53)
#' sd2 <- c(2.35, 2.04)
#' n1 <- c(40, 60)
#' n2 <- c(40, 60)
#' meta.ave.mean2(.05, m1, m2, sd1, sd2, n1, n2, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL df
#' # Average 1.15 0.2830183 0.5904369 1.709563 139.41053
#' # Study 1 1.10 0.4604590 0.1819748 2.018025 71.46729
#' # Study 2 1.20 0.3292036 0.5475574 1.852443 109.42136
#'
#'
#' @importFrom stats qt
#' @export
meta.ave.mean2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, bystudy = TRUE) {
m <- length(m1)
nt <- sum(n1 + n2)
v1 <- sd1^2
v2 <- sd2^2
var <- v1/n1 + v2/n2
d <- m1 - m2
ave <- sum(d)/m
se <- sqrt(sum(var)/m^2)
u1 <- sum(var)^2
u2 <- sum(v1^2/(n1^3 - n1^2) + v2^2/(n2^3 - n2^2))
df <- u1/u2
t <- qt(1 - alpha/2, df)
ll <- ave - t*se
ul <- ave + t*se
out <- cbind(ave, se, ll, ul, df)
row <- "Average"
if (bystudy) {
se <- sqrt(var)
u1 <- var^2
u2 <- v1^2/(n1^3 - n1^2) + v2^2/(n2^3 - n2^2)
df <- u1/u2
t <- qt(1 - alpha/2, df)
ll <- d - t*se
ul <- d + t*se
row2 <- t(t(paste(rep("Study", m), seq(1, m))))
row <- rbind(row, row2)
out2 <- cbind(d, se, ll, ul, df)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL", "df")
rownames(out) <- row
return(out)
}
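# Satterthwaite df used above: with per-study variance v_i = sd1_i^2/n1_i + sd2_i^2/n2_i,
# df = (sum v_i)^2 / sum(sd1_i^4/(n1_i^2*(n1_i - 1)) + sd2_i^4/(n2_i^2*(n2_i - 1))),
# which is what u1/u2 computes (note n^3 - n^2 = n^2*(n - 1)).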
# meta.ave.stdmean2 ==========================================================
#' Confidence interval for an average standardized mean difference
#' from 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average standardized mean difference from two or more 2-group studies.
#' Unweighted variances, weighted variances, and single group variance are
#' options for the standardizer. Equality of variances within or across studies
#' is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param stdzr
#' * set to 0 for square root unweighted average variance standardizer
#' * set to 1 for group 1 SD standardizer
#' * set to 2 for group 2 SD standardizer
#' * set to 3 for square root weighted average variance standardizer
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @examples
#' m1 <- c(21.9, 23.1, 19.8)
#' m2 <- c(16.1, 17.4, 15.0)
#' sd1 <- c(3.82, 3.95, 3.67)
#' sd2 <- c(3.21, 3.30, 3.02)
#' n1 <- c(40, 30, 24)
#' n2 <- c(40, 28, 25)
#' meta.ave.stdmean2(.05, m1, m2, sd1, sd2, n1, n2, 0, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 1.526146 0.1734341 1.1862217 1.866071
#' # Study 1 1.643894 0.2629049 1.1286100 2.159178
#' # Study 2 1.566132 0.3056278 0.9671126 2.165152
#' # Study 3 1.428252 0.3289179 0.7835848 2.072919
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.stdmean2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, stdzr, bystudy = TRUE) {
df1 <- n1 - 1
df2 <- n2 - 1
m <- length(m1)
z <- qnorm(1 - alpha/2)
nt <- sum(n1 + n2)
v1 <- sd1^2
v2 <- sd2^2
if (stdzr == 0) {
s1 <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s1
du <- (1 - 3/(4*(n1 + n2) - 9))*d
ave <- sum(du)/m
var <- d^2*(v1^2/df1 + v2^2/df2)/(8*s1^4) + (v1/df1 + v2/df2)/s1^2
se <- sqrt(sum(var)/m^2)
} else if (stdzr == 1) {
d <- (m1 - m2)/sd1
du <- (1 - 3/(4*n1 - 5))*d
ave <- sum(du)/m
var <- d^2/(2*df1) + 1/df1 + v2/(df2*v1)
se <- sqrt(sum(var)/m^2)
} else if (stdzr == 2) {
cat ("Standardizer = sd2", fill = TRUE)
d <- (m1 - m2)/sd2
du <- (1 - 3/(4*n2 - 5))*d
ave <- sum(du)/m
var <- d^2/(2*df2) + 1/df2 + v1/(df1*v2)
se <- sqrt(sum(var)/m^2)
} else {
s2 <- sqrt((df1*sd1^2 + df2*sd2^2)/(df1 + df2))
d <- (m1 - m2)/s2
du <- (1 - 3/(4*(n1 + n2) - 9))*d
ave <- sum(du)/m
var <- d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2
se <- sqrt(sum(var)/m^2)
}
ll <- ave - z*se
ul <- ave + z*se
out <- cbind(ave, se, ll, ul)
row <- "Average"
if (bystudy) {
if (stdzr == 0) {
se <- sqrt(d^2*(v1^2/df1 + v2^2/df2)/(8*s1^4) + (v1/df1 + v2/df2)/s1^2)
} else if (stdzr == 1) {
se <- sqrt(d^2/(2*df1) + 1/df1 + v2/(df2*v1))
} else if (stdzr == 2) {
se <- sqrt(d^2/(2*df2) + 1/df2 + v1/(df1*v2))
} else {
se <- sqrt(d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2)
}
ll <- d - z*se
ul <- d + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(d, se, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return(out)
}
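# The per-study estimate above is Cohen's d with the chosen standardizer; the
# factor (1 - 3/(4*(n1 + n2) - 9)) (or its single-group analog) is the usual
# small-sample bias adjustment, applied to each study before averaging.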
# meta.ave.mean.ps ==========================================================
#' Confidence interval for an average mean difference from paired-samples studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average mean difference from two or more paired-samples studies.
#' A Satterthwaite adjustment to the degrees of freedom is used to improve
#' the accuracy of the confidence interval for the average effect size.
#' Equality of variances within or across studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for measurement 1
#' @param m2 vector of estimated means for measurement 2
#' @param sd1 vector of estimated SDs for measurement 1
#' @param sd2 vector of estimated SDs for measurement 2
#' @param cor vector of estimated correlations for paired measurements
#' @param n vector of sample sizes
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @examples
#' m1 <- c(53, 60, 53, 57)
#' m2 <- c(55, 62, 58, 61)
#' sd1 <- c(4.1, 4.2, 4.5, 4.0)
#' sd2 <- c(4.2, 4.7, 4.9, 4.8)
#' cor <- c(.7, .7, .8, .85)
#' n <- c(30, 50, 30, 70)
#' meta.ave.mean.ps(.05, m1, m2, sd1, sd2, cor, n, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL df
#' # Average -3.25 0.2471557 -3.739691 -2.7603091 112.347
#' # Study 1 -2.00 0.5871400 -3.200836 -0.7991639 29.000
#' # Study 2 -2.00 0.4918130 -2.988335 -1.0116648 49.000
#' # Study 3 -5.00 0.5471136 -6.118973 -3.8810270 29.000
#' # Study 4 -4.00 0.3023716 -4.603215 -3.3967852 69.000
#'
#'
#' @importFrom stats qt
#' @export
meta.ave.mean.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, bystudy = TRUE) {
m <- length(m1)
nt <- sum(n)
v1 <- sd1^2
v2 <- sd2^2
v <- (v1 + v2 - 2*cor*sd1*sd2)/n
d <- m1 - m2
ave <- sum(d)/m
se <- sqrt(sum(v)/m^2)
u1 <- sum(v)^2
u2 <- sum(v^2/(n - 1))
df <- u1/u2
t <- qt(1 - alpha/2, df)
ll <- ave - t*se
ul <- ave + t*se
out <- cbind(ave, se, ll, ul, df)
row <- "Average"
if (bystudy) {
se <- sqrt(v)
df <- n - 1
t <- qt(1 - alpha/2, df)
ll <- d - t*se
ul <- d + t*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(d, se, ll, ul, df)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL", "df")
rownames(out) <- row
return(out)
}
# meta.ave.stdmean.ps ==========================================================
#' Confidence interval for an average standardized mean difference from
#' paired-samples studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average standardized mean difference from two or more paired-samples
#' studies. Unweighted variances and single group variance are options
#' for the standardizer. Equality of variances within or across studies is not
#' assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for measurement 1
#' @param m2 vector of estimated means for measurement 2
#' @param sd1 vector of estimated SDs for measurement 1
#' @param sd2 vector of estimated SDs for measurement 2
#' @param cor vector of estimated correlations for paired measurements
#' @param n vector of sample sizes
#' @param stdzr
#' * set to 0 for square root unweighted average variance standardizer
#' * set to 1 for group 1 SD standardizer
#' * set to 2 for group 2 SD standardizer
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @examples
#' m1 <- c(23.9, 24.1)
#' m2 <- c(25.1, 26.9)
#' sd1 <- c(1.76, 1.58)
#' sd2 <- c(2.01, 1.76)
#' cor <- c(.78, .84)
#' n <- c(25, 30)
#' meta.ave.stdmean.ps(.05, m1, m2, sd1, sd2, cor, n, 1, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average -1.1931045 0.1568034 -1.500433 -0.8857755
#' # Study 1 -0.6818182 0.1773785 -1.029474 -0.3341628
#' # Study 2 -1.7721519 0.2586234 -2.279044 -1.2652594
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.stdmean.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, stdzr, bystudy = TRUE) {
df <- n - 1
m <- length(m1)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
v1 <- sd1^2
v2 <- sd2^2
vd <- v1 + v2 - 2*cor*sd1*sd2
if (stdzr == 0) {
s <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s
du <- sqrt((n - 2)/df)*d
ave <- sum(du)/m
var <- d^2*(v1^2 + v2^2 + 2*cor^2*v1*v2)/(8*df*s^4) + vd/(df*s^2)
se <- sqrt(sum(var)/m^2)
}
else if (stdzr == 1) {
d <- (m1 - m2)/sd1
du <- (1 - 3/(4*df - 1))*d
ave <- sum(du)/m
var <- d^2/(2*df) + vd/(df*v1)
se <- sqrt(sum(var)/m^2)
}
else {
d <- (m1 - m2)/sd2
du <- (1 - 3/(4*df - 1))*d
ave <- sum(du)/m
var <- d^2/(2*df) + vd/(df*v2)
se <- sqrt(sum(var)/m^2)
}
ll <- ave - z*se
ul <- ave + z*se
out <- cbind(ave, se, ll, ul)
row <- "Average"
if (bystudy) {
if (stdzr == 0) {
se <- sqrt(d^2*(v1^2 + v2^2 + 2*cor^2*v1*v2)/(8*df*s^4) + vd/(df*s^2))
} else if (stdzr == 1) {
se <- sqrt(d^2/(2*df) + vd/(df*v1))
} else {
se <- sqrt(d^2/(2*df) + vd/(df*v2))
}
ll <- d - z*se
ul <- d + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(d, se, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return(out)
}
# meta.ave.meanratio2 ==========================================================
#' Confidence interval for an average mean ratio from 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' geometric average mean ratio from two or more 2-group studies. A Satterthwaite
#' adjustment to the degrees of freedom is used to improve the accuracy of the
#' confidence intervals. Equality of variances within or across studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - exponentiated estimate
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#' * df - degrees of freedom
#'
#'
#' @references
#' \insertRef{Bonett2020}{vcmeta}
#'
#'
#' @examples
#' m1 <- c(7.4, 6.9)
#' m2 <- c(6.3, 5.7)
#' sd1 <- c(1.7, 1.5)
#' sd2 <- c(2.3, 2.0)
#' n1 <- c(40, 20)
#' n2 <- c(40, 20)
#' meta.ave.meanratio2(.05, m1, m2, sd1, sd2, n1, n2, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL exp(Estimate)
#' # Average 0.1759928 0.05738065 0.061437186 0.2905484 1.192429
#' # Study 1 0.1609304 0.06820167 0.024749712 0.2971110 1.174603
#' # Study 2 0.1910552 0.09229675 0.002986265 0.3791242 1.210526
#' # exp(LL) exp(UL) df
#' # Average 1.063364 1.337161 66.26499
#' # Study 1 1.025059 1.345965 65.69929
#' # Study 2 1.002991 1.461004 31.71341
#'
#'
#' @importFrom stats qt
#' @export
meta.ave.meanratio2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, bystudy = TRUE) {
m <- length(m1)
nt <- sum(n1 + n2)
v <- rep(1, m)*(1/m)
logratio <- log(m1/m2)
var1 <- sd1^2/(n1*m1^2)
var2 <- sd2^2/(n2*m2^2)
est <- t(v)%*%logratio
se <- sqrt(t(v)%*%(diag(var1 + var2))%*%v)
df <- se^4/sum(v^4*var1^2/(n1 - 1) + v^4*var2^2/(n2 - 1))
t <- qt(1 - alpha/2, df)
ll <- est - t*se
ul <- est + t*se
out <- cbind(est, se, ll, ul, exp(est), exp(ll), exp(ul), df)
row <- "Average"
if (bystudy) {
se <- sqrt(var1 + var2)
u1 <- se^4
u2 <- var1^2/(n1 - 1) + var2^2/(n2 - 1)
df <- u1/u2
t <- qt(1 - alpha/2, df)
ll <- logratio - t*se
ul <- logratio + t*se
row2 <- t(t(paste(rep("Study", m), seq(1, m))))
row <- rbind(row, row2)
out2 <- cbind(logratio, se, ll, ul, exp(logratio), exp(ll), exp(ul), df)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL", "exp(Estimate)", "exp(LL)", "exp(UL)", "df")
rownames(out) <- row
return(out)
}
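# The average above is computed on the log-ratio scale, so exp(Estimate) is the
# geometric mean of the per-study mean ratios and exp(LL)/exp(UL) bound it.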
# meta.ave.meanratio.ps ==========================================================
#' Confidence interval for an average mean ratio from paired-samples studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' geometric average mean ratio from two or more paired-samples studies. A
#' Satterthwaite adjustment to the degrees of freedom is used to improve the
#' accuracy of the confidence interval for the average effect size. Equality
#' of variances within or across studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for measurement 1
#' @param m2 vector of estimated means for measurement 2
#' @param sd1 vector of estimated SDs for measurement 1
#' @param sd2 vector of estimated SDs for measurement 2
#' @param cor vector of estimated correlations for paired measurements
#' @param n vector of sample sizes
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - exponentiated estimate
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' m1 <- c(53, 60, 53, 57)
#' m2 <- c(55, 62, 58, 61)
#' sd1 <- c(4.1, 4.2, 4.5, 4.0)
#' sd2 <- c(4.2, 4.7, 4.9, 4.8)
#' cor <- c(.7, .7, .8, .85)
#' n <- c(30, 50, 30, 70)
#' meta.ave.meanratio.ps(.05, m1, m2, sd1, sd2, cor, n, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average -0.05695120 0.004350863 -0.06558008 -0.04832231
#' # Study 1 -0.03704127 0.010871086 -0.05927514 -0.01480740
#' # Study 2 -0.03278982 0.008021952 -0.04891054 -0.01666911
#' # Study 3 -0.09015110 0.009779919 -0.11015328 -0.07014892
#' # Study 4 -0.06782260 0.004970015 -0.07773750 -0.05790769
#' # exp(Estimate) exp(LL) exp(UL) df
#' # Average 0.9446402 0.9365240 0.9528266 103.0256
#' # Study 1 0.9636364 0.9424474 0.9853017 29.0000
#' # Study 2 0.9677419 0.9522663 0.9834691 49.0000
#' # Study 3 0.9137931 0.8956968 0.9322550 29.0000
#' # Study 4 0.9344262 0.9252073 0.9437371 69.0000
#'
#'
#' @importFrom stats qt
#'@export
meta.ave.meanratio.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, bystudy = TRUE) {
m <- length(m1)
nt <- sum(n)
v <- rep(1, m)*(1/m)
logratio <- log(m1/m2)
var <- (sd1^2/m1^2 + sd2^2/m2^2 - 2*cor*sd1*sd2/(m1*m2))/n
est <- t(v)%*%logratio
se <- sqrt(t(v)%*%(diag(var))%*%v)
df <- se^4/sum(v^4*var^2/(n - 1))
t <- qt(1 - alpha/2, df)
ll <- est - t*se
ul <- est + t*se
out <- cbind(est, se, ll, ul, exp(est), exp(ll), exp(ul), df)
row <- "Average"
if (bystudy) {
se <- sqrt(var)
df <- n - 1
t <- qt(1 - alpha/2, df)
ll <- logratio - t*se
ul <- logratio + t*se
row2 <- t(t(paste(rep("Study", m), seq(1, m))))
row <- rbind(row, row2)
out2 <- cbind(logratio, se, ll, ul, exp(logratio), exp(ll), exp(ul), df)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL", "exp(Estimate)", "exp(LL)", "exp(UL)", "df")
rownames(out) <- row
return(out)
}
# meta.ave.cor ==========================================================
#' Confidence interval for an average Pearson or partial correlation
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average Pearson or partial correlation from two or more studies. The
#' sample correlations must be all Pearson correlations or all partial
#' correlations. Use the meta.ave.gen function to meta-analyze any
#' combination of Pearson, partial, or Spearman correlations.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated correlations
#' @param s number of control variables (set to 0 for Pearson)
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(55, 190, 65, 35)
#' cor <- c(.40, .65, .60, .45)
#' meta.ave.cor(.05, n, cor, 0, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.525 0.05113361 0.4176678 0.6178816
#' # Study 1 0.400 0.11430952 0.1506943 0.6014699
#' # Study 2 0.650 0.04200694 0.5594086 0.7252465
#' # Study 3 0.600 0.08000000 0.4171458 0.7361686
#' # Study 4 0.450 0.13677012 0.1373507 0.6811071
#'
#'
#' @references
#' \insertRef{Bonett2008a}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.cor <- function(alpha, n, cor, s, bystudy = TRUE) {
m <- length(n)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
var.cor <- (1 - cor^2)^2/ (n - 3 - s)
ave.cor <- sum(cor)/m
se.ave <- sqrt(sum(var.cor)/m^2)
z.ave <- log((1 + ave.cor)/(1 - ave.cor))/2
ll0 <- z.ave - z*se.ave/(1 - ave.cor^2)
ul0 <- z.ave + z*se.ave/(1 - ave.cor^2)
ll <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
out <- cbind(ave.cor, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
se.cor <- sqrt((1 - cor^2)^2/ (n - 1 - s))
se.z <- sqrt(1/(n - 3 - s))
z.cor <- log((1 + cor)/(1 - cor))/2
ll0 <- z.cor - z*se.z
ul0 <- z.cor + z*se.z
ll <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(cor, se.cor, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
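# The back-transform (exp(2*x) - 1)/(exp(2*x) + 1) used above is tanh(x), the
# inverse of the Fisher z transform z = atanh(r); the CI is computed on the z
# scale and mapped back to the correlation scale.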
# meta.ave.slope ==========================================================
#' Confidence interval for an average slope coefficient
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average slope coefficient in a simple linear regression model from two
#' or more studies. A Satterthwaite adjustment to the degrees of freedom
#' is used to improve the accuracy of the confidence interval.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated correlations
#' @param sdy vector of estimated SDs of y
#' @param sdx vector of estimated SDs of x
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' n <- c(45, 85, 50, 60)
#' cor <- c(.24, .35, .16, .20)
#' sdy <- c(12.2, 14.1, 11.7, 15.9)
#' sdx <- c(1.34, 1.87, 2.02, 2.37)
#' meta.ave.slope(.05, n, cor, sdy, sdx, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL df
#' # Average 1.7731542 0.4755417 0.8335021 2.712806 149.4777
#' # Study 1 2.1850746 1.3084468 -0.4536599 4.823809 43.0000
#' # Study 2 2.6390374 0.7262491 1.1945573 4.083518 83.0000
#' # Study 3 0.9267327 0.8146126 -0.7111558 2.564621 48.0000
#' # Study 4 1.3417722 0.8456799 -0.3510401 3.034584 58.0000
#'
#'
#' @importFrom stats qt
#' @export
meta.ave.slope <- function(alpha, n, cor, sdy, sdx, bystudy = TRUE) {
m <- length(n)
nt <- sum(n)
b <- cor*(sdy/sdx)
var.b <- (sdy^2*(1 - cor^2)^2*(n - 1))/(sdx^2*(n - 1)*(n - 2))
ave.b <- sum(b)/m
se.ave <- sqrt(sum(var.b)/m^2)
u1 <- sum(var.b)^2
u2 <- sum(var.b^2/(n - 1))
df <- u1/u2
t <- qt(1 - alpha/2, df)
ll <- ave.b - t*se.ave
ul <- ave.b + t*se.ave
out <- cbind(ave.b, se.ave, ll, ul, df)
row <- "Average"
if (bystudy) {
se.b <- sqrt(var.b)
df <- n - 2
t <- qt(1 - alpha/2, df)
ll <- b - t*se.b
ul <- b + t*se.b
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(b, se.b, ll, ul, df)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL", "df")
rownames(out) <- row
return (out)
}
# meta.ave.path ==========================================================
#' Confidence interval for an average slope coefficient in a general
#' linear model or a path model.
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average slope coefficient in a general linear model (ANOVA, ANCOVA,
#' multiple regression) or a path model from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param slope vector of slope estimates
#' @param se vector of slope standard errors
#' @param s number of predictors of the response variable
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' n <- c(75, 85, 250, 160)
#' slope <- c(1.57, 1.38, 1.08, 1.25)
#' se <- c(.658, .724, .307, .493)
#' meta.ave.path(.05, n, slope, se, 2, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL df
#' # Average 1.32 0.2844334 0.75994528 1.880055 263.1837
#' # Study 1 1.57 0.6580000 0.25830097 2.881699 72.0000
#' # Study 2 1.38 0.7240000 -0.06026664 2.820267 82.0000
#' # Study 3 1.08 0.3070000 0.47532827 1.684672 247.0000
#' # Study 4 1.25 0.4930000 0.27623174 2.223768 157.0000
#'
#'
#' @importFrom stats qt
#' @export
meta.ave.path <- function(alpha, n, slope, se, s, bystudy = TRUE) {
m <- length(n)
nt <- sum(n)
var.b <- se^2
ave.b <- sum(slope)/m
se.ave <- sqrt(sum(var.b)/m^2)
u1 <- sum(var.b)^2
u2 <- sum(var.b^2/(n - s - 1))
df <- u1/u2
t <- qt(1 - alpha/2, df)
ll <- ave.b - t*se.ave
ul <- ave.b + t*se.ave
out <- cbind(ave.b, se.ave, ll, ul, df)
row <- "Average"
if (bystudy) {
df <- n - s - 1
t <- qt(1 - alpha/2, df)
ll <- slope - t*se
ul <- slope + t*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(slope, se, ll, ul, df)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL", "df")
rownames(out) <- row
return (out)
}
# meta.ave.spear ==========================================================
#' Confidence interval for an average Spearman correlation
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average Spearman correlation from two or more studies. The Spearman
#' correlation is preferred to the Pearson correlation if the relation
#' between the two quantitative variables is monotonic rather than linear
#' or if the bivariate normality assumption is not plausible.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated Spearman correlations
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(150, 200, 300, 200, 350)
#' cor <- c(.14, .29, .16, .21, .23)
#' meta.ave.spear(.05, n, cor, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.206 0.02944265 0.14763960 0.2629309
#' # Study 1 0.140 0.08031750 -0.02151639 0.2943944
#' # Study 2 0.290 0.06492643 0.15476515 0.4145671
#' # Study 3 0.160 0.05635101 0.04689807 0.2690514
#' # Study 4 0.210 0.06776195 0.07187439 0.3402225
#' # Study 5 0.230 0.05069710 0.12690280 0.3281809
#'
#'
#' @references
#' \insertRef{Bonett2008a}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.spear <- function(alpha, n, cor, bystudy = TRUE) {
m <- length(n)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
var.cor <- (1 + cor^2/2)*(1 - cor^2)^2/(n - 3)
ave.cor <- sum(cor)/m
se.ave <- sqrt(sum(var.cor)/m^2)
z.ave <- log((1 + ave.cor)/(1 - ave.cor))/2
ll0 <- z.ave - z*se.ave/(1 - ave.cor^2)
ul0 <- z.ave + z*se.ave/(1 - ave.cor^2)
ll <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
out <- cbind(ave.cor, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
se.cor <- sqrt((1 + cor^2/2)*(1 - cor^2)^2/(n - 1))
se.z <- sqrt((1 + cor^2/2)/(n - 3))
z.cor <- log((1 + cor)/(1 - cor))/2
ll0 <- z.cor - z*se.z
ul0 <- z.cor + z*se.z
ll <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(cor, se.cor, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.ave.pbcor ==========================================================
#' Confidence interval for an average point-biserial correlation
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average point-biserial correlation from two or more studies. Two types
#' of point-biserial correlations can be meta-analyzed. One type uses
#' an unweighted variance and is appropriate in 2-group experimental
#' designs. The other type uses a weighted variance and is appropriate in
#' 2-group nonexperimental designs with simple random sampling (but not
#' stratified random sample) within each study. This function requires
#' all point-biserial correlations to be of the same type. Use the
#' meta.ave.gen function to meta-analyze any combination of biserial
#' correlation types.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param type
#' * set to 1 for weighted variance
#' * set to 2 for unweighted variance
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' m1 <- c(21.9, 23.1, 19.8)
#' m2 <- c(16.1, 17.4, 15.0)
#' sd1 <- c(3.82, 3.95, 3.67)
#' sd2 <- c(3.21, 3.30, 3.02)
#' n1 <- c(40, 30, 24)
#' n2 <- c(40, 28, 25)
#' meta.ave.pbcor(.05, m1, m2, sd1, sd2, n1, n2, 2, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.6159094 0.04363432 0.5230976 0.6942842
#' # Study 1 0.6349786 0.06316796 0.4842098 0.7370220
#' # Study 2 0.6160553 0.07776700 0.4255342 0.7380898
#' # Study 3 0.5966942 0.08424778 0.3903883 0.7283966
#'
#'
#' @references
#' \insertRef{Bonett2020b}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.pbcor <- function(alpha, m1, m2, sd1, sd2, n1, n2, type, bystudy = TRUE) {
m <- length(m1)
z <- qnorm(1 - alpha/2)
n <- n1 + n2
nt <- sum(n)
df1 <- n1 - 1
df2 <- n2 - 1
if (type == 1) {
p <- n1/n
b <- (n - 2)/(n*p*(1 - p))
s <- sqrt((df1*sd1^2 + df2*sd2^2)/(df1 + df2))
d <- (m1 - m2)/s
se.d <- sqrt(d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2)
se <- sqrt((b^2*se.d^2)/(d^2 + b)^3)
cor <- d/sqrt(d^2 + b)
}
else {
s <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s
    a1 <- d^2*(sd1^4/df1 + sd2^4/df2)/(8*s^4)
a2 <- sd1^2/(s^2*df1) + sd2^2/(s^2*df2)
se.d <- sqrt(a1 + a2)
se <- sqrt((16*se.d^2)/(d^2 + 4)^3)
cor <- d/sqrt(d^2 + 4)
}
ll.d <- d - z*se.d
ul.d <- d + z*se.d
ave <- sum(cor)/m
var.ave <- sum(se^2)/m^2
se.ave <- sqrt(var.ave)
cor.f <- log((1 + ave)/(1 - ave))/2
ll0 <- cor.f - z*se.ave/(1 - ave^2)
ul0 <- cor.f + z*se.ave/(1 - ave^2)
ll <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
out <- cbind(ave, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
if (type == 1) {
ll <- ll.d/sqrt(ll.d^2 + b)
ul <- ul.d/sqrt(ul.d^2 + b)
}
else {
ll <- ll.d/sqrt(ll.d^2 + 4)
ul <- ul.d/sqrt(ul.d^2 + 4)
}
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(cor, se, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return(out)
}
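# Conversion used above: the point-biserial correlation is obtained from the
# standardized mean difference via r = d/sqrt(d^2 + b), where b = (n - 2)/(n*p*(1 - p))
# for the weighted case (type 1) and b = 4 for the unweighted case (type 2).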
# meta.ave.semipart ==========================================================
#' Confidence interval for an average semipartial correlation
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average semipartial correlation from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated semipartial correlations
#' @param r2 vector of squared multiple correlations for full model
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(128, 97, 210, 217)
#' cor <- c(.35, .41, .44, .39)
#' r2 <- c(.29, .33, .36, .39)
#' meta.ave.semipart(.05, n, cor, r2, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.3975 0.03221240 0.3325507 0.4586965
#' # Study 1 0.3500 0.07175200 0.2023485 0.4820930
#' # Study 2 0.4100 0.07886080 0.2447442 0.5521076
#' # Study 3 0.4400 0.05146694 0.3338366 0.5351410
#' # Study 4 0.3900 0.05085271 0.2860431 0.4848830
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.semipart <- function(alpha, n, cor, r2, bystudy = TRUE) {
m <- length(n)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
r0 <- r2 - cor^2
var.cor <- (r2^2 - 2*r2 + r0 - r0^2 + 1)/(n - 3)
ave.cor <- sum(cor)/m
se.ave <- sqrt(sum(var.cor)/m^2)
z.ave <- log((1 + ave.cor)/(1 - ave.cor))/2
ll0 <- z.ave - z*se.ave/(1 - ave.cor^2)
ul0 <- z.ave + z*se.ave/(1 - ave.cor^2)
ll <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
out <- cbind(ave.cor, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
    se.cor <- sqrt(var.cor)
se.z <- se.cor/(1 - cor^2)
z.cor <- log((1 + cor)/(1 - cor))/2
ll0 <- z.cor - z*se.z
ul0 <- z.cor + z*se.z
ll <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(cor, se.cor, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.ave.cronbach ==========================================================
#' Confidence interval for an average Cronbach alpha reliability
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average Cronbach reliability coefficient from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param rel vector of sample reliabilities
#' @param r number of measurements (e.g., items) used to compute each reliability
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(583, 470, 546, 680)
#' rel <- c(.91, .89, .90, .89)
#' meta.ave.cronbach(.05, n, rel, 10, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.8975 0.003256081 0.8911102 0.9038592
#' # Study 1 0.9100 0.005566064 0.8985763 0.9204108
#' # Study 2 0.8900 0.007579900 0.8743616 0.9041013
#' # Study 3 0.9000 0.006391375 0.8868623 0.9119356
#' # Study 4 0.8900 0.006297549 0.8771189 0.9018203
#'
#'
#' @references
#' \insertRef{Bonett2010}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.cronbach <- function(alpha, n, rel, r, bystudy = TRUE) {
m <- length(n)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
hn <- m/sum(1/n)
a <- ((r - 2)*(m - 1))^.25
var.rel <- 2*r*(1 - rel)^2/((r - 1)*(n - 2 - a))
ave.rel <- sum(rel)/m
se.ave <- sqrt(sum(var.rel)/m^2)
log.ave <- log(1 - ave.rel) - log(hn/(hn - 1))
ul <- 1 - exp(log.ave - z*se.ave/(1 - ave.rel))
ll <- 1 - exp(log.ave + z*se.ave/(1 - ave.rel))
out <- cbind(ave.rel, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
se.rel <- sqrt(2*r*(1 - rel)^2/((r - 1)*(n - 2)))
log.rel <- log(1 - rel) - log(n/(n - 1))
ul <- 1 - exp(log.rel - z*se.rel/(1 - rel))
ll <- 1 - exp(log.rel + z*se.rel/(1 - rel))
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(rel, se.rel, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
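# Editor's sketch (not part of the package): the Cronbach interval above is
# computed on the log(1 - reliability) scale and mapped back with 1 - exp(.),
# which keeps the limits below 1. Replicating the per-study branch for Study 1
# of the example (n = 583, rel = .91, r = 10):
if (FALSE) {
  n <- 583; rel <- .91; r <- 10; z <- qnorm(.975)
  se <- sqrt(2*r*(1 - rel)^2/((r - 1)*(n - 2)))
  log.rel <- log(1 - rel) - log(n/(n - 1))
  c(LL = 1 - exp(log.rel + z*se/(1 - rel)),
    UL = 1 - exp(log.rel - z*se/(1 - rel)))   # 0.8986 and 0.9204, as documented
}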
# meta.ave.odds ==========================================================
#' Confidence interval for average odds ratio from 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' geometric average odds ratio from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - the exponentiated estimate
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#'
#'
#' @examples
#' n1 <- c(204, 201, 932, 130, 77)
#' n2 <- c(106, 103, 415, 132, 83)
#' f1 <- c(24, 40, 93, 14, 5)
#' f2 <- c(12, 9, 28, 3, 1)
#' meta.ave.odds(.05, f1, f2, n1, n2, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.86211102 0.2512852 0.36960107 1.3546210
#' # Study 1 0.02581353 0.3700520 -0.69947512 0.7511022
#' # Study 2 0.91410487 0.3830515 0.16333766 1.6648721
#' # Study 3 0.41496672 0.2226089 -0.02133877 0.8512722
#' # Study 4 1.52717529 0.6090858 0.33338907 2.7209615
#' # Study 5 1.42849472 0.9350931 -0.40425414 3.2612436
#' # exp(Estimate) exp(LL) exp(UL)
#' # Average 2.368155 1.4471572 3.875292
#' # Study 1 1.026150 0.4968460 2.119335
#' # Study 2 2.494541 1.1774342 5.284997
#' # Study 3 1.514320 0.9788873 2.342625
#' # Study 4 4.605150 1.3956902 15.194925
#' # Study 5 4.172414 0.6674745 26.081952
#'
#'
#' @references
#' \insertRef{Bonett2015}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.odds <- function(alpha, f1, f2, n1, n2, bystudy = TRUE) {
m <- length(n1)
z <- qnorm(1 - alpha/2)
nt <- sum(n1 + n2)
lor <- log((f1 + .5)*(n2 - f2 + .5)/((f2 + .5)*(n1 - f1 + .5)))
var.lor <- 1/(f1 + .5) + 1/(f2 + .5) + 1/(n1 - f1 + .5) + 1/(n2 - f2 + .5)
ave.lor <- sum(lor)/m
se.ave <- sqrt(sum(var.lor)/m^2)
ll <- ave.lor - z*se.ave
ul <- ave.lor + z*se.ave
out <- cbind(ave.lor, se.ave, ll, ul, exp(ave.lor), exp(ll), exp(ul))
row <- "Average"
if (bystudy) {
se <- sqrt(var.lor)
ll <- lor - z*se
ul <- lor + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(lor, se, ll, ul, exp(lor), exp(ll), exp(ul))
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL", "exp(Estimate)", "exp(LL)", "exp(UL)")
rownames(out) <- row
return (out)
}
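# Editor's sketch (not part of the package): each per-study log odds ratio
# above adds .5 to every cell of the 2x2 table (a standard continuity
# adjustment) before taking logs. Replicating Study 1 of the example
# (f1 = 24 of n1 = 204, f2 = 12 of n2 = 106):
if (FALSE) {
  f1 <- 24; n1 <- 204; f2 <- 12; n2 <- 106
  lor <- log((f1 + .5)*(n2 - f2 + .5)/((f2 + .5)*(n1 - f1 + .5)))
  se <- sqrt(1/(f1 + .5) + 1/(f2 + .5) + 1/(n1 - f1 + .5) + 1/(n2 - f2 + .5))
  c(Estimate = lor, SE = se)   # 0.0258 and 0.3701, as documented
}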
# meta.ave.propratio2 ==========================================================
#' Confidence interval for an average proportion ratio from 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' geometric average proportion ratio from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - exponentiated estimate
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#'
#'
#' @examples
#' n1 <- c(204, 201, 932, 130, 77)
#' n2 <- c(106, 103, 415, 132, 83)
#' f1 <- c(24, 40, 93, 14, 5)
#' f2 <- c(12, 9, 28, 3, 1)
#' meta.ave.propratio2(.05, f1, f2, n1, n2, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.84705608 0.2528742 0.35143178 1.3426804
#' # Study 1 0.03604257 0.3297404 -0.61023681 0.6823220
#' # Study 2 0.81008932 0.3442007 0.13546839 1.4847103
#' # Study 3 0.38746839 0.2065227 -0.01730864 0.7922454
#' # Study 4 1.49316811 0.6023296 0.31262374 2.6737125
#' # Study 5 1.50851199 0.9828420 -0.41782290 3.4348469
#' # exp(Estimate) exp(LL) exp(UL)
#' # Average 2.332769 1.4211008 3.829294
#' # Study 1 1.036700 0.5432222 1.978466
#' # Study 2 2.248109 1.1450730 4.413686
#' # Study 3 1.473246 0.9828403 2.208350
#' # Study 4 4.451175 1.3670071 14.493677
#' # Study 5 4.520000 0.6584788 31.026662
#'
#'
#' @references
#' \insertRef{Price2008}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.propratio2 <- function(alpha, f1, f2, n1, n2, bystudy = TRUE) {
m <- length(n1)
z <- qnorm(1 - alpha/2)
nt <- sum(n1 + n2)
p1 <- (f1 + 1/4)/(n1 + 7/4)
p2 <- (f2 + 1/4)/(n2 + 7/4)
lrr <- log(p1/p2)
v1 <- 1/(f1 + 1/4 + (f1 + 1/4)^2/(n1 - f1 + 3/2))
v2 <- 1/(f2 + 1/4 + (f2 + 1/4)^2/(n2 - f2 + 3/2))
var.lrr <- v1 + v2
ave.lrr <- sum(lrr)/m
se.ave <- sqrt(sum(var.lrr)/m^2)
ll <- ave.lrr - z*se.ave
ul <- ave.lrr + z*se.ave
out <- cbind(ave.lrr, se.ave, ll, ul, exp(ave.lrr), exp(ll), exp(ul))
row <- "Average"
if (bystudy) {
se <- sqrt(var.lrr)
ll <- lrr - z*se
ul <- lrr + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(lrr, se, ll, ul, exp(lrr), exp(ll), exp(ul))
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL", "exp(Estimate)", "exp(LL)", "exp(UL)")
rownames(out) <- row
return (out)
}
# meta.ave.prop2 ==========================================================
#' Confidence interval for an average proportion difference in
#' 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average proportion difference from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n1 <- c(204, 201, 932, 130, 77)
#' n2 <- c(106, 103, 415, 132, 83)
#' f1 <- c(24, 40, 93, 14, 5)
#' f2 <- c(12, 9, 28, 3, 1)
#' meta.ave.prop2(.05, f1, f2, n1, n2, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.0567907589 0.01441216 2.854345e-02 0.08503807
#' # Study 1 0.0009888529 0.03870413 -7.486985e-02 0.07684756
#' # Study 2 0.1067323481 0.04018243 2.797623e-02 0.18548847
#' # Study 3 0.0310980338 0.01587717 -2.064379e-05 0.06221671
#' # Study 4 0.0837856174 0.03129171 2.245499e-02 0.14511624
#' # Study 5 0.0524199553 0.03403926 -1.429577e-02 0.11913568
#'
#'
#' @references
#' \insertRef{Bonett2014}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.prop2 <- function(alpha, f1, f2, n1, n2, bystudy = TRUE) {
m <- length(n1)
z <- qnorm(1 - alpha/2)
nt <- sum(n1 + n2)
p1 <- (f1 + 1/m)/(n1 + 2/m)
p2 <- (f2 + 1/m)/(n2 + 2/m)
rd <- p1 - p2
v1 <- p1*(1 - p1)/(n1 + 2/m)
v2 <- p2*(1 - p2)/(n2 + 2/m)
var.rd <- v1 + v2
ave.rd <- sum(rd)/m
se.ave <- sqrt(sum(var.rd)/m^2)
ll <- ave.rd - z*se.ave
ul <- ave.rd + z*se.ave
out <- cbind(ave.rd, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
p1 <- (f1 + 1)/(n1 + 2)
p2 <- (f2 + 1)/(n2 + 2)
rd <- p1 - p2
v1 <- p1*(1 - p1)/(n1 + 2)
v2 <- p2*(1 - p2)/(n2 + 2)
se <- sqrt(v1 + v2)
ll <- rd - z*se
ul <- rd + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(rd, se, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
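# Editor's sketch (not part of the package): the per-study interval above is
# the Agresti-Caffo adjusted Wald interval (add 1 to each count and 2 to each
# sample size). Replicating Study 1 of the example:
if (FALSE) {
  f1 <- 24; n1 <- 204; f2 <- 12; n2 <- 106; z <- qnorm(.975)
  p1 <- (f1 + 1)/(n1 + 2)
  p2 <- (f2 + 1)/(n2 + 2)
  rd <- p1 - p2
  se <- sqrt(p1*(1 - p1)/(n1 + 2) + p2*(1 - p2)/(n2 + 2))
  c(Estimate = rd, LL = rd - z*se, UL = rd + z*se)   # 0.00099, -0.0749, 0.0768
}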
# meta.ave.prop.ps ==========================================================
#' Confidence interval for an average proportion difference in
#' paired-samples studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average proportion difference from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f11 vector of frequency counts in cell 1,1
#' @param f12 vector of frequency counts in cell 1,2
#' @param f21 vector of frequency counts in cell 2,1
#' @param f22 vector of frequency counts in cell 2,2
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' f11 <- c(17, 28, 19)
#' f12 <- c(43, 56, 49)
#' f21 <- c(3, 5, 5)
#' f22 <- c(37, 54, 39)
#' meta.ave.prop.ps(.05, f11, f12, f21, f22, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.3809573 0.03000016 0.3221581 0.4397565
#' # Study 1 0.3921569 0.05573055 0.2829270 0.5013867
#' # Study 2 0.3517241 0.04629537 0.2609869 0.4424614
#' # Study 3 0.3859649 0.05479300 0.2785726 0.4933572
#'
#'
#' @references
#' \insertRef{Bonett2014}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.prop.ps <- function(alpha, f11, f12, f21, f22, bystudy = TRUE) {
m <- length(f11)
z <- qnorm(1 - alpha/2)
n <- f11 + f12 + f21 + f22
nt <- sum(n)
p12 <- (f12 + 1/m)/(n + 2/m)
p21 <- (f21 + 1/m)/(n + 2/m)
rd <- p12 - p21
var.rd <- (p12 + p21 - rd^2)/(n + 2/m)
ave.rd <- sum(rd)/m
se.ave <- sqrt(sum(var.rd)/m^2)
ll <- ave.rd - z*se.ave
ul <- ave.rd + z*se.ave
out <- cbind(ave.rd, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
p12 <- (f12 + 1)/(n + 2)
p21 <- (f21 + 1)/(n + 2)
rd <- p12 - p21
se <- sqrt((p12 + p21 - rd^2)/(n + 2))
ll <- rd - z*se
ul <- rd + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(rd, se, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.ave.agree ==========================================================
#' Confidence interval for an average G-index agreement coefficient
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average G-index of agreement from two or more studies. This function
#' assumes that two raters each provide a dichotomous rating to a sample
#' of objects. As a measure of agreement, the G-index is usually preferred
#' to Cohen's kappa.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f11 vector of frequency counts in cell 1,1
#' @param f12 vector of frequency counts in cell 1,2
#' @param f21 vector of frequency counts in cell 2,1
#' @param f22 vector of frequency counts in cell 2,2
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' f11 <- c(43, 56, 49)
#' f12 <- c(7, 2, 9)
#' f21 <- c(3, 5, 5)
#' f22 <- c(37, 54, 39)
#' meta.ave.agree(.05, f11, f12, f21, f22, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.7843250 0.03540254 0.7149373 0.8537127
#' # Study 1 0.7446809 0.06883919 0.6097585 0.8796032
#' # Study 2 0.8512397 0.04770701 0.7577356 0.9447437
#' # Study 3 0.6981132 0.06954284 0.5618117 0.8344147
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.agree <- function(alpha, f11, f12, f21, f22, bystudy = TRUE) {
m <- length(f11)
z <- qnorm(1 - alpha/2)
n <- f11 + f12 + f21 + f22
nt <- sum(n)
p0 <- (f11 + f22 + 2/m)/(n + 4/m)
g <- 2*p0 - 1
ave.g <- sum(g)/m
var.g <- 4*p0*(1 - p0)/(n + 4/m)
se.ave <- sqrt(sum(var.g)/m^2)
ll <- ave.g - z*se.ave
ul <- ave.g + z*se.ave
out <- cbind(ave.g, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
p0 <- (f11 + f22 + 2)/(n + 4)
g <- 2*p0 - 1
se <- sqrt(4*p0*(1 - p0)/(n + 4))
ll <- g - z*se
ul <- g + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(g, se, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.ave.var ==========================================================
#' Confidence interval for an average variance
#'
#'
#' @description
#' Computes the estimate and confidence interval for an average variance
#' from two or more studies. The estimated average variance or the
#' upper limit could be used as a variance planning value in sample
#' size planning.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param var vector of sample variances
#' @param n vector of sample sizes
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated variance
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' var <- c(26.63, 22.45, 34.12)
#' n <- c(40, 30, 50)
#' meta.ave.var(.05, var, n, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate LL UL
#' # Average 27.73333 21.45679 35.84589
#' # Study 1 26.63000 17.86939 43.90614
#' # Study 2 22.45000 14.23923 40.57127
#' # Study 3 34.12000 23.80835 52.98319
#'
#'
#' @importFrom stats qnorm
#' @importFrom stats qchisq
#' @export
meta.ave.var <- function(alpha, var, n, bystudy = TRUE) {
m <- length(n)
z <- qnorm(1 - alpha/2)
var.var <- 2*var^2/(n - 1)
ave.var <- sum(var)/m
se.ave <- sqrt(sum(var.var)/m^2)
ln.ave <- log(ave.var)
ll <- exp(ln.ave - z*se.ave/ave.var)
ul <- exp(ln.ave + z*se.ave/ave.var)
out <- cbind(ave.var, ll, ul)
row <- "Average"
if (bystudy) {
chi.U <- qchisq(1 - alpha/2, (n - 1))
ll <- (n - 1)*var/chi.U
chi.L <- qchisq(alpha/2, (n - 1))
ul <- (n - 1)*var/chi.L
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(var, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.ave.gen ==========================================================
#' Confidence interval for an average of any parameter
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' average of any type of parameter from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param est vector of parameter estimates
#' @param se vector of standard errors
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#'
#' est <- c(.022, .751, .421, .287, .052, .146, .562, .904)
#' se <- c(.124, .464, .102, .592, .864, .241, .252, .318)
#' meta.ave.gen(.05, est, se, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.393125 0.1561622 0.08705266 0.6991973
#' # Study 1 0.022000 0.1240000 -0.22103553 0.2650355
#' # Study 2 0.751000 0.4640000 -0.15842329 1.6604233
#' # Study 3 0.421000 0.1020000 0.22108367 0.6209163
#' # Study 4 0.287000 0.5920000 -0.87329868 1.4472987
#' # Study 5 0.052000 0.8640000 -1.64140888 1.7454089
#' # Study 6 0.146000 0.2410000 -0.32635132 0.6183513
#' # Study 7 0.562000 0.2520000 0.06808908 1.0559109
#' # Study 8 0.904000 0.3180000 0.28073145 1.5272685
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.gen <- function(alpha, est, se, bystudy = TRUE) {
m <- length(est)
z <- qnorm(1 - alpha/2)
ave.est <- sum(est)/m
se.ave <- sqrt(sum(se^2)/m^2)
ll <- ave.est - z*se.ave
ul <- ave.est + z*se.ave
out <- cbind(ave.est, se.ave, ll, ul)
row <- "Average"
if (bystudy) {
ll <- est - z*se
ul <- est + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(est, se, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
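# Editor's sketch (not part of the package): for an unweighted average of m
# independent estimates, SE(average) = sqrt(sum(se^2))/m, which is what
# se.ave computes above. Checking against the example output:
if (FALSE) {
  se <- c(.124, .464, .102, .592, .864, .241, .252, .318)
  sqrt(sum(se^2))/length(se)   # 0.1561622, the Average SE in the example
}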
# meta.ave.gen.cc ==========================================================
#' Confidence interval for an average effect size using a constant
#' coefficient model
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' weighted average effect from two or more studies using the constant
#' coefficient (fixed-effect) meta-analysis model.
#'
#'
#' @details
#' The weighted average estimate will be biased regardless of the number of
#' studies or the sample size in each study. The actual confidence interval
#' coverage probability can be much smaller than the specified confidence
#' level when the population effect sizes are not identical across studies.
#'
#' The constant coefficient model should be used with caution, and the varying
#' coefficient methods in this package are the recommended alternatives. The
#' varying coefficient methods do not require effect-size homogeneity across
#' the selected studies. This constant coefficient meta-analysis function is
#' included in the vcmeta package primarily for classroom demonstrations to
#' illustrate the problematic characteristics of the constant coefficient
#' meta-analysis model.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param est vector of parameter estimates
#' @param se vector of standard errors
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' est <- c(.022, .751, .421, .287, .052, .146, .562, .904)
#' se <- c(.124, .464, .102, .592, .864, .241, .252, .318)
#' meta.ave.gen.cc(.05, est, se, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Average 0.3127916 0.06854394 0.17844794 0.4471352
#' # Study 1 0.0220000 0.12400000 -0.22103553 0.2650355
#' # Study 2 0.7510000 0.46400000 -0.15842329 1.6604233
#' # Study 3 0.4210000 0.10200000 0.22108367 0.6209163
#' # Study 4 0.2870000 0.59200000 -0.87329868 1.4472987
#' # Study 5 0.0520000 0.86400000 -1.64140888 1.7454089
#' # Study 6 0.1460000 0.24100000 -0.32635132 0.6183513
#' # Study 7 0.5620000 0.25200000 0.06808908 1.0559109
#' # Study 8 0.9040000 0.31800000 0.28073145 1.5272685
#'
#'
#' @references
#' * \insertRef{Hedges1985}{vcmeta}
#' * \insertRef{Borenstein2009}{vcmeta}
#'
#'
#' @seealso \link[vcmeta]{meta.ave.gen}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.gen.cc <- function(alpha, est, se, bystudy = TRUE) {
m <- length(est)
z <- qnorm(1 - alpha/2)
w <- 1/se^2
weighted.est <- sum(w*est)/sum(w)
se.w <- sqrt(1/sum(w))
ll <- weighted.est - z*se.w
ul <- weighted.est + z*se.w
out <- cbind(weighted.est, se.w, ll, ul)
row <- "Average"
if (bystudy) {
ll <- est - z*se
ul <- est + z*se
row2 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row, row2)
out2 <- cbind(est, se, ll, ul)
out <- rbind(out, out2)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
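# Editor's sketch (not part of the package): a small simulation, with assumed
# heterogeneous true effects and normal sampling of each study estimate,
# illustrating the undercoverage described in the details above. The constant
# coefficient CI is centered on a weighted average that does not target the
# unweighted mean effect (0.4 here), so coverage falls far below .95:
if (FALSE) {
  set.seed(1)
  theta <- c(.1, .3, .5, .7)
  se <- c(.05, .10, .15, .20)
  cover <- replicate(2000, {
    est <- rnorm(4, theta, se)
    ci <- meta.ave.gen.cc(.05, est, se, bystudy = FALSE)
    ci[1, "LL"] <= mean(theta) && mean(theta) <= ci[1, "UL"]
  })
  mean(cover)
}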
# meta.ave.gen.rc ==========================================================
#' Confidence interval for an average effect size using a random coefficient
#' model
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' weighted average effect from multiple studies using the random
#' coefficient (random-effects) meta-analysis model. An estimate of
#' effect-size heterogeneity (tau-squared) is also computed.
#'
#'
#' @details
#' The random coefficient model assumes that the studies in the meta-analysis
#' are a random sample from some definable superpopulation of studies. This
#' assumption is very difficult to justify. The weighted average estimate
#' will be biased regardless of the number of studies or the sample size
#' in each study. The actual confidence interval coverage probability can be
#' much smaller than the specified confidence level if the effect sizes are
#' correlated with the weights (which occurs frequently). The confidence
#' interval for tau-squared assumes that the true effect sizes in the
#' superpopulation of studies have a normal distribution. A large number
#' of studies, each with a large sample size, is required to assess the
#' superpopulation normality assumption and to accurately estimate
#' tau-squared. The confidence interval for the population tau-squared is
#' hypersensitive to very minor and difficult-to-detect violations of the
#' superpopulation normality assumption.
#'
#' The random coefficient model should be used with caution, and the varying
#' coefficient methods in this package are the recommended alternatives. The
#' varying coefficient methods allow the effect sizes to differ across studies
#' but do not require the studies to be a random sample from a definable
#' superpopulation of studies. This random coefficient function is included
#' in the vcmeta package primarily for classroom demonstrations to illustrate
#' the problematic characteristics of the random coefficient meta-analysis
#' model.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param est vector of parameter estimates
#' @param se vector of standard errors
#' @param bystudy logical to also return each study estimate (TRUE) or not
#'
#'
#' @return
#' Returns a matrix. The first row is the average estimate across all studies. If bystudy
#' is TRUE, there is 1 additional row for each study. The matrix has the following columns:
#' * Estimate - estimated effect size
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' est <- c(.022, .751, .421, .287, .052, .146, .562, .904)
#' se <- c(.124, .464, .102, .592, .864, .241, .252, .318)
#' meta.ave.gen.rc(.05, est, se, bystudy = TRUE)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Tau-squared 0.03772628 0.0518109 0.00000000 0.1392738
#' # Average 0.35394806 0.1155239 0.12752528 0.5803708
#' # Study 1 0.02200000 0.1240000 -0.22103553 0.2650355
#' # Study 2 0.75100000 0.4640000 -0.15842329 1.6604233
#' # Study 3 0.42100000 0.1020000 0.22108367 0.6209163
#' # Study 4 0.28700000 0.5920000 -0.87329868 1.4472987
#' # Study 5 0.05200000 0.8640000 -1.64140888 1.7454089
#' # Study 6 0.14600000 0.2410000 -0.32635132 0.6183513
#' # Study 7 0.56200000 0.2520000 0.06808908 1.0559109
#' # Study 8 0.90400000 0.3180000 0.28073145 1.5272685
#'
#' @references
#' * \insertRef{Hedges1985}{vcmeta}
#' * \insertRef{Borenstein2009}{vcmeta}
#'
#'
#' @seealso \link[vcmeta]{meta.ave.gen}
#'
#'
#' @importFrom stats qnorm
#' @importFrom stats qt
#' @export
meta.ave.gen.rc <- function(alpha, est, se, bystudy = TRUE) {
m <- length(est)
z <- qnorm(1 - alpha/2)
w1 <- 1/se^2
sw1 <- sum(w1)
sw2 <- sum(w1*w1)
sw3 <- sum(w1*w1*w1)
C <- sw1 - sw2/sw1
w1.est <- sum(w1*est)/sw1
Q <- sum(w1*(est - w1.est)*(est - w1.est))
v <- Q - m + 1
t2 <- v/C
if (t2 < 0) t2 <- 0
A <- (m - 1 + 2*C*t2 + (sw2 - 2*(sw3/sw1) + sw2^2/sw1^2)*t2^2)
se.t2 <- sqrt(2*A/C^2)
w2 <- 1/(se^2 + t2)
w2.est <- sum(w2*est)/sum(w2)
se.w2 <- sqrt(1/sum(w2))
ll.t2 <- t2 - z*se.t2
ul.t2 <- t2 + z*se.t2
if (ll.t2 < 0) {ll.t2 <- 0}
ll <- w2.est - z*se.w2
ul <- w2.est + z*se.w2
out1 <- cbind(t2, se.t2, ll.t2, ul.t2)
out2 <- cbind(w2.est, se.w2, ll, ul)
out <- rbind(out1, out2)
row1 <- "Tau-squared"
row2 <- "Average"
row <- rbind(row1, row2)
if (bystudy) {
ll <- est - z*se
ul <- est + z*se
row3 <- t(t(paste(rep("Study", m), seq(1,m))))
row <- rbind(row1, row2, row3)
out3 <- cbind(est, se, ll, ul)
out <- rbind(out1, out2, out3)
}
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- row
return (out)
}
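# Editor's sketch (not part of the package): the tau-squared estimate above is
# the method-of-moments (DerSimonian-Laird) estimator, truncated at zero:
# t2 = max(0, (Q - (m - 1))/C) with C = sum(w) - sum(w^2)/sum(w). Checking
# against the example output:
if (FALSE) {
  est <- c(.022, .751, .421, .287, .052, .146, .562, .904)
  se <- c(.124, .464, .102, .592, .864, .241, .252, .318)
  w <- 1/se^2
  Q <- sum(w*(est - sum(w*est)/sum(w))^2)
  C <- sum(w) - sum(w^2)/sum(w)
  max(0, (Q - (length(est) - 1))/C)   # 0.03772628, Tau-squared in the example
}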
# Dummy reference to the mathjaxr import so R CMD check does not flag the
# Rd math-rendering dependency as unused.
use_imports <- function() {
mathjaxr::preview_rd()
}
# ================= Sub-group Comparison of Effect Sizes ============
# meta.sub.cor =====================================================
#' Confidence interval for a difference in average Pearson or partial
#' correlations for two sets of studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' difference in average Pearson or partial correlations for two mutually
#' exclusive sets of studies. Each set can have one or more studies. All
#' of the correlations must be either Pearson correlations or partial
#' correlations.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated Pearson correlations
#' @param s number of control variables (set to 0 for Pearson)
#' @param group vector of group indicators:
#' * 1 for set A
#' * 2 for set B
#' * 0 to ignore
#'
#'
#' @return
#' Returns a matrix with three rows:
#' * Row 1 - estimate for Set A
#' * Row 2 - estimate for Set B
#' * Row 3 - estimate for difference, Set A - Set B
#'
#' The columns are:
#' * Estimate - estimated average correlation or difference
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(55, 190, 65, 35)
#' cor <- c(.40, .65, .60, .45)
#' group <- c(1, 1, 2, 0)
#' meta.sub.cor(.05, n, cor, 0, group)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Set A: 0.525 0.06195298 0.3932082 0.6356531
#' # Set B: 0.600 0.08128008 0.4171458 0.7361686
#' # Set A - Set B: -0.075 0.10219894 -0.2645019 0.1387283
#'
#'
#' @references
#' \insertRef{Bonett2008a}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.sub.cor <- function(alpha, n, cor, s, group) {
m <- length(n)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
var <- (1 - cor^2)^2/(n - 3 - s)
g1 <- (group == rep(1, m))*1
g2 <- (group == rep(2, m))*1
m1 <- sum(g1)
m2 <- sum(g2)
ave.A <- sum(g1*cor)/m1
se.ave.A <- sqrt(sum(g1*var)/m1^2)
z.A <- log((1 + ave.A)/(1 - ave.A))/2
ll0.A <- z.A - z*se.ave.A/(1 - ave.A^2)
ul0.A <- z.A + z*se.ave.A/(1 - ave.A^2)
ll.A <- (exp(2*ll0.A) - 1)/(exp(2*ll0.A) + 1)
ul.A <- (exp(2*ul0.A) - 1)/(exp(2*ul0.A) + 1)
ave.B <- sum(g2*cor)/m2
se.ave.B <- sqrt(sum(g2*var)/m2^2)
z.B <- log((1 + ave.B)/(1 - ave.B))/2
ll0.B <- z.B - z*se.ave.B/(1 - ave.B^2)
ul0.B <- z.B + z*se.ave.B/(1 - ave.B^2)
ll.B <- (exp(2*ll0.B) - 1)/(exp(2*ll0.B) + 1)
ul.B <- (exp(2*ul0.B) - 1)/(exp(2*ul0.B) + 1)
diff <- ave.A - ave.B
se.diff <- sqrt(se.ave.A^2 + se.ave.B^2)
ll.diff <- diff - sqrt((ave.A - ll.A)^2 + (ul.B - ave.B)^2)
ul.diff <- diff + sqrt((ul.A - ave.A)^2 + (ave.B - ll.B)^2)
out1 <- t(c(ave.A, se.ave.A, ll.A, ul.A))
out2 <- t(c(ave.B, se.ave.B, ll.B, ul.B))
out3 <- t(c(diff, se.diff, ll.diff, ul.diff))
out <- rbind(out1, out2, out3)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Set A:", "Set B:", "Set A - Set B:")
return (out)
}
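# Editor's sketch (not part of the package): the difference CI above is not the
# symmetric diff +/- z*SE; it combines the back-transformed limits of the two
# sets (a MOVER-type construction), which preserves the asymmetry of the
# Fisher-scale intervals. Verifying the lower limit from the example:
if (FALSE) {
  out <- meta.sub.cor(.05, n = c(55, 190, 65, 35),
                      cor = c(.40, .65, .60, .45), s = 0, group = c(1, 1, 2, 0))
  d <- out[3, "Estimate"]
  ll <- d - sqrt((out[1, "Estimate"] - out[1, "LL"])^2 +
                 (out[2, "UL"] - out[2, "Estimate"])^2)
  all.equal(ll, out[3, "LL"])   # TRUE
}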
# meta.sub.spear =============================================
#' Confidence interval for a difference in average Spearman
#' correlations for two sets of studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' difference in average Spearman correlations for two mutually
#' exclusive sets of studies. Each set can have one or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated Spearman correlations
#' @param group vector of group indicators:
#' * 1 for set A
#' * 2 for set B
#' * 0 to ignore
#'
#'
#' @return
#' Returns a matrix with three rows:
#' * Row 1 - estimate for Set A
#' * Row 2 - estimate for Set B
#' * Row 3 - estimate for difference, Set A - Set B
#'
#' The columns are:
#' * Estimate - estimated average correlation or difference
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(55, 190, 65, 35)
#' cor <- c(.40, .65, .60, .45)
#' group <- c(1, 1, 2, 0)
#' meta.sub.spear(.05, n, cor, group)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Set A: 0.525 0.06483629 0.3865928 0.6402793
#' # Set B: 0.600 0.08829277 0.3992493 0.7458512
#' # Set A - Set B: -0.075 0.10954158 -0.2760700 0.1564955
#'
#'
#' @references
#' \insertRef{Bonett2008a}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.sub.spear <- function(alpha, n, cor, group) {
m <- length(n)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
var <- (1 + cor^2/2)*(1 - cor^2)^2/(n - 3)
g1 <- (group == rep(1, m))*1
g2 <- (group == rep(2, m))*1
m1 <- sum(g1)
m2 <- sum(g2)
ave.A <- sum(g1*cor)/m1
se.ave.A <- sqrt(sum(g1*var)/m1^2)
z.A <- log((1 + ave.A)/(1 - ave.A))/2
ll0.A <- z.A - z*se.ave.A/(1 - ave.A^2)
ul0.A <- z.A + z*se.ave.A/(1 - ave.A^2)
ll.A <- (exp(2*ll0.A) - 1)/(exp(2*ll0.A) + 1)
ul.A <- (exp(2*ul0.A) - 1)/(exp(2*ul0.A) + 1)
ave.B <- sum(g2*cor)/m2
se.ave.B <- sqrt(sum(g2*var)/m2^2)
z.B <- log((1 + ave.B)/(1 - ave.B))/2
ll0.B <- z.B - z*se.ave.B/(1 - ave.B^2)
ul0.B <- z.B + z*se.ave.B/(1 - ave.B^2)
ll.B <- (exp(2*ll0.B) - 1)/(exp(2*ll0.B) + 1)
ul.B <- (exp(2*ul0.B) - 1)/(exp(2*ul0.B) + 1)
diff <- ave.A - ave.B
se.diff <- sqrt(se.ave.A^2 + se.ave.B^2)
ll.diff <- diff - sqrt((ave.A - ll.A)^2 + (ul.B - ave.B)^2)
ul.diff <- diff + sqrt((ul.A - ave.A)^2 + (ave.B - ll.B)^2)
out1 <- t(c(ave.A, se.ave.A, ll.A, ul.A))
out2 <- t(c(ave.B, se.ave.B, ll.B, ul.B))
out3 <- t(c(diff, se.diff, ll.diff, ul.diff))
out <- rbind(out1, out2, out3)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Set A:", "Set B:", "Set A - Set B:")
return (out)
}
# meta.sub.pbcor ===============================================
#' Confidence interval for a difference in average point-biserial
#' correlations for two sets of studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' difference in average point-biserial correlations for two mutually
#' exclusive sets of studies. Each set can have one or more studies. Two
#' types of point-biserial correlations can be analyzed. One type uses
#' an unweighted variance and is recommended for 2-group experimental
#' designs. The other type uses a weighted variance and is recommended
#' for 2-group nonexperimental designs with simple random sampling (but
#' not stratified random sampling) within each study. Equality of
#' variances within or across studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param type
#' * set to 1 for weighted variance
#' * set to 2 for unweighted variance
#' @param group vector of group indicators:
#' * 1 for set A
#' * 2 for set B
#' * 0 to ignore
#'
#'
#' @return
#' Returns a matrix with three rows:
#' * Row 1 - estimate for Set A
#' * Row 2 - estimate for Set B
#' * Row 3 - estimate for difference, Set A - Set B
#'
#' The columns are:
#' * Estimate - estimated average correlation or difference
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' m1 <- c(45.1, 39.2, 36.3, 34.5)
#' m2 <- c(30.0, 35.1, 35.3, 36.2)
#' sd1 <- c(10.7, 10.5, 9.4, 11.5)
#' sd2 <- c(12.3, 12.0, 10.4, 9.6)
#' n1 <- c(40, 20, 50, 25)
#' n2 <- c(40, 20, 48, 26)
#' group <- c(1, 1, 2, 2)
#' meta.sub.pbcor(.05, m1, m2, sd1, sd2, n1, n2, 2, group)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Set A: 0.36338772 0.08552728 0.1854777 0.5182304
#' # Set B: -0.01480511 0.08741322 -0.1840491 0.1552914
#' # Set A - Set B: 0.37819284 0.12229467 0.1320530 0.6075828
#'
#'
#' @references
#' \insertRef{Bonett2020b}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.sub.pbcor <- function(alpha, m1, m2, sd1, sd2, n1, n2, type, group) {
m <- length(m1)
z <- qnorm(1 - alpha/2)
n <- n1 + n2
nt <- sum(n)
df1 <- n1 - 1
df2 <- n2 - 1
if (type == 1) {
p <- n1/n
b <- (n - 2)/(n*p*(1 - p))
s <- sqrt((df1*sd1^2 + df2*sd2^2)/(df1 + df2))
d <- (m1 - m2)/s
se.d <- sqrt(d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2)
var <- (b^2*se.d^2)/(d^2 + b)^3
cor <- d/sqrt(d^2 + b)
}
else {
s <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s
a1 <- d^2*(sd1^4/df1 + sd2^4/df2)/(8*s^4)  # symmetric in the two groups (cf. meta.lc.stdmean2)
a2 <- sd1^2/(s^2*df1) + sd2^2/(s^2*df2)
se.d <- sqrt(a1 + a2)
var <- (16*se.d^2)/(d^2 + 4)^3
cor <- d/sqrt(d^2 + 4)
}
g1 <- (group == rep(1, m))*1
g2 <- (group == rep(2, m))*1
m.A <- sum(g1)
m.B <- sum(g2)
ave.A <- sum(g1*cor)/m.A
se.ave.A <- sqrt(sum(g1*var)/m.A^2)
z.A <- log((1 + ave.A)/(1 - ave.A))/2
ll0.A <- z.A - z*se.ave.A/(1 - ave.A^2)
ul0.A <- z.A + z*se.ave.A/(1 - ave.A^2)
ll.A <- (exp(2*ll0.A) - 1)/(exp(2*ll0.A) + 1)
ul.A <- (exp(2*ul0.A) - 1)/(exp(2*ul0.A) + 1)
ave.B <- sum(g2*cor)/m.B
se.ave.B <- sqrt(sum(g2*var)/m.B^2)
z.B <- log((1 + ave.B)/(1 - ave.B))/2
ll0.B <- z.B - z*se.ave.B/(1 - ave.B^2)
ul0.B <- z.B + z*se.ave.B/(1 - ave.B^2)
ll.B <- (exp(2*ll0.B) - 1)/(exp(2*ll0.B) + 1)
ul.B <- (exp(2*ul0.B) - 1)/(exp(2*ul0.B) + 1)
diff <- ave.A - ave.B
se.diff <- sqrt(se.ave.A^2 + se.ave.B^2)
ll.diff <- diff - sqrt((ave.A - ll.A)^2 + (ul.B - ave.B)^2)
ul.diff <- diff + sqrt((ul.A - ave.A)^2 + (ave.B - ll.B)^2)
out1 <- t(c(ave.A, se.ave.A, ll.A, ul.A))
out2 <- t(c(ave.B, se.ave.B, ll.B, ul.B))
out3 <- t(c(diff, se.diff, ll.diff, ul.diff))
out <- rbind(out1, out2, out3)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Set A:", "Set B:", "Set A - Set B:")
return (out)
}
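# Editor's sketch (not part of the package): in the unweighted case (type = 2)
# the point-biserial correlation is recovered from the standardized mean
# difference via cor = d/sqrt(d^2 + 4), as in the code above; e.g. with an
# assumed d of 0.8:
if (FALSE) {
  d <- .8
  d/sqrt(d^2 + 4)   # about 0.371
}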
# meta.sub.semipart =========================================
#' Confidence interval for a difference in average semipartial
#' correlations for two sets of studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' difference in average semipartial correlations for two mutually
#' exclusive sets of studies. Each set can have one or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated semipartial correlations
#' @param r2 vector of squared multiple correlations for a model that
#' includes the IV and all control variables
#' @param group vector of group indicators:
#' * 1 for set A
#' * 2 for set B
#' * 0 to ignore
#'
#'
#' @return
#' Returns a matrix with three rows:
#' * Row 1 - estimate for Set A
#' * Row 2 - estimate for Set B
#' * Row 3 - estimate for difference, Set A - Set B
#'
#' The columns are:
#' * Estimate - estimated average correlation or difference
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(55, 190, 65, 35)
#' cor <- c(.40, .65, .60, .45)
#' r2 <- c(.25, .41, .43, .39)
#' group <- c(1, 1, 2, 0)
#' meta.sub.semipart(.05, n, cor, r2, group)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Set A: 0.525 0.05955276 0.3986844 0.6317669
#' # Set B: 0.600 0.07931155 0.4221127 0.7333949
#' # Set A - Set B: -0.075 0.09918091 -0.2587113 0.1324682
#'
#'
#' @importFrom stats qnorm
#' @export
meta.sub.semipart <- function(alpha, n, cor, r2, group) {
m <- length(n)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
r0 <- r2 - cor^2
var <- (r2^2 - 2*r2 + r0 - r0^2 + 1)/(n - 3)
g1 <- (group == rep(1, m))*1
g2 <- (group == rep(2, m))*1
m1 <- sum(g1)
m2 <- sum(g2)
ave.A <- sum(g1*cor)/m1
se.ave.A <- sqrt(sum(g1*var)/m1^2)
z.A <- log((1 + ave.A)/(1 - ave.A))/2
ll0.A <- z.A - z*se.ave.A/(1 - ave.A^2)
ul0.A <- z.A + z*se.ave.A/(1 - ave.A^2)
ll.A <- (exp(2*ll0.A) - 1)/(exp(2*ll0.A) + 1)
ul.A <- (exp(2*ul0.A) - 1)/(exp(2*ul0.A) + 1)
ave.B <- sum(g2*cor)/m2
se.ave.B <- sqrt(sum(g2*var)/m2^2)
z.B <- log((1 + ave.B)/(1 - ave.B))/2
ll0.B <- z.B - z*se.ave.B/(1 - ave.B^2)
ul0.B <- z.B + z*se.ave.B/(1 - ave.B^2)
ll.B <- (exp(2*ll0.B) - 1)/(exp(2*ll0.B) + 1)
ul.B <- (exp(2*ul0.B) - 1)/(exp(2*ul0.B) + 1)
diff <- ave.A - ave.B
se.diff <- sqrt(se.ave.A^2 + se.ave.B^2)
ll.diff <- diff - sqrt((ave.A - ll.A)^2 + (ul.B - ave.B)^2)
ul.diff <- diff + sqrt((ul.A - ave.A)^2 + (ave.B - ll.B)^2)
out1 <- t(c(ave.A, se.ave.A, ll.A, ul.A))
out2 <- t(c(ave.B, se.ave.B, ll.B, ul.B))
out3 <- t(c(diff, se.diff, ll.diff, ul.diff))
out <- rbind(out1, out2, out3)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Set A:", "Set B:", "Set A - Set B:")
return (out)
}
# meta.sub.cronbach ==============================================
#' Confidence interval for a difference in average Cronbach
#' reliabilities for two sets of studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' difference in average Cronbach reliability coefficients for two mutually
#' exclusive sets of studies. Each set can have one or more studies. The
#' number of measurements used to compute the sample reliability coefficient
#' is assumed to be the same for all studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param rel vector of estimated Cronbach reliabilities
#' @param r number of measurements (e.g., items)
#' @param group vector of group indicators:
#' * 1 for set A
#' * 2 for set B
#' * 0 to ignore
#'
#'
#' @return
#' Returns a matrix with three rows:
#' * Row 1 - estimate for Set A
#' * Row 2 - estimate for Set B
#' * Row 3 - estimate for difference, Set A - Set B
#'
#' The columns are:
#' * Estimate - estimated average correlation or difference
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(120, 170, 150, 135)
#' rel <- c(.89, .87, .73, .71)
#' group <- c(1, 1, 2, 2)
#' r <- 10
#' meta.sub.cronbach(.05, n, rel, r, group)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Set A: 0.88 0.01068845 0.8581268 0.8999386
#' # Set B: 0.72 0.02515130 0.6684484 0.7668524
#' # Set A - Set B: 0.16 0.02732821 0.1082933 0.2152731
#'
#'
#' @references
#' \insertRef{Bonett2010}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.sub.cronbach <- function(alpha, n, rel, r, group) {
m <- length(n)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
g1 <- (group == rep(1, m))*1
g2 <- (group == rep(2, m))*1
m1 <- sum(g1)
m2 <- sum(g2)
hn1 <- m1/sum(g1/n)
hn2 <- m2/sum(g2/n)
a1 <- ((r - 2)*(m1 - 1))^.25
var1 <- 2*r*(1 - rel)^2/((r - 1)*(n - 2 - a1))
a2 <- ((r - 2)*(m2 - 1))^.25
var2 <- 2*r*(1 - rel)^2/((r - 1)*(n - 2 - a2))
ave.A <- sum(g1*rel)/m1
se.ave.A <- sqrt(sum(g1*var1)/m1^2)
log.A <- log(1 - ave.A) - log(hn1/(hn1 - 1))
ul.A <- 1 - exp(log.A - z*se.ave.A/(1 - ave.A))
ll.A <- 1 - exp(log.A + z*se.ave.A/(1 - ave.A))
ave.B <- sum(g2*rel)/m2
se.ave.B <- sqrt(sum(g2*var2)/m2^2)
log.B <- log(1 - ave.B) - log(hn2/(hn2 - 1))
ul.B <- 1 - exp(log.B - z*se.ave.B/(1 - ave.B))
ll.B <- 1 - exp(log.B + z*se.ave.B/(1 - ave.B))
diff <- ave.A - ave.B
se.diff <- sqrt(se.ave.A^2 + se.ave.B^2)
ll.diff <- diff - sqrt((ave.A - ll.A)^2 + (ul.B - ave.B)^2)
ul.diff <- diff + sqrt((ul.A - ave.A)^2 + (ave.B - ll.B)^2)
out1 <- t(c(ave.A, se.ave.A, ll.A, ul.A))
out2 <- t(c(ave.B, se.ave.B, ll.B, ul.B))
out3 <- t(c(diff, se.diff, ll.diff, ul.diff))
out <- rbind(out1, out2, out3)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Set A:", "Set B:", "Set A - Set B:")
return (out)
}
# meta.sub.gen ===============================================================
#' Confidence interval for a difference in average effect size for two sets
#' of studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' difference in the average effect size (any type of effect size) for
#' two mutually exclusive sets of studies. Each set can have one or more
#' studies. All of the effect sizes should be compatible.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param est vector of estimated effect sizes
#' @param se vector of effect size standard errors
#' @param group vector of group indicators:
#' * 1 for set A
#' * 2 for set B
#' * 0 to ignore
#'
#'
#' @return
#' Returns a matrix with three rows:
#' * Row 1 - estimate for Set A
#' * Row 2 - estimate for Set B
#' * Row 3 - estimate for difference, Set A - Set B
#'
#' The columns are:
#' * Estimate - estimated average effect size or difference
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' est <- c(.920, .896, .760, .745)
#' se <- c(.098, .075, .069, .055)
#' group <- c(1, 1, 2, 2)
#' meta.sub.gen(.05, est, se, group)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Set A: 0.9080 0.06170292 0.787064504 1.0289355
#' # Set B: 0.7525 0.04411916 0.666028042 0.8389720
#' # Set A - Set B: 0.1555 0.07585348 0.006829917 0.3041701
#'
#'
#' @importFrom stats qnorm
#' @export
meta.sub.gen <- function(alpha, est, se, group) {
m <- length(est)
z <- qnorm(1 - alpha/2)
var <- se^2
g1 <- (group == rep(1, m))*1
g2 <- (group == rep(2, m))*1
m1 <- sum(g1)
m2 <- sum(g2)
ave.A <- sum(g1*est)/m1
se.ave.A <- sqrt(sum(g1*var)/m1^2)
ll.A <- ave.A - z*se.ave.A
ul.A <- ave.A + z*se.ave.A
ave.B <- sum(g2*est)/m2
se.ave.B <- sqrt(sum(g2*var)/m2^2)
ll.B <- ave.B - z*se.ave.B
ul.B <- ave.B + z*se.ave.B
diff <- ave.A - ave.B
se.diff <- sqrt(se.ave.A^2 + se.ave.B^2)
ll.diff <- diff - z*se.diff
ul.diff <- diff + z*se.diff
out1 <- t(c(ave.A, se.ave.A, ll.A, ul.A))
out2 <- t(c(ave.B, se.ave.B, ll.B, ul.B))
out3 <- t(c(diff, se.diff, ll.diff, ul.diff))
out <- rbind(out1, out2, out3)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Set A:", "Set B:", "Set A - Set B:")
return (out)
}
# ================= Linear Contrasts of Effect Sizes ================
# meta.lc.mean2 ====================================================
#' Confidence interval for a linear contrast of mean differences from
#' 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' linear contrast of 2-group mean differences from two or more studies.
#' A Satterthwaite adjustment to the degrees of freedom is used to improve
#' the accuracy of the confidence interval. Equality of variances within or across
#' studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param v vector of contrast coefficients
#'
#' @return
#' Returns a 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' m1 <- c(45.1, 39.2, 36.3, 34.5)
#' m2 <- c(30.0, 35.1, 35.3, 36.2)
#' sd1 <- c(10.7, 10.5, 9.4, 11.5)
#' sd2 <- c(12.3, 12.0, 10.4, 9.6)
#' n1 <- c(40, 20, 50, 25)
#' n2 <- c(40, 20, 48, 26)
#' v <- c(.5, .5, -.5, -.5)
#' meta.lc.mean2(.05, m1, m2, sd1, sd2, n1, n2, v)
#'
#' # Should return:
#' # Estimate SE LL UL df
#' # Contrast 9.95 2.837787 4.343938 15.55606 153.8362
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @export
meta.lc.mean2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, v) {
m <- length(m1)
nt <- sum(n1 + n2)
var1 <- sd1^2
var2 <- sd2^2
var <- var1/n1 + var2/n2
d <- m1 - m2
con <- t(v)%*%d
var <- t(v)%*%(diag(var))%*%v
se <- sqrt(var)
u1 <- var^2*sum(v^2)^2
u2 <- sum(v^4*var1^2/(n1^3 - n1^2) + v^4*var2^2/(n2^3 - n2^2))
df <- u1/u2
t <- qt(1 - alpha/2, df)
ll <- con - t*se
ul <- con + t*se
out <- cbind(con, se, ll, ul, df)
colnames(out) <- c("Estimate", "SE", "LL", "UL", "df")
rownames(out) <- "Contrast"
return(out)
}
# meta.lc.stdmean2 ==================================================
#' Confidence interval for a linear contrast of standardized mean
#' differences from 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' linear contrast of 2-group standardized mean differences from two or
#' more studies. Equality of variances within or across studies is not assumed.
#' Use the square root average variance standardizer (stdzr = 0) for 2-group
#' experimental designs. Use the square root weighted variance standardizer
#' (stdzr = 3) for 2-group nonexperimental designs with simple random sampling.
#' The stdzr = 1 and stdzr = 2 options can be used with either experimental
#' or nonexperimental designs.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param v vector of contrast coefficients
#' @param stdzr
#' * set to 0 for square root unweighted average variance standardizer
#' * set to 1 for group 1 SD standardizer
#' * set to 2 for group 2 SD standardizer
#' * set to 3 for square root weighted average variance standardizer
#'
#'
#' @return
#' Returns a 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' m1 <- c(45.1, 39.2, 36.3, 34.5)
#' m2 <- c(30.0, 35.1, 35.3, 36.2)
#' sd1 <- c(10.7, 10.5, 9.4, 11.5)
#' sd2 <- c(12.3, 12.0, 10.4, 9.6)
#' n1 <- c(40, 20, 50, 25)
#' n2 <- c(40, 20, 48, 26)
#' v <- c(.5, .5, -.5, -.5)
#' meta.lc.stdmean2(.05, m1, m2, sd1, sd2, n1, n2, v, 0)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Contrast 0.8557914 0.2709192 0.3247995 1.386783
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.lc.stdmean2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, v, stdzr) {
df1 <- n1 - 1
df2 <- n2 - 1
m <- length(m1)
z <- qnorm(1 - alpha/2)
nt <- sum(n1 + n2)
var1 <- sd1^2
var2 <- sd2^2
if (stdzr == 0) {
s1 <- sqrt((var1 + var2)/2)
d <- (m1 - m2)/s1
du <- (1 - 3/(4*(n1 + n2) - 9))*d
con <- t(v)%*%du
var <- d^2*(var1^2/df1 + var2^2/df2)/(8*s1^4) + (var1/df1 + var2/df2)/s1^2
se <- sqrt(t(v)%*%(diag(var))%*%v)
} else if (stdzr == 1) {
d <- (m1 - m2)/sd1
du <- (1 - 3/(4*n1 - 5))*d
con <- t(v)%*%du
var <- d^2/(2*df1) + 1/df1 + var2/(df2*var1)
se <- sqrt(t(v)%*%(diag(var))%*%v)
} else if (stdzr == 2) {
d <- (m1 - m2)/sd2
du <- (1 - 3/(4*n2 - 5))*d
con <- t(v)%*%du
var <- d^2/(2*df2) + 1/df2 + var1/(df1*var2)
se <- sqrt(t(v)%*%(diag(var))%*%v)
} else {
s2 <- sqrt((df1*sd1^2 + df2*sd2^2)/(df1 + df2))
d <- (m1 - m2)/s2
du <- (1 - 3/(4*(n1 + n2) - 9))*d
con <- t(v)%*%du
var <- d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2
se <- sqrt(t(v)%*%(diag(var))%*%v)
}
ll <- con - z*se
ul <- con + z*se
out <- cbind(con, se, ll, ul)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- "Contrast"
return(out)
}
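# Editor's sketch (not part of the package): du above applies a Hedges-style
# small-sample bias adjustment; for the stdzr = 0 and stdzr = 3 branches the
# factor is 1 - 3/(4*(n1 + n2) - 9). With assumed d = .5 and n1 = n2 = 20:
if (FALSE) {
  d <- .5; n1 <- 20; n2 <- 20
  (1 - 3/(4*(n1 + n2) - 9))*d   # about 0.490, shrunk slightly toward zero
}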
# meta.lc.mean.ps ====================================================
#' Confidence interval for a linear contrast of mean differences from
#' paired-samples studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' linear contrast of paired-samples mean differences from two or more studies.
#' A Satterthwaite adjustment to the degrees of freedom is used to improve
#' the accuracy of the confidence interval. Equality of variances within or across
#' studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param cor vector of estimated correlations for paired measurements
#' @param n vector of sample sizes
#' @param v vector of contrast coefficients
#'
#'
#' @return
#' Returns a 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' m1 <- c(53, 60, 53, 57)
#' m2 <- c(55, 62, 58, 61)
#' sd1 <- c(4.1, 4.2, 4.5, 4.0)
#' sd2 <- c(4.2, 4.7, 4.9, 4.8)
#' cor <- c(.7, .7, .8, .85)
#' n <- c(30, 50, 30, 70)
#' v <- c(.5, .5, -.5, -.5)
#' meta.lc.mean.ps(.05, m1, m2, sd1, sd2, cor, n, v)
#'
#' # Should return:
#' # Estimate SE LL UL df
#' # Contrast 2.5 0.4943114 1.520618 3.479382 112.347
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @export
meta.lc.mean.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, v) {
m <- length(m1)
nt <- sum(n)
var1 <- sd1^2
var2 <- sd2^2
var <- (var1 + var2 - 2*cor*sd1*sd2)/n
d <- m1 - m2
con <- t(v)%*%d
se <- sqrt(t(v)%*%(diag(var))%*%v)
u1 <- sum(var*v^2)^2
u2 <- sum((var*v^2)^2/(n - 1))
df <- u1/u2
t <- qt(1 - alpha/2, df)
ll <- con - t*se
ul <- con + t*se
out <- cbind(con, se, ll, ul, df)
colnames(out) <- c("Estimate", "SE", "LL", "UL", "df")
rownames(out) <- "Contrast"
return(out)
}
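# Editor's sketch (not part of the package): the Satterthwaite df above has the
# usual form (sum of weighted variances)^2 / sum((weighted variance)^2/df_i).
# Reproducing the df from the example:
if (FALSE) {
  sd1 <- c(4.1, 4.2, 4.5, 4.0); sd2 <- c(4.2, 4.7, 4.9, 4.8)
  cor <- c(.7, .7, .8, .85); n <- c(30, 50, 30, 70); v <- c(.5, .5, -.5, -.5)
  u <- ((sd1^2 + sd2^2 - 2*cor*sd1*sd2)/n)*v^2
  sum(u)^2/sum(u^2/(n - 1))   # 112.347, the df in the example
}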
# meta.lc.stdmean.ps ===================================================
#' Confidence interval for a linear contrast of standardized
#' mean differences from paired-samples studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' linear contrast of paired-samples standardized mean differences from two or
#' more studies. Equality of variances within or across studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param cor vector of estimated correlations for paired measurements
#' @param n vector of sample sizes
#' @param v vector of contrast coefficients
#' @param stdzr
#' * set to 0 for square root unweighted average variance standardizer
#' * set to 1 for group 1 SD standardizer
#' * set to 2 for group 2 SD standardizer
#'
#'
#' @return
#' Returns a 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' m1 <- c(53, 60, 53, 57)
#' m2 <- c(55, 62, 58, 61)
#' sd1 <- c(4.1, 4.2, 4.5, 4.0)
#' sd2 <- c(4.2, 4.7, 4.9, 4.8)
#' cor <- c(.7, .7, .8, .85)
#' n <- c(30, 50, 30, 70)
#' v <- c(.5, .5, -.5, -.5)
#' meta.lc.stdmean.ps(.05, m1, m2, sd1, sd2, cor, n, v, 0)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Contrast 0.5127577 0.1392232 0.2398851 0.7856302
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.lc.stdmean.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, v, stdzr) {
df <- n - 1
m <- length(m1)
z <- qnorm(1 - alpha/2)
nt <- sum(n)
var1 <- sd1^2
var2 <- sd2^2
vd <- var1 + var2 - 2*cor*sd1*sd2
if (stdzr == 0) {
s <- sqrt((var1 + var2)/2)
d <- (m1 - m2)/s
du <- sqrt((n - 2)/df)*d
var <- d^2*(var1^2 + var2^2 + 2*cor^2*var1*var2)/(8*df*s^4) + vd/(df*s^2)
con <- t(v)%*%du
se <- sqrt(t(v)%*%(diag(var))%*%v)
} else if (stdzr == 1) {
d <- (m1 - m2)/sd1
du <- (1 - 3/(4*df - 1))*d
con <- t(v)%*%du
var <- d^2/(2*df) + vd/(df*var1)
se <- sqrt(t(v)%*%(diag(var))%*%v)
} else {
d <- (m1 - m2)/sd2
du <- (1 - 3/(4*df - 1))*d
con <- t(v)%*%du
var <- d^2/(2*df) + vd/(df*var2)
se <- sqrt(t(v)%*%(diag(var))%*%v)
}
ll <- con - z*se
ul <- con + z*se
out <- cbind(con, se, ll, ul)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- "Contrast"
return(out)
}
# meta.lc.meanratio2 =================================================
#' Confidence interval for a log-linear contrast of mean ratios from
#' 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' log-linear contrast of 2-group mean ratios from two or more studies. A
#' Satterthwaite adjustment to the degrees of freedom is used to improve
#' the accuracy of the confidence interval. Equality of variances within or across
#' studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param v vector of contrast coefficients
#'
#'
#' @return
#' Returns a 1-row matrix with the following columns:
#' * Estimate - estimated log-linear contrast
#' * SE - standard error of log-linear contrast
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - exponentiated log-linear contrast
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' m1 <- c(45.1, 39.2, 36.3, 34.5)
#' m2 <- c(30.0, 35.1, 35.3, 36.2)
#' sd1 <- c(10.7, 10.5, 9.4, 11.5)
#' sd2 <- c(12.3, 12.0, 10.4, 9.6)
#' n1 <- c(40, 20, 50, 25)
#' n2 <- c(40, 20, 48, 26)
#' v <- c(.5, .5, -.5, -.5)
#' meta.lc.meanratio2(.05, m1, m2, sd1, sd2, n1, n2, v)
#'
#' # Should return:
#' # Estimate SE LL UL exp(Estimate)
#' # Contrast 0.2691627 0.07959269 0.1119191 0.4264064 1.308868
#' # exp(LL) exp(UL) df
#' # Contrast 1.118422 1.531743 152.8665
#'
#'
#' @references
#' \insertRef{Bonett2020}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @export
meta.lc.meanratio2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, v) {
nt <- sum(n1 + n2)
logratio <- log(m1/m2)
var1 <- sd1^2/(n1*m1^2)
var2 <- sd2^2/(n2*m2^2)
est <- t(v)%*%logratio
se <- sqrt(t(v)%*%(diag(var1 + var2))%*%v)
df <- se^4/sum(v^4*var1^2/(n1 - 1) + v^4*var2^2/(n2 - 1))
t <- qt(1 - alpha/2, df)
ll <- est - t*se
ul <- est + t*se
out <- cbind(est, se, ll, ul, exp(est), exp(ll), exp(ul), df)
colnames(out) <- c(
"Estimate",
"SE",
"LL",
"UL",
"exp(Estimate)",
"exp(LL)",
"exp(UL)",
"df"
)
rownames(out) <- "Contrast"
return(out)
}
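# Editor's sketch (not part of the package): with contrast coefficients that
# sum to zero, exp(t(v) %*% log(m1/m2)) is a ratio of geometric means of the
# per-study mean ratios. For the example's v = c(.5, .5, -.5, -.5):
if (FALSE) {
  m1 <- c(45.1, 39.2, 36.3, 34.5); m2 <- c(30.0, 35.1, 35.3, 36.2)
  r <- m1/m2
  exp(sum(c(.5, .5, -.5, -.5)*log(r)))
  sqrt(r[1]*r[2])/sqrt(r[3]*r[4])   # both equal 1.308868, as documented
}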
# meta.lc.meanratio.ps ================================================
#' Confidence interval for a log-linear contrast of mean ratios from
#' paired-samples studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' log-linear contrast of paired-samples mean ratios from two or more studies.
#' A Satterthwaite adjustment to the degrees of freedom is used to improve
#' the accuracy of the confidence interval. Equality of variances within or across
#' studies is not assumed.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param cor vector of estimated correlations for paired measurements
#' @param n vector of sample sizes
#' @param v vector of contrast coefficients
#'
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated log-linear contrast
#' * SE - standard error of log-linear contrast
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - exponentiated log-linear contrast
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' m1 <- c(53, 60, 53, 57)
#' m2 <- c(55, 62, 58, 61)
#' sd1 <- c(4.1, 4.2, 4.5, 4.0)
#' sd2 <- c(4.2, 4.7, 4.9, 4.8)
#' cor <- c(.7, .7, .8, .85)
#' n <- c(30, 50, 30, 70)
#' v <- c(.5, .5, -.5, -.5)
#' meta.lc.meanratio.ps(.05, m1, m2, sd1, sd2, cor, n, v)
#'
#' # Should return:
#' # Estimate SE LL UL exp(Estimate)
#' # Contrast 0.0440713 0.008701725 0.02681353 0.06132907 1.045057
#' # exp(LL) exp(UL) df
#' # Contrast 1.027176 1.063249 103.0256
#'
#'
#' @references
#' \insertRef{Bonett2020}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @export
meta.lc.meanratio.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, v) {
nt <- sum(n)
logratio <- log(m1/m2)
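  # delta-method variance of each log mean ratio, accounting for the
  # correlation between the paired measurements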
var <- (sd1^2/m1^2 + sd2^2/m2^2 - 2*cor*sd1*sd2/(m1*m2))/n
est <- t(v)%*%logratio
se <- sqrt(t(v)%*%(diag(var))%*%v)
df <- se^4/sum(v^4*var^2/(n - 1))
t <- qt(1 - alpha/2, df)
ll <- est - t*se
ul <- est + t*se
out <- cbind(est, se, ll, ul, exp(est), exp(ll), exp(ul), df)
colnames(out) <- c("Estimate", "SE", "LL", "UL", "exp(Estimate)",
"exp(LL)", "exp(UL)", "df")
rownames(out) <- "Contrast"
return(out)
}
# meta.lc.odds =====================================================
#' Confidence interval for a log-linear contrast of odds ratios
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' exponentiated log-linear contrast of odds ratios from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param v vector of contrast coefficients
#'
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated log-linear contrast
#' * SE - standard error of log-linear contrast
#' * exp(Estimate) - exponentiated log-linear contrast
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#'
#'
#' @examples
#' n1 <- c(50, 150, 150)
#' f1 <- c(16, 50, 25)
#' n2 <- c(50, 150, 150)
#' f2 <- c(7, 15, 20)
#' v <- c(1, -1, 0)
#' meta.lc.odds(.05, f1, f2, n1, n2, v)
#'
#' # Should return:
#' # Estimate SE exp(Estimate) exp(LL) exp(UL)
#' # Contrast -0.4596883 0.5895438 0.6314805 0.1988563 2.005305
#'
#'
#' @references
#' \insertRef{Bonett2015}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.lc.odds <- function(alpha, f1, f2, n1, n2, v) {
m <- length(n1)
nt <- sum(n1 + n2)
z <- qnorm(1 - alpha/2)
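  # log odds ratios with a .5 continuity correction added to each cell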
lor <- log((f1 + .5)*(n2 - f2 + .5)/((f2 + .5)*(n1 - f1 + .5)))
var.lor <- 1/(f1 + .5) + 1/(f2 + .5) + 1/(n1 - f1 + .5) + 1/(n2 - f2 + .5)
con.lor <- t(v)%*%lor
se.lor <- sqrt(t(v)%*%(diag(var.lor))%*%v)
ll <- exp(con.lor - z*se.lor)
ul <- exp(con.lor + z*se.lor)
con.or <- exp(con.lor)
se.or <- con.or*se.lor
out <- cbind(con.lor, se.lor, con.or, ll, ul)
colnames(out) <- c("Estimate", "SE", "exp(Estimate)", "exp(LL)", "exp(UL)")
rownames(out) <- "Contrast"
return (out)
}
# meta.lc.propratio2 =====================================================
#' Confidence interval for a log-linear contrast of proportion ratios from 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for an
#' exponentiated log-linear contrast of 2-group proportion ratios from
#' two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param v vector of contrast coefficients
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated log-linear contrast
#' * SE - standard error of log-linear contrast
#' * exp(Estimate) - exponentiated log-linear contrast
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#'
#'
#' @examples
#' n1 <- c(50, 150, 150)
#' f1 <- c(16, 50, 25)
#' n2 <- c(50, 150, 150)
#' f2 <- c(7, 15, 20)
#' v <- c(1, -1, 0)
#' meta.lc.propratio2(.05, f1, f2, n1, n2, v)
#'
#' # Should return:
#' # Estimate SE exp(Estimate) exp(LL) exp(UL)
#' # Contrast -0.3853396 0.4828218 0.6802196 0.2640405 1.752378
#'
#'
#' @references
#' \insertRef{Price2008}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.lc.propratio2 <- function(alpha, f1, f2, n1, n2, v) {
m <- length(n1)
nt <- sum(n1 + n2)
z <- qnorm(1 - alpha/2)
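  # adjusted proportion estimates (counts + 1/4, sample sizes + 7/4)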
p1 <- (f1 + 1/4)/(n1 + 7/4)
p2 <- (f2 + 1/4)/(n2 + 7/4)
lrr <- log(p1/p2)
var1 <- 1/(f1 + 1/4 + (f1 + 1/4)^2/(n1 - f1 + 3/2))
var2 <- 1/(f2 + 1/4 + (f2 + 1/4)^2/(n2 - f2 + 3/2))
var.lrr <- var1 + var2
con.lrr <- t(v)%*%lrr
  se.lrr <- sqrt(t(v)%*%(diag(var.lrr))%*%v)
ll <- exp(con.lrr - z*se.lrr)
ul <- exp(con.lrr + z*se.lrr)
con.rr <- exp(con.lrr)
se.rr <- con.rr*se.lrr
out <- cbind(con.lrr, se.lrr, con.rr, ll, ul)
colnames(out) <- c("Estimate", "SE", "exp(Estimate)", "exp(LL)", "exp(UL)")
rownames(out) <- "Contrast"
return (out)
}
# meta.lc.prop2 =============================================================
#' Confidence interval for a linear contrast of proportion differences in 2-group studies
#'
#'
#' @description
#' Computes the estimate, standard error, and adjusted Wald confidence interval for a
#' linear contrast of 2-group proportion differences from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param v vector of contrast coefficients
#'
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the adjusted Wald confidence interval
#' * UL - upper limit of the adjusted Wald confidence interval
#'
#'
#' @examples
#' n1 <- c(50, 150, 150)
#' n2 <- c(50, 150, 150)
#' f1 <- c(16, 50, 25)
#' f2 <- c(7, 15, 20)
#' v <- c(1, -1, 0)
#' meta.lc.prop2(.05, f1, f2, n1, n2, v)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Contrast -0.05466931 0.09401019 -0.2389259 0.1295873
#'
#'
#' @references
#' \insertRef{Bonett2014}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.lc.prop2 <- function(alpha, f1, f2, n1, n2, v) {
m <- length(n1)
z <- qnorm(1 - alpha/2)
nt <- sum(n1 + n2)
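  # adjusted Wald estimates: 1/m added to each count, 2/m to each sample size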
p1 <- (f1 + 1/m)/(n1 + 2/m)
p2 <- (f2 + 1/m)/(n2 + 2/m)
rd <- p1 - p2
var1 <- p1*(1 - p1)/(n1 + 2/m)
var2 <- p2*(1 - p2)/(n2 + 2/m)
var <- var1 + var2
con <- t(v)%*%rd
se <- sqrt(t(v)%*%(diag(var))%*%v)
ll <- con - z*se
ul <- con + z*se
out <- cbind(con, se, ll, ul)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) = "Contrast"
return (out)
}
# meta.lc.prop.ps =======================================================
#' Confidence interval for a linear contrast of proportion differences in
#' paired-samples studies
#'
#'
#' @description
#' Computes the estimate, standard error, and adjusted Wald confidence interval
#' for a linear contrast of paired-samples proportion differences from two or
#' more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f11 vector of frequency counts in cell 1,1
#' @param f12 vector of frequency counts in cell 1,2
#' @param f21 vector of frequency counts in cell 2,1
#' @param f22 vector of frequency counts in cell 2,2
#' @param v vector of contrast coefficients
#'
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the adjusted Wald confidence interval
#' * UL - upper limit of the adjusted Wald confidence interval
#'
#'
#' @examples
#' f11 <- c(17, 28, 19)
#' f12 <- c(43, 56, 49)
#' f21 <- c(3, 5, 5)
#' f22 <- c(37, 54, 39)
#' v <- c(.5, .5, -1)
#' meta.lc.prop.ps(.05, f11, f12, f21, f22, v)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Contrast -0.01436285 0.06511285 -0.1419817 0.113256
#'
#'
#' @references
#' \insertRef{Bonett2014}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.lc.prop.ps <- function(alpha, f11, f12, f21, f22, v) {
m <- length(f11)
z <- qnorm(1 - alpha/2)
n <- f11 + f12 + f21 + f22
nt <- sum(n)
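  # adjusted Wald estimates of the discordant-cell proportions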
p12 <- (f12 + 1/m)/(n + 2/m)
p21 <- (f21 + 1/m)/(n + 2/m)
rd <- p12 - p21
con <- t(v)%*%rd
var <- (p12 + p21 - rd^2)/(n + 2/m)
se <- sqrt(t(v)%*%(diag(var))%*%v)
ll <- con - z*se
ul <- con + z*se
out <- cbind(con, se, ll, ul)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- "Contrast"
return (out)
}
# meta.lc.agree ===================================================
#' Confidence interval for a linear contrast of G-index coefficients
#'
#'
#' @description
#' Computes the estimate, standard error, and adjusted Wald confidence
#' interval for a linear contrast of G-index of agreement coefficients
#' from two or more studies. This function assumes that two raters each
#' provide a dichotomous rating for a sample of objects.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f11 vector of frequency counts in cell 1,1
#' @param f12 vector of frequency counts in cell 1,2
#' @param f21 vector of frequency counts in cell 2,1
#' @param f22 vector of frequency counts in cell 2,2
#' @param v vector of contrast coefficients
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the adjusted Wald confidence interval
#' * UL - upper limit of the adjusted Wald confidence interval
#'
#'
#' @examples
#' f11 <- c(43, 56, 49)
#' f12 <- c(7, 2, 9)
#' f21 <- c(3, 5, 5)
#' f22 <- c(37, 54, 39)
#' v <- c(.5, .5, -1)
#' meta.lc.agree(.05, f11, f12, f21, f22, v)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Contrast 0.1022939 0.07972357 -0.05396142 0.2585492
#'
#'
#' @importFrom stats qnorm
#' @export
meta.lc.agree <- function(alpha, f11, f12, f21, f22, v) {
m <- length(f11)
z <- qnorm(1 - alpha/2)
n <- f11 + f12 + f21 + f22
nt <- sum(n)
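  # G-index of agreement: G = 2*p0 - 1, where p0 is the adjusted proportion
  # of agreement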
p0 <- (f11 + f22 + 2/m)/(n + 4/m)
g <- 2*p0 - 1
con <- t(v)%*%g
var <- 4*p0*(1 - p0)/(n + 4/m)
se <- sqrt(t(v)%*%(diag(var))%*%v)
ll <- con - z*se
ul <- con + z*se
out <- cbind(con, se, ll, ul)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- "Contrast"
return (out)
}
# meta.lc.mean1 ===================================================
#' Confidence interval for a linear contrast of means
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' linear contrast of means from two or more studies. This function will
#' use either an unequal variance (recommended) or an equal variance method.
#' A Satterthwaite adjustment to the degrees of freedom is used with the
#' unequal variance method.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m vector of estimated means
#' @param sd vector of estimated standard deviations
#' @param n vector of sample sizes
#' @param v vector of contrast coefficients
#' @param eqvar
#' * FALSE for unequal variance method
#' * TRUE for equal variance method
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' m <- c(33.5, 37.9, 38.0, 44.1)
#' sd <- c(3.84, 3.84, 3.65, 4.98)
#' n <- c(10, 10, 10, 10)
#' v <- c(.5, .5, -.5, -.5)
#' meta.lc.mean1(.05, m, sd, n, v, eqvar = FALSE)
#'
#' # Should return:
#' # Estimate SE LL UL df
#' # Contrast -5.35 1.300136 -7.993583 -2.706417 33.52169
#'
#' @references
#' \insertRef{Snedecor1980}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @export
meta.lc.mean1 <- function(alpha, m, sd, n, v, eqvar = FALSE) {
est <- t(v)%*%m
k <- length(m)
nt <- sum(n)
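  # equal-variance method: pooled variance with df = total n - k;
  # unequal-variance method: Satterthwaite approximation to the df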
if (eqvar){
df <- sum(n) - k
v1 <- sum((n - 1)*sd^2)/df
se <- sqrt(v1*t(v)%*%solve(diag(n))%*%v)
t1 <- qt(1 - alpha/2, df)
ll <- est - t1*se
ul <- est + t1*se
} else {
v2 <- diag(sd^2)%*%(solve(diag(n)))
se <- sqrt(t(v)%*%v2%*%v)
    df <- se^4/sum(v^4*sd^4/(n^2*(n - 1)))
t2 <- qt(1 - alpha/2, df)
ll <- est - t2*se
ul <- est + t2*se
}
out <- cbind(est, se, ll, ul, df)
colnames(out) <- c("Estimate", "SE", "LL", "UL", "df")
rownames(out) <- "Contrast"
return(out)
}
# meta.lc.prop1 ==============================================
#' Confidence interval for a linear contrast of proportions
#'
#'
#' @description
#' Computes the estimate, standard error, and an adjusted Wald confidence
#' interval for a linear contrast of proportions from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f vector of frequency counts
#' @param n vector of sample sizes
#' @param v vector of contrast coefficients
#'
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the adjusted Wald confidence interval
#' * UL - upper limit of the adjusted Wald confidence interval
#'
#'
#' @examples
#' f <- c(26, 24, 38)
#' n <- c(60, 60, 60)
#' v <- c(-.5, -.5, 1)
#' meta.lc.prop1(.05, f, n, v)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Contrast 0.2119565 0.07602892 0.06294259 0.3609705
#'
#'
#' @references
#' \insertRef{Price2004}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
meta.lc.prop1 <- function(alpha, f, n, v) {
z <- qnorm(1 - alpha/2)
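  # m is the number of studies given nonzero weight by the contrast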
m <- length(v) - length(which(v==0))
nt <- sum(n)
p <- (f + 2/m)/(n + 4/m)
est <- t(v)%*%p
se <- sqrt(t(v)%*%diag(p*(1 - p))%*%solve(diag(n + 4/m))%*%v)
ll <- est - z*se
ul <- est + z*se
out <- cbind(est, se, ll, ul)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- "Contrast"
return(out)
}
# meta.lc.gen =====================================================
#' Confidence interval for a linear contrast of effect sizes
#'
#'
#' @description
#' Computes the estimate, standard error, and confidence interval for a
#' linear contrast of any type of effect size from two or more studies.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param est vector of parameter estimates
#' @param se vector of standard errors
#' @param v vector of contrast coefficients
#'
#'
#' @return
#' Returns 1-row matrix with the following columns:
#' * Estimate - estimated linear contrast
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' est <- c(.55, .59, .44, .48, .26, .19)
#' se <- c(.054, .098, .029, .084, .104, .065)
#' v <- c(.5, .5, -.25, -.25, -.25, -.25)
#' meta.lc.gen(.05, est, se, v)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Contrast 0.2275 0.06755461 0.0950954 0.3599046
#'
#' @importFrom stats qnorm
#' @export
meta.lc.gen <- function(alpha, est, se, v) {
z <- qnorm(1 - alpha/2)
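  # the study estimates are assumed independent, so the covariance matrix
  # of the estimates is diagonal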
con <- t(v)%*%est
se <- sqrt(t(v)%*%(diag(se^2))%*%v)
ll <- con - z*se
ul <- con + z*se
out <- cbind(con, se, ll, ul)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- "Contrast"
return(out)
}
# ==== end of file: vcmeta/R/meta_comp.R =====================================
# meta.ave.fisher ============================================================
#' Fisher confidence interval for an average correlation
#'
#'
#' @description
#' This function should be used with the \link[vcmeta]{meta.ave.gen}
#' function when the effect size is a correlation. Use the estimated average
#' correlation and its standard error from meta.ave.gen in this function to
#' obtain a more accurate confidence interval for the population average
#' correlation.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param cor estimate of average correlation
#' @param se standard error of average correlation
#'
#'
#' @return
#' Returns a 1-row matrix. The columns are:
#' * Estimate - estimate of average correlation (from input)
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#' @examples
#' meta.ave.fisher(0.05, 0.376, .054)
#'
#' # Should return:
#' # Estimate LL UL
#' # 0.376 0.2656039 0.4766632
#'
#'
#' @importFrom stats qnorm
#' @export
meta.ave.fisher <- function(alpha, cor, se) {
z <- qnorm(1 - alpha/2)
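  # compute the CI on the Fisher z scale, then back-transform the limits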
zr <- log((1 + cor)/(1 - cor))/2
ll0 <- zr - z*se/(1 - cor^2)
ul0 <- zr + z*se/(1 - cor^2)
ll <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
out <- t(c(cor, ll, ul))
colnames(out) <- c("Estimate", "LL", "UL")
rownames(out) <- ""
return(out)
}
# cor.from.t =============================================================
#' Computes Pearson correlation between paired measurements from t statistic
#'
#' @description
#' This function computes the Pearson correlation between paired
#' measurements using a reported paired-samples t statistic and
#' other sample information. This correlation estimate is needed
#' in several functions that analyze mean differences and
#' standardized mean differences in paired-samples studies.
#'
#'
#' @param m1 estimated mean for measurement 1
#' @param m2 estimated mean for measurement 2
#' @param sd1 estimated standard deviation for measurement 1
#' @param sd2 estimated standard deviation for measurement 2
#' @param t t statistic from the paired-samples t-test
#' @param n sample size
#'
#' @return
#' Returns the sample Pearson correlation between the two paired measurements
#'
#' @examples
#' cor.from.t(9.4, 9.8, 1.26, 1.40, 2.27, 30)
#'
#' # Should return:
#' # Estimate
#' # Correlation: 0.7415209
#'
#'
#' @export
cor.from.t <- function(m1, m2, sd1, sd2, t, n) {
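  # solve the paired-samples t formula
  # t = (m1 - m2)/sqrt((sd1^2 + sd2^2 - 2*cor*sd1*sd2)/n) for the correlation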
out <- t(((sd1^2 + sd2^2) - n*(m1 - m2)^2/t^2)/(2*sd1*sd2))
colnames(out) <- c("Estimate")
rownames(out) <- c("Correlation: ")
return (out)
}
# meta.chitest ========================================================
#' Computes a chi-square test of effect-size homogeneity
#'
#'
#' @description
#' Computes a chi-square test of effect size homogeneity and p-value using
#' effect-size estimates and their standard errors from two or more studies.
#' This test should not be used to justify the use of a constant coefficient
#' (fixed-effect) meta-analysis.
#'
#'
#' @param est vector of effect-size estimates
#' @param se vector of effect-size standard errors
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Q - chi-square test statistic
#' * df - degrees of freedom
#' * p - p-value
#'
#'
#' @examples
#' est <- c(.297, .324, .281, .149)
#' se <- c(.082, .051, .047, .094)
#' meta.chitest(est, se)
#'
#' # Should return:
#' # Q df p
#' # 2.706526 3 0.4391195
#'
#'
#' @references
#' \insertRef{Borenstein2009}{vcmeta}
#'
#'
#' @importFrom stats pchisq
#' @export
meta.chitest <- function(est, se) {
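  # Q is the weighted sum of squared deviations of the estimates from their
  # inverse-variance weighted average (Cochran's Q)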
df <- length(est) - 1
w <- 1/se^2
ave <- sum(w*est)/sum(w)
Q <- sum(w*(est - ave)*(est - ave))
p <- 1 - pchisq(Q, df)
out <- t(c(Q, df, p))
colnames(out) <- c("Q", "df", "p")
rownames(out) <- ""
return(out)
}
# stdmean2.from.t ============================================================
#' Computes Cohen's d from pooled-variance t statistic
#'
#'
#' @description
#' This function computes Cohen's d for a 2-group design (which is a
#' standardized mean difference with a weighted variance standardizer) using
#' a pooled-variance independent-samples t statistic and the two sample sizes.
#' This function also computes the standard error for Cohen's d. The Cohen's d
#' estimate and standard error assume equality of population variances.
#'
#'
#' @param t pooled-variance t statistic
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#'
#' @return
#' Returns Cohen's d and its equal-variance standard error
#'
#' @examples
#' stdmean2.from.t(3.27, 25, 25)
#'
#' # Should return:
#' # Estimate SE
#' # Cohen's d 0.9439677 0.298801
#'
#' @export
stdmean2.from.t <- function(t, n1, n2) {
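  # invert the pooled-variance t statistic: t = d/sqrt(1/n1 + 1/n2)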
d <- t*sqrt(1/n1 + 1/n2)
se <- sqrt(d^2*(1/(n1 - 1) + 1/(n2 - 1))/8 + 1/n1 + 1/n2)
out <- t(c(d, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- c("Cohen's d: ")
return (out)
}
# table.from.odds ============================================================
#' Computes the cell frequencies in a 2x2 table using the marginal proportions
#' and odds ratio
#'
#'
#' @description
#' This function computes the cell proportions and frequencies in a 2x2
#' contingency table using the reported marginal proportions, estimated odds
#' ratio, and total sample size. The cell frequencies could then be used to
#' compute other measures of effect size. In the output, "cell ij" refers to
#' row i and column j.
#'
#'
#' @param p1row marginal proportion for row 1
#' @param p1col marginal proportion for column 1
#' @param or estimated odds ratio
#' @param n total sample size
#'
#'
#' @return A 2-row matrix. The rows are:
#' * Row 1 gives the four computed cell proportions
#' * Row 2 gives the four computed cell frequencies
#'
#'
#' The columns are:
#' * cell 11 - proportion and frequency for cell 11
#' * cell 12 - proportion and frequency for cell 12
#' * cell 21 - proportion and frequency for cell 21
#' * cell 22 - proportion and frequency for cell 22
#'
#'
#' @examples
#' table.from.odds(.17, .5, 3.18, 100)
#'
#' # Should return:
#' # cell 11 cell 12 cell 21 cell 22
#' # Proportion: 0.1233262 0.04667383 0.3766738 0.4533262
#' # Frequency: 12.0000000 5.00000000 38.0000000 45.0000000
#'
#'
#' @references
#' \insertRef{Bonett2007}{vcmeta}
#'
#'
#' @export
table.from.odds <- function(p1row, p1col, or, n){
if (or <= 0) {stop("the odds ratio must be greater than 0")}
p2row <- 1 - p1row
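  # for or != 1, the cell 1,1 proportion is the admissible root of the
  # quadratic equation implied by the margins and the odds ratio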
  if (or != 1) {
    a <- or*(p1row + p1col) + p2row - p1col
    b <- sqrt(a^2 - 4*p1row*p1col*or*(or - 1))
    p11 <- (a - b)/(2*(or - 1))
  } else {
    p11 <- p1row*p1col
  }
p12 <- p1row - p11
p21 <- p1col - p11
p22 <- 1 - (p11 + p12 + p21)
f11 <- round(n*p11)
f12 <- round(n*p12)
f21 <- round(n*p21)
f22 <- n - (f11 + f12 + f21)
out1 <- t(c(p11, p12, p21, p22))
out2 <- t(c(f11, f12, f21, f22))
out <- rbind(out1, out2)
colnames(out) <- c("cell 11", "cell 12", "cell 21", "cell 22")
rownames(out) <- c("Proportion:", "Frequency:")
return(out)
}
# table.from.phi ============================================================
#' Computes the cell frequencies in a 2x2 table using the marginal proportions
#' and phi correlation
#'
#'
#' @description
#' This function computes the cell proportions and frequencies in a 2x2
#' contingency table using the reported marginal proportions, estimated phi
#' correlation, and total sample size. The cell frequencies could then be used
#' to compute other measures of effect size. In the output, "cell ij" refers
#' to row i and column j.
#'
#'
#' @param p1row marginal proportion for row 1
#' @param p1col marginal proportion for column 1
#' @param phi estimated phi correlation
#' @param n total sample size
#'
#'
#' @return A 2-row matrix. The rows are:
#' * Row 1 gives the four computed cell proportions
#' * Row 2 gives the four computed cell frequencies
#'
#'
#' The columns are:
#' * cell 11 - proportion and frequency for cell 11
#' * cell 12 - proportion and frequency for cell 12
#' * cell 21 - proportion and frequency for cell 21
#' * cell 22 - proportion and frequency for cell 22
#'
#'
#' @examples
#' table.from.phi(.28, .64, .38, 200)
#'
#' # Should return:
#' # cell 11 cell 12 cell 21 cell 22
#' # Proportion: 0.2610974 0.0189026 0.3789026 0.3410974
#' # Frequency: 52.0000000 4.0000000 76.0000000 68.0000000
#'
#'
#' @export
table.from.phi <- function(p1row, p1col, phi, n){
if (abs(phi) > 1) {stop("phi must be between -1 and 1")}
p2row <- 1 - p1row
p2col <- 1 - p1col
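  # largest and smallest phi values compatible with the marginal proportions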
phimax <- sqrt(p1col*p2row/(p1row*p2col))
if (phimax > 1) {phimax = 1/phimax}
phimin <- sqrt(p2col*p2row/(p1row*p1col))
if (phimin > 1) {phimin = 1/phimin}
if (phi > phimax) {stop("phi is too large for given marginal proportions")}
if (phi < -phimin) {stop("phi is too small for given marginal proportions")}
a <- sqrt(p1row*p2row*p1col*p2col)
p11 <- a*phi + p1row*p1col
p12 <- p1row - p11
p21 <- p1col - p11
p22 <- 1 - (p11 + p12 + p21)
f11 <- round(n*p11)
f12 <- round(n*p12)
f21 <- round(n*p21)
f22 <- n - (f11 + f12 + f21)
out1 <- t(c(p11, p12, p21, p22))
out2 <- t(c(f11, f12, f21, f22))
out <- rbind(out1, out2)
colnames(out) <- c("cell 11", "cell 12", "cell 21", "cell 22")
rownames(out) <- c("Proportion:", "Frequency")
return(out)
}
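# dummy reference to an Rdpack object so that R CMD check does not flag the
# otherwise-unused Rdpack import (Rdpack supplies the \insertRef{} macros)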
userefs <- function() {
Rdpack::c_Rd
}
# ==== end of file: vcmeta/R/meta_misc.R =====================================
# meta.lm.mean2 ===========================================================
#' Meta-regression analysis for 2-group mean differences
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a 2-group
#' mean difference. The estimates are OLS estimates with robust standard
#' errors that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * t - t-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' n1 <- c(65, 30, 29, 45, 50)
#' n2 <- c(67, 32, 31, 20, 52)
#' m1 <- c(31.1, 32.3, 31.9, 29.7, 33.0)
#' m2 <- c(34.1, 33.2, 30.6, 28.7, 26.5)
#' sd1 <- c(7.1, 8.1, 7.8, 6.8, 7.6)
#' sd2 <- c(7.8, 7.3, 7.5, 7.2, 6.8)
#' x1 <- c(4, 6, 7, 7, 8)
#' x2 <- c(1, 0, 0, 0, 1)
#' X <- matrix(cbind(x1, x2), 5, 2)
#' meta.lm.mean2(.05, m1, m2, sd1, sd2, n1, n2, X)
#'
#' # Should return:
#' # Estimate SE t p LL UL df
#' # b0 -15.20 3.4097610 -4.457791 0.000 -21.902415 -8.497585 418
#' # b1 2.35 0.4821523 4.873979 0.000 1.402255 3.297745 418
#' # b2 2.85 1.5358109 1.855697 0.064 -0.168875 5.868875 418
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @importFrom stats pt
#' @export
meta.lm.mean2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, X) {
m <- length(m1)
nt <- sum(n1 + n2)
var <- sd1^2/n1 + sd2^2/n2
d <- m1 - m2
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
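  # OLS estimates with heteroscedasticity-consistent (sandwich) standard
  # errors: cov(b) = (X'X)^-1 X'VX (X'X)^-1 with V = diag(var)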
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%d
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
df <- nt - q
crit <- qt(1 - alpha/2, df)
ll <- b - crit*se
ul <- b + crit*se
t <- b/se
p <- round(2*(1 - pt(abs(t), df)), digits = 3)
out <- cbind(b, se, t, p, ll, ul, df)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "t", "p", "LL", "UL", "df")
rownames(out) <- row
return(out)
}
# meta.lm.stdmean2 ==========================================================
#' Meta-regression analysis for 2-group standardized mean differences
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a 2-group
#' standardized mean difference. The estimates are OLS estimates with
#' robust standard errors that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param X matrix of predictor values
#' @param stdzr
#' * set to 0 for square root unweighted average variance standardizer
#' * set to 1 for group 1 SD standardizer
#' * set to 2 for group 2 SD standardizer
#' * set to 3 for square root weighted average variance standardizer
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n1 <- c(65, 30, 29, 45, 50)
#' n2 <- c(67, 32, 31, 20, 52)
#' m1 <- c(31.1, 32.3, 31.9, 29.7, 33.0)
#' m2 <- c(34.1, 33.2, 30.6, 28.7, 26.5)
#' sd1 <- c(7.1, 8.1, 7.8, 6.8, 7.6)
#' sd2 <- c(7.8, 7.3, 7.5, 7.2, 6.8)
#' x1 <- c(4, 6, 7, 7, 8)
#' X <- matrix(x1, 5, 1)
#' meta.lm.stdmean2(.05, m1, m2, sd1, sd2, n1, n2, X, 0)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 -1.6988257 0.4108035 -4.135373 0 -2.5039857 -0.8936657
#' # b1 0.2871641 0.0649815 4.419167 0 0.1598027 0.4145255
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @importFrom stats pnorm
#' @export
meta.lm.stdmean2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, X, stdzr) {
df1 <- n1 - 1
df2 <- n2 - 1
m <- length(m1)
nt <- sum(n1 + n2)
z <- qnorm(1 - alpha/2)
n <- n1 + n2
v1 <- sd1^2
v2 <- sd2^2
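  # d, its bias adjustment du, and its sampling variance under the
  # standardizer selected by stdzr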
if (stdzr == 0) {
s1 <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s1
du <- (1 - 3/(4*n - 9))*d
var <- d^2*(v1^2/df1 + v2^2/df2)/(8*s1^4) + (v1/df1 + v2/df2)/s1^2
} else if (stdzr == 1) {
d <- (m1 - m2)/sd1
du <- (1 - 3/(4*n1 - 5))*d
var <- d^2/(2*df1) + 1/df1 + v2/(df2*v1)
} else if (stdzr == 2) {
cat ("Standardizer = sd2", fill = TRUE)
d <- (m1 - m2)/sd2
du <- (1 - 3/(4*n2 - 5))*d
var <- d^2/(2*df2) + 1/df2 + v1/(df1*v2)
} else {
s2 <- sqrt((df1*sd1^2 + df2*sd2^2)/(df1 + df2))
    d <- (m1 - m2)/s2
du <- (1 - 3/(4*n - 9))*d
var <- d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2
}
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%d
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return(out)
}
# meta.lm.mean.ps ==========================================================
#' Meta-regression analysis for paired-samples mean differences
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a paired-samples
#' mean difference. The estimates are OLS estimates with robust standard
#' errors that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param cor vector of estimated correlations
#' @param n vector of sample sizes
#' @param X matrix of predictor values
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * t - t-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' n <- c(65, 30, 29, 45, 50)
#' cor <- c(.87, .92, .85, .90, .88)
#' m1 <- c(20.1, 20.5, 19.3, 21.5, 19.4)
#' m2 <- c(10.4, 10.2, 8.5, 10.3, 7.8)
#' sd1 <- c(9.3, 9.9, 10.1, 10.5, 9.8)
#' sd2 <- c(7.8, 8.0, 8.4, 8.1, 8.7)
#' x1 <- c(2, 3, 3, 4, 4)
#' X <- matrix(x1, 5, 1)
#' meta.lm.mean.ps(.05, m1, m2, sd1, sd2, cor, n, X)
#'
#' # Should return:
#' # Estimate SE t p LL UL df
#' # b0 8.00 1.2491990 6.404104 0.000 5.5378833 10.462117 217
#' # b1 0.85 0.3796019 2.239188 0.026 0.1018213 1.598179 217
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @importFrom stats pt
#' @export
meta.lm.mean.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, X) {
m <- length(m1)
nt <- sum(n)
var <- (sd1^2 + sd2^2 - 2*cor*sd1*sd2)/n
d <- m1 - m2
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%d
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
df <- nt - q
  crit <- qt(1 - alpha/2, df)
  ll <- b - crit*se
  ul <- b + crit*se
t <- b/se
p <- round(2*(1 - pt(abs(t), df)), digits = 3)
out <- cbind(b, se, t, p, ll, ul, df)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "t", "p", "LL", "UL", "df")
rownames(out) <- row
return(out)
}
# meta.lm.stdmean.ps =======================================================
#' Meta-regression analysis for paired-samples standardized mean differences
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a paired-samples
#' standardized mean difference. The estimates are OLS estimates with
#' robust standard errors that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param cor vector of estimated correlations
#' @param n vector of sample sizes
#' @param X matrix of predictor values
#' @param stdzr
#' * set to 0 for square root unweighted average variance standardizer
#' * set to 1 for group 1 SD standardizer
#' * set to 2 for group 2 SD standardizer
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * t - t-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#'
#' n <- c(65, 30, 29, 45, 50)
#' cor <- c(.87, .92, .85, .90, .88)
#' m1 <- c(20.1, 20.5, 19.3, 21.5, 19.4)
#' m2 <- c(10.4, 10.2, 8.5, 10.3, 7.8)
#' sd1 <- c(9.3, 9.9, 10.1, 10.5, 9.8)
#' sd2 <- c(7.8, 8.0, 8.4, 8.1, 8.7)
#' x1 <- c(2, 3, 3, 4, 4)
#' X <- matrix(x1, 5, 1)
#' meta.lm.stdmean.ps(.05, m1, m2, sd1, sd2, cor, n, X, 0)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 1.01740253 0.25361725 4.0115667 0.000 0.5203218 1.5144832
#' # b1 0.04977943 0.07755455 0.6418635 0.521 -0.1022247 0.2017836
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.stdmean.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, X, stdzr) {
m <- length(m1)
nt <- sum(n)
df <- n - 1
z <- qnorm(1 - alpha/2)
v1 <- sd1^2
v2 <- sd2^2
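  # variance of the paired difference scores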
vd <- v1 + v2 - 2*cor*sd1*sd2
if (stdzr == 0) {
s <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s
du <- sqrt((n - 2)/df)*d
var <- d^2*(v1^2 + v2^2 + 2*cor^2*v1*v2)/(8*df*s^4) + vd/(df*s^2)
} else if (stdzr == 1) {
d <- (m1 - m2)/sd1
du <- (1 - 3/(4*df - 1))*d
var <- d^2/(2*df) + vd/(df*v1)
} else {
d <- (m1 - m2)/sd2
du <- (1 - 3/(4*df - 1))*d
var <- d^2/(2*df) + vd/(df*v2)
}
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%d
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return(out)
}
# meta.lm.meanratio2 =======================================================
#' Meta-regression analysis for 2-group log mean ratios
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a 2-group
#' log mean ratio. The estimates are OLS estimates with robust standard
#' errors that accommodate residual heteroscedasticity. The exponentiated
#' slope estimate for a predictor variable describes a multiplicative
#' change in the mean ratio associated with a 1-unit increase in that
#' predictor variable, controlling for all other predictor variables
#' in the model.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - the exponentiated estimate
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#'
#'
#' @examples
#' n1 <- c(65, 30, 29, 45, 50)
#' n2 <- c(67, 32, 31, 20, 52)
#' m1 <- c(31.1, 32.3, 31.9, 29.7, 33.0)
#' m2 <- c(34.1, 33.2, 30.6, 28.7, 26.5)
#' sd1 <- c(7.1, 8.1, 7.8, 6.8, 7.6)
#' sd2 <- c(7.8, 7.3, 7.5, 7.2, 6.8)
#' x1 <- c(4, 6, 7, 7, 8)
#' X <- matrix(x1, 5, 1)
#' meta.lm.meanratio2(.05, m1, m2, sd1, sd2, n1, n2, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 -0.40208954 0.09321976 -4.313351 0 -0.58479692 -0.21938216
#' # b1 0.06831545 0.01484125 4.603078 0 0.03922712 0.09740377
#' # exp(Estimate) exp(LL) exp(UL)
#' # b0 0.6689208 0.557219 0.8030148
#' # b1 1.0707030 1.040007 1.1023054
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.meanratio2 <- function(alpha, m1, m2, sd1, sd2, n1, n2, X) {
m <- length(m1)
nt <- sum(n1 + n2)
var1 <- sd1^2/(n1*m1^2)
var2 <- sd2^2/(n2*m2^2)
var <- var1 + var2
y <- log(m1/m2)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%y
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
crit <- qnorm(1 - alpha/2)
ll <- b - crit*se
ul <- b + crit*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul, exp(b), exp(ll), exp(ul))
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL",
"exp(Estimate)", "exp(LL)", "exp(UL)")
rownames(out) <- row
return(out)
}
# meta.lm.meanratio.ps =====================================================
#' Meta-regression analysis for paired-samples log mean ratios
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a paired-samples
#' log mean ratio. The estimates are OLS estimates with robust standard
#' errors that accommodate residual heteroscedasticity. The exponentiated
#' slope estimate for a predictor variable describes a multiplicative
#' change in the mean ratio associated with a 1-unit increase in that
#' predictor variable, controlling for all other predictor variables
#' in the model.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 vector of estimated means for group 1
#' @param m2 vector of estimated means for group 2
#' @param sd1 vector of estimated SDs for group 1
#' @param sd2 vector of estimated SDs for group 2
#' @param cor vector of estimated correlations
#' @param n vector of sample sizes
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - the exponentiated estimate
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#'
#'
#' @examples
#' n <- c(65, 30, 29, 45, 50)
#' cor <- c(.87, .92, .85, .90, .88)
#' m1 <- c(20.1, 20.5, 19.3, 21.5, 19.4)
#' m2 <- c(10.4, 10.2, 8.5, 10.3, 7.8)
#' sd1 <- c(9.3, 9.9, 10.1, 10.5, 9.8)
#' sd2 <- c(7.8, 8.0, 8.4, 8.1, 8.7)
#' x1 <- c(2, 3, 3, 4, 4)
#' X <- matrix(x1, 5, 1)
#' meta.lm.meanratio.ps(.05, m1, m2, sd1, sd2, cor, n, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 0.50957008 0.13000068 3.919749 0.000 0.254773424 0.7643667
#' # b1 0.07976238 0.04133414 1.929697 0.054 -0.001251047 0.1607758
#' # exp(Estimate) exp(LL) exp(UL)
#' # b0 1.664575 1.2901693 2.147634
#' # b1 1.083030 0.9987497 1.174422
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.meanratio.ps <- function(alpha, m1, m2, sd1, sd2, cor, n, X) {
m <- length(m1)
nt <- sum(n)
var <- (sd1^2/m1^2 + sd2^2/m2^2 - 2*cor*sd1*sd2/(m1*m2))/n
y <- log(m1/m2)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%y
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
df <- nt - q
crit <- qnorm(1 - alpha/2)
ll <- b - crit*se
ul <- b + crit*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul, exp(b), exp(ll), exp(ul))
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL",
"exp(Estimate)", "exp(LL)", "exp(UL)")
rownames(out) <- row
return(out)
}
# meta.lm.cor.gen ==========================================================
#' Meta-regression analysis for correlations
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a
#' Fisher-transformed correlation. The correlations can be of different types
#' (e.g., Pearson, partial, Spearman). The estimates are OLS estimates
#' with robust standard errors that accommodate residual heteroscedasticity.
#' This function uses estimated correlations and their standard errors as
#' input. The correlations are Fisher-transformed and hence the parameter
#' estimates do not have a simple interpretation. However, the hypothesis
#' test results can be used to decide if a population slope is either
#' positive or negative.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param cor vector of estimated correlations
#' @param se vector of standard errors for the correlations
#' @param X matrix of predictor values
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#'
#' cor <- c(.40, .65, .60, .45)
#' se <- c(.182, .114, .098, .132)
#' x1 <- c(18, 25, 23, 19)
#' X <- matrix(x1, 4, 1)
#' meta.lm.cor.gen(.05, cor, se, X)
#'
#' # Should return:
#' # Estimate SE z p
#' # b0 -0.47832153 0.63427931 -0.7541181 0.451
#' # b1 0.05047154 0.02879859 1.7525699 0.080
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.cor.gen <- function(alpha, cor, se, X) {
m <- length(cor)
z <- qnorm(1 - alpha/2)
zcor <- log((1 + cor)/(1 - cor))/2
zvar <- se^2/(1 - cor^2)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%zcor
V <- diag(zvar)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p")
rownames(out) <- row
return (out)
}
# meta.lm.cor ==============================================================
#' Meta-regression analysis for Pearson or partial correlations
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a
#' Fisher-transformed Pearson or partial correlation. The estimates are OLS
#' estimates with robust standard errors that accommodate residual heteroscedasticity.
#' The correlations are Fisher-transformed and hence the parameter estimates
#' do not have a simple interpretation. However, the hypothesis test results
#' can be used to decide if a population slope is either positive or negative.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated Pearson or partial correlations
#' @param s number of control variables
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - Standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#'
#' n <- c(55, 190, 65, 35)
#' cor <- c(.40, .65, .60, .45)
#' s <- 0
#' x1 <- c(18, 25, 23, 19)
#' X <- matrix(x1, 4, 1)
#' meta.lm.cor(.05, n, cor, s, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 -0.47832153 0.48631509 -0.983563 0.325 -1.431481595 0.47483852
#' # b1 0.05047154 0.02128496 2.371231 0.018 0.008753794 0.09218929
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.cor <- function(alpha, n, cor, s, X) {
m <- length(n)
nt <- sum(n)
z <- qnorm(1 - alpha/2)
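  # Fisher z transformation; the z-scale variance 1/(n - 3 - s) adjusts for
  # the s control variables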
zcor <- log((1 + cor)/(1 - cor))/2
zvar <- 1/(n - 3 - s)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%zcor
V <- diag(zvar)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.lm.spear ============================================================
#' Meta-regression analysis for Spearman correlations
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a
#' Fisher-transformed Spearman correlation. The estimates are OLS estimates
#' with robust standard errors that accommodate residual heteroscedasticity.
#' The correlations are Fisher-transformed and hence the parameter
#' estimates do not have a simple interpretation. However, the hypothesis
#' test results can be used to decide if a population slope is either
#' positive or negative.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated Spearman correlations
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#'
#' n <- c(150, 200, 300, 200, 350)
#' cor <- c(.14, .29, .16, .21, .23)
#' x1 <- c(18, 25, 23, 19, 24)
#' X <- matrix(x1, 5, 1)
#' meta.lm.spear(.05, n, cor, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 -0.08920088 0.26686388 -0.3342561 0.738 -0.612244475 0.43384271
#' # b1 0.01370866 0.01190212 1.1517825 0.249 -0.009619077 0.03703639
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.spear <- function(alpha, n, cor, X) {
m <- length(n)
nt <- sum(n)
z <- qnorm(1 - alpha/2)
zcor <- log((1 + cor)/(1 - cor))/2
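  # approximate variance of the Fisher-transformed Spearman correlation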
zvar <- (1 + cor^2/2)/(n - 3)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%zcor
V <- diag(zvar)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.lm.semipart =========================================================
#' Meta-regression analysis for semipartial correlations
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a Fisher-transformed
#' semipartial correlation. The estimates are OLS estimates with robust
#' standard errors that accommodate residual heteroscedasticity. The
#' correlations are Fisher-transformed and hence the parameter estimates
#' do not have a simple interpretation. However, the hypothesis test results
#' can be used to decide if a population slope is either positive or negative.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param cor vector of estimated semipartial correlations
#' @param r2 vector of estimated squared multiple correlations for full model
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#'
#' n <- c(128, 97, 210, 217)
#' cor <- c(.35, .41, .44, .39)
#' r2 <- c(.29, .33, .36, .39)
#' x1 <- c(18, 25, 23, 19)
#' X <- matrix(x1, 4, 1)
#' meta.lm.semipart(.05, n, cor, r2, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 0.19695988 0.3061757 0.6432905 0.520 -0.40313339 0.79705315
#' # b1 0.01055584 0.0145696 0.7245114 0.469 -0.01800004 0.03911172
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.semipart <- function(alpha, n, cor, r2, X) {
m <- length(n)
nt <- sum(n)
z <- qnorm(1 - alpha/2)
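  # r0 is the squared multiple correlation for the reduced model; zvar is an
  # approximate variance of the Fisher-transformed semipartial correlation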
r0 <- r2 - cor^2
zcor <- log((1 + cor)/(1 - cor))/2
  zvar <- (r2^2 - 2*r2 + r0 - r0^2 + 1)/((1 - cor^2)^2*(n - 3))
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%zcor
V <- diag(zvar)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.lm.cronbach =========================================================
#' Meta-regression analysis for Cronbach reliabilities
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a log-complement
#' Cronbach reliability. The estimates are OLS estimates with robust standard
#' errors that accommodate residual heteroscedasticity. The exponentiated slope
#' estimate for a predictor variable describes a multiplicative change in
#' non-reliability associated with a 1-unit increase in that predictor
#' variable, controlling for all other predictor variables in the model.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param n vector of sample sizes
#' @param rel vector of estimated reliabilities
#' @param r number of measurements (e.g., items)
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' n <- c(583, 470, 546, 680)
#' rel <- c(.91, .89, .90, .89)
#' x1 <- c(1, 0, 0, 0)
#' X <- matrix(x1, 4, 1)
#' meta.lm.cronbach(.05, n, rel, 10, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 -2.2408328 0.03675883 -60.960391 0.000 -2.3128788 -2.16878684
#' # b1 -0.1689006 0.07204625 -2.344336 0.019 -0.3101087 -0.02769259
#'
#'
#' @references
#' \insertRef{Bonett2010}{vcmeta}
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.cronbach <- function(alpha, n, rel, r, X) {
m <- length(n)
nt <- sum(n)
z <- qnorm(1 - alpha/2)
hn <- m/sum(1/n)
a <- ((r - 2)*(m - 1))^.25
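  # log-complement transformation of the reliabilities; hn is the harmonic
  # mean of the sample sizes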
log.rel <- log(1 - rel) - log(hn/(hn - 1))
var.rel <- 2*r*(1 - rel)^2/((r - 1)*(n - 2 - a))
var.log <- var.rel/(1 - rel)^2
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%log.rel
V <- diag(var.log)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.lm.odds =============================================================
#' Meta-regression analysis for odds ratios
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a log odds
#' ratio. The estimates are OLS estimates with robust standard errors
#' that accommodate residual heteroscedasticity. The exponentiated
#' slope estimate for a predictor variable describes a multiplicative
#' change in the odds ratio associated with a 1-unit increase in that
#' predictor variable, controlling for all other predictor variables
#' in the model.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - the exponentiated estimate
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#'
#'
#' @examples
#' n1 <- c(204, 201, 932, 130, 77)
#' n2 <- c(106, 103, 415, 132, 83)
#' f1 <- c(24, 40, 93, 14, 5)
#' f2 <- c(12, 9, 28, 3, 1)
#' x1 <- c(4, 4, 5, 3, 26)
#' x2 <- c(1, 1, 1, 0, 0)
#' X <- matrix(cbind(x1, x2), 5, 2)
#' meta.lm.odds(.05, f1, f2, n1, n2, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 1.541895013 0.69815801 2.20851868 0.027 0.1735305 2.91025958
#' # b1 -0.004417932 0.04840623 -0.09126784 0.927 -0.0992924 0.09045653
#' # b2 -1.071122269 0.60582695 -1.76803337 0.077 -2.2585213 0.11627674
#' # exp(Estimate) exp(LL) exp(UL)
#' # b0 4.6734381 1.1894969 18.361564
#' # b1 0.9955918 0.9054779 1.094674
#' # b2 0.3426238 0.1045049 1.123307
#'
#'
#' @references
#' \insertRef{Bonett2015}{vcmeta}
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.odds <- function(alpha, f1, f2, n1, n2, X) {
m <- length(n1)
nt <- sum(n1 + n2)
z <- qnorm(1 - alpha/2)
lor <- log((f1 + .5)*(n2 - f2 + .5)/((f2 + .5)*(n1 - f1 + .5)))
var <- 1/(f1 + .5) + 1/(f2 + .5) + 1/(n1 - f1 + .5) + 1/(n2 - f2 + .5)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%lor
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
exp.b <- exp(b)
exp.ll <- exp(ll)
exp.ul <- exp(ul)
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul, exp.b, exp.ll, exp.ul)
row <- t(t(paste0(rep("b", q), seq(1:q) - 1)))
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL",
"exp(Estimate)", "exp(LL)", "exp(UL)")
rownames(out) <- row
return (out)
}
# meta.lm.propratio2 =======================================================
#' Meta-regression analysis for proportion ratios
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a log
#' proportion ratio. The estimates are OLS estimates with robust standard
#' errors that accommodate residual heteroscedasticity. The exponentiated
#' slope estimate for a predictor variable describes a multiplicative
#' change in the proportion ratio associated with a 1-unit increase in
#' that predictor variable, controlling for all other predictor variables
#' in the model.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * exp(Estimate) - the exponentiated estimate
#' * exp(LL) - lower limit of the exponentiated confidence interval
#' * exp(UL) - upper limit of the exponentiated confidence interval
#'
#'
#' @examples
#' n1 <- c(204, 201, 932, 130, 77)
#' n2 <- c(106, 103, 415, 132, 83)
#' f1 <- c(24, 40, 93, 14, 5)
#' f2 <- c(12, 9, 28, 3, 1)
#' x1 <- c(4, 4, 5, 3, 26)
#' x2 <- c(1, 1, 1, 0, 0)
#' X <- matrix(cbind(x1, x2), 5, 2)
#' meta.lm.propratio2(.05, f1, f2, n1, n2, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 1.4924887636 0.69172794 2.15762393 0.031 0.13672691 2.84825062
#' # b1 0.0005759509 0.04999884 0.01151928 0.991 -0.09741998 0.09857188
#' # b2 -1.0837844594 0.59448206 -1.82307345 0.068 -2.24894789 0.08137897
#' # exp(Estimate) exp(LL) exp(UL)
#' # b0 4.4481522 1.1465150 17.257565
#' # b1 1.0005761 0.9071749 1.103594
#' # b2 0.3383128 0.1055102 1.084782
#'
#'
#' @references
#' \insertRef{Price2008}{vcmeta}
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.propratio2 <- function(alpha, f1, f2, n1, n2, X) {
m <- length(n1)
nt <- sum(n1 + n2)
z <- qnorm(1 - alpha/2)
p1 <- (f1 + 1/4)/(n1 + 7/4)
p2 <- (f2 + 1/4)/(n2 + 7/4)
lrr <- log(p1/p2)
v1 <- 1/(f1 + 1/4 + (f1 + 1/4)^2/(n1 - f1 + 3/2))
v2 <- 1/(f2 + 1/4 + (f2 + 1/4)^2/(n2 - f2 + 3/2))
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%lrr
V <- diag(v1 + v2)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
exp.b <- exp(b)
exp.ll <- exp(ll)
exp.ul <- exp(ul)
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul, exp.b, exp.ll, exp.ul)
row <- paste0("b", seq_len(q) - 1)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL",
"exp(Estimate)", "exp(LL)", "exp(UL)")
rownames(out) <- row
return (out)
}
# meta.lm.prop2 ============================================================
#' Meta-regression analysis for 2-group proportion differences
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a 2-group
#' proportion difference. The estimates are OLS estimates with
#' robust standard errors that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of group 1 frequency counts
#' @param f2 vector of group 2 frequency counts
#' @param n1 vector of group 1 sample sizes
#' @param n2 vector of group 2 sample sizes
#' @param X matrix of predictor values
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' f1 <- c(24, 40, 93, 14, 5)
#' f2 <- c(12, 9, 28, 3, 1)
#' n1 <- c(204, 201, 932, 130, 77)
#' n2 <- c(106, 103, 415, 132, 83)
#' x1 <- c(4, 4, 5, 3, 26)
#' x2 <- c(1, 1, 1, 0, 0)
#' X <- matrix(cbind(x1, x2), 5, 2)
#' meta.lm.prop2(.05, f1, f2, n1, n2, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 0.089756283 0.034538077 2.5987632 0.009 0.02206290 0.157449671
#' # b1 -0.001447968 0.001893097 -0.7648672 0.444 -0.00515837 0.002262434
#' # b2 -0.034670988 0.034125708 -1.0159786 0.310 -0.10155615 0.032214170
#'
#'
#' @references
#' \insertRef{Bonett2014}{vcmeta}
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.prop2 <- function(alpha, f1, f2, n1, n2, X) {
m <- length(n1)
nt <- sum(n1 + n2)
z <- qnorm(1 - alpha/2)
p1 <- (f1 + 1/m)/(n1 + 2/m)
p2 <- (f2 + 1/m)/(n2 + 2/m)
rd <- p1 - p2
v1 <- p1*(1 - p1)/(n1 + 2/m)
v2 <- p2*(1 - p2)/(n2 + 2/m)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%rd
V <- diag(v1 + v2)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- paste0("b", seq_len(q) - 1)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return (out)
}
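# A minimal sketch (base R; run interactively, not part of the package API):
# the per-study proportion differences that meta.lm.prop2 regresses on X use
# the adjustment (f + 1/m)/(n + 2/m), where m is the number of studies.
# For the first study of the example above (m = 5):
#
# (24 + 1/5)/(204 + 2/5) - (12 + 1/5)/(106 + 2/5)  # adjusted p1 - p2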
# meta.lm.prop.ps ==========================================================
#' Meta-regression analysis for paired-samples proportion differences
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients
#' in a meta-regression model where the dependent variable is a
#' paired-samples proportion difference. The estimates are OLS
#' estimates with robust standard errors that accommodate residual
#' heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f11 vector of frequency counts in cell 1,1
#' @param f12 vector of frequency counts in cell 1,2
#' @param f21 vector of frequency counts in cell 2,1
#' @param f22 vector of frequency counts in cell 2,2
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' f11 <- c(40, 20, 25, 30)
#' f12 <- c(3, 2, 2, 1)
#' f21 <- c(7, 6, 8, 6)
#' f22 <- c(26, 25, 13, 25)
#' x1 <- c(1, 1, 4, 6)
#' x2 <- c(1, 1, 0, 0)
#' X <- matrix(cbind(x1, x2), 4, 2)
#' meta.lm.prop.ps(.05, f11, f12, f21, f22, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 -0.21113402 0.21119823 -0.9996960 0.317 -0.62507494 0.20280690
#' # b1 0.02185567 0.03861947 0.5659236 0.571 -0.05383711 0.09754845
#' # b2 0.12575138 0.17655623 0.7122455 0.476 -0.22029248 0.47179524
#'
#'
#' @references
#' \insertRef{Bonett2014}{vcmeta}
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.prop.ps <- function(alpha, f11, f12, f21, f22, X) {
m <- length(f11)
z <- qnorm(1 - alpha/2)
n <- f11 + f12 + f21 + f22
nt <- sum(n)
p12 <- (f12 + 1/m)/(n + 2/m)
p21 <- (f21 + 1/m)/(n + 2/m)
rd <- p12 - p21
var <- (p12 + p21 - rd^2)/(n + 2/m)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%rd
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- paste0("b", seq_len(q) - 1)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return (out)
}
# meta.lm.agree ============================================================
#' Meta-regression analysis for G agreement indices
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a G-index of
#' agreement. The estimates are OLS estimates with robust standard errors
#' that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f11 vector of frequency counts in cell 1,1
#' @param f12 vector of frequency counts in cell 1,2
#' @param f21 vector of frequency counts in cell 2,1
#' @param f22 vector of frequency counts in cell 2,2
#' @param X matrix of predictor values
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' f11 <- c(40, 20, 25, 30)
#' f12 <- c(3, 2, 2, 1)
#' f21 <- c(7, 6, 8, 6)
#' f22 <- c(26, 25, 13, 25)
#' x1 <- c(1, 1, 4, 6)
#' x2 <- c(1, 1, 0, 0)
#' X <- matrix(cbind(x1, x2), 4, 2)
#' meta.lm.agree(.05, f11, f12, f21, f22, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 0.1904762 0.38772858 0.4912617 0.623 -0.56945786 0.9504102
#' # b1 0.0952381 0.07141957 1.3335013 0.182 -0.04474169 0.2352179
#' # b2 0.4205147 0.32383556 1.2985438 0.194 -0.21419136 1.0552207
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.agree <- function(alpha, f11, f12, f21, f22, X) {
m <- length(f11)
z <- qnorm(1 - alpha/2)
n <- f11 + f12 + f21 + f22
nt <- sum(n)
p0 <- (f11 + f22 + 2/m)/(n + 4/m)
g <- 2*p0 - 1
var <- 4*p0*(1 - p0)/(n + 4/m)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%g
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- paste0("b", seq_len(q) - 1)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return (out)
}
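# A minimal sketch (base R; run interactively, not part of the package API):
# the dependent variable in meta.lm.agree is the G index, G = 2*p0 - 1,
# where p0 is the proportion of agreement (the diagonal cells) with a
# small-sample adjustment that shrinks p0 toward 1/2. Using the example
# data above:
#
# f11 <- c(40, 20, 25, 30)
# f12 <- c(3, 2, 2, 1)
# f21 <- c(7, 6, 8, 6)
# f22 <- c(26, 25, 13, 25)
# m <- length(f11)
# n <- f11 + f12 + f21 + f22
# p0 <- (f11 + f22 + 2/m)/(n + 4/m)  # adjusted agreement proportions
# 2*p0 - 1                           # per-study G indices regressed on X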
# meta.lm.mean1 ============================================================
#' Meta-regression analysis for 1-group means
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a mean
#' from one group. The estimates are OLS estimates with robust
#' standard errors that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m vector of estimated means
#' @param sd vector of estimated standard deviations
#' @param n vector of sample sizes
#' @param X matrix of predictor values
#'
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * t - t-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' n <- c(25, 15, 30, 25, 40)
#' m <- c(20.1, 20.5, 19.3, 21.5, 19.4)
#' sd <- c(10.4, 10.2, 8.5, 10.3, 7.8)
#' x1 <- c(1, 1, 0, 0, 0)
#' x2 <- c( 12, 13, 11, 13, 15)
#' X <- matrix(cbind(x1, x2), 5, 2)
#' meta.lm.mean1(.05, m, sd, n, X)
#'
#' # Should return:
#' # Estimate SE t p LL UL df
#' # b0 19.45490196 6.7873381 2.86635227 0.005 6.0288763 32.880928 132
#' # b1 0.25686275 1.9834765 0.12950128 0.897 -3.6666499 4.180375 132
#' # b2 0.04705882 0.5064693 0.09291544 0.926 -0.9547876 1.048905 132
#'
#'
#' @importFrom stats qt
#' @importFrom stats pt
#' @export
meta.lm.mean1 <- function(alpha, m, sd, n, X) {
k <- length(m)
nt <- sum(n)
var <- sd^2/n
x0 <- matrix(c(1), k, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%m
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
df <- sum(n) - q
t <- qt(1 - alpha/2, df)
ll <- b - t*se
ul <- b + t*se
t <- b/se
p <- round(2*(1 - pt(abs(t), df)), digits = 3)
out <- cbind(b, se, t, p, ll, ul, df)
row <- paste0("b", seq_len(q) - 1)
colnames(out) <- c("Estimate", "SE", "t", "p", "LL", "UL", "df")
rownames(out) <- row
return(out)
}
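# A minimal sketch (base R; run interactively, not part of the package API):
# the point estimates from meta.lm.mean1 are ordinary OLS coefficients, so
# they agree with stats::lm; only the SEs differ, because meta.lm.mean1
# replaces the residual-based variance with the known sampling variances
# sd^2/n. Using the example data above:
#
# n <- c(25, 15, 30, 25, 40)
# m <- c(20.1, 20.5, 19.3, 21.5, 19.4)
# x1 <- c(1, 1, 0, 0, 0)
# x2 <- c(12, 13, 11, 13, 15)
# coef(lm(m ~ x1 + x2))  # matches the Estimate column of the example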
# meta.lm.prop1 ============================================================
#' Meta-regression analysis for 1-group proportions
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is a proportion
#' from one group. The estimates are OLS estimates with robust
#' standard errors that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f vector of frequency counts
#' @param n vector of sample sizes
#' @param X matrix of predictor values
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' f <- c(38, 26, 24, 15, 45, 38)
#' n <- c(80, 60, 70, 50, 180, 200)
#' x1 <- c(10, 15, 18, 22, 24, 30)
#' X <- matrix(x1, 6, 1)
#' meta.lm.prop1(.05, f, n, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 0.63262816 0.06845707 9.241239 0 0.49845477 0.766801546
#' # b1 -0.01510565 0.00290210 -5.205076 0 -0.02079367 -0.009417641
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.prop1 <- function(alpha, f, n, X) {
z <- qnorm(1 - alpha/2)
k <- length(f)
nt <- sum(n)
p <- (f + 2/k)/(n + 4/k)
var <- p*(1 - p)/(n + 4/k)
x0 <- matrix(c(1), k, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%p
V <- diag(var)
se <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*se
ul <- b + z*se
z <- b/se
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, se, z, p, ll, ul)
row <- paste0("b", seq_len(q) - 1)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return(out)
}
# meta.lm.gen ==============================================================
#' Meta-regression analysis for any type of effect size
#'
#'
#' @description
#' This function estimates the intercept and slope coefficients in a
#' meta-regression model where the dependent variable is any type of
#' effect size. The estimates are OLS estimates with robust standard
#' errors that accommodate residual heteroscedasticity.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param est vector of parameter estimates
#' @param se vector of standard errors
#' @param X matrix of predictor values
#'
#' @return
#' Returns a matrix. The first row is for the intercept with one additional
#' row per predictor. The matrix has the following columns:
#' * Estimate - OLS estimate
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' est <- c(4.1, 4.7, 4.9, 5.7, 6.6, 7.3)
#' se <- c(1.2, 1.5, 1.3, 1.8, 2.0, 2.6)
#' x1 <- c(10, 20, 30, 40, 50, 60)
#' x2 <- c(1, 1, 1, 0, 0, 0)
#' X <- matrix(cbind(x1, x2), 6, 2)
#' meta.lm.gen(.05, est, se, X)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # b0 3.5333333 4.37468253 0.80767766 0.419 -5.0408869 12.1075535
#' # b1 0.0600000 0.09058835 0.66233679 0.508 -0.1175499 0.2375499
#' # b2 -0.1666667 2.81139793 -0.05928249 0.953 -5.6769054 5.3435720
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
meta.lm.gen <- function(alpha, est, se, X) {
m <- length(est)
z <- qnorm(1 - alpha/2)
x0 <- matrix(c(1), m, 1)
X <- cbind(x0, X)
q <- ncol(X)
M <- solve(t(X)%*%X)
b <- M%*%t(X)%*%est
V <- diag(se^2)
seb <- sqrt(diag(M%*%t(X)%*%V%*%X%*%M))
ll <- b - z*seb
ul <- b + z*seb
z <- b/seb
p <- round(2*(1 - pnorm(abs(z))), digits = 3)
out <- cbind(b, seb, z, p, ll, ul)
row <- paste0("b", seq_len(q) - 1)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- row
return(out)
}
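# A minimal sketch (run interactively, not part of the package API):
# meta.lm.gen applies the same OLS-plus-sandwich machinery to any set of
# (estimate, SE) pairs. Feeding it hand-computed log odds ratios and their
# SEs reproduces the first six columns of the meta.lm.odds example:
#
# n1 <- c(204, 201, 932, 130, 77)
# n2 <- c(106, 103, 415, 132, 83)
# f1 <- c(24, 40, 93, 14, 5)
# f2 <- c(12, 9, 28, 3, 1)
# lor <- log((f1 + .5)*(n2 - f2 + .5)/((f2 + .5)*(n1 - f1 + .5)))
# se.lor <- sqrt(1/(f1 + .5) + 1/(f2 + .5) + 1/(n1 - f1 + .5) + 1/(n2 - f2 + .5))
# X <- matrix(cbind(c(4, 4, 5, 3, 26), c(1, 1, 1, 0, 0)), 5, 2)
# meta.lm.gen(.05, lor, se.lor, X)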
# replicate.plot
#' Plot to compare estimates from original and follow-up studies
#'
#'
#' @description
#' Generates a basic plot using ggplot2 to visualize the estimates from
#' an original study and a follow-up study.
#'
#'
#' @param result - a result matrix from any of the replicate functions in vcmeta
#' @param focus - Optional specification of the focus of the plot;
#' defaults to 'Both'
#' * Both - plots each estimate, the difference, and the average
#' * Difference - plots each estimate and the difference between them
#' * Average - plots each estimate and the average effect size
#' @param reference_line - Optional x-value for a reference line. Only applies
#' if focus is 'Average' or 'Both'. Defaults to NULL, in which case a
#' reference line is not drawn.
#' @param diamond_height - Optional height of the diamond representing average
#' effect size. Only applies if focus is 'Average' or 'Both'.
#' Defaults to 0.2
#' @param difference_axis_ticks - Optional requested number of ticks on the
#' difference axis. Only applies if focus is 'Difference' or 'Both'.
#' Defaults to 5.
#' @param ggtheme - optional ggplot2 theme object; defaults to theme_classic()
#'
#'
#' @return
#' Returns a ggplot object. If stored, can be further customized via
#' the ggplot API
#'
#'
#' @examples
#' # Compare Damisch et al., 2010 to Calin-Jageman & Caldwell 2014
#' # Damisch et al., 2010, Exp 1, German participants made 10 mini-golf putts.
#' # Half were told they had a 'lucky' golf ball; half were not.
#' # Found a large but uncertain improvement in shots made in the luck condition
#' # Calin-Jageman & Caldwell, 2014, Exp 1, was a pre-registered replication with
#' # input from Damisch, though with English-speaking participants.
#' #
#' # Here we compare the effect sizes, in original units, for the two studies.
#' # Use the replicate.mean2 function because the design is a 2-group design.
#'
#' library(ggplot2)
#' damisch_v_calinjageman_raw <- replicate.mean2(
#' alpha = 0.05,
#' m11 = 6.42,
#' m12 = 4.75,
#' sd11 = 1.88,
#' sd12 = 2.15,
#' n11 = 14,
#' n12 = 14,
#' m21 = 4.73,
#' m22 = 4.62,
#' sd21 = 1.958,
#' sd22 = 2.12,
#' n21 = 66,
#' n22 = 58
#' )
#'
#' # View the comparison:
#' damisch_v_calinjageman_raw
#'
#'
#' # Now plot the comparison, focusing on the difference
#' replicate.plot(damisch_v_calinjageman_raw, focus = "Difference")
#'
#' # Plot the comparison, focusing on the average
#' replicate.plot(damisch_v_calinjageman_raw,
#' focus = "Average",
#' reference_line = 0,
#' diamond_height = 0.1
#' )
#'
#'
#' # Plot the comparison with both difference and average.
#' # In this case, store the plot for manipulation
#' myplot <- replicate.plot(
#' damisch_v_calinjageman_raw,
#' focus = "Both",
#' reference_line = 0
#' )
#'
#' # View the stored plot
#' myplot
#'
#' # Change x-labels and study labels
#' myplot <- myplot + xlab("Difference in Putts Made, Lucky - Control")
#' myplot <- myplot + scale_y_discrete(
#' labels = c(
#' "Average",
#' "Difference",
#' "Calin-Jageman & Caldwell, 2014",
#' "Damisch et al., 2010"
#' )
#' )
#'
#' # View the updated plot
#' myplot
#'
#' @importFrom ggplot2 ggplot
#' @importFrom ggplot2 aes
#' @importFrom ggplot2 aes_string
#' @importFrom ggplot2 geom_linerange
#' @importFrom ggplot2 geom_segment
#' @importFrom ggplot2 geom_vline
#' @importFrom ggplot2 geom_point
#' @importFrom ggplot2 geom_polygon
#' @importFrom ggplot2 sec_axis
#' @importFrom ggplot2 theme
#' @importFrom ggplot2 ylab
#' @importFrom ggplot2 element_blank
#' @importFrom ggplot2 scale_x_continuous
#' @export
replicate.plot <- function(
result,
focus = c("Both", "Difference", "Average"),
reference_line = NULL,
diamond_height = 0.2,
difference_axis_ticks = 5,
ggtheme = ggplot2::theme_classic()
) {
# Options ---------------------------------------------
focus <- match.arg(focus)
plot_average <- focus != "Difference"
plot_difference <- focus != "Average"
is_log <- "exp(Estimate)" %in% colnames(result)
diff_axis_y <- 0
# Definitions ------------------------------------------
# Row names
comparison_name <- "Original:"
ref_name <- "Follow-up:"
avg_name <- "Average:"
diff_name <- "Original - Follow-up:"
# Column names
se_name <- "SE"
if (is_log) {
es_name <- "exp(Estimate)"
ll_name <- "exp(LL)"
ul_name <- "exp(UL)"
} else {
es_name <- "Estimate"
ll_name <- "LL"
ul_name <- "UL"
}
col_adj <- c(es_name, ll_name, ul_name)
# Data prep --------------------------------------------
# Convert matrix to data frame, drop unused columns, save names to name
as_df <- as.data.frame(result)
as_df <- as_df[ , c(col_adj, se_name)]
as_df$name <- row.names(result)
# Filter out average or difference based on focus
if(!plot_average) {
as_df <- as_df[as_df$name != avg_name, ]
}
if(!plot_difference) {
as_df <- as_df[as_df$name != diff_name, ]
}
# Convert names to factor with levels in rev order for plotting
as_df$name <- factor(
as_df$name,
levels = rev(as_df$name)
)
# Set y values and size
as_df$y_value <- as.integer(as_df$name)
as_df$size <- max(as_df[[se_name]]) - as_df[[se_name]] + min(as_df[[se_name]])
as_df$diff_axis_y <- diff_axis_y
if (plot_difference) {
# Shift difference by reference value
ref_es <- as_df[ref_name, es_name]
as_df[diff_name, col_adj] <- as_df[diff_name, col_adj] + ref_es
# Calculate floating axis breaks
diff_start <- min(0, as_df[[ll_name]]-ref_es)
diff_end <- max(0, as_df[[ul_name]]-ref_es)
diff_breaks <- pretty(c(diff_start, diff_end), n = difference_axis_ticks)
}
if (plot_average) {
# Generate polygon data
row_avg <- as.list(as_df[avg_name, ])
diamond_xs <- c(
row_avg[[ll_name]],
row_avg[[es_name]],
row_avg[[ul_name]],
row_avg[[es_name]]
)
d_y <- row_avg$y_value
diamond_ys <- c(d_y, d_y - diamond_height , d_y, d_y + diamond_height)
poly_data <- data.frame(x = diamond_xs, y = diamond_ys)
}
# Make graph ----------------------------------------------
# Basic graph
myplot <- ggplot(
data = as_df,
aes_string(x = es_name, y = "name")
)
myplot <- myplot + ggtheme
# CIs
myplot <- myplot + geom_linerange(aes_string(xmin = ll_name, xmax = ul_name))
# For differences, plot reference lines
if (plot_difference) {
# Comparison line
myplot <- myplot + geom_segment(
data = as_df[diff_name, ],
linetype = "dotted",
color = "black",
aes_string(
x = es_name,
xend = es_name,
y = "y_value",
yend = "diff_axis_y"
)
)
# Reference line
myplot <- myplot + geom_segment(
data = as_df[ref_name, ],
linetype = "dashed",
color = "black",
aes_string(
x = es_name,
xend = es_name,
y = "y_value",
yend = "diff_axis_y"
)
)
}
if (plot_average & !is.null(reference_line)) {
myplot <- myplot + geom_vline(
xintercept = reference_line,
linetype = "dotted"
)
}
# Effect sizes
myplot <- myplot + geom_point(
aes_string(
colour = "name",
fill = "name",
size = "size"
),
shape = "square filled"
)
# For averages, plot diamond for effect size
if (plot_average) {
myplot <- myplot + geom_polygon(
data = poly_data,
aes_string(x = "x", y = "y")
)
}
# For differences, display floating difference axis
if (plot_difference) {
# Specify axis
myplot <- myplot + scale_x_continuous(
position = "top",
sec.axis = sec_axis(
name = "Difference",
trans = ~.-ref_es,
breaks = diff_breaks
)
)
# Floating difference axis
myplot <- myplot + geom_segment(
linetype = "solid",
color = "black",
aes(
x = min(diff_breaks)+ref_es,
xend = max(diff_breaks)+ref_es,
y = diff_axis_y,
yend = diff_axis_y
)
)
}
# Clean up axis lines and hide legends
myplot <- myplot + theme(axis.line.y.left = element_blank())
myplot <- myplot + theme(axis.ticks.y.left = element_blank())
if(plot_difference) {
myplot <- myplot + theme(axis.line.x.bottom = element_blank())
}
myplot <- myplot + ylab("")
myplot <- myplot + theme(legend.position = "none")
return(myplot)
}
# meta.ave.plot
#' Forest plot for average effect sizes
#'
#'
#' @description
#' Generates a forest plot to visualize effect sizes estimates and overall
#' averages from the meta.ave functions in vcmeta. If the column
#' exp(Estimate) is present, this function plots the exponentiated
#' effect size and CI found in columns exp(Estimate), exp(LL), and exp(UL).
#' Otherwise, this function plots the effect size and CI found in
#' the columns Estimate, LL, and UL.
#'
#'
#' @param result - a result matrix from any of the meta.ave functions in vcmeta
#' @param reference_line - Optional x-value for a reference line. Defaults to
#' NULL, in which case a reference line is not drawn.
#' @param diamond_height - Optional height of the diamond representing the
#' average effect size. Defaults to 0.2
#' @param ggtheme - optional ggplot2 theme object; defaults to theme_classic()
#'
#'
#' @return
#' Returns a ggplot object. If stored, can be further customized via
#' the ggplot API
#'
#' @examples
#' # Plot results from meta.ave.mean2
#' m1 <- c(7.4, 6.9)
#' m2 <- c(6.3, 5.7)
#' sd1 <- c(1.72, 1.53)
#' sd2 <- c(2.35, 2.04)
#' n1 <- c(40, 60)
#' n2 <- c(40, 60)
#' result <- meta.ave.mean2(.05, m1, m2, sd1, sd2, n1, n2, bystudy = TRUE)
#' meta.ave.plot(result, reference_line = 0)
#'
#'
#' # Plot results from meta.ave.meanratio2
#' # Note that this plots the exponentiated effect size and CI
#' m1 <- c(53, 60, 53, 57)
#' m2 <- c(55, 62, 58, 61)
#' sd1 <- c(4.1, 4.2, 4.5, 4.0)
#' sd2 <- c(4.2, 4.7, 4.9, 4.8)
#' cor <- c(.7, .7, .8, .85)
#' n <- c(30, 50, 30, 70)
#' result <- meta.ave.meanratio.ps(.05, m1, m2, sd1, sd2, cor, n, bystudy = TRUE)
#' myplot <- meta.ave.plot(result, reference_line = 1)
#' myplot
#'
#' # Change x-scale to log2
#' library(ggplot2)
#' myplot <- myplot + scale_x_continuous(
#' trans = 'log2',
#' limits = c(0.75, 1.25),
#' name = "Estimated Ratio of Means, Log2 Scale"
#' )
#' myplot
#'
#'
#' @importFrom ggplot2 ggplot
#' @importFrom ggplot2 aes
#' @importFrom ggplot2 aes_string
#' @importFrom ggplot2 geom_linerange
#' @importFrom ggplot2 geom_segment
#' @importFrom ggplot2 geom_vline
#' @importFrom ggplot2 geom_point
#' @importFrom ggplot2 geom_polygon
#' @importFrom ggplot2 sec_axis
#' @importFrom ggplot2 theme
#' @importFrom ggplot2 ylab
#' @importFrom ggplot2 xlab
#' @importFrom ggplot2 element_blank
#' @importFrom ggplot2 scale_x_continuous
#' @importFrom ggplot2 scale_y_continuous
#' @importFrom utils head
#' @export
meta.ave.plot <- function(
result,
reference_line = NULL,
diamond_height = 0.2,
ggtheme = ggplot2::theme_classic()
) {
# Options ----------------------------------------------
is_log <- "exp(Estimate)" %in% colnames(result)
# Definitions ------------------------------------------
avg_name <- "Average"
se_name <- "SE"
if (is_log) {
es_name <- "exp(Estimate)"
ll_name <- "exp(LL)"
ul_name <- "exp(UL)"
} else {
es_name <- "Estimate"
ll_name <- "LL"
ul_name <- "UL"
}
# Data prep --------------------------------------------
# Convert matrix to data frame
as_df <- as.data.frame(result)
# Move average to bottom
as_df <- rbind(
as_df[-1, ],
head(as_df, 1)
)
# Set name column and levels for proper order
as_df$name <- factor(
row.names(as_df),
levels = rev(row.names(as_df))
)
# Set y values and size
as_df$y_value <- as.integer(as_df$name)
as_df$size <- max(as_df[ , se_name]) - as_df[, se_name] + min(as_df[ , se_name])
# Generate polygon data
row_avg <- as.list(as_df[avg_name, ])
diamond_xs <- c(
row_avg[[ll_name]],
row_avg[[es_name]],
row_avg[[ul_name]],
row_avg[[es_name]]
)
d_y <- row_avg$y_value
diamond_ys <- c(d_y, d_y - diamond_height , d_y, d_y + diamond_height)
poly_data <- data.frame(x = diamond_xs, y = diamond_ys)
# Make graph ----------------------------------------------
# Basic graph
myplot <- ggplot(
data = as_df,
aes_string(x = es_name, y = "name")
)
myplot <- myplot + ggtheme
# Optional reference line
if (!is.null(reference_line)) {
myplot <- myplot + geom_vline(
xintercept = reference_line,
linetype = "dotted"
)
}
# CIs
myplot <- myplot + geom_linerange(aes_string(xmin = ll_name, xmax = ul_name))
# Effect sizes
myplot <- myplot + geom_point(
aes_string(
colour = "name",
fill = "name",
size = "size"
),
shape = "square filled"
)
# Diamond for average effect size
myplot <- myplot + geom_polygon(
data = poly_data,
aes_string(x = "x", y = "y")
)
# Clean up axis lines and hide legends
myplot <- myplot + ylab("")
myplot <- myplot + theme(legend.position = "none")
return(myplot)
}
# replicate.mean2 ============================================================
#' Compares and combines 2-group mean differences in original and follow-up studies
#'
#'
#' @description
#' This function computes confidence intervals from an original study and a
#' follow-up study where the effect size is a 2-group mean difference. Confidence
#' intervals for the difference and average effect size are also computed.
#' Equality of variances within or across studies is not assumed. A
#' Satterthwaite adjustment to the degrees of freedom is used to improve the
#' accuracy of the confidence intervals. The same results can be obtained using
#' the \link[vcmeta]{meta.lc.mean2} function with appropriate contrast coefficients.
#' The confidence level for the difference is 1 - 2*alpha, which is recommended for
#' equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m11 estimated mean for group 1 in original study
#' @param m12 estimated mean for group 2 in original study
#' @param sd11 estimated SD for group 1 in original study
#' @param sd12 estimated SD for group 2 in original study
#' @param n11 sample size for group 1 in original study
#' @param n12 sample size for group 2 in original study
#' @param m21 estimated mean for group 1 in follow-up study
#' @param m22 estimated mean for group 2 in follow-up study
#' @param sd21 estimated SD for group 1 in follow-up study
#' @param sd22 estimated SD for group 2 in follow-up study
#' @param n21 sample size for group 1 in follow-up study
#' @param n22 sample size for group 2 in follow-up study
#'
#'
#' @return A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in mean differences
#' * Row 4 estimates the average mean difference
#'
#'
#' The columns are:
#' * Estimate - mean difference estimate (single study, difference, average)
#' * SE - standard error
#' * t - t-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' replicate.mean2(.05, 21.9, 16.1, 3.82, 3.21, 40, 40,
#' 25.2, 19.1, 3.98, 3.79, 75, 75)
#'
#' # Should return:
#' # Estimate SE t p
#' # Original: 5.80 0.7889312 7.3517180 1.927969e-10
#' # Follow-up: 6.10 0.6346075 9.6122408 0.000000e+00
#' # Original - Follow-up: -0.30 1.0124916 -0.2962988 7.673654e-01
#' # Average: 5.95 0.5062458 11.7531843 0.000000e+00
#' # LL UL df
#' # Original: 4.228624 7.371376 75.75255
#' # Follow-up: 4.845913 7.354087 147.64728
#' # Original - Follow-up: -1.974571 1.374571 169.16137
#' # Average: 4.950627 6.949373 169.16137
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @importFrom stats pt
#' @export
replicate.mean2 <- function(alpha, m11, m12, sd11, sd12, n11, n12, m21, m22, sd21, sd22, n21, n22){
v11 <- sd11^2; v12 <- sd12^2
v21 <- sd21^2; v22 <- sd22^2
est1 <- m11 - m12
est2 <- m21 - m22
est3 <- est1 - est2
est4 <- (est1 + est2)/2
se1 <- sqrt(v11/n11 + v12/n12)
se2 <- sqrt(v21/n21 + v22/n22)
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
v1 <- v11^2/(n11^3 - n11^2)
v2 <- v12^2/(n12^3 - n12^2)
v3 <- v21^2/(n21^3 - n21^2)
v4 <- v22^2/(n22^3 - n22^2)
df1 <- (se1^4)/(v1 + v2)
df2 <- (se2^4)/(v3 + v4)
df3 <- (se3^4)/(v1 + v2 + v3 + v4)
t1 <- est1/se1
t2 <- est2/se2
t3 <- est3/se3
t4 <- est4/se4
pval1 <- 2*(1 - pt(abs(t1),df1))
pval2 <- 2*(1 - pt(abs(t2),df2))
pval3 <- 2*(1 - pt(abs(t3),df3))
pval4 <- 2*(1 - pt(abs(t4),df3))
tcrit1 <- qt(1 - alpha/2, df1)
tcrit2 <- qt(1 - alpha/2, df2)
tcrit3 <- qt(1 - alpha, df3)
tcrit4 <- qt(1 - alpha/2, df3)
ll1 <- est1 - tcrit1*se1; ul1 <- est1 + tcrit1*se1
ll2 <- est2 - tcrit2*se2; ul2 <- est2 + tcrit2*se2
ll3 <- est3 - tcrit3*se3; ul3 <- est3 + tcrit3*se3
ll4 <- est4 - tcrit4*se4; ul4 <- est4 + tcrit4*se4
out1 <- t(c(est1, se1, t1, pval1, ll1, ul1, df1))
out2 <- t(c(est2, se2, t2, pval2, ll2, ul2, df2))
out3 <- t(c(est3, se3, t3, pval3, ll3, ul3, df3))
out4 <- t(c(est4, se4, t4, pval4, ll4, ul4, df3))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "t", "p", "LL", "UL", "df")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
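# A minimal sketch (base R; run interactively, not part of the package API):
# the df values above are Welch-Satterthwaite approximations. Note that
# v11^2/(n11^3 - n11^2) equals (v11/n11)^2/(n11 - 1), so df1 is the usual
# Welch formula df = (v1/n1 + v2/n2)^2/((v1/n1)^2/(n1-1) + (v2/n2)^2/(n2-1)).
# Checking the original-study df from the example (printed as 75.75255):
#
# v1 <- 3.82^2/40; v2 <- 3.21^2/40
# (v1 + v2)^2/(v1^2/39 + v2^2/39)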
# replicate.mean.ps ============================================================
#' Compares and combines paired-samples mean differences in original and follow-up studies
#'
#'
#' @description
#' This function computes confidence intervals from an original study and a
#' follow-up study where the effect size is a paired-samples mean difference.
#' Confidence intervals for the difference and average effect size are also
#' computed. Equality of variances within or across studies is not assumed.
#' A Satterthwaite adjustment to the degrees of freedom is used to
#' improve the accuracy of the confidence intervals for the difference and
#' average. The same results can be obtained using the \link[vcmeta]{meta.lc.mean.ps}
#' function with appropriate contrast coefficients. The confidence level for
#' the difference is 1 - 2*alpha, which is recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m11 estimated mean for group 1 in original study
#' @param m12 estimated mean for group 2 in original study
#' @param sd11 estimated SD for group 1 in original study
#' @param sd12 estimated SD for group 2 in original study
#' @param cor1 estimated correlation of paired observations in original study
#' @param n1 sample size in original study
#' @param m21 estimated mean for group 1 in follow-up study
#' @param m22 estimated mean for group 2 in follow-up study
#' @param sd21 estimated SD for group 1 in follow-up study
#' @param sd22 estimated SD for group 2 in follow-up study
#' @param n2 sample size in follow-up study
#' @param cor2 estimated correlation of paired observations in follow-up study
#'
#'
#' @return
#' A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in mean differences
#' * Row 4 estimates the average mean difference
#'
#'
#' The columns are:
#' * Estimate - mean difference estimate (single study, difference, average)
#' * SE - standard error
#' * t - t-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' replicate.mean.ps(.05, 86.22, 70.93, 14.89, 12.32, .765, 20,
#' 84.81, 77.24, 15.68, 16.95, .702, 75)
#'
#' # Should return:
#' # Estimate SE t p
#' # Original: 15.29 2.154344 7.097288 9.457592e-07
#' # Follow-up: 7.57 1.460664 5.182575 1.831197e-06
#' # Original - Follow-up: 7.72 2.602832 2.966000 5.166213e-03
#' # Average: 11.43 1.301416 8.782740 1.010232e-10
#' # LL UL df
#' # Original: 10.780906 19.79909 19.00000
#' # Follow-up: 4.659564 10.48044 74.00000
#' # Original - Follow-up: 3.332885 12.10712 38.40002
#' # Average: 8.796322 14.06368 38.40002
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @importFrom stats pt
#' @export
replicate.mean.ps <- function(alpha, m11, m12, sd11, sd12, cor1, n1, m21, m22, sd21, sd22, cor2, n2) {
v11 <- sd11^2; v12 <- sd12^2
v21 <- sd21^2; v22 <- sd22^2
vd1 <- v11 + v12 - 2*cor1*sd11*sd12
vd2 <- v21 + v22 - 2*cor2*sd21*sd22
est1 <- m11 - m12
est2 <- m21 - m22
est3 <- est1 - est2
est4 <- (est1 + est2)/2
se1 <- sqrt(vd1/n1)
se2 <- sqrt(vd2/n2)
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
df1 <- n1 - 1
df2 <- n2 - 1
df3 <- se3^4/(se1^4/df1 + se2^4/df2)
t1 <- est1/se1
t2 <- est2/se2
t3 <- est3/se3
t4 <- est4/se4
pval1 <- 2*(1 - pt(abs(t1), df1))
pval2 <- 2*(1 - pt(abs(t2), df2))
pval3 <- 2*(1 - pt(abs(t3), df3))
pval4 <- 2*(1 - pt(abs(t4), df3))
tcrit1 <- qt(1 - alpha/2, df1)
tcrit2 <- qt(1 - alpha/2, df2)
tcrit3 <- qt(1 - alpha, df3)
tcrit4 <- qt(1 - alpha/2, df3)
ll1 <- est1 - tcrit1*se1; ul1 <- est1 + tcrit1*se1
ll2 <- est2 - tcrit2*se2; ul2 <- est2 + tcrit2*se2
ll3 <- est3 - tcrit3*se3; ul3 <- est3 + tcrit3*se3
ll4 <- est4 - tcrit4*se4; ul4 <- est4 + tcrit4*se4
out1 <- t(c(est1, se1, t1, pval1, ll1, ul1, df1))
out2 <- t(c(est2, se2, t2, pval2, ll2, ul2, df2))
out3 <- t(c(est3, se3, t3, pval3, ll3, ul3, df3))
out4 <- t(c(est4, se4, t4, pval4, ll4, ul4, df3))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "t", "p", "LL", "UL", "df")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
# replicate.stdmean2 ============================================================
#' Compares and combines 2-group standardized mean differences in original and
#' follow-up studies
#'
#'
#' @description
#' This function computes confidence intervals from an original study and a
#' follow-up study where the effect size is a 2-group standardized mean
#' difference. Confidence intervals for the difference and average effect
#' size are also computed. Equality of variances within or across studies
#' is not assumed. The same results can be obtained using the
#' \link[vcmeta]{meta.lc.stdmean2} function with appropriate contrast coefficients.
#' The confidence level for the difference is 1 - 2*alpha, which is recommended
#' for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m11 estimated mean for group 1 in original study
#' @param m12 estimated mean for group 2 in original study
#' @param sd11 estimated SD for group 1 in original study
#' @param sd12 estimated SD for group 2 in original study
#' @param n11 sample size for group 1 in original study
#' @param n12 sample size for group 2 in original study
#' @param m21 estimated mean for group 1 in follow-up study
#' @param m22 estimated mean for group 2 in follow-up study
#' @param sd21 estimated SD for group 1 in follow-up study
#' @param sd22 estimated SD for group 2 in follow-up study
#' @param n21 sample size for group 1 in follow-up study
#' @param n22 sample size for group 2 in follow-up study
#'
#'
#' @return
#' A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in standardized mean differences
#' * Row 4 estimates the average standardized mean difference
#'
#'
#' The columns are:
#' * Estimate - standardized mean difference estimate (single study, difference, average)
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.stdmean2(.05, 21.9, 16.1, 3.82, 3.21, 40, 40,
#' 25.2, 19.1, 3.98, 3.79, 75, 75)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Original: 1.62803662 0.2594668 1.1353486 2.1524396
#' # Follow-up: 1.56170447 0.1870576 1.2030461 1.9362986
#' # Original - Follow-up: 0.07422178 0.3198649 -0.4519092 0.6003527
#' # Average: 1.59487055 0.1599325 1.2814087 1.9083324
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
replicate.stdmean2 <- function(alpha, m11, m12, sd11, sd12, n11, n12, m21, m22, sd21, sd22, n21, n22) {
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
v11 <- sd11^2
v12 <- sd12^2
v21 <- sd21^2
v22 <- sd22^2
df11 <- n11 - 1
df12 <- n12 - 1
df21 <- n21 - 1
df22 <- n22 - 1
s1 <- sqrt((v11 + v12)/2)
s2 <- sqrt((v21 + v22)/2)
a1 <- 1 - 3/(4*(n11 + n12) - 9)
a2 <- 1 - 3/(4*(n21 + n22) - 9)
est1 <- (m11 - m12)/s1
est2 <- (m21 - m22)/s2
est3 <- est1 - est2
est4 <- (a1*est1 + a2*est2)/2
se1 <- sqrt(est1^2*(v11^2/df11 + v12^2/df12)/(8*s1^4) + (v11/df11 + v12/df12)/s1^2)
se2 <- sqrt(est2^2*(v21^2/df21 + v22^2/df22)/(8*s2^4) + (v21/df21 + v22/df22)/s2^2)
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
ll1 <- est1 - zcrit1*se1; ul1 <- est1 + zcrit1*se1
ll2 <- est2 - zcrit1*se2; ul2 <- est2 + zcrit1*se2
ll3 <- est3 - zcrit2*se3; ul3 <- est3 + zcrit2*se3
ll4 <- est4 - zcrit1*se4; ul4 <- est4 + zcrit1*se4
out1 <- t(c(a1*est1, se1, ll1, ul1))
out2 <- t(c(a2*est2, se2, ll2, ul2))
out3 <- t(c(est3, se3, ll3, ul3))
out4 <- t(c(est4, se4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
# replicate.stdmean.ps ============================================================
#' Compares and combines paired-samples standardized mean differences in original and
#' follow-up studies
#'
#'
#' @description
#' This function computes confidence intervals from an original study and a follow-up
#' study where the effect size is a paired-samples standardized mean difference.
#' Confidence intervals for the difference and average effect size are also computed.
#' Equality of variances within or across studies is not assumed. The same results
#' can be obtained using the \link[vcmeta]{meta.lc.stdmean.ps} function with
#' appropriate contrast coefficients. The confidence level for the difference is
#' 1 - 2*alpha, which is recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m11 estimated mean for group 1 in original study
#' @param m12 estimated mean for group 2 in original study
#' @param sd11 estimated SD for group 1 in original study
#' @param sd12 estimated SD for group 2 in original study
#' @param cor1 estimated correlation of paired observations in original study
#' @param n1 sample size in original study
#' @param m21 estimated mean for group 1 in follow-up study
#' @param m22 estimated mean for group 2 in follow-up study
#' @param sd21 estimated SD for group 1 in follow-up study
#' @param sd22 estimated SD for group 2 in follow-up study
#' @param cor2 estimated correlation of paired observations in follow-up study
#' @param n2 sample size in follow-up study
#'
#'
#' @return
#' A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in standardized mean differences
#' * Row 4 estimates the average standardized mean difference
#'
#'
#' The columns are:
#' * Estimate - standardized mean difference estimate (single study, difference, average)
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.stdmean.ps(alpha = .05,
#' m11 = 86.22, m12 = 70.93, sd11 = 14.89, sd12 = 12.32, cor1 = .765, n1 = 20,
#' m21 = 84.81, m22 = 77.24, sd21 = 15.68, sd22 = 16.95, cor2 = .702, n2 = 75)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Original:              1.0890300 0.22915553 0.6697353 1.5680085
#' # Follow-up: 0.4604958 0.09590506 0.2756687 0.6516096
#' # Original - Follow-up: 0.6552328 0.24841505 0.2466264 1.0638392
#' # Average: 0.7747629 0.12420752 0.5313206 1.0182052
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
replicate.stdmean.ps <- function(alpha, m11, m12, sd11, sd12, cor1, n1, m21, m22, sd21, sd22, cor2, n2) {
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
v11 <- sd11^2
v12 <- sd12^2
v21 <- sd21^2
v22 <- sd22^2
df1 <- n1 - 1
df2 <- n2 - 1
s1 <- sqrt((v11 + v12)/2)
s2 <- sqrt((v21 + v22)/2)
vd1 <- v11 + v12 - 2*cor1*sd11*sd12
vd2 <- v21 + v22 - 2*cor2*sd21*sd22
a1 <- sqrt((n1 - 2)/df1)
a2 <- sqrt((n2 - 2)/df2)
est1 <- (m11 - m12)/s1
est2 <- (m21 - m22)/s2
est3 <- est1 - est2
est4 <- (a1*est1 + a2*est2)/2
se1 <- sqrt(est1^2*(v11^2 + v12^2 + 2*cor1^2*v11*v12)/(8*df1*s1^4) + vd1/(df1*s1^2))
se2 <- sqrt(est2^2*(v21^2 + v22^2 + 2*cor2^2*v21*v22)/(8*df2*s2^4) + vd2/(df2*s2^2))
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
ll1 <- est1 - zcrit1*se1; ul1 <- est1 + zcrit1*se1
ll2 <- est2 - zcrit1*se2; ul2 <- est2 + zcrit1*se2
ll3 <- est3 - zcrit2*se3; ul3 <- est3 + zcrit2*se3
ll4 <- est4 - zcrit1*se4; ul4 <- est4 + zcrit1*se4
out1 <- t(c(a1*est1, se1, ll1, ul1))
out2 <- t(c(a2*est2, se2, ll2, ul2))
out3 <- t(c(est3, se3, ll3, ul3))
out4 <- t(c(est4, se4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Orginal:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
# replicate.cor ============================================================
#' Compares and combines Pearson or partial correlations in original and
#' follow-up studies
#'
#'
#' @description
#' This function can be used to compare and combine Pearson or partial
#' correlations from an original study and a follow-up study. The
#' confidence level for the difference is 1 - 2*alpha, which is recommended
#' for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param cor1 estimated Pearson correlation in original study
#' @param n1 sample size in original study
#' @param cor2 estimated Pearson correlation in follow-up study
#' @param n2 sample size in follow-up study
#' @param s number of control variables in each study (0 for Pearson)
#'
#'
#' @return
#' A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in correlations
#' * Row 4 estimates the average correlation
#'
#'
#' The columns are:
#' * Estimate - Pearson or partial correlation estimate (single study, difference, average)
#' * SE - standard error
#' * z - t-value for rows 1 and 2; z-value for rows 3 and 4
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.cor(.05, .598, 80, .324, 200, 0)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # Original: 0.598 0.07320782 6.589418 4.708045e-09 0.4355043 0.7227538
#' # Follow-up: 0.324 0.06376782 4.819037 2.865955e-06 0.1939787 0.4428347
#' # Original - Follow-up: 0.274 0.09708614 2.633335 8.455096e-03 0.1065496 0.4265016
#' # Average: 0.461 0.04854307 7.634998 2.264855e-14 0.3725367 0.5411607
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats pt
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
replicate.cor <- function(alpha, cor1, n1, cor2, n2, s) {
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
zr1 <- log((1 + cor1)/(1 - cor1))/2
zr2 <- log((1 + cor2)/(1 - cor2))/2
se1 <- sqrt((1 - cor1^2)^2/(n1 - 3 - s))
se2 <- sqrt((1 - cor2^2)^2/(n2 - 3 - s))
dif <- cor1 - cor2
ave <- (cor1 + cor2)/2
ave.z <- log((1 + ave)/(1 - ave))/2
se1.z <- sqrt(1/((n1 - 3 - s)))
se2.z <- sqrt(1/((n2 - 3 - s)))
se3 <- sqrt(se1^2 + se2^2)
se4 <- sqrt(se1^2 + se2^2)/2
se4.z <- sqrt(((se1^2 + se2^2)/4)/(1 - ave^2))
t1 <- cor1*sqrt(n1 - 2)/sqrt(1 - cor1^2)
t2 <- cor2*sqrt(n2 - 2)/sqrt(1 - cor2^2)
t3 <- (zr1 - zr2)/sqrt(se1.z^2 + se2.z^2)
t4 <- (zr1 + zr2)/sqrt(se1.z^2 + se2.z^2)
pval1 <- 2*(1 - pt(abs(t1), n1 - 2 - s))
pval2 <- 2*(1 - pt(abs(t2), n2 - 2 - s))
pval3 <- 2*(1 - pnorm(abs(t3)))
pval4 <- 2*(1 - pnorm(abs(t4)))
ll0a <- zr1 - zcrit1*se1.z; ul0a <- zr1 + zcrit1*se1.z
ll1a <- (exp(2*ll0a) - 1)/(exp(2*ll0a) + 1)
ul1a <- (exp(2*ul0a) - 1)/(exp(2*ul0a) + 1)
ll0b <- zr1 - zcrit2*se1.z; ul0b <- zr1 + zcrit2*se1.z
ll1b <- (exp(2*ll0b) - 1)/(exp(2*ll0b) + 1)
ul1b <- (exp(2*ul0b) - 1)/(exp(2*ul0b) + 1)
ll0a <- zr2 - zcrit1*se2.z; ul0a <- zr2 + zcrit1*se2.z
ll2a <- (exp(2*ll0a) - 1)/(exp(2*ll0a) + 1)
ul2a <- (exp(2*ul0a) - 1)/(exp(2*ul0a) + 1)
ll0b <- zr2 - zcrit2*se2.z; ul0b <- zr2 + zcrit2*se2.z
ll2b <- (exp(2*ll0b) - 1)/(exp(2*ll0b) + 1)
ul2b <- (exp(2*ul0b) - 1)/(exp(2*ul0b) + 1)
ll3 <- dif - sqrt((cor1 - ll1b)^2 + (ul2b - cor2)^2)
ul3 <- dif + sqrt((ul1b - cor1)^2 + (cor2 - ll2b)^2)
ll0 <- ave.z - zcrit1*se4.z
ul0 <- ave.z + zcrit1*se4.z
ll4 <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul4 <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
out1 <- t(c(cor1, se1, t1, pval1, ll1a, ul1a))
out2 <- t(c(cor2, se2, t2, pval2, ll2a, ul2a))
out3 <- t(c(dif, se3, t3, pval3, ll3, ul3))
out4 <- t(c(ave, se4, t4, pval4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
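# A minimal sketch (base R; run interactively, not part of the package API):
# the single-study CIs above use Fisher's z transform, zr = atanh(r) (the
# log form in the code is algebraically identical), with SE 1/sqrt(n - 3 - s)
# and a tanh back-transform. Checking the original-study CI of the example:
#
# r <- .598; n <- 80; s <- 0
# zr <- atanh(r)
# half <- qnorm(.975)/sqrt(n - 3 - s)
# tanh(c(zr - half, zr + half))  # 0.4355043 0.7227538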
# replicate.prop2 ============================================================
#' Compares and combines 2-group proportion differences in original and
#' follow-up studies
#'
#'
#' @description
#' This function computes confidence intervals from an original study and a
#' follow-up study where the effect size is a 2-group proportion difference.
#' Confidence intervals for the difference and average effect size are also
#' computed. The same results can be obtained using the \link[vcmeta]{meta.lc.prop2}
#' function with appropriate contrast coefficients. The confidence level for
#' the difference is 1 - 2*alpha, which is recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f11 frequency count for group 1 in original study
#' @param f12 frequency count for group 2 in original study
#' @param n11 sample size for group 1 in original study
#' @param n12 sample size for group 2 in original study
#' @param f21 frequency count for group 1 in follow-up study
#' @param f22 frequency count for group 2 in follow-up study
#' @param n21 sample size for group 1 in follow-up study
#' @param n22 sample size for group 2 in follow-up study
#'
#'
#' @return A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in proportion differences
#' * Row 4 estimates the average proportion difference
#'
#'
#' The columns are:
#' * Estimate - proportion difference estimate (single study, difference, average)
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.prop2(.05, 21, 16, 40, 40, 19, 13, 60, 60)
#'
#' # Should return:
#' # Estimate SE z p
#' # Original: 0.11904762 0.10805233 1.1017590 0.2705665
#' # Follow-up: 0.09677419 0.07965047 1.2149858 0.2243715
#' # Original - Follow-up: 0.02359056 0.13542107 0.1742016 0.8617070
#' # Average: 0.11015594 0.06771053 1.6268656 0.1037656
#' # LL UL
#' # Original: -0.09273105 0.3308263
#' # Follow-up: -0.05933787 0.2528863
#' # Original - Follow-up: -0.19915727 0.2463384
#' # Average: -0.02255427 0.2428661
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @importFrom stats pnorm
#' @export
replicate.prop2 <- function(alpha, f11, f12, n11, n12, f21, f22, n21, n22){
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
p11.o <- (f11 + 1)/(n11 + 2)
p12.o <- (f12 + 1)/(n12 + 2)
p21.f <- (f21 + 1)/(n21 + 2)
p22.f <- (f22 + 1)/(n22 + 2)
est1 <- p11.o - p12.o
est2 <- p21.f - p22.f
p11 <- (f11 + .5)/(n11 + 1)
p12 <- (f12 + .5)/(n12 + 1)
p21 <- (f21 + .5)/(n21 + 1)
p22 <- (f22 + .5)/(n22 + 1)
est3 <- (p11 - p12) - (p21 - p22)
est4 <- ((p11 - p12) + (p21 - p22))/2
v11 <- p11.o*(1 - p11.o)/(n11 + 2)
v12 <- p12.o*(1 - p12.o)/(n12 + 2)
v21 <- p21.f*(1 - p21.f)/(n21 + 2)
v22 <- p22.f*(1 - p22.f)/(n22 + 2)
se1 <- sqrt(v11 + v12)
se2 <- sqrt(v21 + v22)
v11 <- p11*(1 - p11)/(n11 + 1)
v12 <- p12*(1 - p12)/(n12 + 1)
v21 <- p21*(1 - p21)/(n21 + 1)
v22 <- p22*(1 - p22)/(n22 + 1)
se3 <- sqrt(v11 + v12 + v21 + v22)
se4 <- se3/2
z1 <- est1/se1
z2 <- est2/se2
z3 <- est3/se3
z4 <- est4/se4
p1 <- 2*(1 - pnorm(abs(z1)))
p2 <- 2*(1 - pnorm(abs(z2)))
p3 <- 2*(1 - pnorm(abs(z3)))
p4 <- 2*(1 - pnorm(abs(z4)))
ll1 <- est1 - zcrit1*se1; ul1 <- est1 + zcrit1*se1
ll2 <- est2 - zcrit1*se2; ul2 <- est2 + zcrit1*se2
ll3 <- est3 - zcrit2*se3; ul3 <- est3 + zcrit2*se3
ll4 <- est4 - zcrit1*se4; ul4 <- est4 + zcrit1*se4
out1 <- t(c(est1, se1, z1, p1, ll1, ul1))
out2 <- t(c(est2, se2, z2, p2, ll2, ul2))
out3 <- t(c(est3, se3, z3, p3, ll3, ul3))
out4 <- t(c(est4, se4, z4, p4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
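# A minimal sketch (base R; run interactively, not part of the package API):
# the single-study rows above use an Agresti-Caffo style adjustment,
# (f + 1)/(n + 2) per group, while the difference and average rows use
# (f + .5)/(n + 1). Checking the "Original:" estimate from the example:
#
# (21 + 1)/(40 + 2) - (16 + 1)/(40 + 2)  # 0.1190476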
# replicate.oddsratio ============================================================
#' Compares and combines odds ratios in original and follow-up studies
#'
#' @description
#' This function computes confidence intervals for an odds ratio from an
#' original study and a follow-up study. Confidence intervals for the
#' ratio of odds ratios and geometric average odds ratio are also
#' computed. The confidence level for the ratio of ratios is 1 - 2*alpha, which
#' is recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param est1 estimate of log odds ratio in original study
#' @param se1 standard error of log odds ratio in original study
#' @param est2 estimate of log odds ratio in follow-up study
#' @param se2 standard error of log odds ratio in follow-up study
#'
#'
#' @return A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the ratio of odds ratios
#' * Row 4 estimates the geometric average odds ratio
#'
#'
#' The columns are:
#' * Estimate - odds ratio estimate (single study, ratio, average)
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * exp(LL) - exponentiated lower limit of the confidence interval
#' * exp(UL) - exponentiated upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.oddsratio(.05, 1.39, .302, 1.48, .206)
#'
#' # Should return:
#' # Estimate SE z p
#' # Original: 1.39000000 0.3020000 4.6026490 4.171509e-06
#' # Follow-up: 1.48000000 0.2060000 7.1844660 6.747936e-13
#' # Original/Follow-up: -0.06273834 0.3655681 -0.1716188 8.637372e-01
#' # Average: 0.36067292 0.1827840 1.9732190 4.847061e-02
#' # exp(LL) exp(UL)
#' # Original: 2.2212961 7.256583
#' # Follow-up: 2.9336501 6.578144
#' # Original/Follow-up:  0.5147653 1.713551
#' # Average: 1.0024257 2.052222
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @importFrom stats pnorm
#' @export
replicate.oddsratio <- function(alpha, est1, se1, est2, se2){
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
est3 <- log(est1) - log(est2)
est4 <- (log(est1) + log(est2))/2
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
z1 <- est1/se1
z2 <- est2/se2
z3 <- est3/se3
z4 <- est4/se4
p1 <- 2*(1 - pnorm(abs(z1)))
p2 <- 2*(1 - pnorm(abs(z2)))
p3 <- 2*(1 - pnorm(abs(z3)))
p4 <- 2*(1 - pnorm(abs(z4)))
ll1 <- exp(est1 - zcrit1*se1); ul1 <- exp(est1 + zcrit1*se1)
ll2 <- exp(est2 - zcrit1*se2); ul2 <- exp(est2 + zcrit1*se2)
ll3 <- exp(est3 - zcrit2*se3); ul3 <- exp(est3 + zcrit2*se3)
ll4 <- exp(est4 - zcrit1*se4); ul4 <- exp(est4 + zcrit1*se4)
out1 <- t(c(est1, se1, z1, p1, ll1, ul1))
out2 <- t(c(est2, se2, z2, p2, ll2, ul2))
out3 <- t(c(est3, se3, z3, p3, ll3, ul3))
out4 <- t(c(est4, se4, z4, p4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "z", "p", "exp(LL)", "exp(UL)")
rownames(out) <- c("Original:", "Follow-up:", "Original/Follow-up:", "Average:")
return(out)
}
# replicate.slope ============================================================
#' Compares and combines slope coefficients in original and follow-up studies
#'
#' @description
#' This function computes confidence intervals for a slope from the original and
#' follow-up studies, the difference in slopes, and the average of the slopes.
#' Equality of error variances across studies is not assumed. The confidence
#' interval for the difference uses a 1 - 2*alpha confidence level, which is
#' recommended for equivalence testing. Use the \link[vcmeta]{replicate.gen}
#' function for slopes in other types of models (e.g., binary logistic, ordinal
#' logistic, SEM).
#'
#'
#' @param alpha alpha level for 1-alpha or 1 - 2alpha confidence
#' @param b1 sample slope in original study
#' @param se1 standard error of slope in original study
#' @param n1 sample size in original study
#' @param b2 sample slope in follow-up study
#' @param se2 standard error of slope in follow-up study
#' @param n2 sample size in follow-up study
#' @param s number of predictor variables in model
#'
#'
#' @return A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in slopes
#' * Row 4 estimates the average slope
#'
#'
#' The columns are:
#' * Estimate - slope estimate (single study, difference, average)
#' * SE - standard error
#' * t - t-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' replicate.slope(.05, 23.4, 5.16, 50, 18.5, 4.48, 90, 4)
#'
#' # Should return:
#' # Estimate SE t p
#' # Original: 23.40 5.160000 4.5348837 4.250869e-05
#' # Follow-up: 18.50 4.480000 4.1294643 8.465891e-05
#' # Original - Follow-up: 4.90 6.833447 0.7170612 4.749075e-01
#' # Average: 20.95 3.416724 6.1316052 1.504129e-08
#' # LL UL df
#' # Original: 13.007227 33.79277 45.0000
#' # Follow-up: 9.592560 27.40744 85.0000
#' # Original - Follow-up: -6.438743 16.23874 106.4035
#' # Average: 14.176310 27.72369 106.4035
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @importFrom stats pt
#' @export
replicate.slope <- function(alpha, b1, se1, n1, b2, se2, n2, s) {
df1 <- n1 - s - 1
df2 <- n2 - s - 1
est1 <- b1
est2 <- b2
est3 <- est1 - est2
est4 <- (est1 + est2)/2
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
v1 <- se1^4/df1
v2 <- se2^4/df2
df3 <- (se3^4)/(v1 + v2)
t1 <- est1/se1
t2 <- est2/se2
t3 <- est3/se3
t4 <- est4/se4
pval1 <- 2*(1 - pt(abs(t1),df1))
pval2 <- 2*(1 - pt(abs(t2),df2))
pval3 <- 2*(1 - pt(abs(t3),df3))
pval4 <- 2*(1 - pt(abs(t4),df3))
tcrit1 <- qt(1 - alpha/2, df1)
tcrit2 <- qt(1 - alpha/2, df2)
tcrit3 <- qt(1 - alpha, df3)
tcrit4 <- qt(1 - alpha/2, df3)
ll1 <- est1 - tcrit1*se1; ul1 <- est1 + tcrit1*se1
ll2 <- est2 - tcrit2*se2; ul2 <- est2 + tcrit2*se2
ll3 <- est3 - tcrit3*se3; ul3 <- est3 + tcrit3*se3
ll4 <- est4 - tcrit4*se4; ul4 <- est4 + tcrit4*se4
out1 <- t(c(est1, se1, t1, pval1, ll1, ul1, df1))
out2 <- t(c(est2, se2, t2, pval2, ll2, ul2, df2))
out3 <- t(c(est3, se3, t3, pval3, ll3, ul3, df3))
out4 <- t(c(est4, se4, t4, pval4, ll4, ul4, df3))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "t", "p", "LL", "UL", "df")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
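# Worked check (not run at load): the Satterthwaite degrees of freedom used in
# the difference and average rows can be reproduced from the two standard
# errors and their df, matching the 106.4035 shown in the example above.
if (FALSE) {
  se1 <- 5.16; se2 <- 4.48
  df1 <- 50 - 4 - 1; df2 <- 90 - 4 - 1
  se3 <- sqrt(se1^2 + se2^2)
  se3^4/(se1^4/df1 + se2^4/df2)  # ~106.4035
}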
# replicate.gen ============================================================
#' Compares and combines effect sizes in original and follow-up studies
#'
#'
#' @description
#' This function can be used to compare and combine any effect size using the
#' effect size estimate and its standard error from the original study and
#' the follow-up study. The same results can be obtained using the
#' \link[vcmeta]{meta.lc.gen} function with appropriate contrast coefficients.
#' The confidence level for the difference is 1 - 2*alpha, which is
#' recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param est1 estimated effect size in original study
#' @param se1 effect size standard error in original study
#' @param est2 estimated effect size in follow-up study
#' @param se2 effect size standard error in follow-up study
#'
#' @return
#' A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in effect sizes
#' * Row 4 estimates the average effect size
#'
#'
#' Columns are:
#' * Estimate - effect size estimate (single study, difference, average)
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.gen(.05, .782, .210, .650, .154)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # Original: 0.782 0.2100000 3.7238095 1.962390e-04 0.3704076 1.1935924
#' # Follow-up: 0.650 0.1540000 4.2207792 2.434593e-05 0.3481655 0.9518345
#' # Original - Follow-up: 0.132 0.2604151 0.5068831 6.122368e-01 -0.2963446 0.5603446
#' # Average: 0.716 0.1302075 5.4989141 3.821373e-08 0.4607979 0.9712021
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
replicate.gen <- function(alpha, est1, se1, est2, se2) {
est3 <- est1 - est2
est4 <- (est1 + est2)/2
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
z1 <- est1/se1
z2 <- est2/se2
z3 <- est3/se3
z4 <- est4/se4
pval1 <- 2*(1 - pnorm(abs(z1)))
pval2 <- 2*(1 - pnorm(abs(z2)))
pval3 <- 2*(1 - pnorm(abs(z3)))
pval4 <- 2*(1 - pnorm(abs(z4)))
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
ll1 <- est1 - zcrit1*se1; ul1 <- est1 + zcrit1*se1
ll2 <- est2 - zcrit1*se2; ul2 <- est2 + zcrit1*se2
ll3 <- est3 - zcrit2*se3; ul3 <- est3 + zcrit2*se3
ll4 <- est4 - zcrit1*se4; ul4 <- est4 + zcrit1*se4
out1 <- t(c(est1, se1, z1, pval1, ll1, ul1))
out2 <- t(c(est2, se2, z2, pval2, ll2, ul2))
out3 <- t(c(est3, se3, z3, pval3, ll3, ul3))
out4 <- t(c(est4, se4, z4, pval4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
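# Illustrative reading (not run at load): the "Original - Follow-up" row uses
# a 1 - 2*alpha interval, so with alpha = .05 it is a 90% CI. In an
# equivalence test it would be compared against a prespecified margin; the
# margin of 0.3 below is a hypothetical value chosen only for illustration.
if (FALSE) {
  res <- replicate.gen(.05, .782, .210, .650, .154)
  ci <- res["Original - Follow-up:", c("LL", "UL")]
  all(ci > -0.3) && all(ci < 0.3)  # equivalence at the hypothetical margin
}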
# replicate.spear ===============================================================
#' Compares and combines Spearman correlations in original and follow-up studies
#'
#'
#' @description
#' This function can be used to compare and combine Spearman correlations from
#' an original study and a follow-up study. The confidence level for the
#' difference is 1 - 2*alpha, which is recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param cor1 estimated Spearman correlation in original study
#' @param n1 sample size in original study
#' @param cor2 estimated Spearman correlation in follow-up study
#' @param n2 sample size in follow-up study
#'
#'
#' @return
#' A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in correlations
#' * Row 4 estimates the average correlation
#'
#'
#' The columns are:
#' * Estimate - Spearman correlation estimate (single study, difference, average)
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.spear(.05, .598, 80, .324, 200)
#'
#' # Should return:
#' # Estimate SE z p LL UL
#' # Original: 0.598 0.07948367 5.315140 1.065752e-07 0.41985966 0.7317733
#' # Follow-up: 0.324 0.06541994 4.570582 4.863705e-06 0.19049455 0.4457384
#' # Original - Follow-up: 0.274 0.10294378 3.437975 5.860809e-04 0.09481418 0.4342171
#' # Average: 0.461 0.05147189 9.967944 0.000000e+00 0.36695230 0.5457190
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats pnorm
#' @importFrom stats qnorm
#' @export
replicate.spear <- function(alpha, cor1, n1, cor2, n2) {
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
dif <- cor1 - cor2
ave <- (cor1 + cor2)/2
ave.z <- log((1 + ave)/(1 - ave))/2
zr1 <- log((1 + cor1)/(1 - cor1))/2
zr2 <- log((1 + cor2)/(1 - cor2))/2
se1 <- sqrt((1 + cor1^2/2)*(1 - cor1^2)^2/(n1 - 3))
se2 <- sqrt((1 + cor2^2/2)*(1 - cor2^2)^2/(n2 - 3))
se1.z <- sqrt((1 + cor1^2/2)/((n1 - 3)))
se2.z <- sqrt((1 + cor2^2/2)/((n2 - 3)))
se3 <- sqrt(se1^2 + se2^2)
se4 <- sqrt(se1^2 + se2^2)/2
se4.z <- sqrt(((se1^2 + se2^2)/4)/(1 - ave^2))
t1 <- cor1*sqrt(n1 - 1)
t2 <- cor2*sqrt(n2 - 1)
t3 <- (zr1 - zr2)/sqrt(se1^2 + se2^2)
t4 <- (zr1 + zr2)/sqrt(se1^2 + se2^2)
pval1 <- 2*(1 - pnorm(abs(t1)))
pval2 <- 2*(1 - pnorm(abs(t2)))
pval3 <- 2*(1 - pnorm(abs(t3)))
pval4 <- 2*(1 - pnorm(abs(t4)))
ll0a <- zr1 - zcrit1*se1.z; ul0a <- zr1 + zcrit1*se1.z
ll1a <- (exp(2*ll0a) - 1)/(exp(2*ll0a) + 1)
ul1a <- (exp(2*ul0a) - 1)/(exp(2*ul0a) + 1)
ll0a <- zr2 - zcrit1*se2.z; ul0a <- zr2 + zcrit1*se2.z
ll2a <- (exp(2*ll0a) - 1)/(exp(2*ll0a) + 1)
ul2a <- (exp(2*ul0a) - 1)/(exp(2*ul0a) + 1)
ll0b <- zr1 - zcrit2*se1.z; ul0b <- zr1 + zcrit2*se1.z
ll1b <- (exp(2*ll0b) - 1)/(exp(2*ll0b) + 1)
ul1b <- (exp(2*ul0b) - 1)/(exp(2*ul0b) + 1)
ll0b <- zr2 - zcrit2*se2.z; ul0b <- zr2 + zcrit2*se2.z
ll2b <- (exp(2*ll0b) - 1)/(exp(2*ll0b) + 1)
ul2b <- (exp(2*ul0b) - 1)/(exp(2*ul0b) + 1)
ll3 <- dif - sqrt((cor1 - ll1b)^2 + (ul2b - cor2)^2)
ul3 <- dif + sqrt((ul1b - cor1)^2 + (cor2 - ll2b)^2)
ll0 <- ave.z - zcrit1*se4.z
ul0 <- ave.z + zcrit1*se4.z
ll4 <- (exp(2*ll0) - 1)/(exp(2*ll0) + 1)
ul4 <- (exp(2*ul0) - 1)/(exp(2*ul0) + 1)
out1 <- t(c(cor1, se1, t1, pval1, ll1a, ul1a))
out2 <- t(c(cor2, se2, t2, pval2, ll2a, ul2a))
out3 <- t(c(dif, se3, t3, pval3, ll3, ul3))
out4 <- t(c(ave, se4, t4, pval4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
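# Side note (not run at load): the Fisher transformation used for the
# confidence limits is log((1 + r)/(1 - r))/2, which is base R's atanh();
# tanh() is the back-transformation applied to the limits.
if (FALSE) {
  r <- .598
  all.equal(atanh(r), log((1 + r)/(1 - r))/2)  # TRUE
  tanh(atanh(r))                               # recovers r
}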
# replicate.prop1 ============================================================
#' Compares and combines single proportion in original and follow-up studies
#'
#'
#' @description
#' This function computes confidence intervals for a single proportion from an
#' original study and a follow-up study. Confidence intervals for the
#' difference between the two proportions and average of the two proportions
#' are also computed. The same results can be obtained using the \link[vcmeta]{meta.lc.prop1}
#' function with appropriate contrast coefficients. The confidence level for the
#' difference is 1 - 2*alpha, which is recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 frequency count in original study
#' @param n1 sample size in original study
#' @param f2 frequency count in follow-up study
#' @param n2 sample size in follow-up study
#'
#'
#' @return A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in proportions
#' * Row 4 estimates the average proportion
#'
#'
#' The columns are:
#' * Estimate - proportion estimate (single study, difference, average)
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.prop1(.05, 21, 300, 35, 400)
#'
#' # Should return:
#' # Estimate SE LL UL
#' # Original: 0.07565789 0.01516725 0.04593064 0.10538515
#' # Follow-up: 0.09158416 0.01435033 0.06345803 0.11971029
#' # Original - Follow-up: -0.01670456 0.02065098 -0.05067239 0.01726328
#' # Average: 0.08119996 0.01032549 0.06096237 0.10143755
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @export
replicate.prop1 <- function(alpha, f1, n1, f2, n2){
est1 <- (f1 + 2)/(n1 + 4)
est2 <- (f2 + 2)/(n2 + 4)
est1.d <- (f1 + 1)/(n1 + 2)
est2.d <- (f2 + 1)/(n2 + 2)
est3 <- est1.d - est2.d
est4 <- (est1.d + est2.d)/2
se1 <- sqrt(est1*(1 - est1)/(n1 + 4))
se2 <- sqrt(est2*(1 - est2)/(n2 + 4))
se3 <- sqrt(est1.d*(1 - est1.d)/(n1 + 2) + est2.d*(1 - est2.d)/(n2 + 2))
se4 <- se3/2
zcrit1 <- qnorm(1 - alpha/2)
zcrit3 <- qnorm(1 - alpha)
ll1 <- est1 - zcrit1*se1; ul1 <- est1 + zcrit1*se1
ll2 <- est2 - zcrit1*se2; ul2 <- est2 + zcrit1*se2
ll3 <- est3 - zcrit3*se3; ul3 <- est3 + zcrit3*se3
ll4 <- est4 - zcrit1*se4; ul4 <- est4 + zcrit1*se4
out1 <- t(c(est1, se1, ll1, ul1))
out2 <- t(c(est2, se2, ll2, ul2))
out3 <- t(c(est3, se3, ll3, ul3))
out4 <- t(c(est4, se4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "LL", "UL")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
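# Worked check (not run at load): the single-study rows use the Agresti-Coull
# style adjustment, adding 2 successes and 2 failures, which reproduces the
# first Estimate in the example above.
if (FALSE) {
  (21 + 2)/(300 + 4)  # 0.07565789
}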
# replicate.mean1 ============================================================
#' Compares and combines single mean in original and follow-up studies
#'
#'
#' @description
#' This function computes confidence intervals for a single mean from an
#' original study and a follow-up study. Confidence intervals for the
#' difference between the two means and average of the two means are also
#' computed. Equality of variances across studies is not assumed. A
#' Satterthwaite adjustment to the degrees of freedom is used to improve
#' the accuracy of the confidence intervals for the difference and average.
#' The same results can be obtained using the \link[vcmeta]{meta.lc.mean1}
#' function with appropriate contrast coefficients. The confidence level
#' for the difference is 1 - 2*alpha, which is recommended for equivalence
#' testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param m1 estimated mean in original study
#' @param sd1 estimated SD in original study
#' @param n1 sample size in original study
#' @param m2 estimated mean in follow-up study
#' @param sd2 estimated SD in follow-up study
#' @param n2 sample size in follow-up study
#'
#'
#' @return A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in means
#' * Row 4 estimates the average mean
#'
#'
#' The columns are:
#' * Estimate - mean estimate (single study, difference, average)
#' * SE - standard error
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#' * df - degrees of freedom
#'
#'
#' @examples
#' replicate.mean1(.05, 21.9, 3.82, 40, 25.2, 3.98, 75)
#'
#' # Should return:
#' # Estimate SE LL UL df
#' # Original: 21.90 0.6039950 20.678305 23.121695 39.00000
#' # Follow-up: 25.20 0.4595708 24.284285 26.115715 74.00000
#' # Original - Follow-up: -3.30 0.7589567 -4.562527 -2.037473 82.63282
#' # Average: 23.55 0.3794784 22.795183 24.304817 82.63282
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qt
#' @export
replicate.mean1 <- function(alpha, m1, sd1, n1, m2, sd2, n2){
v1 <- sd1^2
v2 <- sd2^2
est1 <- m1
est2 <- m2
est3 <- est1 - est2
est4 <- (est1 + est2)/2
se1 <- sqrt(v1/n1)
se2 <- sqrt(v2/n2)
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
v1 <- v1^2/(n1^3 - n1^2)
v2 <- v2^2/(n2^3 - n2^2)
df1 <- n1 - 1
df2 <- n2 - 1
df3 <- (se3^4)/(v1 + v2)
tcrit1 <- qt(1 - alpha/2, df1)
tcrit2 <- qt(1 - alpha/2, df2)
tcrit3 <- qt(1 - alpha, df3)
tcrit4 <- qt(1 - alpha/2, df3)
ll1 <- est1 - tcrit1*se1; ul1 <- est1 + tcrit1*se1
ll2 <- est2 - tcrit2*se2; ul2 <- est2 + tcrit2*se2
ll3 <- est3 - tcrit3*se3; ul3 <- est3 + tcrit3*se3
ll4 <- est4 - tcrit4*se4; ul4 <- est4 + tcrit4*se4
out1 <- t(c(est1, se1, ll1, ul1, df1))
out2 <- t(c(est2, se2, ll2, ul2, df2))
out3 <- t(c(est3, se3, ll3, ul3, df3))
out4 <- t(c(est4, se4, ll4, ul4, df3))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "LL", "UL", "df")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
# replicate.ratio.prop2 =======================================================
#' Compares and combines 2-group proportion ratios in original and follow-up
#' studies
#'
#'
#' @description
#' This function computes confidence intervals from an original study and a
#' follow-up study where the effect size is a 2-group proportion ratio.
#' Confidence intervals for the ratio and geometric average of effect sizes
#' are also computed. The confidence level for the ratio of ratios is 1 - 2*alpha,
#' which is recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f11 frequency count for group 1 in original study
#' @param f12 frequency count for group 2 in original study
#' @param n11 sample size for group 1 in original study
#' @param n12 sample size for group 2 in original study
#' @param f21 frequency count for group 1 in follow-up study
#' @param f22 frequency count for group 2 in follow-up study
#' @param n21 sample size for group 1 in follow-up study
#' @param n22 sample size for group 2 in follow-up study
#'
#'
#' @return A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the ratio of proportion ratios
#' * Row 4 estimates the geometric average proportion ratio
#'
#'
#' The columns are:
#' * Estimate - proportion ratio estimate (single study, ratio, average)
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' replicate.ratio.prop2(.05, 21, 16, 40, 40, 19, 13, 60, 60)
#'
#' # Should return:
#' # Estimate LL UL
#' # Original: 1.3076923 0.8068705 2.119373
#' # Follow-up: 1.4528302 0.7939881 2.658372
#' # Original/Follow-up: 0.9000999 0.4703209 1.722611
#' # Average: 1.3783522 0.9362893 2.029132
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @importFrom stats pnorm
#' @export
replicate.ratio.prop2 <- function(alpha, f11, f12, n11, n12, f21, f22, n21, n22){
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
p11 <- (f11 + 1/4)/(n11 + 7/4)
p12 <- (f12 + 1/4)/(n12 + 7/4)
v11 <- 1/(f11 + 1/4 + (f11 + 1/4)^2/(n11 - f11 + 3/2))
v12 <- 1/(f12 + 1/4 + (f12 + 1/4)^2/(n12 - f12 + 3/2))
se1 <- sqrt(v11 + v12)
est1 <- log(p11/p12)
p21 <- (f21 + 1/4)/(n21 + 7/4)
p22 <- (f22 + 1/4)/(n22 + 7/4)
v21 <- 1/(f21 + 1/4 + (f21 + 1/4)^2/(n21 - f21 + 3/2))
v22 <- 1/(f22 + 1/4 + (f22 + 1/4)^2/(n22 - f22 + 3/2))
se2 <- sqrt(v21 + v22)
est2 <- log(p21/p22)
est3 <- est1 - est2
est4 <- (est1 + est2)/2
se3 <- sqrt(se1^2 + se2^2)
se4 <- se3/2
ll1 <- exp(est1 - zcrit1*se1); ul1 <- exp(est1 + zcrit1*se1)
ll2 <- exp(est2 - zcrit1*se2); ul2 <- exp(est2 + zcrit1*se2)
ll3 <- exp(est3 - zcrit2*se3); ul3 <- exp(est3 + zcrit2*se3)
ll4 <- exp(est4 - zcrit1*se4); ul4 <- exp(est4 + zcrit1*se4)
out1 <- t(c(exp(est1), ll1, ul1))
out2 <- t(c(exp(est2), ll2, ul2))
out3 <- t(c(exp(est3), ll3, ul3))
out4 <- t(c(exp(est4), ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "LL", "UL")
rownames(out) <- c("Original:", "Follow-up:", "Original/Follow-up:", "Average:")
return(out)
}
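# Worked check (not run at load): each study's ratio is computed from
# 1/4-adjusted proportions, reproducing the first Estimate in the example.
if (FALSE) {
  p11 <- (21 + 1/4)/(40 + 7/4)
  p12 <- (16 + 1/4)/(40 + 7/4)
  p11/p12  # 1.3076923
}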
# replicate.prop.ps ===========================================================
#' Compares and combines paired-samples proportion differences in original and
#' follow-up studies
#'
#'
#' @description
#' This function computes confidence intervals from an original study and a
#' follow-up study where the effect size is a paired-samples proportion
#' difference. Confidence intervals for the difference and average of effect
#' sizes are also computed. The confidence level for the difference is
#' 1 - 2*alpha, which is recommended for equivalence testing.
#'
#'
#' @param alpha alpha level for 1-alpha confidence
#' @param f1 vector of frequency counts for 2x2 table in original study
#' @param f2 vector of frequency counts for 2x2 table in follow-up study
#'
#'
#' @return A 4-row matrix. The rows are:
#' * Row 1 summarizes the original study
#' * Row 2 summarizes the follow-up study
#' * Row 3 estimates the difference in proportion differences
#' * Row 4 estimates the average proportion difference
#'
#'
#' The columns are:
#' * Estimate - proportion difference estimate (single study, difference, average)
#' * SE - standard error
#' * z - z-value
#' * p - p-value
#' * LL - lower limit of the confidence interval
#' * UL - upper limit of the confidence interval
#'
#'
#' @examples
#' f1 <- c(42, 2, 15, 61)
#' f2 <- c(69, 5, 31, 145)
#' replicate.prop.ps(.05, f1, f2)
#'
#' # Should return:
#' # Estimate SE z p
#' # Original: 0.106557377 0.03440159 3.09745539 1.951898e-03
#' # Follow-up: 0.103174603 0.02358274 4.37500562 1.214294e-05
#' # Original - Follow-up: 0.003852359 0.04097037 0.09402793 9.250870e-01
#' # Average: 0.105511837 0.02048519 5.15064083 2.595979e-07
#' # LL UL
#' # Original: 0.03913151 0.17398325
#' # Follow-up: 0.05695329 0.14939592
#' # Original - Follow-up: -0.06353791 0.07124263
#' # Average: 0.06536161 0.14566206
#'
#'
#' @references
#' \insertRef{Bonett2021}{vcmeta}
#'
#'
#' @importFrom stats qnorm
#' @importFrom stats pnorm
#' @export
replicate.prop.ps <- function(alpha, f1, f2){
zcrit1 <- qnorm(1 - alpha/2)
zcrit2 <- qnorm(1 - alpha)
n1 <- sum(f1)
p01 <- (f1[2] + 1)/(n1 + 2)
p10 <- (f1[3] + 1)/(n1 + 2)
est1 <- p10 - p01
se1 <- sqrt(((p01 + p10) - (p01 - p10)^2)/(n1 + 2))
n2 <- sum(f2)
p01 <- (f2[2] + 1)/(n2 + 2)
p10 <- (f2[3] + 1)/(n2 + 2)
est2 <- p10 - p01
se2 <- sqrt(((p01 + p10) - (p01 - p10)^2)/(n2 + 2))
p011 <- (f1[2] + .5)/(n1 + 1)
p101 <- (f1[3] + .5)/(n1 + 1)
p012 <- (f2[2] + .5)/(n2 + 1)
p102 <- (f2[3] + .5)/(n2 + 1)
est3 <- p101 - p011 - p102 + p012
v1 = ((p101 + p011) - (p101 - p011)^2)/(n1 + 1)
v2 = ((p102 + p012) - (p102 - p012)^2)/(n2 + 1)
se3 <- sqrt(v1 + v2)
est4 <- ((p101 - p011) + (p102 - p012))/2
se4 <- se3/2
z1 <- est1/se1
z2 <- est2/se2
z3 <- est3/se3
z4 <- est4/se4
p1 <- 2*(1 - pnorm(abs(z1)))
p2 <- 2*(1 - pnorm(abs(z2)))
p3 <- 2*(1 - pnorm(abs(z3)))
p4 <- 2*(1 - pnorm(abs(z4)))
ll1 <- est1 - zcrit1*se1; ul1 <- est1 + zcrit1*se1
ll2 <- est2 - zcrit1*se2; ul2 <- est2 + zcrit1*se2
ll3 <- est3 - zcrit2*se3; ul3 <- est3 + zcrit2*se3
ll4 <- est4 - zcrit1*se4; ul4 <- est4 + zcrit1*se4
out1 <- t(c(est1, se1, z1, p1, ll1, ul1))
out2 <- t(c(est2, se2, z2, p2, ll2, ul2))
out3 <- t(c(est3, se3, z3, p3, ll3, ul3))
out4 <- t(c(est4, se4, z4, p4, ll4, ul4))
out <- rbind(out1, out2, out3, out4)
colnames(out) <- c("Estimate", "SE", "z", "p", "LL", "UL")
rownames(out) <- c("Original:", "Follow-up:", "Original - Follow-up:", "Average:")
return(out)
}
# se.mean2 =================================================================
#' Computes the standard error for a 2-group mean difference
#'
#'
#' @description
#' This function can be used to compute the standard error of a
#' 2-group mean difference using the estimated means, estimated
#' standard deviations, and sample sizes. The effect size estimate
#' and standard error output from this function can be used as input
#' in the \link[vcmeta]{meta.ave.gen}, \link[vcmeta]{meta.lc.gen},
#' and \link[vcmeta]{meta.lm.gen} functions in applications where
#' compatible mean differences from a combination of 2-group
#' and paired-samples experiments are used in the meta-analysis.
#' Equality of variances is not assumed.
#'
#'
#' @param m1 estimated mean for group 1
#' @param m2 estimated mean for group 2
#' @param sd1 estimated standard deviation for group 1
#' @param sd2 estimated standard deviation for group 2
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated mean difference
#' * SE - standard error
#'
#'
#' @examples
#' se.mean2(21.9, 16.1, 3.82, 3.21, 40, 40)
#'
#' # Should return:
#' # Estimate SE
#' # Mean difference: 5.8 0.7889312
#'
#'
#' @references
#' \insertRef{Snedecor1980}{vcmeta}
#'
#'
#' @export
se.mean2 <- function(m1, m2, sd1, sd2, n1, n2) {
d <- m1 - m2
se <- sqrt(sd1^2/n1 + sd2^2/n2)
out <- t(c(d, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Mean difference: "
return(out)
}
# se.mean.ps =================================================================
#' Computes the standard error for a paired-samples mean difference
#'
#'
#' @description
#' This function can be used to compute the standard error of a
#' paired-samples mean difference using the estimated means,
#' estimated standard deviations, estimated Pearson correlation,
#' and sample size. The effect size estimate and standard error
#' output from this function can be used as input in the
#' \link[vcmeta]{meta.ave.gen}, \link[vcmeta]{meta.lc.gen},
#' and \link[vcmeta]{meta.lm.gen} functions in applications where
#' compatible mean differences from a combination of 2-group
#' and paired-samples experiments are used in the meta-analysis.
#' Equality of variances is not assumed.
#'
#'
#' @param m1 estimated mean for measurement 1
#' @param m2 estimated mean for measurement 2
#' @param sd1 estimated standard deviation for measurement 1
#' @param sd2 estimated standard deviation for measurement 2
#' @param cor estimated correlation for measurements 1 and 2
#' @param n sample size
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated mean difference
#' * SE - standard error
#'
#'
#' @examples
#' se.mean.ps(23.9, 25.1, 1.76, 2.01, .78, 25)
#'
#' # Should return:
#' # Estimate SE
#' # Mean difference: -1.2 0.2544833
#'
#' @references
#' \insertRef{Snedecor1980}{vcmeta}
#'
#'
#' @export
se.mean.ps <- function(m1, m2, sd1, sd2, cor, n) {
d <- m1 - m2
se <- sqrt((sd1^2 + sd2^2 - 2*cor*sd1*sd2)/n)
out <- t(c(d, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Mean difference: "
return(out)
}
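# A minimal sketch (not run at load) of the intended workflow: collect
# compatible mean differences from a 2-group study and a paired-samples study,
# then pass the estimates and standard errors to a meta-analysis function such
# as meta.ave.gen. The meta.ave.gen call below is an assumed signature, shown
# for illustration only since that function is defined elsewhere.
if (FALSE) {
  a <- se.mean2(21.9, 16.1, 3.82, 3.21, 40, 40)
  b <- se.mean.ps(23.9, 25.1, 1.76, 2.01, .78, 25)
  est <- c(a[1, "Estimate"], b[1, "Estimate"])
  se  <- c(a[1, "SE"], b[1, "SE"])
  # meta.ave.gen(.05, est, se)  # assumed signature
}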
# se.stdmean2 ================================================================
#' Computes the standard error for a 2-group standardized mean difference
#'
#'
#' @description
#' This function computes the standard error of a 2-group standardized
#' mean difference using the sample sizes and the estimated means
#' and standard deviations. Use the square root average variance
#' standardizer (stdzr = 0) for 2-group experimental designs. Use the
#' square root weighted variance standardizer (stdzr = 3) for 2-group
#' nonexperimental designs with simple random sampling. The single-group
#' standardizers (stdzr = 1 and stdzr = 2) can be used with either
#' 2-group experimental or nonexperimental designs. The effect size
#' estimate and standard error output from this function can be used as
#' input in the \link[vcmeta]{meta.ave.gen}, \link[vcmeta]{meta.lc.gen},
#' and \link[vcmeta]{meta.lm.gen} functions in applications where compatible
#' standardized mean differences from a combination of 2-group and
#' paired-samples experiments are used in the meta-analysis. Equality
#' of variances is not assumed.
#'
#'
#' @param m1 estimated mean for group 1
#' @param m2 estimated mean for group 2
#' @param sd1 estimated standard deviation for group 1
#' @param sd2 estimated standard deviation for group 2
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#' @param stdzr
#' * set to 0 for square root average variance standardizer
#' * set to 1 for group 1 SD standardizer
#' * set to 2 for group 2 SD standardizer
#' * set to 3 for square root weighted variance standardizer
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated standardized mean difference
#' * SE - standard error
#'
#'
#' @examples
#' se.stdmean2(21.9, 16.1, 3.82, 3.21, 40, 40, 0)
#'
#' # Should return:
#' # Estimate SE
#' # Standardized mean difference: 1.643894 0.2629049
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @seealso \link[vcmeta]{se.cohen}
#'
#'
#' @export
se.stdmean2 <- function(m1, m2, sd1, sd2, n1, n2, stdzr) {
df1 <- n1 - 1
df2 <- n2 - 1
if (stdzr == 0) {
s <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s
se <- sqrt(d^2*(sd1^4/df1 + sd2^4/df2)/(8*s^4) + (sd1^2/df1 + sd2^2/df2)/s^2)
}
else if (stdzr == 1) {
s <- sd1
d <- (m1 - m2)/s
se <- sqrt(d^2/(2*df1) + 1/df1 + sd2^2/(df2*sd1^2))
}
else if (stdzr == 2) {
s <- sd2
d <- (m1 - m2)/s
se <- sqrt(d^2/(2*df2) + 1/df2 + sd1^2/(df1*sd2^2))
}
else {
s <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s
se <- sqrt(d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2)
}
out <- t(c(d, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Standardized mean difference: "
return(out)
}
# se.stdmean.ps ==============================================================
#' Computes the standard error for a paired-samples standardized mean
#' difference
#'
#'
#' @description
#' This function computes the standard error of a paired-samples standardized
#' mean difference using the sample size and estimated means, standard
#' deviations, and estimated correlation. The effect size estimate and standard error
#' output from this function can be used as input in the \link[vcmeta]{meta.ave.gen},
#' \link[vcmeta]{meta.lc.gen}, and \link[vcmeta]{meta.lm.gen} functions in
#' applications where compatible standardized mean differences from a combination
#' of 2-group and paired-samples experiments are used in the meta-analysis.
#' Equality of variances is not assumed.
#'
#'
#' @param m1 estimated mean for measurement 1
#' @param m2 estimated mean for measurement 2
#' @param sd1 estimated standard deviation for measurement 1
#' @param sd2 estimated standard deviation for measurement 2
#' @param cor estimated correlation for measurements 1 and 2
#' @param n sample size
#' @param stdzr
#' * set to 0 for square root average variance standardizer
#' * set to 1 for measurement 1 SD standardizer
#' * set to 2 for measurement 2 SD standardizer
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated standardized mean difference
#' * SE - standard error
#'
#'
#' @examples
#' se.stdmean.ps(23.9, 25.1, 1.76, 2.01, .78, 25, 0)
#'
#' # Should return:
#' # Estimate SE
#' # Standardized mean difference: -0.6352097 0.1602852
#'
#'
#' @references
#' \insertRef{Bonett2009a}{vcmeta}
#'
#'
#' @export
se.stdmean.ps <- function(m1, m2, sd1, sd2, cor, n, stdzr) {
df <- n - 1
v1 <- sd1^2
v2 <- sd2^2
vd <- v1 + v2 - 2*cor*sd1*sd2
if (stdzr == 0) {
s <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s
se <- sqrt(d^2*(v1^2 + v2^2 + 2*cor^2*v1*v2)/(8*df*s^4) + vd/(df*s^2))
}
else if (stdzr == 1) {
s <- sd1
d <- (m1 - m2)/s
se <- sqrt(d^2/(2*df) + vd/(df*v1))
}
else {
s <- sd2
d <- (m1 - m2)/s
se <- sqrt(d^2/(2*df) + vd/(df*v2))
}
out <- t(c(d, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Standardized mean difference: "
return(out)
}
# se.cor ==========================================================
#' Computes the standard error for a Pearson or partial correlation
#'
#'
#' @description
#' This function can be used to compute the standard error of a
#' Pearson or partial correlation using the estimated correlation,
#' sample size, and number of control variables. The correlation,
#' along with the standard error output from this function, can be used
#' as input in the \link[vcmeta]{meta.ave.gen} function in applications
#' where a combination of different types of correlations are used in
#' the meta-analysis.
#'
#'
#' @param cor estimated Pearson or partial correlation
#' @param s number of control variables (set to 0 for Pearson)
#' @param n sample size
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - Pearson or partial correlation (from input)
#' * SE - standard error
#'
#'
#' @examples
#' se.cor(.40, 0, 55)
#'
#' # Should return:
#' # Estimate SE
#' # Correlation: 0.4 0.116487
#'
#'
#' @references
#' \insertRef{Bonett2008a}{vcmeta}
#'
#'
#' @export
se.cor <- function(cor, s, n) {
se.cor <- sqrt((1 - cor^2)^2/(n - 3 - s))
out <- t(c(cor, se.cor))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Correlation: "
return(out)
}
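# Worked check (not run at load): for a Pearson correlation (s = 0) the SE
# reduces to (1 - r^2)/sqrt(n - 3), reproducing the example above.
if (FALSE) {
  sqrt((1 - .40^2)^2/(55 - 3 - 0))  # 0.116487
}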
# se.spear ===================================================================
#' Computes the standard error for a Spearman correlation
#'
#'
#' @description
#' This function can be used to compute the Bonett-Wright standard
#' error of a Spearman correlation using the estimated correlation
#' and sample size. The standard error from this function can be used
#' as input in the \link[vcmeta]{meta.ave.gen} function in applications
#' where a combination of different types of correlations are used in
#' the meta-analysis.
#'
#'
#' @param cor estimated Spearman correlation
#' @param n sample size
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - Spearman correlation (from input)
#' * SE - standard error
#'
#'
#' @examples
#' se.spear(.40, 55)
#'
#' # Should return:
#' # Estimate SE
#' # Spearman correlation: 0.4 0.1210569
#'
#'
#' @references
#' \insertRef{Bonett2000}{vcmeta}
#'
#'
#' @export
se.spear <- function(cor, n) {
se.cor <- sqrt((1 - cor^2)^2*(1 + cor^2/2)/(n - 3))
out <- t(c(cor, se.cor))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Spearman correlation: "
return(out)
}
# se.semipartial =============================================================
#' Computes the standard error for a semipartial correlation
#'
#'
#' @description
#' This function can be used to compute the standard error of a
#' semipartial correlation using the estimated correlation, sample
#' size, and squared multiple correlation for the full model.
#' The effect size estimate and standard error output from this
#' function can be used as input in the \link[vcmeta]{meta.ave.gen}
#' function in applications where a combination of different types
#' of correlations are used in the meta-analysis.
#'
#'
#' @param cor estimated semipartial correlation
#' @param r2 estimated squared multiple correlation for full model
#' @param n sample size
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - semipartial correlation (from input)
#' * SE - standard error
#'
#'
#' @examples
#' se.semipartial(.40, .25, 60)
#'
#' # Should return:
#' # Estimate SE
#' # Semipartial correlation: 0.4 0.1063262
#'
#'
#' @export
se.semipartial <- function(cor, r2, n) {
r0 <- r2 - cor^2
a <- r2^2 - 2*r2 + r0 - r0^2 + 1
se.cor <- sqrt(a/(n - 3))
out <- t(c(cor, se.cor))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Semipartial correlation: "
return(out)
}
# se.pbcor ==============================================================
#' Computes the standard error for a point-biserial correlation
#'
#'
#' @description
#' This function computes a point-biserial correlation and its standard
#' error for two types of point-biserial correlations in 2-group designs
#' using the estimated means, estimated standard deviations, and samples
#' sizes. Equality of variances is not assumed. One type of point-biserial
#' correlation uses an unweighted average of variances and is recommended
#' for 2-group experimental designs. The other type of point-biserial
#' correlation uses a weighted average of variances and is recommended for
#' 2-group nonexperimental designs with simple random sampling (but not
#' stratified random sampling). This function is useful in a meta-analysis
#' of compatible point-biserial correlations where some studies used a
#' 2-group experimental design and other studies used a 2-group
#' nonexperimental design. The effect size estimate and standard error
#' output from this function can be used as input in the
#' \link[vcmeta]{meta.ave.gen} function.
#'
#'
#' @param m1 estimated mean for group 1
#' @param m2 estimated mean for group 2
#' @param sd1 estimated standard deviation for group 1
#' @param sd2 estimated standard deviation for group 2
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#' @param type
#' * set to 1 for weighted variance average
#' * set to 2 for unweighted variance average
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated point-biserial correlation
#' * SE - standard error
#'
#'
#' @examples
#' se.pbcor(21.9, 16.1, 3.82, 3.21, 40, 40, 1)
#'
#' # Should return:
#' # Estimate SE
#' # Point-biserial correlation: 0.6349786 0.05981325
#'
#'
#' @references
#' \insertRef{Bonett2020b}{vcmeta}
#'
#'
#' @export
se.pbcor <- function(m1, m2, sd1, sd2, n1, n2, type) {
df1 <- n1 - 1
df2 <- n2 - 1
if (type == 1) {
u <- n1/(n1 + n2)
s <- sqrt((df1*sd1^2 + df2*sd2^2)/(df1 + df2))
d <- (m1 - m2)/s
c <- 1/(u*(1 - u))
cor <- d/sqrt(d^2 + c)
se.d <- sqrt(d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2)
se.cor <- (c/(d^2 + c)^(3/2))*se.d
} else {
s <- sqrt((sd1^2 + sd2^2)/2)
d <- (m1 - m2)/s
cor <- d/sqrt(d^2 + 4)
se.d <- sqrt(d^2*(sd1^4/df1 + sd2^4/df2)/(8*s^4) + (sd1^2/df1 + sd2^2/df2)/s^2)
se.cor <- (4/(d^2 + 4)^(3/2))*se.d
}
out <- t(c(cor, se.cor))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Point-biserial correlation: "
return(out)
}
# se.odds ====================================================================
#' Computes the standard error for a log odds ratio
#'
#'
#' @description
#' This function computes a log odds ratio and its standard error using
#' the frequency counts and sample sizes in a 2-group design. These
#' frequency counts and sample sizes can be obtained from a 2x2
#' contingency table. This function is useful in a meta-analysis of
#' odds ratios where some studies report the sample odds ratio and its
#' standard error and other studies only report the frequency counts
#' or a 2x2 contingency table. The log odds ratio and standard error
#' output from this function can be used as input in the \link[vcmeta]{meta.ave.gen},
#' \link[vcmeta]{meta.lc.gen}, and \link[vcmeta]{meta.lm.gen} functions.
#'
#'
#' @param f1 number of participants who have the outcome in group 1
#' @param n1 sample size for group 1
#' @param f2 number of participants who have the outcome in group 2
#' @param n2 sample size for group 2
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated log odds ratio
#' * SE - standard error
#'
#'
#' @examples
#' se.odds(36, 50, 21, 50)
#'
#' # Should return:
#' # Estimate SE
#' # Log odds ratio: 1.239501 0.4204435
#'
#'
#' @references
#' \insertRef{Bonett2015}{vcmeta}
#'
#'
#' @export
se.odds <- function(f1, n1, f2, n2) {
log.OR <- log((f1 + .5)*(n2 - f2 + .5)/((f2 + .5)*(n1 - f1 + .5)))
se.log.OR <- sqrt(1/(f1 + .5) + 1/(f2 + .5) + 1/(n1 - f1 + .5) + 1/(n2 - f2 + .5))
out <- t(c(log.OR, se.log.OR))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Log odds ratio: "
return(out)
}
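# A minimal sketch (not run at load): building the inputs from a 2x2 table
# (rows = groups, first column = outcome present). Named arguments are used
# because the signature order is f1, n1, f2, n2.
if (FALSE) {
  tab <- matrix(c(36, 14,
                  21, 29), nrow = 2, byrow = TRUE)
  se.odds(f1 = tab[1, 1], n1 = sum(tab[1, ]),
          f2 = tab[2, 1], n2 = sum(tab[2, ]))  # same as the example above
}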
# se.meanratio2 =========================================================
#' Computes the standard error for a 2-group log mean ratio
#'
#'
#' @description
#' This function can be used to compute the standard error of a
#' 2-group log mean ratio using the estimated means, estimated standard
#' deviations, and sample sizes. The log mean estimate and standard
#' error output from this function can be used as input in the
#' \link[vcmeta]{meta.ave.gen}, \link[vcmeta]{meta.lc.gen}, and
#' \link[vcmeta]{meta.lm.gen} functions in applications where compatible
#' mean ratios from a combination of 2-group and paired-samples experiments
#' are used in the meta-analysis. Equality of variances is not assumed.
#'
#'
#' @param m1 estimated mean for group 1
#' @param m2 estimated mean for group 2
#' @param sd1 estimated standard deviation for group 1
#' @param sd2 estimated standard deviation for group 2
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated log mean ratio
#' * SE - standard error
#'
#' @examples
#' se.meanratio2(21.9, 16.1, 3.82, 3.21, 40, 40)
#'
#' # Should return:
#' # Estimate SE
#' # Log mean ratio: 0.3076674 0.041886
#'
#'
#' @references
#' \insertRef{Bonett2020}{vcmeta}
#'
#'
#' @export
se.meanratio2 <- function(m1, m2, sd1, sd2, n1, n2) {
logratio <- log(m1/m2)
var1 <- sd1^2/(n1*m1^2)
var2 <- sd2^2/(n2*m2^2)
se <- sqrt(var1 + var2)
out <- t(c(logratio, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Log mean ratio: "
return(out)
}
# se.meanratio.ps =============================================================
#' Computes the standard error for a paired-samples log mean ratio
#'
#'
#' @description
#' This function can be used to compute the standard error of a
#' paired-samples log mean ratio using the estimated means, estimated
#' standard deviations, estimated Pearson correlation, and sample
#' size. The log-mean estimate and standard error output from
#' this function can be used as input in the \link[vcmeta]{meta.ave.gen},
#' \link[vcmeta]{meta.lc.gen}, and \link[vcmeta]{meta.lm.gen} functions in
#' applications where compatible mean ratios from a combination of 2-group
#' and paired-samples experiments are used in the meta-analysis.
#' Equality of variances is not assumed.
#'
#'
#' @param m1 estimated mean for measurement 1
#' @param m2 estimated mean for measurement 2
#' @param sd1 estimated standard deviation for measurement 1
#' @param sd2 estimated standard deviation for measurement 2
#' @param cor estimated correlation for measurements 1 and 2
#' @param n sample size
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated log mean ratio
#' * SE - standard error
#'
#' @examples
#' se.meanratio.ps(21.9, 16.1, 3.82, 3.21, .748, 40)
#'
#' # Should return:
#' # Estimate SE
#' # Log mean ratio: 0.3076674 0.02130161
#'
#'
#' @references
#' \insertRef{Bonett2020}{vcmeta}
#'
#'
#' @export
se.meanratio.ps <- function(m1, m2, sd1, sd2, cor, n) {
logratio <- log(m1/m2)
var1 <- sd1^2/(n*m1^2)
var2 <- sd2^2/(n*m2^2)
cov <- cor*sd1*sd2/(n*m1*m2)
se <- sqrt(var1 + var2 - 2*cov)
out <- t(c(logratio, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Log mean ratio: "
return(out)
}
# se.slope =================================================================
#' Computes a slope and standard error
#'
#'
#' @description
#' This function can be used to compute a slope and its standard error
#' for a simple linear regression model (random-x model) using the estimated
#' Pearson correlation and the estimated standard deviations of the
#' response and predictor variables. This function is useful in a meta-analysis
#' of slopes of a simple linear regression model where some studies report
#' the Pearson correlation but not the slope.
#'
#'
#' @param cor estimated Pearson correlation
#' @param sdy estimated standard deviation of the response variable
#' @param sdx estimated standard deviation of the predictor variable
#' @param n sample size
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated slope
#' * SE - standard error
#'
#'
#' @examples
#' se.slope(.392, 4.54, 2.89, 60)
#'
#' # Should return:
#' # Estimate SE
#' # Slope: 0.6158062 0.1897647
#'
#'
#' @references
#' \insertRef{Snedecor1980}{vcmeta}
#'
#'
#' @export
se.slope <- function(cor, sdy, sdx, n) {
slope <- cor*sdy/sdx
se.slope <- sqrt((sdy^2*(1 - cor^2)*(n - 1))/(sdx^2*(n - 1)*(n - 2)))
out <- t(c(slope, se.slope))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Slope: "
return(out)
}
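# Worked check (not run at load): the slope is r*sdy/sdx, reproducing the
# Estimate in the example above.
if (FALSE) {
  .392*4.54/2.89  # 0.6158062
}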
# se.prop2 ===================================================================
#' Computes the estimate and standard error for a 2-group proportion
#' difference
#'
#'
#' @description
#' This function can be used to compute the Agresti-Caffo standard
#' error of a 2-group proportion difference using the frequency
#' counts and sample sizes. The effect size estimate and standard
#' error output from this function can be used as input in the \link[vcmeta]{meta.ave.gen},
#' \link[vcmeta]{meta.lc.gen}, and \link[vcmeta]{meta.lm.gen} functions in
#' applications where compatible proportion differences from a combination of
#' 2-group and paired-samples studies are used in the meta-analysis.
#'
#'
#' @param f1 number of participants in group 1 who have the outcome
#' @param f2 number of participants in group 2 who have the outcome
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated proportion difference
#' * SE - standard error
#'
#'
#' @examples
#' se.prop2(31, 16, 40, 40)
#'
#' # Should return:
#' # Estimate SE
#' # Proportion difference: 0.3571429 0.1002777
#'
#'
#' @references
#' \insertRef{Agresti2000}{vcmeta}
#'
#'
#' @export
se.prop2 <- function(f1, f2, n1, n2) {
p1 <- (f1 + 1)/(n1 + 2)
p2 <- (f2 + 1)/(n2 + 2)
est <- p1 - p2
se <- sqrt(p1*(1 - p1)/(n1 + 2) + p2*(1 - p2)/(n2 + 2))
out <- t(c(est, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Proportion difference: "
return(out)
}
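# Worked check (not run at load): the Agresti-Caffo adjustment adds one
# success and one failure per group, reproducing the Estimate in the example.
if (FALSE) {
  p1 <- (31 + 1)/(40 + 2)
  p2 <- (16 + 1)/(40 + 2)
  p1 - p2  # 0.3571429
}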
# se.prop.ps ==============================================================
#' Computes the estimate and standard error for a paired-samples
#' proportion difference
#'
#'
#' @description
#' This function can be used to compute the Bonett-Price standard error
#' of a paired-samples proportion difference using the frequency counts
#' from a 2 x 2 contingency table. The effect size estimate and standard
#' error output from this function can be used as input in the \link[vcmeta]{meta.ave.gen},
#' \link[vcmeta]{meta.lc.gen}, and \link[vcmeta]{meta.lm.gen} functions in
#' applications where compatible proportion differences from a combination of
#' 2-group and paired-samples studies are used in the meta-analysis.
#'
#'
#' @param f00 number of participants with y = 0 and x = 0
#' @param f01 number of participants with y = 0 and x = 1
#' @param f10 number of participants with y = 1 and x = 0
#' @param f11 number of participants with y = 1 and x = 1
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated proportion difference
#' * SE - standard error
#'
#'
#' @examples
#' se.prop.ps(16, 64, 5, 15)
#'
#' # Should return:
#' # Estimate SE
#' # Proportion difference: 0.5784314 0.05953213
#'
#'
#' @references
#' \insertRef{Bonett2012}{vcmeta}
#'
#'
#' @export
se.prop.ps <- function(f00, f01, f10, f11) {
n <- f00 + f01 + f10 + f11
p01 <- (f01 + 1)/(n + 2)
p10 <- (f10 + 1)/(n + 2)
est <- p01 - p10
se <- sqrt(((p01 + p10) - (p01 - p10)^2)/(n + 2))
out <- t(c(est, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Proportion difference: "
return(out)
}
# se.ave.mean2.dep ============================================================
#' Computes the standard error for the average of 2-group mean differences from
#' two parallel measurement response variables in the same sample
#'
#'
#' @description
#' In a study that reports a 2-group mean difference for two response
#' variables that satisfy the conditions of parallel measurements, this function
#' can be used to compute the standard error of the average of the two mean
#' differences using the two estimated means, estimated standard deviations,
#' estimated within-group correlation between the two response variables, and
#' the two sample sizes. The average mean difference and standard error output
#' from this function can then be used as input in the
#' \link[vcmeta]{meta.ave.gen}, \link[vcmeta]{meta.lc.gen}, and
#' \link[vcmeta]{meta.lm.gen} functions in a meta-analysis where some studies
#' have used one of the two parallel response variables and other studies have
#' used the other parallel response variable. Equality of variances is not
#' assumed.
#'
#'
#' @param m1A estimated mean for variable A in group 1
#' @param m2A estimated mean for variable A in group 2
#' @param sd1A estimated standard deviation for variable A in group 1
#' @param sd2A estimated standard deviation for variable A in group 2
#' @param m1B estimated mean for variable B in group 1
#' @param m2B estimated mean for variable B in group 2
#' @param sd1B estimated standard deviation for variable B in group 1
#' @param sd2B estimated standard deviation for variable B in group 2
#' @param rAB estimated within-group correlation between variables A and B
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - estimated average mean difference
#' * SE - standard error
#' * VAR(A) - variance of mean difference for variable A
#' * VAR(B) - variance of mean difference for variable B
#' * COV(A,B) - covariance of mean differences for variables A and B
#'
#'
#' @examples
#' se.ave.mean2.dep(21.9, 16.1, 3.82, 3.21, 24.8, 17.1, 3.57, 3.64, .785, 40, 40)
#'
#' # Should return:
#' # Estimate SE VAR(A) VAR(B) COV(A,B)
#' # Average mean difference: 6.75 0.7526878 0.6224125 0.6498625 0.4969403
#'
#'
#' @export
se.ave.mean2.dep <- function(m1A, m2A, sd1A, sd2A, m1B, m2B, sd1B, sd2B, rAB, n1, n2) {
m1 <- (m1A + m1B)/2
m2 <- (m2A + m2B)/2
est <- m1 - m2
v1 <- sd1A^2/n1 + sd2A^2/n2
v2 <- sd1B^2/n1 + sd2B^2/n2
cov <- rAB*sd1A*sd1B/n1 + rAB*sd2A*sd2B/n2
se <- sqrt((v1 + v2 + 2*cov)/4)
out <- t(c(est, se, v1, v2, cov))
colnames(out) <- c("Estimate", "SE", "VAR(A)", "VAR(B)", "COV(A,B)")
rownames(out) <- "Average mean difference: "
return(out)
}
# se.ave.cor.over =============================================================
#' Computes the standard error for the average of two Pearson correlations with
#' one variable in common that have been estimated from the same sample
#'
#'
#' @description
#' In a study that reports the sample size and three correlations (cor12, cor13,
#' and cor23 where variable 1 is called the "overlapping" variable), and
#' variables 2 and 3 are different measurements of the same attribute, this
#' function can be used to compute the average of cor12 and cor13 and its
#' standard error. The average correlation and the standard error from this
#' function can be used as input in the \link[vcmeta]{meta.ave.gen} function
#' in a meta-analysis where some studies have reported cor12 and other studies
#' have reported cor13.
#'
#'
#' @param cor12 estimated correlation between variables 1 and 2
#' @param cor13 estimated correlation between variables 1 and 3
#' @param cor23 estimated correlation between variables 2 and 3
#' @param n sample size
#'
#'
#' @return
#' Returns a two-row matrix. The first row gives results for the average
#' correlation and the second row gives the results with a Fisher
#' transformation. The columns are:
#' * Estimate - estimated average of cor12 and cor13
#' * SE - standard error
#' * VAR(cor12) - variance of cor12
#' * VAR(cor13) - variance of cor13
#' * COV(cor12,cor13) - covariance of cor12 and cor13
#'
#'
#' @examples
#' se.ave.cor.over(.462, .518, .755, 100)
#'
#' # Should return:
#' # Estimate SE VAR(cor12) VAR(cor13) COV(cor12,cor13)
#' # Correlation: 0.4900000 0.07087351 0.006378045 0.00551907 0.004097553
#' # Fisher: 0.5360603 0.09326690 0.010309278 0.01030928 0.007119936
#'
#'
#' @export
se.ave.cor.over <- function(cor12, cor13, cor23, n) {
est1 <- (cor12 + cor13)/2
cov1 <- ((cor23 - cor12*cor13/2)*(1 - cor12^2 - cor13^2 - cor23^2) + cor23^3)/(n - 3)
v1 <- (1 - cor12^2)^2/(n - 3)
v2 <- (1 - cor13^2)^2/(n - 3)
se1 <- sqrt((v1 + v2 + 2*cov1)/4)
est2 <- log((1 + est1)/(1 - est1))/2
se2 <- se1/(1 - est1^2)
cov2 <- cov1/((1 - cor12^2)*(1 - cor13^2))
v1.z <- 1/(n - 3)
v2.z <- 1/(n - 3)
out1 <- t(c(est1, se1, v1, v2, cov1))
out2 <- t(c(est2, se2, v1.z, v2.z, cov2))
out <- rbind(out1, out2)
colnames(out) <- c("Estimate", "SE", "VAR(cor12)", "VAR(cor13)", "COV(cor12,cor13)")
rownames(out) <- c("Correlation: ", "Fisher: ")
return(out)
}
# se.ave.cor.nonover ==========================================================
#' Computes the standard error for the average of two Pearson correlations with
#' no variables in common that have been estimated from the same sample
#'
#'
#' @description
#' In a study that reports the sample size and six correlations (cor12, cor34,
#' cor13, cor14, cor23, and cor24) where variables 1 and 3 are different
#' measurements of one attribute and variables 2 and 4 are different
#' measurements of a second attribute, this function can be used to compute the
#' average of cor12 and cor34 and its standard error. Note that cor12 and cor34
#' have no variable in common (i.e., no "overlapping" variable). The average
#' correlation and the standard error from this function can be used as
#' input in the \link[vcmeta]{meta.ave.gen} function in a meta-analysis where
#' some studies have reported cor12 and other studies have reported cor34.
#'
#'
#' @param cor12 estimated correlation between variables 1 and 2
#' @param cor34 estimated correlation between variables 3 and 4
#' @param cor13 estimated correlation between variables 1 and 3
#' @param cor14 estimated correlation between variables 1 and 4
#' @param cor23 estimated correlation between variables 2 and 3
#' @param cor24 estimated correlation between variables 2 and 4
#' @param n sample size
#'
#'
#' @return
#' Returns a two-row matrix. The first row gives results for the average
#' correlation and the second row gives the results with a Fisher
#' transformation. The columns are:
#' * Estimate - estimated average of cor12 and cor34
#' * SE - standard error
#' * VAR(cor12) - variance of cor12
#' * VAR(cor34) - variance of cor34
#' * COV(cor12,cor34) - covariance of cor12 and cor34
#'
#'
#' @examples
#' se.ave.cor.nonover(.357, .398, .755, .331, .347, .821, 100)
#'
#' # Should return:
#' # Estimate SE VAR(cor12) VAR(cor34) COV(cor12,cor34)
#' # Correlation: 0.377500 0.07768887 0.00784892 0.007301895 0.004495714
#' # Fisher: 0.397141 0.09059993 0.01030928 0.010309278 0.006122153
#'
#'
#' @export
se.ave.cor.nonover <- function(cor12, cor34, cor13, cor14, cor23, cor24, n) {
est1 <- (cor12 + cor34)/2
c1 <- (cor12*cor34)*(cor13^2 + cor14^2 + cor23^2 + cor24^2)/2 + cor13*cor24 + cor14*cor23
c2 <- (cor12*cor13*cor14 + cor12*cor23*cor24 + cor13*cor23*cor34 + cor14*cor24*cor34)
cov1 <- (c1 - c2)/(n - 3)
v1 <- (1 - cor12^2)^2/(n - 3)
v2 <- (1 - cor34^2)^2/(n - 3)
se1 <- sqrt((v1 + v2 + 2*cov1)/4)
est2 <- log((1 + est1)/(1 - est1))/2
se2 <- se1/(1 - est1^2)
cov2 <- cov1/((1 - cor12^2)*(1 - cor34^2))
v1.z <- 1/(n - 3)
v2.z <- 1/(n - 3)
out1 <- t(c(est1, se1, v1, v2, cov1))
out2 <- t(c(est2, se2, v1.z, v2.z, cov2))
out <- rbind(out1, out2)
colnames(out) <- c("Estimate", "SE", "VAR(cor12)", "VAR(cor34)", "COV(cor12,cor34)")
rownames(out) <- c("Correlation: ", "Fisher: ")
return(out)
}
# se.tetra ==================================================================
#' Computes the standard error for a tetrachoric correlation approximation
#'
#'
#' @description
#' This function can be used to compute an estimate of a tetrachoric
#' correlation approximation and its standard error using the frequency counts
#' from a 2 x 2 contingency table for two artificially dichotomous variables.
#' A tetrachoric approximation could be compatible with a Pearson correlation
#' in a meta-analysis. The tetrachoric approximation and the standard error
#' from this function can be used as input in the \link[vcmeta]{meta.ave.gen}
#' function in a meta-analysis where some studies have reported Pearson
#' correlations between quantitative variables x and y and other studies have
#' reported a 2 x 2 contingency table for dichotomous measurements of variables
#' x and y.
#'
#'
#' @param f00 number of participants with y = 0 and x = 0
#' @param f01 number of participants with y = 0 and x = 1
#' @param f10 number of participants with y = 1 and x = 0
#' @param f11 number of participants with y = 1 and x = 1
#'
#'
#' @references
#' \insertRef{Bonett2005}{vcmeta}
#'
#'
#' @return
#' Returns a 1-row matrix. The columns are:
#' * Estimate - estimated tetrachoric approximation
#' * SE - standard error
#'
#'
#' @examples
#' se.tetra(46, 15, 54, 85)
#'
#' # Should return:
#' # Estimate SE
#' # Tetrachoric: 0.5135167 0.09358336
#'
#'
#' @export
se.tetra <- function(f00, f01, f10, f11) {
n <- f00 + f01 + f10 + f11
or <- (f11 + .5)*(f00 + .5)/((f01 + .5)*(f10 + .5))
r1 <- (f00 + f01 + 1)/(n + 2)
r2 <- (f10 + f11 + 1)/(n + 2)
c1 <- (f00 + f10 + 1)/(n + 2)
c2 <- (f01 + f11 + 1)/(n + 2)
pmin <- min(c1, c2, r1, r2)
c <- (1 - abs(r1 - c1)/5 - (.5 - pmin)^2)/2
lor <- log(or)
se.lor <- sqrt(1/(f00 + .5) + 1/(f01 + .5) + 1/(f10 + .5) + 1/(f11 + .5))
tetra <- cos(3.14159/(1 + or^c))
k <- (3.14159*c*or^c)*sin(3.14159/(1 + or^c))/(1 + or^c)^2
se <- k*se.lor
out <- t(c(tetra, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Tetrachoric: "
return(out)
}
# se.biphi ==================================================================
#' Computes the standard error for a biserial-phi correlation
#'
#'
#' @description
#' This function can be used to compute an estimate of a biserial-phi
#' correlation and its standard error using the frequency counts from a 2 x 2
#' contingency table where one variable is naturally dichotomous and the other
#' variable is artificially dichotomous. A biserial-phi correlation could be
#' compatible with a point-biserial correlation in a meta-analysis. The
#' biserial-phi estimate and the standard error from this function can be used
#' as input in the \link[vcmeta]{meta.ave.gen} function in a meta-analysis
#' where a point-biserial correlation has been obtained in some studies and
#' a biserial-phi correlation has been obtained in other studies.
#'
#'
#' @param f1 number of participants in group 1 who have the attribute
#' @param f2 number of participants in group 2 who have the attribute
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#'
#'
#' @return
#' Returns a 1-row matrix. The columns are:
#' * Estimate - estimated biserial-phi correlation
#' * SE - standard error
#'
#'
#' @examples
#' se.biphi(34, 22, 50, 50)
#'
#' # Should return:
#' # Estimate SE
#' # Biserial-phi: 0.27539 0.1074594
#'
#'
#' @export
se.biphi <- function(f1, f2, n1, n2) {
if (f1 > n1) {stop("f cannot be greater than n")}
if (f2 > n2) {stop("f cannot be greater than n")}
f00 <- f1
f10 <- n1 - f1
f01 <- f2
f11 <- n2 - f2
p1 <- n1/(n1 + n2)
p2 <- n2/(n1 + n2)
or <- (f11 + .5)*(f00 + .5)/((f01 + .5)*(f10 + .5))
lor <- log(or)
se.lor <- sqrt(1/(f00 + .5) + 1/(f01 + .5) + 1/(f10 + .5) + 1/(f11 + .5))
c <- 2.89/(p1*p2)
biphi <- lor/sqrt(lor^2 + c)
se.biphi <- sqrt(c^2/(lor^2 + c)^3)*se.lor
out <- t(c(biphi, se.biphi))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Biserial-phi: "
return(out)
}
# se.cohen ====================================================================
#' Computes the standard error for Cohen's d
#'
#'
#' @description
#' This function computes the standard error of Cohen's d using only the two
#' sample sizes and an estimate of Cohen's d. Cohen's d and its standard error
#' assume equal variances. The estimate of Cohen's d, with the standard error
#' output from this function, can be used as input in the \link[vcmeta]{meta.ave.gen},
#' \link[vcmeta]{meta.lc.gen}, and \link[vcmeta]{meta.lm.gen} functions in
#' applications where different types of compatible standardized mean
#' differences are used in the meta-analysis.
#'
#'
#' @param d estimated Cohen's d
#' @param n1 sample size for group 1
#' @param n2 sample size for group 2
#'
#'
#' @return
#' Returns a one-row matrix:
#' * Estimate - Cohen's d (from input)
#' * SE - standard error
#'
#'
#' @examples
#' se.cohen(.78, 35, 50)
#'
#' # Should return:
#' # Estimate SE
#' # Cohen's d: 0.78 0.2288236
#'
#'
#' @seealso \link[vcmeta]{se.stdmean2}
#'
#'
#' @export
se.cohen <- function(d, n1, n2) {
df1 <- n1 - 1
df2 <- n2 - 1
se <- sqrt(d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2)
out <- t(c(d, se))
colnames(out) <- c("Estimate", "SE")
rownames(out) <- "Cohen's d: "
return(out)
}
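# Illustrative check (not part of the package source): se.cohen() implements
# the closed-form standard error sqrt(d^2*(1/df1 + 1/df2)/8 + 1/n1 + 1/n2),
# so the documented example can be verified by hand:
#
#   d <- .78; n1 <- 35; n2 <- 50
#   sqrt(d^2*(1/(n1 - 1) + 1/(n2 - 1))/8 + 1/n1 + 1/n2)
#   ## [1] 0.2288236   (matches the SE from se.cohen(.78, 35, 50))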
|
/scratch/gouwar.j/cran-all/cranData/vcmeta/R/meta_se.R
|
# logic mostly adapted from test.data.table()
test.vcov <- function(...) {
test_dir = paste0(getNamespaceInfo("vcov", "path"), "/tests/")
olddir = setwd(test_dir)
on.exit(setwd(olddir))
cat("Running tests of `vcov`")
sys.source(file.path(test_dir, "tests.Rraw"),
envir = new.env(parent = .GlobalEnv))
}
test <- function(name, x, y, approx = FALSE, error, warning, message) {
#Extra spaces since \r moves the cursor to the beginning of
# the line, but doesn't erase the current text in the line --
# so new text only overwrites old text if it's wider.
# Otherwise, the scars remain.
cat("\rRunning test: ", name, " ", sep = "")
#since most of the package is about using `cat`,
# which supersedes output suppression through `invisible`
capture.output(
x.catch <- tryCatch(x, error = identity,
warning = identity, message = identity)
)
if (inherits(x.catch, "error")) {
if (missing(error)) {
cat("\n`", deparse(substitute(x)),
"` produced an unanticipated error: '",
x.catch$message, "'.\n", sep = "")
return()
}
if (grepl(error, x.catch$message)) return()
cat("\nExpected error matching '", error,
"', but returned '", x.catch$message, "'.\n", sep = "")
return()
}
if (!missing(error)) {
cat("\nExpected error matching '", error,
"', but returned no error.\n", sep = "")
return()
}
if (inherits(x.catch, "warning")) {
if (missing(warning)) {
cat("\n`", deparse(substitute(x)),
"` produced an unanticipated warning: '",
x.catch$message, "'.\n", sep = "")
return()
}
if (grepl(warning, x.catch$message)) return()
cat("\nExpected warning matching '", error,
"', but returned '", x.catch$message, "'.\n", sep = "")
return()
}
if (!missing(warning)) {
cat("\nExpected warning matching '", warning,
"', but returned no warning.\n", sep = "")
return()
}
if (inherits(x.catch, "message")) {
if (missing(message)) {
cat("\n`", deparse(substitute(x)),
"` produced an unanticipated message: '",
x.catch$message, "'.\n", sep = "")
return()
}
if (grepl(message, x.catch$message)) return()
cat("\nExpected message matching '", error,
"', but returned '", x.catch$message, "'.\n", sep = "")
return()
}
if (!missing(message)) {
cat("\nExpected message matching '", message,
"', but returned no message.\n", sep = "")
return()
}
#allow for numerical errors if approx = TRUE
if (approx) {
    if (isTRUE(all.equal(x.catch, y))) return()
else
cat("\n`", deparse(substitute(x)),
"` evaluated without errors to:\n", x,
"\nwhich is not equal to the expected output:\n",
eval(substitute(y)), "\nat default tolerance\n", sep = "")
} else {
if (identical(x.catch, y)) return()
else
cat("\n`", deparse(substitute(x)),
"` evaluated without errors to:\n", x,
"\nwhich is not identical to the expected output:\n",
eval(substitute(y)), "\n", sep = "")
}
return()
}
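# Illustrative calls (not part of the package source): this is roughly how a
# tests.Rraw file would exercise test(); the names and expectations below are
# made up for demonstration.
#
#   test("identical pass", 1L + 1L, 2L)
#   test("numeric tolerance", sqrt(2)^2, 2, approx = TRUE)
#   test("expected error", stop("boom"), error = "boom")
#   test("expected warning", warning("careful"), warning = "careful")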
|
/scratch/gouwar.j/cran-all/cranData/vcov/R/test.vcov.R
|
se = function(object, ...) sqrt(diag(Vcov(object, ...)))
Vcov = function(object, ...) UseMethod('Vcov')
Vcov.default = vcov
Vcov.lm = function(object, ...) {
if (p <- object$rank) {
p1 = seq_len(p)
rss = if (is.null(w <- object$weights)) {
sum(object$residuals^2)
} else {
sum(w * object$residuals^2)
}
covmat = rss * chol2inv(object$qr$qr[p1, p1, drop = FALSE])/
object$df.residual
nm = names(object$coefficients)
dimnames(covmat) = list(nm, nm)
return(covmat)
} else return(numeric(0))
}
Vcov.glm = function(object, dispersion = NULL, ...) {
if (p <- object$rank) {
if (is.null(dispersion)) {
dispersion = if (object$family$family %in% c('poisson', 'binomial')) {
1
} else {
df_r = object$df.residual
if (df_r) {
if (any(!object$weights))
          warning('observations with zero weight not ',
'used for calculating dispersion')
w = object$weights
idx = w > 0
sum(w[idx] * object$residuals[idx]^2)/df_r
} else NaN
}
}
p1 = seq_len(p)
nm <- names(object$coefficients[object$qr$pivot[p1]])
covmat = dispersion * chol2inv(object$qr$qr[p1, p1, drop = FALSE])
dimnames(covmat) = list(nm, nm)
return(covmat)
} else return(numeric(0))
}
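# Quick sanity check (illustrative, not part of the package source): Vcov.lm
# and Vcov.glm recompute the same covariance matrix that stats::vcov() returns,
# so they can be compared directly on a fitted model.
#
#   fit <- lm(mpg ~ wt, data = mtcars)
#   all.equal(Vcov(fit), vcov(fit))   # TRUE
#   se(fit)                           # same as sqrt(diag(vcov(fit)))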
|
/scratch/gouwar.j/cran-all/cranData/vcov/R/vcov.R
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
minque_Rcpp <- function(y, X, Kerns, vc) {
.Call(`_vcpen_minque_Rcpp`, y, X, Kerns, vc)
}
vcpen_Rcpp <- function(y, X, Kerns, lambda_factor, lambda_grid, frac1, sigma2_init, maxiter, print_iter) {
.Call(`_vcpen_vcpen_Rcpp`, y, X, Kerns, lambda_factor, lambda_grid, frac1, sigma2_init, maxiter, print_iter)
}
|
/scratch/gouwar.j/cran-all/cranData/vcpen/R/RcppExports.R
|
#' Variance Component Linear Kernel Matrix
#'
#' Variance component Linear kernel matrix from genotype dosage
#'
#' @param dose data.frame or matrix with genotype dosage values; rows are subjects and columns are SNPs
#' @param method type of kernel; currently only linear kernel implemented
#' @return square symmetric kernel matrix for subject similarity by genotype dosage
#' @examples
#' data(vcexample)
#' Kern1 <- kernel_linear(dose[,which(doseinfo[,1]==1)], method="linear")
#' Kern1[1:5,1:5]
#'
#' @author JP Sinnwell, DJ Schaid
#' @seealso \code{\link{vcpen}}
#' @name kernel_linear
NULL
#> NULL
#' @rdname kernel_linear
#' @export
kernel_linear <- function(dose, method="linear"){
## linear kernel matrix based matrix of SNP doses of minor alleles
n.snp <- ncol(dose)
  ## kernel matrix based on centered and scaled dose (consistent with how the
  ## genetic relationship matrix is calculated in Gemma)
dose.mean <- apply(dose, 2, mean, na.rm=TRUE)
dose.sd <- sqrt(apply(dose, 2, var, na.rm=TRUE))
zdose <- t((t(dose) - dose.mean)/dose.sd)
kmat <- zdose %*% t(zdose) / n.snp
return(kmat)
}
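## Illustrative check (not part of the package source): the result is a
## symmetric n x n subject-similarity matrix. The simulated dosages below
## are made up for demonstration.
##
##   set.seed(42)
##   d <- matrix(rbinom(50, size = 2, prob = 0.5), nrow = 10) # 10 subjects, 5 SNPs
##   K <- kernel_linear(d)
##   dim(K)          # 10 10
##   isSymmetric(K)  # TRUE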
|
/scratch/gouwar.j/cran-all/cranData/vcpen/R/kernel_linear.R
|
#' MINQUE estimation of variance components
#'
#' Estimate variance components by MINQUE method, allowing multiple iterations
#'
#' @param y Numeric vector of traits. Only continuous trait currently allowed.
#' @param X Matrix of covariates (columns) for subjects (rows), matching subjects in the trait (y) vector.
#' @param Kerns List of kernel matrices: a kernel matrix for each variance component. The last kernel matrix in the list (an identity matrix) is for the residual variance component.
#' @param n.iter Number of minque iterations
#' @param eps Default small positive value for non-positive vc estimates within iterations.
#' @return List with estimates of variance components (vc), covariate regression coefficients (beta), and residuals of model fit.
#' @examples
#' data(vcexample)
#' nvc <- 1+length(unique(doseinfo[,2]))
#' id <- 1:nrow(dose)
#' ## vcs for genetic kernel matrices
#' Kerns <- vector("list", length=nvc)
#' for(i in 1:(nvc-1)){
#' Kerns[[i]] <- kernel_linear(dose[,grep(i, doseinfo[,2])])
#' rownames(Kerns[[i]]) <- id
#' colnames(Kerns[[i]]) <- id
#' }
#' ## vc for residual variance
#' Kerns[[nvc]] <- diag(nrow(dose))
#' rownames(Kerns[[nvc]]) <- id
#' colnames(Kerns[[nvc]]) <- id
#' prefit <- minque(response, covmat, Kerns, n.iter=2)
#' prefit[1]
#' prefit[2]
#' fit <- vcpen(response, covmat, Kerns, vc_init = prefit$vc)
#'
#' @author JP Sinnwell, DJ Schaid
## methods for minque
#' @name minque
#' @rdname minque
#' @export
minque <- function(y, X, Kerns, n.iter=1, eps=0.001){
## init values of vc
vc <- rep(.5, length(Kerns))
vc[length(Kerns)] <- 1
## eps <- 0.001
for(i in 1:n.iter){
fit <- minque_Rcpp(y, X, Kerns, vc)
vc <- fit$vc
## for negative vc, set to small pos value for next iter
vc <- ifelse(vc < 0, eps, vc)
}
fit$vc <- ifelse(fit$vc < 0, 0, fit$vc)
return(fit)
}
|
/scratch/gouwar.j/cran-all/cranData/vcpen/R/minque.R
|
#' Example data for Penalized Variance Component method
#'
#' Datasets for an example run of vcpen with 4 variance components calculated as kernel matrices from genotype dosage (dose) on 100 subjects with two covariates (covmat), and a continuous response.
#'
#' @format The example contains three data.frames and a response vector for 100 subjects at 70 SNPs across 4 variance components:
#' \describe{
#' \item{\code{covmat}}{two arbitrary covariates (columns) for 100 subjects (rows)}
#' \item{\code{dose}}{genotype dosage at 70 SNPs (columns) and 100 subjects (rows)}
#' \item{\code{doseinfo}}{2-column matrix with indices for grouping SNPs into variance components (for Kernel Matrix)}
#' \item{\code{response}}{continuous response vector for 100 subjects}
#' }
#' @examples
#' data(vcexample)
#' dim(dose)
#' dim(doseinfo)
#' dim(covmat)
#' length(response)
#' @name vcexample
NULL
#> NULL
#' @rdname vcexample
#' @name covmat
NULL
#> NULL
#' @rdname vcexample
#' @name dose
NULL
#> NULL
#' @rdname vcexample
#' @name doseinfo
NULL
#> NULL
#' @rdname vcexample
#' @name response
NULL
#> NULL
|
/scratch/gouwar.j/cran-all/cranData/vcpen/R/vcexample.R
|
#' Penalized Variance Components
#'
#' Penalized Variance Component analysis
#'
#' @param y Numeric vector of traits. Only continuous trait currently allowed.
#' @param X Matrix of covariates (columns) for subjects (rows), matching subjects in the trait (y) vector.
#' @param Kerns List of kernel matrices: a kernel matrix for each variance component. The last kernel matrix in the list (an identity matrix) is for the residual variance component.
#' @param frac1 Fraction of penalty imposed on L1 penalty, between 0 and 1 (0 for only L2; 1 for only L1 penalty).
#' @param lambda_factor Weight for each vc (values between 0 and 1) for how much it should be penalized: 0 means no penalty. Default value of NULL implies weight of 1 for all vc's.
#' @param lambda_grid Vector of lambda penalties for fitting the penalized model. Best to order values from largest to smallest so parameter estimates from a large penalty can be used as initial values for the next smaller penalty. Default value of NULL implies initial values of seq(from=.10, to=0, by=-0.01).
#' @param maxiter Maximum number of iterations allowed during penalized fitting.
#' @param vc_init Numeric vector of initial values for variance components. Default value of NULL implies initial values determined by 2 iterations of minque estimation.
#' @param print_iter Logical: if TRUE, print the iteration results (mainly for refined checks)
#' @param object Fitted vcpen object (used in summary method)
#' @param \dots Optional arguments for summary method
#' @param digits Significant digits for summary method
#'
#' @return object with S3 class vcpen
#' @examples
#' data(vcexample)
#' nvc <- 1+length(unique(doseinfo[,2]))
#' id <- 1:nrow(dose)
#' ## vcs for genetic kernel matrices
#' Kerns <- vector("list", length=nvc)
#' for(i in 1:(nvc-1)){
#' Kerns[[i]] <- kernel_linear(dose[,grep(i, doseinfo[,2])])
#' rownames(Kerns[[i]]) <- id
#' colnames(Kerns[[i]]) <- id
#' }
#' ## vc for residual variance
#' Kerns[[nvc]] <- diag(nrow(dose))
#' rownames(Kerns[[nvc]]) <- id
#' colnames(Kerns[[nvc]]) <- id
#' fit <- vcpen(response, covmat, Kerns, frac1 = .6)
#' summary(fit)
#'
#' @author JP Sinnwell, DJ Schaid
#' @name vcpen
NULL
#> NULL
#' @rdname vcpen
#' @export
vcpen <- function(y, X, Kerns, frac1=0.8, lambda_factor=NULL, lambda_grid=NULL,
maxiter=1000, vc_init=NULL, print_iter=FALSE){
nvc <- length(Kerns)
## lambda_factor is the factor assigned for each vc, with values between
## 0 and 1; 0 means don't penalize, and 1 means give full weight
## for penalizing
if( is.null(lambda_factor)){
lambda_factor=rep(1, (nvc-1) )
}
if(is.null(lambda_grid)){
lambda_grid <- seq(from=.10, to=0, by=-0.01)
}
## vc_init is a vector of initial starting values of vcs
if(is.null(vc_init)){
fit.minque <- minque(y, X, Kerns, n.iter=2)
vc_init <- fit.minque$vc
vc_init <- ifelse(vc_init < 0.01, 0.01, vc_init)
}
fit <- vcpen_Rcpp(y, X, Kerns, lambda_factor, lambda_grid,
frac1, vc_init, maxiter, print_iter)
if(is.null(colnames(X))){
xnames <- paste("x", 1:nrow(fit$beta_grid), sep=".")
} else {
xnames <- colnames(X)
}
dimnames(fit$beta_grid) <- list(xnames, fit$lambda_grid)
df <- data.frame(t(fit$vc_grid))
names(df) <- paste0("vc", 1:ncol(df))
df <- cbind(lambda=fit$lambda_grid, df)
fit$vc_grid <- df
## define eps for deciding non-zero VCs
eps <- .001
npar <- apply(fit$vc_grid[,-1] > eps, 1, sum) + apply(abs(fit$beta_grid) > eps, 2, sum)
#bic_grid <- as.vector(-2*fit$logl_grid+ log(fit$n_subj)*npar)
bic_grid <- as.numeric(-2*fit$logl_grid + log(fit$n_subj)*npar)
index <- 1:length(bic_grid)
## if ties, choose bic with larger lambda penalty
is.min.bic <- bic_grid == min(bic_grid)
index <- index[is.min.bic][1]
## JPS 2021/12: add drop=FALSE to keep same behavior of as.vector()
## but named vector might be better in next update
fit$vc <- fit$vc_grid[index,-1, drop=FALSE]
fit$beta <- fit$beta_grid[, index, drop=TRUE]
fit$grid_info <- data.frame(lambda = fit$lambda_grid, iter=fit$iter+1,
logl=fit$logl_grid, loglpen=fit$logllasso_grid,
bic=bic_grid, min_bic = is.min.bic)
## remove redundant info
fit$iter_grid <- NULL
fit$logl_grid <- NULL
fit$logllasso_grid <- NULL
fit$lambda_grid <- NULL
class(fit) <- c("vcpen", "list")
return(fit)
}
#' @name summary.vcpen
#' @rdname vcpen
#' @export
summary.vcpen <- function(object, ..., digits=4) {
cat("vcpen object\n")
cat(paste0(" N-subjects = ", object$n_subj, "\n"))
cat(paste0(" N-VC = ", object$n_vc, "\n"))
cat("\n Model fits over lambda penalty grid:\n\n")
print(object$grid_info, digits=digits, ...)
cat("\n VC estimates by lambda penalties:\n\n")
print(object$vc_grid, digits=digits, ...)
cat("\nEstimates with min BIC:\n")
cat("beta:\n")
print(object$beta, digits=digits)
cat("VC estimates:\n")
print(object$vc, digits=digits)
invisible()
}
|
/scratch/gouwar.j/cran-all/cranData/vcpen/R/vcpen.R
|
## ----setup, include=FALSE-----------------------------------------------------
knitr::opts_chunk$set(echo = TRUE, tidy.opts=list(width.cutoff=80), tidy=TRUE, comment=NA)
## ----message = FALSE----------------------------------------------------------
require(vcpen)
## ---- loaddat-----------------------------------------------------------------
data(vcexample)
ls()
head(dose)
head(doseinfo)
response[1:10]
## ---- kerns-------------------------------------------------------------------
nvc <- 1+length(unique(doseinfo[,2]))
id <- 1:nrow(dose)
## vcs for genetic kernel matrices
Kerns <- vector("list", length=nvc)
for(i in 1:(nvc-1)){
## below uses kernel_linear, but users can replace this with their choice of function to
## create other types of kernel matrices.
Kerns[[i]] <- kernel_linear(dose[,grep(i, doseinfo[,2])])
rownames(Kerns[[i]]) <- id
colnames(Kerns[[i]]) <- id
}
## vc for residual variance requires identity matrix
Kerns[[nvc]] <- diag(nrow(dose))
rownames(Kerns[[nvc]]) <- id
colnames(Kerns[[nvc]]) <- id
## ---- runvcpen6---------------------------------------------------------------
fit <- vcpen(response, covmat, Kerns)
summary(fit)
## ---- runvcpen1---------------------------------------------------------------
fit.frac1 <- vcpen(response, covmat, Kerns, frac1 = .1)
summary(fit.frac1)
## ---- vcinit------------------------------------------------------------------
vcinit <- minque(response, covmat, Kerns, n.iter=2)
names(vcinit)
vcinit$beta
vcinit$vc
|
/scratch/gouwar.j/cran-all/cranData/vcpen/inst/doc/vcpen.R
|
---
title: "Penalized Variance Components"
author: "JP Sinnwell, DJ Schaid"
output:
rmarkdown::html_vignette:
toc: yes
toc_depth: 3
vignette: |
%\VignetteIndexEntry{Penalized Variance Components}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, tidy.opts=list(width.cutoff=80), tidy=TRUE, comment=NA)
```
# Overview of the *vcpen* package for Penalized Variance Components
A penalized likelihood model is used to estimate variance components with an elastic-net penalty function that applies both L1 and L2 penalties to the variance components, using the function `vcpen()`. Each variance component multiplies a kernel matrix, and we provide the function `kernel_linear()` to compute linear kernel matrices, but users are welcome to use their own functions to compute kernel matrices.
The function `vcpen()` allows the user to provide initial starting values for the variance components. If no initial values are provided, the default is to use our function `minque()` to calculate initial values. For linear mixed models, MINQUE is the first iteration of restricted maximum likelihood estimation (REML), and iterative updates of MINQUE converge to REML estimation.
```{r message = FALSE}
require(vcpen)
```
# Preparing to run *vcpen*
## Sample dataset
Below are snapshots of an example dataset. The response is the outcome variable, covmat is a matrix of adjusting covariates, and dose is a matrix of minor-allele dosages for SNPs (dose values of 0, 1, 2). The doseinfo object shows how the SNPs (columns of dose) map into groups, for creating kernel matrices for each group. A kernel matrix for n subjects is an n x n matrix that measures the similarity of the dose values for each pair of subjects.
```{r, loaddat}
data(vcexample)
ls()
head(dose)
head(doseinfo)
response[1:10]
```
## Make kernel matrices
The example below illustrates how to loop over groups (indicated by doseinfo) to create linear kernel matrices for each group. Note that the number of variance components is the number of groups plus 1, because the last group is for the residual variance component, which will have a kernel matrix that is the identity matrix.
```{r, kerns}
nvc <- 1+length(unique(doseinfo[,2]))
id <- 1:nrow(dose)
## vcs for genetic kernel matrices
Kerns <- vector("list", length=nvc)
for(i in 1:(nvc-1)){
## below uses kernel_linear, but users can replace this with their choice of function to
## create other types of kernel matrices.
Kerns[[i]] <- kernel_linear(dose[,grep(i, doseinfo[,2])])
rownames(Kerns[[i]]) <- id
colnames(Kerns[[i]]) <- id
}
## vc for residual variance requires identity matrix
Kerns[[nvc]] <- diag(nrow(dose))
rownames(Kerns[[nvc]]) <- id
colnames(Kerns[[nvc]]) <- id
```
# Penalized estimation of VCs
## Default settings.
Run with default settings, which uses `minque()` to estimate initial values for variance components and default `frac1=0.8`.
```{r, runvcpen6}
fit <- vcpen(response, covmat, Kerns)
summary(fit)
```
## Changing penalty fraction:
Perform the same run as above, but with lower penalty fraction.
```{r, runvcpen1}
fit.frac1 <- vcpen(response, covmat, Kerns, frac1 = .1)
summary(fit.frac1)
```
# Demo of using `minque()` outside of `vcpen()`
This demonstrates how users can use `minque()` as a general approach to approximate REML variance components. Increasing `n.iter` will cause the resulting variance components to be closer to the fully iterative REML estimates.
```{r, vcinit}
vcinit <- minque(response, covmat, Kerns, n.iter=2)
names(vcinit)
vcinit$beta
vcinit$vc
```
References
=============
Schaid DJ, Sinnwell JP, Larson NB, Chen J (2020). Penalized Variance Components for Association of Multiple Genes with Traits. Genet Epidemiol, To Appear.
|
/scratch/gouwar.j/cran-all/cranData/vcpen/inst/doc/vcpen.Rmd
|
#' Coerce names, etc. to cassettes
#'
#' @export
#' @param x Input, a cassette name (character), or something that
#' can be coerced to a cassette
#' @param ... further arguments passed on to [cassettes()] or
#' [read_cassette_meta()]
#' @return a cassette of class `Cassette`
#' @examples \dontrun{
#' vcr_configure(dir = tempfile())
#' insert_cassette("foobar")
#' cassettes(on_disk = FALSE)
#' cassettes(on_disk = TRUE)
#' as.cassette("foobar", on_disk = FALSE)
#' eject_cassette() # eject the current cassette
#'
#' # cleanup
#' unlink(file.path(tempfile(), "foobar.yml"))
#' }
as.cassette <- function(x, ...) UseMethod("as.cassette")
#' @export
as.cassette.default <- function(x, ...) {
stop("no 'as.cassette' method for ", class(x), call. = FALSE)
}
#' @export
as.cassette.cassette <- function(x, ...) x
#' @export
as.cassette.character <- function(x, ...) {
cassettes(...)[[x]]
}
#' @export
as.cassette.cassettepath <- function(x, ...) read_cassette_meta(x, ...)
#' @export
as.cassette.list <- function(x, ...) lapply(x, as.cassette, ...)
#' Coerce to a cassette path
#'
#' @export
#' @rdname as.cassette
as.cassettepath <- function(x) UseMethod("as.cassettepath")
#' @export
as.cassettepath.character <- function(x) {
if (file.exists(x)) {
structure(x, class = "cassettepath")
} else {
stop("Path not found", call. = FALSE)
}
}
#' @export
print.cassettepath <- function(x, ...) cat(paste0("<cassette path>"), x[[1]])
|
/scratch/gouwar.j/cran-all/cranData/vcr/R/as-casette.R
|
#' @title Cassette handler
#' @description Main R6 class that is called from the main user facing
#' function [use_cassette()]
#' @export
#' @keywords internal
#' @return an object of class `Cassette`
#' @seealso [vcr_configure()], [use_cassette()], [insert_cassette()]
#' @section Points of webmockr integration:
#' - `initialize()`: webmockr is used in the `initialize()` method to
#' create webmockr stubs. stubs are created on call to `Cassette$new()`
#' within `insert_cassette()`, but then on exiting `use_cassette()`,
#' or calling `eject()` on `Cassette` class from `insert_cassette()`,
#' stubs are cleaned up.
#' - `eject()` method: [webmockr::disable()] is called before exiting
#' eject to disable webmock so that webmockr does not affect any HTTP
#' requests that happen afterwards
#' - `call_block()` method: call_block is used in the [use_cassette()]
#' function to evaluate whatever code is passed to it; within call_block
#' [webmockr::webmockr_allow_net_connect()] is run before we evaluate
#' the code block to allow real HTTP requests, then
#' [webmockr::webmockr_disable_net_connect()] is called after evaluating
#' the code block to disallow real HTTP requests
#' - `make_http_interaction()` method: [webmockr::pluck_body()] utility
#' function is used to pull the request body out of the HTTP request
#' - `serialize_to_crul()` method: [webmockr::RequestSignature] and
#' [webmockr::Response] are used to build a request and response,
#' respectively, then passed to [webmockr::build_crul_response()]
#' to make a complete `crul` HTTP response object
#' @examples
#' library(vcr)
#' vcr_configure(dir = tempdir())
#'
#' res <- Cassette$new(name = "bob")
#' res$file()
#' res$originally_recorded_at()
#' res$recording()
#' res$serializable_hash()
#' res$eject()
#' res$should_remove_matching_existing_interactions()
#' res$storage_key()
#' res$match_requests_on
#'
#' # record all requests
#' res <- Cassette$new("foobar", record = "all")
#' res$eject()
#'
#' # cleanup
#' unlink(file.path(tempdir(), c("bob.yml", "foobar.yml")))
#'
#' library(vcr)
#' vcr_configure(dir = tempdir())
#' res <- Cassette$new(name = "jane")
#' library(crul)
#' HttpClient$new("https://httpbin.org")$get("get")
Cassette <- R6::R6Class(
"Cassette",
public = list(
#' @field name (character) cassette name
name = NA,
#' @field record (character) record mode
record = "all",
#' @field manfile (character) cassette file path
manfile = NA,
#' @field recorded_at (character) date/time recorded at
recorded_at = NA,
#' @field serialize_with (character) serializer to use (yaml|json)
serialize_with = "yaml",
#' @field serializer (character) serializer to use (yaml|json)
serializer = NA,
#' @field persist_with (character) persister to use (FileSystem only)
persist_with = "FileSystem",
#' @field persister (character) persister to use (FileSystem only)
persister = NA,
#' @field match_requests_on (character) matchers to use
#' default: method & uri
match_requests_on = c("method", "uri"),
#' @field re_record_interval (numeric) the re-record interval
re_record_interval = NULL,
#' @field tag ignored, not used right now
tag = NA,
#' @field tags ignored, not used right now
tags = NA,
#' @field root_dir root dir, gathered from [vcr_configuration()]
root_dir = NA,
#' @field update_content_length_header (logical) Whether to overwrite the
#' `Content-Length` header
update_content_length_header = FALSE,
#' @field allow_playback_repeats (logical) Whether to allow a single HTTP
#' interaction to be played back multiple times
allow_playback_repeats = FALSE,
#' @field allow_unused_http_interactions (logical) ignored, not used right now
allow_unused_http_interactions = TRUE,
#' @field exclusive (logical) ignored, not used right now
exclusive = FALSE,
#' @field preserve_exact_body_bytes (logical) Whether to base64 encode the
#' bytes of the requests and responses
preserve_exact_body_bytes = FALSE,
#' @field args (list) internal use
args = list(),
#' @field http_interactions_ (list) internal use
http_interactions_ = NULL,
#' @field new_recorded_interactions (list) internal use
new_recorded_interactions = NULL,
#' @field clean_outdated_http_interactions (logical) Should outdated interactions
#' be recorded back to file
clean_outdated_http_interactions = FALSE,
#' @field to_return (logical) internal use
to_return = NULL,
#' @field cassette_opts (list) various cassette options
cassette_opts = NULL,
#' @description Create a new `Cassette` object
#' @param name The name of the cassette. vcr will sanitize this to ensure it
#' is a valid file name.
#' @param record The record mode. Default: "once". In the future we'll support
#' "once", "all", "none", "new_episodes". See [recording] for more information
#' @param serialize_with (character) Which serializer to use.
#' Valid values are "yaml" (default), the only one supported for now.
#' @param persist_with (character) Which cassette persister to
#' use. Default: "file_system". You can also register and use a
#' custom persister.
#' @param match_requests_on List of request matchers
#' to use to determine what recorded HTTP interaction to replay. Defaults to
#' `["method", "uri"]`. The built-in matchers are "method", "uri",
#' "headers" and "body" ("host" and "path" not supported yet, but should
#' be in a future version)
#' @param re_record_interval (numeric) When given, the cassette will be
#' re-recorded at the given interval, in seconds.
#' @param tag,tags tags ignored, not used right now
#' @param update_content_length_header (logical) Whether or
#' not to overwrite the `Content-Length` header of the responses to
#' match the length of the response body. Default: `FALSE`
#' @param allow_playback_repeats (logical) Whether or not to
#' allow a single HTTP interaction to be played back multiple times.
#' Default: `FALSE`.
#' @param allow_unused_http_interactions (logical) ignored, not used right now
#' @param exclusive (logical) ignored, not used right now
#' @param preserve_exact_body_bytes (logical) Whether or not
#' to base64 encode the bytes of the requests and responses for
#' this cassette when serializing it. See also `preserve_exact_body_bytes`
#' in [vcr_configure()]. Default: `FALSE`
#' @param clean_outdated_http_interactions (logical) Should outdated interactions
#' be recorded back to file. Default: `FALSE`
#' @return A new `Cassette` object
initialize = function(
name, record, serialize_with, persist_with, match_requests_on,
re_record_interval, tag, tags, update_content_length_header,
allow_playback_repeats, allow_unused_http_interactions,
exclusive, preserve_exact_body_bytes,
clean_outdated_http_interactions) {
self$name <- name
self$root_dir <- vcr_configuration()$dir
self$serialize_with <- serialize_with %||% vcr_c$serialize_with
check_serializer(self$serialize_with)
self$persist_with <- persist_with %||% vcr_c$persist_with
if (!missing(record)) {
self$record <- check_record_mode(record)
}
self$make_dir()
ext <- switch(self$serialize_with, yaml = "yml", json = "json")
self$manfile <- sprintf("%s/%s.%s", path.expand(cassette_path()),
self$name, ext)
if (!file.exists(self$manfile)) cat("\n", file = self$manfile)
if (!missing(match_requests_on)) {
self$match_requests_on <- check_request_matchers(match_requests_on)
}
if (!missing(re_record_interval))
self$re_record_interval <- re_record_interval
if (!missing(tag)) self$tag = tag
if (!missing(tags)) self$tags = tags
if (!missing(update_content_length_header)) {
assert(update_content_length_header, "logical")
self$update_content_length_header = update_content_length_header
}
if (!missing(allow_playback_repeats)) {
assert(allow_playback_repeats, "logical")
self$allow_playback_repeats = allow_playback_repeats
}
if (!missing(allow_unused_http_interactions))
self$allow_unused_http_interactions = allow_unused_http_interactions
if (!missing(exclusive)) self$exclusive = exclusive
if (!missing(preserve_exact_body_bytes)) {
assert(preserve_exact_body_bytes, "logical")
self$preserve_exact_body_bytes <- preserve_exact_body_bytes
}
if (!missing(clean_outdated_http_interactions)) {
self$clean_outdated_http_interactions <- clean_outdated_http_interactions
}
self$make_args()
if (!file.exists(self$manfile)) self$write_metadata()
self$recorded_at <- file.info(self$file())$mtime
self$serializer = serializer_fetch(self$serialize_with, self$name)
self$persister = persister_fetch(self$persist_with, self$serializer$path)
# check for re-record
if (self$should_re_record()) self$record <- "all"
# get previously recorded interactions
## if none pass, if some found, make webmockr stubs
#### first, get previously recorded interactions into `http_interactions_` var
self$http_interactions()
# then do the rest
prev <- self$previously_recorded_interactions()
if (length(prev) > 0) {
stub_previous_request <- function(previous_interaction) {
req <- previous_interaction$request
res <- previous_interaction$response
uripp <- crul::url_parse(req$uri)
m <- self$match_requests_on
.stub_request_with <- function(match_parameters, request) {
.check_match_parameters <- function(mp) {
vmp <- c("method", "uri", "body", "headers", "query")
mp[mp %in% vmp]
}
mp <- .check_match_parameters(match_parameters)
stub_method <- ifelse("method" %in% mp,
req$method,
"any"
)
stub_uri <- ifelse(identical(mp, c("body")),
".+",
ifelse("uri" %in% mp,
req$uri,
"."
)
)
if (stub_uri %in% c(".", ".+")) {
sr <- webmockr::stub_request(method = stub_method,
uri_regex = stub_uri)
} else {
sr <- webmockr::stub_request(method = stub_method,
uri = stub_uri)
}
with_list <- list()
if ("query" %in% mp) {
with_list[["query"]] <- uripp$parameter
}
if ("headers" %in% mp) {
with_list[["headers"]] <- req$headers
}
if ("body" %in% mp) {
with_list[["body"]] <- req$body
}
# if list is empty, skip wi_th
if (length(with_list) != 0) webmockr::wi_th(sr, .list = with_list)
}
.stub_request_with(m, req)
}
invisible(lapply(prev, stub_previous_request))
}
tmp <- list(
self$name,
self$record,
self$serialize_with,
self$persist_with,
self$match_requests_on,
self$update_content_length_header,
self$allow_playback_repeats,
self$preserve_exact_body_bytes
)
init_opts <- compact(
stats::setNames(tmp, c("name", "record", "serialize_with",
"persist_with", "match_requests_on", "update_content_length_header",
"allow_playback_repeats", "preserve_exact_body_bytes")))
self$cassette_opts <- init_opts
init_opts <- paste(names(init_opts), unname(init_opts), sep = ": ",
collapse = ", ")
vcr_log_info(sprintf("Initialized with options: {%s}", init_opts),
vcr_c$log_opts$date)
# create new env for recorded interactions
self$new_recorded_interactions <- list()
# check on write to disk path
if (!is.null(vcr_c$write_disk_path))
dir.create(vcr_c$write_disk_path, showWarnings = FALSE, recursive = TRUE)
# put cassette in vcr_cassettes environment
include_cassette(self)
},
#' @description print method for `Cassette` objects
#' @param x self
#' @param ... ignored
print = function(x, ...) {
cat(paste0("<vcr - Cassette> ", self$name), sep = "\n")
cat(paste0(" Record method: ", self$record), sep = "\n")
cat(paste0(" Serialize with: ", self$serialize_with), sep = "\n")
cat(paste0(" Persist with: ", self$persist_with), sep = "\n")
cat(paste0(" Re-record interval (s): ", self$re_record_interval),
sep = "\n")
cat(paste0(" Clean outdated interactions?: ",
self$clean_outdated_http_interactions), sep = "\n")
cat(paste0(" update_content_length_header: ",
self$update_content_length_header), sep = "\n")
cat(paste0(" allow_playback_repeats: ",
self$allow_playback_repeats), sep = "\n")
cat(paste0(" allow_unused_http_interactions: ",
self$allow_unused_http_interactions), sep = "\n")
cat(paste0(" exclusive: ", self$exclusive), sep = "\n")
cat(paste0(" preserve_exact_body_bytes: ",
self$preserve_exact_body_bytes), sep = "\n")
invisible(self)
},
#' @description run code
#' @param ... pass in things to be evaluated
#' @return various
call_block = function(...) {
tmp <- list(...)
if (length(tmp) == 0) {
stop("`vcr::use_cassette` requires a code block. ",
"If you cannot wrap your code in a block, use ",
"`vcr::insert_cassette` / `vcr::eject_cassette` instead")
}
invisible(force(...))
},
#' @description ejects the current cassette
#' @return self
eject = function() {
on.exit(private$remove_empty_cassette())
self$write_recorded_interactions_to_disk()
# remove cassette from list of current cassettes
rm(list = self$name, envir = vcr_cassettes)
if (!vcr_c$quiet) message("ejecting cassette: ", self$name)
# disable webmockr
webmockr::disable(quiet=vcr_c$quiet)
    # set current cassette name to NULL
vcr__env$current_cassette <- NULL
# return self
return(self)
},
#' @description get the file path for the cassette
#' @return character
file = function() self$manfile,
#' @description is the cassette in recording mode?
#' @return logical
recording = function() {
if (self$record == "none") {
return(FALSE)
} else if (self$record == "once") {
return(self$is_empty())
} else {
return(TRUE)
}
},
#' @description is the cassette on disk empty
#' @return logical
is_empty = function() {
nchar(self$raw_cassette_bytes()) < 1
},
#' @description timestamp the cassette was originally recorded at
#' @return POSIXct date
originally_recorded_at = function() {
as.POSIXct(self$recorded_at, tz = "GMT")
},
#' @description Get a list of the http interactions to record + recorded_with
#' @return list
serializable_hash = function() {
list(
http_interactions = self$interactions_to_record(),
recorded_with = utils::packageVersion("vcr")
)
},
#' @description Get the list of http interactions to record
#' @return list
interactions_to_record = function() {
## FIXME - gotta sort out defining and using hooks better
## just returning exact same input
self$merged_interactions()
# FIXME: not sure what's going on here, so not using yet
#. maybe we don't need this?
# "We dee-dupe the interactions by roundtripping them to/from a hash.
# This is necessary because `before_record` can mutate the interactions."
# lapply(self$merged_interactions(), function(z) {
# VCRHooks$invoke_hook("before_record", z)
# })
},
#' @description Get interactions to record
#' @return list
merged_interactions = function() {
old_interactions <- self$previously_recorded_interactions()
old_interactions <- lapply(old_interactions, function(x) {
HTTPInteraction$new(
request = x$request,
response = x$response,
recorded_at = x$recorded_at)
})
if (self$should_remove_matching_existing_interactions()) {
new_interaction_list <-
HTTPInteractionList$new(self$new_recorded_interactions,
self$match_requests_on)
old_interactions <-
Filter(function(x) {
req <- Request$new()$from_hash(x$request)
!unlist(new_interaction_list$has_interaction_matching(req))
},
old_interactions
)
}
return(c(self$up_to_date_interactions(old_interactions),
self$new_recorded_interactions))
},
#' @description Cleans out any old interactions based on the
#' re_record_interval and clean_outdated_http_interactions settings
#' @param interactions list of http interactions, of class [HTTPInteraction]
#' @return list of interactions to record
up_to_date_interactions = function(interactions) {
if (
!self$clean_outdated_http_interactions && is.null(self$re_record_interval)
) {
return(interactions)
}
Filter(function(z) {
as.POSIXct(z$recorded_at, tz = "GMT") > (as.POSIXct(Sys.time(), tz = "GMT") - self$re_record_interval)
}, interactions)
},
#' @description Should re-record interactions?
#' @return logical
should_re_record = function() {
if (is.null(self$re_record_interval)) return(FALSE)
if (is.null(self$originally_recorded_at())) return(FALSE)
now <- as.POSIXct(Sys.time(), tz = "GMT")
time_comp <- (self$originally_recorded_at() + self$re_record_interval) < now
info <- sprintf(
"previously recorded at: '%s'; now: '%s'; interval: %s seconds",
self$originally_recorded_at(), now, self$re_record_interval)
if (!time_comp) {
vcr_log_info(
sprintf("Not re-recording since the interval has not elapsed (%s).", info),
vcr_c$log_opts$date)
return(FALSE)
} else if (has_internet()) {
vcr_log_info(sprintf("re-recording (%s).", info), vcr_c$log_opts$date)
return(TRUE)
} else {
vcr_log_info(
sprintf("Not re-recording because no internet connection is available (%s).", info),
vcr_c$log_opts$date)
return(FALSE)
}
},
#' @description Is record mode NOT "all"?
#' @return logical
should_stub_requests = function() {
self$record != "all"
},
#' @description Is record mode "all"?
#' @return logical
should_remove_matching_existing_interactions = function() {
self$record == "all"
},
#' @description Get the serializer path
#' @return character
storage_key = function() self$serializer$path,
#' @description Get character string of entire cassette; bytes is a misnomer
#' @return character
raw_cassette_bytes = function() {
file <- self$file()
if (is.null(file)) return("")
tmp <- readLines(file) %||% ""
paste0(tmp, collapse = "")
},
#' @description Create the directory that holds the cassettes, if not present
#' @return no return; creates a directory recursively, if missing
make_dir = function() {
dir.create(path.expand(self$root_dir), showWarnings = FALSE,
recursive = TRUE)
},
#' @description get http interactions from the cassette via the serializer
#' @return list
deserialized_hash = function() {
tmp <- self$serializer$deserialize(self)
if (inherits(tmp, "list")) {
return(tmp)
} else {
stop(tmp, " does not appear to be a valid cassette", call. = FALSE)
}
},
#' @description get all previously recorded interactions
#' @return list
previously_recorded_interactions = function() {
if (nchar(self$raw_cassette_bytes()) > 0) {
tmp <- compact(
lapply(self$deserialized_hash()[["http_interactions"]], function(z) {
response <- VcrResponse$new(
z$response$status,
z$response$headers,
z$response$body$string %||% z$response$body$base64_string,
opts = self$cassette_opts,
disk = z$response$body$file
)
if (self$update_content_length_header)
response$update_content_length_header()
zz <- HTTPInteraction$new(
request = Request$new(z$request$method,
z$request$uri,
z$request$body$string,
z$request$headers,
disk = z$response$body$file),
response = response
)
hash <- zz$to_hash()
if (request_ignorer$should_be_ignored(hash$request)) NULL else hash
}))
return(tmp)
} else {
return(list())
}
},
#' @description write recorded interactions to disk
#' @return nothing returned
write_recorded_interactions_to_disk = function() {
if (!self$any_new_recorded_interactions()) return(NULL)
hash <- self$serializable_hash()
if (length(hash[["http_interactions"]]) == 0) return(NULL)
fun <- self$serializer$serialize()
fun(hash[[1]], self$persister$file_name, self$preserve_exact_body_bytes)
},
#' @description record an http interaction (doesn't write to disk)
    #' @param x a crul or httr response object, with the request at `$request`
#' @return nothing returned
record_http_interaction = function(x) {
int <- self$make_http_interaction(x)
self$http_interactions_$response_for(int$request)
vcr_log_info(sprintf(" Recorded HTTP interaction: %s => %s",
request_summary(int$request), response_summary(int$response)),
vcr_c$log_opts$date)
self$new_recorded_interactions <- c(self$new_recorded_interactions, int)
},
#' @description Are there any new recorded interactions?
#' @return logical
any_new_recorded_interactions = function() {
length(self$new_recorded_interactions) != 0
},
#' @description make list of all options
#' @return nothing returned
make_args = function() {
self$args <- list(
record = self$record,
match_requests_on = self$match_requests_on,
re_record_interval = self$re_record_interval,
tag = self$tag, tags = self$tags,
update_content_length_header = self$update_content_length_header,
allow_playback_repeats = self$allow_playback_repeats,
allow_unused_http_interactions = self$allow_unused_http_interactions,
exclusive = self$exclusive, serialize_with = self$serialize_with,
persist_with = self$persist_with,
preserve_exact_body_bytes = self$preserve_exact_body_bytes
)
},
#' @description write metadata to the cassette
#' @return nothing returned
write_metadata = function() {
aa <- c(name = self$name, self$args)
for (i in seq_along(aa)) {
cat(sprintf("%s: %s", names(aa[i]), aa[i]),
file = sprintf("%s/%s_metadata.yml",
path.expand(cassette_path()), self$name),
sep = "\n", append = TRUE)
}
},
#' @description make [HTTPInteractionList] object, assign to http_interactions_ var
#' @return nothing returned
http_interactions = function() {
self$http_interactions_ <- HTTPInteractionList$new(
interactions = {
if (self$should_stub_requests()) {
self$previously_recorded_interactions()
} else {
list()
}
},
request_matchers = self$match_requests_on
# request_matchers = vcr_configuration()$match_requests_on
)
},
#' @description Make an `HTTPInteraction` object
    #' @param x a crul or httr response object, with the request at `$request`
#' @return an object of class [HTTPInteraction]
make_http_interaction = function(x) {
# content must be raw or character
assert(unclass(x$content), c('raw', 'character'))
new_file_path <- ""
is_disk <- FALSE
if (is.character(x$content)) {
if (file.exists(x$content)) {
is_disk <- TRUE
write_disk_path <- vcr_c$write_disk_path
if (is.null(write_disk_path))
stop("if writing to disk, write_disk_path must be given; ",
"see ?vcr_configure")
new_file_path <- file.path(write_disk_path, basename(x$content))
}
}
request <- Request$new(
method = x$request$method,
uri = x$url,
body = if (inherits(x, "response")) { # httr
bd <- webmockr::pluck_body(x$request)
if (inherits(bd, "raw")) rawToChar(bd) else bd
} else { # crul
webmockr::pluck_body(x$request)
},
headers = if (inherits(x, "response")) {
as.list(x$request$headers)
} else {
x$request_headers
},
opts = self$cassette_opts,
disk = is_disk
)
response <- VcrResponse$new(
status = if (inherits(x, "response")) {
c(list(status_code = x$status_code), httr::http_status(x))
} else unclass(x$status_http()),
headers = if (inherits(x, "response")) x$headers else x$response_headers,
body = if (is.raw(x$content)) {
if (can_rawToChar(x$content)) rawToChar(x$content) else x$content
} else {
stopifnot(inherits(unclass(x$content), "character"))
if (file.exists(x$content)) {
# calculate new file path in fixtures/
# copy file into fixtures/file_cache/
# don't move b/c don't want to screw up first use before using
# cached request
file.copy(x$content, write_disk_path,
overwrite = TRUE, recursive = TRUE) # copy the file
new_file_path
# raw(0)
} else {
x$content
}
},
http_version = if (inherits(x, "response")) {
x$all_headers[[1]]$version
} else {
x$response_headers$status
},
opts = self$cassette_opts,
disk = is_disk
)
if (self$update_content_length_header)
response$update_content_length_header()
HTTPInteraction$new(request = request, response = response)
},
#' @description Make a crul response object
#' @return a crul response
serialize_to_crul = function() {
if (length(self$deserialized_hash()) != 0) {
intr <- self$deserialized_hash()[[1]][[1]]
} else {
intr <- tryCatch(
self$previously_recorded_interactions()[[1]],
error = function(e) e
)
if (inherits(intr, "error")) {
intr <- tryCatch(
self$new_recorded_interactions[[1]],
error = function(e) e
)
if (inherits(intr, "error")) {
stop("no requests found to construct a crul response")
}
}
}
# request
req <- webmockr::RequestSignature$new(
method = intr$request$method,
uri = intr$request$uri,
options = list(
body = intr$request$body %||% NULL,
headers = intr$request$headers %||% NULL,
proxies = NULL,
auth = NULL
)
)
# response
resp <- webmockr::Response$new()
resp$set_url(intr$request$uri)
bod <- intr$response$body
resp$set_body(if ("string" %in% names(bod)) bod$string else bod)
resp$set_request_headers(intr$request$headers)
resp$set_response_headers(intr$response$headers)
resp$set_status(status = intr$response$status$status_code %||% 200)
# generate crul response
webmockr::build_crul_response(req, resp)
}
),
private = list(
remove_empty_cassette = function() {
if (!any(nzchar(readLines(self$file())))) {
unlink(self$file(), force = TRUE)
if (vcr_c$warn_on_empty_cassette)
warning(empty_cassette_message(self$name), call. = FALSE)
}
}
)
)
empty_cassette_message <- function(x) {
c(
sprintf("Empty cassette (%s) deleted; consider the following:\n", x),
" - If an error occurred resolve that first, then check:\n",
" - vcr only supports crul & httr; requests w/ curl, download.file, etc. are not supported\n",
" - If you are using crul/httr, are you sure you made an HTTP request?\n")
}
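# Illustrative usage (not part of the package source): in practice this class
# is driven through use_cassette() rather than instantiated directly, e.g.:
#
#   library(vcr)
#   vcr_configure(dir = tempdir())
#   use_cassette("httpbin-get", {
#     res <- crul::HttpClient$new("https://httpbin.org")$get("get")
#   })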
|
/scratch/gouwar.j/cran-all/cranData/vcr/R/cassette_class.R
|
#' List cassettes, get current cassette, etc.
#'
#' @export
#' @param on_disk (logical) Check for cassettes on disk + cassettes in session
#' (`TRUE`), or check for only cassettes in session (`FALSE`). Default: `TRUE`
#' @param verb (logical) verbose messages
#' @details
#'
#' - `cassettes()`: returns cassettes found in your R session, you can toggle
#' whether we pull from those on disk or not
#' - `current_cassette()`: returns an empty list when no cassettes are in use,
#' while it returns the current cassette (a `Cassette` object) when one is
#' in use
#' - `cassette_path()`: just gives you the current directory path where
#' cassettes will be stored
#'
#' @examples
#' vcr_configure(dir = tempdir())
#'
#' # list all cassettes
#' cassettes()
#' cassettes(on_disk = FALSE)
#'
#' # list the currently active cassette
#' insert_cassette("stuffthings")
#' current_cassette()
#' eject_cassette()
#'
#' cassettes()
#' cassettes(on_disk = FALSE)
#'
#' # list the path to cassettes
#' cassette_path()
#' vcr_configure(dir = file.path(tempdir(), "foo"))
#' cassette_path()
#'
#' vcr_configure_reset()
cassettes <- function(on_disk = TRUE, verb = FALSE){
# combine cassettes on disk with cassettes in session
if (on_disk) {
out <- unlist(list(
lapply(get_cassette_data_paths(), read_cassette_meta, verbose = verb),
cassettes_session()
), FALSE)
out[!duplicated(names(out))]
} else {
cassettes_session()
}
}
#' @export
#' @rdname cassettes
current_cassette <- function() {
tmp <- last(cassettes(FALSE))
if (length(tmp) == 0) return(list())
tmp <- if (length(tmp) == 1) tmp[[1]] else tmp
return(tmp)
}
#' @export
#' @rdname cassettes
cassette_path <- function() vcr_c$dir
cassette_exists <- function(x) x %in% get_cassette_names()
read_cassette_meta <- function(x, verbose = TRUE, ...){
tmp <- yaml::yaml.load_file(x, ...)
if (!inherits(tmp, "list") | !"http_interactions" %in% names(tmp)) {
if (verbose) message(x, " not found, missing data, or malformed")
return(list())
} else {
structure(tmp$http_interactions[[1]], class = "cassette")
}
}
get_cassette_meta_paths <- function(){
metafiles <- names(grep("metadata", vapply(cassette_files(), basename, ""),
value = TRUE))
as.list(stats::setNames(metafiles, unname(sapply(metafiles, function(x)
yaml::yaml.load_file(x)$name))))
}
cassette_files <- function(){
path <- path.expand(cassette_path())
check_create_path(path)
list.files(path, full.names = TRUE)
}
get_cassette_path <- function(x){
if ( x %in% get_cassette_names() ) get_cassette_data_paths()[[x]]
}
is_path <- function(x) file.exists(path.expand(x))
get_cassette_names <- function(){
tmp <- vcr_files()
if (length(tmp) == 0) return("")
sub("\\.yml|\\.yaml|\\.json", "", basename(tmp))
}
vcr_files <- function() {
# remove some file types
files <- names(grep("metadata|rs-graphics|_pkgdown|travis|appveyor",
vapply(cassette_files(), basename, ""),
invert = TRUE, value = TRUE))
# include only certain file types
tokeep <- switch(vcr_c$serialize_with, yaml = "yml|yaml", json = "json")
names(grep(tokeep, vapply(cassette_files(), basename, ""),
value = TRUE))
}
get_cassette_data_paths <- function() {
files <- vcr_files()
if (length(files) == 0) return(list())
as.list(stats::setNames(files, get_cassette_names()))
}
check_create_path <- function(x){
  if (!file.exists(x)) dir.create(x, recursive = TRUE, showWarnings = FALSE)
}
cassettes_session <- function(x) {
xx <- ls(envir = vcr_cassettes)
if (length(xx) > 0) {
stats::setNames(lapply(xx, get, envir = vcr_cassettes), xx)
} else {
list()
}
}
include_cassette <- function(cassette) {
# assign cassette to bucket of cassettes in session
assign(cassette$name, cassette, envir = vcr_cassettes)
}
|
/scratch/gouwar.j/cran-all/cranData/vcr/R/cassettes.R
|
#' Check cassette names
#'
#' @export
#' @param pattern (character) regex pattern for file paths to check.
#' this is done inside of `tests/testthat/`. default: "test-"
#' @param behavior (character) "stop" (default) or "warning". if "warning",
#' we use `immediate.=TRUE` so the warning happens at the top of your
#' tests rather than you seeing it after tests have run (as would happen
#' by default)
#' @param allowed_duplicates (character) cassette names that can be duplicated
#' @includeRmd man/rmdhunks/cassette-names.Rmd details
check_cassette_names <- function(pattern = "test-", behavior = "stop",
allowed_duplicates = NULL) {
assert(allowed_duplicates, "character")
files <- list.files(".", pattern = pattern, full.names = TRUE)
if (length(files) == 0) return()
cassette_names <- function(x) {
tmp <- parse(x, keep.source = TRUE)
df <- utils::getParseData(tmp)
row.names(df) = NULL
z <- as.numeric(row.names(df[df$text == "use_cassette", ])) + 2
gsub("\"", "", df[z, "text"])
}
nms <- stats::setNames(lapply(files, cassette_names), files)
cnms <- unname(unlist(nms))
if (!is.null(allowed_duplicates)) {
cnms <- cnms[!cnms %in% allowed_duplicates]
}
if (any(duplicated(cnms))) {
dups <- unique(cnms[duplicated(cnms)])
fdups <- c()
for (i in seq_along(dups)) {
matched <- lapply(nms, function(w) dups[i] %in% w)
fdups[i] <- sprintf("%s (found in %s)", dups[i],
paste0(basename(names(nms[unlist(matched)])), collapse = ", ")
)
}
mssg <- c("you should not have duplicated cassette names:",
paste0("\n ", paste0(fdups, collapse = "\n ")))
switch(behavior,
stop = stop(mssg, call. = FALSE),
warning = warning(mssg, call. = FALSE, immediate. = TRUE)
)
}
}
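# Illustrative usage (not part of the package source): typically called from a
# testthat helper file so duplicate cassette names are caught before the suite
# runs; the allowed name below is a made-up example.
#
#   check_cassette_names(behavior = "warning",
#     allowed_duplicates = "shared-fixture")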
|
/scratch/gouwar.j/cran-all/cranData/vcr/R/check_cassette_names.R
|
#' Global Configuration Options
#'
#' Configurable options that define vcr's default behavior.
#'
#' @param ... configuration settings used to override defaults. See below for a
#' complete list of valid arguments.
#'
#' @section Configurable settings:
#'
#' ## vcr options
#'
#' ### File locations
#'
#' - `dir` Cassette directory
#' - `write_disk_path` (character) path to write files to
#' for any requests that write responses to disk. by default this parameter
#' is `NULL`. For testing a package, you'll probably want this path to
#' be in your `tests/` directory, perhaps next to your cassettes
#' directory, e.g., where your cassettes are in `tests/fixtures`, your
#' files from requests that write to disk are in `tests/files`.
#' If you want to ignore these files in your installed package,
#' add them to `.Rinstignore`. If you want these files ignored on build
#' then add them to `.Rbuildignore` (though if you do, tests that depend
#' on these files probably will not work because they won't be found; so
#' you'll likely have to skip the associated tests as well).
#'
#' ### Contexts
#'
#' - `turned_off` (logical) VCR is turned on by default. Default:
#' `FALSE`
#' - `allow_unused_http_interactions` (logical) Default: `TRUE`
#' - `allow_http_connections_when_no_cassette` (logical) Determines how vcr
#' treats HTTP requests that are made when no vcr cassette is in use. When
#' `TRUE`, requests made when there is no vcr cassette in use will be allowed.
#' When `FALSE` (default), an [UnhandledHTTPRequestError] error will be raised
#' for any HTTP request made when there is no cassette in use
#'
#' ### Filtering
#'
#' - `ignore_hosts` (character) Vector of hosts to ignore. e.g., localhost, or
#' google.com. These hosts are ignored and real HTTP requests allowed to go
#' through
#' - `ignore_localhost` (logical) Default: `FALSE`
#' - `ignore_request` List of requests to ignore. NOT USED RIGHT NOW, sorry
#' - `filter_sensitive_data` named list of values to replace. Format is:
#' ```
#' list(thing_to_replace_it_with = thing_to_replace)
#' ```
#' We replace all instances of `thing_to_replace` with
#' `thing_to_replace_it_with`. Uses [gsub()] internally with `fixed=TRUE`,
#' so matching is exact rather than regex based. Before recording (writing to
#' a cassette) we do the replacement, and when reading from the cassette we do
#' the reverse replacement to get back to the real data. The before-record
#' replacement happens in the internal function `write_interactions()`, while
#' the before-playback replacement happens in the internal function
#' `YAML$deserialize()`
#'
#' - `filter_sensitive_data_regex` named list of values to replace. Follows
#' `filter_sensitive_data` format, except uses `fixed=FALSE` in the [gsub()]
#' function call; this means that the value in `thing_to_replace` is a regex
#' pattern.
#'
#' - `filter_request_headers` (character/list) **request** headers to filter.
#' A character vector of request headers to remove - the headers will not be
#' recorded to disk. Alternatively, a named list similar to
#' `filter_sensitive_data` instructing vcr with what value to replace the
#' real value of the request header.
#' - `filter_response_headers` (named list) **response** headers to filter.
#' A character vector of response headers to remove - the headers will not be
#' recorded to disk. Alternatively, a named list similar to
#' `filter_sensitive_data` instructing vcr with what value to replace the
#' real value of the response header.
#' - `filter_query_parameters` (named list) query parameters to filter.
#' A character vector of query parameters to remove - the query parameters
#' will not be recorded to disk. Alternatively, a named list similar to
#' `filter_sensitive_data` instructing vcr with what value to replace the
#' real value of the query parameter.
#'
#' ### Errors
#'
#' - `verbose_errors` (logical) Should errors be more verbose when cassette
#'   recording/usage fails? Default: `FALSE`, that is, less verbose
#'   errors. If `TRUE`, error messages will include more details
#'   about what went wrong and suggest possible solutions. For testing
#' in an interactive R session, if `verbose_errors=FALSE`, you can run
#' `vcr_last_error()` to get the full error. If in non-interactive mode,
#' which most users will be in when running the entire test suite for a
#' package, you can set an environment variable (`VCR_VERBOSE_ERRORS`)
#' to toggle this setting (e.g.,
#' `Sys.setenv(VCR_VERBOSE_ERRORS=TRUE); devtools::test()`)
#'
#' ### Internals
#'
#' - `cassettes` (list) don't use
#' - `linked_context` (logical) linked context
#' - `uri_parser` the uri parser, default: `crul::url_parse()`
#'
#' ### Logging
#'
#' - `log` (logical) should we log important vcr things? Default: `FALSE`
#' - `log_opts` (list) Additional logging options:
#' - 'file' either `"console"` or a file path to log to
#' - 'log_prefix' default: "Cassette". We insert the cassette name after
#' that prefix, then the rest of the message.
#' - More to come...
#'
#' ## Cassette Options
#'
#' These settings can be configured globally, using `vcr_configure()`, or
#' locally, using either `use_cassette()` or `insert_cassette()`. Global
#' settings are applied to *all* cassettes but are overridden by settings
#' defined locally for individual cassettes.
#'
#' - `record` (character) One of 'all', 'none', 'new_episodes', or 'once'.
#' See [recording]
#' - `match_requests_on` vector of matchers. Default: (`method`, `uri`)
#' See [request-matching] for details.
#' - `serialize_with`: (character) "yaml" or "json". Note that you can have
#' multiple cassettes with the same name as long as they use different
#' serializers; so if you only want one cassette for a given cassette name,
#' make sure to not switch serializers, or clean up files you no longer need.
#' - `json_pretty`: (logical) Should JSON be newline separated so it's easier
#'   to read, or have newlines removed to save disk space? Default: `FALSE`
#' - `persist_with` (character) only option is "FileSystem"
#' - `preserve_exact_body_bytes` (logical) Should the exact body bytes be
#'   preserved (stored base64-encoded in the cassette)? Default: `FALSE`
#' - `re_record_interval` (numeric) When given, the cassette will be
#' re-recorded at the given interval, in seconds.
#' - `clean_outdated_http_interactions` (logical) Should outdated interactions
#'   be recorded back to file? Default: `FALSE`
#' - `quiet` (logical) Suppress any messages from both vcr and webmockr.
#' Default: `TRUE`
#' - `warn_on_empty_cassette` (logical) Should a warning be thrown when an
#' empty cassette is detected? Empty cassettes are cleaned up (deleted) either
#' way. This option only determines whether a warning is thrown or not.
#'   Default: `TRUE`
#'
#' @examples
#' vcr_configure(dir = tempdir())
#' vcr_configure(dir = tempdir(), record = "all")
#' vcr_configuration()
#' vcr_config_defaults()
#' vcr_configure(dir = tempdir(), ignore_hosts = "google.com")
#' vcr_configure(dir = tempdir(), ignore_localhost = TRUE)
#'
#'
#' # logging
#' vcr_configure(dir = tempdir(), log = TRUE,
#' log_opts = list(file = file.path(tempdir(), "vcr.log")))
#' vcr_configure(dir = tempdir(), log = TRUE, log_opts = list(file = "console"))
#' vcr_configure(dir = tempdir(), log = TRUE,
#' log_opts = list(
#' file = file.path(tempdir(), "vcr.log"),
#' log_prefix = "foobar"
#' ))
#' vcr_configure(dir = tempdir(), log = FALSE)
#'
#' # filter sensitive data
#' vcr_configure(dir = tempdir(),
#' filter_sensitive_data = list(foo = "<bar>")
#' )
#' vcr_configure(dir = tempdir(),
#' filter_sensitive_data = list(foo = "<bar>", hello = "<world>")
#' )
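#'
#' # filter sensitive data with a regex: values are treated as regex
#' # patterns (the pattern below is purely illustrative)
#' vcr_configure(dir = tempdir(),
#'   filter_sensitive_data_regex = list(foo = "token=[0-9A-Za-z-]+")
#' )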
#' @export
vcr_configure <- function(...) {
params <- list(...)
invalid <- !names(params) %in% vcr_c$fields()
if (any(invalid)) {
warning(
"The following configuration parameters are not valid:",
sprintf("\n * %s", params[invalid]),
call. = FALSE
)
params <- params[!invalid]
}
if (length(params) == 0) return(vcr_c)
# TODO: Is this still the right place to change these settings?
ignore_hosts <- params$ignore_hosts
ignore_localhost <- params$ignore_localhost %||% FALSE
if (!is.null(ignore_hosts) || ignore_localhost) {
x <- RequestIgnorer$new()
if (!is.null(ignore_hosts)) x$ignore_hosts(hosts = ignore_hosts)
if (ignore_localhost) x$ignore_localhost()
}
for (i in seq_along(params)) {
vcr_c[[names(params)[i]]] <- params[[i]]
}
return(vcr_c)
}
#' @export
#' @rdname vcr_configure
vcr_configure_reset <- function() vcr_c$reset()
#' @export
#' @rdname vcr_configure
vcr_configuration <- function() vcr_c
#' @export
#' @rdname vcr_configure
vcr_config_defaults <- function() VCRConfig$new()$as_list()
VCRConfig <- R6::R6Class(
"VCRConfig",
private = list(
.dir = NULL,
.record = NULL,
.match_requests_on = NULL,
.allow_unused_http_interactions = NULL,
.serialize_with = NULL,
.json_pretty = NULL,
.persist_with = NULL,
.ignore_hosts = NULL,
.ignore_localhost = NULL,
.ignore_request = NULL,
.uri_parser = NULL,
.preserve_exact_body_bytes = NULL,
.turned_off = NULL,
.re_record_interval = NULL,
.clean_outdated_http_interactions = NULL,
.allow_http_connections_when_no_cassette = NULL,
.cassettes = NULL,
.linked_context = NULL,
.log = NULL,
.log_opts = NULL,
.filter_sensitive_data = NULL,
.filter_sensitive_data_regex = NULL,
.filter_request_headers = NULL,
.filter_response_headers = NULL,
.filter_query_parameters = NULL,
.write_disk_path = NULL,
.verbose_errors = NULL,
.quiet = NULL,
.warn_on_empty_cassette = NULL
),
active = list(
dir = function(value) {
if (missing(value)) return(private$.dir)
private$.dir <- value
},
record = function(value) {
if (missing(value)) return(private$.record)
private$.record <- check_record_mode(value)
},
match_requests_on = function(value) {
if (missing(value)) return(private$.match_requests_on)
private$.match_requests_on <- check_request_matchers(value)
},
allow_unused_http_interactions = function(value) {
if (missing(value)) return(private$.allow_unused_http_interactions)
private$.allow_unused_http_interactions <- value
},
serialize_with = function(value) {
if (missing(value)) return(private$.serialize_with)
private$.serialize_with <- value
},
json_pretty = function(value) {
if (missing(value)) return(private$.json_pretty)
private$.json_pretty <- value
},
persist_with = function(value) {
if (missing(value)) return(private$.persist_with)
private$.persist_with <- value
},
ignore_hosts = function(value) {
if (missing(value)) return(private$.ignore_hosts)
private$.ignore_hosts <- assert(value, "character")
},
ignore_localhost = function(value) {
if (missing(value)) return(private$.ignore_localhost)
private$.ignore_localhost <- assert(value, "logical")
},
ignore_request = function(value) {
if (missing(value)) return(private$.ignore_request)
private$.ignore_request <- value
},
uri_parser = function(value) {
if (missing(value)) return(private$.uri_parser)
private$.uri_parser <- value
},
preserve_exact_body_bytes = function(value) {
if (missing(value)) return(private$.preserve_exact_body_bytes)
private$.preserve_exact_body_bytes <- value
},
turned_off = function(value) {
if (missing(value)) return(private$.turned_off)
private$.turned_off <- value
},
re_record_interval = function(value) {
if (missing(value)) return(private$.re_record_interval)
private$.re_record_interval <- value
},
clean_outdated_http_interactions = function(value) {
if (missing(value)) return(private$.clean_outdated_http_interactions)
private$.clean_outdated_http_interactions <- value
},
allow_http_connections_when_no_cassette = function(value) {
if (missing(value)) return(private$.allow_http_connections_when_no_cassette)
private$.allow_http_connections_when_no_cassette <- value
},
cassettes = function(value) {
if (missing(value)) return(private$.cassettes)
private$.cassettes <- value
},
linked_context = function(value) {
if (missing(value)) return(private$.linked_context)
private$.linked_context <- value
},
log = function(value) {
if (missing(value)) return(private$.log)
private$.log <- assert(value, "logical")
},
log_opts = function(value) {
if (missing(value)) return(private$.log_opts)
log_opts <- assert(value, "list")
if (length(log_opts) > 0) {
if ("file" %in% names(log_opts)) {
assert(log_opts$file, "character")
if (private$.log) vcr_log_file(log_opts$file)
}
if ("log_prefix" %in% names(log_opts)) {
assert(log_opts$log_prefix, "character")
}
if ("date" %in% names(log_opts)) {
assert(log_opts$date, "logical")
}
}
# add missing log options
log_opts <- merge_list(
log_opts,
list(file = "vcr.log", log_prefix = "Cassette", date = TRUE)
)
private$.log_opts <- log_opts
},
filter_sensitive_data = function(value) {
if (missing(value)) return(private$.filter_sensitive_data)
private$.filter_sensitive_data <- assert(value, "list")
},
filter_sensitive_data_regex = function(value) {
if (missing(value)) return(private$.filter_sensitive_data_regex)
private$.filter_sensitive_data_regex <- assert(value, "list")
},
filter_request_headers = function(value) {
if (missing(value)) return(private$.filter_request_headers)
if (is.character(value)) value <- as.list(value)
private$.filter_request_headers <- assert(value, "list")
},
filter_response_headers = function(value) {
if (missing(value)) return(private$.filter_response_headers)
if (is.character(value)) value <- as.list(value)
private$.filter_response_headers <- assert(value, "list")
},
filter_query_parameters = function(value) {
if (missing(value)) return(private$.filter_query_parameters)
if (is.character(value)) value <- as.list(value)
lapply(value, function(w) {
if (!length(w) %in% 0:2)
stop("filter query values must be of length 1 or 2",
call. = FALSE)
})
private$.filter_query_parameters <- assert(value, "list")
},
write_disk_path = function(value) {
if (missing(value)) return(private$.write_disk_path)
private$.write_disk_path <- value
},
verbose_errors = function(value) {
env_ve <- vcr_env_verbose_errors()
if (missing(value) && is.null(env_ve)) return(private$.verbose_errors)
value <- env_ve %||% value
private$.verbose_errors <- assert(value, "logical")
},
quiet = function(value) {
if (missing(value)) return(private$.quiet)
private$.quiet <- assert(value, "logical")
},
warn_on_empty_cassette = function(value) {
if (missing(value)) return(private$.warn_on_empty_cassette)
private$.warn_on_empty_cassette <- assert(value, "logical")
}
),
public = list(
initialize = function(
dir = ".",
record = "once",
match_requests_on = c("method", "uri"),
allow_unused_http_interactions = TRUE,
serialize_with = "yaml",
json_pretty = FALSE,
persist_with = "FileSystem",
ignore_hosts = NULL,
ignore_localhost = FALSE,
ignore_request = NULL,
uri_parser = "crul::url_parse",
preserve_exact_body_bytes = FALSE,
turned_off = FALSE,
re_record_interval = NULL,
clean_outdated_http_interactions = FALSE,
allow_http_connections_when_no_cassette = FALSE,
cassettes = list(),
linked_context = NULL,
log = FALSE,
log_opts = list(file = "vcr.log", log_prefix = "Cassette", date = TRUE),
filter_sensitive_data = NULL,
filter_sensitive_data_regex = NULL,
filter_request_headers = NULL,
filter_response_headers = NULL,
filter_query_parameters = NULL,
write_disk_path = NULL,
verbose_errors = FALSE,
quiet = TRUE,
warn_on_empty_cassette = TRUE
) {
self$dir <- dir
self$record <- record
self$match_requests_on <- match_requests_on
self$allow_unused_http_interactions <- allow_unused_http_interactions
self$serialize_with <- serialize_with
self$json_pretty <- json_pretty
self$persist_with <- persist_with
self$ignore_hosts <- ignore_hosts
self$ignore_localhost <- ignore_localhost
self$ignore_request <- ignore_request
self$uri_parser <- uri_parser
self$preserve_exact_body_bytes <- preserve_exact_body_bytes
self$turned_off <- turned_off
self$re_record_interval <- re_record_interval
self$clean_outdated_http_interactions <- clean_outdated_http_interactions
self$allow_http_connections_when_no_cassette <- allow_http_connections_when_no_cassette
self$cassettes <- cassettes
self$linked_context <- linked_context
self$log <- log
self$log_opts <- log_opts
self$filter_sensitive_data <- filter_sensitive_data
self$filter_sensitive_data_regex <- filter_sensitive_data_regex
      self$filter_request_headers <- filter_request_headers
      self$filter_response_headers <- filter_response_headers
      self$filter_query_parameters <- filter_query_parameters
self$write_disk_path <- write_disk_path
self$verbose_errors <- verbose_errors
self$quiet <- quiet
self$warn_on_empty_cassette <- warn_on_empty_cassette
},
# reset all settings to defaults
reset = function() self$initialize(),
    # return the names of configurable settings
fields = function() sub("^\\.", "", names(private)),
# return current configuration as a list
as_list = function() {
setNames(mget(names(private), private), self$fields())
},
print = function(...) {
cat("<vcr configuration>", sep = "\n")
cat(paste0(" Cassette Dir: ", private$.dir), sep = "\n")
cat(paste0(" Record: ", private$.record), sep = "\n")
cat(paste0(" Serialize with: ", private$.serialize_with), sep = "\n")
cat(paste0(" URI Parser: ", private$.uri_parser), sep = "\n")
cat(paste0(" Match Requests on: ",
pastec(private$.match_requests_on)), sep = "\n")
cat(paste0(" Preserve Bytes?: ",
private$.preserve_exact_body_bytes), sep = "\n")
logloc <- if (private$.log) sprintf(" (%s)", private$.log_opts$file) else ""
cat(paste0(" Logging?: ", private$.log, logloc), sep = "\n")
cat(paste0(" ignored hosts: ", pastec(private$.ignore_hosts)), sep = "\n")
cat(paste0(" ignore localhost?: ", private$.ignore_localhost), sep = "\n")
cat(paste0(" Write disk path: ", private$.write_disk_path), sep = "\n")
invisible(self)
}
)
)
pastec <- function(x) paste0(x, collapse = ", ")
vcr_env_verbose_errors <- function() {
var <- "VCR_VERBOSE_ERRORS"
x <- Sys.getenv(var, "")
if (x != "") {
x <- as.logical(x)
vcr_env_var_check(x, var)
x
}
}
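# A commented sketch of toggling verbose errors via the environment variable
# (as.logical() understands "true"/"TRUE"/"T" and friends):
# Sys.setenv(VCR_VERBOSE_ERRORS = "true")
# vcr_env_verbose_errors() # => TRUE
# Sys.unsetenv("VCR_VERBOSE_ERRORS")
# vcr_env_verbose_errors() # => NULL (env var unset)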
# ---- end of R/configuration.R ----
#' Eject a cassette
#'
#' @export
#' @param cassette (character) a single cassette name to eject; see `name`
#' parameter definition in [insert_cassette()] for cassette name rules
#' @param options (list) a list of options to apply to the eject process
#' @param skip_no_unused_interactions_assertion (logical) If `TRUE`, this will
#' skip the "no unused HTTP interactions" assertion enabled by the
#' `allow_unused_http_interactions = FALSE` cassette option. This is intended
#' for use when your test has had an error, but your test framework has
#' already handled it - IGNORED FOR NOW
#' @return The ejected cassette if there was one
#' @seealso [use_cassette()], [insert_cassette()]
#' @examples
#' vcr_configure(dir = tempdir())
#' insert_cassette("hello")
#' (x <- current_cassette())
#'
#' # by default does current cassette
#' x <- eject_cassette()
#' x
#' # can also select by cassette name
#' # eject_cassette(cassette = "hello")
eject_cassette <- function(cassette = NULL, options = list(),
skip_no_unused_interactions_assertion = NULL) {
on.exit(webmockr::webmockr_disable_net_connect(), add=TRUE)
if (is.null(cassette)) {
# current cassette
cas <- current_cassette()
if (length(cas) == 0) stp("no cassette in use currently")
} else {
if (!cassette_exists(cassette)) stp("cassette '", cassette, "' not found")
cas <- cassettes(FALSE)[[cassette]]
if (is.null(cas)) stp("cassette '", cassette, "' not found")
}
# eject it
cas$eject()
}
# ---- end of R/eject_cassette.R ----
error_suggestions <- list(
  use_new_episodes = list(
    text = c("You can use the 'new_episodes' record mode to allow vcr to",
      "record this new request to the existing cassette"),
    url = "https://books.ropensci.org/http-testing/record-modes.html#new_episodes"
  ),
delete_cassette_for_once = list(
text = c("The current record mode ('once') does not allow new requests to be recorded",
"to a previously recorded cassette. You can delete the cassette file and re-run",
"your tests to allow the cassette to be recorded with this request"),
url = "https://books.ropensci.org/http-testing/record-modes.html#once"
),
  deal_with_none = list(
    text = c("The current record mode ('none') does not allow requests to be recorded. You",
      "can temporarily change the record mode to 'once', delete the cassette file,",
      "and re-run your tests to allow the cassette to be recorded with this request"),
    url = "https://books.ropensci.org/http-testing/record-modes.html#none"
  ),
use_a_cassette = list(
text = c("If you want vcr to record this request and play it back during future test",
"runs, you should wrap your test (or this portion of your test) in a",
"`vcr::use_cassette` block"),
url = "https://books.ropensci.org/http-testing/intro"
),
allow_http_connections_when_no_cassette = list(
text = c("If you only want vcr to handle requests made while a cassette is in use,",
"configure `allow_http_connections_when_no_cassette = TRUE`. vcr will",
"ignore this request since it is made when there is no cassette"),
url = "https://books.ropensci.org/http-testing/vcr-configuration#allow-http-connections-when-no-cassette"
),
ignore_request = list(
text = c("If you want vcr to ignore this request (and others like it), you can",
"set an `ignore_request` function"),
url = "https://books.ropensci.org/http-testing/vcr-configuration#config-ignore-requests"
),
allow_playback_repeats = list(
text = c("The cassette contains an HTTP interaction that matches this request,",
"but it has already been played back. If you wish to allow a single HTTP",
"interaction to be played back multiple times, set the `allow_playback_repeats`",
"cassette option"),
url = "https://books.ropensci.org/http-testing/request-matching#playback-repeats"
),
match_requests_on = list(
text = c("The cassette contains %s not been",
"played back. If your request is non-deterministic, you may need to",
"change your 'match_requests_on' cassette option to be more lenient",
"or use a custom request matcher to allow it to match"),
url = "https://books.ropensci.org/http-testing/request-matching"
),
try_debug_logger = list(
text = c("If you're surprised vcr is raising this error",
"and want insight about how vcr attempted to handle the request,",
"you can use 'logging' to see more details"),
url = "https://books.ropensci.org/http-testing/debugging-your-tests-that-use-vcr.html#logging-1"
)
)
# ---- end of R/error_suggestions.R ----
#' @title UnhandledHTTPRequestError
#' @description Handle http request errors
#' @export
#' @details How this error class is used:
#' If `record="once"` we trigger this.
#'
#' Users can use vcr in the context of both [use_cassette()]
#' and [insert_cassette()]
#'
#' For the former, all requests go through the call_block
#' But for the latter, requests go through webmockr.
#'
#' Where is one place where we can put UnhandledHTTPRequestError
#' that will handle both use_cassette and insert_cassette?
#'
#' @section Error situations where this is invoked:
#'
#' - `record=once` AND there's a new request that doesn't match
#' the one in the cassette on disk
#' - in webmockr: if no stub found and there are recorded
#' interactions on the cassette, and record = once, then
#' error with UnhandledHTTPRequestError
#' - but if record != once, then allow it, unless record == none
#' - others?
#'
#' @examples
#' vcr_configure(dir = tempdir())
#' cassettes()
#' insert_cassette("turtle")
#' request <- Request$new("post", 'https://eu.httpbin.org/post?a=5',
#' "", list(foo = "bar"))
#'
#' err <- UnhandledHTTPRequestError$new(request)
#' err$request_description()
#' err$current_matchers()
#' err$match_request_on_headers()
#' err$match_request_on_body()
#' err$formatted_headers()
#' cat(err$formatted_headers(), "\n")
#' cat(err$cassettes_description(), "\n")
#' cat(err$cassettes_list(), "\n")
#' err$formatted_suggestions()
#' cat(err$format_bullet_point('foo bar', 1), "\n")
#' err$suggestion_for("use_new_episodes")
#' err$suggestions()
#' err$no_cassette_suggestions()
#' err$record_mode_suggestion()
#' err$has_used_interaction_matching()
#' err$match_requests_on_suggestion()
#'
#' # err$construct_message()
#'
#' # cleanup
#' eject_cassette("turtle")
#' unlink(tempdir())
UnhandledHTTPRequestError <- R6::R6Class(
"UnhandledHTTPRequestError",
public = list(
#' @field request a [Request] object
request = NULL,
#' @field cassette a cassette name
cassette = NULL,
#' @description Create a new `UnhandledHTTPRequestError` object
#' @param request (Request) a [Request] object
#' @return A new `UnhandledHTTPRequestError` object
initialize = function(request) {
assert(request, "Request")
self$request <- request
self$cassette <- current_cassette()
},
#' @description Run unhandled request handling
#' @return various
run = function() {
any_errors <- FALSE
if (!is.null(self$cassette) && !identical(self$cassette, list())) {
if (self$cassette$record %in% c("once", "none")) {
any_errors <- TRUE
}
} else {
if (identical(self$cassette, list())) any_errors <- TRUE
}
if (any_errors) self$construct_message()
return(invisible())
},
#' @description Construct and execute stop message for why request failed
#' @return a stop message
construct_message = function() {
# create formatted_suggestions for later use
vcr__env$last_error <- list()
vcr__env$last_error$request_description <- self$request_description()
vcr__env$last_error$cassettes_description <- self$cassettes_description()
vcr__env$last_error$formatted_suggestion <- self$formatted_suggestions()
mssg <- paste0(
c("", "", paste0(rep("=", 80), collapse = ""),
"An HTTP request has been made that vcr does not know how to handle:",
self$request_description(),
if (vcr_c$verbose_errors) self$cassettes_description() else self$cassettes_list(),
if (vcr_c$verbose_errors) vcr__env$last_error$formatted_suggestion else self$get_help(),
paste0(rep("=", 80), collapse = ""), "", ""),
collapse = "\n")
orig_warn_len <- getOption("warning.length")
on.exit(options(warning.length = orig_warn_len))
options(warning.length = 2000)
stop(mssg, call. = FALSE)
},
#' @description construct request description
#' @return character
request_description = function() {
lines <- c()
lines <- c(lines,
paste(
toupper(self$request$method),
sensitive_remove(self$request$uri), # remove sensitive data
sep = " "))
if (self$match_request_on_headers()) {
lines <- c(lines,
sprintf(" Headers:\n%s",
sensitive_remove(self$formatted_headers())
)
)
}
if (self$match_request_on_body()) {
lines <- c(lines, sprintf(" Body: %s", self$request$body))
}
paste0(lines, collapse = "\n")
},
#' @description get current request matchers
#' @return character
current_matchers = function() {
if (length(cassettes_session()) > 0) {
current_cassette()$match_requests_on
} else {
vcr_configuration()$match_requests_on
}
},
#' @description are headers included in current matchers?
#' @return logical
match_request_on_headers = function() {
"headers" %in% self$current_matchers()
},
    #' @description is body included in current matchers?
#' @return logical
match_request_on_body = function() {
"body" %in% self$current_matchers()
},
#' @description get request headers
#' @return character
formatted_headers = function() {
tmp <- Map(function(a, b) {
sprintf(" %s: %s", a, b)
}, names(self$request$headers), self$request$headers)
paste0(tmp, collapse = "\n")
},
    #' @description construct a description of the current cassette(s), or of the lack thereof
#' @return character
cassettes_description = function() {
if (length(cassettes_session()) > 0) {
tmp <- self$cassettes_list()
tmp2 <- paste0(c("\n",
"Under the current configuration vcr can not find a suitable HTTP interaction",
"to replay and is prevented from recording new requests. There are a few ways",
"you can deal with this:\n"), collapse = "\n")
c(tmp, tmp2)
} else {
paste0(c("There is currently no cassette in use. There are a few ways",
"you can configure vcr to handle this request:\n"), collapse = "\n")
}
},
#' @description cassette details
#' @return character
cassettes_list = function() {
if (length(cassettes_session()) > 0) {
lines <- c()
xx <- if (length(cassettes_session()) == 1) {
"vcr is currently using the following cassette:"
} else {
"vcr is currently using the following cassettes:"
}
lines <- c(lines, xx)
# FIXME: should fix this to generalize to many cassettes, see ruby code
zz <- c(
paste0(" - ", self$cassette$file() %try% ""),
paste0(" - record_mode: ", self$cassette$record),
paste0(" - match_requests_on: ",
paste0(self$cassette$match_requests_on, collapse = ", "))
)
paste0(c(lines, zz), collapse = "\n")
} else {
paste0(c("There is currently no cassette in use. There are a few ways",
"you can configure vcr to handle this request:\n"), collapse = "\n")
}
},
#' @description get help message for non-verbose error
#' @return character
get_help = function() {
vm <- if (interactive()) "Run `vcr::vcr_last_error()`" else "Set `VCR_VERBOSE_ERRORS=TRUE`"
c(paste0(vm, " for more verbose errors"),
"If you're not sure what to do, open an issue https://github.com/ropensci/vcr/issues",
"& see https://books.ropensci.org/http-testing")
},
#' @description make suggestions for what to do
#' @return character
formatted_suggestions = function() {
formatted_points <- c()
sugs <- self$suggestions()
xx <- Map(function(bp, index) {
fp <- c(formatted_points, self$format_bullet_point(bp$text, index))
fn <- self$format_foot_note(bp$url, index)
list(fp = fp, fn = fn)
}, sugs, seq_along(sugs) - 1)
paste0(c(vapply(xx, "[[", "", 1), "\n", vapply(xx, "[[", "", 2)),
collapse = "", sep = "\n")
},
#' @description add bullet point to beginning of a line
#' @param lines (character) vector of strings
#' @param index (integer) a number
#' @return character
format_bullet_point = function(lines, index) {
lines[1] <- paste0(" * ", lines[1])
lines[length(lines)] <- paste(lines[length(lines)],
sprintf("[%s].", index + 1))
paste0(lines, collapse = "\n ")
},
#' @description make a foot note
#' @param url (character) a url
#' @param index (integer) a number
#' @return character
format_foot_note = function(url, index) {
sprintf("[%s] %s", index + 1, url)
},
#' @description get a suggestion by key
#' @param key (character) a character string
#' @return character
suggestion_for = function(key) {
error_suggestions[[key]]
},
#' @description get all suggestions
#' @return list
suggestions = function() {
if (length(cassettes_session()) == 0) {
return(self$no_cassette_suggestions())
}
tmp <- c("try_debug_logger", "use_new_episodes", "ignore_request")
tmp <- c(tmp, self$record_mode_suggestion())
if (self$has_used_interaction_matching())
tmp <- c(tmp, "allow_playback_repeats")
tmp <- lapply(tmp, self$suggestion_for)
compact(c(tmp, list(self$match_requests_on_suggestion())))
},
#' @description get all no cassette suggestions
#' @return list
no_cassette_suggestions = function() {
x <- c("try_debug_logger", "use_a_cassette",
"allow_http_connections_when_no_cassette", "ignore_request")
lapply(x, self$suggestion_for)
},
#' @description get the appropriate record mode suggestion
#' @return character
record_mode_suggestion = function() {
record_modes <- unlist(lapply(cassettes_session(), function(z) z$record))
if (all(record_modes == "none")) {
"deal_with_none"
} else if (all(record_modes == "once")) {
"delete_cassette_for_once"
} else {
c()
}
},
#' @description are there any used interactions
#' @return logical
has_used_interaction_matching = function() {
any(vapply(cassettes_session(), function(z) {
z$http_interactions()
z$http_interactions_$has_used_interaction_matching(self$request) %||% FALSE
}, logical(1)))
},
#' @description match requests on suggestion
#' @return list
match_requests_on_suggestion = function() {
num_remaining_interactions <- sum(vapply(cassettes_session(), function(z) {
z$http_interactions()
z$http_interactions_$remaining_unused_interaction_count()
}, numeric(1)))
if (num_remaining_interactions == 0) return(NULL)
interaction_description <- if (num_remaining_interactions == 1) {
"1 HTTP interaction that has"
} else {
paste0(num_remaining_interactions, " HTTP interactions that have")
}
tmp <- self$suggestion_for("match_requests_on")
description_lines <- tmp$text
link <- tmp$url
description_lines[1] <- sprintf(description_lines[1],
interaction_description)
list(text = paste0(description_lines, collapse = "\n "), url = link)
}
)
)
#' Get full suggestion messages for the last vcr cassette failure
#'
#' @export
#' @rdname UnhandledHTTPRequestError
#' @examples \dontrun{
#' # vcr_last_error()
#' }
vcr_last_error <- function() {
if (is.null(vcr__env$last_error) || length(vcr__env$last_error) == 0) {
stop("no error to report; either no cassette in use \n",
" or there's a problem with this package (i.e., open an issue)",
call. = FALSE)
}
message(
paste0(
c("", "", paste0(rep("=", 80), collapse = ""),
vcr__env$last_error$request_description,
vcr__env$last_error$cassettes_description,
vcr__env$last_error$formatted_suggestion,
paste0(rep("=", 80), collapse = ""), "", ""),
collapse = "\n")
)
}
# ---- end of R/errors.R ----
#' Remove headers or replace header values
#' @noRd
#' @details
#' Applies to request and response headers.
#' @examples
#' # remove one header
#' filter_request_headers <- "User-Agent"
#' # remove multiple headers
#' filter_request_headers <- c("User-Agent", "Authorization")
#' # replace one header's value
#' filter_request_headers <- list(Authorization = "foo-bar")
#' # replace many header's values
#' filter_request_headers <- list(Authorization = "foo-bar", Accept = "everything!")
#' # mix: remove one header, replace another header's value
#' filter_request_headers <- list("Accept", Authorization = "foo-bar")
headers_remove <- function(x) {
filter_req_or_res <- function(int, h, which) {
if (!is.null(h)) {
if (is.null(names(h))) toremove <- unlist(h)
if (!is.null(names(h))) toremove <- unname(unlist(h[!nzchar(names(h))]))
# remove zero length strings
toremove <- Filter(nzchar, toremove)
for (i in seq_along(toremove)) {
int <- lapply(int, function(b) {
b[[which]]$headers[[toremove[i]]] <- NULL
return(b)
})
}
toreplace <- h[nzchar(names(h))]
if (length(toreplace)) {
for (i in seq_along(toreplace)) {
int <- lapply(int, function(b) {
if (names(toreplace)[i] %in% names(b[[which]]$headers)) {
b[[which]]$headers[[names(toreplace)[i]]] <- toreplace[[i]]
}
return(b)
})
}
}
}
return(int)
}
x <- filter_req_or_res(x, vcr_c$filter_request_headers, "request")
filter_req_or_res(x, vcr_c$filter_response_headers, "response")
}
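# A minimal commented sketch of headers_remove(); it reads the
# filter_request_headers/filter_response_headers settings from vcr_c, and
# the interaction structure below is illustrative:
# vcr_configure(dir = tempdir(),
#   filter_request_headers = list("User-Agent", Authorization = "<redacted>"))
# int <- list(list(
#   request = list(headers = list(Authorization = "token abc",
#     `User-Agent` = "crul/1.0")),
#   response = list(headers = list(server = "nginx"))
# ))
# headers_remove(int)
# # => User-Agent dropped; Authorization replaced with "<redacted>"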
# ---- end of R/filter_headers.R ----
#' Remove or replace query parameters
#' @noRd
query_params_remove <- function(int) {
h <- vcr_c$filter_query_parameters
if (!is.null(h)) {
if (is.null(names(h))) toremove <- unlist(h)
if (!is.null(names(h))) toremove <- unname(unlist(h[!nzchar(names(h))]))
# remove zero length strings
toremove <- Filter(nzchar, toremove)
for (i in seq_along(toremove)) {
int <- lapply(int, function(b) {
b$request$uri <- drop_param(b$request$uri, toremove[i])
return(b)
})
}
toreplace <- h[nzchar(names(h))]
if (length(toreplace)) {
for (i in seq_along(toreplace)) {
int <- lapply(int, function(b) {
vals <- toreplace[[i]]
val <- if (length(vals) == 2) vals[2] else vals[1]
b$request$uri <-
replace_param(b$request$uri, names(toreplace)[i], val)
return(b)
})
}
}
}
return(int)
}
#' Put back query parameters
#' @details Ignore character strings as we can't put back completely removed
#' query parameters
#' @noRd
query_params_put_back <- function(int) {
h <- vcr_c$filter_query_parameters
if (!is.null(h)) {
toputback <- h[nzchar(names(h))]
if (length(toputback)) {
for (i in seq_along(toputback)) {
int$http_interactions <- lapply(int$http_interactions, function(b) {
vals <- toputback[[i]]
if (length(vals) == 2) {
b$request$uri <-
replace_param_with(b$request$uri, names(toputback)[i], vals[2], vals[1])
} else {
b$request$uri <-
replace_param(b$request$uri, names(toputback)[i], vals)
}
return(b)
})
}
}
}
return(int)
}
query_params_remove_str <- function(uri) {
h <- vcr_c$filter_query_parameters
if (!is.null(h)) {
if (is.null(names(h))) toremove <- unlist(h)
if (!is.null(names(h))) toremove <- unname(unlist(h[!nzchar(names(h))]))
toremove <- Filter(nzchar, toremove)
for (i in seq_along(toremove)) uri <- drop_param(uri, toremove[i])
toreplace <- h[nzchar(names(h))]
if (length(toreplace)) {
for (i in seq_along(toreplace)) {
vals <- toreplace[[i]]
if (length(vals) == 2) {
uri <-
replace_param_with(uri, names(toreplace)[i], vals[2], vals[1])
} else {
uri <- replace_param(uri, names(toreplace)[i], vals)
}
}
}
}
return(uri)
}
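# list2str() collapses a named list of query parameters into a query string:
# list2str(list(a = 5, foo = "bar"))
# => "a=5&foo=bar"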
list2str <- function(w) {
paste(names(w), unlist(unname(w)), sep="=", collapse="&")
}
buildurl <- function(x) {
x$parameter <- list2str(x$parameter)
url <- urltools::url_compose(x)
# trim trailing ?
sub("\\?$", "", url)
}
# drop_param(url="https://hb.opencpu.org/get?foo=bar&baz=3&z=4", name="z")
# => "https://hb.opencpu.org/get?foo=bar&baz=3"
drop_param <- function(url, name) {
assert(name, "character")
stopifnot("can only drop one name at a time" = length(name) == 1)
z <- parseurl(url)
z$parameter[[name]] <- NULL
buildurl(z)
}
# replace_param(url="https://hb.opencpu.org/get?foo=5", name="foo", value=4)
# => "https://hb.opencpu.org/get?foo=4"
# # return param value unchanged if param name not found
# replace_param(url="https://hb.opencpu.org/get?bar=3", name="foo", value=4)
# => "https://hb.opencpu.org/get?bar=3"
# # No params at all
# replace_param(url="https://hb.opencpu.org/get", name="foo", value=4)
# => "https://hb.opencpu.org/get"
replace_param <- function(url, name, value) {
assert(name, "character")
stopifnot("can only replace one name at a time" = length(name) == 1)
z <- parseurl(url)
if (!is.list(z$parameter)) return(url)
if (is.null(z$parameter[[name]])) return(url)
z$parameter[[name]] <- value
buildurl(z)
}
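# replace_param_with() swaps a fake (recorded) value back for the real one;
# values below are illustrative:
# replace_param_with(url="https://hb.opencpu.org/get?api_key=FAKE",
#   name="api_key", fake="FAKE", real="real-key")
# => "https://hb.opencpu.org/get?api_key=real-key"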
replace_param_with <- function(url, name, fake, real) {
assert(name, "character")
stopifnot("can only replace one name at a time" = length(name) == 1)
z <- parseurl(url)
if (!is.list(z$parameter)) return(url)
if (is.null(z$parameter[[name]])) return(url)
z$parameter[[name]] <- sub(fake, real, z$parameter[[name]])
buildurl(z)
}
# ---- end of R/filter_query_parameters.R ----
trimquotes <- function(x, y) {
pattern <- "^\"|\"$|^'|'$"
if (grepl(pattern, x)) {
msg <- "filter_sensitive_data: leading & trailing quotes trimmed from '"
warning(paste0(msg, y, "'"), call.=FALSE)
}
return(gsub(pattern, "", x))
}
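# e.g. (the key name is illustrative):
# trimquotes("\"secret\"", "mykey") # warns, returns "secret"
# trimquotes("secret", "mykey") # returns "secret" unchanged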
# filter_sensitive_data replacement
# FIXME: eventually move to higher level so that this happens
# regardless of serializer
sensitive_put_back <- function(x) {
if (!is.null(vcr_c$filter_sensitive_data)) {
fsd <- vcr_c$filter_sensitive_data
for (i in seq_along(fsd)) {
x <- gsub(names(fsd)[i], fsd[[i]], x, fixed = TRUE)
}
}
return(x)
}
sensitive_remove <- function(x) {
fsd <- vcr_c$filter_sensitive_data
if (!is.null(fsd)) {
for (i in seq_along(fsd)) {
if (nchar(fsd[[i]]) > 0) {
strg <- trimquotes(fsd[[i]], names(fsd)[i])
x <- gsub(strg, names(fsd)[i], x, fixed = TRUE)
}
}
}
fsdr <- vcr_c$filter_sensitive_data_regex
if (!is.null(fsdr)) {
for (i in seq_along(fsdr)) {
if (nchar(fsdr[[i]]) > 0) {
x <- gsub(fsdr[[i]], names(fsdr)[i], x, fixed = FALSE)
}
}
}
return(x)
}
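# A commented sketch of the round trip, assuming vcr was configured with
# filter_sensitive_data = list("<<token>>" = "mysecret"):
# sensitive_remove("Bearer mysecret") # => "Bearer <<token>>"
# sensitive_put_back("Bearer <<token>>") # => "Bearer mysecret"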
# ---- end of R/filter_sensitive_data.R ----
# (x <- Hooks$new())
# x$hooks
# x$define_hook(hook_type = "foo", fun = function(x) x ^ 2)
# x$hooks$foo(4)
# x$clear_hooks()
# x$hooks
#' @title Hooks class
#'
#' @description Helps define new hooks, hold hooks, and accessors to get and
#' use hooks.
#'
#' @keywords internal
#' @details
#' \strong{Private Methods}
#' \describe{
#' \item{`make_hook(x, plac, fun)`}{
#' Make a hook.
#' - x (character) Hook name
#' - plac Placement, one of "start" or "end"
#' - fun a function/callback
#' }
#' }
#' @format NULL
#' @usage NULL
Hooks <- R6::R6Class(
'Hooks',
public = list(
    #' @field hooks internal use
hooks = list(),
#' @description invoke a hook
#' @param hook_type (character) Hook name
#' @param args (named list) Args passed when invoking a hook
#' @return executes hook
invoke_hook = function(hook_type, args) {
self$hooks[[hook_type]](args)
},
#' @description clear all hooks
#' @return no return
clear_hooks = function() {
# clear hooks, set back to an empty list
self$hooks <- list()
},
#' @description define a hook
#' @param hook_type (character) Hook name
#' @param fun A function
    #' @param prepend (logical) Whether to prepend the hook (placement
    #'   "start") rather than append it to the end ("end"). Default: `FALSE`
#' @return no return; defines hook internally
define_hook = function(hook_type, fun, prepend = FALSE) {
private$make_hook(hook_type, if (prepend) "start" else "end", fun)
}
),
private = list(
make_hook = function(x, plac, fun) {
defhk <- DefinedHooks$new()
self$hooks[[x]] <-
defhk$set_hook(name = x,
placement_method = plac,
fun = fun
)
}
)
)
# DefinedHooks: internal registry that stores hook functions, recording each
# hook's placement ("start" or "end") as an attribute
DefinedHooks <- R6::R6Class(
'DefinedHooks',
public = list(
hooks = list(),
set_hook = function(name, placement_method, fun) {
attr(fun, "placement_method") <- placement_method
self$hooks[[name]] <- fun
return(self$hooks[[name]])
}
)
)
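# A commented sketch of DefinedHooks (the hook name below is illustrative):
# dh <- DefinedHooks$new()
# dh$set_hook("before_record", "end", function(x) x)
# attr(dh$hooks$before_record, "placement_method") # => "end"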
# ---- end of R/hooks.R ----
#' @title HTTPInteraction class
#' @description object holds request and response objects
#' @export
#' @details
#' \strong{Methods}
#' \describe{
#' \item{\code{to_hash()}}{
#' Create a hash from the HTTPInteraction object
#' }
#' \item{\code{from_hash(hash)}}{
#' Create a HTTPInteraction object from a hash
#' }
#' }
#' @examples \dontrun{
#' # make the request
#' library(vcr)
#' url <- "https://eu.httpbin.org/post"
#' body <- list(foo = "bar")
#' cli <- crul::HttpClient$new(url = url)
#' res <- cli$post(body = body)
#'
#' # build a Request object
#' (request <- Request$new("POST", uri = url,
#' body = body, headers = res$response_headers))
#' # build a VcrResponse object
#' (response <- VcrResponse$new(
#' res$status_http(),
#' res$response_headers,
#' res$parse("UTF-8"),
#' res$response_headers$status))
#'
#' # make HTTPInteraction object
#' (x <- HTTPInteraction$new(request = request, response = response))
#' x$recorded_at
#' x$to_hash()
#'
#' # make an HTTPInteraction from a hash with the object already made
#' x$from_hash(x$to_hash())
#'
#' # Make an HTTPInteraction from a hash alone
#' my_hash <- x$to_hash()
#' HTTPInteraction$new()$from_hash(my_hash)
#' }
HTTPInteraction <- R6::R6Class(
'HTTPInteraction',
public = list(
#' @field request A `Request` class object
request = NULL,
#' @field response A `VcrResponse` class object
response = NULL,
#' @field recorded_at (character) Time http interaction recorded at
recorded_at = NULL,
#' @description Create a new `HTTPInteraction` object
#' @param request A `Request` class object
#' @param response A `VcrResponse` class object
#' @param recorded_at (character) Time http interaction recorded at
#' @return A new `HTTPInteraction` object
    initialize = function(request, response, recorded_at) {
      if (!missing(request)) self$request <- request
      if (!missing(response)) self$response <- response
      # honor a supplied recorded_at (e.g., when rebuilding from a cassette
      # hash); otherwise stamp with the current time
      self$recorded_at <- if (missing(recorded_at)) Sys.time() else recorded_at
    },
#' @description Create a hash from the HTTPInteraction object
#' @return a named list
to_hash = function() {
list(request = self$request$to_hash(),
response = self$response$to_hash(),
recorded_at = self$recorded_at)
},
#' @description Create a HTTPInteraction object from a hash
#' @param hash a named list
#' @return a new `HttpInteraction` object
from_hash = function(hash) {
HTTPInteraction$new(
Request$new()$from_hash(hash$request),
VcrResponse$new()$from_hash(hash$response),
hash$recorded_at
)
}
)
)
# ---- end of R/http_interaction.R ----
# Null list, an empty HTTPInteractionList object
NullList <- R6::R6Class(
'NullList',
public = list(
response_for = function() NULL,
has_interaction_matching = function() FALSE,
has_used_interaction_matching = function() FALSE,
remaining_unused_interaction_count = function() 0
)
)
#' @title HTTPInteractionList class
#' @description keeps track of all [HTTPInteraction] objects
#' @export
#' @param request The request from an object of class `HTTPInteraction`
#' @details
#' \strong{Private Methods}
#' \describe{
#' \item{\code{has_unused_interactions()}}{
#' Are there any unused interactions? returns boolean
#' }
#'   \item{\code{matching_interaction_index(request)}}{
#'     Get the index of the first stored interaction matching the request
#'   }
#'   \item{\code{matching_used_interaction_for(request)}}{
#'     Get a previously used interaction matching the request, or `FALSE` if
#'     there is none (only consulted when `allow_playback_repeats` is enabled)
#'   }
#' \item{\code{interaction_matches_request(request, interaction)}}{
#' Check if a request matches an interaction (logical)
#' }
#' \item{\code{from_hash()}}{
#' Get a hash back.
#' }
#' \item{\code{request_summary(z)}}{
#' Get a request summary (character)
#' }
#' \item{\code{response_summary(z)}}{
#' Get a response summary (character)
#' }
#' }
#' @examples \dontrun{
#' vcr_configure(
#' dir = tempdir(),
#' record = "once"
#' )
#'
#' # make interactions
#' ## make the request
#' ### turn off mocking
#' crul::mock(FALSE)
#' url <- "https://eu.httpbin.org/post"
#' cli <- crul::HttpClient$new(url = url)
#' res <- cli$post(body = list(a = 5))
#'
#' ## request
#' (request <- Request$new("POST", url, list(a = 5), res$headers))
#' ## response
#' (response <- VcrResponse$new(
#' res$status_http(),
#' res$response_headers,
#' res$parse("UTF-8"),
#' res$response_headers$status))
#' ## make an interaction
#' (inter <- HTTPInteraction$new(request = request, response = response))
#'
#' # make an interactionlist
#' (x <- HTTPInteractionList$new(
#' interactions = list(inter),
#' request_matchers = vcr_configuration()$match_requests_on
#' ))
#' x$interactions
#' x$request_matchers
#' x$parent_list
#' x$parent_list$response_for()
#' x$parent_list$has_interaction_matching()
#' x$parent_list$has_used_interaction_matching()
#' x$parent_list$remaining_unused_interaction_count()
#' x$used_interactions
#' x$allow_playback_repeats
#' x$interactions
#' x$response_for(request)
#' }
HTTPInteractionList <- R6::R6Class(
'HTTPInteractionList',
public = list(
#' @field interactions (list) list of interaction class objects
interactions = NULL,
#' @field request_matchers (character) vector of request matchers
request_matchers = NULL,
#' @field allow_playback_repeats whether to allow playback repeats
allow_playback_repeats = FALSE,
#' @field parent_list A list for empty objects, see `NullList`
parent_list = NullList$new(),
#' @field used_interactions (list) Interactions that have been used
used_interactions = list(),
#' @description Create a new `HTTPInteractionList` object
#' @param interactions (list) list of interaction class objects
#' @param request_matchers (character) vector of request matchers
#' @param allow_playback_repeats whether to allow playback repeats or not
#' @param parent_list A list for empty objects, see `NullList`
#' @param used_interactions (list) Interactions that have been used. That is,
#' interactions that are on disk in the current cassette, and a
#' request has been made that matches that interaction
#' @return A new `HTTPInteractionList` object
initialize = function(interactions,
request_matchers,
allow_playback_repeats = FALSE,
parent_list = NullList$new(),
used_interactions = list()) {
self$interactions <- interactions
self$request_matchers <- request_matchers
self$allow_playback_repeats <- allow_playback_repeats
self$parent_list <- parent_list
self$used_interactions <- used_interactions
interaction_summaries <- vapply(interactions, function(x) {
sprintf("%s => %s",
request_summary(Request$new()$from_hash(x$request)),
response_summary(VcrResponse$new()$from_hash(x$response)))
}, "")
vcr_log_info(sprintf(
"Init. HTTPInteractionList w/ request matchers [%s] & %s interaction(s): { %s }",
paste0(self$request_matchers, collapse = ", "),
length(interactions),
paste0(interaction_summaries, collapse = ', ')
), vcr_c$log_opts$date)
},
#' @description Check if there's a matching interaction, returns a
#' response object
response_for = function(request) {
index <- private$matching_interaction_index(request)
if (length(index) > 0) {
# index should be length 1 here it seems
# FIXME: for now just get the first one
# if (length(index > 1)) warning("more than 1 found, using first")
index <- index[1]
# delete the http interaction at <index>, and capture it into `interaction`
interaction <- self$interactions[[index]]
self$interactions <- delete_at(self$interactions, index)
# put `interaction` at front of list with `unshift`
self$used_interactions <- unshift(self$used_interactions, list(interaction))
vcr_log_info(sprintf(" Found matching interaction for %s at index %s: %s",
request_summary(Request$new()$from_hash(request)),
index,
response_summary(VcrResponse$new()$from_hash(interaction$response))),
vcr_c$log_opts$date)
interaction$response
      } else {
        tmp <- private$matching_used_interaction_for(request)
        # tmp is either a matching used interaction (a list) or FALSE
        if (!isFALSE(tmp)) {
          tmp$response
        } else {
          self$parent_list$response_for()
        }
      }
},
#' @description Check if has a matching interaction
#' @return logical
has_interaction_matching = function(request) {
private$matching_interaction_bool(request) ||
private$matching_used_interaction_for(request) ||
self$parent_list$has_interaction_matching()
},
#' @description check if has used interactions matching a given request
#' @return logical
has_used_interaction_matching = function(request) {
      any(vapply(self$used_interactions, function(i) {
        private$interaction_matches_request(request, i)
      }, logical(1)))
},
#' @description Number of unused interactions
#' @return integer
remaining_unused_interaction_count = function() {
length(self$interactions)
},
#' @description Checks if there are no unused interactions left.
#' @return various
assert_no_unused_interactions = function() {
if (!private$has_unused_interactions()) return(NULL)
descriptions <- lapply(self$interactions, function(x) {
sprintf(" - %s => %s",
request_summary(x$request, self$request_matchers),
response_summary(x$response))
})
vcr_log_info(descriptions, vcr_c$log_opts$date)
stop("There are unused HTTP interactions left in the cassette:\n",
descriptions, call. = FALSE)
}
),
private = list(
# return: logical
has_unused_interactions = function() {
length(self$interactions) > 0
},
gather_match_checks = function(request) {
out <- logical(0)
iter <- 0
while (!any(out) && iter < length(self$interactions)) {
iter <- iter + 1
bool <- private$interaction_matches_request(
request, self$interactions[[iter]])
out <- c(out, bool)
}
return(out)
},
# return: logical
matching_interaction_bool = function(request) {
any(private$gather_match_checks(request))
},
# return: integer
matching_interaction_index = function(request) {
which(private$gather_match_checks(request))
},
# return: interactions list or `FALSE`
matching_used_interaction_for = function(request) {
if (!self$allow_playback_repeats) return(FALSE)
if (length(self$used_interactions) == 0) return(FALSE)
      tmp <- FALSE
      i <- 0
      # stop when a match is found or all used interactions are exhausted
      while (!tmp && i < length(self$used_interactions)) {
        i <- i + 1
        tmp <- private$interaction_matches_request(request,
          self$used_interactions[[i]])
      }
      if (tmp) self$used_interactions[[i]] else FALSE
},
    # return: logical
interaction_matches_request = function(req, interaction) {
bod <- interaction$request$body
if (length(names(bod)) > 0) {
if ("string" %in% names(bod)) bod <- bod$string
}
intreq <- Request$new(
interaction$request$method,
interaction$request$uri,
bod,
interaction$request$headers
)
vcr_log_info(sprintf(" Checking if {%s} matches {%s} using matchers: [%s]",
request_summary(req),
request_summary(intreq),
paste0(self$request_matchers, collapse = ", ")),
vcr_c$log_opts$date)
all(unlist(lapply(self$request_matchers, function(y) {
matcher <- RequestMatcherRegistry$new()$registry[[y]]
res <- matcher$matches(req, intreq)
msg <- if (res) "matched" else "did not match"
# cat(paste0("method: ", req$method), sep = "\n ")
# cat(paste0("body: ", req$body), sep = "\n ")
vcr_log_info(sprintf(" %s %s: current request [%s] vs [%s]",
y, msg,
request_summary(req, self$request_matchers),
request_summary(intreq, self$request_matchers)),
vcr_c$log_opts$date)
return(res)
})))
},
# return: character
request_summary = function(z) {
paste(z$method, z$uri)
},
# return: character
response_summary = function(z) {
paste(
z$status$status_code,
sprintf("['%s ...'", substring(gsub("\n", " ", z$body), 1, 50)),
"]"
)
}
)
)
# makes a copy - does not modify in place
# x: must be a list
# y: must be numeric; ignores values out of range
delete_at <- function(x, y) {
stopifnot(is.list(x))
stopifnot(is.numeric(y))
x[-y]
}
# makes a copy - does not modify in place
# x: a list with objects of class `HTTPInteraction`
# y: a list with an object of class `HTTPInteraction`
unshift <- function(x, y) {
stopifnot(inherits(x, "list"))
stopifnot(inherits(y, "list"))
# stopifnot(inherits(y[[1]], "HTTPInteraction"))
c(y, x)
}
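# e.g.:
# delete_at(list("a", "b", "c"), 2) # => list("a", "c")
# unshift(list("b", "c"), list("a")) # => list("a", "b", "c")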
# ---- end of R/http_interaction_list.R ----