---
title: "ATNr"
output: rmarkdown::html_vignette
bibliography: vignette.bib
vignette: >
%\VignetteIndexEntry{ATNr}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r, include=FALSE, echo=FALSE}
oldpar <- par()
# if (!nzchar(Sys.getenv("_R_CHECK_LIMIT_CORES_", ""))) {
# ## Possible values: 'TRUE' 'false', 'warn', 'error'
# Sys.setenv("_R_CHECK_LIMIT_CORES_" = "TRUE")
# }
Sys.setenv("OMP_NUM_THREADS" = 1)
```
The package *ATNr* defines the differential equations and parametrisation of different versions of the Allometric Trophic Network (ATN) model. It is structured around a model object that contains a function implementing the ordinary differential equations (ODEs) of the model and various attributes defining the different parameters to run the ODEs.
Three different versions of the model are implemented:
* Scaled version: @delmas2017simulations
* Unscaled version incorporating nutrient dynamics: @schneider2016animal
* Unscaled version without nutrients: @binzer2016interactive
The version without nutrients from @delmas2017simulations is scaled, meaning that the biological rates controlling the growth rate of the species are normalised by the growth rate of the smallest basal species. For more details on the three models, see the specific vignette: `vignette("model_descriptions", package = "ATNr")`.
# A quick walk-through
## Creating a model
The definition of an ATN model is based on a model object (formally an S4 class in R). The model object is initialised by specifying a fixed set of parameters: *the number of species*, *the number of basal species*, *species body masses*, *a matrix defining the trophic interactions* and, for the version including the nutrient dynamics, *the number of nutrients*.
The first thing to do is therefore to create the corresponding R variables. While one can use an empirical food web for the analysis, it is also possible to generate synthetic food webs using the niche model from @williams2000simple or using allometric scaling as defined in @schneider2016animal.
### Generating synthetic food webs (if needed)
*ATNr* has two functions to generate synthetic food webs, `create_niche_model()` for the niche model (@williams2000simple) and `create_Lmatrix()` for the allometric scaling model (@schneider2016animal).
The niche model requires information on the number of species and connectance of the desired food web:
```{r}
library(ATNr)
set.seed(123)
n_species <- 20 # number of species
conn <- 0.3 # connectance
fw <- create_niche_model(n_species, conn)
# The number of basal species can be calculated:
n_basal <- sum(colSums(fw) == 0)
```
As the niche model does not rely on allometry, it is possible to estimate species body masses from their trophic levels, which can be calculated with the `TroLev` function of the package. For instance:
```{r}
TL = TroLev(fw) #trophic levels
masses <- 1e-2 * 10 ^ (TL - 1)
```
The allometric scaling model generates links based on species body masses. Therefore, it requires as input a vector containing the body masses of the species, as well as a parameter specifying the desired number of basal species. It produces a so-called L matrix, which formally quantifies the probability for a consumer to successfully attack and consume an encountered resource:
```{r}
n_species <- 20
n_basal <- 5
masses <- sort(10^runif(n_species, 2, 6)) #body mass of species
L <- create_Lmatrix(masses, n_basal)
```
This L matrix can then be transformed into a binary food web:
```{r}
fw <- L
fw[fw > 0] <- 1
```
More details about the generative models and the usage precautions around them can be found in the section "[The food web generative functions]".
### Creating a specific ATN model
As soon as a food web is stored in a matrix, it is possible to create a model object that corresponds to the desired specific model:
```{r}
# initialisation of the model object. It is possible to create a model object corresponding to
# Schneider et al. 2016, Delmas et al. 2017 or Binzer et al. 2016:
# 1) Schneider et al. 2016
n_nutrients <- 3
model_unscaled_nuts <- create_model_Unscaled_nuts(n_species, n_basal, n_nutrients, masses, fw)
# 2) Delmas et al. 2017:
model_scaled <- create_model_Scaled(n_species, n_basal, masses, fw)
# 3) Binzer et al. 2016
model_unscaled <- create_model_Unscaled(n_species, n_basal, masses, fw)
```
Once created, it is possible to access the methods and attributes of the object to initialise or update them:
```{r}
# updating the hill coefficient of consumers in the Unscaled_nuts model:
model_unscaled_nuts$q <- rep(1.4, model_unscaled_nuts$nb_s - model_unscaled_nuts$nb_b)
# Changing the assimilation efficiencies of all species to 0.5 in the Scaled model:
model_scaled$e = rep(0.5, model_scaled$nb_s)
# print the different fields that can be updated and their values:
# str(model_unscaled_nuts)
```
It is important to keep in mind that some rules apply here:
* The order of the species in the different fields must be consistent: the first species in the `$BM` object corresponds to the first species in the `$fw` object and in the `$e` object.
* The objects that are specific to a species type (i.e. basal species or consumers) are dimensioned accordingly: the handling time (`$h`) sets the handling time of consumers on resources. Therefore, the `h` matrix has a number of rows equal to the number of species and a number of columns equal to the number of consumers (as non-consumer species do not have a handling time by definition). In that case, the first row corresponds to the first species and the first column to the first consumer.
* The objects describing the interactions between plants and nutrients (`$K` or `$V`) are matrices whose number of rows equals the number of nutrients and whose number of columns matches the number of basal species (this point is specific to the Schneider model, which is the only one with explicit nutrient dynamics). A quick way to check these dimensions is sketched below.
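For instance, a minimal sanity check of these dimensions (a sketch only, using the field names listed above) could be:
```{r, eval = FALSE}
# sketch: quick dimension checks on the Unscaled_nuts model fields
n_cons <- model_unscaled_nuts$nb_s - model_unscaled_nuts$nb_b  # number of consumers
dim(model_unscaled_nuts$h)   # expected: one row per species, one column per consumer
dim(model_unscaled_nuts$K)   # expected: one row per nutrient, one column per basal species
length(model_unscaled_nuts$BM) == model_unscaled_nuts$nb_s     # one body mass per species
```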
To run the population dynamics, all the parameters must be defined. It is possible to automatically load a default parametrisation using the dedicated functions:
```{r}
# for a model created by create_model_Unscaled_nuts():
model_unscaled_nuts <- initialise_default_Unscaled_nuts(model_unscaled_nuts, L)
# for a model created by create_model_Scaled():
model_scaled <- initialise_default_Scaled(model_scaled)
# for a model created by create_model_Unscaled():
model_unscaled <- initialise_default_Unscaled(model_unscaled)
```
Importantly, for the unscaled model with nutrients of @schneider2016animal, the calculation of consumption rates relies on the L matrix created above or, in the case of empirical networks, on a matrix that defines the probability of a consumer successfully attacking and consuming an encountered prey. The default initialisation of the `Unscaled_nuts` and `Unscaled` models can also include temperature effects (20 °C by default).
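For example, the default initialisation can be re-run with a different temperature (a sketch, reusing the objects created above):
```{r, eval = FALSE}
# re-initialise the models with a 15 degree Celsius parametrisation
model_unscaled_nuts <- initialise_default_Unscaled_nuts(model_unscaled_nuts, L, temperature = 15)
model_unscaled <- initialise_default_Unscaled(model_unscaled, temperature = 15)
```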
## Running the population dynamics
Once all the parameters are properly defined, the ODEs can be integrated by any solver. We present here a solution based on `lsoda` from the `deSolve` library (@DeSolve), but other solutions exist (`sundialr` is also a possibility). The package provides a direct wrapper to `lsoda` with the function `lsoda_wrapper`:
```{r wrappers}
biomasses <- runif(n_species, 2, 3) # starting species biomasses
biomasses <- append(runif(3, 20, 30), biomasses) # prepend starting nutrient concentrations
# defining the desired integration time
times <- seq(0, 1500, 5)
sol <- lsoda_wrapper(times, biomasses, model_unscaled_nuts)
```
To have more control over the integration, it is also possible to bypass the wrapper proposed in the package and work directly with the `lsoda` function. Here is an example:
```{r deSolve}
# running simulations for the Schneider model
model_unscaled_nuts$initialisations()
sol <- deSolve::lsoda(
biomasses,
times,
function(t, y, params) {
return(list(params$ODE(y, t)))
},
model_unscaled_nuts
)
```
Note that the call to `model_unscaled_nuts$initialisations()` is important here, as it pre-computes some variables to optimise code execution. This function is normally called internally by `lsoda_wrapper`. When the integration does not rely on this wrapper function, the call to `$initialisations()` is needed for ALL the model types.
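As a sketch (not run), the same pattern applies to the other model types, for instance the scaled model:
```{r, eval = FALSE}
# direct integration of the scaled model, bypassing lsoda_wrapper
model_scaled$initialisations()  # pre-compute internal variables first
sol_scaled <- deSolve::lsoda(
  runif(n_species, 2, 3),  # starting biomasses (no nutrients in this model)
  times,
  function(t, y, params) list(params$ODE(y, t)),
  model_scaled
)
```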
The package also contains a simple function to plot the time series obtained: `plot_odeweb`. The colours only differentiate the species by their ranks in the food web matrix (from blue to red).
```{r plot_odeweb, fig.width=4, fig.height=3, fig.align='center'}
par(mar = c(4, 4, 1, 1))
plot_odeweb(sol, model_unscaled_nuts$nb_s)
```
# The food web generative functions
It is possible to create a model object using empirical food webs; however, synthetic ones can be valuable tools to explore different theoretical questions. To allow this possibility, two different models are available in the package: the niche model (@williams2000simple) and the allometric scaling model (@schneider2016animal). In what follows, we use the following function to visualise the adjacency matrices (where rows correspond to resources and columns to consumers) of the food webs:
```{r}
# function to plot the fw
show_fw <- function(mat, title = NULL) {
par(mar = c(.5, .5, 2, .5))
S <- nrow(mat)
mat <- mat[nrow(mat):1, ]
mat <- t(mat)
image(mat, col = c("goldenrod", "steelblue"),
frame = FALSE, axes = FALSE)
title(title)
grid(nx = S, ny = S, lty = 1, col = adjustcolor("grey20", alpha.f = .1))
}
```
The niche model orders species based on their trophic niche, randomly sampled from a uniform distribution. For each species $i$, a diet range ($r_i$) is then drawn from a Beta distribution and a diet centre ($c_i$) from a uniform distribution. All species whose trophic niche falls within the interval $[c_i - r_i / 2, c_i + r_i / 2]$ are considered prey of species $i$. In this package, we followed the modification of the niche model of @williams2000simple as specified in @allesina2008general.
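As an illustration only, the link rule of the classic niche model can be sketched as follows (a schematic transcription, not the package's implementation, which includes the @allesina2008general modification):
```{r, eval = FALSE}
# schematic niche model (illustrative sketch, not the ATNr implementation)
niche_sketch <- function(S, C) {
  n <- sort(runif(S))                      # trophic niche values
  r <- n * rbeta(S, 1, 1 / (2 * C) - 1)    # diet ranges (expected connectance C)
  ctr <- runif(S, r / 2, n)                # diet centres
  # species j preys on every species whose niche value lies in [ctr_j - r_j/2, ctr_j + r_j/2]
  fw <- outer(n, seq_len(S),
              function(prey, j) prey >= ctr[j] - r[j] / 2 & prey <= ctr[j] + r[j] / 2)
  fw * 1                                   # rows = resources, columns = consumers
}
```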
Generating a food web from the niche model is then made by a simple call to the corresponding function:
```{r}
S <- 50 # number of species
C <- 0.2 # connectance
fw <- create_niche_model(S, C)
```
The function ensures that the returned food web is not composed of disconnected sub-networks (i.e. several connected components).
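This can be verified by hand, for instance with a simple breadth-first search on the undirected version of the web (a sketch, not needed in normal use):
```{r, eval = FALSE}
# sketch: check that the food web forms a single connected component
adj <- (fw + t(fw)) > 0                  # undirected adjacency matrix
reached <- 1; frontier <- 1
while (length(frontier) > 0) {
  nxt <- setdiff(which(rowSums(adj[, frontier, drop = FALSE]) > 0), reached)
  reached <- c(reached, nxt)
  frontier <- nxt
}
length(reached) == nrow(fw)              # TRUE if all species are connected
```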
The allometric scaling model assumes an optimal consumer/resource body mass ratio (_Ropt_, default = 100) for attack rates, i.e. the probability that a consumer, when it encounters a species, will predate on it. In particular, each attack rate is calculated using a Ricker function:
$$
a_{ij} = \left( \frac{m_i}{m_j \cdot Ropt} \cdot e^{(1 - \frac{m_i}{m_j \cdot Ropt})} \right) ^\gamma
$$
where $m_i$ is the body mass of the consumer, $m_j$ the body mass of the resource, and $\gamma$ sets the width of the trophic niche.
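A direct transcription of this kernel (assuming, as above, that $m_i$ denotes the consumer and $m_j$ the resource) is:
```{r, eval = FALSE}
# illustrative transcription of the Ricker attack kernel above
ricker_attack <- function(m_cons, m_res, Ropt = 100, gamma = 2) {
  ratio <- m_cons / (m_res * Ropt)
  (ratio * exp(1 - ratio))^gamma
}
ricker_attack(1e4, 1e2)  # maximal (= 1) when the consumer/resource ratio equals Ropt
```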
Generating a food web with the allometric scaling model requires a few more steps. The trophic niche of a species is defined by a body mass interval and is quantitative (see fig. 2 and 3 from Schneider et al., 2016). This quantitative version returns the probability of a successful attack event occurring when a consumer encounters a prey. These probabilities are estimated with a Ricker function of 4 parameters: the body masses of the resource and of the consumer, the optimal predator-prey body mass ratio `Ropt` and the width of the trophic niche `gamma`. A threshold (`th`) filters out links with very low probabilities of attack success. The probabilities are stored in a matrix obtained from:
```{r}
# number of species and body masses
n_species <- 20
n_basal <- 5
# body mass of species. Here we assume two specific rules for basal and non basal species
masses <- c(sort(10^runif(n_basal, 1, 3)), sort(10^runif(n_species - n_basal, 2, 6)))
L <- create_Lmatrix(masses, n_basal, Ropt = 100, gamma = 2, th = 0.01)
```
Then, a food web is a binary version of the L matrix that can be stored either using booleans (FALSE/TRUE) or numeric values (0/1):
```{r, fig.width=4, fig.height=4, fig.align='center'}
# boolean version
fw <- L > 0
# 0/1 version:
fw <- L
fw[fw > 0] <- 1
show_fw(fw, title = "L-matrix model food web")
```
# Examples
## effect of temperature on species persistence
_ATNr_ makes it relatively easy to vary one parameter to assess its effect on the population dynamics. For example, we can study how changes in temperature, here from 4 to 22 degrees Celsius, affect the number of species that go extinct.
```{r}
set.seed(12)
# 1) define number of species, their body masses, and the structure of the
# community
n_species <- 50
n_basal <- 20
n_nut <- 2
# body mass of species
masses <- 10 ^ c(sort(runif(n_basal, 1, 3)),
sort(runif(n_species - n_basal, 2, 9)))
# 2) create the food web
# create the L matrix
L <- create_Lmatrix(masses, n_basal, Ropt = 50, gamma = 2, th = 0.01)
# create the 0/1 version of the food web
fw <- L
fw[fw > 0] <- 1
# 3) create the model
model <- create_model_Unscaled_nuts(n_species, n_basal, n_nut, masses, fw)
# 4) define the temperature gradient and initial conditions
temperatures <- seq(4, 22, by = 2)
extinctions <- rep(NA, length(temperatures))
# defining biomasses
biomasses <- runif(n_species + n_nut, 2, 3)
# 5) define the desired integration time.
times <- seq(0, 100000, 100)
# 6) and loop over temperature to run the population dynamics
i <- 0
for (t in temperatures){
# initialise the model parameters for the specific temperature
# Here, no key parameters (numbers of species or species' body masses) are modified
# Therefore, there is no need to create a new model object:
# reinitialising the different parameters is enough
model <- initialise_default_Unscaled_nuts(model, L, temperature = t)
# updating the value of q, same for all consumers
model$q = rep(1.4, n_species - n_basal)
model$S <- rep(10, n_nut)
# running simulations for the Schneider model:
sol <- lsoda_wrapper(times, biomasses, model, verbose = FALSE)
# retrieve the number of species that went extinct before the end of the
# simulation, excluding the first 3 columns: the first is time, the 2nd and 3rd
# are nutrients
i <- i + 1
extinctions[i] <- sum(sol[nrow(sol), 4:ncol(sol)] < 1e-6)
}
```
```{r, fig.width=4, fig.height=3, fig.align='center'}
plot(temperatures, extinctions,
pch = 20, cex = 0.5, ylim = c(0,50), frame = FALSE,
ylab = "Number of Extinctions", xlab = "Temperature (°C)")
lines(temperatures, extinctions, col = 'blue')
```
## Effect of predator-prey body mass ratio and temperature on species persistence
Predator-prey body mass ratio and environmental temperature have been shown to affect the persistence of species in local communities, e.g. @binzer2016interactive. Here, we use the _ATNr_ unscaled model (`create_model_Unscaled`) to replicate the results from @binzer2016interactive. In particular, we compute the fraction of species that persist for predator-prey body mass ratio values in $\left[ 10^{-1}, 10^4 \right]$ and temperature values in $\{0, 40\}$ °C.
First, we create a food web with 30 species and, within a for loop, initialise the model with a given value of the body mass ratio and temperature. Species persistence is calculated as the fraction of species that are not extinct at the end of the simulations.
```{r binzer example}
# set.seed(142)
# number of species
S <- 30
# vector containing the predator prey body mass ratios to test
scaling <- 10 ^ seq(-1, 4, by = .5)
# vectors to store the results
persistence0 <- c()
persistence40 <- c()
# create the studied food web
fw <- create_niche_model(S = S, C = 0.1)
# calculating trophic levels
TL = TroLev(fw)
biomasses <- runif(S, 2, 3)
# run a loop over the different pred-prey body mass ratios
for (scal in scaling) {
# update species body masses following the specific body mass ratio
masses <- 0.01 * scal ^ (TL - 1)
# create the models with parameters corresponding to 0 and 40 degrees Celsius
mod0 <- create_model_Unscaled(nb_s = S,
nb_b = sum(colSums(fw) == 0),
BM = masses,
fw = fw)
mod0 <- initialise_default_Unscaled(mod0, temperature = 0)
mod0$c <- rep(0, mod0$nb_s - mod0$nb_b)
mod0$alpha <- diag(mod0$nb_b)
mod40 <- create_model_Unscaled(nb_s = S,
nb_b = sum(colSums(fw) == 0),
BM = masses,
fw = fw)
mod40 <- initialise_default_Unscaled(mod40, temperature = 40)
mod40$c <- rep(0, mod40$nb_s - mod40$nb_b)
mod40$alpha <- diag(mod40$nb_b)
times <- seq(1, 1e9, by = 1e7)
# run the model corresponding to the 0 degree conditions
sol <- lsoda_wrapper(times, biomasses, mod0, verbose = FALSE)
persistence0 <- append(persistence0, sum(sol[nrow(sol), -1] > mod0$ext) / S)
# run the model corresponding to the 40 degrees conditions
sol <- lsoda_wrapper(times, biomasses, mod40, verbose = FALSE)
persistence40 <- append(persistence40, sum(sol[nrow(sol), -1] > mod40$ext) / S)
}
```
Similarly to @binzer2016interactive, species persistence increases with increasing values of the predator-prey body mass ratio, but the temperature effect differs depending on this ratio: when the predator-prey body mass ratio is low, high temperature leads to more persistence, while increasing the predator-prey body mass ratio tends to reduce the effects of temperature.
```{r binzer example plot, fig.width=6, fig.height=4, fig.align='center'}
plot(log10(scaling), persistence40,
xlab = expression("Body mass ratio between TL"[i + 1]* " and TL"[i]),
ylab = "Persistence",
ylim = c(0, 1),
frame = FALSE, axes = FALSE, type = 'l', col = "red")
lines(log10(scaling), persistence0, col = "blue")
axis(2, at = seq(0, 1, by = .1), labels = seq(0, 1, by = .1))
axis(1, at = seq(-1, 4, by = 1), labels = 10 ^ seq(-1, 4, by = 1))
legend(0.1, 0.9, legend = c("40 \u00B0C", "0 \u00B0C"), fill = c("red", "blue"))
```
## Paradox of enrichment
The paradox of enrichment states that increasing the carrying capacity of basal species may destabilize the population dynamics (@Rosenzweig). Here, we show how this can be studied with _ATNr_; we use the model from @delmas2017simulations, but similar results can be obtained using the other two models in the package.
First, we create a food web with 10 species and initialise the model:
```{r delmas 1}
set.seed(1234)
S <- 10
fw <- NULL
TL <- NULL
fw <- create_niche_model(S, C = .15)
TL <- TroLev(fw)
masses <- 0.01 * 100 ^ (TL - 1) #body mass of species
mod <- create_model_Scaled(nb_s = S,
nb_b = sum(colSums(fw) == 0),
BM = masses,
fw = fw)
mod <- initialise_default_Scaled(mod)
times <- seq(0, 300, by = 2)
biomasses <- runif(S, 2, 3) # starting biomasses
```
Then, we solve the system specifying the carrying capacity of basal species equal to one (`mod$K <- 1`) and then increase it to ten (`mod$K <- 10`):
```{r delmas 2}
mod$K <- 1
sol1 <- lsoda_wrapper(times, biomasses, mod, verbose = FALSE)
mod$K <- 10
sol10 <- lsoda_wrapper(times, biomasses, mod, verbose = FALSE)
```
As shown in the plot below, for _K = 1_ the system reaches a stable equilibrium, whereas when we increase the carrying capacity (_K = 10_) the system departs from this stable equilibrium and periodic oscillations appear.
```{r delmas 3, fig.width=6, fig.height=6, fig.align='center'}
par(mfrow = c(2, 1))
plot_odeweb(sol1, S)
title("Carrying capacity = 1")
plot_odeweb(sol10, S)
title("Carrying capacity = 10")
```
# Common mistakes and things not to do
## Not updating model parameters properly in the model object
The building blocks of this package are the C++ classes that solve the ODEs of the ATN models. Model parameters are stored in these classes and can be changed only by addressing them within the respective objects. For instance:
```{r mistake 1}
set.seed(1234)
nb_s <- 20
nb_b <- 5
nb_n <- 2
masses <- sort(10 ^ runif(nb_s, 2, 6)) #body mass of species
biomasses = runif(nb_s + nb_n, 2, 3)
L <- create_Lmatrix(masses, nb_b, Ropt = 50)
L[, 1:nb_b] <- 0
fw <- L
fw[fw > 0] <- 1
model_unscaled_nuts <- create_model_Unscaled_nuts(nb_s, nb_b, nb_n, masses, fw)
model_unscaled_nuts <- initialise_default_Unscaled_nuts(model_unscaled_nuts, L)
nb_s <- 30 #this does not change the model parameter
model_unscaled_nuts$nb_s #this is the model parameter
```
## Updating key parameters without creating a new model object
Changing parameters that are used in the `create_model` functions without creating a new model object is almost always a bad idea. Those parameters are important structural parameters, and changing one of them implies changes in most of the other variables contained in the model object. For instance, in the example above, changing the number of species in the model object will lead to inconsistencies among the different variables: the dimensions of the objects storing attack rates, body masses and so on won't match the updated number of species. Some basic checks are made before starting the integration in the `lsoda_wrapper` function, based on the `run_checks` procedure also available in the package.
```{r}
times <- seq(0, 15000, 150)
model_unscaled_nuts$nb_s = 40
# this will return an error :
# sol <- lsoda_wrapper(times, biomasses, model_unscaled_nuts)
```
However, some modifications can remain undetected. For instance, modifying only species' body masses won't raise any errors. However, a change in species body mass should be associated with a change in all the associated biological rates. The following code won't raise any errors, but will produce results relying on a model with an incoherent set of parameters, and therefore wrong results:
```{r mistake 2, fig.width=6}
set.seed(1234)
nb_s <- 20
nb_b <- 5
nb_n <- 2
masses <- sort(10 ^ runif(nb_s, 2, 6)) #body mass of species
biomasses = runif(nb_s + nb_n, 2, 3)
L <- create_Lmatrix(masses, nb_b, Ropt = 50)
L[, 1:nb_b] <- 0
fw <- L
fw[fw > 0] <- 1
model_unscaled_nuts <- create_model_Unscaled_nuts(nb_s, nb_b, nb_n, masses, fw)
model_unscaled_nuts <- initialise_default_Unscaled_nuts(model_unscaled_nuts, L)
model_unscaled_nuts$BM <- sqrt(model_unscaled_nuts$BM) # we change body masses within the model
sol <- lsoda_wrapper(seq(1, 5000, 50), biomasses, model_unscaled_nuts)
par(mar = c(4, 4, 1, 1))
plot_odeweb(sol, model_unscaled_nuts$nb_s)
```
In general, each time one of these key parameters is modified (`nb_s`, `nb_b`, `nb_n` for the Schneider model, `BM`, `fw`), it is strongly recommended to create a new model object with the updated parameters:
```{r}
nb_s <- 30
nb_n <- 2
masses <- sort(10 ^ runif(nb_s, 2, 6)) #body mass of species
biomasses <- runif(nb_s + nb_n, 2, 3)
L <- create_Lmatrix(masses, nb_b, Ropt = 50)
L[, 1:nb_b] <- 0
fw <- L
fw[fw > 0] <- 1
# create a new object:
model_unscaled_nuts <- create_model_Unscaled_nuts(nb_s, nb_b, nb_n, masses, fw)
model_unscaled_nuts <- initialise_default_Unscaled_nuts(model_unscaled_nuts, L)
# safely run the integration:
sol <- lsoda_wrapper(times, biomasses, model_unscaled_nuts)
```
Specific to the Schneider model, changing the L matrix requires updating the feeding rates (`$b`).
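A minimal sketch, reusing the objects created above and assuming the default initialisation recomputes the feeding-related parameters from the L matrix:
```{r, eval = FALSE}
# sketch: regenerate the L matrix and propagate it to the feeding-related parameters
L_new <- create_Lmatrix(masses, nb_b, Ropt = 50)
L_new[, 1:nb_b] <- 0
model_unscaled_nuts <- initialise_default_Unscaled_nuts(model_unscaled_nuts, L_new)
```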
## Changing the dimensions of vectors and matrix fields in a model object without doing it consistently
Changing the dimensions of a vector or matrix field in practice implies a change in the number of species (see the section above). For instance, decreasing the length of the assimilation efficiencies vector `$e` should imply a consistent update of all the other parameters (handling times, attack rates, body masses, etc., depending on the model). The function `remove_species` is designed to properly remove species from model objects without having to manually regenerate all parameters.
## Shallow copying models
Because the models are built on Rcpp, the model objects are only pointers to C++ objects. This means that the following script:
```{r mistake 4}
nb_s <- 30
nb_n <- 2
masses <- sort(10 ^ runif(nb_s, 2, 6)) #body mass of species
biomasses <- runif(nb_s + nb_n, 2, 3)
L <- create_Lmatrix(masses, nb_b, Ropt = 50)
L[, 1:nb_b] <- 0
fw <- L
fw[fw > 0] <- 1
# create a new object:
model_1 <- create_model_Unscaled_nuts(nb_s, nb_b, nb_n, masses, fw)
model_1 <- initialise_default_Unscaled_nuts(model_1, L)
# trying to create a new model that is similar to model_1
model_2 = model_1
```
will not create a new model object. Formally, it creates a new pointer to the same address, which means that `model_1` and `model_2` are in reality the same variable (a shallow copy). Therefore, modifying one modifies the other:
```{r}
model_1$q = 1.8
# this also updated the value in model_2:
model_2$q
```
Therefore, to create a new model object based on another one, it is important to formally create one (either with one of the `create_model_` functions, or using `new`). More information on the difference between deep and shallow copies can be found here: https://stackoverflow.com/questions/184710/what-is-the-difference-between-a-deep-copy-and-a-shallow-copy
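For instance, a minimal sketch of creating a genuinely independent copy with the same inputs:
```{r, eval = FALSE}
# build an independent model instead of copying the pointer
model_2 <- create_model_Unscaled_nuts(nb_s, nb_b, nb_n, masses, fw)
model_2 <- initialise_default_Unscaled_nuts(model_2, L)
model_2$q <- rep(1.2, nb_s - nb_b)
model_1$q  # model_1 is left untouched this time
```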
## Modifying a model object in a *apply function
Modifying an R variable inside a `*apply` function does not modify it outside of the function call:
```{r}
plus.3 = function(x, useless) {
y = x+3
useless = useless + 1
return(y)
}
useless = 4:10
useless2 = useless
x = sapply(1:5, plus.3, useless)
# the useless variable was not modified:
useless == useless2
```
However, this is no longer the case with a model object. Consider a model object:
```{r}
n_species <- 20
n_basal <- 5
n_cons = n_species - n_basal
n_nut <- 2
masses <- 10 ^ c(sort(runif(n_basal, 0, 3)),
sort(runif(n_species - n_basal, 2, 5)))
L <- create_Lmatrix(masses, n_basal, Ropt = 100, gamma = 2, th = 0.01)
fw <- L
fw[fw > 0] <- 1
model <- create_model_Unscaled_nuts(n_species, n_basal, n_nut, masses, fw)
model <- initialise_default_Unscaled_nuts(model, L, temperature = 20)
```
and a function that takes this model object as an argument and sets its `b` matrix to 0:
```{r}
# a function that sets all elements of model$b to 0
a.fun <- function(x, model){
model$b = model$b*0
return(x+1)
}
```
Then, we can see that the global model object is indeed modified when the function is called by a `*apply` function:
```{r, eval = FALSE}
x = c(1,2)
sum(model$b)
y = lapply(x, a.fun, model)
sum(model$b)
```
This behaviour is due to the fact that, in a `*apply` function, the model is shallow-copied, so each iteration in fact points to the same object in memory.
However, this behaviour does not occur when using a parallel version of an apply function, as in parallel computations the object is automatically deep-copied and passed to each task separately:
```{r, eval = FALSE}
library(parallel)
sum(model$b)
model <- initialise_default_Unscaled_nuts(model, L, temperature = 20)
y = mclapply(x, a.fun, model = model, mc.cores=5)
sum(model$b)
```
```{r restore par, include=FALSE, echo=FALSE}
par(oldpar)
```
| /scratch/gouwar.j/cran-all/cranData/ATNr/vignettes/ATNr.Rmd |
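# rotate(): tag a 'party'/'constparty' tree so that plot() dispatches to the
# left-rotated layout implemented below (only rotation to the left is supported).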
rotate <- function(m, to = "left", ...) {
stopifnot(to == "left")
if(inherits(m, c('constparty', 'party'))) {
class(m) <- c('left_rotated_tree', class(m))
m
}else{
warning("Could not typecast to ", sQuote("left_rotated_tree"),
" ! Returning unmodified object")
m
}
}
.plot_node.left_rotated_tree <- function(node, obj, xlim, ylim, nx, ny, terminal_panel, inner_panel, edge_panel,
tnex=2, drop_terminal=TRUE, ...)
{
### unpacking '...'
kwargs <- list(...)
if(!is.null(kwargs$debug)) {debug <- kwargs$debug} else {debug <- FALSE}
if(!is.null(kwargs$cex)) {cex <- kwargs$cex} else {cex <- 1}
if(!is.null(kwargs$remove.xaxis)) {remove.xaxis <- kwargs$remove.xaxis} else {remove.xaxis <- FALSE} # removes all x-axes except the last one
if(!is.null(kwargs$tree.offset)) {tree.offset <- kwargs$tree.offset}
else {tree.offset <- 0} #shifts the entire tree x lines up
if(!is.null(kwargs$remove.nobs)) {remove.nobs <- kwargs$remove.nobs} else {remove.nobs <- FALSE}
if(!is.null(kwargs$nobs.loc)) {nobs.loc <- kwargs$nobs.loc} else {nobs.loc <- 'right'} #nobs location
#options
### the workhorse for plotting trees
### set up viewport for terminal node
if (is.terminal(node)) {
y <- ylim[2]
x <- xlim[2] - 0.5*tnex # starting from right plot is centered at half of its width
if(remove.xaxis && ylim[1]<1) { #This if clause is buggy
tn_vp <- viewport(y = unit(y, "native"),
x = unit(x, "native"),
height = unit(1, "native")+unit(2*cex,"lines"),
width = unit(tnex, "native"),
just = c("center", "top"),# just = c("left", "center")
name = paste("Node", id_node(node), sep = ""),
gp=gpar(cex=cex))
pushViewport(tn_vp)
if (debug) grid.rect(gp = gpar(fill=FALSE, lty = "dotted", col = 4))
terminal_panel(node)
} else {
tn_vp <- viewport(y = unit(y, "native"),
x = unit(x, "native"),
height = unit(1, "native"),
width = unit(tnex, "native"),
just = c("center", "top"),# just = c("left", "center")
name = paste("Node", id_node(node), sep = ""),
gp=gpar(cex=cex))
pushViewport(tn_vp)
if (debug) grid.rect(gp = gpar(lty = "dotted", col = 4))
terminal_panel(node)
}
upViewport()
return(NULL)
}
## convenience function for computing relative position of splitting node
pos_frac <- function(node) {
if(is.terminal(node)) 0.5 else {
width_kids <- sapply(kids_node(node), width)
nk <- length(width_kids)
rval <- if(nk %% 2 == 0) sum(width_kids[1:(nk/2)]) else
mean(cumsum(width_kids)[nk/2 + c(-0.5, 0.5)])
rval/sum(width_kids)
}
}
## extract information
split <- split_node(node)
kids <- kids_node(node)
width_kids <- sapply(kids, width)
nk <- length(width_kids)
### position of inner node
y0 <- ylim[1] + pos_frac(node) * diff(ylim)
x0 <- min(xlim)
### relative positions of kids
yfrac <- sapply(kids, pos_frac)
y1lim <- ylim[1] + cumsum(c(0, width_kids))/sum(width_kids) * diff(ylim)
y1 <- y1lim[1:nk] + yfrac * diff(y1lim)
if (!drop_terminal) {
x1 <- rep(x0 + 1, nk)
} else {
x1 <- ifelse(sapply(kids, is.terminal), xlim[2] - tnex, x0 + 1)
}
### draw edges
grid.lines(x = unit(c(x0, x0+.5), "native"),
y = unit(c(y0, y0), "native") + unit(tree.offset, "native")) #horizontal
terminal.kid = sapply(kids, is.terminal) #boolean list containing whether child is terminal or not.
for(i in 1:nk) {
grid.lines(x = unit(c(x0+.5, x0+.5), "native"),
y = unit(c(y0, y1[i]), "native") + unit(tree.offset, "native")) #vertical
if(terminal.kid[i]) {
#if (debug) grid.points(x=unit(x1[i]-0.5, "native"),
# y=unit(y1[i]+0.5, "native") - unit(1, "lines"),
# pch=4)
grid.lines(x = unit(c(x0+.5, xlim[2]-tnex), "native")-unit(c(0,.4),"lines"),
y = unit(c(y1[i], y1[i]), "native") + unit(tree.offset, "native")) #horizontal
# grid.lines(x = unit(c(x0+.5, x1[i]-0.5), "native"),
# y = unit(c(y1[i]+.5, y1[i]+.5), "native")-unit(1, "lines")) #horizontal
} else {
grid.lines(x = unit(c(x0+.5, x1[i]), "native"),
y = unit(c(y1[i], y1[i]), "native") + unit(tree.offset, "native")) #horizontal
}
}
### position of labels
xpos <- x0 + 0.5
ypos <- y0 - (y0 - y1)/2
### setup labels
for(i in 1:nk) {
if(terminal.kid[i]) {
w = unit(xlim[2]-tnex-xpos, "native") +unit(0.6*cex, "lines")
} else {
w = unit(1, "native") - unit(.4*cex, "lines")
}
sp_vp <- viewport(x = unit(xpos, "native")-unit(1*cex, "lines"),
y = unit(ypos[i], "native")+unit(.25*(-1)^(i+1)*cex,"lines") + unit(tree.offset, "native"),
width = w,
height = unit(1*cex, "lines"), just= "left",
name = paste("edge", id_node(node), "-", i, sep = ""),
gp=gpar(cex=cex))
pushViewport(sp_vp)
if(debug) grid.rect(gp = gpar(fill=FALSE, lty = "dotted", col = 2))
edge_panel(node, i)
upViewport()
}
### node ids for all nodes
fill <- "white"
fill <- rep(fill, length.out = 2L)
for(i in 1:length(kids)) {
nodeID <- viewport(x = unit(x0 + .5, "native"), # - unit(.5, "lines")
y = unit(y1[i], "native") + unit(tree.offset, "native"),
width = max(unit(1*cex, "lines"), unit(1.3*cex, "strwidth", as.character(id_node(kids[[i]])))),
height = max(unit(1*cex, "lines"), unit(1.3*cex, "strheight", as.character(id_node(kids[[i]])))),
just="right", gp=gpar(cex=cex)
)
pushViewport(nodeID)
grid.rect(gp = gpar(fill = fill[2]), just = "center")
grid.text(as.character(id_node(kids[[i]])), just = "center")
popViewport()
}
### number of observations
if(!remove.nobs) {
for(i in 1:nk) {
#if(!terminal.kid[i]) {
## extract data
nid <- id_node(kids[[i]])
dat <- data_party(obj, nid)
yn <- dat[["(response)"]]
wn <- dat[["(weights)"]]
n.text <- sprintf("n = %s", sum(wn))
if(is.null(wn)) wn <- rep(1, NROW(yn))
if(nobs.loc == 'top') {
#possibility to display number of observation above/below node number
nobs_vp <- viewport(x = unit(x0 + .5, "native") - max(unit(.5*cex, "lines"),
unit(.65*cex, "strwidth", as.character(nid))),
y = unit(y1[i], "native") + unit(.5*(-1)^i*cex, "lines")
+ (-1)^i*max(unit(.5*cex, "lines"),
unit(.65*cex, "strheight", as.character(nid)))
+ unit(tree.offset, "native"),
width = unit(1*cex, "strwidth", n.text),
height = unit(1*cex, "lines"), just ="center",
name = paste("nobs", id_node(node), "-", i, sep = ""),
gp=gpar(cex=cex))
} else {
#if nobs.loc == 'right'
nobs_vp <- viewport(x = unit(x0 + .5, "native") + unit(.2*cex, "lines"),
y = unit(y1[i], "native") + unit(.5*(-1)^i*cex, "lines")
+ unit(tree.offset, "native"),
width = unit(1*cex, "strwidth", n.text),
height = unit(1*cex, "lines"), just ="left",
name = paste("nobs", id_node(node), "-", i, sep = ""),
gp=gpar(cex=cex))
}
pushViewport(nobs_vp)
if(debug) grid.rect(gp = gpar(fill=FALSE, lty = "dotted", col = 2))
grid.text(n.text)
upViewport()
#}
}
}
### create viewport for inner node
in_vp <- viewport(x = unit(x0, "native"),
y = unit(y0, "native") + unit(tree.offset, "native"),
height = unit(.5, "native"),
width = unit(.3, "native"),
name = paste("Node", id_node(node), sep = ""),
gp=gpar(cex=cex))
pushViewport(in_vp)
if(debug) grid.rect(gp = gpar(fill=FALSE, lty = "dotted"))
inner_panel(node, ...)
upViewport()
## call workhorse for kids
for(i in 1:nk)
.plot_node.left_rotated_tree(kids[[i]], obj,
c(x1[i], xlim[2]), c(y1lim[i], y1lim[i+1]), nx, ny,
# Note: this might be wrong! I'd expect something else than nx
terminal_panel, inner_panel, edge_panel,
tnex = tnex, drop_terminal = drop_terminal, ...)
}
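### plot method for 'left_rotated_tree' objects: sets up the root and tree viewports,
### the panel functions, and then calls the recursive workhorse .plot_node.left_rotated_tree().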
plot.left_rotated_tree <- function(x, main = NULL,
terminal_panel = node_ecdf, tp_args = list(),
inner_panel = node_inner.left_rotated_tree, ip_args = list(),
edge_panel = edge_simple.left_rotated_tree, ep_args = list(),
type = "extended", drop_terminal = NULL, tnex = NULL,
newpage = TRUE, pop = TRUE, gp = gpar(), ...)
{
# unpacking '...'
kwargs <- list(...)
if(!is.null(kwargs$debug)) {debug <- kwargs$debug} else {debug <- FALSE}
obj <- x
### compute default settings
type <- match.arg(type)
stopifnot(type == "extended")
if (is.null(tnex)) tnex <- 2
if (is.null(drop_terminal)) drop_terminal <- TRUE
### extract tree
node <- node_party(x)
### total number of terminal nodes
ny <- width(node)
### maximal depth of the tree
nx <- depth(node, root = TRUE)
## setup newpage
if (newpage) grid.newpage()
## setup root viewport
root_vp <- viewport(layout = grid.layout(3, 3,
heights = unit(c(ifelse(is.null(main), 0, 2), 1, 2),
c("lines", "null", "lines")),
widths = unit(c(1, 1, 1),
c("lines", "null", "lines"))),
name = "root",
gp = gp)
pushViewport(root_vp)
## viewport for main title (if any)
if (!is.null(main)) {
main_vp <- viewport(layout.pos.col = 2, layout.pos.row = 1,
name = "main")
pushViewport(main_vp)
if(debug) grid.rect(gp = gpar(fill=FALSE, lty = "dotted", col = 3))
grid.text(y=unit(1, "lines"), main, just = "center")
upViewport()
}
## setup viewport for tree
tree_vp <- viewport(layout.pos.col = 2, layout.pos.row = 2,
yscale = c(0, ny), xscale = c(0, nx + (tnex - 1)),
name = "tree")
pushViewport(tree_vp)
if(debug) grid.rect(gp = gpar(fill=FALSE, lty = "dotted", col = 3))
### setup panel functions (if necessary)
if(inherits(terminal_panel, "grapcon_generator"))
terminal_panel <- do.call("terminal_panel", c(list(x), as.list(tp_args)))
if(inherits(inner_panel, "grapcon_generator"))
inner_panel <- do.call("inner_panel", c(list(x), as.list(ip_args)))
if(inherits(edge_panel, "grapcon_generator"))
edge_panel <- do.call("edge_panel", c(list(x), as.list(ep_args)))
if((nx <= 1 & ny <= 1)) {
pushViewport(plotViewport(margins = rep(1.5, 4), name = paste("Node", id_node(node), sep = "")))
terminal_panel(node)
} else {
## call the workhorse
.plot_node.left_rotated_tree(node, obj,
ylim = c(0, ny), xlim = c(0.25, nx + (tnex - 1)),
nx = nx, ny = ny,
terminal_panel = terminal_panel,
inner_panel = inner_panel,
edge_panel = edge_panel,
tnex = tnex,
drop_terminal = drop_terminal, ...)
}
upViewport()
if (pop) popViewport() else upViewport()
}
### Plot function for inner node ###
# To change it, write a new function (change class to "grapcon_generator") and execute
# R> plot.left_rotated_tree(..., inner_panel = YOURFUNCTION)
node_inner.left_rotated_tree <- function(obj, id = TRUE, pval = TRUE, abbreviate = FALSE, fill = "white", gp = gpar())
{
meta <- obj$data
nam <- names(obj)
extract_label <- function(node) {
if(is.terminal(node)) return(rep.int("", 2L))
varlab <- character_split(split_node(node), meta)$name
if(abbreviate > 0L) varlab <- abbreviate(varlab, as.integer(abbreviate))
## FIXME: make more flexible rather than special-casing p-value
if(pval) {
pval <- suppressWarnings(try(!is.null(info_node(node)$p.value), silent = TRUE))
pval <- if(inherits(pval, "try-error")) FALSE else pval
}
if(pval) {
pvalue <- node$info$p.value
plab <- ifelse(pvalue < 10^(-3L),
paste("p <", 10^(-3L)),
paste("p =", round(pvalue, digits = 3L)))
} else {
plab <- ""
}
return(c(varlab, plab))
}
maxstr <- function(node) {
lab <- extract_label(node)
klab <- if(is.terminal(node)) "" else unlist(lapply(kids_node(node), maxstr))
lab <- c(lab, klab)
lab <- unlist(lapply(lab, function(x) strsplit(x, "\n")))
lab <- lab[which.max(nchar(lab))]
if(length(lab) < 1L) lab <- ""
return(lab)
}
nstr <- maxstr(node_party(obj))
if(nchar(nstr) < 6) nstr <- "aAAAAa"
### panel function for the inner nodes
rval <- function(node, ...) {
# extract ...
kwargs <- list(...)
if(!is.null(kwargs$remove.nobs)) {remove.nobs <- kwargs$remove.nobs} else {remove.nobs <- FALSE}
node_vp <- viewport(
x = unit(0.5, "npc"),
y = unit(0.5, "npc"),
width = unit(1, "strwidth", nstr)+unit(.2, "lines"),
height = unit(3, "lines"),
name = paste("node_inner.left_rotated_tree", id_node(node), sep = ""),
gp = gp
)
pushViewport(node_vp)
xell <- c(seq(0, 0.2, by = 0.01),
seq(0.2, 0.8, by = 0.05),
seq(0.8, 1, by = 0.01))
yell <- sqrt(xell * (1-xell))
lab <- extract_label(node)
#fill <- rep(fill, length.out = 2L)
# grid.polygon(x = unit(c(xell, rev(xell)), "npc"),#x = unit(0.1, "npc"), y = unit(0.1, "npc"),
# y = unit(c(yell, -yell)+0.5, "npc"),
# gp = gpar(fill = fill[1], col=fill[1]))
grid.rect(height=unit(0.15,"lines") ,gp = gpar(fill = fill, col=fill))
## FIXME: something more general instead of pval ?
grid.text(lab[1L], y = unit(1.5 + 0.5 * (lab[2L] != ""), "lines")) #That's the variable name
#grid.text('Test', y = unit(1, "lines")) #That's the variable name
if(lab[2L] != "") grid.text(lab[2L], y = unit(1, "lines")) #Printing p-value
if(id) {
if(id_node(node)==1) {
nodeIDvp <- viewport(x = unit(.5, "npc")+unit(.5, "strwidth", nstr)+unit(.1,"lines"),
y = unit(.5, "npc"),
width = max(unit(1, "lines"), unit(1.3, "strwidth", nam[id_node(node)])),
height = max(unit(1, "lines"), unit(1.3, "strheight", nam[id_node(node)])),
just="left")
pushViewport(nodeIDvp)
grid.rect(gp = gpar(fill = fill))
grid.text(nam[id_node(node)]) #print node number
popViewport()
# Print number of observations
if(!remove.nobs) {
dat <- data_party(obj, id_node(node))
yn <- dat[["(response)"]]
wn <- dat[["(weights)"]]
if(is.null(wn)) wn <- rep(1, NROW(yn))
nodeNum <- viewport(x = unit(.5, "npc"),
#- unit(.5, "strwidth", nstr) + unit(0.2, "lines")+ max(unit(1, "lines"), unit(1.3, "strwidth", nam[id_node(node)])),
y = unit(1, "npc"),
width = max(unit(.8, "lines"), unit(1.3, "strwidth", sprintf("n = %s", sum(wn)))),
height = max(unit(1, "lines"), unit(1.3, "strheight", sprintf("n = %s", sum(wn)))))
pushViewport(nodeNum)
# if (debug) {
# grid.rect(gp = gpar(fill = fill[2], lty="dotted", col="red"))
# } else {
grid.rect(gp = gpar(fill = fill[2], col="white"))#
# }
grid.text(sprintf("n = %s", sum(wn)), just = c("center", "center")) #print node number
popViewport()
}
}
}
upViewport()
}
return(rval)
}
class(node_inner.left_rotated_tree) <- "grapcon_generator"
# ### Plot function for terminal node ###
# # To change it, write a new function and execute
# # R> plot.left_rotated_tree(..., terminal_panel = YOURFUNCTION)
# node_ecdf.left_rotated_tree <- function(obj, col = "black", bg = "white", ylines = 2,
# id = TRUE, mainlab = NULL, gp = gpar())
# {
#
# ## extract response
# y <- obj$fitted[["(response)"]]
# stopifnot(inherits(y, "numeric") || inherits(y, "integer"))
#
# dostep <- function(f) {
# x <- knots(f)
# y <- f(x)
# ### create a step function based on x, y coordinates
# ### modified from `survival:print.survfit'
# if (is.na(x[1] + y[1])) {
# x <- x[-1]
# y <- y[-1]
# }
# n <- length(x)
# if (n > 2) {
# # replace verbose horizonal sequences like
# # (1, .2), (1.4, .2), (1.8, .2), (2.3, .2), (2.9, .2), (3, .1)
# # with (1, .2), (3, .1). They are slow, and can smear the looks
# # of the line type.
# dupy <- c(TRUE, diff(y[-n]) !=0, TRUE)
# n2 <- sum(dupy)
#
# #create a step function
# xrep <- rep(x[dupy], c(1, rep(2, n2-1)))
# yrep <- rep(y[dupy], c(rep(2, n2-1), 1))
# RET <- list(x = xrep, y = yrep)
# } else {
# if (n == 1) {
# RET <- list(x = x, y = y)
# } else {
# RET <- list(x = x[c(1,2,2)], y = y[c(1,1,2)])
# }
# }
# return(RET)
# }
#
# ### panel function for ecdf in nodes
# rval <- function(node, ...) {
#
# # extract ...
# #pass
# kwargs <- list(...)
# if(!is.null(kwargs$remove.xaxis)) {remove.xaxis <- kwargs$remove.xaxis} else {remove.xaxis <- FALSE}
# if(!is.null(kwargs$cex)) {cex <- kwargs$cex} else {cex <- 1}
#
# ## extract data
# nid <- id_node(node)
# dat <- data_party(obj, nid)
# yn <- dat[["(response)"]]
# wn <- dat[["(weights)"]]
# if(is.null(wn)) wn <- rep(1, NROW(yn))
#
# #defining helper function
# .pred_ecdf <- function(y, w) {
# if (length(y) == 0) return(NA)
# iw <- as.integer(round(w))
# if (max(abs(w - iw)) < sqrt(.Machine$double.eps)) {
# y <- rep(y, w)
# return(ecdf(y))
# } else {
# stop("cannot compute empirical distribution function with non-integer weights")
# }
# }
#
# ## get ecdf in node
# f <- .pred_ecdf(yn, wn)
# a <- dostep(f)
#
#
# ## set up plot
# yscale <- c(0, 1)
# xscale <- range(y)
# a$x <- c(xscale[1], a$x[1], a$x, xscale[2])
# a$x <- a$x - min(a$x)
# a$x <- a$x / max(a$x)
# a$y <- c(0, 0, a$y, 1)
#
# if(remove.xaxis) {
# top_vp <- viewport(layout = grid.layout(nrow = 2, ncol = 2,
# widths = unit(c(ylines, 1),
# c("lines", "null")),
# heights = unit(c(1.2, 1), c("lines", "null"))),
# width = unit(1, "npc"),
# height = unit(1, "npc"),
# name = paste("node_ecdf", nid, sep = ""), gp = gp)
# } else {
# top_vp <- viewport(y=unit(.5, "npc")+unit(1,"lines"),
# layout = grid.layout(nrow = 2, ncol = 2,
# widths = unit(c(ylines, 1),
# c("lines", "null")),
# heights = unit(c(1.2, 1), c("lines", "null"))),
# width = unit(1, "npc"),
# height = unit(1, "npc")-unit(2, "lines"),
# name = paste("node_ecdf", nid, sep = ""), gp = gp)
# }
#
# pushViewport(top_vp)
# grid.rect(gp = gpar(fill = bg, col = 0))
#
# ## number of observations
# top <- viewport(layout.pos.col=2, layout.pos.row=1)
# pushViewport(top)
#
# n.text <- sprintf("n = %s", sum(wn))
# grid.rect(x = unit(0, "npc"),
# y = unit(.5, "lines"),
# height = unit(1, "lines"),
# width = unit(1, "strwidth", n.text)+unit(.2, "lines"),
# gp = gpar(fill = "transparent", col = 1),
# just= "left")
# grid.text(n.text, x = unit(0, "npc")+unit(.1, "lines"),
# y = unit(.5, "lines"),
# just="left")
# popViewport()
#
# plot <- viewport(layout.pos.col=2, layout.pos.row=2,
# xscale=xscale, yscale=yscale,
# name = paste0("node_surv", nid, "plot"),
# clip = FALSE)
#
# pushViewport(plot)
# if(!remove.xaxis) {grid.xaxis()}
# grid.yaxis()
# grid.rect(gp = gpar(fill = "transparent"))
# grid.clip()
# grid.lines(a$x, a$y, gp = gpar(col = col))
#
# # Dashed line for mean
# x.mean <- a$x[a$y[-length(a$y)]<=0.5 & a$y[-1]>0.5]
# grid.lines(x=c(x.mean, x.mean), y=c(0,1), gp = gpar(col=2, lty="dashed"))
#
# upViewport(2)
# }
#
# return(rval)
# }
# class(node_ecdf.left_rotated_tree) <- "grapcon_generator"
### Plot function for the edge panel ###
edge_simple.left_rotated_tree <- function(obj, digits = 3, abbreviate = FALSE,
justmin = Inf, just = c("alternate", "increasing", "decreasing", "equal"),
fill = "white")
{
meta <- obj$data
justfun <- function(i, split) {
myjust <- if(mean(nchar(split)) > justmin) {
match.arg(just, c("alternate", "increasing", "decreasing", "equal"))
} else {
"equal"
}
k <- length(split)
rval <- switch(myjust,
"equal" = rep.int(0, k),
"alternate" = rep(c(0.5, -0.5), length.out = k),
"increasing" = seq(from = -k/2, to = k/2, by = 1),
"decreasing" = seq(from = k/2, to = -k/2, by = -1)
)
unit(0.5, "npc") + unit(rval[i], "lines")
}
### panel function for simple edge labelling
function(node, i, debug=FALSE) {
split <- character_split(split_node(node), meta, digits = digits)$levels
y <- justfun(i, split)
split <- split[i]
# try() because the following won't work for split = "< 10 Euro", for example.
if(any(grep(">", split) > 0) | any(grep("<", split) > 0)) {
tr <- suppressWarnings(try(parse(text = paste("phantom(0)", split)), silent = TRUE))
if(!inherits(tr, "try-error")) split <- tr
}
grid.rect(x=0, y = y, gp = gpar(fill = fill, col = fill), width = unit(1, "strwidth", split),just="left")
grid.text(split, x=0, y = y, just = "left")
}
}
class(edge_simple.left_rotated_tree) <- "grapcon_generator"
| /scratch/gouwar.j/cran-all/cranData/ATR/R/rotate_plot.R |
#' @title Bounding the average treatment effect (ATE)
#'
#' @description Bounds the average treatment effect (ATE) under the unconfoundedness assumption without the overlap condition.
#' @param Y n-dimensional vector of binary outcomes
#' @param D n-dimensional vector of binary treatments
#' @param X n by p matrix of covariates
#' @param rps n-dimensional vector of the reference propensity score
#' @param Q bandwidth parameter that determines the maximum number of observations for pooling information (default: Q = 3)
#' @param studentize TRUE if the columns of X are studentized and FALSE if not (default: TRUE)
#' @param alpha (1-alpha) nominal coverage probability for the confidence interval of ATE (default: 0.05)
#' @param x_discrete TRUE if the distribution of X is discrete and FALSE otherwise (default: FALSE)
#' @param n_hc number of hierarchical clusters to discretize non-discrete covariates; relevant only if x_discrete is FALSE.
#' The default choice is n_hc = ceiling(length(Y)/10), so that there are 10 observations in each cluster on average.
#'
#' @return An S3 object of type "ATbounds". The object has the following elements.
#' \item{call}{a call in which all of the specified arguments are specified by their full names}
#' \item{type}{ATE}
#' \item{cov_prob}{Confidence level: 1-alpha}
#' \item{y1_lb}{estimate of the lower bound on the average of Y(1), i.e. E[Y(1)]}
#' \item{y1_ub}{estimate of the upper bound on the average of Y(1), i.e. E[Y(1)]}
#' \item{y0_lb}{estimate of the lower bound on the average of Y(0), i.e. E[Y(0)]}
#' \item{y0_ub}{estimate of the upper bound on the average of Y(0), i.e. E[Y(0)]}
#' \item{est_lb}{estimate of the lower bound on ATE, i.e. E[Y(1) - Y(0)]}
#' \item{est_ub}{estimate of the upper bound on ATE, i.e. E[Y(1) - Y(0)]}
#' \item{est_rps}{the point estimate of ATE using the reference propensity score}
#' \item{se_lb}{standard error for the estimate of the lower bound on ATE}
#' \item{se_ub}{standard error for the estimate of the upper bound on ATE}
#' \item{ci_lb}{the lower end point of the confidence interval for ATE}
#' \item{ci_ub}{the upper end point of the confidence interval for ATE}
#'
#' @examples
#' Y <- RHC[,"survival"]
#' D <- RHC[,"RHC"]
#' X <- RHC[,c("age","edu")]
#' rps <- rep(mean(D),length(D))
#' results_ate <- atebounds(Y, D, X, rps, Q = 3)
#'
#' @references Sokbae Lee and Martin Weidner. Bounding Treatment Effects by Pooling Limited Information across Observations.
#'
#' @export
atebounds <- function(Y, D, X, rps, Q = 3L, studentize = TRUE, alpha = 0.05, x_discrete = FALSE, n_hc = NULL){
call <- match.call()
X <- as.matrix(X)
if (studentize == TRUE){
X <- scale(X) # centers and scales the columns of X
}
n <- nrow(X)
ymin <- min(Y)
ymax <- max(Y)
if (is.null(n_hc) == TRUE){
n_hc = ceiling(n/10) # number of clusters
}
### ATE estimation using reference propensity scores ###
y1_rps <- mean(D*Y/rps)
y0_rps <- mean((1-D)*Y/(1-rps))
ate_rps <- y1_rps - y0_rps
if (x_discrete == FALSE){ # Computing weights with non-discrete covariates
hc <- stats::hclust(stats::dist(X), method = "complete") # hierarchical cluster
ind_Xunique <- stats::cutree(hc, k = n_hc) # An index vector that has the same dimension as that of X
mx <- n_hc
} else if (x_discrete == TRUE){ # Computing weights with discrete covariates
Xunique <- mgcv::uniquecombs(X) # A matrix of unique rows from X
ind_Xunique <- attr(Xunique,"index") # An index vector that has the same dimension as that of X
mx <- nrow(Xunique) # number of unique rows
} else {
stop("x_discrete must be either TRUE or FALSE.")
}
res <- matrix(NA,nrow=mx,ncol=4)
for (i in 1:mx){ # this loop may not be fast if mx is very large
disc_ind <- (ind_Xunique == i)
nx <- sum(disc_ind) # number of obs. such that X_i = x for each row of x
nx1 <- sum(D*disc_ind) # number of obs. such that X_i = x and D_i = 1 for each row of x
nx0 <- nx - nx1 # number of obs. such that X_i = x and D_i = 0 for each row of x
nx1 <- nx1 + (nx1 == 0) # replace nx1 with 1 when it is zero to avoid NaN
nx0 <- nx0 + (nx0 == 0) # replace nx0 with 1 when it is zero to avoid NaN
y1bar <- (sum(D*Y*disc_ind)/nx1) # Dividing by zero never occurs because the numerator is zero whenever nx1 is zero
y0bar <- (sum((1-D)*Y*disc_ind)/nx0) # Dividing by zero never occurs because the numerator is zero whenever nx0 is zero
# Computing weights
if (Q >= 1){
qq <- min(Q,nx)
k_upper <- 2*floor(qq/2)
v_x1 <- 1
v_x0 <- 1
rps_x <- rps[disc_ind]
if (x_discrete == FALSE){
rps_x <- mean(rps_x) # if X is non-discrete, take the average
} else if (x_discrete == TRUE){
rps_x <- unique(rps_x)
if (length(rps_x) > 1){
stop("The reference propensity score should be unique for the same value of X if X is discrete.")
}
}
for (k in 0:k_upper){
px1k <- ((rps_x-1)/rps_x)^k
px0k <- (rps_x/(rps_x-1))^k
if ((qq %% 2) == 1){ # if min(q,nx) is odd
term_x1 <- ((nx - nx1)/nx)*(1/choose(nx-1,qq-1))*choose(nx1,k)*choose(nx-1-nx1,qq-1-k)
term_x0 <- ((nx - nx0)/nx)*(1/choose(nx-1,qq-1))*choose(nx0,k)*choose(nx-1-nx0,qq-1-k)
} else if ((qq %% 2) == 0){ # if min(q,nx) is even
term_x1 <- (1/choose(nx,qq))*choose(nx1,k)*choose(nx-nx1,qq-k)
term_x0 <- (1/choose(nx,qq))*choose(nx0,k)*choose(nx-nx0,qq-k)
} else{
stop("'min(q,nx)' must be a positive integer.")
}
omega1 <- px1k*term_x1
omega0 <- px0k*term_x0
v_x1 <- v_x1 - omega1
v_x0 <- v_x0 - omega0
}
} else{
stop("'Q' must be a positive integer.")
}
res[i,1] <- ((mx*nx)/n)*(ymin + v_x1*(y1bar - ymin))
res[i,2] <- ((mx*nx)/n)*(ymin + v_x0*(y0bar - ymin))
res[i,3] <- ((mx*nx)/n)*(ymax + v_x1*(y1bar - ymax))
res[i,4] <- ((mx*nx)/n)*(ymax + v_x0*(y0bar - ymax))
}
### Obtain bound estimates ###
est <- apply(res,2,mean)
y1_lb <- est[1]
y0_lb <- est[2]
y1_ub <- est[3]
y0_ub <- est[4]
Lx <- res[,1]-res[,4]
Ux <- res[,3]-res[,2]
ate_lb <- mean(Lx)
ate_ub <- mean(Ux)
se_lb <- stats::sd(Lx)/sqrt(mx)
se_ub <- stats::sd(Ux)/sqrt(mx)
# Stoye (2020) construction
two_sided <- 1-alpha/2
cv_norm <- stats::qnorm(two_sided)
ci1_lb <- ate_lb - cv_norm*se_lb
ci1_ub <- ate_ub + cv_norm*se_ub
ate_star <- (se_ub*ate_lb + se_lb*ate_ub)/(se_lb + se_ub)
se_star <- 2*(se_lb*se_ub)/(se_lb + se_ub) # This corresponds to rho=1 in Stoye (2020)
ci2_lb <- ate_star - cv_norm*se_star
ci2_ub <- ate_star + cv_norm*se_star
if (ci1_lb <= ci1_ub){ # if the first confidence interval is non-empty
ci_lb <- min(ci1_lb,ci2_lb)
ci_ub <- max(ci1_ub,ci2_ub)
} else {
ci_lb <- ci2_lb
ci_ub <- ci2_ub
}
outputs = list(call = call, type = "ATE", cov_prob = (1-alpha),
"y1_lb"=y1_lb,"y1_ub"=y1_ub,"y0_lb"=y0_lb,"y0_ub"=y0_ub,
"est_lb"=ate_lb,"est_ub"=ate_ub,"est_rps"=ate_rps,
"se_lb"=se_lb,"se_ub"=se_ub,"ci_lb"=ci_lb,"ci_ub"=ci_ub)
class(outputs) = 'ATbounds'
outputs
}
| /scratch/gouwar.j/cran-all/cranData/ATbounds/R/atebounds.R |
#' @title Bounding the average treatment effect on the treated (ATT)
#'
#' @description Bounds the average treatment effect on the treated (ATT) under the unconfoundedness assumption without the overlap condition.
#'
#' @param Y n-dimensional vector of binary outcomes
#' @param D n-dimensional vector of binary treatments
#' @param X n by p matrix of covariates
#' @param rps n-dimensional vector of the reference propensity score
#' @param Q bandwidth parameter that determines the maximum number of observations for pooling information (default: Q = 3)
#' @param studentize TRUE if X is studentized elementwise and FALSE if not (default: TRUE)
#' @param alpha (1-alpha) nominal coverage probability for the confidence interval of ATE (default: 0.05)
#' @param x_discrete TRUE if the distribution of X is discrete and FALSE otherwise (default: FALSE)
#' @param n_hc number of hierarchical clusters to discretize non-discrete covariates; relevant only if x_discrete is FALSE.
#' The default choice is n_hc = ceiling(length(Y)/10), so that there are 10 observations in each cluster on average.
#'
#' @return An S3 object of type "ATbounds". The object has the following elements.
#' \item{call}{a call in which all of the specified arguments are specified by their full names}
#' \item{type}{ATT}
#' \item{cov_prob}{Confidence level: 1-alpha}
#' \item{est_lb}{estimate of the lower bound on ATT, i.e. E[Y(1) - Y(0) | D = 1]}
#' \item{est_ub}{estimate of the upper bound on ATT, i.e. E[Y(1) - Y(0) | D = 1]}
#' \item{est_rps}{the point estimate of ATT using the reference propensity score}
#' \item{se_lb}{standard error for the estimate of the lower bound on ATT}
#' \item{se_ub}{standard error for the estimate of the upper bound on ATT}
#' \item{ci_lb}{the lower end point of the confidence interval for ATT}
#' \item{ci_ub}{the upper end point of the confidence interval for ATT}
#'
#' @examples
#' Y <- RHC[,"survival"]
#' D <- RHC[,"RHC"]
#' X <- RHC[,c("age","edu")]
#' rps <- rep(mean(D),length(D))
#' results_att <- attbounds(Y, D, X, rps, Q = 3)
#'
#' @references Sokbae Lee and Martin Weidner. Bounding Treatment Effects by Pooling Limited Information across Observations.
#'
#' @export
attbounds <- function(Y, D, X, rps, Q = 3L, studentize = TRUE, alpha = 0.05, x_discrete = FALSE, n_hc = NULL){
call <- match.call()
X <- as.matrix(X)
if (studentize == TRUE){
X <- scale(X) # centers and scales the columns of X
}
n <- nrow(X)
ymin <- min(Y)
ymax <- max(Y)
if (is.null(n_hc) == TRUE){
n_hc = ceiling(n/10) # number of clusters
}
### ATT estimation using reference propensity scores ###
rps_wt <- rps/(1-rps)
att_rps <- sum(D*Y-rps_wt*(1-D)*Y)/sum(D)
if (x_discrete == FALSE){ # Computing weights with non-discrete covariates
hc <- stats::hclust(stats::dist(X), method = "complete") # hierarchical cluster
ind_Xunique <- stats::cutree(hc, k = n_hc) # An index vector that has the same dimension as that of X
mx <- n_hc
} else if (x_discrete == TRUE){ # Computing weights with discrete covariates
Xunique <- mgcv::uniquecombs(X) # A matrix of unique rows from X
ind_Xunique <- attr(Xunique,"index") # An index vector that has the same dimension as that of X
mx <- nrow(Xunique) # number of unique rows
} else {
stop("x_discrete must be either TRUE or FALSE.")
}
res <- matrix(NA,nrow=mx,ncol=2)
for (i in 1:mx){ # this loop may not be fast if mx is very large
disc_ind <- (ind_Xunique == i)
nx <- sum(disc_ind) # number of obs. such that X_i = x for each row of x
nx1 <- sum(D*disc_ind) # number of obs. such that X_i = x and D_i = 1 for each row of x
nx0 <- nx - nx1 # number of obs. such that X_i = x and D_i = 0 for each row of x
nx1 <- nx1 + (nx1 == 0) # replace nx1 with 1 when it is zero to avoid NaN
nx0 <- nx0 + (nx0 == 0) # replace nx0 with 1 when it is zero to avoid NaN
y1bar <- (sum(D*Y*disc_ind)/nx1) # Dividing by zero never occurs because the numerator is zero whenever nx1 is zero
y0bar <- (sum((1-D)*Y*disc_ind)/nx0) # Dividing by zero never occurs because the numerator is zero whenever nx0 is zero
# Computing weights
if (Q >= 1){
qq <- min(Q,nx)
k_upper <- 2*floor(qq/2)
v_x0 <- nx1/nx
rps_x <- rps[disc_ind]
if (x_discrete == FALSE){
rps_x <- mean(rps_x) # if X is non-discrete, take the average
} else if (x_discrete == TRUE){
rps_x <- unique(rps_x)
if (length(rps_x) > 1){
stop("The reference propensity score should be unique for the same value of X if X is discrete.")
}
}
for (k in 0:k_upper){
px0k <- (rps_x/(rps_x-1))^k
if ((qq %% 2) == 1){ # if min(q,nx) is odd
term_x0 <- ((nx - nx0)/nx)*(1/choose(nx-1,qq-1))*choose(nx0,k)*choose(nx-1-nx0,qq-1-k)
} else if ((qq %% 2) == 0){ # if min(q,nx) is even
term_x0 <- (1/choose(nx,qq))*choose(nx0,k)*choose(nx-nx0,qq-k)
} else{
stop("'min(q,nx)' must be a positive integer.")
}
omega0 <- px0k*term_x0
v_x0 <- v_x0 - omega0
}
} else{
stop("'Q' must be a positive integer.")
}
res[i,1] <- ((mx*nx)/n)*((nx1/nx)*(y1bar - ymax) - v_x0*(y0bar - ymax))
res[i,2] <- ((mx*nx)/n)*((nx1/nx)*(y1bar - ymin) - v_x0*(y0bar - ymin))
}
### Obtain bound estimates ###
meanD <- mean(D)
est <- apply(res,2,mean)
att_lb <- est[1]/meanD
att_ub <- est[2]/meanD
# Standard errors
ift <- res/meanD - ((mx*nx)/n)*est/(meanD^2)
se <- apply(ift,2,stats::sd)
se_lb <- se[1]/sqrt(mx)
se_ub <- se[2]/sqrt(mx)
# Stoye (2020) construction
two_sided <- 1-alpha/2
cv_norm <- stats::qnorm(two_sided)
ci1_lb <- att_lb - cv_norm*se_lb
ci1_ub <- att_ub + cv_norm*se_ub
att_star <- (se_ub*att_lb + se_lb*att_ub)/(se_lb + se_ub)
se_star <- 2*(se_lb*se_ub)/(se_lb + se_ub) # This corresponds to rho=1 in Stoye (2020)
ci2_lb <- att_star - cv_norm*se_star
ci2_ub <- att_star + cv_norm*se_star
if (ci1_lb <= ci1_ub){ # if the first confidence interval is non-empty
ci_lb <- min(ci1_lb,ci2_lb)
ci_ub <- max(ci1_ub,ci2_ub)
} else {
ci_lb <- ci2_lb
ci_ub <- ci2_ub
}
outputs = list(call = call, type = "ATT", cov_prob = (1-alpha),
"est_lb"=att_lb,"est_ub"=att_ub,"es_rps"=att_rps,
"se_lb"=se_lb,"se_ub"=se_ub,"ci_lb"=ci_lb,"ci_ub"=ci_ub)
class(outputs) = 'ATbounds'
outputs
}
#' EFM
#'
#' The electronic fetal monitoring (EFM) and cesarean section (CS) dataset
#' from Neutra, Greenland, and Friedman (1980) consists of observations on 14,484 women
#' who delivered at Beth Israel Hospital, Boston from January 1970 to December 1975.
#' The purpose of the study is to evaluate the impact of EFM on cesarean section (CS) rates.
#' It is found by Neutra, Greenland, and Friedman (1980) that relevant confounding factors are:
#' nulliparity (nullipar), arrest of labor progression (arrest), malpresentation (breech), and year of study (year).
#' The dataset provided in the R package is from the supplementary materials of Richardson, Robins, and Wang (2017),
#' who used this dataset to illustrate their proposed methods
#' for modeling and estimating relative risk and risk difference.
#'
#' @references Neutra, R.R., Greenland, S. and Friedman, E.A., 1980.
#' Effect of fetal monitoring on cesarean section rates.
#' Obstetrics and gynecology, 55(2), pp.175-180.
#'
#' @references Richardson, T.S., Robins, J.M. and Wang, L., 2017.
#' On modeling and estimation for the relative risk and risk difference.
#' Journal of the American Statistical Association, 112(519), pp.1121-1130.
#'
#' @format A data frame with 14484 rows and 6 variables:
#' \describe{
#' \item{cesarean}{Outcome: 1 if delivery was via cesarean section; 0 otherwise}
#' \item{monitor}{Treatment: 1 if electronic fetal monitoring (EFM) was used; 0 otherwise}
#' \item{arrest}{Covariate: 1 = arrest of labor progression; 0 otherwise}
#' \item{breech}{Covariate: 1 = malpresentation (breech); 0 otherwise}
#' \item{nullipar}{Covariate: 1 = nulliparity; 0 otherwise}
#' \item{year}{Year of study: 0,...,5 (actual values are 1970,...,1975)}
#' }
#'
#' @source The dataset from Neutra, Greenland, and Friedman (1980) is available
#' as part of supplementary materials of Richardson, Robins, and Wang (2017)
#' on Journal of the American Statistical Association website at
#' \doi{10.1080/01621459.2016.1192546}.
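#'
#' @examples
#' # Illustrative: a quick look at the EFM data
#' head(EFM)
#' table(EFM$monitor, EFM$cesarean)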
"EFM"
#' RHC
#'
#' The right heart catheterization (RHC) dataset is publicly available on the Vanderbilt Biostatistics website.
#' RHC is a diagnostic procedure for directly measuring cardiac function in critically ill patients.
#' The dependent variable is 1 if a patient survived after 30 days of admission, 0 if a patient died within 30 days.
#' The treatment variable is 1 if RHC was applied within 24 hours of admission, and 0 otherwise.
#' The sample size was n = 5735, and 2184 patients were treated with RHC.
#' Connors et al. (1996) used a propensity score matching approach to study the efficacy of RHC,
#' using data from the observational study called SUPPORT (Murphy and Cluff, 1990).
#' Many authors used this dataset subsequently.
#' The 72 covariates are constructed, following Hirano and Imbens (2001).
#'
#' @references Connors, A.F., Speroff, T., Dawson, N.V., Thomas, C., Harrell, F.E., Wagner, D., Desbiens, N., Goldman, L., Wu, A.W., Califf, R.M. and Fulkerson, W.J., 1996.
#' The effectiveness of right heart catheterization in the initial care of critically III patients. JAMA, 276(11), pp.889-897.
#' \doi{10.1001/jama.1996.03540110043030}
#'
#' @references Hirano, K., Imbens, G.W. Estimation of Causal Effects using Propensity Score Weighting: An Application to Data on Right Heart Catheterization, 2001.
#' Health Services & Outcomes Research Methodology 2, pp.259–278.
#' \doi{10.1023/A:1020371312283}
#'
#' @references D. J. Murphy, L. E. Cluff, SUPPORT: Study to understand prognoses and preferences for outcomes and risks of treatments—study design, 1990.
#' Journal of Clinical Epidemiology, 43, pp. 1S–123S.
#' \url{https://www.jclinepi.com/issue/S0895-4356(00)X0189-8}
#'
#' @format A data frame with 5735 rows and 74 variables:
#' \describe{
#' \item{survival}{Outcome: 1 if a patient survived after 30 days of admission, and 0 if a patient died within 30 days}
#' \item{RHC}{Treatment: 1 if RHC was applied within 24 hours of admission, and 0 otherwise.}
#' \item{age}{Age in years}
#' \item{edu}{Years of education}
#' \item{cardiohx}{Cardiovascular symptoms}
#' \item{chfhx}{Congestive Heart Failure}
#' \item{dementhx}{Dementia, stroke or cerebral infarct, Parkinson’s disease}
#' \item{psychhx}{Psychiatric history, active psychosis or severe depression}
#' \item{chrpulhx}{Chronic pulmonary disease, severe pulmonary disease}
#' \item{renalhx}{Chronic renal disease, chronic hemodialysis or peritoneal dialysis}
#' \item{liverhx}{Cirrhosis, hepatic failure}
#' \item{gibledhx}{Upper GI bleeding}
#' \item{malighx}{Solid tumor, metastatic disease, chronic leukemia/myeloma, acute leukemia, lymphoma}
#' \item{immunhx}{Immunosuppression, organ transplant, HIV, Diabetes Mellitus, Connective Tissue Disease}
#' \item{transhx}{transfer (> 24 hours) from another hospital}
#' \item{amihx}{Definite myocardial infarction}
#' \item{das2d3pc}{DASI - Duke Activity Status Index}
#' \item{surv2md1}{Estimate of prob. of surviving 2 months}
#' \item{aps1}{APACHE score}
#' \item{scoma1}{Glasgow coma score}
#' \item{wtkilo1}{Weight}
#' \item{temp1}{Temperature}
#' \item{meanbp1}{Mean Blood Pressure}
#' \item{resp1}{Respiratory Rate}
#' \item{hrt1}{Heart Rate}
#' \item{pafi1}{PaO2/FI02 ratio}
#' \item{paco21}{PaCO2}
#' \item{ph1}{PH}
#' \item{wblc1}{WBC}
#' \item{hema1}{Hematocrit}
#' \item{sod1}{Sodium}
#' \item{pot1}{Potassium}
#' \item{crea1}{Creatinine}
#' \item{bili1}{Bilirubin}
#' \item{alb1}{Albumin}
#' \item{cat1_CHF}{1 if the primary disease category is CHF, and 0 otherwise (Omitted category = ARF).}
#' \item{cat1_Cirrhosis}{1 if the primary disease category is Cirrhosis, and 0 otherwise (Omitted category = ARF).}
#' \item{cat1_Colon_Cancer}{1 if the primary disease category is Colon Cancer, and 0 otherwise (Omitted category = ARF).}
#' \item{cat1_Coma}{1 if the primary disease category is Coma, and 0 otherwise (Omitted category = ARF).}
#' \item{cat1_COPD}{1 if the primary disease category is COPD, and 0 otherwise (Omitted category = ARF).}
#' \item{cat1_Lung_Cancer}{1 if the primary disease category is Lung Cancer, and 0 otherwise (Omitted category = ARF).}
#' \item{cat1_MOSF_Malignancy}{1 if the primary disease category is MOSF w/Malignancy, and 0 otherwise (Omitted category = ARF).}
#' \item{cat1_MOSF_Sepsis}{1 if the primary disease category is MOSF w/Sepsis, and 0 otherwise (Omitted category = ARF).}
#' \item{ca_Metastatic}{1 if cancer is metastatic, and 0 otherwise (Omitted category = no cancer).}
#' \item{ca_Yes}{1 if cancer is localized, and 0 otherwise (Omitted category = no cancer).}
#' \item{ninsclas_Medicaid}{1 if medical insurance category is Medicaid, and 0 otherwise (Omitted category = Private).}
#' \item{ninsclas_Medicare}{1 if medical insurance category is Medicare, and 0 otherwise (Omitted category = Private).}
#' \item{ninsclas_Medicare_and_Medicaid}{1 if medical insurance category is Medicare & Medicaid, and 0 otherwise (Omitted category = Private).}
#' \item{ninsclas_No_insurance}{1 if medical insurance category is No Insurance, and 0 otherwise (Omitted category = Private).}
#' \item{ninsclas_Private_and_Medicare}{1 if medical insurance category is Private & Medicare, and 0 otherwise (Omitted category = Private).}
#' \item{race_black}{1 if Black, and 0 otherwise (Omitted category = White).}
#' \item{race_other}{1 if Other, and 0 otherwise (Omitted category = White).}
#' \item{income3}{1 if Income >$50k, and 0 otherwise (Omitted category = under $11k).}
#' \item{income1}{1 if Income $11–$25k, and 0 otherwise (Omitted category = under $11k).}
#' \item{income2}{1 if Income $25–$50k, and 0 otherwise (Omitted category = under $11k).}
#' \item{resp_Yes}{Respiratory diagnosis}
#' \item{card_Yes}{Cardiovascular diagnosis}
#' \item{neuro_Yes}{Neurological diagnosis}
#' \item{gastr_Yes}{Gastrointestinal diagnosis}
#' \item{renal_Yes}{Renal diagnosis}
#' \item{meta_Yes}{Metabolic diagnosis}
#' \item{hema_Yes}{Hematological diagnosis}
#' \item{seps_Yes}{Sepsis diagnosis}
#' \item{trauma_Yes}{Trauma diagnosis}
#' \item{ortho_Yes}{Orthopedic diagnosis}
#' \item{dnr1_Yes}{Do Not Resuscitate status on day 1}
#' \item{sex_Female}{Female}
#' \item{cat2_Cirrhosis}{1 if the secondary disease category is Cirrhosis, and 0 otherwise (Omitted category = NA).}
#' \item{cat2_Colon_Cancer}{1 if secondary disease category is Colon Cancer, and 0 otherwise (Omitted category = NA).}
#' \item{cat2_Coma}{1 if the secondary disease category is Coma, and 0 otherwise (Omitted category = NA).}
#' \item{cat2_Lung_Cancer}{1 if the secondary disease category is Lung Cancer, and 0 otherwise (Omitted category = NA).}
#' \item{cat2_MOSF_Malignancy}{1 if the secondary disease category is MOSF w/Malignancy, and 0 otherwise (Omitted category = NA).}
#' \item{cat2_MOSF_Sepsis}{1 if the secondary disease category is MOSF w/Sepsis, and 0 otherwise (Omitted category = NA).}
#' \item{wt0}{weight = 0 (missing)}
#' }
#'
#' @source The dataset is publicly available on the Vanderbilt Biostatistics website at
#' \url{https://hbiostat.org/data/}.
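#'
#' @examples
#' # Illustrative: a quick look at the RHC data
#' dim(RHC)
#' head(RHC[, c("survival", "RHC", "age", "edu")])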
"RHC"
#' @title Simulating observations from the data-generating process considered in Lee and Weidner (2021)
#'
#' @description Simulates observations from the data-generating process considered in Lee and Weidner (2021)
#'
#' @param n sample size
#' @param ps_spec specification of the propensity score: "overlap" or "non-overlap" (default: "overlap")
#' @param x_discrete TRUE if the distribution of the covariate is uniform on {-3.0, -2.9, ..., 3.0} and
#' FALSE if the distribution of the covariate is uniform on [-3,3] (default: FALSE)
#'
#' @return An S3 object of type "ATbounds". The object has the following elements.
#' \item{outcome}{n observations of binary outcomes}
#' \item{treat}{n observations of binary treatments}
#' \item{covariate}{n observations of a scalar covariate}
#' \item{ate_oracle}{the sample analog of E[Y(1) - Y(0)]}
#' \item{att_oracle}{the sample analog of E[D{Y(1) - Y(0)}|D=1]}
#'
#' @examples
#' data <- simulation_dgp(100, ps_spec = "overlap")
#' y <- data$outcome
#' d <- data$treat
#' x <- data$covariate
#' ate <- data$ate_oracle
#' att <- data$att_oracle
#'
#' @references Sokbae Lee and Martin Weidner. Bounding Treatment Effects by Pooling Limited Information across Observations.
#'
#' @export
simulation_dgp <- function(n, ps_spec = "overlap", x_discrete = FALSE){
x <- stats::runif(n, min = -3, max = 3)
if (x_discrete == TRUE){
x <- round(x*10)/10 # discrete X in {-3.0, -2.9, ..., 3.0}
}
if (ps_spec == "overlap"){
px <- 0.5
} else if (ps_spec == "non-overlap"){
px <- 0.25*(x >= 2) + 0.5*(abs(x) < 2)
}
treat <- as.integer(px < stats::runif(n)) # treat = 1 always if x <= -2 for the non-overlap case
ps <- 1-px
y_1 <- 1 + px + stats::rnorm(n)
y_0 <- px + stats::rnorm(n)
y_1 <- as.integer(y_1 > 0)
y_0 <- as.integer(y_0 > 0)
y <- treat*y_1 + (1-treat)*y_0
ate_oracle <- mean(y_1 - y_0)
att_oracle <- mean(treat*(y_1-y_0))/mean(ps)
outputs <- list("outcome"=y,"treat"=treat,"covariate"=x,
"ate_oracle"=ate_oracle,"att_oracle"=att_oracle)
class(outputs) = 'ATbounds'
outputs
}
#' @title Summary method for ATbounds objects
#'
#' @description Produce a summary for an ATbounds object.
#' @param object ATbounds object
#' @param ... Additional arguments for summary generic
#'
#' @return A summary is produced with bounds estimates and confidence intervals.
#' In addition, it has the following elements.
#' \item{Lower_Bound}{lower bound estimate and lower end point of the confidence interval}
#' \item{Upper_Bound}{upper bound estimate and upper end point of the confidence interval}
#'
#' @examples
#' Y <- RHC[,"survival"]
#' D <- RHC[,"RHC"]
#' X <- RHC[,c("age","edu")]
#' rps <- rep(mean(D),length(D))
#' results_ate <- atebounds(Y, D, X, rps, Q = 3)
#' summary(results_ate)
#'
#' @references Sokbae Lee and Martin Weidner. Bounding Treatment Effects by Pooling Limited Information across Observations.
#'
#' @export
summary.ATbounds <- function(object,...){
heading=c(paste("ATbounds: ",object$type,sep=""),
paste("Call:",format(object$call)),
paste("Confidence Level:", format(object$cov_prob)))
est_lb = object$est_lb
est_ub = object$est_ub
ci_lb = object$ci_lb
ci_ub = object$ci_ub
sumob = data.frame(Lower_Bound = c(est_lb,ci_lb), Upper_Bound = c(est_ub,ci_ub),
row.names = c("Bound Estimate", "Confidence Interval"))
structure(sumob,heading=heading,class=c("anova","data.frame"))
}
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup--------------------------------------------------------------------
library(ATbounds)
## -----------------------------------------------------------------------------
nsw_treated <- read.table("http://users.nber.org/~rdehejia/data/nsw_treated.txt")
colnames(nsw_treated) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE75","RE78")
nsw_control <- read.table("http://users.nber.org/~rdehejia/data/nsw_control.txt")
colnames(nsw_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE75","RE78")
## -----------------------------------------------------------------------------
nsw <- rbind(nsw_treated,nsw_control)
attach(nsw)
D <- treat
Y <- (RE78 > 0)
## -----------------------------------------------------------------------------
rps <- rep(mean(D),length(D))
## -----------------------------------------------------------------------------
ate_nsw <- mean(D*Y)/mean(D)-mean((1-D)*Y)/mean(1-D)
print(ate_nsw)
## -----------------------------------------------------------------------------
model <- lm(Y ~ D)
summary(model)
confint(model)
## -----------------------------------------------------------------------------
detach(nsw)
nswre_treated <- read.table("http://users.nber.org/~rdehejia/data/nswre74_treated.txt")
colnames(nswre_treated) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
nswre_control <- read.table("http://users.nber.org/~rdehejia/data/nswre74_control.txt")
colnames(nswre_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
nswre <- rbind(nswre_treated,nswre_control)
attach(nswre)
D <- treat
Y <- (RE78 > 0)
X <- cbind(age,edu,black,hispanic,married,nodegree,RE74/1000,RE75/1000)
## -----------------------------------------------------------------------------
rps <- rep(mean(D),length(D))
## -----------------------------------------------------------------------------
ate_nswre <- mean(D*Y)/mean(D)-mean((1-D)*Y)/mean(1-D)
print(ate_nswre)
## -----------------------------------------------------------------------------
model <- lm(Y ~ D)
summary(model)
confint(model)
## -----------------------------------------------------------------------------
bns_nsw <- atebounds(Y, D, X, rps)
## -----------------------------------------------------------------------------
summary(bns_nsw)
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 2))
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 4))
## -----------------------------------------------------------------------------
print(ate_nswre)
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps))
summary(atebounds(Y, D, X, rps, n_hc = ceiling(length(Y)/5)))
summary(atebounds(Y, D, X, rps, n_hc = ceiling(length(Y)/20)))
## -----------------------------------------------------------------------------
print(bns_nsw)
## -----------------------------------------------------------------------------
bns_nsw_att <- attbounds(Y, D, X, rps)
summary(bns_nsw_att)
## -----------------------------------------------------------------------------
summary(attbounds(Y, D, X, rps, Q = 2))
## -----------------------------------------------------------------------------
summary(attbounds(Y, D, X, rps, Q = 4))
## -----------------------------------------------------------------------------
summary(attbounds(Y, D, X, rps))
summary(attbounds(Y, D, X, rps, n_hc = ceiling(length(Y)/5)))
summary(attbounds(Y, D, X, rps, n_hc = ceiling(length(Y)/20)))
## -----------------------------------------------------------------------------
psid2_control <- read.table("http://users.nber.org/~rdehejia/data/psid2_controls.txt")
colnames(psid2_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
psid <- rbind(nswre_treated,psid2_control)
detach(nswre)
attach(psid)
D <- treat
Y <- (RE78 > 0)
X <- cbind(age,edu,black,hispanic,married,nodegree,RE74/1000,RE75/1000)
## -----------------------------------------------------------------------------
rps_sp <- rep(mean(D),length(D))
bns_psid <- atebounds(Y, D, X, rps_sp)
summary(bns_psid)
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps_sp, Q=1))
## -----------------------------------------------------------------------------
summary(attbounds(Y, D, X, rps_sp))
detach(psid)
## -----------------------------------------------------------------------------
Y <- RHC[,"survival"]
D <- RHC[,"RHC"]
X <- as.matrix(RHC[,-c(1,2)])
## -----------------------------------------------------------------------------
# Logit estimation of propensity score
glm_ps <- stats::glm(D~X,family=binomial("logit"))
ps <- glm_ps$fitted.values
ps_treated <- ps[D==1]
ps_control <- ps[D==0]
# Plotting histograms of propensity scores
df <- data.frame(cbind(D,ps))
colnames(df)<-c("RHC","PS")
df$RHC <- as.factor(df$RHC)
levels(df$RHC) <- c("No RHC (Control)", "RHC (Treated)")
ggplot2::ggplot(df, ggplot2::aes(x=PS, color=RHC, fill=RHC)) +
ggplot2::geom_histogram(breaks=seq(0,1,0.1),alpha=0.5,position="identity")
## -----------------------------------------------------------------------------
# ATT normalized estimation
y1_att <- mean(D*Y)/mean(D)
att_wgt <- ps/(1-ps)
y0_att_num <- mean((1-D)*att_wgt*Y)
y0_att_den <- mean((1-D)*att_wgt)
y0_att <- y0_att_num/y0_att_den
att_ps <- y1_att - y0_att
print(att_ps)
## -----------------------------------------------------------------------------
rps <- rep(mean(D),length(D))
## -----------------------------------------------------------------------------
att_rps <- mean(D*Y)/mean(D) - mean((1-D)*Y)/mean(1-D)
print(att_rps)
## -----------------------------------------------------------------------------
Xunique <- mgcv::uniquecombs(X) # A matrix of unique rows from X
print(c("no. of unique rows:", nrow(Xunique)))
print(c("sample size :", nrow(X)))
## -----------------------------------------------------------------------------
summary(attbounds(Y, D, X, rps))
## ---- eval=FALSE--------------------------------------------------------------
# # Bounding ATT: sensitivity analysis
# # not run to save time
# nhc_set <- c(5, 10, 20)
# results_att <- {}
#
# for (hc in nhc_set){
# nhc <- ceiling(length(Y)/hc)
#
# for (q in c(1,2,3,4)){
# res <- attbounds(Y, D, X, rps, Q = q, n_hc = nhc)
# results_att <- rbind(results_att,c(hc,q,res$est_lb,res$est_ub,res$ci_lb,res$ci_ub))
# }
# }
# colnames(results_att) = c("L","Q","LB","UB","CI-LB","CI-UB")
# print(results_att, digits = 3)
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 1))
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 2))
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 3))
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 4))
## -----------------------------------------------------------------------------
Y <- EFM[,"cesarean"]
D <- EFM[,"monitor"]
X <- as.matrix(EFM[,c("arrest", "breech", "nullipar", "year")])
year <- EFM[,"year"]
## -----------------------------------------------------------------------------
ate_rps <- mean(D*Y)/mean(D) - mean((1-D)*Y)/mean(1-D)
print(ate_rps)
## -----------------------------------------------------------------------------
rps <- rep(mean(D),length(D))
print(rps[1])
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 1, x_discrete = TRUE))
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 2, x_discrete = TRUE))
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 3, x_discrete = TRUE))
## -----------------------------------------------------------------------------
summary(atebounds(Y, D, X, rps, Q = 5, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 10, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 20, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 50, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 100, x_discrete = TRUE))
---
title: "ATbounds: An R Vignette"
author: "Sokbae Lee and Martin Weidner"
abstract: ATbounds is an R package that provides estimation and inference methods for bounding average treatment effects (on the treated) that are valid under an unconfoundedness assumption. The bounds are designed to be robust in challenging situations, for example, when the conditioning variables take on a large number of different values in the observed sample, or when the overlap condition is violated. This robustness is achieved by only using limited "pooling" of information across observations.
output: rmarkdown::pdf_document
bibliography: refs.bib
vignette: >
%\VignetteIndexEntry{ATbounds: An R Vignette}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
## Introduction
__ATbounds__ is an R package that provides estimation and inference methods for bounding average treatment effects (on the treated) that are valid under an unconfoundedness assumption. The bounds are designed to be robust in challenging situations, for example, when the conditioning variables take on a large number of different values in the observed sample, or when the overlap condition is violated. This robustness is achieved by only using limited "pooling" of information across observations. Namely, the bounds are constructed as sample averages over functions of the observed outcomes such that the contribution of each outcome only depends on the treatment status of a limited number of observations. No information pooling across observations leads to so-called "Manski bounds" [@manski1989anatomy; @manski1990nonparametric], while unlimited information pooling leads to standard inverse propensity score weighting. The ATbounds package provides inference methods for exploring the intermediate range between these two extremes.
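To fix ideas, for a binary outcome $Y \in \{0,1\}$ (as in the applications below), the no-pooling (Manski) bounds on $\mathbb{E}[Y(1)]$ take the familiar form $$\mathbb{E}[D Y] \;\leq\; \mathbb{E}[Y(1)] \;\leq\; \mathbb{E}[D Y] + \mathbb{P}(D = 0),$$ with an analogous expression for $\mathbb{E}[Y(0)]$; these are the bounds the package reports when its pooling parameter $Q$ is set to 1 (up to the discretization of non-discrete covariates discussed later in this vignette).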
The methodology used in the ATbounds package is described in detail in Lee and Weidner (2021), "Bounding Treatment Effects by Pooling Limited Information across Observations," available at <https://arxiv.org/abs/2111.05243>.
We begin by loading the ATbounds package.
```{r setup}
library(ATbounds)
```
# Case Study 1: Bounding the Effects of a Job Training Program
To illustrate the usefulness of the package, we first use the well-known
@LaLonde-AER dataset available on Rajeev Dehejia's web page
at <http://users.nber.org/~rdehejia/nswdata2.html>.
## LaLonde's Experimental Sample
We first look at LaLonde's original experimental sample.
```{r}
nsw_treated <- read.table("http://users.nber.org/~rdehejia/data/nsw_treated.txt")
colnames(nsw_treated) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE75","RE78")
nsw_control <- read.table("http://users.nber.org/~rdehejia/data/nsw_control.txt")
colnames(nsw_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE75","RE78")
```
The outcome variable is `RE78` (earnings in 1978). The binary treatment indicator is `treat` (1 if treated, 0 if not treated). We now combine the treatment and control samples and define the variables.
```{r}
nsw <- rbind(nsw_treated,nsw_control)
attach(nsw)
D <- treat
Y <- (RE78 > 0)
```
In this vignette, we define the outcome to be an indicator of employment in 1978 (that is, whether earnings in 1978 are positive).
The LaLonde dataset is from the National Supported Work Demonstration (NSW), which is a randomized controlled temporary employment program. In view of that, we set the reference propensity score to be independent of covariates.
```{r}
rps <- rep(mean(D),length(D))
```
The average treatment effect is obtained by
```{r}
ate_nsw <- mean(D*Y)/mean(D)-mean((1-D)*Y)/mean(1-D)
print(ate_nsw)
```
Alternatively, we run simple regression
```{r}
model <- lm(Y ~ D)
summary(model)
confint(model)
```
The 95% confidence interval $[0.01,0.14]$ is rather wide but excludes zero.
## Dehejia-Wahba Sample
@DehejiaWahba-JASA and @DehejiaWahba-RESTAT extract a further subset of LaLonde's NSW experimental data that contains information on RE74 (earnings in 1974).
```{r}
detach(nsw)
nswre_treated <- read.table("http://users.nber.org/~rdehejia/data/nswre74_treated.txt")
colnames(nswre_treated) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
nswre_control <- read.table("http://users.nber.org/~rdehejia/data/nswre74_control.txt")
colnames(nswre_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
nswre <- rbind(nswre_treated,nswre_control)
attach(nswre)
D <- treat
Y <- (RE78 > 0)
X <- cbind(age,edu,black,hispanic,married,nodegree,RE74/1000,RE75/1000)
```
The covariates are as follows:
* `age`: age in years,
* `edu`: years of education,
* `black`: 1 if black, 0 otherwise,
* `hispanic`: 1 if Hispanic, 0 otherwise,
* `married`: 1 if married, 0 otherwise,
* `nodegree`: 1 if no degree, 0 otherwise,
* `RE74`: earnings in 1974,
* `RE75`: earnings in 1975.
If we assume that the Dehejia-Wahba sample still preserves initial randomization, we can
set the reference propensity score to be independent of covariates. However, this may not be the case; therefore, our approach provides a robust method to check whether the Dehejia-Wahba sample can be viewed as a random sample from a randomized controlled experiment.
We first define the reference propensity score.
```{r}
rps <- rep(mean(D),length(D))
```
Using this reference propensity score, the average treatment effect is obtained by
```{r}
ate_nswre <- mean(D*Y)/mean(D)-mean((1-D)*Y)/mean(1-D)
print(ate_nswre)
```
Alternatively, we run simple regression
```{r}
model <- lm(Y ~ D)
summary(model)
confint(model)
```
The resulting 95% confidence interval $[0.02,0.20]$ is wide but again excludes zero.
### Bounds on ATE
We now introduce our bounds on the average treatment effect (ATE).
```{r}
bns_nsw <- atebounds(Y, D, X, rps)
```
In implementing `atebounds`, there are several options:
* `Q`: bandwidth parameter that determines the maximum number of observations for pooling information (default: $Q = 3$)
* `studentize`: `TRUE` if `X` is studentized elementwise and `FALSE` if not (default: `TRUE`)
* `alpha`: $(1-\alpha)$ nominal coverage probability for the confidence interval of ATE (default: 0.05)
* `x_discrete`: `TRUE` if `X` includes only discrete covariates and `FALSE` if not (default: `FALSE`)
* `n_hc`: number of hierarchical clusters to discretize non-discrete covariates; relevant only if x_discrete is FALSE. The default choice is `n_hc = ceiling(length(Y)/10)`, so that there are 10 observations in each cluster on average.
The clusters are constructed via hierarchical, agglomerative clustering with complete linkage, where the distance is measured by the Euclidean distance after studentizing each of the covariates. As mentioned above, the number $m$ of clusters is set by $$m = \left\lceil \frac{n}{L} \right\rceil$$ for $L = 10$.
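To make this discretization step concrete, the following sketch (not evaluated here) mirrors what `atebounds` and `attbounds` do internally; the object names `X_st`, `hc`, and `cluster_id` are introduced only for illustration:
```{r, eval=FALSE}
# Sketch of the internal discretization step (complete-linkage clustering)
X_st <- scale(X)                           # studentize each covariate
hc <- stats::hclust(stats::dist(X_st), method = "complete")
n_hc <- ceiling(length(Y) / 10)            # m = ceiling(n / L) with L = 10
cluster_id <- stats::cutree(hc, k = n_hc)  # cluster label for each observation
table(table(cluster_id))                   # distribution of cluster sizes
```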
We show the summary results saved in `bns_nsw`.
```{r}
summary(bns_nsw)
```
Note that the estimate of the lower bound is larger than that of the upper bound.
This crossing problem can occur in finite samples due to random sampling errors.
While statistical theory guarantees that this problem cannot occur asymptotically,
it is desirable to have a non-empty confidence interval in applications.
We therefore use the method in @stoye2020 to obtain a valid confidence interval that is never empty.
The 95% confidence interval $[-0.01,0.19]$ obtained here is similar to the previous interval $[0.02,0.20]$, which was obtained under the assumption that the Dehejia-Wahba sample is a random sample from NSW. This suggests, first, that there is no evidence of a violation of the random sampling assumption in the Dehejia-Wahba sample and, second, that our inference method does not unduly enlarge the confidence interval to achieve robustness, although a null effect is now included in our confidence interval.
With $Q=2$, the bounds for ATE are
```{r}
summary(atebounds(Y, D, X, rps, Q = 2))
```
With $Q=4$, the bounds for ATE are
```{r}
summary(atebounds(Y, D, X, rps, Q = 4))
```
Recall that the point estimate of ATE under the random sampling assumption was
```{r}
print(ate_nswre)
```
Thus, the ATE estimate based on the simple mean difference falls well within the 95% confidence intervals across $Q=2, 3, 4$.
Recall that covariates $X$ include non-discrete variables: `RE74` and `RE75`. As a default option, `atebounds` chooses the number of hierarchical clusters to be `n_hc = ceiling(length(Y)/10)`.
To check sensitivity to `n_hc`, we now run the following:
```{r}
summary(atebounds(Y, D, X, rps))
summary(atebounds(Y, D, X, rps, n_hc = ceiling(length(Y)/5)))
summary(atebounds(Y, D, X, rps, n_hc = ceiling(length(Y)/20)))
```
It can be seen that the alternative estimation result is more similar to the default one with `n_hc = ceiling(length(Y)/20)` than with `n_hc = ceiling(length(Y)/5)`. Overall, the results are qualitatively similar across the three different specifications of `n_hc`.
Finally, to see what is saved in `bns_nsw`, we now print it out.
```{r}
print(bns_nsw)
```
The output list contains:
* `call`: a call in which all of the specified arguments are specified by their full names
* `type`: ATE
* `cov_prob`: confidence level ($1-\alpha$)
* `y1_lb`: estimate of the lower bound on the average of $Y(1)$, i.e. $\mathbb{E}[Y(1)]$,
* `y1_ub`: estimate of the upper bound on the average of $Y(1)$, i.e. $\mathbb{E}[Y(1)]$,
* `y0_lb`: estimate of the lower bound on the average of $Y(0)$, i.e. $\mathbb{E}[Y(0)]$,
* `y0_ub`: estimate of the upper bound on the average of $Y(0)$, i.e. $\mathbb{E}[Y(0)]$,
* `est_lb`: estimate of the lower bound on ATE, i.e. $\mathbb{E}[Y(1) - Y(0)]$,
* `est_ub`: estimate of the upper bound on ATE, i.e. $\mathbb{E}[Y(1) - Y(0)]$,
* `est_rps`: the point estimate of ATE using the reference propensity score,
* `se_lb`: standard error for the estimate of the lower bound on ATE,
* `se_ub`: standard error for the estimate of the upper bound on ATE,
* `ci_lb`: the lower end point of the confidence interval for ATE,
* `ci_ub`: the upper end point of the confidence interval for ATE.
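For instance, the individual elements can be extracted directly from the returned object (not evaluated here):
```{r, eval=FALSE}
# Bound estimates and confidence interval for ATE
c(bns_nsw$est_lb, bns_nsw$est_ub)
c(bns_nsw$ci_lb, bns_nsw$ci_ub)
```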
### Bounds on ATT
We now look at bounds on the average treatment effect on the treated (ATT).
```{r}
bns_nsw_att <- attbounds(Y, D, X, rps)
summary(bns_nsw_att)
```
We experiment with `Q`.
```{r}
summary(attbounds(Y, D, X, rps, Q = 2))
```
```{r}
summary(attbounds(Y, D, X, rps, Q = 4))
```
We also experiment with `n_hc`.
```{r}
summary(attbounds(Y, D, X, rps))
summary(attbounds(Y, D, X, rps, n_hc = ceiling(length(Y)/5)))
summary(attbounds(Y, D, X, rps, n_hc = ceiling(length(Y)/20)))
```
Bound estimates cross (that is, the lower bound is larger than the upper bound) and are furthermore sensitive to the choice of `Q` and `n_hc`. This is likely to be driven by the relatively small sample size. However, once we factor in sampling uncertainty and look at the confidence intervals, all the results are more or less similar.
## NSW treated and PSID control
The Dehejia-Wahba sample can be regarded as a data scenario where the propensity score is known and satisfies the overlap condition. We now turn to a different data scenario where it is likely that the propensity score is unknown and may not satisfy the overlap condition.
```{r}
psid2_control <- read.table("http://users.nber.org/~rdehejia/data/psid2_controls.txt")
colnames(psid2_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
psid <- rbind(nswre_treated,psid2_control)
detach(nswre)
attach(psid)
D <- treat
Y <- (RE78 > 0)
X <- cbind(age,edu,black,hispanic,married,nodegree,RE74/1000,RE75/1000)
```
Here, we use one of the non-experimental comparison groups constructed by LaLonde from the Panel Study of Income Dynamics (PSID), namely the PSID2 controls. We now estimate the reference propensity score using the sample proportion and obtain the new bound estimates:
```{r}
rps_sp <- rep(mean(D),length(D))
bns_psid <- atebounds(Y, D, X, rps_sp)
summary(bns_psid)
```
The confidence interval $[-0.35,0.31]$ here is much larger than the confidence interval $[-0.01,0.19]$ with the Dehejia-Wahba sample. It is worth noting that the sample proportion is unlikely to be correctly specified in the NSW-treated/PSID-control sample. Thus, it seems that our inference method produces a wider confidence interval in order to be robust against misspecification of the propensity scores.
We now consider the Manski bounds, which can be obtained by setting $Q = 1$.
```{r}
summary(atebounds(Y, D, X, rps_sp, Q=1))
```
We can see that the Manski bounds are even wider and are identical regardless of the specification of the reference propensity scores. Recall that the Manski bounds do not impose the unconfoundedness assumption and do not rely on any pooling of information (hence, it does not matter how the reference propensity score is specified).
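One way to verify this in the data is to re-run the $Q = 1$ bounds with a different constant reference propensity score (0.5 is used below purely for illustration) and confirm that the estimates are unchanged:
```{r, eval=FALSE}
# Q = 1 (Manski) bounds do not depend on the reference propensity score
summary(atebounds(Y, D, X, rep(0.5, length(D)), Q = 1))
```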
Finally, we obtain the bounds for the average treatment effect on the treated (ATT).
```{r}
summary(attbounds(Y, D, X, rps_sp))
detach(psid)
```
We find that the bounds on ATT are wide, as they are for ATE.
# Case Study 2: Bounding the Effect of Right Heart Catheterization
As a second empirical example, we revisit the well-known Right Heart Catheterization Dataset.
In particular, we apply our methods to @Connors1996's study of the
efficacy of right heart catheterization (RHC), which is a diagnostic
procedure for directly measuring cardiac function in critically ill
patients. This dataset has been subsequently used in the context of
limited overlap by @crump2009dealing, @Rothe:2017, and
@Li-et-al:2018 among others. The dataset is publicly available on the
Vanderbilt Biostatistics website at
<https://hbiostat.org/data>.
In this example, the dependent variable is 1 if a patient survived after
30 days of admission, and 0 if a patient died within 30 days. The binary
treatment variable is 1 if RHC was applied within 24 hours of admission,
and 0 otherwise. The sample size was $n = 5735$, and 2184 patients were
treated with RHC. There are a large number of covariates:
@hirano2001estimation constructed 72 variables from the dataset and
the same number of covariates were considered in both
@crump2009dealing and @Li-et-al:2018 and 50 covariates were used in
@Rothe:2017. In our exercise, we constructed the same 72 covariates.
A cleaned version of the dataset is available in the package.
```{r}
Y <- RHC[,"survival"]
D <- RHC[,"RHC"]
X <- as.matrix(RHC[,-c(1,2)])
```
As in the aforementioned papers, we estimated the propensity scores by a
logit model with all 72 covariates being added linearly.
```{r}
# Logit estimation of propensity score
glm_ps <- stats::glm(D~X,family=binomial("logit"))
ps <- glm_ps$fitted.values
ps_treated <- ps[D==1]
ps_control <- ps[D==0]
# Plotting histograms of propensity scores
df <- data.frame(cbind(D,ps))
colnames(df)<-c("RHC","PS")
df$RHC <- as.factor(df$RHC)
levels(df$RHC) <- c("No RHC (Control)", "RHC (Treated)")
ggplot2::ggplot(df, ggplot2::aes(x=PS, color=RHC, fill=RHC)) +
ggplot2::geom_histogram(breaks=seq(0,1,0.1),alpha=0.5,position="identity")
```
The figure above shows the
histograms of estimated propensity scores for treated and control
groups. It is very similar to Fig.1 in @crump2009dealing and to Figure
2 in @Rothe:2017. The support of the estimated propensity scores is
almost the entire unit interval for both treated and control units, although
there is some visual evidence of limited overlap (that is, control units
have far fewer propensity scores close to 1).
## Bounding the Average Treatment Effect on the Treated
In this section, we focus on ATT. We first estimate ATT by the
normalized inverse probability weighted estimator:
$$\begin{aligned}
\widehat{\text{ATT}}_{\text{PS}}
:= \frac{\sum_{i=1}^n D_i Y_i}{\sum_{i=1}^n D_i}
- \frac{\sum_{i=1}^n (1-D_i) W_i Y_i}{\sum_{i=1}^n (1-D_i) W_i},
\end{aligned}$$
where $W_i := \widehat{p}(X_i)/[1-\widehat{p}(X_i)]$ and
$\widehat{p}(X_i)$ is the estimated propensity score for observation $i$
based on the logit model described above.
See, e.g., equation (3) and discussions in @busso2014new for
details of the normalized inverse probability weighted ATT
estimator.
The estimator
$\widehat{\text{ATT}}_{\text{PS}}$ requires that the assumed propensity
score model is correctly specified and the overlap condition is
satisfied. The resulting estimate is
$\widehat{\text{ATT}}_{\text{PS}} = -0.0639$.
```{r}
# ATT normalized estimation
y1_att <- mean(D*Y)/mean(D)
att_wgt <- ps/(1-ps)
y0_att_num <- mean((1-D)*att_wgt*Y)
y0_att_den <- mean((1-D)*att_wgt)
y0_att <- y0_att_num/y0_att_den
att_ps <- y1_att - y0_att
print(att_ps)
```
We now turn to our methods. As before, we take the reference propensity score to be
$\widehat{p}_{\text{RPS}}(X_i) = n^{-1} \sum_{i=1}^n D_i$ for each
observation $i$. That is, we assign the sample proportion of the treated
to the reference propensity scores uniformly for all observations. Of
course, this is likely to be misspecified; however, it has the advantage
that $\widehat{p}_{\text{RPS}}(X_i)$ is never close to 0 or 1, so the inverse probability weights remain bounded.
```{r}
rps <- rep(mean(D),length(D))
```
The
resulting inverse reference-propensity-score weighted ATT
estimator is
$$\begin{aligned}
\widehat{\text{ATT}}_{\text{RPS}}
:= \frac{\sum_{i=1}^n D_i Y_i}{\sum_{i=1}^n D_i}
- \frac{\sum_{i=1}^n (1-D_i) Y_i}{\sum_{i=1}^n (1-D_i)}
= -0.0507.
\end{aligned}$$
```{r}
att_rps <- mean(D*Y)/mean(D) - mean((1-D)*Y)/mean(1-D)
print(att_rps)
```
When the sample proportion is used as the propensity score estimator, there is no difference between
unnormalized and normalized versions of the ATT estimate. In fact, the estimate is then simply the mean difference in outcomes between the treatment and control groups.
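As a quick numerical check (illustrative only; the object names below are introduced just for this check), one can verify that the two versions coincide when the reference propensity score is constant:
```{r, eval=FALSE}
# With a constant rps, normalized and unnormalized IPW ATT estimates coincide
att_wgt_rps <- rps / (1 - rps)
att_unnorm <- mean(D * Y) / mean(D) - mean((1 - D) * att_wgt_rps * Y) / mean(D)
att_norm <- mean(D * Y) / mean(D) -
  mean((1 - D) * att_wgt_rps * Y) / mean((1 - D) * att_wgt_rps)
all.equal(att_unnorm, att_norm)
```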
```{r}
Xunique <- mgcv::uniquecombs(X) # A matrix of unique rows from X
print(c("no. of unique rows:", nrow(Xunique)))
print(c("sample size :", nrow(X)))
```
Note that no two observations in the sample share identical covariate values. We therefore implement hierarchical, agglomerative clustering with complete linkage.
We show empirical results using the default option.
```{r}
summary(attbounds(Y, D, X, rps))
```
To check sensitivity with respect to tuning parameters, we vary $L$ and $Q$.
```{r, eval=FALSE}
# Bounding ATT: sensitivity analysis
# not run to save time
nhc_set <- c(5, 10, 20)
results_att <- {}
for (hc in nhc_set){
nhc <- ceiling(length(Y)/hc)
for (q in c(1,2,3,4)){
res <- attbounds(Y, D, X, rps, Q = q, n_hc = nhc)
results_att <- rbind(results_att,c(hc,q,res$est_lb,res$est_ub,res$ci_lb,res$ci_ub))
}
}
colnames(results_att) = c("L","Q","LB","UB","CI-LB","CI-UB")
print(results_att, digits = 3)
```
The table below reports
estimation results of ATT bounds for extended values of $L$ and $Q$.
When $Q=1$, our estimated bounds correspond to Manski bounds, which
includes zero and is wide with the interval length of almost one in all
cases of $L$. Our bounds with $Q=1$ are different across $L$ because we
apply hierarchical clustering before obtaining Manski bounds. With
$Q=2$, the bounds shrink so that the estimated upper bound is zero for
all cases of $L$; with $Q = 3$, they shrink even further so that the
upper end point of the 95% confidence interval excludes zero. Among
three different values of $L$, the case of $L=5$ gives the tightest
confidence interval but in this case, the lower bound is larger than the
upper bound, indicating that the estimates might be biased. In view of
that, we take the bound estimates with $L=10$ as our preferred estimates
$[-0.077, -0.039]$ with the 95% confidence interval $[-0.117,0.006]$.
When $Q=4$, the lower bound estimates exceed the upper bound estimates
with $L = 5, 10$. However, the estimates with $L = 20$ give almost
identical results to our preferred estimates. It seems that the pairs of
$(L, Q) = (10, 3)$ or $(L, Q) = (20, 4)$ provide reasonable estimates.
| L  | Q | LB     | UB     | CI-LB  | CI-UB  |
|----|---|--------|--------|--------|--------|
| 5  | 1 | -0.638 |  0.282 | -0.700 |  0.330 |
| 5  | 2 | -0.131 | -0.000 | -0.174 |  0.033 |
| 5  | 3 | -0.034 | -0.048 | -0.076 | -0.007 |
| 5  | 4 | -0.006 | -0.073 | -0.079 | -0.006 |
| 10 | 1 | -0.664 |  0.307 | -0.766 |  0.376 |
| 10 | 2 | -0.169 |  0.004 | -0.216 |  0.039 |
| 10 | 3 | -0.077 | -0.039 | -0.117 | -0.006 |
| 10 | 4 | -0.049 | -0.057 | -0.090 | -0.016 |
| 20 | 1 | -0.675 |  0.316 | -0.843 |  0.430 |
| 20 | 2 | -0.178 | -0.005 | -0.238 |  0.034 |
| 20 | 3 | -0.099 | -0.046 | -0.149 | -0.007 |
| 20 | 4 | -0.065 | -0.060 | -0.112 | -0.017 |

Table: ATT Bounds: Right Heart Catheterization Study
The study of @Connors1996 offered a conclusion that RHC could cause an
increase in patient mortality. Based on our preferred estimates, we can
exclude positive effects with confidence. This conclusion is based
solely on the unconfoundedness condition, but not on the overlap
condition, nor on the correct specification of the logit model. Overall,
our estimates seem to be consistent with the qualitative findings in
@Connors1996.
## Bounding the Average Treatment Effect
We now turn to bounds on ATE. Using again the sample proportion of the treated as the reference propensity score,
we bound the ATE.
We start with $Q=1$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 1))
```
We now consider $Q=2$
```{r}
summary(atebounds(Y, D, X, rps, Q = 2))
```
and $Q=3$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 3))
```
Finally, we take $Q = 4$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 4))
```
Overall, the results are similar to ATT.
# Case Study 3: EFM
The electronic fetal monitoring (EFM) and cesarean section (CS) dataset from @EFMdata consists of observations on 14,484 women who delivered at Beth Israel Hospital, Boston from January 1970 to December 1975. The purpose of the study is to evaluate the impact of EFM on cesarean section (CS) rates. @EFMdata report that relevant confounding factors are: nulliparity (nullipar), arrest of labor progression (arrest), malpresentation (breech), and year of study (year). The dataset included in the R package is from the supplementary materials of @RRW-JASA, who used this dataset to illustrate their proposed methods for modeling and estimating relative risk and risk difference. In this dataset, all covariates are discrete.
```{r}
Y <- EFM[,"cesarean"]
D <- EFM[,"monitor"]
X <- as.matrix(EFM[,c("arrest", "breech", "nullipar", "year")])
year <- EFM[,"year"]
```
```{r}
ate_rps <- mean(D*Y)/mean(D) - mean((1-D)*Y)/mean(1-D)
print(ate_rps)
```
We take the reference propensity score to be the sample proportion of the treatment.
```{r}
rps <- rep(mean(D),length(D))
print(rps[1])
```
## Bounding the Average Treatment Effect
Using again the sample proportion of the treated as the reference propensity score,
we bound the ATE.
We start with $Q=1$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 1, x_discrete = TRUE))
```
We now consider $Q=2$
```{r}
summary(atebounds(Y, D, X, rps, Q = 2, x_discrete = TRUE))
```
and $Q=3$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 3, x_discrete = TRUE))
```
In this example, the reference propensity score is close to 0.5, thus implying that the results will be robust even if we take a very large $Q$. In view of that, we take $Q = 5, 10, 20, 50, 100$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 5, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 10, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 20, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 50, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 100, x_discrete = TRUE))
```
Overall, the empirical results suggest that there is no significant effect of EFM on cesarean section rates.
## References
---
title: "ATbounds: An R Vignette"
author: "Sokbae Lee and Martin Weidner"
abstract: ATbounds is an R package that provides estimation and inference methods for bounding average treatment effects (on the treated) that are valid under an unconfoundedness assumption. The bounds are designed to be robust in challenging situations, for example, when the the conditioning variables take on a large number of different values in the observed sample, or when the overlap condition is violated. This robustness is achieved by only using limited "pooling" of information across observations.
output: rmarkdown::pdf_document
bibliography: refs.bib
vignette: >
%\VignetteIndexEntry{ATbounds: An R Vignette}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
## Introduction
__ATbounds__ is an R package that provides estimation and inference methods for bounding average treatment effects (on the treated) that are valid under an unconfoundedness assumption. The bounds are designed to be robust in challenging situations, for example, when the the conditioning variables take on a large number of different values in the observed sample, or when the overlap condition is violated. This robustness is achieved by only using limited "pooling" of information across observations. Namely, the bounds are constructed as sample averages over functions of the observed outcomes such that the contribution of each outcome only depends on the treatment status of a limited number of observations. No information pooling across observations leads to so-called "Manski bounds" [@manski1989anatomy; @manski1990nonparametric], while unlimited information pooling leads to standard inverse propensity score weighting. The ATbounds package provides inference methods for exploring the intermediate range between these two extremes.
The methodology used in the ATbounds package is described in detail in Lee and Weidner (2021), "Bounding Treatment Effects by Pooling Limited Information across Observations," "Bounding Treatment Effects by Pooling Limited Information across Observations," available at <https://arxiv.org/abs/2111.05243>.
We begin by calling the ATbounds package.
```{r setup}
library(ATbounds)
```
# Case Study 1: Bounding the Effects of a Job Training Program
To illustrate the usefulness of the package, we first use the well-known
@LaLonde-AER dataset available on Rajeev Dehejia's web page
at <http://users.nber.org/~rdehejia/nswdata2.html>.
## LaLonde's Experimental Sample
We fist look at LaLonde's original experimental sample.
```{r}
nsw_treated <- read.table("http://users.nber.org/~rdehejia/data/nsw_treated.txt")
colnames(nsw_treated) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE75","RE78")
nsw_control <- read.table("http://users.nber.org/~rdehejia/data/nsw_control.txt")
colnames(nsw_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE75","RE78")
```
The outcome variable is `RE78` (earnings in 1978). The binary treatment indicator is `treat` (1 if treated, 0 if not treated). We now combine the treatment and control samples and define the variables.
```{r}
nsw <- rbind(nsw_treated,nsw_control)
attach(nsw)
D <- treat
Y <- (RE78 > 0)
```
In this vignette, we define the outcome to be whether employed in 1978 (that is, earnings in 1978 are positive).
The LaLonde dataset is from the National Supported Work Demonstration (NSW), which is a randomized controlled temporary employment program. In view of that, we set the reference propensity score to be independent of covariates.
```{r}
rps <- rep(mean(D),length(D))
```
The average treatment effect is obtained by
```{r}
ate_nsw <- mean(D*Y)/mean(D)-mean((1-D)*Y)/mean(1-D)
print(ate_nsw)
```
Alternatively, we run simple regression
```{r}
model <- lm(Y ~ D)
summary(model)
confint(model)
```
The 95% confidence interval $[0.01,0.14]$ is rather wide but excludes zero.
## Dehejia-Wahba Sample
@DehejiaWahba-JASA and @DehejiaWahba-RESTAT extract a further subset of LaLonde's NSW experimental data to obtain a subset containing information on RE74 (earnings in 1974).
```{r}
detach(nsw)
nswre_treated <- read.table("http://users.nber.org/~rdehejia/data/nswre74_treated.txt")
colnames(nswre_treated) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
nswre_control <- read.table("http://users.nber.org/~rdehejia/data/nswre74_control.txt")
colnames(nswre_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
nswre <- rbind(nswre_treated,nswre_control)
attach(nswre)
D <- treat
Y <- (RE78 > 0)
X <- cbind(age,edu,black,hispanic,married,nodegree,RE74/1000,RE75/1000)
```
The covariates are as follows:
* `age`: age in years,
* `edu`: years of education,
* `black`: 1 if black, 0 otherwise,
* `hispanic`: 1 if Hispanic, 0 otherwise,
* `married`: 1 if married, 0 otherwise,
* `nodegree`: 1 if no degree, 0 otherwise,
* `RE74`: earnings in 1974,
* `RE75`: earnings in 1975.
If we assume that the Dehejia-Wahba sample still preserves initial randomization, we can
set the reference propensity score to be independent of covariates. However, it may not be the case and therefore, our approach can provide a robust method to check whether the Dehejia-Wahba sample can be viewed as as a random sample from a randomized controlled experiment.
We first define the the reference propensity score.
```{r}
rps <- rep(mean(D),length(D))
```
Using this reference propensity score, the average treatment effect is obtained by
```{r}
ate_nswre <- mean(D*Y)/mean(D)-mean((1-D)*Y)/mean(1-D)
print(ate_nswre)
```
Alternatively, we run simple regression
```{r}
model <- lm(Y ~ D)
summary(model)
confint(model)
```
The resulting 95% confidence interval $[0.02,0.20]$ is wide but again excludes zero.
### Bounds on ATE
We now introduce our bounds on the average treatment effect (ATE).
```{r}
bns_nsw <- atebounds(Y, D, X, rps)
```
In implementing `atebounds`, there are several options:
* `Q`: bandwidth parameter that determines the maximum number of observations for pooling information (default: $Q = 3$)
* `studentize`: `TRUE` if `X` is studentized elementwise and `FALSE` if not (default: `TRUE`)
* `alpha`: $(1-\alpha)$ nominal coverage probability for the confidence interval of ATE (default: 0.05)
* `discrete`: `TRUE` if `X` includes only discrete covariates and `FALSE` if not (default: `FALSE`)
* `n_hc`: number of hierarchical clusters to discretize non-discrete covariates; relevant only if x_discrete is FALSE. The default choice is `n_hc = ceiling(length(Y)/10)`, so that there are 10 observations in each cluster on average.
The clusters are constructed via hierarchical, agglomerative clustering with complete linkage, where the distance is measured by the Euclidean distance after studentizing each of the covariates. As mentioned above, the number $m$ of clusters is set by $$m = \left\lceil \frac{n}{L} \right\rceil$$ for $L = 10$.
We show the summary results saved in `bns_nsw`.
```{r}
summary(bns_nsw)
```
Note that the estimate of the lower bound is larger than that of the upper bound.
This crossing problem can occur in finite samples due to random sampling errors.
While statistical theory guarantees that this problem cannot occur asymptotically,
it is desirable to have a non-empty confidence interval in applications.
We therefore use the method in @stoye2020 to obtain a valid confidence interval that is never empty.
The 95% confidence interval $[-0.01,0.19]$ obtained here is similar to the previous interval $[0.02,0.20]$, which was obtained under the assumption that the Dehejia-Wahba sample is a random sample from NSW. This suggests that first, there is no evidence against violation of the random sampling assumption in the Dehejia-Wahba sample and second, our inference method does not suffer from unduly enlargement of the confidence interval to achieve robustness, although a null effect is now included in our confidence interval.
With $Q=2$, the bounds for ATE are
```{r}
summary(atebounds(Y, D, X, rps, Q = 2))
```
With $Q=4$, the bounds for ATE are
```{r}
summary(atebounds(Y, D, X, rps, Q = 4))
```
Recall that the point estimate of ATE under the random sampling assumption was
```{r}
print(ate_nswre)
```
Thus, the ATE estimate based on the simple mean difference well within the 95% confidence intervals across $Q=2, 3, 4$.
Recall that covariates $X$ include non-discrete variables: `RE74` and `RE75`. As a default option, `atebounds` chooses the number of hierarchical clusters to be `n_hc = ceiling(length(Y)/10)`.
To check sensitivity to `n_hc`, we now run the following:
```{r}
summary(atebounds(Y, D, X, rps))
summary(atebounds(Y, D, X, rps, n_hc = ceiling(length(Y)/5)))
summary(atebounds(Y, D, X, rps, n_hc = ceiling(length(Y)/20)))
```
It can be seen that the alternative estimation result is more similar to the default one with `n_hc = ceiling(length(Y)/20)` than with `n_hc = ceiling(length(Y)/5)`. Overall, the results are qualitatively similar across the three different specifications of `n_hc`.
Finally, to see what is saved in `bns_nsw`, we now print it out.
```{r}
print(bns_nsw)
```
The output list contains:
* `call`: a call in which all of the specified arguments are specified by their full names
* `type`: ATE
* `cov_prob`: confidence level ($1-\alpha$)
* `y1_lb`: estimate of the lower bound on the average of $Y(1)$, i.e. $\mathbb{E}[Y(1)]$,
* `y1_ub`: estimate of the upper bound on the average of $Y(1)$, i.e. $\mathbb{E}[Y(1)]$,
* `y0_lb`: estimate of the lower bound on the average of $Y(0)$, i.e. $\mathbb{E}[Y(0)]$,
* `y0_ub`: estimate of the upper bound on the average of $Y(0)$, i.e. $\mathbb{E}[Y(0)]$,
* `est_lb`: estimate of the lower bound on ATE, i.e. $\mathbb{E}[Y(1) - Y(0)]$,
* `est_ub`: estimate of the upper bound on ATE, i.e. $\mathbb{E}[Y(1) - Y(0)]$,
* `est_rps`: the point estimate of ATE using the reference propensity score,
* `se_lb`: standard error for the estimate on the lower bound on ATE,
* `se_ub`: standard error for the estimate of the upper bound on ATE,
* `ci_lb`: the lower end point of the confidence interval for ATE,
* `ci_ub`: the upper end point of the confidence interval for ATE.
### Bounds on ATT
We now look at bounds on the average treatment effect on the treated (ATT).
```{r}
bns_nsw_att <- attbounds(Y, D, X, rps)
summary(bns_nsw_att)
```
We experiment with `Q`.
```{r}
summary(attbounds(Y, D, X, rps, Q = 2))
```
```{r}
summary(attbounds(Y, D, X, rps, Q = 4))
```
We also experiment with `n_hc`.
```{r}
summary(attbounds(Y, D, X, rps))
summary(attbounds(Y, D, X, rps, n_hc = ceiling(length(Y)/5)))
summary(attbounds(Y, D, X, rps, n_hc = ceiling(length(Y)/20)))
```
Bound estimates cross (that is, the lower bound is larger than the upper bound) and furthermore sensitive to the choice of `Q` and `n_hc`. This is likely to be driven by the relatively small sample size. However, once we factor into sampling uncertainty and look at the confidence intervals, all the results are more or less similar.
## NSW treated and PSID control
The Dehejia-Wahba sample can be regarded as a data scenario where the propensity score is known and satisfies the overlap condition. We now turn to a different data scenario where it is likely that the propensity score is unknown and may not satisfy the overlap condition.
```{r}
psid2_control <- read.table("http://users.nber.org/~rdehejia/data/psid2_controls.txt")
colnames(psid2_control) <- c("treat","age","edu","black","hispanic",
"married","nodegree","RE74","RE75","RE78")
psid <- rbind(nswre_treated,psid2_control)
detach(nswre)
attach(psid)
D <- treat
Y <- (RE78 > 0)
X <- cbind(age,edu,black,hispanic,married,nodegree,RE74/1000,RE75/1000)
```
Here, we use one of the non-experimental comparison groups constructed by LaLonde from the Panel Study of Income Dynamics (PSID), namely the PSID2 controls. We now estimate the reference propensity score using the sample proportion and obtain the new bound estimates:
```{r}
rps_sp <- rep(mean(D),length(D))
bns_psid <- atebounds(Y, D, X, rps_sp)
summary(bns_psid)
```
The confidence interval $[-0.35,0.31]$ here is much wider than the confidence interval $[-0.01,0.19]$ with the Dehejia-Wahba sample. It is worth noting that the sample proportion is unlikely to be correctly specified in the NSW-treated/PSID-control sample. Thus, it seems that our inference method produces a wider confidence interval in order to be robust against misspecification of the propensity scores.
We now consider the Manski bounds, which can be obtained by setting $Q = 1$.
```{r}
summary(atebounds(Y, D, X, rps_sp, Q=1))
```
We can see that the Manski bounds are even wider and are the same regardless of the specification of the reference propensity scores. Recall that the Manski bounds do not impose the unconfoundedness assumption and do not rely on any pooling of information (hence, it does not matter how the reference propensity score is specified).
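As a quick sanity check of this invariance (not run here), one can replace the sample-proportion reference propensity score with an arbitrary constant, say 0.5, and confirm that the $Q = 1$ bounds are unchanged; the constant 0.5 is only an illustrative choice.
```{r, eval=FALSE}
rps_half <- rep(0.5, length(D))
summary(atebounds(Y, D, X, rps_half, Q = 1))
```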
Finally, we obtain the bounds for the average treatment effect on the treated (ATT).
```{r}
summary(attbounds(Y, D, X, rps_sp))
detach(psid)
```
We find that the bounds on ATT are wide, as they are for ATE.
# Case Study 2: Bounding the Effect of Right Heart Catheterization
As a second empirical example, we revisit the well-known Right Heart Catheterization Dataset.
In particular, we apply our methods to @Connors1996's study of the
efficacy of right heart catheterization (RHC), which is a diagnostic
procedure for directly measuring cardiac function in critically ill
patients. This dataset has been subsequently used in the context of
limited overlap by @crump2009dealing, @Rothe:2017, and
@Li-et-al:2018 among others. The dataset is publicly available on the
Vanderbilt Biostatistics website at
<https://hbiostat.org/data>.
In this example, the dependent variable is 1 if a patient survived after
30 days of admission, and 0 if a patient died within 30 days. The binary
treatment variable is 1 if RHC was applied within 24 hours of admission,
and 0 otherwise. The sample size was $n = 5735$, and 2184 patients were
treated with RHC. There are a large number of covariates:
@hirano2001estimation constructed 72 variables from the dataset; the
same covariates were considered in both @crump2009dealing and
@Li-et-al:2018, while 50 covariates were used in @Rothe:2017. In our
exercise, we constructed the same 72 covariates.
A cleaned version of the dataset is available in the package.
```{r}
Y <- RHC[,"survival"]
D <- RHC[,"RHC"]
X <- as.matrix(RHC[,-c(1,2)])
```
As in the aforementioned papers, we estimated the propensity scores by a
logit model with all 72 covariates being added linearly.
```{r}
# Logit estimation of propensity score
glm_ps <- stats::glm(D~X,family=binomial("logit"))
ps <- glm_ps$fitted.values
ps_treated <- ps[D==1]
ps_control <- ps[D==0]
# Plotting histograms of propensity scores
df <- data.frame(cbind(D,ps))
colnames(df)<-c("RHC","PS")
df$RHC <- as.factor(df$RHC)
levels(df$RHC) <- c("No RHC (Control)", "RHC (Treated)")
ggplot2::ggplot(df, ggplot2::aes(x=PS, color=RHC, fill=RHC)) +
ggplot2::geom_histogram(breaks=seq(0,1,0.1),alpha=0.5,position="identity")
```
The figure above shows the
histograms of estimated propensity scores for treated and control
groups. It is very similar to Fig.1 in @crump2009dealing and to Figure
2 in @Rothe:2017. The support of the estimated propensity scores is
almost the entire unit interval for both treated and control units,
although there is some visual evidence of limited overlap (that is,
control units have far fewer propensity scores close to 1).
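To attach a rough number to this, we can tabulate how many estimated propensity scores lie near the upper boundary in each group; the 0.9 cutoff below is an arbitrary illustrative choice.
```{r}
c(control = mean(ps_control > 0.9), treated = mean(ps_treated > 0.9))
```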
## Bounding the Average Treatment Effect on the Treated
In this section, we focus on ATT. We first estimate ATT by the
normalized inverse probability weighted estimator:
$$\begin{aligned}
\widehat{\text{ATT}}_{\text{PS}}
:= \frac{\sum_{i=1}^n D_i Y_i}{\sum_{i=1}^n D_i}
- \frac{\sum_{i=1}^n (1-D_i) W_i Y_i}{\sum_{i=1}^n (1-D_i) W_i},
\end{aligned}$$
where $W_i := \widehat{p}(X_i)/[1-\widehat{p}(X_i)]$ and
$\widehat{p}(X_i)$ is the estimated propensity score for observation $i$
based on the logit model described above.
See, e.g., equation (3) and discussions in @busso2014new for
details of the normalized inverse probability weighted ATT
estimator.
The estimator
$\widehat{\text{ATT}}_{\text{PS}}$ requires that the assumed propensity
score model is correctly specified and the overlap condition is
satisfied. The resulting estimate is
$\widehat{\text{ATT}}_{\text{PS}} = -0.0639$.
```{r}
# ATT normalized estimation
y1_att <- mean(D*Y)/mean(D)
att_wgt <- ps/(1-ps)
y0_att_num <- mean((1-D)*att_wgt*Y)
y0_att_den <- mean((1-D)*att_wgt)
y0_att <- y0_att_num/y0_att_den
att_ps <- y1_att - y0_att
print(att_ps)
```
We now turn to our methods. As before, we take the reference propensity score to be
$\widehat{p}_{\text{RPS}}(X_i) = n^{-1} \sum_{j=1}^n D_j$ for each
observation $i$. That is, we assign the sample proportion of the treated
to the reference propensity scores uniformly for all observations. Of
course, this is likely to be misspecified; however, it has the advantage
that $\widehat{p}_{\text{RPS}}(X_i)$ is never close to 0 or 1.
```{r}
rps <- rep(mean(D),length(D))
```
The
resulting inverse reference-propensity-score weighted ATT
estimator is
$$\begin{aligned}
\widehat{\text{ATT}}_{\text{RPS}}
:= \frac{\sum_{i=1}^n D_i Y_i}{\sum_{i=1}^n D_i}
- \frac{\sum_{i=1}^n (1-D_i) Y_i}{\sum_{i=1}^n (1-D_i)}
= -0.0507.
\end{aligned}$$
```{r}
att_rps <- mean(D*Y)/mean(D) - mean((1-D)*Y)/mean(1-D)
print(att_rps)
```
When the sample proportion is used as the propensity score estimator, there is no difference between
unnormalized and normalized versions of ATT estimates. In fact, it is simply the mean difference between treatment and control groups.
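This can be checked directly. With a constant reference propensity score, the ATT weight $W_i$ is the same constant for every control unit, so it cancels in the normalized estimator; one common unnormalized form, which divides the weighted control sum by the number of treated, reduces to the same quantity. The short check below simply re-implements both formulas with the constant weight (the object names are ours, for illustration only).
```{r}
w_const <- rps[1] / (1 - rps[1])  # constant ATT weight implied by the RPS
att_unnorm <- mean(D * Y) / mean(D) - mean((1 - D) * w_const * Y) / mean(D)
att_norm <- mean(D * Y) / mean(D) -
  mean((1 - D) * w_const * Y) / mean((1 - D) * w_const)
c(unnormalized = att_unnorm, normalized = att_norm, mean_difference = att_rps)
```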
```{r}
Xunique <- mgcv::uniquecombs(X) # A matrix of unique rows from X
print(c("no. of unique rows:", nrow(Xunique)))
print(c("sample size :", nrow(X)))
```
Note that no two observations in the sample share identical covariate values; every row of $X$ is unique. We therefore implement hierarchical, agglomerative clustering with complete linkage.
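The clustering itself is carried out inside `atebounds`/`attbounds`, but its flavor can be sketched with base R. The snippet below (not run) is only an illustration: the use of Euclidean distance on the raw covariates and the object names are assumptions for exposition, not the package's exact internal implementation.
```{r, eval=FALSE}
# Illustrative sketch only: hierarchical agglomerative clustering,
# complete linkage, cut into n_hc groups
n_hc <- ceiling(length(Y) / 10)
hc <- hclust(dist(X), method = "complete")
cluster_id <- cutree(hc, k = n_hc)
table(table(cluster_id))  # distribution of cluster sizes
```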
We show empirical results using the default option.
```{r}
summary(attbounds(Y, D, X, rps))
```
To check sensitivity with respect to the tuning parameters, we vary $L$ and $Q$.
```{r, eval=FALSE}
# Bounding ATT: sensitivity analysis
# not run to save time
nhc_set <- c(5, 10, 20)
results_att <- {}
for (hc in nhc_set){
nhc <- ceiling(length(Y)/hc)
for (q in c(1,2,3,4)){
res <- attbounds(Y, D, X, rps, Q = q, n_hc = nhc)
results_att <- rbind(results_att,c(hc,q,res$est_lb,res$est_ub,res$ci_lb,res$ci_ub))
}
}
colnames(results_att) = c("L","Q","LB","UB","CI-LB","CI-UB")
print(results_att, digits = 3)
```
The table below reports
estimation results of ATT bounds for extended values of $L$ and $Q$.
When $Q=1$, our estimated bounds correspond to the Manski bounds, which
include zero and are wide, with an interval length of almost one for all
values of $L$. Our bounds with $Q=1$ differ across $L$ because we
apply hierarchical clustering before obtaining the Manski bounds. With
$Q=2$, the bounds shrink so that the estimated upper bound is approximately
zero for all values of $L$; with $Q = 3$, they shrink even further, so that
the upper end point of the 95% confidence interval falls below zero. Among
the three values of $L$, the case of $L=5$ gives the tightest
confidence interval, but in this case the lower bound is larger than the
upper bound, indicating that the estimates might be biased. In view of
that, we take the bound estimates with $L=10$ as our preferred estimates,
$[-0.077, -0.039]$, with the 95% confidence interval $[-0.117,-0.006]$.
When $Q=4$, the lower bound estimates exceed the upper bound estimates
with $L = 5, 10$. However, the estimates with $L = 20$ give almost
identical results to our preferred estimates. It seems that the pairs of
$(L, Q) = (10, 3)$ or $(L, Q) = (20, 4)$ provide reasonable estimates.
| L | Q | LB | UB | CI-LB | CI-UB |
|----:|----:|-------:|-------:|-------:|-------:|
| 5 | 1 | -0.638 | 0.282 | -0.700 | 0.330 |
| | 2 | -0.131 | -0.000 | -0.174 | 0.033 |
| | 3 | -0.034 | -0.048 | -0.076 | -0.007 |
| | 4 | -0.006 | -0.073 | -0.079 | -0.006 |
| 10 | 1 | -0.664 | 0.307 | -0.766 | 0.376 |
| | 2 | -0.169 | 0.004 | -0.216 | 0.039 |
| | 3 | -0.077 | -0.039 | -0.117 | -0.006 |
| | 4 | -0.049 | -0.057 | -0.090 | -0.016 |
| 20 | 1 | -0.675 | 0.316 | -0.843 | 0.430 |
| | 2 | -0.178 | -0.005 | -0.238 | 0.034 |
| | 3 | -0.099 | -0.046 | -0.149 | -0.007 |
| | 4 | -0.065 | -0.060 | -0.112 | -0.017 |
Table: ATT Bounds: Right Heart Catheterization Study
The study of @Connors1996 concluded that RHC could cause an
increase in patient mortality. Based on our preferred estimates, we can
exclude positive effects on survival with confidence. This conclusion relies
solely on the unconfoundedness condition; it requires neither the overlap
condition nor the correct specification of the logit model. Overall,
our estimates seem to be consistent with the qualitative findings in
@Connors1996.
## Bounding the Average Treatment Effect
We now turn to bounds on ATE. Using again the sample proportion of the treated as the reference propensity score,
we bound the ATE.
We start with $Q=1$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 1))
```
We now consider $Q=2$
```{r}
summary(atebounds(Y, D, X, rps, Q = 2))
```
and $Q=3$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 3))
```
Finally, we take $Q = 4$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 4))
```
Overall, the results are similar to ATT.
# Case Study 3: EFM
The electronic fetal monitoring (EFM) and cesarean section (CS) dataset from @EFMdata consists of observations on 14,484 women who delivered at Beth Israel Hospital, Boston from January 1970 to December 1975. The purpose of the study is to evaluate the impact of EFM on cesarean section (CS) rates. @EFMdata report that relevant confounding factors are: nulliparity (nullipar), arrest of labor progression (arrest), malpresentation (breech), and year of study (year). The dataset included in the R package is from the supplementary materials of @RRW-JASA, who used this dataset to illustrate their proposed methods for modeling and estimating relative risk and risk difference. In this dataset, all covariates are discrete.
```{r}
Y <- EFM[,"cesarean"]
D <- EFM[,"monitor"]
X <- as.matrix(EFM[,c("arrest", "breech", "nullipar", "year")])
year <- EFM[,"year"]
```
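As a point of reference, the simple mean difference in the outcome between the treatment and control groups is: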
```{r}
ate_rps <- mean(D*Y)/mean(D) - mean((1-D)*Y)/mean(1-D)
print(ate_rps)
```
We take the reference propensity score to be the sample proportion of the treated.
```{r}
rps <- rep(mean(D),length(D))
print(rps[1])
```
## Bounding the Average Treatment Effect
Using again the sample proportion of the treated as the reference propensity score,
we bound the ATE.
We start with $Q=1$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 1, x_discrete = TRUE))
```
We now consider $Q=2$
```{r}
summary(atebounds(Y, D, X, rps, Q = 2, x_discrete = TRUE))
```
and $Q=3$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 3, x_discrete = TRUE))
```
In this example, the reference propensity score is close to 0.5, thus implying that the results will be robust even if we take a very large $Q$. In view of that, we take $Q = 5, 10, 20, 50, 100$.
```{r}
summary(atebounds(Y, D, X, rps, Q = 5, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 10, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 20, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 50, x_discrete = TRUE))
summary(atebounds(Y, D, X, rps, Q = 100, x_discrete = TRUE))
```
Overall, the empirical results suggest that there is no significant effect of EFM on cesarean section rates.
## References
| /scratch/gouwar.j/cran-all/cranData/ATbounds/vignettes/ATbounds_vignette.Rmd |
AUCNews <-
function() {
newsfile <- file.path(system.file(package="AUC"), "NEWS")
file.show(newsfile)
}
| /scratch/gouwar.j/cran-all/cranData/AUC/R/AUCNews.R |
#' Compute the area under the curve of a given performance measure.
#'
#' This function computes the area under the sensitivity curve (AUSEC), the area under the specificity curve (AUSPC),
#' the area under the accuracy curve (AUACC), or the area under the receiver operating characteristic curve (AUROC).
#'
#' @param x an object produced by one of the functions \code{sensitivity}, \code{specificity}, \code{accuracy}, or \code{roc}
#' @param min a numeric value between 0 and 1, denoting the cutoff that defines the start of the area under the curve
#' @param max a numeric value between 0 and 1, denoting the cutoff that defines the end of the area under the curve
#'
#' @examples
#'
#' data(churn)
#'
#' auc(sensitivity(churn$predictions,churn$labels))
#'
#' auc(specificity(churn$predictions,churn$labels))
#'
#' auc(accuracy(churn$predictions,churn$labels))
#'
#' auc(roc(churn$predictions,churn$labels))
#'
#'
#' @references Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
#' @seealso \code{\link{sensitivity}}, \code{\link{specificity}}, \code{\link{accuracy}}, \code{\link{roc}}, \code{\link{auc}}, \code{\link{plot}}
#' @return A numeric value between zero and one denoting the area under the curve
#' @author Authors: Michel Ballings and Dirk Van den Poel, Maintainer: \email{Michel.Ballings@@UGent.be}
auc <- function(x, min=0,max=1) {
if (any(class(x) == "roc")) {
if (min != 0 || max != 1 ) {
x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
}
ans <- 0
for (i in 2:length(x$fpr)) {
ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i-1]) * (x$tpr[i] + x$tpr[i-1])
}
}else if (any(class(x) %in% c("accuracy","sensitivity","specificity"))) {
if (min != 0 || max != 1 ) {
x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
}
ans <- 0
for (i in 2:(length(x$cutoffs))) {
ans <- ans + 0.5 * abs(x$cutoffs[i-1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i-1])
}
}
return(as.numeric(ans))
} | /scratch/gouwar.j/cran-all/cranData/AUC/R/auc.R |
#labels need to be a factor, predictions need to be numeric
.confusionMatrix <- function(predictions,labels,perc.rank, measure) {
#This function is a scaled down faster version of parts of code of the ROCR package.
#It has less functionality and less error handling and is focused on speed.
#For more functionality (e.g., averaging cross validaton runs) see ROCR
if (measure=='sensitivity') {
if (perc.rank==TRUE) predictions <- rank(predictions,ties.method="min")/length(predictions)
} else if (measure =='specificity') {
if (perc.rank==TRUE) predictions <- rank(predictions,ties.method="max")/length(predictions)
} else if (measure =='accuracy'){
if (table(labels)[1] >= table(labels)[2]) {
if (perc.rank==TRUE) predictions <- rank(predictions,ties.method="max")/length(predictions)
} else if (table(labels)[1] < table(labels)[2]) {
if (perc.rank==TRUE) predictions <- rank(predictions,ties.method="min")/length(predictions)
}
}
levels <- sort(levels(labels))
labels <- ordered(labels,levels=levels)
# if (length(levels) != 2) {
# message <- paste("Number of classes should be 2.\n")
# stop(message)
# }
## compute cutoff/fp/tp data
cutoffs <- numeric()
fp <- numeric()
tp <- numeric()
fn <- numeric()
tn <- numeric()
n.pos <- numeric()
n.neg <- numeric()
n.pos.pred <- numeric()
n.neg.pred <- numeric()
n.pos <- sum( labels == levels[2] )
n.neg <- sum( labels == levels[1] )
pos.label <- levels(labels)[2]
neg.label <- levels(labels)[1]
pred.order <- order(predictions, decreasing=TRUE)
predictions.sorted <- predictions[pred.order]
  tp <- cumsum(labels[pred.order]==pos.label) #predicted to be positive, and in reality they are positive (true positives)
fp <- cumsum(labels[pred.order]==neg.label) #predicted to be positive, but in reality they are not (i.e., negative since there are 2 classes)
## remove fp & tp for duplicated predictions
  ## as duplicated keeps the first occurrence, but we want the last, so two rev calls are used.
dups <- rev(duplicated(rev(predictions.sorted)))
tp <- c(0, tp[!dups])
fp <- c(0, fp[!dups])
cutoffs <- c(1, predictions.sorted[!dups])
fn <- n.pos - tp
tn <- n.neg - fp
n.pos.pred <- tp + fp
n.neg.pred <- tn + fn
list(cutoffs=cutoffs,tp=tp, fp=fp, tn=tn, fn=fn, n.pos=n.pos, n.neg=n.neg, n.pos.pred=n.pos.pred, n.neg.pred=n.neg.pred)
} | /scratch/gouwar.j/cran-all/cranData/AUC/R/auxiliary.R |
#' Compute the receiver operating characteristic (ROC) curve.
#'
#' This function computes the receiver operating characteristic (ROC) curve required for the \code{auc} function and the \code{plot} function.
#'
#' @param predictions A numeric vector of classification probabilities (confidences, scores) of the positive event.
#' @param labels A factor of observed class labels (responses) with the only allowed values \{0,1\}.
#'
#' @examples
#'
#' data(churn)
#'
#' roc(churn$predictions,churn$labels)
#'
#' @references Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
#' @seealso \code{\link{sensitivity}}, \code{\link{specificity}}, \code{\link{accuracy}}, \code{\link{roc}}, \code{\link{auc}}, \code{\link{plot}}
#' @return A list containing the following elements:
#' \item{cutoffs}{A numeric vector of threshold values}
#' \item{fpr}{A numeric vector of false positive rates corresponding to the threshold values}
#' \item{tpr}{A numeric vector of true positive rates corresponding to the threshold values}
#' @author Authors: Michel Ballings and Dirk Van den Poel, Maintainer: \email{Michel.Ballings@@UGent.be}
roc <- function(predictions, labels) {
#This function is a scaled down faster version of parts of code of the ROCR package.
#It has less functionality and less error handling and is focused on speed.
#please see ROCR
cm <- .confusionMatrix(predictions,labels,FALSE,'roc')
x <- cm$fp / cm$n.neg
y <- cm$tp / cm$n.pos
finite.bool <- is.finite(x) & is.finite(y)
x <- x[ finite.bool ]
y <- y[ finite.bool ]
if (length(x) < 2) {
stop(paste("Not enough distinct predictions to compute area",
"under the ROC curve."))
}
ans <- list(cutoffs=cm$cutoffs, fpr=x, tpr=y )
class(ans) <- c('AUC','roc')
return(ans)
}
#' Compute the sensitivity curve.
#'
#' This function computes the sensitivity curve required for the \code{auc} function and the \code{plot} function.
#'
#' @param predictions A numeric vector of classification probabilities (confidences, scores) of the positive event.
#' @param labels A factor of observed class labels (responses) with the only allowed values \{0,1\}.
#' @param perc.rank A logical. If TRUE (default) the percentile rank of the predictions is used.
#'
#' @examples
#'
#' data(churn)
#'
#' sensitivity(churn$predictions,churn$labels)
#'
#' @references Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
#' @seealso \code{\link{sensitivity}}, \code{\link{specificity}}, \code{\link{accuracy}}, \code{\link{roc}}, \code{\link{auc}}, \code{\link{plot}}
#' @return A list containing the following elements:
#' \item{cutoffs}{A numeric vector of threshold values}
#' \item{measure}{A numeric vector of sensitivity values corresponding to the threshold values}
#' @author Authors: Michel Ballings and Dirk Van den Poel, Maintainer: \email{Michel.Ballings@@UGent.be}
sensitivity <- function(predictions, labels, perc.rank=TRUE) {
cm <- .confusionMatrix(predictions,labels,perc.rank, 'sensitivity')
ans <- list( cutoffs=c(cm$cutoffs,0), measure=c(cm$tp / cm$n.pos,1) )
class(ans) <- c('AUC','sensitivity')
return(ans)
}
#' Compute the specificity curve.
#'
#' This function computes the specificity curve required for the \code{auc} function and the \code{plot} function.
#'
#' @param predictions A numeric vector of classification probabilities (confidences, scores) of the positive event.
#' @param labels A factor of observed class labels (responses) with the only allowed values \{0,1\}.
#' @param perc.rank A logical. If TRUE (default) the percentile rank of the predictions is used.
#'
#' @examples
#'
#' data(churn)
#'
#' specificity(churn$predictions,churn$labels)
#'
#' @references Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
#' @seealso \code{\link{sensitivity}}, \code{\link{specificity}}, \code{\link{accuracy}}, \code{\link{roc}}, \code{\link{auc}}, \code{\link{plot}}
#' @return A list containing the following elements:
#' \item{cutoffs}{A numeric vector of threshold values}
#' \item{measure}{A numeric vector of specificity values corresponding to the threshold values}
#' @author Authors: Michel Ballings and Dirk Van den Poel, Maintainer: \email{Michel.Ballings@@UGent.be}
specificity <- function(predictions, labels, perc.rank=TRUE) {
cm <- .confusionMatrix(predictions,labels, perc.rank, 'specificity')
ans <- list( cutoffs=c(cm$cutoffs,0), measure=c(cm$tn / cm$n.neg,0) )
class(ans) <- c('AUC','specificity')
return(ans)
}
#' Compute the accuracy curve.
#'
#' This function computes the accuracy curve required for the \code{auc} function and the \code{plot} function.
#'
#' @param predictions A numeric vector of classification probabilities (confidences, scores) of the positive event.
#' @param labels A factor of observed class labels (responses) with the only allowed values \{0,1\}.
#' @param perc.rank A logical. If TRUE (default) the percentile rank of the predictions is used.
#'
#' @examples
#'
#' data(churn)
#'
#' accuracy(churn$predictions,churn$labels)
#'
#' @references Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
#' @seealso \code{\link{sensitivity}}, \code{\link{specificity}}, \code{\link{accuracy}}, \code{\link{roc}}, \code{\link{auc}}, \code{\link{plot}}
#' @return A list containing the following elements:
#' \item{cutoffs}{A numeric vector of threshold values}
#' \item{measure}{A numeric vector of accuracy values corresponding to the threshold values}
#' @author Authors: Michel Ballings and Dirk Van den Poel, Maintainer: \email{Michel.Ballings@@UGent.be}
accuracy <- function(predictions, labels, perc.rank=TRUE) {
cm <- .confusionMatrix(predictions,labels, perc.rank, 'accuracy')
#at cutoff of 0 the accuracy equals maximal tpr (i.e., 1) times the proportion of positives
ans <- list( cutoffs=c(cm$cutoffs,0), measure= c((cm$tn+cm$tp) / (cm$n.pos + cm$n.neg), mean(as.integer(as.character(labels)) ) ))
class(ans) <- c('AUC','accuracy')
return(ans)
} | /scratch/gouwar.j/cran-all/cranData/AUC/R/measures.R |
#' Plot the sensitivity, specificity, accuracy and roc curves.
#'
#' This function plots the (partial) sensitivity, specificity, accuracy and roc curves.
#'
#' @param x an object produced by one of the functions \code{sensitivity}, \code{specificity}, \code{accuracy}, or \code{roc}
#' @param y Not used.
#' @param ... Arguments to be passed to methods, such as graphical parameters. See ?plot
#' @param type Type of plot. Default is line plot.
#' @param add Logical. If TRUE the curve is added to an existing plot. If FALSE a new plot is created.
#' @param min a numeric value between 0 and 1, denoting the cutoff that defines the start of the area under the curve
#' @param max a numeric value between 0 and 1, denoting the cutoff that defines the end of the area under the curve
#'
#' @examples
#'
#' data(churn)
#'
#' plot(sensitivity(churn$predictions,churn$labels))
#'
#' plot(specificity(churn$predictions,churn$labels))
#'
#' plot(accuracy(churn$predictions,churn$labels))
#'
#' plot(roc(churn$predictions,churn$labels))
#'
#'
#' @references Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
#' @seealso \code{\link{sensitivity}}, \code{\link{specificity}}, \code{\link{accuracy}}, \code{\link{roc}}, \code{\link{auc}}, \code{\link{plot}}
#' @author Authors: Michel Ballings and Dirk Van den Poel, Maintainer: \email{Michel.Ballings@@UGent.be}
#' @method plot AUC
plot.AUC <- function(x,y=NULL, ...,type='l',add=FALSE, min=0,max=1) {
if (any(class(x) == "roc")) {
if (min != 0 || max != 1 ) {
x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
}
}else{
if (min != 0 || max != 1 ) {
ind <- x$cutoffs >= min & x$cutoffs <= max
x$cutoffs <- x$cutoffs[ind]
x$measure <- x$measure[ind]
}
}
if (any(class(x) == "roc")) {
if (add==FALSE) {
plot(x$fpr,x$tpr, type=type,xlab='1- specificity', ylab='sensitivity',xlim=c(0,1), ylim=c(0,1),...)
lines(x=seq(0,1,by=0.01),y=seq(0,1,by=0.01),lty=1, col='grey')
}else {
lines(x$fpr,x$tpr,type=type, xlab='1- specificity', ylab='sensitivity',...)
}
}else if (any(class(x) == "accuracy")) {
if (add==FALSE) {
plot(x$cutoffs,x$measure,type=type, xlab='Cutoffs', ylab='Accuracy',xlim=c(0,1), ylim=c(0,1),...)
}else {
lines(x$cutoffs,x$measure,type=type, xlab='Cutoffs', ylab='Accuracy', ...)
}
}else if (any(class(x) == "specificity")) {
if (add==FALSE) {
plot(x$cutoffs,x$measure,type=type, xlab='Cutoffs', ylab='Specificity',xlim=c(0,1), ylim=c(0,1),...)
}else {
lines(x$cutoffs,x$measure,type=type, xlab='Cutoffs', ylab='Specificity', ...)
}
}else if (any(class(x) == "sensitivity")) {
if (add==FALSE) {
plot(x$cutoffs, x$measure,type=type, xlab='Cutoffs', ylab='Sensitivity',xlim=c(0,1), ylim=c(0,1), ...)
}else {
lines(x$cutoffs, x$measure,type=type, xlab='Cutoffs', ylab='Sensitivity', ...)
}
}
} | /scratch/gouwar.j/cran-all/cranData/AUC/R/plot.R |
.onAttach <- function(libname, pkgname) {
AUCver <- read.dcf(file=system.file("DESCRIPTION", package=pkgname),
fields="Version")
packageStartupMessage(paste(pkgname, AUCver))
packageStartupMessage("Type AUCNews() to see the change log and ?AUC to get an overview.")
}
| /scratch/gouwar.j/cran-all/cranData/AUC/R/zzz.R |
#' Firth AU testing
#'
#' Calculates approximate unconditional Firth test p-value for testing independence in 2x2 case-control tables.
#' The Firth test requires significantly more computational time than the tests computed in the au.tests function.
#' @param m0 Number of control subjects
#' @param m1 Number of case subjects
#' @param r0 Number of control subjects exposed
#' @param r1 Number of case subjects exposed
#' @param lowthresh A threshold for probabilities below to be considered as zero. Defaults to 1e-12.
#' @return A single AU p-value, computed under the Firth test.
#' @examples
#' au.firth(15000, 5000, 1, 0)
au.firth = function(m0, m1, r0, r1, lowthresh=1E-12)
{
if (r0 == 0 & r1 == 0)
{
return(c(au.firth.p = 1))
}
if (r0 == m0 & r1 == m1)
{
return(c(au.firth.p = 1))
}
if (is.na(m0 + m1 + r0 + r1))
{
return(c(au.firth.p = NA))
}
p = (r0+r1)/(m0+m1) # observed p
y = c(1,1,0,0)
x = c(1,0,1,0)
data = data.frame(y = y, x = x)
weights = c(r1, m1-r1, r0, m0-r0)
firth.t = sum(c(-2,2)*logistf(y~x, data=data, weights=weights)$loglik)
# Approximate unconditional p-value
hicount = qbinom(lowthresh, m0+m1, p, lower.tail = F)
dd = expand.grid(r0x=0:hicount, r1x=0:hicount)
dd = dd[-1,]
dd = subset(dd, dd$r0x + dd$r1x <= hicount)
dd$prob = dbinom(dd$r0x, m0, p)*dbinom(dd$r1x, m1, p)
dd$firth.tx = sapply(1:nrow(dd), function(a)
{sum(c(-2,2)*logistf(y~x, data=data, weights = c(dd$r1x[a], m1-dd$r1x[a], dd$r0x[a], m0-dd$r0x[a]))$loglik)})
matchrow = which( with(dd, r0x==r0 & r1x==r1) )
p.obs = dd[matchrow, "prob"]
dd = dd[-matchrow,]
dd.firth = p.obs + sum(dd[dd$firth.tx >= firth.t,]$prob)
c(au.firth.p = dd.firth)
}
| /scratch/gouwar.j/cran-all/cranData/AUtests/R/au.firth.r |
#' Stratified AU testing
#'
#' Calculates AU p-values for testing independence in 2x2 case-control tables, while adjusting for categorical covariates.
#' Inputs are given as a vector of counts in each strata defined by the covariate(s). Note that computational time can be extremely high.
#' @param m0list Number of control subjects in each strata
#' @param m1list Number of case subjects in each strata
#' @param r0list Number of control subjects exposed in each strata
#' @param r1list Number of case subjects exposed in each strata
#' @param lowthresh A threshold for probabilities below to be considered as zero. Defaults to 1e-12.
#' @return An AU p-value, computed under the likelihood ratio test.
#' @examples
#' au.test.strat(c(500, 1250), c(150, 100), c(0, 0), c(10, 5))
au.test.strat = function(m0list, m1list, r0list, r1list, lowthresh=1E-12)
{
q = length(m0list) # number of strata
# For each strata:
# - make data frame (dd) of all plausible outcomes
# - calculate (log) probability of each outcome being observed
# - calculate LR stat for each outcome
dd.list = lapply(1:q, function(i)
{
p = (r0list[i]+r1list[i])/(m0list[i]+m1list[i])
count = qbinom(c(1-lowthresh, lowthresh), m0list[i]+m1list[i], p, lower.tail = FALSE)
locount = count[1]
hicount = count[2]
dd = expand.grid(r0x=0:min(hicount, m0list[i]), r1x=0:min(hicount, m1list[i]))
dd = dd[dd$r0x + dd$r1x <= hicount & dd$r0x + dd$r1x >= locount,]
delrows = which(with(dd, r0x > m0list[i] | r1x > m1list[i]))
if (length(delrows) > 0) dd = dd[-delrows,]
dd$prob = dbinom(dd$r0x, m0list[i], p, log = TRUE) + dbinom(dd$r1x, m1list[i], p, log = TRUE)
dd$p0x = with(dd, r0x/m0list[i])
dd$p1x = with(dd, r1x/m1list[i])
dd$pLx = with(dd, (r0x+r1x)/(m0list[i]+m1list[i]))
dd$llik.nullx = with(dd, dbinom(r0x, m0list[i], pLx, log = TRUE) + dbinom(r1x, m1list[i], pLx, log = TRUE))
dd$llik.altx = with(dd, dbinom(r0x, m0list[i], p0x, log = TRUE) + dbinom(r1x, m1list[i], p1x, log = TRUE))
dd$llrx = with(dd, llik.altx - llik.nullx)
dd
})
lengths = sapply(1:q, function(i) dim(dd.list[[i]])[1]) # number of outcomes in each strata
# all combinations of outcomes across all strata (dim (l1*l2*...*lq) x q )
dd2 = expand.grid(lapply(1:q, function(i) 1:lengths[i]))
# compute overall probabilities and LR test statistics
dd2$prob = exp(rowSums(sapply(1:q, function(i){ dd.list[[i]]$prob[dd2[,i]] } )))
dd2$llrx = rowSums(sapply(1:q, function(i){ dd.list[[i]]$llrx[dd2[,i]] } ))
# Compare LR stats to observed data
matchrows = sapply(1:q, function(i) which(with(dd.list[[i]], (r0x==r0list[i]) & (r1x==r1list[i])))) # matches in dd.list (q matches)
matchrow = which(apply(dd2[,1:q], 1, function(indexes){ all(indexes==matchrows) } )) # match in dd2 (1 match) - can this be sped up??
# Observed LR stat
llr = dd2[matchrow, "llrx"]
p.obs = dd2[matchrow, "prob"]
dd2 = dd2[-matchrow,]
# AU p-value: sum probabilities for all outcome combinations with LR stats >= observed LR stat
dd.lrt = p.obs + sum(dd2[dd2$llrx >= llr,]$prob)
c(lrt.p=dd.lrt)
}
| /scratch/gouwar.j/cran-all/cranData/AUtests/R/au.test.strat.r |
# AU p-value
# * Input: single dataset (m0 = # control, m1 = # case, r0 = # control with allele, r1 = # case with allele)
# * Compute test statistic, T
# * Given n = m0+m1, p = (r0+r1)/(m0+m1), get all plausible datasets (m0, m1, r0x, r1x), assign P(r0x,r1x) = P(r0x)P(r1x) (null)
# * Compute test statistics Tx for all plausible datasets
# * Output: sum probabilities of datasets where abs(Tx) >= abs(T)
#' AU testing
#'
#' Calculates approximate unconditional p-values for testing independence in 2x2 case-control tables.
#' @param m0 Number of control subjects
#' @param m1 Number of case subjects
#' @param r0 Number of control subjects exposed
#' @param r1 Number of case subjects exposed
#' @param lowthresh A threshold for probabilities below to be considered as zero. Defaults to 1e-12.
#' @return A vector of AU p-values, computed under score, likelihood ratio, and Wald tests.
#' @examples
#' au.tests(15000, 5000, 30, 25)
#' au.tests(10000, 10000, 30, 25)
au.tests = function(m0, m1, r0, r1, lowthresh=1E-12)
{
if (r0 == 0 & r1 == 0)
{
return(c(score.p = 1, lr.p = 1, wald.p = 1, wald0.p = 1))
}
if (r0 == m0 & r1 == m1)
{
return(c(score.p = 1, lr.p = 1, wald.p = 1, wald0.p = 1))
}
if (is.na(m0 + m1 + r0 + r1))
{
return(c(score.p = NA, lr.p = NA, wald.p = NA, wald0.p = NA))
}
p = (r0+r1)/(m0+m1) # observed p
# Score test
ybar = m1/(m0+m1)
t = r1*(1-ybar) - r0*ybar
sd.t = sqrt((1-ybar)^2*m1*p*(1-p) + ybar^2*m0*p*(1-p))
# Likelihood ratio test
p0 = r0/m0
p1 = r1/m1
pL = (r0+r1)/(m0+m1)
llik.null = dbinom(r0, m0, pL, log=T) + dbinom(r1, m1, pL, log=T)
llik.alt = dbinom(r0, m0, p0, log=T) + dbinom(r1, m1, p1, log=T)
llr = llik.alt - llik.null
# Wald test (with regularization)
reg = 0.5
betahat = log( (r1+reg)/(m1-r1+reg)/((r0+reg)/(m0-r0+reg)) )
sehat = sqrt(1/(r0+reg) + 1/(r1+reg) + 1/(m0-r0+reg) + 1/(m1-r1+reg))
waldT = betahat/sehat
# Wald test (no regularization)
reg0 = 0
betahat0 = log( (r1+reg0)/(m1-r1+reg0)/((r0+reg0)/(m0-r0+reg0)) )
sehat0 = sqrt(1/(r0+reg0) + 1/(r1+reg0) + 1/(m0-r0+reg0) + 1/(m1-r1+reg0))
waldT0 = betahat0/sehat0
# Approximate unconditional p-value
hicount = qbinom(lowthresh, m0+m1, p, lower.tail=F)
dd = expand.grid(r0x=0:hicount, r1x=0:hicount)
dd = dd[-1,]
delrows = which(with(dd, r0x > m0 | r1x > m1))
if (length(delrows) > 0) dd = dd[-delrows,]
dd$prob = dbinom(dd$r0x, m0, p)*dbinom(dd$r1x, m1, p)
dd$px = with(dd, (r0x+r1x+1)/(m0+m1+2) )
dd$tx = with(dd, r1x*(1-ybar) - r0x*ybar )
dd$sd.tx = with(dd, sqrt((1-ybar)^2*m1*px*(1-px) + ybar^2*m0*px*(1-px)))
dd$p0x = with(dd, r0x/m0)
dd$p1x = with(dd, r1x/m1)
dd$pLx = with(dd, (r0x+r1x)/(m0+m1))
dd$llik.nullx = with(dd, dbinom(r0x, m0, pLx, log=T) + dbinom(r1x, m1, pLx, log=T))
dd$llik.altx = with(dd, dbinom(r0x, m0, p0x, log=T) + dbinom(r1x, m1, p1x, log=T))
dd$llrx = with(dd, llik.altx - llik.nullx)
dd$betahatw = with(dd, log(r1x+ reg) - log(m1-r1x+reg) - log(r0x+reg) + log(m0-r0x+reg) )
dd$sehatw = with(dd, sqrt(1/(r0x+reg) + 1/(r1x+reg) + 1/(m0-r0x+reg) + 1/(m1-r1x+reg)) )
dd$waldTw = with(dd, betahatw/sehatw)
dd$betahatw0 = with(dd, log(r1x+ reg0) - log(m1-r1x+reg0) - log(r0x+reg0) + log(m0-r0x+reg0) )
dd$sehatw0 = with(dd, sqrt(1/(r0x+reg0) + 1/(r1x+reg0) + 1/(m0-r0x+reg0) + 1/(m1-r1x+reg0)) )
dd$waldTw0 = with(dd, betahatw0/sehatw0)
infrows = which( with(dd, abs(betahatw0) == Inf) )
p.inf = sum(dd[infrows, "prob"])
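  # Locate the observed table; each AU p-value below sums the null probabilities
  # of the tables whose statistic is at least as extreme as the observed one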
matchrow = which( with(dd, r0x==r0 & r1x==r1) )
p.obs = dd[matchrow, "prob"]
dd = dd[-matchrow,]
dd.score = p.obs + sum(dd[abs(dd$tx/dd$sd.tx) >= abs(t/sd.t),]$prob)
dd.lrt = p.obs + sum(dd[dd$llrx >= llr,]$prob)
dd.wald = p.obs + sum(dd[abs(dd$waldTw) >= abs(waldT),]$prob)
dd = dd[-which( with(dd, abs(betahatw0) == Inf) ),]
dd.wald0 = p.obs + sum(dd[abs(dd$waldTw0) >= abs(waldT0),]$prob) + p.inf
c(score.p = dd.score, lr.p = dd.lrt, wald.p=dd.wald, wald0.p = dd.wald0)
}
| /scratch/gouwar.j/cran-all/cranData/AUtests/R/au.tests.r |
#' Basic testing
#'
#' Calculates standard p-values for testing independence in 2x2 case-control tables.
#' @param m0 Number of control subjects
#' @param m1 Number of case subjects
#' @param r0 Number of control subjects exposed
#' @param r1 Number of case subjects exposed
#' @return A vector of p-values, computed under score, likelihood ratio, Wald, Firth, and Fisher's exact tests.
#' @examples
#' basic.tests(15000, 5000, 30, 25)
basic.tests = function(m0, m1, r0, r1)
{
if (r0 == 0 & r1 == 0)
{
return(c(score.p = 1, lr.p = 1, wald.p = 1, wald0.p = 1, firth.p = 1, fisher.p = 1))
}
if (r0 == m0 & r1 == m1)
{
return(c(score.p = 1, lr.p = 1, wald.p = 1, wald0.p = 1, firth.p = 1, fisher.p = 1))
}
if (is.na(m0 + m1 + r0 + r1))
{
return(c(score.p = NA, lr.p = NA, wald.p = NA, wald0.p = NA, firth.p = NA, fisher.p = NA))
}
p = (r0+r1)/(m0+m1) # observed p
# Score test
ybar = m1/(m0+m1)
t = r1*(1-ybar) - r0*ybar
sd.t = sqrt((1-ybar)^2*m1*p*(1-p) + ybar^2*m0*p*(1-p))
score.t = abs(t/sd.t)
score.pv = 2*(1-pnorm(score.t))
# Likelihood ratio test
p0 = r0/m0
p1 = r1/m1
pL = (r0+r1)/(m0+m1)
llik.null = dbinom(r0, m0, pL, log=T) + dbinom(r1, m1, pL, log=T)
llik.alt = dbinom(r0, m0, p0, log=T) + dbinom(r1, m1, p1, log=T)
llr = llik.alt - llik.null
lr.pv = 1-pchisq(2*llr, df = 1)
# Wald test (with regularization)
reg = 0.5
betahat = log( (r1+reg)/(m1-r1+reg)/((r0+reg)/(m0-r0+reg)) )
sehat = sqrt(1/(r0+reg) + 1/(r1+reg) + 1/(m0-r0+reg) + 1/(m1-r1+reg))
waldT = betahat/sehat
wald.pv = 2*(1-pnorm(abs(waldT)))
# Wald test (no regularization)
if (r0 == 0 | r1 == 0) wald0.pv = 1
else
{
reg0 = 0
betahat0 = log( (r1+reg0)/(m1-r1+reg0)/((r0+reg0)/(m0-r0+reg0)) )
sehat0 = sqrt(1/(r0+reg0) + 1/(r1+reg0) + 1/(m0-r0+reg0) + 1/(m1-r1+reg0))
waldT0 = betahat0/sehat0
wald0.pv = 2*(1-pnorm(abs(waldT0)))
}
# Firth test
y = c(1,1,0,0)
x = c(1,0,1,0)
data = data.frame(y = y, x = x)
m.firth = logistf(y~x, data=data, weights=c(r1, m1-r1, r0, m0-r0))
firth.t = sum(c(-2, 2)*m.firth$loglik)
firth.pv = m.firth$prob[2]
names(firth.pv) = NULL
# Fisher's exact test
table.obs = matrix(c(m0-r0, m1-r1, r0, r1), nrow=2,
dimnames=list(Disease = c("Control", "Case"), Gene = c(0, 1)))
fisher.pv = fisher.test(table.obs)$p.value
c(score.p = score.pv, lr.p = lr.pv, wald.p = wald.pv, wald0.p = wald0.pv, firth.p = firth.pv, fisher.p = fisher.pv)
}
| /scratch/gouwar.j/cran-all/cranData/AUtests/R/basic.tests.r |
#' Stratified permutation testing
#'
#' Calculates permutation p-values for testing independence in 2x2 case-control tables, while adjusting for categorical covariates.
#' Inputs are given as a vector of counts in each strata defined by the covariate(s). Note that computational time can be extremely high.
#' @param m0list Number of control subjects in each strata
#' @param m1list Number of case subjects in each strata
#' @param r0list Number of control subjects exposed in each strata
#' @param r1list Number of case subjects exposed in each strata
#' @return A permutation p-value, computed under the likelihood ratio test.
#' @examples
#' perm.test.strat(c(7000, 1000), c(11000, 1000), c(50, 30), c(70, 40))
perm.test.strat = function(m0list, m1list, r0list, r1list)
{
q = length(m0list) # number of strata
# For each strata:
# - make data frame (dd) of all possible (permuted) outcomes
# - calculate (log) probability of each outcome being observed
# - calculate LR stat for each outcome
dd.list = lapply(1:q, function(i)
{
dd = data.frame(r0x=0:(r0list[i]+r1list[i]))
dd$r1x = r0list[i] + r1list[i] - dd$r0x
delrows = which(with(dd, r0x > m0list[i] | r1x > m1list[i]))
if (length(delrows) > 0) dd = dd[-delrows,]
dd$prob = dhyper(dd$r1x, r0list[i]+r1list[i], m0list[i]+m1list[i]-r0list[i]-r1list[i], m1list[i], log=TRUE)
dd$p0x = with(dd, r0x/m0list[i])
dd$p1x = with(dd, r1x/m1list[i])
dd$pLx = with(dd, (r0x+r1x)/(m0list[i]+m1list[i]))
dd$llik.nullx = with(dd, dbinom(r0x, m0list[i], pLx, log=T) + dbinom(r1x, m1list[i], pLx, log=T))
dd$llik.altx = with(dd, dbinom(r0x, m0list[i], p0x, log=T) + dbinom(r1x, m1list[i], p1x, log=T))
dd$llrx = with(dd, llik.altx - llik.nullx)
dd
})
lengths = sapply(1:q, function(i) dim(dd.list[[i]])[1]) # number of outcomes in each strata
# all combinations of outcomes across all strata (dim (l1*l2*...*lq) x q )
dd2 = expand.grid(lapply(1:q, function(i) 1:lengths[i]))
# compute overall probabilities and LR test statistics
dd2$prob = exp(rowSums( sapply(1:q, function(i){dd.list[[i]]$prob[dd2[,i]]} ) ))
dd2$llrx = rowSums( sapply(1:q, function(i){dd.list[[i]]$llrx[dd2[,i]]} ) )
# Compare LR stats to observed data
matchrows = sapply(1:q, function(i) which(with(dd.list[[i]], (r0x==r0list[i]) & (r1x==r1list[i])))) # matches in dd.list (q matches)
matchrow = which(apply(dd2[,1:q], 1, function(indexes){ all(indexes==matchrows) } )) # match in dd2 (1 match)
# Observed LR stat
llr = dd2[matchrow, "llrx"]
p.obs = dd2[matchrow, "prob"]
dd2 = dd2[-matchrow,]
# Permutation p-value: sum probabilities for all outcome combinations with LR stats >= observed LR stat
dd.lrt = p.obs + sum(dd2[dd2$llrx >= llr,]$prob)
c(lrt.p=dd.lrt)
}
| /scratch/gouwar.j/cran-all/cranData/AUtests/R/perm.test.strat.r |
#' Permutation testing
#'
#' Calculates permutation p-values for testing independence in 2x2 case-control tables.
#' @param m0 Number of control subjects
#' @param m1 Number of case subjects
#' @param r0 Number of control subjects exposed
#' @param r1 Number of case subjects exposed
#' @param lowthresh A threshold for probabilities below to be considered as zero. Defaults to 1e-12.
#' @return A vector of permutation p-values, computed under score, likelihood ratio, Wald, and Firth tests.
#' @examples
#' perm.tests(15000, 5000, 30, 25)
perm.tests = function(m0, m1, r0, r1, lowthresh=1E-12)
{
if (r0 == 0 & r1 == 0)
{
return(c(score.p = 1, lr.p = 1, wald.p = 1, wald0.p = 1, firth.p = 1))
}
if (r0 == m0 & r1 == m1)
{
return(c(score.p = 1, lr.p = 1, wald.p = 1, wald0.p = 1, firth.p = 1))
}
if (is.na(m0 + m1 + r0 + r1))
{
return(c(score.p = NA, lr.p = NA, wald.p = NA, wald0.p = NA, firth.p = NA))
}
p = (r0+r1)/(m0+m1) # observed p
# Score test
ybar = m1/(m0+m1)
t = r1*(1-ybar) - r0*ybar
sd.t = sqrt((1-ybar)^2*m1*p*(1-p) + ybar^2*m0*p*(1-p))
# Likelihood ratio test
p0 = r0/m0
p1 = r1/m1
pL = (r0+r1)/(m0+m1)
llik.null = dbinom(r0, m0, pL, log=T) + dbinom(r1, m1, pL, log=T)
llik.alt = dbinom(r0, m0, p0, log=T) + dbinom(r1, m1, p1, log=T)
llr = llik.alt - llik.null
# Wald test (with regularization)
reg = 0.5
betahat = log( (r1+reg)/(m1-r1+reg)/((r0+reg)/(m0-r0+reg)) )
sehat = sqrt(1/(r0+reg) + 1/(r1+reg) + 1/(m0-r0+reg) + 1/(m1-r1+reg))
waldT = betahat/sehat
# Wald test (no regularization)
reg0 = 0
betahat0 = log( (r1+reg0)/(m1-r1+reg0)/((r0+reg0)/(m0-r0+reg0)) )
sehat0 = sqrt(1/(r0+reg0) + 1/(r1+reg0) + 1/(m0-r0+reg0) + 1/(m1-r1+reg0))
waldT0 = betahat0/sehat0
# Firth test
y = c(1,1,0,0)
x = c(1,0,1,0)
data = data.frame(y = y, x = x)
m.firth = logistf(y~x, data=data, weights=c(r1, m1-r1, r0, m0-r0))
firth.t = sum(c(-2, 2)*m.firth$loglik)
firth.p = m.firth$prob[2]
# Permutation testing
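  # Enumerate all tables with the observed total number of exposed subjects
  # (r0 + r1 fixed); under the permutation null, r1x follows a hypergeometric distribution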
dd = data.frame(r0x=0:(r0+r1))
dd$r1x = r0 + r1 - dd$r0x
delrows = which(with(dd, r0x > m0 | r1x > m1))
if (length(delrows) > 0) dd = dd[-delrows,]
dd$prob = dhyper(dd$r1x, r0+r1, m0+m1-r0-r1, m1)
dd$firthx = sapply(1:nrow(dd), function(xx) sum(c(-2,2)*(logistf(y~x, data=data,
weights=c(dd$r1x[xx], m1-dd$r1x[xx], dd$r0x[xx], m0-dd$r0x[xx]))$loglik)))
dd$px = with(dd, (r0x+r1x)/(m0+m1) )
dd$tx = with(dd, r1x*(1-ybar) - r0x*ybar )
dd$sd.tx = with(dd, sqrt((1-ybar)^2*m1*px*(1-px) + ybar^2*m0*px*(1-px)))
dd$p0x = with(dd, r0x/m0)
dd$p1x = with(dd, r1x/m1)
dd$pLx = with(dd, (r0x+r1x)/(m0+m1))
dd$llik.nullx = with(dd, dbinom(r0x, m0, pLx, log=T) + dbinom(r1x, m1, pLx, log=T))
dd$llik.altx = with(dd, dbinom(r0x, m0, p0x, log=T) + dbinom(r1x, m1, p1x, log=T))
dd$llrx = with(dd, llik.altx - llik.nullx)
dd$betahatw = with(dd, log(r1x+ reg) - log(m1-r1x+reg) - log(r0x+reg) + log(m0-r0x+reg) )
dd$sehatw = with(dd, sqrt(1/(r0x+reg) + 1/(r1x+reg) + 1/(m0-r0x+reg) + 1/(m1-r1x+reg)) )
dd$waldTw = with(dd, betahatw/sehatw)
dd$betahatw0 = with(dd, log(r1x+ reg0) - log(m1-r1x+reg0) - log(r0x+reg0) + log(m0-r0x+reg0) )
dd$sehatw0 = with(dd, sqrt(1/(r0x+reg0) + 1/(r1x+reg0) + 1/(m0-r0x+reg0) + 1/(m1-r1x+reg0)) )
dd$waldTw0 = with(dd, betahatw0/sehatw0)
infrows = which( with(dd, abs(betahatw0) == Inf) )
p.inf = sum(dd[infrows, "prob"])
matchrow = which( with(dd, r0x==r0 & r1x==r1) )
p.obs = dd[matchrow, "prob"]
dd = dd[-matchrow,]
dd.score = p.obs + sum(dd[abs(dd$tx/dd$sd.tx) >= abs(t/sd.t),]$prob)
dd.lrt = p.obs + sum(dd[dd$llrx >= llr,]$prob)
dd.wald = p.obs + sum(dd[abs(dd$waldTw) >= abs(waldT),]$prob)
dd.firth = p.obs + sum(dd[dd$firthx >= firth.t,]$prob)
dd = dd[-which( with(dd, abs(betahatw0) == Inf) ),]
dd.wald0 = p.obs + sum(dd[abs(dd$waldTw0) >= abs(waldT0),]$prob) + p.inf
c(score.p = dd.score, lr.p = dd.lrt, wald.p = dd.wald, wald0.p = dd.wald0, firth.p = dd.firth)
}
| /scratch/gouwar.j/cran-all/cranData/AUtests/R/perm.tests.r |
## ------------------------------------------------------------------------
library(AUtests)
# Example data, 1:3 case-control ratio
perm.tests(15000, 5000, 45, 55)
## ------------------------------------------------------------------------
basic.tests(15000, 5000, 45, 55)
## ------------------------------------------------------------------------
# Example data, balanced case-control ratio
au.tests(10000, 10000, 45, 60)
au.firth(10000, 10000, 45, 60)
## ------------------------------------------------------------------------
m0list = c(500, 1250) # controls
m1list = c(150, 100) # cases
r0list = c(60, 20) # exposed controls
r1list = c(25, 5) # exposed cases
## ------------------------------------------------------------------------
perm.tests(1750, 250, 80, 30)
au.tests(1750, 250, 80, 30)
## ------------------------------------------------------------------------
perm.test.strat(m0list, m1list, r0list, r1list)
au.test.strat(m0list, m1list, r0list, r1list)
| /scratch/gouwar.j/cran-all/cranData/AUtests/inst/doc/vignette-html.R |
---
title: "AUtests: approximate unconditional and permutation tests for 2x2 tables"
author: "Arjun Sondhi"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AUtests: approximate unconditional and permutation tests for 2x2 tables}
%\VignetteEngine{knitr::rmarkdown}
\usepackage[utf8]{inputenc}
---
This package contains functions for association testing in 2x2 tables (i.e., two binary variables). In particular, the scientific setting that motivated this package's development was testing for associations between diseases and rare genetic variants in case-control studies. When the expected number of subjects possessing a variant is small, standard methods perform poorly (they usually tend to be overly conservative in controlling the Type I error).
The two alternative methods implemented in the package are permutation testing and approximate unconditional (AU) testing.
## Permutation tests
Permutation testing works by computing a test statistic T for the observed data, generating all plausible datasets with the same total number of exposed subjects, then adding up the probabilities of those datasets which give more extreme test statistics than T.
The `perm.tests` function returns p-values from permutation tests based on score, likelihood ratio, Wald (with and without regularization), and Firth statistics.
The following code runs the tests for a dataset containing 5,000 cases (55 with a minor allele of interest) and 15,000 controls (45 with a minor allele of interest):
```{r}
library(AUtests)
# Example data, 1:3 case-control ratio
perm.tests(15000, 5000, 45, 55)
```
For comparison purposes, the `basic.tests` function returns p-values for the standard score, likelihood ratio, Wald, Firth, and Fisher's exact tests:
```{r}
basic.tests(15000, 5000, 45, 55)
```
## Approximate unconditional tests
AU testing works by computing a test statistic T for the observed data, generating all plausible datasets with *any* number of variants, then adding up the probabilities of those datasets which give more extreme test statistics than T.
The `au.tests` function returns p-values from AU tests based on score, likelihood ratio, and Wald (with and without regularization) statistics. The `au.firth` function returns a p-value from the AU Firth test. It was implemented as a separate function due to its increased computational time.
The following code runs the tests for a dataset containing 10,000 cases (60 with a minor allele of interest) and 10,000 controls (45 with a minor allele of interest):
```{r}
# Example data, balanced case-control ratio
au.tests(10000, 10000, 45, 60)
au.firth(10000, 10000, 45, 60)
```
## AU and permutation likelihood ratio tests with categorical covariates
In order to gain precision or adjust for a confounding variable, it can be of interest to perform a stratified analysis. The `perm.test.strat` function implements a permutation likelihood ratio test that allows for categorical covariates, and the `au.test.strat` function implements a similar AU test. The functions read in vectors of controls, cases, controls with the exposure, and cases with the exposure, where the i-th element of each vector corresponds to the count for the i-th stratum.
Consider the following example data, with two strata (i.e., a binary covariate):
```{r}
m0list = c(500, 1250) # controls
m1list = c(150, 100) # cases
r0list = c(60, 20) # exposed controls
r1list = c(25, 5) # exposed cases
```
A non-stratified analysis would yield a highly significant result:
```{r}
perm.tests(1750, 250, 80, 30)
au.tests(1750, 250, 80, 30)
```
When adjusting for the covariate, however, the result is much less significant:
```{r}
perm.test.strat(m0list, m1list, r0list, r1list)
au.test.strat(m0list, m1list, r0list, r1list)
```
| /scratch/gouwar.j/cran-all/cranData/AUtests/inst/doc/vignette-html.Rmd |
---
title: "AUtests: approximate unconditional and permutation tests for 2x2 tables"
author: "Arjun Sondhi"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AUtests: approximate unconditional and permutation tests for 2x2 tables}
%\VignetteEngine{knitr::rmarkdown}
\usepackage[utf8]{inputenc}
---
This package contains functions for association testing in 2x2 tables (i.e., two binary variables). In particular, the scientific setting that motivated this package's development was testing for associations between diseases and rare genetic variants in case-control studies. When the expected number of subjects possessing a variant is small, standard methods perform poorly (they usually tend to be overly conservative in controlling the Type I error).
The two alternative methods implemented in the package are permutation testing and approximate unconditional (AU) testing.
## Permutation tests
Permutation testing works by computing a test statistic T for the observed data, generating all plausible datasets with the same total number of exposed subjects, then adding up the probabilities of those datasets which give more extreme test statistics than T.
The `perm.tests` function returns p-values from permutation tests based on score, likelihood ratio, Wald (with and without regularization), and Firth statistics.
The following code runs the tests for a dataset containing 5,000 cases (55 with a minor allele of interest) and 15,000 controls (45 with a minor allele of interest):
```{r}
library(AUtests)
# Example data, 1:3 case-control ratio
perm.tests(15000, 5000, 45, 55)
```
For comparison purposes, the `basic.tests` function returns p-values for the standard score, likelihood ratio, Wald, Firth, and Fisher's exact tests:
```{r}
basic.tests(15000, 5000, 45, 55)
```
## Approximate unconditional tests
AU testing works by computing a test statistic T for the observed data, generating all plausible datasets with *any* number of variants, then adding up the probabilities of those datasets which give more extreme test statistics than T.
The `au.tests` function returns p-values from AU tests based on score, likelihood ratio, and Wald (with and without regularization) statistics. The `au.firth` function returns a p-value from the AU Firth test. It was implemented as a separate function due to its increased computational time.
The following code runs the tests for a dataset containing 10,000 cases (60 with a minor allele of interest) and 10,000 controls (45 with a minor allele of interest):
```{r}
# Example data, balanced case-control ratio
au.tests(10000, 10000, 45, 60)
au.firth(10000, 10000, 45, 60)
```
## AU and permutation likelihood ratio tests with categorical covariates
In order to gain precision or adjust for a confounding variable, it can be of interest to perform a stratified analysis. The `perm.test.strat` function implements a permutation likelihood ratio test that allows for categorical covariates, and the `au.test.strat` function implements a similar AU test. The functions read in vectors of controls, cases, controls with the exposure, and cases with the exposure, where the i-th element of each vector corresponds to the count for the i-th stratum.
Consider the following example data, with two strata (i.e., a binary covariate):
```{r}
m0list = c(500, 1250) # controls
m1list = c(150, 100) # cases
r0list = c(60, 20) # exposed controls
r1list = c(25, 5) # exposed cases
```
A non-stratified analysis would yield a highly significant result:
```{r}
perm.tests(1750, 250, 80, 30)
au.tests(1750, 250, 80, 30)
```
When adjusting for the covariate, however, the result is much less significant:
```{r}
perm.test.strat(m0list, m1list, r0list, r1list)
au.test.strat(m0list, m1list, r0list, r1list)
```
| /scratch/gouwar.j/cran-all/cranData/AUtests/vignettes/vignette-html.Rmd |
#' Evaluating ABC for each fitted model\cr
#'
#' This function evaluates the ABC score for a fitted model, one model at a time. For a model I,
#' the ABC is defined as
#' \deqn{ABC(I)=\sum\limits_{i=1}^n\bigg(Y_i-\hat{Y}_i^{I}\bigg)^2+2r_I\sigma^2+\lambda\sigma^2C_I.}
#' When comparing the ABC of fitted models on the same data set, the smaller
#' the ABC, the better the fit.
#'
#' @details
#' \itemize{
#' \item For inputs \code{pi1}, \code{pi2}, and \code{pi3}, the number needs to
#' satisfy the condition: \eqn{\pi_1+\pi_2+\pi_3=1-\pi_0} where \eqn{\pi_0}
#' is a numeric value between 0 and 1, the smaller the better.
#' \item For input \code{lambda}, the number needs to satisfy the condition:
#' \eqn{\lambda\geq 5.1/log(2)}.
#' }
#'
#' @param X Input data. An optional data frame, or numeric matrix of dimension
#' \code{n} by \code{nmain.p}. Note that the two-way interaction effects should not
#' be included in \code{X} because this function automatically generates the
#' corresponding two-way interaction effects if needed.
#' @param y Response variable. A \code{n}-dimensional vector, where \code{n} is the number
#' of observations in \code{X}.
#' @param heredity Whether to enforce Strong, Weak, or No heredity. Default is "Strong".
#' @param nmain.p A numeric value that represents the total number of main effects
#' in \code{X}.
#' @param sigma The standard deviation of the noise term. In practice, sigma is usually
#' unknown. In such cases, this function automatically estimates sigma using root mean
#' square error (RMSE). Default is NULL. Otherwise, users need to enter a numeric value.
#' @param extract Either "Yes" or "No", indicating whether or not
#' to extract specific columns from \code{X}. Default is "No".
#' @param varind Only used when \code{extract = "Yes"}. A numeric vector of class
#' \code{c()} that specifies the indices of variables to be extracted from \code{X}.
#' If \code{varind} contains indices of two-way interaction effects, then this function
#' automatically generates corresponding two-way interaction effects from \code{X}.
#' @param interaction.ind Only used when \code{extract = "Yes"}. A two-column numeric
#' matrix containing all possible two-way interaction effects. It must be generated
#' outside of this function using \code{t(utils::combn())} or \code{indchunked()}. See Example section for
#' details.
#' @param pi1 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to the Details section.
#' @param pi2 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to the Details section.
#' @param pi3 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to the Details section.
#' @param lambda A numeric value defined by users. Default is 10.
#' For guidance on selecting an appropriate value, please refer to the Details section.
#'
#' @return A numeric value is returned. It represents the ABC score of the fitted model.
#'
#' @export
#'
#' @seealso \code{\link{Extract}}, \code{\link{initial}}.
#' @references
#' Ye, C. and Yang, Y., 2019. \emph{High-dimensional adaptive minimax sparse estimation with interactions.}
#'
#' @examples # sigma is unknown
#' set.seed(0)
#' nmain.p <- 4
#' interaction.ind <- t(combn(4,2))
#' X <- matrix(rnorm(50*4,1,0.1), 50, 4)
#' epl <- rnorm(50,0,0.01)
#' y<- 1+X[,1]+X[,2]+X[,3]+X[,4]+epl
#' ABC(X, y, nmain.p = 4, interaction.ind = interaction.ind)
#' ABC(X, y, nmain.p = 4, extract = "Yes",
#' varind = c(1,2,5), interaction.ind = interaction.ind)
#'
#' @examples # users want to enter a suggested value for sigma
#' # ABC(X, y, nmain.p = 4, sigma = 0.01)
#' # ABC(X, y, nmain.p = 4, sigma = 0.01, extract = "Yes",
#' # varind = c(1,2,5), interaction.ind = interaction.ind)
#'
#' @examples # model with only one predictor
#' try(ABC(X, y, nmain.p = 4, extract = "Yes",
#' varind = 1, interaction.ind = interaction.ind)) # warning message
ABC <- function(X, y, heredity = "Strong", nmain.p, sigma = NULL,
extract = "No", varind = NULL, interaction.ind = NULL,
pi1 = 0.32, pi2 = 0.32, pi3 = 0.32, lambda = 10){
colnames(X) <- make.names(rep("","X",ncol(X)+1),unique=TRUE)[-1]
n <- dim(X)[1]
if (is.null(sigma)){
full <- Extract(X, varind = c(1:(dim(X)[2]+dim(interaction.ind)[1])), interaction.ind)
sigma <- estimateSigma(full, y)$sigmahat
}else{
sigma <- sigma
}
if (extract == "Yes"){
if (is.null(varind)) stop("You must specify the variables to be extracted")
if (is.null(interaction.ind)) stop("Interaction.ind is missing.
Use t(utils::combn()) to generate interaction matrix.")
data_extract <- Extract(X, varind, interaction.ind)
data <- data_extract
r.I <- Matrix::rankMatrix(data)[1]
allpar <- as.numeric(gsub(".*?([0-9]+).*", "\\1", colnames(data)))
}else{
r.I <- Matrix::rankMatrix(X)[1]
allpar <- as.numeric(gsub(".*?([0-9]+).*", "\\1", colnames(X)))
data <- X
}
k2 <- sum(allpar > nmain.p)
k1 <- length(allpar) - k2
if (r.I < ncol(data)){
yhat <- pracma::orth(data)%*%solve(crossprod(pracma::orth(data)),
t(pracma::orth(data))%*%y, tol = 1e-50)
}else{
yhat <- (data)%*%solve(crossprod(data), t(data)%*%y, tol = 1e-50)
}
SSE <- sum((y - yhat)^2)
if (heredity == "Strong"){
    C.I.strong <- -log(pi1) + log(min(nmain.p, n)) + log(min(mychoose(k1), n)) +
      log(choose(nmain.p, k1)) + log(choose(choose(k1, 2), k2))
ABC <- SSE+2*r.I*sigma^2+lambda*sigma^2*C.I.strong
if(k1 == 1) warning("This model contains only one predictor")
}
else if (heredity == "Weak"){
K <- k1*nmain.p-choose(k1,2)-k1
    C.I.weak <- -log(pi2) + log(min(nmain.p, n)) + log(min(K, n)) +
      log(choose(nmain.p, k1)) + log(choose(K, k2))
ABC <- SSE+2*r.I*sigma^2+lambda*sigma^2*C.I.weak
if(k1 == 1) warning("This model contains only one predictor")
}
else if (heredity == "No"){
    C.I.no <- -log(pi3) + log(min(nmain.p, n)) + log(min(choose(nmain.p, 2), n)) +
      log(choose(nmain.p, k1)) + log(choose(choose(nmain.p, 2), k2))
ABC <- SSE+2*r.I*sigma^2+lambda*sigma^2*C.I.no
if(k1 == 1) warning("This model contains only one predictor")
}
return(ABC)
}
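# Illustrative sketch (not part of the package code, kept inside `if (FALSE)` so it
# is never run when this file is sourced): it checks the constraints on pi1, pi2,
# pi3 and lambda described in the Details section above, and compares the ABC score
# of the data-generating model against the main-effects-only model on toy data.
if (FALSE) {
  pi0 <- 0.04
  pis <- rep((1 - pi0) / 3, 3)             # pi1 = pi2 = pi3 = 0.32
  stopifnot(abs(sum(pis) - (1 - pi0)) < 1e-12)
  stopifnot(10 >= 5.1 / log(2))            # the default lambda = 10 satisfies the bound

  set.seed(0)
  interaction.ind <- t(combn(4, 2))
  X <- matrix(rnorm(50 * 4, 1, 0.1), 50, 4)
  y <- 1 + X[, 1] + X[, 2] + X[, 1] * X[, 2] + rnorm(50, 0, 0.01)
  # the true model (X1, X2 and their interaction, index 5 = nmain.p + 1) is expected
  # to obtain a smaller (better) ABC score than the main-effects-only fit
  ABC(X, y, nmain.p = 4, extract = "Yes", varind = c(1, 2, 5),
      interaction.ind = interaction.ind)
  ABC(X, y, nmain.p = 4, interaction.ind = interaction.ind)
}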
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/ABC.R |
#' A Variable selection using Genetic AlgorithmS
#'
#' @param X Input data. An optional data frame, or numeric matrix of dimension
#' \code{n} by \code{nmain.p}. Note that the two-way interaction effects should not
#' be included in \code{X} because this function automatically generates the
#' corresponding two-way interaction effects if needed.
#' @param y Response variable. An \code{n}-dimensional vector, where \code{n} is the number
#' of observations in \code{X}.
#' @param heredity Whether to enforce Strong, Weak, or No heredity. Default is "Strong".
#' @param nmain.p A numeric value that represents the total number of main effects
#' in \code{X}.
#' @param r1 A numeric value indicating the maximum number of main effects. This number
#' can be different from the \code{r1} defined in \code{\link{detect}}.
#' @param r2 A numeric value indicating the maximum number of interaction effects. This number
#' can be different from the \code{r2} defined in \code{\link{detect}}.
#' @param sigma The standard deviation of the noise term. In practice, sigma is usually
#' unknown; in that case, this function automatically estimates sigma using the root mean
#' square error (RMSE). Default is NULL. Otherwise, users need to enter a numeric value.
#' @param interaction.ind A two-column numeric matrix containing all possible
#' two-way interaction effects. It must be generated outside of this function
#' using \code{t(utils::combn())}. See Example section for details.
#' @param lambda A numeric value defined by users. Default is 10.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param q A numeric value indicating the number of models in each generation (e.g.,
#' the population size). Default is 40.
#' @param allout Whether to print all outputs from this function. A "Yes" or "No"
#' logical vector. Default is "No". See Value section for details.
#' @param interonly Whether or not to consider fitted models with only two-way
#' interaction effects. Either "Yes" or "No". Default is "No".
#' @param pi1 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param pi2 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param pi3 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param aprob A numeric value between 0 and 1, defined by users.
#' The addition probability during mutation. Default is 0.9.
#' @param dprob A numeric value between 0 and 1, defined by users.
#' The deletion probability during mutation. Default is 0.9.
#' @param aprobm A numeric value between 0 and 1, defined by users.
#' The main effect addition probability during addition. Default is 0.1.
#' @param aprobi A numeric value between 0 and 1, defined by users.
#' The interaction effect addition probability during addition. Default is 0.9.
#' @param dprobm A numeric value between 0 and 1, defined by users.
#' The main effect deletion probability during deletion. Default is 0.9.
#' @param dprobi A numeric value between 0 and 1, defined by users.
#' The interaction effect deletion probability during deletion. Default is 0.1.
#' @param take Only used when \code{allout = "No"}. Number of top candidate models
#' to display. Default is 3.
#'
#' @return A list of output. The components are:
#' \item{final_model}{The final selected model.}
#' \item{cleaned_candidate_model}{All candidate models where each row corresponding
#' to a fitted model; the first 1 to \code{r1 + r2} columns representing the predictor
#' indices in that model, and the last column is a numeric value representing the
#' ABC score of that fitted model. Duplicated models are not allowed.}
#' \item{InterRank}{Rank of all candidate interaction effects. A two-column numeric
#' matrix. The first column contains indices of ranked two-way interaction effects, and the
#' second column contains its corresponding ABC score.}
#' @export
#' @seealso \code{\link{initial}}, \code{\link{cross}}, \code{\link{mut}}, \code{\link{ABC}}, \code{\link{Genone}}, and \code{\link{Extract}}.
#' @importFrom utils combn
#' @importFrom selectiveInference estimateSigma
#' @importFrom Matrix rankMatrix
#' @importFrom pracma orth
#' @importFrom stats rnorm
#' @importFrom stats reorder
#' @importFrom VariableScreening screenIID
#' @importFrom stats na.exclude
#' @importFrom ggplot2 ggplot
#' @importFrom ggplot2 aes
#' @importFrom ggplot2 geom_point
#' @importFrom ggplot2 ylim
#' @importFrom ggplot2 theme
#' @importFrom ggplot2 element_text
#' @importFrom ggplot2 labs
#' @importFrom dplyr distinct
#' @importFrom stats na.omit
#'
#' @examples # allout = "No"
#'
#' # set.seed(0)
#' # nmain.p <- 4
#' # interaction.ind <- t(combn(4,2))
#' # X <- matrix(rnorm(50*4,1,0.1), 50, 4)
#' # epl <- rnorm(50,0,0.01)
#' # y <- 1+X[,1]+X[,2]+X[,1]*X[,2]+epl
#' #
#' # a1 <- AVGAS(X, y, nmain.p=4, r1=3, r2=3,
#' #             interaction.ind = interaction.ind, q=5)
#'
#' @examples # allout = "Yes"
#' # a2 <- AVGAS(X, y, nmain.p=4, r1=3, r2=3,
#' #             interaction.ind = interaction.ind, q=5, allout = "Yes")
#'
AVGAS <- function(X, y, heredity = "Strong", nmain.p, r1, r2, sigma = NULL,
interaction.ind = NULL, lambda = 10, q = 40, allout = "No",
interonly = "No", pi1 = 0.32, pi2 = 0.32, pi3 = 0.32,
aprob = 0.9, dprob = 0.9, aprobm = 0.1, aprobi=0.9,
dprobm = 0.9, dprobi = 0.1, take = 3){
first <- Genone(X, y, heredity = heredity, nmain.p = nmain.p, r1 = r1, r2 = r2, sigma = sigma,
interaction.ind = interaction.ind, lambda = lambda, q = q, allout = "Yes",
interonly = interonly, pi1 = pi1, pi2 = pi2, pi3 = pi3, aprob = aprob, dprob = dprob,
aprobm=aprobm, aprobi = aprobi, dprobm = dprobm, dprobi = dprobi)
parents <- first
parents$initialize <- first$newparents
oldABCscore <- as.numeric(first$parents_models_cleaned[1,(r1+r2)+1])
InterRank = first$InterRank
count <- 0
repeat{
count <- count + 1
NewMatrixB <- cross(parents, heredity = heredity,
nmain.p = nmain.p, r1 = r1, r2 = r2, interaction.ind = interaction.ind)
NewmatrixC <- mut(parents, heredity = heredity, nmain.p = nmain.p, r1 = r1, r2 = r2,
interaction.ind = interaction.ind,
interonly = interonly, aprob = aprob, dprob = dprob,
aprobm = aprobm, aprobi=aprobi, dprobm = dprobm, dprobi = dprobi)
MatrixE <- rbind(NewMatrixB, NewmatrixC)
if (interonly == "Yes"){
MatrixEF <- MatrixE
for (i in 1:dim(MatrixEF)[1]) {
if (length(which(MatrixEF[i,]%in% 1:nmain.p))>0){
tempind <- which(MatrixEF[i,]%in% 1:nmain.p)
MatrixEF[i,tempind] <- 0
}
}
MatrixE <- MatrixEF
}else{
MatrixE <- MatrixE
}
NewABCscore <- list()
for (i in 1:dim(MatrixE)[1]){
NewABCscore[[i]] <- ABC(X, y, heredity = heredity, nmain.p = nmain.p, sigma = sigma,
extract = "Yes", varind = c(as.numeric(MatrixE[i,][which(!MatrixE[i,]==0)])),
interaction.ind = interaction.ind,
pi1 = pi1, pi2 = pi2, pi3= pi3, lambda = lambda)
}
temp2 <- as.matrix(cbind(MatrixE, New.scores = unlist(NewABCscore)))
temp3 <- as.matrix(rbind(first$parents_models_cleaned, temp2))
new.modelw.score <- temp3[order(unlist(temp3[,((r1+r2)+1)])),]
new.modelw.score <- cbind(t(apply(t(apply(new.modelw.score[,-((r1+r2)+1)], 1, function(x){x <- sort(x, decreasing = TRUE);x})), 1, function(x) {x[x != 0] <- sort(x[x != 0]);x})),new.modelw.score[,(r1+r2)+1])
new.modelw.score <- unique_rows(new.modelw.score)
if (nrow(new.modelw.score) < q){
MatrixD <- new.modelw.score[,1:(r1+r2)]
MatrixD <- rbind(MatrixD, matrix(0, nrow = q - nrow(MatrixD), ncol = (r1+r2)))
}else{
MatrixD <- new.modelw.score[1:q, 1:(r1+r2)]
}
NewABCscore <- as.numeric(new.modelw.score[1,(r1+r2)+1])
if (round(NewABCscore,10) >= round(oldABCscore,10)){
break
}else{
oldABCscore <- NewABCscore
parents <- first
parents$initialize <- MatrixD
}
}
selected_model <- as.numeric(MatrixD[1,])
model.match <- predictor_match(new.modelw.score, r1 = r1, r2 = r2 , nmain.p = nmain.p, interaction.ind = interaction.ind)
model.match.matrix <- matrix(0, nrow = length(model.match), ncol = (r1+r2))
for (b in 1: nrow(model.match.matrix)) {
model.match.matrix[b,1:length(model.match[[b]])] <- model.match[[b]]
}
model.match.matrix <- as.matrix(cbind(model.match.matrix, ABCscore = round(unlist(new.modelw.score[,(r1+r2)+1]), 4)))
if (allout == "Yes"){
return(list(
final_model = model.match[1],
cleaned_candidate_model = model.match.matrix,
InterRank = InterRank
))
}else{
return(list(
final_model = model.match[1],
cleaned_candidate_model = model.match.matrix[1:take,]
))
}
}
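# Illustrative sketch (not part of the package code): a minimal end-to-end run
# mirroring the commented examples above, guarded by `if (FALSE)` so it is never
# executed when this file is sourced.  The true model is X1 + X2 + X1X2.
if (FALSE) {
  set.seed(0)
  interaction.ind <- t(combn(4, 2))
  X <- matrix(rnorm(50 * 4, 1, 0.1), 50, 4)
  y <- 1 + X[, 1] + X[, 2] + X[, 1] * X[, 2] + rnorm(50, 0, 0.01)
  fit <- AVGAS(X, y, nmain.p = 4, r1 = 3, r2 = 3,
               interaction.ind = interaction.ind, q = 5, allout = "Yes")
  fit$final_model                   # expected to recover "X.1" "X.2" "X.1X.2"
  head(fit$cleaned_candidate_model) # candidate models with their ABC scores
  head(fit$InterRank)               # interaction effects ranked by ABC score
}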
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/AVGAS.R |
#' Extracting specific columns from a data matrix\cr
#'
#' This function extracts specific columns from \code{X} based on \code{varind}.
#' It provides an efficient procedure for conducting ABC evaluation,
#' especially when working with high-dimensional data.
#'
#' @details Please be aware that this function automatically renames column names
#' into a designated format (e.g., X.1, X.2 for main effects, and X.1X.2 for
#' interaction effect, etc), regardless of the original column names in \code{X}.
#'
#' Under no heredity condition, this function can be applied in the context of
#' interaction only linear regression models. See Example section for details.
#'
#' @param X Input data. An optional data frame, or numeric matrix of dimension
#' \code{n} by \code{nmain.p}. Note that the two-way interaction effects should not
#' be included in \code{X} because this function automatically generates the
#' corresponding two-way interaction effects if needed.
#' @param varind A numeric vector that specifies the indices
#' of variables to be extracted from \code{X}. Duplicated values are not allowed.
#' See Example section for details.
#' @param interaction.ind A two-column numeric matrix containing all possible
#' two-way interaction effects. It must be generated outside of this function using
#' \code{t(utils::combn())}. See Example section for details.
#'
#' @return A numeric matrix is returned.
#' @export
#'
#' @seealso \code{\link{ABC}}, \code{\link{initial}}.
#'
#' @examples # Extract main effect X1 and X2 from X1,...X4
#' set.seed(0)
#' X1 <- matrix(rnorm(20), ncol = 4)
#' y1 <- X1[, 2] + rnorm(5)
#' interaction.ind <- t(combn(4,2))
#'
#' @examples # Extract main effect X1 and interaction effect X1X2 from X1,..X4
#' Extract(X1, varind = c(1,5), interaction.ind)
#'
#' @examples # Extract interaction effect X1X2 from X1,...X4
#' Extract(X1, varind = 5, interaction.ind)
#'
#' @examples # Extract using duplicated values in varind.
#' try(Extract(X1, varind = c(1,1), interaction.ind)) # this will not run
Extract <- function(X, varind, interaction.ind = NULL){
if (is.null(interaction.ind)) stop("Interaction.ind is missing.
Use t(utils::combn()) to generate interaction matrix.")
if (as.logical(any(duplicated(varind[which(varind!=0)])))){
stop("There cannot be duplicated values in varind.")
}
ncoln <- length(varind)
nrown <- nrow(X)
nmain.p <- ncol(X)
mainind <- varind[which(varind%in%1:nmain.p)]
mainvars.matrix <- X[, mainind]
interind <- varind[which(varind>nmain.p)]
a <- interaction.ind[interind-nmain.p,]
if (length(a) >1){
intervars.matrix <- matrix(0, nrow = nrown,ncol = mydim(a)[1])
for (i in 1:mydim(a)[1]) {
if (mydim(a)[1]==1) intervars.matrix[,i] <- X[, a[1]]*X[,a[2]]
else {
intervars.matrix[,i] <- X[, a[i,1]]*X[,a[i,2]]
}
}
colnames(intervars.matrix) <- paste0("X.", interind)
data_extract <- cbind(mainvars.matrix, intervars.matrix)
}
else{
data_extract <- mainvars.matrix
}
if (length(mainind) == 1){
data_extract <- as.matrix(data_extract)
colnames(data_extract)[which(colnames(data_extract)=="mainvars.matrix" )] <- paste0("X.", mainind)
}
if (length(mainind) == 1 && length(interind) ==0){
colnames(data_extract) <- paste0("X.", mainind)
}
if (length(mainind) == 0){
data_extract <- data_extract
}
if (length(mainind)>1){
colnames(data_extract)[1:dim(mainvars.matrix)[2]] <- paste0("X.", mainind)
}
return(data_extract)
}
# number of candidate two-way interactions among k1 main effects; returns 1 instead
# of choose(1, 2) = 0 when there is a single main effect, so the log() taken of it
# in the ABC penalty stays finite
mychoose <- function(k1){
  if (k1 == 1){
    return(1)
  }else{
    return(choose(k1,2))
  }
}
# dim() that also works for non-matrix input (returns 1 for vectors)
mydim <- function(x){
  if (is.matrix(x)) dim(x)
  else return(1)
}
# sample() that returns x itself when it has length 1, avoiding base::sample()'s
# behaviour of sampling from 1:x in that case
mysample <- function(x, size, replace = F, prob = NULL){
  if (length(x) == 1) return(x)
  if (length(x) > 1) return(sample(x, size, replace, prob))
}
# sort the non-zero predictor indices and keep the zero "reservation spaces" at the end
sort_zeros <- function(vec){
  non_zeros <- vec[vec != 0]
  zeros <- vec[vec == 0]
  if (length(zeros) > 0){
    return(c(sort(non_zeros), zeros))
  }else{
    return(c(sort(non_zeros)))
  }
}
# drop rows whose last column (the ABC score) duplicates that of an earlier row
unique_rows <- function(matrix) {
  unique_matrix <- matrix[!duplicated(matrix[,ncol(matrix)]), ]
  return(unique_matrix)
}
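# Illustrative sketch (not part of the package code): the behaviour the functions
# above rely on -- sort_zeros() orders the non-zero predictor indices and pushes the
# zero "reservation spaces" to the end, while unique_rows() drops rows whose last
# column (the ABC score) duplicates that of an earlier row.
if (FALSE) {
  sort_zeros(c(5, 0, 2, 0, 7))      # 2 5 7 0 0
  m <- rbind(c(1, 2, 0, 0.5),
             c(2, 1, 0, 0.5),       # same score as the first row, so it is dropped
             c(1, 3, 0, 0.7))
  unique_rows(m)
}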
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/Extract.R |
#' Gathering useful information for first generation
#'
#' This function automatically ranks all candidate interaction effects under
#' Strong, Weak, or No heredity condition, compare and obtain first generation
#' candidate models. The selected models will be re-ordered so that main effects
#' come first, followed by interaction effects. Only two-way interaction effects
#' will be considered.
#'
#' @param X Input data. An optional data frame, or numeric matrix of dimension
#' \code{n} by \code{nmain.p}. Note that the two-way interaction effects should not
#' be included in \code{X} because this function automatically generates the
#' corresponding two-way interaction effects if needed.
#' @param y Response variable. An \code{n}-dimensional vector, where \code{n} is the number
#' of observations in \code{X}.
#' @param heredity Whether to enforce Strong, Weak, or No heredity. Default is "Strong".
#' @param nmain.p A numeric value that represents the total number of main effects
#' in \code{X}.
#' @param r1 A numeric value indicating the maximum number of main effects.
#' @param r2 A numeric value indicating the maximum number of interaction effects.
#' @param sigma The standard deviation of the noise term. In practice, sigma is usually
#' unknown; in that case, this function automatically estimates sigma using the root mean
#' square error (RMSE). Default is NULL. Otherwise, users need to enter a numeric value.
#' @param interaction.ind A two-column numeric matrix containing all possible
#' two-way interaction effects. It must be generated outside of this function
#' using \code{t(utils::combn())}. See Example section for details.
#' @param lambda A numeric value defined by users. Default is 10.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param q A numeric value indicating the number of models in each generation (e.g.,
#' the population size). Default is 40.
#' @param allout Whether to print all outputs from this function. A "Yes" or "No"
#' logical vector. Default is "No". See Value section for details.
#' @param interonly Whether or not to consider fitted models with only two-way
#' interaction effects. Either "Yes" or "No". Default is "No".
#' @param pi1 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param pi2 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param pi3 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param aprob A numeric value between 0 and 1, defined by users.
#' The addition probability during mutation. Default is 0.9.
#' @param dprob A numeric value between 0 and 1, defined by users.
#' The deletion probability during mutation. Default is 0.9.
#' @param aprobm A numeric value between 0 and 1, defined by users.
#' The main effect addition probability during addition. Default is 0.1.
#' @param aprobi A numeric value between 0 and 1, defined by users.
#' The interaction effect addition probability during addition. Default is 0.9.
#' @param dprobm A numeric value between 0 and 1, defined by users.
#' The main effect deletion probability during deletion. Default is 0.9.
#' @param dprobi A numeric value between 0 and 1, defined by users.
#' The interaction effect deletion probability during deletion. Default is 0.1.
#'
#' @return A list of output. The components are:
#' \item{newparents}{ New parents models used for t+1-th generation. A numeric matrix
#' of dimension \code{q} by \code{r1+r2} where each row represents a fitted model.
#' Duplicated models are allowed.}
#' \item{parents_models}{ A numeric matrix containing all fitted models from
#' \code{\link{initial}}, \code{\link{cross}}, and \code{\link{mut}} where each
#' row corresponding to a fitted model and each column representing the predictor
#' index in that model. Duplicated models are allowed.}
#' \item{parents_models_cleaned}{ A numeric matrix containing fitted models from
#' \code{\link{initial}}, \code{\link{cross}}, and \code{\link{mut}} with ABC scores.
#' Each row corresponding to a fitted model; the first 1 to \code{r1 + r2} columns
#' representing the predictor indices in that model, and the last column is a numeric value
#' representing the ABC score of that fitted model. Duplicated models are not allowed.}
#' \item{InterRank}{ Rank of all candidate interaction effects. A two-column numeric
#' matrix. The first column contains indices of ranked two-way interaction effects, and the
#' second column contains its corresponding ABC score.}
#'
#' @export
#' @seealso \code{\link{initial}}, \code{\link{cross}}, \code{\link{mut}}, \code{\link{ABC}}, and \code{\link{Extract}}.
#'
#' @examples # allout = "No"
#' set.seed(0)
#' nmain.p <- 4
#' interaction.ind <- t(combn(4,2))
#' X <- matrix(rnorm(50*4,1,0.1), 50, 4)
#' epl <- rnorm(50,0,0.01)
#' y <- 1+X[,1]+X[,2]+X[,1]*X[,2]+epl
#' g1 <- Genone(X, y, nmain.p = 4, r1= 3, r2=3,
#' interaction.ind = interaction.ind, q = 5)
#'
#' @examples # allout = "Yes"
#' g2 <- Genone(X, y, nmain.p = 4, r1= 3, r2=3,
#' interaction.ind = interaction.ind, q = 5, allout = "Yes")
Genone <- function(X, y, heredity = "Strong", nmain.p, r1, r2,
sigma = NULL, interaction.ind = NULL,
lambda = 10, q = 40, allout = "No",
interonly = "No", pi1 = 0.32, pi2 = 0.32, pi3 = 0.32,
aprob = 0.9, dprob = 0.9, aprobm = 0.1, aprobi=0.9, dprobm = 0.9, dprobi = 0.1){
if (is.null(interaction.ind)) stop("Interaction.ind is missing.
Use t(utils::combn()) to generate interaction matrix.")
initial_parents <- initial(X, y, heredity = heredity,
nmain.p = nmain.p, sigma = sigma, r1 = r1, r2 = r2,
interaction.ind = interaction.ind,
pi1 = pi1, pi2 = pi2, pi3= pi3, lambda = lambda, q = q)
InterRank <- initial_parents$InterRank
MatrixA <- initial_parents$initialize
MatrixB <- cross(initial_parents, heredity = heredity,
nmain.p = nmain.p, r1 = r1, r2 = r2, interaction.ind = interaction.ind)
MatrixC <- mut(initial_parents, heredity = heredity, nmain.p = nmain.p, r1 = r1, r2 = r2,
interaction.ind = interaction.ind,
interonly = interonly, aprob = aprob, dprob = dprob,
aprobm = aprobm, aprobi=aprobi, dprobm = dprobm, dprobi = dprobi)
UnionABC <- rbind(MatrixA, MatrixB, MatrixC)
a <- t(apply(UnionABC, 1, function(x) {x <- sort(x, decreasing = TRUE);x}))
b <- t(apply(a, 1, function(x) {x[x != 0] <- sort(x[x != 0]);x}))
UnionABC_cleaned <- dplyr::distinct(as.data.frame(b))
ABCscore.1 <- list()
for (i in 1:dim(UnionABC_cleaned)[1]) {
ABCscore.1[[i]] <- ABC(X, y, heredity = heredity, nmain.p = nmain.p, sigma = sigma,
extract = "Yes", varind = c(as.numeric(UnionABC_cleaned[i,][which(!UnionABC_cleaned[i,]==0)])),
interaction.ind = interaction.ind,
pi1 = pi1, pi2 = pi2, pi3= pi3, lambda = lambda)
}
ABCscore.1 <- as.matrix(ABCscore.1)
if (dim(UnionABC_cleaned)[1] < q) {
MatrixD <- stats::na.omit(UnionABC_cleaned[order(as.numeric(ABCscore.1))[1:q],])
}else{
MatrixD <- UnionABC_cleaned[order(as.numeric(ABCscore.1))[1:q],]
}
MatrixD <- as.matrix(MatrixD)
rownames(MatrixD) <- colnames(MatrixD) <- NULL
temp <- as.matrix(cbind(UnionABC_cleaned, m.scores = unlist(ABCscore.1)))
modelw.score <- temp[order(unlist(temp[,((r1+r2)+1)])),]
colnames(modelw.score) <- NULL
if (allout == "Yes"){
return(list(newparents = MatrixD, # Generation 1 parents matrix used for next generation
parents_models = UnionABC, # Gen 1 initialize, crossover, mutation not cleaned
parents_models_cleaned = modelw.score, # Gen 1 initialize, crossover, mutation cleaned
InterRank = InterRank
))
}
else{
return(list(newparents = MatrixD, # Generation 1 parents matrix used for next generation
InterRank = InterRank
))
}
}
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/Genone.R |
# internal helper: check whether the candidate model given by the index vector x
# satisfies the chosen (Strong or Weak) heredity condition
Heredity <- function(x, nmain.p, interaction.ind, heredity = "Strong"){
if (is.null(heredity)) stop("You must define a heredity condition")
intereffect <- x[which(x>nmain.p)]
if (length(intereffect)== 0) stop("This model contains no interaction effect")
maineffect <- x[which(x%in% 1:nmain.p)]
check <- c()
if (heredity == "Strong"){
for (i in 1:length(intereffect)) {
check <- c(check, interaction.ind[as.numeric(intereffect[i]-nmain.p),])
}
result <- all(check%in% maineffect)
}
else if (heredity == "Weak"){
for (i in 1:length(intereffect)) {
ee <- interaction.ind[as.numeric(intereffect[i]-nmain.p),]
check[i] <- any(ee%in% maineffect)
}
result <- all(check)
}
return(result)
}
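# Illustrative sketch (not part of the package code): with nmain.p = 4 the index 5
# denotes the first row of interaction.ind, i.e. the interaction X1X2.  Under Strong
# heredity both parent main effects must be present, under Weak heredity one parent
# is enough.
if (FALSE) {
  interaction.ind <- t(combn(4, 2))
  Heredity(c(1, 2, 5), nmain.p = 4, interaction.ind, heredity = "Strong")  # TRUE
  Heredity(c(1, 5),    nmain.p = 4, interaction.ind, heredity = "Strong")  # FALSE
  Heredity(c(1, 5),    nmain.p = 4, interaction.ind, heredity = "Weak")    # TRUE
}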
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/Heredity.R |
#' Performing crossover\cr
#'
#' This function performs crossover which only stores all fitted models without
#' making any comparison. The selected indices in each fitted model will be
#' automatically re-ordered so that main effects come first, followed by
#' two-way interaction effects, and zero reservation spaces.
#'
#' @param parents A numeric matrix of dimension \code{q} by \code{r1+r2},
#' obtained from \code{initial} or the previous generation, where each row corresponds
#' to a fitted model and each column represents the predictor index in the fitted model.
#' @param heredity Whether to enforce Strong, Weak, or No heredity. Default is "Strong".
#' @param nmain.p A numeric value that represents the total number of main effects
#' in \code{X}.
#' @param r1 A numeric value indicating the maximum number of main effects.
#' @param r2 A numeric value indicating the maximum number of interaction effects.
#' @param interaction.ind A two-column numeric matrix containing all possible
#' two-way interaction effects. It must be generated outside of this function
#' using \code{t(utils::combn())}. See Example section for details.
#'
#' @return A numeric matrix \code{single.child.bit} is returned. Each row representing
#' a fitted model, and each column corresponding to the predictor index in the fitted model.
#' Duplicated models are allowed.
#' @export
#' @seealso \code{\link{initial}}.
#'
#' @examples # Under Strong heredity
#' set.seed(0)
#' nmain.p <- 4
#' interaction.ind <- t(combn(4,2))
#' X <- matrix(rnorm(50*4,1,0.1), 50, 4)
#' epl <- rnorm(50,0,0.01)
#' y<- 1+X[,1]+X[,2]+X[,1]*X[,2]+epl
#' p1 <- initial(X, y, nmain.p = 4, r1 = 3, r2 = 3,
#' interaction.ind = interaction.ind, q = 5)
#' c1 <- cross(p1, nmain.p=4, r1 = 3, r2 = 3,
#' interaction.ind = interaction.ind)
cross <- function(parents, heredity = "Strong", nmain.p, r1, r2, interaction.ind = NULL){
if (is.null(interaction.ind)) stop("Interaction.ind is missing.
Use t(utils::combn()) to generate interaction matrix.")
max_model_size <- length(parents$initialize[1,])
parentsMB <- parents$InterRank[,1]
single.child.bit <- matrix(0,nrow=choose(dim(parents$initialize)[1],2),ncol=max_model_size)
tempcount <- 0
for (i in 1:(dim(parents$initialize)[1]-1)) {
for (j in ((i+1):(dim(parents$initialize)[1]))) {
tempcount <- tempcount + 1
crossind <- union(parents$initialize[i,][which(!parents$initialize[i,]==0)],
parents$initialize[j,][which(!parents$initialize[j,]==0)])
crossind <- as.numeric(unlist(crossind))
crossindmain <- as.numeric(unique(crossind[which(crossind%in% 1:nmain.p)]))
crossindinter <- as.numeric(unique(crossind[which(crossind>nmain.p)]))
if (length(crossindmain)<=r1 & length(crossindinter)<=r2){
if (length(crossindmain)>0){
single.child.bit[tempcount, c(1:length(crossindmain))] <- crossindmain
}else{
single.child.bit[tempcount, c(1:r1)] <- 0
}
if (length(crossindinter)>0){
single.child.bit[tempcount,
c((max(which(!single.child.bit[tempcount,] == 0))+1):((max(which(!single.child.bit[tempcount,] == 0))+1)+length(crossindinter)-1))] <- crossindinter
}else{
single.child.bit[tempcount, c((length(crossindmain)+1):max_model_size)] <- 0
}
}else{
single.child.bit[tempcount, c(1:min(r1, length(crossindmain)))] <- sort(mysample(crossindmain, min(r1,length(crossindmain))))
single.child.bit[tempcount, (max(which(!single.child.bit[tempcount,] == 0))+1)] <- as.numeric(unlist(parentsMB))[1]
}
}
}
single.child.bit <- as.matrix(single.child.bit)
return(single.child.bit)
}
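# Illustrative sketch (not part of the package code): conceptually, each child model
# produced by cross() is the union of the predictor indices of two parents (then
# re-ordered and, if too large, truncated to r1 main and r2 interaction effects).
if (FALSE) {
  union(c(1, 2, 5), c(1, 3, 6))   # parents {1,2,5} and {1,3,6} give the child {1,2,3,5,6}
}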
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/cross.R |
#' Suggesting values for \code{r2}
#'
#' This function suggests the values for \code{r2}.
#'
#' @param X Input data. An optional data frame, or numeric matrix of dimension
#' \code{n} by \code{nmain.p}. Note that the two-way interaction effects should not
#' be included in \code{X} because this function automatically generates the
#' corresponding two-way interaction effects if needed.
#' @param y Response variable. An \code{n}-dimensional vector, where \code{n} is the number
#' of observations in \code{X}.
#' @param heredity Whether to enforce Strong, Weak, or No heredity. Default is "Strong".
#' @param nmain.p A numeric value that represents the total number of main effects
#' in \code{X}.
#' @param sigma The standard deviation of the noise term. In practice, sigma is usually
#' unknown; in that case, this function automatically estimates sigma using the root mean
#' square error (RMSE). Default is NULL. Otherwise, users need to enter a numeric value.
#' @param r1 A numeric value indicating the maximum number of main effects.
#' @param r2 A numeric value indicating the maximum number of interaction effects.
#' @param interaction.ind A two-column numeric matrix containing all possible
#' two-way interaction effects. It must be generated outside of this function
#' using \code{t(utils::combn())}. See Example section for details.
#' @param pi1 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param pi2 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param pi3 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param lambda A numeric value defined by users. Default is 10.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param q A numeric value indicating the number of models in each generation (e.g.,
#' the population size). Default is 40.
#'
#' @return A \code{list} of output. The components are:
#' \item{InterRank}{Rank of all candidate interaction effects. A two-column numeric
#' matrix. The first column contains indices of ranked two-way interaction effects, and the
#' second column contains its corresponding ABC score.}
#' \item{mainind.sel}{Selected main effects. A \code{r1}-dimensional vector.}
#' \item{mainpool}{Ranked main effects in \code{X}.}
#' \item{plot}{Plot of potential interaction effects and their corresponding ABC scores.}
#' @export
#'
#' @seealso \code{\link{initial}}.
#'
#' @examples # under Strong heredity
#' # set.seed(0)
#' # nmain.p <- 4
#' # interaction.ind <- t(combn(4,2))
#' # X <- matrix(rnorm(50*4,1,0.1), 50, 4)
#' # epl <- rnorm(50,0,0.01)
#' # y<- 1+X[,1]+X[,2]+X[,1]*X[,2]+epl
#' # d1 <- detect(X, y, nmain.p = 4, r1 = 3, r2 = 3,
#' #              interaction.ind = interaction.ind, q = 5)
#'
#' @examples # under No heredity
#' set.seed(0)
#' nmain.p <- 4
#' interaction.ind <- t(combn(4,2))
#' X <- matrix(rnorm(50*4,1,0.1), 50, 4)
#' epl <- rnorm(50,0,0.01)
#' y<- 1+X[,1]+X[,2]+X[,1]*X[,2]+epl
#' d2 <- detect(X, y, heredity = "No", nmain.p = 4, r1 = 3, r2 = 3,
#' interaction.ind = interaction.ind, q = 5)
#'
detect <- function (X, y, heredity = "Strong",
nmain.p, sigma = NULL, r1, r2,
interaction.ind = NULL,
pi1 = 0.32, pi2 = 0.32, pi3 = 0.32,
lambda = 10, q = 40){
bbb <- int(X, y, heredity = heredity,
nmain.p = nmain.p, sigma = sigma, r1 = r1, r2 = r2,
interaction.ind = interaction.ind,
pi1 = pi1, pi2 = pi2, pi3 = pi3,
lambda = lambda, q = q)
interpool <- bbb$InterRank
  # plot at most the 50 best ranked interaction effects against their ABC scores
  ccc <- as.data.frame(interpool)
  if (dim(ccc)[1] > 50) ccc <- ccc[1:50, ]
  inter <- ccc[, 1]
  scores <- ccc[, 2]
  gp <- ggplot2::ggplot(ccc,
                        ggplot2::aes(x = stats::reorder(as.character(inter), as.numeric(scores)),
                                     y = as.numeric(scores))) +
    geom_point() + ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 90))
  return(list(InterRank = interpool,
              mainind.sel = bbb$mainind.sel,
              mainpool = bbb$mainpool,
              plot = gp))
}
int <- function(X, y, heredity = "Strong",
nmain.p, sigma = NULL, r1, r2,
interaction.ind = NULL,
pi1 = 0.32, pi2 = 0.32, pi3 = 0.32,
lambda = 10, q = 40){
if (is.null(interaction.ind)) stop("Interaction.ind is missing. Use t(utils::combn()) to generate interaction matrix.")
colnames(X) <- make.names(rep("","X",ncol(X)+1),unique=TRUE)[-1]
max_model_size <- r1 + r2
DCINFO <- DCSIS(X,y,nsis=(dim(X)[1])/log(dim(X)[1]))
mainind <- as.numeric(gsub(".*?([0-9]+).*", "\\1", colnames(X)[order(DCINFO$rankedallVar)]))
Shattempind <- mainind[1:r1]
# no heredity pool
if (heredity == "No"){
interpooltemp <- interaction.ind
}
# weak heredity pool
if (heredity =="Weak"){
for (i in 1:q) {
df <- rbind(interaction.ind[interaction.ind[,1] %in% Shattempind,][order(stats::na.exclude(match(interaction.ind[,1], Shattempind))),],
interaction.ind[interaction.ind[,2] %in% Shattempind,][order(stats::na.exclude(match(interaction.ind[,2], Shattempind))),])
interpooltemp <- df[!duplicated(df),]
}
}
# strong heredity pool
if (heredity =="Strong"){
interpooltemp <- t(utils::combn(sort(Shattempind),2))
}
intercandidates.ind <- match(do.call(paste, as.data.frame(interpooltemp)), do.call(paste, as.data.frame(interaction.ind)))+nmain.p
interscoreind <- list()
for (i in 1:length(intercandidates.ind)) {
interscoreind[[i]] <- ABC(X, y, heredity = heredity, nmain.p = nmain.p, sigma = sigma,
extract = "Yes", varind = c(Shattempind,intercandidates.ind[i]),
interaction.ind = interaction.ind,
pi1 = pi1, pi2 = pi2, pi3= pi3, lambda = lambda)
}
interscoreind <- interscoreind
MA <- as.matrix(cbind(inter = intercandidates.ind, scores = interscoreind))
if (dim(MA)[1] == 1){
MB <- MA
}else{
MB <- MA[order(unlist(MA[,2]), na.last = TRUE),]
}
interpool <- MB
return(list(InterRank = interpool,
mainind.sel = Shattempind,
mainpool = mainind))
}
# internal helper: distance correlation sure independence screening (DC-SIS);
# ranks the columns of X by their distance correlation with Y
DCSIS <- function(X,Y,nsis=(dim(X)[1])/log(dim(X)[1])){
if (dim(X)[1]!=length(Y)) {
stop("X and Y should have same number of rows")
}
if (missing(X)|missing(Y)) {
stop("The data is missing")
}
if (TRUE%in%(is.na(X)|is.na(Y)|is.na(nsis))) {
stop("The input vector or matrix cannot have NA")
}
n=dim(X)[1]
p=dim(X)[2]
B=matrix(1,n,1)
C=matrix(1,1,p)
sxy1=matrix(0,n,p)
sxy2=matrix(0,n,p)
sxy3=matrix(0,n,1)
sxx1=matrix(0,n,p)
syy1=matrix(0,n,1)
for (i in 1:n){
XX1=abs(X-B%*%X[i,])
YY1=sqrt(apply((Y-B%*%Y[i])^2,1,sum))
sxy1[i,]=apply(XX1*(YY1%*%C),2,mean)
sxy2[i,]=apply(XX1,2,mean)
sxy3[i,]=mean(YY1)
XX2=XX1^2
sxx1[i,]=apply(XX2,2,mean)
YY2=YY1^2
syy1[i,]=mean(YY2)
}
SXY1=apply(sxy1,2,mean)
SXY2=apply(sxy2,2,mean)*apply(sxy3,2,mean)
SXY3=apply(sxy2*(sxy3%*%C),2,mean)
SXX1=apply(sxx1,2,mean)
SXX2=apply(sxy2,2,mean)^2
SXX3=apply(sxy2^2,2,mean)
SYY1=apply(syy1,2,mean)
SYY2=apply(sxy3,2,mean)^2
SYY3=apply(sxy3^2,2,mean)
dcovXY=sqrt(SXY1+SXY2-2*SXY3)
dvarXX=sqrt(SXX1+SXX2-2*SXX3)
dvarYY=sqrt(SYY1+SYY2-2*SYY3)
dcorrXY=dcovXY/sqrt(dvarXX*dvarYY)
A=order(dcorrXY,decreasing=TRUE)
return (list(rankedallVar = A,
rankednsisVar = A[1:min(length(order(dcorrXY,decreasing=TRUE)),nsis)],
scoreallVar = dcorrXY)
)
}
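# Illustrative sketch (not part of the package code): DCSIS() ranks the columns of X
# by their distance correlation with y (sure independence screening), keeping the top
# n/log(n) of them by default.  On the toy data below the active predictors X1 and X2
# are expected to be ranked ahead of the pure noise columns.
if (FALSE) {
  set.seed(1)
  X <- matrix(rnorm(100 * 6), 100, 6)
  y <- X[, 1] + X[, 2] + rnorm(100, 0, 0.1)
  scr <- DCSIS(X, y)
  scr$rankedallVar    # column indices, most relevant first
  scr$scoreallVar     # distance correlation of every column with y
}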
# build the two-column matrix of all predictor pairs (equivalent to
# t(utils::combn(n, 2))), filling an internal buffer chunk-by-chunk
indchunked <- function(n, chunk_size) {
  out <- list()
  chunk <- matrix(0, chunk_size, 2)
  chunk_idx <- 1
  for(i in 1:(n-1)) {
    for(j in (i+1):n) {
      chunk[chunk_idx,] <- c(i, j)
      chunk_idx <- chunk_idx + 1
      if(chunk_idx > chunk_size) {
        out[[length(out) + 1]] <- chunk
        chunk_idx <- 1
      }
    }
  }
  if(chunk_idx > 1) {
    out[[length(out) + 1]] <- chunk[1:(chunk_idx-1), , drop = FALSE]
  }
  return(do.call(rbind, out))
}
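# Illustrative sketch (not part of the package code), assuming the chunk-by-chunk
# construction above: indchunked() should reproduce the pair matrix generated by
# t(utils::combn(n, 2)).
if (FALSE) {
  all(indchunked(6, chunk_size = 5) == t(combn(6, 2)))   # expected TRUE
}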
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/detect.R |
#' Setting up initial candidate models\cr
#'
#' This function automatically ranks all candidate interaction effects under
#' Strong, Weak, or No heredity condition and obtains initial candidate models.
#'
#' @param X Input data. An optional data frame, or numeric matrix of dimension
#' \code{n} by \code{nmain.p}. Note that the two-way interaction effects should not
#' be included in \code{X} because this function automatically generates the
#' corresponding two-way interaction effects if needed.
#' @param y Response variable. An \code{n}-dimensional vector, where \code{n} is the number
#' of observations in \code{X}.
#' @param heredity Whether to enforce Strong, Weak, or No heredity. Default is "Strong".
#' @param nmain.p A numeric value that represents the total number of main effects
#' in \code{X}.
#' @param sigma The standard deviation of the noise term. In practice, sigma is usually
#' unknown; in that case, this function automatically estimates sigma using the root mean
#' square error (RMSE). Default is NULL. Otherwise, users need to enter a numeric value.
#' @param r1 A numeric value indicating the maximum number of main effects. This number
#' can be different from the \code{r1} defined in \code{\link{detect}}.
#' @param r2 A numeric value indicating the maximum number of interaction effects.
#' This number can be different from the \code{r2} defined in \code{\link{detect}}.
#' @param interaction.ind A two-column numeric matrix containing all possible
#' two-way interaction effects. It must be generated outside of this function
#' using \code{t(utils::combn())}. See Example section for details.
#' @param pi1 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param pi2 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param pi3 A numeric value between 0 and 1, defined by users. Default is 0.32.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param lambda A numeric value defined by users. Default is 10.
#' For guidance on selecting an appropriate value, please refer to \code{\link{ABC}}.
#' @param q A numeric value indicating the number of models in each generation (e.g.,
#' the population size). Default is 40.
#'
#' @return A \code{list} of output. The components are:
#' \item{initialize}{Initial candidate models. A numeric matrix of dimension \code{q} by
#' \code{r1+r2} where each row represents a fitted model. Duplicated models are allowed.}
#' \item{InterRank}{Rank of all candidate interaction effects. A two-column numeric
#' matrix. The first column contains indices of ranked two-way interaction effects, and the
#' second column contains its corresponding ABC score.}
#' \item{mainind.sel}{Selected main effects. A \code{r1}-dimensional vector.}
#' \item{mainpool}{Ranked main effects in \code{X}.}
#' @export
#'
#' @seealso \code{\link{ABC}}, \code{\link{Extract}}.
#' @examples # Under Strong heredity
#' # set.seed(0)
#' # nmain.p <- 4
#' # interaction.ind <- t(combn(4,2))
#' # X <- matrix(rnorm(50*4,1,0.1), 50, 4)
#' # epl <- rnorm(50,0,0.01)
#' # y <- 1+X[,1]+X[,2]+X[,1]*X[,2]+epl
#' # p1 <- initial(X, y, nmain.p = 4, r1 = 3, r2 = 3,
#' #               interaction.ind = interaction.ind, q = 5)
initial <- function(X, y, heredity = "Strong",
nmain.p, sigma = NULL, r1, r2,
interaction.ind = NULL,
pi1 = 0.32, pi2 = 0.32, pi3 = 0.32,
lambda = 10, q = 40){
if (is.null(interaction.ind)) stop("Interaction.ind is missing.
Use t(utils::combn()) or indchunked() to generate interaction matrix.")
colnames(X) <- make.names(rep("","X",ncol(X)+1),unique=TRUE)[-1]
max_model_size <- r1 + r2
aaa <- int(X, y, heredity = heredity,
nmain.p = nmain.p, sigma = sigma, r1, r2,
interaction.ind = interaction.ind,
pi1 = pi1, pi2 = pi2, pi3 = pi3,
lambda = lambda, q = q)
parents <- matrix(0, nrow = q, ncol = max_model_size)
MB <- aaa$InterRank
mainind <- aaa$mainpool
Shattempind <- aaa$mainind.sel
interind <- unlist(MB[,1])
for (i in 1:dim(parents)[1]) {
parents[i,c(1:length(Shattempind))] <- Shattempind
if (length(unlist(MB[,1])) < dim(parents)[1]){
parents[i,max(which(!parents[i,]==0))+1] <- rep_len(interind, length.out=dim(parents)[1])[i]
}else{
parents[i,max(which(!parents[i,]==0))+1] <- interind[i]
}
}
parents <- list(
initialize = parents,
InterRank = MB,
mainind.sel = Shattempind,
mainpool = mainind
)
return(parents)
}
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/initial.R |
#' Performing mutation\cr
#'
#' This function performs mutation which only stores all fitted models without
#' making any comparison. The selected indices in each fitted model will be
#' automatically re-ordered so that main effects come first, followed by
#' two-way interaction effects, and zero reservation spaces.
#'
#' @param parents A numeric matrix of dimension \code{q} by \code{r1+r2},
#' obtained from \code{initial} or the previous generation, where each row corresponds
#' to a fitted model and each column represents the predictor index in the fitted model.
#' @param heredity Whether to enforce Strong, Weak, or No heredity. Default is "Strong".
#' @param nmain.p A numeric value that represents the total number of main effects
#' in \code{X}.
#' @param r1 A numeric value indicating the maximum number of main effects.
#' @param r2 A numeric value indicating the maximum number of interaction effects.
#' @param interaction.ind A two-column numeric matrix containing all possible
#' two-way interaction effects. It must be generated outside of this function
#' using \code{t(utils::combn())}. See Example section for details.
#' @param interonly Whether or not to consider fitted models with only two-way
#' interaction effects. Either "Yes" or "No". Default is "No".
#' @param aprob A numeric value between 0 and 1, defined by users.
#' The addition probability during mutation. Default is 0.9.
#' @param dprob A numeric value between 0 and 1, defined by users.
#' The deletion probability during mutation. Default is 0.9.
#' @param aprobm A numeric value between 0 and 1, defined by users.
#' The main effect addition probability during addition. Default is 0.1.
#' @param aprobi A numeric value between 0 and 1, defined by users.
#' The interaction effect addition probability during addition. Default is 0.9.
#' @param dprobm A numeric value between 0 and 1, defined by users.
#' The main effect deletion probability during deletion. Default is 0.9.
#' @param dprobi A numeric value between 0 and 1, defined by users.
#' The interaction effect deletion probability during deletion. Default is 0.1.
#'
#' @return A numeric matrix \code{single.child.mutated} is returned. Each row representing
#' a fitted model, and each column corresponding to the predictor index in the fitted model.
#' Duplicated models are allowed.
#' @export
#' @seealso \code{\link{initial}}.
#'
#' @examples # Under Strong heredity, interonly = "No"
#' set.seed(0)
#' nmain.p <- 4
#' interaction.ind <- t(combn(4,2))
#' X <- matrix(rnorm(50*4,1,0.1), 50, 4)
#' epl <- rnorm(50,0,0.01)
#' y <- 1+X[,1]+X[,2]+X[,1]*X[,2]+epl
#' p1 <- initial(X, y, nmain.p = 4, r1 = 3, r2 = 3,
#' interaction.ind = interaction.ind, q = 5)
#' m1 <- mut(p1, nmain.p = 4, r1 = 3, r2 = 3,
#' interaction.ind =interaction.ind)
#' @examples # Under No heredity, interonly = "Yes"
#' m2 <- mut(p1, heredity = "No", nmain.p = 4, r1 = 3, r2 = 3,
#' interaction.ind =interaction.ind, interonly = "Yes")
mut <- function(parents, heredity = "Strong", nmain.p,
r1, r2, interaction.ind = NULL, interonly = "No",
aprob = 0.9, dprob = 0.9, aprobm = 0.1, aprobi=0.9, dprobm = 0.9, dprobi = 0.1){
if (is.null(interaction.ind)) stop("Interaction.ind is missing.
Use t(utils::combn()) to generate interaction matrix.")
single.child.mutated <- parents$initialize
for (i in 1: nrow(single.child.mutated)) {
if (length(single.child.mutated[i,][which(single.child.mutated[i,] == 0)])>0){
addition <- stats::rbinom(1, 1, prob = aprob)
if (as.logical(addition)){
additionindpool <- setdiff(union(as.numeric(parents$mainpool),as.numeric(parents$InterRank[,1]) ),
single.child.mutated[i,][which(!(single.child.mutated[i,]) == 0)])
additionindpool.main <- additionindpool[additionindpool%in% 1:nmain.p]
additionindpool.inter <- additionindpool[additionindpool>nmain.p]
aamain <- length(single.child.mutated[i,][which(single.child.mutated[i,]%in%1:nmain.p)])
aainter <- length(single.child.mutated[i,][which(single.child.mutated[i,]>nmain.p)])
if (!length(additionindpool.inter)==0){
additionind <- mysample(stats::na.omit(c(additionindpool.main[1], additionindpool.inter[1])), 1 , prob=c(aprobm,aprobi))
if (additionind<=nmain.p & !aamain==0 & aamain<r1){
additionind <- mysample(additionindpool.main[1],1)
single.child.mutated[i,max(which(!single.child.mutated[i,] == 0))+1] <- additionind
}else{
single.child.mutated[i,] <- single.child.mutated[i,]
}
if (additionind>nmain.p & !aainter==0 & aainter<r2){
if (heredity == "Strong" | heredity == "Weak"){
check <- Heredity(x = c(single.child.mutated[i,][which(!single.child.mutated[i,]==0)],additionind),
nmain.p = nmain.p, interaction.ind = interaction.ind, heredity = heredity)
if (check == TRUE){
single.child.mutated[i,max(which(!single.child.mutated[i,] == 0))+1] <- additionind
}
}
}else{
single.child.mutated[i,] <- single.child.mutated[i,]
}
}
}
else{
single.child.mutated[i,] <- single.child.mutated[i,]
}
}
deletion <- stats::rbinom(1, 1, prob = dprob)
if (as.logical(deletion)){
if (sum(!single.child.mutated[i,]==0)>1){
bbb <- as.numeric(single.child.mutated[i,][which(single.child.mutated[i,]%in% 1:nmain.p)])
ccc <- as.numeric(single.child.mutated[i,][which(single.child.mutated[i,]>nmain.p)])
dmain <- stats::rbinom(1, 1, prob = dprobm)
if (as.logical(dmain) & !length(bbb)==0){
sample_index <- as.numeric(mysample(bbb,1))
deletionind <- sample_index
}
dinter <- stats::rbinom(1, 1, prob = dprobi)
if (dmain == FALSE & as.logical(dinter) & !length(dinter)==0){
sample_index <- as.numeric(mysample(ccc,1))
deletionind <- sample_index
}
if (dmain == FALSE & dinter == FALSE){
single.child.mutated[i,] <- single.child.mutated[i,]
}else{
if (heredity == "No"){
if (interonly == "Yes"){
single.child.mutated[i,] <- replace(single.child.mutated[i,], which(single.child.mutated[i,] < nmain.p+1), 0)
}else{
single.child.mutated[i,] <- replace(single.child.mutated[i,], which(single.child.mutated[i,] == deletionind), 0)
}
}
if (heredity == "Strong"){
if (deletionind %in% 1:nmain.p){
mutate.inter <- single.child.mutated[i,][single.child.mutated[i,] >nmain.p]
for (j in 1:length(mutate.inter)) {
if (any(interaction.ind[mutate.inter[j]-nmain.p,] %in% deletionind)){
single.child.mutated[i,] <- replace(single.child.mutated[i,], which(single.child.mutated[i,] == mutate.inter[j]), 0)
}
}
}
single.child.mutated[i,] <- replace(single.child.mutated[i,], which(single.child.mutated[i,] == deletionind), 0)
}
if (heredity =="Weak"){
if (deletionind %in% 1:nmain.p){
mutate.inter <- single.child.mutated[i,][single.child.mutated[i,] > nmain.p]
for (j in 1:length(mutate.inter)) {
if (!any(interaction.ind[mutate.inter[j]-nmain.p,]%in% setdiff(single.child.mutated[i,],deletionind))){
single.child.mutated[i,] <- replace(single.child.mutated[i,], which(single.child.mutated[i,] == mutate.inter[j]), 0)
}
}
}
single.child.mutated[i,] <- replace(single.child.mutated[i,], which(single.child.mutated[i,] == deletionind), 0)
}
}
}
}
else{
single.child.mutated[i,] <- single.child.mutated[i,]
}
}
single.child.mutated <- as.matrix(single.child.mutated)
for (i in 1:nrow(single.child.mutated)) {
single.child.mutated[i,] <- sort_zeros(single.child.mutated[i,])
}
return(single.child.mutated)
}
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/mut.R |
# internal helper: translate the numeric predictor indices of candidate models
# into the X.1 / X.1X.2 naming convention used by Extract()
predictor_match <- function(candidate.model, r1,r2, nmain.p, interaction.ind){
model.match <- list()
if (mydim(candidate.model)[1]==1){
ee <- candidate.model[1:(r1+r2)]
ee1 <- ee[which(ee%in% 1:nmain.p)]
ee2 <- ee[which(ee > nmain.p)]
if (length(ee2) == 0){
model.match[[1]] <-c(paste0("X.", ee1))
}
if (length(ee1) == 0){
model.match[[1]] <- c( paste0("X.", interaction.ind[ee2-nmain.p,1], "X.", interaction.ind[ee2-nmain.p,2]))
}
if (!length(ee2) ==0 & !length(ee1) ==0){
model.match[[1]] <-c(paste0("X.", ee1), paste0("X.", interaction.ind[ee2-nmain.p,1], "X.", interaction.ind[ee2-nmain.p,2]))
}
}else{
for (i in 1:nrow(candidate.model)) {
ee <- candidate.model[i, 1:(r1+r2)]
ee1 <- ee[which(ee%in% 1:nmain.p)]
ee2 <- ee[which(ee > nmain.p)]
if (length(ee2) == 0){
model.match[[i]] <-c(paste0("X.", ee1))
}
if (length(ee1) == 0){
model.match[[i]] <- c( paste0("X.", interaction.ind[ee2-nmain.p,1], "X.", interaction.ind[ee2-nmain.p,2]))
}
if (!length(ee2) ==0 & !length(ee1) ==0){
model.match[[i]] <-c(paste0("X.", ee1), paste0("X.", interaction.ind[ee2-nmain.p,1], "X.", interaction.ind[ee2-nmain.p,2]))
}
}
}
return(model.match = model.match)
}
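# Illustrative sketch (not part of the package code): with nmain.p = 4, the index 5
# is the first row of interaction.ind, so a candidate model c(1, 2, 5, 0, 0, 0) is
# translated into the names "X.1", "X.2" and "X.1X.2".
if (FALSE) {
  interaction.ind <- t(combn(4, 2))
  predictor_match(c(1, 2, 5, 0, 0, 0), r1 = 3, r2 = 3, nmain.p = 4,
                  interaction.ind = interaction.ind)
}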
| /scratch/gouwar.j/cran-all/cranData/AVGAS/R/predictor_match.R |
#' Tests if AWS SDK for Java jar files are available
#' @return boolean
#' @export
test_awr_jars <- function() {
jars <- rJava::.jcall('java/lang/System', 'S', 'getProperty', 'java.class.path')
grepl('aws-java-sdk', jars)
}
#' Asserts if AWS SDK for Java jar files are available
#' @return invisible \code{TRUE} on success, otherwise error
#' @export
assert_awr_jars <- function() {
stopifnot(test_awr_jars())
invisible(TRUE)
}
#' Checks if AWS SDK for Java jar files are available
#' @return \code{TRUE} on success, informative message as a string on error
#' @export
check_awr_jars <- function() {
if (test_awr_jars()) {
return(TRUE)
}
paste(
'The AWS Java SDK was not found in the Java JAR class path,',
'which means the AWR R package is not ready to be used yet!\n',
'If you already have the AWS Java SDK jar files,',
'you can use rJava::.jaddClassPath to reference those,',
'otherwise you need to compile or download the JAR files',
'as described in the README.md of the AWR package.',
sep = '\n'
)
}
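## Illustrative sketch (not part of the package code): typical defensive use in a
## downstream script -- test_awr_jars() returns a boolean, check_awr_jars() returns
## TRUE or an explanatory message, and assert_awr_jars() stops when the SDK jars are
## missing from the Java class path.
if (FALSE) {
  if (!test_awr_jars()) {
    message(check_awr_jars())
  } else {
    assert_awr_jars()
    ## safe to instantiate AWS SDK classes via rJava from here on
  }
}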
| /scratch/gouwar.j/cran-all/cranData/AWR/R/check.R |
#' Making the AWS Java SDK JAR classes available in R
#'
#' This R package makes the \code{jar} files of the AWS SDK for Java
#' available to be used in downstream R packages. Please note the
#' installation instructions for the System Requirements in the
#' \code{README.md}.
#' @references \url{https://aws.amazon.com/sdk-for-java}
#' @docType package
#' @importFrom rJava .jpackage .jcall
#' @name AWR-package
#' @examples \dontrun{
#' library(rJava)
#' client <- .jnew("com.amazonaws.services.s3.AmazonS3Client")
#' client$getS3AccountOwner()$getDisplayName()
#' }
NULL
.onAttach <- function(libname, pkgname) {
if (!test_awr_jars()) {
packageStartupMessage(check_awr_jars())
}
}
.onLoad <- function(libname, pkgname) {
## add the package-bundled jars to the Java classpath
rJava::.jpackage(
pkgname, lib.loc = libname,
## for devtools::load_all in the development environment
morePaths = list.files(system.file('java', package = pkgname), full.names = TRUE))
}
| /scratch/gouwar.j/cran-all/cranData/AWR/R/zzz.R |
#' Checkpoint at current or given sequence number
#' @param sequenceNumber optional
#' @export
checkpoint <- function(sequenceNumber) {
params <- list(action = 'checkpoint')
if (!missing(sequenceNumber)) {
params <- c(params, list(checkpoint = sequenceNumber))
}
## send checkpointing request
write_line_to_stdout(toJSON(params, auto_unbox = TRUE))
## wait until checkpointing is finished
read_line_from_stdin()
}
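## Illustrative sketch (not part of the package code): checkpoint() talks to the
## MultiLangDaemon over stdin/stdout, so it only works inside a kinesis_consumer()
## callback -- calling it interactively would block on stdin.  A manual-checkpointing
## callback (used with checkpointing = FALSE) might look like this:
if (FALSE) {
  processRecords <- function(records) {
    ## ... business logic on records$data ...
    ## then checkpoint at the last processed sequence number
    checkpoint(records$sequenceNumber[nrow(records)])
  }
}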
| /scratch/gouwar.j/cran-all/cranData/AWR.Kinesis/R/checkpointing.R |
## internal environment storing metadata on the active shard
.shard <- new.env()
.shard$id <- NA
#' Run Kinesis Consumer application
#' @param initialize optional function to be run on startup. Please note that the variables created inside of this function will not be available to eg \code{processRecords}, so make sure to store the shared variables in the parent or global namespace
#' @param processRecords function to process records taking a \code{data.frame} object with \code{partitionKey}, \code{sequenceNumber} and \code{data} columns as the \code{records} argument. Probably you only need the \code{data} column from this object
#' @param shutdown optional function to be run when finished processing all records in a shard
#' @param checkpointing if set to \code{TRUE} (default), \code{kinesis_consumer} will checkpoint after each \code{processRecords} call. To disable checkpointing altogether, set this to \code{FALSE}. If you want to checkpoint periodically, set this to the frequency in minutes as integer.
#' @param updater optional list of list(s) including frequency (in minutes) and function to be run, most likely to update some objects in the parent or global namespace populated first in the \code{initialize} call. If the frequency is smaller than how long the \code{processRecords} call runs, it will be triggered once after each \code{processRecords} call
#' @param logfile file path of the log file. To disable logging, set \code{log_threshold} to something high with the \code{AWR.Kinesis} namespace
#' @export
#' @note Don't run this function directly, it is to be called by the MultiLangDaemon. See the package README for more details.
#' @references \url{https://github.com/awslabs/amazon-kinesis-client/blob/v1.x/src/main/java/com/amazonaws/services/kinesis/multilang/package-info.java}
#' @examples \dontrun{
#' log_threshold(FATAL, namespace = 'AWR.Kinesis')
#' AWR.Kinesis::kinesis_consumer(
#' initialize = function() log_info('Loading some data'),
#' processRecords = function(records) log_info('Received some records from Kinesis'),
#' updater = list(list(1, function() log_info('Updating some data every minute')),
#' list(1/60, function() log_info('This is a high frequency updater call')))
#' )
#' }
kinesis_consumer <- function(initialize, processRecords, shutdown,
checkpointing = TRUE, updater, logfile = tempfile()) {
## store when we last checkpointed
checkpoint_timestamp <- Sys.time()
if (!missing(updater)) {
## check object structure
if (!is.list(updater)) stop('The updater argument should be a list of list(s).')
for (ui in 1:length(updater)) {
if (!is.list(updater[[ui]])) stop(paste('The', ui, 'st/nd/th updater should be a list.'))
if (length(updater[[ui]]) != 2) stop(paste('The', ui, 'st/nd/th updater should include 2 elements.'))
if (!is.numeric(updater[[ui]][[1]])) stop(paste('The first element of the', ui, 'st/nd/th updater should be a numeric frequency.'))
if (!is.function(updater[[ui]][[2]])) stop(paste('The second element of the', ui, 'st/nd/th updater should be a function.'))
}
## init time for the updater functions
updater_timestamps <- rep(Sys.time(), length(updater))
}
## schedule garbage collection
gc_timestamp <- Sys.time()
## log to file instead of stdout (which is used for communication with the Kinesis daemon)
log_appender(appender_file(logfile))
log_formatter(formatter_paste)
log_info('Starting R Kinesis Consumer application')
## custom log layout to add shard ID in each log line
log_layout(function(level, msg, ...) {
timestamp <- format(Sys.time(), tz = 'UTC')
sprintf("%s [%s UTC] %s %s", attr(level, 'level'), timestamp, .shard$id, msg)
})
## run an infinite loop reading from stdin and writing to stout
while (TRUE) {
## read and parse next message from stdin
line <- read_line_from_stdin()
## init Kinesis consumer app
if (line$action == 'initialize') {
.shard$id <- line$shardId
log_info('Start of initialize ')
if (!missing(initialize)) {
initialize()
}
log_info('End of initialize')
}
## we are about to kill this process
if (line$action == 'shutdown') {
log_info('Shutting down')
if (!missing(shutdown)) {
shutdown()
}
if (line$reason == 'TERMINATE') {
checkpoint()
}
}
## process records
if (line$action == 'processRecords') {
n <- nrow(line$records)
log_debug(paste('Processing', n, 'records'))
## nothing to do right now
if (n == 0) next()
## parse response into data.table
records <- data.frame(
partitionKey = line$records$partitionKey,
sequenceNumber = line$records$sequenceNumber,
data = sapply(line$records$data,
function(x) rawToChar(base64_dec(x)), USE.NAMES = FALSE),
stringsAsFactors = FALSE)
## do business logic
processRecords(records)
## always checkpoint
if (isTRUE(checkpointing)) {
checkpoint()
}
## checkpoint once every few minutes
if (is.integer(checkpointing) && length(checkpointing) == 1 &&
difftime(Sys.time(), checkpoint_timestamp, units = 'mins') > checkpointing) {
log_debug('Time to checkpoint')
checkpoint(line$records[nrow(line$records), 'sequenceNumber'])
## reset timer
checkpoint_timestamp <- Sys.time()
}
## updater functions
if (!missing(updater)) {
for (ui in 1:length(updater)) {
if (difftime(Sys.time(), updater_timestamps[ui], units = 'mins') > updater[[ui]][[1]]) {
log_debug(paste('Time to run updater', ui))
updater[[ui]][[2]]()
updater_timestamps[ui] <- Sys.time()
}
}
}
            ## just in case, garbage collection (at most once every hour)
            if (difftime(Sys.time(), gc_timestamp, units = 'mins') > 60) {
                invisible(gc())
                gc_timestamp <- Sys.time()
            }
}
## return response for action
if (line$action != 'checkpoint') {
write_line_to_stdout(toJSON(list(action = unbox('status'), responseFor = unbox(line$action))))
}
## indeed shut down if this process is not needed any more
if (line$action == 'shutdown') {
quit(save = 'no', status = 0, runLast = FALSE)
}
}
}
| /scratch/gouwar.j/cran-all/cranData/AWR.Kinesis/R/consumer.R |
#' Get record from a Kinesis Stream
#' @param stream stream name (string)
#' @param region AWS region (string)
#' @param limit number of records to fetch
#' @param shard_id optional shard id - will pick a random active shard if left empty
#' @param iterator_type shard iterator type
#' @param start_sequence_number for \code{AT_SEQUENCE_NUMBER} and \code{AFTER_SEQUENCE_NUMBER} iterators
#' @param start_timestamp for \code{AT_TIMESTAMP} iterator
#' @note Use this no more than getting sample data from a stream - it's not intended for prod usage.
#' @references \url{https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/kinesis/model/GetRecordsRequest.html}
#' @return character vector that you might want to post-process eg with \code{jsonlite::stream_in}
#' @export
kinesis_get_records <- function(stream, region = 'us-west-1', limit = 25,
shard_id,
iterator_type = c('TRIM_HORIZON', 'LATEST',
'AT_SEQUENCE_NUMBER', 'AFTER_SEQUENCE_NUMBER',
'AT_TIMESTAMP'),
start_sequence_number, start_timestamp) {
iterator_type <- match.arg(iterator_type)
## prepare Kinesis client
client <- .jnew('com.amazonaws.services.kinesis.AmazonKinesisClient')
client$setEndpoint(sprintf('kinesis.%s.amazonaws.com', region))
    ## pick a random shard if none was specified
if (missing(shard_id)) {
shards <- client$describeStream(stream)
shards <- sapply(
as.list(shards$getStreamDescription()$getShards()$toArray()),
function(x) x$getShardId())
shards <- sub('^shardId-', '', shards)
shard_id <- sample(shards, 1)
}
## prepare iterator
req <- .jnew('com.amazonaws.services.kinesis.model.GetShardIteratorRequest')
req$setStreamName(stream)
req$setShardId(.jnew('java/lang/String', shard_id))
req$setShardIteratorType(iterator_type)
if (!missing(start_sequence_number)) {
req$setStartingSequenceNumber(start_sequence_number)
}
if (!missing(start_timestamp)) {
req$setTimestamp(start_timestamp)
}
iterator <- client$getShardIterator(req)$getShardIterator()
## get records
req <- .jnew('com.amazonaws.services.kinesis.model.GetRecordsRequest')
req$setLimit(.jnew('java/lang/Integer', as.integer(limit)))
req$setShardIterator(iterator)
res <- client$getRecords(req)$getRecords()
## transform from Java to R object
sapply(res,
function(x)
rawToChar(x$getData()$array()))
}
| /scratch/gouwar.j/cran-all/cranData/AWR.Kinesis/R/get.R |
#' An R Kinesis Consumer
#'
#' Please find more details in the \code{README.md} file.
#' @docType package
#' @importFrom logger log_trace log_debug log_info log_appender log_formatter formatter_paste appender_file log_layout
#' @importFrom jsonlite fromJSON toJSON base64_dec base64_enc unbox
#' @importFrom rJava .jnew J .jbyte
#' @importFrom utils assignInMyNamespace
#' @import AWR
#' @name AWR.Kinesis-package
NULL
## connection to be opened in the first call to read_line_from_stdin
stdincon <- NULL
.onUnload <- function(libpath) {
## close opened connection
if (!is.null(stdincon)) {
close(stdincon)
}
}
| /scratch/gouwar.j/cran-all/cranData/AWR.Kinesis/R/imports.R |
#' Read one non-empty line from stdin without any warnings printed to stdout
#' @return string
#' @keywords internal
read_line_from_stdin <- function() {
## load stdin only once per R session to avoid the memory leak with
## always re-opening the connection
if (is.null(stdincon)) {
assignInMyNamespace(
'stdincon',
suppressWarnings(file('stdin', open = 'r', blocking = TRUE)))
}
    ## stdincon is guaranteed to be open at this point (opened lazily above on the first call)
line <- scan(stdincon, what = character(0), nlines = 1, quiet = TRUE)
## empty line received
if (length(line) == 0) {
Sys.sleep(0.25)
log_trace('Nothing read from stdin, looking for new messages...')
return(eval.parent(match.call()))
}
## return parsed line with logging
log_trace(paste0('Read ', nchar(line), ' char(s) from stdin: ',
substr(line, 1, 500), ifelse(nchar(line) > 500, ' ...', '')))
return(fromJSON(line))
}
#' Safely write a line to stdout with logging
#' @param line string
#' @keywords internal
write_line_to_stdout <- function(line) {
flush(stdout())
log_trace(paste('Writing to stdout:', line))
cat('\n\n', line, '\n\n')
flush(stdout())
}
| /scratch/gouwar.j/cran-all/cranData/AWR.Kinesis/R/io.R |
#' Write a record to a Kinesis Stream
#' @param stream stream name (string)
#' @param region AWS region (string)
#' @param data data blob (string)
#' @param partitionKey determines which shard in the stream the data record is assigned to, eg username, stock symbol etc (string)
#' @export
#' @references \url{https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/kinesis/model/PutRecordRequest.html}
#' @examples \dontrun{
#' df <- mtcars[1, ]
#' str(kinesis_put_record('test-AWR', data = jsonlite::toJSON(df), partitionKey = row.names(df)))
#' }
#' @return invisible list including the shard id and sequence number
kinesis_put_record <- function(stream, region = 'us-west-1', data, partitionKey) {
## prepare request
req <- .jnew('com.amazonaws.services.kinesis.model.PutRecordRequest')
req$setStreamName(stream)
req$setData(J('java.nio.ByteBuffer')$wrap(.jbyte(charToRaw(data))))
req$setPartitionKey(partitionKey)
## send to AWS
client <- .jnew('com.amazonaws.services.kinesis.AmazonKinesisClient')
client$setEndpoint(sprintf('kinesis.%s.amazonaws.com', region))
res <- client$putRecord(req)
## return list invisible
invisible(list(
shard = res$getShardId(),
sequenceNumber = res$getSequenceNumber()))
}
| /scratch/gouwar.j/cran-all/cranData/AWR.Kinesis/R/put.R |
#' Compute the conditional Aalen-Johansen estimator.
#'
#' @param data A list with one element per individual. Each element is itself a list with components \code{times} (jump times, the first being the initial time), \code{states} (the corresponding states), and optionally \code{X} (a covariate used for conditioning).
#' @param x A numeric value for conditioning.
#' @param a A bandwidth for the covariate. If \code{NULL} (default), an asymmetric neighbourhood of \code{x} based on \code{alpha} is used instead.
#' @param p An integer such that the model has \code{p + 1} states, with the absorbing state last. If \code{NULL}, it is inferred from the data.
#' @param alpha The probability mass of the covariate neighbourhood around \code{x} used for sub-sampling when no bandwidth \code{a} is supplied.
#' @param collapse Logical, whether to collapse the last state of the model.
#'
#' @return A list containing the Aalen-Johansen estimator, the Nelson-Aalen estimator, and related quantities.
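#' @examples
#' ## Minimal sketch of the expected data format: each element of the list is one
#' ## trajectory, given as a list with jump times (starting at the initial time),
#' ## the corresponding states, and optionally a covariate X for conditioning.
#' sim <- list(list(times = c(0, 1.2, 3.5), states = c(1, 2, 3)),
#'             list(times = c(0, 2.1), states = c(1, 3)),
#'             list(times = c(0, 1.5), states = c(1, 1))) # censored in state 1
#' fit <- aalen_johansen(sim)
#' fit$p[[length(fit$p)]] # state occupation probabilities at the last observed time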
#' @export
#'
aalen_johansen <- function(data, x = NULL, a = NULL, p = NULL, alpha = 0.05, collapse = FALSE){
  # Select the relevant individuals: those whose covariate X lies within a/2 of x when a bandwidth a is given, or within an alpha-probability neighbourhood of x otherwise
n <- length(data)
is_unconditional <- is.null(x)
if(!is_unconditional && is.null(data[[1]]$X)) stop("Provide covariate information in data")
if(is_unconditional){relevant_data <- 1:n}else{
X <- unlist(lapply(data, FUN = function(Z) Z$X))
if(is.null(a)){
pvl <- ecdf(X)(x)
upper <- quantile(X, min(1,pvl + alpha/2))
lower <- quantile(X, max(0,pvl - alpha/2))
relevant_data <- which((X<=upper)&(X>=lower))
}else{
relevant_data <- which(((x - X)/a<=1/2)&((x - X)/a>=-1/2))
}
}
data_x <- data[relevant_data]
prop <- length(relevant_data)/n
n_x <- length(data_x)
if(is.null(p)) p <- max(unique(unlist(lapply(data_x, function(Z) Z$states)))) - 1
# Initialize output lists
out <- out2 <- list()
  # Extract each individual's last observation time, the pooled jump times, and the state transitions
R_times <- unlist(lapply(data_x, FUN = function(Z) tail(Z$times,1)))
t_pool <- unlist(lapply(data_x, FUN = function(Z) Z$times[-1]))
#R_times <- unlist(lapply(data_x, FUN = function(Z) sum(Z$sojourns)))
#t_pool <- unlist(lapply(data_x, FUN = function(Z) cumsum(Z$sojourns)))
individuals <- c()
for(i in 1:n_x){
individuals <- c(individuals,rep(i,length(data_x[[i]]$times[-1])))
}
jumps_pool <- matrix(NA,0,2)
for(i in 1:n_x){
v <- data_x[[i]]$states
jumps_pool <- rbind(jumps_pool,cbind(rev(rev(v)[-1]),v[-1]))
}
# Sort the times, individuals, and transitions by time
order_of_times <- order(t_pool)
ordered_times <- c(0,t_pool[order_of_times])
ordered_individuals <- c(NA,individuals[order_of_times])
ordered_jumps <- jumps_pool[order_of_times,]
ordered_N <- rep(list(matrix(0,p+1,p+1)),nrow(ordered_jumps))
for(i in 1:nrow(ordered_jumps)){
ordered_N[[i]][ordered_jumps[i,][1],ordered_jumps[i,][2]] <- 1
}
ordered_N <- lapply(ordered_N, FUN = function(Z) Z - diag(diag(Z)))
colsums_of_N <- lapply(ordered_N,function(N)colSums(N-t(N)))
# Compute out and out2
out[[1]] <- ordered_N[[1]]*n^{-1}/prop
for(tm in 2:(length(ordered_times)-1)){
out[[tm]] <- out[[tm - 1]] + ordered_N[[tm]]*n^{-1}/prop
}
#
decisions <- ordered_times %in% R_times
out2[[1]] <- colsums_of_N[[1]] * n^{-1}/prop
for(tm in 2:(length(ordered_times)-1)){
out2[[tm]] <- out2[[tm - 1]] + colsums_of_N[[tm]] * n^{-1}/prop
if(decisions[tm]){
wch <- ordered_individuals[tm]
end_state <- as.numeric(1:(p+1) == tail(data_x[[wch]]$states,1))
out2[[tm]] <- out2[[tm]] - end_state * n^{-1}/prop
}
}
# Extract initial status for each individual
I_initial <- lapply(data_x, FUN = function(Z) as.numeric(1:(p+1) == head(Z$states,1)))
# Compute initial rate for all individuals
I0 <- Reduce("+", I_initial) * n^{-1}/prop
# Compute rates over time
It <- lapply(out2, FUN = function(N) return(I0 + N) )
# Initialize list for increments
increments <- list()
# Compute increments for each time point
increments[[1]] <- out[[1]]
for(i in 2:length(out)){
increments[[i]] <- out[[i]] - out[[i - 1]]
}
# Compute contribution of each time point
contribution_first <- increments[[1]]/I0
contribution_first[is.nan(contribution_first)] <- 0
contributions <- mapply(FUN = function(a,b){
res <- b/a
res[is.nan(res)] <- 0
return(res)
}, It[-length(It)], increments[-1],SIMPLIFY = FALSE)
# Compute cumulative sum of contributions
cumsums <- list()
cumsums[[1]] <- contribution_first
for(i in 2:length(out)){
cumsums[[i]] <- contributions[[i-1]] + cumsums[[i - 1]]
}
# Do the possible collapse of Lambdas
if(collapse == TRUE){
p <- p - 1
cumsums <- lapply(cumsums, FUN = function(M) M[1:(p+1),1:(p+1)])
contributions <- lapply(contributions, FUN = function(M) M[1:(p+1),1:(p+1)])
I0 <- rev(rev(I0)[-1])
}
# Compute the Aalen-Johansen estimator using difference equations
aj <- list()
aj[[1]] <- I0
Delta <- cumsums[[1]]
aj[[2]] <- aj[[1]] + as.vector(aj[[1]] %*% Delta) - aj[[1]] * rowSums(Delta)
for(i in 2:length(out)){
Delta <- contributions[[i-1]]
aj[[i + 1]] <- aj[[i]] + as.vector(aj[[i]] %*% Delta) - aj[[i]] * rowSums((Delta))
}
# Final touch: adding the diagonal to the Nelson-Aalen estimator
cumsums <- append(list(matrix(0,p+1,p+1)),lapply(cumsums, FUN = function(M){M_out <- M; diag(M_out) <- -rowSums(M); M_out}))
# Return output as a list
return(list(p = aj, Lambda = cumsums, N = out, I0 = I0, It = It, t = ordered_times))
}
| /scratch/gouwar.j/cran-all/cranData/AalenJohansen/R/Aalen-Johansen.R |
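# Internal helper: approximates the product integral (the solution of the
# Kolmogorov differential equations) of the matrix function A over [a, b] with a
# classical fourth-order Runge-Kutta scheme on n steps, returning the solution
# matrices along the grid.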
RK4_matrix <- function(a, b, n, A){
g <- lapply(1:(n+1), function(x){matrix(0, nrow = dim(as.matrix(A(a)))[1], ncol = dim(as.matrix(A(a)))[2])})
g[[1]] <- diag(dim(as.matrix(A(a)))[1])
if(a!=b){
h <- (b-a)/n
if(h>0){
for(i in 1:n){
G1 <- g[[i]]%*%A(a + h * (i-1))*h
G2 <- (g[[i]]+1/2*G1)%*%A(a + h * (i-1) + 1/2*h)*h
G3 <- (g[[i]]+1/2*G2)%*%A(a + h * (i-1) + 1/2*h)*h
G4 <- (g[[i]]+G3)%*%A(a + h*i)*h
g[[i+1]] <- g[[i]] + 1/6*(G1+2*G2+2*G3+G4)
}
}
if(h<0){
for(i in 1:n){
G1 <- A(a + h * (i-1))%*%g[[i]]*h
G2 <- A(a + h * (i-1) + 1/2*h)%*%(g[[i]]+1/2*G1)*h
G3 <- A(a + h * (i-1) + 1/2*h)%*%(g[[i]]+1/2*G2)*h
G4 <- A(a + h*i)%*%(g[[i]]+G3)*h
g[[i+1]] <- g[[i]] - 1/6*(G1+2*G2+2*G3+G4)
}
}
}
return(g)
}
#' Calculate the product integral of a matrix function
#'
#' @param start Start time.
#' @param end End time.
#' @param step_size Step size of the grid.
#' @param lambda A matrix-valued function of time, e.g. a transition intensity (generator) matrix.
#'
#' @return A list of matrices containing the product integral evaluated along the grid from \code{start} to \code{end}.
#'
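#' @examples
#' ## Sketch: transition probabilities over [0, 1] for a constant generator matrix.
#' lambda <- function(t){matrix(c(-1, 1, 0, 0), nrow = 2, byrow = TRUE)}
#' P <- prodint(0, 1, 0.01, lambda)
#' P[[length(P)]] # approximately the matrix exponential of the generator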
#' @export
prodint <- function(start, end, step_size, lambda){
RK4_matrix(start, end, end/step_size, lambda)}
| /scratch/gouwar.j/cran-all/cranData/AalenJohansen/R/Additional_tools.R |
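# Internal helper: simulates the next jump of a (semi-)Markov process by thinning
# (acceptance-rejection with dominating rate b) and then draws the destination
# state from the mark distribution at the accepted time.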
sim_jump <- function(i, t, u = 0, tn, rate, dist, b = NA){
if(is.na(b)){
b <- -optimize(function(x){-rate(t+x, u+x)}, c(0, tn - t))$objective #not reliable; better set b
}
r <- rexp(1, rate = b)
s <- t + r
v <- u + r
y <- runif(1)
while(y > rate(s, v)/b){
r <- rexp(1, rate = b)
s <- s + r
v <- v + r
y <- runif(1)
}
pr <- dist(s, v)
j <- sample(length(pr), 1, prob = pr)
return(list(time = s, mark = j))
}
#' Simulate the path of a time-inhomogeneous (semi-)Markov process until a maximal time
#'
#' @param i The initial state, integer.
#' @param t The initial time, numeric.
#' @param u The initial duration (since the last transition), numeric. By default equal to zero.
#' @param tn The maximal time, numeric. By default equal to infinity.
#' @param rates The total transition rates out of states, a function with arguments state (integer), time (numeric), and duration (numeric) returning a rate (numeric).
#' @param dists The distribution of marks, a function with arguments state (integer), time (numeric), and duration (numeric) returning a probability vector.
#' @param abs Vector indicating which states are absorbing. By default the last state is absorbing.
#' @param bs Vector of upper bounds on the total transition rates. By default the bounds are determined using optimize, which might only identify a local maximum.
#'
#' @return A list with components \code{times} and \code{states}. The first entries are the initial time and state; if the process is not absorbed before \code{tn}, the last time equals \code{tn} with the last state repeated.
#'
#' @import stats
#' @import utils
#'
#' @export
#'
#' @examples
#'
#' jump_rate <- function(i, t, u){if(i == 1){3*t} else if(i == 2){5*t} else{0}}
#' mark_dist <- function(i, s, v){if(i == 1){c(0, 1/3, 2/3)} else if(i == 2){c(1/5, 0, 4/5)} else{0}}
#' sim <- sim_path(sample(1:2, 1), t = 0, tn = 2, rates = jump_rate, dists = mark_dist)
#' sim
sim_path <- function(i, rates, dists, t = 0, u = 0, tn = Inf, abs = numeric(0), bs = NA){
times <- t
marks <- i
if(length(abs) == 0){
abs <- c(rep(FALSE, length(dists(i, t, u)) - 1), TRUE)
}
while(!abs[tail(marks, 1)]){
z <- sim_jump(tail(marks, 1), tail(times, 1), u, tn, function(s, v){rates(tail(marks, 1), s, v)}, function(s, v){dists(tail(marks, 1), s, v)}, bs[tail(marks, 1)])
if(z$time > tn){
break
}
times <- c(times, z$time)
marks <- c(marks, z$mark)
u <- 0
}
if(!abs[tail(marks, 1)]){
times <- c(times, tn)
marks <- c(marks, tail(marks, 1))
}
return(list(times = times, states = marks))
}
| /scratch/gouwar.j/cran-all/cranData/AalenJohansen/R/sim.R |
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(AalenJohansen)
set.seed(2)
jump_rate <- function(i, t, u){
if(i == 1){
2 / (1+1/2*t)
} else if(i == 2){
3 / (1+1/2*t)
} else{
0
}
}
mark_dist <- function(i, s, v){
if(i == 1){
c(0, 1/2, 1/2)
} else if(i == 2){
c(2/3, 0, 1/3)
} else{
0
}
}
lambda <- function(t){
A <- matrix(c(2/(1+1/2*t)*mark_dist(1, t, 0), 3/(1+1/2*t)*mark_dist(2, t, 0), rep(0, 3)),
nrow = 3, ncol = 3, byrow = TRUE)
diag(A) <- -rowSums(A)
A
}
## -----------------------------------------------------------------------------
n <- 1000
c <- runif(n, 0, 5)
sim <- list()
for(i in 1:n){
sim[[i]] <- sim_path(sample(1:2, 1), rates = jump_rate, dists = mark_dist,
tn = c[i], bs = c(2*c[i], 3*c[i], 0))
}
## -----------------------------------------------------------------------------
sum(c == unlist(lapply(sim, FUN = function(z){tail(z$times, 1)}))) / n
## -----------------------------------------------------------------------------
fit <- aalen_johansen(sim)
## ---- fig.align = 'center', fig.height = 3, fig.width = 6---------------------
v1 <- unlist(lapply(fit$Lambda, FUN = function(L) L[2,1]))
v0 <- fit$t
p <- unlist(lapply(fit$p, FUN = function(L) L[2]))
P <- unlist(lapply(prodint(0, 5, 0.01, lambda), FUN = function(L) (c(1/2, 1/2, 0) %*% L)[2]))
par(mfrow = c(1, 2))
par(mar = c(2.5, 2.5, 1.5, 1.5))
plot(v0, v1, type = "l", lty = 2, xlab = "", ylab = "", main = "Hazard")
lines(v0, 4*log(1+1/2*v0))
plot(v0, p, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability")
lines(seq(0, 5, 0.01), P)
## -----------------------------------------------------------------------------
jump_rate <- function(i, t, u){
if(i == 1){
2
} else if(i == 2){
3
} else{
0
}
}
mark_dist <- function(i, s, v){
if(i == 1){
c(0, 1/2, 1/2)
} else if(i == 2){
c(2/3, 0, 1/3)
} else{
0
}
}
lambda <- function(t, x){
A <- matrix(c(2/(1+x*t)*mark_dist(1, t, 0), 3/(1+x*t)*mark_dist(2, t, 0), rep(0, 3)),
nrow = 3, ncol = 3, byrow = TRUE)
diag(A) <- -rowSums(A)
A
}
n <- 10000
X <- runif(n)
c <- runif(n, 0, 5)
sim <- list()
for(i in 1:n){
rates <- function(j, y, z){jump_rate(j, y, z)/(1+X[i]*y)}
sim[[i]] <- sim_path(sample(1:2, 1), rates = rates, dists = mark_dist,
tn = c[i], bs = c(2*c[i], 3*c[i], 0))
sim[[i]]$X <- X[i]
}
## -----------------------------------------------------------------------------
sum(c == unlist(lapply(sim, FUN = function(z){tail(z$times, 1)}))) / n
## -----------------------------------------------------------------------------
x1 <- 0.2
x2 <- 0.8
fit1 <- aalen_johansen(sim, x = x1)
fit2 <- aalen_johansen(sim, x = x2)
## ---- fig.align = 'center', fig.height = 3, fig.width = 6---------------------
v11 <- unlist(lapply(fit1$Lambda, FUN = function(L) L[2,1]))
v10 <- fit1$t
v21 <- unlist(lapply(fit2$Lambda, FUN = function(L) L[2,1]))
v20 <- fit2$t
p1 <- unlist(lapply(fit1$p, FUN = function(L) L[2]))
P1 <- unlist(lapply(prodint(0, 5, 0.01, function(t){lambda(t, x = x1)}),
FUN = function(L) (c(1/2, 1/2, 0) %*% L)[2]))
p2 <- unlist(lapply(fit2$p, FUN = function(L) L[2]))
P2 <- unlist(lapply(prodint(0, 5, 0.01, function(t){lambda(t, x = x2)}),
FUN = function(L) (c(1/2, 1/2, 0) %*% L)[2]))
par(mfrow = c(1, 2))
par(mar = c(2.5, 2.5, 1.5, 1.5))
plot(v10, v11, type = "l", lty = 2, xlab = "", ylab = "", main = "Hazard", col = "red")
lines(v10, 2/x1*log(1+x1*v10), col = "red")
lines(v20, v21, lty = 2, col = "blue")
lines(v20, 2/x2*log(1+x2*v20), col = "blue")
plot(v10, p1, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability", col = "red")
lines(seq(0, 5, 0.01), P1, col = "red")
lines(v20, p2, lty = 2, col = "blue")
lines(seq(0, 5, 0.01), P2, col = "blue")
## -----------------------------------------------------------------------------
jump_rate_enlarged <- function(i, t, u){
if(i == 1){
2.5
} else if(i == 2){
4
} else{
0
}
}
mark_dist_enlarged <- function(i, s, v){
if(i == 1){
c(0, 2/5, 2/5, 1/5)
} else if(i == 2){
c(2/4, 0, 1/4, 1/4)
} else{
0
}
}
n <- 10000
X <- runif(n)
tn <- 5
sim <- list()
for(i in 1:n){
rates <- function(j, y, z){jump_rate_enlarged(j, y, z)/(1+X[i]*y)}
sim[[i]] <- sim_path(sample(1:2, 1), rates = rates, dists = mark_dist_enlarged,
tn = tn, abs = c(FALSE, FALSE, TRUE, TRUE),
bs = c(2.5*tn, 4*tn, 0, 0))
sim[[i]]$X <- X[i]
}
## -----------------------------------------------------------------------------
sum(tn == unlist(lapply(sim, FUN = function(z){tail(z$times, 1)}))
| 4 == unlist(lapply(sim, FUN = function(z){tail(z$states, 1)}))) / n
## -----------------------------------------------------------------------------
fit1 <- aalen_johansen(sim, x = x1, collapse = TRUE)
fit2 <- aalen_johansen(sim, x = x2, collapse = TRUE)
## ---- fig.align = 'center', fig.height = 3, fig.width = 6---------------------
v11 <- unlist(lapply(fit1$Lambda, FUN = function(L) L[2,1]))
v10 <- fit1$t
v21 <- unlist(lapply(fit2$Lambda, FUN = function(L) L[2,1]))
v20 <- fit2$t
p1 <- unlist(lapply(fit1$p, FUN = function(L) L[2]))
p2 <- unlist(lapply(fit2$p, FUN = function(L) L[2]))
par(mfrow = c(1, 2))
par(mar = c(2.5, 2.5, 1.5, 1.5))
plot(v10, v11, type = "l", lty = 2, xlab = "", ylab = "", main = "Hazard", col = "red")
lines(v10, 2/x1*log(1+x1*v10), col = "red")
lines(v20, v21, lty = 2, col = "blue")
lines(v20, 2/x2*log(1+x2*v20), col = "blue")
plot(v10, p1, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability", col = "red")
lines(seq(0, 5, 0.01), P1, col = "red")
lines(v20, p2, lty = 2, col = "blue")
lines(seq(0, 5, 0.01), P2, col = "blue")
## -----------------------------------------------------------------------------
jump_rate <- function(i, t, u){
if(i == 1){
0.1 + 0.002*t
} else if(i == 2){
ifelse(u < 4, 0.29, 0.09) + 0.001*t
} else{
0
}
}
mark_dist <- function(i, s, v){
if(i == 1){
c(0, 0.9, 0.1)
} else if(i == 2){
c(0, 0, 1)
} else{
0
}
}
## -----------------------------------------------------------------------------
n <- 5000
c <- runif(n, 10, 40)
sim <- list()
for(i in 1:n){
sim[[i]] <- sim_path(1, rates = jump_rate, dists = mark_dist, tn = c[i],
bs = c(0.1+0.002*c[i], 0.29+0.001*c[i], 0))
}
## -----------------------------------------------------------------------------
sum(c == unlist(lapply(sim, FUN = function(z){tail(z$times, 1)}))) / n
## -----------------------------------------------------------------------------
fit <- aalen_johansen(sim)
## ---- fig.align = 'center', fig.height = 4.5, fig.width = 4.5-----------------
v0 <- fit$t
p <- unlist(lapply(fit$p, FUN = function(L) L[2]))
integrand <- function(t, s){
exp(-0.1*s-0.001*s^2)*(0.09 + 0.0018*s)*exp(-0.20*pmin(t-s, 4)-0.09*(t-s)-0.0005*(t^2-s^2))
}
P <- Vectorize(function(t){
integrate(f = integrand, lower = 0, upper = t, t = t)$value
}, vectorize.args = "t")
plot(v0, p, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability")
lines(seq(0, 40, 0.1), P(seq(0, 40, 0.1)))
## -----------------------------------------------------------------------------
landmark <- sim[unlist(lapply(sim, FUN = function(z){any(z$times <= 10
& c(z$times[-1], Inf) > 10
& z$states == 2)}))]
landmark <- lapply(landmark, FUN = function(z){list(times = z$times, states = z$states,
X = 10 - z$times[z$times <= 10
& c(z$times[-1], Inf) > 10
& z$states == 2])})
## -----------------------------------------------------------------------------
length(landmark) / n
## -----------------------------------------------------------------------------
u1 <- 1
u2 <- 5
fit1 <- aalen_johansen(landmark, x = u1)
fit2 <- aalen_johansen(landmark, x = u2)
fit3 <- aalen_johansen(landmark)
## ---- fig.align = 'center', fig.height = 4.5, fig.width = 4.5-----------------
v10 <- fit1$t
v20 <- fit2$t
v30 <- fit3$t
p1 <- unlist(lapply(fit1$p, FUN = function(L) L[2]))
p2 <- unlist(lapply(fit2$p, FUN = function(L) L[2]))
p3 <- unlist(lapply(fit3$p, FUN = function(L) L[2]))
P <- function(t, u){
exp(-(t-10)*0.09-(t^2-100)*0.0005-pmax(0, pmin(t, 4-(u-10))-10)*0.20)
}
plot(v10, p1, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability",
col = "red", xlim = c(10, 40))
lines(seq(10, 40, 0.1), P(seq(10, 40, 0.1), u1), col = "red")
lines(v20, p2, lty = 2, col = "blue")
lines(seq(10, 40, 0.1), P(seq(10, 40, 0.1), u2), col = "blue")
lines(v30, p3, lty = 3)
| /scratch/gouwar.j/cran-all/cranData/AalenJohansen/inst/doc/AalenJohansen-vignette.R |
---
title: "Conditional Nelson--Aalen and Aalen--Johansen Estimation"
author: "Martin Bladt & Christian Furrer"
date: "28th of February, 2023"
package: "AalenJohansen"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{my-vignette}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette illustrates, through four examples, the potential uses of the R-package $\texttt{AalenJohansen}$, which is an implementation of the conditional Nelson--Aalen and Aalen--Johansen estimators introduced in Bladt \& Furrer (2023).
## 1. Markov model with independent censoring
We start out with a simple time-inhomogeneous Markov model:
\begin{align*}
\frac{\mathrm{d}\Lambda(t)}{\mathrm{d}t}=\lambda(t)=\frac{1}{1+\frac{1}{2}t}
\begin{pmatrix}
-2 & 1& 1 \\
2 & -3 & 1 \\
0 & 0 & 0
\end{pmatrix}\!.
\end{align*}
```{r}
library(AalenJohansen)
set.seed(2)
jump_rate <- function(i, t, u){
if(i == 1){
2 / (1+1/2*t)
} else if(i == 2){
3 / (1+1/2*t)
} else{
0
}
}
mark_dist <- function(i, s, v){
if(i == 1){
c(0, 1/2, 1/2)
} else if(i == 2){
c(2/3, 0, 1/3)
} else{
0
}
}
lambda <- function(t){
A <- matrix(c(2/(1+1/2*t)*mark_dist(1, t, 0), 3/(1+1/2*t)*mark_dist(2, t, 0), rep(0, 3)),
nrow = 3, ncol = 3, byrow = TRUE)
diag(A) <- -rowSums(A)
A
}
```
We simulate $1,000$ independent and identically distributed realizations subject to independent right-censoring. Right-censoring follows the distribution $\text{Unif}(0,5)$.
```{r}
n <- 1000
c <- runif(n, 0, 5)
sim <- list()
for(i in 1:n){
sim[[i]] <- sim_path(sample(1:2, 1), rates = jump_rate, dists = mark_dist,
tn = c[i], bs = c(2*c[i], 3*c[i], 0))
}
```
The degree of censoring is
```{r}
sum(c == unlist(lapply(sim, FUN = function(z){tail(z$times, 1)}))) / n
```
We fit the classic Nelson--Aalen and Aalen--Johansen estimators.
```{r}
fit <- aalen_johansen(sim)
```
For illustrative purposes, we plot $\Lambda_{21}$ and the state occupation probability $p_2$ for both the true model (full) and using the classic estimators (dashed).
```{r, fig.align = 'center', fig.height = 3, fig.width = 6}
v1 <- unlist(lapply(fit$Lambda, FUN = function(L) L[2,1]))
v0 <- fit$t
p <- unlist(lapply(fit$p, FUN = function(L) L[2]))
P <- unlist(lapply(prodint(0, 5, 0.01, lambda), FUN = function(L) (c(1/2, 1/2, 0) %*% L)[2]))
par(mfrow = c(1, 2))
par(mar = c(2.5, 2.5, 1.5, 1.5))
plot(v0, v1, type = "l", lty = 2, xlab = "", ylab = "", main = "Hazard")
lines(v0, 4*log(1+1/2*v0))
plot(v0, p, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability")
lines(seq(0, 5, 0.01), P)
```
## 2. Markov model with independent censoring and covariates
We now consider a simple extension with covariates:
\begin{align*}
\frac{\mathrm{d}\Lambda(t|x)}{\mathrm{d}t}=\lambda(t|x)=\frac{1}{1+x\cdot t}
\begin{pmatrix}
-2 & 1& 1 \\
2 & -3 & 1 \\
0 & 0 & 0
\end{pmatrix}\!.
\end{align*}
We simulate $10,000$ independent realizations subject to independent right-censoring. Right-censoring follows the distribution $\text{Unif}(0,5)$, while $X\sim\text{Unif}(0,1)$.
```{r}
jump_rate <- function(i, t, u){
if(i == 1){
2
} else if(i == 2){
3
} else{
0
}
}
mark_dist <- function(i, s, v){
if(i == 1){
c(0, 1/2, 1/2)
} else if(i == 2){
c(2/3, 0, 1/3)
} else{
0
}
}
lambda <- function(t, x){
A <- matrix(c(2/(1+x*t)*mark_dist(1, t, 0), 3/(1+x*t)*mark_dist(2, t, 0), rep(0, 3)),
nrow = 3, ncol = 3, byrow = TRUE)
diag(A) <- -rowSums(A)
A
}
n <- 10000
X <- runif(n)
c <- runif(n, 0, 5)
sim <- list()
for(i in 1:n){
rates <- function(j, y, z){jump_rate(j, y, z)/(1+X[i]*y)}
sim[[i]] <- sim_path(sample(1:2, 1), rates = rates, dists = mark_dist,
tn = c[i], bs = c(2*c[i], 3*c[i], 0))
sim[[i]]$X <- X[i]
}
```
The degree of censoring is
```{r}
sum(c == unlist(lapply(sim, FUN = function(z){tail(z$times, 1)}))) / n
```
We fit the conditional Nelson--Aalen and Aalen--Johansen estimators for $x=0.2, 0.8$.
```{r}
x1 <- 0.2
x2 <- 0.8
fit1 <- aalen_johansen(sim, x = x1)
fit2 <- aalen_johansen(sim, x = x2)
```
For illustrative purposes, we plot $\Lambda_{21}$ and the conditional state occupation probability $p_2$ for both the true model (full) and using the conditional estimators (dashed). This is done for both $x=0.2$ (in red) and $x=0.8$ (in blue).
```{r, fig.align = 'center', fig.height = 3, fig.width = 6}
v11 <- unlist(lapply(fit1$Lambda, FUN = function(L) L[2,1]))
v10 <- fit1$t
v21 <- unlist(lapply(fit2$Lambda, FUN = function(L) L[2,1]))
v20 <- fit2$t
p1 <- unlist(lapply(fit1$p, FUN = function(L) L[2]))
P1 <- unlist(lapply(prodint(0, 5, 0.01, function(t){lambda(t, x = x1)}),
FUN = function(L) (c(1/2, 1/2, 0) %*% L)[2]))
p2 <- unlist(lapply(fit2$p, FUN = function(L) L[2]))
P2 <- unlist(lapply(prodint(0, 5, 0.01, function(t){lambda(t, x = x2)}),
FUN = function(L) (c(1/2, 1/2, 0) %*% L)[2]))
par(mfrow = c(1, 2))
par(mar = c(2.5, 2.5, 1.5, 1.5))
plot(v10, v11, type = "l", lty = 2, xlab = "", ylab = "", main = "Hazard", col = "red")
lines(v10, 2/x1*log(1+x1*v10), col = "red")
lines(v20, v21, lty = 2, col = "blue")
lines(v20, 2/x2*log(1+x2*v20), col = "blue")
plot(v10, p1, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability", col = "red")
lines(seq(0, 5, 0.01), P1, col = "red")
lines(v20, p2, lty = 2, col = "blue")
lines(seq(0, 5, 0.01), P2, col = "blue")
```
## 3. Markov model with dependent censoring and covariates
We consider the same model as before, but now introduce dependent right-censoring. To be precise, we assume that right-censoring occurs at rate $\frac{1}{2}\frac{1}{1+x\cdot t}$ in the first state, while it occurs at twice this rate in the second state. Finally, all individuals still under observation are right-censored at time $t_n = 5$.
We again simulate $10,000$ independent realizations.
```{r}
jump_rate_enlarged <- function(i, t, u){
if(i == 1){
2.5
} else if(i == 2){
4
} else{
0
}
}
mark_dist_enlarged <- function(i, s, v){
if(i == 1){
c(0, 2/5, 2/5, 1/5)
} else if(i == 2){
c(2/4, 0, 1/4, 1/4)
} else{
0
}
}
n <- 10000
X <- runif(n)
tn <- 5
sim <- list()
for(i in 1:n){
rates <- function(j, y, z){jump_rate_enlarged(j, y, z)/(1+X[i]*y)}
sim[[i]] <- sim_path(sample(1:2, 1), rates = rates, dists = mark_dist_enlarged,
tn = tn, abs = c(FALSE, FALSE, TRUE, TRUE),
bs = c(2.5*tn, 4*tn, 0, 0))
sim[[i]]$X <- X[i]
}
```
The degree of censoring is
```{r}
sum(tn == unlist(lapply(sim, FUN = function(z){tail(z$times, 1)}))
| 4 == unlist(lapply(sim, FUN = function(z){tail(z$states, 1)}))) / n
```
We fit the conditional Nelson--Aalen and Aalen--Johansen estimators for $x=0.2, 0.8$.
```{r}
fit1 <- aalen_johansen(sim, x = x1, collapse = TRUE)
fit2 <- aalen_johansen(sim, x = x2, collapse = TRUE)
```
For illustrative purposes, we plot $\Lambda_{21}$ and the conditional state occupation probability $p_2$ for both the true model (full) and using the conditional estimators (dashed). This is done for both $x=0.2$ (in red) and $x=0.8$ (in blue).
```{r, fig.align = 'center', fig.height = 3, fig.width = 6}
v11 <- unlist(lapply(fit1$Lambda, FUN = function(L) L[2,1]))
v10 <- fit1$t
v21 <- unlist(lapply(fit2$Lambda, FUN = function(L) L[2,1]))
v20 <- fit2$t
p1 <- unlist(lapply(fit1$p, FUN = function(L) L[2]))
p2 <- unlist(lapply(fit2$p, FUN = function(L) L[2]))
par(mfrow = c(1, 2))
par(mar = c(2.5, 2.5, 1.5, 1.5))
plot(v10, v11, type = "l", lty = 2, xlab = "", ylab = "", main = "Hazard", col = "red")
lines(v10, 2/x1*log(1+x1*v10), col = "red")
lines(v20, v21, lty = 2, col = "blue")
lines(v20, 2/x2*log(1+x2*v20), col = "blue")
plot(v10, p1, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability", col = "red")
lines(seq(0, 5, 0.01), P1, col = "red")
lines(v20, p2, lty = 2, col = "blue")
lines(seq(0, 5, 0.01), P2, col = "blue")
```
## 4. Semi-Markov model with independent censoring
Last but not least, we consider a time-inhomogeneous semi-Markov model with non-zero transition rates given by
\begin{align*}
\lambda_{12}(t, u)
&=
0.09 + 0.0018t, \\
\lambda_{13}(t, u)
&=
0.01 + 0.0002t, \\
\lambda_{23}(t, u)
&=
0.09 + 1(u < 4)0.20 + 0.001t.
\end{align*}
```{r}
jump_rate <- function(i, t, u){
if(i == 1){
0.1 + 0.002*t
} else if(i == 2){
ifelse(u < 4, 0.29, 0.09) + 0.001*t
} else{
0
}
}
mark_dist <- function(i, s, v){
if(i == 1){
c(0, 0.9, 0.1)
} else if(i == 2){
c(0, 0, 1)
} else{
0
}
}
```
We simulate $5,000$ independent and identically distributed realizations subject to independent right-censoring. Right-censoring follows the distribution $\text{Unif}(10,40)$.
```{r}
n <- 5000
c <- runif(n, 10, 40)
sim <- list()
for(i in 1:n){
sim[[i]] <- sim_path(1, rates = jump_rate, dists = mark_dist, tn = c[i],
bs = c(0.1+0.002*c[i], 0.29+0.001*c[i], 0))
}
```
The degree of censoring is
```{r}
sum(c == unlist(lapply(sim, FUN = function(z){tail(z$times, 1)}))) / n
```
We fit the Aalen--Johansen estimator.
```{r}
fit <- aalen_johansen(sim)
```
For illustrative purposes, we plot the estimate of the state occupation probability $p_2$ (dashed). The true values (full) are obtained via numerical integration, utilizing that this specific model has a hierarchical structure.
```{r, fig.align = 'center', fig.height = 4.5, fig.width = 4.5}
v0 <- fit$t
p <- unlist(lapply(fit$p, FUN = function(L) L[2]))
integrand <- function(t, s){
exp(-0.1*s-0.001*s^2)*(0.09 + 0.0018*s)*exp(-0.20*pmin(t-s, 4)-0.09*(t-s)-0.0005*(t^2-s^2))
}
P <- Vectorize(function(t){
integrate(f = integrand, lower = 0, upper = t, t = t)$value
}, vectorize.args = "t")
plot(v0, p, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability")
lines(seq(0, 40, 0.1), P(seq(0, 40, 0.1)))
```
We now want to estimate the conditional state occupation probability $p_2$, given sojourn in the second state at time $10$ with duration $u=1,5$. For this, we first need to sub-sample the data (landmarking).
```{r}
landmark <- sim[unlist(lapply(sim, FUN = function(z){any(z$times <= 10
& c(z$times[-1], Inf) > 10
& z$states == 2)}))]
landmark <- lapply(landmark, FUN = function(z){list(times = z$times, states = z$states,
X = 10 - z$times[z$times <= 10
& c(z$times[-1], Inf) > 10
& z$states == 2])})
```
The degree of sub-sampling is
```{r}
length(landmark) / n
```
Next, we fit the conditional Aalen--Johansen estimator for $u=1,5$. We also fit the usual landmark Aalen--Johansen estimator.
```{r}
u1 <- 1
u2 <- 5
fit1 <- aalen_johansen(landmark, x = u1)
fit2 <- aalen_johansen(landmark, x = u2)
fit3 <- aalen_johansen(landmark)
```
For illustrative purposes, we plot the conditional state occupation probability $p_2$ using the conditional estimator (dashed), the usual landmark estimator (dotted), and the true model (full). This is done for both $u=1$ (in red) and $u=5$ (in blue).
```{r, fig.align = 'center', fig.height = 4.5, fig.width = 4.5}
v10 <- fit1$t
v20 <- fit2$t
v30 <- fit3$t
p1 <- unlist(lapply(fit1$p, FUN = function(L) L[2]))
p2 <- unlist(lapply(fit2$p, FUN = function(L) L[2]))
p3 <- unlist(lapply(fit3$p, FUN = function(L) L[2]))
P <- function(t, u){
exp(-(t-10)*0.09-(t^2-100)*0.0005-pmax(0, pmin(t, 4-(u-10))-10)*0.20)
}
plot(v10, p1, type = "l", lty = 2, xlab = "", ylab = "", main = "Probability",
col = "red", xlim = c(10, 40))
lines(seq(10, 40, 0.1), P(seq(10, 40, 0.1), u1), col = "red")
lines(v20, p2, lty = 2, col = "blue")
lines(seq(10, 40, 0.1), P(seq(10, 40, 0.1), u2), col = "blue")
lines(v30, p3, lty = 3)
```
| /scratch/gouwar.j/cran-all/cranData/AalenJohansen/inst/doc/AalenJohansen-vignette.Rmd |
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is a free software for only academic useage but not for commercial useage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
# Estimates a null distribution of correlation scores by repeatedly shuffling the data matrix and converts the observed pairwise correlations into (optionally multiple-testing corrected) p-values.
Ac3net.MTC <- function(data, iterations=10, MTC=TRUE, MTCmethod="BH", estmethod='pearson')
{
data<-as.matrix(data)
v<-c() #null distribution vector
for(i in 1:iterations)
{
datashufled<-matrix(sample(data), nrow(data),ncol(data))
datashufled <- datashufled +1 #make sure no zero
varofCountRows <- apply(datashufled, 1, var)
i <- which(varofCountRows==0) # because 0 var gives NA in the mim!
if(sum(i)>0) datashufled<- datashufled[-i,]
mim <- cor(t(datashufled), method = estmethod) #pearson or spearman if unnormalized data
diag(mim) <-0 #no self links allowed
mim <- Ac3net.filtersames(mim)
#
mim<-as.vector(mim)
mim<-mim[mim!=0]
v<-c(v,mim)
} #end for i
v <- abs(v)
v <- sort(v)
vl <- length(v)
  ########
  varofCountRows <- apply(data, 1, var)
  i <- which(varofCountRows==0) # because 0 var gives NA in the mim!
  if(length(i)>0) data <- data[-i,]
  mim <- cor(t(data), method = estmethod) #pearson or spearman if unnormalized data
  diag(mim) <- 0 #no self links allowed
  mim <- Ac3net.filtersames(mim)
  # take the dimensions from the observed correlation matrix, not the shuffled one,
  # so that the p-value matrix below always matches the data actually analysed
  nrow_ <- nrow(mim); ncol_ <- ncol(mim)
  #########
m <- abs( as.vector(mim) )
rm(mim)
p <- rep(1,length(m))
maxv <- max(v)
i <- which(m > maxv)
if(length(i)>0) p[i] <- 0
indx <- which( (m <= maxv) & (m != 0) )
ln <- length(indx)
for(i in 1:ln){
ind <- which( v >= m[ indx[i] ])
p[ indx[i] ] <- length(ind)/vl
}
if(MTC==TRUE) p <- p.adjust(p, method = MTCmethod)
mimp <- matrix(p,nrow=nrow_,ncol=ncol_)
return(mimp) # returns a the corresponding matrix of mim with p-values
}
| /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.MTC.R |
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is a free software for only academic useage but not for commercial useage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
# Main function: takes a data matrix (rows are variables) or an already thresholded adjacency matrix and returns the Ac3net network, i.e. for each variable its most strongly (absolutely) correlated partner.
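# Usage sketch with assumed toy data (rows are variables, columns are samples):
# expdata <- matrix(rnorm(50*20), nrow = 50, dimnames = list(paste0("g", 1:50), NULL))
# net <- Ac3net(expdata) # edge list with columns Source, Target, CORR, RowIndx, ColIndx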
Ac3net <- function(DataOrMim, processed=FALSE, ratio_ = 0.002, PCmincutoff=0.6,PCmaxcutoff=0.96, cutoff=0,
estmethod='pearson', pval=1, iterations=10, MTC=FALSE, MTCmethod="BH" )
{ print("DataOrMim can be either data or an adjacency matrix.
  If you are making a comparison study and already have the adjacency matrix and
  have eliminated the insignificant scores by your cutoff, then input that matrix to the DataOrMim object
  and set processed=TRUE. Otherwise just enter your data along with your arbitrary parameter settings.")
if(processed==FALSE) {
DataOrMim <- DataOrMim +1 #make sure no zero
varofCountRows <- apply(DataOrMim, 1, var)
i <- which(varofCountRows==0) # because 0 var gives NA in the mim!
if(sum(i)>0) DataOrMim<- DataOrMim[-i,]
mim <- cor(t(DataOrMim), method = estmethod) #pearson or spearman if unnormalized data
diag(mim) <-0 #no self links allowed
if(MTC==TRUE) {
if(pval==1) {
if(cutoff==0) cutoff <- Ac3net.cutoff(mim, ratio_ = ratio_, PCmincutoff=PCmincutoff, PCmaxcutoff=PCmaxcutoff)
mim[abs(mim) < cutoff] <- 0
}
if(pval < 1){
mimp <- Ac3net.MTC(data=DataOrMim, iterations=iterations, MTC=MTC, MTCmethod=MTCmethod, estmethod=estmethod)
mim[mimp >= pval] <- 0
}
}else{
if(cutoff!=0) mim[abs(mim) < cutoff] <- 0
else{
cutoff <- Ac3net.cutoff(mim=mim, ratio_ = ratio_, PCmincutoff=PCmincutoff, PCmaxcutoff=PCmaxcutoff)
mim[abs(mim) < cutoff] <- 0
}
}
mim <- Ac3net.filtersames(mim)
mim <- Ac3net.maxmim(mim) #returns Ac3net network
}
if(processed==TRUE){#means it is mim matrix and already eliminated by a cutoff
# mim (DataOrMim object), must be filtered (processed) by a cutoff.
mim <- Ac3net.filtersames(mim=DataOrMim)
mim <- Ac3net.maxmim(mim_=mim) #returns Ac3net network
}
return(mim) #returns Ac3net network
}
| /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.R |
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is a free software for only academic useage but not for commercial useage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
# Takes two networks (edge lists whose first two columns are Source and Target) and returns the links of net1 that also appear in net2, either respecting or ignoring link direction.
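# Usage sketch with assumed toy edge lists:
# net1 <- data.frame(Source = c("g1", "g2"), Target = c("g2", "g3"))
# net2 <- data.frame(Source = c("g2", "g5"), Target = c("g1", "g6"))
# Ac3net.commonlinks(net1, net2, directed = FALSE) # g1--g2 is common when direction is ignored
# Ac3net.commonlinks(net1, net2, directed = TRUE)  # no common directed links in this case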
Ac3net.commonlinks <- function(net1,net2, directed=TRUE){
net1<- as.matrix(net1); net2<- as.matrix(net2) #no levels
if(directed==TRUE){
net1 <- as.data.table(net1); net2 <- as.data.table(net2)
    net1 <- net1[!duplicated(data.table( net1[[1]],net1[[2]] ) )] # eliminates multiples of A-->B
net1 <- net1[!( net1[[1]]==net1[[2]])]
net2 <- net2[!duplicated(data.table( net2[[1]],net2[[2]] ) )]
net2 <- net2[!( net2[[1]]==net2[[2]])]
# all matrices are directionally unique now. Also no self links.
net1<- as.matrix(net1); net2<- as.matrix(net2)
a1<- paste0(as.character(net1[,1]),'***', as.character(net1[,2]))
b1 <- paste0(as.character(net2[,1]),'***', as.character(net2[,2]))
c1 <- intersect(a1,b1)
i1 <- match(c1, a1)
}
if(directed==FALSE){
net1 <- as.data.table(net1); net2 <- as.data.table(net2)
net1 <- net1[!duplicated(data.table(pmin(net1[[1]],net1[[2]]),pmax(net1[[1]],net1[[2]])))]
net2 <- net2[!duplicated(data.table(pmin(net2[[1]],net2[[2]]),pmax(net2[[1]],net2[[2]])))]
net1<- as.matrix(net1); net2<- as.matrix(net2)
a1<- paste0(as.character(net1[,1]),'***', as.character(net1[,2]))
b1 <- paste0(as.character(net2[,1]),'***', as.character(net2[,2]))
c1 <- intersect(a1,b1)
i1 <- match(c1, a1)
b2 <- paste0(as.character(net2[,2]),'***', as.character(net2[,1]))
c2 <- intersect(a1,b2)
i2 <- match(c2, a1)
i1 <- union(i1,i2)
}
commonnet <- net1[i1,]
return(commonnet)
} | /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.commonlinks.R |
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is a free software for only academic useage but not for commercial useage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
# Determines a correlation cutoff from the adjacency matrix such that roughly ratio_ of all possible pairs exceed it, bounded between PCmincutoff and PCmaxcutoff.
Ac3net.cutoff <- function(mim, ratio_ = 0.002, PCmincutoff=0.6, PCmaxcutoff=0.96)
{
diag(mim) <-0
ccc <- as.vector(mim) #might not be symmetric
ccc <- abs(ccc[ccc != 0]) #diagonal removed
x<-sort(ccc, decreasing=T)
a<- nrow(mim)
a<- a*(a-1)/2
rnratio <- round(ratio_*a)
cutoff <- x[rnratio]
if(cutoff < PCmincutoff) cutoff <- PCmincutoff
if(cutoff > PCmaxcutoff) cutoff <- PCmaxcutoff
return(cutoff)
}
| /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.cutoff.R |
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is a free software for only academic useage but not for commercial useage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
#If there are duplicated variable names, they are likely to have the maximum correlation with each other. This function therefore sets those correlations to 0.
Ac3net.filtersames <- function(mim)
{
genes<- rownames(mim)
ngene <- length(genes)
for(i in 1:ngene){
indx <- which(genes==genes[i])
if(length(indx)>1){mim[i, indx] <- 0}
}
return(mim)
}
| /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.filtersames.R |
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is a free software for only academic useage but not for commercial useage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
# Takes two networks (edge lists) and returns the links of net1 that do not appear in net2.
Ac3net.getDifferentLinks <- function(net1, net2, directed=TRUE){
  #if directed==TRUE, selects the links A-->B of net1 that do not appear as A-->B in net2
  #if directed==FALSE, additionally requires that B-->A does not appear in net2 either
net1<- as.matrix(net1); net2<- as.matrix(net2) #no levels
if(directed==TRUE){
net1 <- as.data.table(net1); net2 <- as.data.table(net2)
    net1 <- net1[!duplicated(data.table( net1[[1]],net1[[2]] ) )] # eliminates multiples of A-->B
net1 <- net1[!( net1[[1]]==net1[[2]])]
net2 <- net2[!duplicated(data.table( net2[[1]],net2[[2]] ) )]
net2 <- net2[!( net2[[1]]==net2[[2]])]
# all matrices are directionally unique now. Also no self links.
net1<- as.matrix(net1); net2<- as.matrix(net2)
a1<- paste0(as.character(net1[,1]),'***', as.character(net1[,2]))
b1 <- paste0(as.character(net2[,1]),'***', as.character(net2[,2]))
c1 <- setdiff(a1,b1)
i1 <- match(c1, a1)
}
if(directed==FALSE){
net1 <- as.data.table(net1); net2 <- as.data.table(net2)
net1 <- net1[!duplicated(data.table(pmin(net1[[1]],net1[[2]]),pmax(net1[[1]],net1[[2]])))]
net2 <- net2[!duplicated(data.table(pmin(net2[[1]],net2[[2]]),pmax(net2[[1]],net2[[2]])))]
net1<- as.matrix(net1); net2<- as.matrix(net2)
a1<- paste0(as.character(net1[,1]),'***', as.character(net1[,2]))
b1 <- paste0(as.character(net2[,1]),'***', as.character(net2[,2]))
c1 <- setdiff(a1,b1)
i1 <- match(c1, a1)
b2 <- paste0(as.character(net2[,2]),'***', as.character(net2[,1]))
c2 <- setdiff(a1,b2)
i2 <- match(c2, a1)
i1 <- intersect(i1,i2)
}
differentLinks <- net1[i1,]
return(differentLinks)
}
| /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.getDifferentLinks.R |
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is a free software for only academic useage but not for commercial useage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
#Takes a network (edge list) and returns either its one-directional links or its dual (reciprocated) links.
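# Illustrative usage sketch (not run), assuming `net` is an edge list whose first
# two columns hold the node names of each link:
#   one_way_links <- Ac3net.getDirectedOrDualLinks(net, dual_ = FALSE)
#   dual_links    <- Ac3net.getDirectedOrDualLinks(net, dual_ = TRUE)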
Ac3net.getDirectedOrDualLinks <- function(net1, dual_=FALSE){
#if dual_==FALSE, selects only the one-directional links, i.e. A-->B where B-->A is not present
#if dual_==TRUE, selects only the dual links, i.e. those where both A-->B and B-->A are present
net1<- as.matrix(net1) #no levels
net1 <- as.data.table(net1)
net1 <- net1[!duplicated(data.table( net1[[1]],net1[[2]] ) )] # eliminates duplicate A-->B links
net1 <- net1[!( net1[[1]]==net1[[2]])]
# all matrices are directionally unique now. Also no self links.
net1<- as.matrix(net1)
a1<- paste0(as.character(net1[,1]),'***', as.character(net1[,2]))
a1 <- unique(a1)
b1 <- paste0(as.character(net1[,2]),'***', as.character(net1[,1]))
b1 <- unique(b1)
if(dual_==TRUE){c1 <- intersect(a1,b1); i1 <- match(c1, a1)}
if(dual_==FALSE){c1 <- setdiff(a1,b1); i1 <- match(c1, a1)}
return(net1[i1,])
}
| /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.getDirectedOrDualLinks.R
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is free software for academic usage only but not for commercial usage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
#takes an adjacency matrix and returns the absolute maximum correlated partner of each variable on the rows.
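# Illustrative usage sketch (not run), assuming `mim` is a correlation (or mutual
# information) matrix with the variables on both rows and columns:
#   net <- Ac3net.maxmim(mim, net_ = TRUE, cutoff_ = 0)
#   # columns: Source, Target, CORR, RowIndx, ColIndx (sorted by decreasing |CORR|)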
Ac3net.maxmim <- function(mim_, net_=TRUE, cutoff_=0)
{
numprobs <- nrow(mim_)
maxM <- c()
for(i in 1: numprobs){
#first eliminates the same named pairs if exist. Works in rectangular matrices as well.
#indx <- which(colnames(mim_)==rownames(mim_)[i])
#if(length(indx)>0) mim_[i, indx] <- 0
mim_ <- Ac3net.filtersames(mim=mim_)
#
j <- which.max(abs(mim_[i,])) # compare magnitudes but not signs
tmp <- cbind(colnames(mim_)[j], rownames(mim_)[i], mim_[i,j], i, j) #source is in the second column
maxM <- rbind(maxM,tmp)
}
colnames(maxM) <- c("Source","Target","CORR","RowIndx","ColIndx")
if(net_==TRUE){
maxM <- as.data.table(maxM)
maxM <- maxM[abs(as.numeric(maxM$CORR))>cutoff_]
maxM<- maxM[order(-abs(as.numeric(maxM$CORR)))]
maxM$CORR <- as.numeric(maxM$CORR)
}
return(maxM)
}
| /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.maxmim.R
#Ac3net:
#This R package allows inferring directional conservative causal core network from large scale data.
#The inferred network consists of only direct physical interactions.
## Copyright (C) January 2011 Gokmen Altay <[email protected]>
## This program is free software for academic usage only but not for commercial usage; you can redistribute it and/or
## modify it under the terms of the GNU GENERAL PUBLIC LICENSE
## either version 3 of the License, or any later version.
##
## This program is distributed WITHOUT ANY WARRANTY;
## You can get a copy of the GNU GENERAL PUBLIC LICENSE
## from
## http://www.gnu.org/licenses/gpl.html
## See the licence information for the dependent package from
## igraph package itself.
#Compares a predicted network against a reference network (restricted to the variables present in the data) and returns accuracy, F-score, precision, recall and the confusion-matrix counts.
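# Illustrative usage sketch (not run), assuming `pred_net` and `ref_net` are edge
# lists and `expr_data` is a data matrix with the variables on its rows:
#   perf <- Ac3net.performance(pred_net, ref_net, expr_data, directed = TRUE)
#   # named vector: Accuracy, F-score, precision, TP, FP, FN, TN, recall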
Ac3net.performance <- function(predictNet, referenceNet, data_, directed=TRUE)
{
#filter reference network for the data
genes <- rownames(data_)
referenceNet <- as.matrix(referenceNet)
i1 <- which(referenceNet[,1] %in% genes)
i2 <- which(referenceNet[,2] %in% genes)
i <- intersect(i1,i2)
referenceNet <- referenceNet[i,]
#
x3 <- Ac3net.commonlinks(net1=predictNet,net2=referenceNet, directed=directed)
TP <- nrow(x3)
FP <- nrow(predictNet) - TP
FN <- nrow(referenceNet) - TP
gnames <- unique( rownames(data_) )
if(directed==TRUE) allx <- ( length(gnames)*length(gnames) - length(gnames) )
if(directed==FALSE) allx <- ( length(gnames)*length(gnames) - length(gnames) )/2
TN <- allx - (TP + FP +FN)
precision <- TP/(TP + FP)
recall <- TP/(TP + FN)
Fscore <- 2 * precision * recall/(precision + recall)
Accuracy <- (TP+TN)/(TP+TN+FP+FN) #Accuracy
output <- c(Accuracy, Fscore, precision , TP, FP, FN, TN, recall)
names(output) <- c("Accuracy", "F-score", "precision", "TP", "FP", "FN", "TN", "recall")
return(output)
}
| /scratch/gouwar.j/cran-all/cranData/Ac3net/R/Ac3net.performance.R
#' Get An Academic Colour Palette
#'
#' Return either a specific colour palette or all colour palettes offered by
#' `AcademicThemes`.
#'
#' @param palette A string containing the name of the palette to be returned. If
#' no name is given then all palettes are returned instead.
#' @param n A number indicating how many different colours should be included in
#' the palette. If not specified only the specific colours in the palette
#' will be returned.
#'
#' @return A single vector or a list of vectors containing HEX codes for academic
#' colour palettes.
#' @export
#'
#' @examples
#' # Get the colour palette used by the UKRI
#' academic_colour_palette("ukri_mrc")
academic_colour_palette <- function(palette = NA, n = NA) {
# Generate the colour palettes
# ================================================================
# ========================= CONTRIBUTORS =========================
# == To add a new colour palette, please add the vector of HEX ==
# == codes to the list below in alphabetical order by name. ==
# == Where possible the vector of colours should be in some ==
# == logical order (e.g. the order in which they appear in the ==
# == logo of the institution). Preferably the colours should ==
# == also be in order from darker to lighter. ==
# ================================================================
# ================================================================
palettes <- list(
ahrc = c("#192B65", "#707FB1", "#9BA957", "#F3AB3E"),
bbsrc = c("#293C91", "#C43089", "#E5B440"),
cgem_igc = c("#0E2E5A", "#D22D48"),
cruk = c("#2E0188", "#00B6EA", "#EE0286"),
eastbio = c("#284E96", "#386C4D", "#E07E38", "#BB2D4A"),
epsrc = c("#711D4B", "#459B8D"),
nerc = c("#5A5419", "#B2BB44"),
res_eng = c("#50515F", "#797F5C", "#B1BB50"),
roslin_edi = c("#BA4B91", "#6ABBEE", "#7EB966", "#C9773D"),
tu_dort = c("#87888A", "#7DB831"),
ukri_ahrc = c("#2D2E5F", "#E38D33", "#F1BB44"),
ukri_bbsrc = c("#2D2E5F", "#874598", "#D263E5"),
ukri_epsrc = c("#2D2E5F", "#46958A", "#68CCAD"),
ukri_esrc = c("#2D2E5F", "#BB4264", "#ED6560"),
ukri_iuk = c("#2D2E5F", "#7E2A96", "#AF3DB5"),
ukri_mrc = c("#2D2E5F", "#3A88A9", "#00BAD2"),
ukri_nerc = c("#2D2E5F", "#518346", "#7DBD5C"),
ukri_re = c("#2D2E5F", "#B3473A", "#EE722E"),
ukri_stfc = c("#2D2E5F", "#0C3283", "#2B61EF"),
uni_of_birm = c("#221F20", "#DA3732", "#4799D1"),
uni_of_bristol = c("#000000", "#B03C3D"),
uni_of_camb = c("#000000", "#B03C3D", "#D44435"),
uni_of_dund = c("#CA342A", "#442593", "#0026D6", "#F5DC4B"),
uni_of_edi = c("#0E2E5C", "#D22D48"),
uni_of_lee = c("#923637", "#5A855B"),
uni_of_liv = c("#1E2D77", "#9A7529"),
uni_of_manc = c("#63338B", "#F9D348"),
uni_of_sheff = c("#242353", "#448CCC", "#F3E65E"),
uni_of_soton = c("#131F56", "#822A18", "#D3B83F"),
uni_of_st_andr = c("#205396", "#DA4232", "#F6ED53"),
uni_of_stirl = c("#000000", "#2C673D"),
x_net_bio = c("#3E81A3", "#DB8251")
)
# If no colour palette is selected then return them all
if (any(is.na(palette))) {
if (!is.na(n)) {
warning("\n Argument `n` was ignored as no colour palette was specified")
}
return(palettes)
}
# Check the selected colour palette is valid
if (length(palette) > 1) {
stop("\n \u2716 Given palette name should only contain one entry")
}
# Check the selected colour palette is a string
if (!is.character(palette)) {
stop("\n \u2716 Given palette name should be a string")
}
# Check the selected colour palette is in the list of colour palettes
if (!(palette %in% names(palettes))) {
stop(paste0('\n \u2716 "', palette, '" is not a colour palette in `AcademicThemes`'))
}
# Return the selected colour palette
if (!is.na(n)) {
# Check that n is numeric
if (!is.numeric(n)) {
stop("\n \u2716 `n` should be numeric")
}
# Check that n is an integer
if (round(n) != n) {
stop("\n \u2716 `n` should be an integer")
}
grDevices::colorRampPalette(palettes[[palette]])(n)
} else {
return(palettes[[palette]])
}
}
#' Get The Academic Colour Palette Names
#'
#' @return A vector of the names of the colour palettes available in `AcademicThemes`.
#' @export
#'
#' @examples
#' academic_colour_palette_names()
academic_colour_palette_names <- function() {
# Get the colour palettes
palettes <- academic_colour_palette()
# Return the names of the colour palettes
names(palettes)
}
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/R/colour_palettes.R |
#' Scale Plot Colours With Academic Themes (Continuous)
#'
#' @param palette_name The name of a colour palette in `AcademicThemes`.
#' @param ... Arguments passed to `ggplot2::scale_colour_gradientn()`
#'
#' @return A layer that can be added to a ggplot2 object.
#' @export
#'
#' @examples
#' library(ggplot2)
#' ggplot(
#' data.frame(
#' x = runif(1500),
#' y = runif(1500)
#' ),
#' aes(x = x, y = y, colour = x)
#' ) +
#' geom_point() +
#' scale_colour_academic_c("cruk") +
#' theme_classic() +
#' labs(
#' x = "X-Axis",
#' y = "Y-Axis",
#' colour = "Colour"
#' )
scale_colour_academic_c <- function(palette_name, ...) {
palette <- academic_colour_palette(palette_name)
ggplot2::scale_colour_gradientn(
    colours = palette,
    ...
)
}
#' Scale Plot Colours With Academic Themes (Discrete)
#'
#' @param palette_name The name of a colour palette in `AcademicThemes`.
#' @param ... Arguments passed to `ggplot2::discrete_scale()`.
#'
#' @return A layer that can be added to a ggplot2 object.
#' @export
#'
#' @examples
#' library(ggplot2)
#' ggplot(
#' data.frame(
#' x = runif(1500),
#' y = runif(1500),
#' c = sample(LETTERS[1:3], 1500, replace = TRUE)
#' ),
#' aes(x = x, y = y, colour = c)
#' ) +
#' geom_point() +
#' scale_colour_academic_d("cruk") +
#' theme_classic() +
#' labs(
#' x = "X-Axis",
#' y = "Y-Axis",
#' colour = "Colour"
#' )
scale_colour_academic_d <- function(palette_name, ...) {
palette <- grDevices::colorRampPalette(academic_colour_palette(palette_name))
ggplot2::discrete_scale(
palette = palette,
aesthetics = "colour",
scale_name = paste0("AcademicTheme: ", palette_name),
...
)
}
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/R/ggplot2_colour_layer.R |
#' Scale Plot Fills With Academic Themes (Continuous)
#'
#' @param palette_name The name of a colour palette in `AcademicThemes`.
#' @param ... Arguments passed to `ggplot2::scale_fill_gradientn()`
#'
#' @return A layer that can be added to a ggplot2 object.
#' @export
#'
#' @examples
#' library(ggplot2)
#' ggplot(
#' data.frame(
#' x = rnorm(10000),
#' y = rnorm(10000)
#' ),
#' aes(x = x, y = y)
#' ) +
#' geom_hex() +
#' scale_fill_academic_c("cruk") +
#' theme_classic() +
#' labs(
#' x = "X-Axis",
#' y = "Y-Axis",
#' fill = "Fill"
#' )
scale_fill_academic_c <- function(palette_name, ...) {
palette <- academic_colour_palette(palette_name)
ggplot2::scale_fill_gradientn(
    colours = palette,
    ...
)
}
#' Scale Plot Fills With Academic Themes (Discrete)
#'
#' @param palette_name The name of a colour palette in `AcademicThemes`.
#' @param ... Arguments passed to `ggplot2::discrete_scale()`.
#'
#' @return A layer that can be added to a ggplot2 object.
#' @export
#'
#' @examples
#' library(ggplot2)
#' ggplot(
#' data.frame(
#' x = LETTERS[1:5],
#' y = 5:1
#' ),
#' aes(x = x, y = y, fill = x)
#' ) +
#' geom_col() +
#' scale_fill_academic_d("cruk") +
#' theme_classic() +
#' labs(
#' x = "X-Axis",
#' y = "Y-Axis",
#' fill = "Fill"
#' )
scale_fill_academic_d <- function(palette_name, ...) {
palette <- grDevices::colorRampPalette(academic_colour_palette(palette_name))
ggplot2::discrete_scale(
palette = palette,
aesthetics = "fill",
scale_name = paste0("AcademicTheme: ", palette_name),
...
)
}
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/R/ggplot2_fill_layer.R |
.onAttach <- function(libname, pkgname) {
packageStartupMessage(
"==AcademicThemes================================\n",
"Please be mindful of the effects of your choice\n",
"of colour palette on people who are colour blind\n",
"================================================"
)
}
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/R/zzz.R |
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup--------------------------------------------------------------------
library(AcademicThemes)
palettes <- academic_colour_palette()
head(palettes)
## -----------------------------------------------------------------------------
library(scales)
cruk_palette <- academic_colour_palette("cruk")
cruk_palette
show_col(cruk_palette)
## -----------------------------------------------------------------------------
cruk_palette_9 <- academic_colour_palette("cruk", n = 9)
cruk_palette_9
show_col(cruk_palette_9)
## -----------------------------------------------------------------------------
library(tidyverse)
tibble(
x = LETTERS[1:5],
y = 5:1
) %>%
ggplot() +
aes(x = x, y = y, fill = x) +
geom_col() +
guides(fill = "none") +
labs(
x = "Groups",
y = "Value"
) +
theme_bw()
## -----------------------------------------------------------------------------
tibble(
x = LETTERS[1:5],
y = 5:1
) %>%
ggplot() +
aes(x = x, y = y, fill = x) +
geom_col() +
guides(fill = "none") +
labs(
x = "Groups",
y = "Value"
) +
theme_bw() +
scale_fill_academic_d("cruk")
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/inst/doc/AcademicThemes.R |
---
title: "AcademicThemes"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AcademicThemes}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
## Installation
You can install `AcademicThemes` from CRAN with:
``` r
install.packages("AcademicThemes")
```
You can also install the development version of `AcademicThemes` from [GitHub](https://github.com/) with:
``` r
# install.packages("devtools")
devtools::install_github("hwarden162/AcademicThemes")
```
## Accessing Colour Palettes
`AcademicThemes` is a package for recolouring `ggplot2` plots to use colours from different academic institutions. These palettes are accessed through the `academic_colour_palette()` function. If no arguments are specified then a list of all the colour palettes is returned.
```{r setup}
library(AcademicThemes)
palettes <- academic_colour_palette()
head(palettes)
```
If you want to access just one colour palette you can give the name of the colour palette as an argument. Here is an example for accessing the colours of the Cancer Research UK logo.
```{r}
library(scales)
cruk_palette <- academic_colour_palette("cruk")
cruk_palette
show_col(cruk_palette)
```
To access the names of the palettes you can use the `academic_colour_palette_names()` function or they are all listed with examples of the colours in the Colour Palettes article of this site.
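For instance, calling it with no arguments returns the vector of available palette names:

```{r}
academic_colour_palette_names()
```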
By default this returns just the colours defined for the palette, but if you would like a palette with a different number of colours you can supply `n` as an argument.
```{r}
cruk_palette_9 <- academic_colour_palette("cruk", n = 9)
cruk_palette_9
show_col(cruk_palette_9)
```
## Recolouring `ggplot2` Plots
These palettes can be used to automatically recolour `ggplot2` plots, similarly to packages such as `viridis`. Here is an example of a plot:
```{r}
library(tidyverse)
tibble(
x = LETTERS[1:5],
y = 5:1
) %>%
ggplot() +
aes(x = x, y = y, fill = x) +
geom_col() +
guides(fill = "none") +
labs(
x = "Groups",
y = "Value"
) +
theme_bw()
```
One of the colour palettes can be used to recolour this plot using the `scale_fill_academic_d()` function.
```{r}
tibble(
x = LETTERS[1:5],
y = 5:1
) %>%
ggplot() +
aes(x = x, y = y, fill = x) +
geom_col() +
guides(fill = "none") +
labs(
x = "Groups",
y = "Value"
) +
theme_bw() +
scale_fill_academic_d("cruk")
```
Any function called `scale_colour_*` will change the colour of the plot and any function called `scale_fill_*` will change the fill of the plot. If the variable you are mapping to the aesthetic is continuous you use the function that ends `*_c` and if it is discrete you use the function that ends `*_d`.
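As a brief sketch of the continuous variants, here the colour aesthetic is mapped to a continuous variable and recoloured with `scale_colour_academic_c()` (any palette name from `academic_colour_palette_names()` could be used in place of `"cruk"`):

```{r}
tibble(
  x = runif(500),
  y = runif(500)
) %>%
  ggplot() +
  aes(x = x, y = y, colour = x) +
  geom_point() +
  labs(
    x = "X value",
    y = "Y value",
    colour = "X value"
  ) +
  theme_bw() +
  scale_colour_academic_c("cruk")
```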
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/inst/doc/AcademicThemes.Rmd |
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup, echo = FALSE, message = FALSE-------------------------------------
library(AcademicThemes)
for (palette in academic_colour_palette_names()) {
colour_palette <- academic_colour_palette(palette)
image(1:length(colour_palette), 1, matrix(1:length(colour_palette)),
main = paste0("Colour Palette: ", palette), xlab = "", ylab = "",
col = colour_palette, xaxt = "n", yaxt = "n", bty = "n")
}
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/inst/doc/Colour_Palettes.R |
---
title: "Colour Palettes"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Colour Palettes}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Here are the colour palettes supplied by `AcademicThemes`. The palettes listed here include ones that might not yet be on CRAN. To gain access to all palettes, either wait for CRAN to be updated or follow the instructions at the top of the Get Started page to install the development version.
```{r setup, echo = FALSE, message = FALSE}
library(AcademicThemes)
for (palette in academic_colour_palette_names()) {
colour_palette <- academic_colour_palette(palette)
image(1:length(colour_palette), 1, matrix(1:length(colour_palette)),
main = paste0("Colour Palette: ", palette), xlab = "", ylab = "",
col = colour_palette, xaxt = "n", yaxt = "n", bty = "n")
}
```
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/inst/doc/Colour_Palettes.Rmd |
---
title: "AcademicThemes"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AcademicThemes}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
## Installation
You can install `AcademicThemes` from CRAN with:
``` r
install.packages("AcademicThemes")
```
You can also install the development version of `AcademicThemes` from [GitHub](https://github.com/) with:
``` r
# install.packages("devtools")
devtools::install_github("hwarden162/AcademicThemes")
```
## Accessing Colour Palettes
`AcademicThemes` is a package for recolouring `ggplot2` plots to use colours from different academic institutions. These palettes are accessed through the `academic_colour_palette()` function. If no arguments are specified then a list of all the colour palettes is returned.
```{r setup}
library(AcademicThemes)
palettes <- academic_colour_palette()
head(palettes)
```
If you want to access just one colour palette you can give the name of the colour palette as an argument. Here is an example for accessing the colours of the Cancer Research UK logo.
```{r}
library(scales)
cruk_palette <- academic_colour_palette("cruk")
cruk_palette
show_col(cruk_palette)
```
To access the names of the palettes you can use the `academic_colour_palette_names()` function or they are all listed with examples of the colours in the Colour Palettes article of this site.
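For instance, calling it with no arguments returns the vector of available palette names:

```{r}
academic_colour_palette_names()
```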
By default this returns just the colours defined for the palette, but if you would like a palette with a different number of colours you can supply `n` as an argument.
```{r}
cruk_palette_9 <- academic_colour_palette("cruk", n = 9)
cruk_palette_9
show_col(cruk_palette_9)
```
## Recolouring `ggplot2` Plots
These palettes can be used to automatically recolour `ggplot2` plots, similarly to packages such as `viridis`. Here is an example of a plot:
```{r}
library(tidyverse)
tibble(
x = LETTERS[1:5],
y = 5:1
) %>%
ggplot() +
aes(x = x, y = y, fill = x) +
geom_col() +
guides(fill = "none") +
labs(
x = "Groups",
y = "Value"
) +
theme_bw()
```
One of the colour palettes can be used to recolour this plot using the `scale_fill_academic_d()` function.
```{r}
tibble(
x = LETTERS[1:5],
y = 5:1
) %>%
ggplot() +
aes(x = x, y = y, fill = x) +
geom_col() +
guides(fill = "none") +
labs(
x = "Groups",
y = "Value"
) +
theme_bw() +
scale_fill_academic_d("cruk")
```
Any function called `scale_colour_*` will change the colour of the plot and any function called `scale_fill_*` will change the fill of the plot. If the variable you are mapping to the aesthetic is continuous you use the function that ends `*_c` and if it is discrete you use the function that ends `*_d`.
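As a brief sketch of the continuous variants, here the colour aesthetic is mapped to a continuous variable and recoloured with `scale_colour_academic_c()` (any palette name from `academic_colour_palette_names()` could be used in place of `"cruk"`):

```{r}
tibble(
  x = runif(500),
  y = runif(500)
) %>%
  ggplot() +
  aes(x = x, y = y, colour = x) +
  geom_point() +
  labs(
    x = "X value",
    y = "Y value",
    colour = "X value"
  ) +
  theme_bw() +
  scale_colour_academic_c("cruk")
```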
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/vignettes/AcademicThemes.Rmd |
---
title: "Colour Palettes"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Colour Palettes}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Here are the colour palettes supplied by `AcademicThemes`. The palettes listed here include ones that might not yet be on CRAN. To gain access to all palettes, either wait for CRAN to be updated or follow the instructions at the top of the Get Started page to install the development version.
```{r setup, echo = FALSE, message = FALSE}
library(AcademicThemes)
for (palette in academic_colour_palette_names()) {
colour_palette <- academic_colour_palette(palette)
image(1:length(colour_palette), 1, matrix(1:length(colour_palette)),
main = paste0("Colour Palette: ", palette), xlab = "", ylab = "",
col = colour_palette, xaxt = "n", yaxt = "n", bty = "n")
}
```
| /scratch/gouwar.j/cran-all/cranData/AcademicThemes/vignettes/Colour_Palettes.Rmd |
#' Antigenicity Accelerated Stability Data
#'
#' An example dataset containing antigenicity concentration data at different
#' temperatures over a period of up to 147 days.
#'
#' @docType data
#'
#' @usage data(antigenicity)
#'
#' @format An object of class \code{"data.frame"} with 50 rows and 5 variables
#' \describe{
#'   \item{time}{Time point in years at which the datapoints were gathered.}
#' \item{conc}{The concentration at a time.}
#' \item{K}{The temperature in Kelvin.}
#' \item{Celsius}{The temperature in celsius.}
#' \item{days}{Number of days for which the datapoints are gathered.}
#'   \item{days}{Time point in days at which the datapoints were gathered.}
#' }
#'
#' @keywords dataset
#'
"antigenicity"
| /scratch/gouwar.j/cran-all/cranData/AccelStab/R/antigenicity.R |
#' @title Temperature Excursion
#'
#' @description Predict a temperature excursion for a product.
#'
#' @details Use the output from step1_down to run a temperature excursion prediction.
#'
#' @param step1_down_object The fit object from the step1_down function (required).
#' @param temp_changes A vector giving, in order, the temperatures (in Celsius) that
#'   the product is subjected to. Must be the same length as time_changes.
#' @param time_changes A vector of the cumulative times at which the temperature changes,
#'   starting from time zero. Must be the same length as temp_changes.
#' @param CI Show confidence intervals.
#' @param PI Show prediction intervals.
#' @param draw Number of simulations used to estimate confidence intervals.
#' @param confidence_interval Confidence level for the confidence and prediction intervals
#' around the predictions (default 0.95).
#' @param intercept Use a forced y-intercept. If null, the fitted value will be used.
#' @param ribbon Add shade to confidence and prediction intervals (optional).
#' @param xname Label for the x-axis (optional).
#' @param yname Label for the y-axis (optional).
#' @param plot_simulations If TRUE, randomly selects 100 of the simulations to
#' display on the plot.
#'
#' @return An SB class object, a list including the following elements:
#' \itemize{
#'   \item *predictions* - A data frame containing the predictions with the confidence and prediction intervals.
#'   \item *simulations* - Matrix of the simulations.
#'   \item *excursion_plot* - A plot with predictions and statistical intervals.
#'   \item *user_parameters* - List of the user's input parameters, which is utilised by other
#'    functions in the package.
#' }
#'
#' @examples
#' #load antigenicity
#' data(antigenicity)
#'
#' #run step1.down fit
#' fit1 <- step1_down(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", max_time_pred = 3)
#'
#' #run excursion function with fixed intercept.
#' excursion <- excursion(step1_down_object = fit1,
#' temp_changes = c(5,15,10),
#' time_changes = c(0.5,1.5,3),
#' CI = TRUE, PI = TRUE, draw = 4000,
#' confidence_interval = 0.95,
#' intercept = 80,
#' xname = "Time in years", yname = "Concentration",
#' ribbon = TRUE, plot_simulations = TRUE)
#'
#' excursion$excursion_plot
#'
#' @import ggplot2
#' @import dplyr
#'
#' @export excursion
excursion <- function(step1_down_object, temp_changes, time_changes, CI = TRUE,
PI = TRUE, draw = 10000, confidence_interval = 0.95,
intercept = NULL, ribbon = TRUE, xname = NULL, yname = NULL,
plot_simulations = FALSE){
if (length(temp_changes) != length(time_changes))
stop("temp_changes and time_changes must be the same length.")
fit_object <- step1_down_object$fit
dat <- step1_down_object$data
Kref = mean(dat$K)
time_lengths <- c(time_changes[1],diff(time_changes)) # Useful later
k1 = fit_object$par$k1
k2 = fit_object$par$k2
k3 = fit_object$par$k3
c0 = ifelse(is.null(intercept),fit_object$par$c0, intercept) # Making intercept option
coeffs_fit <- coef(fit_object)
preds <- data.frame( # Making empty prediction frame
phase_time = numeric(),
temps = numeric(),
conc = numeric(),
phase = numeric(),
total_time = numeric())
for (i in 1:length(temp_changes)){ # Making the prediction frame with 0.01yr intervals
preds <- rbind(preds, data.frame(
phase_time = seq(0,time_lengths[i],length.out = 101),
temps = rep(temp_changes[i],101),
conc = rep(NA,101),
degrad = rep(NA,101),
phase = rep(i,101),
total_time = seq(ifelse(i==1,0,time_changes[i-1]),time_changes[i],length.out = 101)))
}
# Now it splits for each one of the four options
if(step1_down_object$user_parameters$reparameterisation == T &&
step1_down_object$user_parameters$zero_order == T){
for (i in 1:nrow(preds)){ # predictions
if (preds$phase[i] == 1){ # First phase no initial degradation
preds$degrad[i] <- preds$phase_time[i] * exp(k1 - k2 / (preds$temps[i] + 273.15)+ k2/Kref)
preds$conc[i] <- c0 - c0 * preds$degrad[i]
}else{
# This finds the degradation from the end of the previous phase
degrad_tracker <- preds %>% filter(phase == (preds$phase[i] - 1)) %>% select(degrad) %>% max()
preds$degrad[i] <- (degrad_tracker) + preds$phase_time[i] * exp(k1 - k2 / (preds$temps[i] + 273.15) + k2/Kref)
preds$conc[i] <- c0 - c0 * preds$degrad[i]
}
}
# Boot counter
boot_count = 1
if (CI | PI | plot_simulations){ # adding confidence and prediction intervals
# making the covariance matrix
SIG = vcov(fit_object)
sigma = summary(fit_object)$sigma
# making pred_fct
pred_fct <- function(parms){
k1 = parms[1]
k2 = parms[2]
c0 = ifelse(is.null(intercept),parms[3], intercept)
conc_boot <- rep(NA,101 * length(time_changes))
degrad_boot <- rep(NA,101 * length(time_changes))
for (i in 1:length(time_changes)){
if (i == 1){ # First phase no initial degradation
degrad_boot[1:101] <- preds$phase_time[1:101] * exp(k1 - k2 / (preds$temps[1:101] + 273.15)+ k2/Kref)
conc_boot[1:101] <- c0 - c0 * degrad_boot[1:101]
}else{
# This finds the degradation from the end of the previous phase
degrad_tracker <- degrad_boot[(i-1)*101]
degrad_boot[((i-1)*101 +1):(i*101)] <- (degrad_tracker) + preds$phase_time[((i-1)*101 +1):(i*101)] * exp(k1 - k2 / (preds$temps[((i-1)*101 +1):(i*101)] + 273.15)+ k2/Kref)
conc_boot[((i-1)*101 +1):(i*101)] <- c0 - c0 * degrad_boot[((i-1)*101 +1):(i*101)]
}
if (i == length(time_changes)){
if (boot_count %% 1000 == 0){
print(paste0("Sample draw progress: ",(boot_count*100)/draw,"%"))
}
boot_count <<- boot_count+1
}
}
return(conc_boot)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 3) + matrix(nrow = draw, ncol = 3, byrow = TRUE, coeffs_fit)
res.boot = matrix(nrow = draw, ncol = nrow(preds), byrow = TRUE, apply(rand.coef, 1, pred_fct))
CI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
CI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
simulations <- res.boot
res.boot = res.boot + rnorm(draw*length(preds$total_time), 0, sigma)
PI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
PI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
}
}else if(step1_down_object$user_parameters$reparameterisation == F &&
step1_down_object$user_parameters$zero_order == T){
for (i in 1:nrow(preds)){ # predictions
if (preds$phase[i] == 1){ # First phase no initial degradation
preds$degrad[i] <- preds$phase_time[i] * exp(k1 - k2 / (preds$temps[i] + 273.15))
preds$conc[i] <- c0 - c0 * preds$degrad[i]
}else{
# This finds the degradation from the end of the previous phase
degrad_tracker <- preds %>% filter(phase == (preds$phase[i] - 1)) %>% select(degrad) %>% max()
preds$degrad[i] <- (degrad_tracker) + preds$phase_time[i] * exp(k1 - k2 / (preds$temps[i] + 273.15))
preds$conc[i] <- c0 - c0 * preds$degrad[i]
}
}
# Boot counter
boot_count = 1
if (CI | PI | plot_simulations){ # adding confidence and prediction intervals
# making the covariance matrix
SIG = vcov(fit_object)
sigma = summary(fit_object)$sigma
# making pred_fct
pred_fct <- function(parms){
k1 = parms[1]
k2 = parms[2]
c0 = ifelse(is.null(intercept),parms[3], intercept)
conc_boot <- rep(NA,101 * length(time_changes))
degrad_boot <- rep(NA,101 * length(time_changes))
for (i in 1:length(time_changes)){
if (i == 1){ # First phase no initial degradation
degrad_boot[1:101] <- preds$phase_time[1:101] * exp(k1 - k2 / (preds$temps[1:101] + 273.15))
conc_boot[1:101] <- c0 - c0 * degrad_boot[1:101]
}else{
# This finds the degradation from the end of the previous phase
degrad_tracker <- degrad_boot[(i-1)*101]
degrad_boot[((i-1)*101 +1):(i*101)] <- (degrad_tracker) + preds$phase_time[((i-1)*101 +1):(i*101)] * exp(k1 - k2 / (preds$temps[((i-1)*101 +1):(i*101)] + 273.15))
conc_boot[((i-1)*101 +1):(i*101)] <- c0 - c0 * degrad_boot[((i-1)*101 +1):(i*101)]
}
if (i == length(time_changes)){
if (boot_count %% 1000 == 0){
print(paste0("Sample draw progress: ",(boot_count*100)/draw,"%"))
}
boot_count <<- boot_count+1
}
}
return(conc_boot)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 3) + matrix(nrow = draw, ncol = 3, byrow = TRUE, coeffs_fit)
res.boot = matrix(nrow = draw, ncol = nrow(preds), byrow = TRUE, apply(rand.coef, 1, pred_fct))
CI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
CI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
simulations <- res.boot
res.boot = res.boot + rnorm(draw*length(preds$total_time), 0, sigma)
PI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
PI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
}
}else if(step1_down_object$user_parameters$reparameterisation == T &&
step1_down_object$user_parameters$zero_order == F){
for (i in 1:nrow(preds)){ # predictions
if (preds$phase[i] == 1){ # First phase no initial degradation
preds$degrad[i] <- (1 - ((1 - k3) * (1/(1 - k3) - preds$phase_time[i] * exp(k1 - k2 / (preds$temps[i] + 273.15)+ k2/Kref)))^(1/(1-k3)))
preds$conc[i] <- c0 - c0 * preds$degrad[i]
}else{
# This finds the degradation from the end of the previous phase
degrad_tracker <- preds %>% filter(phase == (preds$phase[i] - 1)) %>% select(degrad) %>% max()
preds$degrad[i] <- (1 - ((1 - k3) * (((1- degrad_tracker) ^(1 - k3))/(1 - k3) - preds$phase_time[i] * exp(k1 - k2 / (preds$temps[i] + 273.15) + k2/Kref)))^(1/(1-k3)))
preds$conc[i] <- c0 - c0 * preds$degrad[i]
}
}
# Boot counter
boot_count = 1
if (CI | PI | plot_simulations){ # adding confidence and prediction intervals
# making the covariance matrix
SIG = vcov(fit_object)
sigma = summary(fit_object)$sigma
# making pred_fct
pred_fct <- function(parms){
k1 = parms[1]
k2 = parms[2]
k3 = parms[3]
c0 = ifelse(is.null(intercept),parms[4], intercept)
conc_boot <- rep(NA,101 * length(time_changes))
degrad_boot <- rep(NA,101 * length(time_changes))
for (i in 1:length(time_changes)){
if (i == 1){ # First phase no initial degradation
degrad_boot[1:101] <- (1 - ((1 - k3) * (1/(1 - k3) - preds$phase_time[1:101] * exp(k1 - k2 / (preds$temps[1:101] + 273.15)+ k2/Kref)))^(1/(1-k3)))
conc_boot[1:101] <- c0 - c0 * degrad_boot[1:101]
}else{
# This finds the degradation from the end of the previous phase
degrad_tracker <- degrad_boot[(i-1)*101]
degrad_boot[((i-1)*101 +1):(i*101)] <- (1 - ((1 - k3) * (((1- degrad_tracker) ^(1 - k3))/(1 - k3) - preds$phase_time[((i-1)*101 +1):(i*101)] * exp(k1 - k2 / (preds$temps[((i-1)*101 +1):(i*101)] + 273.15)+ k2/Kref)))^(1/(1-k3)))
conc_boot[((i-1)*101 +1):(i*101)] <- c0 - c0 * degrad_boot[((i-1)*101 +1):(i*101)]
}
if (i == length(time_changes)){
if (boot_count %% 1000 == 0){
print(paste0("Sample draw progress: ",(boot_count*100)/draw,"%"))
}
boot_count <<- boot_count+1
}
}
return(conc_boot)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 4) + matrix(nrow = draw, ncol = 4, byrow = TRUE, coeffs_fit)
res.boot = matrix(nrow = draw, ncol = nrow(preds), byrow = TRUE, apply(rand.coef, 1, pred_fct))
CI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
CI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
simulations <- res.boot
res.boot = res.boot + rnorm(draw*length(preds$total_time), 0, sigma)
PI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
PI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
}
}else if(step1_down_object$user_parameters$reparameterisation == F &&
step1_down_object$user_parameters$zero_order == F){
for (i in 1:nrow(preds)){ # predictions
if (preds$phase[i] == 1){ # First phase no initial degradation
preds$degrad[i] <- (1 - ((1 - k3) * (1/(1 - k3) - preds$phase_time[i] * exp(k1 - k2 / (preds$temps[i] + 273.15))))^(1/(1-k3)))
preds$conc[i] <- c0 - c0 * preds$degrad[i]
}else{
# This finds the degradation from the end of the previous phase
#browser()
degrad_tracker <- preds %>% filter(phase == (preds$phase[i] - 1)) %>% select(degrad) %>% max()
preds$degrad[i] <- (1 - ((1 - k3) * (((1- degrad_tracker) ^(1 - k3))/(1 - k3) - preds$phase_time[i] * exp(k1 - k2 / (preds$temps[i] + 273.15))))^(1/(1-k3)))
preds$conc[i] <- c0 - c0 * preds$degrad[i]
}
}
# Boot counter
boot_count = 1
if (CI | PI | plot_simulations){ # adding confidence and prediction intervals
# making the covariance matrix
SIG = vcov(fit_object)
sigma = summary(fit_object)$sigma
# making pred_fct
pred_fct <- function(parms){
k1 = parms[1]
k2 = parms[2]
k3 = parms[3]
c0 = ifelse(is.null(intercept),parms[4], intercept)
conc_boot <- rep(NA,101 * length(time_changes))
degrad_boot <- rep(NA,101 * length(time_changes))
for (i in 1:length(time_changes)){
if (i == 1){ # First phase no initial degradation
degrad_boot[1:101] <- (1 - ((1 - k3) * (1/(1 - k3) - preds$phase_time[1:101] * exp(k1 - k2 / (preds$temps[1:101] + 273.15))))^(1/(1-k3)))
conc_boot[1:101] <- c0 - c0 * degrad_boot[1:101]
}else{
# This finds the degradation from the end of the previous phase
degrad_tracker <- degrad_boot[(i-1)*101]
degrad_boot[((i-1)*101 +1):(i*101)] <- (1 - ((1 - k3) * (((1- degrad_tracker) ^(1 - k3))/(1 - k3) - preds$phase_time[((i-1)*101 +1):(i*101)] * exp(k1 - k2 / (preds$temps[((i-1)*101 +1):(i*101)] + 273.15))))^(1/(1-k3)))
conc_boot[((i-1)*101 +1):(i*101)] <- c0 - c0 * degrad_boot[((i-1)*101 +1):(i*101)]
}
if (i == length(time_changes)){
if (boot_count %% 1000 == 0){
print(paste0("Sample draw progress: ",(boot_count*100)/draw,"%"))
}
boot_count <<- boot_count+1
}
}
return(conc_boot)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 4) + matrix(nrow = draw, ncol = 4, byrow = TRUE, coeffs_fit)
res.boot = matrix(nrow = draw, ncol = nrow(preds), byrow = TRUE, apply(rand.coef, 1, pred_fct))
CI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
CI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
simulations <- res.boot
res.boot = res.boot + rnorm(draw*length(preds$total_time), 0, sigma)
PI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
PI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
}}
if(PI | CI | plot_simulations){
preds <- cbind(preds,CI1b) %>% cbind(CI2b) %>% cbind(PI1b) %>% cbind(PI2b)
selected_indices <- sample(1:draw, 100)
selected_rows <- simulations[selected_indices, ]
simu_df <- data.frame(
total_time = rep(preds$total_time, 100),
conc = c(t(selected_rows)),
simulation_no = rep(selected_indices, each = 101*length(temp_changes))
)}
mytheme <- ggplot2::theme(legend.position = "bottom", strip.background = element_rect(fill = "white"),
legend.key = element_rect(fill = "white"), legend.key.width = unit(2,"cm"),
axis.text = element_text(size = 13), axis.title = element_text(size = 13),
strip.text = element_text(size = 13),
legend.text = element_text(size = 13),
legend.title = element_text(size = 13))
confidence_i <- paste0(confidence_interval * 100," % CI")
prediction_i <- paste0(confidence_interval * 100," % PI")
lines_t <- c("solid","dotted","longdash")
names(lines_t) <- c("Prediction",confidence_i,prediction_i)
if (is.null(xname))
xname = "Time"
if (is.null(yname))
yname = "Response Variable"
plot1 <- preds %>% mutate(phase = as.factor(phase),temps = as.factor(temps)) %>%
ggplot() +
{if(plot_simulations)geom_line(data = simu_df,mapping=aes(x= total_time, y = conc, group = simulation_no),alpha = 0.17,color = "grey")} +
labs( x = xname, y = yname) +
geom_line(mapping = aes(x = total_time, y = conc, colour = temps, group = phase , linetype = "Prediction")) +
mytheme +
{if(CI)geom_line(mapping = aes(x = total_time, y = CI1b, colour = temps,group = phase , linetype = confidence_i))} +
{if(CI)geom_line(mapping = aes(x = total_time, y = CI2b, colour = temps,group = phase , linetype = confidence_i))} +
{if(PI)geom_line(mapping = aes(x = total_time, y = PI1b, colour = temps,group = phase , linetype = prediction_i))} +
{if(PI)geom_line(mapping = aes(x = total_time, y = PI2b, colour = temps,group = phase , linetype = prediction_i))} +
{if(ribbon && PI)geom_ribbon(aes(x = total_time, ymin=PI1b, ymax=PI2b,group = phase , fill = temps), alpha=0.08, show.legend = FALSE)} +
{if(ribbon && CI)geom_ribbon(aes(x = total_time, ymin=CI1b, ymax=CI2b,group = phase , fill= temps), alpha=0.13, show.legend = FALSE)} +
scale_linetype_manual(name = NULL,values = lines_t)+
scale_color_discrete(name = "Celsius") +
theme(legend.box = "vertical", legend.spacing = unit(-0.4,"line"))
if(PI ==F && CI==F && plot_simulations==F){
simulations = NULL}
results = list(preds,simulations,plot1,step1_down_object$user_parameters)
names(results) = c("predictions","simulations","excursion_plot","user_parameters")
class(results) = "SB"
return(results)
}
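# Note on the excursion calculation (illustrative sketch, not run): within each
# temperature phase the kinetic model is evaluated over the time spent in that
# phase, and the degradation reached at the end of a phase is carried forward as
# the starting degradation of the next phase. For a zero-order fit without
# reparameterisation this amounts to summing the phase-wise contributions, e.g.
# for two phases of lengths t1 and t2 at temperatures T1 and T2 (Celsius):
#   degrad <- t1 * exp(k1 - k2 / (T1 + 273.15)) + t2 * exp(k1 - k2 / (T2 + 273.15))
#   conc   <- c0 - c0 * degrad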
globalVariables(c('phase','degrad','temps','total_time','conc','simulation_no'))
| /scratch/gouwar.j/cran-all/cranData/AccelStab/R/excursion.R |
#' Potency Accelerated Stability Data
#'
#' An example dataset containing potency data at different
#' temperatures.
#'
#' @docType data
#'
#' @usage data(potency)
#'
#' @format An object of class \code{"data.frame"} with 78 rows and 3 variables
#' \describe{
#' \item{Time}{Time for which the datapoints are gathered.}
#' \item{Potency}{Measured potency at a time.}
#' \item{Celsius}{The temperature in celsius.}
#'
#' }
#'
#' @keywords dataset
#'
"potency"
| /scratch/gouwar.j/cran-all/cranData/AccelStab/R/potency.R |
#' @title Step1 Down Model
#'
#' @description Fit the one-step Šesták–Berggren kinetic model.
#'
#' @details Fit the one-step Šesták–Berggren kinetic (non-linear) model using
#' accelerated stability data.
#'
#' @param data Dataframe containing accelerated stability data (required).
#' @param y Name of decreasing variable (e.g. concentration) contained within data
#' (required).
#' @param .time Time variable contained within data (required).
#' @param K Kelvin variable (numeric or column name) (optional).
#' @param C Celsius variable (numeric or column name) (optional).
#' @param validation Validation dummy variable (column name) (optional).
#' @param draw Number of simulations used to estimate confidence intervals.
#' @param parms Starting values for the parameters as a list - k1, k2, k3, and c0.
#' @param temp_pred_C Integer or numeric value to predict the response for a
#' given temperature (in Celsius).
#' @param max_time_pred Maximum time to predict the response variable.
#' @param confidence_interval Confidence level for the confidence and prediction intervals
#' around the predictions (default 0.95).
#' @param by Number of points (on the time scale) to smooth the statistical
#' intervals around the predictions.
#' @param reparameterisation Use alternative parameterisation of the one-step
#' model which aims to reduce correlation between k1 and k2.
#' @param zero_order Set kinetic order, k3, to zero (straight lines).
#'
#' @return An SB class object, a list including the following elements:
#' \itemize{
#' \item *fit* - The non-linear fit.
#' \item *data* - The data set.
#' \item *prediction* - A data frame containing the predictions with the confidence and prediction intervals.
#'   \item *user_parameters* - List of the user's input parameters, which is utilised by other
#' functions in the package.
#' }
#'
#' @examples #load antigenicity and potency data.
#' data(antigenicity)
#' data(potency)
#'
#' #Basic use of the step1_down function with the C column defined.
#' fit1 <- step1_down(data = antigenicity, y = "conc", .time = "time", C = "Celsius", draw = 5000)
#'
#' #Basic use of the step1_down function with the K column defined.
#' fit2 <- step1_down(data = antigenicity, y = "conc", .time = "time", K = "K", draw = 5000)
#'
#' #For the potency dataset, fitting with zero_order = FALSE suggests zero_order = TRUE, so it is used here.
#' fit3 <- step1_down(data = potency, y = "Potency", .time = "Time",C = "Celsius",
#' reparameterisation = FALSE, zero_order = TRUE, draw = 5000)
#'
#' #reparameterisation is TRUE.
#' fit4 <- step1_down(data = antigenicity, y = "conc", .time = "time",C = "Celsius",
#' reparameterisation = TRUE, draw = 5000)
#'
#' @importFrom stats vcov coef runif confint rnorm quantile qt complete.cases
#' @importFrom minpack.lm nls.lm
#' @importFrom mvtnorm rmvt
#'
#' @export step1_down
step1_down <- function (data, y, .time, K = NULL, C = NULL, validation = NULL,
draw = 10000, parms = NULL, temp_pred_C = NULL,
max_time_pred = NULL, confidence_interval = 0.95, by = 101,
reparameterisation = FALSE, zero_order = FALSE){
if (is.null(K) & is.null(C))
stop("Select the temperature variable in Kelvin or Celsius")
if (!is.null(parms) & !is.list(parms))
stop("The starting values for parameters must be a list, or keep as NULL")
user_parameters <- list(
data = data, y = y, .time = .time, K = K, C = C, validation = validation,draw = draw,
parms = parms, temp_pred_C = temp_pred_C, max_time_pred = max_time_pred,
confidence_interval = confidence_interval, by = by,
reparameterisation = reparameterisation, zero_order = zero_order
)
if(!is.null(C) & !is.null(K)) {
    data[, C] <- ifelse(is.na(data[, C]) & !is.na(data[, K]),
                        data[, K] - 273.15,
                        data[, C])
    data[, K] <- ifelse(is.na(data[, K]) & !is.na(data[, C]),
                        data[, C] + 273.15,
                        data[, K])
}
data <- data[complete.cases(data[, c(C,K,y,.time)]), ]
dat = data
if (!is.null(validation))
if (!all(dat[,validation] %in% c(0,1)))
stop("Validation column must contain 1s and 0s only")
if (is.null(K))
dat$K = dat[, C] + 273.15
if (is.null(C)) {
dat$C = dat[, K] - 273.15
C = "C"}
Kref = mean(dat$K)
dat$Celsius = as.factor(dat[, C])
dat$time = dat[, .time]
dat$y = dat[, y]
if(!is.null(validation)){
dat$validation = ifelse(dat[,validation] == 0, "Fit", "Validation")
if(validation != "validation"){
dat <- dat[, !names(dat) %in% c(validation)]
}
}
if(.time != "time"){
dat <- dat[, !names(dat) %in% c(.time)]
}
if(y != "y"){
dat <- dat[, !names(dat) %in% c(y)]
}
Temps = sort(unique(dat$K))
if (!is.null(temp_pred_C))
Temps = unique(sort(c(Temps, temp_pred_C + 273.15)))
if (is.null(max_time_pred))
max_time_pred = max(dat$time, na.rm = TRUE)
times.pred = seq(0, max_time_pred, length.out = by)
dat_full <- dat
if(!is.null(validation)){
dat <- dat[dat$validation == "Fit",]
}
if(is.null(parms)){
sorted_data <- dat[order(dat$time), ]
min_time <- min(sorted_data$time)
if (sum(sorted_data$time == min_time) > 3) {
selected_rows <- sorted_data$time == min_time
} else {
selected_rows <- seq_len(min(3, nrow(sorted_data)))
}
c0_initial <- mean(sorted_data$y[selected_rows])
}
if(reparameterisation & zero_order){ # reparameterisation and k3 is 0
MyFctNL = function(parms) { # Make function
k1 = parms$k1
k2 = parms$k2
c0 = parms$c0
Model = c0 - c0 * dat$time * exp(k1 - k2/dat$K +
k2/Kref)
residual = dat$y - Model
return(residual)
}
# Fit model :
if (!is.null(parms)) {
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
else {
repeat {
suppressWarnings(rm(fit))
parms = list(k1 = stats::runif(1, 0, 40), k2 = stats::runif(1,
1000, 20000), c0 = c0_initial)
fit = suppressWarnings(minpack.lm::nls.lm(par = parms,
fn = MyFctNL, lower = rep(0, length(parms))))
fit <- tryCatch({
suppressWarnings(minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0, length(parms))))
},
error = function(e){"error"},
warning = function(w){"warning"})
vcov_test <- tryCatch({
stats::vcov(fit)
},
error = function(e){"error"},
warning = function(w){"warning"})
if(all(!(fit %in% c("error","warning"))) && all(!(vcov_test %in% c("error","warning", NaN)))){
break
}
}
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
# Calculate the predictions
k1 = stats::coef(fit)[1]
k2 = stats::coef(fit)[2]
c0 = stats::coef(fit)[3]
SIG = stats::vcov(fit)
sigma = summary(fit)$sigma
DF = summary(fit)$df[2]
pred = expand.grid(time = times.pred, K = Temps)
pred$Degradation = pred$time * exp(k1 - k2/pred$K + k2/Kref)
pred$Response = c0 - c0 * pred$Degradation
if(is.null(draw)){
pred$derivk1 = -c0 * pred$Degradation
pred$derivk2 = -c0 * (1/Kref - 1/pred$K) * pred$Degradation
pred$derivc0 = 1 - pred$Degradation
pred$varY = (pred$derivk1)^2 * SIG[1, 1] + (pred$derivk2)^2 *
SIG[2, 2] + (pred$derivc0)^2 * SIG[3, 3] + 2 * pred$derivk1 *
pred$derivk2 * SIG[1, 2] + 2 * pred$derivk1 * pred$derivc0 *
SIG[1, 3] + 2 * pred$derivk2 * pred$derivc0 * SIG[2,
3]
pred$derivk1 = pred$derivk2 = pred$derivc0 = NULL}else{ # Bootstrap
pred_fct = function(coef.fit)
{
degrad = pred$time * exp(coef.fit[1] - coef.fit[2] / pred$K + coef.fit[2] / Kref)
conc = coef.fit[3] - coef.fit[3]*degrad
return(conc)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 3) + matrix(nrow = draw, ncol = 3, byrow = TRUE, coef(fit))
res.boot = matrix(nrow = draw, ncol = nrow(pred), byrow = TRUE, apply(rand.coef, 1, pred_fct))
CI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
CI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
res.boot = res.boot + rnorm(draw*length(pred$time), 0, sigma)
PI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
PI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
}
}else if(!reparameterisation & zero_order){ # no reparameterisation and k3 is 0
MyFctNL = function(parms) { # make function
k1 = parms$k1
k2 = parms$k2
c0 = parms$c0
Model = c0 - c0 * dat$time * exp(k1 - k2 / dat$K)
residual = dat$y - Model
return(residual)
}
if (!is.null(parms)) { # fit model
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
else {
repeat {
suppressWarnings(rm(fit))
parms = list(k1 = stats::runif(1, 0, 40), k2 = stats::runif(1,
1000, 20000), c0 = c0_initial)
fit <- tryCatch({
suppressWarnings(minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0, length(parms))))
},
error = function(e){"error"},
warning = function(w){"warning"})
vcov_test <- tryCatch({
stats::vcov(fit)
},
error = function(e){"error"},
warning = function(w){"warning"})
if(all(!(fit %in% c("error","warning"))) && all(!(vcov_test %in% c("error","warning", NaN)))){
break
}
}
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
# Predict
k1 = coef(fit)[1]
k2 = coef(fit)[2]
c0 = coef(fit)[3]
SIG = vcov(fit)
sigma = summary(fit)$sigma
DF = summary(fit)$df[2]
pred = expand.grid("time" = times.pred, K = Temps)
pred$Degradation = pred$time * exp(k1 - k2 / pred$K)
pred$Response = c0 - c0*pred$Degradation
if(is.null(draw)){
pred$derivk1 = -c0 * pred$Degradation
pred$derivk2 = c0 / pred$K * pred$Degradation
pred$derivc0 = 1 - pred$Degradation
pred$varY = (pred$derivk1)^2 * SIG[1,1] + (pred$derivk2)^2 * SIG[2,2] + (pred$derivc0)^2 * SIG[3,3] +
2*pred$derivk1*pred$derivk2 * SIG[1,2] + 2*pred$derivk1*pred$derivc0 * SIG[1,3] + 2*pred$derivk2*pred$derivc0 * SIG[2,3]
pred$derivk1 = pred$derivk2 = pred$derivc0 = NULL}else{ # Bootstrap
pred_fct = function(coef.fit)
{
degrad = pred$time * exp(coef.fit[1] - coef.fit[2] / pred$K)
conc = coef.fit[3] - coef.fit[3]*degrad
return(conc)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 3) + matrix(nrow = draw, ncol = 3, byrow = TRUE, coef(fit))
res.boot = matrix(nrow = draw, ncol = nrow(pred), byrow = TRUE, apply(rand.coef, 1, pred_fct))
CI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
CI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
res.boot = res.boot + rnorm(draw*length(pred$time), 0, sigma)
PI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
PI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
}
}else if(reparameterisation & !zero_order){ #reparameterisation and k3 is not zero
MyFctNL = function(parms) {
k1 = parms$k1
k2 = parms$k2
k3 = parms$k3
c0 = parms$c0
Model = c0 - c0 * (1 - ((1 - k3) * (1/(1 - k3) - dat$time *
exp(k1 - k2/dat$K + k2/Kref)))^(1/(1 - k3)))
residual = dat$y - Model
return(residual)
}
if (!is.null(parms)) { # Fit the model
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
else {
repeat {
suppressWarnings(rm(fit))
parms = list(k1 = stats::runif(1, 0, 60), k2 = stats::runif(1,
1000, 20000), k3 = stats::runif(1, 0, 11), c0 = c0_initial)
fit <- tryCatch({
suppressWarnings(minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0, length(parms))))
},
error = function(e){"error"},
warning = function(w){"warning"})
vcov_test <- tryCatch({
stats::vcov(fit)
},
error = function(e){"error"},
warning = function(w){"warning"})
if(all(!(fit %in% c("error","warning"))) && all(!(vcov_test %in% c("error","warning", NaN)))){
break
}
}
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
# Predict
k1 = coef(fit)[1]
k2 = coef(fit)[2]
k3 = coef(fit)[3]
if (k3 == 0){print("k3 is fitted to be exactly 0, we strongly suggest using option zero_order = TRUE")
}else if(confint(fit,'k3')[1] < 0 && confint(fit,'k3')[2] > 0){print(paste0("The 95% Wald Confidence Interval for k3 includes 0, k3 is estimated as ",signif(k3,4),". We suggest considering option zero_order = TRUE"))}
c0 = coef(fit)[4]
SIG = vcov(fit)
sigma = summary(fit)$sigma
DF = summary(fit)$df[2]
pred = expand.grid("time" = times.pred, K = Temps)
pred$Degradation = 1 - ((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2 / pred$K + k2 / Kref)))^(1/(1-k3))
pred$Response = c0 - c0*pred$Degradation
if(is.null(draw)){
pred$derivk1 = c0 * pred$time * (-exp(k1 - k2/pred$K + k2/Kref)) * ((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K + k2/Kref)))^(1/(1 - k3) - 1)
pred$derivk2 = c0 * pred$time * (1/Kref - 1/pred$K) * (-exp(k1 - k2/pred$K + k2/Kref)) * ((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K + k2/Kref)))^(1/(1 - k3) - 1)
pred$derivk3 = c0 * ((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K + k2/Kref)))^(1/(1 - k3)) * ((pred$time * exp(k1 - k2/pred$K + k2/Kref)) / ((1 - k3)^2 * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K + k2/Kref))) + log((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K + k2/Kref)))/(1 - k3)^2)
pred$derivc0 = 1 - pred$Degradation
pred$varY = (pred$derivk1)^2 * SIG[1,1] + (pred$derivk2)^2 * SIG[2,2] + (pred$derivk3)^2 * SIG[3,3] + (pred$derivc0)^2 * SIG[4,4] +
2*pred$derivk1*pred$derivk2 * SIG[1,2] + 2*pred$derivk1*pred$derivk3 * SIG[1,3] + 2*pred$derivk1*pred$derivc0 * SIG[1,4] +
2*pred$derivk2*pred$derivk3 * SIG[2,3] + 2*pred$derivk2*pred$derivc0 * SIG[2,4] + 2*pred$derivk3*pred$derivc0 * SIG[3,4]
pred$derivk1 = pred$derivk2 = pred$derivk3 = pred$derivc0 = NULL}else{ # Bootstrap
pred_fct = function(coef.fit)
{
degrad = 1 - ((1 - coef.fit[3]) * (1/(1 - coef.fit[3]) - pred$time * exp(coef.fit[1] - coef.fit[2] / pred$K + coef.fit[2] / Kref)))^(1/(1-coef.fit[3]))
conc = coef.fit[4] - coef.fit[4]*degrad
return(conc)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 4) + matrix(nrow = draw, ncol = 4, byrow = TRUE, coef(fit))
res.boot = matrix(nrow = draw, ncol = nrow(pred), byrow = TRUE, apply(rand.coef, 1, pred_fct))
CI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
CI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
res.boot = res.boot + rnorm(draw*length(pred$time), 0, sigma)
PI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
PI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
}
}else if(!reparameterisation & !zero_order){ # No re-parameterisation and k3 not zero
MyFctNL = function(parms) {
k1 = parms$k1
k2 = parms$k2
k3 = parms$k3
c0 = parms$c0
test = c0 - c0 * (1 - ((1 - k3) * (1/(1 - k3) - dat$time * exp(k1 - k2 / dat$K)))^(1/(1-k3)))
residual = dat$y - test
return(residual)
}
if (!is.null(parms)) { # Fitting the model
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
else {
repeat {
suppressWarnings(rm(fit))
parms = list(k1 = stats::runif(1, 0, 60), k2 = stats::runif(1,
1000, 20000), k3 = stats::runif(1, 0, 11), c0 = c0_initial)
fit <- tryCatch({
suppressWarnings(minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0, length(parms))))
},
error = function(e){"error"},
warning = function(w){"warning"})
vcov_test <- tryCatch({
stats::vcov(fit)
},
error = function(e){"error"},
warning = function(w){"warning"})
if(all(!(fit %in% c("error","warning"))) && all(!(vcov_test %in% c("error","warning", NaN)))){
break
}
}
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
# Predict
k1 = coef(fit)[1]
k2 = coef(fit)[2]
k3 = coef(fit)[3]
if (k3 == 0){print("k3 is fitted to be exactly 0, we strongly suggest using option zero_order = TRUE")
}else if(confint(fit,'k3')[1] < 0 && confint(fit,'k3')[2] > 0){print(paste0("The 95% Wald Confidence Interval for k3 includes 0, k3 is estimated as ",signif(k3,4),". We suggest considering option zero_order = TRUE"))}
c0 = coef(fit)[4]
SIG = vcov(fit)
sigma = summary(fit)$sigma
DF = summary(fit)$df[2]
pred = expand.grid("time" = times.pred, K = Temps)
pred$Degradation = 1 - ((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2 / pred$K)))^(1/(1-k3))
pred$Response = c0 - c0*pred$Degradation
if(is.null(draw)){ # Derivatives
pred$derivk1 = c0 * pred$time * (-exp(k1 - k2/pred$K)) * ((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K)))^(1/(1 - k3) - 1)
pred$derivk2 = (c0 * pred$time * exp(k1 - k2/pred$K) * ((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K)))^(1/(1 - k3) - 1)) / pred$K
pred$derivk3 = c0 * ((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K)))^(1/(1 - k3)) * ((pred$time * exp(k1 - k2/pred$K)) / ((1 - k3)^2 * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K))) + log((1 - k3) * (1/(1 - k3) - pred$time * exp(k1 - k2/pred$K)))/(1 - k3)^2)
pred$derivc0 = 1 - pred$Degradation
pred$varY = (pred$derivk1)^2 * SIG[1,1] + (pred$derivk2)^2 * SIG[2,2] + (pred$derivk3)^2 * SIG[3,3] + (pred$derivc0)^2 * SIG[4,4] +
2*pred$derivk1*pred$derivk2 * SIG[1,2] + 2*pred$derivk1*pred$derivk3 * SIG[1,3] + 2*pred$derivk1*pred$derivc0 * SIG[1,4] +
2*pred$derivk2*pred$derivk3 * SIG[2,3] + 2*pred$derivk2*pred$derivc0 * SIG[2,4] + 2*pred$derivk3*pred$derivc0 * SIG[3,4]
      pred$derivk1 = pred$derivk2 = pred$derivk3 = pred$derivc0 = NULL
    }else{ # Bootstrap
pred_fct = function(coef.fit)
{
degrad = 1 - ((1 - coef.fit[3]) * (1/(1 - coef.fit[3]) - pred$time * exp(coef.fit[1] - coef.fit[2] / pred$K)))^(1/(1-coef.fit[3]))
conc = coef.fit[4] - coef.fit[4]*degrad
return(conc)
}
      # Bootstrap: draw coefficient vectors from a multivariate t centred at the fitted estimates
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 4) + matrix(nrow = draw, ncol = 4, byrow = TRUE, coef(fit))
res.boot = matrix(nrow = draw, ncol = nrow(pred), byrow = TRUE, apply(rand.coef, 1, pred_fct))
CI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
CI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
res.boot = res.boot + rnorm(draw*length(pred$time), 0, sigma)
PI1b = apply(res.boot, 2, quantile, ((1-confidence_interval)/2), na.rm = TRUE)
PI2b = apply(res.boot, 2, quantile, ((1+confidence_interval)/2), na.rm = TRUE)
}
}
pred$Celsius = as.factor(pred$K - 273.15)
pred$K = as.factor(pred$K)
pred$fit = "Prediction"
pred$CI = paste(100*confidence_interval, "% CI")
pred$PI = paste(100*confidence_interval, "% PI")
if(is.null(draw)){
pred$CI1 = pred$Response - qt(0.5 + confidence_interval/2, summary(fit)$df[2]) * sqrt(pred$varY)
pred$CI2 = pred$Response + qt(0.5 + confidence_interval/2, summary(fit)$df[2]) * sqrt(pred$varY)
pred$PI1 = pred$Response - qt(0.5 + confidence_interval/2, summary(fit)$df[2]) * sqrt(pred$varY + sigma^2)
pred$PI2 = pred$Response + qt(0.5 + confidence_interval/2, summary(fit)$df[2]) * sqrt(pred$varY + sigma^2)
}else{
pred$CI1 = CI1b
pred$CI2 = CI2b
pred$PI1 = PI1b
pred$PI2 = PI2b}
results = list(fit, dat_full, pred,user_parameters)
names(results) = c("fit", "data", "prediction","user_parameters")
class(results) = "SB"
return(results)
}
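## Illustrative use of the bootstrap branch above (a sketch only; 'antigenicity'
## is the example dataset shipped with the package and draw = 5000 is an
## arbitrary choice, not a recommendation):
## fit_boot <- step1_down(data = antigenicity, y = "conc", .time = "time",
##                        C = "Celsius", draw = 5000, max_time_pred = 3)
## fit_boot$prediction then carries bootstrap-based CI1/CI2 and PI1/PI2 columns.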
## ---- end of file: AccelStab/R/step1_down.R ----
#' @title Step1 Down Model Root Mean Square Error Calculation
#'
#' @description Calculate Root Mean Square Error (RMSE) for the one-step Šesták–Berggren kinetic model.
#'
#' @details Calculate RMSE for the one-step Šesták–Berggren kinetic (non-linear) model using
#' user provided parameters.
#'
#' @param data Dataframe containing accelerated stability data (required).
#' @param y Name of decreasing variable (e.g. concentration) contained within data (required).
#' @param .time Time variable contained within data (required).
#' @param K Kelvin variable (numeric or column name) (optional).
#' @param C Celsius variable (numeric or column name) (optional).
#' @param parms Values for the parameters as a list - k1, k2, k3, and c0. If multiple are provided all combinations will be used (required).
#' @param reparameterisation Use alternative parameterisation of the one-step
#' model which aims to reduce correlation between k1 and k2.
#'
#' @return A data frame containing one row for each RMSE calculation
#'
#' @examples #load antigenicity and potency data.
#' data(antigenicity)
#' data(potency)
#'
#' #Basic use of the step1_down_rmse function with C column defined.
#' rmse1 <- step1_down_rmse(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", parms = list(c0 = c(96,98,100), k1 = c(42,45),
#' k2 = c(12000,12500), k3 = c(8,9,10)))
#'
#' #Basic use of the step1_down_rmse function with K column defined.
#' rmse2 <- step1_down_rmse(data = antigenicity, y = "conc", .time = "time",
#' K = "K", parms = list(c0 = c(98), k1 = c(42,45), k2 = c(12500), k3 = c(8,9)))
#'
#' #reparameterisation is TRUE.
#' rmse3 <- step1_down_rmse(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", parms = list(c0 = c(100,95), k1 = c(2,2.5), k2 = c(12000,13000),
#' k3 = c(9,10)), reparameterisation = TRUE)
#'
#' @importFrom dplyr %>% mutate
#'
#' @export step1_down_rmse
step1_down_rmse <- function (data, y, .time, K = NULL, C = NULL,
parms, reparameterisation = FALSE){
if (is.null(K) & is.null(C))
stop("Select the temperature variable in Kelvin or Celsius")
if (!is.list(parms))
stop("The starting values for parameters must be a list")
if(!is.null(C) & !is.null(K)) {
    data[, C] <- ifelse(is.na(data[, C]) & !is.na(data[, K]),
                        data[, K] - 273.15,
                        data[, C])
    data[, K] <- ifelse(is.na(data[, K]) & !is.na(data[, C]),
                        data[, C] + 273.15,
                        data[, K])
}
data <- data[complete.cases(data[, c(C,K,y,.time)]), ]
dat = data
if (is.null(K))
dat$K = dat[, C] + 273.15
if (is.null(C)) {
dat$C = dat[, K] - 273.15
C = "C"}
Kref = mean(dat$K)
dat$Celsius = as.factor(dat[, C])
dat$time = dat[, .time]
dat$y = dat[, y]
if(.time != "time"){
dat <- dat[, !names(dat) %in% c(.time)]
}
if(y != "y"){
dat <- dat[, !names(dat) %in% c(y)]
}
result_grid <- expand.grid(parms) %>% mutate(rmse = NA)
if(reparameterisation){
for (i in 1:dim(result_grid)[1]){
c0 <- result_grid[i,]$c0
k1 <- result_grid[i,]$k1
k2 <- result_grid[i,]$k2
k3 <- result_grid[i,]$k3
dat$Degradation = 1 - ((1 - k3) * (1/(1 - k3) - dat$time * exp(k1 - k2 / dat$K + k2 / Kref)))^(1/(1-k3))
dat$Response = c0 - c0*dat$Degradation
dat$sqrResidual = (dat$Response - dat$y)^2
result_grid[i,'rmse'] <- sqrt(mean(dat$sqrResidual))
}
}else{
for (i in 1:dim(result_grid)[1]){
c0 <- result_grid[i,]$c0
k1 <- result_grid[i,]$k1
k2 <- result_grid[i,]$k2
k3 <- result_grid[i,]$k3
dat$Degradation = 1 - ((1 - k3) * (1/(1 - k3) - dat$time * exp(k1 - k2 / dat$K)))^(1/(1-k3))
dat$Response = c0 - c0*dat$Degradation
dat$sqrResidual = (dat$Response - dat$y)^2
result_grid[i,'rmse'] <- sqrt(mean(dat$sqrResidual))
}
}
return(result_grid)
}
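## A typical follow-up on the returned grid (illustrative; rmse1 refers to the
## object created in the roxygen example above): keep the combination of
## starting values with the smallest RMSE.
## rmse1[which.min(rmse1$rmse), ]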
## ---- end of file: AccelStab/R/step1_down_rmse.R ----
#' @title Plot Confidence Intervals
#'
#' @description Plot the stability data and visualise the predictions with
#' confidence intervals.
#'
#' @details Use the fit object obtained from the step1.down function to plot the
#' data and visualise the predictions with confidence intervals applied.
#' There is an option to view the confidence intervals as a ribbon. The
#' confidence interval value is chosen in the step1.down function.
#'
#' @param step1_down_object The fit object from the step1.down function (required).
#' @param xname Label for the x-axis (optional).
#' @param yname Label for the y-axis (optional).
#' @param xlim x-axis limits (optional).
#' @param ylim y-axis limits (optional).
#' @param ribbon Add shade to confidence intervals (optional).
#'
#' @return Plot of stability data with prediction curves and confidence intervals.
#'
#' @examples
#' #load antigenicity data
#' data(antigenicity)
#'
#' #run step1.down fit
#' fit1 <- step1_down(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", max_time_pred = 3, confidence_interval = 0.9)
#'
#' #plot raw data with prediction curves and confidence intervals.
#' step1_plot_CI(step1_down_object = fit1, xlim = NULL, ylim = NULL,
#' xname = "Time (Years)", yname = "Concentration", ribbon = TRUE)
#'
#' @import ggplot2
#'
#' @export step1_plot_CI
step1_plot_CI <- function (step1_down_object, xname = NULL, yname = NULL,
xlim = NULL, ylim = NULL, ribbon = FALSE)
{
if (is.null(step1_down_object))
stop("First, run the model")
if (is.null(xname))
xname = "Time"
if (is.null(yname))
yname = "Response Variable"
dat = step1_down_object$data
pred = step1_down_object$prediction
mytheme <- ggplot2::theme(legend.position = "bottom", strip.background = element_rect(fill = "white"),
legend.key = element_rect(fill = "white"), legend.key.width = unit(2,"cm"),
axis.text = element_text(size = 13), axis.title = element_text(size = 13),
strip.text = element_text(size = 13),
legend.text = element_text(size = 13),
legend.title = element_text(size = 13))
validation = step1_down_object$user_parameters$validation
if(!is.null(validation)){
shape_types <- c(16,1)
names(shape_types) <- c("Fit", "Validation")
}
confidence_i <- paste0(step1_down_object$user_parameters$confidence_interval * 100," % CI")
line_types <- if(ribbon){c("solid", "dotted")}else{c("dotted", "solid")}
names(line_types) <- c("Prediction",confidence_i)
plot = ggplot() + geom_point(data=dat, mapping=aes(x= time, y = y, colour = Celsius, shape = validation)) +
labs( x = xname, y = yname) +
{if(!is.null(xlim))scale_x_continuous(limits = xlim)} +
{if(!is.null(ylim))scale_y_continuous(limits = ylim)} +
mytheme +
geom_line(data=pred, mapping=aes(x= time, y = Response, colour = Celsius, linetype = "Prediction")) +
geom_line(data=pred, mapping=aes(x= time, y = CI1, colour = Celsius, linetype = confidence_i)) +
geom_line(data=pred, mapping=aes(x= time, y = CI2, colour = Celsius, linetype = confidence_i)) +
scale_linetype_manual(name = NULL, values = line_types) +
{if(ribbon)geom_ribbon(data=pred, aes(x = time, ymin=CI1, ymax=CI2, fill = Celsius), alpha=0.13, show.legend = FALSE)} +
{if(!is.null(validation))scale_shape_manual(values = shape_types, name = NULL)} +
theme(legend.box = "vertical", legend.spacing = unit(-0.4,"line"))
return(plot)
}
globalVariables(c('time','y','Celsius','Response','CI1','CI2'))
## ---- end of file: AccelStab/R/step1_plot_CI.R ----
#' @title Plot Prediction Intervals
#'
#' @description Plot the stability data and visualise the predictions with prediction intervals.
#'
#' @details Use the fit object obtained from the step1.down function to plot the
#' stability data and visualise the predictions with prediction intervals applied.
#' There is an option to view the prediction intervals as a ribbon. The
#' prediction interval value is chosen in the step1.down function.
#'
#' @param step1_down_object The fit object from the step1.down function (required).
#' @param xname Label for the x-axis (optional).
#' @param yname Label for the y-axis (optional).
#' @param xlim x-axis limits (optional).
#' @param ylim y-axis limits (optional).
#' @param ribbon Add shade to prediction intervals (optional).
#'
#' @return Plot of stability data with prediction curves and prediction intervals.
#'
#' @examples
#' #load antigenicity data
#' data(antigenicity)
#'
#' #run step1.down fit
#' fit1 <- step1_down(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", max_time_pred = 3)
#'
#' #plot raw data with prediction curves and prediction intervals.
#' step1_plot_PI(step1_down_object = fit1, xlim = NULL, ylim = NULL,
#' xname = "Time (Years)", yname = "Concentration", ribbon = TRUE)
#'
#' @import ggplot2
#'
#' @export step1_plot_PI
step1_plot_PI <- function (step1_down_object, xname = NULL, yname = NULL,
xlim = NULL, ylim = NULL, ribbon = FALSE)
{
if (is.null(step1_down_object))
stop("First, run the model")
if (is.null(xname))
xname = "Time"
if (is.null(yname))
yname = "Response Variable"
dat = step1_down_object$data
pred = step1_down_object$prediction
mytheme <- ggplot2::theme(legend.position = "bottom", strip.background = element_rect(fill = "white"),
legend.key = element_rect(fill = "white"), legend.key.width = unit(2,"cm"),
axis.text = element_text(size = 13), axis.title = element_text(size = 13),
strip.text = element_text(size = 13),
legend.text = element_text(size = 13),
legend.title = element_text(size = 13))
validation = step1_down_object$user_parameters$validation
if(!is.null(validation)){
shape_types <- c(16,1)
names(shape_types) <- c("Fit", "Validation")
}
prediction_i <- paste0(step1_down_object$user_parameters$confidence_interval * 100," % PI")
line_types <- if(ribbon){c("solid", "dotted")}else{c("dotted", "solid")}
names(line_types) <- c("Prediction",prediction_i)
plot = ggplot() + geom_point(data=dat, mapping=aes(x= time, y = y, colour = Celsius, shape = validation)) +
labs( x = xname, y = yname) +
{if(!is.null(xlim))scale_x_continuous(limits = xlim)} +
{if(!is.null(ylim))scale_y_continuous(limits = ylim)} +
mytheme +
geom_line(data=pred, mapping=aes(x= time, y = Response, colour = Celsius, linetype = "Prediction")) +
geom_line(data=pred, mapping=aes(x= time, y = PI1, colour = Celsius, linetype = prediction_i)) +
geom_line(data=pred, mapping=aes(x= time, y = PI2, colour = Celsius, linetype = prediction_i)) +
{if(ribbon)geom_ribbon(data=pred, aes(x = time, ymin=PI1, ymax=PI2, fill = Celsius), alpha=0.08, show.legend = FALSE)} +
scale_linetype_manual(name = NULL, values = line_types) +
{if(!is.null(validation))scale_shape_manual(values = shape_types, name = NULL)} +
theme(legend.box = "vertical", legend.spacing = unit(-0.4,"line"))
return(plot)
}
globalVariables(c('PI1','PI2'))
## ---- end of file: AccelStab/R/step1_plot_PI.R ----
#' @title Focus on Temperature
#'
#' @description Plot the stability data and visualise the predictions with focus on
#' one temperature.
#'
#' @details Plot the stability data and visualise the predictions focusing on one
#' chosen temperature with confidence and prediction intervals.
#'
#' @param step1_down_object The fit object from the step1.down function (required).
#' @param focus_T Selected temperature to highlight on the plot.
#' @param xname Label for the x-axis (optional).
#' @param yname Label for the y-axis (optional).
#' @param xlim the x-axis limits (optional).
#' @param ylim the y-axis limits (optional).
#' @param ribbon adds shade to confidence and prediction intervals (optional).
#'
#' @return ggplot2 object with focus on chosen temperature.
#'
#' @examples
#' #load potency data
#' data(potency)
#'
#' #run step1_down fit
#' fit1 <- step1_down(data = potency, y = "Potency", .time = "Time",
#' C = "Celsius", zero_order = TRUE)
#'
#' #plot raw data with prediction curves with focus on temperature in dataset.
#' step1_plot_T(fit1, focus_T = 5,ribbon = TRUE, xlim = NULL, ylim = c(0,12),
#' xname = "Time (Month)", yname = "Potency")
#'
#' #plot raw data with prediction curves with focus on temperature not in dataset.
#' step1_plot_T(fit1, focus_T = -10,ribbon = TRUE, xlim = NULL, ylim = c(0,12),
#' xname = "Time (Months)", yname = "Potency")
#'
#' @import ggplot2
#' @import scales
#'
#' @export step1_plot_T
step1_plot_T <- function (step1_down_object, focus_T = NULL, xname = NULL, yname = NULL,
xlim = NULL, ylim = NULL, ribbon = FALSE)
{
if (is.null(step1_down_object))
stop("First, run the model")
if (is.null(focus_T))
stop("You must select a temperature to focus on")
if (is.null(xname))
xname = "Time"
if (is.null(yname))
yname = "Response Variable"
if(!(focus_T %in% step1_down_object$prediction$Celsius)){
step1_down_object_temp <- step1_down(
data = step1_down_object$user_parameters$data,
y = step1_down_object$user_parameters$y,
.time = step1_down_object$user_parameters$.time,
K = step1_down_object$user_parameters$K,
C = step1_down_object$user_parameters$C,
validation = step1_down_object$user_parameters$validation,
draw = step1_down_object$user_parameters$draw,
parms = step1_down_object$user_parameters$parms,
temp_pred_C = c(step1_down_object$user_parameters$temp_pred_C,focus_T),
max_time_pred = step1_down_object$user_parameters$max_time_pred,
confidence_interval = step1_down_object$user_parameters$confidence_interval,
by = step1_down_object$user_parameters$by,
reparameterisation = step1_down_object$user_parameters$reparameterisation,
zero_order = step1_down_object$user_parameters$zero_order
)
dat = step1_down_object_temp$data
pred = step1_down_object_temp$prediction
confidence_interval = step1_down_object_temp$user_parameters$confidence_interval
}else{
dat = step1_down_object$data
pred = step1_down_object$prediction
confidence_interval = step1_down_object$user_parameters$confidence_interval
}
mytheme <- ggplot2::theme(legend.position = "bottom", strip.background = element_rect(fill = "white"),
legend.key = element_rect(fill = "white"), legend.key.width = unit(2,"cm"),
axis.text = element_text(size = 13), axis.title = element_text(size = 13),
strip.text = element_text(size = 13),
legend.text = element_text(size = 13),
legend.title = element_text(size = 13))
validation = step1_down_object$user_parameters$validation
if(!is.null(validation)){
shape_types <- c(16,1)
names(shape_types) <- c("Fit", "Validation")
}
xx = pred$Celsius == focus_T
confidence_i <- paste0(confidence_interval * 100," % CI")
prediction_i <- paste0(confidence_interval * 100," % PI")
lines_t <- c("solid","dotted","longdash")
names(lines_t) <- c("Prediction",confidence_i,prediction_i)
colour_t <- scales::hue_pal()(length(unique(pred$Celsius)))
names(colour_t) <- as.character(unique(pred$Celsius))
plot = ggplot() +
labs( x = xname, y = yname) +
{if(!is.null(xlim))scale_x_continuous(limits = xlim)} +
{if(!is.null(ylim))scale_y_continuous(limits = ylim)} +
mytheme +
geom_line(data=pred, mapping=aes(x= time, y = Response, colour = Celsius, linetype = "Prediction")) +
geom_line(data=pred[xx,], mapping=aes(x= time, y = CI1, colour = Celsius, linetype = confidence_i)) +
geom_line(data=pred[xx,], mapping=aes(x= time, y = CI2, colour = Celsius, linetype = confidence_i)) +
{if(ribbon)geom_ribbon(data=pred[xx,], aes(x = time, ymin=PI1, ymax=PI2, fill = Celsius), alpha=0.08, show.legend = FALSE)} +
{if(ribbon)geom_ribbon(data=pred[xx,], aes(x = time, ymin=CI1, ymax=CI2, fill= Celsius), alpha=0.13, show.legend = FALSE)} +
geom_line(data=pred[xx,], mapping=aes(x= time, y = PI1, colour = Celsius, linetype = prediction_i)) +
geom_line(data=pred[xx,], mapping=aes(x= time, y = PI2, colour = Celsius, linetype = prediction_i)) +
geom_point(data=dat, mapping=aes(x= time, y = y, colour = Celsius, shape = validation)) +
scale_linetype_manual(name = NULL, values = lines_t) +
scale_colour_manual(name = "Celsius", values = colour_t) +
scale_fill_manual(name = NULL, values = colour_t) +
{if(!is.null(validation))scale_shape_manual(values = shape_types, name = NULL)} +
theme(legend.box = "vertical", legend.spacing = unit(-0.4,"line"))
return(plot)
}
globalVariables(c('PI1','PI2'))
## ---- end of file: AccelStab/R/step1_plot_T.R ----
#' @title Plot Stability Data
#'
#' @description Plot raw accelerated stability data.
#'
#' @details Plot the raw accelerated stability data by selecting the columns -
#' response, time and temperature.
#'
#' @param data Dataframe containing accelerated stability data.
#' @param y Name of decreasing variable (e.g. concentration) contained within data
#' @param .time Time variable contained within data.
#' @param K Kelvin variable (numeric or column name) (optional).
#' @param C Celsius variable (numeric or column name) (optional).
#' @param validation Validation dummy variable (column name) (optional).
#' @param xname Label for the x-axis (optional).
#' @param yname Label for the y-axis (optional).
#' @param xlim x-axis limits (optional).
#' @param ylim y-axis limits (optional).
#'
#' @return Plot of raw accelerated stability data.
#'
#' @examples
#' #load example datasets
#' data(antigenicity)
#' data(potency)
#'
#' step1_plot_desc(data=antigenicity, y="conc", .time="time", C = "Celsius")
#'
#' step1_plot_desc(data=potency, y="Potency", .time="Time", C = "Celsius")
#'
#' @import ggplot2
#'
#' @export step1_plot_desc
step1_plot_desc <- function (data, y, .time, K = NULL, C = NULL, validation = NULL,
xname = NULL, yname = NULL, xlim = NULL, ylim = NULL){
if (is.null(K) & is.null(C))
stop("Select the temperature variable in Kelvin or Celsius")
dat = data
if (!is.null(validation))
if (!all(dat[,validation] %in% c(0,1)))
stop("Validation column must contain 1s and 0s only")
if (is.null(xname))
xname = "Time"
if (is.null(yname))
yname = "Response Variable"
if (is.null(C)){
dat$C = dat[, K] - 273.15
dat$Celsius = as.factor(dat$C)
}else{
dat$Celsius = as.factor(dat[, C])
}
dat$time = dat[, .time]
dat$y = dat[, y]
if(!is.null(validation)){
dat$validation = ifelse(dat[,validation] == 0, "Fit", "Validation")
shape_types <- c(16,1)
names(shape_types) <- c("Fit", "Validation")
}
mytheme <- ggplot2::theme(legend.position = "bottom", strip.background = element_rect(fill = "white"),
legend.key = element_rect(fill = "white"), legend.key.width = unit(2,"cm"),
axis.text = element_text(size = 13), axis.title = element_text(size = 13),
strip.text = element_text(size = 13),
legend.text = element_text(size = 13),
legend.title = element_text(size = 13))
plot <- ggplot2::ggplot(dat, aes(time, y, colour = Celsius)) + ggplot2::geom_point(mapping = aes(shape = validation)) +
ggplot2::stat_summary(fun = mean, geom = "line") +
labs( x = xname, y = yname) +
{if(!is.null(xlim))scale_x_continuous(limits = xlim)} +
{if(!is.null(ylim))scale_y_continuous(limits = ylim)} +
{if(!is.null(validation))scale_shape_manual(values = shape_types, name = NULL)} +
mytheme +
theme(legend.box = "vertical", legend.spacing = unit(-0.4,"line"))
return(plot)
}
## ---- end of file: AccelStab/R/step1_plot_desc.R ----
#' @title Create Diagnostic Plots
#'
#' @description Generate residual diagnostic plots from a step1_down fit.
#'
#' @details Use the fit object obtained from the step1_down function to plot the
#' residual diagnostic plots, assess the quality of fit and search for anomalies.
#' Plots created are: Residuals Histogram, Observed Vs Predicted results, Residuals
#' Vs Predicted results and QQplot of Residuals.
#'
#' @param step1_down_object The fit object from the step1_down function (required).
#' @param bins The number of bins in the Histogram plot (default 7).
#'
#' @return A list containing the four ggplot2 plots.
#'
#' @examples
#' #load antigenicity data
#' data(antigenicity)
#'
#' #run step1_down fit
#' fit1 <- step1_down(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", max_time_pred = 3)
#'
#' #plot diagnostic plots to assess the fit
#' step1_plot_diagnostic(fit1)
#'
#' @import ggplot2
#' @importFrom stats dnorm sd
#'
#' @export step1_plot_diagnostic
step1_plot_diagnostic <- function(step1_down_object, bins = 7)
{
if (is.null(step1_down_object))
stop("First, run the model")
dat = step1_down_object$data
mytheme <- ggplot2::theme(legend.position = "bottom", strip.background = element_rect(fill = "white"),
legend.key = element_rect(fill = "white"), legend.key.width = unit(2,"cm"),
axis.text = element_text(size = 13), axis.title = element_text(size = 13),
strip.text = element_text(size = 13),
legend.text = element_text(size = 13),
legend.title = element_text(size = 13))
validation = step1_down_object$user_parameters$validation
if(!is.null(validation)){
dat <- dat[dat$validation == "Fit",]
}
dat$residuals <- summary(step1_down_object$fit)$residuals
dat$predicted <- dat$y - dat$residuals
# Histogram plot
res_histo = ggplot(dat, aes(x = residuals)) +
geom_histogram(aes(y = after_stat(density)),
breaks = seq(min(dat$residuals), max(dat$residuals), by = (max(dat$residuals) - min(dat$residuals))/bins),
colour = "black",
fill = "white") +
stat_function(fun = dnorm, args = list(mean = mean(dat$residuals), sd = sd(dat$residuals)),
xlim = c(min(dat$residuals), max(dat$residuals)),
col = "turquoise",
linewidth = 1,
alpha = 0.6) + ggtitle ("Residuals Histogram") + xlab("Residuals") + ylab("Density") +
mytheme
# observed vs predicted
obs_pred = ggplot() + geom_point(data = dat, mapping = aes(x = predicted, y = y, colour = Celsius)) +
geom_smooth(data = dat, method ="lm", formula = y ~ x, mapping = aes(x = predicted, y = y)) +
labs( x = "Predicted response", y = "Observed data")+
ggtitle ("Observed Vs Predicted") +
mytheme
# residuals vs predicted
res_pred = ggplot() + geom_point(data = dat, mapping = aes(x = predicted, y = residuals, colour = Celsius)) +
labs( x = "Predicted response", y = "Residuals") +
geom_hline(yintercept=0, linetype="solid", color = "black")+
ggtitle ("Residuals Vs Predicted") +
mytheme
# QQplot
qqplot <- ggplot(as.data.frame(dat), aes(sample = residuals)) +
stat_qq(aes(colour = Celsius)) + stat_qq_line() +
ggtitle ("Q-Q Plot") + xlab("Theoretical Quantiles") + ylab("Sample Quantiles")+
mytheme
results = list(res_histo,obs_pred,res_pred,qqplot)
names(results) = c("Residuals_Histogram","Observed_V_Predicted","Residuals_V_Predicted","Q_Q_Plot")
return(results)
}
globalVariables(c('residuals','density','predicted'))
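## The returned list can be indexed by plot name (illustrative, reusing the
## fit1 object from the roxygen example above):
## diag_plots <- step1_plot_diagnostic(fit1, bins = 10)
## diag_plots$Q_Q_Plot              # just the Q-Q plot
## diag_plots$Residuals_Histogram   # just the histogram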
## ---- end of file: AccelStab/R/step1_plot_diagnostic.R ----
#' @title Plot Model Predictions
#'
#' @description Plot the stability data and visualise the predictions.
#'
#' @details Use the fit object from the step1.down function to plot the accelerated
#' stability data and visualise the predictions.
#'
#' @param step1_down_object The fit object from the step1.down function (required).
#' @param xname Label for the x-axis (optional).
#' @param yname Label for the y-axis (optional).
#' @param xlim x-axis limits (optional).
#' @param ylim y-axis limits (optional).
#'
#' @return Plot of accelerated stability data with prediction curves.
#'
#' @examples
#' #load antigenicity data
#' data(antigenicity)
#'
#' fit1 <- step1_down(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", max_time_pred = 3)
#'
#' step1_plot_pred(step1_down_object = fit1, xlim = NULL, ylim = NULL,
#' xname = "Time (Years)", yname = "Concentration")
#'
#' @import ggplot2
#'
#' @export step1_plot_pred
step1_plot_pred <- function (step1_down_object, xname = NULL, yname = NULL,
xlim = NULL, ylim = NULL)
{
if (is.null(step1_down_object))
stop("First, run the model")
if (is.null(xname))
xname = "Time"
if (is.null(yname))
yname = "Response Variable"
dat = step1_down_object$data
pred = step1_down_object$prediction
mytheme <- ggplot2::theme(legend.position = "bottom", strip.background = element_rect(fill = "white"),
legend.key = element_rect(fill = "white"), legend.key.width = unit(2,"cm"),
axis.text = element_text(size = 13), axis.title = element_text(size = 13),
strip.text = element_text(size = 13),
legend.text = element_text(size = 13),
legend.title = element_text(size = 13))
validation = step1_down_object$user_parameters$validation
if(!is.null(validation)){
shape_types <- c(16,1)
names(shape_types) <- c("Fit", "Validation")
}
plot = ggplot() + geom_point(data=dat, mapping=aes(x= time, y = y, colour = Celsius, shape = validation)) +
labs( x = xname, y = yname) +
{if(!is.null(xlim))scale_x_continuous(limits = xlim)} +
{if(!is.null(ylim))scale_y_continuous(limits = ylim)} +
mytheme +
geom_line(data=pred, mapping=aes(x= time, y = Response, colour = Celsius)) +
scale_linetype_manual(name = NULL, values=c("solid", "dotted", "longdash")) +
{if(!is.null(validation))scale_shape_manual(values = shape_types, name = NULL)} +
theme(legend.box = "vertical", legend.spacing = unit(-0.4,"line"))
return(plot)
}
## ---- end of file: AccelStab/R/step1_plot_pred.R ----
#' @title Sample the Multivariate t Distribution
#'
#' @description Take a selected number of samples from the multivariate t distribution (mvt).
#'
#' @details Using the provided data the function creates a fit of the
#' Šesták–Berggren kinetic model and then draws a selected number of
#' samples from the mvt of the model parameters.
#'
#' @param data Dataframe containing accelerated stability data (required).
#' @param y Name of decreasing variable (e.g. concentration) contained within data
#' (required).
#' @param .time Time variable contained within data (required).
#' @param K Kelvin variable (numeric or column name) (optional).
#' @param C Celsius variable (numeric or column name) (optional).
#' @param validation Validation dummy variable (column name) (optional).
#' @param draw Number of samples to draw from mvt (required).
#' @param parms Starting values for the parameters as a list - k1, k2, k3, and c0 (optional).
#' @param reparameterisation Use alternative parameterisation of the one-step
#' model which aims to reduce correlation between k1 and k2.
#' @param zero_order Set kinetic order, k3, to zero (straight lines).
#'
#' @return A matrix containing parameter draws from the mvt distribution.
#'
#' @examples #load antigenicity data.
#' data(antigenicity)
#'
#' #Basic use of the step1_sample_mvt function with C column defined and 1000 draws.
#' sample1 <- step1_sample_mvt(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", draw = 1000)
#'
#' #Basic use of the step1_sample_mvt function with K column defined and 50000 draws
#' sample2 <- step1_sample_mvt(data = antigenicity, y = "conc", .time = "time",
#' K = "K", draw = 50000)
#'
#' #reparameterisation is TRUE and 10000 draws.
#' sample3 <- step1_sample_mvt(data = antigenicity, y = "conc", .time = "time",
#' C = "Celsius", reparameterisation = TRUE, draw = 10000)
#'
#' @importFrom stats vcov coef runif confint rnorm quantile qt complete.cases
#' @importFrom minpack.lm nls.lm
#' @importFrom mvtnorm rmvt
#'
#' @export step1_sample_mvt
step1_sample_mvt <- function (data, y, .time, K = NULL, C = NULL, validation = NULL,
draw, parms = NULL, reparameterisation = FALSE, zero_order = FALSE){
if (is.null(K) & is.null(C))
stop("Select the temperature variable in Kelvin or Celsius")
if (!is.null(parms) & !is.list(parms))
stop("The starting values for parameters must be a list, or keep as NULL")
if(!is.null(C) & !is.null(K)) {
    data[, C] <- ifelse(is.na(data[, C]) & !is.na(data[, K]),
                        data[, K] - 273.15,
                        data[, C])
    data[, K] <- ifelse(is.na(data[, K]) & !is.na(data[, C]),
                        data[, C] + 273.15,
                        data[, K])
}
data <- data[complete.cases(data[, c(C,K,y,.time)]), ]
dat = data
if (!is.null(validation))
if (!all(dat[,validation] %in% c(0,1)))
stop("Validation column must contain 1s and 0s only")
if (is.null(K))
dat$K = dat[, C] + 273.15
if (is.null(C)) {
dat$C = dat[, K] - 273.15
C = "C"}
Kref = mean(dat$K)
dat$Celsius = as.factor(dat[, C])
dat$time = dat[, .time]
dat$y = dat[, y]
if(!is.null(validation)){
dat$validation = ifelse(dat[,validation] == 0, "Fit", "Validation")
if(validation != "validation"){
dat <- dat[, !names(dat) %in% c(validation)]
}
}
if(.time != "time"){
dat <- dat[, !names(dat) %in% c(.time)]
}
if(y != "y"){
dat <- dat[, !names(dat) %in% c(y)]
}
dat_full <- dat
if(!is.null(validation)){
dat <- dat[dat$validation == "Fit",]
}
if(reparameterisation & zero_order){ # reparameterisation and k3 is 0
MyFctNL = function(parms) { # Make function
k1 = parms$k1
k2 = parms$k2
c0 = parms$c0
Model = c0 - c0 * dat$time * exp(k1 - k2/dat$K +
k2/Kref)
residual = dat$y - Model
return(residual)
}
# Fit model :
if (!is.null(parms)) {
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
else {
repeat {
parms = list(k1 = stats::runif(1, 0, 40), k2 = stats::runif(1,
1000, 20000), c0 = mean(dat$y[dat$time == 0]))
fit = suppressWarnings(minpack.lm::nls.lm(par = parms,
fn = MyFctNL, lower = rep(0, length(parms))))
res = tryCatch({
summary(fit)
}, error = function(e) e, warning = function(w) w)
res2 = tryCatch({
stats::vcov(fit)
}, error = function(e) e)
if (any(stats::coef(fit) != parms) && !inherits(res, "error") &&
!inherits(res2, "error"))
          break
}
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
SIG = stats::vcov(fit)
pred_fct = function(coef.fit)
{
degrad = pred$time * exp(coef.fit[1] - coef.fit[2] / pred$K + coef.fit[2] / Kref)
conc = coef.fit[3] - coef.fit[3]*degrad
return(conc)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 3) + matrix(nrow = draw, ncol = 3, byrow = TRUE, coef(fit))
}else if(!reparameterisation & zero_order){ # no reparameterisation and k3 is 0
MyFctNL = function(parms) { # make function
k1 = parms$k1
k2 = parms$k2
c0 = parms$c0
Model = c0 - c0 * dat$time * exp(k1 - k2 / dat$K)
residual = dat$y - Model
return(residual)
}
if (!is.null(parms)) { # fit model
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
else {
repeat {
parms = list(k1 = stats::runif(1, 0, 40), k2 = stats::runif(1,
1000, 20000), c0 = mean(dat$y[dat$time == 0]))
fit = suppressWarnings(minpack.lm::nls.lm(par = parms,
fn = MyFctNL, lower = rep(0, length(parms))))
res = tryCatch({
summary(fit)
}, error = function(e) e, warning = function(w) w)
res2 = tryCatch({
stats::vcov(fit)
}, error = function(e) e)
if (any(stats::coef(fit) != parms) && !inherits(res, "error") &&
!inherits(res2, "error"))
          break
}
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
SIG = vcov(fit)
pred_fct = function(coef.fit)
{
degrad = pred$time * exp(coef.fit[1] - coef.fit[2] / pred$K)
conc = coef.fit[3] - coef.fit[3]*degrad
return(conc)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 3) + matrix(nrow = draw, ncol = 3, byrow = TRUE, coef(fit))
}else if(reparameterisation & !zero_order){ #reparameterisation and k3 is not zero
MyFctNL = function(parms) {
k1 = parms$k1
k2 = parms$k2
k3 = parms$k3
c0 = parms$c0
Model = c0 - c0 * (1 - ((1 - k3) * (1/(1 - k3) - dat$time *
exp(k1 - k2/dat$K + k2/Kref)))^(1/(1 - k3)))
residual = dat$y - Model
return(residual)
}
if (!is.null(parms)) { # Fit the model
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
else {
repeat {
parms = list(k1 = stats::runif(1, 0, 60), k2 = stats::runif(1,
1000, 20000), k3 = stats::runif(1, 0, 11), c0 = mean(dat$y[dat$time == 0]))
fit = suppressWarnings(minpack.lm::nls.lm(par = parms,
fn = MyFctNL, lower = rep(0, length(parms))))
res = tryCatch({
summary(fit)
}, error = function(e) e, warning = function(w) w)
res2 = tryCatch({
stats::vcov(fit)
}, error = function(e) e)
if (any(stats::coef(fit) != parms) && !inherits(res, "error") &&
!inherits(res2, "error"))
          break
}
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
k3 = coef(fit)[3]
    if (k3 == 0){
      print("k3 is fitted to be exactly 0, we strongly suggest using option zero_order = TRUE")
    }else if(confint(fit,'k3')[1] < 0 && confint(fit,'k3')[2] > 0){
      print(paste0("The 95% Wald Confidence Interval for k3 includes 0, k3 is estimated as ",
                   signif(k3,4), ". We suggest considering option zero_order = TRUE"))
    }
SIG = vcov(fit)
pred_fct = function(coef.fit)
{
degrad = 1 - ((1 - coef.fit[3]) * (1/(1 - coef.fit[3]) - pred$time * exp(coef.fit[1] - coef.fit[2] / pred$K + coef.fit[2] / Kref)))^(1/(1-coef.fit[3]))
conc = coef.fit[4] - coef.fit[4]*degrad
return(conc)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 4) + matrix(nrow = draw, ncol = 4, byrow = TRUE, coef(fit))
}else if(!reparameterisation & !zero_order){ # No re-parameterisation and k3 not zero
MyFctNL = function(parms) {
k1 = parms$k1
k2 = parms$k2
k3 = parms$k3
c0 = parms$c0
test = c0 - c0 * (1 - ((1 - k3) * (1/(1 - k3) - dat$time * exp(k1 - k2 / dat$K)))^(1/(1-k3)))
residual = dat$y - test
return(residual)
}
if (!is.null(parms)) { # Fitting the model
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
else {
repeat {
parms = list(k1 = stats::runif(1, 0, 60), k2 = stats::runif(1,
1000, 20000), k3 = stats::runif(1, 0, 11), c0 = mean(dat$y[dat$time == 0]))
fit = suppressWarnings(minpack.lm::nls.lm(par = parms,
fn = MyFctNL, lower = rep(0, length(parms))))
res = tryCatch({
summary(fit)
}, error = function(e) e, warning = function(w) w)
res2 = tryCatch({
stats::vcov(fit)
}, error = function(e) e)
if (any(stats::coef(fit) != parms) && !inherits(res, "error") &&
!inherits(res2, "error"))
          break
}
fit = minpack.lm::nls.lm(par = parms, fn = MyFctNL, lower = rep(0,
length(parms)))
}
k3 = coef(fit)[3]
    if (k3 == 0){
      print("k3 is fitted to be exactly 0, we strongly suggest using option zero_order = TRUE")
    }else if(confint(fit,'k3')[1] < 0 && confint(fit,'k3')[2] > 0){
      print(paste0("The 95% Wald Confidence Interval for k3 includes 0, k3 is estimated as ",
                   signif(k3,4), ". We suggest considering option zero_order = TRUE"))
    }
SIG = vcov(fit)
pred_fct = function(coef.fit)
{
degrad = 1 - ((1 - coef.fit[3]) * (1/(1 - coef.fit[3]) - pred$time * exp(coef.fit[1] - coef.fit[2] / pred$K)))^(1/(1-coef.fit[3]))
conc = coef.fit[4] - coef.fit[4]*degrad
return(conc)
}
# Multi T bootstrap
rand.coef = rmvt(draw, sigma = SIG, df = nrow(dat) - 4) + matrix(nrow = draw, ncol = 4, byrow = TRUE, coef(fit))
}
if(zero_order){
colnames(rand.coef) <- c("k1","k2","c0")
}else{
colnames(rand.coef) <- c("k1","k2","k3","c0")
}
return(rand.coef)
}
globalVariables(c('pred'))
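## A possible follow-up with the drawn coefficients (illustrative only; the
## quantile summary is plain base R, not part of the package API):
## sample1 <- step1_sample_mvt(data = antigenicity, y = "conc", .time = "time",
##                             C = "Celsius", draw = 10000)
## apply(sample1, 2, quantile, probs = c(0.025, 0.5, 0.975))  # spread of k1, k2, k3, c0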
## ---- end of file: AccelStab/R/step1_sample_mvt.R ----
## code_generic.R ---
##
## Author: Andreas Kiermeier
##
## Created: 21 Aug 2007
##
## Purpose: Generic code (mostly) which applies to all types of sampling plans
##
## Changes:
## 21Aug07: * Created
## 29Dec16: * find.plan: Included check for missing N for hypergeometric
## 06Apr22: * find.k: Include interval=c(0,1000) in call to find.k
## ----------------------------------------------------------------------
setGeneric("assess", function(object, PRP, CRP, print=TRUE)
standardGeneric("assess"))
check.paccept <-
function(pa){
## Purpose: Utility function to check that supplied P(accept) values
## fall within [0,1]
## ----------------------------------------------------------------------
## Arguments:
## pa: a vector of P(accept) values
## ----------------------------------------------------------------------
## Author: Andreas Kiermeier, Date: 16 May 2007, 10:19
if (any(pa < 0) | any(pa > 1))
return(FALSE)
return(TRUE)
}
check.quality <-
function(pd, type){
## Purpose: Utility function to check that supplied Proportion defective
## values fall within
## [0,1] for the binomial or hypergeometric
## [0,inf] for the poisson
## ----------------------------------------------------------------------
## Arguments:
## pd: a vector of proportion defective values
## ----------------------------------------------------------------------
## Author: Andreas Kiermeier, Date: 16 May 2007, 10:19
if (any(pd < 0))
return(FALSE)
if (type %in% c("binomial", "hypergeom", "normal") & any(pd > 1))
return(FALSE)
return(TRUE)
}
## Utility to find k for a given n
find.k <- function(n, pd, pa, interval=c(0,5)){
tmp <- uniroot(function(x, n, pd, pa){
pt(x*sqrt(n), df=n-1, ncp=-qnorm(pd)*sqrt(n)) - pa},
interval=interval, n=n, pd=pd, pa=1-pa)
return(tmp$root)
}
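## Illustrative call (hypothetical numbers): for a sample of n = 30 with an
## unknown standard deviation, the constant k giving P(accept) = 0.95 at a
## true proportion defective of 0.05 is
## find.k(n = 30, pd = 0.05, pa = 0.95)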
find.plan <- function(PRP, CRP,
type=c("binomial","hypergeom","poisson","normal"),
N,
s.type=c("known", "unknown"))
{
## Purpose: Find the sampling plan with the smallest sample size, which
## meets a prespecified Producer and Consumer Risk Points.
##
## The convention used here, as in many books, is to use equality
## for the Producer Risk Point rather than the consumer risk point.
##
## No consideration is given to "cost functions".
## ----------------------------------------------------------------------
## Arguments:
## PRP : Producer risk point in the form c(pdefect, paccept)
## CRP : Consumer risk point in the form c(pdefect, paccept)
## N : Population size - only used for hypergeomtric distribution
## type : The distributional assumption
## s.type: Only used for 'normal' distribution - indicates whether the
## standard deviation is known or unknown (use sample s.d.)
## ----------------------------------------------------------------------
## Author: Andreas Kiermeier, Date: 20 Aug 2007, 12:09
type <- match.arg(type)
s.type <- match.arg(s.type)
## Check that N is supplied for hypergeometric distribution
if(type=="hypergeom" & missing(N))
stop("N must be supplied for the hypergeometric distribution.")
## Needs checking that risk points are "valid" - use existing functions
if (missing(PRP) | missing(CRP))
    stop("Producer and Consumer Risk Points must be provided.")
else if(!check.quality(PRP[1], type=type) |
!check.paccept(PRP[2]) )
stop("Producer Risk Point - Quality and/or desired P(accept) out of bounds")
else if(!check.quality(CRP[1], type=type) |
!check.paccept(CRP[2]) )
stop("Consumer Risk Point - Quality and/or desired P(accept) out of bounds")
else if(CRP[1] <= PRP[1])
stop("Consumer Risk Point quality must be greater than Producer Risk Point quality")
# else if(CRP[2] > PRP[2])
# stop("Consumer Risk Point P(accept) must be less than or equal to Producer Risk Point P(accept)")
## Attributes Sampling Plan - Binomial distribution
if (type == "binomial") {
c <- 0
n <- c+1
repeat {
if (calc.OCbinomial(n=n,c=c,r=c+1,pd=CRP[1]) > CRP[2])
n <- n + 1
else if (calc.OCbinomial(n=n,c=c,r=c+1,pd=PRP[1]) < PRP[2])
c <- c + 1
else
break
}
return(list(n=n, c=c, r=c+1))
}
## Attributes Sampling Plan - Hypergeometric distribution
if (type == "hypergeom") {
c <- 0
n <- c+1
repeat {
if (calc.OChypergeom(n=n,c=c,r=c+1,N=N,D=CRP[1]*N) > CRP[2])
n <- n + 1
else if (calc.OChypergeom(n=n,c=c,r=c+1,N=N,D=PRP[1]*N) < PRP[2])
c <- c + 1
else
break
}
return(list(n=n, c=c, r=c+1))
}
## Attributes Sampling Plan - Poisson distribution
if (type == "poisson") {
c <- 0
n <- c+1
repeat {
if (calc.OCpoisson(n=n,c=c,r=c+1,pd=CRP[1]) > CRP[2])
n <- n + 1
else if (calc.OCpoisson(n=n,c=c,r=c+1,pd=PRP[1]) < PRP[2])
c <- c + 1
else
break
}
return(list(n=n, c=c, r=c+1))
}
## Variables Sampling Plan - Normal distribution
else if (type=="normal") {
## With known standard deviation
if (s.type=="known") {
n <- ceiling( ((qnorm(1-PRP[2]) + qnorm(CRP[2]))/
(qnorm(CRP[1])-qnorm(PRP[1])) )^2)
k <- qnorm(1-PRP[2])/sqrt(n) - qnorm(PRP[1])
return(list(n=n, k=k, s.type=s.type))
}
## With unknown standard deviation
else if (s.type=="unknown") {
n <- 2 ## Need a minimum of 1 degree of freedom (=n-1) for the NC t-dist
k <- find.k(n, PRP[1], PRP[2], interval=c(0,1000))
pa <- 1- pt(k*sqrt(n), df=n-1, ncp=-qnorm(CRP[1])*sqrt(n))
while(pa > CRP[2]){
n <- n+1
k <- find.k(n, PRP[1], PRP[2], interval=c(0,1000))
pa <- 1-pt(k*sqrt(n), df=n-1, ncp=-qnorm(CRP[1])*sqrt(n))
}
return(list(n=n, k=k, s.type=s.type))
}
}
}
## x1 <- find.plan(c(0.05, 0.95), c(0.15, 0.075), type="bin")
## x <- OC2c(x1$n, x1$c, x1$r, type="bin")
## assess(x, c(0.05, 0.95), c(0.15, 0.075))
## x1 <- find.plan(c(0.05, 0.95), c(0.15, 0.075), type="hyp", N=100)
## x <- OC2c(x1$n, x1$c, x1$r, type="hyp", N=100)
## assess(x, c(0.05, 0.95), c(0.15, 0.075))
## x1 <- find.plan(c(0.05, 0.95), c(0.15, 0.075), type="pois")
## x <- OC2c(x1$n, x1$c, x1$r, type="pois")
## assess(x, c(0.05, 0.95), c(0.15, 0.075))
## The following examples come from Guenther's book
## PRP <- c(0.01, 0.95)
## CRP <- c(0.10, 0.1)
## x1 <- find.plan(PRP=PRP, CRP=CRP, type="nor", s.type="unknown")
## x <- OCvar(x1$n, x1$k, s.type=x1$s.type, pd=seq(0,0.2, by=0.002))
## plot(x)
## points(PRP[1], PRP[2], col="red", pch=19); points(CRP[1], CRP[2], col="red", pch=19)
## assess(x, PRP=PRP, CRP=CRP)
## PRP <- c(0.05, 0.95)
## CRP <- c(0.20, 0.1)
## x1 <- find.plan(PRP=PRP, CRP=CRP, type="nor", s.type="known")
## x <- OCvar(x1$n, x1$k, s.type=x1$s.type, pd=seq(0,0.2, by=0.002))
## plot(x)
## points(PRP[1], PRP[2], col="red", pch=19); points(CRP[1], CRP[2], col="red", pch=19)
## assess(x, PRP=PRP, CRP=CRP)
### Local Variables:
### comment-start: "## "
### fill-column: 80
### End:
## ---- end of file: AcceptanceSampling/R/code_generic.R ----
## code_twoclass.R ---
##
## Author: Andreas Kiermeier
##
## Created: 08 Mar 2007
##
## Purpose: A package to provide functionality for creating and
## evaluating acceptance sampling plans.
##
## Changes:
## 16Aug07: * Added check in OC2c validation code to ensure that sample sizes
## are greater than zero.
## * Added virtual class OCvar for variables sampling plans (single
## only)
## * Added actual class for variables sampling plans - Normal
## 20Aug07: * Added function {find.k} to find constant k for given sample size in
## normal variables sampling plans
## * Added function {find.plan} to find smallest sampling plan
## for given Producer and Consumer Risk Points
## 27Feb08: * Changed the validation for r & c (which are cumulative) to be
## compared against cumsum(n) as n is not cumulative.
## 05Mar08: * Fixed problem with multiple sampling plans - previous calculations
## were completely wrong.
## Code now enumerates over all stages and all possible outcomes which
##            lead to additional sampling.
## 29Dec16: * calc.OChypergeom: Included checking if N and D are integers. Issue
## warning if not (for backwards compatibility). Thanks to Thomas
## LaBone and Peter Bloomfield for raising the issue and suggesting
## a solution.
## 15Jan19: * Change class definitions to avoid errors in latest R development
## build.
## 21Jan19: * Cleaned up class definitions; changed "representation(...)" to
## "slots=c(...)" and included " contains="VIRTUAL" " (for virtual
## class.
## 05Dec23: * Fixed bug related to decision making when multiple sample stages
## are used. Problem occurs when more stages are specified than needed.
## Thanks to Walter Hoyer for reporting this.
##
## Notes:
## For implemented package use
##
## getFromNamespace(paste("calc.",OCtype,sep=""), ns="AcceptanceSampling")
##
## while for testing directly use
##
## get(paste("calc.",OCtype,sep=""))
##
## There are THREE (3) of these instances
## ----------------------------------------------------------------------
## ----------------------------------------------------------------------
## Class definitions
## ----------------------------------------------------------------------
setClass("OC2c", slots=c(n="numeric", ## A vector of sample sizes at each
## stage of sampling
## NOT CUMULATIVE
c="numeric", ## vector of acceptance numbers for
## each stage of sampling. Accept if actual number
## of defectives/defects is <= c
## CUMULATIVE
r="numeric", ## vector of rejection numbers for
## each stage of sampling. Reject if actual number
## of defectives/defects is >= r
## CUMULATIVE
type="character",
paccept="numeric"),
contains="VIRTUAL",
validity=function(object){
if(any(is.na(object@n)) | any(is.na(object@c)) |
any(is.na(object@r)))
return("Missing values in 'n', 'c', or 'r' not allowed")
## Check that n, c and r are of the same length
l <- length(object@n)
if (l != length(object@c) | l != length(object@r))
return("n, c and r must be of same length.")
## Check that the sample sizes make sense
if (any(object@n <= 0))
return("Sample size(s) 'n' must be greater than 0.")
## Check that the acceptance numbers make sense
if (any(object@c < 0) | any(object@c > cumsum(object@n)))
               return("Acceptance number(s) 'c' must be in the range [0,n], where n is the cumulative sample size.")
## Check that the rejection numbers make sense
if (any(object@r < 0) | any(object@r > cumsum(object@n)))
               return("Rejection number(s) 'r' must be in the range [0,n], where n is the cumulative sample size.")
if (any(object@r <= object@c))
return("Rejection number(s) 'r' must be greater than acceptance number(s) 'c'.")
## For double sampling (or more) make sure that acceptance and
## rejection number are non-decreasing and non-increasing, respectively.
if (l > 1) {
if (any(diff(object@c)<0) )
return("'c' must be non-decreasing")
if (any(diff(object@r)<0) )
return("'r' must be non-decreasing")
}
## Check that a decision is made on the last stage, and only on
## the last stage. At the last stage, and only then, should
## the difference between r and c equal to 1. The value of m will
## be NA if a decision cannot be made and TRUE if a decision can be
## made before the last stage
m <- match(1, (object@r-object@c)) < l
if (is.na(m))
return("Decision from last stage cannot be made: r[l] > c[l]+1")
else if (m)
return("Too many stages specified - decision is made before the last stage.")
# if (object@r[l] != object@c[l] + 1)
# return("Decision from last sample cannot be made: r != c+1")
## Otherwise things seem fine.
return(TRUE)
})
setClass("OCbinomial",
slots=c(pd="numeric"),
contains="OC2c",
prototype=prototype("OC2c", type="binomial", pd=seq(0,1,by=0.01)),
validity=function(object){
## Check that the proportion of defectives make sense
if (any(is.na(object@pd)))
return("Missing values in 'pd' not allowed")
if (any(object@pd < 0.) | any(object@pd > 1.) )
return("Proportion defectives must be in the range [0,1]")
})
setClass("OChypergeom",
slots=c(N="numeric", pd="numeric"),
contains="OC2c",
prototype=prototype("OC2c", type="hypergeom", N=100, pd=(0:100)/100),
validity=function(object){
## Check that the population size of of length 1
if (length(object@N) > 1)
return("Length of population size 'N' != 1")
if (is.na(object@N))
return("Missing value in 'N' not allowed")
## Check that the population size is not less than 1
if (object@N < 1.)
return("Population size 'N' must be at least 1")
## Check that the population size is non-negative
if (object@N < sum(object@n))
return("Total sample size must be less than population size 'N'")
## Check that the proportion of defectives make sense
if (any(is.na(object@pd)))
return("Missing value in 'pd' not allowed")
if (any(object@pd < 0.) | any(object@pd > 1) )
return("Proportion defectives 'pd' must be in the range [0,1]")
})
setClass("OCpoisson",
slots=c(pd="numeric"),
contains="OC2c",
prototype=prototype("OC2c", type="poisson",pd=seq(0,1,0.01)),
validity=function(object){
## Check that the proportion of defectives make sense
if (any(is.na(object@pd)))
return("Missing values in 'pd' not allowed")
if (any(object@pd < 0.))
return("Rate of defects 'pd' must be non-negative")
})
## ----------------------------------------------------------------------
## Methods to create new object and calculate P(accept)
## Only OC2c to be exported
## other functions are helpers only
## ----------------------------------------------------------------------
OC2c <- function(n,c,r=if (length(c)<=2) rep(1+c[length(c)], length(c)) else NULL,
type=c("binomial","hypergeom", "poisson"), ...){
## Decide on what 'type' to use
type <- match.arg(type)
OCtype <- paste("OC",type,sep="")
## Create a new object of that type
obj <- new(OCtype, n=n, c=c, r=r, type=type, ...)
## Evaluate the probability of acceptance for this type and given
## pd.
## First get the generic calculation function
## OCtype <- get(paste("calc.",OCtype,sep=""))
OCtype <- getFromNamespace(paste("calc.",OCtype,sep=""),
ns="AcceptanceSampling")
## now, based on the type, decide on what to pass to the function
## Only need to check for existing type since new() would have stuffed up
## if we don't have a class for the type.
if (type =="binomial")
obj@paccept <- OCtype(n=obj@n, c=obj@c, r=obj@r, pd=obj@pd)
if (type =="hypergeom")
obj@paccept <- OCtype(n=obj@n, c=obj@c, r=obj@r, N=obj@N, D=obj@pd*obj@N)
if (type =="poisson")
obj@paccept <- OCtype(n=obj@n, c=obj@c, r=obj@r, pd=obj@pd)
obj
}
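## For example (illustrative numbers only), a double sampling plan with two
## samples of 80 items, cumulative acceptance numbers 2 and 6 and cumulative
## rejection numbers 5 and 7, under the binomial assumption:
## OC2c(n = c(80, 80), c = c(2, 6), r = c(5, 7), type = "binomial")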
calc.OCbinomial <- function(n,c,r,pd)
{
p.acc <- sapply(pd, FUN=calc.OCbinomial.pdi, n=n, c=c, r=r)
p.acc
}
calc.OCbinomial.pdi <- function(pd,n,c,r)
{
## This is really a helper function - it does all the work for each
## value of pd.
k.s <- length(n) ## number of stages in this sampling
prob.acc <- function(x, n, p){
k <- length(x)
k1 <- k-1
prod(dbinom(x[1:k1], n[1:k1], p))*pbinom(x[k], n[k], p)
}
for (k in 1:k.s) {
## For each stage, find out all the possibilities which could
## lead to still not having made a decision and then calculate
## the appropriate probabilities.
if(k==1) {
## Only a single sampling stage to do - this is simple
p.acc <- sapply(pd, FUN=function(el){
pbinom(q=c[1],size=n[1],prob=el)})
## p.acc now exists and can be used in the following stages.
}
else if (k==2) {
## Two sampling stages. Needs to be handled separately from
## more stages due to matrix dimensions
c.s <- c+1 ## Use to calculate limits
r.s <- r-1 ## Use to calculate limits
## The possibilities which lead to a decision to be made at
## the second stage
x <- data.frame(X1=seq(c.s[1], r.s[1], by=1),
X.last=c[2]-seq(c.s[1], r.s[1], by=1))
p.acc <- p.acc + sum(apply(x, 1, FUN=prob.acc, n=n, p=pd))
}
else {
      ## More than two sampling stages: use expand.grid to enumerate all
      ## cumulative defect counts at the earlier stages that fall strictly
      ## between the acceptance and rejection limits (i.e. sampling continues),
      ## convert them to per-stage counts, and sum the probability of
      ## acceptance at the current stage over all of these paths.
c.s <- c+1 ## Use to calculate limits
r.s <- r-1 ## Use to calculate limits
expand.call <- "expand.grid(c.s[k-1]:r.s[k-1]"
for(i in 2:(k-1)){
expand.call <- paste(expand.call,paste("c.s[k-",i,"]:r.s[k-",i,"]",sep=""),sep=",")
}
expand.call <- paste(expand.call,")",sep="")
x <- eval(parse(text=expand.call)[[1]])
x <- x[,(k-1):1]
names(x) <- paste("X",1:(k-1),sep="")
for(i in ncol(x):2){
x[,i] <- x[,i]-x[,i-1]
}
x <- cbind(x, X.last=c[k] - rowSums(x[,1:(k-1)]))
p.acc <- p.acc + sum(apply(x, 1, FUN=prob.acc, n=n, p=pd))
}
}
return(p.acc)
}
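## Illustrative check of the enumeration above (hypothetical plan): for the
## double plan n = c(80, 80), c = c(2, 6), r = c(5, 7),
## calc.OCbinomial(n = c(80, 80), c = c(2, 6), r = c(5, 7), pd = 0.03)
## combines the stage-1 acceptance probability with the stage-2 paths that
## start from 3 or 4 defectives in the first sample.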
calc.OChypergeom <- function(n,c,r,N,D) {
## Check that N and D are integer values. Issues warning if not.
## Check preformed here rather than class validation for backwards
## compatibility.
## Use is.wholenumber function from R help file for "integer"
is.wholenumber <- function(x, tol = .Machine$double.eps^0.5){
abs(x - round(x)) < tol
}
if(!all(is.wholenumber(N), is.wholenumber(D))){
warning("N and D (or N*pd) should be integers.")
}
p.acc <- sapply(D, FUN=calc.OChypergeom.pdi, n=n, c=c, r=r, N=N)
p.acc
}
## phyper(q=0,m=5,n=100-5,k=13) +
## dhyper(x=1,m=5,n=100-5,k=13)*phyper(q=0,m=4,n=100-13-4,k=13)
calc.OChypergeom.pdi <- function(D,n,c,r,N)
{
## This is really a helper function - it does all the work for each
## value of pd.
k.s <- length(n) ## number of stages in this sampling
prob.acc <- function(x, n, N, D){
k <- length(x) ## Number of sampling stages
k1 <- k-1
## Total number of defects and total sample size taken so far.
## Note that 0 is prepended to indicate that at stage 1, zero
## defects have been found.
x.cum <- cumsum(x)
n.cum <- cumsum(n)
N.cum <- N-c(0,n.cum[1:k1])
D.cum <- D-c(0,x.cum[1:k1])
prod(dhyper(x=x[1:k1], m=pmax(D.cum[1:k1],0),
n=N.cum[1:k1]-pmax(D.cum[1:k1],0), k=n[1:k1]))*
phyper(q=x[k], m=pmax(D.cum[k],0), n=N.cum[k]-pmax(D.cum[k],0), k=n[k])
}
for (k in 1:k.s) {
## For each stage, find out all the possibilities which could
## lead to still not having made a decision and then calculate
## the appropriate probabilities.
if(k==1) {
## Only a single sampling stage to do - this is simple
p.acc <- sapply(D, FUN=function(el){
phyper(q=c[1], m=el, n=N-el, k=n[1])})
## p.acc now exists and can be used in the following stages.
}
else if (k==2) {
## Two sampling stages. Needs to be handled separately from
## more stages due to matrix dimensions
c.s <- c+1 ## Use to calculate limits
r.s <- r-1 ## Use to calculate limits
## The possibilities which lead to a decision to be made at
## the second stage
x <- data.frame(X1=seq(c.s[1], r.s[1], by=1),
X.last=c[2]-seq(c.s[1], r.s[1], by=1))
p.acc <- p.acc + sum(apply(x, 1, FUN=prob.acc, n=n, N=N, D=D))
}
else {
## More than two sampling stages.
## Things are more tricky.
c.s <- c+1 ## Use to calculate limits
r.s <- r-1 ## Use to calculate limits
expand.call <- "expand.grid(c.s[k-1]:r.s[k-1]"
for(i in 2:(k-1)){
expand.call <- paste(expand.call,paste("c.s[k-",i,"]:r.s[k-",i,"]",sep=""),sep=",")
}
expand.call <- paste(expand.call,")",sep="")
x <- eval(parse(text=expand.call)[[1]])
x <- x[,(k-1):1]
names(x) <- paste("X",1:(k-1),sep="")
for(i in ncol(x):2){
x[,i] <- x[,i]-x[,i-1]
}
x <- cbind(x, X.last=c[k] - rowSums(x[,1:(k-1)]))
p.acc <- p.acc + sum(apply(x, 1, FUN=prob.acc, n=n, N=N, D=D))
}
}
return(p.acc)
}
calc.OCpoisson <- function(n,c,r,pd)
{
p.acc <- sapply(pd, FUN=calc.OCpoisson.pdi, n=n, c=c, r=r)
p.acc
}
## ppois(q=c, lambda=el*n) el=pd.
calc.OCpoisson.pdi <- function(pd,n,c,r)
{
## This is really a helper function - it does all the work for each
## value of pd.
k.s <- length(n) ## number of stages in this sampling
prob.acc <- function(x, n, p){
k <- length(x)
k1 <- k-1
prod(dpois(x[1:k1], n[1:k1]*p))*ppois(x[k], n[k]*p)
}
for (k in 1:k.s) {
## For each stage, find out all the possibilities which could
## lead to still not having made a decision and then calculate
## the appropriate probabilities.
if(k==1) {
## Only a single sampling stage to do - this is simple
p.acc <- sapply(pd, FUN=function(el){
ppois(q=c[1],lambda=n[1]*el)})
## p.acc now exists and can be used in the following stages.
}
else if (k==2) {
## Two sampling stages. Needs to be handled separately from
## more stages due to matrix dimensions
c.s <- c+1 ## Use to calculate limits
r.s <- r-1 ## Use to calculate limits
## The possibilities which lead to a decision to be made at
## the second stage
x <- data.frame(X1=seq(c.s[1], r.s[1], by=1),
X.last=c[2]-seq(c.s[1], r.s[1], by=1))
p.acc <- p.acc + sum(apply(x, 1, FUN=prob.acc, n=n, p=pd))
}
else {
## More than two sampling stages.
## Things are more tricky.
c.s <- c+1 ## Use to calculate limits
r.s <- r-1 ## Use to calculate limits
expand.call <- "expand.grid(c.s[k-1]:r.s[k-1]"
for(i in 2:(k-1)){
expand.call <- paste(expand.call,paste("c.s[k-",i,"]:r.s[k-",i,"]",sep=""),sep=",")
}
expand.call <- paste(expand.call,")",sep="")
x <- eval(parse(text=expand.call)[[1]])
x <- x[,(k-1):1]
names(x) <- paste("X",1:(k-1),sep="")
for(i in ncol(x):2){
x[,i] <- x[,i]-x[,i-1]
}
x <- cbind(x, X.last=c[k] - rowSums(x[,1:(k-1)]))
p.acc <- p.acc + sum(apply(x, 1, FUN=prob.acc, n=n, p=pd))
}
}
return(p.acc)
}
## calc.OCpoisson <- function(n,c,r,pd)
## {
## ## n needs to be cumulative since c and r are specified that way too.
## n <- cumsum(n)
## ## Get a list with a vector for each pd.
## ## Length of vector equals number of samples, e.g. double = length 2.
## ## The rate of defects is given per item. Need to convert to
## ## rate per sample size (multiply by n)
## p.accept <- lapply(pd, FUN=function(el) ppois(q=c, lambda=el*n) )
## p.unsure <- lapply(pd, FUN=function(el) {
## ppois(q=(r-1), lambda=el*n) - ppois(q=c, lambda=el*n)})
## ## Now combine the sampling stages via helper function
## pa <- mapply(FUN=calc.paccept, p.accept=p.accept, p.unsure=p.unsure)
## pa
## }
## ----------------------------------------------------------------------
## Printing methods and functions
## ----------------------------------------------------------------------
OC2c.show.default <-
function(object){
if(length(object@n)==0){
x <- matrix(rep(NA,3), ncol=1)
}
else
x <- rbind(object@n, object@c, object@r)
dimnames(x) <- list(c("Sample size(s)", "Acc. Number(s)",
"Rej. Number(s)"),
paste("Sample", 1:ncol(x)))
show(x)
}
OC2c.show.prob <-
function(object) {
if (object@type=="binomial") {
x <- cbind(object@pd, object@paccept)
colnames(x) <- c("Prop. defective","P(accept)")
}
else if (object@type=="hypergeom"){
x <- cbind(object@pd*object@N, object@pd, object@paccept)
colnames(x) <- c("Pop. Defectives", "Pop. Prop. defective","P(accept)")
}
else if (object@type=="poisson"){
x <- cbind(object@pd, object@paccept)
colnames(x) <- c("Rate of defects","P(accept)")
}
else
stop("No full print method defined for this type")
rownames(x) <- rep("", length(object@paccept))
show(x)
}
setMethod("show", "OC2c",
function(object){
cat(paste("Acceptance Sampling Plan (",object@type,")\n\n",sep=""))
OC2c.show.default(object)
})
setMethod("show", "OChypergeom",
function(object){
cat(paste("Acceptance Sampling Plan (",
object@type," with N=",object@N,")\n\n",sep=""))
OC2c.show.default(object)
})
setMethod("summary", "OC2c",
function(object, full=FALSE){
cat(paste("Acceptance Sampling Plan (",object@type,")\n\n",sep=""))
OC2c.show.default(object)
if (full){
cat("\nDetailed acceptance probabilities:\n\n")
OC2c.show.prob(object)
}
})
setMethod("summary", "OChypergeom",
function(object, full=FALSE){
cat(paste("Acceptance Sampling Plan (",
object@type," with N=",object@N,")\n\n",sep=""))
OC2c.show.default(object)
if (full){
cat("\nDetailed acceptance probabilities:\n\n")
OC2c.show.prob(object)
}
})
## ----------------------------------------------------------------------
## Plotting methods
## ----------------------------------------------------------------------
setMethod("plot", signature(x="OCbinomial", y="missing"),
function(x, y, type="o", ylim=c(0,1),...){
plot(x@pd, x@paccept, type=type,
xlab="Proportion defective", ylab="P(accept)",
ylim=ylim, ...)
})
setMethod("plot", signature(x="numeric", y="OCbinomial"),
function(x, y, type="o", ylim=c(0,1),...){
plot(x, y@paccept, type=type,
ylab="P(accept)", ylim=ylim, ...)
})
setMethod("plot", signature(x="OChypergeom", y="missing"),
function(x, type="p", ylim=c(0,1), axis=c("pd","D","both"), ...){
xs <- match.arg(axis)
if (xs=="pd")
plot(x@pd, x@paccept, type=type,
xlab=paste("Proportion of population defectives (N=",x@N,")",sep=""),
ylab="P(accept)", ylim=ylim, ...)
else if (xs=="D")
plot(x@pd*x@N, x@paccept, type=type,
xlab=paste("Population defectives, D (N=",x@N,")",sep=""),
ylab="P(accept)", ylim=ylim, ...)
else if (xs=="both") {
plot(x@pd, x@paccept, type=type,
xlab=paste("Proportion of population defectives, (N=",x@N,")",sep=""),
ylab="P(accept)", ylim=ylim, mar=c(5,4,5,2)+0.1,...)
ax <- axis(1)
axis(3, at=ax, labels=ax*x@N)
mtext(paste("Population defectives, D (N=",x@N,")",sep=""),
side=3, line=3)
}
})
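## Illustrative sketch of the axis = "both" option above: the bottom axis shows
## the proportion of population defectives and the top axis the corresponding
## number of population defectives D = pd * N. The plan below mirrors the style
## of the package vignette but the numbers are arbitrary; wrapped in `if (FALSE)`
## so it never runs when the package is loaded.
if (FALSE) {
  xh <- OC2c(5, 1, type = "h", N = 50, pd = (0:50)/50)
  plot(xh, axis = "both")
}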
setMethod("plot", signature(x="numeric", y="OChypergeom"),
function(x, y, type="p", ylim=c(0,1), ...){
plot(x, y@paccept, type=type,
ylab="P(accept)", ylim=ylim, ...)
})
setMethod("plot", signature(x="OCpoisson", y="missing"),
function(x, y, type="o", ylim=c(0,1),...){
plot(x@pd, x@paccept, type=type,
xlab="Rate of defects", ylab="P(accept)",
ylim=ylim, ...)
})
setMethod("plot", signature(x="numeric", y="OCpoisson"),
function(x, y, type="o", ylim=c(0,1),...){
plot(x, y@paccept, type=type,
ylab="P(accept)", ylim=ylim, ...)
})
## ----------------------------------------------------------------------
## Methods to evaluation risk points
## All these functions are helpers only and should not be exported
## "assess" methods are exported
## ----------------------------------------------------------------------
assess.OC2c <-
function(object, PRP, CRP){
## Purpose: This is the function that does the work.
## Evaluate whether a particular sampling plan can meet
## specified producer and/or consumer risk points
## ----------------------------------------------------------------------
## Arguments:
## object: An object of class OC2c
## PRP : Producer risk point in the form c(pdefect, paccept)
## CRP : Consumer risk point in the form c(pdefect, paccept)
## print : Print the result
## ----------------------------------------------------------------------
## Author: Andreas Kiermeier, Date: 16 May 2007, 10:19
planOK <- TRUE
## Check that what we are given is OK
if (missing(PRP))
PRP <- rep(NA,3)
else if (!missing(PRP)){
if( !check.quality(PRP[1], type=object@type) |
!check.paccept(PRP[2]) )
stop("Quality and/or desired P(accept) out of bounds")
## Get the appropriate function for the distribution and
## calculate the P(accept)
## calc.pa <- get(paste("calc.OC",object@type,sep=""))
calc.pa <- getFromNamespace(paste("calc.OC",object@type,sep=""),
ns="AcceptanceSampling")
pa <- switch(object@type,
binomial=calc.pa(object@n, object@c, object@r, PRP[1]),
hypergeom=calc.pa(object@n, object@c, object@r, object@N, PRP[1]*object@N),
poisson=calc.pa(object@n, object@c, object@r, PRP[1]))
PRP <- c(PRP, pa)
## Check that the plan meets the desired point
## For PRP have to have P(accept) greater than desired prob.
if (pa >= PRP[2])
planOK <- TRUE
else
planOK <- FALSE
}
if (missing(CRP))
CRP <- rep(NA,3)
else if (!missing(CRP)){
if( !check.quality(CRP[1], type=object@type) |
!check.paccept(CRP[2]) )
stop("Quality and/or desired P(accept) out of bound")
## Get the appropriate function for the distribution and
## calculate the P(accept)
## calc.pa <- get(paste("calc.OC",object@type,sep=""))
calc.pa <- getFromNamespace(paste("calc.OC",object@type,sep=""),
ns="AcceptanceSampling")
pa <- switch(object@type,
binomial=calc.pa(object@n, object@c, object@r, CRP[1]),
hypergeom=calc.pa(object@n, object@c, object@r, object@N, CRP[1]*object@N),
poisson=calc.pa(object@n, object@c, object@r, CRP[1]))
CRP <- c(CRP, pa)
## Check that the plan meets the desired point
## For CRP have to have P(accept) less than desired prob.
if (pa <= CRP[2])
planOK <- planOK & TRUE
else
planOK <- planOK & FALSE
}
return(list(OK=planOK, PRP=PRP, CRP=CRP))
}
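## Illustrative sketch of how this helper is reached through the exported
## `assess` method: the plan meets the producer risk point when P(accept) at the
## PRP quality is at least the requested probability, and meets the consumer
## risk point when P(accept) at the CRP quality is at most the requested
## probability. The plan and risk points below are arbitrary examples; wrapped
## in `if (FALSE)` so it never runs when the package is loaded.
if (FALSE) {
  plan <- OC2c(20, 0)
  res <- assess(plan, PRP = c(0.05, 0.95), CRP = c(0.15, 0.075))
  res$OK   # TRUE only if both risk points are met
}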
setMethod("assess", signature(object="OC2c"),
function(object, PRP, CRP, print)
{
## Purpose: Evaluate whether a particular sampling plan can meet
## specified producer and/or consumer risk points
## ----------------------------------------------------------------------
## Arguments:
## object: An object of class OC2c
## PRP : Producer risk point in the form c(pdefect, paccept)
## CRP : Consumer risk point in the form c(pdefect, paccept)
## print : Print the result
## ----------------------------------------------------------------------
## Author: Andreas Kiermeier, Date: 16 May 2007, 10:19
if(!hasArg(PRP) & !hasArg(CRP))
stop("At least one risk point, PRP or CRP, must be specified")
else if(CRP[1] <= PRP[1])
stop("Consumer Risk Point quality must be greater than Producer Risk Point quality")
plan <- assess.OC2c(object, PRP, CRP)
if (print) {
cat(paste("Acceptance Sampling Plan (",object@type,")\n\n",sep=""))
OC2c.show.default(object)
cat(paste("\nPlan", ifelse(plan$OK, "CAN","CANNOT"),
"meet desired risk point(s):\n\n"))
## Both PRP and CRP
if(hasArg(PRP) & hasArg(CRP))
RP <- cbind(PRP=plan$PRP, CRP=plan$CRP)
## Only PRP
else if (hasArg(PRP))
RP <- cbind(PRP=plan$PRP)
## Only CRP
else if (hasArg(CRP))
RP <- cbind(CRP=plan$CRP)
rownames(RP) <- c(" Quality", " RP P(accept)", "Plan P(accept)")
show(t(RP))
}
if(object@type=="hypergeom")
return(invisible(c(list(n=object@n, c=object@c, r=object@r,
                                    N=object@N), plan)))
else
return(invisible(c(list(n=object@n, c=object@c, r=object@r), plan)))
})
### Local Variables:
### comment-start: "## "
### fill-column: 80
### End:
| /scratch/gouwar.j/cran-all/cranData/AcceptanceSampling/R/code_twoclass.R |
## code_var.R ---
##
## Author: Andreas Kiermeier
##
## Created: 08 Mar 2007
##
## Purpose: A package to provide functionality for creating and
## evaluating acceptance sampling plans.
##
## Changes:
## 16Aug07: * Added check in OC2c validation code to ensure that sample sizes
## are greater than zero.
## * Added virtual class OCvar for variables sampling plans (single
## only)
## * Added actual class for variables sampling plans - Normal
## 20Aug07: * Added function {find.k} to find constant k for given sample size in
## normal variables sampling plans
## * Added function {find.plan} to find smallest sampling plan
## for given Producer and Consumer Risk Points
## 15Jan19: * Change class definitions to avoid errors in latest R development
## build.
## 21Jan19: * Cleaned up class definitions; changed "representation(...)" to
## "slots=c(...)" and included " contains="VIRTUAL" " (for virtual
## class.
## ----------------------------------------------------------------------
## --------------------------------------------------------------------------------
## Variables Sampling Plans
## --------------------------------------------------------------------------------
setClass("OCvar",
slots=c(n="numeric", ## A vector of sample sizes at each
                              ## stage of sampling - NOT cumulative sample size
k="numeric", ## vector used to determine acceptance
type="character",
paccept="numeric"),
contains="VIRTUAL",
validity=function(object){
if(any(is.na(object@n)) | any(is.na(object@k)) )
return("Missing values in 'n' or 'k'")
## Check that n and k are of length 1
if (length(object@n) != 1 | length(object@k) != 1)
return("n and k must be of length 1.")
## Check that the sample size makes sense
if (any(object@n <= 0))
return("Sample size 'n' must be greater than 0.")
## Check that the value for k makes sense
if (any(object@k <= 0))
return("Cut-off 'k' must be greater than 0.")
## Otherwise things seem fine.
return(TRUE)
})
setClass("OCnormal",
slots=c(pd="numeric", s.type="character"),
contains="OCvar",
prototype=prototype("OCvar",type="normal",pd=seq(0,1,by=0.01),s.type="known"),
validity=function(object){
## ## Check that the standard deviation is positive
## if (length(object@s) > 1)
## return("Standard deviation 's' must be of length 1.")
## ## Check that the standard deviation is positive
## if (object@s <= 0.)
## return("Standard deviation 's' must be greater than 0.")
            ## Check that the s.type is either 'known' or 'unknown'
if ([email protected] != "known" & [email protected] != "unknown")
return("s.type must be either 'known' or 'unknown'.")
## Check that the proportion of defectives make sense
if (any(is.na(object@pd)))
return("Missing values in 'pd' not allowed")
if (any(object@pd < 0.) | any(object@pd > 1.) )
return("Proportion defectives must be in the range [0,1]")
})
## ----------------------------------------------------------------------
## Methods to create new object and calculate P(accept)
## other functions are helpers only
## ----------------------------------------------------------------------
OCvar <- function(n, k, type=c("normal"), ...){
## Decide on what 'type' to use
type <- match.arg(type)
OCtype <- paste("OC",type,sep="")
## Create a new object of that type
obj <- new(OCtype, n=n, k=k, type=type, ...)
## Evaluate the probability of acceptance for this type and given
## pd.
## First get the generic calculation function
## use 'get' for development and 'getFromNamespace' for the actual
## package implementation
OCtype <- getFromNamespace(paste("calc.",OCtype,sep=""),
ns="AcceptanceSampling")
## OCtype <- get(paste("calc.",OCtype,sep=""))
## now, based on the type, decide on what to pass to the function
## Only need to check for existing type since new() would have stuffed up
## if we don't have a class for the type.
if (type =="normal")
obj@paccept <- OCtype(n=obj@n, k=obj@k, pd=obj@pd,
[email protected])
obj
}
calc.OCnormal <- function(n,k,pd,s.type)
{
## Are we dealing with the standard normal?
if (s.type=="known"){
pa <- 1-pnorm( (k+qnorm(pd))*sqrt(n))
}
if (s.type=="unknown"){
pa <- 1- pt(k*sqrt(n), df=n-1, ncp=-qnorm(pd)*sqrt(n))
}
return(pa)
}
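## Illustrative sketch of the closed form used above: with known sigma,
## P(accept) at proportion defective pd is 1 - pnorm((k + qnorm(pd)) * sqrt(n));
## with unknown sigma the non-central t distribution takes the place of the
## normal. The plan below (n = 26, k = 1.322271) is only an example; wrapped in
## `if (FALSE)` so it never runs when the package is loaded.
if (FALSE) {
  plan <- OCvar(n = 26, k = 1.322271, s.type = "known", pd = seq(0, 0.2, by = 0.01))
  ## The paccept slot agrees with the closed-form expression:
  all.equal(plan@paccept,
            1 - pnorm((1.322271 + qnorm(plan@pd)) * sqrt(26)))
}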
## ----------------------------------------------------------------------
## Printing methods and functions
## ----------------------------------------------------------------------
OCvar.show.default <-
function(object){
if(length(object@n)==0){
    x <- matrix(rep(NA,2), ncol=1)
}
else
x <- rbind(object@n, object@k)
dimnames(x) <- list(c("Sample size",
"Constant k"),
paste("Sample", 1:ncol(x)))
show(x)
}
OCvar.show.prob <-
function(object) {
if (object@type=="normal") {
x <- cbind(object@pd, object@paccept)
colnames(x) <- c("Prop. defective","P(accept)")
}
else
stop("No full print method defined for this type")
rownames(x) <- rep("", length(object@paccept))
show(x)
}
setMethod("show", "OCvar",
function(object){
cat(paste("Acceptance Sampling Plan (",object@type,")\n",sep=""))
cat(paste("Standard deviation assumed to be ",[email protected],"\n\n",sep=""))
OCvar.show.default(object)
})
setMethod("summary", "OCvar",
function(object, full=FALSE){
show(object)
if (full){
cat("\nDetailed acceptance probabilities:\n\n")
OCvar.show.prob(object)
}
})
## ----------------------------------------------------------------------
## Plotting methods
## ----------------------------------------------------------------------
setMethod("plot", signature(x="OCnormal", y="missing"),
function(x, y, type="o", ylim=c(0,1),...){
plot(x@pd, x@paccept, type=type,
xlab="Proportion defective", ylab="P(accept)",
ylim=ylim, ...)
})
setMethod("plot", signature(x="numeric", y="OCnormal"),
function(x, y, type="o", ylim=c(0,1),...){
plot(x, y@paccept, type=type,
ylab="P(accept)", ylim=ylim, ...)
})
## ----------------------------------------------------------------------
## Methods to evaluation risk points
## All these functions are helpers only and should not be exported
## "assess" methods are exported
## ----------------------------------------------------------------------
assess.OCvar <-
function(object, PRP, CRP){
## Purpose: This is the function that does the work.
## Evaluate whether a particular sampling plan can meet
## specified producer and/or consumer risk points
## ----------------------------------------------------------------------
## Arguments:
## object: An object of class OCvar
## PRP : Producer risk point in the form c(pdefect, paccept)
## CRP : Consumer risk point in the form c(pdefect, paccept)
## print : Print the result
## ----------------------------------------------------------------------
## Author: Andreas Kiermeier, Date: 16 May 2007, 10:19
planOK <- TRUE
## Check that what we are given is OK
if (missing(PRP))
PRP <- rep(NA,3)
else if (!missing(PRP)){
if( !check.quality(PRP[1], type=object@type) |
!check.paccept(PRP[2]) )
stop("Quality and/or desired P(accept) out of bounds")
## Get the appropriate function for the distribution and
## calculate the P(accept)
calc.pa <- getFromNamespace(paste("calc.OC",object@type,sep=""),
ns="AcceptanceSampling")
## calc.pa <- get(paste("calc.OC",object@type,sep=""))
pa <- switch(object@type,
normal=calc.pa(n=object@n, k=object@k, [email protected],
pd=PRP[1]))
PRP <- c(PRP, pa)
## Check that the plan meets the desired point
## For PRP have to have P(accept) greater than desired prob.
if (pa >= PRP[2])
planOK <- TRUE
else
planOK <- FALSE
}
if (missing(CRP))
CRP <- rep(NA,3)
else if (!missing(CRP)){
if( !check.quality(CRP[1], type=object@type) |
!check.paccept(CRP[2]) )
stop("Quality and/or desired P(accept) out of bound")
## Get the appropriate function for the distribution and
## calculate the P(accept)
calc.pa <- getFromNamespace(paste("calc.OC",object@type,sep=""),
ns="AcceptanceSampling")
## calc.pa <- get(paste("calc.OC",object@type,sep=""))
pa <- switch(object@type,
normal=calc.pa(n=object@n, k=object@k, [email protected],
pd=CRP[1]))
CRP <- c(CRP, pa)
## Check that the plan meets the desired point
## For CRP have to have P(accept) less than desired prob.
if (pa <= CRP[2])
planOK <- planOK & TRUE
else
planOK <- planOK & FALSE
}
return(list(OK=planOK, PRP=PRP, CRP=CRP))
}
setMethod("assess", signature(object="OCvar"),
function(object, PRP, CRP, print)
{
## Purpose: Evaluate whether a particular sampling plan can meet
## specified producer and/or consumer risk points
## ----------------------------------------------------------------------
## Arguments:
## object: An object of class OCvar
## PRP : Producer risk point in the form c(pdefect, paccept)
## CRP : Consumer risk point in the form c(pdefect, paccept)
## print : Print the result
## ----------------------------------------------------------------------
## Author: Andreas Kiermeier, Date: 21 August 2007
if(!hasArg(PRP) & !hasArg(CRP))
stop("At least one risk point, PRP or CRP, must be specified")
else if(CRP[1] <= PRP[1])
stop("Consumer Risk Point quality must be greater than Producer Risk Point quality")
plan <- assess.OCvar(object, PRP, CRP)
if (print) {
show(object)
cat(paste("\nPlan", ifelse(plan$OK, "CAN","CANNOT"),
"meet desired risk point(s):\n\n"))
## Both PRP and CRP
if(hasArg(PRP) & hasArg(CRP))
RP <- cbind(PRP=plan$PRP, CRP=plan$CRP)
## Only PRP
else if (hasArg(PRP))
RP <- cbind(PRP=plan$PRP)
## Only CRP
else if (hasArg(CRP))
RP <- cbind(CRP=plan$CRP)
rownames(RP) <- c(" Quality", " RP P(accept)", "Plan P(accept)")
show(t(RP))
}
return(invisible(c(list(n=object@n, k=object@k, [email protected]), plan)))
})
### Local Variables:
### comment-start: "## "
### fill-column: 80
### End:
| /scratch/gouwar.j/cran-all/cranData/AcceptanceSampling/R/code_var.R |
### R code from vignette source 'acceptance_sampling_manual.Rnw'
###################################################
### code chunk number 1: acceptance_sampling_manual.Rnw:605-606
###################################################
library(AcceptanceSampling)
###################################################
### code chunk number 2: acceptance_sampling_manual.Rnw:622-624
###################################################
x <- OC2c(10, 3)
x
###################################################
### code chunk number 3: acceptance_sampling_manual.Rnw:629-631
###################################################
plot(x)
grid(lty="solid")
###################################################
### code chunk number 4: acceptance_sampling_manual.Rnw:653-660
###################################################
xb <- OC2c(5, 1, type="b") ## Binomial
xh <- OC2c(5, 1, type="h", N=50, pd=(0:50)/50) ## Hypergeometric
xp <- OC2c(5, 1, type="p") ## Poisson
plot(xb, type="l", xlim=c(0, 0.2), ylim=c(0.6, 1))
grid(lty="solid")
points(xh@pd, xh@paccept, col="green")
lines(xp@pd, xp@paccept, col="red")
###################################################
### code chunk number 5: acceptance_sampling_manual.Rnw:686-691
###################################################
x.mean <- seq(248, 255, 0.05)
x.pd <- pnorm(250, mean=x.mean, sd=1.5)
x.plan <- OC2c(10, 1, pd=x.pd)
plot(x.mean, x.plan, xlab="Mean weight")
grid(lty="solid")
###################################################
### code chunk number 6: acceptance_sampling_manual.Rnw:707-709
###################################################
x <- OC2c(10,3, pd=seq(0,0.1,0.01))
summary(x, full=TRUE)
###################################################
### code chunk number 7: acceptance_sampling_manual.Rnw:725-726
###################################################
assess(OC2c(20,0), PRP=c(0.05, 0.95), CRP=c(0.15, 0.075))
###################################################
### code chunk number 8: acceptance_sampling_manual.Rnw:741-742
###################################################
find.plan(PRP=c(0.05, 0.95), CRP=c(0.15, 0.075), type="binom")
###################################################
### code chunk number 9: acceptance_sampling_manual.Rnw:778-782
###################################################
x <- OC2c(n=c(8,8), c=c(0,1), r=c(2,2))
x
plot(x)
grid(lty="solid")
###################################################
### code chunk number 10: acceptance_sampling_manual.Rnw:817-823
###################################################
x.mean <- seq(248, 255, 0.05)
x.pd <- pnorm(250, mean=x.mean, sd=1.5)
find.plan(PRP=c(0.05, 0.95), CRP=c(0.15, 0.075), type="normal", s.type="known")
x.plan <- OCvar(n=26, k=1.322271, pd=x.pd)
plot(x.mean, x.plan, xlab="Mean weight")
grid(lty="solid")
###################################################
### code chunk number 11: acceptance_sampling_manual.Rnw:849-850
###################################################
find.plan(PRP=c(0.05, 0.95), CRP=c(0.15, 0.075), type="normal", s.type="unknown")
###################################################
### code chunk number 12: acceptance_sampling_manual.Rnw:858-859
###################################################
find.plan(PRP=c(0.05, 0.95), CRP=c(0.15, 0.075), type="normal", s.type="known")
###################################################
### code chunk number 13: acceptance_sampling_manual.Rnw:875-882
###################################################
xb <- OC2c(n=80,c=7)
xn1 <- OCvar(n=49, k=1.326538, s.type="unknown")
xn2 <- OCvar(n=26, k=1.322271)
plot(xb, type="l", xlim=c(0,0.3))
grid(lty="solid")
lines(xn1@pd, xn1@paccept, col="green")
lines(xn2@pd, xn2@paccept, col="red")
###################################################
### code chunk number 14: acceptance_sampling_manual.Rnw:896-897
###################################################
xn1 <- OCvar(n=35, k=1.89, s.type="unknown", pd=seq(0,0.2,by=0.01))
###################################################
### code chunk number 15: acceptance_sampling_manual.Rnw:902-903
###################################################
summary(xn1, full=TRUE)
| /scratch/gouwar.j/cran-all/cranData/AcceptanceSampling/inst/doc/acceptance_sampling_manual.R |
#' @import DatabaseConnector
#' @import ParallelLogger
#' @import SqlRender
#' @importFrom utils compareVersion packageVersion read.csv zip write.csv
#' @importFrom stats aggregate cycle end frequency start ts window
#' @importFrom rlang .data
#' @import dplyr
NULL
| /scratch/gouwar.j/cran-all/cranData/Achilles/R/Achilles-package.R |
# @file Achilles
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of Achilles
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Observational Health Data Sciences and Informatics
# @author Martijn Schuemie
# @author Patrick Ryan
# @author Frank DeFalco
# @author Vojtech Huser
# @author Chris Knoll
# @author Ajit Londhe
# @author Taha Abdul-Basser
#' @title
#' achilles
#'
#' @description
#' \code{achilles} creates descriptive statistics summary for an entire OMOP CDM instance.
#'
#' @details
#' \code{achilles} creates descriptive statistics summary for an entire OMOP CDM instance.
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param cdmDatabaseSchema Fully qualified name of database schema that contains OMOP CDM
#' schema. On SQL Server, this should specify both the database and
#' the schema, so for example, on SQL Server, 'cdm_instance.dbo'.
#' @param resultsDatabaseSchema Fully qualified name of database schema that we can write final
#' results to. Default is cdmDatabaseSchema. On SQL Server, this
#' should specify both the database and the schema, so for example,
#' on SQL Server, 'cdm_results.dbo'.
#' @param scratchDatabaseSchema Fully qualified name of the database schema that will store all of
#' the intermediate scratch tables, so for example, on SQL Server,
#' 'cdm_scratch.dbo'. Must be accessible to/from the cdmDatabaseSchema
#' and the resultsDatabaseSchema. Default is resultsDatabaseSchema.
#' Making this "#" will run Achilles in single-threaded mode and use
#' temporary tables instead of permanent tables.
#' @param vocabDatabaseSchema String name of database schema that contains OMOP Vocabulary.
#' Default is cdmDatabaseSchema. On SQL Server, this should specify
#' both the database and the schema, so for example 'results.dbo'.
#' @param tempEmulationSchema Formerly oracleTempSchema. For databases like Oracle where you
#' must specify the name of the database schema where you want all
#' temporary tables to be managed. Requires create/insert permissions
#' to this database.
#' @param sourceName String name of the data source. If blank, the CDM_SOURCE table
#' will be queried to try to obtain this.
#' @param analysisIds (OPTIONAL) A vector containing the set of Achilles analysisIds for
#' which results will be generated. If not specified, all analyses
#' will be executed. Use \code{\link{getAnalysisDetails}} to get a
#' list of all Achilles analyses and their Ids.
#' @param createTable If true, new results tables will be created in the results schema.
#' If not, the tables are assumed to already exist, and analysis
#' results will be inserted (slower on MPP).
#' @param smallCellCount To avoid patient identification, cells with small counts (<=
#' smallCellCount) are deleted. Set to 0 for complete summary without
#' small cell count restrictions.
#' @param cdmVersion Define the OMOP CDM version used: currently supports v5 and above.
#' Use major release number or minor number only (e.g. 5, 5.3)
#' @param createIndices Boolean to determine if indices should be created on the resulting
#' Achilles tables. Default = TRUE
#' @param numThreads (OPTIONAL, multi-threaded mode) The number of threads to use to run
#' Achilles in parallel. Default is 1 thread.
#' @param tempAchillesPrefix (OPTIONAL, multi-threaded mode) The prefix to use for the scratch
#' Achilles analyses tables. Default is "tmpach"
#' @param dropScratchTables (OPTIONAL, multi-threaded mode) TRUE = drop the scratch tables (may
#' take time depending on dbms), FALSE = leave them in place for later
#' removal.
#' @param sqlOnly Boolean to determine if Achilles should be fully executed. TRUE =
#' just generate SQL files, don't actually run, FALSE = run Achilles
#' @param outputFolder Path to store logs and SQL files
#' @param verboseMode Boolean to determine if the console will show all execution steps.
#' Default = TRUE
#' @param optimizeAtlasCache Boolean to determine if the atlas cache should be optimized.
#' Default = FALSE
#' @param defaultAnalysesOnly Boolean to determine if only default analyses should be run.
#' Including non-default analyses is substantially more resource
#' intensive. Default = TRUE
#' @param updateGivenAnalysesOnly Boolean to determine whether to preserve the results of the
#' analyses NOT specified with the \code{analysisIds} parameter. To
#' update only analyses specified by \code{analysisIds}, set
#' createTable = FALSE and updateGivenAnalysesOnly = TRUE. By default,
#' updateGivenAnalysesOnly = FALSE, to preserve the original behavior
#' of Achilles when supplied \code{analysisIds}.
#' @param excludeAnalysisIds (OPTIONAL) A vector containing the set of Achilles analyses to
#' exclude.
#' @param sqlDialect (OPTIONAL) String to be used when specifying sqlOnly = TRUE and
#' NOT supplying the \code{connectionDetails} parameter.
#' if the \code{connectionDetails} parameter is supplied, \code{sqlDialect}
#' is ignored. If the \code{connectionDetails} parameter is not supplied,
#' \code{sqlDialect} must be supplied to enable \code{SqlRender}
#' to translate properly. \code{sqlDialect} takes the value normally
#' supplied to connectionDetails$dbms. Default = NULL.
#'
#' @returns
#' An object of type \code{achillesResults} containing details for connecting to the database
#' containing the results
#' @examples
#' \dontrun{
#' connectionDetails <- createConnectionDetails(dbms = "sql server", server = "some_server")
#' achillesResults <- achilles(connectionDetails = connectionDetails,
#' cdmDatabaseSchema = "cdm",
#' resultsDatabaseSchema = "results",
#' scratchDatabaseSchema = "scratch",
#' sourceName = "Some Source",
#' cdmVersion = "5.3",
#' numThreads = 10,
#' outputFolder = "output")
#' }
#'
#' @export
achilles <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema = cdmDatabaseSchema,
scratchDatabaseSchema = resultsDatabaseSchema,
vocabDatabaseSchema = cdmDatabaseSchema,
tempEmulationSchema = resultsDatabaseSchema,
sourceName = "",
analysisIds,
createTable = TRUE,
smallCellCount = 5,
cdmVersion = "5",
createIndices = TRUE,
numThreads = 1,
tempAchillesPrefix = "tmpach",
dropScratchTables = TRUE,
sqlOnly = FALSE,
outputFolder = "output",
verboseMode = TRUE,
optimizeAtlasCache = FALSE,
defaultAnalysesOnly = TRUE,
updateGivenAnalysesOnly = FALSE,
excludeAnalysisIds,
sqlDialect = NULL) {
totalStart <- Sys.time()
achillesSql <- c()
# Check if the correct parameters are supplied when running in sqlOnly mode
if (sqlOnly &&
missing(connectionDetails) && is.null(sqlDialect)) {
stop(
"Error: When specifying sqlOnly = TRUE, sqlDialect or connectionDetails must be supplied."
)
}
if (sqlOnly && !missing(connectionDetails)) {
print(
"Running Achilles in SQL ONLY mode. Using connectionDetails, sqlDialect is ignored. Please wait for script generation."
)
}
if (sqlOnly &&
missing(connectionDetails) && !is.null(sqlDialect)) {
connectionDetails <-
DatabaseConnector::createConnectionDetails(dbms = sqlDialect)
print(
"Running Achilles in SQL ONLY mode. Using dialect supplied by sqlDialect. Please wait for script generation."
)
}
if (!dir.exists(outputFolder)) {
dir.create(outputFolder)
}
ParallelLogger::addDefaultFileLogger(file.path(outputFolder, "log_achilles.txt"))
ParallelLogger::addDefaultErrorReportLogger(file.path(outputFolder, "errorReportR.txt"))
on.exit(ParallelLogger::unregisterLogger("DEFAULT_FILE_LOGGER", silent = TRUE))
on.exit(ParallelLogger::unregisterLogger("DEFAULT_ERRORREPORT_LOGGER", silent = TRUE), add = TRUE)
if (verboseMode) {
ParallelLogger::addDefaultConsoleLogger()
on.exit(ParallelLogger::unregisterLogger("DEFAULT_CONSOLE_LOGGER"), add = TRUE)
}
# Try to get CDM Version if not provided
if (!missing(cdmVersion)) {
ParallelLogger::logInfo(paste("CDM Version", cdmVersion, "passed as parameter."))
} else if (missing(cdmVersion) && !sqlOnly) {
cdmVersion <- .getCdmVersion(connectionDetails, cdmDatabaseSchema)
ParallelLogger::logInfo(paste("CDM Version", cdmVersion, "found in cdm_source table."))
}
cdmVersion <- as.character(cdmVersion)
# Check CDM version is valid
if (compareVersion(a = as.character(cdmVersion), b = "5") < 0) {
stop("Error: Invalid CDM Version number. CDM V5 and greater are supported.")
}
# Establish folder paths
if (!dir.exists(outputFolder)) {
dir.create(path = outputFolder, recursive = TRUE)
}
# Get source name if none provided
if (missing(sourceName) & !sqlOnly) {
sourceName <- .getSourceName(connectionDetails, cdmDatabaseSchema)
}
# Obtain analyses to run
analysisDetails <- getAnalysisDetails()
if (!missing(analysisIds)) {
# If specific analysis_ids are given, run only those
analysisDetails <-
analysisDetails[analysisDetails$ANALYSIS_ID %in% analysisIds,]
} else if (defaultAnalysesOnly) {
# If specific analyses are not given, determine whether or not to run only default analyses
analysisDetails <-
analysisDetails[analysisDetails$IS_DEFAULT == 1,]
}
# Remove unwanted analyses that have not already been excluded, if any are specified
if (!missing(excludeAnalysisIds) && any(analysisDetails$ANALYSIS_ID %in% excludeAnalysisIds)) {
analysisDetails <- analysisDetails[-which(analysisDetails$ANALYSIS_ID %in% excludeAnalysisIds),]
}
resultsTables <- list(
list(
detailType = "results",
tablePrefix = tempAchillesPrefix,
schema = read.csv(
file = system.file("csv", "schemas", "schema_achilles_results.csv", package = "Achilles"),
header = TRUE
),
analysisIds = analysisDetails[analysisDetails$DISTRIBUTION <=0, ]$ANALYSIS_ID
),
list(
detailType = "results_dist",
tablePrefix = sprintf("%1s_%2s", tempAchillesPrefix, "dist"),
schema = read.csv(
file = system.file("csv","schemas","schema_achilles_results_dist.csv",package = "Achilles"),
header = TRUE
),
analysisIds = analysisDetails[abs(analysisDetails$DISTRIBUTION) == 1, ]$ANALYSIS_ID
)
)
# Initialize thread and scratchDatabaseSchema settings and verify ParallelLogger installed
schemaDelim <- "."
# Do not connect to a database if running in sqlOnly mode
if (sqlOnly) {
if (.supportsTempTables(connectionDetails) && connectionDetails$dbms != "oracle") {
scratchDatabaseSchema <- "#"
schemaDelim <- "s_"
}
} else {
if (numThreads == 1 || scratchDatabaseSchema == "#") {
numThreads <- 1
if (.supportsTempTables(connectionDetails) && connectionDetails$dbms != "oracle") {
scratchDatabaseSchema <- "#"
schemaDelim <- "s_"
}
ParallelLogger::logInfo("Beginning single-threaded execution")
# first invocation of the connection, to persist throughout to maintain temp tables
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
on.exit(DatabaseConnector::disconnect(connection), add = TRUE)
} else if (!requireNamespace("ParallelLogger", quietly = TRUE)) {
stop("Multi-threading support requires package 'ParallelLogger'.",
" Consider running single-threaded by setting",
" `numThreads = 1` and `scratchDatabaseSchema = '#'`.", " You may install it using devtools with the following code:",
"\n devtools::install_github('OHDSI/ParallelLogger')", "\n\nAlternately, you might want to install ALL suggested packages using:",
"\n devtools::install_github('OHDSI/Achilles', dependencies = TRUE)", call. = FALSE)
} else {
ParallelLogger::logInfo("Beginning multi-threaded execution")
}
}
# Determine whether or not to create Achilles support tables
if (!createTable && missing(analysisIds)) {
createTable <- TRUE
preserveResults <- FALSE
} else if (!createTable && !missing(analysisIds) && !updateGivenAnalysesOnly) {
createTable <- TRUE
preserveResults <- FALSE
} else if (!createTable && !missing(analysisIds) && updateGivenAnalysesOnly) {
preserveResults <- TRUE
}
## If not creating support tables, then either remove ALL prior results or only those results for the given analysisIds
if (!sqlOnly) {
if (!createTable && !preserveResults) {
.deleteExistingResults(connectionDetails = connectionDetails,
resultsDatabaseSchema = resultsDatabaseSchema,
analysisDetails = analysisDetails)
} else if (!createTable && preserveResults) {
.deleteGivenAnalyses(connectionDetails = connectionDetails,
resultsDatabaseSchema = resultsDatabaseSchema,
analysisIds = analysisIds)
}
}
# Create and populate the achilles_analysis table (assumes inst/csv/achilles_analysis_details.csv exists)
if (createTable) {
sql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "analyses/achilles_analysis_ddl.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
resultsDatabaseSchema = resultsDatabaseSchema
)
# Populate achilles_analysis without the "distribution" and "distributed_field"
# columns from achilles_analysis_details.csv
analysisDetailsCsv <- Achilles::getAnalysisDetails()
analysisDetailsCsv <- analysisDetailsCsv[, -c(2, 3)]
if (!sqlOnly) {
if (numThreads != 1) {
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
on.exit(DatabaseConnector::disconnect(connection), add = TRUE)
}
# Create empty achilles_analysis
DatabaseConnector::executeSql(
connection = connection,
sql = sql,
errorReportFile = file.path(outputFolder, "achillesErrorCreateAchillesAnalysis.txt")
)
# Populate achilles_analysis with data from achilles_analysis_details.csv from above
DatabaseConnector::insertTable(
connection = connection,
databaseSchema = resultsDatabaseSchema,
tableName = "ACHILLES_ANALYSIS",
data = analysisDetailsCsv,
dropTableIfExists = FALSE,
createTable = FALSE,
tempTable = FALSE
)
if (numThreads != 1) {
DatabaseConnector::disconnect(connection)
}
}
}
# Clean up existing scratch tables
if ((numThreads > 1 || !.supportsTempTables(connectionDetails)) && !sqlOnly) {
# Drop the scratch tables
ParallelLogger::logInfo(sprintf("Dropping scratch Achilles tables from schema %s",
scratchDatabaseSchema))
dropAllScratchTables(connectionDetails = connectionDetails,
scratchDatabaseSchema = scratchDatabaseSchema,
tempAchillesPrefix = tempAchillesPrefix, numThreads = numThreads, tableTypes = c("achilles"),
outputFolder = outputFolder, defaultAnalysesOnly = defaultAnalysesOnly)
ParallelLogger::logInfo(sprintf("Temporary Achilles tables removed from schema %s",
scratchDatabaseSchema))
}
# Generate Main Analyses
mainAnalysisIds <- analysisDetails$ANALYSIS_ID
mainSqls <- lapply(mainAnalysisIds, function(analysisId) {
list(
analysisId = analysisId,
sql = .getAnalysisSql(
analysisId = analysisId,
connectionDetails = connectionDetails,
schemaDelim = schemaDelim,
scratchDatabaseSchema = scratchDatabaseSchema,
cdmDatabaseSchema = cdmDatabaseSchema,
resultsDatabaseSchema = resultsDatabaseSchema,
tempEmulationSchema = tempEmulationSchema,
cdmVersion = cdmVersion,
tempAchillesPrefix = tempAchillesPrefix,
resultsTables = resultsTables,
sourceName = sourceName,
numThreads = numThreads,
outputFolder = outputFolder
)
)
})
achillesSql <- c(achillesSql, lapply(mainSqls, function(s) s$sql))
if (!sqlOnly) {
ParallelLogger::logInfo("Executing multiple queries. This could take a while")
if (numThreads == 1) {
for (mainSql in mainSqls) {
start <- Sys.time()
ParallelLogger::logInfo(sprintf("Analysis %d (%s) -- START",
mainSql$analysisId,
analysisDetails$ANALYSIS_NAME[analysisDetails$ANALYSIS_ID ==
mainSql$analysisId]))
tryCatch({
DatabaseConnector::executeSql(connection = connection,
sql = mainSql$sql,
errorReportFile = file.path(outputFolder,
paste0("achillesError_", mainSql$analysisId, ".txt")))
delta <- Sys.time() - start
ParallelLogger::logInfo(sprintf("[Main Analysis] [COMPLETE] %d (%f %s)",
as.integer(mainSql$analysisId),
delta, attr(delta, "units")))
}, error = function(e) {
ParallelLogger::logError(sprintf("Analysis %d -- ERROR %s", mainSql$analysisId, e))
})
}
} else {
cluster <- ParallelLogger::makeCluster(numberOfThreads = numThreads,
singleThreadToMain = TRUE)
results <- ParallelLogger::clusterApply(cluster = cluster, x = mainSqls, function(mainSql) {
start <- Sys.time()
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = connection), add = TRUE)
ParallelLogger::logInfo(sprintf("[Main Analysis] [START] %d (%s)",
as.integer(mainSql$analysisId),
analysisDetails$ANALYSIS_NAME[analysisDetails$ANALYSIS_ID == mainSql$analysisId]))
tryCatch({
DatabaseConnector::executeSql(connection = connection,
sql = mainSql$sql,
errorReportFile = file.path(outputFolder,
paste0("achillesError_", mainSql$analysisId, ".txt")))
delta <- Sys.time() - start
ParallelLogger::logInfo(sprintf("[Main Analysis] [COMPLETE] %d (%f %s)",
as.integer(mainSql$analysisId),
delta, attr(delta, "units")))
}, error = function(e) {
ParallelLogger::logError(sprintf("[Main Analysis] [ERROR] %d (%s)",
as.integer(mainSql$analysisId),
e))
})
})
ParallelLogger::stopCluster(cluster = cluster)
}
}
# Merge scratch tables into final analysis tables
include <- sapply(resultsTables, function(d) {
any(d$analysisIds %in% analysisDetails$ANALYSIS_ID)
})
resultsTablesToMerge <- resultsTables[include]
mergeSqls <- lapply(resultsTablesToMerge, function(table) {
.mergeAchillesScratchTables(
resultsTable = table,
connectionDetails = connectionDetails,
analysisIds = analysisDetails$ANALYSIS_ID,
createTable = createTable,
schemaDelim = schemaDelim,
scratchDatabaseSchema = scratchDatabaseSchema,
resultsDatabaseSchema = resultsDatabaseSchema,
tempEmulationSchema = tempEmulationSchema,
cdmVersion = cdmVersion,
tempAchillesPrefix = tempAchillesPrefix,
numThreads = numThreads,
smallCellCount = smallCellCount,
outputFolder = outputFolder,
sqlOnly = sqlOnly
)
})
achillesSql <- c(achillesSql, mergeSqls)
if (!sqlOnly) {
ParallelLogger::logInfo("Merging scratch Achilles tables")
if (numThreads == 1) {
tryCatch({
for (sql in mergeSqls) {
DatabaseConnector::executeSql(connection = connection, sql = sql)
}
}, error = function(e) {
ParallelLogger::logError(sprintf("Merging scratch Achilles tables [ERROR] (%s)", e))
})
} else {
cluster <- ParallelLogger::makeCluster(numberOfThreads = numThreads,
singleThreadToMain = TRUE)
tryCatch({
dummy <- ParallelLogger::clusterApply(cluster = cluster, x = mergeSqls, function(sql) {
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = connection), add = TRUE)
DatabaseConnector::executeSql(connection = connection, sql = sql)
})
}, error = function(e) {
ParallelLogger::logError(sprintf("Merging scratch Achilles tables (merging scratch Achilles tables) [ERROR] (%s)",
e))
})
ParallelLogger::stopCluster(cluster = cluster)
}
}
if (!sqlOnly) {
ParallelLogger::logInfo(sprintf("Done. Achilles results can now be found in schema %s",
resultsDatabaseSchema))
}
# Clean up scratch tables - single threaded, drop and disconnect. For multithreaded, do not disconnect
if (numThreads == 1 && dropScratchTables && !sqlOnly) {
if (connectionDetails$dbms == "oracle") {
ParallelLogger::logInfo(sprintf("Dropping scratch Achilles tables from schema %s",
scratchDatabaseSchema))
# Oracle TEMP tables are created as persistent tables and are given randomly generated string
# prefixes preceding tempAchillesPrefix, therefore, they need their own code to drop the
# scratch tables.
allTables <- DatabaseConnector::getTableNames(connection, scratchDatabaseSchema)
tablesToDrop <- c(allTables[which(grepl(tempAchillesPrefix, allTables, fixed = TRUE))],
allTables[which(grepl(tolower(tempAchillesPrefix),
allTables, fixed = TRUE))], allTables[which(grepl(toupper(tempAchillesPrefix), allTables,
fixed = TRUE))])
dropSqls <- lapply(tablesToDrop, function(scratchTable) {
sql <- SqlRender::render("IF OBJECT_ID('@scratchDatabaseSchema@schemaDelim@scratchTable', 'U') IS NOT NULL DROP TABLE @scratchDatabaseSchema@schemaDelim@scratchTable;\n",
scratchDatabaseSchema = scratchDatabaseSchema, schemaDelim = schemaDelim, scratchTable = scratchTable)
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
})
dropSqls <- unlist(dropSqls)
for (k in 1:length(dropSqls)) {
DatabaseConnector::executeSql(connection, dropSqls[k])
}
ParallelLogger::logInfo(sprintf("Temporary Achilles tables removed from schema %s",
scratchDatabaseSchema))
DatabaseConnector::disconnect(connection = connection)
} else {
ParallelLogger::logInfo(sprintf("Dropping scratch Achilles tables from schema %s",
scratchDatabaseSchema))
dropAllScratchTables(connectionDetails = connectionDetails,
scratchDatabaseSchema = scratchDatabaseSchema,
tempAchillesPrefix = tempAchillesPrefix, numThreads = numThreads, tableTypes = c("achilles"),
outputFolder = outputFolder, defaultAnalysesOnly = defaultAnalysesOnly)
ParallelLogger::logInfo(sprintf("Temporary Achilles tables removed from schema %s", scratchDatabaseSchema))
DatabaseConnector::disconnect(connection = connection)
}
} else if (dropScratchTables & !sqlOnly) {
# Drop the scratch tables
ParallelLogger::logInfo(sprintf("Dropping scratch Achilles tables from schema %s",
scratchDatabaseSchema))
dropAllScratchTables(connectionDetails = connectionDetails,
scratchDatabaseSchema = scratchDatabaseSchema,
tempAchillesPrefix = tempAchillesPrefix, numThreads = numThreads, tableTypes = c("achilles"),
outputFolder = outputFolder, defaultAnalysesOnly = defaultAnalysesOnly)
ParallelLogger::logInfo(sprintf("Temporary Achilles tables removed from schema %s",
scratchDatabaseSchema))
}
# Create indices
indicesSql <- "/* INDEX CREATION SKIPPED PER USER REQUEST */"
if (createIndices) {
achillesTables <- lapply(unique(analysisDetails$DISTRIBUTION), function(a) {
if (a == 0) {
"achilles_results"
} else {
"achilles_results_dist"
}
})
indicesSql <- createIndices(connectionDetails = connectionDetails,
resultsDatabaseSchema = resultsDatabaseSchema,
outputFolder = outputFolder, sqlOnly = sqlOnly, verboseMode = verboseMode, achillesTables = unique(achillesTables))
}
achillesSql <- c(achillesSql, indicesSql)
# Optimize Atlas Cache
if (optimizeAtlasCache) {
optimizeAtlasCacheSql <- optimizeAtlasCache(connectionDetails = connectionDetails,
resultsDatabaseSchema = resultsDatabaseSchema,
vocabDatabaseSchema = vocabDatabaseSchema, outputFolder = outputFolder, sqlOnly = sqlOnly,
verboseMode = verboseMode, tempAchillesPrefix = tempAchillesPrefix)
achillesSql <- c(achillesSql, optimizeAtlasCacheSql)
}
if (sqlOnly) {
SqlRender::writeSql(sql = paste(achillesSql, collapse = "\n\n"),
targetFile = file.path(outputFolder,
"achilles.sql"))
ParallelLogger::logInfo(sprintf("All Achilles SQL scripts can be found in folder: %s",
file.path(outputFolder,
"achilles.sql")))
}
achillesResults <- list(resultsConnectionDetails = connectionDetails,
resultsTable = "achilles_results",
resultsDistributionTable = "achilles_results_dist", analysis_table = "achilles_analysis", sourceName = sourceName,
analysisIds = analysisDetails$ANALYSIS_ID, achillesSql = paste(achillesSql, collapse = "\n\n"),
indicesSql = indicesSql, call = match.call())
class(achillesResults) <- "achillesResults"
  totalDelta <- Sys.time() - totalStart
  ParallelLogger::logInfo(sprintf("[Total Runtime] %f %s", totalDelta, attr(totalDelta, "units")))
  invisible(achillesResults)
}
#' Create indices
#'
#' @details
#' Post-processing, create indices to help performance. Index creation is skipped on Redshift,
#' Netezza, BigQuery, Snowflake, and Spark, where it is not supported.
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param resultsDatabaseSchema Fully qualified name of database schema that we can write final
#' results to. Default is cdmDatabaseSchema. On SQL Server, this should
#' specify both the database and the schema, so for example, on SQL
#' Server, 'cdm_results.dbo'.
#' @param outputFolder Path to store logs and SQL files
#' @param sqlOnly TRUE = just generate SQL files, don't actually run, FALSE = run
#' Achilles
#' @param verboseMode Boolean to determine if the console will show all execution steps.
#' Default = TRUE
#' @param achillesTables Which achilles tables should be indexed? Default is both
#' achilles_results and achilles_results_dist.
#' @returns
#' A collection of queries that were executed to drop any existing indices and create new indices as
#' specified.
#'
#' @export
createIndices <- function(connectionDetails,
resultsDatabaseSchema,
outputFolder,
sqlOnly = FALSE,
verboseMode = TRUE,
achillesTables = c("achilles_results", "achilles_results_dist")) {
# Log execution
if (verboseMode) {
appenders <- list(ParallelLogger::createConsoleAppender(),
ParallelLogger::createFileAppender(layout = ParallelLogger::layoutParallel,
fileName = file.path(outputFolder, "log_createIndices.txt")))
} else {
appenders <- list(ParallelLogger::createFileAppender(layout = ParallelLogger::layoutParallel,
fileName = file.path(outputFolder, "log_createIndices.txt")))
}
logger <- ParallelLogger::createLogger(name = "createIndices",
threshold = "INFO",
appenders = appenders)
ParallelLogger::registerLogger(logger)
dropIndicesSql <- c()
indicesSql <- c()
# dbms specific index operations
if (connectionDetails$dbms %in% c("redshift", "netezza", "bigquery", "snowflake", "spark")) {
return(sprintf("/* INDEX CREATION SKIPPED, INDICES NOT SUPPORTED IN %s */",
toupper(connectionDetails$dbms)))
}
if (connectionDetails$dbms == "pdw") {
indicesSql <- c(indicesSql,
SqlRender::render("create clustered columnstore index ClusteredIndex_Achilles_results on @resultsDatabaseSchema.achilles_results;",
resultsDatabaseSchema = resultsDatabaseSchema))
}
indices <- read.csv(file = system.file("csv",
"post_processing",
"indices.csv",
package = "Achilles"),
header = TRUE, stringsAsFactors = FALSE)
# create index SQLs
for (i in 1:nrow(indices)) {
if (indices[i, ]$TABLE_NAME %in% achillesTables) {
sql <- SqlRender::render(sql = "drop index @resultsDatabaseSchema.@indexName;",
resultsDatabaseSchema = resultsDatabaseSchema,
indexName = indices[i, ]$INDEX_NAME)
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
dropIndicesSql <- c(dropIndicesSql, sql)
sql <- SqlRender::render(sql = "create index @indexName on @resultsDatabaseSchema.@tableName (@fields);",
resultsDatabaseSchema = resultsDatabaseSchema, tableName = indices[i, ]$TABLE_NAME, indexName = indices[i,
]$INDEX_NAME, fields = paste(strsplit(x = indices[i, ]$FIELDS, split = "~")[[1]],
collapse = ","))
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
indicesSql <- c(indicesSql, sql)
}
}
if (!sqlOnly) {
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = connection), add = TRUE)
try(DatabaseConnector::executeSql(connection = connection,
sql = paste(dropIndicesSql, collapse = "\n\n")),
silent = TRUE)
DatabaseConnector::executeSql(connection = connection,
sql = paste(indicesSql, collapse = "\n\n"))
}
ParallelLogger::unregisterLogger("createIndices")
invisible(c(dropIndicesSql, indicesSql))
}
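## Illustrative sketch of the statements this function renders for a single row
## of indices.csv; the schema, table, index and field names below are
## hypothetical placeholders rather than the actual csv contents, and the block
## is wrapped in `if (FALSE)` so it never runs when the package is loaded.
if (FALSE) {
  sql <- SqlRender::render(
    sql = "create index @indexName on @resultsDatabaseSchema.@tableName (@fields);",
    resultsDatabaseSchema = "results",
    tableName = "achilles_results",
    indexName = "idx_ar_analysis_id", # hypothetical index name
    fields = "analysis_id"            # hypothetical field list
  )
  SqlRender::translate(sql = sql, targetDialect = "postgresql")
}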
#' Get all analysis details
#'
#' @details
#' Get a list of all analyses with their analysis IDs and strata.
#'
#' @returns
#' A data.frame with the analysis details.
#'
#' @export
getAnalysisDetails <- function() {
read.csv(system.file("csv", "achilles", "achilles_analysis_details.csv", package = "Achilles"),
stringsAsFactors = FALSE)
}
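## Illustrative sketch of how the details table is typically filtered, using the
## same columns achilles() relies on internally; wrapped in `if (FALSE)` so it
## never runs when the package is loaded.
if (FALSE) {
  analysisDetails <- getAnalysisDetails()
  ## Analyses run when defaultAnalysesOnly = TRUE:
  defaultIds <- analysisDetails$ANALYSIS_ID[analysisDetails$IS_DEFAULT == 1]
  ## Analyses whose results go to achilles_results_dist:
  distIds <- analysisDetails$ANALYSIS_ID[abs(analysisDetails$DISTRIBUTION) == 1]
  length(defaultIds)
  length(distIds)
}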
#' Drop all possible scratch tables
#'
#' @details
#' Drop all possible Achilles scratch tables
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param scratchDatabaseSchema string name of database schema that Achilles scratch tables were
#' written to.
#' @param tempAchillesPrefix The prefix to use for the "temporary" (but actually permanent)
#' Achilles analyses tables. Default is "tmpach"
#' @param numThreads The number of threads to use to run this function. Default is 1
#' thread.
#' @param tableTypes The types of Achilles scratch tables to drop: achilles
#' @param outputFolder Path to store logs and SQL files
#' @param verboseMode Boolean to determine if the console will show all execution steps.
#' Default = TRUE
#' @param defaultAnalysesOnly Boolean to determine if only default analyses should be run.
#' Including non-default analyses is substantially more resource
#' intensive. Default = TRUE
#' @returns No return value, called to drop interim scratch tables.
#'
#' @export
dropAllScratchTables <- function(connectionDetails,
scratchDatabaseSchema,
tempAchillesPrefix = "tmpach",
numThreads = 1, tableTypes = c("achilles"), outputFolder, verboseMode = TRUE, defaultAnalysesOnly = TRUE) {
# Log execution
if (verboseMode) {
appenders <- list(ParallelLogger::createConsoleAppender(),
ParallelLogger::createFileAppender(layout = ParallelLogger::layoutParallel,
fileName = file.path(outputFolder, "log_dropScratchTables.txt")))
} else {
appenders <- list(ParallelLogger::createFileAppender(layout = ParallelLogger::layoutParallel,
fileName = file.path(outputFolder, "log_dropScratchTables.txt")))
}
logger <- ParallelLogger::createLogger(name = "dropAllScratchTables",
threshold = "INFO",
appenders = appenders)
ParallelLogger::registerLogger(logger)
# Initialize thread and scratchDatabaseSchema settings
schemaDelim <- "."
if (numThreads == 1 || scratchDatabaseSchema == "#") {
numThreads <- 1
if (.supportsTempTables(connectionDetails) && connectionDetails$dbms != "oracle") {
scratchDatabaseSchema <- "#"
schemaDelim <- "s_"
}
}
if ("achilles" %in% tableTypes) {
# Drop Achilles Scratch Tables
analysisDetails <- getAnalysisDetails()
if (defaultAnalysesOnly) {
resultsTables <- lapply(analysisDetails$ANALYSIS_ID[analysisDetails$DISTRIBUTION <= 0 & analysisDetails$IS_DEFAULT ==
1], function(id) {
sprintf("%s_%d", tempAchillesPrefix, id)
})
} else {
resultsTables <- lapply(analysisDetails$ANALYSIS_ID[analysisDetails$DISTRIBUTION <= 0],
function(id) {
sprintf("%s_%d", tempAchillesPrefix, id)
})
}
resultsDistTables <- lapply(analysisDetails$ANALYSIS_ID[abs(analysisDetails$DISTRIBUTION) ==
1], function(id) {
sprintf("%s_dist_%d", tempAchillesPrefix, id)
})
dropSqls <- lapply(c(resultsTables, resultsDistTables), function(scratchTable) {
sql <- SqlRender::render("IF OBJECT_ID('@scratchDatabaseSchema@schemaDelim@scratchTable', 'U') IS NOT NULL DROP TABLE @scratchDatabaseSchema@schemaDelim@scratchTable;",
scratchDatabaseSchema = scratchDatabaseSchema, schemaDelim = schemaDelim, scratchTable = scratchTable)
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
})
cluster <- ParallelLogger::makeCluster(numberOfThreads = numThreads, singleThreadToMain = TRUE)
dummy <- ParallelLogger::clusterApply(cluster = cluster, x = dropSqls, function(sql) {
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
tryCatch({
DatabaseConnector::executeSql(connection = connection, sql = sql)
}, error = function(e) {
ParallelLogger::logError(sprintf("Drop Achilles Scratch Table -- ERROR (%s)", e))
}, finally = {
DatabaseConnector::disconnect(connection = connection)
})
})
ParallelLogger::stopCluster(cluster = cluster)
}
ParallelLogger::unregisterLogger("dropAllScratchTables")
}
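## Illustrative sketch of how the scratch table names checked above are composed
## from the prefix and the analysis ids (the ids shown are arbitrary examples);
## wrapped in `if (FALSE)` so it never runs when the package is loaded.
if (FALSE) {
  tempAchillesPrefix <- "tmpach"
  sprintf("%s_%d", tempAchillesPrefix, 101)       # e.g. "tmpach_101"
  sprintf("%s_dist_%d", tempAchillesPrefix, 103)  # e.g. "tmpach_dist_103"
}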
#' Optimize atlas cache
#'
#' @details
#' Post-processing, optimize data for the atlas cache in a separate table to help performance.
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param resultsDatabaseSchema Fully qualified name of database schema that we can write final
#' results to. Default is cdmDatabaseSchema. On SQL Server, this should
#' specify both the database and the schema, so for example, on SQL
#' Server, 'cdm_results.dbo'.
#' @param vocabDatabaseSchema String name of database schema that contains OMOP Vocabulary. Default
#' is cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#' @param outputFolder Path to store logs and SQL files
#' @param sqlOnly TRUE = just generate SQL files, don't actually run, FALSE = run
#' Achilles
#' @param verboseMode Boolean to determine if the console will show all execution steps.
#' Default = TRUE
#' @param tempAchillesPrefix The prefix to use for the "temporary" (but actually permanent)
#' Achilles analyses tables. Default is "tmpach"
#' @returns
#' The SQL statement executed to update cache tables is returned.
#'
#' @export
optimizeAtlasCache <- function(connectionDetails,
resultsDatabaseSchema,
vocabDatabaseSchema = resultsDatabaseSchema,
outputFolder = "output", sqlOnly = FALSE, verboseMode = TRUE, tempAchillesPrefix = "tmpach") {
if (!dir.exists(outputFolder)) {
dir.create(path = outputFolder, recursive = TRUE)
}
# Log execution
unlink(file.path(outputFolder, "log_optimize_atlas_cache.txt"))
if (verboseMode) {
appenders <- list(ParallelLogger::createConsoleAppender(),
ParallelLogger::createFileAppender(layout = ParallelLogger::layoutParallel,
fileName = file.path(outputFolder, "log_optimize_atlas_cache.txt")))
} else {
appenders <- list(ParallelLogger::createFileAppender(layout = ParallelLogger::layoutParallel,
fileName = file.path(outputFolder, "log_optimize_atlas_cache.txt")))
}
logger <- ParallelLogger::createLogger(name = "optimizeAtlasCache",
threshold = "INFO",
appenders = appenders)
ParallelLogger::registerLogger(logger)
resultsConceptCountTable <- list(tablePrefix = tempAchillesPrefix,
schema = read.csv(file = system.file("csv",
"schemas", "schema_achilles_results_concept_count.csv", package = "Achilles"), header = TRUE))
optimizeAtlasCacheSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "analyses/create_result_concept_table.sql",
packageName = "Achilles", dbms = connectionDetails$dbms, resultsDatabaseSchema = resultsDatabaseSchema,
vocabDatabaseSchema = vocabDatabaseSchema, fieldNames = paste(resultsConceptCountTable$schema$FIELD_NAME,
collapse = ", "))
if (!sqlOnly) {
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
tryCatch({
ParallelLogger::logInfo("Optimizing atlas cache")
DatabaseConnector::executeSql(connection = connection, sql = optimizeAtlasCacheSql)
ParallelLogger::logInfo("Atlas cache was optimized")
}, error = function(e) {
ParallelLogger::logError(sprintf("Optimizing atlas cache [ERROR] (%s)", e))
}, finally = {
DatabaseConnector::disconnect(connection = connection)
})
}
ParallelLogger::unregisterLogger("optimizeAtlasCache")
invisible(optimizeAtlasCacheSql)
}
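## Illustrative sketch of a typical call, mirroring the parameter descriptions
## above; the dbms, server and schema names are hypothetical and the block is
## wrapped in `if (FALSE)` so it never runs when the package is loaded.
if (FALSE) {
  connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "postgresql",
                                                                  server = "some_server/ohdsi")
  optimizeAtlasCache(connectionDetails = connectionDetails,
                     resultsDatabaseSchema = "results",
                     vocabDatabaseSchema = "vocab",
                     outputFolder = "output")
}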
.getCdmVersion <- function(connectionDetails, cdmDatabaseSchema) {
sql <- SqlRender::render(sql = "select cdm_version from @cdmDatabaseSchema.cdm_source",
cdmDatabaseSchema = cdmDatabaseSchema)
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
cdmVersion <- tryCatch({
c <- tolower((DatabaseConnector::querySql(connection = connection, sql = sql))[1, ])
gsub(pattern = "v", replacement = "", x = c)
}, error = function(e) {
""
}, finally = {
DatabaseConnector::disconnect(connection = connection)
rm(connection)
})
cdmVersion
}
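# Internal helper: temp tables are supported on all platforms handled here except BigQuery.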
.supportsTempTables <- function(connectionDetails) {
!(connectionDetails$dbms %in% c("bigquery"))
}
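# Internal helper: renders and translates the SQL for a single Achilles analysis from the
# package's analyses/<analysisId>.sql file, filling in schema, source, and version parameters.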
.getAnalysisSql <- function(analysisId,
connectionDetails,
schemaDelim,
scratchDatabaseSchema,
cdmDatabaseSchema,
resultsDatabaseSchema, tempEmulationSchema, cdmVersion, tempAchillesPrefix, resultsTables, sourceName,
numThreads, outputFolder) {
SqlRender::loadRenderTranslateSql(sqlFilename = file.path("analyses",
paste(analysisId, "sql", sep = ".")),
packageName = "Achilles", dbms = connectionDetails$dbms, warnOnMissingParameters = FALSE, scratchDatabaseSchema = scratchDatabaseSchema,
cdmDatabaseSchema = cdmDatabaseSchema, resultsDatabaseSchema = resultsDatabaseSchema, schemaDelim = schemaDelim,
tempAchillesPrefix = tempAchillesPrefix, tempEmulationSchema = tempEmulationSchema, source_name = sourceName,
achilles_version = packageVersion(pkg = "Achilles"), cdmVersion = cdmVersion, singleThreaded = (scratchDatabaseSchema ==
"#"))
}
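# Internal helper: builds the SQL that unions the per-analysis scratch tables into a single
# results table. When not running in sqlOnly mode, a benchmark row is appended for each
# analysis (analysis_id offset by .getBenchmarkOffset(), run time in seconds in stratum_1).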
.mergeAchillesScratchTables <- function(resultsTable,
analysisIds,
createTable,
connectionDetails,
schemaDelim,
scratchDatabaseSchema,
resultsDatabaseSchema,
tempEmulationSchema,
cdmVersion,
tempAchillesPrefix,
numThreads,
smallCellCount,
outputFolder,
sqlOnly) {
castedNames <- apply(resultsTable$schema, 1, function(field) {
SqlRender::render(
"cast(@fieldName as @fieldType) as @fieldName",
fieldName = field["FIELD_NAME"],
fieldType = field["FIELD_TYPE"]
)
})
# obtain the analysis SQLs to union in the merge
if (!sqlOnly) {
logs <- .parseLogs(outputFolder)
}
detailSqls <- lapply(
resultsTable$analysisIds[resultsTable$analysisIds %in% analysisIds],
function(analysisId) {
analysisSql <- SqlRender::render(
sql = "select @castedNames from
@scratchDatabaseSchema@schemaDelim@tablePrefix_@analysisId",
scratchDatabaseSchema = scratchDatabaseSchema,
schemaDelim = schemaDelim,
castedNames = paste(castedNames, collapse = ", "),
tablePrefix = resultsTable$tablePrefix,
analysisId = analysisId
)
if (!sqlOnly) {
# obtain the runTime for this analysis
runTime <- .getAchillesResultBenchmark(analysisId, logs)
benchmarkSelects <- lapply(resultsTable$schema$FIELD_NAME, function(c) {
if (tolower(c) == "analysis_id") {
sprintf("%d as analysis_id", .getBenchmarkOffset() + as.integer(analysisId))
} else if (tolower(c) == "stratum_1") {
sprintf("'%s' as stratum_1", runTime)
} else if (tolower(c) == "count_value") {
sprintf("%d as count_value", smallCellCount + 1)
} else {
sprintf("NULL as %s", c)
}
})
benchmarkSql <- SqlRender::render(
sql = "select @benchmarkSelect",
benchmarkSelect = paste(benchmarkSelects, collapse = ", ")
)
analysisSql <- paste(c(analysisSql, benchmarkSql), collapse = " union all ")
}
analysisSql
}
)
SqlRender::loadRenderTranslateSql(
sqlFilename = "analyses/merge_achilles_tables.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
createTable = createTable,
resultsDatabaseSchema = resultsDatabaseSchema,
tempEmulationSchema = tempEmulationSchema,
detailType = resultsTable$detailType,
detailSqls = paste(detailSqls, collapse = " \nunion all\n "),
fieldNames = paste(resultsTable$schema$FIELD_NAME, collapse = ", "),
smallCellCount = smallCellCount
)
}
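# Internal helper: reads cdm_source_name from the cdm_source table; returns "" if the query fails.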
.getSourceName <- function(connectionDetails, cdmDatabaseSchema) {
sql <- SqlRender::render(sql = "select cdm_source_name from @cdmDatabaseSchema.cdm_source",
cdmDatabaseSchema = cdmDatabaseSchema)
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
sourceName <- tryCatch({
s <- DatabaseConnector::querySql(connection = connection, sql = sql)
s[1, ]
}, error = function(e) {
""
}, finally = {
DatabaseConnector::disconnect(connection = connection)
rm(connection)
})
sourceName
}
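# Internal helper: removes rows for the requested analyses from achilles_results
# (non-distribution analyses) and achilles_results_dist (distribution analyses).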
.deleteExistingResults <- function(connectionDetails, resultsDatabaseSchema, analysisDetails) {
resultIds <- analysisDetails$ANALYSIS_ID[analysisDetails$DISTRIBUTION == 0]
distIds <- analysisDetails$ANALYSIS_ID[analysisDetails$DISTRIBUTION == 1]
if (length(resultIds) > 0) {
sql <- SqlRender::render(sql = "delete from @resultsDatabaseSchema.achilles_results where analysis_id in (@analysisIds);",
resultsDatabaseSchema = resultsDatabaseSchema, analysisIds = paste(resultIds, collapse = ","))
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = connection))
DatabaseConnector::executeSql(connection = connection, sql = sql)
}
if (length(distIds) > 0) {
sql <- SqlRender::render(sql = "delete from @resultsDatabaseSchema.achilles_results_dist where analysis_id in (@analysisIds);",
resultsDatabaseSchema = resultsDatabaseSchema, analysisIds = paste(distIds, collapse = ","))
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
connection <- DatabaseConnector::connect(connectionDetails = connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = connection))
DatabaseConnector::executeSql(connection = connection, sql = sql)
}
}
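# Internal helper: deletes the given analysis ids from both achilles_results and
# achilles_results_dist, regardless of analysis type.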
.deleteGivenAnalyses <- function(connectionDetails, resultsDatabaseSchema, analysisIds) {
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(conn))
sql <- "delete from @resultsDatabaseSchema.achilles_results where analysis_id in (@analysisIds);"
sql <- SqlRender::render(sql,
resultsDatabaseSchema = resultsDatabaseSchema,
analysisIds = paste(analysisIds,
collapse = ","))
sql <- SqlRender::translate(sql, targetDialect = connectionDetails$dbms)
DatabaseConnector::executeSql(conn, sql)
sql <- "delete from @resultsDatabaseSchema.achilles_results_dist where analysis_id in (@analysisIds);"
sql <- SqlRender::render(sql,
resultsDatabaseSchema = resultsDatabaseSchema,
analysisIds = paste(analysisIds,
collapse = ","))
sql <- SqlRender::translate(sql, targetDialect = connectionDetails$dbms)
DatabaseConnector::executeSql(conn, sql)
}
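# Internal helper: looks up the logged run time for a single analysis and normalizes it to
# seconds (log values may be reported in mins, hours, or days); returns an error string if
# no unique log entry is found.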
.getAchillesResultBenchmark <- function(analysisId, logs) {
logs <- logs[logs$analysisId == analysisId, ]
if (nrow(logs) == 1) {
runTime <- strsplit(logs[1, ]$runTime, " ")[[1]]
runTimeValue <- round(as.numeric(runTime[1]), 2)
runTimeUnit <- runTime[2]
if (runTimeUnit == "mins") {
runTimeValue <- runTimeValue * 60
} else if (runTimeUnit == "hours") {
runTimeValue <- runTimeValue * 60 * 60
} else if (runTimeUnit == "days") {
runTimeValue <- runTimeValue * 60 * 60 * 24
}
runTimeValue
} else {
"ERROR: no runtime found in log file"
}
}
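# Internal helper: reads the tab-delimited ParallelLogger file (log_achilles.txt) from the
# output folder, keeps only the "COMPLETE" entries, and extracts the analysis id and run time
# from each log comment.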
.parseLogs <- function(outputFolder) {
logs <- utils::read.table(
file = file.path(
outputFolder,
"log_achilles.txt"
),
header = FALSE,
sep = "\t",
stringsAsFactors = FALSE
)
names(logs) <- c("startTime", "thread", "logType", "package", "packageFunction", "comment")
logs <- logs[grepl(pattern = "COMPLETE", x = logs$comment), ]
logs$analysisId <- logs$runTime <- NA
  for (i in seq_len(nrow(logs))) {
logs[i, ]$analysisId <- .parseAnalysisId(logs[i, ]$comment)
logs[i, ]$runTime <- .parseRunTime(logs[i, ]$comment)
}
logs
}
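# Internal helpers for parsing log comments: .formatName strips bracketed "[...]" prefixes,
# .parseAnalysisId extracts the numeric analysis id after removing parenthesized text, and
# .parseRunTime extracts the first parenthesized run-time string (value and unit, e.g. "1.23 mins").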
.formatName <- function(name) {
gsub("_", " ", gsub("\\[(.*?)\\]_", "", gsub(" ", "_", name)))
}
.parseAnalysisId <- function(comment) {
comment <- .formatName(comment)
as.integer(gsub("\\s*\\([^\\)]+\\)", "", as.character(comment)))
}
.parseRunTime <- function(comment) {
comment <- .formatName(comment)
gsub("[\\(\\)]", "", regmatches(comment, gregexpr("\\(.*?\\)", comment))[[1]])
}
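# Internal helper: constant offset (2,000,000) added to analysis ids for benchmark rows so
# they do not collide with regular Achilles result ids.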
.getBenchmarkOffset <- function() {
2e+06
}
# ---- End of R/Achilles.R ----
# @file createTimeSeries
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of Achilles
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Observational Health Data Sciences and Informatics
# @author Martijn Schuemie
# @author Patrick Ryan
# @author Frank DeFalco
# @author Vojtech Huser
# @author Chris Knoll
# @author Ajit Londhe
# @author Taha Abdul-Basser
# @author Anthony Molinaro
#' @title
#' createTimeSeries
#'
#' @description
#' \code{createTimeSeries} Creates a monthly multivariate time series object given a data frame in the
#' proper format.
#'
#' @details
#' \code{createTimeSeries} requires the following:
#' \preformatted{
#' 1. The given data frame must contain four columns:
#'    START_DATE, COUNT_VALUE, PREVALENCE, and PROPORTION_WITHIN_YEAR.
#' 2. START_DATE must be in the YYYYMMDD format.
#' 3. COUNT_VALUE, PREVALENCE, and PROPORTION_WITHIN_YEAR contain only numeric data.
#' }
#' The individual monthly univariate time series can be extracted by specifying the
#' correct column name (see example).
#'
#'
#' @param temporalData A data frame from which to create the time series
#'
#' @return
#' A multivariate time series object
#'
#' @examples
#' # Example 1:
#' temporalData <- data.frame(
#'   START_DATE = seq.Date(as.Date("20210101", "%Y%m%d"), as.Date("20231201", "%Y%m%d"), by = "month"),
#'   COUNT_VALUE = round(runif(36, 1, 1000)),
#'   PREVALENCE = round(runif(36, 0, 10), 2),
#'   PROPORTION_WITHIN_YEAR = round(runif(36, 0, 1), 2),
#'   stringsAsFactors = FALSE
#' )
#' dummyTs <- createTimeSeries(temporalData)
#' dummyTs.cv <- dummyTs[, "COUNT_VALUE"]
#' dummyTs.pv <- dummyTs[, "PREVALENCE"]
#' dummyTs.pwy <- dummyTs[, "PROPORTION_WITHIN_YEAR"]
#'
#' \dontrun{
#' # Example 2:
#' pneumonia <- 255848
#' temporalData <- getTemporalData(connectionDetails = connectionDetails, cdmDatabaseSchema = "cdm",
#' resultsDatabaseSchema = "results", conceptId = pneumonia)
#' pneumoniaTs <- createTimeSeries(temporalData)
#' pneumoniaTs.cv <- pneumoniaTs[, "COUNT_VALUE"]
#' pneumoniaTs.pv <- pneumoniaTs[, "PREVALENCE"]
#' pneumoniaTs.pwy <- pneumoniaTs[, "PROPORTION_WITHIN_YEAR"]
#' }
#'
#' @export
createTimeSeries <- function(temporalData) {
requiredColumns <- c("START_DATE", "COUNT_VALUE", "PREVALENCE", "PROPORTION_WITHIN_YEAR")
  if (!all(requiredColumns %in% colnames(temporalData))) {
    stop(paste0("ERROR: INVALID DATA FRAME FORMAT. The data frame must contain columns: ",
                paste(requiredColumns, collapse = ", ")))
  }
if (nrow(temporalData) == 0) {
stop("ERROR: Cannot create time series from an empty data frame")
}
resultSetData <- temporalData
# Convert YYYYMMDD string into a valid date
resultSetData$START_DATE <- as.Date(resultSetData$START_DATE, "%Y%m%d")
# Sort the temporal data by START_DATE rather than using an ORDER BY in the SQL
resultSetData <- resultSetData[order(resultSetData$START_DATE), ]
# Create a vector of dense dates to capture all dates between the start and end of the time
# series
lastRow <- nrow(resultSetData)
  denseDates <- seq.Date(from = as.Date(resultSetData$START_DATE[1], "%Y%m%d"),
                         to = as.Date(resultSetData$START_DATE[lastRow], "%Y%m%d"),
                         by = "month")
# Find gaps, if any, in data (e.g., dates that have no data, give that date a 0 count and 0
# prevalence)
denseDatesDf <- data.frame(START_DATE = denseDates, CNT = rep(0, length(denseDates)))
joinResults <- dplyr::left_join(denseDatesDf, resultSetData, by = c(START_DATE = "START_DATE"))
joinResults$COUNT_VALUE[which(is.na(joinResults$COUNT_VALUE))] <- 0
joinResults$PREVALENCE[which(is.na(joinResults$PREVALENCE))] <- 0
joinResults$PROPORTION_WITHIN_YEAR[which(is.na(joinResults$PROPORTION_WITHIN_YEAR))] <- 0
# Now that we no longer have sparse dates, keep only necessary columns and build the time series
joinResults <- joinResults[, c("START_DATE",
"COUNT_VALUE",
"PREVALENCE",
"PROPORTION_WITHIN_YEAR")]
# Find the end of the dense results
lastRow <- nrow(joinResults)
# Create the multivariate time series
tsData <- data.frame(COUNT_VALUE = joinResults$COUNT_VALUE, PREVALENCE = joinResults$PREVALENCE,
PROPORTION_WITHIN_YEAR = joinResults$PROPORTION_WITHIN_YEAR)
  resultSetDataTs <- ts(
    data = tsData,
    start = c(as.numeric(substring(joinResults$START_DATE[1], 1, 4)),
              as.numeric(substring(joinResults$START_DATE[1], 6, 7))),
    end = c(as.numeric(substring(joinResults$START_DATE[lastRow], 1, 4)),
            as.numeric(substring(joinResults$START_DATE[lastRow], 6, 7))),
    frequency = 12
  )
return(resultSetDataTs)
}
# ---- End of R/createTimeSeries.r ----
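# The generateAO* functions below are internal, undocumented helpers that export pre-computed
# Achilles results as JSON and CSV files under outputPath (person.json, dashboard.json,
# achilles-performance.csv, concepts/<domain>/concept_<id>.json, ...) for a downstream results
# viewer. The "AO" prefix presumably refers to the Ares output format; that reading is an
# assumption, as this file does not state it explicitly.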
generateAOProcedureReports <- function(connectionDetails, proceduresData, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating procedure reports")
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/procedure/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/procedure/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryProcedureFrequencyDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/procedure/sqlFrequencyDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryProceduresByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/procedure/sqlProceduresByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/procedure/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataProceduresByType <- DatabaseConnector::querySql(conn,queryProceduresByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn,queryAgeAtFirstOccurrence)
dataProcedureFrequencyDistribution <- DatabaseConnector::querySql(conn,queryProcedureFrequencyDistribution)
buildProcedureReport <- function(concept_id) {
summaryRecord <- proceduresData[proceduresData$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$CDM_TABLE_NAME <- "PROCEDURE_OCCURRENCE"
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(3,4,5,6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(3,4)]
report$PROCEDURE_FREQUENCY_DISTRIBUTION <- dataProcedureFrequencyDistribution[dataProcedureFrequencyDistribution$CONCEPT_ID == concept_id,c(3,4)]
report$PROCEDURES_BY_TYPE <- dataProceduresByType[dataProceduresByType$PROCEDURE_CONCEPT_ID == concept_id,c(4,5)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
dir.create(paste0(outputPath,"/concepts/procedure_occurrence"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/procedure_occurrence/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
uniqueConcepts <- unique(proceduresData$CONCEPT_ID)
x <- lapply(uniqueConcepts, buildProcedureReport)
}
generateAOPersonReport <- function(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating person report")
output = {}
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/person/population.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
personSummaryData <- DatabaseConnector::querySql(conn,renderedSql)
output$SUMMARY = personSummaryData
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/person/population_age_gender.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
ageGenderData <- DatabaseConnector::querySql(conn,renderedSql)
output$AGE_GENDER_DATA = ageGenderData
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/person/gender.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
genderData <- DatabaseConnector::querySql(conn,renderedSql)
output$GENDER_DATA = genderData
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/person/race.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
raceData <- DatabaseConnector::querySql(conn,renderedSql)
output$RACE_DATA = raceData
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/person/ethnicity.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
ethnicityData <- DatabaseConnector::querySql(conn,renderedSql)
output$ETHNICITY_DATA = ethnicityData
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/person/yearofbirth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
birthYearData <- DatabaseConnector::querySql(conn,renderedSql)
output$BIRTH_YEAR_DATA <- birthYearData
jsonOutput = jsonlite::toJSON(output)
write(jsonOutput, file=paste(outputPath, "/person.json", sep=""))
}
generateAOAchillesPerformanceReport <- function(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating achilles performance report")
queryAchillesPerformance <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/performance/sqlAchillesPerformance.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataPerformance <- DatabaseConnector::querySql(conn,queryAchillesPerformance)
names(dataPerformance) <- c("analysis_id", "analysis_name","category", "elapsed_seconds")
dataPerformance$elapsed_seconds <- format(round(as.numeric(dataPerformance$elapsed_seconds),digits = 2),nsmall = 2)
data.table::fwrite(dataPerformance, file.path(outputPath, "achilles-performance.csv"))
}
generateAODeathReport <- function(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating death report")
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/death/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/death/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
queryDeathByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/death/sqlDeathByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryAgeAtDeath <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/death/sqlAgeAtDeath.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
deathByTypeData <- DatabaseConnector::querySql(conn,queryDeathByType)
prevalenceByGenderAgeYearData <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
prevalenceByMonthData <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
ageAtDeathData <- DatabaseConnector::querySql(conn,queryAgeAtDeath)
output = {}
output$PREVALENCE_BY_GENDER_AGE_YEAR = prevalenceByGenderAgeYearData
output$PREVALENCE_BY_MONTH = prevalenceByMonthData
output$DEATH_BY_TYPE = deathByTypeData
output$AGE_AT_DEATH = ageAtDeathData
filename <- file.path(outputPath, "death.json")
write(jsonlite::toJSON(output),filename)
}
generateAOObservationPeriodReport <- function(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating observation period reports")
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
output = {}
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/ageatfirst.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
ageAtFirstObservationData <- DatabaseConnector::querySql(conn,renderedSql)
output$AGE_AT_FIRST_OBSERVATION <- ageAtFirstObservationData
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/agebygender.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
ageByGenderData <- DatabaseConnector::querySql(conn,renderedSql)
output$AGE_BY_GENDER = ageByGenderData
observationLengthHist <- {}
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/observationlength_stats.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
observationLengthStats <- DatabaseConnector::querySql(conn,renderedSql)
observationLengthHist$MIN = observationLengthStats$MIN_VALUE
observationLengthHist$MAX = observationLengthStats$MAX_VALUE
observationLengthHist$INTERVAL_SIZE = observationLengthStats$INTERVAL_SIZE
observationLengthHist$INTERVALS = (observationLengthStats$MAX_VALUE - observationLengthStats$MIN_VALUE) / observationLengthStats$INTERVAL_SIZE
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/observationlength_data.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
  observationLengthData <- DatabaseConnector::querySql(conn,renderedSql)
  observationLengthHist$DATA <- observationLengthData
  output$OBSERVATION_LENGTH_HISTOGRAM = observationLengthHist
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/cumulativeduration.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
cumulativeDurationData <- DatabaseConnector::querySql(conn,renderedSql)
cumulativeDurationData$X_LENGTH_OF_OBSERVATION <- cumulativeDurationData$X_LENGTH_OF_OBSERVATION / 365.25
cumulativeDurationData$SERIES_NAME <- NULL
names(cumulativeDurationData) <- c("YEARS","PERCENT_PEOPLE")
output$CUMULATIVE_DURATION = cumulativeDurationData
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/observationlengthbygender.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
opLengthByGenderData <- DatabaseConnector::querySql(conn,renderedSql)
opLengthByGenderData$MIN_VALUE <- opLengthByGenderData$MIN_VALUE / 365.25
opLengthByGenderData$P10_VALUE <- opLengthByGenderData$P10_VALUE / 365.25
opLengthByGenderData$P25_VALUE <- opLengthByGenderData$P25_VALUE / 365.25
opLengthByGenderData$MEDIAN_VALUE <- opLengthByGenderData$MEDIAN_VALUE / 365.25
opLengthByGenderData$P75_VALUE <- opLengthByGenderData$P75_VALUE / 365.25
opLengthByGenderData$P90_VALUE <- opLengthByGenderData$P90_VALUE / 365.25
opLengthByGenderData$MAX_VALUE <- opLengthByGenderData$MAX_VALUE / 365.25
output$OBSERVATION_PERIOD_LENGTH_BY_GENDER = opLengthByGenderData
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/observationlengthbyage.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
opLengthByAgeData <- DatabaseConnector::querySql(conn,renderedSql)
opLengthByAgeData$MIN_VALUE <- opLengthByAgeData$MIN_VALUE / 365.25
opLengthByAgeData$P10_VALUE <- opLengthByAgeData$P10_VALUE / 365.25
opLengthByAgeData$P25_VALUE <- opLengthByAgeData$P25_VALUE / 365.25
opLengthByAgeData$MEDIAN_VALUE <- opLengthByAgeData$MEDIAN_VALUE / 365.25
opLengthByAgeData$P75_VALUE <- opLengthByAgeData$P75_VALUE / 365.25
opLengthByAgeData$P90_VALUE <- opLengthByAgeData$P90_VALUE / 365.25
opLengthByAgeData$MAX_VALUE <- opLengthByAgeData$MAX_VALUE / 365.25
output$OBSERVATION_PERIOD_LENGTH_BY_AGE = opLengthByAgeData
observedByYearHist <- {}
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/observedbyyear_stats.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
observedByYearStats <- DatabaseConnector::querySql(conn,renderedSql)
observedByYearHist$MIN = observedByYearStats$MIN_VALUE
observedByYearHist$MAX = observedByYearStats$MAX_VALUE
observedByYearHist$INTERVAL_SIZE = observedByYearStats$INTERVAL_SIZE
observedByYearHist$INTERVALS = (observedByYearStats$MAX_VALUE - observedByYearStats$MIN_VALUE) / observedByYearStats$INTERVAL_SIZE
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/observedbyyear_data.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
observedByYearData <- DatabaseConnector::querySql(conn,renderedSql)
observedByYearHist$DATA <- observedByYearData
output$OBSERVED_BY_YEAR_HISTOGRAM = observedByYearHist
observedByMonth <- {}
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/observedbymonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
observedByMonth <- DatabaseConnector::querySql(conn,renderedSql)
output$OBSERVED_BY_MONTH = observedByMonth
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observationperiod/periodsperperson.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
personPeriodsData <- DatabaseConnector::querySql(conn,renderedSql)
output$PERSON_PERIODS_DATA = personPeriodsData
filename <- file.path(outputPath, "observationperiod.json")
write(jsonlite::toJSON(output),filename)
}
generateAOVisitReports <- function(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating visit reports")
queryVisits <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visit/sqlVisitTreemap.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visit/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visit/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryVisitDurationByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visit/sqlVisitDurationByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visit/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
  conn <- DatabaseConnector::connect(connectionDetails)
  on.exit(DatabaseConnector::disconnect(connection = conn))
dataVisits <- DatabaseConnector::querySql(conn,queryVisits)
names(dataVisits)[names(dataVisits) == 'CONCEPT_PATH'] <- 'CONCEPT_NAME'
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataVisitDurationByType <- DatabaseConnector::querySql(conn,queryVisitDurationByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn,queryAgeAtFirstOccurrence)
buildVisitReport <- function(concept_id) {
summaryRecord <- dataVisits[dataVisits$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CDM_TABLE_NAME <- "VISIT_OCCURRENCE"
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(3,4,5,6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(3,4)]
report$VISIT_DURATION_BY_TYPE <- dataVisitDurationByType[dataVisitDurationByType$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
dir.create(paste0(outputPath,"/concepts/visit_occurrence"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/visit_occurrence/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
uniqueConcepts <- unique(dataVisits$CONCEPT_ID)
x <- lapply(uniqueConcepts, buildVisitReport)
}
generateAOVisitDetailReports <- function(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating visit_detail reports")
queryVisitDetails <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visitdetail/sqlVisitDetailTreemap.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visitdetail/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visitdetail/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryVisitDetailDurationByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visitdetail/sqlVisitDetailDurationByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visitdetail/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataVisitDetails <- DatabaseConnector::querySql(conn,queryVisitDetails)
names(dataVisitDetails)[names(dataVisitDetails) == 'CONCEPT_PATH'] <- 'CONCEPT_NAME'
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataVisitDetailDurationByType <- DatabaseConnector::querySql(conn,queryVisitDetailDurationByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn,queryAgeAtFirstOccurrence)
buildVisitDetailReport <- function(concept_id) {
summaryRecord <- dataVisitDetails[dataVisitDetails$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CDM_TABLE_NAME <- "VISIT_DETAIL"
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(3,4,5,6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(3,4)]
report$VISIT_DETAIL_DURATION_BY_TYPE <- dataVisitDetailDurationByType[dataVisitDetailDurationByType$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
dir.create(paste0(outputPath,"/concepts/visit_detail"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/visit_detail/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
uniqueConcepts <- unique(dataVisitDetails$CONCEPT_ID)
x <- lapply(uniqueConcepts, buildVisitDetailReport)
}
generateAOMetadataReport <- function(connectionDetails, cdmDatabaseSchema, outputPath)
{
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
if (DatabaseConnector::existsTable(connection = conn, databaseSchema = cdmDatabaseSchema, tableName = "METADATA"))
{
writeLines("Generating metadata report")
queryMetadata <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/metadata/sqlMetadata.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
cdm_database_schema = cdmDatabaseSchema
)
dataMetadata <- DatabaseConnector::querySql(conn, queryMetadata)
data.table::fwrite(dataMetadata, file=paste0(outputPath, "/metadata.csv"))
}
}
generateAOObservationReports <- function(connectionDetails, observationsData, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating Observation reports")
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observation/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observation/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryObsFrequencyDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observation/sqlFrequencyDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryObservationsByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observation/sqlObservationsByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observation/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataObservationsByType <- DatabaseConnector::querySql(conn,queryObservationsByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn,queryAgeAtFirstOccurrence)
dataObsFrequencyDistribution <- DatabaseConnector::querySql(conn,queryObsFrequencyDistribution)
uniqueConcepts <- unique(observationsData$CONCEPT_ID)
buildObservationReport <- function(concept_id) {
summaryRecord <- observationsData[observationsData$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$CDM_TABLE_NAME <- "OBSERVATION"
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(3,4,5,6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(3,4)]
report$OBS_FREQUENCY_DISTRIBUTION <- dataObsFrequencyDistribution[dataObsFrequencyDistribution$CONCEPT_ID == concept_id,c(3,4)]
report$OBSERVATIONS_BY_TYPE <- dataObservationsByType[dataObservationsByType$OBSERVATION_CONCEPT_ID == concept_id,c(4,5)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
dir.create(paste0(outputPath,"/concepts/observation"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/observation/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
uniqueConcepts <- unique(observationsData$CONCEPT_ID)
x <- lapply(uniqueConcepts, buildObservationReport)
}
generateAOCdmSourceReport <- function(connectionDetails, cdmDatabaseSchema, outputPath)
{
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
if (DatabaseConnector::existsTable(connection = conn, databaseSchema = cdmDatabaseSchema, tableName = "CDM_SOURCE"))
{
writeLines("Generating cdm source report")
queryCdmSource <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/metadata/sqlCdmSource.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
cdm_database_schema = cdmDatabaseSchema
)
dataCdmSource <- DatabaseConnector::querySql(conn, queryCdmSource)
data.table::fwrite(dataCdmSource, file=paste0(outputPath, "/cdmsource.csv"))
}
}
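# Assembles selected sections of the previously written person.json and observationperiod.json
# reports into a single dashboard.json file.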
generateAODashboardReport <- function(outputPath)
{
output <- {}
  personReport <- jsonlite::fromJSON(paste(outputPath, "/person.json", sep=""))
output$SUMMARY <- personReport$SUMMARY
output$GENDER_DATA <- personReport$GENDER_DATA
  opReport <- jsonlite::fromJSON(paste(outputPath, "/observationperiod.json", sep=""))
output$AGE_AT_FIRST_OBSERVATION_HISTOGRAM <- opReport$AGE_AT_FIRST_OBSERVATION_HISTOGRAM
output$CUMULATIVE_DURATION <- opReport$CUMULATIVE_DURATION
output$OBSERVED_BY_MONTH <- opReport$OBSERVED_BY_MONTH
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file=paste(outputPath, "/dashboard.json", sep=""))
}
generateAOMeasurementReports <- function(connectionDetails, dataMeasurements, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating Measurement reports")
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryFrequencyDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlFrequencyDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryMeasurementsByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlMeasurementsByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryRecordsByUnit <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlRecordsByUnit.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryMeasurementValueDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlMeasurementValueDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryLowerLimitDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlLowerLimitDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryUpperLimitDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlUpperLimitDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryValuesRelativeToNorm <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlValuesRelativeToNorm.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataMeasurementsByType <- DatabaseConnector::querySql(conn,queryMeasurementsByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn,queryAgeAtFirstOccurrence)
dataRecordsByUnit <- DatabaseConnector::querySql(conn,queryRecordsByUnit)
dataMeasurementValueDistribution <- DatabaseConnector::querySql(conn,queryMeasurementValueDistribution)
dataLowerLimitDistribution <- DatabaseConnector::querySql(conn,queryLowerLimitDistribution)
dataUpperLimitDistribution <- DatabaseConnector::querySql(conn,queryUpperLimitDistribution)
dataValuesRelativeToNorm <- DatabaseConnector::querySql(conn,queryValuesRelativeToNorm)
dataFrequencyDistribution <- DatabaseConnector::querySql(conn,queryFrequencyDistribution)
uniqueConcepts <- unique(dataPrevalenceByMonth$CONCEPT_ID)
buildMeasurementReport <- function(concept_id) {
summaryRecord <- dataMeasurements[dataMeasurements$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CDM_TABLE_NAME <- "MEASUREMENT"
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(3,4,5,6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(3,4)]
report$FREQUENCY_DISTRIBUTION <- dataFrequencyDistribution[dataFrequencyDistribution$CONCEPT_ID == concept_id,c(3,4)]
report$MEASUREMENTS_BY_TYPE <- dataMeasurementsByType[dataMeasurementsByType$MEASUREMENT_CONCEPT_ID == concept_id,c(4,5)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$RECORDS_BY_UNIT <- dataRecordsByUnit[dataRecordsByUnit$MEASUREMENT_CONCEPT_ID == concept_id,c(4,5)]
report$MEASUREMENT_VALUE_DISTRIBUTION <- dataMeasurementValueDistribution[dataMeasurementValueDistribution$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$LOWER_LIMIT_DISTRIBUTION <- dataLowerLimitDistribution[dataLowerLimitDistribution$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$UPPER_LIMIT_DISTRIBUTION <- dataUpperLimitDistribution[dataUpperLimitDistribution$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$VALUES_RELATIVE_TO_NORM <- dataValuesRelativeToNorm[dataValuesRelativeToNorm$MEASUREMENT_CONCEPT_ID == concept_id,c(4,5)]
dir.create(paste0(outputPath,"/concepts/measurement"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/measurement/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
x <- lapply(uniqueConcepts, buildMeasurementReport)
}
generateAODrugEraReports <- function(connectionDetails, dataDrugEra, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating drug era reports")
queryAgeAtFirstExposure <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drugera/sqlAgeAtFirstExposure.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drugera/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drugera/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryLengthOfEra <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drugera/sqlLengthOfEra.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataAgeAtFirstExposure <- DatabaseConnector::querySql(conn,queryAgeAtFirstExposure)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataLengthOfEra <- DatabaseConnector::querySql(conn,queryLengthOfEra)
uniqueConcepts <- unique(dataDrugEra$CONCEPT_ID)
buildDrugEraReport <- function(concept_id) {
summaryRecord <- dataDrugEra[dataDrugEra$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CDM_TABLE_NAME <- "DRUG_ERA"
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$AGE_AT_FIRST_EXPOSURE <- dataAgeAtFirstExposure[dataAgeAtFirstExposure$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(2,3,4,5)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(2,3)]
report$LENGTH_OF_ERA <- dataLengthOfEra[dataLengthOfEra$CONCEPT_ID == concept_id, c(2,3,4,5,6,7,8,9)]
dir.create(paste0(outputPath,"/concepts/drug_era"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/drug_era/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
x <- lapply(uniqueConcepts, buildDrugEraReport)
}
generateAODrugReports <- function(connectionDetails, dataDrugs, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating drug reports")
queryAgeAtFirstExposure <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlAgeAtFirstExposure.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryDaysSupplyDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlDaysSupplyDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryDrugsByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlDrugsByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryDrugFrequencyDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlFrequencyDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryQuantityDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlQuantityDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryRefillsDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlRefillsDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataAgeAtFirstExposure <- DatabaseConnector::querySql(conn,queryAgeAtFirstExposure)
dataDaysSupplyDistribution <- DatabaseConnector::querySql(conn,queryDaysSupplyDistribution)
dataDrugsByType <- DatabaseConnector::querySql(conn,queryDrugsByType)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataQuantityDistribution <- DatabaseConnector::querySql(conn,queryQuantityDistribution)
dataRefillsDistribution <- DatabaseConnector::querySql(conn,queryRefillsDistribution)
dataDrugFrequencyDistribution <- DatabaseConnector::querySql(conn,queryDrugFrequencyDistribution)
uniqueConcepts <- unique(dataPrevalenceByMonth$CONCEPT_ID)
buildDrugReport <- function(concept_id) {
summaryRecord <- dataDrugs[dataDrugs$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CDM_TABLE_NAME <- "DRUG_EXPOSURE"
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$AGE_AT_FIRST_EXPOSURE <- dataAgeAtFirstExposure[dataAgeAtFirstExposure$DRUG_CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$DAYS_SUPPLY_DISTRIBUTION <- dataDaysSupplyDistribution[dataDaysSupplyDistribution$DRUG_CONCEPT_ID == concept_id, c(2,3,4,5,6,7,8,9)]
report$DRUGS_BY_TYPE <- dataDrugsByType[dataDrugsByType$DRUG_CONCEPT_ID == concept_id, c(3,4)]
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(3,4,5,6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(3,4)]
report$DRUG_FREQUENCY_DISTRIBUTION <- dataDrugFrequencyDistribution[dataDrugFrequencyDistribution$CONCEPT_ID == concept_id,c(3,4)]
report$QUANTITY_DISTRIBUTION <- dataQuantityDistribution[dataQuantityDistribution$DRUG_CONCEPT_ID == concept_id, c(2,3,4,5,6,7,8,9)]
report$REFILLS_DISTRIBUTION <- dataRefillsDistribution[dataRefillsDistribution$DRUG_CONCEPT_ID == concept_id, c(2,3,4,5,6,7,8,9)]
dir.create(paste0(outputPath,"/concepts/drug_exposure"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/drug_exposure/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
x <- lapply(uniqueConcepts, buildDrugReport)
}
generateAODeviceReports <- function(connectionDetails, dataDevices, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating device exposure reports")
queryAgeAtFirstExposure <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/device/sqlAgeAtFirstExposure.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryDevicesByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/device/sqlDevicesByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/device/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/device/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryDeviceFrequencyDistribution <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/device/sqlFrequencyDistribution.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataAgeAtFirstExposure <- DatabaseConnector::querySql(conn,queryAgeAtFirstExposure)
dataDevicesByType <- DatabaseConnector::querySql(conn,queryDevicesByType)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataDeviceFrequencyDistribution <- DatabaseConnector::querySql(conn,queryDeviceFrequencyDistribution)
uniqueConcepts <- unique(dataDevices$CONCEPT_ID)
buildDeviceReport <- function(concept_id) {
summaryRecord <- dataDevices[dataDevices$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CDM_TABLE_NAME <- "DEVICE_EXPOSURE"
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$AGE_AT_FIRST_EXPOSURE <- dataAgeAtFirstExposure[dataAgeAtFirstExposure$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$DEVICES_BY_TYPE <- dataDevicesByType[dataDevicesByType$CONCEPT_ID == concept_id, c(3,4)]
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(3,4,5,6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(3,4)]
report$DEVICE_FREQUENCY_DISTRIBUTION <- dataDeviceFrequencyDistribution[dataDeviceFrequencyDistribution$CONCEPT_ID == concept_id,c(3,4)]
dir.create(paste0(outputPath,"/concepts/device_exposure"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/device_exposure/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
x <- lapply(uniqueConcepts, buildDeviceReport)
}
generateAOConditionReports <- function(connectionDetails, dataConditions, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating condition reports")
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/condition/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/condition/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryConditionsByType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/condition/sqlConditionsByType.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryAgeAtFirstDiagnosis <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/condition/sqlAgeAtFirstDiagnosis.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn,queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn,queryPrevalenceByMonth)
dataConditionsByType <- DatabaseConnector::querySql(conn,queryConditionsByType)
dataAgeAtFirstDiagnosis <- DatabaseConnector::querySql(conn,queryAgeAtFirstDiagnosis)
uniqueConcepts <- unique(dataPrevalenceByMonth$CONCEPT_ID)
buildConditionReport <- function(concept_id) {
summaryRecord <- dataConditions[dataConditions$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CDM_TABLE_NAME <- "CONDITION_OCCURRENCE"
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(3,4,5,6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(3,4)]
report$CONDITIONS_BY_TYPE <- dataConditionsByType[dataConditionsByType$CONDITION_CONCEPT_ID == concept_id,c(2,3)]
report$AGE_AT_FIRST_DIAGNOSIS <- dataAgeAtFirstDiagnosis[dataAgeAtFirstDiagnosis$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
dir.create(paste0(outputPath,"/concepts/condition_occurrence"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/condition_occurrence/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
x <- lapply(uniqueConcepts, buildConditionReport)
}
generateAOConditionEraReports <- function(connectionDetails, dataConditionEra, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, outputPath)
{
writeLines("Generating condition era reports")
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/conditionera/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/conditionera/sqlPrevalenceByMonth.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryAgeAtFirstDiagnosis <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/conditionera/sqlAgeAtFirstDiagnosis.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
queryLengthOfEra <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/conditionera/sqlLengthOfEra.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
warnOnMissingParameters = FALSE,
cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataLengthOfEra <- DatabaseConnector::querySql(conn, queryLengthOfEra)
dataAgeAtFirstDiagnosis <- DatabaseConnector::querySql(conn, queryAgeAtFirstDiagnosis)
uniqueConcepts <- unique(dataConditionEra$CONCEPT_ID)
buildConditionEraReport <- function(concept_id) {
summaryRecord <- dataConditionEra[dataConditionEra$CONCEPT_ID==concept_id,]
report <- {}
report$CONCEPT_ID <- concept_id
report$CDM_TABLE_NAME <- "CONDITION_ERA"
report$CONCEPT_NAME <- summaryRecord$CONCEPT_NAME
report$NUM_PERSONS <- summaryRecord$NUM_PERSONS
report$PERCENT_PERSONS <-summaryRecord$PERCENT_PERSONS
report$RECORDS_PER_PERSON <- summaryRecord$RECORDS_PER_PERSON
report$AGE_AT_FIRST_EXPOSURE <- dataAgeAtFirstDiagnosis[dataAgeAtFirstDiagnosis$CONCEPT_ID == concept_id,c(2,3,4,5,6,7,8,9)]
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID == concept_id,c(2,3,4,5)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,c(2,3)]
report$LENGTH_OF_ERA <- dataLengthOfEra[dataLengthOfEra$CONCEPT_ID == concept_id, c(2,3,4,5,6,7,8,9)]
dir.create(paste0(outputPath,"/concepts/condition_era"),recursive=T,showWarnings = F)
filename <- paste(outputPath, "/concepts/condition_era/concept_" , concept_id , ".json", sep='')
write(jsonlite::toJSON(report),filename)
}
x <- lapply(uniqueConcepts, buildConditionEraReport)
}
#' @title exportToAres
#'
#' @description
#' \code{exportToAres} Exports Achilles statistics for ARES
#'
#' @details
#' Creates the CSV and JSON export files consumed by the ARES application, written under a
#' folder named for the CDM source abbreviation and release date.
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that contains server info, database type, optionally username/password, port)
#' @param cdmDatabaseSchema Name of the database schema that contains the OMOP CDM.
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis files. Default is cdmDatabaseSchema
#' @param outputPath A folder location to save the JSON files. Default is current working folder
#' @param vocabDatabaseSchema string name of database schema that contains OMOP Vocabulary. Default is cdmDatabaseSchema. On SQL Server, this should specify both the database and the schema, so for example 'results.dbo'.
#' @param reports A character vector of report groups to generate; an empty vector (the default)
#'                              generates all groups. Valid values are "density", "domain", "concept",
#'                              "quality", "performance", and "person".
#'
#' See \code{showReportTypes} for a list of all report types
#'
#' @return none
#'
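#' @examples
#' \dontrun{
#' # Minimal usage sketch; server name, schema names, and output path are placeholders
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#'                                                                 server = "yourserver")
#' exportToAres(connectionDetails,
#'              cdmDatabaseSchema = "cdm",
#'              resultsDatabaseSchema = "results",
#'              vocabDatabaseSchema = "vocab",
#'              outputPath = "your/output/path",
#'              reports = c("density", "person"))
#' }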
#'@importFrom data.table fwrite
#'@importFrom dplyr ntile desc
#'@export
#'
exportToAres <- function(
connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
vocabDatabaseSchema,
outputPath,
reports = c())
{
conn <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection = conn))
# generate a folder name for this release of the cdm characterization
sql <- SqlRender::render(sql = "select * from @cdmDatabaseSchema.cdm_source;",cdmDatabaseSchema = cdmDatabaseSchema)
sql <- SqlRender::translate(sql = sql, targetDialect = connectionDetails$dbms)
metadata <- DatabaseConnector::querySql(conn, sql)
sourceKey <- gsub(" ","_",metadata$CDM_SOURCE_ABBREVIATION)
releaseDateKey <- format(lubridate::ymd(metadata$CDM_RELEASE_DATE), "%Y%m%d")
sourceOutputPath <- file.path(outputPath, sourceKey, releaseDateKey)
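# e.g. <outputPath>/My_Source/20230131 (source abbreviation with spaces replaced by underscores,
# CDM release date formatted as yyyymmdd)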
dir.create(sourceOutputPath,showWarnings = F,recursive=T)
print(paste0("processing AO export to ", sourceOutputPath))
if (length(reports) == 0 || (length(reports) > 0 && "density" %in% reports)) {
# data density - totals
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/datadensity/totalrecords.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
totalRecordsData <- DatabaseConnector::querySql(conn,renderedSql)
colnames(totalRecordsData) <- c("domain", "date", "records")
totalRecordsData$date <- lubridate::parse_date_time(totalRecordsData$date, "ym")
data.table::fwrite(totalRecordsData, file=paste0(sourceOutputPath, "/datadensity-total.csv"))
domainAggregates <- aggregate(totalRecordsData$records, by=list(domain=totalRecordsData$domain), FUN=sum)
names(domainAggregates) <- c("domain","count_records")
data.table::fwrite(domainAggregates, file=paste0(sourceOutputPath, "/records-by-domain.csv"))
# data density - records per person
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/datadensity/recordsperperson.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
recordsPerPerson <- DatabaseConnector::querySql(conn,renderedSql)
colnames(recordsPerPerson) <- c("domain", "date", "records")
recordsPerPerson$date <- lubridate::parse_date_time(recordsPerPerson$date, "ym")
recordsPerPerson$records <- round(recordsPerPerson$records,2)
data.table::fwrite(recordsPerPerson, file=paste0(sourceOutputPath, "/datadensity-records-per-person.csv"))
# data density - concepts per person
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/datadensity/conceptsperperson.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
conceptsPerPerson <- DatabaseConnector::querySql(conn,renderedSql)
data.table::fwrite(conceptsPerPerson, file=paste0(sourceOutputPath, "/datadensity-concepts-per-person.csv"))
# data density - domains per person
renderedSql <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/datadensity/domainsperperson.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
domainsPerPerson <- DatabaseConnector::querySql(conn,renderedSql)
domainsPerPerson$PERCENT_VALUE <- round(as.numeric(domainsPerPerson$PERCENT_VALUE),2)
data.table::fwrite(domainsPerPerson, file=paste0(sourceOutputPath, "/datadensity-domains-per-person.csv"))
}
if (length(reports) == 0 || (length(reports) > 0 && ("domain" %in% reports || "concept" %in% reports))) {
# metadata
generateAOMetadataReport(connectionDetails, cdmDatabaseSchema, sourceOutputPath)
# cdm source
generateAOCdmSourceReport(connectionDetails, cdmDatabaseSchema, sourceOutputPath)
# domain summary - observation period
generateAOObservationPeriodReport(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
# death report
generateAODeathReport(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
# domain summary - conditions
queryConditions <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/condition/sqlConditionTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataConditions <- DatabaseConnector::querySql(conn,queryConditions)
dataConditions$PERCENT_PERSONS <- format(round(dataConditions$PERCENT_PERSONS,4), nsmall=4)
dataConditions$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataConditions$PERCENT_PERSONS),10)
dataConditions$RECORDS_PER_PERSON <- format(round(dataConditions$RECORDS_PER_PERSON,1),nsmall=1)
dataConditions$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataConditions$RECORDS_PER_PERSON),10)
data.table::fwrite(dataConditions, file=paste0(sourceOutputPath, "/domain-summary-condition_occurrence.csv"))
# domain summary - condition eras
queryConditionEra <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/conditionera/sqlConditionEraTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataConditionEra <- DatabaseConnector::querySql(conn,queryConditionEra)
dataConditionEra$PERCENT_PERSONS <- format(round(dataConditionEra$PERCENT_PERSONS,4), nsmall=4)
dataConditionEra$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataConditionEra$PERCENT_PERSONS),10)
dataConditionEra$RECORDS_PER_PERSON <- format(round(dataConditionEra$RECORDS_PER_PERSON,1),nsmall=1)
dataConditionEra$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataConditionEra$RECORDS_PER_PERSON),10)
data.table::fwrite(dataConditionEra, file=paste0(sourceOutputPath, "/domain-summary-condition_era.csv"))
# domain summary - drugs
queryDrugs <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlDrugTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataDrugs <- DatabaseConnector::querySql(conn,queryDrugs)
dataDrugs$PERCENT_PERSONS <- format(round(dataDrugs$PERCENT_PERSONS,4), nsmall=4)
dataDrugs$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataDrugs$PERCENT_PERSONS),10)
dataDrugs$RECORDS_PER_PERSON <- format(round(dataDrugs$RECORDS_PER_PERSON,1),nsmall=1)
dataDrugs$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataDrugs$RECORDS_PER_PERSON),10)
data.table::fwrite(dataDrugs, file=paste0(sourceOutputPath, "/domain-summary-drug_exposure.csv"))
# domain stratification by drug type concept
queryDrugType <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drug/sqlDomainDrugStratification.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataDrugType <- DatabaseConnector::querySql(conn,queryDrugType)
data.table::fwrite(dataDrugType, file=paste0(sourceOutputPath, "/domain-drug-stratification.csv"))
# domain summary - drug era
queryDrugEra <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/drugera/sqlDrugEraTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataDrugEra <- DatabaseConnector::querySql(conn,queryDrugEra)
dataDrugEra$PERCENT_PERSONS <- format(round(dataDrugEra$PERCENT_PERSONS,4), nsmall=4)
dataDrugEra$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataDrugEra$PERCENT_PERSONS),10)
dataDrugEra$RECORDS_PER_PERSON <- format(round(dataDrugEra$RECORDS_PER_PERSON,1),nsmall=1)
dataDrugEra$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataDrugEra$RECORDS_PER_PERSON), 10)
data.table::fwrite(dataDrugEra, file=paste0(sourceOutputPath, "/domain-summary-drug_era.csv"))
# domain summary - measurements
queryMeasurements <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/measurement/sqlMeasurementTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataMeasurements <- DatabaseConnector::querySql(conn,queryMeasurements)
dataMeasurements$PERCENT_PERSONS <- format(round(dataMeasurements$PERCENT_PERSONS,4), nsmall=4)
dataMeasurements$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataMeasurements$PERCENT_PERSONS), 10)
dataMeasurements$RECORDS_PER_PERSON <- format(round(dataMeasurements$RECORDS_PER_PERSON,1),nsmall=1)
dataMeasurements$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataMeasurements$RECORDS_PER_PERSON), 10)
data.table::fwrite(dataMeasurements, file=paste0(sourceOutputPath, "/domain-summary-measurement.csv"))
# domain summary - observations
queryObservations <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/observation/sqlObservationTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataObservations <- DatabaseConnector::querySql(conn,queryObservations)
dataObservations$PERCENT_PERSONS <- format(round(dataObservations$PERCENT_PERSONS,4), nsmall=4)
dataObservations$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataObservations$PERCENT_PERSONS), 10)
dataObservations$RECORDS_PER_PERSON <- format(round(dataObservations$RECORDS_PER_PERSON,1),nsmall=1)
dataObservations$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataObservations$RECORDS_PER_PERSON), 10)
data.table::fwrite(dataObservations, file=paste0(sourceOutputPath, "/domain-summary-observation.csv"))
# domain summary - visit details
queryVisitDetails <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visitdetail/sqlVisitDetailTreemap.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataVisitDetails <- DatabaseConnector::querySql(conn,queryVisitDetails)
dataVisitDetails$PERCENT_PERSONS <- format(round(dataVisitDetails$PERCENT_PERSONS,4), nsmall=4)
dataVisitDetails$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataVisitDetails$PERCENT_PERSONS),10)
dataVisitDetails$RECORDS_PER_PERSON <- format(round(dataVisitDetails$RECORDS_PER_PERSON,1),nsmall=1)
dataVisitDetails$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataVisitDetails$RECORDS_PER_PERSON),10)
names(dataVisitDetails)[names(dataVisitDetails) == 'CONCEPT_PATH'] <- 'CONCEPT_NAME'
data.table::fwrite(dataVisitDetails, file=paste0(sourceOutputPath, "/domain-summary-visit_detail.csv"))
# domain summary - visits
queryVisits <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visit/sqlVisitTreemap.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataVisits <- DatabaseConnector::querySql(conn,queryVisits)
dataVisits$PERCENT_PERSONS <- format(round(dataVisits$PERCENT_PERSONS,4), nsmall=4)
dataVisits$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataVisits$PERCENT_PERSONS),10)
dataVisits$RECORDS_PER_PERSON <- format(round(dataVisits$RECORDS_PER_PERSON,1),nsmall=1)
dataVisits$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataVisits$RECORDS_PER_PERSON),10)
names(dataVisits)[names(dataVisits) == 'CONCEPT_PATH'] <- 'CONCEPT_NAME'
data.table::fwrite(dataVisits, file=paste0(sourceOutputPath, "/domain-summary-visit_occurrence.csv"))
# domain stratification by visit concept
queryVisits <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/visit/sqlDomainVisitStratification.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataVisits <- DatabaseConnector::querySql(conn,queryVisits)
data.table::fwrite(dataVisits, file=paste0(sourceOutputPath, "/domain-visit-stratification.csv"))
# domain summary - procedures
queryProcedures <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/procedure/sqlProcedureTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataProcedures <- DatabaseConnector::querySql(conn,queryProcedures)
dataProcedures$PERCENT_PERSONS <- format(round(dataProcedures$PERCENT_PERSONS,4), nsmall=4)
dataProcedures$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataProcedures$PERCENT_PERSONS),10)
dataProcedures$RECORDS_PER_PERSON <- format(round(dataProcedures$RECORDS_PER_PERSON,1),nsmall=1)
dataProcedures$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataProcedures$RECORDS_PER_PERSON),10)
data.table::fwrite(dataProcedures, file=paste0(sourceOutputPath, "/domain-summary-procedure_occurrence.csv"))
# domain summary - devices
queryDevices <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/device/sqlDeviceTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
dataDevices <- DatabaseConnector::querySql(conn,queryDevices)
dataDevices$PERCENT_PERSONS <- format(round(dataDevices$PERCENT_PERSONS,4), nsmall=4)
dataDevices$PERCENT_PERSONS_NTILE <- dplyr::ntile(dplyr::desc(dataDevices$PERCENT_PERSONS),10)
dataDevices$RECORDS_PER_PERSON <- format(round(dataDevices$RECORDS_PER_PERSON,1),nsmall=1)
dataDevices$RECORDS_PER_PERSON_NTILE <- dplyr::ntile(dplyr::desc(dataDevices$RECORDS_PER_PERSON),10)
data.table::fwrite(dataDevices, file=paste0(sourceOutputPath, "/domain-summary-device_exposure.csv"))
}
# domain summary - provider
queryProviders <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/provider/sqlProviderSpecialty.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema
)
writeLines("Generating provider reports")
dataProviders <- DatabaseConnector::querySql(conn,queryProviders)
dataProviders$PERCENT_PERSONS <- format(round(dataProviders$PERCENT_PERSONS,4), nsmall=4)
data.table::fwrite(dataProviders, file=paste0(sourceOutputPath, "/domain-summary-provider.csv"))
if (length(reports) == 0 || (length(reports) > 0 && "quality" %in% reports)) {
# quality - completeness
queryCompleteness <- SqlRender::loadRenderTranslateSql(
sqlFilename = "export/quality/sqlCompletenessTable.sql",
packageName = "Achilles",
dbms = connectionDetails$dbms,
results_database_schema = resultsDatabaseSchema
)
dataCompleteness <- DatabaseConnector::querySql(conn,queryCompleteness)
dataCompleteness <- dataCompleteness[order(-dataCompleteness$RECORD_COUNT),]
# prevent downstream crashes with large files
if (nrow(dataCompleteness) > 100000) {
dataCompleteness <- dataCompleteness[1:100000,]
}
data.table::fwrite(dataCompleteness, file=paste0(sourceOutputPath, "/quality-completeness.csv"))
}
if (length(reports) == 0 || (length(reports) > 0 && "performance" %in% reports)) {
generateAOAchillesPerformanceReport(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
}
if (length(reports) == 0 || (length(reports) > 0 && "concept" %in% reports)) {
# concept level reporting
conceptsFolder <- file.path(sourceOutputPath,"concepts")
dir.create(conceptsFolder,showWarnings = F)
generateAOVisitReports(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAOVisitDetailReports(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAOMeasurementReports(connectionDetails, dataMeasurements, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAOConditionReports(connectionDetails, dataConditions, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAOConditionEraReports(connectionDetails, dataConditionEra, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAODrugReports(connectionDetails, dataDrugs, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAODeviceReports(connectionDetails, dataDevices, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAODrugEraReports(connectionDetails, dataDrugEra, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAOProcedureReports(connectionDetails, dataProcedures, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
generateAOObservationReports(connectionDetails, dataObservations, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
}
if (length(reports) == 0 || (length(reports) > 0 && "person" %in% reports)) {
generateAOPersonReport(connectionDetails, cdmDatabaseSchema, resultsDatabaseSchema, vocabDatabaseSchema, sourceOutputPath)
}
}
#' @title
#' exportResultsToCSV
#'
#' @description
#' \code{exportResultsToCSV} exports all results to a CSV file
#'
#' @details
#' \code{exportResultsToCSV} writes a CSV file with all results to the export folder.
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param resultsDatabaseSchema Fully qualified name of database schema that we can write final
#' results to. Default is cdmDatabaseSchema. On SQL Server, this should
#' specify both the database and the schema, so for example, on SQL
#' Server, 'cdm_results.dbo'.
#' @param analysisIds (OPTIONAL) A vector containing the set of Achilles analysisIds for
#' which results will be generated. If not specified, all analyses will
#' be executed. Use \code{\link{getAnalysisDetails}} to get a list of
#' all Achilles analyses and their Ids.
#' @param minCellCount To avoid patient identification, cells with small counts (<=
#' minCellCount) are deleted. Set to 0 for complete summary without
#' small cell count restrictions.
#' @param exportFolder Path to store results
#' @returns
#' No return value. Called to export CSV file to the file system.
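#' @examples
#' \dontrun{
#' # Minimal usage sketch; server name, schema, and export folder are placeholders
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#'                                                                 server = "yourserver")
#' exportResultsToCSV(connectionDetails,
#'                    resultsDatabaseSchema = "results",
#'                    minCellCount = 5,
#'                    exportFolder = "your/output/path")
#' }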
#' @export
exportResultsToCSV <- function(connectionDetails,
resultsDatabaseSchema,
analysisIds = c(),
minCellCount = 5,
exportFolder) {
# Ensure the export folder exists
if (!file.exists(exportFolder)) {
dir.create(exportFolder, recursive = TRUE)
}
# Connect to the database
connection <- DatabaseConnector::connect(connectionDetails)
on.exit(DatabaseConnector::disconnect(connection))
# Obtain the data from the achilles tables
sql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/raw/export_raw_achilles_results.sql",
packageName = "Achilles", dbms = connectionDetails$dbms, warnOnMissingParameters = FALSE, results_database_schema = resultsDatabaseSchema,
min_cell_count = minCellCount, analysis_ids = analysisIds)
ParallelLogger::logInfo("Querying achilles_results")
results <- DatabaseConnector::querySql(connection = connection, sql = sql)
# Save the data to the export folder
readr::write_csv(results, file.path(exportFolder, "achilles_results.csv"))
}
# @file exportToJson
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of Achilles
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Observational Health Data Sciences and Informatics
# @author Martijn Schuemie
# @author Patrick Ryan
# @author Vojtech Huser
# @author Chris Knoll
# @author Ajit Londhe
# @author Taha Abdul-Basser
# When adding a new report, append it to inst/csv/export/all_reports.csv
getAllReports <- function() {
reports <- read.csv(file = system.file("csv", "export", "all_reports.csv", package = "Achilles"),
stringsAsFactors = FALSE, header = TRUE)$REPORT
return(reports)
}
initOutputPath <- function(outputPath) {
# create output path if it doesn't already exist, warn if it does
if (file.exists(outputPath)) {
writeLines(paste("Warning: folder", outputPath, "already exists"))
} else {
dir.create(paste(outputPath, "/", sep = ""))
}
}
#' @title
#' showReportTypes
#'
#' @description
#' \code{showReportTypes} Displays the Report Types that can be passed as vector values to
#' exportToJson.
#'
#' @details
#' exportToJson supports the following report types: "CONDITION", "CONDITION_ERA", "DASHBOARD",
#' "DATA_DENSITY", "DEATH", "DRUG", "DRUG_ERA", "MEASUREMENT", "META", "OBSERVATION",
#' "OBSERVATION_PERIOD", "PERFORMANCE", "PERSON", "PROCEDURE", "VISIT", "VISIT_DETAIL"
#'
#' @return
#' none (opens the allReports vector in a View() display)
#' @examples
#' \dontrun{
#' showReportTypes()
#' }
#' @export
showReportTypes <- function() {
utils::View(getAllReports())
}
#' @title
#' exportToJson
#'
#' @description
#' \code{exportToJson} Exports Achilles statistics into a JSON form for reports.
#'
#' @details
#' Creates individual files for each report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the OMOP CDM.
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath A folder location to save the JSON files. Default is current working
#' folder
#' @param reports A character vector listing the set of reports to generate. Default is
#' all reports.
#' @param vocabDatabaseSchema string name of database schema that contains OMOP Vocabulary. Default
#' is cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#' @param compressIntoOneFile Boolean indicating if the JSON files should be compressed into one
#' zip file. Please note that in Windows, the zip application must be
#' stored in the system environment, e.g. Sys.setenv(R_ZIPCMD =
#' "some_path_to_zip"). Due to recursion, the actual Achilles files and
#' folders will be embedded in any parent directories that the source
#' folder has. See \code{showReportTypes} for a list of all report types
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportToJson(connectionDetails, cdmDatabaseSchema = "cdm4_sim", outputPath = "your/output/path")
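#' # Optionally compress all JSON files into a single zip; on Windows a zip utility must be
#' # available, e.g. Sys.setenv(R_ZIPCMD = "some_path_to_zip")
#' exportToJson(connectionDetails, cdmDatabaseSchema = "cdm4_sim",
#'              outputPath = "your/output/path", compressIntoOneFile = TRUE)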
#' }
#' @export
exportToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = getAllReports(), vocabDatabaseSchema = cdmDatabaseSchema, compressIntoOneFile = FALSE) {
start <- Sys.time()
if (missing(resultsDatabaseSchema))
resultsDatabaseSchema <- cdmDatabaseSchema
initOutputPath(outputPath)
# connect to the results schema
connectionDetails$schema <- resultsDatabaseSchema
conn <- DatabaseConnector::connect(connectionDetails)
# generate reports
if ("CONDITION" %in% reports) {
generateConditionTreemap(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateConditionReports(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("CONDITION_ERA" %in% reports) {
generateConditionEraTreemap(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateConditionEraReports(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("DATA_DENSITY" %in% reports)
generateDataDensityReport(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
if ("DEATH" %in% reports) {
generateDeathReports(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("DRUG_ERA" %in% reports) {
generateDrugEraTreemap(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateDrugEraReports(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("DRUG" %in% reports) {
generateDrugTreemap(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema)
generateDrugReports(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema)
}
if (("META" %in% reports)) {
generateMetadataReport(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateCdmSourceReport(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if (("MEASUREMENT" %in% reports)) {
generateMeasurementTreemap(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateMeasurementReports(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("OBSERVATION" %in% reports) {
generateObservationTreemap(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateObservationReports(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("OBSERVATION_PERIOD" %in% reports)
generateObservationPeriodReport(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
if ("PERSON" %in% reports)
generatePersonReport(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
if ("PROCEDURE" %in% reports) {
generateProcedureTreemap(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateProcedureReports(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("VISIT" %in% reports) {
generateVisitTreemap(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateVisitReports(conn, connectionDetails$dbms, cdmDatabaseSchema, resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("VISIT_DETAIL" %in% reports) {
generateVisitDetailTreemap(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
generateVisitDetailReports(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
if ("PERFORMANCE" %in% reports) {
generateAchillesPerformanceReport(conn,
connectionDetails$dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath, vocabDatabaseSchema)
}
# dashboard is always last
if ("DASHBOARD" %in% reports) {
generateDashboardReport(outputPath)
}
DatabaseConnector::disconnect(conn)
if (compressIntoOneFile) {
zip(zipfile = file.path(outputPath,
sprintf("%s.zip", cdmDatabaseSchema)), files = c(outputPath),
flags = c("-r"))
}
delta <- Sys.time() - start
writeLines(paste("Export took", signif(delta, 3), attr(delta, "units")))
writeLines(paste("JSON files can now be found in", outputPath))
}
#' @title
#' exportConditionToJson
#'
#' @description
#' \code{exportConditionToJson} Exports Achilles Condition report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Condition report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportConditionToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportConditionToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("CONDITION"),
vocabDatabaseSchema)
}
#' @title
#' exportConditionEraToJson
#'
#' @description
#' \code{exportConditionEraToJson} Exports Achilles Condition Era report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Condition Era report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportConditionEraToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportConditionEraToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("CONDITION_ERA"),
vocabDatabaseSchema)
}
#' @title
#' exportDashboardToJson
#'
#' @description
#' \code{exportDashboardToJson} Exports Achilles Dashboard report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Dashboard report found in Achilles.Web. NOTE: This function reads the
#' results from the other exports and aggregates them into a single file. If other reports are not
#' generated, this function will fail.
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportDashboardToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportDashboardToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("DASHBOARD"),
vocabDatabaseSchema)
}
#' @title
#' exportDataDensityToJson
#'
#' @description
#' \code{exportDataDensityToJson} Exports Achilles Data Density report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Data Density report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportDataDensityToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportDataDensityToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("DATA_DENSITY"),
vocabDatabaseSchema)
}
#' @title
#' exportDeathToJson
#'
#' @description
#' \code{exportDeathToJson} Exports Achilles Death report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Death report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportDeathToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportDeathToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("DEATH"),
vocabDatabaseSchema)
}
#' @title
#' exportDrugToJson
#'
#' @description
#' \code{exportDrugToJson} Exports Achilles Drug report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Drug report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportDrugToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportDrugToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("DRUG"),
vocabDatabaseSchema)
}
#' @title
#' exportDrugEraToJson
#'
#' @description
#' \code{exportDrugEraToJson} Exports Achilles Drug Era report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Drug Era report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportDrugEraToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportDrugEraToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("DRUG_ERA"),
vocabDatabaseSchema)
}
#' @title
#' exportMetaToJson
#'
#' @description
#' \code{exportMetaToJson} Exports Achilles META report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Achilles META report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportMetaToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportMetaToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("META"),
vocabDatabaseSchema)
}
#' @title
#' exportMeasurementToJson
#'
#' @description
#' \code{exportMeasurementToJson} Exports Measurement report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Measurement report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportMeasurementToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportMeasurementToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("MEASUREMENT"),
vocabDatabaseSchema)
}
#' @title
#' exportObservationToJson
#'
#' @description
#' \code{exportObservationToJson} Exports Achilles Observation report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Observation report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportObservationToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportObservationToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("OBSERVATION"),
vocabDatabaseSchema)
}
#' @title
#' exportObservationPeriodToJson
#'
#' @description
#' \code{exportObservationPeriodToJson} Exports Achilles Observation Period report into a JSON form
#' for reports.
#'
#' @details
#' Creates individual files for Observation Period report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportObservationPeriodToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportObservationPeriodToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(
connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("OBSERVATION_PERIOD"),
vocabDatabaseSchema
)
}
#' @title
#' exportPersonToJson
#'
#' @description
#' \code{exportPersonToJson} Exports Achilles Person report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Person report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis files.
#' Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportPersonToJson(connectionDetails,
#' cdmDatabaseSchema = "cdm4_sim",
#' outputPath = "your/output/path")
#' }
#' @export
exportPersonToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("PERSON"),
vocabDatabaseSchema)
}
#' @title
#' exportProcedureToJson
#'
#' @description
#' \code{exportProcedureToJson} Exports Achilles Procedure report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Procedure report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type ConnectionDetail (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#'
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#' cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportProcedureToJson(connectionDetails,
#'                       cdmDatabaseSchema = "cdm4_sim",
#'                       resultsDatabaseSchema = "cdm4_sim",
#'                       outputPath = "your/output/path")
#' }
#' @export
exportProcedureToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("PROCEDURE"),
vocabDatabaseSchema)
}
#' @title
#' exportVisitToJson
#'
#' @description
#' \code{exportVisitToJson} Exports Achilles Visit report into a JSON form for reports.
#'
#' @details
#' Creates individual files for Visit report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type connectionDetails (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#'                               cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportVisitToJson(connectionDetails,
#'                   cdmDatabaseSchema = "cdm4_sim",
#'                   resultsDatabaseSchema = "cdm4_sim",
#'                   outputPath = "your/output/path")
#' }
#' @export
exportVisitToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(
connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("VISIT"),
vocabDatabaseSchema
)
}
#' @title
#' exportVisitDetailToJson
#'
#' @description
#' \code{exportVisitDetailToJson} Exports Achilles VISIT_DETAIL report into a JSON form for reports.
#'
#' @details
#' Creates individual files for VISIT_DETAIL report found in Achilles.Web
#'
#'
#' @param connectionDetails An R object of type connectionDetails (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#'                               cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportVisitDetailToJson(connectionDetails,
#'                         cdmDatabaseSchema = "cdm4_sim",
#'                         resultsDatabaseSchema = "cdm4_sim",
#'                         outputPath = "your/output/path")
#' }
#' @export
exportVisitDetailToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(
connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("VISIT_DETAIL"),
vocabDatabaseSchema
)
}
#' @title
#' exportPerformanceToJson
#'
#' @description
#' \code{exportPerformanceToJson} Exports Achilles performance report into a JSON form for reports.
#'
#' @details
#' Creates performance report including how long each Achilles result took to generate.
#'
#'
#' @param connectionDetails An R object of type connectionDetails (details for the function that
#' contains server info, database type, optionally username/password,
#' port)
#' @param cdmDatabaseSchema Name of the database schema that contains the vocabulary files
#' @param resultsDatabaseSchema Name of the database schema that contains the Achilles analysis
#' files. Default is cdmDatabaseSchema
#' @param outputPath folder location to save the JSON files. Default is current working
#' folder
#' @param vocabDatabaseSchema name of database schema that contains OMOP Vocabulary. Default is
#'                               cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#'
#' @return
#' none
#' @examples
#' \dontrun{
#' connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "sql server",
#' server = "yourserver")
#' exportPerformanceToJson(connectionDetails,
#'                         cdmDatabaseSchema = "cdm4_sim",
#'                         resultsDatabaseSchema = "cdm4_sim",
#'                         outputPath = "your/output/path")
#' }
#' @export
exportPerformanceToJson <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
exportToJson(
connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
reports = c("PERFORMANCE"),
vocabDatabaseSchema
)
}
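# Internal helper: queries how long each Achilles analysis took to run and writes the result
# to achillesperformance.json in outputPath. Expects an open DatabaseConnector connection (conn).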
generateAchillesPerformanceReport <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating achilles performance report")
output <- {
}
queryAchillesPerformance <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/performance/sqlAchillesPerformance.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
output$MESSAGES <- DatabaseConnector::querySql(conn, queryAchillesPerformance)
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file = paste(outputPath, "/achillesperformance.json", sep = ""))
}
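# Internal helper: exports the CDM METADATA table (when present in cdmDatabaseSchema) to
# metadata.json; the export is skipped if the table does not exist.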
generateMetadataReport <- function(conn, dbms, cdmDatabaseSchema, resultsDatabaseSchema, outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating metadata report")
output <- {
}
queryMetadata <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/metadata/sqlMetadata.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema)
if ("METADATA" %in% DatabaseConnector::getTableNames(connection = conn, databaseSchema = cdmDatabaseSchema)) {
output$MESSAGES <- DatabaseConnector::querySql(conn, queryMetadata)
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file = paste(outputPath, "/metadata.json", sep = ""))
} else {
writeLines("No METADATA table found, skipping export")
}
}
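# Internal helper: exports the CDM_SOURCE table (when present in cdmDatabaseSchema) to
# cdm_source.json; the export is skipped if the table does not exist.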
generateCdmSourceReport <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating cdm source report")
output <- {
}
queryCdmSource <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/metadata/sqlCdmSource.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema)
if ("CDM_SOURCE" %in% DatabaseConnector::getTableNames(connection = conn, databaseSchema = cdmDatabaseSchema)) {
output$MESSAGES <- DatabaseConnector::querySql(conn, queryCdmSource)
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file = paste(outputPath, "/cdm_source.json", sep = ""))
} else {
writeLines("No CDM_SOURCE table found, skipping export")
}
}
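# Internal helper: queries the drug era treemap data and writes drugera_treemap.json.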
generateDrugEraTreemap <- function(conn, dbms, cdmDatabaseSchema, resultsDatabaseSchema, outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating drug era treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryDrugEraTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drugera/sqlDrugEraTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataDrugEraTreemap <- DatabaseConnector::querySql(conn, queryDrugEraTreemap)
write(jsonlite::toJSON(dataDrugEraTreemap, method = "C"), paste(outputPath, "/drugera_treemap.json",
sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
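# Internal helper: queries the drug treemap data and writes drug_treemap.json.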
generateDrugTreemap <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating drug treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryDrugTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlDrugTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataDrugTreemap <- DatabaseConnector::querySql(conn, queryDrugTreemap)
write(jsonlite::toJSON(dataDrugTreemap, method = "C"),
paste(outputPath, "/drug_treemap.json", sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
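# Internal helper: queries the condition treemap data and writes condition_treemap.json.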
generateConditionTreemap <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating condition treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryConditionTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/condition/sqlConditionTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataConditionTreemap <- DatabaseConnector::querySql(conn, queryConditionTreemap)
write(jsonlite::toJSON(dataConditionTreemap, method = "C"),
paste(outputPath, "/condition_treemap.json",
sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
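# Internal helper: queries the condition era treemap data and writes conditionera_treemap.json.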
generateConditionEraTreemap <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating condition era treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryConditionEraTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/conditionera/sqlConditionEraTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataConditionEraTreemap <- DatabaseConnector::querySql(conn, queryConditionEraTreemap)
write(jsonlite::toJSON(dataConditionEraTreemap, method = "C"),
paste(outputPath, "/conditionera_treemap.json",
sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
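# Internal helper: reads condition_treemap.json and writes one conditions/condition_<concept_id>.json
# per concept (prevalence by gender/age/year, prevalence by month, conditions by type,
# age at first diagnosis).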
generateConditionReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating condition reports")
treemapFile <- file.path(outputPath, "condition_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
conditionsFolder <- file.path(outputPath, "conditions")
if (file.exists(conditionsFolder)) {
writeLines(paste("Warning: folder ", conditionsFolder, " already exists"))
} else {
dir.create(paste(conditionsFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/condition/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/condition/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryConditionsByType <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/condition/sqlConditionsByType.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryAgeAtFirstDiagnosis <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/condition/sqlAgeAtFirstDiagnosis.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataConditionsByType <- DatabaseConnector::querySql(conn, queryConditionsByType)
dataAgeAtFirstDiagnosis <- DatabaseConnector::querySql(conn, queryAgeAtFirstDiagnosis)
buildConditionReport <- function(concept_id) {
report <- {
}
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(3, 4, 5, 6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(3, 4)]
report$CONDITIONS_BY_TYPE <- dataConditionsByType[dataConditionsByType$CONDITION_CONCEPT_ID ==
concept_id, c(2, 3)]
report$AGE_AT_FIRST_DIAGNOSIS <- dataAgeAtFirstDiagnosis[dataAgeAtFirstDiagnosis$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
filename <- paste(outputPath, "/conditions/condition_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildConditionReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
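# Internal helper: reads conditionera_treemap.json and writes one
# conditioneras/condition_<concept_id>.json per concept (prevalence, era length,
# age at first diagnosis).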
generateConditionEraReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating condition era reports")
treemapFile <- file.path(outputPath, "conditionera_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
conditionsFolder <- file.path(outputPath, "conditioneras")
if (file.exists(conditionsFolder)) {
writeLines(paste("Warning: folder ", conditionsFolder, " already exists"))
} else {
dir.create(paste(conditionsFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/conditionera/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/conditionera/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryAgeAtFirstDiagnosis <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/conditionera/sqlAgeAtFirstDiagnosis.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryLengthOfEra <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/conditionera/sqlLengthOfEra.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataLengthOfEra <- DatabaseConnector::querySql(conn, queryLengthOfEra)
dataAgeAtFirstDiagnosis <- DatabaseConnector::querySql(conn, queryAgeAtFirstDiagnosis)
buildConditionEraReport <- function(concept_id) {
report <- {
}
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(2, 3)]
report$LENGTH_OF_ERA <- dataLengthOfEra[dataLengthOfEra$CONCEPT_ID == concept_id, c(2, 3, 4,
5, 6, 7, 8, 9)]
report$AGE_AT_FIRST_DIAGNOSIS <- dataAgeAtFirstDiagnosis[dataAgeAtFirstDiagnosis$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
filename <- paste(outputPath, "/conditioneras/condition_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildConditionEraReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
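# Internal helper: reads drugera_treemap.json and writes one drugeras/drug_<concept_id>.json
# per concept (age at first exposure, prevalence, era length).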
generateDrugEraReports <- function(conn, dbms, cdmDatabaseSchema, resultsDatabaseSchema, outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating drug era reports")
treemapFile <- file.path(outputPath, "drugera_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
drugerasFolder <- file.path(outputPath, "drugeras")
if (file.exists(drugerasFolder)) {
writeLines(paste("Warning: folder ", drugerasFolder, " already exists"))
} else {
dir.create(paste(drugerasFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryAgeAtFirstExposure <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drugera/sqlAgeAtFirstExposure.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drugera/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drugera/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryLengthOfEra <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drugera/sqlLengthOfEra.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataAgeAtFirstExposure <- DatabaseConnector::querySql(conn, queryAgeAtFirstExposure)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataLengthOfEra <- DatabaseConnector::querySql(conn, queryLengthOfEra)
buildDrugEraReport <- function(concept_id) {
report <- {
}
report$AGE_AT_FIRST_EXPOSURE <- dataAgeAtFirstExposure[dataAgeAtFirstExposure$CONCEPT_ID == concept_id,
c(2, 3, 4, 5, 6, 7, 8, 9)]
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(2, 3)]
report$LENGTH_OF_ERA <- dataLengthOfEra[dataLengthOfEra$CONCEPT_ID == concept_id, c(2, 3, 4,
5, 6, 7, 8, 9)]
filename <- paste(outputPath, "/drugeras/drug_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildDrugEraReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
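# Internal helper: reads drug_treemap.json and writes one drugs/drug_<concept_id>.json per
# concept (age at first exposure, days supply, drugs by type, prevalence, frequency,
# quantity and refills distributions).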
generateDrugReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating drug reports")
treemapFile <- file.path(outputPath, "drug_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
drugsFolder <- file.path(outputPath, "drugs")
if (file.exists(drugsFolder)) {
writeLines(paste("Warning: folder ", drugsFolder, " already exists"))
} else {
dir.create(paste(drugsFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryAgeAtFirstExposure <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlAgeAtFirstExposure.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryDaysSupplyDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlDaysSupplyDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryDrugsByType <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlDrugsByType.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryDrugFrequencyDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlFrequencyDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryQuantityDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlQuantityDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryRefillsDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/drug/sqlRefillsDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataAgeAtFirstExposure <- DatabaseConnector::querySql(conn, queryAgeAtFirstExposure)
dataDaysSupplyDistribution <- DatabaseConnector::querySql(conn, queryDaysSupplyDistribution)
dataDrugsByType <- DatabaseConnector::querySql(conn, queryDrugsByType)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataQuantityDistribution <- DatabaseConnector::querySql(conn, queryQuantityDistribution)
dataRefillsDistribution <- DatabaseConnector::querySql(conn, queryRefillsDistribution)
dataDrugFrequencyDistribution <- DatabaseConnector::querySql(conn, queryDrugFrequencyDistribution)
buildDrugReport <- function(concept_id) {
report <- {
}
report$AGE_AT_FIRST_EXPOSURE <- dataAgeAtFirstExposure[dataAgeAtFirstExposure$DRUG_CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$DAYS_SUPPLY_DISTRIBUTION <- dataDaysSupplyDistribution[dataDaysSupplyDistribution$DRUG_CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$DRUGS_BY_TYPE <- dataDrugsByType[dataDrugsByType$DRUG_CONCEPT_ID == concept_id, c(3, 4)]
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(3, 4, 5, 6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(3, 4)]
report$DRUG_FREQUENCY_DISTRIBUTION <- dataDrugFrequencyDistribution[dataDrugFrequencyDistribution$CONCEPT_ID ==
concept_id, c(3, 4)]
report$QUANTITY_DISTRIBUTION <- dataQuantityDistribution[dataQuantityDistribution$DRUG_CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$REFILLS_DISTRIBUTION <- dataRefillsDistribution[dataRefillsDistribution$DRUG_CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
filename <- paste(outputPath, "/drugs/drug_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildDrugReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
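# Internal helper: queries the procedure treemap data and writes procedure_treemap.json.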
generateProcedureTreemap <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating procedure treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryProcedureTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/procedure/sqlProcedureTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataProcedureTreemap <- DatabaseConnector::querySql(conn, queryProcedureTreemap)
write(jsonlite::toJSON(dataProcedureTreemap, method = "C"),
paste(outputPath, "/procedure_treemap.json",
sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
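# Internal helper: reads procedure_treemap.json and writes one
# procedures/procedure_<concept_id>.json per concept (prevalence, frequency, procedures by type,
# age at first occurrence).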
generateProcedureReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating procedure reports")
treemapFile <- file.path(outputPath, "procedure_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
proceduresFolder <- file.path(outputPath, "procedures")
if (file.exists(proceduresFolder)) {
writeLines(paste("Warning: folder ", proceduresFolder, " already exists"))
} else {
dir.create(paste(proceduresFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/procedure/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/procedure/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryProcedureFrequencyDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/procedure/sqlFrequencyDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryProceduresByType <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/procedure/sqlProceduresByType.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/procedure/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataProceduresByType <- DatabaseConnector::querySql(conn, queryProceduresByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn, queryAgeAtFirstOccurrence)
dataProcedureFrequencyDistribution <- DatabaseConnector::querySql(conn,
queryProcedureFrequencyDistribution)
buildProcedureReport <- function(concept_id) {
report <- {
}
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(3, 4, 5, 6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(3, 4)]
report$PROCEDURE_FREQUENCY_DISTRIBUTION <- dataProcedureFrequencyDistribution[dataProcedureFrequencyDistribution$CONCEPT_ID ==
concept_id, c(3, 4)]
report$PROCEDURES_BY_TYPE <- dataProceduresByType[dataProceduresByType$PROCEDURE_CONCEPT_ID ==
concept_id, c(4, 5)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
filename <- paste(outputPath, "/procedures/procedure_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildProcedureReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
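# Internal helper: builds person.json with the population summary, gender/race/ethnicity
# distributions, and the year-of-birth histogram.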
generatePersonReport <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating person reports")
progressBar <- utils::txtProgressBar(max = 7, style = 3)
progress <- 0
output <- {
}
# 1. Title: Population a. Visualization: Table b.Row #1: CDM source name c.Row #2: # of persons
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/person/population.sql",
packageName = "Achilles",
dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema, results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema)
personSummaryData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$SUMMARY <- personSummaryData
# 2. Title: Gender distribution a. Visualization: Pie b.Category: Gender c.Value: % of persons
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/person/gender.sql",
packageName = "Achilles",
dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema, results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema)
genderData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$GENDER_DATA <- genderData
# 3. Title: Race distribution a. Visualization: Pie b.Category: Race c.Value: % of persons
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/person/race.sql",
packageName = "Achilles",
dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema, results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema)
raceData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$RACE_DATA <- raceData
# 4. Title: Ethnicity distribution a. Visualization: Pie b.Category: Ethnicity c.Value: % of
# persons
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/person/ethnicity.sql",
packageName = "Achilles",
dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema, results_database_schema = resultsDatabaseSchema,
vocab_database_schema = vocabDatabaseSchema)
ethnicityData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$ETHNICITY_DATA <- ethnicityData
# 5. Title: Year of birth distribution a. Visualization: Histogram b.Category: Year of birth
# c.Value: # of persons
birthYearHist <- {
}
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/person/yearofbirth_stats.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
birthYearStats <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
birthYearHist$MIN <- birthYearStats$MIN_VALUE
birthYearHist$MAX <- birthYearStats$MAX_VALUE
birthYearHist$INTERVAL_SIZE <- birthYearStats$INTERVAL_SIZE
birthYearHist$INTERVALS <- (birthYearStats$MAX_VALUE - birthYearStats$MIN_VALUE)/birthYearStats$INTERVAL_SIZE
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/person/yearofbirth_data.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
birthYearData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
birthYearHist$DATA <- birthYearData
output$BIRTH_YEAR_HISTOGRAM <- birthYearHist
# Convert to JSON and save file result
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file = paste(outputPath, "/person.json", sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
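# Internal helper: builds observationperiod.json with age at first observation, observation
# length, cumulative duration, coverage by year and month, and observation periods per person.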
generateObservationPeriodReport <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating observation period reports")
progressBar <- utils::txtProgressBar(max = 11, style = 3)
progress <- 0
output <- {
}
# 1. Title: Age at time of first observation a. Visualization: Histogram b. Category: Age
# c.Value: # of persons
ageAtFirstObservationHist <- {
}
# stats are hard coded for this result to make x-axis consistent across datasources
ageAtFirstObservationHist$MIN <- 0
ageAtFirstObservationHist$MAX <- 100
ageAtFirstObservationHist$INTERVAL_SIZE <- 1
ageAtFirstObservationHist$INTERVALS <- 100
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/ageatfirst.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
ageAtFirstObservationData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
ageAtFirstObservationHist$DATA <- ageAtFirstObservationData
output$AGE_AT_FIRST_OBSERVATION_HISTOGRAM <- ageAtFirstObservationHist
# 2. Title: Age by gender a.Visualization: Side-by-side boxplot b.Category: Gender c.Values:
# Min/25%/Median/95%/Max - age at time of first observation
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/agebygender.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
ageByGenderData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$AGE_BY_GENDER <- ageByGenderData
# 3. Title: Length of observation a.Visualization: bar b.Category: length of observation period,
# 30d increments c.Values: # of persons
observationLengthHist <- {
}
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/observationlength_stats.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
observationLengthStats <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
observationLengthHist$MIN <- observationLengthStats$MIN_VALUE
observationLengthHist$MAX <- observationLengthStats$MAX_VALUE
observationLengthHist$INTERVAL_SIZE <- observationLengthStats$INTERVAL_SIZE
observationLengthHist$INTERVALS <- (observationLengthStats$MAX_VALUE - observationLengthStats$MIN_VALUE)/observationLengthStats$INTERVAL_SIZE
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/observationlength_data.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
observationLengthData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
observationLengthHist$DATA <- observationLengthData
output$OBSERVATION_LENGTH_HISTOGRAM <- observationLengthHist
# 4. Title: Cumulative duration of observation a.Visualization: scatterplot b.X-axis: length of
# observation period c.Y-axis: % of population observed d.Note: will look like a Kaplan-Meier
# survival plot, but information is the same as shown in a length of observation barchart, just
# plotted as cumulative
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/cumulativeduration.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
cumulativeDurationData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$CUMULATIVE_DURATION <- cumulativeDurationData
# 5. Title: Observation period length distribution, by gender a.Visualization: side-by-side
# boxplot b.Category: Gender c.Values: Min/25%/Median/95%/Max length of observation period
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/observationlengthbygender.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
opLengthByGenderData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$OBSERVATION_PERIOD_LENGTH_BY_GENDER <- opLengthByGenderData
# 6. Title: Observation period length distribution, by age a.Visualization: side-by-side boxplot
# b.Category: Age decile c.Values: Min/25%/Median/95%/Max length of observation period
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/observationlengthbyage.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
opLengthByAgeData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$OBSERVATION_PERIOD_LENGTH_BY_AGE <- opLengthByAgeData
# 7. Title: Number of persons with continuous observation by year a.Visualization: Histogram
# b.Category: Year c.Values: # of persons with continuous coverage
observedByYearHist <- {
}
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/observedbyyear_stats.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
observedByYearStats <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
observedByYearHist$MIN <- observedByYearStats$MIN_VALUE
observedByYearHist$MAX <- observedByYearStats$MAX_VALUE
observedByYearHist$INTERVAL_SIZE <- observedByYearStats$INTERVAL_SIZE
observedByYearHist$INTERVALS <- (observedByYearStats$MAX_VALUE - observedByYearStats$MIN_VALUE)/observedByYearStats$INTERVAL_SIZE
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/observedbyyear_data.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
observedByYearData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
observedByYearHist$DATA <- observedByYearData
output$OBSERVED_BY_YEAR_HISTOGRAM <- observedByYearHist
# 8. Title: Number of persons with continuous observation by month a.Visualization: Histogram
# b.Category: Month/year c.Values: # of persons with continuous coverage
observedByMonth <- {
}
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/observedbymonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
observedByMonth <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$OBSERVED_BY_MONTH <- observedByMonth
# 9. Title: Number of observation periods per person a.Visualization: Pie b.Category: Number of
# observation periods c.Values: # of persons
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observationperiod/periodsperperson.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
personPeriodsData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$PERSON_PERIODS_DATA <- personPeriodsData
# Convert to JSON and save file result
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file = paste(outputPath, "/observationperiod.json", sep = ""))
close(progressBar)
}
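# Internal helper: assembles dashboard.json from the person.json and observationperiod.json
# files already written to outputPath, so those reports must be generated first.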
generateDashboardReport <- function(outputPath) {
writeLines("Generating dashboard report")
output <- {
}
progressBar <- utils::txtProgressBar(max = 4, style = 3)
progress <- 0
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
  personReport <- jsonlite::fromJSON(paste(outputPath, "/person.json", sep = ""))
output$SUMMARY <- personReport$SUMMARY
output$GENDER_DATA <- personReport$GENDER_DATA
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
  opReport <- jsonlite::fromJSON(paste(outputPath, "/observationperiod.json", sep = ""))
output$AGE_AT_FIRST_OBSERVATION_HISTOGRAM <- opReport$AGE_AT_FIRST_OBSERVATION_HISTOGRAM
output$CUMULATIVE_DURATION <- opReport$CUMULATIVE_DURATION
output$OBSERVED_BY_MONTH <- opReport$OBSERVED_BY_MONTH
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file = paste(outputPath, "/dashboard.json", sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
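# Internal helper: builds datadensity.json with total records, records per person, and
# distinct concepts per person.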
generateDataDensityReport <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating data density reports")
progressBar <- utils::txtProgressBar(max = 3, style = 3)
progress <- 0
output <- {
}
# 1. Title: Total records a.Visualization: scatterplot b.X-axis: month/year c.y-axis: records
# d.series: person, visit, condition, drug, procedure, observation
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/datadensity/totalrecords.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
totalRecordsData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$TOTAL_RECORDS <- totalRecordsData
# 2. Title: Records per person a.Visualization: scatterplot b.X-axis: month/year c.y-axis:
# records/person d.series: person, visit, condition, drug, procedure, observation
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/datadensity/recordsperperson.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
recordsPerPerson <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$RECORDS_PER_PERSON <- recordsPerPerson
# 3. Title: Concepts per person a.Visualization: side-by-side boxplot b.Category:
# Condition/Drug/Procedure/Observation c.Values: Min/25%/Median/95%/Max number of distinct
# concepts per person
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/datadensity/conceptsperperson.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
conceptsPerPerson <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$CONCEPTS_PER_PERSON <- conceptsPerPerson
# Convert to JSON and save file result
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file = paste(outputPath, "/datadensity.json", sep = ""))
close(progressBar)
}
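# Internal helper: queries the measurement treemap data and writes measurement_treemap.json.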
generateMeasurementTreemap <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating measurement treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryMeasurementTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlMeasurementTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataMeasurementTreemap <- DatabaseConnector::querySql(conn, queryMeasurementTreemap)
write(jsonlite::toJSON(dataMeasurementTreemap, method = "C"),
paste(outputPath, "/measurement_treemap.json",
sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
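# Internal helper: reads measurement_treemap.json and writes one
# measurements/measurement_<concept_id>.json per concept (prevalence, frequency, type,
# records by unit, value and limit distributions, values relative to norm, age at first occurrence).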
generateMeasurementReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating Measurement reports")
treemapFile <- file.path(outputPath, "measurement_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
measurementsFolder <- file.path(outputPath, "measurements")
if (file.exists(measurementsFolder)) {
writeLines(paste("Warning: folder ", measurementsFolder, " already exists"))
} else {
dir.create(paste(measurementsFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryFrequencyDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlFrequencyDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryMeasurementsByType <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlMeasurementsByType.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryRecordsByUnit <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlRecordsByUnit.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryMeasurementValueDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlMeasurementValueDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryLowerLimitDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlLowerLimitDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryUpperLimitDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlUpperLimitDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryValuesRelativeToNorm <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/measurement/sqlValuesRelativeToNorm.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataMeasurementsByType <- DatabaseConnector::querySql(conn, queryMeasurementsByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn, queryAgeAtFirstOccurrence)
dataRecordsByUnit <- DatabaseConnector::querySql(conn, queryRecordsByUnit)
dataMeasurementValueDistribution <- DatabaseConnector::querySql(conn,
queryMeasurementValueDistribution)
dataLowerLimitDistribution <- DatabaseConnector::querySql(conn, queryLowerLimitDistribution)
dataUpperLimitDistribution <- DatabaseConnector::querySql(conn, queryUpperLimitDistribution)
dataValuesRelativeToNorm <- DatabaseConnector::querySql(conn, queryValuesRelativeToNorm)
dataFrequencyDistribution <- DatabaseConnector::querySql(conn, queryFrequencyDistribution)
buildMeasurementReport <- function(concept_id) {
report <- {
}
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(3, 4, 5, 6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(3, 4)]
report$FREQUENCY_DISTRIBUTION <- dataFrequencyDistribution[dataFrequencyDistribution$CONCEPT_ID ==
concept_id, c(3, 4)]
report$MEASUREMENTS_BY_TYPE <- dataMeasurementsByType[dataMeasurementsByType$MEASUREMENT_CONCEPT_ID ==
concept_id, c(4, 5)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$RECORDS_BY_UNIT <- dataRecordsByUnit[dataRecordsByUnit$MEASUREMENT_CONCEPT_ID == concept_id,
c(4, 5)]
report$MEASUREMENT_VALUE_DISTRIBUTION <- dataMeasurementValueDistribution[dataMeasurementValueDistribution$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$LOWER_LIMIT_DISTRIBUTION <- dataLowerLimitDistribution[dataLowerLimitDistribution$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$UPPER_LIMIT_DISTRIBUTION <- dataUpperLimitDistribution[dataUpperLimitDistribution$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$VALUES_RELATIVE_TO_NORM <- dataValuesRelativeToNorm[dataValuesRelativeToNorm$MEASUREMENT_CONCEPT_ID ==
concept_id, c(4, 5)]
filename <- paste(outputPath, "/measurements/measurement_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildMeasurementReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
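# Internal helper: queries the observation treemap data and writes observation_treemap.json.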
generateObservationTreemap <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating observation treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryObservationTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observation/sqlObservationTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataObservationTreemap <- DatabaseConnector::querySql(conn, queryObservationTreemap)
write(jsonlite::toJSON(dataObservationTreemap, method = "C"),
paste(outputPath, "/observation_treemap.json",
sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
generateObservationReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating Observation reports")
treemapFile <- file.path(outputPath, "observation_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  # jsonlite::fromJSON() takes the file path (or JSON string) as its first argument; it has no 'file' argument
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
observationsFolder <- file.path(outputPath, "observations")
if (file.exists(observationsFolder)) {
writeLines(paste("Warning: folder ", observationsFolder, " already exists"))
} else {
dir.create(paste(observationsFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observation/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observation/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryObsFrequencyDistribution <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observation/sqlFrequencyDistribution.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryObservationsByType <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observation/sqlObservationsByType.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/observation/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataObservationsByType <- DatabaseConnector::querySql(conn, queryObservationsByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn, queryAgeAtFirstOccurrence)
dataObsFrequencyDistribution <- DatabaseConnector::querySql(conn, queryObsFrequencyDistribution)
buildObservationReport <- function(concept_id) {
    report <- list()
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(3, 4, 5, 6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(3, 4)]
report$OBS_FREQUENCY_DISTRIBUTION <- dataObsFrequencyDistribution[dataObsFrequencyDistribution$CONCEPT_ID ==
concept_id, c(3, 4)]
report$OBSERVATIONS_BY_TYPE <- dataObservationsByType[dataObservationsByType$OBSERVATION_CONCEPT_ID ==
concept_id, c(4, 5)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
filename <- paste(outputPath, "/observations/observation_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildObservationReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
generateVisitTreemap <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating visit_occurrence treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryVisitTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visit/sqlVisitTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataVisitTreemap <- DatabaseConnector::querySql(conn, queryVisitTreemap)
write(jsonlite::toJSON(dataVisitTreemap, method = "C"),
paste(outputPath, "/visit_treemap.json", sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
generateVisitReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating visit reports")
treemapFile <- file.path(outputPath, "visit_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
visitsFolder <- file.path(outputPath, "visits")
if (file.exists(visitsFolder)) {
writeLines(paste("Warning: folder ", visitsFolder, " already exists"))
} else {
dir.create(paste(visitsFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visit/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visit/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryVisitDurationByType <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visit/sqlVisitDurationByType.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visit/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataVisitDurationByType <- DatabaseConnector::querySql(conn, queryVisitDurationByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn, queryAgeAtFirstOccurrence)
buildVisitReport <- function(concept_id) {
    report <- list()
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(3, 4, 5, 6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(3, 4)]
report$VISIT_DURATION_BY_TYPE <- dataVisitDurationByType[dataVisitDurationByType$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
filename <- paste(outputPath, "/visits/visit_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildVisitReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
generateDeathReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating death reports")
progressBar <- utils::txtProgressBar(max = 4, style = 3)
progress <- 0
  output <- list()
  # 1. Title: Prevalence drilldown, prevalence by gender, age, and year
  #    a. Visualization: trellis lineplot
  #    b. Trellis category: age decile
  #    c. X-axis: year
  #    d. Y-axis: condition prevalence (% persons)
  #    e. Series: male, female
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/death/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
prevalenceByGenderAgeYearData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$PREVALENCE_BY_GENDER_AGE_YEAR <- prevalenceByGenderAgeYearData
  # 2. Title: Prevalence by month
  #    a. Visualization: scatterplot
  #    b. X-axis: month/year
  #    c. Y-axis: % of persons
  #    d. Comment: plot to show seasonality
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/death/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
prevalenceByMonthData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$PREVALENCE_BY_MONTH <- prevalenceByMonthData
  # 3. Title: Death records by type
  #    a. Visualization: pie
  #    b. Category: death type
  #    c. Value: % of records
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/death/sqlDeathByType.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
deathByTypeData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$DEATH_BY_TYPE <- deathByTypeData
  # 4. Title: Age at death
  #    a. Visualization: side-by-side boxplot
  #    b. Category: gender
  #    c. Values: Min/25%/Median/95%/Max as age at death
renderedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/death/sqlAgeAtDeath.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
ageAtDeathData <- DatabaseConnector::querySql(conn, renderedSql)
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
output$AGE_AT_DEATH <- ageAtDeathData
# Convert to JSON and save file result
jsonOutput <- jsonlite::toJSON(output)
write(jsonOutput, file = paste(outputPath, "/death.json", sep = ""))
close(progressBar)
}
generateVisitDetailTreemap <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating visit_detail treemap")
progressBar <- utils::txtProgressBar(max = 1, style = 3)
progress <- 0
queryVisitDetailTreemap <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visitdetail/sqlVisitDetailTreemap.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataVisitDetailTreemap <- DatabaseConnector::querySql(conn, queryVisitDetailTreemap)
write(jsonlite::toJSON(dataVisitDetailTreemap, method = "C"),
paste(outputPath, "/visitdetail_treemap.json",
sep = ""))
progress <- progress + 1
utils::setTxtProgressBar(progressBar, progress)
close(progressBar)
}
generateVisitDetailReports <- function(conn,
dbms,
cdmDatabaseSchema,
resultsDatabaseSchema,
outputPath,
vocabDatabaseSchema = cdmDatabaseSchema) {
writeLines("Generating visit_detail reports")
treemapFile <- file.path(outputPath, "visitdetail_treemap.json")
if (!file.exists(treemapFile)) {
writeLines(paste("Warning: treemap file",
treemapFile,
"does not exist. Skipping detail report generation."))
return()
}
  treemapData <- jsonlite::fromJSON(treemapFile)
uniqueConcepts <- unique(treemapData$CONCEPT_ID)
totalCount <- length(uniqueConcepts)
visitdetailFolder <- file.path(outputPath, "visitdetail")
if (file.exists(visitdetailFolder)) {
writeLines(paste("Warning: folder ", visitdetailFolder, " already exists"))
} else {
dir.create(paste(visitdetailFolder, "/", sep = ""))
}
progressBar <- utils::txtProgressBar(style = 3)
progress <- 0
queryPrevalenceByGenderAgeYear <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visitdetail/sqlPrevalenceByGenderAgeYear.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryPrevalenceByMonth <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visitdetail/sqlPrevalenceByMonth.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryVisitDetailDurationByType <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visitdetail/sqlVisitDetailDurationByType.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
queryAgeAtFirstOccurrence <- SqlRender::loadRenderTranslateSql(sqlFilename = "export/visitdetail/sqlAgeAtFirstOccurrence.sql",
packageName = "Achilles", dbms = dbms, warnOnMissingParameters = FALSE, cdm_database_schema = cdmDatabaseSchema,
results_database_schema = resultsDatabaseSchema, vocab_database_schema = vocabDatabaseSchema)
dataPrevalenceByGenderAgeYear <- DatabaseConnector::querySql(conn, queryPrevalenceByGenderAgeYear)
dataPrevalenceByMonth <- DatabaseConnector::querySql(conn, queryPrevalenceByMonth)
dataVisitDetailDurationByType <- DatabaseConnector::querySql(conn, queryVisitDetailDurationByType)
dataAgeAtFirstOccurrence <- DatabaseConnector::querySql(conn, queryAgeAtFirstOccurrence)
buildVisitDetailReport <- function(concept_id) {
    report <- list()
report$PREVALENCE_BY_GENDER_AGE_YEAR <- dataPrevalenceByGenderAgeYear[dataPrevalenceByGenderAgeYear$CONCEPT_ID ==
concept_id, c(3, 4, 5, 6)]
report$PREVALENCE_BY_MONTH <- dataPrevalenceByMonth[dataPrevalenceByMonth$CONCEPT_ID == concept_id,
c(3, 4)]
report$VISIT_DETAIL_DURATION_BY_TYPE <- dataVisitDetailDurationByType[dataVisitDetailDurationByType$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
report$AGE_AT_FIRST_OCCURRENCE <- dataAgeAtFirstOccurrence[dataAgeAtFirstOccurrence$CONCEPT_ID ==
concept_id, c(2, 3, 4, 5, 6, 7, 8, 9)]
filename <- paste(outputPath, "/visitdetail/visitdetail_", concept_id, ".json", sep = "")
write(jsonlite::toJSON(report, method = "C"), filename)
# Update progressbar:
env <- parent.env(environment())
curVal <- get("progress", envir = env)
assign("progress", curVal + 1, envir = env)
utils::setTxtProgressBar(get("progressBar", envir = env),
(curVal + 1)/get("totalCount", envir = env))
}
dummy <- lapply(uniqueConcepts, buildVisitDetailReport)
utils::setTxtProgressBar(progressBar, 1)
close(progressBar)
}
# ---- end of file: R/exportToJson.R ----
generateDomainOverlapSql <- function() {
  # Remove any existing file so we don't endlessly append. :|
sqlFile <- "domainOverlap.sql"
if (file.exists(sqlFile)) {
file.remove(sqlFile)
}
# creates a matrix of domain overlap possibilities. If you want to add a domain, you would add
# to the list directly below.
domainMatrix <- tidyr::crossing(condition_occurrence = 0:1,
drug_exposure = 0:1,
device_exposure = 0:1,
measurement = 0:1, death = 0:1, procedure_occurrence = 0:1, observation = 0:1)
domainMatrixResults <- domainMatrix
domainMatrixResults <- domainMatrixResults %>%
mutate(count = 0, proportion = 0, dataSource = "")
# Creates notes
write(x = "-- Analysis 2004: Number of distinct patients that overlap between specific domains",
sqlFile, append = TRUE)
write(x = "-- Bit String Breakdown: 1) Condition Occurrence 2) Drug Exposure 3) Device Exposure 4) Measurement 5) Death 6) Procedure Occurrence 7) Observation",
sqlFile, append = TRUE)
write(x = "", sqlFile, append = TRUE)
# Creates temp tables for each specific domain
write(x = "select distinct person_id into #conoc from @cdmDatabaseSchema.condition_occurrence;", sqlFile, append = TRUE)
write(x = "select distinct person_id into #drexp from @cdmDatabaseSchema.drug_exposure;", sqlFile, append = TRUE)
write(x = "select distinct person_id into #dvexp from @cdmDatabaseSchema.device_exposure;", sqlFile, append = TRUE)
write(x = "select distinct person_id into #msmt from @cdmDatabaseSchema.measurement;", sqlFile, append = TRUE)
write(x = "select distinct person_id into #death from @cdmDatabaseSchema.death;", sqlFile,append = TRUE)
write(x = "select distinct person_id into #prococ from @cdmDatabaseSchema.procedure_occurrence;", sqlFile, append = TRUE)
write(x = "select distinct person_id into #obs from @cdmDatabaseSchema.observation;", sqlFile, append = TRUE)
write(x = "", sqlFile, append = TRUE)
write(x = "with rawData as (", sqlFile, append = TRUE)
# Begins going through domain matrix by row to calculate overlap of different domain
# combinations.
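  # For illustration only (this comment is not part of the emitted SQL): for the bit
  # string "1100000" -- persons present in both condition_occurrence and drug_exposure --
  # the inner query assembled below is roughly:
  #   select count(*) as count_value from(
  #     select person_id from #conoc
  #     intersect select person_id from #drexp)
  # preSql/postSql then wrap it with the analysis_id 2004 and stratum columns.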
for (i in 1:nrow(domainMatrix)) {
# Builds bit-driven string for strata1
domainString <- ""
for (b in 1:ncol(domainMatrix)) {
domainString <- paste0(domainString, domainMatrixResults[i, b])
}
sql <- "select count(*) as count_value from("
previousDomain <- ""
# Building of custom domain overlap queries.
for (j in 1:ncol(domainMatrix)) {
# Condition Occurrence
if ((j == 1) & (domainMatrix[i, j] == 1)) {
if (sql == "select count(*) as count_value from(") {
sql <- paste0(sql, "select person_id from #conoc")
previousDomain <- "a"
}
}
# Drug Exposure
if ((j == 2) & (domainMatrix[i, j] == 1)) {
if (sql == "select count(*) as count_value from(") {
sql <- paste0(sql, "select person_id from #drexp")
previousDomain <- "b"
} else {
sql <- paste0(sql, " intersect select person_id from #drexp")
previousDomain <- "b"
}
}
# Device exposure
if ((j == 3) & (domainMatrix[i, j] == 1)) {
if (sql == "select count(*) as count_value from(") {
sql <- paste0(sql, "select person_id from #dvexp")
previousDomain <- "c"
} else {
sql <- paste0(sql, " intersect select person_id from #dvexp")
previousDomain <- "c"
}
}
# Measurement
if ((j == 4) & (domainMatrix[i, j] == 1)) {
if (sql == "select count(*) as count_value from(") {
sql <- paste0(sql, "select person_id from #msmt")
previousDomain <- "d"
} else {
sql <- paste0(sql, " intersect select person_id from #msmt")
previousDomain <- "d"
}
}
# Death
if ((j == 5) & (domainMatrix[i, j] == 1)) {
if (sql == "select count(*) as count_value from(") {
sql <- paste0(sql, "select person_id from #death")
previousDomain <- "e"
} else {
sql <- paste0(sql, " intersect select person_id from #death")
previousDomain <- "e"
}
}
# Procedure Occurrence
if ((j == 6) & (domainMatrix[i, j] == 1)) {
if (sql == "select count(*) as count_value from(") {
sql <- paste0(sql, "select person_id from #prococ")
previousDomain <- "f"
} else {
sql <- paste0(sql, " intersect select person_id from #prococ")
previousDomain <- "f"
}
}
# Observation
if ((j == 7) & (domainMatrix[i, j] == 1)) {
if (sql == "select count(*) as count_value from(") {
sql <- paste0(sql, "select person_id from #obs")
} else {
sql <- paste0(sql, " intersect select person_id from #obs")
}
}
} # End for loop for domainMatrix by column
sql <- paste0(sql, ")")
# Formats output for achilles_results input
preSql <- paste0("select 2004 as analysis_id,
'", domainString, "' as stratum_1,
cast((1.0 * personIntersection.count_value / totalPersonsDb.totalPersons) as varchar(255)) as stratum_2,
CAST(NULL AS VARCHAR(255)) as stratum_3,
CAST(NULL AS VARCHAR(255)) as stratum_4,
CAST(NULL AS VARCHAR(255)) as stratum_5,
personIntersection.count_value
from
(")
# Creates Unions for generation of .sql file
if (i == nrow(domainMatrix)) {
postSql <- " as subquery) as personIntersection,
(select count(distinct(person_id)) as totalPersons from @cdmDatabaseSchema.person) as totalPersonsDb) select * INTO @scratchDatabaseSchema@schemaDelim@tempAchillesPrefix_2004 from rawData;"
} else {
postSql <- " as subquery) as personIntersection,
(select count(distinct(person_id)) as totalPersons from @cdmDatabaseSchema.person) as totalPersonsDb UNION ALL"
}
sql <- paste0(preSql, sql, postSql)
    # skip the row where no domain is specified (bit string 0000000)
if (domainString == "0000000") {
next
} else {
write(x = sql, sqlFile, append = TRUE)
}
} # End for loop for domainMatrix by row
  # Clean up: drop the temp tables created for each specific domain
write(x = "drop table #conoc;", sqlFile, append = TRUE)
write(x = "drop table #drexp;", sqlFile, append = TRUE)
write(x = "drop table #dvexp;", sqlFile, append = TRUE)
write(x = "drop table #msmt;", sqlFile, append = TRUE)
write(x = "drop table #death;", sqlFile, append = TRUE)
write(x = "drop table #prococ;", sqlFile, append = TRUE)
write(x = "drop table #obs;", sqlFile, append = TRUE)
write(x = "", sqlFile, append = TRUE)
} # End function
# ---- end of file: R/generateDomainOverlapSql.R ----
#'@title Get the seasonality score for a given monthly time series
#'
#'@description The seasonality score of a monthly time series is computed as its departure from a uniform distribution.
#'
#'@details
#' The degree of seasonality of a monthly time series is based on its departure from a uniform distribution.
#' If the number of cases for a given concept is uniformly distributed across all time periods (in this case, all months),
#' then its monthly proportion would be approximately constant. In this case, the time series would be
#' considered "strictly non-seasonal" and its "seasonality score" would be zero.
#' Similarly, if all cases recur at a single point in time (that is, in a single month), such a time series would be considered
#' "strictly seasonal" and its seasonality score would be 1. All other time series would have
#' a seasonality score between 0 and 1. Currently, only monthly time series are supported.
#'
#'@param tsData A time series object.
#'
#'@return A numeric value between 0 and 1 (inclusive) representing the seasonality of a time series.
#'
#'@export
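#'
#'@examples
#'\dontrun{
#' # Illustrative sketch with hypothetical data (not taken from a CDM):
#' # a winter-peaked monthly series over three complete years should score
#' # well above 0, while a perfectly flat series scores 0.
#' peaked <- ts(rep(c(40, 35, 20, 10, 5, 3, 2, 3, 6, 12, 25, 38), 3),
#'              start = c(2017, 1), frequency = 12)
#' getSeasonalityScore(peaked)
#'
#' flat <- ts(rep(10, 36), start = c(2017, 1), frequency = 12)
#' getSeasonalityScore(flat)
#'}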
getSeasonalityScore <- function(tsData)
{
unifDist <- 1/12
a <- c(1,rep(0,11))
maxDist <- sum(abs(a-unifDist))
tsObj <- tsData
tsObj <- Achilles::tsCompleteYears(tsObj)
# Matrix version: switch to and update this version once the rare-events issue is corrected
# NB: Remember to avoid dividing by zero with the matrix approach
# M <- matrix(data=tsObj, ncol=12, byrow=TRUE)
# ss <- sum(abs(t((rep(1,dim(M)[1]) %*% M)/as.integer(rep(1,dim(M)[1]) %*% M %*% rep(1,12))) - unifDist))/maxDist
# Original version using sum across years
tsObj.yrProp <- Achilles::sumAcrossYears(tsObj)$PROP
  tsObj.ss <- round(sum(abs(tsObj.yrProp - unifDist)) / maxDist, 2)
return (tsObj.ss)
}
# ---- end of file: R/getSeasonalityScore.r ----
# @file getTemporalData
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of Achilles
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Observational Health Data Sciences and Informatics
# @author Martijn Schuemie
# @author Patrick Ryan
# @author Frank DeFalco
# @author Vojtech Huser
# @author Chris Knoll
# @author Ajit Londhe
# @author Taha Abdul-Basser
# @author Anthony Molinaro
#' @title
#' getTemporalData
#'
#' @description
#' \code{getTemporalData} Retrieve specific monthly analyses data to support temporal
#' characterization.
#'
#' @details
#' \code{getTemporalData} Assumes \code{achilles} has been run.
#' \preformatted{Currently supported Achilles monthly analyses are:
#' 202 - Visit Occurrence
#' 402 - Condition occurrence
#' 602 - Procedure Occurrence
#' 702 - Drug Exposure
#' 802 - Observation
#' 1802 - Measurement
#' 2102 - Device}
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param cdmDatabaseSchema Fully qualified name of database schema that contains OMOP CDM
#' schema. On SQL Server, this should specify both the database and the
#' schema, so for example, on SQL Server, 'cdm_instance.dbo'.
#' @param resultsDatabaseSchema Fully qualified name of database schema that we can write final
#' results to. Default is cdmDatabaseSchema. On SQL Server, this should
#' specify both the database and the schema, so for example, on SQL
#' Server, 'cdm_results.dbo'.
#' @param analysisIds (OPTIONAL) A vector containing the set of Achilles analysisIds for
#' which results will be returned. The following are supported:
#' \code{202,402,602,702,802,1802,2102}. If not specified, data for all
#' analysis will be returned. Ignored if \code{conceptId} is given.
#' @param conceptId (OPTIONAL) A SNOMED concept_id from the \code{CONCEPT} table for
#' which a monthly Achilles analysis exists. If not specified, all
#' concepts for a given analysis will be returned.
#' @return
#' A data frame of query results from \code{DatabaseConnector}
#'
#' @examples
#' \dontrun{
#' pneumonia <- 255848
#' monthlyResults <- getTemporalData(connectionDetails = connectionDetails,
#' cdmDatabaseSchema = "cdm",
#' resultsDatabaseSchema = "results", conceptId = pneumonia)
#' }
#'
#' @export
getTemporalData <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
analysisIds = NULL,
conceptId = NULL) {
if (!is.null(conceptId)) {
print(paste0("Retrieving Achilles monthly data for temporal support for concept_id: ",
conceptId))
conceptIdGiven <- TRUE
analysisIdGiven <- FALSE
} else if (!is.null(analysisIds)) {
print(paste0("Retrieving Achilles monthly data for temporal support for analyses: ",
paste(analysisIds,
collapse = ", ")))
conceptIdGiven <- FALSE
analysisIdGiven <- TRUE
} else {
print("Retrieving Achilles monthly data for temporal support for all supported analyses")
conceptIdGiven <- FALSE
analysisIdGiven <- FALSE
}
  if (typeof(connectionDetails$server) == "character") {
    dbName <- toupper(strsplit(connectionDetails$server, "/")[[1]][2])
  } else {
    dbName <- toupper(strsplit(connectionDetails$server(), "/")[[1]][2])
  }
translatedSql <- SqlRender::loadRenderTranslateSql(sqlFilename = "temporal/achilles_temporal_data.sql",
packageName = "Achilles", dbms = connectionDetails$dbms, db_name = dbName, cdm_schema = cdmDatabaseSchema,
results_schema = resultsDatabaseSchema, concept_id = conceptId, analysis_ids = analysisIds, concept_id_given = conceptIdGiven,
analysis_id_given = analysisIdGiven)
  conn <- DatabaseConnector::connect(connectionDetails)
  # Register cleanup before querying so the connection is closed even if the query fails
  on.exit(DatabaseConnector::disconnect(conn))
  queryResults <- DatabaseConnector::querySql(conn, translatedSql)
  return(queryResults)
}
# ---- end of file: R/getTemporalData.r ----
#'@title Determine whether or not a time series is stationary in the mean
#'
#'@description Uses the Augmented Dickey-Fuller test to determine when the time series has a unit root.
#'
#'@details
#' A time series must have a minimum of three complete years of data.
#' For details on the implementation of the Augmented Dickey-Fuller test,
#' see the tseries package on cran.
#'
#'@param tsData A time series object.
#'
#'@return A boolean indicating whether or not the given time series is stationary.
#'
#'@export
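#'
#'@examples
#'\dontrun{
#' # Illustrative sketch with simulated data (not taken from a CDM):
#' # white noise around a constant mean is typically judged stationary,
#' # while a random walk (which has a unit root) typically is not.
#' set.seed(123)
#' whiteNoise <- ts(rnorm(48, mean = 100, sd = 5), start = c(2016, 1), frequency = 12)
#' isStationary(whiteNoise)
#'
#' randomWalk <- ts(cumsum(rnorm(48)), start = c(2016, 1), frequency = 12)
#' isStationary(randomWalk)
#'}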
isStationary <- function(tsData)
{
tsObj <- tsData
minMonths <- 36
tsObj <- Achilles::tsCompleteYears(tsObj)
if (length(tsObj) < minMonths)
stop("ERROR: Time series must have a minimum of three complete years of data")
ADF_IS_STATIONARY <- suppressWarnings(tseries::adf.test(tsObj, alternative="stationary")$p.value <= .05)
return (ADF_IS_STATIONARY)
}
# ---- end of file: R/isStationary.r ----
# @file listMissingAnalyses
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of Achilles
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Observational Health Data Sciences and Informatics
# @author Martijn Schuemie
# @author Patrick Ryan
# @author Frank DeFalco
# @author Vojtech Huser
# @author Chris Knoll
# @author Ajit Londhe
# @author Taha Abdul-Basser
# @author Anthony Molinaro
#' @title
#' listMissingAnalyses
#'
#' @description
#' \code{listMissingAnalyses} Find and return analyses that exist in \code{getAnalysisDetails}, but
#' not in achilles_results or achilles_results_dist
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param resultsDatabaseSchema Fully qualified name of database schema that contains
#' achilles_results and achilles_results_dist tables.
#'
#' @return
#' A dataframe which is a subset of \code{getAnalysisDetails}
#'
#' @examples
#' \dontrun{
#' Achilles::listMissingAnalyses(connectionDetails = connectionDetails,
#' resultsDatabaseSchema = "results")
#' }
#'
#' @export
listMissingAnalyses <- function(connectionDetails, resultsDatabaseSchema) {
# Determine which analyses are missing by comparing analysisDetails with achilles_results and
# achilles_results_dist
analysisDetails <- getAnalysisDetails()
allAnalysisIds <- analysisDetails$ANALYSIS_ID
conn <- DatabaseConnector::connect(connectionDetails)
print("Retrieving previously computed achilles_results and achilles_results_dist data...")
sql <- "select distinct analysis_id from @results_schema.achilles_results
union
select distinct analysis_id from @results_schema.achilles_results_dist;"
sql <- SqlRender::render(sql, results_schema = resultsDatabaseSchema)
sql <- SqlRender::translate(sql, targetDialect = connectionDetails$dbms)
existingAnalysisIds <- DatabaseConnector::querySql(conn, sql)$ANALYSIS_ID
DatabaseConnector::disconnect(conn)
missingAnalysisIds <- setdiff(allAnalysisIds, existingAnalysisIds)
colsToDisplay <- c("ANALYSIS_ID",
"DISTRIBUTION",
"CATEGORY",
"IS_DEFAULT",
"ANALYSIS_NAME")
retVal <- analysisDetails[analysisDetails$ANALYSIS_ID %in% missingAnalysisIds, colsToDisplay]
retVal <- retVal[order(retVal$ANALYSIS_ID), ]
return(retVal)
}
# ---- end of file: R/listMissingAnalyses.r ----
# @file performTemporalCharacterization
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of Achilles
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Observational Health Data Sciences and Informatics
# @author Martijn Schuemie
# @author Patrick Ryan
# @author Frank DeFalco
# @author Vojtech Huser
# @author Chris Knoll
# @author Ajit Londhe
# @author Taha Abdul-Basser
# @author Anthony Molinaro
#'@title performTemporalCharacterization
#'
#' @description
#' \code{performTemporalCharacterization} Perform temporal characterization on a concept or family of concepts belonging to a supported Achilles analysis.
#'
#' @details
#' \code{performTemporalAnalyses} Assumes \code{achilles} has been run.
#' \preformatted{Currently supported Achilles analyses for temporal analyses are:
#' 202 - Visit Occurrence
#' 402 - Condition occurrence
#' 602 - Procedure Occurrence
#' 702 - Drug Exposure
#' 802 - Observation
#' 1802 - Measurement
#' 2102 - Device}
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param cdmDatabaseSchema Fully qualified name of database schema that contains OMOP CDM
#' schema. On SQL Server, this should specify both the database and the
#' schema, so for example, on SQL Server, 'cdm_instance.dbo'.
#' @param resultsDatabaseSchema Fully qualified name of database schema that we can write final
#' results to. Default is cdmDatabaseSchema. On SQL Server, this should
#' specify both the database and the schema, so for example, on SQL
#' Server, 'cdm_results.dbo'.
#' @param analysisIds (OPTIONAL) A vector containing the set of Achilles analysisIds for
#' which results will be returned. The following are supported: \code{202,402,602,702,802,1802,2102}.
#' If not specified, data for all analysis will be returned. Ignored if \code{conceptId} is given.
#' @param conceptId (OPTIONAL) A SNOMED concept_id from the \code{CONCEPT} table for which a monthly Achilles analysis exists.
#' If not specified, all concepts for a given analysis will be returned.
#' @param outputFile CSV file where temporal characterization will be written. Default is temporal-characterization.csv.
#'
#' @return
#' A csv file with temporal analyses for each time series
#'
#' @examples
#' \dontrun{
#' # Example 1:
#' pneumonia <- 255848
#' performTemporalCharacterization(
#' connectionDetails = connectionDetails,
#' cdmDatabaseSchema = "cdm",
#' resultsDatabaseSchema = "results",
#' conceptId = pneumonia,
#' outputFolder = "output/pneumoniaTemporalChar.csv")
#'
#' # Example 2:
#' performTemporalCharacterization(
#' connectionDetails = connectionDetails,
#' cdmDatabaseSchema = "cdm",
#' resultsDatabaseSchema = "results",
#' analysisIds = c(402,702),
#' outputFolder = "output/conditionAndDrugTemporalChar.csv")
#'
#' # Example 3:
#' performTemporalCharacterization(
#' connectionDetails = connectionDetails,
#' cdmDatabaseSchema = "cdm",
#' resultsDatabaseSchema = "results",
#' outputFolder = "output/CompleteTemporalChar.csv")
#' }
#'
#'@export
performTemporalCharacterization <- function(
connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema,
analysisIds = NULL,
conceptId = NULL,
outputFile = "temporal-characterization.csv")
{
# Minimum number of months of data to perform temporal characterization
minMonths <- 36
# Pull temporal data from Achilles and get list of unique concept_ids
temporalData <- Achilles::getTemporalData(connectionDetails,cdmDatabaseSchema,resultsDatabaseSchema,analysisIds,conceptId)
if (nrow(temporalData) == 0) {
stop("CANNOT PERFORM TEMPORAL CHARACTERIZATION: NO ACHILLES DATA FOUND")
}
allConceptIds <- unique(temporalData$CONCEPT_ID)
rowData <- data.frame(
DB_NAME = character(),
CDM_TABLE_NAME = character(),
CONCEPT_ID = numeric(),
CONCEPT_NAME = character(),
SEASONALITY_SCORE = numeric(),
IS_STATIONARY = logical(),
stringsAsFactors = FALSE )
print(paste0("Attempting temporal characterization on ", length(allConceptIds), " individual concepts"))
# Loop through temporal data, perform temporal characterization, and write out results
for (conceptId in allConceptIds) {
tempData <- temporalData[temporalData$CONCEPT_ID == conceptId,]
tempData.ts <- Achilles::createTimeSeries(tempData)
tempData.ts <- tempData.ts[,"PREVALENCE"]
tempData.ts <- Achilles::tsCompleteYears(tempData.ts)
if (length(tempData.ts) >= minMonths) {
tempData.ts.ss <- Achilles::getSeasonalityScore(tempData.ts)
tempData.ts.is <- Achilles::isStationary(tempData.ts)
rowData[nrow(rowData)+1,] <- c( tempData$DB_NAME[1],
tempData$CDM_TABLE_NAME[1],
tempData$CONCEPT_ID[1],
tempData$CONCEPT_NAME[1],
tempData.ts.ss,
tempData.ts.is )
}
}
write.csv(rowData,outputFile,row.names = FALSE)
print(paste0("Temporal characterization complete. Results can be found in ", outputFile))
invisible(rowData)
}
# ---- end of file: R/performTemporalCharacterization.r ----
# @file runMissingAnalyses
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of Achilles
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Observational Health Data Sciences and Informatics
# @author Martijn Schuemie
# @author Patrick Ryan
# @author Frank DeFalco
# @author Vojtech Huser
# @author Chris Knoll
# @author Ajit Londhe
# @author Taha Abdul-Basser
# @author Anthony Molinaro
#' @title
#' runMissingAnalyses
#'
#' @description
#' \code{runMissingAnalyses} Automatically find and compute analyses that haven't been executed.
#'
#' @param connectionDetails An R object of type \code{connectionDetails} created using the
#' function \code{createConnectionDetails} in the
#' \code{DatabaseConnector} package.
#' @param cdmDatabaseSchema Fully qualified name of database schema that contains OMOP CDM
#' schema. On SQL Server, this should specifiy both the database and the
#' schema, so for example, on SQL Server, 'cdm_instance.dbo'.
#' @param resultsDatabaseSchema Fully qualified name of database schema that we can write final
#' results to. Default is cdmDatabaseSchema. On SQL Server, this should
#' specify both the database and the schema, so for example, on SQL
#' Server, 'cdm_results.dbo'.
#' @param scratchDatabaseSchema Fully qualified name of the database schema that will store all of
#' the intermediate scratch tables, so for example, on SQL Server,
#' 'cdm_scratch.dbo'. Must be accessible to/from the cdmDatabaseSchema
#' and the resultsDatabaseSchema. Default is resultsDatabaseSchema.
#' Making this "#" will run Achilles in single-threaded mode and use
#' temporary tables instead of permanent tables.
#' @param vocabDatabaseSchema String name of database schema that contains OMOP Vocabulary. Default
#' is cdmDatabaseSchema. On SQL Server, this should specify both the
#' database and the schema, so for example 'results.dbo'.
#' @param tempEmulationSchema For databases like Oracle where you
#' must specify the name of the database schema where you want all
#' temporary tables to be managed. Requires create/insert permissions to
#' this database.
#' @param outputFolder Path to store logs and SQL files
#' @param defaultAnalysesOnly Boolean to determine if only default analyses should be run.
#' Including non-default analyses is substantially more resource
#' intensive. Default = TRUE
#' @returns
#' No return value. Run to execute analyses currently missing from results.
#'
#' @examples
#' \dontrun{
#' Achilles::runMissingAnalyses(connectionDetails = connectionDetails,
#' cdmDatabaseSchema = "cdm",
#' resultsDatabaseSchema = "results",
#' outputFolder = "/tmp")
#' }
#'
#' @export
runMissingAnalyses <- function(connectionDetails,
cdmDatabaseSchema,
resultsDatabaseSchema = cdmDatabaseSchema,
scratchDatabaseSchema = resultsDatabaseSchema,
vocabDatabaseSchema = cdmDatabaseSchema,
tempEmulationSchema = resultsDatabaseSchema,
outputFolder = "output",
defaultAnalysesOnly = TRUE)
{
missingAnalyses <- Achilles::listMissingAnalyses(connectionDetails,resultsDatabaseSchema)
if (nrow(missingAnalyses) == 0) {
stop("NO MISSING ANALYSES FOUND")
}
if (defaultAnalysesOnly) {
missingAnalyses <- missingAnalyses[missingAnalyses$IS_DEFAULT == 1,]
}
if (nrow(missingAnalyses) == 0) {
stop("NO DEFAULT MISSING ANALYSES FOUND")
}
# By supplying analysisIds along with specifying createTable=F and updateGivenAnalysesOnly=T,
# we add the missing analysis_ids without removing existing data
achilles(connectionDetails = connectionDetails,
cdmDatabaseSchema = cdmDatabaseSchema,
resultsDatabaseSchema = resultsDatabaseSchema,
scratchDatabaseSchema = scratchDatabaseSchema,
           vocabDatabaseSchema = vocabDatabaseSchema,
tempEmulationSchema = tempEmulationSchema,
analysisIds = missingAnalyses$ANALYSIS_ID,
defaultAnalysesOnly = defaultAnalysesOnly,
outputFolder = outputFolder,
createTable = FALSE,
updateGivenAnalysesOnly = TRUE)
}
# ---- end of file: R/runMissingAnalyses.r ----
#'@title For a monhtly time series, compute sum and proportion by month across all years
#'
#'@param tsData A time series object
#'
#'@return A data frame reporting the monthly sum across all years and the proportion this sum contributes to the total.
#'
#'@export
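#'
#'@examples
#'\dontrun{
#' # Illustrative sketch with made-up values: two complete years of monthly counts.
#' # Each month's SUM is aggregated across both years; PROP is that sum divided by
#' # the grand total, so the December spike shows up as the largest proportion.
#' monthlyCounts <- ts(rep(c(5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 45), 2),
#'                     start = c(2018, 1), frequency = 12)
#' sumAcrossYears(monthlyCounts)
#'}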
sumAcrossYears <- function(tsData)
{
  # Read the time series into a data frame with the numeric month-of-year (1-12) and the observed values
tsAsDf <- data.frame(MONTH_NUM=cycle(tsData), TS_VALUE=tsData, stringsAsFactors = F)
# Summarize by month across all years
tsAggregatedByMonth <- tsAsDf %>% dplyr::group_by(.data$MONTH_NUM) %>% dplyr::summarize(SUM=sum(.data$TS_VALUE))
# Compute proportion for each month
tsPropByMonth <- data.frame(MONTH_NUM=tsAggregatedByMonth$MONTH_NUM, PROP=tsAggregatedByMonth$SUM/sum(tsData), stringsAsFactors = F)
# Get sum and proportion in a single data frame
tsSummary <- merge(tsAggregatedByMonth, tsPropByMonth, by.x = "MONTH_NUM", by.y="MONTH_NUM")
return (tsSummary[order(tsSummary$PROP, decreasing = T),])
}
# ---- end of file: R/sumAcrossYears.r ----
#'@title Trim a monthly time series object to so that partial years are removed
#'
#'@details This function is only supported for monthly time series
#'
#'@param tsData A time series object
#'
#'@return A time series with partial years removed.
#'
#'@export
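#'
#'@examples
#'\dontrun{
#' # Illustrative sketch with arbitrary values: a monthly series running from
#' # March 2016 through September 2018 is trimmed to the only complete year,
#' # January through December 2017.
#' partialYears <- ts(seq_len(31), start = c(2016, 3), frequency = 12)
#' tsCompleteYears(partialYears)
#'}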
tsCompleteYears <- function(tsData)
{
if (frequency(tsData) != 12) {
stop("This function is only supported for monthly time series.")
}
origStartMonth <- start(tsData)[2]
origStartYear <- start(tsData)[1]
origEndMonth <- end(tsData)[2]
origEndYear <- end(tsData)[1]
newStartMonth <- 1
newEndMonth <- 12
tsObj <- tsData
if (origStartMonth > 1) tsObj <- window(tsObj, start=c(origStartYear+1,newStartMonth))
if (origEndMonth < 12) tsObj <- window(tsObj, end=c(origEndYear-1,newEndMonth))
return (tsObj)
}
# ---- end of file: R/tsCompleteYears.r ----