---
title: "Getting started with the WriteR application"
author: "A. Jonathan R. Godfrey"
bibliography: BrailleRPublications.bib
vignette: >
%\VignetteIndexEntry{IntroWriteR}
%\VignetteEngine{knitr::rmarkdown}
output: knitr:::html_vignette
---
## Introduction
The WriteR application was written to support the use of R markdown and the BrailleR package. It is a Python script that uses wxPython to build the graphical user interface (GUI) in such a way that it works for screen reader users.
The script is in the BrailleR package, but it cannot run unless the user has both Python and wxPython installed. Two commands have been included in the BrailleR package to help Windows users obtain installation files for them.
## Getting Python and wxPython (Windows users only)
Issue the following commands at the R prompt:
`library(BrailleR)`
`GetPython()`
`GetWxPython()`
These commands automatically download the installation files and start the installation process. The downloaded files are saved in your MyBrailleR folder. You will need to follow the instructions and answer the questions that arise whenever you install new software. These are reputable installation files from the primary sites for Python and wxPython. Windows and any security software you might have should know that, but you can never tell! You will probably need to let Windows know it is OK to install the software in the default location. That pop-up might not appear as the window with focus, so if things look like they're going slowly, look around for the pop-up window.
Once you have completed both installations, you are ready to go. You shouldn't need those installation files again, but keep them just in case. They will have been saved in your `MyBrailleR` folder.
## Opening WriteR from BrailleR
Opening WriteR is as easy as typing WriteR! Well, almost. You have the option of specifying a filename; if that file exists, it gets opened for you, and if it doesn't exist, then it gets created with a few lines already included at the top to help get you started. Try:
`WriteR("MyFirst.Rmd")`
## What can I do with WriteR?
The window you are in has a number of menus, a status bar at the bottom and a big space in the middle for your work. Take a quick look at those menus; some will look familiar because they are common to many Windows applications.
The file you have open is a markdown file. It is just text which is why it is so easy to read. The file extension of `Rmd` means it is an R markdown file. There are several flavours of markdown in common use, but they are practically all the same except for some very minor differences.
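For instance, the body of a markdown file is just a few conventions layered on plain text; a minimal sketch (not from the file itself):

```
## A second-level heading

Some text with *emphasis*, and a bulleted list:

- first item
- second item
```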
A markdown file can be converted into many file formats for distribution. These include HTML, pdf, Microsoft Word, Open Office, and a number of different slide presentation formats. Let's make the HTML file now.
## Our first HTML file
Making your first HTML file is as easy as hitting a single key, or using one of the options in the `Build` menu. The options offered are the ones commonly used in RStudio.
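If you prefer to work at the R prompt instead of the `Build` menu, the same conversion can be run there; a minimal sketch, assuming the `rmarkdown` package is installed:

`rmarkdown::render("MyFirst.Rmd", output_format = "html_document")`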
Navigate to the current working directory using your file browser. To find out where that is, type `getwd()` back in the R window. You should see the file `MyFirst.Rmd` and once you have built it, the associated HTML file.
Open the HTML file and see how the markdown has been rendered. You may need to switch back and forth between the WriteR window and your browser to compare the plain text and the beautiful HTML.
Now edit the Rmd file in WriteR to your heart's content.
|
/scratch/gouwar.j/cran-all/cranData/BrailleR/vignettes/IntroWriteR.Rmd
|
---
title: "Testing the VI.ggplot() within the BrailleR package"
author: "A. Jonathan R. Godfrey"
bibliography: BrailleRPublications.bib
vignette: >
%\VignetteIndexEntry{qplot}
%\VignetteEngine{knitr::rmarkdown}
output: knitr:::html_vignette
---
This vignette contained many more plots in its initial development. The set has been cut back considerably to offer meaningful testing only, and because much of the material was moved over to a book called [BrailleR in Action](https://R-Resources.massey.ac.nz/BrailleRInAction/). Doing so also had the advantage of speeding up package creation, testing, and installation.
N.B. The commands here are either exact copies of the commands presented in Wickham (2009) or minor alterations of them. Notably, some code given in the book no longer works; such code is marked with `#!`.
The `ggplot2` package has a `summary` method that often, but not always, offers something to show that things have changed from one plot to another. Summary commands are included below but commented out.
```{r GetLibraries}
library(BrailleR)
library(ggplot2)
dsmall = diamonds[1:100,]
```
```{r g1}
g1 = qplot(carat, price, data = diamonds)
# summary(g1)
g1
# VI(g1) ### automatic since BrailleR v0.32.0
```
If the user does not actually plot the graph, they can still find out what it will look like once it is plotted by using the `VI()` command on the graph object. An explicit call became unnecessary from version 0.32.0 of BrailleR, which describes a graph automatically when it is printed.
N.B. All `VI()` commands can now be deleted from this document.
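For example, a graph object can be inspected without ever being drawn; a minimal sketch:

```r
g <- qplot(carat, price, data = diamonds) # create the graph object, but do not print it
VI(g)                                     # text description of the as-yet unplotted graph
```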
```{r g2}
g2 = qplot(carat, price, data = dsmall, colour = color)
# summary(g2)
g2
```
```{r g3}
g3 = qplot(carat, price, data = dsmall, shape = cut)
# summary(g3)
g3
```
```{r g4}
# to get semi-transparent points
g4 = qplot(carat, price, data = diamonds, alpha = I(1/100))
# summary(g4)
g4
```
```{r g5}
# to add a smoother (default is loess for n<1000)
g5 = qplot(carat, price, data = dsmall, geom = c("point", "smooth"))
# summary(g5)
g5
#! g5a = qplot(carat, price, data = dsmall, geom = c("point", "smooth"), span = 1)
library(splines)
#! g5b = qplot(carat, price, data = dsmall, geom = c("point", "smooth"), method = "lm")
#! g5c = qplot(carat, price, data = dsmall, geom = c("point", "smooth"), method = "lm", formula = y ~ ns(x,5))
```
```{r g6, include=FALSE}
# continuous v categorical
g6 = qplot(color, price / carat, data = diamonds, geom = "jitter", alpha = I(1 / 50))
# summary(g6)
g6
# VI(g6) ### automatic since BrailleR v0.32.0
g6a = qplot(color, price / carat, data = diamonds, geom = "boxplot")
# summary(g6a)
g6a
```
```{r g7}
# univariate plots
g7a = qplot(carat, data = diamonds, geom = "histogram")
# summary(g7a)
g7a
g7b = qplot(carat, data = diamonds, geom = "histogram", binwidth = 1, xlim = c(0,3))
g7b
g7c = qplot(carat, data = diamonds, geom = "histogram", binwidth = 0.1, xlim = c(0,3))
g7c
g7d = qplot(carat, data = diamonds, geom = "histogram", binwidth = 0.01, xlim = c(0,3))
# summary(g7d)
g7d
```
```{r g8, include=FALSE}
g8 = qplot(carat, data = diamonds, geom = "density")
# summary(g8)
g8
```
```{r g9, include=FALSE}
# data is separated by implication using the following...
g9 = qplot(carat, data = diamonds, geom = "density", colour = color)
# summary(g9)
g9
g10 = qplot(carat, data = diamonds, geom = "histogram", fill = color)
# summary(g10)
g10
```
```{r g11}
# bar charts for categorical variable
g11a = qplot(color, data = diamonds)
# summary(g11a)
g11a
g11b = qplot(color, data = diamonds, geom = "bar")
# summary(g11b)
g11b
g12a = qplot(color, data = diamonds, geom = "bar", weight = carat)
# summary(g12a)
g12a
g12b = qplot(color, data = diamonds, geom = "bar", weight = carat) + scale_y_continuous("carat")
# summary(g12b)
g12b
```
```{r g13}
# time series plots
g13a = qplot(date, unemploy / pop, data = economics, geom = "line")
# summary(g13a)
g13a
g13b = qplot(date, uempmed, data = economics, geom = "line")
# summary(g13b)
g13b
```
```{r g14, include=FALSE}
# path plots
year <- function(x) as.POSIXlt(x)$year + 1900
g14a = qplot(unemploy / pop, uempmed, data = economics, geom = c("point", "path"))
# summary(g14a)
g14a
#g14b = qplot(unemploy / pop, uempmed, data = economics, geom = "path", colour = year(date)) + scale_area()
#summary(g14b)
```
```{r g15, include=FALSE}
# facets is the ggplot term for trellis' panels
g15a = qplot(carat, data = diamonds, facets = color ~ ., geom = "histogram", binwidth = 0.1, xlim = c(0, 3))
# summary(g15a)
g15a
g15b = qplot(carat, ..density.., data = diamonds, facets = color ~ ., geom = "histogram", binwidth = 0.1, xlim = c(0, 3))
# summary(g15b)
g15b
```
```{r g16}
# rescaling of the axes
g16 = qplot(carat, price, data = dsmall, log = "xy")
# summary(g16)
g16
```
```{r g17, include=FALSE}
# Facets syntax without a "." before the "~" causes grief
g17 = qplot(displ, hwy, data=mpg, facets =~ year) + geom_smooth()
# summary(g17)
g17
```
|
/scratch/gouwar.j/cran-all/cranData/BrailleR/vignettes/qplot.Rmd
|
#' Simulation time series data for individual
#'
#' A dataset containing values of 10 variables
#' of interest over 50 periods.
#'
#' @examples
#' ## Generated by the following R codes
#' set.seed(1000)
#' n = 50; p = 10
#' Precision = diag(rep(2, p)) # generate precision matrix
#' for (i in 1 : (p - 1)){
#' temp = ifelse(i > 2 * p / 3, 0.4, 1)
#' Precision[i, i + 1] = temp
#' Precision[i + 1, i] = temp
#' }
#' # R=-cov2cor(Precision) + diag(rep(2, p)) # real partial correlation matrix
#' Sigma = solve(Precision) # generate covariance matrix
#' rho = 0.5
#' y = matrix(0, n, p) # generate observed time series data
#' Epsilon = MASS::mvrnorm(n, rep(0, p), Sigma)
#' y[1, ] = Epsilon[1, ]
#' for (i in 2 : n){
#' y[i, ] = rho * y[i - 1, ] + sqrt(1 - rho^2) * Epsilon[i, ]
#' }
#' indsim = y
"indsim"
#' Simulation time series data for population A
#'
#' A dataset containing values of 10 variables
#' of interest for 20 subjects over 50 periods.
#' @seealso \code{\link{popsimB}}.
#' @examples
#' ## Generated by the following R codes
#' set.seed(1234)
#' n = 50; p = 10; m1 = 20; m2 = 10
#' Precision1 = Precision2 = diag(rep(1, p)) # generate Precision matrix for population
#' for (i in 1 : (p - 1)){
#' temp1 = ifelse(i > 2 * p / 3, -0.2, 0.4)
#' temp2 = ifelse(i < p / 3, 0.4, -0.2)
#' Precision1[i, i + 1] = Precision1[i + 1, i] = temp1
#' Precision2[i, i + 1] = Precision2[i + 1, i] = temp2
#' }
#' # R1=-cov2cor(Precision1) + diag(rep(2, p)) # real partial correlation matrix
#' # R2=-cov2cor(Precision2) + diag(rep(2, p))
#' Index = matrix(0, p, p) # generate covariance matrix for each subject
#' for (i in 1 : p){
#' for (j in 1 : p){
#' if (i != j & abs(i - j) <= 3) Index[i, j] = 1
#' }
#' }
#' SigmaAll1 = array(dim = c(p, p, m1))
#' SigmaAll2 = array(dim = c(p, p, m2))
#' for (sub in 1 : m1){
#' RE = matrix(rnorm(p^2, 0, sqrt(2) * 0.05), p, p) * Index
#' RE1 = (RE + t(RE)) / 2
#' PrecisionInd = Precision1 + RE1
#' SigmaAll1[, , sub] = solve(PrecisionInd)
#' }
#' for (sub in 1 : m2){
#' RE = matrix(rnorm(p^2, 0, sqrt(2) * 0.15), p, p) * Index
#' RE1 = (RE + t(RE)) / 2
#' PrecisionInd = Precision2 + RE1
#' SigmaAll2[, , sub] = solve(PrecisionInd)
#' }
#' rho = 0.3 # generate observed time series data
#' y1 = array(dim = c(n, p, m1))
#' y2 = array(dim = c(n, p, m2))
#' for (sub in 1 : m1){
#' SigmaInd1 = SigmaAll1[, , sub]
#' ytemp = matrix(0, n, p)
#' Epsilon = MASS::mvrnorm(n, rep(0, p), SigmaInd1)
#' ytemp[1, ] = Epsilon[1, ]
#' for (i in 2 : n){
#' ytemp[i, ] = rho * ytemp[i - 1, ] + sqrt(1 - rho^2) * Epsilon[i, ]
#' }
#' y1[, , sub] = ytemp
#' }
#' for (sub in 1 : m2){
#' SigmaInd2 = SigmaAll2[, , sub]
#' ytemp = matrix(0, n, p)
#' Epsilon = MASS::mvrnorm(n, rep(0, p), SigmaInd2)
#' ytemp[1, ] = Epsilon[1, ]
#' for (i in 2 : n){
#' ytemp[i, ] = rho * ytemp[i - 1, ] + sqrt(1 - rho^2) * Epsilon[i, ]
#' }
#' y2[, , sub] = ytemp
#' }
#' popsimA = y1
#' popsimB = y2
"popsimA"
#' Simulation time series data for population B
#'
#' A dataset containing values of 10 variables
#' of interest for 10 subjects over 50 periods.
#'
#' @seealso \code{\link{popsimA}}.
"popsimB"
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/data.R
|
#' Estimate individual-level partial correlation coefficients
#'
#' Estimate individual-level partial correlation coefficients in time series data
#' with \eqn{1-\alpha} confidence intervals.
#' Note that these are confidence intervals for single parameters, not simultaneous confidence intervals.
#' \cr
#' \cr
#'
#'@param X time series data of an individual which is a \eqn{n*p} numeric matrix, where \eqn{n} is the number of periods of time and \eqn{p} is the number of variables.
#'@param lambda a penalty parameter of order \eqn{\sqrt{\log(p)/n}}.
#'If \code{NULL}, \eqn{\sqrt{2*2.01/n*\log(p*(\log(p))^{1.5}/n^{0.5})}} is used in scaled lasso, and \eqn{\sqrt{2*\log(p)/n}} is used in lasso.
#'Increasing the penalty parameter may lead to larger residuals in the node-wise regression,
#'causing larger absolute values of estimates of partial correlation coefficients, which may cause more false positives in subsequent tests.
#'@param type a character string representing the method of estimation. \code{"slasso"} means scaled lasso, and \code{"lasso"} means lasso. Default value is \code{"slasso"}.
#'@param alpha significance level, default value is \code{0.05}.
#'@param ci a logical indicating whether to compute \eqn{1-\alpha} confidence interval, default value is \code{TRUE}.
#'
#'@return An \code{indEst} class object containing three or five components.
#'
#' \code{coef} a \eqn{p*p} partial correlation coefficients matrix.
#'
#' \code{ci.lower} a \eqn{p*p} numeric matrix containing the lower bound of \eqn{1-\alpha} confidence interval,
#' returned if \code{ci} is \code{TRUE}.
#'
#' \code{ci.upper} a \eqn{p*p} numeric matrix containing the upper bound of \eqn{1-\alpha} confidence interval,
#' returned if \code{ci} is \code{TRUE}.
#'
#' \code{asym.ex} a matrix measuring the asymptotic expansion of estimates, which will be used for multiple tests.
#'
#' \code{type} regression type in estimation.
#'
#'@seealso \code{\link{population.est}}.
#'
#'@examples
#' ## Quick example for the individual-level estimates
#' data(indsim)
#' # estimating partial correlation coefficients by scaled lasso
#' pc = individual.est(indsim)
#'
#' @references
#' Qiu Y. and Zhou X. (2021).
#' Inference on multi-level partial correlations
#' based on multi-subject time series data,
#' \emph{Journal of the American Statistical Association}, 00, 1-15.
#' @references
#' Sun T. and Zhang C. (2012).
#' Scaled Sparse Linear Regression,
#' \emph{Biometrika}, 99, 879–898.
#' @references
#' Liu W. (2013).
#' Gaussian Graphical Model Estimation With False Discovery Rate Control,
#' \emph{The Annals of Statistics}, 41, 2948–2978.
#' @references
#' Ren Z., Sun T., Zhang C. and Zhou H. (2015).
#' Asymptotic Normality and Optimalities in Estimation of Large Gaussian Graphical Models,
#' \emph{The Annals of Statistics}, 43, 991–1026.
individual.est <- function(X, lambda = NULL, type = c("slasso", "lasso"), alpha = 0.05, ci = TRUE){
X = as.matrix(X)
n = dim(X)[1]
p = dim(X)[2]
Mp = p * (p - 1) / 2
X = scale(X, scale = FALSE)
XS = scale(X, center = FALSE)
sdX = apply(X, 2, sd)
if (min(sdX) == 0)
stop("The argument X should not have any constant column!\n")
Eresidual = matrix(0, n, p)
CoefMatrix = matrix(0, p, p - 1)
type = match.arg(type)
if (is.null(lambda)){
if (type == "slasso"){
lambda = sqrt(2 * 2.01 * log(p * (log(p))^(1.5) / sqrt(n)) / n)
} else if (type == "lasso"){
lambda = sqrt(2 * log(p) / n)
}
}
if (type == "slasso"){
for (i in 1 : p){
slasso = scaledlasso(X = XS[, -i], y = X[, i], lam0 = lambda)
Eresidual[, i] = slasso$residuals
CoefMatrix[i, ] = slasso$coefficients / sdX[-i]
}
} else if (type == "lasso"){
for (i in 1 : p){
lasso = glmnet(x = XS[,-i], y = X[,i], intercept = FALSE, standardize = FALSE)
Coef = coef.glmnet(lasso, s = lambda * sdX[i])
CoefMatrix[i, ] = as.vector(Coef)[-1] / sdX[-i]
Predict = predict.glmnet(lasso, s = lambda * sdX[i], newx = XS[,-i])
Eresidual[, i] = X[,i] - Predict[,1]
}
}
CovRes = t(Eresidual) %*% Eresidual / n
m = 1
Est = matrix(1, p, p)
BTAll = matrix(0, n, Mp)
for (i in 1 : (p - 1)){
for (j in (i + 1) : p){
temp = CovRes[i, j] + diag(CovRes)[i] * CoefMatrix[j, i] + diag(CovRes)[j] * CoefMatrix[i, j - 1]
Est[j, i] = Est[i, j] = pmin(pmax(-1, temp / sqrt(diag(CovRes)[i] * diag(CovRes)[j])), 1)
omegaHat = - temp / (diag(CovRes)[i] * diag(CovRes)[j])
BTAll[, m] = ( Eresidual[, i] * Eresidual[, j] + temp ) / sqrt(diag(CovRes)[i] * diag(CovRes)[j]) - omegaHat * sqrt(diag(CovRes)[j]) * ( Eresidual[, i]^2 - CovRes[i, i] ) / (2 * sqrt(diag(CovRes)[i])) - omegaHat * sqrt(diag(CovRes)[i]) * ( Eresidual[, j]^2 - CovRes[j, j] ) / (2 * sqrt(diag(CovRes)[j]))
m = m + 1
}
}
BTAllcenter = scale(BTAll, scale = FALSE)
if (!ci) return(structure(list(coef = Est, asym.ex = BTAllcenter, type = type), class='indEst'))
NumAll = c()
DenAll = c()
for(i in 1 : Mp){
AR1 = ar(BTAllcenter[, i], aic = FALSE, order.max = 1)
rhoEst = AR1$ar
sigma2Est = AR1$var.pred
NumAll[i] = 4 * (rhoEst * sigma2Est)^2 / (1 - rhoEst)^8
DenAll[i] = sigma2Est^2 / (1 - rhoEst)^4
}
a2All = sum(NumAll) / sum(DenAll)
bandwidthAll = 1.3221 * (a2All * n)^(0.2)
diagW1 = colSums(BTAll^2) / n
for (h in 1 : (n - 1)){
gammah = colSums(matrix(BTAll[(1 + h):n,] * BTAll[1:(n - h),], ncol=Mp))
diagW1 = diagW1 + 2 * QS(h / bandwidthAll) * gammah / n
}
m = 1
ci.upper = ci.lower = diag(rep(1, p))
for (i in 1 : (p - 1)){
for (j in (i + 1) : p){
ci.upper[j, i] = ci.upper[i, j] = min(1, Est[i, j] + qnorm(1 - alpha / 2) * sqrt(diagW1[m] / n))
ci.lower[j, i] = ci.lower[i, j] = max(-1, Est[i, j] - qnorm(1 - alpha / 2) * sqrt(diagW1[m] / n))
m = m + 1
}
}
return(structure(list(coef = Est, ci.lower = ci.lower, ci.upper = ci.upper, asym.ex = BTAllcenter, type = type), class = 'indEst'))
}
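## Minimal usage sketch (assumes the packaged 'indsim' data; kept as comments here):
# pc <- individual.est(indsim, type = "lasso", alpha = 0.10)  # 90% confidence intervals
# pc$coef[1:3, 1:3]                                           # estimated partial correlations
# c(pc$ci.lower[1, 2], pc$ci.upper[1, 2])                     # interval for the (1,2) entry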
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/individual.est.R
|
#' Identify nonzero individual-level partial correlations
#'
#' Identify nonzero individual-level partial correlations in time series data
#' by controlling the rate of the false discovery proportion (FDP) exceeding \eqn{c0}
#' at \eqn{\alpha}, considering time dependence.
#' Input an \code{indEst} class object returned by \code{\link{individual.est}} or \code{\link{population.est}}.
#' \cr
#' \cr
#'
#'@param indEst An \code{indEst} class object.
#'@param alpha significance level, default value is \code{0.05}.
#'@param c0 threshold of the exceedance rate of FDP,
#'default value is \code{0.1}.
#'The choice of \code{c0} depends on the empirical problem. A smaller value of \code{c0} will
#'reduce false positives, but it may also cost more false negatives.
#'@param targetSet a two-column matrix. Each row contains two indices corresponding to a pair of variables of interest.
#'If \code{NULL}, all pairs of variables are considered to be of interest.
#'@param MBT times of multiplier bootstrap, default value is \code{3000}.
#'@param simplify a logical indicating whether results should be simplified if possible.
#'
#'@return If \code{simplify} is \code{FALSE}, a \eqn{p*p} matrix with values 0 or 1 is returned.
#'If the j-th row and k-th column of the matrix is 1,
#'then the partial correlation coefficient between
#'the j-th variable and the k-th variable is identified to be nonzero.
#'
#'And if \code{simplify} is \code{TRUE}, a two-column matrix is returned,
#'indicating the row index and the column index of recovered nonzero partial correlations.
#'We only retain the results for which the row index is less than the column index.
#'Those with larger test statistics are sorted first.
#'
#'@seealso \code{\link{population.est}} for making inferences on one individual in the population.
#'
#'@examples
#' ## Quick example for the individual-level inference
#' data(indsim)
#' # estimating partial correlation coefficients by scaled lasso
#' pc = individual.est(indsim)
#' # conducting hypothesis test
#' Res = individual.test(pc)
#'
#' @references
#' Qiu Y. and Zhou X. (2021).
#' Inference on multi-level partial correlations
#' based on multi-subject time series data,
#' \emph{Journal of the American Statistical Association}, 00, 1-15.
individual.test <- function(indEst, alpha = 0.05, c0 = 0.1, targetSet = NULL, MBT = 3000, simplify = !is.null(targetSet)){
force(simplify)
if (!inherits(indEst, 'indEst'))
stop("The argument indEst requires an 'indEst' class input!\n")
Est = indEst$coef
BTAllcenter = indEst$asym.ex
n = nrow(BTAllcenter)
p = nrow(Est)
Mp = p * (p - 1) / 2
if (is.null(targetSet)){
targetSet = lower.tri(Est)
index = 1 : Mp
} else {
simplify = TRUE
targetSet = normalize.set(targetSet, p)
index = (2 * p - targetSet[, 1]) * (targetSet[, 1] - 1) / 2 + targetSet[, 2] - targetSet[, 1]
}
NumAll = c()
DenAll = c()
for(i in 1 : Mp){
AR1 = ar(BTAllcenter[, i], aic = FALSE, order.max = 1)
rhoEst = AR1$ar
sigma2Est = AR1$var.pred
NumAll[i] = 4 * (rhoEst * sigma2Est)^2 / (1 - rhoEst)^8
DenAll[i] = sigma2Est^2 / (1 - rhoEst)^4
}
a2All = sum(NumAll) / sum(DenAll)
bandwidthAll = 1.3221 * (a2All * n)^(0.2)
BTcovAll = matrix(0, n, n)
for (i in 1 : n){
for (j in 1 : n){
BTcovAll[i, j] = QS(abs(i - j) / bandwidthAll)
}
}
BTAllsim = matrix(0, Mp, MBT)
for (i in 1 : MBT){
temp = mvrnorm(1, rep(0, n), BTcovAll)
BTAllsim[, i] = (n)^(-0.5) * colSums(temp * BTAllcenter)
}
WdiagAllEmp = colSums(BTAllcenter ^ 2) / n
TestAllstandard = WdiagAllEmp^(-1/2) * Est[lower.tri(Est)]
BTAllsim0 = WdiagAllEmp^(-1/2) * BTAllsim
SignalID = c()
TestPro = Est[targetSet]
TestProstandard = TestAllstandard[index]
BTPro = abs(BTAllsim0)[index, ]
repeat{
PCmaxIndex = which.max(abs(TestProstandard))
SignalIDtemp = which(Est == TestPro[PCmaxIndex], arr.ind = T)
SignalID = rbind(SignalID, SignalIDtemp)
TestPro = TestPro[-PCmaxIndex]
BTPro = BTPro[-PCmaxIndex, ]
TestProstandard = TestProstandard[-PCmaxIndex]
TestStatPro = sqrt(n) * max(abs(TestProstandard))
BTAllsimPro = apply(BTPro, 2, max)
QPro = sort(BTAllsimPro)[(1 - alpha) * MBT]
if (TestStatPro < QPro) break
}
aug = floor(c0 * dim(SignalID)[1] / (2 * (1 - c0)))
if (aug > 0){
PCmaxIndex = order(-abs(TestProstandard))[1 : aug]
for (q in 1 : length(PCmaxIndex)){
SignalIDtemp = which(Est == TestPro[PCmaxIndex[q]], arr.ind = TRUE)
SignalID = rbind(SignalID, SignalIDtemp)
}
}
if (simplify) return(subset(SignalID, SignalID[,1] < SignalID[,2]))
recovery = diag(rep(1, p))
recovery[SignalID[, 1] + (SignalID[, 2] - 1) * p] = 1
return(recovery)
}
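## Minimal usage sketch (hypothetical target set; kept as comments here):
# pc  <- individual.est(indsim)
# set <- cbind(1:5, 6:10)               # pairs (1,6), (2,7), ..., (5,10)
# individual.test(pc, targetSet = set)  # simplified two-column output, sorted by test statistic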
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/individual.test.R
|
#' tool functions
#'
#' @param u numeric value.
#' @param set two-column numeric matrix.
#' @param p the number of variables.
#' @param X the input matrix of scaled lasso.
#' @param y response variable of scaled lasso.
#' @param lam0 numeric value, the penalty parameter of scaled lasso.
#'
#' @return Intermediate results.
#'
#' @name tool
#' @keywords internal
NULL
#' @rdname tool
QS = function(u){
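# Quadratic Spectral kernel weight (Andrews, 1991): QS(0) = 1 and the weight decays smoothly as |u| grows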
if (u == 0) ker = 1
else ker = 25 * ( sin(6 * pi * u / 5) / (6 * pi * u / 5) - cos(6 * pi * u / 5) ) / (12 * pi^2 * u^2)
return(ker)
}
#' @rdname tool
normalize.set <- function(set, p){
set = as.matrix(set)
if (ncol(set) != 2)
stop('The argument targetSet requires a two-column matrix!\n')
colnames(set) = c('row', 'col')
set = rbind(set, set[,2:1])
set = set[set[,1] < set[,2],]
set = set[set[,2] <= p,]
return(set[!duplicated(set),])
}
#' @rdname tool
scaledlasso <- function (X, y, lam0 = NULL){
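# Scaled lasso (Sun & Zhang, 2012): alternately fit the lasso with penalty lam = lam0 * sigma
# and re-estimate the noise level sigma from the residuals, iterating to convergence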
objlasso = glmnet(x = X, y = y, intercept = FALSE, standardize = FALSE)
sigmaint = 0.1
sigmanew = 5
flag = 0
while (abs(sigmaint - sigmanew) > 1e-04 & flag <= 100) {
flag = flag + 1
sigmaint = sigmanew
lam = lam0 * sigmaint
hy = predict.glmnet(objlasso, s = lam, newx = X)[, 1]
sigmanew = sqrt(mean((y - hy)^2))
}
hbeta = as.vector(coef.glmnet(objlasso, s = lam))[-1]
return(list(hsigma = sigmanew, coefficients = hbeta,
fitted.values = hy, residuals = y - hy))
}
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/others.R
|
#' @exportPattern "^[[:alpha:]]+"
#' @importFrom MASS mvrnorm
#' @importFrom glmnet glmnet coef.glmnet predict.glmnet
#' @importFrom stats ar pnorm qbeta rbinom rnorm runif sd qnorm
NULL
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/pkg.R
|
#' Estimate population-level partial correlation coefficients
#'
#' Estimate population-level partial correlation coefficients in time series data.
#' And also return coefficients for each individual.
#' Input time series data for population as a 3-dimensional array or a list.
#' \cr
#' \cr
#'
#'@param Z If each individual shares the same number of periods of time, \code{Z} can be a \eqn{n*p*m} dimensional array, where \eqn{m} is the number of individuals.
#'In general, \code{Z} should be an m-length list, and each element in the list is a \eqn{n_i*p} matrix, where \eqn{n_i} stands for the number of periods of time of the i-th individual.
#'@param lambda a scalar or an m-length vector, representing the penalty parameters of order \eqn{\sqrt{\log(p)/n_i}} for each individual.
#'If a scalar, the penalty parameters used for each individual are the same.
#'If an m-length vector, the penalty parameters for each individual are specified in order.
#'And if \code{NULL}, penalty parameters are specified by \code{type}.
#'More details about the penalty parameters are in \code{\link{individual.est}}.
#'@param type a character string representing the method of estimation. \code{"slasso"} means scaled lasso, and \code{"lasso"} means lasso. Default value is \code{"slasso"}.
#'@param alpha a numeric scalar, default value is \code{0.05}. It is used when \code{ind.ci} is \code{TRUE}.
#'@param ind.ci a logical indicating whether to compute \eqn{1-\alpha} confidence intervals of each subject, default value is \code{FALSE}.
#'
#'@return A \code{popEst} class object containing three components.
#'
#' \code{coef} a \eqn{p*p} partial correlation coefficients matrix.
#'
#' \code{ind.est} a \eqn{m}-length list, containing estimates for each individual.
#'
#' \code{type} regression type in estimation.
#'
#'@examples
#' ## Quick example for the population-level estimates
#' data(popsimA)
#' # estimating partial correlation coefficients by scaled lasso
#' pc = population.est(popsimA)
#'
#' ## Inference on the first subject in population
#' Res_1 = individual.test(pc$ind.est[[1]])
#'
#' @references
#' Qiu Y. and Zhou X. (2021).
#' Inference on multi-level partial correlations
#' based on multi-subject time series data,
#' \emph{Journal of the American Statistical Association}, 00, 1-15.
population.est <- function(Z, lambda = NULL, type = c("slasso", "lasso"), alpha = 0.05, ind.ci = FALSE){
if (!is.array(Z) & !is.list(Z))
stop("The argument Z requires an 3-D array or a list!")
if (is.array(Z))
Z = lapply(apply(Z, 3, list), '[[', 1)
n = sapply(Z, nrow)
p = unique(sapply(Z, ncol))
if (length(p)>1)
stop("Each individual has to have the same number of variables!")
MC = length(Z)
type = match.arg(type)
if (length(lambda) == 0){
if (type == "slasso"){
lambda = sqrt(2 * 2.01 * log(p * (log(p))^(1.5) / sqrt(n)) / n)
} else if (type == "lasso"){
lambda = sqrt(2 * log(p) / n)
}
} else if (length(lambda) == 1){
lambda = rep(lambda, MC)
} else if (length(lambda) != MC){
stop("The argument lambda requires a scalar or a m-length vector!")
}
ind.est = list()
CAll = array(dim = c(p, p, MC))
for (sub in 1 : MC){
ind.est[[sub]] = individual.est(Z[[sub]], lambda = lambda[sub], type = type, alpha = alpha, ci = ind.ci)
CAll[, , sub] = ind.est[[sub]][['coef']]
}
Est = apply(CAll, c(1, 2), mean)
return(structure(list(coef = Est, ind.est = ind.est, type = type), class = 'popEst'))
}
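## Minimal usage sketch (assumes the packaged 'popsimA' data; kept as comments here):
# pe <- population.est(popsimA)  # scaled-lasso estimates for every subject
# dim(pe$coef)                   # p x p population-level coefficient matrix
# length(pe$ind.est)             # one indEst object per individual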
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/population.est.R
|
#' The one-sample population inference using Genovese and Wasserman's method
#'
#' Identify the nonzero partial correlations in one-sample population,
#' based on controlling the rate of the false discovery proportion (FDP) exceeding \eqn{c0}
#' at \eqn{\alpha}. The method is based on the minimum of the p-values.
#' Input a \code{popEst} class object returned by \code{\link{population.est}}.
#' \cr
#' \cr
#'
#'@param popEst A \code{popEst} class object.
#'@param alpha significance level, default value is \code{0.05}.
#'@param c0 threshold of the exceedance rate of FDP,
#'default value is \code{0.1}.
#'@param targetSet a two-column matrix. Each row contains two indices corresponding to a pair of variables of interest.
#'If \code{NULL}, all pairs of variables are considered to be of interest.
#'@param simplify a logical indicating whether results should be simplified if possible.
#'
#'@return If \code{simplify} is \code{FALSE}, a \eqn{p*p} matrix with values 0 or 1 is returned, and 1 means nonzero.
#'
#'And if \code{simplify} is \code{TRUE}, a two-column matrix is returned,
#'indicating the row index and the column index of recovered nonzero partial correlations.
#'Those with lower p values are sorted first.
#'
#'@seealso \code{\link{population.test}}.
#'
#'@examples
#' ## Quick example for the one-sample population inference
#' data(popsimA)
#' # estimating partial correlation coefficients
#' pc = population.est(popsimA)
#' # conducting hypothesis test
#' Res = population.test.MinPv(pc)
#'
#' @references
#' Genovese C. and Wasserman L. (2006).
#' Exceedance Control of the False Discovery Proportion,
#' \emph{Journal of the American Statistical Association}, 101, 1408-1417.
#' @references
#' Qiu Y. and Zhou X. (2021).
#' Inference on multi-level partial correlations
#' based on multi-subject time series data,
#' \emph{Journal of the American Statistical Association}, 00, 1-15.
population.test.MinPv <- function(popEst, alpha = 0.05, c0 = 0.1, targetSet = NULL, simplify = !is.null(targetSet)){
force(simplify)
if (!inherits(popEst, 'popEst'))
stop("The argument popEst requires a 'popEst' class input!\n")
EstAll = popEst$coef
p = nrow(EstAll)
MC = length(popEst[['ind.est']])
if (is.null(targetSet)){
targetSet = which(upper.tri(EstAll), arr.ind = T)
Mp = p * (p - 1) / 2
} else {
simplify = TRUE
targetSet = normalize.set(targetSet, p)
Mp = nrow(targetSet)
}
CAll = array(dim = c(p, p, MC))
for (sub in 1 : MC){
CAll[, , sub] = popEst[['ind.est']][[sub]][['coef']]
}
SdAll = apply(CAll, c(1, 2), sd)
EstT = sqrt(MC) * EstAll / (SdAll + 1e-6)
pv0 = 2 * (1 - pnorm(abs(EstT)))
pv1 = sort(pv0[targetSet])
Beta = qbeta(alpha, 1, Mp:1)
a0 = which(pv1 > Beta)[1]
a1 = ceiling(a0 / (1 - c0))
pvThreshold = ifelse(is.na(a1), pv1[1]/2, pv1[a1])
if (simplify){
index = which(pv0[targetSet] < pvThreshold)
ord.index = index[order(pv0[targetSet][index])]
return(matrix(targetSet[ord.index, ], ncol=2,
dimnames = list(NULL, c("row", "col"))))
}
MinPv = 1 * (pv0 < pvThreshold)
return(MinPv)
}
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/population.test.MinPv.R
|
#' The one-sample population inference
#'
#' Identify the nonzero partial correlations in one-sample population,
#' based on controlling the rate of the false discovery proportion (FDP) exceeding \eqn{c0}
#' at \eqn{\alpha}, considering time dependence.
#' Input a \code{popEst} class object returned by \code{\link{population.est}}.
#' \cr
#' \cr
#'
#'@param popEst A \code{popEst} class object.
#'@param alpha significance level, default value is \code{0.05}.
#'@param c0 threshold of the exceedance rate of FDP,
#'default value is \code{0.1}. A smaller value of \code{c0} will
#'reduce false positives, but it may also cost more false negatives.
#'@param targetSet a two-column matrix. Each row contains two indices corresponding to a pair of variables of interest.
#'If \code{NULL}, all pairs of variables are considered to be of interest.
#'@param MBT times of multiplier bootstrap, default value is \code{5000}.
#'@param simplify a logical indicating whether results should be simplified if possible.
#'
#'@return If \code{simplify} is \code{FALSE}, a \eqn{p*p} matrix with values 0 or 1 is returned, and 1 means nonzero.
#'
#'And if \code{simplify} is \code{TRUE}, a two-column matrix is returned,
#'indicating the row index and the column index of recovered nonzero partial correlations.
#'We only retain the results for which the row index is less than the column index.
#'Those with larger test statistics are sorted first.
#'
#'@seealso \code{\link{individual.test}}.
#'
#'@examples
#' ## Quick example for the one-sample population inference
#' data(popsimA)
#' # estimating partial correlation coefficients by scaled lasso
#' pc = population.est(popsimA)
#' # conducting hypothesis test
#' Res = population.test(pc)
#' # conducting hypothesis test in variables of interest
#' set = cbind(rep(7:9, each = 10), 1:10)
#' Res_like = population.test(pc, targetSet = set)
#'
#' @references
#' Qiu Y. and Zhou X. (2021).
#' Inference on multi-level partial correlations
#' based on multi-subject time series data,
#' \emph{Journal of the American Statistical Association}, 00, 1-15.
population.test <- function(popEst, alpha = 0.05, c0 = 0.1, targetSet = NULL, MBT = 5000, simplify = !is.null(targetSet)){
force(simplify)
if (!inherits(popEst, 'popEst'))
stop("The argument popEst requires a 'popEst' class input!\n")
EstAll = popEst$coef
p = nrow(EstAll)
MC = length(popEst[['ind.est']])
if (is.null(targetSet)){
targetSet = upper.tri(EstAll)
Mp = p * (p - 1) / 2
} else {
simplify = TRUE
targetSet = normalize.set(targetSet, p)
Mp = nrow(targetSet)
}
EstVec = matrix(0, MC, Mp)
for (i in 1 : MC){
Est = popEst[['ind.est']][[i]][['coef']]
EstVec[i,] = Est[targetSet]
}
EstVecCenter = scale(EstVec, scale = FALSE)
BTAllsim = matrix(0, Mp, MBT)
for (i in 1 : MBT){
temp = rnorm(MC)
BTAllsim[, i] = (MC)^(-0.5) * colSums(temp * EstVecCenter)
}
SignalID = c()
TestPro = EstAll[targetSet]
BTPro = abs(BTAllsim)
repeat{
PCmaxIndex = which.max(abs(TestPro))
SignalIDtemp = which(EstAll == TestPro[PCmaxIndex], arr.ind = T)
SignalID = rbind(SignalID, SignalIDtemp)
TestPro = TestPro[-PCmaxIndex]
BTPro = BTPro[-PCmaxIndex, ]
TestStatPro = sqrt(MC) * max(abs(TestPro))
BTAllsimPro = apply(BTPro, 2, max)
QPro = sort(BTAllsimPro)[(1 - alpha) * MBT]
if (TestStatPro < QPro) break
}
aug = floor(c0 * dim(SignalID)[1] / (2 * (1 - c0)))
if (aug > 0){
PCmaxIndex = order(-abs(TestPro))[1 : aug]
for (q in 1 : length(PCmaxIndex)){
SignalIDtemp = which(EstAll == TestPro[PCmaxIndex[q]], arr.ind = TRUE)
SignalID = rbind(SignalID, SignalIDtemp)
}
}
if (simplify) return(subset(SignalID, SignalID[,1] < SignalID[,2]))
recovery = diag(rep(1, p))
recovery[SignalID[,1] + (SignalID[,2] - 1) * p] = 1
return(recovery)
}
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/population.test.R
|
#' Identify differences of partial correlations between two populations using Genovese and Wasserman's method
#'
#' Identify differences of partial correlations between two populations
#' in two groups of time series data,
#' based on controlling the rate of the false discovery proportion (FDP) exceeding \eqn{c0}
#' at \eqn{\alpha}. The method is based on the minimum of the p-values.
#' Input two \code{popEst} class objects returned by \code{\link{population.est}}
#' (the number of individuals in two groups can be different).
#' \cr
#' \cr
#'
#'@param popEst1 A \code{popEst} class object.
#'@param popEst2 A \code{popEst} class object.
#'@param alpha significance level, default value is \code{0.05}.
#'@param c0 threshold of the exceedance rate of FDP,
#'default value is \code{0.1}.
#'@param targetSet a two-column matrix. Each row contains two indices corresponding to a pair of variables of interest.
#'If \code{NULL}, all pairs of variables are considered to be of interest.
#'@param simplify a logical indicating whether results should be simplified if possible.
#'
#'@return If \code{simplify} is \code{FALSE}, a \eqn{p*p} matrix with values 0 or 1 is returned, and 1 means unequal.
#'
#'And if \code{simplify} is \code{TRUE}, a two-column matrix is returned,
#'indicating the row index and the column index of recovered unequal partial correlations.
#'Those with lower p values are sorted first.
#'
#'@examples
#' ## Quick example for the two-sample case inference
#' data(popsimA)
#' data(popsimB)
#' # estimating partial correlation coefficients by lasso (scaled lasso does the same)
#' pc1 = population.est(popsimA, type = 'l')
#' pc2 = population.est(popsimB, type = 'l')
#' # conducting hypothesis test
#' Res = population2sample.test.MinPv(pc1, pc2)
#'
#' @references
#' Genovese C. and Wasserman L. (2006).
#' Exceedance Control of the False Discovery Proportion,
#' \emph{Journal of the American Statistical Association}, 101, 1408-1417.
#' @references
#' Qiu Y. and Zhou X. (2021).
#' Inference on multi-level partial correlations
#' based on multi-subject time series data,
#' \emph{Journal of the American Statistical Association}, 00, 1-15.
population2sample.test.MinPv <- function(popEst1, popEst2, alpha = 0.05, c0 = 0.1, targetSet = NULL, simplify = !is.null(targetSet)){
force(simplify)
if (!inherits(popEst1, 'popEst') | !inherits(popEst2, 'popEst'))
stop("The arguments popEst1 and popEst2 require 'popEst' class inputs!\n")
EstAll1 = popEst1$coef
EstAll2 = popEst2$coef
p = nrow(EstAll1)
MC1 = length(popEst1[['ind.est']])
MC2 = length(popEst2[['ind.est']])
if (is.null(targetSet)){
targetSet = which(upper.tri(EstAll1), arr.ind = T)
Mp = p * (p - 1) / 2
} else {
simplify = TRUE
targetSet = normalize.set(targetSet, p)
Mp = nrow(targetSet)
}
CAll1 = array(dim = c(p, p, MC1))
CAll2 = array(dim = c(p, p, MC2))
for (sub in 1 : MC1)
CAll1[, , sub] = popEst1[['ind.est']][[sub]][['coef']]
for (sub in 1 : MC2)
CAll2[, , sub] = popEst2[['ind.est']][[sub]][['coef']]
SdAll1 = apply(CAll1, c(1, 2), sd)
SdAll2 = apply(CAll2, c(1, 2), sd)
EstT = (EstAll1 - EstAll2) / sqrt(SdAll1^2 / MC1 + SdAll2^2 / MC2 + 1e-6)
pv0 = 2 * (1 - pnorm(abs(EstT)))
pv1 = sort(pv0[targetSet])
Beta = qbeta(alpha, 1, Mp:1)
a0 = which(pv1 > Beta)[1]
a1 = ceiling(a0 / (1 - c0))
pvThreshold = ifelse(is.na(a1), pv1[1]/2, pv1[a1])
if (simplify){
index = which(pv0[targetSet] < pvThreshold)
ord.index = index[order(pv0[targetSet][index])]
return(matrix(targetSet[ord.index, ], ncol=2,
dimnames = list(NULL, c("row", "col"))))
}
MinPv = 1 * (pv0 < pvThreshold)
return(MinPv)
}
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/population2sample.test.MinPv.R
|
#' Identify differences of partial correlations between two populations
#'
#' Identify differences of partial correlations between two populations
#' in two groups of time series data by
#' controlling the rate of the false discovery proportion (FDP) exceeding \eqn{c0}
#' at \eqn{\alpha}, considering time dependence.
#' Input two \code{popEst} class objects returned by \code{\link{population.est}}
#' (the number of individuals in two groups can be different).
#' \cr
#' \cr
#'
#'@param popEst1 A \code{popEst} class object.
#'@param popEst2 A \code{popEst} class object.
#'@param alpha significance level, default value is \code{0.05}.
#'@param c0 threshold of the exceedance rate of FDP,
#'default value is \code{0.1}. A smaller value of \code{c0} will
#'reduce false positives, but it may also cost more false negatives.
#'@param targetSet a two-column matrix. Each row contains two indices corresponding to a pair of variables of interest.
#'If \code{NULL}, all pairs of variables are considered to be of interest.
#'@param MBT times of multiplier bootstrap, default value is \code{5000}.
#'@param simplify a logical indicating whether results should be simplified if possible.
#'
#'@return If \code{simplify} is \code{FALSE}, a \eqn{p*p} matrix with values 0 or 1 is returned.
#'If the j-th row and k-th column of the matrix is 1,
#'then the partial correlation coefficients between
#'the j-th variable and the k-th variable in two populations
#'are identified to be unequal.
#'
#'And if \code{simplify} is \code{TRUE}, a two-column matrix is returned,
#'indicating the row index and the column index of recovered unequal partial correlations.
#'We only retain the results for which the row index is less than the column index.
#'Those with larger test statistics are sorted first.
#'
#'@examples
#' ## Quick example for the two-sample case inference
#' data(popsimA)
#' data(popsimB)
#' # estimating partial correlation coefficients by lasso (scaled lasso does the same)
#' pc1 = population.est(popsimA, type = 'l')
#' pc2 = population.est(popsimB, type = 'l')
#' # conducting hypothesis test
#' Res = population2sample.test(pc1, pc2)
#' # conducting hypothesis test and returning simplified results
#' Res_s = population2sample.test(pc1, pc2, simplify = TRUE)
#'
#' @references
#' Qiu Y. and Zhou X. (2021).
#' Inference on multi-level partial correlations
#' based on multi-subject time series data,
#' \emph{Journal of the American Statistical Association}, 00, 1-15.
population2sample.test <- function(popEst1, popEst2, alpha = 0.05, c0 = 0.1, targetSet = NULL, MBT = 5000, simplify = !is.null(targetSet)){
force(simplify)
if (!inherits(popEst1, 'popEst') | !inherits(popEst2, 'popEst'))
stop("The arguments popEst1 and popEst2 require 'popEst' class inputs!\n")
EstAll1 = popEst1$coef
EstAll2 = popEst2$coef
p = nrow(EstAll1)
MC1 = length(popEst1[['ind.est']])
MC2 = length(popEst2[['ind.est']])
if (is.null(targetSet)){
targetSet = upper.tri(EstAll1)
Mp = p * (p - 1) / 2
} else {
simplify = TRUE
targetSet = normalize.set(targetSet, p)
Mp = nrow(targetSet)
}
EstVec1 = matrix(0, MC1, Mp)
EstVec2 = matrix(0, MC2, Mp)
for (i in 1 : MC1){
Est = popEst1[['ind.est']][[i]][['coef']]
EstVec1[i,] = Est[targetSet]
}
for (i in 1 : MC2){
Est = popEst2[['ind.est']][[i]][['coef']]
EstVec2[i,] = Est[targetSet]
}
EstVecCenter1 = scale(EstVec1, scale = FALSE)
EstVecCenter2 = scale(EstVec2, scale = FALSE)
TestAllstandard1 = EstAll1[targetSet]
TestAllstandard2 = EstAll2[targetSet]
EstAll = EstAll1 - EstAll2
BTAllsim = matrix(0, Mp, MBT)
for (i in 1 : MBT){
temp1 = rnorm(MC1)
temp2 = rnorm(MC2)
BTAllsim[, i] = (MC1)^(-0.5) * colSums(temp1 * EstVecCenter1) - (MC1)^(0.5) * colMeans(temp2 * EstVecCenter2)
}
SignalID = c()
TestPro = TestAllstandard1 - TestAllstandard2
BTPro = abs(BTAllsim)
repeat{
PCmaxIndex = which.max(abs(TestPro))
SignalIDtemp = which(EstAll == TestPro[PCmaxIndex], arr.ind = T)
SignalID = rbind(SignalID, SignalIDtemp)
TestPro = TestPro[-PCmaxIndex]
BTPro = BTPro[-PCmaxIndex, ]
TestStatPro = sqrt(MC1) * max(abs(TestPro))
BTAllsimPro = apply(BTPro, 2, max)
QPro = sort(BTAllsimPro)[(1 - alpha) * MBT]
if (TestStatPro < QPro) break
}
aug = round(c0 * dim(SignalID)[1] / (2 * (1 - c0)) + 1e-3)
if (aug > 0){
PCmaxIndex = order(-abs(TestPro))[1 : aug]
for (q in 1 : length(PCmaxIndex)){
SignalIDtemp = which(EstAll == TestPro[PCmaxIndex[q]], arr.ind = TRUE)
SignalID = rbind(SignalID, SignalIDtemp)
}
}
if (simplify) return(subset(SignalID, SignalID[,1] < SignalID[,2]))
recovery = matrix(0, p, p)
recovery[SignalID[, 1] + (SignalID[, 2] - 1) * p] = 1
return(recovery)
}
|
/scratch/gouwar.j/cran-all/cranData/BrainCon/R/population2sample.test.R
|
#' @keywords internal
#' @name BranchGLM-package
#' @aliases BranchGLM-package NULL
#' @docType package
#' @examples
#' # Using iris data to demonstrate package usage
#' Data <- iris
#'
#' # Fitting linear regression model
#' Fit <- BranchGLM(Sepal.Length ~ ., data = Data, family = "gaussian", link = "identity")
#' Fit
#'
#' # Doing branch and bound best subset selection
#' VS <- VariableSelection(Fit, type = "branch and bound", metric = "BIC",
#' showprogress = FALSE, bestmodels = 10)
#' VS
#'
#' ## Plotting results
#' plot(VS, ptype = "variables")
#'
"_PACKAGE"
## usethis namespace: start
#' @useDynLib BranchGLM, .registration = TRUE
#' @import stats
#' @import graphics
#' @importFrom methods is
#' @importFrom Rcpp evalCpp
## usethis namespace: end
NULL
#' Internal BranchGLM Functions
#' @description Internal BranchGLM Functions.
#' @details These are not intended for use by users, these are Rcpp functions
#' that do not check the arguments, so improper usage may result in R crashing.
#'
#' @aliases BranchGLMFit MetricIntervalCpp SwitchBranchAndBoundCpp BranchAndBoundCpp
#' BackwardBranchAndBoundCpp ForwardCpp BackwardCpp MakeTable MakeTableFactor2
#' CindexCpp CindexTrap ROCCpp
#' @keywords internal
|
/scratch/gouwar.j/cran-all/cranData/BranchGLM/R/BranchGLM-package.R
|
#' Fits GLMs
#' @description Fits generalized linear models (GLMs) via RcppArmadillo with the
#' ability to perform some computation in parallel with OpenMP.
#' @param formula a formula for the model.
#' @param data a data.frame, list or environment (or object coercible by
#' [as.data.frame] to a data.frame), containing the variables in formula.
#' Neither a matrix nor an array will be accepted.
#' @param family the distribution used to model the data, one of "gaussian",
#' "gamma", "binomial", or "poisson".
#' @param link the link used to link the mean structure to the linear predictors. One of
#' "identity", "logit", "probit", "cloglog", "sqrt", "inverse", or "log". The accepted
#' links depend on the specified family, see more in details.
#' @param offset the offset vector, by default the zero vector is used.
#' @param method one of "Fisher", "BFGS", or "LBFGS". BFGS and L-BFGS are
#' quasi-Newton methods which are typically faster than Fisher's scoring when
#' there are many covariates (at least 50).
#' @param grads a positive integer to denote the number of gradients used to
#' approximate the inverse information with, only for `method = "LBFGS"`.
#' @param parallel a logical value to indicate if parallelization should be used.
#' @param nthreads a positive integer to denote the number of threads used with OpenMP,
#' only used if `parallel = TRUE`.
#' @param tol a positive number to denote the tolerance used to determine model convergence.
#' @param maxit a positive integer to denote the maximum number of iterations performed.
#' The default for Fisher's scoring is 50 and for the other methods the default is 200.
#' @param init a numeric vector of initial values for the betas, if not specified
#' then they are automatically selected via linear regression with the transformation
#' specified by the link function. This is ignored for linear regression models.
#' @param fit a logical value to indicate whether to fit the model or not.
#' @param keepData a logical value to indicate whether or not to store a copy of
#' data and the design matrix, the default is TRUE. If this is FALSE, then the
#' results from this cannot be used inside of `VariableSelection`.
#' @param keepY a logical value to indicate whether or not to store a copy of y,
#' the default is TRUE. If this is FALSE, then the binomial GLM helper functions
#' may not work and this cannot be used inside of `VariableSelection`.
#' @param contrasts see `contrasts.arg` of `model.matrix.default`.
#' @param x design matrix used for the fit, must be numeric.
#' @param y outcome vector, must be numeric.
#' @seealso [predict.BranchGLM], [coef.BranchGLM], [VariableSelection], [confint.BranchGLM], [logLik.BranchGLM]
#' @return `BranchGLM` returns a `BranchGLM` object which is a list with the following components
#' \item{`coefficients`}{ a matrix with the coefficient estimates, SEs, Wald test statistics, and p-values}
#' \item{`iterations`}{ number of iterations it took the algorithm to converge, if the algorithm failed to converge then this is -1}
#' \item{`dispersion`}{ the value of the dispersion parameter}
#' \item{`logLik`}{ the log-likelihood of the fitted model}
#' \item{`vcov`}{ the variance-covariance matrix of the fitted model}
#' \item{`resDev`}{ the residual deviance of the fitted model}
#' \item{`AIC`}{ the AIC of the fitted model}
#' \item{`preds`}{ predictions from the fitted model}
#' \item{`linpreds`}{ linear predictors from the fitted model}
#' \item{`tol`}{ tolerance used to fit the model}
#' \item{`maxit`}{ maximum number of iterations used to fit the model}
#' \item{`formula`}{ formula used to fit the model}
#' \item{`method`}{ iterative method used to fit the model}
#' \item{`grads`}{ number of gradients used to approximate inverse information for L-BFGS}
#' \item{`y`}{ y vector used in the model, not included if `keepY = FALSE`}
#' \item{`x`}{ design matrix used to fit the model, not included if `keepData = FALSE`}
#' \item{`offset`}{ offset vector in the model, not included if `keepData = FALSE`}
#' \item{`fulloffset`}{ supplied offset vector, not included if `keepData = FALSE`}
#' \item{`data`}{ original `data` argument supplied to the function, not included if `keepData = FALSE`}
#' \item{`mf`}{ the model frame, not included if `keepData = FALSE`}
#' \item{`numobs`}{ number of observations in the design matrix}
#' \item{`names`}{ names of the predictor variables}
#' \item{`yname`}{ name of y variable}
#' \item{`parallel`}{ whether parallelization was employed to speed up model fitting process}
#' \item{`missing`}{ number of missing values removed from the original dataset}
#' \item{`link`}{ link function used to model the data}
#' \item{`family`}{ family used to model the data}
#' \item{`ylevel`}{ the levels of y, only included for binomial glms}
#' \item{`xlev`}{ the levels of the factors in the dataset}
#' \item{`terms`}{the terms object used}
#'
#' `BranchGLM.fit` returns a list with the following components
#' \item{`coefficients`}{ a matrix with the coefficients estimates, SEs, Wald test statistics, and p-values}
#' \item{`iterations`}{ number of iterations it took the algorithm to converge, if the algorithm failed to converge then this is -1}
#' \item{`dispersion`}{ the value of the dispersion parameter}
#' \item{`logLik`}{ the log-likelihood of the fitted model}
#' \item{`vcov`}{ the variance-covariance matrix of the fitted model}
#' \item{`resDev`}{ the residual deviance of the fitted model}
#' \item{`AIC`}{ the AIC of the fitted model}
#' \item{`preds`}{ predictions from the fitted model}
#' \item{`linpreds`}{ linear predictors from the fitted model}
#' \item{`tol`}{ tolerance used to fit the model}
#' \item{`maxit`}{ maximum number of iterations used to fit the model}
#' @details
#'
#' ## Fitting
#' Can use BFGS, L-BFGS, or Fisher's scoring to fit the GLM. BFGS and L-BFGS are
#' typically faster than Fisher's scoring when there are at least 50 covariates
#' and Fisher's scoring is typically best when there are fewer than 50 covariates.
#' This function does not currently support the use of weights. In the special
#' case of gaussian regression with identity link the `method` argument is ignored
#' and the normal equations are solved directly.
#'
#' The models are fit in C++ by using Rcpp and RcppArmadillo. In order to help
#' convergence, each of the methods makes use of a backtracking line-search using
#' the strong Wolfe conditions to find an adequate step size. There are
#' three conditions used to determine convergence, the first is whether there is a
#' sufficient decrease in the negative log-likelihood, the second is whether
#' the l2-norm of the score is sufficiently small, and the last condition is
#' whether the change in each of the beta coefficients is sufficiently
#' small. The `tol` argument controls all of these criteria. If the algorithm fails to
#' converge, then `iterations` will be -1.
#'
#' All observations with any missing values are removed before model fitting.
#'
#' `BranchGLM.fit` can be faster than calling `BranchGLM` if the
#' x matrix and y vector are already available, but doesn't return as much information.
#' The object returned by `BranchGLM.fit` is not of class `BranchGLM`, so
#' all of the methods for `BranchGLM` objects such as `predict` or
#' `VariableSelection` cannot be used.
#'
#' ## Dispersion Parameter
#' The dispersion parameter for gamma regression is estimated via maximum likelihood,
#' very similar to the `gamma.dispersion` function from the MASS package. The
#' dispersion parameter for gaussian regression is also estimated via maximum
#' likelihood estimation.
#'
#' ## Families and Links
#' The binomial family accepts "cloglog", "log", "logit", and "probit" as possible
#' link functions. The gamma and gaussian families accept "identity", "inverse",
#' "log", and "sqrt" as possible link functions. The Poisson family accepts "identity",
#' "log", and "sqrt" as possible link functions.
#'
#' @examples
#' Data <- iris
#'
#' # Linear regression
#' ## Using BranchGLM
#' BranchGLM(Sepal.Length ~ ., data = Data, family = "gaussian", link = "identity")
#'
#' ## Using BranchGLM.fit
#' x <- model.matrix(Sepal.Length ~ ., data = Data)
#' y <- Data$Sepal.Length
#' BranchGLM.fit(x, y, family = "gaussian", link = "identity")
#'
#' # Gamma regression
#' ## Using BranchGLM
#' BranchGLM(Sepal.Length ~ ., data = Data, family = "gamma", link = "log")
#'
#' ### init
#' BranchGLM(Sepal.Length ~ ., data = Data, family = "gamma", link = "log",
#' init = rep(0, 6), maxit = 50, tol = 1e-6, contrasts = NULL)
#'
#' ### method
#' BranchGLM(Sepal.Length ~ ., data = Data, family = "gamma", link = "log",
#' init = rep(0, 6), maxit = 50, tol = 1e-6, contrasts = NULL, method = "LBFGS")
#'
#' ### offset
#' BranchGLM(Sepal.Length ~ ., data = Data, family = "gamma", link = "log",
#' init = rep(0, 6), maxit = 50, tol = 1e-6, contrasts = NULL,
#' offset = Data$Sepal.Width)
#'
#' ## Using BranchGLM.fit
#' x <- model.matrix(Sepal.Length ~ ., data = Data)
#' y <- Data$Sepal.Length
#' BranchGLM.fit(x, y, family = "gamma", link = "log", init = rep(0, 6),
#' maxit = 50, tol = 1e-6, offset = Data$Sepal.Width)
#'
#'
#' @references McCullagh, P., & Nelder, J. A. (1989). Generalized Linear Models (2nd ed.).
#' Chapman & Hall.
#' @export
BranchGLM <- function(formula, data, family, link, offset = NULL,
method = "Fisher", grads = 10,
parallel = FALSE, nthreads = 8,
tol = 1e-6, maxit = NULL, init = NULL, fit = TRUE,
contrasts = NULL, keepData = TRUE,
keepY = TRUE){
### converting family, link, and method to lower
family <- tolower(family)
link <- tolower(link)
method <- tolower(method)
### Validating supplied arguments
if(!is(formula, "formula")){
stop("formula must be a valid formula")
}
if(length(method) != 1 || !is.character(method)){
stop("method must be exactly one of 'Fisher', 'BFGS', or 'LBFGS'")
}else if(method == "fisher"){
method <- "Fisher"
}else if(method == "bfgs"){
method <- "BFGS"
}else if(method == "lbfgs"){
method <- "LBFGS"
}else{
stop("method must be exactly one of 'Fisher', 'BFGS', or 'LBFGS'")
}
if(length(family) != 1 || !family %in% c("gaussian", "binomial", "poisson", "gamma")){
stop("family must be one of 'gaussian', 'binomial', 'gamma', or 'poisson'")
}
if(length(link) != 1 || !link %in% c("logit", "probit", "cloglog", "log", "identity", "inverse", "sqrt")){
stop("link must be one of 'logit', 'probit', 'cloglog', 'log', 'inverse', 'sqrt', or 'identity'")
}
### Evaluating arguments
mf <- match.call(expand.dots = FALSE)
m <- match(c("formula", "data", "offset"), names(mf), 0L)
mf <- mf[c(1L, m)]
mf$drop.unused.levels <- TRUE
mf$na.action <- "na.omit"
mf[[1L]] <- quote(model.frame)
mf <- eval(mf, parent.frame())
## Getting data objects
y <- model.response(mf, "any")
fulloffset <- offset
offset <- as.vector(model.offset(mf))
x <- model.matrix(attr(mf, "terms"), mf, contrasts)
if(is.null(offset)){
offset <- rep(0, length(y))
}
## Checking y variable for binomial family
if(tolower(family) == "binomial"){
if(!(link %in% c("cloglog", "log", "logit", "probit"))){
stop("valid link functions for binomial regression are 'cloglog', 'log', 'logit', and 'probit'")
}else if(is.factor(y) && (nlevels(y) == 2)){
ylevel <- levels(y)
y <- as.numeric(y == ylevel[2])
}else if(is.numeric(y) && all(y %in% c(0, 1))){
ylevel <- c(0, 1)
}else if(is.logical(y)){
ylevel <- c(FALSE, TRUE)
y <- y * 1
}else{
stop("response variable for binomial regression must be numeric with only
0s and 1s, a two-level factor, or a logical vector")
}
}
## Getting maxit
if(is.null(maxit)){
if(method == "Fisher"){
maxit = 50
}else{
maxit = 200
}
}
### Using BranchGLM.fit to fit GLM
if(fit){
df <- BranchGLM.fit(x, y, family, link, offset, method, grads, parallel, nthreads,
init, maxit, tol)
}else{
df <- list("coefficients" = matrix(NA, nrow = ncol(x), ncol = 4),
"vcov" = matrix(NA, nrow = ncol(x), ncol = ncol(x)))
colnames(df$coefficients) <- c("Estimate", "SE", "z", "p-values")
}
# Setting names for coefficients
row.names(df$coefficients) <- colnames(x)
# Setting names for vcov
rownames(df$vcov) <- colnames(df$vcov) <- colnames(x)
df$formula <- formula
df$method <- method
if(keepY){
df$y <- y
}
df$numobs <- nrow(x)
if(keepData){
df$data <- data
df$x <- x
df$mf <- mf
df$offset <- offset
df$fulloffset <- fulloffset
}
df$names <- attributes(terms(formula, data = data))$factors |>
colnames()
df$yname <- attributes(terms(formula, data = data))$variables[-1] |>
as.character()
df$yname <- df$yname[attributes(terms(formula, data = data))$response]
df$parallel <- parallel
df$missing <- nrow(data) - nrow(x)
df$link <- link
df$contrasts <- contrasts
df$family <- family
df$terms <- attr(mf, "terms")
df$xlev <- .getXlevels(df$terms, mf)
df$grads <- grads
df$tol <- tol
df$maxit <- maxit
if(family == "binomial"){
df$ylevel <- ylevel
}
if((family == "gaussian" || family == "gamma")){
colnames(df$coefficients)[3] <- "t"
}
structure(df, class = "BranchGLM")
}
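## Minimal usage sketch (illustrative settings; kept as comments here):
# fit <- BranchGLM(Sepal.Length ~ ., data = iris, family = "gamma",
#                  link = "log", method = "BFGS")
# coef(fit)  # estimates, SEs, test statistics, and p-values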
#' @rdname BranchGLM
#' @export
BranchGLM.fit <- function(x, y, family, link, offset = NULL,
method = "Fisher", grads = 10,
parallel = FALSE, nthreads = 8, init = NULL,
maxit = NULL, tol = 1e-6){
### converting family, link, and method to lower
family <- tolower(family)
link <- tolower(link)
method <- tolower(method)
### Getting method
if(length(method) != 1 || !is.character(method)){
stop("method must be exactly one of 'Fisher', 'BFGS', or 'LBFGS'")
}else if(method == "fisher"){
method <- "Fisher"
}else if(method == "bfgs"){
method <- "BFGS"
}else if(method == "lbfgs"){
method <- "LBFGS"
}else{
stop("method must be exactly one of 'Fisher', 'BFGS', or 'LBFGS'")
}
## Performing a few checks
if(!is.matrix(x) || !is.numeric(x)){
stop("x must be a numeric matrix")
}else if(!is.numeric(y)){
stop("y must be numeric")
}else if(nrow(x) != length(y)){
stop("the number of rows in x must be the same as the length of y")
}else if(nrow(x) == 0){
stop("design matrix x has no rows and y has a length of 0")
}
## Checking grads and tol
if(length(grads) != 1 || !is.numeric(grads) || as.integer(grads) <= 0){
stop("grads must be a positive integer")
}
if(length(tol) != 1 || !is.numeric(tol) || tol <= 0){
stop("tol must be a positive number")
}
## Getting maxit
if(is.null(maxit)){
if(method == "Fisher"){
maxit <- 50
}else{
maxit <- 200
}
}else if(length(maxit) != 1 || !is.numeric(maxit) || maxit <= 0){
stop("maxit must be a positive integer")
}
## Getting initial values
if(is.null(init)){
init <- rep(0, ncol(x))
GetInit <- TRUE
}else if(!is.numeric(init) || length(init) != ncol(x)){
stop("init must be null or a numeric vector with length equal to the number of betas")
}else if(any(is.infinite(init)) || any(is.na(init))){
stop("init must not contain any infinite values, NAs, or NaNs")
}else{
GetInit <- FALSE
}
## Checking y variable and link function for each family
if(family == "binomial"){
if(!(link %in% c("cloglog", "log", "logit", "probit"))){
stop("valid link functions for binomial regression are 'cloglog', 'log', 'logit', and 'probit'")
}else if(!all(y %in% c(0, 1))){
stop("for binomial regression y must be a vector of 0s and 1s")
}
}else if(family == "poisson"){
if(!(link %in% c("identity", "log", "sqrt"))){
stop("valid link functions for poisson regression are 'identity', 'log', and 'sqrt'")
}else if(!is.numeric(y) || any(y < 0)){
stop("response variable for poisson regression must be a numeric vector of non-negative integers")
}else if(any(as.integer(y) != y)){
stop("response variable for poisson regression must be a numeric vector of non-negative integers")
}
}else if(family == "gaussian"){
if(!(link %in% c("inverse", "identity", "log", "sqrt"))){
stop("valid link functions for gaussian regression are 'identity', 'inverse', 'log', and 'sqrt'")
}else if(!is.numeric(y)){
stop("response variable for gaussian regression must be numeric")
}else if(link == "log" && any(y <= 0)){
stop("gaussian regression with log link must have positive response values")
}else if(link == "inverse" && any(y == 0)){
stop("gaussian regression with inverse link must have non-zero response values")
}else if(link == "sqrt" && any(y < 0)){
stop("gaussian regression with sqrt link must have non-negative response values")
}
}else if(family == "gamma"){
if(!(link %in% c("inverse", "identity", "log", "sqrt"))){
stop("valid link functions for gamma regression are 'identity', 'inverse', 'log', and 'sqrt'")
}else if(!is.numeric(y) || any(y <= 0)){
stop("response variable for gamma regression must be positive")
}
}else{
stop("the supplied family is not supported")
}
## Getting offset
if(is.null(offset)){
offset <- rep(0, length(y))
}else if(length(offset) != length(y)){
stop("offset must be the same length as y")
}else if(!is.numeric(offset)){
stop("offset must be a numeric vector")
}else if(any(is.infinite(offset)) || any(is.na(offset))){
stop("offset must not contain any infinite values, NAs, or NaNs")
}
if(length(nthreads) != 1 || !is.numeric(nthreads) || is.na(nthreads) || nthreads <= 0){
stop("nthreads must be a positive integer")
}
if(length(parallel) != 1 || !is.logical(parallel) || is.na(parallel)){
stop("parallel must be either TRUE or FALSE")
}else if(parallel){
df <- BranchGLMfit(x, y, offset, init, method, grads, link, family, nthreads,
tol, maxit, GetInit)
}else{
df <- BranchGLMfit(x, y, offset, init, method, grads, link, family, 1, tol, maxit,
GetInit)
}
df$tol <- tol
df$maxit <- maxit
return(df)
}
#' Extract Model Formula from BranchGLM Objects
#' @description Extracts model formula from BranchGLM objects.
#' @param x a `BranchGLM` object.
#' @param ... further arguments passed to or from other methods.
#' @return a formula representing the model used to obtain `object`.
#' @export
formula.BranchGLM <- function(x, ...){
return(x$formula)
}
#' Extract Number of Observations from BranchGLM Objects
#' @description Extracts number of observations from BranchGLM objects.
#' @param object a `BranchGLM` object.
#' @param ... further arguments passed to or from other methods.
#' @return A single number indicating the number of observations used to fit the model.
#' @export
nobs.BranchGLM <- function(object, ...){
return(object$numobs)
}
#' Extract Log-Likelihood from BranchGLM Objects
#' @description Extracts log-likelihood from BranchGLM objects.
#' @param object a `BranchGLM` object.
#' @param ... further arguments passed to or from other methods.
#' @return An object of class `logLik` which is a number corresponding to
#' the log-likelihood with the following attributes: "df" (degrees of freedom)
#' and "nobs" (number of observations).
#' @export
logLik.BranchGLM <- function(object, ...){
df <- length(coef(object))
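# gaussian and gamma fits also estimate a dispersion parameter,
# so one extra degree of freedom is counted for those families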
if(object$family == "gaussian" || object$family == "gamma"){
df <- df + 1
}
val <- object$logLik
attr(val, "nobs") <- nobs(object)
attr(val, "df") <- df
class(val) <- "logLik"
return(val)
}
#' Extract covariance matrix from BranchGLM Objects
#' @description Extracts covariance matrix of beta coefficients from BranchGLM objects.
#' @param object a `BranchGLM` object.
#' @param ... further arguments passed to or from other methods.
#' @return A numeric matrix which is the covariance matrix of the beta coefficients.
#' @export
vcov.BranchGLM <- function(object, ...){
return(object$vcov)
}
#' Extract Coefficients from BranchGLM Objects
#' @description Extracts beta coefficients from BranchGLM objects.
#' @param object a `BranchGLM` object.
#' @param ... further arguments passed to or from other methods.
#' @return A named vector with the corresponding coefficient estimates.
#' @export
coef.BranchGLM <- function(object, ...){
coefs <- object$coefficients[,1]
names(coefs) <- row.names(object$coefficients)
return(coefs)
}
#' Predict Method for BranchGLM Objects
#' @description Obtains predictions from `BranchGLM` objects.
#' @param object a `BranchGLM` object.
#' @param newdata a data.frame, if not specified then the data the model was fit on is used.
#' @param offset a numeric vector containing the offset variable; this is ignored if
#' newdata is not supplied.
#' @param type one of "linpreds" which is on the scale of the linear predictors or
#' "response" which is on the scale of the response variable. If not specified,
#' then "response" is used.
#' @param na.action a function which indicates what should happen when the data
#' contains NAs. The default is `na.pass`. This is ignored if newdata is not
#' supplied and data isn't included in the supplied `BranchGLM` object.
#' @param ... further arguments passed to or from other methods.
#' @return A numeric vector of predictions.
#' @examples
#' Data <- airquality
#'
#' # Example without offset
#' Fit <- BranchGLM(Temp ~ ., data = Data, family = "gaussian", link = "identity")
#'
#' ## Using default na.action
#' predict(Fit)
#'
#' ## Using na.omit
#' predict(Fit, na.action = na.omit)
#'
#' ## Using new data
#' predict(Fit, newdata = Data[1:20, ], na.action = na.pass)
#'
#' # Using offset
#' FitOffset <- BranchGLM(Temp ~ . - Day, data = Data, family = "gaussian",
#' link = "identity", offset = Data$Day * -0.1)
#'
#' ## Getting predictions for data used to fit model
#' ### Don't need to supply offset vector
#' predict(FitOffset)
#'
#' ## Getting predictions for new dataset
#' ### Need to include new offset vector since we are
#' ### getting predictions for new dataset
#' predict(FitOffset, newdata = Data[1:20, ], offset = Data$Day[1:20] * -0.1)
#'
#' @export
predict.BranchGLM <- function(object, newdata = NULL, offset = NULL,
type = "response", na.action = na.pass, ...){
if(!is.null(newdata) && !is(newdata, "data.frame")){
stop("newdata argument must be a data.frame or NULL")
}
if(length(type) != 1){
stop("type must have a length of 1")
}else if(!(type %in% c("linpreds", "response"))){
stop("type argument must be either 'linpreds' or 'response'")
}
if(is.null(newdata) && !is.null(object$data)){
newdata <- object$data
offset <- object$fulloffset
}else if(is.null(newdata) && is.null(object$data)){
if(type == "linpreds"){
linpreds <- object$linpreds
names(linpreds) <- rownames(object$x)
return(linpreds)
}else if(type == "response"){
preds <- object$preds
names(preds) <- rownames(object$x)
return(preds)
}
}
# Changing environment for formula and offset since we need them to be the same
if(is.null(offset)){
if(!is.null(newdata) && !is.null(object$fulloffset) && any(object$fulloffset != 0)){
warning("offset should be supplied for new dataset")
}
offset2 <- rep(0, nrow(newdata))
}else{
offset2 <- offset
}
environment(offset2) <- environment()
# Getting mf
myterms <- delete.response(terms(object))
environment(myterms) <- environment()
m <- model.frame(myterms, data = newdata, na.action = na.action,
xlev = object$xlev, offset = offset2)
# Getting offset and x
offset <- model.offset(m)
environment(offset) <- NULL
x <- model.matrix(myterms, m, contrasts = object$contrasts)
if(ncol(x) != length(coef(object))){
stop("could not find all predictor variables in newdata")
}else if(tolower(type) == "linpreds"){
preds <- drop(x %*% coef(object) + offset) |> unname()
names(preds) <- rownames(x)
return(preds)
}else if(tolower(type) == "response"){
preds <- GetPreds(drop(x %*% coef(object) + offset) |> unname(), object$link)
names(preds) <- rownames(x)
return(preds)
}
}
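# Illustrative sketch (kept in an unevaluated block so nothing runs at package
# load): response-scale predictions are the inverse link applied to the linear
# predictors, which is what the internal GetPreds() helper below computes.
# `Fit` is a hypothetical BranchGLM object used only for demonstration.
if(FALSE){
  linpreds <- predict(Fit, type = "linpreds")
  response <- predict(Fit, type = "response")
  all.equal(response, GetPreds(linpreds, Fit$link))
}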
#' Get Predictions
#' @param linpreds numeric vector of linear predictors.
#' @param Link the specified link function.
#' @noRd
GetPreds <- function(linpreds, Link){
if(Link == "log"){
exp(linpreds)
}
else if(Link == "logit"){
1 / (1 + exp(-linpreds))
}
else if(Link == "probit"){
pnorm(linpreds)
}
else if(Link == "cloglog"){
1 - exp(-exp(linpreds))
}
else if(Link == "inverse"){
1 / (linpreds)
}
else if(Link == "identity"){
linpreds
}
else{
linpreds^2
}
}
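# Illustrative sketch (kept in an unevaluated block so nothing runs at package
# load): GetPreds() applies the inverse link function, so for the logit link it
# matches plogis() and for the log link it matches exp(). The inputs below are
# assumptions for demonstration, not package tests.
if(FALSE){
  linpreds <- c(-1, 0, 1)
  GetPreds(linpreds, "logit")    # same as plogis(linpreds)
  GetPreds(linpreds, "log")      # same as exp(linpreds)
  GetPreds(linpreds, "identity") # unchanged
}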
#' Print Method for BranchGLM Objects
#' @description Print method for `BranchGLM` objects.
#' @param x a `BranchGLM` object.
#' @param coefdigits number of digits to display for coefficients table.
#' @param digits number of digits to display for information after table.
#' @param ... further arguments passed to or from other methods.
#' @return The supplied `BranchGLM` object.
#' @export
print.BranchGLM <- function(x, coefdigits = 4, digits = 2, ...){
if(length(coefdigits) != 1 || !is.numeric(coefdigits) || coefdigits < 0){
stop("coefdigits must be a non-negative number")
}
if(length(digits) != 1 || !is.numeric(digits) || digits < 0){
stop("digits must be a non-negative number")
}
cat(paste0("Results from ", x$family, " regression with ", x$link,
" link function \nUsing the formula ", deparse1(x$formula), "\n\n"))
printCoefmat(signif(x$coefficients, digits = coefdigits), signif.stars = TRUE, P.values = TRUE,
has.Pvalue = TRUE)
cat(paste0("\nDispersion parameter taken to be ", round(x$dispersion, coefdigits)))
cat(paste0("\n", x$numobs, " observations used to fit model\n(", x$missing,
" observations removed due to missingness)\n"))
cat(paste0("\nResidual Deviance: ", round(x$resDev, digits = digits), " on ",
x$numobs - nrow(x$coefficients), " degrees of freedom"))
cat(paste0("\nAIC: ", round(x$AIC, digits = digits)))
if(x$family != "gaussian" || x$link != "identity"){
if(x$method == "Fisher"){
method = "Fisher's scoring"
}else if(x$method == "LBFGS"){
method = "L-BFGS"
}else{method = "BFGS"}
if(x$iterations == 1){
cat(paste0("\nAlgorithm converged in 1 iteration using ", method, "\n"))
}else if(x$iterations > 1 || x$iterations == 0){
cat(paste0("\nAlgorithm converged in ", x$iterations, " iterations using ", method, "\n"))
}else{
cat("\nAlgorithm failed to converge\n")
}
}else{
cat("\n")
}
if(x$parallel){
cat("Parallel computation was used to speed up model fitting process")
}
invisible(x)
}
# ---- end of BranchGLM/R/BranchGLM.R ----
#' Likelihood Ratio Confidence Intervals for Beta Coefficients for BranchGLM Objects
#' @description Finds profile likelihood ratio confidence intervals for beta
#' coefficients with the ability to calculate the intervals in parallel.
#' @param object a `BranchGLM` object.
#' @param parm a specification of which parameters are to be given confidence intervals,
#' either a vector of numbers or a vector of names. If missing, all parameters are considered.
#' @param level the confidence level required.
#' @param parallel a logical value to indicate if parallelization should be used.
#' @param nthreads a positive integer to denote the number of threads used with OpenMP,
#' only used if `parallel = TRUE`.
#' @param ... further arguments passed from other methods.
#' @seealso [plot.BranchGLMCIs], [plotCI]
#' @return An object of class `BranchGLMCIs` which is a list with the following components.
#' \item{`CIs`}{ a numeric matrix with the confidence intervals}
#' \item{`level`}{ the supplied level}
#' \item{`MLE`}{ a numeric vector of the MLEs of the coefficients}
#' @details Endpoints of the confidence intervals that couldn't be found by the algorithm
#' are filled in with NA. When there is a lot of multicollinearity in the data
#' the algorithm may have problems finding many of the intervals.
#' @examples
#' Data <- iris
#' ### Fitting linear regression model
#' mymodel <- BranchGLM(Sepal.Length ~ ., data = Data, family = "gaussian", link = "identity")
#'
#' ### Getting confidence intervals
#' CIs <- confint(mymodel, level = 0.95)
#' CIs
#'
#' ### Plotting CIs
#' plot(CIs, mary = 7, cex.y = 0.9)
#'
#' @export
confint.BranchGLM <- function(object, parm, level = 0.95,
parallel = FALSE, nthreads = 8, ...){
# Using parm
if(missing(parm)){
parm <- 1:ncol(object$x)
}else if(is.character(parm)){
parm <- match(parm, colnames(object$x), nomatch = 0L)
if(length(parm) == 1 && parm == 0L){
stop("no parameters specified in parm were found")
}
}else if(any(parm > ncol(object$x))){
stop("numbers in parm must be less than or equal to the number of parameters")
}
# Checking level
if(length(level) != 1 || !is.numeric(level) || level >= 1 || level <= 0){
stop("level must be a number between 0 and 1")
}
# Checking nthreads and parallel
if(length(nthreads) != 1 || !is.numeric(nthreads) || is.na(nthreads) || nthreads <= 0){
stop("nthreads must be a positive integer")
}
if(length(parallel) != 1 || !is.logical(parallel) || is.na(parallel)){
stop("parallel must be either TRUE or FALSE")
}
if(!parallel){
nthreads <- 1
}
# Getting SEs to use in constructing initial values for the CIs
a <- (1 - level) / 2
coefs <- coef(object)
SEs <- qnorm(1 - a) * sqrt(diag(object$vcov))
# Getting LR CIs
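# For gaussian and gamma models the AIC includes a penalty for the estimated
# dispersion parameter; subtracting 2 here appears to remove that extra
# penalty so the profile-likelihood cutoff is comparable across families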
if(object$family == "gaussian" || object$family == "gamma"){
object$AIC <- object$AIC - 2
}
metrics <- rep(object$AIC, ncol(object$x))
model <- matrix(rep(-1, ncol(object$x)), ncol = 1)
model[parm] <- 1
res <- MetricIntervalCpp(object$x, object$y, object$offset,
1:ncol(object$x) - 1, rep(1, ncol(object$x)), model,
object$method, object$grads, object$link, object$family,
nthreads, object$tol, object$maxit, rep(2, ncol(object$x)),
coefs, SEs,
metrics, qchisq(level, 1), object$AIC,"ITP")
# Replacing infinities with NA
res$LowerBounds <- ifelse(is.finite(res$LowerBounds), res$LowerBounds, NA)
res$UpperBounds <- ifelse(is.finite(res$UpperBounds), res$UpperBounds, NA)
# Getting CIs in right format
CIs <- cbind(res$LowerBounds, res$UpperBounds)
rownames(CIs) <- colnames(object$x)
colnames(CIs) <- c(paste0(round(a, 3) * 100, "%"), paste0(round(1 - a, 3) * 100, "%"))
return(structure(list("CIs" = CIs[parm, , drop = FALSE], "level" = level, "MLE" = coefs[parm]),
class = "BranchGLMCIs"))
}
#' Print Method for BranchGLMCIs Objects
#' @description Print method for BranchGLMCIs objects.
#' @param x a `BranchGLMCIs` object.
#' @param digits number of significant digits to display.
#' @param ... further arguments passed from other methods.
#' @return The supplied `BranchGLMCIs` object.
#' @export
print.BranchGLMCIs <- function(x, digits = 4, ...){
print(signif(x$CIs, digits = digits))
invisible(x)
}
#' Plot Method for BranchGLMCIs Objects
#' @description Creates a plot to visualize confidence intervals from BranchGLMCIs objects.
#' @param x a `BranchGLMCIs` object.
#' @param which which intervals to plot, can use a numeric vector of indices, a
#' character vector of names of desired variables, or "all" to plot all intervals.
#' @param mary a numeric value used to determine how large to make the margin of the y-axis. If variable
#' names are cut off, consider increasing this from the default value of 5.
#' @param ... further arguments passed to [plotCI].
#' @seealso [plotCI]
#' @return This only produces a plot, nothing is returned.
#' @examples
#' Data <- iris
#' ### Fitting linear regression model
#' mymodel <- BranchGLM(Sepal.Length ~ ., data = Data, family = "gaussian", link = "identity")
#'
#' ### Getting confidence intervals
#' CIs <- confint(mymodel, level = 0.95)
#' CIs
#'
#' ### Plotting CIs
#' plot(CIs, mary = 7, cex.y = 0.9)
#'
#' @export
plot.BranchGLMCIs <- function(x, which = "all", mary = 5, ...){
# Using which
if(is.character(which) && length(which) == 1 && tolower(which) == "all"){
which <- 1:length(x$MLE)
}
x$CIs <- x$CIs[which, , drop = FALSE]
x$MLE <- x$MLE[which]
# Getting xlimits
xlim <- c(min(min(x$CIs, na.rm = TRUE), min(x$MLE, na.rm = TRUE)),
max(max(x$CIs, na.rm = TRUE), max(x$MLE, na.rm = TRUE)))
# Setting margins
oldmar <- par("mar")
on.exit(par(mar = oldmar))
par(mar = c(5, mary, 3, 1) + 0.1)
# Plotting CIs
plotCI(x$CIs, x$MLE,
main = paste0(round(x$level * 100, 1), "% Likelihood Ratio CIs"),
xlab = "Beta Coefficients",
xlim = xlim, ...)
abline(v = 0, xpd = FALSE)
}
#' Plot Confidence Intervals
#' @description Creates a plot to display confidence intervals.
#' @param CIs a numeric matrix of confidence intervals, must have exactly 2 columns.
#' The variable names displayed in the plot are taken from the column names.
#' @param points points to be plotted in the middle of the CIs, typically means or medians.
#' The default is to plot the midpoints of the intervals.
#' @param ylab a label for the y-axis.
#' @param ylas the style of the y-axis label, see more about this at `las` in [par].
#' @param cex.y font size used for variable names on y-axis.
#' @param decreasing a logical value indicating if confidence intervals should be
#' displayed in decreasing or increasing order according to points. Can use NA
#' if no ordering is desired.
#' @param ... further arguments passed to [plot.default].
#' @return This only produces a plot, nothing is returned.
#' @examples
#' Data <- iris
#' ### Fitting linear regression model
#' mymodel <- BranchGLM(Sepal.Length ~ ., data = Data, family = "gaussian", link = "identity")
#'
#' ### Getting confidence intervals
#' CIs <- confint.default(mymodel, level = 0.95)
#' xlim <- c(min(CIs), max(CIs))
#'
#' ### Plotting CIs
#' par(mar = c(5, 7, 3, 1) + 0.1)
#' plotCI(CIs, main = "95% Confidence Intervals", xlim = xlim, cex.y = 0.9,
#' xlab = "Beta Coefficients")
#' abline(v = 0)
#'
#' @export
plotCI <- function(CIs, points = NULL, ylab = "", ylas = 2, cex.y = 1,
decreasing = FALSE, ...){
# Getting points
if(is.null(points)){
points <- apply(CIs, 1, mean)
}
# Getting CIs in right format
if(!is.matrix(CIs) || (ncol(CIs) != 2) || !is.numeric(CIs)){
stop("CIs must be a numeric matrix with exactly 2 columns")
}else if(nrow(CIs) != length(points)){
stop("the number of rows in CIs must be the same as the length of points")
}
CIs <- t(CIs)
# Getting order of points
if(!is.na(decreasing)){
ind <- order(points, decreasing = decreasing)
}else{
ind <- 1:length(points)
}
quants <- CIs[, ind, drop = FALSE]
points <- points[ind]
# Creating plot
## Creating base layer of plot
plot(points, 1:length(points), ylim = c(0, ncol(quants) + 1), ylab = ylab,
yaxt = "n", ...)
## Creating confidence intervals
segments(y0 = 1:ncol(quants), x0 = quants[1, ], x1 = quants[2, ])
segments(y0 = 1:ncol(quants) - 0.25, x0 = quants[1, ],
y1 = 1:ncol(quants) + 0.25)
segments(y0 = 1:ncol(quants) - 0.25, x0 = quants[2, ],
y1 = 1:ncol(quants) + 0.25)
## Adding axis labels for y-axis
axis(2, at = 1:ncol(quants), labels = colnames(quants), las = ylas,
cex.axis = cex.y)
}
# ---- end of BranchGLM/R/BranchGLMCIs.R ----
#' Confusion Matrix
#' @description Creates a confusion matrix and calculates related measures.
#' @param object a `BranchGLM` object or a numeric vector.
#' @param ... further arguments passed to other methods.
#' @param y observed values, can be a numeric vector of 0s and 1s, a two-level factor vector, or
#' a logical vector.
#' @param cutoff cutoff for predicted values, the default is 0.5.
#' @name Table
#' @return A `BranchGLMTable` object which is a list with the following components
#' \item{`table`}{ a matrix corresponding to the confusion matrix}
#' \item{`accuracy`}{ a number corresponding to the accuracy}
#' \item{`sensitivity`}{ a number corresponding to the sensitivity}
#' \item{`specificity`}{ a number corresponding to the specificity}
#' \item{`PPV`}{ a number corresponding to the positive predictive value}
#' \item{`levels`}{ a vector corresponding to the levels of the response variable}
#' @examples
#' Data <- ToothGrowth
#' Fit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
#' Table(Fit)
#' @export
Table <- function(object, ...) {
UseMethod("Table")
}
#' @rdname Table
#' @export
Table.numeric <- function(object, y, cutoff = .5, ...){
## Checking y and object
if((!is.numeric(y)) && (!is.factor(y)) && (!is.logical(y))){
stop("y must be a numeric, two-level factor, or logical vector")
}else if(length(y) != length(object)){
stop("Length of y must be the same as the length of object")
}else if((any(object > 1) || any(object < 0))){
stop("object must be between 0 and 1")
}else if(any(is.na(object)) || any(is.na(y))){
stop("object and y must not have any missing values")
}else if(is.factor(y) && nlevels(y) != 2){
stop("If y is a factor vector it must have exactly two levels")
}else if(is.numeric(y) && any((y != 1) & (y != 0))){
stop("If y is numeric it must only contain 0s and 1s.")
}
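# MakeTable() returns a 2x2 confusion matrix with observed classes in the rows
# and predicted classes in the columns, so with TP = Table[2, 2],
# TN = Table[1, 1], FP = Table[1, 2], and FN = Table[2, 1] the measures below
# are accuracy = (TP + TN) / n, sensitivity = TP / (TP + FN),
# specificity = TN / (TN + FP), and PPV = TP / (TP + FP)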
if(is.numeric(y)){
Table <- MakeTable(object, y, cutoff)
List <- list("table" = Table,
"accuracy" = (Table[1, 1] + Table[2, 2]) / (sum(Table)),
"sensitivity" = Table[2, 2] / (Table[2, 2] + Table[2, 1]),
"specificity" = Table[1, 1] / (Table[1, 1] + Table[1, 2]),
"PPV" = Table[2, 2] / (Table[2, 2] + Table[1, 2]),
"levels" = c(0, 1))
}else if(is.factor(y)){
Table <- MakeTableFactor2(object, as.character(y), levels(y), cutoff)
List <- list("table" = Table,
"accuracy" = (Table[1, 1] + Table[2, 2]) / (sum(Table)),
"sensitivity" = Table[2, 2] / (Table[2, 2] + Table[2, 1]),
"specificity" = Table[1, 1] / (Table[1, 1] + Table[1, 2]),
"PPV" = Table[2, 2] / (Table[2, 2] + Table[1, 2]),
"levels" = levels(y))
}else{
Table <- MakeTable(object, y * 1, cutoff)
List <- list("table" = Table,
"accuracy" = (Table[1, 1] + Table[2, 2]) / (sum(Table)),
"sensitivity" = Table[2, 2] / (Table[2, 2] + Table[2, 1]),
"specificity" = Table[1, 1] / (Table[1, 1] + Table[1, 2]),
"PPV" = Table[2, 2] / (Table[2, 2] + Table[1, 2]),
"levels" = c(FALSE, TRUE))
}
return(structure(List, class = "BranchGLMTable"))
}
#' @rdname Table
#' @export
Table.BranchGLM <- function(object, cutoff = .5, ...){
if(is.null(object$y)){
stop("supplied BranchGLM object must have a y component")
}
if(object$family != "binomial"){
stop("This method is only valid for BranchGLM models in the binomial family")
}
preds <- predict(object, type = "response")
Table <- MakeTable(preds, object$y, cutoff)
List <- list("table" = Table,
"accuracy" = (Table[1, 1] + Table[2, 2]) / (sum(Table)),
"sensitivity" = Table[2, 2] / (Table[2, 2] + Table[2, 1]),
"specificity" = Table[1, 1] / (Table[1, 1] + Table[1, 2]),
"PPV" = Table[2, 2] / (Table[2, 2] + Table[1, 2]),
"levels" = object$ylevel)
return(structure(List, class = "BranchGLMTable"))
}
#' Print Method for BranchGLMTable Objects
#' @description Print method for BranchGLMTable objects.
#' @param x a `BranchGLMTable` object.
#' @param digits number of digits to display.
#' @param ... further arguments passed to other methods.
#' @return The supplied `BranchGLMTable` object.
#' @export
print.BranchGLMTable <- function(x, digits = 4, ...){
Numbers <- apply(x$table, 2, FUN = function(x){max(nchar(x))})
Numbers <- pmax(Numbers, c(4, 4)) |>
pmax(nchar(x$levels))
LeftSpace <- 10 + max(nchar(x$levels))
cat("Confusion matrix:\n")
cat(paste0(rep("-", LeftSpace + sum(Numbers) + 2), collapse = ""))
cat("\n")
cat(paste0(paste0(rep(" ", LeftSpace + Numbers[1] - 4), collapse = ""),
"Predicted\n",
paste0(rep(" ", LeftSpace + floor((Numbers[1] - nchar(x$levels[1])) / 2)),
collapse = ""),
x$levels[1],
paste0(rep(" ", ceiling((Numbers[1] - nchar(x$levels[1])) / 2) + 1 +
floor((Numbers[2] - nchar(x$levels[2])) / 2)),
collapse = ""), x$levels[2], "\n\n",
paste0(rep(" ", 9), collapse = ""), x$levels[1],
paste0(rep(" ", 1 + max(nchar(x$levels)) - nchar(x$levels[1]) +
floor((Numbers[1] - nchar(x$table[1, 1])) / 2)),
collapse = ""),
x$table[1, 1],
paste0(rep(" ", ceiling((Numbers[1] - nchar(x$table[1, 1])) / 2) + 1 +
floor((Numbers[2] - nchar(x$table[1, 2])) / 2)),
collapse = ""),
x$table[1, 2],
"\n", "Observed\n",
paste0(rep(" ", 9), collapse = ""), x$levels[2],
paste0(rep(" ", 1 + max(nchar(x$levels)) - nchar(x$levels[2]) +
floor((Numbers[1] - nchar(x$table[2, 1])) / 2)),
collapse = ""),
x$table[2, 1],
paste0(rep(" ", ceiling((Numbers[1] - nchar(x$table[2, 1])) / 2) + 1 +
floor((Numbers[2] - nchar(x$table[2, 2])) / 2)),
collapse = ""),
x$table[2, 2], "\n\n"))
cat(paste0(rep("-", LeftSpace + sum(Numbers) + 2), collapse = ""))
cat("\n")
cat("Measures:\n")
cat(paste0(rep("-", LeftSpace + sum(Numbers) + 2), collapse = ""))
cat("\n")
cat("Accuracy: ", round(x$accuracy, digits = digits), "\n")
cat("Sensitivity: ", round(x$sensitivity, digits = digits), "\n")
cat("Specificity: ", round(x$specificity, digits = digits), "\n")
cat("PPV: ", round(x$PPV, digits = digits), "\n")
invisible(x)
}
#' Cindex/AUC
#' @param object a `BranchGLM` object, a `BranchGLMROC` object, or a numeric vector.
#' @param ... further arguments passed to other methods.
#' @param y Observed values, can be a numeric vector of 0s and 1s, a two-level
#' factor vector, or a logical vector.
#' @name Cindex
#' @return A number corresponding to the c-index/AUC.
#' @description Calculates the c-index/AUC.
#' @details Uses the trapezoidal rule to calculate the AUC when given a BranchGLMROC object and
#' uses the Mann-Whitney U statistic to calculate it otherwise. The trapezoidal rule method is less accurate,
#' so the two methods may give slightly different results.
#' @examples
#' Data <- ToothGrowth
#' Fit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
#' Cindex(Fit)
#' AUC(Fit)
#' @export
Cindex <- function(object, ...) {
UseMethod("Cindex")
}
#' @rdname Cindex
#' @export
AUC <- Cindex
#' @rdname Cindex
#' @export
Cindex.numeric <- function(object, y, ...){
if((!is.numeric(y)) && (!is.factor(y)) && (!is.logical(y))){
stop("y must be a numeric, two-level factor, or logical vector")
}else if(length(y) != length(object)){
stop("Length of y must be the same as the length of object")
}else if((any(object > 1) || any(object < 0))){
stop("object must be between 0 and 1")
}else if(any(is.na(object)) || any(is.na(y))){
stop("object and y must not have any missing values")
}else if(is.factor(y) && nlevels(y) != 2){
stop("If y is a factor vector it must have exactly two levels")
}else if(is.numeric(y) && any((y != 1) & (y != 0))){
stop("If y is numeric it must only contain 0s and 1s.")
}
if(is.numeric(y)){
cindex <- CindexU(object, y)
}
else if(is.factor(y)){
y <- (y == levels(y)[2])
cindex <- CindexU(object, y)
}
else{
y <- y * 1
cindex <- CindexU(object, y)
}
cindex
}
#' @rdname Cindex
#' @export
Cindex.BranchGLM <- function(object, ...){
if(is.null(object$y)){
stop("supplied BranchGLM object must have a y component")
}
if(object$family != "binomial"){
stop("This method is only valid for BranchGLM models in the binomial family")
}
preds <- predict(object, type = "response")
cindex <- CindexU(preds, object$y)
cindex
}
#' @rdname Cindex
#' @export
Cindex.BranchGLMROC <- function(object, ...){
cindex <- CindexTrap(object$Info$Sensitivity,
object$Info$Specificity)
cindex
}
#' Calculate AUC/c-index
#' @param preds numeric vector of predictions.
#' @param y a numeric vector of 0s and 1s.
#' @noRd
CindexU <- function(preds, y){
y1 <- which(y == 1)
Ranks <- rank(preds, ties.method = "average")
U <- sum(Ranks[y1]) - (length(y1) * (length(y1) + 1))/(2)
return(U / (length(y1) * as.double(length(y) - length(y1))))
}
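# Illustrative sketch (kept in an unevaluated block so nothing runs at package
# load): the c-index from CindexU() is the Mann-Whitney U statistic divided by
# the number of (event, non-event) pairs, i.e. the proportion of concordant
# pairs with ties counted as 1/2. The toy data below are assumptions for
# demonstration only.
if(FALSE){
  preds <- c(0.1, 0.4, 0.35, 0.8)
  y <- c(0, 0, 1, 1)
  CindexU(preds, y) # 0.75
  # direct pairwise computation for comparison
  mean(outer(preds[y == 1], preds[y == 0], ">") +
         0.5 * outer(preds[y == 1], preds[y == 0], "=="))
}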
#' ROC Curve
#' @description Creates an ROC curve.
#' @param object a `BranchGLM` object or a numeric vector.
#' @param ... further arguments passed to other methods.
#' @param y observed values, can be a numeric vector of 0s and 1s, a two-level
#' factor vector, or a logical vector.
#' @name ROC
#' @return A `BranchGLMROC` object which can be plotted with `plot()`. The AUC can also
#' be calculated using `AUC()`.
#' @examples
#' Data <- ToothGrowth
#' Fit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
#' MyROC <- ROC(Fit)
#' plot(MyROC)
#' @export
ROC <- function(object, ...) {
UseMethod("ROC")
}
#' @rdname ROC
#' @export
ROC.numeric <- function(object, y, ...){
if((!is.numeric(y)) && (!is.factor(y)) && (!is.logical(y))){
stop("y must be a numeric, two-level factor, or logical vector")
}else if(length(y) != length(object)){
stop("Length of y must be the same as the length of object")
}else if((any(object > 1) || any(object < 0))){
stop("object must be between 0 and 1")
}else if(any(is.na(object)) || any(is.na(y))){
stop("object and y must not have any missing values")
}else if(is.factor(y) && nlevels(y) != 2){
stop("If y is a factor vector it must have exactly two levels")
}else if(is.numeric(y) && any((y != 1) & (y != 0))){
stop("If y is numeric it must only contain 0s and 1s.")
}
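# Predictions are sorted below so that their unique values can serve as
# monotone cutoffs when ROCCpp() traces out the ROC curve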
if(is.numeric(y)){
SortOrder <- order(object)
object <- object[SortOrder]
ROC <- ROCCpp(object, y[SortOrder], unique(object))
}else if(is.factor(y)){
y <- (y == levels(y)[2])
SortOrder <- order(object)
object <- object[SortOrder]
ROC <- ROCCpp(object, y[SortOrder], unique(object))
}else{
y <- y * 1
SortOrder <- order(object)
object <- object[SortOrder]
ROC <- ROCCpp(object, y[SortOrder], unique(object))
}
ROC <- list("NumObs" = length(object),
"Info" = ROC)
return(structure(ROC, class = "BranchGLMROC"))
}
#' @rdname ROC
#' @export
ROC.BranchGLM <- function(object, ...){
if(is.null(object$y)){
stop("supplied BranchGLM object must have a y component")
}
if(object$family != "binomial"){
stop("This method is only valid for BranchGLM models in the binomial family")
}
preds <- predict(object, type = "response")
SortOrder <- order(preds)
preds <- preds[SortOrder]
ROC <- ROCCpp(preds, object$y[SortOrder], unique(preds))
ROC <- list("NumObs" = length(preds),
"Info" = ROC)
return(structure(ROC, class = "BranchGLMROC"))
}
#' Print Method for BranchGLMROC Objects
#' @description Print method for BranchGLMROC objects.
#' @param x a `BranchGLMROC` object.
#' @param ... further arguments passed to other methods.
#' @return The supplied `BranchGLMROC` object.
#' @export
print.BranchGLMROC <- function(x, ...){
cat(paste0("Number of observations used to make ROC curve: ",
x$NumObs, "\n\nUse plot function to make plot of ROC curve \nCan also use AUC/Cindex function to get the AUC"))
invisible(x)
}
#' Plot Method for BranchGLMROC Objects
#' @description This plots a ROC curve.
#' @param x a `BranchGLMROC` object.
#' @param xlab label for the x-axis.
#' @param ylab label for the y-axis.
#' @param type what type of plot to draw, see more details at [plot.default].
#' @param ... further arguments passed to [plot.default].
#' @return This only produces a plot, nothing is returned.
#' @examples
#' Data <- ToothGrowth
#' Fit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
#' MyROC <- ROC(Fit)
#' plot(MyROC)
#' @export
plot.BranchGLMROC <- function(x, xlab = "1 - Specificity", ylab = "Sensitivity",
type = "l", ...){
plot(1 - x$Info$Specificity, x$Info$Sensitivity, xlab = xlab, ylab = ylab,
type = type, ... )
abline(0, 1, lty = "dotted")
}
#' Plotting Multiple ROC Curves
#' @param ... any number of `BranchGLMROC` objects.
#' @param legendpos a keyword to describe where to place the legend, such as "bottomright".
#' The default is "bottomright".
#' @param title title for the plot.
#' @param colors vector of colors to be used on the ROC curves.
#' @param names vector of names used to create a legend for the ROC curves.
#' @param lty vector of linetypes used to create the ROC curves or a
#' single linetype to be used for all ROC curves.
#' @param lwd vector of linewidths used to create the ROC curves or a
#' single linewidth to be used for all ROC curves.
#' @return This only produces a plot, nothing is returned.
#' @examples
#' Data <- ToothGrowth
#'
#' ### Logistic ROC
#' LogisticFit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
#' LogisticROC <- ROC(LogisticFit)
#'
#' ### Probit ROC
#' ProbitFit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "probit")
#' ProbitROC <- ROC(ProbitFit)
#'
#' ### Cloglog ROC
#' CloglogFit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "cloglog")
#' CloglogROC <- ROC(CloglogFit)
#'
#' ### Plotting ROC curves
#'
#' MultipleROCCurves(LogisticROC, ProbitROC, CloglogROC,
#' names = c("Logistic ROC", "Probit ROC", "Cloglog ROC"))
#'
#' @export
MultipleROCCurves <- function(..., legendpos = "bottomright", title = "ROC Curves",
colors = NULL, names = NULL, lty = 1, lwd = 1){
ROCs <- list(...)
if(length(ROCs) == 0){
stop("must provide at least one ROC curve")
}
if(!all(sapply(ROCs, is, class = "BranchGLMROC"))){
stop("All arguments in ... must be BranchGLMROC objects")
}
if(is.null(colors)){
colors <- 1:length(ROCs)
}else if(length(ROCs) != length(colors)){
stop("colors must have the same length as the number of ROC curves")
}
if(length(lty) == 1){
lty <- rep(lty, length(colors))
}else if(length(ROCs) != length(lty)){
stop("lty must have the same length as the number of ROC curves or a length of 1")
}
if(length(lwd) == 1){
lwd <- rep(lwd, length(colors))
}else if(length(ROCs) != length(lwd)){
stop("lwd must have the same length as the number of ROC curves or a length of 1")
}
if(length(title) > 1){
stop("title must have a length of 1")
}else if(!(is.character(title) || is.expression(title))){
stop("title must be a character string or an expression")
}
plot(ROCs[[1]], col = colors[1], lty = lty[1], lwd = lwd[1], main = title)
if(length(ROCs) > 1){
for(i in 2:length(ROCs)){
lines(1 - ROCs[[i]]$Info$Specificity, ROCs[[i]]$Info$Sensitivity,
col = colors[i], lty = lty[i], lwd = lwd[i])
}
}
if(is.null(names)){
legend(legendpos, legend = paste0("ROC ", 1:length(colors)),
col = colors, lty = lty, lwd = lwd)
}else if(length(names) != length(colors)){
stop("names must have the same length as the number of ROC curves")
}else{
legend(legendpos, legend = names,
col = colors, lty = lty, lwd = lwd)
}
}
# ---- end of BranchGLM/R/BranchGLMTable.R ----
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
BranchAndBoundCpp <- function(x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, maxsize, pen, display_progress, NumBest, cutoff) {
.Call(`_BranchGLM_BranchAndBoundCpp`, x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, maxsize, pen, display_progress, NumBest, cutoff)
}
BackwardBranchAndBoundCpp <- function(x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, pen, display_progress, NumBest, cutoff) {
.Call(`_BranchGLM_BackwardBranchAndBoundCpp`, x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, pen, display_progress, NumBest, cutoff)
}
SwitchBranchAndBoundCpp <- function(x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, pen, display_progress, NumBest, cutoff) {
.Call(`_BranchGLM_SwitchBranchAndBoundCpp`, x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, pen, display_progress, NumBest, cutoff)
}
BranchGLMfit <- function(x, y, offset, init, method, m, Link, Dist, nthreads, tol, maxit, GetInit) {
.Call(`_BranchGLM_BranchGLMfit`, x, y, offset, init, method, m, Link, Dist, nthreads, tol, maxit, GetInit)
}
MetricIntervalCpp <- function(x, y, offset, indices, num, model, method, m, Link, Dist, nthreads, tol, maxit, pen, mle, se, best, cutoff, Metric, rootMethod) {
.Call(`_BranchGLM_MetricIntervalCpp`, x, y, offset, indices, num, model, method, m, Link, Dist, nthreads, tol, maxit, pen, mle, se, best, cutoff, Metric, rootMethod)
}
ForwardCpp <- function(x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, steps, pen) {
.Call(`_BranchGLM_ForwardCpp`, x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, steps, pen)
}
BackwardCpp <- function(x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, steps, pen) {
.Call(`_BranchGLM_BackwardCpp`, x, y, offset, indices, num, interactions, method, m, Link, Dist, nthreads, tol, maxit, keep, steps, pen)
}
MakeTable <- function(preds, y, cutoff) {
.Call(`_BranchGLM_MakeTable`, preds, y, cutoff)
}
MakeTableFactor2 <- function(preds, y, levels, cutoff) {
.Call(`_BranchGLM_MakeTableFactor2`, preds, y, levels, cutoff)
}
CindexCpp <- function(preds, y) {
.Call(`_BranchGLM_CindexCpp`, preds, y)
}
CindexTrap <- function(Sens, Spec) {
.Call(`_BranchGLM_CindexTrap`, Sens, Spec)
}
ROCCpp <- function(preds, y, Cutoffs) {
.Call(`_BranchGLM_ROCCpp`, preds, y, Cutoffs)
}
# ---- end of BranchGLM/R/RcppExports.R ----
#' Variable Selection for GLMs
#' @description Performs forward selection, backward elimination,
#' and efficient best subset variable selection with information criterion for
#' generalized linear models (GLMs). Best subset selection is performed with branch and
#' bound algorithms to greatly speed up the process.
#' @param object a formula or a `BranchGLM` object.
#' @param ... further arguments.
#' @param data a data.frame, list or environment (or object coercible by
#' [as.data.frame] to a data.frame), containing the variables in formula.
#' Neither a matrix nor an array will be accepted.
#' @param family the distribution used to model the data, one of "gaussian", "gamma",
#' "binomial", or "poisson".
#' @param link the link used to link the mean structure to the linear predictors. One of
#' "identity", "logit", "probit", "cloglog", "sqrt", "inverse", or "log".
#' @param offset the offset vector, by default the zero vector is used.
#' @param method one of "Fisher", "BFGS", or "LBFGS". Fisher's scoring is recommended
#' for forward selection and branch and bound methods since they will typically
#' fit many models with a small number of covariates.
#' @param type one of "forward", "backward", "branch and bound", "backward branch and bound",
#' or "switch branch and bound" to indicate the type of variable selection to perform.
#' The default value is "switch branch and bound". The branch and bound algorithms are guaranteed to
#' find the best models according to the metric while "forward" and "backward" are
#' heuristic approaches that may not find the optimal model.
#' @param metric the metric used to choose the best models, the default is "AIC",
#' but "BIC" and "HQIC" are also available. AIC is the Akaike information criterion,
#' BIC is the Bayesian information criterion, and HQIC is the Hannan-Quinn information
#' criterion.
#' @param bestmodels a positive integer to indicate the number of the best models to
#' find according to the chosen metric or NULL. If this is NULL, then cutoff is
#' used instead. This is only used for the branch and bound methods.
#' @param cutoff a non-negative number which indicates that the function
#' should return all models that have a metric value within cutoff of the
#' best metric value or NULL. Only one of this or bestmodels should be specified and
#' when both are NULL a cutoff of 0 is used. This is only used for the branch
#' and bound methods.
#' @param keep a character vector of names to denote variables that must be in the models.
#' @param keepintercept a logical value to indicate whether to keep the intercept in
#' all models, only used if an intercept is included in the formula.
#' @param maxsize a positive integer to denote the maximum number of variables to
#' consider in a single model, the default is the total number of variables.
#' This number adds onto any variables specified in keep. This argument only works
#' for `type = "forward"` and `type = "branch and bound"`.
#' @param grads a positive integer to denote the number of gradients used to
#' approximate the inverse information with, only for `method = "LBFGS"`.
#' @param parallel a logical value to indicate if parallelization should be used.
#' @param nthreads a positive integer to denote the number of threads used with OpenMP,
#' only used if `parallel = TRUE`.
#' @param tol a positive number to denote the tolerance used to determine model convergence.
#' @param maxit a positive integer to denote the maximum number of iterations performed.
#' The default for Fisher's scoring is 50 and for the other methods the default is 200.
#' @param showprogress a logical value to indicate whether to show progress updates
#' for branch and bound methods.
#' @param contrasts see `contrasts.arg` of `model.matrix.default`.
#' @seealso [plot.BranchGLMVS], [coef.BranchGLMVS], [predict.BranchGLMVS],
#' [summary.BranchGLMVS]
#' @details
#'
#' The supplied formula or the formula from the fitted model is
#' treated as the upper model. The variables specified in keep, along with an
#' intercept (if included in formula and keepintercept = TRUE), form the lower model.
#' Factor variables are either kept in their entirety or entirely removed, and
#' interaction terms are properly handled. All observations that have any missing
#' values in the upper model are removed.
#'
#' ## Algorithms
#' The branch and bound method makes use of an efficient branch and bound algorithm
#' to find the optimal models. This will find the best models according to the metric and
#' can be much faster than an exhaustive search and can be made even faster with
#' parallel computation. The backward branch and bound method is very similar to
#' the branch and bound method, except it tends to be faster when the best models
#' contain most of the variables. The switch branch and bound method is a
#' combination of the two methods and is typically the fastest of the three branch and
#' bound methods.
#'
#' Fisher's scoring is recommended for branch and bound selection and forward selection.
#' L-BFGS may be faster for backward elimination, especially when there are many variables.
#'
#' @return A `BranchGLMVS` object which is a list with the following components
#' \item{`initmodel`}{ the `BranchGLM` object corresponding to the upper model}
#' \item{`numchecked`}{ number of models fit}
#' \item{`names`}{ character vector of the names of the predictor variables}
#' \item{`order`}{ the order the variables were added to the model or removed from the model; this is not included for branch and bound selection}
#' \item{`type`}{ type of variable selection employed}
#' \item{`metric`}{ metric used to select best models}
#' \item{`bestmodels`}{ numeric matrix used to describe the best models}
#' \item{`bestmetrics`}{ numeric vector with the best metrics found in the search}
#' \item{`beta`}{ numeric matrix of beta coefficients for the best models}
#' \item{`cutoff`}{ the cutoff that was used, this is set to -1 if bestmodels was used instead}
#' \item{`keep`}{ vector of which variables were kept through the selection process}
#' @name VariableSelection
#'
#' @examples
#' Data <- iris
#' Fit <- BranchGLM(Sepal.Length ~ ., data = Data, family = "gaussian",
#' link = "identity")
#'
#' # Doing branch and bound selection
#' VS <- VariableSelection(Fit, type = "branch and bound", metric = "BIC",
#' bestmodels = 10, showprogress = FALSE)
#' VS
#'
#' ## Plotting the BIC of the best models
#' plot(VS, type = "b")
#'
#' ## Getting the coefficients of the best model according to BIC
#' FinalModel <- coef(VS, which = 1)
#' FinalModel
#'
#' # Now doing it in parallel (although it isn't necessary for this dataset)
#' parVS <- VariableSelection(Fit, type = "branch and bound", parallel = TRUE,
#' metric = "BIC", bestmodels = 10, showprogress = FALSE)
#'
#' ## Getting the coefficients of the best model according to BIC
#' FinalModel <- coef(parVS, which = 1)
#' FinalModel
#'
#' # Using a formula
#' formVS <- VariableSelection(Sepal.Length ~ ., data = Data, family = "gaussian",
#' link = "identity", metric = "BIC", type = "branch and bound", bestmodels = 10,
#' showprogress = FALSE)
#'
#' ## Getting the coefficients of the best model according to BIC
#' FinalModel <- coef(formVS, which = 1)
#' FinalModel
#'
#' # Using the keep argument
#' keepVS <- VariableSelection(Fit, type = "branch and bound",
#' keep = c("Species", "Petal.Width"), metric = "BIC", bestmodels = 4,
#' showprogress = FALSE)
#' keepVS
#'
#' ## Getting the coefficients from the fourth best model according to BIC when
#' ## keeping Petal.Width and Species in every model
#' FinalModel <- coef(keepVS, which = 4)
#' FinalModel
#'
#' # Treating categorical variable beta parameters separately
#' ## This function automatically groups together parameters from a categorical variable.
#' ## To avoid this, you need to create the indicator variables yourself
#' x <- model.matrix(Sepal.Length ~ ., data = iris)
#' Sepal.Length <- iris$Sepal.Length
#' Data <- cbind.data.frame(Sepal.Length, x[, -1])
#' VSCat <- VariableSelection(Sepal.Length ~ ., data = Data, family = "gaussian",
#' link = "identity", metric = "BIC", bestmodels = 10, showprogress = FALSE)
#' VSCat
#'
#' ## Plotting results
#' plot(VSCat, cex.names = 0.75)
#'
#' @export
#'
VariableSelection <- function(object, ...) {
UseMethod("VariableSelection")
}
#'@rdname VariableSelection
#'@export
VariableSelection.formula <- function(object, data, family, link, offset = NULL,
method = "Fisher", type = "switch branch and bound",
metric = "AIC",
bestmodels = NULL, cutoff = NULL,
keep = NULL, keepintercept = TRUE, maxsize = NULL,
grads = 10, parallel = FALSE,
nthreads = 8, tol = 1e-6, maxit = NULL,
contrasts = NULL,
showprogress = TRUE, ...){
### Performing variable selection
### model.frame searches for offset in the environment the formula is in, so
### we need to change the environment of the formula to be the current environment
formula <- object
environment(formula) <- environment()
fit <- BranchGLM(formula, data = data, family = family, link = link,
offset = offset, method = method, grads = grads,
tol = tol, maxit = maxit, contrasts = contrasts,
fit = FALSE)
VariableSelection(fit, type = type, metric = metric,
bestmodels = bestmodels, cutoff = cutoff,
keep = keep, keepintercept = keepintercept,
maxsize = maxsize, parallel = parallel,
nthreads = nthreads,
showprogress = showprogress, ...)
}
#'@rdname VariableSelection
#'@export
VariableSelection.BranchGLM <- function(object, type = "switch branch and bound",
metric = "AIC",
bestmodels = NULL, cutoff = NULL,
keep = NULL, keepintercept = TRUE, maxsize = NULL,
parallel = FALSE, nthreads = 8,
showprogress = TRUE, ...){
## converting metric to upper and type to lower
type <- tolower(type)
metric <- toupper(metric)
## Checking if supplied BranchGLM object has x and data
if(is.null(object$x)){
stop("the supplied model must have an x component")
}else if(nrow(object$x) == 0){
stop("the design matrix in object has 0 rows")
}
## Checking if supplied BranchGLM object has y
if(is.null(object$y)){
stop("the supplied model must have a y component")
}else if(length(object$y) == 0){
stop("the y component in object has 0 rows")
}
## Validating supplied arguments
if(length(nthreads) != 1 || !is.numeric(nthreads) || is.na(nthreads) || nthreads <= 0){
stop("nthreads must be a positive integer")
}
if(length(parallel) != 1 || !is.logical(parallel) || is.na(parallel)){
stop("parallel must be either TRUE or FALSE")
}
### Checking showprogress
if(length(showprogress) != 1 || !is.logical(showprogress)){
stop("showprogress must be a logical value")
}
### Checking metric
if(length(metric) != 1 || !is.character(metric) || !(metric %in% c("AIC", "BIC", "HQIC"))){
stop("metric must be one of 'AIC', 'BIC', or 'HQIC'")
}
### Checking type
if(length(type) != 1 || !is.character(type)){
stop("type must be one of 'forward', 'backward', 'branch and bound', 'backward branch and bound', or 'switch branch and bound'")
}
### Checking bestmodels
if(is.null(bestmodels)){
}else if(length(bestmodels) != 1 || !is.numeric(bestmodels) ||
bestmodels <= 0 || bestmodels != as.integer(bestmodels)){
stop("bestmodels must be a positive integer")
}else if(!is.null(cutoff) && !is.null(bestmodels)){
stop("only one of bestmodels or cutoff can be specified")
}
### Checking cutoff
if(is.null(cutoff)){
}else if(length(cutoff) != 1 || !is.numeric(cutoff) || cutoff < 0){
stop("cutoff must be a non-negative number")
}
if(is.null(cutoff)){
if(is.null(bestmodels)){
cutoff <- 0
bestmodels <- 1
}else{
cutoff <- -1
}
}else if(is.null(bestmodels)){
bestmodels <- 1
}
indices <- attr(object$x, "assign")
counts <- table(indices)
interactions <- attr(object$terms, "factors")[-1L, ]
## Removing rows with all zeros
if(is.matrix(interactions)){
interactions <- interactions[apply(interactions, 1, function(x){sum(x) > 0}),]
}else{
### This only happens when only 1 variable is included
interactions <- matrix(1, nrow = 1, ncol = 1)
}
## Checking for intercept
if(colnames(object$x)[1] == "(Intercept)"){
intercept <- TRUE
interactions <- rbind(0, interactions)
interactions <- cbind(0, interactions)
}else{
intercept <- FALSE
indices <- indices - 1
}
if(is.null(maxsize)){
maxsize <- length(counts)
}else if(length(maxsize) != 1 || !is.numeric(maxsize) || maxsize <= 0){
stop("maxsize must be a positive integer specifying the max size of the models")
}
## Setting starting model and saving keep1 for later use since keep is modified
### Checking keep
CurNames <- colnames(attributes(terms(object$formula, data = object$data))$factors)
if(!is.character(keep) && !is.null(keep)){
stop("keep must be a character vector or NULL")
}else if(!is.null(keep) && !all(keep %in% CurNames)){
keep <- keep[!(keep %in% CurNames)]
stop(paste0("the following elements were found in keep, but are not variable names: ",
paste0(keep, collapse = ", ")))
}
### Checking keepintercept
if(length(keepintercept) != 1 || !is.logical(keepintercept)){
stop("keepintercept must be a logical value")
}
keep1 <- keep
if(!intercept){
# Changing keepintercept to FALSE since there is no intercept
keepintercept <- FALSE
}
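# Encoding used for keep from here on: -1 marks terms forced into every model,
# 0 marks candidate terms that start excluded (forward-style searches), and
# 1 marks candidate terms that start included (backward elimination)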
if(is.null(keep) && type != "backward"){
keep <- rep(0, length(counts))
if(intercept && keepintercept){
keep[1] <- -1
}
}else if(is.null(keep) && type == "backward"){
keep <- rep(1, length(counts))
if(intercept && keepintercept){
keep[1] <- -1
}
}else{
keep <- (CurNames %in% keep) * -1
if(type == "backward"){
keep[keep == 0] <- 1
}
if(intercept && keepintercept){
keep <- c(-1, keep)
}else if(intercept){
keep <- c(0, keep)
}
}
## Checking for parallel
if(!parallel){
nthreads <- 1
}
## Getting penalties
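## Each term contributes penalty * (number of coefficients in that term), where
## the per-coefficient penalty is 2 for AIC, log(n) for BIC, and
## 2 * log(log(n)) for HQIC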
if(metric == "AIC"){
pen <- as.vector(counts) * 2
penalty <- 2
}else if(metric == "BIC"){
pen <- as.vector(counts) * log(nrow(object$x))
penalty <- log(nrow(object$x))
}else if(metric == "HQIC"){
pen <- as.vector(counts) * 2 * log(log(nrow(object$x)))
penalty <- 2 * log(log(nrow(object$x)))
}
## Performing variable selection
if(type == "forward"){
if(bestmodels > 1 || cutoff > 0){
warning("forward selection only finds 1 final model")
}
df <- ForwardCpp(object$x, object$y, object$offset, indices, counts,
interactions, object$method, object$grads, object$link,
object$family, nthreads, object$tol, object$maxit, keep,
maxsize, pen)
}else if(type == "backward"){
if(bestmodels > 1 || cutoff > 0){
warning("backward elimination only finds 1 final model")
}
df <- BackwardCpp(object$x, object$y, object$offset, indices, counts,
interactions, object$method, object$grads,
object$link, object$family, nthreads, object$tol, object$maxit,
keep, maxsize, pen)
}else if(type == "branch and bound"){
df <- BranchAndBoundCpp(object$x, object$y, object$offset, indices, counts,
interactions, object$method, object$grads,
object$link, object$family, nthreads,
object$tol, object$maxit, keep, maxsize,
pen, showprogress, bestmodels, cutoff)
}else if(type == "backward branch and bound"){
df <- BackwardBranchAndBoundCpp(object$x, object$y, object$offset, indices,
counts, interactions, object$method, object$grads,
object$link, object$family, nthreads, object$tol,
object$maxit, keep,
pen, showprogress, bestmodels, cutoff)
}else if(type == "switch branch and bound"){
df <- SwitchBranchAndBoundCpp(object$x, object$y, object$offset, indices, counts,
interactions, object$method, object$grads,
object$link, object$family, nthreads,
object$tol, object$maxit, keep,
pen, showprogress, bestmodels, cutoff)
}else{
stop("type must be one of 'forward', 'backward', 'branch and bound', 'backward branch and bound', or 'switch branch and bound'")
}
# Creating coefficient names
names <- object$names
if(intercept){
names <- c("(Intercept)", names)
}
if(type %in% c("forward", "backward")){
# Checking for infinite best metric value
if(is.infinite(df$bestmetric)){
stop("no models were found that had an invertible fisher information")
}
# Adding penalty to gaussian and gamma families
if(object$family %in% c("gaussian", "gamma")){
df$bestmetric <- df$bestmetric + penalty
}
df$order <- df$order[df$order > 0]
if(!intercept){
df$order <- df$order + 1
}
beta <- matrix(df$beta, ncol = 1)
bestmodel <- matrix(df$bestmodel, ncol = 1)
rownames(bestmodel) <- names
rownames(beta) <- colnames(object$x)
FinalList <- list("numchecked" = df$numchecked,
"order" = object$names[df$order],
"type" = type,
"metric" = metric,
"bestmodels" = bestmodel,
"bestmetrics" = df$bestmetric,
"beta" = beta,
"names" = names,
"initmodel" = object,
"cutoff" = -1,
"keep" = keep1,
"keepintercept" = keepintercept)
}else{
# Adding penalty to gaussian and gamma families
if(object$family %in% c("gaussian", "gamma")){
df$bestmetrics <- df$bestmetrics + penalty
}
# Checking for infinite best metric values
if(all(is.infinite(df$bestmetrics))){
stop("no models were found that had an invertible fisher information")
}
# Only returning best models that have a finite metric value and are not the null model
newInd <- colSums(df$bestmodels != 0) != 0
bestInd <- is.finite(df$bestmetrics)
bestInd <- (newInd + bestInd) == 2
bestmodels <- df$bestmodels[, bestInd, drop = FALSE]
# Recoding coefficient-level inclusion into term-level codes: -1 = kept, 1 = included, 0 = excluded
bestmodels <- sapply(1:length(keep), function(i){
ind <- which((indices + 1) == i)
temp <- bestmodels[ind, , drop = FALSE]
apply(temp, 2, function(x) all(x != 0) * (keep[i] + 0.5) * 2)
})
if(is.vector(bestmodels)){
bestmodels <- matrix(bestmodels, ncol = 1)
}else{
bestmodels <- t(bestmodels)
}
beta <- df$bestmodels[, bestInd, drop = FALSE]
rownames(bestmodels) <- names
rownames(beta) <- colnames(object$x)
FinalList <- list("numchecked" = df$numchecked,
"type" = type,
"metric" = metric,
"bestmodels" = bestmodels,
"bestmetrics" = df$bestmetrics[bestInd],
"beta" = beta,
"names" = names,
"initmodel" = object,
"cutoff" = cutoff,
"keep" = keep1,
"keepintercept" = keepintercept)
}
structure(FinalList, class = "BranchGLMVS")
}
#' @rdname fit
#' @export
fit.BranchGLMVS <- function(object, which = 1, keepData = TRUE, keepY = TRUE, ...){
fit(summary(object), which = which, keepData = keepData, keepY = keepY, ...)
}
#' @rdname plot.summary.BranchGLMVS
#' @export
plot.BranchGLMVS <- function(x, ptype = "both", marnames = 7, addLines = TRUE,
type = "b", horiz = FALSE,
cex.names = 1, cex.lab = 1,
cex.axis = 1, cex.legend = 1,
cols = c("deepskyblue", "indianred", "forestgreen"),
...){
plot(summary(x), ptype = ptype, marnames = marnames,
addLines = addLines, type = type, horiz = horiz,
cex.names = cex.names, cex.lab = cex.lab,
cex.axis = cex.axis, cex.legend = cex.legend,
cols = cols, ...)
}
#' Extract Coefficients from BranchGLMVS or summary.BranchGLMVS Objects
#' @description Extracts beta coefficients from BranchGLMVS or summary.BranchGLMVS objects.
#' @param object a `BranchGLMVS` or `summary.BranchGLMVS` object.
#' @param which a numeric vector of indices or "all" to indicate which models to 
#' get coefficients from; the default is the best model.
#' @param ... ignored.
#' @return A numeric matrix with the corresponding coefficient estimates.
#' @examples
#' Data <- iris
#' Fit <- BranchGLM(Sepal.Length ~ ., data = Data,
#' family = "gaussian", link = "identity")
#'
#' # Doing branch and bound selection
#' VS <- VariableSelection(Fit, type = "branch and bound", metric = "BIC",
#' bestmodels = 10, showprogress = FALSE)
#'
#' ## Getting coefficients from best model
#' coef(VS, which = 1)
#'
#' ## Getting coefficients from all best models
#' coef(VS, which = "all")
#'
#' @export
coef.BranchGLMVS <- function(object, which = 1, ...){
## Checking which
if(!is.numeric(which) && is.character(which) && length(which) == 1){
if(tolower(which) == "all"){
which <- 1:NCOL(object$bestmodels)
}
else{
stop("which must be a numeric vector or 'all'")
}
}else if(!is.numeric(which)){
stop("which must be a numeric vector or 'all'")
}else if(any(which < 1)){
stop("integers provided in which must be positive")
}else if(any(which > NCOL(object$bestmodels))){
stop("integers provided in which must be less than or equal to the number of best models")
}
## Getting coefficients from all models in which
allcoefs <- object$beta[, which, drop = FALSE]
rownames(allcoefs) <- colnames(object$initmodel$x)
## Adding column names to identify each model
colnames(allcoefs) <- paste0("Model", which)
return(allcoefs)
}
#' Predict Method for BranchGLMVS or summary.BranchGLMVS Objects
#' @description Obtains predictions from BranchGLMVS or summary.BranchGLMVS objects.
#' @param object a `BranchGLMVS` or `summary.BranchGLMVS` object.
#' @param which a positive integer to indicate which model to get predictions from;
#' the default is the best model.
#' @param ... further arguments passed to [predict.BranchGLM].
#' @seealso [predict.BranchGLM]
#' @return A numeric vector of predictions.
#' @export
#' @examples
#' Data <- iris
#' Fit <- BranchGLM(Sepal.Length ~ ., data = Data,
#' family = "gamma", link = "log")
#'
#' # Doing branch and bound selection
#' VS <- VariableSelection(Fit, type = "branch and bound", metric = "BIC",
#' bestmodels = 10, showprogress = FALSE)
#'
#' ## Getting predictions from best model
#' predict(VS, which = 1)
#'
#' ## Getting linear predictors from 5th best model
#' predict(VS, which = 5, type = "linpreds")
#'
predict.BranchGLMVS <- function(object, which = 1, ...){
## Checking which
if(!is.numeric(which) || length(which) != 1){
stop("which must be a positive integer")
}
### Getting BranchGLM object
myfit <- object$initmodel
myfit$coefficients[, 1] <- coef(object, which = which)
### Getting predictions
predict(myfit, ...)
}
#' Print Method for BranchGLMVS Objects
#' @description Print method for BranchGLMVS objects.
#' @param x a `BranchGLMVS` object.
#' @param digits number of digits to display.
#' @param ... further arguments passed to other methods.
#' @return The supplied `BranchGLMVS` object.
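#' @examples
#' # A minimal usage sketch mirroring the other examples in this file
#' Fit <- BranchGLM(Sepal.Length ~ ., data = iris,
#'                  family = "gaussian", link = "identity")
#' VS <- VariableSelection(Fit, type = "forward")
#' print(VS)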
#' @export
print.BranchGLMVS <- function(x, digits = 2, ...){
cat("Variable Selection Info:\n")
cat(paste0(rep("-", 24), collapse = ""))
cat("\n")
if(x$type != "backward"){
cat(paste0("Variables were selected using ", x$type, " selection with ", x$metric, "\n"))
}else{
cat(paste0("Variables were selected using ", x$type, " elimination with ", x$metric, "\n"))
}
if(x$cutoff >= 0){
if(length(x$bestmetrics) == 1){
      cat(paste0("Found 1 model within ", round(x$cutoff, digits), " ", x$metric, 
                 " of the best ", x$metric, " (", round(x$bestmetrics[1], digits = digits), ")\n"))
    }else{
      cat(paste0("Found ", length(x$bestmetrics), " models within ", 
                 round(x$cutoff, digits), " ", x$metric, 
                 " of the best ", x$metric, " (", round(x$bestmetrics[1], digits = digits), ")\n"))
}
}else{
if(length(x$bestmetrics) == 1){
cat(paste0("Found the top model with ", x$metric, " = ", round(x$bestmetrics[1], digits = digits), "\n"))
}else{
cat(paste0("The range of ", x$metric, " values for the top ", length(x$bestmetrics),
" models is (", round(x$bestmetrics[1], digits = digits),
", ", round(x$bestmetrics[length(x$bestmetrics)], digits = digits), ")\n"))
}
}
cat(paste0("Number of models fit: ", x$numchecked))
cat("\n")
if(!is.null(x$keep) || x$keepintercept){
temp <- x$keep
if(x$keepintercept){
temp <- c("(Intercept)", temp)
}
cat("Variables that were kept in each model: ", paste0(temp, collapse = ", "))
}
cat("\n")
if(length(x$order) == 0){
if(x$type == "forward"){
cat("No variables were added to the model")
}else if(x$type == "backward"){
cat("No variables were removed from the model")
}
}else if(x$type == "forward" ){
cat("Order the variables were added to the model:\n")
}else if(x$type == "backward" ){
cat("Order the variables were removed from the model:\n")
}
cat("\n")
if(length(x$order) > 0){
for(i in 1:length(x$order)){
cat(paste0(i, "). ", x$order[i], "\n"))
}
}
invisible(x)
}
|
/scratch/gouwar.j/cran-all/cranData/BranchGLM/R/VariableSelection.R
|
#' Summary Method for BranchGLMVS Objects
#' @description Summary method for BranchGLMVS objects.
#' @param object a `BranchGLMVS` object.
#' @param ... further arguments passed to or from other methods.
#' @seealso [plot.summary.BranchGLMVS], [coef.summary.BranchGLMVS], [predict.summary.BranchGLMVS]
#' @return An object of class `summary.BranchGLMVS` which is a list with the
#' following components
#' \item{`results`}{ a data.frame which has the metric values for the best models along
#' with the sets of variables included in each model}
#' \item{`VS`}{ the supplied `BranchGLMVS` object}
#' \item{`formulas`}{ a list containing the formulas of the best models}
#' \item{`metric`}{ the metric used to perform variable selection}
#' @examples
#'
#' Data <- iris
#' Fit <- BranchGLM(Sepal.Length ~ ., data = Data, family = "gaussian", link = "identity")
#'
#' # Doing branch and bound selection
#' VS <- VariableSelection(Fit, type = "branch and bound", metric = "BIC",
#' bestmodels = 10, showprogress = FALSE)
#' VS
#'
#' ## Getting summary of the process
#' Summ <- summary(VS)
#' Summ
#'
#' ## Plotting the BIC of the best models
#' plot(Summ, type = "b")
#'
#' ## Plotting the variables in the best models
#' plot(Summ, ptype = "variables")
#'
#' ## Getting coefficients
#' coef(Summ)
#'
#' @export
summary.BranchGLMVS <- function(object, ...){
  # Getting whether each variable is included in each model
BestModels <- t(object$bestmodels)
BestModels[BestModels == -1] <- "kept"
BestModels[BestModels == 0] <- "no"
BestModels[BestModels == 1] <- "yes"
# Creating data frame with results
df <- data.frame(BestModels, object$bestmetrics)
colnames(df) <- c(object$names, object$metric)
# Creating formulas for each model
Models <- object$bestmodels
if(!is.matrix(Models)){
# if Models is a vector, then change it to a matrix
Models <- matrix(Models, ncol = 1)
}
# Generating formulas for each of the best models
formulas <- apply(Models, 2, FUN = function(x){
tempnames <- object$names[x != 0]
tempnames <- tempnames[which(tempnames != "(Intercept)")]
if(length(tempnames) > 0){
MyFormula <- as.formula(paste0(object$initmodel$yname, " ~ ", paste0(tempnames, collapse = "+")))
if(!("(Intercept)" %in% object$names[x != 0])){
MyFormula <- deparse1(MyFormula) |>
paste0(" - 1") |>
as.formula()
}
}else{
# We can do this since we only include non-null models in bestmodels
MyFormula <- formula(paste0(object$initmodel$yname, " ~ 1"))
}
MyFormula
}
)
MyList <- list("results" = df,
"VS" = object,
"formulas" = formulas,
"metric" = object$metric)
return(structure(MyList, class = "summary.BranchGLMVS"))
}
#' Fits GLMs for summary.BranchGLMVS and BranchGLMVS Objects
#' @name fit
#' @param object a `summary.BranchGLMVS` or `BranchGLMVS` object.
#' @param which a positive integer indicating which model to fit, 
#' the default is to fit the first model.
#' @param keepData Whether or not to store a copy of the data and design matrix; the default 
#' is TRUE. If this is FALSE, then the results from this cannot be used inside of `VariableSelection`.
#' @param keepY Whether or not to store a copy of y; the default is TRUE. If 
#' this is FALSE, then the binomial GLM helper functions may not work and this 
#' cannot be used inside of `VariableSelection`.
#' @param ... further arguments passed to other methods.
#' @details The information needed to fit the GLM is taken from the original information
#' supplied to the `VariableSelection` function.
#'
#' The fitted models do not have standard errors or p-values since these are
#' biased due to the selection process.
#'
#' @return An object of class [BranchGLM].
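#' @examples
#' # A minimal usage sketch mirroring the other examples in this file; note
#' # that this method internally calls a deprecated helper, so coef() may be
#' # preferred for extracting estimates
#' Data <- iris
#' Fit <- BranchGLM(Sepal.Length ~ ., data = Data,
#'                  family = "gaussian", link = "identity")
#' VS <- VariableSelection(Fit, type = "branch and bound", metric = "BIC",
#'                         showprogress = FALSE)
#' ## Fitting the best model found during selection
#' bestFit <- fit(VS, which = 1)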
#' @export
#'
fit <- function(object, ...) {
UseMethod("fit")
}
#' @rdname fit
#' @export
fit.summary.BranchGLMVS <- function(object, which = 1, keepData = TRUE, keepY = TRUE,
...){
.Deprecated("coef")
  if(!is.numeric(which) || which < 1 || which > length(object$formulas) || 
     which != as.integer(which)){
stop("which must be a positive integer denoting the rank of the model to fit")
}
FinalModel <- BranchGLM(object$formulas[[which]], data = object$VS$initmodel$mf,
family = object$VS$initmodel$family, link = object$VS$initmodel$link,
offset = object$VS$initmodel$offset,
method = object$VS$initmodel$method,
tol = object$VS$initmodel$tol, maxit = object$VS$initmodel$maxit,
keepData = keepData, keepY = keepY)
# Removing standard errors and p-values along with vcov
FinalModel$coefficients[, 2:4] <- NA
FinalModel$vcov <- NA
FinalModel$numobs <- object$VS$initmodel$numobs
FinalModel$missing <- object$VS$initmodel$missing
return(FinalModel)
}
#' @rdname coef.BranchGLMVS
#' @export
coef.summary.BranchGLMVS <- function(object, which = 1, ...){
coef(object$VS, which = which)
}
#' @rdname predict.BranchGLMVS
#' @export
predict.summary.BranchGLMVS <- function(object, which = 1, ...){
predict(object$VS, which = which, ...)
}
#' Print Method for summary.BranchGLMVS Objects
#' @description Print method for summary.BranchGLMVS objects.
#' @param x a `summary.BranchGLMVS` object.
#' @param digits number of digits to display.
#' @param ... further arguments passed to other methods.
#' @return The supplied `summary.BranchGLMVS` object.
#' @export
print.summary.BranchGLMVS <- function(x, digits = 2, ...){
temp <- x$results
  temp[, ncol(temp)] <- round(temp[, ncol(temp)], digits = digits)
print(temp)
return(invisible(x))
}
#' Plot Method for summary.BranchGLMVS and BranchGLMVS Objects
#' @description Creates plots to help visualize variable selection results from
#' BranchGLMVS or summary.BranchGLMVS objects.
#' @param x a `summary.BranchGLMVS` or `BranchGLMVS` object.
#' @param ptype the type of plot to produce; see the details section for more explanation.
#' @param marnames a numeric value used to determine how large to make the margin of the axis with variable 
#' names; this only applies to the "variables" plot. If variable names are cut off, 
#' consider increasing this from the default value of 7.
#' @param addLines a logical value to indicate whether or not to add black lines to 
#' separate the models for the "variables" plot. This is typically useful for smaller 
#' numbers of models, but can be annoying if there are many models.
#' @param type what type of plot to draw for the "metrics" plot, see more details at [plot.default].
#' @param horiz a logical value to indicate whether models should be displayed horizontally for the "variables" plot.
#' @param cex.names how big to make variable names in the "variables" plot.
#' @param cex.lab how big to make axis labels.
#' @param cex.axis how big to make axis annotation.
#' @param cex.legend how big to make legend labels.
#' @param cols the colors used to create the "variables" plot. Should be a character
#' vector of length 3, the first color will be used for included variables,
#' the second color will be used for excluded variables, and the third color will
#' be used for kept variables.
#' @param ... further arguments passed to [plot.default] for the "metrics" plot
#' and [image.default] for the "variables" plot.
#' @details The different values for ptype are as follows
#' \itemize{
#' \item "metrics" for a plot that displays the metric values ordered by rank
#' \item "variables" for a plot that displays which variables are in each of the top models
#' \item "both" for both plots
#' }
#'
#' If there are so many models that the "variables" plot appears to be
#' entirely black, then set addLines to FALSE.
#'
#' @examples
#' Data <- iris
#' Fit <- BranchGLM(Sepal.Length ~ ., data = Data, family = "gaussian", link = "identity")
#'
#' # Doing branch and bound selection
#' VS <- VariableSelection(Fit, type = "branch and bound", metric = "BIC", bestmodels = 10,
#' showprogress = FALSE)
#' VS
#'
#' ## Getting summary of the process
#' Summ <- summary(VS)
#' Summ
#'
#' ## Plotting the BIC of best models
#' plot(Summ, type = "b", ptype = "metrics")
#'
#' ## Plotting the variables in the best models
#' plot(Summ, ptype = "variables")
#'
#' ### Alternative colors
#' plot(Summ, ptype = "variables",
#' cols = c("yellowgreen", "purple1", "grey50"))
#'
#' ### Smaller text size for names
#' plot(Summ, ptype = "variables", cex.names = 0.75)
#'
#' @return This only produces plots; nothing is returned.
#' @export
plot.summary.BranchGLMVS <- function(x, ptype = "both", marnames = 7, addLines = TRUE,
type = "b", horiz = FALSE,
cex.names = 1, cex.lab = 1,
cex.axis = 1, cex.legend = 1,
cols = c("deepskyblue", "indianred", "forestgreen"),
...){
# Converting ptype to lower
ptype <- tolower(ptype)
if(length(ptype) != 1 || !is.character(ptype)){
stop("ptype must be one of 'metrics', 'variables', or 'both'")
}else if(!ptype %in% c("metrics", "both", "variables")){
stop("ptype must be one of 'metrics', 'variables', or 'both'")
}
if(ptype %in% c("metrics", "both")){
plot(1:nrow(x$results), x$results[, ncol(x$results)],
xlab = "Rank", ylab = x$metric,
main = paste0("Best Models Ranked by ", x$metric),
type = type, cex.lab = cex.lab, cex.axis = cex.axis,
...)
}
# Checking cols
if(length(cols) != 3 || !is.character(cols)){
stop("cols must be a character vector of length 3")
}
if(ptype %in% c("variables", "both") && !horiz){
# This is inspired by the plot.regsubsets function
n <- length(x$formulas)
Names <- colnames(x$results)[-(ncol(x$results))]
z <- x$results[, -(ncol(x$results))]
z[z == "kept"] <- 2
z[z == "no"] <- 1
z[z == "yes"] <- 0
z <- apply(z, 2, as.numeric)
if(!is.matrix(z)){
z <- matrix(z, ncol = length(z))
}
y <- 1:ncol(z)
x1 <- 1:nrow(z)
# Creating image
oldmar <- par("mar")
on.exit(par(mar = oldmar))
par(mar = c(5, marnames, 3, 6) + 0.1)
if(all(z != 2)){
      # Do this if no variables were kept
image(x1, y, z, ylab = "",
xaxt = "n", yaxt = "n", xlab = "",
main = paste0("Best Models Ranked by ", x$metric),
col = cols[-3], ...)
legend(grconvertX(1, from = "npc"), grconvertY(1, from = "npc"),
legend = c("Included", "Excluded"),
fill = cols[-3],
xpd = TRUE, cex = cex.legend)
}else{
# Do this if there were any kept variables
image(x1, y, z, ylab = "",
xaxt = "n", yaxt = "n", xlab = "",
main = paste0("Best Models Ranked by ", x$metric),
col = cols, ...)
legend(grconvertX(1, from = "npc"), grconvertY(1, from = "npc"),
legend = c("Included", "Excluded", "Kept"),
fill = cols,
xpd = TRUE, cex = cex.legend)
}
# Adding lines
if(addLines){
abline(h = y + 0.5, v = x1 - 0.5)
}else{
abline(h = y + 0.5)
}
# Adding axis labels
axis(1, at = x1, labels = x1, line = 1, las = 1, cex.axis = cex.axis)
axis(2, at = y, labels = Names, line = 1, las = 2, cex.axis = cex.names)
    # Adding x-axis title, this is used to avoid overlapping of axis title and labels
mtext(paste0("Rank According to ", x$metric), side = 1, line = 4, cex = cex.lab)
}else if(ptype %in% c("variables", "both") && horiz){
# This is inspired by the plot.regsubsets function
n <- length(x$formulas)
Names <- colnames(x$results)[-(ncol(x$results))]
z <- x$results[, -(ncol(x$results))]
z[z == "kept"] <- 2
z[z == "no"] <- 1
z[z == "yes"] <- 0
z <- apply(z, 2, as.numeric)
if(is.matrix(z)){
z <- t(z)
}else{
z <- matrix(z, nrow = length(z))
}
y <- 1:ncol(z)
x1 <- 1:nrow(z)
# Creating image
oldmar <- par("mar")
on.exit(par(mar = oldmar))
par(mar = c(marnames, 5, 3, 6) + 0.1)
if(all(z != 2)){
      # Do this if no variables were kept
image(x1, y, z, ylab = "",
xaxt = "n", yaxt = "n", xlab = "",
main = paste0("Best Models Ranked by ", x$metric),
col = cols[-3], ...)
legend(grconvertX(1, from = "npc"), grconvertY(1, from = "npc"),
legend = c("Included", "Excluded"),
fill = cols[-3],
xpd = TRUE, cex = cex.legend)
}else{
# Do this if there were any kept variables
image(x1, y, z, ylab = "",
xaxt = "n", yaxt = "n", xlab = "",
main = paste0("Best Models Ranked by ", x$metric),
col = cols, ...)
legend(grconvertX(1, from = "npc"), grconvertY(1, from = "npc"),
legend = c("Included", "Excluded", "Kept"),
fill = cols,
xpd = TRUE, cex = cex.legend)
}
# Adding lines
if(addLines){
abline(v = x1 - 0.5, h = y + 0.5)
}else{
abline(v = x1 - 0.5)
}
# Adding axis labels
axis(1, at = x1, labels = Names, line = 1, las = 2, cex.axis = cex.names)
axis(2, at = y, labels = y, line = 1, las = 2, cex.axis = cex.axis)
# Adding y-axis title, this is used to avoid overlapping of axis title and labels
mtext(paste0("Rank According to ", x$metric), side = 2, line = 4, cex = cex.lab)
}
}
|
/scratch/gouwar.j/cran-all/cranData/BranchGLM/R/summaryBranchGLMVS.R
|
## ----setup, include = FALSE---------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
# Loading in BranchGLM
library(BranchGLM)
# Fitting gaussian regression models for mtcars dataset
cars <- mtcars
## Identity link
BranchGLM(mpg ~ ., data = cars, family = "gaussian", link = "identity")
## -----------------------------------------------------------------------------
# Fitting gamma regression models for mtcars dataset
## Inverse link
GammaFit <- BranchGLM(mpg ~ ., data = cars, family = "gamma", link = "inverse")
GammaFit
## Log link
GammaFit <- BranchGLM(mpg ~ ., data = cars, family = "gamma", link = "log")
GammaFit
## -----------------------------------------------------------------------------
# Fitting poisson regression models for warpbreaks dataset
warp <- warpbreaks
## Log link
BranchGLM(breaks ~ ., data = warp, family = "poisson", link = "log")
## -----------------------------------------------------------------------------
# Fitting binomial regression models for toothgrowth dataset
Data <- ToothGrowth
## Logit link
BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
## Probit link
BranchGLM(supp ~ ., data = Data, family = "binomial", link = "probit")
## -----------------------------------------------------------------------------
# Fitting logistic regression model for toothgrowth dataset
catFit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
Table(catFit)
## -----------------------------------------------------------------------------
# Creating ROC curve
catROC <- ROC(catFit)
plot(catROC, main = "ROC Curve", col = "indianred")
## -----------------------------------------------------------------------------
# Getting Cindex/AUC
Cindex(catFit)
AUC(catFit)
## ----fig.width = 4, fig.height = 4--------------------------------------------
# Showing ROC plots for logit, probit, and cloglog
probitFit <- BranchGLM(supp ~ . ,data = Data, family = "binomial",
link = "probit")
cloglogFit <- BranchGLM(supp ~ . ,data = Data, family = "binomial",
link = "cloglog")
MultipleROCCurves(catROC, ROC(probitFit), ROC(cloglogFit),
names = c("Logistic ROC", "Probit ROC", "Cloglog ROC"))
## -----------------------------------------------------------------------------
preds <- predict(catFit)
Table(preds, Data$supp)
AUC(preds, Data$supp)
ROC(preds, Data$supp) |> plot(main = "ROC Curve", col = "deepskyblue")
## -----------------------------------------------------------------------------
# Predict method
predict(GammaFit)
# Accessing coefficients matrix
GammaFit$coefficients
|
/scratch/gouwar.j/cran-all/cranData/BranchGLM/inst/doc/BranchGLM-Vignette.R
|
---
title: "BranchGLM Vignette"
output:
rmarkdown::html_vignette:
toc: TRUE
number_sections: TRUE
vignette: >
%\VignetteIndexEntry{BranchGLM Vignette}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Fitting GLMs
- `BranchGLM()` allows fitting of gaussian, binomial, gamma, and Poisson GLMs with a variety of links available.
- Parallel computation can also be done to speed up the fitting process, but it is only useful for larger datasets.
## Optimization methods
- The optimization method can be specified; the default method is Fisher scoring, but BFGS and L-BFGS are also available.
- BFGS and L-BFGS typically perform better when there are many predictors in the model (at least 50 predictors); otherwise, Fisher scoring is typically faster.
- The `grads` argument applies only to L-BFGS; it is the number of gradients that are stored at a time and used to approximate the inverse information. The default value is 10, but another common choice is 5.
- The `tol` argument controls how strict the convergence criteria are; lower values will lead to more accurate results, but may also be slower.
- The `method` argument is ignored for linear regression, where the OLS solution is
used, as sketched below.
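A minimal sketch of choosing the optimizer (the exact method strings, such as "LBFGS", are an assumption here rather than something stated above, so check `?BranchGLM` for the accepted values):
```{r, eval = FALSE}
# Requesting L-BFGS with 5 stored gradients (interface sketch; the method
# string "LBFGS" is an assumption, see ?BranchGLM for the accepted values)
BranchGLM(mpg ~ ., data = mtcars, family = "gamma", link = "log",
          method = "LBFGS", grads = 5)
```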
## Initial values
- Initial values for the coefficient estimates may be specified via the `init`
argument; a sketch is given after this list.
- If no initial values are specified, then the initial values are estimated
via linear regression with the response variable transformed by the link function.
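A minimal sketch (it is assumed here that `init` takes a numeric vector with one entry per coefficient, including the intercept; that length convention is an assumption, not something stated above):
```{r, eval = FALSE}
# Supplying starting values for the intercept and two slopes
# (interface sketch; the length convention for init is an assumption)
BranchGLM(mpg ~ wt + hp, data = mtcars, family = "gamma", link = "log",
          init = c(3, 0, 0))
```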
## Parallel computation
- Parallel computation can be employed via OpenMP by setting the `parallel` argument
to `TRUE` and setting the `nthreads` argument to the desired number of threads, as sketched below.
- For smaller datasets this can actually slow down the model fitting process, so
parallel computation should only be used for larger datasets.
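A minimal sketch of the interface (the `mtcars` data used in this vignette is far too small for parallelism to pay off; this only shows the arguments):
```{r, eval = FALSE}
# Fitting with 2 OpenMP threads (interface sketch only; not worthwhile
# for a dataset this small)
BranchGLM(mpg ~ ., data = mtcars, family = "gamma", link = "log",
          parallel = TRUE, nthreads = 2)
```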
# Families
## Gaussian
- Permissible links for the gaussian family are
- identity, which results in linear regression
- inverse
- log
- square root (sqrt)
- The most commonly used link function for the gaussian family is the identity link.
- The dispersion parameter for this family is estimated by using the mean square
error.
```{r}
# Loading in BranchGLM
library(BranchGLM)
# Fitting gaussian regression models for mtcars dataset
cars <- mtcars
## Identity link
BranchGLM(mpg ~ ., data = cars, family = "gaussian", link = "identity")
```
## Gamma
- Permissible links for the gamma family are
- identity
- inverse, this is the canonical link for the gamma family
- log
- square root (sqrt)
- The most commonly used link functions for the gamma family are inverse and log.
- The dispersion parameter for this family is estimated via maximum likelihood,
similar to the `MASS::gamma.dispersion()` function.
```{r}
# Fitting gamma regression models for mtcars dataset
## Inverse link
GammaFit <- BranchGLM(mpg ~ ., data = cars, family = "gamma", link = "inverse")
GammaFit
## Log link
GammaFit <- BranchGLM(mpg ~ ., data = cars, family = "gamma", link = "log")
GammaFit
```
## Poisson
- Permissible links for the Poisson family are
- identity
- log, this is the canonical link for the Poisson family
- square root (sqrt)
- The most commonly used link function for the Poisson family is the log link.
- The dispersion parameter for this family is always 1.
```{r}
# Fitting poisson regression models for warpbreaks dataset
warp <- warpbreaks
## Log link
BranchGLM(breaks ~ ., data = warp, family = "poisson", link = "log")
```
## Binomial
- Permissible links for the binomial family are
- cloglog
- log
- logit, this is the canonical link for the binomial family
- probit
- The most commonly used link functions for the binomial family are logit and probit.
- The dispersion parameter for this family is always 1.
```{r}
# Fitting binomial regression models for toothgrowth dataset
Data <- ToothGrowth
## Logit link
BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
## Probit link
BranchGLM(supp ~ ., data = Data, family = "binomial", link = "probit")
```
### Functions for binomial GLMs
- **BranchGLM** has some utility functions for binomial GLMs
- `Table()` creates a confusion matrix based on the predicted classes and observed classes
- `ROC()` creates an ROC curve which can be plotted with `plot()`
- `AUC()` and `Cindex()` calculate the area under the ROC curve
- `MultipleROCCurves()` allows for the plotting of multiple ROC curves on the same plot
#### Table
```{r}
# Fitting logistic regression model for toothgrowth dataset
catFit <- BranchGLM(supp ~ ., data = Data, family = "binomial", link = "logit")
Table(catFit)
```
#### ROC
```{r}
# Creating ROC curve
catROC <- ROC(catFit)
plot(catROC, main = "ROC Curve", col = "indianred")
```
#### Cindex/AUC
```{r}
# Getting Cindex/AUC
Cindex(catFit)
AUC(catFit)
```
#### MultipleROCCurves
```{r, fig.width = 4, fig.height = 4}
# Showing ROC plots for logit, probit, and cloglog
probitFit <- BranchGLM(supp ~ . ,data = Data, family = "binomial",
link = "probit")
cloglogFit <- BranchGLM(supp ~ . ,data = Data, family = "binomial",
link = "cloglog")
MultipleROCCurves(catROC, ROC(probitFit), ROC(cloglogFit),
names = c("Logistic ROC", "Probit ROC", "Cloglog ROC"))
```
#### Using predictions
- For each of the methods used in this section, predicted probabilities and observed 
classes can also be supplied instead of the `BranchGLM` object.
```{r}
preds <- predict(catFit)
Table(preds, Data$supp)
AUC(preds, Data$supp)
ROC(preds, Data$supp) |> plot(main = "ROC Curve", col = "deepskyblue")
```
# Useful functions
- **BranchGLM** has many utility functions for GLMs such as
- `coef()` to extract the coefficients
- `logLik()` to extract the log likelihood
- `AIC()` to extract the AIC
- `BIC()` to extract the BIC
- `predict()` to obtain predictions from the fitted model
- The coefficients, standard errors, Wald test statistics, and p-values are stored in the `coefficients` slot of the fitted model
```{r}
# Predict method
predict(GammaFit)
# Accessing coefficients matrix
GammaFit$coefficients
```
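Each of the generics listed above works directly on the fitted model; for example, with the gamma fit from earlier:
```{r}
# Extracting common model summaries with the generics listed above
coef(GammaFit)
logLik(GammaFit)
AIC(GammaFit)
BIC(GammaFit)
```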
|
/scratch/gouwar.j/cran-all/cranData/BranchGLM/inst/doc/BranchGLM-Vignette.Rmd
|
## ----include = FALSE----------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
# Loading BranchGLM package
library(BranchGLM)
# Fitting gamma regression model
cars <- mtcars
# Fitting gamma regression with inverse link
GammaFit <- BranchGLM(mpg ~ ., data = cars, family = "gamma", link = "inverse")
# Forward selection with mtcars
forwardVS <- VariableSelection(GammaFit, type = "forward")
forwardVS
## Getting final coefficients
coef(forwardVS, which = 1)
## -----------------------------------------------------------------------------
# Backward elimination with mtcars
backwardVS <- VariableSelection(GammaFit, type = "backward")
backwardVS
## Getting final coefficients
coef(backwardVS, which = 1)
## -----------------------------------------------------------------------------
# Branch and bound with mtcars
VS <- VariableSelection(GammaFit, type = "branch and bound", showprogress = FALSE)
VS
## Getting final coefficients
coef(VS, which = 1)
## -----------------------------------------------------------------------------
# Can also use a formula and data
formulaVS <- VariableSelection(mpg ~ . ,data = cars, family = "gamma",
link = "inverse", type = "branch and bound",
showprogress = FALSE, metric = "AIC")
formulaVS
## Getting final coefficients
coef(formulaVS, which = 1)
## ----fig.height = 4, fig.width = 6--------------------------------------------
# Finding top 10 models
formulaVS <- VariableSelection(mpg ~ . ,data = cars, family = "gamma",
link = "inverse", type = "branch and bound",
showprogress = FALSE, metric = "AIC",
bestmodels = 10)
formulaVS
## Plotting results
plot(formulaVS, type = "b")
## Getting all coefficients
coef(formulaVS, which = "all")
## ----fig.height = 4, fig.width = 6--------------------------------------------
# Finding all models with an AIC within 2 of the best model
formulaVS <- VariableSelection(mpg ~ . ,data = cars, family = "gamma",
link = "inverse", type = "branch and bound",
showprogress = FALSE, metric = "AIC",
cutoff = 2)
formulaVS
## Plotting results
plot(formulaVS, type = "b")
## ----fig.height = 4, fig.width = 6--------------------------------------------
# Example of using keep
keepVS <- VariableSelection(mpg ~ . ,data = cars, family = "gamma",
link = "inverse", type = "branch and bound",
keep = c("hp", "cyl"), metric = "AIC",
showprogress = FALSE, bestmodels = 10)
keepVS
## Getting summary and plotting results
plot(keepVS, type = "b")
## Getting coefficients for top 10 models
coef(keepVS, which = "all")
## ----fig.height = 4, fig.width = 6--------------------------------------------
# Variable selection with grouped beta parameters for species
Data <- iris
VS <- VariableSelection(Sepal.Length ~ ., data = Data, family = "gaussian",
link = "identity", metric = "AIC", bestmodels = 10,
showprogress = FALSE)
VS
## Plotting results
plot(VS, cex.names = 0.75, type = "b")
## ----fig.height = 4, fig.width = 6--------------------------------------------
# Treating categorical variable beta parameters separately
## This function automatically groups together parameters from a categorical variable;
## to avoid this, you need to create the indicator variables yourself
x <- model.matrix(Sepal.Length ~ ., data = iris)
Sepal.Length <- iris$Sepal.Length
Data <- cbind.data.frame(Sepal.Length, x[, -1])
VSCat <- VariableSelection(Sepal.Length ~ ., data = Data, family = "gaussian",
link = "identity", metric = "AIC", bestmodels = 10,
showprogress = FALSE)
VSCat
## Plotting results
plot(VSCat, cex.names = 0.75, type = "b")
|
/scratch/gouwar.j/cran-all/cranData/BranchGLM/inst/doc/VariableSelection-Vignette.R
|
---
title: "VariableSelection Vignette"
output:
rmarkdown::html_vignette:
toc: TRUE
number_sections: TRUE
vignette: >
%\VignetteIndexEntry{VariableSelection Vignette}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Performing variable selection
- Forward selection, backward elimination, and branch and bound selection can be done using `VariableSelection()`.
- `VariableSelection()` can accept either a `BranchGLM` object or a formula along with the data and the desired family and link to perform the variable selection.
- Available metrics are AIC, BIC, and HQIC, which are used to compare models and to select the best models.
- `VariableSelection()` returns some information about the search; more detailed
information about the best models can be seen by using the `summary()` function.
- Note that `VariableSelection()` will properly handle interaction terms and
categorical variables.
- `keep` can also be specified if any set of variables are desired to be kept in every model.
## Metrics
- The 3 different metrics available for comparing models are the following (a short sketch of choosing a metric follows this list)
  - Akaike information criterion (AIC), which typically results in models that are 
  useful for prediction
    - $AIC = -2logLik + 2 \times p$
  - Bayesian information criterion (BIC), which results in models that are more 
  parsimonious than those selected by AIC
    - $BIC = -2logLik + \log{(n)} \times p$
  - Hannan-Quinn information criterion (HQIC), which is in the middle of AIC and BIC
    - $HQIC = -2logLik + 2 \times \log{(\log{(n)})} \times p$
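As a minimal sketch, the metric is chosen through the `metric` argument (here with the `mtcars` data that is also used below):
```{r}
# Branch and bound selection ranked by HQIC (minimal sketch)
library(BranchGLM)
hqicVS <- VariableSelection(mpg ~ ., data = mtcars, family = "gaussian",
                            link = "identity", type = "branch and bound",
                            metric = "HQIC", showprogress = FALSE)
hqicVS
```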
## Stepwise methods
- Forward selection and backward elimination are both stepwise variable selection methods.
- They are not guaranteed to find the best model or even a good model, but they are very fast.
- Forward selection is recommended if the number of variables is greater than the number of observations or if many of the larger models don't converge.
- These methods will only return 1 best model.
- Parallel computation can be used for these methods, but is generally only necessary
for large datasets.
### Forward selection example
```{r}
# Loading BranchGLM package
library(BranchGLM)
# Fitting gamma regression model
cars <- mtcars
# Fitting gamma regression with inverse link
GammaFit <- BranchGLM(mpg ~ ., data = cars, family = "gamma", link = "inverse")
# Forward selection with mtcars
forwardVS <- VariableSelection(GammaFit, type = "forward")
forwardVS
## Getting final coefficients
coef(forwardVS, which = 1)
```
### Backward elimination example
```{r}
# Backward elimination with mtcars
backwardVS <- VariableSelection(GammaFit, type = "backward")
backwardVS
## Getting final coefficients
coef(backwardVS, which = 1)
```
## Branch and bound
- The branch and bound methods can be much slower than the stepwise methods, but
they are guaranteed to find the best models.
- The branch and bound methods are typically much faster than an exhaustive search and can also be made even faster if parallel computation is used.
### Branch and bound example
- If `showprogress` is `TRUE`, then progress of the branch and bound algorithm will be reported occasionally.
- Parallel computation can be used with these methods and can lead to very large speedups.
```{r}
# Branch and bound with mtcars
VS <- VariableSelection(GammaFit, type = "branch and bound", showprogress = FALSE)
VS
## Getting final coefficients
coef(VS, which = 1)
```
- A formula with the data and the necessary BranchGLM fitting information can
also be used instead of supplying a `BranchGLM` object.
```{r}
# Can also use a formula and data
formulaVS <- VariableSelection(mpg ~ . ,data = cars, family = "gamma",
link = "inverse", type = "branch and bound",
showprogress = FALSE, metric = "AIC")
formulaVS
## Getting final coefficients
coef(formulaVS, which = 1)
```
### Using bestmodels
- The `bestmodels` argument can be used to find the top k models according to the 
metric.
```{r, fig.height = 4, fig.width = 6}
# Finding top 10 models
formulaVS <- VariableSelection(mpg ~ . ,data = cars, family = "gamma",
link = "inverse", type = "branch and bound",
showprogress = FALSE, metric = "AIC",
bestmodels = 10)
formulaVS
## Plotting results
plot(formulaVS, type = "b")
## Getting all coefficients
coef(formulaVS, which = "all")
```
### Using cutoff
- The `cutoff` argument can be used to find all models that have a metric value 
that is within `cutoff` of the minimum metric value found.
```{r, fig.height = 4, fig.width = 6}
# Finding all models with an AIC within 2 of the best model
formulaVS <- VariableSelection(mpg ~ . ,data = cars, family = "gamma",
link = "inverse", type = "branch and bound",
showprogress = FALSE, metric = "AIC",
cutoff = 2)
formulaVS
## Plotting results
plot(formulaVS, type = "b")
```
## Using keep
- Specifying variables via `keep` will ensure that those variables are kept through the selection process.
```{r, fig.height = 4, fig.width = 6}
# Example of using keep
keepVS <- VariableSelection(mpg ~ . ,data = cars, family = "gamma",
link = "inverse", type = "branch and bound",
keep = c("hp", "cyl"), metric = "AIC",
showprogress = FALSE, bestmodels = 10)
keepVS
## Getting summary and plotting results
plot(keepVS, type = "b")
## Getting coefficients for top 10 models
coef(keepVS, which = "all")
```
## Categorical variables
- Categorical variables are automatically grouped together; if this behavior is 
not desired, then the indicator variables for that categorical variable should be 
created before using `VariableSelection()`.
- First we show an example of the default behavior of the function with a categorical
variable. In this example the categorical variable of interest is Species.
```{r, fig.height = 4, fig.width = 6}
# Variable selection with grouped beta parameters for species
Data <- iris
VS <- VariableSelection(Sepal.Length ~ ., data = Data, family = "gaussian",
link = "identity", metric = "AIC", bestmodels = 10,
showprogress = FALSE)
VS
## Plotting results
plot(VS, cex.names = 0.75, type = "b")
```
- Next we show an example where the beta parameters for each level of Species 
are handled separately.
```{r, fig.height = 4, fig.width = 6}
# Treating categorical variable beta parameters separately
## This function automatically groups together parameters from a categorical variable;
## to avoid this, you need to create the indicator variables yourself
x <- model.matrix(Sepal.Length ~ ., data = iris)
Sepal.Length <- iris$Sepal.Length
Data <- cbind.data.frame(Sepal.Length, x[, -1])
VSCat <- VariableSelection(Sepal.Length ~ ., data = Data, family = "gaussian",
link = "identity", metric = "AIC", bestmodels = 10,
showprogress = FALSE)
VSCat
## Plotting results
plot(VSCat, cex.names = 0.75, type = "b")
```
## Convergence issues
- It is not recommended to use the branch and bound algorithms if many of the upper models do not converge, since this can make the algorithms very slow.
- Sometimes, when using backward elimination, if all of the upper models that are tested 
fail to converge, then no final model can be selected.
- For these reasons, if there are convergence issues it is recommended to use forward selection.
|
/scratch/gouwar.j/cran-all/cranData/BranchGLM/inst/doc/VariableSelection-Vignette.Rmd
|
# These functions calculate the covariance from input variables of
# Bienayme - Galton - Watson multitype processes.
# Copyright (C) 2010 Camilo Jose Torres-Jimenez <[email protected]>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
BGWM.covar <- function(dists, type=c("general","multinomial","independents"), d, n=1, z0=NULL, maxiter = 1e5)
{
if(n < 1)
stop("'n' must be a positive number")
  # parameter restrictions
  stype <- match.arg(type)
  V <- switch(stype,
              "general"      = BGWM.gener.covar(dists, d),
              "multinomial"  = BGWM.multinom.covar(dists, d, maxiter),
              "independents" = BGWM.indep.covar(dists, d, maxiter))
if( n > 1 )
{
m <- BGWM.mean( dists, type, d, maxiter=maxiter )
V <- matrix( t(V), d*d, d )
m.n_i <- diag( rep( 1, d ), d, d )
if( length(z0) != 0 )
{
Cov <- rowSums( V %*% diag( c( BGWM.mean( dists, type, d, (n-1), z0, maxiter ) ), d, d ) )
for(i in (n-1):1)
{
m.n_i <- m.n_i %*% m
if( i != 1 )
AUX <- rowSums( V %*% diag( c( BGWM.mean( dists, type, d, (i-1), z0, maxiter ) ), d, d ) )
else
AUX <- rowSums( V %*% diag( z0, d, d ) )
AUX <- matrix( AUX, d, d )
AUX <- t(m.n_i) %*% AUX %*% m.n_i
Cov <- Cov + AUX
}
Cov <- matrix( Cov, d, d )
dimnames(Cov) <- list( paste( "type", 1:d, sep="" ), paste( "type", 1:d, sep="" ) )
}
else
{
Cov <- NULL
for( j in 1:d )
{
z0 <- rep(0,d)
z0[j] <- 1
Cov.j <- rowSums( V %*% diag( c( BGWM.mean( dists, type, d, (n-1), z0, maxiter ) ), d, d ) )
for(i in (n-1):1)
{
m.n_i <- m.n_i %*% m
if( i != 1 )
AUX <- rowSums( V %*% diag( c( BGWM.mean( dists, type, d, (i-1), z0, maxiter ) ), d, d ) )
else
AUX <- rowSums( V %*% diag( z0, d, d ) )
AUX <- matrix( AUX, d, d )
AUX <- t(m.n_i) %*% AUX %*% m.n_i
Cov.j <- Cov.j + AUX
}
Cov.j <- matrix( Cov.j, d, d )
Cov <- rbind( Cov, Cov.j )
}
dimnames(Cov) <- list( paste( "dist", rep(1:d,rep(d,d)), ".type", rep(1:d,d), sep="" ), paste( "type", 1:d, sep="" ) )
}
}
else
{
if( length(z0) != 0 )
{
V <- matrix( t(V), d*d, d )
Cov <- rowSums( V %*% diag( z0, d, d ) )
Cov <- matrix( Cov, d, d )
dimnames(Cov) <- list( paste( "type", 1:d, sep="" ), paste( "type", 1:d, sep="" ) )
}
else
{
Cov <- V
dimnames(Cov) <- list( paste( "dist", rep(1:d,rep(d,d)), ".type", rep(1:d,d), sep="" ), paste( "type", 1:d, sep="" ) )
}
}
Cov
}
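# Example (a hedged sketch): covariance after one generation for a 2-type
# process with independent Poisson offspring counts. The layout of `dists`
# for type = "independents" (a d*d table whose first column names the
# distribution and whose remaining columns hold its parameters) is inferred
# from BGWM.indep.covar() below, so treat it as an assumption.
#
#   indep.dists <- data.frame(name = rep("pois", 4),
#                             lambda = c(0.5, 1.0, 0.8, 0.2))
#   BGWM.covar(indep.dists, type = "independents", d = 2, n = 1)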
BGWM.gener.covar <- function(gener.dists, d)
{
  # Checks on the parameters
  # gener.dists[[1]]: number of support points of each type's offspring distribution
  # gener.dists[[2]]: probabilities attached to those support points
  # gener.dists[[3]]: the offspring vectors themselves
  s <- gener.dists[[1]]
  p <- gener.dists[[2]]
  v <- gener.dists[[3]]
  p <- data.frame( p = unlist(p), k = as.factor( rep( 1:d, unlist(s) ) ) )
  v <- data.frame( v = matrix( unlist( lapply( v, t ) ), ncol=d, byrow=TRUE ), k = as.factor( rep( 1:d, unlist(s) ) ) )
  # Normalizing the probabilities within each type
  p[,1] <- p[,1] / rep( aggregate(p[,1], list(p[,2]) , sum)[,2] , unlist(s) )
  # Outer products of the mean vectors, E[X]E[X]', stacked one d x d block per type
  E2.X <- v
  E2.X[,-(d+1)] <- E2.X[,-(d+1)] * p[,1]
  E2.X <- aggregate( E2.X[,-(d+1)], list( E2.X[,(d+1)] ), sum )[,-1]
  E2.X <- matrix( c( apply( E2.X, 1, tcrossprod ) ), ncol=d, byrow=TRUE )
  # Second moments E[X_j^2] within each type
  aux1 <- v[,-(d+1)] * v[,-(d+1)] * p[,1]
  aux1 <- data.frame( aux1, k = as.factor( rep( 1:d, unlist(s) ) ) )
  aux1 <- aggregate( aux1[,-(d+1)], list(aux1[,(d+1)]), sum )[,-1]
  aux1 <- t( as.matrix( aux1 ) )
  # Cross moments E[X_i X_j], i < j, within each type
  a <- combn(1:d, 2)
  aux2 <- v[,a[1,]] * v[,a[2,]] * p[,1]
  aux2 <- data.frame( aux2, k = as.factor( rep( 1:d, unlist(s) ) ) )
  aux2 <- aggregate( aux2[,-(ncol(a)+1)], list(aux2[,(ncol(a)+1)]), sum )[,-1]
  aux2 <- t( as.matrix( aux2 ) )
  # Assembling the per-type second-moment matrices E[XX'] from the pieces above
  E.X2 <- matrix( rep( NA, (d*d*d) ), ncol=d*d )
  E.X2[row( E.X2 ) %% d == col( E.X2 ) %% d] <- aux1
  E.X2[( row( E.X2 ) %% d > col( E.X2 ) %% d | row( E.X2 ) == d ) & col( E.X2 ) %% d !=0] <- aux2
  E.X2[is.na(E.X2)] <- aux2[order(a[2,]),]
  # Covariance blocks: E[XX'] - E[X]E[X]'
  covar <- t(E.X2) - E2.X
  covar
}
BGWM.multinom.covar <- function(multinom.dists, d, maxiter = 1e5)
{
  # Checks on the parameters
dists <- multinom.dists[[1]]
pmultinom <- multinom.dists[[2]]
pmultinom <- as.matrix(pmultinom)
mean <- rep( NA, d )
var <- rep( NA, d )
# unif
a <- dists[,1] == "unif"
if(TRUE %in% a)
{
mean[a] <- ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) ) / 2
var[a] <- ( ( ( as.numeric( dists[a,3] ) - as.numeric( dists[a,2] ) + 1 ) ^ 2 ) - 1 ) / 12
}
# binom
a <- dists[,1] == "binom"
if(TRUE %in% a)
{
mean[a] <- as.numeric( dists[a,2] ) * as.numeric( dists[a,3] )
var[a] <- as.numeric( dists[a,2] ) * as.numeric( dists[a,3] ) * ( 1 - as.numeric( dists[a,3] ) )
}
# hyper
a <- dists[,1] == "hyper"
if(TRUE %in% a)
{
mean[a] <- as.numeric( dists[a,2] ) * as.numeric( dists[a,4] ) / ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) )
var[a] <- ( as.numeric( dists[a,2] ) * as.numeric( dists[a,4] ) * as.numeric( dists[a,3] ) * ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) - as.numeric( dists[a,4] ) ) ) / ( ( (as.numeric( dists[a,2] ) + as.numeric( dists[a,3] )) ^ 2 ) * ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) - 1 ) )
}
# geom
a <- dists[,1] == "geom"
if(TRUE %in% a)
{
mean[a] <- ( 1 - as.numeric( dists[a,2] ) ) / as.numeric( dists[a,2] )
var[a] <- ( 1 - as.numeric( dists[a,2] ) ) / ( as.numeric( dists[a,2] ) ^ 2 )
}
# nbinom
a <- dists[,1] == "nbinom"
if(TRUE %in% a)
{
mean[a] <- as.numeric( dists[a,2] ) * ( 1 - as.numeric( dists[a,3] ) ) / as.numeric( dists[a,3] )
var[a] <- as.numeric( dists[a,2] ) * ( 1 - as.numeric( dists[a,3] ) ) / ( as.numeric( dists[a,3] ) ^ 2 )
}
# pois
a <- dists[,1] == "pois"
if(TRUE %in% a)
{
mean[a] <- as.numeric( dists[a,2] )
var[a] <- as.numeric( dists[a,2] )
}
  # norm (moments estimated numerically via the package's compiled C routine)
a <- dists[,1] == "norm"
n <- length( var[a] )
if(n > 0)
{
aux <- .C("param_estim_roundcut0_norm",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 3 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")
mean[a] <- round( aux$mean.estim, round( log10(maxiter)/2 ) )
var[a] <- round( aux$var.estim, floor( log10(maxiter)/2 ) )
}
# lnorm
a <- dists[,1] == "lnorm"
n <- length( var[a] )
if(n > 0)
{
aux <- .C("param_estim_round_lnorm",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 3 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")
mean[a] <- round( aux$mean.estim, round( log10(maxiter)/2 ) )
var[a] <- round( aux$var.estim, floor( log10(maxiter)/2 ) )
}
# gamma
a <- dists[,1] == "gamma"
n <- length( var[a] )
if(n > 0)
{
aux <- .C("param_estim_round_gamma",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 3 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")
mean[a] <- round( aux$mean.estim, round( log10(maxiter)/2 ) )
var[a] <- round( aux$var.estim, floor( log10(maxiter)/2 ) )
}
aux1 <- matrix( rep( 0, (d*d*d) ), ncol=d )
aux1[row(aux1) %% d == col(aux1) %% d] <- pmultinom
aux2 <- matrix( c( apply( pmultinom, 1, tcrossprod ) ), ncol=d, byrow=TRUE )
mean <- rep( mean, rep( d, d ) )
var <- rep( var, rep( d, d ) )
covar <- (aux1 - aux2) * mean + aux2 * var
covar
}
BGWM.indep.covar <- function(indep.dists, d, maxiter = 1e5)
{
# Parameter checks
dists <- indep.dists
var <- rep( NA, (d*d) )
# unif
a <- dists[,1] == "unif"
if(TRUE %in% a)
var[a] <- ( ( ( as.numeric( dists[a,3] ) - as.numeric( dists[a,2] ) + 1 ) ^ 2 ) - 1 ) / 12
# binom
a <- dists[,1] == "binom"
if(TRUE %in% a)
var[a] <- as.numeric( dists[a,2] ) * as.numeric( dists[a,3] ) * ( 1 - as.numeric( dists[a,3] ) )
# hyper
a <- dists[,1] == "hyper"
if(TRUE %in% a)
var[a] <- ( as.numeric( dists[a,2] ) * as.numeric( dists[a,4] ) * as.numeric( dists[a,3] ) * ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) - as.numeric( dists[a,4] ) ) ) / ( ( (as.numeric( dists[a,2] ) + as.numeric( dists[a,3] )) ^ 2 ) * ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) - 1 ) )
# geom
a <- dists[,1] == "geom"
if(TRUE %in% a)
var[a] <- ( 1 - as.numeric( dists[a,2] ) ) / ( as.numeric( dists[a,2] ) ^ 2 )
# nbinom
a <- dists[,1] == "nbinom"
if(TRUE %in% a)
var[a] <- as.numeric( dists[a,2] ) * ( 1 - as.numeric( dists[a,3] ) ) / ( as.numeric( dists[a,3] ) ^ 2 )
# pois
a <- dists[,1] == "pois"
if(TRUE %in% a)
var[a] <- as.numeric( dists[a,2] )
# norm
a <- dists[,1] == "norm"
n <- length( var[a] )
if(n > 0)
var[a] <- round( .C("param_estim_roundcut0_norm",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 2 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$var.estim , floor( log10(maxiter)/2 ) )
# lnorm
a <- dists[,1] == "lnorm"
n <- length( var[a] )
if(n > 0)
var[a] <- round( .C("param_estim_round_lnorm",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 2 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$var.estim , floor( log10(maxiter)/2 ) )
# gamma
a <- dists[,1] == "gamma"
n <- length( var[a] )
if(n > 0)
var[a] <- round( .C("param_estim_round_gamma",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 2 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$var.estim , floor( log10(maxiter)/2 ) )
covar <- matrix( rep( 0, (d*d*d) ), ncol=d )
covar[row( covar ) %% d == col( covar ) %% d] <- matrix( var, ncol=d, byrow=TRUE )
covar
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/Branching/R/BGWM.covar.R ---
# These functions calculate a covariance estimation from observed sample of
# Bienayme - Galton - Watson multitype processes.
# Copyright (C) 2010 Camilo Jose Torres-Jimenez <[email protected]>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
BGWM.covar.estim <- function(sample, method=c("EE-m","MLE-m"), d, n, z0)
{
method <- match.arg(method)
method <- switch(method,
"EE-m" = 1,
"MLE-m" = 2)
V <- switch(method,
{#1
V <- BGWM.covar.EE(sample, d, n, z0)
V
},
{#2
V <- BGWM.covar.MLE(sample, d, n, z0)
V
})
dimnames(V) <- list( paste( "dist", rep(1:d,rep(d,d)), ".type", rep(1:d,d), sep="" ), paste( "type", 1:d, sep="" ) )
list(method=switch( method, "with Empirical Estimation of the means", "with Maximum Likelihood Estimation of the means" ), V=V )
}
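# Usage sketch (hypothetical inputs, not from the package documentation):
# simulate a two-type process with rBGWM and estimate the offspring
# covariance matrices from the individual-level counts returned in o.c.s.
# The 'dists' object below assumes the 'independents' structure (one row
# per parent-type/offspring-type pair).
# dists <- data.frame(name = rep("pois", 4), lambda = c(0.6, 0.4, 0.3, 0.7))
# sim <- rBGWM(dists, type = "independents", d = 2, n = 50, z0 = c(10, 10))
# BGWM.covar.estim(sim$o.c.s, method = "EE-m", d = 2, n = 50, z0 = c(10, 10))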
BGWM.covar.EE <- function(y, d, n, z0)
{
y <- as.matrix(y)
if(length(d) != 1)
stop("'d' must be a number")
if(length(n) != 1)
stop("'n' must be a number")
if(length(z0) != d)
stop("'z0' must be a d-dimensional vector")
if(TRUE %in% (z0 < 0))
stop("'z0' must have positive elements")
if(is.matrix(y) == FALSE)
stop("'y' must be a matrix")
if(ncol(y) != d || nrow(y) < (n*d))
stop("'y' must have d columns and at least (n*d) rows")
if(n == 1)
stop("'n' must be greater than 1")
y <- y[1:(n*d),]
out <- matrix( rep( 0, (d*d*d) ), ncol=d )
Mn <- BGWM.mean.EE(y, d, n, z0)
for( i in 1:(n-1) )
{
if(i != 1)
Zi_1 <- apply( y[seq( (i-2)*d+1, (i-1)*d, 1 ),], 2, sum )
else
Zi_1 <- z0
Zi_1 <- rep( Zi_1, rep( d, d ) )
aux <- BGWM.mean.EE(y, d, i, z0) - Mn
out <- out + matrix( c( apply( aux, 1, tcrossprod ) ), ncol=d, byrow=TRUE ) * Zi_1
}
out <- out / n
out
}
BGWM.covar.MLE <- function(y, d, n, z0)
{
y <- as.matrix(y)
if(length(d) != 1)
stop("'d' must be a number")
if(length(n) != 1)
stop("'n' must be a number")
if(length(z0) != d)
stop("'z0' must be a d-dimensional vector")
if(TRUE %in% (z0 < 0))
stop("'z0' must have positive elements")
if(is.matrix(y) == FALSE)
stop("'y' must be a matrix")
if(ncol(y) != d || nrow(y) < (n*d))
stop("'y' must have d columns and at least (n*d) rows")
if(n == 1)
stop("'n' must be greater than 1")
y <- y[1:(n*d),]
out <- matrix( rep( 0, (d*d*d) ), ncol=d )
Mn <- BGWM.mean.MLE(y, d, n, z0)
for( i in 1:(n-1) )
{
if(i != 1)
aux2 <- aux2 + apply( y[seq( (i-2)*d+1, (i-1)*d, 1 ),], 2, sum )
else
aux2 <- z0
aux1 <- BGWM.mean.MLE(y, d, i, z0) - Mn
out <- out + matrix( c( apply( aux1, 1, tcrossprod ) ), ncol=d, byrow=TRUE ) * rep( aux2, rep( d, d ) )
}
out <- out / n
out
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/Branching/R/BGWM.covar.estim.R ---
# These functions calculate the mean of a Bienayme - Galton - Watson
# multitype processes.
# Copyright (C) 2010 Camilo Jose Torres-Jimenez <[email protected]>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
BGWM.mean <- function(dists, type=c("general","multinomial","independents"), d, n=1, z0=NULL, maxiter = 1e5)
{
if(n < 1)
stop("'n' must be a positive number")
# Parameter restrictions
type <- match.arg(type)
type <- switch(type,
"general" = 1,
"multinomial" = 2,
"independents" = 3)
M <- switch(type,
{#1
M <- BGWM.gener.mean(dists, d)
M
},
{#2
M <- BGWM.multinom.mean(dists, d, maxiter)
M
},
{#3
M <- BGWM.indep.mean(dists, d, maxiter)
M
})
dimnames(M) <- list( paste( "type", 1:d, sep="" ), paste( "type", 1:d, sep="" ) )
mean <- M
if( n != 1 )
{
if( n > 1 )
# This could be changed to a better method
for( i in 2:n )
mean <- mean %*% M
}
if( length(z0) != 0 )
mean <- z0 %*% mean
mean
}
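# Usage sketch (hypothetical 'general' specification): each type has two
# possible offspring vectors with the given probabilities; BGWM.mean then
# returns the n-step mean matrix (here after 3 generations).
# s <- list(2, 2)
# p <- list(c(0.5, 0.5), c(0.3, 0.7))
# v <- list(matrix(c(0, 1, 2, 0), 2, 2, byrow = TRUE),
#           matrix(c(1, 1, 0, 2), 2, 2, byrow = TRUE))
# BGWM.mean(list(s, p, v), type = "general", d = 2, n = 3)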
BGWM.gener.mean <- function(gener.dists, d)
{
# Parameter restrictions
s <- gener.dists[[1]]
p <- gener.dists[[2]]
v <- gener.dists[[3]]
probs <- data.frame( p = unlist(p), k = as.factor( rep( 1:d, unlist(s) ) ) )
vectors <- data.frame( v = matrix( unlist( lapply( v, t ) ), ncol=d, byrow=TRUE), k = as.factor( rep( 1:d, unlist(s) ) ) )
probs[,1] <- probs[,1] / rep( aggregate(probs[,1], list(probs[,2]) , sum)[,2] , unlist(s) )
vectors[,-(d+1)] <- vectors[,-(d+1)] * probs[,1]
mean <- as.matrix(aggregate( vectors[,-(d+1)], list(vectors[,(d+1)]), sum )[,-1])
mean
}
BGWM.multinom.mean <- function(multinom.dists, d, maxiter = 1e5)
{
# Parameter restrictions
dists <- multinom.dists[[1]]
pmultinom <- multinom.dists[[2]]
pmultinom <- pmultinom / apply(pmultinom, 1, sum)
mean <- rep( NA, d )
# unif
a <- dists[,1] == "unif"
if(TRUE %in% a)
mean[a] <- ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) ) / 2
# binom
a <- dists[,1] == "binom"
if(TRUE %in% a)
mean[a] <- as.numeric( dists[a,2] ) * as.numeric( dists[a,3] )
# hyper
a <- dists[,1] == "hyper"
if(TRUE %in% a)
mean[a] <- as.numeric( dists[a,2] ) * as.numeric( dists[a,4] ) / ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) )
# geom
a <- dists[,1] == "geom"
if(TRUE %in% a)
mean[a] <- ( 1 - as.numeric( dists[a,2] ) ) / as.numeric( dists[a,2] )
# nbinom
a <- dists[,1] == "nbinom"
if(TRUE %in% a)
mean[a] <- as.numeric( dists[a,2] ) * ( 1 - as.numeric( dists[a,3] ) ) / as.numeric( dists[a,3] )
# pois
a <- dists[,1] == "pois"
if(TRUE %in% a)
mean[a] <- as.numeric( dists[a,2] )
# norm
a <- dists[,1] == "norm"
n <- length( mean[a] )
if(n > 0)
mean[a] <- round( .C("param_estim_roundcut0_norm",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 1 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$mean.estim , round( log10(maxiter)/2 ) )
# lnorm
a <- dists[,1] == "lnorm"
n <- length( mean[a] )
if(n > 0)
mean[a] <- round( .C("param_estim_round_lnorm",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 1 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$mean.estim , round( log10(maxiter)/2 ) )
# gamma
a <- dists[,1] == "gamma"
n <- length( mean[a] )
if(n > 0)
mean[a] <- round( .C("param_estim_round_gamma",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 1 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$mean.estim , round( log10(maxiter)/2 ) )
mean <- diag( mean ) %*% pmultinom
mean
}
BGWM.indep.mean <- function(indep.dists, d, maxiter = 1e5)
{
# Parameter restrictions
dists <- indep.dists
mean <- rep( NA, (d*d) )
# unif
a <- dists[,1] == "unif"
if(TRUE %in% a)
mean[a] <- ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) ) / 2
# binom
a <- dists[,1] == "binom"
if(TRUE %in% a)
mean[a] <- as.numeric( dists[a,2] ) * as.numeric( dists[a,3] )
# hyper
a <- dists[,1] == "hyper"
if(TRUE %in% a)
mean[a] <- as.numeric( dists[a,2] ) * as.numeric( dists[a,4] ) / ( as.numeric( dists[a,2] ) + as.numeric( dists[a,3] ) )
# geom
a <- dists[,1] == "geom"
if(TRUE %in% a)
mean[a] <- ( 1 - as.numeric( dists[a,2] ) ) / as.numeric( dists[a,2] )
# nbinom
a <- dists[,1] == "nbinom"
if(TRUE %in% a)
mean[a] <- as.numeric( dists[a,2] ) * ( 1 - as.numeric( dists[a,3] ) ) / as.numeric( dists[a,3] )
# pois
a <- dists[,1] == "pois"
if(TRUE %in% a)
mean[a] <- as.numeric( dists[a,2] )
# norm
a <- dists[,1] == "norm"
n <- length( mean[a] )
if(n > 0)
mean[a] <- round( .C("param_estim_roundcut0_norm",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 1 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$mean.estim , round( log10(maxiter)/2 ) )
# lnorm
a <- dists[,1] == "lnorm"
n <- length( mean[a] )
if(n > 0)
mean[a] <- round( .C("param_estim_round_lnorm",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 1 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$mean.estim , round( log10(maxiter)/2 ) )
# gamma
a <- dists[,1] == "gamma"
n <- length( mean[a] )
if(n > 0)
mean[a] <- round( .C("param_estim_round_gamma",
as.integer( maxiter ),
as.integer( n ),
as.double( as.numeric( dists[a,2] ) ),
as.double( as.numeric( dists[a,3] ) ),
as.integer( 1 ),
mean.estim=double( n ),
var.estim=double( n ),
PACKAGE="Branching")$mean.estim , round( log10(maxiter)/2 ) )
mean <- matrix( mean, nrow=d, byrow=TRUE )
mean
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/Branching/R/BGWM.mean.R ---
# These functions calculate a mean estimation from observed sample of
# Bienayme - Galton - Watson multitype processes.
# Copyright (C) 2010 Camilo Jose Torres-Jimenez <[email protected]>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
BGWM.mean.estim <- function(sample, method=c("EE","MLE"), d, n, z0)
{
method <- match.arg(method)
method <- switch(method,
"EE" = 1,
"MLE" = 2)
m <- switch(method,
{#1
m <- BGWM.mean.EE(sample, d, n, z0)
m
},
{#2
m <- BGWM.mean.MLE(sample, d, n, z0)
m
})
colnames(m) <- paste( "type", 1:d, sep="" )
rownames(m) <- paste( "type", 1:d, sep="" )
list( method=switch( method, "Empirical Estimation of the means", "Maximum Likelihood Estimation of the means" ), m=m )
}
BGWM.mean.EE <- function(y, d, n, z0)
{
y <- as.matrix(y)
if(length(d) != 1)
stop("'d' must be a number")
if(length(n) != 1)
stop("'n' must be a number")
if(length(z0) != d)
stop("'z0' must be a d-dimensional vector")
if(TRUE %in% (z0 < 0))
stop("'z0' must have positive elements")
if(is.matrix(y) == FALSE)
stop("'y' must be a matrix")
if(ncol(y) != d || nrow(y) < (n*d))
stop("'y' must have d columns and at least (n*d) rows")
if(n != 1)
z <- apply( y[seq( (n-2)*d+1 , (n-1)*d, 1 ),], 2, sum )
else
z <- z0
z <- diag( (1 / z) )
y <- y[seq( (n-1)*d+1 , n*d, 1 ),]
out <- z %*% y
out
}
BGWM.mean.MLE <- function(y, d, n, z0)
{
y <- as.matrix(y)
if(length(d) != 1)
stop("'d' must be a number")
if(length(n) != 1)
stop("'n' must be a number")
if(length(z0) != d)
stop("'z0' must be a d-dimensional vector")
if(TRUE %in% (z0 < 0))
stop("'z0' must have positive elements")
if(is.matrix(y) == FALSE)
stop("'y' must be a matrix")
if(ncol(y) != d || nrow(y) < (n*d))
stop("'y' must have d columns and at least (n*d) rows")
if(n != 1)
{
z <- rbind( z0, y[seq( 1, (n-1)*d, 1 ),] )
z <- apply( z, 2, sum )
}
else
z <- z0
z <- diag( ( 1 / z ) )
y <- as.data.frame(y[seq( 1, (n*d), 1),])
y[,"type"] <- as.factor( rep( 1:d, n ) )
y <- aggregate( y[, 1:d], list( y[,"type"] ), sum )[,-1]
out <- as.matrix(z) %*% as.matrix(y)
out
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/Branching/R/BGWM.mean.estim.R ---
# These functions simulate Bienayme - Galton - Watson multitype processes.
# Copyright (C) 2010 Camilo Jose Torres-Jimenez <[email protected]>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
rBGWM <- function(dists, type=c("general","multinomial","independents"), d, n, z0=rep(1,d),
c.s=TRUE, tt.s=TRUE, rf.s=TRUE, file=NULL)
{
type <- match.arg(type)
type <- switch(type,
"general" = 1,
"multinomial" = 2,
"independents" = 3)
R <- switch(type,
{#1
R <- rBGWM.gener(dists, d, n, z0, file)
R
},
{#2
R <- rBGWM.multinom(dists, d, n, z0, file)
R
},
{#3
R <- rBGWM.indep(dists, d, n, z0, file)
R
})
cdata <- R$cdata
if(TRUE %in% (cdata<0))
warning("exceeded maximum capacity of data type. Process truncated")
cdata <- matrix(as.numeric(cdata),ncol=d,byrow=TRUE)
cdata <- data.frame(cdata)
dimnames(cdata) <- list(paste("i",
rep(1:n,rep(d,n)),
".",
rep(paste("type",1:d,sep=""),n),sep=""),
paste("type",1:d,sep=""))
cdata$i <- as.factor(rep(1:n,rep(d,n)))
ttdata <- aggregate(cdata[,1:d],list(cdata$i),sum)[,-1]
ttdata <- as.matrix(ttdata)
cdata <- as.matrix(cdata[,1:d])
if(rf.s==TRUE)
{
rfdata <- ttdata / apply( ttdata, 1, sum )
rfdata <- as.matrix(rfdata)
}
else
rfdata <- NULL
if(tt.s==FALSE)
ttdata <- NULL
if(c.s==FALSE)
cdata <- NULL
out <- list(i.dists=dists,
i.d=d,
i.n=n,
i.z0=z0,
o.c.s=cdata,
o.tt.s=ttdata,
o.rf.s=rfdata)
out
}
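# Usage sketch (hypothetical 'independents' specification: one distribution
# per parent-type/offspring-type pair, d*d rows in row-major order).
# dists <- data.frame(name = rep("pois", 4), lambda = c(0.6, 0.4, 0.3, 0.7))
# sim <- rBGWM(dists, type = "independents", d = 2, n = 25, z0 = c(5, 5))
# sim$o.tt.s   # generation totals by type
# sim$o.rf.s   # relative type frequencies per generation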
# rBGWM general
rBGWM.gener <- function(gener.dists, d, n, z0=rep(1,d), file=NULL)
{
sizes <- c( unlist( gener.dists[[1]] ) )
probs <- lapply( gener.dists[[2]], cumsum )
probs <- c( unlist( probs ) )
vectors <- lapply( gener.dists[[3]], unique )
vectors <- c( unlist( lapply( vectors, t ) ) )
if(length(d)!= 1)
stop("'d' must be a positive number")
if(length(n)!= 1)
stop("'n' must be a positive number")
if(length(z0)!= d)
stop("'z0' must be a d-dimensional vector")
if(TRUE %in% (z0 < 0))
stop("'z0' must have positive elements")
if(length(sizes) != d)
stop("'gener.dists$sizes' must be a d-dimensional vector")
if(length(probs) != sum(sizes))
stop("'gener.dists$probs' does not have the right structure (wrong number of elements)")
if(TRUE %in% (probs <= 0))
stop("'gener.dists$probs' elements must be all positive")
if(length(vectors) != sum(sizes*d))
stop("'gener.dists$vectors' does not have the right structure (wrong number of elements or duplicated rows by distribution)")
# more restrictions?
R <- .C(rBGWMgeneral,
as.integer(d),
as.integer(n),
as.integer(z0),
as.integer(sizes),
as.integer(vectors),
as.double(probs),
cdata=double(d*d*n),
as.character(file))
R
}
# rBGWM multinomial
rBGWM.multinom <- function(multinom.dists, d, n, z0=rep(1,d), file=NULL)
{
dists <- multinom.dists[[1]]
pmultinom <- multinom.dists[[2]]
p <- as.matrix(pmultinom)
p <- p / apply( p, 1, sum )
nrodists <- nrow(dists)
dists[is.na(dists)] <- 0
names.dists <- dists[,1]
param.dists <- as.matrix(dists[,-1])
aux <- d*d*n
if(nrow(p) != ncol(p) || nrow(p) != d || ncol(p) != d)
stop("'pmultinom' must be a squared matrix of order d")
if(length(d) != 1)
stop("'d' must be a number")
if(length(n) != 1)
stop("'n' must be a number")
if(length(z0) != d)
stop("'z0' must be a d-dimensional vector")
if(TRUE %in% (z0 < 0))
stop("'z0' must have positive elements")
if(nrodists != 1 && nrodists != d)
stop("'dists' must be 1 or d distributions")
if(FALSE %in% (tolower(names.dists) %in% c("binom","gamma","geom","hyper","lnorm","nbinom","norm","pois","unif")))
stop("There are in 'dists' some distributions not implemented yet (only binom, gamma, geom, hyper, lnorm, nbinom, norm, pois, unif are implemented)")
# more restrictions?
names.dists[names.dists == "unif"] <- 1
names.dists[names.dists == "binom"] <- 2
names.dists[names.dists == "hyper"] <- 3
names.dists[names.dists == "geom"] <- 4
names.dists[names.dists == "nbinom"] <- 5
names.dists[names.dists == "pois"] <- 6
names.dists[names.dists == "norm"] <- 7
names.dists[names.dists == "lnorm"] <- 8
names.dists[names.dists == "gamma"] <- 9
R <- .C(rBGWMmultinomial,
as.integer(d),
as.integer(n),
as.integer(z0),
as.integer(nrodists),
as.integer(names.dists),
as.integer(ncol(param.dists)),
as.double(t(param.dists)),
as.double(t(p)),
cdata=double(aux),
as.character(file))
R
}
# rBGWM independents
rBGWM.indep <- function(indep.dists, d, n, z0=rep(1,d), file=NULL)
{
dists <- indep.dists
nrodists <- nrow(dists)
names.dists <- dists[,1]
dists[is.na(dists)] <- 0
param.dists <- as.matrix(dists[,-1])
aux <- d*d*n
if(length(d)!= 1)
stop("'d' must be a number")
if(length(n)!= 1)
stop("'n' must be a number")
if(length(z0)!= d)
stop("'z0' must be a d-dimensional vector")
if(TRUE %in% (z0 < 0))
stop("'z0' must have positive elements")
if(nrodists != 1 && nrodists != (d*d))
stop("'dists' must be 1 or d*d distributions")
if(FALSE %in% (tolower(names.dists) %in% c("binom","gamma","geom","hyper","lnorm","nbinom","norm","pois","unif")))
stop("There are in 'dists' some distributions not implemented yet (only binom, gamma, geom, hyper, lnorm, nbinom, norm, pois, unif are implemented)")
# more restrictions?
names.dists[names.dists == "unif"] <- 1
names.dists[names.dists == "binom"] <- 2
names.dists[names.dists == "hyper"] <- 3
names.dists[names.dists == "geom"] <- 4
names.dists[names.dists == "nbinom"] <- 5
names.dists[names.dists == "pois"] <- 6
names.dists[names.dists == "norm"] <- 7
names.dists[names.dists == "lnorm"] <- 8
names.dists[names.dists == "gamma"] <- 9
R <- .C(rBGWMindependent,
as.integer(d),
as.integer(n),
as.integer(z0),
as.integer(nrodists),
as.integer(names.dists),
as.integer(ncol(param.dists)),
as.double(t(param.dists)),
cdata=double(aux),
as.character(file))
R
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/Branching/R/rBGWM.R ---
#' Atmospheric pressure (Patm)
#'
#' @param z Elevation above sea level (m)
#' @examples
#' \dontrun{
#' Patm <- Patm(z)
#' }
#' @export
#' @return Returns a data.frame object with the atmospheric pressure calculated.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
Patm <- function(z){
P <- 101.3*((293 - 0.0065*z)/293)^5.26
P <- as.data.frame(P)
colnames(P)<- "Patm (kPa)"
return(P)
}
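# Worked check (values rounded): at z = 500 m,
# 101.3 * ((293 - 0.0065 * 500) / 293)^5.26 is roughly 95.5 kPa.
# Patm(z = 500)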
#' Psychrometric constant
#' @description Psychrometric constant (kPa/°C) is calculated in this function.
#' @param Patm Atmospheric pressure (kPa)
#' @examples
#' \dontrun{
#' psy_df <- psy_const(Patm)
#' }
#' @export
#' @return A data.frame object with the psychrometric constant calculated.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
psy_const <- function(Patm){
psy_const <- 0.000665*Patm
psy_const<- as.data.frame(psy_const)
colnames(psy_const)<- "psy_const"
return(psy_const)
}
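# Worked check: with Patm = 95.5 kPa, the psychrometric constant is
# 0.000665 * 95.5, i.e. about 0.0635 kPa/degC.
# psy_const(Patm = 95.5)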
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/atmospheric_parameters.R ---
#' Conversion factors for radiation
#'
#' @description Function to convert radiation data. The conversion names can be understood as follows:
#'
#' \itemize{
#' \item conversion_1 = MJ m-2 day-1 to J cm-2 day-1;
#' \item conversion_2 = MJ m-2 day-1 to cal cm-2 day-1;
#' \item conversion_3 = MJ m-2 day-1 to W m-2;
#' \item conversion_4 = MJ m-2 day-1 to mm day-1;
#' \item conversion_5 = cal cm-2 day-1 to MJ m-2 day-1;
#' \item conversion_6 = cal cm-2 day-1 to J cm-2 day-1;
#' \item conversion_7 = cal cm-2 day-1 to W m-2;
#' \item conversion_8 = cal cm-2 day-1 to mm day-1;
#' \item conversion_9 = W m-2 to MJ m-2 day-1;
#' \item conversion_10 = W m-2 to J cm-2 day-1;
#' \item conversion_11 = W m-2 to cal cm-2 day-1;
#' \item conversion_12 = W m-2 to mm day-1;
#' \item conversion_13 = mm day-1 to MJ m-2 day-1;
#' \item conversion_14 = mm day-1 to J cm-2 day-1;
#' \item conversion_15 = mm day-1 to cal cm-2 day-1;
#' \item conversion_16 = mm day-1 to W m-2.
#' }
#' @param data_to_convert A data.frame with radiation values to convert.
#' @param conversion_name A character with the conversion_name summarize in the description of this function.
#' @examples
#' \dontrun{
#' radiation_conversion_df <- radiation_conversion(data_to_convert = df$rad,
#' conversion_name = "conversion_1")
#' }
#' @export
#' @return A data.frame object with the converted radiation.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
radiation_conversion <- function(data_to_convert, conversion_name){
data_to_convert<-as.data.frame(data_to_convert)
conversion_factor <- switch (conversion_name,
"conversion_1" = 100,
"conversion_2" = 23.9,
"conversion_3" = 11.6,
"conversion_4" = 0.408,
"conversion_5" = 0.041868,
"conversion_6" = 4.1868,
"conversion_7" = 0.485,
"conversion_8" = 0.0171,
"conversion_9" = 0.0864,
"conversion_10" = 8.64,
"conversion_11" = 2.06,
"conversion_12" = 0.035,
"conversion_13" = 2.45,
"conversion_14" = 245,
"conversion_15" = 58.5,
"conversion_16" = 28.4
)
rad_converted <-data_to_convert*conversion_factor
colnames(rad_converted) <- paste0("rc_", conversion_name)
return(rad_converted)
}
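# Worked check (hypothetical input): conversion_3 multiplies by 11.6, so
# 20 MJ m-2 day-1 corresponds to about 232 W m-2.
# radiation_conversion(data_to_convert = data.frame(rad = c(10, 20)),
#                      conversion_name = "conversion_3")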
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/conversion_factor_for_radiations.R ---
#' Download of hourly data from automatic weather stations (AWS) of INMET-Brazil in daily aggregates
#' @description This function will download the hourly AWS data of INMET and it will aggregate the data in a daily time scale, based on the period of time selected (start_date and end_date).The function only works for downloading data from the same year.
#' @param station The station code (ID - WMO code) for download. To see the station ID, please see the function *see_stations_info*.
#' @param start_date Date that start the investigation, should be in the following format (1958-01-01 /Year-Month-Day)
#' @param end_date Date that end the investigation, should be in the following format (2017-12-31 /Year-Month-Day)
#' @import stringr
#' @import dplyr
#' @import utils
#' @importFrom stats aggregate
#' @importFrom stats na.omit
#' @importFrom utils download.file
#' @importFrom utils read.csv
#' @importFrom utils unzip
#' @importFrom dplyr full_join
#' @importFrom dplyr filter
#' @importFrom dplyr select
#' @importFrom dplyr summarize
#' @importFrom dplyr mutate
#' @importFrom dplyr %>%
#' @examples
#' \dontrun{
#' df<-download_AWS_INMET_daily(station = "A001", start_date = "2001-01-01", end_date = "2001-12-31")
#' }
#' @export
#' @return Returns a data.frame with the AWS data requested
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
download_AWS_INMET_daily <- function(station, start_date, end_date){
year<-substr(start_date, 1, 4)
tempdir<- tempfile()
tf<-paste0(gsub('\\', '/', tempdir, fixed=TRUE), ".zip")
outdir<-gsub('\\', '/', tempdir, fixed=TRUE)
utils::download.file(url = paste0("https://portal.inmet.gov.br/uploads/dadoshistoricos/", year, ".zip") , destfile = tf, method="auto", cacheOK = F)
a<-unzip(zipfile = tf, exdir = outdir, junkpaths = T)
pasta<-paste0(outdir)
if(length(list.files(pasta, pattern = station, full.names = T, all.files = T)) == 0){message("There is no data for this period for this station. Choose another period!")} else{
list.files(pasta, pattern = station)
b<-read.csv(file = list.files(pasta, pattern = station, full.names = T) ,header = T, sep = ';', skip = 8, na = '-9999', dec = ",")
a<-as.data.frame(a)
X<-`pressao-max(mB)`<-`pressao-min(mB)`<- Data <- Hora <- NULL
`to-min(C)`<- `to-max(C)`<- `tar_min(C)`<- `tar_max(C)`<- `tbs(C)`<- NULL
`ppt-h (mm)` <- `pressao (mB)`<- `UR-max`<-`UR-min` <- UR <- NULL
`U10 (m/s)` <- `U-raj (m/s)` <- `U-dir(degrees)` <- `RG(Kj/m2)` <- `RG(Mj/m2)`<- NULL
df<-data.frame(matrix(ncol = 18, nrow = 0))
colnames(df)<-c("Data", "Hora", "ppt-h (mm)", "pressao (mB)", "RG(Mj/m2)", "tbs-(C)", "tpo(C)", "tar_max(C)", "tar_min(C)",
"to-max(C)", "to-min(C)", "UR-max","UR-min","UR",
"U-dir(degrees)","U-raj (m/s)", "U10 (m/s)", "OMM" )
OMM<-read.csv(file = list.files(pasta, pattern = station, full.names = T), header = F, sep = ';')
OMM<-(OMM[4,2])
dfx<-read.csv(file = list.files(pasta, pattern = station, full.names = T), header = T, sep = ';', skip = 8, na = '-9999', dec = ",")
names(dfx)
names(dfx)<-c("Data", "Hora", "ppt-h (mm)", "pressao (mB)", "pressao-max(mB)","pressao-min(mB)", "RG(Kj/m2)", "tbs(C)", "tpo(C)",
"tar_max(C)", "tar_min(C)","to-max(C)", "to-min(C)", "UR-max","UR-min","UR","U-dir(degrees)","U-raj (m/s)",
"U10 (m/s)", "X")
latitude<-read.csv(file = list.files(pasta, pattern = station, full.names = T), header = F, sep = ';', dec = ",")
latitude<-latitude[5,2]
lat<-substr(latitude,1, 3)
itude<-substr(latitude,5, 10)
latitude<-as.numeric(paste0(lat, ".", itude))
longitude<-read.csv(file = list.files(pasta, pattern = station, full.names = T), header = F, sep = ';', dec = ",")
longitude<-longitude[6,2]
long<-substr(longitude,1, 3)
itude<-substr(longitude,5, 10)
longitude<-as.numeric(paste0(long, ".", itude))
altitude <- read.csv(file = list.files(pasta, pattern = station, full.names = T), header = F, sep = ';', dec = ",")
altitude <- altitude[7,2]
altitude <-gsub(",", replacement = ".", altitude)
altitude <- as.numeric(altitude)
dfx <- dplyr::select(dfx, -X, -`pressao-max(mB)`, -`pressao-min(mB)`)
dfx <- as_tibble(dfx)
dfx <- mutate(dfx, Data = as.Date(Data), Hora = as.numeric(as.factor(Hora)))
dfx$date_hora <- paste0(dfx$Data, dfx$Hora)
dfx$date_hora<-as.POSIXct(strptime(dfx$date_hora, format = "%Y-%m-%d %H"))
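# Convert the UTC timestamps to local standard time using 15-degree
# longitude bands (UTC-2 to UTC-5 across Brazil).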
for (i in 1:nrow(dfx)){
if (longitude > -37.5) (dfx$date_hora[i] <- dfx$date_hora[i]- as.difftime(2, units = "hours")) else
if (longitude > -52.5) (dfx$date_hora[i] <- dfx$date_hora[i]- as.difftime(3, units = "hours")) else
if (longitude > -67.5) (dfx$date_hora[i] <- dfx$date_hora[i]- as.difftime(4, units = "hours")) else
if (longitude > -82.5) (dfx$date_hora[i] <- dfx$date_hora[i]- as.difftime(5, units = "hours"))
}
dfx$Data<-as.POSIXct(strptime(dfx$date_hora, format = "%Y-%m-%d"))
dfx$Hora<- format(as.POSIXct(dfx$date_hora, format ="%Y-%m-%d %H"), "%H")
if(nrow(dfx) < 4380){} else {
dfx_temp <- na.omit(dplyr::select(dfx, Hora, Data, `to-min(C)`, `to-max(C)`, `tar_min(C)`, `tar_max(C)`, `tbs(C)`))
n_dfx_temp <- group_by(dfx_temp, Data) %>% summarise(n = n()) %>% filter(n == 24)
if(nrow(n_dfx_temp) == 0){} else {
dfx_temp <- left_join(dfx_temp, n_dfx_temp, by = "Data")
dfx_temp <- dplyr::filter(dfx_temp, n == 24)
dfx_temp <- dplyr::mutate(dfx_temp, tar_mean = (`tar_min(C)`+ `tar_max(C)`)/2)
dfx_temp <- dplyr::mutate(dfx_temp, to_mean = (`to-min(C)`+ `to-max(C)`)/2)
dfx_temp_mean_day <- aggregate(tar_mean ~ Data, dfx_temp, mean)
dfx_temp_min_day <- aggregate(`tar_min(C)` ~ Data, dfx_temp, min)
dfx_temp_max_day <- aggregate(`tar_max(C)` ~ Data, dfx_temp, max)
dfx_to_min_day <- aggregate(`to-min(C)` ~ Data, dfx_temp, min)
dfx_to_max_day <- aggregate(`to-max(C)`~ Data, dfx_temp, max)
dfx_to_mean_day <- aggregate(to_mean ~ Data, dfx_temp, mean)
dfx_tbs_day <- aggregate(`tbs(C)`~ Data, dfx_temp, mean)
dfx_temps_day <-cbind(dfx_temp_mean_day, dfx_temp_min_day, dfx_temp_max_day, dfx_to_mean_day, dfx_to_min_day, dfx_to_max_day, dfx_tbs_day)
dfx_temps_day <-dplyr::select(dfx_temps_day, -3, -5, -7, -9, -11, -13)
dfx_prec <- na.omit(dplyr::select(dfx, Hora, Data, `ppt-h (mm)`))
dfx_prec<- group_by(dfx_prec, Data)
if(nrow(dfx_prec) == 0){} else {
dfx_prec_day <- aggregate(`ppt-h (mm)` ~ Data, dfx_prec, sum)
dfx_press<-na.omit(dplyr::select(dfx, Hora, Data, `pressao (mB)`))
n_dfx_press<-group_by(dfx_press, Data) %>% summarise(n = n()) %>% filter(n == 24)
if(nrow(n_dfx_press) == 0){} else {
dfx_press<- left_join(dfx_press, n_dfx_press, by = "Data")
dfx_press<- dplyr::filter(dfx_press, n == 24)
dfx_press_mean_day <- aggregate(`pressao (mB)` ~ Data, dfx_press, mean)
dfx_ur<-na.omit(dplyr::select(dfx, Hora, Data,`UR-max`,`UR-min`, UR))
n_dfx_ur<- group_by(dfx_ur, Data) %>% summarise(n = n()) %>% filter(n == 24)
if(nrow(n_dfx_ur) == 0){} else {
dfx_ur<-left_join(dfx_ur, n_dfx_ur, by = "Data")
dfx_ur<-dplyr::filter(dfx_ur, n == 24)
dfx_ur_mean_day <- aggregate(UR ~ Data, dfx_ur, mean)
dfx_ur_min_day <- aggregate(`UR-min` ~ Data, dfx_ur, min)
dfx_ur_max_day <- aggregate(`UR-max` ~ Data, dfx_ur, max)
dfx_urs_day <- cbind(dfx_ur_mean_day, dfx_ur_max_day, dfx_ur_min_day)
dfx_urs_day <- dplyr::select(dfx_urs_day, -3, -5)
dfx_vv <- na.omit(dplyr::select(dfx, Hora, Data, `U10 (m/s)`,`U-raj (m/s)`, `U-dir(degrees)`))
n_dfx_vv<-group_by(dfx_vv, Data) %>% summarise(n = n()) %>% filter(n == 24)
if(nrow(n_dfx_vv) == 0){} else {
dfx_vv <- left_join(dfx_vv, n_dfx_vv, by = "Data")
dfx_vv <- dplyr::filter(dfx_vv, n == 24)
dfx_vv <- mutate(dfx_vv, u2 = (4.868/(log(67.75*10 - 5.42)))*`U10 (m/s)`)
dfx_vv_mean_day <- aggregate(`U10 (m/s)` ~ Data, dfx_vv, mean)
dfx_vv_meanu2_day <- aggregate(u2 ~ Data, dfx_vv, mean)
dfx_vv_raj_day <- aggregate(`U-raj (m/s)` ~ Data, dfx_vv, max)
dfx_vv_dir_day <- aggregate(`U-dir(degrees)` ~ Data, dfx_vv, mean)
dfx_vvs_day <- cbind(dfx_vv_mean_day, dfx_vv_meanu2_day, dfx_vv_raj_day, dfx_vv_dir_day)
dfx_vvs_day <- dplyr::select(dfx_vvs_day, -3, -5, -7)
dfx_RG <- dplyr::select(dfx, Hora, Data, `RG(Kj/m2)`)
dfx_RG <- dplyr::mutate(dfx_RG, `RG(Mj/m2)` = `RG(Kj/m2)`/1000)
dfx_RG <- na.omit(dplyr::select(dfx_RG, -`RG(Kj/m2)`))
dfx_RG <- dplyr::filter(dfx_RG, `RG(Mj/m2)`> 0)
n_RG <- group_by(dfx_RG, Data) %>% summarise(n = n()) %>% filter(n >= 12)
if(nrow(n_RG) == 0){} else {
dfx_RG <- left_join(dfx_RG, n_RG, by = "Data")
dfx_RG <- dplyr::filter(dfx_RG, n >= 12)
dfx_RG_sum_day <- aggregate(`RG(Mj/m2)`~ Data, dfx_RG, sum)
julian_day <- as.data.frame(as.numeric(format(dfx_RG_sum_day$Data, "%j")))
names(julian_day)<- "julian_day"
dfx_RG_sum_day <- cbind(dfx_RG_sum_day, julian_day)
lat_rad <- (pi/180)*(latitude)
dr<-1+0.033*cos((2*pi/365)*dfx_RG_sum_day$julian_day)
summary(dr)
solar_declination<-0.409*sin(((2*pi/365)*dfx_RG_sum_day$julian_day)-1.39)
sunset_hour_angle<-acos(-tan(lat_rad)*tan(solar_declination))
ra <- ((24*(60))/pi)*(0.0820)*dr*(sunset_hour_angle*sin(lat_rad)*sin(solar_declination)+cos(lat_rad)*cos(solar_declination)*sin(sunset_hour_angle))
ra <- as.data.frame(ra)
dfx_RG_sum_day<-cbind(dfx_RG_sum_day, ra)
dfx_day <- dplyr::full_join(dfx_temps_day, dfx_prec_day, by = "Data")
dfx_day <- full_join(dfx_day, dfx_press_mean_day, by = "Data")
dfx_day <- full_join(dfx_day, dfx_urs_day, by = "Data")
dfx_day <- full_join(dfx_day, dfx_vvs_day, by = "Data")
dfx_day <- full_join(dfx_day, dfx_RG_sum_day, by = "Data")
dfx_day<-mutate(dfx_day, OMM = OMM)
df<-rbind(df, dfx_day)
}
}
}
}
}
}
}
df<-filter(df, Data >= start_date & Data <= end_date)
df<- df %>% mutate(longitude = longitude, latitude = latitude, altitude = altitude)
colnames(df)<-c("Date","Tair_mean (c)", "Tair_min (c)", "Tair_max (c)", "Dew_tmean (c)", "Dew_tmin (c)",
"Dew_tmax (c)", "Dry_bulb_t (c)", "Rainfall (mm)", "Patm (mB)","Rh_mean (porc)", "Rh_max (porc)",
"Rh_min (porc)", "Ws_10 (m s-1)", "Ws_2 (m s-1)", "Ws_gust (m s-1)", "Wd (degrees)", "Sr (Mj m-2 day-1)",
"DOY", "Ra (Mj m-2 day-1)", "Station_code", "Longitude (degrees)", "Latitude (degrees)", "Altitude (m)" )
return(df)
}
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/daily_download_AWS_INMET.R ---
#' Eto calculation based on FAO-56 Penman-Monteith methodology, with data from automatic weather stations (AWS) downloaded and processed in function *daily_download_AWS_INMET*
#' @description This function will calculate the reference evapotranspiration (ETo) based on FAO-56 (Allen et al., 1998) with the automatic weather stations (AWS) data, downloaded and processed in function *daily_download_AWS_INMET*.
#' @param lat A numeric value of the Latitude of the AWS (decimal degrees).
#' @param tmin A dataframe with Minimum daily air temperature (°C).
#' @param tmax A dataframe with Maximum daily air temperature (°C).
#' @param tmean A dataframe with Mean daily air temperature (°C).
#' @param Rs A dataframe with mean daily solar radiation (MJ m-2 day-1).
#' @param u2 A dataframe with wind speed at 2 m height (m s-1).
#' @param Patm A dataframe with atmospheric Pressure (mB).
#' @param RH_max A dataframe with Maximum relative humidity (percentage).
#' @param RH_min A dataframe with Minimum relative humidity (percentage).
#' @param z A numeric value of the altitude of AWS (m).
#' @param date A data.frame with the date information (YYYY-MM-DD).
#' @import stringr
#' @import dplyr
#' @import utils
#' @importFrom stats aggregate
#' @importFrom stats na.omit
#' @importFrom utils download.file
#' @importFrom utils read.csv
#' @importFrom utils unzip
#' @importFrom dplyr full_join
#' @importFrom dplyr filter
#' @importFrom dplyr select
#' @importFrom dplyr summarize
#' @importFrom dplyr mutate
#' @importFrom dplyr %>%
#' @examples
#' \dontrun{
#' eto<-daily_eto_FAO56(lat, tmin, tmax, tmean, Rs, u2, Patm, RH_max, RH_min, z, date)
#' }
#' @export
#' @return Returns a data.frame with the AWS data requested
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
daily_eto_FAO56 <- function(lat, tmin, tmax, tmean, Rs, u2, Patm, RH_max, RH_min, z, date){
delta<-(4098*(0.6108*exp(17.27*tmean/(tmean+237.30))))/(tmean +237.30)^2
#Step 5 - conversion of P from mB (hPa) to kPa
Patm<- Patm/10
#Step 6 - Psychrometric constant (KPa °C-1)
psy_constant<- 0.000665*Patm
#Step 7 - Delta Term (DT) (auxiliary calculation for radiation term)
DT = (delta/(delta + psy_constant*(1+0.34*u2)))
#Step 8 - Psi term (PT) (auxliary calculation for Wind Term)
PT<- (psy_constant)/(delta + psy_constant*(1 + 0.34*u2))
#Step 9 - temperature term (TT) (auxiliary calculation for wind Term)
TT <- (900/(tmean + 273))*u2
#Step 10 - Mean saturation vapor pressure derived from air temperature
e_t<- 0.6108*exp(17.27*tmean/(tmean + 237.3)) #e_t (kPa)
e_tmax<- 0.6108*exp(17.27*tmax/(tmax + 237.3)) #e_tmax (kPa)
e_tmin<- 0.6108*exp(17.27*tmin/(tmin + 237.3)) #e_tmin (kPa)
es<- (e_tmax + e_tmin)/2
# Step 11 actual vapor pressure - ea (kPa)
ea<- (e_tmin * (RH_max/100) + e_tmax*(RH_min/100))/2
# Step 12 The inverse relative distance Earth-Sun (dr) and solar declination (solar_decli)
j<- as.numeric(format(date, "%j"))# julian day
dr <- 1 + 0.033*cos(2*pi*j/365)
solar_decli <- 0.409*sin((2*pi*j/365)- 1.39)
#Step 13 - Conversion of latitude (lat) in degrees (decimal degrees) to radian (lat_rad)
lat_rad<- (pi/180)*lat
#Step 14 - sunset hour angle (ws) rad
ws<- acos(-tan(lat_rad)*tan(solar_decli))
#Step 15 - Extraterrestrial radiation - ra (MJ m-2 day-1)
Gsc <- 0.0820 #(MJ m-2 min)
ra <- (24*(60)/pi)*Gsc*dr*((ws*sin(lat_rad)*sin(solar_decli)) + (cos(lat_rad)*cos(solar_decli)*sin(ws)))
# Step 16 - Clear sky solar radiation (rso)
rso<- (0.75 + (2*10^-5)*z)*ra
#Step 17 - Net solar or net shortwave radiation (Rns)
#Rs is the incoming solar radiation ( Mj m-2 day-1)
Rns<- (1- 0.23)*Rs
#Step 18 - Net outgoing long wave radiation (Rnl) (MJ m-2 day-1)
sigma<- 4.903*10^-9 # MJ K-4 m-2 day -1
Rnl <- sigma*((((tmax +273.16)^4) + ((tmin + 273.16)^4))/2)*(0.34 - 0.14*sqrt(ea))*(1.35*(Rs/rso) - 0.35)
#Step 19 - Net Radiation (Rn)
Rn <- Rns - Rnl
#To express Rn as equivalent evaporation (mm)
Rng <- 0.408 *Rn
# Final Step - FS1. Radiation term (ETrad)
ETrad<- DT*Rng
# Final Step FS2 - Wind term (ETwind)
ETwind <- PT*TT*(es - ea)
# Final Reference evapotranspiration value
ETo <- ETwind + ETrad
return(ETo)
}
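# Illustrative call with made-up single-day values (an assumption for
# demonstration only; Patm is expected in mB, as returned by
# download_AWS_INMET_daily):
# daily_eto_FAO56(lat = -20.75, tmin = 14.8, tmax = 26.6, tmean = 20.7,
#                 Rs = 22.1, u2 = 2.1, Patm = 1001, RH_max = 84, RH_min = 63,
#                 z = 100, date = as.Date("2020-07-06"))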
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/daily_eto_FAO56.R ---
#' Hargreaves - Samani ETo
#' @param tmin A dataframe with Minimum daily air temperature (°C)
#' @param tmean A dataframe with Mean daily air temperature (°C)
#' @param tmax A dataframe with Maximum daily air temperature (°C)
#' @param ra A dataframe of extraterrestrial radiation (MJ m-2 day-1)
#' @examples
#' \dontrun{
#' eto_hs <-eto_hs(tmin, tmean, tmax, ra)
#' }
#' @export
#' @return Returns a data.frame object with the ETo HS data
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
eto_hs <- function(tmin, tmean, tmax, ra){
HS<- as.data.frame(0.0023*(tmean + 17.8)*((tmax - tmin)^0.5)*(0.408*ra))
colnames(HS)[1] <- "Eto_HS"
return(HS)
}
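# Worked check (hypothetical values): with tmin = 15, tmean = 21, tmax = 27
# and ra = 30 MJ m-2 day-1, ETo is 0.0023 * 38.8 * sqrt(12) * 0.408 * 30,
# i.e. roughly 3.8 mm day-1.
# eto_hs(tmin = 15, tmean = 21, tmax = 27, ra = 30)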
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/eto_hs.R ---
#' Extraterrestrial radiation for daily periods (ra)
#' @description ra is expressed in MJ m-2 day-1
#' @param latitude A dataframe with latitude in decimal degrees that you want to calculate the ra.
#' @param date A dataframe with the dates that you want to calculate the ra.
#' @examples
#' \dontrun{
#' ra <- ra_calculation(latitude, date)
#' }
#' @export
#' @return A data.frame with the extraterrestrial radiation for daily periods
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
ra_calculation <- function(latitude, date){
julian_day <- as.data.frame(as.numeric(format(date, "%j")))
lat_rad <- (pi/180)*(latitude)
dr<-1+0.033*cos((2*pi/365)*julian_day)
solar_declination<-0.409*sin(((2*pi/365)*julian_day)-1.39)
sunset_hour_angle<-acos(-tan(lat_rad)*tan(solar_declination))
ra <- ((24*(60))/pi)*(0.0820)*dr*(sunset_hour_angle*sin(lat_rad)*sin(solar_declination)+cos(lat_rad)*cos(solar_declination)*sin(sunset_hour_angle))
ra <- as.data.frame(ra)
colnames(ra)<- "ra"
return(ra)
}
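# Illustrative call (hypothetical site and date):
# ra_calculation(latitude = -20.75, date = as.Date("2020-09-03"))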
#' Solar radiation based in Angstrom formula (sr_ang)
#' @description If global radiation is not measured at the station, it can be estimated with this function.
#' @param latitude A dataframe with latitude in decimal degrees that you want to calculate the ra.
#' @param date A dataframe with the dates that you want to calculate the ra.
#' @param n The actual duration of sunshine, as recorded with a Campbell-Stokes sunshine recorder.
#' @param as Regression constant of the Angstrom formula, giving the fraction of extraterrestrial radiation reaching the earth on overcast days (n = 0). The value as = 0.25 is recommended by Allen et al. (1998).
#' @param bs Angstrom coefficient; as + bs is the fraction of extraterrestrial radiation reaching the earth on clear days (n = N). The value bs = 0.50 is recommended by Allen et al. (1998).
#' @examples
#' \dontrun{
#' sr_ang <- sr_ang_calculation(latitude, date, n, as, bs)
#' }
#' @export
#' @return A data.frame object with solar radiation data
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
sr_ang_calculation <- function(latitude, date, n, as, bs){
julian_day <- as.data.frame(as.numeric(format(date, "%j")))
lat_rad <- (pi/180)*(latitude)
dr<-1+0.033*cos((2*pi/365)*julian_day)
solar_declination<-0.409*sin(((2*pi/365)*julian_day)-1.39)
sunset_hour_angle<-acos(-tan(lat_rad)*tan(solar_declination))
ra <- ((24*(60))/pi)*(0.0820)*dr*(sunset_hour_angle*sin(lat_rad)*sin(solar_declination)+cos(lat_rad)*cos(solar_declination)*sin(sunset_hour_angle))
ra <- as.data.frame(ra)
N <- (24/pi)*sunset_hour_angle
sr_ang <- (as + bs*(n/N))*ra
sr_ang <- as.data.frame(sr_ang)
colnames(sr_ang)<- "sr_ang"
return(sr_ang)
}
#' Solar radiation data derived from air temperature differences
#' @description If global radiation is not measured at the station, it can be estimated with this function.
#' @param latitude A dataframe with latitude in decimal degrees that you want to calculate the ra.
#' @param date A dataframe with the dates that you want to calculate the ra.
#' @param location_krs Adjustment coefficient based on location. Please choose between "coastal" and "interior". If coastal, krs will be 0.19; if interior, krs will be 0.16.
#' @param tmin A dataframe with Minimum daily air temperature (°C)
#' @param tmax A dataframe with Maximum daily air temperature (°C)
#' @examples
#' \dontrun{
#' sr_tair <- sr_tair_calculation(latitude, date, tmax, tmin, location_krs)
#' }
#' @export
#' @return A data.frame object with solar radiation data
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
sr_tair_calculation <- function(latitude, date, tmax, tmin, location_krs){
if(location_krs == "interior"){krs <- 0.16} else {if(location_krs == "coastal"){krs <- 0.19} else {stop("location_krs must be either 'interior' or 'coastal'")}}
julian_day <- as.data.frame(as.numeric(format(date, "%j")))
lat_rad <- (pi/180)*(latitude)
dr<-1+0.033*cos((2*pi/365)*julian_day)
solar_declination<-0.409*sin(((2*pi/365)*julian_day)-1.39)
sunset_hour_angle<-acos(-tan(lat_rad)*tan(solar_declination))
ra <- ((24*(60))/pi)*(0.0820)*dr*(sunset_hour_angle*sin(lat_rad)*sin(solar_declination)+cos(lat_rad)*cos(solar_declination)*sin(sunset_hour_angle))
ra <- as.data.frame(ra)
sr_tair <- krs*sqrt(tmax - tmin)*ra
sr_tair <- as.data.frame(sr_tair)
colnames(sr_tair)<- "sr_tair"
return(sr_tair)
}
#' Clear-sky solar radiation with calibrated values available
#' @description Clear-sky solar radiation is calculated in this function for near sea level or when calibrated values for as and bs are available.
#' @param as Regression constant of the Angstrom formula, giving the fraction of extraterrestrial radiation reaching the earth on overcast days. The value as = 0.25 is recommended by Allen et al. (1998).
#' @param bs Angstrom coefficient; as + bs is the fraction of extraterrestrial radiation reaching the earth on clear days. The value bs = 0.50 is recommended by Allen et al. (1998).
#' @param ra Extraterrestrial radiation for daily periods (ra).
#' @examples
#' \dontrun{
#' rso_df <- rso_calculation_1(as, bs, ra)
#' }
#' @export
#' @return A data.frame object with the clear-sky radiation data
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
rso_calculation_1 <- function(as, bs, ra){
rso1 <- (as + bs)*ra
rso1 <- as.data.frame(rso1)
colnames(rso1)<- "rso1"
return(rso1)
}
#' Solar radiation data from a nearby weather station
#' @description The solar radiation data is calculated based in a nearby weather station.
#' @param rs_reg A dataframe with the solar radiation at the regional location (MJ m-2 day-1).
#' @param ra_reg A dataframe with the extraterrestrial radiation at the regional location (MJ m-2 day-1).
#' @param ra A dataframe with the extraterrestrial radiation for daily periods (ra).
#' @examples
#' \dontrun{
#' rs_nearby_df <- rs_nearby_calculation(rs_reg, ra_reg, ra)
#' }
#' @export
#' @return A data.frame object with the Solar radiation data based on a nearby weather station
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
rs_nearby_calculation <- function(rs_reg, ra_reg, ra){
rs_nearby <- (rs_reg/ra_reg)*ra
rs_nearby <- as.data.frame(rs_nearby)
colnames(rs_nearby)<- "rs_nearby"
return(rs_nearby)
}
#' Clear-sky solar radiation when calibrated values are not available
#' @description Clear-sky solar radiation is calculated in this function for near sea level locations or when calibrated values for as and bs are not available.
#' @param z Station elevation above sea level (m)
#' @param ra Extraterrestrial radiation for daily periods (ra).
#' @examples
#' \dontrun{
#' rso_df <- rso_calculation_2(z, ra)
#' }
#' @export
#' @return A data.frame object with the clear-sky solar radiation
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
rso_calculation_2 <- function(z, ra){
rso2 <- (0.75 + 0.00002*z)*ra
rso2 <- as.data.frame(rso2)
colnames(rso2)<- "rso2"
return(rso2)
}
#' Net solar or net shortwave radiation (rns)
#' @description The rns results from the balance between incoming and reflected solar radiation (MJ m-2 day-1).
#' @param albedo Albedo or canopy reflectance coefficient. The 0.23 is the value used for hypothetical grass reference crop (dimensionless).
#' @param rs The incoming solar radiation (MJ m-2 day-1).
#' @examples
#' \dontrun{
#' ra <- rns_calculation(albedo, rs)
#' }
#' @export
#' @return A data.frame object with the net solar or net shortwave radiation data.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
rns_calculation <- function(albedo, rs){
rns <- (1 - albedo)*rs
rns <- as.data.frame(rns)
colnames(rns) <- "rns"
return(rns)
}
#' Net longwave radiation (rnl)
#' @description Net outgoing longwave radiation is calculated with this function.
#' @param tmin A dataframe with Minimum daily air temperature (°C)
#' @param tmax A dataframe with Maximum daily air temperature (°C)
#' @param ea A dataframe with the actual vapour pressure (KPa).
#' @param rs A dataframe with the incoming solar radiation (MJ m-2 day-1).
#' @param rso A dataframe with the clear-sky radiation (MJ m-2 day-1)
#' @examples
#' \dontrun{
#' rnl_df <- rnl_calculation(tmin, tmax, ea, rs, rso)
#' }
#' @export
#' @return A data.frame object with the net longwave radiation.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
rnl_calculation <- function(tmin, tmax, ea, rs, rso){
sb_constant <- 0.000000004903
rs_rso<-rs/rso
rs_rso <- pmin(rs_rso, 1) # cap the relative shortwave ratio at 1, elementwise
rnl <- sb_constant*((((tmax+273.15)^4) + ((tmin + 273.15)^4))/2)*(0.34 - (0.14*sqrt(ea)))*((1.35*(rs_rso))-0.35)
rnl<- as.data.frame(rnl)
colnames(rnl) <- "rnl"
return(rnl)
}
#' Net radiation (rn)
#' @description The net radiation (MJ m-2 day-1) is the difference between the incoming net shortwave radiation (rns) and the outgoing net longwave radiation (rnl).
#' @param rns The incoming net shortwave radiation (MJ m-2 day-1).
#' @param rnl The outgoing net longwave radiation (MJ m-2 day-1).
#' @examples
#' \dontrun{
#' rn <- rn_calculation(rns, rnl)
#' }
#' @export
#' @return A data.frame object with the net radiation data.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
rn_calculation <- function(rns, rnl){
rn <- (rns - rnl)
rn <- as.data.frame(rn)
colnames(rn) <- "rn"
return(rn)
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/radiation_parameters.R ---
#' Location of the automatic weather stations of INMET
#' @description Function to see the location of the automatic weather stations of INMET.
#' @importFrom readxl read_xlsx
#' @examples
#' \dontrun{
#' see_stations_info()
#' }
#' @return A data.frame with information on the OMM code, latitude, longitude and altitude of all AWS stations available in INMET.
#' @export
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
see_stations_info <- function(){
a <- readxl::read_xlsx(system.file("extdata", paste0("Localization_AWS", ".xlsx"), package = "BrazilMet"))
return(a)
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/see_stations_info.R ---
#' Mean saturation vapour pressure (es)
#' @param tmin A dataframe with Minimum daily air temperature (°C).
#' @param tmax A dataframe with Maximum daily air temperature (°C).
#' @examples
#' \dontrun{
#' es <-es_calculation(tmin, tmax)
#' }
#' @export
#' @return Returns a data.frame object with the es data.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha.
es_calculation <- function(tmin, tmax){
# - Mean saturation vapor pressure derived from air temperature
e_tmax<- 0.6108*exp(17.27*tmax/(tmax + 237.3)) #e_tmax (kPa)
e_tmin<- 0.6108*exp(17.27*tmin/(tmin + 237.3)) #e_tmin (kPa)
es<- (e_tmax + e_tmin)/2
es<-as.data.frame(es)
return(es)
}
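# Worked check: for tmin = 15 and tmax = 24.5, e(tmin) is about 1.70 kPa and
# e(tmax) about 3.08 kPa, so es is about 2.39 kPa (cf. FAO-56, Example 3).
# es_calculation(tmin = 15, tmax = 24.5)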
#' Actual vapour pressure (ea) derived from dewpoint temperature
#' @param tdew A dataframe with dewpoint temperature (°C).
#' @examples
#' \dontrun{
#' ea <- ea_dew_calculation(tdew)
#' }
#' @export
#' @return Returns a data.frame object with the ea from dewpoint data.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha.
ea_dew_calculation <- function(tdew){
ea_dew <- 0.6108*exp(17.27*tdew/(tdew + 237.3))
return(ea_dew)
}
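# Worked check: at tdew = 17 degC, ea is 0.6108 * exp(17.27*17/254.3),
# i.e. about 1.94 kPa.
# ea_dew_calculation(tdew = 17)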
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/variables_air_humidity.R ---
#' Actual vapour pressure (ea) derived from relative humidity data
#' @param tmin A dataframe with minimum daily air temperature (°C)
#' @param tmax A dataframe with maximum daily air temperature (°C)
#' @param rh_min A dataframe with minimum daily relative air humidity (percentage).
#' @param rh_mean A dataframe with mean daily relative air humidity (percentage).
#' @param rh_max A dataframe with maximum daily relative air humidity (percentage).
#' @examples
#' \dontrun{
#' ea <- ea_rh_calculation(tmin, tmax, rh_min, rh_mean, rh_max)
#' }
#' @export
#' @return Returns a data.frame object with the ea from relative humidity data.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
ea_rh_calculation <- function(tmin, tmax, rh_min, rh_mean, rh_max){
if(is.null(tmax) | is.null(rh_min)){
ea_rh <- as.data.frame((0.6108*exp(17.27*tmin/(tmin + 237.3)))*(rh_max/100))
} else if(is.null(rh_max)){
e_tmax<- 0.6108*exp(17.27*tmax/(tmax + 237.3)) #e_tmax (kPa)
e_tmin<- 0.6108*exp(17.27*tmin/(tmin + 237.3)) #e_tmin (kPa)
ea_rh <- as.data.frame((rh_mean/100)*((e_tmin + e_tmax)/2))
} else {
e_tmax<- 0.6108*exp(17.27*tmax/(tmax + 237.3)) #e_tmax (kPa)
e_tmin<- 0.6108*exp(17.27*tmin/(tmin + 237.3)) #e_tmin (kPa)
ea_rh <- as.data.frame((e_tmin*(rh_max/100) + e_tmax*(rh_min/100))/2)
}
colnames(ea_rh)[1]<- "ea_rh"
return(ea_rh)
}
#' Vapour pressure deficit (es - ea)
#' @param tmin A dataframe with minimum daily air temperature (°C).
#' @param tmax A dataframe with maximum daily air temperature (°C).
#' @param tdew A dataframe with dewpoint temperature (°C).
#' @param rh_min A dataframe with minimum daily relative air humidity (percentage).
#' @param rh_mean A dataframe with mean daily relative air humidity (percentage).
#' @param rh_max A dataframe with maximum daily relative air humidity (percentage).
#' @param ea_method The methodology to calculate the actual vapour pressure. Assume the "rh" (default) for relative humidity procedure and "dew" for dewpoint temperature procedure.
#' @examples
#' \dontrun{
#' ea <- es_ea_calculation(tmin, tmax, tdew, rh_min, rh_mean, rh_max, ea_method)
#' }
#' @export
#' @return Returns a data.frame object with the vapour pressure deficit (es - ea) data.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
es_ea_calculation <- function(tmin, tmax, tdew, rh_min, rh_mean, rh_max, ea_method = "rh"){
if(is.null(ea_method)){ea_method <- "rh"}
# - Mean saturation vapor pressure derived from air temperature
e_tmax<- 0.6108*exp(17.27*tmax/(tmax + 237.3)) #e_tmax (kPa)
e_tmin<- 0.6108*exp(17.27*tmin/(tmin + 237.3)) #e_tmin (kPa)
es<- (e_tmax + e_tmin)/2
es<-as.data.frame(es)
if(ea_method == "dew"){
# - Actual vapour pressure (ea) derived from dewpoint temperature
ea <- as.data.frame(0.6108*exp(17.27*tdew/(tdew + 237.3)))
} else {
# - Actual vapor pressure (ea) derived from relative humidity data
if(is.null(tmax) | is.null(rh_min)){
ea <- as.data.frame((0.6108*exp(17.27*tmin/(tmin + 237.3)))*(rh_max/100))
} else if(is.null(rh_max)){
e_tmax<- 0.6108*exp(17.27*tmax/(tmax + 237.3)) #e_tmax (kPa)
e_tmin<- 0.6108*exp(17.27*tmin/(tmin + 237.3)) #e_tmin (kPa)
ea <- as.data.frame((rh_mean/100)*((e_tmin + e_tmax)/2))
} else {
e_tmax<- 0.6108*exp(17.27*tmax/(tmax + 237.3)) #e_tmax (kPa)
e_tmin<- 0.6108*exp(17.27*tmin/(tmin + 237.3)) #e_tmin (kPa)
ea <- as.data.frame((e_tmin*(rh_max/100) + e_tmax*(rh_min/100))/2)}
}
es_ea <- as.data.frame(es - ea)
colnames(es_ea)<- "es_ea"
return(es_ea)
}
#' Relative humidity (rh) calculation
#' @description Relative humidity is calculated in this function from the daily minimum air temperature and the mean air temperature of interest.
#' @param tmin A dataframe with minimum daily air temperature (°C)
#' @param tmean A dataframe with the mean air temperature (°C) for which you want to calculate the relative humidity.
#' @examples
#' \dontrun{
#' rh <- rh_calculation(tmin, tmean)
#' }
#' @export
#' @return A data.frame object with the relative humidity calculated
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
rh_calculation <- function(tmin, tmean){
e_tmin <- 0.6108*exp(17.27*tmin/(tmin + 237.3))
e_t <- 0.6108*exp(17.27*tmean/(tmean + 237.3))
rh <- 100*(e_tmin/e_t)
rh<-as.data.frame(rh)
colnames(rh) <- "rh"
return(rh)
}
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/variables_air_humidity_2.R ---
#' Wind speed at 2 meters high
#' @description Wind speed at two meters high can be calculated with this function.
#' @param uz measured wind speed at z meters above ground surface
#' @param z height of measurement above ground surface.
#' @examples
#' \dontrun{
#' u2_df <- u2_calculation(uz, z)
#' }
#' @export
#' @return A data.frame with the calculated wind speed at 2 m height.
#' @author Roberto Filgueiras, Luan P. Venancio, Catariny C. Aleman and Fernando F. da Cunha
u2_calculation <- function(uz, z){
u2 <- uz*(4.87/(log(67.8*z - 5.42)))
u2 <- as.data.frame(u2)
colnames(u2) <- "u2"
return(u2)
}
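# Worked check: for wind measured at z = 10 m, uz = 3.2 m/s gives
# u2 = 3.2 * 4.87 / log(67.8*10 - 5.42), i.e. about 2.4 m/s
# (cf. FAO-56, Example 14).
# u2_calculation(uz = 3.2, z = 10)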
# --- end of file: /scratch/gouwar.j/cran-all/cranData/BrazilMet/R/wind_speed_variables.R ---
Buishand_R <- function(serie,n_period=10,dstr='norm',simulations = 1000,
seed_set = 9658, change_random_seed = TRUE){
if(!change_random_seed){
if(exists(x = '.Random.seed')){
old_random <- .Random.seed
}
}
if(!is.null(seed_set)){
if(is.numeric(seed_set)){
set.seed(seed_set)
}else{
stop(paste0('seed_set must be either NULL or a number to use as argument to set.seed'))
}
}
serie <- as.vector(serie)
n <- length(serie)
na_ind <- is.na(serie)
n_no_na <- n - sum(as.numeric(na_ind))
if(n < 2*n_period){
stop('serie not long enough or n_period too long')
}
serie_mean <- mean(serie,na.rm = T)
serie_sd <- sd(serie,na.rm = T)
serie_com <- serie[!na_ind]
#if dstr = gamma; need parameters to fit:
if(dstr == 'gamma'){
delta <- min(serie_com) - 1
par_gamma <- fitdistr(serie_com-delta,'gamma')
}
#Interval to compute the test
i_ini <- n_period
i_fin <- n-n_period
serie <- serie-serie_mean
# a_v1 tracks the running maximum and a_v2 the running minimum of the
# rescaled partial sums below; both are initialised from the range of
# the centred series.
a_v1 <- min(serie, na.rm = T)
a_v2 <- max(serie, na.rm = T)
for(i in i_ini:i_fin){
a <-sum(serie[1:i],na.rm = T)/serie_sd
if( a > a_v1){
a_v1 <- a
i_break1 <- i
}
if(a < a_v2){
a_v2 <- a
i_break2 <- i
}
}
a_v <- (a_v1 - a_v2)/sqrt(n_no_na)
if(abs(a_v2) > abs(a_v1)){
  i_break <- i_break2
} else {
  i_break <- i_break1
}
#Begin Simulations
a_sim <- vector(mode = 'double',length = simulations)
if(dstr == 'norm'){
#Monte Carlo for Normal FDP
for(i in 1:simulations){
aux <- rnorm(n_no_na,mean=serie_mean,sd = serie_sd)
sd_aux <- sd(aux)
mn_aux <- mean(aux)
aux <- aux - mn_aux
a_v1 <- min(aux)
a_v2 <- max(aux)
for(j in i_ini:(n_no_na-n_period-1)){
a <-sum(aux[1:j],na.rm = T)/sd_aux
if( a > a_v1){
a_v1 <- a
}
if(a < a_v2){
a_v2 <- a
}
}
a_sim[i] <- (a_v1 - a_v2)/sqrt(n_no_na)
}
} else if( dstr == 'gamma'){
# Monte Carlo for Gamma FDP
for(i in 1:simulations){
aux <- rgamma(n=n_no_na,shape=par_gamma$estimate[1],rate = par_gamma$estimate[2])
aux <- aux + delta
sd_aux <- sd(aux)
mn_aux <- mean(aux)
aux <- aux - mn_aux
a_v1 <- min(aux)
a_v2 <- max(aux)
for(j in i_ini:(n_no_na-n_period-1)){
a <-sum(aux[1:j],na.rm = T)/sd_aux
if( a > a_v1){
a_v1 <- a
}
if(a < a_v2){
a_v2 <- a
}
}
a_sim[i] <- (a_v1 - a_v2)/sqrt(n_no_na)
}
} else if (dstr == 'self'){
#Bootstrap
for(i in 1:simulations){
aux <- sample(x = serie_com,replace = T,size = n_no_na)
sd_aux <- sd(aux)
if(sd_aux == 0){
next
}
mn_aux <- mean(aux)
aux <- aux - mn_aux
a_v1 <- min(aux)
a_v2 <- max(aux)
for(j in i_ini:(n_no_na-n_period-1)){
a <-sum(aux[1:j],na.rm = T)/sd_aux
if( a > a_v1){
a_v1 <- a
}
if(a < a_v2){
a_v2 <- a
}
}
a_sim[i] <- (a_v1 - a_v2)/sqrt(n_no_na)
}
}else{
stop('not supported dstr input')
}
cum_dist_func <- ecdf(a_sim)
p <- 1-cum_dist_func(a_v)
out <- list(breaks = i_break+1 ,p.value = p)
if(!change_random_seed){
if(exists(x = 'old_random')){
.Random.seed <- old_random
}
}
return(out)
}
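## Usage sketch (not from the package documentation): a synthetic series
## of length 60 with a mean shift after position 30, tested against the
## default normal Monte Carlo reference distribution.
set.seed(42)
x_ex <- c(rnorm(30, mean = 0), rnorm(30, mean = 2))
Buishand_R(x_ex, n_period = 10, dstr = 'norm', simulations = 500)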
|
/scratch/gouwar.j/cran-all/cranData/BreakPoints/R/Buishand_range.R
|
N_break_point <- function(serie, n_max = 1, n_period=10,
seed=FALSE, auto_select = FALSE,
alpha = NULL,method='SNHT',dstr='norm',
seed_set = 9658, change_random_seed = TRUE,
seed_method = 6842){
# select method
if(!change_random_seed){
if(exists(x = '.Random.seed')){
old_random <- .Random.seed
}
}
if(!is.null(seed_set)){
if(is.numeric(seed_set)){
set.seed(seed_set)
}else{
stop(paste0('seed_set must be either NULL or a number to use as argument to set.seed'))
}
}
if(!is.logical(seed)){
if(length(seed) != n_max){
stop('The given seed is not supported: if seed is given, it must be of length n_max')
}
}
if( method == 'pettit'){
fun <- pettit
}else if( method == 'student'){
fun <- stu
}else if( method == 'mann-whitney'){
fun <- man.whi
}else if( method == 'buishand'){
fun <- function(x,n_period){
return(Buishand_R(serie = x,n_period = n_period,dstr = dstr, seed_set = seed_method))
}
}else if( method == 'SNHT'){
fun <- function(x,n_period){
return(SNHT(serie = x,n_period = n_period,dstr = dstr, seed_set = seed_method))
}
}else{stop('Not supported method')}
target <- as.vector(serie)
n_targ <- length(target)
isna <- as.numeric(is.na(target))
n_period_2 <- as.integer(n_period)
ii <- c(rep(0,n_period_2),rollapply(isna,width=n_period+1,sum),rep(0,n_period_2))
ii <- which(ii >= (n_period_2/2))
if(length(ii) > 1){
na_break <- c(ii , n_targ+1)
ii_aux <- ii[2:length(ii)] - ii[1:(length(ii)-1)]
ii_aux <- ii_aux < 10
jump <- c(ii[1]<n_period_2,ii_aux,n_targ+1-ii[length(ii)]<n_period_2)
}else if(length(ii) == 1){
na_break <- c(ii , n_targ+1)
jump <- c(ii<n_period_2,n_targ+1-ii<n_period_2)
}else{
na_break <- n_targ+1
jump <- FALSE
}
new_target <- target
new_n_targ <- n_targ
output <- list()
n_max_new <- n_max
outputcont <- 0
for(new_serie in 1:length(na_break)){
n_max <- n_max_new
if(new_serie == 1){
if(jump[1]){next}
ini <- 0
target <- new_target[1:(na_break[new_serie]-1)]
}else{
if(jump[new_serie]){next}
ini <- na_break[new_serie-1]-1
target <- new_target[na_break[new_serie-1]:(na_break[new_serie]-1)]
}
outputcont <- outputcont +1
n_targ <- length(target)
if((n_max+1)*n_period > n_targ-2){
n_max <- n_targ%/%n_period-1
if(n_max < 1){
if(length(na_break) > 1){
warning('Not possible to find breakpoints in part of the serie: too short')
outputcont <- outputcont - 1; next
}else{
stop('Not possible to find breakpoints, target serie too short')
}
}
warning(paste0('the given n_max is too big for the target and n_period length; ', n_max, ' will be used as the maximal number of breakpoints'))
}
output_aux <- list(breaks = list(),p.value=list(),n=list())
for(n in 1:n_max){
if(is.logical(seed)){
breaks <- as.integer(1:n * (n_targ/(n+1)))+1
}else{
if(length(seed[[n]])==n){
breaks <- seed[[n]]
}else{
warning(paste('The seed provided for', n, 'breaks differs in length; equally spaced break seeds will be used instead', sep = ' '))
breaks <- as.integer(1:n * (n_targ/(n+1)))+1
}
}
breaks <- sort(breaks,decreasing = F)
p <- rep(1,length(breaks))
breaks_old <- rep(0,length(breaks))
if(n == 1){
ff <- fun(target,n_period)
breaks <- ff$breaks
p <- ff$p.value
}else {
no_problem <- T
iters <- 0
breaks_old_old_old <-breaks
breaks_old_old <-breaks
p_old_old <- p
p_old <- p
while (any(breaks_old != breaks) & no_problem){
iters <- iters + 1
breaks_old_old_old <- breaks_old_old
breaks_old_old <- breaks_old
breaks_old <- breaks
p_old_old <- p_old
p_old <- p
for(i in 1:n){
if(i == 1){
aux <- target[1:(breaks[2]-1)]
break_aux <- 0
}else if(i == n){
aux <- target[breaks[n-1]:n_targ]
break_aux <- breaks[n-1]-1
}else{
aux <- target[breaks[i-1]:(breaks[i+1]-1)]
break_aux <- breaks[i-1]-1
}
ff <- fun(aux,n_period)
breaks[i] <- ff$breaks + break_aux
p[i] <- ff$p.value
}
if(iters > 3){
if(all(breaks==breaks_old_old)){
no_problem <- F
warning(paste0('several critical points found at n = ', n))
breaks <- NULL
next
}else if(all(breaks==breaks_old_old_old)){
no_problem <- F
warning(paste0('several critical points found at n = ', n))
breaks <- NULL
next
}
}
}
}
if(is.null(breaks)){
output_aux$breaks[[n]] <- NA
output_aux$p.value[[n]] <- 1
output_aux$n[[n]] <- n
}else{
output_aux$breaks[[n]] <- breaks+ini
output_aux$p.value[[n]] <- p
output_aux$n[[n]] <- n
}
}
output[[outputcont]] <- output_aux
}
if(auto_select){
output_new <- output
output <- list(breaks = NULL,p.value=NULL,n=NULL)
cont <-0
bb <- NULL
pp_final <- NULL
n_final <- 0
for(output_aux in output_new){
cont <- cont + 1
n_max <- length(output_aux$p.value)
pp <- vector(mode = 'double',length = n_max)
for(i in 1:n_max){
pp[i] <- max(output_aux$p.value[[i]])
}
if(is.null(alpha)){
i <- which.min(pp)
} else{
aa <- 1:n_max
i <- suppressWarnings(max(aa[pp < alpha])) # -Inf when no p.value beats alpha
if(is.infinite(i)){next}
}
bb <- c(bb,output_aux$breaks[[i]])
pp_final <- c(pp_final,output_aux$p.value[[i]])
n_final <- i + n_final
}
output <- list(breaks = bb,p.value=pp_final,n=n_final)
}
if(!change_random_seed){
if(exists(x = 'old_random')){
.Random.seed <- old_random
}
}
return(output)
}
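## Usage sketch (not from the package documentation): a synthetic series
## with two mean shifts; rollapply() comes from the zoo package, and the
## single-break tests (stu, pettit, ...) are defined in this package.
library(zoo)
set.seed(42)
x_ex <- c(rnorm(40, 0), rnorm(40, 3), rnorm(40, 6))
N_break_point(x_ex, n_max = 2, n_period = 10, method = 'student')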
|
/scratch/gouwar.j/cran-all/cranData/BreakPoints/R/N_break_point.R
|
SNHT <- function(serie,n_period=10,dstr='norm',simulations = 1000,
seed_set = 9658, change_random_seed = TRUE){
if(!change_random_seed){
if(exists(x = '.Random.seed')){
old_random <- .Random.seed
}
}
if(!is.null(seed_set)){
if(is.numeric(seed_set)){
set.seed(seed_set)
}else{
stop(paste0('seed_set must be either NULL or a number to use as argument to set.seed'))
}
}
serie <- as.vector(serie)
n <- length(serie)
na_ind <- is.na(serie)
n_no_na <- n - sum(as.numeric(na_ind))
if(n < 2*n_period){
stop('serie not long enough or n_period too long')
}
i_ini <- n_period
i_fin <- n-n_period
serie_mean <- mean(serie,na.rm = T)
serie_sd <- sd(serie,na.rm = T)
if(dstr == 'gamma'){
delta <- min(serie[!na_ind]) - 1
par_gamma <- fitdistr(serie[!na_ind]-delta,'gamma')
}
serie_com <- serie[!na_ind]
serie <- (serie - serie_mean)/serie_sd
t <- rep(0,i_fin)
for(i in i_ini:i_fin){
z1 <- mean(serie[1:i],na.rm = T)
z2 <- mean(serie[(i+1):n],na.rm = T)
i_no_na <- i - sum(as.numeric(is.na(serie[1:i])))
t[i] <- i_no_na*z1**2 + (n_no_na-i_no_na)*z2**2
}
i_break <- which.max(t)
t_cri <- max(t)
a_sim <- vector(mode = 'double',length = simulations)
#Begin simulations:
if(dstr == 'norm'){
for(j in 1:simulations){
aux <- rnorm(n_no_na,mean=serie_mean,sd = serie_sd)
sd_aux <- sd(aux)
mn_aux <- mean(aux)
aux <- (aux - mn_aux)/sd_aux
t <- rep(0,n_no_na)
for(i in n_period:(n_no_na-n_period-1)){
z1 <- mean(aux[1:i])
z2 <- mean(aux[(i+1):n_no_na])
t[i] <- i*z1**2 + (n_no_na-i)*z2**2
}
a_sim[j]<- max(t)
}
} else if( dstr == 'gamma'){
for(j in 1:simulations){
aux <- rgamma(n=n_no_na,shape=par_gamma$estimate[1],rate = par_gamma$estimate[2])
aux <- aux + delta
sd_aux <- sd(aux)
mn_aux <- mean(aux)
aux <- (aux - mn_aux)/sd_aux
t <- rep(0,n_no_na)
for(i in n_period:(n_no_na-n_period-1)){
z1 <- mean(aux[1:i])
z2 <- mean(aux[(i+1):n_no_na])
t[i] <- i*z1**2 + (n_no_na-i)*z2**2
}
a_sim[j]<- max(t)
}
} else if (dstr == 'self'){
for(j in 1:simulations){
aux <- sample(x = serie_com,replace = T,size = n_no_na)
sd_aux <- sd(aux)
if(sd_aux == 0){
next
}
mn_aux <- mean(aux)
aux <- (aux - mn_aux)/sd_aux
t <- rep(0,n_no_na)
for(i in n_period:(n_no_na-n_period-1)){
z1 <- mean(aux[1:i])
z2 <- mean(aux[(i+1):n_no_na])
t[i] <- i*z1**2 + (n_no_na-i)*z2**2
}
a_sim[j]<- max(t)
}
}else{
stop('not supported dstr input')
}
#Check p.value
cum_dist_func <- ecdf(a_sim)
p <- 1-cum_dist_func(t_cri)
out <- list(breaks = i_break+1 ,p.value = p)
if(!change_random_seed){
if(exists(x = 'old_random')){
.Random.seed <- old_random
}
}
return(out)
}
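## Usage sketch (not from the package documentation), mirroring the
## Buishand example above: one mean shift after position 30.
set.seed(42)
x_ex <- c(rnorm(30, mean = 0), rnorm(30, mean = 2))
SNHT(x_ex, n_period = 10, dstr = 'norm', simulations = 500)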
|
/scratch/gouwar.j/cran-all/cranData/BreakPoints/R/SNHT.R
|
man.whi <- function(serie,n_period=10){
serie <- as.vector(serie)
n <- length(serie)
if(n < 2*n_period){
stop('serie not long enough or n_period too long')
}
i_ini <- n_period
i_fin <- n-n_period
p_v <- 1
for(i in i_ini:i_fin){
aux1 <- serie[1:i]
aux2 <- serie[(i+1):n]
p <- wilcox.test(aux1, aux2, paired = FALSE)$p.value # var.equal is a t.test argument, not used by wilcox.test
if(p <= p_v){
p_v <- p
i_break <- i+1
}
}
out <- list(breaks = i_break ,p.value =p_v)
return(out)
}
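## Usage sketch: the shift located with the rank-based Mann-Whitney
## (Wilcoxon) test (synthetic data, not from the package docs).
set.seed(42)
x_ex <- c(rnorm(30), rnorm(30, 2))
man.whi(x_ex, n_period = 10)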
|
/scratch/gouwar.j/cran-all/cranData/BreakPoints/R/man_whitney.R
|
pettit <- function(serie,n_period=10){
serie <- as.vector(serie)
n <- length(serie)
if(n < 2*n_period){
stop('serie not long enough or n_period too long')
}
i_ini <- n_period
i_fin <- n-n_period
U_v <- -1
n_row <- n-i_ini
aux1 <- matrix(serie[1:i_fin],ncol = i_fin,nrow = n_row)
aux2 <- matrix(serie[(i_ini+1):n],ncol = i_fin,nrow = n_row,byrow = T)
data <- sign(aux1-aux2)
for(i in 1:(i_fin-i_ini+1)){
aa <- i_ini -1 +i
U <- abs(sum(data[1:aa,i:n_row],na.rm = T))
if(U > U_v){
U_v <- U
i_break <- i + i_ini
}
}
out <- list(breaks = i_break ,p.value = 2 * exp(-6*U_v**2/(n**3 + n**2)))
return(out)
}
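## Usage sketch: Pettitt's rank-based test on the same kind of shifted
## series (synthetic data, not from the package docs).
set.seed(42)
x_ex <- c(rnorm(30), rnorm(30, 2))
pettit(x_ex, n_period = 10)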
|
/scratch/gouwar.j/cran-all/cranData/BreakPoints/R/pettit.R
|
stu <- function(serie,n_period=10){
serie <- as.vector(serie)
n <- length(serie)
if(n < 2*n_period){
stop('serie not long enough or n_period too long')
}
i_ini <- n_period
i_fin <- n-n_period
p_v <- 1
for(i in i_ini:i_fin){
aux1 <- serie[1:i]
aux2 <- serie[(i+1):n]
p <- t.test(aux1,aux2, paired = F, var.equal = F)$p.value
if(p <= p_v){
p_v <- p
i_break <- i+1
}
}
out <- list(breaks = i_break ,p.value =p_v)
return(out)
}
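## Usage sketch: a Welch two-sample t-test scanned over all candidate
## break positions (synthetic data, not from the package docs).
set.seed(42)
x_ex <- c(rnorm(30), rnorm(30, 2))
stu(x_ex, n_period = 10)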
|
/scratch/gouwar.j/cran-all/cranData/BreakPoints/R/student.R
|
sum_yam <- function(x){
y <- x[(length(x)/2+1):length(x)]
x <- x[1:(length(x)/2)]
return(abs(mean(x)-mean(y))/( sd(x)+sd(y)))
}
yamamoto <- function(serie, alpha = 0.1, n_period = 10){
n_period <- as.integer(n_period)
if(n_period %% 2 != 0){
n_period <- n_period - 1
warning(paste0('n_period is not even; ', n_period, ' will be used instead'))
}
coef <- qt(p = 1-alpha,df = n_period-1) / sqrt(n_period)
# pad with NAs so that quiqui aligns with the indices of serie
quiqui <- c(rep(NA, n_period),
rollapply(serie,width=n_period*2,by=1,FUN=sum_yam) / coef,
rep(NA, n_period - 1))
if(all(quiqui<=1,na.rm = T)){
return(list(breaks=NULL, n=NULL))
}
quiqui1 <- quiqui > 1
index_qui <- 1:length(quiqui)
index_qui <- index_qui[quiqui1]
index_qui <- index_qui[!is.na(index_qui)]
if(length(index_qui) == 1){
return(list(breaks=index_qui, n=1))
}
aux <- index_qui[2:length(index_qui)] - index_qui[1:(length(index_qui)-1)]
while(any(aux==1)){
id <- which(aux==1)
vect <- id
cont <- 0
for( iii in id){
cont <- cont + 1
if(quiqui[index_qui[iii]] > quiqui[index_qui[iii+1]]){
vect[cont] <- vect[cont]+1
}
}
index_qui <- index_qui[-vect]
if(length(index_qui) == 1){
return(list(breaks=index_qui, n=1))
}
aux <- index_qui[2:length(index_qui)] - index_qui[1:(length(index_qui)-1)]
}
while(any(aux < n_period)){
id <- which(aux < n_period)
vect <- id
cont <- 0
for( iii in id){
cont <- cont + 1
if(quiqui[index_qui[iii]] > quiqui[index_qui[iii+1]]){
vect[cont] <- vect[cont]+1
}
}
index_qui <- index_qui[-vect]
if(length(index_qui) == 1){
return(list(breaks=index_qui, n=1))
}
aux <- index_qui[2:length(index_qui)] - index_qui[1:(length(index_qui)-1)]
}
return(list(breaks=index_qui, n=length(index_qui)))
}
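## Usage sketch (not from the package documentation); rollapply() comes
## from the zoo package.
library(zoo)
set.seed(42)
x_ex <- c(rnorm(30), rnorm(30, 2))
yamamoto(x_ex, alpha = 0.1, n_period = 10)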
|
/scratch/gouwar.j/cran-all/cranData/BreakPoints/R/yamamoto.R
|
setClass("brobmat", slots = c(x="matrix",positive="logical"))
setClass("swift",
representation = "VIRTUAL"
)
setClass("brob",
slots = c(x="numeric",positive="logical"),
contains = "swift"
)
setClass("glub",
slots = c(real="brob",imag="brob"),
contains = "swift"
)
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/R/aaa_allclasses.R
|
".Brob.valid" <- function(object){
len <- length(object@positive)
if(len != length(object@x)){
return("length mismatch")
} else {
return(TRUE)
}
}
setValidity("brob", .Brob.valid)
"brob" <- function(x=double(),positive){
if(missing(positive)){
positive <- rep(TRUE,length(x))
}
if(length(positive)==1){
positive <- rep(positive,length(x))
}
new("brob",x=as.numeric(x),positive=positive)
}
"is.brob" <- function(x){is(x,"brob")}
"is.glub" <- function(x){is(x,"glub")}
"as.brob" <- function(x){
if(is.brob(x)){
return(x)
} else if(is.complex(x)) {
warning("imaginary parts discarded")
return(Recall(Re(x)))
} else if(is.glub(x)){
warning("imaginary parts discarded")
return(Re(x))
} else if(is.brobmat(x)){
return(brobmat_to_brob(x))
} else {
return(brob(log(abs(c(x))), c(x)>=0))
}
}
setAs("brob", "numeric", function(from){
out <- exp(from@x)
out[!from@positive] <- -out[!from@positive]
return(out)
} )
setMethod("as.numeric",signature(x="brob"),function(x){as(x,"numeric")})
setAs("brob", "complex", function(from){
return(as.numeric(from)+ 0i)
} )
setMethod("as.complex",signature(x="brob"),function(x){as(x,"complex")})
".Brob.print" <- function(x, digits=5){
noquote( paste(c("-","+")[1+x@positive],"exp(",signif(x@x,digits),")",sep=""))
}
"print.brob" <- function(x, ...){
jj <- .Brob.print(x, ...)
print(jj)
return(invisible(jj))
}
setMethod("show", "brob", function(object){print.brob(object)})
setGeneric("getX",function(x){standardGeneric("getX")})
setGeneric("getP",function(x){standardGeneric("getP")})
setMethod("getX","brob",function(x){x@x})
setMethod("getP","brob",function(x){x@positive})
setMethod("length","brob",function(x){length(x@x)})
setMethod("is.infinite","brob",function(x){x@x == +Inf})
setMethod("is.finite" ,"brob",function(x){x@x != +Inf})
setGeneric("sign<-",function(x,value){standardGeneric("sign<-")})
setMethod("sign<-","brob",function(x,value){
brob(x@x,value)
} )
setMethod("[", "brob",
function(x, i, j, drop){
if(!missing(j)){
warning("second argument to extractor function ignored")
}
brob(x@x[i], x@positive[i])
} )
setReplaceMethod("[",signature(x="brob"),
function(x,i,j,value){
jj.x <- x@x
jj.pos <- x@positive
if(is.brob(value)){
jj.x[i] <- value@x
jj.pos[i] <- value@positive
return(brob(x=jj.x,positive=jj.pos))
} else {
x[i] <- as.brob(value)
return(x)
}
} )
setGeneric(".cPair", function(x,y){standardGeneric(".cPair")})
setMethod(".cPair", c("brob", "brob"), function(x,y){.Brob.cPair(x,y)})
setMethod(".cPair", c("brob", "ANY"), function(x,y){.Brob.cPair(x,as.brob(y))})
setMethod(".cPair", c("ANY", "brob"), function(x,y){.Brob.cPair(as.brob(x),y)})
setMethod(".cPair", c("ANY", "ANY"), function(x,y){c(x,y)})
"cbrob" <- function(x, ...) {
if(nargs()<3)
.cPair(x,...)
else
.cPair(x, Recall(...))
}
".Brob.cPair" <- function(x,y){
x <- as.brob(x)
y <- as.brob(y)
brob(c(x@x,y@x),c(x@positive,y@positive))
}
setGeneric("log")
setMethod("sqrt","brob", function(x){
brob(ifelse(x@positive,x@x/2, NaN),TRUE)
} )
setMethod("Math", "brob",
function(x){
switch(.Generic,
abs = brob(x@x),
log = {
out <- x@x
out[!x@positive] <- NaN
out
},
log10 = {
out <- x@x/log(10)
out[!x@positive] <- NaN
out
},
log2 = {
out <- x@x/log(2)
out[!x@positive] <- NaN
out
},
exp = brob(x),
cosh = {(brob(x) + brob(-x))/2},
sinh = {(brob(x) - brob(-x))/2},
acos =,
acosh =,
asin =,
asinh =,
atan =,
atanh =,
cos =,
sin =,
tan =,
tanh =,
trunc = callGeneric(as.numeric(x)),
lgamma =,
cumsum =,
gamma =,
ceiling=,
floor = as.brob(callGeneric(as.numeric(x))),
stop(gettextf("Function %s not implemented on Brobdingnagian numbers", dQuote(.Generic)))
)
} )
".Brob.negative" <- function(e1){
brob(e1@x,!e1@positive)
}
".Brob.ds" <- function(e1,e2){ # "ds" == "different signs"
xor(e1@positive,e2@positive)
}
".Brob.add" <- function(e1,e2){
e1 <- as.brob(e1)
e2 <- as.brob(e2)
jj <- rbind(e1@x,e2@x)
x1 <- jj[1,]
x2 <- jj[2,]
out.x <- double(length(x1))
jj <- rbind(e1@positive,e2@positive)
p1 <- jj[1,]
p2 <- jj[2,]
out.pos <- p1
ds <- .Brob.ds(e1,e2)
ss <- !ds #ss == "Same Sign"
out.x[ss] <- pmax(x1[ss],x2[ss]) + log1p(+exp(-abs(x1[ss]-x2[ss])))
out.x[ds] <- pmax(x1[ds],x2[ds]) + log1p(-exp(-abs(x1[ds]-x2[ds])))
# Now special dispensation for 0+0:
out.x[ (x1 == -Inf) & (x2 == -Inf)] <- -Inf
out.pos <- p1
out.pos[ds] <- xor((x1[ds] > x2[ds]) , (!p1[ds]) )
return(brob(out.x,out.pos))
}
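## A quick sanity check of the log-space addition above (a sketch; it
## needs the Arith methods registered further down this file).
## brob(1000) represents exp(1000), so brob(1000) + brob(1000) should
## have log-magnitude 1000 + log(2):
##   getX(brob(1000) + brob(1000))   # 1000.693...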
".Brob.mult" <- function(e1,e2){
e1 <- as.brob(e1)
e2 <- as.brob(e2)
return(brob(e1@x + e2@x, !.Brob.ds(e1,e2)))
}
".Brob.power"<- function(e1,e2){
stopifnot(is.brob(e1) | is.brob(e2))
if(is.brob(e2)){ # e2 a brob => answer a brob (ignore signs)
return(brob(log(e1) * brob(e2@x), TRUE))
} else { #e2 a non-brob (try to account for signs)
s <- as.integer(2*e1@positive-1) #s = +/-1
return(brob(e1@x*as.brob(e2), (s^as.numeric(e2))>0))
}
}
".Brob.inverse" <- function(b){brob(-b@x,b@positive)}
setMethod("Arith",signature(e1 = "brob", e2="missing"),
function(e1,e2){
switch(.Generic,
"+" = e1,
"-" = .Brob.negative(e1),
stop(gettextf("unary operator %s not implemented on Brobdingnagian numbers", dQuote(.Generic)))
)
} )
".Brob.arith" <- function(e1,e2){
switch(.Generic,
"+" = .Brob.add (e1, e2),
"-" = .Brob.add (e1, .Brob.negative(as.brob(e2))),
"*" = .Brob.mult (e1, e2),
"/" = .Brob.mult (e1, .Brob.inverse(as.brob(e2))),
"^" = .Brob.power(e1, e2),
stop(gettextf("binary operator %s not implemented on Brobdingnagian numbers", dQuote(.Generic)))
) }
setMethod("Arith", signature(e1 = "brob", e2="ANY"), .Brob.arith)
setMethod("Arith", signature(e1 = "ANY", e2="brob"), .Brob.arith)
setMethod("Arith", signature(e1 = "brob", e2="brob"), .Brob.arith)
".Brob.equal" <- function(e1,e2){
(e1@x==e2@x) & (e1@positive==e2@positive)
}
".Brob.greater" <- function(e1,e2){
jj.x <- rbind(e1@x,e2@x)
jj.p <- rbind(e1@positive,e2@positive)
ds <- .Brob.ds(e1,e2)
ss <- !ds #ss == "Same Sign"
greater <- logical(length(ss))
greater[ds] <- jj.p[1,ds]
greater[ss] <- jj.p[1,ss] & (jj.x[1,ss] > jj.x[2,ss])
return(greater)
}
".Brob.compare" <- function(e1,e2){
if( (length(e1) == 0) | (length(e2)==0)) {
return(logical(0))
}
e1 <- as.brob(e1)
e2 <- as.brob(e2)
switch(.Generic,
"==" = .Brob.equal(e1,e2),
"!=" = !.Brob.equal(e1,e2),
">" = .Brob.greater(e1,e2),
"<" = !.Brob.greater(e1,e2) & !.Brob.equal(e1,e2),
">=" = .Brob.greater(e1,e2) | .Brob.equal(e1,e2),
"<=" = !.Brob.greater(e1,e2) | .Brob.equal(e1,e2),
stop(gettextf("comparison operator %s not implemented on Brobdingnagian numbers", dQuote(.Generic)))
)
}
setMethod("Compare", signature(e1="brob", e2="ANY" ), .Brob.compare)
setMethod("Compare", signature(e1="ANY" , e2="brob"), .Brob.compare)
setMethod("Compare", signature(e1="brob", e2="brob"), .Brob.compare)
".Brob.logic" <- function(e1,e2){
stop("No logic currently implemented for Brobdingnagian numbers")
}
setMethod("Logic",signature(e1="swift",e2="ANY"), .Brob.logic)
setMethod("Logic",signature(e1="ANY",e2="swift"), .Brob.logic)
setMethod("Logic",signature(e1="swift",e2="swift"), .Brob.logic)
if(!isGeneric("max")){
setGeneric("max", function(x, ..., na.rm = FALSE)
{
standardGeneric("max")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::max(x, ..., na.rm = na.rm)
},
group = "Summary")
}
if(!isGeneric("min")){
setGeneric("min", function(x, ..., na.rm = FALSE)
{
standardGeneric("min")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::min(x, ..., na.rm = na.rm)
},
group = "Summary")
}
if(!isGeneric("range")){
setGeneric("range", function(x, ..., na.rm = FALSE)
{
standardGeneric("range")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::range(x, ..., na.rm = na.rm)
},
group = "Summary")
}
if(!isGeneric("prod")){
setGeneric("prod", function(x, ..., na.rm = FALSE)
{
standardGeneric("prod")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::prod(x, ..., na.rm = na.rm)
},
group = "Summary")
}
if(!isGeneric("sum")){
setGeneric("sum", function(x, ..., na.rm = FALSE)
{
standardGeneric("sum")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::sum(x, ..., na.rm = na.rm)
},
group = "Summary")
}
".Brob.max" <- function(x, ..., na.rm=FALSE){
p <- x@positive
val <- x@x
if(any(p)){
return(brob(max(val[p])))
} else {
# all negative
return(brob(min(val),FALSE))
}
}
".Brob.prod" <- function(x){
p <- x@positive
val <- x@x
return(brob(sum(val),(sum(!p)%%2)==0))
}
".Brob.sum" <- function(x){
.Brob.sum.allpositive( x[x>0]) -
.Brob.sum.allpositive(-x[x<0])
}
".Brob.sum.allpositive" <- function(x){
if(length(x)<1){return(as.brob(0))}
val <- x@x
p <- x@positive
mv <- max(val)
return(brob(mv + log1p(sum(exp(val[-which.max(val)]-mv))),TRUE))
}
setMethod("Summary", "brob",
function(x, ..., na.rm=FALSE){
switch(.Generic,
max = .Brob.max( x, ..., na.rm=na.rm),
min = -.Brob.max(-x, ..., na.rm=na.rm),
range = cbrob(min(x,na.rm=na.rm),max(x,na.rm=na.rm)),
prod = .Brob.prod(x),
sum = .Brob.sum(x),
stop(gettextf("Function %s not implemented on Brobdingnagian numbers", dQuote(.Generic)))
)
}
)
setMethod("plot",signature(x="brob",y="missing"),function(x, ...){plot.default(as.numeric(x), ...)})
setMethod("plot",signature(x="brob",y="ANY" ),function(x, y, ...){plot.default(as.numeric(x), as.numeric(y), ...)})
setMethod("plot",signature(x="ANY" ,y="brob"),function(x, y, ...){plot.default(as.numeric(x), as.numeric(y), ...)})
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/R/brob.R
|
## "x[]":
setMethod("[",
signature(x = "brobmat",
i = "missing", j = "missing",
drop = "ANY"),
function(x, i, j, ..., drop){
return(x)
} )
## select rows, x[i,]:
setMethod("[",
signature(x = "brobmat",
i = "index", j = "missing",
drop = "ANY"),
function(x,i,j, ..., drop) {
if(missing(drop)){drop <- TRUE}
xv <- getX(x)[i,,drop=drop]
if(drop & (!is.matrix(xv))){
return(brob(xv,getP(x)[i,]))
} else {
return(newbrobmat(xv, getP(x)[i,,drop=FALSE]))
}
} )
## select columns, x[,j]:
setMethod("[",
signature(x = "brobmat",
i = "missing", j = "index",
drop = "ANY"),
function(x,i,j, ..., drop) {
if(missing(drop)){drop <- TRUE}
xv <- getX(x)[,j,drop=drop]
if(drop & (!is.matrix(xv))){
return(brob(xv,getP(x)[,j]))
} else {
return(newbrobmat(xv, getP(x)[,j,drop=FALSE]))
}
} )
## matrix indexing
setMethod("[",
signature(x = "brobmat",
i = "matrix", j = "missing",
drop = "ANY"),
function(x,i,j, ..., drop) {
# a matrix index selects scattered entries, so the result is a brob vector
return(brob(getX(x)[i], getP(x)[i]))
} )
## select both rows *and* columns
setMethod("[",
signature(x = "brobmat",
i = "index", j = "index",
drop = "ANY"),
function(x,i,j, ..., drop) {
if(missing(drop)){drop <- TRUE}
xv <- getX(x)[i,j,drop=drop]
if(drop & (!is.matrix(xv))){
return(brob(xv,getP(x)[i,j]))
} else {
return(newbrobmat(xv, getP(x)[i,j,drop=FALSE]))
}
} )
## bail out if any of (i,j,drop) is "non-sense"
setMethod("[",
signature(x = "brobmat",
i = "ANY", j = "ANY",
drop = "ANY"),
function(x,i,j, ..., drop){
stop("invalid or not-yet-implemented brobmat subsetting")
} )
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/R/extract.R
|
".Glub.valid" <- function(object){
if(length(object@real) == length(object@imag)){
return(TRUE)
} else {
return("length mismatch")
}
}
setValidity("glub", .Glub.valid)
setAs("glub", "complex", function(from){
complex(real=as.numeric(from@real), imaginary=as.numeric(from@imag))
} )
setMethod("as.complex",signature(x="glub"),function(x){as(x,"complex")})
setAs("glub", "numeric", function(from){
warning("imaginary parts discarded in coercion; use as.complex() to retain them")
as.numeric(Re(from))
} )
setMethod("as.numeric",signature(x="glub"),function(x){as(x,"numeric")})
setMethod("is.infinite",signature(x="glub"),function(x){is.infinite(Re(x)) | is.infinite(Im(x))})
setMethod("is.finite",signature(x="glub"),function(x){is.finite(Re(x)) & is.finite(Im(x))})
"glub" <- function(real=double(), imag=double()){
if(missing(imag)){
imag <- 0
}
real <- as.brob(real)
imag <- as.brob(imag)
jj.x <- cbind(real@x,imag@x)
jj.p <- cbind(real@positive,imag@positive)
new("glub",
real = brob(jj.x[,1],jj.p[,1]),
imag = brob(jj.x[,2],jj.p[,2])
)
}
setMethod("Re","glub",function(z){z@real})
setMethod("Im","glub",function(z){z@imag})
setMethod("length","glub",function(x){length(Re(x))})
setMethod("Mod", "glub", function(z){sqrt(Re(z)*Re(z) + Im(z)*Im(z))})
".Brob.arg" <- function(z){
atan2(as.numeric(Im(z)),as.numeric(Re(z)))
}
".Glub.complex" <- function(z){
switch(.Generic,
Arg = .Brob.arg(z),
Conj = glub(Re(z),-Im(z)),
stop(gettextf("Complex operator %s not implemented on glub numbers", dQuote(.Generic)))
)
}
setMethod("Complex","glub", .Glub.complex)
setGeneric("Re<-",function(z,value){standardGeneric("Re<-")})
setGeneric("Im<-",function(z,value){standardGeneric("Im<-")})
setMethod("Re<-","glub",function(z,value){
return(glub(real=value, imag=Im(z)))
} )
setMethod("Im<-","glub",function(z,value){
z <- as.glub(z)
return(glub(real=z@real, imag=value))
} )
setMethod("Im<-","brob",function(z,value){
return(glub(real=z, imag=value))
} )
"as.glub" <- function(x){
if(is.glub(x)){
return(x)
} else if (is.brob(x)) {
return(glub(real=as.brob(x),imag=as.brob(0)))
} else {
return(glub(real=as.brob(Re(x)),imag=as.brob(Im(x))))
}
}
setMethod("[", "glub",
function(x, i, j, drop){
if(!missing(j)){warning("second argument (j) ignored")}
glub(x@real[i], x@imag[i])
}
)
setReplaceMethod("[",signature(x="glub"),
function(x,i,j,value){
if(!missing(j)){warning("second argument (j) ignored")}
value <- as.glub(value)
x@real[i] <- Re(value)
x@imag[i] <- Im(value)
return(x)
}
)
setMethod(".cPair", c("glub", "glub"), function(x,y).Glub.cPair(x,y))
setMethod(".cPair", c("glub", "ANY"), function(x,y).Glub.cPair(x,as.glub(y)))
setMethod(".cPair", c("ANY", "glub"), function(x,y).Glub.cPair(as.glub(x),y))
setMethod(".cPair", c("complex", "brob"), function(x,y).Glub.cPair(as.glub(x),y))
setMethod(".cPair", c("brob", "complex"), function(x,y).Glub.cPair(as.glub(x),y))
setMethod(".cPair", c("glub", "brob"), function(x,y).Glub.cPair(as.glub(x),y))
setMethod(".cPair", c("brob", "glub"), function(x,y).Glub.cPair(as.glub(x),y))
".Glub.cPair" <- function(x,y){
x <- as.glub(x)
y <- as.glub(y)
return(glub(.Brob.cPair(Re(x),Re(y)), .Brob.cPair(Im(x),Im(y))))
}
"print.glub" <- function(x,...){
real <- .Brob.print(Re(x),...)
imag <- .Brob.print(Im(x),...)
jj <- noquote(paste(real,imag,"i ",sep=""))
print(jj)
}
setMethod("show", "glub", function(object){print.glub(object)})
setMethod("Math", "glub",
function(x){
switch(.Generic,
abs = Mod(x),
log = { glub(log(Mod(x)),Arg(x)) },
log10 = { glub(log10(Mod(x)),Arg(x)/log(10)) },
log2 = { glub(log2 (Mod(x)),Arg(x)/log( 2)) },
exp = { exp(Re(x))*exp(1i*as.numeric(Im(x)))},
sqrt = { exp(log(x)/2)},
cosh = { (exp(x)+exp(-x))/2},
sinh = { (exp(x)-exp(-x))/2},
tanh = { (exp(x)-exp(-x))/(exp(x)+exp(-x))},
cos = { (exp(1i*x)+exp(-1i*x))/(2 )},
sin = { (exp(1i*x)-exp(-1i*x))/(2i)},
tan = { (exp(1i*x)-exp(-1i*x))/(exp(1i*x)+exp(-1i*x))},
acos = { -1i*log( x + 1i*sqrt( 1-x*x)) },
acosh = { log( x + sqrt(-1+x*x)) },
asin = { -1i*log(1i*x + sqrt( 1-x*x)) },
asinh = { log( x + sqrt( 1+x*x)) },
atan = { 0.5i*log((1i+x)/(1i-x)) },
atanh = { 0.5 *log((1 +x)/(1 -x)) },
trunc = callGeneric(as.complex(x)),
lgamma =,
cumsum =,
gamma =,
ceiling=,
floor = as.glub(callGeneric(as.complex(x))),
stop(gettextf("function %s not implemented on glub numbers", dQuote(.Generic)))
)
}
)
".Glub.negative" <- function(e1){
glub(-Re(e1),-Im(e1))
}
".Glub.add" <- function(e1,e2){
e1 <- as.glub(e1)
e2 <- as.glub(e2)
glub(Re(e1)+Re(e2),Im(e1)+Im(e2))
}
".Glub.mult" <- function(e1,e2){
e1 <- as.glub(e1)
e2 <- as.glub(e2)
glub(Re(e1)*Re(e2)-Im(e1)*Im(e2), Re(e1)*Im(e2)+Im(e1)*Re(e2))
}
".Glub.power" <- function(e1,e2){
exp(e2*log(e1))
}
".Glub.inverse" <- function(e1){
jj <- Re(e1)*Re(e1) + Im(e1)*Im(e1)
glub(Re(e1)/jj, -Im(e1)/jj)
}
".Glub.arith" <- function(e1,e2){
switch(.Generic,
"+" = .Glub.add (e1, e2),
"-" = .Glub.add (e1, .Glub.negative(e2)),
"*" = .Glub.mult (e1, e2),
"/" = .Glub.mult (e1, .Glub.inverse(e2)),
"^" = .Glub.power(e1, e2),
stop(gettextf("binary operator %s not implemented on glub numbers", dQuote(.Generic)))
)
}
setMethod("Arith",signature(e1 = "glub", e2="missing"),
function(e1,e2){
switch(.Generic,
"+" = e1,
"-" = .Glub.negative(e1),
stop(gettextf("unary operator %s not implemented on glub objects", dQuote(.Generic)))
)
}
)
setMethod("Arith",signature(e1 = "glub", e2="glub"), .Glub.arith)
setMethod("Arith",signature(e1 = "glub", e2="ANY" ), .Glub.arith)
setMethod("Arith",signature(e1 = "ANY" , e2="glub"), .Glub.arith)
setMethod("Arith",signature(e1= "brob" , e2="complex"), .Glub.arith)
setMethod("Arith",signature(e1= "complex", e2="brob" ), .Glub.arith)
setMethod("Arith",signature(e1= "glub" , e2="complex"), .Glub.arith)
setMethod("Arith",signature(e1= "complex", e2="glub" ), .Glub.arith)
setMethod("Arith",signature(e1= "glub", e2="brob"), .Glub.arith)
setMethod("Arith",signature(e1= "brob", e2="glub"), .Glub.arith)
".Glub.equal" <- function(e1,e2){
(Re(e1) == Re(e2)) & ( Im(e1) == Im(e2))
}
".Glub.compare" <- function(e1,e2){
e1 <- as.glub(e1)
e2 <- as.glub(e2)
switch(.Generic,
"==" = .Glub.equal(e1,e2),
"!=" = !.Glub.equal(e1,e2),
stop(gettextf("comparison operator %s not implemented on glub numbers", dQuote(.Generic)))
)
}
setMethod("Compare", signature(e1="glub",e2="glub"), .Glub.compare)
setMethod("Compare", signature(e1="glub",e2="ANY" ), .Glub.compare)
setMethod("Compare", signature(e1="ANY", e2="glub"), .Glub.compare)
setMethod("Compare", signature(e1="brob", e2="glub"), .Glub.compare)
setMethod("Compare", signature(e1="glub", e2="brob"), .Glub.compare)
".Glub.prod" <- function(z){
out <- as.glub(1)
for(i in 1:length(z)){
out <- out * z[i]
}
return(out)
}
".Glub.sum" <- function(x){
glub(sum(Re(x)),sum(Im(x)))
}
setMethod("Summary", "glub",
function(x, ..., na.rm=FALSE){
switch(.Generic,
prod = .Glub.prod(x),
sum = .Glub.sum(x),
stop(gettextf("function %s not implemented on glub numbers", dQuote(.Generic)))
)
}
)
setMethod("plot",signature(x="glub",y="missing"),function(x, ...){plot.default(as.complex(x), ...)})
setMethod("plot",signature(x="glub",y="ANY" ),function(x, y, ...){plot.default(as.complex(x), as.complex(y), ...)})
setMethod("plot",signature(x="ANY" ,y="glub"),function(x, y, ...){plot.default(as.complex(x), as.complex(y), ...)})
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/R/glub.R
|
`.brobmat.valid` <- function(object){
if(length(object@x) != length(object@positive)){
return("length mismatch")
} else {
return(TRUE)
}
}
setValidity("brobmat", .Brob.valid)
`newbrobmat` <- function(x,positive){
new("brobmat", x=x, positive=c(positive)) # this is the only use of new() here
}
`brobmat` <- function(..., positive){
data <- list(...)[[1]]
if(is.matrix(data)){
M <- data
} else if(is.brob(data)){
jj <- list(...)
jj[[1]] <- getX(data)
M <- do.call(matrix,jj) # signs not accounted for
return(newbrobmat(M,positive=getP(data)))
} else {
M <- matrix(...)
}
if(missing(positive)){positive <- rep(TRUE,length(M))}
positive <- cbind(c(M),positive)[,2]>0 # recycles 'positive' to length(M) and coerces it to logical
return(newbrobmat(M,positive=positive))
}
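## Usage sketch: the first argument is read on the log scale, so this is
## a 2x2 matrix with entries exp(100), exp(200), exp(300), exp(400)
## (printing relies on the show method defined further down this file):
##   M <- brobmat(c(100, 200, 300, 400), 2, 2)
##   M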
`is.brobmat` <- function(x){is(x,"brobmat")}
setMethod("getX","brobmat",function(x){x@x})
setMethod("getX","numeric",function(x){x})
setMethod("getP","brobmat",function(x){
out <- getX(x)
storage.mode(out) <- "logical"
out[] <- x@positive
## No occurrences of '@' after this line
return(out)
})
setMethod("getP","numeric",function(x){x>0})
setMethod("length","brobmat",function(x){length(getX(x))})
`as.brobmat` <- function(x){
if(is.brob(x)){
return(newbrobmat(matrix(getX(x)),matrix(getP(x)))) # n-by-1
} else if(is.numeric(x)){
x <- as.matrix(x)
return(newbrobmat(log(abs(x)), c(x>=0)))
}
}
setAs("brobmat", "matrix", function(from){
out <- exp(getX(from))
negs <- !getP(from)
out[negs] <- -out[negs]
return(out)
} )
`brobmat_to_brob` <- function(x){ brob(c(getX(x)),c(getP(x))) }
setMethod("as.matrix",signature(x="brobmat"),function(x){as(x,"matrix")})
setGeneric("nrow")
setGeneric("ncol")
setMethod("nrow",signature(x="brobmat"),function(x){nrow(getX(x))})
setMethod("ncol",signature(x="brobmat"),function(x){ncol(getX(x))})
`.brobmat.print` <- function(x, digits=5){
out <- getX(x)
out[] <- paste(c("-","+")[1+getP(x)],"exp(",signif(out,digits),")",sep="")
noquote(out)
}
`print.brobmat` <- function(x, ...){
jj <- .brobmat.print(x, ...)
print(jj)
return(invisible(jj))
}
setMethod("show", "brobmat", function(object){print.brobmat(object)})
setMethod("Math", "brobmat",
function(x){
switch(.Generic,
abs = brobmat(getX(x)),
log = {
out <- getX(x)
out[!getP(x)] <- NaN
out # numeric matrix
},
log10 = {
out <- getX(x)
out[!getP(x)] <- NaN
out/log(10) # numeric matrix
},
log2 = {
out <- getX(x)
out[!getP(x)] <- NaN
out/log(2) # numeric matrix
},
exp =,
cosh =,
sinh =,
acos =,
acosh =,
asin =,
asinh =,
atan =,
atanh =,
cos =,
sin =,
tan =,
tanh =,
trunc =,
lgamma =,
cumsum =,
gamma =,
ceiling=,
floor =,
stop(gettextf("Function %s not implemented on brobmat objects", dQuote(.Generic)))
)
} )
setMethod("Arith",signature(e1 = "brobmat", e2="missing"),
function(e1,e2){
switch(.Generic,
"+" = e1,
"-" = newbrobmat(getX(e1),positive=!getP(e1)),
stop(gettextf("unary operator %s not implemented on brobmat objects", dQuote(.Generic)))
)
} )
"brobmat.arith" <- function(e1,e2){
switch(.Generic,
"+" = brobmat.add (e1, e2),
"-" = brobmat.add (e1, -e2),
"*" = brobmat.mult (e1, e2),
"/" = brobmat.mult (e1, brobmat.inverse(e2)),
"^" = brobmat.power(e1, e2),
stop(gettextf("binary operator %s not implemented on Brobdingnagian numbers", dQuote(.Generic)))
) }
setMethod("Arith", signature(e1 = "brobmat", e2="brob" ), brobmat.arith)
setMethod("Arith", signature(e1 = "brob" , e2="brobmat"), brobmat.arith)
setMethod("Arith", signature(e1 = "brobmat", e2="ANY" ), brobmat.arith)
setMethod("Arith", signature(e1 = "ANY" , e2="brobmat"), brobmat.arith)
setMethod("Arith", signature(e1 = "brobmat", e2="brobmat"), brobmat.arith)
`getat` <- function(e1,e2=e1){
if(length(e1)>=length(e2)){
return(attributes(getX(e1)))
} else {
return(attributes(getX(e2)))
}
}
`brobmat.add` <- function(e1,e2){
out <- as.brob(e1) + as.brob(e2)
jj <- getX(out)
attributes(jj) <- getat(e1,e2)
return(newbrobmat(jj,getP(out)))
}
`brobmat.mult` <- function(e1,e2){
out <- as.brob(e1) * as.brob(e2)
jj <- getX(out)
attributes(jj) <- getat(e1,e2)
return(newbrobmat(jj,getP(out)))
}
`brobmat.inverse` <- function(e1){
if(is.brobmat(e1)){
out <- 1/as.brob(e1)
jj <- getX(out)
attributes(jj) <- getat(e1)
return(newbrobmat(jj,getP(out)))
} else {
return(1/e1)
}
}
`brobmat.power` <- function(e1,e2){
out <- as.brob(e1) ^ as.brob(e2)
jj <- getX(out)
attributes(jj) <- getat(e1,e2)
return(newbrobmat(jj,getP(out)))
}
"brobmat.equal" <- function(e1,e2){
out <- as.brob(e1) == as.brob(e2)
attributes(out) <- getat(e1,e2)
return(out)
}
"brobmat.greater" <- function(e1,e2){
out <- as.brob(e1) > as.brob(e2)
attributes(out) <- getat(e1,e2)
return(out)
}
"brobmat.compare" <- function(e1,e2){
if( (length(e1) == 0) | (length(e2)==0)) {
return(logical(0))
}
switch(.Generic,
"==" = brobmat.equal(e1,e2),
"!=" = !brobmat.equal(e1,e2),
">" = brobmat.greater(e1,e2),
"<" = !brobmat.greater(e1,e2) & !brobmat.equal(e1,e2),
">=" = brobmat.greater(e1,e2) | brobmat.equal(e1,e2),
"<=" = !brobmat.greater(e1,e2) | brobmat.equal(e1,e2),
stop(gettextf("comparison operator %s not implemented on brobmat objects", dQuote(.Generic)))
)
}
setMethod("Compare", signature(e1="brobmat", e2="ANY" ), brobmat.compare)
setMethod("Compare", signature(e1="ANY" , e2="brobmat"), brobmat.compare)
setMethod("Compare", signature(e1="brobmat", e2="brobmat"), brobmat.compare)
`brobmat_matrixprod` <- function(x,y){
stopifnot(ncol(x)==nrow(y))
out <- brobmat(NA,nrow(x),ncol(y))
for(i in seq_len(nrow(x))){
for(j in seq_len(ncol(y))){
out[i,j] <- sum(x[i,,drop=TRUE]*y[,j,drop=TRUE])
} # j loop closes
} # i loop closes
if(!is.null(rownames(x))){rownames(out) <- rownames(x)}
if(!is.null(colnames(y))){colnames(out) <- colnames(y)}
return(out)
}
setMethod("%*%", signature(x="brobmat", y="ANY" ), brobmat_matrixprod)
setMethod("%*%", signature(x="ANY" , y="brobmat"), brobmat_matrixprod)
setMethod("%*%", signature(x="brobmat", y="brobmat"), brobmat_matrixprod)
setGeneric("as.vector")
setMethod("as.vector", signature(x="brobmat"), function(x){as.brob(x)})
setMethod("as.vector", signature(x="brob"), function(x){x})
setGeneric("rownames")
setMethod("rownames", signature(x="brobmat"), function(x){rownames(getX(x))})
setGeneric("colnames")
setMethod("colnames", signature(x="brobmat"), function(x){colnames(getX(x))})
setGeneric("dimnames")
setMethod("dimnames", signature(x="brobmat"), function(x){dimnames(getX(x))})
setGeneric("rownames<-")
setMethod("rownames<-", signature(x="brobmat"),
function(x,value){
jj <- getX(x)
rownames(jj) <- value
return(newbrobmat(jj,getP(x))) # brobmat() without positive= would drop the signs
} )
setGeneric("colnames<-")
setMethod("colnames<-", signature(x="brobmat"),
function(x,value){
jj <- getX(x)
colnames(jj) <- value
return(newbrobmat(jj,getP(x)))
} )
setGeneric("dimnames<-")
setMethod("dimnames<-", signature(x="brobmat"),
function(x,value){
jj <- getX(x)
dimnames(jj) <- value
return(newbrobmat(jj,getP(x)))
} )
setGeneric("diag", function(x, ...){standardGeneric("diag")})
setMethod("diag", signature(x="brobmat"),function(x,...){brob(diag(getX(x)),diag(getP(x)))})
setMethod("diag", signature(x="ANY"), function(x,...){base::diag(x)})
setGeneric("t", function(x, ...) standardGeneric("t"))
setMethod("t", signature(x="brobmat"),function(x,...){brob(t(getX(x)),t(getP(x)))})
setMethod("t", signature(x="ANY"),function(x,...){base::t(x)})
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/R/matrix.R
|
## x[] <- value
setReplaceMethod("[",
signature(x = "brobmat",
i = "missing", j = "missing",
value = "ANY"),
function (x, i, j, ..., value){
value <- as.brob(value)
jj.x <- getX(x)
jj.pos <- getP(x)
jj.x[] <- getX(value) # matrix or vector
jj.pos[] <- getP(value)
return(newbrobmat(x=jj.x,positive=jj.pos))
} )
## x[i,] <- value
setReplaceMethod("[",
signature(x = "brobmat",
i = "index", j = "missing",
value = "ANY"),
function (x, i, j, ..., value){
value <- as.brob(value)
jj.x <- getX(x)
jj.pos <- getP(x)
jj.x[i,] <- getX(value) # matrix or vector
jj.pos[i,] <- getP(value)
return(newbrobmat(x=jj.x,positive=jj.pos))
} )
## x[,j] <- value
setReplaceMethod("[",
signature(x = "brobmat",
i = "missing", j = "index",
value = "ANY"),
function (x, i, j, ..., value){
value <- as.brob(value)
jj.x <- getX(x)
jj.pos <- getP(x)
jj.x[,j] <- getX(value) # matrix or vector
jj.pos[,j] <- getP(value)
return(newbrobmat(x=jj.x,positive=jj.pos))
} )
## x[cbind(1:3,2:4)] <- value
setReplaceMethod("[",
signature(x = "brobmat",
i = "matrix", j = "missing",
value = "ANY"),
function (x, i, j, ..., value){
value <- as.brob(value)
jj.x <- getX(x)
jj.pos <- getP(x)
jj.x[i] <- getX(value) # matrix or vector
jj.pos[i] <- getP(value)
return(newbrobmat(x=jj.x,positive=jj.pos))
} )
## x[i,j] <- value
setReplaceMethod("[",
signature(x = "brobmat",
i = "index", j = "index",
value = "ANY"),
function (x, i, j, ..., value){
value <- as.brob(value)
jj.x <- getX(x)
jj.pos <- getP(x)
jj.x[i,j] <- getX(value) # matrix or vector
jj.pos[i,j] <- getP(value)
return(newbrobmat(x=jj.x,positive=jj.pos))
} )
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/R/replace.R
|
### R code from vignette source 'Brobdingnag.Rnw'
###################################################
### code chunk number 1: googol_definition
###################################################
###################################################
### code chunk number 2: Brobdingnag.Rnw:76-77
###################################################
require(Brobdingnag)
###################################################
### code chunk number 3: Brobdingnag.Rnw:78-79
###################################################
googol <- as.brob(10)^100
###################################################
### code chunk number 4: define_f
###################################################
stirling <- function(n){n^n*exp(-n)*sqrt(2*pi*n)}
###################################################
### code chunk number 5: f_of_a_googol
###################################################
stirling(googol)
###################################################
### code chunk number 6: TwoToTheGoogolth
###################################################
2^(1/googol)
###################################################
### code chunk number 7: define_function_f
###################################################
f <- function(x){as.numeric( (pi*x -3*x -(pi-3)*x)/x)}
###################################################
### code chunk number 8: try.f.with.one.seventh
###################################################
f(1/7)
f(as.brob(1/7))
###################################################
### code chunk number 9: try.f.with.a.googol
###################################################
f(1e100)
f(as.brob(1e100))
###################################################
### code chunk number 10: try_f_with_bignumbers
###################################################
f(as.brob(10)^1000)
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/inst/doc/Brobdingnag.R
|
### R code from vignette source 'S4_brob.Rnw'
###################################################
### code chunk number 1: setClass
###################################################
###################################################
### code chunk number 2: S4_brob.Rnw:125-134
###################################################
setClass("swift",
representation = "VIRTUAL"
)
setClass("brob",
representation = representation(x="numeric",positive="logical"),
prototype = list(x=numeric(),positive=logical()),
contains = "swift"
)
###################################################
### code chunk number 3: new
###################################################
new("brob",x=1:10,positive=rep(TRUE,10))
###################################################
### code chunk number 4: new_flaky_arguments (eval = FALSE)
###################################################
## new("brob",x=1:10,positive=c(TRUE,FALSE,FALSE))
###################################################
### code chunk number 5: validity_method
###################################################
.Brob.valid <- function(object){
len <- length(object@positive)
if(len != length(object@x)){
return("length mismatch")
} else {
return(TRUE)
}
}
###################################################
### code chunk number 6: call_setValidity
###################################################
setValidity("brob", .Brob.valid)
###################################################
### code chunk number 7: brob_definition
###################################################
"brob" <- function(x=double(),positive){
if(missing(positive)){
positive <- rep(TRUE,length(x))
}
if(length(positive)==1){
positive <- rep(positive,length(x))
}
new("brob",x=as.numeric(x),positive=positive)
}
###################################################
### code chunk number 8: call_brob_recycling
###################################################
brob(1:10,FALSE)
###################################################
### code chunk number 9: use.function.is
###################################################
is(brob(1:5),"brob")
###################################################
### code chunk number 10: is.brob_definition
###################################################
is.brob <- function(x){is(x,"brob")}
is.glub <- function(x){is(x,"glub")}
###################################################
### code chunk number 11: as.brob_definition
###################################################
"as.brob" <- function(x){
if(is.brob(x)){
return(x)
} else if(is.complex(x)) {
warning("imaginary parts discarded")
return(Recall(Re(x)))
} else if(is.glub(x)){
warning("imaginary parts discarded")
return(Re(x))
} else {
return(brob(log(abs(x)), x>=0))
}
}
###################################################
### code chunk number 12: as.brob_call
###################################################
as.brob(1:10)
###################################################
### code chunk number 13: setAs
###################################################
###################################################
### code chunk number 14: S4_brob.Rnw:363-368
###################################################
setAs("brob", "numeric", function(from){
out <- exp(from@x)
out[!from@positive] <- -out[!from@positive]
return(out)
} )
###################################################
### code chunk number 15: setMethodbrob
###################################################
setMethod("as.numeric",signature(x="brob"),function(x){as(x,"numeric")})
###################################################
### code chunk number 16: setAsbrobcomplex
###################################################
setAs("brob", "complex", function(from){
return(as.numeric(from)+ 0i)
} )
setMethod("as.complex",signature(x="brob"),function(x){as(x,"complex")})
###################################################
### code chunk number 17: asCheck
###################################################
x <- as.brob(1:4)
x
as.numeric(x)
###################################################
### code chunk number 18: print_methods
###################################################
.Brob.print <- function(x, digits=5){
noquote( paste(c("-","+")[1+x@positive],"exp(",signif(x@x,digits),")",sep=""))
}
###################################################
### code chunk number 19: print.brob
###################################################
print.brob <- function(x, ...){
jj <- .Brob.print(x, ...)
print(jj)
return(invisible(jj))
}
###################################################
### code chunk number 20: setmethodbrobshow
###################################################
setMethod("show", "brob", function(object){print.brob(object)})
###################################################
### code chunk number 21: as.brob14
###################################################
as.brob(1:4)
###################################################
### code chunk number 22: get.n.set
###################################################
###################################################
### code chunk number 23: S4_brob.Rnw:462-466
###################################################
setGeneric("getX",function(x){standardGeneric("getX")})
setGeneric("getP",function(x){standardGeneric("getP")})
setMethod("getX","brob",function(x){x@x})
setMethod("getP","brob",function(x){x@positive})
###################################################
### code chunk number 24: setlength
###################################################
###################################################
### code chunk number 25: S4_brob.Rnw:478-479
###################################################
setMethod("length","brob",function(x){length(x@x)})
###################################################
### code chunk number 26: setmethodSquareBrace
###################################################
###################################################
### code chunk number 27: S4_brob.Rnw:489-496
###################################################
setMethod("[", "brob",
function(x, i, j, drop){
if(!missing(j)){
warning("second argument to extractor function ignored")
}
brob(x@x[i], x@positive[i])
} )
###################################################
### code chunk number 28: setReplaceMethod
###################################################
###################################################
### code chunk number 29: S4_brob.Rnw:509-526
###################################################
setReplaceMethod("[",signature(x="brob"),
function(x,i,j,value){
if(!missing(j)){
warning("second argument to extractor function ignored")
}
jj.x <- x@x
jj.pos <- x@positive
if(is.brob(value)){
jj.x[i] <- value@x
jj.pos[i] <- value@positive
return(brob(x=jj.x,positive=jj.pos))
} else {
x[i] <- as.brob(value)
return(x)
}
} )
###################################################
### code chunk number 30: .Brob.cPair
###################################################
.Brob.cPair <- function(x,y){
x <- as.brob(x)
y <- as.brob(y)
brob(c(x@x,y@x),c(x@positive,y@positive))
}
###################################################
### code chunk number 31: setGeneric_cbrob
###################################################
###################################################
### code chunk number 32: S4_brob.Rnw:565-566
###################################################
setGeneric(".cPair", function(x,y){standardGeneric(".cPair")})
###################################################
### code chunk number 33: setMethod.Cpair
###################################################
###################################################
### code chunk number 34: S4_brob.Rnw:576-580
###################################################
setMethod(".cPair", c("brob", "brob"), function(x,y){.Brob.cPair(x,y)})
setMethod(".cPair", c("brob", "ANY"), function(x,y){.Brob.cPair(x,as.brob(y))})
setMethod(".cPair", c("ANY", "brob"), function(x,y){.Brob.cPair(as.brob(x),y)})
setMethod(".cPair", c("ANY", "ANY"), function(x,y){c(x,y)})
###################################################
### code chunk number 35: cbrob
###################################################
"cbrob" <- function(x, ...) {
if(nargs()<3)
.cPair(x,...)
else
.cPair(x, Recall(...))
}
###################################################
### code chunk number 36: test.cbrob
###################################################
a <- 1:3
b <- as.brob(1e100)
cbrob(a,a,b,a)
###################################################
### code chunk number 37: sqrtmethod
###################################################
###################################################
### code chunk number 38: S4_brob.Rnw:630-633
###################################################
setMethod("sqrt","brob", function(x){
brob(ifelse(x@positive,x@x/2, NaN),TRUE)
} )
###################################################
### code chunk number 39: checklogsqrt
###################################################
sqrt(brob(4))
###################################################
### code chunk number 40: mathgeneric
###################################################
###################################################
### code chunk number 41: S4_brob.Rnw:645-676
###################################################
setMethod("Math", "brob",
function(x){
switch(.Generic,
abs = brob(x@x),
log = {
out <- x@x
out[!x@positive] <- NaN
out
},
exp = brob(x),
cosh = {(brob(x) + brob(-x))/2},
sinh = {(brob(x) - brob(-x))/2},
acos =,
acosh =,
asin =,
asinh =,
atan =,
atanh =,
cos =,
sin =,
tan =,
tanh =,
trunc = callGeneric(as.numeric(x)),
lgamma =,
cumsum =,
gamma =,
ceiling=,
floor = as.brob(callGeneric(as.numeric(x))),
stop(paste(.Generic, "not allowed on Brobdingnagian numbers"))
)
} )
###################################################
### code chunk number 42: checktrig
###################################################
sin(brob(4))
###################################################
### code chunk number 43: .brob.arithstuff
###################################################
.Brob.negative <- function(e1){
brob(e1@x,!e1@positive)
}
.Brob.ds <- function(e1,e2){
xor(e1@positive,e2@positive)
}
.Brob.add <- function(e1,e2){
e1 <- as.brob(e1)
e2 <- as.brob(e2)
jj <- rbind(e1@x,e2@x)
x1 <- jj[1,]
x2 <- jj[2,]
out.x <- double(length(x1))
jj <- rbind(e1@positive,e2@positive)
p1 <- jj[1,]
p2 <- jj[2,]
out.pos <- p1
ds <- .Brob.ds(e1,e2)
ss <- !ds
out.x[ss] <- pmax(x1[ss],x2[ss]) + log1p(+exp(-abs(x1[ss]-x2[ss])))
out.x[ds] <- pmax(x1[ds],x2[ds]) + log1p(-exp(-abs(x1[ds]-x2[ds])))
out.x[ (x1 == -Inf) & (x2 == -Inf)] <- -Inf
out.pos <- p1
out.pos[ds] <- xor((x1[ds] > x2[ds]) , (!p1[ds]) )
return(brob(out.x,out.pos))
}
.Brob.mult <- function(e1,e2){
e1 <- as.brob(e1)
e2 <- as.brob(e2)
return(brob(e1@x + e2@x, !.Brob.ds(e1,e2)))
}
.Brob.power <- function(e1,e2){
stopifnot(is.brob(e1) | is.brob(e2))
if(is.brob(e2)){
return(brob(log(e1) * brob(e2@x), TRUE))
} else {
s <- as.integer(2*e1@positive-1)
return(brob(e1@x*as.brob(e2), (s^as.numeric(e2))>0))
}
}
.Brob.inverse <- function(b){brob(-b@x,b@positive)}
###################################################
### code chunk number 44: setMethodArithUnary
###################################################
###################################################
### code chunk number 45: S4_brob.Rnw:767-776
###################################################
setMethod("Arith",signature(e1 = "brob", e2="missing"),
function(e1,e2){
switch(.Generic,
"+" = e1,
"-" = .Brob.negative(e1),
stop(paste("Unary operator", .Generic,
"not allowed on Brobdingnagian numbers"))
)
} )
###################################################
### code chunk number 46: check_minus_5
###################################################
-brob(5)
###################################################
### code chunk number 47: brob.arith
###################################################
.Brob.arith <- function(e1,e2){
switch(.Generic,
"+" = .Brob.add (e1, e2),
"-" = .Brob.add (e1, .Brob.negative(as.brob(e2))),
"*" = .Brob.mult (e1, e2),
"/" = .Brob.mult (e1, .Brob.inverse(as.brob(e2))),
"^" = .Brob.power(e1, e2),
stop(paste("binary operator \"", .Generic, "\" not defined for Brobdingnagian numbers"))
) }
###################################################
### code chunk number 48: setMethodArith
###################################################
setMethod("Arith", signature(e1 = "brob", e2="ANY"), .Brob.arith)
setMethod("Arith", signature(e1 = "ANY", e2="brob"), .Brob.arith)
setMethod("Arith", signature(e1 = "brob", e2="brob"), .Brob.arith)
###################################################
### code chunk number 49: check_addition
###################################################
1e100 + as.brob(10)^100
###################################################
### code chunk number 50: brob.equalandgreater
###################################################
.Brob.equal <- function(e1,e2){
(e1@x==e2@x) & (e1@positive==e2@positive)
}
.Brob.greater <- function(e1,e2){
jj.x <- rbind(e1@x,e2@x)
jj.p <- rbind(e1@positive,e2@positive)
ds <- .Brob.ds(e1,e2)
ss <- !ds
greater <- logical(length(ss))
greater[ds] <- jj.p[1,ds]
greater[ss] <- jj.p[1,ss] & (jj.x[1,ss] > jj.x[2,ss])
return(greater)
}
###################################################
### code chunk number 51: brob.compare
###################################################
".Brob.compare" <- function(e1,e2){
e1 <- as.brob(e1)
e2 <- as.brob(e2)
switch(.Generic,
"==" = .Brob.equal(e1,e2),
"!=" = !.Brob.equal(e1,e2),
">" = .Brob.greater(e1,e2),
"<" = !.Brob.greater(e1,e2) & !.Brob.equal(e1,e2),
">=" = .Brob.greater(e1,e2) | .Brob.equal(e1,e2),
"<=" = !.Brob.greater(e1,e2) | .Brob.equal(e1,e2),
stop(paste(.Generic, "not supported for Brobdingnagian numbers"))
)
}
###################################################
### code chunk number 52: setMethodCompare
###################################################
###################################################
### code chunk number 53: S4_brob.Rnw:872-875
###################################################
setMethod("Compare", signature(e1="brob", e2="ANY" ), .Brob.compare)
setMethod("Compare", signature(e1="ANY" , e2="brob"), .Brob.compare)
setMethod("Compare", signature(e1="brob", e2="brob"), .Brob.compare)
###################################################
### code chunk number 54: check.compare
###################################################
as.brob(10) < as.brob(11)
as.brob(10) <= as.brob(10)
###################################################
### code chunk number 55: brob.logic
###################################################
.Brob.logic <- function(e1,e2){
stop("No logic currently implemented for Brobdingnagian numbers")
}
###################################################
### code chunk number 56: setmethodlogic
###################################################
###################################################
### code chunk number 57: S4_brob.Rnw:906-909
###################################################
setMethod("Logic",signature(e1="swift",e2="ANY"), .Brob.logic)
setMethod("Logic",signature(e1="ANY",e2="swift"), .Brob.logic)
setMethod("Logic",signature(e1="swift",e2="swift"), .Brob.logic)
###################################################
### code chunk number 58: logchunk
###################################################
if(!isGeneric("log")){
setGeneric("log",group="Math")
}
###################################################
### code chunk number 59: miscgenerics
###################################################
###################################################
### code chunk number 60: S4_brob.Rnw:954-1005
###################################################
if(!isGeneric("sum")){
setGeneric("max", function(x, ..., na.rm = FALSE)
{
standardGeneric("max")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::max(x, ..., na.rm = na.rm)
},
group = "Summary")
setGeneric("min", function(x, ..., na.rm = FALSE)
{
standardGeneric("min")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::min(x, ..., na.rm = na.rm)
},
group = "Summary")
setGeneric("range", function(x, ..., na.rm = FALSE)
{
standardGeneric("range")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::range(x, ..., na.rm = na.rm)
},
group = "Summary")
setGeneric("prod", function(x, ..., na.rm = FALSE)
{
standardGeneric("prod")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::prod(x, ..., na.rm = na.rm)
},
group = "Summary")
setGeneric("sum", function(x, ..., na.rm = FALSE)
{
standardGeneric("sum")
},
useAsDefault = function(x, ..., na.rm = FALSE)
{
base::sum(x, ..., na.rm = na.rm)
},
group = "Summary")
}
###################################################
### code chunk number 61: brob.maxmin
###################################################
.Brob.max <- function(x, ..., na.rm=FALSE){
p <- x@positive
val <- x@x
if(any(p)){
return(brob(max(val[p])))
} else {
return(brob(min(val),FALSE))
}
}
.Brob.prod <- function(x){
p <- x@positive
val <- x@x
return(brob(sum(val),(sum(!p)%%2)==0))
}
.Brob.sum <- function(x){
.Brob.sum.allpositive( x[x>0]) -
.Brob.sum.allpositive(-x[x<0])
}
.Brob.sum.allpositive <- function(x){
if(length(x)<1){return(as.brob(0))}
val <- x@x
p <- x@positive
mv <- max(val)
return(brob(mv + log1p(sum(exp(val[-which.max(val)]-mv))),TRUE))
}
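## Quick sanity check of the log-sum-exp trick used in
## .Brob.sum.allpositive (illustrative values, not part of the original
## vignette): the shifted form should agree with naive summation for
## moderate magnitudes.
v <- c(2, 3, 5)
max(v) + log1p(sum(exp(v[-which.max(v)] - max(v))))  # log(exp(2)+exp(3)+exp(5))
log(sum(exp(v)))                                     # same value, computed naively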
###################################################
### code chunk number 62: setmethodsummary
###################################################
###################################################
### code chunk number 63: S4_brob.Rnw:1050-1062
###################################################
setMethod("Summary", "brob",
function(x, ..., na.rm=FALSE){
switch(.Generic,
max = .Brob.max( x, ..., na.rm=na.rm),
min = -.Brob.max(-x, ..., na.rm=na.rm),
range = cbrob(min(x,na.rm=na.rm),max(x,na.rm=na.rm)),
prod = .Brob.prod(x),
sum = .Brob.sum(x),
stop(paste(.Generic, "not allowed on Brobdingnagian numbers"))
)
}
)
###################################################
### code chunk number 64: checksum
###################################################
sum(as.brob(1:100)) - 5050
###################################################
### code chunk number 65: factorial
###################################################
stirling <- function(x){sqrt(2*pi*x)*exp(-x)*x^x}
###################################################
### code chunk number 66: use.stirling
###################################################
stirling(100)
stirling(as.brob(100))
###################################################
### code chunk number 67: compare.two.stirlings
###################################################
as.numeric(stirling(100)/stirling(as.brob(100)))
###################################################
### code chunk number 68: stirling.of.1000
###################################################
stirling(1000)
stirling(as.brob(1000))
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/inst/doc/S4_brob.R
|
---
title: "Brobdingnagian matrices"
author: "Robin K. S. Hankin"
date: "`r Sys.Date()`"
vignette: >
%\VignetteIndexEntry{brobmat}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r set-options, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>", dev = "png", fig.width = 7, fig.height = 3.5, message = FALSE, warning = FALSE)
options(width = 80, tibble.width = Inf)
```
# Brobdingnagian matrices
R package `Brobdingnag` has basic functionality for matrices. It
includes matrix multiplication and addition, but determinants and
matrix inverses are not implemented. First load the package:
```{r}
library("Brobdingnag")
```
The standard way to create a Brobdingnagian matrix (a `brobmat`) is
to use function `brobmat()` which takes arguments similar to
`matrix()` and returns a matrix of entries created with `brob()`:
```{r}
M1 <- brobmat(-10:13,4,6)
colnames(M1) <- state.abb[1:6]
M1
```
Function `brobmat()` takes an argument `positive` which specifies the sign:
```{r}
M2 <- brobmat(
c(1,104,-66,45,1e40,-2e40,1e-200,232.2),2,4,
positive=c(T,F,T,T,T,F,T,T))
M2
```
Standard matrix arithmetic is implemented, thus:
```{r}
rownames(M2) <- c("a","b")
colnames(M2) <- month.abb[1:4]
M2
M2[2,3] <- 0
M2
M2+1000
```
We can also do matrix multiplication, although it is slow:
```{r}
M2 %*% M1
```
## Numerical verification: matrix multiplication
We will verify matrix multiplication by carrying out the same
operation in two different ways. First, create two largish
Brobdingnagian matrices:
```{r}
nrows <- 11
ncols <- 18
M3 <- brobmat(rnorm(nrows*ncols),nrows,ncols,positive=sample(c(T,F),nrows*ncols,replace=T))
M4 <- brobmat(rnorm(nrows*ncols),ncols,nrows,positive=sample(c(T,F),nrows*ncols,replace=T))
M3[1:3,1:3]
```
Now calculate the matrix product by coercing to numeric matrices and
multiplying:
```{r}
p1 <- as.matrix(M3) %*% as.matrix(M4)
```
and then by using Brobdingnagian matrix multiplication, coercing the
result to numeric:
```{r}
p2 <- as.matrix(M3 %*% M4)
```
The difference:
```{r}
max(abs(p1-p2))
```
is small. Now the same check the other way round:
```{r}
q1 <- M3 %*% M4
q2 <- as.brobmat(as.matrix(M3) %*% as.matrix(M4))
max(abs(as.brob(q1-q2)))
```
## Numerical verification: integration with the `cubature` package
The matrix functionality of the `Brobdingnag` package was originally
written to leverage the functionality of the `cubature` package. Here
I give some numerical verification for this.
Suppose we wish to evaluate
\[
\int_{x=0}^{x=4}(x^2-4)\,dx
\]
using numerical methods. Note that the integrand takes both positive and
negative values on the range of integration; the theoretical value is $\frac{16}{3}=5.33\ldots$.
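For reference, this follows directly from the antiderivative:
\[
\int_{x=0}^{x=4}(x^2-4)\,dx=\left[\frac{x^3}{3}-4x\right]_{0}^{4}=\frac{64}{3}-16=\frac{16}{3}.
\]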
The `cubature` idiom for this would be
```{r,label = numericalintegration}
library("cubature")
f.numeric <- function(x){x^2 - 4}
out.num <- cubature::hcubature(f = f.numeric, lowerLimit = 0, upperLimit = 4, vectorInterface = TRUE)
out.num
```
and the Brobdingnagian equivalent would be
```{r,label = numericalintegrationbrob}
f.brob <- function(x) {
x <- as.brob(x[1, ])
as.matrix( brobmat(x^2 - 4, ncol = length(x)))
}
out.brob <- cubature::hcubature(f = f.brob, lowerLimit = 0, upperLimit = 4, vectorInterface = TRUE)
out.brob
```
We may compare the two methods:
```{r,label=comparebrobandnumeric}
out.brob$integral - out.num$integral
```
|
/scratch/gouwar.j/cran-all/cranData/Brobdingnag/vignettes/brobmat.Rmd
|
# @file Prior.R
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of BrokenAdaptiveRidge
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Marc A. Suchard
# @author Ning Li
#' @title Create a BAR Cyclops prior object
#'
#' @description
#' \code{createBarPrior} creates a BAR Cyclops prior object for use with \code{\link{fitCyclopsModel}}.
#'
#' @param penalty Specifies the BAR penalty; possible values are `BIC` or `AIC` or a numeric value
#' @param exclude A vector of numbers or covariateId names to exclude from prior
#' @param forceIntercept Logical: Force intercept coefficient into regularization
#' @param fitBestSubset Logical: Fit final subset with no regularization
#' @param initialRidgeVariance Numeric: variance used for algorithm initiation
#' @param tolerance Numeric: maximum abs change in coefficient estimates from successive iterations to achieve convergence
#' @param maxIterations	Numeric: maximum iterations to achieve convergence
#' @param threshold Numeric: absolute threshold at which to force coefficient to 0
#' @param delta Numeric: change from 2 in ridge norm dimension
#'
#' @examples
#' prior <- createBarPrior(penalty = "bic")
#'
#' @return
#' A BAR Cyclops prior object of class inheriting from
#' \code{"cyclopsPrior"} for use with \code{fitCyclopsModel}.
#'
#' @import Cyclops
#'
#' @export
createBarPrior <- function(penalty = "bic",
exclude = c(),
forceIntercept = FALSE,
fitBestSubset = FALSE,
initialRidgeVariance = 1E4,
tolerance = 1E-8,
maxIterations = 1E4,
threshold = 1E-6,
delta = 0) {
# TODO Check that penalty (and other arguments) is valid
fitHook <- function(...) {
# closure to capture BAR parameters
barHook(fitBestSubset, initialRidgeVariance, tolerance,
maxIterations, threshold, delta, ...)
}
structure(list(penalty = penalty,
exclude = exclude,
forceIntercept = forceIntercept,
fitHook = fitHook),
class = "cyclopsPrior")
}
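# A hypothetical end-to-end sketch (not run; assumes a binary outcome `y`
# and a covariate matrix `x` in scope; Cyclops function names as exported
# by the Cyclops package):
#
#   cyclopsData <- Cyclops::createCyclopsData(y ~ x, modelType = "lr")
#   prior <- createBarPrior(penalty = "bic")
#   fit <- Cyclops::fitCyclopsModel(cyclopsData, prior = prior)
#   coef(fit)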
# Below are package-private functions
barHook <- function(fitBestSubset,
initialRidgeVariance,
tolerance,
maxIterations,
cutoff,
delta,
cyclopsData,
barPrior,
control,
weights,
forceNewObject,
returnEstimates,
startingCoefficients,
fixedCoefficients) {
# Getting starting values
startFit <- Cyclops::fitCyclopsModel(cyclopsData, prior = createBarStartingPrior(cyclopsData,
exclude = barPrior$exclude,
forceIntercept = barPrior$forceIntercept,
initialRidgeVariance = initialRidgeVariance),
control, weights, forceNewObject, returnEstimates, startingCoefficients, fixedCoefficients)
priorType <- createBarPriorType(cyclopsData, barPrior$exclude, barPrior$forceIntercept)
include <- setdiff(c(1:Cyclops::getNumberOfCovariates(cyclopsData)), priorType$excludeIndices)
pre_coef <- coef(startFit)
penalty <- getPenalty(cyclopsData, barPrior)
futile.logger::flog.trace("Initial penalty: %f", penalty)
continue <- TRUE
count <- 0
converged <- FALSE
while (continue) {
count <- count + 1
working_coef <- ifelse(abs(pre_coef) <= cutoff, 0.0, pre_coef)
fixed <- working_coef == 0.0
variance <- abs(working_coef) ^ (2 - delta) / penalty
if (!is.null(priorType$excludeIndices)) {
working_coef[priorType$excludeIndices] <- pre_coef[priorType$excludeIndices]
fixed[priorType$excludeIndices] <- FALSE
variance[priorType$excludeIndices] <- 0
}
prior <- Cyclops::createPrior(priorType$types, variance = variance,
forceIntercept = barPrior$forceIntercept)
fit <- Cyclops::fitCyclopsModel(cyclopsData,
prior = prior,
control, weights, forceNewObject,
startingCoefficients = working_coef,
fixedCoefficients = fixed)
coef <- coef(fit)
end <- min(10, length(variance))
futile.logger::flog.trace("Itr: %d", count)
futile.logger::flog.trace("\tVar : ", variance[1:end], capture = TRUE)
futile.logger::flog.trace("\tCoef: ", coef[1:end], capture = TRUE)
futile.logger::flog.trace("")
if (max(abs(coef - pre_coef)) < tolerance) {
converged <- TRUE
} else {
pre_coef <- coef
}
if (converged || count >= maxIterations) {
continue <- FALSE
}
}
if (count >= maxIterations) {
stop(paste0('Algorithm did not converge after ',
maxIterations, ' iterations.',
' Estimates may not be stable.'))
}
if (fitBestSubset) {
fit <- Cyclops::fitCyclopsModel(cyclopsData, prior = createPrior("none"),
control, weights, forceNewObject, fixedCoefficients = fixed)
}
class(fit) <- c(class(fit), "cyclopsBarFit")
fit$barConverged <- converged
fit$barIterations <- count
fit$barFinalPriorVariance <- variance
return(fit)
}
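# Note on the reweighting step above (illustrative arithmetic, not code
# from the original): with delta = 0 and penalty = 2, a working
# coefficient of 0.5 receives prior variance |0.5|^2 / 2 = 0.125, while a
# coefficient of 0.05 receives 0.00125, so small coefficients get
# ever-tighter ridge priors and are eventually thresholded to zero.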
createBarStartingPrior <- function(cyclopsData,
exclude,
forceIntercept,
initialRidgeVariance) {
Cyclops::createPrior("normal", variance = initialRidgeVariance, exclude = exclude, forceIntercept = forceIntercept)
}
createBarPriorType <- function(cyclopsData,
exclude,
forceIntercept) {
exclude <- Cyclops:::.checkCovariates(cyclopsData, exclude)
if (Cyclops:::.cyclopsGetHasIntercept(cyclopsData) && !forceIntercept) {
interceptId <- bit64::as.integer64(Cyclops:::.cyclopsGetInterceptLabel(cyclopsData))
warn <- FALSE
if (is.null(exclude)) {
exclude <- c(interceptId)
warn <- TRUE
} else {
if (!interceptId %in% exclude) {
exclude <- c(interceptId, exclude)
warn <- TRUE
}
}
if (warn) {
warning("Excluding intercept from regularization")
}
}
indices <- NULL
if (!is.null(exclude)) {
covariateIds <- Cyclops::getCovariateIds(cyclopsData)
indices <- which(covariateIds %in% exclude)
}
types <- rep("normal", Cyclops::getNumberOfCovariates(cyclopsData))
if (!is.null(exclude)) {
types[indices] <- "none"
}
list(types = types,
excludeCovariateIds = exclude,
excludeIndices = indices)
}
getPenalty <- function(cyclopsData, barPrior) {
if (is.numeric(barPrior$penalty)) {
return(barPrior$penalty)
}
if (barPrior$penalty == "bic") {
return(log(Cyclops::getNumberOfRows(cyclopsData)) / 2) # TODO Handle stratified models
} else {
stop("Unhandled BAR penalty type")
}
}
|
/scratch/gouwar.j/cran-all/cranData/BrokenAdaptiveRidge/R/Prior.R
|
# @file fastBarPrior.R
#
# Copyright 2023 Observational Health Data Sciences and Informatics
#
# This file is part of BrokenAdaptiveRidge
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @author Marc A. Suchard
# @author Ning Li
# @author Eric S. Kawaguchi
#' @title Create a fastBAR Cyclops prior object
#'
#' @description
#' \code{createFastBarPrior} creates a fastBAR Cyclops prior object for use with \code{\link{fitCyclopsModel}}.
#'
#' @param penalty Specifies the BAR penalty
#' @param exclude A vector of numbers or covariateId names to exclude from prior
#' @param forceIntercept Logical: Force intercept coefficient into regularization
#' @param fitBestSubset Logical: Fit final subset with no regularization
#' @param initialRidgeVariance Numeric: variance used for algorithm initiation
#' @param tolerance Numeric: maximum abs change in coefficient estimates from successive iterations to achieve convergence
#' @param maxIterations Numeric: maximum iterations to achieve convergence
#' @param threshold Numeric: absolute threshold at which to force coefficient to 0
#'
#' @examples
#' nobs = 500; ncovs = 100
#' prior <- createFastBarPrior(penalty = log(ncovs), initialRidgeVariance = 1 / log(ncovs))
#'
#' @return
#' A BAR Cyclops prior object of class inheriting from
#' \code{"cyclopsPrior"} for use with \code{fitCyclopsModel}.
#'
#' @import Cyclops
#'
#' @export
createFastBarPrior <- function(penalty = 0,
exclude = c(),
forceIntercept = FALSE,
fitBestSubset = FALSE,
initialRidgeVariance = 1E4,
tolerance = 1E-8,
maxIterations = 1E4,
threshold = 1E-6) {
# TODO Check that penalty (and other arguments) is valid
fitHook <- function(...) {
# closure to capture BAR parameters
fastBarHook(fitBestSubset, initialRidgeVariance, tolerance,
maxIterations, threshold, ...)
}
structure(list(penalty = penalty,
exclude = exclude,
forceIntercept = forceIntercept,
fitHook = fitHook),
class = "cyclopsPrior")
}
# Below are package-private functions
fastBarHook <- function(fitBestSubset,
initialRidgeVariance,
tolerance,
maxIterations,
delta, # receives `threshold` positionally from the closure above; currently unused
cyclopsData,
barPrior,
control,
weights,
forceNewObject,
returnEstimates,
startingCoefficients,
fixedCoefficients) {
# Getting starting values
startFit <- Cyclops::fitCyclopsModel(cyclopsData, prior = createBarStartingPrior(cyclopsData,
exclude = barPrior$exclude,
forceIntercept = barPrior$forceIntercept,
initialRidgeVariance = initialRidgeVariance),
control, weights, forceNewObject, returnEstimates, startingCoefficients, fixedCoefficients)
priorType <- createFastBarPriorType(cyclopsData, barPrior$exclude, barPrior$forceIntercept)
include <- setdiff(c(1:Cyclops::getNumberOfCovariates(cyclopsData)), priorType$excludeIndices)
working_coef <- coef(startFit)
penalty <- getPenalty(cyclopsData, barPrior)
futile.logger::flog.trace("Initial penalty: %f", penalty)
continue <- TRUE
count <- 0
converged <- FALSE
variance <- rep(1 / penalty, getNumberOfCovariates(cyclopsData)) #Create penalty for each covariate.
while (continue) {
count <- count + 1
#Note: Don't fix zeros as zero for next iteration.
#fixed <- working_coef == 0.0
if (!is.null(priorType$excludeIndices)) {
working_coef[priorType$excludeIndices]
#fixed[priorType$excludeIndices] <- FALSE
variance[priorType$excludeIndices] <- 0
}
prior <- Cyclops::createPrior(priorType$types, variance = variance,
forceIntercept = barPrior$forceIntercept)
#Fit fastBAR for one epoch
fit <- Cyclops::fitCyclopsModel(cyclopsData,
prior = prior,
control = createControl(convergenceType = "onestep"),
weights, forceNewObject,
startingCoefficients = working_coef)
coef <- coef(fit)
end <- min(10, length(variance))
futile.logger::flog.trace("Itr: %d", count)
futile.logger::flog.trace("\tVar : ", variance[1:end], capture = TRUE)
futile.logger::flog.trace("\tCoef: ", coef[1:end], capture = TRUE)
futile.logger::flog.trace("")
#Check for convergence
if (max(abs(coef - working_coef)) < tolerance) {
converged <- TRUE
} else {
working_coef <- coef
}
if (converged || count >= maxIterations) {
continue <- FALSE
}
}
if (count >= maxIterations) {
stop(paste0('Algorithm did not converge after ',
maxIterations, ' iterations.',
' Estimates may not be stable.'))
}
if (fitBestSubset) {
fit <- Cyclops::fitCyclopsModel(cyclopsData, prior = createPrior("none"),
control, weights, forceNewObject, fixedCoefficients = (working_coef == 0))
}
class(fit) <- c(class(fit), "cyclopsFastBarFit")
fit$barConverged <- converged
fit$barIterations <- count
fit$penalty <- penalty
fit$barFinalPriorVariance <- variance
return(fit)
}
createFastBarPriorType <- function(cyclopsData,
exclude,
forceIntercept) {
exclude <- Cyclops:::.checkCovariates(cyclopsData, exclude)
if (Cyclops:::.cyclopsGetHasIntercept(cyclopsData) && !forceIntercept) {
interceptId <- bit64::as.integer64(Cyclops:::.cyclopsGetInterceptLabel(cyclopsData))
warn <- FALSE
if (is.null(exclude)) {
exclude <- c(interceptId)
warn <- TRUE
} else {
if (!interceptId %in% exclude) {
exclude <- c(interceptId, exclude)
warn <- TRUE
}
}
if (warn) {
warning("Excluding intercept from regularization")
}
}
indices <- NULL
if (!is.null(exclude)) {
covariateIds <- Cyclops::getCovariateIds(cyclopsData)
indices <- which(covariateIds %in% exclude)
}
# "Unpenalize" excluded covariates
types <- rep("barupdate", Cyclops::getNumberOfCovariates(cyclopsData))
if (!is.null(exclude)) {
types[indices] <- "none"
}
list(types = types,
excludeCovariateIds = exclude,
excludeIndices = indices)
}
#Same as Prior.R
#createBarStartingPrior <- function(cyclopsData,
# exclude,
# forceIntercept,
# initialRidgeVariance) {
#
# Cyclops::createPrior("normal", variance = initialRidgeVariance, exclude = exclude, forceIntercept = forceIntercept)
#}
|
/scratch/gouwar.j/cran-all/cranData/BrokenAdaptiveRidge/R/fastBarPrior.R
|
#' convert file
#'
#' Convert a file using Brown Dog Conversion service
#' @param url The URL to the Brown Dog Server to use
#' @param input_file The input file: either a local file path or a file URL
#' @param output The output format extension
#' @param output_path The path for the created output file; it may specify a different filename. Note that the path must end with '/'
#' @param token Brown Dog access token
#' @param wait The amount of time to wait for the DAP service to respond. Default is 60
#' @param download The flag to download the result file. Default is true
#' @return The output filename
#' @import RCurl
#' @import httpuv
#' @examples
#' \dontrun{
#' key <- get_key("https://bd-api-dev.ncsa.illinois.edu", "your email", "password")
#' token <- get_token("https://bd-api-dev.ncsa.illinois.edu", key)
#' convert_file("https://bd-api-dev.ncsa.illinois.edu",
#' "http://browndog.ncsa.illinois.edu/examples/gi/Dongying_sample.csv", "xlsx", "/",
#' token)
#' }
#' @export
convert_file = function (url, input_file, output, output_path, token, wait=60, download=TRUE){
httpheader <- c(Accept="text/plain", Authorization = token)
curloptions <- list(httpheader = httpheader)
if(startsWith(input_file,'http://') || startsWith(input_file,'https://') || startsWith(input_file,'ftp://')){
convert_api <- paste0(url,"/v1/conversions/", output, "/", httpuv::encodeURIComponent(input_file))
result_bds <- getURL(convert_api,.opts = curloptions)
}
else{
convert_api <- paste0(url,"/v1/conversions/", output, "/")
result_bds <- RCurl::postForm(convert_api,"file"= RCurl::fileUpload(input_file),.opts = curloptions)
}
# conversion was not successful; return the error message as-is
if(!startsWith(result_bds, "http")){
return(result_bds)
}
result_url <- gsub('.*<a.*>(.*)</a>.*', '\\1', result_bds)
if (download){
inputbasename <- strsplit(basename(input_file),'\\.')
outputfile <- paste0(output_path,inputbasename[[1]][1],".", output)
output_filename <- download(result_url[1], outputfile, token, wait)
}else{
return(result_url[1])
}
return(output_filename)
}
|
/scratch/gouwar.j/cran-all/cranData/BrownDog/R/convert_file.R
|
#' Download file from browndog
#'
#' This will download a file; if a 404 is returned, it will wait until
#' the file is available. If the file is still not available after
#' `timeout` tries, it will return NA. If the file is downloaded, it
#' returns the name of the file.
#' @param url the url of the file to download
#' @param file the filename
#' @param token Brown Dog access token
#' @param timeout timeout number of seconds to wait for file (default 60)
#' @return the name of the file if successful, or NA if not.
#' @import RCurl
#' @examples
#' \dontrun{
#' key <- get_key("https://bd-api-dev.ncsa.illinois.edu", "your email", "password")
#' token <- get_token("https://bd-api-dev.ncsa.illinois.edu", key)
#' download("https://bd-api-dev.ncsa.illinois.edu", "vdc.csv", token)
#' }
#' @export
download = function(url, file, token, timeout = 60) {
count <- 0
httpheader <- c(Authorization = token)
.opts <- list(httpheader = httpheader, httpauth = 1L, followlocation = TRUE)
while (!RCurl::url.exists(url,.opts = .opts) && count < timeout) {
count <- count + 1
Sys.sleep(1)
}
if (count >= timeout) {
return(NA)
}
f = RCurl::CFILE(file, mode = "wb")
RCurl::curlPerform(url = url, writedata = f@ref, .opts = .opts)
RCurl::close(f)
return(file)
}
|
/scratch/gouwar.j/cran-all/cranData/BrownDog/R/download.R
|
#' Extract file
#'
#' Extract content-based metadata from the given input file using the Brown Dog extraction service
#' @param url The URL to the Brown Dog server to use.
#' @param file The input file: either a URL or a local file path
#' @param token Brown Dog access token
#' @param wait The amount of time to wait for the DTS to respond. Default is 60 seconds
#' @return The extracted metadata in JSON format
#' @import RCurl
#' @import jsonlite
#' @examples
#' \dontrun{
#' key <- get_key("https://bd-api-dev.ncsa.illinois.edu", "your email", "password")
#' token <- get_token("https://bd-api-dev.ncsa.illinois.edu", key)
#' extract_file("https://bd-api-dev.ncsa.illinois.edu",
#' "http://browndog.ncsa.illinois.edu/examples/gi/Dongying_sample.csv", token)
#' }
#' @export
#'
extract_file = function (url, file, token, wait = 60){
if(startsWith(file,'http://') || startsWith(file,'https://') || startsWith(file,'ftp://')){
postbody <- jsonlite::toJSON(list(fileurl = unbox(file)))
httpheader <- c("Content-Type" = "application/json", "Accept" = "application/json", "Authorization" = token)
uploadurl <- paste0(url,"/v1/extractions/url")
res_upload <- RCurl::httpPOST(url = uploadurl, postfields = postbody, httpheader = httpheader)
} else{
httpheader <- c("Accept" = "application/json", "Authorization" = token)
curloptions <-list(httpheader=httpheader)
res_upload <- RCurl::postForm(paste0(url,"/v1/extractions/file"),
"File" = fileUpload(file),
.opts = curloptions)
}
r <- jsonlite::fromJSON(res_upload)
file_id <- r$id
print(file_id)
httpheader <- c("Accept" = "application/json", "Authorization" = token )
if (file_id != ""){
while (wait > 0){
res_status <- httpGET(url = paste0(url, "/v1/extractions/",file_id,"/status"), httpheader = httpheader)
status <- jsonlite::fromJSON(res_status)
if (status$Status == "Done"){
#print(status)
break
}
Sys.sleep(2)
wait <- wait -1
}
res_tags <- RCurl::httpGET(url = paste0(url, "/v1/extractions/files/", file_id,"/tags"), httpheader = httpheader)
tags <- jsonlite::fromJSON(res_tags)
res_techmd <- RCurl::httpGET(url = paste0(url,"/v1/extractions/files/",file_id,"/metadata.jsonld"), httpheader = httpheader)
techmd <- jsonlite::fromJSON(res_techmd, simplifyDataFrame = FALSE)
res_vmd <- RCurl::httpGET(url = paste0(url, "/v1/extractions/files/",file_id,"/versus_metadata"), httpheader = httpheader)
versusmd <- jsonlite::fromJSON(res_vmd)
metadatalist <- list(id = jsonlite::unbox(tags$id), filename = jsonlite::unbox(tags$filename), tags = tags$tags, technicalmetadata = techmd, versusmetadata = versusmd)
#metadatalist <- list(id = unbox(tags$id), filename = unbox(tags$filename), tags = tags$tags, technicalmetadata = techmd)
metadata <- jsonlite::toJSON(metadatalist)
return(metadata)
}
}
|
/scratch/gouwar.j/cran-all/cranData/BrownDog/R/extract_file.R
|
#' Get Key
#'
#' Get a key from the BD API gateway to access BD services
#' @param url URL of the BD API gateway
#' @param username user name for BrownDog
#' @param password password for BrownDog
#' @return BD API key
#' @import RCurl
#' @import jsonlite
#' @import utils
#' @examples
#' \dontrun{
#' get_key("https://bd-api-dev.ncsa.illinois.edu", "your email", "password")
#' }
#' @export
get_key = function(url, username, password){
if(grepl("@", url)){
auth_host <- strsplit(url,'@')
url <- auth_host[[1]][2]
auth <- strsplit(auth_host[[1]][1],'//')
userpass <- utils::URLdecode(auth[[1]][2])
bdsURL <- paste0(auth[[1]][1],"//", url, "/v1/keys")
}else{
userpass <- paste0(username,":", password)
bdsURL <- paste0(url,"/v1/keys")
}
curloptions <- list(userpwd = userpass, httpauth = 1L)
httpheader <- c("Accept" = "application/json")
responseKey <- RCurl::httpPOST(url = bdsURL, httpheader = httpheader,curl = RCurl::curlSetOpt(.opts = curloptions))
key <- jsonlite::fromJSON(responseKey)[[1]]
return(key)
}
|
/scratch/gouwar.j/cran-all/cranData/BrownDog/R/get_key.R
|
#' Get input format.
#'
#' Check Brown Dog Service for available output formats for the given input format.
#' @param url The URL to the Brown Dog server to use.
#' @param inputformat The format of the input file.
#' @param token Brown Dog access token
#' @return A string array of reachable output format extensions.
#' @import RCurl
#' @examples
#' \dontrun{
#' key <- get_key("https://bd-api-dev.ncsa.illinois.edu", "your email", "password")
#' token <- get_token("https://bd-api-dev.ncsa.illinois.edu", key)
#' get_output_formats("https://bd-api-dev.ncsa.illinois.edu", "csv",
#' token)
#' }
#' @export
get_output_formats = function(url, inputformat, token){
api_call <- paste0(url, "/v1/conversions/inputs/", inputformat)
httpheader <- c("Accept" = "text/plain", "Authorization" = token)
r <- RCurl::httpGET(url = api_call, httpheader = httpheader)
arr <- strsplit(r,"\n")
if(length(arr[[1]]) == 0){
return(list())
} else{
return(arr)
}
}
|
/scratch/gouwar.j/cran-all/cranData/BrownDog/R/get_output_formats.R
|
#' Get Token
#'
#' Get a Token from the BD API gateway to access BD services
#' @param url URL of the BD API gateway
#' @param key permanent key for BD API
#' @return BD API Token
#' @import RCurl
#' @import jsonlite
#' @examples
#' \dontrun{
#' key <- get_key("https://bd-api-dev.ncsa.illinois.edu", "your email", "password")
#' get_token("https://bd-api-dev.ncsa.illinois.edu", key)
#' }
#' @export
get_token = function(url, key){
httpheader <- c("Accept" = "application/json")
bdsURL <- paste0(url,"/v1/keys/",key,"/tokens")
responseToken <- RCurl::httpPOST(url = bdsURL, httpheader = httpheader)
token <- jsonlite::fromJSON(responseToken)[[1]]
return(token)
}
|
/scratch/gouwar.j/cran-all/cranData/BrownDog/R/get_token.R
|
BBqr <-
function(x,y,tau=0.5, runs=11000, burn=1000, thin=1) {
#x: matrix of predictors.
#y: vector of dependent variable.
#tau: quantile level.
#runs: the length of the Markov chain.
#burn: the length of burn-in.
#thin: thinning parameter of MCMC draws
x <- as.matrix(x)
if(ncol(x)==1) {x=x} else {
x=x
if (all(x[,2]==1)) x=x[,-2] }
# Calculate some useful quantities
n <- nrow(x)
p <- ncol(x)
# check input
if (tau<=0 || tau>=1) stop ("invalid tau: tau should be > 0 and < 1.
\nPlease respecify tau and call again.\n")
if(n != length(y)) stop("length(y) not equal to nrow(x)")
if(n == 0) return(list(coefficients=numeric(0),fitted.values=numeric(0),
deviance=numeric(0)))
if(!(all(is.finite(y)) || all(is.finite(x)))) stop(" All values must be
finite and non-missing")
# Saving output matrices
betadraw = matrix(nrow=runs, ncol=p)
sigmadraw = matrix(nrow=runs, ncol=1)
# Calculate some useful quantities
xi = (1 - 2*tau)
zeta = tau*(1-tau)
# Initial values
beta = rep(0.99, p)
v = rep(1, n)
sigma = 1
lambda= 0.05
Lambda=diag(lambda, p)
ystar<-y-0.5
# low and upp
low<-ifelse(y==1,0,-Inf)
upp<-ifelse(y==1,Inf,0)
# Draw from inverse Gaussian distribution
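# (A transformation-method sampler in the style of Michael, Schucany and
#  Haas (1976); kept inline here, as in the original source.)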
rInvgauss <- function(n, mu, lambda = 1){
un <- runif(n)
Xi <- rchisq(n,1)
f <- mu/(2*lambda)*(2*lambda+mu*Xi+sqrt(4*lambda*mu*Xi+mu^2*Xi^2))
s <- mu^2/f
ifelse(un < mu/(mu+s), s, f)}
# Draw from a truncated normal distribution
rtnorm<-function(n,mean=0,sd=1,lower.bound=-Inf,upper.bound=Inf){
lower<-pnorm(lower.bound,mean,sd)
upper<-pnorm(upper.bound,mean,sd)
qnorm(runif(n,lower,upper),mean,sd)}
# Start the algorithm
for (iter in 1: runs) {
# Draw the latent variable v from inverse Gaussian distribution.
lambda = 1/(2*sigma)
mu = 1/(abs(ystar - x%*%beta))
v = c(1/rInvgauss(n, mu = mu, lambda = lambda))
# Draw sigma
shape = 3/2*n
rate = sum((ystar - x%*%beta - xi*v)^2/(4*v))+zeta*sum(v)
sigma = 1/rgamma(1, shape= shape, rate= rate)
# Draw beta
V=diag(1/(2*sigma*v))
varcov <- chol2inv(chol(t(x)%*%V%*%x + Lambda))
betam <- varcov %*% (t(x)%*%(V %*% (ystar-xi*v)))
beta <-betam+t(chol(varcov))%*%rnorm(p)
# Draw ystar
mu<-x%*%beta+xi*v
sd<- sqrt(2*sigma*v)
mu[which(y==1 & mu<0)]=0
mu[which(y==0 & mu>0)]=0
ystar<-rtnorm(n,mean=mu,sd=sd,lower.bound=low,upper.bound=upp)
ystar= ystar/sd
# Sort beta and sigma
betadraw[iter,] = beta
sigmadraw[iter,] = sigma
}
coefficients =apply(as.matrix(betadraw[-(1:burn), ]),2,mean)
names(coefficients)=colnames(x)
if (all(x[,1]==1)) names(coefficients)[1]= "Intercept"
result <- list(beta = betadraw[seq(burn, runs, thin),],
sigma = sigmadraw[seq(burn, runs, thin),],
coefficients=coefficients)
class(result) <- "BBqr"
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/BBqr.R
|
BLBqr <-
function(x, y, tau=0.5, runs=11000, burn=1000, thin=1) {
#x: matrix of predictors.
#y: vector of dependent variable.
#tau: quantile level.
#runs: the length of the Markov chain.
#burn: the length of burn-in.
#thin: thinning parameter of MCMC draws
x <- as.matrix(x)
if(ncol(x)==1) {x=x} else {
x=x
if (all(x[,2]==1)) x=x[,-2] }
# Calculate some useful quantities
n <- nrow(x)
p <- ncol(x)
# check input
if (tau<=0 || tau>=1) stop ("invalid tau: tau should be > 0 and < 1.
\nPlease respecify tau and call again.\n")
if(n != length(y)) stop("length(y) not equal to nrow(x)")
if(n == 0) return(list(coefficients=numeric(0),fitted.values=numeric(0),
deviance=numeric(0)))
if(!(all(is.finite(y)) || all(is.finite(x)))) stop(" All values must be
finite and non-missing")
# Saving output matrices
betadraw = matrix(nrow=runs, ncol=p)
Lambdadraw= matrix(nrow=runs, ncol=1)
sigmadraw = matrix(nrow=runs, ncol=1)
# Calculate some useful quantities
xi = (1 - 2*tau)
zeta = tau*(1-tau)
# Initial values
beta = rep(1, p)
s = rep(1, p)
v = rep(1, n)
Lambda2 = 1
sigma = 1
ystar<-y-0.5
# low and upp
low<-ifelse(y==1,0,-Inf)
upp<-ifelse(y==1,Inf,0)
# Hyperparameters
a = 0.1
b = 0.1
# Draw from inverse Gaussian distribution
rInvgauss <- function(n, mu, lambda = 1){
un <- runif(n)
Xi <- rchisq(n,1)
f <- mu/(2*lambda)*(2*lambda+mu*Xi+sqrt(4*lambda*mu*Xi+mu^2*Xi^2))
s <- mu^2/f
ifelse(un < mu/(mu+s), s, f)}
# Draw from a truncated normal distribution
rtnorm<-function(n,mean=0,sd=1,lower.bound=-Inf,upper.bound=Inf){
lower<-pnorm(lower.bound,mean,sd)
upper<-pnorm(upper.bound,mean,sd)
qnorm(runif(n,lower,upper),mean,sd)}
# Start the algorithm
for (iter in 1: runs) {
# Draw the latent variable v from inverse Gaussian distribution.
lambda = 1/(2*sigma)
mu = 1/(abs(ystar - x%*%beta))
v = c(1/rInvgauss(n, mu = mu, lambda = lambda))
# Draw the latent variable s from inverse Gaussian distribution.
lambda= Lambda2
mu = sqrt(lambda/(beta^2/sigma) )
s =c(1/rInvgauss(p, mu = mu, lambda = lambda))
# Draw sigma
shape = p/2 + 3/2*n
rate = sum((ystar - x%*%beta - xi*v)^2 / (4*v) )+zeta*sum(v) + sum(beta^2/(2*s))
sigma = 1/rgamma(1, shape= shape, rate= rate)
# Draw beta
V=diag(1/(2*v))
invA <- chol2inv(chol(t(x)%*%V%*%x + diag(1/s)) )
betam <- invA%*%(t(x)%*%(V %*% (ystar-xi*v)))
varcov=sigma*invA
beta <-betam+t(chol(varcov))%*%rnorm(p)
# Draw Lambda2
tshape = p + a
trate = sum(s)/2 + b
Lambda2 = rgamma(1, shape=tshape, rate=trate)
# Draw ystar
mu<-x%*%beta+xi*v
sd<- sqrt(2*sigma*v)
mu[which(y==1 & mu<0)]=0
mu[which(y==0 & mu>0)]=0
ystar<-rtnorm(n,mean=mu,sd=sd,lower.bound=low,upper.bound=upp)
ystar= ystar/sd
# Sort beta and sigma
betadraw[iter,] = beta
Lambdadraw[iter,]= Lambda2
sigmadraw[iter,] = sigma
}
coefficients =apply(as.matrix(betadraw[-(1:burn), ]),2,mean)
names(coefficients)=colnames(x)
if (all(x[,1]==1)) names(coefficients)[1]= "Intercept"
result <- list(beta = betadraw[seq(burn, runs, thin),],
lambda = Lambdadraw[seq(burn, runs, thin),],
sigma = sigmadraw[seq(burn, runs, thin),],
coefficients=coefficients)
class(result) <- "BLBqr"
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/BLBqr.R
|
Brq <-
function(x, ...) UseMethod("Brq")
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Brq.R
|
Brq.default <-
function(x, y, tau=0.5, method=c("Bqr","BBqr","BLqr","BLBqr","BALqr","Btqr","BLtqr","BALtqr"), left=0, runs= 5000, burn= 1000, thin=1, ...)
{
set.seed(123456)
x <- as.matrix(x)
y <- as.numeric(y)
n=dim(x)[1]
p=dim(x)[2]
Coeff=NULL
esti=NULL
method <- match.arg(method,c("Bqr","BBqr","BLqr","BLBqr","BALqr","Btqr","BLtqr","BALtqr"))
Betas <- array(,dim = c(length(seq(burn, runs, thin)), p, length(tau)))
if(length(tau)>1){
for (i in 1:length(tau)){
est= switch(method,
Bqr = Bqr(x,y,tau=tau[i], runs=runs, burn=burn, thin=thin),
BBqr = BBqr(x,y,tau=tau[i], runs=runs, burn=burn, thin=thin),
BLqr = BLqr(x,y,tau=tau[i], runs=runs, burn=burn, thin=thin),
BLBqr = BLBqr(x,y,tau=tau[i], runs=runs, burn=burn, thin=thin),
BALqr = BALqr(x,y,tau=tau[i], runs=runs, burn=burn, thin=thin),
Btqr = Btqr(x,y,tau=tau[i],left = left,runs=runs, burn=burn, thin=thin),
BLtqr = BLtqr(x,y,tau=tau[i], left = left, runs=runs, burn=burn, thin=thin),
BALtqr= BALtqr(x,y,tau=tau[i], left = left, runs=runs, burn=burn, thin=thin))
Betas[,,i]=est$beta
result=est$coefficients
Coeff= cbind(Coeff,result)
}
paste("tau=", format(round(tau, 3)))
taulabs <- paste("tau=", format(round(tau, 3)))
dimnames(Coeff) <- list(dimnames(x)[[2]], taulabs)
esti$beta=Betas
esti$tau <- tau
esti$coefficients=Coeff
esti$call <- match.call()
class(esti) <- "Brq"
esti
} else {
est= switch(method,
Bqr = Bqr(x,y,tau=tau, runs=runs, burn=burn, thin=thin),
BBqr = BBqr(x,y,tau=tau, runs=runs, burn=burn, thin=thin),
BLqr = BLqr(x,y,tau=tau, runs=runs, burn=burn, thin=thin),
BLBqr = BLBqr(x,y,tau=tau, runs=runs, burn=burn, thin=thin),
BALqr = BALqr(x,y,tau=tau, runs=runs, burn=burn, thin=thin),
Btqr = Btqr(x,y,tau=tau,left = left,runs=runs, burn=burn, thin=thin),
BLtqr = BLtqr(x,y,tau=tau, left = left, runs=runs, burn=burn, thin=thin),
BALtqr= BALtqr(x,y,tau=tau, left = left, runs=runs, burn=burn, thin=thin))
est$tau <- tau
est$fitted.values <- as.vector(x %*% est$coefficients)
est$residuals <- y - est$fitted.values
est$call <- match.call()
class(est) <- "Brq"
est
}
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Brq.default.R
|
Brq.formula <-
function(formula, data=list(), ...)
{
mf <- model.frame(formula=formula, data=data)
x <- model.matrix(attr(mf, "terms"), data=mf)
y <- model.response(mf)
est <- Brq.default(x, y, ...)
est$call <- match.call()
est$formula <- formula
est
}
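# Example usage (a sketch with simulated data; names are illustrative,
# not run at package load):
#   x <- matrix(rnorm(200), 100, 2)
#   y <- 1 + x %*% c(2, -1) + rnorm(100)
#   fit <- Brq(y ~ x, tau = c(0.25, 0.5, 0.75), method = "Bqr")
#   fit$coefficients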
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Brq.formula.R
|
Bqr <-
function(x,y,tau=0.5, runs=11000, burn=1000, thin=1) {
#x: matrix of predictors.
#y: vector of dependent variable.
#tau: quantile level.
#runs: the length of the Markov chain.
#burn: the length of burn-in.
#thin: thinning parameter of MCMC draws
x <- as.matrix(x)
if(ncol(x)==1) {x=x} else {
x=x
if (all(x[,2]==1)) x=x[,-2] }
# Calculate some useful quantities
n <- nrow(x)
p <- ncol(x)
# check input
if (tau<=0 || tau>=1) stop ("invalid tau: tau should be > 0 and < 1.
\nPlease respecify tau and call again.\n")
if(n != length(y)) stop("length(y) not equal to nrow(x)")
if(n == 0) return(list(coefficients=numeric(0),fitted.values=numeric(0),
deviance=numeric(0)))
if(!(all(is.finite(y)) || all(is.finite(x)))) stop(" All values must be
finite and non-missing")
# Saving output matrices
betadraw = matrix(nrow=runs, ncol=p)
MuY = matrix(nrow=runs, ncol=n)
VarY = matrix(nrow=runs, ncol=n)
sigmadraw = matrix(nrow=runs, ncol=1)
# Calculate some useful quantities
xi = (1 - 2*tau)
zeta = tau*(1-tau)
# Initial values
beta = rep(0.99, p)
v = rep(1, n)
sigma = 1
# Draw from inverse Gaussian distribution
rInvgauss <- function(n, mu, lambda = 1){
un <- runif(n)
Xi <- rchisq(n,1)
f <- mu/(2*lambda)*(2*lambda+mu*Xi+sqrt(4*lambda*mu*Xi+mu^2*Xi^2))
s <- mu^2/f
ifelse(un < mu/(mu+s), s, f)}
# Start the algorithm
for (iter in 1: runs) {
# Draw the latent variable v from inverse Gaussian distribution.
lambda = 1/(2*sigma)
mu = 1/(abs(y - x%*%beta))
v = c(1/rInvgauss(n, mu = mu, lambda = lambda))
# Draw sigma
Mu = x%*%beta + xi*v
shape = 3/2*n
rate = sum((y - Mu)^2/(4*v))+zeta*sum(v)
sigma = 1/rgamma(1, shape= shape, rate= rate)
# Draw beta
vsigma=2*sigma*v
V=diag(1/vsigma)
varcov <- chol2inv(chol(t(x)%*%V%*%x))
betam <- varcov %*% (t(x)%*%(V %*% (y-xi*v)))
beta <-betam+t(chol(varcov))%*%rnorm(p)
# Sort beta and sigma
betadraw[iter,] = beta
MuY[iter, ] = Mu
VarY[iter, ] = vsigma
sigmadraw[iter,] = sigma
}
coefficients =apply(as.matrix(betadraw[-(1:burn), ]),2,mean)
names(coefficients)=colnames(x)
if (all(x[,1]==1)) names(coefficients)[1]= "Intercept"
result <- list(beta = betadraw[seq(burn, runs, thin),],
MuY = MuY[seq(burn, runs, thin),],
VarY = VarY[seq(burn, runs, thin),],
sigma = sigmadraw[seq(burn, runs, thin),],
y=y,
coefficients=coefficients)
class(result) <- "Bqr"
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Brq1.R
|
BLqr <-
function(x, y, tau=0.5, runs=11000, burn=1000, thin=1) {
#x: matrix of predictors.
#y: vector of dependent variable.
#tau: quantile level.
#runs: the length of the Markov chain.
#burn: the length of burn-in.
#thin: thinning parameter of MCMC draws
x <- as.matrix(x)
if(ncol(x)==1) {x=x} else {
x=x
if (all(x[,2]==1)) x=x[,-2] }
# Calculate some useful quantities
n <- nrow(x)
p <- ncol(x)
# check input
if (tau<=0 || tau>=1) stop ("invalid tau: tau should be > 0 and < 1.
\nPlease respecify tau and call again.\n")
if(n != length(y)) stop("length(y) not equal to nrow(x)")
if(n == 0) return(list(coefficients=numeric(0),fitted.values=numeric(0),
deviance=numeric(0)))
if(!(all(is.finite(y)) || all(is.finite(x)))) stop(" All values must be
finite and non-missing")
# Saving output matrices
betadraw = matrix(nrow=runs, ncol=p)
Lambdadraw= matrix(nrow=runs, ncol=1)
sigmadraw = matrix(nrow=runs, ncol=1)
# Calculate some useful quantities
xi = (1 - 2*tau)
zeta = tau*(1-tau)
# Initial values
beta = rep(1, p)
s = rep(1, p)
v = rep(1, n)
Lambda2 = 1
sigma = 1
# Hyperparameters
a = 0.1
b = 0.1
# Draw from inverse Gaussian distribution
rInvgauss <- function(n, mu, lambda = 1){
un <- runif(n)
Xi <- rchisq(n,1)
f <- mu/(2*lambda)*(2*lambda+mu*Xi+sqrt(4*lambda*mu*Xi+mu^2*Xi^2))
s <- mu^2/f
ifelse(un < mu/(mu+s), s, f)}
# Start the algorithm
for (iter in 1: runs) {
# Draw the latent variable v from inverse Gaussian distribution.
lambda = 1/(2*sigma)
mu = 1/(abs(y - x%*%beta))
v = c(1/rInvgauss(n, mu = mu, lambda = lambda))
# Draw the latent variable s from inverse Gaussian distribution.
lambda= Lambda2
mu = sqrt(lambda/(beta^2/sigma) )
s =c(1/rInvgauss(p, mu = mu, lambda = lambda))
# Draw sigma
shape = p/2 + 3/2*n
rate = sum((y - x%*%beta - xi*v)^2 / (4*v) )+zeta*sum(v) + sum(beta^2/(2*s))
sigma = 1/rgamma(1, shape= shape, rate= rate)
# Draw beta
V=diag(1/(2*v))
invA <- chol2inv(chol(t(x)%*%V%*%x + diag(1/s)) )
betam <- invA%*%(t(x)%*%(V %*% (y-xi*v)))
varcov=sigma*invA
beta <-betam+t(chol(varcov))%*%rnorm(p)
# Draw Lambda2
tshape = p + a
trate = sum(s)/2 + b
Lambda2 = rgamma(1, shape=tshape, rate=trate)
# Sort beta and sigma
betadraw[iter,] = beta
Lambdadraw[iter,]= Lambda2
sigmadraw[iter,] = sigma
}
coefficients =apply(as.matrix(betadraw[-(1:burn), ]),2,mean)
names(coefficients)=colnames(x)
if (all(x[,1]==1)) names(coefficients)[1]= "Intercept"
result <- list(beta = betadraw[seq(burn, runs, thin),],
lambda = Lambdadraw[seq(burn, runs, thin),],
sigma = sigmadraw[seq(burn, runs, thin),],
coefficients=coefficients)
class(result) <- "BLqr"
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Brq2.R
|
BALqr <-
function(x, y, tau=0.5, runs=11000, burn=1000, thin=1) {
#x: matrix of predictors.
#y: vector of dependent variable.
#tau: quantile level.
#runs: the length of the Markov chain.
#burn: the length of burn-in.
#thin: thinning parameter of MCMC draws
x <- as.matrix(x)
if(ncol(x)==1) {x=x} else {
x=x
if (all(x[,2]==1)) x=x[,-2] }
# Calculate some useful quantities
n <- nrow(x)
p <- ncol(x)
# check input
if (tau<=0 || tau>=1) stop ("invalid tau: tau should be > 0 and < 1.
\nPlease respecify tau and call again.\n")
if(n != length(y)) stop("length(y) not equal to nrow(x)")
if(n == 0) return(list(coefficients=numeric(0),fitted.values=numeric(0),
deviance=numeric(0)))
if(!(all(is.finite(y)) || all(is.finite(x)))) stop(" All values must be
finite and non-missing")
# Saving output matrices
betadraw = matrix(nrow=runs, ncol=p)
Lambdadraw= matrix(nrow=runs, ncol=p)
sigmadraw = matrix(nrow=runs, ncol=1)
# Calculate some useful quantities
xi = (1 - 2*tau)
zeta = tau*(1-tau)
# Initial values
beta = rep(1, p)
s = rep(1, p)
v = rep(1, n)
Lambda2 = rep(1, p)
sigma = 1
# Hyperparameters
a = 0.1
b = 0.1
# Draw from inverse Gaussian distribution
rInvgauss <- function(n, mu, lambda = 1){
un <- runif(n)
Xi <- rchisq(n,1)
f <- mu/(2*lambda)*(2*lambda+mu*Xi+sqrt(4*lambda*mu*Xi+mu^2*Xi^2))
s <- mu^2/f
ifelse(un < mu/(mu+s), s, f)}
# Start the algorithm
for (iter in 1: runs) {
# Draw the latent variable v from inverse Gaussian distribution.
lambda = 1/(2*sigma)
mu = 1/(abs(y - x%*%beta))
v = c(1/rInvgauss(n, mu = mu, lambda = lambda))
# Draw the latent variable s from inverse Gaussian distribution.
lambda= Lambda2
mu = sqrt(lambda/(beta^2/sigma) )
s =c(1/rInvgauss(p, mu = mu, lambda = lambda))
# Draw sigma
shape = p/2 + 3/2*n
rate = sum((y - x%*%beta - xi*v)^2 / (4*v) )+zeta*sum(v) + sum(beta^2/(2*s))
sigma = 1/rgamma(1, shape= shape, rate= rate)
# Draw beta
V=diag(1/(2*v))
invA <- chol2inv(chol(t(x)%*%V%*%x + diag(1/s)) )
betam <- invA%*%(t(x)%*%(V %*% (y-xi*v)))
varcov=sigma*invA
beta <-betam+t(chol(varcov))%*%rnorm(p)
# Draw Lambda2
tshape = 1 + a
trate = s/2 + b
Lambda2 = rgamma(p, shape=tshape, rate=trate)
# Sort beta and sigma
betadraw[iter,] = beta
Lambdadraw[iter,]= Lambda2
sigmadraw[iter,] = sigma
}
coefficients =apply(as.matrix(betadraw[-(1:burn), ]),2,mean)
names(coefficients)=colnames(x)
if (all(x[,1]==1)) names(coefficients)[1]= "Intercept"
result <- list(beta = betadraw[seq(burn, runs, thin),],
lambda = Lambdadraw[seq(burn, runs, thin),],
sigma = sigmadraw[seq(burn, runs, thin),],
coefficients=coefficients)
class(result) <- "BALqr"
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Brq3.R
|
Btqr <-
function(x,y,tau=0.5, left = 0, runs=11000, burn=1000, thin=1) {
#x: matrix of predictors.
#y: vector of dependent variable.
#tau: quantile level.
#runs: the length of the Markov chain.
#burn: the length of burn-in.
#thin: thinning parameter of MCMC draws
x <- as.matrix(x)
if(ncol(x)==1) {x=x} else {
x=x
if (all(x[,2]==1)) x=x[,-2] }
# Calculate some useful quantities
n <- nrow(x)
p <- ncol(x)
n0 <-sum(y<=left)
id0<-which(y<=left)
x0=x[y<=left,]
y[y<=left]=left
yt <-y
# check input
if (tau<=0 || tau>=1) stop ("invalid tau: tau should be > 0 and < 1.
\nPlease respecify tau and call again.\n")
if(n != length(y)) stop("length(y) not equal to nrow(x)")
if(n == 0) return(list(coefficients=numeric(0),fitted.values=numeric(0),
deviance=numeric(0)))
if(!(all(is.finite(y)) || all(is.finite(x)))) stop(" All values must be
finite and non-missing")
# Saving output matrices
betadraw = matrix(nrow=runs, ncol=p)
sigmadraw = matrix(nrow=runs, ncol=1)
# Calculate some useful quantities
xi = (1 - 2*tau)
zeta = tau*(1-tau)
# Initial values
beta = rep(0.99, p)
v = rep(1, n)
sigma = 1
# Draw from inverse Gaussian distribution
rInvgauss <- function(n, mu, lambda = 1){
un <- runif(n)
Xi <- rchisq(n,1)
f <- mu/(2*lambda)*(2*lambda+mu*Xi+sqrt(4*lambda*mu*Xi+mu^2*Xi^2))
s <- mu^2/f
ifelse(un < mu/(mu+s), s, f)}
# Start the algorithm
for (iter in 1: runs) {
# Draw the latent variable v from inverse Gaussian distribution.
lambda = 1/(2*sigma)
mu = 1/(abs(yt - x%*%beta))
v = c(1/rInvgauss(n, mu = mu, lambda = lambda))
# Draw sigma
shape = 3/2*n
rate = sum((yt - x%*%beta - xi*v)^2/(4*v))+zeta*sum(v)
sigma = 1/rgamma(1, shape= shape, rate= rate)
# Draw beta
V=diag(1/(2*sigma*v))
varcov <- chol2inv(chol(t(x)%*%V%*%x) )
betam <- varcov %*% t(x)%*%V %*% (yt-xi*v)
beta <-betam+t(chol(varcov))%*%rnorm(p)
# Draw yt
v0=v[id0]
Mu0=x0%*%beta + xi*v0
Sig0=sqrt(2*sigma*v0)
u0 = runif(n0)
xu0= u0*pnorm(left,Mu0,Sig0)
yt[id0]= qnorm(xu0,Mu0,Sig0)
# Sort beta and sigma
betadraw[iter,] = beta
sigmadraw[iter,] = sigma
}
coefficients =apply(as.matrix(betadraw[-(1:burn), ]),2,mean)
names(coefficients)=colnames(x)
if (all(x[,1]==1)) names(coefficients)[1]= "Intercept"
result <- list(beta = betadraw[seq(burn, runs, thin),],
sigma = sigmadraw[seq(burn, runs, thin),],
coefficients=coefficients)
class(result) <- "Btqr"
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Btqr1.R
|
BLtqr <-
function(x, y, tau=0.5, left = 0, runs=11000, burn=1000, thin=1) {
#x: matrix of predictors.
#y: vector of dependent variable.
#tau: quantile level.
#runs: the length of the Markov chain.
#burn: the length of burn-in.
#thin: thinning parameter of MCMC draws
x <- as.matrix(x)
if(ncol(x)==1) {x=x} else {
x=x
if (all(x[,2]==1)) x=x[,-2] }
# Calculate some useful quantities
n <- nrow(x)
p <- ncol(x)
n0 <-sum(y<=left)
id0<-which(y<=left)
x0=x[y<=left,]
y[y<=left]=left
yt <-y
# check input
if (tau<=0 || tau>=1) stop ("invalid tau: tau should be > 0 and < 1.
\nPlease respecify tau and call again.\n")
if(n != length(y)) stop("length(y) not equal to nrow(x)")
if(n == 0) return(list(coefficients=numeric(0),fitted.values=numeric(0),
deviance=numeric(0)))
if(!(all(is.finite(y)) || all(is.finite(x)))) stop(" All values must be
finite and non-missing")
# Saving output matrices
betadraw = matrix(nrow=runs, ncol=p)
Lambdadraw= matrix(nrow=runs, ncol=1)
sigmadraw = matrix(nrow=runs, ncol=1)
# Calculate some useful quantities
xi = (1 - 2*tau)
zeta = tau*(1-tau)
# Initial values
beta = rep(1, p)
s = rep(1, p)
v = rep(1, n)
Lambda2 = 1
sigma = 1
# Hyperparameters
a = 0.1
b = 0.1
# Draw from inverse Gaussian distribution
rInvgauss <- function(n, mu, lambda = 1){
un <- runif(n)
Xi <- rchisq(n,1)
f <- mu/(2*lambda)*(2*lambda+mu*Xi+sqrt(4*lambda*mu*Xi+mu^2*Xi^2))
s <- mu^2/f
ifelse(un < mu/(mu+s), s, f)}
# Start the algorithm
for (iter in 1: runs) {
# Draw the latent variable v from inverse Gaussian distribution.
lambda = 1/(2*sigma)
mu = 1/(abs(yt - x%*%beta))
v = c(1/rInvgauss(n, mu = mu, lambda = lambda))
# Draw the latent variable s from inverse Gaussian distribution.
lambda= Lambda2
mu = sqrt(lambda/(beta^2/sigma) )
s =c(1/rInvgauss(p, mu = mu, lambda = lambda))
# Draw sigma
shape = p/2 + 3/2*n
rate = sum((yt - x%*%beta - xi*v)^2 / (4*v) )+zeta*sum(v) + sum(beta^2/(2*s))
sigma = 1/rgamma(1, shape= shape, rate= rate)
# Draw beta
V=diag(1/(2*v))
invA <- chol2inv(chol(t(x)%*%V%*%x + diag(1/s)) )
betam <- invA%*%(t(x)%*%(V %*% (yt-xi*v)))
varcov=sigma*invA
beta <-betam+t(chol(varcov))%*%rnorm(p)
# Draw Lambda2
tshape = p + a
trate = sum(s)/2 + b
Lambda2 = rgamma(1, shape=tshape, rate=trate)
# Draw yt
v0=v[id0]
Mu0=x0%*%beta + xi*v0
Sig0=sqrt(2*sigma*v0)
u0 = runif(n0)
xu0= u0*pnorm(left,Mu0,Sig0)
yt[id0]= qnorm(xu0,Mu0,Sig0)
# Sort beta and sigma
betadraw[iter,] = beta
Lambdadraw[iter,]= Lambda2
sigmadraw[iter,] = sigma
}
coefficients =apply(as.matrix(betadraw[-(1:burn), ]),2,mean)
names(coefficients)=colnames(x)
if (all(x[,1]==1)) names(coefficients)[1]= "Intercept"
result <- list(beta = betadraw[seq(burn, runs, thin),],
lambda = Lambdadraw[seq(burn, runs, thin),],
sigma = sigmadraw[seq(burn, runs, thin),],
coefficients=coefficients)
class(result) <- "BLtqr"
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Btqr2.R
|
BALtqr <-
function(x, y,tau=0.5, left = 0, runs=11000, burn=1000, thin=1) {
#x: matrix of predictors.
#y: vector of dependent variable.
#tau: quantile level.
#runs: the length of the Markov chain.
#burn: the length of burn-in.
#thin: thinning parameter of MCMC draws
x <- as.matrix(x)
if(ncol(x)==1) {x=x} else {
x=x
if (all(x[,2]==1)) x=x[,-2] }
# Calculate some useful quantities
n <- nrow(x)
p <- ncol(x)
n0 <-sum(y<=left)
id0<-which(y<=left)
x0=x[y<=left,]
y[y<=left]=left
yt <-y
# check input
if (tau<=0 || tau>=1) stop ("invalid tau: tau should be > 0 and < 1.
\nPlease respecify tau and call again.\n")
if(n != length(y)) stop("length(y) not equal to nrow(x)")
if(n == 0) return(list(coefficients=numeric(0),fitted.values=numeric(0),
deviance=numeric(0)))
if(!(all(is.finite(y)) || all(is.finite(x)))) stop(" All values must be
finite and non-missing")
# Saving output matrices
betadraw = matrix(nrow=runs, ncol=p)
Lambdadraw= matrix(nrow=runs, ncol=p)
sigmadraw = matrix(nrow=runs, ncol=1)
# Calculate some useful quantities
xi = (1 - 2*tau)
zeta = tau*(1-tau)
# Initial values
beta = rep(1, p)
s = rep(1, p)
v = rep(1, n)
Lambda2 = rep(1, p)
sigma = 1
# Hyperparameters
a = 0.1
b = 0.1
# Draw from inverse Gaussian distribution
rInvgauss <- function(n, mu, lambda = 1){
un <- runif(n)
Xi <- rchisq(n,1)
f <- mu/(2*lambda)*(2*lambda+mu*Xi+sqrt(4*lambda*mu*Xi+mu^2*Xi^2))
s <- mu^2/f
ifelse(un < mu/(mu+s), s, f)}
# Start the algorithm
for (iter in 1: runs) {
# Draw the latent variable v from inverse Gaussian distribution.
lambda = 1/(2*sigma)
mu = 1/(abs(yt - x%*%beta))
v = c(1/rInvgauss(n, mu = mu, lambda = lambda))
# Draw the latent variable s from inverse Gaussian distribution.
lambda= Lambda2
mu = sqrt(lambda/(beta^2/sigma) )
s =c(1/rInvgauss(p, mu = mu, lambda = lambda))
# Draw sigma
shape = p/2 + 3/2*n
rate = sum((yt - x%*%beta - xi*v)^2 / (4*v) )+zeta*sum(v) + sum(beta^2/(2*s))
sigma = 1/rgamma(1, shape= shape, rate= rate)
# Draw beta
V=diag(1/(2*v))
invA <- chol2inv(chol(t(x)%*%V%*%x + diag(1/s)) )
betam <- invA%*%(t(x)%*%(V %*% (yt-xi*v)))
varcov=sigma*invA
beta <-betam+t(chol(varcov))%*%rnorm(p)
# Draw Lambda2
tshape = 1 + a
trate = s/2 + b
Lambda2 = rgamma(p, shape=tshape, rate=trate)
# Draw yt
v0=v[id0]
Mu0=x0%*%beta + xi*v0
Sig0=sqrt(2*sigma*v0)
u0 = runif(n0)
xu0= u0*pnorm(left,Mu0,Sig0)
yt[id0]= qnorm(xu0,Mu0,Sig0)
# Sort beta and sigma
betadraw[iter,] = beta
Lambdadraw[iter,]= Lambda2
sigmadraw[iter,] = sigma
}
coefficients =apply(as.matrix(betadraw[-(1:burn), ]),2,mean)
names(coefficients)=colnames(x)
if (all(x[,1]==1)) names(coefficients)[1]= "Intercept"
result <- list(beta = betadraw[seq(burn, runs, thin),],
lambda = Lambdadraw[seq(burn, runs, thin),],
sigma = sigmadraw[seq(burn, runs, thin),],
coefficients=coefficients)
class(result) <- "BALtqr"
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/Btqr3.R
|
DIC=function(object){
# Estimate Deviance Information Criterion (DIC)
#
# References:
# Bayesian Data Analysis.
# Gelman, A., Carlin, J., Stern, H., and Rubin D.
# Second Edition, 2003
llSum = 0
y=object$y
N=dim(object$MuY)[1]
PostMu=apply(object$MuY, 2, mean)
PostVar=apply(object$VarY, 2, mean)
L=sum( dnorm(y, PostMu, sqrt(PostVar), log=TRUE) )
for (i in 1:N) {
m=object$MuY[i, ]
s=sqrt(object$VarY[i, ])
llSum = llSum + sum( dnorm(y, m, s, log=TRUE) )
}
P = 2 * (L - (1 / N * llSum))
dic = -2 * (L - P)
return(dic)
}
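# Note: DIC() expects the fields MuY, VarY and y, which in this package
# are returned only by the "Bqr" sampler. A sketch of intended use
# (illustrative, not run):
#   fit <- Brq(y ~ x, tau = 0.5, method = "Bqr")
#   DIC(fit)
# Here P is the effective number of parameters, 2*(log-likelihood at the
# posterior means minus the mean per-draw log-likelihood), and
# dic = -2*L + 2*P.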
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/DIC.R
|
model <-
function(object){
welcome<-function(){
cat("====== Model selection based on credible intervals ======\n")
cat("#                                                        #\n")
cat("#   Author: Rahim Alhamzawi                              #\n")
cat("#   Contact: [email protected]                            #\n")
cat("#   July, 2018                                           #\n")
cat("#                                                        #\n")
cat("==========================================================\n")
}
##############################################################
result=NULL
if(length(object$tau)>1){
for(ii in 1:length(object$tau)){
CredInt = apply(object$beta[,,ii], 2, quantile, c(0.025, 0.975))
#Estimate= apply(object$beta[,,ii], 2, mean)
Estimate= object$coefficients[,ii]
for(i in 1:length(CredInt [1,])){
if (sign(CredInt[1,i])==-1 & sign(CredInt[2,i])==1) Estimate[i]=0
}
result= cbind(result,Estimate)}
}else{
CredInt = apply(object$beta, 2, quantile, c(0.025, 0.975))
Estimate= coef(object)
for(i in 1:length(CredInt [1,])){
if (sign(CredInt[1,i])==-1 & sign(CredInt[2,i])==1) Estimate[i]=0
}
result= cbind(Estimate)
}
welcome()
taulabs <- paste("tau=", format(round(object$tau, 3)))
dimnames(result) <- list(dimnames(object$beta)[[2]], taulabs)
rownames(result)=rownames(coef(object))
result
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/model.R
|
plot.Brq <-
function (x, plottype = c("hist", "trace", "ACF", "traceACF",
"histACF", "tracehist", "traceACFhist"), Coefficients = 1, breaks = 30,
lwd = 1, col1 = 0, col2 = 1, col3 = 1, col4 = 1, ...)
{
call <- match.call()
mf <- match.call(expand.dots = FALSE)
mf$drop.unused.levels <- FALSE
Betas=as.matrix(x$beta[, Coefficients])
k = ncol(as.matrix( Betas))
if (k == 2)
par(mfrow = c(1, 2))
if (k == 3)
par(mfrow = c(1, 3))
if (k == 4)
par(mfrow = c(2, 2))
if (k > 4)
par(mfrow = c(ceiling(k/3), 3))
plottype <- match.arg(plottype)
switch(plottype, trace = for (i in 1:k) {
ts.plot(Betas[, i], xlab = "iterations", ylab = "",
main = noquote(names(coef(x)))[i], col = col4)
}, ACF = for (i in 1:k) {
acf(Betas[, i], main = noquote(names(coef(x)))[i], col = col3)
}, traceACF = {
par(mfrow = c(k, 2))
for (i in 1:k) {
ts.plot(Betas[, i], xlab = "iterations", ylab = "",
main = noquote(names(coef(x)))[i], col = col4)
acf(Betas[, i], main = noquote(names(coef(x)))[i],
col = col3)
}
}, histACF = {
par(mfrow = c(k, 2))
for (i in 1:k) {
hist(Betas[, i], breaks = breaks, prob = TRUE, main = "",
xlab = noquote(names(coef(x)))[i], col = col1)
lines(density(Betas[, i], adjust = 2), lty = "dotted",
col = col2, lwd = lwd)
acf(Betas[, i], main = noquote(names(coef(x)))[i],
col = col3)
}
}, tracehist = {
par(mfrow = c(k, 2))
for (i in 1:k) {
ts.plot(Betas[, i], xlab = "iterations", ylab = "",
main = noquote(names(coef(x)))[i], col = col4)
hist(Betas[, i], breaks = breaks, prob = TRUE, main = "",
xlab = noquote(names(coef(x)))[i], col = col1)
lines(density(Betas[, i], adjust = 2), lty = "dotted",
col = col2, lwd = lwd)
}
}, traceACFhist = {
par(mfrow = c(k, 3))
for (i in 1:k) {
ts.plot(Betas[, i], xlab = "iterations", ylab = "",
main = noquote(names(coef(x)))[i], col = col4)
hist(Betas[, i], breaks = breaks, prob = TRUE, main = "",
xlab = noquote(names(coef(x)))[i], col = col1)
lines(density(Betas[, i], adjust = 2), lty = "dotted",
col = col2, lwd = lwd)
acf(Betas[, i], main = noquote(names(coef(x)))[i],
col = col3)
}
}, hist = for (i in 1:k) {
hist(Betas[, i], breaks = breaks, prob = TRUE, main = "",
xlab = noquote(names(coef(x)))[i], col = col1)
lines(density(Betas[, i], adjust = 2), lty = "dotted",
col = col2, lwd = lwd)
})
}
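# Usage sketch (assumes a fitted object `fit` of class Brq):
#   plot(fit, plottype = "trace")           # trace plots of the MCMC draws
#   plot(fit, plottype = "traceACFhist",    # trace, histogram, and ACF
#        Coefficients = 1:2)                # for the first two coefficients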
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/plot.Brq.R
|
print.Brq <-
function(x, ...)
{
cat("Call:\n")
print(x$call)
cat("\nCoefficients:\n")
print(x$coefficients)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/print.Brq.R
|
print.summary.Brq <-
function(x, ...)
{
cat("Call:\n")
print(x$call)
cat("\n")
cat("tau:")
print(x$tau)
cat("\n")
print(x$coefficients)
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/print.summary.Brq.R
|
summary.Brq <-
function(object, ...){
if(length(object$tau)>1){
p=dim(coef(object))[1]
result=array(,dim = c(p, 3, length(object$tau)))
estim=NULL
for (i in 1:length(object$tau)){
CredInt=apply(object$beta[,,i],2,quantile,c(0.025,0.975))
TAB <- cbind( Coefficient = coef(object)[,i],
L.CredIntv = CredInt[1,],
U.CredIntv = CredInt[2,])
colnames(TAB) <- c("Estimate", "L.CredIntv", "U.CredIntv")
colnames(result) <- c("Estimate", "L.CredIntv", "U.CredIntv")
result[,,i] <- TAB
}
for(i in 1:length(object$tau)){
estim$tau=object$tau[i]
estim$coefficients=result[,,i]
print(estim)
}
}else{
CredInt=apply(object$beta,2,quantile,c(0.025,0.975))
TAB <- cbind( Coefficient = coef(object),
L.CredIntv = CredInt[1,],
U.CredIntv = CredInt[2,])
colnames(TAB) <- c("Estimate", "L.CredIntv", "U.CredIntv")
result <- list(call=object$call,tau=object$tau,coefficients=TAB)
class(result) <- "summary.Brq"
result
}
}
|
/scratch/gouwar.j/cran-all/cranData/Brq/R/summary.Brq.R
|
BsProb <-
function (X, y, blk = 0, mFac = 3, mInt = 2, p = 0.25, g = 2,
ng = 1, nMod = 10)
{
X <- as.matrix(X)
y <- unlist(y)
if (length(y) != nrow(X))
stop("X and y should have same number of observations")
if (blk == 0) {
ifelse(is.null(colnames(X)), faclab <- paste("F", seq(ncol(X)),
sep = ""), faclab <- colnames(X))
colnames(X) <- faclab
}
else {
if (is.null(colnames(X))) {
faclab <- paste("F", seq(ncol(X) - blk), sep = "")
blklab <- paste("B", seq(blk), sep = "")
colnames(X) <- c(blklab, faclab)
}
else {
faclab <- colnames(X)[-seq(blk)]
blklab <- colnames(X)[seq(blk)]
}
}
rownames(X) <- rownames(X, do.NULL = FALSE, prefix = "r")
storage.mode(X) <- "double"
Y <- as.double(y)
N <- as.integer(nrow(X))
COLS <- as.integer(ncol(X) - blk)
BLKS <- as.integer(blk)
MXFAC <- as.integer(max(1, mFac))
MXINT <- as.integer(mInt)
PI <- as.double(p)
if (length(g) == 1) {
INDGAM <- as.integer(0)
GAMMA <- as.double(g)
NGAM <- as.integer(1)
INDG2 <- as.integer(0)
GAM2 <- as.double(0)
}
else {
if (ng == 1) {
INDGAM <- as.integer(0)
GAMMA <- as.double(c(g[1], g[2]))
NGAM <- as.integer(1)
INDG2 <- as.integer(1)
GAM2 <- as.double(g[2])
}
else {
INDGAM <- as.integer(1)
GAMMA <- as.double(seq(min(g), max(g), length = ng))
NGAM <- as.integer(ng)
INDG2 <- as.integer(0)
GAM2 <- as.double(0)
}
}
NTOP <- as.integer(nMod)
mdcnt <- as.integer(0)
ptop <- as.double(rep(0, NTOP))
sigtop <- as.double(rep(0, NTOP))
nftop <- as.integer(rep(0, NTOP))
jtop <- matrix(0, nrow = NTOP, ncol = MXFAC)
dimnames(jtop) <- list(paste("M", seq(NTOP), sep = ""), paste("x",
seq(MXFAC), sep = ""))
storage.mode(jtop) <- "integer"
del <- as.double(0)
sprob <- as.double(rep(0, (COLS + 1)))
names(sprob) <- c("none", faclab)
pgam <- as.double(rep(0, NGAM))
prob <- matrix(0, nrow = (1 + COLS), ncol = NGAM)
dimnames(prob) <- list(c("none", paste("x", 1:COLS, sep = "")),
seq(NGAM))
storage.mode(prob) <- "double"
ind <- as.integer(-1)
lst <- .Fortran("bm", X, Y, N, COLS, BLKS, MXFAC, MXINT,
PI, INDGAM, INDG2, GAM2, NGAM, GAMMA, NTOP, mdcnt, ptop,
sigtop, nftop, jtop, del, sprob, pgam, prob, ind, PACKAGE = "BsMD")
names(lst) <- c("X", "Y", "N", "COLS", "BLKS", "MXFAC", "MXINT",
"PI", "INDGAM", "INDG2", "GAM2", "NGAM", "GAMMA", "NTOP",
"mdcnt", "ptop", "sigtop", "nftop", "jtop", "del", "sprob",
"pgam", "prob", "ind")
invisible(structure(lst, class = c("BsProb", class(lst))))
}
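# Usage sketch, mirroring the vignette chunks later in this file:
#   data(BM86.data, package = "BsMD")
#   X <- as.matrix(BM86.data[, 1:15]); y <- BM86.data[, 16]
#   fit <- BsProb(X = X, y = y, blk = 0, mFac = 15, mInt = 1,
#                 p = 0.20, g = 2.49, ng = 1, nMod = 10)
#   summary(fit)   # factor and model posterior probabilities
#   plot(fit)      # spike plot of the marginal factor probabilities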
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/BsProb.R
|
DanielPlot <-
function (fit, code = FALSE, faclab = NULL, block = FALSE, datax = TRUE,
half = FALSE, pch = "*", cex.fac = par("cex.lab"), cex.lab = par("cex.lab"),
cex.pch = par("cex.axis"), ...)
{
if (any(names(coef(fit)) == "(Intercept)")) {
factor.effects <- 2 * coef(fit)[-1]
}
else {
factor.effects <- 2 * coef(fit)
}
names(factor.effects) <- attr(fit$terms, "term.labels")
factor.effects <- factor.effects[!is.na(factor.effects)]
if (half) {
tn <- data.frame(x = qnorm(0.5 * ((rank(abs(factor.effects)) -
0.5)/length(factor.effects) + 1)), x = abs(factor.effects))
names(tn$x) <- names(factor.effects)
xlab <- "half-normal score"
ylab <- "absolute effects"
}
else {
tn <- qqnorm(factor.effects, plot = FALSE)
xlab <- "normal score"
ylab <- "effects"
}
if (datax) {
tmp <- tn$x
tn$x <- tn$y
tn$y <- tmp
tmp <- xlab
xlab <- ylab
ylab <- tmp
}
labx <- names(factor.effects)
laby <- 1:length(tn$y)
points.labels <- names(factor.effects)
plot.default(tn, xlim = c(min(tn$x), max(tn$x) + diff(range(tn$x))/5),
pch = pch, xlab = xlab, ylab = ylab, cex.lab = cex.lab,
...)
if (is.null(faclab)) {
if (!code) {
effect.code <- labx
}
else {
terms.ord <- attr(fit$terms, "order")
max.order <- max(terms.ord)
no.factors <- length(terms.ord[terms.ord == 1])
factor.label <- attr(fit$terms, "term.labels")[1:no.factors]
factor.code <- LETTERS[1:no.factors]
if (block)
factor.code <- c("BK", factor.code)
texto <- paste(factor.code[1], "=", factor.label[1])
for (i in 2:no.factors) {
texto <- paste(texto, ", ", factor.code[i], "=",
factor.label[i])
}
mtext(side = 1, line = 2.5, texto, cex = cex.fac)
get.sep <- function(string, max.order) {
k <- max.order - 1
get.sep <- rep(0, k)
j <- 1
for (i in 1:nchar(string)) {
if (substring(string, i, i) == ":") {
get.sep[j] <- i
if (j == k)
break
j <- j + 1
}
}
get.sep
}
labeling <- function(string, get.sep, max.order,
factor.code, factor.label) {
labeling <- ""
sep <- get.sep(string, max.order)
sep <- sep[sep > 0]
n <- length(sep) + 1
if (n > 1) {
sep <- c(0, sep, nchar(string) + 1)
for (i in 1:n) {
labeling <- paste(labeling, sep = "", factor.code[factor.label ==
substring(string, sep[i] + 1, sep[i + 1] -
1)][1])
}
}
else labeling <- paste(labeling, sep = "", factor.code[factor.label ==
string][1])
labeling
}
effect.code <- rep("", length(terms.ord))
for (i in 1:length(terms.ord)) {
effect.code[i] <- labeling(names(tn$x)[i], get.sep,
max.order, factor.code, factor.label)
}
}
text(tn, paste(" ", effect.code), cex = cex.pch, adj = 0,
xpd = NA)
}
else {
if (!is.list(faclab))
stop("* Argument 'faclab' has to be NULL or a list with idx and lab objects")
text(tn$x[faclab$idx], tn$y[faclab$idx], labels = faclab$lab,
cex = cex.fac, adj = 0)
}
invisible(cbind(as.data.frame(tn), no = 1:length(tn$x)))
}
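# Usage sketch (advance.lm and strength.lm are fitted in the vignette code
# later in this file):
#   DanielPlot(advance.lm)                # normal plot of the 2*coef effects
#   DanielPlot(strength.lm, half = TRUE)  # half-normal plot of |effects|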
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/DanielPlot.R
|
LenthPlot <-
function (obj, alpha = 0.05, plt = TRUE, limits = TRUE,
xlab = "factors", ylab = "effects", faclab = NULL, cex.fac = par("cex.lab"),
cex.axis = par("cex.axis"), adj = 1, ...)
{
if (inherits(obj, "lm")) {
i <- pmatch("(Intercept)", names(coef(obj)))
if (!is.na(i))
obj <- 2 * coef(obj)[-pmatch("(Intercept)", names(coef(obj)))]
}
b <- obj
if (!is.null(faclab)) {
if (!is.list(faclab))
stop("* Argument 'faclab' has to be NULL or a list with 'idx' and 'lab' elements")
names(b) <- rep("", length(b))
names(b)[faclab$idx] <- faclab$lab
}
m <- length(b)
d <- m/3
s0 <- 1.5 * median(abs(b))
cj <- as.numeric(b[abs(b) < 2.5 * s0])
PSE <- 1.5 * median(abs(cj))
ME <- qt(1 - alpha/2, d) * PSE
gamma <- (1 + (1 - alpha)^(1/m))/2
SME <- qt(gamma, d) * PSE
if (plt) {
n <- length(b)
x <- seq(n)
ylim <- range(c(b, 1.2 * c(ME, -ME)))
plot(x, b, xlim = c(1, n + 1), ylim = ylim, type = "n",
xlab = xlab, ylab = ylab, frame = FALSE, axes = FALSE,
...)
idx <- x[names(b) != ""]
text(x[idx], rep(par("usr")[3], length(idx)), labels = names(b)[idx],
cex = cex.fac, xpd = NA)
axis(2, cex.axis = cex.axis)
for (i in seq(along = x)) segments(x[i], 0, x[i], b[i],
lwd = 3, col = 1, lty = 1)
abline(h = 0, lty = 4, xpd = FALSE)
if (limits) {
abline(h = ME * c(1, -1), xpd = FALSE, lty = 2, col = grey(0.2))
text(adj * (n + 1) * c(1, 1), (ME + strheight("M",
cex = cex.axis)) * c(1, -1), labels = "ME", cex = 0.9 *
cex.axis, xpd = FALSE)
abline(h = SME * c(1, -1), xpd = FALSE, lty = 3,
col = grey(0.2))
text(adj * (n + 1) * c(1, 1), (SME + strheight("M",
cex = cex.axis)) * c(1, -1), labels = "SME",
cex = 0.9 * cex.axis, xpd = FALSE)
}
}
return(c(alpha = alpha, PSE = PSE, ME = ME, SME = SME))
}
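# Note on the returned statistics: s0 = 1.5 * median|b| is an initial scale
# estimate; PSE (Lenth's pseudo standard error) is 1.5 * median|c_j|, taken
# over the effects with |b_j| < 2.5 * s0; ME = t(1 - alpha/2, m/3) * PSE is
# the margin of error; SME = t(gamma, m/3) * PSE, with
# gamma = (1 + (1 - alpha)^(1/m)) / 2, is the simultaneous margin of error.
# ME and SME are the dashed and dotted reference lines drawn when limits = TRUE.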
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/LenthPlot.R
|
MD <-
function (X, y, nFac, nBlk = 0, mInt = 3, g = 2, nMod, p, s2,
nf, facs, nFDes = 4, Xcand, mIter = 20, nStart = 5, startDes = NULL,
top = 20, eps = 1e-05)
{
if (nFac + nBlk != ncol(X))
stop("nFac + nBlk != ncol(X)")
if (nFac + nBlk != ncol(Xcand))
stop("nFac + nBlk != ncol(Xcand)")
if (ncol(Xcand) != ncol(X))
stop("ncol(Xcand) != ncol(X)")
ITMAX <- as.integer(mIter)
N0 <- as.integer(nrow(X))
NRUNS <- as.integer(nFDes)
N <- as.integer(nrow(Xcand))
X <- as.matrix(X)
storage.mode(X) <- "double"
Y <- as.double(y)
GAMMA <- as.double(g[1])
GAM2 <- as.double(0)
COLS <- as.integer(nFac)
BL <- as.integer(nBlk)
CUT <- as.integer(mInt)
if (length(g) == 1) {
IND <- as.integer(0)
}
else {
IND <- as.integer(1)
GAM2 <- as.double(g[2])
}
Xcand <- as.matrix(Xcand)
storage.mode(Xcand) <- "double"
NM <- as.integer(nMod)
P <- as.double(as.numeric(p))
SIGMA2 <- as.double(as.numeric(s2))
NF <- as.integer(as.numeric(nf))
MNF <- as.integer(max(NF))
JFAC <- as.matrix(facs)
storage.mode(JFAC) <- "integer"
if (is.null(startDes)) {
if (is.null(nStart))
stop("nStart needed when startDes is NULL")
INITDES <- as.integer(1)
NSTART <- as.integer(nStart)
MBEST <- matrix(0, nrow = NSTART, ncol = NRUNS)
storage.mode(MBEST) <- "integer"
}
else {
INITDES <- as.integer(0)
startDes <- as.matrix(startDes)
NSTART <- as.integer(nrow(startDes))
if (ncol(startDes) != NRUNS)
stop("ncol(startDes) should be nFDes")
MBEST <- as.matrix(startDes)
storage.mode(MBEST) <- "integer"
}
NTOP <- as.integer(top)
TOPD <- as.double(rep(0, NTOP))
TOPDES <- matrix(0, nrow = NTOP, ncol = NRUNS)
dimnames(TOPDES) <- list(seq(top), paste("r", seq(NRUNS),
sep = ""))
storage.mode(TOPDES) <- "integer"
EPS <- as.double(eps)
flag <- as.integer(-1)
lst <- .Fortran("md", NSTART, NRUNS, ITMAX, INITDES, N0,
IND, X, Y, GAMMA, GAM2, BL, COLS, N, Xcand, NM, P, SIGMA2,
NF, MNF, JFAC, CUT, MBEST, NTOP, TOPD, TOPDES, EPS, flag,
PACKAGE = "BsMD")
names(lst) <- c("NSTART", "NRUNS", "ITMAX", "INITDES", "N0",
"IND", "X", "Y", "GAMMA", "GAM2", "BL", "COLS", "N",
"Xcand", "NM", "P", "SIGMA2", "NF", "MNF", "JFAC", "CUT",
"MBEST", "NTOP", "TOPD", "TOPDES", "EPS", "flag")
invisible(structure(lst, class = c("MD", class(lst))))
}
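# Usage sketch: MD() scores candidate follow-up runs by the model
# discrimination criterion, taking the competing models from a previous
# BsProb() fit; see the "MSBExample1" vignette chunk later in this file:
#   p <- fit$ptop; s2 <- fit$sigtop; nf <- fit$nftop; facs <- fit$jtop
#   MD(X = X, y = y, nFac = 4, nBlk = 1, mInt = 3, g = 2, nMod = 5,
#      p = p, s2 = s2, nf = nf, facs = facs, nFDes = 4, Xcand = Xcand,
#      mIter = 20, nStart = 25, top = 5)
# (`fit`, `X`, `y`, and `Xcand` are assumed to come from that workflow.)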
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/MD.R
|
plot.BsProb <-
function (x, code = TRUE, prt = FALSE, cex.axis = par("cex.axis"),
...)
{
spikes <- function(prob, lwd = 3, ...) {
y <- prob
n <- nrow(y)
x <- seq(n)
lab <- rownames(prob)
plot(x, y[, 1], xlim = range(x), ylim = c(0, 1), type = "n",
xlab = "factors", ylab = "posterior probability",
frame = FALSE, axes = FALSE, ...)
if (ncol(y) == 1) {
for (i in x) segments(x[i], 0, x[i], y[i, 1], lwd = lwd,
col = grey(0.2))
}
else {
y[, 1] <- apply(prob, 1, min)
y[, 2] <- apply(prob, 1, max)
for (i in x) {
segments(x[i], 0, x[i], y[i, 2], lwd = lwd, col = grey(0.8),
lty = 1)
segments(x[i], 0, x[i], y[i, 1], lwd = lwd, col = grey(0.2),
lty = 1)
}
}
axis(1, at = x, labels = lab, line = 0, cex.axis = cex.axis)
axis(2, cex.axis = cex.axis)
invisible(NULL)
}
if (!inherits(x, "BsProb"))
stop("Argument 'x' should be of class BsProb, the output of the corresponding function.")
ifelse(x$INDGAM == 0, prob <- as.matrix(x$sprob), prob <- x$prob)
if (code)
rownames(prob) <- rownames(x$prob)
else rownames(prob) <- names(x$sprob)
spikes(prob, ...)
if (prt)
summary.BsProb(x)
invisible(NULL)
}
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/plot.BsProb.R
|
print.BsProb <-
function (x, X = TRUE, resp = TRUE, factors = TRUE, models = TRUE,
nMod = 10, digits = 3, plt = FALSE, verbose = FALSE, ...)
{
if (verbose) {
print(unclass(x))
return(invisible(NULL))
}
nFac <- ncol(x$X) - x$blk
if (X) {
cat("\n Design Matrix:\n")
print(round(x$X, digits))
}
if (resp) {
cat("\n Response vector:\n")
cat(round(x$Y, digits = digits), fill = 80)
}
cat("\n Calculations:\n")
if (x$INDGAM == 0) {
if (x$INDG2 == 0) {
calc <- c(x$N, x$COLS, x$BLKS, x$MXFAC, x$MXINT,
x$P, x$GAMMA, x$mdcnt)
names(calc) <- c("nRun", "nFac", "nBlk", "mFac",
"mInt", "p", "g", "totMod")
}
else {
calc <- c(x$N, x$COLS, x$BLKS, x$MXFAC, x$MXINT,
x$P, x$GAMMA[1], x$GAMMA[2], x$mdcnt)
names(calc) <- c("nRun", "nFac", "nBlk", "mFac",
"mInt", "p", "g[main]", "g[int]", "totMod")
}
}
else {
calc <- c(x$N, x$COLS, x$BLKS, x$MXFAC, x$MXINT, x$P,
x$GAMMA[1], x$GAMMA[x$NGAM], x$mdcnt)
names(calc) <- c("nRun", "nFac", "nBlk", "mFac", "mInt",
"p", "g[1]", paste("g[", x$NGAM, "]", sep = ""),
"totMod")
}
out.list <- list(calc = calc)
print(round(calc, digits = digits))
if (plt)
plot.BsProb(x, code = TRUE)
if (factors) {
if (x$INDGAM == 1)
cat("\n Weighted factor probabilities:\n")
else cat("\n Factor probabilities:\n")
prob <- data.frame(Factor = names(x$sprob), Code = rownames(x$prob),
Prob = round(x$sprob, digits), row.names = seq(length(x$sprob)))
print(prob, digits = digits)
out.list[["probabilities"]] <- prob
}
if (x$INDGAM == 0 & models) {
cat("\n Model probabilities:\n")
ind <- seq(min(nMod, x$NTOP))
Prob <- round(x$ptop, digits)
NumFac <- x$nftop
Sigma2 <- round(x$sigtop, digits)
Factors <- apply(x$jtop, 1, function(x) ifelse(all(x ==
0), "none", paste(x[x != 0], collapse = ",")))
dd <- data.frame(Prob, Sigma2, NumFac, Factors)[ind,
]
print(dd, digits = digits, right = FALSE)
out.list[["models"]] <- dd
}
if (x$INDGAM == 1) {
cat("\n Values of posterior density of gamma:\n")
dd <- data.frame(gamma = x$GAMMA, pgam = x$pgam)
out.list[["gamma.density"]] <- dd
print(dd, digits = digits)
cat("\n Posterior probabilities for each gamma value:\n")
print(dd <- round(rbind(gamma = x$GAMMA, x$prob), digits = digits))
out.list[["probabilities"]] <- dd
}
invisible(out.list)
}
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/print.BsProb.R
|
print.MD <-
function (x, X = FALSE, resp = FALSE, Xcand = TRUE, models = TRUE,
nMod = x$NM, digits = 3, verbose = FALSE, ...)  # the MD object stores the model count as NM
{
if (verbose) {
print(unclass(x))
return(invisible(NULL))
}
nFac <- ncol(x$X) - x$blk
if (X) {
cat("\n Design Matrix:\n")
print(x$X)
}
if (resp) {
cat("\n Response vector:\n")
cat(round(x$Y, digits = digits), fill = 80)
}
cat("\n Base:\n")
calc <- c(x$N0, x$COLS, x$BL, x$CUT, x$GAMMA, x$GAM2, x$NM)
names(calc) <- c("nRuns", "nFac", "nBlk", "maxInt", "gMain",
"gInter", "nMod")
print(calc)
cat("\n Follow up:\n")
out <- c(x$N, x$NRUNS, x$ITMAX, x$NSTART)
names(out) <- c("nCand", "nRuns", "maxIter", "nStart")
print(out)
calc <- c(calc, out)
out.list <- list(calc = calc)
if (models && x$NM > 0) {
cat("\n Competing Models:\n")
ind <- seq(x$NM)
Prob <- round(x$P, digits)
NumFac <- x$NF
Sigma2 <- round(x$SIGMA2, digits)
Factors <- apply(x$JFAC, 1, function(x) ifelse(all(x ==
0), "none", paste(x[x != 0], collapse = ",")))
dd <- data.frame(Prob, Sigma2, NumFac, Factors)
print(dd, digits = digits, right = FALSE)
out.list[["models"]] <- dd
}
if (Xcand) {
cat("\n Candidate runs:\n")
print(round(x$Xcand, digits))
}
if (any(x$TOPD <= 0))  # the D criterion values are stored in component TOPD
ind <- min(which(x$TOPD <= 0))
else ind <- x$NTOP
toprun <- data.frame(D = x$TOPD, x$TOPDES)
ind <- min(nMod, ind)
cat("\n Top", ind, "runs:\n")
print(dd <- round(toprun[seq(ind), ], digits))
out.list[["follow.up"]] <- dd
invisible(out.list)
}
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/print.MD.R
|
summary.BsProb <-
function (object, nMod = 10, digits = 3, ...)
{
nFac <- ncol(object$X) - object$blk
cat("\n Calculations:\n")
if (object$INDGAM == 0) {
if (object$INDG2 == 0) {
calc <- c(object$N, object$COLS, object$BLKS, object$MXFAC,
object$MXINT, object$P, object$GAMMA, object$mdcnt)
names(calc) <- c("nRun", "nFac", "nBlk", "mFac",
"mInt", "p", "g", "totMod")
}
else {
calc <- c(object$N, object$COLS, object$BLKS, object$MXFAC,
object$MXINT, object$P, object$GAMMA[1], object$GAMMA[2],
object$mdcnt)
names(calc) <- c("nRun", "nFac", "nBlk", "mFac",
"mInt", "p", "g[main]", "g[int]", "totMod")
}
}
else {
calc <- c(object$N, object$COLS, object$BLKS, object$MXFAC,
object$MXINT, object$P, object$GAMMA[1], object$GAMMA[object$NGAM],
object$mdcnt)
names(calc) <- c("nRun", "nFac", "nBlk", "mFac", "mInt",
"p", "g[1]", paste("g[", object$NGAM, "]", sep = ""),
"totMod")
}
out.list <- list(calc = calc)
print(round(calc, digits = digits))
prob <- data.frame(Factor = names(object$sprob), Code = rownames(object$prob),
Prob = round(object$sprob, digits), row.names = seq(length(object$sprob)))
if (object$INDGAM == 0) {
cat("\n Factor probabilities:\n")
print(prob, digits = digits)
cat("\n Model probabilities:\n")
ind <- seq(min(nMod, object$NTOP))
Prob <- round(object$ptop, digits)
NumFac <- object$nftop
Sigma2 <- round(object$sigtop, digits)
Factors <- apply(object$jtop, 1, function(x) ifelse(all(x ==
0), "none", paste(x[x != 0], collapse = ",")))
dd <- data.frame(Prob, Sigma2, NumFac, Factors)[ind,
]
print(dd, digits = digits, right = FALSE)
out.list[["probabilities"]] <- prob
out.list[["models"]] <- dd
}
if (object$INDGAM == 1) {
cat("\n Posterior probabilities for each gamma value:\n")
print(dd <- round(rbind(gamma = object$GAMMA, object$prob),
digits = digits))
out.list[["probabilities"]] <- dd
}
invisible(out.list)
}
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/summary.BsProb.R
|
summary.MD <-
function (object, digits = 3, verbose = FALSE, ...)
{
if (verbose) {
print(unclass(object))
return(invisible(NULL))
}
nFac <- ncol(object$X) - object$blk
cat("\n Base:\n")
calc <- c(object$N0, object$COLS, object$BL, object$CUT,
object$GAMMA, object$GAM2, object$NM)
names(calc) <- c("nRuns", "nFac", "nBlk", "maxInt", "gMain",
"gInter", "nMod")
print(calc)
cat("\n Follow up:\n")
out <- c(object$N, object$NRUNS, object$ITMAX, object$NSTART)
names(out) <- c("nCand", "nRuns", "maxIter", "nStart")
print(out)
calc <- c(calc, out)
out.list <- list(calc = calc)
if (any(object$TOPD <= 0))  # the D criterion values are stored in component TOPD
ind <- min(which(object$TOPD <= 0))
else ind <- object$NTOP
toprun <- data.frame(D = object$TOPD, object$TOPDES)
ind <- min(10, ind)
cat("\n Top", ind, "runs:\n")
print(dd <- round(toprun[seq(ind), ], digits))
out.list[["follow.up"]] <- dd
invisible(out.list)
}
|
/scratch/gouwar.j/cran-all/cranData/BsMD/R/summary.MD.R
|
### R code from vignette source 'BsMD.Rnw'
###################################################
### code chunk number 1: BM86data
###################################################
options(width=80)
library(BsMD)
data(BM86.data,package="BsMD")
print(BM86.data)
###################################################
### code chunk number 2: BM86fitting
###################################################
advance.lm <- lm(y1 ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 +
X10 + X11 + X12 + X13 + X14 + X15, data=BM86.data)
shrinkage.lm <- lm(y2 ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 +
X10 + X11 + X12 + X13 + X14 + X15, data=BM86.data)
strength.lm <- lm(y3 ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 +
X10 + X11 + X12 + X13 + X14 + X15, data=BM86.data)
yield.lm <- lm(y4 ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 +
X10 + X11 + X12 + X13 + X14 + X15, data=BM86.data)
coef.tab <- data.frame(advance=coef(advance.lm),shrinkage=coef(shrinkage.lm),
strength=coef(strength.lm),yield=coef(yield.lm))
print(round(coef.tab,2))
###################################################
### code chunk number 3: DanielPlots
###################################################
par(mfrow=c(1,2),mar=c(3,3,1,1),mgp=c(1.5,.5,0),oma=c(0,0,0,0),
xpd=TRUE,pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
DanielPlot(advance.lm,cex.pch=0.8,main="a) Default Daniel Plot")
DanielPlot(advance.lm,cex.pch=0.8,main="b) Labelled Plot",pch=20,
faclab=list(idx=c(2,4,8),lab=c(" 2"," 4"," 8")))
###################################################
### code chunk number 4: BsMD.Rnw:163-169
###################################################
par(mfrow=c(1,2),mar=c(3,3,1,1),mgp=c(1.5,.5,0),oma=c(0,0,0,0),
xpd=TRUE,pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
DanielPlot(strength.lm,half=TRUE,cex.pch=0.8,main="a) Half-Normal Plot",
faclab=list(idx=c(4,12,13),lab=c(" x4"," x12"," x13")))
DanielPlot(strength.lm,main="b) Normal Plot",
faclab=list(idx=c(4,12,13),lab=c(" 4"," 12"," 13")))
###################################################
### code chunk number 5: BsMD.Rnw:207-216
###################################################
par(mfrow=c(1,2),mar=c(4,4,1,1),mgp=c(1.5,.5,0),oma=c(0,0,0,0),
xpd=TRUE,pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
LenthPlot(shrinkage.lm)
title("a) Default Lenth Plot")
b <- coef(shrinkage.lm)[-1] # Intercept removed
LenthPlot(shrinkage.lm,alpha=0.01,adj=0.2)
title(substitute("b) Lenth Plot (" *a* ")",list(a=quote(alpha==0.01))))
text(14,2*b[14],"P ",adj=1,cex=.7) # Label x14 corresponding to factor P
text(15,2*b[15]," -M",adj=0,cex=.7) # Label x15 corresponding to factor -M
###################################################
### code chunk number 6: BsMD.Rnw:229-234
###################################################
par(mfrow=c(1,2),mar=c(3,3,1,1),mgp=c(1.5,.5,0),oma=c(0,0,0,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
DanielPlot(yield.lm,cex.pch=0.6,main="a) Daniel Plot")
LenthPlot(yield.lm,alpha=0.05,xlab="factors",adj=.9,
main="b) Lenth Plot")
###################################################
### code chunk number 7: BsMD.Rnw:296-306
###################################################
par(mfrow=c(1,2),mar=c(3,3,1,1),mgp=c(1.5,.5,0),oma=c(0,0,0,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
X <- as.matrix(BM86.data[,1:15])
y <- BM86.data[,16] # Using prior probability of p=0.20, and k=10 (gamma=2.49)
advance.BsProb <- BsProb(X=X,y=y,blk=0,mFac=15,mInt=1,p=0.20,g=2.49,ng=1,nMod=10)
print(advance.BsProb,X=FALSE,resp=FALSE,nMod=5)
plot(advance.BsProb,main="a) Bayes Plot")
DanielPlot(advance.lm,cex.pch=0.6,main="b) Daniel Plot",
faclab=list(idx=c(2,4,8),lab=c(" x2"," x4"," x8")))
#title("Example I",outer=TRUE,line=-1,cex=.8)
###################################################
### code chunk number 8: BsMD.Rnw:329-342
###################################################
par(mfrow=c(1,2),mar=c(3,3,1,1),mgp=c(1.5,.5,0),oma=c(0,0,0,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
X <- as.matrix(BM86.data[,1:15])
y <- BM86.data[,19]
# Using prior probability of p=0.20, and k=5,10,15
yield.BsProb <- BsProb(X=X,y=y,blk=0,mFac=15,mInt=1,p=0.20,g=c(1.22,3.74),ng=10,nMod=10)
summary(yield.BsProb)
plot(yield.BsProb,main="a) Bayes Plot")
#title(substitute("( " *g* " )",list(g=quote(1.2<=gamma<=3.7))),line=-1)
title(substitute("( " *g1* "" *g2* " )",list(g1=quote(1.2<=gamma),g2=quote(""<=3.7))),line=-1)
DanielPlot(yield.lm,cex.pch=0.6,main="b) Daniel Plot",
faclab=list(idx=c(1,7,8,9,10,14),lab=paste(" ",c(1,7,8,9,10,14),sep="")))
#title("Example IV",outer=TRUE,line=-1,cex=.8)
###################################################
### code chunk number 9: BsMD.Rnw:372-390
###################################################
par(mfrow=c(1,2),mar=c(3,3,1,1),mgp=c(1.5,.5,0),oma=c(0,0,0,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
data(BM93.e1.data,package="BsMD")
X <- as.matrix(BM93.e1.data[,2:6])
y <- BM93.e1.data[,7]
prob <- 0.25
gamma <- 1.6
# Using prior probability of p=0.25, and gamma=1.6
reactor5.BsProb <- BsProb(X=X,y=y,blk=0,mFac=5,mInt=3,p=prob,g=gamma,ng=1,nMod=10)
summary(reactor5.BsProb)
plot(reactor5.BsProb,main="a) Main Effects")
data(PB12Des,package="BsMD")
X <- as.matrix(PB12Des)
reactor11.BsProb <- BsProb(X=X,y=y,blk=0,mFac=11,mInt=3,p=prob,g=gamma,ng=1,nMod=10)
print(reactor11.BsProb,models=FALSE)
plot(reactor11.BsProb,main="b) All Contrasts")
#title("12-runs Plackett-Burman Design",outer=TRUE,line=-1,cex.main=0.9)
###################################################
### code chunk number 10: BsMD.Rnw:414-432
###################################################
par(mfrow=c(1,2),mar=c(3,3,1,1),mgp=c(1.5,.5,0),oma=c(0,0,1,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
data(BM93.e2.data,package="BsMD")
X <- as.matrix(BM93.e2.data[,1:7])
y <- BM93.e2.data[,8]
prob <- 0.25
gamma <- c(1,2)
ng <- 20
# Using prior probability of p=0.25, and 20 values of gamma between 1 and 2
fatigueG.BsProb <- BsProb(X=X,y=y,blk=0,mFac=7,mInt=2,p=prob,g=gamma,ng=ng,nMod=10)
plot(fatigueG.BsProb$GAMMA,1/fatigueG.BsProb$prob[1,],type="o",
xlab=expression(gamma),ylab=substitute("P{" *g* "|y}",list(g=quote(gamma))))
title(substitute("a) P{" *g* "|y}"%prop%"1/P{Null|y, " *g* "}",list(g=quote(gamma))),
line=+.5,cex.main=0.8)
gamma <- 1.5
fatigue.BsProb <- BsProb(X=X,y=y,blk=0,mFac=7,mInt=2,p=prob,g=gamma,ng=1,nMod=10)
plot(fatigue.BsProb,main="b) Bayes Plot",code=FALSE)
title(substitute("( "*g*" )",list(g=quote(gamma==1.5))),line=-1)
###################################################
### code chunk number 11: BsMD.Rnw:458-475
###################################################
par(mfrow=c(1,2),mar=c(4,4,1,1),mgp=c(2,.5,0),oma=c(0,0,1,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
data(BM93.e3.data,package="BsMD")
print(BM93.e3.data)
X <- as.matrix(BM93.e3.data[1:16,2:9])
y <- BM93.e3.data[1:16,10]
prob <- 0.25
gamma <- 2.0
# Using prior probability of p=0.25, and gamma=2.0
plot(BsProb(X=X,y=y,blk=0,mFac=8,mInt=3,p=prob,g=gamma,ng=1,nMod=10),
code=FALSE,main="a) Fractional Factorial (FF)")
X <- as.matrix(BM93.e3.data[,c(2:9,1)])
y <- BM93.e3.data[,10]
plot(BsProb(X=X,y=y,blk=0,mFac=9,mInt=3,p=prob,g=gamma,ng=1,nMod=5),
code=FALSE,main="b) FF with Extra Runs",prt=TRUE)
mtext(side=1,"(Blocking factor blk)",cex=0.7,line=2.5)
###################################################
### code chunk number 12: MSBExample1
###################################################
par(mfrow=c(1,2),mar=c(3,4,1,1),mgp=c(2,.5,0),oma=c(0,0,1,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
data(BM93.e3.data,package="BsMD")
X <- as.matrix(BM93.e3.data[1:16,c(1,2,4,6,9)])
y <- BM93.e3.data[1:16,10]
injection16.BsProb <- BsProb(X=X,y=y,blk=1,mFac=4,mInt=3,p=0.25,g=2,ng=1,nMod=5)
X <- as.matrix(BM93.e3.data[1:16,c(1,2,4,6,9)])
p <- injection16.BsProb$ptop
s2 <- injection16.BsProb$sigtop
nf <- injection16.BsProb$nftop
facs <- injection16.BsProb$jtop
nFDes <- 4
Xcand <- matrix(c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
-1,-1,-1,-1,1,1,1,1,-1,-1,-1,-1,1,1,1,1,
-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,
-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,
-1,1,1,-1,1,-1,-1,1,1,-1,-1,1,-1,1,1,-1),
nrow=16,dimnames=list(1:16,c("blk","A","C","E","H"))
)
print(MD(X=X,y=y,nFac=4,nBlk=1,mInt=3,g=2,nMod=5,p=p,s2=s2,nf=nf,facs=facs,
nFDes=4,Xcand=Xcand,mIter=20,nStart=25,top=5))
###################################################
### code chunk number 13: ReactorData
###################################################
data(Reactor.data,package="BsMD")
print(Reactor.data)
#print(cbind(run=1:16,Reactor.data[1:16,],run=17:32,Reactor.data[17:32,]))
###################################################
### code chunk number 14: BsMD.Rnw:581-604
###################################################
par(mfrow=c(1,2),mar=c(3,4,1,1),mgp=c(2,.5,0),oma=c(0,0,0,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
fraction <- c(25,2,19,12,13,22,7,32)
cat("Fraction: ",fraction)
X <- as.matrix(cbind(blk=rep(-1,8),Reactor.data[fraction,1:5]))
y <- Reactor.data[fraction,6]
print(reactor8.BsProb <- BsProb(X=X,y=y,blk=1,mFac=5,mInt=3,
p=0.25,g=0.40,ng=1,nMod=32),X=FALSE,resp=FALSE,factors=TRUE,models=FALSE)
plot(reactor8.BsProb,code=FALSE,main="a) Initial Design\n(8 runs)")
p <- reactor8.BsProb$ptop
s2 <- reactor8.BsProb$sigtop
nf <- reactor8.BsProb$nftop
facs <- reactor8.BsProb$jtop
nFDes <- 4
Xcand <- as.matrix(cbind(blk=rep(+1,32),Reactor.data[,1:5]))
print(MD(X=X,y=y,nFac=5,nBlk=1,mInt=3,g=0.40,nMod=32,p=p,s2=s2,nf=nf,facs=facs,
nFDes=4,Xcand=Xcand,mIter=20,nStart=25,top=5),Xcand=FALSE,models=FALSE)
new.runs <- c(4,10,11,26)
cat("Follow-up:",new.runs)
X <- rbind(X,Xcand[new.runs,])
y <- c(y,Reactor.data[new.runs,6])
print(reactor12.BsProb <- BsProb(X=X,y=y,blk=1,mFac=5,mInt=3,p=0.25,g=1.20,ng=1,nMod=5))
plot(reactor12.BsProb,code=FALSE,main="b) Complete Design\n(12 runs)")
###################################################
### code chunk number 15: One-at-a-time
###################################################
data(Reactor.data,package="BsMD")
#cat("Initial Design:\n")
X <- as.matrix(cbind(blk=rep(-1,8),Reactor.data[fraction,1:5]))
y <- Reactor.data[fraction,6]
lst <- reactor8.BsProb <- BsProb(X=X,y=y,blk=1,mFac=5,mInt=3,p=0.25,g=0.40,ng=1,nMod=32)
#cat("Follow-Up: run 1\n")
p <- lst$ptop; s2 <- lst$sigtop; nf <- lst$nftop; facs <- lst$jtop
reactor8.MD <- MD(X=X,y=y,nFac=5,nBlk=1,mInt=3,g=0.40,nMod=32,p=p,s2=s2,nf=nf,facs=facs,
nFDes=1,Xcand=Xcand,mIter=20,nStart=25,top=3)
new.run <- 10
X <- rbind(X,Xcand[new.run,]); rownames(X)[nrow(X)] <- new.run
y <- c(y,Reactor.data[new.run,6])
lst <- reactor9.BsProb <- BsProb(X=X,y=y,blk=1,mFac=5,mInt=3,p=0.25,g=0.7,ng=1,nMod=32)
#cat("Follow-Up: run 2\n")
p <- lst$ptop; s2 <- lst$sigtop; nf <- lst$nftop; facs <- lst$jtop
reactor9.MD <- MD(X=X,y=y,nFac=5,nBlk=1,mInt=3,g=0.7,nMod=32,p=p,s2=s2,nf=nf,facs=facs,
nFDes=1,Xcand=Xcand,mIter=20,nStart=25,top=3)
new.run <- 4
X <- rbind(X,Xcand[new.run,]); rownames(X)[nrow(X)] <- new.run
y <- c(y,Reactor.data[new.run,6])
lst <- reactor10.BsProb <- BsProb(X=X,y=y,blk=1,mFac=5,mInt=3,p=0.25,g=1.0,ng=1,nMod=32)
#cat("Follow-Up: run 3\n")
p <- lst$ptop; s2 <- lst$sigtop; nf <- lst$nftop; facs <- lst$jtop
reactor10.MD <- MD(X=X,y=y,nFac=5,nBlk=1,mInt=3,g=1.0,nMod=32,p=p,s2=s2,nf=nf,facs=facs,
nFDes=1,Xcand=Xcand,mIter=20,nStart=25,top=3)
new.run <- 11
X <- rbind(X,Xcand[new.run,]); rownames(X)[nrow(X)] <- new.run
y <- c(y,Reactor.data[new.run,6])
lst <- reactor11.BsProb <- BsProb(X=X,y=y,blk=1,mFac=5,mInt=3,p=0.25,g=1.3,ng=1,nMod=32)
#cat("Follow-Up: run 4\n")
p <- lst$ptop; s2 <- lst$sigtop; nf <- lst$nftop; facs <- lst$jtop
reactor11.MD <- MD(X=X,y=y,nFac=5,nBlk=1,mInt=3,g=1.3,nMod=32,p=p,s2=s2,nf=nf,facs=facs,
nFDes=1,Xcand=Xcand,mIter=20,nStart=25,top=3)
new.run <- 15
X <- rbind(X,Xcand[new.run,]); rownames(X)[nrow(X)] <- new.run
y <- c(y,Reactor.data[new.run,6])
reactor12 <- BsProb(X=X,y=y,blk=1,mFac=5,mInt=3,p=0.25,g=1.30,ng=1,nMod=10)
print(reactor12,nMod=5,models=TRUE,plt=FALSE)
###################################################
### code chunk number 16: BsMD.Rnw:672-685
###################################################
par(mfrow=c(2,2),mar=c(3,4,1,1),mgp=c(2,.5,0),oma=c(1,0,1,0),
pty="s",cex.axis=0.7,cex.lab=0.8,cex.main=0.9)
#plot(reactor8.BsProb,code=FALSE)
#mtext(side=1,"a) 8 runs",line=3,cex=0.7)
plot(reactor9.BsProb,code=FALSE)
mtext(side=1,"b) 9 runs",line=3,cex=0.7)
plot(reactor10.BsProb,code=FALSE)
mtext(side=1,"c) 10 runs",line=3,cex=0.7)
plot(reactor11.BsProb,code=FALSE)
mtext(side=1,"d) 11 runs",line=3,cex=0.7)
plot(reactor12.BsProb,code=FALSE)
mtext(side=1,"e) 12 runs",line=3,cex=0.7)
title("One-at-a-time Experiments",outer=TRUE)
|
/scratch/gouwar.j/cran-all/cranData/BsMD/inst/doc/BsMD.R
|
#' Implementing Statistical Classification and Regression.
#'
#' Build a multi-layer feed-forward neural network model for statistical classification and regression analysis with random effects.
#'@param formula.string a formula string or a vector of numeric values. When it is a string, it denotes a classification or regression equation, of the form label ~ predictors or response ~ predictors, where predictors are separated by + operator. If it is a numeric vector, it will be a label or a response variable of a classification or regression equation, respectively.
#'@param data a data frame or a design matrix. When formula.string is a string, data should be a data frame which includes the label (or the response) and the predictors expressed in the formula string. When formula.string is a vector, i.e. a vector of labels or responses, data should be an nxp numeric matrix whose columns are predictors for further classification or regression.
#'@param train.ratio a ratio that is used to split data into training and test sets. When data is an n-by-p matrix, the resulting train data will be a (train.ratio x n)-by-p matrix. The default is 0.7.
#'@param arrange a logical value to arrange data for classification only (automatically set to FALSE for regression) when splitting data into training and test sets. If TRUE, the data will be arranged so that the resulting training set contains the specified ratio (train.ratio) of the labels of the whole data. See also Split2TrainTest().
#'@param batch.size a batch size used for training during iterations.
#'@param total.iter a number of iterations used for training.
#'@param hiddenlayer a vector of numbers of nodes in hidden layers.
#'@param batch.norm a logical value to specify whether or not to use the batch normalization option for training. The default is TRUE.
#'@param drop a logical value to specify whether or not to use the dropout option for training. The default is TRUE.
#'@param drop.ratio a ratio for the dropout; used only if drop is TRUE. The default is 0.1.
#'@param lr a learning rate. The default is 0.1.
#'@param init.weight a weight used to initialize the weight matrix of each layer. The default is 0.1.
#'@param activation a vector of activation functions used in all hidden layers. For two hidden layers (e.g., hiddenlayer=c(100, 50)), it is a vector of two activation functions, e.g., c("Sigmoid", "SoftPlus"). The list of available activation functions includes Sigmoid, Relu, LeakyRelu, TanH, ArcTan, ArcSinH, ElliotSig, SoftPlus, BentIdentity, Sinusoid, Gaussian, Sinc, and Identity. For details of the activation functions, please refer to Wikipedia.
#'@param optim an optimization method which is used for training. The following methods are available: "SGD", "Momentum", "AdaGrad", "Adam", "Nesterov", and "RMSprop."
#'@param type a statistical model for the analysis: "Classification" or "Regression."
#'@param rand.eff a logical value to specify whether or not to add a random effect into classification or regression.
#'@param distr a distribution of a random effect; used only if rand.eff is TRUE. The following distributions are available: "Normal", "Exponential", "Logistic", and "Cauchy."
#'@param disp a logical value which specifies whether or not to display intermediate training results (loss and accuracy) during the iterations.
#'
#'@return A list of the following values:
#'\describe{
#'\item{lW}{a list of n terms of weight matrices where n is equal to the number of hidden layers plus one.}
#'
#'\item{lb}{a list of n terms of bias vectors where n is equal to the number of hidden layers plus one.}
#'
#'\item{lParam}{a list of parameters used for the training process.}
#'
#'\item{train.loss}{a vector of loss values of the training set obtained during the iterations, where its length is equal to the number of epochs.}
#'
#'\item{train.accuracy}{a vector of accuracy values of the training set obtained during the iterations, where its length is equal to the number of epochs.}
#'
#'\item{test.loss}{a vector of loss values of the test set obtained during the iterations, where its length is equal to the number of epochs.}
#'
#'\item{test.accuracy}{a vector of accuracy values of the test set obtained during the iterations, where its length is equal to the number of epochs.}
#'
#'\item{predicted.softmax}{an r-by-n numeric matrix where r is the number of labels (classification) or 1 (regression), and n is the size of the test set. Its entries are predicted softmax values (classification) or predicted values (regression) of the test set, obtained by using the weight matrices (lW) and biases (lb).}
#'
#'\item{predicted.encoding}{an r-by-n numeric matrix which is the result of one-hot encoding of predicted.softmax; valid for classification only.}
#'
#'\item{predicted.label}{a vector of predicted labels (classification) or predicted values (regression) of the test set.}
#'
#'\item{confusion.matrix}{an r-by-r confusion matrix; valid for classification only.}
#'
#'\item{precision}{a list containing an (r+1)-by-3 matrix which reports precision, recall, and F1 of each label, and the overall accuracy; valid for classification only.}
#'
#'}
#'
#'@examples
#'####################
#'# train.ratio = 0.6 ## 60% of data is used for training
#'# batch.size = 10
#'# total.iter = 100
#'# hiddenlayer=c(20,10) ## Use two hidden layers
#'# arrange=TRUE #### Use "arrange" option
#'# activations = c("Relu","SoftPlus") ### Use Relu and SoftPlus
#'# optim = "Nesterov" ### Use the "Nesterov" method for the optimization.
#'# type = Classification
#'# rand.eff = TRUE #### Add some random effect
#'# distr="Normal" #### The random effect is a normal random variable
#'# disp = TRUE #### Display intermediate results during iterations.
#'
#'
#'data(iris)
#'
#'lst = TrainBuddle("Species~Sepal.Width+Petal.Width", iris, train.ratio=0.6,
#' arrange=TRUE, batch.size=10, total.iter = 100, hiddenlayer=c(20, 10),
#' batch.norm=TRUE, drop=TRUE, drop.ratio=0.1, lr=0.1, init.weight=0.1,
#' activation=c("Relu","SoftPlus"), optim="Nesterov",
#' type = "Classification", rand.eff=TRUE, distr = "Normal", disp=TRUE)
#'
#'lW = lst$lW
#'lb = lst$lb
#'lParam = lst$lParam
#'
#'confusion.matrix = lst$confusion.matrix
#'precision = lst$precision
#'
#'confusion.matrix
#'precision
#'
#'
#'### Another classification example
#'### Using mnist data
#'
#'
#'data(mnist_data)
#'
#'Img_Mat = mnist_data$Images
#'Img_Label = mnist_data$Labels
#'
#' ##### Use 100 images
#'
#'X = Img_Mat ### X: 100 x 784 matrix
#'Y = Img_Label ### Y: 100 x 1 vector
#'
#'lst = TrainBuddle(Y, X, train.ratio=0.6, arrange=TRUE, batch.size=10, total.iter = 100,
#' hiddenlayer=c(20, 10), batch.norm=TRUE, drop=TRUE,
#' drop.ratio=0.1, lr=0.1, init.weight=0.1,
#' activation=c("Relu","SoftPlus"), optim="AdaGrad",
#' type = "Classification", rand.eff=TRUE, distr = "Logistic", disp=TRUE)
#'
#'
#'confusion.matrix = lst$confusion.matrix
#'precision = lst$precision
#'
#'confusion.matrix
#'precision
#'
#'
#'
#'
#'
#'
#'############### Regression example
#'
#'
#'n=100
#'p=10
#'X = matrix(rnorm(n*p, 1, 1), n, p) ## X is a 100-by-10 design matrix
#'b = matrix( rnorm(p, 1, 1), p,1)
#'e = matrix(rnorm(n, 0, 1), n,1)
#'Y = X %*% b + e ### Y=X b + e
#'######### train.ratio=0.7
#'######### batch.size=20
#'######### arrange=FALSE
#'######### total.iter = 100
#'######### hiddenlayer=c(20)
#'######### activation = c("Identity")
#'######### "optim" = "Adam"
#'######### type = "Regression"
#'######### rand.eff=FALSE
#'
#'lst = TrainBuddle(Y, X, train.ratio=0.7, arrange=FALSE, batch.size=20, total.iter = 100,
#' hiddenlayer=c(20), batch.norm=TRUE, drop=TRUE, drop.ratio=0.1, lr=0.1,
#' init.weight=0.1, activation=c("Identity"), optim="AdaGrad",
#' type = "Regression", rand.eff=FALSE, disp=TRUE)
#'
#'
#'
#'
#'@references
#'[1] Geron, A. Hand-On Machine Learning with Scikit-Learn and TensorFlow. Sebastopol: O'Reilly, 2017. Print.
#'@references
#'[2] Han, J., Pei, J, Kamber, M. Data Mining: Concepts and Techniques. New York: Elsevier, 2011. Print.
#'@references
#'[3] Weilman, S. Deep Learning from Scratch. O'Reilly Media, 2019. Print.
#'@export
#'@seealso CheckNonNumeric(), GetPrecision(), FetchBuddle(), MakeConfusionMatrix(), OneHot2Label(), Split2TrainTest()
#'@importFrom Rcpp evalCpp
#'@importFrom plyr count
#'@useDynLib Buddle
TrainBuddle = function(formula.string, data, train.ratio=0.7, arrange=0, batch.size=10,
total.iter=10000, hiddenlayer=c(100), batch.norm=TRUE, drop=TRUE, drop.ratio=0.1,
lr=0.1, init.weight=0.1, activation=c("Sigmoid"), optim="SGD",
type="Classification", rand.eff=FALSE, distr="Normal", disp=TRUE){
########## Changing R env to C++ env
Train_ratio = train.ratio
bArrange = arrange
nBatch_Size = batch.size
nTotal_Iterations = total.iter
HiddenLayer = hiddenlayer
bBatch = batch.norm
bDrop = drop
drop_ratio = drop.ratio
Activation = activation
strOpt = optim
Type = type
bRand = rand.eff
strDist = distr
bDisp = disp
d_learning_rate = lr
d_init_weight = init.weight
if(Type=="Regression"){
bArrange=0
}
######################
nHiddenLayer =length(HiddenLayer)
nAct = length(Activation)
if(nAct>nHiddenLayer){
stop("Length of Activation vector should be equal or smaller than the length of HiddenLayer.")
}else if(nAct==nHiddenLayer){
nstrVec=GetStrVec(Activation)
}else{
NewActVec=rep("", times=nHiddenLayer)
NewActVec[1:nAct] = Activation
NewActVec[(nAct+1):nHiddenLayer] = "Relu"
nstrVec = GetStrVec(NewActVec)
}
######################
if(length(formula.string)==1){
lOneHot = OneHotEncoding(formula.string, data)
Y = lOneHot$Y
X = lOneHot$X ### X:n x p
T_Mat=lOneHot$OneHot #### T: rxn
Label = lOneHot$Label
dimm = dim(X)
n = dimm[1]
p = dimm[2]
}else{
Y = formula.string
X = data #### X : nxp
dimm = dim(X)
n = dimm[1]
p = dimm[2]
T_Mat = OneHotEncodingSimple(Y, n) #### T: rxn
}
cn = count(Y)
Label = cn$x
lCheck = CheckNonNumeric(X)
if(lCheck[[1]]!=0){
print("There are non-numeric values in the design matrix X.")
return(lCheck)
}
###################### Split X and T into train and test
nTrain = floor(n*Train_ratio)
if(nTrain<=10){
print(paste("The size of the train set is "+ nTrain, ". Increase the train ratio or get more data.", sep="") )
}
if(bArrange==1){
lYX = Split2TrainTest(Y, X, Train_ratio)
Y_test = lYX$y.test
Y_train = lYX$y.train
nTrain = length(Y_train)
Y[1:nTrain] = Y_train
Y[(nTrain+1):n] = Y_test
T_Mat = OneHotEncodingSimple(Y, n)
T_train = T_Mat[ , 1:nTrain]
T_test = T_Mat[ , (nTrain+1):n]
X_test = lYX$x.test
X_train = lYX$x.train
}else{
Y_train = Y[1:nTrain]
Y_test = Y[(nTrain+1):n]
X_train = X[1:nTrain, ]
X_test = X[(nTrain+1):n, ]
T_train = T_Mat[ , 1:nTrain]
T_test = T_Mat[ , (nTrain+1):n]
}
dimmT = dim(T_train)
r = dimmT[1]
nPerEpoch = nTrain/nBatch_Size
nEpoch = floor(nTotal_Iterations/nPerEpoch)
if(nBatch_Size>=nTrain){
print("The batch size is bigger than the size of the train set.")
print("The half of the size of the train set will be tried as a new batch size.")
nBatch_Size = floor(nTrain/2)
if(nBatch_Size==0){
stop("Batch size is 0.")
}
}else{
if(nEpoch==0){
print("The number of epoch is zero. Increase total iteration number, reduce the train ratio, or increase the batch size.")
stop()
}
}
############################### Start Buddle
lst = Buddle_Main(t(X_train), T_train, t(X_test), T_test, nBatch_Size,
nTotal_Iterations, HiddenLayer, bBatch, bDrop, drop_ratio,
d_learning_rate, d_init_weight,nstrVec, strOpt, Type, bRand, strDist, bDisp)
lW=lst[[1]]; lb=lst[[2]]
train_loss= lst[[3]][[1]]
train_accuracy = lst[[3]][[2]]
test_loss = lst[[3]][[3]]
test_accuracy = lst[[3]][[4]]
nLen = length(test_accuracy)
plot(1:nLen, test_accuracy, main = "Accuracy: Training vs. Test",
ylab="Accuracy", xlab="Epoch", type="l", col="red", ylim=c(0,1))
lines(1:nLen, train_accuracy, type="l", col="blue")
legend("topleft", c("Test", "Train"), fill=c("red", "blue"))
lParam = list(label = r, hiddenLayer=HiddenLayer, batch=bBatch, drop=bDrop, drop.ratio=drop_ratio,
lr = d_learning_rate, init.weight = d_init_weight, activation = nstrVec,
optim=strOpt, type = Type, rand.eff = bRand, distr = strDist, disp = bDisp)
lst2 = Buddle_Predict(t(X_test), lW, lb, lParam)
Predicted_SoftMax = lst2[[1]]
Predicted_OneHotEconding = lst2[[2]]
if(type == "Classification"){
Predicted_Label = OneHot2Label(Predicted_OneHotEconding, Label)
CM = MakeConfusionMatrix(Predicted_Label, Y_test, Label)
Precision = GetPrecision(CM)
}else{
Predicted_Label = Predicted_SoftMax
CM = NA
Precision = NA
}
lResult = list(lW=lW, lb=lb,
lParam = lParam,
train.loss=train_loss,
train.accuracy = train_accuracy,
test.loss=test_loss,
test.accuracy = test_accuracy,
predicted.softmax = t(Predicted_SoftMax),
predicted.encoding = t(Predicted_OneHotEconding),
predicted.label = Predicted_Label,
confusion.matrix = CM, precision=Precision)
return(lResult)
}
#' Predicting Classification and Regression.
#'
#' Yield prediction (softmax value or value) for regression and classification for given data based on the results of training.
#'@param X a matrix of real values which will be used for predicting classification or regression.
#'@param lW a list of weight matrices obtained after training.
#'@param lb a list of bias vectors obtained after training.
#'@param lParam a list of parameters used for training. It includes: label, hiddenlayer, batch, drop, drop.ratio, lr, init.weight, activation, optim, type, rand.eff, distr, and disp.
#'
#'@return A list of the following values:
#'\describe{
#'\item{predicted}{predicted real values (regression) or softmax values (classification).}
#'
#'\item{One.Hot.Encoding}{one-hot encoding values of the predicted softmax values for classification. For regression, a zero matrix will be returned. To convert the one-hot encoding values to labels, use OneHot2Label().}
#'}
#'
#'@examples
#'
#'### Using mnist data again
#'
#'data(mnist_data)
#'
#'X1 = mnist_data$Images ### X1: 100 x 784 matrix
#'Y1 = mnist_data$Labels ### Y1: 100 x 1 vector
#'
#'
#'
#'############################# Train Buddle
#'
#'lst = TrainBuddle(Y1, X1, train.ratio=0.6, arrange=TRUE, batch.size=10, total.iter = 100,
#' hiddenlayer=c(20, 10), batch.norm=TRUE, drop=TRUE,
#' drop.ratio=0.1, lr=0.1, init.weight=0.1,
#' activation=c("Relu","SoftPlus"), optim="AdaGrad",
#' type = "Classification", rand.eff=TRUE, distr = "Logistic", disp=TRUE)
#'
#'lW = lst[[1]]
#'lb = lst[[2]]
#'lParam = lst[[3]]
#'
#'
#'X2 = matrix(rnorm(20*784,0,1), 20,784) ## Generate a 20-by-784 matrix
#'
#'lst = FetchBuddle(X2, lW, lb, lParam) ## Pass X2 to FetchBuddle for prediction
#'
#'
#'
#'
#'
#'@references
#'[1] Geron, A. Hand-On Machine Learning with Scikit-Learn and TensorFlow. Sebastopol: O'Reilly, 2017. Print.
#'@references
#'[2] Han, J., Pei, J, Kamber, M. Data Mining: Concepts and Techniques. New York: Elsevier, 2011. Print.
#'@references
#'[3] Weilman, S. Deep Learning from Scratch. O'Reilly Media, 2019. Print.
#'@export
#'@seealso CheckNonNumeric(), GetPrecision(), MakeConfusionMatrix(), OneHot2Label(), Split2TrainTest(), TrainBuddle()
#'@importFrom Rcpp evalCpp
#'@useDynLib Buddle
FetchBuddle = function(X, lW, lb, lParam){
if(is.matrix(X)==FALSE){
p = length(X)
tmpX = matrix(0, 1, p)
for(i in 1:p){
tmpX[1, i] = X[i]
}
rm(X)
X = tmpX
}else{
dimm = dim(X)
n = dimm[1]
p = dimm[2]
}
lst = Buddle_Predict(t(X), lW, lb, lParam)
lResult = list(predicted = lst[[1]], One.Hot.Encoding = lst[[2]])
return(lResult)
}
#' Detecting Non-numeric Values.
#'
#' Check whether or not an input matrix includes any non-numeric values (NA, NULL, "", character, etc) before being used for training. If any non-numeric values exist, then TrainBuddle() or FetchBuddle() will return non-numeric results.
#'@param X an n-by-p matrix.
#'
#'@return A list of (n+1) values where n is the number of non-numeric values. The first element of the list is n, and all other elements are entries of X where non-numeric values occur. For example, when the (1,1)th and the (2,3)th entries of a 5-by-5 matrix X are non-numeric, then the list returned by CheckNonNumeric() will contain 2, (1,1), and (2,3).
#'
#'@examples
#'
#'n = 5;
#'p = 5;
#'X = matrix(0, n, p) #### Generate a 5-by-5 matrix which includes two NA's.
#'X[1,1] = NA
#'X[2,3] = NA
#'
#'lst = CheckNonNumeric(X)
#'
#'lst
#'
#'@export
#'@seealso GetPrecision(), FetchBuddle(), MakeConfusionMatrix(), OneHot2Label(), Split2TrainTest(), TrainBuddle()
#'@importFrom Rcpp evalCpp
#'@useDynLib Buddle
CheckNonNumeric = function(X){
dimm = dim(X)
n = dimm[1]
p = dimm[2]
nInc = 0
lst = list()
nIndex=2
for(i in 1:n){
for(j in 1:p){
val = X[i, j]
if((is.na(val)==TRUE) || is.null(val)==TRUE || is.numeric(val)==FALSE){
nInc = nInc+1
lst[[nIndex]] = c(i,j)
nIndex=nIndex+1
}
}
}
lst[[1]] = nInc
return(lst)
}
#' Splitting Data into Training and Test Sets.
#'
#' Convert data into training and test sets so that the training set contains approximately the specified ratio of all labels.
#'@param Y an n-by-1 vector of responses or labels.
#'@param X an n-by-p design matrix of predictors.
#'@param train.ratio a ratio of the size of the resulting training set to the size of data.
#'
#'
#'@return A list of the following values:
#'\describe{
#'
#'\item{y.train}{the training set of Y.}
#'\item{y.test}{the test set of Y.}
#'\item{x.train}{the training set of X.}
#'\item{x.test}{the test set of X.}
#'
#'}
#'@examples
#'
#'data(iris)
#'
#'Label = c("setosa", "versicolor", "virginica")
#'
#'
#'train.ratio=0.8
#'Y = iris$Species
#'X = cbind( iris$Sepal.Length, iris$Sepal.Width, iris$Petal.Length, iris$Petal.Width)
#'
#'lst = Split2TrainTest(Y, X, train.ratio)
#'
#'Ytrain = lst$y.train
#'Ytest = lst$y.test
#'
#'length(Ytrain)
#'length(Ytest)
#'
#'length(which(Ytrain==Label[1]))
#'length(which(Ytrain==Label[2]))
#'length(which(Ytrain==Label[3]))
#'
#'length(which(Ytest==Label[1]))
#'length(which(Ytest==Label[2]))
#'length(which(Ytest==Label[3]))
#'
#'
#'
#'
#'@export
#'@seealso CheckNonNumeric(), GetPrecision(), FetchBuddle(), MakeConfusionMatrix(), OneHot2Label(), TrainBuddle()
#'@importFrom Rcpp evalCpp
#'@useDynLib Buddle
Split2TrainTest=function(Y, X, train.ratio){
Train_ratio = train.ratio
dimm = dim(X)
n=dimm[1];p=dimm[2]
cn = count(Y)
cnx = cn$x
newcnf = floor(cn$freq * Train_ratio )
nLev = length(newcnf)
for(i in 1:nLev){
if(newcnf[i]==0){newcnf[i]=1}
}
nTrain = sum(newcnf)
nTest = n-nTrain
YTrain = rep(Y[1], times=nTrain)
XTrain = X
YTest = rep(Y[1], times=nTest)
XTest = X
nIncYTr=1;nIncYTst=1;nIncXTr=1;nIncXTst=1;
for(i in 1:nLev){
val = cnx[i]
nMany = newcnf[i] #### How many for train
Wh = which(Y==val)
nLenWh = length(Wh)
Ord = Wh[1:nMany] ############ Train index
nLenOrd = length(Ord)
for(j in 1:nLenOrd){   # use j to avoid shadowing the outer loop index i
nIndex = Ord[j]
YTrain[nIncYTr] = Y[nIndex]
XTrain[nIncYTr, ]= X[nIndex, ]
nIncYTr = nIncYTr+1
}
if(nLenWh != nMany){
NOrd = Wh[(nMany+1):nLenWh] ############ Test index
nLenNOrd = length(NOrd)
for(j in 1:nLenNOrd){   # use j to avoid shadowing the outer loop index i
nIndex = NOrd[j]
YTest[nIncYTst] = Y[nIndex]
XTest[nIncYTst, ]= X[nIndex, ]
nIncYTst = nIncYTst+1
}
}
}
nIncYTr = nIncYTr-1
nIncYTst = nIncYTst-1
XTrain = XTrain[1:nIncYTr,]
XTest = XTest[1:nIncYTst,]
lst = list(y.train=YTrain, y.test=YTest, x.train=XTrain, x.test=XTest)
return(lst)
}
#' Obtaining Labels
#'
#' Convert a one-hot encoding matrix to a vector of labels.
#'@param OHE an r-by-n one-hot encoding matrix.
#'@param Label an r-by-1 vector of values or levels which a label can take.
#'
#'
#'@return An n-by-1 vector of labels.
#'
#'@examples
#'
#'Label = c("setosa", "versicolor", "virginica")
#'r = length(Label)
#'
#'n=10
#'OHE = matrix(0, r, n) ### Generate a one-hot encoding matrix (one column per observation)
#'for(i in 1:n){
#'  if(i\%\%r==0){
#'    OHE[3, i] = 1
#'  }else if(i\%\%r==1){
#'    OHE[1, i] = 1
#'  }else{
#'    OHE[2, i] = 1
#'  }
#'}
#'
#'pred.label = OneHot2Label(OHE, Label)
#'
#'pred.label
#'
#'
#'
#'
#'@export
#'@seealso CheckNonNumeric(), GetPrecision(), FetchBuddle(), MakeConfusionMatrix(), Split2TrainTest(), TrainBuddle()
#'@importFrom Rcpp evalCpp
#'@useDynLib Buddle
OneHot2Label = function(OHE, Label){
T_Mat = OHE
dimm=dim(T_Mat)
p = dimm[1]
n = dimm[2]
AnswerKey = as.character(Label)
ans = rep("", times=n)
for(i in 1:n){
nIndex = which(T_Mat[,i]==1)
ans[i] = AnswerKey[nIndex]
}
return(ans)
}
#' Making a Confusion Matrix.
#'
#' Create a confusion matrix from two vectors of labels: predicted label obtained from FetchBuddle() as a result of prediction and true label of a test set.
#'@param predicted.label a vector of predicted labels.
#'@param true.label a vector of true labels.
#'@param Label a vector of all possible values or levels which a label can take.
#'
#'@return An r-by-r confusion matrix where r is the length of Label.
#'
#'@examples
#'
#'
#'data(iris)
#'
#'Label = c("setosa", "versicolor", "virginica")
#'
#'predicted.label = c("setosa", "setosa", "virginica", "setosa", "versicolor", "versicolor")
#'true.label = c("setosa", "virginica", "versicolor","setosa", "versicolor", "virginica")
#'
#'confusion.matrix = MakeConfusionMatrix(predicted.label, true.label, Label)
#'precision = GetPrecision(confusion.matrix)
#'
#'confusion.matrix
#'precision
#'
#'
#'
#'
#'
#'@export
#'@seealso CheckNonNumeric(), GetPrecision(), FetchBuddle(), OneHot2Label(), Split2TrainTest(), TrainBuddle()
#'@importFrom Rcpp evalCpp
#'@useDynLib Buddle
MakeConfusionMatrix = function(predicted.label, true.label, Label){
predicted = as.character(predicted.label)
answerkey = as.character(true.label)
Label = as.character(Label)
nLen = length(predicted)
n = length(Label)
CM = matrix(0,n,n)
colnames(CM) = as.character(Label)
rownames(CM) = as.character(Label)
for(i in 1:nLen){
val = predicted[i]
ans = answerkey[i]
nval = which(Label==val)
nans = which(Label==ans)
CM[nval, nans] = CM[nval, nans]+1
}
return(CM)
}
#' Obtaining Accuracy.
#'
#' Compute measures of accuracy such as precision, recall, and F1 from a given confusion matrix.
#'@param confusion.matrix a confusion matrix.
#'
#'@return A list of two elements: an (r+1)-by-3 matrix reporting precision, recall, and F1 for each label (plus weighted totals in the last row) when the input is an r-by-r confusion matrix, and the overall accuracy.
#'
#'@examples
#'
#'data(iris)
#'
#'Label = c("setosa", "versicolor", "virginica")
#'
#'predicted.label = c("setosa", "setosa", "virginica", "setosa", "versicolor", "versicolor")
#'true.label = c("setosa", "virginica", "versicolor","setosa", "versicolor", "virginica")
#'
#'confusion.matrix = MakeConfusionMatrix(predicted.label, true.label, Label)
#'precision = GetPrecision(confusion.matrix)
#'
#'confusion.matrix
#'precision
#'
#'
#'@export
#'@seealso CheckNonNumeric(), FetchBuddle(), MakeConfusionMatrix(), OneHot2Label(), Split2TrainTest(), TrainBuddle()
#'@importFrom Rcpp evalCpp
#'@useDynLib Buddle
GetPrecision = function(confusion.matrix){
CM = confusion.matrix
RT = rownames(CM)
dimm = dim(CM)
nClass = dimm[1]
MeasureMatrix=matrix(0,(nClass+1), 3)
PrecisionVec= rep(0,times=nClass)
RecallVec= rep(0,times=nClass)
F1Vec= rep(0,times=nClass)
ConsVec = rep(0, times=nClass)
nAccuracy = 0
for(i in 1:nClass){
TP = CM[i,i]
FP = sum(CM[,i])- TP
FN = sum(CM[i,])- TP
precisionVal = TP/(TP+FP)
recallVal = TP/(TP+FN)
f1Val = (2*precisionVal*recallVal)/(precisionVal+recallVal)
PrecisionVec[i] = precisionVal
RecallVec[i] = recallVal
F1Vec[i] = f1Val
ConsVec[i] = sum(CM[i,])
nAccuracy = nAccuracy+TP
}
cTitle=c("Precision", "Recall", "F1")
colnames(MeasureMatrix, do.NULL = TRUE)
colnames(MeasureMatrix)=cTitle
rTitle = rep("", times=(nClass+1))
rTitle[1:nClass+1] = RT
rTitle[(nClass+1)] = "Total"
rownames(MeasureMatrix, do.NULL = TRUE)
rownames(MeasureMatrix) = rTitle
MeasureMatrix[(1:nClass),1]= PrecisionVec
MeasureMatrix[(1:nClass),2]= RecallVec
MeasureMatrix[(1:nClass),3]= F1Vec
nInstance = sum(ConsVec)
MeasureMatrix[(nClass+1),1] = t(ConsVec)%*% PrecisionVec / nInstance
MeasureMatrix[(nClass+1),2] = t(ConsVec)%*% RecallVec / nInstance
MeasureMatrix[(nClass+1),3] = t(ConsVec)%*% F1Vec / nInstance
out = list()
out[[1]] = MeasureMatrix
out[[2]] = nAccuracy/nInstance
return(out)
}
ListVar = function(Str){
fstr = formula(Str)
Response = fstr[[2]]
lPredictor = fstr[[3]]
lst = list()
lst[[1]] = Response
nIter = 100000
nInc=2
for(i in 1:nIter){
len = length(lPredictor)
if(len==1){
lst[[nInc]] = lPredictor
break
}else{
lst[[nInc]] = lPredictor[[3]]
nInc=nInc+1
lPredictor = lPredictor[[2]]
}
}
return(lst)
}
SplitVariable = function(Str){
if(is.character(Str)){
fstr = formula(Str)
}else{
fstr = Str
}
Response = fstr[[2]]
lPredictor = fstr[[3]]
lst = list()
lst[[1]] = Response
nIter = 100000
nInc=2
for(i in 1:nIter){
len = length(lPredictor)
if(len==1){
lst[[nInc]] = lPredictor
break
}else{
lst[[nInc]] = lPredictor[[3]]
nInc=nInc+1
lPredictor = lPredictor[[2]]
}
}
return(lst)
}
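# Example: SplitVariable("y ~ x1 + x2 + x3") returns list(y, x3, x2, x1):
# the response first, then the predictors in right-to-left order, because the
# formula tree is peeled from its rightmost term inwards.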
OneHotEncodingSimple = function(y, n){
############################ Make T mat
cn = count(y)
#lev = as.numeric( as.character( cn$x ) )
lev = cn$x
# if(is.na(lev[1])==TRUE){
# lev = as.character( cn$x )
# }
nLen = length(lev)
T_Mat = matrix(0, nLen, n)
for(i in 1:n){
yVal = y[i]
nIndex = which(lev==yVal)
T_Mat[nIndex, i]=1
}
return(T_Mat)
}
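# Example: OneHotEncodingSimple(c("a", "b", "a"), 3) returns the 2-by-3 matrix
#   1 0 1   (row for level "a")
#   0 1 0   (row for level "b")
# with one row per level of y (ordered as by plyr::count) and one column per
# observation.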
OneHotEncoding = function(Str, DM){
dimm = dim(DM)
n = dimm[1]
lst = ListVar(Str)
nLen = length(lst)
nPred = nLen-1
ColVec = rep("", nPred)
yname = as.character( lst[[1]])
for(i in 1:nPred){
ColVec[i] = as.character( lst[[nLen+1-i]] )
}
############################ Make T mat
y = DM[, yname]
cn = count(y)
lev = cn$x
# lev = as.numeric( as.character( cn$x ) )
#
# if(is.na(lev[1])==TRUE){
# lev = as.character( cn$x )
# }
nLen = length(lev)
T_Mat = matrix(0, nLen, n)
for(i in 1:n){
yVal = y[i]
nIndex = which(lev==yVal)
T_Mat[nIndex, i]=1
}
FreqVec = as.numeric(cn$freq)
############################ Make X
X = matrix(0, n, nPred)
for(i in 1:nPred){
Var = ColVec[i]
X[, i] = DM[, Var]
}
lst = list(Y=y, X=X, OneHotMatrix=T_Mat, Label = lev)
return(lst)
}
RActiveStr2Num = function(Str){
if(Str=="Sigmoid"){
return(1)
}else if(Str=="Relu"){
return(2)
}else if(Str=="LeakyRelu"){
return(3)
}else if(Str=="TanH"){
return(4)
}else if(Str=="ArcTan"){
return(5)
}else if(Str=="ArcSinH"){
return(6)
}else if(Str=="ElliotSig"){
return(7)
}else if(Str=="SoftPlus"){
return(8)
}else if(Str=="BentIdentity"){
return(9)
}else if(Str=="Sinusoid"){
return(10)
}else if(Str=="Gaussian"){
return(11)
}else if(Str=="Sinc"){
return(12)
}else{
return(2)
}
}
GetStrVec = function(ActVec){
n = length(ActVec)
ans = rep(0, times=n)
for(i in 1:n){
ans[i] = RActiveStr2Num(ActVec[i])
}
return(ans)
}
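# Example (added sketch, not part of the original source). Activation names
# are mapped to the integer codes used by the C++ back end; unrecognised
# names fall back to 2, the code for Relu.
if (FALSE) {
  GetStrVec(c("Sigmoid", "Relu", "TanH", "NotAnActivation"))   # 1 2 4 2
}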
|
/scratch/gouwar.j/cran-all/cranData/Buddle/R/BuddleMain.R
|
#' Image data of handwritten digits.
#'
#' A dataset containing 100 images of handwritten digits.
#'
#'
#'@format A list containing a matrix of image data and a vector of labels:
#' \describe{
#' \item{Images}{100-by-784 matrix of image data of handwritten digits.}
#' \item{Labels}{100-by-1 vector of labels of handwritten digits.}
#'
#' }
#' @source \url{http://yann.lecun.com/exdb/mnist/}
#'
#'
#'
#'@examples
#'data(mnist_data)
#'
#'Img_Mat = mnist_data$Images
#'Img_Label = mnist_data$Labels
#'
#'digit_data = Img_Mat[1, ] ### image data (784-by-1 vector) of the first handwritten digit (=5)
#'label = Img_Label[1] ### label of the first handwritten digit (=5)
#'imgmat = matrix(digit_data, 28, 28) ### transform the vector of image data to a matrix
#'# image(imgmat, axes = FALSE, col = grey(seq(0, 1, length = 256))) ### render the data as an image
#'
#'
#'@docType data
#'@keywords datasets
#'@name mnist_data
#'@usage data(mnist_data)
#'@export
#'
#'
NULL
|
/scratch/gouwar.j/cran-all/cranData/Buddle/R/Img_data.R
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
#'@keywords internal
Buddle_Main <- function(X_train, T_train, X_test, T_test, nBatch_Size, nTotal_Iterations, HiddenLayer, bBatch, bDrop, drop_ratio, d_learning_rate, d_init_weight, nstrVec, strOpt, strType, bRand, strDist, bDisp) {
.Call(`_Buddle_Buddle_Main`, X_train, T_train, X_test, T_test, nBatch_Size, nTotal_Iterations, HiddenLayer, bBatch, bDrop, drop_ratio, d_learning_rate, d_init_weight, nstrVec, strOpt, strType, bRand, strDist, bDisp)
}
#'@keywords internal
Buddle_Predict <- function(X, lW, lb, lParam) {
.Call(`_Buddle_Buddle_Predict`, X, lW, lb, lParam)
}
|
/scratch/gouwar.j/cran-all/cranData/Buddle/R/RcppExports.R
|
# -----------------------------------------------------------------------------
# BuildSys.R
# -----------------------------------------------------------------------------
# Implements an R based build system for making and debugging C/C++ dlls
#
# By Paavo Jumppanen
# Copyright (c) 2020-2021, CSIRO Marine and Atmospheric Research
# License: GPL-2
# -----------------------------------------------------------------------------
dynlib <- function(BaseName)
{
LibName <- paste(BaseName, .Platform$dynlib.ext, sep="")
return (LibName)
}
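# Example (added sketch, not part of the original source). dynlib() simply
# appends the platform's shared-library extension, e.g. "mylib.dll" on
# Windows and "mylib.so" on Linux.
if (FALSE) {
  dynlib("mylib")
}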
# -----------------------------------------------------------------------------
# class representing a source file and its dependencies
# -----------------------------------------------------------------------------
setClass("BSysSourceFile",
slots = c(
Filename = "character",
Type = "character",
Dependencies = "list",
Externals = "list"
)
)
# -----------------------------------------------------------------------------
# Constructor for SourceFile class
# -----------------------------------------------------------------------------
setMethod("initialize", "BSysSourceFile",
function(.Object, Filename, SrcFolder, IncludeFolder, Type)
{
.Object@Filename <- Filename
.Object@Type <- Type
.Object@Dependencies <- list()
.Object@Externals <- list()
buildDependencies <- function(.Object, Filename)
{
if ((Type == "c") || (Type == "cpp"))
{
MatchExp <- "#include[\t ]*"
StripExp <- "<|>|\""
CommentExp <- "//.*"
CaseSensitive <- TRUE
}
else if (Type == "f")
{
MatchExp <- "INCLUDE[\t ]*"
StripExp <- "'"
CommentExp <- "!.*"
CaseSensitive <- FALSE
}
Lines <- readLines(Filename)
for (Line in Lines)
{
# Find include statements to figure out dependencies.
# There is no real pre-processing step here, so if the code
# uses pre-processor conditionals this may report more
# dependencies than the code actually has.
if (grepl(MatchExp, Line))
{
Include <- sub(CommentExp, "", gsub(StripExp, "", sub(MatchExp, "", Line)))
PrefixedInclude <- paste(IncludeFolder, Include, sep="")
if (file.exists(PrefixedInclude))
{
.Object@Dependencies <- c(.Object@Dependencies, Include)
# look for nested includes
.Object <- buildDependencies(.Object, PrefixedInclude)
}
else
{
.Object@Externals <- c(.Object@Externals, Include)
}
}
}
return (.Object)
}
FilePath <- paste0(SrcFolder, Filename)
return (buildDependencies(.Object, FilePath))
}
)
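# Example (added sketch, not part of the original source). Constructing a
# source file object directly; "mycode.cpp", "src/" and "include/" are
# hypothetical paths. The constructor scans the file for include statements
# and records them as dependencies.
if (FALSE) {
  srcFile <- new("BSysSourceFile", "mycode.cpp", "src/", "include/", "cpp")
  srcFile@Dependencies   # headers found under include/
  srcFile@Externals      # headers not found locally, e.g. Rcpp.h
}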
# -----------------------------------------------------------------------------
# method to return build rule for source file
# -----------------------------------------------------------------------------
setGeneric("makeBuildRule", function(.Object, ...) standardGeneric("makeBuildRule"))
setMethod("makeBuildRule", "BSysSourceFile",
function(.Object, RelativePath="")
{
if (.Object@Type == "c")
{
BuildRule <- paste("\t$(CC) $(CFLAGS) -c ", RelativePath, .Object@Filename, sep="")
}
else if (.Object@Type == "cpp")
{
BuildRule <- paste("\t$(CXX) $(CXXFLAGS) -c ", RelativePath, .Object@Filename, sep="")
}
else if (.Object@Type == "f")
{
BuildRule <- paste("\t$(FC) $(FFLAGS) -c ", RelativePath, .Object@Filename, sep="")
}
return (BuildRule)
}
)
# -----------------------------------------------------------------------------
# class representing a project and the files that define it
# -----------------------------------------------------------------------------
setClass("BSysProject",
slots = c(
ProjectName = "character",
WorkingFolder = "character",
SourceName = "character",
IncludeName = "character",
ObjName = "character",
InstallLibraryName = "character",
InstallIncludeName = "character",
Flat = "logical",
SourceFiles = "list",
Packages = "character",
Includes = "character",
Defines = "character",
Libraries = "character",
CFLAGS = "character",
CXXFLAGS = "character",
FFLAGS = "character",
LDFLAGS = "character",
LDLIBS = "character",
DEFINES = "character",
IsDebug = "logical",
DebugState = "list"
)
)
# -----------------------------------------------------------------------------
# Constructor for CodeProject class
# -----------------------------------------------------------------------------
setMethod("initialize", "BSysProject",
function(.Object,
WorkingFolder=NULL,
Name="",
SourceFiles=NULL,
SourceName="src",
IncludeName="include",
ObjName="obj",
InstallLibraryName=as.character(NULL),
InstallIncludeName=as.character(NULL),
Flat=TRUE,
Packages=as.character(c()),
Includes=as.character(c()),
Defines=as.character(c()),
Libraries=as.character(c()),
CFLAGS=as.character(c()),
CXXFLAGS=as.character(c()),
FFLAGS=as.character(c()),
LDFLAGS=as.character(c()),
LDLIBS=as.character(c()),
DEFINES=as.character(c()),
Debug=TRUE)
{
.Object@ProjectName <- ""
.Object@WorkingFolder <- ""
.Object@SourceName <- ""
.Object@IncludeName <- ""
.Object@ObjName <- ""
.Object@InstallLibraryName <- as.character(NULL)
.Object@InstallIncludeName <- as.character(NULL)
.Object@Flat <- TRUE
.Object@SourceFiles <- list()
.Object@Packages <- Packages
.Object@Includes <- c(R.home("include"), Includes)
.Object@Defines <- Defines
.Object@Libraries <- Libraries
.Object@CFLAGS <- CFLAGS
.Object@CXXFLAGS <- CXXFLAGS
.Object@FFLAGS <- FFLAGS
.Object@LDFLAGS <- LDFLAGS
.Object@LDLIBS <- LDLIBS
.Object@DEFINES <- DEFINES
.Object@IsDebug <- Debug
.Object@DebugState <- list()
return (initProjectFromFolder(.Object,
WorkingFolder,
Name,
SourceFiles,
SourceName,
IncludeName,
ObjName,
InstallLibraryName,
InstallIncludeName,
Flat,
Packages,
Includes,
Defines,
Libraries,
CFLAGS,
CXXFLAGS,
FFLAGS,
LDFLAGS,
LDLIBS,
DEFINES,
Debug))
}
)
# -----------------------------------------------------------------------------
# Method to initialise Project based on folder contents
# -----------------------------------------------------------------------------
setGeneric("initProjectFromFolder", function(.Object, ...) standardGeneric("initProjectFromFolder"))
setMethod("initProjectFromFolder", "BSysProject",
function(.Object,
WorkingFolder=NULL,
Name="",
SourceFiles=NULL,
SourceName="src",
IncludeName="include",
ObjName="obj",
InstallLibraryName=as.character(NULL),
InstallIncludeName=as.character(NULL),
Flat=TRUE,
Packages=as.character(c()),
Includes=as.character(c()),
Defines=as.character(c()),
Libraries=as.character(c()),
CFLAGS=as.character(c()),
CXXFLAGS=as.character(c()),
FFLAGS=as.character(c()),
LDFLAGS=as.character(c()),
LDLIBS=as.character(c()),
DEFINES=as.character(c()),
Debug=TRUE)
{
if (is.null(WorkingFolder))
{
stop("WorkingFolder not specified. Please specify a WorkingFolder.")
}
if (Sys.info()["sysname"] == "Windows")
{
RLIBPATH <- R.home("bin")
KnownLibDependencies <- list(BLAS.h="Rblas", Lapack.h="Rlapack", iconv.h="Riconv")
}
else
{
RLIBPATH <- paste(R.home(), "/lib", Sys.getenv("R_ARCH"), sep="")
KnownLibDependencies <- list(BLAS.h="blas", Lapack.h="lapack", iconv.h="iconv")
}
testFolder <- function(Path)
{
if (!dir.exists(Path))
{
warning("The folder", Path, "does not exist.\n")
}
}
addExternalDependencies <- function(SourceFile, SrcFolder, IncludeFolder, CodeProject)
{
if (CodeProject@IsDebug)
{
DefTMB_SafeBounds <- "TMB_SAFEBOUNDS"
}
else
{
DefTMB_SafeBounds <- ""
}
DefLIB_UNLOAD <- paste("LIB_UNLOAD=R_unload_", CodeProject@ProjectName, sep="")
DefTMB_LIB_INIT <- paste("TMB_LIB_INIT=R_init_", CodeProject@ProjectName, sep="")
RcppDep <- list(pkg="Rcpp",
add.abort=TRUE,
defs=c(DefLIB_UNLOAD),
libs=as.character(c()))
RcppEigenDep <- list(pkg="RcppEigen",
add.abort=TRUE,
defs=c(DefLIB_UNLOAD),
libs=as.character(c()))
TMBDep <- list(pkg="TMB",
add.abort=TRUE,
defs=c(DefTMB_SafeBounds, DefLIB_UNLOAD, DefTMB_LIB_INIT),
libs=as.character(c()))
KnownPackageDependencies <- list(Rcpp.h =list(RcppDep),
RcppEigen.h =list(RcppDep, RcppEigenDep),
TMB.hpp =list(TMBDep, RcppDep, RcppEigenDep))
AddAbort <- FALSE
for (External in SourceFile@Externals)
{
PackageDependencies <- KnownPackageDependencies[[External]]
if (!is.null(PackageDependencies))
{
for (PackageDependency in PackageDependencies)
{
AddAbort <- AddAbort || PackageDependency$add.abort
if (!PackageDependency$pkg %in% CodeProject@Packages)
{
CodeProject@Packages <- c(CodeProject@Packages, PackageDependency$pkg)
IncludePath <- getPackagePath(PackageDependency$pkg, "/include")
if (!IncludePath %in% CodeProject@Includes)
{
CodeProject@Includes <- c(CodeProject@Includes, IncludePath)
}
for (Define in PackageDependency$defs)
{
if (!Define %in% CodeProject@Defines)
{
CodeProject@Defines <- c(CodeProject@Defines, Define)
}
}
for (Lib in PackageDependency$libs)
{
if (!Lib %in% CodeProject@Libraries)
{
CodeProject@Libraries <- c(CodeProject@Libraries, Lib)
}
}
}
}
}
LibDependency <- KnownLibDependencies[[External]]
if (!is.null(LibDependency))
{
CodeProject@Libraries <- c(CodeProject@Libraries, LibDependency)
}
}
if (AddAbort)
{
AbortOverrideCode <- c("//----------------------------------------------",
"// BuildSys standard C library abort() override.",
"//----------------------------------------------",
"",
"#include <stdexcept>",
"",
"extern \"C\" void abort(void)",
"{",
" // If you are here then your code has called abort.",
" // We throw bad_alloc() cos that is what is caught in TMB,",
" // Rcpp and RcppEigen exception handling.",
" throw std::bad_alloc();",
"}",
"")
AbortSourceFilePath <- paste0(SrcFolder, "bsys_abort.cpp")
if (!file.exists(AbortSourceFilePath))
{
abortSourceFile <- file(AbortSourceFilePath, "wt")
writeLines(AbortOverrideCode, abortSourceFile)
close(abortSourceFile)
}
SourceFile <- new("BSysSourceFile", "bsys_abort.cpp", SrcFolder, IncludeFolder, "cpp")
CodeProject@SourceFiles[[length(CodeProject@SourceFiles) + 1]] <- SourceFile
}
return (CodeProject)
}
addSlash <- function(Path)
{
if (grepl("\\\\[^ ]+|\\\\$", Path))
{
stop(paste("'", Path, "' uses \\ as delimiter. Please use / instead.", sep=""))
}
if ((nchar(Path) != 0) && !grepl("/$", Path))
{
Path <- paste(Path, "/", sep="")
}
return (Path)
}
FullPath <- addSlash(normalizePath(WorkingFolder, winslash="/", mustWork=FALSE))
if (nchar(FullPath) == 0)
{
FullPath <- addSlash(getwd())
}
.Object@WorkingFolder <- FullPath
.Object@ProjectName <- Name
.Object@SourceName <- ""
.Object@IncludeName <- ""
.Object@ObjName <- ""
.Object@InstallLibraryName <- as.character(NULL)
.Object@InstallIncludeName <- as.character(NULL)
.Object@Flat <- Flat
.Object@SourceFiles <- list()
.Object@Packages <- Packages
.Object@Includes <- c(R.home("include"), Includes)
.Object@Defines <- Defines
.Object@Libraries <- Libraries
.Object@CFLAGS <- CFLAGS
.Object@CXXFLAGS <- CXXFLAGS
.Object@FFLAGS <- FFLAGS
.Object@LDFLAGS <- LDFLAGS
.Object@LDLIBS <- LDLIBS
.Object@DEFINES <- DEFINES
.Object@IsDebug <- Debug
.Object@DebugState <- list()
testFolder(.Object@WorkingFolder)
if (!Flat)
{
.Object@SourceName <- addSlash(SourceName)
.Object@IncludeName <- addSlash(IncludeName)
.Object@ObjName <- addSlash(ObjName)
ObjectFolder <- paste(.Object@WorkingFolder, .Object@ObjName, sep="")
testFolder(ObjectFolder)
}
if (!identical(character(0), InstallLibraryName))
{
.Object@InstallLibraryName <- addSlash(InstallLibraryName)
InstallLibraryFolder <- paste(.Object@WorkingFolder, .Object@InstallLibraryName, sep="")
testFolder(InstallLibraryFolder)
}
if (!identical(character(0), InstallIncludeName))
{
.Object@InstallIncludeName <- addSlash(InstallIncludeName)
}
SrcFolder <- paste(.Object@WorkingFolder, .Object@SourceName, sep="")
IncludeFolder <- paste(.Object@WorkingFolder, .Object@IncludeName, sep="")
testFolder(SrcFolder)
testFolder(IncludeFolder)
if (is.null(SourceFiles))
{
AllFiles <- dir(SrcFolder)
}
else
{
AllFiles <- SourceFiles
}
for (File in AllFiles)
{
if (grepl("bsys_abort.cpp$", File))
{
# ignore as this is auto-created and will be re-created
}
else if (grepl("\\.c$", File))
{
# c source file
SourceFile <- new("BSysSourceFile", File, SrcFolder, IncludeFolder, "c")
.Object@SourceFiles[[length(.Object@SourceFiles) + 1]] <- SourceFile
}
else if (grepl("\\.cpp$", File))
{
# c++ source file
SourceFile <- new("BSysSourceFile", File, SrcFolder, IncludeFolder, "cpp")
.Object@SourceFiles[[length(.Object@SourceFiles) + 1]] <- SourceFile
}
else if ((grepl("\\.f$|\\.for$|\\.f95$|\\.f90$|\\.f77$", File)))
{
# fortran source file
SourceFile <- new("BSysSourceFile", File, SrcFolder, IncludeFolder, "f")
.Object@SourceFiles[[length(.Object@SourceFiles) + 1]] <- SourceFile
}
else
{
# Other file
}
}
if (nchar(.Object@ProjectName) == 0)
{
# If project has only 1 source file use it to initialise project name
if (length(.Object@SourceFiles) == 1)
{
.Object@ProjectName <- gsub("\\..*$", "", .Object@SourceFiles[[1]]@Filename)
}
else
{
stop("You must supply a 'Name' for this project object")
}
}
# This step needs to happen after setting ProjectName
for (SourceFile in .Object@SourceFiles)
{
.Object <- addExternalDependencies(SourceFile, SrcFolder, IncludeFolder, .Object)
}
return (.Object)
}
)
# -----------------------------------------------------------------------------
# Method to suppress printing entire object
# -----------------------------------------------------------------------------
setMethod("show", "BSysProject",
function(object)
{
cat(paste("BuildSys Project:",
object@ProjectName,
"\nWorking Folder:",
object@WorkingFolder,
"\n"))
}
)
# -----------------------------------------------------------------------------
# Method to build makefile
# -----------------------------------------------------------------------------
setGeneric("buildMakefile", function(.Object, ...) standardGeneric("buildMakefile"))
setMethod("buildMakefile", "BSysProject",
function(.Object, Force=FALSE)
{
# -------------------------------------------------------------------------
# idStamp() creates a stamp to check staleness of makefile
# -------------------------------------------------------------------------
idStamp <- function()
{
MakeID <- digest::digest(.Object, algo="md5")
IdStamp <- paste("# MakeID:", MakeID, "--Do not edit this line")
return (IdStamp)
}
# -------------------------------------------------------------------------
# checkMakefile() checks if makefile up to date
# -------------------------------------------------------------------------
checkMakefile <- function(MakefilePath)
{
UptoDate <- FALSE
# Check if makefile exists.
if (file.exists(MakefilePath))
{
# Check if makefile up to date.
Lines <- readLines(MakefilePath, n=1)
if (Lines[1] == idStamp())
{
UptoDate <- TRUE
}
}
return (UptoDate)
}
# -------------------------------------------------------------------------
# createMakefile() creates a new makefile based on project
# -------------------------------------------------------------------------
createMakefile <- function(MakefilePath)
{
dropLeadingTrailingSlashes <- function(Path)
{
Path <- gsub("/+$", "", Path)
Path <- gsub("^/+", "", Path)
return (Path)
}
DlibName <- dynlib(.Object@ProjectName)
LDLIBS <- .Object@LDLIBS
RootRelativePath <- ""
SrcRelativePath <- ""
IncludeRelativePath <- ""
ObjName <- dropLeadingTrailingSlashes(.Object@ObjName)
SourceName <- dropLeadingTrailingSlashes(.Object@SourceName)
IncludeName <- dropLeadingTrailingSlashes(.Object@IncludeName)
InstallLibraryName <- NULL
InstallIncludeName <- NULL
if (!identical(character(0), .Object@InstallLibraryName))
{
InstallLibraryName <- dropLeadingTrailingSlashes(.Object@InstallLibraryName)
}
if (!identical(character(0), .Object@InstallIncludeName))
{
InstallIncludeName <- dropLeadingTrailingSlashes(.Object@InstallIncludeName)
}
if (nchar(ObjName) != 0)
{
depth <- length(which(as.integer(gregexpr("/", ObjName)[[1]]) != -1)) + 1
for (cx in 1:depth)
{
RootRelativePath <- paste(RootRelativePath, "../", sep="")
}
}
for (Library in .Object@Libraries)
{
if (grepl("\\\\[^ ]+|\\\\$", Library))
{
stop(paste("'", Library, "' uses \\ as delimiter. Please use / instead.", sep=""))
}
if (grepl("/", Library))
{
LibPath <- sub("/[^/]*$", "", Library)
Lib <- gsub(paste(LibPath, "/", sep=""), "", Library)
LDLIBS <- c(LDLIBS, paste("-L", LibPath, " -l", Lib, sep=""))
}
else
{
LDLIBS <- c(LDLIBS, paste("-l", Library, sep=""))
}
}
if (.Object@IsDebug)
{
COMMONFLAGS <- c("-O0", "-g", "-DDEBUG", "-D_DEBUG")
}
else
{
COMMONFLAGS <- c("-O2", "-DNDEBUG")
}
LDFLAGS <- c("-shared")
WinSub <- ""
if (Sys.info()["sysname"] == "Windows")
{
if (Sys.info()["machine"]=="x86-64")
{
WinSub <- "x64/"
}
else
{
WinSub <- "x32/"
}
}
else
{
COMMONFLAGS <- c(COMMONFLAGS, "-fPIC")
LDFLAGS <- c(LDFLAGS, "-fPIC")
}
for (Define in .Object@Defines)
{
if (nchar(Define) > 0)
{
COMMONFLAGS <- c(COMMONFLAGS, paste("-D", Define, sep=""))
}
}
for (Define in .Object@DEFINES)
{
if ((nchar(Define) > 0) && !(Define %in% .Object@Defines))
{
COMMONFLAGS <- c(COMMONFLAGS, paste("-D", Define, sep=""))
}
}
if (nchar(SourceName) != 0)
{
SrcRelativePath <- paste(RootRelativePath, SourceName, "/", sep="")
}
IncludeRelativePath <- ""
if (nchar(IncludeName) != 0)
{
IncludeRelativePath <- paste(RootRelativePath, IncludeName, "/", sep="")
COMMONFLAGS <- c(COMMONFLAGS, paste("-I", IncludeRelativePath, sep=""))
}
for (Include in .Object@Includes)
{
COMMONFLAGS <- c(COMMONFLAGS, paste("-I", Include, sep=""))
}
LDFLAGS <- c(LDFLAGS, .Object@LDFLAGS)
CFLAGS <- c("$(COMMONFLAGS)", .Object@CFLAGS)
CXXFLAGS <- c("$(COMMONFLAGS)", "-Wno-ignored-attributes", .Object@CXXFLAGS)
FFLAGS <- c("$(COMMONFLAGS)", .Object@FFLAGS)
gcc.path <- normalizePath(Sys.which("gcc"), "/", mustWork=FALSE)
gcc.dir <- ""
if (grepl("mingw", gcc.path))
{
# windows
# Need the correct compiler for the architecture. Sys.which() just picks up whichever one is in the PATH
if (Sys.info()["machine"]=="x86-64")
{
# mingw64
gcc.path <- sub("/mingw\\d\\d", "/mingw64", gcc.path)
WinSub <- "x64/"
}
else
{
# mingw32
gcc.path <- sub("/mingw\\d\\d", "/mingw32", gcc.path)
WinSub <- "x32/"
}
gcc.dir <- sub("/gcc.*", "/", gcc.path)
}
# Build makefile
MakefileTxt <-c(
idStamp(),
paste("R_SHARE_DIR=", R.home("share"), sep=""),
paste("R_HOME=", R.home(), sep=""),
paste("include $(R_HOME)/etc/", WinSub, "Makeconf", sep=""),
paste("CC=", gcc.dir, "gcc", sep=""),
paste("CXX=", gcc.dir, "g++", sep=""),
paste("FC=", gcc.dir, "gfortran", sep=""),
paste("COMMONFLAGS=", paste(COMMONFLAGS, collapse="\\\n"), sep=""),
paste("CFLAGS=", paste(CFLAGS, collapse="\\\n"), sep=""),
paste("CXXFLAGS=", paste(CXXFLAGS, collapse="\\\n"), sep=""),
paste("FFLAGS=", paste(FFLAGS, collapse="\\\n"), sep=""),
paste("LDFLAGS=", paste(LDFLAGS, collapse="\\\n"), sep=""),
paste("LDLIBS=", paste(LDLIBS, collapse="\\\n"), sep=""),
paste("objects=",
paste(sapply(.Object@SourceFiles, function(item){ gsub("\\..*$", ".o", item@Filename)}), collapse=" \\\n"), sep=""),
"",
paste(DlibName, " : $(objects)", sep=""),
paste("\t$(CXX) -o ", DlibName, " $(LDFLAGS) $(objects) $(LDLIBS) $(LIBR)", sep=""),
""
)
for (SourceFile in .Object@SourceFiles)
{
BaseName <- gsub("\\..*$", "", SourceFile@Filename)
MakeRule <- paste(BaseName, ".o : ",
SrcRelativePath,
SourceFile@Filename,
" ",
paste(sapply(SourceFile@Dependencies, function(dep) {paste(IncludeRelativePath, dep, sep="")}), collapse=" "),
sep="")
BuildRule <- makeBuildRule(SourceFile, SrcRelativePath)
MakefileTxt <- c(MakefileTxt,
MakeRule,
BuildRule,
"")
}
MakefileTxt <- c(MakefileTxt,
paste("clean : \n\trm ", DlibName, " $(objects)", sep=""),
"")
# if install paths are defined then create install rule
if (!is.null(InstallLibraryName) || !is.null(InstallIncludeName))
{
MakefileTxt <- c(MakefileTxt, "install :")
if (!is.null(InstallLibraryName))
{
InstallLibraryRelativePath <- paste(RootRelativePath, InstallLibraryName, sep="")
MakefileTxt <- c(MakefileTxt,
paste("\tcp", DlibName, InstallLibraryRelativePath))
}
if (!is.null(InstallIncludeName))
{
InstallIncludeRelativePath <- paste(RootRelativePath, InstallIncludeName, sep="")
MakefileTxt <- c(MakefileTxt,
paste("\tcp ", IncludeRelativePath, "* ", InstallIncludeRelativePath, sep=""))
}
MakefileTxt <- c(MakefileTxt, "")
}
makefile <- file(MakefilePath, "wt")
writeLines(MakefileTxt, makefile)
close(makefile)
}
if (nchar(.Object@ObjName) != 0)
{
MakefilePath <- paste(.Object@WorkingFolder, .Object@ObjName, "/makefile", sep="")
}
else
{
MakefilePath <- paste(.Object@WorkingFolder, "makefile", sep="")
}
Created <- FALSE
if (!checkMakefile(MakefilePath) || Force)
{
createMakefile(MakefilePath)
Created <- TRUE
}
return (Created)
}
)
# -----------------------------------------------------------------------------
# Method to build dynamic library project
# -----------------------------------------------------------------------------
setGeneric("make", function(.Object, ...) standardGeneric("make"))
setMethod("make", "BSysProject",
function(.Object, Operation="", Debug=NULL)
{
runMake <- function(.Object, Operation)
{
quoteArg <- function(arg)
{
return (paste0("\"", arg, "\""))
}
IsWindows <- (Sys.info()["sysname"] == "Windows")
DlibName <- dynlib(.Object@ProjectName)
ObjFolder <- paste(.Object@WorkingFolder, .Object@ObjName, sep="")
CapturePath <- paste(.Object@WorkingFolder, .Object@ProjectName, ".log", sep="")
ScriptPath <- paste(.Object@WorkingFolder, .Object@ProjectName, ".sh", sep="")
FinishedFile <- paste(.Object@WorkingFolder, .Object@ProjectName, ".fin", sep="")
hasTee <- function()
{
TestCmd <- "tee --version &>/dev/null"
# construct test script to see if tee is present
BashScript <- c("#!/bin/bash",
TestCmd)
ScriptFile <- file(ScriptPath, "wt")
writeLines(BashScript, ScriptFile)
close(ScriptFile)
command.line <- paste(Sys.which("bash"), quoteArg(ScriptPath))
result <- try(system(command.line, wait=TRUE), silent=TRUE)
unlink(ScriptPath)
hasTee <- (!inherits(result, "try-error") && (result == 0))
return (hasTee)
}
HasTee <- hasTee()
CaptureCmd <- if (IsWindows && HasTee) paste("2>&1 | tee", quoteArg(CapturePath)) else ""
# run make
if (Operation == "clean")
{
operation <- paste("cd", quoteArg(ObjFolder), "\nmake clean", CaptureCmd)
}
else if (Operation == "install")
{
operation <- paste("cd", quoteArg(ObjFolder), "\nmake install", CaptureCmd)
}
else if (Operation == "")
{
operation <- paste("cd", quoteArg(ObjFolder), "\nmake", CaptureCmd)
}
else
{
stop("Undefined make() Operation")
}
# construct caller script
BashScript <- c("#!/bin/bash",
operation,
paste("echo finished >", quoteArg(FinishedFile)))
ScriptFile <- file(ScriptPath, "wt")
writeLines(BashScript, ScriptFile)
close(ScriptFile)
command.line <- paste(Sys.which("bash"), quoteArg(ScriptPath))
unlink(FinishedFile)
unloadLibrary(.Object)
if (IsWindows && HasTee)
{
system(command.line, wait=FALSE, invisible=FALSE)
}
else
{
system(command.line, wait=TRUE)
}
# test for completion of script. We do this rather than using
# wait=TRUE in system call so that we can make a visible shell that
# shows live progress. With wait=TRUE a visible shell shows no
# output.
while (!file.exists(FinishedFile))
{
Sys.sleep(1)
}
unlink(FinishedFile)
unlink(ScriptPath)
if (IsWindows && HasTee)
{
CaptureFile <- file(CapturePath, "rt")
writeLines(readLines(CaptureFile))
close(CaptureFile)
unlink(CapturePath)
}
}
# if Debug provided and different from current update
if (!is.null(Debug) && (Debug != .Object@IsDebug))
{
.Object@IsDebug <- Debug
}
# build makefile
if (buildMakefile(.Object))
{
if (Operation != "clean")
{
# as the makefile has changed do a clean to force a complete re-build
runMake(.Object, "clean")
}
}
runMake(.Object, Operation)
return (.Object)
}
)
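# Example (added sketch, not part of the original source). A typical BuildSys
# workflow; "~/mylib" is a hypothetical folder holding mylib.cpp, and with a
# single source file the project name is inferred from it.
if (FALSE) {
  proj <- new("BSysProject", WorkingFolder = "~/mylib")
  proj <- make(proj)             # generate the makefile and build the library
  loadLibrary(proj)              # dyn.load() the built library
  unloadLibrary(proj)
  proj <- make(proj, "clean")    # remove objects and the built library
}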
# -----------------------------------------------------------------------------
# Method to get library path
# -----------------------------------------------------------------------------
setGeneric("libraryPath", function(.Object, ...) standardGeneric("libraryPath"))
setMethod("libraryPath", "BSysProject",
function(.Object)
{
DlibName <- dynlib(.Object@ProjectName)
if (nchar(.Object@ObjName) != 0)
{
DlibPath <- paste(.Object@WorkingFolder, .Object@ObjName, DlibName, sep="")
}
else
{
DlibPath <- paste(.Object@WorkingFolder, DlibName, sep="")
}
return (DlibPath)
}
)
# -----------------------------------------------------------------------------
# Method to get source path
# -----------------------------------------------------------------------------
setGeneric("sourcePath", function(.Object, ...) standardGeneric("sourcePath"))
setMethod("sourcePath", "BSysProject",
function(.Object)
{
SourcePath <- .Object@WorkingFolder
if (nchar(.Object@SourceName) != 0)
{
SourcePath <- paste(SourcePath, .Object@SourceName, sep="")
}
return (SourcePath)
}
)
# -----------------------------------------------------------------------------
# Method to get include path
# -----------------------------------------------------------------------------
setGeneric("includePath", function(.Object, ...) standardGeneric("includePath"))
setMethod("includePath", "BSysProject",
function(.Object)
{
IncludePath <- .Object@WorkingFolder
if (nchar(.Object@IncludeName) != 0)
{
IncludePath <- paste(IncludePath, .Object@IncludeName, sep="")
}
return (IncludePath)
}
)
# -----------------------------------------------------------------------------
# Method to get obj path
# -----------------------------------------------------------------------------
setGeneric("objPath", function(.Object, ...) standardGeneric("objPath"))
setMethod("objPath", "BSysProject",
function(.Object)
{
ObjPath <- .Object@WorkingFolder
if (nchar(.Object@ObjName) != 0)
{
ObjPath <- paste(ObjPath, .Object@ObjName, sep="")
}
return (ObjPath)
}
)
# -----------------------------------------------------------------------------
# Method to get install library path
# -----------------------------------------------------------------------------
setGeneric("installLibraryPath", function(.Object, ...) standardGeneric("installLibraryPath"))
setMethod("installLibraryPath", "BSysProject",
function(.Object)
{
InstallLibraryPath <- NULL
if (!identical(character(0), .Object@InstallLibraryName))
{
InstallLibraryPath <- .Object@WorkingFolder
if (nchar(.Object@InstallLibraryName) != 0)
{
InstallLibraryPath <- paste(InstallLibraryPath, .Object@InstallLibraryName, sep="")
}
}
return (InstallLibraryPath)
}
)
# -----------------------------------------------------------------------------
# Method to get install include path
# -----------------------------------------------------------------------------
setGeneric("installIncludePath", function(.Object, ...) standardGeneric("installIncludePath"))
setMethod("installIncludePath", "BSysProject",
function(.Object)
{
InstallIncludePath <- NULL
if (!identical(character(0), .Object@InstallIncludeName))
{
InstallIncludePath <- .Object@WorkingFolder
if (nchar(.Object@InstallIncludeName) != 0)
{
InstallIncludePath <- paste(InstallIncludePath, .Object@InstallIncludeName, sep="")
}
}
return (InstallIncludePath)
}
)
# -----------------------------------------------------------------------------
# Method to load library
# -----------------------------------------------------------------------------
setGeneric("loadLibrary", function(.Object, ...) standardGeneric("loadLibrary"))
setMethod("loadLibrary", "BSysProject",
function(.Object)
{
return (dyn.load(libraryPath(.Object)))
}
)
# -----------------------------------------------------------------------------
# Method to unload library
# -----------------------------------------------------------------------------
setGeneric("unloadLibrary", function(.Object, ...) standardGeneric("unloadLibrary"))
setMethod("unloadLibrary", "BSysProject",
function(.Object)
{
libPath <- libraryPath(.Object)
tr <- try(dyn.unload(libraryPath(.Object)), silent=TRUE)
if (!is(tr, "try-error"))
{
message("Note: Library", libPath, "was unloaded.\n")
}
}
)
# -----------------------------------------------------------------------------
# Method to cleanup project of created files and folders
# -----------------------------------------------------------------------------
setGeneric("clean", function(.Object, ...) standardGeneric("clean"))
setMethod("clean", "BSysProject",
function(.Object)
{
# remove makefile
if (nchar(.Object@ObjName) != 0)
{
MakefilePath <- paste(.Object@WorkingFolder, .Object@ObjName, "/makefile", sep="")
}
else
{
MakefilePath <- paste(.Object@WorkingFolder, "makefile", sep="")
}
unlink(MakefilePath)
# remove debug related files and folders
RprofileFolder <- paste(sourcePath(.Object), .Object@ProjectName, ".Rprof", sep="")
debugProjectPath <- paste(RprofileFolder, "/", .Object@ProjectName, "_DebugProject.RData", sep="")
debugSessionPath <- paste(RprofileFolder, "/", .Object@ProjectName, "_DebugSession.RData", sep="")
debugCmdTxtPath <- paste(RprofileFolder, "/debugCmd.txt", sep="")
Rprofile.path <- paste(RprofileFolder, "/.Rprofile", sep="")
bsysAbortPath <- paste(sourcePath(.Object), "bsys_abort.cpp", sep="")
unlink(debugProjectPath)
unlink(debugSessionPath)
unlink(Rprofile.path)
unlink(bsysAbortPath)
unlink(debugCmdTxtPath)
unlink(RprofileFolder, recursive=TRUE, force=TRUE)
vsCodeFolder <- paste(sourcePath(.Object), ".vscode", sep="")
launch.file <- paste(vsCodeFolder, "/launch.json", sep="")
c_cpp_properties.file <- paste(vsCodeFolder, "/c_cpp_properties.json", sep="")
unlink(launch.file)
unlink(c_cpp_properties.file)
unlink(vsCodeFolder, recursive=TRUE, force=TRUE)
}
)
# -----------------------------------------------------------------------------
# Helper to find and check for package includes
# -----------------------------------------------------------------------------
getPackagePath <- function(PackageName, SubPath)
{
PackagePath <- ""
for (path in .libPaths())
{
path <- paste(path, "/", PackageName, SubPath, sep="")
if (file.exists(path))
{
PackagePath <- path
break
}
}
if (nchar(PackagePath) == 0)
{
stop(paste("Cannot find include path ", paste(PackageName, SubPath, sep=""), ". Check that the", PackageName, "package is installed."))
}
return (PackagePath)
}
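# Example (added sketch, not part of the original source). Locating a
# package's header folder; this stops with an informative error when the
# package is not installed.
if (FALSE) {
  getPackagePath("Rcpp", "/include")
}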
# -----------------------------------------------------------------------------
# Helper to list loaded libraries not bound to packages
# -----------------------------------------------------------------------------
getLoadedSharedLibraries <- function()
{
packages.paths <- .libPaths()
packages.paths <- c(packages.paths, R.home())
loadList <- getLoadedDLLs()
sharedLibrariesList <- c()
exclusions <- c("base", "(embedding)")
for (item in loadList)
{
item <- unlist(item)
if (!(item$name %in% exclusions))
{
is.package <- FALSE
for (packages.path in packages.paths)
{
is.package <- is.package || grepl(packages.path, item$path)
}
if (!is.package)
{
path <- sub("\\.dylib", "", sub("\\.so", "", sub("\\.dll", "", item$path)))
sharedLibrariesList <- c(sharedLibrariesList, path)
}
}
}
return (sharedLibrariesList)
}
# -----------------------------------------------------------------------------
# Method to debug library
# -----------------------------------------------------------------------------
setGeneric("vcDebug", function(.Object, ...) standardGeneric("vcDebug"))
setMethod("vcDebug", "BSysProject",
function(.Object, LaunchEditor=TRUE)
{
RprofileFolder <- paste(sourcePath(.Object), .Object@ProjectName, ".Rprof", sep="")
debugProjectPath <- paste(RprofileFolder, "/DebugProject.RData", sep="")
debugSessionPath <- paste(RprofileFolder, "/DebugSession.RData", sep="")
debugCmdFilePath <- paste(RprofileFolder, "/debugCmd.txt", sep="")
Rprofile.path <- paste(RprofileFolder, "/.Rprofile", sep="")
if (LaunchEditor)
{
# Helper to obtain info for json files
getIntellisenseInfo <- function()
{
TargetName <- ""
OS <- Sys.info()[["sysname"]]
Architecture <- Sys.info()[["machine"]]
Extra <- NULL
ShortPath <- function(arg) {return (arg)}
if (OS == "Windows")
{
TargetName <- "Win32"
Mode <- "gcc"
ShortPath <- function(arg) {gsub("\\\\", "/", utils::shortPathName(arg))}
}
else if (OS == "Linux")
{
TargetName <- "Linux"
Mode <- "gcc"
}
else if (OS == "Darwin")
{
TargetName <- "Mac"
Mode <- "clang"
Extra <- " \"macFrameworkPath\": [\"/System/Library/Frameworks\"],"
}
else
{
warning(paste("Unsupported intellisense target:", OS,"\n"))
}
if (Architecture =="x86-64")
{
Architecture <- "x86_64"
Mode <- paste(Mode, "-x64", sep="")
}
else if (Architecture =="x86_64")
{
Mode <- paste(Mode, "-x64", sep="")
}
else if (Architecture == "x86")
{
Mode <- paste(Mode, "-x86", sep="")
}
else
{
warning("Unknown intellisense architecture\n")
}
return (list(TargetName=TargetName, Architecture=Architecture, Mode=Mode, normPath=ShortPath))
}
# create Rprofile folder
file.attr <- file.info(RprofileFolder)
if (is.na(file.attr$size))
{
dir.create(RprofileFolder)
}
else if (!file.attr$isdir)
{
warning(paste("Cannot create", RprofileFolder, "folder as .vscode file exists.\n"))
}
# create .vscode folder if needed
vsCodeFolder <- paste(sourcePath(.Object), ".vscode", sep="")
file.attr <- file.info(vsCodeFolder)
IsDarwin <- (Sys.info()["sysname"] == "Darwin")
if (is.na(file.attr$size))
{
dir.create(vsCodeFolder)
}
else if (!file.attr$isdir)
{
warning("Cannot create .vscode folder as .vscode file exists.\n")
}
debug.app <- "gdb"
external.console <- "true"
R.args <- "\"--no-save\", \"--no-restore\""
debug.command.args <- ""
debug.Cmd.lines <- c()
# get needed paths
if (IsDarwin)
{
VisualStudioCode <- "open -a Visual\\ Studio\\ Code.app --args"
R.path <- "/Applications/R.app/Contents/MacOS/R"
if (!file.exists(R.path))
{
warning("Cannot find R.\n")
}
gdb.path <- "/Applications/Xcode.app/Contents/Developer/usr/bin/lldb"
debug.app <- "lldb"
if (!file.exists(gdb.path))
{
warning("Cannot find lldb-mi. Ensure Xcode is installed.\n")
}
external.console <- "false"
R.args <- paste("\"", RprofileFolder, "\"", sep="")
debug.command.args <- paste("--source", debugCmdFilePath)
debug.Cmd.lines <- c("breakpoint set -f bsys_abort.cpp -b abort")
}
else
{
VisualStudioCode <- "code"
R.path <- normalizePath(Sys.which("Rgui"), "/", mustWork=FALSE)
if (nchar(R.path) == 0)
{
R.path <- paste(R.home(), "/bin/exec/R", sep="")
}
gdb.path <- normalizePath(Sys.which(debug.app), "/", mustWork=FALSE)
if (nchar(gdb.path) == 0)
{
warning(paste("Cannot find path to gdb. Check that", debug.app, "is accessible via the PATH environment variable.\n"))
}
debug.command.args <- paste("--init-command", debugCmdFilePath)
debug.Cmd.lines <- c(debug.Cmd.lines, "set breakpoint pending on",
"break bsys_abort.cpp:abort")
}
# write debugger command file
debugCmdfile <- file(debugCmdFilePath, "wt")
writeLines(debug.Cmd.lines, debugCmdfile)
close(debugCmdfile)
gcc.path <- normalizePath(Sys.which("gcc"), "/", mustWork=FALSE)
if (nchar(gcc.path) == 0)
{
warning("Cannot find path to gcc. Check that gcc is accessible via the PATH environment variable.\n")
}
# build intellisense include paths
R.include <- R.home("include")
intellisense.includes <- ""
intellisense.defines <- paste(sapply(.Object@Defines, function(str) {paste("\"", str, "\"", sep="")}), collapse=",", sep="")
intellisense.info <- getIntellisenseInfo()
normPath <- intellisense.info$normPath
if (grepl("mingw", gcc.path))
{
# windows
# Need the correct compiler for the architecture. Sys.which() just picks up whichever one is in the PATH
if (Sys.info()["machine"]=="x86-64")
{
# mingw64
gcc.path <- sub("/mingw\\d\\d", "/mingw64", gcc.path)
}
else
{
# mingw32
gcc.path <- sub("/mingw\\d\\d", "/mingw32", gcc.path)
}
rtools.path <- sub("/mingw.*", "/", gcc.path)
gcc.include <- sub("/bin/gcc.*", "/include", gcc.path)
intellisense.includes <- paste(intellisense.includes, ",\"", gcc.include, "/**\"", sep="")
root.include <- paste(rtools.path, "usr/include", sep="")
root.user.include <- paste(rtools.path, "usr/local/include", sep="")
if (file.exists(root.include))
{
intellisense.includes <- paste(intellisense.includes, ",\"", normPath(root.include), "/**\"", sep="")
}
if (file.exists(root.user.include))
{
intellisense.includes <- paste(intellisense.includes, ",\"", normPath(root.user.include), "/**\"", sep="")
}
}
else
{
# linux
gcc.include <- "/usr/include"
gcc.local.include <- "/usr/local/include"
if (file.exists(gcc.include))
{
intellisense.includes <- paste(intellisense.includes, ",\"", normPath(gcc.include), "/**\"", sep="")
}
if (file.exists(gcc.local.include))
{
intellisense.includes <- paste(intellisense.includes, ",\"", normPath(gcc.local.include), "/**\"", sep="")
}
}
intellisense.includes <- paste(intellisense.includes, ",\"", normPath(R.include), "/**\"", sep="")
intellisense.includes <- paste(intellisense.includes, ",\"", normPath(includePath(.Object)), "**\"", sep="")
# add other include paths
for (include in .Object@Includes)
{
intellisense.includes <- paste(intellisense.includes, ",\"", normPath(include), "/**\"", sep="")
}
# create debugRprofile.txt environment setup file for gdb debug session
working.dir <- .Object@WorkingFolder
Rprofile_lines <- c(
paste("require(BuildSys)", sep=""),
paste("setwd(\"", working.dir, "\")", sep=""),
paste("load(\"", debugSessionPath, "\")", sep=""),
paste("load(\"", debugProjectPath, "\")", sep=""),
"vcDebug(BSysDebugProject, FALSE)"
)
Rprofile_file <- file(Rprofile.path, "wb")
writeLines(Rprofile_lines, Rprofile_file)
close(Rprofile_file)
StopAtEntry <- "false"
# create launch.json
if (IsDarwin)
{
# Configure for CodeLLDB
launch_lines <- c(
"{",
" \"version\": \"0.2.0\",",
" \"configurations\": [",
" {",
paste(" \"name\": \"(", debug.app, ") Launch\",", sep=""),
" \"type\": \"lldb\",",
" \"request\": \"launch\",",
paste(" \"program\": \"", normPath(R.path), "\",", sep=""),
paste(" \"args\": [", R.args, "],", sep=""),
paste(" \"stopOnEntry\": ", StopAtEntry, ",", sep=""),
paste(" \"cwd\": \"", normPath(RprofileFolder), "\",", sep=""),
paste(" \"env\": {\"name\":\"R_HOME\",\"value\":\"",R.home(),"\"},", sep=""),
paste(" \"initCommands\": [", paste(sapply(debug.Cmd.lines, function(x) {paste0("\"", x, "\"")}),collapse=","),"]", sep=""),
" }",
" ]",
"}")
}
else
{
# Configure for LLDB-mi
launch_lines <- c(
"{",
" \"version\": \"0.2.0\",",
" \"configurations\": [",
" {",
paste(" \"name\": \"(", debug.app, ") Launch\",", sep=""),
" \"type\": \"cppdbg\",",
" \"request\": \"launch\",",
paste(" \"targetArchitecture\":\"", intellisense.info$Architecture,"\",", sep=""),
paste(" \"program\": \"", normPath(R.path), "\",", sep=""),
paste(" \"args\": [", R.args, "],", sep=""),
paste(" \"stopAtEntry\": ", StopAtEntry, ",", sep=""),
paste(" \"cwd\": \"", normPath(RprofileFolder), "\",", sep=""),
paste(" \"environment\": [{\"name\":\"R_HOME\",\"value\":\"",R.home(),"\"}],", sep=""),
paste(" \"externalConsole\": ", external.console, ",", sep=""),
paste(" \"MIMode\": \"", debug.app, "\",", sep=""),
paste(" \"miDebuggerPath\": \"", normPath(gdb.path), "\",", sep=""),
paste(" \"miDebuggerArgs\": \"", debug.command.args, "\",", sep=""),
" \"setupCommands\": [",
" {",
paste(" \"description\": \"Enable pretty-printing for ", debug.app, "\",", sep=""),
" \"text\": \"-enable-pretty-printing\",",
" \"ignoreFailures\": true",
" }",
" ]",
" }",
" ]",
"}")
}
launch_file <- file(paste(vsCodeFolder, "/launch.json", sep=""), "wb")
writeLines(launch_lines, launch_file)
close(launch_file)
# create c_cpp_properties.json
c_cpp_properties_lines <- c(
"{",
" \"configurations\": [",
" {",
paste(" \"name\": \"", intellisense.info$TargetName, "\",", sep=""),
paste(" \"intelliSenseMode\": \"", intellisense.info$Mode, "\",", sep=""),
paste(" \"includePath\": [\"${workspaceFolder}\"", intellisense.includes, "],", sep=""),
intellisense.info$Extra,
paste(" \"defines\": [", intellisense.defines, "],", sep=""),
paste(" \"compilerPath\": \"", normPath(gcc.path), "\",", sep=""),
" \"cStandard\": \"c89\",",
" \"cppStandard\": \"c++14\",",
" \"browse\": {",
" \"limitSymbolsToIncludedHeaders\": true,",
" \"databaseFilename\": \"\"",
" }",
" }",
" ],",
" \"version\": 4",
"}")
c_cpp_properties_file <- file(paste(vsCodeFolder, "/c_cpp_properties.json", sep=""), "wb")
writeLines(c_cpp_properties_lines, c_cpp_properties_file)
close(c_cpp_properties_file)
# save session state / loaded packages and DLLs
.Object@DebugState <- list(session.packages=(.packages()), session.sharedLibraries=getLoadedSharedLibraries())
BSysDebugProject <- .Object
save(BSysDebugProject, file=debugProjectPath)
# save the current R session for use in debug session
save.image(file=debugSessionPath)
# spawn Visual Studio Code
tr <- try(system(paste(VisualStudioCode, " \"", sourcePath(.Object) ,".\"", sep=""), wait=FALSE), silent=TRUE)
if (is(tr, "try-error"))
{
warning("Cannot find Visual Studio Code. Please ensure it is installed and reachable through the PATH environment variable.\n")
}
}
else
{
current.packages <- (.packages())
for (package in .Object@DebugState$session.packages)
{
if (!(package %in% current.packages))
{
library(package, character.only=TRUE)
}
}
for (sharedLibrary in .Object@DebugState$session.sharedLibraries)
{
dyn.load(dynlib(sharedLibrary))
}
}
}
)
|
/scratch/gouwar.j/cran-all/cranData/BuildSys/R/BuildSys.R
|
"alpha.proxy" <-
function (weight=.2, vol.man=.2, vol.bench=.2, vol.other=.2, cor.man=.2,
cor.bench=.2, plot.it=TRUE, transpose=FALSE, ...)
{
fun.copyright <- "Placed in the public domain 2003-2012 by Burns Statistics Ltd."
fun.version <- "alpha.proxy 002"
# check ranges for possible bad scaling
if(any(weight <=0 | weight >= 1)) stop("bad value(s) for weight")
if(any(cor.man < -1 | cor.man > 1)) stop("bad value(s) for cor.man")
if(any(cor.bench < -1 | cor.bench > 1))
stop("bad value(s) for cor.bench")
if(any(vol.man <= 0)) stop("all vol.man values must be positive")
if(any(vol.bench <= 0)) stop("all vol.bench values must be positive")
if(any(vol.other <= 0)) stop("all vol.other values must be positive")
if(all(vol.man >= 1)) {
warning(paste("large vol.man values, did you mistakenly",
"give values in percent?"))
}
if(all(vol.bench >= 1)) {
warning(paste("large vol.bench values, did you mistakenly",
"give values in percent?"))
}
if(all(vol.other >= 1)) {
warning(paste("large vol.other values, did you mistakenly",
"give values in percent?"))
}
# get organized
sizes <- numeric(6)
names(sizes) <- c("weight", "vol.man", "vol.bench", "vol.other",
"cor.man", "cor.bench")
sizes["weight"] <- length(weight)
sizes["vol.man"] <- length(vol.man)
sizes["vol.bench"] <- length(vol.bench)
sizes["vol.other"] <- length(vol.other)
sizes["cor.man"] <- length(cor.man)
sizes["cor.bench"] <- length(cor.bench)
if(any(sizes == 0)) stop("zero length input(s)")
if(sum(sizes > 1) > 2) stop("more than two inputs longer than 1")
twovecs <- sum(sizes > 1) == 2
# do computation
if(!twovecs) {
ans <- -weight * (vol.man^2 - vol.bench^2) - 2 *
(1 - weight) * vol.other * (vol.man * cor.man -
vol.bench * cor.bench)
return(10000 * ans)
}
count <- 1
longsizes <- sizes[sizes != 1]
z <- array(NA, longsizes)
for(i in names(longsizes)) assign(i, sort(get(i)))
# nested for loops sacrifice efficiency for convenience
for(i.cb in 1:sizes["cor.bench"]) {
for(i.cm in 1:sizes["cor.man"]) {
for(i.vo in 1:sizes["vol.other"]) {
for(i.vb in 1:sizes["vol.bench"]) {
for(i.vm in 1:sizes["vol.man"]) {
for(i.w in 1:sizes["weight"]) {
z[count] <- -weight[i.w] * (vol.man[i.vm]^2 -
vol.bench[i.vb]^2) - 2 * (1 - weight[i.w]) *
vol.other[i.vo] * (vol.man[i.vm] * cor.man[i.cm] -
vol.bench[i.vb] * cor.bench[i.cb])
count <- count + 1
}}}}}}
z <- 10000 * z
# actually do something
if(transpose) {
z <- t(z)
longsizes <- rev(longsizes)
}
ans <- list(x = eval(as.name(names(longsizes[1]))),
y = eval(as.name(names(longsizes[2]))), z = z,
call = deparse(match.call()))
if(plot.it && twovecs) {
the.labs <- c(weight="Weight", vol.man="Manager Volatility",
vol.bench="Benchmark Volatility",
vol.other="Volatility of the Rest",
cor.man="Correlation of Manager with the Rest",
cor.bench="Correlation of Benchmark with the Rest")
filled.contour(ans, xlab=the.labs[names(longsizes)[1]],
ylab=the.labs[names(longsizes)[2]],
plot.axes={axis(1); axis(2); contour(ans, add=TRUE)},
...)
invisible(ans)
} else {
ans
}
}
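# Example (added sketch, not part of the original source). With exactly two
# vector-valued inputs, alpha.proxy() evaluates the proxy alpha (in basis
# points) over the grid and, by default, draws a filled contour plot.
if (FALSE) {
  alpha.proxy(weight = .2,
              vol.man = seq(.15, .25, by = .01),
              cor.man = seq(0, .5, by = .05))
}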
|
/scratch/gouwar.j/cran-all/cranData/BurStFin/R/alpha.proxy.R
|
"factor.model.stat" <-
function (x, weights=seq(0.5, 1.5, length.out=nobs), output="full", center=TRUE,
frac.var=.5, iter.max=1, nfac.miss=1, full.min=20, reg.min=40,
sd.min=20, quan.sd=.90, tol=1e-3, zero.load=FALSE,
range.factors=c(0, Inf), constant.returns.okay=FALSE,
specific.floor=0.1, floor.type="quantile", verbose=2)
{
fun.copyright <- "Placed in the public domain 2006-2014 by Burns Statistics Ltd."
fun.version <- "factor.model.stat 014"
subfun.ssd <- function(z, weights, sd.min) {
nas <- is.na(z)
if(any(nas)) {
if(sum(!nas) < sd.min) return(NA)
sum(weights[!nas] * z[!nas]^2) / sum(weights[!nas])
} else {
sum(weights * z^2)
}
}
#
# start of main function
#
x <- as.matrix(x)
if(!is.numeric(x)) stop("'x' needs to be numeric")
x[!is.finite(x)] <- NA
# for use in finance, try to check it is returns and not prices
if(verbose >= 1 && min(x, na.rm=TRUE) >= 0) {
warning(paste("minimum of values in 'x' is",
min(x, na.rm=TRUE), "are you giving a price",
"matrix rather than a return matrix?",
"(warning suppressed if verbose < 1)"))
}
xna <- is.na(x)
allmis <- rowSums(xna) == ncol(x)
if(any(allmis)) {
x <- x[!allmis, , drop=FALSE]
xna <- is.na(x)
}
num.mis <- colSums(xna)
if(any(num.mis > 0)) {
if(sum(num.mis == 0) < full.min)
stop("not enough columns without missing values")
if(!length(dimnames(x)[[2]]))
stop("'x' needs column names when missing values exist")
max.miss <- max(num.mis)
lnfm <- length(nfac.miss)
if(lnfm == 0) stop("'nfac.miss' must have positive length")
nfac.miss <- round(nfac.miss)
if(any(nfac.miss < 0))
stop("negative values in 'nfac.miss'")
if(lnfm < max.miss) {
nfac.miss <- c(nfac.miss, rep(nfac.miss[lnfm],
max.miss - lnfm))
}
}
if(!is.character(output) || length(output) != 1) {
stop(paste("'output' should be a single character string",
"-- given has mode", mode(output), "and length",
length(output)))
}
output.menu <- c("full", "factor", "systematic", "specific")
output.num <- pmatch(output, output.menu, nomatch=0)
if(output.num == 0) {
stop(paste("unknown or ambiguous input for 'output'",
"-- the allowed choices are:",
paste(output.menu, collapse=", ")))
}
output <- output.menu[output.num]
nassets <- ncol(x)
nobs <- nrow(x)
if(is.null(weights)) {
weights <- rep(1, nobs)
} else if(!is.numeric(weights)) {
stop(paste("'weights' must be numeric -- given has mode",
mode(weights), "and length", length(weights)))
}
if(length(weights) != nobs) {
if(length(weights) == nobs + sum(allmis)) {
weights <- weights[!allmis]
} else if(length(weights) == 1 && weights > 0) {
weights <- rep(1, nobs)
} else {
stop(paste("bad value for 'weights'",
"-- must be a single positive number",
"(meaning equal weighting) or have length",
"equal to the number of observations"))
}
}
if(any(weights < 0)) {
stop(paste(sum(weights < 0), "negative value(s) in 'weights'"))
}
weights <- weights / sum(weights)
if(is.logical(center)) {
if(center) {
center <- colSums(x * weights, na.rm=TRUE)
} else {
center <- rep(0, nassets)
}
} else if(length(center) != nassets) stop("wrong length for 'center'")
x <- sweep(x, 2, center, "-")
sdev <- sqrt(apply(x, 2, subfun.ssd, weights=weights, sd.min=sd.min))
sdzero.names <- NULL
sdzero <- FALSE
if(any(sdev <= 0, na.rm=TRUE)) {
sdzero <- !is.na(sdev) & sdev <= 0
sdzero.names <- dimnames(x)[[2]][sdzero]
if(constant.returns.okay) {
sdev[which(sdzero)] <- 1e-16
if(verbose >= 1) {
warning(paste(sum(sdzero),
"asset(s) with constant returns:",
paste(sdzero.names, collapse=", "),
"(warning suppressed with verbose < 1)"))
}
} else {
stop(paste(sum(sdzero),
"asset(s) with constant returns:",
paste(dimnames(x)[[2]][sdzero], collapse=", ")))
}
}
if(any(is.na(sdev))) {
sdev[is.na(sdev)] <- quantile(sdev, quan.sd, na.rm=TRUE)
}
x <- scale(x, scale=sdev, center=FALSE)
x <- sqrt(weights) * x # x is now weighted
fullcolnames <- dimnames(x)[[2]][num.mis == 0]
fullcols <- which(num.mis == 0)
decomp <- try(svd(x[, fullcols, drop=FALSE], nu=0), silent=TRUE)
if(inherits(decomp, "try-error")) {
# presumably Lapack error, failing to converge
rever <- nrow(x):1
decomp <- svd(x[rever, fullcols, drop=FALSE], nu=0)
# reversing the rows only nudges LAPACK convergence; 'v' is unaffected
}
svdcheck <- colSums(decomp$v^2)
if(any(abs(svdcheck - 1) > 1e-3)) {
stop("bad result from 'svd'")
}
cumvar <- cumsum(decomp$d^2) / sum(decomp$d^2)
nfac <- sum(cumvar < frac.var) + 1
if(nfac > max(range.factors)) {
nfac <- max(range.factors)
} else if(nfac < min(range.factors)) {
nfac <- min(range.factors)
}
if(nfac > length(cumvar)) nfac <- length(cumvar)
fseq <- 1:nfac
loadings <- scale(decomp$v[, fseq, drop=FALSE],
scale=1/decomp$d[fseq], center=FALSE)
svd.d <- decomp$d
if(iter.max > 0) {
cmat <- crossprod(x[, fullcols, drop=FALSE])
uniqueness <- 1 - rowSums(loadings^2)
uniqueness[which(uniqueness < 0)] <- 0
uniqueness[which(uniqueness > 1)] <- 1
start <- uniqueness
converged <- FALSE
for(i in 1:iter.max) {
cor.red <- cmat
diag(cor.red) <- diag(cor.red) - uniqueness
decomp <- eigen(cor.red, symmetric=TRUE)
t.val <- decomp$value[fseq]
t.val[t.val < 0] <- 0
loadings <- scale(decomp$vector[, fseq, drop=FALSE],
center=FALSE, scale=1/sqrt(t.val))
uniqueness <- 1 - rowSums(loadings^2)
uniqueness[which(uniqueness < 0)] <- 0
uniqueness[which(uniqueness > 1)] <- 1
if(all(abs(uniqueness - start) < tol)) {
converged <- TRUE
break
}
start <- uniqueness
}
}
dimnames(loadings) <- list(fullcolnames, NULL)
if(any(num.mis > 0)) {
# calculate loadings for columns with NAs
floadings <- loadings
if(zero.load) {
loadings <- array(0, c(nassets, nfac))
} else {
meanload <- colMeans(floadings)
loadings <- t(array(meanload, c(nfac, nassets)))
}
dimnames(loadings) <- list(dimnames(x)[[2]], NULL)
loadings[dimnames(floadings)[[1]], ] <- floadings
scores <- x[, fullcols, drop=FALSE] %*% floadings
dsquare <- svd.d[1:nfac]^2
nfac.miss[nfac.miss > nfac] <- nfac
xna <- is.na(x)
for(i in (1:nassets)[num.mis > 0 & nobs - num.mis > reg.min]) {
t.nfac <- nfac.miss[ num.mis[i] ]
if(t.nfac == 0) next
t.okay <- which(!xna[, i])
t.seq <- 1:t.nfac
t.load <- lsfit(x[t.okay, i], scores[t.okay, t.seq],
intercept=FALSE)$coef / dsquare[t.seq]
loadings[i, t.seq] <- t.load
NULL
}
}
comm <- rowSums(loadings^2)
if(any(comm > 1)) {
# adjust loadings where communalities too large
toobig <- comm > 1
if(verbose >= 2 && sum(reallytoobig <- comm > 1+1e-5)) {
anam <- dimnames(loadings)[[1]]
if(!length(anam)) {
anam <- paste("V", 1:nrow(loadings), sep="")
}
warning(paste(sum(reallytoobig),
"asset(s) being adjusted",
"from negative specific variance",
"-- the assets are:", paste(anam[reallytoobig],
collapse=", "),
"(warning suppressed with verbose < 2)"))
}
loadings[toobig,] <- loadings[toobig,] / sqrt(comm[toobig])
comm[toobig] <- 1
}
uniqueness <- 1 - comm
if(!is.numeric(specific.floor) || length(specific.floor) != 1) {
stop(paste("'specific.floor' should be a single number",
"-- given has mode", mode(specific.floor),
"and length", length(specific.floor)))
}
if(specific.floor > 0) {
if(!is.character(floor.type) || length(floor.type) != 1) {
stop(paste("'floor.type' must be a single character",
"string -- given has mode", mode(floor.type),
"and length", length(floor.type)))
}
floor.menu <- c("quantile", "fraction")
floor.num <- pmatch(floor.type, floor.menu, nomatch=0)
if(floor.num == 0) {
stop(paste("unknown or ambiguous input for",
"'floor.type' -- valid choices are:",
paste(floor.menu, collapse=", ")))
}
floor.type <- floor.menu[floor.num]
switch(floor.type,
"quantile" = {
uf <- quantile(uniqueness, specific.floor)
uniqueness[which(uniqueness < uf)] <- uf
},
"fraction" = {
uniqueness[which(uniqueness <
specific.floor)] <- specific.floor
}
)
}
sdev[which(sdzero)] <- 0
switch(output,
full= {
cmat <- loadings %*% t(loadings)
cmat <- t(sdev * cmat) * sdev
diag(cmat) <- diag(cmat) + uniqueness * sdev^2
attr(cmat, "number.of.factors") <- ncol(loadings)
attr(cmat, "timestamp") <- date()
},
systematic=,
specific=,
factor={
cmat <- list(loadings=loadings,
uniquenesses= uniqueness, sdev=sdev,
constant.names=sdzero.names,
cumulative.variance.fraction=cumvar,
timestamp=date(), call=match.call())
class(cmat) <- "statfacmodBurSt"
})
if(output == "systematic" || output == "specific") {
fitted(cmat, output=output)
} else {
cmat
}
}
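# Example (added sketch, not part of the original source). Estimating a
# variance matrix from a small simulated return matrix; with the default
# output="full" the result is the assets-by-assets variance matrix implied
# by the statistical factor model.
if (FALSE) {
  set.seed(42)
  retmat <- matrix(rnorm(200 * 30, sd = .01), nrow = 200,
                   dimnames = list(NULL, paste0("A", 1:30)))
  varfull <- factor.model.stat(retmat)
  dim(varfull)                           # 30 x 30
  attr(varfull, "number.of.factors")
}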
|
/scratch/gouwar.j/cran-all/cranData/BurStFin/R/factor.model.stat.R
|
"fitted.statfacmodBurSt" <-
function (object, output="full", ...)
{
fun.copyright <- "Placed in the public domain 2006-2009 by Burns Statistics"
fun.version <- "fitted.statfacmodBurSt 005"
if(!is.character(output) || length(output) != 1) {
stop(paste("'output' should be a single character string",
"-- given has mode", mode(output), "and length",
length(output)))
}
output.menu <- c("full", "systematic", "specific")
output.num <- pmatch(output, output.menu, nomatch=0)
if(output.num == 0) {
stop(paste("unknown or ambiguous input for 'output'",
"-- the allowed choices are:",
paste(output.menu, collapse=", ")))
}
output <- output.menu[output.num]
switch(output,
full={
ans <- object$loadings %*% t(object$loadings)
ans <- t(object$sdev * ans) * object$sdev
diag(ans) <- diag(ans) + object$uniquenesses *
object$sdev^2
},
systematic={
ans <- object$loadings %*% t(object$loadings)
ans <- t(object$sdev * ans) * object$sdev
},
specific={
ans <- diag(object$uniquenesses * object$sdev^2)
dimnames(ans) <- list(names(object$sdev),
names(object$sdev))
}
)
attr(ans, "number.of.factors") <- ncol(object$loadings)
attr(ans, "timestamp") <- object$timestamp
ans
}
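# Example (added sketch, not part of the original source; assumes
# factor.model.stat() from this package). With output="factor" the model is
# returned as a "statfacmodBurSt" object, and this fitted() method rebuilds
# the full variance matrix as the systematic plus specific pieces.
if (FALSE) {
  set.seed(7)
  retmat <- matrix(rnorm(150 * 25, sd = .01), nrow = 150,
                   dimnames = list(NULL, paste0("A", 1:25)))
  fmod <- factor.model.stat(retmat, output = "factor")
  full <- fitted(fmod)                        # full variance matrix
  sys  <- fitted(fmod, output = "systematic")
  spec <- fitted(fmod, output = "specific")
  range(full - (sys + spec))                  # zero up to rounding
}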
|
/scratch/gouwar.j/cran-all/cranData/BurStFin/R/fitted.statfacmodBurSt.R
|
"partial.rainbow" <-
function (start=0, end=.35)
{
fun.copyright <- "Placed in the public domain 2003-2012 by Burns Statistics Ltd."
fun.version <- "partial.rainbow 002"
rainarg <- formals(rainbow)
rainarg$start <- start
rainarg$end <- end
ans <- rainbow
formals(ans) <- rainarg
ans
}
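# Example (added sketch, not part of the original source). partial.rainbow()
# returns a copy of rainbow() whose default start and end hues are changed,
# so the result is used like any other palette function.
if (FALSE) {
  heatlike <- partial.rainbow(start = 0, end = .35)
  heatlike(8)   # 8 colours restricted to the red-through-yellow range
}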
|
/scratch/gouwar.j/cran-all/cranData/BurStFin/R/partial.rainbow.R
|
"slideWeight" <-
function(n, fractions=c(0,1), observations=NULL, locations=NULL)
{
fun.copyright <- "Placed in the public domain 2014 by Burns Statistics Ltd."
fun.version <- "slideWeight 001"
if(!length(locations)) {
if(length(observations)) {
locations <- n - observations
} else {
locations <- fractions * n
}
} else if(length(observations)) {
stop("only one of 'observations' and 'locations' may be given")
}
stopifnot(length(locations) == 2)
locations <- sort(round(locations))
if(locations[1] >= n) {
stop("specification as given produces all zero weights",
" -- you probably inadvertently used the 'fractions'",
" argument")
}
llen <- diff(locations) + 2
slide <- seq(0, 1, length=llen)[-llen]
slideseq <- locations[1]:locations[2]
ans <- rep(1, n)
ssuse <- intersect(slideseq, 1:n)
ans[ssuse] <- slide[ssuse - locations[1] + 1]
if(locations[1] > 1) ans[1:locations[1]] <- 0
ans
}
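# Example (added sketch, not part of the original source). Time weights that
# are zero for the oldest observations, ramp up linearly, and equal 1 for the
# newest: with observations=c(20, 10) and n=50, everything before the last 20
# points gets no weight and the last 10 points get full weight.
if (FALSE) {
  w <- slideWeight(50, observations = c(20, 10))
  plot(w, type = "s")
}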
|
/scratch/gouwar.j/cran-all/cranData/BurStFin/R/slideWeight.R
|