##----------------------------------------------------------
## BARFIMA MODELS
##----------------------------------------------------------
##' @title
##' Functions to simulate, extract components and fit BARFIMA models
##'
##' @name BARFIMA.functions
##' @order 1
##'
##' @description
##' These functions can be used to simulate, extract components
##' and fit any model of the class \code{barfima}. A model with
##' class \code{barfima} is a special case of a model with class \code{btsr}.
##' See \sQuote{The BTSR structure} in \code{\link{btsr.functions}} for
##' more details on the general structure.
##'
##' The \eqn{\beta}ARMA model, the beta regression and an i.i.d. sample
##' from a beta distribution can be obtained as special cases.
##' See \sQuote{Details}.
##'
##' @details
##' The \eqn{\beta}ARMA model and the beta regression can be
##' obtained as special cases of the \eqn{\beta}ARFIMA model.
##'
##' \itemize{
##' \item \eqn{\beta}ARFIMA: the model from Pumi et al. (2019) is obtained by setting
##' \code{error.scale = 1} (predictive scale) and \code{xregar = TRUE} (so that the
##' regressors are included in the AR part of the model). Variations of this model are
##' obtained by changing \code{error.scale}, \code{xregar} and/or by using different
##' links for \eqn{y[t]} (in the AR part of the model) and \eqn{\mu[t]}.
##'
##' \item \eqn{\beta}ARMA: the model from Rocha and Cribari-Neto (2009, 2017) is
##' obtained by setting \code{coefs$d = 0} and \code{d = FALSE} and \code{error.scale = 1}
##' (predictive scale). Variations of this model are obtained by changing the error scale
##' and/or by using a different link for \eqn{y[t]} in the AR part of the model.
##'
##' \item beta regression: the model from Ferrari and Cribari-Neto (2004) is
##' obtained by setting \code{p = 0}, \code{q = 0} and \code{coefs$d = 0} and \code{d = FALSE}.
##' The \code{error.scale} is irrelevant. The second argument in \code{linkg} is irrelevant.
##'
##' \item an i.i.d. sample from a Beta distribution with parameters
##' \code{shape1} and \code{shape2} (compatible with the one from \code{\link{rbeta}})
##' is obtained by setting \code{linkg = "linear"}, \code{p = 0}, \code{q = 0},
##' \code{d = FALSE} and, in the coefficient list, \code{alpha = shape1/(shape1+shape2)}
##' and \code{nu = shape1+shape2}. (\code{error.scale} and \code{xregar} are
##' irrelevant)
##'}
##'
##' @references
##'
##' Ferrari, S.L.P. and Cribari-Neto, F. (2004). Beta regression for modelling rates
##' and proportions. J. Appl. Stat. 31 (7), 799-815.
##'
##' Pumi, G.; Valk, M.; Bisognin, C.; Bayer, F.M. and Prass, T.S. (2019).
##' Beta autoregressive fractionally integrated moving average models. Journal of
##' Statistical Planning and Inference (200), 196-212.
##'
##' Rocha, A.V. and Cribari-Neto, F. (2009). Beta autoregressive moving average models.
##' Test 18 (3), 529-545.
##'
##' Rocha, A.V. and Cribari-Neto, F. (2017). Erratum to: Beta autoregressive moving
##' average models. Test 26 (2), 451-459.
##'
##' @md
NULL
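##------------------------------------------------------------------
## Illustrative sketch (not part of the package API): the i.i.d. Beta
## special case described above. Assuming the BTSR package is
## available, a linear link with p = q = 0 and d omitted (treated as
## zero), alpha = shape1/(shape1 + shape2) and nu = shape1 + shape2
## gives draws comparable to stats::rbeta(). The shape values below
## are arbitrary choices for demonstration.
##------------------------------------------------------------------
shape1 <- 2; shape2 <- 3
y.iid <- BARFIMA.sim(n = 500, linkg = "linear", seed = 1234,
                     coefs = list(alpha = shape1 / (shape1 + shape2),
                                  nu = shape1 + shape2))
y.ref <- rbeta(500, shape1 = shape1, shape2 = shape2)
summary(y.iid); summary(y.ref)  # similar marginal behaviour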
##' @rdname BARFIMA.functions
##' @order 2
##'
##' @details
##' The function \code{BARFIMA.sim} generates a random sample from a \eqn{\beta}ARFIMA(p,d,q)
##' model.
##'
##' @param n a strictly positive integer. The sample size of yt (after burn-in).
##' Default is 1.
##'
##' @param burn a non-negative integer. The length of the "burn-in" period. Default is 0.
##'
##' @param xreg optionally, a vector or matrix of external regressors.
##' For simulation purposes, the length of xreg must be \code{n+burn}.
##' Default is \code{NULL}. For extraction or fitting purposes, the length
##' of \code{xreg} must be the same as the length of the observed time series
##' \eqn{y_t}.
##'
##' @param coefs a list with the coefficients of the model. An empty list will result
##' in an error. The arguments that can be passed through this list are:
##' \itemize{
##' \item \code{alpha} optionally, a numeric value corresponding to the intercept.
##' If the argument is missing, it will be treated as zero. See
##' \sQuote{The BTSR structure} in \code{\link{btsr.functions}}.
##'
##' \item \code{beta} optionally, a vector of coefficients corresponding to the
##' regressors in \code{xreg}. If \code{xreg} is provided but \code{beta} is
##' missing in the \code{coefs} list, an error message is issued.
##'
##' \item \code{phi} optionally, for the simulation function this must be a vector
##' of size \eqn{p}, corresponding to the autoregressive coefficients
##' (including the ones that are zero), where \eqn{p} is the AR order. For
##' the extraction and fitting functions, this is a vector with the non-fixed
##' values in the vector of autoregressive coefficients.
##'
##' \item \code{theta} optionally, for the simulation function this must be a vector
##' of size \eqn{q}, corresponding to the moving average coefficients
##' (including the ones that are zero), where \eqn{q} is the MA order. For
##' the extraction and fitting functions, this is a vector with the non-fixed
##' values in the vector of moving average coefficients.
##'
##' \item \code{d} optionally, a numeric value corresponding to the long memory
##' parameter. If the argument is missing, it will be treated as zero.
##'
##' \item \code{nu} the dispersion parameter. If missing, an error message is issued.
##'
##' }
##'
##' @param y.start optionally, an initial value for yt (to be used
##' in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{g_2(y_t) = 0}, for \eqn{t < 1}.
##'
##' @param xreg.start optionally, a vector of initial value for xreg
##' (to be used in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{X_t = 0}, for \eqn{t < 1}. If \code{xregar = FALSE} this argument
##' is ignored.
##'
##' @param xregar logical; indicates if xreg is to be included in the
##' AR part of the model. See \sQuote{The BTSR structure}. Default is \code{TRUE}.
##'
##' @param error.scale the scale for the error term. See \sQuote{The BTSR structure}
##' in \code{\link{btsr.functions}}. Default is 1.
##'
##' @param complete logical; if \code{FALSE} the function returns only the simulated
##' time series yt; otherwise, additional time series are provided.
##' Default is \code{FALSE}.
##'
##' @param inf the truncation point for infinite sums. Default is 1,000.
##' In practice, the Fortran subroutine uses \eqn{inf = q}, if \eqn{d = 0}.
##'
##' @param linkg character or a two character vector indicating which
##' links must be used in the model. See \sQuote{The BTSR structure}
##' in \code{\link{btsr.functions}} for details and \code{\link{link.btsr}}
##' for valid links. If only one value is provided, the same link is used
##' for \eqn{\mu_t} and for \eqn{y_t} in the AR part of the model.
##' Default is \code{c("logit", "logit")}. For the linear link, the constant
##' will always be 1.
##'
##' @param seed optionally, an integer which gives the value of the fixed
##' seed to be used by the random number generator. If missing, a random integer
##' is chosen uniformly from 1,000 to 10,000.
##'
##' @param rngtype optionally, an integer indicating which random number generator
##' is to be used. Default is 2: the Mersenne Twister algorithm. See \sQuote{Common Arguments}
##' in \code{\link{btsr.functions}}.
##'
##' @param debug logical, if \code{TRUE} the output from FORTRAN is returned (for
##' debugging purposes). Default is \code{FALSE} for all models.
##'
##' @return
##' The function \code{BARFIMA.sim} returns the simulated time series yt by default.
##' If \code{complete = TRUE}, a list with the following components
##' is returned instead:
##' \itemize{
##' \item \code{model}: string with the text \code{"BARFIMA"}
##'
##' \item \code{yt}: the simulated time series
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{debug}: the output from FORTRAN (if requested).
##'
##' }
##'
##' @seealso
##' \code{\link{btsr.sim}}
##'
##' @examples
##' # Generating a Beta model where mut does not vary with time
##' # yt ~ Beta(a,b), a = mu*nu, b = (1-mu)*nu
##'
##' y <- BARFIMA.sim(linkg = "linear", n = 1000, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##' hist(y)
##'
##' @export
##'
##' @md
BARFIMA.sim <- function(n = 1, burn = 0, xreg = NULL,
coefs = list(alpha = 0, beta = NULL, phi = NULL,
theta = NULL, d = 0, nu = 20),
y.start = NULL, xreg.start = NULL,
xregar = TRUE, error.scale = 1, complete = FALSE,
inf = 1000, linkg = c("logit", "logit"),
seed = NULL, rngtype = 2, debug = FALSE){
##----------------------------------
## checking required parameters:
##----------------------------------
if(is.null(coefs)) stop("coefs missing with no default")
if(!"list" %in% class(coefs)) stop("coefs must be a list")
##----------------------------------
## checking configurations:
##----------------------------------
cf <- .sim.configs(model = "BARFIMA", xreg = xreg,
y.start = y.start, xreg.start = xreg.start,
linkg = linkg, n = n, burn = burn,
coefs = coefs, xregar = xregar,
error.scale = error.scale, seed = seed,
rngtype = rngtype, y.default = 0)
out <- .btsr.sim(model = "BARFIMA", inf = inf, configs = cf,
complete = complete, debug = debug)
class(out) <- c(class(out), "barfima")
invisible(out)
}
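##------------------------------------------------------------------
## Illustrative sketch: simulating a full BARFIMA model with one
## regressor, AR, MA and long-memory components. The regressor and
## all coefficient values below are arbitrary choices; note that for
## simulation the regressor must have length n + burn.
##------------------------------------------------------------------
n.sim <- 300; burn.sim <- 100
x.sim <- as.matrix(sin(2 * pi * (1:(n.sim + burn.sim)) / 50))
sim.full <- BARFIMA.sim(n = n.sim, burn = burn.sim, xreg = x.sim,
                        seed = 1234, linkg = "logit", error.scale = 1,
                        xregar = TRUE, complete = TRUE,
                        coefs = list(alpha = 0.1, beta = 0.3, phi = 0.4,
                                     theta = 0.2, d = 0.25, nu = 20))
str(sim.full[c("yt", "mut", "etat", "error")])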
##' @rdname BARFIMA.functions
##' @order 3
##'
##' @details
##'
##' The function \code{BARFIMA.extract} allows the user to extract the
##' components \eqn{y_t}, \eqn{\mu_t}, \eqn{\eta_t = g(\mu_t)}, \eqn{r_t},
##' the log-likelihood, and the vectors and matrices used to calculate the
##' score vector and the information matrix associated to a given set of parameters.
##'
##' This function can be used by any user to create an objective function
##' that can be passed to optimization algorithms not available in the BTSR Package.
##'
##' @param yt a numeric vector with the observed time series. If missing, an error
##' message is issued.
##'
##' @param nnew optionally, the number of out-of-sample predicted values required.
##' Default is 0.
##'
##' @param xnew a vector or matrix, with \code{nnew} observations of the
##' regressors observed/predicted values corresponding to the period of
##' out-of-sample forecast. If \code{xreg = NULL}, \code{xnew} is ignored.
##'
##' @param p a non-negative integer. The order of the AR polynomial.
##' If missing, the value of \code{p} is computed from \code{length(coefs$phi)}
##' and \code{length(fixed.values$phi)}. For fitting, the default is 0.
##'
##' @param q a non-negative integer. The order of the MA polynomial.
##' If missing, the value of \code{q} is computed from \code{length(coefs$theta)}
##' and \code{length(fixed.values$theta)}. For fitting, the default is 0.
##'
##' @param lags optionally, a list with the lags that the values in \code{coefs} correspond to.
##' The names of the entries in this list must match the ones in \code{coefs}.
##' For one dimensional coefficients, the \code{lag} is obviously always 1 and can
##' be suppressed. An empty list indicates that either the argument \code{fixed.lags}
##' is provided or all lags must be used.
##'
##' @param fixed.values optionally, a list with the values of the coefficients
##' that are fixed. By default, if a given vector (such as the vector of AR coefficients)
##' has fixed values and the corresponding entry in this list is empty, the fixed values
##' are set as zero. The names of the entries in this list must match the ones
##' in \code{coefs}.
##'
##' @param fixed.lags optionally, a list with the lags that the fixed values
##' in \code{fixed.values} correspond to. The names of the entries in this list must
##' match the ones in \code{fixed.values}. For one dimensional coefficients, the
##' \code{lag} is obviously always 1 and can be suppressed. If an empty list is provided
##' and the model has fixed lags, the argument \code{lags} is used as reference.
##'
##' @param m a non-negative integer indicating the starting time for the sum of the
##' partial log-likelihoods, that is \eqn{\ell = \sum_{t = m+1}^n \ell_t}. Default is
##' 0.
##'
##' @param llk logical, if \code{TRUE} the value of the log-likelihood function
##' is returned. Default is \code{TRUE}.
##'
##' @param sco logical, if \code{TRUE} the score vector is returned.
##' Default is \code{FALSE}.
##'
##' @param info logical, if \code{TRUE} the information matrix is returned.
##' Default is \code{FALSE}. For the fitting function, \code{info} is automatically
##' set to \code{TRUE} when \code{report = TRUE}.
##'
##' @param extra logical, if \code{TRUE} the matrices and vectors used to
##' calculate the score vector and the information matrix are returned.
##' Default is \code{FALSE}.
##'
##' @return
##' The function \code{BARFIMA.extract} returns a list with the following components.
##'
##' \itemize{
##' \item \code{model}: string with the text \code{"BARFIMA"}
##'
##' \item \code{coefs}: the coefficients of the model passed through the
##' \code{coefs} argument
##'
##' \item \code{yt}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{score}: the score vector (if requested)
##'
##' \item \code{info.Matrix}: the information matrix (if requested)
##'
##' \item \code{Drho}, \code{T}, \code{E}, \code{h}: additional matrices and vectors
##' used to calculate the score vector and the information matrix. (if requested)
##'
##' \item \code{yt.new}: the out-of-sample forecast (if requested)
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##' }
##'
##' @seealso
##' \code{\link{btsr.extract}}
##'
##' @examples
##' #------------------------------------------------------------
##' # Generating a Beta model where mut does not vary with time
##' # yt ~ Beta(a,b), a = mu*nu, b = (1-mu)*nu
##' #------------------------------------------------------------
##'
##' m1 <- BARFIMA.sim(linkg = "linear",n = 100,
##' complete = TRUE, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##'
##' #------------------------------------------------------------
##' # Extracting the conditional time series given yt and
##' # a set of parameters
##' #------------------------------------------------------------
##'
##' # Assuming that all coefficients are non-fixed
##' e1 = BARFIMA.extract(yt = m1$yt, coefs = list(alpha = 0.2, nu = 20),
##' linkg = "linear", llk = TRUE,
##' sco = TRUE, info = TRUE)
##'
##' #----------------------------------------------------
##' # comparing the simulated and the extracted values
##' #----------------------------------------------------
##' cbind(head(m1$mut), head(e1$mut))
##'
##' #---------------------------------------------------------
##' # the log-likelihood, score vector and information matrix
##' #---------------------------------------------------------
##' e1$sll
##' e1$score
##' e1$info.Matrix
##'
##' @export
##' @md
BARFIMA.extract <- function(yt, xreg = NULL, nnew = 0, xnew = NULL,
p, q, coefs = list(), lags = list(),
fixed.values = list(), fixed.lags = list(),
y.start = NULL, xreg.start = NULL,
xregar = TRUE, error.scale = 1, inf = 1000, m = 0,
linkg = c("logit","logit"), llk = TRUE, sco = FALSE,
info = FALSE, extra = FALSE, debug = FALSE){
if(is.null(coefs) & is.null(fixed.values))
stop("Please, provide a list of coefficients")
if(!is.null(coefs)){
if(! "list" %in% class(coefs)) stop("coefs must be a list")}
if(!is.null(fixed.values)){
if(! "list" %in% class(fixed.values)) stop("fixed.values must be a list")}
else{ fixed.values <- list()}
if(missing(p)) p = length(coefs$phi) + length(fixed.values$phi)
if(missing(q)) q = length(coefs$theta) + length(fixed.values$theta)
cf <- .extract.configs(model = "BARFIMA", yt = yt, y.start = y.start,
y.lower = 0, y.upper = 1, openIC = c(TRUE, TRUE),
xreg = xreg, xnew = xnew, nnew = nnew,
xreg.start = xreg.start, linkg = linkg,
p = p, q = q, inf = inf, m = m, xregar = xregar,
error.scale = error.scale, coefs = coefs,
lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, llk = llk, sco = sco,
info = info, extra = extra)
out <- .btsr.extract(model = "BARFIMA", yt = yt, configs = cf, debug = debug)
class(out) <- c(class(out), "barfima")
invisible(out)
}
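##------------------------------------------------------------------
## Illustrative sketch: building an objective function from
## BARFIMA.extract() and passing it to an external optimizer (here
## stats::optim). The free parameters (alpha and nu), starting values
## and bounds below are arbitrary; this is not the package's own
## fitting routine (see BARFIMA.fit).
##------------------------------------------------------------------
y.obj <- BARFIMA.sim(n = 200, linkg = "linear", seed = 1234,
                     coefs = list(alpha = 0.2, nu = 20))
negll <- function(par, yt) {
  ext <- BARFIMA.extract(yt = yt, linkg = "linear", llk = TRUE,
                         coefs = list(alpha = par[1], nu = par[2]))
  -ext$sll   # negative log-likelihood, to be minimized
}
opt <- optim(par = c(0.5, 10), fn = negll, yt = y.obj,
             method = "L-BFGS-B",
             lower = c(0.01, 0.01), upper = c(0.99, Inf))
opt$par   # estimates of alpha and nu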
##' @rdname BARFIMA.functions
##' @order 4
##'
##' @details
##' The function \code{BARFIMA.fit} fits a BARFIMA model to a given univariate time
##' series. For now, available optimization algorithms are \code{"L-BFGS-B"} and
##' \code{"Nelder-Mead"}. Both methods accept bounds for the parameters. For
##' \code{"Nelder-Mead"}, bounds are set via parameter transformation.
##'
##'
##' @param d logical, if \code{TRUE}, the parameter \code{d} is included
##' in the model either as fixed or non-fixed. If \code{d = FALSE} the value is
##' fixed as 0. The default is \code{TRUE}.
##'
##' @param start a list with the starting values for the non-fixed coefficients
##' of the model. If an empty list is provided, the function \code{\link{coefs.start}}
##' is used to obtain starting values for the parameters.
##'
##' @param ignore.start logical, if starting values are not provided, the
##' function uses the default values and \code{ignore.start} is ignored.
##' In case starting values are provided and \code{ignore.start = TRUE}, those
##' starting values are ignored and recalculated. The default is \code{FALSE}.
##'
##' @param lower optionally, list with the lower bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters have no lower
##' bound except for \code{nu}, for which the default is 0. Only the bounds for
##' bounded parameters need to be specified.
##'
##' @param upper optionally, list with the upper bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters have no upper
##' bound. Only the bounds for bounded parameters need to be specified.
##'
##' @param control a list with configurations to be passed to the
##' optimization subroutines. Missing arguments will receive default values. See
##' \code{\link{fit.control}}.
##'
##' @param report logical, if \code{TRUE} the summary from model estimation is
##' printed and \code{info} is automatically set to \code{TRUE}. Default is \code{TRUE}.
##'
##' @param ... further arguments passed to the internal functions.
##'
##' @return
##' The function \code{BARFIMA.fit} returns a list with the following components.
##' Each particular model can have additional components in this list.
##'
##' \itemize{
##' \item \code{model}: string with the text \code{"BARFIMA"}
##'
##' \item \code{convergence}: An integer code. 0 indicates successful completion.
##' The error codes depend on the algorithm used.
##'
##' \item \code{message}: A character string giving any additional information
##' returned by the optimizer, or NULL.
##'
##' \item \code{counts}: an integer giving the number of function evaluations.
##'
##' \item \code{control}: a list of control parameters.
##'
##' \item \code{start}: the starting values used by the algorithm.
##'
##' \item \code{coefficients}: The best set of parameters found.
##'
##' \item \code{n}: the sample size used for estimation.
##'
##' \item \code{series}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{fitted.values}: the conditional mean, which corresponds to
##' the in-sample forecast, also denoted fitted values
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error.scale}: the scale for the error term.
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{residual}: the observed minus the fitted values. The same as
##' the \code{error} term if \code{error.scale = 0}.
##'
##' \item \code{forecast}: the out-of-sample forecast (if requested).
##'
##' \item \code{xnew}: the observed/predicted values of the regressors
##' corresponding to the period of out-of-sample forecast.
##' Only included if \code{xreg} is not \code{NULL} and \code{nnew > 0}.
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{info.Matrix}: the information matrix (if requested)
##'
##' \item \code{configs}: a list with the configurations adopted to fit the model.
##' This information is used by the prediction function.
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##'
##' \item \code{call}: a string with the description of the fitted model.
##'
##' }
##'
##' @seealso
##' \code{\link{btsr.fit}}
##'
##' @examples
##'
##' # Generating a Beta model where mut does not vary with time
##' # yt ~ Beta(a,b), a = mu*nu, b = (1-mu)*nu
##'
##' y <- BARFIMA.sim(linkg = "linear", n = 100, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##'
##' # fitting the model
##' f <- BARFIMA.fit(yt = y, report = TRUE,
##' start = list(alpha = 0.5, nu = 10),
##' linkg = "linear", d = FALSE)
##'
##' @export
##'
##' @md
BARFIMA.fit <- function(yt, xreg = NULL, nnew = 0, xnew = NULL,
p = 0, d = TRUE, q = 0, m = 0, inf = 1000,
start = list(), ignore.start = FALSE,
lags = list(), fixed.values = list(),
fixed.lags = list(), lower = list(nu = 0),
upper = list(nu = Inf), linkg = c("logit","logit"),
sco = FALSE, info = FALSE, extra = FALSE, xregar = TRUE,
y.start = NULL, xreg.start = NULL,
error.scale = 1, control = list(), report = TRUE,
debug = FALSE,...){
# default values for nu (merge with user provided values)
lw <- list(nu = 0); up <- list(nu = Inf)
lw[names(lower)] <- lower; up[names(upper)] <- upper
lower <- lw; upper <- up
if(report) info = TRUE
cf <- .fit.configs(model = "BARFIMA", yt = yt, y.start = y.start,
y.lower = 0, y.upper = 1, openIC = c(TRUE, TRUE),
xreg = xreg, xnew = xnew, nnew = nnew,
xreg.start = xreg.start, linkg = linkg,
p = p, d = d, q = q, inf = inf, m = m,
xregar = xregar, error.scale = error.scale,
start = start, ignore.start = ignore.start,
lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, lower = lower,
upper = upper, control = control,
sco = sco, info = info, extra = extra)
if(!is.null(cf$conv)) return(invisible(cf))
out <- .btsr.fit(model = "BARFIMA", yt = yt, configs = cf, debug = debug)
out$call <- .fit.print(model = "BARFIMA", p = cf$p, q = cf$q,
d = !(cf$d$nfix == 1 & cf$d$fvalues == 0),
nreg = cf$nreg)
class(out) <- c(class(out), "barfima")
if(report) print(summary(out))
invisible(out)
}
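##------------------------------------------------------------------
## Illustrative sketch: fitting with the "Nelder-Mead" method through
## the control list and requesting a 5-step out-of-sample forecast.
## The simulated data and coefficient values below are arbitrary.
##------------------------------------------------------------------
y.nm <- BARFIMA.sim(n = 300, seed = 1234,
                    coefs = list(alpha = 0.1, phi = 0.4, nu = 20))
fit.nm <- BARFIMA.fit(yt = y.nm, p = 1, d = FALSE, nnew = 5,
                      control = list(method = "Nelder-Mead"),
                      report = FALSE)
fit.nm$coefficients   # estimated alpha, phi and nu
fit.nm$forecast       # out-of-sample forecast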
#' @title
#' BTSR: Bounded Time Series Regression.
#'
#' @description
#' The BTSR package provides functions to simulate, estimate and forecast a
#' wide range of regression-based dynamic models for bounded time series. The
#' package covers the most commonly applied models in the literature.
#' The package's main calculations are done in FORTRAN, which translates into
#' very fast algorithms.
#'
#' @author Taiane Schaedler Prass \email{taianeprass@@gmail.com}
#' @docType package
#' @name BTSR.Package
#' @aliases BTSR
#' @keywords internal
"_PACKAGE"
#' @useDynLib BTSR, .registration=TRUE
#'
#' @section The BTSR structure:
#'
#' The general structure of the deterministic part of a BTSR model is
#'
#' \deqn{g_1(\mu_t) = \alpha + X_t\beta +
#' \sum_{j=1}^p \phi_j[g_2(y_{t-j}) - I_{xregar}X_{t-j}\beta] + h_t}
#'
#' where
#' \itemize{
#' \item \eqn{I_{xregar}} is 0 if \code{xreg} is not included in the AR part of the model
#' and 1 otherwise
#'
#' \item the term \eqn{h_t} depends on the argument \code{model}:
#' \itemize{
#' \item for BARC models: \eqn{h_t = h(T^{t-1}(u_0))}
#' \item otherwise: \eqn{h_t = \sum_{k = 1}^\infty c_k r_{t-k}}
#' }
#'
#' \item \eqn{g_1} and \eqn{g_2} are the links defined in \code{linkg}.
#' Notice that \eqn{g_2} is only used in the AR part of the model and, typically,
#' \eqn{g_1 = g_2}.
#'
#' \item \eqn{r_t} depends on the \code{error.scale} adopted:
#' \itemize{
#' \item if \code{error.scale = 0}: \eqn{r_t = y_t - \mu_t} (data scale)
#' \item if \code{error.scale = 1}: \eqn{r_t = g_1(y_t) - g_1(\mu_t)}
#' (predictive scale)
#' }
#'
#' \item \eqn{c_k} are the coefficients of \eqn{(1-L)^d\theta(L)}.
#' In particular, if \eqn{d = 0}, then \eqn{c_k = \theta_k}, for
#' \eqn{k = 1, \dots, q}, and 0 otherwise.
#' }
#'
NULL
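##------------------------------------------------------------------
## Illustrative sketch of the recursion above for the simplest case:
## no regressors, d = 0 (so h_t reduces to the MA term), logit links
## g1 = g2 and error.scale = 1. The package evaluates this recursion
## in FORTRAN; the pure R version below is only meant to clarify the
## structure, with y and the coefficients chosen arbitrarily.
##------------------------------------------------------------------
.btsr.eta.sketch <- function(yt, alpha, phi, theta) {
  logit <- function(x) log(x / (1 - x))
  n <- length(yt)
  gy <- logit(yt)                       # g2(y_t)
  eta <- numeric(n)                     # g1(mu_t)
  r <- numeric(n)                       # error on the predictive scale
  for (t in seq_len(n)) {
    ar <- 0                             # sum_j phi_j g2(y_{t-j}), with g2(y_t) = 0 for t < 1
    for (j in seq_along(phi)) if (t - j >= 1) ar <- ar + phi[j] * gy[t - j]
    ma <- 0                             # sum_k theta_k r_{t-k}, with r_t = 0 for t < 1
    for (k in seq_along(theta)) if (t - k >= 1) ma <- ma + theta[k] * r[t - k]
    eta[t] <- alpha + ar + ma
    r[t] <- gy[t] - eta[t]              # error.scale = 1: r_t = g1(y_t) - g1(mu_t)
  }
  eta
}
.btsr.eta.sketch(yt = c(0.3, 0.5, 0.4, 0.6), alpha = 0.1,
                 phi = 0.4, theta = 0.2)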
##-------------------------------------------------------------------------
## internal function.
## Performs several checks to make sure that
## the correct type of variables will be passed to FORTRAN
## Returns a list with the arguments to be passed to simulation
## and/or fitting subroutines
##-------------------------------------------------------------------------
.extract.configs <- function(model, yt, y.start, y.lower, y.upper, openIC,
xreg, xnew, nnew, xreg.start, linkg,
p, q, inf, m, xregar, error.scale,
coefs, lags, fixed.values, fixed.lags,
llk, sco, info, extra){
##----------------------------------------------------------
## checking if the data has NA's or any value outside (0,1)
##----------------------------------------------------------
out <- .data.check(yt = yt, lower = y.lower, upper = y.upper, openIC = openIC)
if(!is.null(out$conv)) return(invisible(out))
out$n <- as.integer(length(yt))
out$y.lower = y.lower
out$y.upper = y.upper
##--------------------------------------------------
## The code allows for different links for y and mu
##--------------------------------------------------
dummy <- .link.check(model = model, link = linkg)
if(length(linkg) == 1) linkg <- c(linkg, linkg)
out$linkg <- .link.convert(link = linkg)
##--------------------------------------------------
## Regressors.
## xnew is needed by the FORTRAN subroutine so
## skip.forecast must be set as FALSE
##--------------------------------------------------
temp <- .xreg.convert(xreg = xreg, xnew = xnew, n = out$n,
nnew = nnew, skip.forecast = FALSE)
out[names(temp)] <- temp
##---------------------------------------------------------
## initial values: using y.default = y.lower - 1
## ensures that the Fortran subroutine will set g(y) = 0
##---------------------------------------------------------
if(out$nreg == 0) xregar = FALSE
temp <- .data.start.convert(y.start = y.start, xreg.start = xreg.start,
nreg = out$nreg, xregar = xregar,
y.default = y.lower - 1)
out[names(temp)] <- temp
##----------------------------------------------------------------------------
## parameters initialization and fixed values identification
##----------------------------------------------------------------------------
if(is.null(coefs$nu)){
if(is.null(fixed.values$nu)) stop("nu is missing with no default")}
temp <- .coefs.convert.all(model = model, p = p, q = q, nreg = out$nreg,
coefs = coefs, lags = lags,
fixed.values = fixed.values, fixed.lags = fixed.lags)
out[names(temp)] <- temp
out$p <- as.integer(p)
##-------------------------------
## Other configurations
##-------------------------------
out$inf <- as.integer(inf)
if(!is.null(out$d)){
if((out$d$nfix == 0 | out$d$fvalues != 0) & out$inf < 100)
warning(paste("non-zero d and inf = ", inf,
". Be carefull, this value may be too small",
sep = ""), immediate. = TRUE)}
out$m <- as.integer(m)
out$error.scale <- as.integer(error.scale)
out$xregar <- as.integer(xregar)
out$llk <- as.integer(llk)
out$sco <- as.integer(sco)
out$info <- as.integer(info)
out$extra <- as.integer(extra)
if(!(model == "BARC")) out$q <- as.integer(q)
out$npar <- length(out$coefs)
if(out$npar == 0) out$coefs = 0
invisible(out)
}
#-------------------------------------------------------
# Fix-me
#-------------------------------------------------------
# Using
# foo <- .check.model(model[1], "extract")
# and then
# .Fortran(foo,...)
# gives an error during the registration process
# Therefore, for now, we are using the auxiliary
# functions defined in the sequel
#-------------------------------------------------------
.btsr.extract.barfima <- function(yt, configs){
.Fortran("barfimar",
n = configs$n,
yt = yt,
gyt = numeric(configs$n),
ystart = configs$y.start,
nreg = as.integer(configs$nreg),
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
npar = max(1L, configs$npar),
coefs = configs$coefs,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
llk = configs$llk,
sll = 0,
sco = configs$sco,
U = numeric(max(1,configs$npar*configs$sco)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0, max(1,configs$n*configs$extra),
max(1,(configs$npar-1+configs$nu$nfix)*configs$extra)),
T = numeric(max(1, configs$n*configs$extra)),
E = matrix(0, max(1,configs$n*configs$extra),
1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(max(1, configs$n*configs$extra)))
}
.btsr.extract.karfima <- function(yt, configs){
.Fortran("karfimar",
n = configs$n,
yt = yt,
gyt = numeric(configs$n),
ystart = configs$y.start,
nreg = as.integer(configs$nreg),
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
npar = max(1L, configs$npar),
coefs = configs$coefs,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
llk = configs$llk,
sll = 0,
sco = configs$sco,
U = numeric(max(1,configs$npar*configs$sco)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0, max(1,configs$n*configs$extra),
max(1,(configs$npar-1+configs$nu$nfix)*configs$extra)),
T = numeric(max(1, configs$n*configs$extra)),
E = matrix(0, max(1,configs$n*configs$extra),
1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(max(1, configs$n*configs$extra)))
}
.btsr.extract.garfima <- function(yt, configs){
.Fortran("garfimar",
n = configs$n,
yt = yt,
gyt = numeric(configs$n),
ystart = configs$y.start,
nreg = as.integer(configs$nreg),
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
npar = max(1L, configs$npar),
coefs = configs$coefs,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
llk = configs$llk,
sll = 0,
sco = configs$sco,
U = numeric(max(1,configs$npar*configs$sco)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0, max(1,configs$n*configs$extra),
max(1,(configs$npar-1+configs$nu$nfix)*configs$extra)),
T = numeric(max(1, configs$n*configs$extra)),
E = matrix(0, max(1,configs$n*configs$extra),
1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(max(1, configs$n*configs$extra)))
}
.btsr.extract.uwarfima <- function(yt, configs){
.Fortran("uwarfimar",
n = configs$n,
yt = yt,
gyt = numeric(configs$n),
ystart = configs$y.start,
nreg = as.integer(configs$nreg),
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
npar = max(1L, configs$npar),
coefs = configs$coefs,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
llk = configs$llk,
sll = 0,
sco = configs$sco,
U = numeric(max(1,configs$npar*configs$sco)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0, max(1,configs$n*configs$extra),
max(1,(configs$npar-1+configs$nu$nfix)*configs$extra)),
T = numeric(max(1, configs$n*configs$extra)),
E = matrix(0, max(1,configs$n*configs$extra),
1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(max(1, configs$n*configs$extra)))
}
##---------------------------------------------------------------------------
## internal function:
## Interface between R and FORTRAN
## Also used to summarize the results of the extraction and return
## only the relevant variables
##---------------------------------------------------------------------------
.btsr.extract <- function(model, yt, configs, debug){
#foo <- .check.model(model[1], "extract")
if(configs$npar == 0){
configs$sco <- 0L
configs$info <- 0L
configs$extra <- 0L
}
temp <- switch(EXPR = model,
BARFIMA = .btsr.extract.barfima(yt, configs),
GARFIMA = .btsr.extract.garfima(yt, configs),
KARFIMA = .btsr.extract.karfima(yt, configs),
UWARFIMA = .btsr.extract.uwarfima(yt, configs))
out <- list(model = model)
vars <- c("coefs","yt", "xreg", "gyt", "mut", "etat", "error")
out[vars] <- temp[vars]
if(configs$nreg == 0) out$xreg = NULL
if(configs$llk == 1) out$sll <- temp$sll
if(configs$sco == 1){
out$score <- temp$U
names(out$score) <- names(configs$coefs)
}
if(configs$info == 1){
out$info.Matrix <- as.matrix(temp$K)
colnames(out$info.Matrix) <- names(configs$coefs)
rownames(out$info.Matrix) <- names(configs$coefs)
}
if(configs$extra == 1){
out[c("Drho", "T", "E", "h")] <- temp[c("Drho", "T", "E", "h")]
}
if(configs$nnew > 0) out$yt.new <- temp$ynew
if(debug) out$out.Fortran <- temp
invisible(out)
}
#' Sets default values for constants used by the optimization functions
#' in FORTRAN
#'
#' @title Default control list
#'
#' @param control a list with configurations to be passed to the
#' optimization subroutines. Missing arguments will receive default values.
#' See \sQuote{Details}.
#'
#' @details The \code{control} argument is a list that can supply any of the
#' following components:
#'
#' \describe{
#'
#' \item{\code{method}}{The optimization method. Current available options
#' are \code{"L-BFGS-B"} and \code{"Nelder-Mead"}. Default is \code{"L-BFGS-B"}.}
#'
#' \item{\code{maxit}}{The maximum number of iterations. Defaults to \code{1000}.}
#'
#' \item{\code{iprint}}{The frequency of reports if \code{control$trace}
#' is positive. Default is -1 (no report).
#' \itemize{
#' \item For \code{"L-BFGS-B"} method:
#'
#' iprint<0 no output is generated;
#'
#' iprint=0 print only one line at the last iteration;
#'
#' 0<iprint<99 print also f and |proj g| every iprint iterations;
#'
#' iprint=99 print details of every iteration except n-vectors;
#'
#' iprint=100 print also the changes of active set and final x;
#'
#' iprint>100 print details of every iteration including x and g;
#'
#' \item For \code{"Nelder-Mead"} method:
#'
#' iprint<0 No printing
#'
#' iprint=0 Printing of parameter values and the function
#' value after initial evidence of convergence.
#'
#' iprint>0 As for iprint = 0, plus progress reports after every
#' iprint evaluations, plus printing for the initial simplex.
#' }}
#'
#'
#' \item{\code{factr}}{controls the convergence of the \code{"L-BFGS-B"}
#' method. Convergence occurs when the reduction in the objective is
#' within this factor of the machine tolerance. The iteration will stop when
#'
#' \deqn{(f^k - f^{k+1})/max\{|f^k|,|f^{k+1}|,1\} \le factr*epsmch}
#'
#' where epsmch is the machine precision, which is automatically
#' generated by the code. Typical values
#' for \code{factr}: 1.e+12 for low accuracy; 1.e+7 for moderate accuracy;
#' 1.e+1 for extremely high accuracy. Default is \code{1e7}, that is, a
#' tolerance of about \code{1e-8}.}
#'
#' \item{\code{pgtol}}{helps control the convergence of the \code{"L-BFGS-B"}
#' method. It is a tolerance on the projected gradient in the current
#' search direction. The iteration will stop when
#'
#' \deqn{max\{|proj g_i |, i = 1, ..., n\} \le pgtol}
#'
#' where \eqn{pg_i} is the ith component of the projected gradient.
#' Default is \code{1e-12}.}
#'
#' \item{\code{stopcr}}{The criterion applied to the standard deviation of
#' the values of the objective function at the points of the simplex, for the
#' "Nelder-Mead" method. Default is \code{1e-4}.}
#' }
#'
#' @return a list with all arguments in \sQuote{Details}.
#'
#' @examples
#' BTSR::fit.control()
#'
#' @export
fit.control <- function(control = list()){
con <- list(method = "L-BFGS-B",
maxit = 1000,
iprint = -1,
factr = 1e+7,
pgtol = 1e-12,
stopcr = 1e-4)
con[names(control)] <- control
return(con)
}
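##------------------------------------------------------------------
## Illustrative sketch: overriding selected control values (the
## values below are arbitrary). With the default factr = 1e7, the
## L-BFGS-B stopping tolerance described above is approximately
## factr * .Machine$double.eps, i.e. about 1e-8.
##------------------------------------------------------------------
ctrl.nm <- fit.control(list(method = "Nelder-Mead", maxit = 5000, stopcr = 1e-6))
ctrl.nm$method               # entries not supplied keep their defaults
1e7 * .Machine$double.eps    # approximate default L-BFGS-B tolerance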
#' This function calculates initial values for the parameter vector,
#' to pass to the optimization function.
#'
#' @title Initial values for optimization
#'
#' @param model character; The model to be fitted to the data. One of
#' \code{"BARFIMA"}, \code{"KARFIMA"}, \code{"GARFIMA"}, \code{"BARC"}.
#' Default is "Generic" so that no specific structure is assumed.
#'
#' @param yt a univariate time series. Missing values (NA's)
#' are not allowed.
#'
#' @param linkg character; one of \code{"linear"}, \code{"logit"},
#' \code{"log"}, \code{"loglog"}, \code{"cloglog"}.
#' If only one name is provided, the same link is used for the conditional
#' mean, that is, to define \eqn{g(\mu)}, and for the observed time series
#' in the AR part of the model, that is, \eqn{g(y[t])}.
#'
#' @param xreg optional; a vector or matrix of external regressors,
#' which must have the same number of rows as \code{yt}.
#'
#' @param p an integer; the AR order. Default is zero.
#'
#' @param q an integer; for \code{BARC} models represents the dimension of
#' the parameter associated to the map \eqn{T}. For other models is the
#' MA order. Default is zero.
#'
#' @param d logical; if FALSE, \eqn{d} is fixed as zero. Default is TRUE.
#'
#' @param y.start optional; an initialization value for \eqn{y[t]},
#' for \eqn{t \le 0}, to be used in the AR recursion. If not provided,
#' the default assume \eqn{y[t] = 0}, when using a \code{"linear"} link for
#' \eqn{yt}, and \eqn{g(y[t]) = 0}, otherwise.
#'
#' @param y.lower lower limit for the distribution support.
#' Default is \code{-Inf}.
#'
#' @param y.upper upper limit for the distribution support.
#' Default is \code{Inf}.
#'
#' @param lags optional; a list with the components \code{beta},
#' \code{phi} and \code{theta} specifying which lags must be included
#' in the model. An empty list or a missing component indicates that, based on the
#' values of \code{nreg}, \code{p} and \code{q}, all lags must be included in the model.
#'
#' @param fixed.values optional; a list with the fixed values for
#' each component, if any. If fixed values are provided, either \code{lags}
#' or \code{fixed.lags} must also be provided.
#'
#' @param fixed.lags optional; a list with the components \code{beta},
#' \code{phi} and \code{theta} specifying which lags must be fixed.
#' An empty list implies that fixed values will be set based on
#' \code{lags}.
#'
#' @return a list with starting values for the parameters of the selected
#' model. Possible outputs are:
#'
#' \item{alpha}{the intercept}
#' \item{beta}{the coefficients for the regressors}
#' \item{phi}{the AR coefficients}
#' \item{theta}{for BARC models, the map parameter. For any other model,
#' the MA coefficients}
#' \item{d}{the long memory parameter}
#' \item{nu}{the precision parameter}
#'
#' @importFrom stats lm.fit fitted residuals
#'
#' @export
#'
#' @examples
#' mu = 0.5
#' nu = 20
#'
#' yt = rbeta(100, shape1 = mu*nu, shape2 = (1-mu)*nu)
#' coefs.start(model = "BARFIMA", yt = yt,
#' linkg = "linear", d = FALSE,
#' y.lower = 0, y.upper = 1)
#'
#' yt = rgamma(100, shape = nu, rate = mu*nu)
#' coefs.start(model = "GARFIMA", yt = yt,
#' linkg = "linear", d = FALSE,
#' y.lower = 0, y.upper = Inf)
#'
coefs.start <- function(model = "Generic",
yt, linkg = c("linear","linear"), xreg = NULL,
p = 0, q = 0, d = TRUE, y.start = NULL,
y.lower = -Inf, y.upper = Inf,
lags = list(), fixed.values = list(),
fixed.lags = list()){
if(is.null(y.lower)) y.lower = -Inf
if(is.null(y.upper)) y.upper = Inf
if(y.lower == -Inf) y.lower = .Machine$double.xmin
if(y.upper == Inf) y.upper = .Machine$double.xmax
if(length(linkg) == 1) linkg = c(linkg, linkg)
##-------------------
## link function
##-------------------
linktemp1 <- link.btsr(link = linkg[1])
linkfun1 <- linktemp1$linkfun
g1y <- linkfun1(yt, ctt.ll = 1, y.lower = y.lower, y.upper = y.upper)
if(linkg[2] == linkg[1]){
g2y <- g1y
linkfun2 <- linkfun1
}
else{
linktemp2 <- link.btsr(link = linkg[2])
linkfun2 <- linktemp2$linkfun
g2y <- linkfun2(yt, ctt.ll = 1, y.lower = y.lower, y.upper = y.upper)
}
if(p > 0){
if(is.null(y.start)) gystart <- NA
else gystart <- linkfun2(y.start, ctt.ll = 1,
y.lower = y.lower, y.upper = y.upper)
}
n <- length(g1y)
if(is.null(xreg)) nreg <- 0
else nreg <- ncol(as.matrix(xreg))
##----------------------------------------------------
## starting values for alpha, phi and beta
##----------------------------------------------------
X <- matrix(1, nrow = n)
nreg1 <- nreg
if(nreg > 0){
lag <- 1:nreg
if(!is.null(lags$beta)) lag = lags$beta
else{
if(!is.null(fixed.lags$beta)){
fl <- fixed.lags$beta
lag <- lag[-fl]
}}
if(length(lag) > 0) X <- cbind(X, as.matrix(xreg)[,lag])
}
p1 <- p
if(p > 0){
lag <- 1:p
if(!is.null(lags$phi)) lag = lags$phi
else{
if(!is.null(fixed.lags$phi)){
fl <- fixed.lags$phi
lag <- lag[-fl]
}}
p1 <- length(lag)
if(p1 > 0){
P <- matrix(gystart, nrow = n, ncol = p1)
for(i in 1:p1) P[-c(1:lag[i]), i] <- g2y[1:(n-lag[i])]
X <- cbind(X,P)
}
}
w <- sum(is.na(X[,ncol(X)]))
if(w > 0){
X <- X[-c(1:w), ]
g1y <- g1y[-c(1:w)]
}
fit <- lm.fit(x = X, y = g1y)
mqo <- c(fit$coefficients, use.names = FALSE)
mqo[is.na(mqo)] <- 0
k <- length(mqo)
##--------------------------------------
## initializing the parameter values
##--------------------------------------
alpha <- NULL
a <- as.integer(is.null(fixed.values$alpha))
if(a == 1) alpha <- mqo[1]
else{mqo = mqo[-1]; k = k-1}
beta <- NULL
if(nreg1 > 0) beta <- mqo[(a+1):(a+nreg1)]
phi <- NULL
if(p1 > 0) phi <- mqo[(a+nreg1+1):k]
theta <- NULL
q1 <- max(q - max(length(fixed.values$theta),length(fixed.lags$theta)),
length(lags$theta))
if(q1 > 0) theta <- rep(0, q1) # for BARC models this will be fixed in the main program
dd <- NULL
if(d == TRUE){
if(is.null(fixed.values$d)) dd <- 0.01
}
nu <- NULL
if(is.null(fixed.values$nu)){
n1 <- length(g1y)
mu <- fitted(fit)
mu <- linktemp1$linkinv(mu, ctt.ll = 1, y.lower = y.lower,
y.upper = y.upper)
dlink <- linktemp1$diflink(mu, ctt.ll = 1, y.lower = y.lower,
y.upper = y.upper)
er <- residuals(fit)
sigma2 <- sum(er^2)/((n1 - k) * (dlink)^2)
nu.type <- switch (EXPR = model[1],
BARFIMA = 1,
BARC = 1,
GARFIMA = 2,
KARFIMA = 3,
UWARFIMA = 3)
if(nu.type == 1) nu = mean(mu * (1 - mu)/sigma2) - 1
if(nu.type == 2) nu = mean(mu^2/sigma2)
if(nu.type == 3) nu = 5
}
par <- list(alpha = alpha, beta = beta, phi = phi,
theta = theta, d = dd, nu = nu)
return(par)
}
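##------------------------------------------------------------------
## Illustrative sketch: the list returned by coefs.start() has the
## same layout as the 'start' argument of the fitting functions, so
## it can be inspected or tweaked before being passed on. The data
## below are an arbitrary i.i.d. Beta sample.
##------------------------------------------------------------------
set.seed(1234)
y.cs <- rbeta(200, shape1 = 0.5 * 20, shape2 = 0.5 * 20)
st <- coefs.start(model = "BARFIMA", yt = y.cs, linkg = "logit",
                  p = 1, d = FALSE, y.lower = 0, y.upper = 1)
st   # starting values for alpha, phi and nu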
##-------------------------------------------------------------------------
## internal function.
## Convert the coefficients and its bounds to the correct format to pass
## to FORTRAN
##-------------------------------------------------------------------------
.coefs.fit.config <- function(model = "Generic",
coefs = list(),lags = list(),
fixed.values = list(), fixed.lags = list(),
lower = list(), upper = list(),
p = 0, q = 0, nreg = 0){
##---------------------------------------------------
## checking for fixed and initial values.
##---------------------------------------------------
out <- .coefs.convert.all(model = model, coefs = coefs, lags = lags,
fixed.values = fixed.values,fixed.lags = fixed.lags,
p = p, q = q, nreg = nreg)
##---------------------------------------------------
## setting the bounds
##---------------------------------------------------
lwr <- NULL
upr <- NULL
nbd <- NULL
## alpha
if(out$alpha$nfix == 0){
cb <- .bounds.convert(npar = 1, lower = lower$alpha, upper = upper$alpha)
lwr <- c(lwr, alpha = cb$lower)
upr <- c(upr, alpha = cb$upper)
nbd <- c(nbd, alpha = cb$nbd)
}
## beta
npar <- nreg - out$beta$nfix
if(npar > 0){
cb <- .bounds.convert(npar = npar, lower = lower$beta, upper = upper$beta)
lwr <- c(lwr, beta = cb$lower)
upr <- c(upr, beta = cb$upper)
nbd <- c(nbd, beta = cb$nbd)
}
## phi
npar <- p - out$phi$nfix
if(npar > 0){
cb <- .bounds.convert(npar = npar, lower = lower$phi, upper = upper$phi)
lwr <- c(lwr, phi = cb$lower)
upr <- c(upr, phi = cb$upper)
nbd <- c(nbd, phi = cb$nbd)
}
## theta
npar <- q - out$theta$nfix
if(npar > 0){
cb <- .bounds.convert(npar = npar, lower = lower$theta, upper = upper$theta)
lwr <- c(lwr, theta = cb$lower)
upr <- c(upr, theta = cb$upper)
nbd <- c(nbd, theta = cb$nbd)
}
if(!(model == "BARC")){
## d - not implemented for BARC models
if(out$d$nfix == 0){
cb <- .bounds.convert(npar = 1, lower = lower$d, upper = upper$d)
lwr <- c(lwr, d = cb$lower)
upr <- c(upr, d = cb$upper)
nbd <- c(nbd, d = cb$nbd)
}
}
## nu
if(out$nu$nfix == 0){
cb <- .bounds.convert(npar = 1, lower = lower$nu, upper = upper$nu)
lwr <- c(lwr, nu = cb$lower)
upr <- c(upr, nu = cb$upper)
nbd <- c(nbd, nu = cb$nbd)
}
out$lower <- lwr
out$upper <- upr
out$nbd <- as.integer(nbd)
invisible(out)
}
##-------------------------------------------------------------------------
## internal function.
## Performs several checks to make sure that
## the correct type of variables will be passed to FORTRAN
##-------------------------------------------------------------------------
.fit.configs <- function(model, yt, y.start, y.lower, y.upper, openIC,
xreg, xnew, nnew, xreg.start, linkg, p, d, q,
inf, m, xregar, error.scale, start, ignore.start,
lags, fixed.values, fixed.lags, lower, upper,
control, sco, info, extra,...){
##----------------------------------------------------------
## checking if the data has NA's or any value outside (ylower, yupper)
##----------------------------------------------------------
out <- .data.check(yt = yt, lower = y.lower, upper = y.upper, openIC = openIC)
if(!is.null(out$conv)) return(invisible(out))
out$n <- as.integer(length(yt))
out$m <- as.integer(m)
out$y.lower = y.lower
out$y.upper = y.upper
##-------------------------
## link for mu and y
##-------------------------
dummy <- .link.check(model = model, link = linkg)
if(length(linkg) == 1) linkg <- c(linkg, linkg)
out$linkg <- .link.convert(link = linkg)
##-----------------
## regressors
##-----------------
temp <- .xreg.convert(xreg = xreg, xnew = xnew, n = out$n,
nnew = nnew, skip.forecast = FALSE)
out[names(temp)] <- temp
##---------------------------------------------------------
## initial values: using y.default = y.lower -1
## assures that the Fortran subroutine will set g(y) = 0
##---------------------------------------------------------
if(out$nreg == 0) xregar = FALSE
temp <- .data.start.convert(y.start = y.start, xreg.start = xreg.start,
nreg = out$nreg, xregar = xregar,
y.default = y.lower - 1)
out[names(temp)] <- temp
##----------------------------------------------------------------------------
## parameters initialization and fixed values identification
##----------------------------------------------------------------------------
## updating configurations: in case the user passed one of the lists as NULL
## instead of an empty list, this step will avoid breaking the code
st <- FALSE
uc <- .fix.null.configs(coefs = start, lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, lower = lower, upper = upper)
if(is.null(uc$coefs)) st <- TRUE
if(d == FALSE){
uc$fixed.values$d = 0
if(!is.null(uc$coefs)) start$d = NULL
}else{
if(!is.null(uc$fixed.values$d) & !is.null(start$d)){
stop(paste0("An initial value for d was provided:",
"\n start$d = ", start$d,
"\n but d was also fixed:",
"\n fixed.values$d = ", uc$fixed.values$d,".",
"\n If you wish to fix d, remove d from starting values or set d = FALSE. ",
"\n If you wish to fit d, remove d from the list of fixed values"))
}
}
##----------------------------------------------------------------------------
## checking if parameter initialization is required.
## in case no starting values were provided, use the default values.
## in case ignore.start = TRUE, starting values are ignored and recalculated.
## partial starting values are not allowed.
##----------------------------------------------------------------------------
if(st | ignore.start)
start <- coefs.start(model = model, yt = yt, linkg = linkg, xreg = xreg,
p = p, d = d, q = q, y.start = y.start, y.lower = y.lower,
y.upper = y.upper, lags = uc$lags, fixed.values = uc$fixed.values,
fixed.lags = uc$fixed.lags)
##----------------------------------------------------------------------------
## in case the user does not provide a value for the dispersion
## parameter and initialization was not required, set nu = 50
## (no particular reason for this choice)
##----------------------------------------------------------------------------
if(is.null(c(start$nu, uc$fixed.values$nu))) start$nu <- 50
##----------------------------------------------------------------------------
## organizing the values to be passed to FORTRAN
##----------------------------------------------------------------------------
if(model == "BARC"){
temp <- list(...)
if(!is.null(start$theta)) start$theta <- temp[["theta.barc"]]
}
temp <- .coefs.fit.config(model = model, coefs = start, lags = uc$lags,
fixed.values = uc$fixed.values,
fixed.lags = uc$fixed.lags, lower = uc$lower,
upper = uc$upper, p = p, q = q, nreg = out$nreg)
out[names(temp)] <- temp
out$p <- as.integer(p)
##-------------------------------
## Other configurations
##-------------------------------
out$inf <- as.integer(inf)
if(!is.null(out$d)){
if((out$d$nfix == 0 | out$d$fvalues != 0) & out$inf < 100)
warning(paste("non-zero d and inf = ", inf,
". Be carefull, this value may be too small",
sep = ""), immediate. = TRUE)}
out$error.scale <- as.integer(error.scale)
out$xregar <- as.integer(xregar)
out$sco <- as.integer(sco)
out$info <- as.integer(info)
out$extra <- as.integer(extra)
out$control <- fit.control(control)
out$npar <- length(out$coefs)
# dummy in case npar = 0
if(out$npar == 0) out$coefs <- 0
if(!(model == "BARC")) out$q <- as.integer(q)
invisible(out)
}
##-------------------------------------------------------------------------
## Internal function.
## Used to print information about the selected model
##-------------------------------------------------------------------------
.fit.print <- function(model, p, q, d, nreg){
dname <- ifelse(d, "d","0")
msg <- model
if(nreg == 0) msg <- paste(msg,"(", p, sep = "")
else msg <- paste(msg,"X(",p, sep = "")
if(!(model == "BARC")) msg <- paste(msg,",", dname, ",", q,") model", sep = "")
else msg <- paste(msg,") model",sep = "")
msg
}
##-------------------------------------------------------------------------
## Internal function.
## Used to extract information from the object returned by
## the FORTRAN function that fits the model
##-------------------------------------------------------------------------
.fit.get.results <- function(model, obj, configs){
out <- c()
out$model <- model
##----------------------------------
## Convergence
##----------------------------------
out$convergence <- obj$conv
if(configs$control$method == "L-BFGS-B"){
out$message <- switch(EXPR = paste(obj$conv),
"0" = "SUCCESSFUL TERMINATION",
"1" = "")
}
else{
out$message <- switch(EXPR = paste(obj$conv),
"0" = "SUCCESSFUL TERMINATION",
"1" = "MAXIMUM NO. OF FUNCTION EVALUATIONS EXCEEDED",
"2" = "NOP < 1 OR STOPCR <= 0")
}
if(obj$conv != 0) warning("FAIL / FUNCTION DID NOT CONVERGE!", immediate. = TRUE)
out$counts <- obj$neval
con <- fit.control(control = list())
nm <- names(con)
nc <- nm %in% names(obj)
con[nc] <- obj[nm[nc]]
out$control <- con[nc]
out$control$method <- configs$control$method
##---------------------------------------------------
## Coefficients: starting values and final values
##---------------------------------------------------
out$start <- configs$coefs
out$coefficients <- obj$coefs
##--------------------------------------
## Series
##--------------------------------------
out$n = as.integer(obj$n)
out$series <- obj$yt
out$gyt <- obj$gy
out$xreg <- NULL
if(obj$nreg > 0) out$xreg = obj$xreg
out$fitted.values <- obj$mut
out$etat <- obj$etat
out$error.scale <- obj$escale
out$error <- obj$error
if(obj$escale == 1) out$residuals <- obj$yt - obj$mut
else out$residuals <- obj$error
out$forecast <- NULL
if(obj$nnew > 0){
out$forecast <- obj$ynew
if(obj$nreg > 0) out$xnew = obj$xnew
}
if(model == "BARC"){
out$Ts <- obj$Ts
out$Ts.forecast <- NULL
if(obj$nnew > 0) out$Ts.forecast <- obj$Tnew
}
##------------------------------------------------
## likelihood, gradient and information matrix
##------------------------------------------------
out$sll <- NULL
out$score <- NULL
out$info.Matrix <- NULL
if(obj$llk == 1) out$sll <- obj$sll
if(obj$sco == 1) out$score <- obj$U
if(obj$info == 1){
out$info.Matrix <- as.matrix(obj$K)
colnames(out$info.Matrix) <- names(obj$coefs)
rownames(out$info.Matrix) <- names(obj$coefs)
}
##------------------------------------------------
## Extra information for prediction
##------------------------------------------------
nms <- names(out)
nmsc <- names(configs)
nmse <- !(nmsc %in% nms)
out$configs[nmsc[nmse]] <- configs[nmse]
out$configs$llk <- as.integer(obj$llk)
class(out) <- c("btsr", class(out))
invisible(out)
}
##' @title Summary Method of class BTSR
##'
##' @description \code{summary} method for class \code{"btsr"}.
##'
##' @name summary
##'
##' @aliases summary.btsr
##' @aliases print.summary.btsr
##'
##' @param object object of class \code{"btsr"}.
##' @param ... further arguments passed to or from other methods.
##'
##' @return
##' The function \code{summary.btsr} computes and returns a list
##' of summary statistics of the fitted model given in \code{object}.
##' Returns a list of class \code{summary.btsr}, which contains the
##' following components:
##'
##' \item{model}{the corresponding model.}
##'
##' \item{call}{the matched call.}
##'
##' \item{residuals}{the residuals of the model. Depends on the definition
##' of \code{error.scale}. If error.scale = 1, \eqn{residuals = g(y) - g(\mu)}.
##' If error.scale = 0, \eqn{residuals = y - \mu}.}
##'
##' \item{coefficients}{a \eqn{k \times 4}{k x 4} matrix with columns for
##' the estimated coefficient, its standard error, z-statistic and corresponding
##' (two-sided) p-value. Aliased coefficients are omitted.}
##'
##' \item{aliased}{named logical vector showing if the original coefficients
##' are aliased.}
##'
##' \item{sigma.res}{the square root of the estimated variance of the random
##' error \deqn{\hat\sigma^2 = \frac{1}{n-k}\sum_i{r_i^2},}{sigma^2 = (1/(n-k)) * sum(r[i]^2),}
##' where \eqn{r_i}{r[i]} is the \eqn{i}-th residual, \code{residuals[i]}.}
##'
##' \item{df}{degrees of freedom, a 3-vector \eqn{(k, n-k, k*)}, the first
##' being the number of non-aliased coefficients, the last being the total
##' number of coefficients.}
##'
##' \item{vcov}{a \eqn{k \times k}{k x k} matrix of (unscaled) covariances.
##' The inverse of the information matrix.}
##'
##' \item{loglik}{the sum of the log-likelihood values.}
##'
##' \item{aic}{the AIC value. \eqn{AIC = -2*loglik+2*k}.}
##'
##' \item{bic}{the BIC value. \eqn{BIC = -2*loglik + log(n)*k}.}
##'
##' \item{hq}{the HQC value. \eqn{HQC = -2*loglik + log(log(n))*k}.}
##'
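##' @examples
##' # A minimal sketch of the summary method. The data generation and the
##' # fitting call below mirror the examples given for the GARFIMA functions
##' # in this package; info = TRUE is required so that the information matrix
##' # (and hence the standard errors) is available to summary().
##' y <- GARFIMA.sim(linkg = "linear", n = 300, seed = 2021,
##'                  coefs = list(alpha = 0.2, nu = 20))
##' fit <- GARFIMA.fit(yt = y, linkg = "linear", d = FALSE,
##'                    start = list(alpha = 0.5, nu = 10),
##'                    info = TRUE, report = FALSE)
##' summary(fit)
##'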
##' @importFrom stats pnorm
##'
##' @export
##'
summary.btsr <- function(object,...){
if(!"btsr" %in% class(object))
stop("the argument 'object' must be a 'btsr' object")
if(is.null(object$info.Matrix))
    stop(paste0("\nsummary cannot be reported because\n",
                "the information matrix is not present\n",
                "in the object provided as input.\n",
                "Please, fit the model again setting info = TRUE"))
npar <- length(object$coefficients)
ans <- c()
ans$model <- object$model
ans$call <- object$call
ans$residuals <- object$residuals
n <- length(object$residuals)
rdf <- ans$df.residuals <- n-npar
ans$aliased <- is.na(object$coefficients) # used in print method
ans$sigma.res <- sqrt(sum(ans$residuals^2)/rdf)
class(ans) <- "summary.btsr"
if(npar == 0){
ans$df <- c(0L, n, length(ans$aliased))
ans$coefficients <- matrix(NA_real_, 0L, 4L,
dimnames = list(NULL,c("Estimate", "Std. Error",
"z value", "Pr(>|t|)")))
return(ans)
}
ans$df = c(npar, rdf, length(ans$aliased))
ans$vcov <- solve(object$info.Matrix)
stderror <- sqrt(diag(abs(ans$vcov)))
zstat <- abs(object$coefficients/stderror)
ans$coefficients <- cbind(Estimate = object$coefficients,
`Std. Error` = stderror,
`z value` = zstat,
`Pr(>|t|)` = 2*(1 - pnorm(zstat)))
ans$loglik <- object$sll
ans$aic <- -2*ans$loglik+2*npar
ans$bic <- -2*ans$loglik + log(n)*npar
ans$hq <- -2*ans$loglik + log(log(n))*npar
return(ans)
}
##' @title Print Method of class BTSR
##'
##' @description
##' Print method for objects of class \code{btsr}.
##'
##' @details
##' This is an internal method of the package BTSR; users are not
##' encouraged to call it directly.
##'
##' @param x object of class \code{btsr}.
##' @param digits minimal number of significant digits, see
##' \code{\link{print.default}}.
##' @param ... further arguments to be passed to or from other methods.
##' They are ignored in this function
##'
##' @return Invisibly returns its argument, \code{x}.
##'
##' @importFrom stats coef
##'
##' @export
##'
print.btsr <- function(x, digits = max(3L, getOption("digits") - 3L), ...)
{
if(length(coef(x))) {
cat("Coefficients:\n")
print.default(format(coef(x), digits = digits),
print.gap = 2L, quote = FALSE)
} else cat("No coefficients\n")
cat("\n")
invisible(x)
}
##-----------------------------------------------
## Internal function for printing the summary
##-----------------------------------------------
##' @rdname summary
##' @importFrom stats quantile printCoefmat
##'
##' @param x an object of class \code{"summary.btsr"},
##' usually, a result of a call to \code{summary.btsr}.
##' @param digits minimal number of significant digits, see
##' \code{\link{print.default}}.
##' @param signif.stars logical. If \code{TRUE},
##' \sQuote{significance stars} are printed for each coefficient.
##'
##' @details
##' \code{print.summary.btsr} tries to be smart about formatting the
##' coefficients, standard errors, etc. and additionally provides
##' \sQuote{significance stars}.
##'
##' @export
print.summary.btsr <- function (x, digits = max(3L, getOption("digits") - 3L),
signif.stars = getOption("show.signif.stars"), ...)
{
resid <- x$residuals
df <- x$df
rdf <- df[2L]
cat("\n")
cat("-----------------------------------------------")
cat("\nCall:\n",
paste(deparse(x$call), sep="\n", collapse = "\n"), "\n\n", sep = "")
if (rdf > 5L) {
nam <- c("Min", "1Q", "Median", "3Q", "Max")
rq <- if (length(dim(resid)) == 2L)
structure(apply(t(resid), 1L, quantile),
dimnames = list(nam, dimnames(resid)[[2L]]))
else {
zz <- zapsmall(quantile(resid), digits + 1L)
structure(zz, names = nam)
}
print(rq, digits = digits, ...)
}
else if (rdf > 0L) {
print(resid, digits = digits, ...)
} else { # rdf == 0 : perfect fit!
cat("ALL", df[1L], "residuals are 0: no residual degrees of freedom!")
cat("\n")
}
if (length(x$aliased) == 0L) {
cat("\nNo Coefficients\n")
} else {
if (nsingular <- df[3L] - df[1L])
cat("\nCoefficients: (", nsingular,
" not defined because of singularities)\n", sep = "")
else cat("\nCoefficients:\n")
coefs <- x$coefficients
if(any(aliased <- x$aliased)) {
cn <- names(aliased)
coefs <- matrix(NA, length(aliased), 4, dimnames=list(cn, colnames(coefs)))
coefs[!aliased, ] <- x$coefficients
}
printCoefmat(coefs, digits = digits, signif.stars = signif.stars,
na.print = "NA", ...)
}
##
cat("\nResidual standard error:",
format(signif(x$sigma, digits)), "on", rdf, "degrees of freedom")
cat("\n")
cat("-----------------------------------------------\n")
cat("\n")
invisible(x)
}
#-------------------------------------------------------
# Fix-me
#-------------------------------------------------------
# Using
# foo <- .check.model(model[1], "fit")
# and then
# .Fortran(foo,...)
# gives an error during the registration process
# Therefore, for now, we are using the auxiliary
# functions defined in the sequel
#-------------------------------------------------------
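##-----------------------------------------------------------------
## Wrappers for the FORTRAN optimization subroutines.
## Each .btsr.fit.<model> function below calls either the L-BFGS-B
## ("optimlbfgsb<model>r") or the Nelder-Mead ("optimnelder<model>r")
## routine, depending on configs$control$method, and returns the raw
## FORTRAN output. The four wrappers differ only in the name of the
## FORTRAN subroutine being called.
##-----------------------------------------------------------------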
.btsr.fit.barfima <- function(yt, configs, k1, k2){
if(configs$control$method == "L-BFGS-B")
.Fortran("optimlbfgsbbarfimar",
npar = max(1L,configs$npar),
coefs = configs$coefs,
nbd = configs$nbd,
lower = configs$lower,
upper = configs$upper,
n = configs$n,
yt = yt,
gy = numeric(configs$n),
ystart = configs$y.start,
nreg = configs$nreg,
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
sll = 1,
U = numeric(max(1L,configs$npar)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0, k1, k2),
T = numeric(k1),
E = matrix(0, k1,1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(k1),
iprint = as.integer(configs$control$iprint),
factr = configs$control$factr,
pgtol = configs$control$pgtol,
maxit = as.integer(configs$control$maxit),
neval = 0L,
conv = 0L)
else
.Fortran("optimnelderbarfimar",
npar = max(1L,configs$npar),
coefs = configs$coefs,
nbd = configs$nbd,
lower = configs$lower,
upper = configs$upper,
n = configs$n,
yt = yt,
gy = numeric(configs$n),
ystart = configs$y.start,
nreg = configs$nreg,
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
sll = 0,
sco = configs$sco,
U = numeric(max(1,configs$npar*configs$sco)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0,k1,k2),
T = numeric(k1),
E = matrix(0,k1,1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(k1),
iprint = as.integer(configs$control$iprint),
stopcr = configs$control$stopcr,
maxit = as.integer(configs$control$maxit),
neval = 0L,
conv = 0L)
}
.btsr.fit.karfima <- function(yt, configs, k1, k2){
if(configs$control$method == "L-BFGS-B")
.Fortran("optimlbfgsbkarfimar",
npar = max(1L,configs$npar),
coefs = configs$coefs,
nbd = configs$nbd,
lower = configs$lower,
upper = configs$upper,
n = configs$n,
yt = yt,
gy = numeric(configs$n),
ystart = configs$y.start,
nreg = configs$nreg,
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
sll = 1,
U = numeric(max(1L,configs$npar)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0, k1, k2),
T = numeric(k1),
E = matrix(0, k1,1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(k1),
iprint = as.integer(configs$control$iprint),
factr = configs$control$factr,
pgtol = configs$control$pgtol,
maxit = as.integer(configs$control$maxit),
neval = 0L,
conv = 0L)
else
.Fortran("optimnelderkarfimar",
npar = max(1L,configs$npar),
coefs = configs$coefs,
nbd = configs$nbd,
lower = configs$lower,
upper = configs$upper,
n = configs$n,
yt = yt,
gy = numeric(configs$n),
ystart = configs$y.start,
nreg = configs$nreg,
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
sll = 0,
sco = configs$sco,
U = numeric(max(1,configs$npar*configs$sco)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0,k1,k2),
T = numeric(k1),
E = matrix(0,k1,1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(k1),
iprint = as.integer(configs$control$iprint),
stopcr = configs$control$stopcr,
maxit = as.integer(configs$control$maxit),
neval = 0L,
conv = 0L)
}
.btsr.fit.garfima <- function(yt, configs, k1, k2){
if(configs$control$method == "L-BFGS-B")
.Fortran("optimlbfgsbgarfimar",
npar = max(1L,configs$npar),
coefs = configs$coefs,
nbd = configs$nbd,
lower = configs$lower,
upper = configs$upper,
n = configs$n,
yt = yt,
gy = numeric(configs$n),
ystart = configs$y.start,
nreg = configs$nreg,
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
sll = 1,
U = numeric(max(1L,configs$npar)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0, k1, k2),
T = numeric(k1),
E = matrix(0, k1,1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(k1),
iprint = as.integer(configs$control$iprint),
factr = configs$control$factr,
pgtol = configs$control$pgtol,
maxit = as.integer(configs$control$maxit),
neval = 0L,
conv = 0L)
else
.Fortran("optimneldergarfimar",
npar = max(1L,configs$npar),
coefs = configs$coefs,
nbd = configs$nbd,
lower = configs$lower,
upper = configs$upper,
n = configs$n,
yt = yt,
gy = numeric(configs$n),
ystart = configs$y.start,
nreg = configs$nreg,
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
sll = 0,
sco = configs$sco,
U = numeric(max(1,configs$npar*configs$sco)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0,k1,k2),
T = numeric(k1),
E = matrix(0,k1,1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(k1),
iprint = as.integer(configs$control$iprint),
stopcr = configs$control$stopcr,
maxit = as.integer(configs$control$maxit),
neval = 0L,
conv = 0L)
}
.btsr.fit.uwarfima <- function(yt, configs, k1, k2){
if(configs$control$method == "L-BFGS-B")
.Fortran("optimlbfgsbuwarfimar",
npar = max(1L,configs$npar),
coefs = configs$coefs,
nbd = configs$nbd,
lower = configs$lower,
upper = configs$upper,
n = configs$n,
yt = yt,
gy = numeric(configs$n),
ystart = configs$y.start,
nreg = configs$nreg,
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
sll = 1,
U = numeric(max(1L,configs$npar)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0, k1, k2),
T = numeric(k1),
E = matrix(0, k1,1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(k1),
iprint = as.integer(configs$control$iprint),
factr = configs$control$factr,
pgtol = configs$control$pgtol,
maxit = as.integer(configs$control$maxit),
neval = 0L,
conv = 0L)
else
.Fortran("optimnelderuwarfimar",
npar = max(1L,configs$npar),
coefs = configs$coefs,
nbd = configs$nbd,
lower = configs$lower,
upper = configs$upper,
n = configs$n,
yt = yt,
gy = numeric(configs$n),
ystart = configs$y.start,
nreg = configs$nreg,
xreg = configs$xreg,
xstart = configs$xreg.start,
mut = numeric(configs$n),
etat = numeric(configs$n),
error = numeric(configs$n),
escale = configs$error.scale,
nnew = configs$nnew,
xnew = configs$xnew,
ynew = numeric(max(1,configs$nnew)),
linkg = configs$linkg,
fixa = configs$alpha$nfix,
alpha = configs$alpha$fvalues,
fixb = configs$beta$nfix,
flagsb = configs$beta$flags,
beta = configs$beta$fvalues,
p = configs$p,
fixphi = configs$phi$nfix,
flagsphi = configs$phi$flags,
phi = configs$phi$fvalues,
xregar = configs$xregar,
q = configs$q,
fixtheta = configs$theta$nfix,
flagstheta = configs$theta$flags,
theta = configs$theta$fvalues,
fixd = configs$d$nfix,
d = configs$d$fvalues,
fixnu = configs$nu$nfix,
pdist = configs$nu$fvalues,
inf = configs$inf,
m = configs$m,
sll = 0,
sco = configs$sco,
U = numeric(max(1,configs$npar*configs$sco)),
info = configs$info,
K = diag(max(1,configs$npar*configs$info)),
extra = configs$extra,
Drho = matrix(0,k1,k2),
T = numeric(k1),
E = matrix(0,k1,1+2*(1-configs$nu$nfix)*configs$extra),
h = numeric(k1),
iprint = as.integer(configs$control$iprint),
stopcr = configs$control$stopcr,
maxit = as.integer(configs$control$maxit),
neval = 0L,
conv = 0L)
}
##---------------------------------------------------------------------------
## internal function:
## Interface between R and FORTRAN
## Also used to summarize the results of the optimization
## Returns only the relevant variables
##---------------------------------------------------------------------------
.btsr.fit <- function(model = "BARFIMA", yt, configs, debug){
#mdl <- .check.model(model[1],"fit")
#foo <- paste("optimlbfgsb", mdl, sep = "")
#foo <- paste("optimnelder", mdl, sep = "")
k1 <- max(1,configs$n*configs$extra)
k2 <- max(1,(configs$npar-1+configs$nu$nfix)*configs$extra)
temp <- switch(EXPR = model,
BARFIMA = .btsr.fit.barfima(yt, configs, k1, k2),
GARFIMA = .btsr.fit.garfima(yt, configs, k1, k2),
KARFIMA = .btsr.fit.karfima(yt, configs, k1, k2),
UWARFIMA = .btsr.fit.uwarfima(yt, configs, k1, k2))
temp$llk = 1
temp$sco = configs$sco
out <- .fit.get.results(model = model[1], temp, configs = configs)
if(debug) out$out.Fortran <- temp
invisible(out)
}
##----------------------------------------------------------
## GARFIMA MODELS
##----------------------------------------------------------
##' @title
##' Functions to simulate, extract components and fit GARFIMA models
##'
##' @name GARFIMA.functions
##' @order 1
##'
##' @description
##' These functions can be used to simulate, extract components
##' and fit any model of the class \code{garfima}. A model with
##' class \code{garfima} is a special case of a model with class \code{btsr} .
##' See \sQuote{The BTSR structure} in \code{\link{btsr.functions}} for
##' more details on the general structure.
##'
##' The \eqn{\gamma}ARMA model, the gamma regression and an i.i.d. sample
##' from a gamma distribution can be obtained as special cases.
##' See \sQuote{Details}.
##'
##' @details
##' The \eqn{\gamma}ARMA model and the gamma regression can be
##' obtained as special cases of the \eqn{\gamma}ARFIMA model.
##'
##' \itemize{
##' \item \eqn{\gamma}ARFIMA: is obtained by default.
##'
##' \item \eqn{\gamma}ARMA: is obtained by setting \code{coefs$d = 0} and
##' \code{d = FALSE}.
##'
##' \item gamma regression: is obtained by setting \code{p = 0}, \code{q = 0},
##' \code{coefs$d = 0} and \code{d = FALSE}. The \code{error.scale} is irrelevant.
##' The second argument in \code{linkg} is irrelevant.
##'
##' \item an i.i.d. sample from a Gamma distribution with parameters
##' \code{shape} and \code{scale} (compatible with the one from \code{\link{rgamma}})
##' is obtained by setting \code{linkg = "linear"}, \code{p = 0}, \code{q = 0},
##' \code{coefs$d = 0}, \code{d = FALSE} and, in the coefficient list,
##' \code{alpha = shape*scale} and \code{nu = shape} (\code{error.scale} and
##' \code{xregar} are irrelevant). See the examples for an illustration.
##'}
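##'
##' @examples
##' # Sketch of the i.i.d. Gamma special case described above: with a linear
##' # link and no regressors, AR, MA or long memory terms, GARFIMA.sim
##' # generates an i.i.d. sample from a Gamma(shape, scale) distribution
##' # with alpha = shape*scale and nu = shape.
##' shape <- 2
##' scale <- 0.5
##' y <- GARFIMA.sim(n = 500, seed = 1234, linkg = "linear",
##'                  coefs = list(alpha = shape*scale, nu = shape))
##' # empirical versus theoretical mean and variance
##' c(mean(y), shape*scale)
##' c(var(y), shape*scale^2)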
##'
##' @md
NULL
#> NULL
##' @rdname GARFIMA.functions
##' @order 2
##'
##' @details
##' The function \code{GARFIMA.sim} generates a random sample from a \eqn{\gamma}ARFIMA(p,d,q)
##' model.
##'
##' @param n a strictly positive integer. The sample size of yt (after burn-in).
##' Default is 1.
##'
##' @param burn a non-negative integer. The length of the "burn-in" period. Default is 0.
##'
##' @param xreg optionally, a vector or matrix of external regressors.
##' For simulation purposes, the length of xreg must be \code{n+burn}.
##' Default is \code{NULL}. For extraction or fitting purposes, the length
##' of \code{xreg} must be the same as the length of the observed time series
##' \eqn{y_t}.
##'
##' @param coefs a list with the coefficients of the model. An empty list will result
##' in an error. The arguments that can be passed through this list are:
##' \itemize{
##' \item \code{alpha} optionally, a numeric value corresponding to the intercept.
##' If the argument is missing, it will be treated as zero. See
##' \sQuote{The BTSR structure} in \code{\link{btsr.functions}}.
##'
##' \item \code{beta} optionally, a vector of coefficients corresponding to the
##' regressors in \code{xreg}. If \code{xreg} is provided but \code{beta} is
##' missing in the \code{coefs} list, an error message is issued.
##'
##' \item \code{phi} optionally, for the simulation function this must be a vector
##' of size \eqn{p}, corresponding to the autoregressive coefficients
##' (including the ones that are zero), where \eqn{p} is the AR order. For
##' the extraction and fitting functions, this is a vector with the non-fixed
##' values in the vector of autoregressive coefficients.
##'
##' \item \code{theta} optionally, for the simulation function this must be a vector
##' of size \eqn{q}, corresponding to the moving average coefficients
##' (including the ones that are zero), where \eqn{q} is the MA order. For
##' the extraction and fitting functions, this is a vector with the non-fixed
##' values in the vector of moving average coefficients.
##'
##' \item \code{d} optionally, a numeric value corresponding to the long memory
##' parameter. If the argument is missing, it will be treated as zero.
##'
##' \item \code{nu} the dispersion parameter. If missing, an error message is issued.
##'
##' }
##'
##' @param y.start optionally, an initial value for yt (to be used
##' in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{g_2(y_t) = 0}, for \eqn{t < 1}.
##'
##' @param xreg.start optionally, a vector of initial values for xreg
##' (to be used in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{X_t = 0}, for \eqn{t < 1}. If \code{xregar = FALSE} this argument
##' is ignored.
##'
##' @param xregar logical; indicates if xreg is to be included in the
##' AR part of the model. See \sQuote{The BTSR structure}. Default is \code{TRUE}.
##'
##' @param error.scale the scale for the error term. See \sQuote{The BTSR structure}
##' in \code{\link{btsr.functions}}. Default is 0.
##'
##' @param complete logical; if \code{FALSE} the function returns only the simulated
##' time series yt, otherwise, additional time series are provided.
##' Default is \code{FALSE}
##'
##' @param inf the truncation point for infinite sums. Default is 1,000.
##' In practice, the Fortran subroutine uses \eqn{inf = q}, if \eqn{d = 0}.
##'
##' @param linkg character or a two character vector indicating which
##' links must be used in the model. See \sQuote{The BTSR structure}
##' in \code{\link{btsr.functions}} for details and \code{\link{link.btsr}}
##' for valid links. If only one value is provided, the same link is used
##' for \eqn{mu_t} and for \eqn{y_t} in the AR part of the model.
##' Default is \code{c("log", "log")}. For the linear link, the constant
##' will always be 1.
##'
##' @param seed optionally, an integer which gives the value of the fixed
##' seed to be used by the random number generator. If missing, a random integer
##' is chosen uniformly from 1,000 to 10,000.
##'
##' @param rngtype optionally, an integer indicating which random number generator
##' is to be used. Default is 2: the Mersenne Twister algorithm. See \sQuote{Common Arguments}
##' in \code{\link{btsr.functions}}.
##'
##' @param debug logical, if \code{TRUE} the output from FORTRAN is returned (for
##' debugging purposes). Default is \code{FALSE} for all models.
##'
##' @return
##' The function \code{GARFIMA.sim} returns the simulated time series yt by default.
##' If \code{complete = TRUE}, a list with the following components
##' is returned instead:
##' \itemize{
##' \item \code{model}: string with the text \code{"GARFIMA"}
##'
##' \item \code{yt}: the simulated time series
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{debug}: the output from FORTRAN (if requested).
##'
##' }
##'
##' @seealso
##' \code{\link{btsr.sim}}
##'
##' @examples
##' # Generating a Gamma model where mut does not vary with time
##' # yt ~ Gamma(a,b), a = nu (shape), b = mu/nu (scale)
##'
##' y <- GARFIMA.sim(linkg = "linear", n = 1000, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##' hist(y)
##'
##' @export
##'
##' @md
GARFIMA.sim <- function(n = 1, burn = 0, xreg = NULL,
coefs = list(alpha = 0, beta = NULL, phi = NULL,
theta = NULL, d = 0, nu = 20),
y.start = NULL, xreg.start = NULL,
xregar = TRUE, error.scale = 0, complete = FALSE,
inf = 1000, linkg = c("log", "log"), seed = NULL,
rngtype = 2, debug = FALSE){
####----------------------------------
#### checking required parameters:
####----------------------------------
if(is.null(coefs)) stop("coefs missing with no default")
if(!"list" %in% class(coefs)) stop("coefs must be a list")
####----------------------------------
#### checking configurations:
####----------------------------------
cf <- .sim.configs(model = "GARFIMA", xreg = xreg,
y.start = y.start, xreg.start = xreg.start,
linkg = linkg, n = n, burn = burn,
coefs = coefs, xregar = xregar,
error.scale = error.scale, seed = seed,
rngtype = rngtype, y.default = 0)
out <- .btsr.sim(model = "GARFIMA", inf = inf, configs = cf,
complete = complete, debug = debug)
class(out) <- c(class(out), "garfima")
invisible(out)
}
##' @rdname GARFIMA.functions
##' @order 3
##'
##' @details
##'
##' The function \code{GARFIMA.extract} allows the user to extract the
##' components \eqn{y_t}, \eqn{\mu_t}, \eqn{\eta_t = g(\mu_t)}, \eqn{r_t},
##' the log-likelihood, and the vectors and matrices used to calculate the
##' score vector and the information matrix associated to a given set of parameters.
##'
##' This function can be used to create an objective function to be
##' passed to optimization algorithms not available in the BTSR package
##' (see the examples).
##'
##' @param yt a numeric vector with the observed time series. If missing, an error
##' message is issued.
##'
##' @param nnew optionally, the number of out-of-sample predicted values required.
##' Default is 0.
##'
##' @param xnew a vector or matrix, with \code{nnew} observations of the
##' regressors observed/predicted values corresponding to the period of
##' out-of-sample forecast. If \code{xreg = NULL}, \code{xnew} is ignored.
##'
##' @param p a non-negative integer. The order of AR polynomial.
##' If missing, the value of \code{p} is calculated from length(coefs$phi)
##' and length(fixed.values$phi). For fitting, the default is 0.
##'
##' @param q a non-negative integer. The order of the MA polynomial.
##' If missing, the value of \code{q} is calculated from length(coefs$theta)
##' and length(fixed.values$theta). For fitting, the default is 0.
##'
##' @param lags optionally, a list with the lags that the values in \code{coefs} correspond to.
##' The names of the entries in this list must match the ones in \code{coefs}.
##' For one dimensional coefficients, the \code{lag} is obviously always 1 and can
##' be suppressed. An empty list indicates that either the argument \code{fixed.lags}
##' is provided or all lags must be used.
##'
##' @param fixed.values optionally, a list with the values of the coefficients
##' that are fixed. By default, if a given vector (such as the vector of AR coefficients)
##' has fixed values and the corresponding entry in this list is empty, the fixed values
##' are set as zero. The names of the entries in this list must match the ones
##' in \code{coefs}.
##'
##' @param fixed.lags optionally, a list with the lags that the fixed values
##' in \code{fixed.values} correspond to. The names of the entries in this list must
##' match the ones in \code{fixed.values}. For one dimensional coefficients, the
##' \code{lag} is obviously always 1 and can be suppressed. If an empty list is provided
##' and the model has fixed lags, the argument \code{lags} is used as reference.
##'
##' @param m a non-negative integer indicating the starting time for the sum of the
##' partial log-likelihoods, that is \eqn{\ell = \sum_{t = m+1}^n \ell_t}. Default is
##' 0.
##'
##' @param llk logical, if \code{TRUE} the value of the log-likelihood function
##' is returned. Default is \code{TRUE}.
##'
##' @param sco logical, if \code{TRUE} the score vector is returned.
##' Default is \code{FALSE}.
##'
##' @param info logical, if \code{TRUE} the information matrix is returned.
##' Default is \code{FALSE}. For the fitting function, \code{info} is automatically
##' set to \code{TRUE} when \code{report = TRUE}.
##'
##' @param extra logical, if \code{TRUE} the matrices and vectors used to
##' calculate the score vector and the information matrix are returned.
##' Default is \code{FALSE}.
##'
##' @return
##' The function \code{GARFIMA.extract} returns a list with the following components.
##'
##' \itemize{
##' \item \code{model}: string with the text \code{"GARFIMA"}
##'
##' \item \code{coefs}: the coefficients of the model passed through the
##' \code{coefs} argument
##'
##' \item \code{yt}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{sco}: the score vector (if requested)
##'
##' \item \code{info}: the information matrix (if requested)
##'
##' \item \code{Drho}, \code{T}, \code{E}, \code{h}: additional matrices and vectors
##' used to calculate the score vector and the information matrix. (if requested)
##'
##' \item \code{yt.new}: the out-of-sample forecast (if requested)
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##' }
##'
##' @seealso
##' \code{\link{btsr.extract}}
##'
##' @examples
##' #------------------------------------------------------------
##' # Generating a Gamma model where mut does not vary with time
##' # yt ~ Gamma(a,b), a = nu (shape), b = mu/nu (scale)
##' #------------------------------------------------------------
##'
##' m1 <- GARFIMA.sim(linkg = "linear",n = 100,
##' complete = TRUE, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##'
##' #------------------------------------------------------------
##' # Extracting the conditional time series given yt and
##' # a set of parameters
##' #------------------------------------------------------------
##'
##' # Assuming that all coefficients are non-fixed
##' e1 = GARFIMA.extract(yt = m1$yt, coefs = list(alpha = 0.2, nu = 20),
##'                      linkg = "linear", llk = TRUE,
##'                      sco = TRUE, info = TRUE)
##'
##' #----------------------------------------------------
##' # comparing the simulated and the extracted values
##' #----------------------------------------------------
##' cbind(head(m1$mut), head(e1$mut))
##'
##' #---------------------------------------------------------
##' # the log-likelihood, score vector and information matrix
##' #---------------------------------------------------------
##' e1$sll
##' e1$score
##' e1$info.Matrix
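##'
##' #------------------------------------------------------------
##' # Sketch: using the extraction function to build an objective
##' # function for a generic optimizer. This is only an illustration
##' # (here alpha and nu are treated as the unknown parameters);
##' # for actual model fitting use GARFIMA.fit.
##' #------------------------------------------------------------
##' negll <- function(par, yt){
##'   ex <- GARFIMA.extract(yt = yt, linkg = "linear",
##'                         coefs = list(alpha = par[1], nu = par[2]),
##'                         llk = TRUE, sco = FALSE, info = FALSE)
##'   -ex$sll
##' }
##' op <- optim(par = c(0.1, 10), fn = negll, yt = m1$yt,
##'             method = "L-BFGS-B", lower = c(0.01, 0.01))
##' op$par   # compare with alpha = 0.2 and nu = 20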
##'
##' @export
##' @md
GARFIMA.extract <- function(yt, xreg = NULL, nnew = 0, xnew = NULL,
p, q, coefs = list(),lags = list(),
fixed.values = list(), fixed.lags = list(),
y.start = NULL, xreg.start = NULL,
xregar = TRUE, error.scale = 0, inf = 1000, m = 0,
linkg = c("log","log"), llk = TRUE, sco = FALSE,
info = FALSE, extra = FALSE, debug = FALSE){
if(is.null(coefs) & is.null(fixed.values))
stop("Please, provide a list of coefficients")
if(!is.null(coefs)){
if(! "list" %in% class(coefs)) stop("coefs must be a list")}
if(!is.null(fixed.values)){
if(! "list" %in% class(fixed.values)) stop("fixed.values must be a list")}
else{ fixed.values <- list()}
if(missing(p)) p = length(coefs$phi) + length(fixed.values$phi)
if(missing(q)) q = length(coefs$theta) + length(fixed.values$theta)
cf <- .extract.configs(model = "GARFIMA", yt = yt, y.start = y.start,
y.lower = 0, y.upper = Inf, openIC = c(TRUE, TRUE),
xreg = xreg, xnew = xnew, nnew = nnew,
xreg.start = xreg.start, linkg = linkg,
p = p, q = q, inf = inf, m = m, xregar = xregar,
error.scale = error.scale, coefs = coefs,
lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, llk = llk, sco = sco,
info = info, extra = extra)
out <- .btsr.extract(model = "GARFIMA", yt = yt, configs = cf, debug = debug)
class(out) <- c(class(out), "garfima")
invisible(out)
}
##' @rdname GARFIMA.functions
##' @order 4
##'
##' @details
##' The function \code{GARFIMA.fit} fits a GARFIMA model to a given univariate time
##' series. For now, available optimization algorithms are \code{"L-BFGS-B"} and
##' \code{"Nelder-Mead"}. Both methods accept bounds for the parameters. For
##' \code{"Nelder-Mead"}, bounds are set via parameter transformation.
##'
##'
##' @param d logical, if \code{TRUE}, the parameter \code{d} is included
##' in the model either as fixed or non-fixed. If \code{d = FALSE} the value is
##' fixed as 0. The default is \code{TRUE}.
##'
##' @param start a list with the starting values for the non-fixed coefficients
##' of the model. If an empty list is provided, the function \code{\link{coefs.start}}
##' is used to obtain starting values for the parameters.
##'
##' @param ignore.start logical, if starting values are not provided, the
##' function uses the default values and \code{ignore.start} is ignored.
##' In case starting values are provided and \code{ignore.start = TRUE}, those
##' starting values are ignored and recalculated. The default is \code{FALSE}.
##'
##' @param lower optionally, list with the lower bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters have no lower
##' bound except for \code{nu}, for which the default is 0. Only the bounds for
##' bounded parameters need to be specified.
##'
##' @param upper optionally, list with the upper bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters have no upper
##' bound. Only the bounds for bounded parameters need to be specified.
##'
##' @param control a list with configurations to be passed to the
##' optimization subroutines. Missing arguments will receive default values. See
##' \code{\link{fit.control}}.
##'
##' @param report logical, if \code{TRUE} the summary from model estimation is
##' printed and \code{info} is automatically set to \code{TRUE}. Default is \code{TRUE}.
##'
##' @param ... further arguments passed to the internal functions.
##'
##' @return
##' The function \code{GARFIMA.fit} returns a list with the following components.
##' Each particular model can have additional components in this list.
##'
##' \itemize{
##' \item \code{model}: string with the text \code{"GARFIMA"}
##'
##' \item \code{convergence}: An integer code. 0 indicates successful completion.
##' The error codes depend on the algorithm used.
##'
##' \item \code{message}: A character string giving any additional information
##' returned by the optimizer, or NULL.
##'
##' \item \code{counts}: an integer giving the number of function evaluations.
##'
##' \item \code{control}: a list of control parameters.
##'
##' \item \code{start}: the starting values used by the algorithm.
##'
##' \item \code{coefficients}: The best set of parameters found.
##'
##' \item \code{n}: the sample size used for estimation.
##'
##' \item \code{series}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{fitted.values}: the conditional mean, which corresponds to
##' the in-sample forecast, also denoted fitted values
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error.scale}: the scale for the error term.
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{residual}: the observed minus the fitted values. The same as
##' the \code{error} term if \code{error.scale = 0}.
##'
##' \item \code{forecast}: the out-of-sample forecast (if requested).
##'
##' \item \code{xnew}: the observations of the regressors observed/predicted
##' values corresponding to the period of out-of-sample forecast.
##' Only included if \code{xreg} is not \code{NULL} and \code{nnew > 0}.
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{info.Matrix}: the information matrix (if requested)
##'
##' \item \code{configs}: a list with the configurations adopted to fit the model.
##' This information is used by the prediction function.
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##'
##' \item \code{call}: a string with the description of the fitted model.
##'
##' }
##'
##' @seealso
##' \code{\link{btsr.fit}}
##'
##' @examples
##'
##' # Generating a Gamma model where mut does not vary with time
##' # yt ~ Gamma(a,b), a = nu (shape), b = mu/nu (scale)
##'
##' y <- GARFIMA.sim(linkg = "linear", n = 100, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##'
##' # fitting the model
##' f <- GARFIMA.fit(yt = y, report = TRUE,
##' start = list(alpha = 0.5, nu = 10),
##' linkg = "linear", d = FALSE)
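##'
##' # accessing some of the components of the fitted model
##' # (see the description of the returned list above)
##' f$coefficients
##' head(f$fitted.values)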
##'
##' @export
##'
##' @md
GARFIMA.fit <- function(yt, xreg = NULL, nnew = 0, xnew = NULL,
p = 0, d = TRUE, q = 0, m = 0, inf = 1000,
start = list(), ignore.start = FALSE,
lags = list(), fixed.values = list(),
fixed.lags = list(), lower = list(nu = 0),
upper = list(nu = Inf), linkg = c("log","log"),
sco = TRUE, info = FALSE, extra = FALSE, xregar = TRUE,
y.start = NULL, xreg.start = NULL,
error.scale = 0, control = list(), report = TRUE,
debug = FALSE,...){
# default values for nu (merge with user provided values)
lw <- list(nu = 0); up <- list(nu = Inf)
lw[names(lower)] <- lower; up[names(upper)] <- upper
lower <- lw; upper <- up
if(report) info = TRUE
cf <- .fit.configs(model = "GARFIMA", yt = yt, y.start = y.start,
y.lower = 0, y.upper = Inf, openIC = c(TRUE, TRUE),
xreg = xreg, xnew = xnew, nnew = nnew,
xreg.start = xreg.start, linkg = linkg,
p = p, d = d, q = q, inf = inf, m = m,
xregar = xregar, error.scale = error.scale,
start = start, ignore.start = ignore.start,
lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, lower = lower,
upper = upper, control = control,
sco = sco, info = info, extra = extra)
  if(!is.null(cf$conv)) return(invisible(cf))
out <- .btsr.fit(model = "GARFIMA", yt = yt, configs = cf, debug = debug)
out$call <- .fit.print(model = "GARFIMA", p = cf$p, q = cf$q,
d = !(cf$d$nfix == 1 & cf$d$fvalues == 0),
nreg = cf$nreg)
class(out) <- c(class(out), "garfima")
if(report) print(summary(out))
invisible(out)
}
##----------------------------------------------------------
## Generic functions: sim, extract and fit
##----------------------------------------------------------
##' @title
##' Generic functions to simulate, extract components and fit BTSR models
##'
##' @name btsr.functions
##' @order 1
##'
##' @description
##' These generic functions can be used to simulate, extract components
##' and fit any model of the class \code{btsr}. All functions are wrappers
##' for the corresponding function associated to the chosen model.
##' See \sQuote{The BTSR structure} and \sQuote{Common Arguments}.
##'
##' @details
##'
##' # The BTSR structure
##'
##' The general structure of the deterministic part of a BTSR model is
##'
##' \deqn{g_1(\mu_t) = \alpha + X_t\beta +
##' \sum_{j=1}^p \phi_j[g_2(y_{t-j}) - I_{xregar}X_{t-j}\beta] + h_t}
##'
##' where
##' \itemize{
##' \item \eqn{I_{xregar}} is 0, if \code{xreg} is not included in the AR part of the model and 1,
##' otherwise
##'
##' \item the term \eqn{h_t} depends on the argument \code{model}:
##' \itemize{
##' \item for BARC models: \eqn{h_t = h(T^{t-1}(u_0))}
##' \item otherwise: \eqn{h_t = \sum_{k = 1}^\infty c_k r_{t-k}}
##' }
##'
##' \item \eqn{g_1} and \eqn{g_2} are the links defined in \code{linkg}.
##' Notice that \eqn{g_2} is only used in the AR part of the model and, typically,
##' \eqn{g_1 = g_2}.
##'
##' \item \eqn{r_t} depends on the \code{error.scale} adopted:
##' \itemize{
##' \item if \code{error.scale = 0}: \eqn{r_t = y_t - \mu_t} (data scale)
##' \item if \code{error.scale = 1}: \eqn{r_t = g_1(y_t) - g_1(\mu_t)}
##' (predictive scale)
##' }
##'
##' \item \eqn{c_k} are the coefficients of \eqn{(1-L)^d\theta(L)}.
##' In particular, if \eqn{d = 0}, then \eqn{c_k = \theta_k}, for
##' \eqn{k = 1, \dots, q}.
##' }
##'
##' # Common Arguments
##'
##' In what follows we describe some of the arguments that are
##' common to all BTSR models. For more details on extra arguments,
##' see the corresponding function associated to the selected model.
##' A short example illustrating the \code{error.scale} definition is
##' provided in the examples.
##'
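##' @examples
##' # A short sketch illustrating the error scale defined above: with
##' # error.scale = 0 (data scale) the error term returned by the
##' # simulation function equals y_t - mu_t.
##' m <- btsr.sim(model = "BARFIMA", linkg = "linear", n = 100, seed = 2021,
##'               complete = TRUE, error.scale = 0,
##'               coefs = list(alpha = 0.2, nu = 20))
##' range(m$error - (m$yt - m$mut))   # numerically zero
##'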
##' @md
NULL
#> NULL
##' @rdname btsr.functions
##' @order 2
##'
##' @details
##'
##' The function \code{btsr.sim} is used to generate random samples
##' from BTSR models. See \sQuote{The BTSR structure}.
##'
##' # Common Arguments
##'
##' ## Simulation Function
##'
##' Common arguments passed through \code{"..."} in \code{btsr.sim} are:
##'\itemize{
##' \item \code{n} a strictly positive integer. The sample size of yt (after burn-in).
##' Default for all models is 1.
##'
##' \item \code{burn} a non-negative integer. length of "burn-in" period.
##' Default for all models is 0.
##'
##' \item \code{xreg} optionally, a vector or matrix of external regressors.
##' For simulation purposes, the length of xreg must be \code{n+burn}.
##' Default for all models is \code{NULL}
##'
##' \item \code{coefs} a list with the coefficients of the model. Each model has
##' its default. An empty list will result in an error. The arguments in this list
##' are:
##' \itemize{
##' \item \code{alpha} optionally, a numeric value corresponding to the intercept.
##' If the argument is missing, it will be treated as zero.
##'
##' \item \code{beta} optionally, a vector of coefficients corresponding to the
##' regressors in \code{xreg}. If \code{xreg} is provided but \code{beta} is
##' missing in the \code{coefs} list, an error message is issued.
##'
##' \item \code{phi} optionally, a vector of size \eqn{p}, corresponding to the
##' autoregressive coefficients (including the ones that are zero), where \eqn{p}
##' is the AR order.
##'
##' \item \code{nu} the dispersion parameter. If missing, an error message is issued.
##'
##' \item \code{rho, y.lower, y.upper, theta, d, u0} model specific arguments.
##' See the documentation corresponding to each model.
##' }
##'
##' \item \code{y.start} optionally, an initial value for yt (to be used
##' in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{g_2(y_t) = 0}, for \eqn{t < 1}.
##'
##' \item \code{xreg.start} optionally, a vector of initial values for xreg
##' (to be used in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{X_t = 0}, for \eqn{t < 1}. If \code{xregar = FALSE} this argument
##' is ignored.
##'
##' \item \code{xregar} logical; indicates if xreg is to be included in the
##' AR part of the model. See \sQuote{The BTSR structure}. Default is \code{TRUE}.
##'
##' \item \code{error.scale} the scale for the error term. See also \sQuote{The BTSR structure}.
##' Each model has its default.
##'
##' \item \code{inf} the truncation point for infinite sums. Default is 1000.
##' In practice, the Fortran subroutine uses \eqn{inf = q}, if \eqn{d = 0}.
##' BARC models do not have this argument.
##'
##' \item \code{linkg} character or a two character vector indicating which
##' links must be used in the model. See \sQuote{The BTSR structure}.
##' If only one value is provided, the same link is used for \eqn{mu_t} and
##' for \eqn{y_t} in the AR part of the model. Each model has its default.
##'
##' \item \code{seed} optionally, an integer which gives the value of the fixed
##' seed to be used by the random number generator. If missing, a random integer
##' is chosen uniformly from 1,000 to 10,000.
##'
##' \item \code{rngtype} optionally, an integer indicating which random number generator
##' is to be used. Default is 2. The current options are:
##' \itemize{
##' \item \code{0}: Jason Blevins algorithm. Available at <https://jblevins.org/log/openmp>
##' \item \code{1}: Wichmann-Hill algorithm (Wichmann and Hill, 1982).
##' \item \code{2}: Mersenne Twister algorithm (Matsumoto and Nishimura, 1998).
##' FORTRAN code adapted from <https://jblevins.org/mirror/amiller/mt19937.f90> and
##' <https://jblevins.org/mirror/amiller/mt19937a.f90>
##' \item \code{3}: Marsaglia-MultiCarry algorithm - kiss 32. Random number generator suggested
##' by George Marsaglia in "Random numbers for C: The END?" posted on sci.crypt.random-numbers
##' in 1999.
##' \item \code{4}: Marsaglia-MultiCarry algorithm - kiss 64. Based on the
##' 64-bit KISS (Keep It Simple Stupid) random number generator distributed by
##' George Marsaglia in <https://groups.google.com/d/topic/comp.lang.fortran/qFv18ql_WlU>
##' \item \code{5}: Knuth's 2002 algorithm (Knuth, 2002). FORTRAN code adapted
##' from <https://www-cs-faculty.stanford.edu/~knuth/programs/frng.f>
##' \item \code{6}: L'Ecuyer's 1999 algorithm - 64-bits (L'Ecuyer, 1999).
##' FORTRAN code adapted from <https://jblevins.org/mirror/amiller/lfsr258.f90>
##' }
##' For more details on these algorithms see \code{\link[base]{Random}} and references
##' therein.
##'
##' \item \code{debug} logical, if \code{TRUE} the output from FORTRAN is returned (for
##' debugging purposes). Default is \code{FALSE} for all models.
##'}
##'
##'
##' @param model character; one of \code{"BARFIMA"}, \code{"GARFIMA"},
##' \code{"KARFIMA"}, \code{"UWARFIMA"}, \code{"BARC"}. The short memory
##' versions \code{"BARMA"}, \code{"GARMA"}, \code{"KARMA"} and \code{"UWARMA"}
##' are also accepted, in which case the long memory parameter \code{d} is
##' set to zero.
##'
##' @param complete logical; if \code{FALSE} the function returns only the simulated
##' time series yt, otherwise, additional time series are provided.
##' Default is \code{FALSE} for all models.
##'
##' @param ... further arguments passed to the functions, according to
##' the model selected in the argument \code{model}. See \sQuote{Common Arguments}
##'
##' @return
##' The function \code{btsr.sim} returns the simulated time series yt by default.
##' If \code{complete = TRUE}, a list with the following components
##' is returned instead:
##' \itemize{
##' \item \code{model}: character; one of \code{"BARFIMA"}, \code{"GARFIMA"},
##' \code{"KARFIMA"}, \code{"BARC"}. (same as the input argument)
##'
##' \item \code{yt}: the simulated time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{debug}: the output from FORTRAN (if requested).
##' }
##'
##' @seealso
##' \code{\link{BARFIMA.sim}}, \code{\link{GARFIMA.sim}},
##' \code{\link{KARFIMA.sim}}, \code{\link{BARC.sim}}
##'
##' @references
##' Knuth, D. E. (2002). The Art of Computer Programming. Volume 2,
##' third edition, ninth printing.
##'
##' L'Ecuyer, P. (1999). Good parameters and implementations for combined
##' multiple recursive random number generators. Operations Research, 47,
##' 159-164. <doi:10.1287/opre.47.1.159.>
##'
##' Matsumoto, M. and Nishimura, T. (1998). Mersenne Twister: A 623-dimensionally
##' equidistributed uniform pseudo-random number generator, ACM Transactions on
##' Modeling and Computer Simulation, 8, 3-30.
##'
##' Wichmann, B. A. and Hill, I. D. (1982). Algorithm AS 183: An Efficient
##' and Portable Pseudo-random Number Generator. Applied Statistics, 31, 188-190;
##' Remarks: 34, 198 and 35, 89. <doi:10.2307/2347988.>
##'
##'
##' @examples
##' # Generating a Beta model where mut does not vary with time
##' # yt ~ Beta(a,b), a = mu*nu, b = (1-mu)*nu
##'
##' y <- btsr.sim(model= "BARFIMA", linkg = "linear",
##' n = 1000, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##' hist(y)
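##'
##' # btsr.sim is a wrapper: the call above is equivalent to calling
##' # BARFIMA.sim directly with the same arguments
##' y2 <- BARFIMA.sim(linkg = "linear", n = 1000, seed = 2021,
##'                   coefs = list(alpha = 0.2, nu = 20))
##' identical(y, y2)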
##'
##' @export
##'
##' @md
btsr.sim <- function(model, complete = FALSE, ...){
  temp <- list(...)
  ## for the short memory ("*ARMA") versions, the long memory parameter d is fixed at zero
  cf <- list(d = 0)
  if(!is.null(temp[['coefs']])){
    cf[names(temp$coefs)] <- temp$coefs
    cf$d <- 0
  }
  temp$coefs <- cf
  temp$complete <- complete
  switch(EXPR = model[1],
         BARFIMA = BARFIMA.sim(complete = complete, ...),
         GARFIMA = GARFIMA.sim(complete = complete, ...),
         KARFIMA = KARFIMA.sim(complete = complete, ...),
         UWARFIMA = UWARFIMA.sim(complete = complete, ...),
         BARMA = do.call(BARFIMA.sim, temp),
         GARMA = do.call(GARFIMA.sim, temp),
         KARMA = do.call(KARFIMA.sim, temp),
         UWARMA = do.call(UWARFIMA.sim, temp),
         BARC = BARC.sim(complete = complete, ...),
         "not available")
}
##' @rdname btsr.functions
##' @order 3
##'
##' @details
##'
##' The function \code{btsr.extract} allows the user to extract the
##' components \eqn{y_t}, \eqn{\mu_t}, \eqn{\eta_t = g(\mu_t)}, \eqn{r_t},
##' the log-likelihood, and the vectors and matrices used to calculate the
##' score vector and the information matrix associated to a given set of parameters.
##'
##' This function can be used by any user to create an objective function
##' that can be passed to optimization functions not available in BTSR Package.
##' At this point, there is no other use for which this function was intended.
##'
##' # Common Arguments
##'
##' ## Extracting Function
##'
##' Common arguments passed through \code{"..."} in \code{btsr.extract} are:
##'
##'\itemize{
##' \item \code{yt} a numeric vector with the observed time series. If missing, an error
##' message is issued.
##'
##' \item \code{xreg} optionally, a vector or matrix with the regressor's values.
##' Default is \code{NULL} for all models.
##'
##' \item \code{nnew} optionally, the number of out-of-sample predicted values required.
##' Default is 0 for all models.
##'
##' \item \code{xnew} a vector or matrix, with \code{nnew} observations of the
##' regressors observed/predicted values corresponding to the period of
##' out-of-sample forecast. If \code{xreg = NULL}, \code{xnew} is ignored.
##'
##' \item \code{p} a non-negative integer. The order of AR polynomial.
##' If missing, the value of \code{p} is calculated from length(coefs$phi)
##' and length(fixed.values$phi).
##'
##' \item \code{q,r} a non-negative integer. The order of the MA polynomial and
##' the size of the vector of parameters for the map function (BARC only).
##' If missing, the argument is calculated based on length(coefs$theta)
##' and length(fixed.values$theta).
##'
##' \item \code{coefs} a list with the coefficients of the model. Each model has
##' its default. Passing both, \code{coefs} and \code{fixed.values} empty
##' will result in an error. The arguments in this list are
##' \itemize{
##' \item \code{alpha} a numeric value corresponding to the intercept.
##' If missing, will be set as zero.
##'
##' \item \code{beta} a vector of coefficients corresponding to the
##' regressors in \code{xreg}. If \code{xreg} is provided but \code{beta} is
##' missing in the \code{coefs} list, an error message is issued.
##'
##' \item \code{phi} a vector with the non-fixed values in the vector of
##' AR coefficients.
##'
##' \item \code{nu} the dispersion parameter. If missing, an error message is issued.
##'
##' \item \code{theta, d, u0} model specific arguments. See the documentation
##' corresponding to each model.
##' }
##'
##' \item \code{lags} optionally, a list with the lags that the values in \code{coefs} correspond to.
##' The names of the entries in this list must match the ones in \code{coefs}.
##' For one dimensional coefficients, the \code{lag} is obviously always 1 and can
##' be suppressed. An empty list indicates that either the argument \code{fixed.lags}
##' is provided or all lags must be used.
##'
##' \item \code{fixed.values} optionally, a list with the values of the coefficients
##' that are fixed. By default, if a given vector (such as the vector of AR coefficients)
##' has fixed values and the corresponding entry in this list is empty, the fixed values
##' are set as zero. The names of the entries in this list must match the ones
##' in \code{coefs}.
##'
##' \item \code{fixed.lags} optionally, a list with the lags that the fixed values
##' in \code{fixed.values} correspond to. The names of the entries in this list must
##' match the ones in \code{fixed.values}. For one dimensional coefficients, the
##' \code{lag} is obviously always 1 and can be suppressed. If an empty list is provided
##' and the model has fixed lags, the argument \code{lags} is used as reference.
##'
##' \item \code{y.start} optionally, an initial value for yt (to be used
##' in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{g_2(y_t) = 0}, for \eqn{t < 1}.
##'
##' \item \code{xreg.start} optionally, a vector of initial values for xreg
##' (to be used in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{X_t = 0}, for \eqn{t < 1}. If \code{xregar = FALSE} this argument
##' is ignored.
##'
##' \item \code{xregar} logical; indicates if xreg is to be included in the
##' AR part of the model. See \sQuote{The BTSR structure}. Default is \code{TRUE}.
##'
##' \item \code{error.scale} the scale for the error term. See also \sQuote{The BTSR structure}.
##' Each model has its default.
##'
##' \item \code{inf} the truncation point for infinite sums. Default is 1,000.
##' BARC models do not have this argument.
##'
##' \item \code{m} a non-negative integer indicating the starting time for the sum of the
##' partial log-likelihoods, that is \eqn{\ell = \sum_{t = m+1}^n \ell_t}. Default is
##' 0.
##'
##' \item \code{linkg} character or a two character vector indicating which
##' links must be used in the model. See \sQuote{The BTSR structure}.
##' If only one value is provided, the same link is used for \eqn{mu_t} and
##' for \eqn{y_t} in the AR part of the model. Each model has its default.
##'
##' \item \code{llk} logical, if \code{TRUE} the value of the log-likelihood function
##' is returned. Default is \code{TRUE} for all models.
##'
##' \item \code{sco} logical, if \code{TRUE} the score vector is returned.
##' Default is \code{FALSE} for all models.
##'
##' \item \code{info} logical, if \code{TRUE} the information matrix is returned.
##' Default is \code{FALSE} for all models.
##'
##' \item \code{extra} logical, if \code{TRUE} the matrices and vectors used to
##' calculate the score vector and the information matrix are returned.
##' Default is \code{FALSE} for all models.
##'
##' \item \code{debug} logical, if \code{TRUE} the output from FORTRAN is returned (for
##' debugging purposes). Default is \code{FALSE} for all models.
##'}
##'
##'
##' @param model character; one of \code{"BARFIMA"}, \code{"GARFIMA"},
##' \code{"KARFIMA"}, \code{"BARC"}.
##' @param ... further arguments passed to the functions, according to
##' the model selected in the argument \code{model}. See \sQuote{Common Arguments}
##'
##' @return
##' The function \code{btsr.extract} returns a list with the following components.
##' Each particular model can have additional components in this list.
##'
##' \itemize{
##' \item \code{model}: character; one of \code{"BARFIMA"}, \code{"GARFIMA"},
##' \code{"KARFIMA"}, \code{"BARC"}. (same as the input argument)
##'
##' \item \code{coefs}: the coefficients of the model passed through the
##' \code{coefs} argument
##'
##' \item \code{yt}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{forecast}: the out-of-sample forecast (if requested).
##'
##' \item \code{xnew}: the observations of the regressors observed/predicted
##' values corresponding to the period of out-of-sample forecast.
##' Only included if \code{xreg} is not \code{NULL} and \code{nnew > 0}.
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{sco}: the score vector (if requested)
##'
##' \item \code{info}: the information matrix (if requested)
##'
##' \item \code{Drho}, \code{T}, \code{E}, \code{h}: additional matrices and vectors
##' used to calculate the score vector and the information matrix. (if requested)
##'
##' \item \code{yt.new}: the out-of-sample forecast (if requested)
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##'
##' }
##'
##' @seealso
##' \code{\link{BARFIMA.extract}}, \code{\link{GARFIMA.extract}},
##' \code{\link{KARFIMA.extract}}, \code{\link{BARC.extract}}
##'
##' @examples
##' #------------------------------------------------------------
##' # Generating a Beta model where mut does not vary with time
##' # yt ~ Beta(a,b), a = mu*nu, b = (1-mu)*nu
##' #------------------------------------------------------------
##'
##' m1 <- btsr.sim(model= "BARFIMA", linkg = "linear",
##' n = 100, seed = 2021, complete = TRUE,
##' coefs = list(alpha = 0.2, nu = 20))
##'
##' #------------------------------------------------------------
##' # Extracting the conditional time series given yt and
##' # a set of parameters
##' #------------------------------------------------------------
##'
##' # Assuming that all coefficients are non-fixed
##' e1 = btsr.extract(model = "BARFIMA", yt = m1$yt,
##' coefs = list(alpha = 0.2, nu = 20),
##' link = "linear", llk = TRUE,
##' sco = TRUE, info = TRUE)
##'
##' # Assuming that all coefficients are fixed
##' e2 = btsr.extract(model = "BARFIMA", yt = m1$yt,
##' fixed.values = list(alpha = 0.2, nu = 20),
##' link = "linear", llk = TRUE,
##' sco = TRUE, info = TRUE)
##'
##' # Assuming at least one fixed coefficient and one non-fixed
##' e3 = btsr.extract(model = "BARFIMA", yt = m1$yt,
##' coefs = list(alpha = 0.2),
##' fixed.values = list(nu = 20),
##' link = "linear", llk = TRUE,
##' sco = TRUE, info = TRUE)
##' e4 = btsr.extract(model = "BARFIMA", yt = m1$yt,
##' coefs = list(nu = 20),
##' fixed.values = list(alpha = 0.2),
##' link = "linear", llk = TRUE,
##' sco = TRUE, info = TRUE)
##'
##' #----------------------------------------------------
##' # comparing the simulated and the extracted values
##' #----------------------------------------------------
##' cbind(head(m1$mut), head(e1$mut), head(e2$mut), head(e3$mut), head(e4$mut))
##'
##' #----------------------------------------------------
##' # comparing the log-likelihood values obtained (must all be equal)
##' #----------------------------------------------------
##' c(e1$sll, e2$sll, e3$sll, e4$sll)
##'
##' #----------------------------------------------------
##' # comparing the score vectors:
##' #----------------------------------------------------
##' # - e1 must have 2 values: dl/dmu and dl/dnu
##' # - e2 must be empty
##' # - e3 and e4 must have one value corresponding
##' # to the non-fixed coefficient
##' #----------------------------------------------------
##' e1$score
##' e2$score
##' e3$score
##' e4$score
##'
##' #----------------------------------------------------
##' # comparing the information matrices.
##' #----------------------------------------------------
##' # - e1 must be a 2x2 matrix
##' # - e2 must be empty
##' # - e3 and e4 must have one value corresponding
##' # to the non-fixed coefficient
##' #----------------------------------------------------
##' e1$info.Matrix
##' e2$info.Matrix
##' e3$info.Matrix
##' e4$info.Matrix
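##'
##' #----------------------------------------------------
##' # A sketch (not part of the original examples):
##' # requesting out-of-sample forecasts from btsr.extract.
##' # The AR coefficient below is illustrative.
##' #----------------------------------------------------
##' \dontrun{
##' m2 <- btsr.sim(model = "BARFIMA", linkg = "linear", n = 100, seed = 2021,
##'                coefs = list(alpha = 0.1, phi = 0.3, nu = 20))
##' e5 <- btsr.extract(model = "BARFIMA", yt = m2, linkg = "linear",
##'                    coefs = list(alpha = 0.1, phi = 0.3, nu = 20),
##'                    nnew = 5, llk = TRUE)
##' e5$yt.new # the 5 out-of-sample forecasts
##' }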
##'
##' @export
##'
##' @md
btsr.extract <- function(model,...){
  temp <- list(...)
  # for the *ARMA variants the long memory parameter d is fixed at zero
  fv <- list(d = 0)
  if(!is.null(temp[['fixed.values']])){
    fv[names(temp$fixed.values)] <- temp$fixed.values
    temp$fixed.values <- NULL  # avoid matching 'fixed.values' twice below
  }
  switch(EXPR = model[1],
         BARFIMA = BARFIMA.extract(...),
         GARFIMA = GARFIMA.extract(...),
         KARFIMA = KARFIMA.extract(...),
         UWARFIMA = UWARFIMA.extract(...),
         BARMA = do.call(BARFIMA.extract, c(list(fixed.values = fv), temp)),
         GARMA = do.call(GARFIMA.extract, c(list(fixed.values = fv), temp)),
         KARMA = do.call(KARFIMA.extract, c(list(fixed.values = fv), temp)),
         UWARMA = do.call(UWARFIMA.extract, c(list(fixed.values = fv), temp)),
         BARC = BARC.extract(...),
         "not available")
}
##' @rdname btsr.functions
##' @order 4
##'
##' @details
##' The function \code{btsr.fit} fits a BTSR model to a given univariate time
##' series. For now, available optimization algorithms are \code{"L-BFGS-B"} and
##' \code{"Nelder-Mead"}. Both methods accept bounds for the parameters. For
##' \code{"Nelder-Mead"}, bounds are set via parameter transformation.
##'
##' # Common Arguments
##'
##' ## Fitting Function
##'
##' Common arguments passed through \code{"..."} in \code{btsr.fit} are the same as
##' in \code{\link{btsr.extract}} plus the following:
##'
##'\itemize{
##'
##' \item \code{d} logical, if \code{TRUE}, the parameter \code{d} is included
##' in the model either as fixed or non-fixed. If \code{d = FALSE} the value is
##' fixed as 0. The default is \code{TRUE} for all models, except BARC, which
##' does not have this parameter.
##'
##' \item \code{start} a list with the starting values for the non-fixed coefficients
##' of the model. If an empty list is provided, the function \code{\link{coefs.start}}
##' is used to obtain starting values for the parameters.
##'
##' \item \code{ignore.start} logical, if starting values are not provided, the
##' function uses the default values and \code{ignore.start} is ignored.
##' In case starting values are provided and \code{ignore.start = TRUE}, those
##' starting values are ignored and recalculated. The default is \code{FALSE}.
##'
##' \item \code{lower, upper} optionally, list with the lower and upper bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters are unbounded.
##' Only the bounds for bounded parameters need to be specified.
##'
##' \item \code{control} a list with configurations to be passed to the
##' optimization subroutines. Missing arguments will receive default values. See
##' \code{\link{fit.control}}.
##'
##' \item \code{report} logical, if \code{TRUE} the summary from model estimation is
##' printed and \code{info} is automatically set to \code{TRUE}. Default is \code{TRUE}.
##'}
##'
##'
##' @param model character; one of \code{"BARFIMA"}, \code{"GARFIMA"},
##' \code{"KARFIMA"}, \code{"UWARFIMA"}, \code{"BARC"}. The corresponding
##' \code{"*ARMA"} variants (e.g. \code{"BARMA"}) are also accepted, in which
##' case the long memory parameter \code{d} is fixed at zero.
##' @param ... further arguments passed to the functions, according to
##' the model selected in the argument \code{model}. See \sQuote{Common Arguments}
##'
##' @return
##' The function \code{btsr.fit} returns a list with the following components.
##' Each particular model can have additional components in this list.
##'
##' \itemize{
##' \item \code{model}: character; one of \code{"BARFIMA"}, \code{"GARFIMA"},
##' \code{"KARFIMA"}, \code{"BARC"}. (same as the input argument)
##'
##' \item \code{convergence}: An integer code. 0 indicates successful completion.
##' The error codes depend on the algorithm used.
##'
##' \item \code{message}: A character string giving any additional information
##' returned by the optimizer, or NULL.
##'
##' \item \code{counts}: an integer giving the number of function evaluations.
##'
##' \item \code{control}: a list of control parameters.
##'
##' \item \code{start}: the starting values used by the algorithm.
##'
##' \item \code{coefficients}: The best set of parameters found.
##'
##' \item \code{n}: the sample size used for estimation.
##'
##' \item \code{series}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{fitted.values}: the conditional mean, which corresponds to
##' the in-sample forecast, also denoted fitted values
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error.scale}: the scale for the error term.
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{residuals}: the observed minus the fitted values. The same as
##' the \code{error} term if \code{error.scale = 0}.
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{info.Matrix}: the information matrix (if requested)
##'
##' \item \code{configs}: a list with the configurations adopted to fit the model.
##' This information is used by the prediction function.
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##'
##' \item \code{call}: a string with the description of the fitted model.
##'
##' }
##'
##' @seealso
##' \code{\link{BARFIMA.fit}}, \code{\link{GARFIMA.fit}},
##' \code{\link{KARFIMA.fit}}, \code{\link{BARC.fit}}
##'
##' @examples
##'
##' # Generating a Beta model where mut does not vary with time
##' # yt ~ Beta(a,b), a = mu*nu, b = (1-mu)*nu
##'
##' y <- btsr.sim(model= "BARFIMA", linkg = "linear",
##' n = 100, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##'
##' # fitting the model
##' f <- btsr.fit(model = "BARFIMA", yt = y, report = TRUE,
##' start = list(alpha = 0.5, nu = 10),
##' linkg = "linear", d = FALSE)
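##'
##' # A sketch (not part of the original examples): the same fit with
##' # explicit bounds for nu, as described under 'Common Arguments'.
##' # The bound values are illustrative.
##' \dontrun{
##' f2 <- btsr.fit(model = "BARFIMA", yt = y, report = TRUE,
##'                start = list(alpha = 0.5, nu = 10),
##'                lower = list(nu = 1), upper = list(nu = 100),
##'                linkg = "linear", d = FALSE)
##' }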
##'
##' @export
##'
##' @md
btsr.fit <- function(model,...){
switch(EXPR = model[1],
BARFIMA = BARFIMA.fit(...),
GARFIMA = GARFIMA.fit(...),
KARFIMA = KARFIMA.fit(...),
UWARFIMA = UWARFIMA.fit(...),
BARMA = BARFIMA.fit(d = FALSE,...),
GARMA = GARFIMA.fit(d = FALSE,...),
KARMA = KARFIMA.fit(d = FALSE,...),
UWARMA = UWARFIMA.fit(d = FALSE,...),
BARC = BARC.fit(...),
"not available")
}
##----------------------------------------------------------
## KARFIMA MODELS
##----------------------------------------------------------
##' @title
##' Functions to simulate, extract components and fit KARFIMA models
##'
##' @name KARFIMA.functions
##' @order 1
##'
##' @description
##' These functions can be used to simulate, extract components
##' and fit any model of the class \code{karfima}. A model with
##' class \code{karfima} is a special case of a model with class \code{btsr} .
##' See \sQuote{The BTSR structure} in \code{\link{btsr.functions}} for
##' more details on the general structure.
##'
##' The KARMA model, the Kumaraswamy regression and an i.i.d. sample
##' from a Kumaraswamy distribution can be obtained as special cases.
##' See \sQuote{Details}.
##'
##' @details
##' The KARMA model and the Kumaraswamy regression can be
##' obtained as special cases of the KARFIMA model.
##'
##' \itemize{
##' \item KARFIMA: is obtained by default.
##'
##' \item KARMA: is obtained by setting \code{coefs$d = 0} and \code{d = FALSE}.
##'
##' \item Kumaraswamy regression: is obtained by setting \code{p = 0},
##' \code{q = 0}, \code{coefs$d = 0} and \code{d = FALSE}. The \code{error.scale}
##' is irrelevant. The second argument in \code{linkg} is irrelevant.
##'
##' \item an i.i.d. sample from a Kumaraswamy distribution
##' is obtained by setting \code{linkg = "linear"}, \code{p = 0}, \code{q = 0},
##' \code{coefs$d = 0}, \code{d = FALSE}. (\code{error.scale} and
##' \code{xregar} are irrelevant)
##'}
##'
##' @md
NULL
#> NULL
##' @rdname KARFIMA.functions
##' @order 2
##'
##' @details
##' The function \code{KARFIMA.sim} generates a random sample from a KARFIMA(p,d,q)
##' model.
##'
##' @param n a strictly positive integer. The sample size of yt (after burn-in).
##' Default is 1.
##'
##' @param burn a non-negative integer. The length of the "burn-in" period. Default is 0.
##'
##' @param xreg optionally, a vector or matrix of external regressors.
##' For simulation purposes, the length of xreg must be \code{n+burn}.
##' Default is \code{NULL}. For extraction or fitting purposes, the length
##' of \code{xreg} must be the same as the length of the observed time series
##' \eqn{y_t}.
##'
##' @param rho a number strictly between 0 and 1 indicating the quantile
##' to be modeled, so that \eqn{\mu_t} is the conditional \eqn{\rho}-quantile.
##' Default is 0.5 (the conditional median).
##'
##' @param y.lower the lower limit for the density support. Default is 0.
##'
##' @param y.upper the upper limit for the density support. Default is 1.
##'
##' @param coefs a list with the coefficients of the model. An empty list will result
##' in an error. The arguments that can be passed through this list are:
##' \itemize{
##' \item \code{alpha} optionally, a numeric value corresponding to the intercept.
##' If the argument is missing, it will be treated as zero. See
##' \sQuote{The BTSR structure} in \code{\link{btsr.functions}}.
##'
##' \item \code{beta} optionally, a vector of coefficients corresponding to the
##' regressors in \code{xreg}. If \code{xreg} is provided but \code{beta} is
##' missing in the \code{coefs} list, an error message is issued.
##'
##' \item \code{phi} optionally, for the simulation function this must be a vector
##' of size \eqn{p}, corresponding to the autoregressive coefficients
##' (including the ones that are zero), where \eqn{p} is the AR order. For
##' the extraction and fitting functions, this is a vector with the non-fixed
##' values in the vector of autoregressive coefficients.
##'
##' \item \code{theta} optionally, for the simulation function this must be a vector
##' of size \eqn{q}, corresponding to the moving average coefficients
##' (including the ones that are zero), where \eqn{q} is the MA order. For
##' the extraction and fitting functions, this is a vector with the non-fixed
##' values in the vector of moving average coefficients.
##'
##' \item \code{d} optionally, a numeric value corresponding to the long memory
##' parameter. If the argument is missing, it will be treated as zero.
##'
##' \item \code{nu} the dispersion parameter. If missing, an error message is issued.
##'
##' }
##'
##' @param y.start optionally, an initial value for yt (to be used
##' in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{g_2(y_t) = 0}, for \eqn{t < 1}.
##'
##' @param xreg.start optionally, a vector of initial values for xreg
##' (to be used in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{X_t = 0}, for \eqn{t < 1}. If \code{xregar = FALSE} this argument
##' is ignored.
##'
##' @param xregar logical; indicates if xreg is to be included in the
##' AR part of the model. See \sQuote{The BTSR structure}. Default is \code{TRUE}.
##'
##' @param error.scale the scale for the error term. See \sQuote{The BTSR structure}
##' in \code{\link{btsr.functions}}. Default is 1.
##'
##' @param complete logical; if \code{FALSE} the function returns only the simulated
##' time series yt, otherwise, additional time series are provided.
##' Default is \code{FALSE}
##'
##' @param inf the truncation point for infinite sums. Default is 1,000.
##' In practice, the Fortran subroutine uses \eqn{inf = q}, if \eqn{d = 0}.
##'
##' @param linkg character or a two character vector indicating which
##' links must be used in the model. See \sQuote{The BTSR structure}
##' in \code{\link{btsr.functions}} for details and \code{\link{link.btsr}}
##' for valid links. If only one value is provided, the same link is used
##' for \eqn{\mu_t} and for \eqn{y_t} in the AR part of the model.
##' Default is \code{c("logit", "logit")}. For the linear link, the constant
##' will always be 1.
##'
##' @param seed optionally, an integer which gives the value of the fixed
##' seed to be used by the random number generator. If missing, a random integer
##' is chosen uniformly from 1,000 to 10,000.
##'
##' @param rngtype optionally, an integer indicating which random number generator
##' is to be used. Default is 2: the Mersenne Twister algorithm. See \sQuote{Common Arguments}
##' in \code{\link{btsr.functions}}.
##'
##' @param debug logical, if \code{TRUE} the output from FORTRAN is returned (for
##' debugging purposes). Default is \code{FALSE} for all models.
##'
##' @return
##' The function \code{KARFIMA.sim} returns the simulated time series yt by default.
##' If \code{complete = TRUE}, a list with the following components
##' is returned instead:
##' \itemize{
##' \item \code{model}: string with the text \code{"KARFIMA"}
##'
##' \item \code{yt}: the simulated time series
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{debug}: the output from FORTRAN (if requested).
##'
##' }
##'
##' @seealso
##' \code{\link{btsr.sim}}
##'
##' @examples
##' # Generating a Kumaraswamy model where mut does not vary with time
##' # For linear link, alpha = mu
##' #
##' # Warning:
##' # |log(1-rho)| >> |log(1 - mu^nu)|
##' # may cause numerical instability.
##'
##' y <- KARFIMA.sim(linkg = "linear", n = 1000, seed = 2021,
##' coefs = list(alpha = 0.7, nu = 2))
##' hist(y)
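##'
##' # A sketch (not part of the original examples): adding ARMA-type
##' # dynamics with the logit link. Coefficient values are illustrative.
##' \dontrun{
##' y2 <- KARFIMA.sim(linkg = "logit", n = 100, seed = 2021,
##'                   coefs = list(alpha = 0.5, phi = 0.3, theta = 0.2,
##'                                d = 0, nu = 2))
##' plot.ts(y2)
##' }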
##'
##' @export
##'
##' @md
KARFIMA.sim <- function(n = 1, burn = 0, xreg = NULL, rho = 0.5,
y.lower = 0, y.upper = 1,
coefs = list(alpha = 0, beta = NULL, phi = NULL,
theta = NULL, d = 0, nu = 20),
y.start = NULL, xreg.start = NULL,
xregar = TRUE, error.scale = 1, complete = FALSE,
inf = 1000, linkg = c("logit", "logit"), seed = NULL,
rngtype = 2, debug = FALSE){
##----------------------------------
## checking required parameters:
##----------------------------------
if(is.null(coefs)) stop("coefs missing with no default")
if(!"list" %in% class(coefs)) stop("coefs must be a list")
if(is.null(rho)){
rho = 0.5
warning("rho is missing. Assuming rho = 0.5", immediate. = TRUE)
}
if(is.null(y.lower)) y.lower = 0
if(is.null(y.upper)) y.upper = 1
##----------------------------------
## checking configurations:
##----------------------------------
cf <- .sim.configs(model = "KARFIMA", xreg = xreg,
y.start = y.start, xreg.start = xreg.start,
linkg = linkg, n = n, burn = burn,
coefs = coefs, xregar = xregar,
error.scale = error.scale, seed = seed,
rngtype = rngtype, y.default = y.lower - 1)
cf$nu = c(cf$nu, rho, y.lower, y.upper)
out <- .btsr.sim(model = "KARFIMA", inf = inf, configs = cf,
complete = complete, debug = debug)
class(out) <- c(class(out), "karfima")
invisible(out)
}
##' @rdname KARFIMA.functions
##' @order 3
##'
##' @details
##'
##' The function \code{KARFIMA.extract} allows the user to extract the
##' components \eqn{y_t}, \eqn{\mu_t}, \eqn{\eta_t = g(\mu_t)}, \eqn{r_t},
##' the log-likelihood, and the vectors and matrices used to calculate the
##' score vector and the information matrix associated to a given set of parameters.
##'
##' This function can be used by any user to create an objective function
##' that can be passed to optimization algorithms not available in the BTSR Package.
##'
##' @param yt a numeric vector with the observed time series. If missing, an error
##' message is issued.
##'
##' @param nnew optionally, the number of out-of sample predicted values required.
##' Default is 0.
##'
##' @param xnew a vector or matrix, with \code{nnew} observations of the
##' regressors observed/predicted values corresponding to the period of
##' out-of-sample forecast. If \code{xreg = NULL}, \code{xnew} is ignored.
##'
##' @param p a non-negative integer. The order of AR polynomial.
##' If missing, the value of \code{p} is calculated from length(coefs$phi)
##' and length(fixed.values$phi). For fitting, the default is 0.
##'
##' @param q a non-negative integer. The order of the MA polynomial.
##' If missing, the value of \code{q} is calculated from length(coefs$theta)
##' and length(fixed.values$theta). For fitting, the default is 0.
##'
##' @param lags optionally, a list with the lags that the values in \code{coefs} correspond to.
##' The names of the entries in this list must match the ones in \code{coefs}.
##' For one dimensional coefficients, the \code{lag} is obviously always 1 and can
##' be suppressed. An empty list indicates that either the argument \code{fixed.lags}
##' is provided or all lags must be used.
##'
##' @param fixed.values optionally, a list with the values of the coefficients
##' that are fixed. By default, if a given vector (such as the vector of AR coefficients)
##' has fixed values and the corresponding entry in this list is empty, the fixed values
##' are set as zero. The names of the entries in this list must match the ones
##' in \code{coefs}.
##'
##' @param fixed.lags optionally, a list with the lags that the fixed values
##' in \code{fixed.values} correspond to. The names of the entries in this list must
##' match the ones in \code{fixed.values}. For one dimensional coefficients, the
##' \code{lag} is obviously always 1 and can be suppressed. If an empty list is provided
##' and the model has fixed lags, the argument \code{lags} is used as reference.
##'
##' @param m a non-negative integer indicating the starting time for the sum of the
##' partial log-likelihoods, that is \eqn{\ell = \sum_{t = m+1}^n \ell_t}. Default is
##' 0.
##'
##' @param llk logical, if \code{TRUE} the value of the log-likelihood function
##' is returned. Default is \code{TRUE}.
##'
##' @param sco logical, if \code{TRUE} the score vector is returned.
##' Default is \code{FALSE}.
##'
##' @param info logical, if \code{TRUE} the information matrix is returned.
##' Default is \code{FALSE}. For the fitting function, \code{info} is automatically
##' set to \code{TRUE} when \code{report = TRUE}.
##'
##' @param extra logical, if \code{TRUE} the matrices and vectors used to
##' calculate the score vector and the information matrix are returned.
##' Default is \code{FALSE}.
##'
##' @return
##' The function \code{KARFIMA.extract} returns a list with the following components.
##'
##' \itemize{
##' \item \code{model}: string with the text \code{"KARFIMA"}
##'
##' \item \code{coefs}: the coefficients of the model passed through the
##' \code{coefs} argument
##'
##' \item \code{yt}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{sco}: the score vector (if requested)
##'
##' \item \code{info}: the information matrix (if requested)
##'
##' \item \code{Drho}, \code{T}, \code{E}, \code{h}: additional matrices and vectors
##' used to calculate the score vector and the information matrix. (if requested)
##'
##' \item \code{yt.new}: the out-of-sample forecast (if requested)
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##' }
##'
##' @seealso
##' \code{\link{btsr.extract}}
##'
##' @examples
##' #------------------------------------------------------------
##' # Generating a Kumaraswamy model where mut does not vary with time
##' # For linear link, alpha = mu
##' #
##' # Warning:
##' # |log(1-rho)| >> |log(1 - mu^nu)|
##' # may cause numerical instability.
##' #------------------------------------------------------------
##'
##' m1 <- KARFIMA.sim(linkg = "linear",n = 100,
##' complete = TRUE, seed = 2021,
##' coefs = list(alpha = 0.7, nu = 2))
##'
##' #------------------------------------------------------------
##' # Extracting the conditional time series given yt and
##' # a set of parameters
##' #------------------------------------------------------------
##'
##' # Assuming that all coefficients are non-fixed
##' e1 = KARFIMA.extract(yt = m1$yt, coefs = list(alpha = 0.7, nu = 2),
##' link = "linear", llk = TRUE,
##' sco = TRUE, info = TRUE)
##'
##' #----------------------------------------------------
##' # comparing the simulated and the extracted values
##' #----------------------------------------------------
##' cbind(head(m1$mut), head(e1$mut))
##'
##' #---------------------------------------------------------
##' # the log-likelihood, score vector and information matrix
##' #---------------------------------------------------------
##' e1$sll
##' e1$score
##' e1$info.Matrix
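##'
##' #----------------------------------------------------
##' # A sketch (not part of the original examples):
##' # treating alpha as fixed and nu as non-fixed
##' #----------------------------------------------------
##' \dontrun{
##' e2 = KARFIMA.extract(yt = m1$yt, coefs = list(nu = 2),
##'                      fixed.values = list(alpha = 0.7),
##'                      link = "linear", llk = TRUE, sco = TRUE)
##' e2$score # one entry, corresponding to nu
##' }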
##'
##' @export
##' @md
KARFIMA.extract <- function(yt, xreg = NULL, nnew = 0, xnew = NULL, p, q,
rho = 0.5, y.lower = 0, y.upper = 1,
coefs = list(), lags = list(),
fixed.values = list(), fixed.lags = list(),
y.start = NULL, xreg.start = NULL,
xregar = TRUE, error.scale = 1, inf = 1000, m = 0,
linkg = c("logit","logit"), llk = TRUE, sco = FALSE,
info = FALSE, extra = FALSE, debug = FALSE){
if(is.null(coefs) & is.null(fixed.values))
stop("Please, provide a list of coefficients")
if(!is.null(coefs)){
if(! "list" %in% class(coefs)) stop("coefs must be a list")}
if(!is.null(fixed.values)){
if(! "list" %in% class(fixed.values)) stop("fixed.values must be a list")}
else{ fixed.values <- list()}
if(is.null(rho)) stop("rho is missing. NULL is not accepted")
if(is.null(y.lower)) stop("y.lower is missing. NULL is not accepted")
if(is.null(y.upper)) stop("y.upper is missing. NULL is not accepted")
if(missing(p)) p = length(coefs$phi) + length(fixed.values$phi)
if(missing(q)) q = length(coefs$theta) + length(fixed.values$theta)
cf <- .extract.configs(model = "KARFIMA", yt = yt, y.start = y.start,
y.lower = y.lower, y.upper = y.upper,
openIC = c(TRUE, TRUE), xreg = xreg, xnew = xnew,
nnew = nnew, xreg.start = xreg.start, linkg = linkg,
p = p, q = q, inf = inf, m = m, xregar = xregar,
error.scale = error.scale, coefs = coefs,
lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, llk = llk, sco = sco,
info = info, extra = extra)
# adding the extra parameters to pass to the extracting function
# via "pdist" parameter
cf$nu$fvalues = c(cf$nu$fvalues, rho, y.lower, y.upper)
out <- .btsr.extract(model = "KARFIMA", yt = yt, configs = cf, debug = debug)
class(out) <- c(class(out), "karfima")
invisible(out)
}
##' @rdname KARFIMA.functions
##' @order 4
##'
##' @details
##' The function \code{KARFIMA.fit} fits a KARFIMA model to a given univariate time
##' series. For now, available optimization algorithms are \code{"L-BFGS-B"} and
##' \code{"Nelder-Mead"}. Both methods accept bounds for the parameters. For
##' \code{"Nelder-Mead"}, bounds are set via parameter transformation.
##'
##'
##' @param d logical, if \code{TRUE}, the parameter \code{d} is included
##' in the model either as fixed or non-fixed. If \code{d = FALSE} the value is
##' fixed as 0. The default is \code{TRUE}.
##'
##' @param start a list with the starting values for the non-fixed coefficients
##' of the model. If an empty list is provided, the function \code{\link{coefs.start}}
##' is used to obtain starting values for the parameters.
##'
##' @param ignore.start logical, if starting values are not provided, the
##' function uses the default values and \code{ignore.start} is ignored.
##' In case starting values are provided and \code{ignore.start = TRUE}, those
##' starting values are ignored and recalculated. The default is \code{FALSE}.
##'
##' @param lower optionally, list with the lower bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters have no lower
##' bound except for \code{nu}, for which the default is 0. Only the bounds for
##' bounded parameters need to be specified.
##'
##' @param upper optionally, list with the upper bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters have no upper
##' bound. Only the bounds for bounded parameters need to be specified.
##'
##' @param control a list with configurations to be passed to the
##' optimization subroutines. Missing arguments will receive default values. See
##' \code{\link{fit.control}}.
##'
##' @param report logical, if \code{TRUE} the summary from model estimation is
##' printed and \code{info} is automatically set to \code{TRUE}. Default is \code{TRUE}.
##'
##' @param ... further arguments passed to the internal functions.
##'
##' @return
##' The function \code{KARFIMA.fit} returns a list with the following components.
##' Each particular model can have additional components in this list.
##'
##' \itemize{
##' \item \code{model}: string with the text \code{"KARFIMA"}
##'
##' \item \code{convergence}: An integer code. 0 indicates successful completion.
##' The error codes depend on the algorithm used.
##'
##' \item \code{message}: A character string giving any additional information
##' returned by the optimizer, or NULL.
##'
##' \item \code{counts}: an integer giving the number of function evaluations.
##'
##' \item \code{control}: a list of control parameters.
##'
##' \item \code{start}: the starting values used by the algorithm.
##'
##' \item \code{coefficients}: The best set of parameters found.
##'
##' \item \code{n}: the sample size used for estimation.
##'
##' \item \code{series}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{fitted.values}: the conditional mean, which corresponds to
##' the in-sample forecast, also denoted fitted values
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error.scale}: the scale for the error term.
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{residuals}: the observed minus the fitted values. The same as
##' the \code{error} term if \code{error.scale = 0}.
##'
##' \item \code{forecast}: the out-of-sample forecast (if requested).
##'
##' \item \code{xnew}: the observations of the regressors observed/predicted
##' values corresponding to the period of out-of-sample forecast.
##' Only included if \code{xreg} is not \code{NULL} and \code{nnew > 0}.
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{info.Matrix}: the information matrix (if requested)
##'
##' \item \code{configs}: a list with the configurations adopted to fit the model.
##' This information is used by the prediction function.
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##'
##' \item \code{call}: a string with the description of the fitted model.
##'
##' }
##'
##' @seealso
##' \code{\link{btsr.fit}}
##'
##' @examples
##'
##' # Generating a Kumaraswamy model where mut does not vary with time
##' # For linear link, alpha = mu
##' #
##' # Warning:
##' # |log(1-rho)| >> |log(1 - mu^nu)|
##' # may cause numerical instability.
##'
##' y <- KARFIMA.sim(linkg = "logit", n = 100, seed = 2021,
##' coefs = list(alpha = 0.7, nu = 2))
##'
##' # fitting the model
##' f <- KARFIMA.fit(yt = y, report = TRUE,
##' start = list(alpha = 0.5, nu = 1),
##' linkg = "logit", d = FALSE)
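##'
##' # A sketch (not part of the original examples): fitting the conditional
##' # 0.25-quantile instead of the median (rho = 0.5 is the default).
##' \dontrun{
##' f2 <- KARFIMA.fit(yt = y, rho = 0.25, report = TRUE,
##'                   start = list(alpha = 0.5, nu = 1),
##'                   linkg = "logit", d = FALSE)
##' }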
##'
##' @export
##'
##' @md
KARFIMA.fit <- function(yt, xreg = NULL, nnew = 0, xnew = NULL, p = 0, d = TRUE,
q = 0, m = 0, inf = 1000, rho = 0.5, y.lower = 0, y.upper = 1,
start = list(), ignore.start = FALSE,
lags = list(), fixed.values = list(),
fixed.lags = list(), lower = list(nu = 0),
upper = list(nu = Inf), linkg = c("logit","logit"),
sco = FALSE, info = FALSE, extra = FALSE, xregar = TRUE,
y.start = NULL, xreg.start = NULL,
error.scale = 1, control = list(), report = TRUE,
debug = FALSE,...){
# default values for nu (merge with user provided values)
lw <- list(nu = 0); up <- list(nu = Inf)
lw[names(lower)] <- lower; up[names(upper)] <- upper
lower <- lw; upper <- up
if(is.null(rho)){
rho = 0.5
warning("rho is missing. Assuming rho = 0.5", immediate. = TRUE)
}
if(is.null(y.lower)) y.lower = 0
if(is.null(y.upper)) y.upper = 1
if(report) info = TRUE
cf <- .fit.configs(model = "KARFIMA", yt = yt, y.start = y.start,
y.lower = y.lower, y.upper = y.upper,
openIC = c(TRUE, TRUE),xreg = xreg, xnew = xnew,
nnew = nnew, xreg.start = xreg.start, linkg = linkg,
p = p, d = d, q = q, inf = inf, m = m,
xregar = xregar, error.scale = error.scale,
start = start, ignore.start = ignore.start,
lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, lower = lower,
upper = upper, control = control,
sco = sco, info = info, extra = extra)
if(!is.null(cf$conv)) return(invisible(cf))  # configuration/data problem: return the diagnostic information
cf$nu$fvalues = c(cf$nu$fvalues, rho, y.lower, y.upper)
out <- .btsr.fit(model = "KARFIMA", yt = yt, configs = cf, debug = debug)
out$call <- .fit.print(model = "KARFIMA", p = cf$p, q = cf$q,
d = !(cf$d$nfix == 1 & cf$d$fvalues == 0),
nreg = cf$nreg)
class(out) <- c(class(out), "karfima")
if(report) print(summary(out))
invisible(out)
}
##-------------------------------------------------------------------------
## Similar to make.link()
##-------------------------------------------------------------------------
##' Given the name of a link, this function returns a link function,
##' an inverse link function, the derivative \eqn{d\eta / d\mu}{deta/dmu}
##' and the derivative \eqn{d\mu / d\eta}{dmu/deta}.
##'
##' @title Create a Link for BTSR models
##'
##' @param link character; one of \code{"linear"}, \code{"logit"},
##' \code{"log"}, \code{"loglog"}, \code{"cloglog"}. See \sQuote{Details}.
##'
##' @return An object of class \code{"link-btsr"}, a list with components
##'
##' \item{linkfun}{Link function \code{function(mu)}}
##' \item{linkinv}{Inverse link function \code{function(eta)}}
##' \item{diflink}{Derivative \code{function(mu)} \eqn{d\eta / d\mu}{deta/dmu}}
##' \item{mu.eta}{Derivative \code{function(eta)} \eqn{d\mu / d\eta}{dmu/deta}}
##' \item{name}{a name to be used for the link}
##'
##'@details The available links are:
##'
##' linear: \eqn{f(x) = ax}, for \eqn{a} real. The parameter is set using the
##' argument \code{ctt.ll}, when invoking the functions created by \code{link.btsr}
##'
##' logit: \eqn{f(x) = log(x/(1-x))}
##'
##' log: \eqn{f(x) = log(x)}
##'
##' loglog: \eqn{f(x) = log(-log(x))}
##'
##' cloglog: \eqn{f(x) = log(-log(1-x))}
##'
##' @examples
##' mylink <- BTSR::link.btsr("linear")
##' y = 0.8
##' a = 3.4
##' gy = a*y
##'
##' mylink$linkfun(mu = y, ctt.ll = a); gy
##' mylink$linkinv(eta = gy, ctt.ll = a); y
##' mylink$diflink(mu = y, ctt.ll = a); a
##' mylink$mu.eta(eta = gy, ctt.ll = a); 1/a
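##'
##' # A sketch (not part of the original examples): the same checks for the
##' # logit link, for which deta/dmu = 1/(mu*(1-mu)) and dmu/deta = mu*(1-mu)
##' mylogit <- BTSR::link.btsr("logit")
##' mu <- 0.3
##' eta <- log(mu/(1-mu))
##'
##' mylogit$linkfun(mu = mu); eta
##' mylogit$linkinv(eta = eta); mu
##' mylogit$diflink(mu = mu); 1/(mu*(1-mu))
##' mylogit$mu.eta(eta = eta); mu*(1-mu)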
##'
##'
##' @export
link.btsr <- function(link){
##--------------------------------------------------
## linkfun: Link function function(mu)
## linkinv: Inverse link function function(eta)
## mu.eta: Derivative function(eta) dmu/deta
## diflink: Derivative function(mu) deta/dmu
##--------------------------------------------------
# convert character to number
linktemp <- .link.convert(link)
# defines g(mu)
linkfun <- function(mu,...){
args <- list(...)
ctt.ll <- 1
yl <- 0
yu <- 1
if(link == "linear" & !is.null(args[['ctt.ll']])) ctt.ll <- args$ctt.ll
if(!is.null(args[['y.lower']])) yl <- max(args$y.lower, .Machine$double.xmin)
if(!is.null(args[['y.upper']])) yu <- min(args$y.upper, .Machine$double.xmax)
n <- length(mu)
.Fortran("linkR", link = linktemp, a = ctt.ll, ylim = c(yl, yu),
n = n, ilk = 0L, y = mu, lk = 1L,
gy = numeric(n), dl = 0L, dlink = 1)$gy
}
# defines g^{-1}(eta)
linkinv <- function(eta,...){
args <- list(...)
ctt.ll <- 1
yl <- 0
yu <- 1
if(link == "linear" & !is.null(args[['ctt.ll']])) ctt.ll <- args$ctt.ll
if(!is.null(args[['y.lower']])) yl <- max(args$y.lower, .Machine$double.xmin)
if(!is.null(args[['y.upper']])) yu <- min(args$y.upper, .Machine$double.xmax)
n <- length(eta)
.Fortran("linkR", link = linktemp, a = ctt.ll, ylim = c(yl, yu),
n = n, ilk = 1L, y = numeric(n), lk = 0L,
gy = eta, dl = 0L, dlink = 1)$y
}
# defines dg/dmu
diflink <- function(mu,...){
args <- list(...)
ctt.ll <- 1
yl <- 0
yu <- 1
if(link == "linear" & !is.null(args[['ctt.ll']])) ctt.ll <- args$ctt.ll
if(!is.null(args[['y.lower']])) yl <- max(args$y.lower, .Machine$double.xmin)
if(!is.null(args[['y.upper']])) yu <- min(args$y.upper, .Machine$double.xmax)
n <- length(mu)
.Fortran("linkR", link = linktemp, a = ctt.ll, ylim = c(yl, yu),
n = n, ilk = 0L, y = mu, lk = 0L, gy = 1,
dl = 1L, dlink = numeric(n))$dlink
}
# defines dmu/deta = 1/g'(ginv(eta))
mu.eta <- function(eta,...){
1/diflink(mu = linkinv(eta = eta,...),...)
}
#environment(linkfun) <- environment(linkinv) <- environment(mu.eta) <- environment(diflink) <- asNamespace("BTSR")
structure(list(linkfun = linkfun,
linkinv = linkinv,
diflink = diflink,
mu.eta = mu.eta,
name = link), class = "link-btsr")
}
##-------------------------------------------------------------------------
## internal function. Converts the link to the corresponding integer
## to be passed to FORTRAN
##-------------------------------------------------------------------------
.link.convert <- function(link){
##------------------------------------------
## Links
##------------------------------------------
## 0 = linear: f(x) = ax, a real
## 1 = logit: f(x) = log(x/(1-x))
## 2 = log: f(x) = log(x)
## 3 = loglog: f(x) = log(-log(x))
## 4 = cloglog: f(x) = log(-log(1-x))
##------------------------------------------
links <- matrix(0:4, ncol = 1)
nm <- c("linear", "logit", "log", "loglog", "cloglog")
rownames(links) <- nm
lk <- numeric(length(link))
for(i in 1:length(lk)){
lk[i] <- ifelse(link[i] %in% rownames(links), links[link[i],1], NA)
if(is.na(lk[i])){
mes <- paste(link[i], "link not available, available links are: ")
for(j in 1:nrow(links)) mes <- paste(mes, "'", nm[j], "', ", sep = "")
stop(mes)
}
}
as.integer(lk)
}
##-------------------------------------------------------------------------
## internal function. Checks compatibility
##-------------------------------------------------------------------------
.link.check <- function(model, link){
lk = unique(link)
ok = TRUE
for(i in 1:length(lk)){
if(model == "GARFIMA"){
# g1 and g2 cannot be from (a,b) -> (-Inf, Inf) because
# the data does not have an upper bound
if(lk[i] %in% c("logit","loglog", "cloglog")) ok = FALSE
}else{
# g2 can be anything, g1 must be from (a,b) -> (-Inf, Inf)
if(i == 1 & lk[i] %in% c("log")) ok = FALSE
}
}
if(!ok) stop("The selected model and link are not compatible")
invisible(ok)
}
#' @title Predict method for BTSR
#'
#' @description Predicted values based on btsr object.
#'
#' @param object Object of class inheriting from \code{"btsr"}
#' @param newdata A matrix with new values for the regressors. If omitted
#' and \code{"xreg"} is present in the model, the fitted values are returned.
#' If the model does not include regressors, the functions will use
#' the value of \code{nnew}.
#' @param nnew number of out-of-sample forecasts required. If \code{newdata} is
#' provided, \code{nnew} is ignored.
#' @param ... further arguments passed to or from other methods.
#'
#' @details
#' \code{predict.btsr} produces predicted values, obtained by evaluating
#' the regression function in the frame \code{newdata}.
#'
#' If \code{newdata} is omitted the predictions are based on the data
#' used for the fit.
#'
#' For now, prediction intervals are not provided.
#'
#' @return A list with the following arguments
#'
#' \item{series}{The original time series yt.}
#'
#' \item{xreg}{The original regressors (if any).}
#'
#' \item{fitted.values}{The in-sample forecast given by \eqn{\mu_t}.}
#'
#' \item{etat}{In-sample values of \eqn{g(\mu[t])}.}
#'
#' \item{error}{The error term (depends on the argument \code{error.scale})}
#'
#' \item{residuals}{The (in-sample) residuals, that is, the observed minus the predicted values.
#' Same as error when \code{error.scale} = 0}
#'
#' \item{forecast}{The predicted values for yt.}
#'
#' \item{Ts}{only for \code{"BARC"} models. The iterated map.}
#'
#' \item{Ts.forecast}{only for \code{"BARC"} models. The predicted values
#' of the iterated map.}
#'
#' @examples
##' #------------------------------------------------------------
##' # Generating a Beta model where mut does not vary with time
##' # yt ~ Beta(a,b), a = mu*nu, b = (1-mu)*nu
##' #------------------------------------------------------------
##'
##' y <- btsr.sim(model= "BARFIMA", linkg = "linear",
##' n = 100, seed = 2021,
##' coefs = list(alpha = 0.2, nu = 20))
##'
##' # fitting the model
##' f <- btsr.fit(model = "BARFIMA", yt = y, report = TRUE,
##' start = list(alpha = 0.5, nu = 10),
##' linkg = "linear", d = FALSE)
##'
##' pred = predict(f, nnew = 5)
##' pred$forecast
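##'
##' # A sketch (not part of the original examples): when the model has
##' # regressors, new regressor values must be passed through 'newdata'.
##' # The regressor below is illustrative.
##' \dontrun{
##' xreg <- matrix(seq(-0.5, 0.5, length.out = 105), ncol = 1)
##' y2 <- btsr.sim(model = "BARFIMA", linkg = "logit", n = 100, seed = 2021,
##'                xreg = xreg[1:100, , drop = FALSE],
##'                coefs = list(alpha = -1, beta = 0.6, nu = 20))
##' f2 <- btsr.fit(model = "BARFIMA", yt = y2, xreg = xreg[1:100, , drop = FALSE],
##'                linkg = "logit", d = FALSE, report = FALSE)
##' predict(f2, newdata = xreg[101:105, , drop = FALSE])$forecast
##' }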
##'
##' @export
##'
predict.btsr <-
function(object, newdata, nnew = 0,...){
out <- list()
nms.out <- c("series", "xreg", "fitted.values", "etat", "error", "residuals", "forecast")
if(object$model == "BARC") nms.out = c(nms.out, "Ts", "Ts.forecast")
if(missing(newdata)) newdata = NULL
if(is.null(newdata) & nnew <= 0){
#------------------------------------------------------
# New data was not provided.
# Extracting existing components and returning
#------------------------------------------------------
out[nms.out] <- object[nms.out]
}else{
if(is.null(newdata) & object$configs$nreg > 0)
stop("Please, provide the new values for the regressors")
#------------------------------------------------------
# New data was provided.
# Making the necessary calculations
#------------------------------------------------------
xnew = NULL
if(!is.null(newdata)){
xnew = as.matrix(newdata)
nnew = nrow(xnew)
}
nms = names(object)
obj <- object[nms[nms != "configs"]]
obj[names(object$configs)] <- object$configs
temp <- .xreg.convert(xreg = object$xreg, xnew = xnew,
n = obj$n, nnew = nnew, skip.forecast = FALSE)
obj[names(temp)] <- temp
obj[c("llk", "sco", "info", "extra")] <- 0L
obj$coefs <- obj$coefficients
if(obj$model == "BARC")
temp <- .barc.predict(obj, TRUE)
else
temp <- .btsr.predict(obj, TRUE)
out[nms.out] <- obj[nms.out]
out$forecast <- temp$yt.new
out$xnew <- NULL
if(obj$nreg > 0) out$xnew <- xnew
else out$xreg <- NULL
if(obj$model == "BARC") out$Ts.forecast <- temp$Ts.new
}
out
}
##---------------------------------------------------------------------------
## internal function:
## Interface between R and FORTRAN
##---------------------------------------------------------------------------
.btsr.predict <- function(object, debug){
if(! object$model %in% c("BARFIMA", "GARFIMA", "KARFIMA"))
stop("The selected model is not implemented yet")
temp <- .Fortran("btsrpredictR",
n = object$n,
series = object$series,
ylower = object$y.lower,
yupper = object$y.upper,
gy = object$gyt,
nreg = object$nreg,
xreg = object$xreg,
escale = object$error.scale,
error = object$error,
nnew = object$nnew,
xnew = object$xnew,
ynew = numeric(max(1,object$nnew)),
linkg = object$linkg,
npar = max(1L,object$npar),
coefs = object$coefs,
fixa = object$alpha$nfix,
alpha = object$alpha$fvalues,
fixb = object$beta$nfix,
flagsb = object$beta$flags,
beta = object$beta$fvalues,
p = object$p,
fixphi = object$phi$nfix,
flagsphi = object$phi$flags,
phi = object$phi$fvalues,
xregar = object$xregar,
q = object$q,
fixtheta = object$theta$nfix,
flagstheta = object$theta$flags,
theta = object$theta$fvalues,
fixd = object$d$nfix,
d = object$d$fvalues,
fixnu = object$nu$nfix,
nu = object$nu$fvalues[1],
inf = object$inf)
out <- list(model = object$model,
yt.new = temp$ynew)
if(debug) out$out.Fortran <- temp
invisible(out)
}
##-------------------------------------------------------------------------
## internal function
## Assigns a initial value to the seed argument passed to fortran,
## based on the type of random number generation chosen by the user
##-------------------------------------------------------------------------
.seed.start <- function(seed, rngtype){
##------------------------------------------
## Random number generators
##------------------------------------------
## 0: Original rng_uniform (Jason Blevins)
## 1: Wichmann-Hill
## 2: Mersenne Twister
## 3: Marsaglia-MultiCarry (kiss 32)
## 4: Marsaglia-MultiCarry (kiss 64)
## 5: Knuth (2002)
## 6: L'Ecuyer's 1999 (64-bits)
##------------------------------------------
if(rngtype == 0) add <- c(521288629, 362436069, 16163801, 1131199299)
if(rngtype == 1) add <- c(3026, 3030, 3032)
if(rngtype %in% c(2,6)) add <- 521288629
if(rngtype %in% c(3,4)) add <- c(123456789, 362436069, 521288629, 916191069)
if(rngtype == 5) add <- c(153587801,-759022222,-759022222,-1718083407,-123456789)
if(! rngtype %in% 0:6) stop("rngtype must be a number between 0 and 6")
if(length(seed) != length(add)){
if(length(seed) > length(add)){
warning(paste("only the first ", length(add),
" values in seed will be used", sep = ""), immediate. = TRUE)
seed <- seed[1:length(add)]
}else{
if(length(seed) > 1){
warning("only the first value in seed will be used", immediate. = TRUE)
seed <- seed[1]
}
}
}
seed <- seed + add
if(rngtype == 1){
# These values should be positive integers between 1 and 30,000.
seed <- sapply(seed, function(x) min(x, 30000))
}
return(as.integer(seed))
}
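##-------------------------------------------------------------------------
## Illustration (a sketch, not part of the original source): for the
## Mersenne Twister (rngtype = 2) a single offset is added to the seed,
## while Wichmann-Hill (rngtype = 1) recycles the seed over three offsets:
##   .seed.start(seed = 2021, rngtype = 2) # 2021 + 521288629
##   .seed.start(seed = 2021, rngtype = 1) # 2021 + c(3026, 3030, 3032)
##-------------------------------------------------------------------------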
##----------------------------------------------------------
##
## Simulation functions
##
##----------------------------------------------------------
##-------------------------------------------------------------------------
## internal function.
## Performs several checks to make sure that
## the correct type of variables will be passed to FORTRAN
##-------------------------------------------------------------------------
.sim.configs <- function(model, xreg, y.start, xreg.start, linkg, n, burn,
coefs, xregar, error.scale, seed, rngtype, y.default){
if(missing(seed)) seed = NULL
if(is.null(seed)) seed = stats::runif(1, min = 1000, max = 10000)
##-------------------
## initial values:
##-------------------
if(length(coefs$beta) == 0) xregar = FALSE
out <- .data.start.convert(y.start = y.start, xreg.start = xreg.start,
nreg = length(coefs$beta), xregar = xregar,
y.default = y.default)
##--------------------------------------------------------
## final sample size and burn-in (must be integer)
##--------------------------------------------------------
out$n <- as.integer(n)
out$burn <- as.integer(burn)
##--------------------------------------------------------
## link function for mu and y (must be integer)
##--------------------------------------------------------
dummy <- .link.check(model = model, link = linkg)
out$linkg <- .link.convert(linkg)
if(any(is.na(out$linkg)))
stop(paste("at least one link requested is not implemented yet", sep = ""))
## if only one link is provided, uses the same link for mu and y.
if(length(linkg) == 1) out$linkg <- c(out$linkg, out$linkg)
##-----------
## coefs
##-----------
pars <- c("alpha", "beta", "phi", "nu")
out[pars] <- coefs[pars]
if(is.null(coefs$nu)) stop("nu is missing with no default")
##---------------------------------------
## if alpha is missing, set alpha = 0
##---------------------------------------
if(is.null(out$alpha)) out$alpha <- 0
##--------------------------------------------------
## xreg and beta - forecast is not required here
## if xreg = NULL, set beta = 0 (dummy)
##--------------------------------------------------
cr <- .xreg.convert(xreg = xreg, xnew = NULL, n = burn + n,
nnew = 0, skip.forecast = TRUE)
out[names(cr)] <- cr
if(!is.null(xreg)){
if(nrow(out$xreg) < burn+n) stop("nrow(xreg) < burn + n")
if(is.null(out$beta)) stop("beta is missing with no default")
if(length(out$beta) < out$nreg) stop("beta and nreg are not compatible")
}else{out$beta <- 0}
##-----------------------------------------
## if phi is missing, set phi = 0 (dummy)
##-----------------------------------------
out$p <- length(coefs$phi)
if(is.null(out$phi)) out$phi <- 0
##--------------------------------------------------------------
## xregar (if FALSE, xreg is not included in the AR recursion)
##--------------------------------------------------------------
out$xregar <- as.integer(xregar)
##--------------------------------------------------------------
## error.scale = 1 => e = g(y) - g(mu)
## = 0 => e = y - mu
##--------------------------------------------------------------
out$error.scale <- as.integer(error.scale)
##----------------------------------------------------
## setting the seed and rngtype (must be integer)
##----------------------------------------------------
out$seed <- .seed.start(seed = seed, rngtype = rngtype)
out$rngtype <- as.integer(rngtype)
## theta parameter for BARC model is checked in check.configs.barc
## this function will be called by the function specific to BARC models
if(model == "BARC") return(invisible(out))
##-------------------------------
## theta and q - MA component
##-------------------------------
out$theta <- coefs$theta
out$q <- length(coefs$theta)
if(is.null(out$theta)) out$theta <- 0
##------
## d
##------
out$d <- ifelse(is.null(coefs$d), 0, coefs$d)
invisible(out)
}
#-------------------------------------------------------
# Fix-me
#-------------------------------------------------------
# Using
# foo <- .check.model(model[1], "sim")
# and then
# .Fortran(foo,...)
# gives an error during the registration process
# Therefore, for now, we are using the auxiliary
# functions defined in the sequel
#-------------------------------------------------------
.btsr.sim.barfima <- function(configs, inf){
.Fortran('simbarfimar',
n = configs$n,
burn = configs$burn,
pdist = configs$nu,
alpha = configs$alpha,
nreg = configs$nreg,
beta = configs$beta,
p = configs$p,
phi = configs$phi,
q = configs$q,
theta = configs$theta,
d = configs$d,
linkg = configs$linkg,
xreg = configs$xreg,
xregar = configs$xregar,
yt = numeric(configs$n+configs$burn),
ystart = configs$y.start,
xstart = configs$xreg.start,
mut = numeric(configs$n+configs$burn),
etat = numeric(configs$n+configs$burn),
error = numeric(configs$n+configs$burn),
escale = configs$error.scale,
ns = length(configs$seed),
seed = configs$seed,
rngtype = configs$rngtype,
inf = as.integer(inf),
rev = 1L)
}
.btsr.sim.karfima <- function(configs, inf){
.Fortran('simkarfimar',
n = configs$n,
burn = configs$burn,
pdist = configs$nu,
alpha = configs$alpha,
nreg = configs$nreg,
beta = configs$beta,
p = configs$p,
phi = configs$phi,
q = configs$q,
theta = configs$theta,
d = configs$d,
linkg = configs$linkg,
xreg = configs$xreg,
xregar = configs$xregar,
yt = numeric(configs$n+configs$burn),
ystart = configs$y.start,
xstart = configs$xreg.start,
mut = numeric(configs$n+configs$burn),
etat = numeric(configs$n+configs$burn),
error = numeric(configs$n+configs$burn),
escale = configs$error.scale,
ns = length(configs$seed),
seed = configs$seed,
rngtype = configs$rngtype,
inf = as.integer(inf),
rev = 1L)
}
.btsr.sim.garfima <- function(configs, inf){
.Fortran('simgarfimar',
n = configs$n,
burn = configs$burn,
pdist = configs$nu,
alpha = configs$alpha,
nreg = configs$nreg,
beta = configs$beta,
p = configs$p,
phi = configs$phi,
q = configs$q,
theta = configs$theta,
d = configs$d,
linkg = configs$linkg,
xreg = configs$xreg,
xregar = configs$xregar,
yt = numeric(configs$n+configs$burn),
ystart = configs$y.start,
xstart = configs$xreg.start,
mut = numeric(configs$n+configs$burn),
etat = numeric(configs$n+configs$burn),
error = numeric(configs$n+configs$burn),
escale = configs$error.scale,
ns = length(configs$seed),
seed = configs$seed,
rngtype = configs$rngtype,
inf = as.integer(inf),
rev = 1L)
}
.btsr.sim.uwarfima <- function(configs, inf){
.Fortran('simuwarfimar',
n = configs$n,
burn = configs$burn,
pdist = configs$nu,
alpha = configs$alpha,
nreg = configs$nreg,
beta = configs$beta,
p = configs$p,
phi = configs$phi,
q = configs$q,
theta = configs$theta,
d = configs$d,
linkg = configs$linkg,
xreg = configs$xreg,
xregar = configs$xregar,
yt = numeric(configs$n+configs$burn),
ystart = configs$y.start,
xstart = configs$xreg.start,
mut = numeric(configs$n+configs$burn),
etat = numeric(configs$n+configs$burn),
error = numeric(configs$n+configs$burn),
escale = configs$error.scale,
ns = length(configs$seed),
seed = configs$seed,
rngtype = configs$rngtype,
inf = as.integer(inf),
rev = 1L)
}
##---------------------------------------------------------------------------
## internal function:
## Interface between R and FORTRAN
## Also used to summarize the results of the simulation and return
## only the relevant variables
##---------------------------------------------------------------------------
.btsr.sim <- function(model = "BARFIMA", inf, configs, complete, debug){
if(abs(configs$d) > 0 & inf < 100){
warning(paste("non-zero d and inf = ", inf,
". Be careful, this value may be too small",
sep = ""), immediate. = TRUE)}
#fun <- .check.model(model[1],"sim")
out <- switch(EXPR = model,
BARFIMA = .btsr.sim.barfima(configs, inf),
GARFIMA = .btsr.sim.garfima(configs, inf),
KARFIMA = .btsr.sim.karfima(configs, inf),
UWARFIMA = .btsr.sim.uwarfima(configs, inf))
if(out$rev == 1){
warning("Revision Required. Try changing the link functions\n", immediate. = TRUE)
return(invisible(out))
}
##-----------------------------------------------
## if complete = TRUE returns the full model.
## otherwise only yt is returned
##-----------------------------------------------
ini <- configs$burn + 1
end <- configs$burn + configs$n
if(complete){
final <- list(model = model,
yt = out$yt[ini:end],
mut = out$mut[ini:end],
etat = out$etat[ini:end],
error = out$error[ini:end],
xreg = out$xreg[ini:end,])
if(out$nreg == 0) final$xreg <- NULL
if(debug) final$out.Fortran <- out
}
else final <- out$yt[ini:end]
invisible(final)
}
##----------------------------------------------------------------------------
## internal function.
## Not supposed to be called by the user.
## This function checks if the required model is implemented
##----------------------------------------------------------------------------
.check.model <- function(model, type){
if(!model %in% c("BARFIMA", "GARFIMA", "KARFIMA", "UWARFIMA"))
stop("The selected model is not implemented yet")
if(!type %in% c("sim", "fit", "extract"))
stop("wrong type. Must be one of: sim, extract, fit")
fun <- switch(EXPR = model,
BARFIMA = "barfimaR",
GARFIMA = "garfimaR",
KARFIMA = "karfimaR",
UWARFIMA = "uwarfimaR")
tp <- NULL
if(type == "sim") tp <- "sim"
fun <- paste0(tp,fun)
return(fun)
}
##----------------------------------------------------------------------------
## internal function.
## Not supposed to be called by the user.
## This function is used to avoid problems if the user defines one of
## these variable as NULL.
##----------------------------------------------------------------------------
.fix.null.configs <- function(coefs, lags, fixed.values, fixed.lags, lower, upper){
# default settings
out <- list(coefs = NULL, lags = list(), fixed.lags = list(),
fixed.values = list(), lower = list(), upper = list())
# updating values passed by the user
if(length(coefs) > 0) out$coefs[names(coefs)] <- coefs
if(length(lags) > 0) out$lags[names(lags)] <- lags
if(length(fixed.values) > 0) out$fixed.values[names(fixed.values)] <- fixed.values
if(length(fixed.lags) > 0) out$fixed.lags[names(fixed.lags)] <- fixed.lags
if(length(lower) > 0) out$lower[names(lower)] <- lower
if(length(upper) > 0) out$upper[names(upper)] <- upper
return(out)
}
##-------------------------------------------------------------------------
## internal function.
## Not supposed to be called by the user.
## Initializes xreg and xnew to pass to FORTRAN.
## This function creates dummy matrices of size 1 x 1 when
## the model does not have regressors or forecast is not required.
##-------------------------------------------------------------------------
.xreg.convert <- function(xreg, xnew, n, nnew, skip.forecast){
out <- c()
out$nnew <- as.integer(nnew)
##------------------------------
## nreg = 0
##------------------------------
if(is.null(xreg)){
##----------------------------
## no regressors in the model
## xnew will be ignored
##----------------------------
out$xreg <- matrix(0, ncol = 1, nrow = max(1,n))
out$nreg <- 0L
##----------------------------------
## checking if forecast is required
##----------------------------------
if(skip.forecast){
# for compatibility with other functions
out$xnew <- matrix(0, ncol = 1, nrow = 1)
out$nnew <- 0L
}else{
out$xnew <- matrix(0, ncol = 1, nrow = max(1,nnew))}
return(invisible(out))
}
##---------------------------
## nreg > 0
##---------------------------
out$xreg <- as.matrix(xreg)
out$nreg <- ncol(out$xreg)
if(nrow(out$xreg) != n)
stop("xreg and y do not have the same number of observations")
##----------------------------------
## checking if forecast is required
##----------------------------------
if(skip.forecast){
# for compatibility with other functions
out$xnew <- matrix(0, ncol = 1, nrow = 1)
out$nnew <- 0L
return(invisible(out))
}
##----------------------------------
## if forecast is required
##----------------------------------
if(is.null(xnew)){
out$nnew <- as.integer(nnew)
out$xnew <- matrix(0, ncol = 1, nrow = max(1,nnew))
}else{
xnew <- as.matrix(xnew)
if(ncol(xnew) != out$nreg){
##------------------------------------------------------
## if xnew is not compatible with xreg
##------------------------------------------------------
stop("number of columns in xnew and xreg are not the same")
}else{
out$xnew <- xnew
out$nnew <- nrow(xnew)
}
}
invisible(out)
}
##---------------------------------------------------------------------------
## internal function.
## Initializes y.start and xreg.start to pass to FORTRAN.
##
## The default is to set the initial values as zero.
##
## For now, if initilization is required, the user must also provide the initial
## values for these variables.
##
## To do: to implement a function to initialize y.start and xreg.start based
## on the model selected.
##---------------------------------------------------------------------------
.data.start.convert <- function(y.start, xreg.start, nreg, xregar, y.default){
# If the model does not include xreg in the AR regression,
# there is no use for initial values of xreg.
if(!xregar) xreg.start = rep(0, max(1,nreg))
# checking compatibility
if(!is.null(xreg.start) & xregar)
if(length(xreg.start) != nreg)
stop("length(xreg.start) is not the same as nreg")
out <- c()
# if y.start and/or xreg.start are provided, uses this values
# otherwise, the initial values are set as the default value
out$y.start <- ifelse(is.null(y.start), y.default, y.start)
if(is.null(xreg.start)) out$xreg.start <- rep(0, max(1,nreg))
else out$xreg.start <- xreg.start
invisible(out)
}
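##----------------------------------------------------------------------------
## Illustration (not executed). A sketch of .data.start.convert: when no
## starting values are given, y.start and xreg.start default to zero; values
## provided by the user are kept (xreg.start only matters when xregar = TRUE).
##----------------------------------------------------------------------------
if (FALSE) {
  # default: zero initial values
  s0 <- .data.start.convert(y.start = NULL, xreg.start = NULL,
                            nreg = 2, xregar = TRUE, y.default = 0)
  str(s0)
  # user-provided starting values
  s1 <- .data.start.convert(y.start = 0.5, xreg.start = c(1, 2),
                            nreg = 2, xregar = TRUE, y.default = 0)
  str(s1)
}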
##-------------------------------------------------------------------------
## internal function.
## Checks if the data has any NA and if it is in the correct range.
##-------------------------------------------------------------------------
.data.check <- function(yt, lower, upper, openIC){
out <- list()
##----------------------
## checking for NA's
##----------------------
if(sum(is.na(yt)) > 0){
out$conv <- 1
out$message <- "NA in the data"
warning("NA's are not allowed", immediate. = TRUE)
return(out)
}
##--------------------------------------
## checking if y is in the correct range
##--------------------------------------
if(openIC[1]) a <- (min(yt) <= lower + .Machine$double.eps)
else a <- (min(yt) < lower)
if(openIC[2]) b <- (max(yt) >= upper - .Machine$double.eps)
else b <- (max(yt) > upper)
if (a | b){
out$conv <- 1
out$conv_message <- "Out of range"
warning(paste("OUT OF RANGE. yt must be bewteen ", lower,
" and ", upper, sep = ""), immediate. = TRUE)
return(out)
}
return(out)
}
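##----------------------------------------------------------------------------
## Illustration (not executed). A sketch of .data.check: values strictly
## inside the interval pass silently (an empty list is returned), while NA's
## or out-of-range values set conv = 1 and trigger a warning. openIC controls
## whether the endpoints themselves are allowed.
##----------------------------------------------------------------------------
if (FALSE) {
  # strictly inside (0, 1): no problem detected
  .data.check(yt = runif(50), lower = 0, upper = 1, openIC = c(TRUE, TRUE))
  # a zero in the sample is rejected when the lower endpoint is open
  .data.check(yt = c(0, runif(49)), lower = 0, upper = 1,
              openIC = c(TRUE, TRUE))
}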
##-------------------------------------------------------------------------
## Internal function.
## Initializes the variables with fixed values
## and fixed lags to pass to FORTRAN.
##-------------------------------------------------------------------------
.coefs.convert <- function(parname, fvalues, flags, coefs, lags, npar){
out <- c()
##----------------------
## npar = nfix + nfit
##----------------------
if(is.null(npar)) stop("npar is missing")
# if only npar is provided, the parameter is assumed to be
# fixed and it will be set as zero.
if(is.null(c(fvalues, flags, coefs, lags)) & npar > 0) fvalues = rep(0, npar)
##---------------------------------------
## non-fixed values
## lags = position of non-fixed values
##---------------------------------------
lc = length(coefs)
ll = length(lags)
if(!is.null(coefs) & !is.null(lags)){
if(lc != ll) stop("coefs and lags are not compatible.
Please, make sure that length(coefs) = length(lags)")}
##---------------------------------------
## fixed values
## flags = position of fixed values
##---------------------------------------
lfv = length(fvalues)
lfl = length(flags)
if(!is.null(fvalues) & !is.null(flags)){
if(lfv != lfl) stop("fixed.values and fixed.lags are not compatible.
Please, make sure that length(fvalues) = length(flags)")}
if(!is.null(lags) & !is.null(flags))
if(any(lags %in% flags)) stop("lags and flags have non-empty intersection")
##-------------------
## total
##-------------------
lc = ll = max(lc, ll)
lfv = lfl = max(lfv, lfl)
if(npar != lc + lfv) stop("values provided are not compatible.
Please, check if fixed values/lags and
non-fixed values/lags were correctly informed.")
if(lfv == 0){
##--------------------------------
## there are no fixed values:
##----------------------------------
out$flags <- 0L ## dummy passed to FORTRAN
out$fvalues <- 0 ## dummy passed to FORTRAN
out$nfix <- 0L
lags = 1:npar
}else{
##----------------------------------------------------
## if there are fixed values
##----------------------------------------------------
if(is.null(lags) & is.null(flags) & npar > 1){
stop("cannot decide which lags must be fixed/fitted.
Please provide lags or flags")}
if(npar == 1) flags = 1
##--------------
## lags
##--------------
all = 1:npar
if(is.null(lags) & lc > 0) lags = all[-flags] # flags was provided
if(is.null(flags)) flags = all[-lags] # lags was provided
##----------------
## fixed values
##----------------
if(is.null(fvalues)) fvalues <- rep(0,lfv)
out$flags <- as.integer(flags)
out$fvalues <- fvalues
out$nfix <- as.integer(lfv)
}
##-----------------------------------------
## checking for non-fixed parameter values
##-----------------------------------------
if(npar == lfv) out$coefs <- NULL
else{
## if the non-fixed values were not provided, set as 0.
if(is.null(coefs)) out$coefs <- rep(0, lc)
else out$coefs <- coefs
}
if(lc > 0){
if(length(lags) == 1) out$coefsnames <- parname
else out$coefsnames <- paste(parname,"(",lags,")",sep = "")
}else{out$coefsnames <- NULL}
invisible(out)
}
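##----------------------------------------------------------------------------
## Illustration (not executed). A sketch of .coefs.convert for an AR
## polynomial of order 3 where phi(2) is fixed at zero and phi(1), phi(3)
## are free: lags and flags must partition 1:npar and npar = nfix + nfit.
##----------------------------------------------------------------------------
if (FALSE) {
  cc <- .coefs.convert(parname = "phi", fvalues = 0, flags = 2,
                       coefs = c(0.4, -0.2), lags = c(1, 3), npar = 3)
  cc$coefs      # non-fixed values: 0.4 -0.2
  cc$coefsnames # "phi(1)" "phi(3)"
  cc$flags      # 2 (position of the fixed coefficient)
  cc$nfix       # 1
}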
##-------------------------------------------------------------------------
## Internal function.
## Initializes the variables with fixed values and fixed lags
## to pass to FORTRAN.
##-------------------------------------------------------------------------
.coefs.convert.all <- function(model, coefs,lags, fixed.values, fixed.lags, p, q, nreg){
out <- c()
par <- NULL
nm = NULL
##-------------------------------------------------------
## checking for and parsing fixed and non-fixed values.
##-------------------------------------------------------
## alpha - the intercept
out$alpha <- .coefs.convert(parname = "alpha", fvalues = fixed.values$alpha,
flags = NULL, coefs = coefs$alpha,
lags = NULL, npar = 1)
par <- c(par, alpha = out$alpha$coefs)
nm <- c(nm, out$alpha$coefsnames)
## beta - coefficients associated to Xreg
out$beta <- .coefs.convert(parname = "beta", fvalues = fixed.values$beta,
flags = fixed.lags$beta, coefs = coefs$beta,
lags = lags$beta, npar = nreg)
par <- c(par, beta = out$beta$coefs)
nm <- c(nm, out$beta$coefsnames)
if(nreg - out$beta$nfix > 0){
if(length(out$beta$coefs) < nreg - out$beta$nfix)
stop("missing some values of beta in the parameter list")}
## phi - AR coefficients
out$phi <- .coefs.convert(parname = "phi", fvalues = fixed.values$phi,
flags = fixed.lags$phi, coefs = coefs$phi,
lags = lags$phi, npar = p)
par <- c(par, phi = out$phi$coefs)
nm <- c(nm, out$phi$coefsnames)
if(p - out$phi$nfix > 0){
if(length(out$phi$coefs) < p - out$phi$nfix)
stop("missing some values of phi in the parameter list")}
## theta - MA coefficients or map parameter in BARC models
out$theta <- .coefs.convert(parname = "theta", fvalues = fixed.values$theta,
flags = fixed.lags$theta, coefs = coefs$theta,
lags = lags$theta, npar = q)
par <- c(par, theta = out$theta$coefs)
nm <- c(nm, out$theta$coefsnames)
if(q - out$theta$nfix > 0){
if(length(out$theta$coefs) < q - out$theta$nfix)
stop("missing some values of theta in the parameter list")}
if(!(model == "BARC")){
## d - long memory parameter
out$d <- .coefs.convert(parname = "d", fvalues = fixed.values$d,
flags = NULL, coefs = coefs$d,
lags = NULL, npar = 1)
par <- c(par, d = out$d$coefs)
nm <- c(nm, out$d$coefsnames)
}
## nu - dispersion parameter
out$nu <- .coefs.convert(parname = "nu", fvalues = fixed.values$nu,
flags = NULL, coefs = coefs$nu,
lags = NULL, npar = 1)
par <- c(par, nu = out$nu$coefs)
nm <- c(nm, out$nu$coefsnames)
if(!is.null(par)) names(par) <- nm
out$coefs <- par
out$coefsnames <- nm
invisible(out)
}
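##----------------------------------------------------------------------------
## Illustration (not executed). A sketch of .coefs.convert.all assembling the
## full vector of non-fixed coefficients and their names for an AR(1) model
## with no regressors, where d is fixed at zero.
##----------------------------------------------------------------------------
if (FALSE) {
  full <- .coefs.convert.all(model = "BARFIMA",
                             coefs = list(alpha = 0, phi = 0.3, nu = 20),
                             lags = list(), fixed.values = list(d = 0),
                             fixed.lags = list(), p = 1, q = 0, nreg = 0)
  full$coefs # named vector: alpha, phi and nu (d is fixed and excluded)
}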
##-------------------------------------------------------------------------
## Internal function.
## Convert the bounds to pass to FORTRAN.
##-------------------------------------------------------------------------
.bounds.convert <- function(npar, lower, upper){
##----------------------------------
## bounds
##----------------------------------
## 0 = no bounds
## 1 = lower bound only
## 2 = lower and upper bounds
## 3 = upper bound only
##----------------------------------
out <- c()
out$nbd <- integer(npar)
out$lower <- numeric(npar)
out$upper <- numeric(npar)
if(is.null(lower)) lower <- rep(-Inf, npar)
if(is.null(upper)) upper <- rep(Inf, npar)
w1 <- (lower > -Inf)&(upper == Inf)
w2 <- (lower > -Inf)&(upper < Inf)
w3 <- (lower == -Inf)&(upper < Inf)
##----------------------------
## lower bound only
##----------------------------
if(sum(w1) > 0){
out$nbd[w1] <- 1L
out$lower[w1] <- lower[w1]
out$upper[w1] <- 0 ## will be ignored by the function
}
##----------------------------
## upper and lower bound
##----------------------------
if(sum(w2) > 0){
out$nbd[w2] <- 2L
out$lower[w2] <- lower[w2]
out$upper[w2] <- upper[w2]
}
##----------------------------
## upper bound only
##----------------------------
if(sum(w3) > 0){
out$nbd[w3] <- 3L
out$upper[w3] <- upper[w3]
out$lower[w3] <- 0 ## will be ignored by the function
}
##-------------------------------------------
## no bounds (FORTRAN does not accept Inf)
##-------------------------------------------
w0 <- (out$nbd == 0L)
if(sum(w0) > 0){
out$lower[w0] <- 0
out$upper[w0] <- 0
}
invisible(out)
}
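##----------------------------------------------------------------------------
## Illustration (not executed). A sketch of .bounds.convert: the nbd codes
## follow the L-BFGS-B convention (0 = unbounded, 1 = lower bound only,
## 2 = both bounds, 3 = upper bound only) and Inf is replaced by dummy zeros,
## since the FORTRAN routines do not accept Inf.
##----------------------------------------------------------------------------
if (FALSE) {
  .bounds.convert(npar = 3,
                  lower = c(-Inf, 0, -Inf),
                  upper = c(Inf, Inf, 1))
  # nbd = c(0, 1, 3): free, lower-bounded at 0, upper-bounded at 1
}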
##-----------------------------------------------------------------------------
## End of file: /scratch/gouwar.j/cran-all/cranData/BTSR/R/utils.R
##-----------------------------------------------------------------------------
##----------------------------------------------------------
## UWARFIMA MODELS
##----------------------------------------------------------
##' @title
##' Functions to simulate, extract components and fit UWARFIMA models
##'
##' @name UWARFIMA.functions
##' @order 1
##'
##' @description
##' These functions can be used to simulate, extract components
##' and fit any model of the class \code{uwarfima}. A model with
##' class \code{uwarfima} is a special case of a model with class \code{btsr}.
##' See \sQuote{The BTSR structure} in \code{\link{btsr.functions}} for
##' more details on the general structure.
##'
##' The UWARMA model, the Unit-Weibull regression and an i.i.d. sample
##' from a Unit-Weibull distribution can be obtained as special cases.
##' See \sQuote{Details}.
##'
##' @details
##' The UWARMA model and the Unit-Weibull regression can be
##' obtained as special cases of the UWARFIMA model.
##'
##' \itemize{
##' \item UWARFIMA: is obtained by default.
##'
##' \item UWARMA: is obtained by setting \code{coefs$d = 0} and, in the
##' fitting function, \code{d = FALSE}.
##'
##' \item Unit-Weibull regression: is obtained by setting \code{p = 0},
##' \code{q = 0}, \code{coefs$d = 0} and \code{d = FALSE}. The \code{error.scale} is irrelevant.
##' The second argument in \code{linkg} is irrelevant.
##'
##' \item an i.i.d. sample from a Unit-Weibull distribution
##' is obtained by setting \code{linkg = "linear"}, \code{p = 0}, \code{q = 0},
##' \code{coefs$d = 0}, \code{d = FALSE}. (\code{error.scale} and
##' \code{xregar} are irrelevant)
##'}
##'
##' @md
NULL
#> NULL
##' @rdname UWARFIMA.functions
##' @order 2
##'
##' @details
##' The function \code{UWARFIMA.sim} generates a random sample from a UWARFIMA(p,d,q)
##' model.
##'
##' @param n a strictly positive integer. The sample size of yt (after burn-in).
##' Default is 1.
##'
##' @param burn a non-negative integer. Length of the "burn-in" period. Default is 0.
##'
##' @param xreg optionally, a vector or matrix of external regressors.
##' For simulation purposes, the length of xreg must be \code{n+burn}.
##' Default is \code{NULL}. For extraction or fitting purposes, the length
##' of \code{xreg} must be the same as the length of the observed time series
##' \eqn{y_t}.
##'
##' @param rho a positive number, between 0 and 1, indicating the quantile
##' to be modeled. In this case, \eqn{\mu_t} corresponds to the conditional
##' \eqn{\rho}-quantile of the distribution.
##'
##' @param coefs a list with the coefficients of the model. An empty list will result
##' in an error. The arguments that can be passed through this list are:
##' \itemize{
##' \item \code{alpha} optionally, a numeric value corresponding to the intercept.
##' If the argument is missing, it will be treated as zero. See
##' \sQuote{The BTSR structure} in \code{\link{btsr.functions}}.
##'
##' \item \code{beta} optionally, a vector of coefficients corresponding to the
##' regressors in \code{xreg}. If \code{xreg} is provided but \code{beta} is
##' missing in the \code{coefs} list, an error message is issued.
##'
##' \item \code{phi} optionally, for the simulation function this must be a vector
##' of size \eqn{p}, corresponding to the autoregressive coefficients
##' (including the ones that are zero), where \eqn{p} is the AR order. For
##' the extraction and fitting functions, this is a vector with the non-fixed
##' values in the vector of autoregressive coefficients.
##'
##' \item \code{theta} optionally, for the simulation function this must be a vector
##' of size \eqn{q}, corresponding to the moving average coefficients
##' (including the ones that are zero), where \eqn{q} is the MA order. For
##' the extraction and fitting functions, this is a vector with the non-fixed
##' values in the vector of moving average coefficients.
##'
##' \item \code{d} optionally, a numeric value corresponding to the long memory
##' parameter. If the argument is missing, it will be treated as zero.
##'
##' \item \code{nu} is a shape parameter. If missing, an error message is issued.
##'
##' }
##'
##' @param y.start optionally, an initial value for yt (to be used
##' in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{g_2(y_t) = 0}, for \eqn{t < 1}.
##'
##' @param xreg.start optionally, a vector of initial values for xreg
##' (to be used in the recursions). Default is \code{NULL}, in which case, the recursion assumes
##' that \eqn{X_t = 0}, for \eqn{t < 1}. If \code{xregar = FALSE} this argument
##' is ignored.
##'
##' @param xregar logical; indicates if xreg is to be included in the
##' AR part of the model. See \sQuote{The BTSR structure}. Default is \code{TRUE}.
##'
##' @param error.scale the scale for the error term. See \sQuote{The BTSR structure}
##' in \code{\link{btsr.functions}}. Default is 1.
##'
##' @param complete logical; if \code{FALSE} the function returns only the simulated
##' time series yt, otherwise, additional time series are provided (see below).
##' Default is \code{FALSE}
##'
##' @param inf the truncation point for infinite sums. Default is 1,000.
##' In practice, the Fortran subroutine uses \eqn{inf = q}, if \eqn{d = 0}.
##'
##' @param linkg character or a two character vector indicating which
##' links must be used in the model. See \sQuote{The BTSR structure}
##' in \code{\link{btsr.functions}} for details and \code{\link{link.btsr}}
##' for valid links. If only one value is provided, the same link is used
##' for \eqn{\mu_t} and for \eqn{y_t} in the AR part of the model.
##' Default is \code{c("logit", "logit")}. For the linear link, the constant
##' will always be 1.
##'
##' @param seed optionally, an integer which gives the value of the fixed
##' seed to be used by the random number generator. If missing, a random integer
##' is chosen uniformly from 1,000 to 10,000.
##'
##' @param rngtype optionally, an integer indicating which random number generator
##' is to be used. Default is 2: the Mersenne Twister algorithm. See \sQuote{Common Arguments}
##' in \code{\link{btsr.functions}}.
##'
##' @param debug logical, if \code{TRUE} the output from FORTRAN is returned (for
##' debugging purposes). Default is \code{FALSE} for all models.
##'
##' @return
##' The function \code{UWARFIMA.sim} returns the simulated time series yt by default.
##' If \code{complete = TRUE}, a list with the following components
##' is returned instead:
##' \itemize{
##' \item \code{model}: string with the text \code{"UWARFIMA"}
##'
##' \item \code{yt}: the simulated time series
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{debug}: the output from FORTRAN (if requested).
##'
##' }
##'
##' @seealso
##' \code{\link{btsr.sim}}
##'
##' @examples
##' # Generating a Unit-Weibull model where mut does not vary with time
##' # For linear link, alpha = mu
##'
##' y <- UWARFIMA.sim(linkg = "linear", n = 1000, seed = 2021,
##' coefs = list(alpha = 0.7, nu = 2))
##' hist(y)
##'
##' @export
##'
##' @md
UWARFIMA.sim <- function(n = 1, burn = 0, xreg = NULL, rho = 0.5,
coefs = list(alpha = 0, beta = NULL, phi = NULL,
theta = NULL, d = 0, nu = 20),
y.start = NULL, xreg.start = NULL,
xregar = TRUE, error.scale = 1, complete = FALSE,
inf = 1000, linkg = c("logit", "logit"), seed = NULL,
rngtype = 2, debug = FALSE){
##----------------------------------
## checking required parameters:
##----------------------------------
if(is.null(coefs)) stop("coefs missing with no default")
if(!"list" %in% class(coefs)) stop("coefs must be a list")
if(is.null(rho)){
rho = 0.5
warning("rho is missing. Assuming rho = 0.5", immediate. = TRUE)
}
##----------------------------------
## checking configurations:
##----------------------------------
cf <- .sim.configs(model = "UWARFIMA", xreg = xreg,
y.start = y.start, xreg.start = xreg.start,
linkg = linkg, n = n, burn = burn,
coefs = coefs, xregar = xregar,
error.scale = error.scale, seed = seed,
rngtype = rngtype, y.default = 0)
cf$nu = c(cf$nu, rho)
out <- .btsr.sim(model = "UWARFIMA", inf = inf, configs = cf,
complete = complete, debug = debug)
class(out) <- c(class(out), "uwarfima")
invisible(out)
}
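##----------------------------------------------------------------------------
## Illustration (not executed). A sketch of the special cases described in
## the documentation above: a UWARMA(1,1) sample (long memory parameter set
## to zero) and an i.i.d. Unit-Weibull sample (linear link, p = q = 0).
## All coefficient values below are illustrative only.
##----------------------------------------------------------------------------
if (FALSE) {
  # UWARMA(1,1)
  y1 <- UWARFIMA.sim(n = 500, seed = 1234,
                     coefs = list(alpha = 0, phi = 0.3, theta = 0.2,
                                  d = 0, nu = 15))
  # i.i.d. Unit-Weibull sample
  y2 <- UWARFIMA.sim(n = 500, seed = 1234, linkg = "linear",
                     coefs = list(alpha = 0.7, d = 0, nu = 2))
}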
##' @rdname UWARFIMA.functions
##' @order 3
##'
##' @details
##'
##' The function \code{UWARFIMA.extract} allows the user to extract the
##' components \eqn{y_t}, \eqn{\mu_t}, \eqn{\eta_t = g(\mu_t)}, \eqn{r_t},
##' the log-likelihood, and the vectors and matrices used to calculate the
##' score vector and the information matrix associated to a given set of parameters.
##'
##' This function can be used by any user to create an objective function
##' that can be passed to optimization algorithms not available in the BTSR Package.
##'
##' @param yt a numeric vector with the observed time series. If missing, an error
##' message is issued.
##'
##' @param nnew optionally, the number of out-of-sample predicted values required.
##' Default is 0.
##'
##' @param xnew a vector or matrix, with \code{nnew} observations of the
##' regressors observed/predicted values corresponding to the period of
##' out-of-sample forecast. If \code{xreg = NULL}, \code{xnew} is ignored.
##'
##' @param p a non-negative integer. The order of AR polynomial.
##' If missing, the value of \code{p} is calculated from length(coefs$phi)
##' and length(fixed.values$phi). For fitting, the default is 0.
##'
##' @param q a non-negative integer. The order of the MA polynomial.
##' If missing, the value of \code{q} is calculated from length(coefs$theta)
##' and length(fixed.values$theta). For fitting, the default is 0.
##'
##' @param lags optionally, a list with the lags that the values in \code{coefs} correspond to.
##' The names of the entries in this list must match the ones in \code{coefs}.
##' For one dimensional coefficients, the \code{lag} is obviously always 1 and can
##' be suppressed. An empty list indicates that either the argument \code{fixed.lags}
##' is provided or all lags must be used.
##'
##' @param fixed.values optionally, a list with the values of the coefficients
##' that are fixed. By default, if a given vector (such as the vector of AR coefficients)
##' has fixed values and the corresponding entry in this list is empty, the fixed values
##' are set as zero. The names of the entries in this list must match the ones
##' in \code{coefs}.
##'
##' @param fixed.lags optionally, a list with the lags that the fixed values
##' in \code{fixed.values} correspond to. The names of the entries in this list must
##' match the ones in \code{fixed.values}. For one dimensional coefficients, the
##' \code{lag} is obviously always 1 and can be suppressed. If an empty list is provided
##' and the model has fixed lags, the argument \code{lags} is used as reference.
##'
##' @param m a non-negative integer indicating the starting time for the sum of the
##' partial log-likelihoods, that is \eqn{\ell = \sum_{t = m+1}^n \ell_t}. Default is
##' 0.
##'
##' @param llk logical, if \code{TRUE} the value of the log-likelihood function
##' is returned. Default is \code{TRUE}.
##'
##' @param sco logical, if \code{TRUE} the score vector is returned.
##' Default is \code{FALSE}.
##'
##' @param info logical, if \code{TRUE} the information matrix is returned.
##' Default is \code{FALSE}. For the fitting function, \code{info} is automatically
##' set to \code{TRUE} when \code{report = TRUE}.
##'
##' @param extra logical, if \code{TRUE} the matrices and vectors used to
##' calculate the score vector and the information matrix are returned.
##' Default is \code{FALSE}.
##'
##' @return
##' The function \code{UWARFIMA.extract} returns a list with the following components.
##'
##' \itemize{
##' \item \code{model}: string with the text \code{"UWARFIMA"}
##'
##' \item \code{coefs}: the coefficients of the model passed through the
##' \code{coefs} argument
##'
##' \item \code{yt}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{mut}: the conditional mean
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{xreg}: the regressors (if included in the model).
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{sco}: the score vector (if requested)
##'
##' \item \code{info}: the information matrix (if requested)
##'
##' \item \code{Drho}, \code{T}, \code{E}, \code{h}: additional matrices and vectors
##' used to calculate the score vector and the information matrix. (if requested)
##'
##' \item \code{yt.new}: the out-of-sample forecast (if requested)
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##' }
##'
##' @seealso
##' \code{\link{btsr.extract}}
##'
##' @examples
##' #------------------------------------------------------------
##' # Generating a Unit-Weibull model where mut does not vary with time
##' # For linear link, alpha = mu
##' #------------------------------------------------------------
##'
##' m1 <- UWARFIMA.sim(linkg = "linear",n = 100,
##' complete = TRUE, seed = 2021,
##' coefs = list(alpha = 0.7, nu = 2))
##'
##' #------------------------------------------------------------
##' # Extracting the conditional time series given yt and
##' # a set of parameters
##' #------------------------------------------------------------
##'
##' # Assuming that all coefficients are non-fixed
##' e1 = UWARFIMA.extract(yt = m1$yt, coefs = list(alpha = 0.7, nu = 2),
##' link = "linear", llk = TRUE,
##' sco = TRUE, info = TRUE)
##'
##' #----------------------------------------------------
##' # comparing the simulated and the extracted values
##' #----------------------------------------------------
##' cbind(head(m1$mut), head(e1$mut))
##'
##' #---------------------------------------------------------
##' # the log-likelihood, score vector and information matrix
##' #---------------------------------------------------------
##' e1$sll
##' e1$score
##' e1$info.Matrix
##'
##' @export
##' @md
UWARFIMA.extract <- function(yt, xreg = NULL, nnew = 0, xnew = NULL, p, q,
rho = 0.5, coefs = list(), lags = list(),
fixed.values = list(), fixed.lags = list(),
y.start = NULL, xreg.start = NULL,
xregar = TRUE, error.scale = 1, inf = 1000, m = 0,
linkg = c("logit","logit"), llk = TRUE, sco = FALSE,
info = FALSE, extra = FALSE, debug = FALSE){
if(is.null(coefs) & is.null(fixed.values))
stop("Please, provide a list of coefficients")
if(!is.null(coefs)){
if(! "list" %in% class(coefs)) stop("coefs must be a list")}
if(!is.null(fixed.values)){
if(! "list" %in% class(fixed.values)) stop("fixed.values must be a list")}
else{ fixed.values <- list()}
if(is.null(rho)) stop("rho is missing. NULL is not accepted")
if(missing(p)) p = length(coefs$phi) + length(fixed.values$phi)
if(missing(q)) q = length(coefs$theta) + length(fixed.values$theta)
cf <- .extract.configs(model = "UWARFIMA", yt = yt, y.start = y.start,
y.lower = 0, y.upper = 1,
openIC = c(TRUE, TRUE), xreg = xreg, xnew = xnew,
nnew = nnew, xreg.start = xreg.start, linkg = linkg,
p = p, q = q, inf = inf, m = m, xregar = xregar,
error.scale = error.scale, coefs = coefs,
lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, llk = llk, sco = sco,
info = info, extra = extra)
# adding the extra parameters to pass to the extracting function
# via "pdist" parameter
cf$nu$fvalues = c(cf$nu$fvalues, rho)
out <- .btsr.extract(model = "UWARFIMA", yt = yt, configs = cf, debug = debug)
class(out) <- c(class(out), "uwarfima")
invisible(out)
}
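##----------------------------------------------------------------------------
## Illustration (not executed). A sketch of how UWARFIMA.extract can be used
## to build an objective function (here, the negative log-likelihood) for an
## external optimizer, as mentioned in the documentation above. Only alpha
## and nu are estimated; the reparametrization (plogis/exp) that keeps alpha
## in (0,1) and nu > 0 is a choice of this sketch, not a package requirement.
##----------------------------------------------------------------------------
if (FALSE) {
  yt <- UWARFIMA.sim(linkg = "linear", n = 200, seed = 2021,
                     coefs = list(alpha = 0.7, nu = 2))
  negll <- function(par, yt) {
    ext <- UWARFIMA.extract(yt = yt,
                            coefs = list(alpha = plogis(par[1]),
                                         nu = exp(par[2])),
                            linkg = "linear", llk = TRUE)
    -ext$sll
  }
  opt <- optim(c(qlogis(0.5), log(1)), negll, yt = yt, method = "Nelder-Mead")
  c(alpha = plogis(opt$par[1]), nu = exp(opt$par[2]))
}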
##' @rdname UWARFIMA.functions
##' @order 4
##'
##' @details
##' The function \code{UWARFIMA.fit} fits a UWARFIMA model to a given univariate time
##' series. For now, available optimization algorithms are \code{"L-BFGS-B"} and
##' \code{"Nelder-Mead"}. Both methods accept bounds for the parameters. For
##' \code{"Nelder-Mead"}, bounds are set via parameter transformation.
##'
##'
##' @param d logical, if \code{TRUE}, the parameter \code{d} is included
##' in the model either as fixed or non-fixed. If \code{d = FALSE} the value is
##' fixed as 0. The default is \code{TRUE}.
##'
##' @param start a list with the starting values for the non-fixed coefficients
##' of the model. If an empty list is provided, the function \code{\link{coefs.start}}
##' is used to obtain starting values for the parameters.
##'
##' @param ignore.start logical, if starting values are not provided, the
##' function uses the default values and \code{ignore.start} is ignored.
##' In case starting values are provided and \code{ignore.start = TRUE}, those
##' starting values are ignored and recalculated. The default is \code{FALSE}.
##'
##' @param lower optionally, list with the lower bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters have no lower
##' bound except for \code{nu}, for which the default is 0. Only the bounds for
##' bounded parameters need to be specified.
##'
##' @param upper optionally, list with the upper bounds for the
##' parameters. The names of the entries in these lists must match the ones
##' in \code{start}. The default is to assume that the parameters have no upper
##' bound. Only the bounds for bounded parameters need to be specified.
##'
##' @param control a list with configurations to be passed to the
##' optimization subroutines. Missing arguments will receive default values. See
##' \code{\link{fit.control}}.
##'
##' @param report logical, if \code{TRUE} the summary from model estimation is
##' printed and \code{info} is automatically set to \code{TRUE}. Default is \code{TRUE}.
##'
##' @param ... further arguments passed to the internal functions.
##'
##' @return
##' The function \code{UWARFIMA.fit} returns a list with the following components.
##' Each particular model can have additional components in this list.
##'
##' \itemize{
##' \item \code{model}: string with the text \code{"UWARFIMA"}
##'
##' \item \code{convergence}: An integer code. 0 indicates successful completion.
##' The error codes depend on the algorithm used.
##'
##' \item \code{message}: A character string giving any additional information
##' returned by the optimizer, or NULL.
##'
##' \item \code{counts}: an integer giving the number of function evaluations.
##'
##' \item \code{control}: a list of control parameters.
##'
##' \item \code{start}: the starting values used by the algorithm.
##'
##' \item \code{coefficients}: The best set of parameters found.
##'
##' \item \code{n}: the sample size used for estimation.
##'
##' \item \code{series}: the observed time series
##'
##' \item \code{gyt}: the transformed time series \eqn{g_2(y_t)}
##'
##' \item \code{fitted.values}: the conditional mean, which corresponds to
##' the in-sample forecast, also denoted fitted values
##'
##' \item \code{etat}: the linear predictor \eqn{g_1(\mu_t)}
##'
##' \item \code{error.scale}: the scale for the error term.
##'
##' \item \code{error}: the error term \eqn{r_t}
##'
##' \item \code{residual}: the observed minus the fitted values. The same as
##' the \code{error} term if \code{error.scale = 0}.
##'
##' \item \code{forecast}: the out-of-sample forecast (if requested).
##'
##' \item \code{xnew}: the observations of the regressors observed/predicted
##' values corresponding to the period of out-of-sample forecast.
##' Only included if \code{xreg} is not \code{NULL} and \code{nnew > 0}.
##'
##' \item \code{sll}: the sum of the conditional log-likelihood (if requested)
##'
##' \item \code{info.Matrix}: the information matrix (if requested)
##'
##' \item \code{configs}: a list with the configurations adopted to fit the model.
##' This information is used by the prediction function.
##'
##' \item \code{out.Fortran}: FORTRAN output (if requested)
##'
##' \item \code{call}: a string with the description of the fitted model.
##'
##' }
##'
##' @seealso
##' \code{\link{btsr.fit}}
##'
##' @examples
##'
##' # Generating a Unit-Weibull model where mut does not vary with time
##' # With the logit link used below, alpha = logit(mu)
##'
##' y <- UWARFIMA.sim(linkg = "logit", n = 100, seed = 2021,
##' coefs = list(alpha = 0.7, nu = 2))
##'
##' # fitting the model
##' f <- UWARFIMA.fit(yt = y, report = TRUE,
##' start = list(alpha = 0.5, nu = 1),
##' linkg = "logit", d = FALSE)
##'
##' @export
##'
##' @md
UWARFIMA.fit <- function(yt, xreg = NULL, nnew = 0, xnew = NULL, p = 0, d = TRUE,
q = 0, m = 0, inf = 1000, rho = 0.5,
start = list(), ignore.start = FALSE,
lags = list(), fixed.values = list(),
fixed.lags = list(), lower = list(nu = 0),
upper = list(nu = Inf), linkg = c("logit","logit"),
sco = FALSE, info = FALSE, extra = FALSE, xregar = TRUE,
y.start = NULL, xreg.start = NULL,
error.scale = 1, control = list(), report = TRUE,
debug = FALSE,...){
# default values for nu (merge with user provided values)
lw <- list(nu = 0); up <- list(nu = Inf)
lw[names(lower)] <- lower; up[names(upper)] <- upper
lower <- lw; upper <- up
if(is.null(rho)){
rho = 0.5
warning("rho is missing. Assuming rho = 0.5", immediate. = TRUE)
}
if(report) info = TRUE
cf <- .fit.configs(model = "UWARFIMA", yt = yt, y.start = y.start,
y.lower = 0, y.upper = 1,
openIC = c(TRUE, TRUE),xreg = xreg, xnew = xnew,
nnew = nnew, xreg.start = xreg.start, linkg = linkg,
p = p, d = d, q = q, inf = inf, m = m,
xregar = xregar, error.scale = error.scale,
start = start, ignore.start = ignore.start,
lags = lags, fixed.values = fixed.values,
fixed.lags = fixed.lags, lower = lower,
upper = upper, control = control,
sco = sco, info = info, extra = extra)
if(!is.null(cf$conv)) return(invisible(cf))
cf$nu$fvalues = c(cf$nu$fvalues, rho)
out <- .btsr.fit(model = "UWARFIMA", yt = yt, configs = cf, debug = debug)
out$call <- .fit.print(model = "UWARFIMA", p = cf$p, q = cf$q,
d = !(cf$d$nfix == 1 & cf$d$fvalues == 0),
nreg = cf$nreg)
class(out) <- c(class(out), "uwarfima")
if(report) print(summary(out))
invisible(out)
}
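##----------------------------------------------------------------------------
## Illustration (not executed). A sketch of fitting an UWARMA(1,0) model
## (d = FALSE) and requesting out-of-sample forecasts. This sketch assumes
## that, for a model without regressors, setting nnew > 0 is enough to obtain
## the forecasts reported in the `forecast` component.
##----------------------------------------------------------------------------
if (FALSE) {
  y <- UWARFIMA.sim(n = 300, seed = 2021,
                    coefs = list(alpha = 0, phi = 0.4, d = 0, nu = 10))
  fit <- UWARFIMA.fit(yt = y, p = 1, d = FALSE, nnew = 5, report = FALSE)
  fit$coefficients
  fit$forecast # 5 out-of-sample predicted values
}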
##-----------------------------------------------------------------------------
## End of file: /scratch/gouwar.j/cran-all/cranData/BTSR/R/uwarfima.R
##-----------------------------------------------------------------------------
#################################################################################
## R package BTSR is copyright Taiane Schaedler Prass and Guilherme Pumi,
## with the exceptions described in the sequel.
##
## This file is part of the R package BTSR.
##
## The R package BTSR is free software: You can redistribute it and/or
## modify it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or any later
## version (at your option). See the GNU General Public License at
## <https://www.gnu.org/licenses/> for details.
##
## The R package BTSR is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
##
#################################################################################
##--------------------------------------------------------------------------------
src/00_lbfgsb.f90
Contains the subroutines
dtrsl, dpofa, ddot
which are part of LINPACK, with authors J.J. Dongarra, Cleve Moler and G.W. Stewart.
They were taken from the Netlib archive now at www.netlib.org and do not
clearly state their copyright status.
Contains the subroutines of l-bfgs-b algorithm Written by
Ciyou Zhu, Richard Byrd, Jorge Nocedal, Jose Luis Morales and Peihuang Lu-Chen.
The original code is now available at
http://users.iems.northwestern.edu/~nocedal/lbfgsb.html
Condition for Use: This software is freely available, but we expect that
all publications describing work using this software, or all commercial
products using it, quote at least one of the references given below.
This software is released under the "New BSD License" (aka "Modified BSD License"
or "3-clause license").
References
R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound Constrained
Optimization, (1995), SIAM Journal on Scientific and Statistical Computing , 16, 5,
pp. 1190-1208.
C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN
routines for large scale bound constrained optimization (1997), ACM Transactions on
Mathematical Software, Vol 23, Num. 4, pp. 550 - 560.
J.L. Morales and J. Nocedal. L-BFGS-B: Remark on Algorithm 778: L-BFGS-B, FORTRAN
routines for large scale bound constrained optimization (2011), to appear in ACM
Transactions on Mathematical Software.
##-----------------------------------------------------------------------------
##--------------------------------------------------------------------------------
src/00_main.f90
The subroutines
xtransform, xtransformstart
were based on the Matlab subroutine 'fminsearchbnd' available at
https://www.mathworks.com/matlabcentral/fileexchange/8277-fminsearchbnd-fminsearchcon?focused=5216898&tab=function
##--------------------------------------------------------------------------------
##--------------------------------------------------------------------------------
src/00_specfun.f90
The function
trigama
was taken from
https://people.sc.fsu.edu/~jburkardt/f_src/asa121/asa121.html
The FORTRAN90 version was written by John Burkardt.
This code is distributed under the GNU LGPL license.
The function
lngamma
was taken from https://jblevins.org/mirror/amiller/lanczos.f90
This function was written by Alan Miller.
https://jblevins.org/mirror/amiller/ is an archived copy of the
Fortran source code repository of Alan Miller previously located at
http://users.bigpond.net.au/amiller/. It is hosted by Jason Blevins
with permission. All code written by Alan Miller is released into
the public domain. Code written by other authors or from other sources
(e.g., academic journals) may be subject to other restrictions.
The functions
alnrel, algdiv, gsumln, bcorr, betaln, rlog1,
gamln1, gamln, gam1, brcomp, ipmpar, dpmpar, psi
were taken from https://jblevins.org/mirror/amiller/specfunc.zip
These are special functions from the NSWC library. The file
specfunc.zip was compiled by Alan Miller.
At https://github.com/jacobwilliams/nswc one reads:
"The NSWC Mathematics Subroutine Library is a collection of Fortran 77
routines specializing in numerical mathematics collected and developed
by the U.S. Naval Surface Warfare Center. This software is made available,
without cost, to the general scientific community. The 1993 edition is an
update of the 1990 edition. NSWC has made every effort to include only reliable,
transportable, reasonably efficient and easy to use code in this library. They
have thoroughly tested all the routines on a variety of machines ranging
from supercomputers to PC's."
##--------------------------------------------------------------------------------
##--------------------------------------------------------------------------------
src/01_Nelder.f90
Contains the subroutine
minim
programmed by D.E. Shaw, with amendments by R.W.M. Wedderburn.
Further amended by Alan Miller. Further modified by Taiane Schaedler Prass.
The original code available at https://jblevins.org/mirror/amiller/minim.f90
##--------------------------------------------------------------------------------
##--------------------------------------------------------------------------------
src/01_RNG.f90
rng_seed_Blevins, rng_uniform_Blevins
were taken from https://jblevins.org/log/openmp
They were written by Jason Blevins.
rng_uniform_wh
was taken from https://people.math.sc.edu/Burkardt/f_src/asa183/asa183.f90
The FORTRAN90 version was written by John Burkardt.
This code is distributed under the GNU LGPL license.
rng_seed_sgrnd, rng_uniform_Mersenne
were taken from https://jblevins.org/mirror/amiller/mt19937a.f90
Fortran translation by Hiroshi Takano. Code converted to Fortran90 by Alan Miller.
rng_uniform_kiss32
is public domain code. It was taken from http://www.fortran.com/kiss.f90
rng_uniform_kiss64
was taken from http://lgge.osug.fr/meom/pages-perso/brankart/Outils/mod_kiss.f90
This module was written by Jean-Michel Brankart.
rng_array, rng_seed_rnstrt
were taken from https://jblevins.org/mirror/amiller/rand3.f90
FORTRAN 77 version by Steve Kifowit with modifications by Alan Miller
based upon the code written by Knuth. The code was converted to FORTRAN90
by Alan Miller. The FORTRAN77 code written by Donald E. Knuth is available at
https://www-cs-faculty.stanford.edu/~knuth/programs/frng.f
rng_seed_lfsr258, rng_uniform_Le
were taken from https://jblevins.org/mirror/amiller/lfsr258.f90
Fortran version by Alan Miller.
random_beta, standard_qnorm
were taken from https://jblevins.org/mirror/amiller/random.f90
https://jblevins.org/mirror/amiller/as241.f90
They were all written by Alan Miller. All code written by Alan Miller is
released into the public domain.
dgamma_default, dpois_raw, bd0, stirlerr
are based on the code in "dgamma.c", "dpois.c", "bd0.c", "stirlerr.c",
found in "R-4.1.0/src/nmath/". These codes were written by Catherine Loader.
##-----------------------------------------------------------------------------
## End of file: /scratch/gouwar.j/cran-all/cranData/BTSR/inst/copyright.R
##-----------------------------------------------------------------------------
#' This project was funded and sponsored by
#' [Wharton Customer Analytics](https://wca.wharton.upenn.edu).
#'
#' This package implements the BG/BB, BG/NBD and Pareto/NBD models, which
#' capture/project customer purchase patterns in a typical
#' non-contractual setting.
#'
#' While these models are developed on a customer-by-customer basis, they
#' do not necessarily require data at such a granular level. The
#' Pareto/NBD requires a "customer-by-sufficient-statistic" matrix
#' (CBS), which consists of each customer's frequency, recency (the time
#' of their last transaction) and total time observed - but the timing
#' of each and every transaction (other than the last) is not needed by
#' the model. If, however, you do have the granular data in the form of
#' an event log (which contains at least columns for customer
#' identification and the time of each transaction, and potentially more
#' columns such as transaction amount), this package provides functions
#' to convert it to a CBS. You can use \code{\link{dc.ReadLines}} to get
#' your event log from a comma-delimited file to an event log usable by
#' this package; it is possible to use read.table or read.csv, but
#' formatting will be required afterwards. You can then convert the event
#' log directly to a CBS (for both the calibration and holdout periods)
#' using \code{\link{dc.ElogToCbsCbt}}. As the name suggests, this
#' function also produces a customer-by-time matrix (CBT). This matrix
#' consists of a row for every customer and a column for every date, and
#' is populated by a statistic of your choice (reach, frequency, or
#' spend). It is not necessary for any of the models presented in this
#' package, but is used as a building block to produce the CBS.
#'
#' The BG/NBD model requires all the same inputs as the Pareto/NBD model.
#'
#' The BG/BB model requires the same information as the Pareto/NBD model,
#' but as it models discrete transaction opportunities, this information
#' can be condensed into a recency-frequency matrix. A recency-frequency
#' matrix contains a row for every recency/frequency combination in the
#' given time period, and each row contains the number of customers with
#' that recency/frequency combination. Since frequency will always be
#' less than or equal to recency, this matrix will contain (n)(n+1)/2 + 1
#' rows at most, with n as the number of transaction opportunities (of
#' course, the maximum number of rows for pooled data - for customers
#' with varying numbers of transaction opportunities - will be the sum of
#' the above equation for each unique number of transaction
#' opportunities). You can convert a CBS to recency-frequency matrices
#' using \code{\link{dc.MakeRFmatrixCal}} and
#' \code{\link{dc.MakeRFmatrixHoldout}}.
#'
#' If you want to test the data contained in the package, or have data
#' formatted as a customer-by-sufficient-statistic or recency-frequency
#' matrix, a good starting place would be
#' \code{\link{pnbd.EstimateParameters}},
#' \code{\link{bgnbd.EstimateParameters}}, or
#' \code{\link{bgbb.EstimateParameters}}.
#'
#' Following that, \code{\link{pnbd.PlotFrequencyInCalibration}},
#' \code{\link{bgnbd.PlotFrequencyInCalibration}} and
#' \code{\link{bgbb.PlotFrequencyInCalibration}} will give a check that
#' the model fits the data in-sample. Further plotting functions,
#' comparing actual and expected results, are labelled
#' "pnbd.Plot...", "bgnbd.Plot..." and "bgbb.Plot...".
#' The building blocks of these functions are also provided:
#' \code{\link{pnbd.LL}}, \code{\link{bgnbd.LL}}
#' \code{\link{bgbb.LL}}, \code{\link{pnbd.pmf}},
#' \code{\link{bgnbd.pmf}}, \code{\link{bgbb.pmf}},
#' \code{\link{pnbd.Expectation}}, \code{\link{bgnbd.Expectation}},
#' \code{\link{bgbb.Expectation}},
#' \code{\link{pnbd.ConditionalExpectedTransactions}},
#' \code{\link{bgnbd.ConditionalExpectedTransactions}}, and
#' \code{\link{bgbb.ConditionalExpectedTransactions}} may be of
#' particular interest.
#'
#' This package uses the following conventions:
#'
#' The time period used to estimate the model parameters is called the
#' _calibration period_. Users may be accustomed to this being
#' called the estimation period, or simply being referred to as
#' "in-sample". Function parameter names generally follow this
#' convention: for example, "n.cal" is used to refer to the number
#' of transaction opportunities in the calibration period.
#'
#' The time period used to validate model performance is called the
#' _holdout period_. Users may be accustomed to this being called
#' the validation period, or simply being referred to as
#' "out-of-sample". Function parameters relating to this time
#' period are generally appended with ".star". For example, n.star
#' is used to refer to the number of transaction opportunities in the
#' holdout period.
#'
#' As described in the papers referenced below, the BG/BB, BG/NBD and
#' Pareto/NBD models are generally concerned with repeat transactions,
#' not total transactions. This means that a customer's first transaction
#' in the calibration period is usually not part of the data being
#' modeled - this is due to the fact that a new customer generally does
#' not show up "on the company's radar" until after their first
#' purchase has taken place. This means that the modal number of repeat
#' purchases tends to be zero. If your data does not have a relatively large
#' number of customers with zero transactions, but does have a relatively large
#' number of customers with one transaction, and the estimation functions
#' are struggling, the problem is most likely that you are including
#' customers' very first transactions. Some of the data-conversion
#' functions have examples illustrating how to work with data that
#' includes this very first transaction. Note that this does not apply to
#' the holdout period; in the holdout period, we already know about the
#' customer and take all of their previous transactions into account.
#'
#' @references See \url{https://www.brucehardie.com} for papers, notes, and datasets relating to applied probability models in marketing.
#' @references Fader, Peter S., and Bruce G.S. Hardie. \dQuote{A Note on Deriving the Pareto/NBD Model and Related Expressions.} November. 2005. Web. \url{http://www.brucehardie.com/notes/008/}
#' @references Fader, Peter S., Bruce G.S. Hardie, and Ka L. Lee. \dQuote{RFM and CLV: Using Iso-Value Curves for Customer Base Analysis.} \emph{Journal of Marketing Research} Vol.42, pp.415-430. November. 2005. \url{http://www.brucehardie.com/papers.html}
#' @references Fader, Peter S., and Bruce G.S. Hardie. \dQuote{Deriving an Expression for P (X(t) = x) Under the Pareto/NBD Model.} September. 2006. Web. \url{http://www.brucehardie.com/notes/012/}
#' @references Fader, Peter S., and Bruce G.S. Hardie. \dQuote{Creating an RFM summary using Excel.} December. 2008. Web. \url{http://www.brucehardie.com/notes/022/}
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang. \dQuote{Customer-Base Analysis in a Discrete-Time Noncontractual Setting.} \emph{Marketing Science} 29(6), pp. 1086-1108. 2010. INFORMS. \url{http://www.brucehardie.com/papers/020/}
#' @references Jerath, Kinshuk, Peter S. Fader, and Bruce G.S. Hardie. \dQuote{Customer-Base Analysis on a 'Data Diet': Model Inference Using Repeated Cross-Sectional Summary (RCSS) Data.} June. 2011. Available at SSRN: \url{https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1708562} or \doi{10.2139/ssrn.1708562}
#' @references Fader, Peter S., Bruce G.S. Hardie, and Ka L. Lee. \dQuote{``Counting Your Customers'' the Easy Way: An Alternative to the Pareto/NBD Model.} \emph{Marketing Science} Vol.24, pp.275-284. Spring. 2005. \url{http://www.brucehardie.com/papers.html}
#' @references Fader, Peter S., Hardie, Bruce G.S., and Lee, Ka Lok. \dQuote{Computing P(alive) Using the BG/NBD Model.} December. 2008. Web. \url{http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf}
"_PACKAGE"
##-----------------------------------------------------------------------------
## End of file: /scratch/gouwar.j/cran-all/cranData/BTYD/R/BTYD.R
##-----------------------------------------------------------------------------
################################################################################
## Beta-Geometric Beta-Binomial Functions
################################################################################
library(hypergeo)
#' BG/BB Log-Likelihood using a recency-frequency matrix
#'
#' Calculates the log-likelihood of the BG/BB model.
#'
#' @param params BG/BB parameters - a vector with alpha, beta, gamma, and delta,
#' in that order. Alpha and beta are unobserved parameters for the
#' beta-Bernoulli transaction process. Gamma and delta are unobserved
#' parameters for the beta-geometric dropout process.
#' @param rf.matrix recency-frequency matrix. It must contain columns for
#' frequency ("x"), recency ("t.x"), number of transaction opportunities in
#' the calibration period ("n.cal"), and the number of customers with this
#' combination of recency, frequency and transaction opportunities in the
#' calibration period ("custs"). Note that recency must be the time between
#' the start of the calibration period and the customer's last transaction,
#' not the time between the customer's last transaction and the end of the
#' calibration period.
#' @seealso [`bgbb.LL`]
#' @return The total log-likelihood of the provided data in rf.matrix.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' params <- c(1.20, 0.75, 0.66, 2.78)
#' bgbb.rf.matrix.LL(params, rf.matrix)
#' @md
bgbb.rf.matrix.LL <- function(params,
rf.matrix) {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.rf.matrix.LL")
tryCatch(x.vec <- rf.matrix[, "x"], error = function(e) stop("Error in bgbb.rf.matrix.LL: rf.matrix must have a frequency column labelled \"x\""))
tryCatch(t.x.vec <- rf.matrix[, "t.x"], error = function(e) stop("Error in bgbb.rf.matrix.LL: rf.matrix must have a recency column labelled \"t.x\""))
tryCatch(n.periods.vec <- rf.matrix[, "n.cal"], error = function(e) stop("Error in bgbb.rf.matrix.LL: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
tryCatch(n.custs.vec <- rf.matrix[, "custs"], error = function(e) stop("Error in bgbb.rf.matrix.LL: rf.matrix must have a column for the number of customers that have each combination of \"x\", \"t.x\", and \"n.cal\", labelled \"custs\""))
LL.sum <- sum(n.custs.vec *
bgbb.LL(params,
x.vec,
t.x.vec,
n.periods.vec))
return(LL.sum)
}
#' BG/BB Log-Likelihood
#'
#' Calculates the log-likelihood of the BG/BB model.
#'
#' x, t.x, and n.cal may be vectors. The standard rules for vector operations
#' apply - if they are not of the same length, shorter vectors will be recycled
#' (start over at the first element) until they are as long as the longest
#' vector. It is advisable to keep vectors to the same length and to use single
#' values for parameters that are to be the same for all calculations. If one of
#' these parameters has a length greater than one, the output will also be a
#' vector.
#'
#' @param params BG/BB parameters - a vector with alpha, beta, gamma, and
#' delta, in that order. Alpha and beta are unobserved parameters for the
#' beta-Bernoulli transaction process. Gamma and delta are unobserved
#' parameters for the beta-geometric dropout process.
#' @param x the number of repeat transactions made by the customer in the
#' calibration period. Can also be vector of frequencies - see details.
#' @param t.x recency - the transaction opportunity in which the customer made
#' their last transaction. Can also be a vector of recencies - see details.
#' @param n.cal number of transaction opportunities in the calibration
#' period. Can also be a vector of calibration period transaction
#' opportunities - see details.
#' @return A vector of log-likelihoods as long as the longest input vector (x,
#' t.x, or n.cal).
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' params <- c(1.20, 0.75, 0.66, 2.78)
#'
#' # Returns the log likelihood of the parameters for a customer who
#' # made 3 transactions in a calibration period with 6 transaction opportunities,
#' # with the last transaction occurring during the 4th transaction opportunity.
#' bgbb.LL(params, x=3, t.x=4, n.cal=6)
#'
#' # We can also give vectors as function parameters:
#' set.seed(7)
#' x <- sample(1:3, 10, replace = TRUE)
#' t.x <- sample(3:5, 10, replace = TRUE)
#' n.cal <- rep(5, 10)
#' bgbb.LL(params, x, t.x, n.cal)
#' @md
bgbb.LL <- function(params,
x,
t.x,
n.cal) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.LL',
printnames = c("alpha", "beta", "gamma", "delta"),
x = x,
t.x = t.x,
n.cal = n.cal))
if(inherits(inputs, 'try-error')) return(inputs)
x <- inputs$x
t.x <- inputs$t.x
n.cal <- inputs$n.cal
max.length <- nrow(inputs)
alpha <- params[1]
beta <- params[2]
gamma <- params[3]
delta <- params[4]
denom.ab <- lbeta(alpha, beta)
denom.gd <- lbeta(gamma, delta)
indiv.LL.sum <- lbeta(alpha + x,
beta + n.cal - x) -
denom.ab +
lbeta(gamma, delta + n.cal) -
denom.gd
check <- n.cal - t.x - 1
addition <- function(alpha,
beta,
gamma,
delta,
denom.ab,
denom.gd,
x,
t.x,
check) {
ii <- 0:check
# implement log-sum-exp trick as shown on Wikipedia:
# https://en.wikipedia.org/wiki/LogSumExp
xset <- lbeta(alpha + x,
beta + t.x - x + ii) -
denom.ab +
lbeta(gamma + 1,
delta + t.x + ii) -
denom.gd
# was:
# log(sum(exp(xset)))
# now:
xstar <- max(xset)
xdiff <- xset - xstar
xstar + log(sum(exp(xdiff)))
}
# for every element of vectors for which t.x<n.cal, add the result of 'addition'
# in logspace. addLogs defined in dc.R. addLogs(a,b) = log(exp(a) + exp(b))
for (i in 1:max.length) {
if (check[i] >= 0)
indiv.LL.sum[i] <- addLogs(indiv.LL.sum[i],
addition(alpha,
beta,
gamma,
delta,
denom.ab,
denom.gd,
x[i],
t.x[i],
check[i]))
}
return(indiv.LL.sum)
}
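# Illustration (not executed). The log-sum-exp rewrite used in `addition`
# above: log(sum(exp(x))) is computed stably by factoring out max(x), which
# avoids underflow when all terms are very small on the log scale.
if (FALSE) {
  x <- c(-1000, -1001, -1002)
  log(sum(exp(x)))                       # underflows to -Inf
  m <- max(x); m + log(sum(exp(x - m)))  # about -999.59
}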
#' BG/BB Parameter estimation
#'
#' Estimates parameters for the BG/BB model.
#'
#' The best-fitting parameters are determined using the [`bgbb.rf.matrix.LL`]
#' function. The sum of the log-likelihood for each customer (for a set of
#' parameters) is maximized in order to estimate parameters.
#'
#' A set of starting parameters must be provided for this method. If no
#' parameters are provided, (1,1,1,1) is used as a default. It may be useful to
#' use starting values for parameters that represent your best guess of the
#' heterogeneity in the transaction and dropout rates of customers. It may be
#' necessary to run the estimation from multiple starting points to ensure that
#' it converges. To compare the log-likelihoods of different parameters, use
#' [`bgbb.rf.matrix.LL`].
#'
#' The lower bound on the parameters to be estimated is always zero, since BG/BB
#' parameters cannot be negative. The upper bound can be set with the
#' max.param.value parameter.
#'
#' @inheritParams bgbb.rf.matrix.LL
#' @param par.start initial BG/BB parameters - a vector with alpha, beta, gamma,
#' and delta, in that order. Alpha and beta are unobserved parameters for the
#' beta-Bernoulli transaction process. Gamma and delta are unobserved
#' parameters for the beta-geometric dropout process.
#' @param max.param.value the upper bound on parameters.
#' @return Vector of estimated parameters.
#' @seealso [`bgbb.rf.matrix.LL`]
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#' # log-likelihood of estimated parameters
#' bgbb.rf.matrix.LL(est.params, rf.matrix)
#' @md
bgbb.EstimateParameters <- function(rf.matrix,
par.start = c(1, 1, 1, 1),
max.param.value = 1000) {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = par.start,
func = "bgbb.EstimateParameters")
bgbb.eLL <- function(params,
rf.matrix,
max.param.value) {
params <- exp(params)
params[params > max.param.value] <- max.param.value
return(-1 * bgbb.rf.matrix.LL(params, rf.matrix))
}
logparams <- log(par.start)
results <- optim(logparams,
bgbb.eLL,
rf.matrix = rf.matrix,
max.param.value = max.param.value,
method = "L-BFGS-B")
estimated.params <- exp(results$par)
estimated.params[estimated.params > max.param.value] <- max.param.value
return(estimated.params)
}
#' BG/BB Probability Mass Function
#'
#' Probability mass function for the BG/BB.
#'
#' P(X(n)=x | alpha, beta, gamma, delta). Returns the probability that a
#' customer makes x transactions in the first n transaction opportunities.
#'
#' Parameters `n` and `x` may be vectors. The standard rules for vector
#' operations apply - if they are not of the same length, the shorter vector
#' will be recycled (start over at the first element) until it is as long as the
#' longest vector. It is advisable to keep vectors to the same length and to use
#' single values for parameters that are to be the same for all calculations. If
#' one of these parameters has a length greater than one, the output will be a
#' vector of probabilities.
#'
#' @inheritParams bgbb.rf.matrix.LL
#' @param n number of transaction opportunities; may also be a vector.
#' @param x number of transactions; may also be a vector.
#' @return Probability of X(n)=x, conditional on model parameters.
#' @seealso [`bgbb.pmf.General`]
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' params <- c(1.20, 0.75, 0.66, 2.78)
#' # The probability that a customer made 3 transactions in the first
#' # 6 transaction opportunities.
#' bgbb.pmf(params, n=6, x=3)
#'
#' # Vectors may also be used as arguments:
#' bgbb.pmf(params, n=6, x=0:6)
#' @md
bgbb.pmf <- function(params,
n,
x) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.pmf',
printnames = c("alpha", "beta", "gamma", "delta"),
n = n,
x = x))
if(inherits(inputs, 'try-error')) return(inputs)
if (any(inputs$x > inputs$n)) {
stop("bgbb.pmf was given x > n")
}
return(bgbb.pmf.General(params = params,
n.cal = 0,
n.star = inputs$n,
x.star = inputs$x))
}
#' BG/BB General Probability Mass Function
#'
#' Calculates the probability that a customer will make `x.star` transactions in
#' the first `n.star` transaction opportunities following the calibration
#' period.
#'
#' P(X(n, n + n*) = x* | alpha, beta, gamma, delta). This is a more general
#' version of [`bgbb.pmf`]. Setting `n.cal` to 0 reduces this function to the
#' probability mass function in its usual form - the probability that a
#' customer will make `x.star` transactions in the first `n.star` transaction
#' opportunities.
#'
#' It is impossible for a customer to make a negative number of transactions, or
#' to make more transactions than there are transaction opportunities. This
#' function will throw an error if such inputs are provided.
#'
#' `n.cal`, `n.star`, and `x.star` may be vectors. The standard rules for vector
#' operations apply - if they are not of the same length, shorter vectors will
#' be recycled (start over at the first element) until they are as long as the
#' longest vector. It is advisable to keep vectors to the same length and to use
#' single values for parameters that are to be the same for all calculations. If
#' one of these parameters has a length greater than one, the output will be a
#' vector of probabilities.
#'
#' @inheritParams bgbb.LL
#' @param n.star number of transaction opportunities in the holdout period, or a
#' vector of holdout period transaction opportunities.
#' @param x.star number of transactions in the holdout period, or a vector of
#' transaction frequencies.
#' @return Probability of X(n, n + n*) = x*, given BG/BB model parameters.
#' @seealso [`bgbb.pmf`]
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' params <- c(1.20, 0.75, 0.66, 2.78)
#' # Probability that a customer will make 3 transactions in the 10
#' # transaction opportunities following the 6 transaction opportunities
#' # in the calibration period, given BG/BB parameters.
#' bgbb.pmf.General(params, n.cal=6, n.star=10, x.star=3)
#'
#' # Vectors may also be provided as input:
#' # Comparison between different frequencies:
#' bgbb.pmf.General(params, n.cal=6, n.star=10, x.star=1:10)
#' # Comparison between different holdout transaction opportunities:
#' bgbb.pmf.General(params, n.cal=6, n.star=5:15, x.star=3)
#' @md
bgbb.pmf.General <- function(params,
n.cal,
n.star,
x.star) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.pmf.General',
printnames = c("alpha", "beta", "gamma", "delta"),
n.cal = n.cal,
n.star = n.star,
x.star = x.star))
if('try-error' == class(inputs)) return(inputs)
n.cal <- inputs$n.cal
n.star <- inputs$n.star
x.star <- inputs$x.star
max.length <- nrow(inputs)
if (any(x.star > n.star)) {
stop("bgbb.pmf.General was given x.star > n.star")
}
alpha <- params[1]
beta <- params[2]
gamma <- params[3]
delta <- params[4]
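# The three pieces below mirror the structure of the BG/BB pmf: piece.1 is
# the extra mass at x.star = 0 from customers who churned during the first
# n.cal opportunities, piece.2 covers customers who are still alive after all
# n.cal + n.star opportunities, and piece.3 sums over customers who churn at
# some opportunity within the holdout window.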
piece.1 <- rep(0, max.length)
piece.1[x.star == 0] <- 1 - exp(lbeta(gamma,
delta + n.cal[x.star == 0]) -
lbeta(gamma,
delta))
piece.2 <- exp(lchoose(n.star, x.star) +
lbeta(alpha + x.star,
beta + n.star - x.star) -
lbeta(alpha, beta) +
lbeta(gamma, delta + n.cal + n.star) -
lbeta(gamma, delta))
piece.3 <- rep(0, max.length)
rows.to.sum <- which(x.star <= n.star - 1)
piece.3[rows.to.sum] <- unlist(sapply(rows.to.sum,
function(index) {
ii <- x.star[index]:(n.star[index] - 1)
sum(exp(lchoose(ii, x.star[index]) +
lbeta(alpha + x.star[index],
beta + ii - x.star[index]) -
lbeta(alpha, beta) +
lbeta(gamma + 1,
delta + n.cal[index] + ii) -
lbeta(gamma, delta)))
}))
expectation <- piece.1 + piece.2 + piece.3
return(expectation)
}
#' BG/BB Expectation
#'
#' Returns the number of transactions that a randomly chosen customer (for whom
#' we have no prior information) is expected to make in the first n transaction
#' opportunities.
#'
#' E(X(n) | alpha, beta, gamma, delta)
#'
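#' The closed-form expression implemented by the function below is
#' \deqn{E(X(n)) = \frac{\alpha}{\alpha+\beta} \frac{\delta}{\gamma-1}
#'   \left[1 - \frac{\Gamma(\gamma+\delta)}{\Gamma(\gamma+\delta+n)}
#'   \frac{\Gamma(1+\delta+n)}{\Gamma(1+\delta)}\right]}
#'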
#' @inheritParams bgbb.pmf
#' @return Mean of the BG/BB probability mass function.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' params <- c(1.20, 0.75, 0.66, 2.78)
#' # Expected number of transactions that a randomly chosen customer
#' # will make in the first 10 transaction opportunities.
#' bgbb.Expectation(params, n=10)
#'
#' # We can also compare expected transactions over time:
#' bgbb.Expectation(params, n=1:10)
#' @md
bgbb.Expectation <- function(params,
n) {
# The returned inputs are not needed here, but dc.InputCheck() performs
# the parameter checks this function requires.
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.Expectation',
printnames = c("alpha", "beta", "gamma", "delta"),
n = n))
if('try-error' == class(inputs)) return(inputs)
alpha <- params[1]
beta <- params[2]
gamma <- params[3]
delta <- params[4]
piece.one <- (alpha/(alpha + beta)) * (delta/(gamma - 1))
piece.two <- exp(lgamma(gamma + delta) -
lgamma(gamma + delta + n) +
lgamma(1 + delta + n) -
lgamma(1 + delta))
return(piece.one * (1 - piece.two))
}
#' BG/BB P(Alive)
#'
#' Uses BG/BB model parameters and a customer's past transaction behavior to
#' return the probability that they will be alive in the transaction opportunity
#' following the calibration period.
#'
#' `x`, `t.x`, and `n.cal` may be vectors. The standard rules for vector
#' operations apply - if they are not of the same length, shorter vectors will
#' be recycled (start over at the first element) until they are as long as the
#' longest vector. It is advisable to keep vectors to the same length and to use
#' single values for parameters that are to be the same for all calculations. If
#' one of these parameters has a length greater than one, the output will be a
#' vector of probabilities.
#'
#' P(alive at n+1 | alpha, beta, gamma, delta, x, t.x, n)
#'
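#' The probability that the customer is alive at the (n+1)th transaction
#' opportunity is computed by the function below as
#' \deqn{P(\mathrm{alive}) =
#'   \frac{B(\alpha+x, \beta+n-x) B(\gamma, \delta+n+1)}
#'        {B(\alpha, \beta) B(\gamma, \delta) L(x, t_x, n)}}
#' where \eqn{L(x, t_x, n)} is the BG/BB likelihood, i.e. the exponential of
#' the value returned by [`bgbb.LL`].
#'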
#' @inheritParams bgbb.LL
#' @return Probability that the customer is alive at the (n+1)th transaction
#' opportunity. If `x`, `t.x`, and/or `n.cal` are of length greater than one,
#' then this will be a vector of probabilities (containing one element
#' matching each element of the longest input vector).
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' params <- c(1.20, 0.75, 0.66, 2.78)
#'
#' # The probability that a customer who made 3 transactions in
#' # the calibration period (which consisted of 6 transaction
#' # opportunities), with the last transaction occurring at the
#' # 4th transaction opportunity, is alive at the 7th transaction
#' # opportunity
#' bgbb.PAlive(params, x=3, t.x=4, n.cal=6)
#'
#' # The input parameters may also be vectors:
#' bgbb.PAlive(params, x=1, t.x=1:6, n.cal=6)
#' @md
bgbb.PAlive <- function(params,
x,
t.x,
n.cal) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.PAlive',
printnames = c("alpha", "beta", "gamma", "delta"),
x = x,
t.x = t.x,
n.cal = n.cal))
if('try-error' == class(inputs)) return(inputs)
x <- inputs$x
t.x <- inputs$t.x
n.cal <- inputs$n.cal
alpha <- params[1]
beta <- params[2]
gamma <- params[3]
delta <- params[4]
piece.1 <- exp(lbeta(alpha + x, beta + n.cal - x) -
lbeta(alpha, beta) +
lbeta(gamma, delta + n.cal + 1) -
lbeta(gamma, delta))
piece.2 <- 1/exp(bgbb.LL(params,
x,
t.x,
n.cal))
p.alive <- piece.1 * piece.2
return(p.alive)
}
#' BG/BB Discounted Expected Residual Transactions
#'
#' Computes the number of discounted expected residual transactions by a
#' customer, conditional on their behavior in the calibration period.
#'
#' DERT(d | alpha, beta, gamma, delta, x, t.x, n). This is the present value of
#' the expected future transaction stream for a customer with x transactions and
#' a recency of t.x in n.cal transaction opportunities, discounted by a rate d.
#'
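#' The expression implemented by the function below is
#' \deqn{DERT = \frac{B(\alpha+x+1, \beta+n-x)}{B(\alpha, \beta)}
#'   \frac{B(\gamma, \delta+n+1)}{B(\gamma, \delta)(1+d)}
#'   \frac{{}_2F_1(1, \delta+n+1; \gamma+\delta+n+1; 1/(1+d))}{L(x, t_x, n)}}
#' where \eqn{{}_2F_1} is the Gaussian hypergeometric function and
#' \eqn{L(x, t_x, n)} is the BG/BB likelihood, i.e. the exponential of the
#' value returned by [`bgbb.LL`].
#'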
#' `x`, `t.x`, and `n.cal` may be vectors. The standard rules for vector
#' operations apply - if they are not of the same length, shorter vectors will
#' be recycled (start over at the first element) until they are as long as the
#' longest vector. It is advisable to keep vectors to the same length and to use
#' single values for parameters that are to be the same for all calculations. If
#' one of these parameters has a length greater than one, the output will
#' also be a vector.
#'
#' @inheritParams bgbb.PAlive
#' @param d discount rate.
#' @return The present value of the expected future transaction stream for a particular customer.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' See equation 14.
#' @examples
#' params <- c(1.20, 0.75, 0.66, 2.78)
#' # Compute DERT for a customer who made 3 transactions
#' # in the calibration period(consisting of 6 transaction
#' # opportunities), with the last transaction occurring
#' # during the 4th transaction opportunity, discounted at
#' # 10%.
#' bgbb.DERT(params, x=3, t.x=4, n.cal=6, d=0.1)
#'
#' # We can also compare DERT for several customers:
#' bgbb.DERT(params, x=1:6, t.x=6, n.cal=6, d=0.1)
#' @md
bgbb.DERT <- function(params,
x,
t.x,
n.cal,
d) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.DERT',
printnames = c("alpha", "beta", "gamma", "delta"),
x = x,
t.x = t.x,
n.cal = n.cal))
if('try-error' == class(inputs)) return(inputs)
x <- inputs$x
t.x <- inputs$t.x
n.cal <- inputs$n.cal
alpha <- params[1]
beta <- params[2]
gamma <- params[3]
delta <- params[4]
piece.1 <- exp(lbeta(alpha + x + 1,
beta + n.cal - x) -
lbeta(alpha,
beta))
piece.2 <- exp(lbeta(gamma,
delta + n.cal + 1) -
lbeta(gamma,
delta))/(1 + d)
piece.3 <- Re(hypergeo(A = 1,
B = delta + n.cal + 1,
C = gamma + delta + n.cal + 1,
z = 1/(1 + d)))
piece.4 <- exp(bgbb.LL(params, x, t.x, n.cal))
dert <- piece.1 * piece.2 * (piece.3/piece.4)
return(dert)
}
#' BG/BB Discounted Expected Residual Transactions using a recency-frequency matrix
#'
#' Computes the number of discounted expected residual transactions by a
#' customer, conditional on their behavior in the calibration period.
#'
#' @inheritParams bgbb.DERT
#' @inheritParams bgbb.rf.matrix.LL
#' @return The present value of the expected future transaction stream for a particular customer.
#' @seealso [`bgbb.DERT`]
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' See equation 14.
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # compute DERT for a customer from every row in rf.matrix,
#' # discounted at 10%.
#' bgbb.rf.matrix.DERT(est.params, rf.matrix, d = 0.1)
#' @md
bgbb.rf.matrix.DERT <- function(params,
rf.matrix,
d) {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.rf.matrix.DERT")
tryCatch(x <- rf.matrix[, "x"], error = function(e) stop("Error in bgbb.rf.matrix.DERT: rf.matrix must have a frequency column labelled \"x\""))
tryCatch(t.x <- rf.matrix[, "t.x"], error = function(e) stop("Error in bgbb.rf.matrix.DERT: rf.matrix must have a recency column labelled \"t.x\""))
tryCatch(n.cal <- rf.matrix[, "n.cal"], error = function(e) stop("Error in bgbb.rf.matrix.DERT: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
return(bgbb.DERT(params,
x,
t.x,
n.cal,
d))
}
#' BG/BB Conditional Expected Transactions
#'
#' Calculates the number of expected transactions in the holdout period,
#' conditional on a customer's behavior in the calibration period.
#'
#' E(X(n, n+n*) | alpha, beta, gamma, delta, x, t.x, n). This function requires
#' the holdout period to immediately follow the calibration period.
#'
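#' The expression implemented by the function below is
#' \deqn{E(X(n, n+n^{*})) = \frac{1}{L(x, t_x, n)}
#'   \frac{B(\alpha+x+1, \beta+n-x)}{B(\alpha, \beta)}
#'   \frac{\delta}{\gamma-1}
#'   \frac{\Gamma(\gamma+\delta)}{\Gamma(1+\delta)}
#'   \left[\frac{\Gamma(1+\delta+n)}{\Gamma(\gamma+\delta+n)} -
#'         \frac{\Gamma(1+\delta+n+n^{*})}{\Gamma(\gamma+\delta+n+n^{*})}\right]}
#' where \eqn{L(x, t_x, n)} is the BG/BB likelihood, i.e. the exponential of
#' the value returned by [`bgbb.LL`].
#'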
#' `n.cal`, `n.star`, `x`, and `t.x` may be vectors. The standard rules for
#' vector operations apply - if they are not of the same length, shorter vectors
#' will be recycled (start over at the first element) until they are as long as
#' the longest vector. It is advisable to keep vectors to the same length and to
#' use single values for parameters that are to be the same for all
#' calculations. If one of these parameters has a length greater than one, the
#' output will be a vector of probabilities.
#'
#' @inheritParams bgbb.LL
#' @inheritParams bgbb.pmf.General
#' @return The number of transactions a customer is expected to make in the
#' `n.star` transaction opportunities following the calibration period,
#' conditional on their behavior during the calibration period.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' params <- c(1.20, 0.75, 0.66, 2.78)
#' # the number of transactions a customer is expected
#' # to make in the 10 transaction opportunities
#' # following the calibration period, which consisted
#' # of 6 transaction opportunities (during which they
#' # made 3 transactions, the last of which occurred
#' # in the 4th opportunity)
#' bgbb.ConditionalExpectedTransactions(params, n.cal=6, n.star=10, x=3, t.x=4)
#'
#' # We can also use vectors as input:
#' bgbb.ConditionalExpectedTransactions(params, n.cal=6, n.star=1:10, x=3, t.x=4)
#' bgbb.ConditionalExpectedTransactions(params, n.cal=6, n.star=10, x=1:4, t.x=4)
#' @md
bgbb.ConditionalExpectedTransactions <- function(params,
n.cal,
n.star,
x,
t.x) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.ConditionalExpectedTransactions',
printnames = c("alpha", "beta", "gamma", "delta"),
n.cal = n.cal,
n.star = n.star,
x = x,
t.x = t.x))
if('try-error' == class(inputs)) return(inputs)
x <- inputs$x
t.x <- inputs$t.x
n.cal <- inputs$n.cal
n.star <- inputs$n.star
alpha <- params[1]
beta <- params[2]
gamma <- params[3]
delta <- params[4]
piece.1 <- 1/exp(bgbb.LL(params, x, t.x, n.cal))
piece.2 <- exp(lbeta(alpha + x + 1,
beta + n.cal - x) -
lbeta(alpha,
beta))
piece.3 <- delta/(gamma - 1)
piece.4 <- exp(lgamma(gamma + delta) -
lgamma(1 + delta))
piece.5 <- exp(lgamma(1 + delta + n.cal) -
lgamma(gamma + delta + n.cal))
piece.6 <- exp(lgamma(1 + delta + n.cal + n.star) -
lgamma(gamma + delta + n.cal + n.star))
expected.frequency <- piece.1 * piece.2 * piece.3 * piece.4 * (piece.5 - piece.6)
which.is.nan <- is.nan(expected.frequency)
if (sum(which.is.nan) > 0) {
error.msg.long <- paste("numerical error, parameters exploded in bgbb.ConditionalExpectedTransactions",
"params:", alpha, beta, gamma, delta, "n.cal:", n.cal[which.is.nan],
"n.star:", n.star[which.is.nan], "x:", x[which.is.nan], "t.x:", t.x[which.is.nan],
"...", "piece.1:", piece.1[which.is.nan], "piece.2:", piece.2[which.is.nan],
"piece.3:", piece.3[which.is.nan], "piece.4:", piece.4[which.is.nan],
"piece.5:", piece.5[which.is.nan], "piece.6:", piece.6[which.is.nan])
stop(error.msg.long)
}
return(expected.frequency)
}
#' BG/BB Plot Frequency in Calibration Period
#'
#' Plots the actual and expected number of customers who made a certain number
#' of repeat transactions in the calibration period. Also returns a matrix with
#' this comparison.
#'
#' @inheritParams bgbb.rf.matrix.LL
#' @param censor optional. Any calibration period frequency at this number, or
#' above it, will be binned together. If the censor number is at least the
#' number of transaction opportunities in the calibration period (the maximum
#' possible frequency), no binning takes place.
#' @param plotZero If FALSE, the histogram will exclude the zero bin.
#' @param xlab descriptive label for the x axis.
#' @param ylab descriptive label for the y axis.
#' @param title title placed on the top-center of the plot.
#' @return Calibration period repeat transaction frequency comparison matrix,
#' actual vs. expected.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # plot actual vs. expected frequencies in the calibration period
#' bgbb.PlotFrequencyInCalibration(est.params, rf.matrix)
#' @md
bgbb.PlotFrequencyInCalibration <- function(params,
rf.matrix,
censor = NULL,
plotZero = TRUE,
xlab = "Calibration period transactions",
ylab = "Customers",
title = "Frequency of Repeat Transactions") {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.PlotFrequencyInCalibration")
tryCatch(x <- rf.matrix[, "x"],
error = function(e) stop("Error in bgbb.PlotFrequencyInCalibration: rf.matrix must have a frequency column labelled \"x\""))
tryCatch(n.cal <- rf.matrix[, "n.cal"],
error = function(e) stop("Error in bgbb.PlotFrequencyInCalibration: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
tryCatch(custs <- rf.matrix[, "custs"],
error = function(e) stop("Error in bgbb.PlotFrequencyInCalibration: rf.matrix must have a column for the number of customers that have each combination of \"x\", \"t.x\", and \"n.cal\", labelled \"custs\""))
max.n.cal <- max(n.cal)
if (is.null(censor))
censor <- max.n.cal
total.custs <- sum(custs)
actual.frequency <- rep(0, max.n.cal + 1)
expected.frequency <- rep(0, max.n.cal + 1)
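# For each possible frequency ii, the expected count is the sum, over the
# distinct calibration lengths n.cal, of the number of customers with that
# n.cal times the BG/BB probability of making ii transactions in n.cal
# opportunities.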
for (ii in 0:max.n.cal) {
actual.frequency[ii + 1] <- sum(custs[x == ii])
expected.frequency[ii + 1] <- sum(unlist(sapply(unique(n.cal[n.cal >= ii]),
function(this.n.cal) {
sum(custs[n.cal == this.n.cal]) * bgbb.pmf(params, this.n.cal, ii)
})))
}
freq.comparison <- rbind(actual.frequency, expected.frequency)
colnames(freq.comparison) <- 0:max.n.cal
if (ncol(freq.comparison) <= censor) {
censored.freq.comparison <- freq.comparison
} else {
## Rename for easier coding
fc <- freq.comparison
## Build censored freq comparison (cfc)
cfc <- fc
cfc <- cfc[, 1:(censor + 1)]
cfc[1, (censor + 1)] <- sum(fc[1, (censor + 1):ncol(fc)])
cfc[2, (censor + 1)] <- sum(fc[2, (censor + 1):ncol(fc)])
censored.freq.comparison <- cfc
}
if (plotZero == FALSE)
censored.freq.comparison <- censored.freq.comparison[, -1]
n.ticks <- ncol(censored.freq.comparison)
if (plotZero == TRUE) {
x.labels <- 0:(n.ticks - 1)
if (censor < ncol(freq.comparison) - 1)
x.labels[n.ticks] <- paste(n.ticks - 1, "+", sep = "")
} else {
x.labels <- 1:(n.ticks)
if (censor < ncol(freq.comparison) - 1)
x.labels[n.ticks] <- paste(n.ticks, "+", sep = "")
}
colnames(censored.freq.comparison) <- x.labels
ylim <- c(0,
ceiling(max(c(censored.freq.comparison[1, ],
censored.freq.comparison[2, ])) * 1.1))
barplot(censored.freq.comparison,
beside = TRUE,
ylim = ylim,
main = title,
xlab = xlab,
ylab = ylab,
col = 1:2)
legend("top",
legend = c("Actual", "Model"),
col = 1:2,
lwd = 2,
cex = 0.75)
return(censored.freq.comparison)
}
#' BG/BB Plot Frequency in Holdout
#'
#' Plots the actual and expected number of customers who made a certain number
#' of transactions in the holdout period, binned according to holdout period
#' frequencies. Also returns a matrix with this comparison and the number of
#' customers in each bin.
#'
#' @inheritParams bgbb.PlotFrequencyInCalibration
#' @param n.cal number of transaction opportunities in the calibration period.
#' @param rf.matrix.holdout holdout period recency-frequency matrix. It must
#' contain columns for frequency in the holdout period ("x.star"), the number
#' of transaction opportunities in the holdout period ("n.star"), and the
#' number of customers with each frequency ("custs").
#' @return Holdout period repeat transaction frequency comparison matrix (actual vs. expected).
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' rf.matrix.holdout <- donationsSummary$rf.matrix.holdout
#' # donationsSummary$rf.matrix and donationsSummary$rf.matrix.holdout already
#' # have appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # number of periods in the calibration period
#' n.cal = max(rf.matrix[,"n.cal"])
#'
#' bgbb.PlotFrequencyInHoldout(est.params, n.cal, rf.matrix.holdout)
#' @md
bgbb.PlotFrequencyInHoldout <- function(params,
n.cal,
rf.matrix.holdout,
censor = NULL,
plotZero = TRUE,
title = "Frequency of Repeat Transactions",
xlab = "Holdout period transactions",
ylab = "Customers") {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.PlotFrequencyInHoldout")
if (n.cal < 0 || !is.numeric(n.cal))
stop("n.cal must be numeric and may not be negative.")
tryCatch(x.star <- rf.matrix.holdout[, "x.star"], error = function(e) stop("Error in bgbb.PlotFrequencyInHoldout: rf.matrix.holdout must have a frequency column labelled \"x.star\""))
tryCatch(n.star <- rf.matrix.holdout[, "n.star"], error = function(e) stop("Error in bgbb.PlotFrequencyInHoldout: rf.matrix.holdout must have a column with the number of transaction opportunities for each group, labelled \"n.star\""))
tryCatch(custs <- rf.matrix.holdout[, "custs"], error = function(e) stop("Error in bgbb.PlotFrequencyInHoldout: rf.matrix.holdout must have a column for the number of customers represented by each row, labelled \"custs\""))
max.n.star <- max(n.star)
if (is.null(censor))
censor <- max.n.star
total.custs <- sum(custs)
actual.frequency <- rep(0, max.n.star + 1)
expected.frequency <- rep(0, max.n.star + 1)
for (ii in 0:max.n.star) {
actual.frequency[ii + 1] <- sum(custs[x.star == ii])
expected.frequency[ii + 1] <- sum(unlist(sapply(unique(n.star[n.star >= ii]),
function(this.n.star) {
sum(custs[n.star == this.n.star]) * bgbb.pmf.General(params, n.cal,
this.n.star, ii)
})))
}
freq.comparison <- rbind(actual.frequency, expected.frequency)
colnames(freq.comparison) <- 0:max.n.star
if (ncol(freq.comparison) <= censor) {
censored.freq.comparison <- freq.comparison
} else {
## Rename for easier coding
fc <- freq.comparison
## Build censored freq comparison (cfc)
cfc <- fc
cfc <- cfc[, 1:(censor + 1)]
cfc[1, (censor + 1)] <- sum(fc[1, (censor + 1):ncol(fc)])
cfc[2, (censor + 1)] <- sum(fc[2, (censor + 1):ncol(fc)])
if (plotZero == FALSE) {
cfc <- cfc[, -1]
}
censored.freq.comparison <- cfc
}
if (plotZero == TRUE) {
x.labels <- 0:(ncol(censored.freq.comparison) - 1)
} else {
x.labels <- 1:(ncol(censored.freq.comparison))
}
if (censor < ncol(freq.comparison) - 1) {
x.labels[(censor + 1)] <- paste(censor, "+", sep = "")
}
colnames(censored.freq.comparison) <- x.labels
barplot(censored.freq.comparison,
beside = TRUE,
main = title,
xlab = xlab,
ylab = ylab,
col = 1:2)
legend("topright",
legend = c("Actual", "Model"),
col = 1:2,
lwd = 2)
return(censored.freq.comparison)
}
#' BG/BB Tracking Cumulative Transactions Plot
#'
#' Plots the actual and expected cumulative total repeat transactions by all
#' customers for the calibration and holdout periods. Also returns a matrix with
#' this comparison.
#'
#' The holdout period should immediately follow the calibration period. This
#' function assumes that all customers' calibration periods end on the same date,
#' rather than starting on the same date (thus customers' birth periods are
#' determined using `max(n.cal) - n.cal` rather than assuming that they are all 0).
#'
#' @inheritParams bgbb.rf.matrix.LL
#' @inheritParams bgbb.PlotFrequencyInHoldout
#' @param actual.cum.repeat.transactions vector containing the cumulative number
#' of repeat transactions made by customers in all transaction opportunities
#' (both calibration and holdout periods). Its unit of time should be the same
#' as the units of the recency-frequency matrix used to estimate the model
#' parameters.
#' @param xticklab vector containing a label for each tick mark on the x axis.
#' @return Matrix containing actual and expected cumulative repeat transactions.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' data(donationsSummary)
#' # donationsSummary$rf.matrix already has appropriate column names
#' rf.matrix <- donationsSummary$rf.matrix
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # get the annual repeat transactions, and transform them into
#' # a cumulative form
#' actual.inc.repeat.transactions <- donationsSummary$annual.trans
#' actual.cum.repeat.transactions <- cumsum(actual.inc.repeat.transactions)
#'
#' # set appropriate x-axis
#' x.tickmarks <- c( "'96","'97","'98","'99","'00","'01","'02","'03","'04","'05","'06" )
#'
#' # plot actual vs. expected transactions. The calibration period was 6 periods long.
#' bgbb.PlotTrackingCum(est.params, rf.matrix, actual.cum.repeat.transactions, xticklab=x.tickmarks)
#' @md
bgbb.PlotTrackingCum <- function(params,
rf.matrix,
actual.cum.repeat.transactions,
xlab = "Time",
ylab = "Cumulative Transactions",
xticklab = NULL,
title = "Tracking Cumulative Transactions") {
# The returned inputs are not needed here, but dc.InputCheck() performs
# the parameter checks this plot requires.
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.PlotTrackingCum',
printnames = c("alpha", "beta", "gamma", "delta"),
actual.cum.repeat.transactions = actual.cum.repeat.transactions))
if('try-error' == class(inputs)) return(inputs)
tryCatch(n.cal <- rf.matrix[, "n.cal"],
error = function(e) stop("Error in bgbb.PlotTrackingCum: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
tryCatch(custs <- rf.matrix[, "custs"],
error = function(e) stop("Error in bgbb.PlotTrackingCum: rf.matrix must have a column for the number of customers represented by each row, labelled \"custs\""))
actual <- actual.cum.repeat.transactions
n.periods <- length(actual)
cust.birth.periods <- max(n.cal) - n.cal
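# Customers are taken to enter max(n.cal) - n.cal periods after the start of
# the observation window; for each period, sum the expected cumulative
# transactions of every cohort already "born" by then, weighted by the number
# of customers in that cohort.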
expected <- sapply(1:n.periods, function(interval) {
if (interval <= min(cust.birth.periods))
return(0)
sum(bgbb.Expectation(params,
interval -
cust.birth.periods[cust.birth.periods <= interval]) *
custs[cust.birth.periods <= interval])
})
pur.comparison <- rbind(actual, expected)
ylim <- c(0, max(c(actual, expected)) * 1.05)
plot(actual,
type = "l",
xaxt = "n",
xlab = xlab,
ylab = ylab,
col = 1,
ylim = ylim,
main = title)
lines(expected, lty = 2, col = 2)
if (is.null(xticklab) == FALSE) {
if (ncol(pur.comparison) != length(xticklab)) {
stop("Plot error, xticklab does not have the correct size in bgbb.PlotTrackingCum")
}
axis(1, at = 1:ncol(pur.comparison), labels = xticklab)
}
if (is.null(n.cal) == FALSE) {
abline(v = max(n.cal), lty = 2)
}
legend("bottomright",
legend = c("Actual", "Model"),
col = 1:2,
lty = 1:2,
lwd = 1,
cex = 0.75)
return(pur.comparison)
}
#' BG/BB Tracking Incremental Transactions Plot
#'
#' Plots the actual and expected incremental total repeat transactions by all
#' customers for the calibration and holdout periods. Also returns a matrix of
#' this comparison.
#'
#' The holdout period should immediately follow the calibration period. This
#' function assumes that all customers' calibration periods end on the same date,
#' rather than starting on the same date (thus customers' birth periods are
#' determined using `max(n.cal) - n.cal` rather than assuming that they are all 0).
#'
#' @inheritParams bgbb.PlotTrackingCum
#' @param actual.inc.repeat.transactions vector containing the incremental
#' number of repeat transactions made by customers in all transaction
#' opportunities (both calibration and holdout periods). Its unit of time
#' should be the same as the units of the recency-frequency matrix used to
#' estimate the model parameters.
#' @return Matrix containing actual and expected incremental repeat transactions.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' data(donationsSummary)
#' # donationsSummary$rf.matrix already has appropriate column names
#' rf.matrix <- donationsSummary$rf.matrix
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # get the annual repeat transactions
#' actual.inc.repeat.transactions <- donationsSummary$annual.trans
#'
#' # Set appropriate x-axis
#' x.tickmarks <- c( "'96","'97","'98","'99","'00","'01","'02","'03","'04","'05","'06" )
#'
#' # Plot actual vs. expected transactions. The calibration period was 6 periods long.
#' bgbb.PlotTrackingInc(est.params, rf.matrix, actual.inc.repeat.transactions, xticklab=x.tickmarks)
#' @md
bgbb.PlotTrackingInc <- function(params,
rf.matrix,
actual.inc.repeat.transactions,
xlab = "Time",
ylab = "Transactions",
xticklab = NULL,
title = "Tracking Incremental Transactions") {
# The returned inputs are not needed here, but dc.InputCheck() performs
# the parameter checks this plot requires.
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.PlotTrackingInc',
printnames = c("alpha", "beta", "gamma", "delta"),
actual.inc.repeat.transactions = actual.inc.repeat.transactions))
if('try-error' == class(inputs)) return(inputs)
tryCatch(n.cal <- rf.matrix[, "n.cal"], error = function(e) stop("Error in bgbb.PlotTrackingInc: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
tryCatch(custs <- rf.matrix[, "custs"], error = function(e) stop("Error in bgbb.PlotTrackingInc: rf.matrix must have a column for the number of customers represented by each row, labelled \"custs\""))
actual <- actual.inc.repeat.transactions
n.periods <- length(actual)
cust.birth.periods <- max(n.cal) - n.cal
expected.cumulative <- sapply(1:n.periods, function(interval) {
if (interval <= min(cust.birth.periods))
return(0)
sum(bgbb.Expectation(params, interval - cust.birth.periods[cust.birth.periods <=
interval]) * custs[cust.birth.periods <= interval])
})
expected <- dc.CumulativeToIncremental(expected.cumulative)
pur.comparison <- rbind(actual, expected)
ylim <- c(0, max(c(actual, expected)) * 1.05)
plot(actual,
type = "l",
xaxt = "n",
xlab = xlab,
ylab = ylab,
col = 1,
ylim = ylim,
main = title)
lines(expected, lty = 2, col = 2)
if (is.null(xticklab) == FALSE) {
if (ncol(pur.comparison) != length(xticklab)) {
stop("Plot error, xticklab does not have the correct size in bgbb.PlotTrackingInc")
}
axis(1, at = 1:ncol(pur.comparison), labels = xticklab)
}
if (is.null(n.cal) == FALSE) {
abline(v = max(n.cal), lty = 2)
}
legend("topright",
legend = c("Actual", "Model"),
col = 1:2,
lty = 1:2,
lwd = 1,
cex = 0.75)
return(pur.comparison)
}
#' BG/BB Plot Frequency vs Conditional Expected Frequency
#'
#' Plots the actual and conditional expected number of transactions made by
#' customers in the holdout period, binned according to calibration period
#' frequencies. Also returns a matrix with this comparison and the number of
#' customers in each bin.
#'
#' @inheritParams bgbb.PlotTrackingCum
#' @param n.star number of transaction opportunities in the holdout period.
#' @param x.star a vector containing the number of transactions made in the
#' holdout period by the groups of customers with the same recency and
#' frequency in the calibration period. It must be in the same order as the
#' rf.matrix.
#' @param trunc optional integer used to truncate the plot. In the plot, all
#' calibration period frequencies above the truncation number will be removed.
#' If the truncation number is greater than the maximum frequency, R will warn
#' you and change it to the maximum frequency.
#' @return Holdout period transaction frequency comparison matrix (actual vs.
#' expected), binned by calibration period frequency.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # get the holdout period transactions
#' x.star <- donationsSummary$x.star
#'
#' # number of transaction opportunities in the holdout period
#' n.star <- 5
#'
#' # Plot holdout period transactions
#' bgbb.PlotFreqVsConditionalExpectedFrequency(est.params, n.star, rf.matrix, x.star, trunc=6)
#' @md
bgbb.PlotFreqVsConditionalExpectedFrequency <- function(params,
n.star,
rf.matrix,
x.star,
trunc = NULL,
xlab = "Calibration period transactions",
ylab = "Holdout period transactions",
xticklab = NULL,
title = "Conditional Expectation") {
if (length(x.star) != nrow(rf.matrix))
stop("x.star must have the same number of entries as rows in rf.matrix")
if (!(length(n.star) == 1 || length(n.star) == nrow(rf.matrix)))
stop("n.star must be a single value or have as many entries as rows in rf.matrix")
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = params,
func = "bgbb.PlotFreqVsConditionalExpectedFrequency")
if (any(x.star < 0) || !is.numeric(x.star))
stop("x.star must be numeric and may not contain negative numbers.")
if (any(n.star < 0) || !is.numeric(n.star))
stop("n.star must be numeric and may not contain negative numbers.")
n.star <- rep(n.star, length.out = nrow(rf.matrix))
tryCatch(x <- rf.matrix[, "x"], error = function(e) stop("Error in bgbb.PlotFreqVsConditionalExpectedFrequency: rf.matrix must have a frequency column labelled \"x\""))
tryCatch(t.x <- rf.matrix[, "t.x"], error = function(e) stop("Error in bgbb.PlotFreqVsConditionalExpectedFrequency: rf.matrix must have a recency column labelled \"t.x\""))
tryCatch(n.cal <- rf.matrix[, "n.cal"], error = function(e) stop("Error in bgbb.PlotFreqVsConditionalExpectedFrequency: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
tryCatch(custs <- rf.matrix[, "custs"], error = function(e) stop("Error in bgbb.PlotFreqVsConditionalExpectedFrequency: rf.matrix must have a column for the number of customers that have each combination of \"x\", \"t.x\", and \"n.cal\", labelled \"custs\""))
if (is.null(trunc))
trunc <- max(n.cal)
if (trunc > max(n.cal)) {
warning("The truncation number provided in bgbb.PlotFreqVsConditionalExpectedFrequency was greater than the maximum number of possible transactions. It has been reduced to ",
max(n.cal))
trunc <- max(n.cal)
}
actual.freq <- rep(0, max(n.cal) + 1)
expected.freq <- rep(0, max(n.cal) + 1)
bin.size <- rep(0, max(n.cal) + 1)
for (ii in 0:max(n.cal)) {
bin.size[ii + 1] <- sum(custs[x == ii])
actual.freq[ii + 1] <- sum(x.star[x == ii])
expected.freq[ii + 1] <- sum(bgbb.ConditionalExpectedTransactions(params,
n.cal[x == ii], n.star[x == ii], ii, t.x[x == ii]) * custs[x == ii])
}
comparison <- rbind(actual.freq/bin.size, expected.freq/bin.size, bin.size)
colnames(comparison) <- paste("freq.", 0:max(n.cal), sep = "")
if (is.null(xticklab) == FALSE) {
x.labels <- xticklab
} else {
if ((trunc + 1) < ncol(comparison)) {
x.labels <- 0:(trunc)
} else {
x.labels <- 0:(ncol(comparison) - 1)
}
}
actual.freq <- comparison[1, 1:(trunc + 1)]
expected.freq <- comparison[2, 1:(trunc + 1)]
custs.in.plot <- sum(comparison[3, 1:(trunc + 1)])
if (custs.in.plot < 0.9 * sum(custs)) {
warning("Less than 90% of customers are represented in your plot (", custs.in.plot,
" of ", sum(custs), " are plotted).")
}
ylim <- c(0, ceiling(max(c(actual.freq, expected.freq)) * 1.1))
plot(actual.freq,
type = "l",
xaxt = "n",
col = 1,
ylim = ylim,
xlab = xlab,
ylab = ylab,
main = title)
lines(expected.freq,
lty = 2,
col = 2)
axis(1,
at = 1:(trunc + 1),
labels = x.labels)
legend("topleft",
legend = c("Actual", "Model"),
col = 1:2,
lty = 1:2,
lwd = 1)
return(comparison)
}
#' BG/BB Plot Recency vs Conditional Expected Frequency
#'
#' Plots the actual and conditional expected number of transactions made by
#' customers in the holdout period, binned according to calibration period
#' recencies. Also returns a matrix with this comparison and the number of
#' customers in each bin.
#'
#' @inheritParams bgbb.PlotFreqVsConditionalExpectedFrequency
#' @return Holdout period transaction frequency comparison matrix (actual vs.
#' expected), binned by calibration period recency.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # get the holdout period transactions
#' x.star <- donationsSummary$x.star
#'
#' # number of transaction opportunities in the holdout period
#' n.star <- 5
#'
#' # Compare holdout period transactions.
#' bgbb.PlotRecVsConditionalExpectedFrequency(est.params, n.star, rf.matrix, x.star, trunc=6)
#' @md
bgbb.PlotRecVsConditionalExpectedFrequency <- function(params,
n.star,
rf.matrix,
x.star,
trunc = NULL,
xlab = "Calibration period recency",
ylab = "Holdout period transactions",
xticklab = NULL,
title = "Conditional Expected Transactions by Recency") {
if (length(x.star) != nrow(rf.matrix))
stop("x.star must have the same number of entries as rows in rf.matrix")
if (!(length(n.star) == 1 || length(n.star) == nrow(rf.matrix)))
stop("n.star must be a single value or have as many entries as rows in rf.matrix")
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = params,
func = "bgbb.PlotRecVsConditionalExpectedFrequency")
if (any(x.star < 0) || !is.numeric(x.star))
stop("x.star must be numeric and may not contain negative numbers.")
if (any(n.star < 0) || !is.numeric(n.star))
stop("n.star must be numeric and may not contain negative numbers.")
n.star <- rep(n.star, length.out = nrow(rf.matrix))
tryCatch(x <- rf.matrix[, "x"], error = function(e) stop("Error in bgbb.PlotRecVsConditionalExpectedFrequency: rf.matrix must have a frequency column labelled \"x\""))
tryCatch(t.x <- rf.matrix[, "t.x"], error = function(e) stop("Error in bgbb.PlotRecVsConditionalExpectedFrequency: rf.matrix must have a recency column labelled \"t.x\""))
tryCatch(n.cal <- rf.matrix[, "n.cal"], error = function(e) stop("Error in bgbb.PlotRecVsConditionalExpectedFrequency: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
tryCatch(custs <- rf.matrix[, "custs"], error = function(e) stop("Error in bgbb.PlotRecVsConditionalExpectedFrequency: rf.matrix must have a column for the number of customers that have each combination of \"x\", \"t.x\", and \"n.cal\", labelled \"custs\""))
if (is.null(trunc))
trunc <- max(n.cal)
if (trunc > max(n.cal)) {
warning("The truncation number provided in bgbb.PlotRecVsConditionalExpectedFrequency was greater than the maximum number of possible transactions. It has been reduced to ",
max(n.cal))
trunc <- max(n.cal)
}
actual.freq <- rep(0, max(n.cal) + 1)
expected.freq <- rep(0, max(n.cal) + 1)
bin.size <- rep(0, max(n.cal) + 1)
for (ii in 0:max(n.cal)) {
bin.size[ii + 1] <- sum(custs[t.x == ii])
actual.freq[ii + 1] <- sum(x.star[t.x == ii])
expected.freq[ii + 1] <- sum(bgbb.ConditionalExpectedTransactions(params,
n.cal[t.x == ii], n.star[t.x == ii], x[t.x == ii], ii) * custs[t.x ==
ii])
}
comparison <- rbind(actual.freq/bin.size,
expected.freq/bin.size,
bin.size)
colnames(comparison) <- paste("rec.", 0:max(n.cal), sep = "")
custs.in.plot <- sum(comparison[3, 1:(trunc + 1)])
if (custs.in.plot < 0.9 * sum(custs)) {
warning("Less than 90% of customers are represented in your plot (", custs.in.plot,
" of ", sum(custs), " are plotted).")
}
if (is.null(xticklab) == FALSE) {
x.labels <- xticklab
} else {
if ((trunc + 1) < ncol(comparison)) {
x.labels <- 0:(trunc)
} else {
x.labels <- 0:(ncol(comparison) - 1)
}
}
actual.freq <- comparison[1, 1:(trunc + 1)]
expected.freq <- comparison[2, 1:(trunc + 1)]
ylim <- c(0, ceiling(max(c(actual.freq, expected.freq)) * 1.1))
plot(actual.freq,
type = "l",
xaxt = "n",
col = 1,
ylim = ylim,
xlab = xlab,
ylab = ylab,
main = title)
lines(expected.freq, lty = 2, col = 2)
axis(1,
at = 1:(trunc + 1),
labels = x.labels)
legend("topleft",
legend = c("Actual", "Model"),
col = 1:2,
lty = 1:2,
lwd = 1)
return(comparison)
}
#' BG/BB Posterior Mean (l,m)th Product Moment
#'
#' Computes the `(l,m)`th product moment of the joint posterior distribution of
#' P (the Bernoulli transaction process parameter) and Theta (the geometric
#' dropout process parameter).
#'
#' E((P)^l(Theta)^m | alpha, beta, gamma, delta, x, t.x, n)
#'
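#' The expression implemented by the function below is
#' \deqn{E(P^{l} \Theta^{m}) =
#'   \frac{B(\alpha+l, \beta)}{B(\alpha, \beta)}
#'   \frac{B(\gamma+m, \delta)}{B(\gamma, \delta)}
#'   \frac{L(x, t_x, n \mid \alpha+l, \beta, \gamma+m, \delta)}
#'        {L(x, t_x, n \mid \alpha, \beta, \gamma, \delta)}}
#' where \eqn{L(\cdot)} denotes the BG/BB likelihood, i.e. the exponential of
#' the value returned by [`bgbb.LL`].
#'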
#' `x`, `t.x`, and `n.cal` may be vectors. The standard rules for vector
#' operations apply - if they are not of the same length, shorter vectors will
#' be recycled (start over at the first element) until they are as long as the
#' longest vector. It is advisable to keep vectors to the same length and to use
#' single values for parameters that are to be the same for all calculations. If
#' one of these parameters has a length greater than one, the output will
#' also be a vector.
#'
#' @inheritParams bgbb.LL
#' @param l moment degree of P
#' @param m moment degree of Theta
#' @return The expected posterior `(l,m)`th product moment.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#'
#' See equation 17.
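#' @examples
#' # Illustrative only: the parameter values and the customer profile below
#' # mirror those used in the other examples in this file.
#' params <- c(1.20, 0.75, 0.66, 2.78)
#' # Posterior mean of P (l = 1, m = 0) for a customer who made 3 transactions
#' # in 6 calibration opportunities, the last at the 4th opportunity:
#' bgbb.PosteriorMeanLmProductMoment(params, l = 1, m = 0,
#'                                   x = 3, t.x = 4, n.cal = 6)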
#' @md
bgbb.PosteriorMeanLmProductMoment <- function(params,
l,
m,
x,
t.x,
n.cal) {
if (l < 0 || length(l) != 1 || !is.numeric(l))
stop("l must be a single numeric value and may not be less than 0.")
if (m < 0 || length(m) != 1 || !is.numeric(m))
stop("m must be a single numeric value and may not be less than 0.")
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.PosteriorMeanLmProductMoment',
printnames = c("alpha", "beta", "gamma", "delta"),
x = x,
t.x = t.x,
n.cal = n.cal))
if('try-error' == class(inputs)) return(inputs)
x <- inputs$x
t.x <- inputs$t.x
n.cal <- inputs$n.cal
alpha <- params[1]
beta <- params[2]
gamma <- params[3]
delta <- params[4]
piece.1 <- exp(lbeta(alpha + l, beta) -
lbeta(alpha, beta) +
lbeta(gamma + m, delta) -
lbeta(gamma, delta))
piece.2 <- exp(bgbb.LL(c(alpha + l, beta, gamma + m, delta),
x,
t.x,
n.cal))
piece.3 <- exp(bgbb.LL(params,
x,
t.x,
n.cal))
mean <- piece.1 * (piece.2/piece.3)
return(mean)
}
#' BG/BB Posterior Mean Transaction Rate
#'
#' Computes the mean of the marginal posterior distribution of P, the Bernoulli
#' transaction process parameter.
#'
#' E(P | alpha, beta, gamma, delta, x, t.x, n). This is calculated by setting `l
#' = 1` and `m = 0` in [`bgbb.PosteriorMeanLmProductMoment`].
#'
#' `x`, `t.x`, and `n.cal` may be vectors. The standard rules for vector
#' operations apply - if they are not of the same length, shorter vectors will
#' be recycled (start over at the first element) until they are as long as the
#' longest vector. It is advisable to keep vectors to the same length and to use
#' single values for parameters that are to be the same for all calculations. If
#' one of these parameters has a length greater than one, the output will
#' also be a vector.
#'
#' @inheritParams bgbb.LL
#' @return The posterior mean transaction rate.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @seealso [`bgbb.rf.matrix.PosteriorMeanTransactionRate`]
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # return the posterior mean transaction rate vector
#' bgbb.rf.matrix.PosteriorMeanTransactionRate(est.params, rf.matrix)
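#'
#' # The posterior mean can also be computed directly for a single customer;
#' # the profile below (x = 3, t.x = 4, n.cal = 6) is illustrative only.
#' bgbb.PosteriorMeanTransactionRate(est.params, x = 3, t.x = 4, n.cal = 6)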
#' @md
bgbb.PosteriorMeanTransactionRate <- function(params,
x,
t.x,
n.cal) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.PosteriorMeanTransactionRate',
printnames = c("alpha", "beta", "gamma", "delta"),
x = x,
t.x = t.x,
n.cal = n.cal))
if('try-error' == class(inputs)) return(inputs)
x <- inputs$x
t.x <- inputs$t.x
n.cal <- inputs$n.cal
mean.transaction.rate <- bgbb.PosteriorMeanLmProductMoment(params = params,
l = 1,
m = 0,
x = x,
t.x = t.x,
n.cal = n.cal)
return(mean.transaction.rate)
}
#' BG/BB Posterior Mean Transaction Rate using a recency-frequency matrix
#'
#' Computes the mean of the marginal posterior distribution of P, the Bernoulli
#' transaction process parameter.
#'
#' `rf.matrix` must have columns `x`, `t.x`, and `n.cal`.
#'
#' @inheritParams bgbb.rf.matrix.LL
#' @return The posterior mean transaction rate.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @seealso [`bgbb.PosteriorMeanTransactionRate`]
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # return the posterior mean transaction rate vector
#' bgbb.rf.matrix.PosteriorMeanTransactionRate(est.params, rf.matrix)
#' @md
bgbb.rf.matrix.PosteriorMeanTransactionRate <- function(params,
rf.matrix) {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.rf.matrix.PosteriorMeanTransactionRate")
tryCatch(x <- rf.matrix[, "x"], error = function(e) stop("Error in bgbb.rf.matrix.PosteriorMeanTransactionRate: rf.matrix must have a frequency column labelled \"x\""))
tryCatch(t.x <- rf.matrix[, "t.x"], error = function(e) stop("Error in bgbb.rf.matrix.PosteriorMeanTransactionRate: rf.matrix must have a recency column labelled \"t.x\""))
tryCatch(n.cal <- rf.matrix[, "n.cal"], error = function(e) stop("Error in bgbb.rf.matrix.PosteriorMeanTransactionRate: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
return(bgbb.PosteriorMeanTransactionRate(params,
x,
t.x,
n.cal))
}
#' BG/BB Posterior Mean Dropout Rate
#'
#' Computes the mean of the marginal posterior distribution of Theta, the
#' geometric dropout process parameter.
#'
#' E(Theta | alpha, beta, gamma, delta, x, t.x, n). This is calculated by
#' setting `l = 0` and `m = 1` in [`bgbb.PosteriorMeanLmProductMoment`].
#'
#' `x`, `t.x`, and `n.cal` may be vectors. The standard rules for vector
#' operations apply - if they are not of the same length, shorter vectors will
#' be recycled (start over at the first element) until they are as long as the
#' longest vector. It is advisable to keep vectors to the same length and to
#' use single values for parameters that are to be the same for all
#' calculations. If one of these parameters has a length greater than one, the
#' output will also be a vector.
#'
#' @inheritParams bgbb.LL
#' @return The posterior mean dropout rate.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @seealso [`bgbb.rf.matrix.PosteriorMeanDropoutRate`]
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # return the posterior mean dropout rate vector
#' bgbb.rf.matrix.PosteriorMeanDropoutRate(est.params, rf.matrix)
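#'
#' # The posterior mean can also be computed directly for a single customer;
#' # the profile below (x = 3, t.x = 4, n.cal = 6) is illustrative only.
#' bgbb.PosteriorMeanDropoutRate(est.params, x = 3, t.x = 4, n.cal = 6)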
#' @md
bgbb.PosteriorMeanDropoutRate <- function(params,
x,
t.x,
n.cal) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgbb.PosteriorMeanDropoutRate',
printnames = c("alpha", "beta", "gamma", "delta"),
x = x,
t.x = t.x,
n.cal = n.cal))
if('try-error' == class(inputs)) return(inputs)
x <- inputs$x
t.x <- inputs$t.x
n.cal <- inputs$n.cal
mean.dropout.rate <- bgbb.PosteriorMeanLmProductMoment(params = params,
l = 0,
m = 1,
x = x,
t.x = t.x,
n.cal = n.cal)
return(mean.dropout.rate)
}
#' BG/BB Posterior Mean Dropout Rate using a recency-frequency matrix
#'
#' Computes the mean of the marginal posterior distribution of Theta, the
#' geometric dropout process parameter.
#'
#' E(Theta | alpha, beta, gamma, delta, x, t.x, n). This is calculated by
#' setting `l = 0` and `m = 1` in [`bgbb.PosteriorMeanLmProductMoment`].
#'
#' `rf.matrix` must have columns `x`, `t.x`, and `n.cal`.
#'
#' @inheritParams bgbb.rf.matrix.LL
#' @return The posterior mean dropout rate.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @seealso [`bgbb.PosteriorMeanDropoutRate`]
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # return the posterior mean dropout rate vector
#' bgbb.rf.matrix.PosteriorMeanDropoutRate(est.params, rf.matrix)
#' @md
bgbb.rf.matrix.PosteriorMeanDropoutRate <- function(params,
rf.matrix) {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.rf.matrix.PosteriorMeanDropoutRate")
tryCatch(x <- rf.matrix[, "x"], error = function(e) stop("Error in bgbb.rf.matrix.PosteriorMeanDropoutRate: rf.matrix must have a frequency column labelled \"x\""))
tryCatch(t.x <- rf.matrix[, "t.x"], error = function(e) stop("Error in bgbb.rf.matrix.PosteriorMeanDropoutRate: rf.matrix must have a recency column labelled \"t.x\""))
tryCatch(n.cal <- rf.matrix[, "n.cal"], error = function(e) stop("Error in bgbb.rf.matrix.PosteriorMeanDropoutRate: rf.matrix must have a column for number of transaction opportunities in the calibration period, labelled \"n.cal\""))
return(bgbb.PosteriorMeanDropoutRate(params,
x,
t.x,
n.cal))
}
#' BG/BB Heatmap of Holdout Period Expected Transactions
#'
#' Plots a heatmap based on the conditional expected holdout period frequency
#' for each recency-frequency combination in the calibration period.
#'
#' E(X(n, n+n*) | alpha, beta, gamma, delta, x, t.x, n). This function requires
#' the holdout period to immediately follow the calibration period.
#'
#' @inheritParams bgbb.pmf
#' @param n.cal number of transaction opportunities in the calibration period.
#' @param n.star number of transaction opportunities in the holdout period.
#' @param xlab descriptive label for the x axis.
#' @param ylab descriptive label for the y axis.
#' @param xticklab vector containing a label for each tick mark on the x axis.
#' @param title title placed on the top-center of the plot.
#' @return A matrix containing the conditional expected transactions in the
#' holdout period for each recency-frequency combination in the calibration
#' period. The rows represent calibration period frequencies, and the columns
#' represent calibration period recencies.
#' @seealso [`bgbb.ConditionalExpectedTransactions`]
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang.
#' "Customer-Base Analysis in a Discrete-Time Noncontractual Setting."
#' _Marketing Science_ 29(6), pp. 1086-1108. 2010. INFORMS.
#' [Web.](http://www.brucehardie.com/papers/020/)
#' @examples
#' data(donationsSummary)
#'
#' rf.matrix <- donationsSummary$rf.matrix
#' # donationsSummary$rf.matrix already has appropriate column names
#'
#' # starting-point parameters
#' startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' est.params <- bgbb.EstimateParameters(rf.matrix, startingparams)
#'
#' # Plot a heatmap of conditional expected transactions in
#' # a holdout period of 5 transaction opportunities, given
#' # that the calibration period consisted of 6 transaction
#' # opportunities.
#' bgbb.HeatmapHoldoutExpectedTrans(est.params, n.cal=6, n.star=5)
#' @md
bgbb.HeatmapHoldoutExpectedTrans <- function(params,
n.cal,
n.star,
xlab = "Recency",
ylab = "Frequency",
xticklab = NULL,
title = "Heatmap of Conditional Expected Transactions") {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.HeatmapHoldoutExpectedTrans")
if (n.cal < 0 || !is.numeric(n.cal))
stop("n.cal must be numeric and may not be negative.")
if (n.star < 0 || !is.numeric(n.star))
stop("n.star must be numeric and may not be negative.")
heatmap.mx <- matrix(0, n.cal + 1, n.cal + 1)
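# Only recency-frequency combinations with frequency <= recency are feasible,
# so the loops below fill that triangle of the matrix (plus the (0, 0) cell)
# and leave the remaining cells at zero.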
heatmap.mx[1, 1] <- bgbb.ConditionalExpectedTransactions(params,
n.cal,
n.star,
0, 0)
for (xx in 1:n.cal) {
for (tt in 1:n.cal) {
if (xx <= tt) {
expected.trans <- bgbb.ConditionalExpectedTransactions(params,
n.cal,
n.star,
xx,
tt)
heatmap.mx[xx + 1, tt + 1] <- expected.trans
}
}
}
if (is.null(xticklab) == TRUE) {
xticklab <- 0:n.cal
}
colnames(heatmap.mx) <- xticklab
rownames(heatmap.mx) <- 0:n.cal
heatmap(heatmap.mx,
Rowv = NA,
Colv = NA,
col = gray(8:2/9),
scale = "none",
ylab = ylab,
xlab = xlab,
main = title,
verbose = TRUE)
return(heatmap.mx)
}
#' BG/BB Plot Transaction Rate Heterogeneity
#'
#' Plots and returns the estimated beta distribution of P (customers'
#' propensities to purchase).
#'
#' This returns the distribution of each customer's Bernoulli parameter, which
#' determines the level of their purchasing (using the BG/BB assumption that
#' purchasing on the individual level can be modeled with a Bernoulli
#' distribution).
#'
#' @inheritParams bgbb.pmf
#' @return Distribution of customers' propensities to purchase.
#' @examples
#' params <- c(1.2, 0.75, 0.66, 2.78)
#' bgbb.PlotTransactionRateHeterogeneity(params)
#' params <- c(0.2, 1.5, 3.2, 6)
#' bgbb.PlotTransactionRateHeterogeneity(params)
#' @md
bgbb.PlotTransactionRateHeterogeneity <- function(params) {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.PlotTransactionRateHeterogeneity")
alpha <- params[1]
beta <- params[2]
x.axis.ticks <- 0.01 * 0:100
heterogeneity <- dbeta(x.axis.ticks, alpha, beta)
plot(x.axis.ticks,
heterogeneity,
type = "l",
xlab = "Transaction Rate",
ylab = "Density",
main = "Heterogeneity in Transaction Rate")
rate.mean <- round(alpha/(alpha + beta), 4)
rate.var <- round((alpha * beta)/((alpha + beta)^2 * (alpha + beta + 1)), 4)
mean.var.label <- paste("Mean:", rate.mean, " Var:", rate.var)
mtext(mean.var.label, side = 3)
return(heterogeneity)
}
#' BG/BB Plot Dropout Rate Heterogeneity
#'
#' Plots and returns the estimated beta distribution of Theta (customers'
#' propensities to drop out).
#'
#' This returns the distribution of each customer's geometric parameter that
#' determines their lifetime (using the BG/BB assumption that a customer's
#' lifetime can be modeled with a geometric distribution).
#'
#' @inheritParams bgbb.pmf
#' @return Distribution of customers' propensities to drop out.
#' @examples
#' params <- c(1.2, 0.75, 0.66, 2.78)
#' bgbb.PlotDropoutRateHeterogeneity(params)
#' params <- c(0.2, 1.5, 3.2, 6)
#' bgbb.PlotDropoutRateHeterogeneity(params)
#' @md
bgbb.PlotDropoutRateHeterogeneity <- function(params) {
dc.check.model.params(printnames = c("alpha", "beta", "gamma", "delta"),
params = params,
func = "bgbb.PlotDropoutRateHeterogeneity")
alpha <- params[3]
beta <- params[4]
x.axis.ticks <- 0.01 * 0:100
heterogeneity <- dbeta(x.axis.ticks, alpha, beta)
plot(x.axis.ticks,
heterogeneity,
type = "l",
xlab = "Dropout rate",
ylab = "Density",
main = "Heterogeneity in Dropout Rate")
rate.mean <- round(alpha/(alpha + beta), 4)
rate.var <- round((alpha * beta)/((alpha + beta)^2 * (alpha + beta + 1)), 4)
mean.var.label <- paste("Mean:", rate.mean, " Var:", rate.var)
mtext(mean.var.label, side = 3)
return(heterogeneity)
}
################################################## BG/NBD estimation, visualization functions
library(hypergeo)
# Implementation notes:
# -- bgnbd.cbs.LL is best called with the un-compressed, 3-column version of cal.cbs
#    (see the note in its documentation below).
# -- bgnbd.LL uses the alternative specification from Hardie's note 027 so that it
#    avoids the numerical error problem for large values of x.
#' Define general parameters
#'
#' This is to ensure consistency across all functions that require common bits
#' and bobs.
#'
#' @inheritParams bgnbd.LL
#' @inheritParams bgnbd.ConditionalExpectedTransactions
#' @param func function calling dc.InputCheck
#' @param hardie if TRUE, use \code{\link{h2f1}} instead of
#' \code{\link[hypergeo]{hypergeo}} when you call this function from within
#' \code{\link{bgnbd.ConditionalExpectedTransactions}}.
#' @return a list with things you need for \code{\link{bgnbd.LL}},
#' \code{\link{bgnbd.PAlive}} and
#' \code{\link{bgnbd.ConditionalExpectedTransactions}}
#' @seealso \code{\link{bgnbd.LL}}
#' @seealso \code{\link{bgnbd.PAlive}}
#' @seealso \code{\link{bgnbd.ConditionalExpectedTransactions}}
bgnbd.generalParams <- function(params,
func,
x,
t.x,
T.cal,
T.star = NULL,
hardie = NULL) {
inputs <- try(dc.InputCheck(params = params,
func = func,
printnames = c("r", "alpha", "a", "b"),
x = x,
t.x = t.x,
T.cal = T.cal))
if('try-error' == class(inputs)) return(inputs)
x <- inputs$x
t.x <- inputs$t.x
T.cal <- inputs$T.cal
r <- params[1]
alpha <- params[2]
a <- params[3]
b <- params[4]
# last two components for the alt specification
# to handle large values of x (Solution #2 in
# http://brucehardie.com/notes/027/bgnbd_num_error.pdf,
# LL specification (4) on page 4):
C3 = ((alpha + t.x)/(alpha + T.cal))^(r + x)
C4 = a / (b + x - 1)
# stuff you'll need in sundry places
out <- list()
out$PAlive <- 1/(1 + as.numeric(x > 0) * C4 / C3)
# do these computations only if needed: that is,
# if you call this function from bgnbd.LL
if(func == 'bgnbd.LL') {
# a helper for specifying the log form of the ratio of betas
# in http://brucehardie.com/notes/027/bgnbd_num_error.pdf
lb.ratio = function(a, b, x, y) {
(lgamma(a) + lgamma(b) - lgamma(a + b)) -
(lgamma(x) + lgamma(y) - lgamma(x + y))
}
# First two components -- D1 and D2 -- for the alt spec
# that can handle large values of x (Solution #2 in
# http://brucehardie.com/notes/027/bgnbd_num_error.pdf)
# Here is the D1 term of LL function (4) on page 4:
D1 = lgamma(r + x) -
lgamma(r) +
lgamma(a + b) +
lgamma(b + x) -
lgamma(b) -
lgamma(a + b + x)
D2 = r * log(alpha) - (r + x) * log(alpha + t.x)
# original implementation of the log likelihood
# A = D2 + lgamma(r + x) - lgamma(r)
# B = exp(lb.ratio(a, b + x, a, b)) *
# C3 +
# as.numeric((x > 0)) *
# exp(lb.ratio(a + 1, b + x - 1, a, b))
# out$LL = sum(A + log(B))
# with the correction for avoiding the #NUM! problem:
out$LL = D1 + D2 + log(C3 + as.numeric((x > 0)) * C4)
}
# if T.star is not null, then this can produce
# conditional expected transactions too. this is
# another way of saying that you are calling this
# function from bgnbd.ConditionalExpectedTransactions,
# in which case you also need to set hardie to TRUE or FALSE
if(!is.null(T.star)) {
stopifnot(hardie %in% c(TRUE, FALSE))
term1 <- (a + b + x - 1) / (a - 1)
if(hardie == TRUE) {
hyper <- h2f1(r + x,
b + x,
a + b + x - 1,
T.star/(alpha + T.cal + T.star))
} else {
hyper <- Re(hypergeo(r + x,
b + x,
a + b + x - 1,
T.star/(alpha + T.cal + T.star)))
}
term2 <- 1 -
((alpha + T.cal)/(alpha + T.cal + T.star))^(r + x) *
hyper
out$CET <- term1 * term2 * out$PAlive
}
out
}
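# Illustrative sketch (not run): the numerically stable log-likelihood pieces
# computed above, assembled by hand for a single customer. The parameter and
# data values below are arbitrary assumptions, used only to show the
# arithmetic; the result should match bgnbd.LL() with the same inputs.
# r <- 0.5; alpha <- 6; a <- 1.2; b <- 3.3
# x <- 3; t.x <- 4; T.cal <- 6
# C3 <- ((alpha + t.x) / (alpha + T.cal))^(r + x)
# C4 <- a / (b + x - 1)
# D1 <- lgamma(r + x) - lgamma(r) +
#   lgamma(a + b) + lgamma(b + x) - lgamma(b) - lgamma(a + b + x)
# D2 <- r * log(alpha) - (r + x) * log(alpha + t.x)
# D1 + D2 + log(C3 + as.numeric(x > 0) * C4)
# bgnbd.LL(c(r, alpha, a, b), x = 3, t.x = 4, T.cal = 6)  # should match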
#' BG/NBD Log-Likelihood
#'
#' Calculates the log-likelihood of the BG/NBD model.
#'
#' \code{x}, \code{t.x} and \code{T.cal} may be vectors. The standard rules for
#' vector operations apply - if they are not of the same length, shorter vectors
#' will be recycled (start over at the first element) until they are as long as
#' the longest vector. It is advisable to keep vectors to the same length and to
#' use single values for parameters that are to be the same for all
#' calculations. If one of these parameters has a length greater than one, the
#' output will also be a vector.
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param x number of repeat transactions in the calibration period T.cal, or a
#' vector of transaction frequencies.
#' @param t.x time of most recent repeat transaction, or a vector of recencies.
#' @param T.cal length of calibration period, or a vector of calibration period
#' lengths.
#'
#' @seealso \code{\link{bgnbd.EstimateParameters}}
#' @seealso \code{\link{bgnbd.cbs.LL}}
#'
#' @return A vector of log-likelihoods as long as the longest input vector (x,
#' t.x, or T.cal).
#'
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # random assignment of parameters
#' params <- c(0.5, 6, 1.2, 3.3)
#' # returns the log-likelihood of the given parameters
#' bgnbd.cbs.LL (params, cal.cbs)
#'
#' # compare the speed and results to the following:
#' cal.cbs.compressed <- dc.compress.cbs(cal.cbs)
#' bgnbd.cbs.LL(params, cal.cbs.compressed)
#'
#' # Returns the log likelihood of the parameters for a customer who
#' # made 3 transactions in a calibration period that ended at t=6,
#' # with the last transaction occurring at t=4.
#' bgnbd.LL(params, x=3, t.x=4, T.cal=6)
#'
#' # We can also give vectors as function parameters:
#' set.seed(7)
#' x <- sample(1:4, 10, replace = TRUE)
#' t.x <- sample(1:4, 10, replace = TRUE)
#' T.cal <- rep(4, 10)
#' bgnbd.LL(params, x, t.x, T.cal)
bgnbd.LL <- function(params,
x,
t.x,
T.cal) {
bgnbd.generalParams(params = params,
func = 'bgnbd.LL',
x = x,
t.x = t.x,
T.cal = T.cal)$LL
}
#' BG/NBD Log-Likelihood Wrapper
#'
#' Calculates the log-likelihood sum of the BG/NBD model.
#'
#' Note: do not use a compressed \code{cal.cbs} matrix. It makes quicker work
#' for Pareto/NBD estimation as implemented in this package, but the opposite is
#' true for BG/NBD. For proof, compare the definition of the
#' \code{\link{bgnbd.cbs.LL}} to that of \code{\link{pnbd.cbs.LL}}.
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param cal.cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x"), recency ("t.x"), and total time
#' observed ("T.cal"). Note that recency must be the time between the start of
#' the calibration period and the customer's last transaction, not the time
#' between the customer's last transaction and the end of the calibration
#' period. If your data is compressed (see \code{\link{dc.compress.cbs}}),
#' it will also contain a fourth column, "custs" (the number of customers
#' with a specific combination of recency, frequency and length of
#' calibration period).
#'
#' @seealso \code{\link{bgnbd.EstimateParameters}}
#' @seealso \code{\link{bgnbd.LL}}
#'
#' @return The total log-likelihood of the provided data.
#'
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # random assignment of parameters
#' params <- c(0.5, 6, 1.2, 3.3)
#' # returns the log-likelihood of the given parameters
#' bgnbd.cbs.LL(params, cal.cbs)
#'
#' # compare the speed and results to the following:
#' cal.cbs.compressed <- dc.compress.cbs(cal.cbs)
#' bgnbd.cbs.LL (params, cal.cbs.compressed)
#'
#' # Returns the log likelihood of the parameters for a customer who
#' # made 3 transactions in a calibration period that ended at t=6,
#' # with the last transaction occurring at t=4.
#' bgnbd.LL(params, x=3, t.x=4, T.cal=6)
#'
#' # We can also give vectors as function parameters:
#' set.seed(7)
#' x <- sample(1:4, 10, replace = TRUE)
#' t.x <- sample(1:4, 10, replace = TRUE)
#' T.cal <- rep(4, 10)
#' bgnbd.LL(params, x, t.x, T.cal)
bgnbd.cbs.LL <- function(params,
cal.cbs) {
dc.check.model.params(printnames = c("r", "alpha", "a", "b"),
params = params,
func = "bgnbd.cbs.LL")
# Check that you have the right columns.
# They should be 'x', 't.x', 'T.cal' and optionally 'custs.'
# They stand for, respectively:
# -- x: frequency
# -- t.x: recency
# -- T.cal: observed calendar time
# -- custs: number of customers with this (x, t.x, T.cal) combo
foo <- colnames(cal.cbs)
stopifnot(all(c('x', 't.x', 'T.cal') %in% foo))
x <- cal.cbs[,'x']
t.x <- cal.cbs[,'t.x']
T.cal <- cal.cbs[,'T.cal']
# Avoid this unfurling exercise by calling bgnbd.cbs.LL
# with the uncompressed version of cal.cbs, which doesn't
# have a "custs" column.
if ("custs" %in% colnames(cal.cbs)) {
many_rows = function(vec, nreps) {
return(rep(1, nreps) %*% t.default(vec))
}
custs <- cal.cbs[, "custs"]
logvec = (1:length(custs)) * (custs > 1)
logvec = logvec[logvec > 0]
M = sum(logvec > 0)
for (i in 1:M) {
cal.cbs = rbind(cal.cbs,
many_rows(cal.cbs[logvec[i], ],
custs[logvec[i]] - 1))
}
x = cal.cbs[, "x"]
t.x = cal.cbs[, "t.x"]
T.cal = cal.cbs[, "T.cal"]
}
return(sum(bgnbd.LL(params, x, t.x, T.cal)))
}
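# Illustrative sketch (not run): an alternative way to "unfurl" a compressed
# CBS before calling bgnbd.cbs.LL(), using rep() on the row indices. This is
# not how the function above does it; it is only a compact way to check that
# the compressed and uncompressed paths give the same total log-likelihood.
# Assumes cal.cbs and params as in the roxygen example above.
# cal.cbs.compressed <- dc.compress.cbs(cal.cbs)
# expanded <- cal.cbs.compressed[rep(seq_len(nrow(cal.cbs.compressed)),
#                                    times = cal.cbs.compressed[, "custs"]),
#                                c("x", "t.x", "T.cal")]
# all.equal(bgnbd.cbs.LL(params, expanded),
#           bgnbd.cbs.LL(params, cal.cbs.compressed))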
#' BG/NBD Parameter Estimation
#'
#' Estimates parameters for the BG/NBD model.
#'
#' The best-fitting parameters are determined using the
#' \code{\link{bgnbd.cbs.LL}} function. The sum of the log-likelihood for each
#' customer (for a set of parameters) is maximized in order to estimate
#' parameters.
#'
#' A set of starting parameters may be provided for this method. If none are
#' provided, (1,3,1,3) is used as a default. These values are
#' used because they provide good convergence across data sets. It may be useful
#' to use starting values for r and alpha that represent your best guess of the
#' heterogeneity in the buy and die rate of customers. It may be necessary to
#' run the estimation from multiple starting points to ensure that it converges.
#' To compare the log-likelihoods of different parameters, use
#' \code{\link{bgnbd.cbs.LL}}.
#'
#' The lower bound on the parameters to be estimated is always zero, since
#' BG/NBD parameters cannot be negative. The upper bound can be set with the
#' max.param.value parameter.
#'
#' This function may take some time to run.
#'
#' @param cal.cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x"), recency ("t.x"), and total time
#' observed ("T.cal"). Note that recency must be the time between the start of
#' the calibration period and the customer's last transaction, not the time
#' between the customer's last transaction and the end of the calibration
#' period.
#' @param par.start initial BG/NBD parameters - a vector with r, alpha, a, and
#' b, in that order. r and alpha are unobserved parameters for the NBD
#' transaction process. a and b are unobserved parameters for the Beta
#' geometric dropout process.
#' @param max.param.value the upper bound on parameters.
#' @param method the optimization method(s) passed along to
#' \code{\link[optimx]{optimx}}.
#' @param hessian set it to TRUE if you want the Hessian matrix, and then you
#' might as well have the complete \code{\link[optimx]{optimx}} object
#' returned.
#' @return Vector of estimated parameters.
#' @seealso \code{\link{bgnbd.cbs.LL}}
#' @references Fader, Peter S. and Hardie, Bruce G.S. "Overcoming the BG/NBD
#' Model's #NUM! Error Problem." December. 2013. Web.
#' \url{http://brucehardie.com/notes/027/bgnbd_num_error.pdf}
#'
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # starting-point parameters
#' startingparams <- c(1.0, 3, 1.0, 3)
#'
#' # estimated parameters
#' est.params <- bgnbd.EstimateParameters(cal.cbs = cal.cbs,
#' par.start = startingparams)
#'
#' # complete object returned by \code{\link[optimx]{optimx}}
#' optimx.set <- bgnbd.EstimateParameters(cal.cbs = cal.cbs,
#' par.start = startingparams,
#' hessian = TRUE)
#'
#' # log-likelihood of estimated parameters
#' bgnbd.cbs.LL(est.params, cal.cbs)
bgnbd.EstimateParameters <- function(cal.cbs,
par.start = c(1, 3, 1, 3),
max.param.value = 10000,
method = 'L-BFGS-B',
hessian = FALSE) {
dc.check.model.params(printnames = c("r", "alpha", "a", "b"),
params = par.start,
func = "bgnbd.EstimateParameters")
bgnbd.eLL <- function(params, cal.cbs, max.param.value) {
params <- exp(params)
params[params > max.param.value] = max.param.value
return(-1 * bgnbd.cbs.LL(params, cal.cbs))
}
logparams = log(par.start)
results <- optimx(par = logparams,
fn = bgnbd.eLL,
cal.cbs = cal.cbs,
max.param.value = max.param.value,
method = method,
hessian = hessian)
if(hessian == TRUE) {
message('Your parameter estimates are now on a log scale. Exponentiate them before use.')
return(results)
}
unlist(exp(results[method, c('p1', 'p2', 'p3', 'p4')]))
}
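# Illustrative sketch (not run): when hessian = TRUE the full optimx object is
# returned and the parameters are still on the log scale, so exponentiate the
# p1..p4 columns to recover r, alpha, a and b. Assumes cal.cbs as in the
# roxygen example above and the default "L-BFGS-B" method.
# fit <- bgnbd.EstimateParameters(cal.cbs, hessian = TRUE)
# est.params <- exp(unlist(fit["L-BFGS-B", c("p1", "p2", "p3", "p4")]))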
#' BG/NBD Conditional Expected Transactions
#'
#' E\[X(T.cal, T.cal + T.star) | x, t.x, r, alpha, a, b\]
#'
#' \code{T.star}, \code{x}, \code{t.x} and \code{T.cal} may be vectors. The
#' standard rules for vector operations apply - if they are not of the same
#' length, shorter vectors will be recycled (start over at the first element)
#' until they are as long as the longest vector. It is advisable to keep vectors
#' to the same length and to use single values for parameters that are to be the
#' same for all calculations. If one of these parameters has a length greater
#' than one, the output will be a vector of expected transactions.
#'
#' @inheritParams bgnbd.LL
#' @param T.star length of time for which we are calculating the expected number
#' of transactions.
#' @param hardie if TRUE, use \code{\link{h2f1}} instead of
#' \code{\link[hypergeo]{hypergeo}}.
#' @return Number of transactions a customer is expected to make in a time
#' period of length t, conditional on their past behavior. If any of the input
#' parameters has a length greater than 1, this will be a vector of expected
#' number of transactions.
#' @seealso \code{\link{bgnbd.Expectation}}
#' @references Fader, Peter S.; Hardie, Bruce G.S. and Lee, Ka Lok. “Computing
#' P(alive) Using the BG/NBD Model.” December. 2008. Web.
#' \url{http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf}
#' @examples
#' params <- c(0.243, 4.414, 0.793, 2.426)
#' # Number of transactions a customer is expected to make in 2 time
#' # intervals, given that they made 10 repeat transactions in a time period
#' # of 39 intervals, with the 10th repeat transaction occurring in the 35th
#' # interval.
#' bgnbd.ConditionalExpectedTransactions(params, T.star=2, x=10, t.x=35, T.cal=39)
#'
#' # We can also compare expected transactions across different
#' # calibration period behaviors:
#' bgnbd.ConditionalExpectedTransactions(params, T.star=2, x=5:20, t.x=25, T.cal=39)
bgnbd.ConditionalExpectedTransactions <- function(params,
T.star,
x,
t.x,
T.cal,
hardie = TRUE) {
bgnbd.generalParams(params = params,
func = 'bgnbd.ConditionalExpectedTransactions',
x = x,
t.x = t.x,
T.cal = T.cal,
T.star = T.star,
hardie = hardie)$CET
}
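# Illustrative sketch (not run): the two hypergeometric implementations should
# give (numerically) the same answer; hardie = TRUE routes through h2f1(),
# hardie = FALSE through hypergeo::hypergeo(). Parameter values are those of
# the roxygen example above.
# params <- c(0.243, 4.414, 0.793, 2.426)
# bgnbd.ConditionalExpectedTransactions(params, T.star = 2, x = 10,
#                                       t.x = 35, T.cal = 39, hardie = TRUE)
# bgnbd.ConditionalExpectedTransactions(params, T.star = 2, x = 10,
#                                       t.x = 35, T.cal = 39, hardie = FALSE)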
#' BG/NBD Expectation
#'
#' Returns the number of repeat transactions that a randomly chosen customer
#' (for whom we have no prior information) is expected to make in a given time
#' period.
#'
#' E(X(t) | r, alpha, a, b)
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param t length of time for which we are calculating the expected number of
#' repeat transactions.
#' @param hardie if TRUE, use \code{\link{h2f1}} instead of
#' \code{\link[hypergeo]{hypergeo}}.
#' @return Number of repeat transactions a customer is expected to make in a
#' time period of length t.
#' @seealso \code{\link{bgnbd.ConditionalExpectedTransactions}}
#' @references Fader, Peter S.; Hardie, Bruce G.S. and Lee, Ka Lok. “Computing
#' P(alive) Using the BG/NBD Model.” December. 2008. Web.
#' \url{http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf}
#' @examples
#' params <- c(0.243, 4.414, 0.793, 2.426)
#'
#' # Number of repeat transactions a customer is expected to make in 2 time intervals.
#' bgnbd.Expectation(params, t=2, hardie = FALSE)
#'
#' # We can also compare expected transactions over time:
#' bgnbd.Expectation(params, t=1:10)
bgnbd.Expectation <- function(params,
t,
hardie = TRUE) {
dc.check.model.params(printnames = c("r", "alpha", "a", "b"),
params = params,
func = "bgnbd.Expectation")
if (any(t < 0) || !is.numeric(t))
stop("t must be numeric and may not contain negative numbers.")
r = params[1]
alpha = params[2]
a = params[3]
b = params[4]
term1 = (a + b - 1)/(a - 1)
term2 = (alpha/(alpha + t))^r
if(hardie == TRUE) {
term3 = h2f1(r, b, a + b - 1, t/(alpha + t))
} else {
term3 = Re(hypergeo(r, b, a + b - 1, t/(alpha + t)))
}
output = term1 * (1 - term2 * term3)
return(output)
}
#' BG/NBD Probability Mass Function
#'
#' Probability mass function for the BG/NBD.
#'
#' P(X(t)=x | r, alpha, a, b). Returns the probability that a customer makes x
#' repeat transactions in the time interval (0, t].
#'
#' Parameters t and x may be vectors. The standard rules for vector operations
#' apply - if they are not of the same length, the shorter vector will be
#' recycled (start over at the first element) until it is as long as the longest
#' vector. It is advisable to keep vectors to the same length and to use single
#' values for parameters that are to be the same for all calculations. If one of
#' these parameters has a length greater than one, the output will be a vector
#' of probabilities.
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param t length of the time period for which the probability is being computed.
#' May also be a vector.
#' @param x number of repeat transactions by a random customer in the period
#' defined by t. May also be a vector.
#' @return Probability of X(t)=x conditional on model parameters. If t and/or x
#' has a length greater than one, a vector of probabilities will be returned.
#' @references Fader, Peter S.; Hardie, Bruce G.S. and Lee, Ka Lok. “Computing
#' P(alive) Using the BG/NBD Model.” December. 2008.
#' [Web.](http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf)
#' @examples
#' params <- c(0.243, 4.414, 0.793, 2.426)
#' # probability that a customer will make 10 repeat transactions in the
#' # time interval (0,2]
#' bgnbd.pmf(params, t=2, x=10)
#' # probability that a customer will make no repeat transactions in the
#' # time interval (0,39]
#' bgnbd.pmf(params, t=39, x=0)
#'
#' # Vectors may also be used as arguments:
#' bgnbd.pmf(params, t=30, x=11:20)
#' @md
bgnbd.pmf <- function(params,
t,
x) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgnbd.pmf',
printnames = c("r", "alpha", "a", "b"),
x = x,
t = t))
if('try-error' == class(inputs)) return(inputs)
return(bgnbd.pmf.General(params,
t.start = 0,
t.end = inputs$t,
x = inputs$x))
}
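# Illustrative sketch (not run): a quick sanity check that the pmf sums to
# (approximately) one over a wide enough range of x for a fixed t. Parameter
# values are those of the roxygen example above.
# params <- c(0.243, 4.414, 0.793, 2.426)
# sum(bgnbd.pmf(params, t = 39, x = 0:100))  # should be very close to 1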
#' Generalized BG/NBD Probability Mass Function
#'
#' Generalized probability mass function for the BG/NBD.
#'
#' P(X(t.start, t.end)=x | r, alpha, a, b). Returns the probability that a
#' customer makes x repeat transactions in the time interval (t.start, t.end\].
#'
#' It is impossible for a customer to make a negative number of repeat
#' transactions. This function will return an error if it is given negative
#' times or a negative number of repeat transactions. This function will also
#' return an error if t.end is less than t.start.
#'
#' t.start, t.end, and x may be vectors. The standard rules for vector
#' operations apply - if they are not of the same length, shorter vectors will
#' be recycled (start over at the first element) until they are as long as the
#' longest vector. It is advisable to keep vectors to the same length and to use
#' single values for parameters that are to be the same for all calculations. If
#' one of these parameters has a length greater than one, the output will be a
#' vector of probabilities.
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param t.start start of time period for which probability is being
#' calculated. It can also be a vector of values.
#' @param t.end end of time period for which probability is being calculated.
#' It can also be a vector of values.
#' @param x number of repeat transactions by a random customer in the period
#' defined by (t.start, t.end]. It can also be a vector of values.
#' @return Probability of x transactions occurring between t.start and t.end
#' conditional on model parameters. If t.start, t.end, and/or x has a length
#' greater than one, a vector of probabilities will be returned.
#' @references Fader, Peter S.; Hardie, Bruce G.S. and Lee, Ka Lok. “Computing
#' P(alive) Using the BG/NBD Model.” December. 2008.
#' [Web.](http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf)
#' @examples
#' params <- c(0.243, 4.414, 0.793, 2.426)
#' # probability that a customer will make 10 repeat transactions in the
#' # time interval (1,2]
#' bgnbd.pmf.General(params, t.start=1, t.end=2, x=10)
#' # probability that a customer will make no repeat transactions in the
#' # time interval (39,78]
#' bgnbd.pmf.General(params, t.start=39, t.end=78, x=0)
#' @md
bgnbd.pmf.General <- function(params,
t.start,
t.end,
x) {
inputs <- try(dc.InputCheck(params = params,
func = 'bgnbd.pmf.General',
printnames = c("r", "alpha", "a", "b"),
t.start = t.start,
t.end = t.end,
x = x))
if('try-error' == class(inputs)) return(inputs)
t.start = inputs$t.start
t.end = inputs$t.end
x = inputs$x
max.length <- nrow(inputs)
if (any(t.start > t.end)) {
stop("Error in bgnbd.pmf.General: t.start > t.end.")
}
r <- params[1]
alpha <- params[2]
a <- params[3]
b <- params[4]
equation.part.0 <- rep(0, max.length)
t = t.end - t.start
term3 = rep(0, max.length)
term1 = beta(a, b + x)/beta(a, b) *
gamma(r + x)/gamma(r)/factorial(x) *
((alpha/(alpha + t))^r) * ((t/(alpha + t))^x)
for (i in 1:max.length) {
if (x[i] > 0) {
ii = c(0:(x[i] - 1))
summation.term = sum(gamma(r + ii)/gamma(r)/factorial(ii) *
((t[i]/(alpha + t[i]))^ii))
term3[i] = 1 - (((alpha/(alpha + t[i]))^r) * summation.term)
}
}
term2 = as.numeric(x > 0) * beta(a + 1, b + x - 1)/beta(a, b) * term3
return(term1 + term2)
}
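# Illustrative sketch (not run): with t.start = 0 the general pmf reduces to
# bgnbd.pmf(), which is exactly how bgnbd.pmf() calls it above.
# params <- c(0.243, 4.414, 0.793, 2.426)
# bgnbd.pmf.General(params, t.start = 0, t.end = 2, x = 10)
# bgnbd.pmf(params, t = 2, x = 10)  # same value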
#' BG/NBD P(Alive)
#'
#' Uses BG/NBD model parameters and a customer's past transaction behavior to
#' return the probability that they are still alive at the end of the
#' calibration period.
#'
#' P(Alive | X=x, t.x, T.cal, r, alpha, a, b)
#'
#' x, t.x, and T.cal may be vectors. The standard rules for vector operations
#' apply - if they are not of the same length, shorter vectors will be recycled
#' (start over at the first element) until they are as long as the longest
#' vector. It is advisable to keep vectors to the same length and to use single
#' values for parameters that are to be the same for all calculations. If one of
#' these parameters has a length greater than one, the output will be a vector
#' of probabilities.
#'
#' @inheritParams bgnbd.LL
#' @return Probability that the customer is still alive at the end of the
#' calibration period. If x, t.x, and/or T.cal has a length greater than one,
#' then this will be a vector of probabilities (containing one element
#' matching each element of the longest input vector).
#' @references Fader, Peter S.; Hardie, Bruce G.S. and Lee, Ka Lok. “Computing
#' P(alive) Using the BG/NBD Model.” December. 2008.
#' [Web.](http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf)
#' @examples
#' params <- c(0.243, 4.414, 0.793, 2.426)
#'
#' bgnbd.PAlive(params, x=23, t.x=39, T.cal=39)
#' # P(Alive) of a customer who has the same recency and total
#' # time observed.
#'
#' bgnbd.PAlive(params, x=5:20, t.x=30, T.cal=39)
#' # Note the "increasing frequency paradox".
#'
#' # To visualize the distribution of P(Alive) across customers:
#'
#' data(cdnowSummary)
#' cbs <- cdnowSummary$cbs
#' params <- bgnbd.EstimateParameters(cbs, par.start = c(0.243, 4.414, 0.793, 2.426))
#' p.alives <- bgnbd.PAlive(params, cbs[,"x"], cbs[,"t.x"], cbs[,"T.cal"])
#' plot(density(p.alives))
#' @md
bgnbd.PAlive <- function(params,
x,
t.x,
T.cal) {
bgnbd.generalParams(params = params,
func = 'bgnbd.PAlive',
x = x,
t.x = t.x,
T.cal = T.cal)$PAlive
}
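# Illustrative sketch (not run): the closed form behind bgnbd.PAlive() for a
# single customer, mirroring the C3/C4 terms computed in bgnbd.generalParams().
# Parameter values are those of the roxygen example above.
# params <- c(0.243, 4.414, 0.793, 2.426)
# r <- params[1]; alpha <- params[2]; a <- params[3]; b <- params[4]
# x <- 23; t.x <- 39; T.cal <- 39
# 1 / (1 + as.numeric(x > 0) * (a / (b + x - 1)) *
#        ((alpha + T.cal) / (alpha + t.x))^(r + x))
# bgnbd.PAlive(params, x = 23, t.x = 39, T.cal = 39)  # should match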
#' BG/NBD Expected Cumulative Transactions
#'
#' Calculates the expected cumulative total repeat transactions by all customers
#' for the calibration and holdout periods.
#'
#' The function automatically divides the total period up into n.periods.final
#' time intervals. n.periods.final does not have to be in the same unit of time
#' as the T.cal data. For example:
#' * if your T.cal data is in weeks and you want cumulative transactions per
#'   week, n.periods.final would equal T.star.
#' * if your T.cal data is in weeks and you want cumulative transactions per
#'   day, n.periods.final would equal T.star * 7.
#'
#' The holdout period should immediately follow the calibration period. This
#' function assumes that all customers' calibration periods end on the same date,
#' rather than starting on the same date (thus customers' birth periods are
#' determined using max(T.cal) - T.cal rather than assuming that it is 0).
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param T.cal a vector to represent customers' calibration period lengths
#' (in other words, the "T.cal" column from a customer-by-sufficient-statistic
#' matrix).
#' @param T.tot end of holdout period. Must be a single value, not a vector.
#' @param n.periods.final number of time periods in the calibration and
#' holdout periods. See details.
#' @param hardie if TRUE, use [h2f1()] instead of [hypergeo::hypergeo()].
#' @return Vector of expected cumulative total repeat transactions by all
#' customers.
#' @seealso [`bgnbd.Expectation`]
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' params <- c(0.243, 4.414, 0.793, 2.426)
#'
#' # Returns a vector containing cumulative repeat transactions for 273 days.
#' # All parameters are in weeks; the calibration period lasted 39 weeks.
#' bgnbd.ExpectedCumulativeTransactions(params,
#' T.cal = cal.cbs[,"T.cal"],
#' T.tot = 39,
#' n.periods.final = 273,
#' hardie = TRUE)
#' @md
bgnbd.ExpectedCumulativeTransactions <- function(params,
T.cal,
T.tot,
n.periods.final,
hardie = TRUE) {
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = params,
func = "bgnbd.ExpectedCumulativeTransactions")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
if (length(T.tot) > 1 || T.tot < 0 || !is.numeric(T.tot))
stop("T.cal must be a single numeric value and may not be negative.")
if (length(n.periods.final) > 1 || n.periods.final < 0 || !is.numeric(n.periods.final))
stop("n.periods.final must be a single numeric value and may not be negative.")
intervals <- seq(T.tot/n.periods.final,
T.tot,
length.out = n.periods.final)
cust.birth.periods <- max(T.cal) - T.cal
expected.transactions <- sapply(intervals,
function(interval) {
if (interval <= min(cust.birth.periods)) return(0)
t <- interval - cust.birth.periods[cust.birth.periods <= interval]
sum(bgnbd.Expectation(params = params,
t = t,
hardie = hardie))
})
return(expected.transactions)
}
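# Illustrative sketch (not run): the cumulative curve returned above can be
# turned into per-period (incremental) expected transactions with
# dc.CumulativeToIncremental(), which is what bgnbd.PlotTrackingInc() does
# further down. Assumes cal.cbs and params as in the roxygen example above.
# cum.trans <- bgnbd.ExpectedCumulativeTransactions(params,
#                                                   T.cal = cal.cbs[, "T.cal"],
#                                                   T.tot = 78,
#                                                   n.periods.final = 78,
#                                                   hardie = TRUE)
# inc.trans <- dc.CumulativeToIncremental(cum.trans)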
#' BG/NBD Plot Frequency in Calibration Period
#'
#' Plots a histogram and returns a matrix comparing the actual and expected
#' number of customers who made a certain number of repeat transactions in the
#' calibration period, binned according to calibration period frequencies.
#'
#' This function requires a censor number, which cannot be higher than the
#' highest frequency in the calibration period CBS. The output matrix will have
#' (censor + 1) bins, starting at frequencies of 0 transactions and ending at a
#' bin representing calibration period frequencies at or greater than the censor
#' number. The plot may or may not include a bin for zero frequencies, depending
#' on the plotZero parameter.
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param cal.cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x") and total time observed ("T.cal").
#' @param censor integer used to censor the data. See details.
#' @param plotZero If FALSE, the histogram will exclude the zero bin.
#' @param xlab descriptive label for the x axis.
#' @param ylab descriptive label for the y axis.
#' @param title title placed on the top-center of the plot.
#' @return Calibration period repeat transaction frequency comparison matrix
#' (actual vs. expected).
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # parameters estimated using bgnbd.EstimateParameters
#' est.params <- c(0.243, 4.414, 0.793, 2.426)
#' # the maximum censor number that can be used
#' max(cal.cbs[,"x"])
#'
#' bgnbd.PlotFrequencyInCalibration(est.params, cal.cbs, censor=7)
bgnbd.PlotFrequencyInCalibration <- function(params,
cal.cbs,
censor,
plotZero = TRUE,
xlab = "Calibration period transactions",
ylab = "Customers",
title = "Frequency of Repeat Transactions") {
tryCatch(x <- cal.cbs[, "x"], error = function(e) stop("Error in bgnbd.PlotFrequencyInCalibration: cal.cbs must have a frequency column labelled \"x\""))
tryCatch(T.cal <- cal.cbs[, "T.cal"], error = function(e) stop("Error in bgnbd.PlotFrequencyInCalibration: cal.cbs must have a column for length of time observed labelled \"T.cal\""))
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.PlotFrequencyInCalibration")
if (censor > max(x))
stop("censor too big (> max freq) in PlotFrequencyInCalibration.")
x = cal.cbs[, "x"]
T.cal = cal.cbs[, "T.cal"]
n.x <- rep(0, max(x) + 1)
ncusts = nrow(cal.cbs)
for (ii in unique(x)) {
# Get number of customers to buy n.x times, over the grid of all possible n.x
# values (no censoring)
n.x[ii + 1] <- sum(ii == x)
}
n.x.censor <- sum(n.x[(censor + 1):length(n.x)])
n.x.actual <- c(n.x[1:censor], n.x.censor) # This upper truncates at censor (ie. if censor=7, 8 categories: {0, 1, ..., 6, 7+}).
T.value.counts <- table(T.cal) # This is the table of counts of all time durations from customer birth to end of calibration period.
T.values <- as.numeric(names(T.value.counts)) # These are all the unique time durations from customer birth to end of calibration period.
n.T.values <- length(T.values) # These are the number of time durations we need to consider.
n.x.expected <- rep(0, length(n.x.actual)) # We'll store the probabilities in here.
n.x.expected.all <- rep(0, max(x) + 1) # We'll store the probabilities in here.
for (ii in 0:max(x)) {
# We want to run over the probability of each transaction amount.
this.x.expected = 0
for (T.idx in 1:n.T.values) {
# We run over all people who had all time durations.
T = T.values[T.idx]
if (T == 0)
next
n.T = T.value.counts[T.idx] # This is the number of customers who had this time duration.
prob.of.this.x.for.this.T = bgnbd.pmf(params, T, ii)
expected.given.x.and.T = n.T * prob.of.this.x.for.this.T
this.x.expected = this.x.expected + expected.given.x.and.T
}
n.x.expected.all[ii + 1] = this.x.expected
}
n.x.expected[1:censor] = n.x.expected.all[1:censor]
n.x.expected[censor + 1] = sum(n.x.expected.all[(censor + 1):(max(x) + 1)])
col.names <- paste(rep("freq", length(censor + 1)), (0:censor), sep = ".")
col.names[censor + 1] <- paste(col.names[censor + 1], "+", sep = "")
censored.freq.comparison <- rbind(n.x.actual, n.x.expected)
colnames(censored.freq.comparison) <- col.names
cfc.plot <- censored.freq.comparison
if (plotZero == FALSE)
cfc.plot <- cfc.plot[, -1]
n.ticks <- ncol(cfc.plot)
if (plotZero == TRUE) {
x.labels <- 0:(n.ticks - 1)
x.labels[n.ticks] <- paste(n.ticks - 1, "+", sep = "")
} else {
# without the zero bin, labels run from 1 up to the censored "censor+" bin
x.labels <- 1:n.ticks
x.labels[n.ticks] <- paste(n.ticks, "+", sep = "")
}
ylim <- c(0, ceiling(max(cfc.plot) * 1.1))
barplot(cfc.plot, names.arg = x.labels, beside = TRUE, ylim = ylim, main = title,
xlab = xlab, ylab = ylab, col = 1:2)
legend("topright", legend = c("Actual", "Model"), col = 1:2, lwd = 2)
return(censored.freq.comparison)
}
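# Illustrative sketch (not run): how the censoring above groups calibration
# period frequencies. With censor = 7, counts for x = 0, ..., 6 are kept as-is
# and everything at 7 or above is pooled into a "7+" bin; pmin() is just a
# compact way to see that grouping and is not how the function does it.
# x <- cal.cbs[, "x"]
# table(pmin(x, 7))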
#' BG/NBD Plot Frequency vs. Conditional Expected Frequency
#'
#' Plots the actual and conditional expected number transactions made by
#' customers in the holdout period, binned according to calibration period
#' frequencies. Also returns a matrix with this comparison and the number of
#' customers in each bin.
#'
#' This function requires a censor number, which cannot be higher than the
#' highest frequency in the calibration period CBS. The output matrix will have
#' (censor + 1) bins, starting at frequencies of 0 transactions and ending at a
#' bin representing calibration period frequencies at or greater than the censor
#' number.
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param T.star length of the holdout period.
#' @param cal.cbs calibration period CBS (customer by sufficient statistic).
#' It must contain columns for frequency ("x"), recency ("t.x"), and total
#' time observed ("T.cal"). Note that recency must be the time between the
#' start of the calibration period and the customer's last transaction, not
#' the time between the customer's last transaction and the end of the
#' calibration period.
#' @param x.star vector of transactions made by each customer in the holdout
#' period.
#' @param censor integer used to censor the data. See details.
#' @param xlab descriptive label for the x axis.
#' @param ylab descriptive label for the y axis.
#' @param xticklab vector containing a label for each tick mark on the x axis.
#' @param title title placed on the top-center of the plot.
#' @return Holdout period transaction frequency comparison matrix (actual vs.
#' expected).
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # number of transactions by each customer in the 39 weeks
#' # following the calibration period
#' x.star <- cal.cbs[,"x.star"]
#'
#' # parameters estimated using bgnbd.EstimateParameters
#' est.params <- c(0.243, 4.414, 0.793, 2.426)
#' # the maximum censor number that can be used
#' max(cal.cbs[,"x"])
#'
#' # plot conditional expected holdout period frequencies,
#' # binned according to calibration period frequencies
#' bgnbd.PlotFreqVsConditionalExpectedFrequency(est.params,
#' T.star = 39,
#' cal.cbs,
#' x.star,
#' censor = 7)
bgnbd.PlotFreqVsConditionalExpectedFrequency <- function(params,
T.star,
cal.cbs,
x.star,
censor,
xlab = "Calibration period transactions",
ylab = "Holdout period transactions",
xticklab = NULL,
title = "Conditional Expectation") {
tryCatch(x <- cal.cbs[, "x"],
error = function(e) stop("Error in bgnbd.PlotFreqVsConditionalExpectedFrequency: cal.cbs must have a frequency column labelled \"x\""))
tryCatch(t.x <- cal.cbs[, "t.x"],
error = function(e) stop("Error in bgnbd.PlotFreqVsConditionalExpectedFrequency: cal.cbs must have a recency column labelled \"t.x\""))
tryCatch(T.cal <- cal.cbs[, "T.cal"],
error = function(e) stop("Error in bgnbd.PlotFreqVsConditionalExpectedFrequency: cal.cbs must have a column for length of time observed labelled \"T.cal\""))
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.PlotFreqVsConditionalExpectedFrequency")
if (censor > max(x))
stop("censor too big (> max freq) in PlotFreqVsConditionalExpectedFrequency.")
if (any(T.star < 0) || !is.numeric(T.star))
stop("T.star must be numeric and may not contain negative numbers.")
if (any(x.star < 0) || !is.numeric(x.star))
stop("x.star must be numeric and may not contain negative numbers.")
n.bins = censor + 1
transaction.actual = rep(0, n.bins)
transaction.expected = rep(0, n.bins)
bin.size = rep(0, n.bins)
for (cc in 0:censor) {
if (cc != censor) {
this.bin = which(cc == x)
} else if (cc == censor) {
this.bin = which(x >= cc)
}
n.this.bin = length(this.bin)
bin.size[cc + 1] = n.this.bin
transaction.actual[cc + 1] = sum(x.star[this.bin])/n.this.bin
transaction.expected[cc + 1] = sum(bgnbd.ConditionalExpectedTransactions(params,
T.star, x[this.bin], t.x[this.bin], T.cal[this.bin]))/n.this.bin
}
col.names = paste(rep("freq", length(censor + 1)), (0:censor), sep = ".")
col.names[censor + 1] = paste(col.names[censor + 1], "+", sep = "")
comparison = rbind(transaction.actual, transaction.expected, bin.size)
colnames(comparison) = col.names
if (is.null(xticklab) == FALSE) {
x.labels = xticklab
} else {
if (censor < ncol(comparison)) {
x.labels = 0:(censor)
x.labels[censor + 1] = paste(censor, "+", sep = "")
}
if (censor >= ncol(comparison)) {
x.labels = 0:(ncol(comparison))
}
}
actual = comparison[1, ]
expected = comparison[2, ]
ylim = c(0, ceiling(max(c(actual, expected)) * 1.1))
plot(actual, type = "l", xaxt = "n", col = 1, ylim = ylim, xlab = xlab, ylab = ylab,
main = title)
lines(expected, lty = 2, col = 2)
axis(1, at = 1:ncol(comparison), labels = x.labels)
legend("topleft", legend = c("Actual", "Model"), col = 1:2, lty = 1:2, lwd = 1)
return(comparison)
}
#' BG/NBD Plot Actual vs. Conditional Expected Frequency by Recency
#'
#' Plots the actual and conditional expected number of transactions made by
#' customers in the holdout period, binned according to calibration period
#' recencies. Also returns a matrix with this comparison and the number of
#' customers in each bin.
#'
#' This function does not bin customers exactly according to recency; it bins
#' customers according to integer units of the time period of cal.cbs.
#' Therefore, if you are using weeks in your data, customers will be binned as
#' follows: customers with recencies between the start of the calibration period
#' (inclusive) and the end of week one (exclusive); customers with recencies
#' between the end of week one (inclusive) and the end of week two (exclusive);
#' etc.
#'
#' The matrix and plot will contain the actual number of transactions made by
#' each bin in the holdout period, as well as the expected number of
#' transactions made by that bin in the holdout period, conditional on that
#' bin's behavior during the calibration period.
#'
#' @inheritParams bgnbd.PlotFreqVsConditionalExpectedFrequency
#' @return Matrix comparing actual and conditional expected transactions in the
#' holdout period.
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # number of transactions by each customer in the 39 weeks following
#' # the calibration period
#' x.star <- cal.cbs[,"x.star"]
#'
#' # parameters estimated using bgnbd.EstimateParameters
#' est.params <- c(0.243, 4.414, 0.793, 2.426)
#'
#' # plot conditional expected holdout period transactions,
#' # binned according to calibration period recencies
#' bgnbd.PlotRecVsConditionalExpectedFrequency(est.params,
#' cal.cbs,
#' T.star = 39,
#' x.star)
bgnbd.PlotRecVsConditionalExpectedFrequency <- function(params,
cal.cbs,
T.star,
x.star,
xlab = "Calibration period recency",
ylab = "Holdout period transactions",
xticklab = NULL,
title = "Actual vs. Conditional Expected Transactions by Recency") {
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.PlotRecVsConditionalExpectedFrequency")
if (any(T.star < 0) || !is.numeric(T.star))
stop("T.star must be numeric and may not contain negative numbers.")
if (any(x.star < 0) || !is.numeric(x.star))
stop("x.star must be numeric and may not contain negative numbers.")
tryCatch(x <- cal.cbs[, "x"],
error = function(e) stop("Error in bgnbd.PlotRecVsConditionalExpectedFrequency: cal.cbs must have a frequency column labelled \"x\""))
tryCatch(t.x <- cal.cbs[, "t.x"],
error = function(e) stop("Error in bgnbd.PlotRecVsConditionalExpectedFrequency: cal.cbs must have a recency column labelled \"t.x\""))
tryCatch(T.cal <- cal.cbs[, "T.cal"],
error = function(e) stop("Error in bgnbd.PlotRecVsConditionalExpectedFrequency: cal.cbs must have a column for length of time observed labelled \"T.cal\""))
t.values <- sort(unique(t.x))
n.recs <- length(t.values)
transaction.actual <- rep(0, n.recs)
transaction.expected <- rep(0, n.recs)
rec.size <- rep(0, n.recs)
for (tt in 1:n.recs) {
this.t.x <- t.values[tt]
this.rec <- which(t.x == this.t.x)
n.this.rec <- length(this.rec)
rec.size[tt] <- n.this.rec
transaction.actual[tt] <- sum(x.star[this.rec])/n.this.rec
transaction.expected[tt] <- sum(bgnbd.ConditionalExpectedTransactions(params,
T.star, x[this.rec], t.x[this.rec], T.cal[this.rec]))/n.this.rec
}
comparison <- rbind(transaction.actual, transaction.expected, rec.size)
colnames(comparison) <- round(t.values, 3)
bins <- seq(1, ceiling(max(t.x)))
n.bins <- length(bins)
actual <- rep(0, n.bins)
expected <- rep(0, n.bins)
bin.size <- rep(0, n.bins)
x.labels <- NULL
if (is.null(xticklab) == FALSE) {
x.labels <- xticklab
} else {
x.labels <- 1:(n.bins)
}
point.labels <- rep("", n.bins)
point.y.val <- rep(0, n.bins)
for (ii in 1:n.bins) {
if (ii < n.bins) {
this.bin <- which(as.numeric(colnames(comparison)) >= (ii - 1) & as.numeric(colnames(comparison)) <
ii)
} else if (ii == n.bins) {
this.bin <- which(as.numeric(colnames(comparison)) >= ii - 1)
}
actual[ii] <- sum(comparison[1, this.bin])/length(comparison[1, this.bin])
expected[ii] <- sum(comparison[2, this.bin])/length(comparison[2, this.bin])
bin.size[ii] <- sum(comparison[3, this.bin])
}
ylim <- c(0, ceiling(max(c(actual, expected)) * 1.1))
plot(actual, type = "l", xaxt = "n", col = 1, ylim = ylim, xlab = xlab, ylab = ylab,
main = title)
lines(expected, lty = 2, col = 2)
axis(1, at = 1:n.bins, labels = x.labels)
legend("topleft", legend = c("Actual", "Model"), col = 1:2, lty = 1:2, lwd = 1)
return(rbind(actual, expected, bin.size))
}
#' BG/NBD Plot Transaction Rate Heterogeneity
#'
#' Plots and returns the estimated gamma distribution of lambda (customers'
#' propensities to purchase).
#'
#' This returns the distribution of each customer's Poisson parameter, which
#' determines the rate at which each customer buys.
#'
#' @param params BG/NBD parameters - a vector with r, alpha, a, and b, in that
#' order. r and alpha are unobserved parameters for the NBD transaction
#' process. a and b are unobserved parameters for the Beta geometric dropout
#' process.
#' @param lim upper-bound of the x-axis. A number is chosen by the function if
#' none is provided.
#' @return Distribution of customers' propensities to purchase.
#' @examples
#' params <- c(0.243, 4.414, 0.793, 2.426)
#' bgnbd.PlotTransactionRateHeterogeneity(params)
#' params <- c(0.53, 4.414, 0.793, 2.426)
#' bgnbd.PlotTransactionRateHeterogeneity(params)
bgnbd.PlotTransactionRateHeterogeneity <- function(params,
lim = NULL) {
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.PlotTransactionRateHeterogeneity")
shape <- params[1]
rate <- params[2]
rate.mean <- round(shape/rate, 4)
rate.var <- round(shape/rate^2, 4)
if (is.null(lim)) {
lim = qgamma(0.99, shape = shape, rate = rate)
}
x.axis.ticks <- seq(0, lim, length.out = 100)
heterogeneity <- dgamma(x.axis.ticks, shape = shape, rate = rate)
plot(x.axis.ticks, heterogeneity, type = "l", xlab = "Transaction Rate", ylab = "Density",
main = "Heterogeneity in Transaction Rate")
mean.var.label <- paste("Mean:", rate.mean, " Var:", rate.var)
mtext(mean.var.label, side = 3)
return(rbind(x.axis.ticks, heterogeneity))
}
#' BG/NBD Plot Dropout Probability Heterogeneity
#'
#' Plots and returns the estimated beta distribution of p (customers'
#' probability of dropping out immediately after a transaction).
#'
#' @inheritParams bgnbd.PlotTransactionRateHeterogeneity
#' @return Distribution of customers' probabilities of dropping out.
#' @examples
#' params <- c(0.243, 4.414, 0.793, 2.426)
#' bgnbd.PlotDropoutRateHeterogeneity(params)
#' params <- c(0.243, 4.414, 1.33, 2.426)
#' bgnbd.PlotDropoutRateHeterogeneity(params)
bgnbd.PlotDropoutRateHeterogeneity <- function(params,
lim = NULL) {
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.PlotDropoutRateHeterogeneity")
alpha_param = params[3]
beta_param = params[4]
beta_param.mean = round(alpha_param/(alpha_param + beta_param), 4)
beta_param.var = round(alpha_param * beta_param/((alpha_param + beta_param)^2)/(alpha_param +
beta_param + 1), 4)
if (is.null(lim)) {
# get right end point of grid
lim = qbeta(0.99, shape1 = alpha_param, shape2 = beta_param)
}
x.axis.ticks = seq(0, lim, length.out = 100)
heterogeneity = dbeta(x.axis.ticks, shape1 = alpha_param, shape2 = beta_param)
plot(x.axis.ticks, heterogeneity, type = "l", xlab = "Dropout Probability p",
ylab = "Density", main = "Heterogeneity in Dropout Probability")
mean.var.label = paste("Mean:", beta_param.mean, " Var:", beta_param.var)
mtext(mean.var.label, side = 3)
return(rbind(x.axis.ticks, heterogeneity))
}
#' BG/NBD Tracking Cumulative Transactions Plot
#'
#' Plots the actual and expected cumulative total repeat transactions by all
#' customers for the calibration and holdout periods, and returns this
#' comparison in a matrix.
#'
#' actual.cu.tracking.data does not have to be in the same unit of time as the
#' T.cal data. T.tot will automatically be divided into periods to match the
#' length of actual.cu.tracking.data. See
#' [bgnbd.ExpectedCumulativeTransactions].
#'
#' The holdout period should immediately follow the calibration period. This
#' function assumes that all customers' calibration periods end on the same date,
#' rather than starting on the same date (thus customers' birth periods are
#' determined using max(T.cal) - T.cal rather than assuming that it is 0).
#'
#' @inheritParams bgnbd.ExpectedCumulativeTransactions
#' @inheritParams bgnbd.PlotFreqVsConditionalExpectedFrequency
#' @param actual.cu.tracking.data vector containing the cumulative number of
#' repeat transactions made by customers for each period in the total time
#' period (both calibration and holdout periods). See details.
#' @return Matrix containing actual and expected cumulative repeat transactions.
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # Cumulative repeat transactions made by all customers across calibration
#' # and holdout periods
#' cu.tracking <- cdnowSummary$cu.tracking
#'
#' # parameters estimated using bgnbd.EstimateParameters
#' est.params <- c(0.243, 4.414, 0.793, 2.426)
#'
#' # All parameters are in weeks; the calibration period lasted 39
#' # weeks and the holdout period another 39.
#' bgnbd.PlotTrackingCum(est.params,
#' T.cal = cal.cbs[,"T.cal"],
#' T.tot = 78,
#' actual.cu.tracking.data = cu.tracking,
#' hardie = TRUE)
#' @md
bgnbd.PlotTrackingCum <- function(params,
T.cal,
T.tot,
actual.cu.tracking.data,
n.periods.final = NA,
hardie = TRUE,
xlab = "Week",
ylab = "Cumulative Transactions",
xticklab = NULL,
title = "Tracking Cumulative Transactions") {
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.Plot.PlotTrackingCum")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
if (any(actual.cu.tracking.data < 0) || !is.numeric(actual.cu.tracking.data))
stop("actual.cu.tracking.data must be numeric and may not contain negative numbers.")
if (length(T.tot) > 1 || T.tot < 0 || !is.numeric(T.tot))
stop("T.cal must be a single numeric value and may not be negative.")
actual <- actual.cu.tracking.data
if(is.na(n.periods.final)) n.periods.final <- length(actual)
expected <- bgnbd.ExpectedCumulativeTransactions(params,
T.cal,
T.tot,
n.periods.final,
hardie)
cu.tracking.comparison <- rbind(actual, expected)
ylim <- c(0, max(c(actual, expected)) * 1.05)
plot(actual, type = "l", xaxt = "n", xlab = xlab, ylab = ylab, col = 1, ylim = ylim,
main = title)
lines(expected, lty = 2, col = 2)
if (is.null(xticklab) == FALSE) {
if (ncol(cu.tracking.comparison) != length(xticklab)) {
stop("Plot error, xticklab does not have the correct size")
}
axis(1, at = 1:ncol(cu.tracking.comparison), labels = xticklab)
} else {
axis(1, at = 1:length(actual), labels = 1:length(actual))
}
abline(v = max(T.cal), lty = 2)
legend("bottomright", legend = c("Actual", "Model"), col = 1:2, lty = 1:2, lwd = 1)
return(cu.tracking.comparison)
}
#' BG/NBD Tracking Incremental Transactions Comparison
#'
#' Plots the actual and expected incremental total repeat transactions by all
#' customers for the calibration and holdout periods, and returns this
#' comparison in a matrix.
#'
#' actual.inc.tracking.data does not have to be in the same unit of time as the
#' T.cal data. T.tot will automatically be divided into periods to match the
#' length of actual.inc.tracking.data. See
#' [bgnbd.ExpectedCumulativeTransactions].
#'
#' The holdout period should immediately follow the calibration period. This
#' function assumes that all customers' calibration periods end on the same date,
#' rather than starting on the same date (thus customers' birth periods are
#' determined using max(T.cal) - T.cal rather than assuming that it is 0).
#'
#' @inheritParams bgnbd.PlotTrackingCum
#' @param actual.inc.tracking.data vector containing the incremental number of
#' repeat transactions made by customers for each period in the total time
#' period (both calibration and holdout periods). See details.
#' @return Matrix containing actual and expected incremental repeat
#' transactions.
#' @examples
#' data(cdnowSummary)
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # Cumulative repeat transactions made by all customers across calibration
#' # and holdout periods
#' cu.tracking <- cdnowSummary$cu.tracking
#' # make the tracking data incremental
#' inc.tracking <- dc.CumulativeToIncremental(cu.tracking)
#'
#' # parameters estimated using bgnbd.EstimateParameters
#' est.params <- c(0.243, 4.414, 0.793, 2.426)
#'
#' # All parameters are in weeks; the calibration period lasted 39
#' # weeks and the holdout period another 39.
#' bgnbd.PlotTrackingInc(est.params,
#' T.cal = cal.cbs[,"T.cal"],
#' T.tot = 78,
#' actual.inc.tracking.data = inc.tracking,
#' hardie = TRUE)
#' @md
bgnbd.PlotTrackingInc <- function(params,
T.cal,
T.tot,
actual.inc.tracking.data,
n.periods.final = NA,
hardie = TRUE,
xlab = "Week",
ylab = "Transactions",
xticklab = NULL,
title = "Tracking Weekly Transactions") {
dc.check.model.params(printnames = c("r", "alpha", "a", "b"),
params = params,
func = "bgnbd.Plot.PlotTrackingCum")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
if (any(actual.inc.tracking.data < 0) || !is.numeric(actual.inc.tracking.data))
stop("actual.inc.tracking.data must be numeric and may not contain negative numbers.")
if (length(T.tot) > 1 || T.tot < 0 || !is.numeric(T.tot))
stop("T.cal must be a single numeric value and may not be negative.")
actual <- actual.inc.tracking.data
if(is.na(n.periods.final)) n.periods.final <- length(actual)
expected <- dc.CumulativeToIncremental(bgnbd.ExpectedCumulativeTransactions(params,
T.cal,
T.tot,
n.periods.final,
hardie))
ylim <- c(0, max(c(actual, expected)) * 1.05)
plot(actual, type = "l", xaxt = "n", xlab = xlab, ylab = ylab, col = 1, ylim = ylim,
main = title)
lines(expected, lty = 2, col = 2)
if (is.null(xticklab) == FALSE) {
if (length(actual) != length(xticklab)) {
stop("Plot error, xticklab does not have the correct size")
}
axis(1, at = 1:length(actual), labels = xticklab)
} else {
axis(1, at = 1:length(actual), labels = 1:length(actual))
}
abline(v = max(T.cal), lty = 2)
legend("topright", legend = c("Actual", "Model"), col = 1:2, lty = 1:2, lwd = 1)
return(rbind(actual, expected))
}
#' CDNOW repeat transaction data summary
#'
#' Data representing the purchasing behavior of 2,357 CDNOW customers between
#' January 1997 and June 1998, summarized as a customer-by-sufficient-statistic
#' matrix and a vector of cumulative weekly transactions.
#'
#' The customers in this data represent 1/10th of the cohort of customers who
#' made their first transactions with CDNOW in the first quarter of 1997. CDNOW
#' was an online retailer, selling music and related products on the web since
#' 1994.
#'
#' @docType data
#'
#' @usage data(cdnowSummary)
#'
#' @format A named list of four elements:
#' * `cbs` A customer-by-sufficient-statistic matrix with four columns: frequency ("x"), recency
#' ("t.x"), length of observation in the calibration period ("T.cal"), and
#' number of transactions in the holdout period ("x.star"). Each row represents
#' a customer.
#' * `cu.tracking` A vector containing cumulative transactions for
#' every week in both the calibration and estimating periods (78 weeks total).
#' This vector contains the sum of transactions across all customers.
#' * `est.params` A vector containing estimated values for the four Pareto/NBD
#' parameters: r, alpha, s, and beta, in that order. This estimation was made
#' using [`pnbd.EstimateParameters`], and is included here to avoid having to
#' run the relatively time-consuming parameter estimation function in examples.
#' * `m.x` A vector containing the average value of each customer's repeat
#' transactions. Used in examples for spend functions.
#'
#' @source The data was put together using data conversion functions included in
#' this package. The original event log is included (see [`cdnowElog`]).
#' @keywords datasets
#' @md
"cdnowSummary"
#' CDNOW event log data
#'
#' Data representing the purchasing behavior of 2,357 CDNOW customers between
#' January 1997 and June 1998, in event log format.
#'
#' The customers in this data represent 1/10th of the cohort of customers who
#' made their first transactions with CDNOW in the first quarter of 1997. CDNOW
#' was an online retailer, selling music and related products on the web since
#' 1994.
#'
#' @docType data
#'
#' @format A comma-delimited file representing an event log with 6,919 entries.
#' It has 5 columns: The customer's ID in the master dataset, the customer's
#' ID in this dataset (which represents 1/10th of the master dataset), the
#' date of the transaction in the format "%Y%m%d" (e.g. 19970225), the number
#' of CDs purchased, and the dollar value of the transaction.
#'
#' @source Can be found [online](https://www.brucehardie.com/datasets/).
#' @keywords datasets
#' @name cdnowElog
#' @md
NULL
#' Discrete simulated annual event log data
#'
#' Data simulated using BG/BB model assumptions. Contains annual transaction
#' behavior for a period of 14 years, for a cohort of 10,000 customers who made
#' their first transactions in 1970.
#'
#' This dataset was simulated in order to illustrate certain data-conversion
#' functions (see [`dc.MakeRFmatrixCal`]).
#'
#' @docType data
#'
#' @format A comma-delimited file representing an event log with 52,432 entries.
#' It has 2 columns: The customer's ID and the date of the transaction in
#' standard R date format.
#' @keywords datasets
#' @name discreteSimElog
#' @md
NULL
#' Discrete donation data summary
#'
#' This dataset captures the discrete transaction behavior of 11,104 customers
#' over 6 transaction opportunities, summarized as a recency-frequency matrix
#' and a vector of annual transactions.
#'
#' Data from "a major nonprofit organization located in the midwestern United
#' States that is funded in large part by donations from individuals. In 1995
#' the organization "acquired" 11,104 first-time supporters; in each of the
#' following six years, these individuals either did or did not support the
#' organization."
#'
#' This dataset contains, for each possible in-sample recency/frequency
#' combination in the 1995 cohort, the number of customers and the number of
#' transactions they made during the validation period.
#'
#' @docType data
#'
#' @usage data(donationsSummary)
#'
#' @format A named list:
#' \describe{
#' \item{$rf.matrix}{A matrix with 22 rows (for each possible
#' recency-frequency combination in 6 calibration period transaction
#' opportunities) and 4 columns: number of transactions during the calibration
#' period ("x"), recency in the calibration period ("t.x"), number of
#' transaction opportunities in the calibration period ("n.cal"), and number
#' of customers with this recency-frequency combination in the calibration
#' period ("custs").}
#' \item{$rf.matrix.holdout}{A matrix with 15 rows (for each possible
#' recency-frequency combination in 5 holdout period transaction
#' opportunities) and 4 columns: number of transactions during the holdout
#' period ("x.star"), recency in the holdout period ("t.x.star"), number of
#' transaction opportunities in the holdout period ("n.star"), and number of
#' customers with the recency-frequency combination in the holdout period
#' ("custs").}
#' \item{$x.star}{A vector with 22 elements, containing the number
#' of transactions made by each calibration period recency-frequency bin in
#' the holdout period. It is in the same order as \code{$rf.matrix}.}
#' \item{$annual.sales}{A vector with 11 elements, containing the number of
#' transactions made by all customers in each time period in both the
#' calibration and holdout periods.}
#' }
#'
#' @references Fader, Peter S., Bruce G.S. Hardie, and Jen Shang. “Customer-Base
#' Analysis in a Discrete-Time Noncontractual Setting.” \emph{Marketing Science}
#' 29(6), pp. 1086-1108. 2010. INFORMS. \url{http://www.brucehardie.com/notes/020/}
#' @source Data can be found online at \url{http://www.brucehardie.com/notes/010/}
#' (Associated Excel spreadsheet)
#' @keywords datasets
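#' @examples
#' # A short illustrative look at the object (not part of the original
#' # documentation):
#' data(donationsSummary)
#' # 22 calibration-period recency-frequency bins
#' donationsSummary$rf.matrix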
"donationsSummary"
################################################################################
## Functions for Manipulating Data
library(Matrix)
#' Check the inputs to functions that use this common pattern
#'
#' A bunch of functions whose names start with \code{pnbd} take a set of four
#' parameters as their first argument, and then a set of vectors or scalars such
#' as \code{x} or \code{T.cal} as their subsequent arguments. This function
#' started out as pnbd.InputCheck() and it was meant to run input checks for any
#' number of such subsequent vector arguments, as long as they all met the same
#' requirements as \code{x}, \code{t.x} and \code{T.cal} in
#' \code{\link{pnbd.LL}}: meaning, the length of the longest of these vectors is
#' a multiple of the lengths of all others, and all vectors are numeric and
#' positive.
#'
#' With an extra argument, \code{printnames}, pnbd.InputCheck() could also
#' accommodate input checks for functions whose names start with \code{bgbb},
#' \code{bgnbd}, and \code{spend} so it was basically useful everywhere. That's
#' when it became \code{dc.InputCheck()}. \code{params} can have any length as
#' long as that length is the same as the length of \code{printnames}, so
#' \code{dc.InputCheck()} can probably handle mixtures of distributions for
#' modeling BTYD behavior that are not yet implemented.
#'
#' The other arguments (\code{...}) are named vectors used by the functions
#' that call \code{dc.InputCheck}, such as x, t.x and T.cal. The standard
#' rules for vector operations apply: if they are not of the same length,
#' shorter vectors are recycled (starting over at the first element) until
#' they are as long as the longest vector. Vector recycling is a good way to
#' get into trouble, so keep vectors to the same length and use single values
#' for parameters that are to be the same for all calculations. If one of
#' these arguments has a length greater than one, the calling function will
#' return a vector of results (probabilities, expectations, and so on).
#'
#' @param params If used by \code{pnbd.[...]} functions, Pareto/NBD parameters
#' -- a vector with r, alpha, s, and beta, in that order. See
#' \code{\link{pnbd.LL}}. If used by \code{bgnbd.[...]} functions, BG/NBD
#' parameters -- a vector with r, alpha, a, and b, in that order. See
#' \code{\link{bgnbd.LL}}. If used by \code{bgbb.[...]} functions, BG/BB
#' parameters -- a vector with alpha, beta, gamma, and delta, in that order.
#' See \code{\link{bgbb.LL}}. If used by \code{spend.[...]} functions, a
#' vector of gamma-gamma parameters -- p, q, and gamma, in that order. See
#' \code{\link{spend.LL}}.
#' @param func Function calling dc.InputCheck
#' @param printnames a string vector with the names of parameters to pass to
#' \code{\link{dc.check.model.params}}
#' @param ... other arguments
#' @return If all is well, a data frame with everything you need in it, with
#' nrow() equal to the length of the longest vector in \code{...}
#' @seealso \code{\link{pnbd.LL}}
#' \code{\link{pnbd.ConditionalExpectedTransactions}}
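#' @examples
#' # A minimal illustrative sketch (not from the original documentation).
#' # The parameter values below are arbitrary but valid Pareto/NBD inputs;
#' # T.cal is recycled to match the length of x and t.x.
#' params <- c(0.55, 10.56, 0.61, 11.64)
#' dc.InputCheck(params = params,
#'               func = "pnbd.LL",
#'               printnames = c("r", "alpha", "s", "beta"),
#'               x = c(0, 1, 3),
#'               t.x = c(0, 15.7, 32.9),
#'               T.cal = 38.86)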
dc.InputCheck <- function(params,
func,
printnames = c("r", "alpha", "s", "beta"),
...) {
inputs <- as.list(environment())
vectors <- list(...)
dc.check.model.params(printnames = inputs$printnames,
params = inputs$params,
func = inputs$func)
max.length <- max(sapply(vectors, length))
lapply(names(vectors), function(x) {
if(max.length %% length(vectors[[x]]))
warning(paste("Maximum vector length not a multiple of the length of",
x, sep = " "))
if (any(vectors[[x]] < 0) || !is.numeric(vectors[[x]]))
stop(paste(x,
"must be numeric and may not contain negative numbers.",
sep = " "))
})
return(as.data.frame(lapply(vectors,
rep,
length.out = max.length)))
}
#' Compress Customer-by-Sufficient-Statistic (CBS) Matrix
#'
#' Combines all customers with the same combination of recency, frequency and
#' length of calibration period in the customer-by-sufficient-statistic matrix,
#' and adds a fourth column labelled "custs" (with the number of customers
#' belonging in each row).
#'
#' This function is meant to be used to speed up the log-likelihood and
#' parameter estimation functions in the Pareto/NBD (pnbd) set of functions.
#' How much faster those functions run depends on how similar customers are.
#' You can use compressed CBS matrices in BG/NBD estimation too, but there
#' will be no speed gains over using un-compressed CBS data.
#'
#' This function only takes columns "x", "t.x", and "T.cal" into account. All
#' other columns will be added together - for example, if you have a spend
#' column, the output's spend column will contain the total amount spent by all
#' customers with an identical recency, frequency, and time observed.
#'
#' @param cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x"), recency ("t.x"), and total time
#' observed ("T.cal"). Note that recency must be the time between the start of
#' the calibration period and the customer's last transaction, not the time
#' between the customer's last transaction and the end of the calibration
#' period.
#' @param rounding the function tries to ensure that there are similar
#' customers by rounding the customer-by-sufficient-statistic matrix first.
#' This parameter determines how many decimal places are left in the data.
#' Negative numbers are allowed; see the documentation for round in the base
#' package. As of the time of writing, that documentation states: "Rounding to
#' a negative number of digits means rounding to a power of ten, so for
#' example round(x, digits = -2) rounds to the nearest hundred."
#' @return A customer-by-sufficient-statistic matrix with an additional column
#' "custs", which contains the number of customers with each combination of
#' recency, frequency and length of calibration period.
#' @examples
#' # Create a sample customer-by-sufficient-statistic matrix:
#' set.seed(7)
#' x <- sample(1:4, 10, replace = TRUE)
#' t.x <- sample(1:4, 10, replace = TRUE)
#' T.cal <- rep(4, 10)
#' ave.spend <- sample(10:20, 10, replace = TRUE)
#' cbs <- cbind(x, t.x, T.cal, ave.spend)
#' cbs
#'
#' # If cbs is printed, you would note that the following
#' # sets of rows have the same x, t.x and T.cal:
#' # (1, 6, 8); (3, 9)
#'
#' dc.compress.cbs(cbs, 0) # No rounding necessary
#'
#' # Note that all additional columns (in this case, ave.spend)
#' # are aggregated by sum.
dc.compress.cbs <- function(cbs,
rounding = 3) {
if (!("x" %in% colnames(cbs)))
stop("Error in bgnbd.compress.cbs: cbs must have a frequency column labelled \"x\"")
if (!("t.x" %in% colnames(cbs)))
stop("Error in bgnbd.compress.cbs: cbs must have a recency column labelled \"t.x\"")
if (!("T.cal" %in% colnames(cbs)))
stop("Error in bgnbd.compress.cbs: cbs must have a column for length of time observed labelled \"T.cal\"")
orig.rows <- nrow(cbs)
if (!("custs" %in% colnames(cbs))) {
custs <- rep(1, nrow(cbs))
cbs <- cbind(cbs, custs)
}
other.colnames <- colnames(cbs)[!(colnames(cbs) %in% c("x", "t.x", "T.cal"))]
## Round x, t.x and T.cal to the desired level
cbs[, c("x", "t.x", "T.cal")] <- round(cbs[, c("x", "t.x", "T.cal")], rounding)
## Aggregate every column that is not x, t.x or T.cal by those columns. Do this by
## summing entries which have the same x, t.x and T.cal.
cbs <- as.matrix(aggregate(cbs[, !(colnames(cbs) %in% c("x", "t.x", "T.cal"))],
by = list(x = cbs[, "x"], t.x = cbs[, "t.x"], T.cal = cbs[, "T.cal"]), sum))
colnames(cbs) <- c("x", "t.x", "T.cal", other.colnames)
final.rows <- nrow(cbs)
message("Data reduced from ", orig.rows, " rows to ", final.rows, " rows.")
return(cbs)
}
#' Convert Event Log to CBS and CBT Matrices
#'
#' Uses an event log to return calibration period CBT and CBS, holdout period
#' CBT and CBS, and summary data for each customer (including times of first and
#' last transactions).
#'
#' This function automatically removes customers' first transactions, meaning
#' that the output matrices will only contain repeat transaction information.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction. The "date" column must
#' contain date objects, not character strings or factors.
#' @param per interval of time for customer-by-sufficient-statistic matrix.
#' May be "day", "week", "month", "quarter", or "year".
#' @param T.cal R date object indicating when the calibration period ends.
#' @param T.tot R date object indicating when the holdout period ends.
#' @param merge.same.date If TRUE, transactions from the same period count as
#' a single transaction instead of counting as multiple transactions.
#' @param cohort.birth.per Time interval used to filter the event log. Can be
#' specified as a Date object or a vector of two Dates. If one date object is
#' used, the birth period is from the minimum date in the dataset through the
#' given date. If two dates are given, the birth period is set between
#' (inclusive) the two dates.
#' @param dissipate.factor integer indicating how much of the dataset to
#' eliminate. If left as 1, none of the dataset is eliminated.
#' (dissipate.factor-1)/(dissipate.factor) events will be removed from the
#' event log. For example, if 2 is provided, 1/2 of the event log is
#' eliminated, and if 10 is provided, 9/10 of the event log is eliminated.
#' @param statistic Determines the type of CBT returned: can be "reach",
#' "freq", "total.spend", or "average.spend" (note: the spend statistics
#' require a $sales column in elog).
#' @return A list of items: - `$cal` list with CBS and CBT from the calibration
#' period - `$holdout` list with CBS and CBT from holdout period -
#' `$cust.data` data frame with each customer's first and last transaction
#' details
#' @examples
#' # Create event log from file "cdnowElog.csv", which has
#' # customer IDs in the second column, dates in the third column, and
#' # sales numbers in the fifth column.
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#'
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#'
#' data <- dc.ElogToCbsCbt(elog, per="week", T.cal=as.Date("1997-09-30"))
#' @md
dc.ElogToCbsCbt <- function(elog,
per = "week",
T.cal = max(elog$date),
T.tot = max(elog$date),
merge.same.date = TRUE,
cohort.birth.per = T.cal,
dissipate.factor = 1,
statistic = "freq") {
dc.WriteLine("Started making CBS and CBT from the ELOG...")
elog <- dc.FilterCustByBirth(elog, cohort.birth.per)
if (nrow(elog) == 0)
stop("error caused by customer birth filtering")
elog <- elog[elog$date <= T.tot, ]
if (nrow(elog) == 0)
stop("error caused by holdout period end date")
elog <- dc.DissipateElog(elog, dissipate.factor)
if (nrow(elog) == 0)
stop("error caused by event long dissipation")
if (merge.same.date) {
elog <- dc.MergeTransactionsOnSameDate(elog)
if (nrow(elog) == 0)
stop("error caused by event log merging")
}
calibration.elog <- elog[elog$date <= T.cal, ]
holdout.elog <- elog[elog$date > T.cal, ]
split.elog.list <- dc.SplitUpElogForRepeatTrans(calibration.elog)
repeat.transactions.elog <- split.elog.list$repeat.trans.elog
cust.data <- split.elog.list$cust.data
dc.WriteLine("Started Building CBS and CBT for calibration period...")
cbt.cal <- dc.BuildCBTFromElog(calibration.elog, statistic)
cbt.cal.rep.trans <- dc.BuildCBTFromElog(repeat.transactions.elog, statistic)
cbt.cal <- dc.MergeCustomers(cbt.cal, cbt.cal.rep.trans)
dates <- data.frame(cust.data$birth.per, cust.data$last.date, T.cal)
cbs.cal <- dc.BuildCBSFromCBTAndDates(cbt.cal, dates, per, cbt.is.during.cal.period = TRUE)
dc.WriteLine("Finished building CBS and CBT for calibration period.")
cbt.holdout <- NULL
cbs.holdout <- NULL
if (nrow(holdout.elog) > 0) {
dc.WriteLine("Started building CBS and CBT for holdout period...")
cbt.holdout <- dc.BuildCBTFromElog(holdout.elog, statistic)
dates <- c((T.cal + 1), T.tot)
cbs.holdout <- dc.BuildCBSFromCBTAndDates(cbt.holdout, dates, per, cbt.is.during.cal.period = FALSE)
cbt.holdout <- dc.MergeCustomers(cbt.cal, cbt.holdout)
cbs.holdout <- dc.MergeCustomers(cbs.cal, cbs.holdout)
dc.WriteLine("Finished building CBS and CBT for holdout.")
dc.WriteLine("...Finished Making All CBS and CBT")
return(list(cal = list(cbs = cbs.cal, cbt = cbt.cal), holdout = list(cbt = cbt.holdout,
cbs = cbs.holdout), cust.data = cust.data))
}
dc.WriteLine("...Finished Making All CBS and CBT")
return(list(cal = list(cbs = cbs.cal, cbt = cbt.cal), holdout = list(cbt = cbt.holdout,
cbs = cbs.holdout), cust.data = cust.data))
}
#' Filter Customer by Birth
#'
#' Filters an event log, keeping all transactions made by customers who made
#' their first transactions in the given time interval.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction. The date column must be
#' formatted as Date objects.
#' @param cohort.birth.per Time interval used to filter the event log. Can be
#' specified as a Date object or a vector of two Dates. If one date object is
#' used, the birth period is from the minimum date in the dataset through the
#' given date. If two dates are given, the birth period is set between
#' (inclusive) the two dates.
#' @return event log with only rows from customers who made their first
#' transaction within the birth period.
#' @examples
#' # Create event log from file "cdnowElog.csv", which has
#' # customer IDs in the second column, dates in the third column, and
#' # sales numbers in the fifth column.
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#'
#' # converting the date column to Date objects is
#' # necessary for this function.
#' elog$date <- as.Date(elog$date, "%Y%m%d")
#'
#' # starting date. Note that it must be a Date object.
#' start.date <- as.Date("1997-01-01")
#' # ending date. Note that it must be a Date object.
#' end.date <- as.Date("1997-01-31")
#'
#' # Filter the elog to include only customers who made their
#' # first transaction in January 1997
#' filtered.elog <- dc.FilterCustByBirth(elog, c(start.date, end.date))
dc.FilterCustByBirth <- function(elog,
cohort.birth.per) {
L = length(cohort.birth.per)
if (L > 2) {
stop("Invalid cohort.birth.per argument")
}
if (L == 0) {
return(elog)
}
if (L == 1) {
start.date <- min(elog$date)
end.date <- cohort.birth.per
} else if (length(cohort.birth.per) == 2) {
start.date <- min(cohort.birth.per)
end.date <- max(cohort.birth.per)
}
cbt <- dc.CreateFreqCBT(elog)
custs.first.transaction.indices <- dc.GetFirstPurchasePeriodsFromCBT(cbt)
custs.first.transaction.dates <- as.Date(colnames(cbt)[custs.first.transaction.indices])
custs.in.birth.period.indices <- which(custs.first.transaction.dates >= start.date &
custs.first.transaction.dates <= end.date)
custs.in.birth.period <- rownames(cbt)[custs.in.birth.period.indices]
elog <- elog[elog$cust %in% custs.in.birth.period, ]
dc.WriteLine("Finished filtering out customers not in the birth period.")
return(elog)
}
#' Dissipate Event Log
#'
#' Filters an event log, keeping a fraction of the original event log.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction.
#' @param dissipate.factor integer indicating how much of the dataset to
#' eliminate. If set to 1, the event log is returned unchanged.
#' (dissipate.factor-1)/(dissipate.factor) events will be removed from the
#' event log. For example, if 2 is provided, 1/2 of the event log is
#' eliminated, and if 10 is provided, 9/10 of the event log is eliminated.
#' @return Reduced event log.
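#' @examples
#' # A small illustrative sketch on a hypothetical event log (the data frame
#' # below is made up for demonstration):
#' elog <- data.frame(cust = rep(1:5, each = 2),
#'                    date = as.Date("1997-01-01") + 0:9)
#' # keep roughly 1 out of every 2 events
#' dc.DissipateElog(elog, dissipate.factor = 2)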
dc.DissipateElog <- function(elog,
dissipate.factor) {
if (dissipate.factor > 1) {
x <- rep(FALSE, dissipate.factor)
x[1] <- TRUE
keptIndices <- rep(x, length.out = nrow(elog))
elog <- elog[keptIndices, ]
elog$cust <- factor(elog$cust)
dc.WriteLine("Finished filtering out", dissipate.factor - 1, "of every",
dissipate.factor, "transactions.")
} else {
dc.WriteLine("No dissipation requested.")
}
return(elog)
}
#' Split Up Event Log for Repeat Transactions
#'
#' Turns an event log into a repeat transaction event log, removing customers'
#' first transactions. Also returns a data frame with information about
#' customers' first and last transactions.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction. The "date" column must
#' contain date objects, not character strings or factors.
#' @return A named list: - `repeat.trans.elog` an event log containing only
#' repeat transactions - `cust.data` data frame containing the first and last
#' transaction information for each customer
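#' @examples
#' # An illustrative sketch using the CDNOW event log shipped with the package
#' # (mirrors the set-up used in other examples in this file):
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#' split.elog <- dc.SplitUpElogForRepeatTrans(elog)
#' head(split.elog$cust.data)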
dc.SplitUpElogForRepeatTrans <- function(elog) {
dc.WriteLine("Started Creating Repeat Purchases")
unique.custs <- unique(elog$cust)
first.trans.indices <- rep(0, length(unique.custs))
last.trans.indices <- rep(0, length(unique.custs))
count <- 0
for (cust in unique.custs) {
count <- count + 1
cust.indices <- which(elog$cust == cust)
# Of this customer's transactions, find the index of the first one
first.trans.indices[count] <- min(cust.indices[which(elog$date[cust.indices] ==
min(elog$date[cust.indices]))])
# Of this customer's transactions, find the index of the last one
last.trans.indices[count] <- min(cust.indices[which(elog$date[cust.indices] ==
max(elog$date[cust.indices]))])
}
repeat.trans.elog <- elog[-first.trans.indices, ]
first.trans.data <- elog[first.trans.indices, ]
last.trans.data <- elog[last.trans.indices, ]
# [-1] is because we don't want to change the column name for custs
names(first.trans.data)[-1] <- paste("first.", names(first.trans.data)[-1], sep = "")
names(first.trans.data)[which(names(first.trans.data) == "first.date")] <- "birth.per"
names(last.trans.data) <- paste("last.", names(last.trans.data), sep = "")
# [-1] is because we don't want to include two custs columns
cust.data <- data.frame(first.trans.data, last.trans.data[, -1])
names(cust.data) <- c(names(first.trans.data), names(last.trans.data)[-1])
dc.WriteLine("Finished Creating Repeat Purchases")
return(list(repeat.trans.elog = repeat.trans.elog, cust.data = cust.data))
}
#' Build Customer-by-Time Matrix from Event Log
#'
#' Creates a customer-by-time matrix from an event log.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction. For the total spend and
#' average spend matrices, the event log must have a "sales" column. If the
#' dates are not formatted to be in the order year-month-day, the columns of
#' the customer-by-time matrix may not be ordered chronologically if the
#' "date" column does not consist of date objects (R will order them
#' alphabetically). This will cause problems with other functions, so it is
#' better to convert the date column to date objects before running this
#' function.
#' @param statistic either "freq", "reach", "total.spend", or
#' "average.spend". This determines what type of customer-by-time matrix is
#' returned.
#' @return Customer-by-time matrix.
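#' @examples
#' # An illustrative sketch using the CDNOW event log shipped with the package
#' # (mirrors the set-up used in other examples in this file):
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#' freq.cbt <- dc.BuildCBTFromElog(elog, statistic = "freq")
#' freq.cbt[1:5, 1:5]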
dc.BuildCBTFromElog <- function(elog,
statistic = "freq") {
dc.WriteLine("Started Building CBT...")
if (statistic == "freq") {
return(dc.CreateFreqCBT(elog))
} else if (statistic == "reach") {
return(dc.CreateReachCBT(elog))
} else if (statistic == "total.spend") {
return(dc.CreateSpendCBT(elog))
} else if (statistic == "average.spend") {
return(dc.CreateSpendCBT(elog, is.avg.spend = TRUE))
} else {
stop("Invalid cbt build (var: statistic) specified.")
}
}
#' Create Frequency Customer-by-Time Matrix
#'
#' Creates a customer-by-time matrix with total number of transactions per time period.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction. If the dates are not
#' formatted to be in the order year-month-day, the columns of the
#' customer-by-time matrix may not be ordered chronologically if the "date"
#' column does not consist of date objects (R will order them alphabetically).
#' This will cause problems with other functions, so it is better to convert
#' the date column to date objects before running this function.
#' @return Frequency customer-by-time matrix.
#' @examples
#' # Create event log from file "cdnowElog.csv", which has
#' # customer IDs in the second column, dates in the third column, and
#' # sales numbers in the fifth column.
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#'
#' # Given that the dates are in the order year-month-day,
#' # it is not strictly necessary to convert the date column
#' # to date formats. However, it is good practice:
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#'
#' freq.cbt <- dc.CreateFreqCBT(elog)
dc.CreateFreqCBT <- function(elog) {
# Factoring is so that when xtabs sorts customers, it does so in the original
# order. It doesn't matter that they're factors; rownames are stored as characters.
elog$cust <- factor(elog$cust, levels = unique(elog$cust))
xt <- xtabs(~cust + date, data = elog)
dc.WriteLine("...Completed Freq CBT")
return(xt)
}
#' Create Reach Customer-by-Time Matrix
#'
#' Creates a customer-by-time matrix with 1's in periods that a customer made a
#' transaction and 0's otherwise.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction. If the dates are not
#' formatted to be in the order year-month-day, the columns of the
#' customer-by-time matrix may not be ordered chronologically if the "date"
#' column does not consist of date objects (R will order them alphabetically).
#' This will cause problems with other functions, so it is better to convert
#' the date column to date objects before running this function.
#' @return Reach customer-by-time matrix.
#' @examples
#' # Create event log from file "cdnowElog.csv", which has
#' # customer IDs in the second column, dates in the third column, and
#' # sales numbers in the fifth column.
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#'
#' # Given that the dates are in the order year-month-day,
#' # it is not strictly necessary to convert the date column
#' # to date formats. However, it is good practice:
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#'
#' reach.cbt <- dc.CreateReachCBT(elog)
dc.CreateReachCBT <- function(elog) {
# Factoring is so that when xtabs sorts customers, it does so in the original
# order. It doesn't matter that they're factors; rownames are stored as characters.
elog$cust <- factor(elog$cust, levels = unique(elog$cust))
xt <- xtabs(~cust + date, data = elog)
xt[xt > 1] <- 1
dc.WriteLine("...Completed Reach CBT")
return(xt)
}
#' Create Spend Customer-by-Time Matrix
#'
#' Creates a customer-by-time matrix with spend per time period.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction. If the dates are not
#' formatted to be in the order year-month-day, the columns of the
#' customer-by-time matrix may not be ordered chronologically if the "date"
#' column does not consist of date objects (R will order them alphabetically).
#' This will cause problems with other functions, so it is better to convert
#' the date column to date objects before running this function.
#' @param is.avg.spend if TRUE, return average spend customer-by-time matrix;
#' else, return total spend customer-by-time matrix.
#' @return Spend customer-by-time matrix.
#' @examples
#' # Create event log from file "cdnowElog.csv", which has
#' # customer IDs in the second column, dates in the third column, and
#' # sales numbers in the fifth column.
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5);
#'
#' # Given that the dates are in the order year-month-day,
#' # it is not strictly necessary to convert the date column
#' # to date formats. However, it is good practice:
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#'
#' spend.cbt <- dc.CreateSpendCBT(elog)
dc.CreateSpendCBT <- function(elog,
is.avg.spend = FALSE) {
# Factoring is so that when xtabs sorts customers, it does so in the original
# order. It doesn't matter that they're factors; rownames are stored as characters.
elog$cust <- factor(elog$cust, levels = unique(elog$cust))
sales.xt <- xtabs(sales ~ cust + date, data = elog)
if (is.avg.spend) {
suppressMessages(freq.cbt <- dc.CreateFreqCBT(elog))
sales.xt <- sales.xt/freq.cbt
# For the cases where there were no transactions
sales.xt[which(!is.finite(sales.xt))] <- 0
}
dc.WriteLine("...Completed Spend CBT")
return(sales.xt)
}
#' Make Recency-Frequency Matrix Skeleton
#'
#' Creates a matrix with all possible recency and frequency combinations.
#'
#' Makes the structure in which to input data for recency-frequency matrices.
#'
#' @param n.periods number of transaction opportunities in the calibration
#' period.
#' @return Matrix with two columns: frequency ("x") and recency ("t.x"). All
#' possible recency-frequency combinations in the calibration period are
#' represented.
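#' @examples
#' # A short illustrative sketch (not from the original documentation):
#' # for 6 transaction opportunities there are 6*7/2 + 1 = 22 possible
#' # recency-frequency combinations, as in the donation data.
#' dc.MakeRFmatrixSkeleton(6)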
dc.MakeRFmatrixSkeleton <- function(n.periods) {
## note: to access the starting i'th t.x element (i > 0), use index i*(i-1)/2 + 2;
## this yields the sequence 2, 3, 5, 8, ...
## There are n*(n+1)/2 + 1 elements in this table.
n <- n.periods
rf.mx.skeleton <- matrix(0, n * (n + 1)/2 + 1, 2)
colnames(rf.mx.skeleton) <- c("x", "t.x")
for (ii in 1:n) {
ith.t.index <- 2 + ii * (ii - 1)/2
t.vector <- rep(ii, ii)
x.vector <- c(1:ii)
rf.mx.skeleton[ith.t.index:(ith.t.index + (ii - 1)), 1] <- x.vector
rf.mx.skeleton[ith.t.index:(ith.t.index + (ii - 1)), 2] <- t.vector
}
return(rf.mx.skeleton)
}
#' Make Holdout Period Recency-Frequency Matrix
#'
#' Creates a recency-frequency matrix for the holdout period.
#'
#' @param holdout.cbt holdout period frequency customer-by-time matrix. This
#' is a matrix consisting of a row per customer and a column per time period.
#' It should contain the number of transactions each customer made per time
#' period.
#' @return recency-frequency matrix for the holdout period, with four columns:
#' frequency ("x.star"), recency ("t.x.star"), number of transaction
#' opportunities in the holdout period ("n.star"), and the number of customers
#' with each frequency-recency combination ("custs").
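#' @examples
#' # An illustrative sketch, reusing the set-up from the
#' # dc.MakeRFmatrixCal() example below:
#' elog <- dc.ReadLines(system.file("data/discreteSimElog.csv", package="BTYD"),1,2)
#' elog[,"date"] <- as.Date(elog[,"date"])
#' cutoff.date <- as.Date("1977-01-01")
#' cbt <- dc.CreateReachCBT(elog)
#' holdout.cbt <- cbt[,as.Date(colnames(cbt)) > cutoff.date]
#' rf.matrix.holdout <- dc.MakeRFmatrixHoldout(holdout.cbt)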
dc.MakeRFmatrixHoldout <- function(holdout.cbt) {
holdout.length <- ncol(holdout.cbt)
matrix.skeleton <- dc.MakeRFmatrixSkeleton(holdout.length)
n.combinations <- nrow(matrix.skeleton)
n.star <- rep(holdout.length, n.combinations)
final.transactions <- dc.GetLastPurchasePeriodsFromCBT(holdout.cbt)
custs <- rep(0, n.combinations)
for (ii in 1:n.combinations) {
custs.with.freq <- which(rowSums(holdout.cbt) == matrix.skeleton[ii, 1])
custs.with.rec <- which(final.transactions == matrix.skeleton[ii, 2])
custs[ii] <- length(intersect(custs.with.freq, custs.with.rec))
}
rf.holdout.matrix <- cbind(matrix.skeleton, n.star, custs)
colnames(rf.holdout.matrix) <- c("x.star", "t.x.star", "n.star", "custs")
return(rf.holdout.matrix)
}
#' Make Calibration Period Recency-Frequency Matrix
#'
#' Make a calibration period recency-frequency matrix.
#'
#' @param frequencies vector which indicates the number of repeat transactions
#' made by customers in the calibration period.
#' @param periods.of.final.purchases a vector indicating in which period
#' customers made their final purchases.
#' @param num.of.purchase.periods the number of transaction opportunities in
#' the calibration period.
#' @param holdout.frequencies an optional vector indicating the number of
#' transactions made by customers in the holdout period.
#' @return A matrix with all possible frequency-recency combinations, and the
#' number of customers with each combination. It contains columns for
#' frequency ("x"), recency ("t.x"), number of transaction opportunities in
#' the calibration period ("n.cal"), number of customers with this combination
#' of recency, frequency, and number of periods observed ("custs"), and
#' optionally, number of transactions in the holdout period ("x.star").
#' @examples
#' elog <- dc.ReadLines(system.file("data/discreteSimElog.csv", package="BTYD"),1,2)
#' elog[,"date"] <- as.Date(elog[,"date"])
#'
#' cutoff.date <- as.Date("1977-01-01")
#' cbt <- dc.CreateReachCBT(elog)
#' cal.cbt <- cbt[,as.Date(colnames(cbt)) <= cutoff.date]
#' holdout.cbt <- cbt[,as.Date(colnames(cbt)) > cutoff.date]
#'
#' cal.start.dates.indices <- dc.GetFirstPurchasePeriodsFromCBT(cal.cbt)
#' cal.start.dates <- as.Date(colnames(cal.cbt)[cal.start.dates.indices])
#' cal.end.dates.indices <- dc.GetLastPurchasePeriodsFromCBT(cal.cbt)
#' cal.end.dates <- as.Date(colnames(cal.cbt)[cal.end.dates.indices])
#' T.cal.total <- rep(cutoff.date, nrow(cal.cbt))
#' cal.dates <- data.frame(cal.start.dates, cal.end.dates, T.cal.total)
#'
#' # Create calibration period customer-by-sufficient-statistic data frame,
#' # using years as the unit of time.
#' cal.cbs <- dc.BuildCBSFromCBTAndDates(cal.cbt,
#' cal.dates,
#' per="year",
#' cbt.is.during.cal.period=TRUE)
#'
#' holdout.start <- as.Date(colnames(holdout.cbt)[1])
#' holdout.end <- as.Date(tail(colnames(holdout.cbt),n=1))
#' # The (-1) below is to remove the effect of the birth period - we are only
#' # interested in repeat transactions in the calibration period.
#' frequencies <- (cal.cbs[,"x"] - 1)
#' periods.of.final.purchases <- cal.cbs[,"t.x"]
#' num.of.purchase.periods <- ncol(cal.cbt) - 1
#'
#' # Create a calibration period recency-frequency matrix
#' cal.rf.matrix <- dc.MakeRFmatrixCal(frequencies,
#' periods.of.final.purchases,
#' num.of.purchase.periods)
dc.MakeRFmatrixCal <- function(frequencies,
periods.of.final.purchases,
num.of.purchase.periods,
holdout.frequencies = NULL) {
if (!is.numeric(periods.of.final.purchases)) {
stop("periods.of.final.purchases must be numeric")
}
if (length(periods.of.final.purchases) != length(frequencies)) {
stop(paste("number of customers in frequencies is not equal", "to the last purchase period vector"))
}
## initializes the data structures to later be filled in with counts
rf.mx.skeleton <- dc.MakeRFmatrixSkeleton(num.of.purchase.periods)
if (is.null(holdout.frequencies)) {
RF.matrix <- cbind(rf.mx.skeleton, num.of.purchase.periods, 0)
colnames(RF.matrix) <- c("x", "t.x", "n.cal", "custs")
} else {
RF.matrix <- cbind(rf.mx.skeleton, num.of.purchase.periods, 0, 0)
colnames(RF.matrix) <- c("x", "t.x", "n.cal", "custs", "x.star")
}
## create a matrix out of the frequencies & periods.of.final.purchases
rf.n.custs <- cbind(frequencies, periods.of.final.purchases, holdout.frequencies)
## count all the pairs with zero for frequency and remove them
zeroes.rf.subset <- which(rf.n.custs[, 1] == 0) ##(which x == 0)
RF.matrix[1, 4] <- length(zeroes.rf.subset)
if (!is.null(holdout.frequencies)) {
RF.matrix[1, 5] <- sum(holdout.frequencies[zeroes.rf.subset])
}
## guard against the edge case where no zero-frequency customers exist
## (indexing with -integer(0) would otherwise drop every row)
if (length(zeroes.rf.subset) > 0) {
rf.n.custs <- rf.n.custs[-zeroes.rf.subset, ]
}
## sort the count data by both frequency and final purchase period
rf.n.custs <- rf.n.custs[order(rf.n.custs[, 1], rf.n.custs[, 2]), ]
## formula for the row index: (x-1) + 1 + t.x*(t.x-1)/2 + 1
## Keep a running count of duplicate (x, t.x) pairs; once the pair changes, use
## the formula above to place that count into the RF table.
current.pair <- c(rf.n.custs[1, 1], rf.n.custs[1, 2])
same.item.in.a.row.counter <- 1
if (!is.null(holdout.frequencies)) {
x.star.total <- rf.n.custs[1, 3]
}
num.count.points <- nrow(rf.n.custs)
for (ii in 2:num.count.points) {
last.pair <- current.pair
current.pair <- c(rf.n.custs[ii, 1], rf.n.custs[ii, 2])
if (identical(last.pair, current.pair)) {
same.item.in.a.row.counter <- same.item.in.a.row.counter + 1
if (!is.null(holdout.frequencies)) {
x.star.total <- x.star.total + rf.n.custs[ii, 3]
}
} else {
x <- last.pair[1]
t.x <- last.pair[2]
corresponding.rf.index <- (x - 1) + 1 + t.x * (t.x - 1)/2 + 1
RF.matrix[corresponding.rf.index, 4] <- same.item.in.a.row.counter
same.item.in.a.row.counter <- 1
if (!is.null(holdout.frequencies)) {
RF.matrix[corresponding.rf.index, 5] <- x.star.total
x.star.total <- rf.n.custs[ii, 3]
}
}
if (ii == num.count.points) {
x <- current.pair[1]
t.x <- current.pair[2]
corresponding.rf.index <- (x - 1) + 1 + t.x * (t.x - 1)/2 + 1
RF.matrix[corresponding.rf.index, 4] <- same.item.in.a.row.counter
same.item.in.a.row.counter <- NULL
if (!is.null(holdout.frequencies)) {
RF.matrix[corresponding.rf.index, 5] <- x.star.total
x.star.total = NULL
}
}
}
return(RF.matrix)
}
#' Build CBS matrix from CBT matrix
#'
#' Given a customer-by-time matrix, yields the resulting
#' customer-by-sufficient-statistic matrix.
#'
#' The customer-by-sufficient statistic matrix will contain the sum of the
#' statistic included in the customer-by-time matrix (see the cbt parameter),
#' the customer's last transaction date, and the total time period for which the
#' customer was observed.
#'
#' @param cbt customer-by-time matrix. This is a matrix consisting of a row per
#' customer and a column per time period. It should contain numeric
#' information about a customer's transactions in every time period - either
#' the number of transactions in that time period (frequency), a 1 to indicate
#' that at least 1 transaction occurred (reach), or the average/total amount
#' spent in that time period.
#' @param dates if cbt.is.during.cal.period is TRUE, then dates is a data
#' frame with three columns: 1. the dates when customers made their first
#' purchases 2. the dates when customers made their last purchases 3. the date
#' of the end of the calibration period. if cbt.is.during.cal.period is FALSE,
#' then dates is a vector with two elements: 1. the date of the beginning of
#' the holdout period 2. the date of the end of the holdout period.
#' @param per interval of time for customer-by-sufficient-statistic matrix.
#' May be "day", "week", "month", "quarter", or "year".
#' @param cbt.is.during.cal.period if TRUE, indicates the customer-by-time
#' matrix is from the calibration period. If FALSE, indicates the
#' customer-by-time matrix is from the holdout period.
#' @return Customer-by-sufficient-statistic matrix, with three columns:
#' frequency("x"), recency("t.x") and total time observed("T.cal"). See
#' details. Frequency is total transactions, not repeat transactions.
#' @examples
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#'
#' # Transaction-flow models are about interpurchase times. Since we
#' # only know purchase times to the day, we merge all transaction on
#' # the same day. This example uses dc.MergeTransactionsOnSameDate
#' # to illustrate this; however, we could have simply used dc.CreateReachCBT
#' # instead of dc.CreateFreqCBT to obtain the same result.
#' merged.elog <- dc.MergeTransactionsOnSameDate(elog)
#' cutoff.date <- as.Date("1997-09-30")
#' freq.cbt <- dc.CreateFreqCBT(merged.elog)
#' cal.freq.cbt <- freq.cbt[,as.Date(colnames(freq.cbt)) <= cutoff.date]
#' holdout.freq.cbt <- freq.cbt[,as.Date(colnames(freq.cbt)) > cutoff.date]
#'
#' cal.start.dates.indices <- dc.GetFirstPurchasePeriodsFromCBT(cal.freq.cbt)
#' cal.start.dates <- as.Date(colnames(cal.freq.cbt)[cal.start.dates.indices])
#' cal.end.dates.indices <- dc.GetLastPurchasePeriodsFromCBT(cal.freq.cbt)
#' cal.end.dates <- as.Date(colnames(cal.freq.cbt)[cal.end.dates.indices])
#' T.cal.total <- rep(cutoff.date, nrow(cal.freq.cbt))
#' cal.dates <- data.frame(cal.start.dates,
#' cal.end.dates,
#' T.cal.total)
#'
#' # Create calibration period customer-by-sufficient-statistic data frame,
#' # using weeks as the unit of time.
#' cal.cbs <- dc.BuildCBSFromCBTAndDates(cal.freq.cbt,
#' cal.dates,
#' per="week",
#' cbt.is.during.cal.period=TRUE)
#' # Force the calibration period customer-by-sufficient-statistic to only contain
#' # repeat transactions (required by BG/BB and Pareto/NBD models)
#' cal.cbs[,"x"] <- cal.cbs[,"x"] - 1
#'
#' holdout.start <- cutoff.date+1
#' holdout.end <- as.Date(colnames(holdout.freq.cbt)[ncol(holdout.freq.cbt)])
#' holdout.dates <- c(holdout.start, holdout.end)
#'
#' # Create holdout period customer-by-sufficient-statistic data frame, using weeks
#' # as the unit of time.
#' holdout.cbs <- dc.BuildCBSFromCBTAndDates(holdout.freq.cbt,
#' holdout.dates,
#' per="week",
#' cbt.is.during.cal.period=FALSE)
#' @md
dc.BuildCBSFromCBTAndDates <- function(cbt,
dates,
per,
cbt.is.during.cal.period = TRUE) {
if (cbt.is.during.cal.period == TRUE) {
dc.WriteLine("Started making calibration period CBS...")
custs.first.dates <- dates[, 1]
custs.last.dates <- dates[, 2]
T.cal <- dates[, 3]
if (length(custs.first.dates) != length(custs.last.dates)) {
stop("Invalid dates (different lengths) in BuildCBSFromFreqCBTAndDates")
}
f <- rowSums(cbt)
r <- as.numeric(difftime(custs.last.dates, custs.first.dates, units = "days"))
T <- as.numeric(difftime(T.cal, custs.first.dates, units = "days"))
x <- switch(per, day = 1, week = 7, month = 365/12, quarter = 365/4, year = 365)
r = r/x
T = T/x
cbs = cbind(f, r, T)
# cbs <- data.frame(f=f, r=r/x, T=T/x)
rownames(cbs) <- rownames(cbt)
colnames(cbs) <- c("x", "t.x", "T.cal")
} else {
## cbt is during holdout period
dc.WriteLine("Started making holdout period CBS...")
date.begin.holdout.period <- dates[1]
date.end.holdout.period <- dates[2]
f <- rowSums(cbt)
T <- as.numeric(difftime(date.end.holdout.period, date.begin.holdout.period,
units = "days")) + 1
x <- switch(per, day = 1, week = 7, month = 365/12, quarter = 365/4, year = 365)
T = T/x
cbs = cbind(f, T)
# cbs <- data.frame( f=f, T=T/x)
rownames(cbs) <- rownames(cbt)
colnames(cbs) <- c("x.star", "T.star")
}
dc.WriteLine("Finished building CBS.")
return(cbs)
}
#' Merge Customers
#'
#' Takes two CBT or CBS matrices and ensures that the second one has the same
#' row names as the first.
#'
#' Care should be taken in using this function. It inserts zero values in all
#' rows that were not in the original holdout period data. This behavior does
#' not cause a problem if using CBT matrices, but will cause a problem if using
#' CBS matrices (for example, the output will report all customers with a
#' holdout period length of zero). However, this particular issue is easily
#' fixed (see examples) and should not cause problems.
#'
#' A work-around to avoid using this function is presented in the example for
#' [`dc.BuildCBSFromCBTAndDates`] - build the full CBT and only use the columns
#' applying to each particular time period to construct separate CBTs, and from
#' them, CBSs. That is a much cleaner and less error-prone method; however, on
#' occasion the data will not be available in event log format and you may not
#' be able to construct a CBT for both time periods together.
#'
#' @param data.correct CBT or CBS with the correct customer IDs as row names.
#' Usually from the calibration period.
#' @param data.to.correct CBT or CBS which needs to be fixed (customer IDs
#' inserted). Usually from the holdout period.
#' @return Updated holdout period CBT or CBS.
#' @examples
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#' cutoff.date <- as.Date("1997-09-30")
#' cal.elog <- elog[which(elog[,"date"] <= cutoff.date),]
#' holdout.elog <- elog[which(elog[,"date"] > cutoff.date),]
#'
#' # Create calibration period CBT from cal.elog
#' cal.reach.cbt <- dc.CreateReachCBT(cal.elog)
#' # Create holdout period CBT from holdout.elog
#' holdout.reach.cbt <- dc.CreateReachCBT(holdout.elog)
#'
#' # Note the difference:
#' nrow(cal.reach.cbt) # 2357 customers
#' nrow(holdout.reach.cbt) # 684 customers
#'
#' # Create a "fixed" holdout period CBT, with the same number
#' # of customers in the same order as the calibration period CBT
#' fixed.holdout.reach.cbt <- dc.MergeCustomers(cal.reach.cbt, holdout.reach.cbt)
#' nrow(fixed.holdout.reach.cbt) # 2357 customers
#'
#' # You can verify that the above is correct by turning these into a CBS
#' # (see \code{\link{dc.BuildCBSFromCBTAndDates}} and using
#' # \code{\link{pnbd.PlotFreqVsConditionalExpectedFrequency}}, for example).
#'
#' # Alternatively, we can fix the CBS instead of the CBT:
#'
#' cal.start.dates.indices <- dc.GetFirstPurchasePeriodsFromCBT(cal.reach.cbt)
#' cal.start.dates <- as.Date(colnames(cal.reach.cbt)[cal.start.dates.indices])
#' cal.end.dates.indices <- dc.GetLastPurchasePeriodsFromCBT(cal.reach.cbt)
#' cal.end.dates <- as.Date(colnames(cal.reach.cbt)[cal.end.dates.indices])
#' T.cal.total <- rep(cutoff.date, nrow(cal.reach.cbt))
#' cal.dates <- data.frame(cal.start.dates, cal.end.dates, T.cal.total)
#'
#' # Create calibration period customer-by-sufficient-statistic data frame,
#' # using weeks as the unit of time.
#' cal.cbs <- dc.BuildCBSFromCBTAndDates(cal.reach.cbt,
#' cal.dates,
#' per="week",
#' cbt.is.during.cal.period=TRUE)
#'
#' # Force the calibration period customer-by-sufficient-statistic to only
#' # contain repeat transactions (required by BG/BB and Pareto/NBD models)
#' cal.cbs[,"x"] <- cal.cbs[,"x"] - 1
#'
#' holdout.start <- cutoff.date+1
#' holdout.end <- as.Date(colnames(fixed.holdout.reach.cbt)[ncol(fixed.holdout.reach.cbt)])
#' holdout.dates <- c(holdout.start, holdout.end)
#'
#' # Create holdout period customer-by-sufficient-statistic data frame,
#' # using weeks as the unit of time.
#' holdout.cbs <- dc.BuildCBSFromCBTAndDates(holdout.reach.cbt,
#' holdout.dates,
#' per="week",
#' cbt.is.during.cal.period=FALSE)
#'
#' # Note the difference:
#' nrow(cal.cbs) # 2357 customers
#' nrow(holdout.cbs) # 684 customers
#'
#' # Create a "fixed" holdout period CBS, with the same number
#' # of customers in the same order as the calibration period CBS
#' fixed.holdout.cbs <- dc.MergeCustomers(cal.cbs, holdout.cbs)
#' nrow(fixed.holdout.cbs) # 2357 customers
#'
#' # Furthermore, this function will assign a zero value to all fields
#' # that were not in the original holdout period CBS. Since T.star is the
#' # same for all customers in the holdout period, we should fix that:
#' fixed.holdout.cbs[,"T.star"] <- rep(max(fixed.holdout.cbs[,"T.star"]),nrow(fixed.holdout.cbs))
dc.MergeCustomers <- function(data.correct,
data.to.correct) {
## Initialize a new data frame
data.to.correct.new <- matrix(0, nrow = nrow(data.correct), ncol = ncol(data.to.correct))
# data.to.correct.new <- data.frame(data.to.correct.new.size)
orig.order <- 1:nrow(data.correct)
orig.order <- orig.order[order(rownames(data.correct))]
data.correct.ordered <- data.correct[order(rownames(data.correct)), ]
## obscure code: handles boundary case when data.correct has one column and
## coerces data.correct.ordered to be a vector
if (is.null(nrow(data.correct.ordered))) {
# data.correct.ordered <- data.frame(data.correct.ordered)
rownames(data.correct.ordered) <- rownames(data.correct)[order(rownames(data.correct))]
colnames(data.correct.ordered) <- colnames(data.correct)
}
data.to.correct <- data.to.correct[order(rownames(data.to.correct)), ]
rownames(data.to.correct.new) <- rownames(data.correct.ordered)
colnames(data.to.correct.new) <- colnames(data.to.correct)
## Initialize the two iterators ii.correct, ii.to.correct
ii.correct <- 1
ii.to.correct <- 1
## Grab the data to hold the stopping conditions
max.correct.iterations <- nrow(data.correct.ordered)
max.to.correct.iterations <- nrow(data.to.correct)
## Grab the lists of customers from the data frames and convert them to optimize
## the loop speed
cust.list.correct <- rownames(data.correct.ordered)
cust.list.to.correct <- rownames(data.to.correct)
cust.correct.indices <- c()
cust.to.correct.indices <- c()
while (ii.correct <= max.correct.iterations & ii.to.correct <= max.to.correct.iterations) {
cur.cust.correct <- cust.list.correct[ii.correct]
cur.cust.to.correct <- cust.list.to.correct[ii.to.correct]
if (cur.cust.correct < cur.cust.to.correct) {
ii.correct <- ii.correct + 1
} else if (cur.cust.correct > cur.cust.to.correct) {
ii.to.correct <- ii.to.correct + 1
} else if (cur.cust.correct == cur.cust.to.correct) {
## data.to.correct.new[ii.correct, ] = data.to.correct[ii.to.correct, ]
cust.correct.indices <- c(cust.correct.indices, ii.correct)
cust.to.correct.indices <- c(cust.to.correct.indices, ii.to.correct)
ii.correct <- ii.correct + 1
ii.to.correct <- ii.to.correct + 1
} else {
stop("Array checking error in MergeCustomers")
}
}
data.to.correct.new[cust.correct.indices, ] <- data.to.correct
data.to.correct.new <- data.to.correct.new[order(orig.order), ]
return(data.to.correct.new)
}
#' Merge Transactions on Same Day
#'
#' Updates an event log; any transactions made by the same customer on the same
#' day are combined into one transaction.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction.
#' @return Event log with transactions made by the same customer on the same
#' day merged into one transaction.
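#' @examples
#' # An illustrative sketch using the CDNOW event log shipped with the package
#' # (mirrors the set-up used in other examples in this file):
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#' merged.elog <- dc.MergeTransactionsOnSameDate(elog)
#' # number of same-day repeat rows that were merged away
#' nrow(elog) - nrow(merged.elog)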
dc.MergeTransactionsOnSameDate <- function(elog) {
dc.WriteLine("Started merging same-date transactions...")
elog <- cbind(elog, 1:nrow(elog) * (!duplicated(elog[, c("cust", "date")])))
aggr.elog <- aggregate(elog[, !(colnames(elog) %in% c("cust", "date"))], by = list(cust = elog[,
"cust"], date = elog[, "date"]), sum)
aggr.elog <- aggr.elog[order(aggr.elog[, ncol(aggr.elog)]), ][, -ncol(aggr.elog)]
dc.WriteLine("... Finished merging same-date transactions.")
return(aggr.elog)
}
#' Remove Time Between
#'
#' This function creates a new event log, with time in the middle removed. Used,
#' for example, in sports with off-seasons.
#'
#' The four date parameters must be in ascending order.
#'
#' @param elog event log, which is a data frame with columns for customer ID
#' ("cust"), date ("date"), and optionally other columns such as "sales". Each
#' row represents an event, such as a transaction. The "date" column must
#' consist of date objects, not character strings.
#' @param day1 date of beginning of first period. Must be a date object.
#' @param day2 date of end of first period. Must be a date object.
#' @param day3 date of beginning of second period. Must be a date object.
#' @param day4 date of end of second period. Must be a date object.
#' @return list - `elog1` the event log with all elog$date entries between day1
#' and day2 - `elog2` the event log with all elog$date entries between day3 and
#' day4 - `elog3` elog1 combined with elog2, with all dates from elog2 reduced
#' by the time removed between elog1 and elog2
#' @examples
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#'
#' # Use the cdnow data to return a 6 month event log for January, February,
#' # March, October, November, December.
#' period.one.start <- as.Date("1997-01-01")
#' period.one.end <- as.Date("1997-03-31")
#' period.two.start <- as.Date("1997-10-01")
#' period.two.end <- as.Date("1997-12-31")
#' reduced.elog <- dc.RemoveTimeBetween(elog, period.one.start, period.one.end,
#' period.two.start, period.two.end)
#'
#' # Note that the new elog will go up to June 30 at a maximum, since we
#' # are only using 6 months of data starting on January 1
#' max(reduced.elog$elog3$date) # "1997-06-30"
#' @md
dc.RemoveTimeBetween <- function(elog,
day1,
day2,
day3,
day4) {
if (day1 > day2 || day2 > day3 || day3 > day4) {
stop("Days are not input in increasing order.")
}
elog1 <- elog[which(elog$date >= day1 & elog$date <= day2), ]
elog2 <- elog[which(elog$date >= day3 & elog$date <= day4), ]
time.between.periods <- as.numeric(day3 - day2)
elog2timeErased <- elog2
elog2timeErased$date <- elog2$date - time.between.periods
elog3 = rbind(elog1, elog2timeErased)
elogsToReturn = list()
elogsToReturn$elog1 <- elog1
elogsToReturn$elog2 <- elog2
elogsToReturn$elog3 <- elog3
return(elogsToReturn)
}
#' Get First Purchase Periods from Customer-by-Time Matrix
#'
#' Uses a customer-by-time matrix to return a vector containing the periods in
#' which customers made their first purchase.
#'
#' @param cbt customer-by-time matrix. This is a matrix consisting of a row
#' per customer and a column per time period. It should contain numeric
#' information about a customer's transactions in every time period - either
#' the number of transactions in that time period (frequency), a 1 to indicate
#' that at least 1 transaction occurred (reach), or the average/total amount
#' spent in that time period.
#' @return a vector containing the indices of periods in which customers made
#' their first transactions. To convert to actual dates (if your
#' customer-by-time matrix has dates as column names), use
#' colnames(cbt)\[RESULT\]
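#' @examples
#' # An illustrative sketch using the CDNOW event log shipped with the package
#' # (mirrors the set-up used in other examples in this file):
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#' cbt <- dc.CreateReachCBT(elog)
#' first.periods <- dc.GetFirstPurchasePeriodsFromCBT(cbt)
#' # convert the period indices into dates
#' first.dates <- as.Date(colnames(cbt)[first.periods])
#' head(first.dates)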
dc.GetFirstPurchasePeriodsFromCBT <- function(cbt) {
cbt <- as.matrix(cbt)
num.custs <- nrow(cbt)
num.periods <- ncol(cbt)
first.periods <- numeric(num.custs)
## loops through the customers and periods and locates the first purchase periods
## of each customer. Records them in first.periods
for (ii in 1:num.custs) {
curr.cust.transactions <- as.numeric(cbt[ii, ])
transaction.index <- 1
made.purchase <- FALSE
while (made.purchase == FALSE & transaction.index <= num.periods) {
if (curr.cust.transactions[transaction.index] > 0) {
made.purchase <- TRUE
} else {
transaction.index <- transaction.index + 1
}
}
if (made.purchase == FALSE) {
first.periods[ii] <- 0
} else {
first.periods[ii] <- transaction.index
}
}
return(first.periods)
}
#' Get Last Purchase Periods from Customer-by-Time Matrix
#'
#' Uses a customer-by-time matrix to return a vector containing the periods in
#' which customers made their last purchase.
#'
#' @inheritParams dc.GetFirstPurchasePeriodsFromCBT
#' @return a vector containing the indices of periods in which customers made
#' their last transactions. To convert to actual dates (if your
#' customer-by-time matrix has dates as column names), use
#' colnames(cbt)\[RESULT\]
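#' @examples
#' # An illustrative sketch (same set-up as dc.GetFirstPurchasePeriodsFromCBT):
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#' elog[,"date"] <- as.Date(elog[,"date"], "%Y%m%d")
#' cbt <- dc.CreateReachCBT(elog)
#' last.periods <- dc.GetLastPurchasePeriodsFromCBT(cbt)
#' head(as.Date(colnames(cbt)[last.periods]))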
dc.GetLastPurchasePeriodsFromCBT <- function(cbt) {
cbt <- as.matrix(cbt)
num.custs <- nrow(cbt)
num.periods <- ncol(cbt)
last.periods <- numeric(num.custs)
## loops through the customers and periods and locates the last purchase periods
## of each customer. Records them in last.periods
for (ii in 1:num.custs) {
curr.cust.transactions <- as.numeric(cbt[ii, ])
transaction.index <- num.periods
made.purchase <- FALSE
while (made.purchase == FALSE & transaction.index >= 1) {
if (curr.cust.transactions[transaction.index] > 0) {
made.purchase <- TRUE
} else {
transaction.index <- transaction.index - 1
}
}
if (made.purchase == FALSE) {
last.periods[ii] <- 0
} else {
last.periods[ii] <- transaction.index
}
}
return(last.periods)
}
#' Write Line
#'
#' Writes any number of arguments to the console.
#'
#' The code is literally: message(...); flush.console()
#'
#' @param ... objects to print to the R console.
dc.WriteLine <- function(...) {
message(...)
flush.console()
}
#' Add Logs
#'
#' Given log(a) and log(b), returns log(a + b)
#'
#' @param loga first number in log space.
#' @param logb second number in log space.
addLogs <- function(loga,
logb) {
return(logb + log(exp(loga - logb) + 1))
}
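## Worked check (illustrative, not from the original source): with a = 2 and
## b = 3, addLogs(log(2), log(3)) equals log(5), because
## log(b) + log(exp(log(a) - log(b)) + 1) = log(b * (a/b + 1)) = log(a + b).
## subLogs() below uses the analogous identity for log(a - b), which requires a > b.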
#' Subtract Logs
#'
#' Given log(a) and log(b), returns log(a - b)
#'
#' @inheritParams addLogs
subLogs <- function(loga,
logb) {
return(logb + log(exp(loga - logb) - 1))
}
#' Plot Log-Likelihood Contours
#'
#' Creates a set of contour plots, such that there is a contour plot for every
#' pair of parameters varying.
#'
#' For each contour plot, the non-varying parameters are kept constant at the
#' predicted values.
#'
#' The contour will extend out by (n.divs * zoom.percent) in both directions and
#' both dimensions from the estimated parameter values. The exception is if
#' allow.neg.params is FALSE. In this case, the contour plot will end at zero if
#' it would have extended into negative parameter values.
#'
#' The estimated parameter values will be indicated by the intersection of two
#' red lines.
#'
#' @inheritParams dc.PlotLogLikelihoodContour
#' @param multiple.screens if TRUE, plots each contour plot on a separate R graphics window.
#' @seealso [`dc.PlotLogLikelihoodContour`]
#' @examples
#' # **Example for BG/BB model:
#' data(donationsSummary)
#' rf.matrix <- donationsSummary$rf.matrix
#'
#' # starting-point parameters
#' bgbb.startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' bgbb.est.params <- bgbb.EstimateParameters(rf.matrix, bgbb.startingparams)
#'
#' # set up parameter names for a more descriptive result
#' bgbb.param.names <- c("alpha", "beta", "gamma", "delta")
#'
#' # plot-log likelihood contours:
#' dc.PlotLogLikelihoodContours(bgbb.rf.matrix.LL,
#' bgbb.est.params,
#' rf.matrix = rf.matrix,
#' n.divs = 5,
#' num.contour.lines = 8,
#' zoom.percent = 0.3,
#' allow.neg.params = FALSE,
#' param.names = bgbb.param.names)
#'
#' # **Example for Pareto/NBD model:
#' data(cdnowSummary)
#' cbs <- cdnowSummary$cbs
#'
#' # Speed up calculations:
#' cbs <- dc.compress.cbs(cbs)
#'
#' # parameters estimated using pnbd.EstimateParameters
#' pnbd.est.params <- cdnowSummary$est.params
#'
#' # set up parameter names for a more descriptive result
#' pnbd.param.names <- c("r", "alpha", "s", "beta")
#'
#' # plot log-likelihood contours:
#' dc.PlotLogLikelihoodContours(pnbd.cbs.LL,
#' pnbd.est.params,
#' cal.cbs = cbs,
#' hardie = TRUE,
#' n.divs = 5,
#' num.contour.lines = 15,
#' zoom.percent = 0.3,
#' allow.neg.params = FALSE,
#' param.names = pnbd.param.names)
#'
#' # **Example for BG/NBD model:
#' data(cdnowSummary)
#' cbs <- cdnowSummary$cbs
#'
#' # parameters estimated using bgnbd.EstimateParameters
#' bgnbd.est.params <- cdnowSummary$est.params
#'
#' # set up parameter names for a more descriptive result
#' bgnbd.param.names <- c("r", "alpha", "a", "b")
#'
#' # plot log-likelihood contours:
#' dc.PlotLogLikelihoodContours(bgnbd.cbs.LL,
#' bgnbd.est.params,
#' cal.cbs = cbs,
#' n.divs = 5,
#' num.contour.lines = 15,
#' zoom.percent = 0.3,
#' allow.neg.params = FALSE,
#' param.names = bgnbd.param.names)
#' @md
dc.PlotLogLikelihoodContours <- function(loglikelihood.fcn,
predicted.params,
...,
n.divs = 2,
multiple.screens = FALSE,
num.contour.lines = 10,
zoom.percent = 0.9,
allow.neg.params = FALSE,
param.names = c("param 1", "param 2", "param 3", "param 4")) {
permutations <- combn(length(predicted.params), 2)
num.permutations <- ncol(permutations)
contour.plots <- list()
if (multiple.screens == FALSE) {
dev.new()
plot.window.num.cols <- ceiling(num.permutations/2)
plot.window.num.rows <- 2
par(mfrow = c(plot.window.num.rows, plot.window.num.cols))
}
for (jj in 1:num.permutations) {
vary.or.fix.param <- rep("fix", 4)
vary.or.fix.param[permutations[, jj]] <- "vary"
contour.plots[[jj]] <- dc.PlotLogLikelihoodContour(loglikelihood.fcn, vary.or.fix.param,
predicted.params, ..., n.divs = n.divs, new.dev = multiple.screens, num.contour.lines = num.contour.lines,
zoom.percent = zoom.percent, allow.neg.params = allow.neg.params, param.names = param.names)
}
if (multiple.screens == FALSE) {
par(mfrow = c(1, 1))
}
return(contour.plots)
}
#' Plot Log-Likelihood Contour
#'
#' Makes a contour plot of a loglikelihood function that varies over two
#' designated parameters, centered around a set of previously estimated
#' parameters.
#'
#' The contour plot will have the first parameter labelled "vary" on the x-axis,
#' and the second parameter labelled "vary" on the y-axis. It will extend out by
#' (n.divs * zoom.percent) in both directions and both dimensions from the
#' estimated parameter values. The exception is if allow.neg.params is FALSE. In
#' this case, the contour plot will end at zero if it would have extended into
#' negative parameter values.
#'
#' The estimated parameter values will be indicated by the intersection of two
#' red lines.
#'
#' @param loglikelihood.fcn log-likelihood function to plot.
#' @param vary.or.fix.param a vector of strings containing either "vary" or
#' "fix". The parameters in the same indices as "vary" will be plotted while
#' the other parameters will remain fixed at the estimated values. See
#' details.
#' @param predicted.params estimated parameters.
#' @param ... all additional arguments required by the log-likelihood
#' function. For example, [`bgbb.rf.matrix.LL`] requires rf.matrix;
#' [`pnbd.cbs.LL`] requires cal.cbs and hardie (defaults to TRUE); and [`bgnbd.cbs.LL`] requires
#' cal.cbs.
#' @param n.divs integer representing how fine-grained the contour plot is. A
#' higher value will produce a higher resolution plot with smoother contour
#' lines, but will take longer to plot. n.divs also affects the boundaries of
#' the contour plot; see details.
#' @param new.dev if TRUE, makes a new window for each contour plot.
#' @param num.contour.lines number of contour lines to plot in the window.
#' @param zoom.percent determines boundaries of contour plot. See details.
#' @param allow.neg.params if FALSE, the contour plot will not include negative
#' values (see details). This should be set to false for the BG/BB and
#' Pareto/NBD models.
#' @param param.names a vector containing parameter names.
#' @seealso [`dc.PlotLogLikelihoodContours`]
#' @examples
#' # **Examples for BG/BB model:
#' data(donationsSummary)
#' rf.matrix <- donationsSummary$rf.matrix
#'
#' # starting-point parameters
#' bgbb.startingparams <- c(1, 1, 0.5, 3)
#' # estimated parameters
#' bgbb.est.params <- bgbb.EstimateParameters(rf.matrix, bgbb.startingparams)
#'
#' # set up parameter names for a more descriptive result
#' bgbb.param.names <- c("alpha", "beta", "gamma", "delta")
#'
#' # plot a log-likelihood contour of alpha and beta, the unobserved
#' # parameters for the beta-Bernoulli transaction process of the BG/BB.
#' # Note that allow.neg.params has been set to false as BG/BB parameters
#' # cannot be negative:
#' dc.PlotLogLikelihoodContour(bgbb.rf.matrix.LL,
#' c("vary", "vary", "fix", "fix"),
#' bgbb.est.params,
#' rf.matrix = rf.matrix,
#' n.divs = 15,
#' num.contour.lines = 15,
#' zoom.percent = 0.2,
#' allow.neg.params = FALSE,
#' param.names = bgbb.param.names)
#'
#' # plot a log-likelihood contour of gamma and delta, the unobserved
#' # parameters for the beta-geometric dropout process of the BG/BB.
#' # Note that allow.neg.params has been set to false as BG/BB parameters
#' # cannot be negative:
#' dc.PlotLogLikelihoodContour(bgbb.rf.matrix.LL,
#' c("fix", "fix", "vary", "vary"),
#' bgbb.est.params,
#' rf.matrix = rf.matrix,
#' n.divs = 15,
#' num.contour.lines = 15,
#' zoom.percent = 0.2,
#' allow.neg.params = FALSE,
#' param.names = bgbb.param.names)
#'
#' # **Example for Pareto/NBD model:
#' data(cdnowSummary)
#' cbs <- cdnowSummary$cbs
#'
#' # Speed up calculations:
#' cbs <- dc.compress.cbs(cbs)
#'
#' # parameters estimated using pnbd.EstimateParameters
#' pnbd.est.params <- cdnowSummary$est.params
#'
#' # set up parameter names for a more descriptive result
#' pnbd.param.names <- c("r", "alpha", "s", "beta")
#'
#' # plot a log-likelihood contour of r and s, the shape parameters
#' # of the transaction and dropout process models (respectively).
#' # Note that allow.neg.params has been set to false as Pareto/NBD
#' # parameters cannot be negative:
#' dc.PlotLogLikelihoodContour(pnbd.cbs.LL,
#' c("vary", "fix", "vary", "fix"),
#' pnbd.est.params,
#' cal.cbs = cbs,
#' hardie = TRUE,
#' n.divs = 20,
#' num.contour.lines = 20,
#' zoom.percent = 0.1,
#' allow.neg.params = FALSE,
#' param.names = pnbd.param.names)
#'
#' # **Example for BG/NBD model:
#' data(cdnowSummary)
#' cbs <- cdnowSummary$cbs
#'
#' # parameters estimated using bgnbd.EstimateParameters
#' bgnbd.est.params <- cdnowSummary$est.params
#'
#' # set up parameter names for a more descriptive result
#' bgnbd.param.names <- c("r", "alpha", "a", "b")
#'
#' # plot a log-likelihood contour of r and a, parameters of the
#' # transaction and dropout processes (respectively).
#' # Note that allow.neg.params has been set to false as BG/NBD
#' # parameters cannot be negative:
#' dc.PlotLogLikelihoodContour(bgnbd.cbs.LL,
#' c("vary", "fix", "vary", "fix"),
#' bgnbd.est.params,
#' cal.cbs = cbs,
#' n.divs = 20,
#' num.contour.lines = 20,
#' zoom.percent = 0.1,
#' allow.neg.params = FALSE,
#' param.names = bgnbd.param.names)
#' @md
dc.PlotLogLikelihoodContour <- function(loglikelihood.fcn,
vary.or.fix.param,
predicted.params,
...,
n.divs = 3,
new.dev = FALSE,
num.contour.lines = 10,
zoom.percent = 0.9,
allow.neg.params = FALSE,
param.names = c("param 1", "param 2", "param 3", "param 4")) {
if (new.dev) {
dev.new()
}
idx.par.vary <- which(vary.or.fix.param == "vary")
if (length(idx.par.vary) != 2) {
stop("vary.or.fix.param must have exactly two elements: \"vary\" ")
}
values.par.vary <- predicted.params[idx.par.vary]
v1 <- values.par.vary[1]
v2 <- values.par.vary[2]
par1.ticks <- c(v1 - (n.divs:1) * zoom.percent, v1, v1 + (1:n.divs) * zoom.percent)
par2.ticks <- c(v2 - (n.divs:1) * zoom.percent, v2, v2 + (1:n.divs) * zoom.percent)
param.names.vary <- param.names[idx.par.vary]
if (!allow.neg.params) {
par1.ticks <- par1.ticks[par1.ticks > 0]
par2.ticks <- par2.ticks[par2.ticks > 0]
}
n.par1.ticks = length(par1.ticks)
n.par2.ticks = length(par2.ticks)
ll <- sapply(0:(n.par1.ticks * n.par2.ticks - 1), function(e) {
i <- (e%%n.par1.ticks) + 1
j <- (e%/%n.par1.ticks) + 1
current.params <- predicted.params
current.params[idx.par.vary] <- c(par1.ticks[i], par2.ticks[j])
loglikelihood.fcn(current.params, ...)
})
loglikelihood.contours <- matrix(ll, nrow = n.par1.ticks, ncol = n.par2.ticks)
if (FALSE) { ## non-vectorized version of the sapply() above; kept for reference, never runs
for (ii in 1:n.par1.ticks) {
for (jj in 1:n.par2.ticks) {
current.params <- predicted.params
current.params[idx.par.vary] <- c(par1.ticks[ii], par2.ticks[jj])
loglikelihood.contours[ii, jj] <- loglikelihood.fcn(current.params,
...)
## cat('finished', (ii-1)*2*n.divs+jj, 'of', 4*n.divs*n.divs, fill=TRUE)
}
}
}
contour.plot <- contour(x = par1.ticks, y = par2.ticks, z = loglikelihood.contours,
nlevels = num.contour.lines)
# label.varying.params <- paste(idx.par.vary, collapse=', ')
contour.plot.main.label <- paste("Log-likelihood contour of", param.names.vary[1],
"and", param.names.vary[2])
abline(v = values.par.vary[1], h = values.par.vary[2], col = "red")
title(main = contour.plot.main.label,
xlab = param.names.vary[1],
ylab = param.names.vary[2])
}
#' Read Lines
#'
#' Given a .csv file that throws errors when read in by the usual read.csv and read.table methods,
#' loops through the file line-by-line and picks out the customer, date, and sales (optional)
#' transaction data to return an event log.
#'
#' Once this function has been run, you may need to convert the date column to Date objects for
#' the event log to work with other functions in this library. See the as.Date function in the
#' `base` R package for more details.
#'
#' @param csv.filename The name of the comma-delimited file to be read. This file must contain headers.
#' @param cust.idx The index of the customer ID column in the comma-delimited file.
#' @param date.idx The index of the date column in the comma-delimited file.
#' @param sales.idx The index of the sales column in the comma-delimited file.
#'
#' @examples
#' # Create event log from file "cdnowElog.csv", which has
#' # customer IDs in the second column, dates in the third column, and
#' # sales numbers in the fifth column.
#' elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD"),2,3,5)
#'
#' # convert date column to date objects, as required by some other functions
#' elog$date <- as.Date(elog$date, "%Y%m%d")
#' @md
dc.ReadLines <- function(csv.filename,
cust.idx,
date.idx,
sales.idx = -1) {
dc.WriteLine("Started reading file. Progress:")
elog.file <- file(csv.filename, open = "r")
elog.lines <- readLines(elog.file)
n.lines <- length(elog.lines) - 1
cust <- rep("", n.lines)
date <- rep("", n.lines)
if (sales.idx != -1) {
sales <- rep(0, n.lines)
}
for (ii in 2:(n.lines + 1)) {
## splitting each line by commas
split.string <- strsplit(elog.lines[ii], ",")
## assigning the comma delimited values to our vector
this.cust <- split.string[[1]][cust.idx]
this.date <- split.string[[1]][date.idx]
if (is.na(this.cust) | is.na(this.date)) {
next
}
cust[ii - 1] <- this.cust
date[ii - 1] <- this.date
if (sales.idx != -1) {
sales[ii - 1] <- split.string[[1]][sales.idx]
}
## Progress bar:
if (ii%%1000 == 0) {
dc.WriteLine(ii, "/", n.lines)
}
}
elog <- cbind(cust, date)
elog.colnames <- c("cust", "date")
if (sales.idx != -1) {
elog <- cbind(elog, sales)
elog.colnames <- c(elog.colnames, "sales")
}
elog <- data.frame(elog, stringsAsFactors = FALSE)
colnames(elog) <- elog.colnames
if (sales.idx != -1) {
elog$sales <- as.numeric(elog$sales)
}
close(elog.file)
dc.WriteLine("File successfully read.")
return(elog)
}
#' Check model params
#'
#' Check model parameters for correctness.
#'
#' @param printnames Names to print parameter errors.
#' @param params Model parameters.
#' @param func Function calling dc.check.model.params.
#' @return Stops the program if there is something wrong with the parameters.
dc.check.model.params <- function(printnames,
params,
func) {
if (length(params) != length(printnames)) {
stop("Error in ", func, ": Incorrect number of parameters; there should be ",
length(printnames), ".", call. = FALSE)
}
if (!is.numeric(params)) {
stop("Error in ", func, ": parameters must be numeric, but are of class ",
class(params), call. = FALSE)
}
if (any(params < 0)) {
stop("Error in ", func, ": All parameters must be positive. Negative parameters: ",
paste(printnames[params < 0], collapse = ", "), call. = FALSE)
}
}
#' Cumulative to Incremental
#'
#' Converts a vector of cumulative transactions to a vector of incremental transactions.
#'
#' @param cu A vector containing cumulative transactions over time.
#' @return Vector of incremental transactions.
dc.CumulativeToIncremental <- function(cu) {
inc <- cu - c(0, cu)[-(length(cu) + 1)]
return(inc)
}
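## Sketch (never run when this file is sourced): dc.CumulativeToIncremental()
## is the inverse of cumsum(), which is how cumulative tracking data are built.
if (FALSE) {
  inc <- c(2, 0, 3, 1, 4)         # made-up incremental transactions per period
  cu <- cumsum(inc)               # 2 2 5 6 10
  dc.CumulativeToIncremental(cu)  # recovers 2 0 3 1 4
  identical(dc.CumulativeToIncremental(cu), inc)
}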
################################################## Pareto/NBD estimation, visualization functions
library(hypergeo)
library(optimx)
#' Define general parameters
#'
#' This is to ensure consistency across all functions that require the
#' likelihood function, or the log of it, and to make sure that the same
#' implementation of the hypergeometric function is used everywhere for building
#' \code{A0}.
#'
#' This function is only ever called by either \code{\link{pnbd.LL}} or
#' \code{\link{pnbd.PAlive}} so it returns directly the output that is expected
#' from those calling functions: either the log likelihood for a set of
#' customers, or the probability that a set of customers with characteristics
#' given by \code{x}, \code{t.x} and \code{T.cal}, having estimated a set of
#' \code{params}, is still alive. Either set of customers can be of size 1.
#' @inheritParams pnbd.LL
#' @param func name of the calling function (passed on to dc.InputCheck); either \code{pnbd.LL}
#' or \code{pnbd.PAlive}.
#' @return A vector of log likelihood values if \code{func} is \code{pnbd.LL},
#' or a vector of probabilities that a customer is still alive if \code{func}
#' is \code{pnbd.PAlive}.
#' @seealso \code{\link{pnbd.LL}}
#' @seealso \code{\link{pnbd.PAlive}}
#' @seealso \code{\link{pnbd.DERT}}
pnbd.generalParams <- function(params,
x,
t.x,
T.cal,
func,
hardie = TRUE) {
# Since pnbd.LL and pnbd.pAlive are the only options
# for func, we don't need a printnames argument
# in the pnbd.generalParams wrapper.
stopifnot(func %in% c('pnbd.LL', 'pnbd.PAlive'))
inputs <- try(dc.InputCheck(params = params,
func = func,
printnames = c("r", "alpha", "s", "beta"),
x = x,
t.x = t.x,
T.cal = T.cal))
if (inherits(inputs, "try-error")) return(attr(inputs, "condition")$message)
x <- inputs$x
t.x <- inputs$t.x
T.cal <- inputs$T.cal
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
maxab <- max(alpha, beta)
absab <- abs(alpha - beta)
param2 <- s + 1
if (alpha < beta) {
param2 <- r + x
}
a <- alpha + T.cal
b <- maxab + t.x
c <- beta + T.cal
d <- maxab + T.cal
w <- r + s + x
if(hardie == TRUE) {
F1 <- h2f1(a = w,
b = param2,
c = w + 1,
z = absab / b)
F2 <- h2f1(a = w,
b = param2,
c = w + 1,
z = absab / d)
} else {
F1 <- Re(hypergeo(A = w,
B = param2,
C = w + 1,
z = absab / b))
F2 <- Re(hypergeo(A = w,
B = param2,
C = w + 1,
z = absab / d))
}
A0 <- F1/(b^w) - F2/(d^w)
# You only ever call this function from two other
# places: pnbd.LL or pnbd.PAlive.
if(func == 'pnbd.LL') {
# this returns the log likelihood for one random customer
part1 <- r * log(alpha) +
s * log(beta) +
lgamma(r + x) -
lgamma(r)
part2 <- 1 / (a^(w - s) * c^s)
return(part1 + log(part2) + log(1 + (s/w) * A0 / part2))
}
else if(func == 'pnbd.PAlive') {
# This returns the probability that a random customer is still alive
return(1 / (1 + s/w * a^(w - s) * c^s * A0))
} else {
return(NULL)
}
}
#' Use Bruce Hardie's Gaussian hypergeometric implementation
#'
#' In benchmarking, \code{\link{pnbd.LL}} runs more quickly and
#' returns the same results if it uses this helper instead of
#' \code{\link[hypergeo]{hypergeo}} (used when \code{hardie = FALSE}). But \code{h2f1}
#' is such a barebones function that in some edge cases it will keep
#' going until you get a segfault, where \code{\link[hypergeo]{hypergeo}}
#' would have failed with a proper error message.
#'
#' @param a counterpart to A in \code{\link[hypergeo]{hypergeo}}
#' @param b counterpart to B in \code{\link[hypergeo]{hypergeo}}
#' @param c counterpart to C in \code{\link[hypergeo]{hypergeo}}
#' @param z counterpart to z in \code{\link[hypergeo]{hypergeo}}
#' @seealso \code{\link[hypergeo]{hypergeo}}
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the Pareto/NBD Model and
#' Related Expressions." November. 2005. Web. \url{http://www.brucehardie.com/notes/008/}
h2f1 <- function(a, b, c, z) {
lenz <- length(z)
j = 0
uj <- 1:lenz
uj <- uj/uj
y <- uj
lteps <- 0
while (lteps < lenz) {
lasty <- y
j <- j + 1
uj <- uj * (a + j - 1) * (b + j - 1)/(c + j - 1) * z/j
y <- y + uj
lteps <- sum(y == lasty)
}
return(y)
}
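## Sketch (never run when this file is sourced): for the arguments the
## Pareto/NBD expressions produce -- real a, b, c and 0 <= z < 1 -- the series
## above should agree with hypergeo::hypergeo() (loaded at the top of this
## section) to numerical precision. Outside that range h2f1() performs no
## convergence checks (see the note above).
if (FALSE) {
  a <- 1.2; b <- 2.5; cc <- 4.7; z <- 0.3
  h2f1(a, b, cc, z)
  Re(hypergeo(A = a, B = b, C = cc, z = z))
  all.equal(h2f1(a, b, cc, z), Re(hypergeo(A = a, B = b, C = cc, z = z)))
}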
#' Pareto/NBD Log-Likelihood
#'
#' Calculates the log-likelihood of the Pareto/NBD model.
#'
#' @param params Pareto/NBD parameters - a vector with r, alpha, s, and beta, in
#' that order. r and alpha are unobserved parameters for the NBD transaction
#' process. s and beta are unobserved parameters for the Pareto (exponential
#' gamma) dropout process.
#' @param x number of repeat transactions in the calibration period T.cal, or a
#' vector of transaction frequencies.
#' @param t.x time of most recent repeat transaction, or a vector of recencies.
#' @param T.cal length of calibration period, or a vector of calibration period
#' lengths.
#' @param hardie if TRUE, use \code{\link{h2f1}} instead of
#' \code{\link[hypergeo]{hypergeo}}.
#'
#' @seealso \code{\link{pnbd.EstimateParameters}}
#'
#' @return A vector of log-likelihoods as long as the longest input vector (x,
#' t.x, or T.cal).
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the
#' Pareto/NBD Model and Related Expressions." November. 2005. Web.
#' \url{http://www.brucehardie.com/notes/008/}
#'
#' @examples
#' # Example parameter values for r, alpha, s and beta, in that order:
#' params <- c(0.55, 10.56, 0.61, 11.64)
#'
#' # Returns the log likelihood of the parameters for a customer who
#' # made 3 transactions in a calibration period that ended at t=6,
#' # with the last transaction occurring at t=4.
#' pnbd.LL(params, x=3, t.x=4, T.cal=6, hardie = TRUE)
#'
#' # We can also give vectors as function parameters:
#' set.seed(7)
#' x <- sample(1:4, 10, replace = TRUE)
#' t.x <- sample(1:4, 10, replace = TRUE)
#' T.cal <- rep(4, 10)
#' pnbd.LL(params, x, t.x, T.cal, hardie = TRUE)
pnbd.LL <- function(params,
x,
t.x,
T.cal,
hardie = TRUE) {
pnbd.generalParams(params = params,
x = x,
t.x = t.x,
T.cal = T.cal,
func = 'pnbd.LL',
hardie = hardie)
}
#' Pareto/NBD P(Alive)
#'
#' Uses Pareto/NBD model parameters and a customer's past transaction behavior
#' to return the probability that they are still alive at the end of the
#' calibration period.
#'
#' P(Alive | X=x, t.x, T.cal, r, alpha, s, beta)
#'
#' x, t.x, and T.cal may be vectors. The standard rules for vector operations
#' apply - if they are not of the same length, shorter vectors will be recycled
#' (start over at the first element) until they are as long as the longest
#' vector. It is advisable to keep vectors to the same length and to use single
#' values for parameters that are to be the same for all calculations. If one of
#' these parameters has a length greater than one, the output will be a vector
#' of probabilities.
#'
#' @inheritParams pnbd.LL
#'
#' @return Probability that the customer is still alive at the end of the
#' calibration period. If x, t.x, and/or T.cal has a length greater than one,
#' then this will be a vector of probabilities (containing one element
#' matching each element of the longest input vector).
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the
#' Pareto/NBD Model and Related Expressions." November. 2005. Web.
#' \url{http://www.brucehardie.com/notes/008/}
#'
#' @examples
#' data(cdnowSummary)
#' cbs <- cdnowSummary$cbs
#' params <- pnbd.EstimateParameters(cbs, hardie = TRUE)
#'
#' pnbd.PAlive(params, x=0, t.x=0, T.cal=39, TRUE)
#' # 0.2941633; P(Alive) of a customer who made no repeat transactions.
#'
#' pnbd.PAlive(params, x=23, t.x=39, T.cal=39, TRUE)
#' # 1; P(Alive) of a customer who has the same recency and total
#' # time observed.
#'
#' pnbd.PAlive(params, x=5:20, t.x=30, T.cal=39, TRUE)
#' # Note the "increasing frequency paradox".
#'
#' # To visualize the distribution of P(Alive) across customers:
#' p.alives <- pnbd.PAlive(params, cbs[,"x"], cbs[,"t.x"], cbs[,"T.cal"], TRUE)
#' plot(density(p.alives))
pnbd.PAlive <- function(params,
x,
t.x,
T.cal,
hardie = TRUE) {
pnbd.generalParams(params = params,
x = x,
t.x = t.x,
T.cal = T.cal,
func = 'pnbd.PAlive',
hardie = hardie)
}
#' Pareto/NBD Discounted Expected Residual Transactions
#'
#' Calculates the discounted expected residual transactions of a customer, given
#' their behavior during the calibration period.
#'
#' DERT(d | r, alpha, s, beta, X = x, t.x, T.cal)
#'
#' x, t.x, T.cal may be vectors. The standard rules for vector operations apply
#' - if they are not of the same length, shorter vectors will be recycled (start
#' over at the first element) until they are as long as the longest vector. It
#' is advisable to keep vectors to the same length and to use single values for
#' parameters that are to be the same for all calculations. If one of these
#' parameters has a length greater than one, the output will be also be a
#' vector.
#'
#' @inheritParams pnbd.LL
#' @param d the discount rate to be used. Make sure that it matches up with your
#' chosen time period (do not use an annual rate for monthly data, for
#' example).
#'
#' @return The number of discounted expected residual transactions for a
#' customer with a particular purchase pattern during the calibration period.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Ka L. Lee. "RFM and CLV:
#' Using Iso-Value Curves for Customer Base Analysis." Journal of Marketing
#' Research Vol.42, pp.415-430. November. 2005.
#' \url{http://www.brucehardie.com/papers.html}
#' @references See equation 2.
#' @references Note that this paper refers to what this package is calling
#' discounted expected residual transactions (DERT) simply as discounted
#' expected transactions (DET).
#'
#' @examples
#' # elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD2"),2,3)
#' # elog[, 'date'] <- as.Date(elog[, 'date'], format = '%Y%m%d')
#' # cal.cbs <- dc.ElogToCbsCbt(elog)$cal$cbs
#' # params <- pnbd.EstimateParameters(cal.cbs, hardie = TRUE)
#' params <- c(0.5629966, 12.5590370, 0.4081095, 10.5148048)
#'
#' # 15% compounded annually has been converted to 0.0027 compounded continuously,
#' # as we are dealing with weekly data and not annual data.
#' d <- 0.0027
#'
#' # calculate the discounted expected residual transactions of a customer
#' # who made 7 transactions in a calibration period that was 77.86
#' # weeks long, with the last transaction occurring at the end of
#' # the 35th week.
#' pnbd.DERT(params,
#' x = 7,
#' t.x = 35,
#' T.cal = 77.86,
#' d,
#' hardie = TRUE)
#'
#' # We can also use vectors to compute DERT for several customers:
#' pnbd.DERT(params,
#' x = 1:10,
#' t.x = 30,
#' T.cal = 77.86,
#' d,
#' hardie = TRUE)
pnbd.DERT <- function(params,
x,
t.x,
T.cal,
d,
hardie = TRUE) {
loglike <- try(pnbd.LL(params = params,
x = x,
t.x = t.x,
T.cal = T.cal,
hardie = hardie))
if('try-error' %in% class(loglike)) return(loglike)
# This is the remainder of the original pnbd.DERT function def.
# No need to get too clever here. Revert to explicit assignment
# of params to r, alpha, s, beta the old-school way.
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
z <- d * (beta + T.cal)
tricomi.part.1 = ((z)^(1 - s))/(s - 1) *
genhypergeo(U = c(1),
L = c(2 - s),
z = z,
check_mod = FALSE)
tricomi.part.2 = gamma(1 - s) *
genhypergeo(U = c(s),
L = c(s),
z = z,
check_mod = FALSE)
tricomi = tricomi.part.1 + tricomi.part.2
result <- exp(r * log(alpha) +
s * log(beta) +
(s - 1) * log(d) +
lgamma(r + x + 1) +
log(tricomi) -
lgamma(r) -
(r + x + 1) * log(alpha + T.cal) -
loglike)
return(result)
}
#' Pareto/NBD Log-Likelihood Wrapper
#'
#' Calculates the log-likelihood of the Pareto/NBD model for an entire calibration period CBS.
#'
#' @inheritParams pnbd.LL
#' @param cal.cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x"), recency ("t.x"), and total time
#' observed ("T.cal"). Note that recency must be the time between the start of
#' the calibration period and the customer's last transaction, not the time
#' between the customer's last transaction and the end of the calibration
#' period. If your data is compressed (see \code{\link{dc.compress.cbs}}), a
#' fourth column labelled "custs" (number of customers with a specific
#' combination of recency, frequency and length of calibration period) will
#' make this function faster.
#' @seealso \code{\link{pnbd.EstimateParameters}}
#' @seealso \code{\link{pnbd.LL}}
#' @return The log-likelihood of the provided data.
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the
#' Pareto/NBD Model and Related Expressions." November. 2005. Web.
#' \url{http://www.brucehardie.com/notes/008/}
#'
#' @examples
#' data(cdnowSummary)
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # random assignment of parameters
#' params <- c(0.5, 8, 0.7, 10)
#' # returns the log-likelihood of the given parameters
#' pnbd.cbs.LL (params, cal.cbs, TRUE)
#'
#' # compare the speed and results to the following:
#' cal.cbs.compressed <- dc.compress.cbs(cal.cbs)
#' pnbd.cbs.LL (params, cal.cbs.compressed, TRUE)
pnbd.cbs.LL <- function(params,
cal.cbs,
hardie = TRUE) {
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = params,
func = "pnbd.cbs.LL")
# Check that you have the right columns.
# They should be 'x', 't.x', 'T.cal' and optionally 'custs'
# in this order. They stand for, respectively
# -- x: frequency
# -- t.x: recency
# -- T.cal: observed calendar time
# -- custs: number of customers with this (x, t.x, T.cal) combo
foo <- colnames(cal.cbs)
stopifnot(foo[1] == 'x' &
foo[2] == 't.x' &
foo[3] == 'T.cal')
x <- cal.cbs[,'x']
t.x <- cal.cbs[,'t.x']
T.cal <- cal.cbs[,'T.cal']
if ("custs" %in% foo) {
custs <- cal.cbs[, "custs"]
} else {
custs <- rep(1, length(x))
}
return(sum(custs * pnbd.LL(params, x, t.x, T.cal, hardie)))
}
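## Sketch (never run when this file is sourced): the "compare the speed and
## results" example above amounts to checking that the compressed and
## uncompressed CBS give the same log-likelihood (up to any rounding that
## dc.compress.cbs() applies).
if (FALSE) {
  data(cdnowSummary)
  cal.cbs <- cdnowSummary$cbs
  params <- c(0.55, 10.56, 0.61, 11.64)
  ll.full <- pnbd.cbs.LL(params, cal.cbs, hardie = TRUE)
  ll.compressed <- pnbd.cbs.LL(params, dc.compress.cbs(cal.cbs), hardie = TRUE)
  c(ll.full, ll.compressed)
}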
#' Pareto/NBD Parameter Estimation
#'
#' The best-fitting parameters are determined using the
#' \code{\link{pnbd.cbs.LL}} function. The sum of the log-likelihood for each
#' customer (for a set of parameters) is maximized in order to estimate
#' parameters.
#'
#' A set of starting parameters must be provided for this method. If no
#' parameters are provided, (1,1,1,1) is used as a default. It may be useful to
#' use starting values for r and s that represent your best guess of the
#' heterogeneity in the buy and die rate of customers. It may be necessary to
#' run the estimation from multiple starting points to ensure that it converges.
#' To compare the log-likelihoods of different parameters, use
#' \code{\link{pnbd.cbs.LL}}.
#'
#' The lower bound on the parameters to be estimated is always zero, since
#' Pareto/NBD parameters cannot be negative. The upper bound can be set with the
#' max.param.value parameter.
#'
#' This function may take some time to run. It uses \code{\link[optimx]{optimx}}
#' for maximum likelihood estimation, not \code{\link[stats]{optim}}.
#'
#' @param cal.cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x"), recency ("t.x"), and total time
#' observed ("T.cal"). Note that recency must be the time between the start of
#' the calibration period and the customer's last transaction, not the time
#' between the customer's last transaction and the end of the calibration
#' period. If your data is compressed (see \code{\link{dc.compress.cbs}}), a
#' fourth column labelled "custs" (number of customers with a specific
#' combination of recency, frequency and length of calibration period) will
#' make this function faster.
#' @param par.start initial Pareto/NBD parameters - a vector with r, alpha, s,
#' and beta, in that order. r and alpha are unobserved parameters for the NBD
#' transaction process. s and beta are unobserved parameters for the Pareto
#' (exponential gamma) dropout process.
#' @param max.param.value the upper bound on parameters.
#' @param method the optimization method(s).
#' @param hardie if TRUE, have \code{\link{pnbd.LL}} use \code{\link{h2f1}}
#' instead of \code{\link[hypergeo]{hypergeo}}.
#' @param hessian set it to TRUE if you want the Hessian matrix, and then you
#' might as well have the complete \code{\link[optimx]{optimx}} object
#' returned.
#'
#' @return Unnamed vector of estimated parameters by default, \code{optimx}
#' object with everything if \code{hessian} is TRUE.
#' @seealso \code{\link{pnbd.cbs.LL}}
#' @references Fader, Peter S., and Bruce G.S. Hardie. "Overcoming the BG/NBD
#' Model's #NUM! Error Problem." December. 2013. Web.
#' \url{http://brucehardie.com/notes/027/bgnbd_num_error.pdf}
#'
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # starting-point parameters
#' startingparams <- c(0.5, 6, 0.9, 8)
#'
#' # estimated parameters
#' est.params <- pnbd.EstimateParameters(cal.cbs = cal.cbs,
#' par.start = startingparams,
#' method = 'L-BFGS-B',
#' hardie = TRUE)
#'
#' # complete object returned by \code{\link[optimx]{optimx}}
#' optimx.set <- pnbd.EstimateParameters(cal.cbs = cal.cbs,
#' par.start = startingparams,
#' hardie = TRUE,
#' hessian = TRUE)
#'
#' # log-likelihood of estimated parameters
#' pnbd.cbs.LL(est.params, cal.cbs, TRUE)
pnbd.EstimateParameters <- function(cal.cbs,
par.start = c(1, 1, 1, 1),
max.param.value = 10000,
method = 'L-BFGS-B',
hardie = TRUE,
hessian = FALSE) {
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = par.start,
func = "pnbd.EstimateParameters")
## helper function to be optimized
pnbd.eLL <- function(params, cal.cbs, hardie) {
params <- exp(params)
params[params > max.param.value] <- max.param.value
return(-1 * pnbd.cbs.LL(params = params,
cal.cbs = cal.cbs,
hardie = hardie))
}
logparams <- log(par.start)
if(hessian == TRUE) {
return(optimx(par = logparams,
fn = pnbd.eLL,
method = method,
cal.cbs = cal.cbs,
hardie = hardie,
hessian = hessian))
}
results <- optimx(par = logparams,
fn = pnbd.eLL,
method = method,
cal.cbs = cal.cbs,
hardie = hardie)
params <- exp(unname(unlist(results[method, c('p1', 'p2', 'p3', 'p4')])))
return(params)
}
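## Sketch (never run when this file is sourced) of the advice in the details
## above: run the estimation from several starting points and keep the fit
## with the highest pnbd.cbs.LL().
if (FALSE) {
  data(cdnowSummary)
  cal.cbs <- cdnowSummary$cbs
  starts <- list(c(1, 1, 1, 1),
                 c(0.5, 6, 0.9, 8),
                 c(2, 10, 2, 10))
  fits <- lapply(starts, function(s)
    pnbd.EstimateParameters(cal.cbs, par.start = s, hardie = TRUE))
  lls <- sapply(fits, pnbd.cbs.LL, cal.cbs = cal.cbs, hardie = TRUE)
  fits[[which.max(lls)]]
}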
#' Generalized Pareto/NBD Probability Mass Function
#'
#' Generalized probability mass function for the Pareto/NBD.
#'
#' P(X(t.start, t.end)=x | r, alpha, s, beta). Returns the probability that a
#' customer makes x repeat transactions in the time interval (t.start, t.end].
#'
#' It is impossible for a customer to make a negative number of repeat
#' transactions. This function will return an error if it is given negative
#' times or a negative number of repeat transactions. This function will also
#' return an error if t.end is less than t.start.
#'
#' \code{t.start}, \code{t.end}, and \code{x} may be vectors. The standard rules
#' for vector operations apply - if they are not of the same length, shorter
#' vectors will be recycled (start over at the first element) until they are as
#' long as the longest vector. It is advisable to keep vectors to the same
#' length and to use single values for parameters that are to be the same for
#' all calculations. If one of these parameters has a length greater than one,
#' the output will be a vector of probabilities.
#'
#' @param params Pareto/NBD parameters - a vector with r, alpha, s, and beta, in
#' that order. r and alpha are unobserved parameters for the NBD transaction
#' process. s and beta are unobserved parameters for the Pareto (exponential
#' gamma) dropout process.
#' @param t.start start of time period for which probability is being
#' calculated. It can also be a vector of values.
#' @param t.end end of time period for which probability is being calculated. It
#' can also be a vector of values.
#' @param x number of repeat transactions by a random customer in the period
#' defined by (t.start, t.end]. It can also be a vector of values.
#' @param hardie if TRUE, use \code{\link{h2f1}} instead of
#' \code{\link[hypergeo]{hypergeo}}.
#'
#' @return Probability of x transactions occurring between t.start and t.end
#' conditional on model parameters. If t.start, t.end, and/or x has a length
#' greater than one, a vector of probabilities will be returned.
#' @references Fader, Peter S., and Bruce G.S. Hardie. "Deriving an Expression
#' for P (X(t) = x) Under the Pareto/NBD Model." Sept. 2006. Web.
#' \url{http://www.brucehardie.com/notes/012/}
#' @references Fader, Peter S., Bruce G.S. Hardie, and Kinshuk Jerath. "Deriving
#' an Expression for P (X(t, t + tau) = x) Under the Pareto/NBD Model." Sept.
#' 2006. Web. \url{http://www.brucehardie.com/notes/013/}
#'
#' @examples
#' # probability that a customer will make 10 repeat transactions in the
#' # time interval (1,2]
#' data("cdnowSummary")
#' cal.cbs <- cdnowSummary$cbs
#' params <- pnbd.EstimateParameters(cal.cbs = cal.cbs,
#' method = "L-BFGS-B",
#' hardie = TRUE)
#' pnbd.pmf.General(params, t.start=1, t.end=2, x=10, hardie = TRUE)
#' # probability that a customer will make no repeat transactions in the
#' # time interval (39,78]
#' pnbd.pmf.General(params,
#' t.start = 39,
#' t.end = 78,
#' x = 0,
#' hardie = TRUE)
pnbd.pmf.General <- function(params,
t.start,
t.end,
x,
hardie = TRUE) {
if (any(t.start > t.end)) {
stop("Error in pnbd.pmf.General: t.start > t.end.")
}
inputs <- try(dc.InputCheck(params = params,
func = "pnbd.pmf.General",
printnames = c("r", "alpha", "s", "beta"),
t.start = t.start,
t.end = t.end,
x = x))
if (inherits(inputs, "try-error")) return(attr(inputs, "condition")$message)
t.start <- inputs$t.start
t.end <- inputs$t.end
x <- inputs$x
max.length <- nrow(inputs)
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
equation.part.0 <- rep(0, max.length)
equation.part.0[x == 0] <- 1 - exp(s * log(beta) - s * log(beta + t.start))
## (t.end - t.start)^x is left outside the exp() in equation.part.1
## because when t.end = t.start and x = 0, exp() gets us into trouble:
## -- exp(0 * log(0)) = NaN;
## -- doing directly 0^0 instead gets us 0^0=1
## -- 1 is much better than NaN.
equation.part.1 <- exp(lgamma(r + x) -
lgamma(r) -
lfactorial(x) +
r * log(alpha) -
r * log(alpha + t.end - t.start) -
x * log(alpha + t.end - t.start) +
s * log(beta) -
s * log(beta + t.end)) *
(t.end - t.start)^x
equation.part.2 <- r * log(alpha) + s * log(beta) + lbeta(r + x, s + 1) - lbeta(r, s)
# Marshal the parameters of the hypergeometric and the
# denominator in the expressions of B1 and B2 shown in
# http://www.brucehardie.com/notes/013/Pareto_NBD_interval_pmf_rev.pdf
B1B2 <- function(hardie,
r,
alpha,
s,
beta,
x,
t.start,
t.end = 0,
ii = 0) {
myalpha <- alpha
mybeta <- beta + t.start
maxab <- max(myalpha, mybeta) + t.end
absab <- abs(myalpha - mybeta)
param2 <- s + 1
if (myalpha < mybeta) {
# < same as <= in the case of
# the hypergeometric, because =
# case is trivial.
param2 <- r + x
}
w <- r + s + ii
a <- w
b <- param2
c <- r + s + x + 1
z <- absab/maxab
den <- maxab^w
if(hardie == TRUE) return(h2f1(a, b, c, z)/den)
return(Re(hypergeo(a, b, c, z))/den)
}
B.1 <- mapply(x = inputs$x,
t.start = inputs$t.start,
B1B2,
hardie = hardie,
r = r,
alpha = alpha,
s = s,
beta = beta)
#B.1 <- B1B2(hardie, r, alpha, s, beta, x, t.start)
equation.part.2.summation <- rep(NA, max.length)
## In the paper, for i=0 we have t^i / i * B(r+s, i).
## the denominator reduces to:
## i * Gamma (r+s) * Gamma(i) / Gamma (r+s+i) :
## Gamma (r+s) * Gamma(i+1) / Gamma(r+s+i) :
## Gamma (r+s) * Gamma(1) / Gamma(r+s) :
## 1
## The 1 represents this reduced piece of the equation.
for (i in 1:max.length) {
ii <- c(1:x[i])
equation.part.2.summation[i] <- B1B2(hardie, r, alpha, s, beta, x[i], t.start[i], t.end[i], 0)
if (x[i] > 0) {
equation.part.2.summation[i] <- equation.part.2.summation[i] +
sum((t.end[i] - t.start[i])^ii/(ii * beta(r + s, ii)) *
B1B2(hardie, r, alpha, s, beta, x[i], t.start[i], t.end[i], ii))
}
}
return(equation.part.0 +
equation.part.1 +
exp(equation.part.2 +
log(B.1 - equation.part.2.summation)))
}
#' Pareto/NBD Conditional Expected Transactions
#'
#' Uses Pareto/NBD model parameters and a customer's past transaction behavior
#' to return the number of transactions they are expected to make in a given
#' time period.
#'
#' E\[X(T.cal, T.cal + T.star) | x, t.x, r, alpha, s, beta\]
#'
#' \code{T.star}, \code{x}, \code{t.x}, and \code{T.cal} may be vectors. The
#' standard rules for vector operations apply - if they are not of the same
#' length, shorter vectors will be recycled (start over at the first element)
#' until they are as long as the longest vector. It is advisable to keep vectors
#' to the same length and to use single values for parameters that are to be the
#' same for all calculations. If one of these parameters has a length greater
#' than one, the output will be a vector of probabilities.
#'
#' @param params Pareto/NBD parameters - a vector with r, alpha, s, and beta, in
#' that order. r and alpha are unobserved parameters for the NBD transaction
#' process. s and beta are unobserved parameters for the Pareto (exponential
#' gamma) dropout process.
#' @param T.star length of time for which we are calculating the expected number
#' of transactions.
#' @param x number of repeat transactions in the calibration period T.cal, or a
#' vector of calibration period frequencies.
#' @param t.x time of most recent repeat transaction, or a vector of recencies.
#' @param T.cal length of calibration period, or a vector of calibration period
#' lengths.
#' @param hardie if TRUE, have \code{\link{pnbd.PAlive}} use \code{\link{h2f1}}
#' instead of \code{\link[hypergeo]{hypergeo}}.
#'
#' @return Number of transactions a customer is expected to make in a time
#' period of length t, conditional on their past behavior. If any of the input
#' parameters has a length greater than 1, this will be a vector of expected
#' number of transactions.
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the
#' Pareto/NBD Model and Related Expressions." November. 2005. Web.
#' \url{http://www.brucehardie.com/notes/008/}
#' @seealso \code{\link{pnbd.Expectation}}
#'
#' @examples
#' params <- c(0.55, 10.56, 0.61, 11.64)
#' # Number of transactions a customer is expected to make in 2 time
#' # intervals, given that they made 10 repeat transactions in a time period
#' # of 39 intervals, with the 10th repeat transaction occurring in the 35th
#' # interval.
#' pnbd.ConditionalExpectedTransactions(params,
#' T.star = 2,
#' x = 10,
#' t.x = 35,
#' T.cal = 39,
#' hardie = TRUE)
#'
#' # We can also compare expected transactions across different
#' # calibration period behaviors:
#' pnbd.ConditionalExpectedTransactions(params,
#' T.star = 2,
#' x = 5:20,
#' t.x = 25,
#' T.cal = 39,
#' hardie = TRUE)
pnbd.ConditionalExpectedTransactions <- function(params,
T.star,
x,
t.x,
T.cal,
hardie = TRUE) {
inputs <- try(dc.InputCheck(params = params,
func = "pnbd.ConditionalExpectedTransactions",
printnames = c("r", "alpha", "s", "beta"),
T.star = T.star,
x = x,
t.x = t.x,
T.cal = T.cal))
if (inherits(inputs, "try-error")) return(attr(inputs, "condition")$message)
T.star <- inputs$T.star
x <- inputs$x
t.x <- inputs$t.x
T.cal <- inputs$T.cal
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
P1 <- (r + x) * (beta + T.cal)/((alpha + T.cal) * (s - 1))
P2 <- (1 - ((beta + T.cal)/(beta + T.cal + T.star))^(s - 1))
P3 <- pnbd.PAlive(params = params,
x = x,
t.x = t.x,
T.cal = T.cal,
hardie = hardie)
return(P1 * P2 * P3)
}
#' Pareto/NBD Probability Mass Function
#'
#' Probability mass function for the Pareto/NBD.
#'
#' P(X(t)=x | r, alpha, s, beta). Returns the probability that a customer makes
#' x repeat transactions in the time interval (0, t].
#'
#' Parameters \code{t} and \code{x} may be vectors. The standard rules for
#' vector operations apply - if they are not of the same length, the shorter
#' vector will be recycled (start over at the first element) until it is as long
#' as the longest vector. It is advisable to keep vectors to the same length and
#' to use single values for parameters that are to be the same for all
#' calculations. If one of these parameters has a length greater than one, the
#' output will be a vector of probabilities.
#'
#' @param params Pareto/NBD parameters - a vector with r, alpha, s, and beta, in
#' that order. r and alpha are unobserved parameters for the NBD transaction
#' process. s and beta are unobserved parameters for the Pareto (exponential
#' gamma) dropout process.
#' @param t length end of time period for which probability is being computed.
#' May also be a vector.
#' @param x number of repeat transactions by a random customer in the period
#' defined by t. May also be a vector.
#' @param hardie if TRUE, have \code{\link{pnbd.pmf.General}} use
#' \code{\link{h2f1}} instead of \code{\link[hypergeo]{hypergeo}}.
#'
#' @return Probability of X(t)=x conditional on model parameters. If t and/or x
#' has a length greater than one, a vector of probabilities will be returned.
#' @references Fader, Peter S., and Bruce G.S. Hardie. "Deriving an Expression
#' for P (X(t) = x) Under the Pareto/NBD Model." Sept. 2006. Web.
#' \url{http://www.brucehardie.com/notes/012/}
#'
#' @examples
#' params <- c(0.55, 10.56, 0.61, 11.64)
#' # probability that a customer will make 10 repeat transactions in the
#' # time interval (0,2]
#' pnbd.pmf(params, t=2, x=10, hardie = TRUE)
#' # probability that a customer will make no repeat transactions in the
#' # time interval (0,39]
#' pnbd.pmf(params, t=39, x=0, hardie = TRUE)
#'
#' # Vectors may also be used as arguments:
#' pnbd.pmf(params = params,
#' t = 30,
#' x = 11:20,
#' hardie = TRUE)
pnbd.pmf <- function(params,
t,
x,
hardie = TRUE) {
inputs <- try(dc.InputCheck(params = params,
func = "pnbd.pmf",
printnames = c("r", "alpha", "s", "beta"),
t = t,
x = x))
if (inherits(inputs, "try-error")) return(attr(inputs, "condition")$message)
pnbd.pmf.General(params = params,
t.start = 0,
t.end = inputs$t,
x = inputs$x,
hardie = hardie)
}
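## Sketch (never run when this file is sourced): as a sanity check, the pmf
## summed over a wide enough range of x should be very close to 1 for a fixed
## t, since the omitted tail mass is negligible for these parameter values.
if (FALSE) {
  params <- c(0.55, 10.56, 0.61, 11.64)
  sum(pnbd.pmf(params, t = 39, x = 0:60, hardie = TRUE))  # approximately 1
}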
#' Pareto/NBD Expectation
#'
#' Returns the number of repeat transactions that a randomly chosen customer
#' (for whom we have no prior information) is expected to make in a given time
#' period.
#'
#' E(X(t) | r, alpha, s, beta)
#'
#' @inheritParams pnbd.LL
#' @param t The length of time for which we are calculating the expected number
#' of repeat transactions.
#' @return Number of repeat transactions a customer is expected to make in a
#' time period of length t.
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the
#' Pareto/NBD Model and Related Expressions." November. 2005. Web.
#' \url{http://www.brucehardie.com/notes/008/}
#' @seealso [`pnbd.ConditionalExpectedTransactions`]
#' @examples
#' params <- c(0.55, 10.56, 0.61, 11.64)
#'
#' # Number of repeat transactions a customer is expected to make in 2 time intervals.
#' pnbd.Expectation(params = params,
#' t = 2)
#'
#' # We can also compare expected transactions over time:
#' pnbd.Expectation(params = params,
#' t = 1:10)
#' @md
pnbd.Expectation <- function(params, t) {
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = params,
func = "pnbd.Expectation")
if (any(t < 0) || !is.numeric(t))
stop("t must be numeric and may not contain negative numbers.")
r = params[1]
alpha = params[2]
s = params[3]
beta = params[4]
return((r * beta)/(alpha * (s - 1)) * (1 - (beta/(beta + t))^(s - 1)))
}
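## Sketch (never run when this file is sourced): for a brand-new customer
## (x = 0, t.x = 0, T.cal = 0, so that P(alive) = 1), the conditional
## expectation should reduce to the unconditional expectation above. This
## assumes dc.InputCheck() accepts the zero-valued inputs.
if (FALSE) {
  params <- c(0.55, 10.56, 0.61, 11.64)
  pnbd.Expectation(params, t = 10)
  pnbd.ConditionalExpectedTransactions(params, T.star = 10,
                                       x = 0, t.x = 0, T.cal = 0,
                                       hardie = TRUE)
}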
#' Pareto/NBD Expected Cumulative Transactions
#'
#' Calculates the expected cumulative total repeat transactions by all customers
#' for the calibration and holdout periods.
#'
#' The function automatically divides the total period up into n.periods.final
#' time intervals. n.periods.final does not have to be in the same unit of time
#' as the T.cal data. For example:
#'
#' * if your T.cal data is in weeks, and you want cumulative transactions per
#'   week, n.periods.final would equal T.star.
#' * if your T.cal data is in weeks, and you want cumulative transactions per
#'   day, n.periods.final would equal T.star * 7.
#'
#' The holdout period should immediately follow the calibration period. This
#' function assumes that all customers' calibration periods end on the same date,
#' rather than starting on the same date (thus customers' birth periods are
#' determined using max(T.cal) - T.cal rather than assuming that it is 0).
#'
#' @inheritParams pnbd.LL
#' @param T.tot End of holdout period. Must be a single value, not a vector.
#' @param n.periods.final Number of time periods in the calibration and holdout
#' periods. See details.
#' @return Vector of expected cumulative total repeat transactions by all
#' customers.
#' @seealso [`pnbd.Expectation`]
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' params <- c(0.55, 10.56, 0.61, 11.64)
#'
#' # Returns a vector containing cumulative repeat transactions for 546 days.
#' # All parameters are in weeks; the calibration period lasted 39 weeks
#' # and the holdout period another 39.
#' pnbd.ExpectedCumulativeTransactions(params = params,
#' T.cal = cal.cbs[,"T.cal"],
#' T.tot = 78,
#' n.periods.final = 546)
#' @md
pnbd.ExpectedCumulativeTransactions <- function(params,
T.cal,
T.tot,
n.periods.final) {
inputs <- try(dc.InputCheck(params = params,
func = "pnbd.ExpectedCumulativeTransactions",
printnames = c("r", "alpha", "s", "beta"),
T.cal = T.cal,
T.tot = T.tot,
n.periods.final = n.periods.final))
if (inherits(inputs, "try-error")) return(attr(inputs, "condition")$message)
stopit <- "must be a single numeric value and may not be negative."
if (length(T.tot) > 1) stop(paste("T.tot", stopit, sep = " "))
if (length(n.periods.final) > 1) stop(paste("n.periods.final", stopit, sep = " "))
## Divide up time into equal intervals
intervals <- seq(T.tot/n.periods.final,
T.tot,
length.out = n.periods.final)
cust.birth.periods <- max(T.cal) - T.cal
expected.transactions <- sapply(intervals,
function(interval) {
if (interval <= min(cust.birth.periods))
return(0)
sum(pnbd.Expectation(params,
interval - cust.birth.periods[cust.birth.periods <= interval]))
})
return(expected.transactions)
}
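## Sketch (never run when this file is sourced): the cumulative curve returned
## above can be converted to per-period increments with
## dc.CumulativeToIncremental(), e.g. to inspect expected incremental repeat
## transactions per day.
if (FALSE) {
  data(cdnowSummary)
  cal.cbs <- cdnowSummary$cbs
  params <- c(0.55, 10.56, 0.61, 11.64)
  cu.expected <- pnbd.ExpectedCumulativeTransactions(params = params,
                                                     T.cal = cal.cbs[, "T.cal"],
                                                     T.tot = 78,
                                                     n.periods.final = 546)
  inc.expected <- dc.CumulativeToIncremental(cu.expected)
  plot(inc.expected, type = "l",
       xlab = "Day", ylab = "Expected incremental repeat transactions")
}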
#' Pareto/NBD Plot Frequency in Calibration Period
#'
#' Plots a histogram and returns a matrix comparing the actual and expected
#' number of customers who made a certain number of repeat transactions in the
#' calibration period, binned according to calibration period frequencies.
#'
#' This function requires a censor number, which cannot be higher than the
#' highest frequency in the calibration period CBS. The output matrix will have
#' (censor + 1) bins, starting at frequencies of 0 transactions and ending at a
#' bin representing calibration period frequencies at or greater than the censor
#' number. The plot may or may not include a bin for zero frequencies, depending
#' on the plotZero parameter.
#'
#' @param params Pareto/NBD parameters - a vector with r, alpha, s, and beta, in
#' that order. r and alpha are unobserved parameters for the NBD transaction
#' process. s and beta are unobserved parameters for the Pareto (exponential
#' gamma) dropout process.
#' @param cal.cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x") and total time observed ("T.cal").
#' @param censor integer used to censor the data. See details.
#' @param hardie if TRUE, have \code{\link{pnbd.pmf}} use \code{\link{h2f1}}
#' instead of \code{\link[hypergeo]{hypergeo}}.
#' @param plotZero if FALSE, the histogram will exclude the zero bin.
#' @param xlab descriptive label for the x axis.
#' @param ylab descriptive label for the y axis.
#' @param title title placed on the top-center of the plot.
#'
#' @return Calibration period repeat transaction frequency comparison matrix
#' (actual vs. expected).
#'
#' @examples
#' data(cdnowSummary)
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # parameters estimated using pnbd.EstimateParameters
#' est.params <- cdnowSummary$est.params
#' # the maximum censor number that can be used
#' max(cal.cbs[,"x"])
#'
#' pnbd.PlotFrequencyInCalibration(params = est.params,
#' cal.cbs = cal.cbs,
#' censor = 7,
#' hardie = TRUE)
pnbd.PlotFrequencyInCalibration <- function(params,
cal.cbs,
censor,
hardie = TRUE,
plotZero = TRUE,
xlab = "Calibration period transactions",
ylab = "Customers",
title = "Frequency of Repeat Transactions") {
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = params,
func = "pnbd.PlotFrequencyInCalibration")
stopifnot("x" %in% colnames(cal.cbs) | "T.cal" %in% colnames(cal.cbs))
x <- cal.cbs[, "x"]
T.cal <- cal.cbs[, "T.cal"]
if (censor > max(x))
stop("censor too big (> max freq) in PlotFrequencyInCalibration.")
n.x <- rep(0, max(x) + 1)
custs = nrow(cal.cbs)
for (ii in unique(x)) {
n.x[ii + 1] <- sum(ii == x)
}
n.x.censor <- sum(n.x[(censor + 1):length(n.x)])
n.x.actual <- c(n.x[1:censor], n.x.censor)
T.value.counts <- table(T.cal)
T.values <- as.numeric(names(T.value.counts))
n.T.values <- length(T.values)
total.probability <- 0
n.x.expected <- rep(0, length(n.x.actual))
for (ii in 1:(censor)) {
this.x.expected <- 0
for (T.idx in 1:n.T.values) {
T <- T.values[T.idx]
if (T == 0)
next
n.T <- T.value.counts[T.idx]
expected.given.x.and.T <- n.T * pnbd.pmf(params, T, ii - 1, hardie)
this.x.expected <- this.x.expected + expected.given.x.and.T
total.probability <- total.probability + expected.given.x.and.T/custs
}
n.x.expected[ii] <- this.x.expected
}
n.x.expected[censor + 1] <- custs * (1 - total.probability)
col.names <- paste(rep("freq", length(censor + 1)), (0:censor), sep = ".")
col.names[censor + 1] <- paste(col.names[censor + 1], "+", sep = "")
censored.freq.comparison <- rbind(n.x.actual, n.x.expected)
colnames(censored.freq.comparison) <- col.names
cfc.plot <- censored.freq.comparison
if (plotZero == FALSE)
cfc.plot <- cfc.plot[, -1]
n.ticks <- ncol(cfc.plot)
if (plotZero == TRUE) {
x.labels <- 0:(n.ticks - 1)
x.labels[n.ticks] <- paste(n.ticks - 1, "+", sep = "")
} else {
x.labels <- 1:(n.ticks)
x.labels[n.ticks] <- paste(n.ticks, "+", sep = "")
}
ylim <- c(0, ceiling(max(cfc.plot) * 1.1))
barplot(cfc.plot, names.arg = x.labels, beside = TRUE, ylim = ylim, main = title,
xlab = xlab, ylab = ylab, col = 1:2)
legend("topright", legend = c("Actual", "Model"), col = 1:2, lwd = 2)
return(censored.freq.comparison)
}
#' Pareto/NBD Plot Frequency vs. Conditional Expected Frequency
#'
#' Plots the actual and conditional expected number transactions made by
#' customers in the holdout period, binned according to calibration period
#' frequencies. Also returns a matrix with this comparison and the number of
#' customers in each bin.
#'
#' This function requires a censor number, which cannot be higher than the
#' highest frequency in the calibration period CBS. The output matrix will have
#' (censor + 1) bins, starting at frequencies of 0 transactions and ending at a
#' bin representing calibration period frequencies at or greater than the censor
#' number.
#'
#' @param params Pareto/NBD parameters - a vector with r, alpha, s, and beta, in
#' that order. r and alpha are unobserved parameters for the NBD transaction
#' process. s and beta are unobserved parameters for the Pareto (exponential
#' gamma) dropout process.
#' @param T.star length of the holdout period. It must be a scalar for this
#' plot's purposes: you have one holdout period of a given length.
#' @param cal.cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x"), recency ("t.x"), and total time
#' observed ("T.cal"). Note that recency must be the time between the start of
#' the calibration period and the customer's last transaction, not the time
#' between the customer's last transaction and the end of the calibration
#' period.
#' @param x.star vector of transactions made by each customer in the holdout
#' period.
#' @param censor integer used to censor the data. See details.
#' @param hardie if TRUE, have
#' \code{\link{pnbd.ConditionalExpectedTransactions}} use \code{\link{h2f1}}
#' instead of \code{\link[hypergeo]{hypergeo}}.
#' @param xlab descriptive label for the x axis.
#' @param ylab descriptive label for the y axis.
#' @param xticklab vector containing a label for each tick mark on the x axis.
#' @param title title placed on the top-center of the plot.
#'
#' @return Holdout period transaction frequency comparison matrix (actual vs. expected).
#'
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # number of transactions by each customer in the 39 weeks
#' # following the calibration period
#' x.star <- cal.cbs[,"x.star"]
#'
#' # parameters estimated using pnbd.EstimateParameters
#' est.params <- cdnowSummary$est.params
#' # the maximum censor number that can be used
#' max(cal.cbs[,"x"])
#'
#' # plot conditional expected holdout period frequencies,
#' # binned according to calibration period frequencies
#' pnbd.PlotFreqVsConditionalExpectedFrequency(params = est.params,
#' T.star = 39,
#' cal.cbs = cal.cbs,
#' x.star = x.star,
#' censor = 7,
#' hardie = TRUE)
pnbd.PlotFreqVsConditionalExpectedFrequency <- function(params,
T.star,
cal.cbs,
x.star,
censor,
hardie = TRUE,
xlab = "Calibration period transactions",
ylab = "Holdout period transactions",
xticklab = NULL,
title = "Conditional Expectation") {
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = params,
func = "pnbd.PlotFreqVsConditionalExpectedFrequency")
# Check that you have the right columns.
# They should be 'x', 't.x', 'T.cal' and optionally 'custs'
# in this order. They stand for, respectively
# -- x: frequency
# -- t.x: recency
# -- T.cal: observed calendar time
# -- custs: number of customers with this (x, t.x, T.cal) combo
foo <- colnames(cal.cbs)
stopifnot(foo[1] == 'x' &
foo[2] == 't.x' &
foo[3] == 'T.cal')
x <- cal.cbs[,'x']
t.x <- cal.cbs[,'t.x']
T.cal <- cal.cbs[,'T.cal']
if (censor > max(x))
stop("censor too big (> max freq) in PlotFreqVsConditionalExpectedFrequency.")
if (length(T.star) > 1 || T.star < 0 || !is.numeric(T.star))
stop("T.star must be a positive scalar.")
if (any(x.star < 0) || !is.numeric(x.star))
stop("x.star must be numeric and may not contain negative numbers.")
n.bins <- censor + 1
bin.size <- rep(0, n.bins)
# First, find the right sequence of
# transaction counts. It may have gaps,
# for which n.this.bin = 0, which is
# kinda hard to divide by. So:
for (cc in 0:censor) {
if (cc < censor) {
this.bin <- which(x == cc)
} else {
this.bin <- which(x >= cc)
}
n.this.bin <- length(this.bin)
bin.size[cc + 1] <- n.this.bin
}
# Now you got the right list of net bins:
# those with at least 1 customer in them.
bin.size <- bin.size[bin.size > 0]
n.bins <- length(bin.size)
names(bin.size) <- names(table(x))[1:n.bins]
xvals <- as.integer(names(bin.size))
transaction.actual <- rep(0, n.bins)
transaction.expected <- rep(0, n.bins)
for(cc in 1:n.bins) {
n.this.bin <- bin.size[cc]
if (cc < n.bins) {
this.bin <- which(x == xvals[cc])
} else {
this.bin <- which(x >= xvals[cc])
}
transaction.actual[cc] <- sum(x.star[this.bin])/n.this.bin
transaction.expected[cc] <- sum(pnbd.ConditionalExpectedTransactions(params,
T.star,
x[this.bin],
t.x[this.bin],
T.cal[this.bin],
hardie))/n.this.bin
}
col.names <- paste("freq", names(bin.size), sep = ".")
col.names[n.bins] <- paste(col.names[n.bins], '+', sep = '')
comparison <- rbind(transaction.actual, transaction.expected, bin.size)
colnames(comparison) <- col.names
x.labels <- sapply(colnames(comparison), function(x) gsub('freq.', '', x))
actual <- comparison[1, ]
expected <- comparison[2, ]
ylim <- c(0, ceiling(max(c(actual, expected)) * 1.1))
plot(actual, type = "l", xaxt = "n",
col = 1, ylim = ylim, xlab = xlab, ylab = ylab,
main = title)
lines(expected, lty = 2, col = 2)
axis(1, at = 1:ncol(comparison), labels = x.labels)
legend("topleft", legend = c("Actual", "Model"), col = 1:2, lty = 1:2, lwd = 1)
return(comparison)
}
#' Pareto/NBD Plot Actual vs. Conditional Expected Frequency by Recency
#'
#' Plots the actual and conditional expected number of transactions made by
#' customers in the holdout period, binned according to calibration period
#' recencies. Also returns a matrix with this comparison and the number of
#' customers in each bin.
#'
#' This function does not bin customers exactly according to recency; it bins
#' customers according to integer units of the time period of cal.cbs.
#' Therefore, if you are using weeks in your data, customers will be binned as
#' follows: customers with recencies between the start of the calibration period
#' (inclusive) and the end of week one (exclusive); customers with recencies
#' between the end of week one (inclusive) and the end of week two (exclusive);
#' etc.
#'
#' The matrix and plot will contain the actual number of transactions made by
#' each bin in the holdout period, as well as the expected number of
#' transactions made by that bin in the holdout period, conditional on that
#' bin's behavior during the calibration period.
#'
#' @param params Pareto/NBD parameters - a vector with r, alpha, s, and beta, in
#' that order. r and alpha are unobserved parameters for the NBD transaction
#' process. s and beta are unobserved parameters for the Pareto (exponential
#' gamma) dropout process.
#' @param cal.cbs calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ("x"), recency ("t.x"), and total time
#' observed ("T.cal"). Note that recency must be the time between the start of
#' the calibration period and the customer's last transaction, not the time
#' between the customer's last transaction and the end of the calibration
#' period.
#' @param T.star length of the holdout period. It must be a scalar for this
#' plot's purposes: you have one holdout period of a given length.
#' @param x.star vector of transactions made by each customer in the holdout
#' period.
#' @param hardie if TRUE, have
#' \code{\link{pnbd.ConditionalExpectedTransactions}} use \code{\link{h2f1}}
#' instead of \code{\link[hypergeo]{hypergeo}}.
#' @param xlab descriptive label for the x axis.
#' @param ylab descriptive label for the y axis.
#' @param xticklab vector containing a label for each tick mark on the x axis.
#' @param title title placed on the top-center of the plot.
#'
#' @return Matrix comparing actual and conditional expected transactions in the holdout period.
#'
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # number of transactions by each customer in the 39 weeks following
#' # the calibration period
#' x.star <- cal.cbs[,"x.star"]
#'
#' # parameters estimated using pnbd.EstimateParameters
#' est.params <- cdnowSummary$est.params
#'
#' # plot conditional expected holdout period transactions, binned according to
#' # calibration period recencies
#' pnbd.PlotRecVsConditionalExpectedFrequency(params = est.params,
#' cal.cbs = cal.cbs,
#' T.star = 39,
#' x.star = x.star,
#' hardie = TRUE)
pnbd.PlotRecVsConditionalExpectedFrequency <- function(params,
cal.cbs,
T.star,
x.star,
hardie = TRUE,
xlab = "Calibration period recency",
ylab = "Holdout period transactions",
xticklab = NULL,
title = "Actual vs. Conditional Expected Transactions by Recency") {
# No use for inputs, other than as error check.
inputs <- try(dc.InputCheck(params = params,
func = "pnbd.PlotRecVsConditionalExpectedFrequency",
printnames = c("r", "alpha", "s", "beta"),
T.star = T.star,
x.star = x.star))
if (inherits(inputs, "try-error")) return(conditionMessage(attr(inputs, "condition")))
if (length(T.star) > 1)
stop("T.star must be a scalar.")
# Check that you have the right columns.
# They should be 'x', 't.x', 'T.cal' and optionally 'custs'
# in this order. They stand for, respectively
# -- x: frequency
# -- t.x: recency
# -- T.cal: observed calendar time
# -- custs: number of customers with this (x, t.x, T.cal) combo
foo <- colnames(cal.cbs)
stopifnot(foo[1] == 'x' &
foo[2] == 't.x' &
foo[3] == 'T.cal')
x <- cal.cbs[,'x']
t.x <- cal.cbs[,'t.x']
T.cal <- cal.cbs[,'T.cal']
t.values <- sort(unique(t.x))
n.recs <- length(t.values)
transaction.actual <- rep(0, n.recs)
transaction.expected <- rep(0, n.recs)
rec.size <- rep(0, n.recs)
for (tt in 1:n.recs) {
this.t.x <- t.values[tt]
this.rec <- which(t.x == this.t.x)
n.this.rec <- length(this.rec)
rec.size[tt] <- n.this.rec
transaction.actual[tt] <- sum(x.star[this.rec])/n.this.rec
transaction.expected[tt] <- sum(pnbd.ConditionalExpectedTransactions(params,
T.star,
x[this.rec],
t.x[this.rec],
T.cal[this.rec],
hardie))/n.this.rec
}
comparison <- rbind(transaction.actual, transaction.expected, rec.size)
colnames(comparison) <- round(t.values, 3)
bins <- seq(1, ceiling(max(t.x)))
n.bins <- length(bins)
actual <- rep(0, n.bins)
expected <- rep(0, n.bins)
bin.size <- rep(0, n.bins)
x.labels <- NULL
if (is.null(xticklab) == FALSE) {
x.labels <- xticklab
} else {
x.labels <- 1:(n.bins)
}
point.labels <- rep("", n.bins)
point.y.val <- rep(0, n.bins)
for (ii in 1:n.bins) {
if (ii < n.bins) {
this.bin <- which(as.numeric(colnames(comparison)) >= (ii - 1) &
as.numeric(colnames(comparison)) < ii)
} else if (ii == n.bins) {
this.bin <- which(as.numeric(colnames(comparison)) >= ii - 1)
}
actual[ii] <- sum(comparison[1, this.bin])/length(comparison[1, this.bin])
expected[ii] <- sum(comparison[2, this.bin])/length(comparison[2, this.bin])
bin.size[ii] <- sum(comparison[3, this.bin])
}
ylim <- c(0, ceiling(max(c(actual, expected)) * 1.1))
plot(actual,
type = "l",
xaxt = "n",
col = 1,
ylim = ylim,
xlab = xlab,
ylab = ylab,
main = title)
lines(expected,
lty = 2,
col = 2)
axis(1, at = 1:n.bins, labels = x.labels)
legend("topleft",
legend = c("Actual", "Model"),
col = 1:2,
lty = 1:2,
lwd = 1)
return(rbind(actual, expected, bin.size))
}
#' Pareto/NBD Plot Discounted Expected Residual Transactions
#'
#' Plots discounted expected residual transactions for different combinations of
#' calibration period frequency and recency.
#'
#' The length of the calibration period \code{T.cal} must be a single value, not
#' a vector.
#'
#' @inheritParams pnbd.DERT
#' @param type must be either "persp" (perspective - 3 dimensional) or
#' "contour". Determines the type of plot produced by this function.
#'
#' @return A matrix with discounted expected residual transaction values for
#' every combination of calibration period frequency \code{x} and calibration
#' period recency \code{t.x}.
#'
#' @references Fader, Peter S., Bruce G.S. Hardie, and Ka L. Lee. "RFM and CLV:
#' Using Iso-Value Curves for Customer Base Analysis." Journal of Marketing
#' Research Vol.42, pp.415-430. November. 2005.
#' \url{http://www.brucehardie.com/papers.html}
#' @references Note that this paper refers to what this package is calling
#' discounted expected residual transactions (DERT) simply as discounted
#' expected transactions (DET).
#'
#' @examples
#' # The RFM and CLV paper uses all 78 weeks of the cdnow data to
#' # estimate parameters. These parameters can be estimated as follows:
#' # elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD2"),2,3)
#' # elog[, 'date'] <- as.Date(elog[, 'date'], format = '%Y%m%d')
#' # cal.cbs <- dc.ElogToCbsCbt(elog)$cal$cbs
#' # pnbd.EstimateParameters(cal.cbs, hardie = TRUE)
#'
#' # (The final function was run several times with its own output as
#' # input for starting parameters, to ensure that the result converged).
#'
#' params <- c(0.5629966, 12.5590370, 0.4081095, 10.5148048)
#'
#' # 15% compounded annually has been converted to 0.0027 compounded continuously,
#' # as we are dealing with weekly data and not annual data.
#' d <- 0.0027
#'
#' pnbd.Plot.DERT(params = params,
#' x = 0:14,
#' t.x = 0:77,
#' T.cal = 77.86,
#' d = d,
#' hardie = TRUE,
#' type = "persp")
#' pnbd.Plot.DERT(params = params,
#' x = 0:14,
#' t.x = 0:77,
#' T.cal = 77.86,
#' d = d,
#' hardie = TRUE,
#' type="contour")
pnbd.Plot.DERT <- function(params,
x,
t.x,
T.cal,
d,
hardie = TRUE,
type = "persp") {
# No use for inputs, other than as error check.
inputs <- try(dc.InputCheck(params = params,
func = "pnbd.Plot.DERT",
printnames = c("r", "alpha", "s", "beta"),
x = x,
t.x = t.x,
T.cal = T.cal))
if (inherits(inputs, "try-error")) return(conditionMessage(attr(inputs, "condition")))
if (length(T.cal) > 1) stop("T.cal must be a single numeric value and may not be negative.")
if (!(type %in% c("persp", "contour"))) {
stop("The plot type in pnbd.Plot.DERT must be either 'persp' or 'contour'.")
}
DERT <- matrix(NA, length(t.x), length(x))
rownames(DERT) <- t.x
colnames(DERT) <- x
for (i in 1:length(t.x)) {
for (j in 1:length(x)) {
DERT[i, j] <- pnbd.DERT(params,
x[j],
t.x[i],
T.cal,
d,
hardie)
}
}
if (type == "contour") {
if (max(DERT, na.rm = TRUE) <= 10) {
levels <- 1:max(DERT, na.rm = TRUE)
} else if (max(DERT, na.rm = TRUE) <= 20) {
levels <- c(1, seq(2, max(DERT, na.rm = TRUE), 2))
} else {
levels <- c(1, 2, seq(5, max(DERT, na.rm = TRUE), 5))
}
contour(x = t.x,
y = x,
z = DERT,
levels = levels,
xlab = "Recency",
ylab = "Frequency",
main = "Iso-Value Representation of DERT")
}
if (type == "persp") {
persp(x = t.x,
y = x,
z = DERT,
theta = -30,
phi = 20,
axes = TRUE,
ticktype = "detailed",
nticks = 5,
main = "DERT as a Function of Frequency and Recency",
shade = 0.5,
xlab = "Recency",
ylab = "Frequency",
zlab = "Discounted expected residual transactions")
}
return(DERT)
}
#' Pareto/NBD Tracking Cumulative Transactions Plot
#'
#' Plots the actual and expected cumulative total repeat transactions by all
#' customers for the calibration and holdout periods, and returns this
#' comparison in a matrix.
#'
#' actual.cu.tracking.data does not have to be in the same unit of time as the
#' T.cal data. T.tot will automatically be divided into periods to match the
#' length of actual.cu.tracking.data. See
#' [`pnbd.ExpectedCumulativeTransactions`].
#'
#' The holdout period should immediately follow the calibration period. This
#' function assume that all customers' calibration periods end on the same date,
#' rather than starting on the same date (thus customers' birth periods are
#' determined using max(T.cal) - T.cal rather than assuming that it is 0).
#'
#' @inheritParams pnbd.ExpectedCumulativeTransactions
#' @param actual.cu.tracking.data A vector containing the cumulative number of
#' repeat transactions made by customers for each period in the total time
#' period (both calibration and holdout periods). See details.
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param xticklab Vector containing a label for each tick mark on the x axis.
#' @param title Title placed on the top-center of the plot.
#' @return Matrix containing actual and expected cumulative repeat transactions.
#'
#' @examples
#' data(cdnowSummary)
#'
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # Cumulative repeat transactions made by all customers across calibration
#' # and holdout periods
#' cu.tracking <- cdnowSummary$cu.tracking
#'
#' # parameters estimated using pnbd.EstimateParameters
#' est.params <- cdnowSummary$est.params
#'
#' # All parameters are in weeks; the calibration period lasted 39
#' # weeks and the holdout period another 39.
#' pnbd.PlotTrackingCum(params = est.params,
#' T.cal = cal.cbs[,"T.cal"],
#' T.tot = 78,
#' actual.cu.tracking.data = cu.tracking)
#' @md
pnbd.PlotTrackingCum <- function(params,
T.cal,
T.tot,
actual.cu.tracking.data,
n.periods.final = NA,
xlab = "Week",
ylab = "Cumulative Transactions",
xticklab = NULL,
title = "Tracking Cumulative Transactions") {
# No use for inputs, other than as error check, so suppress
# any warnings about incompatible vector lengths here:
inputs <- suppressWarnings(try(dc.InputCheck(params = params,
func = "pnbd.PlotTrackingCum",
printnames = c("r", "alpha", "s", "beta"),
T.cal = T.cal,
T.tot = T.tot,
actual.cu.tracking.data = actual.cu.tracking.data,
n.periods.final = n.periods.final)))
if (inherits(inputs, "try-error")) return(conditionMessage(attr(inputs, "condition")))
inputs <- NULL
if (length(T.tot) > 1) stop("T.tot must be a single numeric value and may not be negative.")
actual <- actual.cu.tracking.data
if(is.na(n.periods.final)) n.periods.final <- length(actual)
expected <- pnbd.ExpectedCumulativeTransactions(params = params,
T.cal = T.cal,
T.tot = T.tot,
n.periods.final = n.periods.final)
cu.tracking.comparison <- rbind(actual, expected)
ylim <- c(0, max(c(actual, expected)) * 1.05)
plot(actual,
type = "l",
xaxt = "n",
xlab = xlab,
ylab = ylab,
col = 1,
ylim = ylim,
main = title)
lines(expected, lty = 2, col = 2)
if (is.null(xticklab) == FALSE) {
if (ncol(cu.tracking.comparison) != length(xticklab)) {
stop("Plot error, xticklab does not have the correct size")
}
axis(1, at = 1:ncol(cu.tracking.comparison), labels = xticklab)
} else {
axis(1, at = 1:length(actual), labels = 1:length(actual))
}
abline(v = max(T.cal),
lty = 2)
legend("bottomright",
legend = c("Actual", "Model"),
col = 1:2,
lty = 1:2,
lwd = 1)
return(cu.tracking.comparison)
}
#' Pareto/NBD Tracking Incremental Transactions Comparison
#'
#' Plots the actual and expected incremental total repeat transactions by all
#' customers for the calibration and holdout periods, and returns this
#' comparison in a matrix.
#'
#' actual.inc.tracking.data does not have to be in the same unit of time as the
#' T.cal data. T.tot will automatically be divided into periods to match the
#' length of actual.inc.tracking.data. See
#' [`pnbd.ExpectedCumulativeTransactions`].
#'
#' The holdout period should immediately follow the calibration period. This
#' function assumes that all customers' calibration periods end on the same date,
#' rather than starting on the same date (thus customers' birth periods are
#' determined using max(T.cal) - T.cal rather than assuming that it is 0).
#'
#' @inheritParams pnbd.PlotTrackingCum
#' @param actual.inc.tracking.data A vector containing the incremental number of
#' repeat transactions made by customers for each period in the total time
#' period (both calibration and holdout periods). See details.
#' @return Matrix containing actual and expected incremental repeat transactions.
#' @examples
#' data(cdnowSummary)
#' cal.cbs <- cdnowSummary$cbs
#' # cal.cbs already has column names required by method
#'
#' # Cumulative repeat transactions made by all customers across calibration
#' # and holdout periods
#' cu.tracking <- cdnowSummary$cu.tracking
#' # make the tracking data incremental
#' inc.tracking <- dc.CumulativeToIncremental(cu.tracking)
#'
#' # parameters estimated using pnbd.EstimateParameters
#' est.params <- cdnowSummary$est.params
#'
#' # All parameters are in weeks; the calibration period lasted 39
#' # weeks and the holdout period another 39.
#' pnbd.PlotTrackingInc(params = est.params,
#' T.cal = cal.cbs[,"T.cal"],
#' T.tot = 78,
#' actual.inc.tracking.data = inc.tracking)
#' @md
pnbd.PlotTrackingInc <- function(params,
T.cal,
T.tot,
actual.inc.tracking.data,
n.periods.final = NA,
xlab = "Week",
ylab = "Transactions",
xticklab = NULL,
title = "Tracking Weekly Transactions") {
# No use for inputs, other than as error check, so suppress
# any warnings about incompatible vector lengths here:
inputs <- suppressWarnings(try(dc.InputCheck(params = params,
func = "pnbd.PlotTrackingInc",
printnames = c("r", "alpha", "s", "beta"),
T.cal = T.cal,
T.tot = T.tot,
actual.inc.tracking.data = actual.inc.tracking.data,
n.periods.final = n.periods.final)))
if (inherits(inputs, "try-error")) return(conditionMessage(attr(inputs, "condition")))
inputs <- NULL
if (length(T.tot) > 1) stop("T.tot must be a single numeric value and may not be negative.")
actual <- actual.inc.tracking.data
if(is.na(n.periods.final)) n.periods.final <- length(actual)
expected <- pnbd.ExpectedCumulativeTransactions(params = params,
T.cal = T.cal,
T.tot = T.tot,
n.periods.final = n.periods.final)
expected <- dc.CumulativeToIncremental(expected)
ylim <- c(0, max(c(actual, expected)) * 1.05)
plot(actual,
type = "l",
xaxt = "n",
xlab = xlab,
ylab = ylab,
col = 1,
ylim = ylim,
main = title)
lines(expected, lty = 2, col = 2)
if (is.null(xticklab) == FALSE) {
if (length(actual) != length(xticklab)) {
stop("Plot error, xticklab does not have the correct size")
}
axis(1, at = 1:length(actual), labels = xticklab)
} else {
axis(1, at = 1:length(actual), labels = 1:length(actual))
}
abline(v = max(T.cal),
lty = 2)
legend("topright",
legend = c("Actual", "Model"),
col = 1:2,
lty = 1:2,
lwd = 1)
return(rbind(actual, expected))
}
#' Plot Pareto/NBD Rate Heterogeneity
#'
#' A helper for plotting either the estimated gamma distribution of mu
#' (customers' propensities to drop out), or the estimated gamma distribution of
#' lambda (customers' propensities to purchase).
#'
#' @inheritParams pnbd.LL
#' @param func A string that is either "pnbd.PlotDropoutRateHeterogeneity" or
#' "pnbd.PlotTransactionRateHeterogeneity".
#' @param lim The upper-bound of the x-axis. A number is chosen by the function
#' if none is provided.
#' @return Depending on the value of `func`, either the distribution of
#' customers' propensities to purchase or the distribution of customers'
#' propensities to drop out.
#' @seealso [`pnbd.PlotDropoutRateHeterogeneity`]
#' @seealso [`pnbd.PlotTransactionRateHeterogeneity`]
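#' @examples
#' # A minimal illustration (not run): the same helper drives both wrapper
#' # plots. The parameter values below are arbitrary placeholders.
#' \dontrun{
#' params <- c(0.55, 10.56, 0.61, 11.64)
#' pnbd.PlotRateHeterogeneity(params, func = "pnbd.PlotTransactionRateHeterogeneity")
#' pnbd.PlotRateHeterogeneity(params, func = "pnbd.PlotDropoutRateHeterogeneity")
#' }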
#' @md
pnbd.PlotRateHeterogeneity <- function(params,
func,
lim = NULL) {
stopifnot(func %in% c("pnbd.PlotDropoutRateHeterogeneity",
"pnbd.PlotTransactionRateHeterogeneity"))
dc.check.model.params(printnames = c("r", "alpha", "s", "beta"),
params = params,
func = func)
shape_rate <- list(pnbd.PlotTransactionRateHeterogeneity = c(shape = params[1],
rate = params[2]),
pnbd.PlotDropoutRateHeterogeneity = c(shape = params[3],
rate = params[4]))
xlab_main <- list(pnbd.PlotTransactionRateHeterogeneity = c(xlab = "Transaction Rate",
main = "Heterogeneity in Transaction Rate"),
pnbd.PlotDropoutRateHeterogeneity = c(xlab = "Dropout Rate",
main = "Heterogeneity in Dropout Rate"))
shape <- shape_rate[[func]]['shape']
rate <- shape_rate[[func]]['rate']
rate.mean <- round(shape/rate, 4)
rate.var <- round(shape/rate^2, 4)
if (is.null(lim)) {
lim = qgamma(0.99, shape = shape, rate = rate)
}
x.axis.ticks <- seq(0, lim, length.out = 100)
heterogeneity <- dgamma(x.axis.ticks,
shape = shape,
rate = rate)
plot(x.axis.ticks,
heterogeneity,
type = "l",
xlab = xlab_main[[func]]['xlab'],
ylab = "Density",
main = xlab_main[[func]]['main'])
mean.var.label <- paste("Mean:", rate.mean, " Var:", rate.var)
mtext(mean.var.label, side = 3)
return(rbind(x.axis.ticks, heterogeneity))
}
#' Pareto/NBD Plot Transaction Rate Heterogeneity
#'
#' Plots and returns the estimated gamma distribution of lambda (customers'
#' propensities to purchase).
#'
#' This returns the distribution of each customer's Poisson parameter, which
#' determines the level of their purchasing (using the Pareto/NBD assumption
#' that purchasing on the individual level can be modeled with a Poisson
#' distribution).
#'
#' @inheritParams pnbd.PlotRateHeterogeneity
#' @return Distribution of customers' propensities to purchase.
#' @examples
#' params <- c(0.55, 10.56, 0.61, 11.64)
#' pnbd.PlotTransactionRateHeterogeneity(params)
#' params <- c(3, 10.56, 0.61, 11.64)
#' pnbd.PlotTransactionRateHeterogeneity(params)
pnbd.PlotTransactionRateHeterogeneity <- function(params,
lim = NULL) {
pnbd.PlotRateHeterogeneity(params = params,
func = "pnbd.PlotTransactionRateHeterogeneity",
lim = lim)
}
#' Pareto/NBD Plot Dropout Rate Heterogeneity
#'
#' Plots and returns the estimated gamma distribution of mu (customers'
#' propensities to drop out).
#'
#' This returns the distribution of each customer's exponential parameter that
#' determines their lifetime (using the Pareto/NBD assumption that a customer's
#' lifetime can be modeled with an exponential distribution).
#'
#' @inheritParams pnbd.PlotRateHeterogeneity
#' @return Distribution of customers' propensities to drop out.
#' @examples
#' params <- c(0.55, 10.56, 0.61, 11.64)
#' pnbd.PlotDropoutRateHeterogeneity(params)
#' params <- c(0.55, 10.56, 3, 11.64)
#' pnbd.PlotDropoutRateHeterogeneity(params)
pnbd.PlotDropoutRateHeterogeneity <- function(params,
lim = NULL) {
pnbd.PlotRateHeterogeneity(params = params,
func = "pnbd.PlotDropoutRateHeterogeneity",
lim = lim)
}
|
/scratch/gouwar.j/cran-all/cranData/BTYD/R/pnbd.R
|
## Methods to model and forecast the amount that members are spending during
## transactions.
library(hypergeo)
library(lattice)
# Now trying Markdown + Roxygen (https://cran.r-project.org/web/packages/roxygen2/vignettes/markdown.html)
#' Define general parameters
#'
#' This is to ensure consistency across all spend functions.
#'
#' This function is only ever called by functions defined in the original BTYD
#' package, such as [`spend.LL`], [`spend.marginal.likelihood`] or
#' [`spend.expected.value`] so it returns directly the output that is expected
#' from those calling functions.
#'
#' @inheritParams spend.LL
#' @param func name of the function calling [`dc.InputCheck`].
#' @return That depends on `func`: 1. If `func` is `spend.marginal.likelihood`,
#' the marginal distribution of a customer's average transaction value (if m.x
#' or x has a length greater than 1, a vector of marginal likelihoods will be
#' returned). 2. If `func` is `spend.LL`, the log-likelihood of the
#' gamma-gamma model; if m.x or x has a length greater than 1, this is a
#' vector of log-likelihoods. 3. If `func` is `spend.expected.value`, the
#' expected transaction value for a customer conditional on their transaction
#' behavior during the calibration period. If m.x or x has a length greater
#' than one, then a vector of expected transaction values will be returned.
#' @seealso [`spend.LL`]
#' @seealso [`spend.marginal.likelihood`]
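#' @examples
#' # A minimal illustration (not run) of the three dispatch branches; the
#' # gamma-gamma parameter values are arbitrary placeholders.
#' \dontrun{
#' params <- c(6, 4, 16)
#' spend.generalParams(params, func = "spend.marginal.likelihood", m.x = 35, x = 3)
#' spend.generalParams(params, func = "spend.LL", m.x = 35, x = 3)
#' spend.generalParams(params, func = "spend.expected.value", m.x = 35, x = 3)
#' }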
#' @md
spend.generalParams <- function(params,
func,
m.x,
x) {
stopifnot(func %in% c('spend.marginal.likelihood',
'spend.LL',
'spend.expected.value'))
inputs <- try(dc.InputCheck(params = params,
printnames = c("p", "q", "gamma"),
func = func,
m.x = m.x,
x = x))
if (inherits(inputs, "try-error")) return(inputs)
x <- inputs$x
m.x <- inputs$m.x
max.length <- nrow(inputs)
if (any(x == 0) || any(m.x == 0)) {
warning("Customers with 0 transactions or 0 average spend in spend.marginal.likelihood")
}
p <- params[1]
q <- params[2]
gamma <- params[3]
if(func == 'spend.expected.value') {
M <- (gamma + m.x * x) * p/(p * x + q - 1)
return(M)
}
result <- rep(0, max.length)
## non.zero: a vector indicating which elements have neither x == 0 nor m.x == 0
non.zero <- which(x > 0 & m.x > 0)
common_piece <- q * log(gamma) +
(p * x[non.zero] - 1) * log(m.x[non.zero]) +
(p * x[non.zero]) * log(x[non.zero]) -
(p * x[non.zero] + q) * log(gamma + m.x[non.zero] * x[non.zero])
if(func == 'spend.marginal.likelihood') {
# return the marginal likelihood of a customer's average transaction value.
result[non.zero] <- exp(lgamma(p * x[non.zero] + q) -
lgamma(p * x[non.zero]) -
lgamma(q) +
common_piece)
} else if(func == 'spend.LL') {
result[non.zero] <- (-lbeta(p * x[non.zero], q) + common_piece)
} else {
return(NULL)
}
result
}
#' Gamma-gamma marginal likelihood
#'
#' Calculates the marginal likelihood of a customer's average transaction value.
#'
#' m.x and x may be vectors. The standard rules for vector operations apply - if
#' they are not of the same length, the shorter vector will be recycled (start
#' over at the first element) until it is as long as the longest vector. It is
#' advisable to keep vectors to the same length and to use single values for
#' parameters that are to be the same for all calculations. If one of these
#' parameters has a length greater than one, the output will be a vector of
#' probabilities.
#'
#' This function will issue a warning if any of m.x or x is 0, and will return a
#' marginal likelihood of 0 for those values.
#'
#' f(m.x | p, q, gamma, x).
#'
#' @param params a vector of gamma-gamma parameters: p, q, and gamma, in that
#' order. p is the shape parameter for each transaction. The scale parameter
#' for each transaction is distributed across customers according to a gamma
#' distribution with parameters q (shape) and gamma (scale).
#' @param m.x the customer's average observed transaction value in the
#' calibration period. May also be a vector of average observed transaction
#' values - see details.
#' @param x the number of transactions the customer made in the calibration
#' period. May also be a vector of frequencies - see details.
#' @return The marginal distribution of a customer's average transaction value.
#' If m.x or x has a length greater than 1, a vector of marginal likelihoods
#' will be returned.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Ka L. Lee. “RFM and CLV:
#' Using Iso-Value Curves for Customer Base Analysis.” Journal of Marketing
#' Research Vol.42, pp.415-430. November. 2005.
#' [Web.](http://www.brucehardie.com/papers/rfm_clv_2005-02-16.pdf)
#' @references See equation 3.
#' @examples
#' params <- c(6, 4, 16)
#'
#' # calculate the marginal distribution of the average transaction value
#' # of a customer who spent an average of $35 over 3 transactions.
#' spend.marginal.likelihood(params, m.x=35, x=3)
#'
#' # Several values can also be computed at once:
#' spend.marginal.likelihood(params, m.x=30:40, x=3)
#' spend.marginal.likelihood(params, m.x=35, x=1:10)
#' spend.marginal.likelihood(params, m.x=30:40, x=1:11)
#' @md
spend.marginal.likelihood <- function(params,
m.x,
x) {
spend.generalParams(params = params,
func = 'spend.marginal.likelihood',
m.x = m.x,
x = x)
}
#' Spend Log-Likelihood
#'
#' Calculates the log-likelihood of the gamma-gamma model for customer spending.
#'
#' m.x and x may be vectors. The standard rules for vector operations apply - if
#' they are not of the same length, the shorter vector will be recycled (start
#' over at the first element) until it is as long as the longest vector. It is
#' advisable to keep vectors to the same length and to use single values for
#' parameters that are to be the same for all calculations. If one of these
#' parameters has a length greater than one, the output will be a vector of
#' log-likelihoods.
#'
#' @inheritParams spend.marginal.likelihood
#' @return The log-likelihood of the gamma-gamma model. If m.x or x has a length
#' greater than 1, this is a vector of log-likelihoods.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Ka L. Lee. “RFM and CLV:
#' Using Iso-Value Curves for Customer Base Analysis.” Journal of Marketing
#' Research Vol.42, pp.415-430. November. 2005.
#' [Web.](http://www.brucehardie.com/papers/rfm_clv_2005-02-16.pdf)
#' @examples \dontrun{
#' data(cdnowSummary)
#' ave.spend <- cdnowSummary$m.x
#' tot.trans <- cdnowSummary$cbs[,"x"]
#' # params <- c(6.25, 3.74, 15.44) # in original documentation. check below:
#' params <- spend.EstimateParameters(m.x.vector = ave.spend, x.vector = tot.trans)
#' # get the total log-likelihood of the data and parameters
#' # above. There will be many warnings due to the zeroes that are
#' # included in the data. If you wish to avoid these warnings, use:
#'
#' # ave.spend <- ave.spend[which(tot.trans > 0)]
#' # tot.trans <- tot.trans[which(tot.trans > 0)]
#'
#' # Note that we used tot.trans to remove the zeroes from ave.spend.
#' # This is because we need the vectors to be the same length, and it
#' # is possible that your data include customers who made transactions
#' # worth zero dollars (in which case the vector lengths would differ
#' # if we used ave.spend to remove the zeroes from ave.spend).
#'
#' sum(spend.LL(params, ave.spend, tot.trans))
#'
#' # This log-likelihood may be different than mentioned in the
#' # referenced paper; in the paper, a slightly different function
#' # which relies on total spend (not average spend) is used.
#' }
#' @md
spend.LL <- function(params,
m.x,
x) {
spend.generalParams(params = params,
func = 'spend.LL',
m.x = m.x,
x = x)
}
#' Conditional expected transaction value
#'
#' Calculates the expected transaction value for a customer, conditional on the
#' number of transactions and average transaction value during the calibration
#' period.
#'
#' E(M | p, q, gamma, m.x, x).
#'
#' m.x and x may be vectors. The standard rules for vector operations apply - if
#' they are not of the same length, the shorter vector will be recycled (start
#' over at the first element) until it is as long as the longest vector. It is
#' advisable to keep vectors to the same length and to use single values for
#' parameters that are to be the same for all calculations. If one of these
#' parameters has a length greater than one, the output will be a vector of
#' probabilities.
#'
#' @inheritParams spend.marginal.likelihood
#' @return The expected transaction value for a customer conditional on their
#' transaction behavior during the calibration period. If m.x or x has a
#' length greater than one, then a vector of expected transaction values will
#' be returned.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Ka L. Lee. “RFM and CLV:
#' Using Iso-Value Curves for Customer Base Analysis.” Journal of Marketing
#' Research Vol.42, pp.415-430. November. 2005.
#' [Web.](http://www.brucehardie.com/papers/rfm_clv_2005-02-16.pdf)
#' @examples \dontrun{
#' data(cdnowSummary)
#' ave.spend <- cdnowSummary$m.x
#' tot.trans <- cdnowSummary$cbs[,"x"]
#' # params <- c(6, 4, 16); # in original documentation. rounded values of:
#' params <- spend.EstimateParameters(m.x.vector = ave.spend, x.vector = tot.trans);
#' # calculate the expected transaction value of a customer
#' # who spent an average of $35 over 3 transactions.
#' spend.expected.value(params, m.x=35, x=3)
#'
#' # m.x and x may be vectors:
#' spend.expected.value(params, m.x=30:40, x=3)
#' spend.expected.value(params, m.x=35, x=1:10)
#' spend.expected.value(params, m.x=30:40, x=1:11)
#' }
#' @md
spend.expected.value <- function(params,
m.x,
x) {
spend.generalParams(params = params,
func = 'spend.expected.value',
m.x = m.x,
x = x)
}
#' Spend Parameter Estimation
#'
#' Estimates parameters for the gamma-gamma spend model.
#'
#' The best-fitting parameters are determined using the spend.LL function. The
#' sum of the log-likelihood for each customer (for a set of parameters) is
#' maximized in order to estimate parameters.
#'
#' A set of starting parameters may be provided for this method. If none is
#' provided, (1,1,1) is used as a default. It may be necessary
#' to run the estimation from multiple starting points to ensure that it
#' converges. To compare the log-likelihoods of different parameters, use
#' \link{spend.LL}.
#'
#' The lower bound on the parameters to be estimated is always zero, since
#' gamma-gamma parameters cannot be negative. The upper bound can be set with
#' the max.param.value parameter.
#'
#' @param m.x.vector a vector with each customer's average observed transaction
#' value in the calibration period.
#' @param x.vector a vector with the number of transactions each customer made
#' in the calibration period. Must correspond to m.x.vector in terms of
#' ordering of customers and length of the vector.
#' @param par.start initial vector of gamma-gamma parameters: p, q, and gamma,
#' in that order. p is the shape parameter for each transaction. The scale
#' parameter for each transaction is distributed across customers according to
#' a gamma distribution with parameters q (shape) and gamma (scale).
#' @param max.param.value the upper bound on parameters.
#' @return Vector of estimated parameters.
#' @examples \dontrun{
#' data(cdnowSummary)
#' ave.spend <- cdnowSummary$m.x
#' tot.trans <- cdnowSummary$cbs[,"x"]
#'
#' # There will be many warnings due to the zeroes that are
#' # included in the data above. To avoid them, use the following:
#' # (see example for spend.LL)
#'
#' ave.spend <- ave.spend[which(tot.trans > 0)]
#' tot.trans <- tot.trans[which(tot.trans > 0)]
#'
#' # We will let the spend function use default starting parameters
#' spend.EstimateParameters(ave.spend, tot.trans)
#' }
spend.EstimateParameters <- function(m.x.vector,
x.vector,
par.start = c(1, 1, 1),
max.param.value = 10000) {
if (any(m.x.vector < 0) || !is.numeric(m.x.vector))
stop("m.x must be numeric and may not contain negative numbers.")
if (any(x.vector < 0) || !is.numeric(x.vector))
stop("x must be numeric and may not contain negative numbers.")
if (length(m.x.vector) != length(x.vector))
stop("m.x.vector and x.vector must be the same length.")
if (any(x.vector == 0) || any(m.x.vector == 0))
warning("Customers with 0 transactions or 0 average spend in spend.LL")
spend.eLL <- function(params,
m.x.vector,
x.vector,
max.param.value) {
params <- exp(params)
params[params > max.param.value] <- max.param.value
return(-1 * sum(spend.LL(params = params,
m.x = m.x.vector,
x = x.vector)))
}
logparams <- log(par.start)
results <- optim(logparams,
spend.eLL,
m.x.vector = m.x.vector,
x.vector = x.vector,
max.param.value = max.param.value,
method = "L-BFGS-B")
estimated.params <- exp(results$par)
estimated.params[estimated.params > max.param.value] <- max.param.value
return(estimated.params)
}
#' Plot Actual vs. Expected Average Transaction Value
#'
#' Plots the actual and expected densities of average transaction values, and
#' returns a vector with each customer's average transaction value probability.
#'
#' @inheritParams spend.EstimateParameters
#' @param params a vector of gamma-gamma parameters: p, q, and gamma, in that
#' order. p is the shape parameter for each transaction. The scale parameter
#' for each transaction is distributed across customers according to a gamma
#' distribution with parameters q (shape) and gamma (scale).
#' @param xlab descriptive label for the x axis.
#' @param ylab descriptive label for the y axis.
#' @param title title placed on the top-center of the plot.
#' @return a vector with the probability of each customer's average transaction
#' value.
#' @seealso [`spend.marginal.likelihood`]
#' @examples \dontrun{
#' data(cdnowSummary)
#' ave.spend <- cdnowSummary$m.x
#' tot.trans <- cdnowSummary$cbs[,"x"]
#' # params <- c(6.25, 3.74, 15.44) # in original documentation. check below:
#' params <- spend.EstimateParameters(m.x.vector = ave.spend, x.vector = tot.trans)
#'
#' # Plot the actual and expected average transaction value across customers.
#' f.m.x <- spend.plot.average.transaction.value(params, ave.spend, tot.trans)
#' }
#' @md
spend.plot.average.transaction.value <- function(params,
m.x.vector,
x.vector,
xlab = "Average Transaction Value",
ylab = "Marginal Distribution of Average Transaction Value",
title = "Actual vs. Expected Average Transaction Value Across Customers") {
if (any(m.x.vector < 0) || !is.numeric(m.x.vector))
stop("m.x must be numeric and may not contain negative numbers.")
if (any(x.vector < 0) || !is.numeric(x.vector))
stop("x must be numeric and may not contain negative numbers.")
if (length(m.x.vector) != length(x.vector))
stop("m.x.vector and x.vector must be the same length.")
if (any(x.vector == 0) || any(m.x.vector == 0)) {
warning(paste("There are customers with 0 transactions or 0 average spend.",
"spend.plot.average.transaction.value removed them before plotting."))
}
# remove any customers with zero repeat transactions
ave.spending <- m.x.vector[which(x.vector > 0)]
tot.transactions <- x.vector[which(x.vector > 0)]
f.m.x <- spend.marginal.likelihood(params,
ave.spending,
tot.transactions)
plot(ave.spending,
y = f.m.x,
pch = 16,
type = "n",
xlab = xlab,
ylab = ylab,
main = title)
lines(density(ave.spending,
bw = "nrd",
adjust = 0.6),
col = 1,
lty = 1)
lines(smooth.spline(ave.spending,
y = f.m.x,
w = tot.transactions,
df = 15),
col = 2,
lty = 2)
legend("topright",
legend = c("Actual", "Model"),
col = 1:2,
lty = 1:2,
lwd = 1)
return(f.m.x)
}
|
/scratch/gouwar.j/cran-all/cranData/BTYD/R/spend.R
|
## Authors: Lukasz Dziurzynski, Edward Wadsworth
data(donationsSummary)
## Get the calibration period recency-frequency matrix from the donation data:
rf.matrix <- donationsSummary$rf.matrix
rf.matrix
## Estimate parameters for the BG/BB model from the recency-frequency matrix:
par.start <- c(0.5, 1, 0.5, 1)
params <- bgbb.EstimateParameters(rf.matrix, par.start)
params
## Check log-likelihood of the params:
bgbb.rf.matrix.LL(params, rf.matrix)
## Plot the comparison of actual and expected calibration period frequencies:
bgbb.PlotFrequencyInCalibration(params, rf.matrix, censor=7, plotZero=TRUE)
n.star <- 5 # Number of transaction opportunities in the holdout period
x.star <- donationsSummary$x.star # Transactions made by each calibration period bin in the holdout period
## Plot the comparison of actual and conditional expected holdout period frequencies,
## binned according to calibration period frequencies:
bgbb.PlotFreqVsConditionalExpectedFrequency(params, n.star, rf.matrix, x.star)
## Plot the comparison of actual and conditional expected holdout period frequencies,
## binned according to calibration period recencies:
bgbb.PlotRecVsConditionalExpectedFrequency(params, n.star, rf.matrix, x.star)
inc.annual.trans <- donationsSummary$annual.trans # incremental annual transactions
cum.annual.trans <- cumsum(donationsSummary$annual.trans) # cumulative annual transactions
## set appropriate x-axis tickmarks:
x.tickmarks.yrs.all <- c( "'96","'97","'98","'99","'00","'01","'02","'03","'04","'05","'06" )
## Plot the comparison of actual and expected total cumulative transactions across
## both the calibration and holdout periods:
bgbb.PlotTrackingCum(params, rf.matrix, cum.annual.trans, xticklab=x.tickmarks.yrs.all)
## Plot the comparison of actual and expected total incremental transactions across
## both the calibration and holdout periods:
bgbb.PlotTrackingInc(params, rf.matrix, inc.annual.trans, xticklab=x.tickmarks.yrs.all)
|
/scratch/gouwar.j/cran-all/cranData/BTYD/demo/bgbb_donations.R
|
## Authors: Daniel McCarthy, Lukasz Dziurzynski, Edward Wadsworth
data(cdnowSummary)
## Get the calibration period customer-by-sufficient-statistic matrix from the cdnow data:
cbs <- cdnowSummary$cbs
## Estimate parameters for the BG/NBD model from the CBS:
par.start <- c(1,3,1,3)
params <- bgnbd.EstimateParameters(cbs, par.start)
params
## Check log-likelihood of the params:
bgnbd.cbs.LL(params, cbs)
## Plot the comparison of actual and expected calibration period frequencies:
bgnbd.PlotFrequencyInCalibration(params, cbs, censor=7, plotZero=TRUE)
T.star <- 39 # Length of holdout period
x.star <- cbs[,"x.star"] # Transactions made by each customer in the holdout period
## Plot the comparison of actual and conditional expected holdout period frequencies,
## binned according to calibration period frequencies:
bgnbd.PlotFreqVsConditionalExpectedFrequency(params, T.star, cbs, x.star, censor=7)
## Plot the comparison of actual and conditional expected holdout period frequencies,
## binned according to calibration period recencies:
bgnbd.PlotRecVsConditionalExpectedFrequency(params, cbs, T.star, x.star)
|
/scratch/gouwar.j/cran-all/cranData/BTYD/demo/bgnbd_cdnow.R
|
## Authors: Lukasz Dziurzynski, Daniel McCarthy, Edward Wadsworth
data(cdnowSummary)
## Get the calibration period customer-by-sufficient-statistic matrix from the cdnow data:
cbs <- cdnowSummary$cbs
## Estimate parameters for the Pareto/NBD model from the CBS:
par.start <- c(0.5, 1, 0.5, 1)
params <- pnbd.EstimateParameters(cbs,
par.start,
hardie = TRUE)
params
## Check log-likelihood of the params:
pnbd.cbs.LL(params,
cbs,
hardie = TRUE)
## Plot the comparison of actual and expected calibration period frequencies:
pnbd.PlotFrequencyInCalibration(params = params,
cal.cbs = cbs,
censor = 7,
plotZero = TRUE,
hardie = TRUE)
T.star <- 39 # Length of holdout period
x.star <- cbs[,"x.star"] # Transactions made by each customer in the holdout period
## Plot the comparison of actual and conditional expected holdout period frequencies,
## binned according to calibration period frequencies:
pnbd.PlotFreqVsConditionalExpectedFrequency(params = params,
T.star = T.star,
cal.cbs = cbs,
x.star = x.star,
censor = 7,
hardie = TRUE)
## Plot the comparison of actual and conditional expected holdout period frequencies,
## binned according to calibration period recencies:
pnbd.PlotRecVsConditionalExpectedFrequency(params,
cbs,
T.star,
x.star,
hardie = TRUE)
|
/scratch/gouwar.j/cran-all/cranData/BTYD/demo/pnbd_cdnow.R
|
# First, load the appropriate data. For spend models, we need
# to know how many transactions customers made and how much
# they spent on average.
data(cdnowSummary)
ave.spend <- cdnowSummary$m.x
tot.trans <- cdnowSummary$cbs[,"x"]
# Now we can estimate model parameters. spend.LL, which is used
# by spend.EstimateParameters, will give you warning if you pass
# it a customer with zero transactions. To avoid this, we can
# remove customers with zero transactions (remember to remove them
# from both vectors, as spend.LL requires the average spend and frequency
# vectors to be of equal length)
ave.spend <- ave.spend[which(tot.trans > 0)]
tot.trans <- tot.trans[which(tot.trans > 0)]
# We will let the spend function use default starting parameters.
params <- spend.EstimateParameters(ave.spend, tot.trans)
params
# Now we can make estimates about individual customers.
# The following will calculate the expected transaction
# value of a customer who spent an average of $40 over 2 transactions.
spend.expected.value(params, m.x=40, x=2)
# This would also work if we wanted to compare a vector of values:
spend.expected.value(params, m.x=30:40, x=2)
# Finally, we can plot the actual and expected average
# transaction value across customers.
spend.plot.average.transaction.value(params, ave.spend, tot.trans)
|
/scratch/gouwar.j/cran-all/cranData/BTYD/demo/spend_cdnow.R
|
## ----include=FALSE------------------------------------------------------------
library(knitr)
opts_chunk$set(
concordance=TRUE
)
## ----fig.path="", label="pnbdCalibrationFit", results="hide", echo=FALSE, include=FALSE----
library(knitr)
opts_chunk$set(comment="#")
library(BTYD)
# Set the hardie parameter value here, apply it everywhere
allHardie <- TRUE
data(cdnowSummary)
est.params <- cdnowSummary$est.params
cal.cbs <- cdnowSummary$cbs
pdf(file = 'pnbdCalibrationFit.pdf')
cal.fit <- pnbd.PlotFrequencyInCalibration(params = est.params,
cal.cbs = cal.cbs,
censor = 7,
hardie = allHardie)
dev.off()
## ----message=FALSE, tidy=FALSE------------------------------------------------
cdnowElog <- system.file("data/cdnowElog.csv", package = "BTYD")
elog <- dc.ReadLines(cdnowElog, cust.idx = 2,
date.idx = 3, sales.idx = 5)
elog[1:3,]
## ----message=FALSE------------------------------------------------------------
elog$date <- as.Date(elog$date, "%Y%m%d")
elog[1:3,]
## ----results="hide", message=FALSE--------------------------------------------
elog <- dc.MergeTransactionsOnSameDate(elog)
## ----message=FALSE------------------------------------------------------------
end.of.cal.period <- as.Date("1997-09-30")
elog.cal <- elog[which(elog$date <= end.of.cal.period), ]
## ----results="hide", message=FALSE--------------------------------------------
split.data <- dc.SplitUpElogForRepeatTrans(elog.cal)
clean.elog <- split.data$repeat.trans.elog
## ----message=FALSE------------------------------------------------------------
freq.cbt <- dc.CreateFreqCBT(clean.elog)
freq.cbt[1:3,1:5]
## ----results="hide", message=FALSE--------------------------------------------
tot.cbt <- dc.CreateFreqCBT(elog)
cal.cbt <- dc.MergeCustomers(tot.cbt, freq.cbt)
## ----tidy=FALSE, results="hide", message=FALSE--------------------------------
birth.periods <- split.data$cust.data$birth.per
last.dates <- split.data$cust.data$last.date
cal.cbs.dates <- data.frame(birth.periods, last.dates,
end.of.cal.period)
cal.cbs <- dc.BuildCBSFromCBTAndDates(cal.cbt, cal.cbs.dates,
per="week")
## ----warning=FALSE------------------------------------------------------------
params <- pnbd.EstimateParameters(cal.cbs = cal.cbs,
hardie = allHardie)
round(params, digits = 3)
LL <- pnbd.cbs.LL(params = params,
cal.cbs = cal.cbs,
hardie = allHardie)
LL
## -----------------------------------------------------------------------------
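# Refine the fit: re-run the optimizer twice, each time starting from the
# previous estimates, and record the log-likelihood of each pass to check convergence.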
p.matrix <- c(params, LL)
for (i in 1:2){
params <- pnbd.EstimateParameters(cal.cbs = cal.cbs,
par.start = params,
hardie = allHardie)
LL <- pnbd.cbs.LL(params = params,
cal.cbs = cal.cbs,
hardie = allHardie)
p.matrix.row <- c(params, LL)
p.matrix <- rbind(p.matrix, p.matrix.row)
}
colnames(p.matrix) <- c("r", "alpha", "s", "beta", "LL")
rownames(p.matrix) <- 1:3
round(p.matrix, digits = 3)
## ----fig.path="", label="pnbdTransactionHeterogeneity", results="hide", include=FALSE----
pdf(file = 'pnbdTransactionHeterogeneity.pdf')
pnbd.PlotTransactionRateHeterogeneity(params = params)
dev.off()
## ----fig.path="", label="pnbdDropoutHeterogeneity", results="hide", include=FALSE----
pdf(file = 'pnbdDropoutHeterogeneity.pdf')
pnbd.PlotDropoutRateHeterogeneity(params = params)
dev.off()
## -----------------------------------------------------------------------------
pnbd.Expectation(params = params, t = 52)
## ----tidy=FALSE---------------------------------------------------------------
cal.cbs["1516",]
x <- cal.cbs["1516", "x"]
t.x <- cal.cbs["1516", "t.x"]
T.cal <- cal.cbs["1516", "T.cal"]
pnbd.ConditionalExpectedTransactions(params,
T.star = 52,
x,
t.x,
T.cal,
hardie = allHardie)
pnbd.PAlive(params,
x,
t.x,
T.cal,
hardie = allHardie)
## ----tidy=FALSE---------------------------------------------------------------
# avoid overflow in LaTeX code block here:
cet <- "pnbd.ConditionalExpectedTransactions"
for (i in seq(10, 25, 5)){
cond.expectation <- match.fun(cet)(params,
T.star = 52,
x = i,
t.x = 20,
T.cal = 39,
hardie = allHardie)
cat ("x:",i,"\t Expectation:",cond.expectation, fill = TRUE)
}
## ----results="hide", eval=FALSE-----------------------------------------------
# pnbd.PlotFrequencyInCalibration(params = params,
# cal.cbs = cal.cbs,
# censor = 7,
# hardie = allHardie)
## ----message=FALSE------------------------------------------------------------
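# Rebuild the repeat-transaction event log over the full period, then compute each
# customer's holdout-period transactions (x.star) as total repeat transactions
# minus calibration-period transactions.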
elog <- dc.SplitUpElogForRepeatTrans(elog)$repeat.trans.elog
x.star <- rep(0, nrow(cal.cbs))
cal.cbs <- cbind(cal.cbs, x.star)
elog.custs <- elog$cust
for (i in 1:nrow(cal.cbs)){
current.cust <- rownames(cal.cbs)[i]
tot.cust.trans <- length(which(elog.custs == current.cust))
cal.trans <- cal.cbs[i, "x"]
cal.cbs[i, "x.star"] <- tot.cust.trans - cal.trans
}
round(cal.cbs[1:3,], digits = 3)
## ----fig.path="", label="pnbdCondExpComp", tidy=FALSE, echo=TRUE, size="small", fig.keep='none'----
T.star <- 39 # length of the holdout period
censor <- 7 # This censor serves the same purpose described above
x.star <- cal.cbs[,"x.star"]
pdf(file = 'pnbdCondExpComp.pdf')
comp <- pnbd.PlotFreqVsConditionalExpectedFrequency(params,
T.star,
cal.cbs,
x.star,
censor,
hardie = allHardie)
dev.off()
rownames(comp) <- c("act", "exp", "bin")
round(comp, digits = 3)
## -----------------------------------------------------------------------------
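# Aggregate the repeat-transaction event log into daily totals, then roll
# them up into 78 weekly totals for the tracking plots.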
tot.cbt <- dc.CreateFreqCBT(elog)
d.track.data <- rep(0, 7 * 78)
origin <- as.Date("1997-01-01")
for (i in colnames(tot.cbt)){
date.index <- difftime(as.Date(i), origin) + 1
d.track.data[date.index] <- sum(tot.cbt[,i])
}
w.track.data <- rep(0, 78)
for (j in 1:78){
w.track.data[j] <- sum(d.track.data[(j*7-6):(j*7)])
}
## ----fig.path="", label="pnbdTrackingInc", tidy=FALSE, echo=TRUE, fig.keep='none'----
T.cal <- cal.cbs[,"T.cal"]
T.tot <- 78
n.periods.final <- 78
pdf(file = 'pnbdTrackingInc.pdf')
inc.tracking <- pnbd.PlotTrackingInc(params = params,
T.cal = T.cal,
T.tot = T.tot,
actual.inc.tracking.data = w.track.data,
n.periods.final = n.periods.final)
dev.off()
round(inc.tracking[,20:25], digits = 3)
## ----fig.path="", label="pnbdTrackingCum", tidy=FALSE, echo=TRUE, fig.keep='none'----
cum.tracking.data <- cumsum(w.track.data)
pdf(file = 'pnbdTrackingCum.pdf')
cum.tracking <- pnbd.PlotTrackingCum(params = params,
T.cal = T.cal,
T.tot = T.tot,
actual.cu.tracking.data = cum.tracking.data,
n.periods.final = n.periods.final)
dev.off()
round(cum.tracking[,20:25], digits = 3)
## ----fig.path="", label="bgnbdCalibrationFit", results="hide", echo=FALSE, include=FALSE----
data(cdnowSummary);
est.params <- c(0.243, 4.414, 0.793, 2.426);
cal.cbs <- cdnowSummary$cbs;
pdf(file = 'bgnbdCalibrationFit.pdf')
cal.fit <- bgnbd.PlotFrequencyInCalibration(est.params, cal.cbs, 7)
dev.off()
## ----message=FALSE, tidy=FALSE------------------------------------------------
cdnowElog <- system.file("data/cdnowElog.csv", package = "BTYD")
elog <- dc.ReadLines(cdnowElog, cust.idx = 2,
date.idx = 3, sales.idx = 5)
elog[1:3,]
## ----message=FALSE------------------------------------------------------------
elog$date <- as.Date(elog$date, "%Y%m%d");
elog[1:3,]
## ----results="hide", message=FALSE--------------------------------------------
elog <- dc.MergeTransactionsOnSameDate(elog);
## ----message=FALSE------------------------------------------------------------
end.of.cal.period <- as.Date("1997-09-30")
elog.cal <- elog[which(elog$date <= end.of.cal.period), ]
## ----results="hide", message=FALSE--------------------------------------------
split.data <- dc.SplitUpElogForRepeatTrans(elog.cal);
clean.elog <- split.data$repeat.trans.elog;
## ----message=FALSE------------------------------------------------------------
freq.cbt <- dc.CreateFreqCBT(clean.elog);
freq.cbt[1:3,1:5]
## ----results="hide", message=FALSE--------------------------------------------
tot.cbt <- dc.CreateFreqCBT(elog)
cal.cbt <- dc.MergeCustomers(tot.cbt, freq.cbt)
## ----tidy=FALSE, results="hide", message=FALSE--------------------------------
birth.periods <- split.data$cust.data$birth.per
last.dates <- split.data$cust.data$last.date
cal.cbs.dates <- data.frame(birth.periods, last.dates,
end.of.cal.period)
cal.cbs <- dc.BuildCBSFromCBTAndDates(cal.cbt, cal.cbs.dates,
per="week")
## -----------------------------------------------------------------------------
params <- bgnbd.EstimateParameters(cal.cbs);
params
LL <- bgnbd.cbs.LL(params, cal.cbs);
LL
## -----------------------------------------------------------------------------
p.matrix <- c(params, LL);
for (i in 1:2){
params <- bgnbd.EstimateParameters(cal.cbs, params);
LL <- bgnbd.cbs.LL(params, cal.cbs);
p.matrix.row <- c(params, LL);
p.matrix <- rbind(p.matrix, p.matrix.row);
}
colnames(p.matrix) <- c("r", "alpha", "a", "b", "LL");
rownames(p.matrix) <- 1:3;
p.matrix;
## ----fig.path="", label="bgnbdTransactionHeterogeneity", results="hide", include=FALSE----
pdf(file = 'bgnbdTransactionHeterogeneity.pdf')
bgnbd.PlotTransactionRateHeterogeneity(params)
dev.off()
## ----fig.path="", label="bgnbdDropoutHeterogeneity", results="hide", include=FALSE----
pdf(file = 'bgnbdDropoutHeterogeneity.pdf')
bgnbd.PlotDropoutRateHeterogeneity(params)
dev.off()
## -----------------------------------------------------------------------------
bgnbd.Expectation(params, t=52);
## ----tidy=FALSE---------------------------------------------------------------
cal.cbs["1516",]
x <- cal.cbs["1516", "x"]
t.x <- cal.cbs["1516", "t.x"]
T.cal <- cal.cbs["1516", "T.cal"]
bgnbd.ConditionalExpectedTransactions(params, T.star = 52,
x, t.x, T.cal)
bgnbd.PAlive(params, x, t.x, T.cal)
## ----tidy=FALSE---------------------------------------------------------------
for (i in seq(10, 25, 5)){
cond.expectation <- bgnbd.ConditionalExpectedTransactions(
params, T.star = 52, x = i,
t.x = 20, T.cal = 39)
cat ("x:",i,"\t Expectation:",cond.expectation, fill = TRUE)
}
## ----results="hide", eval=FALSE-----------------------------------------------
# bgnbd.PlotFrequencyInCalibration(params, cal.cbs, 7)
## ----message=FALSE------------------------------------------------------------
elog <- dc.SplitUpElogForRepeatTrans(elog)$repeat.trans.elog;
x.star <- rep(0, nrow(cal.cbs));
cal.cbs <- cbind(cal.cbs, x.star);
elog.custs <- elog$cust;
for (i in 1:nrow(cal.cbs)){
current.cust <- rownames(cal.cbs)[i]
tot.cust.trans <- length(which(elog.custs == current.cust))
cal.trans <- cal.cbs[i, "x"]
cal.cbs[i, "x.star"] <- tot.cust.trans - cal.trans
}
cal.cbs[1:3,]
## ----fig.path="", label="bgnbdCondExpComp", tidy=FALSE, echo=TRUE, size="small", fig.keep='none'----
T.star <- 39 # length of the holdout period
censor <- 7 # This censor serves the same purpose described above
x.star <- cal.cbs[,"x.star"]
pdf(file = 'bgnbdCondExpComp.pdf')
comp <- bgnbd.PlotFreqVsConditionalExpectedFrequency(params, T.star,
cal.cbs, x.star, censor)
dev.off()
rownames(comp) <- c("act", "exp", "bin")
comp
## -----------------------------------------------------------------------------
tot.cbt <- dc.CreateFreqCBT(elog)
d.track.data <- rep(0, 7 * 78)
origin <- as.Date("1997-01-01")
for (i in colnames(tot.cbt)){
date.index <- difftime(as.Date(i), origin) + 1;
d.track.data[date.index] <- sum(tot.cbt[,i]);
}
w.track.data <- rep(0, 78)
for (j in 1:78){
w.track.data[j] <- sum(d.track.data[(j*7-6):(j*7)])
}
## ----fig.path="", label="bgnbdTrackingInc", tidy=FALSE, echo=TRUE, fig.keep='none'----
T.cal <- cal.cbs[,"T.cal"]
T.tot <- 78
n.periods.final <- 78
pdf(file = 'bgnbdTrackingInc.pdf')
inc.tracking <- bgnbd.PlotTrackingInc(params,
T.cal,
T.tot,
w.track.data,
n.periods.final,
allHardie)
dev.off()
inc.tracking[,20:25]
## ----fig.path="", label="bgnbdTrackingCum", tidy=FALSE, echo=TRUE, fig.keep='none'----
cum.tracking.data <- cumsum(w.track.data)
pdf(file = 'bgnbdTrackingCum.pdf')
cum.tracking <- bgnbd.PlotTrackingCum(params,
T.cal,
T.tot,
cum.tracking.data,
n.periods.final,
allHardie)
dev.off()
cum.tracking[,20:25]
## ----fig.path="", label="bgbbCalibrationFit", results="hide", echo=FALSE, include=FALSE----
data(donationsSummary)
rf.matrix <- donationsSummary$rf.matrix
params <- bgbb.EstimateParameters(rf.matrix)
pdf(file = 'bgbbCalibrationFit.pdf')
cal.fit <- bgbb.PlotFrequencyInCalibration(params, rf.matrix, 6)
dev.off()
## ----message=FALSE, tidy=FALSE------------------------------------------------
simElog <- system.file("data/discreteSimElog.csv",
package = "BTYD")
elog <- dc.ReadLines(simElog, cust.idx = 1, date.idx = 2)
elog[1:3,]
elog$date <- as.Date(elog$date, "%Y-%m-%d")
max(elog$date);
min(elog$date);
# let's make the calibration period end somewhere in-between
T.cal <- as.Date("1977-01-01")
simData <- dc.ElogToCbsCbt(elog, per="year", T.cal)
cal.cbs <- simData$cal$cbs
freq<- cal.cbs[,"x"]
rec <- cal.cbs[,"t.x"]
trans.opp <- 7 # transaction opportunities
cal.rf.matrix <- dc.MakeRFmatrixCal(freq, rec, trans.opp)
cal.rf.matrix[1:5,]
## -----------------------------------------------------------------------------
data(donationsSummary);
rf.matrix <- donationsSummary$rf.matrix
params <- bgbb.EstimateParameters(rf.matrix);
LL <- bgbb.rf.matrix.LL(params, rf.matrix);
p.matrix <- c(params, LL);
for (i in 1:2){
params <- bgbb.EstimateParameters(rf.matrix, params);
LL <- bgbb.rf.matrix.LL(params, rf.matrix);
p.matrix.row <- c(params, LL);
p.matrix <- rbind(p.matrix, p.matrix.row);
}
colnames(p.matrix) <- c("alpha", "beta", "gamma", "delta", "LL");
rownames(p.matrix) <- 1:3;
p.matrix;
## ----fig.path="", label="bgbbTransactionHeterogeneity", results="hide", include=FALSE----
pdf(file = 'bgbbTransactionHeterogeneity.pdf')
bgbb.PlotTransactionRateHeterogeneity(params)
dev.off()
## ----fig.path="", label="bgbbDropoutHeterogeneity", results="hide", include=FALSE----
pdf(file = 'bgbbDropoutHeterogeneity.pdf')
bgbb.PlotDropoutRateHeterogeneity(params)
dev.off()
## -----------------------------------------------------------------------------
bgbb.Expectation(params, n=10);
## ----tidy=FALSE---------------------------------------------------------------
# customer A
n.cal = 6
n.star = 10
x = 0
t.x = 0
bgbb.ConditionalExpectedTransactions(params, n.cal,
n.star, x, t.x)
# customer B
x = 4
t.x = 5
bgbb.ConditionalExpectedTransactions(params, n.cal,
n.star, x, t.x)
## ----results="hide", eval=FALSE-----------------------------------------------
# bgbb.PlotFrequencyInCalibration(params, rf.matrix)
## -----------------------------------------------------------------------------
holdout.cbs <- simData$holdout$cbs
x.star <- holdout.cbs[,"x.star"]
## ----fig.path="", label="bgbbCondExpComp", tidy=FALSE, echo=TRUE, size="small", fig.keep='none'----
n.star <- 5 # length of the holdout period
x.star <- donationsSummary$x.star
pdf(file = 'bgbbCondExpComp.pdf')
comp <- bgbb.PlotFreqVsConditionalExpectedFrequency(params, n.star,
rf.matrix, x.star)
dev.off()
rownames(comp) <- c("act", "exp", "bin")
comp
## ----fig.path="", label="bgbbCondExpCompRec", tidy=FALSE, echo=TRUE, size="small", fig.keep='none'----
pdf(file = 'bgbbCondExpCompRec.pdf')
comp <- bgbb.PlotRecVsConditionalExpectedFrequency(params, n.star,
rf.matrix, x.star)
dev.off()
rownames(comp) <- c("act", "exp", "bin")
comp
## ----fig.path="", label="bgbbTrackingInc", tidy=FALSE, echo=TRUE, fig.keep='none'----
inc.track.data <- donationsSummary$annual.trans
n.cal <- 6
xtickmarks <- 1996:2006
pdf(file = 'bgbbTrackingInc.pdf')
inc.tracking <- bgbb.PlotTrackingInc(params, rf.matrix,
inc.track.data,
xticklab = xtickmarks)
dev.off()
rownames(inc.tracking) <- c("act", "exp")
inc.tracking
## ----fig.path="", label="bgbbTrackingCum", tidy=FALSE, echo=TRUE, size="small", fig.keep='none'----
cum.track.data <- cumsum(inc.track.data)
pdf(file = 'bgbbTrackingCum.pdf')
cum.tracking <- bgbb.PlotTrackingCum(params, rf.matrix, cum.track.data,
xticklab = xtickmarks)
dev.off()
rownames(cum.tracking) <- c("act", "exp")
cum.tracking
|
/scratch/gouwar.j/cran-all/cranData/BTYD/inst/doc/BTYD-walkthrough.R
|
---
title: "BTYD BG/NBD likelihood rework"
author: "Gabi Huiber"
date: "September 16, 2019"
output: html_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, eval = FALSE)
```
# Problem statement
See `fix_pnbd.Rmd` (or, for a more pleasant reading experience, `fix_pnbd.html`) for a summary of how I changed the original `R/pnbd.R` script in the BTYD package for the purposes of the BTYD3 package. The BG/NBD functions, defined in `R/bgnbd.R`, suffer from the same kind of code duplication as the original `R/pnbd.R`, so some tidying up is in order. But, more substantively, the BG/NBD implementation could also benefit from the three improvements listed below:
* The original BG/NBD implementation can fail like its Pareto/NBD counterpart in the presence of large values of x -- meaning, for customers with rich purchase histories. The fix is explained [in this note](http://www.brucehardie.com/notes/027/bgnbd_num_error.pdf) but the definition of `bgnbd.LL()` in the original `R/bgnbd.R` script, though written a year after this note was published, does not implement this fix. This one does.
* Like the BTYD3 version of `R/pnbd.R`, BG/NBD now also uses the newer `optimx::optimx()` instead of `base::optim()`: it offers more optimization methods and richer output, and switching is also the guidance of the author of `optim`, John C. Nash, [here](http://www.ibm.com/developerworks/library/ba-optimR-john-nash/).
* Since BG/NBD also requires the Gaussian hypergeometric function for computing expectations of transaction counts, it now also accepts a `hardie` TRUE/FALSE flag where appropriate, so the user can choose between `h2f1` and the `hypergeo` package. The trade-offs are explained in `fix_pnbd.Rmd`. NB: though the original `R/bgnbd.R` calls `library(hypergeo)` on line 3, it never uses it; it uses `h2f1` everywhere. This fixes that.
# What do the original implementations look like?
## bgnbd.LL (lines 35-75 of [bgnbd.R](https://github.com/cran/BTYD/blob/master/R/bgnbd.R))
```{r bgnbd.LL}
bgnbd.LL <- function(params, x, t.x, T.cal) {
beta.ratio = function(a, b, x, y) {
exp(lgamma(a) + lgamma(b) - lgamma(a + b) - lgamma(x) - lgamma(y) + lgamma(x +
y))
}
max.length <- max(length(x), length(t.x), length(T.cal))
if (max.length%%length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length%%length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length%%length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.LL")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
r = params[1]
alpha = params[2]
a = params[3]
b = params[4]
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
A = r * log(alpha) + lgamma(r + x) - lgamma(r) - (r + x) * log(alpha + t.x)
B = beta.ratio(a, b + x, a, b) * ((alpha + t.x)/(alpha + T.cal))^(r + x) + as.numeric((x >
0)) * beta.ratio(a + 1, b + x - 1, a, b)
LL = sum(A + log(B))
return(LL)
}
```
## bgnbd.PAlive (lines 255-285 of [bgnbd.R](https://github.com/cran/BTYD/blob/master/R/bgnbd.R))
```{r bgnbd.PAlive}
bgnbd.PAlive <- function(params, x, t.x, T.cal) {
max.length <- max(length(x), length(t.x), length(T.cal))
if (max.length%%length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length%%length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length%%length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.PAlive")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r = params[1]
alpha = params[2]
a = params[3]
b = params[4]
term1 = (a/(b + x - 1)) * ((alpha + T.cal)/(alpha + t.x))^(r + x)
return(1/(1 + as.numeric(x > 0) * term1))
}
```
They duplicate their input checks. We could move these checks into a stand-alone function, and make that function more useful by noticing that we don't care about the names of the vectors passed after `params`, only that they have acceptable lengths and no negative elements. We also don't care how many of them there are, as long as they all meet the same requirements. Here's one way:
## Universal input checks
```{r inputCheck}
bgnbd.InputCheck <- function(params, myfun, ...) {
inputs <- as.list(environment())
vectors <- list(...)
vectors <- vectors[!sapply(vectors, is.null)]
dc.check.model.params(c("r", "alpha", "a", "b"), inputs$params, inputs$myfun)
max.length <- max(sapply(vectors, length))
lapply(names(vectors), function(x) {
if(max.length %% length(vectors[[x]]))
warning(paste("Maximum vector length not a multiple of the length of",
x, sep = " "))
if (any(vectors[[x]] < 0) || !is.numeric(vectors[[x]]))
stop(paste(x,
"must be numeric and may not contain negative numbers.",
sep = " "))
})
return(max.length)
}
```
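For instance, a call like the one below (hypothetical parameter values; `dc.check.model.params()` comes from BTYD) returns the longest vector length and only complains if something is off:
```{r inputCheckExample}
# Illustration only: hypothetical r, alpha, a, b values
params <- c(0.243, 4.414, 0.793, 2.426)
bgnbd.InputCheck(params, "bgnbd.LL", x = c(2, 5), t.x = c(30.4, 38.9), T.cal = 39)
# returns 2; warns if a length does not recycle evenly into the longest,
# stops on negative or non-numeric inputs
```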
Now the two functions above become:
## bgnbd.LL.lite
```{r bgnbd.LL.lite}
bgnbd.LL <- function(params, x, t.x, T.cal) {
max.length <- try(bgnbd.InputCheck(params, 'bgnbd.LL', x, t.x, T.cal))
if('try-error' == class(max.length)) return(max.length)
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r = params[1]
alpha = params[2]
a = params[3]
b = params[4]
beta.ratio = function(a, b, x, y) {
exp(lgamma(a) + lgamma(b) - lgamma(a + b) - lgamma(x) - lgamma(y) + lgamma(x +
y))
}
A = r * log(alpha) + lgamma(r + x) - lgamma(r) - (r + x) * log(alpha + t.x)
B = beta.ratio(a, b + x, a, b) * ((alpha + t.x)/(alpha + T.cal))^(r + x) + as.numeric((x >
0)) * beta.ratio(a + 1, b + x - 1, a, b)
LL = sum(A + log(B))
return(LL)
}
```
## bgnbd.PAlive.lite
```{r bgnbd.PAlive.lite}
bgnbd.PAlive <- function(params, x, t.x, T.cal) {
max.length <- try(bgnbd.InputCheck(params, 'bgnbd.PAlive', x, t.x, T.cal))
if('try-error' == class(max.length)) return(max.length)
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r = params[1]
alpha = params[2]
a = params[3]
b = params[4]
term1 = (a/(b + x - 1)) * ((alpha + T.cal)/(alpha + t.x))^(r + x)
return(1/(1 + as.numeric(x > 0) * term1))
}
```
We can do a little better. The function definitions above still have some duplication. In fact, they take the same arguments and just combine them in different ways to return the output of interest. First, let's implement the large x fix in this version of `bgnbd.LL`:
## bgnbd.LL.lite with NUM! fix
```{r bgnbd.LL.lite.numfix}
bgnbd.LL <- function(params, x, t.x, T.cal) {
max.length <- try(bgnbd.InputCheck(params, 'bgnbd.LL', x, t.x, T.cal))
if('try-error' == class(max.length)) return(max.length)
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r = params[1]
alpha = params[2]
a = params[3]
b = params[4]
# alt specification to handle large values of x (Solution #2
# in http://brucehardie.com/notes/027/bgnbd_num_error.pdf)
lb.ratio = function(a, b, x, y) {
(lgamma(a) + lgamma(b) - lgamma(a + b)) -
(lgamma(x) + lgamma(y) - lgamma(x + y))
}
D1 = lb.ratio(a + b, b + x, r, b)
D2 = r * log(alpha) - (r + x) * log(alpha + t.x)
C3 = ((alpha + t.x)/(alpha + T.cal))^(r + x)
C4 = a / (b + x - 1)
LL = D1 + D2 + log(C3 + as.numeric((x > 0)) * C4)
return(LL)
}
```
Notice that in this alternative specification the ratio `C4/C3` would produce `term1` in the definition of `bgnbd.PAlive()`. It would be good if we could compute these pieces once and use them everywhere.
It would also be good if we didn't do more computing than strictly needed. Maybe we could do this:
## bgnbd.GeneralParams
```{r bgnbd.GeneralParams}
bgnbd.generalParams <- function(params,
func,
x,
t.x,
T.cal,
T.star = NULL,
hardie = NULL) {
max.length <- try(pnbd.InputCheck(params = params,
func = func,
printnames = c("r", "alpha", "a", "b"),
x,
t.x,
T.cal))
if('try-error' == class(max.length)) return(max.length)
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r <- params[1]
alpha <- params[2]
a <- params[3]
b <- params[4]
# last two components for the alt specification
# to handle large values of x (Solution #2 in
# http://brucehardie.com/notes/027/bgnbd_num_error.pdf,
# LL specification (4) on page 4):
C3 = ((alpha + t.x)/(alpha + T.cal))^(r + x)
C4 = a / (b + x - 1)
# stuff you'll need in sundry places
out <- list()
out$PAlive <- 1/(1 + as.numeric(x > 0) * C4 / C3)
# do these computations only if needed: that is,
# if you call this function from bgnbd.LL
if(func == 'bgnbd.LL') {
# a helper for specifying the log form of the ratio of betas
# in http://brucehardie.com/notes/027/bgnbd_num_error.pdf
lb.ratio = function(a, b, x, y) {
(lgamma(a) + lgamma(b) - lgamma(a + b)) -
(lgamma(x) + lgamma(y) - lgamma(x + y))
}
# First two components -- D1 and D2 -- for the alt spec
# that can handle large values of x (Solution #2 in
# http://brucehardie.com/notes/027/bgnbd_num_error.pdf)
# Here is the D1 term of LL function (4) on page 4:
D1 = lgamma(r + x) -
lgamma(r) +
lgamma(a + b) +
lgamma(b + x) -
lgamma(b) -
lgamma(a + b + x)
D2 = r * log(alpha) - (r + x) * log(alpha + t.x)
# original implementation of the log likelihood
# A = D2 + lgamma(r + x) - lgamma(r)
# B = exp(lb.ratio(a, b + x, a, b)) *
# C3 +
# as.numeric((x > 0)) *
# exp(lb.ratio(a + 1, b + x - 1, a, b))
# out$LL = sum(A + log(B))
  # with the correction for avoiding the NUM! problem:
out$LL = D1 + D2 + log(C3 + as.numeric((x > 0)) * C4)
}
# if T.star is not null, then this can produce
# conditional expected transactions too. this is
# another way of saying that you are calling this
# function from bgnbd.ConditionalExpectedTransactions,
# in which case you also need to set hardie to TRUE or FALSE
if(!is.null(T.star)) {
stopifnot(hardie %in% c(TRUE, FALSE))
term1 <- (a + b + x - 1) / (a - 1)
if(hardie == TRUE) {
hyper <- h2f1(r + x,
b + x,
a + b + x - 1,
T.star/(alpha + T.cal + T.star))
} else {
hyper <- Re(hypergeo(r + x,
b + x,
a + b + x - 1,
T.star/(alpha + T.cal + T.star)))
}
term2 <- 1 -
((alpha + T.cal)/(alpha + T.cal + T.star))^(r + x) *
hyper
out$CET <- term1 * term2 * out$PAlive
}
out
}
```
Notice that we didn't even use the proposed `bgnbd.InputCheck()` because the `pnbd.InputCheck()` we already defined for the Pareto/NBD (`R/pnbd.R`) functions works fine. We just need to set its `printnames` argument to suit the BG/NBD functions.
With this helper, `bgnbd.LL()` and `bgnbd.PAlive()` become one-line wrappers:
## One-liner `bgnbd.LL()`, `bgnbd.PAlive()`
```{r oneliners}
bgnbd.LL <- function(params, x, t.x, T.cal) {
bgnbd.generalParams(params, 'bgnbd.LL', x, t.x, T.cal)$LL
}
bgnbd.PAlive <- function(params, x, t.x, T.cal) {
bgnbd.generalParams(params, 'bgnbd.PAlive', x, t.x, T.cal)$PAlive
}
```
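A quick sanity check of the wrappers might look like this (hypothetical parameter values; the helpers above, including `pnbd.InputCheck()` from `fix_pnbd.Rmd`, are assumed to be defined):
```{r onelinersExample}
# Hypothetical BG/NBD parameters (r, alpha, a, b), for illustration only
params <- c(0.243, 4.414, 0.793, 2.426)
bgnbd.LL(params, x = 2, t.x = 30.43, T.cal = 38.86)      # log-likelihood contribution
bgnbd.PAlive(params, x = 2, t.x = 30.43, T.cal = 38.86)  # P(alive) for the same history
```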
There's a third BG/NBD function that could benefit, shown below:
## bgnbd.ConditionalExpectedTransactions (lines 196-253 of [bgnbd.R](https://github.com/cran/BTYD/blob/master/R/bgnbd.R))
```{r bgnbd.ConditionalExpectedTransactions}
bgnbd.ConditionalExpectedTransactions <- function(params, T.star, x, t.x, T.cal) {
h2f1 <- function(a, b, c, z) {
lenz <- length(z)
j = 0
uj <- 1:lenz
uj <- uj/uj
y <- uj
lteps <- 0
while (lteps < lenz) {
lasty <- y
j <- j + 1
uj <- uj * (a + j - 1) * (b + j - 1)/(c + j - 1) * z/j
y <- y + uj
lteps <- sum(y == lasty)
}
return(y)
}
max.length <- max(length(T.star), length(x), length(t.x), length(T.cal))
if (max.length%%length(T.star))
warning("Maximum vector length not a multiple of the length of T.star")
if (max.length%%length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length%%length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length%%length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params(c("r", "alpha", "a", "b"), params, "bgnbd.ConditionalExpectedTransactions")
if (any(T.star < 0) || !is.numeric(T.star))
stop("T.star must be numeric and may not contain negative numbers.")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r = params[1]
alpha = params[2]
a = params[3]
b = params[4]
term1 <- ((a + b + x - 1)/(a - 1))
term2 <- 1 - ((alpha + T.cal)/(alpha + T.cal + T.star))^(r + x) * h2f1(r + x,
b + x, a + b + x - 1, T.star/(alpha + T.cal + T.star))
term3 <- 1 + as.numeric(x > 0) * (a/(b + x - 1)) * ((alpha + T.cal)/(alpha +
t.x))^(r + x)
out <- term1 * term2/term3
return(out)
}
```
For one thing, we already have a stand-alone `h2f1` definition in `pnbd.R`, and the rest of the bits and bobs can be returned by `bgnbd.generalParams()`. The lite version would be another one-liner:
## bgnbd.ConditionalExpectedTransactions.lite
```{r bgnbd.ConditionalExpectedTransactions.lite}
bgnbd.ConditionalExpectedTransactions <- function(params, T.star, x, t.x, T.cal, hardie = TRUE) {
  # hardie is required by bgnbd.generalParams() whenever T.star is supplied
  bgnbd.generalParams(params,
                      'bgnbd.ConditionalExpectedTransactions',
                      x,
                      t.x,
                      T.cal,
                      T.star,
                      hardie)$CET
}
```
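Its intended usage would then be something like this (hypothetical parameter values again, and `h2f1` or `hypergeo` must be available depending on the `hardie` flag):
```{r cetLiteExample}
# Expected transactions over the next 39 periods for a customer with
# x = 2, t.x = 30.43, T.cal = 38.86; parameters are made up for illustration
params <- c(0.243, 4.414, 0.793, 2.426)
bgnbd.ConditionalExpectedTransactions(params, T.star = 39,
                                      x = 2, t.x = 30.43, T.cal = 38.86,
                                      hardie = TRUE)
```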
|
/scratch/gouwar.j/cran-all/cranData/BTYD/inst/docs/fix_bgnbd.Rmd
|
---
title: "BTYD Pareto/NBD likelihood rework"
author: "Gabi Huiber"
output:
html_document: default
pdf_document: default
params:
repo: patch_btyd
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, eval = FALSE)
```
# Problem statement
The Pareto/NBD likelihood function for a random individual
is spelled out in 3 different places in the `R/pnbd.R` script of the original `BTYD` package from CRAN:
- in the definition of `pnbd.LL()`, called by the estimating function `pnbd.cbs.LL()`
- in the definition of `pnbd.PAlive()` -- the probability that somebody is still active
- in the definition of `pnbd.DERT()` -- the estimated discounted remaining CLV.
It is also used indirectly in `pnbd.ConditionalExpectedTransactions()`,
which calls `pnbd.PAlive()`.
This likelihood function should instead be defined once and used everywhere. This is
especially important because it also needs a fix: in log form, its original implementation
suffers from the log-sum-exp problem described
[here](https://lips.cs.princeton.edu/computing-log-sum-exp/) and
[here](https://github.com/theofilos/BTYD).
If we define it once, we need only fix it once.
# What do the original implementations look like?
## pnbd.LL (lines 23-90 of [pnbd.R](https://github.com/cran/BTYD/blob/master/R/pnbd.R))
```{r pnbd.LL}
h2f1 <- function(a, b, c, z) {
lenz <- length(z)
j = 0
uj <- 1:lenz
uj <- uj/uj
y <- uj
lteps <- 0
while (lteps < lenz) {
lasty <- y
j <- j + 1
uj <- uj * (a + j - 1) * (b + j - 1)/(c + j - 1) * z/j
y <- y + uj
lteps <- sum(y == lasty)
}
return(y)
}
max.length <- max(length(x), length(t.x), length(T.cal))
if (max.length%%length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length%%length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length%%length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params(c("r", "alpha", "s", "beta"), params, "pnbd.LL")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
maxab <- max(alpha, beta)
absab <- abs(alpha - beta)
param2 <- s + 1
if (alpha < beta) {
param2 <- r + x
}
part1 <- r * log(alpha) + s * log(beta) - lgamma(r) + lgamma(r + x)
part2 <- -(r + x) * log(alpha + T.cal) - s * log(beta + T.cal)
if (absab == 0) {
partF <- -(r + s + x) * log(maxab + t.x) + log(1 - ((maxab + t.x)/(maxab +
T.cal))^(r + s + x))
} else {
F1 = h2f1(r + s + x, param2, r + s + x + 1, absab/(maxab + t.x))
F2 = h2f1(r + s + x, param2, r + s + x + 1, absab/(maxab + T.cal)) *
((maxab + t.x)/(maxab + T.cal))^(r + s + x)
partF = -(r + s + x) * log(maxab + t.x) + log(F1 - F2)
}
part3 <- log(s) - log(r + s + x) + partF
return(part1 + log(exp(part2) + exp(part3)))
```
## pnbd.PAlive (lines 294-354 of [pnbd.R](https://github.com/cran/BTYD/blob/master/R/pnbd.R))
```{r pnbd.PAlive}
h2f1 <- function(a, b, c, z) {
lenz <- length(z)
j = 0
uj <- 1:lenz
uj <- uj/uj
y <- uj
lteps <- 0
while (lteps < lenz) {
lasty <- y
j <- j + 1
uj <- uj * (a + j - 1) * (b + j - 1)/(c + j - 1) * z/j
y <- y + uj
lteps <- sum(y == lasty)
}
return(y)
}
max.length <- max(length(x), length(t.x), length(T.cal))
if (max.length%%length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length%%length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length%%length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params(c("r", "alpha", "s", "beta"), params, "pnbd.PAlive")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
A0 <- 0
if (alpha >= beta) {
F1 <- h2f1(r + s + x, s + 1, r + s + x + 1, (alpha - beta)/(alpha + t.x))
F2 <- h2f1(r + s + x, s + 1, r + s + x + 1, (alpha - beta)/(alpha + T.cal))
A0 <- F1/((alpha + t.x)^(r + s + x)) - F2/((alpha + T.cal)^(r + s + x))
} else {
F1 <- h2f1(r + s + x, r + x, r + s + x + 1, (beta - alpha)/(beta + t.x))
F2 <- h2f1(r + s + x, r + x, r + s + x + 1, (beta - alpha)/(beta + T.cal))
A0 <- F1/((beta + t.x)^(r + s + x)) - F2/((beta + T.cal)^(r + s + x))
}
return((1 + s/(r + s + x) * (alpha + T.cal)^(r + x) * (beta + T.cal)^s * A0)^(-1))
```
## pnbd.DERT (lines 725-771 of [pnbd.R](https://github.com/cran/BTYD/blob/master/R/pnbd.R))
```{r pnbd.DERT}
max.length <- max(length(x), length(t.x), length(T.cal))
if (max.length%%length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length%%length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length%%length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params(c("r", "alpha", "s", "beta"), params, "pnbd.DERT")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
maxab = max(alpha, beta)
absab = abs(alpha - beta)
param2 = s + 1
if (alpha < beta) {
param2 = r + x
}
part1 <- (alpha^r * beta^s/gamma(r)) * gamma(r + x)
part2 <- 1/((alpha + T.cal)^(r + x) * (beta + T.cal)^s)
if (absab == 0) {
F1 <- 1/((maxab + t.x)^(r + s + x))
F2 <- 1/((maxab + T.cal)^(r + s + x))
} else {
F1 <- Re(hypergeo(r + s + x, param2, r + s + x + 1, absab/(maxab + t.x)))/((maxab +
t.x)^(r + s + x))
F2 <- Re(hypergeo(r + s + x, param2, r + s + x + 1, absab/(maxab + T.cal)))/((maxab +
T.cal)^(r + s + x))
}
likelihood = part1 * (part2 + (s/(r + s + x)) * (F1 - F2))
```
# How might we clean up?
## Move the input checks somewhere else
All three definitions check for two conditions that could be checked elsewhere. One is that
the vectors x, t.x, and T.cal should have the same length; the other is that their elements
should all be zero or greater.
The first condition is met implicitly if you pass these vectors along as a matrix, which is how
`pnbd.cbs.LL()` gets them anyway: as columns in the `cal.cbs` matrix, named x, t.x, T.cal in this order.
The second is easier to check with one line if you do pass them along as a matrix:
`sum(matrix >= 0) == nrow(matrix) * ncol(matrix)`.
This input check could be a stand-alone helper function. That function is now `dc.InputCheck`. See the help file for details.
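For concreteness, here is a minimal sketch of that one-line check on a hypothetical `cal.cbs` matrix:
```{r matrixCheckSketch}
# Hypothetical cal.cbs with the usual x, t.x, T.cal columns
cal.cbs <- cbind(x = c(2, 0, 5), t.x = c(30.4, 0, 38.9), T.cal = rep(39, 3))
sum(cal.cbs >= 0) == nrow(cal.cbs) * ncol(cal.cbs)  # TRUE when nothing is negative
```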
## Move the h2f1 somewhere else; define it only once
Two of the likelihood implementations use a helper for the Gaussian hypergeometric
function that could be defined once as a standalone function, like so:
```{r h2f1}
#' Use Bruce Hardie's Gaussian hypergeometric implementation
#'
#' In benchmarking, \code{\link{pnbd.LL}} runs more quickly and
#' it returns the same results if it uses this helper instead of
#' \code{\link[hypergeo]{hypergeo}}, which is the default. But \code{h2f1}
#' is such a barebones function that in some edge cases it will keep
#' going until you get a segfault, where \code{\link[hypergeo]{hypergeo}}
#' would have done the right thing and failed with a proper error message.
#'
#' @param a, counterpart to A in \code{\link[hypergeo]{hypergeo}}
#' @param b, counterpart to B in \code{\link[hypergeo]{hypergeo}}
#' @param c, counterpart to C in \code{\link[hypergeo]{hypergeo}}
#' @param z, counterpart to z in \code{\link[hypergeo]{hypergeo}}
#' @seealso \code{\link[hypergeo]{hypergeo}}
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the Pareto/NBD Model and
#' Related Expressions." November. 2005. Web. \url{http://www.brucehardie.com/notes/008/}
h2f1 <- function(a, b, c, z) {
lenz <- length(z)
j = 0
uj <- 1:lenz
uj <- uj/uj
y <- uj
lteps <- 0
while (lteps < lenz) {
lasty <- y
j <- j + 1
uj <- uj * (a + j - 1) * (b + j - 1)/(c + j - 1) * z/j
y <- y + uj
lteps <- sum(y == lasty)
}
return(y)
}
```
But another possibility might be to simply not use this implementation at all:
`BTYD` already requires the `hypergeo` package.
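Away from the edge cases, the two implementations agree, as a quick spot check shows (arbitrary arguments, with |z| well below 1):
```{r h2f1SpotCheck}
library(hypergeo)
h2f1(1.5, 2, 3.5, 0.3)
Re(hypergeo(1.5, 2, 3.5, 0.3))  # same value, different implementation
```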
## A first pass for cleaning up: the `lite` versions
Without the input checks and with this `h2f1` helper set aside, and after noticing that all three functions of interest use the same naming convention for the four parameters -- `params[1:4]` are called
`r`, `alpha`, `s` and `beta` -- the definitions become:
### pnbd.LL.lite
```{r pnbd.LL.lite}
maxab <- max(alpha, beta)
absab <- abs(alpha - beta)
param2 <- s + 1
if (alpha < beta) {
param2 <- r + x
}
part1 <- r * log(alpha) +
s * log(beta) -
lgamma(r) +
lgamma(r + x)
part2 <- -(r + x) * log(alpha + T.cal) -
s * log(beta + T.cal)
if (absab == 0) {
partF <- -(r + s + x) *
log(maxab + t.x) +
log(1 - ((maxab + t.x)/(maxab + T.cal))^(r + s + x))
} else {
F1 = h2f1(r + s + x, param2, r + s + x + 1, absab/(maxab + t.x))
F2 = h2f1(r + s + x, param2, r + s + x + 1, absab/(maxab + T.cal)) *
((maxab + t.x)/(maxab + T.cal))^(r + s + x)
partF = -(r + s + x) * log(maxab + t.x) +
log(F1 - F2)
}
part3 <- log(s) -
log(r + s + x) +
partF
return(part1 + log(exp(part2) + exp(part3)))
```
### pnbd.PAlive.lite
```{r pnbd.PAlive.lite}
A0 <- 0
if (alpha >= beta) {
F1 <- h2f1(r + s + x, s + 1, r + s + x + 1, (alpha - beta)/(alpha + t.x))
F2 <- h2f1(r + s + x, s + 1, r + s + x + 1, (alpha - beta)/(alpha + T.cal))
A0 <- F1/((alpha + t.x)^(r + s + x)) - F2/((alpha + T.cal)^(r + s + x))
} else {
F1 <- h2f1(r + s + x, r + x, r + s + x + 1, (beta - alpha)/(beta + t.x))
F2 <- h2f1(r + s + x, r + x, r + s + x + 1, (beta - alpha)/(beta + T.cal))
A0 <- F1/((beta + t.x)^(r + s + x)) - F2/((beta + T.cal)^(r + s + x))
}
return((1 + s/(r + s + x) * (alpha + T.cal)^(r + x) * (beta + T.cal)^s * A0)^(-1))
```
### pnbd.DERT.lite
```{r pnbd.DERT.lite}
maxab = max(alpha, beta)
absab = abs(alpha - beta)
param2 = s + 1
if (alpha < beta) {
param2 = r + x
}
part1 <- (alpha^r * beta^s/gamma(r)) * gamma(r + x)
part2 <- 1/((alpha + T.cal)^(r + x) * (beta + T.cal)^s)
if (absab == 0) {
F1 <- 1/((maxab + t.x)^(r + s + x))
F2 <- 1/((maxab + T.cal)^(r + s + x))
} else {
F1 <- Re(hypergeo(r + s + x,
param2,
r + s + x + 1,
absab/(maxab + t.x)))/((maxab + t.x)^(r + s + x))
F2 <- Re(hypergeo(r + s + x,
param2,
r + s + x + 1,
absab/(maxab + T.cal)))/((maxab + T.cal)^(r + s + x))
}
likelihood = part1 * (part2 + (s/(r + s + x)) * (F1 - F2))
```
# A second pass: the `lite2` versions
Both the Gaussian hypergeometric function implementation shown [here](http://www.brucehardie.com/notes/008/pareto_nbd_MATLAB.pdf) and the official R package [here](https://cran.r-project.org/web/packages/hypergeo/vignettes/hypergeometric.pdf) will
return 1 if the fourth parameter, z, is 0, no matter what values the first 3
parameters -- a, b, and c -- take.
But saying that z = 0 is the same as saying alpha = beta in our usage of the
hypergeometric, because in our calls to it the fourth parameter is a ratio with
abs(alpha - beta) in the numerator.
In fact, the MATLAB implementation described [here](http://www.brucehardie.com/notes/008/pareto_nbd_MATLAB.pdf)
shows a hypergeometric that takes only slightly different parameters depending on the relationship
between alpha and beta. The three R implementations above almost all check the inequality alpha > beta
at one point or another, but maybe that too could be done only once, in order to define a general
parameterization that would allow us to skip the checks thereafter. This general parameterization might be:
### generalParameterization
```{r generalParameterization}
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
maxab <- max(alpha, beta)
absab <- abs(alpha - beta)
param2 <- s + 1
if (alpha < beta) {
param2 <- r + x
}
a <- alpha + T.cal
b <- maxab + t.x
c <- beta + T.cal
d <- maxab + T.cal
w <- r + s + x
```
With this set aside and available in one place, `pnbd.PAlive()` becomes:
### pnbd.PAlive.lite2
```{r pnbd.PAlive.lite2}
F1 <- h2f1(w, param2, w + 1, absab / b)
F2 <- h2f1(w, param2, w + 1, absab / d)
A0 <- F1/(b^w) - F2/(d^w)
return((1 + s/w * a^(w - s) * c^s * A0)^(-1))
```
And then pnbd.LL(), which returns a log likelihood and must dodge the log-sum-exp problem, becomes:
### pnbd.LL.lite2
```{r pnbd.LL.lite2}
# stuff below is almost equivalent to fix proposed at
# https://github.com/theofilos/BTYD. the exception is
# a small correction that allows A0 to be computed the
# same way that it is in pnbd.PAlive.
part1 <- r * log(alpha) +
s * log(beta) -
lgamma(r) +
lgamma(r + x)
part2 <- (s-w) * log(a) - s * log(c)
F1 <- h2f1(w, param2, w + 1, absab / b)
F2 <- h2f1(w, param2, w + 1, absab / d)
A0 <- F1/(b^w) - F2/(d^w)
return(part1 + part2 + log(1+(s/w) * exp(-part2) * A0))
# The returned expression above makes use of the log-sum-exp trick as follows:
#
# - first, ignore the part1 term. let's just pick apart the expression
# part2 + log(1+(s/w) * exp(-part2) * A0) and show that it's
# equivalent to what the original pnbd.LL() definition returned
# as log(exp(part2) + exp(part3)). the part1 term is the same.
#
# - let's re-arrange the terms inside the log expression:
# 1 = exp(0)
# = exp(part2 - part2)
# (s/w) * exp(-part2) * A0 = A0 * s/w * exp(-part2)
# = exp(log(A0 * s/w)) * exp(-part2)
# = exp[log(A0 * s/w) - part2]
#
# - now the log expression is
# log(exp(part2 - part2) + exp(log(A0 * s/w) - part2))
#
# - and the original returned expression (excluding part1) becomes
# part2 + log(exp(part2 - part2) + exp(log(A0 * s/w) - part2))
#
# - this, by the log-sum-exp rule, is equivalent to
# log(exp(part2) + s/w * A0)
#
# - now we only need to show that s/w * A0 = exp(part3) in the original
# pnbd.LL() definition. First, recap the new parameterization:
# a <- alpha + T.cal
# b <- maxab + t.x
# c <- beta + T.cal
# d <- maxab + T.cal
# w <- r + s + x
# and remember also that the old F2 is the new F2 times (b/d)^w
#
# - in this parameterization, the original part3 becomes:
# part3 = log(s) - log(w) - w * log(b) + log(F1 - F2 * (b/d)^w)
# = log(s/w) - log(b^w) + log(F1 - F2 * (b/d)^w)
# = log(s/w) + log((F1 - F2 * (b/d)^w)/b^w)
# = log(s/w) + log(F1/b^w - F2/d^w)
# = log(s/w) + log(A0). so:
#      exp(part3) = exp(log(s/w) + log(A0))
# = exp(log(s/w)) * exp(log(A0))
# = s/w * A0
#
# q.e.d.
#
# One more note: in the original pnbd.LL, partF is defined
# within an if/else conditional. This is because if absab = 0,
# both F1 and F2 return 1 so the log(F1 - F2) expression is
# not computable. That if/else piece is not needed if you
# never take log(F1 - F2). And now, thanks to A0, we don't.
```
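Stripped of the surrounding algebra, the identity used above is just the standard log-sum-exp rearrangement, with `part2` playing the role of $p_2$:
$$\log\left(e^{p_2} + \tfrac{s}{w}A_0\right) = p_2 + \log\left(1 + \tfrac{s}{w}A_0\,e^{-p_2}\right)$$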
Finally, `pnbd.DERT()` becomes:
### pnbd.DERT.lite2
```{r pnbd.DERT.lite2}
part1 <- (alpha^r * beta^s/gamma(r)) * gamma(r + x)
part2 <- 1/(a^(w - s) * c^s)
F1 <- Re(hypergeo(w, param2, w + 1, absab / b))
F2 <- Re(hypergeo(w, param2, w + 1, absab / d))
A0 <- F1/(b^w) - F2/(d^w)
likelihood <- part1 * (part2 + (s/w) * A0)
```
# Two more things
The chunks of code above show only the core pieces of `pnbd.LL()`, `pnbd.PAlive()`, and
`pnbd.DERT()` after input checks are defined outside the respective function and a general
parameterization is defined to make them use the same language. This makes it easier to
see what we should check next:
1. Could `Re(hypergeo(...))` do the job of `h2f1()` or vice-versa? They return identical results
(the proof is in the `threeway_walkthrough.R` script), so we should only keep one of them. Which one?
* `h2f1()` looks clever and simple, but where they both run into edge cases `Re(hypergeo(...))` will
fail quickly with some informative message along the lines of "series not converged" while `h2f1()`
keeps going until -- I guess -- it runs into a segfault (didn't wait to see for sure). That makes me
nervous. I'd rather use an official implementation. For an actual edge case example,
see `Re(hypergeo::hypergeo(15,27,32,1))` vs. `h2f1(15,27,32,1)`.
* on the other hand, for the non-edge case presented in the vignette, `Re(hypergeo(...))` took 200
times longer to run than `h2f1()`. See `mypars[,'elapsed_time']` in `threeway_walkthrough.R`; a rough timing sketch also follows this list.
2. Can we clean up even more? Especially if we're only going to keep one of the two
hypergeometric implementations, F1 and F2 in the chunks above could be defined once,
perhaps in the same spot where the general parameterization goes.
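A rough way to reproduce that timing comparison (this is not the benchmark in `threeway_walkthrough.R`; the arguments are arbitrary and timings will vary by machine):
```{r hypergeoTimingSketch}
library(hypergeo)
z <- runif(1000, 0, 0.5)
system.time(h2f1(2.5, 1.5, 3.5, z))
system.time(Re(hypergeo(2.5, 1.5, 3.5, z)))
```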
# A third pass: the `lite3` versions
Here's what the clean chunks might look like:
* First, we move the estimation of the hypergeometric into the general
parameterization chunk.
* This reduces `pnbd.PAlive()` to one line, renders `pnbd.LL()` totally
incomprehensible, and `pnbd.DERT()` is not much better; but the latter
two do now look eerily similar, which we'll exploit later.
### generalParameterizationHyper
```{r generalParameterizationHyper}
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
maxab <- max(alpha, beta)
absab <- abs(alpha - beta)
param2 <- s + 1
if (alpha < beta) {
param2 <- r + x
}
a <- alpha + T.cal
b <- maxab + t.x
c <- beta + T.cal
d <- maxab + T.cal
w <- r + s + x
F1 <- Re(hypergeo(w, param2, w + 1, absab / b))
F2 <- Re(hypergeo(w, param2, w + 1, absab / d))
A0 <- F1/(b^w) - F2/(d^w)
```
### pnbd.PAlive.lite3
```{r pnbd.PAlive.lite3}
return((1 + s/w * a^(w - s) * c^s * A0)^(-1))
```
### pnbd.LL.lite3
```{r pnbd.LL.lite3}
part1 <- r * log(alpha) + s * log(beta) - lgamma(r) + lgamma(r + x)
part2 <- (s-w) * log(a) - s * log(c)
return(part1 + part2 + log(1+(s/w) * exp(-part2) * A0))
```
### pnbd.DERT.lite3
```{r pnbd.DERT.lite3}
part1 <- (alpha^r * beta^s/gamma(r)) * gamma(r + x)
part2 <- 1/(a^(w - s) * c^s)
likelihood <- part1 * (part2 + (s/w) * A0)
```
Now there's no ambiguity left: everything is defined in one place. And we can also see
that we can do even better, once we've made our peace with losing readability. The pieces
called `part1` and `part2` in `pnbd.LL()` are the log form of the pieces of the same name
in `pnbd.DERT()`. This is not surprising, because `pnbd.LL()` is supposed to return a log
likelihood, while `pnbd.DERT()` makes use of the actual likelihood. So, let's move `part1`
and `part2` to the general parameterization chunk, which now becomes:
### generalParameterizationHyperParts
```{r generalParameterizationHyperParts}
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
maxab <- max(alpha, beta)
absab <- abs(alpha - beta)
param2 <- s + 1
if (alpha < beta) {
param2 <- r + x
}
a <- alpha + T.cal
b <- maxab + t.x
c <- beta + T.cal
d <- maxab + T.cal
w <- r + s + x
F1 <- Re(hypergeo(w, param2, w + 1, absab / b))
F2 <- Re(hypergeo(w, param2, w + 1, absab / d))
A0 <- F1/(b^w) - F2/(d^w)
part1 <- (alpha^r * beta^s/gamma(r)) * gamma(r + x)
part2 <- 1/(a^(w - s) * c^s)
```
# One last pass: the `lite4` versions
Below are the final versions of these likelihood variants. They reduce to one line each:
### pnbd.PAlive.lite4 (unchanged from 3 above)
```{r pnbd.PAlive.lite4}
return((1 + s/w * a^(w - s) * c^s * A0)^(-1))
```
### pnbd.LL.lite4
```{r pnbd.LL.lite4}
return(log(part1) + log(part2) + log(1 + (s/w) * A0 / part2))
```
### pnbd.DERT.lite4
```{r pnbd.DERT.lite4}
likelihood <- part1 * (part2 + (s/w) * A0)
```
One last thing: the original `pnbd.DERT()` computes the likelihood, as opposed
to the log likelihood, for no good reason at all: it only ever uses it on the log
scale, in the expression that also involves the Tricomi function. So we will just
have it call `pnbd.LL()`; that way it gets directly what it actually needs -- the log likelihood.
# A tentative recipe for fixing BTYD then
We will stitch these things together into actual function definitions inside
a new `pnbd.R` and build a new package called BTYD3. Here's a way:
1. At the command line: `$ git submodule add [email protected]:cran/BTYD.git`. This
will get you the original read-only BTYD source as a folder in your repo.
2. Copy this folder to BTYD3 with `$ cp -Rf BTYD BTYD3`
3. Edit `BTYD3/R/pnbd.R` with the new function definitions with Roxygen headers.
4. Run `$ Rscript roxygenizator.R BTYD3` to build the docs from Roxygen headers
for the new functions and recover the .Rd files for the old functions. This
script automates the work of documenting the new package the `devtools` way,
from function headers, writing the NAMESPACE file, and it will also gunzip the
bundled data files and install the built package from source (see "Stuff that
will go wrong" below).
5. Try to build BTYD3, install it from source and check it the CRAN way:
a) `R CMD build BTYD3`
b) `R CMD check BTYD3`
6. Things will break. As they do, fix them by tweaking `roxygenizator.R` and
running it again until 5a. gets you a tarball and 5b. shows only notes, not
warnings or errors.
7. Check, using `threeway_walkthrough.R`, that BTYD and BTYD3 show the same
estimates for the CDNow example.
## Sketching out the new functions
### BTYD3::dc.inputCheck
```{r BTYD3::dc.inputCheck}
#' Check the inputs to functions that use this common pattern
#'
#' A bunch of functions whose names start with \code{pnbd} take
#' a set of four parameters as their first argument, and then
#' a set of vectors or scalars such as \code{x} or \code{T.cal}
#' as their subsequent arguments. This function started out as
#' pnbd.InputCheck() and it was meant to run input checks for any
#' number of such subsequent vector arguments, as long as they all
#' met the same requirements as \code{x}, \code{t.x} and \code{T.cal}
#' in \code{\link{pnbd.LL}}: meaning, the length of the longest of
#' these vectors is a multiple of the lengths of all others, and all
#' vectors are numeric and positive.
#'
#' With an extra argument, \code{printnames}, pnbd.InputCheck()
#' could also accommodate input checks for functions whose
#' names start with \code{bgbb}, \code{bgnbd}, and \code{spend} so it
#' was basically useful everywhere. That's when it became \code{dc.InputCheck()}.
#' \code{params} can have any length as long as that length is the same
#' as the length of \code{printnames}, so \code{dc.InputCheck()} can
#' probably handle mixtures of distributions for modeling BTYD behavior
#' that are not yet implemented.
#'
#' By other arguments ... here we mean a bunch of named vectors that are used
#' by functions that call \code{dc.InputCheck}, such as x, t.x, T.cal, etc.
#' The standard rules for vector operations apply - if they are not of the same
#' length, shorter vectors will be recycled (start over at the first element) until
#' they are as long as the longest vector. Vector recycling is a good way to get into
#' trouble. Keep vectors to the same length and use single values for parameters that
#' are to be the same for all calculations. If one of these arguments has a length
#' greater than one, the output of the calling function will be a vector as well.
#'
#' @param params If used by \code{pnbd.[...]} functions, Pareto/NBD parameters --
#' a vector with r, alpha, s, and beta, in that order. See \code{\link{pnbd.LL}}.
#' If used by \code{bgnbd.[...]} functions, BG/NBD parameters -- a vector with r,
#' alpha, a, and b, in that order. See \code{\link{bgnbd.LL}}. If used by
#' \code{bgbb.[...]} functions, BG/BB parameters -- a vector with alpha, beta,
#' gamma, and delta, in that order. See \code{\link{bgbb.LL}}.
#' If used by \code{spend.[...]} functions, a vector of gamma-gamma parameters --
#' p, q, and gamma, in that order. See \code{\link{spend.LL}}.
#' @param func Function calling dc.InputCheck
#' @param printnames a string vector with the names of parameters to pass to \code{\link{dc.check.model.params}}
#' @param ... other arguments
#' @return If all is well, a data frame with everything you need in it, with nrow() equal to the length of the longest vector in \code{...}
#' @seealso \code{\link{pnbd.LL}} \code{\link{pnbd.ConditionalExpectedTransactions}}
dc.InputCheck <- function(params,
func,
printnames = c("r", "alpha", "s", "beta"),
...) {
inputs <- as.list(environment())
vectors <- list(...)
dc.check.model.params(printnames = inputs$printnames,
params = inputs$params,
func = inputs$func)
max.length <- max(sapply(vectors, length))
lapply(names(vectors), function(x) {
if(max.length %% length(vectors[[x]]))
warning(paste("Maximum vector length not a multiple of the length of",
x, sep = " "))
if (any(vectors[[x]] < 0) || !is.numeric(vectors[[x]]))
stop(paste(x,
"must be numeric and may not contain negative numbers.",
sep = " "))
})
return(as.data.frame(lapply(vectors,
rep,
length.out = max.length)))
}
```
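A quick illustration of the return value (hypothetical Pareto/NBD parameters; shorter vectors are recycled to the longest length):
```{r dcInputCheckExample}
# Hypothetical r, alpha, s, beta values
params <- c(0.55, 10.58, 0.61, 11.67)
dc.InputCheck(params, "pnbd.LL", x = c(1, 3, 0), t.x = c(30.4, 1.7, 0), T.cal = 39)
# a 3-row data frame with columns x, t.x and T.cal, with T.cal recycled
```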
### BTYD3::pnbd.generalParams
```{r BTYD3::pnbd.generalParams}
#' Define general parameters
#'
#' This is to ensure consistency across all functions that require the likelihood function,
#' or the log of it, and to make sure that the same implementation of the hypergeometric
#' function is used everywhere for building \code{A0}.
#'
#' This function is only ever called by either \code{\link{pnbd.LL}} or \code{\link{pnbd.PAlive}}
#' so it returns directly the output that is expected from those calling functions: either
#' the log likelihood for a set of customers, or the probability that a set of customers with
#' characteristics given by \code{x}, \code{t.x} and \code{T.cal}, having estimated a set
#' of \code{params}, is still alive. Either set of customers can be of size 1.
#' @inheritParams pnbd.LL
#' @param func name of the function calling dc.InputCheck; either \code{pnbd.LL} or \code{pnbd.PAlive}.
#' @return A vector of log likelihood values if \code{func} is \code{pnbd.LL}, or a vector of probabilities
#' that a customer is still alive if \code{func} is \code{pnbd.PAlive}.
#' @seealso \code{\link{pnbd.LL}}
#' @seealso \code{\link{pnbd.PAlive}}
#' @seealso \code{\link{pnbd.DERT}}
pnbd.generalParams <- function(params,
x,
t.x,
T.cal,
func,
hardie = TRUE) {
  # Since pnbd.LL and pnbd.PAlive are the only options
# for func, we don't need a printnames argument
# in the pnbd.generalParams wrapper.
stopifnot(func %in% c('pnbd.LL', 'pnbd.PAlive'))
inputs <- try(dc.InputCheck(params = params,
func = func,
printnames = c("r", "alpha", "s", "beta"),
x = x,
t.x = t.x,
T.cal = T.cal))
if('try-error' == class(inputs)) return(str(inputs)$message)
x <- inputs$x
t.x <- inputs$t.x
T.cal <- inputs$T.cal
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
maxab <- max(alpha, beta)
absab <- abs(alpha - beta)
param2 <- s + 1
if (alpha < beta) {
param2 <- r + x
}
a <- alpha + T.cal
b <- maxab + t.x
c <- beta + T.cal
d <- maxab + T.cal
w <- r + s + x
if(hardie == TRUE) {
F1 <- h2f1(a = w,
b = param2,
c = w + 1,
z = absab / b)
F2 <- h2f1(a = w,
b = param2,
c = w + 1,
z = absab / d)
} else {
F1 <- Re(hypergeo(A = w,
B = param2,
C = w + 1,
z = absab / b))
F2 <- Re(hypergeo(A = w,
B = param2,
C = w + 1,
z = absab / d))
}
A0 <- F1/(b^w) - F2/(d^w)
# You only ever call this function from two other
# places: pnbd.LL or pnbd.PAlive.
if(func == 'pnbd.LL') {
# this returns the log likelihood for one random customer
part1 <- r * log(alpha) +
s * log(beta) +
lgamma(r + x) -
lgamma(r)
part2 <- 1 / (a^(w - s) * c^s)
return(part1 + log(part2) + log(1 + (s/w) * A0 / part2))
}
else if(func == 'pnbd.PAlive') {
# This returns the probability that a random customer is still alive
return(1 / (1 + s/w * a^(w - s) * c^s * A0))
} else {
return(NULL)
}
}
```
### The fixed versions of the three functions we started with
```{r BTYD3.fixed.three.functions}
#' Pareto/NBD Log-Likelihood
#'
#' Calculates the log-likelihood of the Pareto/NBD model.
#'
#' @param params Pareto/NBD parameters - a vector with r, alpha, s, and beta, in that order.
#' r and alpha are unobserved parameters for the NBD transaction process. s and beta are
#' unobserved parameters for the Pareto (exponential gamma) dropout process.
#' @param x number of repeat transactions in the calibration period T.cal, or a vector of transaction frequencies.
#' @param t.x time of most recent repeat transaction, or a vector of recencies.
#' @param T.cal length of calibration period, or a vector of calibration period lengths.
#' @param hardie if TRUE, use \code{\link{h2f1}} instead of \code{\link[hypergeo]{hypergeo}}.
#'
#' @seealso \code{\link{pnbd.EstimateParameters}}
#'
#' @return A vector of log-likelihoods as long as the longest input vector (x, t.x, or T.cal).
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the Pareto/NBD Model
#' and Related Expressions." November. 2005. Web. \url{http://www.brucehardie.com/notes/008/}
#'
#' @examples
#' # Returns the log likelihood of the parameters for a customer who
#' # made 3 transactions in a calibration period that ended at t=6,
#' # with the last transaction occurring at t=4.
#' pnbd.LL(params, x=3, t.x=4, T.cal=6, hardie = TRUE)
#'
#' # We can also give vectors as function parameters:
#' set.seed(7)
#' x <- sample(1:4, 10, replace = TRUE)
#' t.x <- sample(1:4, 10, replace = TRUE)
#' T.cal <- rep(4, 10)
#' pnbd.LL(params, x, t.x, T.cal, hardie = TRUE)
pnbd.LL <- function(params,
x,
t.x,
T.cal,
hardie) {
pnbd.generalParams(params = params,
x = x,
t.x = t.x,
T.cal = T.cal,
func = 'pnbd.LL',
hardie = hardie)
}
#' Pareto/NBD P(Alive)
#'
#' Uses Pareto/NBD model parameters and a customer's past transaction behavior to return the probability
#' that they are still alive at the end of the calibration period.
#'
#' P(Alive | X=x, t.x, T.cal, r, alpha, s, beta)
#'
#' x, t.x, and T.cal may be vectors. The standard rules for vector operations apply - if they are
#' not of the same length, shorter vectors will be recycled (start over at the first element) until
#' they are as long as the longest vector. It is advisable to keep vectors to the same length and to
#' use single values for parameters that are to be the same for all calculations. If one of these
#' parameters has a length greater than one, the output will be a vector of probabilities.
#'
#' @inheritParams pnbd.LL
#'
#' @return Probability that the customer is still alive at the end of the calibration period.
#' If x, t.x, and/or T.cal has a length greater than one, then this will be a vector of probabilities
#' (containing one element matching each element of the longest input vector).
#' @references Fader, Peter S., and Bruce G.S. Hardie. "A Note on Deriving the Pareto/NBD Model and
#' Related Expressions." November. 2005. Web. \url{http://www.brucehardie.com/notes/008/}
#'
#' @examples
#' data(cdnowSummary)
#' cbs <- cdnowSummary$cbs
#' params <- pnbd.EstimateParameters(cbs, hardie = TRUE)
#'
#' pnbd.PAlive(params, x=0, t.x=0, T.cal=39, TRUE)
#' # 0.2941633; P(Alive) of a customer who made no repeat transactions.
#'
#' pnbd.PAlive(params, x=23, t.x=39, T.cal=39, TRUE)
#' # 1; P(Alive) of a customer who has the same recency and total
#' # time observed.
#'
#' pnbd.PAlive(params, x=5:20, t.x=30, T.cal=39, TRUE)
#' # Note the "increasing frequency paradox".
#'
#' # To visualize the distribution of P(Alive) across customers:
#' p.alives <- pnbd.PAlive(params, cbs[,"x"], cbs[,"t.x"], cbs[,"T.cal"], TRUE)
#' plot(density(p.alives))
pnbd.PAlive <- function(params,
x,
t.x,
T.cal,
hardie) {
pnbd.generalParams(params = params,
x = x,
t.x = t.x,
T.cal = T.cal,
func = 'pnbd.PAlive',
hardie = hardie)
}
#' Pareto/NBD Discounted Expected Residual Transactions
#'
#' Calculates the discounted expected residual transactions of a customer, given their behavior during the calibration period.
#'
#' DERT(d | r, alpha, s, beta, X = x, t.x, T.cal)
#'
#' x, t.x, T.cal may be vectors. The standard rules for vector operations apply - if they are not of the same length,
#' shorter vectors will be recycled (start over at the first element) until they are as long as the longest vector.
#' It is advisable to keep vectors to the same length and to use single values for parameters that are to be the same
#' for all calculations. If one of these parameters has a length greater than one, the output will be also be a vector.
#'
#' @inheritParams pnbd.LL
#' @param d the discount rate to be used. Make sure that it matches up with your chosen time period (do not use an
#' annual rate for monthly data, for example).
#'
#' @return The number of discounted expected residual transactions for a customer with a particular purchase pattern
#' during the calibration period.
#' @references Fader, Peter S., Bruce G.S. Hardie, and Ka L. Lee. "RFM and CLV: Using Iso-Value Curves for Customer
#' Base Analysis." Journal of Marketing Research Vol.42, pp.415-430. November. 2005.
#' \url{http://www.brucehardie.com/papers.html}
#' @references See equation 2.
#' @references Note that this paper refers to what this package is calling discounted expected residual transactions
#' (DERT) simply as discounted expected transactions (DET).
#'
#' @examples
#' # elog <- dc.ReadLines(system.file("data/cdnowElog.csv", package="BTYD2"),2,3)
#' # elog[, 'date'] <- as.Date(elog[, 'date'], format = '%Y%m%d')
#' # cal.cbs <- dc.ElogToCbsCbt(elog)$cal$cbs
#' # params <- pnbd.EstimateParameters(cal.cbs, hardie = TRUE)
#' params <- c(0.5629966, 12.5590370, 0.4081095, 10.5148048)
#'
#' # 15% compounded annually has been converted to 0.0027 compounded continuously,
#' # as we are dealing with weekly data and not annual data.
#' d <- 0.0027
#'
#' # calculate the discounted expected residual transactions of a customer
#' # who made 7 transactions in a calibration period that was 77.86
#' # weeks long, with the last transaction occurring at the end of
#' # the 35th week.
#' pnbd.DERT(params, x=7, t.x=35, T.cal=77.86, d, TRUE)
#'
#' # We can also use vectors to compute DERT for several customers:
#' pnbd.DERT(params, x=1:10, t.x = 30, T.cal=77.86, d, TRUE)
pnbd.DERT <- function(params,
x,
t.x,
T.cal,
d,
hardie) {
loglike <- try(pnbd.LL(params,
x,
t.x,
T.cal,
hardie))
if('try-error' %in% class(loglike)) return(loglike)
# This is the remainder of the original pnbd.DERT function def.
# No need to get too clever here. Revert to explicit assignment
# of params to r, alpha, s, beta the old-school way.
r <- params[1]
alpha <- params[2]
s <- params[3]
beta <- params[4]
z <- d * (beta + T.cal)
tricomi.part.1 = ((z)^(1 - s))/(s - 1) *
genhypergeo(U = c(1),
L = c(2 - s),
z = z,
check_mod = FALSE)
tricomi.part.2 = gamma(1 - s) *
genhypergeo(U = c(s),
L = c(s),
z = z,
check_mod = FALSE)
tricomi = tricomi.part.1 + tricomi.part.2
result <- exp(r * log(alpha) +
s * log(beta) +
(s - 1) * log(d) +
lgamma(r + x + 1) +
log(tricomi) -
lgamma(r) -
(r + x + 1) * log(alpha + T.cal) -
loglike)
return(result)
}
```
If fixing what's wrong with the BTYD were the only goal, we'd be done now.
But we also want to explore some extensions, as follows:
- if `h2f1` works faster than `hypergeo`, and produces the same result as `Re(hypergeo(...))`, then
let's preserve the option of using it; but if we're going to allow switching between hypergeometric
recipes, that requires that we fiddle with a few more functions, among them `pnbd.pmf.General()`.
- the original library uses `stats::optim()` with the `L-BFGS-B` method; the author of `optim`, John C. Nash,
[now thinks](http://www.ibm.com/developerworks/library/ba-optimR-john-nash/) that the newer `optimx::optimx()`
is a better choice; we'll use that one, and while we're at it we might as well add the ability to select our
own method from among the many available options and maybe compare results.
- we may want confidence intervals, so we may need the Hessian; now we can get it by setting the `hessian` argument to `TRUE` in the call to `pnbd.EstimateParameters()`.
# The actual recipe
1. The three functions we just fixed above are called by diagnostic plot functions, so those need to be fixed too: they have to accommodate the `hardie` parameter, and if we're going to mess with their definitions too, we might as well handle all input checks in one place.
2. There's also a `pnbd.pmf.General()` function that needs the `hardie` parameter and in addition
it has a really awkward way of calling the hypergeometric for components B1 and B2 in the original
equation: it's verbose without any benefit in legibility, and code repeats itself. We'll define a
helper function called `B1B2` that sets things up in a more uniform way instead.
3. Any functions that we make material changes to -- as in they take new parameters -- will need
to have comment headers that can be roxygenized. These headers will replace the corresponding .Rd
files when you call `devtools::document()`, but that function also deletes any .Rd files that do
not have a corresponding header. We need a script that recovers them. That script is `roxygenizator.R`.
4. To see what changed and how, the quickest thing to do might be to diff the .Rd files generated by Roxygen2 from the headers in the functions defined in `R/pnbd.R` of the `BTYD3` source folder against their counterparts in the original `BTYD` source folder. This might be done in Vim.
## Prep work
Clone the read-only mirror of BTYD from [GitHub](https://github.com/cran/BTYD) into a
sub-folder of the same name via `$ git submodule add [email protected]:cran/BTYD.git`.
You will need some pieces of it. For now, copy its `NAMESPACE` file into the root of
``r params$repo`` and add the line `import(optimx)`.
## Stuff that will go wrong
When you simply `git clone` a complete package source, such as BTYD, and try to rejigger it
under some other name, such as BTYD3, and then build it with help from `devtools`, you'll have
to fix some things by hand.
First, terms: there's a _source_ folder structure BTYD3 that sits wherever you cloned it;
let's pretend that it is a subfolder of ``r params$repo``.
Then there's also a _library_ folder structure BTYD3 that will show up in `.libPaths()` when,
after you run `R CMD build BTYD3` and get a tarball in the home folder of the _source_ structure,
you actually install that built package, as in
`install.packages("~/Documents/patch_btyd/BTYD3_2.4.tar.gz", repos = NULL, type = "source")`.
The stuff you need to fiddle with by hand is in the _source_ folder structure. If all goes well,
you won't have to pay any attention to the _library_ folder structure with the same name. They're
in two different places. They will be hard to mix up.
The reason I'm mentioning both is that when you run `R CMD check BTYD3` there will be some
notes, warnings, or errors. Some of these errors or warnings reference the _library_ folder:
they mention files that should be there but are missing because some parts of the
build failed. Others reference the _source_ folder: they mention files that shouldn't be
there, but they are, because your operating system or the way the vignette .Rnw file
was compiled produced some detritus. An example of OS detritus on the Mac is a sprinkling
of .DS_Store files; an example of vignette detritus is a "figure" subfolder that knitr uses for
temporary storage.
Anyway, back to the stuff you'll have to fix by hand:
- find BTYD and replace it with BTYD3 in the bodies of the following files
- `DESCRIPTION`
- `vignettes/BTYD-walkthrough.Rnw`
- change the name of `BTYD-walkthrough.Rnw` to `BTYD3-walkthrough.Rnw`
- open it in RStudio and change line 6 like so, to make URL hyperlinks blue:
`\hypersetup{colorlinks, citecolor=black, linkcolor=black, urlcolor=blue}`
- erase `man/BTYD-package.Rd` entirely
- rename `R/BTYD.R` to `R/BTYD3.R`
- check that in the source folder "vignettes" there's a "figure" sub-folder; mkdir it if
it's not there; it will be needed in the build process for temporary storage of pictures
that go in the pdf vignette in the library doc folder; if it's there and it's not empty,
delete everything inside it.
- set up `man/[...].Rd` files for any new functions you have defined by writing roxygen2
headers for them properly as shown in the examples above and [here](http://r-pkgs.had.co.nz/man.html).
- at the command line, run `Rscript roxygenizator.R BTYD3`; this does a few things as listed below:
- it will call `devtools::document()` which
will build new .Rd files from your new function headers, and obliterate a bunch of
others. This script reverses the damage by recovering them from the BTYD clone. But
you're not out of the woods yet. Some of those .Rd files will have `\examples{}`
sections that make calls to the `system.file()` command with the argument
`package = "BTYD"`. All of those examples will fail in `R CMD check BTYD3` so
these `package = "BTYD"` references must be changed to `package = "BTYD3"`
in all of the .Rd files where they appear. The painstaking approach is to go
through them by hand one by one until `R CMD check BTYD3` no longer throws
errors. Make a note of each file fixed. As of this writing: `dc.ReadLines.Rd`,
`dc.BuildCBSFromCBTAndDates.Rd`, `dc.CreateFreqCBT.Rd`, `dc.CreateReachCBT.Rd`,
`dc.CreateSpendCBT.Rd`, `dc.ElogToCbsCbt.Rd`, `dc.MakeRFmatrixCal.Rd`,
`dc.MergeCustomers.Rd` and `dc.RemoveTimeBetween.Rd`. Then you can add code
to `roxygenizator.R` to do this work automatically. See the `changeThis()` helper
defined at the top of the file and the `map()` calls to it toward the end. Do not
run `changeThis()` over the entire `dir()` on the theory that where there's nothing
to replace you will just get the original `.Rd` file. This function will change `BTYD2`
to `BTYD22` if you let it, and then your examples will be broken all over again.
Do eliminate from among the 9 .Rd file names mentioned above any file names for
which you wrote a Roxygen header. Eventually, you will write a Roxygen header for
all of the functions, and the `changeThis()` gymnastics will no longer be needed. When
that is done, comment out the block of code that calls `map(changeThis)` over the vector
of nine .Rd file names listed above. Comment out, rather than delete, so this documentation
still makes sense to you. If you ever have to restart from scratch, un-comment.
- `roxygenizator.R` also writes a brand new `man/BTYD3-package.Rd` out of `R/BTYD3.R`
- `devtools` will produce a useless `NAMESPACE` file consisting of a single line
that warns you that it was built with `roxygen2` and it is not to be edited by hand;
`roxygenizator.R` fixes that by overwriting this `NAMESPACE` file with the one from
the BTYD clone that you copied into the root of `patch_btyd` and to which you
added the extra line; if, down the road, your BTYD3 package lists other dependencies
in its `DESCRIPTION` file, add another `import()` line to this `NAMESPACE`; or read
the `R CMD check BTYD3` warnings; at some point they will suggest which `importFrom()`
lines should be added to `NAMESPACE`, so just copy them from there.
- build the package on the command line with `R CMD build BTYD3`
- this will populate a data folder in the _library_ folder structure (find it in `.libPaths()`)
- that data folder has some csv.gz files in it; `roxygenizator.R` will `gunzip` them.
- check the package on the command line once again with `R CMD check BTYD3`; it should only
bring up notes and warnings, no errors
- if that is so, `roxygenizator.R` will `install.packages(..., repos = NULL, type = "source")`
from the source tarball that was just built and checked.
You can also choose to compile the vignette to pdf by hand, in the _source_ structure, once you have
the package built and installed. To do that you'll want to make sure that the files in the _source_
data folder are csv, not csv.gz, and that the vignettes/figure folder is empty.
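For orientation, here is a hedged sketch of what a `changeThis()`-style helper could look like. This is not the actual code from `roxygenizator.R`; `fix_rd_pkg_refs()` is a made-up name, and the snippet only shows the idea: rewrite the exact string `package = "BTYD"` in a single `.Rd` file while leaving `BTYD2`/`BTYD3` references alone.

```r
# Hypothetical sketch only -- the real helper lives in roxygenizator.R.
fix_rd_pkg_refs <- function(rd_file, new_pkg = "BTYD3") {
  lines <- readLines(rd_file)
  # fixed = TRUE matches the literal string, including the closing quote,
  # so `package = "BTYD2"` and `package = "BTYD3"` are left untouched
  lines <- gsub('package = "BTYD"',
                paste0('package = "', new_pkg, '"'),
                lines, fixed = TRUE)
  writeLines(lines, rd_file)
}

# apply it only to the known offenders, never to the whole dir():
# purrr::map(file.path("man", c("dc.ReadLines.Rd", "dc.CreateFreqCBT.Rd")), fix_rd_pkg_refs)
```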
# BTYD2 vs. BTYD3
I attempted an earlier fix and called it BTYD2. That fix was wrong, but
I thought it might have useful pieces, so I had to call my next attempt BTYD3.
Now I know that BTYD3 works, so I might as well fix BTYD2 too. The steps:
- at the command line, `cp -Rf BTYD3 BTYD2`
- find BTYD3 and replace it with BTYD2 in the bodies of the following files
- `DESCRIPTION`
- `R/dc.R`
- `vignettes/BTYD3-walkthrough.Rnw` (and change its name)
- erase `man/BTYD3-package.Rd` entirely
- rename `R/BTYD3.R` to `R/BTYD2.R`
- at the command line run `Rscript roxygenizator.R BTYD2`. It will rebuild the docs and change
`package = "BTYD"` references to `package = "BTYD2"` in all of the .Rd files where they appear.
- at the command line run `R CMD build BTYD2`, `R CMD check BTYD2`
- install all three packages from source: BTYD [from CRAN](https://cran.r-project.org/web/packages/BTYD/index.html),
BTYD2 and BTYD3 from the tarballs created with `R CMD build`.
- check that BTYD2 and BTYD3 produce the same results as the BTYD package from CRAN by running `threeway_walkthrough.R`.
- for this to work, you may need to `brew install gsl` at the command line; update Homebrew first as shown [here](https://docs.brew.sh/FAQ).
Once all of this works, BTYD3 can go away, and my patched version of the BTYD package will be called BTYD2. At that point I will erase the BTYD3 folder and edit `threeway_walkthrough.R` so it makes no mention of BTYD3. From then on, "three-way" stands for
1. the original BTYD from CRAN,
2. my own BTYD2 and
3. my BTYD2 with `hypergeo` instead of `h2f1`.
Success means that (1) and (2) return the same numbers for the CD-NOW set and (2) is not much slower, while (3) might return slightly different numbers (not by much) and we'll see how much slower it is.
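For reference, the heart of that check can be sketched in a few lines. This is not `threeway_walkthrough.R` itself; it assumes BTYD and BTYD2 are both installed and uses the `cdnowSummary$cbs` calibration matrix that ships with BTYD as the common input.

```r
# hedged sketch of the (1) vs (2) comparison, not the actual threeway_walkthrough.R
library(BTYD)
data(cdnowSummary)
cal.cbs <- cdnowSummary$cbs   # CDNOW calibration data: columns x, t.x, T.cal

p1 <- BTYD::pnbd.EstimateParameters(cal.cbs)    # original CRAN package
p2 <- BTYD2::pnbd.EstimateParameters(cal.cbs)   # patched package, assumed installed
all.equal(p1, p2, tolerance = 1e-6)             # (1) and (2) should agree
```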
|
/scratch/gouwar.j/cran-all/cranData/BTYD/inst/docs/fix_pnbd.Rmd
|
# Modeled after newbuild.R
# Run this from the command line with Rscript:
# Rscript quickbuild_btyd.R
library("devtools")
setwd(file.path(Sys.getenv()['HOME'], 'BTYD'))
document()
build()
install(build_vignettes = TRUE)
check()
|
/scratch/gouwar.j/cran-all/cranData/BTYD/inst/docs/quickbuild_btyd.R
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
slice_sample_gamma_parameters <- function(data, init, hyper, steps = 20, w = 1) {
.Call('_BTYDplus_slice_sample_gamma_parameters', PACKAGE = 'BTYDplus', data, init, hyper, steps, w)
}
pggg_palive <- function(x, tx, Tcal, k, lambda, mu) {
.Call('_BTYDplus_pggg_palive', PACKAGE = 'BTYDplus', x, tx, Tcal, k, lambda, mu)
}
pggg_slice_sample <- function(what, x, tx, Tcal, litt, k, lambda, mu, tau, t, gamma, r, alpha, s, beta) {
.Call('_BTYDplus_pggg_slice_sample', PACKAGE = 'BTYDplus', what, x, tx, Tcal, litt, k, lambda, mu, tau, t, gamma, r, alpha, s, beta)
}
xbgcnbd_pmf_cpp <- function(params, t, x, dropout_at_zero = FALSE) {
.Call('_BTYDplus_xbgcnbd_pmf_cpp', PACKAGE = 'BTYDplus', params, t, x, dropout_at_zero)
}
xbgcnbd_exp_cpp <- function(params, t, dropout_at_zero = FALSE) {
.Call('_BTYDplus_xbgcnbd_exp_cpp', PACKAGE = 'BTYDplus', params, t, dropout_at_zero)
}
|
/scratch/gouwar.j/cran-all/cranData/BTYDplus/R/RcppExports.R
|
#' Event log for customers of an online grocery store.
#'
#' These data came from an online retailer offering a broad range of grocery
#' categories. The original data set spans four years but lacks the customers'
#' acquisition dates. Therefore, we constructed a quasi-cohort by restricting the
#' analysis to those customers who made no purchase at all in the
#' first two years and had their first purchase in the first quarter of 2006.
#' This resulted in 10483 transactions being recorded for 1525 customers during
#' a period of two years (2006-2007).
#'
#' @references Platzer, M., & Reutterer, T. (2016). Ticking away the moments:
#' Timing regularity helps to better predict customer activity. Marketing
#' Science, 35(5), 779-799. \doi{10.1287/mksc.2015.0963}
#'
#' @format A data frame with 10483 rows and 2 variables: \describe{
#' \item{cust}{customer ID, factor vector} \item{date}{transaction date,
#' Date vector} }
#'
#' @source Thomas Reutterer <[email protected]>
"groceryElog"
|
/scratch/gouwar.j/cran-all/cranData/BTYDplus/R/data.R
|
#' Estimate Regularity in Intertransaction Timings
#'
#' The models (M)BG/CNBD-k and Pareto/GGG are capable of leveraging regularity
#' within transaction timings for improving forecast accuracy. This method
#' provides a quick check for the degree of regularity in the event timings. A
#' return value of close to 1 supports the assumption of exponentially
#' distributed intertransaction times, whereas values significantly larger than
#' 1 reveal the presence of regularity.
#'
#' Estimation is either done by 1) assuming the same degree of regularity across
#' all customers (Wheat & Morrison (1990) via \code{method = "wheat"}), or 2) by
#' estimating regularity for each customer separately, as the shape parameter of
#' a fitted gamma distribution, and then returning the median across estimates.
#' The latter approaches, though, require a sufficient number (>=\code{min}) of
#' intertransaction times per customer.
#'
#' Wheat & Morrison (1990)'s method calculates for each customer a statistic
#' \code{M} based on her last two intertransaction times as
#' \code{ITT_1 / (ITT_1 + ITT_2)}. That measure is known to follow a
#' \code{Beta(k, k)} distribution, and \code{k} can be estimated as
#' \code{(1-4*Var(M))/(8*Var(M))}. The corresponding diagnostic plot (\code{plot
#' = TRUE}) shows the actual distribution of \code{M} vs. the theoretical
#' distribution for \code{k = 1} and \code{k = 2}.
#'
#' @param elog Event log, a \code{data.frame} with columns \code{cust} and
#' transaction time \code{t} or \code{date}
#' @param method Either \code{wheat}, \code{mle}, \code{mle-minka}, \code{mle-thom} or
#' \code{cv}.
#' @param plot If \code{TRUE} then an additional diagnostic plot is provided.
#' @param title Plot title.
#' @param min Minimum number of intertransaction times per customer. Customers
#' with fewer than \code{min} intertransaction times are not considered. Defaults
#' to 2 for method \code{wheat}, and to 10 otherwise.
#' @return Estimated real-valued regularity parameter.
#' @references Wheat, Rita D., and Donald G. Morrison. "Estimating purchase
#' regularity with two interpurchase times." Journal of Marketing Research
#' (1990): 87-93.
#' @references Dunn, Richard, Steven Reader, and Neil Wrigley. 'An investigation
#' of the assumptions of the NBD model' Applied Statistics (1983): 249-259.
#' @references Wu, Couchen, and H-L. Chen. 'A consumer purchasing model with
#' learning and departure behaviour.' Journal of the Operational Research
#' Society (2000): 583-591.
#' @references
#' \url{https://tminka.github.io/papers/minka-gamma.pdf}
#'
#' @export
#' @examples
#' data("groceryElog")
#' estimateRegularity(groceryElog, plot = TRUE, method = 'wheat')
#' estimateRegularity(groceryElog, plot = TRUE, method = 'mle-minka')
#' estimateRegularity(groceryElog, plot = TRUE, method = 'mle-thom')
#' estimateRegularity(groceryElog, plot = TRUE, method = 'cv')
estimateRegularity <- function(elog, method = "wheat", plot = FALSE, title = "", min = NULL) {
N <- t <- NULL # suppress checkUsage warnings
if (!"cust" %in% names(elog))
stop("elog must have a column labelled \"cust\"")
if (!"date" %in% names(elog) & !"t" %in% names(elog))
stop("elog must have a column labelled \"t\" or \"date\"")
if (!"t" %in% names(elog))
elog$t <- as.numeric(elog$date)
elog_dt <- subset(setDT(copy(elog)), select = c("cust", "t"))
setkey(elog_dt)
elog_dt <- unique(elog_dt) # nolint
# discard customers with less than `min` ITTs
if (is.null(min)) {
min <- ifelse(method == "wheat", 2, 10)
} else {
stopifnot(is.numeric(min))
stopifnot(min >= 2)
}
elog_dt[, N := .N, by = "cust"]
elog_dt <- elog_dt[N > min]
if (nrow(elog_dt) == 0) stop("No customers with sufficient number of transactions.")
# calculate method specific estimate
if (method == "wheat") {
# Wheat, Rita D., and Donald G. Morrison. 'Estimating purchase regularity
# with two interpurchase times.' Journal of Marketing Research (1990): 87-93.
setkeyv(elog_dt, c("cust", "t"))
calcM <- function(itts) sample(utils::tail(itts, 2), 1) / sum(utils::tail(itts, 2))
M <- elog_dt[, calcM(diff(t)), by = "cust"]$V1
if (var(M) == 0) stop("No customers with sufficient number of transactions.")
r <- (1 - 4 * var(M)) / (8 * var(M))
if (plot) {
mar_top <- ifelse(title != "", 2.5, 1)
op <- par(mar = c(1, 1, mar_top, 1))
M_density <- density(M)
ymax <- max(M_density$y, 1.5)
plot(M_density, xlim = c(-0.05, 1.05), ylim = c(0, ymax),
main = title, sub = "", xlab = "", ylab = "",
lwd = 2, frame = FALSE, axes = FALSE)
polygon(M_density,
col = "lightgray", border = 1)
fn1 <- function(x) dbeta(x, 1, 1)
fn2 <- function(x) dbeta(x, 2, 2)
curve(fn1, add = TRUE, lty = 2, lwd = 1, col = "gray12")
curve(fn2, add = TRUE, lty = 3, lwd = 1, col = "gray12")
par(op)
}
return(r)
} else {
if (method == "mle") {
# Maximum Likelihood Estimator
# https://en.wikipedia.org/wiki/Gamma_distribution#Maximum_likelihood_estimation
est_k <- function(itts) {
s <- log(sum(itts) / length(itts)) - sum(log(itts)) / length(itts)
fn <- function(v) return( (log(v) - digamma(v) - s) ^ 2)
k <- optimize(fn, lower = 0.1, upper = 50)$min
return(k)
}
} else if (method == "mle-minka") {
# Approximation for MLE by Minka
# https://tminka.github.io/papers/minka-gamma.pdf
est_k <- function(itts) {
s <- log(sum(itts) / length(itts)) - sum(log(itts)) / length(itts)
k <- (3 - s + sqrt( (s - 3) ^ 2 + 24 * s)) / (12 * s)
return(k)
}
} else if (method == "mle-thom") {
# Approximation for ML estimator Thom (1968); see Dunn, Richard, Steven
# Reader, and Neil Wrigley. 'An investigation of the assumptions of the
# NBD model' Applied Statistics (1983): 249-259.
est_k <- function(itts) {
hm <- function(v) exp(sum(log(v)) / length(v))
mu <- log(mean(itts) / hm(itts))
k <- (1 / (4 * mu)) * (1 + sqrt(1 + 4 * mu / 3))
return(k)
}
} else if (method == "cv") {
# Estimate regularity by analyzing coefficient of variation Wu, Couchen,
# and H-L. Chen. 'A consumer purchasing model with learning and departure
# behaviour.' Journal of the Operational Research Society (2000): 583-591.
est_k <- function(itts) {
cv <- sd(itts) / mean(itts)
k <- 1 / cv ^ 2
return (k)
}
}
ks <- elog_dt[, est_k(diff(t)), by = "cust"]$V1
if (plot) {
ymax <- median(ks) * 3
suppressWarnings(boxplot(ks, horizontal = TRUE, ylim = c(0, ymax),
frame = FALSE, axes = FALSE, main = title))
axis(1, at = 0:ymax)
axis(3, at = 0:ymax, labels = rep("", 1 + ymax))
abline(v = 1:ymax, lty = "dotted", col = "lightgray")
suppressWarnings(boxplot(ks, horizontal = TRUE, add = TRUE,
col = "gray", frame = FALSE, axes = FALSE))
}
return(median(ks))
}
}
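# Illustrative sketch (not part of the package): the Wheat & Morrison estimator
# documented above, applied to simulated Erlang-k intertransaction times. With
# shape k = 2 the statistic M = ITT_1 / (ITT_1 + ITT_2) follows Beta(2, 2), so
# (1 - 4 * Var(M)) / (8 * Var(M)) should land near 2. Wrapped in if (FALSE) so
# it never runs when this file is sourced.
if (FALSE) {
  set.seed(1)
  k_true <- 2
  itt1 <- rgamma(5000, shape = k_true, rate = 1)  # "last two" ITTs per customer
  itt2 <- rgamma(5000, shape = k_true, rate = 1)
  M <- itt1 / (itt1 + itt2)
  (1 - 4 * var(M)) / (8 * var(M))                 # roughly 2
}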
#' Plot timing patterns of sampled customers
#'
#' @param elog Event log, a \code{data.frame} with columns \code{cust} and
#' transaction time \code{t} or \code{date}.
#' @param n Number of sampled customers.
#' @param T.cal End of calibration period, which is visualized as a vertical line.
#' @param T.tot End of observation period
#' @param title Plot title.
#' @param headers Vector of length 2 for adding headers to the plot, e.g.
#' \code{c("Calibration", "Holdout")}.
#' @export
#' @examples
#' data("groceryElog")
#' plotTimingPatterns(groceryElog, T.tot = "2008-12-31")
#' plotTimingPatterns(groceryElog, T.cal = "2006-12-31", headers = c("Calibration", "Holdout"))
plotTimingPatterns <- function(elog, n = 40, T.cal = NULL, T.tot = NULL,
title = "Sampled Timing Patterns", headers = NULL) {
cust <- first <- t <- V1 <- NULL # suppress checkUsage warnings
elog_dt <- setDT(copy(elog))
custs <- sample(unique(elog_dt$cust), size = min(n, uniqueN(elog_dt$cust)), replace = FALSE)
n <- length(custs)
if (!"t" %in% names(elog_dt)) elog_dt[, `:=`(t, as.numeric(date))]
T.0 <- min(elog_dt$t)
if (is.null(T.cal)) {
T.cal <- max(elog_dt$t)
} else if (!is.numeric(T.cal)) {
T.cal <- as.numeric(as.Date(T.cal))
}
if (is.null(T.tot)) {
T.tot <- max(elog_dt$t)
} else if (!is.numeric(T.tot)) {
T.tot <- as.numeric(as.Date(T.tot))
}
elog_dt <- elog_dt[cust %in% custs & t <= T.tot]
elog_dt[, `:=`(first, min(t)), by = "cust"]
if (!is.character(elog_dt$cust)) elog_dt[, `:=`(cust, as.character(cust))]
# order customers by their first transaction time (works with either a `t` or a `date` column)
custs <- elog_dt[, min(t), by = "cust"][order(V1), cust]
setkeyv(elog_dt, "cust")
mar_top <- ifelse(is.null(title) || title == "", 0.5, 2.5)
op <- par(mar = c(0.5, 0.5, mar_top, 0.5))
ymax <- ifelse(is.character(headers), ceiling(n * 1.05), n)
plot(1, xlim = c(T.0, T.tot), ylim = c(1, ymax), typ = "n",
axes = FALSE, frame = FALSE,
xlab = "", ylab = "",
main = title)
for (i in 1:n) {
ts <- unique(elog_dt[custs[i], t])
segments(min(ts), i, T.tot, i, col = "#efefef", lty = 1, lwd = 1)
points(min(ts), i, pch = 21, col = "#454545", bg = "#454545", cex = 0.8)
ts.cal <- ts[ts > min(ts) & ts <= as.numeric(T.cal)]
ts.val <- ts[ts > as.numeric(T.cal)]
points(ts.cal, rep(i, length(ts.cal)), pch = 21, col = "#454545", bg = "#454545", cex = 0.7)
points(ts.val, rep(i, length(ts.val)), pch = 21, col = "#454545", bg = "#999999", cex = 0.7)
}
par(op)
if (T.cal < T.tot) abline(v = T.cal, lwd = 1.5, col = "#454545")
if (is.character(headers)) {
text(headers[1], x = T.cal - (T.cal - T.0) / 2, y = ymax, col = "#454545", cex = 0.8)
if (T.cal < T.tot) text(headers[2], x = T.cal + (T.tot - T.cal) / 2, y = ymax, col = "#454545", cex = 0.8)
}
invisible()
}
#' Convert Event Log to customer-level summary statistic
#'
#' Efficient implementation for the conversion of an event log into a
#' customer-by-sufficient-statistic (CBS) \code{data.frame}, with a row for each
#' customer, which is the required data format for estimating model parameters.
#'
#' The time unit for expressing \code{t.x}, \code{T.cal} and \code{litt} are
#' determined via the argument \code{units}, which is passed forward to method
#' \code{difftime}, and defaults to \code{weeks}.
#'
#' Argument \code{T.tot} allows one to specify the end of the observation period,
#' i.e. the last possible date of an event to still be included in the event
#' log. If \code{T.tot} is not provided, then the date of the last recorded event
#' will be assumed to coincide with the end of the observation period. If
#' \code{T.tot} is provided, then any event that occurs after that date is discarded.
#'
#' Argument \code{T.cal} allows one to split the summary statistics into a
#' calibration and a holdout period. This can be useful for evaluating
#' forecasting accuracy for a given dataset. If \code{T.cal} is not provided,
#' then the whole observation period is considered and subsequently used for
#' estimating model parameters. If it is provided, then the returned
#' \code{data.frame} contains two additional fields, with \code{x.star}
#' representing the number of repeat transactions during the holdout period of
#' length \code{T.star}, and only those customers who had at least one event
#' during the calibration period are included.
#'
#' Transactions with identical \code{cust} and \code{date} field are treated as
#' a single transaction, with \code{sales} being summed up.
#'
#' @param elog Event log, a \code{data.frame} with field \code{cust} for the
#' customer ID and field \code{date} for the date/time of the event, which
#' should be of type \code{Date} or \code{POSIXt}. If a field \code{sales} is
#' present, it will be aggregated as well.
#' @param units Time unit, either \code{week}, \code{day}, \code{hour},
#' \code{min} or \code{sec}. See \code{\link[base]{difftime}}.
#' @param T.cal End date of calibration period. Defaults to
#' \code{max(elog$date)}.
#' @param T.tot End date of the observation period. Defaults to
#' \code{max(elog$date)}.
#' @return \code{data.frame} with fields:
#' \item{\code{cust}}{Customer id (unique key).}
#' \item{\code{x}}{Number of recurring events in calibration period.}
#' \item{\code{t.x}}{Time between first and last event in calibration period.}
#' \item{\code{litt}}{Sum of logarithmic intertransaction timings during calibration period.}
#' \item{\code{sales}}{Sum of sales in calibration period, incl. initial transaction. Only if \code{elog$sales} is provided.}
#' \item{\code{sales.x}}{Sum of sales in calibration period, excl. initial transaction. Only if \code{elog$sales} is provided.}
#' \item{\code{first}}{Date of first transaction in calibration period.}
#' \item{\code{T.cal}}{Time between first event and end of calibration period.}
#' \item{\code{T.star}}{Length of holdout period. Only if \code{T.cal} is provided.}
#' \item{\code{x.star}}{Number of events within holdout period. Only if \code{T.cal} is provided.}
#' \item{\code{sales.star}}{Sum of sales within holdout period. Only if \code{T.cal} and \code{elog$sales} are provided.}
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31", T.tot = "2007-12-30")
#' head(cbs)
elog2cbs <- function(elog, units = "week", T.cal = NULL, T.tot = NULL) {
cust <- first <- itt <- T.star <- x.star <- sales <- sales.star <- sales.x <- NULL # suppress checkUsage warnings
stopifnot(inherits(elog, "data.frame"))
is.dt <- is.data.table(elog)
if (nrow(elog) == 0) {
cbs <- data.frame(cust = character(0),
x = numeric(0),
t.x = numeric(0),
litt = numeric(0),
first = as.Date(character(0)),
T.cal = numeric(0))
if (is.dt) cbs <- as.data.table(cbs)
return(cbs)
}
if (!all(c("cust", "date") %in% names(elog))) stop("`elog` must have fields `cust` and `date`")
if (!any(c("Date", "POSIXt") %in% class(elog$date))) stop("`date` field must be of class `Date` or `POSIXt`")
if ("sales" %in% names(elog) & !is.numeric(elog$sales)) stop("`sales` field must be numeric")
if (is.data.frame(elog) && nrow(elog) == 0) stop("data.frame supplied to elog2cbs must not be empty")
if (is.null(T.cal)) T.cal <- max(elog$date)
if (is.null(T.tot)) T.tot <- max(elog$date)
if (is.character(T.cal)) T.cal <- if (class(elog$date)[1] == "Date") as.Date(T.cal) else as.POSIXct(T.cal)
if (is.character(T.tot)) T.tot <- if (class(elog$date)[1] == "Date") as.Date(T.tot) else as.POSIXct(T.tot)
if (T.tot < T.cal) T.tot <- T.cal
stopifnot(T.tot >= min(elog$date))
has.holdout <- T.cal < T.tot
has.sales <- "sales" %in% names(elog)
# convert to data.table for improved performance
elog_dt <- data.table(elog)
setkey(elog_dt, cust, date)
# check for `sales` column, and populate if missing
if (!has.sales) {
elog_dt[, `:=`(sales, 1)]
} else {
stopifnot(is.numeric(elog_dt$sales))
}
# merge transactions with same dates
elog_dt <- elog_dt[, list(sales = sum(sales)), by = "cust,date"]
# determine time since first date for each customer
elog_dt[, `:=`(first, min(date)), by = "cust"]
elog_dt[, `:=`(t, as.numeric(difftime(date, first, units = units))), by = "cust"]
# compute intertransaction times
elog_dt[, `:=`(itt, c(0, diff(t))), by = "cust"]
# count events in calibration period
cbs <- elog_dt[date <= T.cal,
list(x = .N - 1,
t.x = max(t),
litt = sum(log(itt[itt > 0])),
sales = sum(sales),
sales.x = sum(sales[t > 0])),
by = "cust,first"]
cbs[, `:=`(T.cal, as.numeric(difftime(T.cal, first, units = units)))]
setkey(cbs, cust)
# count events in validation period
if (has.holdout) {
cbs[, `:=`(T.star, as.numeric(difftime(T.tot, first, units = units)) - T.cal)]
val <- elog_dt[date > T.cal & date <= T.tot, list(x.star = .N, sales.star = sum(sales)), keyby = "cust"]
cbs <- merge(cbs, val, all.x = TRUE, by = "cust")
cbs[is.na(x.star), `:=`(x.star, 0)]
cbs[is.na(sales.star), `:=`(sales.star, 0)]
setcolorder(cbs, c("cust", "x", "t.x", "litt", "sales", "sales.x", "first", "T.cal",
"T.star", "x.star", "sales.star"))
} else {
setcolorder(cbs, c("cust", "x", "t.x", "litt", "sales", "sales.x", "first", "T.cal"))
}
# return same object type as was passed
if (!has.sales) {
elog_dt[, `:=`(sales, NULL)]
cbs[, `:=`(sales, NULL)]
cbs[, `:=`(sales.x, NULL)]
if (has.holdout) cbs[, `:=`(sales.star, NULL)]
}
if (!is.dt) {
cbs <- data.frame(cbs)
}
return(cbs)
}
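# Illustrative sketch (not part of the package): a one-customer toy event log
# and the CBS row elog2cbs() builds from it. Customer id and dates are made up;
# wrapped in if (FALSE) so it never runs when this file is sourced.
if (FALSE) {
  toy <- data.frame(cust = "A",
                    date = as.Date(c("2006-01-01", "2006-01-08", "2006-01-22")))
  elog2cbs(toy, units = "week")
  # expected: x = 2 repeat transactions, t.x = 3 weeks (first to last purchase),
  # T.cal = 3 weeks (first purchase to the default end of the observation period)
}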
#' Convert Event Log to Transaction Counts
#'
#' Aggregates an event log to either incremental or cumulative number of
#' transactions. If \code{first=TRUE} then the initial transactions of each
#' customer are included in the count as well.
#'
#' Duplicate transactions with identical \code{cust} and \code{date} (or
#' \code{t}) field are counted only once.
#'
#' @param elog Event log, a \code{data.frame} with columns \code{cust} and
#' transaction time \code{t} or \code{date}.
#' @param by Only return every \code{by}-th count. Defaults to 7, and thus
#' returns weekly numbers.
#' @param first If TRUE, then the first transaction for each customer is
#' counted as well.
#' @return Numeric vector of transaction counts.
#' @export
#' @examples
#' data("groceryElog")
#' cum <- elog2cum(groceryElog)
#' plot(cum, typ="l", frame = FALSE)
#' inc <- elog2inc(groceryElog)
#' plot(inc, typ="l", frame = FALSE)
elog2cum <- function(elog, by = 7, first = FALSE) {
t0 <- N <- cust <- NULL # suppress checkUsage warnings
stopifnot("cust" %in% names(elog))
stopifnot(is.logical(first) & length(first) == 1)
is.dt <- is.data.table(elog)
if (!is.dt) {
elog <- as.data.table(elog)
} else {
elog <- copy(elog)
}
if (!"t" %in% names(elog)) {
stopifnot("date" %in% names(elog))
cohort_start <- min(as.numeric(elog$date))
elog[, `:=`(t, as.numeric(date) - cohort_start)]
}
elog <- unique(elog[, list(cust, t)])
elog[, `:=`(t0, min(t)), by = "cust"]
grid <- data.table(t = 0 : ceiling(max(elog$t)))
grid <- merge(grid, elog[first | t > t0, .N, keyby = list(t = ceiling(t))], all.x = TRUE, by = "t")
grid <- grid[is.na(N), N := 0L]
cum <- cumsum(grid$N)
cum <- cum[seq(by, length(cum), by = by)]
return(cum)
}
#' @rdname elog2cum
#' @export
elog2inc <- function(elog, by = 7, first = FALSE) {
cum <- elog2cum(elog = elog, by = by, first = first)
return(diff(cum))
}
#' Check Model Parameters
#'
#' Wrapper for \code{BTYD::dc.check.model.params} with additional check for
#' parameter names if these are present
#'
#' @keywords internal
dc.check.model.params.safe <- function(printnames, params, func) {
# first do basic checks
BTYD::dc.check.model.params(printnames, params, func)
# then check for names, if these are present
if (!is.null(names(params))) {
idx <- names(params) != ""
if (any(printnames[idx] != names(params)[idx])) {
stop("Parameter names do not match - ", paste0(printnames, collapse = ","), " != ",
paste0(names(params), collapse = ","), call. = FALSE)
}
}
}
#' Generic Method for Tracking Plots
#'
#' @keywords internal
dc.PlotTracking <- function(actual, expected, T.cal = NULL,
xlab = "", ylab = "", title = "",
xticklab = NULL, ymax = NULL,
legend = c("Actual", "Model")) {
stopifnot(is.numeric(actual))
stopifnot(is.numeric(expected))
stopifnot(all(actual >= 0))
stopifnot(all(expected >= 0))
stopifnot(length(actual) == length(expected))
if (is.null(ymax)) ymax <- max(c(actual, expected)) * 1.05
plot(actual, type = "l", xaxt = "n", xlab = xlab, ylab = ylab, col = 1, ylim = c(0, ymax), main = title)
lines(expected, lty = 2, col = 2)
if (is.null(xticklab)) {
axis(1, at = 1:length(actual), labels = 1:length(actual))
} else {
if (length(actual) != length(xticklab)) {
stop("Plot error, xticklab does not have the correct size")
}
axis(1, at = 1:length(actual), labels = xticklab)
}
if (!is.null(T.cal)) abline(v = max(T.cal), lty = 2)
if (!is.null(legend) & length(legend) == 2) {
pos <- ifelse(which.max(expected) == length(expected), "bottomright", "topright")
legend(pos, legend = legend, col = 1:2, lty = 1:2, lwd = 1)
}
return(rbind(actual, expected))
}
#' Generic Method for Plotting Frequency vs. Conditional Expected Frequency
#'
#' @keywords internal
dc.PlotFreqVsConditionalExpectedFrequency <- function(x, actual, expected, censor,
xlab, ylab, xticklab, title) {
bin <- bin.size <- transaction.actual <- transaction.expected <- N <- NULL # suppress checkUsage warnings
if (length(x) != length(actual) | length(x) != length(expected) |
!is.numeric(x) | !is.numeric(actual) | !is.numeric(expected) |
any(x < 0) | any(actual < 0) | any(expected < 0))
stop("x, actual and expected must be non-negative numeric vectors of same length.")
if (censor > max(x)) censor <- max(x)
dt <- data.table(x, actual, expected)
dt[, bin := pmin(x, censor)]
st <- dt[, list(transaction.actual = mean(actual),
transaction.expected = mean(expected),
bin.size = .N), keyby = bin]
st <- merge(data.table(bin = 0:censor), st, all.x = TRUE, by = "bin")
comparison <- t(st)[-1, ]
col.names <- paste(rep("freq", length(censor + 1)), (0:censor), sep = ".")
col.names[censor + 1] <- paste0(col.names[censor + 1], "+")
colnames(comparison) <- col.names
if (is.null(xticklab) == FALSE) {
x.labels <- xticklab
} else {
if (censor < ncol(comparison)) {
x.labels <- 0:(censor)
x.labels[censor + 1] <- paste0(censor, "+")
} else {
x.labels <- 0:(ncol(comparison))
}
}
actual <- comparison[1, ]
expected <- comparison[2, ]
ylim <- c(0, ceiling(max(c(actual, expected)) * 1.1))
plot(actual, type = "l", xaxt = "n", col = 1, ylim = ylim, xlab = xlab, ylab = ylab, main = title)
lines(expected, lty = 2, col = 2)
axis(1, at = 1:ncol(comparison), labels = x.labels)
legend("topleft", legend = c("Actual", "Model"), col = 1:2, lty = 1:2, lwd = 1)
return(comparison)
}
#' Generic Method for Plotting Recency vs. Conditional Expected Frequency
#'
#' @keywords internal
dc.PlotRecVsConditionalExpectedFrequency <- function(t.x, actual, expected,
xlab, ylab, xticklab, title) {
bin <- bin.size <- N <- NULL # suppress checkUsage warnings
if (length(t.x) != length(actual) | length(t.x) != length(expected) |
!is.numeric(t.x) | !is.numeric(actual) | !is.numeric(expected) |
any(t.x < 0) | any(actual < 0) | any(expected < 0))
stop("t.x, actual and expected must be non-negative numeric vectors of same length.")
dt <- data.table(t.x, actual, expected)
dt[, bin := floor(t.x)]
st <- dt[, list(actual = mean(actual),
expected = mean(expected),
bin.size = .N), keyby = bin]
st <- merge(data.table(bin = 1:floor(max(t.x))), st[bin > 0], all.x = TRUE, by = "bin")
comparison <- t(st)[-1, ]
x.labels <- NULL
if (is.null(xticklab) == FALSE) {
x.labels <- xticklab
} else {
x.labels <- 1:max(st$bin)
}
actual <- comparison[1, ]
expected <- comparison[2, ]
ylim <- c(0, ceiling(max(c(actual, expected), na.rm = TRUE) * 1.1))
plot(actual, type = "l", xaxt = "n", col = 1, ylim = ylim, xlab = xlab, ylab = ylab, main = title)
lines(expected, lty = 2, col = 2)
axis(1, at = 1:ncol(comparison), labels = x.labels)
legend("topleft", legend = c("Actual", "Model"), col = 1:2, lty = 1:2, lwd = 1)
return(comparison)
}
|
/scratch/gouwar.j/cran-all/cranData/BTYDplus/R/helpers.R
|
#' (M)BG/CNBD-k Parameter Estimation
#'
#' Estimates parameters for the (M)BG/CNBD-k model via Maximum Likelihood
#' Estimation.
#'
#' @param cal.cbs Calibration period customer-by-sufficient-statistic (CBS)
#' data.frame. It must contain a row for each customer, and columns \code{x}
#' for frequency, \code{t.x} for recency, \code{T.cal} for the total time
#' observed, as well as the sum over logarithmic intertransaction times
#' \code{litt}, in case that \code{k} is not provided. A correct format can be
#' easily generated based on the complete event log of a customer cohort with
#' \code{\link{elog2cbs}}.
#' @param k Integer-valued degree of regularity for Erlang-k distributed
#' interpurchase times. By default this \code{k} is not provided, and a grid
#' search from 1 to 12 is performed in order to determine the best-fitting
#' \code{k}. The grid search is stopped early if the log-likelihood does not
#' increase anymore when increasing \code{k} beyond 4.
#' @param par.start Initial (M)BG/CNBD-k parameters. A vector with \code{r},
#' \code{alpha}, \code{a} and \code{b} in that order.
#' @param max.param.value Upper bound on parameters.
#' @param trace If larger than 0, then the parameter values are printed every
#' \code{trace} steps of the maximum likelihood estimation search.
#' @return A vector of estimated parameters.
#' @export
#' @seealso \code{\link[BTYD]{bgnbd.EstimateParameters}}
#' @references (M)BG/CNBD-k: Reutterer, T., Platzer, M., & Schroeder, N. (2020).
#' Leveraging purchase regularity for predicting customer behavior the easy
#' way. International Journal of Research in Marketing.
#' \doi{10.1016/j.ijresmar.2020.09.002}
#' @references Batislam, E. P., Denizel, M., & Filiztekin, A. (2007). Empirical
#' validation and comparison of models for customer base analysis.
#' International Journal of Research in Marketing, 24(3), 201-209.
#' \doi{10.1016/j.ijresmar.2006.12.005}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' (params <- mbgcnbd.EstimateParameters(cbs))
#' }
mbgcnbd.EstimateParameters <- function(cal.cbs, k = NULL,
par.start = c(1, 3, 1, 3), max.param.value = 10000,
trace = 0) {
xbgcnbd.EstimateParameters(cal.cbs, k = k,
par.start = par.start, max.param.value = max.param.value,
trace = trace, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.EstimateParameters
#' @export
bgcnbd.EstimateParameters <- function(cal.cbs, k = NULL,
par.start = c(1, 3, 1, 3), max.param.value = 10000,
trace = 0) {
xbgcnbd.EstimateParameters(cal.cbs, k = k,
par.start = par.start, max.param.value = max.param.value,
trace = trace, dropout_at_zero = FALSE)
}
#' @rdname mbgcnbd.EstimateParameters
#' @export
mbgnbd.EstimateParameters <- function(cal.cbs,
par.start = c(1, 3, 1, 3), max.param.value = 10000,
trace = 0) {
xbgcnbd.EstimateParameters(cal.cbs, k = 1,
par.start = par.start, max.param.value = max.param.value,
trace = trace, dropout_at_zero = TRUE)
}
#' @keywords internal
xbgcnbd.EstimateParameters <- function(cal.cbs, k = NULL,
par.start = c(1, 3, 1, 3), max.param.value = 10000,
trace = 0, dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
dc.check.model.params.safe(c("r", "alpha", "a", "b"), par.start, "xbgcnbd.EstimateParameters")
# either `k` or `litt` need to be present
if (is.null(k) & !"litt" %in% colnames(cal.cbs))
stop("Either regularity parameter k need to be specified, or a ",
"column with logarithmic interpurchase times litt need to be present in cal.cbs")
# if `k` is not specified we do grid search for `k`
if (is.null(k)) {
params <- list()
LL <- c()
for (k in 1:12) {
params[[k]] <- tryCatch(
xbgcnbd.EstimateParameters(
cal.cbs = cal.cbs, k = k, par.start = par.start,
max.param.value = max.param.value, trace = trace, dropout_at_zero = dropout_at_zero),
error = function(e) return(e))
if (inherits(params[[k]], "error")) {
params[[k]] <- NULL
break # stop if parameters could not be estimated, e.g. if xbgcnbd.LL returns Inf
}
LL[k] <- xbgcnbd.cbs.LL(params = params[[k]], cal.cbs = cal.cbs, dropout_at_zero = dropout_at_zero)
if (k > 2 && LL[k] < LL[k - 1] && LL[k - 1] < LL[k - 2])
break # stop if LL gets worse for increasing k
}
k <- which.max(LL)
return(params[[k]])
}
# if `litt` is missing, we set it to zero, so that xbgcnbd.cbs.LL does not complain; however this makes LL values
# for different k values not comparable
if (!"litt" %in% colnames(cal.cbs))
cal.cbs[, "litt"] <- 0
count <- 0
xbgcnbd.eLL <- function(params, k, cal.cbs, max.param.value, dropout_at_zero) {
params <- exp(params)
params[params > max.param.value] <- max.param.value
params <- c(k, params)
loglik <- xbgcnbd.cbs.LL(params = params, cal.cbs = cal.cbs, dropout_at_zero = dropout_at_zero)
count <<- count + 1
if (trace > 0 & count %% trace == 0) {
cat("xbgcnbd.EstimateParameters - k:", sprintf("%2.0f", k),
" step:", sprintf("%4.0f", count), " - ",
sprintf("%12.1f", loglik), ":", sprintf("%10.4f", params), "\n")
}
return(-1 * loglik)
}
logparams <- log(par.start)
results <- optim(logparams, xbgcnbd.eLL,
cal.cbs = cal.cbs, k = k, max.param.value = max.param.value, dropout_at_zero = dropout_at_zero,
method = "L-BFGS-B")
estimated.params <- exp(results$par)
estimated.params[estimated.params > max.param.value] <- max.param.value
estimated.params <- c(k, estimated.params)
names(estimated.params) <- c("k", "r", "alpha", "a", "b")
return(estimated.params)
}
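# Illustrative sketch (not part of the package): the log-parameterization trick
# used in xbgcnbd.EstimateParameters() above -- search over log(params) and
# exponentiate inside the objective, so optim() works unconstrained while the
# model only ever sees strictly positive parameters. Shown on a plain gamma MLE;
# wrapped in if (FALSE) so it never runs when this file is sourced.
if (FALSE) {
  set.seed(1)
  y <- rgamma(500, shape = 2, rate = 3)
  negll <- function(logpar) {
    par <- exp(logpar)   # back-transform to (shape, rate) > 0
    -sum(dgamma(y, shape = par[1], rate = par[2], log = TRUE))
  }
  fit <- optim(log(c(1, 1)), negll, method = "L-BFGS-B")
  exp(fit$par)           # roughly c(2, 3)
}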
#' (M)BG/CNBD-k Log-Likelihood
#'
#' Calculates the log-likelihood of the (M)BG/CNBD-k model.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param cal.cbs Calibration period customer-by-sufficient-statistic (CBS)
#' data.frame. It must contain a row for each customer, and columns \code{x}
#' for frequency, \code{t.x} for recency, \code{T.cal} for the total time
#' observed, as well as the sum over logarithmic intertransaction times
#' \code{litt}. A correct format can be easily generated based on the complete
#' event log of a customer cohort with \code{\link{elog2cbs}}.
#' @param x frequency, i.e. number of re-purchases
#' @param t.x recency, i.e. time elapsed from first purchase to last purchase
#' @param T.cal total time of observation period
#' @param litt sum of logarithmic interpurchase times
#' @return For \code{bgcnbd.cbs.LL}, the total log-likelihood of the provided
#' data. For \code{bgcnbd.LL}, a vector of log-likelihoods as long as the
#' longest input vector (\code{x}, \code{t.x}, or \code{T.cal}).
#' @references (M)BG/CNBD-k: Reutterer, T., Platzer, M., & Schroeder, N. (2020).
#' Leveraging purchase regularity for predicting customer behavior the easy
#' way. International Journal of Research in Marketing.
#' \doi{10.1016/j.ijresmar.2020.09.002}
#' @export
mbgcnbd.cbs.LL <- function(params, cal.cbs) {
xbgcnbd.cbs.LL(params, cal.cbs, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.cbs.LL
#' @export
mbgcnbd.LL <- function(params, x, t.x, T.cal, litt) {
xbgcnbd.LL(params, x, t.x, T.cal, litt, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.cbs.LL
#' @export
bgcnbd.cbs.LL <- function(params, cal.cbs) {
xbgcnbd.cbs.LL(params, cal.cbs, dropout_at_zero = FALSE)
}
#' @rdname mbgcnbd.cbs.LL
#' @export
bgcnbd.LL <- function(params, x, t.x, T.cal, litt) {
xbgcnbd.LL(params, x, t.x, T.cal, litt, dropout_at_zero = FALSE)
}
#' @keywords internal
xbgcnbd.cbs.LL <- function(params, cal.cbs, dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.cbs.LL")
tryCatch(x <- cal.cbs$x,
error = function(e) stop("cal.cbs must have a frequency column labelled \"x\""))
tryCatch(t.x <- cal.cbs$t.x,
error = function(e) stop("cal.cbs must have a recency column labelled \"t.x\""))
tryCatch(T.cal <- cal.cbs$T.cal,
error = function(e) stop("cal.cbs must have a column for length of time observed labelled \"T.cal\""))
tryCatch(litt <- cal.cbs$litt,
error = function(e) stop("cal.cbs must have a column for ",
"sum over logarithmic inter-transaction-times labelled \"litt\""))
ll <- xbgcnbd.LL(params = params, x = x, t.x = t.x, T.cal = T.cal,
litt = litt, dropout_at_zero = dropout_at_zero)
return(sum(ll))
}
#' @keywords internal
xbgcnbd.LL <- function(params, x, t.x, T.cal, litt, dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
max.length <- max(length(x), length(t.x), length(T.cal))
if (max.length %% length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length %% length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length %% length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
if (max.length %% length(litt))
warning("Maximum vector length not a multiple of the length of litt")
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.LL")
if (params[1] != floor(params[1]) | params[1] < 1)
stop("k must be integer being greater or equal to 1.")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
litt <- rep(litt, length.out = max.length)
k <- params[1]
r <- params[2]
alpha <- params[3]
a <- params[4]
b <- params[5]
P1 <- (k - 1) * litt - x * log(factorial(k - 1))
P2 <- lbeta(a, b + x + ifelse(dropout_at_zero, 1, 0)) - lbeta(a, b)
P3 <- lgamma(r + k * x) - lgamma(r) + r * log(alpha)
P4 <- -1 * (r + k * x) * log(alpha + T.cal)
S1 <- as.numeric(dropout_at_zero | x > 0) *
a / (b + x - 1 + ifelse(dropout_at_zero, 1, 0)) *
( (alpha + T.cal) / (alpha + t.x)) ^ (r + k * x)
S2 <- 1
if (k > 1) {
for (j in 1:(k - 1)) {
S2a <- 1
for (i in 0:(j - 1)) S2a <- S2a * (r + k * x + i)
S2 <- S2 + (S2a * (T.cal - t.x) ^ j) / (factorial(j) * (alpha + T.cal) ^ j)
}
}
return(P1 + P2 + P3 + P4 + log(S1 + S2))
}
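# Illustrative sketch (not part of the package): mbgcnbd.cbs.LL() is simply the
# sum of the per-customer log-likelihoods returned by mbgcnbd.LL(), as
# implemented in xbgcnbd.cbs.LL() above. Parameter values are arbitrary;
# wrapped in if (FALSE) so it never runs when this file is sourced.
if (FALSE) {
  params <- c(k = 1, r = 0.5, alpha = 6, a = 0.9, b = 2.5)  # made-up values
  cbs <- data.frame(x = c(0, 3), t.x = c(0, 30), T.cal = c(52, 52), litt = c(0, 4.1))
  ll <- mbgcnbd.LL(params, cbs$x, cbs$t.x, cbs$T.cal, cbs$litt)
  all.equal(sum(ll), mbgcnbd.cbs.LL(params, cbs))           # TRUE
}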
#' (M)BG/CNBD-k Probability Mass Function
#'
#' Uses (M)BG/CNBD-k model parameters to return the probability distribution of
#' purchase frequencies for a random customer in a given time period, i.e.
#' \eqn{P(X(t)=x|r,alpha,a,b)}.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param t Length of the time period for which the probability is being computed.
#' May also be a vector.
#' @param x Number of repeat transactions for which probability is calculated.
#' May also be a vector.
#' @return \eqn{P(X(t)=x|r,alpha,a,b)}. If either \code{t} or \code{x} is a
#' vector, then the output will be a vector as well. If both are vectors, the
#' output will be a matrix.
#' @export
#' @references (M)BG/CNBD-k: Reutterer, T., Platzer, M., & Schroeder, N. (2020).
#' Leveraging purchase regularity for predicting customer behavior the easy
#' way. International Journal of Research in Marketing.
#' \doi{10.1016/j.ijresmar.2020.09.002}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' params <- mbgcnbd.EstimateParameters(cbs)
#' mbgcnbd.pmf(params, t = 52, x = 0:6)
#' mbgcnbd.pmf(params, t = c(26, 52), x = 0:6)
#' }
mbgcnbd.pmf <- function(params, t, x) {
xbgcnbd.pmf(params, t, x, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.pmf
#' @export
bgcnbd.pmf <- function(params, t, x) {
xbgcnbd.pmf(params, t, x, dropout_at_zero = FALSE)
}
#' @keywords internal
xbgcnbd.pmf <- function(params, t, x, dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.pmf")
if (params[1] != floor(params[1]) | params[1] < 1)
stop("k must be integer being greater or equal to 1.")
pmf <- as.matrix(sapply(t, function(t) {
sapply(x, function(x) {
xbgcnbd_pmf_cpp(params, t, x, dropout_at_zero) # call fast C++ implementation
})
}))
if (length(x) == 1) pmf <- t(pmf)
rownames(pmf) <- x
colnames(pmf) <- t
drop(pmf)
}
#' (M)BG/CNBD-k Expectation
#'
#' Returns the number of repeat transactions that a randomly chosen customer
#' (for whom we have no prior information) is expected to make in a given time
#' period, i.e. \eqn{E(X(t) | k, r, alpha, a, b)}.
#'
#' Note: Computational time increases with the number of unique values of
#' \code{t}.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param t Length of time for which we are calculating the expected number of repeat transactions.
#' @return Number of repeat transactions a customer is expected to make in a time period of length t.
#' @export
#' @references (M)BG/CNBD-k: Reutterer, T., Platzer, M., & Schroeder, N. (2020).
#' Leveraging purchase regularity for predicting customer behavior the easy
#' way. International Journal of Research in Marketing.
#' \doi{10.1016/j.ijresmar.2020.09.002}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' params <- mbgcnbd.EstimateParameters(cbs)
#' mbgcnbd.Expectation(params, t = c(26, 52))
#' }
mbgcnbd.Expectation <- function(params, t) {
xbgcnbd.Expectation(params, t, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.Expectation
#' @export
bgcnbd.Expectation <- function(params, t) {
xbgcnbd.Expectation(params, t, dropout_at_zero = FALSE)
}
#' @keywords internal
xbgcnbd.Expectation <- function(params, t, dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.Expectation")
if (any(t < 0) || !is.numeric(t))
stop("t must be numeric and may not contain negative numbers.")
# estimate computation time, and warn if it will take too long
if (uniqueN(t) > 100) {
estimated_time <- system.time(xbgcnbd.Expectation(params, max(t), dropout_at_zero))["elapsed"]
if (uniqueN(t) * estimated_time > 60) {
cat("note: computation will take long for many unique time values (`t`, `T.cal`, `T.star`) - consider rounding!")
}
}
# to save computation time, we collapse vector `t` on to its unique values
ts <- unique(t)
ts_map <- xbgcnbd_exp_cpp(params, ts, dropout_at_zero)
names(ts_map) <- ts
res <- (ts_map[as.character(t)])
return(res)
}
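# Illustrative sketch (not part of the package): the unique-value shortcut used
# in xbgcnbd.Expectation() above -- evaluate a costly function once per unique
# input and map the results back onto the full vector by name. Wrapped in
# if (FALSE) so it never runs when this file is sourced.
if (FALSE) {
  slow_f <- function(t) { Sys.sleep(0.01); sqrt(t) }  # stand-in for a costly call
  t_vec <- rep(c(26, 52), times = 500)
  ts <- unique(t_vec)
  ts_map <- vapply(ts, slow_f, numeric(1))
  names(ts_map) <- ts
  res <- unname(ts_map[as.character(t_vec)])          # 2 slow calls instead of 1000
}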
#' (M)BG/CNBD-k P(alive)
#'
#' Uses (M)BG/CNBD-k model parameters and a customer's past transaction behavior
#' to return the probability that they are still alive at the end of the
#' calibration period.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param x Number of repeat transactions in the calibration period T.cal, or a
#' vector of calibration period frequencies.
#' @param t.x Recency, i.e. length between first and last transaction during
#' calibration period.
#' @param T.cal Length of calibration period, or a vector of calibration period
#' lengths.
#' @return Probability that the customer is still alive at the end of the
#' calibration period.
#' @export
#' @references (M)BG/CNBD-k: Reutterer, T., Platzer, M., & Schroeder, N. (2020).
#' Leveraging purchase regularity for predicting customer behavior the easy
#' way. International Journal of Research in Marketing.
#' \doi{10.1016/j.ijresmar.2020.09.002}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' params <- mbgcnbd.EstimateParameters(cbs)
#' palive <- mbgcnbd.PAlive(params, cbs$x, cbs$t.x, cbs$T.cal)
#' head(palive) # Probability of being alive for first 6 customers
#' mean(palive) # Estimated share of customers to be still alive
#' }
mbgcnbd.PAlive <- function(params, x, t.x, T.cal) {
xbgcnbd.PAlive(params, x, t.x, T.cal, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.PAlive
#' @export
bgcnbd.PAlive <- function(params, x, t.x, T.cal) {
xbgcnbd.PAlive(params, x, t.x, T.cal, dropout_at_zero = FALSE)
}
#' @keywords internal
xbgcnbd.PAlive <- function(params, x, t.x, T.cal, dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
max.length <- max(length(x), length(t.x), length(T.cal))
if (max.length %% length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length %% length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length %% length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.PAlive")
if (params[1] != floor(params[1]) | params[1] < 1)
stop("k must be integer being greater or equal to 1.")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
k <- params[1]
r <- params[2]
alpha <- params[3]
a <- params[4]
b <- params[5]
P1 <- (a / (b + x - 1 + ifelse(dropout_at_zero, 1, 0))) * ( (alpha + T.cal) / (alpha + t.x)) ^ (r + k * x)
P2 <- 1
if (k > 1) {
for (j in 1:(k - 1)) {
P2a <- 1
for (i in 0:(j - 1)) P2a <- P2a * (r + k * x + i)
P2 <- P2 + ( (P2a * (T.cal - t.x) ^ j) / (factorial(j) * (alpha + T.cal) ^ j))
}
}
palive <- (1 / (1 + P1 / P2))
if (dropout_at_zero == FALSE)
palive[x == 0] <- 1
return(palive)
}
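# Illustrative sketch (not part of the package): for k = 1 the correction term
# P2 above equals 1, so the MBG/NBD P(alive) reduces to
# 1 / (1 + a / (b + x) * ((alpha + T.cal) / (alpha + t.x))^(r + x)).
# Arbitrary values; wrapped in if (FALSE) so it never runs when sourced.
if (FALSE) {
  params <- c(k = 1, r = 0.5, alpha = 6, a = 0.9, b = 2.5)  # made-up values
  x <- 3; t.x <- 30; T.cal <- 52
  direct <- 1 / (1 + (0.9 / (2.5 + 3)) * ((6 + 52) / (6 + 30)) ^ (0.5 + 3))
  all.equal(direct, mbgcnbd.PAlive(params, x, t.x, T.cal))  # TRUE
}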
#' (M)BG/CNBD-k Conditional Expected Transactions
#'
#' Uses (M)BG/CNBD-k model parameters and a customer's past transaction behavior
#' to return the number of transactions they are expected to make in a given
#' time period.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param T.star Length of time for which we are calculating the expected number
#' of transactions.
#' @param x Number of repeat transactions in the calibration period T.cal, or a
#' vector of calibration period frequencies.
#' @param t.x Recency, i.e. length between first and last transaction during
#' calibration period.
#' @param T.cal Length of calibration period, or a vector of calibration period
#' lengths.
#' @return Number of transactions a customer is expected to make in a time
#' period of length t, conditional on their past behavior. If any of the input
#' parameters has a length greater than 1, this will be a vector of expected
#' number of transactions.
#' @export
#' @references (M)BG/CNBD-k: Reutterer, T., Platzer, M., & Schroeder, N. (2020).
#' Leveraging purchase regularity for predicting customer behavior the easy
#' way. International Journal of Research in Marketing.
#' \doi{10.1016/j.ijresmar.2020.09.002}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' params <- mbgcnbd.EstimateParameters(cbs, k = 2)
#' # estimate transactions for next 12 weeks
#' xstar.est <- mbgcnbd.ConditionalExpectedTransactions(params,
#' T.star = 12, cbs$x, cbs$t.x, cbs$T.cal)
#' head(xstar.est) # expected number of transactions for first 6 customers
#' sum(xstar.est) # expected total number of transactions during holdout
#' }
mbgcnbd.ConditionalExpectedTransactions <- function(params, T.star, x, t.x, T.cal) {
xbgcnbd.ConditionalExpectedTransactions(params, T.star, x, t.x, T.cal, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.ConditionalExpectedTransactions
#' @export
bgcnbd.ConditionalExpectedTransactions <- function(params, T.star, x, t.x, T.cal) {
xbgcnbd.ConditionalExpectedTransactions(params, T.star, x, t.x, T.cal, dropout_at_zero = FALSE)
}
#' @keywords internal
xbgcnbd.ConditionalExpectedTransactions <- function(params, T.star, x, t.x, T.cal, dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
max.length <- max(length(T.star), length(x), length(t.x), length(T.cal))
if (max.length %% length(T.star))
warning("Maximum vector length not a multiple of the length of T.star")
if (max.length %% length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length %% length(t.x))
warning("Maximum vector length not a multiple of the length of t.x")
if (max.length %% length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.ConditionalExpectedTransactions")
if (params[1] != floor(params[1]) | params[1] < 1)
stop("k must be integer being greater or equal to 1.")
if (any(T.star < 0) || !is.numeric(T.star))
stop("T.star must be numeric and may not contain negative numbers.")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(t.x < 0) || !is.numeric(t.x))
stop("t.x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
t.x <- rep(t.x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
T.star <- rep(T.star, length.out = max.length)
k <- params[1]
r <- params[2]
alpha <- params[3]
a <- params[4]
b <- params[5]
if (round(a, 2) == 1)
a <- a + 0.01 # P1 not defined for a=1, so we add slight noise in such rare cases
if (requireNamespace("gsl", quietly = TRUE)) {
h2f1 <- gsl::hyperg_2F1
} else {
# custom R implementation of h2f1 taken from BTYD source code
h2f1 <- function(a, b, c, z) {
lenz <- length(z)
j <- 0
uj <- 1:lenz
uj <- uj / uj
y <- uj
lteps <- 0
while (lteps < lenz) {
lasty <- y
j <- j + 1
uj <- uj * (a + j - 1) * (b + j - 1) / (c + j - 1) * z / j
y <- y + uj
lteps <- sum(y == lasty)
}
return(y)
}
}
# approximate via expression for conditional expected transactions for BG/NBD
# model, but adjust scale parameter by k
G <- function(r, alpha, a, b) 1 - (alpha / (alpha + T.star)) ^ r * h2f1(r, b + 1, a + b, T.star / (alpha + T.star))
P1 <- (a + b + x - 1 + ifelse(dropout_at_zero, 1, 0)) / (a - 1)
P2 <- G(r + x, k * alpha + T.cal, a, b + x - 1 + ifelse(dropout_at_zero, 1, 0))
P3 <- xbgcnbd.PAlive(params = params, x = x, t.x = t.x, T.cal = T.cal, dropout_at_zero = dropout_at_zero)
exp <- P1 * P2 * P3
# Adjust the bias of the BG/NBD-based approximation by scaling via the unconditional
# expectations (for which we have an exact expression). Only do so if we can
# safely assume that the full customer cohort is passed.
do.bias.corr <- k > 1 && length(x) == length(t.x) && length(x) == length(T.cal) && length(x) >= 100
if (do.bias.corr) {
sum.cal <- sum(xbgcnbd.Expectation(params = params, t = T.cal, dropout_at_zero = dropout_at_zero))
sum.tot <- sum(xbgcnbd.Expectation(params = params, t = T.cal + T.star, dropout_at_zero = dropout_at_zero))
bias.corr <- (sum.tot - sum.cal) / sum(exp)
exp <- exp * bias.corr
}
return(unname(exp))
}
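# Illustrative sketch (not part of the package): sanity check of the pure-R
# 2F1 series fallback above against gsl::hyperg_2F1 (when gsl is installed).
# The series converges for 0 <= z < 1, which is the only range needed here
# since z = T.star / (alpha + T.star). Wrapped in if (FALSE) so it never runs
# when this file is sourced.
if (FALSE) {
  h2f1_series <- function(a, b, c, z) {
    y <- uj <- rep(1, length(z))
    j <- 0
    repeat {
      lasty <- y
      j <- j + 1
      uj <- uj * (a + j - 1) * (b + j - 1) / (c + j - 1) * z / j
      y <- y + uj
      if (all(y == lasty)) break
    }
    y
  }
  all.equal(h2f1_series(0.5, 3.5, 3.4, 0.2), gsl::hyperg_2F1(0.5, 3.5, 3.4, 0.2))
}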
#' (M)BG/CNBD-k Plot Frequency in Calibration Period
#'
#' Plots a histogram and returns a matrix comparing the actual and expected
#' number of customers who made a certain number of repeat transactions in the
#' calibration period, binned according to calibration period frequencies.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param cal.cbs Calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ('x') and total time observed ('T.cal').
#' @param censor Cutoff point for number of transactions in plot.
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param title Title placed on the top-center of the plot.
#' @return Calibration period repeat transaction frequency comparison matrix
#' (actual vs. expected).
#' @export
#' @references (M)BG/CNBD-k: Reutterer, T., Platzer, M., & Schroeder, N. (2020).
#' Leveraging purchase regularity for predicting customer behavior the easy
#' way. International Journal of Research in Marketing.
#' \doi{10.1016/j.ijresmar.2020.09.002}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' params <- mbgcnbd.EstimateParameters(cbs)
#' mbgcnbd.PlotFrequencyInCalibration(params, cbs)
#' }
mbgcnbd.PlotFrequencyInCalibration <- function(params, cal.cbs, censor = 7,
xlab = "Calibration period transactions",
ylab = "Customers",
title = "Frequency of Repeat Transactions") {
xbgcnbd.PlotFrequencyInCalibration(params, cal.cbs, censor, xlab, ylab, title, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.PlotFrequencyInCalibration
#' @export
bgcnbd.PlotFrequencyInCalibration <- function(params, cal.cbs, censor = 7,
xlab = "Calibration period transactions",
ylab = "Customers",
title = "Frequency of Repeat Transactions") {
xbgcnbd.PlotFrequencyInCalibration(params, cal.cbs, censor, xlab, ylab, title, dropout_at_zero = FALSE)
}
#' @keywords internal
xbgcnbd.PlotFrequencyInCalibration <- function(params, cal.cbs, censor = 7,
xlab = "Calibration period transactions",
ylab = "Customers",
title = "Frequency of Repeat Transactions",
dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
tryCatch(x_act <- cal.cbs$x,
error = function(e) stop("cal.cbs must have a frequency column labelled \"x\""))
tryCatch(T.cal <- cal.cbs$T.cal,
error = function(e) stop("cal.cbs must have a column for length of time observed labelled \"T.cal\""))
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.PlotFrequencyInCalibration")
# actual
x_act[x_act > censor] <- censor
x_act <- table(x_act)
# expected
x_est <- sapply(unique(T.cal), function(tcal) {
n <- sum(cal.cbs$T.cal == tcal)
prop <- xbgcnbd.pmf(params, t = tcal, x = 0:(censor - 1), dropout_at_zero = dropout_at_zero)
prop <- c(prop, 1 - sum(prop))
prop * (n / nrow(cal.cbs))
})
x_est <- apply(x_est, 1, sum) * nrow(cal.cbs)
mat <- matrix(c(x_act, x_est), nrow = 2, ncol = censor + 1, byrow = TRUE)
rownames(mat) <- c("n.x.actual", "n.x.expected")
colnames(mat) <- c(0:(censor - 1), paste0(censor, "+"))
barplot(mat, beside = TRUE, col = 1:2, main = title, xlab = xlab, ylab = ylab, ylim = c(0, max(mat) * 1.1))
legend("topright", legend = c("Actual", "Model"), col = 1:2, lty = 1, lwd = 2)
colnames(mat) <- paste0("freq.", colnames(mat))
mat
}
#' (M)BG/CNBD-k Expected Cumulative Transactions
#'
#' Calculates the expected cumulative total repeat transactions by all customers
#' for the calibration and holdout periods.
#'
#' Note: Computational time increases with the number of unique values of
#' \code{T.cal}.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param T.cal A vector to represent customers' calibration period lengths.
#' @param T.tot End of holdout period. Must be a single value, not a vector.
#' @param n.periods.final Number of time periods in the calibration and holdout
#' periods.
#' @return Vector of length \code{n.periods.final} with expected cumulative
#' total repeat transactions by all customers.
#' @export
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' params <- mbgcnbd.EstimateParameters(cbs, k = 2)
#' # Returns a vector containing expected cumulative repeat transactions for 104
#' # weeks, with every eighth week being reported.
#' mbgcnbd.ExpectedCumulativeTransactions(params,
#' T.cal = cbs$T.cal,
#' T.tot = 104,
#' n.periods.final = 104 / 8)
#' }
mbgcnbd.ExpectedCumulativeTransactions <- function(params, T.cal, T.tot, n.periods.final) {
xbgcnbd.ExpectedCumulativeTransactions(params, T.cal, T.tot, n.periods.final, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.ExpectedCumulativeTransactions
#' @export
bgcnbd.ExpectedCumulativeTransactions <- function(params, T.cal, T.tot, n.periods.final) {
xbgcnbd.ExpectedCumulativeTransactions(params, T.cal, T.tot, n.periods.final, dropout_at_zero = FALSE)
}
#' @keywords internal
xbgcnbd.ExpectedCumulativeTransactions <- function(params, T.cal, T.tot, n.periods.final, dropout_at_zero = NULL) {
stopifnot(!is.null(dropout_at_zero))
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.ExpectedCumulativeTransactions")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
if (length(T.tot) > 1 || T.tot < 0 || !is.numeric(T.tot))
stop("T.tot must be a single numeric value and may not be negative.")
if (length(n.periods.final) > 1 || n.periods.final < 0 || !is.numeric(n.periods.final))
stop("n.periods.final must be a single numeric value and may not be negative.")
intervals <- seq(T.tot / n.periods.final, T.tot, length.out = n.periods.final)
cust.birth.periods <- max(T.cal) - T.cal
expected.transactions <- sapply(intervals, function(interval) {
if (interval <= min(cust.birth.periods))
return(0)
sum(xbgcnbd.Expectation(params = params, t = interval - cust.birth.periods[cust.birth.periods < interval],
dropout_at_zero = dropout_at_zero))
})
return(expected.transactions)
}
#' (M)BG/CNBD-k Tracking Cumulative Transactions Plot
#'
#' Plots the actual and expected cumulative total repeat transactions by all
#' customers for the calibration and holdout periods, and returns this
#' comparison in a matrix.
#'
#' Note: Computational time increases with the number of unique values of
#' \code{T.cal}.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param T.cal A vector to represent customers' calibration period lengths.
#' @param T.tot End of holdout period. Must be a single value, not a vector.
#' @param actual.cu.tracking.data A vector containing the cumulative number of
#' repeat transactions made by customers for each period in the total time
#' period (both calibration and holdout periods).
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param xticklab A vector containing a label for each tick mark on the x axis.
#' @param title Title placed on the top-center of the plot.
#' @param ymax Upper boundary for y axis.
#' @param legend Plot legend labels. Defaults to `Actual` and `Model`.
#' @return Matrix containing actual and expected cumulative repeat transactions.
#' @export
#' @seealso \code{\link{mbgcnbd.ExpectedCumulativeTransactions}}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' groceryElog <- groceryElog[groceryElog$date < "2006-06-30", ]
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-04-30")
#' cum <- elog2cum(groceryElog)
#' params <- mbgcnbd.EstimateParameters(cbs, k = 2)
#' mbgcnbd.PlotTrackingCum(params, cbs$T.cal,
#' T.tot = max(cbs$T.cal + cbs$T.star), cum)
#' }
mbgcnbd.PlotTrackingCum <- function(params, T.cal, T.tot, actual.cu.tracking.data,
xlab = "Week", ylab = "Cumulative Transactions",
xticklab = NULL, title = "Tracking Cumulative Transactions",
ymax = NULL, legend = c("Actual", "Model")) {
xbgcnbd.PlotTrackingCum(params, T.cal, T.tot, actual.cu.tracking.data,
xlab, ylab, xticklab, title, ymax, dropout_at_zero = TRUE,
legend = legend)
}
#' @rdname mbgcnbd.PlotTrackingCum
#' @export
bgcnbd.PlotTrackingCum <- function(params, T.cal, T.tot, actual.cu.tracking.data,
xlab = "Week", ylab = "Cumulative Transactions",
xticklab = NULL, title = "Tracking Cumulative Transactions",
ymax = NULL, legend = c("Actual", "Model")) {
xbgcnbd.PlotTrackingCum(params, T.cal, T.tot, actual.cu.tracking.data,
xlab, ylab, xticklab, title, ymax, dropout_at_zero = FALSE,
legend = legend)
}
#' @keywords internal
xbgcnbd.PlotTrackingCum <- function(params, T.cal, T.tot, actual.cu.tracking.data,
xlab = "Week", ylab = "Cumulative Transactions",
xticklab = NULL, title = "Tracking Cumulative Transactions",
ymax = NULL, dropout_at_zero = NULL,
legend = c("Actual", "Model")) {
stopifnot(!is.null(dropout_at_zero))
actual <- actual.cu.tracking.data
expected <- xbgcnbd.ExpectedCumulativeTransactions(params, T.cal, T.tot, length(actual),
dropout_at_zero = dropout_at_zero)
dc.PlotTracking(actual = actual, expected = expected, T.cal = T.cal,
xlab = xlab, ylab = ylab, title = title, xticklab = xticklab, ymax = ymax,
legend = legend)
}
#' (M)BG/CNBD-k Tracking Incremental Transactions Comparison
#'
#' Plots the actual and expected incremental total repeat transactions by all
#' customers for the calibration and holdout periods, and returns this
#' comparison in a matrix.
#'
#' Note: Computational time increases with the number of unique values of
#' \code{T.cal}.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param T.cal A vector to represent customers' calibration period lengths.
#' @param T.tot End of holdout period. Must be a single value, not a vector.
#' @param actual.inc.tracking.data A vector containing the incremental number of
#' repeat transactions made by customers for each period in the total time
#' period (both calibration and holdout periods).
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param xticklab A vector containing a label for each tick mark on the x axis.
#' @param title Title placed on the top-center of the plot.
#' @param ymax Upper boundary for y axis.
#' @param legend Plot legend labels. Defaults to `Actual` and `Model`.
#' @return Matrix containing actual and expected incremental repeat transactions.
#' @export
#' @seealso \code{\link{mbgcnbd.ExpectedCumulativeTransactions}}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' groceryElog <- groceryElog[groceryElog$date < "2006-06-30", ]
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-04-30")
#' inc <- elog2inc(groceryElog)
#' params <- mbgcnbd.EstimateParameters(cbs, k = 2)
#' mbgcnbd.PlotTrackingInc(params, cbs$T.cal,
#' T.tot = max(cbs$T.cal + cbs$T.star), inc)
#' }
mbgcnbd.PlotTrackingInc <- function(params, T.cal, T.tot, actual.inc.tracking.data,
xlab = "Week", ylab = "Transactions",
xticklab = NULL, title = "Tracking Weekly Transactions",
ymax = NULL, legend = c("Actual", "Model")) {
xbgcnbd.PlotTrackingInc(params, T.cal, T.tot, actual.inc.tracking.data,
xlab, ylab, xticklab, title, ymax, dropout_at_zero = TRUE,
legend = legend)
}
#' @rdname mbgcnbd.PlotTrackingInc
#' @export
bgcnbd.PlotTrackingInc <- function(params, T.cal, T.tot, actual.inc.tracking.data,
xlab = "Week", ylab = "Transactions",
xticklab = NULL, title = "Tracking Weekly Transactions",
ymax = NULL, legend = c("Actual", "Model")) {
xbgcnbd.PlotTrackingInc(params, T.cal, T.tot, actual.inc.tracking.data,
xlab, ylab, xticklab, title, ymax, dropout_at_zero = FALSE,
legend = legend)
}
#' @keywords internal
xbgcnbd.PlotTrackingInc <- function(params, T.cal, T.tot, actual.inc.tracking.data,
xlab = "Week", ylab = "Transactions",
xticklab = NULL, title = "Tracking Weekly Transactions",
ymax = NULL, dropout_at_zero = NULL,
legend = c("Actual", "Model")) {
stopifnot(!is.null(dropout_at_zero))
actual <- actual.inc.tracking.data
expected_cum <- xbgcnbd.ExpectedCumulativeTransactions(params, T.cal, T.tot, length(actual),
dropout_at_zero = dropout_at_zero)
expected <- BTYD::dc.CumulativeToIncremental(expected_cum)
dc.PlotTracking(actual = actual, expected = expected, T.cal = T.cal,
xlab = xlab, ylab = ylab, title = title, xticklab = xticklab, ymax = ymax,
legend = legend)
}
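# Side note (a sketch, not part of the package API): BTYD::dc.CumulativeToIncremental
# is, up to the first element, equivalent to differencing the cumulative vector,
# so the incremental tracking curve can be reproduced by hand.
if (FALSE) {
  cum <- c(2, 5, 9, 14)
  inc <- c(cum[1], diff(cum)) # 2 3 4 5
  all.equal(inc, BTYD::dc.CumulativeToIncremental(cum))
}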
#' (M)BG/CNBD-k Plot Frequency vs. Conditional Expected Frequency
#'
#' Plots the actual and conditional expected number of transactions made by
#' customers in the holdout period, binned according to calibration period
#' frequencies, and returns this comparison in a matrix.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param T.star Length of the holdout period.
#' @param cal.cbs Calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ('x'), recency ('t.x') and total time
#' observed ('T.cal').
#' @param x.star Vector of transactions made by each customer in the holdout period.
#' @param censor Cutoff point for number of transactions in plot.
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param xticklab A vector containing a label for each tick mark on the x axis.
#' @param title Title placed on the top-center of the plot.
#' @return Holdout period transaction frequency comparison matrix (actual vs. expected).
#' @export
#' @seealso \code{\link{bgcnbd.PlotFreqVsConditionalExpectedFrequency}}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-09-30")
#' params <- mbgcnbd.EstimateParameters(cbs, k = 2)
#' mbgcnbd.PlotFreqVsConditionalExpectedFrequency(params, T.star = 52, cbs, cbs$x.star, censor = 7)
#' }
mbgcnbd.PlotFreqVsConditionalExpectedFrequency <- function(params, T.star, cal.cbs, x.star,
censor, xlab = "Calibration period transactions",
ylab = "Holdout period transactions", xticklab = NULL,
title = "Conditional Expectation") {
x.star.est <- mbgcnbd.ConditionalExpectedTransactions(params, T.star, cal.cbs$x, cal.cbs$t.x, cal.cbs$T.cal)
dc.PlotFreqVsConditionalExpectedFrequency(x = cal.cbs$x, actual = x.star, expected = x.star.est,
censor = censor, xlab = xlab, ylab = ylab, xticklab = xticklab,
title = title)
}
#' @rdname mbgcnbd.PlotFreqVsConditionalExpectedFrequency
#' @export
bgcnbd.PlotFreqVsConditionalExpectedFrequency <- function(params, T.star, cal.cbs, x.star,
censor, xlab = "Calibration period transactions",
ylab = "Holdout period transactions", xticklab = NULL,
title = "Conditional Expectation") {
x.star.est <- bgcnbd.ConditionalExpectedTransactions(params, T.star, cal.cbs$x, cal.cbs$t.x, cal.cbs$T.cal)
dc.PlotFreqVsConditionalExpectedFrequency(x = cal.cbs$x, actual = x.star, expected = x.star.est,
censor = censor, xlab = xlab, ylab = ylab, xticklab = xticklab,
title = title)
}
#' (M)BG/CNBD-k Plot Actual vs. Conditional Expected Frequency by Recency
#'
#' Plots the actual and conditional expected number of transactions made by
#' customers in the holdout period, binned according to calibration period
#' recencies, and returns this comparison in a matrix.
#'
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param cal.cbs Calibration period CBS (customer by sufficient statistic). It
#' must contain columns for frequency ('x'), recency ('t.x') and total time
#' observed ('T.cal').
#' @param T.star Length of the holdout period.
#' @param x.star Vector of transactions made by each customer in the holdout period.
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param xticklab A vector containing a label for each tick mark on the x axis.
#' @param title Title placed on the top-center of the plot.
#' @return Matrix comparing actual and conditional expected transactions in the holdout period.
#' @export
#' @seealso \code{\link{bgcnbd.PlotFreqVsConditionalExpectedFrequency}}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-09-30")
#' params <- mbgcnbd.EstimateParameters(cbs, k = 2)
#' mbgcnbd.PlotRecVsConditionalExpectedFrequency(params, cbs, T.star = 52, cbs$x.star)
#' }
mbgcnbd.PlotRecVsConditionalExpectedFrequency <- function(
params, cal.cbs, T.star, x.star,
xlab = "Calibration period recency",
ylab = "Holdout period transactions", xticklab = NULL,
title = "Actual vs. Conditional Expected Transactions by Recency") {
x.star.est <- mbgcnbd.ConditionalExpectedTransactions(params, T.star, cal.cbs$x, cal.cbs$t.x, cal.cbs$T.cal)
dc.PlotRecVsConditionalExpectedFrequency(t.x = cal.cbs$t.x, actual = x.star, expected = x.star.est,
xlab = xlab, ylab = ylab, xticklab = xticklab,
title = title)
}
#' @rdname mbgcnbd.PlotRecVsConditionalExpectedFrequency
#' @export
bgcnbd.PlotRecVsConditionalExpectedFrequency <- function(
params, cal.cbs, T.star, x.star,
xlab = "Calibration period recency",
ylab = "Holdout period transactions", xticklab = NULL,
title = "Actual vs. Conditional Expected Transactions by Recency") {
x.star.est <- bgcnbd.ConditionalExpectedTransactions(params, T.star, cal.cbs$x, cal.cbs$t.x, cal.cbs$T.cal)
dc.PlotRecVsConditionalExpectedFrequency(t.x = cal.cbs$t.x, actual = x.star, expected = x.star.est,
xlab = xlab, ylab = ylab, xticklab = xticklab,
title = title)
}
#' Simulate data according to (M)BG/CNBD-k model assumptions
#'
#' @param n Number of customers.
#' @param T.cal Length of calibration period. If a vector is provided, then it
#' is assumed that customers have different 'birth' dates, i.e.
#' \eqn{max(T.cal)-T.cal}.
#' @param T.star Length of holdout period. This may be a vector.
#' @param params A vector with model parameters \code{k}, \code{r},
#' \code{alpha}, \code{a} and \code{b}, in that order.
#' @param date.zero Initial date for cohort start. Can be of class character, Date or POSIXt.
#' @return List of length 2:
#' \item{\code{cbs}}{A data.frame with a row for each customer and the summary statistics as columns.}
#' \item{\code{elog}}{A data.frame with a row for each transaction, and columns \code{cust}, \code{date} and \code{t}.}
#' @export
#' @references (M)BG/CNBD-k: Reutterer, T., Platzer, M., & Schroeder, N. (2020).
#' Leveraging purchase regularity for predicting customer behavior the easy
#' way. International Journal of Research in Marketing.
#' \doi{10.1016/j.ijresmar.2020.09.002}
#' @examples
#' params <- c(k = 3, r = 0.85, alpha = 1.45, a = 0.79, b = 2.42)
#' data <- mbgcnbd.GenerateData(n = 200, T.cal = 24, T.star = 32, params)
#'
#' # customer by sufficient summary statistic - one row per customer
#' head(data$cbs)
#'
#' # event log - one row per event/transaction
#' head(data$elog)
mbgcnbd.GenerateData <- function(n, T.cal, T.star = NULL, params, date.zero = "2000-01-01") {
xbgcnbd.GenerateData(n, T.cal, T.star, params, date.zero, dropout_at_zero = TRUE)
}
#' @rdname mbgcnbd.GenerateData
#' @export
bgcnbd.GenerateData <- function(n, T.cal, T.star = NULL, params, date.zero = "2000-01-01") {
xbgcnbd.GenerateData(n, T.cal, T.star, params, date.zero, dropout_at_zero = FALSE)
}
#' @keywords internal
xbgcnbd.GenerateData <- function(n, T.cal, T.star = NULL, params, date.zero = "2000-01-01", dropout_at_zero = NULL) {
stopifnot(is.logical(dropout_at_zero))
dc.check.model.params.safe(c("k", "r", "alpha", "a", "b"), params, "xbgcnbd.GenerateData")
if (params[1] != floor(params[1]) || params[1] < 1)
stop("k must be an integer greater than or equal to 1.")
k <- params[1]
r <- params[2]
alpha <- params[3]
a <- params[4]
b <- params[5]
# set start date for each customer, so that they share same T.cal date
T.cal.fix <- max(T.cal)
T.cal <- rep(T.cal, length.out = n)
T.zero <- T.cal.fix - T.cal
date.zero <- as.POSIXct(date.zero)
# sample intertransaction timings parameter lambda for each customer
lambdas <- rgamma(n, shape = r, rate = alpha)
# sample churn-probability p for each customer
ps <- rbeta(n, a, b)
ps <- pmax(ps, 1e-5) # avoid `too long` lives
# sample number of survivals via geometric distribution
churns <- rgeom(n, ps)
churns <- pmin(churns, 1e5) # avoid `too long` lives
if (!dropout_at_zero) churns <- churns + 1
# sample intertransaction timings
elog_list <- lapply(1:n, function(i) {
# sample transaction times
itts <- rgamma(churns[i], shape = k, rate = lambdas[i])
ts <- cumsum(c(0, itts))
ts <- T.zero[i] + ts # shift by T_0
ts <- ts[ts <= (T.cal.fix + max(T.star))] # trim to observation length
return(ts)
})
# build elog
elog <- data.table("cust" = rep(1:n, sapply(elog_list, length)), "t" = unlist(elog_list))
elog[["date"]] <- date.zero + elog[["t"]] * 3600 * 24 * 7
# build cbs
date.cal <- date.zero + T.cal.fix * 3600 * 24 * 7
date.tot <- date.cal + T.star * 3600 * 24 * 7
cbs <- elog2cbs(elog, T.cal = date.cal)
if (length(T.star) == 1) set(cbs, j = "T.star", value = T.star[1])
xstar.cols <- if (length(T.star) == 1) "x.star" else paste0("x.star", T.star)
for (j in 1:length(date.tot)) {
set(cbs, j = xstar.cols[j],
value = sapply(elog_list, function(t) sum(t > T.cal.fix & t <= T.cal.fix + T.star[j])))
}
set(cbs, j = "k", value = k)
set(cbs, j = "lambda", value = lambdas)
set(cbs, j = "p", value = ps)
set(cbs, j = "churn", value = churns)
set(cbs, j = "alive", value = (churns > cbs$x))
return(list(cbs = setDF(cbs), elog = setDF(elog)))
}
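# Illustrative parameter-recovery sketch (wrapped in `if (FALSE)` so it is not
# executed on source): simulate a cohort with known parameters and re-estimate
# them; for large n the estimates should land close to the true values.
if (FALSE) {
  set.seed(1)
  true <- c(k = 2, r = 0.85, alpha = 1.45, a = 0.79, b = 2.42)
  sim <- mbgcnbd.GenerateData(n = 4000, T.cal = 32, T.star = 32, params = true)
  est <- mbgcnbd.EstimateParameters(sim$cbs, k = 2)
  round(rbind(true = true, estimated = est), 2)
}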
#' Calculates P(alive) based on MCMC parameter draws
#'
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @return Numeric vector with the customers' probabilities of still being alive
#' at the end of the calibration period
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' palive <- mcmc.PAlive(param.draws)
#' head(palive)
#' mean(palive)
mcmc.PAlive <- function(draws) {
nr_of_cust <- length(draws$level_1)
p.alives <- sapply(1:nr_of_cust, function(i) mean(as.matrix(draws$level_1[[i]][, "z"])))
return(p.alives)
}
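# Equivalent by-hand computation for a single customer (sketch; assumes
# `param.draws` as in the example above): P(alive) is simply the share of MCMC
# draws in which the latent indicator z equals one.
if (FALSE) {
  z.draws <- as.matrix(param.draws$level_1[[1]][, "z"])
  mean(z.draws) # same value as mcmc.PAlive(param.draws)[1]
}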
#' Draws number of future transactions based on MCMC parameter draws
#'
#' For each customer and each provided MCMC parameter draw this method will
#' sample the number of transactions during the holdout period \code{T.star}. If
#' argument \code{sample_size} is provided, then that many draws are returned
#' per customer, whereby each draw is based on a parameter draw that is itself
#' sampled (with replacement) from the provided MCMC draws.
#'
#' @param cal.cbs Calibration period customer-by-sufficient-statistic (CBS)
#' data.frame.
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @param T.star Length of period for which future transactions are counted.
#' @param sample_size Number of samples to draw. Defaults to the same number of
#' parameter draws that are passed to \code{draws}.
#' @return 2-dim matrix [draw x customer] with sampled future transactions.
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' xstar.draws <- mcmc.DrawFutureTransactions(cbs, param.draws)
#' cbs$xstar.est <- apply(xstar.draws, 2, mean)
#' cbs$pactive <- mcmc.PActive(xstar.draws)
#' head(cbs)
mcmc.DrawFutureTransactions <- function(cal.cbs, draws, T.star = cal.cbs$T.star, sample_size = NULL) {
if (is.null(sample_size)) {
nr_of_draws <- niter(draws$level_2) * nchain(draws$level_2)
} else {
stopifnot(is.numeric(sample_size))
nr_of_draws <- as.integer(sample_size)
}
stopifnot(nr_of_draws >= 1)
nr_of_cust <- length(draws$level_1)
parameters <- varnames(draws$level_1[[1]])
if (nr_of_cust != nrow(cal.cbs))
stop("mismatch between number of customers in parameters 'cal.cbs' and 'draws'")
if (is.null(T.star))
stop("T.star is missing")
x.stars <- array(NA_real_, dim = c(nr_of_draws, nr_of_cust))
if (length(T.star) == 1)
T.star <- rep(T.star, nr_of_cust)
draw_left_truncated_gamma <- function(lower, k, lambda) {
rand <- runif(1, pgamma(lower, k, k * lambda), 1)
qgamma(rand, k, k * lambda)
}
for (cust in 1:nrow(cal.cbs)) {
Tcal <- cal.cbs$T.cal[cust]
Tstar <- T.star[cust]
tx <- cal.cbs$t.x[cust]
taus <- drop(as.matrix(draws$level_1[[cust]][, "tau"]))
if ("k" %in% parameters) {
ks <- drop(as.matrix(draws$level_1[[cust]][, "k"]))
} else {
ks <- rep(1, length(taus))
}
lambdas <- drop(as.matrix(draws$level_1[[cust]][, "lambda"]))
stopifnot(length(taus) == length(ks) && length(taus) == length(lambdas))
if (!is.null(sample_size)) {
idx <- sample(length(taus), size = sample_size, replace = TRUE)
taus <- taus[idx]
ks <- ks[idx]
lambdas <- lambdas[idx]
}
alive <- (taus > Tcal)
# Case: customer alive
for (draw in which(alive)) {
# sample itt which is larger than (Tcal-tx)
itts <- draw_left_truncated_gamma(Tcal - tx, ks[draw], lambdas[draw])
# sample 'sufficiently' large amount of inter-transaction times
minT <- pmin(Tcal + Tstar - tx, taus[draw] - tx)
nr_of_itt_draws <- pmax(10, round(minT * lambdas[draw]))
itts <- c(itts, rgamma(nr_of_itt_draws * 2, shape = ks[draw], rate = ks[draw] * lambdas[draw]))
if (sum(itts) < minT)
itts <- c(itts, rgamma(nr_of_itt_draws * 4, shape = ks[draw], rate = ks[draw] * lambdas[draw]))
if (sum(itts) < minT)
itts <- c(itts, rgamma(nr_of_itt_draws * 800, shape = ks[draw], rate = ks[draw] * lambdas[draw]))
if (sum(itts) < minT)
stop("not enough inter-transaction times sampled! cust:", cust, " draw:", draw, " ", sum(itts),
" < ", minT)
x.stars[draw, cust] <- sum(cumsum(itts) < minT)
}
# Case: customer churned
if (any(!alive)) {
x.stars[!alive, cust] <- 0
}
}
return(x.stars)
}
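# The internal helper above samples from a Gamma(k, k * lambda) distribution
# left-truncated at `lower` via the inverse-CDF trick: draw u uniformly between
# F(lower) and 1, then map back through the quantile function. A minimal
# standalone illustration with made-up values:
if (FALSE) {
  k <- 2; lambda <- 0.5; lower <- 3
  u <- runif(10000, pgamma(lower, k, k * lambda), 1)
  smp <- qgamma(u, k, k * lambda)
  min(smp) >= lower # TRUE - every draw exceeds the truncation point
}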
#' Calculates P(active) based on drawn future transactions.
#'
#' @param xstar Future transaction draws as returned by
#' \code{\link{mcmc.DrawFutureTransactions}}.
#' @return A numeric vector with the customers' probabilities of being active
#' during the holdout period.
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' xstar.draws <- mcmc.DrawFutureTransactions(cbs, param.draws)
#' cbs$pactive <- mcmc.PActive(xstar.draws)
#' head(cbs)
mcmc.PActive <- function(xstar) {
return(apply(xstar, 2, function(x) mean(x > 0)))
}
#' (Re-)set burnin of MCMC chains.
#'
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @param burnin New start index.
#' @return 2-element list with MCMC draws
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' param.draws.stable <- mcmc.setBurnin(param.draws, burnin = 80)
mcmc.setBurnin <- function(draws, burnin) {
if (burnin < start(draws$level_2) | burnin > end(draws$level_2))
stop("specified burnin is out of bound: ", start(draws$level_2), " - ", end(draws$level_2))
draws$level_2 <- window(draws$level_2, start = burnin)
draws$level_1 <- lapply(draws$level_1, function(draw) window(draw, start = burnin))
return(draws)
}
#' Draw diagnostic plot to inspect error in P(active).
#'
#' @param cbs A data.frame with column \code{x} and \code{x.star}.
#' @param xstar Future transaction draws as returned by
#' \code{\link{mcmc.DrawFutureTransactions}}.
#' @param title Plot title.
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' xstar.draws <- mcmc.DrawFutureTransactions(cbs, param.draws)
#' mcmc.plotPActiveDiagnostic(cbs, xstar.draws)
mcmc.plotPActiveDiagnostic <- function(cbs, xstar, title = "Diagnostic Plot for P(active)") {
pactive <- mcmc.PActive(xstar)
x.star <- cbs$x.star
op <- par(mar = c(4, 4, 2, 2), mgp = c(2.5, 1, 0))
cuts <- unique(quantile(c(0, pactive, 1), seq(0, 1, 0.1)))
spls.y <- sapply(split(x.star > 0, cut(pactive, breaks = cuts, include.lowest = TRUE)), mean)
spls.x <- sapply(split(pactive, cut(pactive, breaks = cuts, include.lowest = TRUE)), mean)
plot(spls.x, spls.y, type = "b", xlim = c(0, 1), ylim = c(0, 1), frame = 0, axes = FALSE,
xlab = "Estimated P(active)", ylab = "Actual Share of Actives", main = title)
axis(side = 1, at = seq(0, 1, 0.1), pos = 0, labels = paste(100 * seq(0, 1, 0.1), "%"))
axis(side = 2, at = seq(0, 1, 0.1), pos = 0, labels = paste(100 * seq(0, 1, 0.1), "%"), las = 2)
abline(0, 1)
abline(h = seq(0, 1, 0.1), col = "lightgray", lty = "dotted")
abline(v = seq(0, 1, 0.1), col = "lightgray", lty = "dotted")
points(mean(pactive[cbs$x == 0]), mean(x.star[cbs$x == 0] > 0), col = "red", pch = "0")
par(op)
invisible()
}
#' Probability Mass Function for Pareto/GGG, Pareto/NBD (HB) and Pareto/NBD (Abe)
#'
#' Returns the probability distribution of purchase frequencies for a random
#' customer in a given time period, i.e. \eqn{P(X(t)=x)}. This is estimated by
#' generating \code{sample_size} random customers that follow the provided
#' parameter draws. Due to this sampling, the returned result varies from one
#' call to another.
#'
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @param t Length of time for which we are calculating the expected number of
#' transactions. May also be a vector.
#' @param x Number of transactions for which probability is calculated. May also
#' be a vector.
#' @param sample_size Sample size for estimating the probability distribution.
#' @param covariates (optional) Matrix of covariates, for Pareto/NBD (Abe)
#' model, passed to \code{\link{abe.GenerateData}} for simulating data.
#' @return \eqn{P(X(t)=x)}. If either \code{t} or \code{x} is a vector, then the
#' output will be a vector as well. If both are vectors, the output will be a
#' matrix.
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' mcmc.pmf(param.draws, t = c(26, 52), x = 0:6)
mcmc.pmf <- function(draws, t, x, sample_size = 10000, covariates = NULL) {
cohort_draws <- as.matrix(draws$level_2)
nr_of_draws <- nrow(cohort_draws)
# sample which parameter draws to use, with replacement
draw_idx_cnt <- table(sample(nr_of_draws, size = sample_size, replace = TRUE))
model <- ifelse(all(c("r", "alpha") %in% colnames(cohort_draws)), "pggg", "abe")
cbs <- rbindlist(lapply(names(draw_idx_cnt), function(idx) {
n <- unname(draw_idx_cnt[idx])
if (model == "pggg") {
params <- as.list(cohort_draws[as.integer(idx), ])
pggg.GenerateData(n = n, T.cal = 0, T.star = unique(t), params = params)$cbs
} else if (model == "abe") {
p <- cohort_draws[as.integer(idx), ]
params <- list()
params$beta <- matrix(p[grepl("^log\\_", names(p))], byrow = TRUE, ncol = 2)
params$gamma <- matrix(c(p["var_log_lambda"],
p["cov_log_lambda_log_mu"],
p["cov_log_lambda_log_mu"],
p["var_log_mu"]),
ncol = 2)
abe.GenerateData(n = n, T.cal = 0, T.star = unique(t), params = params,
covariates = covariates)$cbs
}
}))
pmf <- sapply(1:length(t), function(idx) {
col <- ifelse(uniqueN(t) == 1, "x.star", paste0("x.star", t[idx]))
stopifnot(col %in% names(cbs))
sapply(x, function(x) sum(cbs[[col]] == x) / sample_size)
})
if (length(x) == 1) pmf <- t(pmf)
rownames(pmf) <- x
colnames(pmf) <- t
drop(pmf)
}
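# Sanity-check sketch (assumes `param.draws` as in the example above): the
# simulated probability masses over a sufficiently wide range of x should sum
# to approximately one for a given t.
if (FALSE) {
  pmf <- mcmc.pmf(param.draws, t = 52, x = 0:100, sample_size = 1000)
  sum(pmf) # close to 1, up to simulation noise
}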
#' Unconditional Expectation for Pareto/GGG, Pareto/NBD (HB) and Pareto/NBD (Abe)
#'
#' Uses model parameter draws to return the number of repeat transactions that
#' a randomly chosen customer (for whom we have no prior information) is
#' expected to make in a given time period, i.e. \eqn{E(X(t))}.
#'
#' The expected transactions need to be sampled. Due to this sampling, the
#' returned result varies from one call to another. Larger values of
#' \code{sample_size} will generate more stable results.
#'
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @param t Length of time for which we are calculating the expected number of
#' transactions. May also be a vector.
#' @param sample_size Sample size for estimating the probability distribution.
#' @return Number of repeat transactions a customer is expected to make in a
#' time period of length t.
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' mcmc.Expectation(param.draws, t = c(26, 52))
mcmc.Expectation <- function(draws, t, sample_size = 10000) {
pmf <- mcmc.pmf(draws, t, 0:100, sample_size = sample_size)
apply(pmf * matrix(rep(0:100, length(t)), ncol = length(t)), 2, sum)
}
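# The same computation written out explicitly (sketch; assumes `param.draws`
# as above): E[X(t)] is approximated by the truncated sum over x = 0, ..., 100
# of x * P(X(t) = x), so it can be reproduced directly from mcmc.pmf().
if (FALSE) {
  pmf <- mcmc.pmf(param.draws, t = 52, x = 0:100, sample_size = 1000)
  sum(0:100 * pmf) # matches mcmc.Expectation(param.draws, t = 52) up to noise
}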
#' Expected Cumulative Transactions for Pareto/GGG, Pareto/NBD (HB) and
#' Pareto/NBD (Abe)
#'
#' Uses model parameter draws to return the expected cumulative total repeat
#' transactions by all customers for the calibration and holdout periods.
#'
#' The expected transactions need to be sampled. Due to this sampling, the
#' returned result varies from one call to another. Larger values of
#' \code{sample_size} will generate more stable results.
#'
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @param T.cal A vector to represent customers' calibration period lengths (in
#' other words, the \code{T.cal} column from a
#' customer-by-sufficient-statistic matrix). Consider rounding in order to
#' speed up calculations.
#' @param T.tot End of holdout period. Must be a single value, not a vector.
#' @param n.periods.final Number of time periods in the calibration and holdout
#' periods.
#' @param sample_size Sample size for estimating the probability distribution.
#' @param covariates (optional) Matrix of covariates, for Pareto/NBD (Abe)
#' model, passed to \code{\link{abe.GenerateData}} for simulating data.
#' @return Numeric vector of expected cumulative total repeat transactions by
#' all customers.
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' # Returns a vector containing expected cumulative repeat transactions for 104
#' # weeks, with every eighth week being reported.
#' mcmc.ExpectedCumulativeTransactions(param.draws,
#' T.cal = cbs$T.cal, T.tot = 104, n.periods.final = 104/8, sample_size = 1000)
mcmc.ExpectedCumulativeTransactions <- function(draws, T.cal, T.tot, n.periods.final,
sample_size = 10000, covariates = NULL) {
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
if (length(T.tot) > 1 || T.tot < 0 || !is.numeric(T.tot))
stop("T.cal must be a single numeric value and may not be negative.")
if (length(n.periods.final) > 1 || n.periods.final < 0 || !is.numeric(n.periods.final))
stop("n.periods.final must be a single numeric value and may not be negative.")
cohort_draws <- as.matrix(draws$level_2)
nr_of_draws <- nrow(cohort_draws)
model <- ifelse(all(c("r", "alpha") %in% colnames(cohort_draws)), "pggg", "abe")
elog <- rbindlist(lapply(1:nr_of_draws, function(i) {
n <- ceiling(sample_size / nr_of_draws)
if (model == "pggg") {
params <- as.list(cohort_draws[i, ])
elog <- pggg.GenerateData(n = n, T.cal = T.tot, T.star = 0, params = params)$elog
} else if (model == "abe") {
p <- as.list(cohort_draws[i, ])
params <- list()
params$beta <- matrix(as.numeric(p[grepl("^log\\_", names(p))]), byrow = TRUE, ncol = 2)
params$gamma <- matrix(as.numeric(c(p["var_log_lambda"],
p["cov_log_lambda_log_mu"],
p["cov_log_lambda_log_mu"],
p["var_log_mu"])),
ncol = 2)
elog <- abe.GenerateData(n = n, T.cal = T.tot, T.star = 0, params = params,
covariates = covariates)$elog
}
setDT(elog)
elog$cust <- paste0(elog$cust, "_", i)
elog <- elog[t > 0] # drop initial transaction
elog
}))
setkey(elog, t)
intervals <- seq(T.tot / n.periods.final, T.tot, length.out = n.periods.final)
cust.birth.periods <- max(T.cal) - T.cal
expected.transactions <- sapply(intervals, function(interval) {
if (interval <= min(cust.birth.periods))
return(0)
t <- interval - cust.birth.periods[cust.birth.periods < interval]
uEs <- sapply(unique(t), function(ut) elog[t < ut, .N] / sample_size)
names(uEs) <- unique(t)
sum(uEs[as.character(t)])
})
return(expected.transactions)
}
#' Tracking Cumulative Transactions Plot for Pareto/GGG, Pareto/NBD (HB) and
#' Pareto/NBD (Abe)
#'
#' Plots the actual and expected cumulative total repeat transactions by all
#' customers for the calibration and holdout periods, and returns this
#' comparison in a matrix.
#'
#' The expected transactions need to be sampled. Due to this sampling, the
#' returned result varies from one call to another. Larger values of
#' \code{sample_size} will generate more stable results.
#'
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @param T.cal A vector to represent customers' calibration period lengths (in
#' other words, the \code{T.cal} column from a
#' customer-by-sufficient-statistic matrix). Consider rounding in order to
#' speed up calculations.
#' @param T.tot End of holdout period. Must be a single value, not a vector.
#' @param actual.cu.tracking.data A vector containing the cumulative number of
#' repeat transactions made by customers for each period in the total time
#' period (both calibration and holdout periods).
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param xticklab A vector containing a label for each tick mark on the x axis.
#' @param title Title placed on the top-center of the plot.
#' @param ymax Upper boundary for y axis.
#' @param sample_size Sample size for estimating the probability distribution.
#' See \code{\link{mcmc.ExpectedCumulativeTransactions}}.
#' @param covariates (optional) Matrix of covariates, for Pareto/NBD (Abe)
#' model, passed to \code{\link{abe.GenerateData}} for simulating data.
#' @param legend Plot legend labels. Defaults to `Actual` and `Model`.
#' @return Matrix containing actual and expected cumulative repeat transactions.
#' @export
#' @seealso \code{\link{mcmc.PlotTrackingInc}}
#' \code{\link{mcmc.ExpectedCumulativeTransactions}} \code{\link{elog2cum}}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' cum <- elog2cum(groceryElog)
#' param.draws <- pnbd.mcmc.DrawParameters(cbs)
#' mat <- mcmc.PlotTrackingCum(param.draws,
#' T.cal = cbs$T.cal,
#' T.tot = max(cbs$T.cal + cbs$T.star),
#' actual.cu.tracking.data = cum)
#' }
mcmc.PlotTrackingCum <- function(draws, T.cal, T.tot, actual.cu.tracking.data,
xlab = "Week", ylab = "Cumulative Transactions",
xticklab = NULL, title = "Tracking Cumulative Transactions",
ymax = NULL, sample_size = 10000, covariates = NULL,
legend = c("Actual", "Model")) {
actual <- actual.cu.tracking.data
expected <- mcmc.ExpectedCumulativeTransactions(draws, T.cal, T.tot, length(actual),
sample_size = sample_size, covariates = covariates)
dc.PlotTracking(actual = actual, expected = expected, T.cal = T.cal,
xlab = xlab, ylab = ylab, title = title,
xticklab = xticklab, ymax = ymax,
legend = legend)
}
#' Tracking Incremental Transactions Plot for Pareto/GGG, Pareto/NBD (HB) and
#' Pareto/NBD (Abe)
#'
#' Plots the actual and expected incremental total repeat transactions by all
#' customers for the calibration and holdout periods, and returns this
#' comparison in a matrix.
#'
#' The expected transactions need to be sampled. Due to this sampling, the
#' returned result varies from one call to another. Larger values of
#' \code{sample_size} will generate more stable results.
#'
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @param T.cal A vector to represent customers' calibration period lengths (in
#' other words, the \code{T.cal} column from a
#' customer-by-sufficient-statistic matrix). Consider rounding in order to
#' speed up calculations.
#' @param T.tot End of holdout period. Must be a single value, not a vector.
#' @param actual.inc.tracking.data A vector containing the incremental number of
#' repeat transactions made by customers for each period in the total time
#' period (both calibration and holdout periods).
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param xticklab A vector containing a label for each tick mark on the x axis.
#' @param title Title placed on the top-center of the plot.
#' @param ymax Upper boundary for y axis.
#' @param sample_size Sample size for estimating the probability distribution.
#' See \code{\link{mcmc.ExpectedCumulativeTransactions}}.
#' @param covariates (optional) Matrix of covariates, for Pareto/NBD (Abe)
#' model, passed to \code{\link{abe.GenerateData}} for simulating data.
#' @param legend Plot legend labels. Defaults to `Actual` and `Model`.
#' @return Matrix containing actual and expected incremental repeat
#' transactions.
#' @export
#' @seealso \code{\link{mcmc.PlotTrackingCum}}
#' \code{\link{mcmc.ExpectedCumulativeTransactions}} \code{\link{elog2inc}}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' inc <- elog2inc(groceryElog)
#' param.draws <- pnbd.mcmc.DrawParameters(cbs)
#' mat <- mcmc.PlotTrackingInc(param.draws,
#' T.cal = cbs$T.cal,
#' T.tot = max(cbs$T.cal + cbs$T.star),
#' actual.inc.tracking.data = inc)
#' }
mcmc.PlotTrackingInc <- function(draws, T.cal, T.tot, actual.inc.tracking.data,
xlab = "Week", ylab = "Transactions",
xticklab = NULL, title = "Tracking Weekly Transactions",
ymax = NULL, sample_size = 10000, covariates = NULL,
legend = c("Actual", "Model")) {
actual <- actual.inc.tracking.data
expected_cum <- mcmc.ExpectedCumulativeTransactions(draws, T.cal, T.tot, length(actual),
sample_size = sample_size, covariates = covariates)
expected <- BTYD::dc.CumulativeToIncremental(expected_cum)
dc.PlotTracking(actual = actual, expected = expected, T.cal = T.cal,
xlab = xlab, ylab = ylab, title = title,
xticklab = xticklab, ymax = ymax,
legend = legend)
}
#' Frequency in Calibration Period for Pareto/GGG, Pareto/NBD (HB) and Pareto/NBD (Abe)
#'
#' Plots a histogram and returns a matrix comparing the actual and expected
#' number of customers who made a certain number of repeat transactions in the
#' calibration period, binned according to calibration period frequencies.
#'
#' The method \code{\link{mcmc.pmf}} is called to calculate the expected numbers
#' based on the corresponding model.
#'
#' @param draws MCMC draws as returned by \code{*.mcmc.DrawParameters}
#' @param cal.cbs Calibration period customer-by-sufficient-statistic (CBS)
#' data.frame. It must contain columns for frequency ('x') and total time
#' observed ('T.cal').
#' @param censor Cutoff point for number of transactions in plot.
#' @param xlab Descriptive label for the x axis.
#' @param ylab Descriptive label for the y axis.
#' @param title Title placed on the top-center of the plot.
#' @param sample_size Sample size for estimating the probability distribution.
#' See \code{\link{mcmc.pmf}}.
#' @return Calibration period repeat transaction frequency comparison matrix
#' (actual vs. expected).
#' @export
#' @seealso \code{\link{mcmc.pmf}}
#' @examples
#' \dontrun{
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#' mcmc.PlotFrequencyInCalibration(param.draws, cbs, sample_size = 100)
#' }
mcmc.PlotFrequencyInCalibration <- function(draws, cal.cbs, censor = 7,
xlab = "Calibration period transactions",
ylab = "Customers",
title = "Frequency of Repeat Transactions",
sample_size = 1000) {
# actual
x_act <- cal.cbs$x
x_act[x_act > censor] <- censor
x_act <- table(x_act)
# expected
x_est <- sapply(unique(cal.cbs$T.cal), function(tcal) {
n <- sum(cal.cbs$T.cal == tcal)
prop <- mcmc.pmf(draws, t = tcal, x = 0:(censor - 1), sample_size = sample_size)
prop <- c(prop, 1 - sum(prop))
prop * (n / nrow(cal.cbs))
})
x_est <- apply(x_est, 1, sum) * nrow(cal.cbs)
mat <- matrix(c(x_act, x_est), nrow = 2, ncol = censor + 1, byrow = TRUE)
rownames(mat) <- c("n.x.actual", "n.x.expected")
colnames(mat) <- c(0:(censor - 1), paste0(censor, "+"))
barplot(mat, beside = TRUE, col = 1:2, main = title, xlab = xlab, ylab = ylab, ylim = c(0, max(x_act) * 1.1))
legend("topright", legend = c("Actual", "Model"), col = 1:2, lty = 1:2, lwd = 1, xjust = 1)
colnames(mat) <- paste0("freq.", colnames(mat))
mat
}
#' Parameter Estimation for the NBD model
#'
#' Estimates parameters for the NBD model via Maximum Likelihood Estimation.
#'
#' @param cal.cbs Calibration period CBS. It must contain columns for frequency
#' \code{x} and total time observed \code{T.cal}.
#' @param par.start Initial NBD parameters - a vector with \code{r} and \code{alpha} in
#' that order.
#' @param max.param.value Upper bound on parameters.
#' @return List of estimated parameters.
#' @export
#' @references Ehrenberg, A. S. (1959). The pattern of consumer purchases.
#' Journal of the Royal Statistical Society: Series C (Applied Statistics),
#' 8(1), 26-41. \doi{10.2307/2985810}
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' nbd.EstimateParameters(cbs)
nbd.EstimateParameters <- function(cal.cbs, par.start = c(1, 1), max.param.value = 10000) {
dc.check.model.params.safe(c("r", "alpha"), par.start, "nbd.EstimateParameters")
nbd.eLL <- function(params, cal.cbs, max.param.value) {
params <- exp(params)
params[params > max.param.value] <- max.param.value
return(-1 * nbd.cbs.LL(params, cal.cbs))
}
logparams <- log(par.start)
results <- optim(logparams, nbd.eLL, cal.cbs = cal.cbs, max.param.value = max.param.value, method = "L-BFGS-B")
estimated.params <- exp(results$par)
estimated.params[estimated.params > max.param.value] <- max.param.value
names(estimated.params) <- c("r", "alpha")
return(estimated.params)
}
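# Illustrative recovery sketch (wrapped in `if (FALSE)` so it is not executed
# on source): simulate NBD data with known parameters and confirm that the MLE
# lands close to the truth for a large cohort.
if (FALSE) {
  set.seed(1)
  sim <- nbd.GenerateData(n = 5000, T.cal = 52, T.star = 52, params = c(r = 0.85, alpha = 4.45))
  nbd.EstimateParameters(sim$cbs) # roughly c(r = 0.85, alpha = 4.45)
}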
#' Calculate the log-likelihood of the NBD model
#'
#' @param params NBD parameters - a vector with r and alpha, in that
#' order.
#' @param cal.cbs Calibration period CBS. It must contain columns for frequency
#' \code{x} and total time observed \code{T.cal}.
#' @return The total log-likelihood for the provided data.
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog)
#' params <- nbd.EstimateParameters(cbs)
#' nbd.cbs.LL(params, cbs)
nbd.cbs.LL <- function(params, cal.cbs) {
dc.check.model.params.safe(c("r", "alpha"), params, "nbd.cbs.LL")
tryCatch(x <- cal.cbs$x,
error = function(e) stop("cal.cbs must have a frequency column labelled \"x\""))
tryCatch(T.cal <- cal.cbs$T.cal,
error = function(e) stop("cal.cbs must have a column for length of time observed labelled \"T.cal\""))
ll <- nbd.LL(params, x, T.cal)
return(sum(ll))
}
#' Calculate the log-likelihood of the NBD model
#'
#' @param params NBD parameters - a vector with \code{r} and \code{alpha}, in that
#' order.
#' @param x Frequency, i.e. number of re-purchases.
#' @param T.cal Total time of observation period.
#' @return A numeric vector of log-likelihoods.
#' @export
#' @seealso \code{\link{nbd.cbs.LL}}
nbd.LL <- function(params, x, T.cal) {
max.length <- max(length(x), length(T.cal))
if (max.length %% length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length %% length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params.safe(c("r", "alpha"), params, "nbd.LL")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
x <- rep(x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r <- params[1]
alpha <- params[2]
P1 <- lgamma(r + x) + r * log(alpha)
P2 <- lgamma(r) + (r + x) * log(alpha + T.cal)
llh <- P1 - P2
return(llh)
}
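# Relation to the negative binomial density (sketch): mixing a Poisson count
# over a Gamma(r, alpha) distributed rate gives X ~ NegBinomial(size = r,
# prob = alpha / (alpha + T.cal)); nbd.LL() merely drops the additive terms
# that do not depend on (r, alpha), namely -lgamma(x + 1) + x * log(T.cal).
if (FALSE) {
  r <- 0.85; alpha <- 4.45; x <- 3; T.cal <- 52
  a <- nbd.LL(c(r = r, alpha = alpha), x, T.cal)
  b <- dnbinom(x, size = r, prob = alpha / (alpha + T.cal), log = TRUE)
  all.equal(unname(a), b + lgamma(x + 1) - x * log(T.cal))
}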
#' NBD Conditional Expected Transactions
#'
#' Uses NBD model parameters and a customer's past transaction behavior to
#' return the number of transactions they are expected to make in a given time
#' period.
#'
#' @param params NBD parameters - a vector with \code{r} and \code{alpha}, in that order.
#' @param T.star Length of time for which we are calculating the expected number
#' of transactions.
#' @param x Number of repeat transactions in the calibration period \code{T.cal}, or a
#' vector of calibration period frequencies.
#' @param T.cal Length of calibration period, or a vector of calibration period
#' lengths.
#' @return Number of transactions a customer is expected to make in a time
#' period of length t, conditional on their past behavior. If any of the input
#' parameters has a length greater than 1, this will be a vector of expected
#' number of transactions.
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' params <- nbd.EstimateParameters(cbs)
#' xstar.est <- nbd.ConditionalExpectedTransactions(params, cbs$T.star, cbs$x, cbs$T.cal)
#' sum(xstar.est) # expected total number of transactions during holdout
nbd.ConditionalExpectedTransactions <- function(params, T.star, x, T.cal) {
max.length <- max(length(T.star), length(x), length(T.cal))
if (max.length %% length(T.star))
warning("Maximum vector length not a multiple of the length of T.star")
if (max.length %% length(x))
warning("Maximum vector length not a multiple of the length of x")
if (max.length %% length(T.cal))
warning("Maximum vector length not a multiple of the length of T.cal")
dc.check.model.params.safe(c("r", "alpha"), params, "nbd.ConditionalExpectedTransactions")
if (any(T.star < 0) || !is.numeric(T.star))
stop("T.star must be numeric and may not contain negative numbers.")
if (any(x < 0) || !is.numeric(x))
stop("x must be numeric and may not contain negative numbers.")
if (any(T.cal < 0) || !is.numeric(T.cal))
stop("T.cal must be numeric and may not contain negative numbers.")
T.star <- rep(T.star, length.out = max.length)
x <- rep(x, length.out = max.length)
T.cal <- rep(T.cal, length.out = max.length)
r <- params[1]
alpha <- params[2]
return(unname(T.star * (r + x) / (alpha + T.cal)))
}
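# Sketch of the conjugacy argument behind this closed form: given (x, T.cal)
# the posterior of lambda is Gamma(r + x, alpha + T.cal), so the expected
# holdout count is T.star * E[lambda | x, T.cal]. Monte Carlo check with
# made-up values:
if (FALSE) {
  r <- 0.85; alpha <- 4.45; x <- 3; T.cal <- 52; T.star <- 26
  lam <- rgamma(1e5, shape = r + x, rate = alpha + T.cal)
  c(monte.carlo = mean(T.star * lam),
    closed.form = nbd.ConditionalExpectedTransactions(c(r = r, alpha = alpha), T.star, x, T.cal))
}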
#' Simulate data according to NBD model assumptions
#'
#' @param n Number of customers.
#' @param T.cal Length of calibration period.
#' @param T.star Length of holdout period. This may be a vector.
#' @param params NBD parameters - a vector with \code{r} and \code{alpha} in that order.
#' @param date.zero Initial date for cohort start. Can be of class character, Date or POSIXt.
#' @return List of length 2:
#' \item{\code{cbs}}{A data.frame with a row for each customer and the summary statistics as columns.}
#' \item{\code{elog}}{A data.frame with a row for each transaction, and columns \code{cust}, \code{date} and \code{t}.}
#' @export
#' @examples
#' n <- 200 # no. of customers
#' T.cal <- 32 # length of calibration period
#' T.star <- 32 # length of hold-out period
#' params <- c(r = 0.85, alpha = 4.45) # purchase frequency lambda_i ~ Gamma(r, alpha)
#' data <- nbd.GenerateData(n, T.cal, T.star, params)
#' cbs <- data$cbs # customer by sufficient summary statistic - one row per customer
#' elog <- data$elog # Event log - one row per event/purchase
nbd.GenerateData <- function(n, T.cal, T.star, params, date.zero = "2000-01-01") {
# check model parameters
dc.check.model.params.safe(c("r", "alpha"), params, "nbd.GenerateData")
# set start date for each customer, so that they share same T.cal date
T.cal.fix <- max(T.cal)
T.cal <- rep(T.cal, length.out = n)
T.zero <- T.cal.fix - T.cal
date.zero <- as.POSIXct(date.zero)
r <- params[1]
alpha <- params[2]
# sample intertransaction timings parameter lambda for each customer
lambdas <- rgamma(n, shape = r, rate = alpha)
# sample intertransaction timings
elog_list <- lapply(1:n, function(i) {
itts <- rexp(10 * (T.cal[i] + max(T.star)) * lambdas[i], rate = lambdas[i])
ts <- cumsum(c(0, itts))
ts <- T.zero[i] + ts # shift by T_0
ts <- ts[ts <= (T.cal.fix + max(T.star))] # trim to observation length
return(ts)
})
# build elog
elog <- data.table("cust" = rep(1:n, sapply(elog_list, length)), "t" = unlist(elog_list))
elog[["date"]] <- date.zero + elog[["t"]] * 3600 * 24 * 7
# build cbs
date.cal <- date.zero + T.cal.fix * 3600 * 24 * 7
date.tot <- date.cal + T.star * 3600 * 24 * 7
cbs <- elog2cbs(elog, T.cal = date.cal)
if (length(T.star) == 1) set(cbs, j = "T.star", value = T.star[1])
xstar.cols <- if (length(T.star) == 1) "x.star" else paste0("x.star", T.star)
for (j in 1:length(date.tot)) {
set(cbs, j = xstar.cols[j],
value = sapply(elog_list, function(t) sum(t > T.cal.fix & t <= T.cal.fix + T.star[j])))
}
set(cbs, j = "lambda", value = lambdas)
return(list(cbs = setDF(cbs), elog = setDF(elog)))
}
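# Quick sanity sketch: under the NBD the expected number of repeat transactions
# in a window of length T.cal is T.cal * r / alpha, which a large simulated
# cohort should reproduce up to sampling noise.
if (FALSE) {
  set.seed(1)
  sim <- nbd.GenerateData(n = 10000, T.cal = 52, T.star = 52, params = c(r = 0.85, alpha = 4.45))
  c(simulated = mean(sim$cbs$x), theoretical = 52 * 0.85 / 4.45)
}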
#' Pareto/GGG Parameter Draws
#'
#' Returns draws from the posterior distributions of the Pareto/GGG
#' parameters, on cohort as well as on customer level.
#'
#' See \code{demo('pareto-ggg')} for how to apply this model.
#'
#' @param cal.cbs Calibration period customer-by-sufficient-statistic (CBS)
#' data.frame. It must contain a row for each customer, and columns \code{x}
#' for frequency, \code{t.x} for recency , \code{T.cal} for the total time
#' observed, as well as the sum over logarithmic intertransaction times
#' \code{litt}. A correct format can be easily generated based on the complete
#' event log of a customer cohort with \code{\link{elog2cbs}}.
#' @param mcmc Number of MCMC steps.
#' @param burnin Number of initial MCMC steps which are discarded.
#' @param thin Only every \code{thin}-th MCMC step will be returned.
#' @param chains Number of MCMC chains to be run.
#' @param mc.cores Number of cores to use in parallel (Unix only). Defaults to \code{min(chains, detectCores())}.
#' @param param_init List of start values for cohort-level parameters.
#' @param trace Print logging statement every \code{trace}-th iteration. Not available for \code{mc.cores > 1}.
#' @return List of length 2:
#' \item{\code{level_1}}{list of \code{\link{mcmc.list}}s, one for each customer, with draws for customer-level parameters \code{k}, \code{lambda}, \code{tau}, \code{z}, \code{mu}}
#' \item{\code{level_2}}{\code{\link{mcmc.list}}, with draws for cohort-level parameters \code{r}, \code{alpha}, \code{s}, \code{beta}, \code{t}, \code{gamma}}
#' @export
#' @references Platzer, M., & Reutterer, T. (2016). Ticking away the moments:
#' Timing regularity helps to better predict customer activity. Marketing
#' Science, 35(5), 779-799. \doi{10.1287/mksc.2015.0963}
#' @seealso \code{\link{pggg.GenerateData} } \code{\link{mcmc.PAlive} } \code{\link{mcmc.DrawFutureTransactions} }
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' param.draws <- pggg.mcmc.DrawParameters(cbs,
#' mcmc = 20, burnin = 10, thin = 2, chains = 1) # short MCMC to run demo fast
#'
#' # cohort-level parameter draws
#' as.matrix(param.draws$level_2)
#' # customer-level parameter draws for customer with ID '4'
#' as.matrix(param.draws$level_1[["4"]])
#'
#' # estimate future transactions
#' xstar.draws <- mcmc.DrawFutureTransactions(cbs, param.draws, cbs$T.star)
#' xstar.est <- apply(xstar.draws, 2, mean)
#' head(xstar.est)
pggg.mcmc.DrawParameters <- function(cal.cbs, mcmc = 2500, burnin = 500, thin = 50, chains = 2, mc.cores = NULL,
param_init = NULL, trace = 100) {
# ** methods to sample heterogeneity parameters {r, alpha, s, beta, t, gamma} **
draw_gamma_params <- function(type, level_1, level_2, hyper_prior) {
if (type == "lambda") {
x <- level_1["lambda", ]
cur_params <- c(level_2["r"], level_2["alpha"])
hyper <- unlist(hyper_prior[c("r_1", "r_2", "alpha_1", "alpha_2")])
} else if (type == "mu") {
x <- level_1["mu", ]
cur_params <- c(level_2["s"], level_2["beta"])
hyper <- unlist(hyper_prior[c("s_1", "s_2", "beta_1", "beta_2")])
} else if (type == "k") {
x <- level_1["k", ]
cur_params <- c(level_2["t"], level_2["gamma"])
hyper <- unlist(hyper_prior[c("t_1", "t_2", "gamma_1", "gamma_2")])
}
slice_sample_gamma_parameters(x, cur_params, hyper, steps = 200, w = 0.1)
}
# ** methods to sample individual-level parameters **
draw_k <- function(data, level_1, level_2) {
pggg_slice_sample("k",
x = data$x, tx = data$t.x, Tcal = data$T.cal, litt = data$litt,
k = level_1["k", ], lambda = level_1["lambda", ],
mu = level_1["mu", ], tau = level_1["tau", ],
t = level_2["t"], gamma = level_2["gamma"],
r = level_2["r"], alpha = level_2["alpha"],
s = level_2["s"], beta = level_2["beta"])
}
draw_lambda <- function(data, level_1, level_2) {
pggg_slice_sample("lambda",
x = data$x, tx = data$t.x, Tcal = data$T.cal, litt = data$litt,
k = level_1["k", ], lambda = level_1["lambda", ],
mu = level_1["mu", ], tau = level_1["tau", ],
t = level_2["t"], gamma = level_2["gamma"],
r = level_2["r"], alpha = level_2["alpha"],
s = level_2["s"], beta = level_2["beta"])
}
draw_mu <- function(data, level_1, level_2) {
N <- nrow(data)
tau <- level_1["tau", ]
s <- level_2["s"]
beta <- level_2["beta"]
mu <- rgamma(n = N, shape = s + 1, rate = beta + tau)
mu[mu == 0 | log(mu) < -30] <- exp(-30) # avoid numeric overflow
return(mu)
}
draw_tau <- function(data, level_1, level_2) {
N <- nrow(data)
x <- data$x
tx <- data$t.x
Tcal <- data$T.cal
lambda <- level_1["lambda", ]
k <- level_1["k", ]
mu <- level_1["mu", ]
# sample z
p_alive <- pggg_palive(x, tx, Tcal, k, lambda, mu)
alive <- p_alive > runif(n = N)
# sample tau
tau <- numeric(N)
# Case: still alive - left truncated exponential distribution -> [Tcal, Inf]
if (any(alive)) {
tau[alive] <- Tcal[alive] + rexp(sum(alive), mu[alive])
}
# Case: churned - distribution of tau truncated to [tx, pmin(tx+1, Tcal)]
if (any(!alive)) {
tau[!alive] <- pggg_slice_sample("tau", x = data$x[!alive], tx = data$t.x[!alive], Tcal = data$T.cal[!alive],
litt = data$litt[!alive], k = level_1["k", !alive], lambda = level_1["lambda", !alive], mu = level_1["mu",
!alive], tau = level_1["tau", !alive], t = level_2["t"], gamma = level_2["gamma"], r = level_2["r"],
alpha = level_2["alpha"], s = level_2["s"], beta = level_2["beta"])
}
return(tau)
}
run_single_chain <- function(chain_id, data, hyper_prior) {
## initialize arrays for storing draws ##
nr_of_cust <- nrow(data)
nr_of_draws <- (mcmc - 1) %/% thin + 1
level_2_draws <- array(NA_real_, dim = c(nr_of_draws, 6))
dimnames(level_2_draws)[[2]] <- c("t", "gamma", "r", "alpha", "s", "beta")
level_1_draws <- array(NA_real_, dim = c(nr_of_draws, 5, nr_of_cust))
dimnames(level_1_draws)[[2]] <- c("k", "lambda", "mu", "tau", "z")
## initialize parameters ##
level_2 <- level_2_draws[1, ]
level_2["t"] <- param_init$t
level_2["gamma"] <- param_init$gamma
level_2["r"] <- param_init$r
level_2["alpha"] <- param_init$alpha
level_2["s"] <- param_init$s
level_2["beta"] <- param_init$beta
level_1 <- level_1_draws[1, , ] # nolint
level_1["k", ] <- 1
level_1["lambda", ] <- mean(data$x) / mean(ifelse(data$t.x == 0, data$T.cal, data$t.x))
level_1["tau", ] <- data$t.x + 0.5 / level_1["lambda", ]
level_1["z", ] <- as.numeric(level_1["tau", ] > data$T.cal)
level_1["mu", ] <- 1 / level_1["tau", ]
## run MCMC chain ##
for (step in 1:(burnin + mcmc)) {
if (step %% trace == 0)
cat("chain:", chain_id, "step:", step, "of", (burnin + mcmc), "\n")
# store
if ( (step - burnin) > 0 & (step - 1 - burnin) %% thin == 0) {
idx <- (step - 1 - burnin) %/% thin + 1
level_1_draws[idx, , ] <- level_1 # nolint
level_2_draws[idx, ] <- level_2
}
# draw individual-level parameters
level_1["k", ] <- draw_k(data, level_1, level_2)
level_1["lambda", ] <- draw_lambda(data, level_1, level_2)
level_1["mu", ] <- draw_mu(data, level_1, level_2)
level_1["tau", ] <- draw_tau(data, level_1, level_2)
level_1["z", ] <- as.numeric(level_1["tau", ] > data$T.cal)
# draw heterogeneity parameters
level_2[c("t", "gamma")] <- draw_gamma_params("k", level_1, level_2, hyper_prior)
level_2[c("r", "alpha")] <- draw_gamma_params("lambda", level_1, level_2, hyper_prior)
level_2[c("s", "beta")] <- draw_gamma_params("mu", level_1, level_2, hyper_prior)
}
# convert MCMC draws into coda::mcmc objects
return(list(
"level_1" = lapply(1:nr_of_cust,
function(i) mcmc(level_1_draws[, , i], start = burnin, thin = thin)), # nolint
"level_2" = mcmc(level_2_draws, start = burnin, thin = thin)))
}
# set hyper priors
hyper_prior <- list(r_1 = 0.001, r_2 = 0.001,
alpha_1 = 0.001, alpha_2 = 0.001,
s_1 = 0.001, s_2 = 0.001,
beta_1 = 0.001, beta_2 = 0.001,
t_1 = 0.001, t_2 = 0.001,
gamma_1 = 0.001, gamma_2 = 0.001)
# set param_init (if not passed as argument)
if (is.null(param_init)) {
try({
df <- cal.cbs[sample(nrow(cal.cbs), min(nrow(cal.cbs), 1000)), ]
param_init <- c(1, 1, BTYD::pnbd.EstimateParameters(df))
names(param_init) <- c("t", "gamma", "r", "alpha", "s", "beta")
param_init <- as.list(param_init)
},
silent = TRUE)
if (is.null(param_init))
param_init <- list(t = 1, gamma = 1, r = 1, alpha = 1, s = 1, beta = 1)
cat("set param_init:", paste(round(unlist(param_init), 4), collapse = ", "), "\n")
}
# check whether input data meets requirements
stopifnot(is.data.frame(cal.cbs))
stopifnot(all(c("x", "t.x", "T.cal", "litt") %in% names(cal.cbs)))
stopifnot(all(is.finite(cal.cbs$litt)))
# run multiple chains - executed in parallel on Unix
ncores <- ifelse(!is.null(mc.cores), min(chains, mc.cores), ifelse(.Platform$OS.type == "windows", 1, min(chains,
detectCores())))
if (ncores > 1)
cat("running in parallel on", ncores, "cores\n")
draws <- mclapply(1:chains, function(i) run_single_chain(i, cal.cbs, hyper_prior), mc.cores = ncores)
# merge chains into code::mcmc.list objects
out <- list(level_1 = lapply(1:nrow(cal.cbs), function(i) mcmc.list(lapply(draws, function(draw) draw$level_1[[i]]))),
level_2 = mcmc.list(lapply(draws, function(draw) draw$level_2)))
if ("cust" %in% names(cal.cbs))
names(out$level_1) <- cal.cbs$cust
return(out)
}
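# Sketch of a quick convergence check on the cohort-level chains using coda
# (which this package already builds on); assumes `param.draws` as in the
# example above.
if (FALSE) {
  plot(param.draws$level_2)                # trace and density plots
  coda::effectiveSize(param.draws$level_2) # effective sample size per parameter
}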
#' Pareto/GGG Plot Regularity Rate Heterogeneity
#'
#' Plots and returns the estimated gamma distribution of k (customers'
#' regularity in interpurchase times).
#'
#' @param draws MCMC draws as returned by \code{\link{pggg.mcmc.DrawParameters}}.
#' @param xmax Upper bound for x-scale.
#' @param fn Optional function to summarize individual-level draws for k, e.g. 'mean'.
#' @param title Plot title.
#'
#' @references Platzer, M., & Reutterer, T. (2016). Ticking away the moments:
#' Timing regularity helps to better predict customer activity. Marketing
#' Science, 35(5), 779-799. \doi{10.1287/mksc.2015.0963}
#' @export
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' param.draws <- pggg.mcmc.DrawParameters(cbs,
#' mcmc = 20, burnin = 10, thin = 2, chains = 1) # short MCMC to run demo fast
#' pggg.plotRegularityRateHeterogeneity(param.draws)
pggg.plotRegularityRateHeterogeneity <- function(draws, xmax = NULL, fn = NULL,
title = "Distribution of Regularity Rate k") {
stopifnot("k" %in% colnames(as.matrix(draws$level_1[[1]])))
ks <- sapply(draws$level_1, function(draw) as.matrix(draw[, "k"]))
if (!is.null(fn))
ks <- apply(ks, 2, fn)
if (is.null(xmax))
xmax <- min(10, quantile(ks, 0.95) * 1.5)
mar_top <- ifelse(title != "", 2.5, 1)
op <- par(mar = c(2.5, 2.5, mar_top, 2.5))
plot(density(ks, from = 0), xlim = c(0, xmax), main = title, xlab = "", ylab = "", frame = FALSE)
abline(v = 1, lty = 3)
abline(v = median(ks), col = "red")
par(op)
invisible()
}
#' Simulate data according to Pareto/GGG model assumptions
#'
#' @param n Number of customers.
#' @param T.cal Length of calibration period. If a vector is provided, then it
#'   is assumed that customers have different 'birth' dates, with customer i
#'   entering the cohort at \eqn{max(T.cal)-T.cal[i]}.
#' @param T.star Length of holdout period. This may be a vector.
#' @param params A list of model parameters \code{r},
#' \code{alpha}, \code{s}, \code{beta}, \code{t} and \code{gamma}.
#' @param date.zero Initial date for cohort start. Can be of class character, Date or POSIXt.
#' @return List of length 2:
#' \item{\code{cbs}}{A data.frame with a row for each customer and the summary statistic as columns.}
#' \item{\code{elog}}{A data.frame with a row for each transaction, and columns \code{cust}, \code{date} and \code{t}.}
#' @export
#' @references Platzer, M., & Reutterer, T. (2016). Ticking away the moments:
#' Timing regularity helps to better predict customer activity. Marketing
#' Science, 35(5), 779-799. \doi{10.1287/mksc.2015.0963}
#' @examples
#' params <- list(t = 4.5, gamma = 1.5, r = 5, alpha = 10, s = 0.8, beta = 12)
#' data <- pggg.GenerateData(n = 200, T.cal = 32, T.star = 32, params)
#' cbs <- data$cbs # customer by sufficient summary statistic - one row per customer
#' elog <- data$elog # Event log - one row per event/purchase
pggg.GenerateData <- function(n, T.cal, T.star, params, date.zero = "2000-01-01") {
# set start date for each customer, so that they share same T.cal date
T.cal.fix <- max(T.cal)
T.cal <- rep(T.cal, length.out = n)
T.zero <- T.cal.fix - T.cal
date.zero <- as.POSIXct(date.zero)
# sample regularity parameter k for each customer
if (all(c("t", "gamma") %in% names(params))) {
# Case A: regularity parameter k is gamma-distributed across customers
ks <- rgamma(n, shape = params$t, rate = params$gamma)
ks <- pmax(0.1, ks) # ensure that k is not too small, otherwise itt can be 0
} else if ("k" %in% names(params)) {
# Case B: regularity parameter k is fixed across customers
ks <- rep(params$k, n)
} else {
# Case C: k=1 is assumed, i.e. Pareto/NBD
ks <- rep(1, n)
}
# sample intertransaction timings parameter lambda for each customer
lambdas <- rgamma(n, shape = params$r, rate = params$alpha)
# sample lifetime for each customer
mus <- rgamma(n, shape = params$s, rate = params$beta)
taus <- rexp(n, rate = mus)
# sample intertransaction timings
elog_list <- lapply(1:n, function(i) {
# sample 'sufficiently' large amount of inter-transaction times
minT <- min(T.cal[i] + max(T.star), taus[i])
itt_draws <- max(10, round(minT * lambdas[i] * 1.5))
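# note: under Pareto/GGG assumptions inter-transaction times follow Gamma(k, k * lambda),
# so their mean is 1/lambda regardless of k, while larger k implies more regular
# (less dispersed) timing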
itt_fn <- function(n) rgamma(n, shape = ks[i], rate = ks[i] * lambdas[i])
itts <- itt_fn(itt_draws)
if (sum(itts) < minT) itts <- c(itts, itt_fn(itt_draws * 4))
if (sum(itts) < minT) itts <- c(itts, itt_fn(itt_draws * 800))
if (sum(itts) < minT) stop("not enough inter-transaction times sampled: ", sum(itts), " < ", minT)
ts <- cumsum(c(0, itts))
ts <- ts[ts <= taus[i]] # trim to lifetime
ts <- T.zero[i] + ts # shift by T_0
ts <- ts[ts <= (T.cal.fix + max(T.star))] # trim to observation length
return(ts)
})
# build elog
elog <- data.table("cust" = rep(1:n, sapply(elog_list, length)), "t" = unlist(elog_list))
elog[["date"]] <- date.zero + elog[["t"]] * 3600 * 24 * 7
# build cbs
date.cal <- date.zero + T.cal.fix * 3600 * 24 * 7
date.tot <- date.cal + T.star * 3600 * 24 * 7
cbs <- elog2cbs(elog, T.cal = date.cal)
if (length(T.star) == 1) set(cbs, j = "T.star", value = T.star[1])
xstar.cols <- if (length(T.star) == 1) "x.star" else paste0("x.star", T.star)
for (j in 1:length(date.tot)) {
set(cbs, j = xstar.cols[j],
value = sapply(elog_list, function(t) sum(t > T.cal.fix & t <= T.cal.fix + T.star[j])))
}
set(cbs, j = "k", value = ks)
set(cbs, j = "lambda", value = lambdas)
set(cbs, j = "mu", value = mus)
set(cbs, j = "tau", value = taus)
set(cbs, j = "alive", value = (T.zero + taus) > T.cal.fix)
return(list("cbs" = setDF(cbs), "elog" = setDF(elog)))
}
# (source file: BTYDplus/R/pareto-ggg-mcmc.R)
#' Pareto/NBD (Abe) Parameter Draws
#'
#' Returns draws from the posterior distributions of the Pareto/NBD (Abe)
#' parameters, on cohort as well as on customer level.
#'
#' See \code{demo('pareto-abe')} for how to apply this model.
#'
#' @param cal.cbs Calibration period customer-by-sufficient-statistic (CBS)
#' data.frame. It must contain a row for each customer, and columns \code{x}
#' for frequency, \code{t.x} for recency and \code{T.cal} for the total time
#' observed. A correct format can be easily generated based on the complete
#' event log of a customer cohort with \code{\link{elog2cbs}}.
#' @param covariates A vector of columns of \code{cal.cbs} which contain customer-level covariates.
#' @param mcmc Number of MCMC steps.
#' @param burnin Number of initial MCMC steps which are discarded.
#' @param thin Only every \code{thin}-th MCMC step will be returned.
#' @param chains Number of MCMC chains to be run.
#' @param mc.cores Number of cores to use in parallel (Unix only). Defaults to \code{min(chains, detectCores())}.
#' @param trace Print logging statement every \code{trace}-th iteration. Not available for \code{mc.cores > 1}.
#' @return List of length 2:
#' \item{\code{level_1}}{list of \code{\link{mcmc.list}}s, one for each customer, with draws for customer-level parameters \code{lambda}, \code{mu}, \code{tau}, \code{z}}
#' \item{\code{level_2}}{\code{\link{mcmc.list}}, with draws for cohort-level parameters}
#' @export
#' @seealso \code{\link{abe.GenerateData} } \code{\link{mcmc.PAlive} } \code{\link{mcmc.DrawFutureTransactions} }
#' @references Abe, M. (2009). "Counting your customers" one by one: A
#' hierarchical Bayes extension to the Pareto/NBD model. Marketing Science,
#' 28(3), 541-553. \doi{10.1287/mksc.1090.0502}
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' cbs$cov1 <- as.integer(cbs$cust) %% 2 # create dummy covariate
#' param.draws <- abe.mcmc.DrawParameters(cbs, c("cov1"),
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#'
#' # cohort-level parameter draws
#' as.matrix(param.draws$level_2)
#' # customer-level parameter draws for customer with ID '4'
#' as.matrix(param.draws$level_1[["4"]])
#'
#' # estimate future transactions
#' xstar.draws <- mcmc.DrawFutureTransactions(cbs, param.draws, cbs$T.star)
#' xstar.est <- apply(xstar.draws, 2, mean)
#' head(xstar.est)
abe.mcmc.DrawParameters <- function(cal.cbs, covariates = c(), mcmc = 2500, burnin = 500, thin = 50, chains = 2,
mc.cores = NULL, trace = 100) {
# ** methods to sample heterogeneity parameters {beta, gamma} **
draw_level_2 <- function(covars, level_1, hyper_prior) {
# standard multi-variate normal regression update
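# i.e. a conjugate posterior draw of (beta, gamma): regress (log lambda, log mu) on the
# covariates via bayesm::rmultireg, with a matrix-normal prior on the coefficients and
# an inverse-Wishart prior on the 2x2 covariance matrix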
draw <- bayesm::rmultireg(Y = log(t(level_1[c("lambda", "mu"), ])),
X = covars,
Bbar = hyper_prior$beta_0,
A = hyper_prior$A_0,
nu = hyper_prior$nu_00,
V = hyper_prior$gamma_00)
return(list(beta = t(draw$B), gamma = draw$Sigma))
}
# ** methods to sample individual-level parameters **
draw_z <- function(data, level_1) {
tx <- data$t.x
Tcal <- data$T.cal
lambda <- level_1["lambda", ]
mu <- level_1["mu", ]
mu_lam <- mu + lambda
t_diff <- Tcal - tx
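# P(alive | lambda, mu, t.x, T.cal): probability that the unobserved lifetime exceeds
# T.cal, given that no purchase has been observed in (t.x, T.cal]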
prob <- 1 / (1 + (mu / mu_lam) * (exp(mu_lam * t_diff) - 1))
z <- as.numeric(runif(length(prob)) < prob)
return(z)
}
draw_tau <- function(data, level_1) {
N <- nrow(data)
tx <- data$t.x
Tcal <- data$T.cal
lambda <- level_1["lambda", ]
mu <- level_1["mu", ]
mu_lam <- mu + lambda
z <- level_1["z", ]
alive <- z == 1
tau <- numeric(N)
# Case: still alive - left truncated exponential distribution -> [T.cal, Inf]
if (any(alive)) {
tau[alive] <- Tcal[alive] + rexp(sum(alive), mu[alive])
}
# Case: churned - doubly truncated exponential distribution -> [tx, T.cal]
if (any(!alive)) {
mu_lam_tx <- pmin(700, mu_lam[!alive] * tx[!alive])
mu_lam_Tcal <- pmin(700, mu_lam[!alive] * Tcal[!alive])
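# inverse-transform sampling of an Exp(mu + lambda) variate truncated to [t.x, T.cal];
# the pmin(700, .) caps keep exp() from underflowing both terms to zero (log(0) = -Inf)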
rand <- runif(n = sum(!alive))
tau[!alive] <- -log( (1 - rand) * exp(-mu_lam_tx) + rand * exp(-mu_lam_Tcal)) / mu_lam[!alive]
}
return(tau)
}
draw_level_1 <- function(data, covars, level_1, level_2) {
# sample (lambda, mu) given (z, tau, beta, gamma)
N <- nrow(data)
x <- data$x
Tcal <- data$T.cal
z <- level_1["z", ]
tau <- level_1["tau", ]
mvmean <- covars[, ] %*% t(level_2$beta)
gamma <- level_2$gamma
inv_gamma <- solve(gamma)
cur_lambda <- level_1["lambda", ]
cur_mu <- level_1["mu", ]
log_post <- function(log_theta) {
log_lambda <- log_theta[1, ]
log_mu <- log_theta[2, ]
diff_lambda <- log_lambda - mvmean[, 1]
diff_mu <- log_mu - mvmean[, 2]
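# log-posterior kernel: Poisson purchase likelihood (x events at rate lambda) plus the
# exponential lifetime contribution at rate mu, both over the exposure window (T.cal if
# alive, tau if churned), plus the bivariate normal prior implied by (beta, gamma)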
likel <- x * log_lambda + (1 - z) * log_mu - (exp(log_lambda) + exp(log_mu)) * (z * Tcal + (1 - z) *
tau)
prior <- -0.5 * (diff_lambda ^ 2 * inv_gamma[1, 1] +
2 * diff_lambda * diff_mu * inv_gamma[1, 2] +
diff_mu ^ 2 * inv_gamma[2, 2])
post <- likel + prior
post[log_mu > 5] <- -Inf # cap !!
return(post)
}
# current state
cur_log_theta <- rbind(log(cur_lambda), log(cur_mu))
cur_post <- log_post(cur_log_theta)
step <- function(cur_log_theta, cur_post) {
# new proposal
new_log_theta <- cur_log_theta + rbind(gamma[1, 1] * rt(N, df = 3), gamma[2, 2] * rt(n = N, df = 3))
new_log_theta[1, ] <- pmax(pmin(new_log_theta[1, ], 70), -70)
new_log_theta[2, ] <- pmax(pmin(new_log_theta[2, ], 70), -70)
new_post <- log_post(new_log_theta)
# accept/reject new proposal
mhratio <- exp(new_post - cur_post)
accepted <- mhratio > runif(n = N)
cur_log_theta[, accepted] <- new_log_theta[, accepted]
cur_post[accepted] <- new_post[accepted]
list(cur_log_theta = cur_log_theta, cur_post = cur_post)
}
iter <- 1 # how high do we need to set this? 1/5/10/100?
for (i in 1:iter) {
draw <- step(cur_log_theta, cur_post)
cur_log_theta <- draw$cur_log_theta
cur_post <- draw$cur_post
}
cur_theta <- exp(cur_log_theta)
return(list(lambda = cur_theta[1, ], mu = cur_theta[2, ]))
}
run_single_chain <- function(chain_id, data, hyper_prior) {
## initialize arrays for storing draws ##
nr_of_cust <- nrow(data)
nr_of_draws <- (mcmc - 1) %/% thin + 1
level_1_draws <- array(NA_real_, dim = c(nr_of_draws, 4, nr_of_cust))
dimnames(level_1_draws)[[2]] <- c("lambda", "mu", "tau", "z")
level_2_draws <- array(NA_real_, dim = c(nr_of_draws, 2 * K + 3))
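# 2 * K regression coefficients (K each for log_lambda and log_mu) plus the three free
# entries of the 2x2 covariance matrix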
nm <- c("log_lambda", "log_mu")
if (K > 1)
nm <- paste(rep(nm, times = K), rep(colnames(covars), each = 2), sep = "_")
dimnames(level_2_draws)[[2]] <- c(nm, "var_log_lambda", "cov_log_lambda_log_mu", "var_log_mu")
## initialize parameters ##
level_1 <- level_1_draws[1, , ] # nolint
level_1["lambda", ] <- mean(data$x) / mean(ifelse(data$t.x == 0, data$T.cal, data$t.x))
level_1["mu", ] <- 1 / (data$t.x + 0.5 / level_1["lambda", ])
## run MCMC chain ##
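# center the prior mean of the intercept coefficients at the log of the initial estimates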
hyper_prior$beta_0[1, "log_lambda"] <- log(mean(level_1["lambda", ]))
hyper_prior$beta_0[1, "log_mu"] <- log(mean(level_1["mu", ]))
for (step in 1:(burnin + mcmc)) {
if (step %% trace == 0)
cat("chain:", chain_id, "step:", step, "of", (burnin + mcmc), "\n")
# draw individual-level parameters
level_1["z", ] <- draw_z(data, level_1)
level_1["tau", ] <- draw_tau(data, level_1)
level_2 <- draw_level_2(covars, level_1, hyper_prior)
draw <- draw_level_1(data, covars, level_1, level_2)
level_1["lambda", ] <- draw$lambda
level_1["mu", ] <- draw$mu
# store
if ( (step - burnin) > 0 & (step - 1 - burnin) %% thin == 0) {
idx <- (step - 1 - burnin) %/% thin + 1
level_1_draws[idx, , ] <- level_1 # nolint
level_2_draws[idx, ] <- c(level_2$beta, level_2$gamma[1, 1], level_2$gamma[1, 2], level_2$gamma[2,
2])
}
}
# convert MCMC draws into coda::mcmc objects
return(list(
"level_1" = lapply(1:nr_of_cust,
function(i) mcmc(level_1_draws[, , i], start = burnin, thin = thin)), # nolint
"level_2" = mcmc(level_2_draws, start = burnin, thin = thin)))
}
# check whether input data meets requirements
stopifnot(is.data.frame(cal.cbs))
stopifnot(all(c("x", "t.x", "T.cal") %in% names(cal.cbs)))
stopifnot(all(covariates %in% names(cal.cbs)))
# Setup Regressors (Covariates) for location of 1st-stage prior, i.e. beta = [log(lambda), log(mu)]
cal.cbs[, "intercept"] <- 1
covariates <- c("intercept", covariates)
K <- length(covariates) # number of covars
covars <- as.matrix(subset(cal.cbs, select = covariates))
# set hyper priors
beta_0 <- matrix(0, nrow = K, ncol = 2, dimnames = list(NULL, c("log_lambda", "log_mu")))
A_0 <- diag(rep(0.01, K), ncol = K, nrow = K) # diffuse precision matrix
# set diffuse hyper-parameters for 2nd-stage prior of gamma_0; follows defaults from rmultireg example
nu_00 <- 3 + K # 30
gamma_00 <- nu_00 * diag(2)
hyper_prior <- list(beta_0 = beta_0, A_0 = A_0, nu_00 = nu_00, gamma_00 = gamma_00)
# run multiple chains - executed in parallel on Unix
ncores <- ifelse(!is.null(mc.cores), min(chains, mc.cores), ifelse(.Platform$OS.type == "windows", 1, min(chains,
detectCores())))
if (ncores > 1)
cat("running in parallel on", ncores, "cores\n")
draws <- mclapply(1:chains, function(i) run_single_chain(i, cal.cbs, hyper_prior = hyper_prior), mc.cores = ncores)
# merge chains into coda::mcmc.list objects
out <- list(level_1 = lapply(1:nrow(cal.cbs), function(i) mcmc.list(lapply(draws, function(draw) draw$level_1[[i]]))),
level_2 = mcmc.list(lapply(draws, function(draw) draw$level_2)))
if ("cust" %in% names(cal.cbs))
names(out$level_1) <- cal.cbs$cust
return(out)
}
#' Simulate data according to Pareto/NBD (Abe) model assumptions
#'
#' @param n Number of customers.
#' @param T.cal Length of calibration period. If a vector is provided, then it
#'   is assumed that customers have different 'birth' dates, with customer i
#'   entering the cohort at \eqn{max(T.cal)-T.cal[i]}.
#' @param T.star Length of holdout period. This may be a vector.
#' @param params A list of model parameters: \code{beta} and \code{gamma}.
#' @param date.zero Initial date for cohort start. Can be of class character, Date or POSIXt.
#' @param covariates Provide matrix of customer covariates. If NULL then random covariate values between [-1,1] are drawn.
#' @return List of length 2:
#' \item{\code{cbs}}{A data.frame with a row for each customer and the summary statistic as columns.}
#' \item{\code{elog}}{A data.frame with a row for each transaction, and columns \code{cust}, \code{date} and \code{t}.}
#' @export
#' @examples
#' # generate artificial Pareto/NBD (Abe) with 2 covariates
#' params <- list()
#' params$beta <- matrix(c(0.18, -2.5, 0.5, -0.3, -0.2, 0.8), byrow = TRUE, ncol = 2)
#' params$gamma <- matrix(c(0.05, 0.1, 0.1, 0.2), ncol = 2)
#' data <- abe.GenerateData(n = 200, T.cal = 32, T.star = 32, params)
#' cbs <- data$cbs # customer by sufficient summary statistic - one row per customer
#' elog <- data$elog # Event log - one row per event/purchase
abe.GenerateData <- function(n, T.cal, T.star, params, date.zero = "2000-01-01", covariates = NULL) {
# set start date for each customer, so that they share same T.cal date
T.cal.fix <- max(T.cal)
T.cal <- rep(T.cal, length.out = n)
T.zero <- T.cal.fix - T.cal
date.zero <- as.POSIXct(date.zero)
if (!is.matrix(params$beta))
params$beta <- matrix(params$beta, nrow = 1, ncol = 2)
nr_covars <- nrow(params$beta)
if (!is.null(covariates)) {
# ensure that provided covariates are in matrix format, with intercept
covars <- covariates
if (is.data.frame(covars)) covars <- as.matrix(covars)
if (!is.matrix(covars)) covars <- matrix(covars, ncol = 1, dimnames = list(NULL, "covariate_1"))
if (!all(covars[, 1] == 1)) covars <- cbind("intercept" = rep(1, nrow(covars)), covars)
if (is.null(colnames(covars)) & ncol(covars) > 1)
colnames(covars)[-1] <- paste("covariate", 1:(nr_covars - 1), sep = "_")
if (nr_covars != ncol(covars))
stop("provided number of covariate columns does not match implied covariate number by parameter `beta`")
if (n != nrow(covars))
covars <- covars[sample(1:nrow(covars), n, replace = TRUE), ]
} else {
# simulate covariates, if not provided
covars <- matrix(c(rep(1, n), runif( (nr_covars - 1) * n, -1, 1)), nrow = n, ncol = nr_covars)
colnames(covars) <- paste("covariate", 0:(nr_covars - 1), sep = "_")
colnames(covars)[1] <- "intercept"
}
# sample log-normal distributed parameters lambda/mu for each customer
thetas <- exp( (covars %*% params$beta) + mvtnorm::rmvnorm(n, mean = c(0, 0), sigma = params$gamma))
lambdas <- thetas[, 1]
mus <- thetas[, 2]
# sample lifetime for each customer
taus <- rexp(n, rate = mus)
# sample intertransaction timings
elog_list <- lapply(1:n, function(i) {
# sample 'sufficiently' large amount of inter-transaction times
minT <- min(T.cal[i] + max(T.star), taus[i])
itt_draws <- max(10, round(minT * lambdas[i] * 1.5))
itt_fn <- function(n) rexp(n, rate = lambdas[i])
itts <- itt_fn(itt_draws)
if (sum(itts) < minT) itts <- c(itts, itt_fn(itt_draws * 4))
if (sum(itts) < minT) itts <- c(itts, itt_fn(itt_draws * 800))
if (sum(itts) < minT) stop("not enough inter-transaction times sampled: ", sum(itts), " < ", minT)
ts <- cumsum(c(0, itts))
ts <- ts[ts <= taus[i]] # trim to lifetime
ts <- T.zero[i] + ts # shift by T_0
ts <- ts[ts <= (T.cal.fix + max(T.star))] # trim to observation length
return(ts)
})
# build elog
elog <- data.table("cust" = rep(1:n, sapply(elog_list, length)), "t" = unlist(elog_list))
elog[["date"]] <- date.zero + elog[["t"]] * 3600 * 24 * 7
# build cbs
date.cal <- date.zero + T.cal.fix * 3600 * 24 * 7
date.tot <- date.cal + T.star * 3600 * 24 * 7
cbs <- elog2cbs(elog, T.cal = date.cal)
if (length(T.star) == 1) set(cbs, j = "T.star", value = T.star[1])
xstar.cols <- if (length(T.star) == 1) "x.star" else paste0("x.star", T.star)
for (j in 1:length(date.tot)) {
set(cbs, j = xstar.cols[j],
value = sapply(elog_list, function(t) sum(t > T.cal.fix & t <= T.cal.fix + T.star[j])))
}
set(cbs, j = "lambda", value = lambdas)
set(cbs, j = "mu", value = mus)
set(cbs, j = "tau", value = taus)
set(cbs, j = "alive", value = (T.zero + taus) > T.cal.fix)
cbs <- cbind(cbs, covars)
return(list("cbs" = setDF(cbs), "elog" = setDF(elog)))
}
# (source file: BTYDplus/R/pareto-nbd-abe.R)
#' Pareto/NBD (HB) Parameter Draws
#'
#' Returns draws from the posterior distributions of the Pareto/NBD (HB)
#' parameters, on cohort as well as on customer level.
#'
#' See \code{demo('pareto-ggg')} for how to apply this model.
#'
#' @param cal.cbs Calibration period customer-by-sufficient-statistic (CBS)
#' data.frame. It must contain a row for each customer, and columns \code{x}
#' for frequency, \code{t.x} for recency and \code{T.cal} for the total time
#' observed. A correct format can be easily generated based on the complete
#' event log of a customer cohort with \code{\link{elog2cbs}}.
#' @param mcmc Number of MCMC steps.
#' @param burnin Number of initial MCMC steps which are discarded.
#' @param thin Only every \code{thin}-th MCMC step will be returned.
#' @param chains Number of MCMC chains to be run.
#' @param mc.cores Number of cores to use in parallel (Unix only). Defaults to \code{min(chains, detectCores())}.
#' @param use_data_augmentation deprecated
#' @param param_init List of start values for cohort-level parameters.
#' @param trace Print logging statement every \code{trace}-th iteration. Not available for \code{mc.cores > 1}.
#' @return 2-element list:
#' \itemize{
#' \item{\code{level_1 }}{list of \code{\link{mcmc.list}}s, one for each customer, with draws for customer-level parameters \code{lambda}, \code{tau}, \code{z}, \code{mu}}
#' \item{\code{level_2 }}{\code{\link{mcmc.list}}, with draws for cohort-level parameters \code{r}, \code{alpha}, \code{s}, \code{beta}}
#' }
#' @export
#' @seealso \code{\link{pnbd.GenerateData} } \code{\link{mcmc.DrawFutureTransactions} } \code{\link{mcmc.PAlive} }
#' @references Ma, S. H., & Liu, J. L. (2007, August). The MCMC approach for
#' solving the Pareto/NBD model and possible extensions. In Third
#' international conference on natural computation (ICNC 2007) (Vol. 2, pp.
#' 505-512). IEEE. \doi{10.1109/ICNC.2007.728}
#' @references Abe, M. (2009). "Counting your customers" one by one: A
#' hierarchical Bayes extension to the Pareto/NBD model. Marketing Science,
#' 28(3), 541-553. \doi{10.1287/mksc.1090.0502}
#' @examples
#' data("groceryElog")
#' cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31")
#' param.draws <- pnbd.mcmc.DrawParameters(cbs,
#' mcmc = 100, burnin = 50, thin = 10, chains = 1) # short MCMC to run demo fast
#'
#' # cohort-level parameter draws
#' as.matrix(param.draws$level_2)
#' # customer-level parameter draws for customer with ID '4'
#' as.matrix(param.draws$level_1[["4"]])
#'
#' # estimate future transactions
#' xstar.draws <- mcmc.DrawFutureTransactions(cbs, param.draws, cbs$T.star)
#' xstar.est <- apply(xstar.draws, 2, mean)
#' head(xstar.est)
pnbd.mcmc.DrawParameters <- function(cal.cbs, mcmc = 2500, burnin = 500, thin = 50, chains = 2, mc.cores = NULL,
use_data_augmentation = TRUE, param_init = NULL, trace = 100) {
# ** methods to sample heterogeneity parameters {r, alpha, s, beta} **
draw_gamma_params <- function(type, level_1, level_2, hyper_prior) {
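# re-draw the cohort-level (shape, rate) pair of the gamma mixing distribution via slice
# sampling, conditional on the current individual-level draws of lambda or mu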
if (type == "lambda") {
x <- level_1["lambda", ]
cur_params <- c(level_2["r"], level_2["alpha"])
hyper <- unlist(hyper_prior[c("r_1", "r_2", "alpha_1", "alpha_2")])
} else if (type == "mu") {
x <- level_1["mu", ]
cur_params <- c(level_2["s"], level_2["beta"])
hyper <- unlist(hyper_prior[c("s_1", "s_2", "beta_1", "beta_2")])
}
slice_sample_gamma_parameters(x, cur_params, hyper, steps = 50, w = 0.1)
}
# ** methods to sample individual-level parameters (with data augmentation) **
draw_lambda <- function(data, level_1, level_2) {
N <- nrow(data)
x <- data$x
T.cal <- data$T.cal
tau <- level_1["tau", ]
r <- level_2["r"]
alpha <- level_2["alpha"]
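# conjugate gamma-Poisson update: x purchases over the active period min(tau, T.cal)
# combined with a Gamma(r, alpha) prior give a Gamma(r + x, alpha + min(tau, T.cal)) posterior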
lambda <- rgamma(n = N, shape = r + x, rate = alpha + pmin(tau, T.cal))
lambda[lambda == 0 | log(lambda) < -30] <- exp(-30) # avoid numeric overflow
return(lambda)
}
draw_mu <- function(data, level_1, level_2) {
N <- nrow(data)
tau <- level_1["tau", ]
s <- level_2["s"]
beta <- level_2["beta"]
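# conjugate update: a single (augmented) lifetime tau ~ Exp(mu) combined with a
# Gamma(s, beta) prior gives a Gamma(s + 1, beta + tau) posterior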
mu <- rgamma(n = N, shape = s + 1, rate = beta + tau)
mu[mu == 0 | log(mu) < -30] <- exp(-30) # avoid numeric overflow
return(mu)
}
draw_tau <- function(data, level_1) {
N <- nrow(data)
tx <- data$t.x
Tcal <- data$T.cal
lambda <- level_1["lambda", ]
mu <- level_1["mu", ]
mu_lam <- mu + lambda
t_diff <- Tcal - tx
# sample z
p_alive <- 1 / (1 + (mu / mu_lam) * (exp(mu_lam * t_diff) - 1))
alive <- p_alive > runif(n = N)
# sample tau
tau <- numeric(N)
# Case: still alive - left truncated exponential distribution -> [Tcal, Inf]
if (any(alive)) {
tau[alive] <- Tcal[alive] + rexp(sum(alive), mu[alive])
}
# Case: churned - doubly truncated exponential distribution -> [tx, Tcal]
if (any(!alive)) {
mu_lam_tx <- pmin(700, mu_lam[!alive] * tx[!alive])
mu_lam_Tcal <- pmin(700, mu_lam[!alive] * Tcal[!alive])
# sample with https://en.wikipedia.org/wiki/Inverse_transform_sampling
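# for an Exp(mu + lambda) variate truncated to [t.x, T.cal], inverting the CDF gives
# tau = -log((1 - u) * exp(-(mu + lambda) * t.x) + u * exp(-(mu + lambda) * T.cal)) / (mu + lambda);
# the pmin(700, .) caps above keep exp() from underflowing to zero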
rand <- runif(n = sum(!alive))
tau[!alive] <- -log((1 - rand) * exp(-mu_lam_tx) + rand * exp(-mu_lam_Tcal)) / mu_lam[!alive] # nolint
}
return(tau)
}
run_single_chain <- function(chain_id = 1, data, hyper_prior) {
## initialize arrays for storing draws ##
nr_of_cust <- nrow(data)
nr_of_draws <- (mcmc - 1) %/% thin + 1
level_2_draws <- array(NA_real_, dim = c(nr_of_draws, 4))
dimnames(level_2_draws)[[2]] <- c("r", "alpha", "s", "beta")
level_1_draws <- array(NA_real_, dim = c(nr_of_draws, 4, nr_of_cust))
dimnames(level_1_draws)[[2]] <- c("lambda", "mu", "tau", "z")
## initialize parameters ##
level_2 <- level_2_draws[1, ]
level_2["r"] <- param_init$r
level_2["alpha"] <- param_init$alpha
level_2["s"] <- param_init$s
level_2["beta"] <- param_init$beta
level_1 <- level_1_draws[1, , ] # nolint
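# heuristic starting values: lambda from the cohort's average purchase rate, tau set
# just beyond the last observed purchase, and mu as its reciprocal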
level_1["lambda", ] <- mean(data$x) / mean(ifelse(data$t.x == 0, data$T.cal, data$t.x))
level_1["tau", ] <- data$t.x + 0.5 / level_1["lambda", ]
level_1["z", ] <- as.numeric(level_1["tau", ] > data$T.cal)
level_1["mu", ] <- 1 / level_1["tau", ]
## run MCMC chain ##
for (step in 1:(burnin + mcmc)) {
if (step %% trace == 0)
cat("chain:", chain_id, "step:", step, "of", (burnin + mcmc), "\n")
# store
if ( (step - burnin) > 0 & (step - 1 - burnin) %% thin == 0) {
idx <- (step - 1 - burnin) %/% thin + 1
level_1_draws[idx, , ] <- level_1 # nolint
level_2_draws[idx, ] <- level_2
}
# draw individual-level parameters
level_1["lambda", ] <- draw_lambda(data, level_1, level_2)
level_1["mu", ] <- draw_mu(data, level_1, level_2)
level_1["tau", ] <- draw_tau(data, level_1)
level_1["z", ] <- as.numeric(level_1["tau", ] > data$T.cal)
# draw heterogeneity parameters
level_2[c("r", "alpha")] <- draw_gamma_params("lambda", level_1, level_2, hyper_prior)
level_2[c("s", "beta")] <- draw_gamma_params("mu", level_1, level_2, hyper_prior)
}
# convert MCMC draws into coda::mcmc objects
return(list(
"level_1" = lapply(1:nr_of_cust,
function(i) mcmc(level_1_draws[, , i], start = burnin, thin = thin)), # nolint
"level_2" = mcmc(level_2_draws, start = burnin, thin = thin)))
}
# set hyper priors
hyper_prior <- list(r_1 = 0.001, r_2 = 0.001,
alpha_1 = 0.001, alpha_2 = 0.001,
s_1 = 0.001, s_2 = 0.001,
beta_1 = 0.001, beta_2 = 0.001)
# set param_init (if not passed as argument)
if (is.null(param_init)) {
try({
df <- cal.cbs[sample(nrow(cal.cbs), min(nrow(cal.cbs), 1000)), ]
param_init <- BTYD::pnbd.EstimateParameters(df)
names(param_init) <- c("r", "alpha", "s", "beta")
param_init <- as.list(param_init)
},
silent = TRUE)
if (is.null(param_init))
param_init <- list(r = 1, alpha = 1, s = 1, beta = 1)
cat("set param_init:", paste(round(unlist(param_init), 4), collapse = ", "), "\n")
}
# check whether input data meets requirements
stopifnot(is.data.frame(cal.cbs))
stopifnot(all(c("x", "t.x", "T.cal") %in% names(cal.cbs)))
stopifnot(all(is.finite(cal.cbs$litt)))
# run multiple chains - executed in parallel on Unix
ncores <- ifelse(!is.null(mc.cores), min(chains, mc.cores), ifelse(.Platform$OS.type == "windows", 1, min(chains,
detectCores())))
if (ncores > 1)
cat("running in parallel on", ncores, "cores\n")
draws <- mclapply(1:chains, function(i) run_single_chain(i, cal.cbs, hyper_prior), mc.cores = ncores)
# merge chains into coda::mcmc.list objects
out <- list(level_1 = lapply(1:nrow(cal.cbs), function(i) mcmc.list(lapply(draws, function(draw) draw$level_1[[i]]))),
level_2 = mcmc.list(lapply(draws, function(draw) draw$level_2)))
if ("cust" %in% names(cal.cbs))
names(out$level_1) <- cal.cbs$cust
return(out)
}
#' Simulate data according to Pareto/NBD model assumptions
#'
#' @param n Number of customers.
#' @param T.cal Length of calibration period. If a vector is provided, then it
#'   is assumed that customers have different 'birth' dates, with customer i
#'   entering the cohort at \eqn{max(T.cal)-T.cal[i]}.
#' @param T.star Length of holdout period. This may be a vector.
#' @param params A list of model parameters \code{r},
#' \code{alpha}, \code{s}, \code{beta}.
#' @param date.zero Initial date for cohort start. Can be of class character, Date or POSIXt.
#' @return List of length 2:
#' \item{\code{cbs}}{A data.frame with a row for each customer and the summary statistic as columns.}
#' \item{\code{elog}}{A data.frame with a row for each transaction, and columns \code{cust}, \code{date} and \code{t}.}
#' @export
#' @examples
#' params <- list(r = 5, alpha = 10, s = 0.8, beta = 12)
#' data <- pnbd.GenerateData(n = 200, T.cal = 32, T.star = 32, params)
#' cbs <- data$cbs # customer by sufficient summary statistic - one row per customer
#' elog <- data$elog # Event log - one row per event/purchase
pnbd.GenerateData <- function(n, T.cal, T.star, params, date.zero = "2000-01-01") {
params$k <- 1
pggg.GenerateData(n, T.cal, T.star, params, date.zero)
}
# (source file: BTYDplus/R/pareto-nbd-mcmc.R)
# Specify packages that are used internally by this package
#' @import BTYD
#' @import data.table
#' @import coda
#' @import parallel
#' @import stats
#' @import graphics
#' @importFrom Rcpp sourceCpp
#' @useDynLib BTYDplus
#'
NULL
# (source file: BTYDplus/R/zzz.R)
#' Load transaction records of 2357 CDNow customers.
cdnowElog <- read.csv(system.file("data/cdnowElog.csv", package = "BTYD"),
stringsAsFactors = FALSE,
col.names = c("cust", "sampleid", "date", "cds", "sales"))
cdnowElog$date <- as.Date(as.character(cdnowElog$date), format = "%Y%m%d")
head(cdnowElog)
range(cdnowElog$date)
#' Convert from event log to customer-by-sufficient-statistic summary.
#' Split into 39 weeks calibration, and 39 weeks holdout period.
cbs <- elog2cbs(cdnowElog, T.cal = "1997-09-30", T.tot = "1998-06-30")
head(cbs)
#' Report some basic summary stats for calibration and holdout period.
summary(cbs[, c("x", "sales", "x.star", "sales.star")])
x <- readline("Estimate NBD model (press Enter)")
# Estimate NBD model parameters.
(params.nbd <- nbd.EstimateParameters(cbs))
# Predict transactions at customer level with NBD model.
cbs$xstar.nbd <- nbd.ConditionalExpectedTransactions(
params = params.nbd,
T.star = cbs$T.star,
x = cbs$x,
T.cal = cbs$T.cal)
# Estimate total transactions during holdout, based on NBD model.
sum(cbs$xstar.nbd)
x <- readline("Estimate MBG/CNBD-k model (press Enter)")
# Estimate MBG/CNBD-k model parameters.
(params.mbgcnbd <- mbgcnbd.EstimateParameters(cbs))
#' k=1 -> no regularity detected for CDNow
# Predict transactions at customer level with MBG/CNBD-k model.
cbs$xstar.mbgcnbd <- mbgcnbd.ConditionalExpectedTransactions(
params = params.mbgcnbd,
T.star = cbs$T.star,
x = cbs$x,
t.x = cbs$t.x,
T.cal = cbs$T.cal)
# Estimate total transactions during holdout, based on MBG/CNBD-k model.
sum(cbs$xstar.mbgcnbd)
# Estimate probability of still being a customer at the end of the calibration period.
cbs$palive.mbgcnbd <- mbgcnbd.PAlive(
params = params.mbgcnbd,
x = cbs$x,
t.x = cbs$t.x,
T.cal = cbs$T.cal)
# Estimate share of retained customers at end of calibration period.
mean(cbs$palive.mbgcnbd)
x <- readline("Compare Log-Likelihoods of various models (press Enter)")
params.pnbd <- BTYD::pnbd.EstimateParameters(cbs) # estimate Pareto/NBD
params.bgcnbd <- bgcnbd.EstimateParameters(cbs) # estimate BG/CNBD-k
(ll <- c(`NBD` = nbd.cbs.LL(params.nbd, cbs),
`Pareto/NBD` = BTYD::pnbd.cbs.LL(params.pnbd, cbs),
`BG/CNBD-k` = bgcnbd.cbs.LL(params.bgcnbd, cbs),
`MBG/CNBD-k` = mbgcnbd.cbs.LL(params.mbgcnbd, cbs)))
names(which.max(ll))
# -> MBG/CNBD-k provides best fit according to log-likelihood
x <- readline("Plot Frequency in Calibration (press Enter)")
op <- par(mfrow = c(1, 2))
nil <- mbgcnbd.PlotFrequencyInCalibration(params.mbgcnbd, cbs, censor = 7, title = "MBG/CNBD-k")
nil <- BTYD::pnbd.PlotFrequencyInCalibration(params.pnbd, cbs, censor = 7, title = "Pareto/NBD")
par(op)
x <- readline("Plot Incremental Transactions (press Enter)")
inc <- elog2inc(cdnowElog)
op <- par(mfrow = c(1, 2))
nil <- mbgcnbd.PlotTrackingInc(params.mbgcnbd, cbs$T.cal, T.tot = 78, inc, title = "MBG/CNBD-k")
nil <- BTYD::pnbd.PlotTrackingInc(params.pnbd, cbs$T.cal, T.tot = 78, inc, title = "Pareto/NBD")
par(op)
x <- readline("Compare Forecasting Accuracy (press Enter)")
cbs$xstar.pnbd <- BTYD::pnbd.ConditionalExpectedTransactions(
params = params.pnbd,
T.star = cbs$T.star,
x = cbs$x,
t.x = cbs$t.x,
T.cal = cbs$T.cal)
cbs$xstar.bgcnbd <- bgcnbd.ConditionalExpectedTransactions(
params = params.bgcnbd,
T.star = cbs$T.star,
x = cbs$x,
t.x = cbs$t.x,
T.cal = cbs$T.cal)
measures <- c(
"MAE" = function(a, f) mean(abs(a - f)),
"MSLE" = function(a, f) mean(((log(a + 1) - log(f + 1)))^2),
"BIAS" = function(a, f) sum(f)/sum(a) - 1)
models <- c(
"NBD" = "nbd",
"Pareto/NBD" = "pnbd",
"BG/CNBD-k" = "bgcnbd",
"MBG/CNBD-k" = "mbgcnbd")
sapply(measures, function(measure) {
sapply(models, function(model) {
err <- do.call(measure, list(a = cbs$x.star, f = cbs[[paste0("xstar.", model)]]))
round(err, 3)
})
})
#' -> Pareto/NBD and MBG/CNBD-k provide best forecasts
x <- readline("Calculate CLV (press Enter)")
#' calculate average spends per transaction
cbs$sales.avg <- cbs$sales / (cbs$x + 1)
#' Note: in CDNow some customers have sales.avg = 0. We substitute the zeros
#' with the minimum non-zero spend, as the estimation fails otherwise
cbs$sales.avg[cbs$sales.avg == 0] <- min(cbs$sales.avg[cbs$sales.avg > 0])
#' Estimate expected average transaction value based on gamma-gamma spend model
spend.params <- BTYD::spend.EstimateParameters(cbs$sales.avg, cbs$x + 1)
cbs$sales.avg.est <- BTYD::spend.expected.value(spend.params, cbs$sales.avg, cbs$x + 1)
#' Estimated total sales during holdout for MBG/CNBD-k
cbs$sales.mbgcnbd <- cbs$sales.avg.est * cbs$xstar.mbgcnbd
c("Estimated Sales" = sum(cbs$sales.mbgcnbd),
"Actual Sales" = sum(cbs$sales.star))
x <- readline("For a demo of Pareto/NBD (Abe) with CDNow see `demo(\"pareto-abe\")`.")
# (source file: BTYDplus/demo/cdnow.R)
#' Load transaction records of 1525 grocery customers.
data("groceryElog", envir = environment())
head(groceryElog)
range(groceryElog$date)
#' Convert from event log to customer-by-sufficient-statistic summary.
#' Split into 52 weeks calibration, and 52 weeks holdout period.
cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31", T.tot = "2007-12-30")
head(cbs)
x <- readline("Estimate regularity via Wheat/Morrison estimator (press Enter)")
(k.est <- estimateRegularity(groceryElog))
#' -> Wheat-Morrison estimator detects Erlang-2.
#' Plot Timing Patterns of a few sampled customers
plotTimingPatterns(groceryElog, T.cal = "2006-12-31")
x <- readline("Estimate MBG/CNBD-k model (press Enter)")
# Estimate MBG/CNBD-k model parameters.
(params.mbgcnbd <- mbgcnbd.EstimateParameters(cbs))
#' k=2 -> regularity also detected via MBG/CNBD-k model
# Predict transactions at customer level with MBG/CNBD-k model.
cbs$xstar.mbgcnbd <- mbgcnbd.ConditionalExpectedTransactions(
params = params.mbgcnbd,
T.star = cbs$T.star,
x = cbs$x,
t.x = cbs$t.x,
T.cal = cbs$T.cal)
# Estimate total transactions during holdout, based on MBG/CNBD-k model.
sum(cbs$xstar.mbgcnbd)
# Estimate probability of still being a customer at the end of the calibration period.
cbs$palive.mbgcnbd <- mbgcnbd.PAlive(
params = params.mbgcnbd,
x = cbs$x,
t.x = cbs$t.x,
T.cal = cbs$T.cal)
# Estimate share of retained customers at end of calibration period.
mean(cbs$palive.mbgcnbd)
x <- readline("Compare log-likelihoods of various models (press Enter)")
params.nbd <- nbd.EstimateParameters(cbs) # estimate NBD
params.pnbd <- BTYD::pnbd.EstimateParameters(cbs) # estimate Pareto/NBD
params.bgnbd <- BTYD::bgnbd.EstimateParameters(cbs) # estimate BG/NBD
params.mbgnbd <- mbgnbd.EstimateParameters(cbs) # estimate MBG/NBD
(ll <- c(`NBD` = nbd.cbs.LL(params.nbd, cbs),
`Pareto/NBD` = BTYD::pnbd.cbs.LL(params.pnbd, cbs),
`BG/NBD` = BTYD::bgnbd.cbs.LL(params.bgnbd, cbs),
`MBG/NBD` = mbgcnbd.cbs.LL(params.mbgnbd, cbs),
`MBG/CNBD-k` = mbgcnbd.cbs.LL(params.mbgcnbd, cbs)))
names(which.max(ll))
# -> MBG/CNBD-k provides best data fit according to log-likelihood
x <- readline("Compare forecast accuracies of various models (press Enter)")
cbs$xstar.nbd <- nbd.ConditionalExpectedTransactions(
params.nbd, cbs$T.star, cbs$x, cbs$T.cal)
cbs$xstar.pnbd <- BTYD::pnbd.ConditionalExpectedTransactions(
params.pnbd, cbs$T.star, cbs$x, cbs$t.x, cbs$T.cal)
cbs$xstar.bgnbd <- BTYD::bgnbd.ConditionalExpectedTransactions(
params.bgnbd, cbs$T.star, cbs$x, cbs$t.x, cbs$T.cal)
cbs$xstar.mbgnbd <- mbgcnbd.ConditionalExpectedTransactions(
params.mbgnbd, cbs$T.star, cbs$x, cbs$t.x, cbs$T.cal)
measures <- c(
"MAE" = function(a, f) mean(abs(a - f)),
"MSLE" = function(a, f) mean(((log(a + 1) - log(f + 1)))^2),
"BIAS" = function(a, f) sum(f)/sum(a) - 1)
models <- c(
"NBD" = "nbd",
"Pareto/NBD" = "pnbd",
"BG/NBD" = "bgnbd",
"MBG/NBD" = "mbgnbd",
"MBG/CNBD-k" = "mbgcnbd")
sapply(measures, function(measure) {
sapply(models, function(model) {
err <- do.call(measure, list(a = cbs$x.star, f = cbs[[paste0("xstar.", model)]]))
round(err, 3)
})
})
#' -> MBG/CNBD-k provides best customer-level forecast accuracy
x <- readline("Plot Frequency in Calibration (press Enter)")
op <- par(mfrow = c(1, 2))
nil <- mbgcnbd.PlotFrequencyInCalibration(params.mbgcnbd, cbs, censor = 7, title = "MBG/CNBD-k")
nil <- BTYD::pnbd.PlotFrequencyInCalibration(params.pnbd, cbs, censor = 7, title = "Pareto/NBD")
par(op)
x <- readline("Plot Incremental Transactions (press Enter)")
inc <- elog2inc(groceryElog)
T.tot <- max(cbs$T.cal+cbs$T.star)
op <- par(mfrow = c(1, 2))
nil <- mbgcnbd.PlotTrackingInc(params.mbgcnbd, cbs$T.cal, T.tot = T.tot, inc, title = "MBG/CNBD-k")
nil <- BTYD::pnbd.PlotTrackingInc(params.pnbd, cbs$T.cal, T.tot = T.tot, inc, title = "Pareto/NBD")
par(op)
# (source file: BTYDplus/demo/mbg-cnbd-k.R)
#' Load transaction records of 2357 CDNow customers.
cdnowElog <- read.csv(system.file("data/cdnowElog.csv", package = "BTYD"),
stringsAsFactors = FALSE,
col.names = c("cust", "sampleid", "date", "cds", "sales"))
cdnowElog$date <- as.Date(as.character(cdnowElog$date), format = "%Y%m%d")
head(cdnowElog)
range(cdnowElog$date)
#' Convert from event log to customer-by-sufficient-statistic summary.
#' Split into 39 weeks calibration, and 39 weeks holdout period.
cbs <- elog2cbs(cdnowElog, T.cal = "1997-09-30", T.tot = "1998-06-30")
x <- readline("Estimate Pareto/NBD (Abe) without covariates (press Enter)")
#' Estimate with no covariates; see model M1 in Abe (2009)
draws.m1 <- abe.mcmc.DrawParameters(
cal.cbs = cbs,
mcmc = 5000, burnin = 5000,
mc.cores = 1)
round(summary(draws.m1$level_2)$quantiles[, c("2.5%", "50%", "97.5%")], 2)
#' -> Parameter Estimates match Table 3 in Abe (2009).
x <- readline("Estimate Pareto/NBD (Abe) with covariates (press Enter)")
#' Append dollar amount of first purchase to use as covariate
first <- aggregate(sales ~ cust, cdnowElog, function(x) x[1] * 10^-3)
names(first) <- c("cust", "first.sales")
cbs <- merge(cbs, first, by = "cust")
#' Estimate with first purchase spend as covariate; see model M2 in Abe (2009)
draws.m2 <- abe.mcmc.DrawParameters(
cal.cbs = cbs,
covariates = c("first.sales"),
mcmc = 5000, burnin = 5000,
mc.cores = 1)
round(summary(draws.m2$level_2)$quantiles[, c("2.5%", "50%", "97.5%")], 4)
#' -> Parameter Estimates match Table 3 in Abe (2009), except for
#' `log_lambda_first.sales` and `log_mu_first.sales`; note however, that via
#' simulation we can establish that our implementation is able to re-identify
#' the underlying parameters correctly; see
#' `tests/testthat/test-pareto-nbd-abe.R`
x <- readline("Compare predictive performance of the two models (press Enter)")
#' Predict holdout with M1 and M2 models
#' 1) draw future transaction
xstar.m1.draws <- mcmc.DrawFutureTransactions(cbs, draws.m1, T.star = cbs$T.star)
xstar.m2.draws <- mcmc.DrawFutureTransactions(cbs, draws.m2, T.star = cbs$T.star)
#' 2) calculate mean over future transaction draws for each customer
cbs$xstar.m1 <- apply(xstar.m1.draws, 2, mean)
cbs$xstar.m2 <- apply(xstar.m2.draws, 2, mean)
#' 3) compare mean absolute error at individual level
round(c(`MAE without covariates` = mean(abs(cbs$x.star - cbs$xstar.m1)),
`MAE with covariates` = mean(abs(cbs$x.star - cbs$xstar.m2))), 4)
# (source file: BTYDplus/demo/pareto-abe.R)
#' Load transaction records of 1525 grocery customers.
data("groceryElog", envir = environment())
head(groceryElog)
range(groceryElog$date)
#' Convert from event log to customer-by-sufficient-statistic summary.
#' Split into 52 weeks calibration, and 52 weeks holdout period.
cbs <- elog2cbs(groceryElog, T.cal = "2006-12-31", T.tot = "2007-12-30")
head(cbs)
x <- readline("Estimate Pareto/NBD (HB) and Pareto/GGG (press Enter)")
#' Draw Pareto/NBD parameters and compare median estimates with Pareto/NBD
#' implementation from BTYD package.
pnbd.draws <- pnbd.mcmc.DrawParameters(cal.cbs = cbs, mc.cores = 1)
#' Draw Pareto/GGG parameters and report median estimates. Note that we only
#' use a relatively short MCMC chain so that this demo finishes reasonably fast.
pggg.draws <- pggg.mcmc.DrawParameters(cal.cbs = cbs,
mcmc = 500, burnin = 500, chains = 2, thin = 20,
mc.cores = 1)
round(rbind(`Pareto/GGG`= summary(pggg.draws$level_2)$quantiles[, "50%"],
`Pareto/NBD (HB)` = c(NA, NA, summary(pnbd.draws$level_2)$quantiles[, "50%"]),
`Pareto/NBD` = c(NA, NA, BTYD::pnbd.EstimateParameters(cbs))), 2)
#' plot estimated parameter distributions
plot(pggg.draws$level_2, density = TRUE, trace = FALSE)
pggg.plotRegularityRateHeterogeneity(pggg.draws)
# -> regularity detected in grocery dataset
#' check MCMC convergence
plot(pggg.draws$level_2, density = FALSE, trace = TRUE)
coda::effectiveSize(pggg.draws$level_2)
x <- readline("Estimate future transactions (press Enter)")
#' draw future transaction
xstar.pnbd.draws <- mcmc.DrawFutureTransactions(cbs, pnbd.draws, T.star = cbs$T.star, sample_size = 400)
xstar.pggg.draws <- mcmc.DrawFutureTransactions(cbs, pggg.draws, T.star = cbs$T.star, sample_size = 400)
#' calculate mean over future transaction draws for each customer
cbs$xstar.pnbd <- apply(xstar.pnbd.draws, 2, mean)
cbs$xstar.pggg <- apply(xstar.pggg.draws, 2, mean)
#' calculate P(alive)
cbs$palive.pnbd <- mcmc.PAlive(pnbd.draws)
cbs$palive.pggg <- mcmc.PAlive(pggg.draws)
#' calculate P(active)
cbs$pactive.pnbd <- mcmc.PActive(xstar.pnbd.draws)
cbs$pactive.pggg <- mcmc.PActive(xstar.pggg.draws)
#' compare forecast accuracy to Pareto/NBD
(mae <- c(`Pareto/GGG` = mean(abs(cbs$x.star - cbs$xstar.pggg)),
`Pareto/NBD` = mean(abs(cbs$x.star - cbs$xstar.pnbd))))
(lift <- 1 - mae[1]/mae[2])
#' -> 9% lift in customer-level accuracy when taking regularity into account
# P(active) diagnostic plot
nil <- mcmc.plotPActiveDiagnostic(cbs, xstar.pggg.draws)
# (source file: BTYDplus/demo/pareto-ggg.R)
## ---- echo = FALSE------------------------------------------------------------
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
library(BTYDplus)
data("groceryElog")
## ---- fig.show="hold", fig.width=7, fig.height=2.8, fig.cap="Timing patterns for sampled grocery customers"----
library(BTYDplus)
data("groceryElog")
set.seed(123)
# plot timing patterns of 30 sampled customers
plotTimingPatterns(groceryElog, n = 30, T.cal = "2007-05-15",
headers = c("Past", "Future"), title = "")
## ---- echo=FALSE, results="asis"----------------------------------------------
cdnowElog <- read.csv(system.file("data/cdnowElog.csv", package = "BTYD"),
stringsAsFactors = FALSE,
col.names = c("cust", "sampleid", "date", "cds", "sales"))
cdnowElog$date <- as.Date(as.character(cdnowElog$date), format = "%Y%m%d")
knitr::kable(head(cdnowElog[, c("cust", "date", "sales")], 6), caption = "Transaction Log Example")
## ---- fig.show="hold", fig.width=7, fig.height=2.5, fig.cap="Weekly trends for the grocery dataset"----
data("groceryElog")
op <- par(mfrow = c(1, 2), mar = c(2.5, 2.5, 2.5, 2.5))
# incremental
weekly_inc_total <- elog2inc(groceryElog, by = 7, first = TRUE)
weekly_inc_repeat <- elog2inc(groceryElog, by = 7, first = FALSE)
plot(weekly_inc_total, typ = "l", frame = FALSE, main = "Incremental")
lines(weekly_inc_repeat, col = "red")
# cumulative
weekly_cum_total <- elog2cum(groceryElog, by = 7, first = TRUE)
weekly_cum_repeat <- elog2cum(groceryElog, by = 7, first = FALSE)
plot(weekly_cum_total, typ = "l", frame = FALSE, main = "Cumulative")
lines(weekly_cum_repeat, col = "red")
par(op)
## -----------------------------------------------------------------------------
data("groceryElog")
head(elog2cbs(groceryElog), 5)
## -----------------------------------------------------------------------------
data("groceryElog")
range(groceryElog$date)
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
head(groceryCBS, 5)
## ---- fig.show="hold", fig.width=7, fig.height=2.2, fig.cap="Diagnostic plots for estimating regularity"----
data("groceryElog")
op <- par(mfrow = c(1, 2))
(k.wheat <- estimateRegularity(groceryElog, method = "wheat",
plot = TRUE, title = "Wheat & Morrison"))
(k.mle <- estimateRegularity(groceryElog, method = "mle",
plot = TRUE, title = "Maximum Likelihood"))
par(op)
## -----------------------------------------------------------------------------
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate NBD parameters
round(params.nbd <- nbd.EstimateParameters(groceryCBS), 3)
# report log-likelihood
nbd.cbs.LL(params.nbd, groceryCBS)
## -----------------------------------------------------------------------------
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 52 weeks
est5.nbd <- nbd.ConditionalExpectedTransactions(params.nbd,
T.star = 52, x = 1:5, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.nbd[i]), "\n")
}
## -----------------------------------------------------------------------------
# predict whole customer cohort
groceryCBS$xstar.nbd <- nbd.ConditionalExpectedTransactions(
params = params.nbd, T.star = 52,
x = groceryCBS$x, T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`NBD` = c(`Holdout` = round(sum(groceryCBS$xstar.nbd))))
## -----------------------------------------------------------------------------
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate Pareto/NBD parameters
params.pnbd <- BTYD::pnbd.EstimateParameters(groceryCBS[, c("x", "t.x", "T.cal")])
names(params.pnbd) <- c("r", "alpha", "s", "beta")
round(params.pnbd, 3)
# report log-likelihood
BTYD::pnbd.cbs.LL(params.pnbd, groceryCBS[, c("x", "t.x", "T.cal")])
## -----------------------------------------------------------------------------
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 12 weeks, but then remained
# inactive for 40 weeks
est5.pnbd <- BTYD::pnbd.ConditionalExpectedTransactions(params.pnbd,
T.star = 52, x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.pnbd[i]), "\n")
}
## -----------------------------------------------------------------------------
# predict whole customer cohort
groceryCBS$xstar.pnbd <- BTYD::pnbd.ConditionalExpectedTransactions(
params = params.pnbd, T.star = 52,
x = groceryCBS$x, t.x = groceryCBS$t.x,
T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`Pareto/NBD` = c(`Holdout` = round(sum(groceryCBS$xstar.pnbd))))
## -----------------------------------------------------------------------------
# P(alive) for customers who've had 1 to 5 transactions in first
# 12 weeks, but then remained inactive for 40 weeks
palive.pnbd <- BTYD::pnbd.PAlive(params.pnbd,
x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.2f %%", 100*palive.pnbd[i]), "\n")
}
## -----------------------------------------------------------------------------
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate parameters for various models
params.bgnbd <- BTYD::bgnbd.EstimateParameters(groceryCBS) # BG/NBD
params.bgcnbd <- bgcnbd.EstimateParameters(groceryCBS) # BG/CNBD-k
params.mbgnbd <- mbgnbd.EstimateParameters(groceryCBS) # MBG/NBD
params.mbgcnbd <- mbgcnbd.EstimateParameters(groceryCBS) # MBG/CNBD-k
row <- function(params, LL) {
names(params) <- c("k", "r", "alpha", "a", "b")
c(round(params, 3), LL = round(LL))
}
rbind(`BG/NBD` = row(c(1, params.bgnbd),
BTYD::bgnbd.cbs.LL(params.bgnbd, groceryCBS)),
`BG/CNBD-k` = row(params.bgcnbd,
bgcnbd.cbs.LL(params.bgcnbd, groceryCBS)),
`MBG/NBD` = row(params.mbgnbd,
mbgcnbd.cbs.LL(params.mbgnbd, groceryCBS)),
`MBG/CNBD-k` = row(params.mbgcnbd,
mbgcnbd.cbs.LL(params.mbgcnbd, groceryCBS)))
## -----------------------------------------------------------------------------
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 12 weeks, but then remained
# inactive for 40 weeks
est5.mbgcnbd <- mbgcnbd.ConditionalExpectedTransactions(params.mbgcnbd,
T.star = 52, x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.mbgcnbd[i]), "\n")
}
## -----------------------------------------------------------------------------
# P(alive) for customers who've had 1 to 5 transactions in first
# 12 weeks, but then remained inactive for 40 weeks
palive.mbgcnbd <- mbgcnbd.PAlive(params.mbgcnbd,
x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.2f %%", 100*palive.mbgcnbd[i]), "\n")
}
## -----------------------------------------------------------------------------
# predict whole customer cohort
groceryCBS$xstar.mbgcnbd <- mbgcnbd.ConditionalExpectedTransactions(
params = params.mbgcnbd, T.star = 52,
x = groceryCBS$x, t.x = groceryCBS$t.x,
T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`MBG/CNBD-k` = c(`Holdout` = round(sum(groceryCBS$xstar.mbgcnbd))))
## ---- fig.show="hold", fig.width=4, fig.height=3.5, fig.cap="Weekly actuals vs. MBG/CNBD-k predictions"----
# runs for ~37secs on a 2015 MacBook Pro
nil <- mbgcnbd.PlotTrackingInc(params.mbgcnbd,
T.cal = groceryCBS$T.cal,
T.tot = max(groceryCBS$T.cal + groceryCBS$T.star),
actual.inc.tracking = elog2inc(groceryElog))
## -----------------------------------------------------------------------------
# mean absolute error (MAE)
mae <- function(act, est) {
stopifnot(length(act)==length(est))
sum(abs(act-est)) / length(act)
}
mae.nbd <- mae(groceryCBS$x.star, groceryCBS$xstar.nbd)
mae.pnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.pnbd)
mae.mbgcnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.mbgcnbd)
rbind(`NBD` = c(`MAE` = round(mae.nbd, 3)),
`Pareto/NBD` = c(`MAE` = round(mae.pnbd, 3)),
`MBG/CNBD-k` = c(`MAE` = round(mae.mbgcnbd, 3)))
lift <- 1 - mae.mbgcnbd / mae.pnbd
cat("Lift in MAE for MBG/CNBD-k vs. Pareto/NBD:", round(100*lift, 1), "%")
## -----------------------------------------------------------------------------
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# generate parameter draws (~13secs on 2015 MacBook Pro)
pnbd.draws <- pnbd.mcmc.DrawParameters(groceryCBS)
# generate draws for holdout period
pnbd.xstar.draws <- mcmc.DrawFutureTransactions(groceryCBS, pnbd.draws)
# conditional expectations
groceryCBS$xstar.pnbd.hb <- apply(pnbd.xstar.draws, 2, mean)
# P(active)
groceryCBS$pactive.pnbd.hb <- mcmc.PActive(pnbd.xstar.draws)
# P(alive)
groceryCBS$palive.pnbd.hb <- mcmc.PAlive(pnbd.draws)
# show estimates for first few customers
head(groceryCBS[, c("x", "t.x", "x.star",
"xstar.pnbd.hb", "pactive.pnbd.hb",
"palive.pnbd.hb")])
## -----------------------------------------------------------------------------
class(pnbd.draws$level_2)
# convert cohort-level draws from coda::mcmc.list to a matrix, with
# each parameter becoming a column, and each draw a row
cohort.draws <- pnbd.draws$level_2
head(as.matrix(cohort.draws), 5)
# compute median across draws, and compare to ML estimates; as can be
# seen, the two parameter estimation approaches result in very similar
# estimates
round(
rbind(`Pareto/NBD (HB)` = apply(as.matrix(cohort.draws), 2, median),
`Pareto/NBD` = BTYD::pnbd.EstimateParameters(groceryCBS[, c("x", "t.x", "T.cal")]))
, 2)
## ---- fig.show="hold", warning=FALSE, fig.width=7, fig.height=3, fig.cap="MCMC traces and parameter distributions of cohort-level parameters"----
# plot trace- and density-plots for heterogeneity parameters
op <- par(mfrow = c(2, 4), mar = c(2.5, 2.5, 2.5, 2.5))
coda::traceplot(pnbd.draws$level_2)
coda::densplot(pnbd.draws$level_2)
par(op)
## ---- fig.show="hold", warning=FALSE, fig.width=7, fig.height=3, fig.cap="MCMC traces and parameter distributions of individual-level parameters for a specific customer"----
class(pnbd.draws$level_1)
length(pnbd.draws$level_1)
customer4 <- "4"
customer4.draws <- pnbd.draws$level_1[[customer4]]
head(as.matrix(customer4.draws), 5)
round(apply(as.matrix(customer4.draws), 2, median), 3)
# plot trace- and density-plots for customer4 parameters
op <- par(mfrow = c(2, 4), mar = c(2.5, 2.5, 2.5, 2.5))
coda::traceplot(pnbd.draws$level_1[[customer4]])
coda::densplot(pnbd.draws$level_1[[customer4]])
par(op)
## ---- eval = FALSE------------------------------------------------------------
# # runs for ~120secs on a MacBook Pro 2015
# op <- par(mfrow = c(1, 2))
# nil <- mcmc.PlotFrequencyInCalibration(pnbd.draws, groceryCBS)
# nil <- mcmc.PlotTrackingInc(pnbd.draws,
# T.cal = groceryCBS$T.cal,
# T.tot = max(groceryCBS$T.cal + groceryCBS$T.star),
# actual.inc.tracking.data = elog2inc(groceryElog))
# par(op)
## ---- eval = FALSE------------------------------------------------------------
# # load CDNow event log from BTYD package
# cdnowElog <- read.csv(
# system.file("data/cdnowElog.csv", package = "BTYD"),
# stringsAsFactors = FALSE,
# col.names = c("cust", "sampleid", "date", "cds", "sales"))
# cdnowElog$date <- as.Date(as.character(cdnowElog$date),
# format = "%Y%m%d")
# # convert to CBS; split into 39 weeks calibration, and 39 weeks holdout
# cdnowCbs <- elog2cbs(cdnowElog,
# T.cal = "1997-09-30", T.tot = "1998-06-30")
#
# # estimate Pareto/NBD (Abe) without covariates; model M1 in Abe (2009)
# draws.m1 <- abe.mcmc.DrawParameters(cdnowCbs,
# mcmc = 7500, burnin = 2500) # ~33secs on 2015 MacBook Pro
# quant <- function(x) round(quantile(x, c(0.025, 0.5, 0.975)), 2)
# t(apply(as.matrix(draws.m1$level_2), 2, quant))
# #> 2.5% 50% 97.5%
# #> log_lambda -3.70 -3.54 -3.32
# #> log_mu -3.96 -3.59 -3.26
# #> var_log_lambda 1.10 1.34 1.65
# #> cov_log_lambda_log_mu -0.20 0.13 0.74
# #> var_log_mu 1.44 2.62 5.05
#
# #' append dollar amount of first purchase to use as covariate
# first <- aggregate(sales ~ cust, cdnowElog, function(x) x[1] * 10^-3)
# names(first) <- c("cust", "first.sales")
# cdnowCbs <- merge(cdnowCbs, first, by = "cust")
#
# #' estimate with first purchase spend as covariate; model M2 in Abe (2009)
# draws.m2 <- abe.mcmc.DrawParameters(cdnowCbs,
# covariates = c("first.sales"),
# mcmc = 7500, burnin = 2500) # ~33secs on 2015 MacBook Pro
# t(apply(as.matrix(draws.m2$level_2), 2, quant))
# #> 2.5% 50% 97.5%
# #> log_lambda_intercept -4.02 -3.77 -3.19
# #> log_mu_intercept -4.37 -3.73 -2.69
# #> log_lambda_first.sales 0.04 6.04 9.39
# #> log_mu_first.sales -9.02 1.73 7.90
# #> var_log_lambda 0.01 1.35 1.79
# #> cov_log_lambda_log_mu -0.35 0.22 0.76
# #> var_log_mu 0.55 2.59 4.97
## ---- eval=FALSE--------------------------------------------------------------
# # load grocery dataset, if it hasn't been done before
# if (!exists("groceryCBS")) {
# data("groceryElog")
# groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
# }
# # estimate Pareto/GGG
# pggg.draws <- pggg.mcmc.DrawParameters(groceryCBS) # ~2mins on 2015 MacBook Pro
# # generate draws for holdout period
# pggg.xstar.draws <- mcmc.DrawFutureTransactions(groceryCBS, pggg.draws)
# # conditional expectations
# groceryCBS$xstar.pggg <- apply(pggg.xstar.draws, 2, mean)
# # P(active)
# groceryCBS$pactive.pggg <- mcmc.PActive(pggg.xstar.draws)
# # P(alive)
# groceryCBS$palive.pggg <- mcmc.PAlive(pggg.draws)
# # show estimates for first few customers
# head(groceryCBS[, c("x", "t.x", "x.star",
# "xstar.pggg", "pactive.pggg", "palive.pggg")])
# #> x t.x x.star xstar.pggg pactive.pggg palive.pggg
# #> 1 0 0.00000 0 0.02 0.02 0.03
# #> 2 1 50.28571 0 1.01 0.59 1.00
# #> 3 19 48.57143 14 14.76 0.87 0.87
# #> 4 0 0.00000 0 0.04 0.03 0.13
# #> 5 2 40.42857 3 2.02 0.84 0.91
# #> 6 5 47.57143 6 4.46 0.92 0.95
#
# # report median cohort-level parameter estimates
# round(apply(as.matrix(pggg.draws$level_2), 2, median), 3)
# #> t gamma r alpha s beta
# #> 1.695 0.373 0.948 5.243 0.432 4.348
# # report mean over median individual-level parameter estimates
# median.est <- sapply(pggg.draws$level_1, function(draw) {
# apply(as.matrix(draw), 2, median)
# })
# round(apply(median.est, 1, mean), 3)
# #> k lambda mu tau z
# #> 3.892 0.160 0.065 69.546 0.316
## ---- eval = FALSE------------------------------------------------------------
# # compare predictions with actuals at aggregated level
# rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
# `Pareto/GGG` = round(sum(groceryCBS$xstar.pggg)),
# `MBG/CNBD-k` = round(sum(groceryCBS$xstar.mbgcnbd)),
# `Pareto/NBD (HB)` = round(sum(groceryCBS$xstar.pnbd.hb)))
# #> Holdout
# #> Actuals 3389
# #> Pareto/GGG 3815
# #> MBG/CNBD-k 3970
# #> Pareto/NBD (HB) 4018
#
# # error on customer level
# mae <- function(act, est) {
# stopifnot(length(act)==length(est))
# sum(abs(act-est)) / length(act)
# }
# mae.pggg <- mae(groceryCBS$x.star, groceryCBS$xstar.pggg)
# mae.mbgcnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.mbgcnbd)
# mae.pnbd.hb <- mae(groceryCBS$x.star, groceryCBS$xstar.pnbd.hb)
# rbind(`Pareto/GGG` = c(`MAE` = round(mae.pggg, 3)),
# `MBG/CNBD-k` = c(`MAE` = round(mae.mbgcnbd, 3)),
# `Pareto/NBD (HB)` = c(`MAE` = round(mae.pnbd.hb, 3)))
# #> MAE
# #> Pareto/GGG 0.621
# #> MBG/CNBD-k 0.644
# #> Pareto/NBD (HB) 0.688
#
# lift <- 1 - mae.pggg / mae.pnbd.hb
# cat("Lift in MAE:", round(100*lift, 1), "%")
# #> Lift in MAE for Pareto/GGG vs. Pareto/NBD: 9.8%
|
/scratch/gouwar.j/cran-all/cranData/BTYDplus/inst/doc/BTYDplus-HowTo.R
|
---
title: "Customer Base Analysis with BTYDplus"
author: "Michael Platzer"
date: "`r Sys.Date()`"
output:
pdf_document:
fig_caption: yes
fontsize: 11pt
geometry: margin=1.4in
linestretch: 1.4
linkcolor: blue
bibliography: bibliography.bib
vignette: |
%\VignetteIndexEntry{Customer Base Analysis with BTYDplus}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
library(BTYDplus)
data("groceryElog")
```
## Introduction
The BTYDplus package provides advanced statistical methods to describe and predict customers' purchase behavior in non-contractual settings. It fits probabilistic models to historic transaction records for computing customer-centric metrics of managerial interest.
The challenge of this task is threefold: For one, the churn event in a non-contractual customer relationship is not directly observable, but needs to be inferred indirectly based on observed periods of inactivity. Second, with customers behaving differently, yet often having only a few recorded transactions so far, we require statistical methods that can utilize cohort-level patterns as priors for estimating customer-level quantities. And third, we attempt to predict the (unseen) future, and thus need assumptions regarding the future dynamics.
Figure 1 displays the complete transaction records of 30 sampled customers of an online grocery store. Each horizontal line represents a customer, and each circle a purchase event. The typical questions that arise are:
* How many customers does the firm still have?
* How many customers will be active in one year from now?
* How many transactions can be expected in the next X weeks?
* Which customers can be considered to have churned?
* Which customers will provide the most value to the company going forward?
```{r, fig.show="hold", fig.width=7, fig.height=2.8, fig.cap="Timing patterns for sampled grocery customers"}
library(BTYDplus)
data("groceryElog")
set.seed(123)
# plot timing patterns of 30 sampled customers
plotTimingPatterns(groceryElog, n = 30, T.cal = "2007-05-15",
headers = c("Past", "Future"), title = "")
```
Fitting a buy-till-you-die model to a particular customer cohort not only allows analysts to describe it in terms of its heterogeneous distribution of purchase patterns and dropout probabilities, but also provides answers to all of the questions stated above. On an aggregated level the estimated number of future transactions can then be used, for example, for capacity and production planning, and the estimated future value of the cohort for assessing the return on investment of customer acquisition spend. On an individual level the customer database can be enriched with estimates of each customer's status, future activity and future value. Customer scores like these can then be utilized to adapt services, messages and offers to customers' state and value. Given the accessibility and speed of the provided models, practitioners can score their customer base with these advanced statistical techniques on a continuous basis.
### Models
The [BTYD](https://cran.r-project.org/package=BTYD) package already provides implementations for the **Pareto/NBD** [@schmittlein1987cyc], the **BG/NBD** [@fader2005cyc] and the **BG/BB** [@fader2010customer] model. BTYDplus complements the BTYD package by providing several additional buy-till-you-die models that have been published in the marketing literature, but whose implementations are complex and non-trivial. In order to create a consistent experience for users of both packages, BTYDplus adopts method signatures from BTYD where possible.
The models provided as part of [BTYDplus](https://github.com/mplatzer/BTYDplus#readme) are:
* **NBD** @ehrenberg1959pattern
* **MBG/NBD** @batislam2007empirical, @hoppe2007cbg
* **BG/CNBD-k** @platzer2017mbgcnbd
* **MBG/CNBD-k** @platzer2017mbgcnbd
* **Pareto/NBD (HB)** @ma2007mcmc
* **Pareto/NBD (Abe)** @abe2009counting
* **Pareto/GGG** @platzer2016pggg
The number of implemented models raises the question of which one to use, and which one works best in a particular case. There is no simple answer to that, but practitioners could consider trying out all of them for a given dataset, assessing data fit, calculating forecast accuracy based on a holdout time period, and then making a tradeoff between calculation speed, data fit and accuracy.
The implementation of the original *NBD* model from 1959 serves mainly as a basic benchmark. It assumes a heterogeneous purchase process, but doesn't account for the possibility of customer defection. The *Pareto/NBD* model, introduced in 1987, combines the NBD model for the transactions of active customers with a heterogeneous dropout process, and to this date can still be considered a gold standard for buy-till-you-die models. The *BG/NBD* model adjusts the Pareto/NBD assumptions with respect to the dropout process in order to speed up computation. It is able to retain a similar level of data fit and forecast accuracy, while also improving the robustness of the parameter search. However, the BG/NBD model assumes that every customer without a repeat transaction has *not* defected yet, independent of the elapsed time of inactivity. This seems counterintuitive, particularly when compared to customers with repeat transactions. Thus the *MBG/NBD* has been developed to eliminate this inconsistency, by also allowing customers without any repeat transactions to have already become inactive. Data fit and forecast accuracy are comparable to the BG/NBD, yet it results in more plausible estimates for the dropout process. The more recently developed *BG/CNBD-k* and *MBG/CNBD-k* model classes extend the BG/NBD and MBG/NBD, respectively, by allowing for regularity within the transaction timings. If such regularity is present (even in a mild form), these models can yield significant improvements in terms of customer-level forecasting accuracy, while the computational costs remain at a similar order of magnitude.
All of the aforementioned models benefit from closed-form solutions for key expressions and thus can be efficiently estimated by means of maximum likelihood estimation (MLE). However, the necessity of deriving closed-form expressions restricts the model builder from relaxing the underlying behavioral assumptions. An alternative estimation method for probabilistic models is via Markov-Chain-Monte-Carlo (MCMC) simulation. MCMC comes at significantly higher costs in terms of implementation complexity and computation time, but it allows for more flexible assumptions. Additionally, one gains the benefits of (1) estimated marginal posterior distributions rather than point estimates, (2) individual-level parameter estimates, and thus (3) straightforward simulations of customer-level metrics of managerial interest. The hierarchical Bayes variant of Pareto/NBD (i.e., *Pareto/NBD (HB)*) served as a proof-of-concept for the MCMC approach, but doesn't yet take full advantage of the gained flexibility, as it sticks to the original Pareto/NBD assumptions. In contrast, Abe's variant of the Pareto/NBD (termed here *Pareto/NBD (Abe)*) relaxes the independence of purchase and dropout process, and is also capable of incorporating customer covariates. Particularly the latter can turn out to be very powerful, if any such known covariate helps in explaining the heterogeneity within the customer cohort. Finally, the *Pareto/GGG* is another generalization of Pareto/NBD, which allows for a varying degree of regularity within the transaction timings. Analogous to (M)BG/CNBD-k, incorporating regularity can yield significant improvements in forecasting accuracy, if such regularity is present in the data.
## Analytical Workflow
The typical analysis process starts out by reading in a complete log of all events or transactions of an existing customer cohort. It is up to the analyst to define how a customer base is split into cohorts, but typically these are defined based on customers' first transaction date and/or the acquisition channel. The data requirements for such an event log are minimal, and only consist of a customer identifier field `cust` and a `date` field of class `Date` or `POSIXt`. If the analysis should also cover the monetary component, the event log needs to contain a corresponding field `sales`. In order to get started quickly, BTYDplus provides an event log for customers of an online grocery store over a time period of two years (`data("groceryElog")`). Further, data generators (`*.GenerateData`) are available for each BTYDplus model, which allow one to create artificial transaction logs that follow the assumptions of a particular model.
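To make the minimal data requirements concrete, the following sketch builds a small, purely hypothetical event log with the fields `cust`, `date` and (optionally) `sales`, and summarizes it with `elog2cbs` (described in more detail below); customer IDs, dates and amounts are made up for illustration only.
```{r, eval = FALSE}
# a tiny, made-up event log with the three supported fields
toyElog <- data.frame(
  cust  = c("A", "A", "A", "B", "B", "C"),
  date  = as.Date(c("2006-01-03", "2006-02-11", "2006-05-20",
                    "2006-01-15", "2006-03-02", "2006-02-27")),
  sales = c(12.9, 8.5, 23.0, 5.0, 5.0, 17.4))
# convert to the customer-by-sufficient-statistic format
elog2cbs(toyElog)
```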
```{r, echo=FALSE, results="asis"}
cdnowElog <- read.csv(system.file("data/cdnowElog.csv", package = "BTYD"),
stringsAsFactors = FALSE,
col.names = c("cust", "sampleid", "date", "cds", "sales"))
cdnowElog$date <- as.Date(as.character(cdnowElog$date), format = "%Y%m%d")
knitr::kable(head(cdnowElog[, c("cust", "date", "sales")], 6), caption = "Transaction Log Example")
```
Once the transaction log has been obtained, it needs to be converted into a customer-by-sufficient-statistic summary table (via the `elog2cbs` method), so that the data can be used by model-specific parameter estimation methods (`*.EstimateParameters` for MLE- and `*.DrawParameters` for MCMC-models). The estimated parameters already provide insights regarding the purchase and dropout process, e.g. mean purchase frequency, mean lifetime, variation in dropout probability, etc. For MLE-estimated models we can further report the maximized log-likelihood (via the `*.cbs.LL` methods) to benchmark the models in terms of their data fit to a particular dataset. Further, estimates for the conditional and unconditional expected number of transactions (`*.pmf`, `*.Expectation`, `*.ConditionalExpectedTransactions`), as well as for the (unobservable) status of a customer (`*.PAlive`) can be computed based on the parameters. Such estimates can then be analyzed either on an individual level, or be aggregated to cohort level.
## Helper Methods
BTYDplus provides various model-independent helper methods for handling and describing customers' transaction logs.
### Convert Event Log to Weekly Transactions
Before starting to fit probabilistic models, an analyst might be interested in reporting the total number of transactions over time, to gain a first understanding of the dynamics at a cohort level. For this purpose the methods `elog2cum` and `elog2inc` are provided. These take an event log as a first argument, and count for each time unit the cumulated or incremental number of transactions. If argument `first` is set to TRUE, then a customer's initial transaction will be included, otherwise not.
```{r, fig.show="hold", fig.width=7, fig.height=2.5, fig.cap="Weekly trends for the grocery dataset"}
data("groceryElog")
op <- par(mfrow = c(1, 2), mar = c(2.5, 2.5, 2.5, 2.5))
# incremental
weekly_inc_total <- elog2inc(groceryElog, by = 7, first = TRUE)
weekly_inc_repeat <- elog2inc(groceryElog, by = 7, first = FALSE)
plot(weekly_inc_total, typ = "l", frame = FALSE, main = "Incremental")
lines(weekly_inc_repeat, col = "red")
# cumulative
weekly_cum_total <- elog2cum(groceryElog, by = 7, first = TRUE)
weekly_cum_repeat <- elog2cum(groceryElog, by = 7, first = FALSE)
plot(weekly_cum_total, typ = "l", frame = FALSE, main = "Cumulative")
lines(weekly_cum_repeat, col = "red")
par(op)
```
The x-axis represents time measured in weeks, thus we see that the customers were observed over a two year time period. The gap between the red line (=repeat transactions) and the black line (=total transactions) illustrates the customers' initial transactions. These only occur within the first 13 weeks because the cohort of this particular dataset has been defined by their acquisition date falling within the first quarter of 2006.
### Convert Transaction Log to CBS format
The `elog2cbs` method is an efficient implementation for the conversion of an event log into a customer-by-sufficient-statistic (CBS) `data.frame`, with a row for each customer. This is the required data format for estimating model parameters.
```{r}
data("groceryElog")
head(elog2cbs(groceryElog), 5)
```
The returned field `cust` is the unique customer identifier, `x` the number of repeat transactions (i.e., frequency), `t.x` the time of the last recorded transaction (i.e., recency), `litt` the sum over logarithmic intertransaction times (required for estimating regularity), `first` the date of the first transaction, and `T.cal` the duration between the first transaction and the end of the calibration period. If the provided `elog` data.frame contains a field `sales`, then this will be summed up, and returned as an additional field, named `sales`. Note that transactions with identical `cust` and `date` fields are counted as a single transaction, but with `sales` being summed up.
The time unit for expressing `t.x`, `T.cal` and `litt` is determined via the argument `units`, which is passed forward to the method `difftime`, and defaults to `weeks`.
Argument `T.tot` allows one to specify the end of the observation period, i.e., the last possible date of an event to still be included in the event log. If `T.tot` is not provided, then the date of the last recorded event will be assumed to coincide with the end of the observation period. If `T.tot` is provided, then any event that occurs after that date is discarded.
Argument `T.cal` allows one to calculate the summary statistics for a calibration and a holdout period separately. This is particularly useful for evaluating forecasting accuracy for a given dataset. If `T.cal` is not provided, then the whole observation period is considered and subsequently used for estimating model parameters. If it is provided, then the returned `data.frame` contains two additional fields, with `x.star` representing the number of repeat transactions during the holdout period of length `T.star`. Only those customers who have had at least one event during the calibration period are included.
```{r}
data("groceryElog")
range(groceryElog$date)
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
head(groceryCBS, 5)
```
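The `units` and `T.tot` arguments described above can be combined in the same call; a minimal sketch (the cut-off dates are chosen arbitrarily for illustration):
```{r, eval = FALSE}
data("groceryElog")
# express t.x, T.cal and litt in days, and discard any events after 2007-12-31
groceryCBS.days <- elog2cbs(groceryElog, units = "days",
                            T.cal = "2006-12-31", T.tot = "2007-12-31")
head(groceryCBS.days, 3)
```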
### Estimate Regularity
The models BG/CNBD-k, MBG/CNBD-k and Pareto/GGG are capable of leveraging regularity within transaction timings for improving forecast accuracy. The method `estimateRegularity` provides a quick check for the degree of regularity in the event timings. A return value close to 1 supports the assumption of exponentially distributed intertransaction times, whereas values significantly larger than 1 reveal the presence of regularity. Estimation is done either by 1) assuming the same degree of regularity across all customers (`method = "wheat"`), or 2) by estimating regularity for each customer separately, as the shape parameter of a fitted gamma distribution, and then returning the median across estimates. The latter method, though, requires a sufficient number ($\geq 10$) of transactions per customer.
@wheat1990epr's method calculates for each customer a statistic M based on her last two intertransaction times as $M := \text{ITT}_1 / (\text{ITT}_1 + \text{ITT}_2)$. That measure is known to follow a $\text{Beta}(k, k)$ distribution, if the intertransaction times of customers follow $\text{Gamma}(k, \lambda)$ with a shared $k$ but potentially varying $\lambda$, and $k$ can then be estimated as $(1 - 4 \cdot Var(M)) / (8 \cdot Var(M))$. The corresponding diagnostic plot shows the actual distribution of $M$ vs. the theoretical distribution for Exponential, respectively for Erlang-2 distributed ITTs.
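To illustrate the mechanics behind this estimator, the following toy simulation (not part of the package API) draws Erlang-2 intertransaction times for a set of hypothetical customers with varying rates, computes $M$ from two ITTs per customer, and recovers $k \approx 2$ via the moment formula quoted above.
```{r, eval = FALSE}
set.seed(1)
n <- 5000
# two ITTs per customer from Gamma(shape = 2, rate = lambda_i),
# with lambda varying across customers
lambda <- rgamma(n, shape = 1, rate = 1)
itt1 <- rgamma(n, shape = 2, rate = lambda)
itt2 <- rgamma(n, shape = 2, rate = lambda)
M <- itt1 / (itt1 + itt2)
# estimate k from the variance of M; should be close to 2
(1 - 4 * var(M)) / (8 * var(M))
```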
```{r, fig.show="hold", fig.width=7, fig.height=2.2, fig.cap="Diagnostic plots for estimating regularity"}
data("groceryElog")
op <- par(mfrow = c(1, 2))
(k.wheat <- estimateRegularity(groceryElog, method = "wheat",
plot = TRUE, title = "Wheat & Morrison"))
(k.mle <- estimateRegularity(groceryElog, method = "mle",
plot = TRUE, title = "Maximum Likelihood"))
par(op)
```
Applied to the online grocery dataset, the Wheat & Morrison estimator reports a regularity estimate close to 2, suggesting that an Erlang-2 might be more appropriate than the exponential distribution for modelling intertransaction times in this case. The peak in the plotted distribution additionally suggests that there is a subset of customers exhibiting an even stronger degree of regularity.
The maximum likelihood estimation method fits separate gamma distributions to the intertransaction times of each customer with more than 10 events. The reported median estimate of `r paste0("k=", round(k.mle, 2))` also indicates a stronger degree of regularity for this subset of highly active customers. The boxplot then gives a deeper understanding of the distribution of the `k` estimates, revealing heterogeneity in regularity across the cohort, and thus suggesting that this dataset is a good candidate for the Pareto/GGG model.
## Maximum Likelihood Estimated Models
### NBD
The NBD model by @ehrenberg1959pattern assumes a heterogeneous, yet constant purchasing process, with exponentially distributed intertransaction times, whereas the purchase rate $\lambda$ is $\text{Gamma}(r, \alpha)$-distributed across customers.
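The gamma-mixed Poisson structure of the NBD can be illustrated with a few lines of base R; this is merely a toy simulation of transaction counts under the stated assumptions, with made-up parameter values, and not a package routine.
```{r, eval = FALSE}
set.seed(1)
n <- 1000; T.cal <- 52
r <- 0.5; alpha <- 5                           # hypothetical heterogeneity parameters
lambda <- rgamma(n, shape = r, rate = alpha)   # individual purchase rates
x <- rpois(n, lambda * T.cal)                  # transaction counts over T.cal weeks
c(mean = mean(x), var = var(x))                # overdispersed relative to a Poisson
```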
Fitting the model requires first converting the event log to the CBS format and then passing the dataset to `nbd.EstimateParameters`. The method searches (using `stats::optim`) for the pair of heterogeneity parameters $(r, \alpha)$ that maximizes the log-likelihood function (`nbd.cbs.LL`) given the data.
```{r}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate NBD parameters
round(params.nbd <- nbd.EstimateParameters(groceryCBS), 3)
# report log-likelihood
nbd.cbs.LL(params.nbd, groceryCBS)
```
With the mean of the Gamma distribution being $r / \alpha$, the mean estimate for $\lambda$ is `r round(params.nbd[1] / params.nbd[2], 2)`, which translates to a mean intertransaction time of $1 / \lambda$ of `r round(params.nbd[2] / params.nbd[1], 2)` weeks.
The expected number of (future) transactions for a customer, conditional on her past (`x` and `T.cal`), can be computed with `nbd.ConditionalExpectedTransactions`. By passing the whole CBS we can easily generate estimates for all customers in the cohort.
```{r}
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 52 weeks
est5.nbd <- nbd.ConditionalExpectedTransactions(params.nbd,
T.star = 52, x = 1:5, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.nbd[i]), "\n")
}
```
```{r}
# predict whole customer cohort
groceryCBS$xstar.nbd <- nbd.ConditionalExpectedTransactions(
params = params.nbd, T.star = 52,
x = groceryCBS$x, T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`NBD` = c(`Holdout` = round(sum(groceryCBS$xstar.nbd))))
```
As can be seen, the NBD model heavily overforecasts the actual number of transactions (by `r paste0(round(100 * sum(groceryCBS$xstar.nbd) / sum(groceryCBS$x.star) - 100, 1), "%")`), which can be explained by the lack of a dropout process in the model assumptions. All customers are assumed to remain just as active in the second year as they have been in their first year. However, Figure 2 clearly shows a downward trend in the incremental transaction counts for the online grocery customers, thus mandating a different model.
### Pareto/NBD
The Pareto/NBD model [@schmittlein1987cyc] combines the NBD model with the possibility of customers becoming inactive. A customer's state, however, is not directly observable, and the model needs to draw inferences based on the observed elapsed time since a customer's last activity, i.e., `T.cal - t.x`. In particular, the model assumes a customer's lifetime $\tau$ to be exponentially distributed with parameter $\mu$, whereas $\mu$ is $\text{Gamma}(s, \beta)$-distributed across customers.
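A minimal toy simulation of this data-generating process shows how the latent lifetime $\tau$ truncates the purchase process; the parameter values are arbitrary, and the code is only an illustration of the assumptions, not one of the package's `*.GenerateData` routines.
```{r, eval = FALSE}
set.seed(1)
n <- 1000; T.cal <- 52
r <- 0.55; alpha <- 10; s <- 0.6; beta <- 12   # hypothetical heterogeneity parameters
lambda <- rgamma(n, shape = r, rate = alpha)   # purchase rates
mu     <- rgamma(n, shape = s, rate = beta)    # dropout rates
tau    <- rexp(n, rate = mu)                   # unobserved lifetimes
# transactions only accrue while the customer is still alive
x <- rpois(n, lambda * pmin(tau, T.cal))
summary(x)
```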
The Pareto/NBD implementation is part of the BTYD package, but the workflow for fitting the model and making predictions is analogous to that of BTYDplus (and vice versa).
```{r}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate Pareto/NBD parameters
params.pnbd <- BTYD::pnbd.EstimateParameters(groceryCBS[, c("x", "t.x", "T.cal")])
names(params.pnbd) <- c("r", "alpha", "s", "beta")
round(params.pnbd, 3)
# report log-likelihood
BTYD::pnbd.cbs.LL(params.pnbd, groceryCBS[, c("x", "t.x", "T.cal")])
```
For one, we can note that the maximized log-likelihood of the Pareto/NBD is higher than for the NBD model, implying that its data fit is better. And second, by estimating a mean lifetime $1/(s/\beta)$ of `r round(1/(params.pnbd[3]/params.pnbd[4]), 2)` weeks, the estimated mean intertransaction time changes from `r round(params.nbd[2] / params.nbd[1], 2)` to `r round(1/(params.pnbd[1]/params.pnbd[2]), 2)` weeks, when compared to the NBD.
Let's now again compute the conditional expected transactions for five simulated customers with an increasing number of observed transactions, but all with the same long period of recent inactivity.
```{r}
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 12 weeks, but then remained
# inactive for 40 weeks
est5.pnbd <- BTYD::pnbd.ConditionalExpectedTransactions(params.pnbd,
T.star = 52, x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.pnbd[i]), "\n")
}
```
```{r}
# predict whole customer cohort
groceryCBS$xstar.pnbd <- BTYD::pnbd.ConditionalExpectedTransactions(
params = params.pnbd, T.star = 52,
x = groceryCBS$x, t.x = groceryCBS$t.x,
T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`Pareto/NBD` = c(`Holdout` = round(sum(groceryCBS$xstar.pnbd))))
```
As expected, the Pareto/NBD yields overall lower and thus more realistic estimates than the NBD. However, the results also reveal an interesting pattern, which might seem counterintuitive at first sight. Customers with a very active purchase history (e.g., customers with 5 transactions) receive lower estimates than customers who have been less active in the past. @fader2005rfm discuss this apparent paradox in more detail, yet the underlying mechanism can be easily explained by looking at the model's assessment of the latent activity state.
```{r}
# P(alive) for customers who've had 1 to 5 transactions in first
# 12 weeks, but then remained inactive for 40 weeks
palive.pnbd <- BTYD::pnbd.PAlive(params.pnbd,
x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.2f %%", 100*palive.pnbd[i]), "\n")
}
```
The probability of still being alive after a 40-week purchase hiatus drops from `r paste0(round(100*palive.pnbd[1], 1), "%")` for the one-time-repeating customer to `r paste0(round(100*palive.pnbd[5], 1), "%")` for the customer who has already had 5 transactions. The elapsed time of inactivity is a stronger indication of churn for the highly frequent than for the less frequent purchasing customer, as a low purchase frequency also allows for the possibility of intertransaction times as long as the observed 40 weeks.
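The same mechanism can be inspected by holding the number of transactions fixed and varying the length of the purchase hiatus instead; the sketch below reuses `params.pnbd` from above and assumes that `BTYD::pnbd.PAlive` accepts a vector for `t.x`, as it does for `x` in the chunk above.
```{r, eval = FALSE}
# P(alive) for a customer with 3 repeat transactions, as the observed
# period of inactivity grows from 4 to 40 weeks
hiatus <- seq(4, 40, by = 4)
palive.by.hiatus <- BTYD::pnbd.PAlive(params.pnbd,
                                      x = 3, t.x = 52 - hiatus, T.cal = 52)
round(setNames(palive.by.hiatus, paste0(hiatus, "w")), 2)
```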
### BG/CNBD-k and MBG/CNBD-k
The BG/NBD [@fader2005cyc] and the MBG/NBD [@batislam2007empirical;@hoppe2007cbg] models are contained in the larger class of (M)BG/CNBD-k models [@platzer2017mbgcnbd], and are thus presented here together in this section. The MBG/CNBD-k model assumptions are as follows: A customer's intertransaction times, while being active, are Erlang-k distributed, with purchase rate $\lambda$ being $\text{Gamma}(r, \alpha)$-distributed across customers. After each transaction a customer can become inactive (for good) with a constant dropout probability of $p$, whereas $p$ is $\text{Beta}(a, b)$-distributed across customers. The BG/CNBD-k differs only in that the customer is not allowed to drop out at the initial transaction, but only at repeat transactions.
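The effect of the Erlang-k assumption can be made tangible with a quick toy simulation: for the same mean intertransaction time, a larger $k$ reduces the variation in the timings (the mean of 4 weeks below is made up).
```{r, eval = FALSE}
set.seed(1)
mean.itt <- 4                                              # weeks, made-up
itt.k1 <- rgamma(10000, shape = 1, rate = 1 / mean.itt)    # exponential (k = 1)
itt.k2 <- rgamma(10000, shape = 2, rate = 2 / mean.itt)    # Erlang-2 (k = 2)
c(sd.k1 = sd(itt.k1), sd.k2 = sd(itt.k2))                  # Erlang-2 varies less
```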
```{r}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate parameters for various models
params.bgnbd <- BTYD::bgnbd.EstimateParameters(groceryCBS) # BG/NBD
params.bgcnbd <- bgcnbd.EstimateParameters(groceryCBS) # BG/CNBD-k
params.mbgnbd <- mbgnbd.EstimateParameters(groceryCBS) # MBG/NBD
params.mbgcnbd <- mbgcnbd.EstimateParameters(groceryCBS) # MBG/CNBD-k
row <- function(params, LL) {
names(params) <- c("k", "r", "alpha", "a", "b")
c(round(params, 3), LL = round(LL))
}
rbind(`BG/NBD` = row(c(1, params.bgnbd),
BTYD::bgnbd.cbs.LL(params.bgnbd, groceryCBS)),
`BG/CNBD-k` = row(params.bgcnbd,
bgcnbd.cbs.LL(params.bgcnbd, groceryCBS)),
`MBG/NBD` = row(params.mbgnbd,
mbgcnbd.cbs.LL(params.mbgnbd, groceryCBS)),
`MBG/CNBD-k` = row(params.mbgcnbd,
mbgcnbd.cbs.LL(params.mbgcnbd, groceryCBS)))
```
The MLE method searches across a five-dimensional parameter space $(k, r, \alpha, a, b)$ to find the optimum of the log-likelihood function. As can be seen from the reported log-likelihood values, the MBG/CNBD-k is able to provide a better fit than NBD, Pareto/NBD, BG/NBD, MBG/NBD and BG/CNBD-k for the given dataset. Further, the estimate for the regularity parameter $k$ is 2, implying that regularity is present and that the Erlang-2 is considered more suitable for the intertransaction times than the exponential distribution ($k=1$).
```{r}
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 12 weeks, but then remained
# inactive for 40 weeks
est5.mbgcnbd <- mbgcnbd.ConditionalExpectedTransactions(params.mbgcnbd,
T.star = 52, x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.mbgcnbd[i]), "\n")
}
```
```{r}
# P(alive) for customers who've had 1 to 5 transactions in first
# 12 weeks, but then remained inactive for 40 weeks
palive.mbgcnbd <- mbgcnbd.PAlive(params.mbgcnbd,
x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.2f %%", 100*palive.mbgcnbd[i]), "\n")
}
```
Predicting transactions for 5 simulated customers, each with a long purchase hiatus but with a varying number of past transactions, we see the same pattern as for Pareto/NBD, except that the forecasted numbers are even lower. This results from the long period of inactivity now being, in the presence of regularity, an even stronger indication of defection, as the Erlang-2 allows for less variation in the intertransaction times.
```{r}
# predict whole customer cohort
groceryCBS$xstar.mbgcnbd <- mbgcnbd.ConditionalExpectedTransactions(
params = params.mbgcnbd, T.star = 52,
x = groceryCBS$x, t.x = groceryCBS$t.x,
T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`MBG/CNBD-k` = c(`Holdout` = round(sum(groceryCBS$xstar.mbgcnbd))))
```
Comparing the predictions at an aggregate level, we see that the MBG/CNBD-k also remains overly optimistic for the online grocery dataset, though to a slightly lower extent than the Pareto/NBD. The aggregate-level dynamics can be visualized with the help of `mbgcnbd.PlotTrackingInc`.
```{r, fig.show="hold", fig.width=4, fig.height=3.5, fig.cap="Weekly actuals vs. MBG/CNBD-k predictions"}
# runs for ~37secs on a 2015 MacBook Pro
nil <- mbgcnbd.PlotTrackingInc(params.mbgcnbd,
T.cal = groceryCBS$T.cal,
T.tot = max(groceryCBS$T.cal + groceryCBS$T.star),
actual.inc.tracking = elog2inc(groceryElog))
```
However, when assessing the error at the individual level by calculating the mean absolute error (MAE) of our predictions, we see a significant improvement in forecasting accuracy from accounting for the mild degree of regularity within the timing patterns.
```{r}
# mean absolute error (MAE)
mae <- function(act, est) {
stopifnot(length(act)==length(est))
sum(abs(act-est)) / length(act)
}
mae.nbd <- mae(groceryCBS$x.star, groceryCBS$xstar.nbd)
mae.pnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.pnbd)
mae.mbgcnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.mbgcnbd)
rbind(`NBD` = c(`MAE` = round(mae.nbd, 3)),
`Pareto/NBD` = c(`MAE` = round(mae.pnbd, 3)),
`MBG/CNBD-k` = c(`MAE` = round(mae.mbgcnbd, 3)))
lift <- 1 - mae.mbgcnbd / mae.pnbd
cat("Lift in MAE for MBG/CNBD-k vs. Pareto/NBD:", round(100*lift, 1), "%")
```
## MCMC Estimated Models
This chapter presents three buy-till-you-die model variants which rely on Markov-Chain-Monte-Carlo simulation for parameter estimation. Implementation complexity as well as computational costs are significantly higher, and despite an efficient MCMC implementation in C++, applying these models requires much longer computing time when compared to the previously presented ML-estimated models. On the upside, we gain flexibility in our model assumptions and get estimated distributions even for individual-level parameters. Thus, in contrast to the MLE methods (`params <- *.EstimateParameters(...)`), which return point estimates of the heterogeneity parameters, the MCMC estimation routines (`param.draws <- *.mcmc.DrawParameters(...)`) provide samples from the marginal posterior distributions, both at the cohort level (`param.draws$level_2`) and at the customer level (`param.draws$level_1`). Based on these parameter draws, we can then easily sample the posterior distributions of any derived quantity of managerial interest, for example the number of future transactions (`mcmc.DrawFutureTransactions`) or the probability of being active in a given period.
Generally speaking, MCMC works by constructing a Markov chain which has the desired target (posterior) distribution as its equilibrium distribution. The algorithm then performs random walks on that Markov chain and will eventually (after some "burnin" phase) produce draws from the posterior. In order to assess MCMC convergence one can run multiple MCMC chains (in parallel) and check whether these provide similar distributions. Due to the high auto-correlation between subsequent iteration steps in the MCMC chains, it is also advisable to keep only every x-th step. The MCMC default settings for parameter draws (`*.mcmc.DrawParameters(..., mcmc = 2500, burnin = 500, thin = 50, chains = 2)`) should work well in many empirical settings. Depending on your platform, the code will either use a single core (on Windows OS), or multiple cores in parallel (on Unix/MacOS) to run the MCMC chains. To speed up convergence, the MCMC chains will be automatically initialized with the maximum likelihood estimates of Pareto/NBD. The sampled draws are wrapped as a `coda::mcmc.list` object, and the `coda` package provides various helper methods (e.g. `as.matrix.mcmc.list`, `HPDinterval`, etc.) for performing output analysis and diagnostics for MCMC (cf. `help(package="coda")`).
### Pareto/NBD (HB)
The Pareto/NBD (HB) is identical to Pareto/NBD with respect to its model assumptions, but relies on MCMC for parameter estimation and thus can leverage the aforementioned benefits of such an approach. @rossi2003bayesian already provided a blueprint for applying a full Bayes approach (in contrast to an empirical Bayes approach) to hierarchical models such as Pareto/NBD. @ma2007mcmc then published a specific MCMC scheme, comprised of Gibbs sampling with slice sampling to draw from the conditional distributions. Later @abe2009counting suggested in the technical appendix augmenting the parameter space with the unobserved lifetime $\tau$ and the activity status $z$ in order to decouple the sampling of the transaction process from the dropout process. This allows the sampling scheme to take advantage of conjugate priors for drawing $\lambda$ and $\mu$, and is accordingly implemented in this package.
Let's apply the Pareto/NBD (HB), with the default MCMC settings in place, for the online grocery dataset. First we draw parameters with `pnbd.mcmc.DrawParameters`, and then pass these forward to the model-independent methods for deriving quantities of managerial interest, namely `mcmc.DrawFutureTransactions`, `mcmc.PActive` and `mcmc.PAlive`. Note the difference between $P(\text{active})$ and $P(\text{alive})$: the former denotes the probability of making at least one transaction within the holdout period, and the latter is the probability of making another transaction at any time in the future.
```{r}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# generate parameter draws (~13secs on 2015 MacBook Pro)
pnbd.draws <- pnbd.mcmc.DrawParameters(groceryCBS)
# generate draws for holdout period
pnbd.xstar.draws <- mcmc.DrawFutureTransactions(groceryCBS, pnbd.draws)
# conditional expectations
groceryCBS$xstar.pnbd.hb <- apply(pnbd.xstar.draws, 2, mean)
# P(active)
groceryCBS$pactive.pnbd.hb <- mcmc.PActive(pnbd.xstar.draws)
# P(alive)
groceryCBS$palive.pnbd.hb <- mcmc.PAlive(pnbd.draws)
# show estimates for first few customers
head(groceryCBS[, c("x", "t.x", "x.star",
"xstar.pnbd.hb", "pactive.pnbd.hb",
"palive.pnbd.hb")])
```
As can be seen, the basic application of an MCMC-estimated model is just as straightforward as for ML-estimated models. However, the 2-element list return object of `pnbd.mcmc.DrawParameters` allows for further analysis: `level_1` is a list of `coda::mcmc.list`s, one for each customer, with draws for customer-level parameters ($\lambda, \mu, \tau, z$), and `level_2` a `coda::mcmc.list` with draws for cohort-level parameters ($r, \alpha, s, \beta$). Running the estimation with the default MCMC settings returns a total of 100 samples (`mcmc/thin*chains`) for `r paste0("nrow(groceryCBS) * 4 + 4 = ", nrow(groceryCBS) * 4 + 4)` parameters, and for each we can inspect the MCMC traces and the estimated distributions, and calculate summary statistics.
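The `coda` helpers mentioned earlier can be used for exactly that purpose; a minimal sketch, assuming the default two chains have been run (the particular diagnostics shown are just common choices, not a package requirement):
```{r, eval = FALSE}
# convergence diagnostics and interval estimates for the cohort-level draws
coda::gelman.diag(pnbd.draws$level_2)      # potential scale reduction factors
coda::effectiveSize(pnbd.draws$level_2)    # effective sample sizes
# 95% highest posterior density intervals, pooled across chains
coda::HPDinterval(coda::as.mcmc(as.matrix(pnbd.draws$level_2)), prob = 0.95)
```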
For the cohort-level parameters ($r, \alpha, s, \beta$) the median point estimates are generated as follows:
```{r}
class(pnbd.draws$level_2)
# convert cohort-level draws from coda::mcmc.list to a matrix, with
# each parameter becoming a column, and each draw a row
cohort.draws <- pnbd.draws$level_2
head(as.matrix(cohort.draws), 5)
# compute median across draws, and compare to ML estimates; as can be
# seen, the two parameter estimation approaches result in very similar
# estimates
round(
rbind(`Pareto/NBD (HB)` = apply(as.matrix(cohort.draws), 2, median),
`Pareto/NBD` = BTYD::pnbd.EstimateParameters(groceryCBS[, c("x", "t.x", "T.cal")]))
, 2)
```
MCMC traces and estimated parameter distributions can be easily visualized by using the corresponding methods from the `coda` package.
```{r, fig.show="hold", warning=FALSE, fig.width=7, fig.height=3, fig.cap="MCMC traces and parameter distributions of cohort-level parameters"}
# plot trace- and density-plots for heterogeneity parameters
op <- par(mfrow = c(2, 4), mar = c(2.5, 2.5, 2.5, 2.5))
coda::traceplot(pnbd.draws$level_2)
coda::densplot(pnbd.draws$level_2)
par(op)
```
One of the advantages of the MCMC approach compared to MLE is that the parameter draws and corresponding median values can also be inspected on the customer level. The following example code does so for the specific customer with ID 4 (i.e., `cust=4`).
```{r, fig.show="hold", warning=FALSE, fig.width=7, fig.height=3, fig.cap="MCMC traces and parameter distributions of individual-level parameters for a specific customer"}
class(pnbd.draws$level_1)
length(pnbd.draws$level_1)
customer4 <- "4"
customer4.draws <- pnbd.draws$level_1[[customer4]]
head(as.matrix(customer4.draws), 5)
round(apply(as.matrix(customer4.draws), 2, median), 3)
# plot trace- and density-plots for customer4 parameters
op <- par(mfrow = c(2, 4), mar = c(2.5, 2.5, 2.5, 2.5))
coda::traceplot(pnbd.draws$level_1[[customer4]])
coda::densplot(pnbd.draws$level_1[[customer4]])
par(op)
```
Analogous to the MLE-based models, we can also plot weekly transaction counts, as well as frequency distributions, at an aggregated level. These methods can be applied to all provided MCMC-based models in the following way.
```{r, eval = FALSE}
# runs for ~120secs on a MacBook Pro 2015
op <- par(mfrow = c(1, 2))
nil <- mcmc.PlotFrequencyInCalibration(pnbd.draws, groceryCBS)
nil <- mcmc.PlotTrackingInc(pnbd.draws,
T.cal = groceryCBS$T.cal,
T.tot = max(groceryCBS$T.cal + groceryCBS$T.star),
actual.inc.tracking.data = elog2inc(groceryElog))
par(op)
```
### Pareto/NBD (Abe)
@abe2009counting introduced a variant of Pareto/NBD by replacing the two independent gamma distributions for individuals' purchase rates $\lambda$ and dropout rates $\mu$ with a multivariate lognormal distribution. The BTYDplus package refers to this model variant as Pareto/NBD (Abe). The multivariate lognormal distribution permits a correlation between purchase and dropout processes, but even more importantly, can be easily extended to a linear regression model to incorporate customer-level covariates. This flexibility can significantly boost inference, if any of the captured covariates indeed helps in explaining the heterogeneity within the customer cohort.
The online grocery dataset doesn't contain any additional covariates, so for demonstration purposes we will apply Pareto/NBD (Abe) to the CDNow dataset and reproduce the findings from the original paper. First we estimate a model without covariates (M1), and then we incorporate the dollar amount of the first purchase as a customer-level covariate (M2).
<!-- Note: we do not evaluate the Pareto/GGG and Pareto/NBD (Abe) example code here, in order to speed up vignette build. -->
```{r, eval = FALSE}
# load CDNow event log from BTYD package
cdnowElog <- read.csv(
system.file("data/cdnowElog.csv", package = "BTYD"),
stringsAsFactors = FALSE,
col.names = c("cust", "sampleid", "date", "cds", "sales"))
cdnowElog$date <- as.Date(as.character(cdnowElog$date),
format = "%Y%m%d")
# convert to CBS; split into 39 weeks calibration, and 39 weeks holdout
cdnowCbs <- elog2cbs(cdnowElog,
T.cal = "1997-09-30", T.tot = "1998-06-30")
# estimate Pareto/NBD (Abe) without covariates; model M1 in Abe (2009)
draws.m1 <- abe.mcmc.DrawParameters(cdnowCbs,
mcmc = 7500, burnin = 2500) # ~33secs on 2015 MacBook Pro
quant <- function(x) round(quantile(x, c(0.025, 0.5, 0.975)), 2)
t(apply(as.matrix(draws.m1$level_2), 2, quant))
#> 2.5% 50% 97.5%
#> log_lambda -3.70 -3.54 -3.32
#> log_mu -3.96 -3.59 -3.26
#> var_log_lambda 1.10 1.34 1.65
#> cov_log_lambda_log_mu -0.20 0.13 0.74
#> var_log_mu 1.44 2.62 5.05
#' append dollar amount of first purchase to use as covariate
first <- aggregate(sales ~ cust, cdnowElog, function(x) x[1] * 10^-3)
names(first) <- c("cust", "first.sales")
cdnowCbs <- merge(cdnowCbs, first, by = "cust")
#' estimate with first purchase spend as covariate; model M2 in Abe (2009)
draws.m2 <- abe.mcmc.DrawParameters(cdnowCbs,
covariates = c("first.sales"),
mcmc = 7500, burnin = 2500) # ~33secs on 2015 MacBook Pro
t(apply(as.matrix(draws.m2$level_2), 2, quant))
#> 2.5% 50% 97.5%
#> log_lambda_intercept -4.02 -3.77 -3.19
#> log_mu_intercept -4.37 -3.73 -2.69
#> log_lambda_first.sales 0.04 6.04 9.39
#> log_mu_first.sales -9.02 1.73 7.90
#> var_log_lambda 0.01 1.35 1.79
#> cov_log_lambda_log_mu -0.35 0.22 0.76
#> var_log_mu 0.55 2.59 4.97
```
The parameter estimates for models M1 and M2 roughly match the numbers reported in Table 3 of @abe2009counting. There are some discrepancies for the parameters `log_lambda_first.sales` and `log_mu_first.sales`, but the high-level result remains unaltered: the dollar amount of a customer's initial purchase correlates positively with purchase frequency, but doesn't influence the dropout process.
Note that, via simulations, the BTYDplus package can establish that the provided implementation is indeed able to correctly reidentify the underlying data-generating parameters, including the regression coefficients of the covariates.
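Since the post-processing helpers are model-independent, the draws of model M2 can be plugged into the same routines as before to obtain holdout predictions and activity probabilities for the CDNow customers; a hedged sketch, assuming `draws.m2` and `cdnowCbs` from the chunk above are still in memory:
```{r, eval = FALSE}
# draw future transactions for the 39-week holdout period
xstar.m2.draws <- mcmc.DrawFutureTransactions(cdnowCbs, draws.m2)
cdnowCbs$xstar.m2  <- apply(xstar.m2.draws, 2, mean)
cdnowCbs$palive.m2 <- mcmc.PAlive(draws.m2)
# compare predicted vs. actual repeat transactions in the holdout period
c(`Actuals` = sum(cdnowCbs$x.star),
  `Pareto/NBD (Abe)` = round(sum(cdnowCbs$xstar.m2)))
```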
### Pareto/GGG
@platzer2016pggg presented another extension of the Pareto/NBD model. The Pareto/GGG generalizes the distribution for the intertransaction times from the exponential to the Gamma distribution, with its shape parameter $k$ also allowed to vary across customers, following a $\text{Gamma}(t, \gamma)$ distribution. Hence, the purchase process follows a Gamma-Gamma-Gamma (GGG) mixture distribution, which is capable of capturing a varying degree of regularity across customers. For datasets which exhibit regularity in their timing patterns, and for which the degree of regularity varies across the customer cohort, leveraging that information can yield significant improvements in terms of forecasting accuracy. This results from improved inferences about customers' latent state in the presence of regularity.
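Again, a toy simulation (not a package routine) can illustrate the purchase-timing assumption: each customer gets her own regularity parameter $k$ and rate $\lambda$, and her intertransaction times are then Gamma-distributed given those parameters. The sketch assumes the $\text{Gamma}(k, k\lambda)$ parametrization, so that the mean ITT stays at $1/\lambda$ regardless of $k$; all parameter values are made up.
```{r, eval = FALSE}
set.seed(1)
n <- 1000
t.k <- 1.7; gamma.k <- 0.4    # hypothetical heterogeneity of regularity k
r <- 0.9; alpha <- 5          # hypothetical heterogeneity of purchase rate lambda
k      <- rgamma(n, shape = t.k, rate = gamma.k)
lambda <- rgamma(n, shape = r, rate = alpha)
# one intertransaction time per customer, given her (k, lambda)
itt <- rgamma(n, shape = k, rate = k * lambda)
summary(itt)
```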
```{r, eval=FALSE}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate Pareto/GGG
pggg.draws <- pggg.mcmc.DrawParameters(groceryCBS) # ~2mins on 2015 MacBook Pro
# generate draws for holdout period
pggg.xstar.draws <- mcmc.DrawFutureTransactions(groceryCBS, pggg.draws)
# conditional expectations
groceryCBS$xstar.pggg <- apply(pggg.xstar.draws, 2, mean)
# P(active)
groceryCBS$pactive.pggg <- mcmc.PActive(pggg.xstar.draws)
# P(alive)
groceryCBS$palive.pggg <- mcmc.PAlive(pggg.draws)
# show estimates for first few customers
head(groceryCBS[, c("x", "t.x", "x.star",
"xstar.pggg", "pactive.pggg", "palive.pggg")])
#> x t.x x.star xstar.pggg pactive.pggg palive.pggg
#> 1 0 0.00000 0 0.02 0.02 0.03
#> 2 1 50.28571 0 1.01 0.59 1.00
#> 3 19 48.57143 14 14.76 0.87 0.87
#> 4 0 0.00000 0 0.04 0.03 0.13
#> 5 2 40.42857 3 2.02 0.84 0.91
#> 6 5 47.57143 6 4.46 0.92 0.95
# report median cohort-level parameter estimates
round(apply(as.matrix(pggg.draws$level_2), 2, median), 3)
#> t gamma r alpha s beta
#> 1.695 0.373 0.948 5.243 0.432 4.348
# report mean over median individual-level parameter estimates
median.est <- sapply(pggg.draws$level_1, function(draw) {
apply(as.matrix(draw), 2, median)
})
round(apply(median.est, 1, mean), 3)
#> k lambda mu tau z
#> 3.892 0.160 0.065 69.546 0.316
```
Summarizing the estimated parameter distributions shows that the regularity parameter `k` is estimated to be significantly larger than 1, and that it varies substantially across customers.
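That heterogeneity can be inspected directly by looking at the distribution of the individual-level `k` estimates; a minimal sketch, assuming `median.est` from the chunk above is still in memory:
```{r, eval = FALSE}
# distribution of the median regularity estimates across customers
quantile(median.est["k", ], c(0.1, 0.25, 0.5, 0.75, 0.9))
hist(median.est["k", ], breaks = 40, main = "", xlab = "individual-level k")
```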
Concluding our vignette, we will benchmark the forecasting error of the Pareto/GGG, the MBG/CNBD-k and the Pareto/NBD (HB).
```{r, eval = FALSE}
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`Pareto/GGG` = round(sum(groceryCBS$xstar.pggg)),
`MBG/CNBD-k` = round(sum(groceryCBS$xstar.mbgcnbd)),
`Pareto/NBD (HB)` = round(sum(groceryCBS$xstar.pnbd.hb)))
#> Holdout
#> Actuals 3389
#> Pareto/GGG 3815
#> MBG/CNBD-k 3970
#> Pareto/NBD (HB) 4018
# error on customer level
mae <- function(act, est) {
stopifnot(length(act)==length(est))
sum(abs(act-est)) / length(act)
}
mae.pggg <- mae(groceryCBS$x.star, groceryCBS$xstar.pggg)
mae.mbgcnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.mbgcnbd)
mae.pnbd.hb <- mae(groceryCBS$x.star, groceryCBS$xstar.pnbd.hb)
rbind(`Pareto/GGG` = c(`MAE` = round(mae.pggg, 3)),
`MBG/CNBD-k` = c(`MAE` = round(mae.mbgcnbd, 3)),
`Pareto/NBD (HB)` = c(`MAE` = round(mae.pnbd.hb, 3)))
#> MAE
#> Pareto/GGG 0.621
#> MBG/CNBD-k 0.644
#> Pareto/NBD (HB) 0.688
lift <- 1 - mae.pggg / mae.pnbd.hb
cat("Lift in MAE:", round(100*lift, 1), "%")
#> Lift in MAE for Pareto/GGG vs. Pareto/NBD: 9.8%
```
Both on the aggregate level and on the customer level we see a significant improvement in forecasting accuracy when leveraging the regularity within the transaction timings of the online grocery dataset. Further, the superior performance of the Pareto/GGG compared to the MBG/CNBD-k model suggests that it does pay off to also consider the heterogeneity in the degree of regularity across customers, which itself can be inspected via the parameter draws returned by `pggg.mcmc.DrawParameters`.
## References
|
/scratch/gouwar.j/cran-all/cranData/BTYDplus/inst/doc/BTYDplus-HowTo.Rmd
|
---
title: "Customer Base Analysis with BTYDplus"
author: "Michael Platzer"
date: "`r Sys.Date()`"
output:
pdf_document:
fig_caption: yes
fontsize: 11pt
geometry: margin=1.4in
linestretch: 1.4
linkcolor: blue
bibliography: bibliography.bib
vignette: |
%\VignetteIndexEntry{Customer Base Analysis with BTYDplus}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, echo = FALSE}
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
library(BTYDplus)
data("groceryElog")
```
## Introduction
The BTYDplus package provides advanced statistical methods to describe and predict customers' purchase behavior in non-contractual settings. It fits probabilistic models to historic transaction records for computing customer-centric metrics of managerial interest.
The challenge of this task is threefold: For one, the churn event in a non-contractual customer relationship is not directly observable, but needs to be inferred indirectly based on observed periods of inactivity. Second, with customers behaving differently, yet having oftentimes only few transactions recorded so far, we require statistical methods that can utilize cohort-level patterns as priors for estimating customer-level quantities. And third, we attempt to predict the (unseen) future, thus need assumptions regarding the future dynamics.
Figure 1 displays the complete transaction records of 30 sampled customers of an online grocery store. Each horizontal line represents a customer, and each circle a purchase event. The typical questions that arise are:
* How many customers does the firm still have?
* How many customers will be active in one year from now?
* How many transactions can be expected in next X weeks?
* Which customers can be considered to have churned?
* Which customers will provide the most value to the company going forward?
```{r, fig.show="hold", fig.width=7, fig.height=2.8, fig.cap="Timing patterns for sampled grocery customers"}
library(BTYDplus)
data("groceryElog")
set.seed(123)
# plot timing patterns of 30 sampled customers
plotTimingPatterns(groceryElog, n = 30, T.cal = "2007-05-15",
headers = c("Past", "Future"), title = "")
```
Fitting a buy-till-you-die model to a particular customer cohort not just allows analysts to describe it in terms of its heterogeneous distribution of purchase patterns and dropout probabilities, but also provides answers for all of the above stated questions. On an aggregated level the estimated number of future transactions can then be, for example, used for capacity and production planning. The estimated future value of the cohort for assessing the return on investment for customer acquistion spends. On an individual level the customer database can be enriched with estimates on a customer's status, future activity and future value. customer scores like these can be then utilized to adapt services, messages and offers with respect to customers' state and value. Given the accessibility and speed of the provided models, practitioners can score their customer base with these advanced statistical techniques on a continuous basis.
### Models
The [BTYD](https://cran.r-project.org/package=BTYD) package already provides implementations for the **Pareto/NBD** [@schmittlein1987cyc], the **BG/NBD** [@fader2005cyc] and the **BG/BB** [@fader2010customer] model. BTYDplus complements the BTYD package by providing several additional buy-till-you-die models, that have been published in the marketing literature, but whose implementation are complex and non-trivial. In order to create a consistent experience of users of both packages, the BTYDplus adopts method signatures from BTYD where possible.
The models provided as part of [BTYDplus](https://github.com/mplatzer/BTYDplus#readme) are:
* **NBD** @ehrenberg1959pattern
* **MBG/NBD** @batislam2007empirical, @hoppe2007cbg
* **BG/CNBD-k** @platzer2017mbgcnbd
* **MBG/CNBD-k** @platzer2017mbgcnbd
* **Pareto/NBD (HB)** @ma2007mcmc
* **Pareto/NBD (Abe)** @abe2009counting
* **Pareto/GGG** @platzer2016pggg
The number of implemented models raises the question, which one to use, and which one works best in a particular case. There is no simple answer to that, but practitioners could consider trying out all of them for a given dataset, assess data fit, calculate forecast accuracy based on a holdout time period and then make a tradeoff between calculation speed, data fit and accuracy.
The implementation of the original *NBD* model from 1959 serves mainly as a basic benchmark. It assumes a heterogenous purchase process, but doesn't account for the possibility of customer defection. The *Pareto/NBD* model, introduced in 1987, combines the NBD model for transactions of active customers with a heterogeneuos dropout process, and to this date can still be considered a gold standard for buy-till-you-die models. The *BG/NBD* model adjusts the Pareto/NBD assumptions with respect to the dropout process in order to speed up computation. It is able to retain a similar level of data fit and forecast accuracy, but also improves the robustness of the parameter search. However, the BG/NBD model particularly assumes that every customer without a repeat transaction has *not* defected yet, independent of the elapsed time of inactivity. This seems counterintuitive, particularly when compared to customers with repeat transactions. Thus the *MBG/NBD* has been developed to eliminate this inconsistency by allowing customers without any activity to also remain inactive. Data fit and forecast accuracy are comparable to the BG/NBD, yet it results in more plausible estimates for the dropout process. The more recently developed *BG/CNBD-k* and *MBG/CNBD-k* model classes extend BG/NBD and MBG/NBD each but allow for regularity within the transaction timings. If such regularity is present (even in a mild form), these models can yield significant improvements in terms of customer level forecasting accuracy, while the computational costs remain at a similar order of magnitude.
All of the aforementioned models benefit from closed-form solutions for key expressions and thus can be efficiently estimated via means of maximum likelihood estimation (MLE). However, the necessity of deriving closed-form expressions restricts the model builder from relaxing the underlying behavioral assumptions. An alternative estimation method for probabilistic models is via Markov-Chain-Monte-Carlo (MCMC) simulation. MCMC comes at significantly higher costs in terms of implementation complexity and computation time, but it allows for more flexible assumptions. Additionally, one gains the benefits of (1) estimated marginal posterior distributions rather than point estimates, (2) individual-level parameter estimates, and thus (3) straightforward simulations of customer-level metrics of managerial interest. The hierarchical Bayes variant of Pareto/NBD (i.e., *Pareto/NBD (HB)*) served as a proof-of-concept for the MCMC approach, but doesn't yet take full advantage of the gained flexibility, as it sticks to the original Pareto/NBD assumptions. In contrast, Abe's variant of the Pareto/NBD (termed here *Pareto/NBD (Abe)*) relaxes the independence of purchase and dropout process, plus is capable of incorporating customer covariates. Particularly the latter can turn out to be very powerful, if any of such known covariates helps in explaining the heterogeneity within the customer cohort. Finally, the *Pareto/GGG* is another generalization of Pareto/NBD, which allows for a varying degree of regularity within the transaction timings. Analogous to (M)BG/CNBD-k, incorporating regularity can yield significant improvements in forecasting accuracy, if such regularity is present in the data.
## Analytical Workflow
The typical analysis process starts out by reading in a complete log of all events or transactions of an existing customer cohort. It is up to the analyst to define how a customer base is split into cohorts, but typically these are defined based on customers' first transaction date and/or the acquisition channel. The data requirements for such an event log are minimal, and only consist of a customer identifier field `cust`, and a `date` field of class `Date` or `POSIXt`. If the analysis should also cover the monetary component, the event log needs to contain a corresponding field `sales`. In order to get started quickly, BTYDplus provides an event log for customers of an online grocery store over a time period of two years (`data("groceryElog")`). Further, for each BTYDplus model data generators are available (`*.GenerateData`), which allow to create artificial transaction logs, that follow the assumptions of a particular model.
```{r, echo=FALSE, results="asis"}
cdnowElog <- read.csv(system.file("data/cdnowElog.csv", package = "BTYD"),
stringsAsFactors = FALSE,
col.names = c("cust", "sampleid", "date", "cds", "sales"))
cdnowElog$date <- as.Date(as.character(cdnowElog$date), format = "%Y%m%d")
knitr::kable(head(cdnowElog[, c("cust", "date", "sales")], 6), caption = "Transaction Log Example")
```
Once the transaction log has been obtained, it needs to be converted into a customer-by-sufficient-statistic summary table (via the `elog2cbs` method), so that the data can be used by model-specific parameter estimation methods (`*.EstimateParameters` for MLE- and `*.DrawParameters` for MCMC-models). The estimated parameters already provide insights regarding the purchase and dropout process, e.g. mean purchase frequency, mean lifetime, variation in dropout probability, etc. For MLE-estimated models we can further report the maximized log-likelihood (via the `*.cbs.LL` methods) to benchmark the models in terms of their data fit to a particular dataset. Further, estimates for the conditional and unconditional expected number of transactions (`*.pmf`, `*.Expectation`, `*.ConditionalExpectedTransactions`), as well as for the (unobservable) status of a customer (`*.PAlive`) can be computed based on the parameters. Such estimates can then be analyzed either on an individual level, or be aggregated to cohort level.
## Helper Methods
BTYDplus provides various model-independent helper methods for handling and describing customers' transaction logs.
### Convert Event Log to Weekly Transactions
Before starting to fit probabilistic models, an analyst might be interested in reporting the total number of transactions over time, to gain a first understanding of the dynamics at a cohort level. For this purpose the methods `elog2cum` and `elog2inc` are provided. These take an event log as a first argument, and count for each time unit the cumulated or incremental number of transactions. If argument `first` is set to TRUE, then a customer's initial transaction will be included, otherwise not.
```{r, fig.show="hold", fig.width=7, fig.height=2.5, fig.cap="Weekly trends for the grocery dataset"}
data("groceryElog")
op <- par(mfrow = c(1, 2), mar = c(2.5, 2.5, 2.5, 2.5))
# incremental
weekly_inc_total <- elog2inc(groceryElog, by = 7, first = TRUE)
weekly_inc_repeat <- elog2inc(groceryElog, by = 7, first = FALSE)
plot(weekly_inc_total, typ = "l", frame = FALSE, main = "Incremental")
lines(weekly_inc_repeat, col = "red")
# cumulative
weekly_cum_total <- elog2cum(groceryElog, by = 7, first = TRUE)
weekly_cum_repeat <- elog2cum(groceryElog, by = 7, first = FALSE)
plot(weekly_cum_total, typ = "l", frame = FALSE, main = "Cumulative")
lines(weekly_cum_repeat, col = "red")
par(op)
```
The x-axis represents time measured in weeks, thus we see that the customers were observed over a two year time period. The gap between the red line (=repeat transactions) and the black line (=total transactions) illustrates the customers' initial transactions. These only occur within the first 13 weeks because the cohort of this particular dataset has been defined by their acquisition date falling within the first quarter of 2006.
### Convert Transaction Log to CBS format
The `elog2cbs` method is an efficient implementation for the conversion of an event log into a customer-by-sufficient-statistic (CBS) `data.frame`, with a row for each customer. This is the required data format for estimating model parameters.
```{r}
data("groceryElog")
head(elog2cbs(groceryElog), 5)
```
The returned field `cust` is the unique customer identifier, `x` the number of repeat transactions (i.e., frequency), `t.x` the time of the last recorded transaction (i.e., recency), `litt` the sum over logarithmic intertransaction times (required for estimating regularity), `first` the date of the first transaction, and `T.cal` the duration between the first transaction and the end of the calibration period. If the provided `elog` data.frame contains a field `sales`, then this will be summed up and returned as an additional field, named `sales`. Note that transactions with identical `cust` and `date` fields are counted as a single transaction, with `sales` summed up accordingly.
The time unit for expressing `t.x`, `T.cal` and `litt` is determined via the argument `units`, which is passed forward to the method `difftime`, and defaults to `weeks`.
Argument `T.tot` allows one to specify the end of the observation period, i.e., the last possible date of an event to still be included in the event log. If `T.tot` is not provided, then the date of the last recorded event will be assumed to coincide with the end of the observation period. If `T.tot` is provided, then any event that occurs after that date is discarded.
Argument `T.cal` allows one to calculate the summary statistics for a calibration and a holdout period separately. This is particularly useful for evaluating forecasting accuracy for a given dataset. If `T.cal` is not provided, then the whole observation period is considered and subsequently used for estimating model parameters. If it is provided, then the returned `data.frame` contains two additional fields, with `x.star` representing the number of repeat transactions during the holdout period of length `T.star`. Only customers who have had at least one event during the calibration period are retained.
```{r}
data("groceryElog")
range(groceryElog$date)
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
head(groceryCBS, 5)
```
### Estimate Regularity
The models BG/CNBD-k, MBG/CNBD-k and Pareto/GGG are capable of leveraging regularity within transaction timings for improving forecast accuracy. The method `estimateRegularity` provides a quick check for the degree of regularity in the event timings. A return value close to 1 supports the assumption of exponentially distributed intertransaction times, whereas values significantly larger than 1 reveal the presence of regularity. Estimation is done either by 1) assuming the same degree of regularity across all customers (`method = "wheat"`), or 2) by estimating regularity for each customer separately, as the shape parameter of a fitted gamma distribution, and then returning the median across these estimates. The latter approach, though, requires sufficient ($\geq 10$) transactions per customer.
@wheat1990epr's method calculates for each customer a statistic M based on her last two intertransaction times as $M := \text{ITT}_1 / (\text{ITT}_1 + \text{ITT}_2)$. That measure is known to follow a $\text{Beta}(k, k)$ distribution if the intertransaction times of customers follow $\text{Gamma}(k, \lambda)$ with a shared $k$ but potentially varying $\lambda$, and $k$ can then be estimated as $(1 - 4 \cdot Var(M)) / (8 \cdot Var(M))$. The corresponding diagnostic plot shows the actual distribution of $M$ vs. the theoretical distributions for exponentially and for Erlang-2 distributed ITTs.
```{r, fig.show="hold", fig.width=7, fig.height=2.2, fig.cap="Diagnostic plots for estimating regularity"}
data("groceryElog")
op <- par(mfrow = c(1, 2))
(k.wheat <- estimateRegularity(groceryElog, method = "wheat",
plot = TRUE, title = "Wheat & Morrison"))
(k.mle <- estimateRegularity(groceryElog, method = "mle",
plot = TRUE, title = "Maximum Likelihood"))
par(op)
```
Applied to the online grocery dataset, the Wheat & Morrison estimator reports a regularity estimate close to 2, suggesting that an Erlang-2 might be more appropriate than the exponential distribution for modelling intertransaction times in this case. The peak in the plotted distribution additionally suggests that there is a subset of customers exhibiting an even stronger degree of regularity.
The maximum likelihood estimation method fits separate gamma distributions to the intertransaction times of each customer with more than 10 events. The reported median estimate of `r paste0("k=", round(k.mle, 2))` also indicates a stronger degree of regularity for this subset of highly active customers. The boxplot then gives a deeper understanding of the distribution of the `k` estimates, revealing heterogeneity in regularity across the cohort, thus suggesting that this dataset is a good candidate for the Pareto/GGG model.
## Maximum Likelihood Estimated Models
### NBD
The NBD model by @ehrenberg1959pattern assumes a heterogeneous, yet constant purchasing process, with exponentially distributed intertransaction times, whereas the purchase rate $\lambda$ is $\text{Gamma}(r, \alpha)$-distributed across customers.
Fitting the model requires converting the event log first to a CBS format and passing the dataset to `nbd.EstimateParameters`. The method searches (by using `stats::optim`) for the pair of $(r, \alpha)$ heterogeneity parameters that maximizes the log-likelihood function (`nbd.cbs.LL`) given the data.
```{r}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate NBD parameters
round(params.nbd <- nbd.EstimateParameters(groceryCBS), 3)
# report log-likelihood
nbd.cbs.LL(params.nbd, groceryCBS)
```
With the mean of the Gamma distribution being $r / \alpha$, the mean estimate for $\lambda$ is `r round(params.nbd[1] / params.nbd[2], 2)`, which translates to a mean intertransaction time of $1 / \lambda$ of `r round(params.nbd[2] / params.nbd[1], 2)` weeks.
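These implied quantities can also be computed explicitly from the fitted parameter vector; the following chunk merely restates the inline calculation above.
```{r}
# cohort means implied by the fitted Gamma(r, alpha) heterogeneity:
# mean purchase rate r / alpha and mean intertransaction time alpha / r
round(c(`mean lambda`      = unname(params.nbd[1] / params.nbd[2]),
        `mean ITT (weeks)` = unname(params.nbd[2] / params.nbd[1])), 2)
```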
The expected number of (future) transactions for a customer, conditional on her past (`x` and `T.cal`), can be computed with `nbd.ConditionalExpectedTransactions`. By passing the whole CBS we can easily generate estimates for all customers in the cohort.
```{r}
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 52 weeks
est5.nbd <- nbd.ConditionalExpectedTransactions(params.nbd,
T.star = 52, x = 1:5, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.nbd[i]), "\n")
}
```
```{r}
# predict whole customer cohort
groceryCBS$xstar.nbd <- nbd.ConditionalExpectedTransactions(
params = params.nbd, T.star = 52,
x = groceryCBS$x, T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`NBD` = c(`Holdout` = round(sum(groceryCBS$xstar.nbd))))
```
As can be seen, the NBD model heavily overforecasts the actual number of transactions (by `r paste0(round(100 * sum(groceryCBS$xstar.nbd) / sum(groceryCBS$x.star) - 100, 1), "%")`), which can be explained by the lack of a dropout process in the model assumptions. All customers are assumed to remain just as active in the second year as they have been in their first year. However, figure 2 clearly shows a downward trend in the incremental transaction counts for the online grocery customers, thus mandating a different model.
### Pareto/NBD
The Pareto/NBD model [@schmittlein1987cyc] combines the NBD model with the possibility of customers becoming inactive. A customer's state, however, is not directly observable, and the model needs to draw inferences based on the observed elapsed time since a customer's last activity, i.e., `T.cal - t.x`. In particular, the model assumes a customer's lifetime $\tau$ to be exponentially distributed with parameter $\mu$, whereas $\mu$ is $\text{Gamma}(s, \beta)$-distributed across customers.
The Pareto/NBD implementation is part of the BTYD package, but the workflow for fitting the model and making predictions is analogous to the one used in BTYDplus.
```{r}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate Pareto/NBD parameters
params.pnbd <- BTYD::pnbd.EstimateParameters(groceryCBS[, c("x", "t.x", "T.cal")])
names(params.pnbd) <- c("r", "alpha", "s", "beta")
round(params.pnbd, 3)
# report log-likelihood
BTYD::pnbd.cbs.LL(params.pnbd, groceryCBS[, c("x", "t.x", "T.cal")])
```
For one, we can note that the maximized log-likelihood of the Pareto/NBD is higher than for the NBD model, implying a better data fit. Second, by estimating a mean lifetime of $1/(s/\beta)$, namely `r round(1/(params.pnbd[3]/params.pnbd[4]), 2)` weeks, the estimated mean intertransaction time changes from `r round(params.nbd[2] / params.nbd[1], 2)` to `r round(1/(params.pnbd[1]/params.pnbd[2]), 2)` weeks, when compared to the NBD.
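Again, these implied cohort-level means can be read off the named parameter vector directly; the chunk below simply restates the inline computations.
```{r}
# cohort means implied by the Pareto/NBD fit (in weeks):
# mean intertransaction time alpha / r and mean lifetime beta / s
round(c(`mean ITT`      = unname(params.pnbd["alpha"] / params.pnbd["r"]),
        `mean lifetime` = unname(params.pnbd["beta"] / params.pnbd["s"])), 2)
```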
Let's now again compute the conditional expected transactions for five simulated customers with an increasing number of observed transactions, but all with an overly long period of recent inactivity.
```{r}
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 12 weeks, but then remained
# inactive for 40 weeks
est5.pnbd <- BTYD::pnbd.ConditionalExpectedTransactions(params.pnbd,
T.star = 52, x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.pnbd[i]), "\n")
}
```
```{r}
# predict whole customer cohort
groceryCBS$xstar.pnbd <- BTYD::pnbd.ConditionalExpectedTransactions(
params = params.pnbd, T.star = 52,
x = groceryCBS$x, t.x = groceryCBS$t.x,
T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`Pareto/NBD` = c(`Holdout` = round(sum(groceryCBS$xstar.pnbd))))
```
As expected, the Pareto/NBD yields overall lower and thus more realistic estimates than the NBD. However, the results also reveal an interesting pattern, which might seem counterintuitive at first sight: customers with a very active purchase history (e.g., customers with 5 transactions) receive lower estimates than customers who have been less active in the past. @fader2005rfm discuss this apparent paradox in more detail, yet the underlying mechanism can be easily explained by looking at the model's assessment of the latent activity state.
```{r}
# P(alive) for customers who've had 1 to 5 transactions in first
# 12 weeks, but then remained inactive for 40 weeks
palive.pnbd <- BTYD::pnbd.PAlive(params.pnbd,
x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.2f %%", 100*palive.pnbd[i]), "\n")
}
```
The probability of still being alive after a 40-week purchase hiatus drops from `r paste0(round(100*palive.pnbd[1], 1), "%")` for the one-time-repeating customer to `r paste0(round(100*palive.pnbd[5], 1), "%")` for the customer who has already had 5 transactions. The elapsed time of inactivity is a stronger indication of churn for the frequently purchasing customer than for the less frequent one, as a low purchase frequency also allows for intertransaction times as long as the observed 40 weeks.
### BG/CNBD-k and MBG/CNBD-k
The BG/NBD [@fader2005cyc] and the MBG/NBD [@batislam2007empirical;@hoppe2007cbg] models are contained in the larger class of (M)BG/CNBD-k models [@platzer2017mbgcnbd], and are thus presented together in this section. The MBG/CNBD-k model assumptions are as follows: a customer's intertransaction times, while active, are Erlang-k distributed, with the purchase rate $\lambda$ being $\text{Gamma}(r, \alpha)$-distributed across customers. After each transaction a customer can become inactive (for good) with a constant dropout probability $p$, where $p$ is $\text{Beta}(a, b)$-distributed across customers. The BG/CNBD-k differs only in that the customer is not allowed to drop out at the initial transaction, but only at repeat transactions.
```{r}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate parameters for various models
params.bgnbd <- BTYD::bgnbd.EstimateParameters(groceryCBS) # BG/NBD
params.bgcnbd <- bgcnbd.EstimateParameters(groceryCBS) # BG/CNBD-k
params.mbgnbd <- mbgnbd.EstimateParameters(groceryCBS) # MBG/NBD
params.mbgcnbd <- mbgcnbd.EstimateParameters(groceryCBS) # MBG/CNBD-k
row <- function(params, LL) {
names(params) <- c("k", "r", "alpha", "a", "b")
c(round(params, 3), LL = round(LL))
}
rbind(`BG/NBD` = row(c(1, params.bgnbd),
BTYD::bgnbd.cbs.LL(params.bgnbd, groceryCBS)),
`BG/CNBD-k` = row(params.bgcnbd,
bgcnbd.cbs.LL(params.bgcnbd, groceryCBS)),
`MBG/NBD` = row(params.mbgnbd,
mbgcnbd.cbs.LL(params.mbgnbd, groceryCBS)),
`MBG/CNBD-k` = row(params.mbgcnbd,
mbgcnbd.cbs.LL(params.mbgcnbd, groceryCBS)))
```
The MLE method searches across a five dimensional parameter space $(k, r, \alpha, a, b)$ to find the optimum of the log-likelihood function. As can be seen from the reported log-likelihood values, the MBG/CNBD-k is able to provide a better fit than NBD, Pareto/NBD, BG/NBD, MBG/NBD and BG/CNBD-k for the given dataset. Further, the estimate for regularity parameter $k$ is 2 and implies that regularity is present, and that Erlang-2 is considered more suitable for the intertransaction times than the exponential distribution ($k=1$).
```{r}
# calculate expected future transactions for customers who've
# had 1 to 5 transactions in first 12 weeks, but then remained
# inactive for 40 weeks
est5.mbgcnbd <- mbgcnbd.ConditionalExpectedTransactions(params.mbgcnbd,
T.star = 52, x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.3f", est5.mbgcnbd[i]), "\n")
}
```
```{r}
# P(alive) for customers who've had 1 to 5 transactions in first
# 12 weeks, but then remained inactive for 40 weeks
palive.mbgcnbd <- mbgcnbd.PAlive(params.mbgcnbd,
x = 1:5, t.x = 12, T.cal = 52)
for (i in 1:5) {
cat("x =", i, ":", sprintf("%5.2f %%", 100*palive.mbgcnbd[i]), "\n")
}
```
Predicting transactions for 5 simulated customers, each with a long purchase hiatus but with a varying number of past transactions, we see the same pattern as for the Pareto/NBD, except that the forecasted numbers are even lower. This results from the long period of inactivity now being, in the presence of regularity, an even stronger indication of defection, as the Erlang-2 allows for less variation in the intertransaction times.
```{r}
# predict whole customer cohort
groceryCBS$xstar.mbgcnbd <- mbgcnbd.ConditionalExpectedTransactions(
params = params.mbgcnbd, T.star = 52,
x = groceryCBS$x, t.x = groceryCBS$t.x,
T.cal = groceryCBS$T.cal)
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`MBG/CNBD-k` = c(`Holdout` = round(sum(groceryCBS$xstar.mbgcnbd))))
```
Comparing the predictions at an aggregate level, we see that the MBG/CNBD-k also remains overly optimistic for the online grocery dataset, though to a slightly lower extent than the Pareto/NBD. The aggregate level dynamics can be visualized with the help of `mbgcnbd.PlotTrackingInc`.
```{r, fig.show="hold", fig.width=4, fig.height=3.5, fig.cap="Weekly actuals vs. MBG/CNBD-k predictions"}
# runs for ~37secs on a 2015 MacBook Pro
nil <- mbgcnbd.PlotTrackingInc(params.mbgcnbd,
T.cal = groceryCBS$T.cal,
T.tot = max(groceryCBS$T.cal + groceryCBS$T.star),
actual.inc.tracking = elog2inc(groceryElog))
```
However, when assessing the error at the individual level by calculating the mean absolute error (MAE) of our predictions, we see a significant improvement in forecasting accuracy from accounting for the mild degree of regularity within the timing patterns.
```{r}
# mean absolute error (MAE)
mae <- function(act, est) {
stopifnot(length(act)==length(est))
sum(abs(act-est)) / length(act)
}
mae.nbd <- mae(groceryCBS$x.star, groceryCBS$xstar.nbd)
mae.pnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.pnbd)
mae.mbgcnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.mbgcnbd)
rbind(`NBD` = c(`MAE` = round(mae.nbd, 3)),
`Pareto/NBD` = c(`MAE` = round(mae.pnbd, 3)),
`MBG/CNBD-k` = c(`MAE` = round(mae.mbgcnbd, 3)))
lift <- 1 - mae.mbgcnbd / mae.pnbd
cat("Lift in MAE for MBG/CNBD-k vs. Pareto/NBD:", round(100*lift, 1), "%")
```
## MCMC Estimated Models
This chapter presents three buy-till-you-die model variants which rely on Markov chain Monte Carlo simulation for parameter estimation. Implementation complexity as well as computational costs are significantly higher, and despite an efficient MCMC implementation in C++, applying these models requires considerably more computing time than the previously presented ML-estimated models. On the upside, we gain flexibility in our model assumptions and get estimated distributions even for individual-level parameters. Thus, the return object of the parameter estimation (`param.draws <- *.mcmc.DrawParameters(...)`) does not just contain point estimates of the heterogeneity parameters (as `params <- *.EstimateParameters(...)` does), but provides samples from the marginal posterior distributions, both at the cohort level (`param.draws$level_2`) as well as at the customer level (`param.draws$level_1`). Based on these parameter draws, we can then easily sample the posterior distributions of any derived quantity of managerial interest, for example the number of future transactions (`mcmc.DrawFutureTransactions`) or the probability of being active in a given period.
Generally speaking, MCMC works by constructing a Markov chain which has the desired target (posterior) distribution as its equilibrium distribution. The algorithm then performs random walks on that Markov chain and will eventually (after some "burnin" phase) produce draws from the posterior. In order to assess MCMC convergence one can run multiple MCMC chains (in parallel) and check whether these provide similar distributions. Due to the high auto-correlation between subsequent iteration steps in the MCMC chains, it is also advisable to keep only every x-th step. The MCMC default settings for parameter draws (`*.mcmc.DrawParameters(..., mcmc = 2500, burnin = 500, thin = 50, chains = 2)`) should work well in many empirical settings. Depending on your platform, the code will either use a single core (on Windows OS), or multiple cores in parallel (on Unix/MacOS) to run the MCMC chains. To speed up convergence, the MCMC chains will be automatically initialized with the maximum likelihood estimates of the Pareto/NBD. The sampled draws are wrapped as a `coda::mcmc.list` object, and the `coda` package provides various helper methods (e.g. `as.matrix.mcmc.list`, `HPDinterval`, etc.) for performing output analysis and diagnostics for MCMC (cf. `help(package="coda")`).
### Pareto/NBD (HB)
The Pareto/NBD (HB) is identical to Pareto/NBD with respect to its model assumptions, but relies on MCMC for parameter estimation and thus can leverage the aforementioned benefits of such approach. @rossi2003bayesian already provided a blueprint for applying a full Bayes approach (in contrast to an empirical Bayes approach) to hierarchical models such as Pareto/NBD. @ma2007mcmc then published a specific MCMC scheme, comprised of Gibbs sampling with slice sampling to draw from the conditional distributions. Later @abe2009counting suggested in the technical appendix to augment the parameter space with the unobserved lifetime $\tau$ and activity status $z$ in order to decouple the sampling of the transaction process from the dropout process. This allows the sampling scheme to take advantage of conjugate priors for drawing $\lambda$ and $\mu$, and is accordingly implemented in this package.
Let's apply the Pareto/NBD (HB), with the default MCMC settings in place, for the online grocery dataset. First we draw parameters with `pnbd.mcmc.DrawParameters`, and then pass these forward to the model-independent methods for deriving quantities of managerial interest, namely `mcmc.DrawFutureTransactions`, `mcmc.PActive` and `mcmc.PAlive`. Note the difference between $P(\text{active})$ and $P(\text{alive})$: the former denotes the probability of making at least one transaction within the holdout period, and the latter is the probability of making another transaction at any time in the future.
```{r}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# generate parameter draws (~13secs on 2015 MacBook Pro)
pnbd.draws <- pnbd.mcmc.DrawParameters(groceryCBS)
# generate draws for holdout period
pnbd.xstar.draws <- mcmc.DrawFutureTransactions(groceryCBS, pnbd.draws)
# conditional expectations
groceryCBS$xstar.pnbd.hb <- apply(pnbd.xstar.draws, 2, mean)
# P(active)
groceryCBS$pactive.pnbd.hb <- mcmc.PActive(pnbd.xstar.draws)
# P(alive)
groceryCBS$palive.pnbd.hb <- mcmc.PAlive(pnbd.draws)
# show estimates for first few customers
head(groceryCBS[, c("x", "t.x", "x.star",
"xstar.pnbd.hb", "pactive.pnbd.hb",
"palive.pnbd.hb")])
```
As can be seen, the basic application of an MCMC-estimated model is just as straightforward as for ML-estimated models. However, the 2-element list return object of `pnbd.mcmc.DrawParameters` allows for further analysis: `level_1` is a list of `coda::mcmc.list`s, one for each customer, with draws for customer-level parameters ($\lambda, \mu, \tau, z$), and `level_2` a `coda::mcmc.list` with draws for cohort-level parameters ($r, \alpha, s, \beta$). Running the estimation with the default MCMC settings returns a total of 80 samples (`(mcmc-burnin)*chains/thin`) for `r paste0("nrow(groceryCBS) * 4 + 4 = ", nrow(groceryCBS) * 4 + 4)` parameters, and for each we can inspect the MCMC traces and estimated distributions, and calculate summary statistics.
For the cohort-level parameters ($r, \alpha, s, \beta$) the median point estimates are generated as follows:
```{r}
class(pnbd.draws$level_2)
# convert cohort-level draws from coda::mcmc.list to a matrix, with
# each parameter becoming a column, and each draw a row
cohort.draws <- pnbd.draws$level_2
head(as.matrix(cohort.draws), 5)
# compute median across draws, and compare to ML estimates; as can be
# seen, the two parameter estimation approaches result in very similar
# estimates
round(
rbind(`Pareto/NBD (HB)` = apply(as.matrix(cohort.draws), 2, median),
`Pareto/NBD` = BTYD::pnbd.EstimateParameters(groceryCBS[, c("x", "t.x", "T.cal")]))
, 2)
```
MCMC traces and estimated parameter distributions can be easily visualized by using the corresponding methods from the `coda` package.
```{r, fig.show="hold", warning=FALSE, fig.width=7, fig.height=3, fig.cap="MCMC traces and parameter distributions of cohort-level parameters"}
# plot trace- and density-plots for heterogeneity parameters
op <- par(mfrow = c(2, 4), mar = c(2.5, 2.5, 2.5, 2.5))
coda::traceplot(pnbd.draws$level_2)
coda::densplot(pnbd.draws$level_2)
par(op)
```
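In addition to these visual checks, the `coda` package also offers numerical convergence diagnostics; the following sketch (not evaluated here) shows a few of them applied to the cohort-level draws.
```{r, eval = FALSE}
# numerical MCMC diagnostics for the cohort-level draws
coda::gelman.diag(pnbd.draws$level_2)    # potential scale reduction factors
coda::effectiveSize(pnbd.draws$level_2)  # effective sample sizes
coda::HPDinterval(pnbd.draws$level_2)    # highest posterior density intervals
```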
One of the advantages of the MCMC approach compared to MLE is that the parameter draws and corresponding median values can also be inspected at the customer level. The following example code does so for the specific customer with ID 4 (i.e., `cust = 4`).
```{r, fig.show="hold", warning=FALSE, fig.width=7, fig.height=3, fig.cap="MCMC traces and parameter distributions of individual-level parameters for a specific customer"}
class(pnbd.draws$level_1)
length(pnbd.draws$level_1)
customer4 <- "4"
customer4.draws <- pnbd.draws$level_1[[customer4]]
head(as.matrix(customer4.draws), 5)
round(apply(as.matrix(customer4.draws), 2, median), 3)
# plot trace- and density-plots for customer4 parameters
op <- par(mfrow = c(2, 4), mar = c(2.5, 2.5, 2.5, 2.5))
coda::traceplot(pnbd.draws$level_1[[customer4]])
coda::densplot(pnbd.draws$level_1[[customer4]])
par(op)
```
Analogous to MLE-based models, we can also plot weekly transaction counts, as well as frequency plots at an aggregated level. These methods can be applied to all provided MCMC-based models in the following way.
```{r, eval = FALSE}
# runs for ~120secs on a MacBook Pro 2015
op <- par(mfrow = c(1, 2))
nil <- mcmc.PlotFrequencyInCalibration(pnbd.draws, groceryCBS)
nil <- mcmc.PlotTrackingInc(pnbd.draws,
T.cal = groceryCBS$T.cal,
T.tot = max(groceryCBS$T.cal + groceryCBS$T.star),
actual.inc.tracking.data = elog2inc(groceryElog))
par(op)
```
### Pareto/NBD (Abe)
@abe2009counting introduced a variant of Pareto/NBD by replacing the two independent gamma distributions for individuals' purchase rates $\lambda$ and dropout rates $\mu$ with a multivariate lognormal distribution. The BTYDplus package refers to this model variant as Pareto/NBD (Abe). The multivariate lognormal distribution permits a correlation between purchase and dropout processes, but even more importantly, can be easily extended to a linear regression model to incorporate customer-level covariates. This flexibility can significantly boost inference, if any of the captured covariates indeed helps in explaining the heterogeneity within the customer cohort.
The online grocery dataset doesn't contain any additional covariates, so for demonstration purposes we will apply the Pareto/NBD (Abe) to the CDNow dataset and reproduce the findings from the original paper. First we estimate a model without covariates (M1), and then we incorporate the dollar amount of the first purchase as a customer-level covariate (M2).
<!-- Note: we do not evaluate the Pareto/GGG and Pareto/NBD (Abe) example code here, in order to speed up vignette build. -->
```{r, eval = FALSE}
# load CDNow event log from BTYD package
cdnowElog <- read.csv(
system.file("data/cdnowElog.csv", package = "BTYD"),
stringsAsFactors = FALSE,
col.names = c("cust", "sampleid", "date", "cds", "sales"))
cdnowElog$date <- as.Date(as.character(cdnowElog$date),
format = "%Y%m%d")
# convert to CBS; split into 39 weeks calibration, and 39 weeks holdout
cdnowCbs <- elog2cbs(cdnowElog,
T.cal = "1997-09-30", T.tot = "1998-06-30")
# estimate Pareto/NBD (Abe) without covariates; model M1 in Abe (2009)
draws.m1 <- abe.mcmc.DrawParameters(cdnowCbs,
mcmc = 7500, burnin = 2500) # ~33secs on 2015 MacBook Pro
quant <- function(x) round(quantile(x, c(0.025, 0.5, 0.975)), 2)
t(apply(as.matrix(draws.m1$level_2), 2, quant))
#> 2.5% 50% 97.5%
#> log_lambda -3.70 -3.54 -3.32
#> log_mu -3.96 -3.59 -3.26
#> var_log_lambda 1.10 1.34 1.65
#> cov_log_lambda_log_mu -0.20 0.13 0.74
#> var_log_mu 1.44 2.62 5.05
#' append dollar amount of first purchase to use as covariate
first <- aggregate(sales ~ cust, cdnowElog, function(x) x[1] * 10^-3)
names(first) <- c("cust", "first.sales")
cdnowCbs <- merge(cdnowCbs, first, by = "cust")
#' estimate with first purchase spend as covariate; model M2 in Abe (2009)
draws.m2 <- abe.mcmc.DrawParameters(cdnowCbs,
covariates = c("first.sales"),
mcmc = 7500, burnin = 2500) # ~33secs on 2015 MacBook Pro
t(apply(as.matrix(draws.m2$level_2), 2, quant))
#> 2.5% 50% 97.5%
#> log_lambda_intercept -4.02 -3.77 -3.19
#> log_mu_intercept -4.37 -3.73 -2.69
#> log_lambda_first.sales 0.04 6.04 9.39
#> log_mu_first.sales -9.02 1.73 7.90
#> var_log_lambda 0.01 1.35 1.79
#> cov_log_lambda_log_mu -0.35 0.22 0.76
#> var_log_mu 0.55 2.59 4.97
```
The parameter estimates for models M1 and M2 roughly match the numbers reported in Table 3 of @abe2009counting. There are some discrepancies for the parameters `log_lambda_first.sales` and `log_mu_first.sales`, but the high-level result remains unaltered: the dollar amount of a customer's initial purchase correlates positively with purchase frequency, but doesn't influence the dropout process.
Note that, by using the provided data generators, one can check via simulations that the implementation is indeed able to correctly re-identify the underlying data-generating parameters, including the regression coefficients of the covariates.
### Pareto/GGG
@platzer2016pggg presented another extension of the Pareto/NBD model. The Pareto/GGG generalizes the distribution for the intertransaction times from the exponential to the Gamma distribution, with its shape parameter $k$ also allowed to vary across customers, following a $\text{Gamma}(t, \gamma)$ distribution. Hence, the purchase process follows a Gamma-Gamma-Gamma (GGG) mixture distribution that is capable of capturing a varying degree of regularity across customers. For datasets which exhibit regularity in their timing patterns, with the degree of regularity varying across the customer cohort, leveraging that information can yield significant improvements in terms of forecasting accuracy. This results from improved inferences about customers' latent state in the presence of regularity.
```{r, eval=FALSE}
# load grocery dataset, if it hasn't been done before
if (!exists("groceryCBS")) {
data("groceryElog")
groceryCBS <- elog2cbs(groceryElog, T.cal = "2006-12-31")
}
# estimate Pareto/GGG
pggg.draws <- pggg.mcmc.DrawParameters(groceryCBS) # ~2mins on 2015 MacBook Pro
# generate draws for holdout period
pggg.xstar.draws <- mcmc.DrawFutureTransactions(groceryCBS, pggg.draws)
# conditional expectations
groceryCBS$xstar.pggg <- apply(pggg.xstar.draws, 2, mean)
# P(active)
groceryCBS$pactive.pggg <- mcmc.PActive(pggg.xstar.draws)
# P(alive)
groceryCBS$palive.pggg <- mcmc.PAlive(pggg.draws)
# show estimates for first few customers
head(groceryCBS[, c("x", "t.x", "x.star",
"xstar.pggg", "pactive.pggg", "palive.pggg")])
#> x t.x x.star xstar.pggg pactive.pggg palive.pggg
#> 1 0 0.00000 0 0.02 0.02 0.03
#> 2 1 50.28571 0 1.01 0.59 1.00
#> 3 19 48.57143 14 14.76 0.87 0.87
#> 4 0 0.00000 0 0.04 0.03 0.13
#> 5 2 40.42857 3 2.02 0.84 0.91
#> 6 5 47.57143 6 4.46 0.92 0.95
# report median cohort-level parameter estimates
round(apply(as.matrix(pggg.draws$level_2), 2, median), 3)
#> t gamma r alpha s beta
#> 1.695 0.373 0.948 5.243 0.432 4.348
# report mean over median individual-level parameter estimates
median.est <- sapply(pggg.draws$level_1, function(draw) {
apply(as.matrix(draw), 2, median)
})
round(apply(median.est, 1, mean), 3)
#> k lambda mu tau z
#> 3.892 0.160 0.065 69.546 0.316
```
Summarizing the estimated parameter distributions shows that the regularity parameter `k` is estimated to be significantly larger than 1, and that it varies substantially across customers.
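One simple way to see that variation is to look at the spread of the per-customer median estimates of `k`; the following sketch (not evaluated here) assumes the `median.est` matrix computed in the chunk above.
```{r, eval = FALSE}
# spread of the per-customer median estimates of the regularity parameter k
quantile(median.est["k", ], probs = c(0.1, 0.25, 0.5, 0.75, 0.9))
```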
Concluding our vignette, we will benchmark the forecasting error of the Pareto/GGG, the MBG/CNBD-k and the Pareto/NBD.
```{r, eval = FALSE}
# compare predictions with actuals at aggregated level
rbind(`Actuals` = c(`Holdout` = sum(groceryCBS$x.star)),
`Pareto/GGG` = round(sum(groceryCBS$xstar.pggg)),
`MBG/CNBD-k` = round(sum(groceryCBS$xstar.mbgcnbd)),
`Pareto/NBD (HB)` = round(sum(groceryCBS$xstar.pnbd.hb)))
#> Holdout
#> Actuals 3389
#> Pareto/GGG 3815
#> MBG/CNBD-k 3970
#> Pareto/NBD (HB) 4018
# error on customer level
mae <- function(act, est) {
stopifnot(length(act)==length(est))
sum(abs(act-est)) / length(act)
}
mae.pggg <- mae(groceryCBS$x.star, groceryCBS$xstar.pggg)
mae.mbgcnbd <- mae(groceryCBS$x.star, groceryCBS$xstar.mbgcnbd)
mae.pnbd.hb <- mae(groceryCBS$x.star, groceryCBS$xstar.pnbd.hb)
rbind(`Pareto/GGG` = c(`MAE` = round(mae.pggg, 3)),
`MBG/CNBD-k` = c(`MAE` = round(mae.mbgcnbd, 3)),
`Pareto/NBD (HB)` = c(`MAE` = round(mae.pnbd.hb, 3)))
#> MAE
#> Pareto/GGG 0.621
#> MBG/CNBD-k 0.644
#> Pareto/NBD (HB) 0.688
lift <- 1 - mae.pggg / mae.pnbd.hb
cat("Lift in MAE:", round(100*lift, 1), "%")
#> Lift in MAE for Pareto/GGG vs. Pareto/NBD: 9.8%
```
Both on the aggregate level and on the customer level, we see a significant improvement in forecasting accuracy when leveraging the regularity within the transaction timings of the online grocery dataset. Further, the superior performance of the Pareto/GGG compared to the MBG/CNBD-k model suggests that it does pay off to also consider the heterogeneity in the degree of regularity across customers, which can itself be inspected via the individual-level parameter draws returned by `pggg.mcmc.DrawParameters(groceryCBS)`.
## References
|
/scratch/gouwar.j/cran-all/cranData/BTYDplus/vignettes/BTYDplus-HowTo.Rmd
|
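# Internal helper: dual update of the augmented Lagrangian scheme. For every pair
# i < j the multiplier is moved by the scaled residual of the constraint
# theta[i, j] = ability[i] - ability[j], with penalty parameter `penalty.Qua`.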
BTLagrangian <- function(Lagrangian, ability, theta, penalty.Qua) {
n <- nrow(ability)-1
v <- penalty.Qua
for(i in 1:(n-1)){
for(j in (i+1):n){
Lagrangian[i, j] <- Lagrangian[i, j] + v * (theta[i, j] - ability[i, 1] + ability[j, 1])
}
}
return(Lagrangian)
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTLagrangian.R
|
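# Internal helper: computes adaptive Lasso weights w[i, j] = 1 / |mu_i - mu_j| from
# the exponentially decayed MLE fit; pairs whose ability difference is smaller than
# 1/max are capped at the largest finite weight observed.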
BTLasso.weight <- function(dataframe, ability, decay.rate = 0, fixed = 1, thersh = 1e-5, max = 100, iter = 100) {
BT <- BTdecay(dataframe, ability, decay.rate = decay.rate, fixed = fixed, iter = iter)
ability <- BT$ability
n <- nrow(ability) - 1
weight <- matrix(0, nrow = n, ncol = n)
for(i in 1:(n - 1)){
for(j in (i + 1):n){
if(abs(ability[i, 1] - ability[j, 1]) < 1/max){
weight[i, j] <- -1
} else{
weight[i, j] <- 1/abs(ability[i, 1] - ability[j, 1])
}
}
}
k <- max(weight)
for(i in 1:(n - 1)){
for(j in (i + 1):n){
if(weight[i, j] == -1){
weight[i, j] <- k
}
}
}
weight
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTLasso.weight.R
|
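# Internal helper: negative exponentially decayed Bradley-Terry log-likelihood
# evaluated at the given ability vector (the last row of `ability` is the home effect).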
BTLikelihood <- function(dataframe, ability, decay.rate = 0){
df <- dataframe
u <- decay.rate
n1 <- nrow(df)
n <- nrow(ability) - 1
s <- 0
for(i in 1:n1){
a1 <- df[i, 1]
a2 <- df[i, 2]
C <- exp(-u * df[i, 5])
x <- ability[a1, 1]
y <- ability[a2, 1]
p <- ability[n + 1, 1] + x - y
q <- exp(p)
s <- s - (df[i, 3] * (p - log(q + 1)) + df[i, 4] * (-log(q + 1))) * C
}
s
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTLikelihood.R
|
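# Internal helper: full augmented Lagrangian objective, i.e. the negative decayed
# log-likelihood plus the quadratic penalty and multiplier terms on the residuals
# theta[i, j] - (ability[i] - ability[j]) plus the weighted L1 penalty on theta.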
BTLikelihood.all <- function(dataframe, ability, theta, penalty.Qua, weight, Lagrangian, lambda, decay.rate = 0){
df <- dataframe
v <- penalty.Qua
u <- decay.rate
n1 <- nrow(df)
n <- nrow(ability) - 1
s <- 0
for(i in 1:n1){
a1 <- df[i, 1]
a2 <- df[i, 2]
C <- exp(-u * df[i, 5])
x <- ability[a1, 1]
y <- ability[a2, 1]
p <- ability[n + 1, 1] + x - y
q <- exp(p)
s <- s - (df[i, 3] * (p - log(q + 1)) + df[i, 4] * (-log(q + 1))) * C
}
theta1 <- theta
for(i in 1:(n - 1)){
for(j in (i + 1):n){
theta1[i, j] <- theta[i, j] - ability[i, 1] + ability[j, 1]
}
}
s <- s + v/2 * sum(theta1^2) + sum(Lagrangian * theta1)
s <- s + lambda * sum(abs(theta) * weight)
s
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTLikelihood.all.R
|
#' Dataframe initialization
#'
#' @details Initializes the raw dataframe and returns an un-estimated ability vector together with the worst
#' team, i.e. the team that loses most.
#' @param dataframe Raw dataframe input; an example dataset "NFL2010" is attached in the package for reference.
#' The raw data is a dataframe with 5 columns. The first column contains the home teams.
#' The second column contains the away teams.
#' The third column is the number of wins of the home team (record 1 here if the home team defeats the away team, 0 otherwise).
#' The fourth column is the number of wins of the away team (record 0 here if the home team defeats the away team, 1 otherwise).
#' The fifth column is the time lag (time until now), i.e. a scalar recording how long ago the match was played. Any time scale can be used here;
#' "NFL2010" uses the unit of days.
#' @param home Whether a home effect will be considered; the default is TRUE.
#' @details Note that even if the tournament does not have home or away teams, you can still provide the match results
#' in the format described above regardless of who is at home and who is away. By selecting home = FALSE,
#' we duplicate the dataset, switch the home and away teams as well as the home and away match results, append this duplicate to
#' the original dataset and divide all home and away win counts by 2. The MLE of the home effect is then exactly 0.
#'
#' Eliminating the home effect by duplicating the original dataset is less efficient than dropping the home parameter
#' directly in the iterations. However, since most games such as football and basketball do have a home effect, and this method also
#' handles the case where some games have a home effect and some are played at a neutral site, this approach is applied here.
#' @return
#' \item{dataframe}{dataframe for Bradley-Terry run}
#' \item{ability}{Initial ability vector for iterations}
#' \item{worstTeam}{The worst team whose ability can be set as 0 during any model's run}
#' @export
BTdataframe <- function(dataframe, home =TRUE) {
## Initialize the dataframe of match results for the BTdecayLasso functions
df1 <- as.data.frame(dataframe)
df <- matrix(ncol = ncol(df1), nrow = nrow(df1))
team <- as.matrix(unique(df1[, 1]))
n <- length(team)
for(i in 1:n){
df[df1 == team[i, 1]] <- i
}
df[, 3] <- df1[, 3]
df[, 4] <- df1[, 4]
df[, 5] <- df1[, 5] - df1[1, 5]
## Determine the team who has the worst performance
k0 <- 1000
i0 <- 1
for (i in 1:n) {
k1 <- sum(df[df[, 1] == i, 3]) - sum(df[df[, 1] == i, 4]) + sum(df[df[, 2] == i, 4]) - sum(df[df[, 2] == i, 3])
if(k0 > k1){
i0 <- i
k0 <- k1
}
}
## Initialize the ability vector for the BTdecayLasso functions
ability <- matrix(0, ncol = 1, nrow = (n + 1))
colnames(ability) <- c("score")
rownames(ability) <- c(team, "at.home")
if (home == FALSE) {
df2 <- df
df2[, 1] <- df[, 2]
df2[, 2] <- df[, 1]
df2[, 3] <- df[, 4]
df2[, 4] <- df[, 3]
df <- rbind(df, df2)
df[, 3] <- df[, 3]/2
df[, 4] <- df[, 4]/2
}
output <- list(dataframe = df, ability = ability, worstTeam = i0)
output
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTdataframe.R
|
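# Internal helper: ability-update step of the augmented Lagrangian scheme. With theta
# and the Lagrangian multipliers held fixed, it minimizes the negative decayed
# log-likelihood plus the quadratic penalty and multiplier terms via L-BFGS-B.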
BTdecay.Qua <- function(dataframe, ability, theta, penalty.Qua, Lagrangian, decay.rate = 0, fixed = 1, iter = 100) {
df <- dataframe
u <- decay.rate
v <- penalty.Qua
n1 <- nrow(df)
n <- nrow(ability) - 1
fn <- function(ability) {
s <- 0
for(i in 1:(n-1)){
for(j in (i+1):n){
s1 <- theta[i, j] - ability[i] + ability[j]
s <- s + Lagrangian[i, j] * s1 + v/2 * s1^2
}
}
for (i in 1:n1) {
a1 <- df[i, 1]
a2 <- df[i, 2]
C <- exp(-u * df[i, 5])
x <- ability[a1]
y <- ability[a2]
p <- ability[n + 1] + x - y
q <- exp(p)
s <- s - (df[i, 3] * p - (df[i, 3] + df[i, 4]) * log(q + 1)) * C
}
s
}
gr <- function(ability) {
Grad <- rep(0, n + 1)
for(i in 1:n){
Grad[i] <- v * (- sum(ability[-(n + 1)]) + n * ability[i] +
sum(theta[, i]) - sum(theta[i, ])) +
sum(Lagrangian[, i]) - sum(Lagrangian[i, ])
}
for(i in 1:n1){
a1 <- df[i, 1]
a2 <- df[i, 2]
C <- exp(-u * df[i, 5])
x <- ability[a1]
y <- ability[a2]
p <- ability[n + 1] + x - y
q <- exp(p)
A <- -(df[i, 3] * (1/(q + 1)) + df[i, 4] * (-q/(q + 1))) * C
Grad[a1] <- Grad[a1] + A
Grad[a2] <- Grad[a2] - A
Grad[n + 1] <- Grad[n + 1] + A
}
Grad
}
xa <- optimx::optimr(rep(0, n + 1), fn, gr = gr, method = "L-BFGS-B", control = list(maxit = iter))
if(xa$convergence == 1){
stop("Iterations diverge, please provide a smaller decay rate or more data")
}
ability[, 1] <- xa$par - xa$par[fixed]
ability[n + 1, 1] <- xa$par[n + 1]
ability
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTdecay.Qua.R
|
#' Bradley-Terry Model with Exponential Decayed weighted likelihood
#'
#' An exponential decay rate is applied to the likelihood function to achieve a better tracking of current abilities. When "decay.rate" is set to 0,
#' this is the standard Bradley-Terry model whose estimated parameters are equivalent to those of package "BradleyTerry2".
#' A more detailed description is given in \code{\link{BTdecayLasso}}.
#'
#' @param dataframe Generated using \code{\link{BTdataframe}} given raw data.
#' @param ability A column vector of teams ability, the last row is the home parameter.
#' The row number is consistent with the team's index shown in dataframe. It can be generated using \code{\link{BTdataframe}} given raw data.
#' @param decay.rate The exponential decay rate, usually ranging in (0, 0.01). A larger decay rate puts more weight
#' on the most recent matches so that the estimated parameters reflect recent behaviour more strongly.
#' @param fixed The index of the team whose ability will be fixed at 0. The worstTeam's index
#' can be generated using \code{\link{BTdataframe}} given the raw data.
#' @param iter Number of iterations used in the L-BFGS-B algorithm.
#' @details
#' The standard Bradley-Terry Model defines the winning probability of i against j,
#' \deqn{P(Y_{ij}=1)=\frac{\exp(\tau h_{ij}^{t_{k}}+\mu_{i}-\mu_{j})}{1+\exp(\tau h_{ij}^{t_{k}}+\mu_{i}-\mu_{j})}}
#' \eqn{\tau} is the home parameter and \eqn{\mu_{i}} is the team i's ability score. \eqn{h_{ij}} takes 1 if team i is at home, -1 otherwise.
#' Given a complete tournament's results, the objective likelihood function with an exponential decay rate is,
#' \deqn{\sum_{k=1}^{n}\sum_{i<j}\exp(-\alpha t_{k})\cdot(y_{ij}(\tau h_{ij}^{t_{k}}+\mu_{i}-\mu_{j})-\log(1+\exp(\tau h_{ij}^{t_{k}}+\mu_{i}-\mu_{j})))}
#' where n is the number of matches, \eqn{\alpha} is the exponential decay rate and \eqn{y_{ij}} takes 0 if i is defeated by j, 1 otherwise. \eqn{t_{k}} is
#' the time lag (time until now).
#' This likelihood function is optimized using the L-BFGS-B method via \code{optimr} from package \bold{optimx}, and the S3 summary() method can be applied to view the outputs.
#' @return A list with class "BT" containing the estimated abilities and a convergence code, where 0 means convergence
#' was reached and 1 means it was not. If 1 is returned, we suggest lowering the decay rate.
#' The Bradley-Terry model cannot handle the situation where a team wins or loses all of its matches.
#' If a high decay rate is used, a team whose only wins or losses occurred a long time ago will cause the same problem.
#' \item{ability}{Estimated ability scores}
#' \item{convergence}{0 if convergence was reached, 1 otherwise}
#' \item{decay.rate}{Decay rate of this model}
#' @examples
#' ##Initializing Dataframe
#' x <- BTdataframe(NFL2010)
#'
#' ##Standard Bradley-Terry Model optimization
#' y <- BTdecay(x$dataframe, x$ability, decay.rate = 0, fixed = x$worstTeam)
#' summary(y)
#'
#' ##Dynamic approximation of current ability scores using an exponentially decayed likelihood.
#' ##If we take decay.rate = 0.005,
#' ##a match played one month earlier is weighted by exp(-0.15)=0.86 in the log-likelihood function
#' z <- BTdecay(x$dataframe, x$ability, decay.rate = 0.005, fixed = x$worstTeam)
#' summary(z)
#' @import optimx
#' @export
BTdecay <- function(dataframe, ability, decay.rate = 0, fixed = 1, iter = 100){
## Initialize the parameters
df <- as.matrix(dataframe)
u <- decay.rate
n1 <- nrow(df)
n <- nrow(ability) - 1
counts <- matrix(0, nrow = n, ncol = 2)
if(!(fixed %in% seq(1, n, 1))){
stop("The fixed team's index must be an integer index of one of all teams")
}
## Check the validity of standard Bradley-Terry Model
for (i in 1:n1) {
a1 <- df[i, 1]
a2 <- df[i, 2]
counts[a1, 1] <- counts[a1, 1] + df[i, 3]
counts[a2, 1] <- counts[a2, 1] + df[i, 4]
counts[a1, 2] <- counts[a1, 2] + df[i, 3] + df[i, 4]
counts[a2, 2] <- counts[a2, 2] + df[i, 3] + df[i, 4]
}
win <- c()
loss <- c()
for (i in 1:n) {
if (counts[i, 1] == counts[i, 2]) {
win <- c(win , i)
} else if (counts[i, 1] == 0) {
loss <- c(loss, i)
}
}
if (!is.null(win)) stop('Bradley-Terry Model cannot deal with the case if there exists team who wins all matches')
if (!is.null(loss)) stop('Bradley-Terry Model cannot deal with the case if there exists team who loses all matches')
## Iterations of the estimation of ability scores
fn <- function(ability){
s <- 0
for(i in 1:n1){
a1 <- df[i, 1]
a2 <- df[i, 2]
C <- exp(-u * df[i, 5])
x <- ability[a1]
y <- ability[a2]
p <- ability[n + 1] + x - y
q <- exp(p)
s <- s - (df[i, 3] * p - (df[i, 3] + df[i, 4]) * log(q + 1)) * C
}
s
}
gr <- function(ability){
Grad <- rep(0, n + 1)
for(i in 1:n1){
a1 <- df[i, 1]
a2 <- df[i, 2]
C <- exp(-u * df[i, 5])
x <- ability[a1]
y <- ability[a2]
p <- ability[n + 1] + x - y
q <- exp(p)
A <- -(df[i, 3] * (1/(q + 1)) + df[i, 4] * (-q/(q + 1))) * C
Grad[a1] <- Grad[a1] + A
Grad[a2] <- Grad[a2] - A
Grad[n + 1] <- Grad[n + 1] + A
}
Grad
}
xa <- optimx::optimr(rep(0, n + 1), fn, gr = gr, method = "L-BFGS-B", control = list(maxit = iter))
ability[, 1] <- xa$par - xa$par[fixed]
ability[n + 1, 1] <- xa$par[n + 1]
output <- list(ability = ability, convergence = xa$convergence, decay.rate = decay.rate)
if(xa$convergence == 1){
stop("Iterations diverge, please provide a smaller decay rate or more data")
}
class(output) <- "BT"
output
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTdecay.R
|
#' Bradley-Terry Model with Exponential Decayed weighted likelihood and Adaptive Lasso
#'
#' @description
#' The Bradley-Terry model is applied to paired comparison data. Teams' ability scores are estimated by maximizing the log-likelihood function.
#'
#' To achieve a better tracking of current abilities, we apply an exponential decay rate to weight the log-likelihood function.
#' The most recent matches weight more than previous matches. The parameter "decay.rate" in most functions of this package is used
#' to set the amount of exponential decay. decay.rate should be non-negative and its appropriate range depends on the time scale of the original dataframe
#' (see \code{\link{BTdataframe}} and the definition of the fifth column of parameter "dataframe"). For example,
#' a "decay.rate" of 0.007 with a unit of weeks is equivalent to a "decay.rate" of 0.001 with a unit of days. Usually, for sports matches,
#' if we take the unit of days, it ranges from 0 to 0.01. The higher the "decay.rate", the better the tracking of current teams' abilities,
#' with the side effect of higher variance.
#'
#' If "decay.rate" is too large, for example "0.1" with a unit of day, \eqn{\exp(-0.7)} = 0.50. Only half weight will be add to the likelihood for matches played
#' one week ago and \eqn{\exp(-3.1)} = 0.05 suggests that previous matches took place one month ago will have little effect. Therefore, Only a few matches are
#' accounted for ability's estimation. It will lead to a very high variance and uncertainty. Since standard Bradley-Terry model
#' can not handle the case where there is a team who wins or loses all matches, such estimation may not provide convergent results.
#' Thus, if our estimation provides divergent result, an error will be returned and we suggest user to chose a smaller "decay.rate"
#' or adding more match results into the same modeling period.
#'
#' By default, the Adaptive Lasso is implemented for variance reduction and team grouping. The Adaptive Lasso is proven to have a good grouping property.
#' Apart from the Adaptive Lasso, users can define their own weights for the
#' Lasso constraints \eqn{\left|\mu_{i}-\mu_{j}\right|}, where \eqn{\mu_{i}} is team i's ability.
#'
#' Also by default, the whole Lasso path will be run. Similar to package "glmnet", users can provide their own choice of Lasso penalty "lambda" and determine whether the
#' whole Lasso path should be run (since such a run is time-consuming). However, if the user is not familiar with the actual relationship among
#' lambda, the amount of penalty, the amount of shrinkage and the grouping effect, we suggest running the whole Lasso path and selecting an
#' appropriate lambda by the AIC or BIC criteria using \code{\link{BTdecayLassoC}} (since this model is time related, cross-validation cannot be applied). Alternatively, users can
#' use \code{\link{BTdecayLassoF}} to run the model with a specific Lasso penalty ranging from 0 to 1 (a penalty of 1 means all estimators shrink to 0).
#'
#' Two sets of estimated abilities will be given: the biased Lasso estimate and the HYBRID Lasso estimate.
#' The HYBRID Lasso estimate solves the restricted maximum likelihood optimization based on the grouping determined by the Lasso estimate (different teams' abilities converge to
#' the same value once the Lasso penalty is added, and these abilities are then restricted to be equal).
#'
#' In addition, the S3 summary() method can be applied to view the outputs.
#'
#' @param dataframe Generated using \code{\link{BTdataframe}} given raw data.
#' @param ability A column vector of teams ability, the last row is the home parameter.
#' The row number is consistent with the team's index shown in dataframe. It can be generated using \code{\link{BTdataframe}} given raw data.
#' @param lambda The amount of Lasso penalty induced. The input should be a positive scalar or a sequence.
#' @param weight Weight for Lasso penalty on different abilities.
#' @param path whether the whole Lasso path will be run (plot.BTdecayLasso is enabled only if path = TRUE)
#' @param decay.rate A non-negative exponential decay rate, usually ranging in (0, 0.01). A larger decay rate puts more weight
#' on the most recent matches so that the estimated parameters reflect recent behaviour more strongly.
#' @param fixed The index of the team whose ability will be fixed at 0. The worstTeam's index
#' can be generated using \code{\link{BTdataframe}} given the raw data.
#' @param thersh Threshold for convergence used for Augmented Lagrangian Method.
#' @param max Maximum weight for \eqn{w_{ij}} (weight used for Adaptive Lasso)
#' @param iter Number of iterations used in L-BFGS-B algorithm.
#' @details
#' According to \code{\link{BTdecay}}, the objective likelihood function to be optimized is,
#' \deqn{\sum_{k=1}^{n}\sum_{i<j}\exp(-\alpha t_{k})\cdot(y_{ij}(\tau h_{ij}^{t_{k}}+\mu_{i}-\mu_{j})-\log(1+\exp(\tau h_{ij}^{t_{k}}+\mu_{i}-\mu_{j})))}
#' The Lasso constraint is given as,
#' \deqn{\sum_{i<j}w_{ij}\left|\mu_{i}-\mu_{j}\right|\leq s}
#' where \eqn{w_{ij}} are predefined weights. For the Adaptive Lasso, \eqn{w_{ij}=1/\left|\mu_{i}^{MLE}-\mu_{j}^{MLE}\right|}.
#'
#' Maximizing this constrained objective function is equivalent to minimizing the following equation,
#' \deqn{-l(\mu,\tau)+\lambda\sum_{i<j}w_{ij}|\mu_{i}-\mu_{j}|}
#' where \eqn{-l(\mu,\tau)} is the negative of the objective function above. Increasing "lambda" decreases "s"; their relationship is
#' monotone. Here, we define "penalty" as \eqn{1-s/\max(s)}. Thus, "lambda" and "penalty" are positively correlated.
#' @return
#' \item{ability}{Estimated ability scores with user given lambda}
#' \item{likelihood}{Negative likelihood of objective function with user given lambda}
#' \item{df}{Degree of freedom with user given lambda(number of distinct \eqn{\mu})}
#' \item{penalty}{\eqn{s/max(s)} with user given lambda}
#' \item{Lambda}{User given lambda}
#' \item{ability.path}{if path = TRUE, estimated ability scores on whole Lasso path}
#' \item{likelihood.path}{if path = TRUE, negative likelihood of objective function on whole Lasso path}
#' \item{df.path}{if path = TRUE, degree of freedom on whole Lasso path(number of distinct \eqn{\mu})}
#' \item{penalty.path}{if path = TRUE, \eqn{s/max(s)} on whole Lasso path}
#' \item{Lambda.path}{if path = TRUE, Whole Lasso path}
#' \item{path}{Whether whole Lasso path will be run}
#' \item{HYBRID.ability.path}{If path = TRUE, the whole path of evolving of HYBRID ability}
#' \item{HYBRID.likelihood.path}{if path = TRUE, the whole path of HYBRID likelihood}
#' @seealso \code{\link{BTdataframe}} for dataframe initialization,
#' \code{\link{plot.swlasso}}, \code{\link{plot.wlasso}} are used for Lasso path plot if path = TRUE in this function's run
#' @references
#' Masarotto, G. and Varin, C.(2012) The Ranking Lasso and its Application to Sport Tournaments.
#' *The Annals of Applied Statistics* **6** 1949--1970.
#'
#' Zou, H. (2006) The adaptive lasso and its oracle properties.
#' *J.Amer.Statist.Assoc* **101** 1418--1429.
#' @examples
#' ##Initializing Dataframe
#' x <- BTdataframe(NFL2010)
#'
#' ##The following code runs the main results
#' ##Usually a single lambda's run will take 1-20 s
#' ##The whole Adaptive Lasso run will take 5-20 min
#' \donttest{
#' ##BTdecayLasso run with exponential decay rate 0.005 and
#' ##lambda 0.1, use path = TRUE if you want to run whole LASSO path
#' y1 <- BTdecayLasso(x$dataframe, x$ability, lambda = 0.1, path = FALSE,
#' decay.rate = 0.005, fixed = x$worstTeam)
#' summary(y1)
#'
#' ##Defining equal weight
#' ##Note that compared to the adaptive weights, user-defined weights may not be
#' ##efficient in grouping. Therefore, running the whole Lasso path
#' ##(the evolution of distinct ability scores) may take a much longer time.
#' ##We recommend the user to apply the default setting,
#' ##where Adaptive Lasso will be run.
#'
#' n <- nrow(x$ability) - 1
#' w2 <- matrix(1, nrow = n, ncol = n)
#' w2[lower.tri(w2, diag = TRUE)] <- 0
#'
#' ##BTdecayLasso run with exponential decay rate 0.005 and with a specific lambda 0.1
#' y2 <- BTdecayLasso(x$dataframe, x$ability, lambda = 0.1, weight = w2,
#' path = FALSE, decay.rate = 0.005, fixed = x$worstTeam)
#'
#' summary(y2)
#' }
#'
#' @export
BTdecayLasso <- function(dataframe, ability, lambda = NULL, weight = NULL, path = TRUE, decay.rate = 0, fixed = 1, thersh = 1e-5, max = 100, iter = 100) {
u <- decay.rate
n <- nrow(ability) - 1
theta <- matrix(0, nrow = n, ncol = n)
Lagrangian <- matrix(0, nrow = n, ncol = n)
ability[, 1] <- 0
k0 <- 0.0675
if(!(fixed %in% seq(1, n, 1))){
stop("The fixed team's index must be an integer index of one of all teams")
}
if (is.null(weight)) {
weight <- BTLasso.weight(dataframe, ability, decay.rate = decay.rate, fixed = fixed, thersh = thersh, max = max, iter = iter)
}
v <- 1
ability0 <- ability[, -1]
Hability0 <- ability[, -1]
l <- c()
hl <- c()
p <- c()
slambda <- c()
BT <- BTdecay(dataframe, ability, decay.rate = decay.rate, fixed = fixed, iter = iter)
ability1 <- BT$ability
s1 <- penaltyAmount(ability1, weight)
l1 <- BTLikelihood(dataframe, ability1, decay.rate = decay.rate)
df <- c()
j <- 1
if (path == FALSE && is.null(lambda)) {
stop("Please provide a sequence of lambda or enable lasso path")
}
if (path == TRUE) {
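## Whole Lasso path: starting from lambda = exp(-0.5), lambda is decreased
## geometrically (factor exp(-k0)) until all n - 1 free abilities are distinct;
## whenever the number of distinct abilities jumps by more than one, lambda is
## bisected between the last two values to refine the path.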
lambda0 <- 1
lambda1 <- exp(-0.5)
degree0 <- 0
while (degree0 < (n - 1)) {
stop <- 0
while (stop==0) {
##ability <- BTdecayLasso.step1(dataframe, ability, weight, Lagrangian, theta, v, lambda1,
## decay.rate = decay.rate, fixed = fixed, thersh = thersh, iter = iter)
ability <- BTdecay.Qua(dataframe, ability, theta, v, Lagrangian, decay.rate = decay.rate,
fixed = fixed, iter = iter)
theta <- BTtheta(ability, weight, Lagrangian, v, lambda1)
Lagrangian0 <- BTLagrangian(Lagrangian, ability, theta, v)
k <- sum(abs(Lagrangian0 - Lagrangian))
if (k < thersh) {
stop <- 1
} else {
Lagrangian <- Lagrangian0
v <- max(Lagrangian^2)
}
s0 <- penaltyAmount(ability, weight)
}
p0 <- s0/s1
ability0 <- cbind(ability0, ability)
l0 <- BTLikelihood(dataframe, ability, decay.rate = decay.rate)
l <- c(l, l0)
p <- c(p, p0)
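## HYBRID step: group teams whose Lasso abilities coincide (up to the convergence
## threshold), relabel the dataframe by group via map(), and refit the unpenalized
## decayed Bradley-Terry model on the grouped teams to obtain the HYBRID abilities.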
degree1 <- round(ability[1:n, 1], -log10(thersh)-1)
degree <- length(unique(degree1))
df <- c(df, degree)
map <- function(x){
if (x == 1){
return(1)
} else {
match <- which(degree1[1:(x - 1)] == degree1[x])
if (length(match) == 0) {
return(length(unique(degree1[1:x])))
} else {
return(length(unique(degree1[1:match[1]])))
}
}
}
dataframe1 <- dataframe
dataframe1[, 1] <- sapply(dataframe[, 1], map)
dataframe1[, 2] <- sapply(dataframe[, 2], map)
Hability1 <- matrix(0, nrow = (degree + 1), ncol = 1)
HBT <- BTdecay(dataframe1, Hability1, decay.rate = decay.rate, fixed = map(fixed), iter = iter)
Hability1 <- HBT$ability
Hability <- ability
for (i in 1:n) {
Hability[i, 1] <- Hability1[map(i), 1]
}
Hability[(n + 1), 1] <- Hability1[(degree + 1), 1]
Hability0 <- cbind(Hability0, Hability)
hl0 <- BTLikelihood(dataframe, Hability, decay.rate = decay.rate)
hl <- c(hl, hl0)
slambda <- c(slambda, lambda1)
if (degree > (degree0 + 1) && abs(lambda0 - lambda1) > (thersh * 10)) {
lambda1 <- (lambda0 + lambda1)/2
} else {
lambda0 <- lambda1
lambda1 <- lambda1 * exp(-k0)
degree0 <- max(degree0, degree)
}
}
}
if (!is.null(lambda)){
lambda <- sort(lambda, decreasing = TRUE)
for (i in 1:length(lambda)) {
stop <- 0
while (stop==0) {
##ability <- BTdecayLasso.step1(dataframe, ability, weight, Lagrangian, theta, v, lambda[i],
## decay.rate = decay.rate, fixed = fixed, thersh = thersh, iter = iter)
ability <- BTdecay.Qua(dataframe, ability, theta, v, Lagrangian, decay.rate = decay.rate,
fixed = fixed, iter = iter)
theta <- BTtheta(ability, weight, Lagrangian, v, lambda[i])
Lagrangian0 <- BTLagrangian(Lagrangian, ability, theta, v)
k <- sum(abs(Lagrangian0 - Lagrangian))
if (k < thersh) {
stop <- 1
} else {
Lagrangian <- Lagrangian0
v <- max(Lagrangian^2)
}
s0 <- penaltyAmount(ability, weight)
}
p0 <- s0/s1
ability0 <- cbind(ability0, ability)
l0 <- BTLikelihood(dataframe, ability, decay.rate = decay.rate)
l <- c(l, l0)
p <- c(p, p0)
degree1 <- round(ability[1:n, 1], -log10(thersh)-1)
degree <- length(unique(degree1))
df <- c(df, degree)
map <- function(x){
if (x == 1){
return(1)
} else {
match <- which(degree1[1:(x - 1)] == degree1[x])
if (length(match) == 0) {
return(length(unique(degree1[1:x])))
} else {
return(length(unique(degree1[1:match[1]])))
}
}
}
dataframe1 <- dataframe
dataframe1[, 1] <- sapply(dataframe[, 1], map)
dataframe1[, 2] <- sapply(dataframe[, 2], map)
Hability1 <- matrix(0, nrow = (degree + 1), ncol = 1)
HBT <- BTdecay(dataframe1, Hability1, decay.rate = decay.rate, fixed = map(fixed), iter = iter)
Hability1 <- HBT$ability
Hability <- ability
for (i in 1:n) {
Hability[i, 1] <- Hability1[map(i), 1]
}
Hability[(n + 1), 1] <- Hability1[(degree + 1), 1]
Hability0 <- cbind(Hability0, Hability)
hl0 <- BTLikelihood(dataframe, Hability, decay.rate = decay.rate)
hl <- c(hl, hl0)
slambda <- c(slambda, lambda[i])
}
}
ability0 <- cbind(ability0, ability1)
Hability0 <- cbind(Hability0, ability1)
l <- c(l, l1)
hl <- c(hl, l1)
p <- c(p, 1)
df <- c(df, n)
if (is.null(lambda)) {
output <- list(ability.path = ability0, likelihood.path = l, penalty.path = p, df.path = df, Lambda.path = c(slambda, 0), path = path,
HYBRID.ability.path = Hability0, HYBRID.likelihood.path = hl, decay.rate = decay.rate)
class(output) <- "wlasso"
} else {
n3 <- length(lambda)
n4 <- length(slambda)
if (path == FALSE) {
output <- list(ability = as.matrix(ability0[, 1:n4]), likelihood = l[1:n4], penalty = p[1:n4], df = df[1:n4], Lambda = slambda, path = path,
HYBRID.ability = as.matrix(Hability0[, 1:n4]), HYBRID.likelihood = hl[1:n4], decay.rate = decay.rate)
class(output) <- "slasso"
} else {
output <- list(ability = as.matrix(ability0[, (n4 - n3 + 1):n4]), likelihood = l[(n4 - n3 + 1):n4], penalty = p[(n4 - n3 + 1):n4], df = df[(n4 - n3 + 1):n4], Lambda = lambda,
ability.path = ability0, likelihood.path = l, penalty.path = p, df.path = df, Lambda.path = c(slambda, 0), path = path,
HYBRID.ability = as.matrix(Hability0[, (n4 - n3 + 1):n4]), HYBRID.likelihood = hl[(n4 - n3 + 1):n4],
HYBRID.ability.path = Hability0, HYBRID.likelihood.path = hl,
decay.rate = decay.rate)
class(output) <- "swlasso"
}
}
output
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTdecayLasso.R
|
BTdecayLasso.step1 <- function(dataframe, ability, weight, Lagrangian, theta, penalty.Qua, lambda,
decay.rate = 0, fixed = 1, thersh = 1e-5, iter = 100) {
##stop <- 0
##s1 <- 1000
##while(stop==0){
## ability <- BTdecay.Qua(dataframe, ability, theta, penalty.Qua, Lagrangian, decay.rate = decay.rate,
## fixed = fixed, iter = iter)
## theta <- BTtheta(ability, weight, Lagrangian, penalty.Qua, lambda)
## s <- BTLikelihood.all(dataframe, ability, theta, penalty.Qua, weight, Lagrangian, lambda, decay.rate = decay.rate)
## if(abs(s-s1) < thersh){
## stop <- 1
## } else{
## s1 <- s
## }
##}
ability <- BTdecay.Qua(dataframe, ability, theta, penalty.Qua, Lagrangian, decay.rate = decay.rate,
fixed = fixed, iter = iter)
return(ability)
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTdecayLasso.step1.R
|
BTdecayLasso.step2 <- function(dataframe, ability, lambda, weight, decay.rate = 0, fixed = 1, thersh = 1e-5, iter = 100) {
u <- decay.rate
n <- nrow(ability) - 1
theta <- matrix(0, nrow = n, ncol = n)
Lagrangian <- matrix(0, nrow = n, ncol = n)
ability[, 1] <- 0
con <- matrix(NA, nrow = 0, ncol = 4)
stop <- 0
v <- 1
j <- 1
while (stop==0) {
ability <- BTdecayLasso.step1(dataframe, ability, weight, Lagrangian, theta, v, lambda,
decay.rate = decay.rate, fixed = fixed, thersh = thersh, iter = iter)
theta <- BTtheta(ability, weight, Lagrangian, v, lambda)
Lagrangian0 <- BTLagrangian(Lagrangian, ability, theta, v)
k <- sum(abs(Lagrangian0 - Lagrangian))
if (k < thersh) {
stop <- 1
} else {
Lagrangian <- Lagrangian0
v <- max(Lagrangian^2)
}
s <- penaltyAmount(ability, weight)
j <- j + 1
con <- rbind(con, matrix(c(k, s, v, j), nrow = 1, ncol = 4))
}
ability0 <- ability
ability0[, 1] <- 0
BT <- BTdecay(dataframe, ability0, decay.rate = decay.rate, fixed = fixed, iter = iter)
ability0 <- BT$ability
s0 <- penaltyAmount(ability0, weight)
p <- s/s0
degree <- round(ability[1:n, 1], -log10(thersh)-1)
degree <- length(unique(degree))
output <- list(ability = ability, df = degree, penalty = p, convergence = con)
output
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTdecayLasso.step2.R
|
#' Bradley-Terry Model with Exponential Decayed weighted likelihood and weighted Lasso with AIC or BIC criteria
#'
#'
#' Model selection via AIC or BIC criteria. For Lasso estimators, the degree of freedom is the number of distinct groups of estimated abilities.
#'
#' @param dataframe Generated using \code{\link{BTdataframe}} given raw data.
#' @param ability A column vector of teams' abilities; the last row is the home parameter.
#' The row number is consistent with the team's index shown in dataframe. It can be generated using \code{\link{BTdataframe}} given raw data.
#' @param weight Weight for Lasso penalty on different abilities
#' @param criteria "AIC" or "BIC"
#' @param type "HYBRID" or "LASSO"
#' @param model A Lasso path object of class "wlasso" or "swlasso". If NULL, the whole Lasso path will be run.
#' @param decay.rate The exponential decay rate, usually ranging over (0, 0.01). A larger decay rate gives more weight
#' to recent matches, so the estimated parameters reflect recent behaviour more strongly.
#' @param fixed The index of the team whose ability is fixed at 0. The worst team's index
#' can be generated using \code{\link{BTdataframe}} given raw data.
#' @param thersh Threshold for convergence
#' @param max Maximum weight for \eqn{w_{ij}} (weight used for Adaptive Lasso)
#' @param iter Number of iterations used in L-BFGS-B algorithm.
#' @details
#' This function is usually run after the whole Lasso path has been computed. The "model" argument is the object
#' returned by a full-path run of \code{\link{BTdecayLasso}}. If no model is provided, this function runs the Lasso path first (time-consuming).
#'
#' Users can choose whether the information score is added to the HYBRID Lasso likelihood or to the original Lasso likelihood ("HYBRID" is recommended).
#'
#' The summary() function can be applied to view the outputs.
#' @return
#' \item{Score}{Lowest AIC or BIC score}
#' \item{Optimal.degree}{The degree of freedom where lowest AIC or BIC score is achieved}
#' \item{Optimal.ability}{The ability where lowest AIC or BIC score is achieved}
#' \item{ability}{Matrix contains all abilities computed in this algorithm}
#' \item{Optimal.lambda}{The lambda where lowest score is attained}
#' \item{Optimal.penalty}{The penalty (1- s/\eqn{\max(s)}) where lowest score is attained}
#' \item{type}{Type of model selection method}
#' \item{decay.rate}{Decay rate of this model}
#' @seealso \code{\link{BTdataframe}} for dataframe initialization,
#' \code{\link{BTdecayLasso}} for obtaining a whole Lasso path
#' @references
#' Masarotto, G. and Varin, C. (2012) The Ranking Lasso and its Application to Sport Tournaments.
#' *The Annals of Applied Statistics* **6** 1949--1970.
#'
#' Zou, H. (2006) The adaptive lasso and its oracle properties.
#' *J. Amer. Statist. Assoc.* **101** 1418--1429.
#'
#' @export
BTdecayLassoC <- function(dataframe, ability, weight = NULL, criteria = "AIC", type = "HYBRID", model = NULL, decay.rate = 0,
fixed = 1, thersh = 1e-5, iter = 100, max = 100) {
if (is.null(weight)) {
weight <- BTLasso.weight(dataframe, ability, decay.rate = decay.rate, fixed = fixed, thersh = thersh, max = max, iter = iter)
}
if (is.null(model)) {
Lp <- BTdecayLasso(dataframe, ability, lambda = NULL, weight = weight, path = TRUE, decay.rate = decay.rate,
fixed = fixed, thersh = thersh, max = max, iter = iter)
} else if (inherits(model, "wlasso") || inherits(model, "swlasso")) {
Lp <- model
} else {
stop("Please provide a model contains whole lasso path generated by BTdecayLasso")
}
n1 <- nrow(dataframe)
if (criteria == "AIC") {
mul <- 2
} else if (criteria == "BIC") {
mul <- log(n1)
} else {
stop("criteria should either be AIC or BIC")
}
y <- Lp$df.path * mul
x <- 2 * Lp$HYBRID.likelihood.path + y
ind <- which(x == min(x))
ind <- max(ind)
dg <- Lp$df.path[ind]
if (type == "HYBRID") {
output <- list(Score = min(x), Optimal.degree = dg, Optimal.ability = as.matrix(Lp$HYBRID.ability.path[, ind]),
Optimal.lambda = Lp$Lambda.path[ind], Optimal.penalty = Lp$penalty.path[ind], type = type, decay.rate = decay.rate)
} else if (type == "LASSO") {
m0 <- Lp$Lambda.path[ind]
m1 <- Lp$Lambda.path[ind + 1]
k <- m0 - m1
j <- 1
while (k > thersh * 10) {
k <- k * 0.5
m1 <- m0 - k
BT <- BTdecayLasso.step2(dataframe, ability, m1, weight, decay.rate = decay.rate, fixed = fixed, thersh = thersh, iter = iter)
if (dg == BT$df) {
m0 <- m1
}
j <- j + 1
}
if (j == 1) {
output <- list(Score = 2 * Lp$likelihood.path[ind] + dg * mul, Optimal.degree = dg, Optimal.ability = as.matrix(Lp$ability.path[, ind]),
Optimal.lambda = m0, Optimal.penalty = Lp$penalty.path[ind], type = type, decay.rate = decay.rate)
} else {
l <- BTLikelihood(dataframe, BT$ability, decay.rate = decay.rate)
output <- list(Score = 2 * l + dg * mul, Optimal.degree = dg, Optimal.ability = BT$ability,
Optimal.lambda = m1, Optimal.penalty = BT$penalty, type = type, decay.rate = decay.rate)
}
} else {
stop("Please provide a selection type HYBRID or LASSO")
}
class(output) <- "BTC"
output
}
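## A minimal usage sketch for BTdecayLassoC() (kept as comments so nothing is
## executed when the package is loaded). It assumes, as the documentation above
## suggests, that BTdataframe() applied to the bundled NFL2010 data returns a
## list with elements `dataframe`, `ability` and `worstTeam`; the decay.rate
## value is purely illustrative.
##
## df0 <- BTdataframe(NFL2010)
## sel <- BTdecayLassoC(df0$dataframe, df0$ability, criteria = "AIC", type = "HYBRID",
##                      decay.rate = 0.005, fixed = df0$worstTeam)
## summary(sel)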
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTdecayLassoC.R
|
#' Bradley-Terry Model with Exponential Decayed weighted likelihood and Adaptive Lasso with a given penalty rate
#'
#' This function provides a method to compute the estimated abilities and lambda given an intuitive fixed Lasso penalty rate.
#' Since the selection of lambda in the Lasso method varies considerably across datasets, fixing the Lasso penalty rate "penalty"
#' keeps the amount of induced Lasso penalty consistent across datasets from different periods.
#' Please refer to \code{\link{BTdecayLasso}} for the definition of "penalty" and its relationship with "lambda".
#'
#'
#' @param dataframe Generated using \code{\link{BTdataframe}} given raw data.
#' @param ability A column vector of teams' abilities; the last row is the home parameter.
#' The row number is consistent with the team's index shown in dataframe. It can be generated using \code{\link{BTdataframe}} given raw data.
#' @param penalty The amount of Lasso penalty induced, defined as 1 - s/max(s), where s is the sum of the Lasso penalty part.
#' @param decay.rate The exponential decay rate, usually ranging over (0, 0.01). A larger decay rate gives more weight
#' to recent matches, so the estimated parameters reflect recent behaviour more strongly.
#' @param fixed The index of the team whose ability is fixed at 0. The worst team's index
#' can be generated using \code{\link{BTdataframe}} given raw data.
#' @param thersh Threshold for convergence
#' @param max Maximum weight for \eqn{w_{ij}} (weight used for Adaptive Lasso)
#' @param iter Number of iterations used in L-BFGS-B algorithm.
#' @details
#' The ability is estimated for a fixed penalty \eqn{p = 1 - s/\max(s)}, where s is the sum of the Lasso penalty part.
#' When p = 0, the model reduces to a standard Bradley-Terry model.
#' When p = 1, all ability scores shrink to 0.
#'
#' The parameter "penalty" should range from 0.01 to 0.99 due to the iteration's convergence error.
#'
#' The summary() function can be applied to view the outputs.
#' @return The list with class "BTF" contains estimated abilities and other parameters.
#' \item{ability}{Estimated ability scores}
#' \item{df}{Degree of freedom (number of distinct \eqn{\mu})}
#' \item{penalty}{Amount of Lasso Penalty}
#' \item{decay.rate}{Exponential decay rate}
#' \item{lambda}{Corresponding Lasso lambda given penalty rate}
#' @seealso \code{\link{BTdataframe}} for dataframe initialization, \code{\link{BTdecayLasso}} for detailed description
#' @references
#' Masarotto, G. and Varin, C. (2012) The Ranking Lasso and its Application to Sport Tournaments.
#' *The Annals of Applied Statistics* **6** 1949--1970.
#'
#' Zou, H. (2006) The adaptive lasso and its oracle properties.
#' *J. Amer. Statist. Assoc.* **101** 1418--1429.
#'
#' @export
BTdecayLassoF <- function(dataframe, ability, penalty, decay.rate = 0, fixed = 1, thersh = 1e-5, max = 100, iter = 100) {
if (penalty > 1 || penalty < 0) {
stop("Please provide a penalty ranging from 0 to 1")
}
df <- dataframe
n <- nrow(ability) - 1
df[, 5] <- df[, 5] - df[1, 5]
p <- 1 - penalty
if(!(fixed %in% seq(1, n, 1))){
stop("The fixed team's index must be an integer index of one of all teams")
}
if (p > 0.99) {
BT <- BTdecay(df, ability, decay.rate = decay.rate, fixed = fixed, iter = iter)
ability <- BT$ability
s <- list(ability = round(ability, -log10(thersh)), df = n, penalty = 0, decay.rate = decay.rate, lambda = 0)
} else {
if (p < 0.01) {
s <- list(ability = ability, df = 1, penalty = 1, decay.rate = decay.rate, lambda = Inf)
} else {
weight <- BTLasso.weight(df, ability, decay.rate = decay.rate, fixed = fixed, thersh = thersh, max = max, iter = iter)
BT1 <- BTdecayLasso.step2(df, ability, 0, weight, decay.rate = decay.rate, fixed = fixed, thersh = thersh, iter = iter)
ability <- BT1$ability
k0 <- penaltyAmount(ability, weight)
BT1 <- BTdecayLasso.step2(df, ability, 0.1, weight, decay.rate = decay.rate, fixed = fixed, thersh = thersh, iter = iter)
ability <- BT1$ability
k1 <- penaltyAmount(ability, weight)
a <- -log(k1/k0)/0.1
x0 <- 0.1
m0 <- -log(k1/k0)
x1 <- -log(p)/a
mo <- -log(p)
stop <- 0
while (stop == 0) {
BT1 <- BTdecayLasso.step2(df, ability, x1, weight, decay.rate = decay.rate, fixed = fixed, thersh = thersh, iter = iter)
ability <- BT1$ability
k2 <- penaltyAmount(ability, weight)
m1 <- -log(k2/k0)
x2 <- (x0 - x1)/(m0 - m1)*(mo - m0) + x0
if(abs(k2/k0 - p) < thersh/10){
stop <- 1
} else if (((k2/k0) < thersh/10) && (x2 > max(x1, x0))) {
x0 <- max(x1, x0)
x1 <- x2
} else if (x2 < 0) {
x1 <- 0.9 * x1
} else {
x <- x1
x1 <- x2
x0 <- x
m0 <- m1
}
}
s <- list(ability = round(ability, -log10(thersh)), df = BT1$df, penalty = penalty, decay.rate = decay.rate, lambda = x1)
}
}
class(s) <- "BTF"
s
}
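## A minimal usage sketch for BTdecayLassoF() (kept as comments so nothing is
## executed when the package is loaded). As above, the BTdataframe() element
## names `dataframe`, `ability` and `worstTeam` are assumed, and the parameter
## values are illustrative.
##
## df0 <- BTdataframe(NFL2010)
## ## Ask for 50% of the maximal Lasso penalty; the matching lambda is returned.
## fit <- BTdecayLassoF(df0$dataframe, df0$ability, penalty = 0.5,
##                      decay.rate = 0.005, fixed = df0$worstTeam)
## summary(fit)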
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTdecayLassoF.R
|
BTtheta <- function(ability, weight, Lagrangian, penalty.Qua, lambda){
n <- nrow(ability) - 1
theta <- matrix(0, nrow = n, ncol = n)
v <- penalty.Qua
for(i in 1:(n - 1)){
for(j in (i + 1):n){
theta0 <- ability[i, 1] - ability[j, 1] - Lagrangian[i, j]/v
theta[i, j] <- sign(theta0) * max(abs(theta0) - lambda * weight[i, j]/v, 0)
}
}
return(theta)
}
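# Illustration of the soft-thresholding update performed by BTtheta() above.
# The helper below is a self-contained sketch added for exposition only; it is
# not part of the package API and uses made-up inputs.
BTthetaDemo <- function() {
  # Three teams plus the home-advantage row, as BTtheta() expects
  ability <- matrix(c(0.40, 0.35, -0.75, 0.10), ncol = 1)
  weight <- matrix(1, nrow = 3, ncol = 3)      # unit adaptive-Lasso weights
  Lagrangian <- matrix(0, nrow = 3, ncol = 3)  # no dual correction yet
  # With penalty.Qua = 1 and lambda = 0.1, the pairwise gap between teams 1 and 2
  # (0.05) is shrunk exactly to zero, while the larger gaps each lose 0.1.
  BTtheta(ability, weight, Lagrangian, penalty.Qua = 1, lambda = 0.1)
}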
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/BTtheta.R
|
#' The 2010 NFL Regular Season
#'
#' A dataframe containing all match results with 5 columns
#'
#' @name NFL2010
#' @docType data
#' @format A dataframe containing all match results with 5 columns
#' \describe{
#' \item{home.team}{Team who plays at home}
#' \item{away.team}{Team who plays away}
#' \item{home.win}{Take "1" if home team wins}
#' \item{away.win}{Take "1" if away team wins}
#' \item{date}{Number of days until now}
#' }
#' @keywords datasets
"NFL2010"
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/NFL2010.R
|
#' Compute the standard deviation of Bradley-Terry decay Lasso model by bootstrapping
#'
#' Bootstrapping is done assuming that the maximum likelihood estimates reflect the true abilities.
#' The same level of Lasso penalty "lambda" should be applied across the different simulated models for the Lasso-induced estimation.
#'
#' @param dataframe Generated using \code{\link{BTdataframe}} given raw data.
#' @param ability A column vector of teams' abilities; the last row is the home parameter.
#' The row number is consistent with the team's index shown in dataframe. It can be generated using \code{\link{BTdataframe}} given raw data.
#' @param lambda The amount of Lasso penalty induced; only a single scalar is accepted in bootstrapping.
#' @param boot Number of bootstrap simulations.
#' @param weight Weight for Lasso penalty on different abilities.
#' @param decay.rate The exponential decay rate, usually ranging over (0, 0.01). A larger decay rate gives more weight
#' to recent matches, so the estimated parameters reflect recent behaviour more strongly.
#' @param fixed The index of the team whose ability is fixed at 0. The worst team's index
#' can be generated using \code{\link{BTdataframe}} given raw data.
#' @param thersh Threshold for convergence
#' @param max Maximum weight for \eqn{w_{ij}} (weight used for Adaptive Lasso).
#' @param iter Number of iterations used in L-BFGS-B algorithm.
#' @details By default, 100 simulations are run; users can adjust the number of simulations through the boot argument. However, the bootstrapping process
#' is time consuming, and usually 1000 simulations are enough to provide a stable result.
#'
#' More detailed descriptions of "lambda", "penalty" and "weight" are documented in \code{\link{BTdecayLasso}}.
#'
#' The S3 summary() method can be applied to view the outputs.
#' @return A list with class "boot" containing the Lasso and HYBRID Lasso bootstrap means and standard deviations.
#' \item{Lasso}{Lasso bootstrap results: a three-column matrix whose first column is the original estimate, second column the bootstrap mean and last column the
#' bootstrap standard deviation}
#' \item{HYBRID.Lasso}{HYBRID Lasso bootstrap results: a three-column matrix whose first column is the original estimate, second column the bootstrap mean and last column the
#' bootstrap standard deviation}
#' @seealso \code{\link{BTdataframe}} for dataframe initialization, \code{\link{BTdecayLasso}} for detailed description
#' @references
#' Masarotto, G. and Varin, C. (2012) The Ranking Lasso and its Application to Sport Tournaments.
#' *The Annals of Applied Statistics* **6** 1949--1970.
#'
#' Zou, H. (2006) The adaptive lasso and its oracle properties.
#' *J. Amer. Statist. Assoc.* **101** 1418--1429.
#'
#' @export
#' @import stats
boot.BTdecayLasso <- function(dataframe, ability, lambda, boot = 100, weight = NULL, decay.rate = 0, fixed = 1,
thersh = 1e-5, max = 100, iter = 100) {
boot <- round(boot)
if (boot < 2) {
stop("Boot should be an integer greater than 1")
}
BT <- BTdecay(dataframe, ability, decay.rate = decay.rate, fixed = fixed, iter = iter)
Tability <- BT$ability
Bability <- Tability[, -1]
Hability <- Tability[, -1]
n1 <- nrow(dataframe)
n <- nrow(ability) -1
if(!(fixed %in% seq(1, n, 1))){
stop("The fixed team's index must be an integer index of one of all teams")
}
y1 <- sapply(dataframe[, 1], function(x) Tability[x, 1])
y2 <- sapply(dataframe[, 2], function(x) Tability[x, 1])
t <- exp(Tability[n + 1] + y1 - y2)
y <- t/(1 + t)
dataframe1 <- dataframe
for (i in 1:boot) {
stop <- 0
while (stop == 0) {
r <- stats::runif(n1)
dataframe1[, 3] <- as.numeric(r < y)
dataframe1[, 4] <- 1 - dataframe1[, 3]
counts <- matrix(0, nrow = n, ncol = 2)
for (m in 1:n1) {
a1 <- dataframe1[m, 1]
a2 <- dataframe1[m, 2]
counts[a1, 1] <- counts[a1, 1] + dataframe1[m, 3]
counts[a2, 1] <- counts[a2, 1] + dataframe1[m, 4]
counts[a1, 2] <- counts[a1, 2] + dataframe1[m, 3] + dataframe1[m, 4]
counts[a2, 2] <- counts[a2, 2] + dataframe1[m, 3] + dataframe1[m, 4]
}
win <- c()
loss <- c()
for (m in 1:n) {
if (counts[m, 1] == counts[m, 2]) {
win <- c(win, m)
} else if (counts[m, 1] == 0) {
loss <- c(loss, m)
}
}
if (is.null(win) && is.null(loss)) {
stop <- 1
}
}
BTL <- BTdecayLasso(dataframe1, ability, lambda = lambda, decay.rate = decay.rate, path = FALSE, fixed = fixed,
thersh = thersh, max = max, iter = iter)
Bability <- cbind(Bability, BTL$ability)
Hability <- cbind(Hability, BTL$HYBRID.ability)
}
Bmean <- matrix(apply(Bability, 1, mean))
Bsd <- matrix(apply(Bability, 1, stats::sd))
Hmean <- matrix(apply(Hability, 1, mean))
Hsd <- matrix(apply(Hability, 1, stats::sd))
BTL <- BTdecayLasso(dataframe, ability, lambda = lambda, decay.rate = decay.rate, path = FALSE, fixed = fixed,
thersh = thersh, max = max, iter = iter)
Bori <- BTL$ability
Hori <- BTL$HYBRID.ability
Bout <- cbind(Bori, Bmean, Bsd)
colnames(Bout) <- c("Original", "Est.Mean", "Est.std")
Hout <- cbind(Hori, Hmean, Hsd)
colnames(Hout) <- c("Original", "Est.Mean", "Est.std")
output <- list(penalty = BTL$penalty, Lasso = Bout, HYBRID.Lasso = Hout, decay.rate = decay.rate)
class(output) <- "boot"
output
}
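## A minimal usage sketch for boot.BTdecayLasso() (kept as comments so nothing
## is executed when the package is loaded; bootstrapping is slow). The
## BTdataframe() element names `dataframe`, `ability` and `worstTeam` are
## assumed, as above, and lambda = 0.1 is an arbitrary illustrative value.
##
## df0 <- BTdataframe(NFL2010)
## bt <- boot.BTdecayLasso(df0$dataframe, df0$ability, lambda = 0.1, boot = 10,
##                         decay.rate = 0.005, fixed = df0$worstTeam)
## summary(bt)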
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/bootBTdecayLasso.R
|
penaltyAmount <- function(ability, weight) {
n <- nrow(ability) - 1
s <- 0
for(i in 1:(n - 1)){
for(j in (i + 1):n){
s <- s + abs(ability[i, 1] - ability[j, 1])*weight[i, j]
}
}
s
}
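# Self-contained illustration of penaltyAmount() above (exposition only, not
# part of the package API): the returned value is the weighted sum of absolute
# pairwise ability differences, i.e. the L1 term that the Lasso penalises.
penaltyAmountDemo <- function() {
  ability <- matrix(c(0.4, 0.1, -0.5, 0.2), ncol = 1)  # 3 teams + home effect
  weight <- matrix(1, nrow = 3, ncol = 3)
  # Equals |0.4 - 0.1| + |0.4 - (-0.5)| + |0.1 - (-0.5)| = 1.8 with unit weights
  penaltyAmount(ability, weight)
}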
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/penaltyAmount.R
|
#' Plot the Lasso path
#'
#' @description Plot the whole lasso path run by BTdecayLasso() with given lambda and path = TRUE
#' @usage
#' ##S3 method for class "swlasso"
#' @param x Object with class "swlasso"
#' @param ... Further arguments passed to or from other methods
#' @export
#' @import ggplot2
plot.swlasso <- function(x, ...) {
n <- nrow(x$ability.path) - 1
df1 <- data.frame(ability = x$ability.path[1:n, 1], team = seq(1, n, 1), penalty = x$penalty.path[1])
for (i in 1:(length(x$likelihood.path) - 1)) {
df1 <- rbind(df1, data.frame(ability = x$ability.path[1:n, (i + 1)], team = seq(1, n, 1), penalty = x$penalty.path[i + 1]))
}
penalty <- ability <- team <- NULL
ggplot2::ggplot(df1, aes(x = penalty, y = ability, color = team)) + geom_line(aes(group = team))
}
#' Plot the Lasso path
#'
#' @description Plot the whole lasso path run by BTdecayLasso() with lambda = NULL and path = TRUE
#' @usage
#' ##S3 method for class "wlasso"
#' @param x Object with class "wlasso"
#' @param ... Further arguments passed to or from other methods
#' @export
#' @import ggplot2
plot.wlasso <- function(x, ...) {
n <- nrow(x$ability.path) - 1
df1 <- data.frame(ability = x$ability.path[1:n, 1], team = seq(1, n, 1), penalty = x$penalty.path[1])
for (i in 1:(length(x$likelihood.path) - 1)) {
df1 <- rbind(df1, data.frame(ability = x$ability.path[1:n, (i + 1)], team = seq(1, n, 1), penalty = x$penalty.path[i + 1]))
}
penalty <- ability <- team <- NULL
ggplot2::ggplot(df1, aes(x = penalty, y = ability, color = team)) + geom_line(aes(group = team))
}
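## A minimal plotting sketch (kept as comments so nothing is executed when the
## package is loaded). It assumes a full Lasso path computed with
## BTdecayLasso(..., path = TRUE) and the BTdataframe() element names used above.
##
## df0 <- BTdataframe(NFL2010)
## path.fit <- BTdecayLasso(df0$dataframe, df0$ability, lambda = NULL, path = TRUE,
##                          decay.rate = 0.005, fixed = df0$worstTeam)
## plot(path.fit)  # dispatches to plot.wlasso(): ability paths against the penalty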
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/plot.R
|
#' @export
summary.slasso <- function(object, ...) {
y <- object$ability
z <- object$HYBRID.ability
for (i in 1:ncol(y)) {
y[, i] <- round(y[, i], 3)
z[, i] <- round(z[, i], 3)
}
cat("penalty:\n\n")
cat(object$penalty)
cat("\n\ndecay.rate:\n\n")
cat(object$decay.rate)
cat("\n\nLasso Estimates:\n\n")
print(y)
cat("\nHYBRID Lasso Estimates:\n\n")
print(z)
}
#' @export
summary.swlasso <- function(object, ...) {
y <- object$ability
z <- object$HYBRID.ability
for (i in 1:ncol(y)) {
y[, i] <- round(y[, i], 3)
z[, i] <- round(z[, i], 3)
}
cat("penalty:\n\n")
cat(object$penalty)
cat("\n\ndecay.rate:\n\n")
cat(object$decay.rate)
cat("\n\nLasso Estimates:\n\n")
print(y)
cat("\n")
cat("HYBRID Lasso Estimates:\n\n")
print(z)
}
#' @export
summary.wlasso <- function(object, ...) {
cat("Degree Path:\n\n")
y <- data.frame(Run = seq(1, length(object$df.path), 1), Degree = object$df.path)
print(y)
}
#' @export
summary.BTC <- function(object, ...) {
y <- object$Optimal.ability
for (i in 1:ncol(y)) {
y[, i] <- round(y[, i], 3)
}
cat("type:\n\n")
cat(object$type)
cat("\n\npenalty:\n\n")
cat(object$Optimal.penalty)
cat("\n\ndegree:\n\n")
cat(object$Optimal.degree)
cat("\n\ndecay.rate:\n\n")
cat(object$decay.rate)
cat("\n\nEstimates:\n\n")
print(y)
}
#' @export
summary.boot <- function(object, ...) {
y <- object$Lasso
z <- object$HYBRID.Lasso
for (i in 1:ncol(y)) {
y[, i] <- round(y[, i], 3)
z[, i] <- round(z[, i], 3)
}
cat("penalty:\n\n")
cat(object$penalty)
cat("\n\ndecay.rate:\n\n")
cat(object$decay.rate)
cat("\n\nLasso Estimates:\n\n")
print(y)
cat("\n")
cat("HYBRID Lasso Estimates:\n\n")
print(z)
}
#' @export
summary.BT <- function(object, ...) {
y <- object$ability
for (i in 1:ncol(y)) {
y[, i] <- round(y[, i], 3)
}
cat("decay.rate:\n\n")
cat(object$decay.rate)
cat("\n\nBradley-Terry Estimates:\n\n")
print(y)
}
#' @export
summary.BTF <- function(object, ...) {
y <- object$ability
for (i in 1:ncol(y)) {
y[, i] <- round(y[, i], 3)
}
cat("penalty:\n\n")
cat(object$penalty)
cat("\n\ndecay.rate:\n\n")
cat(object$decay.rate)
cat("\n\nBradley-Terry Lasso Estimates:\n\n")
print(y)
cat("\nlambda:\n\n")
cat(object$lambda)
}
|
/scratch/gouwar.j/cran-all/cranData/BTdecayLasso/R/summary.R
|
## usethis namespace: start
#' @importFrom Rcpp sourceCpp
## usethis namespace: end
NULL
## usethis namespace: start
#' @useDynLib BTtest, .registration = TRUE
## usethis namespace: end
NULL
|
/scratch/gouwar.j/cran-all/cranData/BTtest/R/BTtest-package.R
|
#' Barigozzi & Trapani (2022) Test
#'
#' @description Runs the testing routine proposed in Barigozzi & Trapani (2022) to estimate the number and types of common trends in a nonstationary panel.
#' The method can identify the existence of a common factor subject to a linear trend, as well as the number of zero-mean \eqn{I(1)} and zero-mean \eqn{I(0)} factors.
#'
#' @param X a \eqn{T \times N} matrix of observations.
#' @param r_max the maximum number of factors to consider. Default is 10.
#' @param alpha the significance level. Default is 0.05.
#' @param BT1 logical. If \code{TRUE}, a less conservative eigenvalue rescaling scheme is used. In small samples, \code{BT1 = FALSE} will result in fewer estimated factors. Default is \code{TRUE}.
#'
#' @details For details on the testing procedure I refer to Barigozzi & Trapani (2022, sec. 4).
#'
#' @examples
#' # Simulate a nonstationary panel
#' X <- sim_DGP(N = 100, n_Periods = 200)
#'
#' # Obtain the estimated number of factors (i) with a linear trend (r_1), (ii) zero-mean I(1) (r_2)
#' # and (iii) zero-mean I(0) (r_3)
#' BTtest(X = X, r_max = 10, alpha = 0.05, BT1 = TRUE)
#' @references Barigozzi, M., & Trapani, L. (2022). Testing for common trends in nonstationary large datasets. *Journal of Business & Economic Statistics*, 40(3), 1107-1122. \doi{10.1080/07350015.2021.1901719}
#'
#' @author Paul Haimerl
#'
#' @return A vector with the estimated number of (i) factors with a linear trend (\eqn{r_1}), (ii) zero-mean \eqn{I(1)} factors (\eqn{r_2}) and (iii) zero-mean \eqn{I(0)} factors (\eqn{r_3}).
#'
#' @export
BTtest <- function(X, r_max = 10, alpha = 0.05, BT1 = TRUE){
BTtestRoutine(X = X, r_max = r_max, alpha = alpha, BT1 = BT1)
}
#' Bai (2004) IPC
#'
#' @description Calculates the Integrated Panel Criteria (\emph{IPC}) to estimate the total number of common trends in a nonstationary panel as proposed by Bai (2004).
#'
#' @param X a \eqn{T \times N} matrix of observations.
#' @param r_max the maximum number of factors to consider. Default is 10.
#'
#' @details For further details on the three criteria and their respective differences, I refer to Bai (2004, sec. 3).
#'
#' @examples
#' # Simulate a nonstationary panel
#' X <- sim_DGP(N = 100, n_Periods = 200)
#'
#' # Obtain the estimated number of common factors per criterion
#' BaiIPC(X = X, r_max = 10)
#' @references Bai, J. (2004). Estimating cross-section common stochastic trends in nonstationary panel data. *Journal of Econometrics*, 122(1), 137-183. \doi{10.1016/j.jeconom.2003.10.022}
#'
#' @author Paul Haimerl
#'
#' @return A vector of the estimated number of factors for each of the three criteria.
#'
#' @export
BaiIPC <- function(X, r_max = 10){
BaiIPCRoutine(X = X, r_max = r_max)
}
|
/scratch/gouwar.j/cran-all/cranData/BTtest/R/BTtest.R
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
BTtestRoutine <- function(X, r_max, alpha, BT1) {
.Call(`_BTtest_BTtestRoutine`, X, r_max, alpha, BT1)
}
BaiIPCRoutine <- function(X, r_max) {
.Call(`_BTtest_BaiIPCRoutine`, X, r_max)
}
|
/scratch/gouwar.j/cran-all/cranData/BTtest/R/RcppExports.R
|
#' Simulate a Nonstationary Panel With Common Trends
#'
#' @description Simulate a nonstationary panel as laid out in Barigozzi & Trapani (2022, sec. 5).
#'
#' @param N the number of cross-sectional units.
#' @param n_Periods the number of simulated time periods.
#' @param drift logical. If \code{TRUE}, a linear trend is included (corresponding to both \emph{d_1 = 1} and \emph{r_1 = 1}).
#' @param drift_I1 logical. If \code{TRUE}, an \emph{I(1)} factor moves around the linear trend (corresponding to \emph{d_2 = 1}); otherwise an \emph{I(0)} factor does.
#' @param r_I1 the total number of non zero-mean \emph{I(1)} factors (corresponding to \emph{r_2 + r_1 * d_2}).
#' @param r_I0 the total number of non zero-mean \emph{I(0)} factors (corresponding to \emph{r_3 + r_1 * (1 - d_2)}).
#' @param return_factor logical. If \code{TRUE}, the factor matrix is returned. Else the simulated observations. Default is \code{FALSE}.
#'
#' @details For further details on the construction of the DGP, see Barigozzi & Trapani (2022, sec. 5).
#'
#' @examples
#' # Simulate a panel containing a factor with a linear drift (r_1 = d_1 = 1) and I(1) process
#' # (d_2 = 1), one zero-mean I(1) factor (r_2 = 1) and two zero-mean I(0) factors (r_3 = 2)
#' X <- sim_DGP(N = 100, n_Periods = 200, drift = TRUE, drift_I1 = TRUE, r_I1 = 2, r_I0 = 2)
#'
#' # Simulate a panel containing only 3 common zero-mean I(0) factors (r_1 = 0, r_2 = 0, r_3 = 3)
#' X <- sim_DGP(N = 100, n_Periods = 200, drift = FALSE, drift_I1 = TRUE, r_I1 = 0, r_I0 = 3)
#' @references Barigozzi, M., & Trapani, L. (2022). Testing for common trends in nonstationary large datasets. *Journal of Business & Economic Statistics*, 40(3), 1107-1122. \doi{10.1080/07350015.2021.1901719}
#'
#' @author Paul Haimerl
#'
#' @return A (\emph{T x N}) matrix of simulated observations. If \code{return_factor = TRUE}, a (\emph{T x r}) matrix of factors.
#'
#' @export
sim_DGP <- function(N = 100, n_Periods = 200, drift = TRUE, drift_I1 = TRUE, r_I1 = 2, r_I0 = 1, return_factor = FALSE) {
#------------------------------#
#### Preliminaries ####
#------------------------------#
Tt <- n_Periods
# For consistency with Barigozzi and Trapani (2022)
d_1 <- r_1 <- as.numeric(drift)
# Note that (r_1 == 1, d_1 == 0, d_2 == 1, r_2 == x) is observationally identical with (r_1 == 0, r_2 == x + 1) (same with d_2 == 0 and r_3)
# As a consequence, I normalize r_1 == d_1 to save one extra (unnecessary) argument for the function
d_2 <- as.numeric(drift_I1)
if (drift_I1 & drift & r_I1 == 0) stop("Either set r_I1 > 0 or drift_I1 == FALSE\n")
if (!drift_I1 & drift & r_I0 == 0) stop("Either set r_I0 > 0 or drift_I1 == TRUE\n")
r_2 <- r_I1 - r_1 * d_2
r_3 <- r_I0 - r_1 * (1 - d_2)
r <- r_1 + r_2 + r_3
if (r > 0) {
#------------------------------#
#### Factor loadings ####
#------------------------------#
Lambda <- simLambda(r = r, N = N)
#------------------------------#
#### Factors ####
#------------------------------#
# Generate the I(1) factors according to random walks with serially correlated errors
F2_mat <- sapply(1:r_2, function(x, Tt) simRW(Tt), Tt = Tt)
if (r_2 == 0) {
F2_mat <- NULL
F2_indx <- NULL
} else {
F2_indx <- (1 + r_1):(r_1 + r_2)
}
# Generate the I(0) factors according to some stationary ARMA process with unit variances
I0_fmat <- sapply(1:r_I0, function(x, Tt) simARMA(Tt), Tt = Tt)
if (r_I0 == 0) {
I0_fmat <- NULL
F3_indx <- NULL
} else {
F3_indx <- (1 + r_I1):(r_I1 + r_I0)
}
# Generate a linear trend with slope 1
if (drift) {
trend <- 1:Tt
# Attach the trend to one of the factors and order the factor matrix
if (drift_I1) {
# In case of d_2 == 1, draw a RW without serially correlated errors
f_1 <- simRW(Tt, rholimits = c(0, 0))
F_1 <- f_1 + trend
} else {
F_1 <- I0_fmat[, 1] + trend
I0_fmat <- as.matrix(I0_fmat[, -1])
F3_indx <- F3_indx[-1]
if (length(F3_indx) == 0) F3_indx <- NULL
}
}
# Rescale and balance the factors
# If present, use F1 as the baseline, else the first F_2 factor
if (drift) {
norm <- sum((as.matrix(diff(F_1)) %*% t(Lambda[, 1]))^2)
# Rescale F_2
F_2 <- rescaleFactors(Fmat = F2_mat, Lambda = Lambda, norm = norm, indx = F2_indx, fd = TRUE)
# Rescale F_3
F_3 <- rescaleFactors(Fmat = I0_fmat, Lambda = Lambda, norm = norm, indx = F3_indx, fd = FALSE)
} else if (!is.null(F2_indx)) {
F_1 <- NULL
F_2 <- F2_mat
norm <- sum((as.matrix(diff(F2_mat)) %*% t(Lambda[, F2_indx]))^2)
# Rescale F_3
F_3 <- rescaleFactors(Fmat = I0_fmat, Lambda = Lambda, norm = norm, indx = F3_indx, fd = FALSE)
} else {
F_1 <- NULL
F_2 <- NULL
F_3 <- I0_fmat
}
Fmat <- cbind(F_1, F_2, F_3)
} else {
Fmat <- NULL
Lambda <- NULL
}
if (return_factor) {
return(Fmat)
}
#------------------------------#
#### Cross-sectional errors ####
#------------------------------#
U <- simCrossSecErr(Tt = Tt, N = N)
# Generate the signal to noise ratio
theta <- setsignal2noise(U = U, Lambda = Lambda, Fmat = Fmat, drift = drift, F2_indx = F2_indx, F3_indx = F3_indx)
# Compute the final observation
if (!is.null(Fmat)) {
X <- Fmat %*% t(Lambda) + sqrt(theta) * U
} else {
X <- sqrt(theta) * U
}
return(X)
}
# Calculate signal to noise ratio
#
# @param U (T x N) matrix holding cross-sectional errors
# @param Lambda (N x r) matrix holding loadings
# @param Fmat (T x r) matrix holding the factors
# @param drift logical. If TRUE, a linear trend is included (corresponds to both d_1 and r_1)
# @param F2_indx vector indicating the columns of f2 factors
# @param F3_indx vector indicating the columns of f3 factors
#
# @return scaling factor governing the signal to noise ratio
setsignal2noise <- function(U, Lambda, Fmat, drift, F2_indx, F3_indx) {
if (is.null(Fmat)) {
return(1)
}
denom <- sum(diff(U)^2)
if (drift) {
F1_indx <- 1
} else {
F1_indx <- NULL
}
DeltaF <- diff(Fmat)
F1_sum <- as.matrix(Lambda[, F1_indx]) %*% t(DeltaF[, F1_indx])
F2_sum <- as.matrix(Lambda[, F2_indx]) %*% t(DeltaF[, F2_indx])
F3_sum <- as.matrix(Lambda[, F3_indx]) %*% t(DeltaF[, F3_indx])
num <- sum((F1_sum + F2_sum + F3_sum)^2)
theta <- .5 * num / denom
return(theta)
}
# Calibrates the factors among each other
#
# @param Fmat (T x r_i) matrix holding the factors for i = 1, 2, 3
# @param Lambda (N x r) matrix holding loadings
# @param norm target sum of squares
# @param indx vector indicating the columns of relevant factors
# @param fd logical. If TRUE, the first differences of factors are used
#
# @return (T x r_i) matrix with calibrated factors
rescaleFactors <- function(Fmat, Lambda, norm, indx, fd) {
if (is.null(dim(Fmat))) {
return(NULL)
}
if (fd) {
factorMat_fd <- diff(Fmat)
} else {
factorMat_fd <- Fmat
}
ssq <- sum((as.matrix(factorMat_fd) %*% t(Lambda[, indx]))^2)
scaling <- norm / ssq
Fmat <- Fmat * sqrt(scaling)
return(Fmat)
}
# Draws cross-sectional errors according to Eq. 34
#
# @param N number of cross-sectional units
# @param Tt number of simulated time periods
# @param a parameter governing the serial correlation
# @param b parameter governing the cross-sectional correlation
# @param C parameter governing the extent of the cross-sectional correlation
#
# @return (T x N) matrix of cross-sectional errors
simCrossSecErr <- function(Tt, N, a = .5, b = .5, C = min(floor(N / 20), 10)) {
V <- matrix(stats::rnorm(Tt * N), nrow = Tt)
# Specify the cross-sectional correlation
k_seq <- 1:C
# Here we deviate from Eq. 34 by letting the column index wrap around after reaching N
V <- V + b * cbind(V[, -k_seq], V[, k_seq])
# Specify the serial autocorrelation
U <- apply(V, 2, function(v) {
stats::filter(c(0, v), filter = a, method = "recursive")[-1]
})
return(U)
}
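# Quick self-contained check of simCrossSecErr() above (exposition only, not
# part of the package API): each column follows an AR(1) with coefficient `a`,
# so the lag-one autocorrelation of any column should be close to the default a = 0.5.
simCrossSecErrDemo <- function() {
  U <- simCrossSecErr(Tt = 500, N = 40)
  list(dim = dim(U), lag1.acf = cor(U[-1, 1], U[-nrow(U), 1]))
}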
# Draws factor loadings in compliance with Ass. 4
#
# @param r number of factors to simulate
# @param N number of cross-sectional units
#
# @return (N x r) matrix of loadings
simLambda <- function(r, N) {
A <- matrix(stats::rnorm(r * N), ncol = r)
# Perform QR decomposition to obtain orthonormal matrix
QR <- qr(A)
Qmat <- qr.Q(QR)
# Rescale to a standard normal distribution
Lambda <- (Qmat - mean(Qmat)) / stats::sd(Qmat)
return(Lambda)
}
# Simulates zero-mean I(1) factors according to Eq. 32
#
# @param Tt number of simulated time periods
# @param nBurnin number of burn-in periods for the process
# @param rholimits vector holding upper and lower bounds for the serial correlation parameter
# @param sd standard deviation of the process innovations
# @return vector with the simulated factor
simRW <- function(Tt, nBurnin = 1, rholimits = c(.4, .8), sd = 1) {
nBurnin <- max(1, nBurnin)
Tt_tot <- Tt + nBurnin
# Draw the serial correlation
rho <- stats::runif(1, rholimits[1], rholimits[2])
# Draw the innovations
e_tilde <- stats::rnorm(Tt_tot, sd = sd)
e <- stats::filter(c(0, e_tilde), filter = rho, method = "recursive")[-1]
# Construct the RW
RW <- cumsum(e)
return(RW[-(1:nBurnin)])
}
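# Quick self-contained check of simRW() above (exposition only, not part of the
# package API): the simulated factor is a random walk with AR(1) innovations,
# so its levels are far more dispersed than its first differences.
simRWDemo <- function(Tt = 200) {
  f <- simRW(Tt)
  c(length = length(f), var.levels = var(f), var.diffs = var(diff(f)))
}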
# Simulates zero-mean \emph{I(0)} factors according to Eq. 33 with the extension to arbitrary stationary ARMA processes
#
# @param Tt number of simulated time periods
# @param pqmax vector holding upper bounds for the AR and MA lag order
# @param sd standard deviation of the process innovations
# @param coef_limits vector holding upper and lower bounds for the coefficients
#
# @return vector with the simulated factor
simARMA <- function(Tt, pqmax = c(1, 0), sd = 1, coef_limits = c(-.5, .5)) {
# Pick the lag orders, guarding against the 1:0 pitfall when a maximal order of 0 is supplied
pq <- c(if (pqmax[1] > 0) sample(seq_len(pqmax[1]), 1) else 0, if (pqmax[2] > 0) sample(seq_len(pqmax[2]), 1) else 0)
# Draw the ar and ma polynomials
coef <- simARMACoef(pq = pq, limits = coef_limits)
# Construct the ARMA process
arma <- as.numeric(stats::arima.sim(list(order = c(pq[1], 0, pq[2]), ar = coef$ar, ma = coef$ma), sd = sd, n = Tt))
return(arma)
}
# Simulates the AR and MA coefficients
#
# @param pq vector holding the lag orders
# @param limits vector holding upper and lower bounds for the coefficients
#
# @return vector with stationary simulated coefficients
simARMACoef <- function(pq, limits) {
if (is.null(limits)) limits <- c(-1, 1)
# AR
if (pq[1] == 0) {
ar <- NULL
} else {
while (TRUE) {
ar <- stats::runif(pq[1], limits[1], limits[2])
minroots <- min(Mod(polyroot(c(1, -ar))))
if (minroots > 1) break
}
if (!is.null(ar)) names(ar) <- paste0("ar", 1:pq[1])
}
# MA
if (pq[2] == 0) {
ma <- NULL
} else {
ma <- stats::runif(pq[2], -1, 1)
}
if (!is.null(ma)) names(ma) <- paste0("ma", 1:pq[2])
return(list(ar = ar, ma = ma))
}
|
/scratch/gouwar.j/cran-all/cranData/BTtest/R/Sim.R
|
#' Bias and Uncertainty Corrected Sample Size (BUCSS)
#'
#' Bias- and Uncertainty-Corrected Sample Size. BUCSS implements a method of correcting for publication bias and uncertainty when planning sample sizes in a future study from an original study.
#'
#' Note that \url{https://designingexperiments.com} uses Shiny R apps that implement, via a web interface, the functions contained in BUCSS.
#'
#' @references Anderson, S. & Kelley, K., Maxwell, S. E. (2017). Sample size planning for more accurate statistical power: A method correcting sample effect sizes for uncertainty and publication bias. \emph{Psychological Science}, \emph{28}, 1547--1562.
#'
#' See \url{https://designingexperiments.com/} for Shiny R implementation of the functions.
#'
#' For suggested updates, please email Samantha Anderson \email{[email protected]} or Ken Kelley \email{[email protected]}.
#' @author Samantha Anderson \email{[email protected]} and Ken Kelley \email{[email protected]}
#'
"_PACKAGE"
|
/scratch/gouwar.j/cran-all/cranData/BUCSS/R/BUCSS.R
|
# Between-subjects ANOVA.
#' Necessary sample size to reach desired power for a one or two-way
#' between-subjects ANOVA using an uncertainty and publication bias correction
#' procedure
#'
#' @description \code{ss.power.ba} returns the necessary per-group sample size
#' to achieve a desired level of statistical power for a planned study testing
#' an omnibus effect using a one or two-way fully between-subjects ANOVA,
#' based on information obtained from a previous study. The effect from the
#' previous study can be corrected for publication bias and/or uncertainty to
#' provide a sample size that will achieve more accurate statistical power for
#' a planned study, when compared to approaches that use a sample effect size
#' at face value or rely on sample size only. The bias and uncertainty
#' adjusted previous study noncentrality parameter is also returned, which can
#' be transformed to various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.ba} uses the observed
#' \eqn{F}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary per-group sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. Thus, the ratio of the numerator and the denominator is a
#' truncated noncentral F distribution. (See Taylor & Muller, 1996, Equation
#' 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' \code{ss.power.ba} assumes that the planned study will have equal n.
#' Unequal n in the previous study is handled in the following way for
#' between-subjects anova designs. If the user enters an N not equally
#' divisible by the number of cells, the function calculates n by dividing N
#' by the number of cells and both rounding up and rounding down the result,
#' effectively assuming equal n. The suggested sample size for the planned
#' study is calculated using both of these values of n, and the function
#' returns the larger of these two suggestions, to be conservative. The
#' adjusted noncentrality parameter that is output is the lower of the two
#' possibilities, again, to be conservative. Although equal-n previous studies
#' are preferable, this approach will work well as long as the cell sizes are
#' only slightly discrepant.
#'
#' @param F.observed Observed F-value from a previous study used to plan sample
#' size for a planned study
#' @param N Total sample size of the previous study
#' @param levels.A Number of levels for factor A
#' @param levels.B Number of levels for factor B, which is NULL if a single
#' factor design
#' @param effect Effect most of interest to the planned study: main effect of A
#' (\code{factor.A}), main effect of B (\code{factor.B}), interaction
#' (\code{interaction})
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return A list containing the suggested per-group sample size for the planned study and the
#' publication bias- and uncertainty-adjusted prior study noncentrality parameter.
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.ba(F.observed=5, N=120, levels.A=2, levels.B=3, effect="factor.B",
#' alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.ba <- function(F.observed, N, levels.A, levels.B=NULL, effect=c("factor.A", "factor.B", "interaction"), alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You must specify 'N', which is the total sample size.")
if(is.null(levels.B)) if(effect=="interaction") stop("You cannot select 'effect=interaction' if you do not specify 'levels.B'.")
if(is.null(levels.B)) cells <- levels.A
if(!is.null(levels.B)) cells <- levels.A*levels.B
NCP <- seq(from=0, to=100, by=step) # sequence of possible values for the noncentral parameter.
if(is.null(levels.B)) type <- "ANOVA.one.way"
if(!is.null(levels.B)) type <- "ANOVA.two.way"
## ROUNDING UP
if(type=="ANOVA.one.way")
{
cells <- levels.A
n.ru <- ceiling(N/cells) # To ensure that the between sample size is appropriate given specifications.
N.ru <- n.ru*cells
df.numerator <- levels.A - 1
df.denominator.ru <- N.ru-levels.A
}
if(type=="ANOVA.two.way")
{
cells <- levels.A*levels.B
n.ru <- ceiling(N/cells) # To ensure that the between sample size is appropriate given specifications.
N.ru <- n.ru*cells
if(effect=="factor.A") df.numerator <- levels.A - 1
if(effect=="factor.B") df.numerator <- levels.B - 1
if(effect=="interaction") df.numerator <- (levels.A - 1)*(levels.B - 1)
df.denominator.ru <- N.ru - cells
}
f.density.ru <- df(F.observed, df1=df.numerator, df2=df.denominator.ru, ncp=NCP) # density of F using F observed
critF.ru <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator.ru)
if(F.observed <= critF.ru) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values.ru <- 1 - pf(critF.ru, df1=df.numerator, df2=df.denominator.ru, ncp = NCP) # area above critical F
area.above.F.ru <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator.ru, ncp = NCP) # area above observed F
area.area.between.ru <- power.values.ru - area.above.F.ru
TM.ru <- area.area.between.ru/power.values.ru
TM.Percentile.ru <- min(NCP[which(abs(TM.ru-assurance)==min(abs(TM.ru-assurance)))])
if(TM.Percentile.ru==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if(TM.Percentile.ru > 0)
{
nrep <- 2
if(type=="ANOVA.two.way") denom.df <- nrep*cells-(levels.A*levels.B)
if(type=="ANOVA.one.way") denom.df <- nrep*cells-levels.A
diff.ru <- -1
while(diff.ru < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers.ru <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n.ru)*TM.Percentile.ru)
diff.ru <- powers.ru - power
nrep <- nrep + 1
if(type=="ANOVA.two.way") denom.df <- nrep*cells-(levels.A*levels.B)
if(type=="ANOVA.one.way") denom.df <- nrep*cells-levels.A
}
}
repn.ru <- nrep-1
##
## ROUNDING DOWN
##
if(type=="ANOVA.one.way")
{
cells <- levels.A
n.rd <- floor(N/cells) # To ensure that the between sample size is appropriate given specifications.
N.rd <- n.rd*cells
df.numerator <- levels.A - 1
df.denominator.rd <- N.rd-levels.A
}
if(type=="ANOVA.two.way")
{
cells <- levels.A*levels.B
n.rd <- floor(N/cells) # To ensure that the between sample size is appropriate given specifications.
N.rd <- n.rd*cells
if(effect=="factor.A") df.numerator <- levels.A - 1
if(effect=="factor.B") df.numerator <- levels.B - 1
if(effect=="interaction") df.numerator <- (levels.A - 1)*(levels.B - 1)
df.denominator.rd <- N.rd - cells
}
f.density.rd <- df(F.observed, df1=df.numerator, df2=df.denominator.rd, ncp=NCP) # density of F using F observed
critF.rd <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator.rd)
if(F.observed <= critF.rd) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values.rd <- 1 - pf(critF.rd, df1=df.numerator, df2=df.denominator.rd, ncp = NCP) # area above critical F
area.above.F.rd <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator.rd, ncp = NCP) # area above observed F
area.area.between.rd <- power.values.rd - area.above.F.rd
TM.rd <- area.area.between.rd/power.values.rd
TM.Percentile.rd <- min(NCP[which(abs(TM.rd-assurance)==min(abs(TM.rd-assurance)))])
if(TM.Percentile.rd==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if(TM.Percentile.rd > 0)
{
nrep <- 2
if(type=="ANOVA.two.way") denom.df <- nrep*cells-(levels.A*levels.B)
if(type=="ANOVA.one.way") denom.df <- nrep*cells-levels.A
diff.rd <- -1
while(diff.rd < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers.rd <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n.rd)*TM.Percentile.rd)
diff.rd <- powers.rd - power
nrep <- nrep + 1
if(type=="ANOVA.two.way") denom.df <- nrep*cells-(levels.A*levels.B)
if(type=="ANOVA.one.way") denom.df <- nrep*cells-levels.A
}
}
repn.rd <- nrep-1
output.n <- max(repn.ru, repn.rd)
return(list(output.n, min(TM.Percentile.rd, TM.Percentile.ru)))
}
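# Illustrative sketch of the truncation step described above (exposition only,
# not part of the package API). For a grid of candidate noncentrality
# parameters, the mass of the noncentral F distribution between the critical
# value and the observed F, divided by the mass above the critical value, gives
# the truncated-F curve; the bias- and uncertainty-corrected noncentrality
# parameter is the grid value whose curve value is closest to the requested
# assurance. The degrees of freedom below match the two-way example in the
# documentation (N = 120 and 6 cells, hence df2 = 114).
correctedNcpDemo <- function(F.observed = 5, df1 = 2, df2 = 114,
                             alpha.prior = .05, assurance = .80, step = .001) {
  NCP <- seq(from = 0, to = 100, by = step)
  critF <- qf(1 - alpha.prior, df1 = df1, df2 = df2)
  power.values <- 1 - pf(critF, df1 = df1, df2 = df2, ncp = NCP)   # mass above critical F
  area.above.F <- 1 - pf(F.observed, df1 = df1, df2 = df2, ncp = NCP)  # mass above observed F
  TM <- (power.values - area.above.F) / power.values
  min(NCP[which(abs(TM - assurance) == min(abs(TM - assurance)))])
}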
|
/scratch/gouwar.j/cran-all/cranData/BUCSS/R/ss.power.ba.R
|
#Between-subjects ANOVA (general case)
#' Necessary sample size to reach desired power for a between-subjects ANOVA
#' with any number of factors using an uncertainty and publication bias
#' correction procedure
#'
#' @description \code{ss.power.ba.general} returns the necessary per-group
#' sample size to achieve a desired level of statistical power for a planned
#' study testing any type of effect (omnibus, contrast) using a fully
#' between-subjects ANOVA with any number of factors, based on information
#' obtained from a previous study. The effect from the previous study can be
#' corrected for publication bias and/or uncertainty to provide a sample size
#' that will achieve more accurate statistical power for a planned study, when
#' compared to approaches that use a sample effect size at face value or rely
#' on sample size only. The bias and uncertainty adjusted previous study
#' noncentrality parameter is also returned, which can be transformed to
#' various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.ba.general} uses the observed
#' \eqn{F}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary per-group sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. Thus, the ratio of the numerator and the denominator is a
#' truncated noncentral F distribution. (See Taylor & Muller, 1996, Equation
#' 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' \code{ss.power.ba.general} assumes that the planned study will have equal
#' n. Unequal n in the previous study is handled in the following way for
#' between-subjects ANOVA designs. If the user enters an N not equally
#' divisible by the number of cells, the function calculates n by dividing N
#' by the number of cells and both rounding up and rounding down the result,
#' effectively assuming equal n. The suggested sample size for the planned
#' study is calculated using both of these values of n, and the function
#' returns the larger of these two suggestions, to be conservative. The
#' adjusted noncentrality parameter that is output is the lower of the two
#' possibilities, again, to be conservative. Although equal-n previous studies
#' are preferable, this approach will work well as long as the cell sizes are
#' only slightly discrepant.
#'
#' @param F.observed Observed \eqn{F}-value from a previous study used to plan sample
#' size for a planned study
#' @param N Total sample size of the previous study
#' @param cells Number of cells for the design (the product of the number of
#' levels of each factor)
#' @param df.numerator Numerator degrees of freedom for the effect of interest
#' @param df.denominator Denominator degrees of freedom for the effect of
#' interest
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested per-group sample size for planned study
#'
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.ba.general(F.observed=5, N=120, cells=6, df.numerator=2,
#' df.denominator=117, alpha.prior=.05, alpha.planned=.05, assurance=.80,
#' power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.ba.general <- function(F.observed, N, cells, df.numerator, df.denominator, alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You must specify 'N', which is the total sample size.")
NCP <- seq(from=0, to=100, by=step) # sequence of candidate values for the noncentrality parameter.
## ROUNDING UP
n.ru <- ceiling(N/cells) # per-group n, rounding up, so the between-subjects sample size is consistent with the design.
N.ru <- n.ru*cells
df.denominator.ru <- (n.ru*cells) - cells
f.density.ru <- df(F.observed, df1=df.numerator, df2=df.denominator.ru, ncp=NCP) # density of F using F observed
critF.ru <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator.ru)
if(F.observed <= critF.ru) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values.ru <- 1 - pf(critF.ru, df1=df.numerator, df2=df.denominator.ru, ncp = NCP) # area above critical F
area.above.F.ru <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator.ru, ncp = NCP) # area above observed F
area.area.between.ru <- power.values.ru - area.above.F.ru
TM.ru <- area.area.between.ru/power.values.ru
TM.Percentile.ru <- min(NCP[which(abs(TM.ru-assurance)==min(abs(TM.ru-assurance)))])
if(TM.Percentile.ru==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile.ru > 0)
{
nrep <- 2
denom.df <- (nrep*cells) - cells
diff.ru <- -1
while (diff.ru < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers.ru <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n.ru)*TM.Percentile.ru)
diff.ru <- powers.ru - power
nrep <- nrep + 1
denom.df <- (nrep*cells) - cells
}
}
repn.ru <- nrep-1
## ROUNDING DOWN
n.rd <- floor(N/cells) # per-group n, rounding down, so the between-subjects sample size is consistent with the design.
N.rd <- n.rd*cells
df.denominator.rd <- (n.rd*cells) - cells
f.density.rd <- df(F.observed, df1=df.numerator, df2=df.denominator.rd, ncp=NCP) # density of F using F observed
critF.rd <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator.rd)
if(F.observed <= critF.rd) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values.rd <- 1 - pf(critF.rd, df1=df.numerator, df2=df.denominator.rd, ncp = NCP) # area above critical F
area.above.F.rd <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator.rd, ncp = NCP) # area above observed F
area.area.between.rd <- power.values.rd - area.above.F.rd
TM.rd <- area.area.between.rd/power.values.rd
TM.Percentile.rd <- min(NCP[which(abs(TM.rd-assurance)==min(abs(TM.rd-assurance)))])
if(TM.Percentile.rd==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile.rd > 0)
{
nrep <- 2
denom.df <- (nrep*cells) - cells
diff.rd <- -1
while (diff.rd < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers.rd <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n.rd)*TM.Percentile.rd)
diff.rd <- powers.rd - power
nrep <- nrep + 1
denom.df <- (nrep*cells) - cells
}
}
repn.rd <- nrep-1
output.n <- max(repn.ru, repn.rd)
return(list(output.n, min(TM.Percentile.rd, TM.Percentile.ru)))
}
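
## A standalone sketch (not part of the package) of the core correction step
## used above: for each candidate noncentrality parameter, the CDF of the F
## distribution truncated below the publication-bias cutoff is evaluated at
## the observed F-value; the adjusted noncentrality parameter is the one whose
## truncated CDF equals the desired assurance. All numeric values here are
## illustrative only.
trunc.F.cdf <- function(ncp, F.obs, df1, df2, alpha) {
  crit  <- qf(1 - alpha, df1, df2)              # publication-bias cutoff (critical F)
  power <- 1 - pf(crit, df1, df2, ncp = ncp)    # area above the cutoff (power)
  between <- pf(F.obs, df1, df2, ncp = ncp) - pf(crit, df1, df2, ncp = ncp)
  between / power                               # truncated CDF evaluated at F.obs
}
ncp.grid <- seq(0, 50, by = .01)
tm <- sapply(ncp.grid, trunc.F.cdf, F.obs = 5, df1 = 2, df2 = 114, alpha = .05)
ncp.adjusted <- ncp.grid[which.min(abs(tm - .80))]  # assurance = .80 quantile
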
# Dependent t test.
#' Necessary sample size to reach desired power for a dependent t-test using an
#' uncertainty and publication bias correction procedure
#'
#' @description \code{ss.power.dt} returns the necessary sample size (i.e., number of pairs)
#' to achieve a desired level of statistical power for a planned study using
#' a dependent t-test, based on information obtained from a previous study.
#' The effect from the previous study can be corrected for publication bias
#' and/or uncertainty to provide a sample size that will achieve more accurate
#' statistical power for a planned study, when compared to approaches that use
#' a sample effect size at face value or rely on sample size only. The bias
#' and uncertainty adjusted previous study noncentrality parameter is also
#' returned, which can be transformed to various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.dt} uses the observed
#' \eqn{t}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary number of pairs to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. In the two-group case, this formula reduces to the density of
#' a truncated noncentral \eqn{t}-distribution. (See Taylor & Muller, 1996,
#' Equation 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' @param t.observed Observed \eqn{t}-value from a previous study used to plan
#' sample size for a planned study
#' @param N Total sample size of the previous study
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested number of pairs for planned study
#'
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.dt(t.observed=3, N=40, alpha.prior=.05, alpha.planned=.05,
#' assurance=.80, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.dt <- function(t.observed, N, alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You need to specify a sample size (i.e., the number of pairs) used in the original study.")
if(N <= 1) stop("Your total sample size is too small")
DF <- N-1
NCP <- seq(from=0, to=100, by=step)
d.density <- dt(t.observed, df=DF, ncp=NCP)
value.critical <- qt(1-alpha.prior/2, df=DF)
if(t.observed <= value.critical) stop("Your observed t statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 't.observed' exceeds the critical value")
area.above.crit <- 1 - pt(value.critical, df=DF, ncp=NCP)
area.other.tail <- pt(-1*value.critical, df=DF, ncp=NCP)
power.values <- area.above.crit + area.other.tail
area.above.t <- 1 - pt(t.observed, df=DF, ncp = NCP)
area.above.t.opp <- pt(-1*t.observed, df = DF, ncp = NCP)
area.area.between <- (area.above.crit - area.above.t) + (area.other.tail - area.above.t.opp)
TM <- area.area.between/power.values
TM.Percentile <- min(NCP[which(abs(TM-assurance)==min(abs(TM-assurance)))])
if(TM.Percentile==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile > 0)
{
Nrep <- 2
denom.df <- Nrep-1
diff <- -1
while (diff < 0 )
{
criticalT <- qt(1-alpha.planned/2, df = denom.df)
powers1 <- 1 - pt(criticalT, df = denom.df, ncp = sqrt(Nrep/N)*TM.Percentile)
powers2 <- pt(-1*criticalT, df=denom.df, ncp = sqrt(Nrep/N)*TM.Percentile)
powers <- powers1 + powers2
diff <- powers - power
Nrep <- Nrep + 1
denom.df = Nrep - 1
}
repN <- Nrep - 1
}
return(list(repN, TM.Percentile))
}
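
## Sketch (not part of the package): converting the adjusted noncentrality
## parameter returned by ss.power.dt() into a standardized mean difference for
## paired data. For a dependent t-test with N pairs, the noncentrality
## parameter equals d_z * sqrt(N), where d_z is the mean difference divided by
## the standard deviation of the differences; the inversion below assumes that
## parameterization.
res.dt <- ss.power.dt(t.observed = 3, N = 40, alpha.prior = .05,
                      alpha.planned = .05, assurance = .80, power = .80,
                      step = .001)
pairs.planned <- res.dt[[1]]             # suggested number of pairs for the planned study
d.z.adjusted  <- res.dt[[2]] / sqrt(40)  # implied bias/uncertainty-adjusted d_z (prior N = 40)
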
#' Necessary sample size to reach desired power for an independent t-test using
#' an uncertainty and publication bias correction procedure
#'
#' @description \code{ss.power.it} returns the necessary per-group sample size
#' to achieve a desired level of statistical power for a planned study using
#' an independent t-test, based on information obtained from a previous study.
#' The effect from the previous study can be corrected for publication bias
#' and/or uncertainty to provide a sample size that will achieve more accurate
#' statistical power for a planned study, when compared to approaches that use
#' a sample effect size at face value or rely on sample size only. The bias
#' and uncertainty adjusted previous study noncentrality parameter is also
#' returned, which can be transformed to various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.it} uses the observed
#' \eqn{t}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary per-group sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. In the two-group case, this formula reduces to the density of
#' a truncated noncentral \eqn{t}-distribution. (See Taylor & Muller, 1996,
#' Equation 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' \code{ss.power.it} assumes that the planned study will have equal n.
#' Unequal n in the previous study is handled in the following way for the
#' independent-t. If the user enters an odd value for N, no information is
#' available on the exact group sizes. The function calculates n by dividing N
#' by 2 and both rounding up and rounding down the result, thus assuming equal
#' n. The suggested sample size for the planned study is calculated using both
#' of these values of n, and the function returns the larger of these two
#' suggestions, to be conservative. If the user enters a vector for n with two
#' different values, specific information is available on the exact group
#' sizes. n is calculated as the harmonic mean of these two values (a measure
#' of effective sample size). Again, this is rounded both up and down. The
#' suggested sample size for the planned study is calculated using both of
#' these values of n, and the function returns the larger of these two
#' suggestions, to be conservative. The adjusted noncentrality parameter
#' that is output is the lower of the two possibilities, again, to be
#' conservative. When the individual group sizes of an unequal-n previous study
#' are known, we highly encourage entering these explicitly, especially if the
#' sample sizes are quite discrepant, as this allows for the greatest precision
#' in estimating an appropriate planned study n.
#'
#' @param t.observed Observed \eqn{t}-value from a previous study used to plan
#' sample size for a planned study
#' @param n Per group sample size (if equal) or the two group sample sizes of
#' the previous study (enter either a single number or a vector of length 2)
#' @param N Total sample size of the previous study, assumed equal across groups
#' if specified
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users
#' should not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested per-group sample size for planned study
#'
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.it(t.observed=3, n=20, alpha.prior=.05, alpha.planned=.05,
#' assurance=.80, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.it <- function(t.observed, n, N, alpha.prior=.05, alpha.planned=.05,
assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(!missing(N))
{
if(!is.null(N))
{
if(N <= 2) stop("Your total sample size is too small")
if(!missing(n)) stop("Because you specified 'N' you should not specify 'n'.")
if(missing(n))
{
n.ru <- ceiling(N/2)
N.ru <- 2*n.ru
DF.ru <- N.ru-2
n.rd <- floor(N/2)
N.rd <- 2*n.rd
DF.rd <- N.rd-2
}
}
}
if(missing(N))
{
if(!(length(n) %in% c(1, 2))) stop("The value of 'n' should be a vector of length two or a single value (for equal group sample sizes)")
if(length(n)==2)
{
n.1 <- n[1]
n.2 <- n[2]
n.ru <- ceiling(2/((1/n.1)+(1/n.2)))
N.ru <- 2*n.ru
DF.ru <- N.ru-2
n.rd <- floor(2/((1/n.1)+(1/n.2)))
N.rd <- 2*n.rd
DF.rd <- N.rd-2
}
if(length(n)==1)
{
n.ru <- n
N.ru <- 2*n.ru
DF.ru <- N.ru-2
n.rd <- n
N.rd <- 2*n.rd
DF.rd <- N.rd-2
}
}
NCP <- seq(from=0, to=100, by=step)
## Rounding up
d.density.ru <- dt(t.observed, df=DF.ru, ncp=NCP)
value.critical.ru <- qt(1-alpha.prior/2, df=DF.ru)
if(t.observed <= value.critical.ru) stop("Your observed t statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 't.observed' exceeds the critical value")
area.above.critical.value.ru <- 1 - pt(value.critical.ru, df=DF.ru, ncp=NCP)
area.other.tail.ru <- pt(-1*value.critical.ru, df=DF.ru, ncp=NCP)
power.values.ru <- area.above.critical.value.ru + area.other.tail.ru
area.above.t.ru <- 1 - pt(t.observed, df=DF.ru, ncp = NCP)
area.above.t.opp.ru <- pt(-1*t.observed, df = DF.ru, ncp = NCP)
area.area.between.ru <- (area.above.critical.value.ru - area.above.t.ru) + (area.other.tail.ru - area.above.t.opp.ru)
TM.ru <- area.area.between.ru/power.values.ru
TM.Percentile.ru <- min(NCP[which(abs(TM.ru-assurance)==min(abs(TM.ru-assurance)))])
if(TM.Percentile.ru==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if(TM.Percentile.ru > 0)
{
nrep <- 2
denom.df <- (2*nrep)-2
diff.ru <- -1
while (diff.ru < 0)
{
criticalT <- qt(1-alpha.planned/2, df = denom.df)
powers1.ru <- 1 - pt(criticalT, df = denom.df, ncp = sqrt(nrep/n.ru)*TM.Percentile.ru)
powers2.ru <- pt(-1*criticalT, df=denom.df, ncp = sqrt(nrep/n.ru)*TM.Percentile.ru)
powers.ru <- powers1.ru + powers2.ru
diff.ru <- powers.ru - power
nrep <- nrep + 1
denom.df = (2*nrep)-2
}
}
repn.ru <- nrep-1
##
## Rounding down
d.density.rd <- dt(t.observed, df=DF.rd, ncp=NCP)
value.critical.rd <- qt(1-alpha.prior/2, df=DF.rd)
if(t.observed <= value.critical.rd) stop("Your observed t statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 't.observed' exceeds the critical value")
area.above.critical.value.rd <- 1 - pt(value.critical.rd, df=DF.rd, ncp=NCP)
area.other.tail.rd <- pt(-1*value.critical.rd, df=DF.rd, ncp=NCP)
power.values.rd <- area.above.critical.value.rd + area.other.tail.rd
area.above.t.rd <- 1 - pt(t.observed, df=DF.rd, ncp = NCP)
area.above.t.opp.rd <- pt(-1*t.observed, df = DF.rd, ncp = NCP)
area.area.between.rd <- (area.above.critical.value.rd - area.above.t.rd) + (area.other.tail.rd - area.above.t.opp.rd)
TM.rd <- area.area.between.rd/power.values.rd
TM.Percentile.rd <- min(NCP[which(abs(TM.rd-assurance)==min(abs(TM.rd-assurance)))])
if(TM.Percentile.rd==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if(TM.Percentile.rd > 0)
{
nrep <- 2
denom.df <- (2*nrep)-2
diff.rd <- -1
while (diff.rd < 0)
{
criticalT <- qt(1-alpha.planned/2, df = denom.df)
powers1.rd <- 1 - pt(criticalT, df = denom.df, ncp = sqrt(nrep/n.rd)*TM.Percentile.rd)
powers2.rd <- pt(-1*criticalT, df=denom.df, ncp = sqrt(nrep/n.rd)*TM.Percentile.rd)
powers.rd <- powers1.rd + powers2.rd
diff.rd <- powers.rd - power
nrep <- nrep + 1
denom.df = (2*nrep)-2
}
}
repn.rd <- nrep-1
output.n <- max(repn.ru, repn.rd)
return(list(output.n, min(TM.Percentile.ru, TM.Percentile.rd)))
}
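
## Sketch (not part of the package) of how unequal previous-study group sizes
## are reduced to a single effective n, as described in the documentation
## above: the harmonic mean of the two group sizes, rounded both up and down.
## The group sizes are hypothetical.
n1 <- 25; n2 <- 40
n.effective <- 2 / (1 / n1 + 1 / n2)            # harmonic mean of the group sizes
c(ceiling(n.effective), floor(n.effective))     # the two n values ss.power.it() uses
## Equivalent call, letting ss.power.it() handle the unequal n directly:
## ss.power.it(t.observed = 3, n = c(25, 40), alpha.prior = .05,
##             alpha.planned = .05, assurance = .80, power = .80, step = .001)
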
# Multiple Linear Regression: Test of model R2
#' Necessary sample size to reach desired power for a test of model R2 in a
#' multiple regression using an uncertainty and publication bias correction
#' procedure
#'
#' @description \code{ss.power.reg.all} returns the necessary total sample size
#' to achieve a desired level of statistical power for a test of model R2
#' in a planned study using multiple regression, based on information
#' obtained from a previous study. The effect from the previous study
#' can be corrected for publication bias and/or uncertainty to provide
#' a sample size that will achieve more accurate statistical power for a
#' planned study, when compared to approaches that use a sample effect size at
#' face value or rely on sample size only. The bias and uncertainty adjusted
#' previous study noncentrality parameter is also returned, which can be
#' transformed to various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.reg.all} uses the observed
#' \eqn{F}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary total sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. In the single predictor case, this formula reduces to the density
#' of a truncated noncentral \eqn{t}-distribution. (See Taylor & Muller, 1996,
#' Equation 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' @param F.observed Observed \eqn{F}-value from a previous study used to plan
#' sample size for a planned study
#' @param N Total sample size of the previous study
#' @param p Number of predictors; be sure to include any product terms or
#' polynomials that are in the model
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested total sample size for planned study
#'
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.reg.all(F.observed=5, N=150, p=4, alpha.prior=.05, alpha.planned=.05,
#' assurance=.80, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.reg.all <- function(F.observed, N, p, alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You need to specify 'N', the total sample size used in the original study.")
if(N <= 1) stop("Your total sample size is too small")
if(p < 1) stop("Your number of predictors is too small")
if(N-p-1 < 1) stop("The combination of your sample size and number of predictors leads to 0 or negative degrees of freedom")
df.numerator <- p
df.denominator <- N-p-1
NCP <- seq(from=0, to=100, by=step)
value.critical <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator)
if(F.observed <= value.critical) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values <- 1 - pf(value.critical, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above critical F
area.above.F <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above observed F
area.between <- power.values - area.above.F
TM <- area.between/power.values
TM.Percentile <- min(NCP[which(abs(TM-assurance)==min(abs(TM-assurance)))])
if(TM.Percentile==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile > 0)
{
Nrep <- 2+p+1
denom.df <- Nrep-p-1
diff <- -1
while (diff < 0 )
{
critical.F <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers <- 1 - pf(critical.F, df1 = df.numerator, df2 = denom.df, ncp = (Nrep/N)*TM.Percentile)
diff <- powers - power
Nrep <- Nrep + 1
denom.df = Nrep - p - 1
}
repN <- Nrep - 1
}
return(list(repN, TM.Percentile))
}
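
## Sketch (not part of the package): mapping the adjusted noncentrality
## parameter for the model R^2 test to Cohen's f^2 and an implied R^2. The
## conversion assumes the common convention lambda = N * f^2, which is
## consistent with the (Nrep/N) rescaling used in the planned-study loop above;
## treat the exact convention as an assumption.
res.reg <- ss.power.reg.all(F.observed = 5, N = 150, p = 4, alpha.prior = .05,
                            alpha.planned = .05, assurance = .80, power = .80,
                            step = .001)
f2.adjusted <- res.reg[[2]] / 150               # implied Cohen's f^2 from prior N = 150
R2.adjusted <- f2.adjusted / (1 + f2.adjusted)  # implied adjusted R^2
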
# Multiple Linear Regression: Joint test of multiple predictors
#' Necessary sample size to reach desired power for a test of multiple predictors
#' in a multiple regression using an uncertainty and publication bias correction
#' procedure
#'
#' @description \code{ss.power.reg.joint} returns the necessary total sample size
#' to achieve a desired level of statistical power for a test of multiple
#' predictors in a planned study using multiple regression, based on information
#' obtained from a previous study. The effect from the previous study
#' can be corrected for publication bias and/or uncertainty to provide
#' a sample size that will achieve more accurate statistical power for a
#' planned study, when compared to approaches that use a sample effect size at
#' face value or rely on sample size only. The bias and uncertainty adjusted
#' previous study noncentrality parameter is also returned, which can be
#' transformed to various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.reg.joint} uses the observed
#' \eqn{F}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary total sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. In the single predictor case, this formula reduces to the density
#' of a truncated noncentral \eqn{t}-distribution. (See Taylor & Muller, 1996,
#' Equation 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' @param F.observed Observed \eqn{F}-value from a previous study used to plan
#' sample size for a planned study
#' @param N Total sample size of the previous study
#' @param p Number of predictors; be sure to include any product terms or
#' polynomials that are in the model
#' @param p.joint Number of predictors tested jointly for significance
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested total sample size for planned study
#'
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.reg.joint(F.observed=5, N=150, p=4, p.joint=2, alpha.prior=.05,
#' alpha.planned=.05, assurance=.80, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.reg.joint <- function(F.observed, N, p, p.joint, alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You need to specify 'N', the total sample size used in the original study.")
if(N <= 1) stop("Your total sample size is too small")
if(p < 1) stop("Your number of predictors is too small")
if(N-p-1 < 1) stop("The combination of your sample size and number of predictors leads to 0 or negative degrees of freedom")
if(p.joint > p) stop("The number of tested predictors cannot exceed the number of total predictors")
if(p.joint < 1) stop("The number of jointly tested predictors cannot be less than 1")
df.numerator <- p.joint
df.denominator <- N-p-1
NCP <- seq(from=0, to=100, by=step)
value.critical <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator)
if(F.observed <= value.critical) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values <- 1 - pf(value.critical, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above critical F
area.above.F <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above observed F
area.between <- power.values - area.above.F
TM <- area.between/power.values
TM.Percentile <- min(NCP[which(abs(TM-assurance)==min(abs(TM-assurance)))])
if(TM.Percentile==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile > 0)
{
Nrep <- 2+p+1
denom.df <- Nrep-p-1
diff <- -1
while (diff < 0 )
{
critical.F <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers <- 1 - pf(critical.F, df1 = df.numerator, df2 = denom.df, ncp = (Nrep/N)*TM.Percentile)
diff <- powers - power
Nrep <- Nrep + 1
denom.df = Nrep - p - 1
}
repN <- Nrep - 1
}
return(list(repN,TM.Percentile))
}
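
## Sketch (not part of the package): obtaining the observed F for a joint test
## of several predictors from the R^2 values of the full and reduced models,
## using the standard hierarchical-regression F statistic, and passing it to
## ss.power.reg.joint(). The R^2 values and sample size are hypothetical.
R2.full <- .30; R2.reduced <- .22
N.prior <- 150; p.total <- 4; p.tested <- 2
F.joint <- ((R2.full - R2.reduced) / p.tested) /
  ((1 - R2.full) / (N.prior - p.total - 1))
ss.power.reg.joint(F.observed = F.joint, N = N.prior, p = p.total,
                   p.joint = p.tested, alpha.prior = .05, alpha.planned = .05,
                   assurance = .80, power = .80, step = .001)
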
# Multiple Linear Regression: Test of single coefficient
#' Necessary sample size to reach desired power for a single coefficient in a
#' multiple regression using an uncertainty and publication bias correction
#' procedure
#'
#' @description \code{ss.power.reg1} returns the necessary total sample size
#' to achieve a desired level of statistical power for a single regression
#' coefficient in a planned study using multiple regression, based on
#' information obtained from a previous study. The effect from the previous
#' study can be corrected for publication bias and/or uncertainty to provide
#' a sample size that will achieve more accurate statistical power for a
#' planned study, when compared to approaches that use a sample effect size at
#' face value or rely on sample size only. The bias and uncertainty adjusted
#' previous study noncentrality parameter is also returned, which can be
#' transformed to various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.reg1} uses the observed
#' \eqn{t}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary total sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. In the single predictor case, this formula reduces to the density
#' of a truncated noncentral \eqn{t}-distribution. (See Taylor & Muller, 1996,
#' Equation 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' @param t.observed Observed \eqn{t}-value from a previous study used to plan
#' sample size for a planned study
#' @param N Total sample size of the previous study
#' @param p Number of predictors; be sure to include any product terms or
#' polynomials that are in the model
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested total sample size for planned study
#'
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.reg1(t.observed=3, N=150, p=3, alpha.prior=.05, alpha.planned=.05,
#' assurance=.80, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.reg1 <- function(t.observed, N, p, alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You need to specify a sample size (i.e., the number of pairs) used in the original study.")
if(N <= 1) stop("Your total sample size is too small")
if(p < 1) stop("Your number of predictors is too small")
if(N-p-1 < 1) stop("The combination of your sample size and number of predictors leads to 0 or negative degrees of freedom")
df.numerator <- 1
df.denominator <- N-p-1
NCP <- seq(from=0, to=100, by=step)
value.critical <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator)
if(t.observed^2 <= value.critical) stop("Your observed t statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 't.observed' exceeds the critical value")
power.values <- 1 - pf(value.critical, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above critical F
area.above.F <- 1 - pf(t.observed^2, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above observed F
area.between <- power.values - area.above.F
TM <- area.between/power.values # CDF of the truncated noncentral F at the observed statistic, for each candidate NCP
TM.Percentile <- min(NCP[which(abs(TM-assurance)==min(abs(TM-assurance)))]) # smallest NCP at which the observed statistic falls at the 'assurance' quantile of the truncated distribution
if(TM.Percentile==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile > 0)
{
Nrep <- 2+p+1
denom.df <- Nrep-p-1
diff <- -1
while (diff < 0 )
{
critical.F <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers <- 1 - pf(critical.F, df1 = df.numerator, df2 = denom.df, ncp = (Nrep/N)*TM.Percentile)
diff <- powers - power
Nrep <- Nrep + 1
denom.df = Nrep - p - 1
}
repN <- Nrep - 1
}
return(list(repN, TM.Percentile))
}
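# --- Illustrative sketch (not part of the BUCSS package) ---------------------
# A minimal walk-through of the publication bias / uncertainty correction
# implemented above, for a single regression coefficient. All demo.* names and
# values are hypothetical and simply mirror the @examples call; only base
# 'stats' functions are used.
demo.t <- 3; demo.N <- 150; demo.p <- 3           # observed t, prior N, number of predictors
demo.alpha <- .05; demo.assurance <- .80          # assumed publication threshold and assurance
demo.df2 <- demo.N - demo.p - 1                   # denominator df of the prior study
demo.ncp <- seq(0, 100, by = .001)                # grid of candidate noncentrality parameters
demo.crit <- qf(1 - demo.alpha, df1 = 1, df2 = demo.df2)  # critical F (= critical t^2)
demo.p.crit <- pf(demo.crit, df1 = 1, df2 = demo.df2, ncp = demo.ncp)
# CDF of the truncated noncentral F (truncated below at the critical value),
# evaluated at the observed statistic, for every candidate NCP:
demo.tm <- (pf(demo.t^2, df1 = 1, df2 = demo.df2, ncp = demo.ncp) - demo.p.crit) / (1 - demo.p.crit)
# Corrected NCP: the candidate at which the observed statistic sits at the
# 'assurance' quantile of the truncated distribution, as in ss.power.reg1().
demo.ncp.corrected <- demo.ncp[which.min(abs(demo.tm - demo.assurance))]
demo.ncp.corrected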
|
/scratch/gouwar.j/cran-all/cranData/BUCSS/R/ss.power.reg1.R
|
# Split plot ANOVA (general case)
#' Necessary sample size to reach desired power for a split-plot (mixed) ANOVA
#' with any number of factors using an uncertainty and publication bias
#' correction procedure
#'
#' @description \code{ss.power.spa.general} returns the necessary per-group
#' sample size to achieve a desired level of statistical power for a planned
#' study testing any type of effect (omnibus, contrast) using a split-plot
#' (mixed) ANOVA with any number of factors, based on information obtained
#' from a previous study. The effect from the previous study can be corrected
#' for publication bias and/or uncertainty to provide a sample size that will
#' achieve more accurate statistical power for a planned study, when compared
#' to approaches that use a sample effect size at face value or rely on sample
#' size only. The bias and uncertainty adjusted previous study noncentrality
#' parameter is also returned, which can be transformed to various effect
#' size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.spa.general} uses the observed
#' \eqn{F}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary per-group sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. Thus, the ratio of the numerator and the denominator is a
#' truncated noncentral F distribution. (See Taylor & Muller, 1996, Equation
#' 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' \code{ss.power.spa.general} assumes that the planned study will have equal
#' n. Unequal n in the previous study is handled in the following way for
#' split plot designs. If the user enters an N not equally divisible by the
#' number of between-subjects cells, the function calculates n by dividing N
#' by the number of cells and both rounding up and rounding down the result,
#' effectively assuming equal n. The suggested sample size for the planned
#' study is calculated using both of these values of n, and the function
#' returns the larger of these two suggestions, to be conservative. The
#' adjusted noncentrality parameter that is output is the lower of the two
#' possibilities, again, to be conservative. Although equal-n previous studies
#' are preferable, this approach will work well as long as the cell sizes are
#' only slightly discrepant.
#'
#' \code{ss.power.spa.general} assumes sphericity for the within-subjects
#' effects.
#'
#' @param F.observed Observed F-value from a previous study used to plan sample
#' size for a planned study
#' @param N Total sample size of the previous study
#' @param df.numerator Numerator degrees of freedom for the effect of interest
#' @param num.groups Number of distinct groups (product of the number of levels
#' of between-subjects factors)
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param effect Effect of interest: involves only between-subjects effects
#' (\code{between.only}), involves only within-subjects effects
#' (\code{within.only}), involves both between and within effects
#' (\code{between.within})
#' @param df.num.within Numerator degrees of freedom only for the within
#' subjects components of the effect of interest. Only needed when effect =
#' \code{between.within}.
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested per-group sample size for planned study
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.spa.general(F.observed=5, N=90, df.numerator=2, num.groups=3,
#' effect="between.only", df.num.within=3, alpha.prior=.05, alpha.planned=.05,
#' assurance=.80, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.spa.general <- function(F.observed, N, df.numerator, num.groups, effect=c("between.only", "within.only", "between.within"), df.num.within, alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You must specify 'N', which is the total sample size.")
NCP <- seq(from=0, to=100, by=step) # sequence of possible values for the noncentral parameter.
#### ROUND DOWN
if(effect=="between.only")
{
n.rd <- floor(N/num.groups) # To ensure that the between sample size is appropriate given specifications.
N.rd <- n.rd*num.groups
df.denominator.rd <- N.rd-num.groups
}
if(effect=="within.only")
{
n.rd <- floor(N/num.groups) # To ensure that the per-cell sample size is equal. Rounds down (floor).
N.rd <- n.rd*num.groups
df.denominator.rd <- (N.rd-num.groups)*df.numerator
}
if(effect=="between.within")
{
n.rd <- floor(N/num.groups) # To ensure that the per-cell sample size is equal. Rounds down (floor).
N.rd <- n.rd*num.groups
df.denominator.rd <- (N.rd-num.groups)*df.num.within
}
f.density.rd <- df(F.observed, df1=df.numerator, df2=df.denominator.rd, ncp=NCP) # density of F using F observed
critF.rd <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator.rd)
if(F.observed <= critF.rd) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values.rd <- 1 - pf(critF.rd, df1=df.numerator, df2=df.denominator.rd, ncp = NCP) # area above critical F
area.above.F.rd <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator.rd, ncp = NCP) # area above observed F
area.area.between.rd <- power.values.rd - area.above.F.rd
TM.rd <- area.area.between.rd/power.values.rd
TM.Percentile.rd <- min(NCP[which(abs(TM.rd-assurance)==min(abs(TM.rd-assurance)))])
if(TM.Percentile.rd==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile.rd > 0)
{
nrep <- 2
if(effect=="between.only")
{
denom.df <- (nrep*num.groups) - num.groups
}
if(effect=="within.only")
{
denom.df <- ((nrep*num.groups) - num.groups)*df.numerator
}
if(effect=="between.within")
{
denom.df <- ((nrep*num.groups) - num.groups)*df.num.within
}
diff.rd <- -1
while (diff.rd < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers.rd <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n.rd)*TM.Percentile.rd)
diff.rd <- powers.rd - power
nrep <- nrep + 1
if(effect=="between.only")
{
denom.df <- (nrep*num.groups) - num.groups
}
if(effect=="within.only")
{
denom.df <- ((nrep*num.groups) - num.groups)*df.numerator
}
if(effect=="between.within")
{
denom.df <- ((nrep*num.groups) - num.groups)*df.num.within
}
}
}
repn.rd <- nrep-1
#### ROUND UP
if(effect=="between.only")
{
n.ru <- ceiling(N/num.groups) # To ensure that the between sample size is appropriate given specifications.
N.ru <- n.ru*num.groups
df.denominator.ru <- N.ru-num.groups
}
if(effect=="within.only")
{
n.ru <- ceiling(N/num.groups) # To ensure that the per-cell sample size is equal. Rounds up (ceiling).
N.ru <- n.ru*num.groups
df.denominator.ru <- (N.ru-num.groups)*df.numerator
}
if(effect=="between.within")
{
n.ru <- ceiling(N/num.groups) # To ensure that the per-cell sample size is equal. Rounds up (ceiling).
N.ru <- n.ru*num.groups
df.denominator.ru <- (N.ru-num.groups)*df.num.within
}
f.density.ru <- df(F.observed, df1=df.numerator, df2=df.denominator.ru, ncp=NCP) # density of F using F observed
critF.ru <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator.ru)
if(F.observed <= critF.ru) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values.ru <- 1 - pf(critF.ru, df1=df.numerator, df2=df.denominator.ru, ncp = NCP) # area above critical F
area.above.F.ru <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator.ru, ncp = NCP) # area above observed F
area.area.between.ru <- power.values.ru - area.above.F.ru
TM.ru <- area.area.between.ru/power.values.ru
TM.Percentile.ru <- min(NCP[which(abs(TM.ru-assurance)==min(abs(TM.ru-assurance)))])
if(TM.Percentile.ru==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile.ru > 0)
{
nrep <- 2
if(effect=="between.only")
{
denom.df <- (nrep*num.groups) - num.groups
}
if(effect=="within.only")
{
denom.df <- ((nrep*num.groups) - num.groups)*df.numerator
}
if(effect=="between.within")
{
denom.df <- ((nrep*num.groups) - num.groups)*df.num.within
}
diff.ru <- -1
while (diff.ru < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers.ru <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n.ru)*TM.Percentile.ru)
diff.ru <- powers.ru - power
nrep <- nrep + 1
if(effect=="between.only")
{
denom.df <- (nrep*num.groups) - num.groups
}
if(effect=="within.only")
{
denom.df <- ((nrep*num.groups) - num.groups)*df.numerator
}
if(effect=="between.within")
{
denom.df <- ((nrep*num.groups) - num.groups)*df.num.within
}
}
}
repn.ru <- nrep-1
output.n <- max(repn.rd, repn.ru)
return(list(output.n, min(TM.Percentile.rd, TM.Percentile.ru)))
}
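# --- Illustrative sketch (not part of the BUCSS package) ---------------------
# How an unequal-n prior study is handled, as described in the documentation
# above: when N is not divisible by the number of between-subjects cells, the
# per-cell n is computed by both rounding down and rounding up, the full
# correction is run with each value, and the more conservative result is kept.
# The numbers below (N = 100 split over 3 cells) are hypothetical.
demo.N <- 100; demo.groups <- 3
demo.n.rd <- floor(demo.N / demo.groups)    # 33 per cell; prior N treated as 99
demo.n.ru <- ceiling(demo.N / demo.groups)  # 34 per cell; prior N treated as 102
# ss.power.spa.general() returns the larger of the two suggested sample sizes
# and the smaller of the two corrected noncentrality parameters.
c(round.down = demo.n.rd, round.up = demo.n.ru)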
|
/scratch/gouwar.j/cran-all/cranData/BUCSS/R/ss.power.spa.general.R
|
# Split plot ANOVA.
#' Necessary sample size to reach desired power for two-factor split-plot
#' (mixed) ANOVA using an uncertainty and publication bias correction procedure
#'
#' @description \code{ss.power.spa} returns the necessary per-group sample size
#' to achieve a desired level of statistical power for a planned study testing
#' an omnibus effect using a two-factor split-plot (mixed) ANOVA, based on
#' information obtained from a previous study. The effect from the previous
#' study can be corrected for publication bias and/or uncertainty to provide a
#' sample size that will achieve more accurate statistical power for a planned
#' study, when compared to approaches that use a sample effect size at face
#' value or rely on sample size only. The bias and uncertainty adjusted previous
#' study noncentrality parameter is also returned, which can be transformed to
#' various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.spa} uses the observed
#' \eqn{F}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary per-group sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. Thus, the ratio of the numerator and the denominator is a
#' truncated noncentral F distribution. (See Taylor & Muller, 1996, Equation
#' 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' \code{ss.power.spa} assumes that the planned study will have equal n.
#' Unequal n in the previous study is handled in the following way for split
#' plot designs. If the user enters an N not equally divisible by the number
#' of between-subjects cells, the function calculates n by dividing N by the
#' number of cells and both rounding up and rounding down the result,
#' effectively assuming equal n. The suggested sample size for the planned
#' study is calculated using both of these values of n, and the function
#' returns the larger of these two suggestions, to be conservative. The
#' adjusted noncentrality parameter that is output is the lower of the two
#' possibilities, again, to be conservative. Although equal-n previous
#' studies are preferable, this approach will work well as long as the cell
#' sizes are only slightly discrepant.
#'
#' \code{ss.power.spa} assumes sphericity for the within-subjects effects.
#'
#' @param F.observed Observed F-value from a previous study used to plan sample
#' size for a planned study
#' @param N Total sample size of the previous study
#' @param levels.between Number of levels for the between-subjects factor
#' @param levels.within Number of levels for the within-subjects factor
#' @param effect Effect of most interest to the planned study: between main
#'   effect (\code{between}), within main effect (\code{within}), or the
#'   interaction (\code{interaction})
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentral
#' parameters necessary for sample size planning (0 < step < .01) (users
#' should not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested per-group sample size for planned study
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.spa(F.observed=5, N=60, levels.between=2, levels.within=3,
#' effect="within", alpha.prior=.05, alpha.planned=.05, assurance=.80,
#' power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.spa <- function(F.observed, N, levels.between, levels.within, effect=c("between", "within", "interaction"), alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You must specify 'N', which is the total sample size.")
if(missing(levels.within)) stop("You must specify the number of levels of the within factor. If there is no within factor, use the between-subjects approach.")
if(missing(levels.between)) stop("You must specify the number of levels of the between factor. If there is no between factor, use the within-subjects approach.")
NCP <- seq(from=0, to=100, by=step) # sequence of possible values for the noncentral parameter.
### ROUND DOWN
if(effect=="between")
{
n.rd <- floor(N/levels.between) # To ensure that the between sample size is appropriate given specifications.
N.rd <- n.rd*levels.between
df.numerator <- levels.between - 1
df.denominator.rd <- N.rd-levels.between
}
if(effect=="within")
{
n.rd <- floor(N/levels.between) # To ensure that the per-cell sample size is equal. Rounds down (floor).
N.rd <- n.rd*levels.between
df.numerator <- levels.within - 1
df.denominator.rd <- (N.rd-levels.between)*(levels.within - 1)
}
if(effect=="interaction")
{
n.rd <- floor(N/levels.between) # To ensure that the per-cell sample size is equal. Rounds down (floor).
N.rd <- n.rd*levels.between
df.numerator <- (levels.between - 1)*(levels.within - 1)
df.denominator.rd <- (N.rd-levels.between)*(levels.within - 1)
}
f.density.rd <- df(F.observed, df1=df.numerator, df2=df.denominator.rd, ncp=NCP) # density of F using F observed
critF.rd <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator.rd)
if(F.observed <= critF.rd) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values.rd <- 1 - pf(critF.rd, df1=df.numerator, df2=df.denominator.rd, ncp = NCP) # area above critical F
area.above.F.rd <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator.rd, ncp = NCP) # area above observed F
area.area.between.rd <- power.values.rd - area.above.F.rd
TM.rd <- area.area.between.rd/power.values.rd
TM.Percentile.rd <- min(NCP[which(abs(TM.rd-assurance)==min(abs(TM.rd-assurance)))])
if(TM.Percentile.rd==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile.rd > 0)
{
nrep <- 2
if(effect=="between")
{
denom.df <- nrep*levels.between-levels.between
}
if(effect=="within")
{
denom.df <- levels.between*(nrep-1)*(levels.within - 1)
}
if(effect=="interaction")
{
denom.df <- levels.between*(nrep-1)*(levels.within - 1)
}
diff.rd <- -1
while (diff.rd < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers.rd <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n.rd)*TM.Percentile.rd)
diff.rd <- powers.rd - power
nrep <- nrep + 1
if(effect=="between")
{
denom.df <- (nrep*levels.between)-levels.between
}
if(effect=="within")
{
denom.df <- levels.between*(nrep-1)*(levels.within - 1)
}
if(effect=="interaction")
{
denom.df <- levels.between*(nrep-1)*(levels.within - 1)
}
}
}
repn.rd <- nrep-1
### ROUND UP
if(effect=="between")
{
n.ru <- ceiling(N/levels.between) # To ensure that the between sample size is appropriate given specifications.
N.ru <- n.ru*levels.between
df.numerator <- levels.between - 1
df.denominator.ru <- N.ru-levels.between
}
if(effect=="within")
{
n.ru <- ceiling(N/levels.between) # To ensure that the per-cell sample size is equal. Rounds up (ceiling).
N.ru <- n.ru*levels.between
df.numerator <- levels.within - 1
df.denominator.ru <- (N.ru-levels.between)*(levels.within - 1)
}
if(effect=="interaction")
{
n.ru <- ceiling(N/levels.between) # To ensure that the per-cell sample size is equal. Rounds up (ceiling).
N.ru <- n.ru*levels.between
df.numerator <- (levels.between - 1)*(levels.within - 1)
df.denominator.ru <- (N.ru-levels.between)*(levels.within - 1)
}
f.density.ru <- df(F.observed, df1=df.numerator, df2=df.denominator.ru, ncp=NCP) # density of F using F observed
critF.ru <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator.ru)
if(F.observed <= critF.ru) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values.ru <- 1 - pf(critF.ru, df1=df.numerator, df2=df.denominator.ru, ncp = NCP) # area above critical F
area.above.F.ru <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator.ru, ncp = NCP) # area above observed F
area.area.between.ru <- power.values.ru - area.above.F.ru
TM.ru <- area.area.between.ru/power.values.ru
TM.Percentile.ru <- min(NCP[which(abs(TM.ru-assurance)==min(abs(TM.ru-assurance)))])
if(TM.Percentile.ru==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile.ru > 0)
{
nrep <- 2
if(effect=="between")
{
denom.df <- nrep*levels.between-levels.between
}
if(effect=="within")
{
denom.df <- levels.between*(nrep-1)*(levels.within - 1)
}
if(effect=="interaction")
{
denom.df <- levels.between*(nrep-1)*(levels.within - 1)
}
diff.ru <- -1
while (diff.ru < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers.ru <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n.ru)*TM.Percentile.ru)
diff.ru <- powers.ru - power
nrep <- nrep + 1
if(effect=="between")
{
denom.df <- (nrep*levels.between)-levels.between
}
if(effect=="within")
{
denom.df <- levels.between*(nrep-1)*(levels.within - 1)
}
if(effect=="interaction")
{
denom.df <- levels.between*(nrep-1)*(levels.within - 1)
}
}
}
repn.ru <- nrep-1
output.n <- max(repn.rd, repn.ru)
return(list(output.n, min(TM.Percentile.rd, TM.Percentile.ru)))
}
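# --- Illustrative sketch (not part of the BUCSS package) ---------------------
# Degrees of freedom used by ss.power.spa() for each omnibus effect in a
# J (between) x K (within) split-plot design with n subjects per group,
# assuming sphericity. The values J = 2, K = 3, n = 30 are hypothetical and
# chosen to match the @examples call (N = 60 with 2 between-subjects levels).
demo.J <- 2; demo.K <- 3; demo.n <- 30
demo.N <- demo.n * demo.J
rbind(
  between     = c(df1 = demo.J - 1,                  df2 = demo.N - demo.J),
  within      = c(df1 = demo.K - 1,                  df2 = (demo.N - demo.J) * (demo.K - 1)),
  interaction = c(df1 = (demo.J - 1) * (demo.K - 1), df2 = (demo.N - demo.J) * (demo.K - 1))
)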
|
/scratch/gouwar.j/cran-all/cranData/BUCSS/R/ss.power.spa.r
|
# Within subjects ANOVA (general case)
#' Necessary sample size to reach desired power for a within-subjects ANOVA with
#' any number of factors using an uncertainty and publication bias correction
#' procedure
#'
#' @description \code{ss.power.wa.general} returns the necessary per-group
#' sample size to achieve a desired level of statistical power for a planned
#' study testing any type of effect (omnibus, contrast) using a fully
#' within-subjects ANOVA with any number of factors, based on information
#' obtained from a previous study. The effect from the previous study can be
#' corrected for publication bias and/or uncertainty to provide a sample size
#' that will achieve more accurate statistical power for a planned study, when
#' compared to approaches that use a sample effect size at face value or rely
#' on sample size only. The bias and uncertainty adjusted previous study
#' noncentrality parameter is also returned, which can be transformed to
#' various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.wa.general} uses the observed
#' \eqn{F}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary per-group sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. Thus, the ratio of the numerator and the denominator is a
#' truncated noncentral F distribution. (See Taylor & Muller, 1996, Equation
#' 2.1. and Anderson & Maxwell, 2017 for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' \code{ss.power.wa.general} assumes sphericity for the within-subjects
#' effects.
#'
#' @param F.observed Observed F-value from a previous study used to plan sample
#' size for a planned study
#' @param N Total sample size of the previous study
#' @param df.numerator Numerator degrees of freedom for the effect of interest
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested per-group sample size for planned study
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples ss.power.wa.general(F.observed=6.5, N=80, df.numerator=1,
#' alpha.prior=.05, alpha.planned=.05, assurance=.50, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{kkelley@@nd.edu}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.wa.general <- function(F.observed, N, df.numerator, alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You must specify 'N', which is the total sample size.")
NCP <- seq(from=0, to=100, by=step) # sequence of possible values for the noncentral parameter.
n <- N
df.denominator <- df.numerator*(n-1)
f.density <- df(F.observed, df1=df.numerator, df2=df.denominator, ncp=NCP) # density of F using F observed
critF <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator)
if(F.observed <= critF) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values <- 1 - pf(critF, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above critical F
area.above.F <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above observed F
area.area.between <- power.values - area.above.F
TM <- area.area.between/power.values
TM.Percentile <- min(NCP[which(abs(TM-assurance)==min(abs(TM-assurance)))])
if(TM.Percentile==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile > 0)
{
nrep <- 2
denom.df <- df.numerator*(nrep-1)
diff <- -1
while (diff < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n)*TM.Percentile)
diff <- powers - power
nrep <- nrep + 1
denom.df <- df.numerator*(nrep-1)
}
}
repn <- nrep-1
return(list(repn, TM.Percentile)) # This is the same as total N needed
}
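# --- Illustrative sketch (not part of the BUCSS package) ---------------------
# The iterative search used above to plan the new sample size once a corrected
# noncentrality parameter is available: increase n until the planned study's
# power, computed with the NCP rescaled in proportion to sample size, reaches
# the target. The corrected NCP of 6 and the prior-study n of 80 below are
# hypothetical; inside ss.power.wa.general() they come from the correction step.
demo.ncp <- 6; demo.n.prior <- 80; demo.df1 <- 1
demo.alpha <- .05; demo.power.goal <- .80
demo.n <- 2
repeat {
  demo.df2 <- demo.df1 * (demo.n - 1)                              # within-subjects error df
  demo.crit <- qf(1 - demo.alpha, df1 = demo.df1, df2 = demo.df2)
  demo.pwr <- 1 - pf(demo.crit, df1 = demo.df1, df2 = demo.df2,
                     ncp = (demo.n / demo.n.prior) * demo.ncp)     # NCP scaled with sample size
  if (demo.pwr >= demo.power.goal) break
  demo.n <- demo.n + 1
}
demo.n  # smallest total N reaching the desired power for this within-subjects effect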
|
/scratch/gouwar.j/cran-all/cranData/BUCSS/R/ss.power.wa.general.R
|
# Within subjects ANOVA.
#' Necessary sample size to reach desired power for a one or two-way
#' within-subjects ANOVA using an uncertainty and publication bias correction
#' procedure
#'
#' @description \code{ss.power.wa} returns the necessary per-group sample size
#' to achieve a desired level of statistical power for a planned study testing
#' an omnibus effect using a one or two-way fully within-subjects ANOVA, based
#' on information obtained from a previous study. The effect from the previous
#' study can be corrected for publication bias and/or uncertainty to provide a
#' sample size that will achieve more accurate statistical power for a planned
#' study, when compared to approaches that use a sample effect size at face
#' value or rely on sample size only. The bias and uncertainty adjusted previous
#' study noncentrality parameter is also returned, which can be transformed to
#' various effect size metrics.
#'
#' @details Researchers often use the sample effect size from a prior study as
#' an estimate of the likely size of an expected future effect in sample size
#' planning. However, sample effect size estimates should not usually be used
#' at face value to plan sample size, due to both publication bias and
#' uncertainty.
#'
#' The approach implemented in \code{ss.power.wa} uses the observed
#' \eqn{F}-value and sample size from a previous study to correct the
#' noncentrality parameter associated with the effect of interest for
#' publication bias and/or uncertainty. This new estimated noncentrality
#' parameter is then used to calculate the necessary per-group sample size to
#' achieve the desired level of power in the planned study.
#'
#' The approach uses a likelihood function of a truncated non-central F
#' distribution, where the truncation occurs due to small effect sizes being
#' unobserved due to publication bias. The numerator of the likelihood
#' function is simply the density of a noncentral F distribution. The
#' denominator is the power of the test, which serves to truncate the
#' distribution. Thus, the ratio of the numerator and the denominator is a
#' truncated noncentral F distribution. (See Taylor & Muller, 1996, Equation
#' 2.1. and Anderson & Maxwell, 2017, for more details.)
#'
#' Assurance is the proportion of times that power will be at or above the
#' desired level, if the experiment were to be reproduced many times. For
#' example, assurance = .5 means that power will be above the desired level
#' half of the time, but below the desired level the other half of the time.
#' Selecting assurance = .5 (selecting the noncentrality parameter at the 50th
#' percentile of the likelihood distribution) results in a median-unbiased
#' estimate of the population noncentrality parameter and does not correct for
#' uncertainty. In order to correct for uncertainty, assurance > .5
#' can be selected, which corresponds to selecting the noncentrality parameter
#' associated with the (1 - assurance) quantile of the likelihood
#' distribution.
#'
#' If the previous study of interest has not been subjected to publication
#' bias (e.g., a pilot study), \code{alpha.prior} can be set to 1 to indicate
#' no publication bias. Alternative \eqn{\alpha}-levels can also be
#' accommodated to represent differing amounts of publication bias. For
#' example, setting \code{alpha.prior}=.20 would reflect less severe
#' publication bias than the default of .05. In essence, setting
#' \code{alpha.prior} at .20 assumes that studies with \eqn{p}-values less
#' than .20 are published, whereas those with larger \eqn{p}-values are not.
#'
#' In some cases, the corrected noncentrality parameter for a given level of
#' assurance will be estimated to be zero. This is an indication that, at the
#' desired level of assurance, the previous study's effect cannot be
#' accurately estimated due to high levels of uncertainty and bias. When this
#' happens, subsequent sample size planning is not possible with the chosen
#' specifications. Two alternatives are recommended. First, users can select a
#' lower value of assurance (e.g. .8 instead of .95). Second, users can reduce
#' the influence of publication bias by setting \code{alpha.prior} at a value
#' greater than .05. It is possible to correct for uncertainty only by setting
#' \code{alpha.prior}=1 and choosing the desired level of assurance. We
#' encourage users to make the adjustments as minimal as possible.
#'
#' \code{ss.power.wa} assumes sphericity for the within-subjects effects.
#'
#' @param F.observed Observed F-value from a previous study used to plan sample
#' size for a planned study
#' @param N Total sample size of the previous study
#' @param levels.A Number of levels for factor A
#' @param levels.B Number of levels for factor B, which is NULL if a single
#' factor design
#' @param effect Effect of most interest to the planned study: main effect of A
#' (\code{factor.A}), main effect of B (\code{factor.B}), interaction
#' (\code{interaction})
#' @param alpha.prior Alpha-level \eqn{\alpha} for the previous study or the
#' assumed statistical significance necessary for publishing in the field; to
#' assume no publication bias, a value of 1 can be entered
#' @param alpha.planned Alpha-level (\eqn{\alpha}) assumed for the planned study
#' @param assurance Desired level of assurance, or the long run proportion of
#' times that the planned study power will reach or surpass desired level
#' (assurance > .5 corrects for uncertainty; assurance < .5 not recommended)
#' @param power Desired level of statistical power for the planned study
#' @param step Value used in the iterative scheme to determine the noncentrality
#' parameter necessary for sample size planning (0 < step < .01) (users should
#' not generally need to change this value; smaller values lead to more
#' accurate sample size planning results, but unnecessarily small values will
#' add unnecessary computational time)
#'
#' @return Suggested per-group sample size for planned study
#' Publication bias and uncertainty-adjusted prior study noncentrality parameter
#'
#' @export
#' @import stats
#'
#' @examples
#' ss.power.wa(F.observed=5, N=60, levels.A=2, levels.B=3, effect="factor.B",
#' alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
#'
#' @author Samantha F. Anderson \email{[email protected]},
#' Ken Kelley \email{[email protected]}
#'
#' @references Anderson, S. F., & Maxwell, S. E. (2017).
#' Addressing the 'replication crisis': Using original studies to design
#' replication studies with appropriate statistical power. \emph{Multivariate
#' Behavioral Research, 52,} 305-322.
#'
#' Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample size
#' planning for more accurate statistical power: A method correcting sample
#' effect sizes for uncertainty and publication bias. \emph{Psychological
#' Science, 28,} 1547-1562.
#'
#' Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and
#' sample size calculation due to estimating noncentrality.
#' \emph{Communications in Statistics: Theory and Methods, 25,} 1595-1610.
ss.power.wa <- function(F.observed, N, levels.A, levels.B=NULL, effect=c("factor.A", "factor.B", "interaction"), alpha.prior=.05, alpha.planned=.05, assurance=.80, power=.80, step=.001)
{
if(alpha.prior > 1 | alpha.prior <= 0) stop("There is a problem with 'alpha' of the prior study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(alpha.prior == 1) {alpha.prior <- .999 }
if(alpha.planned >= 1 | alpha.planned <= 0) stop("There is a problem with 'alpha' of the planned study (i.e., the Type I error rate), please specify as a value between 0 and 1 (the default is .05).")
if(assurance >= 1)
{
assurance <- assurance/100
}
if(assurance<0 | assurance>1)
{
stop("There is a problem with 'assurance' (i.e., the proportion of times statistical power is at or above the desired value), please specify as a value between 0 and 1 (the default is .80).")
}
if(assurance <.5)
{
warning( "THe assurance you have entered is < .5, which implies you will have under a 50% chance at achieving your desired level of power" )
}
if(power >= 1) power <- power/100
if(power<0 | power>1) stop("There is a problem with 'power' (i.e., desired statistical power), please specify as a value between 0 and 1 (the default is .80).")
if(missing(N)) stop("You must specify 'N', which is the total sample size.")
if(is.null(levels.B)) if(effect=="interaction") stop("You cannot select 'effect=interaction' if you do not specify 'levels.B'.")
if(is.null(levels.B)) cells <- levels.A
if(!is.null(levels.B)) cells <- levels.A*levels.B
NCP <- seq(from=0, to=100, by=step) # sequence of possible values for the noncentral parameter.
if(is.null(levels.B)) type="ANOVA.one.way"
if(!is.null(levels.B)) type="ANOVA.two.way"
if(type=="ANOVA.one.way")
{
cells <- levels.A
n <- N
df.numerator <- cells - 1
df.denominator <- (cells-1)*(n-1)
}
if(type=="ANOVA.two.way")
{
cells <- (levels.A*levels.B)
n <- N
if(effect=="factor.A")
{
df.numerator <- levels.A - 1
df.denominator <- (levels.A - 1)*(n - 1)
}
if(effect=="factor.B")
{
df.numerator <- levels.B - 1
df.denominator <- (levels.B - 1)*(n - 1)
}
if(effect=="interaction")
{
df.numerator <- (levels.A - 1)*(levels.B - 1)
df.denominator <- (levels.A - 1)*(levels.B - 1)*(n - 1)
}
}
f.density <- df(F.observed, df1=df.numerator, df2=df.denominator, ncp=NCP) # density of F using F observed
critF <- qf(1-alpha.prior, df1=df.numerator, df2=df.denominator)
if(F.observed <= critF) stop("Your observed F statistic is nonsignificant based on your specified alpha of the prior study. Please increase 'alpha.prior' so 'F.observed' exceeds the critical value")
power.values <- 1 - pf(critF, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above critical F
area.above.F <- 1 - pf(F.observed, df1=df.numerator, df2=df.denominator, ncp = NCP) # area above observed F
area.area.between <- power.values - area.above.F
TM <- area.area.between/power.values
TM.Percentile <- min(NCP[which(abs(TM-assurance)==min(abs(TM-assurance)))])
if(TM.Percentile==0) stop("The corrected noncentrality parameter is zero. Please either choose a lower value of assurance and/or a higher value of alpha for the prior study (e.g. accounting for less publication bias)")
if (TM.Percentile > 0)
{
nrep <- 2
if(type=="ANOVA.one.way") denom.df <- (cells-1)*(nrep-1)
if(type=="ANOVA.two.way")
{
if(effect=="factor.A") denom.df <- (levels.A - 1)*(nrep - 1)
if(effect=="factor.B") denom.df <- (levels.B - 1)*(nrep - 1)
if(effect=="interaction") denom.df <- (levels.A - 1)*(levels.B - 1)*(nrep - 1)
}
diff <- -1
while (diff < 0 )
{
criticalF <- qf(1-alpha.planned, df1 = df.numerator, df2 = denom.df)
powers <- 1 - pf(criticalF, df1 = df.numerator, df2 = denom.df, ncp = (nrep/n)*TM.Percentile)
diff <- powers - power
nrep <- nrep + 1
if(type=="ANOVA.one.way") denom.df <- (cells-1)*(nrep-1)
if(type=="ANOVA.two.way")
{
if(effect=="factor.A") denom.df <- (levels.A - 1)*(nrep - 1)
if(effect=="factor.B") denom.df <- (levels.B - 1)*(nrep - 1)
if(effect=="interaction") denom.df <- (levels.A - 1)*(levels.B - 1)*(nrep - 1)
}
}
}
repn <- nrep-1
return(list(repn, TM.Percentile)) # This is the same as total N needed
}
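# --- Illustrative usage (not part of the package examples) -------------------
# Correcting for uncertainty only, with no publication bias adjustment, by
# setting alpha.prior = 1 as described in the documentation above. The inputs
# mirror the @examples call and are hypothetical; the call is left commented
# out so this file remains a pure function definition.
# ss.power.wa(F.observed = 5, N = 60, levels.A = 2, levels.B = 3,
#             effect = "factor.B", alpha.prior = 1, alpha.planned = .05,
#             assurance = .80, power = .80, step = .001)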
|
/scratch/gouwar.j/cran-all/cranData/BUCSS/R/ss.power.wa.r
|
#' Hierarchical Bayesian vector autoregression
#'
#' Used to estimate hierarchical Bayesian Vector Autoregression (VAR) models in
#' the fashion of Giannone, Lenza and Primiceri (2015).
#' Priors are adjusted and added via \code{\link{bv_priors}}.
#' The Metropolis-Hastings step can be modified with \code{\link{bv_mh}}.
#'
#' The model can be expressed as:
#' \deqn{y_t = a_0 + A_1 y_{t-1} + ... + A_p y_{t-p} + \epsilon_t}{y_t = a_0 +
#' A_1 y_{t-1} + ... + A_p y_{t-p} + e_t}
#' See Kuschnig and Vashold (2021) and Giannone, Lenza and Primiceri (2015)
#' for further information.
#' Methods for a \code{bvar} object and its derivatives can be used to:
#' \itemize{
#' \item predict and analyse scenarios;
#' \item evaluate shocks and the variance of forecast errors;
#' \item visualise forecasts and impulse responses, parameters and residuals;
#'   \item retrieve coefficients and the variance-covariance matrix;
#' \item calculate fitted and residual values;
#' }
#' Note that these methods generally work by calculating quantiles from the
#' posterior draws. The full posterior may be retrieved directly from the
#' objects. The function \code{\link[utils]{str}} can be very helpful for this.
#'
#' @author Nikolas Kuschnig, Lukas Vashold
#'
#' @param data Numeric matrix or dataframe. Note that observations are expected
#' to be ordered from earliest to latest, and variables in the columns.
#' @param lags Integer scalar. Lag order of the model.
#' @param n_draw,n_burn Integer scalar. The number of iterations to (a) cycle
#' through and (b) burn at the start.
#' @param n_thin Integer scalar. Every \emph{n_thin}'th iteration is stored.
#' For a given memory requirement, thinning reduces autocorrelation while
#' increasing the effective sample size.
#' @param priors Object from \code{\link{bv_priors}} with prior settings.
#' Used to adjust the Minnesota prior, add custom dummy priors, and choose
#' hyperparameters for hierarchical estimation.
#' @param mh Object from \code{\link{bv_mh}} with settings for the
#' Metropolis-Hastings step. Used to tune automatic adjustment of the
#' acceptance rate within the burn-in period, or manually adjust the proposal
#' variance.
#' @param fcast Object from \code{\link{bv_fcast}} with forecast settings.
#' Options include the horizon and settings for conditional forecasts, i.e.
#' scenario analysis.
#' May also be calculated ex-post using \code{\link{predict.bvar}}.
#' @param irf Object from \code{\link{bv_irf}} with settings for the calculation
#' of impulse responses and forecast error variance decompositions. Options
#' include the horizon and different identification schemes.
#' May also be calculated ex-post using \code{\link{irf.bvar}}.
#' @param verbose Logical scalar. Whether to print intermediate results and
#' progress.
#' @param ... Not used.
#'
#' @return Returns a list of class \code{bvar} with the following elements:
#' \itemize{
#' \item \code{beta} - Numeric array with draws from the posterior of the VAR
#' coefficients. Also see \code{\link{coef.bvar}}.
#' \item \code{sigma} - Numeric array with draws from the posterior of the
#' variance-covariance matrix. Also see \code{\link{vcov.bvar}}.
#' \item \code{hyper} - Numeric matrix with draws from the posterior of the
#' hierarchically treated hyperparameters.
#' \item \code{ml} - Numeric vector with the marginal likelihood (with respect
#' to the hyperparameters), which determines the acceptance probability.
#' \item \code{optim} - List with outputs of \code{\link[stats]{optim}},
#' which is used to find starting values for the hyperparameters.
#' \item \code{prior} - Prior settings from \code{\link{bv_priors}}.
#' \item \code{call} - Call to the function. See \code{\link{match.call}}.
#' \item \code{meta} - List with meta information. Includes the number of
#' variables, accepted draws, number of iterations, and data.
#' \item \code{variables} - Character vector with the column names of
#' \emph{data}. If missing, variables are named iteratively.
#' \item \code{explanatories} - Character vector with names of explanatory
#' variables. Formatting is akin to: \code{"FEDFUNDS-lag1"}.
#' \item \code{fcast} - Forecasts from \code{\link{predict.bvar}}.
#' \item \code{irf} - Impulse responses from \code{\link{irf.bvar}}.
#' }
#'
#' @references
#' Giannone, D. and Lenza, M. and Primiceri, G. E. (2015) Prior Selection for
#' Vector Autoregressions. \emph{The Review of Economics and Statistics},
#' \bold{97:2}, 436-451, \doi{10.1162/REST_a_00483}.
#'
#' Kuschnig, N. and Vashold, L. (2021) BVAR: Bayesian Vector Autoregressions
#' with Hierarchical Prior Selection in R.
#' \emph{Journal of Statistical Software}, \bold{100:14}, 1-27,
#' \doi{10.18637/jss.v100.i14}.
#'
#' @seealso \code{\link{bv_priors}}; \code{\link{bv_mh}};
#' \code{\link{bv_fcast}}; \code{\link{bv_irf}};
#' \code{\link{predict.bvar}}; \code{\link{irf.bvar}}; \code{\link{plot.bvar}};
#'
#' @keywords BVAR Metropolis-Hastings MCMC priors hierarchical
#'
#' @export
#'
#' @importFrom utils setTxtProgressBar txtProgressBar
#' @importFrom stats optim runif quantile
#' @importFrom mvtnorm rmvnorm
#'
#' @examples
#' # Access a subset of the fred_qd dataset
#' data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")]
#' # Transform it to be stationary
#' data <- fred_transform(data, codes = c(5, 5, 1), lag = 4)
#'
#' # Estimate a BVAR using one lag, default settings and very few draws
#' x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE)
#'
#' # Calculate and store forecasts and impulse responses
#' predict(x) <- predict(x, horizon = 8)
#' irf(x) <- irf(x, horizon = 8, fevd = FALSE)
#'
#' \dontrun{
#' # Check convergence of the hyperparameters with a trace and density plot
#' plot(x)
#' # Plot forecasts and impulse responses
#' plot(predict(x))
#' plot(irf(x))
#' # Check coefficient values and variance-covariance matrix
#' summary(x)
#' }
bvar <- function(
data, lags,
n_draw = 10000L, n_burn = 5000L, n_thin = 1L,
priors = bv_priors(),
mh = bv_mh(),
fcast = NULL,
irf = NULL,
verbose = TRUE, ...) {
cl <- match.call()
start_time <- Sys.time()
# Setup and checks -----
# Data
if(!all(vapply(data, is.numeric, logical(1L))) ||
any(is.na(data)) || ncol(data) < 2) {
stop("Problem with the data. Make sure it is numeric, without any NAs.")
}
Y <- as.matrix(data)
# Integers
lags <- int_check(lags, min = 1L, max = nrow(Y) - 1, msg = "Issue with lags.")
n_draw <- int_check(n_draw, min = 10L, msg = "Issue with n_draw.")
n_burn <- int_check(n_burn, min = 0L, max = n_draw - 1L,
msg = "Issue with n_burn. Is n_burn < n_draw?")
n_thin <- int_check(n_thin, min = 1L, max = ((n_draw - n_burn) / 10),
msg = "Issue with n_thin. Maximum allowed is (n_draw - n_burn) / 10.")
n_save <- int_check(((n_draw - n_burn) / n_thin), min = 1L)
verbose <- isTRUE(verbose)
# Constructors, required
if(!inherits(priors, "bv_priors")) {
stop("Please use `bv_priors()` to configure the priors.")
}
if(!inherits(mh, "bv_metropolis")) {
stop("Please use `bv_mh()` to configure the Metropolis-Hastings step.")
}
# Not required
if(!is.null(fcast) && !inherits(fcast, "bv_fcast")) {
stop("Please use `bv_fcast()` to configure forecasts.")
}
if(!is.null(irf) && !inherits(irf, "bv_irf")) {
stop("Please use `bv_irf()` to configure impulse responses.")
}
if(mh[["adjust_acc"]]) {n_adj <- as.integer(n_burn * mh[["adjust_burn"]])}
# Preparation ---
X <- lag_var(Y, lags = lags)
Y <- Y[(lags + 1):nrow(Y), ]
X <- X[(lags + 1):nrow(X), ]
X <- cbind(1, X)
XX <- crossprod(X)
K <- ncol(X)
M <- ncol(Y)
N <- nrow(Y)
variables <- name_deps(variables = colnames(data), M = M)
explanatories <- name_expl(variables = variables, M = M, lags = lags)
# Priors -----
# Minnesota prior ---
b <- priors[["b"]]
if(length(b) == 1 || length(b) == M) {
priors[["b"]] <- matrix(0, nrow = K, ncol = M)
priors[["b"]][2:(M + 1), ] <- diag(b, M)
} else if(!is.matrix(b) || !all(dim(b) == c(K, M))) {
stop("Issue with the prior mean b. Please reconstruct.")
}
if(any(priors[["psi"]][["mode"]] == "auto")) {
psi_temp <- auto_psi(Y, lags)
priors[["psi"]][["mode"]] <- psi_temp[["mode"]]
priors[["psi"]][["min"]] <- psi_temp[["min"]]
priors[["psi"]][["max"]] <- psi_temp[["max"]]
}
if(!all(vapply(priors[["psi"]][1:3],
function(x) length(x) == M, logical(1L)))) {
stop("Dimensions of psi do not fit the data.")
}
# Parameters ---
pars_names <- names(priors)[ # Exclude reserved names
!grepl("^hyper$|^var$|^b$|^psi[0-9]+$|^dummy$", names(priors))]
pars_full <- do.call(c, lapply(pars_names, function(x) priors[[x]][["mode"]]))
names(pars_full) <- name_pars(pars_names, M)
# Hierarchical priors ---
hyper_n <- length(priors[["hyper"]]) +
sum(priors[["hyper"]] == "psi") * (M - 1)
if(hyper_n == 0) {stop("Please provide at least one hyperparameter.")}
get_priors <- function(name, par) {priors[[name]][[par]]}
hyper <- do.call(c, lapply(priors[["hyper"]], get_priors, par = "mode"))
hyper_min <- do.call(c, lapply(priors[["hyper"]], get_priors, par = "min"))
hyper_max <- do.call(c, lapply(priors[["hyper"]], get_priors, par = "max"))
names(hyper) <- name_pars(priors[["hyper"]], M)
# Split up psi ---
for(i in seq_along(priors[["psi"]][["mode"]])) {
priors[[paste0("psi", i)]] <- vapply(c("mode", "min", "max"), function(x) {
priors[["psi"]][[x]][i]}, numeric(1L))
}
# Optimise and draw -----
opt <- optim(par = hyper, bv_ml, gr = NULL,
hyper_min = hyper_min, hyper_max = hyper_max, pars = pars_full,
priors = priors, Y = Y, X = X, XX = XX, K = K, M = M, N = N, lags = lags,
opt = TRUE, method = "L-BFGS-B", lower = hyper_min, upper = hyper_max,
control = list("fnscale" = -1))
names(opt[["par"]]) <- names(hyper)
if(verbose) {
cat("Optimisation concluded.",
"\nPosterior marginal likelihood: ", round(opt[["value"]], 3),
"\nHyperparameters: ", paste(names(hyper), round(opt[["par"]], 5),
sep = " = ", collapse = "; "), "\n", sep = "")
}
# Hessian ---
if(length(mh[["scale_hess"]]) != 1 &&
length(mh[["scale_hess"]]) != length(hyper)) {
stop("Length of scale_hess does not match the ", length(hyper),
" hyperparameters. Please provide a scalar or an element for every ",
"hyperparameter (see `?bv_mn()`).")
}
H <- diag(length(opt[["par"]])) * mh[["scale_hess"]]
J <- unlist(lapply(names(hyper), function(name) {
exp(opt[["par"]][[name]]) / (1 + exp(opt[["par"]][[name]])) ^ 2 *
(priors[[name]][["max"]] - priors[[name]][["min"]])
}))
if(any(is.nan(J))) {
stop("Issue with parameter(s) ",
paste0(names(hyper)[which(is.nan(J))], collapse = ", "), ". ",
"Their mode(s) may be too large to exponentiate.")
}
if(hyper_n != 1) {J <- diag(J)}
HH <- J %*% H %*% t(J)
# Make sure HH is positive definite
if(hyper_n != 1) {
HH_eig <- eigen(HH)
HH_eig[["values"]] <- abs(HH_eig[["values"]])
HH <- HH_eig
} else {HH <- list("values" = abs(HH))}
# Initial draw ---
while(TRUE) {
hyper_draw <- rmvn_proposal(n = 1, mean = opt[["par"]], sigma = HH)[1, ]
ml_draw <- bv_ml(hyper = hyper_draw,
hyper_min = hyper_min, hyper_max = hyper_max, pars = pars_full,
priors = priors, Y = Y, X = X, XX = XX, K = K, M = M, N = N, lags = lags)
if(ml_draw[["log_ml"]] > -1e16) {break}
}
# Sampling -----
# Storage ---
accepted <- 0 -> accepted_adj # Beauty
ml_store <- vector("numeric", n_save)
hyper_store <- matrix(NA, nrow = n_save, ncol = length(hyper_draw),
dimnames = list(NULL, names(hyper)))
beta_store <- array(NA, c(n_save, K, M))
sigma_store <- array(NA, c(n_save, M, M))
if(verbose) {pb <- txtProgressBar(min = 0, max = n_draw, style = 3)}
# Start loop ---
for(i in seq.int(1 - n_burn, n_draw - n_burn)) {
# Metropolis-Hastings
hyper_temp <- rmvn_proposal(n = 1, mean = hyper_draw, sigma = HH)[1, ]
ml_temp <- bv_ml(hyper = hyper_temp,
hyper_min = hyper_min, hyper_max = hyper_max, pars = pars_full,
priors = priors, Y = Y, X = X, XX = XX, K = K, M = M, N = N, lags = lags)
if(runif(1) < exp(ml_temp[["log_ml"]] - ml_draw[["log_ml"]])) { # Accept
ml_draw <- ml_temp
hyper_draw <- hyper_temp
accepted_adj <- accepted_adj + 1
if(i > 0) {accepted <- accepted + 1}
}
# Tune acceptance during burn-in phase
if(mh[["adjust_acc"]] && i <= -n_adj && (i + n_burn) %% 10 == 0) {
acc_rate <- accepted_adj / (i + n_burn)
if(acc_rate < mh[["acc_lower"]]) {
HH[["values"]] <- HH[["values"]] * mh[["acc_tighten"]]
} else if(acc_rate > mh[["acc_upper"]]) {
HH[["values"]] <- HH[["values"]] * mh[["acc_loosen"]]
}
}
if(i > 0 && i %% n_thin == 0) { # Store draws
ml_store[(i / n_thin)] <- ml_draw[["log_ml"]]
hyper_store[(i / n_thin), ] <- hyper_draw
# Draw parameters, i.e. beta_draw and sigma_draw
# These need X and N with the dummy observations from `ml_draw`
draws <- draw_post(XX = ml_draw[["XX"]], N = ml_draw[["N"]],
M = M, lags = lags, b = priors[["b"]], psi = ml_draw[["psi"]],
sse = ml_draw[["sse"]], beta_hat = ml_draw[["beta_hat"]],
omega_inv = ml_draw[["omega_inv"]])
beta_store[(i / n_thin), , ] <- draws[["beta_draw"]]
sigma_store[(i / n_thin), , ] <- draws[["sigma_draw"]]
} # End store
if(verbose) {setTxtProgressBar(pb, (i + n_burn))}
} # End loop
timer <- Sys.time() - start_time
if(verbose) {
close(pb)
cat("Finished MCMC after ", format(round(timer, 2)), ".\n", sep = "")
}
# Outputs -----
out <- structure(list(
"beta" = beta_store, "sigma" = sigma_store,
"hyper" = hyper_store, "ml" = ml_store,
"optim" = opt, "priors" = priors, "call" = cl,
"variables" = variables, "explanatories" = explanatories,
"meta" = list("accepted" = accepted, "timer" = timer,
"Y" = Y, "X" = X, "N" = N, "K" = K, "M" = M, "lags" = lags,
"n_draw" = n_draw, "n_burn" = n_burn, "n_save" = n_save,
"n_thin" = n_thin)
), class = "bvar")
if(!is.null(irf)) {
if(verbose) {cat("Calculating impulse responses.")}
out[["irf"]] <- tryCatch(irf.bvar(out, irf), error = function(e) {
warning("\nImpulse response calculation failed with:\n", e)
return(NULL)})
if(verbose) {cat("..Done!\n")}
}
if(!is.null(fcast)) {
if(verbose) {cat("Calculating forecasts.")}
out[["fcast"]] <- tryCatch(predict.bvar(out, fcast), error = function(e) {
warning("\nForecast calculation failed with:\n", e)
return(NULL)})
if(verbose) {cat("..Done!\n")}
}
return(out)
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/10_bvar.R
|
#' Lag a matrix
#'
#' Compute a lagged version of a matrix to be used in vector autoregressions.
#' Higher lags are further to the right.
#'
#' @param x Matrix (\eqn{N * M}) to lag.
#' @param lags Integer scalar. Number of lags to apply.
#'
#' @return Returns an \eqn{N * (M * lags)} matrix with consecutive lags on the
#' right. The elements of the first \emph{lags} rows are 0.
#'
#' @noRd
lag_var <- function(x, lags) {
x_rows <- nrow(x)
x_cols <- ncol(x)
x_lagged <- matrix(0, x_rows, lags * x_cols)
for(i in 1:lags) {
x_lagged[(lags + 1):x_rows, (x_cols * (i - 1) + 1):(x_cols * i)] <-
x[(lags + 1 - i):(x_rows - i), (1:x_cols)]
}
return(x_lagged)
}
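# Illustrative sketch (not package code): the layout produced by lag_var().
# For a 2-column matrix and lags = 2 the result has 2 * 2 = 4 columns, with
# lag 1 in columns 1:2, lag 2 in columns 3:4, and zeros in the first 2 rows.
if(FALSE) {
  x <- matrix(1:10, nrow = 5, ncol = 2)
  lag_var(x, lags = 2)
  #      [,1] [,2] [,3] [,4]
  # [1,]    0    0    0    0
  # [2,]    0    0    0    0
  # [3,]    2    7    1    6
  # [4,]    3    8    2    7
  # [5,]    4    9    3    8
}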
#' Compute gamma coefficients
#'
#' Compute the shape \emph{k} and scale \emph{theta} of a Gamma
#' distribution via the mode and standard deviation.
#'
#' @param mode Numeric scalar.
#' @param sd Numeric scalar.
#'
#' @return Returns a list with shape \emph{k} and scale parameter \emph{theta}.
#'
#' @noRd
gamma_coef <- function(mode, sd) {
mode_sq <- mode ^ 2
sd_sq <- sd ^ 2
k <- (2 + mode_sq / sd_sq + sqrt((4 + mode_sq / sd_sq) * mode_sq / sd_sq)) / 2
theta <- sqrt(sd_sq / k)
return(list("k" = k, "theta" = theta))
}
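# Illustrative check (not package code): for a Gamma(k, theta) distribution
# the mode is (k - 1) * theta and the standard deviation is sqrt(k) * theta,
# so the returned coefficients should reproduce the requested mode and sd.
if(FALSE) {
  cf <- gamma_coef(mode = 0.2, sd = 0.4) # defaults of bv_lambda()
  (cf$k - 1) * cf$theta # ~0.2
  sqrt(cf$k) * cf$theta # ~0.4
}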
#' Auto-set psi of the Minnesota prior
#'
#' Automatically set the prior values of \emph{psi}. Fits an \eqn{AR(p)} model
#' and sets the mode to the square root of the innovations variance. Boundaries
#' are set to the mode multiplied / divided by 100.
#'
#' If the call to \code{\link[stats]{arima}} fails, an integrated
#' \eqn{ARIMA(p, 1, 0)} model is fitted instead.
#'
#' @param x Numeric matrix with the data.
#' @param lags Numeric scalar. Number of lags in the model.
#'
#' @importFrom stats arima
#'
#' @return Returns a list with the modes, minimum, and maximum values for
#' \emph{psi}.
#'
#' @noRd
auto_psi <- function(x, lags) {
out <- list("mode" = rep(NA_real_, ncol(x)))
for(j in seq_len(ncol(x))) {
ar_sigma2 <- tryCatch(sqrt(arima(x[, j], order = c(lags, 0, 0))$sigma2),
error = function(e) { # If this fails, increase the order of integration
message("Caught an error while automatically setting psi. ",
"Column ", j, " appears to be integrated; caught error:\n", e, "\n",
"Attempting to increase order of integration via an ARIMA(",
lags, ", 1, 0) model.")
# Integrated ARMA instead
tryCatch(sqrt(arima(x[, j], order = c(lags, 1, 0))$sigma2),
error = function(f) {
stop("Cannot set psi automatically via ARIMA(", lags, ", 0/1, 0)",
"Caught the error:\n", f, "\nPlease inspect the data ",
"or provide psi manually (see `?bv_psi`).")
})
}, warning = function(w) {
message("Caught a warning while setting psi automatically:\n", w, "\n")
suppressWarnings(sqrt(arima(x[, j], order = c(lags, 0, 0))$sigma2))
}
)
out[["mode"]][j] <- ar_sigma2
}
out[["min"]] <- out[["mode"]] / 100
out[["max"]] <- out[["mode"]] * 100
return(out)
}
# auto_psi <- function(x, lags) {
#
# out <- list("mode" = rep(NA_real_, ncol(x)))
#
# for(j in seq_len(ncol(x))) {
#
# y_0 <- cbind(1, lag_var(x[, j, drop = FALSE], lags = lags))
# y_0 <- y_0[(lags + 1):nrow(y_0), ]
# y_1 <- as.matrix(x[(lags + 1):nrow(x), j, drop = FALSE])
#
# ar_beta <- chol2inv(chol(crossprod(y_0))) %*% crossprod(y_0, y_1)
# ar_resid <- y_1 - y_0 %*% ar_beta
#
# out[["mode"]][j] <- sqrt(sum(ar_resid^2) / (nrow(y_0) - lags - 1))
# }
#
# out[["min"]] <- out[["mode"]] / 100
# out[["max"]] <- out[["mode"]] * 100
#
# return(out)
# }
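# Illustrative sketch (not package code): on simulated white noise the modes
# returned by auto_psi() should sit close to the innovation standard
# deviations, with the boundaries two orders of magnitude below / above.
if(FALSE) {
  set.seed(42)
  x <- cbind(rnorm(200, sd = 1), rnorm(200, sd = 2))
  auto_psi(x, lags = 1)
}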
#' Compute companion matrix
#'
#' Compute the companion form of the VAR coefficients.
#'
#' @param beta Numeric (\eqn{K * M}) matrix with VAR coefficients.
#' @param K Integer scalar. Number of columns in the independent data.
#' @param M Integer scalar. Number of columns in the dependent data.
#' @param lags Integer scalar. Number of lags applied.
#'
#' @return Returns a numeric (\eqn{(K - 1) * (K - 1)}) matrix with \emph{beta} in
#' companion form.
#'
#' @noRd
get_beta_comp <- function(beta, K, M, lags) {
beta_comp <- matrix(0, K - 1, K - 1)
beta_comp[1:M, ] <- t(beta[2:K, ]) # Kick constant
if(lags > 1) { # Add block-diagonal matrix beneath VAR coefficients
beta_comp[(M + 1):(K - 1), 1:(K - 1 - M)] <- diag(M * (lags - 1))
}
return(beta_comp)
}
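# Illustrative sketch (not package code): companion form of a bivariate
# VAR(2). With M = 2 and lags = 2, K = M * lags + 1 = 5 and the companion
# matrix is 4 x 4 -- the transposed lag coefficients (constant dropped)
# stacked on top of an identity block. The coefficients here are random.
if(FALSE) {
  M <- 2; lags <- 2; K <- M * lags + 1
  beta <- matrix(rnorm(K * M), nrow = K, ncol = M)
  A <- get_beta_comp(beta, K = K, M = M, lags = lags)
  dim(A) # 4 x 4
  # Spectral radius below 1 would indicate a stable VAR
  max(abs(eigen(A, only.values = TRUE)$values))
}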
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/11_input.R
|
#' Check numeric scalar
#'
#' Check whether an object is bounded and coercible to a numeric value.
#'
#' @param x Numeric scalar.
#' @param min Numeric scalar. Minimum value of \emph{x}.
#' @param max Numeric scalar. Maximum value of \emph{x}.
#' @param fun Function to apply to \emph{x} before returning.
#' @param msg String fed to \code{\link[base]{stop}} if an error occurs.
#'
#' @return Returns \code{fun(x)}.
#'
#' @noRd
num_check <- function(
x, min = 0, max = Inf,
msg = "Please check the numeric parameters.",
fun = as.numeric) {
if(!is.numeric(x) || length(x) != 1 || x < min || x > max) {stop(msg)}
return(fun(x))
}
#' @noRd
int_check <- function(
x, min = 0L, max = Inf,
msg = "Please check the integer parameters.") {
num_check(x, min, max, msg, fun = as.integer)
}
#' Name hyperparameters
#'
#' Function to help name hyperparameters. Accounts for multiple occurrences
#' of \emph{psi} by adding sequential numbers.
#'
#' @param x Character vector. Parameter names.
#' @param M Integer scalar. Number of columns in the data.
#'
#' @return Returns a character vector of adjusted parameter names.
#'
#' @noRd
name_pars <- function(x, M) {
out <- Reduce(c, sapply(x, function(y) {
if(y == "psi") {paste0(y, 1:M)} else {y}}))
return(out)
}
#' Fill credible intervals
#'
#' Helper function to fill data, colours or similar things based on credible
#' intervals. These are used in \code{\link{plot.bvar_irf}} and
#' \code{\link{plot.bvar_fcast}}.
#'
#' Note that transparency may get appended to recycled HEX colours. Also note
#' that no central element (i.e. one of length zero) is required when drawing
#' polygons.
#'
#' @param x Scalar or vector. The central element.
#' @param y Scalar or vector. Value(s) to surround the central element with.
#' The first value is closest, values may get recycled.
#' @param P Odd integer scalar. Number of total bands.
#'
#' @return Returns a vector or matrix (if \emph{x} is a vector) of \emph{x},
#' surrounded by \emph{y}.
#'
#' @noRd
fill_ci <- function(x, y, P) {
n_y <- if(P %% 2 == 0) {
stop("No central position for x found.")
} else {P %/% 2}
fill <- rep(y, length.out = n_y)
if(length(x) > 1) { # Matrix
n_row <- length(x)
return(cbind(t(rev(fill))[rep(1, n_row), ], x, t(fill)[rep(1, n_row), ]))
} else { # Vector
return(c(rev(fill), x, fill))
}
}
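# Illustrative sketch (not package code): surround a central element with
# P %/% 2 values on each side, mirrored so the first value of y sits closest
# to the centre. The inputs are made up.
if(FALSE) {
  fill_ci(x = 0, y = c(NA, -1), P = 5) # -1 NA 0 NA -1
  fill_ci(x = c(0, 1), y = NA, P = 3)  # matrix variant, one row per element
}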
#' @noRd
fill_ci_na <- function(x, P) {
# Corner case when quantiles are missing (t_back or conditional forecasts)
if(P == 2) {return(if(length(x) > 1) {cbind(x, NA)} else {c(x, NA)})}
fill_ci(x = x, y = NA, P = P)
}
#' @noRd
fill_ci_col <- function(x, y, P) {
# Apply transparency to HEX colours
if(length(y) == 1 && is_hex(y, alpha = FALSE)) {
y <- paste0(y, alpha_hex(P))
}
fill_ci(x = x, y = y, P = P)
}
#' Get a transparency HEX code
#'
#' @param P Integer scalar. Number of total bands.
#'
#' @return Returns a character vector of transparency codes.
#'
#' @importFrom grDevices rgb
#'
#' @noRd
alpha_hex <- function(P) {
n_trans <- P %/% 2
out <- switch(n_trans, # Handpicked with love
"FF", c("FF", "80"), c("FF", "A8", "54"),
c("FF", "BF", "80", "40"), c("FF", "CC", "99", "66", "33"))
if(is.null(out)) { # Let rgb() sort it out otherwise
out <- substr(rgb(1, 1, 1, seq(1, 0, length.out = n_trans)), 8, 10)
}
return(out)
}
#' Check valid HEX colour
#'
#' @param x Character scalar or vector. String(s) to check.
#' @param alpha Logical scalar. Whether the string may contain alpha values.
#'
#' @return Returns a logical scalar or vector.
#'
#' @noRd
is_hex <- function(x, alpha = FALSE) {
if(alpha) return(grepl("^#[0-9a-fA-F]{6,8}$", x))
return(grepl("^#[0-9a-fA-F]{6,6}$", x))
}
#' Get variable positions
#'
#' Helper functions to aid with variable selection, e.g. in
#' \code{\link{plot.bvar_irf}} and \code{\link{plot.bvar_fcast}}.
#'
#' @param vars Numeric or character vector of variables to subset to.
#' @param variables Character vector of all variable names. Required if
#' \emph{vars} is provided as character vector.
#' @param M Integer scalar. Count of all variables.
#'
#' @return Returns a numeric vector with the positions of desired variables.
#'
#' @noRd
pos_vars <- function(vars, variables, M) {
if(is.null(vars) || length(vars) == 0L) {
return(1:M) # Full set
}
if(is.numeric(vars)) {
return(vapply(vars, int_check, # By position
min = 1, max = M, msg = "Variable(s) not found.", integer(1)))
}
if(is.character(vars) && !is.null(variables)) {
out <- do.call(c, lapply(vars, grep, variables)) # By name
if(length(out) > 0) {return(out)}
}
stop("Variable(s) not found.")
}
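# Illustrative sketch (not package code): resolving variable positions by
# name or by position, falling back to the full set when nothing is given.
# The variable names are hypothetical.
if(FALSE) {
  vars <- c("CPIAUCSL", "UNRATE", "FEDFUNDS")
  pos_vars(NULL, variables = vars, M = 3)     # 1 2 3
  pos_vars("UNRATE", variables = vars, M = 3) # 2
  pos_vars(c(1, 3), variables = vars, M = 3)  # 1 3
}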
#' Name dependent / explanatory variables
#'
#' @param variables Character vector of all variable names.
#' @param M Integer scalar. Count of all variables.
#' @param lags Integer scalar. Number of lags applied.
#'
#' @return Returns a character vector of variable names.
#'
#' @noRd
name_deps <- function(variables, M) {
if(is.null(variables)) {
variables <- paste0("var", seq(M))
} else if(length(variables) != M) {
stop("Vector with variables is incomplete.")
}
return(variables)
}
#' @noRd
name_expl <- function(variables, M, lags) {
if(is.null(variables)) {
variables <- name_deps(variables, M)
}
explanatories <- c("constant", paste0(rep(variables, lags), "-lag",
rep(seq(lags), each = length(variables))))
return(explanatories)
}
#' Compute log distribution function of Inverse Gamma
#'
#' @param x Numeric scalar. Draw of the IG-distributed variable.
#' @param shape Numeric scalar.
#' @param scale Numeric scalar.
#'
#' @return Returns the log Inverse Gamma distribution function.
#'
#' @noRd
p_log_ig <- function(x, shape, scale) {
return(shape * log(scale) - (shape + 1) * log(x) - scale / x - lgamma(shape))
}
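# Illustrative check (not package code): if X follows an Inverse Gamma with
# the given shape and scale, then 1 / X is Gamma distributed with the same
# shape and rate equal to the scale. The density can therefore be
# cross-checked against stats::dgamma() with the Jacobian factor 1 / x^2.
if(FALSE) {
  x <- 2.5; shape <- 0.004; scale <- 0.004
  all.equal(exp(p_log_ig(x, shape, scale)),
    dgamma(1 / x, shape = shape, rate = scale) / x^2)
}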
#' Check whether a package is installed
#'
#' @param package Character scalar.
#'
#' @noRd
has_package <- function(package) {
if(!requireNamespace(package, quietly = TRUE)) {
stop("Package \'", package, "\' required for this method.", call. = FALSE)
}
return(NULL)
}
#' Generate quantiles
#'
#' Check a vector of confidence bands and create quantiles from it.
#'
#' @param conf_bands Numeric vector of probabilities (\eqn{(0, 1)}).
#'
#' @return Returns a sorted, symmetric vector of quantiles.
#'
#' @noRd
quantile_check <- function(conf_bands) {
conf_bands <- sapply(conf_bands, num_check,
min = 0 + 1e-16, max = 1 - 1e-16, msg = "Confidence bands misspecified.")
# Allow only returning the median
if(length(conf_bands) == 1 && conf_bands == 0.5) {return(conf_bands)}
# Sort and make sure we have no duplicates (thank mr float)
quants <- sort(c(conf_bands, 0.5, (1 - conf_bands)))
quants <- quants[!duplicated(round(quants, digits = 12L))]
return(quants)
}
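# Illustrative sketch (not package code): confidence bands are mirrored
# around the median and de-duplicated; a single band of 0.5 returns only
# the median.
if(FALSE) {
  quantile_check(c(0.16, 0.05)) # 0.05 0.16 0.50 0.84 0.95
  quantile_check(0.5)           # 0.5
}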
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/12_aux.R
|
#' Optimised multivariate normal drawing
#'
#' Function to quickly draw from a multivariate normal distribution in special
#' cases required by \code{\link{bvar}}. Based on the implementation by
#' Friedrich Leisch and Fabian Scheipl in \code{\link[mvtnorm]{rmvnorm}}.
#'
#' The two special cases are (1) the proposal, where the spectral decomposition
#' of \emph{sigma} only needs to be calculated once, and (2) drawing when only
#' the inverse of \emph{sigma} is available. Note that we skip steps to check
#' the symmetry and definiteness of \emph{sigma}, since these properties are
#' given by construction.
#'
#' @param n Numeric scalar. Number of draws.
#' @param mean Numeric vector. Mean of the draws.
#' @param sigma Numeric matrix. Variance-covariance of draws.
#' @param sigma_inv Numeric matrix. Inverse of variance-covariance of draws.
#' @param method Character scalar. Type of decomposition to use.
#'
#' @return Returns a numeric matrix of draws.
#'
#' @noRd
rmvn_proposal <- function(n, mean, sigma) {
# Univariate cornercase ---
if(length(sigma[["values"]]) == 1) {
out <- matrix(rnorm(n, mean = mean, sd = sigma[["values"]]))
colnames(out) <- names(mean)
return(out)
}
# Multivariate ---
m <- length(sigma[["values"]])
R <- t(sigma[["vectors"]] %*%
(t(sigma[["vectors"]]) * sqrt(sigma[["values"]])))
out <- matrix(rnorm(n * m), nrow = n, ncol = m, byrow = TRUE) %*% R
out <- sweep(out, 2, mean, "+")
colnames(out) <- names(mean)
return(out)
}
#' @noRd
rmvn_inv <- function(n, sigma_inv, method) {
if(method == "eigen") {
# Spectral ---
sigma_inv <- eigen(sigma_inv, symmetric = TRUE)
m <- length(sigma_inv[["values"]])
R <- t(sigma_inv[["vectors"]] %*%
(t(sigma_inv[["vectors"]]) * sqrt(1 / pmax(sigma_inv[["values"]], 0))))
out <- matrix(rnorm(n * m), nrow = n, ncol = m, byrow = TRUE) %*% R
} else if(method == "chol") {
# Cholesky ---
m <- ncol(sigma_inv)
R <- chol(sigma_inv)
out <- t(backsolve(R,
matrix(rnorm(n * m), nrow = m, ncol = n, byrow = TRUE)))
} else {stop("SOMEBODY TOUCHA MY SPAGHET!")}
return(out)
}
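# Illustrative check (not package code): draws from rmvn_inv() based on the
# inverse of a covariance matrix should roughly reproduce that covariance,
# and both decomposition methods should agree up to sampling noise. The
# covariance matrix below is made up.
if(FALSE) {
  set.seed(42)
  sigma <- matrix(c(1, 0.5, 0.5, 2), 2, 2)
  sigma_inv <- solve(sigma)
  draws_eig <- rmvn_inv(5000, sigma_inv = sigma_inv, method = "eigen")
  draws_chl <- rmvn_inv(5000, sigma_inv = sigma_inv, method = "chol")
  round(cov(draws_eig), 2) # close to sigma
  round(cov(draws_chl), 2) # close to sigma
}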
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/13_mvtnorm.R
|
#' Prepare BVAR data for methods
#'
#' Helper function to retrieve hyperparameters or coefficient values based on
#' name / position. Also supports multiple \code{bvar} objects and may be used
#' to check them for similarity.
#'
#' @param x A \code{bvar} object, obtained from \code{\link{bvar}}.
#' @param vars Character vector used to select variables. Elements are matched
#' to hyperparameters or coefficients. Coefficients may be matched based on
#' the dependent variable (by providing the name or position) or the
#' explanatory variables (by providing the name and the desired lag). See the
#' example section for a demonstration. Defaults to \code{NULL}, i.e. all
#' hyperparameters.
#' @param vars_response,vars_impulse Optional character or integer vectors used
#' to select coefficients. Dependent variables are specified with
#' \emph{vars_response}, explanatory ones with \emph{vars_impulse}. See the
#' example section for a demonstration.
#' @param chains List with additional \code{bvar} objects. Contents are then
#' added to trace and density plots.
#' @param check_chains Logical scalar. Whether to check \emph{x} and
#' \emph{chains} for similarity.
#' @param ... Fed to \code{\link{chains_fit}}.
#'
#' @return Returns a named list with:
#' \itemize{
#' \item \code{data} - Numeric matrix with desired data.
#' \item \code{vars} - Character vector with names for the desired data.
#' \item \code{chains} - List of numeric matrices with desired data.
#' \item \code{bounds} - Numeric matrix with optional boundaries.
#' }
#'
#' @noRd
prep_data <- function(
x,
vars = NULL,
vars_response = NULL, vars_impulse = NULL,
chains = list(),
check_chains = FALSE, ...) {
if(!inherits(x, "bvar")) {stop("Please provide a `bvar` object.")}
if(inherits(chains, "bvar")) {chains <- list(chains)}
lapply(chains, function(x) {if(!inherits(x, "bvar")) {
stop("Please provide `bvar` objects to the chains parameter.")
}})
if(check_chains) {chains_fit(x, chains, ...)}
# Prepare selection ---
vars_hyp <- c("ml", colnames(x[["hyper"]]))
vars_dep <- x[["variables"]]
vars_ind <- x[["explanatories"]]
if(is.null(vars_ind)) { # Compatibility to older versions (<= 0.2.2)
vars_ind <- name_expl(vars_dep,
M = x[["meta"]][["M"]], lags = x[["meta"]][["lags"]])
}
if(is.null(vars) && is.null(vars_impulse) && is.null(vars_response)) {
vars <- vars_hyp
}
choice_hyp <- vars_hyp[unique(do.call(c, lapply(vars, grep, vars_hyp)))]
choice_dep <- if(is.null(vars_response)) {
# Interpret numbers as positions, exclude independents
vars_dep[unique(c(as.integer(vars[grep("^[0-9]+$", vars)]),
do.call(c, lapply(vars[!grepl("(^const|lag[0-9]+$)", vars)],
grep, vars_dep))))]
} else {pos_vars(vars_response, vars_dep, M = x[["meta"]][["M"]])}
choice_dep <- choice_dep[!is.na(choice_dep)]
choice_ind <- if(is.null(vars_impulse)) {
# Limit to ones with "-lag#" or "constant" to separate from dependents
vars_ind[unique(do.call(c, lapply(vars[grep("(^const|lag[0-9]+$)", vars)],
grep, vars_ind)))]
} else {pos_vars(vars_impulse, vars_ind, M = x[["meta"]][["K"]])}
if(all(c(length(choice_hyp), length(choice_dep), length(choice_ind)) == 0)) {
stop("No matching data found.")
}
# Build up required outputs ---
out <- out_vars <- out_bounds <- out_chains <- list()
N <- x[["meta"]][["n_save"]]
if(length(choice_hyp) > 0) { # Hyperparameters
out[["hyper"]] <- cbind("ml" = x[["ml"]], x[["hyper"]])[seq(N), choice_hyp]
out_vars[["hyper"]] <- choice_hyp
out_bounds[["hyper"]] <- vapply(choice_hyp, function(z) {
if(z == "ml") {c(NA, NA)} else {
c(x[["priors"]][[z]][["min"]], x[["priors"]][[z]][["max"]])
}}, double(2))
out_chains[["hyper"]] <- lapply(chains, function(z) {
cbind("ml" = z[["ml"]], z[["hyper"]])[seq(N), choice_hyp]
})
} else {
out_chains[["hyper"]] <- rep(list(NULL), length(chains))
}
if(length(choice_dep) > 0 || length(choice_ind) > 0) { # Betas
pos_dep <- pos_vars(choice_dep,
variables = vars_dep, M = x[["meta"]][["M"]])
pos_ind <- pos_vars(choice_ind,
variables = vars_ind, M = x[["meta"]][["K"]])
K <- length(pos_dep) * length(pos_ind)
out[["betas"]] <- grab_betas(x, N, K, pos_dep, pos_ind)
out_vars[["betas"]] <- paste0(
rep(vars_dep[pos_dep], length(pos_ind)), "_",
rep(vars_ind[pos_ind], each = length(pos_dep)))
out_bounds[["betas"]] <- matrix(NA, ncol = K, nrow = 2)
out_chains[["betas"]] <- lapply(chains, grab_betas, N, K, pos_dep, pos_ind)
} else {
out_chains[["betas"]] <- rep(list(NULL), length(chains))
}
# Merge stuff and return ---
out_data <- cbind(out[["hyper"]], out[["betas"]])
out_vars <- c(out_vars[["hyper"]], out_vars[["betas"]])
out_chains <- mapply(cbind,
out_chains[["hyper"]], out_chains[["betas"]], SIMPLIFY = FALSE)
out_chains <- lapply(out_chains, `colnames<-`, out_vars)
colnames(out_data) <- out_vars
out <- list(
"data" = out_data, "vars" = out_vars, "chains" = out_chains,
"bounds" = cbind(out_bounds[["hyper"]], out_bounds[["betas"]]))
return(out)
}
#' Grab draws of certain betas
#'
#' Helper function for \code{\link{prep_data}}.
#'
#' @param x A \code{bvar} object, obtained from \code{\link{bvar}}.
#' @param N,K Integer scalars. Number of rows and columns to return.
#' @param pos_dep,pos_ind Numeric vectors. Positions of desired variables.
#'
#' @return Returns a matrix with the requested data.
#'
#' @noRd
grab_betas <- function(x, N, K, pos_dep, pos_ind) {
data <- matrix(NA, nrow = N, ncol = K)
k <- 1
for(i in pos_ind) {for(j in pos_dep) {
data[, k] <- x[["beta"]][seq(N), i, j] # seq() for longer chains
k <- k + 1
}}
return(data)
}
#' Check equalities across chains
#'
#' Function to help check whether \code{bvar} objects are close enough to
#' compare. Accessed via \code{\link{prep_data}}.
#'
#' @param x A \code{bvar} object, obtained from \code{\link{bvar}}.
#' @param chains List with additional \code{bvar} objects.
#' @param Ms Logical scalar. Whether to check equality of
#' \code{x[["meta"]][["M"]]}.
#' @param n_saves Logical scalar. Whether to check equality of
#' \code{x[["meta"]][["n_save"]]}.
#' @param hypers Logical scalar. Whether to check equality of
#' \code{x[["priors"]][["hyper"]]}.
#'
#' @return Returns \code{TRUE} or throws an error.
#'
#' @noRd
chains_fit <- function(
x, chains,
Ms = TRUE,
n_saves = FALSE,
hypers = FALSE) {
if(is.null(chains) || length(chains) == 0) {return(TRUE)}
if(Ms) {
Ms <- c(x[["meta"]][["M"]],
vapply(chains, function(x) {x[["meta"]][["M"]]}, integer(1)))
if(!all(duplicated(Ms)[-1])) {stop("Number of variables does not match.")}
}
if(n_saves) {
n_saves <- c(x[["meta"]][["n_save"]],
vapply(chains, function(x) {x[["meta"]][["n_save"]]}, integer(1)))
if(!all(duplicated(n_saves)[-1])) {
stop("Number of stored iterations does not match.")
}
}
if(hypers) {
hypers <- vapply(chains, function(z) {
all(x[["priors"]][["hyper"]] == z[["priors"]][["hyper"]])}, logical(1))
if(!all(hypers)) {stop("Hyperparameters do not match.")}
}
return(TRUE)
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/15_prep_data.R
|
#' Log-posterior of a BVAR
#'
#' Compute the log-posterior (or log-marginal-likelihood) of a Bayesian VAR
#' with a Minnesota prior and optional dummy priors. Prior parameters may be
#' treated hierarchically. Create objects necessary for drawing from the
#' posterior distributions of coefficients and covariance matrix of the
#' residuals.
#'
#' @param hyper Named numeric vector. Hyperparameters for hierarchical
#' estimation.
#' @param hyper_min,hyper_max Optional numeric vector. Minimum / maximum values
#' allowed for hyperparameters. If these are breached a value of -1e18 is
#' returned.
#' @param pars Named numeric vector with prior parameters. Values also found
#' in \emph{hyper} are overwritten with their hierarchical counterparts.
#' @param Y Numeric \eqn{N * M} matrix.
#' @param X Numeric \eqn{N * K} matrix.
#' @param XX Numeric \eqn{K * K} matrix. Crossproduct of \emph{X}, used to save
#' matrix calculations when no dummy priors are included.
#' @param K Integer scalar. Columns of \emph{X}, i.e. \eqn{M * lags + 1}.
#' @param M Integer scalar. Columns of \emph{Y}, i.e. number of variables.
#' @param N Integer scalar. Rows of \emph{Y}, alternatively \emph{X}.
#' @param opt Optional logical scalar. Determines whether the return value is
#' a numeric scalar or a list. Used to call \code{\link{bv_ml}} in
#' \code{\link[stats]{optim}}.
#' @inheritParams bvar
#'
#' @return Returns a list by default, containing the following objects:
#' \itemize{
#' \item \code{log_ml} - A numeric scalar with the log-posterior.
#' \item \code{XX}, \code{N} - The crossproduct of the lagged data matrix,
#' potentially with dummy priors and the number of rows including them.
#' Necessary for drawing from posterior distributions with
#' \code{\link{draw_post}}.
#' \item \code{psi}, \code{sse}, \code{beta_hat}, \code{omega_inv} - Further
#' values necessary for drawing from posterior distributions.
#' }
#' If opt is \code{TRUE} only a numeric scalar with \code{log_ml} is returned.
#'
#' @importFrom stats dgamma
#'
#' @noRd
bv_ml <- function(
hyper, hyper_min = -Inf, hyper_max = Inf,
pars, priors, Y, X, XX, K, M, N, lags,
opt = FALSE) {
# Check bounds ---
if(any(hyper_min > hyper | hyper > hyper_max)) {
if(opt) {return(-1e18)} else {return(list("log_ml" = -1e18))}
}
# Priors -----
# Overwrite passed parameters with hyperparameters if provided
for(name in unique(names(hyper))) {
pars[names(pars) == name] <- hyper[names(hyper) == name]
}
psi_vec <- pars[grep("^psi[0-9]*", names(pars))]
psi <- diag(psi_vec)
omega <- vector("numeric", 1 + M * lags)
omega[1] <- priors[["var"]]
for(i in seq.int(1, lags)) {
omega[seq.int(2 + M * (i - 1), 1 + i * M)] <- pars[["lambda"]] ^ 2 /
i ^ pars[["alpha"]] / psi_vec
}
# Dummy priors
if(length(priors[["dummy"]]) > 0) {
dmy <- lapply(priors[["dummy"]], function(x) {
tryCatch(priors[[x]][["fun"]](Y = Y, lags = lags, par = pars[[x]]),
error = function(e) {
message("Issue generating dummy observations for ",
x, ". Make sure the provided function works properly.")
stop(e)})
})
Y_dmy <- do.call(rbind, lapply(dmy, function(x) matrix(x[["Y"]], ncol = M)))
X_dmy <- do.call(rbind, lapply(dmy, function(x) matrix(x[["X"]], ncol = K)))
N_dummy <- nrow(Y_dmy)
Y <- rbind(Y_dmy, Y)
X <- rbind(X_dmy, X)
XX <- crossprod(X)
N <- nrow(Y)
}
# Calc -----
omega_inv <- diag(1 / omega)
psi_inv <- diag(1 / sqrt(psi_vec))
omega_sqrt <- diag(sqrt(omega))
b <- priors[["b"]]
# Likelihood ---
ev_full <- calc_ev(omega_inv = omega_inv, omega_sqrt = omega_sqrt,
psi_inv = psi_inv, X = X, XX = XX, Y = Y, b = b, beta_hat = TRUE)
log_ml <- calc_logml(M = M, N = N, psi = psi,
omega_ml_ev = ev_full[["omega"]], psi_ml_ev = ev_full[["psi"]])
# Add priors
log_ml <- log_ml + sum(vapply(
priors[["hyper"]][which(!priors$hyper == "psi")], function(x) {
dgamma(pars[[x]],
shape = priors[[x]][["coef"]][["k"]],
scale = priors[[x]][["coef"]][["theta"]], log = TRUE)
}, numeric(1L)))
if(any(priors[["hyper"]] == "psi")) {
psi_coef <- priors[["psi"]][["coef"]]
log_ml <- log_ml + sum(vapply(
names(pars)[grep("^psi[0-9]*", names(pars))], function(x) {
p_log_ig(pars[[x]],
shape = psi_coef[["k"]], scale = psi_coef[["theta"]])
}, numeric(1L)))
}
if(length(priors[["dummy"]]) > 0) {
ev_dummy <- calc_ev(omega_inv = omega_inv, omega_sqrt = omega_sqrt,
psi_inv = psi_inv, X = X_dmy, XX = NULL, Y = Y_dmy, b = b,
beta_hat = FALSE)
log_ml <- log_ml - calc_logml(M = M, N = N_dummy, psi = psi,
omega_ml_ev = ev_dummy[["omega"]], psi_ml_ev = ev_dummy[["psi"]])
}
# Output -----
if(opt) {return(log_ml)} # For optim
# Return log_ml and objects necessary for drawing
return(
list("log_ml" = log_ml, "XX" = XX, "N" = N, "psi" = psi,
"sse" = ev_full[["sse"]], "beta_hat" = ev_full[["beta_hat"]],
"omega_inv" = omega_inv)
)
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/20_ml.R
|
#' BVAR posterior draws
#'
#' Draw \eqn{\beta} and \eqn{\sigma} from the posterior.
#'
#' @param XX Numeric matrix. Crossproduct of a possibly extended \emph{X}.
#' @param N Integer scalar. Rows of \emph{X}. Note that \emph{X} may have been
#' extended with dummy observations.
#' @param M Integer scalar. Columns of \emph{Y}.
#' @param lags Integer scalar. Number of lags in the model.
#' @param b Numeric matrix. Minnesota prior's mean.
#' @param psi Numeric matrix. Scale of the IW prior on the residual covariance.
#' @param sse Numeric matrix. Squared VAR residuals.
#' @param beta_hat Numeric matrix.
#' @param omega_inv Numeric matrix.
#'
#' @return Returns a list with posterior draws of \emph{beta} and \emph{sigma}.
#'
#' @importFrom mvtnorm rmvnorm
#'
#' @noRd
draw_post <- function(
XX, N, M, lags,
b, psi, sse, beta_hat, omega_inv) {
S_post <- psi + sse + crossprod((beta_hat - b), omega_inv) %*% (beta_hat - b)
eta <- rmvn_inv(n = (N + M + 2), sigma_inv = S_post, method = "eigen")
chol_de <- chol(crossprod(eta))
sigma_chol <- forwardsolve(t(chol_de), diag(nrow(chol_de)))
sigma_draw <- crossprod(sigma_chol)
noise <- rmvn_inv(n = M, sigma_inv = XX + omega_inv, method = "chol")
beta_draw <- beta_hat + crossprod(noise, sigma_chol)
return(list("beta_draw" = beta_draw, "sigma_draw" = sigma_draw))
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/21_draw.R
|
#' Calculate the log marginal likelihood
#'
#' @noRd
calc_logml <- function(M, N, psi, omega_ml_ev, psi_ml_ev) {
return(
(-M * N * log(pi) / 2) + sum(lgamma(((N + M + 2) - seq.int(0, M - 1)) / 2) -
lgamma(((M + 2) - seq.int(0, M - 1)) / 2)) -
(N * sum(log(diag(psi))) / 2) - (M * sum(log(omega_ml_ev)) / 2) -
((N + M + 2) * sum(log(psi_ml_ev)) / 2)
)
}
#' Calculate eigenvalues to bypass determinant computation
#'
#' @noRd
calc_ev <- function(
omega_inv, omega_sqrt, psi_inv,
X, XX, Y, b, beta_hat = TRUE) {
if(is.null(XX)) {XX <- crossprod(X)}
beta_hat <- if(beta_hat) { # Could factorise
solve(XX + omega_inv, crossprod(X, Y) + omega_inv %*% b)
} else {b}
sse <- crossprod(Y - X %*% beta_hat)
omega_ml <- omega_sqrt %*% XX %*% omega_sqrt
mostly_harmless <- sse + if(all(beta_hat == b)) {0} else {
crossprod(beta_hat - b, omega_inv) %*% (beta_hat - b)
}
psi_ml <- psi_inv %*% mostly_harmless %*% psi_inv
# Eigenvalues + 1 as another way of computing the determinants
omega_ml_ev <- Re(eigen(omega_ml,
symmetric = TRUE, only.values = TRUE)[["values"]])
omega_ml_ev[omega_ml_ev < 1e-12] <- 0
omega_ml_ev <- omega_ml_ev + 1
psi_ml_ev <- Re(eigen(psi_ml,
symmetric = TRUE, only.values = TRUE)[["values"]])
psi_ml_ev[psi_ml_ev < 1e-12] <- 0
psi_ml_ev <- psi_ml_ev + 1
return(
list("omega" = omega_ml_ev, "psi" = psi_ml_ev,
"sse" = sse, "beta_hat" = beta_hat)
)
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/22_calc.R
|
#' Metropolis-Hastings settings
#'
#' Function to provide settings for the Metropolis-Hastings step in
#' \code{\link{bvar}}. Options include scaling the inverse Hessian that is
#' used to draw parameter proposals and automatic scaling to achieve certain
#' acceptance rates.
#'
#' Note that adjustment of the acceptance rate by scaling the parameter
#' draw variability can only be done during the burn-in phase, as otherwise the
#' resulting draws do not feature the desirable properties of a Markov chain.
#' After the parameter draws have been scaled, some additional draws should be
#' burnt.
#'
#' @param scale_hess Numeric scalar or vector. Scaling parameter, determining
#' the range of hyperparameter draws. Should be calibrated so a reasonable
#' acceptance rate is reached. If provided as vector the length must equal
#' the number of hyperparameters (one per variable for \code{psi}).
#' @param adjust_acc Logical scalar. Whether or not to further scale the
#' variability of parameter draws during the burn-in phase.
#' @param adjust_burn Numeric scalar. How much of the burn-in phase should be
#' used to scale parameter variability. See Details.
#' @param acc_lower,acc_upper Numeric scalar. Lower (upper) bound of the target
#' acceptance rate. Required if \emph{adjust_acc} is set to \code{TRUE}.
#' @param acc_change Numeric scalar. Percent change applied to the Hessian
#' matrix for tuning acceptance rate. Required if \emph{adjust_acc} is set to
#' \code{TRUE}.
#'
#' @return Returns a named list of class \code{bv_metropolis} with options for
#' \code{\link{bvar}}.
#'
#' @keywords Metropolis-Hastings MCMC settings
#'
#' @export
#'
#' @examples
#' # Increase the scaling parameter
#' bv_mh(scale_hess = 1)
#'
#' # Turn on automatic scaling of the acceptance rate to [20%, 40%]
#' bv_mh(adjust_acc = TRUE, acc_lower = 0.2, acc_upper = 0.4)
#'
#' # Increase the rate of automatic scaling
#' bv_mh(adjust_acc = TRUE, acc_lower = 0.2, acc_upper = 0.4, acc_change = 0.1)
#'
#' # Use only 50% of the burn-in phase to adjust scaling
#' bv_mh(adjust_acc = TRUE, adjust_burn = 0.5)
bv_metropolis <- function(
scale_hess = 0.01,
adjust_acc = FALSE,
adjust_burn = 0.75,
acc_lower = 0.25, acc_upper = 0.45,
acc_change = 0.01) {
scale_hess <- vapply(scale_hess, num_check, numeric(1L),
min = 1e-16, max = 1e16,
msg = "Issue with scale_hess, please check the parameter again.")
if(isTRUE(adjust_acc)) {
adjust_burn <- num_check(adjust_burn, 1e-16, 1, "Issue with adjust_burn.")
acc_lower <- num_check(acc_lower, 0, 1 - 1e-16, "Issue with acc_lower.")
acc_upper <- num_check(acc_upper, acc_lower, 1, "Issue with acc_upper.")
acc_change <- num_check(acc_change, 1e-16, 1e16, "Issue with acc_change")
}
out <- structure(list(
"scale_hess" = scale_hess,
"adjust_acc" = adjust_acc, "adjust_burn" = adjust_burn,
"acc_lower" = acc_lower, "acc_upper" = acc_upper,
"acc_tighten" = 1 - acc_change, "acc_loosen" = 1 + acc_change),
class = "bv_metropolis")
return(out)
}
#' @rdname bv_metropolis
#' @export
bv_mh <- bv_metropolis
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/30_metropolis_setup.R
|
#' @export
print.bv_metropolis <- function(x, ...) {
cat("Object with settings for the Metropolis-Hastings step in `bvar()`.\n",
"Scaling parameter: ", paste0(x[["scale_hess"]], collapse = ", "), "\n",
"Automatic acceptance adjustment: ", x[["adjust_acc"]], "\n", sep = "")
if(x[["adjust_acc"]]) {
cat("Target acceptance: [", x[["acc_lower"]], ", ", x[["acc_upper"]], "]\n",
"Change applied: ", x[["acc_loosen"]] - 1, "\n", sep = "")
}
return(invisible(x))
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/35_metropolis_print.R
|
#' Prior settings
#'
#' Function to provide priors and their parameters to \code{\link{bvar}}. Used
#' for adjusting the parameters treated as hyperparameters, the Minnesota prior
#' and adding various dummy priors through the ellipsis parameter.
#' Note that treating \eqn{\psi}{psi} (\emph{psi}) as a hyperparameter in a
#' model with many variables may lead to very low acceptance rates and thus
#' hinder convergence.
#'
#' @param hyper Character vector. Used to specify the parameters to be treated
#' as hyperparameters. May also be set to \code{"auto"} or \code{"full"} for
#' an automatic / full subset. Other allowed values are the Minnesota prior's
#' parameters \code{"lambda"}, \code{"alpha"} and \code{"psi"} as well as the
#' names of additional dummy priors included via \emph{...}.
#' @param mn List of class \code{"bv_minnesota"}. Options for the Minnesota
#' prior, set via \code{\link{bv_mn}}.
#' @param ... Optional lists of class \code{bv_dummy} with options for
#' dummy priors. \bold{Must be assigned a name in the function call}. Created
#' with \code{\link{bv_dummy}}.
#'
#' @return Returns a named list of class \code{bv_priors} with options for
#' \code{\link{bvar}}.
#'
#' @keywords priors hierarchical Minnesota dummy settings
#'
#' @seealso \code{\link{bv_mn}}; \code{\link{bv_dummy}}
#'
#' @export
#'
#' @examples
#' # Extend the hyperparameters to the full Minnesota prior
#' bv_priors(hyper = c("lambda", "alpha", "psi"))
#' # Alternatively
#' # bv_priors("full")
#'
#' # Add a dummy prior via `bv_dummy()`
#'
#' # Re-create the single-unit-root prior
#' add_sur <- function(Y, lags, par) {
#' sur <- if(lags == 1) {Y[1, ] / par} else {
#' colMeans(Y[1:lags, ]) / par
#' }
#' Y_sur <- sur
#' X_sur <- c(1 / par, rep(sur, lags))
#'
#' return(list("Y" = Y_sur, "X" = X_sur))
#' }
#' sur <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_sur)
#'
#' # Add the new prior
#' bv_priors(hyper = "auto", sur = sur)
bv_priors <- function(
hyper = "auto",
mn = bv_mn(),
...) {
# Check inputs ---
if(!inherits(mn, "bv_minnesota")) { # Require Minnesota prior
stop("Please use `bv_mn()` to set the Minnesota prior.")
}
dots <- list(...)
if(!all(vapply(dots, inherits, TRUE, "bv_dummy"))) {
stop("Please use `bv_dummy()` to set dummy priors.")
}
if(hyper[[1]] == "auto") {
hyper <- c(if(!is.null(mn)) {"lambda"}, names(dots))
} else {
full <- c(if(!is.null(mn)) {c("lambda", "alpha", "psi")}, names(dots))
if(hyper[[1]] == "full") {
hyper <- full
} else {
if(!all(hyper %in% full)) {stop("Hyperprior not found.")}
}
}
# Prepare output ---
out <- if(!is.null(mn)) {
structure(list(hyper = hyper,
lambda = mn[["lambda"]], alpha = mn[["alpha"]],
psi = mn[["psi"]], var = mn[["var"]], b = mn[["b"]],
..., dummy = names(list(...))
), class = "bv_priors")
} else {
structure(list(hyper = hyper,
..., dummy = names(list(...))), class = "bv_priors")
}
return(out)
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/40_priors_setup.R
|
#' Minnesota prior settings
#'
#' Provide settings for the Minnesota prior to \code{\link{bv_priors}}. See the
#' Details section for further information.
#'
#' Essentially this prior imposes the hypothesis, that individual variables
#' all follow random walk processes. This parsimonious specification typically
#' performs well in forecasts of macroeconomic time series and is often used as
#' a benchmark for evaluating accuracy (Kilian and Lütkepohl, 2017).
#' The key parameter is \eqn{\lambda}{lambda} (\emph{lambda}), which controls
#' the tightness of the prior. The parameter \eqn{\alpha}{alpha} (\emph{alpha})
#' governs variance decay with increasing lag order, while \eqn{\psi}{psi}
#' (\emph{psi}) controls the prior's standard deviation on lags of variables
#' other than the dependent.
#' The Minnesota prior is often refined with additional priors, trying to
#' minimise the importance of conditioning on initial observations. See
#' \code{\link{bv_dummy}} for more information on such priors.
#'
#' @param lambda List constructed via \code{\link{bv_lambda}}.
#' Arguments are \emph{mode}, \emph{sd}, \emph{min} and \emph{max}.
#' May also be provided as a numeric vector of length 4.
#' @param alpha List constructed via \code{\link{bv_alpha}}.
#' Arguments are \emph{mode}, \emph{sd}, \emph{min} and \emph{max}. High values
#' for \emph{mode} may affect invertibility of the augmented data matrix.
#' May also be provided as a numeric vector of length 4.
#' @param psi List with elements \emph{scale}, \emph{shape} of the prior
#' as well as \emph{mode} and optionally \emph{min} and \emph{max}. The length
#' of these needs to match the number of variables (i.e. columns) in the data.
#' By default \emph{mode} is set automatically to the square-root of the
#' innovations variance after fitting an \eqn{AR(p)}{AR(p)} model to the data.
#' If \code{\link[stats]{arima}} fails due to a non-stationary time series the
#' order of integration is incremented by 1. By default \emph{min} / \emph{max}
#' are set to \emph{mode} divided / multiplied by 100.
#' @param var Numeric scalar with the prior variance on the model's constant.
#' @param b Numeric scalar, vector or matrix with the prior mean. A scalar is
#' applied to all variables, with a default value of 1. Consider setting it to
#' 0 for growth rates. A vector needs to match the number of variables (i.e.
#' columns) in the data, with a prior mean per variable. If provided, a matrix
#' needs to have a column per variable (\eqn{M}), and \eqn{M * p + 1} rows,
#' where \eqn{p} is the number of lags applied.
#' @param mode,sd Numeric scalar. Mode / standard deviation of the
#' parameter. Note that the \emph{mode} of \emph{psi} is set automatically by
#' default, and would need to be provided as vector.
#' @param min,max Numeric scalar. Minimum / maximum allowed value. Note that
#' for \emph{psi} these are set automatically or need to provided as vectors.
#' @param scale,shape Numeric scalar. Scale and shape parameters of a Gamma
#' distribution.
#'
#' @return Returns a list of class \code{bv_minnesota} with options for
#' \code{\link{bvar}}.
#'
#' @references
#' Kilian, L. and Lütkepohl, H. (2017). \emph{Structural Vector
#' Autoregressive Analysis}. Cambridge University Press,
#' \doi{10.1017/9781108164818}
#'
#' @seealso \code{\link{bv_priors}}; \code{\link{bv_dummy}}
#'
#' @export
#'
#' @examples
#' # Adjust alpha and the Minnesota prior variance.
#' bv_mn(alpha = bv_alpha(mode = 0.5, sd = 1, min = 1e-12, max = 10), var = 1e6)
#' # Optionally use a vector as shorthand
#' bv_mn(alpha = c(0.5, 1, 1e-12, 10), var = 1e6)
#'
#' # Only adjust lambda's standard deviation
#' bv_mn(lambda = bv_lambda(sd = 2))
#'
#' # Provide prior modes for psi (for a VAR with three variables)
#' bv_mn(psi = bv_psi(mode = c(0.7, 0.3, 0.9)))
bv_minnesota <- function(
lambda = bv_lambda(),
alpha = bv_alpha(),
psi = bv_psi(), # scale, shape, mode
var = 1e07,
b = 1) {
# Input checks
lambda <- lazy_priors(lambda)
alpha <- lazy_priors(alpha)
if(!inherits(psi, "bv_psi")) {stop("Please use `bv_psi()` to set psi.")}
var <- num_check(var, min = 1e-16, max = Inf,
msg = "Issue with the prior variance var.")
# Prior mean
if(length(b) == 1 && b == "auto") {
b <- 1
} else if(is.numeric(b)) {
if(!is.matrix(b)) { # Matrix dimensions are checked later
b <- vapply(b, num_check, numeric(1L), min = -1e16, max = 1e16,
msg = "Issue with prior mean b, please check the argument again.")
}
} else {stop("Issue with prior mean b, wrong type provided.")}
# Outputs
out <- structure(list(
"lambda" = lambda, "alpha" = alpha, "psi" = psi, "b" = b, "var" = var),
class = "bv_minnesota")
return(out)
}
#' @rdname bv_minnesota
#' @export
bv_mn <- bv_minnesota
#' @describeIn bv_minnesota Tightness of the Minnesota prior
#' @export
bv_lambda <- function(mode = 0.2, sd = 0.4, min = 0.0001, max = 5) {
sd <- num_check(sd, min = 0 + 1e-16, max = Inf,
msg = "Parameter sd misspecified.")
return(
dummy(mode, min, max, sd = sd, coef = gamma_coef(mode = mode, sd = sd))
)
}
#' @describeIn bv_minnesota Variance decay with increasing lag order
#' @export
bv_alpha <- function(mode = 2, sd = 0.25, min = 1, max = 3) {
return(bv_lambda(mode = mode, sd = sd, min = min, max = max))
}
#' @describeIn bv_minnesota Prior standard deviation on other lags
#' @export
bv_psi <- function(
scale = 0.004, shape = 0.004,
mode = "auto", min = "auto", max = "auto") {
# Checks ---
scale <- num_check(scale, min = 1e-16, max = Inf,
msg = "Invalid value for scale (outside of (0, Inf]")
shape <- num_check(shape, min = 1e-16, max = Inf,
msg = "Invalid value for shape (outside of (0, Inf]")
# Check mode, min and max
if(any(mode != "auto")) {
mode <- vapply(mode, num_check, numeric(1L), min = 0, max = Inf,
msg = "Invalid value(s) for mode (outside of [0, Inf]).")
if(length(min) == 1 && min == "auto") {min <- mode / 100}
if(length(max) == 1 && max == "auto") {max <- mode * 100}
min <- vapply(min, num_check, numeric(1L), min = 0, max = Inf,
msg = "Invalid value(s) for min (outside of [0, max)).")
max <- vapply(max, num_check, numeric(1L), min = 0, max = Inf,
msg = "Invalid value(s) for max (outside of (min, Inf]).")
if(length(mode) != length(min) || length(mode) != length(max)) {
stop("The length of mode and boundaries diverge.")
}
if(any(min >= max) || any(min > mode) || any(mode > max)) {
stop("Invalid values for min / max.")
}
} else if(any(c(min != "auto", max != "auto"))) {
stop("Boundaries are only adjustable with a given mode.")
}
# Outputs ---
out <- structure(list(
"mode" = mode, "min" = min, "max" = max,
"coef" = list("k" = shape, "theta" = scale)
), class = "bv_psi")
return(out)
}
#' @noRd
lazy_priors <- function(x) {
if(!inherits(x, "bv_dummy")) {
if(length(x) == 4 && is.numeric(x)) { # Allow length 4 numeric vectors
return(x = bv_lambda(x[1], x[2], x[3], x[4]))
}
stop("Please use `bv_lambda()` / `bv_alpha()` to set lambda / alpha.")
}
return(x)
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/41_minnesota.R
|
#' Dummy prior settings
#'
#' Allows the creation of dummy observation priors for \code{\link{bv_priors}}.
#' See the Details section for information on common dummy priors.
#'
#' Dummy priors are often used to "reduce the importance of the deterministic
#' component implied by VARs estimated conditioning on the initial
#' observations" (Giannone, Lenza and Primiceri, 2015, p. 440).
#' One such prior is the sum-of-coefficients (SOC) prior, which imposes the
#' notion that a no-change forecast is optimal at the beginning of a time
#' series. Its key parameter \eqn{\mu}{mu} controls the tightness - i.e. for
#' low values the model is pulled towards a form with as many unit roots as
#' variables and no cointegration.
#' Another such prior is the single-unit-root (SUR) prior, that allows for
#' cointegration relationships in the data. It pushes variables either towards
#' their unconditional mean or towards the presence of at least one unit root.
#' These priors are implemented via Theil mixed estimation, i.e. by adding
#' dummy-observations on top of the data matrix. They are available via the
#' functions \code{\link{bv_soc}} and \code{\link{bv_sur}}.
#'
#' @param fun Function taking \emph{Y}, \emph{lags} and the prior's parameter
#' \emph{par} to generate and return a named list with elements \emph{X} and
#' \emph{Y} (numeric matrices).
#' @inheritParams bv_mn
#'
#' @return Returns a named list of class \code{bv_dummy} for
#' \code{\link{bv_priors}}.
#'
#' @references
#' Giannone, D. and Lenza, M. and Primiceri, G. E. (2015) Prior Selection for
#' Vector Autoregressions. \emph{The Review of Economics and Statistics},
#' \bold{97:2}, 436-451, \doi{10.1162/REST_a_00483}.
#'
#' @seealso \code{\link{bv_priors}}; \code{\link{bv_minnesota}}
#'
#' @export
#'
#' @examples
#' # Create a sum-of-coefficients prior
#' add_soc <- function(Y, lags, par) {
#' soc <- if(lags == 1) {diag(Y[1, ]) / par} else {
#' diag(colMeans(Y[1:lags, ])) / par
#' }
#' Y_soc <- soc
#' X_soc <- cbind(rep(0, ncol(Y)), matrix(rep(soc, lags), nrow = ncol(Y)))
#'
#' return(list("Y" = Y_soc, "X" = X_soc))
#' }
#' soc <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_soc)
#'
#' # Create a single-unit-root prior
#' add_sur <- function(Y, lags, par) {
#' sur <- if(lags == 1) {Y[1, ] / par} else {
#' colMeans(Y[1:lags, ]) / par
#' }
#' Y_sur <- sur
#' X_sur <- c(1 / par, rep(sur, lags))
#'
#' return(list("Y" = Y_sur, "X" = X_sur))
#' }
#'
#' sur <- bv_dummy(mode = 1, sd = 1, min = 0.0001, max = 50, fun = add_sur)
#'
#' # Add the new custom dummy priors
#' bv_priors(hyper = "auto", soc = soc, sur = sur)
bv_dummy <- function(
mode = 1, sd = 1,
min = 0.0001, max = 5,
fun) {
sd <- num_check(sd, min = 0 + 1e-16, max = Inf,
msg = "Parameter sd misspecified.")
fun <- match.fun(fun)
return(
dummy(mode = mode, min = min, max = max, sd = sd, fun = fun,
coef = gamma_coef(mode, sd))
)
}
#' @rdname bv_dummy
#' @noRd
dummy <- function(
mode = 1,
min = 0.0001, max = 5,
...) {
mode <- num_check(mode, min = 0, max = Inf,
msg = "Invalid value for mode (outside of [0, Inf]).")
min <- num_check(min, min = 0, max = max - 1e-16,
msg = "Invalid value for min (outside of [0, max)).")
max <- num_check(max, min = min + 1e-16, max = Inf,
msg = "Invalid value for max (outside of (min, Inf]).")
out <- structure(list(
"mode" = mode, "min" = min, "max" = max, ...), class = "bv_dummy")
return(out)
}
|
/scratch/gouwar.j/cran-all/cranData/BVAR/R/42_dummy.R
|
#' Sum-of-coefficients and single-unit-root prior creation functions
#'
#' @param Y Numeric matrix. Data to base the dummy observations on.
#' @param lags Integer scalar. Lag order of the model.
#' @param par Numeric scalar. Parameter value of the prior.
#'
#' @return Returns a list with \emph{Y} and \emph{X} extended with the
#' respective dummy observations.
#'
#' @noRd
.add_soc <- function(Y, lags, par) {
  soc <- if(lags == 1) {diag(Y[1, ]) / par} else {
    diag(colMeans(Y[1:lags, ])) / par
  }
  X_soc <- cbind(rep(0, ncol(Y)), matrix(rep(soc, lags), nrow = ncol(Y)))
  return(list("Y" = soc, "X" = X_soc))
}
#' @rdname .add_soc
#' @noRd
.add_sur <- function(Y, lags, par) {
  sur <- if(lags == 1) {Y[1, ] / par} else {
    colMeans(Y[1:lags, ]) / par
  }
  X_sur <- c(1 / par, rep(sur, lags))
  return(list("Y" = sur, "X" = X_sur))
}
#' @export
#' @describeIn bv_dummy Sum-of-coefficients dummy prior
bv_soc <- function(mode = 1, sd = 1, min = 0.0001, max = 50) {
  bv_dummy(mode = mode, sd = sd, min = min, max = max, fun = .add_soc)
}
#' @export
#' @describeIn bv_dummy Single-unit-root dummy prior
bv_sur <- function(mode = 1, sd = 1, min = 0.0001, max = 50) {
  bv_dummy(mode = mode, sd = sd, min = min, max = max, fun = .add_sur)
}
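# Toy illustration of the dummy observations generated above, wrapped in a
# function so that nothing runs when the file is sourced; the numbers are made
# up. For a 2-variable VAR(2) with par = 1, `.add_soc()` returns one dummy row
# per variable, nudging the sum of its own lag coefficients towards one, while
# `.add_sur()` returns a single row pushing the system towards its
# unconditional mean or a common unit root, as described in the documentation
# of `bv_dummy()`.
demo_soc_sur <- function() {
  Y <- matrix(c(1.0, 1.2, 0.8, 0.9), nrow = 2, ncol = 2)  # first `lags` rows
  list(
    "soc" = .add_soc(Y, lags = 2, par = 1),  # Y: 2 x 2, X: 2 x 5
    "sur" = .add_sur(Y, lags = 2, par = 1)   # Y: length 2, X: length 5
  )
}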
#' @export
print.bv_priors <- function(x, ...) {
  cat("Object with prior settings for `bvar()`.\n",
    "Hyperparameters: ", paste0(x[["hyper"]], collapse = ", "),
    "\n\n", sep = "")
  if(!is.null(x[["lambda"]])) {print.bv_minnesota(x, indent = TRUE)}
  dummy_pos <- names(x) %in% x[["dummy"]]
  if(any(dummy_pos) && length(x[["dummy"]]) != 0) {
    cat("\nDummy prior(s):\n")
    dummies <- names(x)[dummy_pos]
    for(dummy in dummies) {
      cat(dummy, ":\n", sep = ""); print.bv_dummy(x[[dummy]], indent = TRUE)
    }
  }
  return(invisible(x))
}
#' @export
print.bv_minnesota <- function(x, indent = FALSE, ...) {
  cat("Minnesota prior:\nlambda:\n"); print(x[["lambda"]], indent = indent)
  cat("alpha:\n"); print(x[["alpha"]], indent = indent)
  cat("psi:\n"); print(x[["psi"]], indent = indent)
  cat("\nVariance of the constant term:", x[["var"]], "\n")
  return(invisible(x))
}
#' @export
print.bv_dummy <- function(x, indent = FALSE, ...) {
  print_priors(x, ...)
  cat(if(indent) {"\t"}, "Mode / Bounds: ",
    x[["mode"]], " / [", x[["min"]], ", ", x[["max"]], "]\n", sep = "")
  return(invisible(x))
}
#' @export
print.bv_psi <- function(x, indent = FALSE, ...) {
  print_priors(x, ...)
  if(any(x[["mode"]] == "auto")) {
    cat(if(indent) {"\t"}, "Mode / Bounds: retrieved automatically\n")
  } else {
    for(i in seq_along(x[["mode"]])) {
      cat(if(indent) {"\t"}, "#", i, " Mode / Bounds: ",
        x[["mode"]][i], " / [", x[["min"]][i], ", ", x[["max"]][i], "]\n",
        sep = "")
    }
  }
  return(invisible(x))
}
#' Priors print method
#'
#' @param x A \code{bv_dummy} or \code{bv_psi} object.
#' @param indent Logical scalar. Whether to indent the output with a tab.
#' @param ... Not used.
#'
#' @noRd
print_priors <- function(x, indent = FALSE, ...) {
  cat(if(indent) {"\t"}, "Shape / Scale: ",
    round(x[["coef"]][["k"]], 3L), " / ",
    round(x[["coef"]][["theta"]], 3L), "\n", sep = "")
  return(invisible(x))
}
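# The "Shape / Scale" line printed by `print_priors()` comes from the `coef`
# element that `bv_dummy()` fills via `gamma_coef(mode, sd)`. As a hedged,
# hypothetical illustration (the package's internal `gamma_coef()` may be
# implemented differently), the sketch below derives shape k and scale theta
# from a mode/sd pair under the standard Gamma parameterisation, where
# mode = (k - 1) * theta and variance = k * theta^2.
gamma_coef_sketch <- function(mode, sd) {
  ratio <- mode^2 / sd^2
  # positive root of sd^2 * (k - 1)^2 = k * mode^2
  k <- (2 + ratio + sqrt(ratio * (ratio + 4))) / 2
  theta <- sd / sqrt(k)  # from variance = k * theta^2
  list("k" = k, "theta" = theta)
}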
#' Forecast settings
#'
#' Provide forecast settings to \code{\link{predict.bvar}}. Allows adjusting
#' the horizon of forecasts and setting up conditional forecasts. See the
#' Details section for further information.
#'
#' Conditional forecasts are calculated using the algorithm by Waggoner and Zha
#' (1999). They are set up by imposing a path on selected variables.
#'
#' @param horizon Integer scalar. Horizon for which to compute forecasts.
#' @param cond_path Optional numeric vector or matrix used for conditional
#' forecasts. Supply variable path(s) to condition the forecasts on.
#' Unrestricted future realisations should be filled with \code{NA}. Note that
#' not all variables can be restricted at the same time.
#' @param cond_vars Optional character or numeric vector. Used to subset
#' \emph{cond_path} to specific variable(s) via name or position. Not
#' needed when \emph{cond_path} is constructed for all variables.
#'
#' @return Returns a named list of class \code{bv_fcast} with options for
#' \code{\link{bvar}} or \code{\link{predict.bvar}}.
#'
#' @references
#' Waggoner, D. F., & Zha, T. (1999). Conditional Forecasts in Dynamic
#' Multivariate Models. \emph{Review of Economics and Statistics},
#' \bold{81:4}, 639-651, \doi{10.1162/003465399558508}.
#'
#' @seealso \code{\link{predict.bvar}}; \code{\link{plot.bvar_fcast}}
#'
#' @keywords BVAR forecast settings
#'
#' @export
#'
#' @examples
#' # Set forecast-horizon to 20 time periods for unconditional forecasts
#' bv_fcast(horizon = 20)
#'
#' # Define a path for the second variable (in the initial six periods).
#' bv_fcast(cond_path = c(1, 1, 1, 1, 1, 1), cond_vars = 2)
#'
#' # Constrain the paths of the first and third variables.
#' paths <- matrix(NA, nrow = 10, ncol = 2)
#' paths[1:5, 1] <- 1
#' paths[1:10, 2] <- 2
#' bv_fcast(cond_path = paths, cond_vars = c(1, 3))
bv_fcast <- function(
    horizon = 12,
    cond_path = NULL,
    cond_vars = NULL) {
  horizon <- int_check(horizon, min = 1, max = 1e6,
    msg = "Invalid value for horizon (outside of [1, 1e6]).")
  if(!is.null(cond_path)) {
    if(!is.numeric(cond_path)) {stop("Invalid type of cond_path.")}
    if(is.vector(cond_path)) {
      if(is.null(cond_vars)) {
        stop("Please provide the constrained variable(s) via cond_vars.")
      }
      cond_path <- matrix(cond_path)
    }
    if(!is.null(cond_vars)) {
      if(ncol(cond_path) != length(cond_vars)) {
        stop("Dimensions of cond_path and cond_vars do not match.")
      }
      if(any(duplicated(cond_vars))) {
        stop("Duplicated value(s) found in cond_vars.")
      }
    }
    if(nrow(cond_path) > horizon) {
      horizon <- nrow(cond_path)
      message("Increasing horizon to the length of cond_path.")
    }
  }
  out <- structure(list(
    "horizon" = horizon, "cond_path" = cond_path, "cond_vars" = cond_vars),
    class = "bv_fcast")
  return(out)
}
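# Quick illustration of the horizon handling above, wrapped so nothing runs on
# source: a conditional path longer than `horizon` extends the horizon (with a
# message), since every constrained period must be covered by the forecast.
demo_fcast_horizon <- function() {
  bv_fcast(horizon = 12, cond_path = matrix(1, nrow = 20, ncol = 1),
    cond_vars = 1)  # returned settings carry horizon = 20
}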
#' Forecast computation
#'
#' Compute unconditional forecasts without shocks from the VAR's posterior draws
#' obtained via \code{\link{draw_post}}.
#'
#' @param Y Numeric matrix (\eqn{N * M}).
#' @param K Integer scalar. Columns of \emph{X}, i.e. \eqn{M * lags + 1}.
#' @param M Integer scalar. Columns of \emph{Y}.
#' @param N Integer scalar. Rows of \emph{Y}, alternatively \emph{X}.
#' @param lags Integer scalar. Number of lags in the model.
#' @param horizon Integer scalar. Specifies the horizon for which forecasts
#' should be computed.
#' @param beta_comp Numeric matrix. Posterior draw of the VAR coefficients of
#' the model in state space representation.
#' @param beta_const Numeric vector. Posterior draw of the VAR coefficients
#' corresponding to the constant of the model.
#'
#' @return Returns a matrix containing forecasts (without shocks) for all
#' variables in the model.
#'
#' @importFrom stats rnorm
#'
#' @noRd
compute_fcast <- function(
    Y, K, M, N, lags,
    horizon,
    beta_comp, beta_const) {
  Y_f <- matrix(NA, horizon + 1, K - 1)
  Y_f[1, ] <- vapply(t(Y[N:(N - lags + 1), ]), c, numeric(1L))
  for(i in seq.int(2, 1 + horizon)) {
    Y_f[i, ] <- tcrossprod(Y_f[i - 1, ], beta_comp) +
      c(beta_const, rep(0, M * (lags - 1))) # Maybe go back to normal beta
  }
  # Remove Y_t and lagged variables
  return(Y_f[-1, 1:M])
}
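# Sketch of the companion-form recursion used in `compute_fcast()`, for a toy
# 2-variable VAR(2) with made-up coefficients. Stacking the lag matrices into a
# companion matrix turns one forecast step into a single matrix product plus
# the constant, mirroring the loop above. Purely illustrative.
demo_companion_step <- function() {
  B1 <- matrix(c(0.5, 0.1, 0.0, 0.4), 2, 2)  # hypothetical coefficients, lag 1
  B2 <- matrix(c(0.2, 0.0, 0.1, 0.1), 2, 2)  # hypothetical coefficients, lag 2
  beta_comp <- rbind(cbind(B1, B2), cbind(diag(2), matrix(0, 2, 2)))
  beta_const <- c(0.1, 0.1)                  # constant, affects the top block only
  y_state <- c(1, 1, 0.5, 0.5)               # stacked state (y_t, y_{t-1})
  tcrossprod(y_state, beta_comp)[1, ] + c(beta_const, rep(0, 2))
}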
#' Conditional forecast computation
#'
#' Compute conditional forecasts using the algorithm of Waggoner and Zha (1999).
#'
#' @param constr_mat Numeric matrix with constrained paths of variables
#' and \code{NAs} for unrestricted values.
#' @param fcast_base Numeric matrix with baseline forecasts, i.e. the
#' unconditional forecasts without their random shocks.
#' @param ortho_irf Numeric matrix with orthogonal impulse responses for all
#' variables. Computed by \code{\link{compute_irf}} or \code{\link{irf.bvar}}.
#' @param horizon Integer scalar. Horizon for which to compute forecasts.
#' @param M Integer scalar. Columns of \emph{Y}.
#'
#' @return Returns a numeric matrix with conditional forecasts.
#'
#' @references
#' Waggoner, D. F., & Zha, T. (1999). Conditional Forecasts in Dynamic
#' Multivariate Models. \emph{Review of Economics and Statistics},
#' \bold{81:4}, 639-651, \doi{10.1162/003465399558508}.
#'
#' @importFrom stats rnorm
#'
#' @noRd
cond_fcast <- function(constr_mat, fcast_base, ortho_irf, horizon, M) {
  cond_fcast <- matrix(NA, horizon, M)
  # First get constrained shocks
  v <- sum(!is.na(constr_mat))
  s <- M * horizon
  r <- c(rep(0, v))
  R <- matrix(0, v, s)
  pos <- 1
  for(i in seq_len(horizon)) {
    for(j in seq_len(M)) {
      if(is.na(constr_mat[i, j])) {next}
      r[pos] <- constr_mat[i, j] - fcast_base[i, j]
      for(k in seq_len(i)) {
        R[pos, ((k - 1) * M + 1):(k * M)] <- ortho_irf[j, (i - k + 1), ]
      }
      pos <- pos + 1
    }
  }
  R_svd <- svd(R, nu = nrow(R), nv = ncol(R))
  U <- R_svd[["u"]]
  P_inv <- diag(1 / R_svd[["d"]])
  V1 <- R_svd[["v"]][, 1:v]
  V2 <- R_svd[["v"]][, (v + 1):s]
  eta <- V1 %*% P_inv %*% t(U) %*% r + V2 %*% rnorm(s - v)
  eta <- matrix(eta, M, horizon)
  # Use constrained shocks and unconditional forecasts (without shocks) to
  # create conditional forecasts
  for(h in seq_len(horizon)) {
    temp <- matrix(0, M, 1)
    for(k in seq_len(h)) {
      temp <- temp + ortho_irf[, (h - k + 1), ] %*% eta[, k]
    }
    cond_fcast[h, ] <- fcast_base[h, ] + t(temp)
  }
  return(cond_fcast)
}
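# Self-contained numerical check of the SVD step in `cond_fcast()`: with
# R = U D V', the draw eta = V1 D^{-1} U' r + V2 z satisfies R eta = r exactly
# for any Gaussian z, so the constrained shocks reproduce the imposed path
# while the remaining directions stay random (Waggoner and Zha, 1999). Toy
# dimensions and random numbers, purely illustrative.
demo_wz_shocks <- function(v = 2, s = 6) {
  R <- matrix(rnorm(v * s), v, s)
  r <- rnorm(v)
  R_svd <- svd(R, nu = v, nv = s)
  eta <- R_svd[["v"]][, 1:v] %*% diag(1 / R_svd[["d"]]) %*%
    t(R_svd[["u"]]) %*% r +
    R_svd[["v"]][, (v + 1):s] %*% rnorm(s - v)
  max(abs(R %*% eta - r))  # numerically zero
}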
#' Build a constraint matrix for conditional forecasts
#'
#' @inheritParams bv_irf
#' @param variables Character vector of all variable names.
#' @param M Integer scalar. Count of all variables.
#'
#' @return Returns a numeric matrix with the constrained paths of variables and
#' \code{NAs} for unrestricted values.
#'
#' @noRd
get_constr_mat <- function(horizon, path, vars = NULL, variables = NULL, M) {
  pos <- pos_vars(vars, variables, M)
  constr_mat <- matrix(NA_real_, horizon, M)
  constr_mat[seq_len(nrow(path)), pos] <- path
  colnames(constr_mat) <- variables
  if(any(apply(constr_mat, 1, function(x) !any(is.na(x))))) {
    stop("One variable must be unrestricted at each point in time.")
  }
  return(constr_mat)
}
#' Predict method for Bayesian VARs
#'
#' Retrieves / calculates forecasts for Bayesian VARs generated via
#' \code{\link{bvar}}. If a forecast is already present and no settings are
#' supplied it is simply retrieved, otherwise it will be calculated.
#' To store the results you may want to assign the output using the setter
#' function (\code{predict(x) <- predict(x)}). May also be used to update
#' confidence bands.
#'
#' @param object A \code{bvar} object, obtained from \code{\link{bvar}}.
#' Summary and print methods take in a \code{bvar_fcast} object.
#' @param ... A \code{bv_fcast} object or parameters to be fed into
#' \code{\link{bv_fcast}}. Contains settings for the forecast.
#' @param conf_bands Numeric vector of confidence bands to apply.
#' E.g. for bands at 5\%, 10\%, 90\% and 95\% set this to \code{c(0.05, 0.1)}.
#' Note that the median, i.e. 0.5 is always included.
#' @param n_thin Integer scalar. Every \emph{n_thin}'th draw in \emph{object}
#' is used to predict, others are dropped.
#' @param newdata Optional numeric matrix or dataframe. Used to base the
#' prediction on.
#' @param vars Optional numeric or character vector. Used to subset the summary
#' to certain variables by position or name (must be available). Defaults to
#' \code{NULL}, i.e. all variables.
#' @param value A \code{bvar_fcast} object to assign.
#'
#' @return Returns a list of class \code{bvar_fcast} including forecasts
#' at desired confidence bands.
#' The summary method returns a numeric array of forecast paths at the
#' specified confidence bands.
#'
#' @seealso \code{\link{plot.bvar_fcast}}; \code{\link{bv_fcast}}
#'
#' @keywords BVAR forecast analysis
#'
#' @export
#'
#' @importFrom stats predict
#'
#' @examples
#' \donttest{
#' # Access a subset of the fred_qd dataset
#' data <- fred_qd[, c("CPIAUCSL", "UNRATE", "FEDFUNDS")]
#' # Transform it to be stationary
#' data <- fred_transform(data, codes = c(5, 5, 1), lag = 4)
#'
#' # Estimate a BVAR using one lag, default settings and very few draws
#' x <- bvar(data, lags = 1, n_draw = 1000L, n_burn = 200L, verbose = FALSE)
#'
#' # Calculate a forecast with an increased horizon
#' y <- predict(x, horizon = 20)
#'
#' # Add some confidence bands and store the forecast
#' predict(x) <- predict(x, conf_bands = c(0.05, 0.16))
#'
#' # Recalculate with different settings and increased thinning
#' predict(x, bv_fcast(24L), n_thin = 10L)
#'
#' # Simulate some new data to predict on
#' predict(x, newdata = matrix(rnorm(300), ncol = 3))
#'
#' # Calculate a conditional forecast (with a constrained second variable).
#' predict(x, cond_path = c(1, 1, 1, 1, 1, 1), cond_vars = 2)
#'
#' # Get a summary of the stored forecast
#' summary(x)
#'
#' # Only get the summary for variable #2
#' summary(x, vars = 2L)
#' }
predict.bvar <- function(
    object, ...,
    conf_bands, n_thin = 1L,
    newdata) {
  dots <- list(...)
  fcast_store <- object[["fcast"]]
  # Calculate new forecast -----
  if(is.null(fcast_store) || length(dots) != 0L || !missing(newdata)) {
    # Setup ---
    fcast <- if(length(dots) > 0 && inherits(dots[[1]], "bv_fcast")) {
      dots[[1]]
    } else {bv_fcast(...)}
    # Checks
    n_pres <- object[["meta"]][["n_save"]]
    n_thin <- int_check(n_thin, min = 1, max = (n_pres / 10),
      "Issue with n_thin. Maximum allowed is (n_draw - n_burn) / 10.")
    n_save <- int_check((n_pres / n_thin), min = 1)
    K <- object[["meta"]][["K"]]
    M <- object[["meta"]][["M"]]
    lags <- object[["meta"]][["lags"]]
    beta <- object[["beta"]]
    sigma <- object[["sigma"]]
    if(missing(newdata)) {
      Y <- object[["meta"]][["Y"]]
      N <- object[["meta"]][["N"]]
    } else {
      if(!all(vapply(newdata, is.numeric, logical(1))) || any(is.na(newdata)) ||
        ncol(newdata) != M) {stop("Problem with newdata.")}
      Y <- as.matrix(newdata)
      N <- nrow(Y)
    }
    # Conditional forecast
    conditional <- !is.null(fcast[["cond_path"]])
    if(conditional) {
      constr_mat <- get_constr_mat(horizon = fcast[["horizon"]],
        path = fcast[["cond_path"]], vars = fcast[["cond_vars"]],
        object[["variables"]], M)
      fcast[["constr_mat"]] <- constr_mat # Retrieved by the summary method
      irf_store <- object[["irf"]]
      if(is.null(irf_store) || !irf_store[["setup"]][["identification"]] ||
        irf_store[["setup"]][["horizon"]] < fcast[["horizon"]]) {
        message("No suitable impulse responses found. Calculating...")
        irf_store <- irf.bvar(object,
          horizon = fcast[["horizon"]], identification = TRUE, fevd = FALSE,
          n_thin = n_thin)
      }
    }
    # Sampling ---
    fcast_store <- structure(list(
      "fcast" = array(NA, c(n_save, fcast[["horizon"]], M)),
      "setup" = fcast, "variables" = object[["variables"]], "data" = Y),
      class = "bvar_fcast")
    j <- 1
    for(i in seq_len(n_save)) {
      beta_comp <- get_beta_comp(beta[j, , ], K = K, M = M, lags = lags)
      fcast_base <- compute_fcast(
        Y = Y, K = K, M = M, N = N, lags = lags,
        horizon = fcast[["horizon"]],
        beta_comp = beta_comp, beta_const = beta[j, 1, ])
      if(conditional) { # Conditional uses impulse responses
        fcast_store[["fcast"]][i, , ] <- cond_fcast(
          constr_mat = constr_mat, fcast_base = fcast_base,
          ortho_irf = irf_store[["irf"]][j, , , ],
          horizon = fcast[["horizon"]], M = M)
      } else { # Unconditional gets noise
        fcast_store[["fcast"]][i, , ] <- fcast_base + t(crossprod(sigma[j, , ],
          matrix(rnorm(M * fcast[["horizon"]]), nrow = M)))
      }
      j <- j + n_thin
    }
  } # End new forecast
  if(is.null(fcast_store[["quants"]]) || !missing(conf_bands)) {
    fcast_store <- if(!missing(conf_bands)) {
      predict.bvar_fcast(fcast_store, conf_bands)
    } else {predict.bvar_fcast(fcast_store, c(0.16))}
  }
  return(fcast_store)
}
#' @noRd
#' @export
`predict<-.bvar` <- function(object, value) {
  if(!inherits(value, "bvar_fcast")) {
    stop("Please provide a `bvar_fcast` object to assign.")
  }
  object[["fcast"]] <- value
  return(object)
}
#' @noRd
#' @export
#'
#' @importFrom stats predict quantile
predict.bvar_fcast <- function(object, conf_bands, ...) {
  if(!missing(conf_bands)) {
    quantiles <- quantile_check(conf_bands)
    object[["quants"]] <- apply(object[["fcast"]], c(2, 3), quantile, quantiles)
  }
  return(object)
}
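# Hedged sketch of how `conf_bands` maps to the quantiles applied above. Per
# the documentation of `predict.bvar()`, bands are mirrored around the median,
# which is always included, so e.g. c(0.05, 0.1) yields the 5%, 10%, 50%, 90%
# and 95% quantiles. The helper below only illustrates that documented
# behaviour; it is not the package's internal `quantile_check()`.
conf_bands_to_quants <- function(conf_bands) {
  bands <- conf_bands[conf_bands > 0 & conf_bands < 1]
  sort(unique(c(bands, 0.5, 1 - bands)))
}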
#' @rdname predict.bvar
#' @export
`predict<-` <- function(object, value) {UseMethod("predict<-", object)}
#' @export
print.bv_fcast <- function(x, ...) {
  cat("Object with settings for computing forecasts.\n")
  .print_fcast(x, ...)
  return(invisible(x))
}
#' @export
print.bvar_fcast <- function(x, ...) {
  cat("Forecast object from `bvar()`.\n")
  .print_fcast(x[["setup"]], ...)
  cat("Variables: ", dim(x[["fcast"]])[3], "\n",
    "Iterations: ", dim(x[["fcast"]])[1], "\n", sep = "")
  return(invisible(x))
}
#' @noRd
.print_fcast <- function(x, ...) {
  cat("Horizon: ", x[["horizon"]], "\n",
    "Conditional: ", !is.null(x[["cond_path"]]), "\n", sep = "")
  return(invisible(x))
}
#' @rdname predict.bvar
#' @export
summary.bvar_fcast <- function(object, vars = NULL, ...) {
  quants <- object[["quants"]]
  has_quants <- length(dim(quants)) == 3
  M <- if(has_quants) {dim(quants)[3]} else {dim(quants)[2]}
  variables <- name_deps(variables = object[["variables"]], M = M)
  pos <- pos_vars(vars, variables = variables, M = M)
  out <- structure(list(
    "fcast" = object, "quants" = quants,
    "variables" = variables, "pos" = pos, "has_quants" = has_quants),
    class = "bvar_fcast_summary")
  return(out)
}
#' @export
print.bvar_fcast_summary <- function(x, digits = 2L, ...) {
  print.bvar_fcast(x[["fcast"]])
  if(!is.null(x[["fcast"]][["setup"]][["constr_mat"]])) {
    cat("Constraints for conditional forecast:\n")
    print(x[["fcast"]][["setup"]][["constr_mat"]])
  }
  cat(if(!x[["has_quants"]]) {"\nMedian forecast:\n"} else {"\nForecast:\n"})
  for(i in x[["pos"]]) {
    cat("\tVariable ", x[["variables"]][i], ":\n", sep = "")
    print(round(
      if(x[["has_quants"]]) {x[["quants"]][, , i]} else {x[["quants"]][, i]},
      digits = digits))
  }
  return(invisible(x))
}