#'Period Variance on 's2dv_cube' objects
#'
#'Period Variance computes the variance (var) of a given variable in a period.
#'Two bioclimatic indicators can be obtained by using this function:
#'\itemize{
#' \item{'BIO4', (Providing temperature data) Temperature Seasonality
#' (Standard Deviation). The amount of temperature variation
#' over a given year (or averaged years) based on the standard
#' deviation (variation) of monthly temperature averages.}
#' \item{'BIO15', (Providing precipitation data) Precipitation Seasonality
#' (CV). This is a measure of the variation in monthly precipitation
#' totals over the course of the year. This index is the ratio of the
#' standard deviation of the monthly total precipitation to the mean
#' monthly total precipitation (also known as the coefficient of
#' variation) and is expressed as a percentage.}
#'}
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the indicator in the element
#'\code{data} with dimensions of the input parameter 'data' except the
#'dimension where the var has been computed (specified with 'time_dim'). A new
#'element called 'time_bounds' will be added into the 'attrs' element in the
#''s2dv_cube' object. It consists of a list containing two elements, the start
#'and end dates of the aggregated period with the same dimensions of 'Dates'
#'element.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(45), dim = c(member = 7, sdate = 4, time = 3))
#'Dates <- c(seq(as.Date("2000-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2001-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2001-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2002-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2002-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2003-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2003-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2004-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"))
#'dim(Dates) <- c(sdate = 4, time = 3)
#'exp$attrs$Dates <- Dates
#'class(exp) <- 's2dv_cube'
#'
#'res <- CST_PeriodVariance(exp, start = list(01, 12), end = list(01, 01))
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_PeriodVariance <- function(data, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE,
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
      warning("Dimensions in 'data' element 'attrs$Dates' are missing and ",
              "all data will be used.")
start <- NULL
end <- NULL
}
}
Dates <- data$attrs$Dates
total <- PeriodVariance(data = data$data, dates = Dates, start = start, end = end,
time_dim = time_dim, na.rm = na.rm, ncores = ncores)
data$data <- total
data$dims <- dim(total)
data$coords[[time_dim]] <- NULL
if (!is.null(Dates)) {
if (!is.null(start) && !is.null(end)) {
Dates <- SelectPeriodOnDates(dates = Dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
}
if (is.null(dim(Dates))) {
      warning("Element 'Dates' has NULL dimensions. They will not be ",
              "subset and 'time_bounds' will not be created.")
data$attrs$Dates <- Dates
} else {
# Create time_bounds
time_bounds <- NULL
time_bounds$start <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = 1, drop = 'selected')
time_bounds$end <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = dim(Dates)[time_dim],
drop = 'selected')
# Add Dates in attrs
data$attrs$Dates <- time_bounds$start
data$attrs$time_bounds <- time_bounds
}
}
return(data)
}
#'Period Variance on multidimensional array objects
#'
#'Period Variance computes the variance (var) of a given variable in a period.
#'Two bioclimatic indicators can be obtained by using this function:
#'\itemize{
#' \item{'BIO4', (Providing temperature data) Temperature Seasonality
#' (Standard Deviation). The amount of temperature variation
#' over a given year (or averaged years) based on the standard
#' deviation (variation) of monthly temperature averages.}
#' \item{'BIO15', (Providing precipitation data) Precipitation Seasonality
#' (CV). This is a measure of the variation in monthly precipitation
#' totals over the course of the year. This index is the ratio of the
#' standard deviation of the monthly total precipitation to the mean
#' monthly total precipitation (also known as the coefficient of
#' variation) and is expressed as a percentage.}
#'}
#'
#'@param data A multidimensional array with named dimensions.
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the parameter
#' \code{data} can be specified.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the
#'indicator in the element \code{data}.
#'
#'@examples
#'data <- array(rnorm(45), dim = c(member = 7, sdate = 4, time = 3))
#'Dates <- c(seq(as.Date("2000-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2001-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2001-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2002-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2002-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2003-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2003-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2004-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"))
#'dim(Dates) <- c(sdate = 4, time = 3)
#'res <- PeriodVariance(data, dates = Dates, start = list(01, 12), end = list(01, 01))
#'
#'@import multiApply
#'@export
PeriodVariance <- function(data, dates = NULL, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE, ncores = NULL) {
# Initial checks
## data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
## time_dim
if (!is.character(time_dim) | length(time_dim) != 1) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!is.array(data)) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
if (!time_dim %in% names(dim(data))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
      warning("Parameter 'dates' is NULL and the full data provided in ",
              "'data' will be used.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
total <- Apply(list(data), target_dims = time_dim,
fun = .periodvariance,
na.rm = na.rm, ncores = ncores)$output1
return(total)
}
.periodvariance <- function(data, na.rm) {
  # Drop NAs first so that 'na.rm = TRUE' affects both the mean and the sum
  if (na.rm) {
    data <- data[!is.na(data)]
  }
  var <- sum((data - mean(data))^2) / (length(data) - 1)
  return(var)
}
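# Illustrative sketch (not part of the package API): the two bioclimatic
# indicators named in the documentation can be derived from PeriodVariance
# output. BIO4 is the standard deviation of monthly mean temperatures, i.e. the
# square root of the variance computed here; BIO15 additionally divides by the
# mean and is expressed as a percentage. 'tas' and 'prcp' are assumed toy arrays.
tas <- array(rnorm(7 * 4 * 12, mean = 15, sd = 5),
             dim = c(member = 7, sdate = 4, time = 12))
bio4 <- sqrt(PeriodVariance(tas, time_dim = 'time'))
prcp <- array(abs(rnorm(7 * 4 * 12, mean = 50, sd = 20)),
              dim = c(member = 7, sdate = 4, time = 12))
bio15 <- 100 * sqrt(PeriodVariance(prcp, time_dim = 'time')) /
         apply(prcp, c(1, 2), mean)  # coefficient of variation in %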
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/PeriodVariance.R
#'Transform an absolute threshold into probabilities
#'
#'From the user's perspective, an absolute threshold can be very useful for
#'specific needs (e.g. a grape variety). However, this absolute threshold could
#'be transformed to a relative threshold in order to get its frequency in a given
#'dataset. Therefore, the function \code{QThreshold} returns the probability of
#'an absolute threshold. This is done by computing the Cumulative Distribution
#'Function of a sample and leaving one out. The sample used will depend on the
#'dimensions of the data provided and the dimension names provided in sdate_dim
#'and memb_dim parameters:
#'
#'\itemize{
#' \item{If a forecast (hindcast) has dimensions member and start date, and
#' both must be used in the sample, their names should be passed in
#' sdate_dim and memb_dim.}
#' \item{If a forecast (hindcast) has dimensions member and start date, and
#' only start date must be used in the sample (the calculation is done in
#' each separate member), memb_dim can be set to NULL.}
#' \item{If a reference (observations) has start date dimension, the sample
#' used is the start date dimension.}
#' \item{If a reference (observations) doesn't have start date dimension,
#' the sample used must be specified in sdate_dim parameter.}
#'}
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param threshold An 's2dv_cube' object as output of a 'CST_' function in the
#' same units as parameter 'data' and with the common dimensions of the element
#' 'data' of the same length. A single scalar is also possible.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'. More than one dimension name
#' matching the dimensions provided in the object \code{data$data} can be
#' specified. This dimension is required to subset the data in a requested
#' period.
#'@param memb_dim A character string indicating the name of the dimension in
#' which the ensemble members are stored.
#'@param sdate_dim A character string indicating the name of the dimension in
#' which the initialization dates are stored.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the probability of an absolute
#'threshold in the element \code{data}.
#'
#'@examples
#'threshold <- 26
#'exp <- NULL
#'exp$data <- array(abs(rnorm(112)*26), dim = c(member = 7, sdate = 8, time = 2))
#'class(exp) <- 's2dv_cube'
#'exp_probs <- CST_QThreshold(exp, threshold)
#'
#'exp$data <- array(abs(rnorm(5 * 3 * 214 * 2)*50),
#' c(member = 5, sdate = 3, time = 214, lon = 2))
#'exp$attrs$Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(exp$attrs$Dates) <- c(sdate = 3, time = 214)
#'class(exp) <- 's2dv_cube'
#'exp_probs <- CST_QThreshold(exp, threshold, start = list(21, 4),
#' end = list(21, 6))
#'
#'@import multiApply
#'@export
CST_QThreshold <- function(data, threshold, start = NULL, end = NULL,
time_dim = 'time', memb_dim = 'member',
sdate_dim = 'sdate', ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
      warning("Dimensions in 'data' element 'attrs$Dates' are missing and ",
              "all data will be used.")
start <- NULL
end <- NULL
}
}
if (inherits(threshold, 's2dv_cube')) {
threshold <- threshold$data
}
probs <- QThreshold(data$data, threshold, dates = data$attrs$Dates,
start, end, time_dim = time_dim, memb_dim = memb_dim,
sdate_dim = sdate_dim, ncores = ncores)
data$data <- probs
data$dims <- dim(probs)
if (!is.null(start) && !is.null(end)) {
data$attrs$Dates <- SelectPeriodOnDates(dates = data$attrs$Dates,
start = start, end = end,
time_dim = time_dim,
ncores = ncores)
}
return(data)
}
#'Transform an absolute threshold into probabilities
#'
#'From the user's perspective, an absolute threshold can be very useful for
#'specific needs (e.g. a grape variety). However, this absolute threshold could
#'be transformed to a relative threshold in order to get its frequency in a given
#'dataset. Therefore, the function \code{QThreshold} returns the probability of
#'an absolute threshold. This is done by computing the Cumulative Distribution
#'Function of a sample and leaving one out. The sample used will depend on the
#'dimensions of the data provided and the dimension names provided in sdate_dim
#'and memb_dim parameters:
#'\itemize{
#' \item{If a forecast (hindcast) has dimensions member and start date, and
#' both must be used in the sample, their names should be passed in
#' sdate_dim and memb_dim.}
#' \item{If a forecast (hindcast) has dimensions member and start date, and
#' only start date must be used in the sample (the calculation is done in
#' each separate member), memb_dim can be set to NULL.}
#' \item{If a reference (observations) has start date dimension, the sample
#' used is the start date dimension.}
#' \item{If a reference (observations) doesn't have start date dimension,
#' the sample used must be specified in sdate_dim parameter.}
#'}
#'
#'@param data A multidimensional array with named dimensions.
#'@param threshold A multidimensional array with named dimensions in the same
#' units as parameter 'data' and with the common dimensions of the element
#' 'data' of the same length.
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'. More than one dimension name
#' matching the dimensions provided in the parameter \code{data} can be
#' specified. This dimension is required to subset the data in a requested
#' period.
#'@param memb_dim A character string indicating the name of the dimension in
#' which the ensemble members are stored.
#'@param sdate_dim A character string indicating the name of the dimension in
#' which the initialization dates are stored.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the
#'probability of an absolute threshold in the element \code{data}.
#'
#'@examples
#'threshold <- 25
#'data <- array(rnorm(5 * 3 * 214 * 2, mean = 26),
#'              c(member = 5, sdate = 3, time = 214, lon = 2))
#'
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, time = 214)
#'
#'thres_q <- QThreshold(data, threshold, dates = Dates, time_dim = 'time',
#' start = list(21, 4), end = list(21, 6))
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
QThreshold <- function(data, threshold, dates = NULL, start = NULL, end = NULL,
time_dim = 'time', memb_dim = 'member',
sdate_dim = 'sdate', ncores = NULL) {
# Initial checks
## data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
if (!is.array(data)) {
dim(data) <- c(length(data), 1)
names(dim(data)) <- c(memb_dim, sdate_dim)
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have named dimensions.")
}
## threshold
if (is.null(threshold)) {
stop("Parameter 'threshold' cannot be NULL.")
}
if (!is.numeric(threshold)) {
stop("Parameter 'threshold' must be numeric.")
}
if (!is.array(threshold) && length(threshold) > 1) {
dim(threshold) <- length(threshold)
names(dim(threshold)) <- time_dim
} else if (length(threshold) == 1) {
dim(threshold) <- NULL
}
if (sdate_dim %in% names(dim(threshold))) {
    stop("Parameter 'threshold' cannot have the dimension specified in 'sdate_dim'.")
}
if (is.null(names(dim(threshold))) && length(threshold) > 1) {
stop("Parameter 'threshold' must have named dimensions.")
}
  if (is.null(memb_dim)) {
    # Use a dummy name so that the 'memb_dim %in% names(dim(...))' checks below fail
    memb_dim <- 99999
  }
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
      warning("Parameter 'dates' is NULL and the full data provided in ",
              "'data' will be used.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (time_dim %in% names(dim(threshold))) {
if (dim(threshold)[time_dim] == dim(data)[time_dim]) {
if (!is.null(dim(dates)) && sdate_dim %in% names(dim(dates))) {
dates_thres <- Subset(dates, along = sdate_dim, indices = 1)
threshold <- SelectPeriodOnData(data = threshold, dates = dates_thres, start, end,
time_dim = time_dim, ncores = ncores)
} else {
threshold <- SelectPeriodOnData(threshold, dates, start, end,
time_dim = time_dim, ncores = ncores)
}
}
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim, ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
if (length(threshold) == 1) {
if (memb_dim %in% names(dim(data))) {
probs <- Apply(list(data), target_dims = c(memb_dim, sdate_dim),
fun = .qthreshold_exp,
threshold, ncores = ncores)$output1
} else {
probs <- Apply(list(data), target_dims = sdate_dim, fun = .qthreshold_obs,
threshold, ncores = ncores)$output1
}
} else {
target_thres <- NULL
if (memb_dim %in% names(dim(data))) {
if (memb_dim %in% names(dim(threshold))) {
# comparison member by member
probs <- Apply(list(data, threshold),
target_dims = list(sdate_dim, NULL),
fun = .qthreshold_obs, ncores = ncores)$output1
} else {
probs <- Apply(list(data, threshold),
target_dims = list(c(memb_dim, sdate_dim), NULL),
fun = .qthreshold_exp, ncores = ncores)$output1
}
} else {
probs <- Apply(list(data, threshold), target_dims = list(sdate_dim, NULL),
fun = .qthreshold_obs, ncores = ncores)$output1
}
}
return(probs)
}
# By splitting the atomic function, a conditional repetition is avoided
# inside the Apply loops
.qthreshold_obs <- function(data, threshold) {
# dims data: sdate
dims <- dim(data)
  # no 'member' dimension involved
qres <- unlist(lapply(1:dims, function(x) {
ecdf(data[-x])(threshold)}))
dim(qres) <- c(dims)
return(qres)
}
.qthreshold_exp <- function(data, threshold) {
qres <- unlist(
lapply(1:(dim(data)[1]), function(x) { # dim 1: member
lapply(1:(dim(data)[2]), function(y) { # dim 2: sdate
ecdf(as.vector(data[,-y]))(threshold)
})
}))
dim(qres) <- c(dim(data)[2], dim(data)[1])
return(qres)
}
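# Minimal sketch (not part of the package API) of the leave-one-out empirical
# CDF used by .qthreshold_obs: for each start date, the probability assigned to
# the absolute threshold is the ECDF of all the remaining start dates evaluated
# at that threshold. 'x' is an assumed toy sample.
x <- c(24, 25, 26, 27, 28)   # e.g. five start dates at one grid point
threshold <- 26
sapply(seq_along(x), function(i) ecdf(x[-i])(threshold))
# equivalent to .qthreshold_obs(x, threshold)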
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/QThreshold.R
#' Select a period on Data on 's2dv_cube' objects
#'
#' Auxiliary function to subset data for a specific period.
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param start A parameter to define the initial date of the period to select
#' from the data by providing a list of two elements: the initial day of the
#' period and the initial month of the period.
#'@param end A parameter to define the final date of the period to select from
#' the data by providing a list of two elements: the final day of the period
#' and the final month of the period.
#'@param time_dim A character string indicating the name of the dimension in
#' which to select the dates. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the subset of the object
#'\code{data$data} during the period requested from \code{start} to \code{end}.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(5 * 3 * 214 * 2),
#' c(memb = 5, sdate = 3, time = 214, lon = 2))
#'exp$attrs$Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(exp$attrs$Dates) <- c(time = 214, sdate = 3)
#'class(exp) <- 's2dv_cube'
#'Period <- CST_SelectPeriodOnData(exp, start = list(21, 6), end = list(21, 9))
#'@import multiApply
#'@export
CST_SelectPeriodOnData <- function(data, start, end, time_dim = 'time',
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
    warning("Dimensions in 'data' element 'attrs$Dates' are missing and ",
            "all data will be used.")
}
}
res <- SelectPeriodOnData(data$data, data$attrs$Dates,
start = start, end = end,
time_dim = time_dim, ncores = ncores)
data$data <- res
if (!is.null(start) && !is.null(end)) {
data$attrs$Dates <- SelectPeriodOnDates(dates = data$attrs$Dates,
start = start, end = end,
time_dim = time_dim,
ncores = ncores)
}
return(data)
}
#' Select a period on Data on multidimensional array objects
#'
#' Auxiliary function to subset data for a specific period.
#'
#'@param data A multidimensional array with named dimensions with at least the
#' time dimension specified in parameter 'time_dim'. All common dimensions
#' with 'dates' parameter need to have the same length.
#'@param dates An array of dates with named dimensions with at least the time
#' dimension specified in parameter 'time_dim'. All common dimensions with
#' 'data' parameter need to have the same length.
#'@param start A list with two elements to define the initial date of the period
#' to select from the data. The first element is the initial day of the period
#' and the second element is the initial month of the period.
#'@param end A list with two elements to define the final date of the period
#' to select from the data. The first element is the final day of the period
#' and the second element is the final month of the period.
#'@param time_dim A character string indicating the name of the dimension in
#' which to select the dates. By default, it is set to 'time'. Parameters
#' 'data' and 'dates' must have this dimension.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the subset
#'of the object \code{data} during the period requested from \code{start} to
#'\code{end}.
#'
#'@examples
#'data <- array(rnorm(5 * 3 * 214 * 2),
#' c(memb = 5, sdate = 3, time = 214, lon = 2))
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(time = 214, sdate = 3)
#'Period <- SelectPeriodOnData(data, Dates, start = list(21, 6), end = list(21, 9))
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@importFrom s2dv Reorder
#'@export
SelectPeriodOnData <- function(data, dates, start, end,
time_dim = 'time', ncores = NULL) {
if (is.null(dim(dates))) {
dim(dates) <- length(dates)
names(dim(dates)) <- time_dim
}
if (is.null(dim(data))) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
res <- Apply(list(dates), target_dims = time_dim,
fun = .position,
ini_day = start[[1]], ini_month = start[[2]],
end_day = end[[1]], end_month = end[[2]],
ncores = ncores)$output1
# when 29Feb is included the length of the output changes:
regular <- Apply(list(res), target_dims = time_dim,
fun = sum, ncores = ncores)$output1
dims <- dim(data)
dims[names(dims) == time_dim] <- max(regular)
if (any(regular != max(regular))) {
res <- Apply(list(data, res), target_dims = time_dim,
fun = function(x, y) {
if (sum(y) != max(regular)) {
result <- c(x[y], NA)
} else {
result <- x[y]
}
dim(result) <- length(result)
names(dim(result)) <- names(dim(x))
return(result)
}, output_dims = time_dim, ncores = ncores)$output1
} else {
res <- Apply(list(data, res), target_dims = time_dim,
fun = function(x, y) {
res <- x[y]
if (is.null(dim(res))) {
dim(res) <- 1
names(dim(res)) <- time_dim
}
return(res)
}, output_dims = time_dim, ncores = ncores)$output1
}
names_res <- sort(names(dim(res)))
names_data <- sort(names(dim(data)))
if (!all(names_res %in% names_data)) {
dim_remove <- names_res[-which(names_res %in% names_data)]
indices <- as.list(rep(1, length(dim_remove)))
res <- Subset(res, along = dim_remove, indices, drop = 'selected')
}
if (any(names(dims) != names(dim(res)))) {
res <- Reorder(res, names(dims))
}
return(res)
}
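# Hedged sketch (an assumption, not the package code) of the day/month window
# selection that the internal helper .position, used above and defined
# elsewhere in the package, performs for periods that do not cross the new
# year: flag the dates whose (day, month) fall between start = list(day, month)
# and end = list(day, month).
dates <- seq(as.Date("2000-05-01"), as.Date("2000-11-30"), by = "day")
start <- list(21, 6)   # 21 June
end <- list(21, 9)     # 21 September
mo <- as.numeric(format(dates, "%m"))
dy <- as.numeric(format(dates, "%d"))
sel <- (mo > start[[2]] | (mo == start[[2]] & dy >= start[[1]])) &
       (mo < end[[2]] | (mo == end[[2]] & dy <= end[[1]]))
range(dates[sel])      # "2000-06-21" "2000-09-21"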
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/SelectPeriodOnData.R
#' Select a period on Dates
#'
#' Auxiliary function to subset dates for a specific period.
#'
#'@param dates An array of dates with named dimensions.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period.
#'@param time_dim A character string indicating the name of the dimension in
#' which to select the dates. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the parameter
#' \code{dates} can be specified.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the subset of
#'the vector dates during the period requested from \code{start} to \code{end}.
#'
#'@examples
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(time = 214, sdate = 3)
#'Period <- SelectPeriodOnDates(Dates, start = list(21, 6), end = list(21, 9))
#'@import multiApply
#'@importFrom s2dv Reorder
#'@export
SelectPeriodOnDates <- function(dates, start, end,
time_dim = 'time', ncores = NULL) {
if (is.null(dim(dates))) {
dim(dates) <- length(dates)
names(dim(dates)) <- time_dim
}
# TODO: consider NAs
res <- Apply(list(dates), target_dims = time_dim,
fun = .position,
ini_day = start[[1]], ini_month = start[[2]],
end_day = end[[1]], end_month = end[[2]],
ncores = ncores)$output1
reorder <- FALSE
if (which(names(dim(dates)) == time_dim) != 1) {
dimdates <- names(dim(dates))
dates <- Reorder(dates, c(time_dim, names(dim(dates))[which(names(dim(dates)) != time_dim)]))
reorder <- TRUE
}
# when 29Feb is included the length of the output changes:
regular <- Apply(list(res), target_dims = time_dim,
fun = sum, ncores = ncores)$output1
dims <- dim(dates)
dims[names(dims) == time_dim] <- max(regular)
if (any(regular != max(regular))) {
res <- Apply(list(dates, res), target_dims = time_dim,
fun = function(x, y) {
if (sum(y) != max(regular)) {
result <- c(x[y], NA)
} else {
result <- x[y]
}
dim(result) <- length(result)
names(dim(result)) <- names(dim(x))
return(result)
}, ncores = ncores)$output1
res <- as.POSIXct(res, origin = '1970-01-01', tz = 'UTC')
} else {
res <- dates[res]
dim(res) <- dims
if (reorder) {
res <- Reorder(res, dimdates)
}
}
return(res)
}
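# Minimal sketch of the NA padding applied above when the selected period does
# not have the same length for every start date (e.g. a 29 February falling in
# only some of the years): selections shorter than max(regular) are
# right-padded with NA so the output keeps a rectangular time dimension.
sel_lengths <- c(92, 93, 92)          # days selected for three start dates
lapply(sel_lengths, function(n) {
  out <- seq_len(n)
  length(out) <- max(sel_lengths)     # 'length<-' pads the tail with NA
  out
})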
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/SelectPeriodOnDates.R
#'Absolute value of a relative threshold (percentile)
#'
#'Frequently, thresholds are defined by a percentile that may correspond to a
#'different absolute value depending on the variable, gridpoint and also julian
#'day (time). This function calculates the corresponding value of a percentile
#'given a dataset.
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param threshold A single scalar or vector indicating the relative
#' threshold(s). It must contain values between 0 and 1.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial day
#' of the period and the initial month of the period. By default it is set to
#' NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'. More than one dimension name
#' matching the dimensions provided in the object \code{data$data} can be
#' specified. This dimension is required to subset the data in a requested
#' period.
#'@param memb_dim A character string indicating the name of the dimension in
#' which the ensemble members are stored. When set to NULL, the threshold is
#' computed for each member individually.
#'@param sdate_dim A character string indicating the name of the dimension in
#' which the initialization dates are stored.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the corresponding values of a
#'percentile in the element \code{data}.
#'
#'@examples
#'threshold <- 0.9
#'exp <- NULL
#'exp$data <- array(rnorm(5 * 3 * 214 * 2),
#' c(member = 5, sdate = 3, time = 214, lon = 2))
#'exp$attrs$Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(exp$attrs$Dates) <- c(sdate = 3, time = 214)
#'class(exp) <- 's2dv_cube'
#'exp_probs <- CST_Threshold(exp, threshold, start = list(21, 4), end = list(21, 6))
#'
#'@import multiApply
#'@export
CST_Threshold <- function(data, threshold, start = NULL, end = NULL,
time_dim = 'time', memb_dim = 'member',
sdate_dim = 'sdate', na.rm = FALSE,
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
      warning("Dimensions in 'data' element 'attrs$Dates' are missing and ",
              "all data will be used.")
start <- NULL
end <- NULL
}
}
thres <- Threshold(data$data, threshold, dates = data$attrs$Dates, start, end,
time_dim = time_dim, memb_dim = memb_dim,
sdate_dim = sdate_dim, na.rm = na.rm, ncores = ncores)
data$data <- thres
data$dims <- dim(thres)
data$coords[[memb_dim]] <- NULL
data$coords[[sdate_dim]] <- NULL
if (!is.null(start) && !is.null(end)) {
data$attrs$Dates <- SelectPeriodOnDates(dates = data$attrs$Dates,
start = start, end = end,
time_dim = time_dim,
ncores = ncores)
}
return(data)
}
#'Absolute value of a relative threshold (percentile)
#'
#'Frequently, thresholds are defined by a percentile that may correspond to a
#'different absolute value depending on the variable, gridpoint and also julian
#'day (time). This function calculates the corresponding value of a percentile
#'given a dataset.
#'
#'@param data A multidimensional array with named dimensions.
#'@param threshold A single scalar or vector indicating the relative
#' threshold(s). It must contain values between 0 and 1.
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'. More than one dimension name
#' matching the dimensions provided in the parameter \code{data} can be
#' specified. This dimension is required to subset the data in a requested
#' period.
#'@param memb_dim A character string indicating the name of the dimension in
#' which the ensemble members are stored. When set to NULL, the threshold is
#' computed for each member individually.
#'@param sdate_dim A character string indicating the name of the dimension in
#' which the initialization dates are stored.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the
#'corresponding values of a percentile in the element \code{data}.
#'
#'@examples
#'threshold <- 0.9
#'data <- array(rnorm(25 * 3 * 214 * 2, mean = 26),
#' c(member = 25, sdate = 3, time = 214, lon = 2))
#'thres_q <- Threshold(data, threshold)
#'data <- array(rnorm(1 * 3 * 214 * 2), c(member = 1, sdate = 3, time = 214, lon = 2))
#'res <- Threshold(data, threshold)
#'
#'@import multiApply
#'@importFrom stats quantile
#'@export
Threshold <- function(data, threshold, dates = NULL, start = NULL, end = NULL,
time_dim = 'time', memb_dim = 'member', sdate_dim = 'sdate',
na.rm = FALSE, ncores = NULL) {
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
if (!is.array(data)) {
dim(data) <- c(length(data), 1)
names(dim(data)) <- c(memb_dim, sdate_dim)
}
if (is.null(threshold)) {
stop("Parameter 'threshold' cannot be NULL.")
}
if (!is.numeric(threshold)) {
stop("Parameter 'threshold' must be numeric.")
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have named dimensions.")
}
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
      warning("Parameter 'dates' is NULL and the full data provided in ",
              "'data' will be used.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
if (!is.null(memb_dim)) {
dimensions <- c(sdate_dim, memb_dim)
} else {
dimensions <- sdate_dim
}
if (length(threshold) == 1) {
thres <- Apply(data, target_dims = dimensions,
fun = function(x) {quantile(as.vector(x), threshold, na.rm)})$output1
} else {
thres <- Apply(data, target_dims = dimensions,
fun = function(x) {quantile(as.vector(x), threshold, na.rm)},
output_dims = 'probs')$output1
}
return(thres)
}
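# Illustrative sketch (not a package example): with 'memb_dim' set the
# percentile is taken over the pooled member x start-date sample, while with
# memb_dim = NULL it is computed over the start dates of each member
# separately. 'dat' is an assumed toy array.
set.seed(1)
dat <- array(rnorm(5 * 10), dim = c(member = 5, sdate = 10))
Threshold(dat, threshold = 0.9)                   # one value from the pooled sample
Threshold(dat, threshold = 0.9, memb_dim = NULL)  # one value per member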
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/Threshold.R
#'Total Spell Time Exceeding Threshold
#'
#'The number of days (when daily data is provided) that are part of a spell
#'(defined by its minimum length, e.g. 6 consecutive days) exceeding (or not
#'exceeding) a threshold is calculated with \code{TotalSpellTimeExceedingThreshold}.
#'This function allows computing indicators widely used in Climate Services,
#'such as:
#'\itemize{
#' \item{'WSDI', Warm Spell Duration Index that counts the total number of days
#' belonging to spells of at least 6 consecutive days in which the daily
#' maximum temperature exceeds its 90th percentile.}
#'}
#'This function requires the data and the threshold to be in the same units. The
#'90th percentile can be translated into absolute values given a reference dataset
#'using function \code{Threshold} or the data can be transformed into probabilities
#'by using function \code{AbsToProbs}. See section @examples.
#'@seealso [Threshold()] and [AbsToProbs()].
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param threshold If only one threshold is used, it can be an 's2dv_cube'
#' object or a multidimensional array with named dimensions. It must be in the
#' same units and with the common dimensions of the same length as parameter
#' 'data'. It can also be a vector with the same length of 'time_dim' from
#' 'data' or a scalar. If we want to use two thresholds: it can be a vector
#' of two scalars, a list of two vectors with the same length of
#' 'time_dim' from 'data' or a list of two multidimensional arrays with the
#' common dimensions of the same length as parameter 'data'. If two thresholds
#' are used, parameter 'op' must be also a vector of two elements.
#'@param spell A scalar indicating the minimum length of the spell.
#'@param op An operator '>' (by default), '<', '>=' or '<='. If two thresholds
#' are used it has to be a vector of a pair of two logical operators:
#' c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='), c('>', '<'),
#' c('>', '<='), c('>=', '<'), c('>=', '<=').
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. It can only
#' indicate one time dimension.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the number of days that are part of a
#'spell within a threshold in element \code{data} with dimensions of the input
#'parameter 'data' except the dimension where the indicator has been computed.
#'The 'Dates' array is updated to the dates corresponding to the beginning of
#'the aggregated time period. A new element called 'time_bounds' will be added
#'into the 'attrs' element in the 's2dv_cube' object. It consists of a list
#'containing two elements, the start and end dates of the aggregated period with
#'the same dimensions of 'Dates' element.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(5 * 3 * 214 * 2)*23,
#' c(member = 5, sdate = 3, time = 214, lon = 2))
#'exp$attrs$Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(exp$attrs$Dates) <- c(sdate = 3, time = 214)
#'class(exp) <- 's2dv_cube'
#'TTSET <- CST_TotalSpellTimeExceedingThreshold(exp, threshold = 23, spell = 3,
#' start = list(21, 4),
#' end = list(21, 6))
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_TotalSpellTimeExceedingThreshold <- function(data, threshold, spell, op = '>',
start = NULL, end = NULL,
time_dim = 'time',
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
      warning("Dimensions in 'data' element 'attrs$Dates' are missing and ",
              "all data will be used.")
start <- NULL
end <- NULL
}
}
if (length(op) == 1) {
if (inherits(threshold, 's2dv_cube')) {
threshold <- threshold$data
}
} else if (length(op) == 2) {
if (inherits(threshold[[1]], 's2dv_cube')) {
threshold[[1]] <- threshold[[1]]$data
}
if (inherits(threshold[[2]], 's2dv_cube')) {
threshold[[2]] <- threshold[[2]]$data
}
}
Dates <- data$attrs$Dates
total <- TotalSpellTimeExceedingThreshold(data$data, Dates,
threshold = threshold, spell = spell,
op = op, start = start, end = end,
time_dim = time_dim,
ncores = ncores)
data$data <- total
data$dims <- dim(total)
data$coords[[time_dim]] <- NULL
if (!is.null(Dates)) {
if (!is.null(start) && !is.null(end)) {
Dates <- SelectPeriodOnDates(dates = Dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
}
if (is.null(dim(Dates))) {
      warning("Element 'Dates' has NULL dimensions. They will not be ",
              "subset and 'time_bounds' will not be created.")
data$attrs$Dates <- Dates
} else {
# Create time_bounds
time_bounds <- NULL
time_bounds$start <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = 1, drop = 'selected')
time_bounds$end <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = dim(Dates)[time_dim],
drop = 'selected')
# Add Dates in attrs
data$attrs$Dates <- time_bounds$start
data$attrs$time_bounds <- time_bounds
}
}
return(data)
}
#'Total Spell Time Exceeding Threshold
#'
#'The number of days (when daily data is provided) that are part of a spell
#'(defined by its minimum length, e.g. 6 consecutive days) exceeding (or not
#'exceeding) a threshold is calculated with \code{TotalSpellTimeExceedingThreshold}.
#'This function allows computing indicators widely used in Climate Services,
#'such as:
#'\itemize{
#' \item{'WSDI', Warm Spell Duration Index that counts the total number of days
#' belonging to spells of at least 6 consecutive days in which the daily
#' maximum temperature exceeds its 90th percentile.}
#'}
#'This function requires the data and the threshold to be in the same units. The
#'90th percentile can be translated into absolute values given a reference
#'dataset using function \code{Threshold} or the data can be transformed into
#'probabilities by using function \code{AbsToProbs}. See section @examples.
#'@seealso [Threshold()] and [AbsToProbs()].
#'
#'@param data A multidimensional array with named dimensions.
#'@param threshold If only one threshold is used: it can be a multidimensional
#' array with named dimensions. It must be in the same units and with the
#' common dimensions of the same length as parameter 'data'. It can also be a
#' vector with the same length of 'time_dim' from 'data' or a scalar. If we
#' want to use two thresholds: it can be a vector of two scalars, a list of
#' two vectors with the same length of 'time_dim' from 'data' or a list of
#' two multidimensional arrays with the common dimensions of the same length
#' as parameter 'data'. If two thresholds are used, parameter 'op' must be
#' also a vector of two elements.
#'@param spell A scalar indicating the minimum length of the spell.
#'@param op An operator '>' (by default), '<', '>=' or '<='. If two thresholds
#' are used it has to be a vector of a pair of two logical operators:
#' c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='), c('>', '<'),
#' c('>', '<='), c('>=', '<'), c('>=', '<=').
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. It can only
#' indicate one time dimension.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the number
#'of days that are part of a spell within a threshold with dimensions of the
#'input parameter 'data' except the dimension where the indicator has been
#'computed.
#'
#'@details This function considers NA values as the end of the spell. For a
#'different behaviour consider modifying the 'data' input by substituting NA
#'values with values exceeding the threshold.
#'@examples
#'data <- array(1:100, c(member = 5, sdate = 3, time = 214, lon = 2))
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, time = 214)
#'
#'threshold <- array(1:4, c(lat = 4))
#'total <- TotalSpellTimeExceedingThreshold(data, threshold, dates = Dates,
#' spell = 6, start = list(21, 4),
#' end = list(21, 6))
#'
#'@import multiApply
#'@export
TotalSpellTimeExceedingThreshold <- function(data, threshold, spell, op = '>',
dates = NULL, start = NULL, end = NULL,
time_dim = 'time', ncores = NULL) {
# data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
if (!is.array(data)) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have named dimensions.")
}
# time_dim
if (!is.character(time_dim)) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!all(time_dim %in% names(dim(data)))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (length(time_dim) > 1) {
warning("Parameter 'time_dim' has length greater than 1 and ",
"only the first element will be used.")
time_dim <- time_dim[1]
}
# op
if (!is.character(op)) {
stop("Parameter 'op' must be a character.")
}
if (length(op) == 1) {
if (!(op %in% c('>', '<', '>=', '<=', '='))) {
stop("Parameter 'op' must be a logical operator.")
}
} else if (length(op) == 2) {
op_list <- list(c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='),
c('>', '<'), c('>', '<='), c('>=', '<'), c('>=', '<='))
if (!any(unlist(lapply(op_list, function(x) all(x == op))))) {
stop("Parameter 'op' is not an accepted pair of logical operators.")
}
} else {
stop("Parameter 'op' must be a logical operator with length 1 or 2.")
}
# threshold
if (is.null(unlist(threshold))) {
stop("Parameter 'threshold' cannot be NULL.")
}
if (!is.numeric(unlist(threshold))) {
stop("Parameter 'threshold' must be numeric.")
}
if (length(op) == 2) {
if (length(op) != length(threshold)) {
stop(paste0("If 'op' is a pair of logical operators parameter 'threshold' ",
"also has to be a pair of values."))
}
if (!is.numeric(threshold[[1]]) | !is.numeric(threshold[[2]])) {
stop("Parameter 'threshold' must be numeric.")
}
if (length(threshold[[1]]) != length(threshold[[2]])) {
stop("The pair of thresholds must have the same length.")
}
if (!is.array(threshold[[1]]) && length(threshold[[1]]) > 1) {
if (dim(data)[time_dim] != length(threshold[[1]])) {
        stop("If parameter 'threshold' is a vector it must have the same length as the data time dimension.")
} else {
dim(threshold[[1]]) <- length(threshold[[1]])
dim(threshold[[2]]) <- length(threshold[[2]])
names(dim(threshold[[1]])) <- time_dim
names(dim(threshold[[2]])) <- time_dim
}
} else if (is.array(threshold[[1]]) && length(threshold[[1]]) > 1) {
if (is.null(names(dim(threshold[[1]])))) {
stop("If parameter 'threshold' is an array it must have named dimensions.")
}
if (!is.null(dim(threshold[[2]]))) {
if (!all(names(dim(threshold[[1]])) %in% names(dim(threshold[[2]])))) {
stop("The pair of thresholds must have the same dimension names.")
}
}
namedims <- names(dim(threshold[[1]]))
order <- match(namedims, names(dim(threshold[[2]])))
threshold[[2]] <- aperm(threshold[[2]], order)
if (!all(dim(threshold[[1]]) == dim(threshold[[2]]))) {
stop("The pair of thresholds must have the same dimensions.")
}
if (any(names(dim(threshold[[1]])) %in% names(dim(data)))) {
common_dims <- dim(threshold[[1]])[names(dim(threshold[[1]])) %in% names(dim(data))]
if (!all(common_dims == dim(data)[names(common_dims)])) {
stop(paste0("Parameter 'data' and 'threshold' must have same length of ",
"all common dimensions."))
}
}
} else if (length(threshold[[1]]) == 1) {
dim(threshold[[1]]) <- NULL
dim(threshold[[2]]) <- NULL
}
} else {
if (!is.array(threshold) && length(threshold) > 1) {
if (dim(data)[time_dim] != length(threshold)) {
        stop("If parameter 'threshold' is a vector it must have the same length as the data time dimension.")
} else {
dim(threshold) <- length(threshold)
names(dim(threshold)) <- time_dim
}
} else if (is.array(threshold) && length(threshold) > 1) {
if (is.null(names(dim(threshold)))) {
stop("If parameter 'threshold' is an array it must have named dimensions.")
}
if (any(names(dim(threshold)) %in% names(dim(data)))) {
common_dims <- dim(threshold)[names(dim(threshold)) %in% names(dim(data))]
if (!all(common_dims == dim(data)[names(common_dims)])) {
stop(paste0("Parameter 'data' and 'threshold' must have same length of ",
"all common dimensions."))
}
}
} else if (length(threshold) == 1) {
dim(threshold) <- NULL
}
}
# spell
if (!is.numeric(spell) | length(spell) != 1) {
stop("Parameter 'spell' must be a scalar.")
}
# ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | ncores %% 1 != 0 | ncores <= 0 |
length(ncores) > 1) {
stop("Parameter 'ncores' must be a positive integer.")
}
}
# dates
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
      warning("Parameter 'dates' is NULL and the full data provided in ",
              "'data' will be used.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (length(op) == 1) {
if (time_dim %in% names(dim(threshold))) {
if (dim(threshold)[time_dim] == dim(data)[time_dim]) {
threshold <- SelectPeriodOnData(threshold, dates, start, end,
time_dim = time_dim, ncores = ncores)
}
}
} else if (length(op) == 2) {
if (time_dim %in% names(dim(threshold[[1]]))) {
if (dim(threshold[[1]])[time_dim] == dim(data)[time_dim]) {
threshold[[1]] <- SelectPeriodOnData(threshold[[1]], dates, start, end,
time_dim = time_dim, ncores = ncores)
threshold[[2]] <- SelectPeriodOnData(threshold[[2]], dates, start, end,
time_dim = time_dim, ncores = ncores)
}
}
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
if (length(op) > 1) {
thres1 <- threshold[[1]]
thres2 <- threshold[[2]]
if (is.null(dim(thres1))) {
total <- Apply(list(data), target_dims = time_dim,
fun = .totalspellthres, y = thres1, y2 = thres2,
spell = spell, op = op,
ncores = ncores)$output1
} else if (any(time_dim %in% names(dim(thres1)))) {
total <- Apply(list(data, thres1, thres2),
target_dims = list(time_dim,
time_dim[time_dim %in% names(dim(thres1))],
time_dim[time_dim %in% names(dim(thres2))]),
fun = .totalspellthres, spell = spell, op = op,
ncores = ncores)$output1
} else {
total <- Apply(list(data, thres1, thres2),
target_dims = list(time_dim, thres1 = NULL, thres2 = NULL),
fun = .totalspellthres, spell = spell, op = op,
ncores = ncores)$output1
}
} else {
if (is.null(dim(threshold))) {
total <- Apply(list(data), target_dims = time_dim,
fun = .totalspellthres,
y = threshold, spell = spell, op = op,
ncores = ncores)$output1
} else if (any(time_dim %in% names(dim(threshold)))) {
total <- Apply(list(data, threshold),
target_dims = list(time_dim,
time_dim[time_dim %in% names(dim(threshold))]),
fun = .totalspellthres, spell = spell, op = op,
ncores = ncores)$output1
} else {
total <- Apply(list(data, threshold),
target_dims = list(time_dim, NULL),
fun = .totalspellthres, spell = spell, op = op,
ncores = ncores)$output1
}
}
return(total)
}
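# Illustrative sketch (not a package example) of the two-threshold form
# documented above: count the days belonging to spells of at least 3
# consecutive days whose values lie strictly between 20 and 30. 'dat' is an
# assumed toy array.
set.seed(1)
dat <- array(runif(4 * 60, min = 10, max = 40), dim = c(member = 4, time = 60))
TotalSpellTimeExceedingThreshold(dat, threshold = list(20, 30), spell = 3,
                                 op = c('>', '<'))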
.totalspellthres <- function(x, y, y2 = NULL, spell, op = '>') {
y <- as.vector(y)
y2 <- as.vector(y2)
if (is.null(y2)) {
if (op == '>') {
exceed <- x > y
} else if (op == '<') {
exceed <- x < y
} else if (op == '<=') {
exceed <- x <= y
} else {
exceed <- x >= y
}
} else {
if (all(op == c('<', '>'))) {
exceed <- x < y & x > y2
} else if (all(op == c('<', '>='))) {
exceed <- x < y & x >= y2
} else if (all(op == c('<=', '>'))) {
exceed <- x <= y & x > y2
} else if (all(op == c('<=', '>='))) {
exceed <- x <= y & x >= y2
} else if (all(op == c('>', '<'))) {
exceed <- x > y & x < y2
} else if (all(op == c('>', '<='))) {
exceed <- x > y & x <= y2
} else if (all(op == c('>=', '<'))) {
exceed <- x >= y & x < y2
} else if (all(op == c('>=', '<='))) {
exceed <- x >= y & x <= y2
}
}
  # Length of the run in progress at each time step (restarts at every change)
  spells_exceed <- sequence(rle(as.character(exceed))$lengths)
  # Keep run lengths only at time steps where the condition holds
  spells_exceed[exceed == FALSE] <- NA
  # Time steps at which a run first reaches the requested spell length
  pos_spells <- which(spells_exceed == spell)
  total <- sum(unlist(lapply(pos_spells, function(y) {
    # Walk to the end of the run and count all of its days
    last_days <- x <- y
    while (!is.na(x)) {
      x <- spells_exceed[last_days + 1]
      last_days <- last_days + 1
    }
    days <- length((y - spell + 1):(last_days - 1))
    return(days)
  })))
return(total)
}
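# Illustrative note (not part of the original code): with op = '>' and
# spell = 3, the vector c(1, 5, 6, 7, 2, 8, 9, 10, 11) compared against a
# threshold of 4 contains two runs of at least 3 consecutive exceedances
# (lengths 3 and 4), so the helper returns 3 + 4 = 7 days:
# .totalspellthres(c(1, 5, 6, 7, 2, 8, 9, 10, 11), y = 4, spell = 3, op = '>')
# [1] 7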
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/TotalSpellTimeExceedingThreshold.R
#'Total Time of a variable Exceeding (not exceeding) a Threshold
#'
#'The Total Time of a variable exceeding (or not) a Threshold. It returns the
#'total number of days (if the data provided is daily, or the corresponding
#'units of the data frequency) that a variable is exceeding a threshold
#'during a period. The threshold provided must be in the same units as the
#'variable units, i.e. to use a percentile as a scalar, the function
#'\code{AbsToProbs} or \code{QThreshold} may be needed (see examples).
#'Providing maximum temperature daily data, the following agriculture
#'indices for heat stress can be obtained by using this function:
#'\itemize{
#' \item{'SU35', Total count of days when daily maximum temperatures exceed
#' 35°C in the seven months from the start month given (e.g. from April
#' to October for start month of April).}
#' \item{'SU36', Total count of days when daily maximum temperatures exceed
#'   36°C between June 21st and September 21st.}
#' \item{'SU40', Total count of days when daily maximum temperatures exceed
#'   40°C between June 21st and September 21st.}
#' \item{'Spr32', Total count of days when daily maximum temperatures exceed
#'   32°C between April 21st and June 21st.}
#'}
#'
#'@param data An 's2dv_cube' object as provided function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param threshold If only one threshold is used, it can be an 's2dv_cube'
#' object or a multidimensional array with named dimensions. It must be in the
#' same units and with the common dimensions of the same length as parameter
#' 'data'. It can also be a vector with the same length of 'time_dim' from
#' 'data' or a scalar. If we want to use two thresholds: it can be a vector
#' of two scalars, a list of two vectors with the same length of
#' 'time_dim' from 'data' or a list of two multidimensional arrays with the
#' common dimensions of the same length as parameter 'data'. If two thresholds
#' are used, parameter 'op' must be also a vector of two elements.
#'@param op An operator '>' (by default), '<', '>=' or '<='. If two thresholds
#' are used it has to be a vector of a pair of two logical operators:
#' c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='), c('>', '<'),
#'  c('>', '<='), c('>=', '<'), c('>=', '<=').
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. It can only
#' indicate one time dimension.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing in element \code{data} the total
#'number of the corresponding units of the data frequency that a variable is
#'exceeding a threshold during a period with dimensions of the input parameter
#''data' except the dimension where the indicator has been computed. The
#''Dates' array is updated to the dates corresponding to the beginning of the
#'aggregated time period. A new element called 'time_bounds' will be added into
#'the 'attrs' element in the 's2dv_cube' object. It consists of a list
#'containing two elements, the start and end dates of the aggregated period with
#'the same dimensions of 'Dates' element.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(5 * 3 * 214 * 2)*23,
#' c(member = 5, sdate = 3, time = 214, lon = 2))
#'exp$attrs$Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(exp$attrs$Dates) <- c(sdate = 3, time = 214)
#'class(exp) <- 's2dv_cube'
#'DOT <- CST_TotalTimeExceedingThreshold(exp, threshold = 23, start = list(21, 4),
#' end = list(21, 6))
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_TotalTimeExceedingThreshold <- function(data, threshold, op = '>',
start = NULL, end = NULL,
time_dim = 'time',
na.rm = FALSE, ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
warning("Dimensions in 'data' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
if (length(op) == 1) {
if (inherits(threshold, 's2dv_cube')) {
threshold <- threshold$data
}
} else if (length(op) == 2) {
if (inherits(threshold[[1]], 's2dv_cube')) {
threshold[[1]] <- threshold[[1]]$data
}
if (inherits(threshold[[2]], 's2dv_cube')) {
threshold[[2]] <- threshold[[2]]$data
}
}
Dates <- data$attrs$Dates
total <- TotalTimeExceedingThreshold(data = data$data, dates = Dates,
threshold = threshold, op = op,
start = start, end = end,
time_dim = time_dim, na.rm = na.rm,
ncores = ncores)
data$data <- total
data$dims <- dim(total)
data$coords[[time_dim]] <- NULL
if (!is.null(Dates)) {
if (!is.null(start) && !is.null(end)) {
Dates <- SelectPeriodOnDates(dates = Dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
}
if (is.null(dim(Dates))) {
warning("Element 'Dates' has NULL dimensions. They will not be ",
"subset and 'time_bounds' will be missed.")
data$attrs$Dates <- Dates
} else {
# Create time_bounds
time_bounds <- NULL
time_bounds$start <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = 1, drop = 'selected')
time_bounds$end <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = dim(Dates)[time_dim],
drop = 'selected')
# Add Dates in attrs
data$attrs$Dates <- time_bounds$start
data$attrs$time_bounds <- time_bounds
}
}
return(data)
}
#'Total Time of a variable Exceeding (not exceeding) a Threshold
#'
#'The Total Time of a variable exceeding (or not) a Threshold. It returns the
#'total number of days (if the data provided is daily, or the corresponding
#'units of the data frequency) that a variable is exceeding a threshold
#'during a period. The threshold provided must be in the same units as the
#'variable units, i.e. to use a percentile as a scalar, the function
#'\code{AbsToProbs} or \code{QThreshold} may be needed (see examples).
#'Providing maximum temperature daily data, the following agriculture
#'indices for heat stress can be obtained by using this function:
#'\itemize{
#' \item{'SU35', Total count of days when daily maximum temperatures exceed
#' 35°C in the seven months from the start month given (e.g. from April
#' to October for start month of April).}
#' \item{'SU36', Total count of days when daily maximum temperatures exceed
#'   36°C between June 21st and September 21st.}
#' \item{'SU40', Total count of days when daily maximum temperatures exceed
#'   40°C between June 21st and September 21st.}
#' \item{'Spr32', Total count of days when daily maximum temperatures exceed
#'   32°C between April 21st and June 21st.}
#'}
#'
#'@param data A multidimensional array with named dimensions.
#'@param threshold If only one threshold is used: it can be a multidimensional
#' array with named dimensions. It must be in the same units and with the
#' common dimensions of the same length as parameter 'data'. It can also be a
#' vector with the same length of 'time_dim' from 'data' or a scalar. If we
#' want to use two thresholds: it can be a vector of two scalars, a list of
#' two vectors with the same length of 'time_dim' from 'data' or a list of
#' two multidimensional arrays with the common dimensions of the same length
#' as parameter 'data'. If two thresholds are used, parameter 'op' must be
#' also a vector of two elements.
#'@param op An operator '>' (by default), '<', '>=' or '<='. If two thresholds
#' are used it has to be a vector of a pair of two logical operators:
#' c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='), c('>', '<'),
#'  c('>', '<='), c('>=', '<'), c('>=', '<=').
#'@param dates A multidimensional array of dates with named dimensions matching
#'  the temporal dimensions of parameter 'data'. By default it is NULL; to
#'  select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. It can only
#' indicate one time dimension.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the total
#'number of the corresponding units of the data frequency that a variable is
#'exceeding a threshold during a period with dimensions of the input parameter
#''data' except the dimension where the indicator has been computed.
#'
#'@examples
#'data <- array(rnorm(5 * 3 * 214 * 2)*23,
#' c(member = 5, sdate = 3, time = 214, lon = 2))
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, time = 214)
#'DOT <- TotalTimeExceedingThreshold(data, threshold = 23, dates = Dates,
#' start = list(21, 4), end = list(21, 6))
#'
#'@import multiApply
#'@export
TotalTimeExceedingThreshold <- function(data, threshold, op = '>',
dates = NULL, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE,
ncores = NULL) {
# data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
if (!is.array(data)) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have named dimensions.")
}
# time_dim
if (!is.character(time_dim)) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!all(time_dim %in% names(dim(data)))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (length(time_dim) > 1) {
warning("Parameter 'time_dim' has length greater than 1 and ",
"only the first element will be used.")
time_dim <- time_dim[1]
}
# op
if (!is.character(op)) {
stop("Parameter 'op' must be a character.")
}
if (length(op) == 1) {
if (!(op %in% c('>', '<', '>=', '<=', '='))) {
stop("Parameter 'op' must be a logical operator.")
}
} else if (length(op) == 2) {
op_list <- list(c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='),
c('>', '<'), c('>', '<='), c('>=', '<'), c('>=', '<='))
if (!any(unlist(lapply(op_list, function(x) all(x == op))))) {
stop("Parameter 'op' is not an accepted pair of logical operators.")
}
} else {
stop("Parameter 'op' must be a logical operator with length 1 or 2.")
}
# threshold
if (is.null(unlist(threshold))) {
stop("Parameter 'threshold' cannot be NULL.")
}
if (!is.numeric(unlist(threshold))) {
stop("Parameter 'threshold' must be numeric.")
}
if (length(op) == 2) {
if (length(op) != length(threshold)) {
stop(paste0("If 'op' is a pair of logical operators parameter 'threshold' ",
"also has to be a pair of values."))
}
if (!is.numeric(threshold[[1]]) | !is.numeric(threshold[[2]])) {
stop("Parameter 'threshold' must be numeric.")
}
if (length(threshold[[1]]) != length(threshold[[2]])) {
stop("The pair of thresholds must have the same length.")
}
if (!is.array(threshold[[1]]) && length(threshold[[1]]) > 1) {
if (dim(data)[time_dim] != length(threshold[[1]])) {
stop("If parameter 'threshold' is a vector it must have the same length as data any time dimension.")
} else {
dim(threshold[[1]]) <- length(threshold[[1]])
dim(threshold[[2]]) <- length(threshold[[2]])
names(dim(threshold[[1]])) <- time_dim
names(dim(threshold[[2]])) <- time_dim
}
} else if (is.array(threshold[[1]]) && length(threshold[[1]]) > 1) {
if (is.null(names(dim(threshold[[1]])))) {
stop("If parameter 'threshold' is an array it must have named dimensions.")
}
if (!is.null(dim(threshold[[2]]))) {
if (!all(names(dim(threshold[[1]])) %in% names(dim(threshold[[2]])))) {
stop("The pair of thresholds must have the same dimension names.")
}
}
namedims <- names(dim(threshold[[1]]))
order <- match(namedims, names(dim(threshold[[2]])))
threshold[[2]] <- aperm(threshold[[2]], order)
if (!all(dim(threshold[[1]]) == dim(threshold[[2]]))) {
stop("The pair of thresholds must have the same dimensions.")
}
if (any(names(dim(threshold[[1]])) %in% names(dim(data)))) {
common_dims <- dim(threshold[[1]])[names(dim(threshold[[1]])) %in% names(dim(data))]
if (!all(common_dims == dim(data)[names(common_dims)])) {
stop(paste0("Parameter 'data' and 'threshold' must have same length of ",
"all common dimensions."))
}
}
} else if (length(threshold[[1]]) == 1) {
dim(threshold[[1]]) <- NULL
dim(threshold[[2]]) <- NULL
}
} else {
if (!is.array(threshold) && length(threshold) > 1) {
if (dim(data)[time_dim] != length(threshold)) {
stop("If parameter 'threshold' is a vector it must have the same length as data time dimension.")
} else {
dim(threshold) <- length(threshold)
names(dim(threshold)) <- time_dim
}
} else if (is.array(threshold) && length(threshold) > 1) {
if (is.null(names(dim(threshold)))) {
stop("If parameter 'threshold' is an array it must have named dimensions.")
}
if (any(names(dim(threshold)) %in% names(dim(data)))) {
common_dims <- dim(threshold)[names(dim(threshold)) %in% names(dim(data))]
if (!all(common_dims == dim(data)[names(common_dims)])) {
stop(paste0("Parameter 'data' and 'threshold' must have same length of ",
"all common dimensions."))
}
}
} else if (length(threshold) == 1) {
dim(threshold) <- NULL
}
}
# ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | ncores %% 1 != 0 | ncores <= 0 |
length(ncores) > 1) {
stop("Parameter 'ncores' must be a positive integer.")
}
}
# dates
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
warning("Parameter 'dates' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (length(op) == 1) {
if (time_dim %in% names(dim(threshold))) {
if (dim(threshold)[time_dim] == dim(data)[time_dim]) {
threshold <- SelectPeriodOnData(threshold, dates, start, end,
time_dim = time_dim, ncores = ncores)
}
}
} else if (length(op) == 2) {
if (time_dim %in% names(dim(threshold[[1]]))) {
if (dim(threshold[[1]])[time_dim] == dim(data)[time_dim]) {
threshold[[1]] <- SelectPeriodOnData(threshold[[1]], dates, start, end,
time_dim = time_dim, ncores = ncores)
threshold[[2]] <- SelectPeriodOnData(threshold[[2]], dates, start, end,
time_dim = time_dim, ncores = ncores)
}
}
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
if (length(op) > 1) {
thres1 <- threshold[[1]]
thres2 <- threshold[[2]]
if (is.null(dim(thres1))) {
total <- Apply(list(data), target_dims = time_dim,
fun = .exceedthreshold, y = thres1, y2 = thres2,
op = op, na.rm = na.rm,
ncores = ncores)$output1
} else if (any(time_dim %in% names(dim(thres1)))) {
total <- Apply(list(data, thres1, thres2),
target_dims = list(time_dim,
time_dim[time_dim %in% names(dim(thres1))],
time_dim[time_dim %in% names(dim(thres2))]),
fun = .exceedthreshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
} else {
total <- Apply(list(data, thres1, thres2),
target_dims = list(time_dim, thres1 = NULL, thres2 = NULL),
fun = .exceedthreshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
}
} else {
if (is.null(dim(threshold))) {
total <- Apply(list(data), target_dims = time_dim,
fun = .exceedthreshold,
y = threshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
} else if (any(time_dim %in% names(dim(threshold)))) {
total <- Apply(list(data, threshold),
target_dims = list(time_dim,
time_dim[time_dim %in% names(dim(threshold))]),
fun = .exceedthreshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
} else {
total <- Apply(list(data, threshold),
target_dims = list(time_dim, NULL),
fun = .exceedthreshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
}
}
return(total)
}
.exceedthreshold <- function(x, y, y2 = NULL, op = '>', na.rm) {
y <- as.vector(y)
y2 <- as.vector(y2)
if (is.null(y2)) {
if (op == '>') {
res <- sum(x > y, na.rm = na.rm)
} else if (op == '<') {
res <- sum(x < y, na.rm = na.rm)
} else if (op == '<=') {
res <- sum(x <= y, na.rm = na.rm)
} else {
res <- sum(x >= y, na.rm = na.rm)
}
} else {
if (all(op == c('<', '>'))) {
res <- sum(x < y & x > y2, na.rm = na.rm)
} else if (all(op == c('<', '>='))) {
res <- sum(x < y & x >= y2, na.rm = na.rm)
} else if (all(op == c('<=', '>'))) {
res <- sum(x <= y & x > y2, na.rm = na.rm)
} else if (all(op == c('<=', '>='))) {
res <- sum(x <= y & x >= y2, na.rm = na.rm)
} else if (all(op == c('>', '<'))) {
res <- sum(x > y & x < y2, na.rm = na.rm)
} else if (all(op == c('>', '<='))) {
res <- sum(x > y & x <= y2, na.rm = na.rm)
} else if (all(op == c('>=', '<'))) {
res <- sum(x >= y & x < y2, na.rm = na.rm)
} else if (all(op == c('>=', '<='))) {
res <- sum(x >= y & x <= y2, na.rm = na.rm)
}
}
return(res)
}
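# Illustrative note (not part of the original code): with op = '>' the helper
# simply counts the time steps exceeding the threshold, e.g.
# .exceedthreshold(c(30, 36, 37, 33, 41), y = 35, op = '>', na.rm = FALSE)
# [1] 3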
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/TotalTimeExceedingThreshold.R
#'Wind capacity factor on s2dv_cube objects
#'
#'@author Llorenç Lledó, \email{[email protected]}
#'@description Wind capacity factor computes the wind power generated by a
#'specific wind turbine model under specific wind speed conditions, and
#'expresses it as a fraction of the rated capacity (i.e. maximum power) of the
#'turbine.
#'@description It is computed by means of a tabular power curve that relates
#'wind speed to power output. The tabular values are interpolated with a linear
#'piecewise approximating function to obtain a smooth power curve. Five
#'different power curves that span different IEC classes can be selected (see
#'below).
#'@references Lledó, Ll., Torralba, V., Soret, A., Ramon, J., & Doblas-Reyes,
#'F. J. (2019). Seasonal forecasts of wind power generation.
#'Renewable Energy, 143, 91–100. https://doi.org/10.1016/j.renene.2019.04.135
#'@references International Standard IEC 61400-1 (third ed.) (2005)
#'
#'@param wind An s2dv_cube object with instantaneous wind speeds expressed in m/s.
#'@param IEC_class A string indicating the IEC wind class (see IEC 61400-1) of
#' the turbine to be selected. Classes \code{'I'}, \code{'II'} and \code{'III'}
#' are suitable for sites with an annual mean wind speed of 10, 8.5 and 7.5 m/s
#' respectively. Classes \code{'I/II'} and \code{'II/III'} indicate
#' intermediate turbines that fit both classes. More details of the five
#' turbines and a plot of its power curves can be found in Lledó et al. (2019).
#'@param start An optional parameter to defined the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to defined the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation for temporal subsetting.
#'@return An s2dv_cube object containing the Wind Capacity Factor (unitless).
#'
#'@examples
#'wind <- NULL
#'wind$data <- array(rweibull(n = 100, shape = 2, scale = 6),
#' c(member = 5, sdate = 3, time = 214, lon = 2, lat = 5))
#'wind$coords <- list(lat = c(40, 41), lon = 1:5)
#'variable <- list(varName = 'sfcWind',
#' metadata = list(sfcWind = list(level = 'Surface')))
#'wind$attrs <- list(Variable = variable, Datasets = 'synthetic',
#' when = Sys.time(), Dates = '1990-01-01 00:00:00')
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, time = 214)
#'wind$attrs$Dates <- Dates
#'class(wind) <- 's2dv_cube'
#'WCF <- CST_WindCapacityFactor(wind, IEC_class = "III",
#' start = list(21, 4), end = list(21, 6))
#'
#'@export
CST_WindCapacityFactor <- function(wind, IEC_class = c("I", "I/II", "II", "II/III", "III"),
start = NULL, end = NULL, time_dim = 'time',
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(wind, 's2dv_cube')) {
stop("Parameter 'wind' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(wind$attrs$Dates))) {
warning("Dimensions in 'wind' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
WindCapacity <- WindCapacityFactor(wind = wind$data, IEC_class = IEC_class,
dates = wind$attrs$Dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
wind$data <- WindCapacity
wind$dims <- dim(WindCapacity)
if ('Variable' %in% names(wind$attrs)) {
if ('varName' %in% names(wind$attrs$Variable)) {
wind$attrs$Variable$varName <- 'WindCapacityFactor'
}
}
if (!is.null(start) && !is.null(end)) {
wind$attrs$Dates <- SelectPeriodOnDates(dates = wind$attrs$Dates,
start = start, end = end,
time_dim = time_dim,
ncores = ncores)
}
return(wind)
}
#'Wind capacity factor
#'
#'@author Llorenç Lledó, \email{[email protected]}
#'@description Wind capacity factor computes the wind power generated by a
#'specific wind turbine model under specific wind speed conditions, and
#'expresses it as a fraction of the rated capacity (i.e. maximum power) of the
#'turbine.
#'@description It is computed by means of a tabular power curve that relates
#'wind speed to power output. The tabular values are interpolated with a linear
#'piecewise approximating function to obtain a smooth power curve. Five
#'different power curves that span different IEC classes can be selected (see
#'below).
#'@references Lledó, Ll., Torralba, V., Soret, A., Ramon, J., & Doblas-Reyes,
#'F. J. (2019). Seasonal forecasts of wind power generation.
#'Renewable Energy, 143, 91–100. https://doi.org/10.1016/j.renene.2019.04.135
#'@references International Standard IEC 61400-1 (third ed.) (2005)
#'
#'@param wind A multidimensional array, vector or scalar with instantaneous wind
#' speeds expressed in m/s.
#'@param IEC_class A string indicating the IEC wind class (see IEC 61400-1) of
#' the turbine to be selected. Classes \code{'I'}, \code{'II'} and \code{'III'}
#' are suitable for sites with an annual mean wind speed of 10, 8.5 and 7.5 m/s
#' respectively. Classes \code{'I/II'} and \code{'II/III'} indicate
#' intermediate turbines that fit both classes. More details of the five
#' turbines and a plot of its power curves can be found in Lledó et al. (2019).
#'@param dates A multidimensional array of dates with named dimensions matching
#'  the temporal dimensions of parameter 'data'. By default it is NULL; to
#'  select a period this parameter must be provided.
#'@param start An optional parameter to defined the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to defined the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation for temporal subsetting.
#'
#'@return An array with the same dimensions as wind, containing the Wind
#' Capacity Factor (unitless).
#'
#'@examples
#'wind <- array(rweibull(n = 32100, shape = 2, scale = 6),
#' c(member = 5, sdate = 3, time = 214, lon = 2, lat = 5))
#'
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, time = 214)
#'
#'WCF <- WindCapacityFactor(wind, IEC_class = "III", dates = Dates,
#' start = list(21, 4), end = list(21, 6))
#'
#'@importFrom stats approxfun
#'@importFrom utils read.delim
#'@export
WindCapacityFactor <- function(wind, IEC_class = c("I", "I/II", "II", "II/III", "III"),
dates = NULL, start = NULL, end = NULL,
time_dim = 'time', ncores = NULL) {
IEC_class <- match.arg(IEC_class)
pc_files <- c(
"I" = "Enercon_E70_2.3MW.txt",
"I/II" = "Gamesa_G80_2.0MW.txt",
"II" = "Gamesa_G87_2.0MW.txt",
"II/III" = "Vestas_V100_2.0MW.txt",
"III" = "Vestas_V110_2.0MW.txt"
)
  pc_file <- system.file("power_curves", pc_files[IEC_class],
                         package = "CSIndicators", mustWork = TRUE)
pc <- read_pc(pc_file)
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
warning("Parameter 'dates' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (!is.null(dim(dates))) {
wind <- SelectPeriodOnData(data = wind, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'wind' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
cf <- wind2CF(wind, pc)
dim(cf) <- dim(wind)
return(cf)
}
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/WindCapacityFactor.R
#'Wind power density on s2dv_cube objects
#'
#'@author Llorenç Lledó, \email{[email protected]}
#'@description Wind Power Density computes the wind power that is available for
#'extraction per square meter of swept area.
#'@description It is computed as 0.5*ro*wspd^3. As this function is non-linear,
#'it will give inaccurate results if used with period means.
#'
#'@param wind An 's2dv_cube' object with instantaneous wind speeds expressed in
#'  m/s obtained from CST_Start or s2dv_cube functions from the CSTools package.
#'@param ro A scalar, or alternatively a multidimensional array with the same
#' dimensions as wind, with the air density expressed in kg/m^3. By default it
#' takes the value 1.225, the standard density of air at 15ºC and 1013.25 hPa.
#'@param start An optional parameter to defined the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to defined the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation for temporal subsetting.
#'
#'@return An s2dv_cube object containing Wind Power Density expressed in W/m^2.
#'
#'@examples
#'wind <- NULL
#'wind$data <- array(rweibull(n = 100, shape = 2, scale = 6),
#' c(member = 5, sdate = 3, time = 214, lon = 2, lat = 5))
#'wind$coords <- list(lat = c(40, 41), lon = 1:5)
#'variable <- list(varName = 'sfcWind',
#' metadata = list(sfcWind = list(level = 'Surface')))
#'wind$attrs <- list(Variable = variable, Datasets = 'synthetic',
#' when = Sys.time(), Dates = '1990-01-01 00:00:00')
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, time = 214)
#'wind$attrs$Dates <- Dates
#'class(wind) <- 's2dv_cube'
#'WPD <- CST_WindPowerDensity(wind, start = list(21, 4),
#' end = list(21, 6))
#'
#'@export
CST_WindPowerDensity <- function(wind, ro = 1.225, start = NULL, end = NULL,
time_dim = 'time', ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(wind, 's2dv_cube')) {
stop("Parameter 'wind' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(wind$attrs$Dates))) {
warning("Dimensions in 'wind' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
WindPower <- WindPowerDensity(wind = wind$data, ro = ro,
dates = wind$attrs$Dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
wind$data <- WindPower
wind$dims <- dim(WindPower)
if ('Variable' %in% names(wind$attrs)) {
if ('varName' %in% names(wind$attrs$Variable)) {
wind$attrs$Variable$varName <- 'WindPowerDensity'
}
}
if (!is.null(start) && !is.null(end)) {
wind$attrs$Dates <- SelectPeriodOnDates(dates = wind$attrs$Dates,
start = start, end = end,
time_dim = time_dim,
ncores = ncores)
}
return(wind)
}
#'Wind power density on multidimensional array objects
#'
#'@author Llorenç Lledó, \email{[email protected]}
#'@description Wind Power Density computes the wind power that is available for
#'extraction per square meter of swept area.
#'@description It is computed as 0.5*ro*wspd^3. As this function is non-linear,
#'it will give inaccurate results if used with period means.
#'
#'@param wind A multidimensional array, vector or scalar with instantaneous wind
#' speeds expressed in m/s.
#'@param ro A scalar, or alternatively a multidimensional array with the same
#' dimensions as wind, with the air density expressed in kg/m^3. By default it
#' takes the value 1.225, the standard density of air at 15ºC and 1013.25 hPa.
#'@param dates A multidimensional array of dates with named dimensions matching
#'  the temporal dimensions of parameter 'data'. By default it is NULL; to
#'  select a period this parameter must be provided.
#'@param start An optional parameter to defined the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to defined the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation for temporal subsetting.
#'
#'@return An array with the same dimensions as wind, containing Wind Power
#'Density expressed in W/m^2.
#'
#'@examples
#'wind <- array(rweibull(n = 32100, shape = 2, scale = 6),
#' c(member = 5, sdate = 3, time = 214, lon = 2, lat = 5))
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, time = 214)
#'WPD <- WindPowerDensity(wind, dates = Dates, start = list(21, 4),
#' end = list(21, 6))
#'
#'@export
WindPowerDensity <- function(wind, ro = 1.225, dates = NULL, start = NULL,
end = NULL, time_dim = 'time', ncores = NULL) {
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
warning("Parameter 'dates' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (!is.null(dim(dates))) {
wind <- SelectPeriodOnData(data = wind, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'wind' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
return(0.5 * ro * wind^3)
}
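# Illustrative note (not part of the original code): with the default air
# density, a steady 10 m/s wind yields 0.5 * 1.225 * 10^3 = 612.5 W/m2:
# WindPowerDensity(10)
# [1] 612.5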
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/WindPowerDensity.R
.position <- function(dates, ini_day, ini_month, end_day, end_month) {
days <- as.numeric(format(dates, "%d"))
months <- as.numeric(format(dates, "%m"))
pos <- 1:length(dates)
position <- logical(length(dates))
if (ini_month != end_month) {
pos <- sort(unique(c(pos[months == ini_month & days >= ini_day],
pos[months < end_month & months > ini_month],
pos[months == end_month & days <= end_day])))
position[pos] <- TRUE
position[-pos] <- FALSE
} else {
pos <- sort(unique(c(pos[months == ini_month &
days >= ini_day & days <= end_day])))
position[pos] <- TRUE
position[-pos] <- FALSE
}
if (!is.null(dim(dates))) {
dim(position) <- length(position)
    if (!is.null(names(dim(dates)))) {
names(dim(position)) <- names(dim(dates))
}
}
return(position)
}
#=======================
# Read a powercurve file
# Create the approximation function
#=======================
read_pc <- function(file) {
pc <- list()
# Read pc points
pc$points <- rbind(c(0, 0), read.delim(file, comment.char = "#"))
# Create an approximating function
pc$fun <- approxfun(pc$points$WindSpeed, pc$points$Power, method = "linear",
yleft = NA, yright = 0)
# Get the rated power from the power values
pc$attr$RatedPower <- max(pc$points$Power)
return(pc)
}
#=======================
# Evaluate the linear piecewise approximation function with the wind speed inputs to get wind power
#=======================
wind2power <- function(wind, pc) {
power <- pc$fun(wind)
return(power)
}
#=======================
# Convert wind to power, and divide by rated power to obtain Capacity Factor values
#=======================
wind2CF <- function(wind, pc) {
power <- wind2power(wind, pc)
CF <- power / pc$attr$RatedPower
return(CF)
}
.KnownLonNames <- function() {
known_lon_names <- c('lon', 'lons', 'longitude', 'x', 'i', 'nav_lon')
}
.KnownLatNames <- function() {
known_lat_names <- c('lat', 'lats', 'latitude', 'y', 'j', 'nav_lat')
}
.return2list <- function(data1, data2 = NULL) {
if (is.null(data1) & is.null(data2)) {
return(NULL)
} else if (is.null(data2)) {
return(list(data1))
} else {
return(list(data1, data2))
}
}
# Function that creates a mask array from dates for the whole year
.datesmask <- function(dates, frequency = 'monthly') {
years <- format(dates, "%Y")
ini <- as.Date(paste(min(years), 01, 01, sep = '-'))
end <- as.Date(paste(max(years), 12, 31, sep = '-'))
daily <- as.Date(seq(ini, end, by = "day"))
if (frequency == 'monthly') {
days <- as.numeric(format(daily, "%d"))
monthly <- daily[which(days == 1)]
dates_mask <- array(0, dim = length(monthly))
for (dd in 1:length(dates)) {
year <- format(dates[dd], "%Y")
month <- format(dates[dd], "%m")
ii <- which(monthly == as.Date(paste(year, month, 01, sep = '-')))
dates_mask[ii] <- 1
}
} else {
# daily
dates_mask <- array(0, dim = length(daily))
for (dd in 1:length(dates)) {
ii <- which(daily == dates[dd])
dates_mask[ii] <- 1
}
}
return(dates_mask)
}
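# Illustrative note (not part of the original code): for two dates within one
# year and frequency = 'monthly', the mask spans the 12 months of that year
# with ones at the matching months, e.g.
# .datesmask(as.Date(c("2000-01-15", "2000-03-02")), frequency = 'monthly')
# [1] 1 0 1 0 0 0 0 0 0 0 0 0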
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/zzz.R
---
title: "Agricultural Indicators"
author: "Earth Sciences department, Barcelona Supercomputing Center (BSC)"
date: "`r Sys.Date()`"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Agricultural Indicators}
%\usepackage[utf8]{inputenc}
---
Agricultural Indicators
-----------------------------
## Introduction
Apart from forecasts of Essential Climate Variables, Climate Services also provide a variety of sectoral indicators that are often required by users of Climate Services, such as researchers, decision-makers and farmers.
In the MEDGOLD project, 10 indicators identified as critical for the three agricultural sectors - grape/wine, olive/olive oil and wheat/pasta - have been included in the CSIndicators package.
The computing functions and the corresponding indicators are listed as follows:
1. **PeriodAccumulation -** Spring Total Precipitation (SprR) and Harvest Total Precipitation (HarvestR)
2. **PeriodMean -** Growing Season Temperature (GST) and Spring Mean Temperature Maximum (SPRTX)
3. **TotalTimeExceedingThreshold -** Number of Heat Stress Days - 35°C (SU35), 36°C (SU36), 40°C (SU40) and Spring Heat Stress Days - 32°C (Spr32)
4. **AccumulationExceedingThreshold -** Growing Degree Days (GDD)
5. **TotalSpellTimeExceedingThreshold -** Warm Spell Duration Index (WSDI)
The above functions can take both multidimensional arrays and s2dv_cube objects (see the note below). Taking PeriodAccumulation as an example, **CST_**PeriodAccumulation handles s2dv_cube objects, while PeriodAccumulation, without the prefix, works on multidimensional arrays.
*Note: s2dv_cube and array classes can be handled by the functions in CSIndicators. See Section 2 in vignette [Data retrieval and storage](https://cran.r-project.org/package=CSTools/vignettes/Data_Considerations.html) from CSTools package for more information.*
Some supplementary functions are needed to run the above functions smoothly:
1. **SelectPeriodOnData -** to select the data in the requested period
2. **SelectPeriodOnDates -** to select the time dimension in the requested period
3. **Threshold -** to convert absolute value/variable to its percentile, e.g., Warm Spell Duration Index uses the 90th percentile corresponding to each day instead of a fixed threshold. See how this function is applied in Section 5.
When the period selection is required, the `start` and `end` parameters have to be provided to cut out the portion in `time_dim`. Otherwise, the function will take the **entire** `time_dim`.
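For instance, here is a minimal sketch of how `start` and `end` cut a daily time series, using synthetic dates and assuming that `SelectPeriodOnDates` accepts a one-dimensional dates array (as in the package examples):

```r
dates <- seq(as.Date("2013-04-01"), as.Date("2013-10-31"), by = 'day')
dim(dates) <- c(ftime = length(dates))
# keep only 21st April - 21st June
spring_dates <- SelectPeriodOnDates(dates = dates, start = list(21, 4),
                                    end = list(21, 6), time_dim = 'ftime')
length(spring_dates) # expected: 62 days
```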
The examples of computing the aforementioned indicators are given by functions as follows.
### 1. PeriodAccumulation
`PeriodAccumulation` (and `CST_PeriodAccumulation`) computes the sum of a given variable in a period.
Here, two indicators are used to show how this function works: Spring Total Precipitation (SprR) and Harvest Total Precipitation (HarvestR). Both indices represent the total precipitation but in different periods, 21st April - 21st June for SprR and 21st August - 21st October for HarvestR.
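As a standalone sketch of the call pattern on a synthetic array (assuming the array-based `PeriodAccumulation` shares the `dates`, `start`, `end` and `time_dim` arguments used by the other functions in this package):

```r
set.seed(1)
prlr <- array(abs(rnorm(3 * 214)), dim = c(member = 3, ftime = 214))
Dates <- seq(as.Date("2013-04-01"), as.Date("2013-10-31"), by = 'day')
dim(Dates) <- c(ftime = 214)
SprR <- PeriodAccumulation(data = prlr, dates = Dates, start = list(21, 4),
                           end = list(21, 6), time_dim = 'ftime')
dim(SprR) # one accumulated value per member
```

The rest of this section applies the same call to real SEAS5 and ERA5 data.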
First, load the required libraries, CSIndicators, CSTools, etc by running
```
library(CSIndicators)
library(CSTools)
library(zeallot)
library(s2dv)
```
To obtain the precipitation forecast and observation, we load the daily precipitation (**prlr** given in `var`) data sets of the ECMWF SEAS5 seasonal forecast and the ERA5 reanalysis for the four start dates 20130401-20160401 (provided in `sdates`) with the entire 7-month forecast time, April-October (214 days in total, given in parameter `ftime`).
The pathways of SEAS5 and ERA5 are given in the lists with some **wildcards (inside two dollar signs)** used to replace the variable name and iterative items such as year and month. See details of requirements in Section 5 in vignette [Data retrieval and storage](https://cran.r-project.org/package=CSTools/vignettes/Data_Considerations.html) from CSTools package.
The spatial domain covers part of Douro Valley of Northern Portugal lon=[352.25, 353], lat=[41, 41.75]. These four values are provided in `lonmin`, `lonmax`, `latmin` and `latmax`.
With `grid` set to **r1440x721**, the SEAS5 forecast would be interpolated to the 0.25-degree ERA5 grid by using the **bicubic** method given in `method`.
```r
sdates <- paste0(2013:2016, '04', '01')
lat_min = 41
lat_max = 41.75
lon_min = 352.25
lon_max = 353
S5path_prlr <- paste0("/esarchive/exp/ecmwf/system5c3s/daily_mean/$var$_s0-24h/$var$_$sdate$.nc")
prlr_exp <- CST_Start(dataset = S5path_prlr,
var = "prlr",
member = startR::indices(1:3),
sdate = sdates,
ftime = startR::indices(1:214),
lat = startR::values(list(lat_min, lat_max)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lon_min, lon_max)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = "r1440x721",
method = "bicubic"),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
dates_exp <- prlr_exp$attrs$Dates
path_ERA5prlr_CDS <- paste0("/esarchive/recon/ecmwf/era5/daily_mean/$var$_f1h-r1440x721cds/$var$_$date$.nc")
prlr_obs <- CST_Start(dataset = path_ERA5prlr_CDS,
var = "prlr",
date = unique(format(dates_exp, '%Y%m')),
ftime = startR::values(dates_exp),
ftime_across = 'date',
ftime_var = 'ftime',
merge_across_dims = TRUE,
split_multiselected_dims = TRUE,
lat = startR::values(list(lat_min, lat_max)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lon_min, lon_max)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = "r1440x721",
method = "bicubic"),
transform_vars = c('lat', 'lon'),
return_vars = list(lon = NULL,
lat = NULL,
ftime = 'date'),
retrieve = TRUE)
```
The output contains data and metadata for the experiment and the observations. The elements `prlr_exp$data` and `prlr_obs$data` have dimensions:
```r
dim(prlr_exp$data)
# dataset var member sdate ftime lat lon
# 1 1 3 4 214 4 4
dim(prlr_obs$data)
# dataset var sdate ftime lat lon
# 1 1 4 214 4 4
```
To compute **SprR** of forecast and observation, we can run:
```r
SprR_exp <- CST_PeriodAccumulation(prlr_exp, start = list(21, 4), end = list(21, 6))
SprR_obs <- CST_PeriodAccumulation(prlr_obs, start = list(21, 4), end = list(21, 6))
```
The `start` and `end` are the initial and final dates and the day must be given before the month as above. They will be applied along the dimension `time_dim` (it is set to 'ftime' by default).
As mentioned, these parameters are optional, the function will take the entire timeseries when the period is not specified in `start` and `end`.
The dimensions of SprR forecasts and observations are:
```r
dim(SprR_exp$data)
# dataset var member sdate lat lon
# 1 1 3 4 4 4
dim(SprR_obs$data)
# dataset var sdate lat lon
# 1 1 4 4 4
```
The forecast SprR for the 1st member from 2013-2016 of the 1st grid point in mm are:
```r
SprR_exp$data[1, 1, 1, , 1, 1] * 86400 * 1000
# [1] 93.23236 230.41754 194.01401 226.52564
```
Dry springs will delay vegetative growth and reduce vigour and leaf area total surface. Fungal disease pressure will be lower and therefore there will be less need for protective and / or curative treatments, translating as less costs. Wet springs will promote higher vigour, increase the risk of fungal disease and disrupt vineyard operations as it may prevent machinery from getting in the vineyard due to mud. They are usually associated with higher costs.
On the other hand, another moisture-related indicator, **HarvestR**, can be computed with `PeriodAccumulation` as well, using the period defined in the following lines.
```r
HarvestR_exp <- CST_PeriodAccumulation(prlr_exp, start = list(21, 8), end = list(21, 10))
HarvestR_obs <- CST_PeriodAccumulation(prlr_obs, start = list(21, 8), end = list(21, 10))
```
The forecast HarvestR for the 1st member from 2013-2016 of the 1st grid point in mm are:
```r
HarvestR_exp$data[1, 1, 1, , 1, 1] * 86400 * 1000
# [1] 52.30058 42.88070 156.87922 32.18567
```
To compute the 2013-2016 ensemble-mean bias of forecast HarvestR, run
```r
fcst <- drop(HarvestR_exp$data) * 86400 * 1000
obs <- drop(HarvestR_obs$data) * 86400 * 1000
Bias <- MeanDims((fcst - InsertDim(obs, 1, dim(fcst)['member'])), 'member')
```
To plot the map of ensemble-mean bias of HarvestR forecast, run
```r
cols <- c('#b2182b', '#d6604d', '#f4a582', '#fddbc7', '#d1e5f0',
'#92c5de', '#4393c3', '#2166ac')
PlotEquiMap(Bias[1, , ], lon = prlr_obs$coords$lon, lat = prlr_obs$coords$lat,
intylat = 1, intxlon = 1, width = 6, height = 6,
filled.continents = FALSE, units = 'mm', title_scale = .8,
axes_label_scale = 1, axes_tick_scale = 1, col_inf = cols[1],
margin_scale = c(1, 1, 1, 1), cols = cols[2:7], col_sup = cols[8],
brks = seq(-60, 60, 20), colNA = 'white',
toptitle = 'Ensemble-mean bias of HarvestR in 2013',
bar_label_scale = 1.5, bar_extra_margin = c(0, 0, 0, 0), units_scale = 2)
```
You will see the following maps of HarvestR bias in 2013.

In 2013, the ensemble-mean SEAS5 seasonal forecast of HarvestR is underestimated by up to 60 mm over Douro Valley region (the central four grid points).
### 2. PeriodMean
For the function `PeriodMean`, we use Growing Season Temperature (**GST**) as an example. GST is defined as the average of daily average temperatures between April 1st to October 31st in the Northern Hemisphere. It provides information onto which are the best suited varieties for a given site or, inversely, which are the best places to grow a specific variety. For existing vineyards, GST also informs on the suitability of its varieties for the climate of specific years, explaining quality and production variation. Many grapevine varieties across the world have been characterized in function of their GST optimum.
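A standalone sketch on a synthetic array (again assuming the array-based `PeriodMean` follows the same `dates`/`start`/`end`/`time_dim` interface as the other functions in the package) would be:

```r
set.seed(2)
tas <- array(rnorm(3 * 214, mean = 17, sd = 5), dim = c(member = 3, ftime = 214))
Dates <- seq(as.Date("2013-04-01"), as.Date("2013-10-31"), by = 'day')
dim(Dates) <- c(ftime = 214)
GST <- PeriodMean(data = tas, dates = Dates, start = list(1, 4),
                  end = list(31, 10), time_dim = 'ftime')
GST # one mean growing-season temperature per member
```

The following applies the same computation to real data.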
First, we prepare sample data of the daily mean temperature of the SEAS5 and ERA5 data sets, with the same start dates, spatial domain, interpolation grid and method, by running
```r
S5path <- paste0("/esarchive/exp/ecmwf/system5c3s/daily_mean/$var$_f6h/$var$_$sdate$.nc")
tas_exp <- CST_Start(dataset = S5path,
var = "tas",
member = startR::indices(1:3),
sdate = sdates,
ftime = startR::indices(1:214),
lat = startR::values(list(lat_min, lat_max)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lon_min, lon_max)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = "r1440x721",
method = "bicubic"),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
dates_exp <- tas_exp$attrs$Dates
ERA5path <- paste0("/esarchive/recon/ecmwf/era5/daily_mean/$var$_f1h-r1440x721cds/$var$_$date$.nc")
tas_obs <- CST_Start(dataset = ERA5path,
var = "tas",
date = unique(format(dates_exp, '%Y%m')),
ftime = startR::values(dates_exp),
ftime_across = 'date',
ftime_var = 'ftime',
merge_across_dims = TRUE,
split_multiselected_dims = TRUE,
lat = startR::values(list(lat_min, lat_max)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lon_min, lon_max)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = "r1440x721",
method = "bicubic"),
transform_vars = c('lat', 'lon'),
return_vars = list(lon = NULL,
lat = NULL,
ftime = 'date'),
retrieve = TRUE)
```
The output contains the observations `tas_obs$data` and the forecast `tas_exp$data`; their dimensions and summaries are as follows:
```r
dim(tas_obs$data)
# dataset var sdate ftime lat lon
# 1 1 4 214 4 4
dim(tas_exp$data)
# dataset var member sdate ftime lat lon
# 1 1 3 4 214 4 4
summary(tas_obs$data - 273.15)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 3.627 13.974 17.248 17.294 20.752 30.206
summary(tas_exp$data - 273.15)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 0.5363 11.6517 16.5610 16.4961 21.2531 31.4063
```
To compute the GST for both observation and forecast, run the following lines
```r
# change the unit of temperature from K to °C
tas_exp$data <- tas_exp$data - 273.15
tas_obs$data <- tas_obs$data - 273.15
# compute GST
GST_exp <- CST_PeriodMean(tas_exp, start = list(1, 4), end = list(31, 10))
GST_obs <- CST_PeriodMean(tas_obs, start = list(1, 4), end = list(31, 10))
```
Since the GST period (1st April to 31st October) coincides with the entire forecast period for a start date in April, the `start` and `end` parameters could be omitted in this case.
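That is, for these start dates the same GST values could be obtained by simply calling:

```r
GST_exp <- CST_PeriodMean(tas_exp)
GST_obs <- CST_PeriodMean(tas_obs)
```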
The summaries and dimensions of the output are as follows:
```r
summary(GST_exp$data)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 14.23 15.78 16.50 16.50 17.17 18.70
summary(GST_obs$data)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 15.34 16.77 17.22 17.29 18.00 18.75
dim(GST_exp$data)
# dataset var member sdate lat lon
# 1 1 3 4 4 4
dim(GST_obs$data)
# dataset var sdate lat lon
# 1 1 4 4 4
```
Here, we plot the 2013-2016 mean climatology of ERA5 GST by running
```r
# compute ERA5 GST climatology
GST_Clim <- MeanDims(drop(GST_obs$data), 'sdate')
cols <- c('#ffffd4','#fee391','#fec44f','#fe9929','#ec7014','#cc4c02','#8c2d04')
PlotEquiMap(GST_Clim, lon = tas_obs$coords$lon, lat = tas_obs$coords$lat,
intylat = 1, intxlon = 1, width = 6, height = 6,
filled.continents = FALSE, units = '°C', title_scale = .8,
axes_label_scale = 1, axes_tick_scale = 1, col_inf = cols[1],
margin_scale = c(1, 1, 1, 1), cols = cols[2:6], col_sup = cols[7],
brks = seq(16, 18.5, 0.5), colNA = 'white', bar_label_scale = 1.5,
toptitle = '2013-2016 mean ERA5 GST',
bar_extra_margin = c(0, 0, 0, 0), units_scale = 2)
```
The ERA5 GST climatology is shown as below.

ERA5 GST ranges from 17-18.5°C over the Douro Valley region for the period from 2013-2016 as shown in the figure.
### 3. TotalTimeExceedingThreshold
For the function `TotalTimeExceedingThreshold`, **SU35** (Number of Heat Stress Days - 35°C) is taken as an example here. 35°C is the average established threshold for photosynthesis to occur in the grapevine. Above this temperature, the plant closes its stomata. If this situation occurs after veraison, maturation will be arrested for as long as the situation holds, decreasing sugar, polyphenol and aroma precursor levels, all essential for grape and wine quality. The higher the index, the lower will be the berry quality and aptitude to produce quality grapes.
SU35 is defined as the total count of days when daily maximum temperatures exceed 35°C over the seven months following the start date. Three more indicators share a similar definition: SU36, SU40 and Spr32. They are listed as follows.
1. **SU36**: Total count of days when daily maximum temperatures exceed 36°C between June 21st and September 21st
2. **SU40**: Total count of days when daily maximum temperatures exceed 40°C between June 21st and September 21st
3. **Spr32**: Total count of days when daily maximum temperatures exceed 32°C between April 21st and June 21st
These indicators can also be computed with `TotalTimeExceedingThreshold`, simply indicating different thresholds and periods, as sketched below.
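For instance, Spr32 could be computed on a synthetic array of daily maximum temperature (in °C); this is a minimal sketch, not the real data used later in this section:

```r
set.seed(3)
tasmax <- array(runif(3 * 214, min = 10, max = 42), dim = c(member = 3, time = 214))
Dates <- seq(as.Date("2013-04-01"), as.Date("2013-10-31"), by = 'day')
dim(Dates) <- c(time = 214)
Spr32 <- TotalTimeExceedingThreshold(tasmax, threshold = 32, dates = Dates,
                                     start = list(21, 4), end = list(21, 6))
Spr32 # days above 32°C between 21st April and 21st June, per member
```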
Here, we take SU35 as an example; therefore, the daily maximum temperature of the entire 7-month forecast period is needed to compute this indicator.
Load SEAS5 and ERA5 daily temperature maximum by running
```r
S5path <- paste0("/esarchive/exp/ecmwf/system5c3s/daily/$var$/$var$_$sdate$.nc")
tasmax_exp <- CST_Start(dataset = S5path,
var = "tasmax",
member = startR::indices(1:3),
sdate = sdates,
ftime = startR::indices(1:214),
lat = startR::values(list(lat_min, lat_max)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lon_min, lon_max)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = "r1440x721",
method = "bicubic"),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
dates_exp <- tasmax_exp$attrs$Dates
ERA5path <- paste0("/esarchive/recon/ecmwf/era5/daily/$var$-r1440x721cds/$var$_$date$.nc")
tasmax_obs <- CST_Start(dataset = ERA5path,
var = "tasmax",
date = unique(format(dates_exp, '%Y%m')),
ftime = startR::values(dates_exp),
ftime_across = 'date',
ftime_var = 'ftime',
merge_across_dims = TRUE,
split_multiselected_dims = TRUE,
lat = startR::values(list(lat_min, lat_max)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lon_min, lon_max)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = "r1440x721",
method = "bicubic"),
transform_vars = c('lat', 'lon'),
return_vars = list(lon = NULL,
lat = NULL,
ftime = 'date'),
retrieve = TRUE)
```
Convert the temperature unit from K to °C so the data can be compared with the defined threshold (for example, 35°C here).
```r
tasmax_exp$data <- tasmax_exp$data - 273.15
tasmax_obs$data <- tasmax_obs$data - 273.15
```
Compute SU35 for the forecast and observation by running
```r
threshold <- 35
SU35_exp <- CST_TotalTimeExceedingThreshold(tasmax_exp, threshold = threshold,
start = list(1, 4), end = list(31, 10))
SU35_obs <- CST_TotalTimeExceedingThreshold(tasmax_obs, threshold = threshold,
start = list(1, 4), end = list(31, 10))
```
The summaries of SU35 forecasts and observations are given below.
```r
summary(SU35_exp$data)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 0.000 2.000 5.000 7.135 12.000 26.000
summary(SU35_obs$data)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 0.000 0.000 1.000 2.609 5.000 10.000
```
As shown in the summaries, the SEAS5 SU35 forecasts overestimate the observations by roughly 4.5 days in the mean (7.1 versus 2.6 days).
Therefore, `CST_BiasCorrection` is used to bias-adjust the SU35 forecasts.
```r
res <- CST_BiasCorrection(obs = SU35_obs, exp = SU35_exp)
SU35_exp_BC <- drop(res$data)
summary(SU35_exp_BC)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# -1.523 0.000 1.613 2.830 4.756 17.768
```
Since negative values appear after the bias adjustment, all negative values are set to zero.
```r
SU35_exp_BC[SU35_exp_BC < 0] <- 0
summary(SU35_exp_BC)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 0.000 0.000 1.613 2.941 4.756 17.768
```
Plot the 2016 SU35 from ERA5, together with the raw and bias-adjusted forecasts, by running
```r
SU35_obs_Y2016 <- drop(SU35_obs$data)[4, , ]
SU35_exp_Y2016 <- MeanDims(drop(SU35_exp$data)[, 4, , ], 'member')
SU35_exp_BC_Y2016 <- MeanDims(SU35_exp_BC[, 4, , ], 'member')
cols <- c("#fee5d9", "#fcae91", "#fb6a4a", "#de2d26","#a50f15")
toptitle <- 'ERA5 SU35 in 2016'
PlotEquiMap(SU35_obs_Y2016,
lon = tasmax_obs$coords$lon, lat = tasmax_obs$coords$lat,
intylat = 1, intxlon = 1, width = 6, height = 6,
filled.continents = FALSE, units = 'day', title_scale = .8,
axes_label_scale = 1, axes_tick_scale = 1, margin_scale = c(1, 1, 1, 1),
cols = cols[1:4], col_sup = cols[5], brks = seq(0, 8, 2),
toptitle = toptitle,
colNA = cols[1], bar_label_scale = 1.5,
bar_extra_margin = c(0, 0, 0, 0), units_scale = 2)
toptitle <- 'SU35 forecast in 2016'
PlotEquiMap(SU35_exp_Y2016,
lon = tasmax_obs$coords$lon, lat = tasmax_obs$coords$lat,
intylat = 1, intxlon = 1, width = 6, height = 6,
filled.continents = FALSE, units = 'day', title_scale = .8,
axes_label_scale = 1, axes_tick_scale = 1, margin_scale = c(1, 1, 1, 1),
cols = cols[1:4], col_sup = cols[5], brks = seq(0, 8, 2),
toptitle = toptitle,
colNA = cols[1], bar_label_scale = 1.5,
bar_extra_margin = c(0, 0, 0, 0), units_scale = 2)
toptitle <- 'Bias-adjusted SU35 forecast in 2016'
PlotEquiMap(SU35_exp_BC_Y2016,
lon = tasmax_obs$coords$lon, lat = tasmax_obs$coords$lat,
intylat = 1, intxlon = 1, width = 6, height = 6,
filled.continents = FALSE, units = 'day', title_scale = .8,
axes_label_scale = 1, axes_tick_scale = 1, margin_scale = c(1, 1, 1, 1),
cols = cols[1:4], col_sup = cols[5], brks = seq(0, 8, 2),
toptitle = toptitle,
colNA = cols[1], bar_label_scale = 1.5,
bar_extra_margin = c(0, 0, 0, 0), units_scale = 2)
```
The resulting figures are shown below.



As seen above, the bias-adjusted SU35 forecasts are much closer to the ERA5 results, although differences remain.
Besides the original definition of SU35, two supplementary functions in the `CSIndicators` package are demonstrated here by computing an alternative definition of the index with a percentile adjustment.
---
1. **AbsToProbs -** to transform an ensemble forecast into probabilities by using the Cumulative Distribution Function.
2. **QThreshold -** to transform an absolute threshold into probabilities.
---
These two supplementary functions are required to compute SU35 with the percentile adjustment: `AbsToProbs` is applied to the forecast, while `QThreshold` converts the observations to percentiles based on the given threshold.
The revised definition of SU35 aims to reduce the potential influence of the fixed temperature threshold defined for the index: instead of using the absolute value, the percentile that 35°C occupies in the observations is compared with the percentile corresponding to the predicted daily maximum temperature before a day is counted as a ‘heat stress’ day.
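As a conceptual toy example (made-up values, using base R's `ecdf` rather than the package functions), the revised rule counts a day as a 'heat stress' day when the forecast value's percentile exceeds the percentile that 35°C occupies in the observed distribution:
```r
# Conceptual illustration only; the actual computation below uses AbsToProbs/QThreshold.
obs_sample <- c(28, 30, 31, 33, 34, 36, 37)   # hypothetical observed daily maxima (degrees C)
p35_obs <- ecdf(obs_sample)(35)               # percentile of the 35 degrees C threshold in the observations
fcst_sample <- c(29, 32, 35, 38)              # hypothetical forecast daily maxima
p_fcst <- ecdf(fcst_sample)(fcst_sample)      # forecast values expressed as percentiles
sum(p_fcst > p35_obs)                         # days counted as heat stress under the revised rule
```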
As mentioned, the forecast is translated to percentiles by using the function `AbsToProbs`:
```r
exp_percentile <- AbsToProbs(tasmax_exp$data)
S5txP <- aperm(drop(exp_percentile), c(2, 1, 3, 4, 5))
```
After that, based on the 35°C threshold, the percentile corresponding to each observed value can be calculated as follows.
```r
obs_percentile <- QThreshold(tasmax_obs$data, threshold = 35)
obs_percentile <- drop(obs_percentile)
```
After translating both forecasts and observations into probabilities, the comparison can then be done by running
```r
SU35_exp_Percentile <- TotalTimeExceedingThreshold(S5txP, threshold = obs_percentile,
time_dim = 'ftime')
```
Compute the same ensemble-mean SU35 **with percentile adjustment** in 2016 by running
```r
SU35_exp_per_Y2016 <- MeanDims(SU35_exp_Percentile[4, , , ], 'member')
```
Plot the same map for comparison
```r
toptitle <- 'SU35 forecast with percentile adjustment in 2016'
PlotEquiMap(SU35_exp_per_Y2016,
lon = tasmax_obs$coords$lon, lat = tasmax_obs$coords$lat,
intylat = 1, intxlon = 1, width = 6, height = 6,
filled.continents = FALSE, units = 'day', title_scale = .8,
axes_label_scale = 1, axes_tick_scale = 1, margin_scale = c(1, 1, 1, 1),
cols = cols[1:4], col_sup = cols[5], brks = seq(0, 8, 2),
toptitle = toptitle,
colNA = cols[1], bar_label_scale = 1.5,
bar_extra_margin = c(0, 0, 0, 0), units_scale = 2)
```

As seen in the figure above, applying the percentile adjustment seems to implicitly correct part of the bias observed in the non-bias-adjusted SEAS5 forecast.
A rigorous comparison of the skill of the two definitions requires further analysis, such as the application of additional skill metrics.
### 4. AccumulationExceedingThreshold
The function `AccumulationExceedingThreshold` can compute GDD (Growing Degree Days).
GDD is defined as the sum of the daily differences between the daily average temperature and 10°C, accumulated between April 1st and October 31st. Here, the daily average temperature (tas) used in Section 2 (PeriodMean) is needed again (please re-run the corresponding loading code from Section 2). As per the definition, `threshold` is set to 10 and `diff` is set to TRUE so that the function computes the difference between the daily temperature and the threshold before accumulating it.
*Note: The data is in degrees Celsius at this point.*
```r
GDD_exp <- CST_AccumulationExceedingThreshold(tas_exp, threshold = 10, diff = TRUE)
GDD_obs <- CST_AccumulationExceedingThreshold(tas_obs, threshold = 10, diff = TRUE)
```
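As a quick hand-made check of what `diff = TRUE` implies (the daily values below are invented for illustration), only the excess above the threshold on days exceeding it is accumulated:
```r
# Toy illustration of the GDD accumulation rule described above (made-up values).
tas_daily <- c(8, 12, 15, 9, 20)   # hypothetical daily mean temperatures (degrees C)
sum(pmax(tas_daily - 10, 0))       # (12-10) + (15-10) + (20-10) = 17 degree days
```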
The summaries of GDD are
```r
summary(GDD_exp$data)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 1021 1331 1480 1469 1596 1873
summary(GDD_obs$data)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 1195 1483 1569 1583 1730 1874
```
To compute the correlation coefficient for the period from 2013-2016, run the following lines
```r
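# veriApply() is provided by the easyVerification package; make sure it is
# attached, e.g. with library(easyVerification).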
# reorder the dimension
fcst <- Reorder(drop(GDD_exp$data), c(4, 3, 2, 1))
obs <- Reorder(drop(GDD_obs$data), c(3, 2, 1))
EnsCorr <- veriApply('EnsCorr', fcst = fcst, obs = obs, ensdim = 4, tdim = 3)
GDD_Corr <- Reorder(EnsCorr, c(2, 1))
```
To plot the map of the GDD correlation coefficient for the 2013-2016 period, run
```r
cols <- c("#f7fcf5", "#e5f5e0", "#c7e9c0", "#a1d99b", "#74c476")
toptitle <- '2013-2016 correlation coefficient of GDD'
PlotEquiMap(GDD_Corr, lon = tas_obs$coords$lon, lat = tas_obs$coords$lat,
intylat = 1, intxlon = 1, width = 6, height = 6,
filled.continents = FALSE, units = 'correlation',
title_scale = .8, axes_label_scale = 1, axes_tick_scale = 1,
margin_scale = c(1, 1, 1, 1), cols = cols, brks = seq(0.5, 1, 0.1),
toptitle = toptitle, bar_label_scale = 1.5,
bar_extra_margin = c(0, 0, 0, 0), units_scale = 2)
```
The map of the correlation coefficient for the 2013-2016 period is shown below.

The 2013-2016 correlation coefficients of the SEAS5 GDD forecasts against the ERA5 reanalysis over the Douro Valley range between 0.6 and 0.8.
### 5. TotalSpellTimeExceedingThreshold
One of the critical agricultural indicators related to warm spells is the **Warm Spell Duration Index (WSDI)**, which is defined as the total count of days belonging to spells of at least 6 consecutive days in which the daily maximum temperature exceeds its 90th percentile, over the seven months into the future.
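To illustrate the counting rule in this definition (with an invented exceedance series), only runs of at least 6 consecutive exceedance days contribute, and every day of such a run is counted:
```r
# Toy illustration of the WSDI counting rule stated above (made-up 0/1 exceedance series).
exceed <- c(1,1,1,1,1,1, 0, 1,1,1, 0, 1,1,1,1,1,1,1)     # runs of length 6, 3 and 7
runs <- rle(exceed)
sum(runs$lengths[runs$values == 1 & runs$lengths >= 6])  # = 6 + 7 = 13 days
```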
The daily maximum temperature data from Section 3 is used again here. Since the daily maximum temperature needs to be compared to its 90th percentile, the function `Threshold` in the `CSIndicators` package is required to compute, for each day, the percentile from the observations. Here the same period (2013-2016) is considered.
```r
tx_p <- CST_Threshold(tasmax_obs, threshold = 0.9, memb_dim = NULL)
```
The output is the 90th percentile for each day and grid point, derived using all the years in the data. See the dimension and summary below.
```r
dim(tx_p$data)
# dataset var ftime lat lon
# 1 1 214 4 4
summary(tx_p$data)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 13.83 22.08 26.08 26.22 30.72 36.72
```
With the prepared threshold (90th percentile), the WSDI can be computed by running
```r
WSDI_exp <- CST_TotalSpellTimeExceedingThreshold(tasmax_exp, threshold = tx_p, spell = 6)
WSDI_obs <- CST_TotalSpellTimeExceedingThreshold(tasmax_obs, threshold = tx_p, spell = 6)
```
After checking the summaries, compute the Fair Ranked Probability Skill Score (FRPSS) of WSDI by running the following lines
```r
# Reorder the data
fcst <- Reorder(drop(WSDI_exp$data), c(4, 3, 2, 1))
obs <- Reorder(drop(WSDI_obs$data), c(3, 2, 1))
# summaries of WSDI
summary(fcst)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 0.00 13.00 28.00 31.22 46.00 82.00
summary(obs)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 9.00 19.00 22.50 23.25 26.00 33.00
# compute FRPSS
f <- veriApply('FairRpss', fcst = fcst, obs = obs, ensdim = 4, tdim = 3, prob = 1:2/3)$skillscore
WSDI_FRPSS <- Reorder(f, c(2,1))
```
Plot the map of WSDI FRPSS for the period from 2013-2016
```r
cols <- c("#edf8fb", "#ccece6", "#99d8c9", "#66c2a4")
toptitle <- 'SEAS5 WSDI FRPSS (2013-2016)'
PlotEquiMap(WSDI_FRPSS, lon = tasmax_obs$coords$lon, lat = tasmax_obs$coords$lat,
intylat = 1, intxlon = 1, width = 6, height = 6,
filled.continents = FALSE, units = 'FRPSS', title_scale = .8,
axes_label_scale = 1, axes_tick_scale = 1, margin_scale = c(1, 1, 1, 1),
cols = cols[1:3], col_inf = 'white', col_sup = cols[4],
brks = seq(0, 0.9, 0.3), toptitle = toptitle, bar_label_scale = 1.5,
bar_extra_margin = c(0, 0, 0, 0), units_scale = 2)
```
The FRPSS map for the 2013-2016 SEAS5 WSDI is shown below.

As seen in the map, the FRPSS in the eastern part of the Douro Valley falls within 0.6-0.9, indicating that the forecasts add useful skill compared to the observational climatology.
In addition to the grape/wine sector considered here, the MEDGOLD project also covers two other sectors: olive/olive oil and durum wheat/pasta. Furthermore, climate services at longer time scales (up to 30 years) are provided by other project partners.
Click on [MEDGOLD](https://www.med-gold.eu/climate-services/) for more information.
|
/scratch/gouwar.j/cran-all/cranData/CSIndicators/inst/doc/AgriculturalIndicators.Rmd
|
---
title: "Energy Indicators"
author: "Llorenç Lledó, Earth Sciences department, Barcelona Supercomputing Center (BSC)"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Energy Indicators}
%\usepackage[utf8]{inputenc}
---
Energy Indicators
-----------------------
## Introduction
The energy sector is affected by the atmospheric circulation in many ways. On the one hand, energy supply from renewable sources like wind, solar or hydropower relies on the availability of wind, sunshine or water. On the other hand, energy demand is affected by changes in near-surface temperature. A number of indicators derived from atmospheric variables can be useful as proxies of energy production/demand.
Currently, this package provides two indicators for wind power generation:
1. **WindPowerDensity -** Wind power that is available for extraction from the wind flow, per square meter of swept area.
2. **WindCapacityFactor -** Wind power generation of a wind turbine, normalized by the maximum power that the turbine can deliver (rated capacity).
### 1. Wind Power Density
`WindPowerDensity` computes the kinetic energy that is available in the wind flow that traverses a unit of area swept by a wind turbine. For an instantaneous wind speed value, it is computed as: `WPD = 0.5 * ro * wspd^3` where `ro` is the air density in kg/m^3 and `wspd` is the instantaneous wind speed at hub height in m/s.
Although wind turbines cannot extract all of the kinetic energy in the wind, and their efficiency can vary substantially at different wind speeds and among different wind turbines, this indicator provides a simple estimation of the wind resource quality. Typically, Wind Power Density is computed over a long period and its mean value is reported.
As an example, we simulate a time series of 1000 wind speed values from a Weibull distribution with a scale factor of 6 and a shape factor of 2, which represents a sample of wind speed values obtained at a single location. The Weibull distribution is often used to fit observed wind speed values to a probability distribution function. Then, each instantaneous wind speed value is converted to its equivalent WPD.
The `mean` and `sd` of the WPD can be employed to summarize the wind resource in that location. Otherwise, we can plot the histograms to see the full distribution of values:
```
library(CSIndicators)
set.seed(1)
wind <- rweibull(n = 1000, shape = 2, scale = 6)
WPD <- WindPowerDensity(wind)
mean(WPD)
```
```
## [1] 170.6205
```
```
sd(WPD)
```
```
## [1] 251.1349
```
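Assuming the function applies the formula above element-wise with the default air density of 1.225 kg/m^3, the result can be cross-checked directly:
```
# Sanity check of WPD = 0.5 * ro * wspd^3 (assumes the default ro = 1.225 kg/m^3)
all.equal(WPD, 0.5 * 1.225 * wind^3)
```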
```
par(mfrow = c(1, 2))
hist(wind, breaks = seq(0, 20))
hist(WPD, breaks = seq(0, 4000, 200))
```

As you can see, the histogram of the WPD is highly skewed, even though the wind speed distribution was only slightly skewed!
If not specified, an air density of 1.225 kg/m^3 is assumed. Otherwise, the parameter `ro` can be set to a fixed value (for instance the mean air density at the site elevation could be used), or a timeseries of density values measured at each time stamp can be used to obtain more accurate results.
```
WPD <- WindPowerDensity(wind, ro = 1.15)
```
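Likewise, a density value per time stamp can be supplied; the `ro_series` vector below is a hypothetical, simulated example rather than measured data:
```
set.seed(2)
ro_series <- rnorm(length(wind), mean = 1.225, sd = 0.02)  # hypothetical density time series (kg/m^3)
WPD <- WindPowerDensity(wind, ro = ro_series)
```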
### 2. Wind Capacity Factor
`WindCapacityFactor` transforms wind speed values into normalized wind power values. The transformation is made employing manufacturer-provided power curves, for five different turbines, as described in Lledó et al. (2019).
The generation is normalized by the rated power of the turbine (i.e. the maximum power output it can achieve). This allows for comparisons between turbines of different sizes and wind farms of different installed capacities. Beware that the Capacity Factor (CF) values provided do not take into account any losses due to wakes, electricity transport, blade degradation, curtailments or maintenance shutdowns.
The function allows choosing from five different power curves that are suited for different ranges of wind speed conditions. Each of the provided turbines is representative of an IEC wind class. Generally speaking, commercially available wind turbines can be certified as IEC class I, II, III or a combination of them (I/II and II/III), according to their efficiency at different wind speeds and the loads they can withstand. The basic idea is that most turbines in the same IEC class have similar power curves, and the differences in power output can be thoroughly studied with only this set of five turbines.
Notice that power curves are intended to be used with 10-minute steady wind speed values at hub height, which in modern wind turbines typically lies between 80 and 120 m. As the transformation of wind speed into wind power is non-linear, it is recommended to use instantaneous or 10-minute wind speed values as input. Averages over longer periods will produce inaccurate results, since the wind is not steady over such periods.
Following on the previous example, we will compute now the CF that would be obtained from our sample of 1000 wind speed values when using a turbine of class IEC I, and compare it to the CF values for a class III:
```
WCFI <- WindCapacityFactor(wind, IEC_class = "I")
WCFIII <- WindCapacityFactor(wind, IEC_class = "III")
par(mfrow = c(1, 3))
hist(wind, breaks = seq(0, 20))
hist(WCFI, breaks = seq(0, 1, 0.05), ylim = c(0, 500))
hist(WCFIII, breaks = seq(0, 1, 0.05), ylim = c(0, 500))
```

From the CF histograms we can see that, for this particular wind speed distribution, the IEC I turbine (designed for high winds) produces less energy than the IEC III turbine, which is more suitable for this range of wind speed values.
### References
* Lledó, Ll., Torralba, V., Soret, A., Ramon, J., & Doblas-Reyes, F.J. (2019). Seasonal forecasts of wind power generation. Renewable Energy, 143, 91–100. https://doi.org/10.1016/j.renene.2019.04.135
* International Standard IEC 61400-1 (third ed.) (2005)
|
/scratch/gouwar.j/cran-all/cranData/CSIndicators/inst/doc/EnergyIndicators.Rmd
|
#' Business failure prediction demonstration data set
#'
#' Business failure prediction demonstration data set. Contains financial ratios and firmographics as independent variables for 522 anonymized European companies. The Class column indicates failure (class 1) or survival (class 0) over a 1-year period.
#'
#' @name BFP
#' @docType data
#' @author Koen W. De Bock, \email{kdebock@@audencia.com}
#' @references De Bock, K.W., Lessmann, S. And Coussement, K., Cost-sensitive business failure prediction
#' when misclassification costs are uncertain: A heterogeneous ensemble selection approach,
#' European Journal of Operational Research (2020), doi: 10.1016/j.ejor.2020.01.052.
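#' @examples
#' ## Minimal usage sketch (only the 'Class' column is assumed by name):
#' data(BFP)
#' dim(BFP)
#' table(BFP$Class)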
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSMES/R/BFP.R
|
#' CSMES Training Stage 2: Extract an ensemble nomination curve (cost curve- or Brier curve-based) from a set of Pareto-optimal ensemble classifiers
#'
#' Generates an ensemble nomination curve from a set of Pareto-optimal ensemble definitions as identified through \code{CSMES.ensSel)}.
#'
#' @param ensSelModel ensemble selection model (output of \code{CSMES.ensSel})
#' @param memberPreds matrix containing ensemble member library predictions
#' @param y Vector with true class labels. Currently, a dichotomous outcome variable is supported
#' @param curveType the type of cost curve used to construct the ensemble nomination curve. Should be "brierCost", "brierSkew" or "costCurve" (default).
#' @param method how are candidate ensemble learner predictions used to generate the ensemble nomination front? "classPreds" for class predictions (default), "probPreds" for probability predictions.
#' @param nrBootstraps optionally, the ensemble nomination curve can be generated through bootstrapping. This argument specifies the number of iterations/bootstrap samples. Default is 1.
#' @param plotting \code{TRUE} or \code{FALSE}: Should a plot be generated showing the Brier curve? Defaults to \code{FALSE}.
#' @return An object of the class \code{CSMES.ensNomCurve} which is a list with the following components:
#' \item{nomcurve}{the ensemble nomination curve}
#' \item{curves}{individual cost curves or brier curves of ensemble members}
#' \item{intervals}{resolution of the ensemble nomination curve}
#' \item{incidence}{incidence (positive rate) of the outcome variable}
#' \item{area_under_curve}{area under the ensemble nomination curve}
#' \item{method}{method used to generate the ensemble nomination front:"classPreds" for class predictions (default), "probPreds" for probability predictions}
#' \item{curveType}{the type of cost curve used to construct the ensemble nomination curve}
#' \item{nrBootstraps}{number of bootstrap samples over which the ensemble nomination curve was estimated}
#' @export
#' @import rpart zoo mco
#' @importFrom ROCR prediction performance
#' @importFrom caTools colAUC
#' @importFrom graphics axis lines matplot mtext plot points text
#' @importFrom stats approx as.formula dbeta
#' @export
#' @author Koen W. De Bock, \email{kdebock@@audencia.com}
#' @references De Bock, K.W., Lessmann, S. And Coussement, K., Cost-sensitive business failure prediction
#' when misclassification costs are uncertain: A heterogeneous ensemble selection approach,
#' European Journal of Operational Research (2020), doi: 10.1016/j.ejor.2020.01.052.
#' @seealso \code{\link{CSMES.ensSel}}, \code{\link{CSMES.predictPareto}}, \code{\link{CSMES.predict}}
#' @examples
#' ##load data
#' library(rpart)
#' library(zoo)
#' library(ROCR)
#' library(mco)
#' data(BFP)
#' ##generate random order vector
#' BFP_r<-BFP[sample(nrow(BFP),nrow(BFP)),]
#' size<-nrow(BFP_r)
#' ##size<-300
#' train<-BFP_r[1:floor(size/3),]
#' val<-BFP_r[ceiling(size/3):floor(2*size/3),]
#' test<-BFP_r[ceiling(2*size/3):size,]
#' ##generate a list containing model specifications for 100 CART decisions trees varying in the cp
#' ##and minsplit parameters, and trained on bootstrap samples (bagging)
#' rpartSpecs<-list()
#' for (i in 1:100){
#' data<-train[sample(1:nrow(train),size=nrow(train),replace=TRUE),]
#' str<-paste("rpartSpecs$rpart",i,"=rpart(as.formula(Class~.),data,method=\"class\",
#' control=rpart.control(minsplit=",round(runif(1, min = 1, max = 20)),",cp=",runif(1,
#' min = 0.05, max = 0.4),"))",sep="")
#' eval(parse(text=str))
#' }
#' ##generate predictions for these models
#' hillclimb<-mat.or.vec(nrow(val),100)
#' for (i in 1:100){
#' str<-paste("hillclimb[,",i,"]=predict(rpartSpecs[[i]],newdata=val)[,2]",sep="")
#' eval(parse(text=str))
#' }
#' ##score the validation set, to be used for ensemble selection
#' ESmodel<-CSMES.ensSel(hillclimb,val$Class,obj1="FNR",obj2="FPR",selType="selection",
#' generations=10,popsize=12,plot=TRUE)
#' ## Create Ensemble nomination curve
#' enc<-CSMES.ensNomCurve(ESmodel,hillclimb,val$Class,curveType="costCurve",method="classPreds",
#' plot=FALSE)
CSMES.ensNomCurve<-function(ensSelModel,memberPreds,y,curveType=c("costCurve","brierSkew","brierCost"),method=c("classPreds","probPreds"),plotting=FALSE,nrBootstraps=1){
#classPreds method: every pareto ensemble delivers class predictions and thus delivers one line on the brier curve
#probPreds method: every pareto ensemble candidate delivers prob predictions and delivers a full brier curve.
data<-cbind(memberPreds,y)
paretopreds<-CSMES.predictPareto(ensSelModel,data)
popsize<-ensSelModel$popsize
performances <- array(0,c(popsize,2))
incidence <-sum((data[,ncol(data)]=="1")*1)/nrow(data)
intervals=1000
x<-seq(from = 0, to = 1, by = 1/(intervals-1))
curves<-list()
for (j in 1:nrBootstraps){
curves[[j]]<-array(0,c(popsize,intervals))
if (nrBootstraps>1) {
sampleindices<- sample(1:nrow(data),round(nrow(data)/2),replace=TRUE)
} else {
sampleindices<-1:nrow(data)
}
Real <- factor(data[sampleindices,ncol(data)],levels=0:1)
if (curveType=="brierCost"|curveType=="brierSkew") {
if (method=="classPreds"){
for (i in 1:popsize) {
Pred <- paretopreds$Pareto_predictions_c[sampleindices,i]
CONF2 = table(Real, Pred)
if (length(rownames(CONF2))==1) {
if (rownames(CONF2)==0) {
CONF2 <- rbind(CONF2,c(0,0))
rownames(CONF2)[2] <- 1
} else if (rownames(CONF2)==1) {
CONF2 <- rbind(c(0,0),CONF2)
rownames(CONF2)[1] <- 0
}
}
if (length(colnames(CONF2))==1) {
if (colnames(CONF2)==0) {
CONF2 <- cbind(CONF2,c(0,0))
colnames(CONF2)[2] <- 1
} else if (colnames(CONF2)==1) {
CONF2 <- cbind(c(0,0),CONF2)
colnames(CONF2)[1] <- 0
}
}
FP = CONF2[1,2]
FN = CONF2[2,1]
FPR= CONF2[1,2]/(CONF2[1,2]+CONF2[1,1])
FNR= CONF2[2,1]/(CONF2[2,1]+CONF2[2,2])
performances[i,1]=FPR
performances[i,2]=FNR
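          # With class predictions each candidate ensemble contributes a straight line over the
          # operating-point grid x: the Brier/cost variant weights FNR by x*(1-incidence) and FPR
          # by (1-x)*incidence (times 2); the skew variant interpolates from FNR at x=0 to FPR at x=1.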
if (curveType=="brierCost") {curves[[j]][i,]<-rbind(2*(x*(1-incidence)*FNR+(1-x)*incidence*FPR))
} else if (curveType=="brierSkew") {curves[[j]][i,]<-FNR*(1-x)+FPR*x }
}
} else if (method=="probPreds"){
for (i in 1:popsize) {
Pred <- paretopreds$Pareto_predictions_p[sampleindices,i]
        a<-brierCurve(data[,ncol(data)],Pred,resolution=1/(intervals-1))
if (curveType=="brierCost") {curves[[j]][i,]<-a$brierCurveCost[,2]
} else if (curveType=="brierSkew") {curves[[j]][i,]<-a$brierCurveSkew[,2] }
rm(a)
}
}
} else if (curveType=="costCurve") {
if (method=="probPreds") {
preds<-paretopreds$Pareto_predictions_p[sampleindices,]
} else if (method=="classPreds") {
preds<-paretopreds$Pareto_predictions_c[sampleindices,]
}
labels<-data[sampleindices,ncol(data)]
labels <- t(do.call("rbind", rep(list(labels), ncol(preds))))
preds_t<-ROCR::prediction(preds,labels)
perf <- ROCR::performance(preds_t,'ecost')
      for (i in 1:length(perf@x.values)) {
        curves[[j]][i,]<-approx(perf@x.values[[i]], perf@y.values[[i]],xout=x)[[2]]
}
}
}
curves2 <- do.call(cbind, curves)
curves2 <- array(curves2, dim=c(dim(curves[[1]]), length(curves)))
curves<-colMeans(aperm(curves2, c(3, 1, 2)), na.rm = TRUE)
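  # The ensemble nomination curve is the lower envelope of the (averaged) candidate curves:
  # for each operating point x keep the minimum loss across candidates (column 2) and record
  # which candidate attains it (column 3), i.e. the ensemble "nominated" at that operating point.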
curve<- cbind(x,as.matrix(apply( curves, 2, min)),as.matrix(apply( curves, 2, which.min)))
  #calculate score as area under the curve
x2 <- curve[,1]
y2 <- curve[,2]
id <- order(x2)
area_under_curve<- sum(diff(x2[id])*rollmean(y2[id],2))
if (plotting==TRUE && dim(curve[match(unique(curve[,3]),curve[,3]),,drop=FALSE])[1]>2){
matplot(curve[,1],t(curves),type="l",ylab=curveType,xlab="Operating point",ylim=c(0,1.5*max(curve[,2])),col=3,axes=FALSE,yaxs="i", frame.plot=TRUE)
axis(1,at=c(0,1))
axis(2)
lines(curve[,1],curve[,2],ylab=curveType,xlab="Operating point",ylim=c(0,1.5*max(curve[,2])),col=2)
ccc<-cbind(curve,rbind(0,curve[1:(nrow(curve)-1),]))
es_labels<-curve[which(ccc[,3] != ccc[,6]),,drop=FALSE]
marg<-0.01*(1.5*max(curve[,2]))
#abline(a=0.015,b=0)
for (j in 1:(dim(es_labels)[1])) {
lines(rep(es_labels[j,1],2),c(0,marg),type="l")
lines(rep(es_labels[j,1],2),c(marg,es_labels[j,2]),type="l",pch=23, lty=3)
}
lines(rep(1,2),c(0,marg),type="l")
text((rbind(es_labels[2:nrow(es_labels),1,drop=FALSE],1)+es_labels[,1,drop=FALSE])/2,0.01,labels=es_labels[,3],cex=0.7,col=2)
#text(c(0,1),0,labels=c(0,1))
#text(rbind(es_labels[,1,drop=FALSE],1),0,labels=round(rbind(es_labels[,1,drop=FALSE],1),digits=2),cex=0.5,col=4)
axis(1,at=c(0,1))
mtext(side=1,round(es_labels[2:nrow(es_labels),1,drop=FALSE],digits=2),at=es_labels[2:nrow(es_labels),1,drop=FALSE],cex=0.7,col=4,las=2,adj=1.2)
}
ans<- list(nomcurve=curve,curves=curves,intervals=intervals,incidence=incidence,area_under_curve=area_under_curve,method=method,curveType=curveType,nrBootstraps=nrBootstraps)
class(ans) <- "CSMES.ensNomCurve"
ans
}
|
/scratch/gouwar.j/cran-all/cranData/CSMES/R/CSMES.ensNomCurve.R
|
#' CSMES Training Stage 1: Cost-Sensitive Multicriteria Ensemble Selection resulting in a Pareto frontier of candidate ensemble classifiers
#'
#' This function applies the first stage in the learning process of CSMES: optimizing Cost-Sensitive Multicriteria Ensemble
#' Selection, resulting in a Pareto frontier of equivalent candidate ensemble classifiers along two objective functions. By default, cost space is optimized
#' by optimizing false positive and false negative rates simultaneously. This results in a set of optimal ensemble classifiers, varying in the tradeoff between
#' FNR and FPR. Optionally, other objective metrics can be specified. Currently, only binary classification is supported.
#'
#' @param memberPreds matrix containing ensemble member library predictions
#' @param y Vector with true class labels. Currently, a dichotomous outcome variable is supported
#' @param obj1 Specifies the first objective metric to be minimized
#' @param obj2 Specifies the second objective metric to be minimized
#' @param selType Specifies the type of ensemble selection to be applied: \code{"selection"} for basic selection, \code{"selectionWeighted"} for weighted selection, \code{"weighted"} for weighted sum
#' @param plotting \code{TRUE} or \code{FALSE}: Should a plot be generated showing objective function values throughout the optimization process?
#' @param generations the number of population generations for nsga-II. Default is 30.
#' @param popsize the population size for nsga-II. Default is 100.
#' @return An object of the class \code{CSMES.ensSel} which is a list with the following components:
#' \item{weights}{ensemble member weights for all pareto-optimal ensemble classifiers after multicriteria ensemble selection}
#' \item{obj_values}{optimization objective values}
#' \item{pareto}{overview of pareto-optimal ensemble classifiers}
#' \item{popsize}{the population size for nsga-II}
#' \item{generations}{the number of population generations for nsga-II}
#' \item{obj1}{Specifies the first objective metric that was minimized}
#' \item{obj2}{Specifies the second objective metric that was minimized}
#' \item{selType}{the type of ensemble selection that was applied: \code{"selection"}, \code{"selectionWeighted"} or \code{"weighted"}}
#' \item{Pareto_predictions_p}{probability predictions for pareto-optimal ensemble classifiers}
#' \item{Pareto_predictions_c}{class predictions for pareto-optimal ensemble classifiers}
#' @import rpart zoo mco
#' @importFrom ROCR prediction performance
#' @importFrom caTools colAUC
#' @importFrom graphics axis lines matplot mtext plot points text
#' @importFrom stats approx as.formula dbeta
#' @export
#' @author Koen W. De Bock, \email{kdebock@@audencia.com}
#' @references De Bock, K.W., Lessmann, S. And Coussement, K., Cost-sensitive business failure prediction
#' when misclassification costs are uncertain: A heterogeneous ensemble selection approach,
#' European Journal of Operational Research (2020), doi: 10.1016/j.ejor.2020.01.052.
#' @examples
#' ##load data
#' library(rpart)
#' library(zoo)
#' library(ROCR)
#' library(mco)
#' data(BFP)
#' ##generate random order vector
#' BFP_r<-BFP[sample(nrow(BFP),nrow(BFP)),]
#' size<-nrow(BFP_r)
#' ##size<-300
#' train<-BFP_r[1:floor(size/3),]
#' val<-BFP_r[ceiling(size/3):floor(2*size/3),]
#' test<-BFP_r[ceiling(2*size/3):size,]
#' ##generate a list containing model specifications for 100 CART decision trees varying in the cp
#' ##and minsplit parameters, and trained on bootstrap samples (bagging)
#' rpartSpecs<-list()
#' for (i in 1:100){
#' data<-train[sample(1:nrow(train),size=nrow(train),replace=TRUE),]
#' str<-paste("rpartSpecs$rpart",i,"=rpart(as.formula(Class~.),data,method=\"class\",
#' control=rpart.control(minsplit=",round(runif(1, min = 1, max = 20)),",cp=",runif(1,
#' min = 0.05, max = 0.4),"))",sep="")
#' eval(parse(text=str))
#' }
#' ##generate predictions for these models
#' hillclimb<-mat.or.vec(nrow(val),100)
#' for (i in 1:100){
#' str<-paste("hillclimb[,",i,"]=predict(rpartSpecs[[i]],newdata=val)[,2]",sep="")
#' eval(parse(text=str))
#' }
#' ##perform ensemble selection using the validation set predictions
#' ESmodel<-CSMES.ensSel(hillclimb,val$Class,obj1="FNR",obj2="FPR",selType="selection",
#' generations=10,popsize=12,plot=TRUE)
#' ## Create Ensemble nomination curve
#' enc<-CSMES.ensNomCurve(ESmodel,hillclimb,val$Class,curveType="costCurve",method="classPreds",
#' plot=FALSE)
CSMES.ensSel <- function(memberPreds,y,obj1=c("FNR","AUCC","MSE","AUC"),obj2=c("FPR","ensSize","ensSizeSq","clAmb"),
selType=c("selection","selectionWeighted","weighted"),plotting=TRUE,generations=30,popsize=100) {
minfunc <- function(x,memberPredictions,y, obj1=c("FNR","AUCC","MSE","AUC"),
obj2=c("FPR","ensSize","ensSizeSq","clAmb"), selType=c("selection","selectionWeighted","weighted"),plotting=TRUE) {
calculate_MSE <- function(preds,labels) {
y = labels;
x = preds;
original <- as.matrix(cbind(y,x))
inp <- as.matrix(cbind(y,x))
n0n1 <- nrow(inp)
MSE<-sum((inp[,1]-inp[,2])**2)/dim(inp)[1]
ans<- list(MSE=MSE)
class(ans) <- "MSE"
ans
}
calculate_AUCC <- function(preds,labels) {
labels <- t(do.call("rbind", rep(list(labels), ncol(preds))))
preds_t<-ROCR::prediction(preds,labels)
perf <- ROCR::performance(preds_t,'ecost')
nr_intervals=100
      values<-array(0,c(length(perf@x.values),nr_intervals+1))
seqs<-seq(from=0,to=1,by=1/nr_intervals)
      for (i in 1:length(perf@x.values)) {
        values[i,]<-approx(perf@x.values[[i]], perf@y.values[[i]],xout=seqs)[[2]]
}
lower_env_coordinates<-rbind(seqs,do.call(pmin, lapply(1:nrow(values), function(i)values[i,])))
x <- lower_env_coordinates[1,]
y <- lower_env_coordinates[2,]
id <- order(x)
AUCC <- sum(diff(x[id])*rollmean(y[id],2))
ans<- list(AUCC=AUCC)
class(ans) <- "AUCC"
ans
}
calculate_clAmb<-function(memberPredictions,preds_c){
      #class ambiguity, see Dos Santos et al.
mempreds_c<-(memberPredictions>0.5)*1
ans<-sum((mempreds_c!=preds_c)*1)/(ncol(mempreds_c)*nrow(mempreds_c))
ans
}
memberPredictions<-cbind(memberPredictions,y)
prob_cutoff<-0.5 #cutoff value used to discretize probability predictions
selection_weight_cutoff<-0.5 #cutoff value 0<v<1 that determines whether an ensemble member is selected or not
x_weights <-x
if (selType=="selection") {
weights<-(x_weights>=selection_weight_cutoff)*1
} else if (selType=="selectionWeighted") {
weights<-x_weights*((x_weights>=selection_weight_cutoff)*1)
} else if (selType=="weighted") { weights <- x_weights }
#extra check on the weights: if sum is zero (no selection whatsoever) the first member is chosen. Otherwise the Pareto prediction
#function will produce NaN's for that ensemble.
if (sum(weights)==0) {
weights[1]=1
}
if (sum(weights)==0) {
Pred <- array(0,c(nrow(memberPredictions),1))
} else {
Pred <- (as.matrix(memberPredictions[,1:(ncol(memberPredictions)-1)])%*%weights)/sum(weights)
#Pred <- (as.matrix(memberPredictions[,1:(ncol(memberPredictions)-1)])%*%weights)/(ncol(memberPredictions)-1)
}
#message(paste("selected ", sum((weights>0)*1), " from ", ncol(memberPredictions)-1, " models",sep=""))
nr1 = sum((memberPredictions[,ncol(memberPredictions)]==1)*1)
Pred_ordered<-Pred[order(Pred),]
Pred_c <- (Pred>prob_cutoff)*1
Pred_c <- factor(Pred_c,levels=0:1)
Real <- factor(memberPredictions[,ncol(memberPredictions)],levels=0:1)
CONF2 = table(Real, Pred_c)
if (length(rownames(CONF2))==1) {
if (rownames(CONF2)==0) {
CONF2 <- rbind(CONF2,c(0,0))
rownames(CONF2)[2] <- 1
} else if (rownames(CONF2)==1) {
CONF2 <- rbind(c(0,0),CONF2)
rownames(CONF2)[1] <- 0
}
}
if (length(colnames(CONF2))==1) {
if (colnames(CONF2)==0) {
CONF2 <- cbind(CONF2,c(0,0))
colnames(CONF2)[2] <- 1
} else if (colnames(CONF2)==1) {
CONF2 <- cbind(c(0,0),CONF2)
colnames(CONF2)[1] <- 0
}
}
FP = CONF2[1,2]
FN = CONF2[2,1]
FPR= CONF2[1,2]/(CONF2[1,2]+CONF2[1,1])
FNR= CONF2[2,1]/(CONF2[2,1]+CONF2[2,2])
y <- numeric(2)
if (obj1=="AUC") {
y[1]<- colAUC(Pred, Real,plotROC=FALSE)
} else if (obj1=="FNR") {
y[1]<- FNR
} else if (obj1=="AUCC") {
Pred_c<-(Pred>prob_cutoff)*1
y[1]<-calculate_AUCC(cbind(Pred_c), memberPredictions[,ncol(memberPredictions)])[[1]]
} else if (obj1=="MSE") {
y[1]<-calculate_MSE(cbind(Pred), memberPredictions[,ncol(memberPredictions)])[[1]]
}
if (obj2=="ensSize") {
y[2]<- sum(weights)
} else if (obj2=="ensSizeSq") {
y[2]<- (sum(weights))**2
} else if (obj2=="FPR") {
y[2] <- FPR
} else if (obj2=="clAmb"){
if (selType=="selection") {
part1<-(memberPredictions[,as.logical(weights)]>prob_cutoff)*1
part2<-as.vector((Pred>prob_cutoff)*1)
clamb <- calculate_clAmb(part1,part2)
y[2]<-1-clamb
} else {
warning("class ambiguity criterion only possible for ensemble selection")
y[2]<-1
}
}
if (plotting==TRUE) {
points(y[1],y[2],pch=".")
}
return (y)
}
target_classes <- unique(y)
target_classes_s <- target_classes[order(target_classes)]
newy <- as.numeric(y == target_classes_s[2])
if (plotting==TRUE) {
if (obj2=="ensSize") { plot(1,1,ylim=c(0,ncol(memberPreds)),xlim=c(0,1),type="p",xlab=obj1,ylab=obj2)
} else if (obj2=="ensSizeSq") { plot(1,1,ylim=c(0,ncol(memberPreds)*2),xlim=c(0,1),type="p",xlab=obj1,ylab=obj2)
} else {plot(1,1,xlim=c(0,1),ylim=c(0,1),type="p",xlab=obj1,ylab=obj2)}
}
#message("Stage 1: optimizing cost space through multicriteria ensemble selection")
nsga2mod <- nsga2(minfunc, ncol(memberPreds), 2, memberPredictions=memberPreds,y=newy,selType=selType,obj1=obj1,obj2=obj2,
generations=generations, popsize=popsize, lower.bounds=rep(0, ncol(memberPreds)),upper.bounds=rep(1, ncol(memberPreds)))
selection_weight_cutoffs<-as.numeric(array(0.5,popsize))
prob_cutoffs<-array(0.5,popsize)
if (selType=="selection") {
weights<-((nsga2mod$par[,1:(ncol(memberPreds))]>selection_weight_cutoffs)*1)
} else if (selType=="selectionWeighted") {
weights<-nsga2mod$par[,1:(ncol(memberPreds))]*((nsga2mod$par[,1:(ncol(memberPreds))]>selection_weight_cutoffs)*1)
} else if (selType=="weighted") {
weights<-nsga2mod$par[,1:(ncol(memberPreds))]
}
#Generate predictions for all Pareto-equivalent ensembles
Pareto_predictions_p <- t(t(as.matrix(memberPreds[,1:(ncol(memberPreds))])%*%t(weights))/rowSums(weights))
Pareto_predictions_c <- array(0, c(nrow(memberPreds),popsize))
for (j in 1:popsize) {
Pareto_predictions_c[,j]=cbind((Pareto_predictions_p[,j]>prob_cutoffs[j])*1)
}
ans<- list(weights=weights, obj_values=nsga2mod$value, pareto=nsga2mod$pareto.optimal,popsize=popsize,generations=generations,obj1=obj1,obj2=obj2,selType=selType,Pareto_predictions_p=Pareto_predictions_p,Pareto_predictions_c=Pareto_predictions_c)
class(ans) <- "CSMES.ensSel"
ans
}
|
/scratch/gouwar.j/cran-all/cranData/CSMES/R/CSMES.ensSel.R
|
#' CSMES scoring: generate predictions for the optimal ensemble classifier according to CSMES in function of cost information.
#'
#' This function generates predictions for a new data set (containing candidate member library predictions) using a CSMES model. Using the Pareto-optimal ensemble definitions
#' generated through \code{CSMES.ensSel} and the ensemble nomination front generated using \code{CSMES.ensNomCurve}, final ensemble predictions are generated based on the
#' cost information known to the user at the time of model scoring. The function allows for three scenarios: (1) the candidate ensemble is nominated for a specific cost
#' ratio, (2) the ensemble is nominated for a partial AUCC (a distribution over operating points) and (3) the candidate ensemble that is
#' optimal over the entire cost space, in terms of area under the cost or Brier curve, is chosen.
#'
#' @param ensNomCurve ensemble nomination curve object (output of \code{CSMES.ensNomCurve})
#' @param ensSelModel ensemble selection model (output of \code{CSMES.ensSel})
#' @param criterion This argument specifies which criterion determines the selection of the ensemble candidate that delivers predictions. Can be one of three options: "minEMC", "minAUCC" or "minPartAUCC".
#' @param costRatio Specifies the cost ratio used to determine expected misclassification cost. Only relevant when \code{criterion} is "minEMC".
#' @param partAUCC_mu Desired mean operating condition when \code{criterion} is "minPartAUCC" (partial area under the cost/brier curve).
#' @param partAUCC_sd Desired standard deviation when \code{criterion} is "minPartAUCC" (partial area under the cost/brier curve).
#' @param newdata matrix containing ensemble library member model predictions for new data set
#' @return A list with the following components:
#' \item{pred}{A matrix with model predictions. Both class and probability predictions are delivered.}
#' \item{criterion}{The criterion specified to determine the selection of the ensemble candidate.}
#' \item{costRatio}{The cost ratio in function of which the \code{criterion} "minEMC" has selected the optimal candidate ensemble that delivered predictions}
#' @import rpart zoo mco
#' @importFrom ROCR prediction performance
#' @importFrom caTools colAUC
#' @importFrom graphics axis lines matplot mtext plot points text
#' @importFrom stats approx as.formula dbeta
#' @export
#' @author Koen W. De Bock, \email{kdebock@@audencia.com}
#' @references De Bock, K.W., Lessmann, S. And Coussement, K., Cost-sensitive business failure prediction
#' when misclassification costs are uncertain: A heterogeneous ensemble selection approach,
#' European Journal of Operational Research (2020), doi: 10.1016/j.ejor.2020.01.052.
#' @seealso \code{\link{CSMES.ensSel}}, \code{\link{CSMES.predictPareto}}, \code{\link{CSMES.ensNomCurve}}
#' @examples
#' ##load data
#' library(rpart)
#' library(zoo)
#' library(ROCR)
#' library(mco)
#' data(BFP)
#' ##generate random order vector
#' BFP_r<-BFP[sample(nrow(BFP),nrow(BFP)),]
#' size<-nrow(BFP_r)
#' ##size<-300
#' train<-BFP_r[1:floor(size/3),]
#' val<-BFP_r[ceiling(size/3):floor(2*size/3),]
#' test<-BFP_r[ceiling(2*size/3):size,]
#' ##generate a list containing model specifications for 100 CART decision trees varying in the cp
#' ##and minsplit parameters, and trained on bootstrap samples (bagging)
#' rpartSpecs<-list()
#' for (i in 1:100){
#' data<-train[sample(1:nrow(train),size=nrow(train),replace=TRUE),]
#' str<-paste("rpartSpecs$rpart",i,"=rpart(as.formula(Class~.),data,method=\"class\",
#' control=rpart.control(minsplit=",round(runif(1, min = 1, max = 20)),",cp=",runif(1,
#' min = 0.05, max = 0.4),"))",sep="")
#' eval(parse(text=str))
#' }
#' ##generate predictions for these models
#' hillclimb<-mat.or.vec(nrow(val),100)
#' for (i in 1:100){
#' str<-paste("hillclimb[,",i,"]=predict(rpartSpecs[[i]],newdata=val)[,2]",sep="")
#' eval(parse(text=str))
#' }
#' ##perform ensemble selection using the validation set predictions
#' ESmodel<-CSMES.ensSel(hillclimb,val$Class,obj1="FNR",obj2="FPR",selType="selection",
#' generations=10,popsize=12,plot=TRUE)
#' ## Create Ensemble nomination curve
#' enc<-CSMES.ensNomCurve(ESmodel,hillclimb,val$Class,curveType="costCurve",method="classPreds",
#' plot=FALSE)
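#' ## Finally, a minimal sketch of the scoring step itself (assuming the objects above):
#' ## CSMES.predict expects the member predictions for the data to be scored plus a final
#' ## column named "dependent" holding the true class labels.
#' testpreds<-mat.or.vec(nrow(test),100)
#' for (i in 1:100){
#'   testpreds[,i]<-predict(rpartSpecs[[i]],newdata=test)[,2]
#' }
#' newdata<-data.frame(testpreds,dependent=test$Class)
#' pred<-CSMES.predict(ESmodel,enc,newdata,criterion="minEMC",costRatio=5)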
CSMES.predict<-function(ensSelModel, ensNomCurve, newdata, criterion=c("minEMC","minAUCC","minPartAUCC"),
costRatio=5,partAUCC_mu=0.5,partAUCC_sd=0.1) {
estBetaParams <- function(mu, sd) {
#kudos to assumednormal at StackExchange
alpha <- ((1 - mu) / sd**2 - 1 / mu) * mu ^ 2
beta <- alpha * (1 / mu - 1)
return(params = list(alpha = alpha, beta = beta))
}
newdata_paretopreds<-CSMES.predictPareto(ensSelModel,newdata)
nr_intervals=ensNomCurve$intervals
P_plus<-ensNomCurve$incidence
if (criterion=="minEMC") {
alpha<-costRatio
PC_plus=(P_plus*alpha)/(P_plus*alpha+(1-P_plus))
#winner <- ensNomCurve$nomcurve[which(round(ensNomCurve$nomcurve[,1],digits=log10(nr_intervals))==round(PC_plus,digits=log10(nr_intervals))),3]
winner <- ensNomCurve$nomcurve[which.min(abs(ensNomCurve$nomcurve[,1]-PC_plus)),3]
pred <- newdata_paretopreds$Pareto_predictions_p[,winner]
} else if (criterion=="minAUCC") {
#identify member with overall minimal AUCC (default ensemble)
areas_under_curve<-mat.or.vec(1,ensSelModel$popsize)
x<-seq(from = 0, to = 1, by = 1/(ensNomCurve$intervals-1))
for (i in 1:ensSelModel$popsize){
areas_under_curve[i]<- sum(diff(x)*rollmean(ensNomCurve$curves[i,],2))
}
winner<- which.min(areas_under_curve)
pred <- newdata_paretopreds$Pareto_predictions_p[,winner]
} else if (criterion=="minPartAUCC") {
#identify member with overall minimal partial AUCC for the cost ratio and sd provided
areas_under_curve<-mat.or.vec(1,ensSelModel$popsize)
x<-seq(from = 0, to = 1, by = 1/(ensNomCurve$intervals-1))
id <- order(x)
alpha<-costRatio
PC_plus=(P_plus*alpha)/(P_plus*alpha+(1-P_plus))
parests<-estBetaParams(PC_plus,partAUCC_sd)
ydist<-dbeta(x,parests[[1]],parests[[2]])
ydist[which(is.infinite(ydist))]<-max(ydist[which(!is.infinite(ydist))])*10
for (i in 1:ensSelModel$popsize){
areas_under_curve[i] <- sum(diff(x[id])*rollmean(ensNomCurve$curves[i,id],2)*rollmean(ydist[id],2))
}
winner<- which.min(areas_under_curve)
pred <- newdata_paretopreds$Pareto_predictions_p[,winner]
}
formula <- as.formula(dependent~.)
n <- length(newdata[,1])
vardep <- newdata[,as.character(formula[[2]])]
tmp <- unique(vardep)
depvalues <- tmp[order(tmp)]
cutoff=0.5
pred_prob <- cbind(1-pred, pred)
colnames(pred_prob) <- as.character(depvalues)
pred_class=(pred_prob[,2]>cutoff)*1+1
pred_class2 <- as.character(pred_class)
for (i in 1:nrow(as.array(unique(depvalues)))){
pred_class2[(pred_class==i)] <- as.character(depvalues[i])
}
pred_class <- pred_class2
rm(pred_class2)
pred <- cbind(pred_class,pred_prob)
pred_all <- cbind(newdata[,as.character(formula[[2]])], pred)
colnames(pred_all)[1:2] = rbind("dependent","pred")
ans<- list(pred=pred_all,criterion=criterion,costRatio=costRatio)
class(ans) <- "CSMES"
ans
}
|
/scratch/gouwar.j/cran-all/cranData/CSMES/R/CSMES.predict.R
|
#' Generate predictions for all Pareto-optimal ensemble classifier candidates selected through CSMES
#'
#' This function generates predictions for all pareto-optimal ensemble classifier candidates as identified through the first training stage of CSMES (\code{CSMES.ensSel}).
#'
#' @param ensSelModel ensemble selection model (output of \code{CSMES.ensSel})
#' @param newdata data.frame or matrix containing data to be scored
#' @import rpart zoo mco
#' @importFrom ROCR prediction performance
#' @importFrom caTools colAUC
#' @importFrom graphics axis lines matplot mtext plot points text
#' @importFrom stats approx as.formula dbeta
#' @return An object of the class \code{CSMES.predictPareto} which is a list with the following two components:
#' \item{Pareto_predictions_c}{A matrix with class predictions (one column per Pareto-optimal candidate ensemble).}
#' \item{Pareto_predictions_p}{A matrix with probability predictions (one column per Pareto-optimal candidate ensemble).}
#' @export
#' @author Koen W. De Bock, \email{kdebock@@audencia.com}
#' @references De Bock, K.W., Lessmann, S. And Coussement, K., Cost-sensitive business failure prediction
#' when misclassification costs are uncertain: A heterogeneous ensemble selection approach,
#' European Journal of Operational Research (2020), doi: 10.1016/j.ejor.2020.01.052.
#' @seealso \code{\link{CSMES.ensSel}}, \code{\link{CSMES.predict}}, \code{\link{CSMES.ensNomCurve}}
#' @examples
#' ##load data
#' library(rpart)
#' library(zoo)
#' library(ROCR)
#' library(mco)
#' data(BFP)
#' ##generate random order vector
#' BFP_r<-BFP[sample(nrow(BFP),nrow(BFP)),]
#' size<-nrow(BFP_r)
#' ##size<-300
#' train<-BFP_r[1:floor(size/3),]
#' val<-BFP_r[ceiling(size/3):floor(2*size/3),]
#' test<-BFP_r[ceiling(2*size/3):size,]
#' ##generate a list containing model specifications for 100 CART decision trees varying in the cp
#' ##and minsplit parameters, and trained on bootstrap samples (bagging)
#' rpartSpecs<-list()
#' for (i in 1:100){
#' data<-train[sample(1:nrow(train),size=nrow(train),replace=TRUE),]
#' str<-paste("rpartSpecs$rpart",i,"=rpart(as.formula(Class~.),data,method=\"class\",
#' control=rpart.control(minsplit=",round(runif(1, min = 1, max = 20)),",cp=",runif(1,
#' min = 0.05, max = 0.4),"))",sep="")
#' eval(parse(text=str))
#' }
#' ##generate predictions for these models
#' hillclimb<-mat.or.vec(nrow(val),100)
#' for (i in 1:100){
#' str<-paste("hillclimb[,",i,"]=predict(rpartSpecs[[i]],newdata=val)[,2]",sep="")
#' eval(parse(text=str))
#' }
#' ##perform ensemble selection using the validation set predictions
#' ESmodel<-CSMES.ensSel(hillclimb,val$Class,obj1="FNR",obj2="FPR",selType="selection",
#' generations=10,popsize=12,plot=TRUE)
#' ## Create Ensemble nomination curve
#' enc<-CSMES.ensNomCurve(ESmodel,hillclimb,val$Class,curveType="costCurve",method="classPreds",
#' plot=FALSE)
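#' ## A minimal sketch of the scoring step itself (assuming the objects above): the last
#' ## column of the input must hold the true labels, mirroring the internal call made by
#' ## CSMES.ensNomCurve.
#' paretoPreds<-CSMES.predictPareto(ESmodel,cbind(hillclimb,val$Class))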
CSMES.predictPareto<-function(ensSelModel,newdata) {
popsize<-ensSelModel$popsize
#extra check on the weights: if sum is zero (no selection whatsoever) the first member is chosen.
#Otherwise the Pareto prediction function will produce NaN's for that ensemble.
weights<-ensSelModel$weights
weights[rowSums(weights)==0,1]<-1
cutoff<-0.500000000001
Pareto_predictions_p <- t(t(as.matrix(newdata[,1:(ncol(newdata)-1)])%*%t(weights))/rowSums(weights))
Pareto_predictions_c <- array(0, c(nrow(newdata),popsize))
for (j in 1:popsize) {
Pareto_predictions_c[,j]=cbind((Pareto_predictions_p[,j]>cutoff)*1)
}
ans<- list(Pareto_predictions_c=Pareto_predictions_c,Pareto_predictions_p=Pareto_predictions_p)
class(ans) <- "CSMES.predictPareto"
ans
}
|
/scratch/gouwar.j/cran-all/cranData/CSMES/R/CSMES.predictPareto.R
|
#' Calculates Brier Curve
#'
#' This function calculates the Brier curve (both in terms of cost and skew) based on a set of predictions generated by a binary classifier. Brier curves allow an evaluation of classifier performance in cost space. This code is an adapted version from the authors' original implementation, available through http://dmip.webs.upv.es/BrierCurves/BrierCurves.R.
#'
#' @param preds Vector with predictions (real-valued or discrete)
#' @param labels Vector with true class labels
#' @param resolution Value for the determination of percentile intervals. Defaults to 1/1000.
#' @export
#' @import zoo
#' @importFrom data.table as.data.table
#' @return object of the class \code{brierCurve} which is a list with the following components:
#' \item{brierCurveCost}{Cost-based Brier curve, represented as (cost,loss) coordinates}
#' \item{brierCurveSkew}{Skew-based Brier curve, represented as (skew,loss) coordinates}
#' \item{auc_brierCurveCost}{Area under the cost-based Brier curve.}
#' \item{auc_brierCurveSkew}{Area under the skew-based Brier curve.}
#' @author Koen W. De Bock, \email{kdebock@@audencia.com}
#' @references Hernandez-Orallo, J., Flach, P., & Ferri, C. (2011). Brier Curves: a New Cost-Based Visualisation of Classifier Performance. Proceedings of the 28th International Conference on Machine Learning (ICML-11), 585–592.
#' @seealso \code{\link{plotBrierCurve}}, \code{\link{CSMES.ensNomCurve}}
#' @examples
#' ##load data
#' library(rpart)
#' data(BFP)
#' ##generate random order vector
#' BFP_r<-BFP[sample(nrow(BFP),nrow(BFP)),]
#' size<-nrow(BFP_r)
#' ##size<-300
#' train<-BFP_r[1:floor(size/3),]
#' val<-BFP_r[ceiling(size/3):floor(2*size/3),]
#' test<-BFP_r[ceiling(2*size/3):size,]
#' ##train CART decision tree model
#' model=rpart(as.formula(Class~.),train,method="class")
#' ##generate predictions for the test set
#' preds<-predict(model,newdata=test)[,2]
#' ##calculate brier curve
#' bc<-brierCurve(test[,"Class"],preds)
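#' ##inspect the area under the cost-based Brier curve
#' bc$auc_brierCurveCost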
brierCurve<-function(labels,preds,resolution=0.001){
inp <- cbind((labels==sort(unique(labels))[2])*1,preds)
bfactor <- 2
n0n1 <- nrow(inp)
x <- t(inp)
zord <- order(x[2,])
sc <- x[,zord]
n1 <- sum(sc[1,])
n0 <- n0n1 - n1
pi0 <- n0/n0n1
pi1 <- n1/n0n1
zord <- order(x[2,])
zordrev <- rev(zord)
screv <- x[,zordrev]
inp <- t(screv) #Decreasing order
if (n0 == 0)
warning("No elements of class 0")
if (n1 == 0)
warning("No elements of class 1")
sc <- cbind(sc,sc[,n0n1])
F0 <- c(0:n0n1)
F1 <- c(0:n0n1)
K1 <- 1
k <- 2
for (i in 1:n0n1) {
F0[k] <- F0[K1]+(1-sc[1,i])
F1[k] <- F1[K1]+sc[1,i]
K1 <- k
k <- if (sc[2,i+1] == sc[2,i]) (k) else (k+1)
}
F0 <- F0[1:K1]
F1 <- F1[1:K1]
G0nomin <- F0 / n0
G1nomin <- F1 / n1
inpnorep <- 1:n0n1
j <- 1
olda <- -1
for (i in 1:n0n1) {
a <- inp[i,2]
if ((a != olda) || (i == 1)) {
inpnorep[j] <- a
olda <- a
j <- j+1
}
}
# j-1 should be equal to K1 here
inpnorep <- inpnorep[1:(K1-1)]
costprobnorep <- c(1:(K1+1))
costprobnorep[1] <- 0
costprobnorep[K1+1] <- 1
for (i in 2:K1)
{
costprobnorep[i] <- 1 * inpnorep[K1-i+1]
# costprobnorep[i] <- 1-inpnorep[i-1]
}
######## Expected cost Qprobnew (Brier) ####### for COST
K1new <- K1*2
costprobnew <- c(1:K1new)
Qprobnew <- c(1:K1new)
Qprobnew0 <- c(1:K1new)
Qprobnew1 <- c(1:K1new)
for (i in 2:(K1new-1))
{
costprobnew[i] <- costprobnorep[trunc(i/2)+1]
prova <- costprobnew[i]
prova0 <- G0nomin[i]
prova1 <- G1nomin[i]
Qprobnew[i] <- bfactor * (prova*pi0*(1-G0nomin[trunc((i+1)/2)]) + (1-prova)*pi1*G1nomin[trunc((i+1)/2)])
Qprobnew0[i] <- bfactor * (prova*pi0*(1-G0nomin[trunc((i+1)/2)]))
Qprobnew1[i] <- bfactor * (1-prova)*pi1*G1nomin[trunc((i+1)/2)]
}
Qprobnew[1] <- 0
Qprobnew[K1new] <- 0
Qprobnew0[1] <- 0
Qprobnew0[K1new] <- 0
Qprobnew1[1] <- 0
Qprobnew1[K1new] <- 0
costprobnew[1] <- 0
costprobnew[K1new] <- 1
##### Expected cost Qprobnewnorm (Brier) ###### for SKEW
K1new <- K1*2
costprobnewnorm <- c(1:K1new)
Qprobnewnorm <- c(1:K1new)
Qprobnewnorm0 <- c(1:K1new)
Qprobnewnorm1 <- c(1:K1new)
for (i in 2:(K1new-1))
{
p <- costprobnorep[trunc(i/2)+1]
costprobnewnorm[i] <- p
prova <- costprobnewnorm[i]
Qprobnewnorm[i] <- bfactor * 0.5 * (prova*(1-G0nomin[trunc((i+1)/2)]) + (1-prova)*G1nomin[trunc((i+1)/2)])
Qprobnewnorm0[i] <- bfactor * 0.5 * (prova*(1-G0nomin[trunc((i+1)/2)]))
Qprobnewnorm1[i] <- bfactor * 0.5 * ((1-prova)*G1nomin[trunc((i+1)/2)])
}
Qprobnewnorm[1] <- 0
Qprobnewnorm[K1new] <- 0
Qprobnewnorm0[1] <- 0
Qprobnewnorm0[K1new] <- 0
Qprobnewnorm1[1] <- 0
Qprobnewnorm1[K1new] <- 0
costprobnewnorm[1] <- 0
costprobnewnorm[K1new] <- 1
y<-NA #test
x.values=costprobnew
y.values=Qprobnew
group<-as.data.table(data.frame(x=x.values,y=y.values))
a<-group[group[, .I[y == max(y)], by=x]$V1]
x.values<-as.numeric(unlist(a[,1]))
y.values<-as.numeric(unlist(a[,2]))
nr_intervals=1/resolution
seqs<-seq(from=0,to=1,by=1/nr_intervals)
values<-approx(x.values, y.values,xout=seqs)[[2]]
lower_env_coordinates_cost<-rbind(seqs,values)
x.values=costprobnewnorm
y.values=Qprobnewnorm
group<-as.data.table(data.frame(x=x.values,y=y.values))
a<-group[group[, .I[y == max(y)], by=x]$V1]
x.values<-as.numeric(unlist(a[,1]))
y.values<-as.numeric(unlist(a[,2]))
values<-array(0,c(length(x.values),nr_intervals+1))
seqs<-seq(from=0,to=1,by=1/nr_intervals)
values<-approx(x.values, y.values,xout=seqs)[[2]]
lower_env_coordinates_skew<-rbind(seqs,values)
Briercurve_cost_nods<-rbind(costprobnew,Qprobnew)
Briercurve_skew_nods<-rbind(costprobnewnorm,Qprobnewnorm)
rownames(Briercurve_cost_nods)<-c("cost","loss")
rownames(Briercurve_skew_nods)<-c("skew","loss")
Briercurve_cost<-lower_env_coordinates_cost
Briercurve_skew<-lower_env_coordinates_skew
rownames(Briercurve_cost)<-c("cost","loss")
rownames(Briercurve_skew)<-c("skew","loss")
x2 <- Briercurve_cost[1,]
y2 <- Briercurve_cost[2,]
id <- order(x2)
auc_brierCurveCost<- sum(diff(x2[id])*rollmean(y2[id],2))
x2 <- Briercurve_skew[1,]
y2 <- Briercurve_skew[2,]
id <- order(x2)
auc_brierCurveSkew<- sum(diff(x2[id])*rollmean(y2[id],2))
ans<-list(brierCurveCost=t(Briercurve_cost),brierCurveSkew=t(Briercurve_skew),auc_brierCurveCost=auc_brierCurveCost,auc_brierCurveSkew=auc_brierCurveSkew)
class(ans) <- "brierCurve"
ans
}
|
/scratch/gouwar.j/cran-all/cranData/CSMES/R/brierCurve.R
|
utils::globalVariables(c(".I"))
|
/scratch/gouwar.j/cran-all/cranData/CSMES/R/globals.R
|
#' Plots Brier Curve
#'
#' This function plots the brier curve based on a set of predictions generated by a binary classifier. Brier curves allow an evaluation of classifier performance in cost space.
#'
#' @param bc A \code{brierCurve} object created by the \code{brierCurve} function
#' @param curveType the type of Brier curve to be plotted. Should be "brierCost" or "brierSkew".
#' @export
#' @importFrom data.table as.data.table
#' @return None
#' @author Koen W. De Bock, \email{kdebock@@audencia.com}
#' @references Hernandez-Orallo, J., Flach, P., & Ferri, C. (2011). Brier Curves: a New Cost-Based Visualisation of Classifier Performance. Proceedings of the 28th International Conference on Machine Learning (ICML-11), 585–592.
#' @seealso \code{\link{brierCurve}}, \code{\link{CSMES.ensNomCurve}}
#' @examples
#' ##load data
#' library(rpart)
#' data(BFP)
#' ##generate random order vector
#' BFP_r<-BFP[sample(nrow(BFP),nrow(BFP)),]
#' size<-nrow(BFP_r)
#' ##size<-300
#' train<-BFP_r[1:floor(size/3),]
#' val<-BFP_r[ceiling(size/3):floor(2*size/3),]
#' test<-BFP_r[ceiling(2*size/3):size,]
#' ##train CART decision tree model
#' model=rpart(as.formula(Class~.),train,method="class")
#' ##generate predictions for the test set
#' preds<-predict(model,newdata=test)[,2]
#' ##calculate brier curve
#' bc<-brierCurve(test[,"Class"],preds)
#' ##plot briercurve
#' plotBrierCurve(bc,curveType="brierCost")
plotBrierCurve<-function(bc,curveType=c("brierCost","brierSkew")){
if (curveType=="brierCost"){
    plot(bc$brierCurveCost,type="l",ylab="Loss",xlab="Cost",main="Cost-based Brier curve")
  } else if (curveType=="brierSkew"){
    plot(bc$brierCurveSkew,type="l",ylab="Loss",xlab="Skew",main="Skew-based Brier curve")
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSMES/R/plotBrierCurve.R
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
#' Solve the penalized logistic regression.
#' @param y samples of binary outcome which is a \eqn{n*1} vector.
#' @param x samples of covariates which is a \eqn{n*p} matrix.
#' @param off offset in logistic regression.
#' @param pen 1: MCP estimator; 2: SCAD estimator.
#' @param lam value of the penalty parameter \eqn{\lambda}.
#' @param beta initial estimates.
#' @return A numeric vector, estimate of beta
#' @export
penC <- function(x, y, off, beta, lam, pen) {
.Call('_CSTE_penC', PACKAGE = 'CSTE', x, y, off, beta, lam, pen)
}
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/RcppExports.R
|
#' tool functions
#'
#' @param x samples of biomarker (or covariate) which is a \eqn{n*1} vector
#' and should be scaled between 0 and 1.
#' @param x0 biomarkers at a fixed point.
#' @param h kernel bandwidth.
#' @param z treatment indicators which is \eqn{n*K} matrix.
#' @param input parameters with dimension (2*K+1).
#' @param R indicator matrix of individuals at risk prior to \eqn{y_i}.
#' @param s censoring indicator which is a \eqn{n*1} vector.
#' @param sep parameter for the trapezoid rule (number of grid segments).
#' @param sloped the first derivative of \eqn{g}.
#' @param l contraction vector with dimension \eqn{K}.
#' @param beta0,beta0dot the value of beta and its first derivative at \eqn{x0}.
#' @param g0dot the first derivative of \eqn{g0} at x0.
#' @param stdel,stgam,std store the values of \code{beta0}, \code{beta0dot},
#' and \code{g0dot} at each \eqn{x_i}.
#' @param m number of turns of resampling.
#' @param i the \eqn{i}-th observation.
#' @param alpha the (1-alpha)-confidence level of SCB.
#' @param thres the output of \code{getthres} function.
#'
#' @return Intermediate results.
#' @name tool
#' @keywords internal
NULL
# transfer z to z*
#' @rdname tool
ztrans <- function(z, x, x0,h){
zstar = cbind(z,diag((x-x0)/h) %*% z,(x-x0)/h)
return(zstar)
}
#' @rdname tool
lpl <- function(input,x,x0,R,z,s,h){
n = nrow(z)
p = ncol(z)
del = input[1:p]
gam = input[(p+1):(2*p)]
d = input[2*p +1]
myker = gaussK((x-x0)/h)/h
temp = z%*%del+diag(x-x0)%*%z%*%gam+d*(x-x0)
return(-1/n*s%*%diag(myker)%*%(temp-log(R%*%diag(myker)%*%exp(temp))))
}
#' @rdname tool
g <- function(x0,sep,sloped){
#trapezoid rule
if(x0 == 1){
return((sum(sloped)-0.5*sloped[1]-0.5*sloped[sep])/sep)
}
b = floor(x0*sep)+1
temp = sum(sloped[1:b])-0.5*sloped[1]-0.5*sloped[b]
pp = (x0-(b-1)*1/sep)*sep
height = (1-pp)*sloped[b]+pp*sloped[b+1]
temp = temp+(sloped[b]+height)*pp/2
return(temp/sep)
}
#' @rdname tool
cntcov <- function(l,beta0,beta0dot,g0dot,x,x0,R,Z,s,h,sep,sloped){
n = nrow(Z)
p = ncol(Z)
Zstar = ztrans(Z,x,x0,h)
g0 = g(x0,sep,sloped)
myker = gaussK((x-x0)/h)/h
theta0 = c(beta0,h*beta0dot,h*g0dot)
ns0 <- as.vector(R%*%diag(myker)%*%exp(Zstar%*%theta0+g0*rep(1,n)))
ns1 <- R%*%diag(as.vector(myker*exp(Zstar%*%theta0+g0*rep(1,n))))%*%Zstar
left <- 1/n*t(Zstar)%*%diag(as.vector((s*myker/ns0)%*%R*myker*exp(as.vector(Zstar%*%theta0+g0*rep(1,n)))))%*%Zstar
right <- 1/n*t(ns1)%*%diag(s*myker/ns0/ns0)%*%ns1
A <- left-right
#B <- 1/n*(t(Zstar)%*%myker-t(ns1)%*%(myker/ns0))
Pi <- 1/(n*h*sqrt(pi))*t(Zstar-diag(1/ns0)%*%ns1)%*%diag(s*myker*myker)%*%(Zstar-diag(1/ns0)%*%ns1) #checked
mycov = 1/n/h*solve(A)%*%Pi%*%solve(A)
return(as.numeric(l%*%mycov[1:p,1:p]%*%l))
}
#' @rdname tool
sampleQ <- function(i,l,stdel,stgam,std,x,R,Z,s,h,sep,sloped,m){
n = nrow(Z)
p = ncol(Z)
lstar = c(l,rep(0,p+1))
g0 = g(x[i],sep,sloped)
Zstar = ztrans(Z,x,x[i],h)
myker = gaussK((x-x[i])/h)/h
G = matrix(rnorm(n*m),n,m)
beta0 = as.vector(stdel[i,])
beta0dot = as.vector(stgam[i,])
g0dot = as.numeric(std[i])
ns0 <- as.vector(R%*%diag(myker)%*%exp(Z%*%beta0+g0*rep(1,n)))
ns1 <- R%*%diag(as.vector(myker*exp(Z%*%beta0+g0*rep(1,n))))%*%Zstar
left <- 1/n*t(Zstar)%*%diag(as.vector((s*myker/ns0)%*%R*myker*exp(as.vector(Z%*%beta0+g0*rep(1,n)))))%*%Zstar
right <- 1/n*t(ns1)%*%diag(s*myker/ns0/ns0)%*%ns1
I <- left-right
U <- 1/n*t(Zstar-diag(1/ns0)%*%ns1)%*%diag(myker*s)%*%G
Q <- as.vector(lstar%*%solve(I)%*%U)
mycov <- cntcov(l,beta0,beta0dot,g0dot,x,x[i],R,Z,s,h,sep,sloped)
# return(abs(Q/sqrt(mycov)))
return(abs(Q/sqrt(mycov)*sqrt(n*h)))
}
#' @rdname tool
getthres <- function(alpha,l,stdel,stgam,std,x,R,Z,s,h,sep,sloped,m){
n = nrow(Z)
tempfun <- function(t){return(sampleQ(t,l,stdel,stgam,std,x,R,Z,s,h,sep,sloped,m))}
result <- matrix(sapply(1:n,tempfun),n,m,byrow=TRUE)
Qbar <- apply(result,2,max)
thres <- quantile(Qbar,1-alpha)
return(thres)
}
#' @rdname tool
get.bound <- function(thres,l,stdel,stgam,std,x,R,Z,s,h,sep,sloped){
n <- nrow(Z)
p <- ncol(Z)
tempfun <- function(i){
beta0 = as.vector(stdel[i,])
beta0dot = as.vector(stgam[i,])
g0dot = as.numeric(std[i])
return(cntcov(l,beta0,beta0dot,g0dot,x,x[i],R,Z,s,h,sep,sloped))}
covs <- sapply(1:n,tempfun)
return(cbind(stdel%*%l-thres*sqrt(covs)/sqrt(n*h),stdel%*%l,stdel%*%l+thres*sqrt(covs)/sqrt(n*h)))
}
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/Tools.R
|
#' Estimate the CSTE curve for binary outcome.
#'
#' Estimate covariate-specific treatment effect (CSTE) curve. Input data
#' contains covariates \eqn{X}, treatment assignment \eqn{Z} and binary outcome
#' \eqn{Y}. The working model is \deqn{logit(\mu(X, Z)) = g_1(X\beta_1)Z + g_2(X\beta_2),}
#' where \eqn{\mu(X, Z) = E(Y|X, Z)}. The model implies that \eqn{CSTE(x) = g_1(x\beta_1)}.
#'
#'@param x samples of covariates which is a \eqn{n*p} matrix.
#'@param y samples of binary outcome which is a \eqn{n*1} vector.
#'@param z samples of treatment indicator which is a \eqn{n*1} vector.
#'@param beta_ini initial values for \eqn{(\beta_1', \beta_2')'}, default value is NULL.
#'@param lam value of the lasso penalty parameter \eqn{\lambda} for \eqn{\beta_1} and
#'\eqn{\beta_2}, default value is 0.
#'@param nknots number of knots for the B-spline for estimating \eqn{g_1} and \eqn{g_2}.
#'@param max.iter maximum iteration for the algorithm.
#'@param eps numeric scalar \eqn{\geq} 0, the tolerance for the estimation
#' of \eqn{\beta_1} and \eqn{\beta_2}.
#'
#'@return A S3 class of cste, which includes:
#' \itemize{
#' \item \code{beta1}: estimate of \eqn{\beta_1}.
#' \item \code{beta2}: estimate of \eqn{\beta_2}.
#' \item \code{B1}: the B-spline basis for estimating \eqn{g_1}.
#' \item \code{B2}: the B-spline basis for estimating \eqn{g_2}.
#' \item \code{delta1}: the coefficient of B-spline for estimating \eqn{g_1}.
#' \item \code{delta2}: the coefficient for B-spline for estimating \eqn{g_2}.
#' \item \code{iter}: number of iteration.
#' \item \code{g1}: the estimate of \eqn{g_1(X\beta_1)}.
#' \item \code{g2}: the estimate of \eqn{g_2(X\beta_2)}.
#' }
#'@examples
#' ## Quick example for the cste
#'
#' library(mvtnorm)
#' library(sigmoid)
#'
#' # -------- Example 1: p = 20 --------- #
#' ## generate data
#' n <- 2000
#' p <- 20
#' set.seed(100)
#'
#' # generate X
#' sigma <- outer(1:p, 1:p, function(i, j){ 2^(-abs(i-j)) } )
#' X <- rmvnorm(n, mean = rep(0,p), sigma = sigma)
#' X <- relu(X + 2) - 2
#' X <- 2 - relu(2 - X)
#'
#' # generate Z
#' Z <- rbinom(n, 1, 0.5)
#'
#' # generate Y
#' beta1 <- rep(0, p)
#' beta1[1:3] <- rep(1/sqrt(3), 3)
#' beta2 <- rep(0, p)
#' beta2[1:2] <- c(1, -2)/sqrt(5)
#' mu1 <- X %*% beta1
#' mu2 <- X %*% beta2
#' g1 <- mu1*(1 - mu1)
#' g2 <- exp(mu2)
#' prob <- sigmoid(g1*Z + g2)
#' Y <- rbinom(n, 1, prob)
#'
#' ## estimate the CSTE curve
#' fit <- cste_bin(X, Y, Z)
#'
#' ## plot
#' plot(mu1, g1, cex = 0.5, xlim = c(-2,2), ylim = c(-8, 3),
#' xlab = expression(X*beta), ylab = expression(g1(X*beta)))
#' ord <- order(mu1)
#' points(mu1[ord], fit$g1[ord], col = 'blue', cex = 0.5)
#'
#' ## compute 95% simultaneous confidence band (SCB)
#' res <- cste_bin_SCB(X, fit, alpha = 0.05)
#'
#' ## plot
#' plot(res$or_x, res$fit_x, col = 'red',
#' type="l", lwd=2, lty = 3, ylim = c(-10,8),
#' ylab=expression(g1(X*beta)), xlab = expression(X*beta),
#' main="Confidence Band")
#' lines(res$or_x, res$lower_bound, lwd=2.5, col = 'purple', lty=2)
#' lines(res$or_x, res$upper_bound, lwd=2.5, col = 'purple', lty=2)
#' abline(h=0, cex = 0.2, lty = 2)
#' legend("topleft", legend=c("Estimates", "SCB"),
#' lwd=c(2, 2.5), lty=c(3,2), col=c('red', 'purple'))
#'
#'
#' # -------- Example 2: p = 1 --------- #
#'
#' ## generate data
#' set.seed(15)
#' p <- 1
#' n <- 2000
#' X <- runif(n)
#' Z <- rbinom(n, 1, 0.5)
#' g1 <- 2 * sin(5*X)
#' g2 <- exp(X-3) * 2
#' prob <- sigmoid( Z*g1 + g2)
#' Y <- rbinom(n, 1, prob)
#'
#' ## estimate the CSTE curve
#' fit <- cste_bin(X, Y, Z)
#'
#' ## simultaneous confidence band (SCB)
#' X <- as.matrix(X)
#' res <- cste_bin_SCB(X, fit)
#'
#' ## plot
#' plot(res$or_x, res$fit_x, col = 'red', type="l", lwd=2,
#' lty = 3, xlim = c(0, 1), ylim = c(-4, 4),
#' ylab=expression(g1(X)), xlab = expression(X),
#' main="Confidence Band")
#' lines(res$or_x, res$lower_bound, lwd=2.5, col = 'purple', lty=2)
#' lines(res$or_x, res$upper_bound, lwd=2.5, col = 'purple', lty=2)
#' abline(h=0, cex = 0.2)
#' lines(X[order(X)], g1[order(X)], col = 'blue', lwd = 1.5)
#' legend("topright", legend=c("Estimates", "SCB",'True CSTE Curve'),
#' lwd=c(2, 2.5, 1.5), lty=c(3,2,1), col=c('red', 'purple','blue'))
#'
#' @references
#' Guo W., Zhou X. and Ma S. (2021).
#' Estimation of Optimal Individualized Treatment Rules
#' Using a Covariate-Specific Treatment Effect Curve with
#' High-dimensional Covariates,
#' \emph{Journal of the American Statistical Association}, 116(533), 309-321
#'
#' @seealso \code{\link{cste_bin_SCB}, \link{predict_cste_bin}, \link{select_cste_bin}}
# cste estimation for binary outcome
cste_bin <- function(x, y, z, beta_ini = NULL, lam = 0, nknots = 1, max.iter = 200, eps = 1e-3) {
x <- as.matrix(x)
n <- dim(x)[1]
p <- dim(x)[2]
if(p==1) {
B1 <- B2 <- bs(x, df = nknots+4, intercept = TRUE)
B <- cbind(z * B1, B2)
# fit <- glm(y~0+B, family="binomial")
fit <- glm.fit(B, y, family=binomial(link = 'logit'))
delta1 <- coef(fit)[1:(nknots+4)]
delta2 <- coef(fit)[(nknots+5):(2*nknots+8)]
geta <- z * B1 %*% delta1 + B2 %*% delta2
g1 <- B1 %*% delta1
g2 <- B2 %*% delta2
loss <- -sum(log(1 + exp(geta))) + sum(y * geta)
bic <- -2 * loss + (nknots+4) * log(n)
aic <- -2 * loss + 2 * (nknots+4)
out <- list(beta1 = 1, beta2 = 1, B1 = B1, B2 = B2, delta2 = delta2, delta1 = delta1, g = geta, x = x, y = y, z = z, nknots = nknots, p=p, g1=g1, g2=g2, bic=bic, aic=aic)
class(out) <- "cste"
return(out)
} else {
flag <- FALSE
# if(is.null(truth))
truth <- rep(p,2)
if(is.null(beta_ini)) beta_ini <- c(normalize(rep(1, truth[1])), normalize(rep(1, truth[2])))
else beta_ini <- c(beta_ini[1:truth[1]], beta_ini[(truth[1]+1):sum(truth)])
beta_curr <- beta_ini
conv <- FALSE
iter <- 0
# knots location
knots <- seq(0, 1, length = nknots + 2)
len.delta <- length(knots) + 2
while(conv == FALSE & iter < max.iter) {
iter <- iter + 1
# step 1 fix beta to estimate g
beta1 <- beta_curr[1:truth[1]]
beta2 <- beta_curr[(truth[1]+1):length(beta_curr)]
# u transformation
u1 <- pu(x[,1:truth[1]], beta1)
u2 <- pu(x[,1:truth[2]], beta2)
eta1 <- u1$u
eta2 <- u2$u
# calculate B-spline basis
B1 <- bsplineS(eta1, breaks = quantile(eta1, knots))
B2 <- bsplineS(eta2, breaks = quantile(eta2, knots))
B <- cbind(z*B1, B2)
# estimate g1 and g2
# fit.delta <- glm(y~0+B, family="binomial")
fit.delta <- glm.fit(B, y, family=binomial(link = 'logit'))
delta <- drop(coef(fit.delta))
delta[is.na(delta)] <- 0
delta1 <- delta[1 : len.delta]
delta2 <- delta[(len.delta + 1) : (2*len.delta)]
# step 2 fix g to estimate beta
# calculate first derivative of B-spline basis
B_deriv_1 <- bsplineS(eta1, breaks = quantile(eta1, knots), nderiv = 1)
B_deriv_2 <- bsplineS(eta2, breaks = quantile(eta2, knots), nderiv = 1)
# calculate input
newx_1 <- z * drop((B_deriv_1*(u1$deriv))%*%delta1)*x[,1:truth[1]]
newx_2 <- drop((B_deriv_2*(u2$deriv))%*%delta2)*x[,1:truth[2]]
newx <- cbind(newx_1, newx_2)
# calculate offset
off_1 <- z * B1%*%delta1 - newx_1 %*% beta1
off_2 <- B2%*%delta2 - newx_2 %*% beta2
off <- off_1 + off_2
# estimate beta
beta <- my_logit(newx, y, off, lam = lam)
if(sum(is.na(beta)) > 1){
break
stop("only 1 variable in betas; decrease lambda")
}
beta1 <- beta[1:truth[1]]
beta2 <- beta[(truth[1]+1):length(beta_curr)]
check <- c(sum(beta1!=0),sum(beta2!=0))
if(min(check) <= 1) {
stop("0 beta occurs; decrease lambda")
flag <- TRUE
if(check[1] != 0) beta[1:truth[1]] <- normalize(beta1)
if(check[2] !=0) beta[(truth[1]+1):length(beta_curr)] <- normalize(beta2)
break
}
beta <- c(normalize(beta1), normalize(beta2))
conv <- (max(abs(beta - beta_curr)) < eps)
beta_curr <- beta
}
geta <- z * B1 %*% delta1 + B2 %*% delta2
g1 <- B1 %*% delta1
g2 <- B2 %*% delta2
loss <- -sum(log(1 + exp(geta))) + sum(y * geta)
df1 <- sum(beta1!=0)
df2 <- sum(beta2!=0)
# df <- df1 + df2
df <- df1 + df2 + 2*length(delta1)
# bic <- -2 * loss + df * log(n) * log(log(p))
bic <- -2 * loss + df * log(n) * log(p)
aic <- -2 * loss + 2 * df
# delta.var <- summary(fit.delta)$cov.unscaled # not used
out <- list(beta1 = beta[1:truth[1]], beta2 = beta[(truth[1]+1):length(beta_curr)], B1 = B1, B2 = B2, delta1 = delta1, delta2 = delta2, iter = iter, g = geta, g1 = g1, g2 = g2, loss = loss, df = df, df1 = df1, df2 = df2, bic = bic, aic = aic, x = x, y = y, z = z, knots = knots, flag = flag, p=p, conv=conv, final.x = newx, final.off = off)
class(out) <- "cste"
return(out)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/cste_bin.R
|
#' Calculate simultaneous confidence bands of CSTE curve for binary outcome.
#'
#' This function calculates simultaneous confidence bands of CSTE curve for binary
#' outcome.
#'
#'
#'@param x samples of predictor, which is a \eqn{m*p} matrix.
#'@param fit a S3 class of cste.
#'@param h kernel bandwidth.
#'@param alpha the simultaneous confidence bands are of \eqn{1-\alpha} confidence level.
#'
#'@return A list which includes:
#' \itemize{
#' \item \code{or_x}: the ordered value of \eqn{X\beta_1}.
#' \item \code{fit_x}: the fitted value of CSTE curve corresponding to \code{or_x}.
#' \item \code{lower_bound}: the lower bound of CSTE's simultaneous confidence band.
#' \item \code{upper_bound}: the upper bound of CSTE's simultaneous confidence band.
#' }
#'
#'
#'
#' @references
#' Guo W., Zhou X. and Ma S. (2021).
#' Estimation of Optimal Individualized Treatment Rules
#' Using a Covariate-Specific Treatment Effect Curve with
#' High-dimensional Covariates,
#' \emph{Journal of the American Statistical Association}, 116(533), 309-321
#'
#' @seealso \code{\link{cste_bin}}
cste_bin_SCB <- function(x, fit, h = NULL, alpha = 0.05){
u1 <- pu(x, fit$beta1)$u
u2 <- pu(x, fit$beta2)$u
sbk <- prev_fit_cste(u1, u2, fit)
if(is.null(h)){
# optimal bandwidth
# h <- sbk$h * (log(sbk$n))^(-0.25)
h <- sbk$h
}
newx <- seq(min(u1)+h, max(u1)-h, length = 100)
fit.x <- sapply(newx, function(xx) coef(glm(sbk$y~1, weights=dnorm((xx - sbk$u1)/h)/h ,
family=quasibinomial(link = "logit"), offset=sbk$fit_g2)))
# estimate sigma_b^2(x)
fit_sb <- predict(sbk$fit_sigma_b, newx)$y
# estimate sigma^2(x)
fit_s <- predict(sbk$fit_sigma_x, newx)$y
# calculate inflation factor
# alpha <- 0.05
Ck_d <- 1/(2*sqrt(pi))
Ck_n <- 1/(4*sqrt(pi))
Ck <- Ck_n/Ck_d
mu2k <- 1/(2*sqrt(2))
ah <- sqrt(-2 * log(h))
Qh <- ah + (log(sqrt(Ck)/(2*pi)) - log(-log(sqrt(1 - alpha))))/ah
# estimate density of u1
h_x <- bw.nrd0(sbk$u1)
f_x <- sapply(newx, function(xx) mean(dnorm((xx - u1)/h_x)/h_x))
# calculate variance
D <- fit_sb * f_x
v_sq <- Ck_d * f_x * fit_s
id_rm <- D < 0 | v_sq < 0
g_sigma <- sbk$n^(-0.5) * h^(-0.5) * sqrt(v_sq[!id_rm]) / D[!id_rm]
# calculate SCC
L <- fit.x[!id_rm] - Qh * g_sigma
U <- fit.x[!id_rm] + Qh * g_sigma
# without model selection h , 0.006
or_x <- pu_inv(x, fit$beta1, newx)
# mycol <- gg_color(2)
# or_x <- quantile(seq(min(x%*%fit$beta1), max(x%*%fit$beta1),length=1000), newx)
return(list(or_x = or_x[!id_rm],fit_x = fit.x[!id_rm],
lower_bound = L, upper_bound = U))
}
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/cste_bin_SCB.R
|
#' Estimate the CSTE curve for time to event outcome with right censoring.
#'
#' Estimate the CSTE curve for time to event outcome with right censoring.
#' The working model
#' is \deqn{\lambda(t| X, Z) = \lambda_0(t) \exp(\beta^T(X)Z + g(X)),}
#' which implies that \eqn{CSTE(x) = \beta(x)}.
#'
#'
#'@param x samples of biomarker (or covariate) which is a \eqn{n*1} vector
#' and should be scaled between 0 and 1.
#'@param y samples of time to event which is a \eqn{n*1} vector.
#'@param z samples of treatment indicator which is a \eqn{n*K} matrix.
#'@param s samples of censoring indicator which is a \eqn{n*1} vector.
#'@param h kernel bandwidth.
#'@return A \eqn{n*K} matrix, estimation of \eqn{\beta(x)}.
#' @references
#' Ma Y. and Zhou X. (2017).
#' Treatment selection in a randomized clinical trial via covariate-specific
#' treatment effect curves, \emph{Statistical Methods in Medical Research}, 26(1), 124-141.
#'
#' @seealso \code{\link{cste_surv_SCB}}
cste_surv <- function(x,y,z,s,h){
n <- nrow(z)
p <- ncol(z)
sep <- 20
myfun <- function(w){return(as.numeric(y>=w))}
R <- matrix(sapply(y,myfun),n,n,byrow=TRUE)
#preprocessing
stdel <- matrix(0,nrow=n,ncol=p)
for(i in 1:n){
tempfun <- function(t){return(lpl(t,x,x[i],R,z,s,h))}
# ans = optim(rep(0,2*p+1),tempfun)$par
ans = nmk(rep(0,2*p+1),tempfun)$par
stdel[i,] <- ans[1:p]
}
return(stdel)
}
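# A minimal illustrative sketch (hypothetical data, not taken from the package examples):
# a single treatment arm (K = 1), a covariate already scaled to [0, 1], and bandwidth h = 0.2.
# n <- 100
# x <- runif(n)
# z <- cbind(rbinom(n, 1, 0.5))
# y <- rexp(n, rate = exp(0.5 * z[, 1] * sin(5 * x)))
# s <- rbinom(n, 1, 0.9)                      # 1 = event observed, 0 = censored
# beta_hat <- cste_surv(x, y, z, s, h = 0.2)  # n x 1 matrix estimating beta(x)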
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/cste_surv.R
|
#' Calculate simultaneous confidence bands (SCB) of CSTE curve for time to event outcome with right censoring.
#'
#' This function calculates simultaneous confidence bands of CSTE curve for time to event outcome with right censoring.
#'
#' @param l contraction vector with dimension \eqn{K}.
#' @param x samples of biomarker (or covariate) which is a \eqn{n*1} vector
#' and should be scaled between 0 and 1.
#' @param y samples of time to event which is a \eqn{n*1} vector.
#' @param z samples of treatment indicator which is a \eqn{n*K} matrix.
#' @param s samples of censoring indicator which is a \eqn{n*1} vector.
#' @param h kernel bandwidth.
#' @param m number of turns of resampling.
#' @param alpha the \eqn{(1-\alpha)}-confidence level of SCB.
#'
#'@return A \eqn{n*3} matrix, estimation of \eqn{l^T \beta(x)} and its simultaneous confidence bands.
#' @references
#' Ma Y. and Zhou X. (2017).
#' Treatment selection in a randomized clinical trial via covariate-specific
#' treatment effect curves, \emph{Statistical Methods in Medical Research}, 26(1), 124-141.
#'
#' @seealso \code{\link{cste_surv}}
cste_surv_SCB <- function(l,x,y,z,s,h,m, alpha= 0.05){
n <- nrow(z)
p <- ncol(z)
sep <- 20
myfun <- function(w){return(as.numeric(y>=w))}
R <- matrix(sapply(y,myfun),n,n,byrow=TRUE)
#preprocessing
stdel <- matrix(0,nrow=n,ncol=p)
stgam <- matrix(0,nrow=n,ncol=p)
std <- rep(0,n)
for(i in 1:n){
tempfun <- function(t){return(lpl(t,x,x[i],R,z,s,h))}
# ans = optim(rep(0,2*p+1),tempfun)$par
ans = nmk(rep(0,2*p+1),tempfun)$par
stdel[i,] <- ans[1:p]
stgam[i,] <- ans[(p+1):(2*p)]
std[i] <- ans[2*p+1]
}
sloped <- rep(0,sep)
for(i in 1:(sep+1)){
tempfun <- function(t){return(lpl(t,x,1/sep*(i-1),R,z,s,h))}
# ans = optim(rep(0,2*p+1),tempfun)$par
ans = nmk(rep(0,2*p+1),tempfun)$par
sloped[i] <- ans[2*p+1]
}
# derive quantile
thres <- getthres(alpha,l,stdel,stgam,std,x,R,z,s,h,sep,sloped,m)
return(get.bound(thres,l,stdel,stgam,std,x,R,z,s,h,sep,sloped))
}
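# A minimal illustrative sketch (continues the hypothetical setup sketched under cste_surv):
# a 95% simultaneous confidence band for the single treatment effect, with l = 1 and
# m = 100 resampling draws.
# scb <- cste_surv_SCB(l = 1, x, y, z, s, h = 0.2, m = 100, alpha = 0.05)
# head(scb)   # columns: lower bound, point estimate and upper bound of l' beta(x)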
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/cste_surv_SCB.R
|
#' @useDynLib CSTE
#' @importFrom Rcpp evalCpp
#' @exportPattern "^[[:alpha:]]+"
#' @importFrom fda bsplineS
#' @import Rcpp
#' @importFrom splines bs
#' @importFrom stats optim approx binomial coef dbeta density glm.fit pbeta predict qbeta quantile smooth.spline bw.nrd0 complete.cases dnorm glm rnorm
#' @importFrom stats na.omit qnorm quasibinomial
#' @importFrom survival coxph Surv survfit basehaz
#' @importFrom locpol gaussK
#' @importFrom dfoptim nmk
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/pkg.R
|
#' Predict the CSTE curve of new data for binary outcome.
#'
#' Predict the CSTE curve of new data for binary outcome.
#'
#'
#'@param obj a S3 class of cste.
#'@param newx samples of covariates which is a \eqn{m*p} matrix.
#'
#'@return A S3 class of cste which includes
#' \itemize{
#' \item \code{g1}: predicted \eqn{g_1(X\beta_1)}.
#' \item \code{g2}: predicted \eqn{g_2(X\beta_2)}.
#' \item \code{B1}: the B-spline basis for estimating \eqn{g_1}.
#' \item \code{B2}: the B-spline basis for estimating \eqn{g_2}.
#' }
#'
#' @references
#' Guo W., Zhou X. and Ma S. (2021).
#' Estimation of Optimal Individualized Treatment Rules
#' Using a Covariate-Specific Treatment Effect Curve with
#' High-dimensional Covariates,
#' \emph{Journal of the American Statistical Association}, 116(533), 309-321
#'
#' @seealso \code{\link{cste_bin}}
predict_cste_bin <- function(obj, newx) {
# type <- match.arg(type)
if(missing(newx)) {
    g1 <- obj$B1 %*% obj$delta1
    g2 <- obj$B2 %*% obj$delta2
    newB <- obj$B1
    newB2 <- obj$B2
} else {
u1 <- pu(newx, obj$beta1)
u2 <- pu(newx, obj$beta2)
eta1 <- u1$u
newB <- bsplineS(eta1, breaks = quantile(eta1, obj$knots))
g1 <- newB %*% obj$delta1
eta2 <- u2$u
newB2 <- bsplineS(eta2, breaks = quantile(eta2, obj$knots))
g2 <- newB2 %*% obj$delta2
}
return(list(g1 = g1, g2 = g2, B1 = newB, B2 = newB2))
}
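# A minimal illustrative sketch (reuses 'fit' and 'X' from Example 1 of cste_bin):
# newX <- X[1:5, , drop = FALSE]
# pr <- predict_cste_bin(fit, newX)
# pr$g1   # predicted g1(newX %*% beta1), the CSTE at the new covariate values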
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/predict_cste_bin.R
|
#' Select the optimal tuning parameters in CSTE estimation for binary outcome.
#'
#' Select the lasso penalty parameter \eqn{\lambda} for \eqn{\beta_1} and
#'\eqn{\beta_2} in CSTE estimation.
#'
#'
#'@param x samples of covariates which is a \eqn{n*p} matrix.
#'@param y samples of binary outcome which is a \eqn{n*1} vector.
#'@param z samples of treatment indicator which is a \eqn{n*1} vector.
#'@param lam_seq a sequence for the choice of \eqn{\lambda}.
#'@param beta_ini initial values for \eqn{(\beta_1', \beta_2')'}, default value is NULL.
#'@param nknots number of knots for the B-spline for estimating \eqn{g_1} and \eqn{g_2}.
#'@param max.iter maximum iteration for the algorithm.
#'@param eps numeric scalar \eqn{\geq} 0, the tolerance for the estimation
#' of \eqn{\beta_1} and \eqn{\beta_2}.
#'
#'@return A list which includes
#' \itemize{
#' \item \code{optimal}: the optimal cste fit within the given sequence of \eqn{\lambda}.
#' \item \code{bic}: BIC for the sequence of \eqn{\lambda}.
#' \item \code{lam_seq}: the sequence of \eqn{\lambda} that is used.
#'
#' }
#'
#' @references
#' Guo W., Zhou X. and Ma S. (2021).
#' Estimation of Optimal Individualized Treatment Rules
#' Using a Covariate-Specific Treatment Effect Curve with
#' High-dimensional Covariates,
#' \emph{Journal of the American Statistical Association}, 116(533), 309-321
#'
#' @seealso \code{\link{cste_bin}}
select_cste_bin <- function(x, y, z, lam_seq, beta_ini = NULL, nknots = 1, max.iter = 2000, eps = 1e-3) {
n <- dim(x)[1]
p <- dim(x)[2]
out <- vector("list", length(lam_seq))
beta1 <- matrix(0, p, length(lam_seq))
beta2 <- matrix(0, p, length(lam_seq))
for(i in 1:length(lam_seq)) {
if(is.null(beta_ini)){
beta_ini <- rep(normalize(rep(1, p)), 2)
}
if(i > 1) beta_ini <- c(out[[i-1]]$beta1, out[[i-1]]$beta2)
out[[i]] <- cste_bin(x, y, z, lam = lam_seq[i], beta_ini = beta_ini, nknots = nknots, max.iter = max.iter, eps = eps)
beta1[, i] <- out[[i]]$beta1
beta2[, i] <- out[[i]]$beta2
if(out[[i]]$flag | out[[i]]$df1 <= 2 | out[[i]]$df2 <= 2) {
            warning("Not all values in 'lam_seq' were used; consider decreasing lambda.")
break
}
}
df <- sapply(out[1:i], function(x) x$df)
bic <- sapply(out[1:i], function(x) x$bic)
loss <- sapply(out[1:i], function(x) x$loss)
return(list(optimal = out[[which.min(bic)]], bic = bic, lam_seq = lam_seq[1:i], df = df, complete = out[1:i], beta1=beta1, beta2=beta2))
}
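# A minimal (not run) sketch of tuning-parameter selection with BIC. The data
# generation and the lambda grid below are illustrative assumptions only:
#   set.seed(1)
#   x <- matrix(rnorm(200 * 10), 200, 10)
#   z <- rbinom(200, 1, 0.5)
#   y <- rbinom(200, 1, plogis(0.3 * x[, 1] + 0.5 * x[, 2] * z))
#   lam_seq <- exp(seq(log(0.5), log(0.05), length.out = 10))
#   sel <- select_cste_bin(x, y, z, lam_seq = lam_seq)
#   plot(sel$lam_seq, sel$bic, type = "b")   # BIC along the lambda path
#   best_fit <- sel$optimal                  # fit at the BIC-minimizing lambda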
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/select_cste_bin.R
|
#' tool functions
#'
#' @param x numeric vector or matrix.
#' @param y,u,u1,u2,beta,atrisk numeric vector.
#' @param fit an S3 object of class 'cste'.
#' @param off numeric value, offset.
#' @param lam numeric value, penalty parameter.
#' @param pen hyper-parameter used in the MCP and SCAD penalty functions.
#'
#' @return Intermediate results.
#' @name tool
#' @keywords internal
NULL
# u transformation
#' @rdname tool
pu <- function(x, beta) {
x <- as.matrix(x)
d <- (length(beta) + 1)/2
v <- x%*%beta
# alpha <- quantile(apply(x, 1, function(x) sqrt(sum(x^2))), 0.95)
alpha <- quantile( sqrt( rowSums(x*x) ), 0.95)
t <- (v + alpha)/(2 * alpha)
u <- pbeta(t, d, d)
deriv <- dbeta(t, d, d)/(2*alpha)
return(list(u = u, deriv = drop(deriv)))
}
#' @rdname tool
pu_inv <- function(x, beta, u) {
d <- (length(beta) + 1)/2
# alpha <- quantile(apply(x, 1, function(x) sqrt(sum(x^2))), 0.95)
alpha <- quantile( sqrt( rowSums(x*x)), 0.95)
t <- qbeta(u, d, d)
return(2 * alpha * t - alpha)
}
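# Quick illustration (not run, illustrative data only) of the u-transformation:
# pu() maps the single index x %*% beta into [0, 1] through a Beta(d, d) CDF
# after rescaling by the 95th percentile of the row norms of x, and pu_inv()
# maps a u value back to the index scale (approximately, since indices beyond
# the rescaling range are clipped):
#   set.seed(1)
#   x <- matrix(rnorm(100 * 3), 100, 3)
#   beta <- normalize(c(1, -1, 0.5))
#   u <- pu(x, beta)$u
#   range(u)                    # values lie in [0, 1]
#   head(pu_inv(x, beta, u))    # approximately recovers x %*% beta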
# normalize vector
#' @rdname tool
normalize <- function(x) x/sqrt(sum(x^2))
# inverse logit (expit)
#' @rdname tool
logitinv <- function(x) exp(x)/(1+exp(x))
# sparse logistic regression without intercept
#' @rdname tool
my_logit <- function(x, y, off = NULL, beta = NULL, lam = 0, pen = 2) {
n <- dim(x)[1]
p <- dim(x)[2]
if(is.null(off)) off <- rep(0, n)
if(is.null(beta)) beta <- rep(0, p)
return(penC(x, y, off, beta, lam, pen))
}
# intermediate functions for survival analysis
#' @rdname tool
my_surv <- function(x, y, atrisk, off = NULL, beta = NULL, lam = 0) {
n <- dim(x)[1]
p <- dim(x)[2]
if(is.null(off)) off <- rep(0, n)
if(is.null(beta)) beta <- normalize(rep(1, p))
penal <- function(bt){
loglik = sum(off) + sum(x%*%bt) - sum(log(atrisk%*%exp(off+x%*%bt)))
return(-loglik + lam * sum(abs(bt)))
}
beta_new <- optim(par=beta, fn=penal)$par
return(beta_new)
}
# intermediate functions for cste_bin
#' @rdname tool
prev_fit_cste <- function(u1, u2, fit) {
# initialization
z <- fit$z
id0 <- (z == 0)
id1 <- (z == 1)
u1 <- u1[id1]
u2 <- u2[id1]
y <- (fit$y)[id1]
n <- sum(id1)
fit_g1 <- (fit$B1 %*% fit$delta1)[id1]
fit_g2 <- (fit$B2 %*% fit$delta2)[id1]
fit_g <- fit$g[id1]
# number of interior knots
nk <- max(1, floor(min(n^(0.25)*log(n)+1, n/8-0.5-1)))
# calculate optimal bandwidth
mx <- smooth.spline(u1, fit_g1)
mx_derv <- predict(mx, u1, deriv = 1)$y
mx_derv_2 <- predict(mx, u1, deriv = 2)$y
# estimate sigma_b^2(x)
sigma_b <- exp(fit_g) / (1 + exp(fit_g))^2
fit_sigma_b <- smooth.spline(u1, sigma_b)
# estimate sigma^2(x)
sigma_x <- (y - logitinv(fit_g))^2
fit_sigma_x <- smooth.spline(u1, sigma_x)
# estimate kernel density
fit_f_x <- density(u1)
f_x <- approx(fit_f_x$x, fit_f_x$y, u1)$y
# h_x <- bw.nrd0(u1)
# f_x <- sapply(u1, function(xx) mean(dnorm((xx - u1)/h_x)/h_x))
# calculate bias
Ck_d <- 1/(2*sqrt(pi))
Ck_n <- 1/(4*sqrt(pi))
D <- predict(fit_sigma_b, u1)$y * f_x
v_sq <- Ck_d * f_x * predict(fit_sigma_x, u1)$y
# v_sq[v_sq < 0] <- 0.001
# D[D < 0] <- 0.001
b3 <- exp(fit_g) * (exp(fit_g) - 1) / (exp(fit_g) + 1)^3
fit_b3 <- smooth.spline(u1, b3)
mu2k <- 1/(2*sqrt(2))
bias_x <- mu2k * (mx_derv_2 * D + mx_derv * f_x * predict(fit_sigma_b, u1, deriv=1)$y - mx_derv^2 * f_x * predict(fit_b3, u1)$y)
h_opt <- (mean((1/D) * v_sq * (1/D)) / (4 * sum((bias_x / D)^2)))^(0.2)
return(list(h = h_opt, fit_sigma_x = fit_sigma_x, fit_sigma_b = fit_sigma_b, fit_f_x = fit_f_x, n = n, y = y, u1 = u1, u2 = u2, fit_g1 = fit_g1, fit_g2 = fit_g2))
}
|
/scratch/gouwar.j/cran-all/cranData/CSTE/R/transformfunction.R
|
#' AEMET Training
#' Training method (pre-downscaling) based on analogs:
#' synoptic situations and significant predictors.
#'
#'@author Marta Dominguez Alonso - AEMET, \email{[email protected]}
#'@author Nuria Perez-Zanon - BSC, \email{[email protected]}
#'
#'@description This function characterizes the synoptic situations in a past
#'period based on low resolution reanalysis data (e.g., ERA-Interim 1.5º x 1.5º)
#'and an observational high resolution (HR) dataset (AEMET 5 km gridded daily
#'precipitation and maximum and minimum temperature) (Peral et al., 2017).
#'The method uses three domains:
#'\itemize{
#' \item{peninsular Spain and Balearic Islands domain (5 km resolution): HR domain}
#' \item{synoptic domain (low resolution): it should be centered over Iberian
#' Peninsula and cover enough extension to detect as much synoptic
#' situations as possible.}
#' \item{extended domain (low resolution): it is an extension of the synoptic
#' domain. It is used for 'slp_ext' parameter (see 'slp_lon' and 'slp_lat'
#' below).}
#'}
#'@param pred List of matrices with reanalysis data in a synoptic domain. The
#' list has to contain reanalysis atmospheric variables (instantaneous 12h data)
#' that must be identified by the names given in parentheses below. For
#' precipitation:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500) in m/s}
#' \item{v component of wind at 500 hPa (v500) in m/s}
#' \item{temperature at 500 hPa (t500) in K}
#' \item{temperature at 850 hPa (t850) in K}
#' \item{temperature at 1000 hPa (t1000) in K}
#' \item{geopotential height at 500 hPa (z500) in m}
#' \item{geopotential height at 1000 hPa (z1000) in m}
#' \item{sea level pressure (slp) in hPa}
#' \item{specific humidity at 700 hPa (q700) in g/kg}
#' }
#' For maximum and minimum temperature:
#' \itemize{
#' \item{temperature at 1000 hPa (t1000) in K}
#' \item{sea level pressure (slp) in hPa}
#' }
#' All matrix must have [time,gridpoint] dimensions.
#' (time = number of training days, gridpoint = number of synoptic gridpoints).
#'@param slp_ext Matrix with atmospheric reanalysis sea level pressure
#' (instantaneous 12h data) (hPa). It has the same resolution as the 'pred'
#' parameter but with an extended domain. This domain contains extra degrees
#' (mostly in the north and west) compared to the synoptic domain. The matrix
#' must have
#' [time,gridpoint] dimensions. (time = number of training days,
#' gridpoint = number of extended gridpoints).
#'@param lon Vector of the synoptic longitude (from (-180º) to 180º).
#' The vector must go from west to east.
#'@param lat Vector of the synoptic latitude. The vector must go from north to
#' south.
#'@param slp_lon Vector of the extended longitude (from (-180º) to 180º).
#' The vector must go from west to east.
#'@param slp_lat Vector of the extended latitude. The vector must go from north
#' to south.
#'@param var Variable name to downscale. There are two options: 'prec' for
#' precipitation and 'temp' for maximum and minimum temperature.
#'@param HR_path Local path of HR observational files (maestro and pcp/tmx-tmn).
#' Precipitation and temperature files can be downloaded from the following link:
#' \url{https://www.aemet.es/en/serviciosclimaticos/cambio_climat/datos_diarios?w=2}.
#' The maestro file (maestro_red_hr_SPAIN.txt) has gridpoint (nptos),
#' longitude (lon), latitude (lat) and altitude (alt) in columns (vector
#' structure). The data file (pcp/tmx/tmn_red_SPAIN_1951-201903.txt) includes 5 km
#' resolution Spanish daily data (precipitation or maximum and minimum
#' temperature from January 1951 to June 2020). See the README file for more
#' information. IMPORTANT: the HR observational period must be the same as for
#' the reanalysis variables. It is assumed that the training period is shorter
#' than the original HR one (1951-2020), so a new ASCII file must be created
#' with the new period and the same structure as the original, specifying the
#' training dates ('tdates' parameter) in the name (e.g.
#' 'pcp_red_SPAIN_19810101-19961231.txt' for the '19810101-19961231' period).
#'@param tdates Training period dates in format YYYYMMDD(start)-YYYYMMDD(end)
#' (e.g. 19810101-19961231).
#'@return A list of matrices (e.g. restrain) resulting from characterizing the past
#'synoptic situations and the significant predictors needed to downscale
#'seasonal forecast variables. For precipitation the output includes:
#'\itemize{
#' \item{'um': u component of geostrophic wind in all period (numeric matrix
#' with [time, gridpoint] dimensions).}
#' \item{'vm': v component of geostrophic wind in all period (numeric matrix
#' with [time,gridpoint] dimensions).}
#' \item{'nger': number of synoptic situations (integer).}
#' \item{'gu92': u component of geostrophic wind for each synoptic situation
#' (numeric matrix with [nger,gridpoint] dimensions).}
#' \item{'gv92': v component of geostrophic wind for each synoptic situation
#' (numeric matrix with [nger, gridpoint] dimensions).}
#' \item{'gu52': u component of wind at 500 hPa for each synoptic situation
#' (numeric matrix with [nger, gridpoint] dimensions).}
#' \item{'gv52': v component of wind at 500 hPa for each synoptic situation
#' (numeric matrix with [nger, gridpoint] dimensions).}
#' \item{'neni': number of reference centers where predictors are calculated
#' (integer).}
#' \item{'vdmin': minimum distances between each HR gridpoint and the four
#' nearest synoptic gridpoints (numeric matrix with [nptos,4] dimensions)
#' (nptos = number of HR gridpoints).}
#' \item{'vref': four nearest synoptic gridpoints to each HR gridpoint (integer
#' matrix with [nptos, 4] dimensions).}
#' \item{'ccm': multiple correlation coefficients (numeric matrix with
#' [nger, nptos] dimensions).}
#' \item{'indices': array with the selected predictors, containing:
#' \itemize{
#' \item{'lab_pred': numeric labels of selected predictors (integer matrix
#' with [nger,nptos,11,1] dimensions).}
#' \item{'cor_pred': partial correlation of selected predictors (numeric
#' matrix with [nger,nptos,11,2] dimensions).}
#' }
#' }
#' }
#'For maximum and minimum temperature the output includes:
#'\itemize{
#' \item{'um': u component of geostrophic wind in all training period (numeric
#' matrix with [time,gridpoint] dimensions).}
#' \item{'vm': v component of geostrophic wind in all training period (numeric
#' matrix with [time,gridpoint] dimensions).}
#' \item{'insol': insolation in all training period (numeric vector with [time]
#' dimension).}
#' \item{'neni': number of reference centers where predictors are calculated
#' (integer).}
#' \item{'vdmin': minimum distances between each HR gridpoint and the four
#' nearest synoptic gridpoints (numeric matrix with [nptos,4] dimensions)
#' (nptos = number of HR gridpoints).}
#' \item{'vref': four nearest synoptic gridpoints to each HR gridpoint (integer
#' matrix with [nptos,4] dimensions).}
#'}
#'The output can be used directly as an argument to the 'CST_AnalogsPredictors'
#'function (e.g. resdowns <- CST_AnalogsPredictors(..., restrain)).
#'@importFrom utils read.table
#'@useDynLib CSTools
#'@export
training_analogs <- function(pred, slp_ext, lon, lat, slp_lon, slp_lat, var,
HR_path, tdates) {
if (!is.list(pred)) {
stop("Parameter 'pred' must be a list of 'matrix' objects")
}
if (!(all(sapply(pred, inherits, 'matrix')))) {
stop("Elements of the list in parameter 'pred' must be of the class ",
"'matrix'.")
}
  if (var == "prec") {
    req_vars <- c("u500", "v500", "t500", "t850", "t1000",
                  "z500", "z1000", "slp", "q700")
  } else {
    req_vars <- c("t1000", "slp")
  }
  if (length(pred) != length(req_vars)) {
    stop("Parameter 'pred' must have length ", length(req_vars), ".")
  }
  if (any(sapply(pred, function(x) is.null(names(dim(x)))))) {
    stop("Parameter 'pred' should have dimension names.")
  }
  for (req_var in req_vars) {
    if (!(any(names(pred) %in% req_var))) {
      stop("Variable '", req_var, "' in 'pred' parameter is missing.")
    }
  }
if (all((sapply(pred,dim)) == dim(pred[[1]])) &
all((sapply(pred, function(pred){names(dim(pred))})) == names(dim(pred[[1]])))) {
dim_pred <- dim(pred[[1]])
if (!(any(names(dim_pred) %in% "time"))) {
stop("Dimension 'time' in pred parameter is missed.")
}
if (!(any(names(dim_pred) %in% "gridpoint"))) {
stop("Dimension 'gridpoint' in pred parameter is missed.")
}
    if (names(dim_pred)[1] == "gridpoint") {
      pred <- lapply(pred, aperm)
    }
} else {
stop("All 'pred' variables must have the same dimensions and name dimensions.")
}
if (!is.vector(lon) || !is.numeric(lon)) {
stop("Parameter 'lon' must be a numeric vector")
} else {
if (is.unsorted(lon)) {
lon <- sort(lon)
warning("'lon' vector has been sorted in increasing order")
}
}
if (!is.vector(lat) || !is.numeric(lat)) {
stop("Parameter 'lat' must be a numeric vector")
} else {
if (!is.unsorted(lat)) {
lat <- sort(lat, decreasing = TRUE)
warning("'lat' vector has been sorted in decreasing order")
}
}
if (!is.character(HR_path)) {
stop("Parameter 'HR_path' must be a character.")
} else {
if (!dir.exists(HR_path)) {
stop("'HR_path' directory does not exist")
}
}
if (!is.character(tdates)) {
stop("Parameter 'tdates' must be a character.")
} else {
    if (nchar(tdates) != 17) {
      stop("Parameter 'tdates' must be a string with 17 characters.")
    } else {
      dateini <- as.Date(substr(tdates, start = 1, stop = 8), format = "%Y%m%d")
      dateend <- as.Date(substr(tdates, start = 10, stop = 17), format = "%Y%m%d")
      if (dateend <= dateini) {
        stop("The training period defined in 'tdates' must be at least one day long.")
      }
}
}
}
#! REANALYSIS GRID PARAMETERS
rlon <- c(lon, NA) - c(NA, lon)
rlon <- rlon[!is.na(rlon)]
if (!all(rlon == rlon[1])) {
stop("Parameter 'lon' must be in regular grid.")
} else {
rlon <- rlon[1]
}
rlat <- c(lat, NA) - c(NA, lat)
rlat <- rlat[!is.na(rlat)]
if (!all(rlat == rlat[1])) {
stop("Parameter 'lat' must be in regular grid.")
} else {
rlat <- rlat[1]
}
if (rlon != (-rlat)) {
stop("Parameters 'lon' and 'lat' must have the same resolution.")
} else {
res <- rlon
}
nlat <- ((lat[length(lat)] - lat[1]) / rlat) + 1
nlon <- ((lon[length(lon)] - lon[1]) / rlon) + 1
ic <- nlat * nlon
slp_rlon <- c(slp_lon, NA) - c(NA, slp_lon)
slp_rlon <- slp_rlon[!is.na(slp_rlon)]
if (!all(slp_rlon == slp_rlon[1])) {
stop("Parameter 'slp_lon' must be in regular grid.")
} else {
slp_rlon <- slp_rlon[1]
}
slp_rlat <- c(slp_lat, NA) - c(NA, slp_lat)
slp_rlat <- slp_rlat[!is.na(slp_rlat)]
if (!all(slp_rlat == slp_rlat[1])) {
stop("Parameter 'slp_lat' must be in regular grid.")
} else {
slp_rlat <- slp_rlat[1]
}
if (slp_rlon != (-slp_rlat)) {
stop("Parameters 'slp_lon' and 'slp_lat' must have the same resolution.")
} else {
slp_res <- slp_rlon
}
nlatt <- ((slp_lat[length(slp_lat)] - slp_lat[1]) / slp_rlat) + 1
nlont <- ((slp_lon[length(slp_lon)] - slp_lon[1]) / slp_rlon) + 1
id <- nlatt * nlont
slat <- max(lat)
slon <- min(c(lon[which(lon > 180)] - 360,
lon[which(lon <= 180)]))
slatt <- max(slp_lat)
slont <- min(c(slp_lon[which(slp_lon > 180)] - 360,
slp_lon[which(slp_lon <= 180)]))
ngridd <- ((2*nlatt)-1)*((2*nlont)-1)
if (all((sapply(pred,nrow))==nrow(pred[[1]]))){
nd <- nrow(pred[[1]])
} else {
stop("All 'pred' variables must be in the same period.")
}
  #! TODO: check that 'slp_ext' also has the same number of rows (time steps) as 'pred'
seqdates <- seq(as.Date(substr(tdates,start=1,stop=8),format="%Y%m%d"),
as.Date(substr(tdates,start=10,stop=18),format="%Y%m%d"),by="days")
month <- format(seqdates,format="%m")
day <- format(seqdates,format="%d")
#! TRAINING REANALYSIS VARIABLES
t1000 <- pred[['t1000']]
msl_si <- pred[['slp']]
msl_lr <- slp_ext
if (var == "prec") {
u500 <- pred[['u500']]
v500 <- pred[['v500']]
t500 <- pred[['t500']]
t850 <- pred[['t850']]
z500 <- pred[['z500']]
z1000 <- pred[['z1000']]
q700 <- pred[['q700']]
}
#! HIGH-RESOLUTION (HR) OBSERVATIONAL DATASET
maestro_hr_file <- paste(HR_path, "maestro_red_hr_SPAIN.txt",sep="")
if (!file.exists(maestro_hr_file)) {
stop("'maestro_red_hr_SPAIN.txt' does not exist.")
} else {
maestro <- read.table(maestro_hr_file)
lon_hr <- unlist(maestro[2])
lat_hr <- unlist(maestro[3])
nptos <- length(readLines(maestro_hr_file))
}
if (var == "prec") {
prec_hr_file <- paste(HR_path, "pcp_red_SPAIN_",tdates,".txt",sep="")
if (!file.exists(prec_hr_file)) {
stop(sprintf("precipitation HR file for %s does not exist.",tdates))
} else {
nd_hr <- length(readLines(prec_hr_file))
preprec_hr <- matrix(scan(prec_hr_file), nrow=nd_hr ,ncol= nptos+1, byrow=TRUE)
prec_hr <- preprec_hr[1:nd_hr,-c(1)]
}
} else {
tmx_hr_file <- paste(HR_path, "tmx_red_SPAIN_",tdates,".txt",sep="")
tmn_hr_file <- paste(HR_path, "tmn_red_SPAIN_",tdates,".txt",sep="")
if (!file.exists(tmx_hr_file)) {
stop(sprintf("maximum temperature HR file for %s does not exist.",tdates))
} else if (!file.exists(tmn_hr_file)) {
stop(sprintf("minimum temperature HR file for %s does not exist.",tdates))
} else if (length(readLines(tmx_hr_file)) != length(readLines(tmn_hr_file))) {
stop("maximum and minimum temperature HR observation files must have the same period.")
} else {
nd_hr <- length(readLines(tmx_hr_file))
pretmx_hr <- matrix(scan(tmx_hr_file), nrow=nd_hr ,ncol= nptos+1, byrow=TRUE)
tmx_hr <- pretmx_hr[1:nd_hr,-c(1)]
pretmn_hr <- matrix(scan(tmn_hr_file), nrow=nd_hr ,ncol= nptos+1, byrow=TRUE)
tmn_hr <- pretmn_hr[1:nd_hr,-c(1)]
}
}
if (nd_hr != nd) {
stop("Reanalysis variables and HR observations must have the same period.")
}
#! OTHER PARAMETERS that should not be changed
#! Number of analog situations to consider
nanx <- 155
#! Number of predictors
npx <- 11
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
if (var == "prec") {
prePro <- .Fortran("training_part1_prec",
u500 = as.numeric(u500),
v500 = as.numeric(v500),
t1000 = as.numeric(t1000),
z500 = as.numeric(z500),
z1000 = as.numeric(z1000),
msl_si = as.numeric(msl_si),
msl_lr = as.numeric(msl_lr),
ngridd = as.integer(ngridd),
nlat = as.integer(nlat),
nlon = as.integer(nlon),
ic = as.integer(ic),
nlatt = as.integer(nlatt),
nlont = as.integer(nlont),
id = as.integer(id),
slat = as.numeric(slat),
slon = as.numeric(slon),
rlat = as.numeric(rlat),
rlon = as.numeric(rlon),
slatt = as.numeric(slatt),
slont = as.numeric(slont),
nd = as.integer(nd),
um = matrix(as.double(seq(1,nd*ic)),c(nd,ic)),
vm = matrix(as.double(seq(1,nd*ic)),c(nd,ic)),
gu92 = matrix(as.double(seq(1,nd*ic)),c(nd,ic)),
gv92 = matrix(as.double(seq(1,nd*ic)),c(nd,ic)),
gu52 = matrix(as.double(seq(1,nd*ic)),c(nd,ic)),
gv52 = matrix(as.double(seq(1,nd*ic)),c(nd,ic)),
nger = as.integer(1),
PACKAGE = 'CSTools')
a <- prePro$um
b <- prePro$vm
c <- prePro$gu92[1:prePro$nger,]
d <- prePro$gv92[1:prePro$nger,]
e <- prePro$gu52[1:prePro$nger,]
f <- prePro$gv52[1:prePro$nger,]
g <- prePro$nger
predSig <- .Fortran("training_part2_prec",
u500 = as.numeric(u500),
v500 = as.numeric(v500),
t500 = as.numeric(t500),
t850 = as.numeric(t850),
msl_si = as.numeric(msl_si),
q700 = as.numeric(q700),
lon_hr = as.numeric(lon_hr),
lat_hr = as.numeric(lat_hr),
prec_hr = as.numeric(prec_hr),
nanx = as.integer(nanx),
nlat = as.integer(nlat),
nlon = as.integer(nlon),
ic = as.integer(ic),
nlatt = as.integer(nlatt),
nlont = as.integer(nlont),
id = as.integer(id),
slat = as.numeric(slat),
slon = as.numeric(slon),
rlat = as.numeric(rlat),
rlon = as.numeric(rlon),
slatt = as.numeric(slatt),
slont = as.numeric(slont),
nd = as.integer(nd),
um = as.double(a),
vm = as.double(b),
gu92 = as.double(c),
gv92 = as.double(d),
gu52 = as.double(e),
gv52 = as.double(f),
nger = as.integer(g),
vdmin = matrix(as.double(seq(1,nptos*4)),c(nptos,4)),
vref = matrix(as.integer(seq(1,nptos*4)),c(nptos,4)),
neni = as.integer(1),
mi = matrix(as.integer(seq(1,prePro$nger*nptos)),c(prePro$nger,nptos)),
ccm = matrix(as.double(seq(1,prePro$nger*nptos)),c(prePro$nger,nptos)),
lab_pred = matrix(as.integer(seq(1,prePro$nger*nptos*npx)),c(prePro$nger,nptos,npx)),
cor_pred = matrix(as.double(seq(1,prePro$nger*nptos*npx)),c(prePro$nger,nptos,npx)),
PACKAGE = 'CSTools')
h <- predSig$mi
i <- predSig$ccm
j <- predSig$lab_pred
k <- predSig$cor_pred
l <- predSig$vdmin
m <- predSig$vref
indices <- array(c(j,k),c(g,nptos,npx,2))
dimnames(indices)[[4]] <- c("lab_pred","cor_pred")
output <- list("um" = a,
"vm" = b,
"nger" = g,
"gu92" = c,
"gv92" = d,
"gu52" = e,
"gv52" = f,
"neni" = predSig$neni,
"vdmin" = l,
"vref" = m,
"ccm" = i,
"indices" = indices)
} else {
prePro <- .Fortran("training_temp",
t1000 = as.numeric(t1000),
msl_si = as.numeric(msl_si),
msl_lr = as.numeric(msl_lr),
lon_hr = as.numeric(lon_hr),
lat_hr = as.numeric(lat_hr),
ngridd = as.integer(ngridd),
nlat = as.integer(nlat),
nlon = as.integer(nlon),
ic = as.integer(ic),
nlatt = as.integer(nlatt),
nlont = as.integer(nlont),
id = as.integer(id),
slat = as.numeric(slat),
slon = as.numeric(slon),
rlat = as.numeric(rlat),
rlon = as.numeric(rlon),
slatt = as.numeric(slatt),
slont = as.numeric(slont),
nd = as.integer(nd),
day = as.integer(day),
month = as.integer(month),
um = matrix(as.double(seq(1,nd*ic)),c(nd,ic)),
vm = matrix(as.double(seq(1,nd*ic)),c(nd,ic)),
insol = vector(mode="double",length=nd),
vdmin = matrix(as.double(seq(1,nptos*4)),c(nptos,4)),
vref = matrix(as.integer(seq(1,nptos*4)),c(nptos,4)),
neni = as.integer(1),
PACKAGE = 'CSTools')
a <- prePro$um
b <- prePro$vm
c <- prePro$insol
d <- prePro$vdmin
e <- prePro$vref
f <- prePro$neni
output <- list("um" = a,
"vm" = b,
"insol" = c,
"vdmin" = d,
"vref" = e,
"neni" = f)
}
return(output)
}
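# A schematic (not run) sketch of the expected input structure for the
# temperature case ('var' = "temp"). All objects, sizes and paths below are
# hypothetical placeholders; the HR observational files must exist under
# 'HR_path' and 'lon'/'lat'/'slp_lon'/'slp_lat' must describe the real grids:
#   nd <- 5844   # number of training days
#   ic <- 221    # number of synoptic gridpoints
#   id <- 361    # number of extended-domain gridpoints
#   t1000 <- array(rnorm(nd * ic), dim = c(time = nd, gridpoint = ic))
#   slp <- array(rnorm(nd * ic), dim = c(time = nd, gridpoint = ic))
#   slp_ext <- array(rnorm(nd * id), dim = c(time = nd, gridpoint = id))
#   pred <- list(t1000 = t1000, slp = slp)
#   restrain <- training_analogs(pred, slp_ext, lon, lat, slp_lon, slp_lat,
#                                var = "temp", HR_path = "/path/to/HR/files/",
#                                tdates = "19810101-19961231")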
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/AnalogsPred_train.R
|
#' Computing the Best Index PDFs combining Index PDFs from two SFSs
#'
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'
#'@description This function implements the computation to obtain the index
#'Probability Density Functions (PDFs) (e.g. NAO index) obtained by combining
#'the index PDFs of two Seasonal Forecast Systems (SFSs), the Best Index
#'estimation (see Sanchez-Garcia, E. et al (2019),
#'\doi{10.5194/asr-16-165-2019} for more details about the
#'methodology applied to estimate the Best Index).
#'
#'@references Regionally improved seasonal forecast of precipitation through
#'Best estimation of winter NAO, Sanchez-Garcia, E. et al.,
#' Adv. Sci. Res., 16, 165-174, 2019, \doi{10.5194/asr-16-165-2019}
#'
#'@param index_obs Index (e.g. NAO index) array from an observational database
#' or reanalysis with at least a temporal dimension (by default 'time'),
#' which must be greater than 2.
#'@param index_hind1 Index (e.g. NAO index) array from a SFS (named SFS1)
#' with at least two dimensions (time, member) or (time, statistic).
#' The temporal dimension, by default 'time', must be greater than 2.
#' The dimension 'member' must be greater than 1. The dimension 'statistic'
#' must be equal to 2, for containing the two parameters of a normal
#' distribution (mean and sd) representing the ensemble of a SFS. It is not
#' possible to have the dimension 'member' and 'statistic'
#' at the same time.
#'@param index_hind2 Index (e.g. NAO index) array from a SFS (named SFS2)
#' with at least two dimensions (time, member) or (time, statistic).
#' The temporal dimension, by default 'time', must be greater than 2.
#' The dimension 'member' must be greater than 1.
#' The dimension 'statistic' must be equal to 2, for containing the two
#' parameters of a normal distribution (mean and sd) representing the ensemble
#' of a SFS. It is not possible to have the dimension 'member' and 'statistic'
#' together.
#'@param index_fcst1 (optional, default = NULL) Index (e.g. NAO index) array
#' from the forecast of SFS1 with at least two dimensions (time, member) or
#' (time, statistic). The temporal dimension, by default 'time', must be equal
#' to 1, the forecast year target. The dimension 'member' must be greater than
#' 1. The dimension 'statistic' must be equal to 2, for containing the two
#' parameters of a normal distribution (mean and sd) representing the ensemble
#' of a SFS. It is not possible to have the dimension 'member' and 'statistic'
#' together.
#'@param index_fcst2 (optional, default = NULL) Index (e.g. NAO index) array
#' from the forecast of SFS2 with at least two dimensions (time, member) or
#' (time, statistic). The temporal dimension, by default 'time', must be equal
#' to 1, the forecast year target. The dimension 'member' must be greater than
#' 1. The dimension 'statistic' must be equal to 2, for containing the two
#' parameters of a normal distribution (mean and sd) representing the ensemble
#' of a SFS. It is not possible to have the dimension 'member' and 'statistic'
#' together.
#'@param method_BC A character vector of maximum length 2 indicating the bias
#' correction methodology to be applied on each SFS. If it is 'none' or any of
#' its elements is 'none', the bias correction won't be applied. Available
#' methods developed are "ME" (a bias correction scheme based on the mean
#' error or bias between observation and predictions to correct the predicted
#' values), and "LMEV" (a bias correction scheme based on a linear model using
#' ensemble variance of index as predictor). (see Sanchez-Garcia, E. et al
#' (2019), \doi{10.5194/asr-16-165-2019} for more details).
#'@param time_dim_name A character string indicating the name of the temporal
#' dimension, by default 'time'.
#'@param na.rm Logical (default = FALSE). Should missing values be removed?
#'
#'@return BEI_PDFBest() returns an array with the parameters that characterize
#'the PDFs, with at least a temporal dimension, by default 'time', and dimension
#''statistic' equal to 2. The first statistic is the parameter 'mean' of the PDF
#'for the best estimation combining the two SFSs PDFs. The second statistic is
#'the parameter 'standard deviation' of the PDF for the best estimation
#'combining the two SFSs PDFs. If index_fcst1 and/or index_fcst2 are NULL, it
#'returns the values for the hindcast period. Otherwise, it returns the values
#'for a forecast year.
#'
#'@examples
#' # Example 1 for the BEI_PDFBest function
#' index_obs<- rnorm(10, sd = 3)
#' dim(index_obs) <- c(time = 5, season = 2)
#' index_hind1 <- rnorm(40, mean = 0.2, sd = 3)
#' dim(index_hind1) <- c(time = 5, member = 4, season = 2)
#' index_hind2 <- rnorm(60, mean = -0.5, sd = 4)
#' dim(index_hind2) <- c(time = 5, member = 6, season = 2)
#' index_fcst1 <- rnorm(16, mean = 0.2, sd = 3)
#' dim(index_fcst1) <- c(time = 1, member = 8, season = 2)
#' index_fcst2 <- rnorm(18, mean = -0.5, sd = 4)
#' dim(index_fcst2) <- c(time = 1, member = 9, season = 2)
#' method_BC <- 'ME'
#' res <- BEI_PDFBest(index_obs, index_hind1, index_hind2, index_fcst1,
#' index_fcst2, method_BC)
#' # Example 2 for the BEI_PDFBest function
#' index_obs<- rnorm(10, sd = 3)
#' dim(index_obs) <- c(time = 5, season = 2)
#' index_hind1 <- rnorm(40, mean = 0.2, sd = 3)
#' dim(index_hind1) <- c(time = 5, member = 4, season = 2)
#' index_hind2 <- rnorm(60, mean = -0.5, sd = 4)
#' dim(index_hind2) <- c(time = 5, member = 6, season = 2)
#' index_fcst1 <- rnorm(16, mean = 0.2, sd = 3)
#' dim(index_fcst1) <- c(time = 1, member = 8, season = 2)
#' index_fcst2 <- rnorm(18, mean = -0.5, sd = 4)
#' dim(index_fcst2) <- c(time = 1, member = 9, season = 2)
#' method_BC <- c('LMEV', 'ME')
#' res <- BEI_PDFBest(index_obs, index_hind1, index_hind2, index_fcst1,
#' index_fcst2, method_BC)
#'@import multiApply
#'@importFrom verification verify
#'@export
BEI_PDFBest <- function(index_obs, index_hind1, index_hind2, index_fcst1 = NULL,
index_fcst2 = NULL, method_BC = 'none',
time_dim_name = 'time', na.rm = FALSE) {
if (!is.logical(na.rm)) {
stop("Parameter 'na.rm' must be a logical value.")
}
if (!is.character(time_dim_name)) {
stop("Parameter 'time_dim_name' must be a character string ",
"indicating the name of the temporal dimension.")
}
if (length(time_dim_name) > 1) {
warning("Parameter 'time_dim_name' has length greater than 1 and ",
"only the first element will be used.")
time_dim_name <- time_dim_name[1]
}
if (!is.character(method_BC) || !is.vector(method_BC)){
stop("Parameter 'method_BC' must be a character vector.")
}
if (!(length(method_BC) == 1 || length(method_BC) == 2)) {
stop("Length of parameter 'method_BC' must be 1 or 2.")
}
if(!all(method_BC %in% c('ME', 'LMEV', 'none'))){
stop("Elements of parameter 'method_BC' must be equals to ",
"'none, 'ME' or 'LMEV'")
}
if (!is.array(index_obs)) {
stop("Parameter 'index_obs' must be an array.")
}
if (!is.array(index_hind1)) {
stop("Parameter 'index_hind1' must be an array.")
}
if (!is.array(index_hind2)) {
stop("Parameter 'index_hind2' must be an array.")
}
if (is.null(names(dim(index_hind1))) ||
is.null(names(dim(index_obs))) ||
is.null(names(dim(index_hind2)))) {
stop("Parameters 'index_obs', 'index_hind1' and 'index_hind2' ",
"should have dimmension names.")
}
if(!(time_dim_name %in% names(dim(index_obs)))) {
stop("Parameter 'index_obs' must have temporal dimension.")
}
if(!(time_dim_name %in% names(dim(index_hind1)))) {
stop("Parameter 'index_hind1' must have temporal dimension.")
}
if(!(time_dim_name %in% names(dim(index_hind2)))) {
stop("Parameter 'index_hind2' must have temporal dimension.")
}
if (dim(index_obs)[time_dim_name] <= 2) {
stop("Length of temporal dimension ",
"of parameter 'index_obs', 'index_hind1' and 'index_hind2' ",
"must be greater than 2.")
}
if (dim(index_obs)[time_dim_name] != dim(index_hind1)[time_dim_name]) {
stop("Length of temporal dimensions ",
"of parameter 'index_obs' and 'index_hind1' must be equals.")
}
if (dim(index_hind1)[time_dim_name] != dim(index_hind2)[time_dim_name]) {
stop("Length of temporal dimensions ",
"of parameter 'index_hind1' and 'index_hind2' must be equals.")
}
if('member' %in% names(dim(index_hind1)) &
'statistic' %in% names(dim(index_hind1))) {
stop("Parameter 'index_hind1' must have at least ",
"dimension 'member' or 'statistic', not 'member' and 'statistic' ",
"together.")
}
if('member' %in% names(dim(index_hind2)) &
'statistic' %in% names(dim(index_hind2))) {
stop("Parameter 'index_hind2' must have at least ",
"dimension 'member' or 'statistic', not 'member' and 'statistic' ",
"together.")
}
if(!('member' %in% names(dim(index_hind1))) &
!('statistic' %in% names(dim(index_hind1)))) {
stop("Parameter 'index_hind1' must have dimension ",
"'member' or 'statistic'")
}
if(!('member' %in% names(dim(index_hind2))) &
!('statistic' %in% names(dim(index_hind2)))) {
stop("Parameter 'index_hind2' must have dimension ",
"'member' or 'statistic'")
}
if ('member' %in% names(dim(index_hind1))){
if (dim(index_hind1)['member'] == 1) {
stop("Length of dimension 'member' ",
"of parameter 'index_hind1' must be greater than 1.")
}
}
if ('member' %in% names(dim(index_hind2))){
if (dim(index_hind2)['member'] == 1) {
stop("Length of dimension 'member' ",
"of parameter 'index_hind2' must be greater than 1.")
}
}
if ('statistic' %in% names(dim(index_hind1))){
if (dim(index_hind1)['statistic'] != 2) {
stop("Length of dimension 'statistic' ",
"of parameter 'index_hind1' must be equal to 2.")
}
}
if ('statistic' %in% names(dim(index_hind2))){
if (dim(index_hind2)['statistic'] != 2) {
stop("Length of dimension 'statistic' ",
"of parameter 'index_hind2' must be equal to 2.")
}
}
if (!is.null(index_fcst1)){
if (!is.array(index_fcst1)) {
stop("Parameter 'index_fcst1' must be an array.")
}
if (is.null(names(dim(index_fcst1)))){
stop("Parameters 'index_fcst1' should have dimmension names.")
}
if(!(time_dim_name %in% names(dim(index_fcst1)))) {
stop("Parameter 'index_fcst1' must have temporal dimension.")
}
if (dim(index_fcst1)[time_dim_name] != 1) {
stop("Length of temporal dimension ",
"of parameter 'index_fcst1' must be equal to 1.")
}
if((length(names(dim(index_hind1))) != length(names(dim(index_fcst1))))
|| (!all(names(dim(index_hind1)) == names(dim(index_fcst1))))){
stop("Dimension names of parameter 'index_hind1' and 'index_fcst1' ",
"must be equals.")
}
if ('member' %in% names(dim(index_fcst1))){
if (dim(index_fcst1)['member'] == 1) {
stop("Length of dimension 'member' ",
"of parameter 'index_fcst1' must be greater than 1.")
}
}
if ('statistic' %in% names(dim(index_fcst1))){
if (dim(index_fcst1)['statistic'] != 2) {
stop("Length of dimension 'statistic' ",
"of parameter 'index_fcst1' must be equal to 2.")
}
}
}
if (!is.null(index_fcst2)){
if (!is.array(index_fcst2)) {
stop("Parameter 'index_fcst2' must be an array.")
}
if (is.null(names(dim(index_fcst2)))){
stop("Parameters 'index_fcst2' should have dimmension names.")
}
if(!(time_dim_name %in% names(dim(index_fcst2)))) {
stop("Parameter 'index_fcst2' must have temporal dimension.")
}
if (dim(index_fcst2)[time_dim_name] != 1) {
stop("Length of temporal dimension ",
"of parameter 'index_fcst2' must be equal to 1.")
}
if((length(names(dim(index_hind2))) != length(names(dim(index_fcst2))))
|| (!all(names(dim(index_hind2)) == names(dim(index_fcst2))))){
stop("Dimension names of parameter 'index_hind2' and 'index_fcst2' ",
"must be equals.")
}
if ('member' %in% names(dim(index_fcst2))){
if (dim(index_fcst2)['member'] == 1) {
stop("Length of dimension 'member' ",
"of parameter 'index_fcst2' must be greater than 1.")
}
}
    if ('statistic' %in% names(dim(index_fcst2))){
      if (dim(index_fcst2)['statistic'] != 2) {
        stop("Length of dimension 'statistic' ",
             "of parameter 'index_fcst2' must be equal to 2.")
}
}
}
if (all(method_BC == 'none')){
method_BC1 <- 'ME'
method_BC2 <- 'ME'
bc_dataset1 <- FALSE
bc_dataset2 <- FALSE
} else if (length(method_BC) == 1){
method_BC1 <- method_BC
method_BC2 <- method_BC
bc_dataset1 <- TRUE
bc_dataset2 <- TRUE
} else {
if(method_BC[1] == 'none'){
method_BC1 <- 'ME'
bc_dataset1 <- FALSE
} else {
method_BC1 <- method_BC[1]
bc_dataset1 <- TRUE
}
if(method_BC[2] == 'none'){
method_BC2 <- 'ME'
bc_dataset2 <- FALSE
} else {
method_BC2 <- method_BC[2]
bc_dataset2 <- TRUE
}
}
pdf_hind1 <- PDFIndexHind(index_hind1, index_obs, method = method_BC1,
time_dim_name = time_dim_name, na.rm = na.rm)
pdf_hind2 <- PDFIndexHind(index_hind2, index_obs, method = method_BC2,
time_dim_name = time_dim_name, na.rm = na.rm)
if (is.null(index_fcst1) || is.null(index_fcst2)){
pdf_best <- Apply(list(pdf_hind1, pdf_hind2),
target_dims = list('statistic',
'statistic'),
fun = .BEI_PDFBest,
bc_dataset1, bc_dataset2)
} else {
pdf_fcst1 <- PDFIndexFcst(index_hind1, index_obs, index_fcst1,
method_BC1, time_dim_name = time_dim_name,
na.rm = na.rm)
pdf_fcst2 <- PDFIndexFcst(index_hind2, index_obs, index_fcst2,
method_BC2, time_dim_name = time_dim_name,
na.rm = na.rm)
pdf_best <- Apply(list(pdf_fcst1, pdf_fcst2),
target_dims = list('statistic',
'statistic'),
fun = .BEI_PDFBest,
bc_dataset1, bc_dataset2)
}
Dim <- names(dim(index_hind1))
if('member' %in% Dim){
pos_aux <- match('member', Dim)
Dim[pos_aux] <- 'statistic'
}
pos <- match(Dim, names(dim(pdf_best$output1)))
res <- aperm(pdf_best$output1,pos)
names(dim(res)) <- Dim
return(res)
}
#' Atomic BEI_PDFBest
#'@param pdf_1 Statistics array for the first SFS PDF with one dimension
#' 'statistic' equal to 4.
#'@param pdf_2 Statistics array for the second SFS PDF with one dimension
#' 'statistic' equal to 4.
#'@param bc_dataset1 Logical (default = TRUE).
#' If TRUE, the Index PDFs for the first SFS have been computed with bias
#' correction.
#'@param bc_dataset2 Logical (default = TRUE). If TRUE, the Index PDFs for the
#' second SFS have been computed with bias correction.
#'
#'@return .BEI_PDFBest returns an array with dimensions (statistic = 2).
#'The first statistic is the parameter 'mean' of the PDF for the best estimation
#'combining the two SFSs PDF. The second statistic is the parameter 'standard
#'deviation' of the PDF for the best estimation combining the two SFSs PDF.
#'
#'@examples
#' # Example for the Atomic BEI_PDFBest function
#' pdf_1 <- c(1.1,0.6,1.6,0.9)
#' dim(pdf_1) <- c(statistic = 4)
#' pdf_2 <- c(1,0.5,1.5,0.8)
#' dim(pdf_2) <- c(statistic = 4)
#' res <- .BEI_PDFBest(pdf_1, pdf_2, bc_dataset1 = TRUE, bc_dataset2 = FALSE)
#'@noRd
.BEI_PDFBest <- function(pdf_1, pdf_2, bc_dataset1 = TRUE, bc_dataset2 = TRUE) {
if(bc_dataset1){
# apply bias correction to model 1
mean_1 <- pdf_1[3]
sigma_1 <- pdf_1[4]
} else{
# not bias correction to model 1
mean_1 <- pdf_1[1]
sigma_1 <- pdf_1[2]
}
if(bc_dataset2){
# apply bias correction to model 2
mean_2 <- pdf_2[3]
sigma_2 <- pdf_2[4]
} else {
# not bias correction to model 2
mean_2 <- pdf_2[1]
sigma_2 <- pdf_2[2]
}
a_1 <- (sigma_2^2)/((sigma_1^2)+(sigma_2^2))
a_2 <- (sigma_1^2)/((sigma_1^2)+(sigma_2^2))
pdf_mean <- a_1*mean_1 + a_2*mean_2
pdf_sigma <- sqrt((sigma_1^2)*(sigma_2^2)/((sigma_1^2)+(sigma_2^2)))
data <- c(pdf_mean, pdf_sigma)
dim(data) <- c(statistic = 2)
return(data)
}
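# Worked numeric check (not run) of the precision-weighted combination
# implemented above, with illustrative values and both PDFs bias corrected:
# the weights are a_1 = sigma_2^2 / (sigma_1^2 + sigma_2^2) and
# a_2 = sigma_1^2 / (sigma_1^2 + sigma_2^2), so the combined mean leans towards
# the sharper (smaller-variance) PDF and the combined sd is smaller than either
# input sd:
#   pdf_1 <- c(1.1, 0.6, 1.6, 0.9); dim(pdf_1) <- c(statistic = 4)
#   pdf_2 <- c(1.0, 0.5, 1.5, 0.8); dim(pdf_2) <- c(statistic = 4)
#   .BEI_PDFBest(pdf_1, pdf_2, bc_dataset1 = TRUE, bc_dataset2 = TRUE)
#   # combined mean: (0.8^2 * 1.6 + 0.9^2 * 1.5) / (0.9^2 + 0.8^2) ~ 1.544
#   # combined sd:   sqrt(0.9^2 * 0.8^2 / (0.9^2 + 0.8^2))          ~ 0.598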
#'Computing the Index PDFs for a dataset of SFSs for a hindcast period.
#'
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'
#'@description This function implements the computation to obtain the index PDFs
#' (e.g. NAO index) to improve the index estimate from SFSs for a hindcast period.
#'
#'@references Regionally improved seasonal forecast of precipitation through Best
#'estimation of winter NAO, Sanchez-Garcia, E. et al.,
#'Adv. Sci. Res., 16, 165-174, 2019, \doi{10.5194/asr-16-165-2019}
#'
#'@param index_hind Index (e.g. NAO index) array from SFSs
#' with at least two dimensions (time , member) or (time, statistic).
#' The temporal dimension, by default 'time', must be greater than 2.
#' The dimension 'member' must be greater than 1.
#' The dimension 'statistic' must be equal to 2, for containing the two
#' parameters of a normal distribution (mean and sd) representing the ensemble
#' of a SFS. It is not possible to have the dimension 'member' and 'statistic'
#' together.
#'@param index_obs Index (e.g. NAO index) array from an observational database
#' or reanalysis with at least a temporal dimension (by default 'time'),
#' which must be greater than 2.
#'@param method A character string indicating which methodology is applied
#' to compute the PDFs. One of "ME" (default) or "LMEV".
#'@param time_dim_name A character string indicating the name of the temporal
#' dimension, by default 'time'.
#'@param na.rm Logical (default = FALSE). Should missing values be removed?
#'
#'@return An array with at least two dimensions (time, statistic = 4). The first
#'statistic is the parameter 'mean' of the PDF without bias correction.
#'The second statistic is the parameter 'standard deviation' of the PDF without
#'bias correction. The third statistic is the parameter 'mean' of the PDF with
#'bias correction. The fourth statistic is the parameter 'standard deviation' of
#'the PDF with bias correction.
#'@import multiApply
#'@examples
#' # Example for the PDFIndexHind function
#' # Example 1
#' index_obs <- 1 : (5 * 3 )
#' dim(index_obs) <- c(time = 5, season = 3)
#' index_hind <- 1 : (5 * 4 * 3)
#' dim(index_hind) <- c(time = 5, member = 4, season = 3)
#' res <- PDFIndexHind(index_hind, index_obs)
#' dim(res)
#' # time statistic season
#' # 5 4 3
#' # Example 2
#' index_obs <- 1 : (5 * 3)
#' dim(index_obs) <- c(time = 5, season = 3)
#' index_hind <- 1 : (5 * 2 * 3)
#' dim(index_hind) <- c(time = 5, statistic = 2, season = 3)
#' res <- PDFIndexHind(index_hind, index_obs)
#'@import multiApply
#'@importFrom verification verify
#'@export
PDFIndexHind <- function(index_hind, index_obs, method ='ME',
time_dim_name = 'time', na.rm = FALSE) {
if (!is.character(time_dim_name)) {
stop("Parameter 'time_dim_name' must be a character string indicating",
" the name of the temporal dimension.")
}
if (length(time_dim_name) > 1) {
warning("Parameter 'time_dim_name' has length greater than 1 and ",
"only the first element will be used.")
time_dim_name <- time_dim_name[1]
}
if (!(method %in% c('ME', 'LMEV'))){
stop("Parameter 'method' must be equal to 'ME' or 'LMEV'")
}
if (!is.logical(na.rm)) {
stop("Parameter 'na.rm' must be a logical value.")
}
if (!is.array(index_hind)) {
stop("Parameter 'index_hind' must be an array.")
}
if (!is.array(index_obs)) {
stop("Parameter 'index_obs' must be an array.")
}
if (is.null(names(dim(index_hind))) || is.null(names(dim(index_obs)))) {
stop("Parameters 'index_hind' and 'index_obs'",
" should have dimmension names.")
}
if(!(time_dim_name %in% names(dim(index_obs)))) {
stop("Parameter 'index_obs' must have temporal dimension.")
}
if(!(time_dim_name %in% names(dim(index_hind)))) {
stop("Parameter 'index_hind' must have temporal dimension.")
}
if (dim(index_obs)[time_dim_name] != dim(index_hind)[time_dim_name]) {
stop("Length of the temporal dimensions ",
"of the parameter 'index_obs' and 'index_hind' must be equals.")
}
if (dim(index_obs)[time_dim_name] <= 2) {
stop("Length of temporal dimension ",
"of parameter 'index_obs' and 'index_hind' must be greater than 2.")
}
if('member' %in% names(dim(index_hind)) &
'statistic' %in% names(dim(index_hind))) {
stop("Parameter 'index_hind' must have at least dimension ",
"'member' or 'statistic', not 'member' and 'statistic' together.")
}
if(!('member' %in% names(dim(index_hind))) &
!('statistic' %in% names(dim(index_hind)))) {
stop("Parameter 'index_hind' must have dimension ",
"'member' or 'statistic'")
}
if ('member' %in% names(dim(index_hind))){
if (dim(index_hind)['member'] == 1) {
stop("Length of dimension 'member' ",
"of parameter 'index_hind' must be greater than 1.")
}
}
if ('statistic' %in% names(dim(index_hind))){
if (dim(index_hind)['statistic'] != 2) {
stop("Length of dimension 'statistic' ",
"of parameter 'index_hind' must be equal to 2.")
}
}
if ('member' %in% names(dim(index_hind))) {
PDFstatistics <- Apply(list(index_hind, index_obs),
target_dims = list(c(time_dim_name, 'member'), time_dim_name),
fun = .PDFIndexHind, method, time_dim_name, na.rm)
} else if ('statistic' %in% names(dim(index_hind))){
PDFstatistics <- Apply(list(index_hind, index_obs),
target_dims = list(c(time_dim_name, 'statistic'), time_dim_name),
fun = .PDFIndexHind, method, time_dim_name, na.rm)
}
return(PDFstatistics$output1)
}
#' Atomic PDFIndexHind
#'@param index_hind Index (e.g. NAO index) array from a SFS with dimensions
#' (time, member) or (time, statistic) for a hindcast period.
#' The temporal dimension, by default 'time', must be greater than 2.
#'@param index_obs Index (e.g. NAO index) array from an observational dataset
#' or reanalysis with dimension (time). The temporal dimension,
#' by default 'time', must be greater than 2.
#'@param method A character string indicating which methodology is applied
#' to compute the PDF. One of "ME" (default) or "LMEV".
#'@param time_dim_name A character string indicating the name of the temporal
#' dimension, by default 'time'.
#'@param na.rm Logical. Should missing values be removed?
#'@return .PDFIndexHind returns an array with dimensions (time, statistic = 4).
#'The first statistic is the parameter 'mean' of the PDF without bias correction
#'for the hindcast period. The second statistic is the parameter 'standard
#'deviation' of the PDF without bias correction for the hindcast period.
#'The third statistic is the parameter 'mean' of the PDF with bias correction
#'for the hindcast period. The fourth statistic is the parameter 'standard
#'deviation' of the PDF with bias correction for the hindcast period.
#'@examples
#' # Example for the Atomic PDFIndexHind function
#' index_obs <- 1 : 10
#' dim(index_obs) <- c(time = length(index_obs))
#' index_hind <- 1 : (10 * 3)
#' dim(index_hind) <- c(time = 10, member = 3)
#' res <- .PDFIndexHind(index_hind, index_obs)
#'@import multiApply
#'@importFrom verification verify
#'@noRd
.PDFIndexHind <- function(index_hind, index_obs, method = 'ME',
time_dim_name = 'time', na.rm = FALSE) {
dimnameshind <- names(dim(index_hind))
if ('member' %in% dimnameshind) {
pos <- match(c(time_dim_name, 'member'), names(dim(index_hind)))
index_hind <- aperm(index_hind, pos)
names(dim(index_hind)) <- c(time_dim_name, 'member')
hind_ens_mean <- Apply(list(index_hind), target_dims = 'member',
fun = mean, na.rm = na.rm)$output1
hind_ens_sd <- Apply(list(index_hind), target_dims = 'member',
fun = sd, na.rm = na.rm)$output1
} else if ('statistic' %in% dimnameshind){
pos <- match(c(time_dim_name, 'statistic'), names(dim(index_hind)))
index_hind <- aperm(index_hind,pos)
names(dim(index_hind)) <- c(time_dim_name, 'statistic')
hind_ens_mean <- index_hind[,1]
hind_ens_sd <- index_hind[,2]
} else {
stop("Wrong dimension of parameter 'index_hind'.")
}
error <- hind_ens_mean - index_obs
pdfnotbc_mean <- hind_ens_mean
ind <- 1 : length(index_obs)
pdfnotbc_sd <- unlist(lapply(ind, function(x) {Sigma_notBC(hind_ens_mean[-x],
index_obs[-x],
na.rm)}))
if (method == 'ME') {
pdfbc <- unlist(lapply(ind, function(x) {MEBC(hind_ens_mean[-x],
index_obs[-x], na.rm)}))
pdfbc_mean <- hind_ens_mean - pdfbc
pdfbc_sd <- unlist(lapply(ind, function(x) {Sigma_BC(pdfbc_mean[-x],
index_obs[-x],
na.rm)}))
} else if (method == 'LMEV') {
# linear model error-variance
hind_ens_var <- hind_ens_sd^2
dim_hind <- names(dim(index_hind))
dim_obs <- names(dim(index_obs))
    coeff <- sapply(ind, function(x) {LMEV(index_hind[-x, ], index_obs[-x],
                                           dim_hind, dim_obs, time_dim_name,
                                           na.rm)})
pdfbc_mean <- hind_ens_mean - (unlist(coeff['coeff1',]) +
unlist(coeff['coeff2',])*hind_ens_var)
pdfbc_sd <- unlist(lapply(ind, function(x) {Sigma_BC(pdfbc_mean[-x],
index_obs[-x],
na.rm)}))
} else {
stop("Error: Parameter 'method' is wrong")
}
data <- cbind(pdfnotbc_mean, pdfnotbc_sd, pdfbc_mean, pdfbc_sd)
names(dim(data)) <- c(names(dim(index_obs)), 'statistic')
return(data)
}
#' Computing the Index PDFs for a dataset of SFSs for a forecast year.
#'
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'
#'@description This function implements the computation to obtain the index PDFs
#' (e.g. NAO index) to improve the index estimate from SFSs for a forecast year.
#'
#'@references Regionally improved seasonal forecast of precipitation through Best
#' estimation of winter NAO, Sanchez-Garcia, E. et al.,
#' Adv. Sci. Res., 16, 165-174, 2019, \doi{10.5194/asr-16-165-2019}
#'
#'@param index_hind Index (e.g. NAO index) array from SFSs
#' with at least two dimensions (time, member) or (time, statistic).
#' The temporal dimension, by default 'time', must be greater than 2.
#' The dimension 'member' must be greater than 1.
#' The dimension 'statistic' must be equal to 2, for containing the two
#' parameters of a normal distribution (mean and sd) representing the ensemble
#' of a SFS. It is not possible to have the dimension 'member' and 'statistic'
#' together.
#'@param index_obs Index (e.g. NAO index) array from an observational database
#' or reanalysis with at least a temporal dimension (by default 'time'),
#' which must be greater than 2.
#'@param index_fcst Index (e.g. NAO index) array from SFSs with at least two
#' dimensions (time, member) or (time, statistic). The temporal dimension, by
#' default 'time', must be equal to 1, the forecast year target. The dimension
#' 'member' must be greater than 1. The dimension 'statistic' must be equal to
#' 2, for containing the two parameters of a normal distribution (mean and sd)
#' representing the ensemble of a SFS. It is not possible to have the dimension
#' 'member' and 'statistic' together.
#'@param method A character string indicating which methodology is applied
#' to compute the PDFs. One of "ME" (default) or "LMEV".
#'@param time_dim_name A character string indicating the name of the temporal
#' dimension, by default 'time'.
#'@param na.rm Logical (default = FALSE). Should missing values be removed?
#'
#'@return An array with at least two dimensions (time = 1, statistic = 4).
#'The first statistic is the parameter 'mean' of the PDF without bias correction.
#'The second statistic is the parameter 'standard deviation' of the PDF without
#'bias correction. The third statistic is the parameter 'mean' of the PDF with
#'bias correction. The fourth statistic is the parameter 'standard deviation' of
#'the PDF with bias correction.
#'@import multiApply
#'@examples
#' # Example for the PDFIndexFcst function
#' index_fcst <- 1 : (8 * 4)
#' dim(index_fcst) <- c(time = 1, member = 8, season = 4)
#' index_obs<- 1 : (5 * 4)
#' dim(index_obs) <- c(time = 5, season = 4)
#' index_hind <- 1 : (5 * 4 * 4)
#' dim(index_hind) <- c(time = 5, member = 4, season = 4)
#' res <- PDFIndexFcst(index_hind, index_obs, index_fcst)
#' dim(res)
#' # time statistic season
#' # 1 4 4
#'
#'@noRd
PDFIndexFcst <- function(index_hind, index_obs, index_fcst,
method ='ME', time_dim_name = 'time',
na.rm = FALSE) {
if (!is.character(time_dim_name)) {
stop("Parameter 'time_dim_name' must be a character string indicating",
" the name of the temporal dimension.")
}
if (length(time_dim_name) > 1) {
warning("Parameter 'time_dim_name' has length greater than 1 and ",
"only the first element will be used.")
time_dim_name <- time_dim_name[1]
}
if(!(method %in% c('ME', 'LMEV'))){
stop("Parameter 'method' must be equal to 'ME' or 'LMEV'")
}
if (!is.logical(na.rm)) {
stop("Parameter 'na.rm' must be a logical value.")
}
if (!is.array(index_hind)) {
stop("Parameter 'index_hind' must be an array.")
}
if (!is.array(index_obs)) {
stop("Parameter 'index_obs' must be an array.")
}
if (!is.array(index_fcst)) {
stop("Parameter 'index_fcst' must be an array.")
}
if (is.null(names(dim(index_hind))) ||
is.null(names(dim(index_obs))) ||
is.null(names(dim(index_fcst)))) {
stop("Parameters 'index_hind', 'index_obs' and 'index_fcst' ",
"should have dimmension names.")
}
if(!(time_dim_name %in% names(dim(index_hind)))) {
stop("Parameter 'index_hind' must have temporal dimension.")
}
if(!(time_dim_name %in% names(dim(index_obs)))) {
stop("Parameter 'index_obs' must have temporal dimension.")
}
if(!(time_dim_name %in% names(dim(index_fcst)))) {
stop("Parameter 'index_fcst' must have temporal dimension.")
}
if (dim(index_obs)[time_dim_name] != dim(index_hind)[time_dim_name]) {
stop("Length of temporal dimensions ",
"of parameter 'index_obs' and 'index_hind' must be equals.")
}
if (dim(index_obs)[time_dim_name] <= 2) {
stop("Length of temporal dimension ",
"of parameter 'index_obs' and 'index_hind' must be greater than 2.")
}
if (dim(index_fcst)[time_dim_name] != 1) {
stop("Length of temporal dimension ",
"of parameter 'index_fcst' must be equal to 1.")
}
if((length(names(dim(index_hind))) != length(names(dim(index_fcst))))
|| (!all(names(dim(index_hind)) == names(dim(index_fcst))))){
stop("Dimension names of parameter 'index_hind' and 'index_fcst' ",
"must be equals.")
}
if('member' %in% names(dim(index_hind)) &
'statistic' %in% names(dim(index_hind))) {
stop("Parameter 'index_hind' and 'index_fcst' must have at least ",
"dimension 'member' or 'statistic', not 'member' and 'statistic' ",
"together.")
}
if(!('member' %in% names(dim(index_hind))) &
!('statistic' %in% names(dim(index_hind)))) {
stop("Parameter 'index_hind' and 'index_fcst' must have dimension ",
"'member' or 'statistic'")
}
if ('member' %in% names(dim(index_hind))){
if (dim(index_hind)['member'] == 1) {
stop("Length of dimension 'member' ",
"of parameter 'index_hind' must be greater than 1.")
}
}
if ('member' %in% names(dim(index_fcst))){
if (dim(index_fcst)['member'] == 1) {
stop("Length of dimension 'member' ",
"of parameter 'index_fcst' must be greater than 1.")
}
}
if ('statistic' %in% names(dim(index_hind))){
if (dim(index_hind)['statistic'] != 2) {
stop("Length of dimension 'statistic' ",
"of parameter 'index_hind' must be equal to 2.")
}
}
if ('statistic' %in% names(dim(index_fcst))){
if (dim(index_fcst)['statistic'] != 2) {
stop("Length of dimension 'statistic' ",
"of parameter 'index_fcst' must be equal to 2.")
}
}
if ('member' %in% names(dim(index_hind))){
PDFstatistics <- Apply(list(index_hind, index_obs, index_fcst),
target_dims = list(c(time_dim_name, 'member'),
time_dim_name,
c(time_dim_name,'member')),
fun = .PDFIndexFcst, method, time_dim_name, na.rm)
} else if ('statistic' %in% names(dim(index_hind))){
PDFstatistics <- Apply(list(index_hind, index_obs, index_fcst),
target_dims = list(c(time_dim_name, 'statistic'),
time_dim_name,
c(time_dim_name,'statistic')),
fun = .PDFIndexFcst, method, time_dim_name, na.rm)
}
return(PDFstatistics$output1)
}
#'Atomic PDFIndexFcst
#'@param index_hind Index (e.g. NAO index) array from a SFS with dimensions
#' (time, member) or (time, statistic) for a hindcast period. The temporal
#' dimension, by default 'time', must be greater than 2.
#'@param index_obs Index (e.g. NAO index) array from an observational dataset
#' or reanalysis with dimension (time). The temporal dimension, by default
#' 'time', must be greater than 2.
#'@param index_fcst Index (e.g. NAO index) array from SFSs with dimensions
#' (time, member) or (time, statistic). The temporal dimension, by default
#' 'time', must be equal to 1, the forecast year target. The dimension 'member'
#' must be greater than 1. The dimension 'statistic' must be equal to 2, for
#' containing the two parameters of a normal distribution (mean and sd)
#' representing the ensemble of a SFS. It is not possible to have the dimension
#' 'member' and 'statistic' together.
#'@param method A character string indicating which methodology is applied
#' to compute the PDF. One of "ME" (default) or "LMEV".
#'@param time_dim_name A character string indicating the name of the temporal
#' dimension, by default 'time'.
#'@param na.rm Logical. Should missing values be removed?
#'@return .PDFIndexFcst returns an array with dimensions
#'(time = 1, statistic = 4). The first statistic is the parameter 'mean' of the
#'PDF without bias correction for the forecast year. The second statistic is the
#'parameter 'standard deviation' of the PDF without bias correction for the
#'forecast year. The third statistic is the parameter 'mean' of the PDF with
#'bias correction for the forecast year. The fourth statistic is the parameter
#''standard deviation' of the PDF with bias correction for the forecast year.
#'@import multiApply
#'@importFrom verification verify
#'@examples
#' # Example 1 for the Atomic PDFIndexFcst function
#' index_fcst <- 1 : (1 * 6)
#' dim(index_fcst) <- c(time = 1, member = 6)
#' index_obs <- 1 : 10
#' dim(index_obs) <- c(time = length(index_obs))
#' index_hind <- 1 : (10 * 3)
#' dim(index_hind) <- c(time = 10, member = 3)
#' res <- .PDFIndexFcst(index_hind, index_obs, index_fcst)
#' dim(res)
#' # time statistic
#' # 1 4
#' # Example 2 for the Atomic PDFIndexFcst function
#' index_fcst <- 1 : (1 * 2)
#' dim(index_fcst) <- c(time = 1, statistic = 2)
#' index_obs <- 1 : 10
#' dim(index_obs) <- c(time = 10)
#' index_hind <- 1 : (10 * 2)
#' dim(index_hind) <- c(time = 10, statistic = 2)
#' res <- .PDFIndexFcst(index_hind, index_obs, index_fcst)
#' dim(res)
#' # time statistic
#' # 1 4
#' @noRd
.PDFIndexFcst <- function(index_hind, index_obs, index_fcst,
method = 'ME',
time_dim_name = 'time',
na.rm = FALSE) {
dimnameshind <- names(dim(index_hind))
if ('member' %in% dimnameshind) {
pos <- match(c(time_dim_name, 'member'), names(dim(index_hind)))
index_hind <- aperm(index_hind, pos)
names(dim(index_hind)) <- c(time_dim_name, 'member')
    exp_fcst_ens_mean <- Apply(list(index_fcst), target_dims = 'member',
                               fun = mean, na.rm = na.rm)$output1
    exp_fcst_ens_sd <- Apply(list(index_fcst), target_dims = 'member',
                             fun = sd, na.rm = na.rm)$output1
} else {
pos <- match(c(time_dim_name,'statistic'), names(dim(index_fcst)))
index_fcst <- aperm(index_fcst,pos)
names(dim(index_fcst)) <- c(time_dim_name, 'statistic')
exp_fcst_ens_mean <- index_fcst[,1]
exp_fcst_ens_sd <- index_fcst[,2]
}
data_hind <- .PDFIndexHind(index_hind, index_obs, method, time_dim_name, na.rm)
exp_hind_ens_mean <- data_hind[,1]
pdfnotbc_mean <- exp_fcst_ens_mean
pdfnotbc_sd <- Sigma_notBC(exp_hind_ens_mean, index_obs, na.rm)
if (method == 'ME') {
pdfbc <- MEBC(exp_hind_ens_mean, index_obs, na.rm)
pdfbc_mean <- exp_fcst_ens_mean - pdfbc
} else if (method == 'LMEV') {
# linear model error-variance
exp_fcst_ens_var <- exp_fcst_ens_sd^2
dim_hind <- names(dim(index_hind))
dim_obs <- names(dim(index_obs))
coeff <- LMEV(index_hind, index_obs, dim_hind, dim_obs, time_dim_name, na.rm)
pdfbc_mean <- exp_fcst_ens_mean - (coeff$coeff1 + coeff$coeff2*exp_fcst_ens_var)
} else {
stop("Parameter 'method' must be 'ME' or 'LMEV'.")
}
pdfbc_sd <- Sigma_BC(data_hind[,3], index_obs, na.rm)
data <- cbind(pdfnotbc_mean, pdfnotbc_sd, pdfbc_mean, pdfbc_sd)
names(dim(data)) <- c(names(dim(index_obs)), 'statistic')
return(data)
}
# Auxiliary function to compute the root mean squared error between 'exp' and 'obs'
Sigma_notBC <- function(exp, obs, na.rm = FALSE) {
if (na.rm){
indNA <- c(which(is.na(exp)),which(is.na(obs)))
if (length(indNA) > 0){
exp <- exp[-indNA]
obs <- obs[-indNA]
}
}
return(sqrt(verify(obs, exp, frcst.type = "cont",
obs.type = "cont")$MSE))
}
# Auxiliary function to compute the bias between 'exp' and 'obs'
MEBC <- function(exp, obs, na.rm = FALSE) {
if (na.rm){
indNA <- c(which(is.na(exp)),which(is.na(obs)))
if (length(indNA) > 0){
exp <- exp[-indNA]
obs <- obs[-indNA]
}
}
return(verify(obs, exp, frcst.type = "cont",
obs.type = "cont")$ME)
}
# Auxiliary function to compute the standard deviation of errors
# between 'obs' and 'exp'
Sigma_BC <- function(exp, obs, na.rm = FALSE) {
return(sd(obs - exp, na.rm = na.rm))
}
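# Minimal illustrative sketch (hypothetical values) of the three helpers above:
#   exp <- c(1.1, 2.2, 2.8); obs <- c(1, 2, 3)
#   Sigma_notBC(exp, obs)  # RMSE: square root of the MSE from verification::verify
#   MEBC(exp, obs)         # bias (mean error) of 'exp' with respect to 'obs'
#   Sigma_BC(exp, obs)     # spread of the errors: sd(obs - exp)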
# Auxiliary function to compute the linear model used in the 'LMEV' method
# to correct the bias in the Best NAO weighting methodology, based on a
# linear model using the ensemble variance of the NAO index as predictor.
LMEV <- function(exp, obs, dim_exp,
dim_obs, time_dim_name = 'time', na.rm = FALSE) {
names(dim(exp)) <- dim_exp
names(dim(obs)) <- dim_obs
if (any(names(dim(exp)) == 'member')) {
exp_ens_mean <- Apply(list(exp), target_dims = 'member',
fun = mean, na.rm = na.rm)$output1
exp_ens_sd <- Apply(list(exp), target_dims = 'member',
fun = sd, na.rm = na.rm)$output1
} else {
    if (na.rm & any(is.na(exp))) {
      print("Some values are NA")
    }
pos <- match(c(time_dim_name,'statistic'), names(dim(exp)))
exp <- aperm(exp,pos)
exp_ens_mean <- exp[,1]
exp_ens_sd <- exp[,2]
}
error <- exp_ens_mean - obs
exp_ens_var <- exp_ens_sd^2
statModel <- lm(error ~ exp_ens_var)
return(list(coeff1 = statModel$coefficients[1], coeff2 = statModel$coefficients[2]))
}
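# Minimal sketch (hypothetical values) of how the LMEV coefficients are used in
# .PDFIndexFcst above: the forecast ensemble mean is corrected with a linear
# model of the hindcast error on the ensemble variance, i.e.
#   pdfbc_mean <- exp_fcst_ens_mean - (coeff$coeff1 + coeff$coeff2 * exp_fcst_ens_var)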
# Source file: CSTools/R/BEI_PDFBest.R
#'Computing the weights for SFSs using the Best Index PDFs.
#'
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'
#'@description This function implements the computation to obtain the
#'normalized weights for each member of each Seasonal Forecast Systems (SFS)
#'or dataset using the Probability Density Functions (PDFs) indicated by the
#'parameter 'pdf_weight' (for instance the Best Index estimation obtained
#'using the 'PDFBest' function). The weight of each member is proportional to
#'the probability of its index calculated with the PDF "pdf_weight".
#'
#'@references Regionally improved seasonal forecast of precipitation through
#'Best estimation of winter NAO, Sanchez-Garcia, E. et al.,
#'Adv. Sci. Res., 16, 165-174, 2019, \doi{10.5194/asr-16-165-2019}
#'
#'@param index_weight Index (e.g. NAO index) array, from a dataset of SFSs
#' for a period of years, with at least dimensions 'member'.
#' Additional dimensions, for instance, a temporal dimension as 'time',
#' must have the same length in both parameters, 'index_weight' and
#' 'pdf_weight'.
#'@param pdf_weight Statistics array to define a Gaussian PDF with at least
#' dimensions 'statistic'. The first statistic is the parameter 'mean' of the PDF
#' and the second statistic is the parameter 'standard deviation' of the PDF.
#'@param time_dim_name A character string indicating the name of the temporal
#' dimension, by default 'time'.
#'
#'@return BEI_Weights() returns a normalized weights array with the same
#' dimensions as index_weight.
#'
#'@examples
#' # Example for the BEI_Weights function
#' index_weight <- 1 : (10 * 3 * 5 * 1)
#' dim(index_weight) <- c(sdate = 10, dataset = 3, member = 5, season = 1)
#' pdf_weight <- 1 : (10 * 3 * 2 * 1)
#' dim(pdf_weight) <- c(sdate = 10, dataset = 3, statistic = 2, season = 1)
#' res <- BEI_Weights(index_weight, pdf_weight)
#' dim(res)
#' # sdate dataset member season
#' # 10 3 5 1
#'
#'@import multiApply
#'@export
BEI_Weights <- function(index_weight, pdf_weight, time_dim_name = 'time') {
if (!is.character(time_dim_name)) {
stop("Parameter 'time_dim_name' must be a character string ",
"indicating the name of the temporal dimension.")
}
if (length(time_dim_name) > 1) {
warning("Parameter 'time_dim_name' has length greater than 1 and ",
"only the first element will be used.")
time_dim_name <- time_dim_name[1]
}
if (!is.array(index_weight)) {
stop("Parameter 'index_weight' must be an array.")
}
if (!is.array(pdf_weight)) {
stop("Parameter 'pdf_weight' must be an array.")
}
if (is.null(names(dim(index_weight))) || is.null(names(dim(pdf_weight)))) {
stop("Parameters 'index_weight' and 'pdf_weight'",
" should have dimmension names.")
}
if(!('member' %in% names(dim(index_weight)))) {
stop("Parameter 'index_weight' must have dimension 'member'.")
}
if(!('statistic' %in% names(dim(pdf_weight)))) {
stop("Parameter 'pdf_weight' must have dimension 'statistic'.")
}
if(time_dim_name %in% names(dim(index_weight)) &
!time_dim_name %in% names(dim(pdf_weight))) {
stop("Parameter 'pdf_weight' must have temporal dimension.")
}
if(!time_dim_name %in% names(dim(index_weight)) &
time_dim_name %in% names(dim(pdf_weight))) {
stop("Parameter 'index_weight' must have temporal dimension.")
}
if(time_dim_name %in% names(dim(index_weight)) &
time_dim_name %in% names(dim(pdf_weight)) &
dim(index_weight)[time_dim_name] != dim(pdf_weight)[time_dim_name]){
stop("Length of temporal dimension of parameters must be equal")
}
if (dim(pdf_weight)['statistic'] != 2) {
stop("Length of dimension 'statistic' ",
"of the parameter 'pdf_weight' must be equal to 2.")
}
aweights <- Apply(list(index_weight, pdf_weight),
target_dims = list('member', 'statistic'),
fun = .BEI_Weights)$output1
dimnames <- names(dim(index_weight))
pos <- match(dimnames, names(dim(aweights)))
aweights <- aperm(aweights, pos)
names(dim(aweights)) <- dimnames
return(aweights)
}
#'Atomic BEI_Weights
#'@param index_weight Index (e.g. NAO index) array from a SFS with dimensions
#' (member)
#'@param pdf_weight Statistics array to define a Gaussian PDF with dimensions
#' (statistic = 2).
#' The first statistic is the parameter 'mean' of the PDF and
#' the second statistic is the parameter 'standard deviation' of the PDF.
#'@return .BEI_Weights returns an array with dimensions (member) containing
#'the normalized weights for each member of a SFS using a Best NAO PDF.
#'@examples
#' # Example for the Atomic BEI_Weights function
#' index_weight <- c(1.3,3,-1)
#' dim(index_weight) <- c(member = 3)
#' pdf_weight <- c(1.5,0.8)
#' dim(pdf_weight) <- c(statistic = 2)
#' res <- .BEI_Weights(index_weight, pdf_weight)
#' dim(res)
#' # member
#' # 3
#'@noRd
.BEI_Weights <- function(index_weight, pdf_weight) {
aweights <- apply(index_weight, 1, dnorm, mean = pdf_weight[1], sd = pdf_weight[2])
dim(aweights) <- dim(index_weight)
sumWeights <- sum(aweights)
aweightsNorm <- apply(aweights, 1, NormWeight, sumWeights)
dim(aweightsNorm) <- dim(index_weight)
return(aweightsNorm)
}
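# Minimal sketch (hypothetical values) equivalent to what .BEI_Weights computes:
#   index_weight <- c(1.3, 3, -1); dim(index_weight) <- c(member = 3)
#   w <- dnorm(index_weight, mean = 1.5, sd = 0.8)
#   w / sum(w)  # normalized weights summing to 1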
# Auxiliary function to normalize a weight value
NormWeight <- function(weight, sumWeights) {
return(weight/sumWeights)
}
# Source file: CSTools/R/BEI_Weights.R
#'CST_AdamontAnalog finds analogous data in the reference dataset to experiment
#'data based on weather types
#'
#'@description This function searches for analogs in a reference dataset for
#'experiment data, based on corresponding weather types. The experiment data is
#'typically a hindcast, observations are typically provided by reanalysis data.
#'@author Paola Marson, \email{[email protected]} for PROSNOW version
#'@author Lauriane Batté, \email{[email protected]} for CSTools adaptation
#'
#'@param exp Experiment data an object of class \code{s2dv_cube}, can be output
#' from quantile correction using CST_AdamontQQCorr.
#'@param wt_exp Corresponding weather types (same dimensions as \code{exp$data}
#'  except lat/lon).
#'@param obs Reference data, also of class \code{s2dv_cube}. Note that lat/lon
#' dimensions need to be the same as \code{exp}.
#'@param wt_obs Corresponding weather types (same dimensions as \code{obs$data}
#'  except lat/lon).
#'@param nanalogs Integer defining the number of analog values to return
#' (default: 5).
#'@param method A character string indicating the method used for analog
#' definition. It can be:
#' \itemize{
#' \item{'pattcorr': pattern correlation.}
#' \item{'rain1' (for precip patterns): rain occurrence consistency.}
#' \item{'rain01' (for precip patterns): rain occurrence/non occurrence
#' consistency}
#' }
#'@param thres Real number indicating the threshold to define rain
#'  occurrence/non-occurrence, used with methods 'rain1' and 'rain01'.
#'@param search_obsdims List of dimensions in \code{obs} along which analogs are
#' searched for.
#'@param londim Name of longitude dimension.
#'@param latdim Name of latitude dimension.
#'@return analog_vals An object of class \code{s2dv_cube} containing
#' nanalogs analog values for each value of \code{exp} input data.
#'@examples
#'wt_exp <- sample(1:3, 15*6*3, replace = TRUE)
#'dim(wt_exp) <- c(dataset = 1, member = 15, sdate = 6, ftime = 3)
#'wt_obs <- sample(1:3, 6*3, replace = TRUE)
#'dim(wt_obs) <- c(dataset = 1, member = 1, sdate = 6, ftime = 3)
#'exp <- NULL
#'exp$data <- 1 : c(1 * 15 * 6 * 3 * 8 * 8)
#'dim(exp$data) <- c(dataset = 1, member = 15, sdate = 6, ftime = 3,
#' lat = 8, lon = 8)
#'class(exp) <- 's2dv_cube'
#'obs <- NULL
#'obs$data <- 101 : c(100 + 1 * 1 * 6 * 3 * 8 * 8)
#'dim(obs$data) <- c(dataset = 1, member = 1, sdate = 6, ftime = 3,
#' lat = 8, lon = 8)
#'class(obs) <- 's2dv_cube'
#'analog_vals <- CST_AdamontAnalog(exp = exp, obs = obs, wt_exp = wt_exp,
#' wt_obs = wt_obs, nanalogs = 2)
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_AdamontAnalog <- function(exp, obs, wt_exp, wt_obs, nanalogs,
method = 'pattcorr', thres = NULL,
search_obsdims = c('member', 'sdate', 'ftime'),
londim = 'lon', latdim = 'lat') {
dimnames <- names(dim(obs$data))
dimnamesexp <- names(dim(exp$data))
if (!inherits(exp, 's2dv_cube') || !inherits(obs, 's2dv_cube')) {
stop("Inputs 'exp' and 'obs' must be of class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
if (!(method %in% c('pattcorr','rain1','rain01'))) {
stop("Input parameter 'method' must be 'pattcorr', 'rain1', or 'rain01'")
}
if (is.null(nanalogs)) {
nanalogs <- 5
}
if (!(latdim %in% dimnames) || !(londim %in% dimnames)){
stop("'londim' or 'latdim' input doesn't match with 'obs$data' dimension",
" names")
}
if (!(latdim %in% dimnamesexp) || !(londim %in% dimnamesexp)){
stop("'londim' or 'latdim' input doesn't match with 'exp$data' dimension",
" names")
}
if (!all(search_obsdims %in% dimnames)) {
stop("Names in parameter 'search_obsdims' should match 'obs$data' ",
"dimension names.")
}
if (!all(dim(wt_exp) %in% dim(exp$data))) {
stop("Dimensions for 'wt_exp' should match 'exp$data' except lat/lon")
}
if (!all(dim(wt_obs) %in% dim(obs$data))) {
stop("Dimensions for 'wt_obs' should match 'obs$data' except lat/lon")
}
plat_exp <- which(dimnamesexp == latdim)
plon_exp <- which(dimnamesexp == londim)
plat_obs <- which(dimnames == latdim)
plon_obs <- which(dimnames == londim)
if ((dim(obs$data)[plon_obs] != dim(exp$data)[plon_exp]) ||
(dim(obs$data)[plat_obs] != dim(exp$data)[plat_exp])){
stop("Element 'data' from parameters 'obs' and 'exp' should have",
"same lon / lat dimensions if working with regular grids.")
}
# End of sanity checks; call AdamontAnalog function
analog_vals <- AdamontAnalog(exp = exp$data, obs = obs$data, wt_exp = wt_exp,
wt_obs = wt_obs, nanalogs = nanalogs,
method = method, thres = thres,
search_obsdims = search_obsdims, londim = londim,
latdim = latdim )
return(analog_vals)
}
#'AdamontAnalog finds analogous data in the reference dataset to experiment
#'data based on weather types
#'
#'@description This function searches for analogs in a reference dataset for
#'experiment data, based on corresponding weather types. The experiment data is
#'typically a hindcast, observations are typically provided by reanalysis data.
#'@author Paola Marson, \email{[email protected]} for PROSNOW version
#'@author Lauriane Batté, \email{[email protected]} for CSTools adaptation
#'
#'
#'@param exp A multidimensional array with named dimensions containing the
#' experiment data.
#'@param wt_exp Corresponding weather types (same dimensions as \code{exp$data}
#'  except lat/lon).
#'@param obs A multidimensional array with named dimensions containing the
#' reference data. Note that lat/lon dimensions need to be the same as
#' \code{exp}.
#'@param wt_obs Corresponding weather types (same dimensions as \code{obs$data}
#'  except lat/lon).
#'@param nanalogs Integer defining the number of analog values to return
#' (default: 5).
#'@param method A character string indicating the method used for analog
#' definition. It can be:
#' \itemize{
#' \item{'pattcorr': pattern correlation.}
#' \item{'rain1' (for precip patterns): rain occurrence consistency.}
#' \item{'rain01' (for precip patterns): rain occurrence/non occurrence
#' consistency}
#' }
#'@param thres Real number indicating the threshold to define rain
#'  occurrence/non-occurrence, used with methods 'rain1' and 'rain01'.
#'@param search_obsdims List of dimensions in \code{obs} along which analogs are
#' searched for.
#'@param londim Name of longitude dimension.
#'@param latdim Name of latitude dimension.
#'@return analog_vals An array containing nanalogs analog values.
#'@examples
#'wt_exp <- sample(1:3, 15*6*3, replace = TRUE)
#'dim(wt_exp) <- c(dataset = 1, member = 15, sdate = 6, ftime = 3)
#'wt_obs <- sample(1:3, 6*3, replace = TRUE)
#'dim(wt_obs) <- c(dataset = 1, member = 1, sdate = 6, ftime = 3)
#'exp <- 1 : c(1 * 15 * 6 * 3 * 8 * 8)
#'dim(exp) <- c(dataset = 1, member = 15, sdate = 6, ftime = 3, lat = 8, lon = 8)
#'obs <- 101 : c(100 + 1 * 1 * 6 * 3 * 8 * 8)
#'dim(obs) <- c(dataset = 1, member = 1, sdate = 6, ftime = 3, lat = 8, lon = 8)
#'analog_vals <- AdamontAnalog(exp = exp, obs = obs, wt_exp = wt_exp,
#' wt_obs = wt_obs, nanalogs = 2)
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@rdname CST_AdamontAnalog
#'@export
AdamontAnalog <- function(exp, obs, wt_exp, wt_obs, nanalogs = 5,
method = 'pattcorr', thres = NULL,
search_obsdims = c('member', 'sdate', 'ftime'),
londim = 'lon', latdim = 'lat') {
# exp: lat, lon, sdate, ftime, member
# obs: lat, lon, dims for searching 'sdate' 'ftime'...
# wt_exp: sdate, ftime, member
# wt_obs: the dims for searching
dimnames <- names(dim(obs))
dimnamesexp <- names(dim(exp))
if (method %in% c('rain1','rain01') & is.null(thres)) {
stop("Threshold 'thres' must be defined with methods 'rain1' and 'rain01'")
}
if (method == 'pattcorr' & !is.null(thres)) {
warning("Parameter 'thres' is not used with method 'pattcorr'.")
}
if (!(latdim %in% dimnamesexp) || !(londim %in% dimnamesexp)) {
stop("'londim' or 'latdim' input doesn't match with 'exp' dimension names")
}
# Position of lat/lon dimensions in exp data
poslatexp <- which(dimnamesexp == latdim)
poslonexp <- which(dimnamesexp == londim)
poslatobs <- which(dimnames == latdim)
poslonobs <- which(dimnames == londim)
if (!all(search_obsdims %in% dimnames)) {
stop("Names in parameter 'search_obsdims' should match 'obs' ",
"dimension names.")
}
if (!all(dim(wt_exp) %in% dim(exp))){
stop("Dimensions for 'wt_exp' should match 'exp' except lat/lon")
}
if (!all(dim(wt_obs) %in% dim(obs))){
stop("Dimensions for 'wt_obs' should match 'obs' except lat/lon")
}
if ((dim(obs)[poslonobs]!=dim(exp)[poslonexp]) ||
(dim(obs)[poslatobs]!=dim(exp)[poslatexp])){
stop("Parameters 'obs' and 'exp' should have same lon / lat dimensions.")
}
## Reshaping obs:
## The dimensions where to search in a single dim
if (length(search_obsdims) > 1) {
for (i in 1:(length(search_obsdims) - 1)) {
obs <- MergeDims(obs, search_obsdims[i:(i + 1)],
rename_dim = search_obsdims[i + 1])
wt_obs <- MergeDims(wt_obs, search_obsdims[i:(i + 1)],
rename_dim = search_obsdims[i + 1])
}
}
names(dim(obs))[which(names(dim(obs)) == search_obsdims[length(search_obsdims)])] <- 'time'
names(dim(wt_obs))[which(names(dim(wt_obs)) == search_obsdims[length(search_obsdims)])] <- 'time'
# Split 'time' dim in weather types
obs <- SplitDim(obs, split_dim = 'time', indices = as.vector(wt_obs),
new_dim_name='type')
analog_vals <- Apply(list(exp, obs, wt_exp),
target_dims = list(c(londim, latdim),
c(londim, latdim, 'time', 'type'),
NULL),
.aanalogs, method = method, thres = thres)$output1
# Reshaping output:
analog_vals <- Subset(analog_vals, along = 'type', indices = 1, drop = 'selected')
poslat <- which(names(dim(analog_vals)) == latdim)
poslon <- which(names(dim(analog_vals)) == londim)
postime <- which(names(dim(analog_vals)) == 'time') # Dimension with N analogs
pos <- 1:length(dim(analog_vals))
if (poslatexp > poslonexp){
analog_vals <- aperm(analog_vals,c(pos[-c(poslon,poslat,postime)],
postime,poslon,poslat))
} else {
analog_vals <- aperm(analog_vals,c(pos[-c(poslon,poslat,postime)],
postime,poslat,poslon))
}
# Renaming 'time' dim to 'analog'
names(dim(analog_vals))[which(names(dim(analog_vals)) == 'time')] <- 'analog'
return(analog_vals)
}
.aanalogs <- function(exp, obs, wt_exp, nanalogs = 5, method = 'pattcorr',
thres = NULL, londimexp = 'lon', latdimexp = 'lat',
londimobs = 'lon', latdimobs = 'lat') {
# exp: lon, lat
# obs: lon, lat, time, wt
# wt_exp: wt single scalar
search_analog <- switch(method, 'rain1' = .rain1, 'rain01' = .rain01,
'pattcorr' = .pattcor,
stop(paste0("Adamont Analog function only supports ",
"methods 'rain1', 'rain01', 'pattcorr'")))
obs <- Subset(obs, along = 'type', indices = wt_exp)
accuracy <- Apply(list(exp, obs), target_dims = list(c(londimexp, latdimexp),
c(londimobs, latdimobs)),
search_analog, thres = thres)$output1
obs <- Subset(obs, along = 'time',
indices = order(accuracy, decreasing = TRUE)[1:nanalogs])
return(obs)
}
.rain1 <- function(exp_day, obs_day, thres) {
accuracy <- sum((obs_day >= thres) * (exp_day >= thres))
return(accuracy)
}
.rain01 <- function(exp_day, obs_day, thres) {
accuracy <- sum(diag(table((obs_day >= thres),(exp_day >= thres))))
return(accuracy)
}
.pattcor <- function(exp_day, obs_day, thres = NULL) {
accuracy <- cor(as.vector(obs_day),as.vector(exp_day))
return(accuracy)
}
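# Minimal sketch (hypothetical values) of the three analog accuracy metrics above:
#   exp_day <- matrix(c(0, 2, 5, 0), 2, 2); obs_day <- matrix(c(1, 0, 4, 0), 2, 2)
#   .rain1(exp_day, obs_day, thres = 1)   # grid points where both fields exceed the threshold
#   .rain01(exp_day, obs_day, thres = 1)  # grid points where occurrence/non-occurrence agree
#   .pattcor(exp_day, obs_day)            # pattern correlation between the two fields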
# Source file: CSTools/R/CST_AdamontAnalog.R
#'CST_AdamontQQCorr computes quantile-quantile correction of seasonal or
#'decadal forecast data using weather types
#'
#'@description This function computes a quantile mapping based on weather types
#'for experiment data (typically a hindcast) onto reference \code{obs},
#'typically provided by reanalysis data.
#'@author Lauriane Batté, \email{[email protected]}
#'@author Paola Marson, \email{[email protected]}
#'@author Gildas Dayon, \email{[email protected]}
#'
#'@param exp Experiment data an object of class \code{s2dv_cube}.
#'@param wt_exp Corresponding weather types (same dimensions as \code{exp$data}
#'  except lat/lon).
#'@param obs Reference data, also of class \code{s2dv_cube}. lat/lon dimensions
#' can differ from \code{exp} if non rectilinear latlon grids are used,
#' in which case regrid should be set to TRUE and .NearestNeighbors \code{NN}
#' output should be provided.
#'@param wt_obs Corresponding weather types (same dimensions as \code{obs}
#' except lat/lon).
#'@param corrdims List of dimensions in \code{exp} for which quantile mapping
#' correction is applied.
#'@param londim Character name of longitude dimension in \code{exp} and
#' \code{obs}.
#'@param latdim Character name of latitude dimension in \code{exp} and
#' \code{obs}.
#'
#'@return An object of class \code{s2dv_cube} containing experiment data on the
#'lat/lon grid of \code{obs} input data, corrected by quantile mapping
#'depending on the weather types \code{wt_exp}.
#'
#'@examples
#'wt_exp <- c(1,1,2,3,3,2,2,1,1,2,2,3)
#'dim(wt_exp) <- c(dataset = 1, member = 1, sdate = 4, ftime = 3)
#'wt_obs <- c(3,3,1,2,2,2,2,1,3,1,1,2)
#'dim(wt_obs) <- c(dataset = 1, member = 1, sdate = 4, ftime = 3)
#'exp <- NULL
#'exp$data <- 1 : c(1 * 1 * 4 * 3 * 4 * 4)
#'dim(exp$data) <- c(dataset = 1, member = 1, sdate = 4, ftime = 3,
#' lat = 4, lon = 4)
#'class(exp) <- 's2dv_cube'
#'obs <- NULL
#'obs$data <- 101 : c(100 + 1 * 1 * 4 * 3 * 4 * 4)
#'dim(obs$data) <- c(dataset = 1, member = 1, sdate = 4, ftime = 3,
#' lat = 4, lon = 4)
#'class(obs) <- 's2dv_cube'
#'exp_corr <- CST_AdamontQQCorr(exp = exp, wt_exp = wt_exp,
#' obs = obs, wt_obs = wt_obs,
#' corrdims = c('dataset','member','sdate','ftime'))
#'@import qmap
#'@importFrom ClimProjDiags Subset
#'@import multiApply
#'@import abind
#'@export
CST_AdamontQQCorr <- function(exp, wt_exp, obs, wt_obs,
corrdims = c('member', 'sdate', 'ftime'),
londim = 'lon', latdim = 'lat') {
if (!inherits(exp, 's2dv_cube') || !inherits(obs, 's2dv_cube')){
stop("Inputs 'exp' and 'obs' must be of class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
dimnames <- names(dim(obs$data))
dimnamesexp <- names(dim(exp$data))
if (!(latdim %in% dimnames) || !(londim %in% dimnames)) {
stop("'londim' or 'latdim' input doesn't match with 'obs$data' dimension",
" names")
}
if (!(latdim %in% dimnamesexp) || !(londim %in% dimnamesexp)) {
stop("'londim' or 'latdim' input doesn't match with 'exp$data' dimension",
" names")
}
if (!(('time' %in% corrdims) || ('ftime' %in% corrdims))) {
warning("Forecast time should be one of the dimensions for the correction ",
"specified in corrdims input list")
}
if (!all(corrdims %in% dimnamesexp)) {
stop("Names in parameter 'corrdims' should match input dimension names.")
}
if (!all(dim(wt_exp) %in% dim(exp$data))) {
stop("Dimensions for 'wt_exp' should match 'exp$data' except lat/lon")
}
if (!all(dim(wt_obs) %in% dim(obs$data))) {
stop("Dimensions for 'wt_obs' should match 'obs$data' except lat/lon")
}
if ((length(dim(exp$coords[[londim]])) == 2) ||
(length(dim(obs$coords[[londim]])) == 2)) {
myNN <- .NearestNeighbors(exp = exp, obs = obs, method = 'ADA')
exp_corr <- AdamontQQCorr(exp = exp$data, wt_exp = wt_exp, obs = obs$data,
wt_obs = wt_obs, corrdims = corrdims,
londim = londim, latdim = latdim, regrid = TRUE,
NN = myNN)
} else {
## If not (standard case)
## exp$data lat/lon dimensions should match obs$data
plat_exp <- which(dimnamesexp == latdim)
plon_exp <- which(dimnamesexp == londim)
plat_obs <- which(dimnames == latdim)
plon_obs <- which(dimnames == londim)
if ((dim(obs$data)[plon_obs] != dim(exp$data)[plon_exp]) ||
(dim(obs$data)[plat_obs] != dim(exp$data)[plat_exp])) {
stop("Element 'data' from parameters 'obs' and 'exp' should have ",
"same lon / lat dimensions if working with regular grids.")
}
exp_corr <- AdamontQQCorr(exp = exp$data, wt_exp = wt_exp, obs = obs$data,
wt_obs = wt_obs, corrdims = corrdims,
londim = londim, latdim = latdim,
regrid = FALSE)
}
exp$data <- exp_corr
return(exp)
}
#'AdamontQQCorr computes quantile-quantile correction of seasonal or decadal
#'forecast data using weather types
#'
#'@description This function computes a quantile mapping based on weather types
#'for experiment data (typically a hindcast) onto reference \code{obs},
#'typically provided by reanalysis data.
#'@author Paola Marson, \email{[email protected]} for PROSNOW version
#'@author Lauriane Batté, \email{[email protected]} for CSTools adaptation
#'
#'@param exp Array with named dimensions (such as \code{$data} array of
#' experiment data from an object of class \code{s2dv_cube}).
#'@param wt_exp Corresponding weather types (same dimensions as \code{exp}
#' except lat/lon).
#'@param obs Array with named dimensions with reference data (can also be
#' \code{$data} array of class \code{s2dv_cube}). lat/lon dimensions can differ
#' from \code{exp} if non rectilinear latlon grids are used, in which case
#' regrid should be set to TRUE and .NearestNeighbors \code{NN} output should
#' be provided.
#'@param wt_obs Corresponding weather types (same dimensions as \code{obs}
#' except lat/lon).
#'@param corrdims List of dimensions in \code{exp} for which quantile mapping
#' correction is applied.
#'@param londim Character name of longitude dimension in \code{exp} and
#' \code{obs}.
#'@param latdim Character name of latitude dimension in \code{exp} and
#' \code{obs}.
#'@param regrid (optional) Boolean indicating whether .NearestNeighbors
#' regridding is needed.
#'@param NN (optional, if regrid = TRUE) List (output from .NearestNeighbors)
#'  that maps (nlat, nlon) onto (nlat_o, nlon_o).
#'
#'@return An array (such as \code{$data} array from an object of class
#'\code{s2dv_cube}) with named dimensions, containing experiment data on the
#'lat/lon grid of \code{obs} array, corrected by quantile mapping depending on
#'the weather types \code{wt_exp}
#'
#'@examples
#'wt_exp <- c(1,1,2,3,3,2,2,1,1,2,2,3)
#'dim(wt_exp) <- c(dataset = 1, member = 1, sdate = 4, ftime = 3)
#'wt_obs <- c(3,3,1,2,2,2,2,1,3,1,1,2)
#'dim(wt_obs) <- c(dataset = 1, member = 1, sdate = 4, ftime = 3)
#'exp <- 1 : c(1 * 1 * 4 * 3 * 4 * 4)
#'dim(exp) <- c(dataset = 1, member = 1, sdate = 4, ftime = 3,
#' lat = 4, lon = 4)
#'obs <- 101 : c(100 + 1 * 1 * 4 * 3 * 4 * 4)
#'dim(obs) <- c(dataset = 1, member = 1, sdate = 4, ftime = 3,
#' lat = 4, lon = 4)
#'exp_corr <- AdamontQQCorr(exp = exp, wt_exp = wt_exp,
#' obs = obs, wt_obs = wt_obs,
#' corrdims = c('dataset', 'member', 'sdate', 'ftime'))
#'@import qmap
#'@importFrom ClimProjDiags Subset
#'@import multiApply
#'@import abind
#'@export
AdamontQQCorr <- function(exp, wt_exp, obs, wt_obs,
corrdims = c('member', 'sdate', 'ftime'),
londim = 'lon', latdim = 'lat', regrid = FALSE,
NN = NULL) {
dimnames <- names(dim(obs))
dimnamesexp <- names(dim(exp))
if (!(latdim %in% dimnames) || !(londim %in% dimnames)) {
stop("'londim' or 'latdim' input doesn't match with 'obs' dimension names")
}
if (!(('time' %in% corrdims) || ('ftime' %in% corrdims))) {
warning("Forecast time should be one of the dimensions for the correction",
" specified in corrdims input list")
}
if (!all(corrdims %in% dimnamesexp)) {
stop("Names in parameter 'corrdims' should match input dimension names.")
}
if (!all(dim(wt_exp) %in% dim(exp))) {
stop("Dimensions for 'wt_exp' should match 'exp' except lat/lon")
}
if (!all(dim(wt_obs) %in% dim(obs))) {
stop("Dimensions for 'wt_obs' should match 'obs' except lat/lon")
}
  if (isTRUE(regrid) & is.null(NN)) {
stop("regrid set to TRUE: provide nearest neighbors input NN")
}
# The regridding part should only be done if lat/lon dimensions of obs and
# exp differ.
  if (isTRUE(regrid)) {
obsdims <- names(dim(obs))
poslat <- which(obsdims == latdim)
poslon <- which(obsdims == londim)
nlat_o <- dim(obs)[poslat]
nlon_o <- dim(obs)[poslon]
ilat_o <- array(c(1:nlat_o))
names(dim(ilat_o))[1] <- latdim
ilon_o <- array(c(1:nlon_o))
names(dim(ilon_o))[1] <- londim
## First step if obs data is higher resolution than exp data is to use
## nearest neighbor to compute downscaling of exp data
exp_corr <- Apply(list(exp, ilat_o, ilon_o),
target_dims = list(c(latdim,londim), latdim, londim), .getNN, NN = NN)$output1
## Reorder exp_corr dimensions to match exp dimensions
dexpc <- match(names(dim(exp)), names(dim(exp_corr)))
exp_corr <- aperm(exp_corr, dexpc)
dimnames(exp_corr) <- dimnames(exp)[dexpc]
## Keep original wt_exp for remapping data
wt_exp2 <- wt_exp
## Both exp and obs data are now on the same grid
} else {
## exp lat/lon dimensions should match obs
plat_exp <- which(dimnamesexp == latdim)
plon_exp <- which(dimnamesexp == londim)
plat_obs <- which(dimnames == latdim)
plon_obs <- which(dimnames == londim)
if ((dim(obs)[plon_obs] != dim(exp)[plon_exp]) ||
(dim(obs)[plat_obs] != dim(exp)[plat_exp])) {
stop("Parameters 'obs' and 'exp' should have same lon / lat ",
"dimensions if regrid set to 'FALSE' (regular grid case).")
}
exp_corr <- exp
## Keep original wt_exp for remapping data
wt_exp2 <- wt_exp
}
## Use CST_QuantileMapping function for quantile mapping
## depending on weather type
for (i in 1:(length(corrdims) - 1)) {
obs <- MergeDims(obs, corrdims[i:(i+1)], rename_dim = corrdims[i+1])
wt_obs <- MergeDims(wt_obs, corrdims[i:(i+1)], rename_dim = corrdims[i+1])
exp_corr <- MergeDims(exp_corr, corrdims[i:(i+1)], rename_dim = corrdims[i+1])
wt_exp2 <- MergeDims(wt_exp2, corrdims[i:(i+1)], rename_dim = corrdims[i+1])
}
names(dim(obs))[which(names(dim(obs)) == corrdims[length(corrdims)])] <- 'time'
names(dim(wt_obs))[which(names(dim(wt_obs)) == corrdims[length(corrdims)])] <- 'time'
names(dim(exp_corr))[which(names(dim(exp_corr)) == corrdims[length(corrdims)])] <- 'time'
names(dim(wt_exp2))[which(names(dim(wt_exp2)) == corrdims[length(corrdims)])] <- 'time'
# Split 'time' dim in weather types
obs <- SplitDim(obs, split_dim = 'time', indices = as.vector(wt_obs),
new_dim_name = 'type')
exp_corr <- SplitDim(exp_corr, split_dim = 'time', indices = as.vector(wt_exp2),
new_dim_name = 'type')
## Add NAs to exp_corr if needed to have compatible sample dimensions
numtobs <- dim(obs)[which(names(dim(obs)) == 'time')]
numtexp <- dim(exp_corr)[which(names(dim(exp_corr)) == 'time')]
if (numtexp%%numtobs > 0) {
## Create extra dimension and include NAs
ndimexp <- names(dim(exp_corr))
ndimobs <- names(dim(obs))
postime <- which(ndimexp == 'time')
dimadd <- dim(exp_corr)
dimadd[postime] <- ceiling(numtexp/numtobs) * numtobs - numtexp
exp_corr <- abind::abind(exp_corr, array(NA, dimadd), along = postime)
names(dim(exp_corr)) <- ndimexp
exp_corr <- SplitDim(exp_corr, 'time', freq = numtobs, indices = NULL)
dimobs <- c(dim(obs), 1)
dim(obs) <- dimobs
names(dim(obs)) <- c(ndimobs, 'index')
res <- QuantileMapping(exp = exp_corr, obs = obs, memb_dim = 'index',
sdate_dim = 'time', method = 'RQUANT', na.rm = TRUE)
res <- MergeDims(res, c('time','index'))
## Remove the extra NA values added previously
res <- Subset(res, along = 'time', indices = 1:numtexp)
} else {
## Apply QuantileMapping to exp_corr depending on weather type
exp_corr <- InsertDim(exp_corr, posdim = 1, lendim = 1, name = 'member')
res <- QuantileMapping(exp = exp_corr, obs = obs, sdate_dim = 'time',
samplemethod = 'RQUANT', na.rm = TRUE)
dim(res) <- dim(res)[-which(names(dim(res)) == 'member')]
}
rm(exp_corr) # Save space in memory
## Reshape exp_corr data onto time dimension before 'Split'
rep_pos <- array(NA, c(time = length(wt_exp2)))
pos_time <- which(names(dim(res)) == 'time')
pos_type <- which(names(dim(res)) == 'type')
for (x in unique(wt_exp2)) {
rep_pos[which(wt_exp2 == x)] <- 1:length(which(wt_exp2 == x))
}
exp_corr <- .unsplit_wtype(exp = res, wt_exp = wt_exp2, rep_pos = rep_pos,
pos_time = pos_time)
# Now reshape exp_corr data onto original dimensions
dim(exp_corr) <- c(dim(wt_exp), dim(exp_corr)[-c(pos_time,pos_type)])
return(exp_corr)
}
.getNN <- function(exp, ilat, ilon, NN) {
return(exp[NN$imin_lat[ilat, ilon], NN$imin_lon[ilat, ilon]])
}
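# Minimal sketch (hypothetical NN list) of what .getNN returns for a single obs
# grid point: the exp value at the nearest-neighbour indices stored in NN.
#   exp <- matrix(1:12, nrow = 3)  # dims: lat x lon
#   NN <- list(imin_lat = matrix(2, 1, 1), imin_lon = matrix(3, 1, 1))
#   .getNN(exp, ilat = 1, ilon = 1, NN = NN)  # exp[2, 3]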
.unsplit_wtype <- function(exp, dim_wt = 'type', wt_exp,
                           dim_time = 'time', rep_pos, pos_time = 1) {
# Initiate output
new <- Subset(Subset(exp, along = dim_wt, indices = wt_exp[1]),
along = dim_time, indices = rep_pos[1])
dimnames <- names(dim(new))
for (x in 2:length(wt_exp)) {
dat <- Subset(Subset(exp, along = dim_wt, indices = wt_exp[x]),
along = dim_time, indices = rep_pos[x])
new <- abind::abind(new, dat, along = pos_time)
}
names(dim(new)) <- dimnames
return(new)
}
#'ADAMONT Nearest Neighbors computes the distance between reference data grid
#'centroid and SF data grid
#'
#'@author Paola Marson, \email{[email protected]} for PROSNOW version
#'@author Lauriane Batté, \email{[email protected]} for CSTools adaptation
#'@description This function computes the nearest neighbor for each reference
#'data (lon, lat) point in the experiment dataset by computing the distance
#'between the reference dataset grid and the experiment data. This is the first
#'step in the ADAMONT method adapted from Verfaillie et al. (2018).
#'
#'@param method A string among three options ('ADA': standard ADAMONT distance,
#' 'simple': lon/lat straight Euclidean distance, 'radius': distance on the
#' sphere).
#'@param exp An object of class \code{s2dv_cube} as returned by \code{CST_Load}
#' function, containing the seasonal forecast experiment longitudes in
#' \code{$lon} and latitudes in \code{$lat}.
#'@param obs An object of class \code{s2dv_cube} as returned by \code{CST_Load}
#' function, containing the reference data on a different grid, with longitudes
#' in \code{$lon} and latitudes in \code{$lat}.
#'
#'@return NN a list, containing the following:
#'\itemize{
#' \item{'min_lon': array of dimensions \code{obs$lon} giving the longitude of
#' closest gridpoint in exp.}
#' \item{'min_lat': array of dimensions \code{obs$lat} giving the latitude of
#' closest gridpoint in exp.}
#' \item{'imin_lon': array of dimensions \code{obs$lon} giving the longitude
#' index of closest gridpoint in exp.}
#' \item{'imin_lat': array of dimensions \code{obs$lat} giving the latitude
#' index of closest gridpoint in exp.}
#'}
#'
#'@importFrom ClimProjDiags Subset
#'@import ncdf4
#'@noRd
.NearestNeighbors <- function (exp, obs, method = 'ADA') {
# Check 's2dv_cube'
if (!inherits(exp, 's2dv_cube') || !inherits(obs, 's2dv_cube')) {
stop("Inputs 'exp' and 'obs' must be of class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
# Check 'exp' and 'obs' object structure
if (!all(c('data', 'coords') %in% names(exp))) {
stop("Parameter 'exp' must have 'data' and 'coords' elements ",
"within the 's2dv_cube' structure.")
}
if (!any(names(exp$coords) %in% .KnownLonNames()) |
!any(names(exp$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names of parameter 'exp' do not match any ",
"of the names accepted by the package.")
}
if (!all(names(exp$coords) %in% names(obs$coords))) {
stop("Coordinates names must be equal in 'exp' and in 'obs'.")
}
lon_name <- names(exp$coords)[[which(names(exp$coords) %in% .KnownLonNames())]]
lat_name <- names(exp$coords)[[which(names(exp$coords) %in% .KnownLatNames())]]
exp_lon <- exp$coords[[lon_name]]
exp_lat <- exp$coords[[lat_name]]
obs_lon <- obs$coords[[lon_name]]
obs_lat <- obs$coords[[lat_name]]
dim_exp_lon <- dim(exp_lon)
dim_exp_lat <- dim(exp_lat)
dim_obs_lon <- dim(obs_lon)
dim_obs_lat <- dim(obs_lat)
# Check if one of the grids is non-regular:
if ((length(dim_exp_lon) == 2) || (length(dim_obs_lon) == 2)) {
# Flatten longitudes and latitudes in case of 2-D longitudes and latitudes (Lambert grids, etc.)
if ((length(dim_exp_lon) == 2) & (length(dim_exp_lat) == 2)) {
dim(exp_lon) <- c(dim_exp_lon[1] * dim_exp_lon[2])
dim(exp_lat) <- c(dim_exp_lat[1] * dim_exp_lat[2])
}
if ((length(dim_obs_lon) == 2) & (length(dim_obs_lat) == 2)) {
dim(obs_lon) <- c(dim_obs_lon[1] * dim_obs_lon[2])
dim(obs_lat) <- c(dim_obs_lat[1] * dim_obs_lat[2])
}
# Now lat and lon arrays have 1 dimension, length npt (= nlat*nlon)
OBS_grid <- cbind(obs_lon, obs_lat)
EXP_grid <- cbind(exp_lon, exp_lat)
dist_min <- min_lon <- min_lat <- imin_lon <- imin_lat <- array(dim = nrow(OBS_grid))
if (method == 'ADA') {
C <- cos(OBS_grid[,2] * pi/180)^2
for (i in 1:nrow(OBS_grid)) {
dist <- (OBS_grid[i, 2] - EXP_grid[, 2])^2 +
C[i] * (OBS_grid[i, 1] - EXP_grid[, 1])^2
        dist_min[i] <- min(dist)
min_lon[i] <- EXP_grid[which.min(dist), 1]
min_lat[i] <- EXP_grid[which.min(dist), 2]
imin_lon[i] <- which(exp_lon == min_lon[i])
imin_lat[i] <- which(exp_lat == min_lat[i])
}
} else if (method == 'simple') {
for (i in 1:nrow(OBS_grid)) {
dist <- (OBS_grid[i, 2] - EXP_grid[, 2])^2 + (OBS_grid[i, 1] - EXP_grid[, 1])^2
dist_min[i] <- min(dist)
min_lon[i] <- EXP_grid[which.min(dist), 1]
min_lat[i] <- EXP_grid[which.min(dist), 2]
        imin_lon[i] <- which(exp_lon == min_lon[i])
imin_lat[i] <- which(exp_lat == min_lat[i])
}
} else if (method == 'radius') {
R <- 6371e3 # metres, Earth radius
EXP_gridr <- EXP_grid * pi/180
OBS_gridr <- OBS_grid * pi/180
for (i in 1:nrow(OBS_grid)) {
a <- sin((OBS_gridr[i,2] - EXP_gridr[,2])/2)^2 + cos(OBS_gridr[i, 2]) *
cos(EXP_gridr[, 2]) * sin((OBS_gridr[i, 1] - EXP_gridr[, 1])/2)^2
c <- 2*atan2(sqrt(a), sqrt(1 - a))
dist <- R*c
dist_min[i] <- min(dist)
min_lon[i] <- EXP_grid[which.min(dist), 1]
min_lat[i] <- EXP_grid[which.min(dist), 2]
imin_lon[i] <- which(exp_lon == min_lon[i])
imin_lat[i] <- which(exp_lat == min_lat[i])
}
} else {
stop("AdamontNearestNeighbors supports method = 'ADA', 'simple' or 'radius' only.")
}
# Reshape outputs to original grid
    dim(min_lon) <- dim_obs_lon
    dim(min_lat) <- dim_obs_lat
    dim(imin_lon) <- dim_obs_lon
    dim(imin_lat) <- dim_obs_lat
} else {
# Regular lon/lat grid case: has been handled by CST_Load()
stop(paste0("AdamontNearestNeighbors is meant for non-regular lat/lon ",
"grids; use e.g. CST_Load to interpolate exp onto obs grid"))
}
  NN <- list(min_lon = min_lon, min_lat = min_lat, imin_lon = imin_lon,
             imin_lat = imin_lat)
return(NN)
}
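# Minimal sketch (hypothetical 2-D curvilinear coordinates) of how
# .NearestNeighbors could be called, e.g. from CST_AdamontQQCorr:
#   lon2d <- matrix(seq(0, 3, length.out = 4), 2, 2)
#   lat2d <- matrix(seq(40, 43, length.out = 4), 2, 2)
#   exp <- list(data = array(rnorm(4), dim = c(lat = 2, lon = 2)),
#               coords = list(lon = lon2d, lat = lat2d))
#   obs <- list(data = array(rnorm(4), dim = c(lat = 2, lon = 2)),
#               coords = list(lon = lon2d + 0.1, lat = lat2d + 0.1))
#   class(exp) <- class(obs) <- 's2dv_cube'
#   NN <- .NearestNeighbors(exp, obs, method = 'simple')
#   str(NN$imin_lat)  # index of the closest exp gridpoint for each obs point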
# Source file: CSTools/R/CST_AdamontQQCorr.R
#'@rdname CST_Analogs
#'@title Downscaling using Analogs based on large scale fields.
#'
#'@author M. Carmen Alvarez-Castro, \email{[email protected]}
#'@author Maria M. Chaves-Montero, \email{[email protected]}
#'@author Veronica Torralba, \email{[email protected]}
#'@author Nuria Perez-Zanon \email{[email protected]}
#'
#'@description This function performs downscaling using analogs. To compute
#'the analogs, the function searches for days with large scale conditions
#'similar to those of the field to be downscaled to a local scale. The large
#'scale and the local scale regions are defined by the user. The large scale
#'is usually given by atmospheric circulation such as sea level pressure or
#'geopotential height (Yiou et al, 2013), but the function gives the
#'possibility to use another field. The local scale will usually be given by
#'precipitation or temperature fields, but it might be another variable. The
#'analogs function will find the best analogs based on the minimum Euclidean
#'distance in the large scale pattern (i.e. SLP).
#'
#'The search of analogs must be done in the longest dataset possible. This is
#'important since it is necessary to have a good representation of the
#'possible states of the field in the past, and therefore, to get better
#'analogs.
#'This function has no constraints on specific regions, variables to downscale,
#'or data to be used (seasonal forecast data, climate projections data,
#'reanalysis data). The regridding onto a finer scale is done by interpolating
#'with CST_Start. Then, this interpolation is corrected by selecting the
#'analogs at the large and local scales based on the observations. The function
#'is an adapted version of the method of Yiou et al 2013. For an advanced
#'search of analogs (multiple analogs, different criteria, further information
#'from the metrics and dates of the selected analogs) use the 'Analogs'
#'function within the 'CSTools' package.
#'
#'@references Yiou, P., T. Salameh, P. Drobinski, L. Menut, R. Vautard,
#' and M. Vrac, 2013 : Ensemble reconstruction of the atmospheric column
#' from surface pressure using analogues. Clim. Dyn., 41, 1419-1437.
#' \email{[email protected]}
#'
#'@param expL An 's2dv_cube' object containing the experimental field on the
#'  large scale for which the analog is aimed. This field is used in all the
#'  criteria. If parameter 'expVar' is not provided, the function will return
#'  the expL analog. The element 'data' in the 's2dv_cube' object must have, at
#'  least, latitudinal and longitudinal dimensions. The object is expected to be
#'  already subset for the desired large scale region. Latitudinal dimension
#' accepted names: 'lat', 'lats', 'latitude', 'y', 'j', 'nav_lat'. Longitudinal
#' dimension accepted names: 'lon', 'lons','longitude', 'x', 'i', 'nav_lon'.
#'@param obsL An 's2dv_cube' object containing the observational field on the
#' large scale. The element 'data' in the 's2dv_cube' object must have the same
#' latitudinal and longitudinal dimensions as parameter 'expL' and a temporal
#' dimension with the maximum number of available observations.
#'@param expVar An 's2dv_cube' object containing the experimental field on the
#' local scale, usually a different variable to the parameter 'expL'. If it is
#' not NULL (by default, NULL), the returned field by this function will be the
#' analog of parameter 'expVar'.
#'@param obsVar An 's2dv_cube' containing the field of the same variable as the
#' passed in parameter 'expVar' for the same region.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'.
#'@param region A vector of length four indicating the minimum longitude, the
#' maximum longitude, the minimum latitude and the maximum latitude.
#'@param criteria A character string indicating the criteria to be used for the
#' selection of analogs:
#' \itemize{
#' \item{Large_dist} minimum Euclidean distance in the large scale pattern;
#' \item{Local_dist} minimum Euclidean distance in the large scale pattern
#' and minimum Euclidean distance in the local scale pattern; and
#' \item{Local_cor} minimum Euclidean distance in the large scale pattern,
#' minimum Euclidean distance in the local scale pattern and highest
#' correlation in the local variable to downscale.}
#'  Criteria 'Large_dist' is recommended for CST_Analogs; for an advanced use of
#'  the criteria 'Local_dist' and 'Local_cor' use the 'Analogs' function.
#'@param excludeTime An array of N named dimensions (coinciding with time
#'  dimensions in expL) of character string(s) indicating the date(s) of the
#' observations in the format "dd/mm/yyyy" to be excluded during the search of
#' analogs. It can be NULL but if expL is not a forecast (time_expL contained in
#' time_obsL), by default time_expL will be removed during the search of analogs.
#'@param time_expL A character string indicating the date of the experiment
#' in the same format than time_obsL (i.e. "yyyy-mm-dd"). By default it is NULL
#' and dates are taken from element \code{$attrs$Dates} from expL.
#'@param time_obsL A character string indicating the date of the observations
#' in the date format (i.e. "yyyy-mm-dd"). By default it is NULL and dates are
#' taken from element \code{$attrs$Dates} from obsL. It must have time
#' dimensions.
#'@param region A vector of length four indicating the minimum longitude,
#' the maximum longitude, the minimum latitude and the maximum latitude.
#'@param nAnalogs Number of analogs to be selected to apply the criteria
#'  'Local_dist' or 'Local_cor'. This is not necessarily the number of analogs
#'  that the user gets, but the number of events with minimum distance among
#'  which the search for the best analog is performed. The default value for
#'  the 'Large_dist' criterion is 1; for the 'Local_dist' and 'Local_cor'
#'  criteria it must be greater than 1 in order to match with the first
#'  criterion. If nAnalogs is NULL for 'Local_dist' and 'Local_cor', the
#'  default value will be set to the length of 'time_obsL'. If AnalogsInfo is
#'  FALSE the function returns just the best analog.
#'@param AnalogsInfo A logical value. TRUE to get a list with two elements:
#' 1) the downscaled field and 2) the AnalogsInfo which contains:
#' a) the number of the best analogs, b) the corresponding value of the metric
#' used in the selected criteria (distance values for Large_dist and Local_dist,
#'  correlation values for Local_cor), and c) dates of the analogs. The analogs are
#' listed in decreasing order, the first one is the best analog (i.e if the
#' selected criteria is Local_cor the best analog will be the one with highest
#' correlation, while for Large_dist criteria the best analog will be the day
#' with minimum Euclidean distance). Set to FALSE to get a single analog, the
#' best analog, for instance for downscaling.
#'@param ncores The number of cores to use in parallel computation
#'
#'@seealso \code{\link{CST_Start}}, \code{\link[startR]{Start}}
#'
#'@return An 's2dv_cube' object containing an array with the downscaled values of
#'the best analogs in element 'data'. If 'AnalogsInfo' is TRUE, 'data' is a list
#'with an array of the downscaled fields and the analogs information in
#'elements 'analogs', 'metric' and 'dates'.
#'@examples
#'expL <- rnorm(1:200)
#'dim(expL) <- c(member = 10, lat = 4, lon = 5)
#'obsL <- c(rnorm(1:180), expL[1, , ]*1.2)
#'dim(obsL) <- c(time = 10, lat = 4, lon = 5)
#'time_obsL <- as.POSIXct(paste(rep("01", 10), rep("01", 10), 1994:2003, sep = "-"),
#'                         format = "%d-%m-%Y")
#'dim(time_obsL) <- c(time = 10)
#'time_expL <- time_obsL[1]
#'dim(time_expL) <- c(time = 1)
#'lon <- seq(-1, 5, 1.5)
#'lat <- seq(30, 35, 1.5)
#'coords <- list(lon = seq(-1, 5, 1.5), lat = seq(30, 35, 1.5))
#'attrs_expL <- list(Dates = time_expL)
#'attrs_obsL <- list(Dates = time_obsL)
#'expL <- list(data = expL, coords = coords, attrs = attrs_expL)
#'obsL <- list(data = obsL, coords = coords, attrs = attrs_obsL)
#'class(expL) <- 's2dv_cube'
#'class(obsL) <- 's2dv_cube'
#'region <- c(min(lon), max(lon), min(lat), max(lat))
#'downscaled_field <- CST_Analogs(expL = expL, obsL = obsL, region = region)
#'
#'@import multiApply
#'@import abind
#'@import s2dv
#'@importFrom ClimProjDiags SelBox Subset
#'@export
CST_Analogs <- function(expL, obsL, expVar = NULL, obsVar = NULL,
sdate_dim = 'sdate', region = NULL,
criteria = 'Large_dist', excludeTime = NULL,
time_expL = NULL, time_obsL = NULL,
nAnalogs = NULL, AnalogsInfo = FALSE,
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(expL, "s2dv_cube") || !inherits(obsL, "s2dv_cube")) {
stop("Parameter 'expL' and 'obsL' must be of the class 's2dv_cube'.")
}
if (!is.null(expVar) && !inherits(expVar, "s2dv_cube")) {
stop("Parameter 'expVar' must be of the class 's2dv_cube'.")
}
if (!is.null(obsVar) && !inherits(obsVar, "s2dv_cube")) {
stop("Parameter 'obsVar' must be of the class 's2dv_cube'.")
}
# Check 'obsL' object structure
if (!all(c('data', 'coords', 'attrs') %in% names(obsL))) {
stop("Parameter 'obsL' must have 'data', 'coords' and 'attrs' elements ",
"within the 's2dv_cube' structure.")
}
if (!any(names(obsL$coords) %in% .KnownLonNames()) |
!any(names(obsL$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names of parameter 'obsL' do not match any ",
"of the names accepted by the package.")
}
lon_name <- names(obsL$coords)[[which(names(obsL$coords) %in% .KnownLonNames())]]
lat_name <- names(obsL$coords)[[which(names(obsL$coords) %in% .KnownLatNames())]]
# Check 'obsVar' object structure
if (!is.null(obsVar)) {
if (!all(c('data', 'coords', 'attrs') %in% names(obsVar))) {
stop("Parameter 'obsVar' must have 'data', 'coords' and 'attrs' elements ",
"within the 's2dv_cube' structure.")
}
if (!any(names(obsVar$coords) %in% .KnownLonNames()) |
!any(names(obsVar$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names of parameter 'obsVar' do not match any ",
"of the names accepted by the package.")
}
lonVar <- obsVar$coords[[which(names(obsVar$coords) %in% .KnownLonNames())]]
latVar <- obsVar$coords[[which(names(obsVar$coords) %in% .KnownLatNames())]]
} else {
lonVar <- NULL
latVar <- NULL
}
# Check temporal dimensions
if (any(names(dim(obsL$data)) %in% 'sdate')) {
if (any(names(dim(obsL$data)) %in% 'ftime')) {
obsL <- CST_MergeDims(obsL, c('ftime', 'sdate'), rename_dim = 'time')
} else if (any(names(dim(obsL$data)) %in% 'time')) {
obsL <- CST_MergeDims(obsL, c('time', 'sdate'), rename_dim = 'time')
}
}
if (!is.null(obsVar)) {
if (any(names(dim(obsVar$data)) %in% 'sdate')) {
if (any(names(dim(obsVar$data)) %in% 'ftime')) {
obsVar <- CST_MergeDims(obsVar, c('ftime', 'sdate'),rename_dim = 'time')
} else if (any(names(dim(obsVar$data)) %in% 'time')) {
obsVar <- CST_MergeDims(obsVar, c('time', 'sdate'), rename_dim = 'time')
}
}
}
if (is.null(time_expL)) {
time_expL <- expL$attrs$Dates
}
if (is.null(time_obsL)) {
time_obsL <- obsL$attrs$Dates
}
res <- Analogs(expL$data, obsL$data, time_obsL = time_obsL,
time_expL = time_expL,
lonL = as.vector(obsL$coords[[lon_name]]),
latL = as.vector(obsL$coords[[lat_name]]),
expVar = expVar$data,
obsVar = obsVar$data, sdate_dim = sdate_dim,
criteria = criteria,
excludeTime = excludeTime, region = region,
lonVar = as.vector(lonVar), latVar = as.vector(latVar),
nAnalogs = nAnalogs, AnalogsInfo = AnalogsInfo,
ncores = ncores)
if (AnalogsInfo) {
if (is.numeric(res$dates)) {
res$dates <- as.POSIXct(res$dates, origin = '1970-01-01', tz = 'UTC')
}
}
expL$data <- res
expL$dims <- dim(res)
if (!is.null(obsL$coords[[lon_name]]) | !is.null(obsL$coords[[lat_name]])) {
if (is.null(region)) {
expL$coords[[lon_name]] <- obsL$coords[[lon_name]]
expL$coords[[lat_name]] <- obsL$coords[[lat_name]]
} else {
expL$coords[[lon_name]] <- SelBox(obsL$data,
lon = as.vector(obsL$coords[[lon_name]]),
lat = as.vector(obsL$coords[[lat_name]]),
region = region,
londim = lon_name,
latdim = lat_name)$lon
expL$coords[[lat_name]] <- SelBox(obsL$data,
lon = as.vector(obsL$coords[[lon_name]]),
lat = as.vector(obsL$coords[[lat_name]]),
region = region,
londim = lon_name,
latdim = lat_name)$lat
}
}
return(expL)
}
#'@rdname Analogs
#'@title Analogs based on large scale fields.
#'
#'@author M. Carmen Alvarez-Castro, \email{[email protected]}
#'@author Maria M. Chaves-Montero, \email{[email protected] }
#'@author Veronica Torralba, \email{[email protected]}
#'@author Nuria Perez-Zanon \email{[email protected]}
#'
#'@description This function performs downscaling using analogs. To compute
#'the analogs, the function searches for days with large scale conditions
#'similar to those of the field to be downscaled to a local scale. The large
#'scale and the local scale regions are defined by the user. The large scale
#'is usually given by atmospheric circulation such as sea level pressure or
#'geopotential height (Yiou et al, 2013), but the function gives the
#'possibility to use another field. The local scale will usually be given by
#'precipitation or temperature fields, but it might be another variable.
#'The analogs function will find the best analogs based on three criteria:
#' (1) Minimum Euclidean distance in the large scale pattern (i.e. SLP).
#' (2) Minimum Euclidean distance in the large scale pattern (i.e. SLP)
#' and minimum Euclidean distance in the local scale pattern (i.e. SLP).
#' (3) Minimum Euclidean distance in the large scale pattern (i.e. SLP),
#' minimum distance in the local scale pattern (i.e. SLP) and highest
#' correlation in the local variable to downscale (i.e. precipitation).
#'The search of analogs must be done in the longest dataset possible. This is
#'important since it is necessary to have a good representation of the
#'possible states of the field in the past, and therefore, to get better
#'analogs. Once the search of the analogs is complete, and in order to use the
#'three criteria, the user can select a number of analogs using parameter
#''nAnalogs', which restricts the selection of the best analogs to a short
#'list of candidates. This function has no constraints on specific regions,
#'variables to downscale, or data to be used (seasonal forecast data, climate
#'projections data, reanalysis data). The regridding onto a finer scale is done
#'by interpolating with CST_Start. Then, this interpolation is corrected by
#'selecting the analogs at the large and local scales based on the
#'observations. The function is an adapted version of the method of Yiou et al
#'2013.
#'
#'@references Yiou, P., T. Salameh, P. Drobinski, L. Menut, R. Vautard,
#'and M. Vrac, 2013 : Ensemble reconstruction of the atmospheric column
#'from surface pressure using analogues. Clim. Dyn., 41, 1419-1437.
#'\email{[email protected]}
#'
#'@param expL An array of N named dimensions containing the experimental field
#' on the large scale for which the analog is aimed. This field is used in
#' all the criteria. If parameter 'expVar' is not provided, the function will
#' return the expL analog. The array must have, at least, latitudinal and
#' longitudinal dimensions. The object is expected to be already subset for
#' the desired large scale region. Latitudinal
#' dimension accepted names: 'lat', 'lats', 'latitude', 'y', 'j', 'nav_lat'.
#' Longitudinal dimension accepted names: 'lon', 'lons','longitude', 'x', 'i',
#' 'nav_lon'.
#'@param obsL An array of N named dimensions containing the observational field
#' on the large scale. The array must have
#' the same latitudinal and longitudinal dimensions as parameter 'expL' and a
#' single temporal dimension with the maximum number of available observations.
#'@param time_obsL A character string indicating the date of the observations
#' in the format "dd-mm-yyyy". Reference time to search for analogs. It must
#' have time dimensions.
#'@param time_expL An array of N named dimensions (coinciding with time
#' dimensions in expL) of character string(s) indicating the date(s) of the
#' experiment in the format "dd-mm-yyyy". Time(s) to find the analogs. If it
#' is not an scalar it must have named dimensions.
#'@param lonL A vector containing the longitude of parameter 'expL'.
#'@param latL A vector containing the latitude of parameter 'expL'.
#'@param excludeTime An array of N named dimensions (coinciding with time
#' dimensions in expL) of character string(s) indicating the date(s) of the
#' observations in the format "dd/mm/yyyy" to be excluded during the search of
#' analogs. It can be NULL but if expL is not a forecast (time_expL contained
#' in time_obsL), by default time_expL will be removed during the search of
#' analogs.
#'@param expVar An array of N named dimensions containing the experimental
#' field on the local scale, usually a different variable to the parameter
#' 'expL'. If it is not NULL (by default, NULL), the returned field by this
#' function will be the analog of parameter 'expVar'.
#'@param obsVar An array of N named dimensions containing the field of the
#' same variable as the passed in parameter 'expVar' for the same region.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'.
#'@param AnalogsInfo A logical value. If it is TRUE it returns a list
#' with two elements: 1) the downscaled field and
#' 2) the AnalogsInfo which contains: a) the number of the best
#' analogs, b) the corresponding value of the metric used in the selected
#' criteria (distance values for Large_dist and Local_dist,correlation values
#' for Local_cor), and c) dates of the analogs. The analogs are listed in
#' decreasing order, the first one is the best analog (i.e if the selected
#' criteria is Local_cor the best analog will be the one with highest
#' correlation, while for Large_dist criteria the best analog will be the day
#' with minimum Euclidean distance). Set to FALSE to get a single analog, the
#' best analog, for instance for downscaling.
#'@param criteria A character string indicating the criteria to be used for the
#' selection of analogs:
#' \itemize{\item{Large_dist} minimum Euclidean distance in the large scale pattern;
#' \item{Local_dist} minimum Euclidean distance in the large scale pattern
#' and minimum Euclidean distance in the local scale pattern; and
#' \item{Local_cor} minimum Euclidean distance in the large scale pattern,
#' minimum Euclidean distance in the local scale pattern and highest
#' correlation in the local variable to downscale.}
#'@param lonVar A vector containing the longitude of parameter 'expVar'.
#'@param latVar A vector containing the latitude of parameter 'expVar'.
#'@param region A vector of length four indicating the minimum longitude,
#' the maximum longitude, the minimum latitude and the maximum latitude.
#'@param nAnalogs Number of analogs to be selected to apply the criteria
#' 'Local_dist' or 'Local_cor'. This is not necessarily the number of analogs
#' that the user gets, but the number of events with minimum distance among
#' which the search for the best analog is performed. The default value for
#' the 'Large_dist' criterion is 1; for the 'Local_dist' and 'Local_cor'
#' criteria it must be greater than 1 in order to match with the first
#' criterion. If nAnalogs is NULL for 'Local_dist' and 'Local_cor', the
#' default value will be set to the length of 'time_obsL'. If AnalogsInfo is
#' FALSE the function returns just the best analog.
#'@param ncores The number of cores to use in parallel computation.
#'
#'@return An array with the downscaled values of the best analogs for the
#'selected criteria. If 'AnalogsInfo' is set to TRUE it returns a list with an
#'array of the downscaled field and the analogs information in elements 'analogs',
#''metric' and 'dates'.
#'
#'@examples
#'# Example 1: Downscaling using criteria 'Large_dist' and a single variable:
#'expSLP <- rnorm(1:20)
#'dim(expSLP) <- c(lat = 4, lon = 5)
#'obsSLP <- c(rnorm(1:180), expSLP * 1.2)
#'dim(obsSLP) <- c(time = 10, lat = 4, lon = 5)
#'time_obsSLP <- paste(rep("01", 10), rep("01", 10), 1994 : 2003, sep = "-")
#'dim(time_obsSLP) <- c(time = 10)
#'downscale_field <- Analogs(expL = expSLP, obsL = obsSLP,
#' time_obsL = time_obsSLP,time_expL = "01-01-1994")
#'
#'# Example 2: Downscaling using criteria 'Large_dist' and 2 variables:
#'obs.pr <- c(rnorm(1:200) * 0.001)
#'dim(obs.pr) <- dim(obsSLP)
#'downscale_field <- Analogs(expL = expSLP, obsL = obsSLP, obsVar = obs.pr,
#' time_obsL = time_obsSLP, time_expL = "01-01-1994")
#'
#'# Example 3: Downscaling using criteria 'Local_dist' and 2 variables:
#'# analogs of local scale using criteria 2
#'region <- c(lonmin = -1, lonmax = 2, latmin = 30, latmax = 33)
#'Local_scale <- Analogs(expL = expSLP, obsL = obsSLP, time_obsL = time_obsSLP,
#' obsVar = obs.pr, criteria = "Local_dist",
#' lonL = seq(-1, 5, 1.5),latL = seq(30, 35, 1.5),
#' region = region,time_expL = "01-10-2000",
#' nAnalogs = 10, AnalogsInfo = TRUE)
#'
#'# Example 4: Downscaling using criteria 'Local_cor' and 2 variables:
#'exp.pr <- c(rnorm(1:20) * 0.001)
#'dim(exp.pr) <- dim(expSLP)
#'Local_scalecor <- Analogs(expL = expSLP, obsL = obsSLP, time_obsL = time_obsSLP,
#' obsVar = obs.pr, expVar = exp.pr,
#' criteria = "Local_cor", lonL = seq(-1, 5, 1.5),
#' time_expL = "01-10-2000", latL = seq(30, 35, 1.5),
#' lonVar = seq(-1, 5, 1.5), latVar = seq(30, 35, 1.5),
#' nAnalogs = 8, region = region, AnalogsInfo = FALSE)
#'
#'# Example 5: List of best analogs using the three criteria 'Large_dist',
#'# 'Local_dist' and 'Local_cor':
#'Large_scale <- Analogs(expL = expSLP, obsL = obsSLP, time_obsL = time_obsSLP,
#' criteria = "Large_dist", time_expL = "01-10-2000",
#' nAnalogs = 7, AnalogsInfo = TRUE)
#'Local_scale <- Analogs(expL = expSLP, obsL = obsSLP, time_obsL = time_obsSLP,
#' time_expL = "01-10-2000", criteria = "Local_dist",
#' lonL = seq(-1, 5, 1.5), latL = seq(30, 35, 1.5),
#' nAnalogs = 7, region = region, AnalogsInfo = TRUE)
#'Local_scalecor <- Analogs(expL = expSLP, obsL = obsSLP, time_obsL = time_obsSLP,
#' obsVar = obsSLP, expVar = expSLP,
#' time_expL = "01-10-2000", criteria = "Local_cor",
#' lonL = seq(-1, 5, 1.5), latL = seq(30, 35, 1.5),
#' lonVar = seq(-1, 5, 1.5), latVar = seq(30, 35, 1.5),
#' nAnalogs = 7, region = region,
#' AnalogsInfo = TRUE)
#'@import multiApply
#'@import abind
#'@import s2dv
#'@importFrom ClimProjDiags SelBox Subset
#'@export
Analogs <- function(expL, obsL, time_obsL, time_expL = NULL,
lonL = NULL, latL = NULL, expVar = NULL, obsVar = NULL,
sdate_dim = 'sdate', criteria = "Large_dist",
excludeTime = NULL, lonVar = NULL, latVar = NULL,
region = NULL, nAnalogs = NULL,
AnalogsInfo = FALSE, ncores = NULL) {
# Check inputs
# expL, obsL
if (!is.array(expL) || !is.numeric(expL)) {
stop("Parameter 'expL' must be a numeric array.")
}
if (!is.array(obsL) || !is.numeric(obsL)) {
stop("Parameter 'obsL' must be a numeric array.")
}
obsdims <- names(dim(obsL))
expdims <- names(dim(expL))
if (is.null(expdims)) {
stop("Parameter 'expL' must have dimension names.")
}
if (is.null(obsdims)) {
stop("Parameter 'obsL' must have dimension names.")
}
if (any(is.na(expL))) {
warning("Parameter 'expL' contains NA values.")
}
if (any(is.na(obsL))) {
warning("Parameter 'obsL' contains NA values.")
}
if (!any(.KnownLonNames() %in% obsdims) | !any(.KnownLonNames() %in% expdims)) {
stop("Parameter 'expL' and 'obsL' must have longitudinal dimension.")
}
if (!any(.KnownLatNames() %in% obsdims) | !any(.KnownLatNames() %in% expdims)) {
stop("Parameter 'expL' and 'obsL' must have latitudinal dimension.")
}
# Know spatial coordinates names
if (!any(obsdims %in% .KnownLonNames()) |
!any(obsdims %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted by ",
"the package.")
}
lon_name <- obsdims[[which(obsdims %in% .KnownLonNames())]]
lat_name <- obsdims[[which(obsdims %in% .KnownLatNames())]]
# criteria
if (!criteria %in% c('Large_dist', 'Local_dist', 'Local_cor')) {
stop("Parameter 'criteria' can only be: 'Large_dist', 'Local_dist' or 'Local_cor'.")
}
if (length(criteria) > 1) {
warning("Only first element of 'criteria' parameter will be used.")
criteria <- criteria[1]
}
# lonL, latL, lonVar, latVar
if (criteria == "Local_dist" | criteria == "Local_cor") {
if (is.null(lonL) | is.null(latL)) {
stop("Parameters 'lonL' and 'latL' cannot be NULL.")
}
if (!is.numeric(lonL) | !is.numeric(latL)) {
stop("Parameters 'lonL' and 'latL' must be numeric.")
}
if (!is.null(dim(lonL)) | !is.null(dim(latL))) {
if (length(dim(lonL)) == 1 & length(dim(latL)) == 1) {
lonL <- as.vector(lonL)
latL <- as.vector(latL)
} else {
stop("Parameters 'lonL' and 'latL' need to be a vector.")
}
}
}
if (criteria == "Local_cor") {
if (is.null(lonVar) | is.null(latVar)) {
stop("Parameters 'lonVar' and 'latVar' cannot be NULL.")
}
if (!is.numeric(lonVar) | !is.numeric(latVar)) {
stop("Parameters 'lonVar' and 'latVar' must be numeric.")
}
if (!is.null(dim(lonVar)) | !is.null(dim(latVar))) {
if (length(dim(lonVar)) == 1 & length(dim(latVar)) == 1) {
lonVar <- as.vector(lonVar)
latVar <- as.vector(latVar)
} else {
stop("Parameters 'lonVar' and 'latVar' need to be a vector.")
}
}
}
# expVar and obsVar
if (!is.null(expVar) & is.null(obsVar)) {
expVar <- NULL
    warning("Parameter 'expVar' is set to NULL as parameter 'obsVar' is NULL; ",
            "the large scale field will be returned.")
}
if (is.null(expVar) & is.null(obsVar)) {
    warning("Parameters 'expVar' and 'obsVar' are NULL; downscaling/listing ",
            "the same variable as 'obsL' and 'expL'.")
}
if (!is.null(obsVar) & is.null(expVar) & criteria == "Local_cor") {
stop("Parameter 'expVar' cannot be NULL.")
}
# nAnalogs
if (is.null(nAnalogs) & criteria != "Large_dist") {
nAnalogs = length(time_obsL)
    warning("Parameter 'nAnalogs' is NULL and is set to the length of ",
            "'time_obsL' by default.")
}
if (is.null(nAnalogs) & criteria == "Large_dist") {
nAnalogs <- 1
}
# time_obsL, time_expL
if (is.null(time_obsL)) {
stop("Parameter 'time_obsL' cannot be NULL.")
}
if (is.null(time_expL)) {
stop("Parameter 'time_expL' cannot be NULL.")
}
if (!inherits(time_obsL, "character")) {
    warning("Coercing 'time_obsL' to a character vector in format '%d-%m-%Y'.")
dims_time_obsL <- dim(time_obsL)
time_obsL <- format(as.Date(time_obsL), '%d-%m-%Y')
dim(time_obsL) <- dims_time_obsL
}
if (!inherits(time_expL, "character")) {
    warning("Coercing 'time_expL' to a character vector in format '%d-%m-%Y'.")
dims_time_expL <- dim(time_expL)
time_expL <- format(as.Date(time_expL), '%d-%m-%Y')
dim(time_expL) <- dims_time_expL
}
# time_obsL, time_expL (2)
if (is.null(names(dim(time_obsL)))) {
stop("Parameter 'time_obsL' must have named dimensions.")
}
if (!is.character(sdate_dim)) {
stop("Parameter 'sdate_dim' must be a character string.")
}
if (!sdate_dim %in% names(dim(time_obsL))) {
if (length(dim(time_obsL)) == 1) {
dim(time_obsL) <- c(dim(time_obsL), sdate = 1)
} else {
      stop("Parameter 'time_obsL' must have the dimension specified in ",
           "'sdate_dim' when it has multiple time dimensions.")
}
}
if (length(time_expL) != 1) {
if (is.null(names(dim(time_expL)))) {
stop("Parameter 'time_expL' must have named dimensions.")
}
} else {
dim(time_expL) <- 1
}
if (!sdate_dim %in% names(dim(time_expL))) {
if (length(dim(time_expL)) == 1) {
dim(time_expL) <- c(dim(time_expL), sdate = 1)
} else {
      stop("Parameter 'time_expL' must have the dimension specified in ",
           "'sdate_dim' when it has multiple time dimensions.")
}
}
if (length(dim(time_obsL)) == 2) {
if (which(sdate_dim %in% names(dim(time_obsL))) == 1) {
time_obsL <- Reorder(time_obsL, c(2,1))
}
} else {
    warning("Parameter 'time_obsL' should have forecast time and start date ",
            "dimensions, in that order.")
}
if (length(dim(time_expL)) == 2) {
if (which(sdate_dim %in% names(dim(time_expL))) == 1) {
time_expL <- Reorder(time_expL, c(2,1))
}
} else {
    warning("Parameter 'time_expL' should have forecast time and start date ",
            "dimensions, in that order.")
}
# excludeTime
if (!is.null(excludeTime)) {
if (!inherits(excludeTime, "character")) {
      warning("Coercing 'excludeTime' to a character vector in format '%d-%m-%Y'.")
excludeTime <- format(as.Date(excludeTime),'%d-%m-%Y')
}
}
# obsVar, expVar
if (!is.null(obsVar)) {
if (any(names(dim(obsVar)) %in% 'ftime')) {
if (any(names(dim(obsVar)) %in% 'time')) {
        stop("Multiple temporal dimensions ('ftime' and 'time') found ",
             "in parameter 'obsVar'.")
} else {
time_pos_obsVar <- which(names(dim(obsVar)) == 'ftime')
names(dim(obsVar))[time_pos_obsVar] <- 'time'
if (any(names(dim(expVar)) %in% 'ftime')) {
time_pos_expVar <- which(names(dim(expVar)) == 'ftime')
names(dim(expVar))[time_pos_expVar] <- 'time'
}
}
}
}
# obsL
if (any(names(dim(obsL)) %in% 'ftime')) {
if (any(names(dim(obsL)) %in% 'time')) {
      stop("Multiple temporal dimensions ('ftime' and 'time') found ",
           "in parameter 'obsL'.")
} else {
time_pos_obsL <- which(names(dim(obsL)) == 'ftime')
names(dim(obsL))[time_pos_obsL] <- 'time'
if (any(names(dim(expL)) %in% 'ftime')) {
time_pos_expL <- which(names(dim(expL)) == 'ftime')
names(dim(expL))[time_pos_expL] <- 'time'
}
}
}
if ((any(names(dim(obsL)) %in% 'sdate')) &&
(any(names(dim(obsL)) %in% 'time'))) {
dims_obsL <- dim(obsL)
pos_sdate <- which(names(dim(obsL)) == 'sdate')
pos_time <- which(names(dim(obsL)) == 'time')
pos <- 1 : length(dim(obsL))
pos <- c(pos_time, pos_sdate, pos[-c(pos_sdate,pos_time)])
obsL <- aperm(obsL, pos)
dim(obsL) <- c(time = prod(dims_obsL[c(pos_time, pos_sdate)]),
dims_obsL[-c(pos_time, pos_sdate)])
} else {
if (any(names(dim(obsL)) %in% 'sdate')) {
dims_obsL <- dim(obsL)
pos_sdate <- which(names(dim(obsL)) == 'sdate')
pos <- 1 : length(dim(obsL))
pos <- c( pos_sdate, pos[-c(pos_sdate)])
obsL <- aperm(obsL, pos)
dim(obsL) <- c(time = prod(dims_obsL[c(pos_sdate)]),
dims_obsL[-c( pos_sdate)])
} else {
if (any(names(dim(obsL)) %in% 'time')) {
dims_obsL <- dim(obsL)
pos_time <- which(names(dim(obsL)) == 'time')
if (length(time_obsL) != dim(obsL)[pos_time]) {
        stop("'time_obsL' and 'obsL' must have the same length in the ",
             "temporal dimension.")
}
pos <- 1 : length(dim(obsL))
pos <- c(pos_time, pos[-c(pos_time)])
obsL <- aperm(obsL, pos)
dim(obsL) <- c(time = prod(dims_obsL[pos_time]),
dims_obsL[-c(pos_time)])
} else {
stop("Parameter 'obsL' must have a temporal dimension named 'time'.")
}
}
}
# obsVar
if (!is.null(obsVar)) {
if (any(names(dim(obsVar)) %in% 'sdate')) {
if (any(names(dim(obsVar)) %in% 'time')) {
dims_obsVar <- dim(obsVar)
pos_sdate <- which(names(dim(obsVar)) == 'sdate')
pos_time <- which(names(dim(obsVar)) == 'time')
pos <- 1 : length(dim(obsVar))
pos <- c(pos_time, pos_sdate, pos[-c(pos_sdate,pos_time)])
obsVar <- aperm(obsVar, pos)
dim(obsVar) <- c(time = prod(dims_obsVar[c(pos_time, pos_sdate)]),
dims_obsVar[-c(pos_time, pos_sdate)])
} else {
dims_obsVar <- dim(obsVar)
pos_sdate <- which(names(dim(obsVar)) == 'sdate')
pos <- 1 : length(dim(obsVar))
pos <- c(pos_sdate, pos[-c(pos_sdate)])
obsVar <- aperm(obsVar, pos)
dim(obsVar) <- c(time = prod(dims_obsVar[c(pos_sdate)]),
dims_obsVar[-c(pos_sdate)])
}
} else {
if (any(names(dim(obsVar)) %in% 'time')) {
dims_obsVar <- dim(obsVar)
pos_time <- which(names(dim(obsVar)) == 'time')
if (length(time_obsL) != dim(obsVar)[pos_time]) {
          stop("'time_obsL' and 'obsVar' must have the same length in the ",
               "temporal dimension.")
        }
pos <- 1 : length(dim(obsVar))
pos <- c(pos_time, pos[-c(pos_time)])
obsVar <- aperm(obsVar, pos)
dim(obsVar) <- c(time = prod(dims_obsVar[c(pos_time)]),
dims_obsVar[-c(pos_time)])
} else {
stop("Parameter 'obsVar' must have a temporal dimension named 'time'.")
}
}
}
if (is.null(region) && criteria != 'Large_dist') {
if (!is.null(lonVar) & !is.null(latVar)) {
region <- c(min(lonVar), max(lonVar), min(latVar), max(latVar))
} else {
      stop("Parameters 'lonVar' and 'latVar' must be provided for the ",
           "criteria 'Local_dist' and 'Local_cor'.")
}
}
if (any(names(dim(expL)) %in% c('ftime', 'leadtime', 'ltime'))) {
    if (sum(names(dim(expL)) %in% c('ftime', 'leadtime', 'ltime')) > 1) {
      stop("Parameter 'expL' cannot have multiple forecast time dimensions.")
} else {
names(dim(expL))[which(names(dim(expL)) %in% c('ftime','leadtime','ltime'))] <- 'time'
}
}
# remove dimension length 1 to simplify outputs:
if (any(dim(obsL) == 1)) {
obsL <- adrop(obsL, which(dim(obsL) == 1))
}
if (any(dim(expL) == 1)) {
expL <- adrop(expL, which(dim(expL) == 1))
}
if (!is.null(obsVar)) {
if (any(dim(obsVar) == 1)) {
obsVar <- adrop(obsVar, which(dim(obsVar) == 1))
}
}
if (!is.null(expVar)) {
if (any(dim(expVar) == 1)) {
expVar <- adrop(expVar, which(dim(expVar) == 1))
}
}
names(dim(expL)) <- replace_repeat_dimnames(names(dim(expL)),
names(dim(obsL)),
lon_name = lon_name,
lat_name = lat_name)
if (!is.null(expVar)) {
names(dim(expVar)) <- replace_repeat_dimnames(names(dim(expVar)),
names(dim(obsVar)),
lon_name = lon_name,
lat_name = lat_name)
}
if (is.null(excludeTime)) {
excludeTime <- vector(mode = "character", length = length(time_expL))
}
if (length(time_expL) == length(excludeTime)) {
if (any(names(dim(expL)) %in% c('sdate_exp'))) {
dim(time_expL) <- c(dim(expL)['sdate_exp'], dim(expL)['time_exp'])
} else if (any(names(dim(expL)) %in% c('sdate'))) {
if (any(names(dim(expL)) %in% c('time_exp'))) {
dim(time_expL) <- c(dim(expL)['sdate'], dim(expL)['time_exp'])
dim(excludeTime) <- c(dim(expL)['sdate'], dim(expL)['time_exp'])
} else if (any(names(dim(expL)) %in% c('time'))) {
dim(time_expL) <- c(dim(expL)['sdate'], dim(expL)['time'])
dim(excludeTime) <- c(dim(expL)['sdate'], dim(expL)['time'])
} else {
dim(time_expL) <- c(dim(expL)['sdate'])
dim(excludeTime) <- c(dim(expL)['sdate'])
}
} else if (any(names(dim(expL)) %in% c('time'))) {
dim(time_expL) <- c(dim(expL)['time'])
dim(excludeTime) <- c(dim(expL)['time'])
} else if (any(names(dim(expL)) %in% c('time_exp'))) {
dim(time_expL) <- c(dim(expL)['time_exp'])
dim(excludeTime) <- c(dim(expL)['time_exp'])
}
}
if (!AnalogsInfo) {
if (is.null(obsVar)) {
res <- Apply(list(expL, obsL),
target_dims = list(c(lat_name, lon_name), c('time', lat_name, lon_name)),
fun = .analogs, time_obsL, expVar = expVar,
time_expL = time_expL, excludeTime = excludeTime,
obsVar = obsVar, criteria = criteria,
lonL = lonL, latL = latL,
lonVar = lonVar, latVar = latVar, region = region,
nAnalogs = nAnalogs, AnalogsInfo = AnalogsInfo,
lon_name = lon_name, lat_name = lat_name,
output_dims = c('nAnalogs', lat_name, lon_name),
ncores = ncores)$output1
} else if (!is.null(obsVar) && is.null(expVar)) {
res <- Apply(list(expL, obsL, obsVar),
target_dims = list(c(lat_name, lon_name), c('time', lat_name, lon_name),
c('time', lat_name, lon_name)),
fun = .analogs, time_obsL,
time_expL = time_expL, excludeTime = excludeTime,
expVar = expVar, criteria = criteria,
lonL = lonL, latL = latL,
lonVar = lonVar, latVar = latVar, region = region,
nAnalogs = nAnalogs, AnalogsInfo = AnalogsInfo,
lon_name = lon_name, lat_name = lat_name,
output_dims = c('nAnalogs', lat_name, lon_name),
ncores = ncores)$output1
} else if (!is.null(obsVar) && !is.null(expVar)) {
res <- Apply(list(expL, obsL, obsVar, expVar),
target_dims = list(c(lat_name, lon_name), c('time', lat_name, lon_name),
c('time', lat_name, lon_name), c(lat_name, lon_name)),
fun = .analogs,
criteria = criteria, time_obsL,
time_expL = time_expL, excludeTime = excludeTime,
lonL = lonL, latL = latL,
lonVar = lonVar, latVar = latVar, region = region,
nAnalogs = nAnalogs, AnalogsInfo = AnalogsInfo,
lon_name = lon_name, lat_name = lat_name,
output_dims = c('nAnalogs', lat_name, lon_name),
ncores = ncores)$output1
}
} else {
if (is.null(obsVar)) {
res <- Apply(list(expL, obsL),
target_dims = list(c(lat_name, lon_name), c('time', lat_name, lon_name)),
fun = .analogs, time_obsL, expVar = expVar,
time_expL = time_expL, excludeTime = excludeTime,
obsVar = obsVar, criteria = criteria,
lonL = lonL, latL = latL,
lonVar = lonVar, latVar = latVar, region = region,
nAnalogs = nAnalogs, AnalogsInfo = AnalogsInfo,
lon_name = lon_name, lat_name = lat_name,
output_dims = list(fields = c('nAnalogs', lat_name, lon_name),
analogs = c('nAnalogs'),
metric = c('nAnalogs', 'metric'),
dates = c('nAnalogs')),
ncores = ncores)
} else if (!is.null(obsVar) && is.null(expVar)) {
res <- Apply(list(expL, obsL, obsVar),
target_dims = list(c(lat_name, lon_name), c('time', lat_name, lon_name),
c('time', lat_name, lon_name)),
fun = .analogs, time_obsL,
time_expL = time_expL, excludeTime = excludeTime,
expVar = expVar, criteria = criteria,
lonL = lonL, latL = latL,
lonVar = lonVar, latVar = latVar, region = region,
nAnalogs = nAnalogs, AnalogsInfo = AnalogsInfo,
lon_name = lon_name, lat_name = lat_name,
output_dims = list(fields = c('nAnalogs', lat_name, lon_name),
analogs = c('nAnalogs'),
metric = c('nAnalogs', 'metric'),
dates = c('nAnalogs')),
ncores = ncores)
} else if (!is.null(obsVar) && !is.null(expVar)) {
res <- Apply(list(expL, obsL, obsVar, expVar),
target_dims = list(c(lat_name, lon_name), c('time', lat_name, lon_name),
c('time', lat_name, lon_name), c(lat_name, lon_name)),
fun = .analogs, time_obsL,
criteria = criteria,
time_expL = time_expL, excludeTime = excludeTime,
lonL = lonL, latL = latL,
lonVar = lonVar, latVar = latVar, region = region,
nAnalogs = nAnalogs, AnalogsInfo = AnalogsInfo,
lon_name = lon_name, lat_name = lat_name,
output_dims = list(fields = c('nAnalogs', lat_name, lon_name),
analogs = c('nAnalogs'),
metric = c('nAnalogs', 'metric'),
dates = c('nAnalogs')),
ncores = ncores)
}
}
return(res)
}
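# A minimal sketch (guarded by if (FALSE), so it is never executed when the
# package is loaded) of how observed fields with both 'sdate' and 'time'
# dimensions are collapsed above into a single 'time' dimension before the
# analog search. The toy array and its dimensions are hypothetical.
if (FALSE) {
  obs_toy <- array(rnorm(120), dim = c(sdate = 3, time = 2, lat = 4, lon = 5))
  pos <- c(2, 1, 3, 4)                        # put 'time' before 'sdate'
  obs_collapsed <- aperm(obs_toy, pos)
  dim(obs_collapsed) <- c(time = 6, lat = 4, lon = 5)
  dim(obs_collapsed)                          # time x lat x lon
}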
.analogs <- function (expL, obsL, time_expL, excludeTime = NULL,
obsVar = NULL, expVar = NULL,
time_obsL, criteria = "Large_dist",
lonL = NULL, latL = NULL,
lonVar = NULL, latVar = NULL, region = NULL,
nAnalogs = NULL, AnalogsInfo = FALSE, lon_name = 'lon',
lat_name = 'lat') {
if (all(excludeTime == "")) {
excludeTime = NULL
}
if (!is.null(obsL)) {
#obsL <- replace_time_dimnames(obsL)
if (any(time_expL %in% time_obsL)) {
if (is.null(excludeTime)) {
excludeTime <- time_expL
        warning("Parameter 'excludeTime' is NULL and 'time_obsL' contains ",
                "'time_expL'; by default, the dates of 'time_expL' will be ",
                "excluded from the search of analogs.")
} else {
      `%!in%` <- Negate(`%in%`)
      if (any(time_expL %!in% excludeTime)) {
        excludeTime <- c(excludeTime, time_expL)
        warning("Parameter 'excludeTime' is not NULL and 'time_obsL' contains ",
                "'time_expL'; the dates of 'time_expL' will also be ",
                "excluded from the search of analogs.")
}
}
time_ref <- time_obsL[-c(which(time_obsL %in% excludeTime))]
posdim <- which(names(dim(obsL)) == 'time')
posref <- which(time_obsL %in% time_ref)
obsT <- Subset(obsL, along = posdim, indices = posref)
if (!is.null(obsVar)) {
obsTVar <- Subset(obsVar, along = posdim, indices = posref)
}
time_obsL <- time_ref
obsL <- obsT
if (!is.null(obsVar)) {
obsVar <- obsTVar
}
} else {
if (is.null(excludeTime)) {
if (!is.null(obsVar)) {
        warning("Parameter 'excludeTime' is NULL and 'time_obsL' does not ",
                "contain 'time_expL'; 'obsVar' is not NULL.")
} else {
        warning("Parameter 'excludeTime' is NULL and 'time_obsL' does not ",
                "contain 'time_expL'.")
}
} else {
time_ref <- time_obsL[-c(which(time_obsL %in% excludeTime))]
posdim <- which(names(dim(obsL)) == 'time')
posref <- which(time_obsL %in% time_ref)
obsT <- Subset(obsL, along = posdim, indices = posref)
if (!is.null(obsVar)) {
obsTVar <- Subset(obsVar, along = posdim, indices = posref)
}
time_obsL <- time_ref
obsL <- obsT
if (!is.null(obsVar)) {
obsVar <- obsTVar
}
if (!is.null(obsVar)) {
          warning("Parameter 'excludeTime' has a value and 'time_obsL' does ",
                  "not contain 'time_expL'; 'obsVar' is not NULL.")
} else {
          warning("Parameter 'excludeTime' has a value and 'time_obsL' does ",
                  "not contain 'time_expL'.")
}
}
}
} else {
    stop("Parameter 'obsL' cannot be NULL.")
}
if (length(time_obsL) == 0) {
    stop("Parameter 'time_obsL' cannot have length 0.")
}
Analog_result <- FindAnalog(expL = expL, obsL = obsL, time_obsL = time_obsL,
expVar = expVar, obsVar = obsVar,
criteria = criteria,
AnalogsInfo = AnalogsInfo,
nAnalogs = nAnalogs,
lonL = lonL, latL = latL, lonVar = lonVar,
latVar = latVar, region = region,
lon_name = lon_name, lat_name = lat_name)
if (AnalogsInfo == TRUE) {
return(list(AnalogsFields = Analog_result$AnalogsFields,
AnalogsInfo = Analog_result$Analog,
AnalogsMetric = Analog_result$metric,
AnalogsDates = Analog_result$dates))
} else {
return(AnalogsFields = Analog_result$AnalogsFields)
}
}
FindAnalog <- function(expL, obsL, time_obsL, expVar, obsVar, criteria,
lonL, latL, lonVar,
latVar, region, nAnalogs = nAnalogs,
AnalogsInfo = AnalogsInfo, lon_name = 'lon', lat_name = 'lat') {
position <- Select(expL = expL, obsL = obsL, expVar = expVar,
obsVar = obsVar, criteria = criteria,
lonL = lonL, latL = latL, lonVar = lonVar,
latVar = latVar, region = region,
lon_name = lon_name, lat_name = lat_name)$position
metrics <- Select(expL = expL, obsL = obsL, expVar = expVar,
obsVar = obsVar, criteria = criteria, lonL = lonL,
latL = latL, lonVar = lonVar,
latVar = latVar, region = region,
lon_name = lon_name, lat_name = lat_name)$metric.original
best <- Apply(list(position), target_dims = c('time', 'pos'),
fun = BestAnalog, criteria = criteria,
AnalogsInfo = AnalogsInfo, nAnalogs = nAnalogs)$output1
Analogs_dates <- time_obsL[best]
dim(Analogs_dates) <- dim(best)
if (all(!is.null(region), !is.null(lonVar), !is.null(latVar))) {
if (is.null(obsVar)) {
obsVar <- SelBox(obsL, lon = lonL, lat = latL, region = region,
londim = lon_name, latdim = lat_name)$data
expVar <- SelBox(expL, lon = lonL, lat = latL, region = region,
londim = lon_name, latdim = lat_name)$data
Analogs_fields <- Subset(obsVar,
along = which(names(dim(obsVar)) == 'time'),
indices = best)
warning("Parameter 'obsVar' is NULL and the returned field ",
"will be computed from 'obsL' (same variable).")
} else {
obslocal <- SelBox(obsVar, lon = lonVar, lat = latVar,
region = region, londim = lon_name, latdim = lat_name)$data
Analogs_fields <- Subset(obslocal,
along = which(names(dim(obslocal)) == 'time'),
indices = best)
}
} else {
    warning("One or more of the parameters 'region', 'lonVar' and 'latVar' ",
            "are NULL and the large scale field will be returned.")
if (is.null(obsVar)) {
Analogs_fields <- Subset(obsL, along = which(names(dim(obsL)) == 'time'),
indices = best)
} else {
Analogs_fields <- Subset(obsVar,
along = which(names(dim(obsVar)) == 'time'),
indices = best)
}
}
lon_dim <- which(names(dim(Analogs_fields)) == lon_name)
lat_dim <- which(names(dim(Analogs_fields)) == lat_name)
Analogs_metrics <- Subset(metrics,
along = which(names(dim(metrics)) == 'time'),
indices = best)
analog_number <- as.numeric(1:nrow(Analogs_metrics))
dim(analog_number) <- c(nAnalog = length(analog_number))
dim(Analogs_dates) <- c(nAnalog = length(Analogs_dates))
return(list(AnalogsFields = Analogs_fields,
Analog = analog_number,
metric = Analogs_metrics,
dates = Analogs_dates))
}
BestAnalog <- function(position, nAnalogs = nAnalogs, AnalogsInfo = FALSE,
criteria = 'Large_dist') {
pos_dim <- which(names(dim(position)) == 'pos')
if (dim(position)[pos_dim] == 1) {
pos1 <- Subset(position, along = pos_dim, indices = 1)
if (criteria != 'Large_dist') {
warning("Dimension 'pos' in parameter 'position' has length 1,",
" criteria 'Large_dist' will be used.")
criteria <- 'Large_dist'
}
} else if (dim(position)[pos_dim] == 2) {
pos1 <- Subset(position, along = pos_dim, indices = 1)
pos2 <- Subset(position, along = pos_dim, indices = 2)
if (criteria == 'Local_cor') {
warning("Dimension 'pos' in parameter 'position' has length 2,",
" criteria 'Local_dist' will be used.")
criteria <- 'Local_dist'
}
} else if (dim(position)[pos_dim] == 3) {
pos1 <- Subset(position, along = pos_dim, indices = 1)
pos2 <- Subset(position, along = pos_dim, indices = 2)
pos3 <- Subset(position, along = pos_dim, indices = 3)
if (criteria != 'Local_cor') {
      warning("Parameter 'criteria' is set to '", criteria, "'.")
}
} else {
stop("Parameter 'position' has dimension 'pos' of different ",
"length than expected (from 1 to 3).")
}
if (criteria == 'Large_dist') {
if (AnalogsInfo == FALSE) {
pos <- pos1[1]
} else {
pos <- pos1[1 : nAnalogs]
}
} else if (criteria == 'Local_dist') {
pos1 <- pos1[1 : nAnalogs]
pos2 <- pos2[1 : nAnalogs]
best <- match(pos1, pos2)
if (length(best) == 1) {
      warning("Just 1 best analog matching the 'Large_dist' and ",
              "'Local_dist' criteria.")
}
    if (length(best) < 1 | is.na(best[1])) {
      stop("No best analogs matching the 'Large_dist' and 'Local_dist' ",
           "criteria; please increase 'nAnalogs'.")
}
pos <- pos2[as.logical(best)]
pos <- pos[which(!is.na(pos))]
    if (AnalogsInfo == FALSE) {
      pos <- pos[1]
    }
} else if (criteria == 'Local_cor') {
pos1 <- pos1[1 : nAnalogs]
pos2 <- pos2[1 : nAnalogs]
best <- match(pos1, pos2)
if (length(best) == 1) {
      warning("Just 1 best analog matching the 'Large_dist' and ",
              "'Local_dist' criteria.")
}
    if (length(best) < 1 | is.na(best[1])) {
      stop("No best analogs matching the 'Large_dist' and 'Local_dist' ",
           "criteria; please increase 'nAnalogs'.")
}
pos <- pos1[as.logical(best)]
pos <- pos[which(!is.na(pos))]
pos3 <- pos3[1 : nAnalogs]
best <- match(pos, pos3)
if (length(best) == 1) {
      warning("Just 1 best analog matching the 'Large_dist', 'Local_dist' and ",
              "'Local_cor' criteria.")
}
    if (length(best) < 1 | is.na(best[1])) {
      stop("No best analogs matching the 'Large_dist', 'Local_dist' and ",
           "'Local_cor' criteria; please increase 'nAnalogs'.")
    }
    pos <- pos[order(best, decreasing = FALSE)]
    pos <- pos[which(!is.na(pos))]
    if (AnalogsInfo == FALSE) {
      pos <- pos[1]
    }
  }
  return(pos)
}
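# A minimal sketch (guarded by if (FALSE), so it is never executed when the
# package is loaded) replicating the 'Local_dist' matching done in BestAnalog()
# with toy rankings. 'pos1' and 'pos2' are hypothetical orderings of observed
# days by large-scale and local-scale distance; stepping through it shows how
# the two rankings are combined before the first element is taken as the best
# analog.
if (FALSE) {
  pos1 <- c(7, 2, 9, 4, 1)          # days ordered by large-scale distance
  pos2 <- c(2, 7, 5, 9, 3)          # days ordered by local-scale distance
  best <- match(pos1, pos2)         # where each large-scale analog falls in pos2
  pos  <- pos2[as.logical(best)]    # same indexing as in BestAnalog()
  pos  <- pos[which(!is.na(pos))]
  pos[1]                            # analog returned when AnalogsInfo = FALSE
}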
Select <- function(expL, obsL, expVar = NULL, obsVar = NULL,
criteria = "Large_dist", lonL = NULL, latL = NULL,
lonVar = NULL, latVar = NULL, region = NULL,
lon_name = 'lon', lat_name = 'lat') {
names(dim(expL)) <- replace_repeat_dimnames(names(dim(expL)),
names(dim(obsL)),
lon_name = lon_name,
lat_name = lat_name)
metric1 <- Apply(list(obsL), target_dims = list(c(lat_name, lon_name)),
fun = .select, expL, metric = "dist",
lon_name = lon_name, lat_name = lat_name)$output1
metric1.original = metric1
if (length(dim(metric1)) > 1) {
dim_time_obs <- which(names(dim(metric1)) == 'time' |
names(dim(metric1)) == 'ftime')
dim(metric1) <- c(dim(metric1), metric=1)
margins <- c(1 : (length(dim(metric1))))[-dim_time_obs]
pos1 <- apply(metric1, margins, order)
names(dim(pos1))[1] <- 'time'
metric1.original = metric1
metric1 <- apply(metric1, margins, sort)
names(dim(metric1))[1] <- 'time'
names(dim(metric1.original)) = names(dim(metric1))
} else {
pos1 <- order(metric1)
dim(pos1) <- c(time = length(pos1))
metric1 <- sort(metric1)
dim(metric1) <- c(time = length(metric1))
dim(metric1.original) = dim(metric1)
dim_time_obs = 1
}
if (criteria == "Large_dist") {
dim(metric1) <- c(dim(metric1), metric = 1)
dim(pos1) <- c(dim(pos1), pos = 1)
dim(metric1.original) = dim(metric1)
return(list(metric = metric1, metric.original = metric1.original,
position = pos1))
}
if (criteria == "Local_dist" | criteria == "Local_cor") {
obs <- SelBox(obsL, lon = lonL, lat = latL, region = region,
londim = lon_name, latdim = lat_name)$data
exp <- SelBox(expL, lon = lonL, lat = latL, region = region,
londim = lon_name, latdim = lat_name)$data
metric2 <- Apply(list(obs), target_dims = list(c(lat_name, lon_name)),
fun = .select, exp, metric = "dist",
lon_name = lon_name, lat_name = lat_name)$output1
metric2.original = metric2
dim(metric2) <- c(dim(metric2), metric=1)
margins <- c(1 : (length(dim(metric2))))[-dim_time_obs]
pos2 <- apply(metric2, margins, order)
dim(pos2) <- dim(pos1)
names(dim(pos2))[1] <- 'time'
metric2 <- apply(metric2, margins, sort)
names(dim(metric2))[1] <- 'time'
if (criteria == "Local_dist") {
metric <- abind(metric1, metric2, along = length(dim(metric1))+1)
metric.original <- abind(metric1.original,metric2.original,
along = length(dim(metric1))+1)
position <- abind(pos1, pos2, along = length(dim(pos1))+1)
names(dim(metric)) <- c(names(dim(pos1)), 'metric')
names(dim(position)) <- c(names(dim(pos1)), 'pos')
names(dim(metric.original)) = names(dim(metric))
return(list(metric = metric, metric.original = metric.original,
position = position))
}
}
if (criteria == "Local_cor") {
obs <- SelBox(obsVar, lon = lonVar, lat = latVar, region = region,
londim = lon_name, latdim = lat_name)$data
exp <- SelBox(expVar, lon = lonVar, lat = latVar, region = region,
londim = lon_name, latdim = lat_name)$data
metric3 <- Apply(list(obs), target_dims = list(c(lat_name, lon_name)),
fun = .select, exp, metric = "cor",
lon_name = lon_name, lat_name = lat_name)$output1
metric3.original = metric3
dim(metric3) <- c(dim(metric3), metric=1)
margins <- c(1 : (length(dim(metric3))))[-dim_time_obs]
pos3 <- apply(abs(metric3), margins, order, decreasing = TRUE)
names(dim(pos3))[1] <- 'time'
metricsort <- metric3[pos3]
dim(metricsort) = dim(metric3)
names(dim(metricsort))[1] <- 'time'
metric <- abind(metric1, metric2, metricsort,
along = length(dim(metric1)) + 1)
metric.original <- abind(metric1.original, metric2.original,
metric3.original,
along = length(dim(metric1)) + 1)
position <- abind(pos1, pos2, pos3, along = length(dim(pos1)) + 1)
names(dim(metric)) <- c(names(dim(metric1)), 'metric')
names(dim(position)) <- c(names(dim(pos1)), 'pos')
names(dim(metric.original)) = names(dim(metric))
return(list(metric = metric, metric.original=metric.original,
position = position))
  } else {
    stop("Parameter 'criteria' must be one of: 'Large_dist', ",
         "'Local_dist' or 'Local_cor'.")
  }
}
.select <- function(exp, obs, metric = "dist",
lon_name = 'lon', lat_name = 'lat') {
if (metric == "dist") {
result <- Apply(list(obs), target_dims = list(c(lat_name, lon_name)),
fun = function(x) {sqrt(sum((x - exp) ^ 2, na.rm = TRUE))})$output1
} else if (metric == "cor") {
result <- Apply(list(obs), target_dims = list(c(lat_name, lon_name)),
fun = function(x) {cor(as.vector(x),
as.vector(exp),
method = "spearman")})$output1
}
result
}
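# A minimal sketch (guarded by if (FALSE), so it is never executed when the
# package is loaded) of the two metrics computed by .select on a single
# observed field: the Euclidean distance used for 'Large_dist'/'Local_dist'
# and the Spearman correlation used for 'Local_cor'. The toy arrays are
# hypothetical.
if (FALSE) {
  exp_toy <- array(rnorm(12), dim = c(lat = 3, lon = 4))
  obs_toy <- array(rnorm(12), dim = c(lat = 3, lon = 4))
  sqrt(sum((obs_toy - exp_toy)^2, na.rm = TRUE))             # distance metric
  cor(as.vector(obs_toy), as.vector(exp_toy),
      method = "spearman")                                    # correlation metric
}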
.time_ref <- function(time_obsL, time_expL, excludeTime) {
  sameTime <- which(time_obsL %in% time_expL)
  result <- c(time_obsL[1:(sameTime - excludeTime - 1)],
              time_obsL[(sameTime + excludeTime + 1):length(time_obsL)])
  result
}
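# A minimal sketch (guarded by if (FALSE), so it is never executed when the
# package is loaded) of .time_ref: it drops a symmetric window of 'excludeTime'
# positions around the entry of 'time_obsL' that matches 'time_expL'. The toy
# dates are hypothetical; note that here 'excludeTime' is a number of
# positions, not a vector of dates.
if (FALSE) {
  time_obsL_toy <- format(seq(as.Date("2000-01-01"), by = "day",
                              length.out = 10), "%d-%m-%Y")
  .time_ref(time_obsL_toy, time_expL = "05-01-2000", excludeTime = 1)
  # keeps days 1-3 and 7-10, removing the matched day and one day on each side
}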
replace_repeat_dimnames <- function(names_exp, names_obs, lat_name = 'lat',
lon_name = 'lon') {
if (!is.character(names_exp)) {
stop("Parameter 'names_exp' must be a vector of characters.")
}
if (!is.character(names_obs)) {
stop("Parameter 'names_obs' must be a vector of characters.")
}
latlon_dim_exp <- which(names_exp == lat_name | names_exp == lon_name)
latlon_dim_obs <- which(names_obs == lat_name | names_obs == lon_name)
if (any(unlist(lapply(names_exp[-latlon_dim_exp],
function(x){x == names_obs[-latlon_dim_obs]})))) {
original_pos <- lapply(names_exp,
function(x) which(x == names_obs[-latlon_dim_obs]))
original_pos <- lapply(original_pos, length) > 0
names_exp[original_pos] <- paste0(names_exp[original_pos], "_exp")
}
return(names_exp)
}
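# A minimal sketch (guarded by if (FALSE), so it is never executed when the
# package is loaded) of what replace_repeat_dimnames does: non-spatial
# dimension names shared by the experimental and observed fields get an '_exp'
# suffix on the experimental side, so that Apply() does not loop over them
# jointly. The dimension names below are hypothetical.
if (FALSE) {
  replace_repeat_dimnames(c("time", "lat", "lon"), c("time", "lat", "lon"))
  # returns c("time_exp", "lat", "lon")
}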
replace_time_dimnames <- function(dataL, time_name = 'time',
stdate_name = 'stdate', ftime_name='ftime') {
names_obs = names(dim(dataL))
if (!is.character(names_obs)) {
stop("Parameter 'names_obs' must be a vector of characters.")
}
time_dim_obs <- which(names_obs == time_name |
names_obs == stdate_name | names_obs == ftime_name)
  if (length(time_dim_obs) > 1) {
    stop("More than one time dimension found; please provide only one.")
  }
  if (length(time_dim_obs) == 0) {
    warning("The time dimension name is not 'ftime', 'time' or 'stdate', ",
            "or the time dimension is NULL.")
  }
  if (length(time_dim_obs) != 0) {
    names_obs[time_dim_obs] <- time_name
  }
  names(dim(dataL)) <- names_obs
return(dataL)
}
#' AEMET Downscaling
#' Precipitation and maximum and minimum temperature downscaling method
#' based on analogs: synoptic situations and significant predictors.
#'
#'@author Marta Dominguez Alonso - AEMET, \email{[email protected]}
#'@author Nuria Perez-Zanon - BSC, \email{[email protected]}
#'
#'@description This function downscales low resolution precipitation data (e.g.
#'from Seasonal Forecast Models) through the association with an observational
#'high resolution (HR) dataset (AEMET 5 km gridded data of daily precipitation
#'(Peral et al., 2017)) and a collection of predictors and past synoptic
#'situations similar to the estimated day. The method uses three domains:
#'\itemize{
#' \item{Peninsular Spain and Balearic Islands domain (5 km resolution): HR precipitation
#' and the downscaling result domain.}
#' \item{Synoptic domain (low resolution, e.g. 1.5º x 1.5º): it should be
#' centered over Iberian Peninsula and cover enough extension to detect
#' as much synoptic situations as possible.}
#' \item{Extended domain (low resolution, e.g. 1.5º x 1.5º): it should have the
#'  same resolution as the synoptic domain. It is used for the SLP of the
#'  Seasonal Forecast Models.}
#' }
#'
#'@param exp List of arrays with downscaled period seasonal forecast data. The
#' list has to contain model atmospheric variables (instantaneous 12h data)
#' identified by the names given in parentheses below. For precipitation:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500_mod) in m/s.}
#' \item{v component of wind at 500 hPa (v500_mod) in m/s.}
#' \item{temperature at 500 hPa (t500_mod) in K.}
#' \item{temperature at 850 hPa (t850_mod) in K.}
#' \item{specific humidity at 700 hPa (q700_mod) in g/kg. }
#' }
#' For temperature:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500_mod) in m/s.}
#' \item{v component of wind at 500 hPa (v500_mod) in m/s.}
#' \item{temperature at 500 hPa (t500_mod) in K.}
#' \item{temperature at 700 hPa (t700_mod) in K. }
#' \item{temperature at 850 hPa (t850_mod) in K.}
#' \item{specific humidity at 700 hPa (q700_mod) in g/kg. }
#' \item{2 meters temperature (tm2m_mod) in K.}
#' }
#' The arrays must have at least three dimensions with names 'lon', 'lat' and
#' 'time'. (lon = gridpoints of longitude, lat = gridpoints of latitude,
#' time = number of downscaling days) Seasonal forecast variables must have the
#' same resolution and domain as reanalysis variables ('obs' parameter, below).
#'@param slp Array with atmospheric seasonal forecast model sea level pressure
#' (instantaneous 12h data) that must be identified as 'slp' (hPa). It has the
#' same resolution as the 'exp' and 'obs' parameters but with an extended
#' domain. This domain contains extra degrees (mostly in the north and west
#' part) compared to the synoptic domain. The array must have at least three
#' dimensions with names 'lon', 'lat' and 'time'.
#'@param obs List of arrays with training period reanalysis data.
#' The list has to contain reanalysis atmospheric variables (instantaneous
#' 12h data) identified by the names given in parentheses below. For precipitation:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500) in m/s.}
#' \item{v component of wind at 500 hPa (v500) in m/s.}
#' \item{temperature at 500 hPa (t500) in K.}
#' \item{temperature at 850 hPa (t850) in K.}
#' \item{sea level pressure (slp) in hPa.}
#' \item{specific humidity at 700 hPa (q700) in g/kg.}
#' }
#' For maximum and minimum temperature:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500) in m/s.}
#' \item{v component of wind at 500 hPa (v500) in m/s.}
#' \item{temperature at 500 hPa (t500) in K.}
#' \item{temperature at 700 hPa (t700) in K.}
#' \item{temperature at 850 hPa (t850) in K.}
#' \item{sea level pressure (slp) in hPa.}
#' \item{specific humidity at 700 hPa (q700) in g/kg}
#' \item{2 meters temperature (tm2m) in K}
#' }
#' The arrays must have at least three dimensions with names 'lon', 'lat' and
#' 'time'.
#'@param lon Vector of the synoptic longitude (from (-180º) to 180º).
#' The vector must go from west to east. The same as for the training function.
#'@param lat Vector of the synoptic latitude. The vector must go from north to
#' south. The same as for the training function.
#'@param slp_lon Vector of the extended longitude (from (-180º) to 180º).
#' The vector must go from west to east. The same as for the training function.
#'@param slp_lat Vector of the extended latitude. The vector must go from north
#' to south. The same as for the training function.
#'@param var_name Variable name to downscale. There are two options: 'prec' for
#' precipitation and 'temp' for maximum and minimum temperature.
#'@param hr_obs Local path of HR observational files (maestro and pcp/tmx-tmn).
#' Precipitation and temperature files can be downloaded from the following link:
#' \url{https://www.aemet.es/en/serviciosclimaticos/cambio_climat/datos_diarios?w=2}
#' respectively. The maestro file (maestro_red_hr_SPAIN.txt) has gridpoint
#' (nptos), longitude (lon), latitude (lat) and altitude (alt) in columns
#' (vector structure). The data file (pcp/tmx/tmn_red_SPAIN_1951-201903.txt)
#' includes 5 km resolution Spanish daily data (precipitation or maximum and
#' minimum temperature from January 1951 to June 2020). See the README file for
#' more information. IMPORTANT!: the HR observational period must be the same
#' as for the reanalysis variables. It is assumed that the training period is
#' smaller than the HR original one (1951-2019), so a new ascii file has to be
#' created with the new period and the same structure as the original,
#' specifying the training dates in the name
#' (e.g. 'pcp_red_SPAIN_19810101-19961231.txt' for the '19810101-19961231' period).
#'@param tdates Training period dates in format YYYYMMDD(start)-YYYYMMDD(end)
#' (e.g. 19810101-20181231).
#'@param ddates Downscaling period dates in format YYYYMMDD(start)-YYYYMMDD(end)
#' (e.g. 20191001-20200331).
#'@param restrain Output (list of matrices) obtained from the 'training_analogs'
#' function. For precipitation, the 'restrain' object must contain um, vm, nger,
#' gu92, gv92, gu52, gv52, neni, vdmin, vref, ccm, lab_pred and cor_pred
#' variables. For maximum and minimum temperature, the 'restrain' object must
#' contain um, vm, insol, neni, vdmin and vref. See 'AnalogsPred_train.R' for
#' more information.
#'@param dim_name_longitude A character string indicating the name of the
#' longitude dimension, by default 'lon'.
#'@param dim_name_latitude A character string indicating the name of the
#' latitude dimension, by default 'lat'.
#'@param dim_name_time A character string indicating the name of the time
#' dimension, by default 'time'.
#'@return Matrix with seasonal forecast precipitation (mm) or maximum and
#'minimum temperature (dozens of ºC) in a 5 km x 5 km regular grid over
#'peninsular Spain and the Balearic Islands. The resulting matrices have two
#'dimensions ('ddates' x 'nptos'), where ddates is the number of downscaling
#'days and nptos is the number of 'hr_obs' gridpoints.
#'
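#'@examples
#'# A minimal sketch (not run): the objects used below (u500_mod, v500_mod,
#'# t500_mod, t850_mod, q700_mod, slp_fcst, u500, v500, t500, t850, slp_rea,
#'# q700, lon, lat, slp_lon, slp_lat and restrain) are hypothetical and only
#'# illustrate the expected structure of the arguments.
#'\dontrun{
#'exp <- list(u500_mod = u500_mod, v500_mod = v500_mod, t500_mod = t500_mod,
#'            t850_mod = t850_mod, q700_mod = q700_mod)
#'obs <- list(u500 = u500, v500 = v500, t500 = t500, t850 = t850,
#'            slp = slp_rea, q700 = q700)
#'down <- CST_AnalogsPredictors(exp = exp, slp = slp_fcst, obs = obs,
#'                              lon = lon, lat = lat,
#'                              slp_lon = slp_lon, slp_lat = slp_lat,
#'                              var_name = "prec", hr_obs = "/path/to/HR_files/",
#'                              tdates = "19810101-20181231",
#'                              ddates = "20191001-20200331",
#'                              restrain = restrain)
#'}
#'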
#'@useDynLib CSTools
#'@export
CST_AnalogsPredictors <- function(exp, slp, obs, lon, lat, slp_lon, slp_lat,
var_name, hr_obs, tdates, ddates, restrain,
dim_name_longitude = "lon",
dim_name_latitude = "lat",
dim_name_time = "time") {
if (!is.list(exp)) {
stop("Parameter 'exp' must be a list of 'array' objects")
}
if (!(all(sapply(exp, inherits, 'array')))) {
stop("Elements of the list in parameter 'exp' must be of the class ",
"'array'.")
}
if (!is.array(slp)) {
stop("Parameter 'slp' must be of the class 'array'.")
}
if (!is.list(obs)) {
stop("Parameter 'obs' must be a list of 'array' objects")
}
if (!(all(sapply(obs, inherits, 'array')))) {
stop("Elements of the list in parameter 'obs' must be of the class ",
"'array'.")
}
if (var_name == "prec") {
    if (length(exp) != 5) {
      stop("Parameter 'exp' must have a length of 5.")
    } else {
      if (!(any(names(exp) %in% "u500_mod"))) {
        stop("Variable 'u500_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "v500_mod"))) {
        stop("Variable 'v500_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "t500_mod"))) {
        stop("Variable 't500_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "t850_mod"))) {
        stop("Variable 't850_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "q700_mod"))) {
        stop("Variable 'q700_mod' is missing in parameter 'exp'.")
      }
}
    if (length(obs) != 6) {
      stop("Parameter 'obs' must have a length of 6.")
    } else {
      if (!(any(names(obs) %in% "u500"))) {
        stop("Variable 'u500' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "v500"))) {
        stop("Variable 'v500' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "t500"))) {
        stop("Variable 't500' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "t850"))) {
        stop("Variable 't850' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "slp"))) {
        stop("Variable 'slp' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "q700"))) {
        stop("Variable 'q700' is missing in parameter 'obs'.")
      }
}
} else {
    if (length(exp) != 7) {
      stop("Parameter 'exp' must have a length of 7.")
    } else {
      if (!(any(names(exp) %in% "u500_mod"))) {
        stop("Variable 'u500_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "v500_mod"))) {
        stop("Variable 'v500_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "t500_mod"))) {
        stop("Variable 't500_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "t700_mod"))) {
        stop("Variable 't700_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "t850_mod"))) {
        stop("Variable 't850_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "q700_mod"))) {
        stop("Variable 'q700_mod' is missing in parameter 'exp'.")
      } else if (!(any(names(exp) %in% "tm2m_mod"))) {
        stop("Variable 'tm2m_mod' is missing in parameter 'exp'.")
      }
}
    if (length(obs) != 8) {
      stop("Parameter 'obs' must have a length of 8.")
    } else {
      if (!(any(names(obs) %in% "u500"))) {
        stop("Variable 'u500' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "v500"))) {
        stop("Variable 'v500' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "t500"))) {
        stop("Variable 't500' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "t700"))) {
        stop("Variable 't700' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "t850"))) {
        stop("Variable 't850' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "slp"))) {
        stop("Variable 'slp' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "q700"))) {
        stop("Variable 'q700' is missing in parameter 'obs'.")
      } else if (!(any(names(obs) %in% "tm2m"))) {
        stop("Variable 'tm2m' is missing in parameter 'obs'.")
      }
}
}
if (all((sapply(exp,dim)) == dim(exp[[1]]))) {
dim_exp <- dim(exp[[1]])
    if (!(any(names(dim_exp) %in% dim_name_longitude))) {
      stop("Dimension 'lon' is missing in parameter 'exp'.")
    }
    if (!(any(names(dim_exp) %in% dim_name_latitude))) {
      stop("Dimension 'lat' is missing in parameter 'exp'.")
    }
    if (!(any(names(dim_exp) %in% dim_name_time))) {
      stop("Dimension 'time' is missing in parameter 'exp'.")
    }
} else {
stop("All 'exp' variables must have the same dimensions.")
}
dim_slp <- dim(slp)
  if (!(any(names(dim_slp) %in% dim_name_longitude))) {
    stop("Dimension 'lon' is missing in parameter 'slp'.")
  }
  if (!(any(names(dim_slp) %in% dim_name_latitude))) {
    stop("Dimension 'lat' is missing in parameter 'slp'.")
  }
  if (!(any(names(dim_slp) %in% dim_name_time))) {
    stop("Dimension 'time' is missing in parameter 'slp'.")
}
if (all((sapply(obs,dim))==dim(obs[[1]]))) {
dim_obs <- dim(obs[[1]])
    if (!(any(names(dim_obs) %in% dim_name_longitude))) {
      stop("Dimension 'lon' is missing in parameter 'obs'.")
    }
    if (!(any(names(dim_obs) %in% dim_name_latitude))) {
      stop("Dimension 'lat' is missing in parameter 'obs'.")
    }
    if (!(any(names(dim_obs) %in% dim_name_time))) {
      stop("Dimension 'time' is missing in parameter 'obs'.")
}
} else {
stop("All 'obs' variables must have the same dimensions.")
}
if (!is.vector(lon) || !is.numeric(lon)) {
stop("Parameter 'lon' must be a numeric vector")
} else {
if (is.unsorted(lon)) {
lon <- sort(lon)
warning("'lon' vector has been sorted in increasing order")
}
}
if (!is.vector(lat) || !is.numeric(lat)) {
stop("Parameter 'lat' must be a numeric vector")
} else {
if (!is.unsorted(lat)) {
lat <- sort(lat, decreasing = TRUE)
warning("'lat' vector has been sorted in decreasing order")
}
}
if (!is.vector(slp_lon) || !is.numeric(slp_lon)) {
stop("Parameter 'slp_lon' must be a numeric vector")
} else {
if (is.unsorted(slp_lon)) {
      slp_lon <- sort(slp_lon)
warning("'slp_lon' vector has been sorted in increasing order")
}
}
if (!is.vector(slp_lat) || !is.numeric(slp_lat)) {
stop("Parameter 'slp_lat' must be a numeric vector")
} else {
if (!is.unsorted(slp_lat)) {
      slp_lat <- sort(slp_lat, decreasing = TRUE)
warning("'slp_lat' vector has been sorted in decreasing order")
}
}
if (!is.character(hr_obs)){
stop("Parameter 'hr_obs' must be a character.")
} else {
if (!dir.exists(hr_obs)) {
stop("'hr_obs' directory does not exist")
}
}
if (!is.character(tdates)) {
stop("Parameter 'tdates' must be a character.")
} else {
    if (nchar(tdates) != 17) {
      stop("Parameter 'tdates' must be a string with 17 characters.")
    } else {
      dateini <- as.Date(substr(tdates, start = 1, stop = 8), format = "%Y%m%d")
      dateend <- as.Date(substr(tdates, start = 10, stop = 17), format = "%Y%m%d")
      if (dateend <= dateini) {
        stop("Parameter 'tdates' must span at least one day.")
}
}
}
if (!is.character(ddates)) {
stop("Parameter 'ddates' must be a character.")
} else {
    if (nchar(ddates) != 17) {
      stop("Parameter 'ddates' must be a string with 17 characters.")
    } else {
      dateini <- as.Date(substr(ddates, start = 1, stop = 8), format = "%Y%m%d")
      dateend <- as.Date(substr(ddates, start = 10, stop = 17), format = "%Y%m%d")
      if (dateend <= dateini) {
        stop("Parameter 'ddates' must span at least one day.")
}
}
}
if (names(dim(exp[[1]]))[1] == "lon" & names(dim(exp[[1]]))[2] == "lat"
|| names(dim(exp[[1]]))[2] == "lon" & names(dim(exp[[1]]))[3] == "lat") {
texp2D <- lapply(exp, MergeDims, merge_dims = c('lon', 'lat'),
rename_dim = 'gridpoint')
} else if (names(dim(exp[[1]]))[1] == "lat" & names(dim(exp[[1]]))[2] == "lon"
|| names(dim(exp[[1]]))[2] == "lat" & names(dim(exp[[1]]))[3] == "lon") {
texp2D <- lapply(exp, MergeDims, merge_dims = c('lat', 'lon'),
rename_dim = 'gridpoint')
}
if (names(dim(slp))[1] == "lon" & names(dim(slp))[2] == "lat"
|| names(dim(slp))[2] == "lon" & names(dim(slp))[3] == "lat") {
tslp2D <- MergeDims(slp,merge_dims = c('lon', 'lat'),
rename_dim = 'gridpoint')
} else if (names(dim(slp))[1] == "lat" & names(dim(slp))[2] == "lon"
|| names(dim(slp))[2] == "lat" & names(dim(slp))[3] == "lon") {
tslp2D <- MergeDims(slp,merge_dims = c('lat', 'lon'),
rename_dim = 'gridpoint')
}
if (names(dim(obs[[1]]))[1] == "lon" & names(dim(obs[[1]]))[2] == "lat"
|| names(dim(obs[[1]]))[2] == "lon" & names(dim(obs[[1]]))[3] == "lat") {
tobs2D <- lapply(obs, MergeDims, merge_dims = c('lon', 'lat'),
rename_dim = 'gridpoint')
} else if (names(dim(obs[[1]]))[1] == "lat" & names(dim(obs[[1]]))[2] == "lon"
|| names(dim(obs[[1]]))[2] == "lat" & names(dim(obs[[1]]))[3] == "lon") {
tobs2D <- lapply(obs, MergeDims, merge_dims = c('lat', 'lon'),
rename_dim = 'gridpoint')
}
if (names(dim(texp2D[[1]]))[1] == "gridpoint") {
exp2D <- lapply(texp2D,aperm)
} else {
exp2D <- texp2D
}
if (names(dim(tslp2D))[1] == "gridpoint") {
slp2D <- aperm(tslp2D)
} else {
slp2D <- tslp2D
}
if (names(dim(tobs2D[[1]]))[1] == "gridpoint") {
obs2D <- lapply(tobs2D,aperm)
} else {
obs2D <- tobs2D
}
  downres <- .analogspred(exp2D, slp2D, obs2D, lon, lat, slp_lon, slp_lat,
                          var_name, hr_obs, tdates, ddates, restrain)
  return(downres)
}
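# A minimal sketch (guarded by if (FALSE), so it is never executed when the
# package is loaded) of the reshaping applied above: the spatial dimensions are
# merged into a single 'gridpoint' dimension with MergeDims and, when
# 'gridpoint' ends up first, the matrix is transposed so that 'time' comes
# first, which is the layout expected by .analogspred. The toy array is
# hypothetical.
if (FALSE) {
  x <- array(rnorm(3 * 4 * 5), dim = c(lon = 3, lat = 4, time = 5))
  x2d <- MergeDims(x, merge_dims = c("lon", "lat"), rename_dim = "gridpoint")
  dim(x2d)            # spatial dims merged into 'gridpoint'
  dim(aperm(x2d))     # transposed, as done above when 'gridpoint' comes first
}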
#' Atomic .analogspred function
#'
#'@author Marta Dom\'inguez Alonso - AEMET, \email{[email protected]}
#'
#'@description This function works with lists of matrices from reanalysis and
#'seasonal forecast data and uses a Fortran interface (.Fortran) to run an
#'analogs method developed at AEMET.
#'@param pred_mod List of matrices with downscaled period seasonal forecast
#' data. The list has to contain model atmospheric variables (instantaneous 12h
#' data) identified by the names given in parentheses below. For precipitation:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500_mod) in m/s.}
#' \item{v component of wind at 500 hPa (v500_mod) in m/s.}
#' \item{temperature at 500 hPa (t500_mod) in K.}
#' \item{temperature at 850 hPa (t850_mod) in K.}
#' \item{specific humidity at 700 hPa (q700_mod) in g/kg.}
#' }
#' For temperature:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500_mod) in m/s.}
#' \item{v component of wind at 500 hPa (v500_mod) in m/s.}
#' \item{temperature at 500 hPa (t500_mod) in K.}
#' \item{temperature at 700 hPa (t700_mod) in K.}
#' \item{temperature at 850 hPa (t850_mod) in K.}
#' \item{specific humidity at 700 hPa (q700_mod) in g/kg.}
#' \item{2 meters temperature (tm2m_mod) in K.}
#' }
#' Seasonal forecast variables must have the same resolution and
#' domain as 'pred_rea' parameter. All matrices must have two dimensions with
#' names 'time' and 'gridpoint'.
#'@param pred_slp Matrix with atmospheric seasonal forecast model sea level
#' pressure (instantaneous 12h data) that must be identified as 'slp'. It has
#' the same resolution as the 'pred_mod' parameter but with an extended domain.
#' This domain contains extra degrees (mostly in the north and west part)
#' compared to the synoptic domain. The matrix must have two dimensions with
#' names 'time' and 'gridpoint'.
#'@param pred_rea List of matrices with training period reanalysis data. The
#' list has to contain reanalysis atmospheric variables (instantaneous 12h
#' data) identified by the names given in parentheses below. For precipitation:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500) in m/s.}
#' \item{v component of wind at 500 hPa (v500) in m/s.}
#' \item{temperature at 500 hPa (t500) in K.}
#' \item{temperature at 850 hPa (t850) in K.}
#' \item{sea level pressure (slp) in hPa.}
#' \item{specific humidity at 700 hPa (q700) in g/kg.}
#' }
#' For maximum and minimum temperature:
#' \itemize{
#' \item{u component of wind at 500 hPa (u500) in m/s.}
#' \item{v component of wind at 500 hPa (v500) in m/s.}
#' \item{temperature at 500 hPa (t500) in K.}
#' \item{temperature at 700 hPa (t700) in K.}
#' \item{temperature at 850 hPa (t850) in K.}
#' \item{sea level pressure (slp) in hPa.}
#' \item{specific humidity at 700 hPa (q700) in g/kg.}
#' \item{2 meters temperature (tm2m) in K}
#' }
#' All matrices must have two dimensions with names 'ddates' and 'gridpoint'.
#'@param lon Vector of the synoptic longitude (from (-180º) to 180º).
#' The vector must go from west to east.
#'@param lat Vector of the synoptic latitude. The vector must go from north to
#' south.
#'@param slp_lon Vector of the extended longitude (from (-180º) to 180º).
#' The vector must go from west to east.
#'@param slp_lat Vector of the extended latitude. The vector must go from north
#' to south.
#'@param var Variable name to downscale. There are two options: 'prec' for
#' precipitation and 'temp' for maximum and minimum temperature.
#'@param HR_path Local path of HR observational files (maestro and pcp/tmx-tmn).
#' Precipitation and temperature files can be downloaded from the following link:
#' \url{https://www.aemet.es/en/serviciosclimaticos/cambio_climat/datos_diarios?w=2}
#' respectively. The maestro file (maestro_red_hr_SPAIN.txt) has gridpoint
#' (nptos), longitude (lon), latitude (lat) and altitude (alt) in columns
#' (vector structure). The data file (pcp/tmx/tmn_red_SPAIN_1951-201903.txt)
#' includes 5 km resolution Spanish daily data (precipitation or maximum and
#' minimum temperature from January 1951 to March 2019). See the README file
#' for more information. IMPORTANT!: the HR observational period must be the
#' same as for the reanalysis variables ('pred_rea' parameter). It is assumed
#' that the training period is smaller than the HR original one (1951-2019), so
#' a new ascii file has to be created with the new period and the same structure
#' as the original, specifying the training dates in the name (e.g.
#' 'pcp_red_SPAIN_19810101-19961231.txt' for the '19810101-19961231' period).
#'@param tdates Training period dates in format YYYYMMDD(start)-YYYYMMDD(end)
#' (e.g. 19810101-20181231). The same as for the training function.
#'@param ddates Downscaling period dates in format YYYYMMDD(start)-YYYYMMDD(end)
#' (e.g. 20191001-20200331).
#'@param restrain Output (list of matrices) obtained from the 'training_analogs'
#' function. For precipitation, the 'restrain' object must contain um, vm, nger,
#' gu92, gv92, gu52, gv52, neni, vdmin, vref, ccm, lab_pred and cor_pred
#' variables. For maximum and minimum temperature, the 'restrain' object must
#' contain um, vm, insol, neni, vdmin and vref. See 'AnalogsPred_train.R' for
#' more information.
#'@return .analogspred Returns seasonal forecast precipitation (mm) or maximum
#'and minimum temperature (dozens of ºC) in a 5km x 5km regular grid over
#'peninsular Spain and Balearic Islands. Each matrix of the list has two
#'dimensions ('ddates' x 'nptos').
#'
#'@importFrom utils read.table
#'@useDynLib CSTools
#'@noRd
.analogspred <- function(pred_mod, pred_slp, pred_rea, lon, lat, slp_lon,
slp_lat, var, HR_path, tdates, ddates, restrain) {
if (!is.list(pred_mod)) {
stop("Parameter 'pred_mod' must be a list of 'matrix' objects")
}
if (!(all(sapply(pred_mod, inherits, 'matrix')))) {
stop("Elements of the list in parameter 'pred_mod' must be of the class ",
"'matrix'.")
}
if (!is.matrix(pred_slp)) {
stop("Parameter 'pred_slp' must be of the class 'matrix'.")
}
if (!is.list(pred_rea)) {
stop("Parameter 'pred_rea' must be a list of 'matrix' objects")
}
if (!(all(sapply(pred_rea, inherits, 'matrix')))) {
stop("Elements of the list in parameter 'pred_rea' must be of the class ",
"'matrix'.")
}
if (var == "prec") {
    if (length(pred_rea) != 6) {
      stop("Parameter 'pred_rea' must have a length of 6.")
    }
    if (length(pred_mod) != 5) {
      stop("Parameter 'pred_mod' must have a length of 5.")
    }
  } else {
    if (length(pred_rea) != 8) {
      stop("Parameter 'pred_rea' must have a length of 8.")
    }
    if (length(pred_mod) != 7) {
      stop("Parameter 'pred_mod' must have a length of 7.")
    }
}
}
if (!is.vector(lon) || !is.numeric(lon)) {
stop("Parameter 'lon' must be a numeric vector")
}
if (!is.vector(lat) || !is.numeric(lat)) {
stop("Parameter 'lat' must be a numeric vector")
}
if (!is.vector(slp_lon) || !is.numeric(slp_lon)) {
stop("Parameter 'slp_lon' must be a numeric vector")
}
if (!is.vector(slp_lat) || !is.numeric(slp_lat)) {
stop("Parameter 'slp_lat' must be a numeric vector")
}
if (!is.character(HR_path)){
stop("Parameter 'HR_path' must be a character.")
}
if (!is.character(tdates)) {
stop("Parameter 'tdates' must be a character.")
}
if (!is.character(ddates)) {
stop("Parameter 'ddates' must be a character.")
}
if (!is.list(restrain)) {
    stop("Parameter 'restrain' must be a list of matrices and parameters ",
         "as returned by the 'training_analogs' function.")
}
#! REANALYSIS GRID PARAMETERS
rlon <- c(lon, NA) - c(NA, lon)
rlon <- rlon[!is.na(rlon)]
if (!all(rlon == rlon[1])) {
stop("Parameter 'lon' must be in regular grid.")
} else {
rlon <- rlon[1]
}
rlat <- c(lat, NA) - c(NA, lat)
rlat <- rlat[!is.na(rlat)]
if (!all(rlat == rlat[1])) {
stop("Parameter 'lat' must be in regular grid.")
} else {
rlat <- rlat[1]
}
if (rlon != (-rlat)) {
stop("Parameters 'lon' and 'lat' must have the same resolution.")
} else {
res <- rlon
}
nlat <- ((lat[length(lat)] - lat[1]) / rlat) + 1
nlon <- ((lon[length(lon)] - lon[1]) / rlon) + 1
ic <- nlat * nlon
#
slp_rlon <- c(slp_lon, NA) - c(NA, slp_lon)
slp_rlon <- slp_rlon[!is.na(slp_rlon)]
if (!all(slp_rlon == slp_rlon[1])) {
stop("Parameter 'slp_lon' must be in regular grid.")
} else {
slp_rlon <- slp_rlon[1]
}
slp_rlat <- c(slp_lat, NA) - c(NA, slp_lat)
slp_rlat <- slp_rlat[!is.na(slp_rlat)]
if (!all(slp_rlat == slp_rlat[1])) {
stop("Parameter 'slp_lat' must be in regular grid.")
} else {
slp_rlat <- slp_rlat[1]
}
if (slp_rlon != (-slp_rlat)) {
stop("Parameters 'slp_lon' and 'slp_lat' must have the same resolution.")
} else {
slp_res <- slp_rlon
}
nlatt <- ((slp_lat[length(slp_lat)] - slp_lat[1]) / slp_rlat) + 1
nlont <- ((slp_lon[length(slp_lon)] - slp_lon[1]) / slp_rlon) + 1
id <- nlatt * nlont
slat <- max(lat)
slon <- min(c(lon[which(lon > 180)] - 360,
lon[which(lon <= 180)]))
slatt <- max(slp_lat)
slont <- min(c(slp_lon[which(slp_lon > 180)] - 360,
slp_lon[which(slp_lon <= 180)]))
ngridd <- ((2*nlatt)-1)*((2*nlont)-1)
if (all((sapply(pred_rea,nrow))==nrow(pred_rea[[1]]))){
nd <- nrow(pred_rea[[1]])
} else {
stop("All 'pred_rea' variables must have the same period.")
}
if (all((sapply(pred_mod,nrow))==nrow(pred_mod[[1]]))){
nm <- nrow(pred_mod[[1]])
} else {
stop("All 'pred_mod' variables must have the same period.")
}
  seqdates <- seq(as.Date(substr(ddates, start = 1, stop = 8), format = "%Y%m%d"),
                  as.Date(substr(ddates, start = 10, stop = 17), format = "%Y%m%d"),
                  by = "days")
  month <- format(seqdates, format = "%m")
  day <- format(seqdates, format = "%d")
#! TRAINING REANALYSIS VARIABLES
u500 <- pred_rea[['u500']]
v500 <- pred_rea[['v500']]
t500 <- pred_rea[['t500']]
t850 <- pred_rea[['t850']]
msl_si <- pred_rea[['slp']]
q700 <- pred_rea[['q700']]
if (var == "temp") {
t700 <- pred_rea[['t700']]
tm2m <- pred_rea[['tm2m']]
}
#! SEASONAL FORECAST MODEL VARIABLES
u500_mod <- pred_mod[['u500_mod']]
v500_mod <- pred_mod[['v500_mod']]
t500_mod <- pred_mod[['t500_mod']]
t850_mod <- pred_mod[['t850_mod']]
msl_lr_mod <- pred_slp
q700_mod <- pred_mod[['q700_mod']]
if (var == "temp") {
t700_mod <- pred_mod[['t700_mod']]
tm2m_mod <- pred_mod[['tm2m_mod']]
}
#! HIGH-RESOLUTION (HR) OBSERVATIONAL DATASET
maestro_hr_file <- paste(HR_path, "maestro_red_hr_SPAIN.txt",sep="")
if (!file.exists(maestro_hr_file)) {
stop("'maestro_red_hr_SPAIN.txt' does not exist.")
} else {
maestro <- read.table(maestro_hr_file)
lon_hr <- unlist(maestro[2])
lat_hr <- unlist(maestro[3])
nptos <- length(readLines(maestro_hr_file))
}
if (var == "prec") {
prec_hr_file <- paste(HR_path, "pcp_red_SPAIN_",tdates,".txt",sep="")
if (!file.exists(prec_hr_file)) {
stop(sprintf("precipitation HR file for %s does not exist.",tdates))
} else {
nd_hr <- length(readLines(prec_hr_file))
preprec_hr <- matrix(scan(prec_hr_file), nrow=nd_hr ,ncol= nptos+1, byrow=TRUE)
prec_hr <- preprec_hr[1:nd_hr,-c(1)]
}
} else {
tmx_hr_file <- paste(HR_path, "tmx_red_SPAIN_",tdates,".txt",sep="")
tmn_hr_file <- paste(HR_path, "tmn_red_SPAIN_",tdates,".txt",sep="")
if (!file.exists(tmx_hr_file)) {
stop(sprintf("maximum temperature HR file for %s does not exist.",tdates))
} else if (!file.exists(tmn_hr_file)) {
stop(sprintf("minimum temperature HR file for %s does not exist.",tdates))
} else if (length(readLines(tmx_hr_file)) != length(readLines(tmn_hr_file))) {
stop("maximum and minimum temperature HR observation files must have the same period.")
} else {
nd_hr <- length(readLines(tmx_hr_file))
pretmx_hr <- matrix(scan(tmx_hr_file), nrow=nd_hr ,ncol= nptos+1, byrow=TRUE)
tmx_hr <- pretmx_hr[1:nd_hr,-c(1)]
pretmn_hr <- matrix(scan(tmn_hr_file), nrow=nd_hr ,ncol= nptos+1, byrow=TRUE)
tmn_hr <- pretmn_hr[1:nd_hr,-c(1)]
}
}
if (nd_hr != nd) {
stop("Reanalysis variables and HR observations must have the same period.")
}
#! OTHER PARAMETERS that should not be changed
#! Number of analog situations to consider
nanx <- 155
#! Number of temperature predictors
nvar <- 7
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
if (var == "prec") {
downs <- .Fortran("down_prec",
ic = as.integer(ic),
id = as.integer(id),
nd = as.integer(nd),
nm = as.integer(nm),
nlat = as.integer(nlat),
nlon = as.integer(nlon),
nlatt = as.integer(nlatt),
nlont = as.integer(nlont),
slat = as.numeric(slat),
slon = as.numeric(slon),
rlat = as.numeric(rlat),
rlon = as.numeric(rlon),
slatt = as.numeric(slatt),
slont = as.numeric(slont),
ngridd = as.integer(ngridd),
u500 = as.numeric(u500),
v500 = as.numeric(v500),
t500 = as.numeric(t500),
t850 = as.numeric(t850),
msl_si = as.numeric(msl_si),
q700 = as.numeric(q700),
prec_hr = as.numeric(prec_hr),
nanx = as.integer(nanx),
restrain$um,
restrain$vm,
restrain$nger,
restrain$gu92,
restrain$gv92,
restrain$gu52,
restrain$gv52,
restrain$neni,
restrain$vdmin,
restrain$vref,
restrain$ccm,
restrain$indices[,,,1],#lab_pred
restrain$indices[,,,2],#cor_pred
u500_mod = as.numeric(u500_mod),
v500_mod = as.numeric(v500_mod),
t500_mod = as.numeric(t500_mod),
t850_mod = as.numeric(t850_mod),
msl_lr_mod = as.numeric(msl_lr_mod),
q700_mod = as.numeric(q700_mod),
pp=matrix(as.double(seq(1,nm*nptos)),c(nm,nptos)),
PACKAGE = 'CSTools')
output <- downs$pp
} else {
downs <- .Fortran("down_temp",
ic = as.integer(ic),
id = as.integer(id),
nd = as.integer(nd),
nm = as.integer(nm),
nlat = as.integer(nlat),
nlon = as.integer(nlon),
nlatt = as.integer(nlatt),
nlont = as.integer(nlont),
slat = as.numeric(slat),
slon = as.numeric(slon),
rlat = as.numeric(rlat),
rlon = as.numeric(rlon),
slatt = as.numeric(slatt),
slont = as.numeric(slont),
ngridd = as.integer(ngridd),
u500 = as.numeric(u500),
v500 = as.numeric(v500),
t500 = as.numeric(t500),
t850 = as.numeric(t850),
msl_si = as.numeric(msl_si),
q700 = as.numeric(q700),
t700 = as.numeric(t700),
tm2m = as.numeric(tm2m),
tmx_hr = as.numeric(tmx_hr),
tmn_hr = as.numeric(tmn_hr),
nanx = as.integer(nanx),
nvar = as.integer(nvar),
day = as.integer(day),
month = as.integer(month),
restrain$um,
restrain$vm,
restrain$insol,
restrain$neni,
restrain$vdmin,
restrain$vref,
u500_mod = as.numeric(u500_mod),
v500_mod = as.numeric(v500_mod),
t500_mod = as.numeric(t500_mod),
t850_mod = as.numeric(t850_mod),
msl_lr_mod = as.numeric(msl_lr_mod),
q700_mod = as.numeric(q700_mod),
t700_mod = as.numeric(t700_mod),
tm2m_mod = as.numeric(tm2m_mod),
tmx=matrix(as.double(seq(1,nm*nptos)),c(nm,nptos)),
tmn=matrix(as.double(seq(1,nm*nptos)),c(nm,nptos)),
PACKAGE = 'CSTools')
output <- list("tmax" = downs$tmx,
"tmin" = downs$tmn)
}
return(output)
}
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/CST_AnalogsPredictors.R
|
#'Anomalies relative to a climatology along selected dimension with or without
#'cross-validation
#'
#'@author Perez-Zanon Nuria, \email{[email protected]}
#'@author Pena Jesus, \email{[email protected]}
#'@description This function computes the anomalies relative to a climatology
#'computed along the selected dimension (usually starting dates or forecast
#'time) allowing the application or not of crossvalidated climatologies. The
#'computation is carried out independently for experimental and observational
#'data products.
#'
#'@param exp An object of class \code{s2dv_cube} as returned by \code{CST_Start}
#' function, containing the seasonal forecast experiment data in the element
#' named \code{$data}.
#'@param obs An object of class \code{s2dv_cube} as returned by \code{CST_Start}
#' function, containing the observed data in the element named \code{$data}.
#'@param dim_anom A character string indicating the name of the dimension
#' along which the climatology will be computed. The default value is 'sdate'.
#'@param cross A logical value indicating whether cross-validation should be
#' applied or not. Default = FALSE.
#'@param memb_dim A character string indicating the name of the member
#' dimension. It must be one dimension in 'exp' and 'obs'. If there is no
#' member dimension, set NULL. The default value is 'member'.
#'@param memb A logical value indicating whether to subtract the climatology
#' based on the individual members (TRUE) or the ensemble mean over all
#' members (FALSE) when calculating the anomalies. The default value is TRUE.
#'@param dat_dim A character vector indicating the name of the dataset and
#' member dimensions. If there is no dataset dimension, it can be NULL.
#' The default value is "c('dataset', 'member')".
#'@param filter_span A numeric value indicating the degree of smoothing. This
#' option is only available if parameter \code{cross} is set to FALSE.
#'@param ftime_dim A character string indicating the name of the temporal
#' dimension where the smoothing with 'filter_span' will be applied. It cannot
#' be NULL if 'filter_span' is provided. The default value is 'ftime'.
#'@param ncores An integer indicating the number of cores to use for parallel
#' computation. The default value is NULL. It will be used only when
#' 'filter_span' is not NULL.
#'
#'@return A list with two S3 objects, 'exp' and 'obs', of the class
#''s2dv_cube', containing experimental and date-corresponding observational
#'anomalies, respectively. These 's2dv_cube's can be ingested by other functions
#'in CSTools.
#'
#'@examples
#'mod <- 1 : (2 * 3 * 4 * 5 * 6 * 7)
#'dim(mod) <- c(dataset = 2, member = 3, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'obs <- 1 : (1 * 1 * 4 * 5 * 6 * 7)
#'dim(obs) <- c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'lon <- seq(0, 30, 5)
#'lat <- seq(0, 25, 5)
#'coords <- list(lon = lon, lat = lat)
#'exp <- list(data = mod, coords = coords)
#'obs <- list(data = obs, coords = coords)
#'attr(exp, 'class') <- 's2dv_cube'
#'attr(obs, 'class') <- 's2dv_cube'
#'
#'anom <- CST_Anomaly(exp = exp, obs = obs, cross = FALSE, memb = TRUE)
#'
#'@seealso \code{\link[s2dv]{Ano_CrossValid}}, \code{\link[s2dv]{Clim}} and
#'\code{\link{CST_Start}}
#'
#'@import multiApply
#'@importFrom s2dv InsertDim Clim Ano_CrossValid Reorder
#'@export
CST_Anomaly <- function(exp = NULL, obs = NULL, dim_anom = 'sdate',
cross = FALSE, memb_dim = 'member', memb = TRUE,
dat_dim = c('dataset', 'member'), filter_span = NULL,
ftime_dim = 'ftime', ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(exp, 's2dv_cube') & !is.null(exp) ||
!inherits(obs, 's2dv_cube') & !is.null(obs)) {
stop("Parameter 'exp' and 'obs' must be of the class 's2dv_cube'.")
}
# exp and obs
if (is.null(exp$data) & is.null(obs$data)) {
stop("One of the parameter 'exp' or 'obs' cannot be NULL.")
}
case_exp = case_obs = 0
if (is.null(exp)) {
exp <- obs
case_obs = 1
warning("Parameter 'exp' is not provided and 'obs' will be used instead.")
}
if (is.null(obs)) {
obs <- exp
case_exp = 1
warning("Parameter 'obs' is not provided and 'exp' will be used instead.")
}
  if (any(is.null(names(dim(exp$data)))) | any(nchar(names(dim(exp$data))) == 0) |
      any(is.null(names(dim(obs$data)))) | any(nchar(names(dim(obs$data))) == 0)) {
    stop("Parameters 'exp' and 'obs' must have dimension names in element 'data'.")
  }
}
dim_exp <- dim(exp$data)
dim_obs <- dim(obs$data)
dimnames_exp <- names(dim_exp)
dimnames_obs <- names(dim_obs)
# dim_anom
if (!is.character(dim_anom)) {
stop("Parameter 'dim_anom' must be a character string.")
}
if (!dim_anom %in% names(dim_exp) | !dim_anom %in% names(dim_obs)) {
stop("Parameter 'dim_anom' is not found in 'exp' or in 'obs' dimension in element 'data'.")
}
if (dim_exp[dim_anom] <= 1 | dim_obs[dim_anom] <= 1) {
stop("The length of dimension 'dim_anom' in label 'data' of the parameter ",
"'exp' and 'obs' must be greater than 1.")
}
# cross
if (!is.logical(cross) | !is.logical(memb)) {
stop("Parameters 'cross' and 'memb' must be logical.")
}
  if (length(cross) > 1) {
cross <- cross[1]
warning("Parameter 'cross' has length greater than 1 and only the first element ",
"will be used.")
}
# memb
if (length(memb) > 1) {
memb <- memb[1]
warning("Parameter 'memb' has length greater than 1 and only the first element ",
"will be used.")
}
# memb_dim
if (!is.null(memb_dim)) {
if (!is.character(memb_dim) | length(memb_dim) > 1) {
stop("Parameter 'memb_dim' must be a character string.")
}
}
# dat_dim
if (!is.null(dat_dim)) {
if (!is.character(dat_dim)) {
stop("Parameter 'dat_dim' must be a character vector.")
}
}
# filter_span
if (!is.null(filter_span)) {
if (!is.numeric(filter_span)) {
warning("Paramater 'filter_span' is not numeric and any filter ",
"is being applied.")
filter_span <- NULL
}
# ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | ncores %% 1 != 0 | ncores <= 0 |
length(ncores) > 1) {
stop("Parameter 'ncores' must be a positive integer.")
}
}
# ftime_dim
if (!is.character(ftime_dim)) {
stop("Parameter 'ftime_dim' must be a character string.")
}
if (!ftime_dim %in% names(dim_exp) | !ftime_dim %in% names(dim_obs)) {
stop("Parameter 'ftime_dim' is not found in 'exp' or in 'obs' dimension in element 'data'.")
}
}
  # Computing anomalies
#----------------------
# With cross-validation
if (cross) {
ano <- Ano_CrossValid(exp = exp$data, obs = obs$data, time_dim = dim_anom,
memb_dim = memb_dim, memb = memb, dat_dim = dat_dim,
ncores = ncores)
# Without cross-validation
} else {
tmp <- Clim(exp = exp$data, obs = obs$data, time_dim = dim_anom,
memb_dim = memb_dim, memb = memb, dat_dim = dat_dim,
ncores = ncores)
if (!is.null(filter_span)) {
tmp$clim_exp <- Apply(tmp$clim_exp,
target_dims = c(ftime_dim),
output_dims = c(ftime_dim),
fun = .Loess,
loess_span = filter_span,
ncores = ncores)$output1
tmp$clim_obs <- Apply(tmp$clim_obs,
target_dims = c(ftime_dim),
output_dims = c(ftime_dim),
fun = .Loess,
loess_span = filter_span,
ncores = ncores)$output1
}
if (memb) {
clim_exp <- tmp$clim_exp
clim_obs <- tmp$clim_obs
} else {
clim_exp <- InsertDim(tmp$clim_exp, 1, dim_exp[memb_dim])
clim_obs <- InsertDim(tmp$clim_obs, 1, dim_obs[memb_dim])
}
clim_exp <- InsertDim(clim_exp, 1, dim_exp[dim_anom])
clim_obs <- InsertDim(clim_obs, 1, dim_obs[dim_anom])
ano <- NULL
# Permuting back dimensions to original order
clim_exp <- Reorder(clim_exp, dimnames_exp)
clim_obs <- Reorder(clim_obs, dimnames_obs)
ano$exp <- exp$data - clim_exp
ano$obs <- obs$data - clim_obs
}
exp$data <- ano$exp
exp$dims <- dim(ano$exp)
obs$data <- ano$obs
obs$dims <- dim(ano$obs)
# Outputs
# ~~~~~~~~~
if (case_obs == 1) {
return(obs)
}
else if (case_exp == 1) {
return(exp)
}
else {
return(list(exp = exp, obs = obs))
}
}
.Loess <- function(clim, loess_span) {
data <- data.frame(ensmean = clim, day = 1 : length(clim))
loess_filt <- loess(ensmean ~ day, data, span = loess_span)
output <- predict(loess_filt)
return(output)
}
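# Illustrative sketch (not part of the package): how the .Loess() helper above
# smooths a climatology. The synthetic data and the span value are made-up
# assumptions for demonstration only; the block is wrapped in if (FALSE) so it
# is never executed when the package is loaded.
if (FALSE) {
  set.seed(1)
  clim_raw <- sin(seq(0, pi, length.out = 60)) + rnorm(60, sd = 0.1)
  clim_smooth <- .Loess(clim_raw, loess_span = 0.5)
  plot(clim_raw, type = "l")
  lines(clim_smooth, col = "red")
}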
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/CST_Anomaly.R
|
#' Weighting SFSs of a CSTools object.
#'
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'
#'@description Function to apply weights to a 's2dv_cube' object.
#'It could return a weighted ensemble mean (deterministic output) or
#'the terciles probabilities (probabilistic output) for Seasonal Forecast
#'Systems (SFSs).
#'
#'@references Regionally improved seasonal forecast of precipitation through
#'Best estimation of winter NAO, Sanchez-Garcia, E. et al.,
#'Adv. Sci. Res., 16, 165-174, 2019, \doi{10.5194/asr-16-165-2019}
#'
#'@param var_exp An object of the class 's2dv_cube' containing the variable
#' (e.g. precipitation, temperature, NAO index) array.
#' The var_exp object is expected to have an element named \code{$data} with
#' at least a temporal dimension and a dimension named 'member'.
#'@param aweights Normalized weights array with at least dimensions
#'  (time, member), where 'time' is the temporal dimension by default.
#'  When the 'aweights' parameter has any other dimension (e.g. 'lat') that is
#'  also present in the 'var_exp' parameter, their lengths must be equal.
#'@param terciles A numeric array with at least one dimension 'tercil' of
#'  length 2; the first element is the lower tercile for a hindcast period and
#'  the second element is the upper tercile. By default it is NULL and the
#'  terciles are computed from the 'var_exp' data.
#'@param type A character string indicating the type of output.
#' If 'type' = 'probs', the function returns, in the element 'data' of the
#' 'var_exp' parameter, an array with at least two dimensions (time, tercil)
#' if the variable is spatially aggregated (e.g. the NAO index) or four
#' dimensions (time, tercil, lat, lon) if it is a spatial variable (e.g.
#' precipitation or temperature), containing the tercile probabilities
#' computed with weighted members. The first tercile is the lower tercile,
#' the second is the normal tercile and the third is the upper tercile.
#' If 'type' = 'ensembleMean', the function returns, in the element 'data' of
#' the 'var_exp' parameter, an array with at least one dimension (time) if
#' the variable is spatially aggregated or three dimensions (time, lat, lon)
#' if it is a spatial variable, containing the ensemble means computed with
#' weighted members.
#'@param time_dim_name A character string indicating the name of the
#' temporal dimension, by default 'time'.
#'@param memb_dim A character string indicating the name of the
#' member dimension, by default 'member'.
#'
#'@return CST_BEI_Weighting() returns a CSTools object (i.e., of the
#'class 's2dv_cube').
#'This object has at least an element named \code{$data}
#'with at least a temporal dimension (and dimension 'tercil' when the output
#'is tercile probabilities), containing the ensemble means computed with
#'weighted members or the tercile probabilities.
#'
#'@examples
#'var_exp <- 1 : (2 * 4 * 3 * 2)
#'dim(var_exp) <- c(time = 2, member = 4, lat = 3, lon = 2)
#'aweights <- c(0.2, 0.1, 0.3, 0.4, 0.1, 0.2, 0.4, 0.3, 0.1, 0.2, 0.4, 0.4, 0.1,
#' 0.2, 0.4, 0.2)
#'dim(aweights) <- c(time = 2, member = 4, dataset = 2)
#'var_exp <- list(data = var_exp)
#'class(var_exp) <- 's2dv_cube'
#'res_CST <- CST_BEI_Weighting(var_exp, aweights)
#'@export
CST_BEI_Weighting <- function(var_exp, aweights, terciles = NULL,
type = 'ensembleMean', time_dim_name = 'time',
memb_dim = 'member') {
# s2dv_cube
if (!inherits(var_exp, "s2dv_cube")) {
stop("Parameter 'var_exp' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
# type
if (!is.character(type)) {
stop("Parameter 'type' must be a character string, 'probs' or ",
"'ensembleMean', indicating the type of output.")
}
if (length(type) > 1) {
warning("Parameter 'type' has length greater than 1 and ",
"only the first element will be used.")
type <- type[1]
}
if (type == 'ensembleMean') {
em <- BEI_EMWeighting(var_exp$data, aweights, time_dim_name, memb_dim)
var_exp$data <- em
} else if (type == 'probs') {
if (is.null(terciles)) {
terciles <- BEI_TercilesWeighting(var_exp$data, aweights,
time_dim_name = time_dim_name,
memb_dim = memb_dim)
}
probs <- BEI_ProbsWeighting(var_exp$data, aweights, terciles,
time_dim_name = time_dim_name,
memb_dim = memb_dim)
var_exp$data <- probs
} else {
stop("Parameter 'type' must be a character string ('probs' or ",
"'ensembleMean'), indicating the type of output.")
}
return(var_exp)
}
#'@title Computing the weighted ensemble means for SFSs.
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'@description This function implements the computation to obtain the weighted
#'ensemble means for SFSs using a normalized weights array.
#'
#'@references Regionally improved seasonal forecast of precipitation through Best
#'estimation of winter NAO, Sanchez-Garcia, E. et al.,
#'Adv. Sci. Res., 16, 165-174, 2019, \doi{10.5194/asr-16-165-2019}
#'
#'@param var_exp Variable (e.g. precipitation, temperature, NAO index)
#' array from a SFS with at least dimensions (time, member) for a spatially
#' aggregated variable or dimensions (time, member, lat, lon) for a spatial
#' variable, with 'time' as the temporal dimension by default.
#'@param aweights Normalized weights array with at least dimensions
#' (time, member), when 'time' is the temporal dimension as default.
#'@param time_dim_name A character string indicating the name of the
#' temporal dimension, by default 'time'.
#'@param memb_dim A character string indicating the name of the
#' member dimension, by default 'member'.
#'
#'@return BEI_EMWeighting() returns an array with at least one dimension
#'(time) if the variable is spatially aggregated (e.g. the NAO index) or
#'three dimensions (time, lat, lon) if it is a spatial variable (e.g.
#'precipitation or temperature), containing the ensemble means computed
#'with weighted members.
#'
#'@examples
#'# Example 1
#'var_exp <- 1 : (2 * 3 * 4)
#'dim(var_exp) <- c(time = 2, dataset = 3, member = 4)
#'aweights <- runif(24, min = 0.001, max = 0.999)
#'dim(aweights) <- c(time = 2, dataset = 3, member = 4)
#'res <- BEI_EMWeighting(var_exp, aweights)
#'
#'# Example 2
#'var_exp <- 1 : (2 * 4 * 2 * 3)
#'dim(var_exp) <- c(time = 2, member = 4, lat = 2, lon = 3)
#'aweights <- c(0.2, 0.1, 0.3, 0.4, 0.1, 0.2, 0.4, 0.3)
#'dim(aweights) <- c(time = 2, member = 4)
#'res <- BEI_EMWeighting(var_exp, aweights)
#'
#'@import multiApply
#'@export
BEI_EMWeighting <- function(var_exp, aweights, time_dim_name = 'time',
memb_dim = 'member') {
# var_exp
if (!is.array(var_exp)) {
stop("Parameter 'var_exp' must be an array.")
}
# aweights
if (!is.array(aweights)) {
stop("Parameter 'aweights' must be an array.")
}
# time_dim_name
if (!is.character(time_dim_name)) {
stop("Parameter 'time_dim_name' must be a character string indicating",
" the name of the temporal dimension.")
}
if (length(time_dim_name) > 1) {
warning("Parameter 'time_dim_name' has length greater than 1 and ",
"only the first element will be used.")
time_dim_name <- time_dim_name[1]
}
# memb_dim
if (!is.character(memb_dim)) {
stop("Parameter 'memb_dim' must be a character string indicating",
" the name of the member dimension.")
}
# var_exp, aweights (2)
if (is.null(names(dim(var_exp))) || is.null(names(dim(aweights)))) {
stop("Parameters 'var_exp' and 'aweights' should have dimension names.")
}
if (!(time_dim_name %in% names(dim(var_exp)))) {
stop("Parameter 'var_exp' must have temporal dimension.")
}
if (!(time_dim_name %in% names(dim(aweights)))) {
stop("Parameter 'aweights' must have temporal dimension.")
}
if (!(memb_dim %in% names(dim(var_exp)))) {
stop("Parameter 'var_exp' must have member dimension.")
}
if (!(memb_dim %in% names(dim(aweights)))) {
stop("Parameter 'aweights' must have member dimension.")
}
if (dim(var_exp)[time_dim_name] != dim(aweights)[time_dim_name]) {
stop("Length of temporal dimension ",
"of parameter 'var_exp' and 'aweights' must be equal.")
}
if (dim(var_exp)[memb_dim] != dim(aweights)[memb_dim]) {
stop("Length of member dimension ",
"of parameter 'var_exp' and 'aweights' must be equals.")
}
res <- Apply(list(var_exp, aweights),
target_dims = list(c(time_dim_name, memb_dim),
c(time_dim_name, memb_dim)),
fun = .BEI_EMWeighting, time_dim_name)$output1
return(res)
}
#'Atomic BEI_EMWeighting
#'@param var_exp Variable (e.g. precipitation, temperature, NAO index)
#' array from a SFS with a temporal dimension,
#' by default 'time', and dimension 'member'.
#'@param aweights Normalized weights array with a temporal dimension,
#' by default 'time', and dimension 'member'
#'@param time_dim_name A character string indicating the name of the
#' temporal dimension, by default 'time'.
#'@return .BEI_EMWeighting returns an array with a temporal dimension,
#'by default 'time', containing the weighted ensemble means.
#'@examples
#'# Example for the Atomic BEI_EMWeighting function
#'var_exp <- 1 : 6
#'dim(var_exp) <- c(time = 2, member = 3)
#'aweights <- c(0.28, 0.15, 0.69, 0.64, 0.42, 0.17)
#'dim(aweights) <- c(time = 2, member = 3)
#'res <- .BEI_EMWeighting(var_exp, aweights)
#'@noRd
.BEI_EMWeighting <- function(var_exp, aweights, time_dim_name = 'time') {
posTime <- match(time_dim_name, names(dim(var_exp)))
var_exp_em <- as.array(apply(var_exp * aweights, posTime, sum))
names(dim(var_exp_em)) <- time_dim_name
return(var_exp_em)
}
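# Worked example of the atomic function above, using the numbers from its
# @noRd example: for the first time step, var_exp[1, ] = c(1, 3, 5) and
# aweights[1, ] = c(0.28, 0.69, 0.42), so the weighted ensemble mean is
# 1 * 0.28 + 3 * 0.69 + 5 * 0.42 = 4.45 (and 3.88 for the second time step).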
#' Computing the weighted tercile probabilities for SFSs.
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'
#'@description This function implements the computation to obtain the tercile
#'probabilities for a weighted variable for SFSs using a normalized weights array.
#'
#'@references Regionally improved seasonal forecast of precipitation through Best
#'estimation of winter NAO, Sanchez-Garcia, E. et al.,
#'Adv. Sci. Res., 16, 165-174, 2019, \doi{10.5194/asr-16-165-2019}
#'
#'@param var_exp Variable (e.g. precipitation, temperature, NAO index)
#' array from a SFS with at least dimensions (time, member) for a spatially
#' aggregated variable or dimensions (time, member, lat, lon) for a spatial
#' variable, with 'time' as the temporal dimension by default.
#'@param aweights Normalized weights array with at least dimensions
#' (time, member), when 'time' is the temporal dimension as default.
#'@param terciles A numeric array with at least one dimension 'tercil' of
#' length 2; the first element is the lower tercile for a hindcast period and
#' the second element is the upper tercile.
#'@param time_dim_name A character string indicating the name of the
#' temporal dimension, by default 'time'.
#'@param memb_dim A character string indicating the name of the
#' member dimension, by default 'member'.
#'
#'@return BEI_ProbsWeighting() returns an array with at least two dimensions
#'(time, tercil) if the variable is spatially aggregated (e.g. the NAO index)
#'or four dimensions (time, tercil, lat, lon) if it is a spatial variable
#'(e.g. precipitation or temperature), containing the tercile probabilities
#'computed with weighted members.
#'The first tercile is the lower tercile, the second is the normal tercile and
#'the third is the upper tercile.
#'
#'@examples
#'# Example 1
#'var_exp <- 1 : (2 * 4)
#'dim(var_exp) <- c(time = 2, member = 4)
#'aweights <- c(0.2, 0.1, 0.3, 0.4, 0.1, 0.2, 0.4, 0.3)
#'dim(aweights) <- c(time = 2, member = 4)
#'terciles <- c(2.5,5)
#'dim(terciles) <- c(tercil = 2)
#'res <- BEI_ProbsWeighting(var_exp, aweights, terciles)
#'
#'# Example 2
#'var_exp <- rnorm(48, 50, 9)
#'dim(var_exp) <- c(time = 2, member = 4, lat = 2, lon = 3)
#'aweights <- c(0.2, 0.1, 0.3, 0.4, 0.1, 0.2, 0.4, 0.3)
#'dim(aweights) <- c(time = 2, member = 4)
#'terciles <- rep(c(48,50), 2*3)
#'dim(terciles) <- c(tercil = 2, lat = 2, lon = 3)
#'res <- BEI_ProbsWeighting(var_exp, aweights, terciles)
#'@import multiApply
#'@export
BEI_ProbsWeighting <- function(var_exp, aweights, terciles,
time_dim_name = 'time', memb_dim = 'member') {
# var_exp
if (!is.array(var_exp)) {
stop("Parameter 'var_exp' must be an array.")
}
# aweights
if (!is.array(aweights)) {
stop("Parameter 'aweights' must be an array.")
}
# terciles
if (is.null(terciles)) {
stop("Parameter 'terciles' cannot be null.")
}
if (!is.array(terciles)) {
stop("Parameter 'terciles' must be an array.")
}
if (is.null(names(dim(terciles)))) {
stop("Parameter 'terciles' should have dimension names.")
}
if (!('tercil' %in% names(dim(terciles)))) {
stop("Parameter 'terciles' must have dimension 'tercil'.")
}
if (dim(terciles)['tercil'] != 2) {
stop("Length of dimension 'tercil' ",
"of parameter 'terciles' must be equal to 2.")
}
# time_dim_name
if (!is.character(time_dim_name)) {
stop("Parameter 'time_dim_name' must be a character string indicating",
" the name of the temporal dimension.")
}
if (length(time_dim_name) > 1) {
warning("Parameter 'time_dim_name' has length greater than 1 and ",
"only the first element will be used.")
time_dim_name <- time_dim_name[1]
}
# memb_dim
if (!is.character(memb_dim)) {
stop("Parameter 'memb_dim' must be a character string indicating",
" the name of the member dimension.")
}
# var_exp, terciles, aweights (2)
if (time_dim_name %in% names(dim(terciles))) {
stop("Parameter 'terciles' must not have temporal dimension.")
}
if (memb_dim %in% names(dim(terciles))) {
stop("Parameter 'terciles' must not have a member dimension.")
}
if (is.null(names(dim(var_exp))) || is.null(names(dim(aweights)))) {
stop("Parameters 'var_exp' and 'aweights'",
" should have dimension names.")
}
if (!(time_dim_name %in% names(dim(var_exp)))) {
stop("Parameter 'var_exp' must have temporal dimension.")
}
if (!(time_dim_name %in% names(dim(aweights)))) {
stop("Parameter 'aweights' must have temporal dimension.")
}
if (!(memb_dim %in% names(dim(var_exp)))) {
stop("Parameter 'var_exp' must have member dimension.")
}
if (!(memb_dim %in% names(dim(aweights)))) {
stop("Parameter 'aweights' must have member dimension.")
}
if (dim(var_exp)[time_dim_name] != dim(aweights)[time_dim_name]) {
stop("Length of temporal dimension ",
"of parameter 'var_exp' and 'aweights' must be equal.")
}
if (dim(var_exp)[memb_dim] != dim(aweights)[memb_dim]) {
stop("Length of member dimension ",
"of parameter 'var_exp' and 'aweights' must be equal.")
}
names_exp <- sort(names(dim(var_exp)))
names_exp <- names_exp[-which(names_exp %in% c(time_dim_name, memb_dim))]
names_tercil <- sort(names(dim(terciles)))
names_tercil <- names_tercil[-which(names_tercil == 'tercil')]
if (!all(dim(var_exp)[names_exp] == dim(terciles)[names_tercil])) {
stop("Length of common dimensions ",
"of parameter 'var_exp' and 'terciles' must be equal.")
}
res <- Apply(list(var_exp, aweights, terciles),
target_dims = list(c(time_dim_name, memb_dim),
c(time_dim_name, memb_dim),
c('tercil')),
fun = .BEI_ProbsWeighting, time_dim_name)$output1
return(res)
}
#'Atomic BEI_ProbsWeighting
#'@param var_exp Variable (e.g. precipitation, temperature, NAO index)
#' array from a SFS with a temporal dimension,
#' by default 'time', and dimension 'member'.
#'@param aweights Normalized weights array with a temporal dimension,
#' by default 'time', and dimension 'member'
#'@param terciles A numeric array with one dimension 'tercil' of length 2;
#' the first element is the lower tercile for a hindcast period and the second
#' element is the upper tercile.
#'@param time_dim_name A character string indicating the name of the
#' temporal dimension, by default 'time'.
#'@param memb_dim A character string indicating the name of the
#' member dimension, by default 'member'.
#'
#'@return .BEI_ProbsWeighting returns an array with a temporal dimension,
#'by default 'time', and a 'tercil' dimension, containing the probabilities
#'for each tercile computed with weighted members.
#'The first tercile is the lower tercile, the second is the normal tercile and
#'the third is the upper tercile.
#'@examples
#'# Example
#'var_exp <- 1 : 8
#'dim(var_exp) <- c(stime = 2, member = 4)
#'aweights <- c(0.2, 0.1, 0.3, 0.4, 0.1, 0.2, 0.4, 0.3)
#'dim(aweights) <- c(stime = 2, member = 4)
#'terciles <- quantile(1:8, probs = c(1/3, 2/3))
#'dim(terciles) <- c(tercil = 2)
#'res <- .BEI_ProbsWeighting(var_exp, aweights, terciles, time_dim_name = 'stime')
#'@noRd
.BEI_ProbsWeighting <- function(var_exp, aweights, terciles,
time_dim_name = 'time', memb_dim = 'member') {
if (any(is.na(var_exp)) || any(is.na(aweights))) {
probTercile <- array(NA, dim = c(dim(var_exp)[time_dim_name], tercil = 3))
} else {
if (any(is.na(terciles))) {
stop("Terciles are NAs")
}
terciles_exp <- list(lowerTercile = terciles[1],
upperTercile = terciles[2])
lowerTercile <- terciles_exp$lowerTercile
upperTercile <- terciles_exp$upperTercile
# Probabilities
aTerciles <- Apply(list(var_exp), target_dims = list(memb_dim),
fun = Data2Tercil, lowerTercile, upperTercile)$output1
pos <- match(names(dim(aTerciles)), c(time_dim_name, memb_dim))
aTerciles <- aperm(aTerciles, pos)
names(dim(aTerciles)) <- c(time_dim_name, memb_dim)
probTercile <- array(NA, dim = c(dim(var_exp)[time_dim_name], tercil = 3))
for (idTercil in 1:3) {
probTercile[ ,idTercil] <- Apply(list(aTerciles, aweights),
target_dims = list(memb_dim, memb_dim),
fun = WeightTercil2Prob, idTercil)$output1
}
}
return(probTercile)
}
#'Computing the weighted terciles for SFSs.
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'
#'@description This function implements the computation to obtain the terciles
#'for a weighted variable for SFSs using a normalized weights array.
#'
#'@references Regionally improved seasonal forecast of precipitation through Best
#'estimation of winter NAO, Sanchez-Garcia, E. et al.,
#'Adv. Sci. Res., 16, 165-174, 2019, \doi{10.5194/asr-16-165-2019}
#'
#'@param var_exp Variable (e.g. precipitation, temperature, NAO index)
#' array from a SFS with at least dimensions (time, member) for a spatially
#' aggregated variable or dimensions (time, member, lat, lon) for a spatial
#' variable, with 'time' as the temporal dimension by default.
#'@param aweights Normalized weights array with at least dimensions
#' (time, member), when 'time' is the temporal dimension as default.
#'@param time_dim_name A character string indicating the name of the
#' temporal dimension, by default 'time'.
#'@param memb_dim A character string indicating the name of the
#' member dimension, by default 'member'.
#'
#'@return BEI_TercilesWeighting() returns an array with at least one dimension
#'(tercil) if the variable is spatially aggregated (e.g. the NAO index) or
#'three dimensions (tercil, lat, lon) if it is a spatial variable (e.g.
#'precipitation or temperature), containing the terciles computed with
#'weighted members.
#'The first element of 'tercil' is the lower tercile and the second is the
#'upper tercile.
#'
#'@examples
#'# Example 1
#'var_exp <- 1 : (2 * 4)
#'dim(var_exp) <- c(time = 2, member = 4)
#'aweights<- c(0.2, 0.1, 0.3, 0.4, 0.1, 0.2, 0.4, 0.3)
#'dim(aweights) <- c(time = 2, member = 4)
#'res <- BEI_TercilesWeighting(var_exp, aweights)
#'
#'# Example 2
#'var_exp <- rnorm(48, 50, 9)
#'dim(var_exp) <- c(time = 2, member = 4, lat = 2, lon = 3)
#'aweights<- c(0.2, 0.1, 0.3, 0.4, 0.1, 0.2, 0.4, 0.3)
#'dim(aweights) <- c(time = 2, member = 4)
#'res <- BEI_TercilesWeighting(var_exp, aweights)
#'@import multiApply
#'@export
BEI_TercilesWeighting <- function(var_exp, aweights, time_dim_name = 'time',
memb_dim = 'member') {
# var_exp
if (!is.array(var_exp)) {
stop("Parameter 'var_exp' must be an array.")
}
# aweights
if (!is.array(aweights)) {
stop("Parameter 'aweights' must be an array.")
}
# time_dim_name
if (!is.character(time_dim_name)) {
stop("Parameter 'time_dim_name' must be a character string indicating",
" the name of the temporal dimension.")
}
if (length(time_dim_name) > 1) {
warning("Parameter 'time_dim_name' has length greater than 1 and ",
"only the first element will be used.")
time_dim_name <- time_dim_name[1]
}
# memb_dim
if (!is.character(memb_dim)) {
stop("Parameter 'memb_dim' must be a character string indicating",
" the name of the member dimension.")
}
# var_exp, aweights (2)
if (is.null(names(dim(var_exp))) || is.null(names(dim(aweights)))) {
stop("Parameters 'var_exp' and 'aweights'",
" should have dimension names.")
}
if (!(time_dim_name %in% names(dim(var_exp)))) {
stop("Parameter 'var_exp' must have temporal dimension.")
}
if (!(time_dim_name %in% names(dim(aweights)))) {
stop("Parameter 'aweights' must have temporal dimension.")
}
if (!(memb_dim %in% names(dim(var_exp)))) {
stop("Parameter 'var_exp' must have member dimension.")
}
if (!(memb_dim %in% names(dim(aweights)))) {
stop("Parameter 'aweights' must have member dimension.")
}
if (dim(var_exp)[time_dim_name] != dim(aweights)[time_dim_name]) {
stop("Length of temporal dimension ",
"of parameter 'var_exp' and 'aweights' must be equal.")
}
if (dim(var_exp)[memb_dim] != dim(aweights)[memb_dim]) {
stop("Length of member dimension ",
"of parameter 'var_exp' and 'aweights' must be equal.")
}
res <- Apply(list(var_exp, aweights),
target_dims = list(c(time_dim_name, memb_dim),
c(time_dim_name, memb_dim)),
fun = .BEI_TercilesWeighting, time_dim_name)$output1
return(res)
}
#'Atomic BEI_TercilesWeighting
#'@param var_exp Variable (e.g. precipitation, temperature, NAO index)
#' array from a SFS with a temporal dimension,
#' by default 'time', and dimension 'member'.
#'@param aweights Normalized weights array with a temporal dimension,
#' by default 'time', and dimension 'member'
#'@param time_dim_name A character string indicating the name of the
#' temporal dimension, by default 'time'.
#'@return .BEI_TercilesWeighting returns a numeric array with dimension 'tercil'
#'of length 2; the first element is the lower tercile and the second the upper
#'tercile, computed with weighted members considering all members and the whole
#'period.
#'If any member value for any period is NA, the terciles are not computed and
#'the function returns NA for both terciles.
#'@examples
#'# Example
#'var_exp <- 1 : 8
#'dim(var_exp) <- c(stime = 2, member = 4)
#'aweights <- c(0.2, 0.1, 0.3, 0.4, 0.1, 0.2, 0.4, 0.3)
#'dim(aweights) <- c(stime = 2, member = 4)
#'res <- .BEI_TercilesWeighting(var_exp, aweights, time_dim_name = 'stime')
#'@noRd
.BEI_TercilesWeighting <- function(var_exp, aweights, time_dim_name = 'time') {
if (any(is.na(var_exp)) || any(is.na(aweights))) {
terciles_exp <- array(c(NA, NA), dim = c(tercil = 2))
} else {
l_terciles_exp <- WeightTerciles(var_exp, aweights, time_dim_name)
terciles_exp <- array(c(l_terciles_exp$lowerTercile,
l_terciles_exp$upperTercile), dim = c(tercil = 2))
}
return(terciles_exp)
}
# Auxiliary function to determine in which tercile a data value falls
Data2Tercil_old <- function(x, lt, ut) {
if (is.na(lt) || is.na(ut)) {
y <- rep(NA, length(x))
} else {
y <- rep(2, length(x))
y[x <= lt] <- 1
y[x >= ut] <- 3
if (lt == ut) {
warning("The upper and lower terciles are equals")
}
}
dim(y) <- c(member = length(x))
return (y)
}
# Auxiliary function to determine in which tercile a data value falls
Data2Tercil <- function(x, lt, ut) {
if (is.na(lt) || is.na(ut)) {
y <- rep(NA, length(x))
} else {
y <- rep(2, length(x))
y[x <= lt] <- 1
y[x >= ut] <- 3
if (lt == ut) {
y <- rep(NA, length(x))
}
}
dim(y) <- c(member = length(x))
y[which(is.na(x))] <- NA
return (y)
}
# Auxiliary function to convert weighted terciles to probabilities
WeightTercil2Prob <- function(aTerciles, aWeights, idTercil) {
return(sum(aWeights[which(aTerciles == idTercil)]))
}
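# Illustrative sketch (not part of the package): how Data2Tercil() and
# WeightTercil2Prob() combine into weighted tercile probabilities for a single
# time step. The member values, tercile bounds and weights are made-up
# assumptions; the block is wrapped in if (FALSE) so it is never executed.
if (FALSE) {
  members <- c(1.2, 2.8, 4.9, 6.3)  # ensemble values for one time step
  weights <- c(0.1, 0.2, 0.3, 0.4)  # normalized member weights
  categories <- Data2Tercil(members, lt = 2.5, ut = 5)  # 1 = below, 2 = normal, 3 = above
  probs <- sapply(1:3, function(k) WeightTercil2Prob(categories, weights, k))
  # probs is c(0.1, 0.5, 0.4) and sums to 1 because the weights are normalized
}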
# Auxiliary function to compute weighted terciles
WeightTerciles <- function(data, aweights, time_dim_name = 'time') {
namesdimdata <- names(dim(data))
namesdimaweights <- names(dim(aweights))
pos <- match(namesdimdata, namesdimaweights)
aweights <- aperm(aweights, pos)
names(dim(aweights)) <- namesdimdata
vectorData <- as.vector(data)
  vectorWeights <- as.vector(aweights / dim(aweights)[time_dim_name]) # normalized
  indSort <- order(vectorData) # index associating each sorted data value with its weight
  dataSort <- vectorData[indSort]
  # Accumulating the normalized weights: when 1/3 is reached, the data value
  # is the lower tercile and when 2/3 is reached, it is the upper tercile.
sumWeights <- 0
ilowerTercile <- 0
while ((sumWeights < 1/3) & (ilowerTercile < length(aweights))) {
ilowerTercile <- ilowerTercile + 1
sumWeights <- sumWeights + vectorWeights[indSort[ilowerTercile]]
}
if (ilowerTercile == 1) {
lowerTercile <- dataSort[ilowerTercile]
} else {
lowerTercile <- (dataSort[ilowerTercile] +
dataSort[ilowerTercile - 1]) / 2
}
sumWeights <- 0
iupperTercile <- 0
while ((sumWeights < 2/3) & (iupperTercile < length(aweights))) {
iupperTercile <- iupperTercile + 1
sumWeights <- sumWeights + vectorWeights[indSort[iupperTercile]]
}
if (iupperTercile == 1) {
upperTercile <- dataSort[iupperTercile]
} else {
upperTercile <- (dataSort[iupperTercile]+
dataSort[iupperTercile - 1]) / 2
}
return(list(lowerTercile = lowerTercile, upperTercile = upperTercile))
}
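# Illustrative sketch (not part of the package): WeightTerciles() accumulates
# the normalized weights along the sorted data until 1/3 and 2/3 are reached.
# The toy data and weights below are made-up assumptions; the block is wrapped
# in if (FALSE) so it is never executed.
if (FALSE) {
  dat <- array(1:6, dim = c(time = 2, member = 3))
  wts <- array(rep(1/3, 6), dim = c(time = 2, member = 3))  # equal weights
  WeightTerciles(dat, wts, time_dim_name = 'time')
  # returns a list with elements 'lowerTercile' and 'upperTercile'; with equal
  # weights they are close to the unweighted terciles of 1:6
}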
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/CST_BEI_Weighting.R
|
#'Bias Correction based on the mean and standard deviation adjustment
#'
#'@author Verónica Torralba, \email{[email protected]}
#'@description This function applies the simple bias adjustment technique
#'described in Torralba et al. (2017). The adjusted forecasts have an equivalent
#'standard deviation and mean to that of the reference dataset.
#'
#'@param exp An object of class \code{s2dv_cube} as returned by \code{CST_Start}
#' function, containing the seasonal forecast experiment data in the element
#' named \code{$data} with at least time and member dimensions.
#'@param obs An object of class \code{s2dv_cube} as returned by \code{CST_Start}
#' function, containing the observed data in the element named \code{$data}
#' with at least time dimension.
#'@param exp_cor An object of class \code{s2dv_cube} as returned by
#' \code{CST_Start} function, containing the seasonal forecast experiment to be
#' corrected with at least time dimension. If it is NULL, the 'exp' forecast
#' will be corrected. If there is only one corrected dataset, it should not
#' have dataset dimension. If there is a corresponding corrected dataset for
#' each 'exp' forecast, the dataset dimension must have the same length as in
#' 'exp'. The default value is NULL.
#'@param na.rm A logical value indicating whether missing values should be
#' stripped before the computation proceeds, by default it is set to FALSE.
#'@param memb_dim A character string indicating the name of the member
#' dimension. By default, it is set to 'member'.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'.
#'@param dat_dim A character string indicating the name of dataset dimension.
#' The length of this dimension can be different between 'exp' and 'obs'.
#' The default value is NULL.
#'@param ncores An integer that indicates the number of cores for parallel
#' computations using multiApply function. The default value is NULL.
#'@return An object of class \code{s2dv_cube} containing the bias corrected
#'forecasts with the dimensions nexp, nobs and same dimensions as in the 'exp'
#'object. nexp is the number of experiment (i.e., 'dat_dim' in exp), and nobs is
#'the number of observation (i.e., 'dat_dim' in obs). If dat_dim is NULL, nexp
#'and nobs are omitted. If 'exp_cor' is provided the returned array will be with
#'the same dimensions as 'exp_cor'.
#'
#'@references Torralba, V., F.J. Doblas-Reyes, D. MacLeod, I. Christel and M.
#'Davis (2017). Seasonal climate prediction: a new source of information for
#'the management of wind energy resources. Journal of Applied Meteorology and
#'Climatology, 56, 1231-1247, \doi{10.1175/JAMC-D-16-0204.1}. (CLIM4ENERGY,
#'EUPORIAS, NEWA, RESILIENCE, SPECS)
#'
#'@examples
#'mod1 <- 1 : (1 * 3 * 4 * 5 * 6 * 7)
#'dim(mod1) <- c(dataset = 1, member = 3, sdate = 4, time = 5, lat = 6, lon = 7)
#'obs1 <- 1 : (1 * 1 * 4 * 5 * 6 * 7)
#'dim(obs1) <- c(dataset = 1, member = 1, sdate = 4, time = 5, lat = 6, lon = 7)
#'lon <- seq(0, 30, 5)
#'lat <- seq(0, 25, 5)
#'coords <- list(lat = lat, lon = lon)
#'exp <- list(data = mod1, coords = coords)
#'obs <- list(data = obs1, coords = coords)
#'attr(exp, 'class') <- 's2dv_cube'
#'attr(obs, 'class') <- 's2dv_cube'
#'a <- CST_BiasCorrection(exp = exp, obs = obs)
#'@import multiApply
#'@export
CST_BiasCorrection <- function(exp, obs, exp_cor = NULL, na.rm = FALSE,
memb_dim = 'member', sdate_dim = 'sdate',
dat_dim = NULL, ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(exp, 's2dv_cube') || !inherits(obs, 's2dv_cube')) {
stop("Parameter 'exp' and 'obs' must be of the class 's2dv_cube'.")
}
if (!is.null(exp_cor)) {
if (!inherits(exp_cor, 's2dv_cube')) {
stop("Parameter 'exp_cor' must be of the class 's2dv_cube'.")
}
}
BiasCorrected <- BiasCorrection(exp = exp$data, obs = obs$data, exp_cor = exp_cor$data,
memb_dim = memb_dim, sdate_dim = sdate_dim, dat_dim = dat_dim,
na.rm = na.rm, ncores = ncores)
if (is.null(exp_cor)) {
exp$data <- BiasCorrected
exp$attrs$Datasets <- c(exp$attrs$Datasets, obs$attrs$Datasets)
exp$attrs$source_files <- c(exp$attrs$source_files, obs$attrs$source_files)
return(exp)
} else {
exp_cor$data <- BiasCorrected
exp_cor$attrs$Datasets <- c(exp_cor$attrs$Datasets, exp$attrs$Datasets, obs$attrs$Datasets)
exp_cor$attrs$source_files <- c(exp_cor$attrs$source_files, exp$attrs$source_files, obs$attrs$source_files)
return(exp_cor)
}
}
#'Bias Correction based on the mean and standard deviation adjustment
#'
#'@author Verónica Torralba, \email{[email protected]}
#'@description This function applies the simple bias adjustment technique
#'described in Torralba et al. (2017). The adjusted forecasts have an equivalent
#'standard deviation and mean to that of the reference dataset.
#'
#'@param exp A multidimensional array with named dimensions containing the
#' seasonal forecast experiment data with at least time and member dimensions.
#'@param obs A multidimensional array with named dimensions containing the
#' observed data with at least time dimension.
#'@param exp_cor A multidimensional array with named dimensions containing the
#' seasonal forecast experiment to be corrected with at least time and member
#' dimension. If it is NULL, the 'exp' forecast will be corrected. If there is
#' only one corrected dataset, it should not have dataset dimension. If there
#' is a corresponding corrected dataset for each 'exp' forecast, the dataset
#' dimension must have the same length as in 'exp'. The default value is NULL.
#'@param na.rm A logical value indicating whether missing values should be
#' stripped before the computation proceeds, by default it is set to FALSE.
#'@param memb_dim A character string indicating the name of the member
#' dimension. By default, it is set to 'member'.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'.
#'@param dat_dim A character string indicating the name of dataset dimension.
#' The length of this dimension can be different between 'exp' and 'obs'.
#' The default value is NULL.
#'@param ncores An integer that indicates the number of cores for parallel
#' computations using multiApply function. The default value is NULL.
#'
#'@return An array containing the bias corrected forecasts with the dimensions
#'nexp, nobs and same dimensions as in the 'exp' object. nexp is the number of
#'experiment (i.e., 'dat_dim' in exp), and nobs is the number of observation
#'(i.e., 'dat_dim' in obs). If dat_dim is NULL, nexp and nobs are omitted. If
#''exp_cor' is provided the returned array will be with the same dimensions as
#''exp_cor'.
#'
#'@references Torralba, V., F.J. Doblas-Reyes, D. MacLeod, I. Christel and M.
#'Davis (2017). Seasonal climate prediction: a new source of information for the
#'management of wind energy resources. Journal of Applied Meteorology and
#'Climatology, 56, 1231-1247, \doi{10.1175/JAMC-D-16-0204.1}. (CLIM4ENERGY,
#'EUPORIAS, NEWA, RESILIENCE, SPECS)
#'
#'@examples
#'mod1 <- 1 : (1 * 3 * 4 * 5 * 6 * 7)
#'dim(mod1) <- c(dataset = 1, member = 3, sdate = 4, time = 5, lat = 6, lon = 7)
#'obs1 <- 1 : (1 * 1 * 4 * 5 * 6 * 7)
#'dim(obs1) <- c(dataset = 1, member = 1, sdate = 4, time = 5, lat = 6, lon = 7)
#'a <- BiasCorrection(exp = mod1, obs = obs1)
#'@import multiApply
#'@export
BiasCorrection <- function(exp, obs, exp_cor = NULL, na.rm = FALSE,
memb_dim = 'member', sdate_dim = 'sdate',
dat_dim = NULL, ncores = NULL) {
# Check inputs
## exp, obs
if (!is.array(exp) || !is.numeric(exp)) {
stop("Parameter 'exp' must be a numeric array.")
}
if (!is.array(obs) || !is.numeric(obs)) {
stop("Parameter 'obs' must be a numeric array.")
}
obsdims <- names(dim(obs))
expdims <- names(dim(exp))
if (is.null(expdims)) {
stop("Parameter 'exp' must have dimension names.")
}
if (is.null(obsdims)) {
stop("Parameter 'obs' must have dimension names.")
}
if (any(is.na(exp))) {
warning("Parameter 'exp' contains NA values.")
}
if (any(is.na(obs))) {
warning("Parameter 'obs' contains NA values.")
}
## exp_cor
if (!is.null(exp_cor)) {
exp_cordims <- names(dim(exp_cor))
if (is.null(exp_cordims)) {
stop("Parameter 'exp_cor' must have dimension names.")
}
}
## sdate_dim, memb_dim
if (!is.character(sdate_dim) || length(sdate_dim) != 1) {
stop("Parameter 'sdate_dim' must be a character string.")
}
if (!sdate_dim %in% expdims || !sdate_dim %in% obsdims) {
stop("Parameter 'sdate_dim' is not found in 'exp' or 'obs' dimension.")
}
if (dim(exp)[sdate_dim] == 1) {
stop("Parameter 'exp' must have dimension length of 'sdate_dim' bigger than 1.")
}
if (!all(c(memb_dim, sdate_dim) %in% expdims)) {
stop("Parameter 'exp' requires 'sdate_dim' and 'memb_dim' dimensions.")
}
if (memb_dim %in% obsdims) {
if (dim(obs)[memb_dim] != 1) {
stop("If parameter 'obs' has dimension 'memb_dim' its length must be equal to 1.")
}
} else {
obs <- InsertDim(obs, posdim = 1, lendim = 1, name = memb_dim)
}
if (!is.null(exp_cor)) {
if (!memb_dim %in% names(dim(exp_cor))) {
exp_cor <- InsertDim(exp_cor, posdim = 1, lendim = 1, name = memb_dim)
exp_cor_remove_memb <- TRUE
} else {
exp_cor_remove_memb <- FALSE
}
} else {
exp_cor_remove_memb <- FALSE
}
## dat_dim
if (!is.null(dat_dim)) {
if (!is.character(dat_dim) | length(dat_dim) > 1) {
stop("Parameter 'dat_dim' must be a character string.")
}
if (!dat_dim %in% names(dim(exp)) | !dat_dim %in% names(dim(obs))) {
stop("Parameter 'dat_dim' is not found in 'exp' or 'obs' dimension.",
" Set it as NULL if there is no dataset dimension.")
}
}
## exp, obs, and exp_cor (2)
name_exp <- sort(names(dim(exp)))
name_obs <- sort(names(dim(obs)))
name_exp <- name_exp[-which(name_exp == memb_dim)]
name_obs <- name_obs[-which(name_obs == memb_dim)]
if (!is.null(dat_dim)) {
name_exp <- name_exp[-which(name_exp == dat_dim)]
name_obs <- name_obs[-which(name_obs == dat_dim)]
}
if (!identical(length(name_exp), length(name_obs)) |
!identical(dim(exp)[name_exp], dim(obs)[name_obs])) {
stop("Parameter 'exp' and 'obs' must have same length of all dimensions",
" except 'memb_dim' and 'dat_dim'.")
}
if (!is.null(exp_cor)) {
name_exp_cor <- sort(names(dim(exp_cor)))
name_exp <- sort(names(dim(exp)))
if (!is.null(dat_dim)) {
if (dat_dim %in% exp_cordims) {
if (!identical(dim(exp)[dat_dim], dim(exp_cor)[dat_dim])) {
stop("If parameter 'exp_cor' has dataset dimension, it must be",
" equal to dataset dimension of 'exp'.")
}
name_exp_cor <- name_exp_cor[-which(name_exp_cor == dat_dim)]
target_dims_cor <- c(memb_dim, sdate_dim, dat_dim)
} else {
target_dims_cor <- c(memb_dim, sdate_dim)
}
} else {
target_dims_cor <- c(memb_dim, sdate_dim)
}
name_exp <- name_exp[-which(name_exp %in% c(memb_dim, sdate_dim, dat_dim))]
name_exp_cor <- name_exp_cor[-which(name_exp_cor %in% target_dims_cor)]
if (!identical(length(name_exp), length(name_exp_cor)) |
!identical(dim(exp)[name_exp], dim(exp_cor)[name_exp_cor])) {
stop("Parameter 'exp' and 'exp_cor' must have the same length of ",
"all common dimensions except 'dat_dim', 'sdate_dim' and 'memb_dim'.")
}
}
## na.rm
if (!is.logical(na.rm)) {
na.rm <- FALSE
warning("Paramater 'na.rm' must be a logical, it has been set to FALSE.")
}
if (length(na.rm) > 1) {
na.rm <- na.rm[1]
warning("Paramter 'na.rm' has length greater than 1, and only the fist element is used.")
}
## ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | ncores %% 1 != 0 | ncores <= 0 |
length(ncores) > 1) {
stop("Parameter 'ncores' must be either NULL or a positive integer.")
}
}
if (is.null(exp_cor)) {
BiasCorrected <- Apply(data = list(var_obs = obs, var_exp = exp),
target_dims = list(c(memb_dim, sdate_dim, dat_dim),
c(memb_dim, sdate_dim, dat_dim)),
fun = .sbc, dat_dim = dat_dim,
na.rm = na.rm, ncores = ncores)$output1
} else {
BiasCorrected <- Apply(data = list(var_obs = obs,
var_exp = exp,
var_cor = exp_cor),
target_dims = list(c(memb_dim, sdate_dim, dat_dim),
c(memb_dim, sdate_dim, dat_dim),
target_dims_cor),
fun = .sbc, dat_dim = dat_dim,
na.rm = na.rm, ncores = ncores)$output1
}
if (!is.null(dat_dim)) {
pos <- match(c(names(dim(exp))[-which(names(dim(exp)) == dat_dim)], 'nexp', 'nobs'),
names(dim(BiasCorrected)))
BiasCorrected <- aperm(BiasCorrected, pos)
} else {
pos <- match(c(names(dim(exp))), names(dim(BiasCorrected)))
BiasCorrected <- aperm(BiasCorrected, pos)
}
if (exp_cor_remove_memb) {
dim(BiasCorrected) <- dim(BiasCorrected)[-which(names(dim(BiasCorrected)) == memb_dim)]
}
return(BiasCorrected)
}
.sbc <- function(var_obs, var_exp, var_cor = NULL, dat_dim = NULL, na.rm = FALSE) {
# exp: [memb, sdate, (dat)]
# obs: [memb, sdate, (dat)]
# ref: [memb, sdate, (dat)] or NULL
if (is.null(dat_dim)) {
nexp <- 1
nobs <- 1
var_exp <- InsertDim(var_exp, posdim = 3, lendim = 1, name = 'dataset')
var_obs <- InsertDim(var_obs, posdim = 3, lendim = 1, name = 'dataset')
if (!is.null(var_cor)) {
var_cor <- InsertDim(var_cor, posdim = 3, lendim = 1, name = 'dataset')
}
} else {
nexp <- as.numeric(dim(var_exp)[dat_dim])
nobs <- as.numeric(dim(var_obs)[dat_dim])
}
if (!is.null(var_cor)) {
if (length(dim(var_cor)) == 2) { # ref: [memb, sdate]
cor_dat_dim <- FALSE
} else { # ref: [memb, sdate, dat]
cor_dat_dim <- TRUE
}
corrected <- array(dim = c(dim(var_cor)[1:2], nexp = nexp, nobs = nobs))
} else {
ntime <- dim(var_exp)[2]
corrected <- array(dim = c(dim(var_exp)[1:2], nexp = nexp, nobs = nobs))
}
for (i in 1:nexp) {
for (j in 1:nobs) {
if (is.null(var_cor)) {
for (t in 1:ntime) {
# parameters
sd_obs <- sd(var_obs[, -t, j], na.rm = na.rm)
sd_exp <- sd(var_exp[, -t, i], na.rm = na.rm)
clim_exp <- mean(var_exp[, -t, i], na.rm = na.rm)
clim_obs <- mean(var_obs[, -t, j], na.rm = na.rm)
# bias corrected forecast
corrected[, t, i, j] <- ((var_exp[, t, i] - clim_exp) * (sd_obs / sd_exp)) + clim_obs
}
} else {
# parameters
sd_obs <- sd(var_obs[, , j], na.rm = na.rm)
sd_exp <- sd(var_exp[, , i], na.rm = na.rm)
clim_exp <- mean(var_exp[, , i], na.rm = na.rm)
clim_obs <- mean(var_obs[, , j], na.rm = na.rm)
# bias corrected forecast
if (cor_dat_dim) {
corrected[, , i, j] <- ((var_cor[, , i] - clim_exp) * (sd_obs / sd_exp)) + clim_obs
} else {
corrected[, , i, j] <- ((var_cor - clim_exp) * (sd_obs / sd_exp)) + clim_obs
}
}
}
}
if (is.null(dat_dim)) {
if (!is.null(var_cor)) {
dim(corrected) <- dim(var_cor)[1:2]
} else {
dim(corrected) <- dim(var_exp)[1:2]
}
}
return(corrected)
}
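# Illustrative sketch (not part of the package): the core adjustment applied by
# .sbc() above on a toy forecast/hindcast/observation set. The numbers are
# made-up assumptions; see BiasCorrection() for the full dimension-aware
# interface. Wrapped in if (FALSE) so it is never executed.
if (FALSE) {
  fc <- c(10, 12, 14)             # forecast ensemble to be corrected
  hc <- c(9, 11, 13, 10, 12, 14)  # hindcast values used for training
  ob <- c(8, 12, 10, 11, 9, 13)   # observations used for training
  corrected <- ((fc - mean(hc)) * (sd(ob) / sd(hc))) + mean(ob)
  # the corrected forecast has its mean shifted to the observed climatology and
  # its spread rescaled by the ratio of standard deviations
}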
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/CST_BiasCorrection.R
|
#'Forecast Calibration
#'
#'@author Verónica Torralba, \email{[email protected]}
#'@author Bert Van Schaeybroeck, \email{[email protected]}
#'@description Five types of member-by-member bias correction can be performed.
#'The \code{"bias"} method corrects the bias only, the \code{"evmos"} method
#'applies a variance inflation technique to ensure the correction of the bias
#'and the correspondence of variance between forecast and observation (Van
#'Schaeybroeck and Vannitsem, 2011). The ensemble calibration methods
#'\code{"mse_min"} and \code{"crps_min"} correct the bias, the overall forecast
#'variance and the ensemble spread as described in Doblas-Reyes et al. (2005)
#'and Van Schaeybroeck and Vannitsem (2015), respectively. While the
#'\code{"mse_min"} method minimizes a constrained mean-squared error using three
#'parameters, the \code{"crps_min"} method features four parameters and
#'minimizes the Continuous Ranked Probability Score (CRPS). The
#'\code{"rpc-based"} method adjusts the forecast variance ensuring that the
#'ratio of predictable components (RPC) is equal to one, as in Eade et al.
#'(2014). It is equivalent to function \code{Calibration} but for objects
#'of class \code{s2dv_cube}.
#'
#'@param exp An object of class \code{s2dv_cube} as returned by \code{CST_Start}
#' function with at least 'sdate' and 'member' dimensions, containing the
#' seasonal hindcast experiment data in the element named \code{data}. The
#' hindcast is used to calibrate the forecast in case the forecast is provided;
#' if not, the same hindcast will be calibrated instead.
#'@param obs An object of class \code{s2dv_cube} as returned by \code{CST_Start}
#' function with at least 'sdate' dimension, containing the observed data in
#' the element named \code{$data}.
#'@param exp_cor An optional object of class \code{s2dv_cube} as returned by
#' \code{CST_Start} function with at least 'sdate' and 'member' dimensions,
#' containing the seasonal forecast experiment data in the element named
#' \code{data}. If the forecast is provided, it will be calibrated using the
#' hindcast and observations; if not, the hindcast will be calibrated instead.
#' If there is only one corrected dataset, it should not have dataset dimension.
#' If there is a corresponding corrected dataset for each 'exp' forecast, the
#' dataset dimension must have the same length as in 'exp'. The default value
#' is NULL.
#'@param cal.method A character string indicating the calibration method used,
#' can be either \code{bias}, \code{evmos}, \code{mse_min}, \code{crps_min} or
#' \code{rpc-based}. Default value is \code{mse_min}.
#'@param eval.method A character string indicating the sampling method used, it
#' can be either \code{in-sample} or \code{leave-one-out}. Default value is the
#' \code{leave-one-out} cross validation. In case the forecast is provided, any
#' chosen eval.method is over-ruled and a third option is used.
#'@param multi.model A boolean that is used only for the \code{mse_min}
#' method. If multi-model ensembles or ensembles of different sizes are used,
#' it must be set to \code{TRUE}. By default it is \code{FALSE}. Differences
#' between the two approaches are generally small but may become large when
#' using small ensemble sizes. Using multi.model when the calibration method is
#' \code{bias}, \code{evmos} or \code{crps_min} will not affect the result.
#'@param na.fill A boolean that indicates what happens in case calibration is
#' not possible or will yield unreliable results. This happens when three or
#' fewer forecast-observation pairs are available to perform the training phase
#' of the calibration. By default \code{na.fill} is set to true such that NA
#' values will be returned. If \code{na.fill} is set to false, the uncorrected
#' data will be returned.
#'@param na.rm A boolean that indicates whether to remove the NA values or not.
#' The default value is \code{TRUE}. See Details section for further
#' information about its use and compatibility with \code{na.fill}.
#'@param apply_to A character string that indicates whether to apply the
#' calibration to all the forecast (\code{"all"}) or only to those where the
#' correlation between the ensemble mean and the observations is statistically
#' significant (\code{"sign"}). Only useful if \code{cal.method == "rpc-based"}.
#'@param alpha A numeric value indicating the significance level for the
#' correlation test. Only useful if \code{cal.method == "rpc-based" & apply_to
#' == "sign"}.
#'@param memb_dim A character string indicating the name of the member dimension.
#' By default, it is set to 'member'.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'.
#'@param dat_dim A character string indicating the name of dataset dimension.
#' The length of this dimension can be different between 'exp' and 'obs'.
#' The default value is NULL.
#'@param ncores An integer that indicates the number of cores for parallel
#' computation using the multiApply function. The default value is NULL (one core).
#'
#'@return An object of class \code{s2dv_cube} containing the calibrated
#'forecasts in the element \code{data}, with dimensions nexp, nobs and the same
#'dimensions as in the 'exp' object. nexp is the number of experiments
#'(i.e., 'dat_dim' in exp), and nobs is the number of observations (i.e.,
#''dat_dim' in obs). If dat_dim is NULL, nexp and nobs are omitted. If 'exp_cor'
#'is provided, the returned array will have the same dimensions as 'exp_cor'.
#'
#'@details Both the \code{na.fill} and \code{na.rm} parameters can be used to
#'indicate how the function has to handle the NA values. The \code{na.fill}
#'parameter checks whether there are more than three forecast-observation pairs
#'to perform the computation. In case there are three or fewer pairs, the
#'computation is not carried out, and the value returned by the function depends
#'on the value of this parameter (either NA if \code{na.fill == TRUE} or the
#'uncorrected value if \code{na.fill == FALSE}). On the other hand, \code{na.rm}
#'is used to indicate to the function whether to remove the missing values during
#'the computation of the parameters needed to perform the calibration.
#'
#'@references Doblas-Reyes F.J, Hagedorn R, Palmer T.N. The rationale behind the
#'success of multi-model ensembles in seasonal forecasting-II calibration and
#'combination. Tellus A. 2005;57:234-252. \doi{10.1111/j.1600-0870.2005.00104.x}
#'@references Eade, R., Smith, D., Scaife, A., Wallace, E., Dunstone, N.,
#'Hermanson, L., & Robinson, N. (2014). Do seasonal-to-decadal climate
#'predictions underestimate the predictability of the real world? Geophysical
#'Research Letters, 41(15), 5620-5628. \doi{10.1002/2014GL061146}
#'@references Van Schaeybroeck, B., & Vannitsem, S. (2011). Post-processing
#'through linear regression. Nonlinear Processes in Geophysics, 18(2),
#'147. \doi{10.5194/npg-18-147-2011}
#'@references Van Schaeybroeck, B., & Vannitsem, S. (2015). Ensemble
#'post-processing using member-by-member approaches: theoretical aspects.
#'Quarterly Journal of the Royal Meteorological Society, 141(688), 807-818.
#'\doi{10.1002/qj.2397}
#'
#'@seealso \code{\link{CST_Start}}
#'
#'@examples
#'# Example 1:
#'mod1 <- 1 : (1 * 3 * 4 * 5 * 6 * 7)
#'dim(mod1) <- c(dataset = 1, member = 3, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'obs1 <- 1 : (1 * 1 * 4 * 5 * 6 * 7)
#'dim(obs1) <- c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'lon <- seq(0, 30, 5)
#'lat <- seq(0, 25, 5)
#'coords <- list(lat = lat, lon = lon)
#'exp <- list(data = mod1, coords = coords)
#'obs <- list(data = obs1, coords = coords)
#'attr(exp, 'class') <- 's2dv_cube'
#'attr(obs, 'class') <- 's2dv_cube'
#'a <- CST_Calibration(exp = exp, obs = obs, cal.method = "mse_min", eval.method = "in-sample")
#'
#'# Example 2:
#'mod1 <- 1 : (1 * 3 * 4 * 5 * 6 * 7)
#'mod2 <- 1 : (1 * 3 * 1 * 5 * 6 * 7)
#'dim(mod1) <- c(dataset = 1, member = 3, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'dim(mod2) <- c(dataset = 1, member = 3, sdate = 1, ftime = 5, lat = 6, lon = 7)
#'obs1 <- 1 : (1 * 1 * 4 * 5 * 6 * 7)
#'dim(obs1) <- c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'lon <- seq(0, 30, 5)
#'lat <- seq(0, 25, 5)
#'coords <- list(lat = lat, lon = lon)
#'exp <- list(data = mod1, coords = coords)
#'obs <- list(data = obs1, coords = coords)
#'exp_cor <- list(data = mod2, lat = lat, lon = lon)
#'attr(exp, 'class') <- 's2dv_cube'
#'attr(obs, 'class') <- 's2dv_cube'
#'attr(exp_cor, 'class') <- 's2dv_cube'
#'a <- CST_Calibration(exp = exp, obs = obs, exp_cor = exp_cor, cal.method = "evmos")
#'
#'@importFrom s2dv InsertDim Reorder
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_Calibration <- function(exp, obs, exp_cor = NULL, cal.method = "mse_min",
eval.method = "leave-one-out", multi.model = FALSE,
na.fill = TRUE, na.rm = TRUE, apply_to = NULL,
alpha = NULL, memb_dim = 'member', sdate_dim = 'sdate',
dat_dim = NULL, ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(exp, "s2dv_cube") || !inherits(obs, "s2dv_cube")) {
stop("Parameter 'exp' and 'obs' must be of the class 's2dv_cube'.")
}
if (!is.null(exp_cor)) {
if (!inherits(exp_cor, "s2dv_cube")) {
stop("Parameter 'exp_cor' must be of the class 's2dv_cube'.")
}
}
Calibration <- Calibration(exp = exp$data, obs = obs$data, exp_cor = exp_cor$data,
cal.method = cal.method, eval.method = eval.method,
multi.model = multi.model, na.fill = na.fill,
na.rm = na.rm, apply_to = apply_to, alpha = alpha,
memb_dim = memb_dim, sdate_dim = sdate_dim,
dat_dim = dat_dim, ncores = ncores)
if (is.null(exp_cor)) {
exp$data <- Calibration
exp$attrs$Datasets <- c(exp$attrs$Datasets, obs$attrs$Datasets)
exp$attrs$source_files <- c(exp$attrs$source_files, obs$attrs$source_files)
return(exp)
} else {
exp_cor$data <- Calibration
exp_cor$attrs$Datasets <- c(exp_cor$attrs$Datasets, exp$attrs$Datasets, obs$attrs$Datasets)
exp_cor$attrs$source_files <- c(exp_cor$attrs$source_files, exp$attrs$source_files, obs$attrs$source_files)
return(exp_cor)
}
}
#'Forecast Calibration
#'
#'@author Verónica Torralba, \email{[email protected]}
#'@author Bert Van Schaeybroeck, \email{[email protected]}
#'@description Five types of member-by-member bias correction can be performed.
#'The \code{"bias"} method corrects the bias only, the \code{"evmos"} method
#'applies a variance inflation technique to ensure the correction of the bias
#'and the correspondence of variance between forecast and observation (Van
#'Schaeybroeck and Vannitsem, 2011). The ensemble calibration methods
#'\code{"mse_min"} and \code{"crps_min"} correct the bias, the overall forecast
#'variance and the ensemble spread as described in Doblas-Reyes et al. (2005)
#'and Van Schaeybroeck and Vannitsem (2015), respectively. While the
#'\code{"mse_min"} method minimizes a constrained mean-squared error using three
#'parameters, the \code{"crps_min"} method features four parameters and
#'minimizes the Continuous Ranked Probability Score (CRPS). The
#'\code{"rpc-based"} method adjusts the forecast variance ensuring that the
#'ratio of predictable components (RPC) is equal to one, as in Eade et al.
#'(2014). Both in-sample and out-of-sample (leave-one-out cross-validation)
#'calibration are possible.
#'
#'@param exp A multidimensional array with named dimensions (at least 'sdate'
#' and 'member') containing the seasonal hindcast experiment data. The hindcast
#' is used to calibrate the forecast in case the forecast is provided; if not,
#' the same hindcast will be calibrated instead.
#'@param obs A multidimensional array with named dimensions (at least 'sdate')
#' containing the observed data.
#'@param exp_cor An optional multidimensional array with named dimensions (at
#' least 'sdate' and 'member') containing the seasonal forecast experiment
#' data. If the forecast is provided, it will be calibrated using the hindcast
#' and observations; if not, the hindcast will be calibrated instead. If there
#' is only one corrected dataset, it should not have dataset dimension. If there
#' is a corresponding corrected dataset for each 'exp' forecast, the dataset
#' dimension must have the same length as in 'exp'. The default value is NULL.
#'@param cal.method A character string indicating the calibration method used,
#' can be either \code{bias}, \code{evmos}, \code{mse_min}, \code{crps_min}
#' or \code{rpc-based}. Default value is \code{mse_min}.
#'@param eval.method A character string indicating the sampling method used. It
#' can be either \code{in-sample} or \code{leave-one-out}. Default value is
#' the \code{leave-one-out} cross validation. In case the forecast is
#' provided, any chosen eval.method is overruled and the 'hindcast-vs-forecast'
#' evaluation is used instead.
#'@param multi.model A boolean that is used only for the \code{mse_min}
#' method. If multi-model ensembles or ensembles of different sizes are used,
#' it must be set to \code{TRUE}. By default it is \code{FALSE}. Differences
#' between the two approaches are generally small but may become large when
#' using small ensemble sizes. Using multi.model when the calibration method
#' is \code{bias}, \code{evmos} or \code{crps_min} will not affect the result.
#'@param na.fill A boolean that indicates what happens in case calibration is
#' not possible or would yield unreliable results. This happens when three or
#' fewer forecast-observation pairs are available to perform the training phase
#' of the calibration. By default \code{na.fill} is set to TRUE, so that NA
#' values will be returned. If \code{na.fill} is set to FALSE, the uncorrected
#' data will be returned.
#'@param na.rm A boolean that indicates whether to remove the NA values or
#' not. The default value is \code{TRUE}.
#'@param apply_to A character string that indicates whether to apply the
#' calibration to all the forecast (\code{"all"}) or only to those where the
#' correlation between the ensemble mean and the observations is statistically
#' significant (\code{"sign"}). Only useful if \code{cal.method == "rpc-based"}.
#'@param alpha A numeric value indicating the significance level for the
#' correlation test. Only useful if \code{cal.method == "rpc-based" & apply_to ==
#' "sign"}.
#'@param memb_dim A character string indicating the name of the member
#' dimension. By default, it is set to 'member'.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'.
#'@param dat_dim A character string indicating the name of dataset dimension.
#' The length of this dimension can be different between 'exp' and 'obs'.
#' The default value is NULL.
#'@param ncores An integer that indicates the number of cores for parallel
#' computation using multiApply function. The default value is NULL (one core).
#'
#'@return An array containing the calibrated forecasts with the dimensions
#'nexp, nobs and the same dimensions as in the 'exp' array. nexp is the number
#'of experiments (i.e., 'dat_dim' in exp), and nobs is the number of
#'observations (i.e., 'dat_dim' in obs). If dat_dim is NULL, nexp and nobs are
#'omitted. If 'exp_cor' is provided, the returned array will have the same
#'dimensions as 'exp_cor'.
#'
#'@details Both the \code{na.fill} and \code{na.rm} parameters can be used to
#'indicate how the function has to handle the NA values. The \code{na.fill}
#'parameter checks whether there are more than three forecast-observation pairs
#'to perform the computation. In case there are three or fewer pairs, the
#'computation is not carried out, and the value returned by the function depends
#'on the value of this parameter (either NA if \code{na.fill == TRUE} or the
#'uncorrected value if \code{na.fill == FALSE}). On the other hand, \code{na.rm}
#'is used to indicate to the function whether to remove the missing values during
#'the computation of the parameters needed to perform the calibration.
#'
#'@references Doblas-Reyes F.J, Hagedorn R, Palmer T.N. The rationale behind the
#'success of multi-model ensembles in seasonal forecasting-II calibration and
#'combination. Tellus A. 2005;57:234-252. \doi{10.1111/j.1600-0870.2005.00104.x}
#'@references Eade, R., Smith, D., Scaife, A., Wallace, E., Dunstone, N.,
#'Hermanson, L., & Robinson, N. (2014). Do seasonal-to-decadal climate
#'predictions underestimate the predictability of the real world? Geophysical
#'Research Letters, 41(15), 5620-5628. \doi{10.1002/2014GL061146}
#'@references Van Schaeybroeck, B., & Vannitsem, S. (2011). Post-processing
#'through linear regression. Nonlinear Processes in Geophysics, 18(2),
#'147. \doi{10.5194/npg-18-147-2011}
#'@references Van Schaeybroeck, B., & Vannitsem, S. (2015). Ensemble
#'post-processing using member-by-member approaches: theoretical aspects.
#'Quarterly Journal of the Royal Meteorological Society, 141(688), 807-818.
#'\doi{10.1002/qj.2397}
#'
#'@seealso \code{\link{CST_Start}}
#'
#'@examples
#'mod1 <- 1 : (1 * 3 * 4 * 5 * 6 * 7)
#'dim(mod1) <- c(dataset = 1, member = 3, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'obs1 <- 1 : (1 * 1 * 4 * 5 * 6 * 7)
#'dim(obs1) <- c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'a <- Calibration(exp = mod1, obs = obs1)
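#'
#'# A further minimal sketch with a toy forecast array ('mod2', a hypothetical
#'# example object): the hindcast 'mod1' and observations 'obs1' are used to
#'# train the calibration and 'mod2' is the forecast being corrected.
#'mod2 <- 1 : (1 * 3 * 1 * 5 * 6 * 7)
#'dim(mod2) <- c(dataset = 1, member = 3, sdate = 1, ftime = 5, lat = 6, lon = 7)
#'b <- Calibration(exp = mod1, obs = obs1, exp_cor = mod2, cal.method = "evmos")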
#'
#'@importFrom s2dv InsertDim Reorder
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
Calibration <- function(exp, obs, exp_cor = NULL,
cal.method = "mse_min", eval.method = "leave-one-out",
multi.model = FALSE, na.fill = TRUE,
na.rm = TRUE, apply_to = NULL, alpha = NULL,
memb_dim = 'member', sdate_dim = 'sdate', dat_dim = NULL,
ncores = NULL) {
# Check inputs
## exp, obs
if (!is.array(exp) || !is.numeric(exp)) {
stop("Parameter 'exp' must be a numeric array.")
}
if (!is.array(obs) || !is.numeric(obs)) {
stop("Parameter 'obs' must be a numeric array.")
}
expdims <- names(dim(exp))
obsdims <- names(dim(obs))
if (is.null(expdims)) {
stop("Parameter 'exp' must have dimension names.")
}
if (is.null(obsdims)) {
stop("Parameter 'obs' must have dimension names.")
}
if (any(is.na(exp))) {
warning("Parameter 'exp' contains NA values.")
}
if (any(is.na(obs))) {
warning("Parameter 'obs' contains NA values.")
}
## exp_cor
if (!is.null(exp_cor)) {
# if exp_cor is provided, it will be calibrated: "calibrate forecast instead of hindcast"
# if exp_cor is provided, eval.method is overruled (because if exp_cor is provided, the
    # train data will be all data of "exp" and the evaluation data will be all data of "exp_cor";
# no need for "leave-one-out" or "in-sample")
eval.method <- "hindcast-vs-forecast"
expcordims <- names(dim(exp_cor))
if (is.null(expcordims)) {
stop("Parameter 'exp_cor' must have dimension names.")
}
if (any(is.na(exp_cor))) {
warning("Parameter 'exp_cor' contains NA values.")
}
}
## dat_dim
if (!is.null(dat_dim)) {
if (!is.character(dat_dim) | length(dat_dim) > 1) {
stop("Parameter 'dat_dim' must be a character string.")
}
if (!dat_dim %in% names(dim(exp)) | !dat_dim %in% names(dim(obs))) {
stop("Parameter 'dat_dim' is not found in 'exp' or 'obs' dimension.",
" Set it as NULL if there is no dataset dimension.")
}
}
## sdate_dim and memb_dim
if (!is.character(sdate_dim)) {
    stop("Parameter 'sdate_dim' should be a character string indicating the ",
         "name of the dimension where start dates are stored in 'exp'.")
}
if (length(sdate_dim) > 1) {
sdate_dim <- sdate_dim[1]
warning("Parameter 'sdate_dim' has length greater than 1 and only",
" the first element will be used.")
}
if (!is.character(memb_dim)) {
    stop("Parameter 'memb_dim' should be a character string indicating the ",
         "name of the dimension where members are stored in 'exp'.")
}
if (length(memb_dim) > 1) {
memb_dim <- memb_dim[1]
warning("Parameter 'memb_dim' has length greater than 1 and only",
" the first element will be used.")
}
target_dims_exp <- c(memb_dim, sdate_dim, dat_dim)
target_dims_obs <- c(sdate_dim, dat_dim)
if (!all(target_dims_exp %in% expdims)) {
stop("Parameter 'exp' requires 'sdate_dim' and 'memb_dim' dimensions.")
}
if (!all(target_dims_obs %in% obsdims)) {
stop("Parameter 'obs' must have the dimension defined in sdate_dim ",
"parameter.")
}
if (memb_dim %in% obsdims) {
if (dim(obs)[memb_dim] != 1) {
      warning("Parameter 'obs' has dimension 'memb_dim' with length larger",
              " than 1. Only the first member will be used.")
}
obs <- Subset(obs, along = memb_dim, indices = 1, drop = "selected")
}
if (!is.null(exp_cor)) {
if (!memb_dim %in% names(dim(exp_cor))) {
exp_cor <- InsertDim(exp_cor, posdim = 1, lendim = 1, name = memb_dim)
exp_cor_remove_memb <- TRUE
} else {
exp_cor_remove_memb <- FALSE
}
} else {
exp_cor_remove_memb <- FALSE
}
## exp, obs, and exp_cor (2)
name_exp <- sort(names(dim(exp)))
name_obs <- sort(names(dim(obs)))
name_exp <- name_exp[-which(name_exp == memb_dim)]
if (!is.null(dat_dim)) {
name_exp <- name_exp[-which(name_exp == dat_dim)]
name_obs <- name_obs[-which(name_obs == dat_dim)]
}
if (!identical(length(name_exp), length(name_obs)) |
!identical(dim(exp)[name_exp], dim(obs)[name_obs])) {
stop("Parameter 'exp' and 'obs' must have same length of all ",
"dimensions except 'memb_dim' and 'dat_dim'.")
}
if (!is.null(exp_cor)) {
name_exp_cor <- sort(names(dim(exp_cor)))
name_exp <- sort(names(dim(exp)))
if (!is.null(dat_dim)) {
if (dat_dim %in% expcordims) {
if (!identical(dim(exp)[dat_dim], dim(exp_cor)[dat_dim])) {
stop("If parameter 'exp_cor' has dataset dimension, it must be",
" equal to dataset dimension of 'exp'.")
}
name_exp_cor <- name_exp_cor[-which(name_exp_cor == dat_dim)]
target_dims_cor <- c(memb_dim, sdate_dim, dat_dim)
} else {
target_dims_cor <- c(memb_dim, sdate_dim)
}
} else {
target_dims_cor <- c(memb_dim, sdate_dim)
}
name_exp <- name_exp[-which(name_exp %in% target_dims_exp)]
name_exp_cor <- name_exp_cor[-which(name_exp_cor %in% target_dims_cor)]
if (!identical(length(name_exp), length(name_exp_cor)) |
!identical(dim(exp)[name_exp], dim(exp_cor)[name_exp_cor])) {
stop("Parameter 'exp' and 'exp_cor' must have the same length of ",
"all common dimensions except 'dat_dim', 'sdate_dim' and 'memb_dim'.")
}
}
## ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | ncores %% 1 != 0 | ncores <= 0 |
length(ncores) > 1) {
stop("Parameter 'ncores' must be either NULL or a positive integer.")
}
}
## na.rm
if (!inherits(na.rm, "logical")) {
stop("Parameter 'na.rm' must be a logical value.")
}
## na.fill
if (!inherits(na.fill, "logical")) {
stop("Parameter 'na.fill' must be a logical value.")
}
## cal.method, apply_to, alpha
if (!any(cal.method %in% c('bias', 'evmos', 'mse_min', 'crps_min', 'rpc-based'))) {
stop("Parameter 'cal.method' must be a character string indicating the calibration method used.")
}
if (cal.method == 'rpc-based') {
if (is.null(apply_to)) {
apply_to <- 'sign'
warning("Parameter 'apply_to' cannot be NULL for 'rpc-based' method so it ",
"has been set to 'sign', as in Eade et al. (2014).")
} else if (!apply_to %in% c('all','sign')) {
stop("Parameter 'apply_to' must be either 'all' or 'sign' when 'rpc-based' ",
"method is used.")
}
if (apply_to == 'sign') {
if (is.null(alpha)) {
alpha <- 0.1
warning("Parameter 'alpha' cannot be NULL for 'rpc-based' method so it ",
"has been set to 0.1, as in Eade et al. (2014).")
} else if (!is.numeric(alpha) | alpha <= 0 | alpha >= 1) {
stop("Parameter 'alpha' must be a number between 0 and 1.")
}
}
}
## eval.method
if (!any(eval.method %in% c('in-sample', 'leave-one-out', 'hindcast-vs-forecast'))) {
stop(paste0("Parameter 'eval.method' must be a character string indicating ",
"the sampling method used ('in-sample', 'leave-one-out' or ",
"'hindcast-vs-forecast')."))
}
## multi.model
if (!inherits(multi.model, "logical")) {
stop("Parameter 'multi.model' must be a logical value.")
}
if (multi.model & !(cal.method == "mse_min")) {
warning(paste0("The 'multi.model' parameter is ignored when using the ",
"calibration method '", cal.method, "'."))
}
## data sufficiently large
data.set.sufficiently.large.out <-
Apply(data = list(exp = exp, obs = obs),
target_dims = list(exp = target_dims_exp, obs = target_dims_obs),
fun = .data.set.sufficiently.large, dat_dim = dat_dim,
ncores = ncores)$output1
if (!all(data.set.sufficiently.large.out)) {
if (na.fill) {
      warning("Some forecast data could not be corrected due to lack of data",
              " and is replaced with NA values.")
} else {
      warning("Some forecast data could not be corrected due to lack of data",
              " and is replaced with uncorrected values.")
}
}
if (is.null(exp_cor)) {
calibrated <- Apply(data = list(exp = exp, obs = obs), dat_dim = dat_dim,
cal.method = cal.method, eval.method = eval.method, multi.model = multi.model,
na.fill = na.fill, na.rm = na.rm, apply_to = apply_to, alpha = alpha,
target_dims = list(exp = target_dims_exp, obs = target_dims_obs),
ncores = ncores, fun = .cal)$output1
} else {
calibrated <- Apply(data = list(exp = exp, obs = obs, exp_cor = exp_cor),
dat_dim = dat_dim, cal.method = cal.method, eval.method = eval.method,
multi.model = multi.model, na.fill = na.fill, na.rm = na.rm,
apply_to = apply_to, alpha = alpha,
target_dims = list(exp = target_dims_exp, obs = target_dims_obs,
exp_cor = target_dims_cor),
ncores = ncores, fun = .cal)$output1
}
if (!is.null(dat_dim)) {
pos <- match(c(names(dim(exp))[-which(names(dim(exp)) == dat_dim)], 'nexp', 'nobs'),
names(dim(calibrated)))
calibrated <- aperm(calibrated, pos)
} else {
pos <- match(c(names(dim(exp))), names(dim(calibrated)))
calibrated <- aperm(calibrated, pos)
}
if (exp_cor_remove_memb) {
dim(calibrated) <- dim(calibrated)[-which(names(dim(calibrated)) == memb_dim)]
}
dims <- dim(calibrated)
if (is.logical(calibrated)) {
calibrated <- array(as.numeric(calibrated), dim = dims)
}
return(calibrated)
}
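# Check that more than 3 forecast-observation pairs (with non-NA observation and
# at least one non-NA member) are available for training; returns a logical,
# one value per exp/obs dataset combination when 'dat_dim' is used.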
.data.set.sufficiently.large <- function(exp, obs, dat_dim = NULL) {
amt.min.samples <- 3
if (is.null(dat_dim)) {
amt.good.pts <- sum(!is.na(obs) & !apply(exp, c(2), function(x) all(is.na(x))))
return(amt.good.pts > amt.min.samples)
} else {
nexp <- as.numeric(dim(exp)[dat_dim])
nobs <- as.numeric(dim(obs)[dat_dim])
amt.good.pts <- NULL
for (i in 1:nexp) {
for (j in 1:nobs) {
agp <- sum(!is.na(obs[, j, drop = FALSE]) &
!apply(exp[, , i, drop = FALSE], c(2),
function(x) all(is.na(x))))
amt.good.pts <- c(amt.good.pts, agp)
}
}
return(amt.good.pts > amt.min.samples)
}
}
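# Build the list of evaluation/training start-date indices for the chosen
# cross-validation scheme: 'leave-one-out', 'in-sample' or 'hindcast-vs-forecast'
# (train on all 'exp' start dates, evaluate on all 'exp_cor' start dates).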
.make.eval.train.dexes <- function(eval.method, amt.points, amt.points_cor) {
if (eval.method == "leave-one-out") {
dexes.lst <- lapply(seq(1, amt.points), function(x) return(list(eval.dexes = x,
train.dexes = seq(1, amt.points)[-x])))
} else if (eval.method == "in-sample") {
dexes.lst <- list(list(eval.dexes = seq(1, amt.points),
train.dexes = seq(1, amt.points)))
} else if (eval.method == "hindcast-vs-forecast") {
dexes.lst <- list(list(eval.dexes = seq(1,amt.points_cor),
train.dexes = seq(1, amt.points)))
} else {
stop(paste0("unknown sampling method: ", eval.method))
}
return(dexes.lst)
}
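# Core calibration routine applied by Apply to each [memb, sdate, (dat)] slice:
# it loops over the exp/obs dataset combinations and, for every cross-validation
# fold, trains the chosen method on the training start dates and corrects the
# evaluation start dates.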
.cal <- function(exp, obs, exp_cor = NULL, dat_dim = NULL, cal.method = "mse_min",
eval.method = "leave-one-out", multi.model = FALSE, na.fill = TRUE,
na.rm = TRUE, apply_to = NULL, alpha = NULL) {
# exp: [memb, sdate, (dat)]
# obs: [sdate (dat)]
# exp_cor: [memb, sdate, (dat)] or NULL
if (is.null(dat_dim)) {
nexp <- 1
nobs <- 1
exp <- InsertDim(exp, posdim = 3, lendim = 1, name = 'dataset')
obs <- InsertDim(obs, posdim = 2, lendim = 1, name = 'dataset')
} else {
nexp <- as.numeric(dim(exp)[dat_dim])
nobs <- as.numeric(dim(obs)[dat_dim])
}
if (is.null(exp_cor)) {
# generate a copy of exp so that the same function can run for both cases
exp_cor <- exp
cor_dat_dim <- TRUE
} else {
if (length(dim(exp_cor)) == 2) { # exp_cor: [memb, sdate]
cor_dat_dim <- FALSE
} else { # exp_cor: [memb, sdate, dat]
cor_dat_dim <- TRUE
}
}
expdims <- dim(exp)
expdims_cor <- dim(exp_cor)
memb <- expdims[1] # memb
sdate <- expdims[2] # sdate
sdate_cor <- expdims_cor[2]
var.cor.fc <- array(dim = c(dim(exp_cor)[1:2], nexp = nexp, nobs = nobs))
for (i in 1:nexp) {
for (j in 1:nobs) {
exp_data <- exp[, , i]
dim(exp_data) <- dim(exp)[1:2]
obs_data <- as.vector(obs[, j])
if (!.data.set.sufficiently.large(exp = exp_data, obs = obs_data)) {
if (!na.fill) {
exp_subset <- exp[, , i]
var.cor.fc[, , i, j] <- exp_subset
}
} else {
# Subset data for dataset dimension
if (cor_dat_dim) {
expcor_data <- exp_cor[, , i]
dim(expcor_data) <- dim(exp_cor)[1:2]
} else {
expcor_data <- exp_cor
}
eval.train.dexeses <- .make.eval.train.dexes(eval.method = eval.method,
amt.points = sdate,
amt.points_cor = sdate_cor)
amt.resamples <- length(eval.train.dexeses)
for (i.sample in seq(1, amt.resamples)) {
# defining training (tr) and evaluation (ev) subsets
# fc.ev is used to evaluate (not train; train should be done with exp (hindcast))
eval.dexes <- eval.train.dexeses[[i.sample]]$eval.dexes
train.dexes <- eval.train.dexeses[[i.sample]]$train.dexes
fc.ev <- expcor_data[, eval.dexes, drop = FALSE]
fc.tr <- exp_data[, train.dexes]
obs.tr <- obs_data[train.dexes, drop = FALSE]
if (cal.method == "bias") {
var.cor.fc[, eval.dexes, i, j] <- fc.ev + mean(obs.tr, na.rm = na.rm) - mean(fc.tr, na.rm = na.rm)
# forecast correction implemented
} else if (cal.method == "evmos") {
# forecast correction implemented
# ensemble and observational characteristics
quant.obs.fc.tr <- .calc.obs.fc.quant(obs = obs.tr, fc = fc.tr, na.rm = na.rm)
# calculate value for regression parameters
init.par <- c(.calc.evmos.par(quant.obs.fc.tr, na.rm = na.rm))
# correct evaluation subset
var.cor.fc[, eval.dexes, i, j] <- .correct.evmos.fc(fc.ev , init.par, na.rm = na.rm)
} else if (cal.method == "mse_min") {
quant.obs.fc.tr <- .calc.obs.fc.quant(obs = obs.tr, fc = fc.tr, na.rm = na.rm)
init.par <- .calc.mse.min.par(quant.obs.fc.tr, multi.model, na.rm = na.rm)
var.cor.fc[, eval.dexes, i, j] <- .correct.mse.min.fc(fc.ev , init.par, na.rm = na.rm)
} else if (cal.method == "crps_min") {
quant.obs.fc.tr <- .calc.obs.fc.quant.ext(obs = obs.tr, fc = fc.tr, na.rm = na.rm)
init.par <- c(.calc.mse.min.par(quant.obs.fc.tr, na.rm = na.rm), 0.001)
init.par[3] <- sqrt(init.par[3])
# calculate regression parameters on training dataset
optim.tmp <- optim(par = init.par, fn = .calc.crps.opt, gr = .calc.crps.grad.opt,
quant.obs.fc = quant.obs.fc.tr, na.rm = na.rm, method = "BFGS")
mbm.par <- optim.tmp$par
var.cor.fc[, eval.dexes, i, j] <- .correct.crps.min.fc(fc.ev , mbm.par, na.rm = na.rm)
} else if (cal.method == 'rpc-based') {
# Ensemble mean
ens_mean.ev <- Apply(data = fc.ev, target_dims = names(memb), fun = mean, na.rm = na.rm)$output1
ens_mean.tr <- Apply(data = fc.tr, target_dims = names(memb), fun = mean, na.rm = na.rm)$output1
# Ensemble spread
ens_spread.tr <- Apply(data = list(fc.tr, ens_mean.tr), target_dims = names(sdate), fun = "-")$output1
# Mean (climatology)
exp_mean.tr <- mean(fc.tr, na.rm = na.rm)
# Ensemble mean variance
var_signal.tr <- var(ens_mean.tr, na.rm = na.rm)
# Variance of ensemble members about ensemble mean (= spread)
var_noise.tr <- var(as.vector(ens_spread.tr), na.rm = na.rm)
# Variance in the observations
var_obs.tr <- var(obs.tr, na.rm = na.rm)
# Correlation between observations and the ensemble mean
r.tr <- cor(x = ens_mean.tr, y = obs.tr, method = 'pearson',
use = ifelse(test = isTRUE(na.rm), yes = "pairwise.complete.obs", no = "everything"))
if ((apply_to == 'all') || (apply_to == 'sign' &&
cor.test(ens_mean.tr, obs.tr, method = 'pearson', alternative = 'greater')$p.value < alpha)) {
ens_mean_cal <- (ens_mean.ev - exp_mean.tr) * r.tr * sqrt(var_obs.tr) / sqrt(var_signal.tr) + exp_mean.tr
var.cor.fc[, eval.dexes, i, j] <- Reorder(data = Apply(data = list(exp = fc.ev, ens_mean = ens_mean.ev,
ens_mean_cal = ens_mean_cal),
target_dims = names(sdate), fun = .CalibrationMembersRPC,
var_obs = var_obs.tr, var_noise = var_noise.tr, r = r.tr)$output1,
order = names(expdims)[1:2])
} else {
              # not significant -> replace with observed climatology
var.cor.fc[, eval.dexes, i, j] <- array(data = mean(obs.tr, na.rm = na.rm), dim = dim(fc.ev))
}
}
}
}
}
}
if (is.null(dat_dim)) {
dim(var.cor.fc) <- dim(exp_cor)[1:2]
}
return(var.cor.fc)
}
# Function to calculate different quantities of a series of ensemble forecasts and corresponding observations
.calc.obs.fc.quant <- function(obs, fc, na.rm) {
if (is.null(dim(fc))) {
dim(fc) <- c(length(fc), 1)
}
amt.mbr <- dim(fc)[1]
obs.per.ens <- InsertDim(obs, posdim = 1, lendim = amt.mbr, name = 'amt.mbr')
fc.ens.av <- apply(fc, c(2), mean, na.rm = na.rm)
cor.obs.fc <- cor(fc.ens.av, obs, use = "complete.obs")
obs.av <- mean(obs, na.rm = na.rm)
obs.sd <- sd(obs, na.rm = na.rm)
return(
append(
.calc.fc.quant(fc = fc, na.rm = na.rm),
list(
obs.per.ens = obs.per.ens,
cor.obs.fc = cor.obs.fc,
obs.av = obs.av,
obs.sd = obs.sd
)
)
)
}
# Extended function to calculate different quantities of a series of ensemble forecasts and corresponding observations
.calc.obs.fc.quant.ext <- function(obs, fc, na.rm){
amt.mbr <- dim(fc)[1]
obs.per.ens <- InsertDim(obs, posdim = 1, lendim = amt.mbr, name = 'amt.mbr')
fc.ens.av <- apply(fc, c(2), mean, na.rm = na.rm)
cor.obs.fc <- cor(fc.ens.av, obs, use = "complete.obs")
obs.av <- mean(obs, na.rm = na.rm)
obs.sd <- sd(obs, na.rm = na.rm)
return(
append(
.calc.fc.quant.ext(fc = fc, na.rm = na.rm),
list(
obs.per.ens = obs.per.ens,
cor.obs.fc = cor.obs.fc,
obs.av = obs.av,
obs.sd = obs.sd
)
)
)
}
# Function to calculate different quantities of a series of ensemble forecasts
.calc.fc.quant <- function(fc, na.rm) {
amt.mbr <- dim(fc)[1]
fc.ens.av <- apply(fc, c(2), mean, na.rm = na.rm)
fc.ens.av.av <- mean(fc.ens.av, na.rm = na.rm)
fc.ens.av.sd <- sd(fc.ens.av, na.rm = na.rm)
fc.ens.av.per.ens <- InsertDim(fc.ens.av, posdim = 1, lendim = amt.mbr, name = 'amt.mbr')
fc.ens.sd <- apply(fc, c(2), sd, na.rm = na.rm)
fc.ens.var.av.sqrt <- sqrt(mean(fc.ens.sd^2, na.rm = na.rm))
fc.dev <- fc - fc.ens.av.per.ens
fc.dev.sd <- sd(fc.dev, na.rm = na.rm)
fc.av <- mean(fc, na.rm = na.rm)
fc.sd <- sd(fc, na.rm = na.rm)
return(
list(
fc.ens.av = fc.ens.av,
fc.ens.av.av = fc.ens.av.av,
fc.ens.av.sd = fc.ens.av.sd,
fc.ens.av.per.ens = fc.ens.av.per.ens,
fc.ens.sd = fc.ens.sd,
fc.ens.var.av.sqrt = fc.ens.var.av.sqrt,
fc.dev = fc.dev,
fc.dev.sd = fc.dev.sd,
fc.av = fc.av,
fc.sd = fc.sd
)
)
}
# Extended function to calculate different quantities of a series of ensemble forecasts
.calc.fc.quant.ext <- function(fc, na.rm) {
amt.mbr <- dim(fc)[1]
repmat1.tmp <- InsertDim(fc, posdim = 1, lendim = amt.mbr, name = 'amt.mbr')
repmat2.tmp <- aperm(repmat1.tmp, c(2, 1, 3))
spr.abs <- apply(abs(repmat1.tmp - repmat2.tmp), c(3), mean, na.rm = na.rm)
spr.abs.per.ens <- InsertDim(spr.abs, posdim = 1, lendim = amt.mbr, name = 'amt.mbr')
return(
append(.calc.fc.quant(fc, na.rm = na.rm),
list(spr.abs = spr.abs, spr.abs.per.ens = spr.abs.per.ens))
)
}
# Below are the core or elementary functions to calculate the regression parameters for the different methods
.calc.mse.min.par <- function(quant.obs.fc, multi.model = F, na.rm) {
par.out <- rep(NA, 3)
if (multi.model) {
par.out[3] <- with(quant.obs.fc, obs.sd * sqrt(1. - cor.obs.fc^2) / fc.ens.var.av.sqrt)
} else {
par.out[3] <- with(quant.obs.fc, obs.sd * sqrt(1. - cor.obs.fc^2) / fc.dev.sd)
}
par.out[2] <- with(quant.obs.fc, abs(cor.obs.fc) * obs.sd / fc.ens.av.sd)
par.out[1] <- with(quant.obs.fc, obs.av - par.out[2] * fc.ens.av.av, na.rm = na.rm)
return(par.out)
}
.calc.evmos.par <- function(quant.obs.fc, na.rm) {
par.out <- rep(NA, 2)
par.out[2] <- with(quant.obs.fc, obs.sd / fc.sd)
par.out[1] <- with(quant.obs.fc, obs.av - par.out[2] * fc.ens.av.av, na.rm = na.rm)
return(par.out)
}
# Below are the core or elementary functions to calculate the functions necessary for the minimization of crps
.calc.crps.opt <- function(par, quant.obs.fc, na.rm){
return(
with(quant.obs.fc,
mean(abs(obs.per.ens - (par[1] + par[2] * fc.ens.av.per.ens +
((par[3])^2 + par[4] / spr.abs.per.ens) * fc.dev)), na.rm = na.rm) -
mean(abs((par[3])^2 * spr.abs + par[4]) / 2., na.rm = na.rm)
)
)
}
.calc.crps.grad.opt <- function(par, quant.obs.fc, na.rm) {
sgn1 <- with(quant.obs.fc,sign(obs.per.ens - (par[1] + par[2] * fc.ens.av.per.ens +
((par[3])^2 + par[4] / spr.abs.per.ens) * fc.dev)))
sgn2 <- with(quant.obs.fc, sign((par[3])^2 + par[4] / spr.abs.per.ens))
sgn3 <- with(quant.obs.fc,sign((par[3])^2 * spr.abs + par[4]))
deriv.par1 <- mean(sgn1, na.rm = na.rm)
deriv.par2 <- with(quant.obs.fc, mean(sgn1 * fc.dev, na.rm = na.rm))
deriv.par3 <- with(quant.obs.fc,
mean(2* par[3] * sgn1 * sgn2 * fc.ens.av.per.ens, na.rm = na.rm) -
mean(spr.abs * sgn3, na.rm = na.rm) / 2.)
deriv.par4 <- with(quant.obs.fc,
mean(sgn1 * sgn2 * fc.ens.av.per.ens / spr.abs.per.ens, na.rm = na.rm) -
mean(sgn3, na.rm = na.rm) / 2.)
return(c(deriv.par1, deriv.par2, deriv.par3, deriv.par4))
}
# Below are the core or elementary functions to correct the evaluation set based on the regression parameters
.correct.evmos.fc <- function(fc, par, na.rm) {
quant.fc.mp <- .calc.fc.quant(fc = fc, na.rm = na.rm)
return(with(quant.fc.mp, par[1] + par[2] * fc))
}
.correct.mse.min.fc <- function(fc, par, na.rm) {
quant.fc.mp <- .calc.fc.quant(fc = fc, na.rm = na.rm)
return(with(quant.fc.mp, par[1] + par[2] * fc.ens.av.per.ens + fc.dev * par[3]))
}
.correct.crps.min.fc <- function(fc, par, na.rm) {
quant.fc.mp <- .calc.fc.quant.ext(fc = fc, na.rm = na.rm)
return(with(quant.fc.mp, par[1] + par[2] * fc.ens.av.per.ens + fc.dev * abs((par[3])^2 + par[4] / spr.abs)))
}
# Function to calibrate the individual members with the RPC-based method
.CalibrationMembersRPC <- function(exp, ens_mean, ens_mean_cal, var_obs, var_noise, r) {
member_cal <- (exp - ens_mean) * sqrt(var_obs) * sqrt(1 - r^2) / sqrt(var_noise) + ens_mean_cal
return(member_cal)
}
# End of R/CST_Calibration.R
#'Make categorical forecast based on a multi-model forecast with potential for
#'calibration
#'
#'@author Bert Van Schaeybroeck, \email{[email protected]}
#'@description This function converts a multi-model ensemble forecast into a
#'categorical forecast by giving the probability for each category. Different
#'methods are available to combine the different ensemble forecasting models
#'into probabilistic categorical forecasts.
#'
#'Motivation: Beyond the short range, the unpredictable component of weather
#'predictions becomes substantial due to the chaotic nature of the earth system.
#'Therefore, predictions can mostly be skillful when used in a probabilistic
#'sense. In practice this is done using ensemble forecasts. It is then common to
#'convert the ensemble forecasts to occurrence probabilities for different
#'categories. These categories typically are taken as terciles from
#'climatological distributions. For instance for temperature, there is a cold,
#'normal and warm class. Commonly multiple ensemble forecasting systems are
#'available but some models may be more competitive than others for the
#'variable, region and user need under consideration. Therefore, when
#'calculating the category probabilities, the ensemble members of the different
#'forecasting systems may be weighted differently. Such weighting is typically
#'done by comparison of the ensemble forecasts with observations.
#'
#'Description of the tool: The tool considers all forecasts (all members from
#'all forecasting systems) and converts them into occurrence probabilities of
#'different categories. The amount of categories can be changed; the categories are
#'taken as the climatological quantiles (e.g. terciles), extracted from the observational
#'data. The methods that are available to combine the ensemble forecasting
#'models into probabilistic categorical forecasts are: 1) ensemble pooling where
#'all ensemble members of all ensemble systems are weighted equally,
#' 2) model combination where each model system is weighted equally, and,
#' 3) model weighting.
#'The model weighting method is described in Rajagopalan et al. (2002),
#'Robertson et al. 2004 and Van Schaeybroeck and Vannitsem (2019). More
#'specifically, this method uses different weights for the occurrence probability
#'predicted by the available models and by a climatological model and optimizes
#'the weights by minimizing the ignorance score. Finally, the function can also
#'be used to categorize the observations in the categorical quantiles.
#'
#'@param exp An object of class \code{s2dv_cube} as returned by \code{CST_Load}
#' function, containing the seasonal forecast experiment data in the element
#' named \code{$data}. The amount of forecasting models is equal to the size of
#' the \code{dataset} dimension of the data array. The amount of members per
#' model may be different. The size of the \code{member} dimension of the data
#' array is equal to the maximum of the ensemble members among the models.
#' Models with smaller ensemble sizes have residual indices of \code{member}
#' dimension in the data array filled with NA values.
#'@param obs An object of class \code{s2dv_cube} as returned by \code{CST_Load}
#' function, containing the observed data in the element named \code{$data}.
#'@param amt.cat The amount of categories. Equally-sized quantiles will be
#' calculated based on the amount of categories.
#'@param cat.method Method used to produce the categorical forecast, can be
#' either \code{pool}, \code{comb}, \code{mmw} or \code{obs}. The method pool
#' assumes equal weight for all ensemble members while the method comb assumes
#' equal weight for each model. The weighting method is described in
#' Rajagopalan et al. (2002), Robertson et al. (2004) and Van Schaeybroeck and
#' Vannitsem (2019). Finally, the \code{obs} method classifies the observations
#' into the different categories and therefore contains only 0 and 1 values.
#'@param eval.method The sampling method used. It can be either
#' \code{"in-sample"} or \code{"leave-one-out"}. Default value is the
#' \code{"leave-one-out"} cross validation.
#'@param ... other parameters to be passed on to the calibration procedure.
#'
#'@return An object of class \code{s2dv_cube} containing the categorical
#'forecasts in the element called \code{$data}. The first two dimensions of the
#'returned object are named dataset and member and are both of size one. An
#'additional dimension named category is introduced and is of size amt.cat.
#'
#'@references Rajagopalan, B., Lall, U., & Zebiak, S. E. (2002). Categorical
#'climate forecasts through regularization and optimal combination of multiple
#'GCM ensembles. Monthly Weather Review, 130(7), 1792-1811.
#'@references Robertson, A. W., Lall, U., Zebiak, S. E., & Goddard, L. (2004).
#'Improved combination of multiple atmospheric GCM ensembles for seasonal
#'prediction. Monthly Weather Review, 132(12), 2732-2744.
#'@references Van Schaeybroeck, B., & Vannitsem, S. (2019). Postprocessing of
#'Long-Range Forecasts. In Statistical Postprocessing of Ensemble Forecasts (pp. 267-290).
#'
#'@examples
#'mod1 <- 1 : (2 * 2* 4 * 5 * 2 * 2)
#'dim(mod1) <- c(dataset = 2, member = 2, sdate = 4, ftime = 5, lat = 2, lon = 2)
#'mod1[2, 1, , , , ] <- NA
#'datasets <- c("MF", "UKMO")
#'obs1 <- 1 : (1 * 1 * 4 * 5 * 2 * 2)
#'dim(obs1) <- c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 2, lon = 2)
#'lon <- seq(0, 30, 5)
#'lat <- seq(0, 25, 5)
#'coords <- list(lat = lat, lon = lon)
#'attrs <- list(Datasets = datasets)
#'exp <- list(data = mod1, coords = coords, attrs = attrs)
#'obs <- list(data = obs1, coords = coords)
#'attr(exp, 'class') <- 's2dv_cube'
#'attr(obs, 'class') <- 's2dv_cube'
#'a <- CST_CategoricalEnsCombination(exp = exp, obs = obs, amt.cat = 3,
#' cat.method = "mmw")
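#'# A minimal variant of the same call (same toy 'exp' and 'obs'), pooling all
#'# ensemble members with equal weight instead of optimizing model weights:
#'b <- CST_CategoricalEnsCombination(exp = exp, obs = obs, amt.cat = 3,
#'                                   cat.method = "pool")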
#'@importFrom s2dv InsertDim
#'@import abind
#'@export
CST_CategoricalEnsCombination <- function(exp, obs, cat.method = "pool",
eval.method = "leave-one-out",
amt.cat = 3,
...) {
# Check 's2dv_cube'
  if (!inherits(exp, "s2dv_cube") || !inherits(obs, "s2dv_cube")) {
stop("Parameter 'exp' and 'obs' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
if (as.numeric(dim(obs$data)["member"]) != 1) {
stop("The length of the dimension 'member' in the component 'data' ",
"of the parameter 'obs' must be equal to 1.")
}
names.dim.tmp <- names(dim(exp$data))
exp$data <- CategoricalEnsCombination(fc = exp$data, obs = obs$data,
cat.method = cat.method,
eval.method = eval.method,
amt.cat = amt.cat, ...)
names.dim.tmp[which(names.dim.tmp == "member")] <- "category"
names(dim(exp$data)) <- names.dim.tmp
exp$data <- InsertDim(exp$data, lendim = 1, posdim = 2, name = "member")
exp$attrs$Datasets <- c(exp$attrs$Datasets, obs$attrs$Datasets)
exp$attrs$source_files <- c(exp$attrs$source_files, obs$attrs$source_files)
return(exp)
}
#'Make categorical forecast based on a multi-model forecast with potential for
#'calibration
#'
#'@author Bert Van Schaeybroeck, \email{[email protected]}
#'@description This function converts a multi-model ensemble forecast into a
#'categorical forecast by giving the probability for each category. Different
#'methods are available to combine the different ensemble forecasting models
#'into probabilistic categorical forecasts.
#'
#' See details in ?CST_CategoricalEnsCombination
#'@param fc A multi-dimensional array with named dimensions containing the
#' seasonal forecast experiment data. The
#' amount of forecasting models is equal to the size of the \code{dataset}
#' dimension of the data array. The amount of members per model may be
#' different. The size of the \code{member} dimension of the data array is
#' equal to the maximum of the ensemble members among the models. Models with
#' smaller ensemble sizes have residual indices of \code{member} dimension in
#' the data array filled with NA values.
#'@param obs A multidimensional array with named dimensions containing the
#' observed data.
#'@param amt.cat The amount of categories. Equally-sized quantiles will be
#' calculated based on the amount of categories.
#'@param cat.method Method used to produce the categorical forecast, can be
#' either \code{pool}, \code{comb}, \code{mmw} or \code{obs}. The method pool
#' assumes equal weight for all ensemble members while the method comb assumes
#' equal weight for each model. The weighting method is described in
#' Rajagopalan et al. (2002), Robertson et al. (2004) and Van Schaeybroeck and
#' Vannitsem (2019). Finally, the \code{obs} method classifies the observations
#' into the different categories and therefore contains only 0 and 1 values.
#'@param eval.method The sampling method used. It can be either
#' \code{"in-sample"} or \code{"leave-one-out"}. Default value is the
#' \code{"leave-one-out"} cross validation.
#'@param ... Other parameters to be passed on to the calibration procedure.
#'
#'@return An array containing the categorical forecasts. The first two
#'dimensions of the returned object are named
#'dataset and member and are both of size one. An additional dimension named
#'category is introduced and is of size amt.cat.
#'
#'@references Rajagopalan, B., Lall, U., & Zebiak, S. E. (2002). Categorical
#'climate forecasts through regularization and optimal combination of multiple
#'GCM ensembles. Monthly Weather Review, 130(7), 1792-1811.
#'@references Robertson, A. W., Lall, U., Zebiak, S. E., & Goddard, L. (2004).
#'Improved combination of multiple atmospheric GCM ensembles for seasonal
#'prediction. Monthly Weather Review, 132(12), 2732-2744.
#'@references Van Schaeybroeck, B., & Vannitsem, S. (2019). Postprocessing of
#'Long-Range Forecasts. In Statistical Postprocessing of Ensemble Forecasts (pp. 267-290).
#'
#'@importFrom s2dv InsertDim
#'@import abind
#'@export
CategoricalEnsCombination <- function (fc, obs, cat.method, eval.method, amt.cat, ...) {
if (!all(c("member", "sdate") %in% names(dim(fc)))) {
    stop("Parameter 'fc' must have the dimensions 'member' and 'sdate'.")
}
if (!all(c("sdate") %in% names(dim(obs)))) {
stop("Parameter 'obs' must have the dimension 'sdate'.")
}
if (any(is.na(fc))) {
    warning("Parameter 'fc' contains NA values.")
}
if (any(is.na(obs))) {
warning("Parameter 'obs' contains NA values.")
}
fc.merged <- mergedatasets(fc = fc)
amt.sdate = dim(fc.merged)["sdate"]
target.dims <- c("member", "sdate")
return.feat <- list(amt.cat = amt.cat)
return.feat$dim <- c(amt.cat, amt.sdate)
return.feat$name <- c("category", "sdate")
return.feat$dim.name <- list(category = paste0("cat_", seq(1, amt.cat))
, dimnames(fc.merged)[["sdate"]])
cat_fc_out <- .apply.obs.fc(obs = obs,
fc = fc.merged,
target.dims = target.dims,
FUN = .cat_fc,
return.feat = return.feat,
cat.method = cat.method,
eval.method = eval.method,
amt.cat = amt.cat,
...)
return(cat_fc_out)
}
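# Merge the 'dataset' and 'member' dimensions of the multi-model forecast into a
# single 'member' dimension (keeping a length-1 'dataset' dimension), dropping
# members that contain NA values (the padding used for models with fewer members)
# and recording the originating model in the member names.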
mergedatasets <- function(fc) {
dims.tmp <- dim(fc)
dimnames.tmp <- dimnames(fc)
names.dim.tmp <- names(dims.tmp)
amt.mbr <- dims.tmp["member"][]
amt.dataset <- dims.tmp["dataset"][]
member.dim <- which(names.dim.tmp == "member")
dataset.dim <- which(names.dim.tmp == "dataset")
fc.tmp <- comb.dims(fc, c(dataset.dim, member.dim))
if(is.null(dimnames.tmp[[dataset.dim]])){
mod.ind.name <- rep(paste0("model_",seq(1, amt.dataset)), times = amt.mbr)
} else{
mod.ind.name <- rep(dimnames.tmp[[dataset.dim]], times = amt.mbr)
}
mod.ind <- which(apply(fc.tmp, c(1), function(x) {all(!is.na(x))}))
amt.mbr.tot <- length(mod.ind)
fc.tmp <- asub(fc.tmp, list(mod.ind),1)
mod.ind.name <- mod.ind.name[mod.ind]
dim(fc.tmp) <- c(1, dim(fc.tmp))
names(dim(fc.tmp)) <- c("dataset", "member", names.dim.tmp[-c(1, 2)])
dimnames(fc.tmp) <- c(list(dataset = c("dataset1"), member = mod.ind.name), dimnames.tmp[-c(member.dim, dataset.dim)])
return(fc.tmp)
}
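# Combine the dimensions given in 'dims.to.combine' into the first dimension of
# the output array, keeping the remaining dimensions in their original order.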
comb.dims <- function(arr.in, dims.to.combine){
dims.orig <- dim(arr.in)
tmp.dexes <- seq(1, length(dims.orig))
new.dexes <- c(tmp.dexes[dims.to.combine], tmp.dexes[-dims.to.combine])
new.dims <- c(prod(dims.orig[dims.to.combine]), dims.orig[tmp.dexes[-dims.to.combine]])
arr.out <- aperm(arr.in, new.dexes)
dim(arr.out) <- new.dims
return(arr.out)
}
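# Bind the observations and forecasts along the member dimension and apply FUN
# over all dimensions except 'target.dims', then restore the dimension names,
# sizes and order of the output as described by 'return.feat'.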
.apply.obs.fc <- function(obs, fc, target.dims, FUN, return.feat, cat.method, eval.method, amt.cat, ...){
dimnames.tmp <- dimnames(fc)
fc.dims.tmp <- dim(fc)
dims.out.tmp <- return.feat$dim
obs.fc <- .combine.obs.fc(obs, fc)
names.dim <- names(dim(obs.fc))
amt.dims <- length(names.dim)
margin.all <- seq(1, amt.dims)
matched.dims <- match(target.dims, names.dim)
margin.to.use <- margin.all[-matched.dims]
arr.out <- apply(X = obs.fc,
MARGIN = margin.to.use,
FUN = FUN,
cat.method = cat.method,
eval.method = eval.method,
amt.cat = amt.cat,
...)
dims.tmp <- dim(arr.out)
names.dims.tmp <- names(dim(arr.out))
if(prod(return.feat$dim) != dims.tmp[1]){
stop("apply.obs.fc: returned dimensions not as expected: ", prod(return.feat$dim), " and ", dims.tmp[1])
}
dim(arr.out) <- c(dims.out.tmp, dims.tmp[-1])
names(dim(arr.out)) <- c(return.feat$name, names.dims.tmp[c(-1)])
names.dim[matched.dims] <- return.feat$name
pos <- match(names.dim, names(dim(arr.out)))
pos_inv <- match(names(dim(arr.out)), names.dim)
arr.out <- aperm(arr.out, pos)
for (i.item in seq(1,length(return.feat$name))){
dimnames.tmp[[pos_inv[i.item]]] <- return.feat$dim.name[[i.item]]
}
dimnames(arr.out) <- dimnames.tmp
return(arr.out)
}
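# Compute the categorical forecast for one [member, sdate] slice: the first
# 'member' holds the observations; for each cross-validation fold the category
# boundaries (quantiles) are derived from the training observations and the
# category probabilities are obtained with the selected combination method.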
.cat_fc <- function(obs.fc, amt.cat, cat.method, eval.method) {
dims.tmp=dim(obs.fc)
amt.mbr <- dims.tmp["member"][]-1
amt.sdate <- dims.tmp["sdate"][]
pos <- match(c("member","sdate"), names(dims.tmp))
obs.fc <- aperm(obs.fc, pos)
var.obs <- asub(obs.fc, list(1),1)
var.fc <- asub(obs.fc, list(1+seq(1, amt.mbr)),1)
dim(var.fc) <- c(amt.mbr, amt.sdate)
dims.fc <- dim(var.fc)
mdl.dimnames <- dimnames(obs.fc)[["member"]][-1]
mdl.feat <- .get.mdl.features(mdl.dimnames)
amt.mdl <- mdl.feat$amt.mdl
amt.coeff <- amt.mdl + 1
var.cat.fc <- array(NA, c(amt.cat, amt.sdate))
eval.train.dexeses <- .make.eval.train.dexes(eval.method = eval.method, amt.points = amt.sdate)
amt.resamples <- length(eval.train.dexeses)
for (i.sample in seq(1, amt.resamples)) {
# defining training (tr) and evaluation (ev) subsets
eval.dexes <- eval.train.dexeses[[i.sample]]$eval.dexes
train.dexes <- eval.train.dexeses[[i.sample]]$train.dexes
fc.ev <- var.fc[ , eval.dexes, drop = FALSE]
fc.tr <- var.fc[ , train.dexes, drop = FALSE]
obs.tr <- var.obs[train.dexes , drop = FALSE]
obs.ev <- var.obs[eval.dexes , drop = FALSE]
amt.sdate.tr <- dim(fc.tr)[2]
amt.sdate.ev <- dim(fc.ev)[2]
quant.to.use <- .calc.quantiles(obs.tr, amt.cat)
cat.fc.ev <- .calc.cat(fc.ev, quant.to.use)
cat.obs.ev <- .calc.cat(obs.ev, quant.to.use)
if(cat.method == "mmw"){
cat.fc.tr <- .calc.cat(fc.tr, quant.to.use)
cat.obs.tr <- .calc.cat(obs.tr, quant.to.use)
freq.per.mdl.at.obs <- .calc.freq.per.mdl.at.obs(cat.obs = cat.obs.tr, cat.fc = cat.fc.tr,
mdl.feat = mdl.feat, amt.cat = amt.cat)
freq.per.mdl.ev <- .calc.freq.per.mdl(cat.fc = cat.fc.ev, mdl.feat = mdl.feat, amt.cat = amt.cat)
if(i.sample == 1){
init.par <- c(rep(1 / amt.coeff, amt.coeff)) * 0.999
}
#calculate weights on training dataset
constr.mtrx <- array(0, c(amt.coeff + 1, amt.coeff))
for (i.coeff in seq(1, amt.coeff)){
constr.mtrx[i.coeff, i.coeff] <- 1.
}
constr.mtrx[amt.coeff + 1, ] <- -1.
constr.vec <- c(rep(0., amt.coeff), -1.)
optim.tmp <- constrOptim(theta = init.par, f = .funct.optim, grad = .funct.optim.grad,
ui = constr.mtrx, ci = constr.vec,
freq.per.mdl.at.obs = freq.per.mdl.at.obs)
init.par <- optim.tmp$par * (1 - abs(rnorm(amt.coeff, 0, 0.01)))
var.cat.fc[ , eval.dexes] <- apply(suppressWarnings(InsertDim(
InsertDim(optim.tmp$par, lendim = amt.cat, posdim = 2),
lendim = amt.sdate.ev, posdim = 3)) *
freq.per.mdl.ev[ , , , drop = FALSE], c(2,3), sum, na.rm = TRUE)
} else if (cat.method == "comb") {
freq.per.mdl.ev <- .calc.freq.per.mdl(cat.fc = cat.fc.ev, mdl.feat = mdl.feat, amt.cat = amt.cat)
var.cat.fc[ , eval.dexes] <- apply(freq.per.mdl.ev[-amt.coeff, , , drop = FALSE], c(2, 3), mean, na.rm = TRUE)
} else if (cat.method == "pool") {
freq.per.mdl.ev <- .calc.freq.per.mdl(cat.fc = cat.fc.ev, mdl.feat = NULL, amt.cat = amt.cat)
var.cat.fc[ , eval.dexes] <- freq.per.mdl.ev[1, , ]
} else if (cat.method == "obs") {
dim(cat.obs.ev) <- c(1,length(cat.obs.ev))
freq.per.mdl.ev <- .calc.freq.per.mdl(cat.fc = cat.obs.ev, mdl.feat = NULL, amt.cat = amt.cat)
var.cat.fc[ , eval.dexes] <- freq.per.mdl.ev[1, , ]
}
}
names(dim(var.cat.fc)) <- c("member", "sdate")
return(var.cat.fc)
}
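# Probabilities (1/amt.cat, 2/amt.cat, ...) and corresponding quantiles of the
# training observations used as category boundaries; an error is raised if the
# sample is too small to yield distinct quantiles.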
.calc.percentiles <- function(data, amt.cat){
if(amt.cat < 2){
stop("amount of categories too low: ", amt.cat)
}
frac.to.use <- 1./amt.cat
perc.out <- seq(from = frac.to.use, to = 1. - frac.to.use, by = frac.to.use)
return(perc.out)
}
.calc.quantiles <- function(data, amt.cat){
perc.to.use <- .calc.percentiles(data, amt.cat)
quant.out <- quantile(data, perc.to.use, na.rm = T)
if(any(duplicated(quant.out))){
stop(paste0("The ",amt.cat," ( = amt.cat) different quantile categories are",
" determined based on the observation data. However, ", length(data),
" datapoints are insufficient to determine the quantiles. Please",
" reduce the amount of categories or extend observational dataset."))
}
return(quant.out)
}
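# Assign each value to a category index (1..amt.cat) by cutting at the quantile
# boundaries.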
.calc.cat <- function(data, quant){
quant <- c(-Inf,quant,Inf)
categ.all <- cut(as.vector(data), breaks = quant, labels = FALSE)
if(!is.null(dim(data))){
dim(categ.all) <- dim(data)
}
return(categ.all)
}
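# Derive, from the member names, the number of models, the number of members per
# model and a logical mask mapping each member to its model.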
.get.mdl.features <- function(mdl.names){
amt.mbr <- length(mdl.names)
mdl.diff.names <- unique(mdl.names)
amt.mdl <- length(mdl.diff.names)
mdl.msk <- array(F,c(amt.mdl, amt.mbr))
amt.mbr.per.mdl <- array(0, c(amt.mdl))
if(amt.mdl == 1 & amt.mbr == 1) {
mdl.msk = array(T, c(1, 1))
amt.mbr.per.mdl <- array(1, c(1, 1))
} else {
mdl.msk <- t(sapply(mdl.diff.names, function(x){mdl.names == x}))
amt.mbr.per.mdl <- apply(mdl.msk, c(1), sum,na.rm = TRUE)
}
return(list(amt.mbr = amt.mbr, amt.mdl = amt.mdl,
amt.mbr.per.mdl = amt.mbr.per.mdl, mdl.msk = mdl.msk))
}
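# Bind the observations and the forecasts along the member dimension, so that
# the observation becomes the first 'member' of the combined array.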
.combine.obs.fc <- function(obs,fc){
names.dim.tmp <- names(dim(obs))
members.dim <- which(names.dim.tmp == "member")
arr.out <- abind(obs, fc, along = members.dim)
dimnames.tmp <- dimnames(arr.out)
names(dim(arr.out)) <- names.dim.tmp
dimnames(arr.out) <- dimnames.tmp
names(dimnames(arr.out)) <- names.dim.tmp
return(arr.out)
}
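# Ignorance score (minus the mean log probability assigned to the observed
# category by the weighted combination) and its gradient, minimized with
# constrOptim to obtain the model weights.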
.funct.optim <- function(par, freq.per.mdl.at.obs){
return(-mean(log(drop(par %*% freq.per.mdl.at.obs)), na.rm = TRUE))
}
.funct.optim.grad <- function(par, freq.per.mdl.at.obs){
amt.model <- dim(freq.per.mdl.at.obs)[1]
preprocess <- drop(par %*% freq.per.mdl.at.obs)
if (is.null(dim(preprocess))) {
dim(preprocess) <- c(dim = length(preprocess))
}
return(-apply(freq.per.mdl.at.obs/suppressWarnings(InsertDim(preprocess,
lendim = as.numeric(amt.model), posdim = 1)), c(1), mean, na.rm = TRUE))
}
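# For each model, the fraction of its members that predicted the observed
# category at each start date; the last row holds the climatological
# probability 1/amt.cat.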
.calc.freq.per.mdl.at.obs <- function(cat.obs, cat.fc, amt.cat, mdl.feat){
amt.mbr <- dim(cat.fc)[1]
amt.sdate <- dim(cat.fc)[2]
amt.mdl <- mdl.feat$amt.mdl
mdl.msk.tmp <- mdl.feat$mdl.msk
amt.coeff <- amt.mdl + 1
msk.fc.obs <- (cat.fc == InsertDim(cat.obs, posdim = 1, lendim = amt.mbr))
freq.per.mdl.at.obs <- array(NA, c(amt.coeff, amt.sdate))
for (i.mdl in seq(1, amt.mdl)){
freq.per.mdl.at.obs[i.mdl, ] <- apply(msk.fc.obs[mdl.msk.tmp[i.mdl, ], , drop = FALSE],
c(2), mean, na.rm = TRUE)
}
freq.per.mdl.at.obs[amt.coeff, ] = 1 / amt.cat
return(freq.per.mdl.at.obs)
}
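# For each model (or for the pooled ensemble when 'mdl.feat' is NULL), the
# forecast probability of each category at each start date; the last row holds
# the climatological probability 1/amt.cat.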
.calc.freq.per.mdl <- function(cat.fc, amt.cat, mdl.feat){
amt.mbr <- dim(cat.fc)[1]
amt.sdate <- dim(cat.fc)[2]
if(is.null(mdl.feat)){
amt.mdl <- 1
mdl.msk.tmp <- array(T, c(1, amt.mbr))
} else {
amt.mdl <- mdl.feat$amt.mdl
mdl.msk.tmp <- mdl.feat$mdl.msk
}
amt.coeff <- 1 + amt.mdl
freq.per.mdl <- array(NA, c(amt.coeff, amt.cat, amt.sdate))
for (i.mdl in seq(1, amt.mdl)){
ens.mdl.msk <- mdl.msk.tmp[i.mdl, , drop = FALSE]
for (i.cat in seq(1, amt.cat)){
freq.per.mdl[i.mdl, i.cat, ] <- apply(i.cat==cat.fc[ens.mdl.msk, , drop = F],
c(2), mean, na.rm = TRUE)
}
}
freq.per.mdl[amt.coeff, , ] = 1 / amt.cat
return(freq.per.mdl)
}
# End of R/CST_CategoricalEnsCombination.R
#'Change the name of one or more dimensions for an object of class s2dv_cube
#'
#'Change the names of the dimensions specified in 'original_names' to the names
#'in 'new_names'. The coordinate names and the dimensions of any attributes
#'are also modified accordingly.
#'
#'@author Agudetse Roures Victoria, \email{[email protected]}
#'
#'@param data An object of class \code{s2dv_cube} whose dimension names
#' should be changed.
#'@param original_names A single character string or a vector indicating the
#' dimensions to be renamed.
#'@param new_names A single character string or a vector indicating the new
#' dimension names, in the same order as the dimensions in 'original_names'.
#'
#'@return An object of class \code{s2dv_cube} with similar data, coordinates and
#'attributes as the \code{data} input, but with modified dimension names.
#'
#'@examples
#'# Example with sample data:
#'# Check original dimensions and coordinates
#'lonlat_temp$exp$dims
#'names(lonlat_temp$exp$coords)
#'dim(lonlat_temp$exp$attrs$Dates)
#'# Change 'dataset' to 'dat' and 'ftime' to 'time'
#'exp <- CST_ChangeDimNames(lonlat_temp$exp,
#' original_names = c("dataset", "ftime"),
#' new_names = c("dat", "time"))
#'# Check new dimensions and coordinates
#'exp$dims
#'names(exp$coords)
#'dim(exp$attrs$Dates)
#'
#'@export
CST_ChangeDimNames <- function(data, original_names, new_names) {
if (!inherits(data, "s2dv_cube")) {
stop("Parameter 'data' must be an object of class 's2dv_cube'.")
}
if (!is.character(original_names)) {
stop("Parameter 'original_names' must be a character string or a ",
"vector of character strings.")
}
if (!is.character(new_names)) {
stop("Parameter 'new_names' must be a character string or a ",
"vector of character strings.")
}
if (!(length(original_names) == length(new_names))) {
stop("The number of dimension names in 'new_names' must be the same ",
"as in 'original_names'.")
}
if (!all(original_names %in% names(data$dims))) {
stop("Some of the dimensions in 'original_names' could not be found in ",
"'data'.")
}
for (index in 1:length(original_names)) {
original_name <- original_names[index]
new_name <- new_names[index]
# Step 1: Change dims
names(data$dims)[which(names(data$dims) == original_name)] <- new_name
# Step 2: Change coords
names(data$coords)[which(names(data$coords) == original_name)] <- new_name
# Step 3: Change attrs
# 3.1 - Dates
if (original_name %in% names(dim(data$attrs$Dates))) {
names(dim(data$attrs$Dates))[which(names(dim(data$attrs$Dates))
== original_name)] <- new_name
}
# 3.2 - Variable metadata
if (original_name %in% names(data$attrs$Variable$metadata)) {
names(data$attrs$Variable$metadata)[which(names(data$attrs$Variable$metadata)
== original_name)] <- new_name
}
# 3.3 - Source files
if (original_name %in% names(dim(data$attrs$source_files))) {
names(dim(data$attrs$source_files))[which(names(dim(data$attrs$source_files))
== original_name)] <- new_name
}
}
# Change data dimnames after renaming all dimensions
dim(data$data) <- data$dims
if (!is.null(attributes(data$data)$dimensions)) {
attributes(data$data)$dimensions <- names(data$dims)
}
# Change $Dates 'dim' attribute
attr(attributes(data$attrs$Dates)$dim, "names") <- names(dim(data$attrs$Dates))
return(data)
}
# End of R/CST_ChangeDimNames.R
#'@rdname CST_DynBiasCorrection
#'@title Performing a Bias Correction conditioned by the dynamical
#'properties of the data.
#'
#'@author Carmen Alvarez-Castro, \email{[email protected]}
#'@author Maria M. Chaves-Montero, \email{[email protected]}
#'@author Veronica Torralba, \email{[email protected]}
#'@author Davide Faranda, \email{[email protected]}
#'
#'@description This function performs a bias correction conditioned by the
#'dynamical properties of the dataset. This function internally uses the function
#''Predictability' to divide into terciles the two dynamical proxies
#'computed with 'CST_ProxiesAttractor'. A bias correction
#'between the model and the observations is performed using the division into
#'terciles of the local dimension 'dim' and inverse of the persistence 'theta'.
#'For instance, model values with lower 'dim' will be corrected with observed
#'values with lower 'dim', and the same for theta. The function gives two options
#'of bias correction: one for 'dim' and/or one for 'theta'
#'
#'@references Faranda, D., Alvarez-Castro, M.C., Messori, G., Rodriguez, D.,
#'and Yiou, P. (2019). The hammam effect or how a warm ocean enhances large
#'scale atmospheric predictability.Nature Communications, 10(1), 1316.
#'\doi{10.1038/s41467-019-09305-8}
#'@references Faranda, D., Gabriele Messori and Pascal Yiou. (2017).
#' Dynamical proxies of North Atlantic predictability and extremes.
#' Scientific Reports, 7-41278, 2017.
#'
#'@param exp An 's2dv_cube' object with the experiment data.
#'@param obs An 's2dv_cube' object with the reference data.
#'@param method A character string indicating the method to apply bias
#' correction among these ones: "PTF","RQUANT","QUANT","SSPLIN".
#'@param wetday Logical indicating whether to perform wet day correction
#'  or not, OR a numeric threshold below which all values are set to zero (by
#'  default it is set to FALSE).
#'@param proxy A character string indicating the proxy for local dimension
#' 'dim' or inverse of persistence 'theta' to apply the dynamical
#' conditioned bias correction method.
#'@param quanti A number lower than 1 indicating the quantile to perform
#' the computation of local dimension and theta.
#'@param ncores The number of cores to use in parallel computation.
#'
#'@return An 's2dv_cube' object with a bias correction performed
#'conditioned by the local dimension 'dim' or the inverse of persistence 'theta'.
#'
#'@examples
#'expL <- rnorm(1:2000)
#'dim(expL) <- c(time = 100, lat = 4, lon = 5)
#'obsL <- c(rnorm(1:1980), expL[1, , ] * 1.2)
#'dim(obsL) <- c(time = 100, lat = 4, lon = 5)
#'time_obsL <- as.POSIXct(paste(rep("01", 100), rep("01", 100), 1920:2019, sep = "-"),
#' format = "%d-%m-%y")
#'time_expL <- as.POSIXct(paste(rep("01", 100), rep("01", 100), 1929:2019, sep = "-"),
#' format = "%d-%m-%y")
#'lon <- seq(-1, 5, 1.5)
#'lat <- seq(30, 35, 1.5)
#'# qm = 0.98 # too high for this short dataset; it is possible that the
#'# requirement is not met, in which case it would be necessary to select a
#'# lower qm, for instance qm = 0.60
#'expL <- s2dv_cube(data = expL, coords = list(lon = lon, lat = lat),
#' Dates = time_expL)
#'obsL <- s2dv_cube(data = obsL, coords = list(lon = lon, lat = lat),
#' Dates = time_obsL)
#'# to use DynBiasCorrection
#'dynbias1 <- DynBiasCorrection(exp = expL$data, obs = obsL$data, proxy= "dim",
#' quanti = 0.6)
#'# to use CST_DynBiasCorrection
#'dynbias2 <- CST_DynBiasCorrection(exp = expL, obs = obsL, proxy= "dim",
#' quanti = 0.6)
#'
#'@export
CST_DynBiasCorrection <- function(exp, obs, method = 'QUANT', wetday = FALSE,
                                  proxy = "dim", quanti,
                                  ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(obs, 's2dv_cube')) {
stop("Parameter 'obs' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
if (!inherits(exp, 's2dv_cube')) {
stop("Parameter 'exp' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
exp$data <- DynBiasCorrection(exp = exp$data, obs = obs$data, method = method,
wetday = wetday,
proxy = proxy, quanti = quanti, ncores = ncores)
return(exp)
}
#'@rdname DynBiasCorrection
#'@title Performing a Bias Correction conditioned by the dynamical
#'properties of the data.
#'
#'@author Carmen Alvarez-Castro, \email{[email protected]}
#'@author Maria M. Chaves-Montero, \email{[email protected]}
#'@author Veronica Torralba, \email{[email protected]}
#'@author Davide Faranda, \email{[email protected]}
#'
#'@description This function performs a bias correction conditioned by the
#'dynamical properties of the dataset. It internally uses the function
#''Predictability' to divide into terciles the two dynamical proxies
#'computed with 'CST_ProxiesAttractor'. A bias correction
#'between the model and the observations is performed using the division into
#'terciles of the local dimension 'dim' and inverse of the persistence 'theta'.
#'For instance, model values with lower 'dim' will be corrected with observed
#'values with lower 'dim', and the same for 'theta'. The function gives two
#'options of bias correction: one for 'dim' and/or one for 'theta'.
#'
#'@references Faranda, D., Alvarez-Castro, M.C., Messori, G., Rodriguez, D.,
#'and Yiou, P. (2019). The hammam effect or how a warm ocean enhances large
#'scale atmospheric predictability.Nature Communications, 10(1), 1316.
#'\doi{10.1038/s41467-019-09305-8}
#'@references Faranda, D., Gabriele Messori and Pascal Yiou. (2017).
#' Dynamical proxies of North Atlantic predictability and extremes.
#' Scientific Reports, 7-41278, 2017.
#'
#'@param exp A multidimensional array with named dimensions with the
#' experiment data.
#'@param obs A multidimensional array with named dimensions with the
#' observation data.
#'@param method A character string indicating the method to apply bias
#' correction among these ones:
#' "PTF", "RQUANT", "QUANT", "SSPLIN".
#'@param wetday Logical indicating whether to perform wet day correction
#'  or not, OR a numeric threshold below which all values are set to zero (by
#'  default it is set to FALSE).
#'@param proxy A character string indicating the proxy for local dimension
#' 'dim' or inverse of persistence 'theta' to apply the dynamical conditioned
#' bias correction method.
#'@param quanti A number lower than 1 indicating the quantile to perform the
#' computation of local dimension and theta.
#'@param ncores The number of cores to use in parallel computation.
#'
#'@return A multidimensional array with named dimensions with a bias correction
#'performed conditioned by local dimension 'dim' or inverse of persistence 'theta'.
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@import qmap
#'@examples
#'expL <- rnorm(1:2000)
#'dim(expL) <- c(time = 100, lat = 4, lon = 5)
#'obsL <- c(rnorm(1:1980), expL[1, , ] * 1.2)
#'dim(obsL) <- c(time = 100, lat = 4, lon = 5)
#'dynbias <- DynBiasCorrection(exp = expL, obs = obsL, method='QUANT',
#' proxy= "dim", quanti = 0.6)
#'@export
DynBiasCorrection <- function(exp, obs, method = 'QUANT', wetday = FALSE,
                              proxy = "dim", quanti, ncores = NULL) {
if (is.null(obs)) {
stop("Parameter 'obs' cannot be NULL.")
}
if (is.null(exp)) {
stop("Parameter 'exp' cannot be NULL.")
}
if (is.null(method)) {
stop("Parameter 'method' cannot be NULL.")
}
if (is.null(quanti)) {
stop("Parameter 'quanti' cannot be NULL.")
}
if (is.null(proxy)) {
stop("Parameter 'proxy' cannot be NULL.")
}
dims <- dim(exp)
attractor.obs <- ProxiesAttractor(data = obs, quanti = quanti)
predyn.obs <- Predictability(dim = attractor.obs$dim,
theta = attractor.obs$theta)
attractor.exp <- ProxiesAttractor(data = exp, quanti = quanti)
predyn.exp <- Predictability(dim = attractor.exp$dim,
theta = attractor.exp$theta)
if (!(any(names(dim(exp)) %in% 'time'))) {
if (any(names(dim(exp)) %in% 'sdate')) {
if (any(names(dim(exp)) %in% 'ftime')) {
exp <- MergeDims(exp, merge_dims = c('ftime', 'sdate'),
rename_dim = 'time')
}
}
}
if (!(any(names(dim(obs)) %in% 'time'))) {
if (any(names(dim(obs)) %in% 'sdate')) {
if (any(names(dim(obs)) %in% 'ftime')) {
obs <- MergeDims(obs, merge_dims = c('ftime', 'sdate'),
rename_dim = 'time')
}
}
}
dim_exp <- dim(exp)
names_to_check <- names(dim_exp)[which(names(dim_exp) %in%
c('time', 'lat', 'lon', 'sdate') == FALSE)]
if (length(names_to_check) > 0) {
dim_obs <- dim(obs)
if (any(names(dim_obs) %in% names_to_check)) {
if (any(dim_obs[which(names(dim_obs) %in% names_to_check)] !=
dim_exp[which(names(dim_exp) %in% names_to_check)])) {
for (i in names_to_check) {
pos <- which(names(dim_obs) == i)
names(dim(obs))[pos] <- ifelse(dim_obs[pos] !=
dim_exp[which(names(dim_exp) == i)],
paste0('obs_', names(dim_obs[pos])),
names(dim(obs)[pos]))
}
warning("Common dimension names with different length are renamed.")
}
}
}
if (proxy == "dim") {
adjusted <- Apply(list(exp, obs), target_dims = 'time',
fun = .dynbias, method, wetday,
predyn.exp = predyn.exp$pred.dim$pos.d,
predyn.obs = predyn.obs$pred.dim$pos.d,
ncores = ncores, output_dims = 'time')$output1
} else if (proxy == "theta") {
adjusted <- Apply(list(exp, obs), target_dims = 'time',
fun = .dynbias, method, wetday,
predyn.exp = predyn.exp$pred.theta$pos.t,
predyn.obs = predyn.obs$pred.theta$pos.t,
ncores = ncores, output_dims = 'time')$output1
} else {
stop ("Parameter 'proxy' must be set as 'dim' or 'theta'.")
}
if (any(names(dim(adjusted)) %in% 'memberObs')) {
if (dim(adjusted)['memberObs'] == 1) {
adjusted <- Subset(adjusted, along = 'memberObs', indices=1, drop = 'selected')
} else {
print('Dimension member in obs changed to memberObs')
}
}
if (any(names(dim(adjusted)) %in% 'datasetObs')) {
if (dim(adjusted)['datasetObs'] == 1) {
adjusted <- Subset(adjusted, along = 'datasetObs', indices = 1, drop = 'selected')
} else {
print('Dimension dataset in obs changed to datasetObs')
}
}
return(adjusted)
}
.dynbias <- function(exp, obs, method, wetday, predyn.exp, predyn.obs) {
  result <- array(rep(NA, length(exp)))
  # For each tercile, correct the 'exp' values whose positions fall in that
  # tercile using the corresponding subset of 'obs', and place the corrected
  # values back into their original positions.
  res <- lapply(1:3, function(x) {
    exp_sub <- exp[predyn.exp[[x]]]
    obs_sub <- obs[predyn.obs[[x]]]
    adjust <- .qbiascorrection(exp_sub, obs_sub, method, wetday)
    result[predyn.exp[[x]]] <<- adjust
    return(NULL)
  })
  return(result)
}
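# Illustrative sketch (not part of the package code above): '.dynbias' expects
# 'predyn.exp'/'predyn.obs' to be lists with the time-step positions of each
# tercile of the dynamical proxy, as returned by 'Predictability'. A toy,
# purely hypothetical construction of such position lists from a generic proxy
# series:
proxy_toy <- rnorm(90)
terc_breaks <- quantile(proxy_toy, probs = c(0, 1/3, 2/3, 1))
terc_label <- cut(proxy_toy, breaks = terc_breaks, include.lowest = TRUE,
                  labels = FALSE)
pos_by_tercile <- lapply(1:3, function(k) which(terc_label == k))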
.qbiascorrection <- function(expX, obsX, method, wetday) {
## functions fitQmap and doQmap
if (method == "PTF") {
qm.fit <- fitQmap(obsX, expX, method = "PTF", transfun = "expasympt",
cost = "RSS", wet.day = wetday)
qmap <- doQmap(expX, qm.fit)
} else if (method == "QUANT") {
qm.fit <- fitQmap(obsX, expX, method = "QUANT", qstep = 0.01, wet.day = wetday)
qmap <- doQmap(expX, qm.fit, type = "tricub")
} else if (method == "RQUANT") {
qm.fit <- fitQmap(obsX, expX, method = "RQUANT", qstep = 0.01,wet.day = wetday)
qmap <- doQmap(expX, qm.fit, type = "linear")
} else if (method == "SSPLIN") {
    qm.fit <- fitQmap(obsX, expX, qstep = 0.01, method = "SSPLIN", wet.day = wetday)
qmap <- doQmap(expX, qm.fit)
} else {
stop ("Parameter 'method' doesn't match any of the available methods.")
}
return(qmap)
}
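# Illustrative sketch (not part of the package code above): '.qbiascorrection'
# relies on qmap::fitQmap and qmap::doQmap. A minimal stand-alone use of the
# "QUANT" branch on toy data (all values below are hypothetical):
library(qmap)
obs_toy <- rgamma(500, shape = 2, scale = 3)
exp_toy <- rgamma(500, shape = 2, scale = 4)   # deliberately biased "model"
fit_toy <- fitQmap(obs_toy, exp_toy, method = "QUANT", qstep = 0.01,
                   wet.day = FALSE)
exp_adj <- doQmap(exp_toy, fit_toy, type = "tricub")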
#'@rdname CST_EnsClustering
#'@title Ensemble clustering
#'
#'@author Federico Fabiano - ISAC-CNR, \email{[email protected]}
#'@author Ignazio Giuntoli - ISAC-CNR, \email{[email protected]}
#'@author Danila Volpi - ISAC-CNR, \email{[email protected]}
#'@author Paolo Davini - ISAC-CNR, \email{[email protected]}
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'
#'@description This function performs a clustering on members/starting dates
#'and returns a number of scenarios, with representative members for each of
#'them. The clustering is performed in a reduced EOF space.
#'
#'Motivation:
#'Ensemble forecasts give a probabilistic insight of average weather conditions
#'on extended timescales, i.e. from sub-seasonal to seasonal and beyond.
#'With large ensembles, it is often an advantage to be able to group members
#'according to similar characteristics and to select the most representative
#'member for each cluster. This can be useful to characterize the most probable
#'forecast scenarios in a multi-model (or single model) ensemble prediction.
#'This approach, applied at a regional level, can also be used to identify the
#'subset of ensemble members that best represent the full range of possible
#'solutions for downscaling applications. The choice of the ensemble members is
#'made flexible in order to meet the requirements of specific (regional) climate
#'information products, to be tailored for different regions and user needs.
#'
#'Description of the tool:
#'EnsClustering is a cluster analysis tool, based on the k-means algorithm, for
#'ensemble predictions. The aim is to group ensemble members according to
#'similar characteristics and to select the most representative member for each
#'cluster. The user chooses which feature of the data is used to group the
#'ensemble members by clustering: time mean, maximum, a certain percentile
#'(e.g., 75% as in the examples below), standard deviation and trend over the
#'time period. For each ensemble member this value is computed at each grid
#'point, obtaining N lat-lon maps, where N is the number of ensemble members.
#'The anomaly is computed by subtracting the ensemble mean of these maps from
#'each of the single maps. The anomaly is therefore computed with respect to the
#'ensemble members (and not with respect to the time) and the Empirical
#'Orthogonal Function (EOF) analysis is applied to these anomaly maps. Regarding
#'the EOF analysis, the user can choose either how many Principal Components
#'(PCs) to retain or the percentage of explained variance to keep. After
#'reducing dimensionality via EOF analysis, k-means analysis is applied using
#'the desired subset of PCs.
#'
#'The major final outputs are the classification in clusters, i.e. which member
#'belongs to which cluster (in k-means analysis the number k of clusters needs
#'to be defined prior to the analysis) and the most representative member for
#'each cluster, which is the closest member to the cluster centroid. Other
#'outputs refer to the statistics of clustering: in the PC space, the minimum
#'and the maximum distance between a member in a cluster and the cluster
#'centroid (i.e. the closest and the furthest member), the intra-cluster
#'standard deviation for each cluster (i.e. how much the cluster is compact).
#'
#'@param exp An object of the class 's2dv_cube', containing the variables to be
#' analysed. The element 'data' in the 's2dv_cube' object must have, at
#' least, spatial and temporal dimensions. Latitudinal dimension accepted
#' names: 'lat', 'lats', 'latitude', 'y', 'j', 'nav_lat'. Longitudinal
#' dimension accepted names: 'lon', 'lons','longitude', 'x', 'i', 'nav_lon'.
#'@param time_moment Decides the moment to be applied to the time dimension. Can
#' be either 'mean' (time mean), 'sd' (standard deviation along time) or 'perc'
#' (a selected percentile on time). If 'perc' the keyword 'time_percentile' is
#' also used.
#'@param time_percentile Set the percentile in time you want to analyse (used
#' for `time_moment = "perc"`).
#'@param numclus Number of clusters (scenarios) to be calculated. If set to NULL
#' the number of ensemble members divided by 10 is used, with a minimum of 2
#' and a maximum of 8.
#'@param lon_lim List with the two longitude margins in `c(-180,180)` format.
#'@param lat_lim List with the two latitude margins.
#'@param variance_explained variance (percentage) to be explained by the set of
#' EOFs. Defaults to 80. Not used if numpcs is specified.
#'@param numpcs Number of EOFs retained in the analysis (optional).
#'@param cluster_dim Dimension along which to cluster. Typically "member" or
#' "sdate". This can also be a list like c("member", "sdate").
#'@param time_dim String or character array with name(s) of dimension(s) over
#' which to compute statistics. If omitted c("ftime", "sdate", "time") are
#' searched in this order.
#'@param verbose Logical for verbose output
#'@return A list with elements \code{$cluster} (cluster assigned for each
#'member), \code{$freq} (relative frequency of each cluster),
#'\code{$closest_member} (representative member for each cluster),
#'\code{$repr_field} (list of fields for each representative member),
#'\code{$composites} (list of mean fields for each cluster), \code{$lon}
#'(selected longitudes of output fields), \code{$lat} (selected latitudes of
#'output fields).
#'@examples
#'dat_exp <- array(abs(rnorm(1152))*275, dim = c(dataset = 1, member = 4,
#' sdate = 6, ftime = 3,
#' lat = 4, lon = 4))
#'lon <- seq(0, 3)
#'lat <- seq(48, 45)
#'coords <- list(lon = lon, lat = lat)
#'exp <- list(data = dat_exp, coords = coords)
#'attr(exp, 'class') <- 's2dv_cube'
#'res <- CST_EnsClustering(exp = exp, numclus = 3,
#' cluster_dim = c("sdate"))
#'
#'@export
CST_EnsClustering <- function(exp, time_moment = "mean", numclus = NULL,
lon_lim = NULL, lat_lim = NULL,
variance_explained = 80, numpcs = NULL,
time_dim = NULL, time_percentile = 90,
cluster_dim = "member", verbose = F) {
# Check 's2dv_cube'
if (!inherits(exp, "s2dv_cube")) {
stop("Parameter 'exp' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
# Check 'exp' object structure
if (!all(c('data', 'coords') %in% names(exp))) {
stop("Parameter 'exp' must have 'data' and 'coords' elements ",
"within the 's2dv_cube' structure.")
}
# Check coordinates
if (!any(names(exp$coords) %in% .KnownLonNames()) |
!any(names(exp$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted by ",
"the package. Latitudes accepted names: 'lat', 'lats', 'latitude',",
" 'y', 'j', 'nav_lat'. Longitudes accepted names: 'lon', 'lons',",
" 'longitude', 'x', 'i', 'nav_lon'.")
}
lon_name <- names(exp$coords)[[which(names(exp$coords) %in% .KnownLonNames())]]
lat_name <- names(exp$coords)[[which(names(exp$coords) %in% .KnownLatNames())]]
result <- EnsClustering(exp$data,
lat = as.vector(exp$coords[[lat_name]]),
lon = as.vector(exp$coords[[lon_name]]),
time_moment = time_moment, numclus = numclus,
lon_lim = lon_lim, lat_lim = lat_lim,
variance_explained = variance_explained,
numpcs = numpcs, time_percentile = time_percentile,
time_dim = time_dim, cluster_dim = cluster_dim,
verbose = verbose)
return(result)
}
#'@rdname EnsClustering
#'@title Ensemble clustering
#'
#'@author Federico Fabiano - ISAC-CNR, \email{[email protected]}
#'@author Ignazio Giuntoli - ISAC-CNR, \email{[email protected]}
#'@author Danila Volpi - ISAC-CNR, \email{[email protected]}
#'@author Paolo Davini - ISAC-CNR, \email{[email protected]}
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'
#'@description This function performs a clustering on members/starting dates
#'and returns a number of scenarios, with representative members for each of
#'them. The clustering is performed in a reduced EOF space.
#'
#'@param data A matrix of dimensions 'dataset member sdate ftime lat lon'
#' containing the variables to be analysed. Latitudinal dimension accepted
#' names: 'lat', 'lats', 'latitude', 'y', 'j', 'nav_lat'. Longitudinal
#' dimension accepted names: 'lon', 'lons','longitude', 'x', 'i', 'nav_lon'.
#'@param lat Vector of latitudes.
#'@param lon Vector of longitudes.
#'@param time_moment Decides the moment to be applied to the time dimension. Can
#' be either 'mean' (time mean), 'sd' (standard deviation along time) or 'perc'
#' (a selected percentile on time). If 'perc' the keyword 'time_percentile' is
#' also used.
#'@param time_percentile Set the percentile in time you want to analyse (used
#' for `time_moment = "perc"`).
#'@param numclus Number of clusters (scenarios) to be calculated. If set to NULL
#' the number of ensemble members divided by 10 is used, with a minimum of 2
#' and a maximum of 8.
#'@param lon_lim List with the two longitude margins in `c(-180,180)` format.
#'@param lat_lim List with the two latitude margins.
#'@param variance_explained variance (percentage) to be explained by the set of
#' EOFs. Defaults to 80. Not used if numpcs is specified.
#'@param numpcs Number of EOFs retained in the analysis (optional).
#'@param cluster_dim Dimension along which to cluster. Typically "member" or
#' "sdate". This can also be a list like c("member", "sdate").
#'@param time_dim String or character array with name(s) of dimension(s) over
#' which to compute statistics. If omitted c("ftime", "sdate", "time") are
#' searched in this order.
#'@param verbose Logical for verbose output
#'@return A list with elements \code{$cluster} (cluster assigned for each member),
#'\code{$freq} (relative frequency of each cluster), \code{$closest_member}
#'(representative member for each cluster), \code{$repr_field} (list of fields for
#'each representative member), \code{$composites} (list of mean fields for each
#'cluster), \code{$lon} (selected longitudes of output fields), \code{$lat}
#'(selected latitudes of output fields).
#'
#'@examples
#'exp <- array(abs(rnorm(1152))*275, dim = c(dataset = 1, member = 4,
#' sdate = 6, ftime = 3,
#' lat = 4, lon = 4))
#'lon <- seq(0, 3)
#'lat <- seq(48, 45)
#'res <- EnsClustering(exp, lat = lat, lon = lon, numclus = 2,
#' cluster_dim = c("member", "dataset", "sdate"))
#'
#'@export
EnsClustering <- function(data, lat, lon, time_moment = "mean", numclus = NULL,
lon_lim = NULL, lat_lim = NULL, variance_explained = 80,
numpcs = NULL, time_percentile = 90, time_dim = NULL,
cluster_dim = "member", verbose = T) {
# Know spatial coordinates names
if (!any(names(dim(data)) %in% .KnownLonNames()) |
!any(names(dim(data)) %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted by ",
"the package.")
}
lon_name <- names(dim(data))[[which(names(dim(data)) %in% .KnownLonNames())]]
lat_name <- names(dim(data))[[which(names(dim(data)) %in% .KnownLatNames())]]
# Check/detect time_dim
if (is.null(time_dim)) {
time_dim_names <- c("ftime", "sdate", "time")
time_dim_num <- which(time_dim_names %in% names(dim(data)))
if (length(time_dim_num) > 0) {
# Find time dimension with length > 1
ilong <- which(dim(data)[time_dim_names[time_dim_num]] > 1)
if (length(ilong) > 0) {
time_dim <- time_dim_names[time_dim_num[ilong[1]]]
} else {
stop("No time dimension longer than one found.")
}
} else {
stop("Could not automatically detect a target time dimension ",
"in the provided data in 'data'.")
}
.printv(paste("Selected time dim:", time_dim), verbose)
}
# Apply time_moment
if (time_moment == "mean") {
.printv("Considering the time_moment: mean", verbose)
exp <- Apply(data, target_dims = time_dim, mean)$output1
} else if (time_moment == "sd") {
.printv("Considering the time_moment: sd", verbose)
exp <- Apply(data, target_dims = time_dim, sd)$output1
} else if (time_moment == "perc") {
.printv(paste0("Considering the time_moment: percentile ",
sprintf("%5f", time_percentile)), verbose)
exp <- Apply(data, target_dims = time_dim,
function(x) {quantile(as.vector(x),
time_percentile / 100.)})$output1
} else {
stop(paste0("Invalid time_moment '", time_moment, "' specified!"))
}
# Repeatedly apply .ensclus
result <- Apply(exp, target_dims = c(cluster_dim, lat_name, lon_name), .ensclus,
lat, lon, numclus = numclus,
lon_lim = lon_lim, lat_lim = lat_lim,
variance_explained = variance_explained,
numpcs = numpcs, verbose = verbose)
# Expand result$closest_member into indices in cluster_dim dimensions
cm = result$closest_member
cml <- vector(mode = "list", length = length(cluster_dim))
cum <- cm * 0
dim_cd <- dim(exp)[cluster_dim]
for (i in rev(seq_along(cluster_dim))) {
cml[[i]] <- floor((cm - cum - 1) / prod(dim_cd[-i])) + 1
cum <- cum + (cml[[i]] - 1) * prod(dim_cd[-i])
dim_cd <- dim_cd[-i]
}
names(cml) <- cluster_dim
result$closest_member <- cml
result[[lon_name]] <- lon
result[[lat_name]] <- lat
return(result)
}
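# Illustrative sketch (not part of the package code above): the loop at the end
# of EnsClustering converts the linear indices in 'closest_member' into one
# index per clustering dimension, in column-major order (the same convention as
# base::arrayInd). Toy check with hypothetical dimensions:
cm_toy <- c(5, 11)
dim_toy <- c(member = 4, sdate = 3)
idx_toy <- vector(mode = "list", length = length(dim_toy))
cum_toy <- cm_toy * 0
dd_toy <- dim_toy
for (i in rev(seq_along(dd_toy))) {
  idx_toy[[i]] <- floor((cm_toy - cum_toy - 1) / prod(dd_toy[-i])) + 1
  cum_toy <- cum_toy + (idx_toy[[i]] - 1) * prod(dd_toy[-i])
  dd_toy <- dd_toy[-i]
}
cbind(member = idx_toy[[1]], sdate = idx_toy[[2]])  # matches arrayInd(cm_toy, dim_toy)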
# Atomic ensclus function
.ensclus <- function(var_ens, lat, lon, numclus = NULL, lon_lim = NULL,
lat_lim = NULL, variance_explained = 80, numpcs = NULL,
verbose = T) {
# Check if more than one dimension has been passed for clustering
sampledims <- NULL
if (length(dim(var_ens)) > 3) {
sampledims <- head(dim(var_ens), -2)
dim(var_ens) <- c(samples = prod(sampledims),
tail(dim(var_ens), 2))
}
if (length(lat) != dim(var_ens)[2]) {
stop("Incorrect lat length")
}
if (length(lon) != dim(var_ens)[3]) {
stop("Incorrect lon length")
}
n_ens <- dim(var_ens)[1]
if (is.null(numclus)) {
numclus <- min(max(floor(n_ens / 10), 2), 8)
}
.printv(paste("Number of clusters:", numclus), verbose)
.printv("Calculating ensemble anomalies...", verbose)
ens_mean <- apply(var_ens, c(2, 3), mean)
var_anom <- array(dim = dim(var_ens))
for (k in seq(1, n_ens)) {
var_anom[k, , ] <- var_ens[k, , ] - ens_mean
}
# reshaping to give the right input to regimes function
var_anom <- aperm(var_anom, c(3, 2, 1))
clusters <- .regimes(lon, lat, var_anom, ncluster = numclus, ntime = 1000,
neof = numpcs, lon_lim, lat_lim,
perc = variance_explained,
max_eofs = n_ens - 1, verbose = verbose)
clus_labels <- as.array(clusters$cluster)
names(dim(clus_labels))[1] <- names(dim(var_ens))[1]
if (!is.null(sampledims)) {
dim(clus_labels) <- c(sampledims, dim(clus_labels)[-1])
}
frequencies <- as.array(clusters$frequencies)
names(dim(frequencies))[1] <- "cluster"
clus_centers <- clusters$clus_centers
closest_member <- array(dim = numclus)
dist_closest_member <- array(dim = numclus)
for (iclu in seq(1, numclus)) {
this_clus_labels <- which(clus_labels == iclu)
if (length(this_clus_labels) > 1) {
dist_arr <- apply(clusters$pcs[clus_labels == iclu, ], 1,
.dist_from_center, center = clus_centers[iclu, ])
.printv(paste0("distance from cluster ", iclu, " center:"), verbose)
.printv(this_clus_labels, verbose)
.printv(dist_arr, verbose)
closest_member[iclu] <- this_clus_labels[which.min(dist_arr)]
dist_closest_member[iclu] <- min(dist_arr)
.printv(paste0("closest member to cluster ", iclu, " center is: ",
closest_member[iclu]), verbose)
} else {
.printv(paste0("distance from cluster ", iclu, " center:"), verbose)
dista <- .dist_from_center(clusters$pcs[clus_labels == iclu, ],
center = clus_centers[iclu, ])
.printv(this_clus_labels, verbose)
      .printv(dista, verbose)
closest_member[iclu] <- this_clus_labels
dist_closest_member[iclu] <- dista
}
}
.printv("EnsClustering completed...", verbose)
names(dim(closest_member))[1] <- "cluster"
repr_field <- var_ens[closest_member, , ]
names(dim(repr_field))[2:3] <- names(dim(var_ens))[2:3]
names(dim(repr_field))[1] <- "cluster"
# Bring back to original order lat lon
composites <- aperm(clusters$regimes, c(1, 3, 2))
names(dim(composites))[2:3] <- names(dim(var_ens))[2:3]
names(dim(composites))[1] <- "cluster"
out <- list(cluster = clus_labels, freq = frequencies,
closest_member = closest_member, repr_field = repr_field,
composites = composites)
return(out)
}
.dist_from_center <- function(y, center = NULL) {
dist <- sqrt(sum((y - center)^2))
return(dist)
}
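# Illustrative sketch (not part of the package code above): within '.ensclus',
# the representative member of a cluster is the one with the smallest Euclidean
# distance to the cluster centroid in PC space. A toy stand-alone version using
# '.dist_from_center' (all numbers are hypothetical):
pcs_toy <- matrix(rnorm(4 * 3), nrow = 4)   # 4 members x 3 PCs
center_toy <- colMeans(pcs_toy)             # stand-in for a k-means centroid
dist_toy <- apply(pcs_toy, 1, .dist_from_center, center = center_toy)
which.min(dist_toy)                         # index of the closest member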
.eofs <- function(lon, lat, field, neof = 4, xlim = NULL, ylim = NULL,
method = "SVD", do_standardize = F, do_regression = F,
verbose = T) {
# R tool for computing EOFs based on
# Singular Value Decomposition ("SVD", default)
# or with the eigenvectors of the covariance matrix ("covariance", slower)
# If requested, computes linear regressions and standardizes the PCs
# If you want to use the regressions, remember to standardize the PCs
# Take as input a 3D anomaly field.
# Requires "personal" functions area.weight, standardize
if (exists(".lm.fit")) {
lin.fit <- .lm.fit
} else {
lin.fit <- lm.fit
}
# area weighting, based on the root of cosine
.printv("Area Weighting...", verbose)
ww <- .area.weight(lon, lat, root = T)
wwfield <- sweep(field, c(1, 2), ww, "*")
idx <- .selbox(lon, lat, xlim, ylim)
slon <- lon[idx$ilon]
slat <- lat[idx$ilat]
wwfield <- wwfield[idx$ilon, idx$ilat, ]
# transform 3D field in a matrix
wwfield <- array(wwfield, dim = c(dim(wwfield)[1] * dim(wwfield)[2],
dim(wwfield)[3]))
# calling SVD
if (method == "SVD") {
.printv("Calling SVD...", verbose)
SVD <- svd(wwfield, nu = neof, nv = neof)
# extracting EOFs (loading pattern), expansions coefficient
# and variance explained
    pattern <- array(SVD$u, dim = c(length(slon), length(slat), neof))
coefficient <- SVD$v
variance <- (SVD$d[1:neof])^2 / sum((SVD$d)^2)
if (do_standardize) {
coefficient <- apply(coefficient, c(2), .standardize)
} else {
coefficient <- sweep(coefficient, c(2), sqrt(variance), "*")
}
}
# calling covariance matrix
if (method == "covariance") {
.printv("Calling eigenvectors of the covariance matrix...", verbose)
covma <- cov(t(wwfield))
eig <- eigen(covma)
    coef <- (t(wwfield) %*% eig$vectors)[, 1:neof]
    pattern <- array(eig$vectors, dim = c(length(slon), length(slat),
                                          ncol(eig$vectors)))[, , 1:neof]
variance <- eig$values[1:neof] / sum(eig$values)
if (do_standardize) {
coefficient <- apply(coef, c(2), .standardize)
} else {
coefficient <- coef
}
}
# linear regressions on anomalies
regression <- NULL
if (do_regression) {
.printv("Linear Regressions (it can takes a while)... ", verbose)
regression <- array(NA, dim = c(length(lon), length(lat), neof))
for (i in 1:neof) {
regression[, , i] <- apply(field, c(1, 2),
function(x) lin.fit(as.matrix(coefficient[, i],
ncol = 1), x)$coefficients)
}
}
# preparing output
.printv("Finalize eofs...", verbose)
pattern <- list(x = slon, y = slat, z = pattern)
out <- list(pattern = pattern, coeff = coefficient, variance = variance,
regression = regression)
return(out)
}
.regimes <- function(lon, lat, field, ncluster = 4, ntime = 1000, neof = 10,
xlim, ylim, alg = "Hartigan-Wong",
perc = NULL, max_eofs = 50, verbose = T) {
# R tool to compute cluster analysis based on k-means.
# Requires "personal" function eofs
# Take as input a 3D anomaly field
# Reduce the phase space with EOFs: use SVD and do not standardize PCs
.printv("Launching EOFs...", verbose)
t0 <- proc.time()
if (is.null(neof)) {
reducedspace <- .eofs(lon, lat, field, neof = max_eofs, xlim = xlim,
ylim = ylim, method = "SVD", do_regression = F,
do_standardize = F, verbose = verbose)
neof <- which(cumsum(reducedspace$variance) > perc / 100.)[1]
.printv(paste("Number of EOFs needed for var:", neof), verbose)
PC <- reducedspace$coeff[,1:neof]
} else {
reducedspace <- .eofs(lon, lat, field, neof = neof, xlim = xlim,
ylim = ylim, method = "SVD", do_regression = F,
do_standardize = F, verbose = verbose)
PC <- reducedspace$coeff
}
t1 <- proc.time() - t0
# k-means computation repeat for ntime to find best solution.
.printv("Computing k-means...", verbose)
t0 <- proc.time()
regimes <- kmeans(PC, as.numeric(ncluster), nstart = ntime,
iter.max = 1000, algorithm = alg)
t1 <- proc.time() - t0
# Extract regimes frequency and timeseries of occupation
cluster <- regimes$cluster
frequencies <- regimes$size / dim(field)[3] * 100
.printv("Cluster frequencies:", verbose)
.printv(frequencies[order(frequencies, decreasing = T)], verbose)
.printv("Creating Composites...", verbose)
compose <- apply(field, c(1, 2), by, cluster, mean)
# sorting from the more frequent to the less frequent
kk <- order(frequencies, decreasing = T)
cluster <- cluster + 100
for (ss in 1:ncluster) {
cluster[cluster == (ss + 100)] <- which(kk == ss)
}
# prepare output
.printv("Finalize regimes...", verbose)
out <- list(cluster = cluster, frequencies = frequencies[kk],
regimes = compose[kk, , ], pcs = PC,
clus_centers = regimes$centers[kk, ],
tot.withinss = regimes$tot.withinss)
return(out)
}
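# Illustrative sketch (not part of the package code above): '.regimes' reduces
# the anomaly field with an SVD-based EOF decomposition and applies k-means to
# the resulting PCs. A toy stand-alone version of these two steps (hypothetical
# data, without area weighting or box selection):
set.seed(1)
anom_toy <- matrix(rnorm(20 * 6), nrow = 20)    # 20 grid points x 6 members
svd_toy <- svd(anom_toy, nu = 3, nv = 3)
var_toy <- (svd_toy$d[1:3])^2 / sum(svd_toy$d^2)
pcs_km <- sweep(svd_toy$v, 2, sqrt(var_toy), "*")  # non-standardized PCs
km_toy <- kmeans(pcs_km, centers = 2, nstart = 100, iter.max = 1000)
km_toy$cluster                                  # cluster assigned to each member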
#'Add a named dimension to an object of class s2dv_cube
#'
#'Insert an extra dimension into an array at position 'posdim' with length
#''lendim'. The array in \code{data} repeats along the new dimension.
#'The dimensions, coordinates and attributes are modified accordingly.
#'
#'@author Agudetse Roures Victoria, \email{[email protected]}
#'
#'@param data An object of class \code{s2dv_cube} to which the additional
#' dimension should be added.
#'@param posdim An integer indicating the position of the new dimension.
#'@param lendim An integer indicating the length of the new dimension.
#'@param name A character string indicating the name for the new dimension.
#'@param values A vector containing the values of the new dimension and any
#' relevant attributes. If NULL, a sequence of integers from 1 to lendim will
#' be added.
#'
#'@return An object of class \code{s2dv_cube} with similar data, coordinates and
#'attributes as the \code{data} input, but with an additional dimension.
#'
#'@examples
#'#Example with sample data:
#'# Check original dimensions and coordinates
#'lonlat_temp$exp$dims
#'names(lonlat_temp$exp$coords)
#'# Add 'variable' dimension
#'exp <- CST_InsertDim(lonlat_temp$exp,
#' posdim = 2,
#' lendim = 1,
#' name = "variable",
#' values = c("tas"))
#'# Check new dimensions and coordinates
#'exp$dims
#'exp$coords$variable
#'
#'@seealso \link[s2dv]{InsertDim}
#'
#'@importFrom s2dv InsertDim
#'@export
CST_InsertDim <- function(data, posdim, lendim, name, values = NULL) {
# Check inputs
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Check name
if (!is.character(name) || length(name) > 1) {
stop("Parameter 'name' must be a character string")
}
# Check values
if (is.null(values)) {
warning(paste0("Parameter 'values' is not provided. Adding a sequence of ",
"integers from 1 to 'lendim' as the values for the new dimension."))
values <- 1:lendim
} else {
if (!(length(values) == lendim)) {
stop(paste0("The length of the parameter 'values' must be consistent",
"with the parameter 'lendim'."))
}
}
# Insert dim in data
data$data <- s2dv::InsertDim(data$data, posdim = posdim, lendim = lendim,
name = name)
# Adjust dimensions
data$dims <- dim(data$data)
# Adjust coordinates
data$coords[[name]] <- values
data$coords <- data$coords[names(data$dims)]
return(data)
}
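# Illustrative sketch (not part of the package code above): the dimension
# insertion is delegated to s2dv::InsertDim, which repeats the array along the
# new dimension. Minimal stand-alone use on a plain named array (names are
# hypothetical):
toy <- array(1:6, dim = c(sdate = 2, time = 3))
toy_ext <- s2dv::InsertDim(toy, posdim = 2, lendim = 1, name = "variable")
dim(toy_ext)  # sdate = 2, variable = 1, time = 3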
#' CSTools Data Retrieval Function
#'
#' This function aggregates, subsets and retrieves sub-seasonal, seasonal, decadal or climate projection data from NetCDF files in a local file system or on remote OPeNDAP servers, and arranges it for easy application of the CSTools functions.
#'
#' It receives any number of parameters (`...`) that are automatically forwarded to the `s2dv::Load` function. See details in `?s2dv::Load`.
#'
#' It is recommended to use this function in combination with the `zeallot::"%<-%"` operator, to directly assign the two returned 's2dv_cube's to two separate variables, which can then be sent independently to other functions in CSTools as needed. E.g.: `c(exp, obs) <- CST_Load(...)`.
#'
#' @param ... Parameters that are automatically forwarded to the `s2dv::Load` function. See details in `?s2dv::Load`.
#' @return A list with one or two S3 objects, named 'exp' and 'obs', of the class 's2dv_cube', containing experimental and date-corresponding observational data, respectively. These 's2dv_cube's can be ingested by other functions in CSTools. If the parameter `exp` in the call to `CST_Load` is set to `NULL`, then only the 'obs' component is returned, and vice versa.
#' @author Nicolau Manubens, \email{[email protected]}
#' @importFrom s2dv Load
#' @importFrom utils glob2rx
#' @export
#' @examples
#' \dontrun{
#' library(zeallot)
#' startDates <- c('20001101', '20011101', '20021101',
#' '20031101', '20041101', '20051101')
#' c(exp, obs) %<-%
#' CST_Load(
#' var = 'tas',
#' exp = 'system5c3s',
#' obs = 'era5',
#' nmember = 15,
#' sdates = startDates,
#' leadtimemax = 3,
#' latmin = 27, latmax = 48,
#' lonmin = -12, lonmax = 40,
#' output = 'lonlat',
#' nprocs = 1
#' )
#' }
#' \dontshow{
#' exp <- CSTools::lonlat_temp$exp
#' obs <- CSTools::lonlat_temp$obs
#' }
CST_Load <- function(...) {
exp <- Load(...)
result <- as.s2dv_cube(exp)
result
}
#'Function to Merge Dimensions
#'
#'@author Nuria Perez-Zanon, \email{[email protected]}
#'
#'@description This function merges two dimensions of the array \code{data} in a
#''s2dv_cube' object into one. The user can select the dimensions to merge and
#'provide the final name of the dimension. The user can select to remove NA
#'values or keep them.
#'
#'@param data An 's2dv_cube' object
#'@param merge_dims A character vector indicating the names of the dimensions to
#' merge.
#'@param rename_dim a character string indicating the name of the output
#' dimension. If left at NULL, the first dimension name provided in parameter
#' \code{merge_dims} will be used.
#'@param na.rm A logical indicating if the NA values should be removed or not.
#'
#'@examples
#'data <- 1 : c(2 * 3 * 4 * 5 * 6 * 7)
#'dim(data) <- c(time = 7, lat = 2, lon = 3, monthly = 4, member = 6,
#' dataset = 5, var = 1)
#'data[2,,,,,,] <- NA
#'data[c(3,27)] <- NA
#'data <- list(data = data)
#'class(data) <- 's2dv_cube'
#'new_data <- CST_MergeDims(data, merge_dims = c('time', 'monthly'))
#'new_data <- CST_MergeDims(data, merge_dims = c('lon', 'lat'), rename_dim = 'grid')
#'new_data <- CST_MergeDims(data, merge_dims = c('time', 'monthly'), na.rm = TRUE)
#'@export
CST_MergeDims <- function(data, merge_dims = c('ftime', 'monthly'),
rename_dim = NULL, na.rm = FALSE) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
if (is.null(rename_dim)) {
rename_dim <- merge_dims[1]
}
# data
data$data <- MergeDims(data$data, merge_dims = merge_dims,
rename_dim = rename_dim, na.rm = na.rm)
# dims
data$dims <- dim(data$data)
# rename_dim
if (length(rename_dim) > 1) {
rename_dim <- as.character(rename_dim[1])
}
# coords
data$coords[merge_dims] <- NULL
data$coords[[rename_dim]] <- 1:dim(data$data)[rename_dim]
attr(data$coords[[rename_dim]], 'indices') <- TRUE
# attrs
if (all(merge_dims %in% names(dim(data$attrs$Dates)))) {
dim(data$attrs$Dates) <- dim(data$data)[rename_dim]
} else if (any(merge_dims %in% names(dim(data$attrs$Dates)))) {
warning("The dimensions of 'Dates' array will be different from ",
"the temporal dimensions in 'data'. Parameter 'merge_dims' ",
"only includes one temporal dimension of 'Dates'.")
}
return(data)
}
#'Function to Merge Dimensions
#'
#'@author Nuria Perez-Zanon, \email{[email protected]}
#'
#'@description This function merges two dimensions of an array into one. The
#'user can select the dimensions to merge and provide the final name of the
#'dimension. The user can select to remove NA values or keep them.
#'
#'@param data An n-dimensional array with named dimensions
#'@param merge_dims A character vector indicating the names of the dimensions to
#' merge.
#'@param rename_dim A character string indicating the name of the output
#' dimension. If left at NULL, the first dimension name provided in parameter
#' \code{merge_dims} will be used.
#'@param na.rm A logical indicating if the NA values should be removed or not.
#'
#'@examples
#'data <- 1 : 20
#'dim(data) <- c(time = 10, lat = 2)
#'new_data <- MergeDims(data, merge_dims = c('time', 'lat'))
#'@import abind
#'@importFrom ClimProjDiags Subset
#'@export
MergeDims <- function(data, merge_dims = c('time', 'monthly'),
rename_dim = NULL, na.rm = FALSE) {
# check data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (is.null(dim(data))) {
stop("Parameter 'data' must have dimensions.")
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have dimension names.")
}
dims <- dim(data)
# check merge_dims
if (is.null(merge_dims)) {
stop("Parameter 'merge_dims' cannot be NULL.")
}
if (!is.character(merge_dims)) {
stop("Parameter 'merge_dims' must be a character vector ",
"indicating the names of the dimensions to be merged.")
}
if (length(merge_dims) > 2) {
warning("Only two dimensions can be merge, only the first two ",
"dimension will be used. To merge further dimensions ",
"consider to use this function multiple times.")
merge_dims <- merge_dims[1 : 2]
} else if (length(merge_dims) < 2) {
stop("Parameter 'merge_dims' must be of length two.")
}
if (is.null(rename_dim)) {
rename_dim <- merge_dims[1]
}
if (length(rename_dim) > 1) {
warning("Parameter 'rename_dim' has length greater than 1 ",
"and only the first element will be used.")
rename_dim <- as.character(rename_dim[1])
}
if (!any(names(dims) %in% merge_dims)) {
stop("Parameter 'merge_dims' must match with dimension ",
"names in parameter 'data'.")
}
pos1 <- which(names(dims) == merge_dims[1])
pos2 <- which(names(dims) == merge_dims[2])
if (length(pos1) == 0 | length(pos2) == 0) {
stop("Parameter 'merge_dims' must match with dimension ",
"names in parameter 'data'.")
}
if (pos1 > pos2) {
pos1 <- pos1 - 1
}
data <- lapply(1:dims[pos2], function(x) {Subset(data, along = pos2,
indices = x, drop = 'selected')})
data <- abind(data, along = pos1)
names(dim(data)) <- names(dims)[-pos2]
if (!is.null(rename_dim)) {
names(dim(data))[pos1] <- rename_dim
}
if (na.rm) {
nas <- which(is.na(Subset(data, along = -pos1, indices = 1)))
if (length(nas) != 0) {
nas <- unlist(lapply(nas, function(x) {
if(all(is.na(Subset(data, along = pos1,
indices = x)))) {
return(x)}}))
data <- Subset(data, along = pos1, indices = -nas)
}
}
return(data)
}
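# Illustrative sketch (not part of the package code above): MergeDims slices
# the second dimension with ClimProjDiags::Subset and stacks the slices along
# the first dimension with abind::abind. Minimal stand-alone version on a
# hypothetical array:
library(abind)
toy_md <- array(1:24, dim = c(time = 4, sdate = 3, lat = 2))
slices_md <- lapply(1:3, function(i) {
  ClimProjDiags::Subset(toy_md, along = 'sdate', indices = i, drop = 'selected')
})
merged_md <- abind(slices_md, along = 1)
names(dim(merged_md)) <- c("time", "lat")
dim(merged_md)  # time = 12, lat = 2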
#'@rdname CST_MultiEOF
#'@title EOF analysis of multiple variables
#'
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'@author Paolo Davini - ISAC-CNR, \email{[email protected]}
#'
#'@description This function performs EOF analysis over multiple variables,
#'accepting in input a list of CSTools objects. Based on Singular Value
#'Decomposition. For each field the EOFs are computed and the corresponding PCs
#'are standardized (unit variance, zero mean); the minimum number of principal
#'components needed to reach the user-defined variance is retained. The function
#'weights the input data for the latitude cosine square root.
#'
#'@param datalist A list of objects of the class 's2dv_cube', containing the
#' variables to be analysed. Each data object in the list is expected to have
#' an element named \code{$data} with at least two spatial dimensions named
#' "lon" and "lat", a dimension "ftime" and a dimension "sdate". Latitudinal
#' dimension accepted names: 'lat', 'lats', 'latitude', 'y', 'j', 'nav_lat'.
#' Longitudinal dimension accepted names: 'lon', 'lons','longitude', 'x', 'i',
#' 'nav_lon'. NAs can exist but it should be consistent along 'time_dim'. That
#' is, if one grid point has NAs for each variable, all the time steps at this
#' point should be NAs.
#'@param lon_dim A character string indicating the name of the longitudinal
#' dimension. By default, it is set to 'lon'.
#'@param lat_dim A character string indicating the name of the latitudinal
#' dimension. By default, it is set to 'lat'.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'ftime'.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'.
#'@param var_dim A character string indicating the name of the variable
#' dimension. By default, it is set to 'var'.
#'@param neof_max Maximum number of single eofs considered in the first
#' decomposition.
#'@param neof_composed Number of composed eofs to return in output.
#'@param minvar Minimum variance fraction to be explained in first decomposition.
#'@param lon_lim Vector with longitudinal range limits for the EOF calculation
#' for all input variables.
#'@param lat_lim Vector with latitudinal range limits for the EOF calculation
#' for all input variables.
#'@param ncores An integer indicating the number of cores to use for parallel
#' computation. The default value is NULL.
#'@return
#'A list containing:
#'\item{coeff}{
#' An 's2dv_cube' with the data element being an array of principal components
#' with dimensions 'time_dim', 'sdate_dim', number of eof, rest of the
#' dimensions of 'data' except 'lon_dim' and 'lat_dim'.
#'}
#'\item{variance}{
#' An 's2dv_cube' with the data element being an array of explained variances
#' with dimensions 'eof' and the rest of the dimensions of 'data' except
#' 'time_dim', 'sdate_dim', 'lon_dim' and 'lat_dim'.
#'}
#'\item{eof_pattern}{
#' An 's2dv_cube' with the data element being an array of EOF patterns obtained
#' by regression with dimensions: 'eof' and the rest of the dimensions of
#' 'data' except 'time_dim' and 'sdate_dim'.
#'}
#'\item{mask}{
#' An 's2dv_cube' with the data element being an array of the mask with
#' dimensions ('lon_dim', 'lat_dim', rest of the dimensions of 'data' except
#' 'time_dim'). It is made from 'data', 1 for the positions that 'data' has
#' value and NA for the positions that 'data' has NA. It is used to replace NAs
#' with 0s for EOF calculation and mask the result with NAs again after the
#' calculation.
#'}
#'\item{coordinates}{
#' Longitudinal and latitudinal coordinates vectors.
#'}
#'@examples
#'seq <- 1 : (2 * 3 * 4 * 5 * 6 * 8)
#'mod1 <- sin( 0.7 + seq )^2 + cos( seq ^ 2 * 1.22 )
#'dim(mod1) <- c(dataset = 2, member = 3, sdate = 4, ftime = 5, lat = 6,
#' lon = 8)
#'mod2 <- sin( seq * 2 ) ^ 3 + cos( seq ^ 2 )
#'dim(mod2) <- c(dataset = 2, member = 3, sdate = 4, ftime = 5, lat = 6,
#' lon = 8)
#'lon <- seq(0, 35, 5)
#'lat <- seq(0, 25, 5)
#'exp1 <- list(data = mod1, coords = list(lat = lat, lon = lon))
#'exp2 <- list(data = mod2, coords = list(lat = lat, lon = lon))
#'attr(exp1, 'class') <- 's2dv_cube'
#'attr(exp2, 'class') <- 's2dv_cube'
#'d = as.POSIXct(c("2017/01/01", "2017/01/02", "2017/01/03", "2017/01/04",
#' "2017/01/05", "2018/01/01", "2018/01/02", "2018/01/03",
#' "2018/01/04", "2018/01/05", "2019/01/01", "2019/01/02",
#' "2019/01/03", "2019/01/04", "2019/01/05", "2020/01/01",
#' "2020/01/02", "2020/01/03", "2020/01/04", "2020/01/05"))
#'exp1$attrs$Dates = d
#'exp2$attrs$Dates = d
#'
#'cal <- CST_MultiEOF(datalist = list(exp1, exp2), neof_composed = 2)
#'@import abind
#'@export
CST_MultiEOF <- function(datalist, lon_dim = "lon", lat_dim = "lat",
time_dim = 'ftime', sdate_dim = 'sdate',
var_dim = 'var', neof_max = 40, neof_composed = 5,
minvar = 0.6, lon_lim = NULL, lat_lim = NULL,
ncores = NULL) {
# Check 's2dv_cube'
if (!(all(sapply(datalist, inherits, 's2dv_cube')))) {
stop("Elements of the list in parameter 'datalist' must be of the ",
"class 's2dv_cube'.")
}
if (!all(c('data', 'coords', 'attrs') %in% names(datalist[[1]]))) {
stop("Parameter 'datalist' must have 'data', 'coords' and 'attrs' elements ",
"within the 's2dv_cube' structure.")
}
# Dates
dates <- datalist[[1]]$attrs$Dates
if (is.null(dates)) {
stop("Element 'Dates' is not found in 'attrs' list of the first array.")
}
# coordinates
if (!any(names(datalist[[1]]$coords) %in% .KnownLonNames()) |
!any(names(datalist[[1]]$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted by the ",
"package. Latitudes accepted names: 'lat', 'lats', 'latitude', 'y', 'j', ",
"'nav_lat'. Longitudes accepted names: 'lon', 'lons', 'longitude', 'x',",
" 'i', 'nav_lon'.")
}
# Check if all dims equal
adims = lapply(lapply(datalist, function(x) x$data), dim)
if (!all(apply(apply(abind(adims, along = 0), 2, duplicated), 2, sum) ==
(length(adims)-1))) {
stop("Input data fields must all have the same dimensions.")
}
exp <- abind(lapply(datalist, '[[', 'data'), along = 0)
dim(exp) <- c(length(datalist), dim(datalist[[1]]$data))
names(dim(exp)) <- c(var_dim, names(dim(datalist[[1]]$data)))
lon_name <- names(datalist[[1]]$coords)[[which(names(datalist[[1]]$coords) %in% .KnownLonNames())]]
lat_name <- names(datalist[[1]]$coords)[[which(names(datalist[[1]]$coords) %in% .KnownLatNames())]]
lon <- as.vector(datalist[[1]]$coords[[lon_name]])
lat <- as.vector(datalist[[1]]$coords[[lat_name]])
result <- MultiEOF(data = exp, lon = lon, lat = lat,
lon_dim = lon_dim, lat_dim = lat_dim, time_dim = time_dim,
sdate_dim = sdate_dim, var_dim = var_dim,
dates = dates, minvar = minvar,
neof_max = neof_max, neof_composed = neof_composed,
lon_lim = lon_lim, lat_lim = lat_lim, ncores = ncores)
names_res <- names(result[1:4])
res <- lapply(seq_along(result)[1:4], function(i) {
coords = list(lon, lat)
names(coords) <- c(lon_dim, lat_dim)
dates <- dates
varName <- names(result)[[i]]
metadata <- lapply(datalist, function(x) x$attrs$Variable$metadata)
metadata <- unlist(metadata, recursive=FALSE)
metadata <- metadata[unique(names(metadata))]
suppressWarnings(
cube <- s2dv_cube(data = result[[i]], coords = coords, varName = varName, Dates = dates,
source_files = unlist(sapply(datalist, function(x) x$attrs$source_files)),
metadata = metadata, when = Sys.time())
)
return(cube)
})
names(res) <- names_res
return(c(res, result[5:6]))
}
#'@rdname MultiEOF
#'@title EOF analysis of multiple variables starting from an array (reduced
#'version)
#'
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'@author Paolo Davini - ISAC-CNR, \email{[email protected]}
#'
#'@description This function performs EOF analysis over multiple variables,
#'accepting in input an array with a dimension \code{"var"} for each variable to
#'analyse. Based on Singular Value Decomposition. For each field the EOFs are
#'computed and the corresponding PCs are standardized (unit variance, zero mean);
#'the minimum number of principal components needed to reach the user-defined
#'variance is retained. The function weights the input data for the latitude
#'cosine square root.
#'
#'@param data A multidimensional array with dimension \code{"var"}, containing
#' the variables to be analysed. The other dimensions follow the same structure
#' as the \code{"exp"} element of a 's2dv_cube' object. Latitudinal
#' dimension accepted names: 'lat', 'lats', 'latitude', 'y', 'j', 'nav_lat'.
#' Longitudinal dimension accepted names: 'lon', 'lons','longitude', 'x', 'i',
#' 'nav_lon'. NAs can exist but it should be consistent along 'time_dim'. That
#' is, if one grid point has NAs for each variable, all the time steps at this
#' point should be NAs.
#'@param lon Vector of longitudes.
#'@param lat Vector of latitudes.
#'@param dates Vector or matrix of dates in POSIXct format.
#'@param time Deprecated parameter, it has been substituted by 'dates'. It will
#' be removed in the next release.
#'@param lon_dim A character string indicating the name of the longitudinal
#' dimension. By default, it is set to 'lon'.
#'@param lat_dim A character string indicating the name of the latitudinal
#' dimension. By default, it is set to 'lat'.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'ftime'.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'.
#'@param var_dim A character string indicating the name of the variable
#' dimension. By default, it is set to 'var'.
#'@param neof_max Maximum number of single eofs considered in the first
#' decomposition.
#'@param neof_composed Number of composed eofs to return in output.
#'@param minvar Minimum variance fraction to be explained in first decomposition.
#'@param lon_lim Vector with longitudinal range limits for the calculation for
#' all input variables.
#'@param lat_lim Vector with latitudinal range limits for the calculation for
#' all input variables.
#'@param ncores An integer indicating the number of cores to use for parallel
#' computation. The default value is NULL.
#'@return
#'A list containing:
#'\item{coeff}{
#' An array of principal components with dimensions 'time_dim', 'sdate_dim',
#' number of eof, rest of the dimensions of 'data' except 'lon_dim' and
#' 'lat_dim'.
#'}
#'\item{variance}{
#' An array of explained variances with dimensions 'eof' and the rest of the
#' dimensions of 'data' except 'time_dim', 'sdate_dim', 'lon_dim' and
#' 'lat_dim'.
#'}
#'\item{eof_pattern}{
#' An array of EOF patterns obtained by regression with dimensions: 'eof' and
#' the rest of the dimensions of 'data' except 'time_dim' and 'sdate_dim'.
#'}
#'\item{mask}{
#' An array of the mask with dimensions ('lon_dim', 'lat_dim', rest of the
#' dimensions of 'data' except 'time_dim'). It is made from 'data', 1 for the
#' positions that 'data' has value and NA for the positions that 'data' has NA.
#' It is used to replace NAs with 0s for EOF calculation and mask the result
#' with NAs again after the calculation.
#'}
#'\item{coordinates}{
#' Longitudinal and latitudinal coordinates vectors.
#'}
#'
#'@examples
#'exp <- array(runif(1280)*280, dim = c(dataset = 2, member = 2, sdate = 3,
#' ftime = 3, lat = 4, lon = 4, var = 1))
#'lon <- seq(0, 3)
#'lat <- seq(47, 44)
#'dates <- c("2000-11-01", "2000-12-01", "2001-01-01", "2001-11-01",
#' "2001-12-01", "2002-01-01", "2002-11-01", "2002-12-01", "2003-01-01")
#'Dates <- as.POSIXct(dates, format = "%Y-%m-%d")
#'dim(Dates) <- c(ftime = 3, sdate = 3)
#'cal <- MultiEOF(data = exp, lon = lon, lat = lat, dates = Dates)
#'@import multiApply
#'@export
MultiEOF <- function(data, lon, lat, dates, time = NULL,
lon_dim = "lon", lat_dim = "lat",
time_dim = 'ftime', sdate_dim = 'sdate', var_dim = 'var',
neof_max = 40, neof_composed = 5, minvar = 0.6,
lon_lim = NULL, lat_lim = NULL, ncores = NULL) {
# Check inputs
# data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be a numeric array.")
}
if (any(is.null(names(dim(data))))| any(nchar(names(dim(data))) == 0)) {
stop("Parameter 'data' must have dimension names.")
}
# dates
if (!is.null(time)) {
warning("The parameter 'time' is deprecated, use 'dates' instead.")
dates <- time
}
# lon_dim
if (!is.character(lon_dim) | length(lon_dim) != 1) {
stop("Parameter 'lon_dim' must be a character string.")
}
if (!lon_dim %in% names(dim(data))) {
stop("Parameter 'lon_dim' is not found in 'data' dimension.")
}
# lat_dim
if (!is.character(lat_dim) | length(lat_dim) != 1) {
stop("Parameter 'lat_dim' must be a character string.")
}
if (!lat_dim %in% names(dim(data))) {
stop("Parameter 'lat_dim' is not found in 'data' dimension.")
}
# lon
if (!is.numeric(lon) | length(lon) != dim(data)[lon_dim]) {
stop(paste0("Parameter 'lon' must be a numeric vector with the same ",
"length as the longitude dimension of 'data'."))
}
if (any(lon > 360 | lon < -360)) {
warning("Some 'lon' is out of the range [-360, 360].")
}
# lat
if (!is.numeric(lat) | length(lat) != dim(data)[lat_dim]) {
stop(paste0("Parameter 'lat' must be a numeric vector with the same ",
"length as the latitude dimension of 'data'."))
}
if (any(lat > 90 | lat < -90)) {
stop("Parameter 'lat' must contain values within the range [-90, 90].")
}
# time_dim
if (!is.character(time_dim) | length(time_dim) != 1) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!time_dim %in% names(dim(data))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
# sdate_dim
if (!is.character(sdate_dim) | length(sdate_dim) != 1) {
stop("Parameter 'sdate_dim' must be a character string.")
}
if (!sdate_dim %in% names(dim(data))) {
stop("Parameter 'sdate_dim' is not found in 'data' dimension.")
}
# var_dim
if (!is.character(var_dim) | length(var_dim) != 1) {
stop("Parameter 'var_dim' must be a character string.")
}
if (!var_dim %in% names(dim(data))) {
stop("Parameter 'var_dim' is not found in 'data' dimension.")
}
# neof_max
if (!is.numeric(neof_max)) {
stop("Parameter 'neof_max' must be a positive integer.")
}
# neof_composed
if (!is.numeric(neof_composed)) {
stop("Parameter 'neof_composed' must be a positive integer.")
}
# minvar
if (!is.numeric(minvar)) {
stop("Parameter 'minvar' must be a positive number between 0 and 1.")
}
# lon_lim
if (!is.null(lon_lim)) {
if (!is.numeric(lon_lim)) {
stop("Parameter 'lon_lim' must be numeric.")
}
}
# lat_lim
if (!is.null(lat_lim)) {
if (!is.numeric(lat_lim)) {
stop("Parameter 'lat_lim' must be numeric.")
}
}
# ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | ncores %% 1 != 0 | ncores <= 0 |
length(ncores) > 1) {
stop("Parameter 'ncores' must be a positive integer.")
}
}
# Reorder and group ftime and sdate together at the end in that order
imaskt <- names(dim(data)) %in% time_dim
imasks <- names(dim(data)) %in% sdate_dim
data <- .aperm2(data, c(which(!(imasks | imaskt)),
which(imaskt), which(imasks)))
dims <- dim(data)
ind <- 1:length(which(!(imaskt | imasks)))
# compact (multiply) time_dim dimensions
dim(data) <- c(dims[ind], samples = prod(dims[-ind]))
# Repeatedly apply .multi.eofs
result <- Apply(data = data,
target_dims = c(var_dim, lon_dim, lat_dim, "samples"),
fun = .multi.eofs, lon = lon, lat = lat, dates = dates,
neof_max = neof_max, neof_composed = neof_composed,
minvar = minvar, xlim = lon_lim, ylim = lat_lim,
lon_dim = lon_dim, lat_dim = lat_dim, ncores = ncores)
# Expand back samples to compacted dims
dim(result$coeff) <- c(dims[-ind], dim(result$coeff)[-1])
  # Recover a single copy of the 'lon' and 'lat' vectors: Apply returns one
  # copy per compacted sample, so an index matrix is built to extract only the
  # first copy of each coordinate vector.
  dd <- dim(result[[lon_dim]])[1]
  m <- matrix(1, nrow = dd, ncol = length(dim(result[[lon_dim]])))
  m[1:dd] <- 1:dd
  result[[lon_dim]] <- result[[lon_dim]][m]
  dd <- dim(result[[lat_dim]])[1]
  m <- matrix(1, nrow = dd, ncol = length(dim(result[[lat_dim]])))
  m[1:dd] <- 1:dd
  result[[lat_dim]] <- result[[lat_dim]][m]
return(result)
}
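# Illustrative sketch (not part of the package code above): the 'minvar'
# criterion used in the atomic function below keeps the smallest number of PCs
# whose cumulative explained variance exceeds the threshold. Toy example with
# hypothetical variance fractions:
var_frac_toy <- c(0.35, 0.20, 0.15, 0.10, 0.08)
minvar_toy <- 0.6
req_pc_toy <- which(cumsum(var_frac_toy) > minvar_toy)[1]  # 3 PCs retained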
#'Atomic MultiEOF
#'@param field_arr_raw An array of dimension: (n_field, lon, lat, time).
#' where n_field is the number of variables over which to calculate the
#' multi_eofs.
#'@param neof_composed Number of composed eofs to return in output.
#'@param minvar Minimum variance fraction to be explained in first decomposition.
#'@param neof_max Maximum number of single eofs considered in the first
#' decomposition.
#'@param xlim Vector with longitudinal range limits for the calculation.
#'@param ylim Vector with latitudinal range limits for the calculation.
#'@param lon_dim String with dimension name of longitudinal coordinate.
#'@param lat_dim String with dimension name of latitudinal coordinate.
#'
#'@return A list with elements \code{$coeff} (an array of time-varying principal
#'component coefficients), \code{$variance} (a matrix of explained variances) and
#'\code{$eof_pattern} (a matrix of EOF patterns obtained by regression for each
#'variable).
#'@noRd
.multi.eofs <- function(field_arr_raw, lon, lat, dates, neof_max = 40,
neof_composed = 5, minvar = 0.6, xlim = NULL,
ylim = NULL, lon_dim = "lon", lat_dim = "lat") {
if (exists(".lm.fit")) {
lin.fit <- .lm.fit
} else {
lin.fit <- lm.fit
}
# Dimensions
n_field <- dim(field_arr_raw)[1]
n_lon <- dim(field_arr_raw)[2]
n_lat <- dim(field_arr_raw)[3]
nt <- dim(field_arr_raw)[4]
etime <- .power.date(dates)
field_arr <- array(dim = dim(field_arr_raw))
for (k in seq(1, n_field, 1)) {
field_arr[k, , , ] <- .daily.anom.mean(lon, lat, field_arr_raw[k, , , ], etime)
}
# area weighting, based on the root of cosine
ww <- .area.weight(lon, lat, root = T)
# create a mask
mask_arr <- array(dim = c(n_lon, n_lat, n_field))
for (k in seq(1, n_field, 1)) {
field_orig <- field_arr[k, , , ]
# Check if all the time steps at one grid point are NA-consistent
# The grid point should have all NAs or no NA along time dim.
if (anyNA(field_orig)) {
field_latlon <- array(field_orig, dim = c(n_lon*n_lat, nt)) # [lon*lat, time]
na_ind <- which(is.na(field_latlon), arr.ind = T)
if (dim(na_ind)[1] != nt * length(unique(na_ind[,1]))) {
stop("Detected certain grid points have NAs but not consistent across time ",
"dimension. If the grid point is NA, it should have NA at all time step.")
}
}
# Build the mask
mask <- field_orig[, , 1]
mask[!is.finite(mask)] <- NA
mask[is.finite(mask)] <- 1
dim(mask) <- c(n_lon, n_lat)
mask_arr[,,k] <- mask
# Replace mask of NAs with 0s for EOF analysis.
field_arr[k, , , ][!is.finite(field_orig)] <- 0
field_orig[!is.finite(field_orig)] <- 0
# calculate the area weight
field <- sweep(field_orig, c(1, 2), ww, "*")
idx <- .selbox(lon, lat, xlim, ylim)
slon <- lon[idx$ilon]
slat <- lat[idx$ilat]
field <- field[idx$ilon, idx$ilat, ]
# transform 3D field in a matrix
field <- array(field, dim = c(dim(field)[1] * dim(field)[2], dim(field)[3]))
# calling SVD
SVD <- svd(field, nu = neof_max, nv = neof_max)
# extracting EOFs (loading pattern), expansions coefficient
# and variance explained
pattern <- array(SVD$u, dim = c(dim(field)[1], dim(field)[2], neof_max))
coefficient <- SVD$v
variance <- (SVD$d[1:neof_max]) ^ 2 / sum((SVD$d) ^ 2)
reqPC <- which(cumsum(variance) > minvar)[1]
variance <- variance[1:reqPC]
coefficient <- coefficient[, 1:reqPC]
if (reqPC == 1) {
coefficient <- replicate(1, coefficient)
}
coefficient <- apply(coefficient, c(2), .standardize)
regression <- array(NA, dim = c(length(lon), length(lat), neof_max))
for (i in 1:reqPC) {
regression[, , i] <- apply(field_orig, c(1, 2),
function(x) lin.fit(as.matrix(coefficient[, i],
ncol = 1), x)$coefficients)*mask
}
assign(paste0("pc", k), list(coeff = coefficient, variance = variance,
wcoeff = sweep(coefficient, c(2), variance, "*"),
regression = regression))
}
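  # Combine the variance-weighted PCs of every variable and apply a second SVD
  # to the combined matrix: the resulting composed EOFs describe the
  # co-variability shared by all input fields.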
newpc <- NULL
for (k in seq(1, n_field, 1)) {
newpc <- cbind(newpc, get(paste0("pc", k))$wcoeff)
}
newpc <- t(newpc)
SVD <- svd(newpc, nu = neof_composed, nv = neof_composed)
# extracting EOFs, expansions coefficient and variance explained
coefficient <- SVD$v
variance <- (SVD$d[1:(neof_composed)]) ^ 2 / sum( (SVD$d) ^ 2)
coefficient <- apply(coefficient, c(2), .standardize)
# linear regressions on anomalies
regression <- array(dim = c(n_field, length(lon),
length(lat), neof_composed))
for (k in seq(1, n_field, 1)) {
for (i in 1:neof_composed) {
regression[k, , , i] <- apply(field_arr[k, , , ], c(1, 2),
function(x) lin.fit(as.matrix(coefficient[, i],
ncol = 1), x)$coefficients)*mask_arr[,,k]
}
}
names(dim(coefficient)) <- c("dates", "eof")
variance <- array(variance)
names(dim(variance)) <- "eof"
names(dim(regression)) <- c(names(dim(field_arr_raw))[1:3], "eof")
out <- list(coeff = coefficient, variance = variance, eof_pattern = regression,
mask = mask_arr)
out[[names(n_lon)]] <- slon
out[[names(n_lat)]] <- slat
return(out)
}
# new function to create simple list with date values - Oct-18
# it needs a date or PCICt object, and returns also the season subdivision
.power.date <- function(datas, verbose = FALSE) {
# create a "season" for continuous time, used by persistance tracking
startpoints <- c(0, which(diff(datas) > 1))
deltapoints <- diff(c(startpoints, length(datas)))
seas <- inverse.rle(list(lengths = deltapoints,
values = seq(1, length(startpoints))))
etime <- list(
day = as.numeric(format(datas, "%d")),
month = as.numeric(format(datas, "%m")),
year = as.numeric(format(datas, "%Y")), data = datas, season = seas
)
.printv("Time Array Built", verbose)
.printv(paste("Length:", length(seas)), verbose)
.printv(paste("From", datas[1], "to", datas[length(seas)]), verbose)
return(etime)
}
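# Usage sketch (not run) of .power.date(), assuming daily dates from two
# separate winters; the dates below are illustrative only.
# dd <- c(seq(as.Date("2000-12-01"), as.Date("2000-12-05"), by = "day"),
#         seq(as.Date("2001-12-01"), as.Date("2001-12-05"), by = "day"))
# et <- .power.date(dd)
# et$season  # 1 1 1 1 1 2 2 2 2 2: each contiguous block of dates is a "season"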
# function for daily anomalies, use array predeclaration and rowMeans (40 times faster!)
.daily.anom.mean <- function(ics, ipsilon, field, etime) {
condition <- paste(etime$day, etime$month)
daily <- array(NA, dim = c(length(ics), length(ipsilon),
length(unique(condition))))
anom <- field * NA
for (t in unique(condition)) {
if (sum(t == condition) == 1) {
print(paste0("Cannot compute a mean with only one value: ",
"using climatological mean"))
anom[, , which(t == condition)] <- rowMeans(field, dims = 2)
} else {
daily[, , which(t == unique(condition))] <- rowMeans(field[, , t == condition], dims = 2)
anom[, , which(t == condition)] <- sweep(field[, , which(t == condition)],
c(1, 2),
daily[, , which(t == unique(condition))], "-")
}
}
return(anom)
}
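# Usage sketch (not run) of .daily.anom.mean(), assuming a small synthetic
# [lon, lat, time] field and the dates of the previous sketch; values are
# illustrative only.
# dd <- c(seq(as.Date("2000-12-01"), as.Date("2000-12-05"), by = "day"),
#         seq(as.Date("2001-12-01"), as.Date("2001-12-05"), by = "day"))
# fld <- array(rnorm(2 * 3 * 10), dim = c(2, 3, 10))
# anom <- .daily.anom.mean(1:2, 1:3, fld, .power.date(dd))
# dim(anom)  # 2 3 10: anomalies with respect to each calendar day's mean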
# ---- End of file: R/CST_MultiEOF.R ----
#'Multiple Metrics applied in Multiple Model Anomalies
#'
#'@author Mishra Niti, \email{[email protected]}
#'@author Perez-Zanon Nuria, \email{[email protected]}
#'@description This function calculates correlation (Anomaly Correlation
#'Coefficient; ACC), root mean square error (RMS) and the root mean square error
#'skill score (RMSSS) of individual anomaly models and multi-models mean (if
#'desired) with the observations.
#'
#'@param exp An object of class \code{s2dv_cube} as returned by
#' \code{CST_Anomaly} function, containing the anomaly of the seasonal forecast
#' experiments data in the element named \code{$data}.
#'@param obs An object of class \code{s2dv_cube} as returned by
#' \code{CST_Anomaly} function, containing the anomaly of observed data in the
#' element named \code{$data}.
#'@param metric A character string giving the metric for computing the maximum
#' skill. This must be one of the strings 'correlation', 'rms', 'rmsss' or
#' 'rpss'. If 'rpss' is chosen, the tercile probabilities are evaluated.
#'@param multimodel A logical value indicating whether a Multi-Model Mean should
#' be computed.
#'@param time_dim Name of the temporal dimension where a mean will be applied.
#' It can be NULL, the default value is 'ftime'.
#'@param memb_dim Name of the member dimension. It can be NULL, the default
#' value is 'member'.
#'@param sdate_dim Name of the start date dimension or a dimension name
#' identifying the different forecasts. It can be NULL, the default value is
#' 'sdate'.
#'@return An object of class \code{s2dv_cube} containing the statistics of the
#'selected metric in the element \code{$data}, which is a list of arrays: one for
#'the metric requested and others for statistics about its significance. The arrays
#'have two dataset dimensions equal to the 'dataset' dimension in the
#'\code{exp$data} and \code{obs$data} inputs. If \code{multimodel} is TRUE, the
#'first position in the first 'nexp' dimension corresponds to the Multi-Model Mean.
#'@seealso \code{\link[s2dv]{Corr}}, \code{\link[s2dv]{RMS}},
#'\code{\link[s2dv]{RMSSS}} and \code{\link{CST_Load}}
#'@references Mishra, N., Prodhomme, C., & Guemas, V. (n.d.). Multi-Model Skill
#'Assessment of Seasonal Temperature and Precipitation Forecasts over Europe,
#'29-31. \doi{10.1007/s00382-018-4404-z}
#'
#'@importFrom s2dv MeanDims Reorder Corr RMS RMSSS InsertDim
#'@import abind
#'@importFrom easyVerification climFairRpss veriApply
#'@import stats
#'@import multiApply
#'@examples
#'mod <- rnorm(2*2*4*5*2*2)
#'dim(mod) <- c(dataset = 2, member = 2, sdate = 4, ftime = 5, lat = 2, lon = 2)
#'obs <- rnorm(1*1*4*5*2*2)
#'dim(obs) <- c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 2, lon = 2)
#'lon <- seq(0, 30, 5)
#'lat <- seq(0, 25, 5)
#'coords <- list(lat = lat, lon = lon)
#'exp <- list(data = mod, coords = coords)
#'obs <- list(data = obs, coords = coords)
#'attr(exp, 'class') <- 's2dv_cube'
#'attr(obs, 'class') <- 's2dv_cube'
#'a <- CST_MultiMetric(exp = exp, obs = obs)
#'@export
CST_MultiMetric <- function(exp, obs, metric = "correlation", multimodel = TRUE,
time_dim = 'ftime', memb_dim = 'member',
sdate_dim = 'sdate') {
# Check 's2dv_cube'
if (!inherits(exp, 's2dv_cube') || !inherits(obs, 's2dv_cube')) {
stop("Parameter 'exp' and 'obs' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
result <- MultiMetric(exp$data, obs$data, metric = metric, multimodel = multimodel,
time_dim = time_dim, memb_dim = memb_dim, sdate_dim = sdate_dim)
exp$data <- result
exp$attrs$Datasets <- c(exp$attrs$Datasets, obs$attrs$Datasets)
exp$attrs$source_files <- c(exp$attrs$source_files, obs$attrs$source_files)
return(exp)
}
#'Multiple Metrics applied in Multiple Model Anomalies
#'
#'@author Mishra Niti, \email{[email protected]}
#'@author Perez-Zanon Nuria, \email{[email protected]}
#'@description This function calculates correlation (Anomaly Correlation
#'Coefficient; ACC), root mean square error (RMS) and the root mean square error
#'skill score (RMSSS) of individual anomaly models and multi-models mean (if
#'desired) with the observations on arrays with named dimensions.
#'
#'@param exp A multidimensional array with named dimensions.
#'@param obs A multidimensional array with named dimensions.
#'@param metric A character string giving the metric for computing the maximum
#' skill. This must be one of the strings 'correlation', 'rms' or 'rmsss'.
#'@param multimodel A logical value indicating whether a Multi-Model Mean should
#' be computed.
#'@param time_dim Name of the temporal dimension where a mean will be applied.
#' It can be NULL, the default value is 'ftime'.
#'@param memb_dim Name of the member dimension. It can be NULL, the default
#' value is 'member'.
#'@param sdate_dim Name of the start date dimension or a dimension name
#' identifying the different forecasts. It can be NULL, the default value is
#' 'sdate'.
#'@return A list of arrays containing the statistics of the selected metric: one
#'for the metric requested and others for statistics about its significance. The
#'arrays have two dataset dimensions equal to the 'dataset' dimension in the
#'\code{exp$data} and \code{obs$data} inputs. If \code{multimodel} is TRUE, the
#'first position in the first dimension corresponds to the Multi-Model Mean.
#'@seealso \code{\link[s2dv]{Corr}}, \code{\link[s2dv]{RMS}},
#'\code{\link[s2dv]{RMSSS}} and \code{\link{CST_Load}}
#'@references Mishra, N., Prodhomme, C., & Guemas, V. (n.d.). Multi-Model Skill
#'Assessment of Seasonal Temperature and Precipitation Forecasts over Europe,
#'29-31. \doi{10.1007/s00382-018-4404-z}
#'
#'@importFrom s2dv MeanDims Reorder Corr RMS RMSSS InsertDim
#'@import abind
#'@importFrom easyVerification climFairRpss veriApply
#'@import stats
#'@import multiApply
#'@examples
#'exp <- array(rnorm(2*2*4*5*2*2),
#' dim = c(dataset = 2, member = 2, sdate = 4, ftime = 5, lat = 2,
#' lon = 2))
#'obs <- array(rnorm(1*1*4*5*2*2),
#' dim = c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 2,
#' lon = 2))
#'res <- MultiMetric(exp = exp, obs = obs)
#'@export
MultiMetric <- function(exp, obs, metric = "correlation", multimodel = TRUE,
time_dim = 'ftime', memb_dim = 'member',
sdate_dim = 'sdate') {
if (!is.null(names(dim(exp))) & !is.null(names(dim(obs)))) {
if (all(names(dim(exp)) %in% names(dim(obs)))) {
dimnames <- names(dim(exp))
} else {
stop("Dimension names of element 'data' from parameters 'exp'",
" and 'obs' should have the same name dimmension.")
}
} else {
stop("Element 'data' from parameters 'exp' and 'obs'",
" should have dimension names.")
}
if (!is.logical(multimodel)) {
stop("Parameter 'multimodel' must be a logical value.")
}
if (length(multimodel) > 1) {
multimodel <- multimodel[1]
warning("Parameter 'multimodel' has length > 1 and only the first ",
"element will be used.")
}
if (length(metric) > 1) {
metric <- metric[1]
warning("Parameter 'multimodel' has length > 1 and only the first ",
"element will be used.")
}
if (is.null(time_dim) | !is.character(time_dim)) {
time_dim <- 'time'
}
if (is.null(memb_dim) | !is.character(memb_dim)) {
memb_dim <- 'memb'
}
if( is.null(sdate_dim) | !is.character(sdate_dim)) {
sdate_dim <- 'sdate'
}
exp_dims <- dim(exp)
obs_dims <- dim(obs)
if (!is.null(names(exp_dims)) & !is.null(names(obs_dims))) {
if (all(names(exp_dims) == names(obs_dims))) {
if (!(time_dim %in% names(exp_dims))) {
warning("Parameter 'time_dim' does not match with a dimension name in 'exp'",
" and 'obs'. A 'time_dim' of length 1 is added.")
dim(exp) <- c(exp_dims, time_dim = 1)
names(dim(exp))[length(dim(exp))] <- time_dim
dim(obs) <- c(obs_dims, time_dim = 1)
names(dim(obs))[length(dim(obs))] <- time_dim
exp_dims <- dim(exp)
obs_dims <- dim(obs)
}
if (!(memb_dim %in% names(exp_dims))) {
warning("Parameter 'memb_dim' does not match with a dimension name in ",
"'exp' and 'obs'. A 'memb_dim' of length 1 is added.")
dim(exp) <- c(exp_dims, memb_dim = 1)
names(dim(exp))[length(dim(exp))] <- memb_dim
dim(obs) <- c(obs_dims, memb_dim = 1)
names(dim(obs))[length(dim(obs))] <- memb_dim
exp_dims <- dim(exp)
obs_dims <- dim(obs)
}
if (!(sdate_dim %in% names(exp_dims))) {
warning("Parameter 'sdate_dim' does not match with a dimension name in ",
"'exp' and 'obs'. A 'sdate_dim' of length 1 is added.")
dim(exp) <- c(exp_dims, sdate_dim = 1)
names(dim(exp))[length(dim(exp))] <- sdate_dim
dim(obs) <- c(obs_dims, sdate_dim = 1)
names(dim(obs))[length(dim(obs))] <- sdate_dim
exp_dims <- dim(exp)
obs_dims <- dim(obs)
}
} else {
stop("Dimension names of element 'data' from parameters 'exp'",
" and 'obs' should be the same and in the same order.")
}
} else {
stop("Element 'data' from parameters 'exp' and 'obs'",
" should have dimmension names.")
}
if (metric == 'rpss') {
if (multimodel == TRUE) {
warning("A probabilistic metric cannot be use to evaluate a multimodel mean.")
}
AvgExp <- MeanDims(exp, time_dim, na.rm = TRUE)
AvgObs <- MeanDims(obs, time_dim, na.rm = TRUE)
dif_dims <- which(dim(AvgExp) != dim(AvgObs))
dif_dims <- names(dif_dims[-which(names(dif_dims) == memb_dim)])
lapply(dif_dims, function(x) {
names(dim(AvgExp))[which(names(dim(AvgExp)) == x)] <<- paste0(dif_dims, '_exp')})
pos_memb <- which(names(dim(AvgExp)) == memb_dim)
dim(AvgObs) <- dim(AvgObs)[-pos_memb]
AvgExp <- Reorder(AvgExp, c(names(dim(AvgExp))[-pos_memb], memb_dim))
pos_memb <- which(names(dim(AvgExp)) == memb_dim)
pos_sdate <- which(names(dim(AvgExp)) == sdate_dim)
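    # Compute the Fair RPSS with tercile thresholds (prob = 1/3, 2/3) at each
    # grid point, using 'member' as the ensemble dimension and 'sdate' as the
    # verification time dimension.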
corr <- Apply(list(AvgExp, AvgObs),
target_dims = list(c(sdate_dim, 'lat', 'lon', memb_dim), c(sdate_dim, 'lat', 'lon')),
fun = function(x, y) {
veriApply('FairRpss', fcst = x, obs = y,
ensdim = which(names(dim(x)) == 'member'),
tdim = which(names(dim(x)) == 'sdate'),
prob = c(1/3, 2/3))})
} else if (metric %in% c('correlation', 'rms', 'rmsss')) {
AvgExp <- MeanDims(exp, c(memb_dim, time_dim), na.rm = TRUE)
AvgObs <- MeanDims(obs, c(memb_dim, time_dim), na.rm = TRUE)
    dataset_dim <- c('data', 'dataset', 'datasets', 'models')
if (any(dataset_dim %in% names(exp_dims))) {
dataset_dim <- dataset_dim[dataset_dim %in% names(dim(AvgExp))]
} else {
warning("Parameter 'exp' and 'obs' does not have a dimension 'dataset'.")
}
if (multimodel == TRUE) {
# seasonal avg of anomalies for multi-model
AvgExp_MMM <- MeanDims(AvgExp, c(dataset_dim), na.rm = TRUE)
pos_dataset <- which(names(dim(AvgExp)) == dataset_dim)
AvgExp_MMM <- s2dv::InsertDim(AvgExp_MMM, posdim = pos_dataset, lendim = 1,
name = dataset_dim)
AvgExp <- abind(AvgExp_MMM, AvgExp, along = pos_dataset)
names(dim(AvgExp)) <- names(dim(AvgExp_MMM))
}
if (metric == 'correlation') {
corr <- s2dv::Corr(AvgExp, AvgObs, dat_dim = dataset_dim, time_dim = sdate_dim)
} else if (metric == 'rms') {
corr <- s2dv::RMS(AvgExp, AvgObs, dat_dim = dataset_dim, time_dim = sdate_dim)
} else if (metric == 'rmsss') {
corr <- s2dv::RMSSS(AvgExp, AvgObs, dat_dim = dataset_dim, time_dim = sdate_dim)
}
} else {
stop("Parameter 'metric' must be a character string indicating ",
"one of the options: 'correlation', 'rms', 'rmsss' or 'rpss'.")
}
return(corr)
}
# ---- End of file: R/CST_MultiMetric.R ----
#'Multivariate Root Mean Square Error (RMSE)
#'
#'@author Deborah Verfaillie, \email{[email protected]}
#'@description This function calculates the RMSE from multiple variables, as the
#'mean of each variable's RMSE scaled by its observed standard deviation.
#'Variables can be weighted based on their relative importance (defined by the
#'user).
#'
#'@param exp A list of objects, one for each variable, of class \code{s2dv_cube}
#' as returned by \code{CST_Anomaly} function, containing the anomaly of the
#' seasonal forecast experiment data in the element named \code{$data}.
#'@param obs A list of objects, one for each variable (in the same order than
#' the input in 'exp') of class \code{s2dv_cube} as returned by
#' \code{CST_Anomaly} function, containing the observed anomaly data in the
#' element named \code{$data}.
#'@param weight (optional) A vector of weight values to assign to each variable.
#' If no weights are defined, a value of 1 is assigned to every variable.
#'@param memb_dim A character string indicating the name of the member
#' dimension. It must be one dimension in 'exp' and 'obs'. The default value is
#' 'member'.
#'@param dat_dim A character string indicating the name of the dataset
#' dimension. It must be one dimension in 'exp' and 'obs'. If there is no
#' dataset dimension, it can be NULL. The default value is 'dataset'.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. It must be one dimension in 'exp' and 'obs'. The default value is
#' 'sdate'.
#'@param ftime_dim A character string indicating the name of the forecast time
#' dimension. It must be one dimension in 'exp' and 'obs'. The default value is
#' 'ftime'.
#'
#'@return An object of class \code{s2dv_cube} containing the RMSE in the element
#' \code{$data}, which is an array with two dataset dimensions equal to the
#' 'dataset' dimension in the \code{exp$data} and \code{obs$data} inputs, i.e.
#' an array with dimensions: c(number of exp, number of obs, 1 (the multivariate
#' RMSE value), number of lat, number of lon).
#'
#'@seealso \code{\link[s2dv]{RMS}} and \code{\link{CST_Load}}
#'@examples
#'# Example with 2 variables
#'mod1 <- abs(rnorm(1 * 3 * 4 * 5 * 6 * 7))
#'mod2 <- abs(rnorm(1 * 3 * 4 * 5 * 6 * 7))
#'dim(mod1) <- c(dataset = 1, member = 3, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'dim(mod2) <- c(dataset = 1, member = 3, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'obs1 <- abs(rnorm(1 * 1 * 4 * 5 * 6 * 7))
#'obs2 <- abs(rnorm(1 * 1 * 4 * 5 * 6 * 7))
#'dim(obs1) <- c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'dim(obs2) <- c(dataset = 1, member = 1, sdate = 4, ftime = 5, lat = 6, lon = 7)
#'lon <- seq(0, 30, 5)
#'lat <- seq(0, 25, 5)
#'coords <- list(lat = lat, lon = lon)
#'exp1 <- list(data = mod1, coords = coords,
#' attrs = list(Datasets = "EXP1", source_files = "file1",
#' Variable = list(varName = 'pre')))
#'exp2 <- list(data = mod2, coords = coords,
#' attrs = list(Datasets = "EXP2", source_files = "file2",
#' Variable = list(varName = 'tas')))
#'obs1 <- list(data = obs1, coords = coords,
#' attrs = list(Datasets = "OBS1", source_files = "file1",
#' Variable = list(varName = 'pre')))
#'obs2 <- list(data = obs2, coords = coords,
#' attrs = list(Datasets = "OBS2", source_files = "file2",
#' Variable = list(varName = 'tas')))
#'attr(exp1, 'class') <- 's2dv_cube'
#'attr(exp2, 'class') <- 's2dv_cube'
#'attr(obs1, 'class') <- 's2dv_cube'
#'attr(obs2, 'class') <- 's2dv_cube'
#'anom1 <- CST_Anomaly(exp1, obs1, cross = TRUE, memb = TRUE)
#'anom2 <- CST_Anomaly(exp2, obs2, cross = TRUE, memb = TRUE)
#'ano_exp <- list(anom1$exp, anom2$exp)
#'ano_obs <- list(anom1$obs, anom2$obs)
#'a <- CST_MultivarRMSE(exp = ano_exp, obs = ano_obs, weight = c(1, 2))
#'@importFrom s2dv RMS MeanDims
#'@export
CST_MultivarRMSE <- function(exp, obs, weight = NULL, memb_dim = 'member',
dat_dim = 'dataset', sdate_dim = 'sdate',
ftime_dim = 'ftime') {
# s2dv_cube
if (!is.list(exp) | !is.list(obs)) {
stop("Parameters 'exp' and 'obs' must be lists of 's2dv_cube' objects")
}
if (!(all(sapply(exp, inherits, 's2dv_cube')))) {
stop("Elements of the list in parameter 'exp' must be of the class ",
"'s2dv_cube', as output by CSTools::CST_Load.")
}
if (!(all(sapply(obs, inherits, 's2dv_cube')))) {
stop("Elements of the list in parameter 'obs' must be of the class ",
"'s2dv_cube', as output by CSTools::CST_Load.")
}
if (length(exp) != length(obs)) {
stop("Parameters 'exp' and 'obs' must be of the same length.")
}
nvar <- length(exp)
if (nvar < 2) {
stop("Parameters 'exp' and 'obs' must contain at least two",
" s2dv objects for two different variables.")
}
for (j in 1 : nvar) {
if (!is.null(names(dim(exp[[j]]$data))) & !is.null(names(dim(obs[[j]]$data)))) {
if (all(names(dim(exp[[j]]$data)) %in% names(dim(obs[[j]]$data)))) {
dimnames <- names(dim(exp[[j]]$data))
} else {
stop("Dimension names of element 'data' from parameters 'exp'",
" and 'obs' should be equal.")
}
} else {
stop("Element 'data' from parameters 'exp' and 'obs'",
" should have dimmension names.")
}
}
# weight
if (is.null(weight)) {
weight <- c(rep(1, nvar))
} else if (!is.numeric(weight)) {
stop("Parameter 'weight' must be numeric.")
} else if (length(weight) != nvar){
stop("Parameter 'weight' must have a length equal to the number ",
"of variables.")
}
# memb_dim
if (!is.null(memb_dim)) {
if (!is.character(memb_dim)) {
stop("Parameter 'memb_dim' must be a character string.")
}
if (!memb_dim %in% names(dim(exp[[1]]$data)) | !memb_dim %in% names(dim(obs[[1]]$data))) {
stop("Parameter 'memb_dim' is not found in 'exp' or in 'obs' dimension.")
}
} else {
stop("Parameter 'memb_dim' cannot be NULL.")
}
# dat_dim
if (!is.null(dat_dim)) {
if (!is.character(dat_dim)) {
stop("Parameter 'dat_dim' must be a character string.")
}
if (!dat_dim %in% names(dim(exp[[1]]$data)) | !dat_dim %in% names(dim(obs[[1]]$data))) {
stop("Parameter 'dat_dim' is not found in 'exp' or in 'obs' dimension.")
}
}
# ftime_dim
if (!is.null(ftime_dim)) {
if (!is.character(ftime_dim)) {
stop("Parameter 'ftime_dim' must be a character string.")
}
if (!ftime_dim %in% names(dim(exp[[1]]$data)) | !ftime_dim %in% names(dim(obs[[1]]$data))) {
stop("Parameter 'ftime_dim' is not found in 'exp' or in 'obs' dimension.")
}
} else {
stop("Parameter 'ftime_dim' cannot be NULL.")
}
# sdate_dim
if (!is.null(sdate_dim)) {
if (!is.character(sdate_dim)) {
stop("Parameter 'sdate_dim' must be a character string.")
}
if (!sdate_dim %in% names(dim(exp[[1]]$data)) | !sdate_dim %in% names(dim(obs[[1]]$data))) {
stop("Parameter 'sdate_dim' is not found in 'exp' or in 'obs' dimension.")
}
} else {
stop("Parameter 'sdate_dim' cannot be NULL.")
}
# Variables
  obs_var <- unlist(lapply(obs, function(x) {
    x$attrs$Variable$varName}))
  exp_var <- unlist(lapply(exp, function(x) {
    x$attrs$Variable$varName}))
  if (any(exp_var != obs_var)) {
    stop("Variables in parameters 'exp' and 'obs' must be in the same order.")
  }
mvrmse <- 0
sumweights <- 0
for (j in 1 : nvar) {
# seasonal average of anomalies
AvgExp <- MeanDims(exp[[j]]$data, c(memb_dim, ftime_dim), na.rm = TRUE)
AvgObs <- MeanDims(obs[[j]]$data, c(memb_dim, ftime_dim), na.rm = TRUE)
# multivariate RMSE (weighted)
rmse <- RMS(AvgExp, AvgObs, dat_dim = dat_dim, time_dim = sdate_dim,
conf = FALSE)$rms
stdev <- sd(AvgObs)
mvrmse <- mvrmse + (rmse / stdev * as.numeric(weight[j]))
sumweights <- sumweights + as.numeric(weight[j])
}
mvrmse <- mvrmse / sumweights
# names(dim(mvrmse)) <- c(dimnames[1], dimnames[1], 'statistics', dimnames[5 : 6])
exp_Datasets <- unlist(lapply(exp, function(x) {
x$attrs[[which(names(x$attrs) == 'Datasets')]]}))
exp_source_files <- unlist(lapply(exp, function(x) {
x$attrs[[which(names(x$attrs) == 'source_files')]]}))
obs_Datasets <- unlist(lapply(obs, function(x) {
x$attrs[[which(names(x$attrs) == 'Datasets')]]}))
obs_source_files <- unlist(lapply(obs, function(x) {
x$attrs[[which(names(x$attrs) == 'source_files')]]}))
exp1 <- exp[[1]]
exp1$data <- mvrmse
exp1$attrs$Datasets <- c(exp_Datasets, obs_Datasets)
exp1$attrs$source_files <- c(exp_source_files, obs_source_files)
exp1$attrs$Variable$varName <- as.character(exp_var)
exp1$attrs$Variable$metadata <- c(exp1$attrs$Variable$metadata, exp[[2]]$attrs$Variable$metadata)
return(exp1)
}
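# Minimal numeric sketch (not run) of the weighting applied in CST_MultivarRMSE,
# assuming two variables with hypothetical scores; it only illustrates the
# formula, not the full function.
# rmse  <- c(1.2, 0.8)            # RMSE of each variable
# stdev <- c(2.0, 0.5)            # standard deviation of the corresponding obs
# w     <- c(1, 2)                # user-defined weights
# sum(w * rmse / stdev) / sum(w)  # multivariate RMSE ~ 1.267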
# ---- End of file: R/CST_MultivarRMSE.R ----
#'@rdname CST_ProxiesAttractor
#'@title Computing two dynamical proxies of the attractor in 's2dv_cube'.
#'
#'@author Carmen Alvarez-Castro, \email{[email protected]}
#'@author Maria M. Chaves-Montero, \email{[email protected]}
#'@author Veronica Torralba, \email{[email protected]}
#'@author Davide Faranda, \email{[email protected]}
#'
#'@description This function computes two dynamical proxies of the attractor:
#'The local dimension (d) and the inverse of the persistence (theta) for an
#''s2dv_cube' object.
#'These two parameters will be used as a condition for the computation of
#'dynamical scores to measure predictability and to compute bias correction
#'conditioned by the dynamics with the function DynBiasCorrection. This
#'function is based on the MATLAB code ([email protected]) used in:
#'@references Faranda, D., Alvarez-Castro, M.C., Messori, G., Rodriguez, D.,
#'and Yiou, P. (2019). The hammam effect or how a warm ocean enhances large
#'scale atmospheric predictability. Nature Communications, 10(1), 1316.
#'\doi{10.1038/s41467-019-09305-8}
#'@references Faranda, D., Gabriele Messori and Pascal Yiou. (2017).
#'Dynamical proxies of North Atlantic predictability and extremes.
#'Scientific Reports, 7-41278, 2017.
#'
#'@param data An 's2dv_cube' object with the data to create the attractor. Must
#' be a matrix with the time steps in rows and the grid points in columns, i.e.
#' dim(data) = c(time, grid).
#'@param quanti A number lower than 1 indicating the quantile to perform the
#' computation of local dimension and theta.
#'@param ncores The number of cores to use in parallel computation.
#'@return The local dimension 'dim' and the inverse of the persistence 'theta'.
#'@examples
#'# Example 1: Computing the attractor using simple s2dv data
#'obs <- rnorm(2 * 3 * 4 * 8 * 8)
#'dim(obs) <- c(dataset = 1, member = 2, sdate = 3, ftime = 4, lat = 8, lon = 8)
#'lon <- seq(10, 13.5, 0.5)
#'lat <- seq(40, 43.5, 0.5)
#'coords <- list(lon = lon, lat = lat)
#'data <- list(data = obs, coords = coords)
#'class(data) <- "s2dv_cube"
#'attractor <- CST_ProxiesAttractor(data = data, quanti = 0.6)
#'@import multiApply
#'@export
CST_ProxiesAttractor <- function(data, quanti, ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
# Check quanti
if (is.null(quanti)) {
stop("Parameter 'quanti' cannot be NULL.")
}
data$data <- ProxiesAttractor(data = data$data, quanti = quanti,
ncores = ncores)
return(data)
}
#'@rdname ProxiesAttractor
#'
#'@title Computing two dynamical proxies of the attractor.
#'@author Carmen Alvarez-Castro, \email{[email protected]}
#'@author Maria M. Chaves-Montero, \email{[email protected]}
#'@author Veronica Torralba, \email{[email protected]}
#'@author Davide Faranda, \email{[email protected]}
#'
#'@description This function computes two dynamical proxies of the attractor:
#'The local dimension (d) and the inverse of the persistence (theta).
#'These two parameters will be used as a condition for the computation of dynamical
#'scores to measure predictability and to compute bias correction conditioned by
#'the dynamics with the function DynBiasCorrection.
#'Function based on the MATLAB code ([email protected]) used in:
#'@references Faranda, D., Alvarez-Castro, M.C., Messori, G., Rodriguez, D., and
#'Yiou, P. (2019). The hammam effect or how a warm ocean enhances large scale
#'atmospheric predictability. Nature Communications, 10(1), 1316.
#'\doi{10.1038/s41467-019-09305-8}
#'@references Faranda, D., Gabriele Messori and Pascal Yiou. (2017).
#' Dynamical proxies of North Atlantic predictability and extremes.
#' Scientific Reports, 7-41278, 2017.
#'
#'@param data A multidimensional array with named dimensions to create the
#' attractor. It requires a temporal dimension named 'time' and spatial
#' dimensions called 'lat' and 'lon', or 'latitude' and 'longitude' or 'grid'.
#'@param quanti A number lower than 1 indicating the quantile to perform the
#' computation of local dimension and theta.
#'@param ncores The number of cores to use in parallel computation.
#'
#'@return The local dimension 'dim' and the inverse of the persistence 'theta'.
#'
#'@examples
#'# Example 1: Computing the attractor using simple data
#'# Creating an example of matrix data(time,grids):
#'mat <- array(rnorm(36 * 40), c(time = 36, grid = 40))
#'qm <- 0.90 # imposing a threshold
#'Attractor <- ProxiesAttractor(data = mat, quanti = qm)
#'# to plot the result
#'time = c(1:length(Attractor$theta))
#'plot(time, Attractor$dim, xlab = 'time', ylab = 'd',
#' main = 'local dimension', type = 'l')
#'@import multiApply
#'@export
ProxiesAttractor <- function(data, quanti, ncores = NULL){
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (is.null(quanti)) {
stop("Parameter 'quanti' is mandatory")
}
if (any(names(dim(data)) %in% 'sdate')) {
if (any(names(dim(data)) %in% 'ftime')) {
data <- MergeDims(data, merge_dims = c('ftime', 'sdate'),
rename_dim = 'time')
}
}
if (!(any(names(dim(data)) %in% 'time'))){
stop("Parameter 'data' must have a temporal dimension named 'time'.")
}
if (any(names(dim(data)) %in% 'lat')) {
if (any(names(dim(data)) %in% 'lon')) {
data <- MergeDims(data, merge_dims = c('lon', 'lat'),
rename_dim = 'grid')
}
}
if (any(names(dim(data)) %in% 'latitude')) {
if (any(names(dim(data)) %in% 'longitude')) {
data <- MergeDims(data, merge_dims = c('longitude', 'latitude'),
rename_dim = 'grid')
}
}
if(!(any(names(dim(data)) %in% 'grid'))){
stop("Parameter 'data' must have a spatial dimension named 'grid'.")
}
attractor <- Apply(data, target_dims = c('time', 'grid'),
fun = .proxiesattractor,
quanti = quanti , ncores = ncores)
# rename dimensions
attractor <- lapply(attractor,
FUN = function(x, dimname){
names(dim(x))[dimname] <- 'time'
return(x)},
dimname = which(names(dim(attractor[[1]])) == 'dim2'))
return(list(dim = attractor$dim, theta = attractor$theta))
}
.proxiesattractor <- function(data, quanti) {
# expected dimensions data: time and grid
logdista <- Apply(data, target_dims = 'grid',
fun = function(x, y){
-log(colMeans((y - as.vector(x))^2))},
y = t(data))[[1]]
#Computation of theta
Theta <- function(logdista, quanti){
    # Compute the threshold corresponding to the quantile
    thresh <- quantile(logdista, quanti, na.rm = TRUE)
    logdista[is.infinite(logdista)] <- NaN
    Li <- which(as.vector(logdista) > as.numeric(thresh))
    # Length of each cluster
    Ti <- diff(Li)
    N <- length(Ti)
    q <- 1 - quanti
    Si <- Ti - 1
    Nc <- length(which(Si > 0))
theta <- (sum(q * Si) + N + Nc - sqrt(((sum(q * Si) + N + Nc)^2) -
8 * Nc * sum(q * Si))) / (2 * sum(q * Si))
#Sort the exceedances
logdista <- sort(logdista)
#Find all the Peaks over Thresholds.
findidx <- which(as.vector(logdista) > as.numeric(thresh))
if(length(findidx) < 1) {
stop("Parameter 'quanti' is too high for the length of the data provided.")
}
logextr <- logdista[findidx[[1]]:(length(logdista) - 1)]
#The inverse of the dimension is just the average of the exceedances
dim <- 1 /mean(as.numeric(logextr) - as.numeric(thresh))
return(list(dim = dim, theta = theta))
}
names(dim(logdista)) <- c('dim1', 'dim2')
proxies <- Apply(data = list(logdista = logdista),
target_dims = list('dim1'), fun = Theta, quanti = quanti)
return(list(dim = proxies$dim, theta = proxies$theta))
}
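# Usage sketch (not run) of the atomic function, assuming a small synthetic
# [time, grid] field; the quantile value is illustrative only.
# dat <- array(rnorm(100 * 20), dim = c(time = 100, grid = 20))
# res <- .proxiesattractor(dat, quanti = 0.9)
# str(res)  # list with the local dimension ($dim) and inverse persistence ($theta)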
# ---- End of file: R/CST_ProxiesAttractor.R ----
#'Quantile Mapping for seasonal or decadal forecast data
#'
#'@description This function is a wrapper of fitQmap and doQmap from package
#''qmap' to be applied on the object of class 's2dv_cube'. The quantile mapping
#'adjustment between an experiment, typically a hindcast, and observation is
#'applied to the experiment itself or to a provided forecast.
#'@author Nuria Perez-Zanon, \email{[email protected]}
#'@param exp An object of class \code{s2dv_cube}.
#'@param obs An object of class \code{s2dv_cube}.
#'@param exp_cor An object of class \code{s2dv_cube} in which the quantile
#' mapping correction should be applied. If it is not specified, the correction
#' is applied in object 'exp'.
#'@param sdate_dim A character string indicating the dimension name in which
#' cross-validation would be applied when exp_cor is not provided. 'sdate' by
#' default.
#'@param memb_dim A character string indicating the dimension name where
#' ensemble members are stored in the experimental arrays. It can be NULL if
#' there is no ensemble member dimension. It is set as 'member' by default.
#'@param window_dim A character string indicating the dimension name where
#' samples have been stored. It can be NULL (default) in case all samples are
#' used.
#'@param method A character string indicating the method to be used: 'PTF',
#' 'DIST', 'RQUANT', 'QUANT', 'SSPLIN'. By default, the empirical quantile
#' mapping 'QUANT' is used.
#'@param na.rm A logical value indicating if missing values should be removed
#' (FALSE by default).
#'@param ncores An integer indicating the number of cores for parallel
#' computation using multiApply function. The default value is NULL (1).
#'@param ... Additional parameters to be used by the method chosen. See qmap
#' package for details.
#'
#'@return An object of class \code{s2dv_cube} containing the experimental data
#'after applying the quantile mapping correction.
#'
#'@seealso \code{\link[qmap]{fitQmap}} and \code{\link[qmap]{doQmap}}
#'@examples
#'# Use synthetic data
#'exp <- NULL
#'exp$data <- 1 : c(1 * 3 * 5 * 4 * 3 * 2)
#'dim(exp$data) <- c(dataset = 1, member = 3, sdate = 5, ftime = 4,
#' lat = 3, lon = 2)
#'class(exp) <- 's2dv_cube'
#'obs <- NULL
#'obs$data <- 101 : c(100 + 1 * 1 * 5 * 4 * 3 * 2)
#'dim(obs$data) <- c(dataset = 1, member = 1, sdate = 5, ftime = 4,
#' lat = 3, lon = 2)
#'class(obs) <- 's2dv_cube'
#'res <- CST_QuantileMapping(exp, obs)
#'
#'@import qmap
#'@import multiApply
#'@import s2dv
#'@export
CST_QuantileMapping <- function(exp, obs, exp_cor = NULL, sdate_dim = 'sdate',
memb_dim = 'member', window_dim = NULL,
method = 'QUANT', na.rm = FALSE,
ncores = NULL, ...) {
# Check 's2dv_cube'
if (!inherits(exp, 's2dv_cube') || !inherits(obs, 's2dv_cube')) {
stop("Parameter 'exp' and 'obs' must be of the class 's2dv_cube'.")
}
if (!is.null(exp_cor)) {
if (!inherits(exp_cor, 's2dv_cube')) {
stop("Parameter 'exp_cor' must be of the class 's2dv_cube'.")
}
}
QMapped <- QuantileMapping(exp = exp$data, obs = obs$data,
exp_cor = exp_cor$data,
sdate_dim = sdate_dim, memb_dim = memb_dim,
window_dim = window_dim, method = method,
na.rm = na.rm, ncores = ncores, ...)
if (is.null(exp_cor)) {
exp$data <- QMapped
exp$attrs$Datasets <- c(exp$attrs$Datasets, obs$attrs$Datasets)
exp$attrs$source_files <- c(exp$attrs$source_files, obs$attrs$source_files)
return(exp)
} else {
exp_cor$data <- QMapped
exp_cor$attrs$Datasets <- c(exp_cor$attrs$Datasets, exp$attrs$Datasets,
obs$attrs$Datasets)
exp_cor$attrs$source_files <- c(exp_cor$attrs$source_files, exp$attrs$source_files,
obs$attrs$source_files)
return(exp_cor)
}
}
#'Quantile Mapping for seasonal or decadal forecast data
#'
#'@description This function is a wrapper of fitQmap and doQmap from package
#''qmap' to be applied on multi-dimensional arrays. The quantile mapping
#'adjustment between an experiment, typically a hindcast, and observation is
#'applied to the experiment itself or to a provided forecast.
#'
#'@author Nuria Perez-Zanon, \email{[email protected]}
#'@param exp A multidimensional array with named dimensions containing the
#' hindcast.
#'@param obs A multidimensional array with named dimensions containing the
#' reference dataset.
#'@param exp_cor A multidimensional array with named dimensions in which the
#' quantile mapping correction should be applied. If it is not specified, the
#' correction is applied on object 'exp'.
#'@param sdate_dim A character string indicating the dimension name in which
#' cross-validation would be applied when exp_cor is not provided. 'sdate' by
#' default.
#'@param memb_dim A character string indicating the dimension name where
#' ensemble members are stored in the experimental arrays. It can be NULL if
#' there is no ensemble member dimension. It is set as 'member' by default.
#'@param window_dim A character string indicating the dimension name where
#' samples have been stored. It can be NULL (default) in case all samples are
#' used.
#'@param method A character string indicating the method to be used: 'PTF',
#' 'DIST', 'RQUANT', 'QUANT', 'SSPLIN'. By default, the empirical quantile
#' mapping 'QUANT' is used.
#'@param na.rm A logical value indicating if missing values should be removed
#' (FALSE by default).
#'@param ncores An integer indicating the number of cores for parallel
#' computation using multiApply function. The default value is NULL (1).
#'@param ... Additional parameters to be used by the method chosen. See qmap
#' package for details.
#'
#'@return An array containing the experimental data after applying the quantile
#'mapping correction.
#'
#'@seealso \code{\link[qmap]{fitQmap}} and \code{\link[qmap]{doQmap}}
#'@examples
#'# Use synthetic data
#'exp <- 1 : c(1 * 3 * 5 * 4 * 3 * 2)
#'dim(exp) <- c(dataset = 1, member = 3, sdate = 5, ftime = 4,
#' lat = 3, lon = 2)
#'obs <- 101 : c(100 + 1 * 1 * 5 * 4 * 3 * 2)
#'dim(obs) <- c(dataset = 1, member = 1, sdate = 5, ftime = 4,
#' lat = 3, lon = 2)
#'res <- QuantileMapping(exp, obs)
#'
#'@import qmap
#'@import multiApply
#'@import s2dv
#'@export
QuantileMapping <- function(exp, obs, exp_cor = NULL, sdate_dim = 'sdate',
memb_dim = 'member', window_dim = NULL,
method = 'QUANT', na.rm = FALSE,
ncores = NULL, ...) {
# exp and obs
obsdims <- names(dim(obs))
expdims <- names(dim(exp))
if (!is.array(exp) || !is.numeric(exp)) {
stop("Parameter 'exp' must be a numeric array.")
}
if (!is.array(obs) || !is.numeric(obs)) {
stop("Parameter 'obs' must be a numeric array.")
}
if (is.null(expdims)) {
stop("Parameter 'exp' must have dimension names.")
}
if (is.null(obsdims)) {
stop("Parameter 'obs' must have dimension names.")
}
# sdate_dim
if (!is.character(sdate_dim) | length(sdate_dim) != 1) {
stop("Parameter 'sdate_dim' must be a character string.")
}
if (!sdate_dim %in% expdims | !sdate_dim %in% obsdims) {
stop("Parameter 'sdate_dim' is not found in 'exp' or 'obs' dimension.")
}
if (dim(exp)[sdate_dim] == 1 || dim(obs)[sdate_dim] == 1) {
stop("Parameter 'exp' and 'obs' must have dimension length of 'sdate_dim' bigger than 1.")
}
# exp_cor
if (!is.null(exp_cor)) {
if (is.null(names(dim(exp_cor)))) {
stop("Parameter 'exp_cor' must have dimension names.")
}
if (!sdate_dim %in% names(dim(exp_cor))) {
stop("Parameter 'sdate_dim' is not found in 'exp_cor' dimension.")
}
}
# method
if (!(method %in% c('PTF', 'DIST', 'RQUANT', 'QUANT', 'SSPLIN')) | length(method) != 1) {
stop("Parameter 'method' must be one of the following methods: ",
"'PTF', 'DIST', 'RQUANT', 'QUANT', 'SSPLIN'.")
}
# memb_dim
if (is.null(memb_dim)) {
remove_member <- TRUE
memb_dim <- "temp_memb_dim"
exp <- InsertDim(exp, posdim = 1, lendim = 1, name = "temp_memb_dim")
obs <- InsertDim(obs, posdim = 1, lendim = 1, name = "temp_memb_dim")
obsdims <- names(dim(obs))
expdims <- names(dim(exp))
if (!is.null(exp_cor)) {
exp_cor <- InsertDim(exp_cor, posdim = 1, lendim = 1, name = "temp_memb_dim")
}
} else {
remove_member <- FALSE
if (!all(memb_dim %in% obsdims)) {
obs <- InsertDim(obs, posdim = 1, lendim = 1,
name = memb_dim[!(memb_dim %in% obsdims)])
obsdims <- names(dim(obs))
}
if (any(!memb_dim %in% expdims)) {
stop(paste0("Parameter 'memb_dim' is not found in 'exp' dimensions. ",
"Set it as NULL if there is no member dimension."))
}
}
sample_dims <- c(memb_dim, sdate_dim)
# window_dim
if (!is.null(window_dim)) {
if (!(window_dim %in% obsdims)) {
stop("Parameter 'window_dim' is not found in 'obs'.")
}
obs <- CSTools::MergeDims(obs, c(memb_dim, window_dim))
if (window_dim %in% expdims) {
exp <- CSTools::MergeDims(exp, c(memb_dim, window_dim))
warning("Parameter 'window_dim' is found in exp and is merged to 'memb_dim'.")
}
}
# na.rm
if (!is.logical(na.rm) | length(na.rm) > 1) {
stop("Parameter 'na.rm' must be one logical value.")
}
# ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | ncores %% 1 != 0 | ncores <= 0 |
length(ncores) > 1) {
stop("Parameter 'ncores' must be either NULL or a positive integer.")
}
}
###############################
if (!is.null(exp_cor)) {
qmaped <- Apply(list(exp, obs, exp_cor), target_dims = sample_dims,
fun = .qmapcor, method = method, sdate_dim = sdate_dim,
na.rm = na.rm, ...,
ncores = ncores)$output1
} else {
qmaped <- Apply(list(exp, obs), target_dims = sample_dims,
fun = .qmapcor, exp_cor = NULL, method = method,
sdate_dim = sdate_dim, na.rm = na.rm, ...,
ncores = ncores)$output1
}
# remove added 'temp_memb_dim'
if (remove_member) {
dim(qmaped) <- dim(qmaped)[-which(names(dim(qmaped)) == "temp_memb_dim")]
}
return(qmaped)
}
.qmapcor <- function(exp, obs, exp_cor = NULL, sdate_dim = 'sdate',
method = 'QUANT', na.rm = FALSE, ...) {
# exp: [memb (+ window), sdate]
# obs: [memb (+ window), sdate]
# exp_cor: NULL or [memb, sdate]
if (is.null(exp_cor)) {
applied <- exp * NA
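    # Leave-one-out cross-validation: for each start date, fit the quantile
    # mapping on all remaining start dates and apply it to the one left out.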
for (sd in 1:dim(exp)[sdate_dim]) {
if (na.rm) {
# select start date for cross-val
nas_pos <- which(!is.na(exp[, sd]))
obs2 <- as.vector(obs[, -sd])
exp2 <- as.vector(exp[, -sd])
exp_cor2 <- as.vector(exp[, sd])
# remove NAs
obs2 <- obs2[!is.na(obs2)]
exp2 <- exp2[!is.na(exp2)]
exp_cor2 <- exp_cor2[!is.na(exp_cor2)]
tryCatch({
adjust <- fitQmap(obs2, exp2, method = method, ...)
applied[nas_pos, sd] <- doQmap(exp_cor2, adjust, ...)
},
error = function(error_message) {
return(applied[, sd])
})
} else {
# na.rm = FALSE shouldn't fail, just return NA
if (anyNA(obs[, -sd]) | anyNA(exp[, -sd])) {
applied[, sd] <- NA
} else {
adjust <- fitQmap(as.vector(obs[, -sd]), as.vector(exp[, -sd]),
method = method, ...)
exp2 <- exp[, sd]
if (sum(is.na(exp2)) >= 1) {
app <- rep(NA, length(exp2))
nas_pos <- which(is.na(exp2))
exp2 <- exp2[!is.na(exp2)]
app[-nas_pos] <- doQmap(as.vector(exp2), adjust, ...)
} else {
app <- doQmap(as.vector(exp2), adjust, ...)
}
applied[, sd] <- app
}
}
}
} else {
applied <- exp_cor * NA
if (na.rm) {
tryCatch({
adjust <- fitQmap(obs[!is.na(obs)], exp[!is.na(exp)],
method = method, ...)
applied[!is.na(exp_cor)] <- doQmap(exp_cor[!is.na(exp_cor)],
adjust, ...)
},
error = function(error_message) {
return(applied)
})
} else {
adjust <- fitQmap(as.vector(obs), as.vector(exp), method = method, ...)
applied <- doQmap(as.vector(exp_cor), adjust, ...)
}
dim(applied) <- dim(exp_cor)
}
return(applied)
}
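# Minimal sketch (not run) of the core qmap calls wrapped by .qmapcor, using
# simple synthetic vectors; distributions and sizes are illustrative only.
# library(qmap)
# obs_v <- rgamma(500, shape = 2, scale = 3)
# exp_v <- rgamma(500, shape = 2, scale = 4)   # biased "hindcast"
# adj <- fitQmap(obs_v, exp_v, method = "QUANT")
# doQmap(exp_v, adj)                           # bias-adjusted values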
# ---- End of file: R/CST_QuantileMapping.R ----
#'@rdname CST_RFSlope
#'@title RainFARM spectral slopes from a CSTools object
#'
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'
#'@description This function computes spatial spectral slopes from a CSTools
#'object (of the class 's2dv_cube'), to be used for the RainFARM stochastic
#'precipitation downscaling method.
#'
#'@param data An object of the class 's2dv_cube', containing the spatial
#' precipitation fields to downscale. The data object is expected to have an
#' element named \code{$data} with at least two spatial dimensions named "lon"
#' and "lat" and one or more dimensions over which to average these slopes,
#' which can be specified by parameter \code{time_dim}.
#'@param kmin First wavenumber for spectral slope (default \code{kmin=1}).
#'@param time_dim String or character array with name(s) of dimension(s) (e.g.
#' "ftime", "sdate", "member" ...) over which to compute spectral slopes. If a
#' character array of dimension names is provided, the spectral slopes will be
#' computed as an average over all elements belonging to those dimensions. If
#' omitted, one of c("ftime", "sdate", "time") is searched and the first one
#' with more than one element is chosen.
#'@param ncores An integer that indicates the number of cores for parallel
#' computations using the multiApply function. The default value is one.
#'@return CST_RFSlope() returns spectral slopes using the RainFARM convention
#' (the logarithmic slope of k*|A(k)|^2 where A(k) are the spectral amplitudes).
#' The returned array has the same dimensions as the \code{$data} element of the
#' input object, minus the dimensions specified by \code{lon_dim},
#' \code{lat_dim} and \code{time_dim}.
#'@examples
#'exp <- 1 : (2 * 3 * 4 * 8 * 8)
#'dim(exp) <- c(dataset = 1, member = 2, sdate = 3, ftime = 4, lat = 8, lon = 8)
#'lon <- seq(10, 13.5, 0.5)
#'lat <- seq(40, 43.5, 0.5)
#'coords <- list(lon = lon, lat = lat)
#'data <- list(data = exp, coords = coords)
#'class(data) <- 's2dv_cube'
#'slopes <- CST_RFSlope(data)
#'@import multiApply
#'@import rainfarmr
#'@importFrom ClimProjDiags Subset
#'@export
CST_RFSlope <- function(data, kmin = 1, time_dim = NULL, ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, "s2dv_cube")) {
stop("Parameter 'data' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
# Check dimensions
if (!any(names(dim(data$data)) %in% .KnownLonNames()) |
!any(names(dim(data$data)) %in% .KnownLatNames())) {
stop("Spatial dimension names do not match any of the names accepted by ",
"the package.")
}
lon_name <- names(dim(data$data))[[which(names(dim(data$data)) %in% .KnownLonNames())]]
lat_name <- names(dim(data$data))[[which(names(dim(data$data)) %in% .KnownLatNames())]]
slopes <- RFSlope(data$data, kmin, time_dim,
lon_dim = lon_name, lat_dim = lat_name)
return(slopes)
}
#'@rdname RFSlope
#'@title RainFARM spectral slopes from an array (reduced version)
#'
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'
#'@description This function computes spatial spectral slopes from an array,
#'to be used for RainFARM stochastic precipitation downscaling method.
#'
#'@param data Array containing the spatial precipitation fields to downscale.
#' The input array is expected to have at least two dimensions named "lon" and
#' "lat" by default (these default names can be changed with the \code{lon_dim}
#' and \code{lat_dim} parameters) and one or more dimensions over which to
#' average the slopes, which can be specified by parameter \code{time_dim}.
#'@param kmin First wavenumber for spectral slope (default \code{kmin=1}).
#'@param time_dim String or character array with name(s) of dimension(s)
#' (e.g. "ftime", "sdate", "member" ...) over which to compute spectral slopes.
#' If a character array of dimension names is provided, the spectral slopes
#' will be computed as an average over all elements belonging to those dimensions.
#' If omitted, one of c("ftime", "sdate", "time") is searched and the first one
#' with more than one element is chosen.
#'@param lon_dim Name of lon dimension ("lon" by default).
#'@param lat_dim Name of lat dimension ("lat" by default).
#'@param ncores An integer that indicates the number of cores for parallel
#' computations using the multiApply function. The default value is one.
#'
#'@return RFSlope() returns spectral slopes using the RainFARM convention
#'(the logarithmic slope of k*|A(k)|^2 where A(k) are the spectral amplitudes).
#'The returned array has the same dimensions as the input array,
#'minus the dimensions specified by \code{lon_dim}, \code{lat_dim} and \code{time_dim}.
#'@examples
#'# Example for the 'reduced' RFSlope function
#'# Create a test array with dimension 8x8 and 20 timesteps,
#'# 3 starting dates and 20 ensemble members.
#'pr <- 1:(4*3*8*8*20)
#'dim(pr) <- c(ensemble = 4, sdate = 3, lon = 8, lat = 8, ftime = 20)
#'# Compute the spectral slopes ignoring the wavenumber
#'# corresponding to the largest scale (the box)
#'slopes <- RFSlope(pr, kmin = 2, time_dim = 'ftime')
#'@import multiApply
#'@import rainfarmr
#'@importFrom ClimProjDiags Subset
#'@export
RFSlope <- function(data, kmin = 1, time_dim = NULL,
lon_dim = "lon", lat_dim = "lat", ncores = NULL) {
# Know spatial coordinates names
if (!all(c(lon_dim, lat_dim) %in% names(dim(data)))) {
stop("Spatial coordinate names do not match data dimension names.")
}
if (length(ncores) > 1) {
ncores = ncores[1]
warning("Parameter 'ncores' has length > 1 and only the first element will be used.")
}
if (!is.null(ncores)) {
ncores <- round(ncores)
if (ncores == 0) {
ncores = NULL
}
}
# Ensure input grid is square and with even dimensions
if ( (dim(data)[lon_dim] != dim(data)[lat_dim]) |
(dim(data)[lon_dim] %% 2 == 1)) {
warning("Input data are expected to be on a square grid",
" with an even number of pixels per side.")
nmin <- min(dim(data)[lon_dim], dim(data)[lat_dim])
nmin <- floor(nmin / 2) * 2
data <- .subset(data, lat_dim, 1:nmin)
data <- .subset(data, lon_dim, 1:nmin)
warning(paste("The input data have been cut to a square of",
nmin, "pixels on each side."))
}
# Check/detect time_dim
if (is.null(time_dim)) {
time_dim_names <- c("ftime", "sdate", "time")
time_dim_num <- which(time_dim_names %in% names(dim(data)))
if (length(time_dim_num) > 0) {
# Find time dimension with length > 1
ilong <- which(dim(data)[time_dim_names[time_dim_num]] > 1)
if (length(ilong) > 0) {
time_dim <- time_dim_names[time_dim_num[ilong[1]]]
} else {
stop("No time dimension longer than one found.")
}
} else {
stop("Could not automatically detect a target time dimension ",
"in the provided data in 'data'.")
}
warning(paste("Selected time dim: ", time_dim))
}
# reorder and group time_dim together at the end
cdim0 <- dim(data)
imask <- names(cdim0) %in% time_dim
data <- .aperm2(data, c(which(!imask), which(imask)))
cdim <- dim(data)
ind <- 1 : length(which(!imask))
# compact (multiply) time_dim dimensions
dim(data) <- c(cdim[ind], rainfarm_samples = prod(cdim[-ind]))
# Repeatedly apply .RFSlope
result <- Apply(data, c(lon_dim, lat_dim, "rainfarm_samples"),
.RFSlope, kmin, ncores = ncores)$output1
return(slopes = result)
}
#'Atomic RFSlope
#'@param pr precipitation array to downscale with dims (lon, lat, time).
#'@param kmin first wavenumber for spectral slope (default kmin=1).
#'@return .RFSlope returns a scalar spectral slope using the RainFARM convention
#'(the logarithmic slope of k*|A(k)|^2 where A(k) is the spectral amplitude).
#'@noRd
.RFSlope <- function(pr, kmin) {
if (any(is.na(pr))) {
posna <- unlist(lapply(1:dim(pr)['rainfarm_samples'],
function(x){!is.na(pr[1, 1, x])}))
pr <- Subset(pr, 'rainfarm_samples', posna)
}
fxp <- fft2d(pr)
sx <- fitslope(fxp, kmin = kmin)
return(sx)
}
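# Usage sketch (not run) of the atomic function, assuming a small square random
# field; dimensions are illustrative only.
# pr <- array(rexp(8 * 8 * 20), dim = c(lon = 8, lat = 8, rainfarm_samples = 20))
# .RFSlope(pr, kmin = 1)  # returns a single spectral slope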
# Function to generalize through do.call() n-dimensional array subsetting
# and array indexing. Derived from Stack Overflow issue
# https://stackoverflow.com/questions/14500707/select-along-one-of-n-dimensions-in-array
.subset <- function(field, dim_name, range, drop = FALSE) {
idim <- which(names(dim(field)) %in% dim_name)
# Create list representing arguments supplied to [
# bquote() creates an object corresponding to a missing argument
indices <- rep(list(bquote()), length(dim(field)))
indices[[idim]] <- range
# do.call on the indices
field <- do.call("[",c(list(field), indices, list(drop = drop)))
return(field)
}
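# Usage sketch (not run) of .subset(), assuming a small named array; values are
# illustrative only.
# x <- array(1:24, dim = c(lon = 4, lat = 3, time = 2))
# y <- .subset(x, "lat", 1:2)
# dim(y)  # 4 2 2: only the 'lat' dimension is reduced, none are dropped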
# ---- End of file: R/CST_RFSlope.R ----
#'@rdname CST_RFTemp
#'@title Temperature downscaling of a CSTools object using lapse rate
#'correction or a reference field
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'@description This function implements a simple lapse rate correction of a
#'temperature field (an object of class 's2dv_cube' as provided by
#'`CST_Load`) as input.
#'The input lon grid must be increasing (but can be modulo 360).
#'The input lat grid can be irregularly spaced (e.g. a Gaussian grid).
#'The output grid can be irregularly spaced in lon and/or lat.
#'@references Method described in ERA4CS MEDSCOPE milestone M3.2:
#'High-quality climate prediction data available to WP4 here:
#'\url{https://www.medscope-project.eu/the-project/deliverables-reports/}
#'and in H2020 ECOPOTENTIAL Deliverable No. 8.1:
#'High resolution (1-10 km) climate, land use and ocean change scenarios available
#'here: \url{https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5b6cd2324&appId=PPGMS}
#'@param data An object of the class 's2dv_cube' as returned by `CST_Load`,
#' containing the temperature fields to downscale. The data object is expected
#' to have an element named \code{$data} with at least two spatial dimensions
#' named "lon" and "lat". (these default names can be changed with the
#' \code{lon_dim} and \code{lat_dim} parameters).
#'@param oro An object of the class 's2dv_cube' as returned by `CST_Load`,
#' containing fine scale orography (in meters). The destination downscaling
#' area must be contained in the orography field.
#'@param xlim Vector with longitude bounds for downscaling; the full input
#' field is downscaled if `xlim` and `ylim` are not specified.
#'@param ylim Vector with latitude bounds for downscaling.
#'@param lapse Float with environmental lapse rate.
#'@param lon_dim String with name of longitude dimension.
#'@param lat_dim String with name of latitude dimension.
#'@param time_dim A vector of character string indicating the name of temporal
#' dimension. By default, it is set to NULL and it considers "ftime", "sdate"
#' and "time" as temporal dimensions.
#'@param verbose Logical indicating whether to print diagnostic output.
#'@param nolapse Logical, if true `oro` is interpreted as a fine-scale
#' climatology and used directly for bias correction.
#'@param compute_delta Logical; if TRUE, only a delta to be used for
#' out-of-sample forecasts is returned (an object of the class 's2dv_cube'
#' containing a delta). Activates `nolapse = TRUE`.
#'@param delta An object of the class 's2dv_cube', containing a delta
#' to be applied to the downscaled input data. Activates `nolapse = TRUE`.
#' The grid of this object must coincide with that of the required output.
#'@param method String indicating the method used for interpolation:
#' "nearest" (nearest neighbours followed by smoothing with a circular
#' uniform weights kernel), "bilinear" (bilinear interpolation).
#' The two methods provide similar results, but "nearest" is slightly better
#' provided that the fine-scale grid is correctly centered as a subdivision
#' of the large-scale grid.
#'@return CST_RFTemp() returns a downscaled CSTools object (i.e., of the class
#''s2dv_cube').
#'@examples
#'# Generate simple synthetic data and downscale by factor 4
#'t <- rnorm(7 * 6 * 2 * 3 * 4)*10 + 273.15 + 10
#'dim(t) <- c(dataset = 1, member = 2, sdate = 3, ftime = 4, lat = 6, lon = 7)
#'lon <- seq(3, 9, 1)
#'lat <- seq(42, 47, 1)
#'coords <- list(lat = lat, lon = lon)
#'exp <- list(data = t, coords = coords)
#'attr(exp, 'class') <- 's2dv_cube'
#'o <- runif(29*29)*3000
#'dim(o) <- c(lats = 29, lons = 29)
#'lon <- seq(3, 10, 0.25)
#'lat <- seq(41, 48, 0.25)
#'coords <- list(lat = lat, lon = lon)
#'oro <- list(data = o, coords = coords)
#'attr(oro, 'class') <- 's2dv_cube'
#'res <- CST_RFTemp(data = exp, oro = oro, xlim = c(4,8), ylim = c(43, 46),
#' lapse = 6.5, time_dim = 'ftime',
#' lon_dim = 'lon', lat_dim = 'lat')
#'@import multiApply
#'@export
CST_RFTemp <- function(data, oro, xlim = NULL, ylim = NULL, lapse = 6.5,
lon_dim = "lon", lat_dim = "lat", time_dim = NULL,
nolapse = FALSE, verbose = FALSE, compute_delta = FALSE,
method = "bilinear", delta = NULL) {
# Check 's2dv_cube'
if (!inherits(data, "s2dv_cube")) {
stop("Parameter 'data' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
if (!inherits(oro, "s2dv_cube")) {
stop("Parameter 'oro' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
if (!is.null(delta)) {
if (!inherits(delta, "s2dv_cube")) {
stop("Parameter 'delta' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
}
# Check 's2dv_cube' structure
if (!all(c('data', 'coords') %in% names(data))) {
stop("Parameter 'data' must have 'data' and 'coords' elements ",
"within the 's2dv_cube' structure.")
}
if (!all(c('data', 'coords') %in% names(oro))) {
stop("Parameter 'oro' must have 'data' and 'coords' elements ",
"within the 's2dv_cube' structure.")
}
# Check coordinates
if (!any(names(data$coords) %in% .KnownLonNames()) |
!any(names(data$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names of 'data' do not match any of the names ",
"accepted by the package.")
}
if (!any(names(oro$coords) %in% .KnownLonNames()) |
!any(names(oro$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names of 'oro' do not match any of the names ",
"accepted by the package.")
}
lon_data <- names(data$coords)[[which(names(data$coords) %in% .KnownLonNames())]]
lat_data <- names(data$coords)[[which(names(data$coords) %in% .KnownLatNames())]]
lon_oro <- names(oro$coords)[[which(names(oro$coords) %in% .KnownLonNames())]]
lat_oro <- names(oro$coords)[[which(names(oro$coords) %in% .KnownLatNames())]]
res <- RFTemp(data = data$data,
lon = as.vector(data$coords[[lon_data]]),
lat = as.vector(data$coords[[lat_data]]),
oro = oro$data,
lonoro = as.vector(oro$coords[[lon_oro]]),
latoro = as.vector(oro$coords[[lat_oro]]),
xlim = xlim, ylim = ylim, lapse = lapse,
lon_dim = lon_dim, lat_dim = lat_dim, time_dim = time_dim,
nolapse = nolapse, verbose = verbose, method = method,
compute_delta = compute_delta, delta = delta$data)
data$data <- res$data
  data$coords[[lon_data]] <- res[[lon_dim]]
  data$coords[[lat_data]] <- res[[lat_dim]]
return(data)
}
#'@rdname RFTemp
#'@title Temperature downscaling of a CSTools object using lapse rate
#'correction (reduced version)
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'@description This function implements a simple lapse rate correction of a
#'temperature field (a multidimensional array) as input.
#'The input lon grid must be increasing (but can be modulo 360).
#'The input lat grid can be irregularly spaced (e.g. a Gaussian grid)
#'The output grid can be irregularly spaced in lon and/or lat.
#'@references Method described in ERA4CS MEDSCOPE milestone M3.2:
#'High-quality climate prediction data available to WP4 here:
#'\url{https://www.medscope-project.eu/the-project/deliverables-reports/}
#'and in H2020 ECOPOTENTIAL Deliverable No. 8.1:
#'High resolution (1-10 km) climate, land use and ocean change scenarios here:
#'\url{https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5b6cd2324&appId=PPGMS}.
#'@param data Temperature array to downscale. The input array is expected to
#' have at least two dimensions named "lon" and "lat" by default (these default
#' names can be changed with the \code{lon_dim} and \code{lat_dim} parameters).
#'@param lon Vector or array of longitudes.
#'@param lat Vector or array of latitudes.
#'@param lonoro Vector or array of longitudes corresponding to the fine orography.
#'@param latoro Vector or array of latitudes corresponding to the fine orography.
#'@param oro Array containing fine-scale orography (in m). The destination
#' downscaling area must be contained in the orography field.
#'@param xlim Vector with longitude bounds for downscaling; the full input field
#' is downscaled if `xlim` and `ylim` are not specified.
#'@param ylim Vector with latitude bounds for downscaling.
#'@param lapse Float with environmental lapse rate.
#'@param lon_dim String with name of longitude dimension.
#'@param lat_dim String with name of latitude dimension.
#'@param time_dim A vector of character strings indicating the names of the
#' temporal dimensions. By default, it is set to NULL and "ftime", "sdate"
#' and "time" are considered as temporal dimensions.
#'@param verbose Logical; if TRUE, diagnostic output is printed.
#'@param nolapse Logical, if true `oro` is interpreted as a fine-scale
#' climatology and used directly for bias correction.
#'@param compute_delta Logical if true returns only a delta to be used for
#' out-of-sample forecasts.
#'@param delta Matrix containing a delta to be applied to the downscaled
#' input data. The grid of this matrix is supposed to be same as that of
#' the required output field.
#'@param method String indicating the method used for interpolation:
#' "nearest" (nearest neighbours followed by smoothing with a circular
#' uniform weights kernel) or "bilinear" (bilinear interpolation).
#' The two methods provide similar results, but "nearest" is slightly better
#' provided that the fine-scale grid is correctly centered as a subdivision
#' of the large-scale grid.
#'@return CST_RFTemp() returns a downscaled CSTools object.
#'@return RFTemp() returns a list containing the fine-scale
#'longitudes, latitudes and the downscaled fields.
#'@examples
#'# Generate simple synthetic data and downscale by factor 4
#'t <- rnorm(7 * 6 * 4 * 3) * 10 + 273.15 + 10
#'dim(t) <- c(sdate = 3, ftime = 4, lat = 6, lon = 7)
#'lon <- seq(3, 9, 1)
#'lat <- seq(42, 47, 1)
#'o <- runif(29 * 29) * 3000
#'dim(o) <- c(lat = 29, lon = 29)
#'lono <- seq(3, 10, 0.25)
#'lato <- seq(41, 48, 0.25)
#'res <- RFTemp(t, lon, lat, o, lono, lato, xlim = c(4, 8), ylim = c(43, 46),
#' lapse = 6.5, time_dim = 'ftime')
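#'
#'# Sketch using a synthetic fine-scale climatology: with 'nolapse = TRUE' the
#'# 'oro' argument is used directly as a reference field for bias correction
#'clim <- runif(29 * 29) * 10 + 273.15
#'dim(clim) <- c(lat = 29, lon = 29)
#'res2 <- RFTemp(t, lon, lat, clim, lono, lato, xlim = c(4, 8),
#'               ylim = c(43, 46), nolapse = TRUE, time_dim = 'ftime')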
#'@import multiApply
#'@export
RFTemp <- function(data, lon, lat, oro, lonoro, latoro,
xlim = NULL, ylim = NULL, lapse = 6.5,
lon_dim = "lon", lat_dim = "lat", time_dim = NULL,
nolapse = FALSE, verbose = FALSE, compute_delta = FALSE,
method = "bilinear", delta = NULL) {
# Check 'lon_dim' and 'lat_dim' parameters
if (!all(c(lon_dim, lat_dim) %in% names(dim(data)))) {
stop("Parameters 'lon_dim' and 'lat_dim' do not match with 'data' ",
"dimension names.")
}
# Check/detect time_dim
if (is.null(time_dim)) {
time_dim_names <- c("ftime", "sdate", "time")
time_dim_num <- which(time_dim_names %in% names(dim(data)))
if (length(time_dim_num) > 0) {
# Find time dimension with length > 0
ilong <- which(dim(data)[time_dim_names[time_dim_num]] > 0)
if (length(ilong) > 0) {
time_dim <- time_dim_names[time_dim_num[ilong]]
} else {
stop("No time dimension longer than zero found.")
}
} else {
stop("Could not automatically detect a target time dimension ",
"in the provided data in 'data'.")
}
warning(paste("Selected time dim:", time_dim, "\n"))
}
# Repeatedly apply .downscalet
if (is.null(delta)) {
result <- Apply(data, target_dims = c(lon_dim, lat_dim, time_dim),
fun = .downscalet, lon, lat, oro, lonoro, latoro,
xlim = xlim, ylim = ylim, lapse = lapse,
nolapse = nolapse, verbose = verbose, method = method,
compute_delta = compute_delta)
} else {
result <- Apply(list(data, delta),
target_dims = list(c(lon_dim, lat_dim, time_dim),
c(lon_dim, lat_dim)),
fun = .downscalet_delta, lon, lat, oro, lonoro, latoro,
xlim = xlim, ylim = ylim, lapse = lapse,
nolapse = nolapse, verbose = verbose, method = method,
compute_delta = compute_delta)
}
names(dim(result$data))[1] <- names(dim(data))[names(dim(data)) == lon_dim]
names(dim(result$data))[2] <- names(dim(data))[names(dim(data)) == lat_dim]
result$lon <- array(result$lon[1:dim(result$lon)[1]])
result$lat <- array(result$lat[1:dim(result$lat)[1]])
names(dim(result$lon)) <- lon_dim
names(dim(result$lat)) <- lat_dim
names(result) <- c('data', lon_dim, lat_dim)
return(result)
}
#'Lapse-rate temperature correction downscaling
#'
#'@description Downscales a temperature field using a lapse-rate
#'correction based on a reference orography. Time-averaging is done on all
#'dimensions after the first two.
#'@author Jost von Hardenberg, \email{[email protected]}
#'@param lon Vector of input longitudes.
#'@param lat Vector of input latitudes.
#'@param t Matrix of input temperature data.
#'@param lono Vector of orography longitudes.
#'@param lato Vector of orography latitudes.
#'@param oro Matrix of topographical elevations (in meters). The destination
#' downscaling area must be contained in the orography field.
#'@param xlim Vector of longitude bounds; the full input field is downscaled if
#' `xlim` and `ylim` are not specified.
#'@param ylim Vector of latitude bounds.
#'@param radius Smoothing radius expressed in longitude units (default is half a
#' large-scale pixel).
#'@param lapse Environmental lapse rate (in K/Km).
#'@param nolapse Logical, if true `oro` is interpreted as a fine-scale
#' climatology and used directly for bias correction.
#'@param compute_delta Logical if true returns only a delta to be used for
#' out-of-sample forecasts.
#'@param delta Matrix containing a delta to be applied to the input data.
#' The grid of this matrix is supposed to be same as that of the required
#' output field.
#'@param verbose Logical; if TRUE, diagnostic output is printed.
#'@return A downscaled temperature matrix.
#'@examples
#'lon = 5:20
#'lat = 35:40
#'t = runif(16 * 6); dim(t) = c(16, 6)
#'lono = seq(5, 20, 0.1)
#'lato = seq(35, 40, 0.1)
#'o = runif(151 * 51) * 2000; dim(o) = c(151, 51)
#'td = .downscalet(t, lon, lat, o, lono, lato, c(8, 12), c(36, 38))
#'@noRd
.downscalet <- function(t, lon, lat, oro, lono, lato,
xlim = NULL, ylim = NULL,
radius = 0, lapse = 6.5, nolapse = FALSE,
verbose = FALSE, compute_delta = FALSE,
method = "bilinear", delta = NULL) {
if (!is.null(delta) & compute_delta) {
stop("Cannot `compute_delta` and provide `delta` at the same time.")
}
tdim <- dim(t)
ldim <- FALSE
if (length(tdim) == 3) {
nt <- tdim[3]
} else if (length(tdim) == 2) {
nt <- 1
dim(t) <- c(tdim, 1)
} else if (length(tdim) < 2) {
stop("Input array must have at least two dimensions")
} else {
ldim <- TRUE
dim(t) <- c(tdim[1:2], time = prod(tdim[-(1:2)]))
nt <- dim(t)[3]
}
if (lon[2] <= lon[1]) {
stop("Longitudes must be monotone increasing.")
}
# Regularize lon coords to monotone increasing
lon[lon >= lon[1]] <- lon[lon >= lon[1]] - 360
if (lon[length(lon)] < 0) { lon <- lon + 360 }
lono[lono >= lono[1]] <- lono[lono >= lono[1]] - 360
if (lono[length(lono)] < 0) { lono <- lono + 360 }
dxl <- (lon[2] - lon[1]) / 2
dyl <- (lat[2] - lat[1]) / 2
if (radius == 0) {
radius <- dxl
}
if (is.null(xlim)) {
xlim <- c(lon[1] + dxl, lon[length(lon)] - dxl)
}
if (is.null(ylim)) {
ylim <- c(lat[1] + dyl, lat[length(lat)] - dyl)
if (ylim[1] > ylim[2]) {
ylim <- ylim[2:1]
}
}
#Add buffer
lon1 <- xlim[1] - radius
lon2 <- xlim[2] + radius
lat1 <- ylim[1] - radius
lat2 <- ylim[2] + radius
orocut <- oro[(lono <= lon2) & (lono >= lon1),
(lato <= lat2) & (lato >= lat1)]
lonocut <- lono[(lono <= lon2) & (lono >= lon1)]
latocut <- lato[(lato <= lat2) & (lato >= lat1)]
dxol <- lonocut[2] - lonocut[1]
nrad <- as.integer(radius / abs(dxol) + 0.5)
if (length(lonocut) == 0 | length(latocut) == 0) {
stop("Orography not available for selected area")
}
if (is.null(delta) & compute_delta & nolapse) {
if (verbose) {
print("Time averaging")
}
# If we just need the delta we can work with time averages
t <- apply(t, c(1, 2), mean)
dim(t) <- c(dim(t), time = 1)
}
if (!(is.null(delta) & compute_delta & !nolapse)) {
# This calculation is not needed if we just need the delta
# and lapse rate is used
if (verbose) {
print(paste("Interpolation using", method, "method"))
}
tout <- .interp2d(t, lon, lat, lonocut, latocut, method = method)
# Fine-scale smooth interpolated input field
if (method == "nearest") {
if (verbose) {
print(paste("Smoothing interpolated field"))
}
tout <- .smooth(tout, nrad)
}
}
if (is.null(delta)) {
# Compute delta
if (nolapse) {
# oro is a reference fine-scale field: use that directly
# for bias correcting the interpolated input field's temporal mean
t1 <- orocut - apply(tout, c(1, 2), mean)
} else {
# Lapse-rate correction
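      # The correction is minus the fine-scale orography anomaly (relative to
      # its smoothed, large-scale version) times the lapse rate; 'lapse' is in
      # K/km and the orography in m, hence the division by 1000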
orocuts <- .smooth(orocut, nrad)
t1 <- -(orocut - orocuts) * lapse / 1000.
}
if (compute_delta) {
# Return delta
tout <- t1[(lonocut <= xlim[2]) & (lonocut >= xlim[1]),
(latocut <= ylim[2]) & (latocut >= ylim[1])]
} else {
# Apply delta
if (verbose) {
print("Adding computed delta")
}
for (i in seq_len(nt)) {
tout[, , i] <- tout[, , i] + t1
}
tout <- tout[(lonocut <= xlim[2]) & (lonocut >= xlim[1]),
(latocut <= ylim[2]) & (latocut >= ylim[1]), ]
if (ldim) {
dim(tout) <- c(dim(tout)[1:2], tdim[-(1:2)])
}
}
} else {
# Just apply the provided delta
tout <- tout[(lonocut <= xlim[2]) & (lonocut >= xlim[1]),
(latocut <= ylim[2]) & (latocut >= ylim[1]), ]
if (any(dim(delta)[1:2] != dim(tout)[1:2])) {
stop("Provided delta has not the same dimensions as required output")
}
if (verbose) {
print("Adding provided delta")
}
for (i in seq_len(nt)) {
tout[, , i] <- tout[, , i] + delta
}
if (ldim) {
dim(tout) <- c(dim(tout)[1:2], tdim[-(1:2)])
}
}
lonocut <- lonocut[(lonocut <= xlim[2]) & (lonocut >= xlim[1])]
latocut <- latocut[(latocut <= ylim[2]) & (latocut >= ylim[1])]
return(list(data = tout, lon = lonocut, lat = latocut))
}
# Special driver version of .downscalet to apply delta
.downscalet_delta <- function(t, delta, lon, lat, oro, lono, lato,
xlim = NULL, ylim = NULL, radius = 0,
lapse = 6.5, nolapse = FALSE, verbose = FALSE,
compute_delta = FALSE, method = "bilinear") {
res <- .downscalet(t, lon, lat, oro, lono, lato, xlim = xlim,
ylim = ylim, radius = radius, lapse = lapse,
nolapse = nolapse, verbose = verbose,
compute_delta = compute_delta, delta = delta,
method = method)
  return(res)
}
#'Nearest neighbour and bilinear interpolation
#'
#'@description The input field is interpolated onto the output
#'coordinate grid using nearest neighbours or bilinear interpolation.
#'The input lon grid must be monotone increasing.
#'The input lat grid can be irregularly spaced (e.g. a Gaussian grid)
#'The output grid can be irregularly spaced in lon and/or lat.
#'@author Jost von Hardenberg, \email{[email protected]}
#'@param z Matrix with the input field to interpolate (assumed to
#' include also a third time dimension)
#'@param lon Vector of input longitudes.
#'@param lat Vector of input latitudes.
#'@param lonp Vector of output longitudes.
#'@param latp Vector of output latitudes.
#'@param method String indicating the interpolation method ("nearest" or
#' "bilinear" (default)).
#'@return The interpolated field.
#'@examples
#'lon = 5:11
#'lat = 35:40
#'z = runif(7 * 6 * 2); dim(z) = c(7, 6, 2)
#'lonp = seq(5, 10, 0.2)
#'latp = seq(35, 40, 0.2)
#'zo <- .interp2d(z, lon, lat, lonp, latp, method = "nearest")
#'@noRd
.interp2d <- function(z, lon, lat, lonp, latp, method="bilinear") {
nx <- length(lonp)
ny <- length(latp)
nt <- dim(z)[3]
# Interpolate nn assuming a regular grid
zo <- array(0., c(nx, ny, nt))
dy <- lat[2] - lat[1]
dx <- lon[2] - lon[1]
if (method == "nearest") {
jj <- 1:length(latp)
for (j in 1:length(latp)) {
jj[j] <- which.min(abs(latp[j]-lat))
}
ii <- ((lonp - (lon[1] - dx / 2)) %/% dx + 1)
# If lon are global and periodic attempt to fix issues
if ((lon[1] - lon[length(lon)]) %% 360 == dx) {
ii[ii <= 0] <- ii[ii <= 0] + length(lon)
ii[ii > length(lon)] <- ii[ii > length(lon)] - length(lon)
}
if ((ii[1] <= 0) | (ii[length(ii)] > length(lon)) |
(jj[1] <= 0) | (jj[length(jj)] > length(lat))) {
stop("Downscaling area not contained in input data")
}
zo[, , ] <- z[ii, jj, ]
} else {
jj <- 1:length(latp)
jj2 <- jj
for (j in 1:length(latp)) {
jmin <- which.min(abs(latp[j]-lat))
if ( (((latp[j]-lat[jmin]) >= 0) & ( dy >= 0)) |
(((latp[j]-lat[jmin]) < 0) & ( dy <= 0))) {
jj[j] <- jmin
jj2[j] <- jmin + 1
} else {
jj2[j] <- jmin
jj[j] <- jmin - 1
}
}
ii <- ((lonp - lon[1]) %/% dx + 1)
ii2 <- ii + 1
# If lon are global and periodic attempt to fix issues
if ((lon[1] - lon[length(lon)]) %% 360 == dx) {
ii[ii <= 0] <- ii[ii <= 0] + length(lon)
ii[ii > length(lon)] <- ii[ii > length(lon)] - length(lon)
ii2[ii2 <= 0] <- ii2[ii2 <= 0] + length(lon)
ii2[ii2 > length(lon)] <- ii2[ii2 > length(lon)] - length(lon)
}
if ((ii[1] <= 0) | (ii[length(ii)] > length(lon)) |
(jj[1] <= 0) | (jj[length(jj)] > length(lat)) |
(ii2[1] <= 0) | (ii2[length(ii2)] > length(lon)) |
(jj2[1] <= 0) | (jj2[length(jj2)] > length(lat))) {
stop("Downscaling area not contained in input data")
}
xx <- ((lonp - lon[ii]) / dx) %% 360
yy <- (latp - lat[jj]) / (lat[jj2] - lat[jj])
xx <- xx[row(zo[, , 1])]
yy <- yy[col(zo[, , 1])]
dim(xx) <- c(nx, ny)
dim(yy) <- c(nx, ny)
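    # Bilinear weights derived from the fractional offsets (xx, yy) of each
    # output point within its enclosing input grid cell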
w00 <- (1 - xx) * (1 - yy)
w10 <- xx * (1 - yy)
w01 <- (1 - xx) * yy
w11 <- xx * yy
for (k in seq_len(nt)) {
zo[, , k] <- z[ii, jj, k] * w00 + z[ii2, jj, k] * w10 +
z[ii, jj2, k] * w01 + z[ii2, jj2, k] * w11
}
}
names(dim(zo)) <- names(dim(z))
return(zo)
}
#'Smoothing using convolution with a circular kernel
#'
#'@description The input field is convolved with a circular kernel with equal
#'weights. Takes into account missing values.
#'@author Jost von Hardenberg, \email{[email protected]}
#'@param z Matrix with the input field to smoothen, with dimensions `c(ns, ns)`
#'@param sdim The smoothing kernel radius in pixel.
#'@return The smoothened field.
#'@examples
#'z <- rnorm(64 * 64)
#'dim(z) <- c(64, 64)
#'zs <- .smooth(z, 8)
#'sd(zs)
#'# [1] 0.1334648
#'@noRd
.smooth <- function(z, sdim) {
nsx <- dim(z)[1]
nsy <- dim(z)[2]
zdim <- dim(z)
if (length(dim(z)) == 2) {
dim(z) <- c(dim(z), time = 1)
nt <- 1
} else {
nt <- dim(z)[3]
}
imask <- !is.finite(z)
z[imask] <- 0.
mask <- matrix(0., nsx, nsy)
for (i in 1:nsx) {
for (j in 1:nsy) {
kx <- i - 1
ky <- j - 1
if (i > nsx / 2 + 1) {
kx <- i - nsx - 1
}
if (j > nsy / 2 + 1) {
ky <- j - nsy - 1
}
r2 <- kx * kx + ky * ky
mask[i, j] <- (r2 <= (sdim * sdim))
}
}
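  # The convolution with the circular kernel is performed in Fourier space;
  # the result is normalized by the kernel area (sum(mask)) and the FFT size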
fm <- fft(mask)
zf <- array(0., c(nsx, nsy, nt))
for (k in seq_len(nt)) {
zf[, , k] <- Re(fft(fm * fft(z[, , k]), inverse = TRUE)
) / sum(mask) / length(fm)
if (sum(imask[, , k]) > 0) {
zz <- z[, , k]
zz[!imask[, , k]] <- 1.0
zz <- zf[, , k] / (Re(fft(fm * fft(zz), inverse = TRUE)) /
sum(mask) / length(fm))
zz[imask[, , k]] <- NA
zf[, , k] <- zz
}
}
dim(zf) <- zdim
return(zf)
}
#'Compute climatological weights for RainFARM stochastic precipitation downscaling
#'
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'
#'@description Compute climatological ("orographic") weights from a fine-scale
#'precipitation climatology file.
#'@references Terzago, S., Palazzi, E., & von Hardenberg, J. (2018).
#'Stochastic downscaling of precipitation in complex orography:
#'A simple method to reproduce a realistic fine-scale climatology.
#'Natural Hazards and Earth System Sciences, 18(11),
#'2825-2840. \doi{10.5194/nhess-18-2825-2018}.
#'@param climfile Filename of a fine-scale precipitation climatology. The file
#' is expected to be in NetCDF format and should contain at least one
#' precipitation field. If several fields at different times are provided,
#' a climatology is derived by time averaging. Suitable climatology files could
#' be for example a fine-scale precipitation climatology from a high-resolution
#' regional climate model (see e.g. Terzago et al. 2018), a local
#' high-resolution gridded climatology from observations, or a reconstruction
#' such as those which can be downloaded from the WORLDCLIM
#' (\url{https://www.worldclim.org}) or CHELSA (\url{https://chelsa-climate.org/})
#' websites. The latter data will need to be converted to NetCDF format before
#' being used (see for example the GDAL tools (\url{https://gdal.org/}). It
#' could also be an 's2dv_cube' object.
#'@param nf Refinement factor for downscaling (the output resolution is
#' increased by this factor).
#'@param lon Vector of longitudes.
#'@param lat Vector of latitudes. The number of longitudes and latitudes is
#' expected to be even and the same. If not the function will perform a
#' subsetting to ensure this condition.
#'@param varname Name of the variable to be read from \code{climfile}.
#'@param fsmooth Logical to use smooth conservation (default) or large-scale
#' box-average conservation.
#'@param lonname A character string indicating the name of the longitudinal
#' dimension set as 'lon' by default.
#'@param latname A character string indicating the name of the latitudinal
#' dimension set as 'lat' by default.
#'@param ncores An integer that indicates the number of cores for parallel
#' computations using multiApply function. The default value is one.
#'
#'@return An object of class 's2dv_cube' containing in matrix \code{data} the
#'weights with dimensions (lon, lat).
#'@examples
#'# Create weights to be used with the CST_RainFARM() or RainFARM() functions
#'# using an external random data in the form of 's2dv_cube'.
#'obs <- rnorm(2 * 3 * 4 * 8 * 8)
#'dim(obs) <- c(dataset = 1, member = 2, sdate = 3, ftime = 4, lat = 8, lon = 8)
#'lon <- seq(10, 13.5, 0.5)
#'lat <- seq(40, 43.5, 0.5)
#'coords <- list(lon = lon, lat = lat)
#'data <- list(data = obs, coords = coords)
#'class(data) <- "s2dv_cube"
#'res <- CST_RFWeights(climfile = data, nf = 3, lon, lat, lonname = 'lon',
#' latname = 'lat', fsmooth = TRUE)
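#'
#'# Illustrative sketch (not run): reading a fine-scale precipitation
#'# climatology from a NetCDF file; "clim_prec.nc" and "prec" are hypothetical
#'# file and variable names
#'\dontrun{
#'ww <- CST_RFWeights(climfile = "clim_prec.nc", nf = 3, lon = lon, lat = lat,
#'                    varname = "prec", fsmooth = TRUE)
#'}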
#'@import ncdf4
#'@import rainfarmr
#'@import multiApply
#'@importFrom utils tail
#'@importFrom utils head
#'@export
CST_RFWeights <- function(climfile, nf, lon, lat, varname = NULL, fsmooth = TRUE,
lonname = 'lon', latname = 'lat', ncores = NULL) {
if (!inherits(climfile, "s2dv_cube")) {
if (!is.null(varname) & !is.character(varname)) {
stop("Parameter 'varname' must be a character string indicating the name",
" of the variable to be read from the file.")
}
}
# Ensure input grid is square and with even dimensions
if ((length(lat) != length(lon)) | (length(lon) %% 2 == 1)) {
warning("Input data are expected to be on a square grid",
" with an even number of pixels per side.")
nmin <- min(length(lon), length(lat))
nmin <- floor(nmin / 2) * 2
lon <- lon[1:nmin]
lat <- lat[1:nmin]
warning("The input data have been cut to the range.")
warning(paste0("lon: [", lon[1], ", ", lon[length(lon)], "] ",
" lat: [", lat[1], ", ", lat[length(lat)], "]"))
}
if (is.character(climfile)) {
ncin <- nc_open(climfile)
latin <- ncvar_get(ncin, grep(latname, attributes(ncin$dim)$names,
value = TRUE))
lonin <- ncvar_get(ncin, grep(lonname, attributes(ncin$dim)$names,
value = TRUE))
if (varname == "") {
varname <- grep("bnds", attributes(ncin$var)$names,
invert = TRUE, value = TRUE)[1]
}
zclim <- ncvar_get(ncin, varname)
nc_close(ncin)
} else if (inherits(climfile, "s2dv_cube")) {
# Check object structure
if (!all(c('data', 'coords') %in% names(climfile))) {
stop("Parameter 'climfile' must have 'data' and 'coords' elements ",
"within the 's2dv_cube' structure.")
}
# Check coordinates
if (!any(names(climfile$coords) %in% .KnownLonNames()) |
!any(names(climfile$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted by ",
"the package.")
}
loncoordname <- names(climfile$coords)[[which(names(climfile$coords) %in% .KnownLonNames())]]
latcoordname <- names(climfile$coords)[[which(names(climfile$coords) %in% .KnownLatNames())]]
zclim <- climfile$data
latin <- as.vector(climfile$coords[[latcoordname]])
lonin <- as.vector(climfile$coords[[loncoordname]])
} else {
stop("Parameter 'climfile' is expected to be a character string indicating",
" the path to the files or an object of class 's2dv_cube'.")
}
# Check dim names and order
if (length(names(dim(zclim))) < 1) {
stop("The dataset provided in 'climfile' requires dimension names.")
}
result <- RF_Weights(zclim, latin, lonin, nf, lat, lon, fsmooth = fsmooth,
lonname = lonname, latname = latname, ncores = ncores)
if (inherits(climfile, "s2dv_cube")) {
climfile$data <- result$data
climfile$coords[[loncoordname]] <- result[[lonname]]
climfile$coords[[latcoordname]] <- result[[latname]]
} else {
climfile <- NULL
climfile$data <- result
climfile$coords[[lonname]] <- result[[lonname]]
climfile$coords[[latname]] <- result[[latname]]
}
return(climfile)
}
#'Compute climatological weights for RainFARM stochastic precipitation downscaling
#'
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'
#'@description Compute climatological ("orographic") weights from a fine-scale
#'precipitation climatology file.
#'@references Terzago, S., Palazzi, E., & von Hardenberg, J. (2018).
#'Stochastic downscaling of precipitation in complex orography:
#'A simple method to reproduce a realistic fine-scale climatology.
#'Natural Hazards and Earth System Sciences, 18(11),
#'2825-2840. \doi{10.5194/nhess-18-2825-2018}.
#'@param zclim A multi-dimensional array with named dimension containing at
#' least one precipiation field with spatial dimensions.
#'@param lonin A vector indicating the longitudinal coordinates corresponding to
#' the \code{zclim} parameter.
#'@param latin A vector indicating the latitudinal coordinates corresponding to
#' the \code{zclim} parameter.
#'@param nf Refinement factor for downscaling (the output resolution is
#' increased by this factor).
#'@param lon Vector of longitudes.
#'@param lat Vector of latitudes. The number of longitudes and latitudes is
#' expected to be even and the same. If not the function will perform a
#' subsetting to ensure this condition.
#'@param fsmooth Logical to use smooth conservation (default) or large-scale
#' box-average conservation.
#'@param lonname A character string indicating the name of the longitudinal
#' dimension set as 'lon' by default.
#'@param latname A character string indicating the name of the latitudinal
#' dimension set as 'lat' by default.
#'@param ncores An integer that indicates the number of cores for parallel
#' computations using multiApply function. The default value is one.
#'
#'@return A list containing the array \code{data} with the weights with
#'dimensions (lon, lat), and the vectors of fine-scale longitudes and latitudes.
#'@examples
#'a <- array(1:2500, c(lat = 50, lon = 50))
#'res <- RF_Weights(a, seq(0.1 ,5, 0.1), seq(0.1 ,5, 0.1),
#' nf = 5, lat = 1:5, lon = 1:5)
#'@import ncdf4
#'@import rainfarmr
#'@import multiApply
#'@importFrom utils tail
#'@importFrom utils head
#'@export
RF_Weights <- function(zclim, latin, lonin, nf, lat, lon, fsmooth = TRUE,
lonname = 'lon', latname = 'lat', ncores = NULL) {
x <- Apply(list(zclim), target_dims = c(lonname, latname), fun = rf_weights,
latin = latin, lonin = lonin, nf = nf, lat = lat, lon = lon,
lonname = lonname, latname = latname,
fsmooth = fsmooth, ncores = ncores)$output1
grid <- lon_lat_fine(lon, lat, nf)
res <- NULL
res$data <- x
res[[lonname]] <- grid$lon
  res[[latname]] <- grid$lat
return(res)
}
rf_weights <- function(zclim, latin, lonin, nf, lat, lon, lonname = 'lon',
latname = 'lat', fsmooth = TRUE) {
# Check if lon and lat need to be reversed
if (lat[1] > lat[2]) {
lat <- rev(lat)
frev <- TRUE
} else {
frev <- FALSE
}
if (latin[1] > latin[2]) {
latin <- rev(latin)
zclim <- zclim[, seq(dim(zclim)[2], 1)]
}
# If lon is not monotonic make it so
if (lon[1] > tail(lon, 1)) {
lon <- (lon - 360) * (lon > tail(lon, 1)) + lon * (lon <= tail(lon, 1))
}
# Is the reference climatology global in the zonal direction ?
# Test if the the first and the last longitude are close
dx <- abs((tail(lonin, 1) %% 360) - (lonin[1] %% 360))
# Shortest distance on a torus
if (dx > 180) {
dx <- 360 - dx
}
# If this distance is smaller than twice the grid spacing we are global
if (dx <= (lonin[2] - lonin[1]) * 2) {
# Is the target area not fully inside the reference climatology area ?
if (lon[1] < lonin[1]) {
# Shift lonin to the west by 180 degrees
nn <- length(lonin)
zclim0 <- zclim
zclim[1:(nn / 2), ] <- zclim0[(nn / 2 + 1):nn, ]
zclim[(nn / 2 + 1):nn, ] <- zclim0[1:(nn / 2), ]
lonin <- lonin - 180
} else if (tail(lon, 1) > tail(lonin, 1)) {
# Shift lonin to the east by 180 degrees
nn <- length(lonin)
zclim0 <- zclim
zclim[1:(nn / 2), ] <- zclim0[(nn / 2 + 1):nn, ]
zclim[(nn / 2 + 1):nn, ] <- zclim0[1:(nn / 2), ]
lonin <- lonin + 180
}
}
ww <- rfweights(zclim, lonin, latin, lon, lat, nf, fsmooth = fsmooth)
if (frev) {
ww <- ww[, seq(dim(ww)[2], 1)]
}
attributes(dim(ww))$names <- c(lonname, latname)
return(ww)
}
#'@rdname CST_RainFARM
#'@title RainFARM stochastic precipitation downscaling of a CSTools object
#'
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'
#'@description This function implements the RainFARM stochastic precipitation
#'downscaling method and accepts a CSTools object (an object of the class
#''s2dv_cube' as provided by `CST_Load`) as input.
#'Adapted for climate downscaling and including orographic correction
#'as described in Terzago et al. 2018.
#'@references Terzago, S. et al. (2018). NHESS 18(11), 2825-2840.
#'\doi{10.5194/nhess-18-2825-2018};
#'D'Onofrio et al. (2014), J of Hydrometeorology 15, 830-843; Rebora et. al.
#'(2006), JHM 7, 724.
#'@param data An object of the class 's2dv_cube' as returned by `CST_Load`,
#' containing the spatial precipitation fields to downscale.
#' The data object is expected to have an element named \code{$data} with at
#' least two spatial dimensions named "lon" and "lat" and one or more
#' dimensions over which to compute average spectral slopes (unless specified
#' with parameter \code{slope}), which can be specified by parameter
#' \code{time_dim}. The number of longitudes and latitudes in the input data is
#' expected to be even and the same. If not the function will perform a
#' subsetting to ensure this condition.
#'@param weights Matrix with climatological weights which can be obtained using
#' the \code{CST_RFWeights} function. If \code{weights = 1.} (default) no
#' weights are used. The names of these dimensions must be at least 'lon' and
#' 'lat'.
#'@param nf Refinement factor for downscaling (the output resolution is
#' increased by this factor).
#'@param slope Prescribed spectral slope. The default is \code{slope = 0.}
#' meaning that the slope is determined automatically over the dimensions
#' specified by \code{time_dim}. A 1D array with named dimension can be
#' provided (see details and examples).
#'@param kmin First wavenumber for spectral slope (default: \code{kmin = 1}).
#'@param nens Number of ensemble members to produce (default: \code{nens = 1}).
#'@param fglob Logical to conserve global precipitation over the domain
#' (default: FALSE).
#'@param fsmooth Logical to conserve precipitation with a smoothing kernel
#' (default: TRUE).
#'@param time_dim String or character array with name(s) of dimension(s)
#' (e.g. "ftime", "sdate", "member" ...) over which to compute spectral slopes.
#' If a character array of dimension names is provided, the spectral slopes
#' will be computed as an average over all elements belonging to those
#' dimensions. If omitted one of c("ftime", "sdate", "time") is searched and
#' the first one with more than one element is chosen.
#'@param verbose Logical for verbose output (default: FALSE).
#'@param drop_realization_dim Logical to remove the "realization" stochastic
#' ensemble dimension, needed for saving data through function CST_SaveData
#' (default: FALSE) with the following behaviour if set to TRUE:
#' \enumerate{
#' \item{if \code{nens == 1}: the dimension is dropped;}
#' \item{if \code{nens > 1} and a "member" dimension exists: the "realization"
#' and "member" dimensions are compacted (multiplied) and the resulting
#' dimension is named "member";}
#' \item{if \code{nens > 1} and a "member" dimension does not exist: the
#' "realization" dimension is renamed to "member".}
#' }
#'@param nprocs The number of parallel processes to spawn for the use for
#' parallel computation in multiple cores. (default: 1)
#'
#'@return CST_RainFARM() returns a downscaled CSTools object (i.e., of the
#'class 's2dv_cube'). If \code{nens > 1} an additional dimension named
#'"realization" is added to the \code{$data} array after the "member" dimension
#'(unless \code{drop_realization_dim = TRUE} is specified). The ordering of the
#'remaining dimensions in the \code{$data} element of the input object is
#'maintained.
#'@details If parameters 'slope' and 'weights' have a seasonal dependency, a
#'dimension name should match between these parameters and the input data in
#'parameter 'data'. See Example 2 below, where weights and slope vary with the
#''sdate' dimension.
#'@examples
#'# Example 1: using CST_RainFARM for a CSTools object
#'nf <- 8 # Choose a downscaling by factor 8
#'exp <- 1 : (2 * 3 * 4 * 8 * 8)
#'dim(exp) <- c(dataset = 1, member = 2, sdate = 3, ftime = 4, lat = 8, lon = 8)
#'lon <- seq(10, 13.5, 0.5)
#'lat <- seq(40, 43.5, 0.5)
#'coords <- list(lon = lon, lat = lat)
#'data <- list(data = exp, coords = coords)
#'class(data) <- 's2dv_cube'
#'# Create a test array of weights
#'ww <- array(1., dim = c(lon = 8 * nf, lat = 8 * nf))
#'res <- CST_RainFARM(data, nf = nf, weights = ww, nens = 3, time_dim = 'ftime')
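#'
#'# Example 2 (illustrative sketch): 'weights' and 'slope' varying along the
#'# 'sdate' dimension of the data created above
#'ws <- array(1., dim = c(lon = 8 * nf, lat = 8 * nf, sdate = 3))
#'slo <- array(c(0.5, 0.6, 0.7), c(sdate = 3))
#'res2 <- CST_RainFARM(data, nf = nf, weights = ws, slope = slo, nens = 2,
#'                     time_dim = 'ftime')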
#'@import multiApply
#'@import rainfarmr
#'@importFrom ClimProjDiags Subset
#'@importFrom abind abind
#'@export
CST_RainFARM <- function(data, weights = 1., slope = 0, nf, kmin = 1,
nens = 1, fglob = FALSE, fsmooth = TRUE,
nprocs = 1, time_dim = NULL, verbose = FALSE,
drop_realization_dim = FALSE) {
# Check 's2dv_cube'
if (!inherits(data, "s2dv_cube")) {
stop("Parameter 'data' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
# Check 'exp' object structure
if (!all(c('data', 'coords') %in% names(data))) {
stop("Parameter 'data' must have 'data' and 'coords' elements ",
"within the 's2dv_cube' structure.")
}
# Check coordinates
if (!any(names(data$coords) %in% .KnownLonNames()) |
!any(names(data$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted by ",
"the package.")
}
# Check dimensions
if (!any(names(dim(data$data)) %in% .KnownLonNames()) |
!any(names(dim(data$data)) %in% .KnownLatNames())) {
stop("Spatial dimension names do not match any of the names accepted by ",
"the package.")
}
lon <- names(data$coords)[[which(names(data$coords) %in% .KnownLonNames())]]
lat <- names(data$coords)[[which(names(data$coords) %in% .KnownLatNames())]]
lon_name <- names(dim(data$data))[[which(names(dim(data$data)) %in% .KnownLonNames())]]
lat_name <- names(dim(data$data))[[which(names(dim(data$data)) %in% .KnownLatNames())]]
res <- RainFARM(data = data$data,
lon = as.vector(data$coords[[lon]]),
lat = as.vector(data$coords[[lat]]),
nf = nf, weights = weights, nens, slope, kmin,
fglob, fsmooth, nprocs, time_dim,
lon_dim = lon_name, lat_dim = lat_name,
drop_realization_dim, verbose)
data$data <- res$data
data$coords[[lon]] <- res[[lon_name]]
data$coords[[lat]] <- res[[lat_name]]
return(data)
}
#'@rdname RainFARM
#'@title RainFARM stochastic precipitation downscaling (reduced version)
#'@author Jost von Hardenberg - ISAC-CNR, \email{[email protected]}
#'@description This function implements the RainFARM stochastic precipitation downscaling method
#'and accepts in input an array with named dims ("lon", "lat")
#'and one or more dimension (such as "ftime", "sdate" or "time")
#'over which to average automatically determined spectral slopes.
#'Adapted for climate downscaling and including orographic correction.
#'References:
#'Terzago, S. et al. (2018). NHESS 18(11), 2825-2840. \doi{10.5194/nhess-18-2825-2018},
#'D'Onofrio et al. (2014), J of Hydrometeorology 15, 830-843; Rebora et. al.
#'(2006), JHM 7, 724.
#'@param data Precipitation array to downscale. The input array is expected to
#' have at least two dimensions named "lon" and "lat" by default (these default
#' names can be changed with the \code{lon_dim} and \code{lat_dim} parameters)
#' and one or more dimensions over which to average these slopes, which can be
#' specified by parameter \code{time_dim}. The number of longitudes and
#' latitudes in the input data is expected to be even and the same. If not
#' the function will perform a subsetting to ensure this condition.
#'@param lon Vector or array of longitudes.
#'@param lat Vector or array of latitudes.
#'@param weights Multi-dimensional array with climatological weights which can
#' be obtained using the \code{CST_RFWeights} function. If \code{weights = 1.}
#' (default) no weights are used. The names of these dimensions must be at
#' least the same longitudinal and latitudinal dimension names as data.
#'@param nf Refinement factor for downscaling (the output resolution is
#' increased by this factor).
#'@param slope Prescribed spectral slope. The default is \code{slope = 0.}
#' meaning that the slope is determined automatically over the dimensions
#' specified by \code{time_dim}. A 1D array with named dimension can be
#' provided (see details and examples).
#'@param kmin First wavenumber for spectral slope (default: \code{kmin = 1}).
#'@param nens Number of ensemble members to produce (default: \code{nens = 1}).
#'@param fglob Logical to conserve global precipitation over the domain
#' (default: FALSE).
#'@param fsmooth Logical to conserve precipitation with a smoothing kernel
#' (default: TRUE).
#'@param time_dim String or character array with name(s) of time dimension(s)
#' (e.g. "ftime", "sdate", "time" ...) over which to compute spectral slopes.
#' If a character array of dimension names is provided, the spectral slopes
#' will be computed over all elements belonging to those dimensions.
#' If omitted one of c("ftime", "sdate", "time") is searched and the first one
#' with more than one element is chosen.
#'@param lon_dim Name of lon dimension ("lon" by default).
#'@param lat_dim Name of lat dimension ("lat" by default).
#'@param verbose logical for verbose output (default: FALSE).
#'@param drop_realization_dim Logical to remove the "realization" stochastic
#' ensemble dimension (default: FALSE) with the following behaviour if set to
#' TRUE:
#' \enumerate{
#' \item{if \code{nens == 1}: the dimension is dropped;}
#' \item{if \code{nens > 1} and a "member" dimension exists: the "realization"
#' and "member" dimensions are compacted (multiplied) and the resulting
#' dimension is named "member";}
#' \item{if \code{nens > 1} and a "member" dimension does not exist: the
#' "realization" dimension is renamed to "member".}
#' }
#'@param nprocs The number of parallel processes to spawn for the use for
#' parallel computation in multiple cores. (default: 1)
#'@return RainFARM() Returns a list containing the fine-scale longitudes,
#' latitudes and the sequence of \code{nens} downscaled fields. If
#' \code{nens > 1} an additional dimension named "realization" is added to the
#' output array after the "member" dimension (if it exists and unless
#' \code{drop_realization_dim = TRUE} is specified). The ordering of the
#' remaining dimensions in the \code{exp} element of the input object is
#' maintained.
#'@details If parameters 'slope' and 'weights' have a seasonal dependency, a
#'dimension name should match between these parameters and the input data in
#'parameter 'data'. See the second example below, where weights and slope vary
#'with the 'sdate' dimension.
#'@examples
#'# Example for the 'reduced' RainFARM function
#'nf <- 8 # Choose a downscaling by factor 8
#'exp <- 1 : (2 * 3 * 4 * 8 * 8)
#'dim(exp) <- c(dataset = 1, member = 2, sdate = 3, ftime = 4, lat = 8, lon = 8)
#'lon <- seq(10, 13.5, 0.5)
#'lat <- seq(40, 43.5, 0.5)
#'# Create a test array of weights
#'ww <- array(1., dim = c(lon = 8 * nf, lat = 8 * nf))
#'res <- RainFARM(data = exp, lon = lon, lat = lat, nf = nf,
#' weights = ww, nens = 3, time_dim = 'ftime')
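#'
#'# Second example (sketch): 'weights' and 'slope' varying along 'sdate'
#'ws <- array(1., dim = c(lon = 8 * nf, lat = 8 * nf, sdate = 3))
#'slo <- array(c(0.5, 0.6, 0.7), c(sdate = 3))
#'res2 <- RainFARM(data = exp, lon = lon, lat = lat, nf = nf, weights = ws,
#'                 slope = slo, nens = 2, time_dim = 'ftime')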
#'@import multiApply
#'@import rainfarmr
#'@importFrom ClimProjDiags Subset
#'@importFrom abind abind
#'@export
RainFARM <- function(data, lon, lat, nf, weights = 1., nens = 1, slope = 0,
kmin = 1, fglob = FALSE, fsmooth = TRUE, nprocs = 1,
time_dim = NULL, lon_dim = "lon", lat_dim = "lat",
drop_realization_dim = FALSE, verbose = FALSE) {
# Check 'lon_dim' and 'lat_dim' parameters
if (!all(c(lon_dim, lat_dim) %in% names(dim(data)))) {
stop("Parameters 'lon_dim' and 'lat_dim' do not match with 'data' ",
"dimension names.")
}
if (length(dim(weights)) > 0) {
if (!all(c(lon_dim, lat_dim) %in% names(dim(weights)))) {
stop("Parameters 'lon_dim' and 'lat_dim' do not match with 'weights' ",
"dimension names.")
}
}
# Ensure input grid is square and with even dimensions
if ( (dim(data)[lon_dim] != dim(data)[lat_dim]) |
(dim(data)[lon_dim] %% 2 == 1)) {
warning("Warning: input data are expected to be on a square grid",
" with an even number of pixels per side.")
nmin <- min(dim(data)[lon_dim], dim(data)[lat_dim])
nmin <- floor(nmin / 2) * 2
data <- .subset(data, lat_dim, 1:nmin)
data <- .subset(data, lon_dim, 1:nmin)
if (length(dim(lon)) == 2) {
lon <- lon[1:nmin, 1:nmin]
lat <- lat[1:nmin, 1:nmin]
} else {
lon <- lon[1:nmin]
lat <- lat[1:nmin]
}
warning("The input data have been cut to the range.")
warning(paste0("lon: [", lon[1], ", ", lon[length(lon)], "] ",
" lat: [", lat[1], ", ", lat[length(lat)], "]"))
}
if (length(dim(weights)) > 0) {
if (length(names(dim(weights))) == 0) {
stop("Parameter 'weights' must have dimension names when it is not a scalar.")
} else {
if (length(which(names(dim(weights)) == lon_dim)) > 0 &
length(which(names(dim(weights)) == lat_dim)) > 0) {
lonposw <- which(names(dim(weights)) == lon_dim)
latposw <- which(names(dim(weights)) == lat_dim)
} else {
stop("Parameter 'weights' must have dimension names equal to latitudinal",
" and longitudinal dimension names as 'data' when it is not a scalar.")
}
}
}
if (!(length(dim(weights)) == 0)) {
if (!(dim(weights)[lonposw] == dim(data)[lon_dim] * nf) &
!(dim(weights)[latposw] == dim(data)[lat_dim] * nf)) {
stop(paste("The dimensions of the weights matrix (", dim(weights)[1],
"x", dim(weights)[2] ,
") are not consistent with the size of the data (",
dim(data)[lon_dim], ") and the refinement factor (", nf, ")"))
}
}
# Check/detect time_dim
if (is.null(time_dim)) {
time_dim_names <- c("ftime", "sdate", "time")
time_dim_num <- which(time_dim_names %in% names(dim(data)))
if (length(time_dim_num) > 0) {
# Find time dimension with length > 1
ilong <- which(dim(data)[time_dim_names[time_dim_num]] > 1)
if (length(ilong) > 0) {
time_dim <- time_dim_names[time_dim_num[ilong[1]]]
} else {
stop("No time dimension longer than one found.")
}
} else {
stop("Could not automatically detect a target time dimension ",
"in the provided data in 'data'.")
}
warning(paste("Selected time dim:", time_dim))
}
# Check if slope is an array
#if (length(slope) > 1) {
# warning("Parameter 'slope' has length > 1 and only the first ",
# "element will be used.")
# slope <- as.numeric(slope[1])
#}
# Perform common calls
r <- lon_lat_fine(lon, lat, nf)
lon_f <- r[['lon']]
lat_f <- r[['lat']]
# reorder and group time_dim together at the end
cdim0 <- dim(data)
imask <- names(cdim0) %in% time_dim
data <- .aperm2(data, c(which(!imask), which(imask)))
cdim <- dim(data)
ind <- 1:length(which(!imask))
# compact (multiply) time_dim dimensions
dim(data) <- c(cdim[ind], rainfarm_samples = prod(cdim[-ind]))
# Repeatedly apply .RainFARM
if (length(weights) == 1 & length(slope) == 1) {
result <- Apply(data, c(lon_dim, lat_dim, "rainfarm_samples"), .RainFARM,
weights, slope, nf, nens, kmin,
fglob, fsmooth, ncores = nprocs, verbose,
split_factor = "greatest")$output1
} else if (length(slope) == 1 & length(weights) > 1 ) {
result <- Apply(list(data, weights),
list(c(lon_dim, lat_dim, "rainfarm_samples"),
c(lonposw, latposw)),
.RainFARM, slope = slope,
nf = nf, nens = nens, kmin = kmin,
fglob = fglob, fsmooth = fsmooth, ncores = nprocs,
verbose = verbose,
split_factor = "greatest")$output1
} else {
result <- Apply(list(data, weights, slope),
list(c(lon_dim, lat_dim, "rainfarm_samples"),
c(lonposw, latposw), NULL),
fun = .RainFARM,
nf = nf, nens = nens, kmin = kmin,
fglob = fglob, fsmooth = fsmooth, ncores = nprocs,
verbose = verbose,
split_factor = "greatest")$output1
}
# result has dims: lon, lat, rainfarm_samples, realization, other dims
# Expand back rainfarm_samples to compacted dims
dim(result) <- c(dim(result)[1:2], cdim[-ind], dim(result)[-(1:3)])
# Reorder as it was in original data
# + realization dim after member if it exists
ienspos <- which(names(cdim0) == "member")
if (length(ienspos) == 0) ienspos <- length(names(cdim0))
iorder <- sapply(c(names(cdim0)[1:ienspos], "realization",
names(cdim0)[-(1:ienspos)]),
grep, names(dim(result)))
ndim <- names(dim(result))
result <- aperm(result, iorder)
# R < 3.2.3 compatibility fix
names(dim(result)) <- ndim[iorder]
if (drop_realization_dim) {
cdim <- dim(result)
if (nens == 1) {
dim(result) <- cdim[-which(names(cdim) == "realization")[1]]
} else if ("member" %in% names(cdim)) {
# compact member and realization dimension if member dim exists,
# else rename realization to member
ind <- which(names(cdim) %in% c("member", "realization"))
dim(result) <- c(cdim[1:(ind[1] - 1)], cdim[ind[1]] * cdim[ind[2]],
cdim[(ind[2] + 1):length(cdim)])
} else {
ind <- which(names(cdim) %in% "realization")
names(dim(result))[ind] <- "member"
}
}
res <- NULL
res[['data']] <- result
res[[lon_dim]] <- lon_f
res[[lat_dim]] <- lat_f
return(res)
}
#'Atomic RainFARM
#'@param pr Precipitation array to downscale with dimensions (lon, lat, time).
#'@param weights Matrix with climatological weights which can be obtained using
#' the \code{CST_RFWeights} function (default: \code{weights = 1.} i.e. no
#' weights).
#'@param slope Prescribed spectral slope (default: \code{slope = 0.}, meaning
#' that the slope is determined automatically over the dimensions specified by
#' \code{time_dim}).
#'@param nf Refinement factor for downscaling (the output resolution is
#' increased by this factor).
#'@param kmin First wavenumber for spectral slope (default: \code{kmin = 1}).
#'@param nens Number of ensemble members to produce (default: \code{nens = 1}).
#'@param fglob Logical to conserve global precipitation over the domain
#' (default: FALSE).
#'@param fsmooth Logical to conserve precipitation with a smoothing kernel
#' (default: TRUE).
#'@param verbose Logical for verbose output (default: FALSE).
#'@return .RainFARM returns a downscaled array with dimensions (lon, lat, time,
#' realization)
#'@noRd
.RainFARM <- function(pr, weights, slope, nf, nens, kmin,
fglob, fsmooth, verbose) {
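  # Samples (time steps) that are flagged as missing (NA in their first grid
  # point) are removed before fitting the slope and downscaling, and
  # reinserted as NA afterwards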
posna <- NULL
if (any(is.na(pr))) {
posna <- unlist(lapply(1:dim(pr)['rainfarm_samples'],
function(x){!is.na(pr[1, 1, x])}))
pr <- Subset(pr, 'rainfarm_samples', posna)
}
if (slope == 0) {
fxp <- fft2d(pr)
sx <- fitslope(fxp, kmin = kmin)
} else {
sx <- slope
}
result_dims <- c(dim(pr)[1] * nf, dim(pr)[2] * nf, dim(pr)[3],
realization = nens)
r <- array(dim = result_dims)
for (i in 1:nens) {
r[, , , i] <- rainfarm(pr, sx, nf, weights, fglob = fglob,
fsmooth = fsmooth, verbose = verbose)
}
# restoring NA values in their position:
if (!is.null(posna)) {
pos <- which(posna == FALSE)
dimdata <- dim(r)
xdim <- which(names(dimdata) == 'rainfarm_samples')
dimdata[xdim] <- dimdata[xdim] + length(pos)
new <- array(NA, dimdata)
posT <- which(posna == TRUE)
i = 1
invisible(lapply(posT, function(x) {
new[,,x,] <<- r[,,i,]
i <<- i + 1
}))
#names(dim(r)) <- names(result_dims)
warning("Missing values found in the samples.")
r <- new
}
return(r)
}
# Function to generalize through do.call() n-dimensional array subsetting
# and array indexing. Derived from Stack Overflow issue
# https://stackoverflow.com/questions/14500707/select-along-one-of-n-dimensions-in-array
.subset <- function(field, dim_name, range, drop = FALSE) {
ndim <- names(dim(field))
idim <- which(ndim %in% dim_name )
# Create list representing arguments supplied to [
# bquote() creates an object corresponding to a missing argument
indices <- rep(list(bquote()), length(dim(field)))
indices[[idim]] <- range
# do.call on the indices
field <- do.call("[", c(list(field), indices, list(drop = drop)))
# Needed for R <=3.2
names(dim(field)) <- ndim
return(field)
}
#'@rdname CST_RegimesAssign
#'@title Function for matching a field of anomalies with
#'a set of maps used as a reference (e.g. clusters obtained from the WeatherRegime function)
#'
#'@author Verónica Torralba - BSC, \email{[email protected]}
#'
#'@description This function performs the matching between a field of anomalies
#'and a set of maps which will be used as a reference. The anomalies will be
#'assigned to the reference map for which the minimum Euclidean distance
#'(method = 'distance') or highest spatial correlation (method = 'ACC') is
#'obtained.
#'
#'@references Torralba, V. (2019) Seasonal climate prediction for the wind
#'energy sector: methods and tools for the development of a climate service.
#'Thesis. Available online: \url{https://eprints.ucm.es/56841/}
#'
#'@param data An 's2dv_cube' object.
#'@param ref_maps An 's2dv_cube' object as the output of CST_WeatherRegimes.
#'@param method Whether the matching will be performed in terms of minimum
#' distance (default = 'distance') or the maximum spatial correlation
#' (method = 'ACC') between the maps.
#'@param composite A logical parameter indicating if the composite maps are
#' computed or not (default = FALSE).
#'@param memb A logical value indicating whether to compute composites for
#' separate members (default FALSE) or as a unique ensemble (TRUE). This option
#' is only available when parameter 'composite' is set to TRUE and the data
#' object has a dimension named 'member'.
#'@param ncores The number of multicore threads to use for parallel computation.
#'@return A list with two elements \code{$data} (a 's2dv_cube' object containing
#'the composites cluster=1,..,K for case (*1) or only k=1 for any specific
#'cluster, i.e., case (*2)) (only when composite = 'TRUE') and \code{$statistics}
#'that includes \code{$pvalue} (array with the same structure as \code{$data}
#'containing the pvalue of the composites obtained through a t-test that
#'accounts for the serial dependence of the data with the same structure as
#'Composite.)(only when composite = 'TRUE'), \code{$cluster} (array with the
#'same dimensions as data (except latitude and longitude which are removed)
#'indicating the ref_maps to which each point is allocated.), \code{$frequency}
#'(A vector of integers (from k=1,...k n reference maps) indicating the
#'percentage of assignations corresponding to each map.).
#'@examples
#'data <- array(abs(rnorm(1280, 282.7, 6.4)), dim = c(dataset = 2, member = 2,
#' sdate = 3, ftime = 3,
#' lat = 4, lon = 4))
#'coords <- list(lon = seq(0, 3), lat = seq(47, 44))
#'exp <- list(data = data, coords = coords)
#'class(exp) <- 's2dv_cube'
#'regimes <- CST_WeatherRegimes(data = exp, EOFs = FALSE,
#' ncenters = 4)
#'res1 <- CST_RegimesAssign(data = exp, ref_maps = regimes,
#' composite = FALSE)
#'@importFrom s2dv ACC MeanDims InsertDim
#'@import multiApply
#'@export
CST_RegimesAssign <- function(data, ref_maps,
method = "distance",
composite = FALSE,
memb = FALSE, ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
if (!inherits(ref_maps, 's2dv_cube')) {
stop("Parameter 'ref_maps' must be of the class 's2dv_cube', ",
"as output by CSTools::CST_Load.")
}
# Check 'exp' object structure
if (!all(c('data', 'coords') %in% names(data))) {
stop("Parameter 'data' must have 'data' and 'coords' elements ",
"within the 's2dv_cube' structure.")
}
# Check coordinates
if (!any(names(data$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted ",
"the package.")
} else {
lat_name <- names(data$coords)[[which(names(data$coords) %in% .KnownLatNames())]]
lat <- as.vector(data$coords[[lat_name]])
}
result <- RegimesAssign(data = data$data, ref_maps = ref_maps$data, lat = lat,
method = method, composite = composite,
memb = memb, ncores = ncores)
if (composite) {
data$data <- result$composite
data$statistics <- result[-1]
} else {
data <- NULL
data$statistics <- result
}
return(data)
}
#'@rdname RegimesAssign
#'@title Function for matching a field of anomalies with
#'a set of maps used as a reference (e.g. clusters obtained from the WeatherRegime function).
#'
#'@author Verónica Torralba - BSC, \email{[email protected]}
#'
#'@description This function performs the matching between a field of anomalies
#'and a set of maps which will be used as a reference. The anomalies will be
#'assigned to the reference map for which the minimum Euclidean distance
#'(method = 'distance') or highest spatial correlation (method = 'ACC') is
#'obtained.
#'
#'@references Torralba, V. (2019) Seasonal climate prediction for the wind
#'energy sector: methods and tools for the development of a climate service.
#'Thesis. Available online: \url{https://eprints.ucm.es/56841/}
#'
#'@param data An array containing anomalies with named dimensions: dataset,
#' member, sdate, ftime, lat and lon.
#'@param ref_maps Array with 3-dimensions ('lon', 'lat', 'cluster') containing
#' the maps/clusters that will be used as a reference for the matching.
#'@param method Whether the matching will be performed in terms of minimum
#' distance (default = 'distance') or the maximum spatial correlation
#' (method = 'ACC') between the maps.
#'@param lat A vector of latitudes corresponding to the positions provided in
#' data and ref_maps.
#'@param composite A logical parameter indicating if the composite maps are
#' computed or not (default = FALSE).
#'@param memb A logical value indicating whether to compute composites for
#' separate members (default FALSE) or as a unique ensemble (TRUE). This option
#' is only available when parameter 'composite' is set to TRUE and the data
#' object has a dimension named 'member'.
#'@param ncores The number of multicore threads to use for parallel computation.
#'@return A list with elements \code{$composite} (3-d array (lon, lat, k)
#'containing the composites k = 1,..,K for case (*1) or only k = 1 for any specific
#'cluster, i.e., case (*2)) (only if composite = 'TRUE'), \code{$pvalue} (array
#'with the same structure as \code{$composite} containing the pvalue of the
#'composites obtained through a t-test that accounts for the serial dependence
#'of the data with the same structure as Composite.) (only if composite='TRUE'),
#'\code{$cluster} (array with the same dimensions as data (except latitude and
#'longitude which are removed) indicating the ref_maps to which each point is
#'allocated.), \code{$frequency} (A vector of integers (from k = 1, ... k n
#'reference maps) indicating the percentage of assignations corresponding to
#'each map.),
#'
#'@examples
#'data <- array(abs(rnorm(1280, 282.7, 6.4)), dim = c(dataset = 2, member = 2,
#' sdate = 3, ftime = 3,
#' lat = 4, lon = 4))
#'regimes <- WeatherRegime(data = data, lat = seq(47, 44),
#' EOFs = FALSE, ncenters = 4)$composite
#'res1 <- RegimesAssign(data = data, ref_maps = drop(regimes),
#' lat = seq(47, 44), composite = FALSE)
#'@importFrom s2dv ACC MeanDims Eno InsertDim
#'@import multiApply
#'@export
RegimesAssign <- function(data, ref_maps, lat, method = "distance", composite = FALSE,
memb = FALSE, ncores = NULL) {
## Initial checks
# data
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must be an array with named dimensions.")
}
# ref_maps
if (is.null(ref_maps)) {
stop("Parameter 'ref_maps' must be specified.")
}
if (is.null(names(dim(ref_maps)))) {
stop("Parameter 'ref_maps' must be an array with named dimensions.")
}
# lat
if (is.null(lat)) {
stop("Parameter 'lat' must be specified.")
}
# memb
if (!is.logical(memb)) {
stop("Parameter 'memb' must be logical.")
}
# composite
if (!is.logical(composite)) {
stop("Parameter 'memb' must be logical.")
}
dimData <- names(dim(data))
# Know spatial coordinates names
if (!any(dimData %in% .KnownLonNames()) |
!any(dimData %in% .KnownLatNames())) {
stop("Spatial coordinate dimension names do not match any of the names ",
"accepted by the package.")
}
lon_name <- dimData[[which(dimData %in% .KnownLonNames())]]
lat_name <- dimData[[which(dimData %in% .KnownLatNames())]]
dimRef <- names(dim(ref_maps))
if (!any(dimRef %in% .KnownLonNames()) |
!any(dimRef %in% .KnownLatNames())) {
stop("Spatial coordinate dimension names do not match any of the names ",
"accepted by the package.")
}
lon_name_ref <- dimRef[[which(dimRef %in% .KnownLonNames())]]
lat_name_ref <- dimRef[[which(dimRef %in% .KnownLatNames())]]
if (!all( c('cluster', lat_name_ref, lon_name_ref) %in% dimRef)) {
stop("Parameter 'ref_maps' must contain the named dimensions
'cluster', and the spatial coordinates accepted names.")
}
if (length(lat) != dim(data)[lat_name] |
(length(lat) != dim(ref_maps)[lat_name_ref])) {
stop("Parameter 'lat' does not match with the latitudinal dimension",
" in the parameter 'data' or in the parameter 'ref_maps'.")
}
# Temporal dimensions
if ('sdate' %in% dimData && 'ftime' %in% dimData) {
nsdates <- dim(data)['sdate']
nftimes <- dim(data)['ftime']
data <- MergeDims(data,
merge_dims = c('ftime','sdate'),
rename_dim = 'time')
} else if ('sdate' %in% dimData | 'ftime' %in% dimData) {
names(dim(data))[which(dimData == 'sdate' | dimData == 'ftime') ] = 'time'
} else {
if (!('time' %in% dimData)) {
stop("Parameter 'data' must have temporal dimensions.")
}
}
ref_maps <- drop(ref_maps)
index <- Apply(data = list(ref = ref_maps, target = data),
target_dims = list(c(lat_name_ref, lon_name_ref, 'cluster'),
c(lat_name, lon_name)),
fun = .RegimesAssign,
lat = lat, method = method,
lon_name = lon_name, lat_name = lat_name,
lon_name_ref = lon_name_ref, lat_name_ref = lat_name_ref,
ncores = ncores)[[1]]
nclust <- dim(ref_maps)['cluster']
freqs <- rep(NA, nclust)
for (n in 1:nclust) {
freqs[n] <- (length(which(index == n)) / length(index)) * 100
}
if (composite) {
poslon <- which(names(dim(data)) == lon_name)
poslat <- which(names(dim(data)) == lat_name)
postime <- which(names(dim(data)) == 'time')
posdim <- setdiff(1:length(dim(data)), c(postime, poslat, poslon))
dataComp <- aperm(data, c(poslon, poslat, postime, posdim))
if (any(is.na(index))) {
recon <-list(
composite = InsertDim(array(NA, dim = c(dim(dataComp)[-postime])),
postime, dim(ref_maps)['composite.cluster'], name = ''),
pvalue = InsertDim(array(NA, dim = c(dim(dataComp)[-postime])),
postime, dim(ref_maps)['composite.cluster'], name = ''))
} else {
if (memb) {
dataComp <- MergeDims(dataComp, merge_dims = c('time', 'member'), rename_dim = 'time')
index <- MergeDims(index, merge_dims = c('time', 'member'), rename_dim = 'time')
}
recon <- Apply(data = list(var = dataComp, occ = index),
target_dims = list(c(lon_name, lat_name, 'time'), c('time')),
fun = Composite,
K = dim(ref_maps)['cluster'])
}
output <- list(composite = recon$composite,
pvalue = recon$pvalue,
cluster = index,
frequency = freqs)
} else {
output <- list(cluster = index,
frequency = freqs)
}
return(output)
}
.RegimesAssign <- function(ref, target, method = 'distance', lat,
composite = FALSE,
lon_name = 'lon', lat_name = 'lat',
lon_name_ref = 'lon', lat_name_ref = 'lat') {
# ref: [lat_name_ref, lon_name_ref, 'cluster']
# target: [lat_name, lon_name]
posdim <- which(names(dim(ref)) == 'cluster')
poslat <- which(names(dim(ref)) == lat_name_ref)
poslon <- which(names(dim(ref)) == lon_name_ref)
nclust <- dim(ref)[posdim]
if (all(dim(ref)[-posdim] != dim(target))) {
stop('The target should have the same dimensions [lat_name, lon_name] as ',
'the reference.')
}
if (is.null(names(dim(ref))) | is.null(names(dim(target)))) {
stop('The arrays should include dimension names ref[cluster, lat_name, ',
'lon_name] and target [lat_name, lon_name]'
)
}
if (length(lat) != dim(ref)[poslat]) {
stop('latitudes do not match with the maps')
}
if (is.na(max(target))){
assign <- NA
} else {
# These dimensions are reorganized
ref <- aperm(ref, c(posdim, poslat, poslon))
target <- aperm(target,
c(which(names(dim(target)) == lat_name),
which(names(dim(target)) == lon_name)))
# weights are defined
latWeights <- InsertDim(sqrt(cos(lat * pi / 180)), 2, dim(ref)[3])
rmsdiff <- function(x, y) {
dims <- dim(x)
ndims <- length(dims)
if (ndims != 2 | ndims != length(dim(y))) {
stop('x and y should be maps')
}
map_diff <- NA * x
for (i in 1:dims[1]) {
for (j in 1:dims[2]) {
map_diff[i, j] <- (x[i, j] - y[i, j]) ^ 2
}
}
rmsdiff <- sqrt(mean(map_diff))
return(rmsdiff)
}
if (method == 'ACC') {
corr <- rep(NA, nclust)
for (i in 1:nclust) {
#NOTE: s2dv::ACC returns centralized and weighted result.
corr[i] <-
ACC(ref[i, , ], target, lat = lat, dat_dim = NULL, avg_dim = NULL,
memb_dim = NULL)$acc
}
assign <- which(corr == max(corr))
}
if (method == 'distance') {
rms <- rep(NA, nclust)
for (i in 1:nclust) {
rms[i] <- rmsdiff(ref[i, , ] * latWeights, target * latWeights)
}
assign <- which(rms == min(rms))
}
}
return(assign)
}
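# A minimal standalone sketch (not part of the package, wrapped in if (FALSE)
# so it never runs when this file is sourced) of the latitude-weighted RMS
# assignment that .RegimesAssign() performs with method = 'distance': each grid
# point is weighted by sqrt(cos(lat)) and the target map is assigned to the
# reference map with the smallest root-mean-square difference. The toy maps and
# latitudes below are made up for illustration only.
if (FALSE) {
  set.seed(1)
  lat <- c(47, 46, 45, 44)
  ref <- array(rnorm(4 * 5 * 3), dim = c(lat = 4, lon = 5, cluster = 3))
  target <- ref[, , 2] + rnorm(20, sd = 0.1)     # a map close to cluster 2
  w <- matrix(sqrt(cos(lat * pi / 180)), nrow = 4, ncol = 5)  # lat weights
  rms <- sapply(1:3, function(k) {
    sqrt(mean(((ref[, , k] - target) * w)^2))    # weighted RMS difference
  })
  which.min(rms)                                 # expected: 2
}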
Composite <- function(var, occ, lag = 0, eno = FALSE, K = NULL, fileout = NULL) {
if ( dim(var)[3] != length(occ) ) {
stop("Temporal dimension of var is not equal to length of occ.")
}
if (is.null(K)) {
K <- max(occ)
}
composite <- array(dim = c(dim(var)[1:2], composite = K))
tvalue <- array(dim = dim(var)[1:2])
dof <- array(dim = dim(var)[1:2])
pvalue <- array(dim = c(dim(var)[1:2], composite = K))
if (eno == TRUE) {
n_tot <- Eno(var, time_dim = 'time')
} else {
n_tot <- length(occ)
}
mean_tot <- MeanDims(var, dims = 3, na.rm = TRUE)
stdv_tot <- apply(var, c(1, 2), sd, na.rm = TRUE)
for (k in 1 : K) {
if (length(which(occ == k)) >= 1) {
indices <- which(occ == k) + lag
toberemoved <- which(0 > indices | indices > dim(var)[3])
if (length(toberemoved) > 0) {
indices <- indices[-toberemoved]
}
if (eno == TRUE) {
n_k <- Eno(var[, , indices], time_dim = 'time')
} else {
n_k <- length(indices)
}
if (length(indices) == 1) {
composite[, , k] <- var[, , indices]
warning(paste("Composite", k, "has length 1 and pvalue is NA."))
} else {
composite[, , k] <- MeanDims(var[, , indices], dims = 3, na.rm = TRUE)
}
stdv_k <- apply(var[, , indices], c(1, 2), sd, na.rm = TRUE)
tvalue <- (mean_tot - composite[, , k]) /
sqrt(stdv_tot ^ 2 / n_tot + stdv_k ^ 2 / n_k)
dof <- (stdv_tot ^ 2 / n_tot + stdv_k ^ 2 / n_k) ^ 2 /
((stdv_tot ^ 2 / n_tot) ^ 2 / (n_tot - 1) +
(stdv_k ^ 2 / n_k) ^ 2 / (n_k - 1))
pvalue[, , k] <- 2 * pt(-abs(tvalue), df = dof)
}
}
if (is.null(fileout) == FALSE) {
output <- list(composite = composite, pvalue = pvalue)
save(output, file = paste(fileout, '.sav', sep = ''))
}
invisible(list(composite = composite, pvalue = pvalue))
}
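# A minimal standalone sketch (not part of the package; illustration only, not
# executed) of the unequal-variance t statistic and Welch-Satterthwaite degrees
# of freedom that Composite() applies at each grid point to obtain the p-value,
# checked against stats::t.test(). The sample values are made up. Note that in
# Composite() the first sample is the full time series (which contains the
# cluster's own time steps); two independent toy samples are used here just to
# verify the formula.
if (FALSE) {
  x_all <- c(1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3)  # all time steps
  x_k   <- c(1.5, 1.4, 1.3)                           # steps assigned to cluster k
  n_tot <- length(x_all)
  n_k   <- length(x_k)
  tval  <- (mean(x_all) - mean(x_k)) /
           sqrt(var(x_all) / n_tot + var(x_k) / n_k)
  dof   <- (var(x_all) / n_tot + var(x_k) / n_k)^2 /
           ((var(x_all) / n_tot)^2 / (n_tot - 1) +
            (var(x_k) / n_k)^2 / (n_k - 1))
  pval  <- 2 * pt(-abs(tval), df = dof)
  all.equal(pval, t.test(x_all, x_k)$p.value)         # TRUE: same Welch test
}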
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/CST_RegimesAssign.R
|
#'Save objects of class 's2dv_cube' to data in NetCDF format
#'
#'@author Perez-Zanon Nuria, \email{[email protected]}
#'
#'@description This function allows dividing and saving an object of class
#''s2dv_cube' into a NetCDF file, so that the saved data can be reloaded using
#'the \code{CST_Start} or \code{CST_Load} functions. It also allows saving any
#''s2dv_cube' object that follows the NetCDF attribute conventions.
#'
#'@param data An object of class \code{s2dv_cube}.
#'@param destination A character string containing the directory name in which
#' to save the data. NetCDF files for each starting date are saved into the
#' folder tree: 'destination/Dataset/variable/'. By default the function
#' saves the data into the working directory.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'. It can be NULL if there is no
#' start date dimension.
#'@param ftime_dim A character string indicating the name of the forecast time
#' dimension. If 'Dates' are used, it can't be NULL. If there is no forecast
#' time dimension, 'Dates' will be set to NULL and will not be used. By
#' default, it is set to 'time'.
#'@param dat_dim A character string indicating the name of dataset dimension.
#' It can be NULL if there is no dataset dimension. By default, it is set to
#' 'dataset'.
#'@param var_dim A character string indicating the name of variable dimension.
#' It can be NULL if there is no variable dimension. By default, it is set to
#' 'var'.
#'@param memb_dim A character string indicating the name of the member
#' dimension. It can be NULL if there is no member dimension. By default, it is
#' set to 'member'.
#'@param startdates A vector of dates that will be used for the filenames
#' when saving the data in multiple files (single_file = FALSE). It must be a
#' vector of the same length as the start date dimension of data. It must be a
#' vector of class \code{Dates}, \code{'POSIXct'} or character with lengths
#' between 1 and 10. If it is NULL, the coordinate corresponding to the start
#' date dimension or the first Date of each time step will be used as the name
#' of the files. It is NULL by default.
#'@param single_file A logical value indicating if the whole object is saved in
#' a single file (TRUE) or in multiple files (FALSE). When it is FALSE,
#' the array is separated for datasets, variable and start date. When there are
#' no specified time dimensions, the data will be saved in a single file by
#' default. The output file name when 'single_file' is TRUE is a character
#' string containing: '<var>_<first_sdate>_<last_sdate>.nc'; when it is FALSE,
#' it is '<var>_<sdate>.nc'. It is FALSE by default.
#'@param drop_dims (optional) A vector of character strings indicating the
#' dimension names of length 1 that need to be dropped so that they do not
#' appear in the NetCDF file. Only dimensions that are not used in the
#' computation can be dropped. The dimensions used in the computation are the
#' ones specified in: sdate_dim, ftime_dim, dat_dim, var_dim and memb_dim. It is
#' NULL by default.
#'@param extra_string (Optional) A character string to be included as part of
#' the file name, for instance, to identify member or realization. When
#' single_file is TRUE, the 'extra_string' will substitute all the default
#' file name; when single_file is FALSE, the 'extra_string' will be added
#' in the file name as: '<var>_<extra_string>_<sdate>.nc'. It is NULL by
#' default.
#'@param units_hours_since (Optional) A logical value only available for the
#' case: 'Dates' have forecast time and start date dimension, 'single_file' is
#' TRUE and 'time_bounds' are not used. When it is TRUE, it saves the forecast
#' time with units of 'hours since'; if it is FALSE, the time units will be a
#' number of time steps with its corresponding frequency (e.g. n days, n months
#' or n hours). It is FALSE by default.
#'@param global_attrs (Optional) A list with elements containing the global
#' attributes to be saved in the NetCDF.
#'
#'@return Multiple or single NetCDF files containing the data array.\cr
#'\item{\code{single_file is TRUE}}{
#' All data is saved in a single file located in the specified destination
#' path with the following name (by default):
#' '<variable_name>_<first_sdate>_<last_sdate>.nc'. Multiple variables
#' are saved separately in the same file. The forecast time units
#' are calculated from each start date (if sdate_dim is not NULL) or from
#' the time step. If 'units_hours_since' is TRUE, the forecast time units
#' will be 'hours since <each start date>'. If 'units_hours_since' is FALSE,
#' the forecast time units are extracted from the frequency of the time steps
#' (hours, days, months); if no frequency is found, the units will be 'hours
#' since'. When the time units are 'hours since', the time steps are assumed to
#' be equally spaced.
#'}
#'\item{\code{single_file is FALSE}}{
#' The data array is subset and stored into multiple files. Each file
#' contains the data subset for each start date, variable and dataset. Files
#' with different variables and datasets are stored in separated directories
#' within the following directory tree: 'destination/Dataset/variable/'.
#' The name of each file will be by default: '<variable_name>_<sdate>.nc'.
#' The forecast time units are calculated from each start date (if sdate_dim
#' is not NULL) or from the time step. The forecast time units will be 'hours
#' since <each start date>'.
#'}
#'
#'@seealso \code{\link[startR]{Start}}, \code{\link{as.s2dv_cube}} and
#'\code{\link{s2dv_cube}}
#'
#'@examples
#'\dontrun{
#'data <- lonlat_temp_st$exp
#'CST_SaveExp(data = data, ftime_dim = 'ftime', var_dim = 'var',
#' dat_dim = 'dataset', sdate_dim = 'sdate')
#'}
#'
#'@export
CST_SaveExp <- function(data, destination = "./", startdates = NULL,
sdate_dim = 'sdate', ftime_dim = 'time',
memb_dim = 'member', dat_dim = 'dataset',
var_dim = 'var', drop_dims = NULL,
single_file = FALSE, extra_string = NULL,
global_attrs = NULL, units_hours_since = FALSE) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Check object structure
if (!all(c('data', 'attrs') %in% names(data))) {
stop("Parameter 'data' must have at least 'data' and 'attrs' elements ",
"within the 's2dv_cube' structure.")
}
if (!inherits(data$attrs, 'list')) {
stop("Level 'attrs' must be a list with at least 'Dates' element.")
}
# metadata
if (!is.null(data$attrs$Variable$metadata)) {
if (!inherits(data$attrs$Variable$metadata, 'list')) {
stop("Element metadata from Variable element in attrs must be a list.")
}
}
# Dates
if (is.null(data$attrs$Dates)) {
stop("Element 'Dates' from 'attrs' level cannot be NULL.")
}
if (is.null(dim(data$attrs$Dates))) {
stop("Element 'Dates' from 'attrs' level must have time dimensions.")
}
# sdate_dim
if (!is.null(sdate_dim)) {
if (!is.character(sdate_dim)) {
stop("Parameter 'sdate_dim' must be a character string.")
}
}
# startdates
if (is.null(startdates)) {
if (is.character(data$coords[[sdate_dim]])) {
startdates <- data$coords[[sdate_dim]]
}
}
SaveExp(data = data$data,
destination = destination,
coords = data$coords,
Dates = data$attrs$Dates,
time_bounds = data$attrs$time_bounds,
startdates = startdates,
varname = data$attrs$Variable$varName,
metadata = data$attrs$Variable$metadata,
Datasets = data$attrs$Datasets,
sdate_dim = sdate_dim, ftime_dim = ftime_dim,
memb_dim = memb_dim,
dat_dim = dat_dim, var_dim = var_dim,
drop_dims = drop_dims,
single_file = single_file,
extra_string = extra_string,
global_attrs = global_attrs,
units_hours_since = units_hours_since)
}
#'Save a multidimensional array with metadata to data in NetCDF format
#'@description This function allows saving a data array with metadata into a
#'NetCDF file, so that the saved data can be reloaded using the \code{Start}
#'function from the startR package. If the original 's2dv_cube' object has been
#'created from \code{CST_Load()}, then it can be reloaded with \code{Load()}.
#'
#'@author Perez-Zanon Nuria, \email{[email protected]}
#'
#'@param data A multi-dimensional array with named dimensions.
#'@param destination A character string indicating the path where to store the
#' NetCDF files.
#'@param coords A named list with elements of the coordinates corresponding to
#' the dimensions of the data parameter. The names and length of each element
#' must correspond to the names of the dimensions. If any coordinate is not
#' provided, it is set as an index vector with the values from 1 to the length
#' of the corresponding dimension.
#'@param Dates A named array of dates with the corresponding sdate and forecast
#' time dimension. If there is no sdate_dim, you can set it to NULL.
#' It must have ftime_dim dimension.
#'@param time_bounds (Optional) A list of two arrays of dates containing
#' the lower (first array) and the upper (second array) time bounds
#' corresponding to Dates. Each array must have the same dimensions as Dates.
#' If 'Dates' parameter is NULL, 'time_bounds' are not used. It is NULL by
#' default.
#'@param startdates A vector of dates that will be used for the filenames
#' when saving the data in multiple files (single_file = FALSE). It must be a
#' vector of the same length as the start date dimension of data. It must be a
#' vector of class \code{Dates}, \code{'POSIXct'} or character with lengths
#' between 1 and 10. If it is NULL, the coordinate corresponding to the start
#' date dimension or the first Date of each time step will be used as the name
#' of the files. It is NULL by default.
#'@param varname A character string indicating the name of the variable to be
#' saved.
#'@param metadata A named list where each element is a variable containing the
#' corresponding information. The information must be contained in a list of
#' lists for each variable.
#'@param Datasets A vector of character string indicating the names of the
#' datasets.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. By default, it is set to 'sdate'. It can be NULL if there is no
#' start date dimension.
#'@param ftime_dim A character string indicating the name of the forecast time
#' dimension. By default, it is set to 'time'. It can be NULL if there is no
#' forecast time dimension.
#'@param dat_dim A character string indicating the name of dataset dimension.
#' By default, it is set to 'dataset'. It can be NULL if there is no dataset
#' dimension.
#'@param var_dim A character string indicating the name of variable dimension.
#' By default, it is set to 'var'. It can be NULL if there is no variable
#' dimension.
#'@param memb_dim A character string indicating the name of the member
#' dimension. By default, it is set to 'member'. It can be NULL if there is no
#' member dimension.
#'@param drop_dims (optional) A vector of character strings indicating the
#' dimension names of length 1 that need to be dropped so that they do not
#' appear in the NetCDF file. Only dimensions that are not used in the
#' computation can be dropped. The dimensions used in the computation are the
#' ones specified in: sdate_dim, ftime_dim, dat_dim, var_dim and memb_dim. It is
#' NULL by default.
#'@param single_file A logical value indicating if the whole object is saved in
#' a single file (TRUE) or in multiple files (FALSE). When it is FALSE,
#' the array is separated for datasets, variable and start date. When there are
#' no specified time dimensions, the data will be saved in a single file by
#' default. The output file name when 'single_file' is TRUE is a character
#' string containing: '<var>_<first_sdate>_<last_sdate>.nc'; when it is FALSE,
#' it is '<var>_<sdate>.nc'. It is FALSE by default.
#'@param extra_string (Optional) A character string to be included as part of
#' the file name, for instance, to identify member or realization. When
#' single_file is TRUE, the 'extra_string' will substitute all the default
#' file name; when single_file is FALSE, the 'extra_string' will be added
#' in the file name as: '<var>_<extra_string>_<sdate>.nc'. It is NULL by
#' default.
#'@param global_attrs (Optional) A list with elements containing the global
#' attributes to be saved in the NetCDF.
#'@param units_hours_since (Optional) A logical value only available for the
#' case: Dates have forecast time and start date dimension, single_file is
#' TRUE and 'time_bounds' is NULL. When it is TRUE, it saves the forecast time
#' with units of 'hours since'; if it is FALSE, the time units will be a number
#' of time steps with its corresponding frequency (e.g. n days, n months or n
#' hours). It is FALSE by default.
#'
#'@return Multiple or single NetCDF files containing the data array.\cr
#'\item{\code{single_file is TRUE}}{
#' All data is saved in a single file located in the specified destination
#' path with the following name (by default):
#' '<variable_name>_<first_sdate>_<last_sdate>.nc'. Multiple variables
#' are saved separately in the same file. The forecast time units
#' are calculated from each start date (if sdate_dim is not NULL) or from
#' the time step. If 'units_hours_since' is TRUE, the forecast time units
#' will be 'hours since <each start date>'. If 'units_hours_since' is FALSE,
#' the forecast time units are extracted from the frequency of the time steps
#' (hours, days, months); if no frequency is found, the units will be 'hours
#' since'. When the time units are 'hours since', the time steps are assumed to
#' be equally spaced.
#'}
#'\item{\code{single_file is FALSE}}{
#' The data array is subset and stored into multiple files. Each file
#' contains the data subset for each start date, variable and dataset. Files
#' with different variables and datasets are stored in separated directories
#' within the following directory tree: 'destination/Dataset/variable/'.
#' The name of each file will be by default: '<variable_name>_<sdate>.nc'.
#' The forecast time units are calculated from each start date (if sdate_dim
#' is not NULL) or from the time step. The forecast time units will be 'hours
#' since <each start date>'.
#'}
#'
#'@examples
#'\dontrun{
#'data <- lonlat_temp_st$exp$data
#'lon <- lonlat_temp_st$exp$coords$lon
#'lat <- lonlat_temp_st$exp$coords$lat
#'coords <- list(lon = lon, lat = lat)
#'Datasets <- lonlat_temp_st$exp$attrs$Datasets
#'varname <- 'tas'
#'Dates <- lonlat_temp_st$exp$attrs$Dates
#'metadata <- lonlat_temp_st$exp$attrs$Variable$metadata
#'SaveExp(data = data, coords = coords, Datasets = Datasets, varname = varname,
#' Dates = Dates, metadata = metadata, single_file = TRUE,
#' ftime_dim = 'ftime', var_dim = 'var', dat_dim = 'dataset')
#'}
#'
#'@import easyNCDF
#'@importFrom s2dv Reorder
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
SaveExp <- function(data, destination = "./", coords = NULL,
Dates = NULL, time_bounds = NULL, startdates = NULL,
varname = NULL, metadata = NULL, Datasets = NULL,
sdate_dim = 'sdate', ftime_dim = 'time',
memb_dim = 'member', dat_dim = 'dataset', var_dim = 'var',
drop_dims = NULL, single_file = FALSE, extra_string = NULL,
global_attrs = NULL, units_hours_since = FALSE) {
## Initial checks
# data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
dimnames <- names(dim(data))
if (is.null(dimnames)) {
stop("Parameter 'data' must be an array with named dimensions.")
}
if (!is.null(attributes(data)$dimensions)) {
attributes(data)$dimensions <- NULL
}
# destination
if (!is.character(destination) | length(destination) > 1) {
stop("Parameter 'destination' must be a character string of one element ",
"indicating the name of the file (including the folder if needed) ",
"where the data will be saved.")
}
# drop_dims
if (!is.null(drop_dims)) {
if (!is.character(drop_dims) | any(!drop_dims %in% names(dim(data)))) {
warning("Parameter 'drop_dims' must be character string containing ",
"the data dimension names to be dropped. It will not be used.")
} else if (!all(dim(data)[drop_dims] %in% 1)) {
warning("Parameter 'drop_dims' can only contain dimension names ",
"that are of length 1. It will not be used.")
} else if (any(drop_dims %in% c(ftime_dim, sdate_dim, dat_dim, memb_dim, var_dim))) {
warning("Parameter 'drop_dims' contains dimensions used in the computation. ",
"It will not be used.")
drop_dims <- NULL
} else {
data <- Subset(x = data, along = drop_dims,
indices = lapply(1:length(drop_dims), function(x) 1),
drop = 'selected')
dimnames <- names(dim(data))
}
}
# coords
if (!is.null(coords)) {
if (!inherits(coords, 'list')) {
stop("Parameter 'coords' must be a named list of coordinates.")
}
if (is.null(names(coords))) {
stop("Parameter 'coords' must have names corresponding to coordinates.")
}
} else {
coords <- sapply(dimnames, function(x) 1:dim(data)[x])
}
# varname
if (is.null(varname)) {
varname <- 'X'
} else if (length(varname) > 1) {
multiple_vars <- TRUE
} else {
multiple_vars <- FALSE
}
if (!all(sapply(varname, is.character))) {
stop("Parameter 'varname' must be a character string with the ",
"variable names.")
}
# single_file
if (!inherits(single_file, 'logical')) {
warning("Parameter 'single_file' must be a logical value. It will be ",
"set as FALSE.")
single_file <- FALSE
}
# extra_string
if (!is.null(extra_string)) {
if (!is.character(extra_string)) {
stop("Parameter 'extra_string' must be a character string.")
}
}
# global_attrs
if (!is.null(global_attrs)) {
if (!inherits(global_attrs, 'list')) {
stop("Parameter 'global_attrs' must be a list.")
}
}
## Dimensions checks
# Spatial coordinates
if (!any(dimnames %in% .KnownLonNames()) |
!any(dimnames %in% .KnownLatNames())) {
lon_dim <- NULL
lat_dim <- NULL
} else {
lon_dim <- dimnames[which(dimnames %in% .KnownLonNames())]
lat_dim <- dimnames[which(dimnames %in% .KnownLatNames())]
}
# ftime_dim
if (!is.null(ftime_dim)) {
if (!is.character(ftime_dim)) {
stop("Parameter 'ftime_dim' must be a character string.")
}
if (!all(ftime_dim %in% dimnames)) {
stop("Parameter 'ftime_dim' is not found in 'data' dimension. Set it ",
"as NULL if there is no forecast time dimension.")
}
}
# sdate_dim
if (!is.null(sdate_dim)) {
if (!is.character(sdate_dim)) {
stop("Parameter 'sdate_dim' must be a character string.")
}
if (!all(sdate_dim %in% dimnames)) {
stop("Parameter 'sdate_dim' is not found in 'data' dimension.")
}
}
# memb_dim
if (!is.null(memb_dim)) {
if (!is.character(memb_dim)) {
stop("Parameter 'memb_dim' must be a character string.")
}
if (!all(memb_dim %in% dimnames)) {
stop("Parameter 'memb_dim' is not found in 'data' dimension. Set it ",
"as NULL if there is no member dimension.")
}
}
# dat_dim
if (!is.null(dat_dim)) {
if (!is.character(dat_dim)) {
stop("Parameter 'dat_dim' must be a character string.")
}
if (!all(dat_dim %in% dimnames)) {
stop("Parameter 'dat_dim' is not found in 'data' dimension. Set it ",
"as NULL if there is no Datasets dimension.")
}
n_datasets <- dim(data)[dat_dim]
} else {
n_datasets <- 1
}
# var_dim
if (!is.null(var_dim)) {
if (!is.character(var_dim)) {
stop("Parameter 'var_dim' must be a character string.")
}
if (!all(var_dim %in% dimnames)) {
stop("Parameter 'var_dim' is not found in 'data' dimension. Set it ",
"as NULL if there is no variable dimension.")
}
n_vars <- dim(data)[var_dim]
} else {
n_vars <- 1
}
# minimum dimensions
if (all(dimnames %in% c(var_dim, dat_dim))) {
if (!single_file) {
warning("Parameter data has only ",
paste(c(var_dim, dat_dim), collapse = ' and '), " dimensions ",
"and it cannot be splitted in multiple files. All data will ",
"be saved in a single file.")
single_file <- TRUE
}
}
# Dates (1): initial checks
if (!is.null(Dates)) {
if (!any(inherits(Dates, "POSIXct"), inherits(Dates, "Date"))) {
stop("Parameter 'Dates' must be of 'POSIXct' or 'Dates' class.")
}
if (is.null(dim(Dates))) {
stop("Parameter 'Dates' must have dimension names.")
}
if (all(is.null(ftime_dim), is.null(sdate_dim))) {
warning("Parameters 'ftime_dim' and 'sdate_dim' can't both be NULL ",
"if 'Dates' are used. 'Dates' will not be used.")
Dates <- NULL
}
# sdate_dim in Dates
if (!is.null(sdate_dim)) {
if (!sdate_dim %in% names(dim(Dates))) {
warning("Parameter 'sdate_dim' is not found in 'Dates' dimension. ",
"Dates will not be used.")
Dates <- NULL
}
}
# ftime_dim in Dates
if (!is.null(ftime_dim)) {
if (!ftime_dim %in% names(dim(Dates))) {
warning("Parameter 'ftime_dim' is not found in 'Dates' dimension. ",
"Dates will not be used.")
Dates <- NULL
}
}
}
# time_bounds
if (!is.null(time_bounds)) {
if (!inherits(time_bounds, 'list')) {
stop("Parameter 'time_bounds' must be a list with two dates arrays.")
}
time_bounds_dims <- lapply(time_bounds, function(x) dim(x))
if (!identical(time_bounds_dims[[1]], time_bounds_dims[[2]])) {
stop("Parameter 'time_bounds' must have 2 arrays with same dimensions.")
}
if (is.null(Dates)) {
time_bounds <- NULL
} else {
name_tb <- sort(names(time_bounds_dims[[1]]))
name_dt <- sort(names(dim(Dates)))
if (!identical(dim(Dates)[name_dt], time_bounds_dims[[1]][name_tb])) {
stop(paste0("Parameter 'Dates' and 'time_bounds' must have same length ",
"of all dimensions."))
}
}
}
# Dates (2): Check dimensions
if (!is.null(Dates)) {
if (any(dim(Dates)[!names(dim(Dates)) %in% c(ftime_dim, sdate_dim)] != 1)) {
stop("Parameter 'Dates' can have only 'sdate_dim' and 'ftime_dim' ",
"dimensions of length greater than 1.")
}
# drop dimensions of length 1 different from sdate_dim and ftime_dim
dim(Dates) <- dim(Dates)[names(dim(Dates)) %in% c(ftime_dim, sdate_dim)]
# add ftime if needed
if (is.null(ftime_dim)) {
warning("A 'time' dimension of length 1 will be added to 'Dates'.")
dim(Dates) <- c(time = 1, dim(Dates))
dim(data) <- c(time = 1, dim(data))
dimnames <- names(dim(data))
ftime_dim <- 'time'
if (!is.null(time_bounds)) {
time_bounds <- lapply(time_bounds, function(x) {
dim(x) <- c(time = 1, dim(x))
return(x)
})
}
units_hours_since <- TRUE
}
# add sdate if needed
if (is.null(sdate_dim)) {
if (!single_file) {
dim(Dates) <- c(dim(Dates), sdate = 1)
dim(data) <- c(dim(data), sdate = 1)
dimnames <- names(dim(data))
sdate_dim <- 'sdate'
if (!is.null(time_bounds)) {
time_bounds <- lapply(time_bounds, function(x) {
dim(x) <- c(dim(x), sdate = 1)
return(x)
})
}
if (!is.null(startdates)) {
if (length(startdates) != 1) {
warning("Parameter 'startdates' must be of length 1 if 'sdate_dim' is NULL.",
"They won't be used.")
startdates <- NULL
}
}
}
units_hours_since <- TRUE
}
}
# startdates
if (!is.null(Dates)) {
# check startdates
if (is.null(startdates)) {
startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected')
} else if (any(nchar(startdates) > 10, nchar(startdates) < 1)) {
warning("Parameter 'startdates' should be a character string containing ",
"the start dates in the format 'yyyy-mm-dd', 'yyyymmdd', 'yyyymm', ",
"'POSIXct' or 'Dates' class. Files will be named with Dates instead.")
startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected')
}
} else if (!single_file) {
warning("Dates must be provided if 'data' must be saved in separated files. ",
"All data will be saved in a single file.")
single_file <- TRUE
}
# startdates
if (is.null(startdates)) {
if (is.null(sdate_dim)) {
startdates <- 'XXX'
} else {
startdates <- rep('XXX', dim(data)[sdate_dim])
}
} else {
if (any(inherits(startdates, "POSIXct"), inherits(startdates, "Date"))) {
startdates <- format(startdates, "%Y%m%d")
}
if (!is.null(sdate_dim)) {
if (dim(data)[sdate_dim] != length(startdates)) {
warning(paste0("Parameter 'startdates' doesn't have the same length ",
"as dimension '", sdate_dim,"', it will not be used."))
startdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected')
startdates <- format(startdates, "%Y%m%d")
}
}
}
# Datasets
if (is.null(Datasets)) {
Datasets <- rep('XXX', n_datasets )
}
if (inherits(Datasets, 'list')) {
Datasets <- names(Datasets)
}
if (n_datasets > length(Datasets)) {
warning("Dimension 'Datasets' in 'data' is greater than those listed in ",
"element 'Datasets' and the first element will be reused.")
Datasets <- c(Datasets, rep(Datasets[1], n_datasets - length(Datasets)))
} else if (n_datasets < length(Datasets)) {
warning("Dimension 'Datasets' in 'data' is smaller than those listed in ",
"element 'Datasets' and only the firsts elements will be used.")
Datasets <- Datasets[1:n_datasets]
}
## NetCDF dimensions definition
excluded_dims <- var_dim
if (!is.null(Dates)) {
excluded_dims <- c(excluded_dims, sdate_dim, ftime_dim)
}
if (!single_file) {
excluded_dims <- c(excluded_dims, dat_dim)
}
## Unknown dimensions check
alldims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, ftime_dim, memb_dim)
if (!all(dimnames %in% alldims)) {
unknown_dims <- dimnames[which(!dimnames %in% alldims)]
memb_dim <- c(memb_dim, unknown_dims)
}
filedims <- c(dat_dim, var_dim, sdate_dim, lon_dim, lat_dim, ftime_dim, memb_dim)
filedims <- filedims[which(!filedims %in% excluded_dims)]
# Delete unneeded coords
coords[c(names(coords)[!names(coords) %in% filedims])] <- NULL
out_coords <- NULL
for (i_coord in filedims) {
# vals
if (i_coord %in% names(coords)) {
if (length(coords[[i_coord]]) != dim(data)[i_coord]) {
warning(paste0("Coordinate '", i_coord, "' has different lenght as ",
"its dimension and it will not be used."))
out_coords[[i_coord]] <- 1:dim(data)[i_coord]
} else if (is.numeric(coords[[i_coord]])) {
out_coords[[i_coord]] <- as.vector(coords[[i_coord]])
} else {
out_coords[[i_coord]] <- 1:dim(data)[i_coord]
}
} else {
out_coords[[i_coord]] <- 1:dim(data)[i_coord]
}
dim(out_coords[[i_coord]]) <- dim(data)[i_coord]
## metadata
if (i_coord %in% names(metadata)) {
if ('variables' %in% names(attributes(metadata[[i_coord]]))) {
# from Start: 'lon' or 'lat'
attrs <- attributes(metadata[[i_coord]])[['variables']]
attrs[[i_coord]]$dim <- NULL
attr(out_coords[[i_coord]], 'variables') <- attrs
} else if (inherits(metadata[[i_coord]], 'list')) {
# from Start and Load: main var
attr(out_coords[[i_coord]], 'variables') <- list(metadata[[i_coord]])
names(attributes(out_coords[[i_coord]])$variables) <- i_coord
} else if (!is.null(attributes(metadata[[i_coord]]))) {
# from Load
attrs <- attributes(metadata[[i_coord]])
# We remove because some attributes can't be saved
attrs <- NULL
attr(out_coords[[i_coord]], 'variables') <- list(attrs)
names(attributes(out_coords[[i_coord]])$variables) <- i_coord
}
}
}
if (!single_file) {
for (i in 1:n_datasets) {
path <- file.path(destination, Datasets[i], varname)
for (j in 1:n_vars) {
if (!dir.exists(path[j])) {
dir.create(path[j], recursive = TRUE)
}
startdates <- gsub("-", "", startdates)
dim(startdates) <- c(length(startdates))
names(dim(startdates)) <- sdate_dim
if (is.null(dat_dim) & is.null(var_dim)) {
data_subset <- data
} else if (is.null(dat_dim)) {
data_subset <- Subset(data, c(var_dim), list(j), drop = 'selected')
} else if (is.null(var_dim)) {
data_subset <- Subset(data, along = c(dat_dim), list(i), drop = 'selected')
} else {
data_subset <- Subset(data, c(dat_dim, var_dim), list(i, j), drop = 'selected')
}
target <- names(dim(data_subset))[which(!names(dim(data_subset)) %in% c(sdate_dim, ftime_dim))]
target_dims_data <- c(target, ftime_dim)
if (is.null(Dates)) {
input_data <- list(data_subset, startdates)
target_dims <- list(target_dims_data, NULL)
} else if (!is.null(time_bounds)) {
input_data <- list(data_subset, startdates, Dates,
time_bounds[[1]], time_bounds[[2]])
target_dims = list(target_dims_data, NULL,
ftime_dim, ftime_dim, ftime_dim)
} else {
input_data <- list(data_subset, startdates, Dates)
target_dims = list(target_dims_data, NULL, ftime_dim)
}
Apply(data = input_data,
target_dims = target_dims,
fun = .saveexp,
destination = path[j],
coords = out_coords,
ftime_dim = ftime_dim,
varname = varname[j],
metadata_var = metadata[[varname[j]]],
extra_string = extra_string,
global_attrs = global_attrs)
}
}
} else {
# time_bnds
if (!is.null(time_bounds)) {
time_bnds <- c(time_bounds[[1]], time_bounds[[2]])
}
# Dates
remove_metadata_dim <- TRUE
if (!is.null(Dates)) {
if (is.null(sdate_dim)) {
sdates <- Dates[1]
# ftime definition
leadtimes <- as.numeric(difftime(Dates, sdates, units = "hours"))
} else {
# sdate definition
sdates <- Subset(Dates, along = ftime_dim, 1, drop = 'selected')
differ <- as.numeric(difftime(sdates, sdates[1], units = "hours"))
dim(differ) <- dim(data)[sdate_dim]
differ <- list(differ)
names(differ) <- sdate_dim
out_coords <- c(differ, out_coords)
attrs <- list(units = paste('hours since', sdates[1]),
calendar = 'proleptic_gregorian', longname = sdate_dim)
attr(out_coords[[sdate_dim]], 'variables')[[sdate_dim]] <- attrs
# ftime definition
Dates <- Reorder(Dates, c(ftime_dim, sdate_dim))
differ_ftime <- array(dim = dim(Dates))
for (i in 1:length(sdates)) {
differ_ftime[, i] <- as.numeric(difftime(Dates[, i], Dates[1, i],
units = "hours"))
}
dim(differ_ftime) <- dim(Dates)
leadtimes <- Subset(differ_ftime, along = sdate_dim, 1, drop = 'selected')
if (!all(apply(differ_ftime, 1, function(x){length(unique(x)) == 1}))) {
warning("Time steps are not equal for all start dates. Only ",
"forecast time values for the first start date will be saved ",
"correctly.")
}
}
if (all(!units_hours_since, is.null(time_bounds))) {
if (all(diff(leadtimes/24) == 1)) {
# daily values
units <- 'days'
leadtimes_vals <- round(leadtimes/24) + 1
} else if (all(diff(leadtimes/24) %in% c(28, 29, 30, 31))) {
# monthly values
units <- 'months'
leadtimes_vals <- round(leadtimes/(30.437*24)) + 1
} else {
# other frequency
units <- 'hours'
leadtimes_vals <- leadtimes + 1
}
} else {
units <- paste('hours since', paste(sdates, collapse = ', '))
leadtimes_vals <- leadtimes
}
# Add time_bnds
if (!is.null(time_bounds)) {
if (is.null(sdate_dim)) {
sdates <- Dates[1]
time_bnds <- c(time_bounds[[1]], time_bounds[[2]])
leadtimes_bnds <- as.numeric(difftime(time_bnds, sdates, units = "hours"))
dim(leadtimes_bnds) <- c(dim(Dates), bnds = 2)
} else {
# assuming they have sdate and ftime
time_bnds <- lapply(time_bounds, function(x) {
x <- Reorder(x, c(ftime_dim, sdate_dim))
return(x)
})
time_bnds <- c(time_bnds[[1]], time_bnds[[2]])
dim(time_bnds) <- c(dim(Dates), bnds = 2)
differ_bnds <- array(dim = c(dim(time_bnds)))
for (i in 1:length(sdates)) {
differ_bnds[, i, ] <- as.numeric(difftime(time_bnds[, i, ], Dates[1, i],
units = "hours"))
}
# NOTE (TODO): Add a warning when they are not equally spaced?
leadtimes_bnds <- Subset(differ_bnds, along = sdate_dim, 1, drop = 'selected')
}
# Add time_bnds
leadtimes_bnds <- Reorder(leadtimes_bnds, c('bnds', ftime_dim))
leadtimes_bnds <- list(leadtimes_bnds)
names(leadtimes_bnds) <- 'time_bnds'
out_coords <- c(leadtimes_bnds, out_coords)
attrs <- list(units = paste('hours since', paste(sdates, collapse = ', ')),
calendar = 'proleptic_gregorian',
long_name = 'time bounds', unlim = FALSE)
attr(out_coords[['time_bnds']], 'variables')$time_bnds <- attrs
}
# Add ftime var
dim(leadtimes_vals) <- dim(data)[ftime_dim]
leadtimes_vals <- list(leadtimes_vals)
names(leadtimes_vals) <- ftime_dim
out_coords <- c(leadtimes_vals, out_coords)
attrs <- list(units = units, calendar = 'proleptic_gregorian',
longname = ftime_dim,
dim = list(list(name = ftime_dim, unlim = TRUE)))
if (!is.null(time_bounds)) {
attrs$bounds = 'time_bnds'
}
attr(out_coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs
for (j in 1:n_vars) {
remove_metadata_dim <- FALSE
metadata[[varname[j]]]$dim <- list(list(name = ftime_dim, unlim = TRUE))
}
# Reorder ftime_dim to last
if (length(dim(data)) != which(names(dim(data)) == ftime_dim)) {
order <- c(names(dim(data))[which(!names(dim(data)) %in% c(ftime_dim))], ftime_dim)
data <- Reorder(data, order)
}
}
# var definition
extra_info_var <- NULL
for (j in 1:n_vars) {
varname_j <- varname[j]
metadata_j <- metadata[[varname_j]]
if (is.null(var_dim)) {
out_coords[[varname_j]] <- data
} else {
out_coords[[varname_j]] <- Subset(data, var_dim, j, drop = 'selected')
}
if (!is.null(metadata_j)) {
if (remove_metadata_dim) metadata_j$dim <- NULL
attr(out_coords[[varname_j]], 'variables') <- list(metadata_j)
names(attributes(out_coords[[varname_j]])$variables) <- varname_j
}
# Add global attributes
if (!is.null(global_attrs)) {
attributes(out_coords[[varname_j]])$global_attrs <- global_attrs
}
}
if (is.null(extra_string)) {
first_sdate <- startdates[1]
last_sdate <- startdates[length(startdates)]
gsub("-", "", first_sdate)
file_name <- paste0(paste(c(varname,
gsub("-", "", first_sdate),
gsub("-", "", last_sdate)),
collapse = '_'), ".nc")
} else {
nc <- substr(extra_string, nchar(extra_string)-2, nchar(extra_string))
if (nc == ".nc") {
file_name <- extra_string
} else {
file_name <- paste0(extra_string, ".nc")
}
}
full_filename <- file.path(destination, file_name)
ArrayToNc(out_coords, full_filename)
}
}
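# A minimal standalone sketch (not part of the package; illustration only, not
# executed) of how SaveExp() infers the forecast time units when
# 'units_hours_since' is FALSE: lead times are the hour differences from the
# first date; steps of 24 hours are written as 'days', steps of 28-31 days as
# 'months', and anything else as 'hours'. The dates below are hypothetical
# monthly dates.
if (FALSE) {
  dates <- seq(as.POSIXct("2000-11-01", tz = "UTC"), by = "month", length.out = 3)
  leadtimes <- as.numeric(difftime(dates, dates[1], units = "hours"))
  if (all(diff(leadtimes / 24) == 1)) {
    units <- "days";   vals <- round(leadtimes / 24) + 1
  } else if (all(diff(leadtimes / 24) %in% c(28, 29, 30, 31))) {
    units <- "months"; vals <- round(leadtimes / (30.437 * 24)) + 1
  } else {
    units <- "hours";  vals <- leadtimes + 1
  }
  units  # "months"
  vals   # 1 2 3
}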
.saveexp <- function(data, coords, destination = "./",
startdates = NULL, dates = NULL,
time_bnds1 = NULL, time_bnds2 = NULL,
ftime_dim = 'time', varname = 'var',
metadata_var = NULL, extra_string = NULL,
global_attrs = NULL) {
remove_metadata_dim <- TRUE
if (!is.null(dates)) {
if (!any(is.null(time_bnds1), is.null(time_bnds2))) {
time_bnds <- c(time_bnds1, time_bnds2)
time_bnds <- as.numeric(difftime(time_bnds, dates[1], units = "hours"))
dim(time_bnds) <- c(dim(data)[ftime_dim], bnds = 2)
time_bnds <- Reorder(time_bnds, c('bnds', ftime_dim))
time_bnds <- list(time_bnds)
names(time_bnds) <- 'time_bnds'
coords <- c(time_bnds, coords)
attrs <- list(units = paste('hours since', dates[1]),
calendar = 'proleptic_gregorian',
longname = 'time bounds')
attr(coords[['time_bnds']], 'variables')$time_bnds <- attrs
}
# Add ftime_dim
differ <- as.numeric(difftime(dates, dates[1], units = "hours"))
dim(differ) <- dim(data)[ftime_dim]
differ <- list(differ)
names(differ) <- ftime_dim
coords <- c(differ, coords)
attrs <- list(units = paste('hours since', dates[1]),
calendar = 'proleptic_gregorian',
longname = ftime_dim,
dim = list(list(name = ftime_dim, unlim = TRUE)))
if (!is.null(time_bnds1)) {
attrs$bounds = 'time_bnds'
}
attr(coords[[ftime_dim]], 'variables')[[ftime_dim]] <- attrs
metadata_var$dim <- list(list(name = ftime_dim, unlim = TRUE))
remove_metadata_dim <- FALSE
}
# Add data
coords[[varname]] <- data
if (!is.null(metadata_var)) {
if (remove_metadata_dim) metadata_var$dim <- NULL
attr(coords[[varname]], 'variables') <- list(metadata_var)
names(attributes(coords[[varname]])$variables) <- varname
}
# Add global attributes
if (!is.null(global_attrs)) {
attributes(coords[[varname]])$global_attrs <- global_attrs
}
if (is.null(extra_string)) {
file_name <- paste0(varname, "_", startdates, ".nc")
} else {
file_name <- paste0(varname, "_", extra_string, "_", startdates, ".nc")
}
full_filename <- file.path(destination, file_name)
ArrayToNc(coords, full_filename)
}
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/CST_SaveExp.R
|
#'Function to Split Dimension
#'
#'@author Nuria Perez-Zanon, \email{[email protected]}
#'
#'@description This function splits a dimension into two. The user can select
#'the dimension to split and provide indices indicating how to split that
#'dimension, or dates and the expected frequency (monthly or by day, month and
#'year). The user can also provide a numeric frequency indicating the length of
#'each division.
#'
#'@param data An 's2dv_cube' object.
#'@param split_dim A character string indicating the name of the dimension to
#' split. It is set as 'time' by default.
#'@param indices A vector of numeric indices or dates. If left at NULL, the
#' dates provided in the s2dv_cube object (element Dates) will be used.
#'@param freq A character string indicating the frequency: by 'day', 'month' and
#' 'year' or 'monthly' (by default). 'month' identifies months between 1 and 12
#' independently of the year they belong to, while 'monthly' differentiates
#' months from different years.
#'@param new_dim_name A character string indicating the name of the new
#' dimension.
#'@param insert_ftime An integer indicating the number of time steps to add at
#' the beginning of the time series.
#'@param ftime_dim A character string indicating the name of the forecast time
#' dimension. It is set as 'time' by default.
#'@param sdate_dim A character string indicating the name of the start date
#' dimension. It is set as 'sdate' by default.
#'@param return_indices A logical value that if it is TRUE, the indices
#' used in splitting the dimension will be returned. It is FALSE by default.
#'
#'@details Parameter 'insert_ftime' has been included for the case of using
#'daily data that needs to be split by months (or similar) when the first lead
#'time does not correspond to the 1st day of the month. In this case,
#''insert_ftime' can be used to get a final output correctly organized. E.g.:
#'lead time 1 is the 2nd of November and the input time series extends to the
#'31st of December. When requesting a split by month with
#'\code{insert_ftime = 1}, the 'monthly' dimension of length two will indicate
#'the month (position 1 for November and position 2 for December), and the
#''time' dimension will have length 31. For November, positions 1 and 31 will
#'be NAs, while positions 2 to 30 will be filled with the data provided. This
#'allows to correctly select days through the time dimension.
#'@examples
#'data <- 1 : 20
#'dim(data) <- c(time = 10, lat = 2)
#'data <-list(data = data)
#'class(data) <- 's2dv_cube'
#'indices <- c(rep(1,5), rep(2,5))
#'new_data <- CST_SplitDim(data, indices = indices)
#'time <- c(seq(ISOdate(1903, 1, 1), ISOdate(1903, 1, 4), "days"),
#' seq(ISOdate(1903, 2, 1), ISOdate(1903, 2, 4), "days"),
#' seq(ISOdate(1904, 1, 1), ISOdate(1904, 1, 2), "days"))
#'data <- list(data = data$data, Dates = time)
#'class(data) <- 's2dv_cube'
#'new_data <- CST_SplitDim(data, indices = time)
#'new_data <- CST_SplitDim(data, indices = time, freq = 'day')
#'new_data <- CST_SplitDim(data, indices = time, freq = 'month')
#'new_data <- CST_SplitDim(data, indices = time, freq = 'year')
#'@import abind
#'@importFrom ClimProjDiags Subset
#'@importFrom s2dv Reorder
#'@export
CST_SplitDim <- function(data, split_dim = 'time', indices = NULL,
freq = 'monthly', new_dim_name = NULL,
insert_ftime = NULL, ftime_dim = 'time',
sdate_dim = 'sdate', return_indices = FALSE) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
if (!is.null(insert_ftime)) {
if (!is.numeric(insert_ftime)) {
stop("Parameter 'insert_ftime' should be an integer.")
}
if (length(insert_ftime) > 1) {
warning("Parameter 'insert_ftime' must be of length 1, and only the",
" first element will be used.")
insert_ftime <- insert_ftime[1]
}
# Check Dates
if (is.null(dim(data$attrs$Dates))) {
warning("Parameter 'Dates' must have dimensions, 'insert_ftime' won't ",
"be used.")
insert_ftime <- NULL
}
}
if (!is.null(insert_ftime)) {
# adding NAs at the begining of the data in ftime dim
ftimedim <- which(names(dim(data$data)) == ftime_dim)
dims <- dim(data$data)
dims[ftimedim] <- insert_ftime
empty_array <- array(NA, dims)
data$data <- abind(empty_array, data$data, along = ftimedim)
names(dim(data$data)) <- names(dims)
# Reorder dates
data$attrs$Dates <- Reorder(data$attrs$Dates, c(ftime_dim, sdate_dim))
dates <- data$attrs$Dates
dates_subset <- Subset(dates, sdate_dim, 1)
# adding dates to Dates for the new NAs introduced
if ((dates_subset[2] - dates_subset[1]) == 1) {
timefreq <- 'days'
} else {
timefreq <- 'months'
warning("Time frequency of forecast time is considered monthly.")
}
dim(dates) <- c(length(dates)/dims[sdate_dim], dims[sdate_dim])
names(dim(dates)) <- c(ftime_dim, sdate_dim)
# new <- array(NA, prod(dim(data$data)[c('ftime', 'sdate')]))
# Pending fix: transform to UTC when concatenating
data$attrs$Dates <- do.call(c, lapply(1:dim(dates)[2], function(x) {
seq(dates[1,x] - as.difftime(insert_ftime,
units = timefreq),
dates[dim(dates)[1],x], by = timefreq, tz = "UTC")}))
}
if (is.null(indices)) {
if (any(split_dim %in% c(ftime_dim, sdate_dim))) {
indices <- data$attrs$Dates
if (any(names(dim(data$data)) %in% sdate_dim)) {
if (!any(names(dim(data$data)) %in% split_dim)) {
stop("Parameter 'split_dims' must be one of the dimension ",
"names in parameter 'data'.")
}
indices <- indices[1:dim(data$data)[which(names(dim(data$data)) == split_dim)]]
}
}
}
# Call the function
res <- SplitDim(data = data$data, split_dim = split_dim,
indices = indices, freq = freq,
new_dim_name = new_dim_name,
dates = data$attrs$Dates,
return_indices = return_indices)
if (inherits(res, 'list')) {
data$data <- res$data
# Split dim on Dates
if (!is.null(res$dates)) {
data$attrs$Dates <- res$dates
}
} else {
data$data <- res
}
data$dims <- dim(data$data)
# Coordinates
# TO DO: Subset splitted coordinate and add the new dimension coordinate.
if (return_indices) {
return(list(data = data, indices = res$indices))
} else {
return(data)
}
}
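# A minimal standalone sketch (not part of the package; illustration only, not
# executed) of the padding described in the details of CST_SplitDim(): daily
# data starting on the 2nd of the month get one leading NA (insert_ftime = 1)
# so that, after splitting by month, position i along 'time' corresponds to day
# i of the month. The dates and values are made up.
if (FALSE) {
  dates <- seq(as.POSIXct("1903-11-02", tz = "UTC"),
               as.POSIXct("1903-12-31", tz = "UTC"), by = "day")
  x <- seq_along(dates)
  # Pad one NA at the beginning, as insert_ftime = 1 would do
  x_pad <- c(NA, x)
  dates_pad <- c(dates[1] - as.difftime(1, units = "days"), dates)
  months <- as.numeric(strftime(dates_pad, format = "%m", tz = "UTC"))
  # November block: position 1 is NA, positions 2-30 hold the data
  split(x_pad, months)[["11"]]
}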
#'Function to Split Dimension
#'
#'@author Nuria Perez-Zanon, \email{[email protected]}
#'
#'@description This function splits a dimension into two. The user can select
#'the dimension to split and provide indices indicating how to split that
#'dimension, or dates and the expected frequency (monthly or by day, month and
#'year). The user can also provide a numeric frequency indicating the length of
#'each division.
#'
#'@param data An n-dimensional array with named dimensions.
#'@param split_dim A character string indicating the name of the dimension to
#' split.
#'@param indices A vector of numeric indices or dates.
#'@param freq A character string indicating the frequency: by 'day', 'month' and
#' 'year' or 'monthly' (by default). 'month' identifies months between 1 and 12
#' independently of the year they belong to, while 'monthly' differentiates
#' months from different years. Parameter 'freq' can also be numeric indicating
#' the length in which to subset the dimension.
#'@param new_dim_name A character string indicating the name of the new
#' dimension.
#'@param dates An optional parameter containing an array of dates of class
#' 'POSIXct' with the corresponding time dimensions of 'data'. It is NULL
#' by default.
#'@param return_indices A logical value that if it is TRUE, the indices
#' used in splitting the dimension will be returned. It is FALSE by default.
#'@examples
#'data <- 1 : 20
#'dim(data) <- c(time = 10, lat = 2)
#'indices <- c(rep(1,5), rep(2,5))
#'new_data <- SplitDim(data, indices = indices)
#'time <- c(seq(ISOdate(1903, 1, 1), ISOdate(1903, 1, 4), "days"),
#' seq(ISOdate(1903, 2, 1), ISOdate(1903, 2, 4), "days"),
#' seq(ISOdate(1904, 1, 1), ISOdate(1904, 1, 2), "days"))
#'new_data <- SplitDim(data, indices = time)
#'new_data <- SplitDim(data, indices = time, freq = 'day')
#'new_data <- SplitDim(data, indices = time, freq = 'month')
#'new_data <- SplitDim(data, indices = time, freq = 'year')
#'@import abind
#'@importFrom ClimProjDiags Subset
#'@export
SplitDim <- function(data, split_dim = 'time', indices, freq = 'monthly',
new_dim_name = NULL, dates = NULL,
return_indices = FALSE) {
# check data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (is.null(dim(data))) {
dim(data) = c(time = length(data))
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have dimension names.")
}
dims <- dim(data)
# check split_dim
if (!is.character(split_dim)) {
stop("Parameter 'split_dim' must be a character.")
}
if (length(split_dim) > 1) {
split_dim <- split_dim[1]
warning("Parameter 'split_dim' has length greater than ",
"one and only the first element will be used.")
}
if (!any(names(dims) %in% split_dim)) {
stop("Parameter 'split_dim' must be one of the dimension ",
"names in parameter 'data'.")
}
pos_split <- which(names(dims) == split_dim)
# check indices and freq
if (is.null(indices)) {
if (!is.numeric(freq)) {
stop("Parameter 'freq' must be a integer number indicating ",
" the length of each chunk.")
} else {
if (!((dims[pos_split] / freq) %% 1 == 0)) {
stop("Parameter 'freq' must be proportional to the ",
"length of the 'split_dim' in parameter 'data'.")
}
indices <- rep(1 : (dims[pos_split] / freq), freq)
indices <- sort(indices)
repited <- sort(unique(indices))
}
} else if (is.numeric(indices)) {
if (!is.null(freq)) {
if (freq != 'monthly') {
warning("Parameter 'freq' is not being used since ",
"parameter 'indices' is numeric.")
}
}
repited <- sort(unique(indices))
} else {
# Indices should be Dates and freq character
if (!is.character(freq)) {
stop("Parameter 'freq' must be a character indicating ",
"how to divide the dates provided in parameter 'indices'",
", 'monthly', 'anually' or 'daily'.")
}
if (!inherits(indices, 'POSIXct')) {
indices <- try(as.POSIXct(indices))
if ('try-error' %in% class(indices) |
sum(is.na(indices)) == length(indices)) {
stop("Dates provided in parameter 'indices' must be of class ",
"'POSIXct' or convertible to 'POSIXct'.")
}
}
}
if (length(indices) != dims[pos_split]) {
stop("Parameter 'indices' has different length of parameter ",
"data in the dimension supplied in 'split_dim'.")
}
# check indices as dates:
if (!is.numeric(indices)) {
if (freq == 'day') {
indices <- as.numeric(strftime(indices, format = "%d"))
repited <- unique(indices)
} else if (freq == 'month') {
indices <- as.numeric(strftime(indices, format = "%m"))
repited <- unique(indices)
} else if (freq == 'year') {
indices <- as.numeric(strftime(indices, format = "%Y"))
repited <- unique(indices)
} else if (freq == 'monthly') {
indices <- as.numeric(strftime(indices, format = "%m%Y"))
repited <- unique(indices)
} else {
stop("Parameter 'freq' must be numeric or a character: ",
"by 'day', 'month', 'year' or 'monthly' (for ",
"distinguishable month).")
}
}
# check new_dim_name
if (!is.null(new_dim_name)) {
if (!is.character(new_dim_name)) {
stop("Parameter 'new_dim_name' must be character string")
}
if (length(new_dim_name) > 1) {
new_dim_name <- new_dim_name[1]
warning("Parameter 'new_dim_name' has length greater than 1 ",
"and only the first elemenst is used.")
}
}
max_times <- max(unlist(lapply(repited,
function(x){sum(indices == x)})))
data <- lapply(repited, function(x) {rebuild(x, data, along = split_dim,
indices = indices, max_times)})
data <- abind(data, along = length(dims) + 1)
# Add new dim name
if (is.null(new_dim_name)) {
if (is.character(freq)) {
new_dim_name <- freq
} else {
new_dim_name <- 'index'
}
}
names(dim(data)) <- c(names(dims), new_dim_name)
# Split also Dates
dates_exist <- FALSE
if (!is.null(dates)) {
if (any(split_dim %in% names(dim(dates)))) {
datesdims <- dim(dates)
dates <- lapply(repited, function(x) {rebuild(x, dates, along = split_dim,
indices = indices, max_times)})
dates <- abind(dates, along = length(datesdims) + 1)
dates <- as.POSIXct(dates, origin = '1970-01-01', tz = "UTC")
names(dim(dates)) <- c(names(datesdims), new_dim_name)
}
dates_exist <- TRUE
}
# Return objects
if (all(dates_exist, return_indices)) {
return(list(data = data, dates = dates, indices = indices))
} else if (all(dates_exist, !return_indices)) {
return(list(data = data, dates = dates))
} else if (all(!dates_exist, return_indices)) {
return(list(data = data, indices = indices))
} else {
return(data)
}
}
rebuild <- function(x, data, along, indices, max_times) {
a <- Subset(data, along = along, indices = which(indices == x))
pos_dim <- which(names(dim(a)) == along)
if (dim(a)[pos_dim] != max_times) {
adding <- max_times - dim(a)[pos_dim]
new_dims <- dim(a)
new_dims[pos_dim] <- adding
extra <- array(NA, dim = new_dims)
a <- abind(a, extra, along = pos_dim)
names(dim(a)) <- names(dim(data))
}
return(a)
}
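# A minimal standalone sketch (not part of the package; illustration only, not
# executed) of how SplitDim() turns dates into grouping indices with strftime():
# freq = 'month' pools the same calendar month across years ("%m"), while
# freq = 'monthly' keeps months of different years apart ("%m%Y"). The dates
# below are hypothetical.
if (FALSE) {
  time <- c(seq(ISOdate(1903, 1, 1), ISOdate(1903, 1, 4), "days"),
            seq(ISOdate(1904, 1, 1), ISOdate(1904, 1, 2), "days"))
  as.numeric(strftime(time, format = "%m", tz = "UTC"))    # 1 1 1 1 1 1 -> one group
  as.numeric(strftime(time, format = "%m%Y", tz = "UTC"))  # 11903 ... 11904 -> two groups
}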
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/CST_SplitDim.R
|
#'CSTools data retrieval function using Start
#'
#'This function aggregates, subsets and retrieves sub-seasonal, seasonal,
#'decadal or climate projection data from NetCDF files in a local file system
#'and arranges it for easy application of the CSTools functions. It calls the
#'function \code{Start} from startR, which is an R package started at BSC with
#'the aim to develop a tool that allows the user to automatically process large
#'multidimensional distributed data sets. Then, the output is transformed into
#'an `s2dv_cube` object.
#'
#'It receives any number of parameters (`...`) that are automatically forwarded
#'to the `startR::Start` function. See details in `?startR::Start`. The
#'auxiliary functions used to define dimensions need to be called within the
#'startR namespace (e.g. startR::indices(), startR::values(), startR::Sort(),
#'startR::CircularSort(), startR::CDORemapper(), ...).
#'
#'@param ... Parameters that are automatically forwarded to the `startR::Start`
#' function. See details in `?startR::Start`.
#'@examples
#'\dontrun{
#' sdates <- c('20101101', '20111101', '20121101')
#' latmin <- 44
#' latmax <- 47
#' lonmin <- 6
#' lonmax <- 9
#' data <- CST_Start(dat = path,
#' var = 'prlr',
#' ensemble = indices(1:6),
#' sdate = sdates,
#' time = 121:151,
#' latitude = values(list(latmin, latmax)),
#' longitude = values(list(lonmin, lonmax)),
#' synonims = list(longitude = c('lon', 'longitude'),
#' latitude = c('lat', 'latitude')),
#' return_vars = list(time = 'sdate',
#' longitude = NULL, latitude = NULL),
#' retrieve = FALSE)
#'}
#'\dontshow{
#' exp <- CSTools::lonlat_temp_st$exp
#' obs <- CSTools::lonlat_temp_st$obs
#' data <- CSTools::lonlat_prec
#'}
#'@import startR
#'@export
CST_Start <- function(...) {
res <- Start(...)
res <- as.s2dv_cube(res)
return(res)
}
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/CST_Start.R
|
#'Subset an object of class s2dv_cube
#'
#'This function allows to subset (i.e. slice, take a chunk of) the data inside
#'an object of class \code{s2dv_cube} and modify the dimensions, coordinates and
#'attributes accordingly, removing any variables, time steps and spatial
#'coordinates that are dropped when subsetting. It ensures that the information
#'inside the s2dv_cube remains coherent with the data it contains.\cr\cr
#'As in the function \code{Subset} from the ClimProjDiags package, the
#'dimensions to subset along can be specified via the parameter \code{along}
#'either with integer indices or by their name.\cr\cr
#'There are additional ways to adjust which dimensions are dropped in the
#'resulting object: either to drop all, to drop none, to drop only the ones that
#'have been sliced or to drop only the ones that have not been sliced.\cr\cr
#'The \code{load_parameters} and \code{when} attributes of the original cube
#'are preserved. The \code{source_files} attribute is subset along the
#'\code{var_dim} and \code{dat_dim} dimensions.
#'
#'@author Agudetse Roures Victoria, \email{[email protected]}
#'
#'@param x An object of class \code{s2dv_cube} to be sliced.
#'@param along A vector with references to the dimensions to take the subset
#' from: either integers or dimension names.
#'@param indices A list of indices to take from each dimension specified in
#' 'along'. If a single dimension is specified in 'along', it can be directly
#' provided as an integer or a vector.
#'@param drop Whether to drop all the dimensions of length 1 in the resulting
#' array, none, only those that are specified in 'along', or only those that
#' are not specified in 'along'. The possible values are: 'all' or TRUE, 'none'
#' or FALSE, 'selected', and 'non-selected'. The default value is FALSE.
#'@param dat_dim A character string indicating the name of dataset dimension.
#' The default value is NULL.
#'@param var_dim A character string indicating the name of the variable
#' dimension. The default value is NULL.
#'
#'@return An object of class \code{s2dv_cube} with similar data, coordinates and
#' attributes as the \code{x} input, but with trimmed or dropped dimensions.
#'
#'@examples
#'#Example with sample data:
#'# Check original dimensions and coordinates
#'lonlat_temp$exp$dims
#'names(lonlat_temp$exp$coords)
#'# Subset the s2dv_cube
#'exp_subset <- CST_Subset(lonlat_temp$exp,
#' along = c("lat", "lon"),
#' indices = list(1:10, 1:10),
#' drop = 'non-selected')
#'# Check new dimensions and coordinates
#'exp_subset$dims
#'names(exp_subset$coords)
#'
#'@seealso \link[ClimProjDiags]{Subset}
#'
#'@importFrom ClimProjDiags Subset
#'@export
CST_Subset <- function(x, along, indices, drop = FALSE, var_dim = NULL,
dat_dim = NULL) {
# Check that x is s2dv_cube
if (!inherits(x, 's2dv_cube')) {
stop("Parameter 'x' must be of the class 's2dv_cube'.")
}
# Check var_dim
if (!is.null(var_dim)) {
if ((!is.character(var_dim)) || (length(var_dim) > 1)) {
stop("Parameter 'var_dim' must be a character string.")
}
}
# Check dat_dim
if (!is.null(dat_dim)) {
if ((!is.character(dat_dim)) || (length(dat_dim) > 1)) {
stop("Parameter 'dat_dim' must be a character string.")
}
}
# Check indices
if (!is.list(indices)) {
if (length(along) == 1) {
indices <- list(indices)
}
}
# Subset data
x$data <- Subset(x$data, along = along,
indices = indices,
drop = drop)
# Adjust dimensions
x$dims <- dim(x$data)
# Adjust coordinates
for (dimension in 1:length(along)) {
dim_name <- along[dimension]
index <- indices[dimension]
# Only rename coordinates that have not been dropped
if (dim_name %in% names(x$dims)) {
# Subset coordinate by indices
x$coords[[dim_name]] <- .subset_with_attrs(x$coords[[dim_name]], indices = index)
}
}
# Remove dropped coordinates
for (coordinate in names(x$coords)) {
if (!(coordinate %in% names(x$dims))) {
x$coords[[coordinate]] <- NULL
}
}
# Adjust attributes
# Variable
for (dimension in 1:length(along)) {
dim_name <- along[dimension]
index <- indices[[dimension]]
if ((!is.null(var_dim)) && (dim_name == var_dim)) {
x$attrs$Variable$varName <- as.vector(x$coords[[dim_name]])
}
if ((!is.null(dat_dim)) && (dim_name == dat_dim)) {
x$attrs$Datasets <- as.vector(x$coords[[dim_name]])
}
if ((!is.null(x$attrs$source_files)) &&
(dim_name %in% names(dim(x$attrs$source_files)))) {
x$attrs$source_files <- Subset(x$attrs$source_files,
along = dim_name,
indices = index,
drop = drop)
}
}
# Remove metadata from variables that were dropped
vars_to_keep <- na.omit(match(c(names(x$dims), (x$attrs$Variable$varName)),
names(x$attrs$Variable$metadata)))
x$attrs$Variable$metadata <- x$attrs$Variable$metadata[vars_to_keep]
# Subset Dates
time_along <- intersect(along, names(dim(x$attrs$Dates)))
if (!(length(time_along) == 0)) {
time_indices <- indices[match(time_along, along)]
original_dates <- x$attrs$Dates
x$attrs$Dates <- Subset(x$attrs$Dates,
along = time_along,
indices = time_indices,
drop = drop)
}
# Subset metadata
  for (variable in seq_along(x$attrs$Variable$metadata)) {
if (any(along %in% names(dim(x$attrs$Variable$metadata[[variable]])))) {
dim_along <- along[along %in% names(dim(x$attrs$Variable$metadata[[variable]]))]
index_along <- indices[match(dim_along, along)]
x$attrs$Variable$metadata[[variable]] <- .subset_with_attrs(x$attrs$Variable$metadata[[variable]],
along = dim_along,
indices = index_along,
drop = drop)
}
}
return(x)
}
# Function to subset with attributes
.subset_with_attrs <- function(x, ...) {
args_subset <- list(...)
if (any(is.null(dim(x)), length(dim(x)) == 1)) {
l <- x[args_subset[['indices']][[1]]]
} else {
l <- Subset(x, along = args_subset[['along']],
indices = args_subset[['indices']],
drop = args_subset[['drop']])
}
attr.names <- names(attributes(x))
attr.names <- attr.names[attr.names != 'names']
attr.names <- attr.names[attr.names != 'dim']
attributes(l)[attr.names] <- attributes(x)[attr.names]
return(l)
}
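# Illustrative sketch (not part of the package): .subset_with_attrs slices a
# coordinate or metadata object while keeping its non-dimensional attributes,
# e.g., assuming a simple attributed vector:
#   x <- 1:5
#   attr(x, "units") <- "degrees_east"
#   y <- .subset_with_attrs(x, indices = list(2:3))
#   y                  # 2 3
#   attr(y, "units")   # "degrees_east"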
# ---- End of file: CSTools/R/CST_Subset.R ----
#'@rdname CST_WeatherRegimes
#'@title Function for Calculating the Cluster analysis
#'
#'@author Verónica Torralba - BSC, \email{[email protected]}
#'
#'@description This function computes the weather regimes from a cluster
#'analysis. It is applied on the array \code{data} in a 's2dv_cube' object. The
#'dimensionality of this object can also be reduced by using PCs obtained from
#'the application of the EOFs analysis to filter the dataset. The cluster
#'analysis can be performed with the traditional k-means or with those methods
#'included in hclust (stats package).
#'
#'@references Cortesi, N., V., Torralba, N., González-Reviriego, A., Soret, and
#'F.J., Doblas-Reyes (2019). Characterization of European wind speed variability
#'using weather regimes. Climate Dynamics,53, 4961–4976,
#'\doi{10.1007/s00382-019-04839-5}.
#'@references Torralba, V. (2019) Seasonal climate prediction for the wind
#'energy sector: methods and tools for the development of a climate service.
#'Thesis. Available online: \url{https://eprints.ucm.es/56841/}.
#'
#'@param data An 's2dv_cube' object.
#'@param ncenters Number of clusters to be calculated with the clustering
#' function.
#'@param EOFs Whether to compute the EOFs (default = 'TRUE') or not (FALSE) to
#' filter the data.
#'@param neofs Number of modes to be kept (default = 30).
#'@param varThreshold Value with the percentage of variance to be explained by
#' the PCs. Only sufficient PCs to explain this much variance will be used in
#' the clustering.
#'@param method Different options to estimate the clusters. The most traditional
#' approach is the k-means analysis (default = 'kmeans') but the function also
#' supports the different methods included in hclust. These methods are:
#' "ward.D", "ward.D2", "single", "complete", "average" (= UPGMA), "mcquitty"
#' (= WPGMA), "median" (= WPGMC) or "centroid" (= UPGMC). For more details
#' about these methods see the hclust function documentation included in the
#' stats package.
#'@param iter.max Parameter to select the maximum number of iterations allowed
#' (Only if method='kmeans' is selected).
#'@param nstart Parameter for the cluster analysis determining how many random
#' sets to choose (Only if method='kmeans' is selected).
#'@param ncores The number of multicore threads to use for parallel computation.
#'@return A list with two elements \code{$data} (a 's2dv_cube' object containing
#'the composites cluster = 1,..,K for case (*1) or only k = 1 for any specific
#'cluster, i.e., case (*2)) and \code{$statistics} that includes \code{$pvalue}
#'(array with the same structure as \code{$data} containing the pvalue of the
#'composites obtained through a t-test that accounts for the serial dependence.),
#'\code{cluster} (A matrix or vector with integers (from 1:k) indicating the
#'cluster to which each time step is allocated.), \code{persistence} (Percentage
#'of days in a month/season before a cluster is replaced for a new one (only if
#'method=’kmeans’ has been selected.)), \code{frequency} (Percentage of days in
#'a month/season belonging to each cluster (only if method=’kmeans’ has been
#'selected).),
#'@examples
#'data <- array(abs(rnorm(1280, 283.7, 6)), dim = c(dataset = 2, member = 2,
#' sdate = 3, ftime = 3,
#' lat = 4, lon = 4))
#'coords <- list(lon = seq(0, 3), lat = seq(47, 44))
#'obs <- list(data = data, coords = coords)
#'class(obs) <- 's2dv_cube'
#'
#'res1 <- CST_WeatherRegimes(data = obs, EOFs = FALSE, ncenters = 4)
#'res2 <- CST_WeatherRegimes(data = obs, EOFs = TRUE, ncenters = 3)
#'
#'@importFrom s2dv EOF
#'@import multiApply
#'@export
CST_WeatherRegimes <- function(data, ncenters = NULL,
EOFs = TRUE, neofs = 30,
varThreshold = NULL,
method = "kmeans",
iter.max = 100, nstart = 30,
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
  # Check 'data' object structure
if (!all(c('data', 'coords') %in% names(data))) {
stop("Parameter 'data' must have 'data' and 'coords' elements ",
"within the 's2dv_cube' structure.")
}
# Check coordinates
if (!any(names(data$coords) %in% .KnownLonNames()) |
!any(names(data$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted ",
"the package.")
} else {
lon_name <- names(data$coords)[[which(names(data$coords) %in% .KnownLonNames())]]
lat_name <- names(data$coords)[[which(names(data$coords) %in% .KnownLatNames())]]
lon <- as.vector(data$coords[[lon_name]])
lat <- as.vector(data$coords[[lat_name]])
}
result <- WeatherRegime(data$data, ncenters = ncenters,
EOFs = EOFs, neofs = neofs,
varThreshold = varThreshold, lon = lon,
lat = lat, method = method,
iter.max = iter.max, nstart = nstart,
ncores = ncores)
data$data <- result$composite
data$statistics <- result[-1]
return(data)
}
#'@rdname WeatherRegimes
#'@title Function for Calculating the Cluster analysis
#'
#'@author Verónica Torralba - BSC, \email{[email protected]}
#'
#'@description This function computes the weather regimes from a cluster analysis.
#'It can be applied over the dataset with dimensions c(year/month, month/day,
#'lon, lat), or by using PCs obtained from the application of the EOFs analysis
#'to filter the dataset. The cluster analysis can be performed with the
#'traditional k-means or with those methods included in hclust (stats package).
#'
#'@references Cortesi, N., V., Torralba, N., González-Reviriego, A., Soret, and
#'F.J., Doblas-Reyes (2019). Characterization of European wind speed variability
#'using weather regimes. Climate Dynamics,53, 4961–4976,
#'\doi{10.1007/s00382-019-04839-5}.
#'@references Torralba, V. (2019) Seasonal climate prediction for the wind
#'energy sector: methods and tools for the development of a climate service.
#'Thesis. Available online: \url{https://eprints.ucm.es/56841/}
#'
#'@param data An array containing anomalies with named dimensions with at least
#' start date 'sdate', forecast time 'ftime', latitude 'lat' and longitude
#' 'lon'.
#'@param ncenters Number of clusters to be calculated with the clustering
#' function.
#'@param EOFs Whether to compute the EOFs (default = 'TRUE') or not (FALSE) to
#' filter the data.
#'@param neofs Number of modes to be kept only if EOFs = TRUE has been selected.
#' (default = 30).
#'@param varThreshold Value with the percentage of variance to be explained by
#' the PCs. Only sufficient PCs to explain this much variance will be used in
#' the clustering.
#'@param lon Vector of longitudes.
#'@param lat Vector of latitudes.
#'@param method Different options to estimate the clusters. The most traditional
#' approach is the k-means analysis (default = 'kmeans') but the function also
#' supports the different methods included in hclust. These methods are:
#' "ward.D", "ward.D2", "single", "complete", "average" (= UPGMA), "mcquitty"
#' (= WPGMA), "median" (= WPGMC) or "centroid" (= UPGMC). For more details
#' about these methods see the hclust function documentation included in the
#' stats package.
#'@param iter.max Parameter to select the maximum number of iterations allowed
#' (Only if method = 'kmeans' is selected).
#'@param nstart Parameter for the cluster analysis determining how many random
#' sets to choose (Only if method='kmeans' is selected).
#'@param ncores The number of multicore threads to use for parallel computation.
#'@return A list with elements \code{$composite} (array with at least 3-d ('lat',
#''lon', 'cluster') containing the composites k = 1,..,K for case (*1) or only k = 1
#'for any specific cluster, i.e., case (*2)), \code{pvalue} (array with at least
#'3-d ('lat','lon','cluster') with the pvalue of the composites obtained through
#'a t-test that accounts for the serial dependence of the data with the same
#'structure as Composite.), \code{cluster} (A matrix or vector with integers
#'(from 1:k) indicating the cluster to which each time step is allocated.),
#'\code{persistence} (Percentage of days in a month/season before a cluster is
#'replaced for a new one (only if method=’kmeans’ has been selected.)),
#'\code{frequency} (Percentage of days in a month/season belonging to each
#'cluster (only if method=’kmeans’ has been selected).),
#'@examples
#'data <- array(abs(rnorm(1280, 283.7, 6)), dim = c(dataset = 2, member = 2,
#' sdate = 3, ftime = 3,
#' lat = 4, lon = 4))
#'lat <- seq(47, 44)
#'res <- WeatherRegime(data = data, lat = lat,
#' EOFs = FALSE, ncenters = 4)
#'@importFrom s2dv EOF
#'@import multiApply
#'@export
WeatherRegime <- function(data, ncenters = NULL,
EOFs = TRUE, neofs = 30,
varThreshold = NULL, lon = NULL,
lat = NULL, method = "kmeans",
iter.max = 100, nstart = 30,
ncores = NULL) {
## Check inputs
# data
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must be an array with named dimensions.")
}
if (EOFs == TRUE && is.null(lon)) {
stop("Parameter 'lon' must be specified.")
}
if (is.null(lat)) {
stop("Parameter 'lat' must be specified.")
}
dimData <- names(dim(data))
# temporal dimensions
if ('sdate' %in% dimData && 'ftime' %in% dimData) {
nsdates <- dim(data)['sdate']
nftimes <- dim(data)['ftime']
data <- MergeDims(data,
merge_dims = c('ftime', 'sdate'),
rename_dim = 'time')
} else if ('sdate' %in% dimData | 'ftime' %in% dimData) {
names(dim(data))[which(dimData == 'sdate' | dimData == 'ftime') ] = 'time'
} else {
if (!('time' %in% dimData)) {
stop("Parameter 'data' must have temporal dimensions.")
}
}
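  # From this point on, all the temporal information is gathered in a single
  # dimension named 'time'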
# spatial dimensions
if (!any(names(dim(data)) %in% .KnownLonNames()) |
!any(names(dim(data)) %in% .KnownLatNames())) {
stop("Spatial coordinate names do not match any of the names accepted ",
"by the package.")
}
lon_name <- names(dim(data))[[which(names(dim(data)) %in% .KnownLonNames())]]
lat_name <- names(dim(data))[[which(names(dim(data)) %in% .KnownLatNames())]]
  if (!is.null(lat) && dim(data)[lat_name] != length(lat)) {
    stop("The length of the parameter 'lat' does not match the latitude ",
         "dimension of the parameter 'data'.")
  }
# ncenters
if (is.null(ncenters)) {
stop("Parameter 'ncenters' must be specified.")
}
  output <- Apply(data = list(data),
                  target_dims = c('time', lat_name, lon_name),
                  fun = .WeatherRegime,
                  EOFs = EOFs, neofs = neofs,
                  varThreshold = varThreshold,
                  lon = lon, lat = lat,
                  ncenters = ncenters,
                  method = method,
                  iter.max = iter.max, nstart = nstart,
                  ncores = ncores,
                  lon_name = lon_name, lat_name = lat_name)
if (method == 'kmeans' && 'sdate' %in% dimData && 'ftime' %in% dimData) {
# The frequency and the persistency are computed as they are useful
# parameters in the cluster analysis
extra_output <- Apply(data = output$cluster,
target_dims = 'time',
fun = .freqPer,
nsdates = nsdates,
nftimes = nftimes ,
ncenters = ncenters)
output$cluster <- t(array(output$cluster, dim = c(nftimes, nsdates)))
names(dim(output$cluster)) <- c('sdate', 'ftime')
output <- list(composite = output$composite,
pvalue = output$pvalue,
cluster = output$cluster,
frequency = extra_output$frequency,
persistence = extra_output$persistence)
}
return(output)
}
.WeatherRegime <- function(data, ncenters = NULL, EOFs = TRUE, neofs = 30,
varThreshold = NULL, lon = NULL,
lat = NULL, method = "kmeans",
iter.max = 100, nstart = 30, lon_name = 'lon',
lat_name = 'lat') {
  nlat <- dim(data)[lat_name]
  nlon <- dim(data)[lon_name]
  if (any(is.na(data))) {
    nas_test <- MergeDims(data, merge_dims = c(lat_name, lon_name),
                          rename_dim = 'space', na.rm = TRUE)
    if (dim(nas_test)['space'] == c(nlat * nlon)) {
      stop("Parameter 'data' contains NAs in the 'time' dimension.")
    }
  }
if (EOFs == TRUE) {
if (is.null(varThreshold)) {
suppressWarnings({
dataPC <- EOF(data,
lat = as.vector(lat),
lon = as.vector(lon),
time_dim = 'time',
neofs = neofs)
})
cluster_input <- dataPC$PC
} else {
suppressWarnings({
dataPC <- EOF(data,
lat = as.vector(lat),
lon = as.vector(lon),
time_dim = 'time',
neofs = neofs)
})
minPC <-
head(as.numeric(which(cumsum(dataPC$var) > varThreshold)), 1)
cluster_input <- dataPC$PC[, 1:minPC]
}
} else {
dataW <- aperm(Apply(data, target_dims = lat_name,
function (x, la) {
x * cos(la * pi / 180)},
la = lat)[[1]], c(2, 1, 3))
cluster_input <- MergeDims(dataW, merge_dims = c(lat_name, lon_name),
rename_dim = 'space',na.rm = TRUE)
}
if (method == "kmeans") {
clust <- kmeans(
cluster_input,
centers = ncenters,
iter.max = iter.max,
nstart = nstart,
trace = FALSE)
result <- array(0, c(ncenters, nlat, nlon))
# the order of the data dimensions is changed ('lat','lon','time')
result <- Composite(aperm(data,c(2, 3, 1)), clust$cluster)
} else {
result <- hclust(dist(cluster_input), method = method)
clusterCut <- cutree(result, ncenters)
result <- Composite(aperm(data, c(2, 3, 1)), clusterCut)
}
result <- lapply(1:length(result),
function (n) {
names(dim(result[[n]])) <- c(lat_name, lon_name, "cluster")
return (result[[n]])
})
names(result) <- c('composite','pvalue')
if (method == "kmeans") {
clust <- as.array(clust$cluster)
names(dim(clust)) <- 'time'
return(list(
composite = result$composite,
pvalue = result$pvalue,
cluster = clust))
} else {
clust <- as.array(clusterCut)
names(dim(clust)) <- 'time'
return(list(
composite = result$composite,
pvalue = result$pvalue,
cluster = clust))
}
}
.freqPer<- function (clust, nsdates, nftimes, ncenters){
frequency <- persistence <- matrix(NA, nsdates, ncenters)
x <- as.vector(clust)
for (i in 1:nsdates) {
occurences <-rle(x[((i * nftimes) + 1 - nftimes):(i * nftimes)])
for (j in 1:ncenters) {
frequency[i, j] <-(sum(occurences$lengths[occurences$values == j]) / nftimes) * 100
persistence[i, j] <- mean(occurences$lengths[occurences$values == j])
}
}
return(list(frequency = frequency,
persistence = persistence))
}
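# Illustrative sketch (not part of the package): .freqPer summarises a vector of
# cluster assignments ordered with 'ftime' varying fastest within each 'sdate'.
# Assuming 3 start dates of 4 forecast times each:
#   clust <- c(1, 1, 2, 2,  3, 3, 3, 1,  2, 2, 1, 1)
#   .freqPer(clust, nsdates = 3, nftimes = 4, ncenters = 3)
# '$frequency' gives, per start date, the percentage of time steps assigned to
# each cluster; '$persistence' gives the mean length of the consecutive runs of
# each cluster (NaN when a cluster does not occur in that start date).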
# ---- End of file: CSTools/R/CST_WeatherRegimes.R ----
#'Plot Multiple Lon-Lat Variables In a Single Map According to a Decision Function
#'
#'@description Plot a number of two-dimensional matrices with (longitude,
#'latitude) dimensions on a single map with the cylindrical equidistant
#'latitude and longitude projection.
#'
#'@author Nicolau Manubens, \email{[email protected]}
#'@author Veronica Torralba, \email{[email protected]}
#'
#'@param maps List of matrices to plot, each with (longitude, latitude)
#' dimensions, or 3-dimensional array with the dimensions (longitude, latitude,
#' map). Dimension names are required.
#'@param lon Vector of longitudes. Must match the length of the corresponding
#' dimension in 'maps'.
#'@param lat Vector of latitudes. Must match the length of the corresponding
#' dimension in 'maps'.
#'@param map_select_fun Function that selects, for each grid point, which value
#' to take among all the provided maps. This function receives as input a
#' vector of values for a same grid point for all the provided maps, and must
#' return a single selected value (not its index!) or NA. For example, the
#' \code{min} and \code{max} functions are accepted.
#'@param display_range Range of values to be displayed for all the maps. This
#' must be a numeric vector c(range min, range max). The values in the
#' parameter 'maps' can go beyond the limits specified in this range. If the
#' selected value for a given grid point (according to 'map_select_fun') falls
#' outside the range, it will be coloured with 'col_unknown_map'.
#'@param map_dim Optional name for the dimension of 'maps' along which the
#' multiple maps are arranged. Only applies when 'maps' is provided as a
#' 3-dimensional array. Takes the value 'map' by default.
#'@param brks Colour levels to be sent to PlotEquiMap. This parameter is
#' optional and adjusted automatically by the function.
#'@param cols List of vectors of colours to be sent to PlotEquiMap for the
#' colour bar of each map. This parameter is optional and adjusted
#' automatically by the function (up to 5 maps). The colours provided for each
#' colour bar will be automatically interpolated to match the number of breaks.
#' Each item in this list can be named, and the name will be used as title for
#' the corresponding colour bar (equivalent to the parameter 'bar_titles').
#'@param bar_limits Parameter from s2dv::ColorBar. Vector of two numeric values
#' with the extremes of the range of values represented in the colour bar. If
#' 'var_limits' go beyond this interval, the drawing of triangle extremes is
#' triggered at the corresponding sides, painted in 'col_inf' and 'col_sup'.
#' Either of them can be set as NA and will then take as value the
#' corresponding extreme in 'var_limits' (hence a triangle end won't be
#' triggered for these sides). Takes as default the extremes of 'brks' if
#' available, else the same values as 'var_limits'.
#'@param triangle_ends Parameter from s2dv::ColorBar. Vector of two logical
#' elements, indicating whether to force the drawing of triangle ends at each
#' of the extremes of the colour bar. This choice is automatically made from
#' the provided 'brks', 'bar_limits', 'var_limits', 'col_inf' and 'col_sup',
#' but the behaviour can be manually forced to draw or not to draw the triangle
#' ends with this parameter. If 'cols' is provided, 'col_inf' and 'col_sup'
#' will take priority over 'triangle_ends' when deciding whether to draw the
#' triangle ends or not.
#'@param col_inf Parameter from s2dv::ColorBar. Colour to fill the inferior
#' triangle end with. Useful if specifying colours manually with parameter
#' 'cols', to specify the colour and to trigger the drawing of the lower
#' extreme triangle, or if 'cols' is not specified, to replace the colour
#' automatically generated by ColorBar().
#'@param col_sup Parameter from s2dv::ColorBar. Colour to fill the superior
#' triangle end with. Useful if specifying colours manually with parameter
#' 'cols', to specify the colour and to trigger the drawing of the upper
#' extreme triangle, or if 'cols' is not specified, to replace the colour
#' automatically generated by ColorBar().
#'@param bar_extra_margin Parameter from s2dv::ColorBar. Extra margins to be
#' added around the colour bar, in the format c(y1, x1, y2, x2). The units are
#' margin lines. Takes rep(0, 4) by default.
#'@param col_unknown_map Colour to use to paint the grid cells for which a map
#' is not possible to be chosen according to 'map_select_fun' or for those
#' values that go beyond 'display_range'. Takes the value 'white' by default.
#'@param mask Optional numeric array with dimensions (latitude, longitude), with
#' values in the range [0, 1], indicating the opacity of the mask over each
#' grid point. Cells with a 0 will result in no mask, whereas cells with a 1
#' will result in a totally opaque superimposed pixel coloured in 'col_mask'.
#'@param col_mask Colour to be used for the superimposed mask (if specified in
#' 'mask'). Takes the value 'grey' by default.
#'@param dots Array of same dimensions as 'var' or with dimensions
#' c(n, dim(var)), where n is the number of dot/symbol layers to add to the
#' plot. A value of TRUE at a grid cell will draw a dot/symbol on the
#' corresponding square of the plot. By default all layers provided in 'dots'
#' are plotted with dots, but a symbol can be specified for each of the
#' layers via the parameter 'dot_symbol'.
#'@param bar_titles Optional vector of character strings providing the titles to
#' be shown on top of each of the colour bars.
#'@param legend_scale Scale factor for the size of the colour bar labels. Takes
#' 1 by default.
#'@param cex_bar_titles Scale factor for the sizes of the bar titles. Takes 1.5
#' by default.
#'@param plot_margin Numeric vector of length 4 for the margin sizes in the
#' following order: bottom, left, top, and right. If not specified, use the
#' default of par("mar"), c(5.1, 4.1, 4.1, 2.1). Used as 'margin_scale' in
#' s2dv::PlotEquiMap.
#'@param fileout File where to save the plot. If not specified (default) a
#' graphics device will pop up. Extensions allowed: eps/ps, jpeg, png, pdf, bmp
#' and tiff
#'@param width File width, in the units specified in the parameter size_units
#' (inches by default). Takes 8 by default.
#'@param height File height, in the units specified in the parameter size_units
#' (inches by default). Takes 5 by default.
#'@param size_units Units of the size of the device (file or window) to plot in.
#' Inches ('in') by default. See ?Devices and the creator function of the
#' corresponding device.
#'@param res Resolution of the device (file or window) to plot in. See ?Devices
#' and the creator function of the corresponding device.
#'@param drawleg Where to draw the common colour bar. Can take values TRUE,
#' FALSE or:\cr
#' 'up', 'u', 'U', 'top', 't', 'T', 'north', 'n', 'N'\cr
#' 'down', 'd', 'D', 'bottom', 'b', 'B', 'south', 's', 'S' (default)\cr
#' 'right', 'r', 'R', 'east', 'e', 'E'\cr
#' 'left', 'l', 'L', 'west', 'w', 'W'
#'@param return_leg A logical value indicating if the color bars information
#' should be returned by the function. If TRUE, the function doesn't plot the
#' color bars but still creates the layout with color bar areas, and the
#' arguments for GradientCatsColorBar() or ColorBar() will be returned. It is
#' convenient for users to adjust the color bars manually. The default is
#' FALSE, the color bars will be plotted directly.
#'@param ... Additional parameters to be passed on to \code{PlotEquiMap}.
#'
#'@examples
#'# Simple example
#'x <- array(1:(20 * 10), dim = c(lat = 10, lon = 20)) / 200
#'a <- x * 0.6
#'b <- (1 - x) * 0.6
#'c <- 1 - (a + b)
#'lons <- seq(0, 359.5, length = 20)
#'lats <- seq(-89.5, 89.5, length = 10)
#'\dontrun{
#'PlotCombinedMap(list(a, b, c), lons, lats,
#' toptitle = 'Maximum map',
#' map_select_fun = max,
#' display_range = c(0, 1),
#' bar_titles = paste('% of belonging to', c('a', 'b', 'c')),
#' brks = 20, width = 12, height = 10)
#'}
#'
#'Lon <- c(0:40, 350:359)
#'Lat <- 51:26
#'data <- rnorm(51 * 26 * 3)
#'dim(data) <- c(map = 3, lon = 51, lat = 26)
#'mask <- sample(c(0,1), replace = TRUE, size = 51 * 26)
#'dim(mask) <- c(lat = 26, lon = 51)
#'\dontrun{
#'PlotCombinedMap(data, lon = Lon, lat = Lat, map_select_fun = max,
#' display_range = range(data), mask = mask,
#' width = 14, height = 10)
#'}
#'
#'@seealso \code{PlotCombinedMap} and \code{PlotEquiMap}
#'
#'@importFrom s2dv PlotEquiMap ColorBar
#'@importFrom maps map
#'@importFrom graphics box image layout mtext par plot.new
#'@importFrom grDevices adjustcolor bmp colorRampPalette dev.cur dev.new dev.off
#' hcl jpeg pdf png postscript svg tiff
#'@export
PlotCombinedMap <- function(maps, lon, lat,
map_select_fun, display_range,
map_dim = 'map',
brks = NULL, cols = NULL,
bar_limits = NULL, triangle_ends = c(F, F), col_inf = NULL, col_sup = NULL,
col_unknown_map = 'white',
mask = NULL, col_mask = 'grey',
dots = NULL,
bar_titles = NULL, legend_scale = 1,
cex_bar_titles = 1.5,
plot_margin = NULL, bar_extra_margin = c(2, 0, 2, 0),
fileout = NULL, width = 8, height = 5,
size_units = 'in', res = 100, drawleg = T, return_leg = FALSE,
...) {
args <- list(...)
  # If there are any filenames to store the graphics, process them
  # to select the right device
if (!is.null(fileout)) {
.SelectDevice <- utils::getFromNamespace(".SelectDevice", "s2dv")
deviceInfo <- .SelectDevice(fileout = fileout, width = width, height = height,
units = size_units, res = res)
saveToFile <- deviceInfo$fun
fileout <- deviceInfo$files
}
  # Check 'maps'
error <- FALSE
# Change list into an array
if (is.list(maps)) {
if (length(maps) < 1) {
stop("Parameter 'maps' must be of length >= 1 if provided as a list.")
}
check_fun <- function(x) {
is.numeric(x) && (length(dim(x)) == 2)
}
if (!all(sapply(maps, check_fun))) {
error <- TRUE
}
ref_dims <- dim(maps[[1]])
equal_dims <- all(sapply(maps, function(x) identical(dim(x), ref_dims)))
if (!equal_dims) {
stop("All arrays in parameter 'maps' must have the same dimension ",
"sizes and names when 'maps' is provided as a list of arrays.")
}
num_maps <- length(maps)
maps <- unlist(maps)
dim(maps) <- c(ref_dims, map = num_maps)
map_dim <- 'map'
}
if (!is.numeric(maps)) {
error <- TRUE
}
if (is.null(dim(maps))) {
error <- TRUE
}
if (length(dim(maps)) != 3) {
error <- TRUE
}
if (error) {
stop("Parameter 'maps' must be either a numeric array with 3 dimensions ",
" or a list of numeric arrays of the same size with the 'lon' and ",
"'lat' dimensions.")
}
dimnames <- names(dim(maps))
# Check map_dim
if (is.character(map_dim)) {
if (is.null(dimnames)) {
stop("Specified a dimension name in 'map_dim' but no dimension names provided ",
"in 'maps'.")
}
map_dim <- which(dimnames == map_dim)
if (length(map_dim) < 1) {
stop("Dimension 'map_dim' not found in 'maps'.")
} else {
map_dim <- map_dim[1]
}
} else if (!is.numeric(map_dim)) {
stop("Parameter 'map_dim' must be either a numeric value or a ",
"dimension name.")
}
if (length(map_dim) != 1) {
stop("Parameter 'map_dim' must be of length 1.")
}
map_dim <- round(map_dim)
# Work out lon_dim and lat_dim
lon_dim <- NULL
if (!is.null(dimnames)) {
lon_dim <- which(dimnames %in% c('lon', 'longitude'))[1]
}
if (length(lon_dim) < 1) {
lon_dim <- (1:3)[-map_dim][1]
}
lon_dim <- round(lon_dim)
lat_dim <- NULL
if (!is.null(dimnames)) {
lat_dim <- which(dimnames %in% c('lat', 'latitude'))[1]
}
if (length(lat_dim) < 1) {
lat_dim <- (1:3)[-map_dim][2]
}
lat_dim <- round(lat_dim)
# Check lon
if (!is.numeric(lon)) {
stop("Parameter 'lon' must be a numeric vector.")
}
if (length(lon) != dim(maps)[lon_dim]) {
stop("Parameter 'lon' does not match the longitude dimension in 'maps'.")
}
# Check lat
if (!is.numeric(lat)) {
stop("Parameter 'lat' must be a numeric vector.")
}
  if (length(lat) != dim(maps)[lat_dim]) {
    stop("Parameter 'lat' does not match the latitude dimension in 'maps'.")
  }
# Check map_select_fun
if (is.numeric(map_select_fun)) {
if (length(dim(map_select_fun)) != 2) {
stop("Parameter 'map_select_fun' must be an array with dimensions ",
"'lon' and 'lat' if provided as an array.")
}
if (!identical(dim(map_select_fun), dim(maps)[-map_dim])) {
stop("The dimensions 'lon' and 'lat' in the 'map_select_fun' array must ",
"have the same size, name and order as in the 'maps' parameter.")
}
  } else if (!is.function(map_select_fun)) {
stop("The parameter 'map_select_fun' must be a function or a numeric array.")
}
# Generate the desired brks and cols. Only nmap, brks, cols, bar_limits, and
# bar_titles matter here because plot = F.
var_limits_maps <- range(maps, na.rm = TRUE)
if (is.null(bar_limits)) bar_limits <- display_range
nmap <- dim(maps)[map_dim]
colorbar <- GradientCatsColorBar(nmap = nmap,
brks = brks, cols = cols, vertical = FALSE,
subsampleg = NULL, bar_limits = bar_limits,
var_limits = var_limits_maps,
triangle_ends = triangle_ends, col_inf = col_inf, col_sup = col_sup, plot = FALSE, draw_separators = TRUE,
bar_titles = bar_titles, title_scale = cex_bar_titles, label_scale = legend_scale * 1.5,
extra_margin = bar_extra_margin)
# Check legend_scale
if (!is.numeric(legend_scale)) {
stop("Parameter 'legend_scale' must be numeric.")
}
# Check col_unknown_map
if (!is.character(col_unknown_map)) {
stop("Parameter 'col_unknown_map' must be a character string.")
}
# Check col_mask
if (!is.character(col_mask)) {
stop("Parameter 'col_mask' must be a character string.")
}
# Check mask
if (!is.null(mask)) {
if (!is.numeric(mask)) {
stop("Parameter 'mask' must be numeric.")
}
if (length(dim(mask)) != 2) {
stop("Parameter 'mask' must have two dimensions.")
}
if ((dim(mask)[1] != dim(maps)[lat_dim]) ||
(dim(mask)[2] != dim(maps)[lon_dim])) {
stop("Parameter 'mask' must have dimensions c(lat, lon).")
}
}
  # Check dots
  if (!is.null(dots)) {
    if (length(dim(dots)) != 2) {
      stop("Parameter 'dots' must have two dimensions.")
    }
    if ((dim(dots)[1] != dim(maps)[lat_dim]) ||
        (dim(dots)[2] != dim(maps)[lon_dim])) {
      stop("Parameter 'dots' must have dimensions c(lat, lon).")
    }
  }
#----------------------
# Identify the most likely map
#----------------------
#TODO: Consider col_inf
if (!is.null(colorbar$col_inf[[1]])) {
.warning("Lower triangle is not supported now. Please contact maintainer if you have this need.")
}
if (!is.null(colorbar$col_sup[[1]])) {
brks_norm <- vector('list', length = nmap)
range_width <- vector('list', length = nmap)
slightly_tune_val <- vector('list', length = nmap)
for (ii in 1:nmap) {
brks_norm[[ii]] <- seq(0, 1, length.out = length(colorbar$brks[[ii]]) + 1) # add one break for col_sup
slightly_tune_val[[ii]] <- brks_norm[[ii]][2] / (length(brks_norm[[ii]]) * 2)
range_width[[ii]] <- diff(range(colorbar$brks[[ii]]))
}
ml_map <- apply(maps, c(lat_dim, lon_dim), function(x) {
if (any(is.na(x))) {
res <- NA
} else {
res <- which(x == map_select_fun(x))
if (length(res) > 0) {
res <- res_ind <- res[1]
if (map_select_fun(x) < display_range[1] || map_select_fun(x) > display_range[2]) {
res <- -0.5
} else {
if (map_select_fun(x) > tail(colorbar$brks[[res_ind]], 1)) { # col_sup
res <- res + 1 - slightly_tune_val[[res_ind]]
} else {
res <- res + ((map_select_fun(x) - colorbar$brks[[res_ind]][1]) /
range_width[[res_ind]] * ((length(brks_norm[[res_ind]]) - 2) / (length(brks_norm[[res_ind]]) - 1)))
if (map_select_fun(x) == colorbar$brks[[res_ind]][1]) {
res <- res + slightly_tune_val[[res_ind]]
}
}
}
} else {
res <- -0.5
}
}
res
})
} else {
brks_norm <- vector('list', length = nmap)
range_width <- vector('list', length = nmap)
slightly_tune_val <- vector('list', length = nmap)
for (ii in 1:nmap) {
brks_norm[[ii]] <- seq(0, 1, length.out = length(colorbar$brks[[ii]]))
slightly_tune_val[[ii]] <- brks_norm[[ii]][2] / (length(brks_norm[[ii]]) * 2)
range_width[[ii]] <- diff(range(colorbar$brks[[ii]]))
}
ml_map <- apply(maps, c(lat_dim, lon_dim), function(x) {
if (any(is.na(x))) {
res <- NA
} else {
res <- which(x == map_select_fun(x))
if (length(res) > 0) {
res <- res_ind <- res[1]
if (map_select_fun(x) < display_range[1] ||
map_select_fun(x) > display_range[2]) {
res <- -0.5
} else {
res <- res + ((map_select_fun(x) - colorbar$brks[[res_ind]][1]) /
range_width[[res_ind]])
if (map_select_fun(x) == colorbar$brks[[res_ind]][1]) {
res <- res + slightly_tune_val[[res_ind]]
}
}
} else {
res <- -0.5
}
}
res
})
}
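  # At this point 'ml_map' encodes two things for each grid cell: the integer
  # part (1..nmap) identifies the selected map, and the fractional part places
  # the selected value within that map's colour breaks, so that a single set of
  # breaks and colours ('tbrks'/'tcols' below) can colour all maps at once.
  # Cells whose selected value falls outside 'display_range' are coded as -0.5
  # and will be painted with 'col_unknown_map'.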
nlat <- length(lat)
nlon <- length(lon)
#----------------------
# Set latitudes from minimum to maximum
#----------------------
if (lat[1] > lat[nlat]) {
lat <- lat[nlat:1]
indices <- list(nlat:1, TRUE)
ml_map <- do.call("[", c(list(x = ml_map), indices))
if (!is.null(mask)){
mask <- mask[nlat:1, ]
}
if (!is.null(dots)){
dots <- dots[nlat:1,]
}
}
#----------------------
# Set layout and parameters
#----------------------
# Open connection to graphical device
if (!is.null(fileout)) {
saveToFile(fileout)
} else if (names(dev.cur()) == 'null device') {
dev.new(units = size_units, res = res, width = width, height = height)
}
#NOTE: I think plot.new() is not necessary in any case.
# plot.new()
#TODO: Don't hardcoded. Let users decide.
par(font.main = 1)
# If colorbars need to be plotted, re-define layout.
if (drawleg) {
layout(matrix(c(rep(1, nmap),2:(nmap + 1)), 2, nmap, byrow = TRUE), heights = c(6, 1.5))
}
#----------------------
# Set colors and breaks and then PlotEquiMap
#----------------------
if (!is.null(colorbar$col_sup[[1]])) {
tcols <- c(col_unknown_map, colorbar$cols[[1]], colorbar$col_sup[[1]])
tbrks <- c(-1, brks_norm[[1]] + rep(1, each = length(brks_norm[[1]])))
    for (k in seq_len(nmap)[-1]) {
tcols <- append(tcols, c(col_unknown_map, colorbar$cols[[k]], colorbar$col_sup[[k]]))
tbrks <- c(tbrks, brks_norm[[k]] + rep(k, each = length(brks_norm[[k]])))
}
} else { # original code
tcols <- c(col_unknown_map, colorbar$cols[[1]])
tbrks <- c(-1, brks_norm[[1]] + rep(1, each = length(brks_norm[[1]])))
    for (k in seq_len(nmap)[-1]) {
tcols <- append(tcols, c(col_unknown_map, colorbar$cols[[k]]))
tbrks <- c(tbrks, brks_norm[[k]] + rep(k, each = length(brks_norm[[k]])))
}
}
if (is.null(plot_margin)) {
plot_margin <- c(5, 4, 4, 2) + 0.1 # default of par()$mar
}
PlotEquiMap(var = ml_map, lon = lon, lat = lat,
brks = tbrks, cols = tcols, drawleg = FALSE,
filled.continents = FALSE, dots = dots, margin_scale = plot_margin, ...)
#----------------------
# Add overplot on top
#----------------------
if (!is.null(mask)) {
dims_mask <- dim(mask)
latb <- sort(lat, index.return = TRUE)
dlon <- lon[2:dims_mask[2]] - lon[1:(dims_mask[2] - 1)]
wher <- which(dlon > (mean(dlon) + 1))
if (length(wher) > 0) {
lon[(wher + 1):dims_mask[2]] <- lon[(wher + 1):dims_mask[2]] - 360
}
lonb <- sort(lon, index.return = TRUE)
cols_mask <- sapply(seq(from = 0, to = 1, length.out = 10),
function(x) adjustcolor(col_mask, alpha.f = x))
image(lonb$x, latb$x, t(mask)[lonb$ix, latb$ix],
axes = FALSE, col = cols_mask,
breaks = seq(from = 0, to = 1, by = 0.1),
xlab='', ylab='', add = TRUE, xpd = TRUE)
if (!exists('coast_color')) {
coast_color <- 'black'
}
if (min(lon) < 0) {
map('world', interior = FALSE, add = TRUE, lwd = 1, col = coast_color) # Low resolution world map (lon -180 to 180).
} else {
map('world2', interior = FALSE, add = TRUE, lwd = 1, col = coast_color) # Low resolution world map (lon 0 to 360).
}
box()
}
#----------------------
# Add colorbars
#----------------------
if ('toptitle' %in% names(args)) {
size_title <- 1
if ('title_scale' %in% names(args)) {
size_title <- args[['title_scale']]
}
old_mar <- par('mar')
old_mar[3] <- old_mar[3] - (2 * size_title + 1)
par(mar = old_mar)
}
if (drawleg & !return_leg) {
GradientCatsColorBar(nmap = dim(maps)[map_dim],
brks = colorbar$brks, cols = colorbar$cols, vertical = FALSE,
subsampleg = NULL, bar_limits = bar_limits,
var_limits = var_limits_maps,
triangle_ends = triangle_ends, col_inf = colorbar$col_inf, col_sup = colorbar$col_sup,
plot = TRUE, draw_separators = TRUE,
bar_titles = bar_titles, title_scale = cex_bar_titles, label_scale = legend_scale * 1.5,
extra_margin = bar_extra_margin)
}
if (!return_leg) {
# If the graphic was saved to file, close the connection with the device
if (!is.null(fileout)) dev.off()
}
if (return_leg) {
tmp <- list(nmap = dim(maps)[map_dim],
brks = colorbar$brks, cols = colorbar$cols, vertical = FALSE,
subsampleg = NULL, bar_limits = bar_limits,
var_limits = var_limits_maps,
triangle_ends = triangle_ends, col_inf = colorbar$col_inf, col_sup = colorbar$col_sup,
plot = TRUE, draw_separators = TRUE,
bar_titles = bar_titles, title_scale = cex_bar_titles, label_scale = legend_scale * 1.5,
extra_margin = bar_extra_margin)
.warning("The device is not off yet. Use dev.off() after plotting the color bars.")
return(tmp)
#NOTE: The device is not off! Can keep plotting the color bars.
}
}
# ---- End of file: CSTools/R/PlotCombinedMap.R ----
#'Plot one or multiple ensemble forecast pdfs for the same event
#'
#'@author Llorenç Lledó \email{[email protected]}
#'@description This function plots the probability distribution function of
#'several ensemble forecasts. Separate panels are used to plot forecasts valid
#'or initialized at different times or by different models or even at different
#'locations. Probabilities for tercile categories are computed, plotted in
#'colors and annotated. An asterisk marks the tercile with higher probabilities.
#'Probabilities for extreme categories (above P90 and below P10) can also be
#'included as hatched areas. Individual ensemble members can be plotted as
#'jittered points. The observed value is optionally shown as a diamond.
#'
#'@param fcst A dataframe or array containing all the ensemble members for each
#' forecast. If \code{'fcst'} is an array, it should have two labelled
#' dimensions, one of them being the member dimension (\code{'member'} by
#' default, adjustable with the parameter 'memb_dim'). If \code{'fcst'} is a
#' data.frame, each column should be a separate forecast, with the rows being
#' the different ensemble members.
#'@param tercile.limits An array or vector with P33 and P66 values that define
#' the tercile categories for each panel. Use an array of dimensions
#' (nforecasts,2) to define different terciles for each forecast panel, or a
#' vector with two elements to reuse the same tercile limits for all forecast
#' panels.
#'@param extreme.limits (optional) An array or vector with P10 and P90 values
#' that define the extreme categories for each panel. Use an array of
#' (nforecasts,2) to define different extreme limits for each forecast panel,
#' or a vector with two elements to reuse the same extreme limits for all
#' forecast panels. (Default: extreme categories are not shown).
#'@param obs (optional) A vector providing the observed values for each forecast
#' panel or a single value that will be reused for all forecast panels.
#' (Default: observation is not shown).
#'@param plotfile (optional) A filename (pdf, png...) where the plot will be
#' saved. (Default: the plot is not saved).
#'@param title A string with the plot title.
#'@param var.name A string with the variable name and units.
#'@param fcst.names (optional) An array of strings with the titles of each
#' individual forecast.
#'@param add.ensmemb Whether to add the ensemble members \code{'above'} (default)
#' or \code{'below'} the pdf, or not at all (\code{'no'}).
#'@param color.set A selection of predefined color sets: use \code{'ggplot'}
#' (default) for blue/green/red, \code{'s2s4e'} for blue/grey/orange,
#' \code{'hydro'} for yellow/gray/blue (suitable for precipitation and
#' inflows) or the \code{"vitigeoss"} color set.
#'@param memb_dim A character string indicating the name of the member
#' dimension.
#'
#'@return A ggplot object containing the plot.
#'@examples
#'fcsts <- data.frame(fcst1 = rnorm(10), fcst2 = rnorm(10, 0.5, 1.2))
#'PlotForecastPDF(fcsts,c(-1,1))
#'@importFrom data.table data.table
#'@importFrom data.table CJ
#'@importFrom data.table setkey
#'@import ggplot2
#'@importFrom reshape2 melt
#'@importFrom plyr .
#'@importFrom plyr dlply
#'@importFrom s2dv InsertDim
#'@export
PlotForecastPDF <- function(fcst, tercile.limits, extreme.limits = NULL, obs = NULL,
plotfile = NULL, title = "Set a title", var.name = "Varname (units)",
fcst.names = NULL, add.ensmemb = c("above", "below", "no"),
color.set = c("ggplot", "s2s4e", "hydro", "vitigeoss"),
memb_dim = 'member') {
value <- init <- extremes <- x <- ymin <- ymax <- tercile <- NULL
y <- xend <- yend <- yjitter <- MLT <- lab.pos <- NULL
ggColorHue <- function(n) {
hues <- seq(15, 375, length = n + 1)
hcl(h = hues, l = 65, c = 100)[1:n]
}
#------------------------
# Define color sets
#------------------------
color.set <- match.arg(color.set)
if (color.set == "s2s4e") {
colorFill <- c("#FF764D", "#b5b5b5", "#33BFD1") # AN, N, BN fill colors
colorHatch <- c("indianred3", "deepskyblue3") # AP90, BP10 line colors
colorMember <- c("#ffff7f")
colorObs <- "purple"
colorLab <- c("red", "blue") # AP90, BP10 text colors
} else if (color.set == "hydro") {
colorFill <- c("#41CBC9", "#b5b5b5", "#FFAB38")
colorHatch <- c("deepskyblue3", "darkorange1")
colorMember <- c("#ffff7f")
colorObs <- "purple"
colorLab <- c("blue", "darkorange3")
} else if (color.set == "ggplot") {
colorFill <- ggColorHue(3)
colorHatch <- c("indianred3", "deepskyblue3")
colorMember <- c("#ffff7f")
colorObs <- "purple"
colorLab <- c("red", "blue")
} else if (color.set == "vitigeoss") {
colorFill <- rev(c("#007be2", "#acb2b5", "#f40000"))
colorHatch <- rev(c("#211b79", "#ae0003"))
colorMember <- c("#ffff7f")
colorObs <- "purple"
colorLab <- colorHatch
} else {
stop("Parameter 'color.set' should be one of ggplot/s2s4e/hydro")
}
#------------------------
# Check input arguments
#------------------------
add.ensmemb <- match.arg(add.ensmemb)
#------------------------
# Check fcst type and convert to data.frame if needed
#------------------------
if (is.array(fcst)) {
if (!memb_dim %in% names(dim(fcst)) | length(dim(fcst)) != 2) {
stop("Parameter 'fcst' should be a two-dimensional array with labelled dimensions and one of them should be for member. The name of this dimension can be adjusted with 'memb_dim'.")
}
dim.members <- which(names(dim(fcst)) == memb_dim)
if (dim.members == 1) {
fcst.df <- data.frame(fcst)
} else {
fcst.df <- data.frame(t(fcst))
}
} else if (is.data.frame(fcst)) {
fcst.df <- fcst
} else {
stop("Parameter 'fcst' should be an array or a data.frame")
}
npanels <- ncol(fcst.df)
#------------------------
# Check observations
#------------------------
if (!is.null(obs)) {
if (!is.vector(obs)){
stop("Observations should be a vector")
} else if (!length(obs) %in% c(1, npanels)) {
stop("The number of observations should equal one or the number of forecasts")
}
}
#------------------------
# Check tercile limits
#------------------------
if (is.vector(tercile.limits)) {
if (length(tercile.limits) != 2) {
stop("Provide two tercile limits")
}
tercile.limits <- InsertDim(tercile.limits, 1, npanels, name = "new")
} else if (is.array(tercile.limits)) {
if (length(dim(tercile.limits)) == 2) {
if (dim(tercile.limits)[2] != 2) {
stop("Provide two tercile limits for each panel")
}
if (dim(tercile.limits)[1] != npanels) {
stop("The number of tercile limits does not match the number of forecasts provided")
}
} else {
stop("Tercile limits should have two dimensions")
}
} else {
stop("Tercile limits should be a vector of length two or an array of dimension (nfcsts,2)")
}
# check consistency of tercile limits
if (any(tercile.limits[, 1] >= tercile.limits[, 2])) {
stop("Inconsistent tercile limits")
}
#------------------------
# Check extreme limits
#------------------------
if (!is.null(extreme.limits)) {
if (is.vector(extreme.limits)) {
if (length(extreme.limits) != 2) {
stop("Provide two extreme limits")
}
extreme.limits <- InsertDim(extreme.limits, 1, npanels, name = "new")
} else if (is.array(extreme.limits)) {
if (length(dim(extreme.limits)) == 2) {
if (dim(extreme.limits)[2] != 2) {
stop("Provide two extreme limits for each panel")
}
if (dim(extreme.limits)[1] != npanels) {
stop("The number of extreme limits does not match the number of forecasts provided")
}
} else {
stop("extreme limits should have two dimensions")
}
} else {
stop("Extreme limits should be a vector of length two or an array of dimensions (nfcsts,2)")
}
# Check that extreme limits are consistent with tercile limits
if (any(extreme.limits[, 1] >= tercile.limits[, 1])) {
stop("Inconsistent lower extreme limits")
}
if (any(extreme.limits[, 2] <= tercile.limits[, 2])) {
stop("Inconsistent higher extreme limits")
}
}
#------------------------
# Set proper fcst names
#------------------------
if (!is.null(fcst.names)) {
if (length(fcst.names) != dim(fcst.df)[2]) {
stop("Parameter 'fcst.names' should be an array with as many names as distinct forecasts provided")
}
colnames(fcst.df) <- factor(fcst.names, levels = fcst.names)
}
#------------------------
# Produce a first plot with the pdf for each init in a panel
#------------------------
melt.df <- reshape2::melt(fcst.df, variable.name = "init", id.vars = NULL)
plot <- ggplot(melt.df, aes(x = value)) +
geom_density(alpha = 1, na.rm = T) +
coord_flip() + facet_wrap(~init, strip.position = "top", nrow = 1) +
xlim(range(c(obs, density(melt.df$value, na.rm = T)$x)))
ggp <- ggplot_build(plot)
#------------------------
# Gather the coordinates of the plots together with init and corresponding
# terciles
#------------------------
tmp.df <- ggp$data[[1]][, c("x", "ymin", "ymax", "PANEL")]
if (!is.null(ggp$layout$layout)) {
tmp.df$init <- ggp$layout$layout$init[as.numeric(tmp.df$PANEL)]
} else if (!is.null(ggp$layout$panel_layout)) {
tmp.df$init <- ggp$layout$panel_layout$init[as.numeric(tmp.df$PANEL)]
} else {
stop("Cannot find PANELS in ggp object")
}
tmp.df$tercile <- factor(ifelse(tmp.df$x < tercile.limits[tmp.df$PANEL, 1],
"Below normal", ifelse(tmp.df$x < tercile.limits[tmp.df$PANEL, 2],
"Normal", "Above normal")),
levels = c("Above normal", "Normal", "Below normal"))
#------------------------
# Get the height and width of a panel
#------------------------
pan.width <- diff(range(tmp.df$x))
pan.height <- max(tmp.df$ymax)
magic.ratio <- 9 * pan.height/pan.width
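  # 'magic.ratio' is used further below to convert the horizontal spacing of the
  # jittered ensemble members (computed in x-axis units by .jitter.ensmemb) into
  # a vertical offset, so that the stacking of members looks similar regardless
  # of the panel aspect ratio.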
#------------------------
# Compute hatch coordinates for extremes
#------------------------
if (!is.null(extreme.limits)) {
tmp.df$extremes <- factor(ifelse(tmp.df$x < extreme.limits[tmp.df$PANEL, 1],
"Below P10", ifelse(tmp.df$x < extreme.limits[tmp.df$PANEL, 2],
"Normal", "Above P90")),
levels = c("Above P90", "Normal", "Below P10"))
hatch.ls <- dlply(tmp.df, .(init, extremes), function(x) {
# close the polygon
tmp.df2 <- data.frame(x = c(x$x, max(x$x), min(x$x)), y = c(x$ymax, 0,
0))
# compute the hatches for this polygon
hatches <- .polygon.fullhatch(tmp.df2$x, tmp.df2$y, angle = 60, density = 10,
width.units = pan.width, height.units = pan.height)
# add bottom segment
end1 <- data.frame(x = x$x[1], y = x$ymax[1], xend = x$x[1], yend = 0)
# add top segment
end2 <- data.frame(x = x$x[length(x$x)], y = x$ymax[length(x$x)], xend = x$x[length(x$x)],
yend = 0)
return(rbind(hatches, end1, end2))
})
attr <- attr(hatch.ls, "split_labels")
for (i in 1:length(hatch.ls)) {
hatch.ls[[i]] <- cbind(hatch.ls[[i]], attr[i, ], row.names = NULL)
}
hatch.df <- do.call("rbind", hatch.ls)
# Compute max y for each extreme category
max.ls <- dlply(tmp.df, .(init, extremes), function(x) {
data.frame(y = min(0.85 * pan.height, max(x$ymax)))
})
attr <- attr(max.ls, "split_labels")
for (i in 1:length(max.ls)) {
max.ls[[i]] <- cbind(max.ls[[i]], attr[i, ], row.names = NULL)
}
max.df <- do.call("rbind", max.ls)
}
#------------------------
# Compute jitter space for ensemble members
#------------------------
if (add.ensmemb != "no") {
jitter.df <- reshape2::melt(data.frame(dlply(melt.df, .(init), function(x) {
.jitter.ensmemb(sort(x$value, na.last = T), pan.width / 100)
}), check.names = F), value.name = "yjitter", variable.name = "init", id.vars = NULL)
jitter.df$x <- reshape2::melt(data.frame(dlply(melt.df, .(init), function(x) {
sort(x$value, na.last = T)
})), value.name = "x", id.vars = NULL)$x
}
#------------------------
# Get y coordinates for observed x values, using a cool data.table feature: merge
# to nearest value
#------------------------
if (!is.null(obs)) {
tmp.dt <- data.table(tmp.df, key = c("init", "x"))
obs.dt <- data.table(init = factor(colnames(fcst.df), levels = colnames(fcst.df)),
value = rep(obs, dim(fcst.df)[2]))
setkey(obs.dt, init, value)
obs.xy <- tmp.dt[obs.dt, roll = "nearest"]
}
#------------------------
# Fill each pdf with different colors for the terciles
#------------------------
plot <- plot +
geom_ribbon(data = tmp.df,
aes(x = x, ymin = ymin, ymax = ymax, fill = tercile),
alpha = 0.7)
#------------------------
# Add hatches for extremes
#------------------------
if (!is.null(extreme.limits)) {
if (nrow(hatch.df[hatch.df$extremes != "Normal", ]) == 0) {
warning("The provided extreme categories are outside the plot bounds. The extremes will not be drawn.")
extreme.limits <- NULL
} else {
plot <- plot +
geom_segment(data = hatch.df[hatch.df$extremes != "Normal", ],
aes(x = x, y = y,
xend = xend, yend = yend, color = extremes))
}
}
#------------------------
# Add obs line
#------------------------
if (!is.null(obs)) {
plot <- plot +
geom_vline(data = obs.dt,
aes(xintercept = value),
linetype = "dashed", color = colorObs)
}
#------------------------
# Add ensemble members
#------------------------
if (add.ensmemb == "below") {
plot <- plot +
# this adds a grey box for ensmembers
geom_rect(aes(xmin = -Inf, xmax = Inf,
ymin = -Inf, ymax = -pan.height / 10),
fill = "gray95", color = "black", width = 0.2) +
# this adds the ensemble members
geom_point(data = jitter.df,
aes(x = x,
y = -pan.height / 10 - magic.ratio * yjitter,
shape = "Ensemble members"),
color = "black", fill = colorMember, alpha = 1)
} else if (add.ensmemb == "above") {
plot <- plot +
geom_point(data = jitter.df,
aes(x = x,
y = 0.7 * magic.ratio * yjitter,
shape = "Ensemble members"),
color = "black", fill = colorMember, alpha = 1)
}
#------------------------
# Add obs diamond
#------------------------
if (!is.null(obs)) {
plot <- plot +
# this adds the obs diamond
geom_point(data = obs.xy,
aes(x = x, y = ymax, size = "Observation"),
shape = 23, color = "black", fill = colorObs)
}
#------------------------
# Compute probability for each tercile and identify MLT
#------------------------
tmp.dt <- data.table(tmp.df)
pct <- tmp.dt[, .(pct = integrate(approxfun(x, ymax), lower = min(x), upper = max(x))$value),
by = .(init, tercile)]
# include potentially missing groups
pct <- merge(pct, CJ(init = factor(levels(pct$init), levels = levels(pct$init)),
tercile = factor(c("Below normal", "Normal", "Above normal"),
levels = c("Above normal", "Normal", "Below normal"))),
by = c("init", "tercile"), all.y = T)
pct[is.na(pct),"pct"] <- 0
tot <- pct[, .(tot = sum(pct)), by = init]
pct <- merge(pct, tot, by = "init")
pct$pct <- round(100 * pct$pct / pct$tot, 0)
pct$MLT <- pct[, .(MLT = pct == max(pct)), by = init]$MLT
pct <- pct[order(init, tercile)]
pct$lab.pos <- as.vector(apply(tercile.limits, 1, function(x) {c(max(x), mean(x), min(x))}))
#------------------------
# Compute probability for extremes
#------------------------
if (!is.null(extreme.limits)) {
pct2 <- tmp.dt[, .(pct = integrate(approxfun(x, ymax), lower = min(x), upper = max(x))$value),
by = .(init, extremes)]
# include potentially missing groups
pct2 <- merge(pct2, CJ(init = factor(levels(pct2$init), levels = levels(pct2$init)),
extremes = factor(c("Below P10", "Normal", "Above P90"),
levels = c("Above P90", "Normal", "Below P10"))),
by = c("init", "extremes"), all.y=T)
pct2[is.na(pct),"pct"] <- 0
tot2 <- pct2[, .(tot = sum(pct)), by = init]
pct2 <- merge(pct2, tot2, by = "init")
pct2$pct <- round(100 * pct2$pct / pct2$tot, 0)
pct2 <- pct2[order(init, extremes)]
pct2$lab.pos <- as.vector(apply(extreme.limits, 1, function(x) {c(x[2], NA, x[1])}))
pct2 <- merge(pct2, max.df, by = c("init", "extremes"), all.x = T)
}
#------------------------
# Add probability labels for terciles
#------------------------
if (add.ensmemb == "above") {
labpos <- -0.2 * pan.height
vjust <- 0
} else {
labpos <- 0
vjust <- -0.5
}
plot <- plot +
geom_text(data = pct,
aes(x = lab.pos, y = labpos, label = paste0(pct, "%"),
hjust = as.integer(tercile) * -1.5 + 3.5),
vjust = vjust, angle = -90, size = 3.2) +
geom_text(data = pct[MLT == T, ],
aes(x = lab.pos, y = labpos, label = "*",
hjust = as.integer(tercile) * -3.5 + 9),
vjust = 0.1, angle = -90, size = 7, color = "black")
#------------------------
# Add probability labels for extremes
#------------------------
if (!is.null(extreme.limits)) {
plot <- plot +
geom_text(data = pct2[extremes != "Normal", ],
aes(x = lab.pos, y = 0.9 * y, label = paste0(pct, "%"),
hjust = as.integer(extremes) * -1.5 + 3.5),
vjust = -0.5, angle = -90, size = 3.2,
color = rep(colorLab, dim(fcst.df)[2]))
}
#------------------------
# Finish all theme and legend details
#------------------------
plot <- plot +
theme_minimal() +
scale_fill_manual(name = "Probability of\nterciles",
values = colorFill, drop = F) +
scale_color_manual(name = "Probability of\nextremes",
values = colorHatch) +
scale_shape_manual(name = "Ensemble\nmembers",
values = c(21)) +
scale_size_manual(name = "Observation",
values = c(3)) +
labs(x = var.name,
y = "Probability density\n(total area=1)",
title = title) +
theme(axis.text.x = element_blank(),
panel.grid.minor.x = element_blank(),
legend.key.size = unit(0.3, "in"),
panel.border = element_rect(fill = NA, color = "black"),
strip.background = element_rect(colour = "black", fill = "gray80"),
panel.spacing = unit(0.2, "in"),
panel.grid.major.x = element_line(color = "grey93"),
panel.background = element_rect(fill = "white"),
plot.background = element_rect(fill = "white", color = NA)) +
guides(fill = guide_legend(order = 1),
color = guide_legend(order = 2),
shape = guide_legend(order = 3, label = F),
size = guide_legend(order = 4, label = F))
#------------------------
# Save to plotfile if needed, and return plot
#------------------------
if (!is.null(plotfile)) {
ggsave(plotfile, plot)
}
return(plot)
}
.jitter.ensmemb <- function(x, thr = 0.1) {
  # Idea: start with the first level. Loop over all points, and if the distance
  # to the last point in the level is more than a threshold, add the point to
  # the level. Otherwise keep the point for another round. Do one round in each
  # direction to avoid ugly patterns.
if (is.unsorted(x, na.rm = T)) {
stop("Provide a sorted array!")
}
lev <- x * 0
lev[is.na(lev)] <- -1
level <- 1
while (any(lev == 0)) {
last <- -1/0
for (i in 1:length(x)) {
if (lev[i] != 0) {
next
}
if (x[i] - last > thr) {
lev[i] <- level
last <- x[i]
}
}
level <- level + 1
last <- 1/0
for (i in seq(length(x), 1, -1)) {
if (lev[i] != 0) {
next
}
if (last - x[i] > thr) {
lev[i] <- level
last <- x[i]
}
}
level <- level + 1
}
lev[lev == -1] <- NA
return(lev * thr * sqrt(3)/2)
}
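# Illustrative sketch (not part of the package): .jitter.ensmemb takes a sorted
# vector of member values and returns a vertical offset for each one, so that
# members closer than 'thr' along the x axis are drawn on successive levels
# instead of overlapping each other:
#   x <- sort(c(0.10, 0.12, 0.13, 0.50, 0.52, 1.00))
#   .jitter.ensmemb(x, thr = 0.1)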
.polygon.onehatch <- function(x, y, x0, y0, xd, yd, fillOddEven = F) {
halfplane <- as.integer(xd * (y - y0) - yd * (x - x0) <= 0)
cross <- halfplane[-1L] - halfplane[-length(halfplane)]
does.cross <- cross != 0
if (!any(does.cross)) {
return()
}
x1 <- x[-length(x)][does.cross]
y1 <- y[-length(y)][does.cross]
x2 <- x[-1L][does.cross]
y2 <- y[-1L][does.cross]
t <- (((x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1))/(xd * (y2 - y1) - yd *
(x2 - x1)))
o <- order(t)
tsort <- t[o]
crossings <- cumsum(cross[does.cross][o])
if (fillOddEven) {
crossings <- crossings%%2
}
drawline <- crossings != 0
lx <- x0 + xd * tsort
ly <- y0 + yd * tsort
lx1 <- lx[-length(lx)][drawline]
ly1 <- ly[-length(ly)][drawline]
lx2 <- lx[-1L][drawline]
ly2 <- ly[-1L][drawline]
return(data.frame(x = lx1, y = ly1, xend = lx2, yend = ly2))
}
.polygon.fullhatch <- function(x, y, density, angle, width.units, height.units, inches = c(5,
1)) {
x <- c(x, x[1L])
y <- c(y, y[1L])
angle <- angle%%180
upi <- c(width.units, height.units)/inches
if (upi[1L] < 0) {
angle <- 180 - angle
}
if (upi[2L] < 0) {
angle <- 180 - angle
}
upi <- abs(upi)
xd <- cos(angle/180 * pi) * upi[1L]
yd <- sin(angle/180 * pi) * upi[2L]
hatch.ls <- list()
i <- 1
if (angle < 45 || angle > 135) {
if (angle < 45) {
first.x <- max(x)
last.x <- min(x)
} else {
first.x <- min(x)
last.x <- max(x)
}
y.shift <- upi[2L]/density/abs(cos(angle/180 * pi))
x0 <- 0
y0 <- floor((min(y) - first.x * yd/xd)/y.shift) * y.shift
y.end <- max(y) - last.x * yd/xd
while (y0 < y.end) {
hatch.ls[[i]] <- .polygon.onehatch(x, y, x0, y0, xd, yd)
i <- i + 1
y0 <- y0 + y.shift
}
} else {
if (angle < 90) {
first.y <- max(y)
last.y <- min(y)
} else {
first.y <- min(y)
last.y <- max(y)
}
x.shift <- upi[1L]/density/abs(sin(angle/180 * pi))
x0 <- floor((min(x) - first.y * xd/yd)/x.shift) * x.shift
y0 <- 0
x.end <- max(x) - last.y * xd/yd
while (x0 < x.end) {
hatch.ls[[i]] <- .polygon.onehatch(x, y, x0, y0, xd, yd)
i <- i + 1
x0 <- x0 + x.shift
}
}
return(do.call("rbind", hatch.ls))
}
#'Plot Maps of Most Likely Quantiles
#'
#'@author Veronica Torralba, \email{[email protected]}, Nicolau Manubens,
#'\email{[email protected]}
#'@description This function receives as main input (via the parameter
#'\code{probs}) a collection of longitude-latitude maps, each containing the
#'probabilities (from 0 to 1) of the different grid cells of belonging to a
#'category. As many categories as maps provided as inputs are understood to
#'exist. The maps of probabilities must be provided on a common rectangular
#'regular grid, and a vector with the longitudes and a vector with the latitudes
#'of the grid must be provided. The input maps can be provided in two forms,
#'either as a list of multiple two-dimensional arrays (one for each category) or
#'as a three-dimensional array, where one of the dimensions corresponds to the
#'different categories.
#'
#'@param probs A list of bi-dimensional arrays with the named dimensions
#' 'latitude' (or 'lat') and 'longitude' (or 'lon'), with equal size and in the
#' same order, or a single tri-dimensional array with an additional dimension
#' (e.g. 'bin') for the different categories. The arrays must contain
#' probability values between 0 and 1, and the probabilities for all categories
#' of a grid cell should not exceed 1 when added.
#'@param lon A numeric vector with the longitudes of the map grid, in the same
#' order as the values along the corresponding dimension in \code{probs}.
#'@param lat A numeric vector with the latitudes of the map grid, in the same
#' order as the values along the corresponding dimension in \code{probs}.
#'@param cat_dim The name of the dimension along which the different categories
#' are stored in \code{probs}. This only applies if \code{probs} is provided in
#' the form of 3-dimensional array. The default expected name is 'bin'.
#'@param bar_titles Vector of character strings with the names to be drawn on
#' top of the color bar for each of the categories. As many titles as
#' categories provided in \code{probs} must be provided.
#'@param col_unknown_cat Character string with the colour to be used to paint
#' the cells for which no category can be clearly assigned. Takes the value
#' 'white' by default.
#'@param drawleg Where to draw the common colour bar. Can take values TRUE,
#' FALSE or:\cr
#' 'up', 'u', 'U', 'top', 't', 'T', 'north', 'n', 'N'\cr
#' 'down', 'd', 'D', 'bottom', 'b', 'B', 'south', 's', 'S' (default)\cr
#' 'right', 'r', 'R', 'east', 'e', 'E'\cr
#' 'left', 'l', 'L', 'west', 'w', 'W'
#'@param ... Additional parameters to be sent to \code{PlotCombinedMap} and
#' \code{PlotEquiMap}.
#'@seealso \code{PlotCombinedMap} and \code{PlotEquiMap}
#'@examples
#'# Simple example
#'x <- array(1:(20 * 10), dim = c(lat = 10, lon = 20)) / 200
#'a <- x * 0.6
#'b <- (1 - x) * 0.6
#'c <- 1 - (a + b)
#'lons <- seq(0, 359.5, length = 20)
#'lats <- seq(-89.5, 89.5, length = 10)
#'\dontrun{
#'PlotMostLikelyQuantileMap(list(a, b, c), lons, lats,
#' toptitle = 'Most likely tercile map',
#' bar_titles = paste('% of belonging to', c('a', 'b', 'c')),
#' brks = 20, width = 10, height = 8)
#'}
#'
#'# More complex example
#'n_lons <- 40
#'n_lats <- 20
#'n_timesteps <- 100
#'n_bins <- 4
#'
#'# 1. Generation of sample data
#'lons <- seq(0, 359.5, length = n_lons)
#'lats <- seq(-89.5, 89.5, length = n_lats)
#'
#'# This function builds a 3-D gaussian at a specified point in the map.
#'make_gaussian <- function(lon, sd_lon, lat, sd_lat) {
#' w <- outer(lons, lats, function(x, y) dnorm(x, lon, sd_lon) * dnorm(y, lat, sd_lat))
#' min_w <- min(w)
#' w <- w - min_w
#' w <- w / max(w)
#' w <- t(w)
#' names(dim(w)) <- c('lat', 'lon')
#' w
#'}
#'
#'# This function generates random time series (with values ranging 1 to 5)
#'# according to 2 input weights.
#'gen_data <- function(w1, w2, n) {
#' r <- sample(1:5, n,
#' prob = c(.05, .9 * w1, .05, .05, .9 * w2),
#' replace = TRUE)
#' r <- r + runif(n, -0.5, 0.5)
#' dim(r) <- c(time = n)
#' r
#'}
#'
#'# We build two 3-D gaussians.
#'w1 <- make_gaussian(120, 80, 20, 30)
#'w2 <- make_gaussian(260, 60, -10, 40)
#'
#'# We generate sample data (with dimensions time, lat, lon) according
#'# to the generated gaussians
#'sample_data <- multiApply::Apply(list(w1, w2), NULL,
#' gen_data, n = n_timesteps)$output1
#'
#'# 2. Binning sample data
#'prob_thresholds <- 1:n_bins / n_bins
#'prob_thresholds <- prob_thresholds[1:(n_bins - 1)]
#'thresholds <- quantile(sample_data, prob_thresholds)
#'
#'binning <- function(x, thresholds) {
#' n_samples <- length(x)
#' n_bins <- length(thresholds) + 1
#'
#' thresholds <- c(thresholds, max(x))
#' result <- 1:n_bins
#' lower_threshold <- min(x) - 1
#' for (i in 1:n_bins) {
#' result[i] <- sum(x > lower_threshold & x <= thresholds[i]) / n_samples
#' lower_threshold <- thresholds[i]
#' }
#'
#' dim(result) <- c(bin = n_bins)
#' result
#'}
#'
#'bins <- multiApply::Apply(sample_data, 'time', binning, thresholds)$output1
#'
#'# 3. Plotting most likely quantile/bin
#'\dontrun{
#'PlotMostLikelyQuantileMap(bins, lons, lats,
#' toptitle = 'Most likely quantile map',
#' bar_titles = paste('% of belonging to', letters[1:n_bins]),
#' mask = 1 - (w1 + w2 / max(c(w1, w2))),
#' brks = 20, width = 10, height = 8)
#'}
#'@importFrom maps map
#'@importFrom graphics box image layout mtext par plot.new
#'@importFrom grDevices adjustcolor bmp colorRampPalette dev.cur dev.new dev.off hcl jpeg pdf png postscript svg tiff
#'@export
PlotMostLikelyQuantileMap <- function(probs, lon, lat, cat_dim = 'bin',
bar_titles = NULL,
col_unknown_cat = 'white', drawleg = T,
...) {
# Check probs
error <- FALSE
if (is.list(probs)) {
if (length(probs) < 1) {
stop("Parameter 'probs' must be of length >= 1 if provided as a list.")
}
check_fun <- function(x) {
is.numeric(x) && (length(dim(x)) == 2)
}
if (!all(sapply(probs, check_fun))) {
error <- TRUE
}
ref_dims <- dim(probs[[1]])
equal_dims <- all(sapply(probs, function(x) identical(dim(x), ref_dims)))
if (!equal_dims) {
stop("All arrays in parameter 'probs' must have the same dimension ",
"sizes and names when 'probs' is provided as a list of arrays.")
}
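    # Stack the list of 2-D maps into a single 3-D array, with the new 'map'
    # dimension holding the categories.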
num_probs <- length(probs)
probs <- unlist(probs)
dim(probs) <- c(ref_dims, map = num_probs)
cat_dim <- 'map'
}
if (!is.numeric(probs)) {
error <- TRUE
}
if (is.null(dim(probs))) {
error <- TRUE
}
if (length(dim(probs)) != 3) {
error <- TRUE
}
if (error) {
stop("Parameter 'probs' must be either a numeric array with 3 dimensions ",
" or a list of numeric arrays of the same size with the 'lon' and ",
"'lat' dimensions.")
}
dimnames <- names(dim(probs))
# Check cat_dim
if (is.character(cat_dim)) {
if (is.null(dimnames)) {
stop("Specified a dimension name in 'cat_dim' but no dimension names provided ",
"in 'probs'.")
}
cat_dim <- which(dimnames == cat_dim)
if (length(cat_dim) < 1) {
stop("Dimension 'cat_dim' not found in 'probs'.")
}
cat_dim <- cat_dim[1]
} else if (!is.numeric(cat_dim)) {
stop("Parameter 'cat_dim' must be either a numeric value or a ",
"dimension name.")
}
if (length(cat_dim) != 1) {
stop("Parameter 'cat_dim' must be of length 1.")
}
cat_dim <- round(cat_dim)
nprobs <- dim(probs)[cat_dim]
# Check bar_titles
if (is.null(bar_titles)) {
if (nprobs == 3) {
bar_titles <- c("Below normal (%)", "Normal (%)", "Above normal (%)")
} else if (nprobs == 5) {
bar_titles <- c("Low (%)", "Below normal (%)",
"Normal (%)", "Above normal (%)", "High (%)")
} else {
bar_titles <- paste0("Cat. ", 1:nprobs, " (%)")
}
}
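  # Lower bound of the colour bar: a value about 10% above the climatological
  # probability (100 / nprobs), rounded up to the next multiple of 10
  # (e.g. 40% for terciles, 30% for quintiles).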
minimum_value <- ceiling(1 / nprobs * 10 * 1.1) * 10
  # For now, the PlotCombinedMap function is included below in this file.
# In the future, PlotCombinedMap will be part of s2dverification and will
# be properly imported.
PlotCombinedMap(probs * 100, lon, lat, map_select_fun = max,
display_range = c(minimum_value, 100),
map_dim = cat_dim,
bar_titles = bar_titles,
col_unknown_map = col_unknown_cat,
drawleg = drawleg, ...)
}
#'Plotting two probability density gaussian functions and the optimal linear
#'estimation (OLE) as result of combining them.
#'
#'@author Eroteida Sanchez-Garcia - AEMET, \email{[email protected]}
#'
#'@description This function plots two probability density gaussian functions
#'and the optimal linear estimation (OLE) as result of combining them.
#'
#'@param pdf_1 A numeric array with a dimension named 'statistic', containing
#' two parameters: 'mean' and 'standard deviation' of the first gaussian pdf
#' to combine.
#'@param pdf_2 A numeric array with a dimension named 'statistic', containing
#' two parameters: 'mean' and 'standard deviation' of the second gaussian pdf
#' to combine.
#'@param nsigma (optional) A numeric value for setting the limits of X axis.
#' (Default nsigma = 3).
#'@param legendPos (optional) A character value for setting the position of the
#' legend ("bottom", "top", "right" or "left"). (Default 'bottom').
#'@param legendSize (optional) A numeric value for setting the size of the
#' legend text. (Default 1.0).
#'@param plotfile (optional) A filename where the plot will be saved.
#' (Default: the plot is not saved).
#'@param width (optional) A numeric value indicating the plot width in
#' units ("in", "cm", or "mm"). (Default width = 30).
#'@param height (optional) A numeric value indicating the plot height.
#' (Default height = 15).
#'@param units (optional) A character value indicating the plot size
#' unit. (Default units = 'cm').
#'@param dpi (optional) A numeric value indicating the plot resolution.
#' (Default dpi = 300).
#'
#'@return PlotPDFsOLE() returns a ggplot object containing the plot.
#'
#'@examples
#'# Example 1
#'pdf_1 <- c(1.1,0.6)
#'attr(pdf_1, "name") <- "NAO1"
#'dim(pdf_1) <- c(statistic = 2)
#'pdf_2 <- c(1,0.5)
#'attr(pdf_2, "name") <- "NAO2"
#'dim(pdf_2) <- c(statistic = 2)
#'
#'PlotPDFsOLE(pdf_1, pdf_2)
#'@import ggplot2
#'@export
PlotPDFsOLE <- function(pdf_1, pdf_2, nsigma = 3, legendPos = 'bottom',
legendSize = 1.0, plotfile = NULL, width = 30,
height = 15, units = "cm", dpi = 300) {
y <- type <- NULL
if(!is.null(plotfile)){
if (!is.numeric(dpi)) {
stop("Parameter 'dpi' must be numeric.")
}
if (length(dpi) > 1) {
warning("Parameter 'dpi' has length greater than 1 and ",
"only the first element will be used.")
dpi <- dpi[1]
}
if (!is.character(units)) {
stop("Parameter 'units' must be character")
}
if (length(units) > 1) {
warning("Parameter 'units' has length greater than 1 and ",
"only the first element will be used.")
units <- units[1]
}
if(!(units %in% c("in", "cm", "mm"))) {
stop("Parameter 'units' must be equal to 'in', 'cm' or 'mm'.")
}
if (!is.numeric(height)) {
stop("Parameter 'height' must be numeric.")
}
if (length(height) > 1) {
warning("Parameter 'height' has length greater than 1 and ",
"only the first element will be used.")
height <- height[1]
}
if (!is.numeric(width)) {
stop("Parameter 'width' must be numeric.")
}
if (length(width) > 1) {
warning("Parameter 'width' has length greater than 1 and ",
"only the first element will be used.")
width <- width[1]
}
if (!is.character(plotfile)) {
stop("Parameter 'plotfile' must be a character string ",
"indicating the path and name of output png file.")
}
}
if (!is.character(legendPos)) {
stop("Parameter 'legendPos' must be character")
}
if(!(legendPos %in% c("bottom", "top", "right", "left"))) {
stop("Parameter 'legendPos' must be equal to 'bottom', 'top', 'right' or 'left'.")
}
if (!is.numeric(legendSize)) {
stop("Parameter 'legendSize' must be numeric.")
}
if (!is.numeric(nsigma)) {
stop("Parameter 'nsigma' must be numeric.")
}
if (length(nsigma) > 1) {
warning("Parameter 'nsigma' has length greater than 1 and ",
"only the first element will be used.")
nsigma <- nsigma[1]
}
if (!is.array(pdf_1)) {
stop("Parameter 'pdf_1' must be an array.")
}
if (!is.array(pdf_2)) {
stop("Parameter 'pdf_2' must be an array.")
}
if (!is.numeric(pdf_1)) {
stop("Parameter 'pdf_1' must be a numeric array.")
}
if (!is.numeric(pdf_2)) {
stop("Parameter 'pdf_2' must be a numeric array.")
}
if (is.null(names(dim(pdf_1))) ||
is.null(names(dim(pdf_2)))) {
stop("Parameters 'pdf_1' and 'pdf_2' ",
"should have dimmension names.")
}
if(!('statistic' %in% names(dim(pdf_1)))) {
stop("Parameter 'pdf_1' must have dimension 'statistic'.")
}
if(!('statistic' %in% names(dim(pdf_2)))) {
stop("Parameter 'pdf_2' must have dimension 'statistic'.")
}
if (length(dim(pdf_1)) != 1) {
stop("Parameter 'pdf_1' must have only dimension 'statistic'.")
}
if (length(dim(pdf_2)) != 1) {
stop("Parameter 'pdf_2' must have only dimension 'statistic'.")
}
if ((dim(pdf_1)['statistic'] != 2) || (dim(pdf_2)['statistic'] != 2)) {
stop("Length of dimension 'statistic'",
"of parameter 'pdf_1' and 'pdf_2' must be equal to 2.")
}
if(!is.null(attr(pdf_1, "name"))){
if(!is.character(attr(pdf_1, "name"))){
stop("The 'name' attribute of parameter 'pdf_1' must be a character ",
"indicating the name of the variable of parameter 'pdf_1'.")
}
}
if(!is.null(attr(pdf_2, "name"))){
if(!is.character(attr(pdf_2, "name"))){
stop("The 'name' attribute of parameter 'pdf_2' must be a character ",
"indicating the name of the variable of parameter 'pdf_2'.")
}
}
if(is.null(attr(pdf_1, "name"))){
name1 <- "variable 1"
} else {
name1 <- attr(pdf_1, "name")
}
if(is.null(attr(pdf_2, "name"))){
name2 <- "Variable 2"
} else {
name2 <- attr(pdf_2, "name")
}
#-----------------------------------------------------------------------------
# Set parameters of gaussian distributions (mean and sd)
#-----------------------------------------------------------------------------
mean1 <- pdf_1[1]
sigma1 <- pdf_1[2]
mean2 <- pdf_2[1]
sigma2 <- pdf_2[2]
pdfBest <- CombinedPDFs(pdf_1, pdf_2)
meanBest <- pdfBest[1]
sigmaBest <- pdfBest[2]
#-----------------------------------------------------------------------------
# Plot the gaussian distributions
#-----------------------------------------------------------------------------
nameBest <- paste0(name1, " + ", name2)
graphicTitle <- "OPTIMAL LINEAR ESTIMATION"
xlimSup <- max(nsigma * sigmaBest + meanBest, nsigma * sigma1 + mean1,
nsigma * sigma2 + mean2)
xlimInf <- min(-nsigma * sigmaBest+meanBest, - nsigma * sigma1 + mean1,
-nsigma * sigma2 + mean2)
# deltax <- 0.02
deltax <- (xlimSup - xlimInf) / 10000
x <- seq(xlimInf, xlimSup, deltax)
df1 <- data.frame(x = x, y = dnorm(x, mean = mean1, sd = sigma1),
type = name1)
df2 <- data.frame(x = x, y = dnorm(x, mean = mean2, sd = sigma2),
type = name2)
df3 <- data.frame(x = x, y = dnorm(x, mean = meanBest, sd = sigmaBest),
type = nameBest)
df123 <- rbind(df1, df2, df3)
label1 <- paste0(name1, ": N(mean=",round(mean1, 2), ", sd=", round(sigma1, 2),
")")
label2 <- paste0(name2, ": N(mean=",round(mean2, 2), ", sd=", round(sigma2, 2),
")")
labelBest <- paste0(nameBest, ": N(mean=",round(meanBest,2), ", sd=",
round(sigmaBest, 2), ")")
cols <- c("#DC3912", "#13721A", "#1F5094")
names(cols) <- c(name1, name2, nameBest)
g <- ggplot(df123) + geom_line(aes(x, y, colour = type), size = rel(1.2))
g <- g + scale_colour_manual(values = cols,
limits = c(name1, name2, nameBest),
labels = c(label1, label2, labelBest))
g <- g + theme(plot.title=element_text(size=rel(1.1), colour="black",
face= "bold"),
axis.text.x = element_text(size=rel(1.2)),
axis.text.y = element_text(size=rel(1.2)),
axis.title.x = element_blank(),
legend.title = element_blank(),
legend.position = legendPos,
legend.text = element_text(face = "bold", size=rel(legendSize)))
g <- g + ggtitle(graphicTitle)
g <- g + labs(y="probability", size=rel(1.9))
g <- g + stat_function(fun = dnorm_limit, args = list(mean=mean1, sd=sigma1),
fill = cols[name1], alpha=0.2, geom="area")
g <- g + stat_function(fun = dnorm_limit, args = list(mean=mean2, sd=sigma2),
fill = cols[name2], alpha=0.2, geom="area")
g <- g + stat_function(fun = dnorm_limit, args = list(mean=meanBest,
sd=sigmaBest),
fill = cols[nameBest], alpha=0.2, geom="area")
#-----------------------------------------------------------------------------
# Save to plotfile if needed, and return plot
#-----------------------------------------------------------------------------
if (!is.null(plotfile)) {
ggsave(plotfile, g, width = width, height = height, units = units, dpi = dpi)
}
return(g)
}
# Auxiliary functions used by PlotPDFsOLE
CombinedPDFs <- function(pdf_1, pdf_2) {
mean_1 <- pdf_1[1]
sigma_1 <- pdf_1[2]
mean_2 <- pdf_2[1]
sigma_2 <- pdf_2[2]
a_1 <- (sigma_2^2)/((sigma_1^2)+(sigma_2^2))
a_2 <- (sigma_1^2)/((sigma_1^2)+(sigma_2^2))
pdf_mean <- a_1*mean_1 + a_2*mean_2
pdf_sigma <- sqrt((sigma_1^2)*(sigma_2^2)/((sigma_1^2)+(sigma_2^2)))
data <- c(pdf_mean, pdf_sigma)
dim(data) <- c(statistic = 2)
return(data)
}
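# CombinedPDFs() applies inverse-variance (precision) weighting of the two
# gaussian pdfs:
#   mean_OLE  = (sigma_2^2 * mean_1 + sigma_1^2 * mean_2) / (sigma_1^2 + sigma_2^2)
#   sigma_OLE = sqrt(sigma_1^2 * sigma_2^2 / (sigma_1^2 + sigma_2^2))
# so the combined standard deviation is never larger than either input one.
# Illustrative check (not run, hypothetical values):
#   CombinedPDFs(array(c(1.1, 0.6), dim = c(statistic = 2)),
#                array(c(1.0, 0.5), dim = c(statistic = 2)))
# dnorm_limit() below returns the normal density only on [mean, mean + sd]
# (NA elsewhere); stat_function() uses it to shade that interval under each
# curve.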
dnorm_limit <- function(x, mean, sd) {
  y <- dnorm(x, mean, sd)
  y[x < mean | x > mean + sd] <- NA
  return(y)
}
#'Function to convert any 3-d numerical array to a grid of coloured triangles.
#'
#'This function converts a 3-d numerical data array into a coloured
#'grid with triangles. It is useful for a slide or article to present tabular
#'results as colors instead of numbers. This can be used to compare the outputs
#'of two or four categories (e.g. modes of variability, clusters, or forecast
#'systems).
#'
#'@param data Array with three named dimensions: 'dimx', 'dimy', 'dimcat',
#' containing the values to be displayed in a coloured image with triangles.
#'@param brks A vector of the color bar intervals. The length must be one more
#' than the parameter 'cols'. Use ColorBar() to generate default values.
#'@param cols A vector of valid colour identifiers for color bar. The length
#' must be one less than the parameter 'brks'. Use ColorBar() to generate
#' default values.
#'@param toptitle A string of the title of the grid. Set NULL as default.
#'@param sig_data Logical array with the same dimensions as 'data' to add layers
#' to the plot. A value of TRUE at a grid cell will draw a dot/symbol on the
#' corresponding triangle of the plot. Set NULL as default.
#'@param pch_sig Symbol to be used to represent sig_data. Takes 18
#' (diamond) by default. See 'pch' in par() for additional accepted options.
#'@param col_sig Colour of the symbol to represent sig_data.
#'@param cex_sig Parameter to increase/reduce the size of the symbols used
#' to represent sig_data.
#'@param xlab A logical value (TRUE) indicating if xlabels should be plotted
#'@param ylab A logical value (TRUE) indicating if ylabels should be plotted
#'@param xlabels A vector of labels for the x-axis. Its length must be equal to
#' the length of the 'dimx' dimension of parameter 'data'. Set the sequence
#' from 1 to the length of 'dimx' as default.
#'@param xtitle A string with the title of the x-axis. Set NULL as default.
#'@param ylabels A vector of labels for the y-axis. Its length must be equal to
#' the length of the 'dimy' dimension of parameter 'data'. Set the sequence
#' from 1 to the length of 'dimy' as default.
#'@param ytitle A string with the title of the y-axis. Set NULL as default.
#'@param legend A logical value to decide to draw the color bar legend or not.
#' Set TRUE as default.
#'@param lab_legend A vector of labels indicating what is represented in each
#' category (i.e. triangle). Set the sequence from 1 to the number of
#' categories (2 or 4) as default.
#'@param cex_leg A number to indicate the increase/reduction of the size of
#' the labels in 'lab_legend'.
#'@param col_leg Color of the legend (triangles).
#'@param cex_axis A number to indicate the increase/reduction of the axis labels.
#'@param fileout A string of full directory path and file name indicating where
#' to save the plot. If not specified (default), a graphics device will pop up.
#'@param mar A numerical vector of the form c(bottom, left, top, right) which
#' gives the number of lines of margin to be specified on the four sides of the
#' plot.
#'@param size_units A string indicating the units of the size of the device
#' (file or window) to plot in. Set 'px' as default. See ?Devices and the
#' creator function of the corresponding device.
#'@param res A positive number indicating resolution of the device (file or
#' window) to plot in. See ?Devices and the creator function of the
#' corresponding device.
#'@param figure.width a numeric value to control the width of the plot.
#'@param ... The additional parameters to be passed to function ColorBar() in
#' s2dv for color legend creation.
#'@return A figure in popup window by default, or saved to the specified path.
#'
#'@author History:\cr
#'1.0 - 2020-10 (V.Torralba, \email{[email protected]}) - Original code
#'
#'@examples
#'# Example with random data
#'arr1 <- array(runif(n = 4 * 5 * 4, min = -1, max = 1), dim = c(4,5,4))
#'names(dim(arr1)) <- c('dimx', 'dimy', 'dimcat')
#'arr2 <- array(TRUE, dim = dim(arr1))
#'arr2[which(arr1 < 0.3)] <- FALSE
#'PlotTriangles4Categories(data = arr1,
#' cols = c('white','#fef0d9','#fdd49e','#fdbb84','#fc8d59'),
#' brks = c(-1, 0, 0.1, 0.2, 0.3, 0.4),
#' lab_legend = c('NAO+', 'BL','AR','NAO-'),
#' xtitle = "Target month", ytitle = "Lead time",
#' xlabels = c("Jan", "Feb", "Mar", "Apr"))
#'@importFrom grDevices dev.new dev.off dev.cur
#'@importFrom graphics plot points polygon text title axis
#'@importFrom RColorBrewer brewer.pal
#'@importFrom s2dv ColorBar
#'@export
PlotTriangles4Categories <- function(data, brks = NULL, cols = NULL,
toptitle = NULL, sig_data = NULL,
pch_sig = 18, col_sig = 'black',
cex_sig = 1, xlab = TRUE, ylab = TRUE,
xlabels = NULL, xtitle = NULL,
ylabels = NULL, ytitle = NULL,
legend = TRUE, lab_legend = NULL,
cex_leg = 1, col_leg = 'black',
cex_axis = 1.5, mar = c(5, 4, 0, 0),
fileout = NULL, size_units = 'px',
res = 100, figure.width = 1, ...) {
# Checking the dimensions
if (length(dim(data)) != 3) {
stop("Parameter 'data' must be an array with three dimensions.")
}
if (any(is.na(data))){
stop("Parameter 'data' cannot contain NAs.")
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must be an array with named dimensions.")
}else{
if (!any(names(dim(data)) == 'dimx') | !any(names(dim(data)) == 'dimy') |
!any(names(dim(data)) == 'dimcat')) {
stop("Parameter 'data' should contain 'dimx', 'dimy' and 'dimcat' dimension names. ")
}
}
  if (!is.vector(mar) || length(mar) != 4) {
stop("Parameter 'mar' must be a vector of length 4.")
}
if (!is.null(sig_data)) {
if (!is.logical(sig_data)) {
stop("Parameter 'sig_data' array must be logical.")}
else if (length(dim(sig_data)) != 3) {
stop("Parameter 'sig_data' must be an array with three dimensions.")
}else if (any(dim(sig_data) != dim(data))){
stop("Parameter 'sig_data' must be an array with the same dimensions as 'data'.")
}else if(!is.null(names(dim(sig_data)))) {
if (any(names(dim(sig_data)) != names(dim(data)))) {
stop("Parameter 'sig_data' must be an array with the same named dimensions as 'data'.")}
}
}
  if (dim(data)['dimcat'] != 4 && dim(data)['dimcat'] != 2) {
    stop("Parameter 'data' should contain a 'dimcat' dimension of length two ",
         "or four, as only two or four categories can be plotted.")
  }
# Checking what is available and generating missing information
if (!is.null(lab_legend) &&
length(lab_legend) != 4 && length(lab_legend) != 2) {
stop("Parameter 'lab_legend' should contain two or four names.")
}
datadim <- dim(data)
nrow <- dim(data)['dimy']
ncol <- dim(data)['dimx']
ncat <- dim(data)['dimcat']
# If there is any filenames to store the graphics, process them
# to select the right device
if (!is.null(fileout)) {
deviceInfo <- .SelectDevice(fileout = fileout,
width = 80 * ncol * figure.width,
height = 80 * nrow,
units = size_units, res = res)
saveToFile <- deviceInfo$fun
fileout <- deviceInfo$files
}
# Open connection to graphical device
if (!is.null(fileout)) {
saveToFile(fileout)
} else if (names(dev.cur()) == 'null device') {
dev.new(units = size_units, res = res,
width = 8 * figure.width, height = 5)
}
if (is.null(xlabels)){
xlabels = 1:ncol
}
if (is.null(ylabels)){
ylabels = 1:nrow
}
if (!is.null(brks) && !is.null(cols)) {
if (length(brks) != length(cols) + 1) {
stop("The length of the parameter 'brks' must be one more than 'cols'.")
}
}
if (is.null(brks)) {
brks <- seq(min(data, na.rm = T), max(data, na.rm = T), length.out = 9)
}
if (is.null(cols)) {
cols <- rev(brewer.pal(length(brks) - 1, 'RdBu'))
}
# The colours for each triangle/category are defined
data_cat <- array(cols[length(cols)], dim = datadim)
names(dim(data_cat)) <- names(dim(data))
for (i in (length(cols) - 1):1) {
data_cat[data < brks[i + 1]] <- cols[i]
}
if(legend){
layout(matrix(c(1, 2, 1, 3), 2, 2, byrow = T),
widths = c(10, 3.4), heights = c(10, 3.5))
par(oma = c(1, 1, 1, 1), mar = mar)
if(is.null(lab_legend)) {
lab_legend = 1:ncat
}
}
plot(ncol, nrow, xlim = c(0, ncol), ylim=c(0, nrow), xaxs = "i", yaxs = 'i', type = "n",
xaxt = "n", yaxt = "n", ann = F, axes = F)
box(col = 'black',lwd = 1)
if (! is.null(toptitle)){
title(toptitle, cex = 1.5)
}
if (!is.null(xtitle)){
mtext(side = 1, text = xtitle, line = 4, cex = 1.5)
}
if (!is.null(ytitle)){
mtext(side = 2, text = ytitle, line = 2.5, cex = 1.5)
}
if (xlab){
axis(1, at =(1:ncol) - 0.5, las = 2, labels = xlabels, cex.axis = cex_axis)
}
if (ylab){
axis(2, at = (1:nrow) - 0.5, las = 2, labels = ylabels, cex.axis = cex_axis)
}
  # The triangles are plotted: with four categories each grid cell is split into
  # four triangles meeting at the cell centre; with two categories the cell is
  # split into two triangles along its diagonal.
for(p in 1:ncol){
for(l in 1:nrow){
if (ncat == 4){
coord_triangl <- list(xs=list(c(p-1, p-0.5, p-1),c(p-1, p-0.5, p),c(p, p-0.5, p),c(p-1, p-0.5, p)),
ys=list( c(l-1, -0.5+l, l), c(l-1, -0.5+l, l-1),c(l-1, -0.5+l, l),c(l, -0.5+l, l)))
coord_sig <- list(x=c(p-0.75,p-0.5,p-0.25,p-0.5),y=c(l-0.5,l-0.75,l-0.5,l-0.25))
}
if (ncat==2){
coord_triangl <- list(xs=list(c(p-1, p, p-1),c(p-1, p, p)),
ys=list(c(l-1, l, l),c(l-1,l-1, l)))
coord_sig <- list(x=c(p-(2/3),p-(1/3)),y=c(l-(1/3),l-(2/3)))
}
for (n in 1:ncat) {
polygon(coord_triangl$xs[[n]],
coord_triangl$ys[[n]],
col = Subset(
data_cat,
along = c('dimcat', 'dimx', 'dimy'),
indices = list(n, p, l)))
if (!is.null(sig_data) &&
Subset(sig_data,along = c('dimcat', 'dimx', 'dimy'),
indices = list(n, p, l))) {
points(
x = coord_sig$x[n],
y = coord_sig$y[n],
pch = pch_sig,
cex = cex_sig,
col = col_sig
)
}
}
}
}
# legend
if(legend){
# Colorbar
par(mar=c(0,0,0,0))
ColorBar(brks = brks, cols = cols, vertical = T, draw_ticks = T, draw_separators = T,
# extra_margin = c(0,0,2.5,0),label_scale = 1.5,...)
extra_margin = c( 0, 0, 0, 0), label_scale = 1.5, ...)
par(mar = c(0.5, 2.5, 0.5, 2.5))
plot(1, 1, xlim = c(0, 1), ylim =c(0, 1), xaxs = "i", yaxs = 'i', type = "n",
xaxt = "n", yaxt = "n", ann = F, axes = F)
box(col = col_leg)
p = l = 1
if (ncat == 4){
coord_triangl <- list(xs = list(c(p -1, p - 0.5, p - 1), c(p - 1, p - 0.5, p),
c(p, p - 0.5, p), c(p - 1, p - 0.5, p)),
ys = list(c(l - 1, - 0.5 + l, l), c(l - 1, - 0.5 + l, l - 1),
c(l - 1, - 0.5 + l, l), c(l, - 0.5 + l, l)))
coord_sig <- list(x = c(p - 0.75, p - 0.5, p - 0.25, p - 0.5),
y = c(l - 0.5, l - 0.75, l - 0.5, l - 0.25))
}
if (ncat==2){
coord_triangl<- list(xs=list(c(p-1, p, p),c(p-1, p, p-1)),
ys=list( c(l-1,l-1, l), c(l-1, l, l)))
coord_sig<- list(x=c(p-(2/3),p-(1/3)),y=c(l-(1/3),l-(2/3)))
}
for (n in 1:ncat) {
polygon(coord_triangl$xs[[n]],
coord_triangl$ys[[n]],border=col_leg)
text(x=coord_sig$x[[n]],y=coord_sig$y[[n]],labels = lab_legend[n],cex=cex_leg,col=col_leg)
}
}
# If the graphic was saved to file, close the connection with the device
if (!is.null(fileout)) dev.off()
}
.SelectDevice <- function(fileout, width, height, units, res) {
# This function is used in the plot functions to check the extension of the
# files where the graphics will be stored and select the right R device to
# save them.
# If the vector of filenames ('fileout') has files with different
# extensions, then it will only accept the first one, changing all the rest
# of the filenames to use that extension.
# We extract the extension of the filenames: '.png', '.pdf', ...
ext <- regmatches(fileout, regexpr("\\.[a-zA-Z0-9]*$", fileout))
if (length(ext) != 0) {
# If there is an extension specified, select the correct device
## units of width and height set to accept inches
if (ext[1] == ".png") {
saveToFile <- function(fileout) {
png(filename = fileout, width = width, height = height, res = res, units = units)
}
} else if (ext[1] == ".jpeg") {
saveToFile <- function(fileout) {
jpeg(filename = fileout, width = width, height = height, res = res, units = units)
}
} else if (ext[1] %in% c(".eps", ".ps")) {
saveToFile <- function(fileout) {
postscript(file = fileout, width = width, height = height)
}
} else if (ext[1] == ".pdf") {
saveToFile <- function(fileout) {
pdf(file = fileout, width = width, height = height)
}
} else if (ext[1] == ".svg") {
saveToFile <- function(fileout) {
svg(filename = fileout, width = width, height = height)
}
} else if (ext[1] == ".bmp") {
saveToFile <- function(fileout) {
bmp(filename = fileout, width = width, height = height, res = res, units = units)
}
} else if (ext[1] == ".tiff") {
saveToFile <- function(fileout) {
tiff(filename = fileout, width = width, height = height, res = res, units = units)
}
} else {
.warning("file extension not supported, it will be used '.eps' by default.")
## In case there is only one filename
fileout[1] <- sub("\\.[a-zA-Z0-9]*$", ".eps", fileout[1])
ext[1] <- ".eps"
saveToFile <- function(fileout) {
postscript(file = fileout, width = width, height = height)
}
}
# Change filenames when necessary
if (any(ext != ext[1])) {
.warning(paste0("some extensions of the filenames provided in 'fileout' are not ", ext[1],". The extensions are being converted to ", ext[1], "."))
fileout <- sub("\\.[a-zA-Z0-9]*$", ext[1], fileout)
}
} else {
# Default filenames when there is no specification
.warning("there are no extensions specified in the filenames, default to '.eps'")
fileout <- paste0(fileout, ".eps")
saveToFile <- postscript
}
# return the correct function with the graphical device, and the correct
# filenames
list(fun = saveToFile, files = fileout)
}
.warning <- function(...) {
# Function to use the 'warning' R function with our custom settings
# Default: no call information, indent to 0, exdent to 3,
# collapse to \n
args <- list(...)
## In case we need to specify warning arguments
if (!is.null(args[["call."]])) {
call <- args[["call."]]
} else {
## Default: don't show info about the call where the warning came up
call <- FALSE
}
if (!is.null(args[["immediate."]])) {
immediate <- args[["immediate."]]
} else {
## Default value in warning function
immediate <- FALSE
}
if (!is.null(args[["noBreaks."]])) {
noBreaks <- args[["noBreaks."]]
} else {
## Default value warning function
noBreaks <- FALSE
}
if (!is.null(args[["domain"]])) {
domain <- args[["domain"]]
} else {
## Default value warning function
domain <- NULL
}
args[["call."]] <- NULL
args[["immediate."]] <- NULL
args[["noBreaks."]] <- NULL
args[["domain"]] <- NULL
## To modify strwrap indent and exdent arguments
if (!is.null(args[["indent"]])) {
indent <- args[["indent"]]
} else {
indent <- 0
}
if (!is.null(args[["exdent"]])) {
exdent <- args[["exdent"]]
} else {
exdent <- 3
}
args[["indent"]] <- NULL
args[["exdent"]] <- NULL
## To modify paste collapse argument
if (!is.null(args[["collapse"]])) {
collapse <- args[["collapse"]]
} else {
collapse <- "\n!"
}
args[["collapse"]] <- NULL
## Warning tag
if (!is.null(args[["tag"]])) {
tag <- args[["tag"]]
} else {
tag <- "! Warning: "
}
args[["tag"]] <- NULL
warning(paste0(tag, paste(strwrap(
args, indent = indent, exdent = exdent
), collapse = collapse)), call. = call, immediate. = immediate,
noBreaks. = noBreaks, domain = domain)
}
#'Plots the observed weekly means and climatology of a timeseries data
#'
#'@description This function plots the observed weekly means and climatology of
#'a timeseries data using ggplot package. It compares the weekly climatology in
#'a specified period (reference period) to the observed conditions during the
#'target period analyzed in the case study.
#'
#'@param data A multidimensional array with named dimensions with at least sdate
#' and time dimensions containing observed daily data. It can also be a
#' dataframe with computed percentiles as input for ggplot. If it's a
#' dataframe, it must contain the following column names: 'week', 'clim',
#' 'p10', 'p90', 'p33', 'p66', 'week_mean', 'day' and 'data'.
#'@param first_date The first date of the observed values of timeseries. It can
#' be of class 'Date', 'POSIXct' or a character string in the format
#' 'yyyy-mm-dd'. If parameter 'data_years' is not provided, it must be a date
#' included in the reference period.
#'@param last_date Optional parameter indicating the last date of the target
#' period of the daily timeseries. It can be of class 'Date', 'POSIXct' or a
#' character string in the format 'yyyy-mm-dd'. If it is NULL, the last date of
#' the daily timeseries will be set as the last date of 'data'. As the data is
#' plotted by weeks, only full groups of 7 days will be plotted. If the last
#' date of the timeseries is not a multiple of 7 days, the last week will
#' not be plotted.
#'@param ref_period A vector of numeric values indicating the years of the
#' reference period. If parameter 'data_years' is not specified, it must
#' be of the same length of dimension 'sdate_dim' of parameter 'data'.
#'@param data_years A vector of numeric values indicating the years of the
#' data. It must be of the same length of dimension 'sdate_dim' of parameter
#' 'data'. It is optional, if not specified, all the years will be used as the
#' target period.
#'@param time_dim A character string indicating the daily time dimension name.
#' The default value is 'time'.
#'@param sdate_dim A character string indicating the start year dimension name.
#' The default value is 'sdate'.
#'@param ylim A numeric vector of length two providing limits of the scale.
#' Use NA to refer to the existing minimum or maximum. For more information,
#' see 'ggplot2' documentation of 'scale_y_continuous' parameter.
#'@param title The text for the top title of the plot. It is NULL by default.
#'@param subtitle The text for the subtitle of the plot. It is NULL by default.
#'@param ytitle Character string to be drawn as y-axis title. It is NULL by
#' default.
#'@param legend A logical value indicating whether a legend should be included
#' in the plot. If it is TRUE or NA, the legend will be included. If it is
#' FALSE, the legend will not be included. It is TRUE by default.
#'@param palette A palette name from the RColorBrewer package. The default
#' value is 'Blues'.
#'@param fileout A character string indicating the file name where to save the
#' plot. If not specified (default) a graphics device will pop up.
#'@param device A character string indicating the device to use. Can either be
#' a device function (e.g. png), or one of "eps", "ps", "tex" (pictex),
#' "pdf", "jpeg", "tiff", "png", "bmp", "svg" or "wmf" (windows only).
#'@param width A numeric value of the plot width in units ("in", "cm", "mm", or
#' "px"). It is set to 8 by default.
#'@param height A numeric value of the plot height in units ("in", "cm", "mm",
#' or "px"). It is set to 6 by default.
#'@param units Units of the size of the device (file or window) to plot in.
#' Inches ('in') by default.
#'@param dpi A numeric value of the plot resolution. It is set to 300 by
#' default.
#'
#'@return A ggplot object containing the plot.
#'
#'@examples
#'data <- array(rnorm(49*20*3, 274), dim = c(time = 49, sdate = 20, member = 3))
#'PlotWeeklyClim(data = data, first_date = '2002-08-09',
#' last_date = '2002-09-15', ref_period = 2010:2019,
#' data_years = 2000:2019, time_dim = 'time', sdate_dim = 'sdate',
#' title = "Observed weekly means and climatology",
#' subtitle = "Target years: 2010 to 2019",
#' ytitle = paste0('tas', " (", "deg.C", ")"))
#'
#'@import multiApply
#'@import lubridate
#'@import ggplot2
#'@import RColorBrewer
#'@import scales
#'@importFrom ClimProjDiags Subset
#'@importFrom s2dv MeanDims
#'@export
PlotWeeklyClim <- function(data, first_date, ref_period, last_date = NULL,
data_years = NULL, time_dim = 'time',
sdate_dim = 'sdate', ylim = NULL,
title = NULL, subtitle = NULL,
ytitle = NULL, legend = TRUE,
palette = "Blues", fileout = NULL, device = NULL,
width = 8, height = 6, units = 'in', dpi = 300) {
## Check input arguments
# data
if (is.array(data)) {
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have named dimensions.")
}
is_array <- TRUE
} else if (is.data.frame(data)) {
col_names <- c("week", "clim", "p10", "p90", "p33", "p66",
"week_mean", "day", "data")
if (!all(col_names %in% names(data))) {
stop(paste0("If parameter 'data' is a data frame, it must contain the ",
"following column names: 'week', 'clim', 'p10', 'p90', 'p33', ",
"'p66', 'week_mean', 'day' and 'data'."))
}
is_array <- FALSE
} else {
stop("Parameter 'data' must be an array or a data frame.")
}
if (is_array) {
# time_dim
if (!is.character(time_dim)) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!all(time_dim %in% names(dim(data)))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (dim(data)[time_dim] < 7) {
stop(paste0("Parameter 'data' must have the dimension 'time_dim' of ",
"length equal or grater than 7 to compute the weekly means."))
}
# sdate_dim
if (!is.character(sdate_dim)) {
stop("Parameter 'sdate_dim' must be a character string.")
}
if (!sdate_dim %in% names(dim(data))) {
warning(paste0("Parameter 'sdate_dim' is not found in 'data' dimension. ",
"A dimension of length 1 has been added."))
data <- InsertDim(data, 1, lendim = 1, name = sdate_dim)
}
# legend
if (!is.logical(legend)) {
stop("Parameter 'legend' must be a logical value.")
}
if (is.na(legend)) {
legend <- TRUE
} else if (legend) {
legend <- NA
}
# ref_period (1)
if (!is.numeric(ref_period)) {
stop("Parameter 'ref_period' must be numeric.")
}
# first_date
if ((!inherits(first_date, "POSIXct") & !inherits(first_date, "Date")) &&
(!is.character(first_date) | nchar(first_date) != 10)) {
stop(paste0("Parameter 'first_date' must be a character string ",
"indicating the date in the format 'yyyy-mm-dd', 'POSIXct' ",
"or 'Dates' class."))
}
first_date <- ymd(first_date)
target_year <- year(first_date)
    target_year_outside_reference <- FALSE
# data_years
if (!is.null(data_years)) {
if (!is.numeric(data_years)) {
stop("Parameter 'data_years' must be numeric.")
}
if (length(data_years) != dim(data)[sdate_dim]) {
stop(paste0("Parameter 'data_years' must have the same length as ",
"the dimension '", sdate_dim, "' of 'data'."))
}
if (!all(ref_period %in% data_years)) {
stop(paste0("The 'ref_period' must be included in the 'data_years' ",
"period."))
}
if (!any(target_year %in% data_years)) {
stop(paste0("Parameter 'first_date' must be a date included ",
"in the 'data_years' period."))
}
      target_year_outside_reference <- TRUE
} else {
# ref_period (2)
if (length(ref_period) != dim(data)[sdate_dim]) {
stop(paste0("Parameter 'ref_period' must have the same length as the ",
"dimension '", sdate_dim ,"' of 'data' if ",
"'data_years' is not provided."))
}
if (!any(target_year %in% ref_period)) {
stop(paste0("If parameter 'data_years' is NULL, parameter 'first_date' ",
"must be a date included in the 'ref_period' period."))
}
data_years <- ref_period
}
# last_date
if (!is.null(last_date)) {
if ((!inherits(last_date, "POSIXct") & !inherits(last_date, "Date")) &&
(!is.character(last_date) | nchar(last_date) != 10)) {
stop(paste0("Parameter 'last_date' must be a character string ",
"indicating the date in the format 'yyyy-mm-dd', 'POSIXct' ",
"or 'Dates' class."))
}
last_date <- ymd(last_date)
dates <- seq(first_date, last_date, by = "1 day")
if (length(dates) > dim(data)[time_dim]) {
warning(paste0("Parameter 'last_date' is greater than the last date ",
"of 'data'. The last date of 'data' will be used."))
dates <- seq(first_date, first_date + days(dim(data)[time_dim]-1), by = "1 day")
}
} else {
dates <- seq(first_date, first_date + days(dim(data)[time_dim]-1), by = "1 day")
}
# ylim
if (is.character(ylim)) {
warning("Parameter 'ylim' can't be a character string, it will not be used.")
ylim <- NULL
}
index_first_date <- which(dates == first_date)
index_last_date <- length(dates) - (length(dates) %% 7)
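    # Only complete weeks are kept: trailing days that do not fill a block of
    # 7 days are dropped from the plotted period.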
last_date <- dates[index_last_date]
## Data preparation
# subset time_dim for weeks
data_subset <- Subset(data, along = time_dim,
indices = index_first_date:index_last_date)
# remove other dimensions
dims_subset <- names(dim(data_subset))[which(!names(dim(data_subset)) %in% c(time_dim, sdate_dim))]
if (!identical(dims_subset, character(0))) {
data_subset <- Subset(data_subset, dims_subset, as.list(rep(1, length(dims_subset))), drop = TRUE)
}
# observed daily data creation
daily <- Subset(data_subset, along = sdate_dim,
indices = which(data_years == target_year),
drop = TRUE)
    if (target_year_outside_reference) {
indexes_reference_period <- which(data_years %in% ref_period)
# remove values outside reference period for computing the means
data_subset <- Subset(data_subset, along = sdate_dim,
indices = indexes_reference_period)
}
## Weekly aggregations for reference period
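    # Each day is labelled with its week number (1, 1, ..., 2, 2, ...) so that
    # SplitDim() groups the time dimension into consecutive 7-day blocks, which
    # are then averaged over 'time_dim' and over the reference years.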
weekly_aggre <- SplitDim(data_subset, split_dim = time_dim,
indices = sort(rep(1:(length(index_first_date:index_last_date)/7), 7)),
new_dim_name = 'week')
weekly_means <- MeanDims(weekly_aggre, time_dim)
weekly_clim <- MeanDims(weekly_means, sdate_dim)
weekly_p10 <- Apply(weekly_means, target_dims = sdate_dim,
fun = function(x) {quantile(x, 0.10)})$output1
weekly_p90 <- Apply(weekly_means, target_dims = sdate_dim,
fun = function(x) {quantile(x, 0.90)})$output1
weekly_p33 <- Apply(weekly_means, target_dims = sdate_dim,
fun = function(x) {quantile(x, 0.33)})$output1
weekly_p66 <- Apply(weekly_means, target_dims = sdate_dim,
fun = function(x) {quantile(x, 0.66)})$output1
clim <- p10 <- p90 <- p33 <- p66 <- NULL
weekly_data <- data.frame(clim = as.vector(weekly_clim),
p10 = as.vector(weekly_p10),
p90 = as.vector(weekly_p90),
p33 = as.vector(weekly_p33),
p66 = as.vector(weekly_p66),
week = 1:(length(index_first_date:index_last_date)/7))
## Prepare observations from target year
daily_data <- data.frame(day = seq(first_date, last_date, by = "1 day"),
data = daily,
week = sort(rep(1:(length(index_first_date:index_last_date)/7), 7)))
week_mean <- aggregate(data ~ week, data = daily_data, mean)
weekly_data <- cbind(weekly_data, week_mean$data)
colnames(weekly_data)[7] <- 'week_mean'
all <- merge(weekly_data, daily_data, by = 'week')
} else {
all <- data
}
## Create a ggplot object
cols <- colorRampPalette(brewer.pal(9, palette))(6)
p <- ggplot(all, aes(x = day)) +
geom_ribbon(aes(ymin = p10, ymax = p90, group = week, fill = "p10-p90"),
alpha = 0.7, show.legend = legend) + # extremes clim
geom_ribbon(aes(ymin = p33, ymax = p66, group = week, fill = "p33-p66"),
alpha = 0.7, show.legend = legend) + # terciles clim
geom_line(aes(y = clim, group = week, color = "climatological mean",
linetype = "climatological mean"),
alpha = 1.0, size = 0.7, show.legend = legend) + # mean clim
geom_line(aes(y = data, color = "observed daily mean",
linetype = "observed daily mean"),
alpha = 1, size = 0.2, show.legend = legend) + # daily evolution
geom_line(aes(y = week_mean, group = week, color = "observed weekly mean",
linetype = "observed weekly mean"),
alpha = 1, size = 0.7, show.legend = legend) + # weekly evolution
theme_bw() + ylab(ytitle) + xlab(NULL) +
ggtitle(title, subtitle = subtitle) +
scale_x_date(breaks = seq(min(all$day), max(all$day), by = "7 days"),
minor_breaks = NULL, expand = c(0.03, 0.03),
labels = date_format("%d %b %Y")) +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
panel.grid.major = element_line(size = 0.5, linetype = 'solid',
colour = "gray92"),
panel.grid.minor = element_line(size = 0.25, linetype = 'solid',
colour = "gray92"),
legend.spacing = unit(-0.2, "cm")) +
scale_fill_manual(name = NULL,
values = c("p10-p90" = cols[3], "p33-p66" = cols[4])) +
scale_color_manual(name = NULL, values = c("climatological mean" = cols[5],
"observed daily mean" = "grey20",
"observed weekly mean" = "black")) +
scale_linetype_manual(name = NULL, values = c("climatological mean" = "solid",
"observed daily mean" = "dashed",
"observed weekly mean" = "solid"),
guide = guide_legend(override.aes = list(lwd = c(0.7, 0.2, 0.7)))) +
guides(fill = guide_legend(order = 1)) +
scale_y_continuous(limits = ylim)
# Return the ggplot object
if (is.null(fileout)) {
return(p)
} else {
ggsave(filename = fileout, plot = p, device = device, height = height,
width = width, units = units, dpi = dpi)
}
}
#'@rdname Predictability
#'@title Computing scores of predictability using two dynamical proxies
#'based on dynamical systems theory.
#'
#'@author Carmen Alvarez-Castro, \email{[email protected]}
#'@author Maria M. Chaves-Montero, \email{[email protected]}
#'@author Veronica Torralba, \email{[email protected]}
#'@author Davide Faranda, \email{[email protected]}
#'
#'@description This function divides in terciles the two dynamical proxies
#'computed with CST_ProxiesAttractor or ProxiesAttractor. These terciles will
#'be used to measure the predictability of the system in dyn_scores. When the
#'local dimension 'dim' is small and the inverse of persistence 'theta' is
#'small the predictability is high, and vice versa.
#'
#'@references Faranda, D., Alvarez-Castro, M.C., Messori, G., Rodriguez, D.,
#'and Yiou, P. (2019). The hammam effect or how a warm ocean enhances large
#'scale atmospheric predictability. Nature Communications, 10(1), 1316.
#'\doi{10.1038/s41467-019-09305-8}
#'@references Faranda, D., Gabriele Messori and Pascal Yiou. (2017).
#'Dynamical proxies of North Atlantic predictability and extremes.
#'Scientific Reports, 7-41278, 2017.
#'
#'@param dim An array of N named dimensions containing the local dimension as
#' the output of CST_ProxiesAttractor or ProxiesAttractor.
#'@param theta An array of N named dimensions containing the inverse of the
#' persistence 'theta' as the output of CST_ProxiesAttractor or
#' ProxiesAttractor.
#'@param ncores The number of cores to use in parallel computation.
#'
#'@return A list of length 2:
#'\itemize{
#' \item{'pred.dim', a list of two lists 'qdim' and 'pos.d'. The 'qdim' list
#' contains values of local dimension 'dim' divided by terciles:
#' d1: lower tercile (high predictability),
#' d2: middle tercile,
#' d3: higher tercile (low predictability)
#' The 'pos.d' list contains the position of each tercile in parameter
#' 'dim'.}
#'\item{'pred.theta', a list of two lists 'qtheta' and 'pos.t'.
#' The 'qtheta' list contains values of the inverse of persistence 'theta'
#' divided by terciles:
#' th1: lower tercile (high predictability),
#' th2: middle tercile,
#' th3: higher tercile (low predictability)
#' The 'pos.t' list contains the position of each tercile in parameter
#' 'theta'.}
#'}
#'@return dyn_scores values from 0 to 1. A dyn_score of 1 indicates the highest
#'predictability.
#'
#'@examples
#'# Creating an example of matrix dat(time,grids):
#'m <- matrix(rnorm(2000) * 10, nrow = 50, ncol = 40)
#'names(dim(m)) <- c('time', 'grid')
#'# imposing a threshold
#' quanti <- 0.90
#'# computing dyn_scores from parameters dim and theta of the attractor
#' attractor <- ProxiesAttractor(dat = m, quanti = 0.60)
#' predyn <- Predictability(dim = attractor$dim, theta = attractor$theta)
#'@export
Predictability <- function(dim, theta, ncores = NULL) {
if (any(names(dim(dim)) %in% 'sdate')) {
if (any(names(dim(dim)) %in% 'ftime')) {
dim <- MergeDims(dim, merge_dims = c('ftime', 'sdate'),
rename_dim = 'time')
}
}
if (!(any(names(dim(dim)) %in% 'time'))){
stop("Parameter 'dim' must have a temporal dimension named 'time'.")
}
if (any(names(dim(theta)) %in% 'sdate')) {
if (any(names(dim(theta)) %in% 'ftime')) {
theta <- MergeDims(theta, merge_dims = c('ftime', 'sdate'),
rename_dim = 'time')
}
}
if (!(any(names(dim(theta)) %in% 'time'))){
stop("Parameter 'data' must have a temporal dimension named 'time'.")
}
pred <- Apply(list(dim, theta), target_dims = 'time',
fun = .predictability,
ncores = ncores)
dim(pred$dyn_scores) <- dim(theta)
return(list(pred.dim = list(qdim = list(pred$qdim.d1,pred$qdim.d2,pred$qdim.d3),
pos.d = list(pred$pos.d1,pred$pos.d2,pred$pos.d3)),
pred.theta = list(qtheta = list(pred$qtheta.th1,pred$qtheta.th2,pred$qtheta.th3),
pos.t = list(pred$pos.th1,pred$pos.th2,pred$pos.th3)),
dyn_scores = pred$dyn_scores))
}
.predictability <- function(dim, theta) {
if (is.null(dim)) {
stop("Parameter 'dim' cannot be NULL.")
}
if (is.null(theta)) {
stop("Parameter 'theta' cannot be NULL.")
}
if (length(dim) != length(theta)) {
stop("Parameters 'dim' and 'theta' must have the same length")
}
pos <- c(1:length(dim))
# dim
qd1 <- quantile(dim, 0.33, na.rm = TRUE)
qd3 <- quantile(dim, 0.66, na.rm = TRUE)
d3 <- which(dim >= qd3)
d1 <- which(dim <= qd1)
d2 <- which(dim > qd1 & dim < qd3)
qdim <- list(d1 = dim[d1], d2 = dim[d2], d3 = dim[d3])
pos.d <- list(d1=d1,d2=d2,d3=d3)
#theta
qt1 <- quantile(theta, 0.33, na.rm = TRUE)
qt3 <- quantile(theta, 0.66, na.rm = TRUE)
th3 <- which(theta >= qt3)
th1 <- which(theta <= qt1)
th2 <- which(theta > qt1 & theta < qt3)
qtheta <- list(th1 = theta[th1], th2 = theta[th2], th3 = theta[th3])
pos.t <- list(th1 = th1, th2 = th2, th3 = th3)
#scores position
d1th1 <- pos.d$d1[which(pos.d$d1 %in% pos.t$th1)]
d1th2 <- pos.d$d1[which(pos.d$d1 %in% pos.t$th2)]
d2th1 <- pos.d$d2[which(pos.d$d2 %in% pos.t$th1)]
d1th3 <- pos.d$d1[which(pos.d$d1 %in% pos.t$th3)]
d3th1 <- pos.d$d3[which(pos.d$d3 %in% pos.t$th1)]
d2th2 <- pos.d$d2[which(pos.d$d2 %in% pos.t$th2)]
d2th3 <- pos.d$d2[which(pos.d$d2 %in% pos.t$th3)]
d3th2 <- pos.d$d3[which(pos.d$d3 %in% pos.t$th2)]
d3th3 <- pos.d$d3[which(pos.d$d3 %in% pos.t$th3)]
#scores values
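  # The nine (dim tercile, theta tercile) combinations are ranked from the most
  # predictable case (lowest dim and lowest theta, d1th1, score 9/9) to the
  # least predictable one (highest dim and highest theta, d3th3, score 1/9).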
dyn_scores <- c(1:length(dim))
dyn_scores[which(pos %in% d1th1)]<- 9/9
dyn_scores[which(pos %in% d1th2)]<- 8/9
dyn_scores[which(pos %in% d2th1)]<- 7/9
dyn_scores[which(pos %in% d1th3)]<- 6/9
dyn_scores[which(pos %in% d3th1)]<- 5/9
dyn_scores[which(pos %in% d2th2)]<- 4/9
dyn_scores[which(pos %in% d2th3)]<- 3/9
dyn_scores[which(pos %in% d3th2)]<- 2/9
dyn_scores[which(pos %in% d3th3)]<- 1/9
return(list(qdim.d1 = dim[d1], qdim.d2 = dim[d2], qdim.d3 = dim[d3],
pos.d1 = d1, pos.d2 = d2, pos.d3 =d3,
qtheta.th1 = theta[th1], qtheta.th2 = theta[th2], qtheta.th3 = theta[th3],
pos.th1 = th1, pos.th2 = th2, pos.th3 = th3,
dyn_scores = dyn_scores))
}
#'Conversion of 'startR_array' or 'list' objects to 's2dv_cube'
#'
#'This function converts data loaded using Start function from startR package or
#'Load from s2dv into an 's2dv_cube' object.
#'
#'@author Perez-Zanon Nuria, \email{[email protected]}
#'@author Nicolau Manubens, \email{[email protected]}
#'
#'@param object An object of class 'startR_array' generated from function
#' \code{Start} from startR package or a list output from function \code{Load}
#' from s2dv package. Any other object class will not be accepted.
#'@param remove_attrs_coords A logical value indicating whether to remove the
#' attributes of the coordinates (TRUE) or not (FALSE). The default value is
#' FALSE.
#'@param remove_null Optional. A logical value indicating whether to remove the
#' elements that are NULL (TRUE) or not (FALSE) of the output object. It is
#' only used when the object is an output from function \code{Load}. The
#' default value is FALSE.
#'
#'@return The function returns an 's2dv_cube' object to be easily used with
#'functions with the prefix \code{CST} from CSTools and CSIndicators packages.
#'The object is mainly a list with the following elements:\cr
#'\itemize{
#' \item{'data', array with named dimensions;}
#' \item{'dims', named vector of the data dimensions;}
#' \item{'coords', list of named vectors with the coordinates corresponding to
#' the dimensions of the data parameter;}
#' \item{'attrs', named list with elements:
#' \itemize{
#' \item{'Dates', array with named temporal dimensions of class 'POSIXct'
#' from time values in the data;}
#' \item{'Variable', has the following components:
#' \itemize{
#' \item{'varName', character vector of the short variable name. It is
#' usually specified in the parameter 'var' from the functions
#' Start and Load;}
#' \item{'metadata', named list of elements with variable metadata.
#' They can be from coordinates variables (e.g. longitude) or
#' main variables (e.g. 'var');}
#' }
#' }
#' \item{'Datasets', character strings indicating the names of the
#' datasets;}
#' \item{'source_files', a vector of character strings with complete paths
#' to all the found files involved in loading the data;}
#' \item{'when', a time stamp of the date issued by the Start() or Load()
#' call to obtain the data;}
#' \item{'load_parameters', it contains the components used in the
#' arguments to load the data from Start() or Load() functions.}
#' }
#' }
#'}
#'
#'@seealso \code{\link{s2dv_cube}}, \code{\link{CST_Start}},
#'\code{\link[startR]{Start}} and \code{\link{CST_Load}}
#'@examples
#'\dontrun{
#'# Example 1: convert an object from startR::Start function to 's2dv_cube'
#'library(startR)
#'repos <- '/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc'
#'data <- Start(dat = repos,
#' var = 'tas',
#' sdate = c('20170101', '20180101'),
#' ensemble = indices(1:5),
#' time = 'all',
#' latitude = indices(1:5),
#' longitude = indices(1:5),
#' return_vars = list(latitude = 'dat', longitude = 'dat', time = 'sdate'),
#' retrieve = TRUE)
#'data <- as.s2dv_cube(data)
#'# Example 2: convert an object from s2dv::Load function to 's2dv_cube'
#'startDates <- c('20001101', '20011101', '20021101',
#' '20031101', '20041101', '20051101')
#'data <- Load(var = 'tas', exp = 'system5c3s',
#' nmember = 2, sdates = startDates,
#' leadtimemax = 3, latmin = 10, latmax = 30,
#' lonmin = -10, lonmax = 10, output = 'lonlat')
#'data <- as.s2dv_cube(data)
#'}
#'@export
as.s2dv_cube <- function(object, remove_attrs_coords = FALSE,
remove_null = FALSE) {
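  # An output of s2dv::Load() is identified here as a plain list of 11
  # elements; objects of class 'startR_array' are handled further below, and
  # any other class is not accepted.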
if (is.list(object) & length(object) == 11) {
if (is.null(object) || (is.null(object$mod) && is.null(object$obs))) {
stop("The s2dv::Load call did not return any data.")
}
obs <- object
obs$mod <- NULL
object$obs <- NULL
names(object)[[1]] <- 'data' # exp
names(obs)[[1]] <- 'data' # obs
# obs
if (!is.null(obs$data)) {
obs_exist <- TRUE
obs$Datasets$exp <- NULL
obs$Datasets <- obs$Datasets$obs
} else {
obs_exist <- FALSE
}
# object
if (!is.null(object$data)) {
exp_exist <- TRUE
object$Datasets$obs <- NULL
object$Datasets <- object$Datasets$exp
} else {
exp_exist <- FALSE
}
result <- list()
# obs and exp
if (obs_exist & exp_exist) {
obs_exp = list(exp = object, obs = obs)
} else if (obs_exist & !exp_exist) {
obs_exp = list(obs = obs)
} else {
obs_exp = list(exp = object)
}
i <- 0
for (obj_i in obs_exp) {
i <- i + 1
# attrs
obj_i$attrs <- within(obj_i, rm(list = c('data')))
obj_i <- within(obj_i, rm(list = names(obj_i$attrs)))
dates <- obj_i$attrs$Dates$start
attr(dates, 'end') <- obj_i$attrs$Dates$end
if (!is.null(dates)) {
dim(dates) <- dim(obj_i$data)[c('ftime', 'sdate')]
obj_i$attrs$Dates <- dates
}
# Variable
varname <- obj_i$attrs$Variable$varName
varmetadata <- NULL
varmetadata[[varname]] <- attributes(obj_i$attrs$Variable)[-1]
obj_i$attrs$Variable <- list(varName = varname, metadata = varmetadata)
# dims
obj_i$dims <- dim(obj_i$data)
# coords
obj_i$coords <- sapply(names(dim(obj_i$data)), function(x) NULL)
# sdate
obj_i$coords$sdate <- obj_i$attrs$load_parameters$sdates
if (!remove_attrs_coords) attr(obj_i$coords$sdate, 'indices') <- FALSE
# lon
if (!is.null(obj_i$attrs$lon)) {
if (remove_attrs_coords) {
obj_i$coords$lon <- as.vector(obj_i$attrs$lon)
} else {
obj_i$coords$lon <- obj_i$attrs$lon
dim(obj_i$coords$lon) <- NULL
attr(obj_i$coords$lon, 'indices') <- FALSE
}
obj_i$attrs$Variable$metadata$lon <- obj_i$attrs$lon
obj_i$attrs <- within(obj_i$attrs, rm(list = 'lon'))
}
# lat
if (!is.null(obj_i$attrs$lat)) {
if (remove_attrs_coords) {
obj_i$coords$lat <- as.vector(obj_i$attrs$lat)
} else {
obj_i$coords$lat <- obj_i$attrs$lat
dim(obj_i$coords$lat) <- NULL
attr(obj_i$coords$lat, 'indices') <- FALSE
}
obj_i$attrs$Variable$metadata$lat <- obj_i$attrs$lat
obj_i$attrs <- within(obj_i$attrs, rm(list = 'lat'))
}
# member
obj_i$coords$member <- 1:obj_i$dims['member']
if (!remove_attrs_coords) attr(obj_i$coords$member, 'indices') <- TRUE
# dataset
if (!is.null(names(obj_i$attrs$Datasets))) {
obj_i$coords$dataset <- names(obj_i$attrs$Datasets)
if (!remove_attrs_coords) attr(obj_i$coords$dataset, 'indices') <- FALSE
obj_i$attrs$Datasets <- names(obj_i$attrs$Datasets)
} else {
obj_i$coords$dataset <- 1:obj_i$dims['dataset']
if (!remove_attrs_coords) attr(obj_i$coords$dataset, 'indices') <- TRUE
}
# ftime
obj_i$coords$ftime <- 1:obj_i$dims['ftime']
if (!remove_attrs_coords) attr(obj_i$coords$ftime, 'indices') <- TRUE
# remove NULL values
if (isTRUE(remove_null)) {
obj_i$attrs$load_parameters <- .rmNullObs(obj_i$attrs$load_parameters)
}
obj_i <- obj_i[c('data', 'dims', 'coords', 'attrs')]
class(obj_i) <- 's2dv_cube'
if (names(obs_exp)[[i]] == 'exp') {
result$exp <- obj_i
} else {
result$obs <- obj_i
}
}
if (is.list(result)) {
if (is.null(result$exp)) {
result <- result$obs
} else if (is.null(result$obs)) {
result <- result$exp
} else {
warning("The output is a list of two 's2dv_cube' objects",
" corresponding to 'exp' and 'obs'.")
}
}
} else if (inherits(object, 'startR_array')) {
# From Start:
result <- list()
result$data <- as.vector(object)
## dims
dims <- dim(object)
dim(result$data) <- dims
result$dims <- dims
## coords
result$coords <- sapply(names(dims), function(x) NULL)
# Find coordinates
FileSelector <- attributes(object)$FileSelectors
VariablesCommon <- names(attributes(object)$Variables$common)
dat <- names(FileSelector)[1]
VariablesDat <- names(attributes(object)$Variables[[dat]])
varName <- NULL
for (i_coord in names(dims)) {
if (i_coord %in% names(FileSelector[[dat]])) { # coords in FileSelector
coord_in_fileselector <- FileSelector[[dat]][which(i_coord == names(FileSelector[[dat]]))]
if (length(coord_in_fileselector) == 1) {
if (length(coord_in_fileselector[[i_coord]][[1]]) == dims[i_coord]) {
# TO DO: add var_dim parameter
if (i_coord %in% c('var', 'vars')) {
varName <- as.vector(coord_in_fileselector[[i_coord]][[1]])
}
if (remove_attrs_coords) {
result$coords[[i_coord]] <- as.vector(coord_in_fileselector[[i_coord]][[1]])
} else {
result$coords[[i_coord]] <- coord_in_fileselector[[i_coord]][[1]]
attr(result$coords[[i_coord]], 'indices') <- FALSE
}
} else {
result$coords[[i_coord]] <- 1:dims[i_coord]
if (!remove_attrs_coords) attr(result$coords[[i_coord]], 'indices') <- TRUE
}
}
} else if (i_coord %in% VariablesCommon) { # coords in common
coord_in_common <- attributes(object)$Variables$common[[which(i_coord == VariablesCommon)]]
if (inherits(coord_in_common, "POSIXct")) {
result$attrs$Dates <- coord_in_common
}
if (length(coord_in_common) == dims[i_coord]) {
if (remove_attrs_coords) {
if (inherits(coord_in_common, "POSIXct")) {
result$coords[[i_coord]] <- 1:dims[i_coord]
attr(result$coords[[i_coord]], 'indices') <- TRUE
} else {
result$coords[[i_coord]] <- as.vector(coord_in_common)
}
} else {
result$coords[[i_coord]] <- coord_in_common
attr(result$coords[[i_coord]], 'indices') <- FALSE
}
} else {
result$coords[[i_coord]] <- 1:dims[i_coord]
if (!remove_attrs_coords) attr(result$coords[[i_coord]], 'indices') <- TRUE
}
} else if (!is.null(VariablesDat)) { # coords in dat
if (i_coord %in% VariablesDat) {
coord_in_dat <- attributes(object)$Variables[[dat]][[which(i_coord == VariablesDat)]]
if (inherits(coord_in_dat, "POSIXct")) {
result$attrs$Dates <- coord_in_dat
}
if (length(coord_in_dat) == dims[i_coord]) {
if (remove_attrs_coords) {
if (inherits(coord_in_dat, "POSIXct")) {
result$coords[[i_coord]] <- coord_in_dat
} else {
result$coords[[i_coord]] <- as.vector(coord_in_dat)
}
} else {
result$coords[[i_coord]] <- coord_in_dat
attr(result$coords[[i_coord]], 'indices') <- FALSE
}
} else {
result$coords[[i_coord]] <- 1:dims[i_coord]
if (!remove_attrs_coords) attr(result$coords[[i_coord]], 'indices') <- TRUE
}
} else {
result$coords[[i_coord]] <- 1:dims[i_coord]
if (!remove_attrs_coords) attr(result$coords[[i_coord]], 'indices') <- TRUE
}
} else { # missing other dims
result$coords[[i_coord]] <- 1:dims[i_coord]
if (!remove_attrs_coords) attr(result$coords[[i_coord]], 'indices') <- TRUE
}
dim(result$coords[[i_coord]]) <- NULL
}
# attrs
## varName
if (!is.null(varName)) {
result$attrs$Variable$varName <- varName
}
## Variables
for (var_type in names(attributes(object)$Variables)) {
if (!is.null(attributes(object)$Variables[[var_type]])) {
for (var in names(attributes(object)$Variables[[var_type]])) {
attr_variable <- attributes(object)$Variables[[var_type]][[var]]
if (is.null(result$attrs$Dates)) {
if (inherits(attr_variable, "POSIXct")) {
result$attrs$Dates <- attr_variable
}
}
if (is.null(result$attrs$Variable$metadata[[var]])) {
result$attrs$Variable$metadata[[var]] <- attr_variable
}
}
}
}
## Datasets
if (length(names(FileSelector)) > 1) {
# lon name
known_lon_names <- .KnownLonNames()
lon_name_dat <- names(dims)[which(names(dims) %in% known_lon_names)]
# lat name
known_lat_names <- .KnownLatNames()
lat_name_dat <- names(dims)[which(names(dims) %in% known_lat_names)]
result$attrs$Datasets <- names(FileSelector)
# TO DO: add dat_dim parameter
if (any(names(dims) %in% c('dat', 'dataset'))) {
dat_dim <- names(dims)[which(names(dims) %in% c('dat', 'dataset'))]
result$coords[[dat_dim]] <- names(FileSelector)
if (!remove_attrs_coords) attr(result$coords[[dat_dim]], 'indices') <- FALSE
}
for (i in 2:length(names(FileSelector))) {
if (!is.null(lon_name_dat)) {
if (any(result$coords[[lon_name_dat]] != as.vector(attributes(object)$Variables[[names(FileSelector)[i]]][[lon_name_dat]]))) {
warning("'lon' values are different for different datasets. ",
"Only values from the first will be used.")
}
}
if (!is.null(lat_name_dat)) {
if (any(result$coords[[lat_name_dat]] != as.vector(attributes(object)$Variables[[names(FileSelector)[i]]][[lat_name_dat]]))) {
warning("'lat' values are different for different datasets. ",
"Only values from the first will be used.")
}
}
}
} else {
result$attrs$Datasets <- names(FileSelector)
}
## when
result$attrs$when <- Sys.time()
## source_files
result$attrs$source_files <- attributes(object)$Files
## load_parameters
result$attrs$load_parameters <- attributes(object)$FileSelectors
class(result) <- 's2dv_cube'
} else {
stop("The class of parameter 'object' is not implemented",
" to be converted into 's2dv_cube' class yet.")
}
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/as.s2dv_cube.R
|
#'Print method for s2dv_cube objects
#'
#'This is an S3 method of the generic 'print' for the class 's2dv_cube'. When
#'you call 'print' on an 's2dv_cube' object, this method will display the
#'content of the object in a clear and informative way.
#'
#'The object will be displayed following 's2dv_cube' class conventions. The
#'top-level elements are: 'Data', a multidimensional array containing the
#'object's data; 'Dimensions', the dimensions of the array; 'Coordinates', the
#'array coordinates that match its dimensions, explicit coordinates have an
#'asterisk (*) at the beginning while index coordinates do not; and
#''Attributes', which contains all the metadata of the object. For more
#'information about the 's2dv_cube', see \code{s2dv_cube()} and
#'\code{as.s2dv_cube()} functions.
#'
#'@param x An 's2dv_cube' object.
#'@param ... Additional arguments of print function.
#'
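#'@examples
#'# Minimal illustrative example: build a small 's2dv_cube' with s2dv_cube()
#'# (warnings about missing metadata are expected) and print it.
#'cube <- s2dv_cube(data = array(rnorm(20), dim = c(time = 10, lat = 2)))
#'print(cube)
#'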
#'@export
print.s2dv_cube <- function(x, ...) {
if (is.atomic(x)) {
cat(x, "\n")
} else {
cat("'s2dv_cube'\n")
cat("Data ", "[" , paste0(x$data[1:8], collapse = ", "), '...', "]", "\n")
cat("Dimensions ", "(", paste(names(x$dims), x$dims, sep = " = ", collapse = ', '), ")", "\n")
cat("Coordinates \n")
for (coord in names(x$coords)) {
if (!is.null(attr(x$coords[[coord]], 'indices'))) {
if (attr(x$coords[[coord]], 'indices')) {
cat(" ", coord, ":", paste(x$coords[[coord]], collapse = ", "), "\n")
} else {
cat(" *", coord, ":", paste(x$coords[[coord]], collapse = ", "), "\n")
}
} else {
cat(" *", coord, ":", paste(x$coords[[coord]], collapse = ", "), "\n")
}
}
cat("Attributes \n")
for (attr_name in names(x$attrs)) {
if (attr_name == "Variable") {
cat(" ", "varName :", x$attrs$Variable$varName, "\n")
cat(" ", "metadata : ", "\n")
for (metadata_i in names(x$attrs$Variable$metadata)) {
cat(" ", " ", metadata_i, "\n")
.print_metadata(x$attrs$Variable$metadata[[metadata_i]])
}
} else {
cat(" ", attr_name, " : ")
.print_beginning(x = x$attrs[[attr_name]], name = attr_name)
}
}
}
}
## Auxiliary function for the print method
.print_beginning <- function(x, name, n = 5, j = 1) {
if (inherits(x, 'numeric') | inherits(x, 'POSIXct') | inherits(x, 'Date')) {
if (length(x) <= n) {
cat(as.character(x), "\n")
} else {
cat(paste0(as.character(x[seq_len(n)])), "...", "\n")
}
} else if (name == "time_bounds") {
cat("\n")
for (param in names(x)) {
cat(" ", "(", param,")", " : ")
if (length(x[[param]]) <= n) {
cat(as.character(x[[param]]), "\n")
} else {
cat(paste0(as.character(x[[param]][seq_len(n)])), "...", "\n")
}
}
} else if (inherits(x, 'list')) {
cat("\n")
k = 1
for (param in names(x)) {
k = k + 1
param_i <- x[[param]]
if (!is.null(param_i)) {
param_i <- lapply(param_i, function(x) {if (length(x[[1]]) > 1) {
x[[1]] <- paste0(x[[1]][1],' ...')
} else {
x
}})
cat(" ", "(", param,")", " : ")
cat(paste0(names(unlist(param_i)), " = ", unlist(param_i), collapse = ', '), "\n")
} else {
j = j + 1
}
if (k > j) {
cat(" ", "...", "\n")
break
}
}
} else {
if (length(x) > 1) {
cat(x[[1]], "...", "\n")
} else {
cat(x[[1]], "\n")
}
}
}
## Auxiliary function for the print method
.print_metadata <- function(x) {
if (inherits(x, 'list')) {
info_names <- NULL
for (info_i in names(x)) {
if (info_i == 'units') {
cat(" ", " ", " units :", x[[info_i]], "\n")
} else if (info_i %in% c('longname', 'long_name')) {
cat(" ", " ", " long name :", x[[info_i]], "\n")
} else {
info_names <- c(info_names, info_i)
}
}
cat(" ", " ", " other :", paste0(info_names, collapse = ', '), "\n")
} else if (!is.null(attributes(x))) {
if ('variables' %in% names(attributes(x))) {
info_names <- NULL
attrs <- attributes(x)[['variables']]
for (attrs_i in names(attrs)) {
for (info_i in names(attrs[[attrs_i]])) {
if (!inherits(attrs[[attrs_i]][[info_i]], 'list')) {
if (info_i == 'units') {
cat(" ", " ", " units :", attrs[[attrs_i]][[info_i]], "\n")
} else if (info_i %in% c('longname', 'long_name')) {
cat(" ", " ", " long name :", attrs[[attrs_i]][[info_i]], "\n")
} else {
info_names <- c(info_names, info_i)
}
}
}
}
cat(" ", " ", " other :", paste0(info_names, collapse = ', '), "\n")
} else {
attrs <- attributes(x)
info_names <- NULL
for (info_i in names(attrs)) {
if (info_i == 'cdo_grid_name') {
cat(" ", " ", " cdo_grid_name :", attrs[[info_i]], "\n")
} else {
info_names <- c(info_names, info_i)
}
}
cat(" ", " ", " other :", paste0(info_names, collapse = ', '), "\n")
}
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/print.s2dv_cube.R
|
#'Creation of a 's2dv_cube' object
#'
#'@description This function allows creating an 's2dv_cube' object by passing
#'information through its parameters. This function will be needed if the data
#'hasn't been loaded using CST_Start or has been transformed with other methods.
#'An 's2dv_cube' object has many different components, including metadata. This
#'function allows creating 's2dv_cube' objects even if not all elements are
#'defined, and for each expected missing parameter a warning message will be
#'returned.
#'
#'@author Perez-Zanon Nuria, \email{[email protected]}
#'
#'@param data A multidimensional array with named dimensions, typically with
#' dimensions: dataset, member, sdate, time, lat and lon.
#'@param coords A list of named vectors with the coordinates corresponding to
#' the dimensions of the data parameter. If any coordinate has dimensions, they
#' will be set as NULL. If any coordinate is not provided, it is set as an
#' index vector with the values from 1 to the length of the corresponding
#'  dimension. The attribute 'indices' indicates whether the coordinate is an
#' index vector (TRUE) or not (FALSE).
#'@param varName A character string indicating the abbreviation of the variable
#' name.
#'@param metadata A named list where each element is a variable containing the
#' corresponding information. The information can be contained in a list of
#' lists for each variable.
#'@param Datasets Character strings indicating the names of the dataset. If
#'  there are multiple datasets, it can be a vector of their names or a list of
#'  lists with additional information.
#'@param Dates A POSIXct array of time dimensions containing the Dates.
#'@param when A time stamp of the date when the data has been loaded. This
#' parameter is also found in Load() and Start() functions output.
#'@param source_files A vector of character strings with complete paths to all
#' the found files involved in loading the data.
#'@param \dots Additional elements to be added in the object. They will be
#' stored in the end of 'attrs' element. Multiple elements are accepted.
#'
#'@return The function returns an object of class 's2dv_cube' with the following
#' elements in the structure:\cr
#'\itemize{
#' \item{'data', array with named dimensions;}
#' \item{'dims', named vector of the data dimensions;}
#' \item{'coords', list of named vectors with the coordinates corresponding to
#' the dimensions of the data parameter;}
#' \item{'attrs', named list with elements:
#' \itemize{
#' \item{'Dates', array with named temporal dimensions of class 'POSIXct' from
#' time values in the data;}
#' \item{'Variable', has the following components:
#' \itemize{
#' \item{'varName', with the short name of the loaded variable as specified
#' in the parameter 'var';}
#' \item{'metadata', named list of elements with variable metadata.
#' They can be from coordinates variables (e.g. longitude) or
#' main variables (e.g. 'var');}
#' }
#' }
#' \item{'Datasets', character strings indicating the names of the dataset;}
#' \item{'source_files', a vector of character strings with complete paths
#' to all the found files involved in loading the data;}
#' \item{'when', a time stamp of the date issued by the Start() or Load()
#' call to obtain the data;}
#' \item{'load_parameters', it contains the components used in the
#' arguments to load the data from Start() or Load() functions.}
#' }
#' }
#'}
#'
#'@seealso \code{\link[s2dv]{Load}} and \code{\link{CST_Start}}
#'@examples
#'exp_original <- 1:100
#'dim(exp_original) <- c(lat = 2, time = 10, lon = 5)
#'exp1 <- s2dv_cube(data = exp_original)
#'class(exp1)
#'coords <- list(lon = seq(-10, 10, 5), lat = c(45, 50))
#'exp2 <- s2dv_cube(data = exp_original, coords = coords)
#'class(exp2)
#'metadata <- list(tas = list(level = '2m'))
#'exp3 <- s2dv_cube(data = exp_original, coords = coords,
#' varName = 'tas', metadata = metadata)
#'class(exp3)
#'Dates = as.POSIXct(paste0(rep("01", 10), rep("01", 10), 1990:1999), format = "%d%m%Y")
#'dim(Dates) <- c(time = 10)
#'exp4 <- s2dv_cube(data = exp_original, coords = coords,
#' varName = 'tas', metadata = metadata,
#' Dates = Dates)
#'class(exp4)
#'exp5 <- s2dv_cube(data = exp_original, coords = coords,
#' varName = 'tas', metadata = metadata,
#' Dates = Dates, when = "2019-10-23 19:15:29 CET")
#'class(exp5)
#'exp6 <- s2dv_cube(data = exp_original, coords = coords,
#' varName = 'tas', metadata = metadata,
#' Dates = Dates,
#' when = "2019-10-23 19:15:29 CET",
#' source_files = c("/path/to/file1.nc", "/path/to/file2.nc"))
#'class(exp6)
#'exp7 <- s2dv_cube(data = exp_original, coords = coords,
#' varName = 'tas', metadata = metadata,
#' Dates = Dates,
#' when = "2019-10-23 19:15:29 CET",
#' source_files = c("/path/to/file1.nc", "/path/to/file2.nc"),
#' Datasets = list(
#' exp1 = list(InitializationsDates = list(Member_1 = "01011990",
#' Members = "Member_1"))))
#'class(exp7)
#'dim(exp_original) <- c(dataset = 1, member = 1, time = 10, lat = 2, lon = 5)
#'exp8 <- s2dv_cube(data = exp_original, coords = coords,
#' varName = 'tas', metadata = metadata,
#' Dates = Dates, original_dates = Dates)
#'class(exp8)
#'@export
s2dv_cube <- function(data, coords = NULL, varName = NULL, metadata = NULL,
Datasets = NULL, Dates = NULL, when = NULL,
source_files = NULL, ...) {
# data
if (is.null(data) | !is.array(data) | is.null(names(dim(data)))) {
stop("Parameter 'data' must be an array with named dimensions.")
}
# dims
dims <- dim(data)
## coords
if (!is.null(coords)) {
if (!all(names(coords) %in% names(dims))) {
coords <- coords[-which(!names(coords) %in% names(dims))]
}
for (i_coord in names(dims)) {
if (i_coord %in% names(coords)) {
if (length(coords[[i_coord]]) != dims[i_coord]) {
warning(paste0("Coordinate '", i_coord, "' has different lenght as ",
"its dimension and it will not be used."))
coords[[i_coord]] <- 1:dims[i_coord]
attr(coords[[i_coord]], 'indices') <- TRUE
} else {
attr(coords[[i_coord]], 'indices') <- FALSE
}
} else {
warning(paste0("Coordinate '", i_coord, "' is not provided ",
"and it will be set as index in element coords."))
coords[[i_coord]] <- 1:dims[i_coord]
attr(coords[[i_coord]], 'indices') <- TRUE
}
}
dim(coords[[i_coord]]) <- NULL
} else {
coords <- sapply(names(dims), function(x) 1:dims[x])
for (i in 1:length(coords)) {
attr(coords[[i]], "indices") <- TRUE
}
}
## attrs
attrs <- list()
# Dates
if (is.null(Dates)) {
warning("Parameter 'Dates' is not provided so the metadata ",
"of 's2dv_cube' object will be incomplete.")
attrs$Dates <- NULL
} else if (length(Dates) == 1 & inherits(Dates[1], "POSIXct")) {
attrs$Dates <- Dates
} else {
if (!is.array(Dates)) {
warning("Parameter 'Dates' must be an array with named time dimensions.")
} else {
if (is.null(names(dim(Dates)))) {
warning("Parameter 'Dates' must have dimension names.")
} else if (!all(names(dim(Dates)) %in% names(dims))) {
warning("Parameter 'Dates' must have the corresponding time dimension names in 'data'.")
} else {
if (inherits(Dates[1], "POSIXct")) {
attrs$Dates <- Dates
} else {
warning("Parameter 'Dates' must be of class 'POSIXct'.")
}
}
}
}
# Variable
if (is.null(varName)) {
warning("Parameter 'varName' is not provided so the metadata ",
"of 's2dv_cube' object will be incomplete.")
attrs$Variable$varName <- NULL
} else {
if (!is.character(varName)) {
warning("Parameter 'varName' must be a character.")
} else {
attrs$Variable$varName <- varName
}
}
if (is.null(metadata)) {
warning("Parameter 'metadata' is not provided so the metadata ",
"of 's2dv_cube' object will be incomplete.")
attrs$Variable$metadata <- NULL
} else {
if (!is.list(metadata)) {
metadata <- list(metadata)
}
attrs$Variable$metadata <- metadata
}
# Datasets
if (!is.null(Datasets)) {
attrs$Datasets <- Datasets
}
# when
if (!is.null(when)) {
attrs$when <- when
}
# source_files
if (!is.null(source_files)) {
attrs$source_files <- source_files
}
# dots
dots <- list(...)
if (length(dots) != 0) {
for (i_arg in 1:length(dots)) {
attrs[[names(dots)[[i_arg]]]] <- dots[[i_arg]]
}
}
## object
object <- list(data = data, dims = dims, coords = coords, attrs = attrs)
class(object) <- 's2dv_cube'
return(object)
}
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/s2dv_cube.R
|
#'Sample Of Experimental And Observational Climate Data In Function Of Longitudes And Latitudes
#'
#'This sample data set contains gridded seasonal forecast and corresponding
#'observational data from the Copernicus Climate Change ECMWF-System 5 forecast
#'system, and from the Copernicus Climate Change ERA-5 reconstruction.
#'Specifically, for the 'tas' (2-meter temperature) variable, for the 15 first
#'forecast ensemble members, monthly averaged, for the 3 first forecast time
#'steps (lead months 1 to 4) of the November start dates of 2000 to 2005, for
#'the Mediterranean region (27N-48N, 12W-40E). The data was generated on (or
#'interpolated onto, for the reconstruction) a rectangular regular grid of size
#'360 by 181.
#'
#' It is recommended to use the data set as follows:
#'\preformatted{
#' require(zeallot)
#' c(exp, obs) %<-% CSTools::lonlat_temp
#'}
#'
#'The `CST_Load` call used to generate the data set in the infrastructure of
#'the Earth Sciences Department of the Barcelona Supercomputing Center is shown
#'next. Note that `CST_Load` internally calls `s2dv::Load`, which would require
#'a configuration file (not provided here) expressing the distribution of the
#''system5c3s' and 'era5' NetCDF files in the file system.
#'\preformatted{
#' library(CSTools)
#' require(zeallot)
#'
#' startDates <- c('20001101', '20011101', '20021101',
#' '20031101', '20041101', '20051101')
#'
#' lonlat_temp <-
#' CST_Load(
#' var = 'tas',
#' exp = 'system5c3s',
#' obs = 'era5',
#' nmember = 15,
#' sdates = startDates,
#' leadtimemax = 3,
#' latmin = 27, latmax = 48,
#' lonmin = -12, lonmax = 40,
#' output = 'lonlat',
#' nprocs = 1
#' )
#'}
#'
#' @name lonlat_temp
#' @docType data
#' @author Nicolau Manubens \email{[email protected]}
#' @keywords data
NULL
#'Sample Of Experimental Precipitation Data In Function Of Longitudes And Latitudes
#'
#'This sample data set contains a small cutout of gridded seasonal precipitation
#'forecast data from the Copernicus Climate Change ECMWF-System 5 forecast
#'system, to be used to demonstrate downscaling. Specifically, for the 'pr'
#'(precipitation) variable, for the first 6 forecast ensemble members, daily
#'values, for all 31 days in March following the forecast starting dates in
#'November of years 2010 to 2012, for a small 4x4 pixel cutout in a region in
#'the North-Western Italian Alps (44N-47N, 6E-9E). The data resolution is 1
#'degree.
#'
#'The `CST_Load` call used to generate the data set in the infrastructure of
#'the Marconi machine at CINECA is shown next, working on files which were
#'extracted from forecast data available in the MEDSCOPE internal archive.
#'
#'\preformatted{
#' library(CSTools)
#' infile <- list(path = paste0('/esarchive/exp/ecmwf/system5c3s/daily_mean/',
#' '$VAR_NAME$_s0-24h/$VAR_NAME$_$START_DATE$.nc'))
#' lonlat_prec <- CST_Load('prlr', exp = list(infile), obs = NULL,
#' sdates = c('20101101', '20111101', '20121101'),
#' leadtimemin = 121, leadtimemax = 151,
#' latmin = 44, latmax = 47,
#' lonmin = 6, lonmax = 9,
#' nmember = 6,
#' storefreq = "daily", sampleperiod = 1,
#' output = "lonlat"
#' )
#'}
#'
#' @name lonlat_prec
#' @docType data
#' @author Jost von Hardenberg \email{[email protected]}
#' @author An-Chi Ho \email{[email protected]}
#' @keywords data
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/sample_data.R
|
#'Sample Of Experimental And Observational Climate Data In Function Of Longitudes And Latitudes with Start
#'
#'This sample data set contains gridded seasonal forecast and corresponding
#'observational data from the Copernicus Climate Change ECMWF-System 5 forecast
#'system, and from the Copernicus Climate Change ERA-5 reconstruction.
#'Specifically, for the 'tas' (2-meter temperature) variable, for the 15 first
#'forecast ensemble members, monthly averaged, for the 3 first forecast time
#'steps (lead months 1 to 4) of the November start dates of 2000 to 2005, for
#'the Mediterranean region (27N-48N, 12W-40E). The data was generated on (or
#'interpolated onto, for the reconstruction) a rectangular regular grid of size
#'360 by 181.
#'
#'The `CST_Start` call used to generate the data set in the infrastructure of
#'the Earth Sciences Department of the Barcelona Supercomputing Center is shown
#'next. Note that `CST_Start` internally calls `startR::Start` and then uses
#'`as.s2dv_cube` that converts the `startR_array` into `s2dv_cube`.
#'\preformatted{
#' lonlat_temp_st <- NULL
#' repos_exp <- paste0('/esarchive/exp/ecmwf/system5c3s/monthly_mean/',
#' '$var$_f6h/$var$_$sdate$.nc')
#' sdates <- sapply(2000:2005, function(x) paste0(x, '1101'))
#' lonmax <- 40
#' lonmin <- -12
#' latmax <- 48
#' latmin <- 27
#' lonlat_temp_st$exp <- CST_Start(dataset = repos_exp,
#' var = 'tas',
#' member = startR::indices(1:15),
#' sdate = sdates,
#' ftime = startR::indices(1:3),
#' lat = startR::values(list(latmin, latmax)),
#' lat_reorder = startR::Sort(decreasing = TRUE),
#' lon = startR::values(list(lonmin, lonmax)),
#' lon_reorder = startR::CircularSort(0, 360),
#' synonims = list(lon = c('lon', 'longitude'),
#' lat = c('lat', 'latitude'),
#' member = c('member', 'ensemble'),
#' ftime = c('ftime', 'time')),
#' return_vars = list(lat = NULL,
#' lon = NULL, ftime = 'sdate'),
#' retrieve = TRUE)
#'
#' dates <- c(paste0(2000, c(11, 12)), paste0(2001, c('01', 11, 12)),
#' paste0(2002, c('01', 11, 12)), paste0(2003, c('01', 11, 12)),
#' paste0(2004, c('01', 11, 12)), paste0(2005, c('01', 11, 12)), 200601)
#' dates <- sapply(dates, function(x) {paste0(x, '01')})
#' dates <- as.POSIXct(dates, format = '%Y%m%d', 'UTC')
#' dim(dates) <- c(ftime = 3, sdate = 6)
#'
#' dates <- t(dates)
#' names(dim(dates)) <- c('sdate', 'ftime')
#'
#' path.obs <- '/esarchive/recon/ecmwf/era5/monthly_mean/$var$_f1h-r1440x721cds/$var$_$date$.nc'
#' lonlat_temp_st$obs <- CST_Start(dataset = path.obs,
#' var = 'tas',
#' date = unique(format(dates, '%Y%m')),
#' ftime = startR::values(dates),
#' ftime_across = 'date',
#' ftime_var = 'ftime',
#' merge_across_dims = TRUE,
#' split_multiselected_dims = TRUE,
#' lat = startR::values(list(latmin, latmax)),
#' lat_reorder = startR::Sort(decreasing = TRUE),
#' lon = startR::values(list(lonmin, lonmax)),
#' lon_reorder = startR::CircularSort(0, 360),
#' synonims = list(lon = c('lon', 'longitude'),
#' lat = c('lat', 'latitude'),
#' ftime = c('ftime', 'time')),
#' transform = startR::CDORemapper,
#' transform_extra_cells = 2,
#' transform_params = list(grid = 'r360x181',
#' method = 'conservative'),
#' transform_vars = c('lat', 'lon'),
#' return_vars = list(lon = NULL,
#' lat = NULL,
#' ftime = 'date'),
#' retrieve = TRUE)
#'
#' library(lubridate)
#' dates_exp <- lonlat_temp_st$exp$attrs$Dates
#' lonlat_temp_st$exp$attrs$Dates <- floor_date(ymd_hms(dates_exp), unit = "months")
#' dim(lonlat_temp_st$exp$attrs$Dates) <- dim(dates_exp)
#'
#' dates_obs <- lonlat_temp_st$obs$attrs$Dates
#' lonlat_temp_st$obs$attrs$Dates <- floor_date(ymd_hms(dates_obs), unit = "months")
#' dim(lonlat_temp_st$obs$attrs$Dates) <- dim(dates_obs)
#'
#'}
#'
#'@name lonlat_temp_st
#'@docType data
#'@author Nicolau Manubens \email{[email protected]}
#'@keywords data
NULL
#'Sample Of Experimental Precipitation Data In Function Of Longitudes And Latitudes with Start
#'
#'This sample data set contains a small cutout of gridded seasonal precipitation
#'forecast data from the Copernicus Climate Change ECMWF-System 5 forecast
#'system, to be used to demonstrate downscaling. Specifically, for the 'pr'
#'(precipitation) variable, for the first 6 forecast ensemble members, daily
#'values, for all 31 days in March following the forecast starting dates in
#'November of years 2010 to 2012, for a small 4x4 pixel cutout in a region in
#'the North-Western Italian Alps (44N-47N, 6E-9E). The data resolution is 1
#'degree.
#'
#'The `CST_Start` call used to generate the data set in the infrastructure of
#'the Marconi machine at CINECA is shown next, working on files which were
#'extracted from forecast data available in the MEDSCOPE internal archive.
#'
#'\preformatted{
#' path <- paste0('/esarchive/exp/ecmwf/system5c3s/daily_mean/',
#' '$var$_s0-24h/$var$_$sdate$.nc')
#' sdates = c('20101101', '20111101', '20121101')
#' latmin <- 44
#' latmax <- 47
#' lonmin <- 6
#' lonmax <- 9
#'
#' lonlat_prec_st <- CST_Start(dataset = path,
#' var = 'prlr',
#' member = startR::indices(1:6),
#' sdate = sdates,
#' ftime = 121:151,
#' lat = startR::values(list(latmin, latmax)),
#' lat_reorder = startR::Sort(decreasing = TRUE),
#' lon = startR::values(list(lonmin, lonmax)),
#' lon_reorder = startR::CircularSort(0, 360),
#' synonims = list(lon = c('lon', 'longitude'),
#' lat = c('lat', 'latitude'),
#' ftime = c('time', 'ftime'),
#' member = c('member', 'ensemble')),
#' return_vars = list(ftime = 'sdate',
#' lon = NULL, lat = NULL),
#' retrieve = TRUE)
#'}
#'
#'@name lonlat_prec_st
#'@docType data
#'@author Jost von Hardenberg \email{[email protected]}
#'@author An-Chi Ho \email{[email protected]}
#'@keywords data
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/sample_data_st.R
|
# Function to permute arrays of non-atomic elements (e.g. POSIXct)
.aperm2 <- function(x, new_order) {
old_dims <- dim(x)
attr_bk <- attributes(x)
if ('dim' %in% names(attr_bk)) {
attr_bk[['dim']] <- NULL
}
if (is.numeric(x)) {
x <- aperm(x, new_order)
} else {
y <- array(1:length(x), dim = dim(x))
y <- aperm(y, new_order)
x <- x[as.vector(y)]
}
dim(x) <- old_dims[new_order]
attributes(x) <- c(attributes(x), attr_bk)
x
}
# verbose-only printing function
.printv <- function(value, verbosity = TRUE) {
if (verbosity) {
print(value)
}
}
# normalize a time series
.standardize <- function(timeseries) {
out <- (timeseries - mean(timeseries, na.rm = T)) / sd(timeseries, na.rm = T)
return(out)
}
.selbox <- function(lon, lat, xlim = NULL, ylim = NULL) {
if (!is.null(xlim)) {
# This transforms c(-20, -10) to c(340, 350) but c(-20, 10) is unchanged
# Bring them all to the same units in the 0:360 range
xlim1 <- xlim[1] %% 360
xlim2 <- xlim[2] %% 360
lonm <- lon %% 360
if (lonm[1] > tail(lonm, 1)) {
lonm <- lon
}
if (xlim1 > xlim2) {
# If box crosses 0
ilonsel <- (lonm >= xlim1) | (lonm <= xlim2)
} else {
ilonsel <- (lonm >= xlim1) & (lonm <= xlim2)
}
if (!any(ilonsel)) {
stop("No intersection between longitude bounds and data domain.")
}
} else {
ilonsel <- 1:length(lon)
}
if (!is.null(ylim)) {
ilatsel <- (lat >= ylim[1]) & (lat <= ylim[2])
} else {
ilatsel <- 1:length(lat)
}
return(list(ilon = ilonsel, ilat = ilatsel))
}
# produce a 2d matrix of area weights
.area.weight <- function(ics, ipsilon, root = T) {
field <- array(NA, dim = c(length(ics), length(ipsilon)))
if (root == T) {
for (j in 1:length(ipsilon)) {
field[, j] <- sqrt(cos(pi / 180 * ipsilon[j]))
}
}
if (root == F) {
for (j in 1:length(ipsilon)) {
field[, j] <- cos(pi / 180 * ipsilon[j])
}
}
return(field)
}
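# Illustrative usage (not run): area weights proportional to sqrt(cos(latitude))
# on a grid with 4 longitudes and 3 latitudes:
#   w <- .area.weight(ics = 1:4, ipsilon = c(-30, 0, 30), root = TRUE)
#   dim(w)  # 4 3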
#Draws Color Bars for Categories
#A wrapper of s2dv::ColorBar to generate multiple color bars for different
#categories, and each category has different color set.
GradientCatsColorBar <- function(nmap, brks = NULL, cols = NULL, vertical = TRUE, subsampleg = NULL,
bar_limits, var_limits = NULL,
triangle_ends = NULL, col_inf = NULL, col_sup = NULL, plot = TRUE,
draw_separators = FALSE,
bar_titles = NULL, title_scale = 1, label_scale = 1, extra_margin = rep(0, 4),
...) {
# bar_limits: a vector of 2 or a list
if (!is.list(bar_limits)) {
if (!is.numeric(bar_limits) || length(bar_limits) != 2) {
stop("Parameter 'bar_limits' must be a numeric vector of length 2 or a list containing that.")
}
# turn into list
bar_limits <- rep(list(bar_limits), nmap)
} else {
if (any(!sapply(bar_limits, is.numeric)) || any(sapply(bar_limits, length) != 2)) {
stop("Parameter 'bar_limits' must be a numeric vector of length 2 or a list containing that.")
}
if (length(bar_limits) != nmap) {
stop("Parameter 'bar_limits' must have the length of 'nmap'.")
}
}
# Check brks
if (!is.list(brks)) {
if (is.null(brks)) {
brks <- 5
} else if (!is.numeric(brks)) {
stop("Parameter 'brks' must be a numeric vector.")
}
# Turn it into list
brks <- rep(list(brks), nmap)
} else {
if (length(brks) != nmap) {
stop("Parameter 'brks' must have the length of 'nmap'.")
}
}
for (i_map in 1:nmap) {
if (length(brks[[i_map]]) == 1) {
brks[[i_map]] <- seq(from = bar_limits[[i_map]][1], to = bar_limits[[i_map]][2], length.out = brks[[i_map]])
}
}
# Check cols
col_sets <- list(c("#A1D99B", "#74C476", "#41AB5D", "#238B45"),
c("#6BAED6FF", "#4292C6FF", "#2171B5FF", "#08519CFF"),
c("#FFEDA0FF", "#FED976FF", "#FEB24CFF", "#FD8D3CFF"),
c("#FC4E2AFF", "#E31A1CFF", "#BD0026FF", "#800026FF"),
c("#FCC5C0", "#FA9FB5", "#F768A1", "#DD3497"))
if (is.null(cols)) {
if (length(col_sets) >= nmap) {
chosen_sets <- 1:nmap
chosen_sets <- chosen_sets + floor((length(col_sets) - length(chosen_sets)) / 2)
} else {
chosen_sets <- array(1:length(col_sets), nmap)
}
cols <- col_sets[chosen_sets]
# Set triangle_ends, col_sup, col_inf
#NOTE: The "col" input of ColorBar() later is not NULL (since we determine it here)
# so ColorBar() cannot decide these parameters for us.
#NOTE: Here, col_inf and col_sup are prior to triangle_ends, which is consistent with ColorBar().
#TODO: Make triangle_ends a list
if (is.null(triangle_ends)) {
if (!is.null(var_limits)) {
triangle_ends <- c(FALSE, FALSE)
#TODO: bar_limits is a list
if (bar_limits[1] >= var_limits[1] | !is.null(col_inf)) {
triangle_ends[1] <- TRUE
if (is.null(col_inf)) {
col_inf <- lapply(cols, head, 1)
cols <- lapply(cols, '[', -1)
}
}
if (bar_limits[2] < var_limits[2] | !is.null(col_sup)) {
triangle_ends[2] <- TRUE
if (is.null(col_sup)) {
col_sup <- lapply(cols, tail, 1)
cols <- lapply(cols, '[', -length(cols[[1]]))
}
}
} else {
triangle_ends <- c(!is.null(col_inf), !is.null(col_sup))
}
} else { # triangle_ends has values
if (triangle_ends[1] & is.null(col_inf)) {
col_inf <- lapply(cols, head, 1)
cols <- lapply(cols, '[', -1)
}
if (triangle_ends[2] & is.null(col_sup)) {
col_sup <- lapply(cols, tail, 1)
cols <- lapply(cols, '[', -length(cols[[1]]))
}
}
} else {
if (!is.list(cols)) {
stop("Parameter 'cols' must be a list of character vectors.")
}
if (!all(sapply(cols, is.character))) {
stop("Parameter 'cols' must be a list of character vectors.")
}
if (length(cols) != nmap) {
stop("Parameter 'cols' must be a list of the same length as 'nmap'.")
}
}
for (i_map in 1:length(cols)) {
if (length(cols[[i_map]]) != (length(brks[[i_map]]) - 1)) {
cols[[i_map]] <- grDevices::colorRampPalette(cols[[i_map]])(length(brks[[i_map]]) - 1)
}
}
# Check bar_titles
if (is.null(bar_titles)) {
if (nmap == 3) {
bar_titles <- c("Below normal (%)", "Normal (%)", "Above normal (%)")
} else if (nmap == 5) {
bar_titles <- c("Low (%)", "Below normal (%)",
"Normal (%)", "Above normal (%)", "High (%)")
} else {
bar_titles <- paste0("Cat. ", 1:nmap, " (%)")
}
}
if (plot) {
for (k in 1:nmap) {
#TODO: Add s2dv::
ColorBar(brks = brks[[k]], cols = cols[[k]], vertical = FALSE, subsampleg = subsampleg,
bar_limits = bar_limits[[k]], #var_limits = var_limits,
triangle_ends = triangle_ends, col_inf = col_inf[[k]], col_sup = col_sup[[k]], plot = TRUE,
draw_separators = draw_separators,
title = bar_titles[[k]], title_scale = title_scale,
label_scale = label_scale, extra_margin = extra_margin)
}
} else {
return(list(brks = brks, cols = cols, col_inf = col_inf, col_sup = col_sup))
}
}
.KnownLonNames <- function() {
known_lon_names <- c('lon', 'lons', 'longitude', 'x', 'i', 'nav_lon')
}
.KnownLatNames <- function() {
known_lat_names <- c('lat', 'lats', 'latitude', 'y', 'j', 'nav_lat')
}
.KnownTimeNames <- function() {
known_time_names <- c('time', 'ftime', 'sdate', 'sdates', 'syear', 'sweek', 'sday', 'leadtimes')
}
.KnownForecastTimeNames <- function() {
known_time_names <- c('time', 'ftime', 'ltime', 'leadtimes')
}
.KnownStartDateNames <- function() {
known_time_names <- c('sdate', 'sdates', 'syear', 'sweek', 'sday')
}
.KnownMemberNames <- function() {
known_time_names <- c('memb', 'member', 'members', 'ensemble', 'ensembles')
}
.isNullOb <- function(x) is.null(x) | all(sapply(x, is.null))
.rmNullObs <- function(x) {
x <- base::Filter(Negate(.isNullOb), x)
lapply(x, function(x) if (is.list(x)) .rmNullObs(x) else x)
}
# Definition of a global variable to store the warning message used in Calibration
warning_shown <- FALSE
|
/scratch/gouwar.j/cran-all/cranData/CSTools/R/zzz.R
|
## ---- echo = FALSE------------------------------------------------------------
knitr::opts_chunk$set(eval = FALSE)
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/Analogs_vignette.R
|
---
title: "Analogs based on large scale for downscaling"
author: "M. Carmen Alvarez-Castro and M. del Mar Chaves-Montero (CMCC, Italy)"
date: "November 2020"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Analogs}
%\usepackage[utf8]{inputenc}
---
<!--
```{r, echo = FALSE}
knitr::opts_chunk$set(eval = FALSE)
```
-->
## Downscaling seasonal forecast data using Analogs
In this example, seasonal temperature forecasts initialized in October will be used to downscale temperature over the Balearic Islands with the CMCC System 3 seasonal forecasting system from the Euro-Mediterranean Center on Climate Change (CMCC), by computing analogs of sea level pressure (SLP) over a larger region (North Atlantic). The first step is to load the data to be downscaled (i.e. CMCC) over the large region (i.e. North Atlantic) for temperature (predictand) and SLP (predictor), together with the same variables and region from a higher resolution dataset (ERA5). The second step is to interpolate the model onto the ERA5 resolution. The third step is to find the analogs using one of the three criteria. The fourth step is to obtain the downscaled dataset over the selected region (local scale, in this case the Balearic Islands).
## 1. Introduction of the function
For instance, if we want to perform a temperature downscaling over the Balearic Islands for October, we will get a daily series of temperature with 1 analog per day, the best analog. How do we define the best analog for a certain day? This function offers three options for that:
(1) The day with the minimum Euclidean distance in a large scale field: using e.g. pressure or geopotential height as variables and the North Atlantic as the large scale region. The atmospheric circulation pattern in the North Atlantic (LargeScale) plays an important role in the climate of Spain (LocalScale). The function will find the day in the database (obs, SLP in ERA5) with the atmospheric circulation pattern most similar to the day of interest (exp, SLP in the model). Once the date of the best analog is found, the function takes the temperature associated to that day (obsVar, tas in ERA5), subset over the region of interest (Balearic Islands).
(2) Same as (1), but in this case we will search for analogs at the local scale (Balearic Islands) instead of the large scale (North Atlantic). Once the date of the best analog is found, the function takes the temperature associated to that day (obsVar, t2m in ERA5), subset over the region of interest (Balearic Islands).
(3) Same as (2), but here we will search for the analogs with the highest correlation at the local scale (Balearic Islands) and, instead of using SLP, we will use t2m.
In particular the _Analogs Method_ uses a nonlinear approach that follows (**Analogs**; Yiou et al. 2013).
An efficient implementation of Analogs is provided for CSTools by the `CST_Analogs()` function.
Two datasets are used to illustrate how to use the function. The first one can be entirely run by users since it uses the data samples provided along with the package. The second one uses data that needs to be downloaded or requested.
### Example 1: using data from CSTools
After loading the **CSTools** package in the R session, the user will have access to the sample data created using `CST_Start`: `lonlat_temp_st` and `lonlat_prec_st`.
*Note: If it is the first time using CSTools, install the package by running `install.packages("CSTools")`.*
```
library(CSTools)
```
After exploring the data, the user can directly run the Analogs downscaling method using the 'Large_dist' criterion:
```
class(lonlat_temp_st$exp)
names(lonlat_temp_st$obs)
dim(lonlat_temp_st$obs$data)
dim(lonlat_temp_st$exp$data)
lonlat_temp_st$exp$attrs$Dates
lonlat_temp_st$obs$attrs$Dates
```
There are 15 ensemble members available in the `exp` data set, 6 starting dates and 3 forecast times, which refer to monthly values during the 3 months following the November starting dates of the years 2000, 2001, 2002, 2003, 2004 and 2005.
```
exp1 <- CST_Subset(x = lonlat_temp_st$exp, along = c('sdate', 'ftime'), indices = list(1, 1))
down_1 <- CST_Analogs(expL = exp1, obsL = lonlat_temp_st$obs)
exp2 <- CST_Subset(x = lonlat_temp_st$exp, along = c('sdate', 'ftime'), indices = list(1, 2))
down_2 <- CST_Analogs(expL = exp2, obsL = lonlat_temp_st$obs)
exp3 <- CST_Subset(x = lonlat_temp_st$exp, along = c('sdate', 'ftime'), indices = list(1, 3))
down_3 <- CST_Analogs(expL = exp3, obsL = lonlat_temp_st$obs)
```
The visualization of the first three time steps for the ensemble mean of the forecast initialized on the 1st of November 2000 can be done using the package **s2dv**:
```
library(s2dv)
var = list(MeanDims(down_1$data, 'member'),
MeanDims(down_2$data, 'member'),
MeanDims(down_3$data, 'member'))
PlotLayout(PlotEquiMap, c('lat', 'lon'),
var = var,
nrow = 1, ncol = 3,
lon = down_1$coords$lon,
lat = down_1$coords$lat,
filled.continents = FALSE,
titles = c("2000-11-01", "2000-12-01", "2001-01-01"), units = 'T(K)',
toptitle = 'Analogs sdate November 2000',
width = 10, height = 4)
```

The user can also request extra analogs and their associated information:
```
down <- CST_Analogs(expL = exp1, obsL = lonlat_temp_st$obs,
nAnalogs = 2, AnalogsInfo = TRUE)
```
Again, the user can explore the object `down`, which is of class 's2dv_cube'. In this case, the element 'data' contains the metrics and the dates corresponding to the observed fields:
```
class(down)
names(down$data)
dim(down$data$fields)
dim(down$data$metric)
dim(down$data$dates)
down$data$dates[1,15]
```
The last command shows that the best analog for ensemble member 15, corresponding to the 1st of November 2000, is the 1st of November 2004:
```
PlotLayout(PlotEquiMap, c('lat', 'lon'), var = list(down$data$fields[1, , , 15],
lonlat_temp_st$obs$data[1, 1, 5, 1, , ]), nrow = 1, ncol = 2,
lon = down$coords$lon, lat = down$coords$lat, filled.continents = FALSE,
titles = c("Downscaled 2000-11-01", "Observed 2004-11-01"), units = 'T(K)',
width = 7, height = 4)
```

As expected, they are exactly the same.
### Example 2: Load data using CST_Start
In this example, the spatial field of a single forecast day will be downscaled using Analogs. This will illustrate how to use `CST_Start` to retrieve observations separately from simulations. To explore other options, see other CSTools vignettes as well as the `CST_Start` documentation and the [startR](https://CRAN.R-project.org/package=startR) package.
The simulations available for the desired model cover the period 1993-2016. Here, the 15th of October 2000 (for the simulation initialized on the 1st of October 2000) will be downscaled. ERA5 is available from 1979 to the present day. For this example we will just use the October days from 2000 to 2006, so the starting dates can be defined by running the following lines:
```
start <- as.Date(paste(2000, 10, "01", sep = ""), "%Y%m%d")
end <- as.Date(paste(2006, 10, "01", sep = ""), "%Y%m%d")
dates <- as.POSIXct(seq(start, end, by = "year"), format = '%Y%m%d', 'UTC')
```
Using the `CST_Start` function from the **CSTools** package, the data available in our data store can be loaded. The following lines show how this function can be used. The experimental datasets are interpolated onto the ERA5 grid by specifying the 'grid' parameter, while ERA5 doesn't need to be interpolated. While dimension 'ftime' is set to 1 for the experimental dataset, it is set to 31 for the observations, returning the daily observations for October for the years requested in 'sdate' (2000-2006). To run this recipe, download the data from downloads.cmcc.bo.it/d_chaves/ANALOGS/data_for_Analogs.Rdat or ask carmen.alvarez-castro at cmcc.it or nuria.perez at bsc.es.
```r
exp_path <- paste0('/esarchive/exp/ecmwf/system4_m1/daily_mean/',
'$var$_f6h/$var$_$sdate$.nc')
obs_path <- paste0('/esarchive/recon/ecmwf/era5/daily_mean/',
'$var$_f1h-r1440x721cds/$var$_$sdate$.nc')
date_exp <- '20001001'
lonmax <- 50
lonmin <- -80
latmax <- 70
latmin <- 22
expTAS <- CST_Start(dataset = exp_path,
var = 'tas',
member = startR::indices(1:15),
sdate = '20001001',
ftime = startR::indices(15),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_params = list(grid = 'r1440x721',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
expPSL <- CST_Start(dataset = exp_path,
var = 'psl',
member = startR::indices(1:15),
sdate = '20001001',
ftime = startR::indices(15),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_params = list(grid = 'r1440x721',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
obsTAS <- CST_Start(dataset = obs_path,
var = 'tas',
sdate = unique(format(dates, '%Y%m')),
ftime = startR::indices(1:31),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
obsPSL <- CST_Start(dataset = obs_path,
var = 'psl',
sdate = unique(format(dates, '%Y%m')),
ftime = startR::indices(1:31),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
```
The 's2dv_cube' objects `expTAS`,`obsTAS`, `expPSL` and `obsPSL` are now loaded in the R environment. The first two elements correspond to the experimental and observed data for temperature and the other two are the equivalent for the SLP data.
Loading the data using `CST_Start` returns two lists, one for the experimental data and another for the observed data, with the same elements and compatible dimensions of the data element:
```
dim(expTAS$data)
# dataset var member sdate ftime lat lon
# 1 1 15 1 1 193 521
dim(obsTAS$data)
# dataset var sdate ftime lat lon
# 1 1 7 31 193 521
```
#### Two variables and criteria Large [scale] Distance:
The aim is to downscale the temperature field of the simulation for the 15th of October 2000 by looking at the pressure pattern:
```
down1 <- CST_Analogs(expL = expPSL, obsL = obsPSL, AnalogsInfo = TRUE,
criteria = "Large_dist", nAnalogs = 3,
obsVar = obsTAS, expVar = expTAS)
```
Some warnings may appear with information about undefined parameters. It is possible to explore the information in the object `down1` by running:
```
names(down1$data)
dim(down1$data$field)
# nAnalogs lat lon member
# 3 193 521 15
dim(down1$data$dates)
# nAnalogs member
# 3 15
down1$data$dates[1,1]
# "07-10-2005"
```
Now, we can visualize the output:
```
PlotLayout(PlotEquiMap, c('lat', 'lon'),
var = list(expPSL$data[1, 1, 1, 1, 1, , ], obsPSL$data[1, 1, 1, 15, , ],
obsPSL$data[1, 1, 6, 7, , ]), lon = obsPSL$coords$lon,
lat = obsPSL$coords$lat, filled.continents = FALSE,
titles = c('Exp PSL 15-10-2000','Obs PSL 15-10-2000',
'Obs PSL 7-10-2005'),
toptitle = 'First member', ncol = 3, nrow = 1, width = 10, height = 4)
PlotLayout(PlotEquiMap, c('lat', 'lon'), var = list(
expTAS$data[1, 1, 1, 1, 1, , ], obsTAS$data[1, 1, 1, 15, , ],
down1$data$field[1, , , 1], obsTAS$data[1, 1, 6, 7, , ]),
lon = obsTAS$coords$lon, lat = obsTAS$coords$lat, filled.continents = FALSE,
titles = c('Exp TAS 15-10-2000', 'Obs TAS 15-10-2000',
'Analog TAS 15-10-2000', 'Obs TAS 7-10-2005'),
ncol = 2, nrow = 2)
```

The previous figure shows the PSL inputs and the observed PSL pattern for the 7th of October 2005, which is the best analog. *Note: Analogs automatically excludes from the observations the day that is being downscaled.*
The next figure shows the input temperature fields and the resulting analog, which corresponds to the temperature of the 7th of October 2005:

#### Two variables and criteria Local [scale] Distance:
The aim is to downscale the temperature simulation of the 15th of October 2000 by considering the pressure spatial pattern on the large scale and the local pressure pattern over a given region. Therefore, a region is defined by providing maximum and minimum latitude and longitude coordinates, in this case selecting the Balearic Islands:
```
region <- c(lonmin = 0, lonmax = 5, latmin = 38.5, latmax = 40.5)
expPSL$data <- expPSL$data[1, 1, 1, 1, 1, , ]
expTAS$data <- expTAS$data[1, 1, 1, 1, 1, , ]
down2 <- CST_Analogs(expL = expPSL, obsL = obsPSL, AnalogsInfo = TRUE,
criteria = "Local_dist", # nAnalogs = 50,
obsVar = obsTAS, expVar = expTAS,
region = region)
```
The parameter 'nAnalogs' doesn't correspond to the number of analogs returned, but to the number of best observations used in the comparison between the large and local scales.
In this case, when looking at the large scale pattern and also at the local scale pattern, the best analog for the first member is the 13th of October 2001:
```
down2$data$dates[2]
# "13-10-2001"
```
```
library(ClimProjDiags)
var = list(expTAS$data, obsTAS$data[1, 1, 1, 15, , ],
down2$data$field[1, , ], SelBox(obsTAS$data[1, 1, 2, 13, , ],
lon = as.vector(obsTAS$coords$lon),
lat = as.vector(obsTAS$coords$lat),
region, londim = 'lon', latdim = 'lat')$data)
PlotLayout(PlotEquiMap, c('lat', 'lon'), var = var,
special_args = list(list(lon = expTAS$coords$lon, lat = expTAS$coords$lat),
list(lon = obsTAS$coords$lon, lat = obsTAS$coords$lat),
                                list(lon = down2$coords$lon, lat = down2$coords$lat),
                                list(lon = down2$coords$lon, lat = down2$coords$lat)),
filled.continents = FALSE,
titles = c('Exp TAS 15-10-2000', 'Obs TAS 15-10-2000',
'Analog TAS 15-10-2000', 'Obs TAS 13-10-2001'),
ncol = 2, nrow = 2)
```

The previous figure shows that the best analog field corresponds to the observed field on the 13th of October 2001.
#### Two variables and criteria Local [scale] Correlation:
```
down3 <- CST_Analogs(expL = expPSL, obsL = obsPSL, AnalogsInfo = TRUE,
criteria = "Local_cor", # nAnalogs = 50,
obsVar = obsTAS, expVar = expTAS,
region = region)
```
In this case, when looking at the large scale pattern and also at the local scale pattern, the best analog for the first member is the 10th of October 2001:
```
down3$data$dates[3]
# [1] "10-10-2001"
```
```
var = list(down3$data$field[1, , ],
SelBox(obsTAS$data[1, 1, 2, 10, , ],
lon = as.vector(obsTAS$coords$lon),
lat = as.vector(obsTAS$coords$lat),
region, londim = 'lon', latdim = 'lat')$data)
PlotLayout(PlotEquiMap, c('lat', 'lon'), var = var,
lon = down3$coords$lon, lat = down3$coords$lat,
filled.continents = FALSE,
titles = c('Analog TAS 15-10-2000', 'Obs TAS 10-10-2001'),
ncol = 2, nrow = 1)
```

The previous figure shows that the best analog field corresponds to the observed field on the 10th of October 2001.
#### Downscaling exp$data using the excludeTime parameter
`excludeTime` is set by default to `Time_expL` in order to avoid finding the day of interest as its own analog. If there is interest in excluding other dates, they should be included in the argument 'excludeTime'.
```
down4 <- CST_Analogs(expL = expPSL, obsL = obsPSL, AnalogsInfo = TRUE,
criteria = "Large_dist", nAnalogs = 20,
obsVar = obsTAS, expVar = expTAS,
region = region, excludeTime = obsPSL$attrs$Dates[10:20])
```
In this case, the best analog is still the 7th of October 2005.
*Note: You can compute the anomaly values before applying the criteria (as in Yiou et al., 2013) using `CST_Anomaly` from the CSTools package.*
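A hedged sketch of that workflow (not run here; it assumes the PSL and TAS objects keep the full dimensions loaded above, cover a hindcast period long enough to compute a meaningful climatology, and that the default arguments of `CST_Anomaly` suit the data layout) could be:
```
anom <- CST_Anomaly(exp = expPSL, obs = obsPSL)
down_anom <- CST_Analogs(expL = anom$exp, obsL = anom$obs, AnalogsInfo = TRUE,
                         criteria = "Large_dist", nAnalogs = 3,
                         obsVar = obsTAS, expVar = expTAS)
```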
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/Analogs_vignette.Rmd
|
---
author: "Eroteida Sánchez-García"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Achieving Best Estimate Index}
%\usepackage[utf8]{inputenc}
---
Achieving the best precipitation prediction given the NAO index
-------------------------------------------------------------------
The boreal winter precipitation forecast, accumulated from November to March, can be improved by considering the NAO index. The first step is to find the best estimate of the winter NAO, given by two Seasonal Forecast Systems (SFS). The second step is to employ the enhanced NAO index PDF to produce weights for an SFS (it could be the same or a different SFS from the previous ones). The third step is to apply these weights to a precipitation field. The methodology has been proven to improve the skill of the precipitation forecast in the Iberian Peninsula, given the relation between the winter precipitation and the NAO index at the seasonal time scale (Sánchez-García, E., Voces-Aboy, J., Navascués, N., & Rodríguez-Camino, E. (2019). Regionally improved seasonal forecast of precipitation through Best estimation of winter NAO, Adv. Sci. Res., 16, 165-174, <https://doi.org/10.5194/asr-16-165-2019>).
This document aims to illustrate a practical use of the functions included in CSTools by applying this methodology.
## Loading packages and data
Open an R session and load the CSTools library:
```
library(CSTools)
```
The required data to apply this methodology are:
- the observed (reconstructed) NAO index in the reference period
- the winter NAO index for two different SFSs (SFS1 and SFS2, to combine both of them) in a reference period (hindcast) and in a future simulation (forecast)
- the winter NAO index and the accumulated precipitation field from the SFS that is to be improved (hindcast and forecast)
Given the memory limitations, the following example uses synthetic data.
The SFS1 system is a dynamical model containing 25 ensemble members, and its output will be saved in the object `NAO_hind1` for the 20-year reference period and in `NAO_fcst1` for the forecast of the next season.
The second SFS, SFS2, is an empirical model, so `NAO_hind2` and `NAO_fcst2` are characterized by a mean and a standard deviation saved in the 'statistic' dimension.
The model for improving is a dynamical model containing 25 ensemble members.
The synthetic data is created by running the following lines:
```
# observations
NAO_obs <- rnorm(20, sd = 3)
dim(NAO_obs) <- c(time = 20)
# hindcast and forecast of a dynamical SFS 1
NAO_hind1 <- rnorm(20 * 2 * 25, sd = 2.5)
dim(NAO_hind1) <- c(time = 20, member = 50)
NAO_fcst1 <- rnorm(2*51, sd = 2.5)
dim(NAO_fcst1) <- c(time = 1, member = 102)
# hindcast and forecast of an empirical SFS 2
NAO_hind2_mean <- rnorm(20, sd = 3)
NAO_hind2_sd <- rnorm(20, mean = 5, sd = 1)
NAO_hind2 <- cbind(NAO_hind2_mean, NAO_hind2_sd)
dim(NAO_hind2) <- c(time = 20, statistic = 2)
NAO_fcst2_mean <- rnorm(1, sd = 3)
NAO_fcst2_sd <- rnorm(1, mean = 5, sd = 1)
NAO_fcst2 <- cbind(NAO_fcst2_mean, NAO_fcst2_sd)
dim(NAO_fcst2) <- c(time = 1, statistic = 2)
```
The winter NAO index and the accumulated precipitation field from the dynamical SFS to be improved can be created by running:
```
# NAO index of a SFS to compute weights for each ensemble's member
NAO_hind <- rnorm(20 * 25, sd = 2.5)
dim(NAO_hind) <- c(time = 20, member = 25)
NAO_fcst <- rnorm(51, sd = 2.5)
dim(NAO_fcst) <- c(time = 1, member = 51)
# The accumulated precipitation field
prec_hind <- rnorm(20 * 25 * 21 * 31, mean = 30, sd = 10)
dim(prec_hind) <- c(time = 20, member = 25, lat = 21, lon = 31)
prec_hind <- list(data = prec_hind)
class(prec_hind) <- 's2dv_cube'
prec_fcst <- rnorm(51 * 21 * 31, mean = 25,sd = 8)
dim(prec_fcst) <- c(time = 1, member = 51, lat = 21, lon = 31)
prec_fcst <- list(data = prec_fcst)
class(prec_fcst) <- 's2dv_cube'
```
## 1- Best Estimate Index NAO
The function `BEI_PDFBest` performs the following steps:
- improves the NAO index PDF for each SFS (optionally applying a bias correction method) and
- does a statistical combination of these improved NAO indices.
Its output is an array containing the parameters (mean and standard deviation) of the PDF for the reference period (hindcast) or the forecast period.
```
# for hindcast
pdf_hind_best <- BEI_PDFBest(NAO_obs, NAO_hind1, NAO_hind2, index_fcst1 = NULL,
index_fcst2 = NULL, method_BC = 'none',
time_dim_name = 'time', na.rm = FALSE)
# for forecast
pdf_fcst_best <- BEI_PDFBest(NAO_obs, NAO_hind1, NAO_hind2, index_fcst1 = NAO_fcst1,
index_fcst2 = NAO_fcst2, method_BC = 'none',
time_dim_name = 'time', na.rm = FALSE)
```
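The exact dimension names depend on the inputs, but the outputs can be quickly inspected, for example:
```
dim(pdf_hind_best)
dim(pdf_fcst_best)
```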
## 2- Compute weights using the Best Estimation of Index NAO
An array of weights is calculated for the SFS. This SFS could be the same or different SFS than the ones in section 1.
The function `BEI_Weights` computes these weights for each ensemble member based on the best NAO PDF estimate.
```
# for hindcast
weights_hind <- BEI_Weights(NAO_hind, pdf_hind_best)
# for forecast
weights_fcst <- BEI_Weights(NAO_fcst, pdf_fcst_best)
```
The expected dimensions of these weights are 'member' and the temporal dimension.
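A quick dimension check on the synthetic example (a sketch; the expected values follow the arrays created earlier):
```
dim(weights_hind) # expected: time = 20, member = 25
dim(weights_fcst) # expected: time = 1,  member = 51
```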
## 3- Apply weights to a precipitation field
The function `CST_BEI_Weighting` computes the ensemble mean or the terciles probabilities for a climate variable.
The ensemble mean and the tercile probabilities of the weighted precipitation field are obtained by running:
```
# for hindcast
em_prec_hind <- CST_BEI_Weighting(prec_hind, weights_hind, type = 'ensembleMean')
prob_prec_hind <- CST_BEI_Weighting(prec_hind, weights_hind, type = 'probs')
# for forecast
em_prec_fcst <- CST_BEI_Weighting(prec_fcst, weights_fcst, type = 'ensembleMean')
prob_prec_fcst <- CST_BEI_Weighting(prec_fcst, weights_fcst, type = 'probs')
```
## Comparison and visualization
The original model output can be compared against the BEI-corrected field.
To do this, equiprobable weights are created for the hindcast period and applied to the original precipitation field:
```
aweights_raw <- rep(1/25, 20 * 25)
dim(aweights_raw) <- dim(weights_hind)
em_prec_raw <- CST_BEI_Weighting(prec_hind, aweights_raw, type = 'ensembleMean')
prob_prec_raw <- CST_BEI_Weighting(prec_hind, aweights_raw, type = 'probs')
```
A map with the probability that the total precipitation will be in the lower/normal/upper tercile based on the Best Estimate Index NAO can be obtained using the 'PlotEquiMap' or 'PlotMostLikelyQuantileMap' functions from the 'CSTools' package.
The following figures show the probabilities of the lower tercile for precipitation from November to March 2012/13 for the ECMWF S5 system, with and without applying the methodology described above, obtained using real data:
- NAO indices from the ECMWF-S5 dynamical model and the S-ClimWaRe empirical model from AEMET, from 1997 to 2016, to compute the Best Estimation of Index NAO for this hindcast period.
- The winter precipitation (from November to March) from 1997 to 2016 over the Iberian Peninsula from the ECMWF-S5 dynamical model with a resolution of 0.5º x 0.5º, to be weighted with the previous Best Estimation of Index NAO.


In a similar way, we can plot the map with the probability that the total precipitation from November 2012 to
March 2013, for example, will be in the lower tercile from the raw ECMWF Seasonal Forecast System 5, to compare results:


|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/BestEstimateIndex_vignette.Rmd
|
---
author: "Nuria Perez"
date: "`r Sys.Date()`"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Data Storage and Retrieval}
%\usepackage[utf8]{inputenc}
---
Data Storage and Retrieval
-----------------------------------------
CSTools aims at post-processing seasonal climate forecasts with state-of-the-art methods. However, some doubts and issues may arise the first time the package is used: do I need a specific R version? How much RAM do I need? Where can I find the datasets? Should I format the datasets? etc. Therefore, some recommendations and key points to take into account are gathered here in order to facilitate the use of CSTools.
### 1. System requirements
The first question that may come to a new user is what their computer requires to run CSTools. Here is a list of the most frequent needs:
- netcdf library version 4.1 or later
- cdo (I am currently using 1.6.3)
- R 3.4.2 or later
On the other hand, the computational power of a computer can be a limitation, but it will depend on the size of the data that the users need for their analysis. For instance, they can estimate the memory they will require by multiplying the following values:
- Area of the study region (km^2)
- Area of the desired grid cell (km) (or square of the grid cell size)
- Number of models + 1 observational dataset
- Forecast time length (days or months)
- Temporal resolution (1 for daily, 4 for 6-hourly or 24 for hourly data)
- Hindcast length (years)
- Number of start dates (or seasons)
- Number of members
- Extra factor for function computation (*)
For example, if they want to use the hindcast of 3 different seasonal simulations with 9 members, at daily resolution, to perform a regional study, let's say in a region of 40000 km² with a resolution of 5 km:
> 200km x 200km / (5km * 5km) * (3 + 1) models * 214 days * 30 hindcast years * 9 members x 2 start dates x 8 bytes ~ 6 GB
(*) Furthermore, some of the functions need to duplicate or triplicate (or even more) the inputs to perform their analysis. Therefore, between 12 and 18 GB of RAM would be necessary in this example.
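The same estimate can be reproduced in R (a minimal sketch using the numbers of the example above):
```r
# Rough RAM estimate for the example above
grid_points <- (200 * 200) / (5 * 5)  # study area divided by grid-cell area
n_datasets  <- 3 + 1                  # 3 models + 1 observational dataset
n_days      <- 214                    # forecast time length at daily resolution
n_years     <- 30                     # hindcast length
n_members   <- 9
n_sdates    <- 2
bytes       <- 8                      # double precision
grid_points * n_datasets * n_days * n_years * n_members * n_sdates * bytes / 1024^3
# ~ 5.5 GB (the ~6 GB quoted above); some functions may need 2-3 times this amount
```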
### 2. Overview of CSTools structure
All CSTools functions have been developed following the same guidelines. The main point of interest for the users is that each function is built on several nested levels, and it is possible to distinguish at least three levels:
- `CST_FunctionName()`: this function works on 's2dv_cube' objects and is exposed to the users.
- `FunctionName()`: this function works on N-dimensional arrays with named dimensions and is exposed to the users.
- lower-level functions such as `.functionname()`, which work on the minimum required elements and are not exposed to the users.
A reasonable doubt that a new user may have at this point is: what is an 's2dv_cube' object?
's2dv_cube' is a class of object storing the data and metadata in several elements:
+ $data is an N-dimensional array with named dimensions containing the data (e.g. temperature values),
+ $dims is a vector with the dimensions of $data,
+ $coords is a named list with the coordinate vectors corresponding to the dimensions of $data,
+ $attrs is a named list with elements corresponding to attributes of the object. It has the following elements:
+ $Variable is a list with the variable name in element $varName and with the metadata of all the variables in the $metadata element,
+ $Dates is an array of dates of the $data element,
+ other elements for extra metadata information
It is possible to visualize an example of the structure of 's2dv_cube' object by opening an R session and running:
```
library(CSTools)
class(lonlat_temp_st$exp) # check the class of the object lonlat_temp_st$exp
names(lonlat_temp_st$exp) # shows the names of the elements in the object lonlat_temp_st$exp
str(lonlat_temp_st$exp) # shows the full structure of the object lonlat_temp_st$exp
```
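A quick way to inspect the main elements described above using the sample data (a minimal sketch; `lonlat_temp_st` is the sample 's2dv_cube' shipped with CSTools and its internal structure may differ slightly between versions):
```r
dim(lonlat_temp_st$exp$data)              # named dimensions of the data array
lonlat_temp_st$exp$attrs$Variable$varName # variable name stored in the metadata
dim(lonlat_temp_st$exp$attrs$Dates)       # dates of the data
```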
### 3. Data storage recommendations
CSTools' main objective is to share state-of-the-art post-processing methods with the scientific community. However, in order to facilitate its use, the CSTools package includes a function, `CST_Load`, to read the files and make the data available in 's2dv_cube' format in the R session memory to conduct the analysis. Some benefits of using this function are:
- CST_Load can read multiple experimental or observational datasets at once,
- CST_Load can regrid all datasets to a common grid,
- CST_Load reformats observational datasets into the same structure as experiments (i.e. matching start dates and forecast lead times between experiments and observations) or keeps observations as usual time series (i.e. a continuous temporal dimension),
- CST_Load can subset a region from global files,
- CST_Load can read multiple members in monthly, daily or other resolutions,
- CST_Load can perform spatial averages over a defined region or return the lat-lon grid and
- CST_Load can read from files using multiple parallel processes, among other possibilities.
CSTools also provides the function `CST_Start`, based on [startR](https://CRAN.R-project.org/package=startR), which is more flexible than `CST_Load`. We recommend using `CST_Start` since it is more efficient and flexible.
If you plan to use `CST_Load` or `CST_Start`, we have developed guidelines to download and format the data. See [CDS_Seasonal_Downloader](https://earth.bsc.es/gitlab/es/cds-seasonal-downloader).
There are alternatives to these functions, for instance, the user can:
1) use another tool to read the data from files (e.g. the ncdf4, easyNCDF or startR packages) and then convert it to the class 's2dv_cube' with the `s2dv_cube()` function (see the sketch after this list), or
2) if they keep facing problems converting the data to that class, they can skip this step and work with the functions without the 'CST_' prefix; in this case, they will work with the basic class 'array'.
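A minimal sketch of option 1, wrapping an array read with another tool into an 's2dv_cube' (the dimension and coordinate values are illustrative; check `?s2dv_cube` for the exact arguments available in your CSTools version):
```r
library(CSTools)
dat <- array(rnorm(2 * 3 * 4 * 5),
             dim = c(member = 2, sdate = 3, lat = 4, lon = 5))
cube <- s2dv_cube(data = dat,
                  coords = list(lat = c(40, 41, 42, 43),
                                lon = c(0, 1, 2, 3, 4)))
class(cube) # "s2dv_cube"
```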
Independently of the tool used to read the data from your local storage into your R session, this step can be automated by giving a common structure and format to all datasets in your local storage. Here is the list of minimum requirements that `CST_SaveExp` follows to store an experiment that can later be loaded with `CST_Load`:
- this function creates one NetCDF file per start date, named after the variable and the start date: `$VARNAME$_$YEAR$$MONTH$.nc` (see the example below),
- each file has dimensions: lon, lat, ensemble and time.
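For instance, following this naming convention, the file for variable 'tas' and start date November 1993 would be built as follows (a trivial sketch of the pattern):
```r
varname <- "tas"
sdate   <- "199311"                  # $YEAR$$MONTH$
paste0(varname, "_", sdate, ".nc")   # "tas_199311.nc"
```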
### 4. CST_Load example
```
library(CSTools)
library(zeallot)
path <- "/esarchive/exp/meteofrance/system6c3s/$STORE_FREQ$_mean/$VAR_NAME$_f6h/$VAR_NAME$_$START_DATE$.nc"
ini <- 1993
fin <- 2012
month <- '05'
start <- as.Date(paste(ini, month, "01", sep = ""), "%Y%m%d")
end <- as.Date(paste(fin, month, "01", sep = ""), "%Y%m%d")
dateseq <- format(seq(start, end, by = "year"), "%Y%m%d")
c(exp, obs) %<-% CST_Load(var = 'sfcWind',
exp = list(list(name = 'meteofrance/system6c3s', path = path)),
obs = 'erainterim',
sdates = dateseq, leadtimemin = 2, leadtimemax = 4,
lonmin = -19, lonmax = 60.5, latmin = 0, latmax = 79.5,
storefreq = "daily", sampleperiod = 1, nmember = 9,
output = "lonlat", method = "bilinear",
grid = "r360x180")
```
### 5. CST_Start example
```r
path_exp <- paste0('/esarchive/exp/meteofrance/system6c3s/monthly_mean/',
'$var$_f6h/$var$_$sdate$.nc')
sdates <- sapply(1993:2012, function(x) paste0(x, '0501'))
lonmax <- 60.5
lonmin <- -19
latmax <- 79.5
latmin <- 0
exp <- CST_Start(dataset = path_exp,
var = 'sfcWind',
ensemble = startR::indices(1:9),
sdate = sdates,
time = startR::indices(1:3),
latitude = startR::values(list(latmin, latmax)),
latitude_reorder = startR::Sort(decreasing = TRUE),
longitude = startR::values(list(lonmin, lonmax)),
longitude_reorder = startR::CircularSort(0, 360),
synonims = list(longitude = c('lon', 'longitude'),
latitude = c('lat', 'latitude')),
return_vars = list(latitude = NULL,
longitude = NULL, time = 'sdate'),
retrieve = TRUE)
path_obs <- paste0('/esarchive/recon/ecmwf/erainterim/daily_mean/',
'$var$_f6h/$var$_$sdate$.nc')
dates <- as.POSIXct(sdates, format = '%Y%m%d', 'UTC')
obs <- CST_Start(dataset = path_obs,
var = 'sfcWind',
sdate = unique(format(dates, '%Y%m')),
time = startR::indices(2:4),
latitude = startR::values(list(latmin, latmax)),
latitude_reorder = startR::Sort(decreasing = TRUE),
longitude = startR::values(list(lonmin, lonmax)),
longitude_reorder = startR::CircularSort(0, 360),
synonims = list(longitude = c('lon', 'longitude'),
latitude = c('lat', 'latitude')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r360x181',
method = 'conservative'),
transform_vars = c('latitude', 'longitude'),
return_vars = list(longitude = NULL,
latitude = NULL,
time = 'sdate'),
retrieve = TRUE)
```
Extra lines to see the size of the objects and visualize the data:
```
library(pryr)
object_size(exp)
# 27.7 MB
object_size(obs)
# 3.09 MB
library(s2dv)
PlotEquiMap(exp$data[1,1,1,1,1,,], lon = exp$coords$longitude, lat= exp$coords$latitude,
filled.continents = FALSE, fileout = "Meteofrance_r360x180.png")
```

### Managing big datasets and memory issues
Depending on the user needs, limitations can be found when trying to process big datasets. This may depend on the number of ensembles, and on the resolution and region that the user wants to process. CSTools has been developed to be compatible with the startR package, which covers these aims:
- retrieving data from NetCDF files to RAM memory in a flexible way,
- automatically dividing datasets into pieces to perform an analysis while avoiding memory issues, and
- running the workflow on your local machine or submitting it to an HPC cluster.
This is especially useful when a user doesn't have access to an HPC and must work with a small amount of RAM:

There is a [video tutorial](https://earth.bsc.es/wiki/lib/exe/fetch.php?media=tools:startr_tutorial_2020.mp4) about the startR package, together with the tutorial material.
The functions in CSTools (with or without the CST_ prefix) include a parameter called 'ncores' that allows the code to be automatically parallelized across multiple cores when the parameter is set to a value greater than one.
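For illustration, a call like the following would run the anomaly computation on four cores (a minimal sketch; it assumes 's2dv_cube' objects `exp` and `obs` with the dimension names expected by `CST_Anomaly`, and that your CSTools version exposes `ncores` in this function):
```r
library(zeallot)
# Same kind of anomaly computation as used in other vignettes, parallelized on 4 cores
c(ano_exp, ano_obs) %<-% CST_Anomaly(exp = exp, obs = obs, cross = TRUE,
                                     memb = TRUE, ncores = 4)
```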
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/Data_Considerations.Rmd
|
---
author: "Ignazio Giuntoli and Federico Fabiano - CNR-ISAC"
date: "`r Sys.Date()`"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Ensemble Clustering}
%\usepackage[utf8]{inputenc}
---
Ensemble clustering
-----------------------------------------
### Introduction
Ensemble forecasts provide insight on average weather conditions from sub-seasonal to seasonal timescales. With large ensembles, it is useful to group members according to similar characteristics and to select the most representative member of each cluster. This allows forecast scenarios to be characterized in a multi-model (or single-model) ensemble prediction. In particular, at the regional level, this function can be used to identify the subset of ensemble members that best represents the full range of possible outcomes, which may in turn be fed to downscaling applications. The choice of the ensemble members can be done by selecting different options within the function in order to meet the requirements of specific climate information products that can be tailored to different regions and user needs.
For a detailed description of the tool please refer to the CSTools guide : <https://cran.r-project.org/package=CSTools>
### Steps of the vignette
This vignette consists of the following steps: 1. Preliminary setup, 2. Loading the data, 3. Launching Ensemble clustering, 4. Results retrieval, 5. Results mapping, 6. Final notes.
### 1. Preliminary setup
To run this vignette, install and load the *CSTools* library
```r
install.packages("CSTools")
library(CSTools)
```
and the following packages:
```r
install.packages("s2dv")
library(s2dv)
```
### 2. Loading the data
For our example we will use the sample seasonal temperature data provided within the CSTools package.
Data can be loaded as follows:
```r
datalist <- lonlat_temp_st$exp
```
The data has the following dimensions:
```r
dim(datalist$data)
dataset var member sdate ftime lat lon
1 1 15 6 3 22 53
```
Therefore the number of members is 15, the number of start dates is 6, while the forecast time steps are 3. The lat and lon dimensions refer to a 22x53 grid.
### 3. Launching Ensemble clustering
Prior to launching EnsClustering let's define the number of clusters, e.g., 4
```r
numcl = 4
```
Let's launch the clustering using 4 clusters (numclus), 4 EOFs (numpcs), 'mean' as the moment to apply to the time dimension (time_moment), and both members and start dates, i.e. c("member", "sdate"), as the dimensions over which the clustering is performed (cluster_dim).
```r
results <- CST_EnsClustering(datalist, numclus = numcl, numpcs = 4,
time_moment = 'mean', cluster_dim = c('member', 'sdate'))
```
EnsClustering produces the following outputs, saved in the object `results`:
```r
names(results)
#[1] "cluster" "freq" "closest_member" "repr_field"
#[5] "composites" "lon" "lat"
```
where:
- $cluster contains the cluster assigned to each 'member' and 'sdate'
- $freq contains the relative frequency of each 'cluster'
- $closest_member contains the representative member of each 'cluster'
- $repr_field contains the fields of the representative member of each cluster, with dimensions 'cluster', 'lat' and 'lon'
- $composites contains the mean fields per 'cluster', with dimensions 'cluster', 'lat' and 'lon'
- $lat contains the 22 grid-point latitudes
- $lon contains the 53 grid-point longitudes
So, for instance, the overall frequency per cluster can be displayed by querying '$freq' in 'results', obtaining:
```r
results$freq
[,1]
[1,] 35.55556
[2,] 27.77778
[3,] 21.11111
[4,] 15.55556
```
Further, the cluster number to which each 'member - start date' pair is assigned can be displayed by querying '$cluster' in 'results', as shown below (the members (15) are in rows and the start dates (6) are in columns, i.e. 15*6 pairs).
```r
results$cluster
, , 1, 1
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3 2 1 1 4 2
[2,] 2 4 4 2 3 1
[3,] 3 3 1 3 4 2
[4,] 2 1 1 2 1 1
[5,] 4 4 1 3 3 1
[6,] 4 1 1 2 4 1
[7,] 1 2 1 2 2 1
[8,] 4 2 1 2 4 1
[9,] 3 3 1 1 4 3
[10,] 1 2 1 2 4 1
[11,] 3 3 2 3 3 1
[12,] 3 1 2 1 2 2
[13,] 2 1 1 3 3 1
[14,] 2 2 4 1 1 2
[15,] 3 4 1 2 3 2
```
### 4. Results retrieval
Let's keep in mind that the aim is to identify 'member - start date' pairs to be retained in order to reduce the ensemble while keeping a good representativeness.
To achieve this, we can get an idea of how the clustering has performed by looking at the average cluster spatial patterns, which are found in the 'composites' element of 'results' and have the following dimensions (we will map them at point 5):
```r
dim(results$composites)
cluster lat lon dataset var
4 22 53 1 1
```
while the 'repr_field' element of 'results' provides the spatial pattern of the member lying closest to the centroid (it has the same dimensions as 'composites'):
```r
dim(results$repr_field)
cluster lat lon dataset var
4 22 53 1 1
```
Finally, the 'member - start date' pairs to be picked as representative of each cluster are found in the 'closest_member' element.
```r
results$closest_member
$member
[,1]
[1,] 6
[2,] 1
[3,] 11
[4,] 6
$sdate
[,1]
[1,] 6
[2,] 6
[3,] 1
[4,] 5
```
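As an illustration, the representative pairs can be tabulated per cluster (a small sketch following the structure of 'closest_member' shown above):
```r
data.frame(cluster = 1:numcl,
           member  = results$closest_member$member[, 1],
           sdate   = results$closest_member$sdate[, 1])
```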
### 5. Results mapping
The following lines produce a multiplot of the representative temperature anomalies spatial pattern per cluster (Figure 1).
These are actually the closest realizations (member - start-date pair) to the cluster centroids noted as 'repr_field' above.
```r
EnsMean <- MeanDims(datalist$data, c('member', 'sdate', 'ftime'))
EnsMean <- InsertDim(Reorder(EnsMean, c("lat", "lon", "dataset", "var")),
posdim = 1, lendim = 4, name = 'cluster')
PlotLayout(PlotEquiMap, plot_dims = c("lat", "lon"),
var = results$repr_field[,,,1,1] - EnsMean[,,,1,1],
lon = results$lon, lat = results$lat, filled.continents = FALSE,
titles = c("1","2","3","4"), brks = seq(-2, 2, 0.5),
fileout = "EnsClus_4clus_both_mem_std_Fig1.png")
```

The lines below produce a multiplot of the temperature anomaly patterns for each 'member - start-date' pair and the respective cluster to which they are assigned (Figure 2). We limit the multiplot to the first 24 out of 90 combinations (i.e. 15 members*6 start_dates = 90) for figure clarity.
```r
ExpMean <- MeanDims(datalist$data, 'ftime')
EnsMean <- InsertDim(InsertDim(InsertDim(InsertDim(EnsMean[1,,,,],
posdim = 1, lendim = 6, name = 'sdate'),
posdim = 1, lendim = 15, name = 'member'),
posdim = 1, lendim = 1, name = 'var'),
posdim = 1, lendim = 1, name = 'dataset')
ExpMeanSd <- Reorder(ExpMean - EnsMean, c('dataset', 'var', 'sdate', 'member' , 'lat', 'lon'))
PlotLayout(PlotEquiMap, plot_dims = c("lat", "lon"),
var = ExpMeanSd[, , , 1:4, , ], title_scale = 0.7,
ncol = 6, nrow = 4, row_titles = paste('member' , 1:4), col_titles = paste('sdate', 1:6),
lon = results$lon, lat = results$lat, filled.continents = FALSE,
titles = as.character(t(results$cluster[1:4, 1:6,,])), brks = seq(-2, 2, 0.5),
width = 24, height = 20, size_units = 'cm',
fileout = "EnsClus_4clus_both_mem_std_Fig2.png")
```

### Final notes
It should be noted that the clustering can be carried out with respect to 'members' or 'start-dates' alone, in which case the selection is no longer done over all 'member - start date' pairs (90) but along the 15 members (with all start dates) or along the 6 start dates (with all members).
The clustering command with respect to 'members' is listed below:
```r
print('clustering over members')
results_memb <- CST_EnsClustering(datalist, numclus = numcl, numpcs = 4, time_moment = 'mean', cluster_dim = 'member')
```
Also, in addition to 'mean', the 'time_moment' can be set to 'sd', i.e. the standard deviation along time, or 'perc', i.e. a percentile along time (in which case the 'time_percentile' argument is also needed). The latter option, 'perc', may be useful when considering extremes, for instance when analysing extreme temperatures with a high percentile (e.g. 90).
The clustering command with respect to 'member - start-dates' pairs over the 90th percentile of temperature (i.e. high temperatures) is listed below:
```r
print('clustering using time_percentile')
results_perc <- CST_EnsClustering(datalist, numclus = numcl, numpcs = 4, time_moment = 'perc', time_percentile = 90, cluster_dim = c('member', 'sdate'))
```
Finally, the parameter 'time_dim' lets the user specify the dimension(s) over which the 'time_moment' (i.e. mean, sd or perc) is applied. If 'time_dim' is not specified, the dimension is sought automatically, picking the first among 'ftime', 'sdate' and 'time'.
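For instance, the options above could be combined in a call like the following (a sketch based on the parameters described in this section):
```r
results_sd <- CST_EnsClustering(datalist, numclus = numcl, numpcs = 4,
                                time_moment = 'sd', time_dim = 'ftime',
                                cluster_dim = 'member')
```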
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/ENSclustering_vignette.Rmd
|
---
author: "Louis-Philippe Caron and Núria Pérez-Zanón"
date: "`r Sys.Date()`"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Most Likely Terciles}
%\usepackage[utf8]{inputenc}
---
Computing and displaying the most likely tercile of a seasonal forecast
========================
In this example, we will use CSTools to visualize a probabilistic forecast (most likely tercile) of summer near-surface temperature produced by ECMWF System5. The (re-)forecasts used are initialized on May 1st for the period 1981-2020. The target for the forecast is June-August (JJA) 2020. The forecast data are taken from the Copernicus Climate Data Store.
### 1. Preliminary setup
To run this vignette, the following R packages should be installed and loaded:
```r
library(CSTools)
library(s2dv)
library(zeallot)
library(easyVerification)
library(ClimProjDiags)
```
### 2. Loading the data
We first define a few parameters. We start by defining the region we are interested in. In this example, we focus on the Mediterranean region.
```r
lat_min = 25
lat_max = 52
lon_min = -10
lon_max = 40
```
If the boundaries are not specified, the domain will be the entire globe.
We also define the start dates for the hindcasts/forecast (in this case, May 1st 1981-2020) and create a sequence of dates that will be required by the load function.
```r
ini <- 1981
fin <- 2020
numyears <- fin - ini +1
mth = '05'
start <- as.Date(paste(ini, mth, "01", sep = ""), "%Y%m%d")
end <- as.Date(paste(fin, mth, "01", sep = ""), "%Y%m%d")
dateseq <- format(seq(start, end, by = "year"), "%Y%m%d")
```
We then define the target months for the forecast (i.e. JJA). The months are given relative to the start date (May in this case) considering that monthly simulations are being analyzed.
```r
mon1 <- 2
monf <- 4
```
We also define the variable of interest; the forecast system, the observational reference and the common grid onto which to interpolate are specified directly in the `CST_Start` calls below.
```r
clim_var = 'tas'
```
Finally, the data are loaded using `CST_Start`:
```r
repos_exp <- paste0('/esarchive/exp/ecmwf/system5c3s/monthly_mean/',
'$var$_f6h/$var$_$sdate$.nc')
exp <- CST_Start(dataset = repos_exp,
var = clim_var,
member = startR::indices(1:10),
sdate = dateseq,
ftime = startR::indices(2:4),
lat = startR::values(list(lat_min, lat_max)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lon_min, lon_max)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r256x128',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
# Give the correct time values (identical as the netCDF files)
dates_obs <- c(paste0(ini:fin, '-06-30 18:00:00'),
paste0(ini:fin, '-07-31 18:00:00'),
paste0(ini:fin, '-08-31 18:00:00'))
dates_obs <- as.POSIXct(dates_obs, tz = "UTC")
dim(dates_obs) <- c(sdate = 40, ftime = 3)
date_arr <- array(format(dates_obs, '%Y%m'), dim = c(sdate = 40, ftime = 3))
repos_obs <- paste0('/esarchive/recon/ecmwf/erainterim/monthly_mean/',
'$var$/$var$_$date$.nc')
obs <- CST_Start(dataset = repos_obs,
var = clim_var,
date = date_arr,
split_multiselected_dims = TRUE,
lat = startR::values(list(lat_min, lat_max)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lon_min, lon_max)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r256x128',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lon = NULL,
lat = NULL,
ftime = 'date'),
retrieve = TRUE)
```
Loading the data using CST_Start returns two objects, one for the experimental data and another one for the observed data, with the same elements and compatible dimensions of the data element:
```r
> dim(exp$data)
dataset var member sdate ftime lat lon
1 1 10 40 3 19 36
> dim(obs$data)
dataset var sdate ftime lat lon
1 1 40 3 19 36
```
The latitude and longitude are saved for later use:
```r
Lat <- exp$coords$lat
Lon <- exp$coords$lon
```
### 3. Computing probabilities
First, anomalies of forecast and observations are computed using cross-validation on individual members:
```
c(Ano_Exp, Ano_Obs) %<-% CST_Anomaly(exp = exp, obs = obs, cross = TRUE, memb = TRUE)
```
The seasonal mean of both forecasts and observations are computed by averaging over the ftime dimension.
```r
Ano_Exp$data <- MeanDims(Ano_Exp$data, 'ftime')
Ano_Obs$data <- MeanDims(Ano_Obs$data, 'ftime')
```
Finally, the probabilities of each tercile are computed by evaluating which tercile is forecasted by each ensemble member for the latest forecast (2020) using the function `ProbBins` in **s2dv** and then averaging the results along the member dimension to obtain the probability of each tercile.
```r
PB <- ProbBins(Ano_Exp$data, fcyr = numyears, thr = c(1/3, 2/3), compPeriod = "Without fcyr")
prob_map <- MeanDims(PB, c('sdate', 'member', 'dataset', 'var'))
```
### 4. Visualization with PlotMostLikelyQuantileMap
We then plot the most likely quantile using the **CSTools** function `PlotMostLikelyQuantileMap`.
```
PlotMostLikelyQuantileMap(probs = prob_map, lon = Lon, lat = Lat,
coast_width = 1.5, legend_scale = 0.5,
toptitle = paste0('Most likely tercile - ', clim_var,
' - ECMWF System5 - JJA 2020'),
width = 10, height = 8)
```

The forecast calls for above average temperature over most of the Mediterranean basin and near average temperature for some smaller regions as well. But can this forecast be trusted?
For this, it is useful to evaluate the skill of the system at forecasting near-surface temperature over the period for which hindcasts are available. We can then use this information to mask the regions for which the system doesn't have skill.
In order to do this, we will first calculate the ranked probability skill score (RPSS) and then exclude/mask from the forecasts the regions for which the RPSS is smaller or equal to 0 (no improvement with respect to climatology).
### 5. Computing Skill Score
First, we evaluate and plot the RPSS. We use the 'rpss' metric included in `CST_MultiMetric`, which relies on the **easyVerification** package and requires removing the missing values from the latest start dates:
```r
Ano_Exp <- CST_Subset(Ano_Exp, along = 'sdate', indices = 1:38)
Ano_Obs <- CST_Subset(Ano_Obs, along = 'sdate', indices = 1:38)
Ano_Obs <- CST_InsertDim(Ano_Obs, posdim = 3, lendim = 1, name = "member")
RPSS <- CST_MultiMetric(Ano_Exp, Ano_Obs, metric = 'rpss', multimodel = FALSE)
PlotEquiMap(RPSS$data[[1]], lat = Lat, lon = Lon, brks = seq(-1, 1, by = 0.1),
filled.continents = FALSE)
```

Areas displayed in red (RPSS > 0) are areas for which the forecast system shows skill above climatology whereas areas in blue (such as a large part of the Iberian Peninsula) are areas for which the model does not. We thus want to mask the areas currently displayed in blue.
### 6. Simultaneous visualization of probabilities and skill scores
From the RPSS, we create a mask: regions with RPSS <= 0 will be masked.
```r
mask_rpss <- ifelse((RPSS$data$skillscore <= 0) | is.na(RPSS$data$skillscore), 1, 0)
```
Finally, we plot the latest forecast, as in the previous step, but add the mask we just created.
```r
PlotMostLikelyQuantileMap(probs = prob_map, lon = Lon, lat = Lat, coast_width = 1.5,
legend_scale = 0.5, mask = mask_rpss[ , , ,],
toptitle = paste('Most likely tercile -', clim_var,
'- ECMWF System5 - JJA 2020'),
width = 10, height = 8)
```

We obtain the same figure as before, but this time, we only display the areas for which the model has skill at forecasting the right tercile. The gray regions represent areas where the system doesn't have sufficient skill over the verification period.
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/MostLikelyTercile_vignette.Rmd
|
## ----cars---------------------------------------------------------------------
mth = '11'
clim_var = 'tas'
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/MultiModelSkill_vignette.R
|
---
author: "Nuria Perez"
date: "`r Sys.Date()`"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Multi-model Skill Assessment}
%\usepackage[utf8]{inputenc}
---
Multi-model Skill Assessment
-----------------------------------------
**reference**: Mishra, N., Prodhomme, C., & Guemas, V. (2018). Multi-Model Skill Assessment of Seasonal Temperature and Precipitation Forecasts over Europe, 29-31. <https://doi.org/10.1007/s00382-018-4404-z>
The R package s2dv should be loaded by running:
```r
library(s2dv)
library(zeallot)
```
Library *CSTools*, should be installed from CRAN and loaded:
```r
install.packages("CSTools")
library(CSTools)
```
### 1.- Load data
In this case, the forecasted seasonal temperature, initialized in November, will be used to assess the EUROSIP multi-model seasonal forecasting system, which consists of a number of independent coupled seasonal forecasting systems integrated into a common framework. Since September 2012, the systems include those from ECMWF, the Met Office, Meteo-France and NCEP.
The parameters defined are the initializing month and the variable:
```{r cars}
mth = '11'
clim_var = 'tas'
```
The simulations available for these models cover the period 1993-2012. So, the starting and ending dates can be defined by running the following lines:
```r
ini <- 1993
fin <- 2012
start <- as.Date(paste(ini, mth, "01", sep = ""), "%Y%m%d")
end <- as.Date(paste(fin, mth, "01", sep = ""), "%Y%m%d")
dateseq <- format(seq(start, end, by = "year"), "%Y%m")
```
The grid onto which all data will be interpolated needs to be specified within the `CST_Start` call (a 256x128 grid). The observational dataset used in this example is EraInterim.
Using the `CST_Start` function, the data available in our data store can be loaded. The following lines show how this function can be used; alternatively, the data can be loaded from a previously saved `.RData` file.
Ask nuria.perez at bsc.es to obtain the data needed to run the recipe.
```r
lonmin = -20
lonmax = 70
latmin = 25
latmax = 75
repos1 <- "/esarchive/exp/glosea5/glosea5c3s/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$01.nc"
repos3 <- "/esarchive/exp/meteofrance/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$01.nc"
exp <- CST_Start(dataset = list(list(name = 'glosea5c3s', path = repos1),
list(name = 'ecmwf/system4_m1', path = repos2),
list(name = 'meteofrance/system5_m1', path = repos3)),
var = clim_var,
member = startR::indices(1:4),
sdate = dateseq,
ftime = startR::indices(2:4),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'), lat = c('lat', 'latitude'),
member = c('member', 'ensemble'), ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r256x128', method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = 'dataset', lon = 'dataset', ftime = 'sdate'),
retrieve = TRUE)
dates_exp <- exp$attrs$Dates
repos_obs <- "/esarchive/recon/ecmwf/erainterim/monthly_mean/$var$/$var$_$date$.nc"
obs <- CST_Start(dataset = list(list(name = 'erainterim', path = repos_obs)),
var = clim_var,
date = unique(format(dates_exp, '%Y%m')),
ftime = startR::values(dates_exp),
ftime_across = 'date',
ftime_var = 'ftime',
merge_across_dims = TRUE,
split_multiselected_dims = TRUE,
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r256x128',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lon = NULL,
lat = NULL,
ftime = 'date'),
retrieve = TRUE)
# save(exp, obs, file = "../tas_toydata.RData")
# Or use the following line to load the file provided in .RData format:
# load(file = "./tas_toydata.RData")
```
There should be two new elements loaded in the R working environment: `exp` and `obs`, containing the experimental and the observed data for temperature. It is possible to check that they are of class `s2dv_cube` by running:
```
class(exp)
class(obs)
```
The corresponding data is saved in the `data` element of each object, while other relevant information is saved in other elements, such as `lat` and `lon`:
```r
> dim(exp$data)
dataset var member sdate ftime lat lon
3 1 9 20 3 35 64
> dim(obs$data)
dataset var sdate ftime lat lon
1 1 20 3 35 64
Lat <- exp$coords$lat
Lon <- exp$coords$lon
```
### 2.- Computing and plotting Anomaly Correlation Coefficient
The Anomaly Correlation Coefficient (ACC) is the most widely used skill metric for Seasonal Climate Forecast quality (Mishra et al., 2018).
The first step is to compute the anomalies of the loaded data, applying a cross-validation technique on individual members, by running:
```
c(ano_exp, ano_obs) %<-% CST_Anomaly(exp = exp, obs = obs, cross = TRUE, memb = TRUE)
```
The dimensions are preserved:
```
> str(ano_exp$data)
num [1:20, 1:3, 1:9, 1, 1:3, 1:35, 1:64] -1.399 -0.046 -0.133 0.361 -5.696 ...
> str(ano_obs$data)
num [1:20, 1, 1, 1, 1:3, 1:35, 1:64] 1.556 1.397 -0.346 -5.99 -0.273 ...
```
The ACC is obtained by running the `CST_MultiMetric` function defining the parameter 'metric' as correlation. The function also includes the option of computing the Multi-Model Mean ensemble (MMM).
```r
ano_obs <- CST_InsertDim(ano_obs, posdim = 3, lendim = 1, name = "member")
AnomDJF <- CST_MultiMetric(exp = ano_exp, obs = ano_obs, metric = 'correlation',
multimodel = TRUE)
```
The output of the function `CST_MultiMetric` is an object of class `s2dv_cube`. It contains the result of the metric, in this case the correlation, in the `data` element (including the correlation for the MMM in the last position).
Other relevant data is stored in the corresponding elements of the object:
```r
> str(AnomDJF$data)
List of 4
$ corr : num [1:4, 1, 1, 1:35, 1:64] 0.3061 0.4401 0.0821 0.2086 0.1948 ...
$ p.val : num [1:4, 1, 1, 1:35, 1:64] 0.0947 0.0261 0.3653 0.1887 0.2052 ...
$ conf.lower: num [1:4, 1, 1, 1:35, 1:64] -0.15782 -0.00297 -0.37399 -0.25768 -0.27106 ...
$ conf.upper: num [1:4, 1, 1, 1:35, 1:64] 0.659 0.739 0.506 0.596 0.587 ...
> names(AnomDJF)
[1] "data" "dims" "coords" "attrs"
> AnomDJF$attrs$Datasets
[1] "glosea5" "ecmwf/system4_m1" "meteofrance/system5_m1" "erainterim"
```
The `$data` element of the `AnomDJF` object is a list of objects with the metric and its statistics: the correlation, the p-value, and the lower and upper limits of the 95% confidence interval, with significance given by a one-sided T-test.
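As an illustration, the p-values can be used directly; for instance, the fraction of grid points with a significant correlation (p < 0.05) for each system (a small sketch based on the dimensions shown in the str() output above):
```r
apply(AnomDJF$data$p.val < 0.05, 1, mean, na.rm = TRUE)
```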
To obtain a spatial plot of the correlation, showing for each grid point the model with the highest correlation, the following lines should be run:
```r
PlotCombinedMap(AnomDJF$data$corr[,1,1,,], lon = Lon, lat = Lat, map_select_fun = max,
display_range = c(0, 1), map_dim = 'nexp',
legend_scale = 0.5, brks = 11,
cols = list(c('white', 'black'),
c('white', 'darkblue'),
c('white', 'darkred'),
c('white', 'darkorange')),
bar_titles = c("MMM", AnomDJF$attrs$Datasets),
width = 14, height = 8)
```
The next figure is the map of the maximum positive Anomaly Correlation Coefficient (ACC) among the three individual models from EUROSIP and the multimodel ensemble. ACC for each model is calculated between their respective predicted ensemble mean anomalies and the anomalies of the observed temperature obtained from ERAINT for winter (DJF) seasons over the period 1993-2012. Blue, red, yellow and black colors indicate that the maximum correlation is obtained for GloSea5, ECMWF, MF and the Multi-Model Mean respectively (similar to figure 3 in Mishra et al., 2018).

### 3.- Computing and plotting Root Mean Square error (RMS)
The same function can be used to compute the RMS error by defining the parameter `metric` as 'rms'.
```r
AnomDJF <- CST_MultiMetric(exp = ano_exp, obs = ano_obs, metric = 'rms',
multimodel = TRUE)
```
The following lines are necessary to obtain the plot which visualizes the best model given this metric for each grid point.
```r
PlotCombinedMap(AnomDJF$data$rms[,1,1,,], lon = Lon, lat = Lat, map_select_fun = min,
display_range = c(0, ceiling(max(abs(AnomDJF$data$rms)))), map_dim = 'nexp',
legend_scale = 0.5, brks = 11,
cols = list(c('black', 'white'),
c('darkblue', 'white'),
c('darkred', 'white'),
c('darkorange', 'white')),
bar_titles = c("MMM", AnomDJF$attrs$Datasets),
width = 14, height = 8)
```

### 4.- Computing and plotting Root Mean Square error Skill Scores (RMSSS)
By running the following lines a plot for the best model given the RMSSS is obtained.
When the parameter `metric` is defined as 'rmsss', the RMSSS values are stored in the `rmsss` element of the `data` component of the `CST_MultiMetric` output.
Notice that the perfect RMSSS is 1, and that the parameter `map_select_fun` of the `PlotCombinedMap` function (see the *s2dv* R package) has been defined so as to select the model whose RMSSS is closest to 1 (i.e. the best model).
```r
AnomDJF <- CST_MultiMetric(exp = ano_exp, obs = ano_obs, metric = 'rmsss',
multimodel = TRUE)
PlotCombinedMap(AnomDJF$data$rmsss[,1,1,,], lon = Lon, lat = Lat,
map_select_fun = function(x) {x[which.min(abs(x - 1))]},
display_range = c(0,
ceiling(max(abs(AnomDJF$data$rmsss)))), map_dim = 'nexp',
legend_scale = 0.5, brks = 11,
cols = list(c('white', 'black'),
c('white', 'darkblue'),
c('white', 'darkred'),
c('white', 'darkorange')),
bar_titles = c("MMM", AnomDJF$attrs$Datasets),
width = 14, height = 8)
```

|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/MultiModelSkill_vignette.Rmd
|
## ----cars---------------------------------------------------------------------
mth = '11'
temp = 'tas'
precip = 'prlr'
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/MultivarRMSE_vignette.R
|
---
author: "Deborah Verfaillie"
date: "`r Sys.Date()`"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Multivariate RMSE}
%\usepackage[utf8]{inputenc}
---
Multivariate Root Mean Square Error (RMSE)
------------------------------------------
To run this vignette, the next R packages should be installed and loaded:
```r
library(s2dv)
library(RColorBrewer)
library(zeallot)
```
Library *CSTools*, should be installed from CRAN and loaded:
```r
install.packages("CSTools")
library(CSTools)
```
### 1.- Load data
In this example, the seasonal temperature and precipitation forecasts, initialized in November, will be used to assess the GloSea5 seasonal forecasting system from the Met Office, by computing the multivariate RMSE for both temperature and precipitation.
The parameters defined are the initializing month and the variables:
```{r cars}
mth = '11'
temp = 'tas'
precip = 'prlr'
```
The simulations available for this model cover the period 1993-2012. So, the starting and ending dates can be defined by running the following lines:
```r
ini <- 1993
fin <- 2012
start <- as.Date(paste(ini, mth, "01", sep = ""), "%Y%m%d")
end <- as.Date(paste(fin, mth, "01", sep = ""), "%Y%m%d")
dateseq <- format(seq(start, end, by = "year"), "%Y%m%d")
```
The grid onto which all data will be interpolated should also be specified. The observational dataset used in this example is EraInterim.
```r
grid <- "256x128"
obs <- "erainterim"
```
Using the `CST_Start` function from the **CSTools** package, the data available in our data store can be loaded. The following lines show how this function can be used; alternatively, the data can be loaded from a previously saved `.RData` file.
Ask nuria.perez at bsc.es for the data to run the recipe.
```r
latmin = 25
latmax = 75
lonmin = -20
lonmax = 70
dateseq <- format(seq(start, end, by = "year"), "%Y%m")
repos1 <- "/esarchive/exp/glosea5/glosea5c3s/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
exp_T <- CST_Start(dataset = list(list(name = 'glosea5c3s', path = repos1)),
var = temp,
member = startR::indices(1:9),
sdate = dateseq,
ftime = startR::indices(2:4),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r256x128',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
dates_exp <- exp_T$attrs$Dates
repos2 <- "/esarchive/recon/ecmwf/erainterim/monthly_mean/$var$/$var$_$date$.nc"
obs_T <- CST_Start(dataset = list(list(name = 'erainterim', path = repos2)),
var = temp,
date = unique(format(dates_exp, '%Y%m')),
ftime = startR::values(dates_exp),
ftime_across = 'date',
ftime_var = 'ftime',
merge_across_dims = TRUE,
split_multiselected_dims = TRUE,
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r256x128',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lon = NULL,
lat = NULL,
ftime = 'date'),
retrieve = TRUE)
repos3 <- "/esarchive/exp/glosea5/glosea5c3s/monthly_mean/$var$_f24h/$var$_$sdate$.nc"
exp_P <- CST_Start(dataset = list(list(name = 'glosea5c3s', path = repos3)),
var = precip,
member = startR::indices(1:9),
sdate = dateseq,
ftime = startR::indices(2:4),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r256x128',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
dates_exp <- exp_P$attrs$Dates
obs_P <- CST_Start(dataset = list(list(name = 'erainterim', path = repos2)),
var = precip,
date = unique(format(dates_exp, '%Y%m')),
ftime = startR::values(dates_exp),
ftime_across = 'date',
ftime_var = 'ftime',
merge_across_dims = TRUE,
split_multiselected_dims = TRUE,
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r256x128',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lon = NULL,
lat = NULL,
ftime = 'date'),
retrieve = TRUE)
# save(exp_T, obs_T, exp_P, obs_P, file = "./tas_prlr_toydata.RData")
# Or use the following line to load the file provided in .RData format:
# load(file = "./tas_prlr_toydata.RData")
```
There should be four new elements loaded in the R working environment: `exp_T`, `obs_T`, `exp_P` and `obs_P`. The first two elements correspond to the experimental and observed data for temperature and the others are the equivalent for the precipitation data. It is possible to check that they are of class `s2dv_cube` by running:
```
class(exp_T)
class(obs_T)
class(exp_P)
class(obs_P)
```
Loading the data with `CST_Start` returns, for each variable, two objects, one for the experimental data and another for the observed data, with the same elements and compatible dimensions of the data element:
```
> dim(exp_T$data)
dataset var member sdate ftime lat lon
1 1 9 20 3 35 64
> dim(obs_T$data)
dataset var sdate ftime lat lon
1 1 20 3 35 64
```
Latitudes and longitudes of the common grid can be saved:
```r
Lat <- exp_T$coords$lat
Lon <- exp_T$coords$lon
```
The next step is to compute the anomalies of the experimental and observational data using the `CST_Anomaly` function, which is applied to the data of each variable; in this case, they are computed applying a cross-validation technique on individual members:
```
c(ano_exp_T, ano_obs_T) %<-% CST_Anomaly(exp = exp_T, obs = obs_T, cross = TRUE, memb = TRUE)
c(ano_exp_P, ano_obs_P) %<-% CST_Anomaly(exp = exp_P, obs = obs_P, cross = TRUE, memb = TRUE)
```
The original dimensions are preserved and the anomalies are stored in the `data` element of the corresponding object:
```
> str(ano_exp_T$data)
num [1:20, 1, 1:9, 1, 1:3, 1:35, 1:64] -1.399 -0.046 -0.133 0.361 -5.696 ...
> str(ano_obs_T$data)
num [1:20, 1, 1, 1, 1:3, 1:35, 1:64] 1.556 1.397 -0.346 -5.99 -0.273 ...
```
Two lists, one containing the experiment anomalies, `ano_exp`, and another containing the observation anomalies, `ano_obs`, should be put together to serve as input to the function that computes the multivariate RMSE.
Furthermore, weights can be applied to the different variables based on their relative importance (if no weights are given, a value of 1 is automatically assigned to each variable). For this example, we'll give a weight of 2 to the temperature dataset and a weight of 1 to the precipitation dataset:
```r
ano_exp <- list(ano_exp_T, ano_exp_P)
ano_obs <- list(ano_obs_T, ano_obs_P)
weight <- c(2, 1)
```
### 2.- Computing and plotting multivariate RMSEs
The multivariate RMSE gives an indication of the forecast performance (RMSE) for multiple variables simultaneously. Variables can be weighted based on their relative importance.
It is obtained by running the `CST_MultivarRMSE` function:
```r
ano_obs[[1]] <- CST_InsertDim(ano_obs[[1]], posdim = 3, lendim = 1, name = "member")
ano_obs[[2]] <- CST_InsertDim(ano_obs[[2]], posdim = 3, lendim = 1, name = "member")
mvrmse <- CST_MultivarRMSE(exp = ano_exp, obs = ano_obs, weight)
```
The function `CST_MultivarRMSE` returns the multivariate RMSE value for 2 or more variables. The output is an 's2dv_cube' object containing the RMSE values in the `data` element and other relevant information:
```r
> class(mvrmse)
> str(mvrmse$data)
num [1, 1, 1, 1:35, 1:64] 806916 832753 838254 833206 996828 ...
> str(mvrmse$attrs$Variable)
List of 2
$ varName : chr [1:2] "tas" "prlr"
$ metadata:List of 8
..$ lat : num [1:35(1d)] 73.8 72.4 71 69.6 68.2 ...
..$ lon : num [1:64(1d)] 0 1.41 2.81 4.22 5.62 ...
..$ ftime: POSIXct[1:60], format: "1993-12-16 00:00:00" "1994-12-16 00:00:00" ...
..$ tas :List of 11
.. ..$ prec : chr "float"
.. ..$ units : chr "K"
.. ..$ dim :List of 4
.. .. ..$ :List of 10
.. .. .. ..$ name : chr "lon"
.. .. .. ..$ len : int 64
.. .. .. ..$ unlim : logi FALSE
.. .. .. ..$ group_index : int 1
.. .. .. ..$ group_id : int 65536
.. .. .. ..$ id : int 1
.. .. .. ..$ dimvarid :List of 5
.. .. .. .. ..$ id : int 1
.. .. .. .. ..$ group_index: int 1
.. .. .. .. ..$ group_id : int 65536
.. .. .. .. ..$ list_index : num -1
.. .. .. .. ..$ isdimvar : logi TRUE
.. .. .. .. ..- attr(*, "class")= chr "ncid4"
.. .. .. ..$ units : chr "degrees_east"
.. .. .. ..$ vals : num [1:64(1d)] -19.7 -18.3 -16.9 -15.5 -14.1 ...
.. .. .. ..$ create_dimvar: logi TRUE
.. .. .. ..- attr(*, "class")= chr "ncdim4"
.. .. ..$ :List of 10
.. .. .. ..$ name : chr "lat"
.. .. .. ..$ len : int 35
.. .. .. ..$ unlim : logi FALSE
.. .. .. ..$ group_index : int 1
.. .. .. ..$ group_id : int 65536
.. .. .. ..$ id : int 0
.. .. .. ..$ dimvarid :List of 5
.. .. .. .. ..$ id : int 0
.. .. .. .. ..$ group_index: int 1
.. .. .. .. ..$ group_id : int 65536
.. .. .. .. ..$ list_index : num -1
.. .. .. .. ..$ isdimvar : logi TRUE
.. .. .. .. ..- attr(*, "class")= chr "ncid4"
.. .. .. ..$ units : chr "degrees_north"
.. .. .. ..$ vals : num [1:35(1d)] 26 27.4 28.8 30.2 31.6 ...
.. .. .. ..$ create_dimvar: logi TRUE
.. .. .. ..- attr(*, "class")= chr "ncdim4"
.. .. ..$ :List of 10
.. .. .. ..$ name : chr "ensemble"
.. .. .. ..$ len : int 14
.. .. .. ..$ unlim : logi FALSE
.. .. .. ..$ group_index : int 1
.. .. .. ..$ group_id : int 65536
.. .. .. ..$ id : int 3
.. .. .. ..$ dimvarid :List of 5
.. .. .. .. ..$ id : int -1
.. .. .. .. ..$ group_index: int 1
.. .. .. .. ..$ group_id : int 65536
.. .. .. .. ..$ list_index : num -1
.. .. .. .. ..$ isdimvar : logi TRUE
.. .. .. .. ..- attr(*, "class")= chr "ncid4"
.. .. .. ..$ vals : int [1:14] 1 2 3 4 5 6 7 8 9 10 ...
.. .. .. ..$ units : chr ""
.. .. .. ..$ create_dimvar: logi FALSE
.. .. .. ..- attr(*, "class")= chr "ncdim4"
.. .. ..$ :List of 11
.. .. .. ..$ name : chr "time"
.. .. .. ..$ len : int 7
.. .. .. ..$ unlim : logi TRUE
.. .. .. ..$ group_index : int 1
.. .. .. ..$ group_id : int 65536
.. .. .. ..$ id : int 2
.. .. .. ..$ dimvarid :List of 5
.. .. .. .. ..$ id : int 2
.. .. .. .. ..$ group_index: int 1
.. .. .. .. ..$ group_id : int 65536
.. .. .. .. ..$ list_index : num -1
.. .. .. .. ..$ isdimvar : logi TRUE
.. .. .. .. ..- attr(*, "class")= chr "ncid4"
.. .. .. ..$ units : chr "months since 1993-11-15 12:00:00"
.. .. .. ..$ calendar : chr "proleptic_gregorian"
.. .. .. ..$ vals : num [1:7(1d)] 0 1 2 3 4 5 6
.. .. .. ..$ create_dimvar: logi TRUE
.. .. .. ..- attr(*, "class")= chr "ncdim4"
.. ..$ unlim : logi TRUE
.. ..$ make_missing_value: logi TRUE
.. ..$ missval : num -9e+33
.. ..$ hasAddOffset : logi FALSE
.. ..$ hasScaleFact : logi FALSE
.. ..$ table : int 128
.. ..$ _FillValue : num -9e+33
.. ..$ missing_value : num -9e+33
..$ lat : num [1:35(1d)] 73.8 72.4 71 69.6 68.2 ...
..$ lon : num [1:64(1d)] 0 1.41 2.81 4.22 5.62 ...
..$ ftime: POSIXct[1:60], format: "1993-12-16 00:00:00" "1994-12-16 00:00:00" ...
..$ prlr :List of 9
.. ..$ prec : chr "float"
.. ..$ units : chr "m s-1"
.. ..$ dim :List of 4
.. .. ..$ :List of 10
.. .. .. ..$ name : chr "lon"
.. .. .. ..$ len : int 64
.. .. .. ..$ unlim : logi FALSE
.. .. .. ..$ group_index : int 1
.. .. .. ..$ group_id : int 65536
.. .. .. ..$ id : int 1
.. .. .. ..$ dimvarid :List of 5
.. .. .. .. ..$ id : int 1
.. .. .. .. ..$ group_index: int 1
.. .. .. .. ..$ group_id : int 65536
.. .. .. .. ..$ list_index : num -1
.. .. .. .. ..$ isdimvar : logi TRUE
.. .. .. .. ..- attr(*, "class")= chr "ncid4"
.. .. .. ..$ units : chr "degrees_east"
.. .. .. ..$ vals : num [1:64(1d)] -19.7 -18.3 -16.9 -15.5 -14.1 ...
.. .. .. ..$ create_dimvar: logi TRUE
.. .. .. ..- attr(*, "class")= chr "ncdim4"
.. .. ..$ :List of 10
.. .. .. ..$ name : chr "lat"
.. .. .. ..$ len : int 35
.. .. .. ..$ unlim : logi FALSE
.. .. .. ..$ group_index : int 1
.. .. .. ..$ group_id : int 65536
.. .. .. ..$ id : int 0
.. .. .. ..$ dimvarid :List of 5
.. .. .. .. ..$ id : int 0
.. .. .. .. ..$ group_index: int 1
.. .. .. .. ..$ group_id : int 65536
.. .. .. .. ..$ list_index : num -1
.. .. .. .. ..$ isdimvar : logi TRUE
.. .. .. .. ..- attr(*, "class")= chr "ncid4"
.. .. .. ..$ units : chr "degrees_north"
.. .. .. ..$ vals : num [1:35(1d)] 26 27.4 28.8 30.2 31.6 ...
.. .. .. ..$ create_dimvar: logi TRUE
.. .. .. ..- attr(*, "class")= chr "ncdim4"
.. .. ..$ :List of 10
.. .. .. ..$ name : chr "ensemble"
.. .. .. ..$ len : int 14
.. .. .. ..$ unlim : logi FALSE
.. .. .. ..$ group_index : int 1
.. .. .. ..$ group_id : int 65536
.. .. .. ..$ id : int 3
.. .. .. ..$ dimvarid :List of 5
.. .. .. .. ..$ id : int -1
.. .. .. .. ..$ group_index: int 1
.. .. .. .. ..$ group_id : int 65536
.. .. .. .. ..$ list_index : num -1
.. .. .. .. ..$ isdimvar : logi TRUE
.. .. .. .. ..- attr(*, "class")= chr "ncid4"
.. .. .. ..$ vals : int [1:14] 1 2 3 4 5 6 7 8 9 10 ...
.. .. .. ..$ units : chr ""
.. .. .. ..$ create_dimvar: logi FALSE
.. .. .. ..- attr(*, "class")= chr "ncdim4"
.. .. ..$ :List of 11
.. .. .. ..$ name : chr "time"
.. .. .. ..$ len : int 7
.. .. .. ..$ unlim : logi TRUE
.. .. .. ..$ group_index : int 1
.. .. .. ..$ group_id : int 65536
.. .. .. ..$ id : int 2
.. .. .. ..$ dimvarid :List of 5
.. .. .. .. ..$ id : int 2
.. .. .. .. ..$ group_index: int 1
.. .. .. .. ..$ group_id : int 65536
.. .. .. .. ..$ list_index : num -1
.. .. .. .. ..$ isdimvar : logi TRUE
.. .. .. .. ..- attr(*, "class")= chr "ncid4"
.. .. .. ..$ units : chr "months since 1993-11-15 12:00:00"
.. .. .. ..$ calendar : chr "proleptic_gregorian"
.. .. .. ..$ vals : num [1:7(1d)] 0 1 2 3 4 5 6
.. .. .. ..$ create_dimvar: logi TRUE
.. .. .. ..- attr(*, "class")= chr "ncdim4"
.. ..$ unlim : logi TRUE
.. ..$ make_missing_value: logi FALSE
.. ..$ missval : num 1e+30
.. ..$ hasAddOffset : logi FALSE
.. ..$ hasScaleFact : logi FALSE
.. ..$ table : int 128
```
The following lines plot the multivariate RMSE
```r
PlotEquiMap(mvrmse$data, lon = Lon, lat = Lat, filled.continents = FALSE,
toptitle = "Multivariate RMSE tas, prlr 1993 - 2012", colNA = "white")
```

|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/MultivarRMSE_vignette.Rmd
|
## ----warning=FALSE,message=FALSE,error=FALSE----------------------------------
library(CSTools)
## ----fig.show = 'hide',warning=F----------------------------------------------
fcst <- data.frame(fcst1 = rnorm(mean = 25, sd = 3, n = 30),
fcst2 = rnorm(mean = 23, sd = 4.5, n = 30))
PlotForecastPDF(fcst, tercile.limits = c(20, 26))
## ----fig.show = 'hide',warning=F----------------------------------------------
fcst <- array(rnorm(mean = 25, sd = 2, n = 90), dim = c(member = 30, 3))
PlotForecastPDF(fcst, tercile.limits = c(23, 27))
## ----fig.show = 'hide',warning=F----------------------------------------------
fcst <- data.frame(fcst1 = rnorm(mean = 25, sd = 3, n = 30),
fcst2 = rnorm(mean = 23, sd = 4.5, n = 30))
PlotForecastPDF(fcst, tercile.limits = c(20, 26), var.name = "Temperature (ºC)",
title = "Forecasts valid for 2019-01-01 at Sunny Hills",
fcst.names = c("model a", "model b"),
color.set = "s2s4e")
## ----fig.show = 'hide',warning=F----------------------------------------------
fcst <- data.frame(fcst1 = rnorm(mean = 25, sd = 3, n = 30),
fcst2 = rnorm(mean = 28, sd = 4.5, n = 30), fcst3 = rnorm(mean = 17, sd = 3, n = 30))
PlotForecastPDF(fcst, tercile.limits = rbind(c(20, 26), c(22, 28), c(15, 22)),
var.name = "Temperature (ºC)", title = "Forecasts at Sunny Hills",
fcst.names = c("January", "February", "March"), obs = c(21, 24, 17),
extreme.limits = rbind(c(18, 28), c(20, 30), c(12, 24)),
color.set = "s2s4e")
## ----fig.show = 'hide',warning=F----------------------------------------------
fcst <- data.frame(fcst1 = lonlat_temp_st$exp$data[1,1,,1,1,1,1] - 273.15,
fcst2 = lonlat_temp_st$exp$data[1,1,,1,2,1,1] - 273.15)
PlotForecastPDF(fcst, tercile.limits = c(5, 7), extreme.limits = c(4, 8),
var.name = "Temperature (ºC)",
title = "Forecasts initialized on Nov 2000 at sample Mediterranean region",
fcst.names = c("November", "December"))
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/PlotForecastPDF.R
|
---
author: "Francesc Roura-Adserias and Llorenç Lledó"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Plot Forecast PDFs}
%\usepackage[utf8]{inputenc}
---
Plot Forecast PDFs (Probability Distribution Functions)
------------------------------------------
The *CSTools* library should be installed from CRAN and loaded:
```{r,warning=FALSE,message=FALSE,error=FALSE}
library(CSTools)
```
### 1. A simple example
The first step is to put your forecasts in an appropriate format. For this vignette we generate some random values from two normal distributions. By default, PlotForecastPDF will plot the ensemble members, the estimated density distributions and the tercile probabilities.
```{r,fig.show = 'hide',warning=F}
fcst <- data.frame(fcst1 = rnorm(mean = 25, sd = 3, n = 30),
fcst2 = rnorm(mean = 23, sd = 4.5, n = 30))
PlotForecastPDF(fcst, tercile.limits = c(20, 26))
```

Input data can also be provided as a two-dimensional array, as long as one of the dimensions is named 'member' or is specified through the 'memb_dim' parameter:
```{r,fig.show = 'hide',warning=F}
fcst <- array(rnorm(mean = 25, sd = 2, n = 90), dim = c(member = 30, 3))
PlotForecastPDF(fcst, tercile.limits = c(23, 27))
```
### 2. Customizing the appearance of your plots
Some parameters allow you to customize your plot by changing the title, the forecast labels, the variable name and units, or the colors.
```{r,fig.show = 'hide',warning=F}
fcst <- data.frame(fcst1 = rnorm(mean = 25, sd = 3, n = 30),
fcst2 = rnorm(mean = 23, sd = 4.5, n = 30))
PlotForecastPDF(fcst, tercile.limits = c(20, 26), var.name = "Temperature (ºC)",
title = "Forecasts valid for 2019-01-01 at Sunny Hills",
fcst.names = c("model a", "model b"),
color.set = "s2s4e")
```

### 3. Adding extremes and observed values
Optionally, we can include the probability of extreme values or the actual observed values. The tercile limits, extreme limits and observed values can be specified for each panel separately.
```{r,fig.show = 'hide',warning=F}
fcst <- data.frame(fcst1 = rnorm(mean = 25, sd = 3, n = 30),
fcst2 = rnorm(mean = 28, sd = 4.5, n = 30), fcst3 = rnorm(mean = 17, sd = 3, n = 30))
PlotForecastPDF(fcst, tercile.limits = rbind(c(20, 26), c(22, 28), c(15, 22)),
var.name = "Temperature (ºC)", title = "Forecasts at Sunny Hills",
fcst.names = c("January", "February", "March"), obs = c(21, 24, 17),
extreme.limits = rbind(c(18, 28), c(20, 30), c(12, 24)),
color.set = "s2s4e")
```

### 4. Saving your plot to a file
PlotForecastPDF uses ggplot2, so you can save the output of the function to a variable and operate with it as a ggplot2 object. For instance, you can save it to a file:
```
library(ggplot2)
fcst <- array(rnorm(mean = 25, sd = 2, n = 90), dim = c(member = 30, 3))
plot <- PlotForecastPDF(fcst, tercile.limits = c(23, 27))
ggsave("outfile.pdf", plot, width = 7, height = 5)
```
### 5. A reproducible example using lonlat_temp_st
This final example uses the sample lonlat data from CSTools. It is suitable for checking reproducibility of results.
```{r,fig.show = 'hide',warning=F}
fcst <- data.frame(fcst1 = lonlat_temp_st$exp$data[1,1,,1,1,1,1] - 273.15,
fcst2 = lonlat_temp_st$exp$data[1,1,,1,2,1,1] - 273.15)
PlotForecastPDF(fcst, tercile.limits = c(5, 7), extreme.limits = c(4, 8),
var.name = "Temperature (ºC)",
title = "Forecasts initialized on Nov 2000 at sample Mediterranean region",
fcst.names = c("November", "December"))
```

|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/PlotForecastPDF.Rmd
|
## ---- echo = FALSE------------------------------------------------------------
knitr::opts_chunk$set(eval = FALSE)
## -----------------------------------------------------------------------------
# install.packages('CSTools')
# library(CSTools)
## -----------------------------------------------------------------------------
# exp <- lonlat_prec_st
## -----------------------------------------------------------------------------
# dim(exp$data)
# # dataset var member sdate ftime lat lon
# # 1 1 6 3 31 4 4
## -----------------------------------------------------------------------------
# ilon <- which(exp$coords$lon %in% 5:12)
# ilat <- which(exp$coords$lat %in% 40:47)
# exp$data <- exp$data[, , , , , ilat, ilon, drop = FALSE]
# names(dim(exp$data)) <- names(dim(lonlat_prec_st$data))
# exp$coords$lon <- exp$coords$lon[ilon]
# exp$coords$lat <- exp$coords$lat[ilat]
## -----------------------------------------------------------------------------
# downscaled <- RainFARM(exp$data, exp$coords$lon, exp$coords$lat,
# nf = 20, kmin = 1, nens = 3,
# time_dim = c("member", "ftime"))
## -----------------------------------------------------------------------------
# a <- exp$data[1, 1, 1, 1, 17, , ] * 86400 * 1000
# a[a > 60] <- 60
# image(exp$coords$lon, rev(exp$coords$lat), t(apply(a, 2, rev)), xlab = "lon", ylab = "lat",
# col = rev(terrain.colors(20)), zlim = c(0,60))
# map("world", add = TRUE)
# title(main = "pr 17/03/2010 original")
# a <- exp_down$data[1, 1, 1, 1, 1, 17, , ] * 86400 * 1000
# a[a > 60] <- 60
# image(exp_down$coords$lon, rev(exp_down$coords$lat), t(apply(a, 2, rev)), xlab = "lon", ylab = "lat",
# col = rev(terrain.colors(20)), zlim = c(0, 60))
# map("world", add = TRUE)
# title(main = "pr 17/03/2010 downscaled")
## -----------------------------------------------------------------------------
# ww <- CST_RFWeights("./worldclim.nc", nf = 20, lon = exp$coords$lon, lat = exp$coords$lat)
## -----------------------------------------------------------------------------
# exp_down_weights <- CST_RainFARM(exp, nf = 20, kmin = 1, nens = 3,
# weights = ww, time_dim = c("member", "ftime"))
## -----------------------------------------------------------------------------
# exp_down1 <- exp_down$data[, , , , , , , 1]
# exp_down_weights1 <- exp_down_weights$data[, , , , , , , 1]
# dim(exp_down1) <- c(member = 6 * 3 * 31, lat = 80, lon = 80)
# dim(exp_down_weights1) <- c(member = 6 * 3 * 31, lat = 80, lon = 80)
# ad <- apply(exp_down1, c(2, 3), mean)
# adw <- apply(exp_down_weights1, c(2, 3), mean);
#
# png("Figures/RainFARM_fig2.png", width = 640, height = 243)
# par(mfrow = c(1,3))
# a <- exp_down_weights$data[1, 1, 1, 1, 17, , ,1] * 86400 * 1000
# a[a > 60] <- 60
# image(exp_down$coords$lon, rev(exp_down$coords$lat), t(apply(a, 2, rev)), xlab = "lon",
# ylab = "lat", col = rev(terrain.colors(20)), zlim = c(0, 60))
# map("world", add = TRUE)
# title(main = "pr 17/03/2010 with weights")
# a <- ad * 86400 * 1000
# a[a > 5] <- 5
# image(exp_down$coords$lon, rev(exp_down$coords$lat), t(apply(a, 2, rev)), xlab = "lon",
# ylab = "lat", col = rev(terrain.colors(20)), zlim = c(0, 5))
# map("world", add = TRUE)
# title(main = "climatology no weights")
# a <- adw * 86400 * 1000
# a[a > 5] <- 5
# image(exp_down$coords$lon, rev(exp_down$coords$lat), t(apply(a, 2, rev)), xlab = "lon",
# ylab = "lat", col = rev(terrain.colors(20)), zlim = c(0, 5))
# map("world", add = TRUE)
# title(main = "climatology with weights")
# dev.off()
## -----------------------------------------------------------------------------
# slopes <- CST_RFSlope(exp, time_dim = c("member", "ftime"))
# dim(slopes)
# # dataset var sdate
# # 1 1 3
# # slopes
# # , , 1
#
# # [,1]
# # [1,] 1.09957
#
# # , , 2
#
# # [,1]
# # [1,] 1.768861
#
# # , , 3
#
# # [,1]
# # [1,] 1.190176
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/RainFARM_vignette.R
|
---
title: "Rainfall Filtered Autoregressive Model (RainFARM) precipitation downscaling"
author: "Jost von Hardenberg (ISAC-CNR)"
date: "26/03/2019"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{RainFARM}
%\usepackage[utf8]{inputenc}
---
<!--
```{r, echo = FALSE}
knitr::opts_chunk$set(eval = FALSE)
```
-->
## Introduction
It often becomes necessary to bridge the scale gap between climate scenarios of precipitation obtained from global and regional climate models and the small scales needed for impact studies, in order to measure uncertainty and reproduce extremes at these scales. An effective approach is provided by stochastic downscaling techniques (Ferraris et al. 2003a,b), in particular so-called full-field weather generators, which aim at generating synthetic spatiotemporal precipitation fields whose statistical properties are consistent with the small-scale statistics of observed precipitation, based only on knowledge of the large-scale precipitation field. These methods allow, for example, estimating uncertainties at small scales in rainfall scenarios by generating large ensembles of synthetic small-scale precipitation fields.
In particular the _Rainfall Filtered Autoregressive Model_ (**RainFARM**; Rebora et al. 2006a,b)
is a stochastic downscaling procedure based on the nonlinear transformation of a linearly correlated stochastic field, generated by small-scale extrapolation of the Fourier spectrum of a large-scale precipitation field. Developed originally for downscaling at weather timescales, the method has been adapted for downscaling at climate timescales by D'Onofrio et al. 2014 and recently improved for regions with complex orography in Terzago et al. 2018.
An efficient implementation of RainFARM is provided for CSTools by the `CST_RainFARM()` and `RainFARM()` functions.
## Downscaling seasonal precipitation forecasts with RainFARM
### Preliminary setup
In order to run the examples in this vignette, the *CSTools* package and some other support R packages need to be loaded by running:
```{r}
install.packages('CSTools')
library(CSTools)
```
We use test data provided by CSTools to load a seasonal precipitation forecast:
```{r}
exp <- lonlat_prec_st
```
This gives us a CSTools object `exp`, containing an element `exp$data` with dimensions:
```{r}
dim(exp$data)
# dataset var member sdate ftime lat lon
# 1 1 6 3 31 4 4
```
There are 6 ensemble members available in the data set, 3 starting dates and 31 forecast times, which refer to daily values in the month of March following starting dates on November 1st in the years 2010, 2011, 2012. Please notice that RainFARM (in this version) only accepts square domains, possibly with an even number of pixels on each side, so we always need to select an appropriate cutout. Also, there are time and memory limitations when a large ensemble of downscaled realizations is generated with RainFARM, so that selecting a smaller target area is advised. On the other hand, if spectral slopes are to be determined from the large scales we will still need enough resolution to allow this estimation. In this example we have preselected a 4x4 pixel cutout at resolution 1 degree in a smaller area lon=[6,9], lat=[44,47] covering Northern Italy.
```{r}
ilon <- which(exp$coords$lon %in% 5:12)
ilat <- which(exp$coords$lat %in% 40:47)
exp$data <- exp$data[, , , , , ilat, ilon, drop = FALSE]
names(dim(exp$data)) <- names(dim(lonlat_prec_st$data))
exp$coords$lon <- exp$coords$lon[ilon]
exp$coords$lat <- exp$coords$lat[ilat]
```
### Standard downscaling without climatological weights
Our goal is to downscale with RainFARM these data from the resolution of 1 degree (about 100 km at these latitudes) to 0.05 degrees (about 5 km) using the `CST_RainFARM()` function. This means that we need to increase resolution by a factor `nf = 20`.
RainFARM can automatically compute its only free parameter, i.e. the spatial spectral slope, from the large-scale field (here with a size of only 4x4 pixels, but in general we recommend selecting at least 8x8 pixels).
In this example we would like to compute this slope as an average over the _member_ and _ftime_ dimensions, while we will use different slopes for the remaining _dataset_ and _sdate_ dimensions (a different choice may be more appropriate in a real application). To obtain this we specify the parameter `time_dim = c("member", "ftime")`. The slope is computed starting from the wavenumber corresponding to the box, `kmin = 1`. We create 3 stochastic realizations for each dataset, member, starting date and forecast time with `nens = 3`. The command to downscale and the resulting fields are:
```
exp_down <- CST_RainFARM(exp, nf = 20, kmin = 1, nens = 3,
time_dim = c("member", "ftime"))
dim(exp_down$data)
# dataset var member realization sdate ftime lat lon
# 1 1 6 3 3 31 80 80
str(exp_down$coords$lon)
# num [1:80] 5.53 5.58 5.62 5.67 5.72 ...
str(exp_down$coords$lat)
# num [1:80] 47.5 47.4 47.4 47.3 47.3 ...
```
The function returns an array `exp_down$data` with the additional "realization" dimension for the stochastic ensemble with 3 members. The longitudes and latitudes have been correspondingly interpolated to the finer resolution.
Alternatively we could have used the "reduced" function `RainFARM` which accepts directly a data array (with arbitrary dimensions, provided a longitude, a latitude and a "time" dimension exist) and two arrays to describe longitudes and latitudes:
```{r}
downscaled <- RainFARM(exp$data, exp$coords$lon, exp$coords$lat,
nf = 20, kmin = 1, nens = 3,
time_dim = c("member", "ftime"))
```
The resulting array has the same dimensions as the `$data` element in the result of `CST_RainFARM`.
Each instant and each realization will of course be different, but let's plot and compare the original and the downscaled data of the first ensemble member and for the first realization for a specific forecast date (Fig. 1):
<!--
png("Figures/RainFARM_fig1.png", width = 640, height = 365)
par(mfrow = c(1,2))
-->
```{r}
a <- exp$data[1, 1, 1, 1, 17, , ] * 86400 * 1000
a[a > 60] <- 60
image(exp$coords$lon, rev(exp$coords$lat), t(apply(a, 2, rev)), xlab = "lon", ylab = "lat",
col = rev(terrain.colors(20)), zlim = c(0,60))
map("world", add = TRUE)
title(main = "pr 17/03/2010 original")
a <- exp_down$data[1, 1, 1, 1, 1, 17, , ] * 86400 * 1000
a[a > 60] <- 60
image(exp_down$coords$lon, rev(exp_down$coords$lat), t(apply(a, 2, rev)), xlab = "lon", ylab = "lat",
col = rev(terrain.colors(20)), zlim = c(0, 60))
map("world", add = TRUE)
title(main = "pr 17/03/2010 downscaled")
```
<!-- dev.off() -->

RainFARM has downscaled the original field with a realistic fine-scale correlation structure. Precipitation is conserved in an average sense (in this case smoothing both the original and the downscaled fields with a circular kernel with a diameter equal to the original field grid spacing would lead to the same results). The downscaled field presents more extreme precipitation peaks.
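As a quick sanity check of this conservation property, one can compare the domain-average precipitation of the original and downscaled fields for the same member, realization and time step (a minimal sketch, reusing the indexing from the plotting code above; the two averages should be very close):
```
a_orig <- exp$data[1, 1, 1, 1, 17, , ]          # original field (4 x 4)
a_down <- exp_down$data[1, 1, 1, 1, 1, 17, , ]  # downscaled field (80 x 80)
mean(a_orig)
mean(a_down)
```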
### Downscaling using climatological weights
The area of interest in our example presents a complex orography, but the basic RainFARM algorithm used does not consider topographic elevation in deciding how to distribute fine-scale precipitation. A long term climatology of the downscaled fields would have a resolution comparable to that of the original coarse fields and would not resemble the fine-scale structure of an observed climatology. If an external fine-scale climatology of precipitation is available, we can use the method discussed in Terzago et al. (2018) to change the distribution of precipitation by RainFARM for each timestep, so that the long-term average is close to this reference climatology in terms of precipitation distribution (while the total precipitation amount of the original fields to downscale is preserved).
Suitable climatology files could be for example a fine-scale precipitation climatology from a high-resolution regional climate model (see e.g. Terzago et al. 2018), a local high-resolution gridded climatology from observations, or a reconstruction such as those which can be downloaded from the WORLDCLIM (https://www.worldclim.org) or CHELSA (https://chelsa-climate.org/) websites. The latter data will need to be converted to NetCDF format before being used (see for example the GDAL tools, https://gdal.org/).
We will assume that a copy of the WORLDCLIM precipitation climatology at 30 arcseconds (about 1 km resolution) is available in the local file `worldclim.nc`. From this file we can derive suitable weights to be used with RainFARM using the `CST_RFWeights` function as follows:
```{r}
ww <- CST_RFWeights("./worldclim.nc", nf = 20, lon = exp$coords$lon, lat = exp$coords$lat)
```
The result is a two-dimensional weights matrix with the same `lon` and `lat` dimensions as requested.
The weights (varying around an average value of 1) encode how to distribute precipitation differently in each stochastic realization of RainFARM.
We call again `CST_RainFARM()`, this time passing climatological weights:
```{r}
exp_down_weights <- CST_RainFARM(exp, nf = 20, kmin = 1, nens = 3,
weights = ww, time_dim = c("member", "ftime"))
```
We plot again the same realization as before in Fig. 2 (left panel). The WORLDCLIM climatology data used in this case have missing values over the ocean, so that the resulting downscaled maps have valid points only over land.
From a single realization and time it is not possible to see that a more realistic precipitation distribution has been achieved, but this becomes clear if we compare the climatological average over all 3 stochastic ensemble realizations, over all members and over all forecast times. The resulting plots in panels Fig. 2b,c show that the RainFARM climatology downscaled without weights presents on average a very coarse structure, comparable with that of the original fields, while when using the weights a much more realistic distribution is achieved.
<!--
```{r}
exp_down1 <- exp_down$data[, , , , , , , 1]
exp_down_weights1 <- exp_down_weights$data[, , , , , , , 1]
dim(exp_down1) <- c(member = 6 * 3 * 31, lat = 80, lon = 80)
dim(exp_down_weights1) <- c(member = 6 * 3 * 31, lat = 80, lon = 80)
ad <- apply(exp_down1, c(2, 3), mean)
adw <- apply(exp_down_weights1, c(2, 3), mean);
png("Figures/RainFARM_fig2.png", width = 640, height = 243)
par(mfrow = c(1,3))
a <- exp_down_weights$data[1, 1, 1, 1, 17, , ,1] * 86400 * 1000
a[a > 60] <- 60
image(exp_down$coords$lon, rev(exp_down$coords$lat), t(apply(a, 2, rev)), xlab = "lon",
ylab = "lat", col = rev(terrain.colors(20)), zlim = c(0, 60))
map("world", add = TRUE)
title(main = "pr 17/03/2010 with weights")
a <- ad * 86400 * 1000
a[a > 5] <- 5
image(exp_down$coords$lon, rev(exp_down$coords$lat), t(apply(a, 2, rev)), xlab = "lon",
ylab = "lat", col = rev(terrain.colors(20)), zlim = c(0, 5))
map("world", add = TRUE)
title(main = "climatology no weights")
a <- adw * 86400 * 1000
a[a > 5] <- 5
image(exp_down$coords$lon, rev(exp_down$coords$lat), t(apply(a, 2, rev)), xlab = "lon",
ylab = "lat", col = rev(terrain.colors(20)), zlim = c(0, 5))
map("world", add = TRUE)
title(main = "climatology with weights")
dev.off()
```
-->

### Determining the spectral slopes
The only free parameter of the RainFARM method is represented by the spatial spectral slope used to extrapolate the Fourier spectrum to the unresolved scales. When fine-scale precipitation data for the study area are available, this slope could be determined offline from a study of these data. Otherwise this slope can be determined directly from the large-scale fields to downscale. The RainFARM functions compute this parameter automatically by default over the input dimensions specified by the `time_dim` parameter. In many practical applications a careful study of an appropriate average to be used may still be necessary. This can be achieved using the function `CST_RFSlope()`. If for example we wish to determine the average slopes over all members and forecast times, but computing separate slopes for each starting date and dataset, we can do so with:
```{r}
slopes <- CST_RFSlope(exp, time_dim = c("member", "ftime"))
dim(slopes)
# dataset var sdate
# 1 1 3
# slopes
# , , 1
# [,1]
# [1,] 1.09957
# , , 2
# [,1]
# [1,] 1.768861
# , , 3
# [,1]
# [1,] 1.190176
```
which returns an array of spectral slopes, one for each "dataset" and starting date "sdate".
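If preferred, these externally determined slopes can also be kept fixed when downscaling instead of relying on the automatic estimation. A minimal sketch, assuming that the `slope` argument of `CST_RainFARM()` accepts a prescribed value that bypasses the automatic computation:
```
# Sketch (assumption: 'slope' overrides the automatic slope estimation)
exp_down_fix <- CST_RainFARM(exp, nf = 20, kmin = 1, nens = 3,
                             slope = 1.7, time_dim = c("member", "ftime"))
```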
### Compacting dimensions
RainFARM creates an additional dimension in the `$data` array of the output data structure, named "realization". This is an additional ensemble dimension used to accommodate multiple stochastic realizations (if `nens > 1` is chosen), in addition to the "member" dimension of the input array (which instead is usually used to represent, for example, multiple forecast realizations with the same model from the same starting date). Having a separate ensemble dimension for the stochastic ensemble allows evaluating independently the uncertainty due to the small scales resolved with downscaling, compared to other sources of uncertainty such as the dependence on initial conditions gauged for example by the original "member" dimension. Since other CSTools functions support only one ensemble dimension named "member", it may be useful in some cases to compact these two ensemble dimensions together. In the output array the "realization" dimension is placed immediately after the "member" dimension, so the user could do this after calling the function by reshaping the dimensions of the array. It is also possible to use the option `drop_realization_dim` of the `RainFARM` and `CST_RainFARM` functions, as shown in the sketch below.
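A minimal sketch, reusing the objects defined earlier in this vignette (6 members and `nens = 3` stochastic realizations):
```
exp_down_flat <- CST_RainFARM(exp, nf = 20, kmin = 1, nens = 3,
                              time_dim = c("member", "ftime"),
                              drop_realization_dim = TRUE)
dim(exp_down_flat$data)
# no separate "realization" dimension; "member" now has length 6 * 3 = 18
```
When set to TRUE, `drop_realization_dim` has the following behaviour: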
1) if `nens == 1`: the "realization" dimension is dropped;
2) if `nens > 1` and a "member" dimension exists: the
"realization" and "member" dimensions are compacted and the
resulting dimension is named "member";
3) if `nens > 1` and a "member" dimension does not exist: the
"realization" dimension is renamed to "member".
## Bibliography
* Terzago, S., Palazzi, E., von Hardenberg, J. Stochastic downscaling of precipitation in complex orography: A simple method to reproduce a realistic fine-scale climatology. Natural Hazards and Earth System Sciences, 18(11), 2825-2840, doi:10.5194/nhess-18-2825-2018 (2018)
* D'Onofrio, D.; Palazzi, E., von Hardenberg, J., Provenzale A., Calmanti S. Stochastic Rainfall Downscaling of Climate Models. J of Hydrometeorology 15 (2), 830-843, doi:10.1175/JHM-D-13-096.1 (2014)
* Rebora, N., L. Ferraris, J. von Hardenberg, A. Provenzale. Rainfall downscaling and flood forecasting: A case study in the Mediterranean area. Nat. Hazards Earth Syst. Sci., 6, 611-619, doi:10.5194/nhess-6-611-2006 (2006a)
* Rebora, N., Ferraris, L., von Hardenberg, J., Provenzale, A. RainFARM: Rainfall downscaling by a filtered autoregressive model. J. Hydrometeor., 7, 724-738, doi:10.1175/JHM517.1 (2006b)
* Ferraris, L., Gabellani, S., Parodi, U., Rebora, N., von Hardenberg, J., Provenzale, A. Revisiting multifractality in rainfall fields. J. Hydrometeor., 4, 544-551, doi:10.1175/ 1525-7541(2003)004,0544:RMIRF.2.0.CO;2 (2003a)
* Ferraris, L., Gabellani, S., Rebora, N., and Provenzale, A.. A comparison of stochastic models for spatial rainfall downscaling, Water Resour. Res., 39, 1368, https://doi.org/10.1029/2003WR002504, (2003b)
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/RainFARM_vignette.Rmd
|
---
author: "Verónica Torralba, Nicola Cortesi and Núria Pérez-Zanón"
date: "`r Sys.Date()`"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Weather Regime Analysis}
%\usepackage[utf8]{inputenc}
---
Weather regime analysis
========================
Weather regimes are a set of patterns able to characterize the different circulation structures occurring each month/season. This vignette aims to illustrate a step-by-step example of how to perform a weather regime assessment using CSTools functionalities.
### 1- Required packages
The functions to compute the Weather Regimes are part of CSTools, while plotting functions are included in the s2dv package:
```r
library(CSTools)
library(s2dv)
library(zeallot)
```
### 2- Retrieve data from files
The data employed in this example are described below.
- Sea level pressure (psl): this has been selected as the circulation variable; however, other variables such as geopotential at 500 hPa can also be used.
- Region: Euro-Atlantic domain [85.5ºW-45ºE; 27-81ºN].
- Datasets: seasonal predictions from ECMWF System 4 ([**Molteni et al. 2011**](https://www.ecmwf.int/sites/default/files/elibrary/2011/11209-new-ecmwf-seasonal-forecast-system-system-4.pdf)) and ERA-Interim reanalysis ([**Dee et al. 2011**](https://doi.org/10.1002/qj.828)) as a reference dataset.
- Period: 1991-2010. Only 20 years have been selected for illustrative purposes, but the full hindcast period could be used for the analysis.
```r
repos_exp <- paste0('/esarchive/exp/ecmwf/system4_m1/daily_mean/$var$_f6h/',
'$var$_$sdate$.nc')
sdates <- paste0(1991:2010, '1201')
latmin = 27
latmax = 81
lonmin = 274.5
lonmax = 45
exp <- CST_Start(dataset = repos_exp,
var = 'psl',
member = "all",
sdate = sdates,
ftime = startR::indices(1:31),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
dates0 <- exp$attrs$Dates
repos_obs <-"/esarchive/recon/ecmwf/erainterim/daily_mean/$var$_f6h/$var$_$date$.nc"
obs <- CST_Start(dataset = repos_obs,
var = 'psl',
date = unique(format(dates0, '%Y%m')),
ftime = startR::values(dates0),
ftime_across = 'date',
ftime_var = 'ftime',
merge_across_dims = TRUE,
split_multiselected_dims = TRUE,
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
return_vars = list(lon = NULL,
lat = NULL,
ftime = 'date'),
retrieve = TRUE)
```
Notice that you need the files to be stored locally on your computer or server, with the correct configuration file. If you are interested in running this vignette, contact nuria.perez at bsc.es to get a data sample.
The objects returned by `CST_Start()` are of `s2dv_cube` class. They contain, among other elements, the array with the requested data.
```r
> dim(exp$data)
dataset var member sdate ftime lat lon
1 1 15 20 31 77 186
> dim(obs$data)
dataset var sdate ftime lat lon
1 1 20 31 77 186
```
### 3- Daily anomalies based on a smoothed climatology
The weather regimes classification is based on daily anomalies, which have been computed by following these steps:
```r
c(ano_exp, ano_obs) %<-% CST_Anomaly(exp = exp, obs = obs, filter_span = 1,
dat_dim = c('dataset', 'member'))
```
The LOESS filter has been applied to the climatology to remove the short-term variability and retain the annual cycle. The parameter `filter_span` should be adapted to the length of the period used to compute the climatology (e.g. season, month, week,...). In this example we are using daily data, so we have selected `filter_span = 1`.
### 4- Weather regimes in observations
`CST_WeatherRegimes()` function is used to define the clusters based on the sea level pressure anomalies from ERA-Interim. This function is based on the [*kmeans function*](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/kmeans.html)
from the stats R package. In this example we have made different assumptions: four clusters (`ncenters = 4`) will be produced and the Empirical Orthogonal Functions are not used to filter the data (`EOFs = FALSE`), just to take into account the extreme values. More details about the methodology can be found in Cortesi et al. 2018 (submitted).
```r
WR_obs <- CST_WeatherRegimes(data = ano_obs, EOFs = FALSE, ncenters = 4)
```
`CST_WeatherRegimes()` provides an `s2dv_cube` object with several elements. The composites of the 4 weather regimes are stored in `$data`, while `$statistics` contains extra information (`$pvalue`, `$cluster`, `$persistence` and `$frequency`) which are the parameters needed for the weather regimes assessment. Further details about the outputs provided by the `CST_WeatherRegimes()` function can be found in the package documentation or by typing `?CST_WeatherRegimes` in the R session.
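A quick way to inspect these elements (a sketch using the `WR_obs` object created above; the exact set of statistics is described in the function documentation):
```r
names(WR_obs$statistics)
# e.g. "pvalue" "cluster" "persistence" "frequency"
dim(WR_obs$data)  # composites of the 4 regimes on the lat-lon grid
```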
### 5- Visualisation of the observed weather regimes
To plot the composite maps of each regime and the mean frequencies of each cluster, we have employed the `PlotLayout()` and `PlotEquiMap()` functions available in s2dv. The object `WR_obs$data` is divided by 100 to change from Pa to hPa. As the `WR_obs$statistics$frequency` provides the monthly frequencies, the climatological frequencies are obtained as the average across the 20 years of the monthly frequencies. Note that these frequencies could slightly change as a consequence of the randomness inherent to the iterative processes involved in the k-means.
```r
clim_frequencies <- paste0('freq = ', round(MeanDims(WR_obs$statistics$frequency, 1), 1), '%')
PlotLayout(PlotEquiMap, c(1, 2), lon = obs$coords$lon, lat = obs$coords$lat,
var = WR_obs$data / 100,
titles = paste0(paste0('Cluster ', 1:4), ' (', clim_frequencies,' )'),
filled.continents = FALSE,
axelab = FALSE, draw_separators = TRUE, subsampleg = 1,
brks = seq(-16, 16, by = 2),
bar_extra_labels = c(2, 0, 0, 0))
```
<img src="./Figures/observed_regimes.png" />
### 6- Visualisation of the observed regime persistence
Persistence measures the average number of days a regime lasts before evolving to a different regime. To plot regime persistence for each start date, we have employed the `PlotTriangles4Categories()` function available in CSTools.
```r
freq_obs <- WR_obs$statistics$persistence
freq_obs[is.na(freq_obs)] <- 0
dim(freq_obs) <- c(dimy = 20, dimcat = 4, dimx = 1)
PlotTriangles4Categories(freq_obs, toptitle = 'Persistence',
xtitle = 'Start Dates', ytitle = '', xlab = FALSE,
ylabels = substr(sdates, 1, 4), cex_leg = 0.6,
lab_legend = c('AR', 'NAO-', 'BL', 'NAO+'), figure.width = .7)
```
<img src="./Figures/Obs_Persistence.png" width = "400px"/>
### 7- Weather regimes in the predictions
Predicted anomalies for each day, month, member and lead time are matched with the observed clusters (obtained in step 4). The assignment of the anomalies to a pre-defined set of clusters guarantees that the predicted weather regimes have very similar spatial structures to the observed regimes, which is an essential requirement for the verification of weather regimes. This is an example of how to produce a set of weather regimes based on the predictions that can be verified with the observational dataset, but this approach can also be used in an operational context in which the probability of occurrence of each cluster could be estimated.
The matching is based on the minimization of the Euclidean distance (`method = 'distance'`), but it can also be done in terms of spatial correlation (`method = 'ACC'`). However, the computational efficiency is superior for the distance method.
```r
WR_exp <- CST_RegimesAssign(data = ano_exp, ref_maps = WR_obs,
method = 'distance', composite = TRUE, memb = TRUE)
```
`CST_RegimesAssign()` provides an object of 's2dv_cube' class which lists several elements. The composites are stored in the `$data` element, which in the case of `memb = TRUE` corresponds to the mean composites across all members. `$pvalue`, `$cluster` and `$frequency` are stored in the element `$statistics`. These elements contain similar information to that provided by the `WeatherRegime()` function. Further details about the output can be found in the package documentation.
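As before, the content of the returned object can be explored quickly (a sketch using the `WR_exp` object from the call above):
```r
names(WR_exp$statistics)
# e.g. "pvalue" "cluster" "frequency"
dim(WR_exp$data)  # mean composites across members (memb = TRUE)
```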
### 8- Visualisation of the predicted weather regimes
The outputs of `RegimesAssign()` are plotted here to be compared with those obtained for the observational classification (step 5).
```r
PlotLayout(PlotEquiMap, c(1, 2), lon = exp$coords$lon, lat = exp$coords$lat,
var = WR_exp$data/100,
titles = paste0(paste0('Cluster ',1:4), ' (',paste0('freq = ',
round(WR_exp$statistics$frequency,1),'%'),' )'),
filled.continents = FALSE,
axelab = FALSE, draw_separators = TRUE,
subsampleg = 1, brks = seq(-16, 16, by = 2),
bar_extra_labels = c(2, 0, 0, 0))
```
<img src="./Figures/predicted_regimes.png" />
Observed and predicted weather regimes are very similar, although their frequencies are slightly different. Cluster 4 is the Atlantic Ridge and cluster 2 the Blocking pattern, while clusters 3 and 1 are the positive and negative phases of the NAO. These patterns can change depending on the period analyzed.
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/WeatherRegimes_vignette.Rmd
|
#*******************************************************************************
# Title: Example script to create 's2dv_cube' objects
# Author: Eva Rifà Rovira
# Date: 16/01/2024
#*******************************************************************************
# This example shows how to create an 's2dv_cube' object.
# There are two ways of creating an 's2dv_cube' object.
# (1) With the function s2dv_cube(): create it from scratch with any data.
# (2) With the function CST_Start(). This function returns an 's2dv_cube'
# from an 'startR_array'.
# Needed packages
library(CSTools)
library(startR)
################################################################################
#-----------------------------------------------------
# Example 1: Function s2dv_cube() from defined data
#-----------------------------------------------------
# Minimal use case, with s2dv_cube function.
# In this example we use the function s2dv_cube() to create an object of class
# 's2dv_cube' with the correct structure.
# (1) We define the array with named dimensions:
dat <- array(1:100, dim = c(time = 10, lat = 4, lon = 10))
# (2) We define the coordinates as a list of vectors:
coords <- list(time = 1:10, lat = 43:40, lon = 0:9)
# (3) The metadata:
metadata <- list(tas = list(level = '2m'),
lon = list(cdo_grid_name = 'r360x181'),
lat = list(cdo_grid_name = 'r360x181'))
# (4) The creation of Dates array.
# First the initial date:
ini_date <- as.POSIXct('2010-01-01', format = '%Y-%m-%d')
# The sequence of dates
dates <- seq(ini_date, by = 'days', length.out = 10)
# We define the dates dimensions
dim(dates) <- c(time = 10)
# (5) We call the function s2dv_cube()
dat_cube <- s2dv_cube(data = dat, coords = coords,
varName = 'tas', metadata = metadata,
Dates = dates,
when = "2019-10-23 19:15:29 CET",
source_files = c("/path/to/file1.nc", "/path/to/file2.nc"),
Datasets = 'test_dataset')
# We print the result to see the 's2dv_cube' structure:
# > dat_cube
# 's2dv_cube'
# Data [ 1, 2, 3, 4, 5, 6, 7, 8 ... ]
# Dimensions ( time = 10, lat = 4, lon = 10 )
# Coordinates
# * time : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
# * lat : 43, 42, 41, 40
# * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
# Attributes
# Dates : 2010-01-01 2010-01-02 2010-01-03 2010-01-04 2010-01-05 ...
# varName : tas
# metadata :
# tas
# other : level
# lon
# other : cdo_grid_name
# lat
# other : cdo_grid_name
# Datasets : test_dataset
# when : 2019-10-23 19:15:29 CET
# source_files : /path/to/file1.nc ...
#-----------------------------------------------------
# Example 2: Function as.s2dv_cube()
#-----------------------------------------------------
# (1) Example using CST_Start
# NOTE 1: CST_Start() is just a wrapper of function Start() with the
# transformation to 's2dv_cube' object.
# NOTE 2: In order for the auxiliary functions from startR used as input
#   arguments to work, we need to call them explicitly with the startR
#   namespace.
# (e.g. startR::indices())
# We just need to define a CST_Start call with all the information:
repos1 <- "/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
res <- CST_Start(dat = list(list(name = 'system4_m1', path = repos2),
list(name = 'system5_m1', path = repos1)),
var = c('tas', 'sfcWind'),
sdate = c('20160101', '20170101'),
ensemble = startR::indices(1:2),
time = startR::indices(1:2),
lat = startR::indices(1:10),
lon = startR::indices(1:10),
synonims = list(lat = c('lat', 'latitude'),
lon = c('lon', 'longitude')),
return_vars = list(time = 'sdate',
longitude = 'dat',
latitude = 'dat'),
metadata_dims = c('dat', 'var'),
retrieve = TRUE)
# Now we can explore the object:
# 1st level
names(res)
# "data" "dims" "coords" "attrs"
dim(res$data)
# dat var sdate ensemble time lat lon
# 2 2 2 2 2 10 10
res$coords$lon
# [1] 0.000000 0.703125 1.406250 2.109375 2.812500 3.515625 4.218750 4.921875
# [9] 5.625000 6.328125
attr(res$coords$lon, 'indices')
# [1] FALSE
# NOTE: The attribute 'indices' is FALSE, which means that the longitude
# elements are the actual values of the longitude coordinate.
res$coords$ensemble
# [1] 1 2
# attr(,"indices")
# [1] TRUE
# Now we take a look into the Dates array. It must have the time dimensions
# of the data.
dim(res$attrs$Dates)
# sdate time
# 2 2
# To see the nested list structure of the object, we just need to use the
# function str():
str(res)
#-----------------------------------------------------
# (2) Example using as.s2dv_cube() function
# We'll load the data with Start and then we'll transform the 'startR_array'
# to 's2dv_cube' object with the function as.s2dv_cube(). We are going
# to load the same data as before, with the same call:
repos1 <- "/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
res <- Start(dat = list(list(name = 'system4_m1', path = repos2),
list(name = 'system5_m1', path = repos1)),
var = c('tas', 'sfcWind'),
sdate = c('20160101', '20170101'),
ensemble = startR::indices(1:2),
time = startR::indices(1:2),
lat = startR::indices(1:10),
lon = startR::indices(1:10),
synonims = list(lat = c('lat', 'latitude'),
lon = c('lon', 'longitude')),
return_vars = list(time = 'sdate',
longitude = 'dat',
latitude = 'dat'),
metadata_dims = c('dat', 'var'),
retrieve = TRUE)
# Now, we use the function as.s2dv_cube() to transform the 'startR_array'
# into an 's2dv_cube':
res_cube <- as.s2dv_cube(res)
# If we print the object directly in the terminal, we can see
# all the elements nicely:
# > res_cube
# 's2dv_cube'
# Data [ 248.241973876953, 247.365753173828, 6.80753087997437, 5.46453714370728, 247.256896972656, 248.500869750977, 6.25862503051758, 5.76889991760254 ... ]
# Dimensions ( dat = 2, var = 2, sdate = 2, ensemble = 2, time = 2, lat = 10, lon = 10 )
# Coordinates
# * dat : system4_m1, system5_m1
# * var : tas, sfcWind
# * sdate : 20160101, 20170101
# ensemble : 1, 2
# time : 1, 2
# * lat : 89.4628215685774, 88.7669513528422, 88.0669716474306, 87.366063433082, 86.6648030134408, 85.9633721608804, 85.2618460607126, 84.5602613830534, 83.8586381286076, 83.1569881285417
# * lon : 0, 0.703125, 1.40625, 2.109375, 2.8125, 3.515625, 4.21875, 4.921875, 5.625, 6.328125
# Attributes
# Dates : 2016-02-01 2017-02-01 2016-03-01 2017-03-01
# varName : tas sfcWind
# metadata :
# time
# units : hours since 2016-01-01 00:00:00
# other : ndims, size, standard_name, calendar
# lon
# units : degrees_east
# long name : longitude
# other : ndims, size, standard_name, axis
# lat
# units : degrees_north
# long name : latitude
# other : ndims, size, standard_name, axis
# tas
# units : K
# long name : 2 metre temperature
# other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table, grid_type
# sfcWind
# units : m s**-1
# long name : 10 meter windspeed
# other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table, grid_type
# Datasets : system4_m1 ...
# when : 2024-01-17 11:38:27
# source_files : /esarchive/exp/ecmwf/system4_m1/monthly_mean/tas_f6h/tas_20160101.nc ...
# load_parameters :
# ( system4_m1 ) : dat = system4_m1, var = tas ..., sdate = 20160101 ...
# ...
################################################################################
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/usecase/ex1_create.R
|
#*******************************************************************************
# Title: Example script to save 's2dv_cube' to NetCDF using CST_SaveExp
# Author: Eva Rifà Rovira
# Date: 29/11/2024
#*******************************************************************************
# In this script, we'll see multiple ways to store the 's2dv_cube' (CST_SaveExp)
# or the multidimensional array (SaveExp) to NetCDF.
# Needed packages:
library(CSTools)
library(CSIndicators)
library(s2dv)
library(startR)
################################################################################
#-----------------------------------------------------
# Example 1: Multidimensional array and Dates, without metadata and coordinates
#-----------------------------------------------------
# (1.1) Minimal use case, without Dates
data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4))
SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, single_file = TRUE)
SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = NULL, single_file = FALSE) # same result
# (1.2) Forecast time dimension, without Dates
data <- array(1:5, dim = c(ftime = 5, lon = 4, lat = 4))
SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = NULL, single_file = TRUE)
# (1.3) Start date dimension, without Dates
data <- array(1:5, dim = c(sdate = 5, lon = 4, lat = 4))
SaveExp(data, ftime_dim = NULL, memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = 'sdate', single_file = TRUE)
# (1.4) Only forecast time dimension (no sdate), with Dates
data <- array(1:5, dim = c(ftime = 5, lon = 4, lat = 4))
dates <- c('20000101', '20010102', '20020103', '20030104', '20040105')
dates <- as.Date(dates, format = "%Y%m%d", tz = "UTC")
dim(dates) <- c(ftime = 5)
SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = NULL, Dates = dates, single_file = FALSE)
SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = NULL, Dates = dates, single_file = TRUE)
# For this case we obtain the same result using single_file = FALSE or TRUE.
# (1.5) Forecast time and 1 sdate, with Dates
data <- array(1:5, dim = c(sdate = 1, ftime = 5, lon = 4, lat = 4))
dates <- c('20000101', '20010102', '20020103', '20030104', '20040105')
dates <- as.Date(dates, format = "%Y%m%d", tz = "UTC")
dim(dates) <- c(sdate = 1, ftime = 5)
SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = FALSE)
SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = TRUE)
# (1.6) Test global attributes
SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = TRUE,
extra_string = 'test',
global_attrs = list(system = 'tes1', reference = 'test2'))
# (1.7) Test global attributes
SaveExp(data, ftime_dim = 'ftime', memb_dim = NULL, dat_dim = NULL,
var_dim = NULL, sdate_dim = 'sdate', Dates = dates, single_file = FALSE,
extra_string = 'test',
global_attrs = list(system = 'tes1', reference = 'test2'))
#-----------------------------------------------------
# Example 2: Test sample data from Start and from Load
#-----------------------------------------------------
# (2.1) Test SaveExp
exp <- CSTools::lonlat_prec_st
data <- exp$data
Dates = exp$attrs$Dates
coords = exp$coords
varname = exp$attrs$Variable$varName
metadata = exp$attrs$Variable$metadata
SaveExp(data = data, Dates = Dates, coords = coords, varname = varname,
metadata = metadata, ftime_dim = 'ftime', startdates = 1:4,
var_dim = 'var', memb_dim = 'member', dat_dim = 'dataset',
sdate_dim = 'sdate', single_file = FALSE)
SaveExp(data = data, Dates = Dates, coords = coords, varname = varname,
metadata = metadata, ftime_dim = 'ftime', startdates = 1:4,
var_dim = 'var', memb_dim = 'member', dat_dim = 'dataset',
sdate_dim = 'sdate', single_file = TRUE)
# (2.2) lonlat_temp_st$exp in a single file with units 'hours since'
# (2.2.1) We save the data
data <- lonlat_temp_st$exp
CST_SaveExp(data = data, ftime_dim = 'ftime',
var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
units_hours_since = TRUE, single_file = TRUE)
# (2.2.2) Now we read the output with Start:
sdate <- as.vector(lonlat_temp_st$exp$coords$sdate)
path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc")
out <- Start(dat = path,
var = 'tas',
member = 'all',
sdate = 'all',
ftime = 'all',
lat = 'all',
lon = 'all',
return_vars = list(lon = 'dat',
lat = 'dat',
ftime = NULL,
sdate = NULL),
retrieve = TRUE)
attributes(out)$Variables$common$ftime
out_cube <- as.s2dv_cube(out)
out_cube <- CST_ChangeDimNames(out_cube,
original_names = c("dat"),
new_names = c("dataset"))
all.equal(data$data, out_cube$data)
identical(data$data, out_cube$data)
# Plot the results and compare
PlotEquiMap(out_cube$data[,,1,1,1,,], lon = out_cube$coords$lon,
lat = out_cube$coords$lat, filled.continents = FALSE)
PlotEquiMap(lonlat_temp_st$exp$data[,,1,1,1,,], lon = out_cube$coords$lon,
lat = out_cube$coords$lat, filled.continents = FALSE)
# (2.3) lonlat_temp_st$exp in a single file with units of time frequency
# (2.3.1) we save the data
data <- lonlat_temp_st$exp
CST_SaveExp(data = data, ftime_dim = 'ftime',
var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
single_file = TRUE, units_hours_since = FALSE)
dates <- lonlat_temp_st$exp$attrs$Dates
# (2.3.2) Now we read the output with Start:
sdate <- as.vector(lonlat_temp_st$exp$coords$sdate)
path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc")
out <- Start(dat = path,
var = 'tas',
lon = 'all',
lat = 'all',
ftime = 'all',
sdate = 'all',
member = 'all',
return_vars = list(
lon = 'dat',
lat = 'dat',
ftime = NULL,
sdate = NULL),
retrieve = TRUE)
attributes(out)$Variables$common$ftime
# [1] "1 months" "2 months" "3 months"
out_cube2 <- as.s2dv_cube(out)
# (2.4) lonlat_temp_st$exp in separated files with units of hours since
# (2.4.1) we save the data
data <- lonlat_temp_st$exp
CST_SaveExp(data = data, ftime_dim = 'ftime',
var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
single_file = FALSE)
# (2.4.2) we load the data
sdate <- as.vector(lonlat_temp_st$exp$coords$sdate)
path <- paste0(getwd(),"/dat1/$var$/$var$_$sdate$.nc")
out <- Start(dat = path, var = 'tas',
sdate = sdate,
lon = 'all',
lat = 'all',
ftime = 'all',
member = 'all',
return_vars = list(lon = 'dat',
lat = 'dat',
ftime = 'sdate'),
retrieve = TRUE)
out_cube1 <- as.s2dv_cube(out)
# (2.5) lonlat_prec_st$exp in a single file with units of time frequency
# (2.5.1) we save the data
data <- lonlat_prec_st
CST_SaveExp(data = data, ftime_dim = 'ftime',
var_dim = 'var', dat_dim = 'dataset', sdate_dim = 'sdate',
single_file = TRUE, units_hours_since = FALSE)
# (2.5.2) we read the data
sdate <- as.vector(data$coords$sdate)
path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc")
out <- Start(dat = path,
var = 'prlr',
lon = 'all',
lat = 'all',
ftime = 'all',
sdate = 'all',
member = 'all',
return_vars = list(
lon = 'dat',
lat = 'dat',
ftime = NULL,
sdate = NULL),
retrieve = TRUE)
attributes(out)$Variables$common$ftime
# [1] "1 days" "2 days" "3 days" "4 days" "5 days" "6 days" "7 days"
# [8] "8 days" "9 days" "10 days" "11 days" "12 days" "13 days" "14 days"
# [15] "15 days" "16 days" "17 days" "18 days" "19 days" "20 days" "21 days"
# [22] "22 days" "23 days" "24 days" "25 days" "26 days" "27 days" "28 days"
# [29] "29 days" "30 days" "31 days"
out_cube <- as.s2dv_cube(out)
# (2.6) Test observations: lonlat_temp
# (2.6.1) Save the data
data <- lonlat_temp$obs
CST_SaveExp(data = data, ftime_dim = 'ftime', memb_dim = NULL,
var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate',
single_file = TRUE, units_hours_since = FALSE)
# (2.6.2) Now we read the output with Start:
sdate <- c('20001101', '20051101')
path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc")
out <- Start(dat = path,
var = 'tas', # tas
lon = 'all',
lat = 'all',
ftime = 'all',
member = 1,
sdate = 'all',
return_vars = list(
lon = 'dat',
lat = 'dat',
ftime = NULL,
sdate = NULL),
retrieve = TRUE)
dim(out)
attributes(out)$Variables$common$ftime
# (2.7) Test lonlat_prec
# (2.7.1) Save the data
data <- lonlat_prec
CST_SaveExp(data = data, ftime_dim = 'ftime', memb_dim = NULL,
var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate',
single_file = TRUE, units_hours_since = FALSE)
# (2.7.2) Now we read the output with Start:
sdate <- as.vector(data$coords$sdate)
path <- paste0(getwd(),"/$var$_", sdate[1], "_", sdate[length(sdate)], ".nc")
out <- Start(dat = path,
var = 'prlr', # tas
lon = 'all',
lat = 'all',
ftime = 'all',
sdate = 'all',
member = 'all',
return_vars = list(
lon = 'dat',
lat = 'dat',
ftime = NULL,
sdate = NULL),
retrieve = TRUE)
dim(out)
lonlat_prec$dims
# (2.8) Test with ftime_dim NULL
data <- lonlat_temp$exp
data <- CST_Subset(data, along = 'ftime', indices = 1, drop = 'selected')
CST_SaveExp(data = data, ftime_dim = NULL,
var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate',
single_file = FALSE, units_hours_since = FALSE)
#-----------------------------------------------------
# Example 3: Special cases
#-----------------------------------------------------
# (3.1) Two variables and two datasets in separated files
# (3.1.1) We load the data from Start
repos <- "/esarchive/exp/ecmwf/system5_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
repos2 <- "/esarchive/exp/ecmwf/system4_m1/monthly_mean/$var$_f6h/$var$_$sdate$.nc"
data3 <- Start(dat = list(list(name = 'system4_m1', path = repos2),
list(name = 'system5_m1', path = repos)),
var = c('tas', 'sfcWind'),
sdate = c('20160101', '20170101'),
ensemble = indices(1),
time = indices(1:2),
lat = indices(1:10),
lon = indices(1:10),
synonims = list(lat = c('lat', 'latitude'),
lon = c('lon', 'longitude')),
return_vars = list(time = 'sdate',
longitude = 'dat',
latitude = 'dat'),
metadata_dims = c('dat', 'var'),
retrieve = T)
cube3 <- as.s2dv_cube(data3)
# (3.1.2) We save the data
CST_SaveExp(data = cube3, ftime_dim = 'time', var_dim = 'var',
memb_dim = 'ensemble', dat_dim = 'dat')
# (3.1.3) We read again the data with start
repos <- paste0(getwd(), "/system4_m1/$var$/$var$_$sdate$.nc")
repos2 <- paste0(getwd(), "/system5_m1/$var$/$var$_$sdate$.nc")
data3out <- Start(dat = list(list(name = 'system4_m1', path = repos2),
list(name = 'system5_m1', path = repos)),
var = c('tas', 'sfcWind'),
sdate = c('20160101', '20170101'),
ensemble = indices(1),
time = indices(1:2),
lat = indices(1:10),
lon = indices(1:10),
synonims = list(lat = c('lat', 'latitude'),
lon = c('lon', 'longitude')),
return_vars = list(time = 'sdate',
longitude = 'dat',
latitude = 'dat'),
metadata_dims = c('dat', 'var'),
retrieve = T)
summary(data3out)
summary(data3)
dim(data3)
dim(data3out)
# (3.2) Two variables and two datasets in the same file
CST_SaveExp(data = cube3, ftime_dim = 'time', var_dim = 'var',
memb_dim = 'ensemble', dat_dim = 'dat',
single_file = TRUE)
# TODO: Read the output with Start
# (3.3) Observations (from startR usecase)
repos_exp <- paste0('/esarchive/exp/ecearth/a1tr/cmorfiles/CMIP/EC-Earth-Consortium/',
'EC-Earth3/historical/r24i1p1f1/Amon/$var$/gr/v20190312/',
'$var$_Amon_EC-Earth3_historical_r24i1p1f1_gr_$sdate$01-$sdate$12.nc')
exp <- Start(dat = repos_exp,
var = 'tas',
sdate = as.character(c(2005:2008)),
time = indices(1:3),
lat = 1:10,
lat_reorder = Sort(),
lon = 1:10,
lon_reorder = CircularSort(0, 360),
synonims = list(lat = c('lat', 'latitude'),
lon = c('lon', 'longitude')),
return_vars = list(lon = NULL,
lat = NULL,
time = 'sdate'),
retrieve = FALSE)
dates <- attr(exp, 'Variables')$common$time
repos_obs <- '/esarchive/recon/ecmwf/erainterim/monthly_mean/$var$_f6h/$var$_$date$.nc'
obs <- Start(dat = repos_obs,
var = 'tas',
date = unique(format(dates, '%Y%m')),
time = values(dates), #dim: [sdate = 4, time = 3]
lat = 1:10,
lat_reorder = Sort(),
lon = 1:10,
lon_reorder = CircularSort(0, 360),
time_across = 'date',
merge_across_dims = TRUE,
split_multiselected_dims = TRUE,
synonims = list(lat = c('lat', 'latitude'),
lon = c('lon', 'longitude')),
return_vars = list(lon = NULL,
lat = NULL,
time = 'date'),
retrieve = TRUE)
obscube <- as.s2dv_cube(obs)
CST_SaveExp(data = obscube, ftime_dim = 'time', var_dim = 'var',
memb_dim = NULL, dat_dim = 'dat',
single_file = TRUE, extra_string = 'obs_tas')
CST_SaveExp(data = obscube, ftime_dim = 'time', var_dim = 'var',
memb_dim = NULL, dat_dim = 'dat',
single_file = FALSE, extra_string = 'obs_tas')
#-----------------------------------------------------
# Example 4: Time bounds:
#-----------------------------------------------------
# example: /esarchive/exp/ncep/cfs-v2/weekly_mean/s2s/tas_f24h/tas_20231128.nc
library(CSIndicators)
exp <- CSTools::lonlat_prec_st
exp$attrs$Dates <- Reorder(exp$attrs$Dates, c(2,1))
res <- CST_PeriodAccumulation(data = exp, time_dim = 'ftime',
start = list(10, 03), end = list(20, 03))
# > dim(res$attrs$Dates)
# sdate
# 3
# (4.1) All data in a single file
CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var',
memb_dim = 'member', dat_dim = 'dataset',
startdates = res$attrs$Dates, single_file = TRUE)
# (4.1.1) Same with SaveExp
SaveExp(data = res$data, coords = res$coords,
Dates = NULL, time_bounds = res$attrs$time_bounds,
ftime_dim = NULL, var_dim = 'var',
varname = res$attrs$Variable$varName,
metadata = res$attrs$Variable$metadata,
memb_dim = 'member', dat_dim = 'dataset',
startdates = res$attrs$Dates, single_file = TRUE)
# (4.2) All data in separated files
CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var',
memb_dim = 'member', dat_dim = 'dataset',
startdates = res$attrs$Dates, single_file = FALSE)
# (4.2.1) Same with SaveExp
SaveExp(data = res$data, coords = res$coords,
Dates = res$attrs$Dates, time_bounds = res$attrs$time_bounds,
ftime_dim = NULL, var_dim = 'var',
varname = res$attrs$Variable$varName,
metadata = res$attrs$Variable$metadata,
memb_dim = 'member', dat_dim = 'dataset',
startdates = res$attrs$Dates, single_file = FALSE)
# (4.3)
CST_SaveExp(data = res, ftime_dim = NULL, var_dim = 'var',
memb_dim = 'member', dat_dim = 'dataset',
startdates = 1:4, single_file = FALSE)
# (4.4) We change the time dimensions to ftime and sdate_dim = NULL
dim(res$attrs$time_bounds[[1]]) <- c(time = 3)
dim(res$attrs$time_bounds[[2]]) <- c(time = 3)
dim(res$attrs$Dates) <- c(time = 3)
dim(res$data) <- c(dataset = 1, var = 1, member = 6, time = 3, lat = 4, lon = 4)
# (4.4.1) All data in a single file
CST_SaveExp(data = res, ftime_dim = 'time', var_dim = 'var',
memb_dim = 'member', dat_dim = 'dataset', sdate_dim = NULL,
startdates = res$attrs$Dates, single_file = TRUE)
# (4.4.2) All data in separated files
CST_SaveExp(data = res, ftime_dim = 'time', var_dim = 'var',
memb_dim = 'member', dat_dim = 'dataset', sdate_dim = NULL,
startdates = res$attrs$Dates, single_file = FALSE)
# (4.5) Forecast time units
CST_SaveExp(data = res, ftime_dim = 'time', var_dim = 'var',
memb_dim = 'member', dat_dim = 'dataset', sdate_dim = NULL,
startdates = res$attrs$Dates, single_file = TRUE,
units_hours_since = FALSE)
#-----------------------------------------------------
# Example 5: Read data with Load
#-----------------------------------------------------
data <- lonlat_temp$exp
# data <- lonlat_temp$obs
# data <- lonlat_prec
CST_SaveExp(data = data, ftime_dim = 'ftime',
var_dim = NULL, dat_dim = 'dataset', sdate_dim = 'sdate',
single_file = FALSE, units_hours_since = FALSE)
# Now we read the output with Load:
# startDates <- c('20001101', '20011101', '20021101',
# '20031101', '20041101', '20051101')
# infile <- list(path = paste0(getwd(),
# '/system5c3s/$VAR_NAME$/$VAR_NAME$_$START_DATE$.nc'))
# out_lonlat_temp <- CST_Load(var = 'tas', exp = list(infile), obs = NULL,
# sdates = startDates,
# nmember = 15,
# leadtimemax = 3,
# latmin = 27, latmax = 48,
# lonmin = -12, lonmax = 40,
# output = "lonlat")
# NOTE: This case hasn't been developed since the function to load data
# that will be maintained is CST_Start.
################################################################################
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/usecase/ex2_save.R
|
#*******************************************************************************
# Title: Script to modify the dimensions of the 's2dv_cube'
# Author: Eva Rifà Rovira
# Date: 18/01/2024
#*******************************************************************************
# In this example, we will explore different methods to modify the dimensions
# of the 's2dv_cube':
# (1) Changing dimension names
# (2) Adding new dimensions
# (3) Merge 2 dimensions
# (4) Split a dimension
# Needed packages:
library(CSTools)
################################################################################
#-----------------------------------------------------
# Example 1: Change dimension names with CST_ChangeDimNames
#-----------------------------------------------------
# Using this function, we can change the dimension names in all elements
# of the 's2dv_cube' object:
# (1) Check original dimensions and coordinates
lonlat_temp$exp$dims
names(lonlat_temp$exp$coords)
dim(lonlat_temp$exp$attrs$Dates)
# (2) Change 'dataset' to 'dat' and 'ftime' to 'time'
exp <- CST_ChangeDimNames(lonlat_temp$exp,
original_names = c("dataset", "ftime", "lon", "lat"),
new_names = c("dat", "time", "longitude", "latitude"))
# (3) Check new dimensions and coordinates
exp$dims
names(exp$coords)
dim(exp$attrs$Dates)
#-----------------------------------------------------
# Example 2: Insert a new dimension with CST_InsertDim
#-----------------------------------------------------
# With this function, we can add a dimension into the 's2dv_cube'.
# NOTE: When the dimension that we want to add has length greater than 1, the
# values of the data are repeated for that new dimension.
# (1) Check original dimensions and coordinates
lonlat_temp$exp$dims
names(lonlat_temp$exp$coords)
# (2) Add 'variable' dimension
exp <- CST_InsertDim(lonlat_temp$exp,
posdim = 2,
lendim = 2,
name = "variable",
values = c("tas", "tos"))
# (3) Check new dimensions and coordinates
exp$dims
exp$coords$variable
# We see that the values will be repeated along the new dimension:
exp$data[, , 1, 1, 1, 1, 1]
#-----------------------------------------------------
# Example 3: Merge two dimensions with CST_MergeDims
#-----------------------------------------------------
# In this example, we will merge the dimensions corresponding to the latitude
# and the longitude of the data. The new dimension will be named 'grid'.
# (1) Call the function:
new_data <- CST_MergeDims(lonlat_temp$exp, merge_dims = c('lat', 'lon'),
rename_dim = 'grid')
# (2) Check the dimensions of the data:
dim(new_data$data)
# dataset member sdate ftime grid
# 1 15 6 3 1166
# (3) Check the names of the coordinates:
names(new_data$coords)
# [1] "dataset" "member" "sdate" "ftime" "grid"
# (4) Explore the object by printing it in the terminal:
new_data
# NOTE: Be aware that when we print the object, the name of the new dimension
# appears in the "Coordinates" field without the asterisk (*) at its left. This
# means that the values of that coordinate are indices, not the actual values.
# We can also find this information with the attribute "indices":
attributes(new_data$coords$grid)
# $indices
# [1] TRUE
# (5) Now, we want to merge time dimensions start date and forecast time:
new_data <- CST_MergeDims(data = lonlat_temp_st$exp, merge_dims = c('sdate', 'ftime'))
# In this case, the Dates dimensions will be merged too.
# (6) Check the dimensions of Dates:
dim(new_data$attrs$Dates)
# sdate
# 18
# NOTE: When we merge a temporal dimension with a dimension of a different
# nature, the Dates dimensions are kept as in the original. In this case, the
# function returns a warning message, so we must pay attention!
new_data <- CST_MergeDims(data = lonlat_temp$exp,
merge_dims = c('lat', 'ftime'),
rename_dim = 'newdim')
#-----------------------------------------------------
# Example 4: Split two dimensions with SplitDim and CST_SplitDim
#-----------------------------------------------------
# In this example, we will start working with the function SplitDim,
# which can be used to split the dimensions of an array.
# NOTE: Take into account that time dimensions will be treated differently than
# other dimensions:
# (1) Decadal example: We define an array of consecutive days of different years:
dates <- seq(as.Date("01-01-2000", "%d-%m-%Y", tz = 'UTC'),
as.Date("31-12-2005","%d-%m-%Y", tz = 'UTC'), "day")
dim(dates) <- c(time = 2192)
# (2) Now, we will split the array in a new 'year' dimension:
dates_year <- SplitDim(dates, indices = dates,
split_dim = 'time', freq = 'year')
dim(dates_year)
# time year
# 366 6
# (3) Now, we can try: freq = 'month' and 'day'
dates_month <- SplitDim(dates, indices = dates,
split_dim = 'time', freq = 'month')
dates_day <- SplitDim(dates, indices = dates,
split_dim = 'time', freq = 'day')
# (4) Finally, we need to convert them back from numeric to 'POSIXct':
dates_year <- as.POSIXct(dates_year * 24 * 3600, origin = '1970-01-01', tz = 'UTC')
dates_month <- as.POSIXct(dates_month * 24 * 3600, origin = '1970-01-01', tz = 'UTC')
dates_day <- as.POSIXct(dates_day * 24 * 3600, origin = '1970-01-01', tz = 'UTC')
#-----------------------------------------------------
# In the following example, we will use the sample data of the package. We
# will use lonlat_prec_st because it is daily data:
# NOTE: As of Jan 2024, further development is needed regarding updates in
# other fields of the 's2dv_cube'.
# (1) Call the function CST_SplitDim adding a 'day' dimension:
data_day <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[1, ],
split_dim = 'ftime', freq = 'day')
# (2) Explore the dimensions of the data array
dim(data_day$data)
# dataset var member sdate ftime lat lon day
# 1 1 6 3 1 4 4 31
# (3) Call the function CST_SplitDim adding a 'month' dimension:
data_month <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[1,],
split_dim = 'ftime', freq = 'month')
dim(data_month$data)
# dataset var member sdate ftime lat lon month
# 1 1 6 3 31 4 4 1
# (4) Call the function CST_SplitDim adding a 'year' dimension:
data_year <- CST_SplitDim(lonlat_prec_st, indices = lonlat_prec_st$attrs$Dates[,1],
split_dim = 'sdate', freq = 'year')
dim(data_year$data)
# dataset var member sdate ftime lat lon year
# 1 1 6 1 31 4 4 3
################################################################################
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/usecase/ex3_modify_dims.R
|
#*******************************************************************************
# Title: Example script to subset any dimension of an 's2dv_cube'
# Author: Eva Rifà Rovira
# Date: 16/11/2024
#*******************************************************************************
# This example shows how to subset any dimension of an 's2dv_cube'. To do it,
# we will use the function CST_Subset. This function is the 's2dv_cube' method
# version of Subset from the package ClimProjDiags.
# (1) First we will see how Subset works.
# (2) Then, we will use CST_Subset with an 's2dv_cube'
# Needed packages:
library(CSTools)
library(ClimProjDiags)
################################################################################
#-----------------------------------------------------
# Example 1: Subset an example array
#-----------------------------------------------------
# This is a minimal use case about spatial coordinates subset.
# (1) We create the array and we print it:
dat <- array(1:100, dim = c(lat = 10, lon = 10))
dat
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
# [1,] 1 11 21 31 41 51 61 71 81 91
# [2,] 2 12 22 32 42 52 62 72 82 92
# [3,] 3 13 23 33 43 53 63 73 83 93
# [4,] 4 14 24 34 44 54 64 74 84 94
# [5,] 5 15 25 35 45 55 65 75 85 95
# [6,] 6 16 26 36 46 56 66 76 86 96
# [7,] 7 17 27 37 47 57 67 77 87 97
# [8,] 8 18 28 38 48 58 68 78 88 98
# [9,] 9 19 29 39 49 59 69 79 89 99
# [10,] 10 20 30 40 50 60 70 80 90 100
# (2) We call the function Subset from ClimProjDiags and we see the result:
dat_subset <- Subset(x = dat, along = c('lat', 'lon'), indices = list(1:5, 1:7),
drop = 'all')
dat_subset
# [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,] 1 11 21 31 41 51 61
# [2,] 2 12 22 32 42 52 62
# [3,] 3 13 23 33 43 53 63
# [4,] 4 14 24 34 44 54 64
# [5,] 5 15 25 35 45 55 65
#-----------------------------------------------------
# Example 2: Subset an 's2dv_cube' using sample data
#-----------------------------------------------------
# In this example we will not drop any dimension, we will select only the first
# member, the first and the second start dates, and also subset the longitude and
# keep only the values from [0, 21]:
# (1) Explore the sample data:
dat <- lonlat_temp_st$exp
dat$dims
# dataset var member sdate ftime lat lon
# 1 1 15 6 3 22 53
dat
# 's2dv_cube'
# Data [ 279.994110107422, 280.337463378906, 279.450866699219, ... ]
# Dimensions ( dataset = 1, var = 1, member = 15, sdate = 6, ftime = 3,
# lat = 22, lon = 53 )
# Coordinates
# * dataset : dat1
# * var : tas
# member : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
# * sdate : 20001101, 20011101, 20021101, 20031101, 20041101, 20051101
# ftime : 1, 2, 3
# * lat : 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, ...
# * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, ...
# Attributes
# Dates : 2000-11-01 2001-11-01 2002-11-01 2003-11-01 2004-11-01 ...
# varName : tas
# metadata :
# lat
# units : degrees_north
# long name : latitude
# lon
# units : degrees_east
# long name : longitude
# ftime
# units : hours since 2000-11-01 00:00:00
# tas
# units : K
# long name : 2 metre temperature
# Datasets : dat1
# when : 2023-10-02 10:11:06
# source_files : /monthly_mean/tas_f6h/tas_20001101.nc ...
# load_parameters :
# ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ...
# ...
# (2) Call the function CST_Subset:
dat_subset <- CST_Subset(x = dat, along = c('member', 'sdate', 'lon'),
indices = list(1, 1:2, 1:22), drop = 'none')
# (3) Explore the 's2dv_cube'
dat_subset
# 's2dv_cube'
# Data [ 279.994110107422, 277.161102294922, 278.825836181641, 276.8271484375, 276.052703857422, 276.950805664062, 280.677215576172, 277.285247802734 ... ]
# Dimensions ( dataset = 1, var = 1, member = 1, sdate = 2, ftime = 3, lat = 22, lon = 22 )
# Coordinates
# * dataset : dat1
# * var : tas
# member : 1
# * sdate : 20001101, 20011101
# ftime : 1, 2, 3
# * lat : 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27
# * lon : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21
# Attributes
# Dates : 2000-11-01 2001-11-01 2000-12-01 2001-12-01 2001-01-01 ...
# varName : tas
# metadata :
# ftime
# units : hours since 2000-11-01 00:00:00
# other : ndims, size, standard_name, calendar
# lat
# units : degrees_north
# long name : latitude
# other : ndims, size, standard_name, axis
# lon
# units : degrees_east
# long name : longitude
# other : ndims, size, standard_name, axis
# tas
# units : K
# long name : 2 metre temperature
# other : prec, dim, unlim, make_missing_value, missval, hasAddOffset, hasScaleFact, code, table
# Datasets : dat1
# when : 2023-10-02 10:11:06
# source_files : /esarchive/exp/ecmwf/system5c3s/monthly_mean/tas_f6h/tas_20001101.nc ...
# load_parameters :
# ( dat1 ) : dataset = dat1, var = tas, sdate = 20001101 ...
# ...
################################################################################
|
/scratch/gouwar.j/cran-all/cranData/CSTools/inst/doc/usecase/ex4_subset.R
|
---
title: "Analogs based on large scale for downscaling"
author: "M. Carmen Alvarez-Castro and M. del Mar Chaves-Montero (CMCC, Italy)"
date: "November 2020"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Analogs}
%\usepackage[utf8]{inputenc}
---
<!--
```{r, echo = FALSE}
knitr::opts_chunk$set(eval = FALSE)
```
-->
## Downscaling seasonal forecast data using Analogs
In this example, seasonal temperature forecasts, initialized in October, will be used to downscale temperature in the Balearic Islands using the CMCC system 3 seasonal forecasting system from the Euro-Mediterranean Center on Climate Change (CMCC), by computing Analogs of sea level pressure (SLP) data over a larger region (North Atlantic). The first step will be to load the data we want to downscale (i.e. CMCC) over the large region (i.e. North Atlantic) for temperature (predictand) and SLP (predictor), and the same variables and region for a higher-resolution dataset (ERA5). In a second step we will interpolate the model to the resolution of ERA5. In a third step we will find the analogs using one of the three criteria. In a fourth step we will get the downscaled dataset in the selected region (local scale, in this case the Balearic Islands).
## 1. Introduction of the function
For instance, if we want to perform a temperature downscaling in the Balearic Islands for October, we will get a daily series of temperature with 1 analog per day, the best analog. How do we define the best analog for a certain day? This function offers three options for that:
(1) The day with the minimum Euclidean distance in a large scale field: using e.g. pressure or geopotential height as variables and the North Atlantic as the large scale region. The atmospheric circulation pattern in the North Atlantic (LargeScale) plays an important role in the climate of Spain (LocalScale). The function will find the day in the database (obs, SLP in ERA5) with the atmospheric circulation pattern most similar to that of the day of interest (exp, SLP in the model). Once the date of the best analog is found, the function takes the temperature associated with that day (obsVar, tas in ERA5), subset over the region of interest (Balearic Islands).
(2) Same as (1) but in this case we will search for analogs at the local scale (Balearic Islands) in addition to the large scale (North Atlantic). Once the date of the best analog is found, the function takes the temperature associated with that day (obsVar, t2m in ERA5), subset over the region of interest (Balearic Islands).
(3) Same as (2) but here we will search for the analogs with the highest correlation at the local scale (Balearic Islands) and, instead of using SLP, we will use t2m.
In particular, the _Analogs Method_ uses a nonlinear approach following Yiou et al. (2013) (**Analogs**).
An efficient implementation of Analogs is provided for CSTools by the `CST_Analogs()` function.
Two datasets are used to illustrate how to use the function. The first one can be entirely run by the users since it uses the sample data provided along with the package. The second one uses data that needs to be downloaded or requested.
### Example 1: using data from CSTools
After loading the **CSTools** package in the R session, the user will have access to the sample data created using `CST_Start`: `lonlat_temp_st` and `lonlat_prec_st`.
*Note: If it is the first time using CSTools, install the package by running `install.packages("CSTools")`.*
```
library(CSTools)
```
After exploring the data, the user can directly run the Analogs downscaling method using the 'Large_dis' metric:
```
class(lonlat_temp_st$exp)
names(lonlat_temp_st$obs)
dim(lonlat_temp_st$obs$data)
dim(lonlat_temp_st$exp$data)
lonlat_temp_st$exp$attrs$Dates
lonlat_temp_st$obs$attrs$Dates
```
There are 15 ensemble members available in the `exp` data set, 6 start dates and 3 forecast times, which refer to monthly values during the 3 months following the start dates in November of the years 2000, 2001, 2002, 2003, 2004 and 2005.
```
exp1 <- CST_Subset(x = lonlat_temp_st$exp, along = c('sdate', 'ftime'), indices = list(1, 1))
down_1 <- CST_Analogs(expL = exp1, obsL = lonlat_temp_st$obs)
exp2 <- CST_Subset(x = lonlat_temp_st$exp, along = c('sdate', 'ftime'), indices = list(1, 2))
down_2 <- CST_Analogs(expL = exp2, obsL = lonlat_temp_st$obs)
exp3 <- CST_Subset(x = lonlat_temp_st$exp, along = c('sdate', 'ftime'), indices = list(1, 3))
down_3 <- CST_Analogs(expL = exp3, obsL = lonlat_temp_st$obs)
```
The visualization of the first three time steps for the ensemble mean of the forecast initialized on the 1st of November 2000 can be done using the package **s2dv**:
```
library(s2dv)
var = list(MeanDims(down_1$data, 'member'),
MeanDims(down_2$data, 'member'),
MeanDims(down_3$data, 'member'))
PlotLayout(PlotEquiMap, c('lat', 'lon'),
var = var,
nrow = 1, ncol = 3,
lon = down_1$coords$lon,
lat = down_1$coords$lat,
filled.continents = FALSE,
titles = c("2000-11-01", "2000-12-01", "2001-01-01"), units = 'T(K)',
toptitle = 'Analogs sdate November 2000',
width = 10, height = 4)
```

The user can also request extra Analogs and the information:
```
down <- CST_Analogs(expL = exp1, obsL = lonlat_temp_st$obs,
nAnalogs = 2, AnalogsInfo = TRUE)
```
Again, the user can explore the object `down`, which is of class 's2dv_cube'. In this case, the element 'data' contains the downscaled fields, the metric and the dates corresponding to the observed field:
```
class(down)
names(down$data)
dim(down$data$fields)
dim(down$data$metric)
dim(down$data$dates)
down$data$dates[1,15]
```
The last command shows that the best analog for ensemble member 15 of the forecast for the 1st of November 2000 is the 1st of November 2004:
```
PlotLayout(PlotEquiMap, c('lat', 'lon'), var = list(down$data$fields[1, , , 15],
lonlat_temp_st$obs$data[1, 1, 5, 1, , ]), nrow = 1, ncol = 2,
lon = down$coords$lon, lat = down$coords$lat, filled.continents = FALSE,
titles = c("Downscaled 2000-11-01", "Observed 2004-11-01"), units = 'T(K)',
width = 7, height = 4)
```

As expected, they are exactly the same.
### Example 2: Load data using CST_Start
In this example, the spatial field of a single forecast day will be downscaled using Analogs. This will illustrate how to use `CST_Start` to retrieve observations separately from simulations. To explore other options, see other CSTools vignettes as well as the `CST_Start` documentation and the [startR](https://CRAN.R-project.org/package=startR) package.
The simulations available for the desired model cover the period 1993-2016. Here, the 15th of October 2000 (from the simulation initialized on the 1st of October 2000) will be downscaled. ERA5 data are available from 1979 to the present. For this example we will just use October days from 2000 to 2006, so the start dates can be defined by running the following lines:
```
start <- as.Date(paste(2000, 10, "01", sep = ""), "%Y%m%d")
end <- as.Date(paste(2006, 10, "01", sep = ""), "%Y%m%d")
dates <- as.POSIXct(seq(start, end, by = "year"), format = '%Y%m%d', 'UTC')
```
Using the `CST_Start` function from the **CSTools** package, the data available in our data store can be loaded. The following lines show how this function can be used. The experimental datasets are interpolated to the ERA5 grid by specifying the 'grid' parameter, while ERA5 doesn't need to be interpolated. While the dimension 'ftime' is set to 1 for the experimental dataset, it is set to 31 for the observations, returning the daily observations of October for the years requested in 'sdate' (2000-2006). Download the data to run the recipe from downloads.cmcc.bo.it/d_chaves/ANALOGS/data_for_Analogs.Rdat or ask carmen.alvarez-castro at cmcc.it or nuria.perez at bsc.es.
```r
exp_path <- paste0('/esarchive/exp/ecmwf/system4_m1/daily_mean/',
'$var$_f6h/$var$_$sdate$.nc')
obs_path <- paste0('/esarchive/recon/ecmwf/era5/daily_mean/',
'$var$_f1h-r1440x721cds/$var$_$sdate$.nc')
date_exp <- '20001001'
lonmax <- 50
lonmin <- -80
latmax <- 70
latmin <- 22
expTAS <- CST_Start(dataset = exp_path,
var = 'tas',
member = startR::indices(1:15),
sdate = '20001001',
ftime = startR::indices(15),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_params = list(grid = 'r1440x721',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
expPSL <- CST_Start(dataset = exp_path,
var = 'psl',
member = startR::indices(1:15),
sdate = '20001001',
ftime = startR::indices(15),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
member = c('member', 'ensemble'),
ftime = c('ftime', 'time')),
transform = startR::CDORemapper,
transform_params = list(grid = 'r1440x721',
method = 'bilinear'),
transform_vars = c('lat', 'lon'),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
obsTAS <- CST_Start(dataset = obs_path,
var = 'tas',
sdate = unique(format(dates, '%Y%m')),
ftime = startR::indices(1:31),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
obsPSL <- CST_Start(dataset = obs_path,
var = 'psl',
sdate = unique(format(dates, '%Y%m')),
ftime = startR::indices(1:31),
lat = startR::values(list(latmin, latmax)),
lat_reorder = startR::Sort(decreasing = TRUE),
lon = startR::values(list(lonmin, lonmax)),
lon_reorder = startR::CircularSort(0, 360),
synonims = list(lon = c('lon', 'longitude'),
lat = c('lat', 'latitude'),
ftime = c('ftime', 'time')),
return_vars = list(lat = NULL,
lon = NULL, ftime = 'sdate'),
retrieve = TRUE)
```
The 's2dv_cube' objects `expTAS`, `obsTAS`, `expPSL` and `obsPSL` are now loaded in the R environment. The first two correspond to the experimental and observed data for temperature and the other two are the equivalent for the SLP data.
Loading the data using `CST_Start` allows us to obtain two lists, one for the experimental data and another for the observed data, with the same elements and compatible dimensions of the data element:
```
dim(expTAS$data)
# dataset var member sdate ftime lat lon
# 1 1 15 1 1 193 521
dim(obsTAS$data)
# dataset var sdate ftime lat lon
# 1 1 7 31 193 521
```
#### Two variables and criteria Large [scale] Distance:
The aim is to downscale the temperature field of the simulation for the 15th of October 2000 by looking at the pressure pattern:
```
down1 <- CST_Analogs(expL = expPSL, obsL = obsPSL, AnalogsInfo = TRUE,
criteria = "Large_dist", nAnalogs = 3,
obsVar = obsTAS, expVar = expTAS)
```
Some warnings could appear giving information about undefined parameters. It is possible to explore the information in the object `down1` by running:
```
names(down1$data)
dim(down1$data$field)
# nAnalogs lat lon member
# 3 193 521 15
dim(down1$data$dates)
# nAnalogs member
# 3 15
down1$data$dates[1,1]
# "07-10-2005"
```
Now, we can visualize the output:
```
PlotLayout(PlotEquiMap, c('lat', 'lon'),
var = list(expPSL$data[1, 1, 1, 1, 1, , ], obsPSL$data[1, 1, 1, 15, , ],
obsPSL$data[1, 1, 6, 7, , ]), lon = obsPSL$coords$lon,
lat = obsPSL$coords$lat, filled.continents = FALSE,
titles = c('Exp PSL 15-10-2000','Obs PSL 15-10-2000',
'Obs PSL 7-10-2005'),
toptitle = 'First member', ncol = 3, nrow = 1, width = 10, height = 4)
PlotLayout(PlotEquiMap, c('lat', 'lon'), var = list(
expTAS$data[1, 1, 1, 1, 1, , ], obsTAS$data[1, 1, 1, 15, , ],
down1$data$field[1, , , 1], obsTAS$data[1, 1, 6, 7, , ]),
lon = obsTAS$coords$lon, lat = obsTAS$coords$lat, filled.continents = FALSE,
titles = c('Exp TAS 15-10-2000', 'Obs TAS 15-10-2000',
'Analog TAS 15-10-2000', 'Obs TAS 7-10-2005'),
ncol = 2, nrow = 2)
```

The previous figure shows the PSL inputs and the PSL pattern of the 7th of October 2005, which is the best analog. *Note: Analogs automatically excludes the day being downscaled from the observations.*
The next figure shows the input temperature fields, and the result analog which corresponds to the temperature of the 7th of October, 2005:

#### Two variables and criteria Local [scale] Distance:
The aim is to downscale the temperature simulation of the 15th of October 2000 by considering the pressure spatial pattern on the large scale and the local pressure pattern on a given region. Therefore, a region is defined by providing maximum and minimum latitude and longitude coordinates, in this case selecting the Balearic Islands:
```
region <- c(lonmin = 0, lonmax = 5, latmin = 38.5, latmax = 40.5)
expPSL$data <- expPSL$data[1, 1, 1, 1, 1, , ]
expTAS$data <- expTAS$data[1, 1, 1, 1, 1, , ]
down2 <- CST_Analogs(expL = expPSL, obsL = obsPSL, AnalogsInfo = TRUE,
criteria = "Local_dist", # nAnalogs = 50,
obsVar = obsTAS, expVar = expTAS,
region = region)
```
The parameter 'nAnalogs' doesn't correspond to the number of Analogs returned, but to the number of best observations to use in the comparison between the large and the local scale.
In this case, when looking at the large scale pattern and also at the local scale pattern, the best analog for the first member is the 13th of October 2001:
```
down2$data$dates[2]
# "13-10-2001"
```
```
library(ClimProjDiags)
var = list(expTAS$data, obsTAS$data[1, 1, 1, 15, , ],
down2$data$field[1, , ], SelBox(obsTAS$data[1, 1, 2, 13, , ],
lon = as.vector(obsTAS$coords$lon),
lat = as.vector(obsTAS$coords$lat),
region, londim = 'lon', latdim = 'lat')$data)
PlotLayout(PlotEquiMap, c('lat', 'lon'), var = var,
special_args = list(list(lon = expTAS$coords$lon, lat = expTAS$coords$lat),
list(lon = obsTAS$coords$lon, lat = obsTAS$coords$lat),
list(lon = down2$coords$lon, down2$coords$lat),
list(lon = down2$coords$lon, down2$coords$lat)),
filled.continents = FALSE,
titles = c('Exp TAS 15-10-2000', 'Obs TAS 15-10-2000',
'Analog TAS 15-10-2000', 'Obs TAS 13-10-2001'),
ncol = 2, nrow = 2)
```

The previous figure shows that the best Analog field corresponds to the observed field on the 13th of October 2001.
#### Two variables and criteria Local [scale] Correlation:
```
down3 <- CST_Analogs(expL = expPSL, obsL = obsPSL, AnalogsInfo = TRUE,
criteria = "Local_cor", # nAnalogs = 50,
obsVar = obsTAS, expVar = expTAS,
region = region)
```
In this case, when looking at the large scale pattern and also at the local scale pattern, the best analog for the first member is the 10th of October 2001:
```
down3$data$dates[3]
# [1] "10-10-2001"
```
```
var = list(down3$data$field[1, , ],
SelBox(obsTAS$data[1, 1, 2, 10, , ],
lon = as.vector(obsTAS$coords$lon),
lat = as.vector(obsTAS$coords$lat),
region, londim = 'lon', latdim = 'lat')$data)
PlotLayout(PlotEquiMap, c('lat', 'lon'), var = var,
lon = down3$coords$lon, lat = down3$coords$lat,
filled.continents = FALSE,
titles = c('Analog TAS 15-10-2000', 'Obs TAS 10-10-2001'),
ncol = 2, nrow = 1)
```

The previous figure shows that the best Analog field corresponds to the observed field on the 10th of October 2001.
#### Downscaling exp$data using the excludeTime parameter
`excludeTime` is set by default to the time of `expL` so that the day of interest itself is excluded from the observational search. If there is interest in excluding other dates, they should be included in the argument 'excludeTime'.
```
down4 <- CST_Analogs(expL = expPSL, obsL = obsPSL, AnalogsInfo = TRUE,
criteria = "Large_dist", nAnalogs = 20,
obsVar = obsTAS, expVar = expTAS,
region = region, excludeTime = obsPSL$attrs$Dates[10:20])
```
In this case, the best analog is still the 7th of October 2005.
*Note: You can compute the anomaly values before applying the criteria (as in Yiou et al., 2013) using `CST_Anomaly` from the CSTools package.*
|
/scratch/gouwar.j/cran-all/cranData/CSTools/vignettes/Analogs_vignette.Rmd
|
---
author: "Eroteida Sánchez-García"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
  %\VignetteIndexEntry{Achieving Best Estimate Index}
%\usepackage[utf8]{inputenc}
---
Achieving the best precipitation prediction given the NAO index
-------------------------------------------------------------------
The boreal winter precipitation forecast, accumulated from November to March, can be improved by considering the NAO index. The first step is to find the best estimate of the winter NAO, given by two Seasonal Forecast Systems (SFS). The second step is to employ the enhanced NAO index PDF to produce weights for an SFS (it could be the same or a different SFS from the previous ones). The third step is to apply these weights to a precipitation field. The methodology has been proven to improve the skill of the precipitation forecast in the Iberian Peninsula, given the relation between winter precipitation and the NAO index at the seasonal time scale (Sánchez-García, E., Voces-Aboy, J., Navascués, N., & Rodríguez-Camino, E. (2019). Regionally improved seasonal forecast of precipitation through Best estimation of winter NAO, Adv. Sci. Res., 16, 165-174, <https://doi.org/10.5194/asr-16-165-2019>).
This document aims to illustrate a practical use of the functions included in CSTools to apply this methodology.
## Loading packages and data
Open an R session and load the CSTools library:
```
library(CSTools)
```
The required data to apply this methodology are:
- the observed (reconstructed) NAO index in the reference period
- the winter NAO index for two different SFSs (SFS1 and SFS2, to combine both of them) in a reference period (hindcast) and in a future simulation (forecast)
- the winter NAO index and the accumulated precipitation field from the SFS that aims to be improved (hindcast and forecast)
Given the memory limitations, the following example uses synthetic data.
The SFS1 system is a dynamical model containing 25 ensemble members and its output will be saved in the object `NAO_hind1` for the 20-year reference period and `NAO_fcst1` for the next season forecast.
The second SFS, SFS2, is an empirical model, so `NAO_hind2` and `NAO_fcst2` are characterized by a mean and a standard deviation saved in the 'statistic' dimension.
The model for improving is a dynamical model containing 25 ensemble members.
The synthetic data is created by running the following lines:
```
# observations
NAO_obs <- rnorm(20, sd = 3)
dim(NAO_obs) <- c(time = 20)
# hindcast and forecast of a dynamical SFS 1
NAO_hind1 <- rnorm(20 * 2 * 25, sd = 2.5)
dim(NAO_hind1) <- c(time = 20, member = 50)
NAO_fcst1 <- rnorm(2*51, sd = 2.5)
dim(NAO_fcst1) <- c(time = 1, member = 102)
# hindcast and forecast of an empirical SFS 2
NAO_hind2_mean <- rnorm(20, sd = 3)
NAO_hind2_sd <- rnorm(20, mean = 5, sd = 1)
NAO_hind2 <- cbind(NAO_hind2_mean, NAO_hind2_sd)
dim(NAO_hind2) <- c(time = 20, statistic = 2)
NAO_fcst2_mean <- rnorm(1, sd = 3)
NAO_fcst2_sd <- rnorm(1, mean = 5, sd = 1)
NAO_fcst2 <- cbind(NAO_fcst2_mean, NAO_fcst2_sd)
dim(NAO_fcst2) <- c(time = 1, statistic = 2)
```
The winter NAO index and the accumulated precipitation field from the dynamical SFS that aims to be improved can be created by running:
```
# NAO index of a SFS to compute weights for each ensemble's member
NAO_hind <- rnorm(20 * 25, sd = 2.5)
dim(NAO_hind) <- c(time = 20, member = 25)
NAO_fcst <- rnorm(51, sd = 2.5)
dim(NAO_fcst) <- c(time = 1, member = 51)
# The accumulated precipitation field
prec_hind <- rnorm(20 * 25 * 21 * 31, mean = 30, sd = 10)
dim(prec_hind) <- c(time = 20, member = 25, lat = 21, lon = 31)
prec_hind <- list(data = prec_hind)
class(prec_hind) <- 's2dv_cube'
prec_fcst <- rnorm(51 * 21 * 31, mean = 25,sd = 8)
dim(prec_fcst) <- c(time = 1, member = 51, lat = 21, lon = 31)
prec_fcst <- list(data = prec_fcst)
class(prec_fcst) <- 's2dv_cube'
```
## 1- Best Estimate Index NAO
The function `BEI_PDFBest` performs the following steps:
- improves the PDF NAO index for each SFS (it could use bias correction method) and
- does a statistical combination of these improved NAO indexes.
Its output is an array containing the parameters (mean and standard deviation) of the PDF for the reference period (hindcast) or the forecast period.
```
# for hindcast
pdf_hind_best <- BEI_PDFBest(NAO_obs, NAO_hind1, NAO_hind2, index_fcst1 = NULL,
index_fcst2 = NULL, method_BC = 'none',
time_dim_name = 'time', na.rm = FALSE)
# for forecast
pdf_fcst_best <- BEI_PDFBest(NAO_obs, NAO_hind1, NAO_hind2, index_fcst1 = NAO_fcst1,
index_fcst2 = NAO_fcst2, method_BC = 'none',
time_dim_name = 'time', na.rm = FALSE)
```
## 2- Compute weights using the Best Estimation of Index NAO
An array of weights is calculated for the SFS. This SFS could be the same as or a different SFS from the ones in section 1.
The function `BEI_Weights` computes these weights for each ensemble member based on the best NAO PDF estimate.
```
# for hindcast
weights_hind <- BEI_Weights(NAO_hind, pdf_hind_best)
# for forecast
weights_fcst <- BEI_Weights(NAO_fcst, pdf_fcst_best)
```
The expected dimensions of these weights are 'member' and the temporal dimension, as can be quickly checked below.
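A quick sanity check (a minimal sketch; the exact dimension order of the output may differ from what is noted in the comments):
```
dim(weights_hind) # expected to contain time = 20 and member = 25
dim(weights_fcst) # expected to contain time = 1 and member = 51
```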
## 3- Apply weights to a precipitation field
The function `CST_BEI_Weighting` computes the ensemble mean or the tercile probabilities for a climate variable.
The ensemble mean and the tercile probabilities from the weighted precipitation field are obtained by running:
```
# for hindcast
em_prec_hind <- CST_BEI_Weighting(prec_hind, weights_hind, type = 'ensembleMean')
prob_prec_hind <- CST_BEI_Weighting(prec_hind, weights_hind, type = 'probs')
# for forecast
em_prec_fcst <- CST_BEI_Weighting(prec_fcst, weights_fcst, type = 'ensembleMean')
prob_prec_fcst <- CST_BEI_Weighting(prec_fcst, weights_fcst, type = 'probs')
```
## Comparison and visualization
The original model output can be compared against the BEI corrected field.
To do this, equiprobable weights are created for the hindcast period and applied to the original precipitation field:
```
aweights_raw <- rep(1/25, 20 * 25)
dim(aweights_raw) <- dim(weights_hind)
em_prec_raw <- CST_BEI_Weighting(prec_hind, aweights_raw, type = 'ensembleMean')
prob_prec_raw <- CST_BEI_Weighting(prec_hind, aweights_raw, type = 'probs')
```
A map with the probability that the total precipitation will be in the lower/normal/upper tercile based on the Best Estimate Index NAO can be obtained using the 'PlotEquiMap' or 'PlotMostLikelyQuantileMap' functions from the 'CSTools' package.
The following figures show the probabilities of the lower tercile for precipitation from November to March 2012/13 for the ECMWF S5 system, with and without applying the exposed methodology, obtained using real data:
- NAO indices from the ECMWF-S5 dynamical model and the S-ClimWaRe empirical model from AEMET, from 1997 to 2016, to compute the Best Estimation of the NAO Index for this hindcast period.
- The winter precipitation (from November to March) from 1997 to 2016 over the Iberian Peninsula from the ECMWF-S5 dynamical model with a resolution of 0.5º x 0.5º, to be weighted with the previous Best Estimation of the NAO Index.


In a similar way, we can plot the map with the probability that the total precipitation from November 2012 to March 2013, for example, will be in the lower tercile from the ECMWF Seasonal Forecast System 5 (raw) to compare the results:


|
/scratch/gouwar.j/cran-all/cranData/CSTools/vignettes/BestEstimateIndex_vignette.Rmd
|
---
author: "Nuria Perez"
date: "`r Sys.Date()`"
revisor: "Eva Rifà"
revision date: "October 2023"
output: rmarkdown::html_vignette
vignette: >
%\VignetteEngine{knitr::knitr}
%\VignetteIndexEntry{Data Storage and Retrieval}
%\usepackage[utf8]{inputenc}
---
Data Storage and Retrieval
-----------------------------------------
CSTools aims at post-processing seasonal climate forecasts with state-of-the-art methods. However, some doubts and issues may arise the first time the package is used: do I need a specific R version? How much RAM do I need? Where can I find the datasets? Should I format the datasets? etc. Therefore, some recommendations and key points to take into account are gathered here in order to facilitate the use of CSTools.
### 1. System requirements
The first question that may come to a new user is: what are the requirements for my computer to run CSTools? Here is the list of the most frequent needs:
- netcdf library version 4.1 or later
- cdo (I am currently using 1.6.3)
- R 3.4.2 or later
On the other hand, the computational power of a computer could be a limitation, but this will depend on the size of the data that the users need for their analysis. For instance, they can estimate the memory they will require by multiplying the following values:
- Area of the study region (km^2)
- Area of the desired grid cell (km) (or square of the grid cell size)
- Number of models + 1 observational dataset
- Forecast time length (days or months)
- Temporal resolution (1 for daily, 4 for 6-hourly or 24 for hourly data)
- Hindcast length (years)
- Number of start date (or season)
- Number of members
- Extra factor for function computation (*)
For example, if they want to use the hindcast of 3 different seasonal simulations with 9 members, at daily resolution, to perform a regional study in, let's say, a region of 40000 km2 with a resolution of 5 km:
> 200 km x 200 km / (5 km x 5 km) x (3 + 1) models x 214 days x 30 hindcast years x 9 members x 2 start dates x 8 bytes ~ 6 GB
(*) Furthermore, some of the functions need to duplicate or triplicate (or even more) the inputs to perform their analysis. Therefore, between 12 and 18 GB of RAM memory would be necessary in this example. The same arithmetic is sketched in R below.
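The back-of-the-envelope estimate above can be reproduced in R (a minimal sketch; the grid-cell counts and the factor-of-2-to-3 overhead are the assumptions stated above):
```
# Number of grid cells covering the study region
n_cells <- (200 * 200) / (5 * 5)                 # 1600 cells
# Values stored: cells x datasets x lead times x years x members x start dates
n_values <- n_cells * (3 + 1) * 214 * 30 * 9 * 2
# Memory in GB, with 8 bytes per double-precision value
mem_gb <- n_values * 8 / 1024^3
mem_gb                                           # ~5.5, roughly the 6 GB quoted above
# Allowing an extra factor of 2 to 3 for intermediate copies inside functions
mem_gb * c(2, 3)                                 # roughly the 12-18 GB quoted above
```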
### 2. Overview of CSTools structure
All CSTools functions have been developed following the same guidelines. The main point of interest for the users is that each function is built on several nested levels, and it is possible to distinguish at least three levels:
- `CST_FunctionName()`: this function works on 's2dv_cube' objects and is exposed to the users.
- `FunctionName()`: this function works on N-dimensional arrays with named dimensions and is exposed to the users.
- lower level functions such as `.functionname()`, which work on the minimum required elements and are not exposed to the users.
A reasonably important doubt that a new user may have at this point is: what is an 's2dv_cube' object?
's2dv_cube' is a class of an object storing the data and metadata in several elements:
+ $data element is an N-dimensional array with named dimensions containing the data (e.g.: temperature values),
+ $dims vector with the dimensions of $data
+ $coords is a named list with elements of the coordinates vectors corresponding to the dimensions of the $data,
+ $attrs is a named list with elements corresponding to attributes of the object. It has the following elements:
+ $Variable is a list with the variable name in element $varName and with the metadata of all the variables in the $metadata element,
+ $Dates is an array of dates of the $data element,
+ other elements for extra metadata information
It is possible to visualize an example of the structure of an 's2dv_cube' object by opening an R session and running:
```
library(CSTools)
class(lonlat_temp_st$exp) # check the class of the object lonlat_temp$exp
names(lonlat_temp_st$exp) # shows the names of the elements in the object lonlat_temp$exp
str(lonlat_temp_st$exp) # shows the full structure of the object lonlat_temp$exp
```
### 3. Data storage recommendations
CSTools' main objective is to share state-of-the-art post-processing methods with the scientific community. However, in order to facilitate its use, the CSTools package includes a function, `CST_Load`, to read the files and have the data available in 's2dv_cube' format in the R session memory to conduct the analysis. Some benefits of using this function are:
- CST_Load can read multiple experimental or observational datasets at once,
- CST_Load can regrid all datasets to a common grid,
- CST_Load reformats observational datasets to the same structure as experiments (i.e. matching start dates and forecast lead times between experiments and observations) or keeps observations as usual time series (i.e. a continuous temporal dimension),
- CST_Load can subset a region from global files,
- CST_Load can read multiple members in monthly, daily or other resolutions,
- CST_Load can perform spatial averages over a defined region or return the lat-lon grid and
- CST_Load can read from files using multiple parallel processes, among other possibilities.
CSTools also has the function `CST_Start`, based on [startR](https://CRAN.R-project.org/package=startR), which is more flexible than `CST_Load`. We recommend using `CST_Start` since it's more efficient and flexible.
If you plan to use `CST_Load` or `CST_Start`, we have developed guidelines to download and format the data. See [CDS_Seasonal_Downloader](https://earth.bsc.es/gitlab/es/cds-seasonal-downloader).
There are alternatives to these functions; for instance, the user can:
1) use another tool to read the data from files (e.g.: the ncdf4, easyNCDF or startR packages) and then convert it to the class 's2dv_cube' with the `s2dv_cube()` function (see the sketch below), or
2) if they keep facing problems converting the data to that class, they can just skip it and work with the functions without the prefix 'CST_'. In this case, they will be able to work with the basic class 'array'.
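A minimal sketch of option 1), assuming a hypothetical NetCDF file 'tas_20001101.nc' containing a variable 'tas' with longitude, latitude and time dimensions (the additional arguments accepted by `s2dv_cube()` should be checked in its documentation):
```
library(ncdf4)
library(CSTools)
# Read the array and its coordinates from a NetCDF file (hypothetical file)
nc <- nc_open("tas_20001101.nc")
tas <- ncvar_get(nc, "tas")
lon <- ncvar_get(nc, "lon")
lat <- ncvar_get(nc, "lat")
nc_close(nc)
# Name the array dimensions (order assumed here: lon, lat, time)
dim(tas) <- c(lon = length(lon), lat = length(lat), time = dim(tas)[3])
# Build a minimal 's2dv_cube'; coordinates, dates and further metadata can be
# supplied through the additional arguments of s2dv_cube() (see its help page)
# or attached following the structure described in section 2
cube <- s2dv_cube(data = tas)
class(cube)
```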
Independently of the tool used to read the data from your local storage into your R session, this step can be automatized by giving a common structure and format to all datasets in your local storage. Here is the list of minimum requirements that CST_SaveExp follows to be able to store an experiment that could later be loaded with CST_Load (a small sketch of the expected file names follows the list):
- this function creates one NetCDF file per start date with the name of the variable and the start date: `$VARNAME$_$YEAR$$MONTH$.nc`
- each file has dimensions: lon, lat, ensemble and time.
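For instance, the file names expected for a 'tas' hindcast initialized every November from 2000 to 2005 would look like this (a minimal sketch in base R; the variable name and dates are only illustrative):
```
var_name <- "tas"
sdates <- paste0(2000:2005, "11")                # YYYYMM start dates
files <- paste0(var_name, "_", sdates, ".nc")
files
# "tas_200011.nc" "tas_200111.nc" "tas_200211.nc"
# "tas_200311.nc" "tas_200411.nc" "tas_200511.nc"
```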
### 4. CST_Load example
```
library(CSTools)
library(zeallot)
path <- "/esarchive/exp/meteofrance/system6c3s/$STORE_FREQ$_mean/$VAR_NAME$_f6h/$VAR_NAME$_$START_DATE$.nc"
ini <- 1993
fin <- 2012
month <- '05'
start <- as.Date(paste(ini, month, "01", sep = ""), "%Y%m%d")
end <- as.Date(paste(fin, month, "01", sep = ""), "%Y%m%d")
dateseq <- format(seq(start, end, by = "year"), "%Y%m%d")
c(exp, obs) %<-% CST_Load(var = 'sfcWind',
exp = list(list(name = 'meteofrance/system6c3s', path = path)),
obs = 'erainterim',
sdates = dateseq, leadtimemin = 2, leadtimemax = 4,
lonmin = -19, lonmax = 60.5, latmin = 0, latmax = 79.5,
storefreq = "daily", sampleperiod = 1, nmember = 9,
output = "lonlat", method = "bilinear",
grid = "r360x180")
```
### 5. CST_Start example
```r
path_exp <- paste0('/esarchive/exp/meteofrance/system6c3s/monthly_mean/',
'$var$_f6h/$var$_$sdate$.nc')
sdates <- sapply(1993:2012, function(x) paste0(x, '0501'))
lonmax <- 60.5
lonmin <- -19
latmax <- 79.5
latmin <- 0
exp <- CST_Start(dataset = path_exp,
var = 'sfcWind',
ensemble = startR::indices(1:9),
sdate = sdates,
time = startR::indices(1:3),
latitude = startR::values(list(latmin, latmax)),
latitude_reorder = startR::Sort(decreasing = TRUE),
longitude = startR::values(list(lonmin, lonmax)),
longitude_reorder = startR::CircularSort(0, 360),
synonims = list(longitude = c('lon', 'longitude'),
latitude = c('lat', 'latitude')),
return_vars = list(latitude = NULL,
longitude = NULL, time = 'sdate'),
retrieve = TRUE)
path_obs <- paste0('/esarchive/recon/ecmwf/erainterim/daily_mean/',
'$var$_f6h/$var$_$sdate$.nc')
dates <- as.POSIXct(sdates, format = '%Y%m%d', 'UTC')
obs <- CST_Start(dataset = path_obs,
var = 'sfcWind',
sdate = unique(format(dates, '%Y%m')),
time = startR::indices(2:4),
latitude = startR::values(list(latmin, latmax)),
latitude_reorder = startR::Sort(decreasing = TRUE),
longitude = startR::values(list(lonmin, lonmax)),
longitude_reorder = startR::CircularSort(0, 360),
synonims = list(longitude = c('lon', 'longitude'),
latitude = c('lat', 'latitude')),
transform = startR::CDORemapper,
transform_extra_cells = 2,
transform_params = list(grid = 'r360x181',
method = 'conservative'),
transform_vars = c('latitude', 'longitude'),
return_vars = list(longitude = NULL,
latitude = NULL,
time = 'sdate'),
retrieve = TRUE)
```
Extra lines to see the size of the objects and visualize the data:
```
library(pryr)
object_size(exp)
# 27.7 MB
object_size(obs)
# 3.09 MB
library(s2dv)
PlotEquiMap(exp$data[1,1,1,1,1,,], lon = exp$coords$longitude, lat= exp$coords$latitude,
filled.continents = FALSE, fileout = "Meteofrance_r360x180.png")
```

### Managing big datasets and memory issues
Depending on the user's needs, limitations can be found when trying to process big datasets. This may depend on the number of ensemble members, the resolution and the region that the user wants to process. CSTools has been developed for compatibility with the startR package, which covers these aims:
- retrieving data from NetCDF files to RAM memory in a flexible way,
- divide automatically datasets in pieces to perform an analysis avoiding memory issues and
- run the workflow in your local machine or submitting to an HPC cluster.
This is especially useful when a user doesn't have access to an HPC and must work with a small amount of RAM:

There is a [video tutorial](https://earth.bsc.es/wiki/lib/exe/fetch.php?media=tools:startr_tutorial_2020.mp4) about the startR package and the tutorial material.
The functions in CSTools (with or without the CST_ prefix) include a parameter called 'ncores' that allows the computation to be automatically parallelized over multiple cores when the parameter is set to a value greater than one.
|
/scratch/gouwar.j/cran-all/cranData/CSTools/vignettes/Data_Considerations.Rmd
|