---
title: "CSCNet vignette"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{CSCNet vignette}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = ""
)
```
<style>
body {
text-align: justify}
</style>
CSCNet is a package with flexible tools for fitting and evaluating cause-specific Cox models with elastic-net penalty. Each cause is modeled in a separate penalized Cox model (using the elastic-net penalty) with its own exclusive $\alpha$ and $\lambda$, treating the other competing causes as censored.
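Concretely, for each cause $k$ the coefficients solve an elastic-net penalized Cox partial likelihood problem of the standard form (notation added here for orientation; see the package documentation for details):
$$\hat{\beta}_{k}=\underset{\beta}{\arg\max}\left[\ell_{k}(\beta)-\lambda_{k}\left(\alpha_{k}\lVert\beta\rVert_{1}+\frac{1-\alpha_{k}}{2}\lVert\beta\rVert_{2}^{2}\right)\right]$$
where $\ell_{k}(\beta)$ is the Cox partial log-likelihood for cause $k$ with all competing events treated as censored.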
### Regularized cause-specific Cox models and absolute risk predictions
In this vignette we use the ```Melanoma``` data from the 'riskRegression' package (which loads with 'CSCNet'), so we start by loading the package and the ```Melanoma``` data.
```{r message=F, warning=F}
library(CSCNet)
library(riskRegression)
data(Melanoma)
as_tibble(Melanoma)
table(Melanoma$status)
```
There are 2 events in the Melanoma data, coded as 1 & 2. To introduce how setting up variables and hyper-parameters works in CSCNet, we will fit a model with the following hyper-parameters to the ```Melanoma``` data:
$$(\alpha_{1},\alpha_{2},\lambda_{1},\lambda_{2})=(0,0.5,0.01,0.02)$$
We set the variables affecting event 1 as `age,sex,invasion,thick` and the variables affecting event 2 as `age,sex,epicel,ici,thick`.
#### Fitting regularized cause-specific Cox models
In CSCNet, setting variables and hyper-parameters is done through named lists. The variables and hyper-parameters related to each involved cause are stored in list positions whose names are that cause. Naturally, these names must match the values of the status variable in the data.
```{r}
vl <- list('1'=c('age','sex','invasion','thick'),
'2'=~age+sex+epicel+ici+thick)
penfit <- penCSC(time = 'time',
status = 'status',
vars.list = vl,
data = Melanoma,
alpha.list = list('1'=0,'2'=.5),
lambda.list = list('1'=.01,'2'=.02))
penfit
```
`penfit` is a comprehensive list holding detailed information on the data and the fitted models, all of which the user can access.
**Note:** As we saw, variables can be specified in `vars.list` in 2 ways: as a vector of variable names or as a one-sided formula for each cause.
#### Predictions and semi-parametric estimates of absolute risk
Now to obtain predictions, especially estimates of the absolute risks, the `predict.penCSC` method was developed so the user can obtain different forms of values in the easiest way possible. With this method, applied to objects of class `penCSC` and for the different involved causes, the user can obtain values of the linear predictors (`type='lp'` or `type='link'`), exponentials of the linear predictors (`type='risk'` or `type='response'`) and semi-parametric estimates of the absolute risks (`type='absRisk'`) at desired time horizons.
**Note:** The default value of the `event` argument in `predict.penCSC` is `NULL`. If it is left as such, values for all involved causes are returned.
Values of the linear predictors for event 1 for the first five individuals in the data:
```{r}
predict(penfit,Melanoma[1:5,],type='lp',event=1)
```
Or the risk values of the same individuals for all involved causes:
```{r}
predict(penfit,Melanoma[1:5,],type='response')
```
Now let's say we want estimates of the absolute risks for event 1, our event of interest, at the 3- and 5-year time horizons:
```{r}
predict(penfit,Melanoma[1:5,],type='absRisk',event=1,time=365*c(3,5))
```
**Note:** There is also `predictRisk.penCSC` for obtaining absolute risk predictions. This method was developed for compatibility with tools from the 'riskRegression' package.
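For instance, a minimal sketch (not run) of how this compatibility could be used to compute IPCW metrics with `riskRegression::Score`; the `Score` arguments follow the 'riskRegression' documentation and `prodlim` supplies `Hist`:
```{r eval=FALSE}
# Hypothetical sketch: IPCW AUC and Brier score for 5-year absolute risk of
# event 1, relying on Score() calling predictRisk.penCSC internally.
riskRegression::Score(list(penCSC = penfit),
                      formula = prodlim::Hist(time, status) ~ 1,
                      data = Melanoma,
                      cause = 1,
                      times = 365 * 5,
                      metrics = c('auc', 'brier'))
```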
### Tuning the hyper-parameters
The above example was for illustration purposes. In real-world analyses, one must tune the hyper-parameters with respect to a proper loss function through a resampling procedure. `tune_penCSC` is a comprehensive function built for this purpose for regularized cause-specific Cox models.
Like before, variables and hyper-parameters are specified through named lists, and the sequences of candidate hyper-parameters for each involved cause are stored in list positions whose names are that cause. `tune_penCSC` then creates all possible combinations of the user's specified sequences and evaluates them with either the IPCW Brier score or the IPCW AUC (as loss functions), based on absolute risk predictions of the event of interest (linking), through a chosen resampling process. Supported resampling procedures are: cross-validation (`method='cv'`), repeated cross-validation (`method='repcv'`), bootstrap (`method='boot'`), Monte-Carlo or leave-group-out cross-validation (`method='lgocv'`) and leave-one-out cross-validation (`method='loocv'`).
#### Automatic specification of hyper-parameters sequences
`tune_penCSC` can determine the candidate sequences of $\alpha$ & $\lambda$ values automatically. Setting either of `alpha.grid` & `lambda.grid` to `NULL` instructs the function to calculate them itself.
While the automatic sequence of $\alpha$ values for all causes is `seq(0,1,.5)`, the $\lambda$ values are determined automatically as follows:
1. Starting from $\lambda=0$, the algorithm fits LASSO models until it finds a $\lambda$ value that creates a null model in which all variables are shrunk to exactly 0.
2. The obtained $\lambda$ value is used as the maximum of a sequence starting from 0. The length of this sequence is controlled by the values in `nlambdas.list`.
This is done for each cause-specific model, creating an exclusive sequence of $\lambda$ values for each of them.
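For example, a minimal sketch (not run) of a fully automatic specification; the `nlambdas.list` values here are arbitrary illustrations:
```{r eval=FALSE}
tune_penCSC(time = 'time', status = 'status',
            vars.list = vl, data = Melanoma,
            horizons = 365*5, event = 1,
            method = 'cv', k = 5, metrics = 'AUC',
            alpha.grid = NULL,   # automatic: seq(0,1,.5) for every cause
            lambda.grid = NULL,  # automatic: built by the algorithm above
            nlambdas.list = list('1'=20, '2'=20))
```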
#### Pre-processing within resampling
If the data requires pre-processing, it must be done within the resampling process to avoid data leakage. This can be achieved through the `preProc.fun` argument of `tune_penCSC`. This argument accepts a function that takes a data set as its only input and returns a modified version of that data. Any pre-processing steps can be specified within this function.
**Note:** `tune_penCSC` has a parallel processing option. If a user has specified a pre-processing function that relies on global objects or calls from other packages and wants to run the code in parallel, the names of those extra packages and global objects must be given through `preProc.pkgs` and `preProc.globals`.
Now let's see everything mentioned in this section in an example. Say we want to tune our model for 5-year absolute risk prediction of event 1, using the time-dependent (IPCW) AUC as the loss function (evaluation metric), through a 5-fold cross-validation process:
```{r message=T, warning=F}
#Writing a hypothetical pre-processing function
library(recipes)
std.fun <- function(data){
cont_vars <- data %>% select(where(~is.numeric(.))) %>% names
cont_vars <- cont_vars[-which(cont_vars %in% c('time','status'))]
#External functions from recipes package are being used
recipe(~.,data=data) %>%
step_center(all_of(cont_vars)) %>%
step_scale(all_of(cont_vars)) %>%
prep(training=data) %>% juice
}
#Tuning a regularized cause-specific cox
set.seed(455) #for reproducibility
tune_melanoma <- tune_penCSC(time = 'time',
status = 'status',
vars.list = vl,
data = Melanoma,
horizons = 365*5,
event = 1,
method = 'cv',
k = 5,
standardize = FALSE,
metrics = 'AUC',
alpha.grid = list('1'=0,'2'=c(.5,1)),
preProc.fun = std.fun,
parallel = TRUE,
preProc.pkgs = 'recipes')
tune_melanoma$validation_result %>% arrange(desc(mean.AUC)) %>% head
tune_melanoma$final_params
tune_melanoma$final_fits
```
|
/scratch/gouwar.j/cran-all/cranData/CSCNet/vignettes/CSCNet.Rmd
|
#' CSESA (CRISPR-based Salmonella enterica Serotype Analyzer).
#'
#' @description The main function in CSESA package.
#'
#' @param in.file1 The first input file, the default value is NULL.
#' @param in.file2 The second input file (optional), the default value is NULL.
#' @param out.file The output file into which results will be saved if this value is set; otherwise results will be displayed on the screen.
#' @param method The method to handle the input file(s), which can be set as "PCR" or "WGS". Choose "PCR" if the CRISPR sequence(s) from PCR amplification is entered, and choose "WGS" when entering the whole genome assembly of a Salmonella isolate.
#'
#' @note If you use the "WGS" method, please make sure you have installed the BLAST software and that it is available on the system path.
#'
#' @examples
#' CSESA(system.file("extdata", "sequence_CRIPSR1.fasta", package = "CSESA"),
#' system.file("extdata", "sequence_CRIPSR2.fasta", package = "CSESA"), method = "PCR")
#' CSESA(system.file("extdata", "sequence_CRIPSR1.fasta", package = "CSESA"), method = "PCR")
#' CSESA(system.file("extdata", "Salmonella_whole_genome_assembly.fasta",
#' package = "CSESA"), method = "WGS")
#' @importFrom utils read.table
#' @import Biostrings
#'
#' @export
#'
CSESA <- function(in.file1 = NULL, in.file2 = NULL, out.file = NULL, method = c("PCR", "WGS")) {
tryCatch({
if (is.null(in.file1) && is.null(in.file2)) {
stop("No such file(s)!")
}
method <- toupper(method)
method <- match.arg(method)
if (method == "PCR") {
seq1 <- ReadInFile(in.file1)
seq2 <- ReadInFile(in.file2)
PCR(seq1, seq2, out.file)
}
else {
if (is.null(in.file1) == FALSE && file.exists(in.file1)) {
file <- in.file1
if (is.null(in.file2) == FALSE)
print("Warning: under the WGS mode, CSESA would ignore the second file when receiving two input files.")
}
else if (is.null(in.file2) == FALSE && file.exists(in.file2)){
file <- in.file2
}
else {
stop("The input file(s) does not exist!")
}
wgs.list <- WGS(file)
seq1 <- wgs.list$seq1
seq2 <- wgs.list$seq2
PCR(seq1, seq2, out.file)
}
}, error = function(e) {
cat("ERROR :",conditionMessage(e),"\n")
})
}
#' Get the CSESA object from the two sequences.
#'
#' @param seq1 The first DNA sequence.
#' @param seq2 The second DNA sequence.
#' @param out.file The output file into which results will be saved if this value is set; otherwise results will be displayed on the screen.
#'
PCR <- function(seq1, seq2, out.file) {
csesa.result <- list()
csesa.result$spacer1 <- GetAllNewSpacers(seq1)
csesa.result$spacer2 <- GetAllNewSpacers(seq2)
csesa.result$serotype <- FindSerotype(csesa.result$spacer1, csesa.result$spacer2)
class(csesa.result) <- "CSESA"
if (is.null(out.file)) {
cat(GetStr(csesa.result))
}
else {
write(GetStr(csesa.result), file = out.file)
}
}
#' Extract the two CRISPR sequences from a whole genome assembly.
#'
#' @param file The input fasta file.
#'
#' @return A list with the two DNA sequences (seq1 and seq2).
#'
WGS <- function(file) {
# loading the database
db <- system.file("primerDB", package = "CSESA")
db <- file.path(db, "primers")
db2 <- system.file("primerB2DB", package = "CSESA")
db2 <- file.path(db2, "primerB2.fasta")
blastn <- Sys.which("blastn")
if (all(blastn == "")) {
stop("The BLAST software has not been employed. Please install it first and check it within the working path.")
}
blastn = blastn[which(blastn != "")[1]]
tmpwd <- tempdir()
curwd <- getwd()
tmp.prefix <- basename(tempfile(tmpdir = tmpwd))
on.exit({
file.remove(Sys.glob(paste(tmp.prefix, "*")))
setwd(curwd)
})
setwd(tmpwd)
outfile <- paste(tmp.prefix, ".out", sep = "")
infile <- paste(tmp.prefix, ".in", sep = "")
text <- readLines(file)
text <- gsub(pattern = " ", replacement = "_", x = text)
text <- gsub(pattern = ",|#", replacement = "", x = text)
writeLines(text, con = infile)
data <- readDNAStringSet(infile)
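# Query the cleaned assembly against the primer database using short-read BLAST;
# "-outfmt 10" writes comma-separated hits that read.table() parses below.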
system(paste(blastn, "-db", db, "-query", infile, "-out", outfile, "-outfmt 10", "-task blastn-short"), ignore.stdout = FALSE, ignore.stderr = FALSE)
result.table <- read.table(outfile, sep=",", quote = "")
colnames(result.table) <- c("Query_id", "Subject_id", "Perc_ident",
"Align_length", "Mismatches", "Gap_openings", "Query_start", "Query_end",
"S_start", "S_end", "E", "Bits" )
config.primerA2 = result.table[which(result.table$Subject_id == "locusA_primer_R" & result.table$Align_length == 23), ]
config.primerA1 = result.table[which(result.table$Subject_id == "locusA_primer_F" & result.table$Align_length == 20), ]
config.primerB1 = result.table[which(result.table$Subject_id == "locusB_primer_F" & result.table$Align_length == 25), ]
seq1 <- NA
seq2 <- NA
# primer A
if (nrow(config.primerA1) == 1 && nrow(config.primerA2) == 1 && config.primerA1$Query_id == config.primerA2$Query_id) {
id <- as.character(config.primerA1$Query_id)
locus.primerA1 = config.primerA1$Query_start
locus.primerA2 = config.primerA2$Query_start
seq <- ""
idx <- 1
for (tn in names(data)) {
if (startsWith(tn, id)) {
seq <- as.character(unlist(data[idx, ]))
break
}
idx <- idx + 1
}
seq1 <- substr(seq, min(locus.primerA1, locus.primerA2), max(locus.primerA1, locus.primerA2))
}
# primer B
if (nrow(config.primerB1) == 1) {
id = as.character(config.primerB1$Query_id)
locus.primerB1 = config.primerB1$Query_start
# find the primerB2
system(paste(blastn, "-db", db2, "-query", infile, "-out", outfile, "-outfmt 10", "-task blastn-short"), ignore.stdout = FALSE, ignore.stderr = FALSE)
result.table <- read.table(outfile, sep=",", quote = "")
colnames(result.table) <- c("Query_id", "Subject_id", "Perc_ident",
"Align_length", "Mismatches", "Gap_openings", "Query_start", "Query_end",
"S_start", "S_end", "E", "Bits")
seq <- ""
idx <- 1
for (tn in names(data)) {
if (startsWith(tn, id)) {
seq <- as.character(unlist(data[idx, ]))
break
}
idx <- idx + 1
}
if (nrow(result.table) >= 1) {
config.primerB2 = result.table[1, ]
locus.primerB2 = config.primerB2$Query_start
# If the position of primerB2 is in the range [locus.primerB1 - 3000, locus.primerB1 + 3000]
if (locus.primerB2 >= locus.primerB1 - 3000 && locus.primerB2 <= locus.primerB1 + 3000) {
seq2 <- substr(seq, min(locus.primerB1, locus.primerB2), max(locus.primerB1, locus.primerB2))
}
else {
seq2 <- substr(seq, locus.primerB1 - 3000, locus.primerB1 + 3000)
}
}
else {
seq2 <- substr(seq, locus.primerB1 - 3000, locus.primerB1 + 3000)
}
}
return (list(seq1 = seq1, seq2 = seq2))
}
#' Get the new spacers from the molecular sequence and its reverse complement.
#'
#' @param molecular.seq The molecular sequence.
#' @return The vector of the new spacers, which is extracted from the molecular sequence and its reverse complement.
#'
#' @note If no new spacer exists, the function returns NA.
#'
GetAllNewSpacers <- function(molecular.seq = NULL) {
if (is.null(molecular.seq) || is.na(molecular.seq) || molecular.seq == "")
return (NA)
molecular.seq.rev <- GetReverseComplement(molecular.seq)
# handle the cases specific to Typhi
typhi <- "ACGGCTATCCTTGTTGACGTGGGGAATACTGCTACACGCAAAAATTCCAGTCGTTGGCGCACGGTTTATCCCCGCTGGCGCGGGGAACAC"
if(grepl(typhi,molecular.seq) || grepl(typhi,molecular.seq.rev))
return (c("EntB0var1"))
new.spacer <- GetNewSpacerCode(molecular.seq)
new.spacer.rev <- GetNewSpacerCode(molecular.seq.rev)
new.spacer.arr <- character()
if (is.null(new.spacer) == FALSE && is.na(new.spacer) == FALSE)
new.spacer.arr <- c(new.spacer.arr, new.spacer)
if (is.null(new.spacer.rev) == FALSE && is.na(new.spacer.rev) == FALSE)
new.spacer.arr <- c(new.spacer.arr, new.spacer.rev)
if (length(new.spacer.arr) == 0) {
return (NA)
}
else return (new.spacer.arr)
}
#' Find the serotype based on the analysis of the new spacers.
#'
#' @param csesa1 The new spacer of the first sequence.
#' @param csesa2 The new spacer of the second sequence.
#'
#' @return The data frame which represents the serotype.
#'
FindSerotype <- function(csesa1 = NA, csesa2 = NA) {
if (is.na(csesa1) == TRUE && is.na(csesa2) == TRUE) {
stop("Sorry. We did not find any corresponding serotype in the lib!")
}
V1 = V2 = V3 = V4 = NULL
mapping.table <- read.table(system.file("packageData", "mapping_tbl.txt", package = "CSESA"), sep = "\t")
if (is.na(csesa1) == TRUE || is.na(csesa2) == TRUE) {
csesa <- csesa1
if (is.na(csesa1))
csesa <- csesa2
serotype <- subset(mapping.table, is.element(V1, csesa) | is.element(V2, csesa), select = V3)
if (nrow(serotype) > 1)
serotype <- unique(serotype)
}
else {
serotype <- subset(mapping.table, is.element(V1, csesa1) & is.element(V2, csesa2) |
is.element(V1, csesa2) & is.element(V2, csesa1), select = c(V3, V4))
}
return (serotype)
}
#' Get the new spacer from the molecular sequence and map it to the code.
#'
#' @param molecular.seq The molecular sequence.
#' @return The new spacer code as a string.
#'
GetNewSpacerCode <- function(molecular.seq = NULL) {
if (is.null(molecular.seq))
return (NULL)
new.spacer <- GetNewSpacer(molecular.seq)
if (is.null(new.spacer))
return (NULL)
V1 = V3 = NULL
spacers.table <- read.table(system.file("packageData", "spacer_tbl.txt", package = "CSESA"))
as.character(subset(spacers.table, V3 == new.spacer, select = V1)[1, 1])
}
#' Get the new spacer from the molecular sequence.
#'
#' @param molecular.seq The molecular sequence.
#' @return The new spacer sequence as a string.
#'
#' @examples
#' GetNewSpacer("AGAGGCGGACCGAAAAACCGTTTTCAGCCAACGTAT")
#'
#' @export
GetNewSpacer <- function(molecular.seq = NULL) {
if (is.null(molecular.seq))
return (NULL)
max.match <- "-"
max.count <- -1
dr.table <- read.table(system.file("packageData", "dr_tbl.txt", package = "CSESA"))
for (x in dr.table$V3[-1]) {
t <- gregexpr(pattern = x, text = molecular.seq)
if (t[[1]][1] != -1 && length(t[[1]]) > max.count) {
max.match = x
max.count = length(t[[1]])
}
}
if (max.count < 1)
return (NULL)
spacers <- strsplit(molecular.seq, max.match)[[1]]
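# If the sequence ends with the direct repeat, the last split element is a
# complete spacer; otherwise the trailing element is only a partial fragment,
# so take the element before it.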
if (substr(molecular.seq, nchar(molecular.seq) - nchar(max.match) + 1, nchar(molecular.seq)) == max.match) {
return (spacers[length(spacers)])
}
spacers[length(spacers) - 1]
}
#' Get the information string from the CSESA S3 object.
#'
#' @param csesa The S3 object of class CSESA.
#' @return A string recording the new spacers and the serotype information.
#'
GetStr <- function(csesa) {
if (is.null(csesa)) {
stop("The csesa object should be set!")
}
if (is.na(csesa$spacer1))
str <- "The newly incorporated spacer of the first CRISPR sequence is not available for prediction.\n"
else
str <- paste("The newly incorporated spacer in the first CRISPR sequence: ", csesa$spacer1, "\n", sep = "")
if (is.na(csesa$spacer2))
str <- paste0(str, "The newly incorporated spacer of the second CRISPR sequence is not available for prediction.\n")
else
str <- paste(str, "The newly incorporated spacer in the second CRISPR sequence: ", csesa$spacer2, "\n", sep = "")
if (is.na(csesa$spacer1) || is.na(csesa$spacer2)) {
result <- ""
if (is.atomic(csesa$serotype))
result <- csesa$serotype
else
result <- paste(csesa$serotype[, 1], collapse = "] [")
result <- paste("Predicted serotype(s): [", result, sep = "")
str <- paste(str, result, "]", "\n", sep = "")
}
else {
result <- paste(csesa$serotype[, 1], csesa$serotype[, 2])
result <- paste("Predicted serotype(s): [", paste(result, collapse = "] ["), sep = "")
str <- paste(str, result, "]", "\n", sep = "")
}
str
}
#' Read the input file, which may be in any of the three supported formats.
#'
#' @param file.name The input file name.
#' @return The molecular sequence as a string.
#'
ReadInFile <- function(file.name) {
if (is.null(file.name) || is.na(file.name))
return (NA)
if (file.exists(file.name) == FALSE) {
stop(paste0(file.name, " does not exist!"))
}
data <- scan(file.name, what = "", quiet = TRUE)
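# scan() returns whitespace-separated tokens; drop the leading FASTA header
# token (one starting with '>') and paste the rest into a single upper-case
# sequence string.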
if (substring(data[1], 1, 1) == '>')
data <- data[-1]
toupper(paste(data, collapse = ""))
}
#' Return the reverse complement of the sequence.
#'
#' @param x The input sequence.
#' @return The reverse complement sequence as a string.
#'
GetReverseComplement <- function(x) {
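# chartr() substitutes each base with its complement; substring()/rev() then
# reverse the resulting string.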
a <- chartr("ATGC","TACG",x)
paste(rev(substring(a, 1 : nchar(a), 1 : nchar(a))), collapse = "")
}
|
/scratch/gouwar.j/cran-all/cranData/CSESA/R/CSESA.R
|
#' CS Go Achievements
#'
#' This function will return all the CS Go Achievements of the user_id (input).
#'
#' @param api_key string with the key provided by the steam API.
#'
#' PS: If you don't have an API key yet, run \code{vignette("auth", package = "CSGo")} and follow the presented steps.
#'
#' @param user_id string with the steam user ID.
#'
#' Steam ID is the NUMBER OR NAME at the end of your steam profile URL. ex: '76561198263364899'.
#'
#' PS: The user should have a public status.
#'
#' @return data frame with all the CS Go achievements of the user ID.
#'
#' @export
#'
#' @examples
#' \dontrun{
#' ## It is necessary to fill the "api_key" parameter to run the example
#'
#' df_ach <- csgo_api_ach(api_key = 'XXX', user_id = '76561198263364899')
#' }
csgo_api_ach <- function(api_key, user_id)
{
# Achievements
call_cs_ach <- sprintf(
'http://api.steampowered.com/ISteamUserStats/GetPlayerAchievements/v0001/?appid=730&key=%s&steamid=%s',
api_key,
user_id
)
api_query_ach <- httr::GET(call_cs_ach)
api_content_ach <- httr::content(api_query_ach, 'text')
json_content_ach <- jsonlite::fromJSON(api_content_ach, flatten = TRUE)
db_achievements <- as.data.frame(json_content_ach$playerstats$achievements)
# RETURN
return(db_achievements)
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/csgo_api_ach.R
|
#' CS Go Friends
#'
#' This function will return all the CS Go friends of the user_id (input).
#'
#' @param api_key string with the key provided by the steam API.
#'
#' PS: If you don't have an API key yet, run \code{vignette("auth", package = "CSGo")} and follow the presented steps.
#'
#' @param user_id string with the steam user ID.
#'
#' Steam ID is the NUMBER OR NAME at the end of your steam profile URL. ex: '76561198263364899'.
#'
#' PS: The user should have a public status.
#'
#' @return data frame with all the CS Go friends of the user ID.
#'
#' @export
#'
#' @examples
#' \dontrun{
#' ## It is necessary to fill the "api_key" parameter to run the example
#'
#' df_friend <- csgo_api_friend(api_key = 'XXX', user_id = '76561198263364899')
#' }
csgo_api_friend <- function(api_key, user_id)
{
# Friends
call_cs_friend <- sprintf(
'http://api.steampowered.com/ISteamUser/GetFriendList/v0001/?appid=730&relationship=friend&key=%s&steamid=%s',
api_key,
user_id
)
api_query_friend <- httr::GET(call_cs_friend)
api_content_friend <- httr::content(api_query_friend, 'text')
json_content_friend <- jsonlite::fromJSON(api_content_friend, flatten = TRUE)
db_friend <- as.data.frame(json_content_friend$friendslist$friends)
# RETURN
return(db_friend)
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/csgo_api_friend.R
|
#' CS Go User Profile
#'
#' This function will return the CS Go Profile of the user_id (input).
#'
#' @param api_key string with the key provided by the steam API.
#'
#' PS: If you don't have an API key yet, run \code{vignette("auth", package = "CSGo")} and follow the presented steps.
#'
#' @param user_id string OR list with the steam user ID.
#'
#' Steam ID is the NUMBER OR NAME at the end of your steam profile URL. ex: '76561198263364899'.
#'
#' PS: The user should have a public status.
#'
#' @param name logical: if the user_id input is a name, set it to TRUE. ex: 'kevinarndt'.
#'
#' PS: The query by name DOES NOT ALLOW a list of user_id.
#'
#' @return data frame with the CS Go profile(s) of the user ID(s).
#' @export
#'
#' @examples
#' \dontrun{
#' ## It is necessary to fill the "api_key" parameter to run the example
#'
#' df_profile <- csgo_api_profile(api_key = 'XXX', user_id = '76561198263364899')
#'
#' df_profile <- csgo_api_profile(
#' api_key = 'XXX',
#' user_id = list('76561198263364899','76561197996007619')
#' )
#'
#' df_profile <- csgo_api_profile(api_key = 'XXX', user_id = 'kevinarndt', name = TRUE)
#' }
csgo_api_profile <- function(api_key, user_id, name = FALSE)
{
if(name)
{
# Profile by user_name
call_cs_profile <- sprintf(
'http://api.steampowered.com/ISteamUser/ResolveVanityURL/v0001/?&key=%s&vanityurl=%s',
api_key,
user_id
)
api_query_profile <- httr::GET(call_cs_profile)
api_content_profile <- httr::content(api_query_profile, 'text')
json_content_profile <- jsonlite::fromJSON(api_content_profile, flatten = TRUE)
user_id <- json_content_profile$response$steamid
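# ResolveVanityURL maps the profile name to the 64-bit steamid; rebuild the
# query with it. If the name cannot be resolved, steamid is NULL and sprintf()
# returns character(0), which is caught below.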
call_cs_profile <- sprintf(
'http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?&key=%s&steamids=%s',
api_key,
user_id
)
if(identical(call_cs_profile, character(0)))
{
return(as.data.frame(NULL))
}
}
else{
# Transform the ID into a JSON format to query multiple IDS
user_id <- jsonlite::toJSON(user_id)
# Profile by user_id
call_cs_profile <- sprintf(
'http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?&key=%s&steamids=%s',
api_key,
user_id
)
}
api_query_profile <- httr::GET(call_cs_profile)
api_content_profile <- httr::content(api_query_profile, 'text')
json_content_profile <- jsonlite::fromJSON(api_content_profile, flatten = TRUE)
db_profile <- as.data.frame(json_content_profile$response$players)
# RETURN
return(db_profile)
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/csgo_api_profile.R
|
#' CS Go Statistics
#'
#' This function will return all the CS Go Statistics of the user_id (input).
#'
#' @param api_key string with the key provided by the steam API.
#'
#' PS: If you don't have an API key yet, run \code{vignette("auth", package = "CSGo")} and follow the presented steps.
#'
#' @param user_id string with the steam user ID.
#'
#' Steam ID is the NUMBER OR NAME at the end of your steam profile URL. ex: '76561198263364899'.
#'
#' PS: The user should have a public status.
#'
#' @return data frame with all the CS Go statistics of the user ID.
#' @export
#'
#' @examples
#' \dontrun{
#' ## It is necessary to fill the "api_key" parameter to run the example
#'
#' df_stats <- csgo_api_stats(api_key = 'XXX', user_id = '76561198263364899')
#' }
csgo_api_stats <- function(api_key, user_id)
{
# Stats
call_cs_stats <- sprintf(
'http://api.steampowered.com/ISteamUserStats/GetUserStatsForGame/v0002/?appid=730&key=%s&steamid=%s&apiname=%s',
api_key,
user_id,
'1458786300'
)
api_query_stats <- httr::GET(call_cs_stats)
api_content_stats <- httr::content(api_query_stats, 'text')
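# The Steam API returns an HTML 'Internal Server Error' page instead of JSON
# for some requests (presumably private profiles); fall back to an empty
# stats frame in that case.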
if(stringr::str_detect(api_content_stats, 'Internal Server Error'))
{
db_stats <- data.frame(name = NA, value = NA)
}else{
json_content_stats <- jsonlite::fromJSON(api_content_stats, flatten = TRUE)
db_stats <- as.data.frame(json_content_stats$playerstats$stats)
}
return(db_stats)
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/csgo_api_stats.R
|
#' Categories and Descriptions of the Statistics Data
#'
#' A dataset containing the categories, descriptions and types of the statistics data pulled from the csgo_api_stats.
#'
#' @format A data frame with 133 rows and 4 variables:
#' \describe{
#' \item{name_match}{Name to match with the name statistics data.}
#' \item{category}{Category name of the statistic.}
#' \item{desc}{Statistic description.}
#' \item{type}{Statistic type.}
#' ...
#' }
#' @source Created by the author.
"support"
#' Maps Images
#'
#' A dataset containing the pictures of each map.
#'
#' @format A data frame with 34 rows and 2 variables:
#' \describe{
#' \item{map_name}{Name of the map.}
#' \item{map_photo}{The image address.}
#' ...
#' }
#' @source Created by the author.
"map_pictures"
#' Weapon Images
#'
#' A dataset containing the pictures of each weapon.
#'
#' @format A data frame with 34 rows and 2 variables:
#' \describe{
#' \item{weapon_name}{Name of the weapon.}
#' \item{weapon_photo}{The image address.}
#' ...
#' }
#' @source Created by the author.
"weapon_pictures"
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/data.R
|
#' Get the Friends Statistics
#'
#' This function will return the complete CS Go Statistics for all public friends of the user_id (input).
#'
#' @param api_key string with the key provided by the steam API.
#'
#' PS: If you don't have an API key yet, run \code{vignette("auth", package = "CSGo")} and follow the presented steps.
#'
#' @param user_id string with the steam user ID.
#'
#' Steam ID is the NUMBER OR NAME at the end of your steam profile URL. ex: '76561198263364899'.
#'
#' PS: The user should have a public status.
#'
#' @param n_return numeric indicating the number of friends to return, to return all use n_return = "all" (the default is "all").
#'
#' @return a list of two data frames
#'
#' friends_stats: data frame with all the CS Go statistics of all public friends of the user ID.
#'
#' friends: data frame with all the CS Go friends of the user ID (public and non public).
#'
#' @export
#'
#' @examples
#' \dontrun{
#' ## It is necessary to fill the "api_key" parameter to run the example
#'
#' # set the "plan" to collect the data in parallel!!!!
#' future::plan(future::multisession, workers = parallel::detectCores())
#'
#' fr_list <- get_stats_friends(api_key = 'XXX', user_id = '76561198263364899')
#' fr_list$friends_stats
#' fr_list$friends
#' }
get_stats_friends <- function(api_key, user_id, n_return = 'all')
{
# COLLECT THE PROFILE BY USER NAME OR BY USER ID
# it will depend on the type of user_id
if(is.na(as.numeric(user_id)))
{
user_profile <- csgo_api_profile(api_key, user_id, name = TRUE)
user_id <- as.character(as.vector(user_profile$steamid))
}else{
user_id <- as.character(user_id)
}
# Get Friends IDs
friend_list <- csgo_api_friend(api_key, user_id)
# SPLITING THE IDs by 100 (each query allows max 100 user_id)
f_steamid <- split(friend_list$steamid, ceiling(seq_along(friend_list$steamid)/100))
# VERIFY IF THE USER IS PUBLIC OR NOT
print("Public friends check..")
f_profile <- furrr::future_map2_dfr(
.x = api_key,
.y = f_steamid,
.f = purrr::possibly(csgo_api_profile,"Cant retrieve data")
)
# Verify public friends
f_profile <- f_profile %>%
dplyr::mutate(
public = ifelse(
as.numeric(f_profile$communityvisibilitystate) > 1,
"Public",
"Not Public"
)
) %>%
dplyr::left_join(friend_list, by = c("steamid" = "steamid"))
friend_list2 <- f_profile %>%
dplyr::filter(public == "Public")
# N FRIENDS TO RETURN
if(is.numeric(n_return) && nrow(friend_list2) >= n_return) # && short-circuits, skipping the comparison when n_return = 'all'
{
friend_list2 <- friend_list2 %>%
dplyr::top_n(n = n_return, wt = friend_list2$friend_since)
}
print("Pulling friends stats..")
return_list <- list()
if(nrow(friend_list2) > 0)
{
db_friends_complete <- furrr::future_map2_dfr(
.x = api_key,
.y = as.character(friend_list2$steamid),
.f = purrr::possibly(get_stats_user,"Cant retrieve data")
)
db_friends_complete <- db_friends_complete %>%
dplyr::filter(!is.na(value))
return_list$friends_stats <- db_friends_complete
return_list$friends <- friend_list2
}else{
return_list$friends_stats <- 'NO PUBLIC FRIENDS'
return_list$friends <- friend_list2
}
return(return_list)
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/get_stats_friends.R
|
#' Get the User Statistics
#'
#' This function will return the complete CS Go Statistics of the user_id (input).
#'
#' Similar to the csgo_api_stats function but it will return a clean data frame with category and description of each statistic.
#'
#' @param api_key string with the key provided by the steam API.
#'
#' PS: If you don't have an API key yet, run \code{vignette("auth", package = "CSGo")} and follow the presented steps.
#'
#' @param user_id string with the steam user ID.
#'
#' Steam ID is the NUMBER OR NAME at the end of your steam profile URL. ex: '76561198263364899'.
#'
#' PS: The user should have a public status.
#'
#' @return data frame with all the CS Go statistics (divided in categories and subcategories) of the user ID.
#'
#' @export
#'
#' @examples
#' \dontrun{
#' ## It is necessary to fill the "api_key" parameter to run the example
#'
#' df <- get_stats_user(api_key = 'XXX', user_id = '76561198263364899')
#' }
get_stats_user <- function(api_key, user_id)
{
# COLLECT THE PROFILE BY USER NAME OR BY USER ID
# it will depend on the type of user_id
if(is.na(as.numeric(user_id)))
{
user_id <- as.character(csgo_api_profile(api_key, user_id, name = TRUE)$steamid)
} else{
user_id <- as.character(user_id)
}
# COLLECT THE DATA
stats <- csgo_api_stats(api_key,user_id)
profile_name <- csgo_api_profile(api_key,user_id)$personaname
# INCLUDING LABELS AND CATEGORIES
stats <- fuzzyjoin::fuzzy_left_join(
stats,
support,
by = c('name' = 'name_match'),
match_fun = stringr::str_detect)
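# fuzzy_left_join with str_detect matches when a 'name_match' pattern from the
# support table (e.g. 'ak47') occurs inside the raw stat name.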
# REMOVE DUPLICATES
aux <- stats %>%
dplyr::filter(stats$type == 'maps', stats$name_match == 'dust2')
stats <- stats %>%
dplyr::anti_join(aux, by = c("value"="value")) %>%
dplyr::bind_rows(aux) %>%
dplyr::mutate(player_name = profile_name)
# RETURN
return(stats)
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/get_stats_user.R
|
utils::globalVariables(c('public','support','value'))
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/globals.R
|
#' CSGo color palette - color
#'
#' A color palette (color) to be used with \code{ggplot2}
#'
#' @param discrete logical: if TRUE it will generate a discrete palette, otherwise a continuous palette
#' @param ... all available options of the \code{discrete_scale} function or \code{scale_color_gradientn} both from \code{ggplot2}
#'
#' @return \code{scale_color} object
#' @export
#'
#' @examples
#' \dontrun{
#' library(CSGo)
#' library(ggplot2)
#' library(dplyr)
#' library(showtext)
#'
#' ## Loading Google fonts (https://fonts.google.com/)
#' font_add_google("Quantico", "quantico")
#'
#' df %>%
#' top_n(n = 10, wt = kills) %>%
#' ggplot(aes(x = name_match, size = shots)) +
#' geom_point(aes(y = kills_efficiency, color = "Kills Efficiency")) +
#' geom_point(aes(y = hits_efficiency, color = "Hits Efficiency")) +
#' geom_point(aes(y = hits_to_kill, color = "Hits to Kill")) +
#' ggtitle("Weapon Efficiency") +
#' ylab("Efficiency (%)") +
#' xlab("") +
#' labs(color = "Efficiency Type", size = "Shots") +
#' theme_csgo(
#' text = element_text(family = "quantico"),
#' panel.grid.major.x = element_line(size = .1, color = "black",linetype = 2)
#' ) +
#' scale_color_csgo()
#' }
scale_color_csgo <- function(discrete = TRUE, ...)
{
pal <- grDevices::colorRampPalette(
colors = c(
"#5d79ae",
"#0c0f12",
"#ccba7c",
"#413a27",
"#de9b35"
)
)
if(discrete)
{
ggplot2::discrete_scale(
aesthetics = "color",
scale_name = 'csgo',
palette = pal,
...
)
} else {
ggplot2::scale_color_gradientn(colours = pal(300), ...)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/scale_color_csgo.R
|
#' CSGo color palette - fill
#'
#' A color palette (fill) to be used with \code{ggplot2}
#'
#' @param discrete logical: if TRUE it will generate a discrete palette, otherwise a continuous palette
#' @param ... all available options of the \code{discrete_scale} function or \code{scale_fill_gradientn} both from \code{ggplot2}
#'
#' @return \code{scale_fill} object
#' @export
#'
#' @examples
#' \dontrun{
#' library(CSGo)
#' library(ggplot2)
#' library(dplyr)
#' library(showtext)
#'
#' ## Loading Google fonts (https://fonts.google.com/)
#' font_add_google("Quantico", "quantico")
#'
#' df %>%
#' top_n(n = 10, wt = value) %>%
#' ggplot(aes(x = name_match, y = value, fill = name_match)) +
#' geom_col() +
#' ggtitle("KILLS BY WEAPON") +
#' ylab("Number of Kills") +
#' xlab("") +
#' labs(fill = "Weapon Name") +
#' theme_csgo(text = element_text(family = "quantico")) +
#' scale_fill_csgo()
#' }
scale_fill_csgo <- function(discrete = TRUE, ...)
{
pal <- grDevices::colorRampPalette(
colors = c(
"#5d79ae",
"#0c0f12",
"#ccba7c",
"#413a27",
"#de9b35"
)
)
if(discrete)
{
ggplot2::discrete_scale(
aesthetics = "fill",
scale_name = 'csgo',
palette = pal,
...
)
} else {
ggplot2::scale_fill_gradientn(colours = pal(300), ...)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/scale_fill_csgo.R
|
#' CSGo theme
#'
#' A CSGo theme to be used with \code{ggplot2}
#'
#' @param ... all available options of the \code{theme} function from \code{ggplot2}
#'
#' @return \code{theme} object
#' @export
#'
#' @examples
#' \dontrun{
#' library(CSGo)
#' library(ggplot2)
#' library(dplyr)
#' library(showtext)
#'
#' ## Loading Google fonts (https://fonts.google.com/)
#' font_add_google("Quantico", "quantico")
#'
#' df %>%
#' top_n(n = 10, wt = value) %>%
#' ggplot(aes(x = name_match, y = value, fill = name_match)) +
#' geom_col() +
#' ggtitle("KILLS BY WEAPON") +
#' ylab("Number of Kills") +
#' xlab("") +
#' labs(fill = "Weapon Name") +
#' theme_csgo(text = element_text(family = "quantico"))
#'
#' }
theme_csgo <- function(...)
{
ggplot2::theme_bw() +
ggplot2::theme(
strip.text = ggplot2::element_text(color = 'white', face = 'bold'),
panel.grid = ggplot2::element_blank(),
plot.title = ggplot2::element_text(hjust = 0.5),
axis.title = ggplot2::element_text(face = 'bold'),
axis.text = ggplot2::element_text(color = 'black'),
axis.text.x = ggplot2::element_text(angle = 45, hjust = 1),
...
)
}
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/theme_csgo.R
|
#' Pipe operator
#'
#' See \code{magrittr::\link[magrittr:pipe]{\%>\%}} for details.
#'
#' @name %>%
#' @rdname pipe
#' @keywords internal
#' @export
#' @importFrom magrittr %>%
#' @usage lhs \%>\% rhs
#' @param lhs A value or the magrittr placeholder.
#' @param rhs A function call using the magrittr semantics.
#' @return The result of calling `rhs(lhs)`.
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSGo/R/utils-pipe.R
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----example1, eval=FALSE-----------------------------------------------------
# library(CSGo)
#
# df <- csgo_api_profile(api_key = 'your_key', user_id = '76561198263364899')
#
## ----example2, eval=FALSE-----------------------------------------------------
# library(CSGo)
#
# df1 <- csgo_api_profile(api_key = 'your_key', user_id = '76561198263364899')
# df2 <- csgo_api_profile(api_key = 'your_key', user_id = 'generalcapivara', name = TRUE)
#
|
/scratch/gouwar.j/cran-all/cranData/CSGo/inst/doc/auth.R
|
---
title: "Obtaining the Steam API key"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{auth}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette covers how to create your own Steam API key so that you can use the `CSGo` package to pull data from the Steam API.
## Creating a Steam Account
The very first step to getting your own Steam API credentials is to have a **Steam Account**. To do that, access [this link](https://store.steampowered.com/join) and fill in the requested information.
**PS**: Keep in mind that you will need to agree to the [Steam Api Terms of Use](https://steamcommunity.com/dev/apiterms), and that you must have **spent more than 5.00 USD in the Steam Store** to be able to get the API key.
## API Key Request
Now that you are a Steam member and have already spent more than 5.00 USD in the store, you should be able to request your own Steam API Key. Enter [here](https://steamcommunity.com/login/home/?goto=%2Fdev%2Fapikey) and sign in with your Steam Account Name and Password.
<p align="center"><img src="img/sign_in.png" width="500px" alt="sign_in"></p>
On the next screen, fill in your Steam Domain Name, confirm the [Steam Api Terms of Use](https://steamcommunity.com/dev/apiterms) check box and click the "Register" button.
<p align="center"><img src="img/register_steam_api.png" width="500px" alt="register"></p>
If everything went well, you should see a screen like the one below with your own **Steam Web API Key**.
<p align="center"><img src="img/the_key.png" width="500px" alt="key"></p>
## Key Test
Now that you have your own Steam API Key, you should be able to use any function in the `CSGo` package.
Try running the following, filling in your own API key:
```{r example1, eval=FALSE}
library(CSGo)
df <- csgo_api_profile(api_key = 'your_key', user_id = '76561198263364899')
```
## Finding the Users
The second mandatory argument is `user_id`, which represents the **Steam ID**. The Steam ID is the **number OR name** at the end of the Steam profile URL.
### Example
If the Steam Profile URL is https://steamcommunity.com/profiles/76561198263364899/ the user_id is **76561198263364899**.
If the Steam Profile URL is https://steamcommunity.com/id/generalcapivara/ the user_id is **generalcapivara**.
**IMPORTANT**: The Steam profile must have *"Public"* status for you to be able to get specific information like Achievements and Statistics.
```{r example2, eval=FALSE}
library(CSGo)
df1 <- csgo_api_profile(api_key = 'your_key', user_id = '76561198263364899')
df2 <- csgo_api_profile(api_key = 'your_key', user_id = 'generalcapivara', name = TRUE)
```
That's it!!!
|
/scratch/gouwar.j/cran-all/cranData/CSGo/inst/doc/auth.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----eval = FALSE-------------------------------------------------------------
# library(CSGo)
#
# # to get the statistics of the user 76561198263364899
# rodrigo_stats <- get_stats_user(api_key = 'your_key', user_id = '76561198263364899')
#
## ----eval = FALSE-------------------------------------------------------------
# library(dplyr)
# library(stringr)
#
# rodrigo_weapon_kill <- rodrigo_stats %>%
# filter(
# str_detect(name, 'kill'),
# type == ' weapon info'
# ) %>%
# arrange(desc(value))
#
## ----eval = FALSE-------------------------------------------------------------
# library(ggplot2)
# library(showtext)
#
# ## Loading Google fonts (https://fonts.google.com/)
# font_add_google("Quantico", "quantico")
#
# rodrigo_weapon_kill %>%
# top_n(n = 10, wt = value) %>%
# ggplot(aes(x = name_match, y = value, fill = name_match)) +
# geom_col() +
# ggtitle("KILLS BY WEAPON") +
# ylab("Number of Kills") +
# xlab("") +
# labs(fill = "Weapon Name") +
# theme_csgo(text = element_text(family = "quantico")) +
# scale_fill_csgo()
#
## ----eval = TRUE, message = FALSE, echo = FALSE-------------------------------
library(knitr)
rodrigo_efficiency <- readRDS('data/rodrigo_efficiency.RDS')
## ----eval = FALSE, message = FALSE, results='asis'----------------------------
#
# rodrigo_efficiency <- rodrigo_stats %>%
# filter(
# name_match %in% c("ak47", "aug", "awp", "fiveseven",
# "hkp2000", "m4a1", "mp7", "p90",
# "sg556", "xm1014")
# ) %>%
# mutate(
# stat_type = case_when(
# str_detect(name, "shots") ~ "shots",
# str_detect(name, "hits") ~ "hits",
# str_detect(name, "kills") ~ "kills"
# )
# ) %>%
# pivot_wider(
# names_from = stat_type,
# id_cols = name_match,
# values_from = value
# ) %>%
# mutate(
# kills_efficiency = kills/shots*100,
# hits_efficiency = hits/shots*100,
# hits_to_kill = kills/hits*100
# )
#
# kbl(rodrigo_efficiency) %>%
# kable_styling()
## ----eval = TRUE, message = FALSE, echo = FALSE-------------------------------
knitr::kable(rodrigo_efficiency)
## ----eval = FALSE-------------------------------------------------------------
# rodrigo_efficiency %>%
# top_n(n = 10, wt = kills) %>%
# ggplot(aes(x = name_match, size = shots)) +
# geom_point(aes(y = kills_efficiency, color = "Kills Efficiency")) +
# geom_point(aes(y = hits_efficiency, color = "Hits Efficiency")) +
# geom_point(aes(y = hits_to_kill, color = "Hits to Kill")) +
# ggtitle("WEAPON EFFICIENCY") +
# ylab("Efficiency (%)") +
# xlab("") +
# labs(color = "Efficiency Type", size = "Shots") +
# theme_csgo(
# text = element_text(family = "quantico"),
# panel.grid.major.x = element_line(size = .1, color = "black",linetype = 2)
# ) +
# scale_color_csgo()
#
|
/scratch/gouwar.j/cran-all/cranData/CSGo/inst/doc/usecase.R
|
---
title: "A simple use case"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{usecase}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette covers how to use some functions from the `CSGo` package in a use case format.
## Data Extraction and Analysis
The first step to using the `CSGo` package is to have your own credentials (API key) to pull the CSGo data from the Steam API. For more information about how to get your own API Key, run `vignette("auth", package = "CSGo")` in your R session.
Now that you have your *API Key*, you should be able to collect your own CSGo data as well as your friends' data. I hope my friend [Rodrigo](https://github.com/Rodrigo-Fontoura) doesn't mind us playing with his data (he is '76561198263364899')!
First let's pull his CSGo statistics:
```{r eval = FALSE}
library(CSGo)
# to get the statistics of the user 76561198263364899
rodrigo_stats <- get_stats_user(api_key = 'your_key', user_id = '76561198263364899')
```
Let's filter the obtained data frame by "kills" and "weapon" to create an analysis of kills by weapon type.
```{r eval = FALSE}
library(dplyr)
library(stringr)
rodrigo_weapon_kill <- rodrigo_stats %>%
filter(
str_detect(name, 'kill'),
type == ' weapon info'
) %>%
arrange(desc(value))
```
Now let's take a look at the graphic!
*PS*: To make the graphic even more beautiful I recommend getting the "Quantico" font from Google fonts using the `showtext` package!
```{r eval = FALSE}
library(ggplot2)
library(showtext)
## Loading Google fonts (https://fonts.google.com/)
font_add_google("Quantico", "quantico")
rodrigo_weapon_kill %>%
top_n(n = 10, wt = value) %>%
ggplot(aes(x = name_match, y = value, fill = name_match)) +
geom_col() +
ggtitle("KILLS BY WEAPON") +
ylab("Number of Kills") +
xlab("") +
labs(fill = "Weapon Name") +
theme_csgo(text = element_text(family = "quantico")) +
scale_fill_csgo()
```
<p align="center"><img src="img/kills_weapon.png" width="100%" alt="kills_weapon"></p>
So, these are the top 10 weapons by kills, but what about efficiency? Is the **ak47** Rodrigo's most efficient weapon? First, let's define "efficiency":
* **kills_efficiency**: the percentage of shots that result in a kill (ex: 32% of shots kill)
* **hits_efficiency**: the percentage of shots that hit; this is more related to Rodrigo's ability with each weapon (ex: 35% of shots hit).
* **hits_to_kill**: the percentage of hits that result in a kill; this is more related to the weapon's power/efficiency (ex: 91% of hits kill).
```{r eval = TRUE, message = FALSE, echo = FALSE}
library(knitr)
rodrigo_efficiency <- readRDS('data/rodrigo_efficiency.RDS')
```
```{r eval = FALSE, message = FALSE, results='asis'}
rodrigo_efficiency <- rodrigo_stats %>%
filter(
name_match %in% c("ak47", "aug", "awp", "fiveseven",
"hkp2000", "m4a1", "mp7", "p90",
"sg556", "xm1014")
) %>%
mutate(
stat_type = case_when(
str_detect(name, "shots") ~ "shots",
str_detect(name, "hits") ~ "hits",
str_detect(name, "kills") ~ "kills"
)
) %>%
pivot_wider(
names_from = stat_type,
id_cols = name_match,
values_from = value
) %>%
mutate(
kills_efficiency = kills/shots*100,
hits_efficiency = hits/shots*100,
hits_to_kill = kills/hits*100
)
kbl(rodrigo_efficiency) %>%
kable_styling()
```
```{r eval = TRUE, message = FALSE, echo = FALSE}
knitr::kable(rodrigo_efficiency)
```
```{r eval = FALSE}
rodrigo_efficiency %>%
top_n(n = 10, wt = kills) %>%
ggplot(aes(x = name_match, size = shots)) +
geom_point(aes(y = kills_efficiency, color = "Kills Efficiency")) +
geom_point(aes(y = hits_efficiency, color = "Hits Efficiency")) +
geom_point(aes(y = hits_to_kill, color = "Hits to Kill")) +
ggtitle("WEAPON EFFICIENCY") +
ylab("Efficiency (%)") +
xlab("") +
labs(color = "Efficiency Type", size = "Shots") +
theme_csgo(
text = element_text(family = "quantico"),
panel.grid.major.x = element_line(size = .1, color = "black",linetype = 2)
) +
scale_color_csgo()
```
<p align="center"><img src="img/weapon_effic.png" width="100%" alt="weapon_effic"></p>
In conclusion, I would advise Rodrigo to use the **awp** in his next games, because this weapon showed the best efficiency in terms of **shots to kill**, **shots to hit**, and **hits to kill**. But we definitely need more shots with this weapon to see if the efficiency holds up... hahahaha
|
/scratch/gouwar.j/cran-all/cranData/CSGo/inst/doc/usecase.Rmd
|
---
title: "Obtaining the Steam API key"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{auth}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette covers how to create your own Steam API key so that you can use the `CSGo` package to pull data from the Steam API.
## Creating a Steam Account
The very first step to getting your own Steam API credentials is to have a **Steam Account**. To do that, access [this link](https://store.steampowered.com/join) and fill in the requested information.
**PS**: Keep in mind that you will need to agree to the [Steam Api Terms of Use](https://steamcommunity.com/dev/apiterms), and that you must have **spent more than 5.00 USD in the Steam Store** to be able to get the API key.
## API Key Request
Now that you are a Steam member and have already spent more than 5.00 USD in the store, you should be able to request your own Steam API Key. Enter [here](https://steamcommunity.com/login/home/?goto=%2Fdev%2Fapikey) and sign in with your Steam Account Name and Password.
<p align="center"><img src="img/sign_in.png" width="500px" alt="sign_in"></p>
On the next screen, fill in your Steam Domain Name, confirm the [Steam Api Terms of Use](https://steamcommunity.com/dev/apiterms) check box and click the "Register" button.
<p align="center"><img src="img/register_steam_api.png" width="500px" alt="register"></p>
If everything went well, you should see a screen like the one below with your own **Steam Web API Key**.
<p align="center"><img src="img/the_key.png" width="500px" alt="key"></p>
## Key Test
Now that you have your own Steam API Key, you should be able to use any function in the `CSGo` package.
Try running the following, filling in your own API key:
```{r example1, eval=FALSE}
library(CSGo)
df <- csgo_api_profile(api_key = 'your_key', user_id = '76561198263364899')
```
## Finding the Users
The second mandatory argument is `user_id`, which represents the **Steam ID**. The Steam ID is the **number OR name** at the end of the Steam profile URL.
### Example
If the Steam Profile URL is https://steamcommunity.com/profiles/76561198263364899/ the user_id is **76561198263364899**.
If the Steam Profile URL is https://steamcommunity.com/id/generalcapivara/ the user_id is **generalcapivara**.
**IMPORTANT**: The Steam profile must have *"Public"* status for you to be able to get specific information like Achievements and Statistics.
```{r example2, eval=FALSE}
library(CSGo)
df1 <- csgo_api_profile(api_key = 'your_key', user_id = '76561198263364899')
df2 <- csgo_api_profile(api_key = 'your_key', user_id = 'generalcapivara', name = TRUE)
```
That's it!!!
|
/scratch/gouwar.j/cran-all/cranData/CSGo/vignettes/auth.Rmd
|
---
title: "A simple use case"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{usecase}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette covers how to use some functions from the `CSGo` package in a use case format.
## Data Extraction and Analysis
The first step to using the `CSGo` package is to have your own credentials (API key) to pull the CSGo data from the Steam API. For more information about how to get your own API Key, run `vignette("auth", package = "CSGo")` in your R session.
Now that you have your *API Key*, you should be able to collect your own CSGo data as well as your friends' data. I hope my friend [Rodrigo](https://github.com/Rodrigo-Fontoura) doesn't mind us playing with his data (he is '76561198263364899')!
First let's pull his CSGo statistics:
```{r eval = FALSE}
library(CSGo)
# to get the statistics of the user 76561198263364899
rodrigo_stats <- get_stats_user(api_key = 'your_key', user_id = '76561198263364899')
```
Let's filter the obtained data frame by "kills" and "weapon" to create an analysis of kills by weapon type.
```{r eval = FALSE}
library(dplyr)
library(stringr)
rodrigo_weapon_kill <- rodrigo_stats %>%
filter(
str_detect(name, 'kill'),
type == ' weapon info'
) %>%
arrange(desc(value))
```
Now let's take a look at the graphic!
*PS*: To make the graphic even more beautiful I recommend getting the "Quantico" font from Google fonts using the `showtext` package!
```{r eval = FALSE}
library(ggplot2)
library(showtext)
## Loading Google fonts (https://fonts.google.com/)
font_add_google("Quantico", "quantico")
rodrigo_weapon_kill %>%
top_n(n = 10, wt = value) %>%
ggplot(aes(x = name_match, y = value, fill = name_match)) +
geom_col() +
ggtitle("KILLS BY WEAPON") +
ylab("Number of Kills") +
xlab("") +
labs(fill = "Weapon Name") +
theme_csgo(text = element_text(family = "quantico")) +
scale_fill_csgo()
```
<p align="center"><img src="img/kills_weapon.png" width="100%" alt="kills_weapon"></p>
So, these are the top 10 weapons by kills, but what about efficiency? Is the **ak47** Rodrigo's most efficient weapon? First, let's define "efficiency":
* **kills_efficiency**: the percentage of shots that result in a kill (ex: 32% of shots kill)
* **hits_efficiency**: the percentage of shots that hit; this is more related to Rodrigo's ability with each weapon (ex: 35% of shots hit).
* **hits_to_kill**: the percentage of hits that result in a kill; this is more related to the weapon's power/efficiency (ex: 91% of hits kill).
```{r eval = TRUE, message = FALSE, echo = FALSE}
library(knitr)
rodrigo_efficiency <- readRDS('data/rodrigo_efficiency.RDS')
```
```{r eval = FALSE, message = FALSE, results='asis'}
rodrigo_efficiency <- rodrigo_stats %>%
filter(
name_match %in% c("ak47", "aug", "awp", "fiveseven",
"hkp2000", "m4a1", "mp7", "p90",
"sg556", "xm1014")
) %>%
mutate(
stat_type = case_when(
str_detect(name, "shots") ~ "shots",
str_detect(name, "hits") ~ "hits",
str_detect(name, "kills") ~ "kills"
)
) %>%
pivot_wider(
names_from = stat_type,
id_cols = name_match,
values_from = value
) %>%
mutate(
kills_efficiency = kills/shots*100,
hits_efficiency = hits/shots*100,
hits_to_kill = kills/hits*100
)
kbl(rodrigo_efficiency) %>%
kable_styling()
```
```{r eval = TRUE, message = FALSE, echo = FALSE}
knitr::kable(rodrigo_efficiency)
```
```{r eval = FALSE}
rodrigo_efficiency %>%
top_n(n = 10, wt = kills) %>%
ggplot(aes(x = name_match, size = shots)) +
geom_point(aes(y = kills_efficiency, color = "Kills Efficiency")) +
geom_point(aes(y = hits_efficiency, color = "Hits Efficiency")) +
geom_point(aes(y = hits_to_kill, color = "Hits to Kill")) +
ggtitle("WEAPON EFFICIENCY") +
ylab("Efficiency (%)") +
xlab("") +
labs(color = "Efficiency Type", size = "Shots") +
theme_csgo(
text = element_text(family = "quantico"),
panel.grid.major.x = element_line(size = .1, color = "black",linetype = 2)
) +
scale_color_csgo()
```
<p align="center"><img src="img/weapon_effic.png" width="100%" alt="weapon_effic"></p>
In conclusion, I would advise Rodrigo to use the **awp** in his next games, because this weapon showed the best efficiency in terms of **shots to kill**, **shots to hit**, and **hits to kill**. But we definitely need more shots with this weapon to see if the efficiency holds up... hahahaha
|
/scratch/gouwar.j/cran-all/cranData/CSGo/vignettes/usecase.Rmd
|
#' Basic data manipulation functions
#' @name Basic_data_manipulation_functions
#' @description These functions read in or convert values among formats
#' \describe{
#' \item{ch_read_ECDE_flows}{Reads a file of WSC daily flows from ECDataExplorer}
#' \item{ch_get_ECDE_metadata}{Reads station meta data from ECDataExplorer}
#' \item{ch_get_wscstation}{Reads station information from a data file produced by ECDE}
#' \item{ch_read_AHCCD_daily}{Reads file of daily AHCCD values}
#' \item{ch_read_AHCCD_monthly}{Reads file of monthly AHCCD values}
#' \item{ch_tidyhydat_ECDE}{Reads flows using \pkg{tidyhydat} and converts to ECDE format}
#' \item{ch_tidyhydat_ECDE_meta}{Reads station meta data using \pkg{tidyhydat} and converts to ECDE-like format}
#' }
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/Basic_data_manipulations_functions.R
|
#' CAN05AA008
#'
#' A dataframe of Water Survey of Canada (WSC) daily flows for station 05AA008, CROWSNEST RIVER AT FRANK, Alberta. Drainage area 403 km\eqn{^2}{^2}.
#'
#' @format A dataframe with 25252 rows and 5 columns spanning the period 1910-2013.
#' @source Water Survey of Canada
#' @details
#' Variables:
#' \describe{
#' \item{ID}{StationID}
#' \item{PARAM}{Parameter 1=Flow, 2=Level}
#' \item{Date}{\R date}
#' \item{Flow}{Daily flow in m\eqn{^3}{^3}/s}
#' \item{SYM}{Water Survey flags A, B, D, E}
#' }
#'
"CAN05AA008"
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/CAN05AA008.R
|
#' @title Functions for Canadian hydrological analyses
#' @docType package
#' @name CSHShydRology-package
#'
#' @description \pkg{CSHShydRology} is intended for the use of hydrologists, particularly those in Canada. It will contain functions
#' which focus on the use of Canadian data sets, such as those from Environment Canada. The package will also contain functions which
#' are suited to Canadian hydrology, such as the important cold-region hydrological processes. \pkg{CSHShydRology} will also contain
#' functions which work with Canadian hydrological models, such as Raven, CRHM, Watflood, and MESH.
#'
#' This package has been developed with the assistance of the Canadian Society for Hydrological Sciences (CSHS) \url{https://cshs.cwra.org/en}
#' which is an affiliated society of the Canadian Water Resources Association (CWRA) \url{https://cwra.org/}.
#'
#' The \pkg{CSHShydRology} package will contain functions grouped into several themes, including:
#' \describe{
#' \item{Statistical hydrology}{trend detection, data screening, frequency analysis, regionalization}
#' \item{Basic data manipulations}{input/conversion/adapter functions, missing data infilling}
#' \item{Visualization}{data visualization, standardized plotting functions}
#' \item{Spatial hydrology}{basin delineation, landscape data analysis, working with GIS}
#' \item{Streamflow measurement analysis}{rating curve analysis, velocity profiles, naturalization}
#' \item{Network design/analysis}{homogeneity assessment}
#' \item{Ecohydrology}{fisheries and ecological analysis}
#' \item{Wrappers/unwrappers}{between other packages and \pkg{CSHShydRology}}
#'}
#' @references
#' To cite \pkg{CSHShydRology} in publications, use the command \code{citation("CSHShydRology")}
#' to get the current version of this citation.\cr
#'
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/CSHShydRology-package.R
|
#' Floodnet functions
#' @name Floodnet-functions
#' @description These functions were written by Martin Durocher as part of the
#' FloodNet program \url{https://www.nsercfloodnet.ca/}.
#' The functions perform flood-frequency analyses, and are all prefixed
#' with "ch_rfa_" so that they can be identified.
#'
#' Currently, only a few of the functions have been added to \pkg{CSHShydRology}.
#' We are adding more as time permits.
#' \describe{
#' \item{ch_rfa_extractamax}{Extracts the annual maxima of a daily time series}
#' \item{ch_rfa_distseason}{Distances between points in seasonal space}
#' \item{ch_rfa_julianplot}{Empty rose plot by day of year}
#' \item{ch_rfa_seasonstat}{Seasonal statistics for flood peaks}
#' }
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/Floodnet_functions.R
|
#' List of Water Survey of Canada hydrometric stations.
#'
#' @description A dataframe of station information, as extracted from HYDAT using ECDataExplorer.
#'
#' @format A dataframe with a row for each station and 20 columns.
#' @source Water Survey of Canada
#' @details
#' Variables:
#' \describe{
#' \item{Station}{StationID}
#' \item{StationName}{Station Name}
#' \item{HYDStatus}{Active or Discontinued}
#' \item{Prov}{Province}
#' \item{Latitude}{Station latitude (decimal degrees)}
#' \item{Longitude}{Station longitude (decimal degrees)}
#' \item{DrainageArea}{km\eqn{^2}{^2}}
#' \item{Years}{Number of years with data}
#' \item{From}{Start Year}
#' \item{To}{End Year}
#' \item{Reg.}{Regulated}
#' \item{Flow}{If TRUE/Yes, flow data are available}
#' \item{Level}{If TRUE/Yes, level data are available}
#' \item{Sed}{If TRUE/Yes, sediment data are available}
#' \item{OperSched}{Continuous or Seasonal}
#' \item{RealTime}{If TRUE/Yes, real-time data are available}
#' \item{RHBN}{If TRUE/Yes the station is in the reference hydrologic basin network}
#' \item{Region}{ECCC Region}
#' \item{Datum}{Reference datum}
#' \item{Operator}{Operator}
#' }
#'
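#' @examples
#' # A minimal sketch using the columns documented above; province codes
#' # are assumed to be abbreviations such as "AB".
#' data(HYDAT_list)
#' ab_active <- HYDAT_list[HYDAT_list$HYDStatus == "Active" &
#'   HYDAT_list$Prov == "AB", ]
#' head(ab_active[, c("Station", "StationName", "Years")])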
"HYDAT_list"
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/HYDAT_list.R
|
#' Spatial Hydrology functions
#' @name Spatial_hydrology_functions
#' @description These functions perform spatial analyses important in hydrology. All
#' of the functions with the prefix \code{ch_wbt} require the installation of the
#' package \pkg{Whitebox}. The functions include:
#' \describe{
#' \item{ch_wbt_removesinks}{Removes sinks from a DEM by deepening drainage network}
#' \item{ch_wbt_fillsinks}{Removes sinks from a DEM by filling them}
#' \item{ch_wbt_catchment}{Generates catchment boundaries for a conditioned DEM based on specified points of interest}
#' \item{ch_wbt_channels}{Generates a drainage network from DEM}
#' \item{ch_wbt_flow_accumulation}{Accumulates flows downstream in a catchment}
#' \item{ch_wbt_flow_direction}{Calculates flow directions for each cell in DEM}
#' \item{ch_wbt_pourpoints}{Snaps pour points to channel}
#' \item{ch_wbt_catchment_onestep}{Performs all catchment delineations in a single function}
#' \item{ch_contours}{Creates contour lines from DEM}
#' \item{ch_checkcatchment}{Provides a simple map to check the outputs from ch_wbt_catchment}
#' \item{ch_checkchannels}{Provides a simple map to check the outputs from ch_wbt_channels}
#' \item{ch_volcano_raster}{Returns a raster object of land surface elevations}
#' }
#'
#' The \pkg{Whitebox} functions support the following file types for raster data:
#' \describe{
#' \item{type}{extension}
#' \item{GeoTIFF}{*.tif, *.tiff}
#' \item{Big GeoTIFF}{*.tif, *.tiff}
#' \item{Esri ASCII}{*.txt, *.asc}
#' \item{Esri BIL}{*.flt, *.hdr}
#' \item{GRASS ASCII}{*.txt, *.asc}
#' \item{Idrisi}{*.rdc, *.rst}
#' \item{SAGA Binary}{*.sdat, *.sgrd}
#' \item{Surfer ASCII}{*.grd}
#' \item{Surfer Binary}{*.grd}
#' \item{Whitebox}{*.tas, *.dep}
#' }
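#' @examples
#' \dontrun{
#' # A condensed sketch of the usual ch_wbt workflow (it mirrors the examples
#' # on the individual help pages; requires the WhiteboxTools binary):
#' library(whitebox)
#' library(raster)
#' dem_file <- tempfile(fileext = ".tif")
#' writeRaster(ch_volcano_raster(), dem_file, format = "GTiff")
#' ns_file <- tempfile("no_sinks", fileext = ".tif")
#' ch_wbt_removesinks(dem_file, ns_file, method = "fill")
#' acc_file <- tempfile("flow_acc", fileext = ".tif")
#' ch_wbt_flow_accumulation(ns_file, acc_file)
#' dir_file <- tempfile("flow_dir", fileext = ".tif")
#' ch_wbt_flow_direction(ns_file, dir_file)
#' # ... then ch_wbt_pourpoints() and ch_wbt_catchment(), checked with
#' # ch_checkcatchment()
#' }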
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/Spatial_hydrology_functions.R
|
#' Statistical analysis functions
#' @name StatisticalHydrology-functions
#' @description These functions perform statistical analyses
#' \describe{
#' \item{ch_binned_MannWhitney}{ Compares two time periods of data using Mann-Whitney test}
#' \item{ch_fdcurve}{Finds flow exceedance probabilities}
#' \item{ch_get_peaks}{Finds peak flows over a specified threshold}
#' }
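#' @examples
#' # A minimal sketch combining the functions above; the calls mirror the
#' # examples on the individual help pages.
#' data(HYDAT_list)
#' data(CAN05AA008)
#' fdc <- ch_fdcurve(CAN05AA008, normal = FALSE, gust = TRUE)
#' peaks <- ch_get_peaks(CAN05AA008, 0.1 * max(CAN05AA008$Flow))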
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/Statistical_hydrology_functions.R
|
#' Visualization functions
#' @name Visualization-functions
#' @description These functions are primarily intended for graphing, although
#' some analyses may also be done.
#' \describe{
#' \item{ch_booth_plot}{Plot of peaks over a threshold}
#' \item{ch_flow_raster}{Raster plot of streamflows}
#' \item{ch_flow_raster_qa}{Raster plot of streamflows with WSC quality flags}
#' \item{ch_flow_raster_trend}{Raster plot and simple trends of observed streamflows}
#' \item{ch_hydrograph_plot}{Plots hydrographs and/or precipitation}
#' \item{ch_polar_plot}{Polar plot of daily streamflows}
#' \item{ch_regime_plot}{Plots the regime of daily streamflows}
#' }
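#' @examples
#' # A minimal sketch: a raster plot of daily flows and a Booth plot of peaks
#' # over a threshold; the calls mirror the examples on the individual help pages.
#' data(HYDAT_list)
#' data(CAN05AA008)
#' ch_flow_raster(CAN05AA008)
#' threshold <- 0.1 * max(CAN05AA008$Flow)
#' peaks <- ch_get_peaks(CAN05AA008, threshold)
#' ch_booth_plot(peaks$POTevents, threshold, title = "05AA008")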
NULL
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/Visualization_functions.R
|
#' Generates the x axis beginning on specified day of year
#'
#' @description Generates an axis for day of year or day of water year; used by
#' \code{ch_regime_plot}. Obtaining the day of water year needs to be done separately.
#'
#' @param wyear Month of beginning of water year, \code{wyear = 1} (the default) for
#' calendar year, \code{wyear = 10} to start October 1.
#'
#' @author Paul Whitfield
#' @seealso \code{\link{ch_regime_plot}}
#' @return Plots a water year axis on a standard R plot
#' @export
#' @examples
#' a <- seq(1, 365)
#' b <- runif(365)
#' plot(a, b, type = "p", xlab = "", xaxt = "n")
#' ch_axis_doy(wyear = 10) # starts in October
ch_axis_doy <- function(wyear = 1) {
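  # cday holds the day of year of the first day of each month, extended into
  # a second (non-leap) year so the axis can wrap around a water year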
  cday <- c(1, 32, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366, 397, 425, 456, 486, 517, 547, 578, 609, 639, 670, 700)
  ctxt <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec",
            "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec") # extended to Dec of year 2 so a water year can start in any month
wday <- cday[1:13]
wtxt <- ctxt[1:13]
if (wyear == 1) { # starts in January
axis(side = 1, at = wday , labels = wtxt, line = 0, tck = -0.025, xlab = "")
} else {
    wday <- cday[wyear:(wyear + 12)] # 13 ticks spanning one full water year
    offset <- cday[wyear] - 1
    wday <- wday - offset # shift so the water year begins at day 1
    wtxt <- ctxt[wyear:(wyear + 12)]
axis(side = 1, at = wday , labels = wtxt, line = 0, tck = -0.025, xlab = "")
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_axis_doy.R
|
#' Compares two time periods of data using Mann-Whitney test
#'
#' @description Compares two time periods of data using the Mann-Whitney test.
#' Data are binned based upon a bin size, data are extracted for the two
#' time periods, and each bin is tested for change between the periods.
#' The result can be passed to \code{ch_polar_plot} or \code{ch_decades_plot}
#' for visualization.
#'
#' @author Paul Whitfield
#' @references
#' Whitfield, P.H., Cannon, A.J., 2000. Recent variations in climate and
#' hydrology in Canada. Canadian Water Resources Journal 25: 19-65.
#'
#' @param DF A data frame of hydrometric data from \code{ch_read_ECDE_flows}
#' @param step An integer indicating the degree of smoothing, e.g. 1, 5, 11.
#' @param range1 The first and last year of first period, as \code{c(first,last)}
#' @param range2 The first and last year of second period, as \code{c(first,last)}
#' @param ptest The significance level; default is \code{0.05}.
#' @param variable Name of variable. Default is \option{discharge}
#' @param metadata dataframe of station metadata, default is HYDAT_list
#'
#' @return Returns a list containing:
#' \item{StationID}{ID of station}
#' \item{Station_lname}{Name of station}
#' \item{bin_width}{Smoothing time step}
#' \item{range1}{First range of years}
#' \item{range2}{Second range of years}
#' \item{p_used}{p_value}
#' \item{fail}{TRUE if test failed due to missing values}
#' \item{bin_method}{method used for binning}
#' \item{test_method}{Mann-Whitney U-statistic}
#' \item{series}{a data frame containing:}
#' \item{period}{period numbers i.e. 1:365/step}
#' \item{period1}{median values for each bin in period 1}
#' \item{period2}{median values for each bin in period 2}
#' \item{mwu}{Mann-Whitney U-statistic for each bin between the two periods}
#' \item{prob}{probability of the U-statistic for each bin}
#' \item{code}{significance codes for each bin}
#'
#' @importFrom stats wilcox.test median
#' @export
#' @seealso \code{\link{ch_polar_plot}} \code{\link{ch_polar_plot_prep}}
#' \code{\link{ch_decades_plot}}
#'
#' @examples
#' data(HYDAT_list)
#' data(CAN05AA008)
#' # first example fails due to missing data in both periods
#' range1 <- c(1960,1969)
#' range2 <- c(1990,1999)
#' b_MW <- ch_binned_MannWhitney(CAN05AA008, step = 5, range1, range2, ptest = 0.05)
#'
#' range1 <- c(1970,1979)
#' range2 <- c(1990,1999)
#' b_MW <- ch_binned_MannWhitney(CAN05AA008, step = 5, range1, range2, ptest = 0.05)
#'
ch_binned_MannWhitney <- function(DF, step, range1, range2, ptest=0.05, variable="discharge",
metadata = NULL) {
fail <- FALSE
mdoy <- ch_doys(DF$Date)
doy <- mdoy$doy
years <- mdoy$year
flow <- DF$Flow
sID <- as.character(DF[1,1])
sname <- ch_get_wscstation(sID, metadata = metadata)
binmethod <- "median"
testmethod <- "Mann-Whitney U"
days <- 365
periods <- days / step
periods <- round(periods, digits = 0)
period <- c(1:periods)
  ## Some records have stretches of missing years, so the data are reconfigured to include years with no record.
mYear <- max(years, na.rm = TRUE)
nYear <- min(years, na.rm = TRUE) - 1
nYears <- mYear - nYear ## total number of years
Years <- c((nYear + 1):mYear) ## all years in range
aYears <- unique(years) ## actual years in range
mslice <- ch_slice(doy, step) ### create a factor for n day periods
myear <- as.factor(years)
fac <- list(myear, mslice)
qsliced <- array(dim = c(nYears, periods))
q_sliced <- tapply(flow, fac, median) # get median value for each bin.
  # qsliced contains medians only for years where data existed; reform so missing years are included
locs <- 1:length(aYears)
qsliced[aYears[locs] - nYear, ] <- q_sliced[locs, ]
colnames(qsliced) <- period
rownames(qsliced) <- Years
# set up arrays for results
period1 <- array(NA, length(period))
period2 <- array(NA, length(period))
mwu <- array(NA, length(period))
prob <- array(NA, length(period))
code <- array(0, length(period))
rg1 <- c(range1 - nYear)
rg2 <- c(range2 - nYear)
for (i in 1:length(period)) { ### loop over getting values for periods of year
s1 <- qsliced[rg1[1]:rg1[2], i]
s2 <- qsliced[rg2[1]:rg2[2], i]
sout <- wilcox.test(s1, s2, exact = FALSE)
period1[i] <- median(s1)
period2[i] <- median(s2)
mwu[i] <- sout[[1]]
prob[i] <- sout[[3]]
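    # for bins with a significant difference, record the direction of change
    # as +1 or -1 (the sign of mean(s1) - mean(s2))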
if (prob[i] <= ptest) code[i] <- (mean(s1) - mean(s2)) / (abs(mean(s1) - mean(s2)))
}
if (length(period1[!is.na(period1)]) != length(period1)) {
message("Range_1 contains missing values")
fail <- TRUE
}
if (length(period2[!is.na(period2)]) != length(period2)) {
message("Range_2 contains missing values")
fail <- TRUE
}
series <- data.frame(period, period1, period2, mwu, prob, code)
names(series) <- c("period", "median_1", "median_2", "MW_U", "p_value",
"s_code")
result <- list(sID, sname[21], variable, step, range1, range2, ptest, fail, binmethod, testmethod, series)
names(result) <- c(
"StationID", "Station_lname", "variable", "bin_width", "range1", "range2",
"p_used", "fail", "bin_method", "test_method", "series"
)
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_binned_MannWhitney.R
|
#' Create Booth plot of peaks over a threshold
#'
#' A Booth plot is a plot of peaks over threshold flood events with duration on the horizontal and
#' either magnitude (default) or volume on the vertical axis.
#'
#' @param events A data frame of POT events from the function \code{ch_get_peaks}
#' @param threshold The threshold used by \code{ch_get_peaks}
#' @param title Plot title
#' @param type The plot type, either \option{mag} (magnitude, the default) or \option{vol} (volume)
#' @param colour1 A vector of length 12 with line colours of rings or symbols. Defaults to those used by Booth.
#' @param colour2 A vector of length 12 with fill colours of rings or symbols. Defaults to those used by Booth.
#' @author Paul Whitfield
#'
#' @references
#' Booth, E.G., Mount, J.F., Viers, J.H. 2006. Hydrologic Variability of the Cosumnes River Floodplain.
#' San Francisco Estuary & Watershed Science 4:21.
#'
#' Whitfield, P.H., and J.W. Pomeroy. 2016. Changes to flood peaks of a mountain river: implications
#' for analysis of the 2013 flood in the Upper Bow River, Canada. Hydrological Processes 30:4657-73. doi:
#' 10.1002/hyp.10957.
#' @importFrom graphics axis legend par plot points polygon abline
#' @export
#' @return No value is returned; a standard \R graphic is created.
#' @keywords plot
#' @seealso \code{\link{ch_get_peaks}}
#' @examples
#' threshold <- 0.1 * max(CAN05AA008$Flow) # arbitrary threshold
#' peaks <- ch_get_peaks(CAN05AA008, threshold)
#' events <- peaks$POTevents
#' ch_booth_plot(events, threshold, title = "05AA008", type='mag')
#' ch_booth_plot(events, threshold, title = "05AA008", type='vol')
ch_booth_plot <- function(events, threshold, title, type = "mag", colour1 = 1, colour2 = 1) {
# set common items
mname <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
ocol <- c("black", "blue", "darkgreen", "black", "blue", "darkgreen", "black", "blue", "darkgreen", "black", "blue", "darkgreen")
mcol <- c("gray10", "blue", "slateblue3", "slateblue4", "green", "cyan", "green4", "darkorange", "red", "darkorange4", "gray70", "gray40")
if (length(colour1) == 12) {
ocol <- colour1
} else {
message(paste("length of colour1 is", length(colour1), " must be of length 12; Using defaults"))
}
if (length(colour2) == 12) {
mcol <- colour2
} else {
message(paste("length of colour2 is", length(colour2), " must be of length 12;Using defaults"))
}
xlabel <- "Duration (days)"
xlines <- c(7, 14, 21, 60)
xlimits <- c(1, 350)
vlabel <- expression(paste("Event volume km"^{3}))
vlines <- c(.01, .02, .05, .1, .2, .5, 1., 2., 5., 10., 20., 50., 100., 200., 500., 1000., 2000., 5000., 10000.)
ylabel <- expression(paste("Mean Daily Discharge m"^{3}, "/sec"))
ylines <- c(.1, .2, .5, 1., 2., 5., 10., 20., 50., 100., 200., 500., 1000., 2000., 5000., 10000.)
month <- as.numeric(format(events$st_date, "%m"))
############################################################################ for volume
if (type == "vol") {
ylimits <- c(min(events[, 4], na.rm = TRUE), round(max(events[, 4], na.rm = TRUE), digits = 1))
plot(events[, 5], events[, 4],
xlab = xlabel, col = ocol[month], bg = mcol[month], pch = 22, xlim = xlimits, ylim = ylimits, ylab = vlabel,
yaxt = "n", log = "xy", main = title
)
abline(h = vlines, lty = 3, col = "gray50")
abline(v = xlines, lty = 3, col = "gray50")
axis(2, las = 2)
legend("topright", mname, pch = 22, col = ocol, pt.bg = mcol, bg = "white")
mtext(paste("Threshold=", threshold, " m3/s"), side = 4, line = 1)
}
############################################################################ for magnitude
if (type == "mag") {
ylimits <- c(threshold, round(max(events[, 3], na.rm = TRUE), digits = 0))
plot(events[, 5], events[, 3],
xlab = xlabel, col = ocol[month], bg = mcol[month], pch = 21, cex = 1.1, xlim = xlimits, ylim = ylimits, ylab = ylabel,
yaxt = "n", log = "xy", main = title
)
abline(h = ylines, lty = 3, col = "gray50")
abline(v = xlines, lty = 3, col = "gray50")
axis(2, las = 2)
legend("topright", mname, pch = 21, col = ocol, pt.bg = mcol, bg = "white")
mtext(paste("Threshold=", threshold, " m3/s"), side = 4, line = 1)
}
############################################################################
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_booth_plot.R
|
#' Catchment hypsometry
#'
#' @description Finds the hypsometric curve, which is the total fraction of
#' the area below vs. elevation, for a given basin.
#'
#' @details
#' The elevations may be passed as a vector of elevations, or of elevation quantiles, or as minimum
#' and maximum elevations and the number of elevation intervals. A plot of the
#' curve may also be created.
#'
#' @param catchment A \pkg{sf} object containing the catchment divide.
#' @param dem A \pkg{raster} object of the Digital Elevation Model.
#' @param z_levels Vector of elevation levels for the hypsometry. If specified,
#' then no other elevation parameters are required. Default is \code{NULL}.
#' @param n_levels If specified, sets number of elevation intervals.
#' Can be used with \code{zmin} and \code{zmax}. Default is \code{NULL}.
#' @param zmin Minimum elevation for hypsometry. If not specified, minimum
#' catchment elevation is used. Default is \code{NULL}.
#' @param zmax Maximum elevation for hypsometry. If not specified, maximum
#' catchment elevation is used. Default is \code{NULL}.
#' @param quantiles Vector of elevation quantiles. Default is \code{NULL}.
#' @param hypso_plot If \code{TRUE} the hypsometric curve is plotted. Default is
#' \code{FALSE}.
#' @param z_units Elevation units for plot. Default is \option{m}.
#' @param col Colour for plot. Default is \option{red}.
#' @param type Type of plot. Default is \option{o} (lines with overplotted
#' points).
#' @param xlab Plot x-axis label.
#' @param ylab Plot y-axis label.
#' @param add_grid If \code{TRUE}, a grid is added to the plot. Default is
#' \code{FALSE}
#' @param ... Other parameters for the graph
#'
#' @importFrom sf as_Spatial
#' @importFrom raster mask minValue maxValue
#' @return Returns a data frame of elevations and catchment fractions below.
#' @author Dan Moore
#' @export
#'
#' @examples \donttest{
#' # Note: example not tested automatically as it is very slow to execute due to the downloading
#' library(raster)
#' library(magrittr)
#' # change the following line to specify a directory to hold the data
#' dir_name <- tempdir(check = FALSE)
#' # create directory to store data sets
#' if (!dir.exists(dir_name)) {
#' dir.create(dir_name, recursive = TRUE)
#' }
#' # get 25-m dem
#' dem_fn <- file.path(dir_name, "gs_dem25.tif")
#' dem_url <- "https://zenodo.org/record/4781469/files/gs_dem25.tif"
#' dem_upc <- ch_get_url_data(dem_url, dem_fn)
#' dem_upc
#'
#' # get catchment boundaries
#' cb_fn <- file.path(dir_name, "gs_catchments.GeoJSON")
#' cb_url <- "https://zenodo.org/record/4781469/files/gs_catchments.GeoJSON"
#' cb <- ch_get_url_data(cb_url, cb_fn)
#'
#' # quick check plot - all catchments
#' raster::plot(dem_upc)
#' plot(cb, add = TRUE, col = NA)
#'
#' # subset 240 catchment
#' cb_240 <- cb %>% dplyr::filter(wsc_name == "240")
#' plot(cb_240, col = NA)
#'
#' ## test function
#'
#' # test different combinations of arguments
#' ch_catchment_hyps(cb_240, dem_upc, quantiles = seq(0, 1, 0.1))
#' ch_catchment_hyps(cb_240, dem_upc, z_levels = seq(1600, 2050, 50))
#' ch_catchment_hyps(cb_240, dem_upc, n_levels = 6)
#' ch_catchment_hyps(cb_240, dem_upc)
#' ch_catchment_hyps(cb_240, dem_upc, zmin = 1600, zmax = 2050)
#' ch_catchment_hyps(cb_240, dem_upc, zmin = 1600, zmax = 2050, n_levels = 6)
#'
#' # generate a graph
#' ch_catchment_hyps(cb_240, dem_upc, hypso_plot = TRUE)
#' ch_catchment_hyps(cb_240, dem_upc, hypso_plot = TRUE,
#' col = "blue", type = "l", ylim = c(1500, 2200))
#' ch_catchment_hyps(cb_240, dem_upc, hypso_plot = TRUE,
#' add_grid = TRUE, quantiles = seq(0, 1, 0.1))
#' ch_catchment_hyps(cb_240, dem_upc, hypso_plot = TRUE,
#' ylab = expression("z ("*10^{-3} ~ "km)"))
#'
#' # extract specific quantiles (e.g., median and 90%)
#' ch_catchment_hyps(cb_240, dem_upc, quantiles = c(0.5,0.9))
#' }
ch_catchment_hyps <- function(catchment, dem,
z_levels = NULL, n_levels = 10,
zmin = NULL, zmax = NULL,
quantiles = NULL,
hypso_plot = FALSE, z_units = "m",
col = "red", type = "o",
xlab = "Fraction of catchment below given elevation",
ylab = paste0("Elevation (", z_units, ")"),
add_grid = FALSE, ...) {
# need to add error traps for incorrect values for
# catchment and dem
catchment_sp <- as_Spatial(catchment)
dem_masked <- raster::mask(dem, catchment_sp)
if (is.null(quantiles)) {
if (is.null(z_levels)) {
if (is.null(zmin)) zmin <- raster::minValue(dem_masked)
if (is.null(zmax)) zmax <- raster::maxValue(dem_masked)
z_levels <- seq(zmin, zmax, length.out = n_levels)
}
# there may be a more direct way, but this works
z_hist <- raster::hist(dem_masked, plot = FALSE, breaks = z_levels)
nz <- sum(z_hist$counts)
qz <- c(0, cumsum(z_hist$counts)/nz)
out_df <- data.frame(z = z_levels, qz)
} else {
zq <- raster::quantile(dem_masked, probs = quantiles)
out_df <- data.frame(z = zq, qz = quantiles)
}
if (hypso_plot) {
plot(out_df$qz, out_df$z,
col = col, xlab = xlab, ylab = ylab, type = type, ...
)
if (add_grid) grid()
}
return(out_df)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_catchment_hyps.R
|
#' Check Catchments
#'
#' @description
#' Generates a simple map to allow a visual assessment of the catchment
#' boundaries relative to the elevation contours.
#'
#' @details
#' Also generates a table summarizing the catchments,
#' including the coordinates of the outlet point and the catchment area.
#'
#' @param dem raster DEM that catchments were generated from.
#' @param catchment Catchment polygon (sf object).
#' @param outlet Location of catchment outlet (sf object).
#' @param outlet_label Character label for outlet.
#' @param main_label Main label for catchment plot.
#' @param bbox_type type of bounding box. If \option{catchment}, then
#' the contours are bounded by the catchment, otherwise they are plotted
#' to the extent of the DEM
#' @param channel_vec Vectors of the channels will be plotted if specified.
#' @param cb_colour Colour for catchment outline. Default is "red".
#' @param pp_colour Colour for catchment pour points. Default is "red".
#' @param channel_colour Colour for channel. Default is "blue".
#' @param contour_colour Colour for contours. Default is "grey".
#' @param plot_na If \code{TRUE} (the default) a north arrow is added to the plot.
#' @param plot_scale If \code{TRUE} (the default) a scale bar is added to the plot.
#' @param na_location Location for the north arrow. Default is \option{tr}, i.e. top-right.
#' @param scale_location Location for the scale bar. Default is \option{bl}, i.e. bottom-left.
#'
#' @return \code{TRUE}. A map of the catchments is also plotted and
#' the catchment parameters are printed.
#'
#' @author Dan Moore and Kevin Shook
#' @seealso \code{\link{ch_checkchannels}}
#' @importFrom sf st_bbox st_area st_crs st_geometry
#' @importFrom ggplot2 ggplot geom_sf coord_sf theme_bw labs
#' @importFrom ggspatial annotation_north_arrow north_arrow_fancy_orienteering annotation_scale
#' @importFrom dplyr mutate
#' @importFrom grid unit
#' @importFrom magrittr %>%
#' @export
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = ".tif")
#' no_sink_raster_file <- tempfile("no_sinks", fileext = ".tif")
#'
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#'
#' # remove sinks
#' removed_sinks <- ch_wbt_removesinks(dem_raster_file, no_sink_raster_file,
#' method = "fill")
#'
#' # get flow accumulations
#' flow_acc_file <- tempfile("flow_acc", fileext = ".tif")
#' flow_acc <- ch_wbt_flow_accumulation(no_sink_raster_file, flow_acc_file)
#'
#' # get pour points
#' pourpoint_file <- tempfile("volcano_pourpoints", fileext = ".shp")
#' pourpoints <- ch_volcano_pourpoints(pourpoint_file)
#' snapped_pourpoint_file <- tempfile("snapped_pourpoints", fileext = ".shp")
#' snapped_pourpoints <- ch_wbt_pourpoints(pourpoints, flow_acc_file, pourpoint_file,
#' snapped_pourpoint_file, snap_dist = 10)
#'
#' # get flow directions
#' flow_dir_file <- tempfile("flow_dir", fileext = ".tif")
#' flow_dir <- ch_wbt_flow_direction(no_sink_raster_file, flow_dir_file)
#' fn_catchment_ras <- tempfile("catchment", fileext = ".tif")
#' fn_catchment_vec <- tempfile("catchment", fileext = ".shp")
#' catchments <- ch_wbt_catchment(snapped_pourpoint_file, flow_dir_file,
#' fn_catchment_ras, fn_catchment_vec)
#'
#' # check results
#' ch_checkcatchment(test_raster, catchments, snapped_pourpoints)
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_checkcatchment <- function(dem, catchment, outlet, outlet_label = NULL,
main_label = "", bbox_type = "catchment",
channel_vec = NULL,
cb_colour = "red", pp_colour = "red",
channel_colour = "blue", contour_colour = "grey",
plot_na = TRUE, plot_scale = TRUE,
na_location = "tr", scale_location = "bl") {
# check inputs
if (missing(catchment)) {
stop("ch_checkcatchment requires sf catchment polygons to plot")
}
if (missing(dem)) {
stop("ch_checkcatchment requires a raster dem to plot")
}
if (missing(outlet)) {
stop("ch_checkcatchment requires an sf outlet to plot")
}
# create contours
  contours <- ch_contours(dem)
# generate bounding box
if (bbox_type == "catchment") {
    bb <- st_bbox(catchment)
} else {
bb <- st_bbox(contours)
}
# generate map
check_map <- ggplot2::ggplot() +
geom_sf(data = contours, color = contour_colour)
if (!is.null(channel_vec)) {
check_map <- check_map +
geom_sf(data = channel_vec, col = channel_colour) +
coord_sf(xlim = c(bb[1], bb[3]), ylim = c(bb[2], bb[4]),
datum = st_crs(catchment))
}
check_map <- check_map +
geom_sf(data = outlet, pch = 21, bg = pp_colour) +
geom_sf(data = st_geometry(catchment), fill = NA, color = cb_colour) +
coord_sf(xlim = c(bb[1], bb[3]), ylim = c(bb[2], bb[4]),
datum = st_crs(catchment)) +
labs(title = main_label) +
theme_bw()
if (plot_na) {
check_map <- check_map +
annotation_north_arrow(style = north_arrow_fancy_orienteering,
location = na_location,
pad_x = unit(4, "mm"),
pad_y = unit(6.5, "mm"))
}
if (plot_scale) {
check_map <- check_map +
annotation_scale(location = scale_location)
}
print(check_map)
nc <- nrow(outlet)
# print catchment area with units
if (is.null(outlet_label)) {
labels <- as.character(1:nc)
} else {
labels <- outlet_label
}
area <- st_area(catchment)
units <- rep(paste0(attr(area, "units")$numerator[1], "^2"), length(area))
value <- round(as.numeric(area))
  area_df <- outlet %>%
    mutate(label = labels, area = value, units = units)
  # print the summary table so the documented "catchment parameters" appear
  print(area_df)
  return(TRUE)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_checkcatchment.R
|
#' Check Channels
#'
#' @description
#' Generates a map of the generated channel network layer.
#'
#' @details
#' Generates a simple map of the drainage network plotted over the contours to allow a visual assessment.
#'
#' @param dem raster DEM that catchments were generated from
#' @param channels channel polyline (or channels list from \code{ch_wbt_channels}) (sf object)
#' @param main_label Main label for channel plot.
#' @param channel_colour Colour for channel. Default is "blue".
#' @param pp_colour Colour for catchment pour points. Default is "red".
#' @param contour_colour Colour for contours. Default is "grey".
#' @param outlet Location of catchment outlet (sf object). Default is \code{NULL}.
#'
#' @return
#' \item{check_map}{a \pkg{ggplot} object of a map with channel layer}
#'
#' @author Dan Moore
#' @seealso \code{\link{ch_checkcatchment}}
#' @importFrom sf st_bbox st_geometry
#' @importFrom ggplot2 ggplot geom_sf coord_sf theme_bw
#' @importFrom ggspatial annotation_north_arrow annotation_scale north_arrow_fancy_orienteering
#' @export
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = c(".tif"))
#' no_sink_raster_file <- tempfile("no_sinks", fileext = c(".tif"))
#'
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#'
#' # remove sinks
#' removed_sinks <- ch_wbt_removesinks(dem_raster_file, no_sink_raster_file, method = "fill")
#'
#' # get flow accumulations
#' flow_acc_file <- tempfile("flow_acc", fileext = c(".tif"))
#' flow_acc <- ch_wbt_flow_accumulation(no_sink_raster_file, flow_acc_file)
#'
#' # get flow directions
#' flow_dir_file <- tempfile("flow_dir", fileext = c(".tif"))
#' flow_dir <- ch_wbt_flow_direction(no_sink_raster_file, flow_dir_file)
#' channel_raster_file <- tempfile("channels", fileext = c(".tif"))
#' channel_vector_file <- tempfile("channels", fileext = c(".shp"))
#' channels <- ch_wbt_channels(flow_acc_file, flow_dir_file, channel_raster_file,
#' channel_vector_file, 1)
#'
#' # get pour points
#' pourpoint_file <- tempfile("volcano_pourpoints", fileext = ".shp")
#' pourpoints <- ch_volcano_pourpoints(pourpoint_file)
#' snapped_pourpoint_file <- tempfile("snapped_pourpoints", fileext = ".shp")
#' snapped_pourpoints <- ch_wbt_pourpoints(pourpoints, flow_acc_file, pourpoint_file,
#' snapped_pourpoint_file, snap_dist = 10)
#' ch_checkchannels(test_raster, channels, snapped_pourpoints)
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_checkchannels <- function(dem, channels, outlet = NULL, main_label = "",
channel_colour = "blue", pp_colour = "red",
contour_colour = "grey") {
# check inputs
if (missing(dem)) {
stop("ch_checkchannels requires a raster dem to plot")
}
if (missing(channels)) {
stop("ch_checkchannels requires sf channels polyline to plot")
}
  # outlet is optional (default NULL); it is added to the map only if supplied
contours <- ch_contours(dem)
# get bounding box for contours to set map limits
bb <- sf::st_bbox(contours)
# generate map
check_map <- ggplot2::ggplot(data = contours) +
geom_sf(data = contours, color = contour_colour) +
geom_sf(data = sf::st_geometry(channels), color = channel_colour) +
ggspatial::annotation_north_arrow(style = north_arrow_fancy_orienteering,
location = "tr",
pad_x = unit(4, "mm"),
pad_y = unit(6.5, "mm")) +
ggspatial::annotation_scale() +
coord_sf(xlim = c(bb[1], bb[3]), ylim = c(bb[2], bb[4]),
datum = st_crs(channels)) +
labs(title = main_label) +
theme_bw()
if (!is.null(outlet)) {
check_map <- check_map +
geom_sf(data = outlet, pch = 21, bg = pp_colour) +
coord_sf(xlim = c(bb[1], bb[3]), ylim = c(bb[2], bb[4]),
datum = st_crs(channels))
}
print(check_map)
return(check_map)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_checkchannels.R
|
#' Calculates the circular mean, median, and regularity
#'
#'@description Calculate the circular mean, median, and regularity using a year of 365 days.
#' Days of year are converted to degrees
#' internally, results are returned as positive days of year
#'
#' @param dataframe a dataframe of the day of year of events; can be annual maxima (amax) or peaks over threshold (pot).
#'
#' @return Returns a list of the following statistics
#' \item{n}{number of samples}
#' \item{mean}{circular mean of array}
#' \item{median}{circular median of array}
#' \item{rho}{regularity or mean resultant length}
#'
#' @references
#' Pewsey, A., M. Neuhauser, and G. D. Ruxton. 2014. Circular Statistics in R,
#' 192 pp., Oxford University Press.
#'
#' Whitfield, P. H. 2018. Clustering of seasonal events: A simulation study using
#' circular methods. Communications in Statistics - Simulation and Computation 47(10): 3008-3030.
#'
#' Burn, D. H., and P. H. Whitfield. 2021. Changes in the timing of flood events resulting
#' from climate change.
#'
#' @import circular
#' @export
#' @seealso \code{\link{ch_sh_get_amax}}
#' @examples
#' data(CAN05AA008)
#' am <- ch_sh_get_amax(CAN05AA008)
#' m_r <- ch_circ_mean_reg(am)
ch_circ_mean_reg <- function(dataframe){
doys <- dataframe$doy
days <- dataframe$days
n <- length(doys)
doys <- doys / days * 360 # doy as degrees
x <- circular::circular(doys, units = "degrees", zero = pi/2, rotation = "clock")
meanday <- circular::mean.circular(x)
if (meanday < 0) meanday <- 360 + meanday
medianday <- circular::median.circular(x)
if (medianday < 0) medianday <- 360 + medianday
rho <- circular::rho.circular(x)
  # convert mean and median back from degrees to days of year
  result <- list(n, as.numeric(meanday) * 365 / 360, as.numeric(medianday) * 365 / 360, rho)
names(result) <- c("n", "mean", "median", "regularity")
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_circ_mean_reg.R
|
#' Clear Working Directory
#'
#' @description
#' Empties and removes a working directory.
#'
#' @details
#' The data for raster layers read in as Whitebox
#' files are held on disk rather than in memory
#'
#' @param wd working directory file path
#' @param do_check If \code{TRUE}, the default, the user is asked to confirm the
#' deletion of the working directory. If \code{FALSE}, the directory is deleted
#' without confirmation.
#'
#' @return
#' \item{result}{a message confirming whether or not the directory was removed}
#'
#' @author Dan Moore
#' @seealso \code{\link{ch_create_wd}} to create working directory
#' @export
#' @examples \donttest{
#' # not tested as deleting all files in the directory cannot be tested in CRAN
#'
#' # create an empty working directory
#' my_wd <- tempdir()
#' ch_create_wd(my_wd) # confirm creation
#'
#' # clear the working directory
#' ch_clear_wd(my_wd)
#' }
#'
ch_clear_wd <- function(wd, do_check = TRUE) {
if (do_check) {
prompt <- paste(
"Are you certain you want to remove",
wd,
" (y/n): "
)
response <- readline(prompt)
if (response == "n") return(paste(wd, "not removed"))
}
filelist <- list.files(wd)
file.remove(paste0(wd, "/", filelist))
unlink(wd, recursive = TRUE)
return(paste(wd, "removed"))
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_clear_wd.R
|
#' Creates a colour gradient
#'
#' @description Creates a colour gradient for plotting.
#'
#' @param x Vector of values used for gradient.
#' @param colors Vector of colours to form a gradient. Default is \code{`c("darkred", "red","white","blue", "darkblue")`}.
#' @param colsteps The number of steps in the gradient. Default is \code{100}.
#' @param climits Sets specific limits for common scaling.
#'
#' @return \item{res}{returned array of colour codes}
#' @author modified by Paul Whitfield
#' @importFrom grDevices colorRampPalette
#' @export
#'
#' @examples
#' # plot randomly distributed data
#' plot(rnorm(20),col='black')
#'
#' # create a red blue colour gradient for plotting
#' mycol <- ch_col_gradient(rnorm(20), colsteps = 100)
#'
#' # plot more random points coloured by the gradient
#' points(rnorm(20), col = mycol)
ch_col_gradient <- function(x, colors=c("darkred", "red","white","blue", "darkblue"), colsteps = 100, climits = NULL) {
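  # map each value of x onto one of 'colsteps' evenly spaced breakpoints
  # between the limits, then pick the matching colour from the ramp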
if (is.null(x)) stop("x is NULL")
if (is.null(climits))
return( colorRampPalette(colors) (colsteps) [ findInterval(x, seq(min(x),max(x), length.out = colsteps)) ] )
if (!is.null(climits))
return(colorRampPalette(colors) (colsteps) [ findInterval(x, seq(climits[1],climits[2], length.out = colsteps)) ] )
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_col_gradient.R
|
#' Add Transparency to plot colours
#'
#' @description Adds transparency to a colour based on an integer between 0 and 255,
#' with 0 being fully transparent and 255 being opaque. Based on function
#' \code{rvn_col_transparent} in package \pkg{RavenR}.
#'
#' @param colour colour that is to be made transparent, or an array of colours
#' @param trans an integer (or array of integers) describing the degree of
#' transparency, 0 to 255. Must be the same length as colour. Values < 10 (very transparent),
#' values > 200 (solid colour).
#'
#' @return \item{res}{returned updated colour code with transparency}
#' @export
#' @seealso See original code on post in Stack Overflow
#' \href{http://stackoverflow.com/questions/12995683/any-way-to-make-plot-points-in-scatterplot-more-transparent-in-rmaking}{
#' plot points transparent in R}
#'
#' @importFrom grDevices col2rgb
#' @author Rob Chlumsky; Paul Whitfield
#' @keywords colour transparency
#'
#' @examples
#'
#' # plot randomly distributed data
#' plot(rnorm(20), col='black')
#'
#' # create a transparent blue colour for plotting
#' mycol <- ch_col_transparent('blue', 100)
#'
#' # plot more random points in transparent blue colour
#' points(rnorm(20),col = mycol)
#'
#' # plot randomly distributed data
#' plot(rnorm(20), col = 'blue')
#'
#' # create two transparent colour for plotting
#' mycol <- ch_col_transparent(c('green',"red"), c(100, 200))
#'
#' # plot more random points in transparent colours
#' points(rnorm(20), col = mycol[2])
#'
#'
ch_col_transparent <- function(colour, trans){
if (length(colour) != length(trans) & !any(c(length(colour),length(trans)) == 1)) stop("Vector lengths not correct")
if (length(colour) == 1 & length(trans) > 1) colour <- rep(colour,length(trans))
if (length(trans) == 1 & length(colour) > 1) trans <- rep(trans,length(colour))
num2hex <- function(x)
{
hex <- unlist(strsplit("0123456789ABCDEF",split = ""))
return(paste(hex[(x - x %% 16)/16 + 1],hex[x %% 16 + 1],sep = ""))
}
rgb <- rbind(col2rgb(colour),trans)
res <- paste("#",apply(apply(rgb,2,num2hex),2,paste,collapse = ""),sep = "")
return(res)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_col_transparent.R
|
#' ch_color_gradient
#'
#' @description Creates a colour gradient for plotting.
#'
#' @param x array of variable
#' @param colors an array of colours to form the desired gradient. Default is
#' ("darkred", "red", "white", "green", "darkgreen")
#' @param climits provide specific limits for common scaling
#' @param colsteps number of steps to be used in gradient, default is 100.
#' @return vector of colors
#'
#' @author Paul Whitfield
#' @importFrom grDevices colorRampPalette
#' @export
#' @examples
#' cxin <- c(0, 1, 1, 3, 4, 5, 10)
#' cxout <- ch_color_gradient(cxin)
#' #[1] "#8B0000" "#B50000" "#B50000" "#FF2B2B" "#FF9292"
#' #[6] "#FFF9F9" "#006400"
ch_color_gradient <- function(x, colors=c("darkred", "red","white","green", "darkgreen"),
colsteps = 100, climits = NULL) {
if(is.null(x)) stop("x is NULL")
if(is.null(climits)) return( colorRampPalette(colors) (colsteps) [ findInterval(x, seq(min(x),max(x), length.out = colsteps)) ] )
if(!is.null(climits)) return( colorRampPalette(colors) (colsteps) [ findInterval(x, seq(climits[1],climits[2], length.out = colsteps)) ] )
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_color_gradient.R
|
#' Create Contours
#'
#' @description
#' Creates contour lines from a DEM.
#'
#' @details
#' Generates contour lines from a DEM, which are returned as an \pkg{sf} object.
#' The user can either provide a vector of elevation values by specifying the \code{z_levels} argument,
#' or by supplying the minimum and maximum elevations (\code{zmin} and \code{zmax})
#' and the number of contour lines (\code{n_levels}).
#'
#' @param dem Raster object of your dem in the desired projection (note: should have had sinks removed).
#' @param zmin Minimum elevation value for contours. If not specified, the minimum value of \option{dem} is used.
#' @param zmax Maximum elevation value for contours. If not specified, the maximum value of \option{dem} is used.
#' @param n_levels Number of contour lines. Default is 10.
#' @param z_levels Levels at which to plot contours. If specified, overrides \option{zmin}, \option{zmax} and
#' \option{n_levels}.
#' @return
#' \item{contours_sf}{sf object containing contours}
#'
#' @author Dan Moore
#'
#' @examples
#' # use volcano DEM
#' dem <- ch_volcano_raster()
#' # generate contours
#' contours <- ch_contours(dem)
#'
#' # plot contours map
#' plot(contours)
#'
#' @importFrom raster raster getValues rasterToContour crs
#' @importFrom sf st_as_sf st_crs
#' @importFrom magrittr %>%
#' @export
ch_contours <- function(dem,
zmin = NULL, zmax = NULL,
n_levels = 10,
z_levels = NULL) {
# check inputs
if (missing(dem)) {
stop("ch_contours requires a raster dem")
}
# determine contour levels
if (is.null(z_levels)) {
z <- getValues(dem)
if (is.null(zmin)) zmin <- min(z, na.rm = TRUE)
if (is.null(zmax)) zmax <- max(z, na.rm = TRUE)
z_levels <- seq(zmin, zmax, length.out = n_levels)
}
# if dem includes sea level, start contours at 0.1 m to mimic coastline
if (z_levels[1] <= 0) {z_levels[1] <- 0.1}
# generate contours as a sf object
contours_sf <- rasterToContour(dem, levels = z_levels) %>%
st_as_sf()
sf::st_crs(contours_sf) <- crs(dem)
return(contours_sf)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_contours.R
|
#' @title Create working directory
#'
#' @description
#' Creates a working directory.
#'
#' @param wd name of a directory in which to store files created by WhiteboxTools functions
#' @return
#' \item{\code{TRUE}}{returns \code{TRUE} upon successful execution}
#'
#' @author Dan Moore
#' @seealso \code{\link{ch_clear_wd}} to clear the working directory
#' @export
#' @examples \donttest{
#' # not tested automatically as will return a warning
#' ch_create_wd(tempdir())
#' }
ch_create_wd <- function(wd) {
# creates working directory
if (dir.exists(wd)) {
# print warning if directory already exists
warning(paste("A directory named", wd, "exists"))
} else {
# create directory
dir.create(wd, recursive = TRUE)
message(paste("A directory named", wd, "has been created"))
}
return(TRUE)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_create_wd.R
|
#' Extracts a specified time period from a longer record
#'
#' @description The function could also be used to get the same period of time from several stations for comparison.
#' @param DF A daily streamflow data frame as from \code{ch_read_ECDE_flows}
#' @param st_date starting date, in format \%Y/\%m/\%d or \%Y-\%m-\%d
#' @param end_date ending date, in format \%Y/\%m/\%d or \%Y-\%m-\%d
#'
#' @return Returns a portion of the original dataframe.
#'
#' @export
#' @author Paul Whitfield
#' @examples
#' data(CAN05AA008)
#' subset <- ch_cut_block(CAN05AA008,"2000/01/01", "2010/12/31")
ch_cut_block <- function (DF, st_date, end_date)
{
  # accept dates formatted as either %Y/%m/%d or %Y-%m-%d
  if (substr(st_date, 5, 5) == "/") {
    st_date <- as.Date(st_date, format = "%Y/%m/%d")
    end_date <- as.Date(end_date, format = "%Y/%m/%d")
  } else if (substr(st_date, 5, 5) == "-") {
    st_date <- as.Date(st_date, format = "%Y-%m-%d")
    end_date <- as.Date(end_date, format = "%Y-%m-%d")
  } else {
    message(paste("incorrect date format", st_date, "must be like 2010/01/01"))
    return()
  }
  if (st_date < min(DF$Date)) {
    message(paste("Starting Date", st_date, "is before records are available"))
    return()
  }
  if (end_date > max(DF$Date)) {
    # warn but proceed; the selection is truncated at the end of record
    message(paste("Ending Date", end_date, "is after records are available"))
  }
result1 <- DF[DF$Date >= st_date, ]
result <- result1[result1$Date <= end_date, ]
message(paste("between",st_date,"and", end_date, length(result[ , 1]),
"records were selected"))
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_cut_block.R
|
#' Plots output from ch_binned_MannWhitney for decades
#'
#' @description Creates a simple plot comparing two decades from the
#' output of \code{ch_binned_MannWhitney}.
#'
#' @author Paul Whitfield
#'
#' @param mplot List output by the function \code{ch_binned_MannWhitney}
#'
#' @return A standard \R graphic is created.
#'
#' @export
#' @seealso \code{\link{ch_decades_plot}}
#' @examples
#' range1 <- c(1970, 1979)
#' range2 <- c(1990, 1999)
#' b_MW <- ch_binned_MannWhitney(CAN05AA008, step = 5, range1, range2, ptest = 0.05)
#' ch_decades_plot(b_MW)
ch_decades_plot <- function(mplot) {
mch <- c(24, NA, 25)
disch <- expression( paste( "Median Period Discharge (m"^{3}, "/sec)"))
scol <- c("blue", NA, "red")
series <- mplot$series
ylims <- c(0, max(series$median_1, series$median_2))
code <- series$s_code + 2
plot(series$period,series$median_1,
pch = 19, col = "darkblue", type = "b", ylim = ylims,
xlab = paste(mplot$bin_width, "-day period", sep = ""),
main = mplot$Station_lname, ylab = disch)
points(series$period,series$median_2,
pch = 1, cex = 1.2, col = "darkgreen", type = "b", lty = 3)
points(series$period,series$median_2,
pch = mch[code], cex = 1.25, col = scol[code], bg = scol[code])
ltext <- c(paste("Median ",mplot$range1[1], "-", mplot$range1[2], sep = ""),
paste("Median ",mplot$range2[1], "-", mplot$range2[2], sep = ""),
"Signiificant Increase", "Significant Decrease")
lsym <- c(19, 1, 24, 25)
lcol <- c("darkblue", "darkgreen", "blue", "red")
legend("topright", legend = ltext, col = lcol, pch = lsym, pt.bg = lcol)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_decades_plot.R
|
#' Days of year and water year
#'
#' @description Converts an array of dates into a data frame with date, year,
#' month, day, day of year (doy), water year (wyear), and day of water year (dwy).
#'
#' The day of water year is computed from the first day of the specified
#' water-year month.
#'
#' @param Date an array of \R dates, as produced by \code{as.Date()}
#' @param water_yr the month starting the water year; default is 10 (October). If
#' a value of \code{1} is specified, \code{10} will be used.
#'
#' @author Paul Whitfield, Kevin Shook
#' @return Returns a dataframe with date information:
#' \item{Date}{in Date format}
#' \item{year}{numeric calendar year}
#' \item{month}{numeric calendar month}
#' \item{day}{numeric day of month}
#' \item{doy}{numeric day of year}
#' \item{wyear}{numeric water year starting on day 1 of selected month}
#' \item{dwy}{numeric day of water year}
#'
#'
#'
#' @export
#'
#' @examples
#' dd <- seq.Date(as.Date("2010-01-01"), as.Date("2018-01-01"),by = 1)
#' output <- ch_doys(dd, water_yr=10)
#' head(output)
#'
ch_doys <- function(Date, water_yr = 10) # Date needs to be as.Date
{
dm <- c( 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
month <- as.numeric( format(Date, "%m"))
year <- as.numeric( format(Date, "%Y"))
day <- as.numeric( format(Date, "%d"))
doy <- as.numeric( format(Date, "%j"))
if (water_yr == 1)
water_yr <- 10 ## use 10 for water year if calendar year selected
year1 <- year - 1
dwy <- array(NA, dim = length(Date))
wyear <- year1
wyear[month >= water_yr] <- wyear[month >= water_yr] + 1
dwy[month >= water_yr] <- as.numeric(ISOdate(year[month >= water_yr],
month[month >= water_yr],
day[month >= water_yr]) -
ISOdate(wyear[month >= water_yr],
water_yr - 1,
dm[water_yr - 1]))
dwy[month < water_yr] <- as.numeric(ISOdate(year[month < water_yr],
month[month < water_yr],
day[month < water_yr]) -
ISOdate(wyear[month < water_yr],
water_yr - 1, dm[water_yr - 1]))
dowy <- data.frame(Date, year, month, day, doy, wyear, dwy)
return(dowy)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_doys.R
|
#' Plot Flow Duration Curve
#'
#' Creates a flow duration curve based upon observations.
#'
#' @description A flow duration curve is a plot of flow magnitude against exceedance probability.
#' The plot may contain the Gustard Curves (default) or they can be omitted. The default is for curves to be
#' plotted against probability, but an option is to plot against the normalized exceedance probability.
#' In that case, the x axis represents a normal distribution.
#'
#' @param DF a dataframe of daily flows from \code{ch_read_ECDE_flows}
#' @param normal If \code{normal = TRUE} then exceedance probability is normalized. Default is FALSE.
#' @param gust If \code{TRUE} (the default), adds the curves from Gustard et al. 1992 are added.
#' @param metadata dataframe of metadata, defaults to HYDAT_list.
#'
#' @return Plots the flow duration curve and returns a data frame containing:
#' \item{exceedance_prob}{exceedance probability}
#' \item{flow}{flow values}
#'
#' @author Paul Whitfield
#' @references
#' Gustard, A., A. Bullock, and J.M. Dixon. 1992. Low flow estimation in the United Kingdom.
#' Institute of Hydrology, 292. Wallingford: Institute of Hydrology.
#'
#' Vogel, R.M., and N.M. Fennessy. 1994. Flow-duration curves. I: New Interpretation and
#' confidence intervals. Journal of Water Resources Planning and Management ASCE 120:485-504.
#'
#' Vogel, R.M., and N.M. Fennessy. 1995. Flow duration curves II: A review of applications
#' in water resources planning. Water Resources Bulletin 31:1030-9.
#'
#' @importFrom stats qnorm
#' @importFrom graphics abline text
#' @export
#' @examples
#' data(HYDAT_list)
#' data(CAN05AA008)
#' # plot with Gustard 1992 curves
#' test <- ch_fdcurve(CAN05AA008, normal = FALSE, gust = TRUE)
#' # plot with normalized exceedance probability
#' test <- ch_fdcurve(CAN05AA008, normal = TRUE, gust = FALSE)
#'
ch_fdcurve <- function(DF, normal = FALSE, gust = TRUE, metadata = NULL) {
flow <- DF$Flow
stname <- ch_get_wscstation(DF[1, 1], metadata = metadata)
title <- stname$Station_lname
## load the values for the Gustard curves for %mean flow for Type Curves Gustard et al 1992.
g <- array(NA, dim = c(19, 7))
g[1, ] <- c(904.17, 534.08, 22.69, 4.42, 2.13, 1.26, 0.51)
g[2, ] <- c(838.77, 511.37, 25.10, 5.27, 2.62, 1.58, 0.67)
g[3, ] <- c(776.04, 480.48, 27.86, 6.33, 3.25, 2.00, 0.88)
g[4, ] <- c(719.91, 452.42, 30.82, 7.54, 3.99, 2.51, 1.16)
g[5, ] <- c(667.48, 425.82, 34.11, 9.00, 4.92, 3.16, 1.53)
g[6, ] <- c(618.22, 400.44, 37.81, 10.77, 6.07, 3.98, 2.02)
g[7, ] <- c(572.53, 376.64, 41.82, 12.86, 7.47, 5.01, 2.65)
g[8, ] <- c(520.00, 350.65, 45.10, 15.20, 9.16, 6.30, 3.46)
g[9, ] <- c(472.29, 326.46, 48.64, 17.98, 11.22, 7.94, 4.52)
g[10, ] <- c(428.96, 303.93, 52.46, 21.25, 13.75, 10.00, 5.89)
g[11, ] <- c(389.60, 282.96, 56.57, 25.13, 16.86, 12.57, 7.69)
g[12, ] <- c(353.86, 263.44, 61.01, 29.71, 20.66, 15.83, 10.03)
g[13, ] <- c(321.39, 245.26, 65.79, 35.12, 25.32, 19.93, 13.08)
g[14, ] <- c(291.65, 228.19, 71.00, 41.58, 31.09, 25.13, 17.11)
g[15, ] <- c(264.89, 212.45, 76.57, 49.16, 38.10, 31.64, 22.32)
g[16, ] <- c(240.09, 197.49, 82.60, 58.08, 46.67, 39.81, 29.13)
g[17, ] <- c(206.89, 176.99, 89.91, 67.82, 56.95, 50.13, 39.00)
g[18, ] <- c(178.28, 158.62, 97.86, 79.21, 69.50, 63.12, 52.22)
g[19, ] <- c(153.69, 142.20, 106.49, 92.46, 84.77, 79.43, 69.85)
p <- c(.02, .05, .50, .80, .90, .95, .99)
rank <- rank(flow, ties.method = "max")
rank <- max(rank) - rank
exceedtime <- 1 * (rank / (length(flow) + 1))
q <- sort(100 * flow / mean(flow), decreasing = FALSE)
exceed <- sort(exceedtime, decreasing = TRUE)
yl <- "Percent of mean discharge"
xl <- "Exceedance probability (%)"
xla <- "Normalized Exceedance probability (%)"
ylims <- c(1, max(q))
# capture plotting parameters, restore on exit
oldpar <- par(no.readonly = TRUE)
on.exit(par(oldpar))
par(mfrow = c(1, 1))
par(mar = c(4, 4, 2, 1))
par(las = 1)
tscale <- 1.2
if (nchar( title) >= 45) tscale <- 1.0
if (nchar( title) >= 50) tscale <- 0.8
if (normal == TRUE) {
exceed.z <- qnorm(exceed)
p.z <- qnorm(p)
xlims <- c(-3, 3)
plot(exceed.z, q,
type = "l", lwd = 2, col = "blue", log = "y", xaxt = "n",
ylim = ylims, xlim = xlims, xlab = xla, ylab = yl, las = 1,
main = title, cex.main = tscale
)
# Draw the normal-probability axis
probs <- c(
0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 0.95, 0.98, 0.99, 0.995, 0.998, 0.999
)
z.vals <- qnorm(probs)
axis(side = 1, at = z.vals, labels = probs, line = 0, tck = -0.025, xlab = xla)
abline(h = 0, lty = 3)
abline(h = 100, lty = 2, col = "red")
abline(v = qnorm(0.5), lty = 2, col = "red")
if (gust == TRUE) {
for (k in 1:19) {
points(p.z, g[k, ], type = "l", col = "gray")
text(p.z[7], g[k, 7], k, col = "gray", pos = 4, cex = 0.7)
}
points(exceed.z, q, type = "l", lwd = 2, col = "blue")
text(-1, 1.25, "Flow Duration Curve with Gustard's Type Curves",
col = "gray", cex = 0.9, pos = 1)
text(qnorm(p[7]), g[19, 7], "permeable", col = "gray", pos = 3, cex = 0.7)
text(qnorm(p[7]), g[1, 7], "impermeable", col = "gray", pos = 1, cex = 0.7)
}
}
else {
xlims <- c(0, 1)
plot(exceed, q,
type = "l", lwd = 2, col = "blue", log = "y", ylim = ylims,
xlim = xlims, xlab = xl, ylab = yl, las = 1, main = title,
cex.main = tscale
)
abline(v = 0, lty = 3)
abline(h = 100, lty = 2, col = "red")
abline(v = 0.5, lty = 2, col = "red")
if (gust == TRUE) {
for (k in 1:19) {
points(p, g[k, ], type = "l", col = "gray")
text(p[7], g[k, 7], k, col = "gray", pos = 4, cex = 0.7)
}
points(exceed, q, type = "l", lwd = 2, col = "blue")
text(.1, 1.25, "Flow Duration Curve with Gustard's Type Curves",
col = "gray", cex = 0.9, pos = 4)
text(p[7], g[19, 7], "permeable", col = "gray", pos = 3, cex = 0.7)
text(p[7], g[1, 7], "impermeable", col = "gray", pos = 1, cex = 0.7)
}
}
result <- data.frame(exceed, q)
names(result) <- c("exceedance_prob", "flow")
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_fdcurve.R
|
#' Raster plot of daily streamflows
#'
#' @description Produces a raster plot: years by day of year, showing magnitude of flow.
#' This produces a plot showing the flow data in colours, showing different context than in
#' a hydrograph. High flows are in warm colours.
#'
#' @param DF A data frame of daily flow data as read by \code{ch_read_ECDE_flows}.
#' @param rastercolours A vector of colours used for flow magnitudes (default \code{c("lightblue","cyan", "blue", "slateblue", "orange", "red")}).
#' @param metadata A dataframe of station metadata, defaults to \code{HYDAT_list}.
#'
#' @return No value is returned; a standard R graphic is created.
#' @author Paul Whitfield
#' @seealso \code{\link{ch_read_ECDE_flows}}
#' @export
#' @importFrom grDevices colorRampPalette
#' @importFrom graphics layout box
#' @importFrom timeDate dayOfYear as.timeDate
#' @importFrom fields image.plot
#' @seealso \code{\link{ch_flow_raster_trend}} \code{\link{ch_flow_raster_qa}}
#'
#' @examples
#' ch_flow_raster(CAN05AA008)
#'
ch_flow_raster <- function(DF, rastercolours = c("lightblue","cyan", "blue", "slateblue", "orange", "red"),
metadata = NULL) {
##### Fixed labels and text strings
DOY <- "Day of Year"
ylabelq <- expression( paste("Discharge m"^{3}, "/sec"))
qcols <- colorRampPalette(rastercolours)
station <- as.character(DF$ID[1])
sname <- ch_get_wscstation(station, metadata = metadata)
title <- sname$Station_lname
date <- as.Date(DF$Date, "%Y/%m/%d")
Year <- as.numeric(format(date,"%Y"))
doy <- as.numeric(dayOfYear(as.timeDate(date)))
mYear <- max(Year, na.rm = TRUE)
nYear <- min(Year, na.rm = TRUE) - 1
Years <- mYear - nYear
qdata <- array(dim = c(366, Years))
rows <- doy[1:nrow(DF)]
cols <- Year - nYear
locs <- cbind(rows, cols)
qdata[locs] <- DF$Flow
qmax <- max(qdata, na.rm = TRUE)
qmin <- min(qdata, na.rm = TRUE)
################################ raster map of daily flows
qdata <- as.matrix(qdata)
########################################################### start plotting section
# capture plotting parameters, restore on exit
oldpar <- par(no.readonly = TRUE)
on.exit(par(oldpar))
par(oma = c(0, 0, 3, 0))
layout(matrix(c(1, 1, 1, 1, 2, 1, 1, 1, 1, 3), 2, 5, byrow = TRUE))
par(mar = c(4, 4, 0, 0))
################################################################# panel one
doys <- c(1:366)
lyears <- c((nYear + 1):mYear)
image(1:366, 1:length(lyears), qdata,
axes = FALSE, col = qcols(9),
zlim = c(qmin, qmax), xlab = "", ylab = ""
)
sdoy <- ch_sub_set_Years(doys, 10)
axis(1, at = sdoy$position, labels = sdoy$label, cex = 1.2)
if (length(lyears) >= 70) nn <- 10 else nn <- 5
sYears <- ch_sub_set_Years(lyears, nn)
axis(2, at = sYears$position, labels = sYears$label, cex.axis = 1.2, las = 1)
mtext(DOY, side = 1, line = 2.2, cex = 0.9)
box()
################################################################# panel two
frame()
par(mar = c(2, 2, 2, 2))
######### scale bar and legend
image.plot(
zlim = c(qmin, qmax), col = qcols(9), legend.only = TRUE,
legend.width = 4, legend.mar = 1,
legend.shrink = 1.0,
bigplot = c(0.1, 0.2, 0.1, 0.2),
legend.args = list(text = ylabelq, side = 2, line = 0.5, cex = 0.90)
)
################################################################# panel three (element #ten)
frame()
frame()
frame()
frame()
frame()
############################################################## Add title
tscale <- 1.1
if (nchar(title) >= 45) tscale <- 0.9
if (nchar(title) >= 50) tscale <- 0.7
mtext(title, side = 3, line = 1, cex = tscale, outer = TRUE)
############################################################### end plotting section
}
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_flow_raster.R
#' Raster plot of daily streamflows with WSC quality flags
#'
#' @description Raster plot with WSC quality flags.
#' This produces a plot showing the flow data in grayscale
#' overlain by the Water Survey of Canada quality flags. Colours are consistent with
#' ECDataExplorer. The raster layout lets the user see the flags in a different
#' context than in a hydrograph.
#'
#' @return Returns \code{TRUE} if executed properly. A standard R graphic is
#' created: a raster plot of years against day of year, showing the data flags:
#' \item{A}{(Partial) in green}
#' \item{B}{(Backwater) in cyan}
#' \item{D}{(Dry) in yellow}
#' \item{E}{(Estimated) in red}
#'
#' @param DF dataframe of daily streamflow read by \code{ch_read_ECDE_flows}
#' @param metadata dataframe of station metadata, defaults to \code{HYDAT_list}
#'
#' @author Paul Whitfield
#' @importFrom graphics axis legend par plot points polygon image frame mtext layout box
#' @importFrom grDevices colorRampPalette
#' @importFrom timeDate dayOfYear as.timeDate
#' @importFrom fields image.plot
#' @export
#' @seealso \code{\link{ch_read_ECDE_flows}} \code{\link{ch_flow_raster_trend}} \code{\link{ch_flow_raster}}
#'
#' @examples
#' data(HYDAT_list)
#' data(CAN05AA008)
#' qaplot <- ch_flow_raster_qa(CAN05AA008)
#'
ch_flow_raster_qa <- function(DF, metadata = NULL) {
##### Fixed labels and text strings
DOY <- "Day of Year"
ylabelq <- expression(paste("Discharge m"^{3}, "/sec"))
rastercolours <- c("gray90", "gray80", "gray70", "gray60", "gray50", "gray40", "gray30", "gray20", "gray10")
qcols <- colorRampPalette(rastercolours)
fcols <- c("black", "red", "green", "blue", "cyan", "magenta", "yellow", "gray")
station <- as.character(DF$ID[1])
sname <- ch_get_wscstation(station, metadata = metadata)
title <- sname$Station_lname
date <- as.Date(DF$Date, "%Y/%m/%d")
Year <- as.numeric(format(date,"%Y"))
doy <- as.numeric(dayOfYear(as.timeDate(date)))
mYear <- max(Year, na.rm = TRUE)
nYear <- min(Year, na.rm = TRUE) - 1
Years <- mYear - nYear
qdata <- array(dim = c(366, Years))
flag <- array(dim = c(366, Years))
  # change flag codes to the colour codes used for plotting:
  # A -> 3 (green), B -> 5 (cyan), D -> 7 (yellow), E -> 2 (red);
  # numeric code 1 in the raw data denotes an unflagged day
  DF$SYM <- as.character(DF$SYM)
  DF$SYM[DF$SYM == "A"] <- 3
  DF$SYM[DF$SYM == "B"] <- 5
  DF$SYM[DF$SYM == "D"] <- 7
  DF$SYM[DF$SYM == "E"] <- 2
  DF$SYM <- as.numeric(DF$SYM) ##### convert to numbers for plotting colour
  # fill day-of-year by year matrices of flows and flags
  for (k in 1:length(DF[ , 1])) {
    qdata[doy[k], (Year[k] - nYear)] <- DF$Flow[k]
    flag[doy[k], (Year[k] - nYear)] <- DF$SYM[k]
  }
  flag[flag == 1] <- NA # unflagged days are not overplotted
qmax <- max(qdata, na.rm = TRUE)
qmin <- min(qdata, na.rm = TRUE)
################################ raster map of daily flows
qdata <- as.matrix(qdata)
########################################################### start plotting section
# capture plotting parameters, restore on exit
oldpar <- par(no.readonly = TRUE)
on.exit(par(oldpar))
par(oma = c(0, 0, 3, 0))
layout(matrix(c(1, 1, 1, 1, 2, 1, 1, 1, 1, 3), 2, 5, byrow = TRUE))
par(mar = c(4, 4, 0, 0))
################################################################# panel one
doys <- c(1:366)
lyears <- c((nYear + 1):mYear)
image(1:366, 1:length(lyears), qdata,
axes = FALSE, col = qcols(9),
zlim = c(qmin, qmax), xlab = "", ylab = ""
)
sdoy <- ch_sub_set_Years(doys, 15)
axis(1, at = sdoy$position, labels = sdoy$label, cex = 1.2)
if (length(lyears) >= 70) nn <- 10 else nn <- 5
sYears <- ch_sub_set_Years(lyears, nn)
axis(2, at = sYears$position, labels = sYears$label, cex.axis = 1.2, las = 1)
mtext(DOY, side = 1, line = 2.2, cex = 0.9)
for (ii in 1:366) {
for (jj in 1:length(lyears)) {
points(ii, jj,pch = 15,col = flag[ii, jj])
}
}
box()
################################################################# panel two
frame()
par(mar = c(4, 4, 0, 0))
######### scale bar and legend
image.plot(
zlim = c(qmin, qmax), col = qcols(9), legend.only = TRUE,
legend.width = 4, legend.mar = 1,
legend.shrink = 1.0,
bigplot = c(0.1, 0.2, 0.1, 0.2),
legend.args = list(text = ylabelq, side = 2, line = 0.5, cex = 0.90)
)
################################################################# panel three (element #ten)
frame()
frame()
frame()
frame()
frame()
par(oma = c(0, 0, 2, 0))
par(mar = c(1, 0, 0, 1))
leg.txt <- c(" (A) Partial", " (B) Backwater", " (D) Dry", " (E) Estimate")
lcol <- c("green", "cyan", "yellow", "red")
legend("left", leg.txt, pch = c(22, 22, 22, 22), pt.bg = lcol, cex = 1.0, bty = "n")
############################################################## Add title
tscale <- 1.3
if (nchar(title) >= 45) tscale <- 1.1
if (nchar(title) >= 50) tscale <- 0.8
mtext(title, side = 3, line = 1, cex = tscale, outer = TRUE)
############################################################### end plotting section
return(TRUE)
}
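# An equivalent, self-contained sketch of the flag recoding above using a
# named lookup vector (synthetic flags only; colour codes as in
# ch_flow_raster_qa: A -> 3 green, B -> 5 cyan, D -> 7 yellow, E -> 2 red):
demo_sym <- c("A", "1", "E", "B", "D")
demo_lookup <- c(A = 3, B = 5, D = 7, E = 2)
demo_col <- unname(demo_lookup[demo_sym]) # unmatched codes become NA
demo_col # NA entries (unflagged days) are simply not overplotted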
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_flow_raster_qa.R
#' Raster plot and simple trends of observed streamflows by periods
#'
#' @description
#' Creates a raster plot plus trend plots for day-of-year values binned into
#' periods of a set number of days (\code{step}), and for the maximum, minimum,
#' and median annual discharge across years. The plot contains four panels based
#' upon the binned data.
#'
#' @details
#' The four plots are: (1) The maximum, minimum, and median flow with a trend test for each
#' period: red arrows indicate decreases, blue arrows indicate increases.
#' (2) The scale bar for the colours used in the raster plot,
#' (3) The raster plot with a colour for each period and each year where data exist, and
#' (4) A time series plot of the minimum, median, and maximum annual bin values.
#' If there is no trend (p > 0.05) the points are black. Decreasing trends are in red, increasing trends are in blue.
#'
#' @author Paul Whitfield
#'
#' @param DF dataframe of daily flow data as read by \code{ch_read_ECDE_flows}
#' @param step a number indicating the degree of smoothing, e.g. 1, 5, 11.
#' @param missing If \code{FALSE} years with missing data are excluded.
#' If \code{TRUE} partial years are included.
#' @param colours A vector of colours used for the raster plot.
#' The default is \code{c("lightblue","cyan", "blue", "slateblue",
#' "darkblue", "red")}.
#' @param metadata a dataframe of station metadata, default is HYDAT_list.
#'
#' @return Returns a list containing:
#' \item{stationID}{Station ID, e.g. 05BB001}
#' \item{missing}{how missing values were handled: \code{FALSE} = years with missing data excluded, \code{TRUE} = partial years included}
#' \item{step}{number of days in a bin}
#' \item{periods}{number of periods in a year}
#' \item{period}{period numbers i.e. 1:365/step}
#' \item{bins}{values for each period in each year}
#' \item{med_period}{median for each period}
#' \item{max_period}{maximum for each period}
#' \item{min_period}{minimum for each period}
#' \item{tau_period}{Kendall's tau for each period}
#' \item{prob_period}{probability of Tau for each period}
#' \item{year}{years spanning the data}
#' \item{median_year}{median bin for each year}
#' \item{max_year}{maximum bin for each year}
#' \item{min_year}{minimum bin for each year}
#' \item{tau_median_year}{value of tau and probability for annual median}
#' \item{tau_maximum_year}{value of tau and probability for annual maximum}
#' \item{tau_minimum_year}{value of tau and probability for annual minimum}
#'
#'
#' @keywords plot
#' @importFrom graphics axis legend par plot points polygon image frame mtext layout box
#' @importFrom grDevices colorRampPalette
#' @importFrom Kendall MannKendall
#' @importFrom fields image.plot
#' @importFrom stats median
#' @export
#' @seealso \code{\link{ch_flow_raster}}
#' @references Whitfield, P. H., Kraaijenbrink, P. D. A., Shook, K. R., and Pomeroy, J. W. 2021.
#' The Spatial Extent of Hydrological and Landscape Changes across the Mountains and Prairies
#' of Canada in the Mackenzie and Nelson River Basins based on data from a Warm Season Time Window,
#' Hydrology and Earth System Sciences 25: 2513-2541.
#'
#'
#' @examples
#' data(CAN05AA008)
#' mplot <- ch_flow_raster_trend(CAN05AA008, step=5)
#'
ch_flow_raster_trend <- function(DF, step = 5, missing = FALSE, metadata = NULL,
colours = c("lightblue", "cyan", "blue", "slateblue", "darkblue", "red"))
{
l_disch <- expression(paste("m"^{3}, "/sec"))
l_disch2 <- expression(paste("\nm" ^{3}, "/sec"))
# get title information
station <- DF[1, 1]
sname <- ch_get_wscstation(station, metadata = metadata)
title <- sname$Station_lname
Date <- DF$Date
Flow <- DF$Flow
# get doy and year
doy_vals <- ch_doys(Date)
Year <- doy_vals$year
doy <- doy_vals$doy
DOY <- paste("Period of Year (", step, " day)", sep = "")
if (step >= 31) {
message(paste("step of", step, "larger than acceptable; has been reset to the maximum allowed [30] "))
step <- 30
}
days <- 365
periods <- days / step
periods <- round(periods, digits = 0)
period <- c(1:periods)
  ## Some records have stretches of missing years, so the data need to be restructured to include years with no record.
mYear <- max(Year, na.rm = TRUE)
nYear <- min(Year, na.rm = TRUE) - 1
nYears <- mYear - nYear ## total number of years
Years <- c((nYear + 1):mYear) ## all years in range
aYears <- unique(Year) ## actual years in range
mslice <- ch_slice(doy, step) ### create a factor for n day periods
myear <- as.factor(Year)
fac <- list(myear, mslice)
q_sliced <- tapply(Flow, fac, median) # get median value for each bin.
qsliced <- array(dim = c(periods, nYears))
for (k in 1:length(aYears)) {
for (kk in 1:periods) {
qsliced[kk, (aYears[k] - nYear)] <- q_sliced[k, kk]
}
}
colnames(qsliced) <- Years
rownames(qsliced) <- period
qmin <- min(Flow, na.rm = TRUE)
qmax <- max(Flow, na.rm = TRUE)
med_n <- array(NA, length(period))
max_n <- array(NA, length(period))
min_n <- array(NA, length(period))
tau <- array(NA, length(period))
prob <- array(NA, length(period))
code <- array(2, length(period))
arrow <- array(1, length(period))
for (i in 1:length(period)) { ### loop over getting values for periods of year
med_n[i] <- median(qsliced[i, ], na.rm = TRUE)
max_n[i] <- max(qsliced[i, ], na.rm = TRUE)
min_n[i] <- min(qsliced[i, ], na.rm = TRUE)
max_n[is.infinite(max_n)] <- NA
min_n[is.infinite(min_n)] <- NA
t1 <- NA
t1 <- MannKendall(qsliced[i, ])
tau[i] <- t1$tau
prob[i] <- t1$sl
# set flags for plotting
if (abs(prob[i]) == 1.00) code[i] <- 1
if (prob[i] <= 0.05) code[i] <- 3
if (prob[i] <= 0.05) arrow[i] <- 2
if (prob[i] <= 0.05 && tau[i] <= 0.) arrow[i] <- 3
}
ymed_n <- array(NA, length(Years))
ymax_n <- array(NA, length(Years))
ymin_n <- array(NA, length(Years))
for (i in 1:length(Years)) {
### loop over getting values for each year
ymed_n[i] <- median(qsliced[, i], na.rm = missing)
ymax_n[i] <- max(qsliced[ , i], na.rm = missing)
ymin_n[i] <- min(qsliced[ , i], na.rm = missing)
}
############################# replace -Inf with NA
ymax_n[is.infinite(ymax_n)] <- NA
ymin_n[is.infinite(ymin_n)] <- NA
tcol <- c("red", "black", "blue")
tmy <- MannKendall(ymed_n)
tminy <- MannKendall(ymin_n)
tmaxy <- MannKendall(ymax_n)
t1 <- ifelse(as.numeric(tmy[2]) > 0.05, 2, ifelse(tmy[1] >= 0, 3, 1))
t2 <- ifelse(as.numeric(tminy[2]) > 0.05, 2, ifelse(tminy[1] >= 0, 3, 1))
t3 <- ifelse(as.numeric(tmaxy[2]) > 0.05, 2, ifelse(tmaxy[1] >= 0, 3, 1))
##################################################### three panel output
# capture plotting parameters, restore on exit
oldpar <- par(no.readonly = TRUE)
on.exit(par(oldpar))
par(oma = c(1, 1, 3, 1))
qcols <- colorRampPalette(colours)
nf <- layout(matrix(c(2, 4, 1, 3), 2, 2, byrow = TRUE), c(3, 1), c(1, 3), TRUE)
##################################################### panel one raster image
par(mar = c(6, 4, 0, 0))
image(1:periods, 1:length(Years), qsliced, axes = FALSE, col = qcols(9),
zlim = c(qmin, qmax), xlab = "", ylab = "")
sstep <- round(periods/5)
speriod <- ch_sub_set_Years(period,sstep)
axis(1, at = speriod$position, labels = speriod$label, cex = 1.2)
nn <- 1
if (length(Years) >= 70) nn <- 10
if (length(Years) >= 40) nn <- 5
if (length(Years) >= 20) nn <- 2
sYears <- ch_sub_set_Years(Years, nn)
axis(2, at = sYears$position, labels = sYears$label, cex.axis = .7, las = 1)
mtext(DOY,side = 1, line = 2.2, cex = 0.9)
box()
month <- c("J", "F", "M", "A", "M", "J", "J", "A", "S", "O", "N", "D", "") #***
mday <- c(0, 31, 59, 90, 120, 151, 181, 212,243, 273, 304, 334,365)
md <- (mday / 365*as.integer(365 / step)) + 1
axis(1, line = 3.5, at = md, month)
##################################################### panel two doy summary of trends
par(mar = c(7, 4, 0, 0))
mch <- c("", 1, 19)
mch_n <- c("", 173, 175)
mcolour <- c("white", "blue", "red")
ylimits <- c(min(qsliced, na.rm = TRUE), max(qsliced, na.rm = TRUE))
par(mar = c(1, 4, 0, 0))
plot(period, med_n, ylab = l_disch, col = "black", ylim = ylimits, xaxt = "n", xaxs = "i", las = 1,pch = as.numeric(mch[code]))
points(period, max_n, type = "l", col = "gray35")
points(period, min_n, type = "l", col = "gray35")
par(font = 5)
points(period, med_n, type = "p", col = mcolour[arrow], pch = as.numeric(mch_n[arrow]), cex = 1.2)
par(font = 1)
axis(1, line = 0, at = md, labels = FALSE)
##################################################### panel three time series
options(scipen = 999)
xy <- c(1:length(Years))
ylimits <- c(min(qsliced, na.rm = TRUE), max(qsliced, na.rm = TRUE))
if (ylimits[1] == 0) (ylimits[1] <- 0.001)
par(mar = c(6,4,0,0))
  plot(ymed_n, xy, col = tcol[t1], xlim = ylimits, xlab = l_disch, yaxt = "n",
       yaxs = "i", log = "x", ylab = "")
points(ymax_n, xy, col = tcol[t3], pch = 19, cex = 0.7)
points(ymin_n, xy, col = tcol[t2], pch = 19, cex = 0.7)
######################################################## Add title
tscale <- 1.2
if (nchar(title) >= 45) tscale <- 1.0
if (nchar(title) >= 50) tscale <- 0.8
mtext(title, side = 3, line = 1, cex = tscale,outer = TRUE)
######################################################## Add scale bar
frame()
par(oma = c(2, 9, 0, 0))
par(mar = c(0, 0, 0, 0))
zr = c(qmin, qmax)
image.plot(zlim = zr,
col = qcols(9),legend.only = TRUE,
legend.width = 4.5, legend.shrink = 0.8,
bigplot = c(0.1, 0.2, 0.1, 0.2),
legend.args = list(text = l_disch, side = 2, line = 0.5, cex = 0.9))
sID <- substr(title, 1, 7)
line1 <- list(sID, missing, step, periods, qsliced, period, med_n, max_n, min_n,
tau, prob, Years, ymed_n, ymax_n, ymin_n, tmy, tmaxy, tminy)
names(line1) <- c("sID", "na.rm =", "step", "periods", "bins","period", "med_period", "max_period",
"min_period", "tau_period", "prob_period", "year", "median_year", "max_year", "min_year",
"tau_median_year", "tau_maximum_year", "tau_minimum_year")
return(line1)
}
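# Self-contained sketch of the per-period Mann-Kendall test applied above
# (assumes the Kendall package is installed, as imported by this function;
# synthetic series with an imposed upward trend):
library(Kendall)
set.seed(42)
demo_x <- 1:30 + rnorm(30, sd = 4)
demo_mk <- MannKendall(demo_x)
c(tau = demo_mk$tau, p = demo_mk$sl) # p <= 0.05 is drawn in blue (increase)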
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_flow_raster_trend.R
#' Reads an Environment Canada Data Explorer (ECDE) meta data file
#'
#' @description Reads the file that is generated from ECDE 'save favourite stations' to capture the ECDE metadata.
#' The dataframe returned contains 20 fields from ECDE.
#'
#' @param filename The name of the ECDE file, \option{FavHydatStations.tb0}.
#' @param writefile Default is \code{NULL}, but if it is a filename e.g. \option{filename.csv}
#' then the dataframe is saved to a csv file.
#'
#' @author Paul Whitfield <[email protected]>
#'
#' @importFrom utils read.table write.csv
#'
#' @export
#'
#' @return Returns a dataframe consisting of:
#' \item{Station}{StationID}
#' \item{StationName}{Station Name}
#' \item{HYDStatus}{Active or Discontinued}
#' \item{Prov}{Province}
#' \item{Latitude}{}
#' \item{Longitude}{}
#' \item{DrainageArea}{km\eqn{^2}{^2}}
#' \item{Years}{Number of years with data}
#' \item{From}{Start Year}
#' \item{To}{End Year}
#' \item{Reg.}{Regulated?}
#' \item{Flow}{If TRUE/Yes flow data exists}
#' \item{Level}{If TRUE/Yes level data exists}
#' \item{Sed}{If TRUE/Yes sediment data exists}
#' \item{OperSched}{Operations current - Continuous or Seasonal}
#' \item{RealTime}{If TRUE/Yes real time data is available}
#' \item{RHBN}{If TRUE/Yes the station is in the reference hydrologic basin network}
#' \item{Region}{Name of regional office operating station}
#' \item{Datum}{Elevation datum}
#' \item{Operator}{Operator or provider of the data}
#' @examples \dontrun{
#' # Don't run this example as it requires an ECDE file
#' filename <- "FavHydatStations.tb0" # dummy file name (not supplied)
#' meta0 <- ch_get_ECDE_metadata(filename)
#' meta1 <- ch_get_ECDE_metadata(filename, writefile="study52_metadata.csv")
#' }
ch_get_ECDE_metadata <- function(filename, writefile=NULL){
# check ECDE filename
if (filename == "" | is.null(filename)) {
stop("ECDE file not specified")
}
if (!file.exists(filename)) {
stop("ECDE file not found")
}
  # skip the tb0 header block (96 lines) at the top of the file
  meta <- read.table(filename, skip = 96, sep = " ", na.strings = -999)
names(meta) <- c("Station", "Fav", "StationName", "HydStatus", "Prov", "Latitude",
"Longitude", "DrainageArea", "Years", "From", "To", "Reg.",
"Flow", "Level", "Sed", "OperSched", "RealTime", "RHBN",
"Region", "Datum", "Operator")
meta <- meta[,c(1, 3:21)]
if (!is.null(writefile))
write.csv(meta, writefile, row.names = FALSE)
return(meta)
}
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_get_ECDE_metadata.R
#'@title ch_get_map_base
#'
#'@description Prepares for mapping by acquiring the base map and ancillary data:
#' boundaries and rivers. The maps are obtained using OpenStreetMap::openmap which
#' originally accessed the following map types:
#' "osm", "osm-bw", "maptoolkit-topo", "waze", "bing", "stamen-toner",
#' "stamen-terrain", "stamen-watercolor", "osm-german", "osm-wanderreitkarte",
#' "mapbox", "esri", "esri-topo", "nps", "apple-iphoto", "skobbler",
#' "hillshade", "opencyclemap", "osm-transport", "osm-public-transport",
#' "osm-bbike", "osm-bbike-german".
#'
#' In April 2022 access to many of these failed, limiting the available
#' maps to one of: "osm", "bing", "stamen-toner",
#' "stamen-terrain", "stamen-watercolor", "apple-iphoto", "opencyclemap",
#' "osm-transport", "osm-public-transport".
#'
#' In January 2023, ne_download failed as it produced an incorrect url.
#'
#' Access to "nps" [default] was added as a work around until OpenstreetMap is updated.
#'
#' "nps": This layer presents the U.S. National Park Service (NPS) Natural Earth
#' physical map at 1.24km per pixel for the world and 500m for the coterminous
#' United States.
#'
#'@param maplat vector of latitudes (2)
#'@param maplong vector of longitudes (2)
#'@param map_proj map projection currently NA/"latlong" or "albers"/"equalarea"
#'@param map_directory directory where map data will be stored; will be
#' created if it does not exist.
#'@param map_type map type: select one of \option{osm}, \option{bing},
#'\option{stamen-toner}, \option{stamen-terrain}, \option{stamen-watercolor},
#'\option{apple-iphoto}, \option{opencyclemap}, \option{osm-transport},
#'\option{osm-public-transport}, \option{nps} [default].
#'
#'@return Returns a list containing:
#'\describe{
#'\item{map_d}{map data directory}
#'\item{plines10}{provincial and state boundaries}
#'\item{rlines10}{rivers and lakes}
#'\item{map_proj}{projection used}
#'\item{latitude}{bottom and top latitudes}
#'\item{longitude}{east and west longitudes}
#'}
#'
#'@importFrom rnaturalearth ne_load ne_download
#'@author Paul Whitfield
#'@export
#'@examples \donttest{
#'# Note: example not tested automatically as it is very slow to execute due to the downloading
#'latitude <- c(48.0, 61.0)
#'longitude <- c(-110.0, -128.5)
#'mapdir <- tempdir()
#'# get map data
#'m_map <- ch_get_map_base(latitude,longitude,
#' map_proj = "Albers",
#' map_directory = mapdir,
#' map_type = "nps")
#'}
ch_get_map_base <- function(maplat, maplong,
map_proj = NA,
map_directory = ".",
map_type = "nps")
{
  # upper-left and lower-right corners of the map extent
  uleft <- c(maplat[2], maplong[2])
  lright <- c(maplat[1], maplong[1])
cdn_latlong = "+proj=longlat"
# form a projection string based on latitude and longitude
cdn_aea <- paste("+proj=aea +lat_1=",maplat[1],
" +lat_2=", maplat[2],
" +lat_0=", (maplat[1] + maplat[2])/2,
" +lon_0=", (maplong[1] + maplong[2])/2,
" +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83",
" +units=m +no_defs", sep = "")
if (is.na(map_proj)) map_proj <- cdn_latlong
if (tolower(map_proj) == "latlong") map_proj <- cdn_latlong
if (tolower(map_proj) == "albers") map_proj <- cdn_aea
if (tolower(map_proj) == "equalarea") map_proj <- cdn_aea
# get basic map
################## work around 2022-05-25 OpenStreetMap fails to access "nps"
if (map_type == "nps") {
nps_file <- "https://server.arcgisonline.com/ArcGIS/rest/services/World_Physical_Map/MapServer/tile/{z}/{y}/{x}.jpg"
result <- ch_test_url_file(nps_file)
if (result == "error" | result == "warning") {
stop("nps file does not exist")
}
map_a <- OpenStreetMap::openmap(uleft, lright, minNumTiles = 3,
type = nps_file)
}
if (map_type != "nps"){
map_a <- OpenStreetMap::openmap(uleft, lright,
type = map_type,
minNumTiles = 9L)
}
# change projection
map_d <- OpenStreetMap::openproj(map_a, projection = map_proj)
#####################################################################
# if map_directory does not exist create it
if (!dir.exists(map_directory)) {
print(paste("Creating a new directory for map data", map_directory))
dir.create(map_directory)
setwd(map_directory)
}
##################################################################
# if files don't exist get zip files and unzip
if (!file.exists("ne_10m_admin_1_states_provinces_lakes.prj")) {
cadd <- "https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_1_states_provinces_lakes.zip"
result <- ch_test_url_file(cadd)
if (result == "error" | result == "warning") {
stop("natural earth provinces and lakes file does not exist")
}
zip_file = tempfile()
utils::download.file(cadd, zip_file)
utils::unzip(zip_file, unzip = "unzip", exdir = map_directory)
}
if (!file.exists("ne_10m_rivers_lake_centerlines.prj"))
{
radd <- "https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_rivers_lake_centerlines.zip"
result <- ch_test_url_file(radd)
if (result == "error" | result == "warning") {
stop("natural earth rivers file does not exist")
}
zip_file = tempfile()
utils::download.file(radd, zip_file)
utils::unzip(zip_file, unzip = "unzip", exdir = map_directory)
}
plines10 <- rnaturalearth::ne_load(scale = 10,
type = "states",
category = 'cultural',
destdir = map_directory,
returnclass = "sf")
rivers10 <- rnaturalearth::ne_load(scale = 10,
type = "rivers_lake_centerlines",
category = 'physical',
destdir = map_directory,
returnclass = "sf")
#####################################################################
  # fallback: if map_directory still does not exist, create it and download the data directly
if (!dir.exists(map_directory)) {
print(paste("Creating a new directory for map data", map_directory))
dir.create(map_directory)
setwd(map_directory)
plines10 <- rnaturalearth::ne_download(scale = 10,
type = "states",
category = 'cultural',
destdir = map_directory,
returnclass = "sf")
rivers10 <- rnaturalearth::ne_download(scale = 10,
type = "rivers_lake_centerlines",
category = 'physical',
destdir = map_directory,
returnclass = "sf")
}
map_data <- list(map_d, plines10, rivers10, map_proj, maplat, maplong)
names(map_data) <- c("map_d", "plines10", "rivers10", "map_proj",
"latitude","longitude")
return(map_data)
}
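# Sketch of the Albers equal-area proj4 string assembled above, for a sample
# extent (synthetic values only; no map download involved):
demo_lat <- c(48.0, 61.0)
demo_long <- c(-128.5, -110.0)
paste0("+proj=aea +lat_1=", demo_lat[1], " +lat_2=", demo_lat[2],
       " +lat_0=", mean(demo_lat), " +lon_0=", mean(demo_long),
       " +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs")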
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_get_map_base.R
#' Extracts peak flows over a threshold
#'
#' @description
#' This function is development code being shared as is. It is expected that the user will be interested in the
#' data frame returned for POT analysis and for plotting (e.g. \code{ch_booth_plot}).
#'
#' This function retrieves peaks greater than the prescribed threshold. It returns a data frame of peak characteristics
#' suitable for subsequent analysis.
#'
#' The portion under development returns a list of the flows during an event together with the values of the four
#' preceding and four subsequent days. If the peak is a single day the fragment is nine points long; if the event is longer
#' the fragment contains all days above the threshold and eight additional days.
#'
#' @param dataframe a data frame of streamflow data containing columns named \option{Date} and \option{Flow}
#' @param threshold a value for the threshold. Values above the threshold are tested for peaks.
#'
#' @return Returns a list containing:
#' \item{POTevents}{a dataframe containing details of the events}
#' \item{events}{a vector with the value 0 when the flow is below the threshold and 1 when above.}
#' \item{event_num}{a vector with the value 0 when the flow is below a threshold or the index of the events when the threshold was exceeded. i.e. 1,2,3, etc}
#' \item{st_date}{start date of events}
#' \item{case}{a list of the daily flows in each individual event (see details for more information)}
#'
#' The \code{POTevents} data frame contains five columns:
#' \item{st_date}{starting date of event}
#' \item{max_date}{date of maximum in the event}
#' \item{max}{maximum discharge during event}
#' \item{volume}{flow volume during the event}
#' \item{duration}{length of the event in days}
#' The \code{case} list contains the flows during an event and also for the four preceding and subsequent days. Each event will
#' be between nine and n days in length. Note: in rare cases where an event is in progress when data become available the
#' event might be shorter than nine days long.
#'
#' @author Paul Whitfield
#'
#' @references
#' Burn, D.H., Whitfield, P.H., Sharif, M., 2016. Identification of changes in floods and flood regimes
#' in Canada using a peaks over threshold approach. Hydrological Processes, 39: 3303-3314. DOI:10.1002/hyp.10861
#'
#' Whitfield, P.H., and J.W. Pomeroy. 2016. Changes to flood peaks of a mountain river: implications
#' for analysis of the 2013 flood in the Upper Bow River, Canada. Hydrological Processes 30:4657-73. doi:
#' 10.1002/hyp.10957.
#'
#' @export
#' @seealso \code{\link{ch_booth_plot}}
#' @examples
#' CAN05AA008 <- CAN05AA008
#' threshold <- 0.5*max(CAN05AA008$Flow) # arbitrary threshold
#' my_peaks <- ch_get_peaks(CAN05AA008, threshold)
#' str(my_peaks)
ch_get_peaks <- function(dataframe, threshold) {
maxflow <- max(dataframe$Flow)
  if (maxflow < threshold) {
    message(paste("Threshold of", threshold,
                  "is greater than maximum observed flow", maxflow))
    return()
  }
data <- dataframe$Flow
Date <- dataframe$Date
  event <- array(0, dim = length(data))
  event_num <- array(0, dim = length(data))
  flow <- array(dim = 9)
  st_date <- array(NA, dim = 3)
  max_date <- array(NA, dim = 3)
  class(st_date) <- "Date"
  class(max_date) <- "Date"
  case <- list()
  for (i in 1:length(data)) {
    if (is.na(data[i])) next
    if (data[i] > threshold) event[i] <- 1 ##### flag all days above the threshold with a 1
  }
############################################## Make data frame of events and attributes
#
  # event is an array of 0s and 1s: a switch from 0 to 1 starts a new event,
  # and a switch from 1 to 0 ends it
# track maximum
# sum volumes above thresholds
max <- array(NA,dim=1)
volume <- array(NA,dim=1)
duration <- array(NA,dim=1)
  index <- 1 # index increments each time a new event is detected
  flag <- 0
  for (k in 1:length(data)) {
    if (event[k] == 1 && flag == 0) { ### New Event
      st_date[index] <- Date[k]
      max[index] <- data[k]
      max_date[index] <- Date[k]
      volume[index] <- 0.0
      duration[index] <- 0
      flag <- 1
    }
    if (event[k] == 1 && flag == 1) { ### Continuing Event
      if (data[k] > max[index]) {
        max[index] <- data[k]
        max_date[index] <- Date[k]
      }
      volume[index] <- volume[index] + data[k]
      duration[index] <- duration[index] + 1
    }
    if (event[k] == 0 && flag == 1) { ### Event has ended
      index <- index + 1
      flag <- 0
    }
  }
st_date <- as.Date(st_date, format="%Y-%m-%d")
  volume <- volume * 24 * 60 * 60 * 1e-9 ################### convert volumes to km^3
max_date <- as.Date(max_date, format="%Y-%m-%d")
POT_events <- data.frame(st_date, max_date, max, volume, duration)
######### individual events
  ############################################## Make list of individual events
  #
  # There are two problem conditions. The first is when the first event starts before there are four preceding days,
  # and the second is when the record ends during an event, or there are fewer than four days between the end of an
  # event and the end of the record. In these cases the event is padded with NA where no data existed; these get removed later.
  #
  flag <- 0
  index <- 1
  flow <- array(dim = 9)
  for (k in 1:length(data)) {
    if (event[k] == 1 && flag == 0) { ### New Event
      st_date[index] <- as.character(Date[k])
      ##### start the event with the immediately preceding 4 days;
      ##### if k is less than 5, pad the days before the record with NA
      if (k == 1) {flow[1:4] <- NA}
      if (k == 2) {flow[1:3] <- NA; flow[4] <- data[k - 1]}
      if (k == 3) {flow[1:2] <- NA; flow[3] <- data[k - 2]; flow[4] <- data[k - 1]}
      if (k == 4) {flow[1] <- NA; flow[2] <- data[k - 3]; flow[3] <- data[k - 2]; flow[4] <- data[k - 1]}
      if (k <= 4) {event_num[1:4] <- index}
      ##### otherwise use the values of the four days that precede the event
      if (k >= 5) {
        event_num[k - 1] <- index; flow[4] <- data[k - 1]
        event_num[k - 2] <- index; flow[3] <- data[k - 2]
        event_num[k - 3] <- index; flow[2] <- data[k - 3]
        event_num[k - 4] <- index; flow[1] <- data[k - 4]
      }
      ii <- k - 5
      event[k - 1] <- 1
      event_num[k] <- index
      flag <- 1
    }
    if (event[k] == 1 && flag == 1) { ### Continuing Event
      event_num[k] <- index
      flow[k - ii] <- data[k]
    }
    if (event[k] == 0 && flag == 1) { ### Event has ended: first subsequent day
      event_num[k] <- index; flow[k - ii] <- data[k]
      ## add up to three more subsequent days, but only if not past the end of the record
      if ((length(data) - k) >= 1) {event_num[k + 1] <- index; flow[k + 1 - ii] <- data[k + 1]}
      if ((length(data) - k) >= 2) {event_num[k + 2] <- index; flow[k + 2 - ii] <- data[k + 2]}
      if ((length(data) - k) >= 3) {event_num[k + 3] <- index; flow[k + 3 - ii] <- data[k + 3]}
      ############################################# if there are missing values, remove them from the event
      case[index] <- list(flow[!is.na(flow)])
      rm(flow) ### clear the event array for the next case
      flow <- array(dim = 9)
      event[k - 1] <- 1
      index <- index + 1
      flag <- 0
    }
  }
  # event_num holds 0 outside events and the event index (1 to n) within them
ncases <- length(POT_events$st_date)
events <- list(POT_events,ncases, case)
names(events) <- c("POTevents","ncases","case")
return(events)
}
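# Usage sketch: a percentile-based threshold is a common peaks-over-threshold
# choice (assumes the CAN05AA008 data frame supplied with this package):
demo_thresh <- quantile(CAN05AA008$Flow, 0.95, na.rm = TRUE)
demo_peaks <- ch_get_peaks(CAN05AA008, demo_thresh)
head(demo_peaks$POTevents) # st_date, max_date, max, volume, duration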
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_get_peaks.R
#' @title Gets remote data sets
#'
#' @description
#' Accesses a data set via a url the first time it is called, saves it
#' locally, and then reads the local copy on subsequent calls.
#'
#' @param gd_url url for accessing data set
#' @param gd_filename name of file on local drive, including full path
#' @param quiet Optional. If \code{FALSE} (the default) error/warning messages are printed if the data cannot be found.
#' @author Dan Moore
#'
#' @importFrom httr GET write_disk
#' @importFrom sf st_read
#' @importFrom raster raster
#' @importFrom utils read.csv write.csv
#'
#' @return Returns a data frame (from a .csv file), a \code{raster} object (from a .tif file),
#'or an \code{sf} object (from a GeoJSON file).
#'
#' @examples \donttest{
#' # Example not tested automatically as multiple large data files are downloaded which is slow
#'
#' # Tested using files in the Upper Penticton Creek
#' # zenodo repository https://zenodo.org/record/4781469
#' library(ggplot2)
#' library(raster)
#'
#' # create directory to store data sets
#' dir_name <- tempdir(check = FALSE)
#' if (!dir.exists(dir_name)) {
#' dir.create(dir_name)
#' }
#'
#' # test with soil moisture data in csv format
#' sm_fn <- file.path(dir_name, "sm_data.csv")
#' sm_url <- "https://zenodo.org/record/4781469/files/sm_data.csv"
#' sm_data <- ch_get_url_data(sm_url, sm_fn)
#' head(sm_data)
#'
#' # test with tif/tiff file containing a dem
#' ra_fn <- file.path(dir_name, "gs_dem25.tif")
#' ra_url <- "https://zenodo.org/record/4781469/files/gs_dem25.tif"
#' ra_data <- ch_get_url_data(ra_url, ra_fn)
#' plot(ra_data)
#'
#' # test with GeoJSON
#' gs_fn <- file.path(dir_name, "gs_soilmaps.GeoJSON")
#' gs_url <- "https://zenodo.org/record/4781469/files/gs_soilmaps.GeoJSON"
#' gs_data <- ch_get_url_data(gs_url, gs_fn)
#'
#' ggplot(gs_data) +
#' geom_sf(aes(fill = new_key)) +
#' labs(fill = "Soil class",
#' x = "UTM Easting (m)",
#' y = "UTM Northing (m)") +
#' coord_sf(datum = 32611) +
#' theme_bw()
#' }
#' @export
#'
ch_get_url_data <- function(gd_url, gd_filename, quiet = FALSE) {
file_ext <- strsplit(x = gd_filename, split = "[.]")[[1]][2]
# csv file - returns data frame
if (file_ext == "csv") {
if (!file.exists(gd_filename)) {
# check to see if url file exists
result <- ch_test_url_file(gd_url, quiet)
if (result == "error" | result == "warning") {
stop("URL does not exist")
}
da <- read.csv(gd_url)
      write.csv(da, gd_filename, row.names = FALSE) # avoid adding a row-name column on re-read
} else {
da <- read.csv(gd_filename)
}
return(da)
}
# tiff file - returns raster object
if (file_ext %in% c("tif", "tiff")) {
if (!file.exists(gd_filename)) {
# check to see if url file exists
result <- ch_test_url_file(gd_url, quiet)
if (result == "error" | result == "warning") {
stop("URL does not exist")
}
GET(gd_url, write_disk(gd_filename))
}
da <- raster::raster(gd_filename)
return(da)
}
# GeoJSON - returns sf object
if (file_ext == "GeoJSON") {
if (!file.exists(gd_filename)) {
# check to see if url file exists
result <- ch_test_url_file(gd_url, quiet)
if (result == "error" | result == "warning") {
stop("URL does not exist")
}
GET(gd_url, write_disk(gd_filename))
da <- st_read(gd_filename)
} else {
da <- st_read(gd_filename)
}
return(da)
}
}
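# Sketch of the cache-then-read behaviour (synthetic local example: a csv
# written to tempdir() stands in for an already-downloaded file, and the
# URL below is a hypothetical placeholder that is never contacted):
demo_fn <- file.path(tempdir(), "demo_cached.csv")
write.csv(data.frame(x = 1:3), demo_fn, row.names = FALSE)
demo_df <- ch_get_url_data("https://example.invalid/demo_cached.csv", demo_fn)
demo_df # read from the local copy; no download is attempted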
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_get_url_data.R
#' Reads station information from a data file produced by ECDE
#'
#' @description Retrieves station information for an individual Water Survey of Canada site, based on stationID;
#' adds a text string at position 21 that combines key elements for a title.
#'
#' @param stnID A Water Survey of Canada station number
#' @param metadata a data frame of station information from ECDataExplorer. The data frame \option{HYDAT_list} is supplied with this package.
#'
#' @author Paul Whitfield
#'
#' @return Returns a line from a data frame with 21 variables
#' \item{Station}{StationID}
#' \item{StationName}{Station Name}
#' \item{HYDStatus}{Active or Discontinued}
#' \item{Prov}{Province}
#' \item{Latitude}{}
#' \item{Longitude}{}
#' \item{DrainageArea}{Area in km\eqn{^2}{^2}}
#' \item{Years}{# of years with data}
#' \item{From}{Start Year}
#' \item{To}{End Year}
#' \item{Reg.}{Regulated or natural}
#' \item{Flow}{if TRUE/Yes flow data is available}
#' \item{Level}{if TRUE/Yes water level data is available}
#' \item{Sed}{if TRUE/Yes sediment data is available}
#' \item{OperSched}{Current operation schedule- Continuous or Seasonal}
#' \item{RealTime}{if TRUE/Yes real time data exists}
#' \item{RHBN}{if TRUE/Yes is in the reference hydrologic basin network}
#' \item{Region}{WSC Region}
#' \item{Datum}{Datum used}
#' \item{Operator}{Agency responsible for collecting data}
#' \item{Station_lname}{Added field combining StationID, StationName, Province and if station is RHBN an * is added}
#'
#'
#' @export
#'
#' @importFrom utils data
#'
#' @examples
#' data("HYDAT_list")
#' s_info <- ch_get_wscstation("05BB001", metadata = HYDAT_list)
#' title <- s_info[21]
#' print(title)
#'
ch_get_wscstation <- function(stnID, metadata = NULL) {
  HYDAT_list <- c(0) # placeholder, replaced by data() below when no metadata is supplied
if (is.null(metadata)) {
data("HYDAT_list", envir = environment())
metadata <- HYDAT_list
}
rhbn <- NULL
stninfo <- metadata[metadata$Station == stnID, ]
if (length(stninfo[, 1]) == 0) {
message(paste("WSC Station ", stnID, " not found"))
return(stnID)
}
if (!is.na(stninfo$RHBN) && stninfo$RHBN == TRUE) {
(rhbn <- "*")
}
stninfo[21] <- paste(stninfo$Station, " - ", stninfo$StationName, " - ",
stninfo$Prov, rhbn, sep = "")
names(stninfo) [21] <- "Station_lname"
return(stninfo)
}
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_get_wscstation.R
#' @title Hydrograph plot
#'
#' @description
#' Creates a hydrograph plot for simulated, observed, and inflow
#' hydrograph series, including precipitation if provided. The secondary y axis
#' will be used to plot the precip time series.
#'
#' @details
#' Assumes that the supplied time series have the same length and
#' duration in time. If this is not true, then the defined period or period
#' calculated from the first available flow series will be used to determine
#' the plotting limits in time. The supplied time series should be in \pkg{xts} format.
#' Note that a plot title is purposely omitted in order to allow the automatic
#' generation of plot titles.
#'
#' @param flows data frame of flows to plot
#' @param precip data frame of precipitation values to plot
#' @param prd period to use in plotting
#' @param winter_shading optionally adds a transparent cyan shading for the
#' December 1st to March 31st period in each year that is plotted. Default is
#' \code{FALSE}.
#' @param winter_colour colour to use in winter shading polygons
#' @param range_mult_flow range multiplier for max value in hydrograph. This is
#' useful in preventing overlap if precip is also plotted. This value should not
#' be less than 1.0, otherwise the values will be cut off in the plot.
#' @param range_mult_precip range multiplier for max value in precipitation plot (default 1.5)
#' @param flow_labels string vector of labels for flow values
#' @param ylabel text label for y-axis of the plot (default 'Flow [m^3/s]')
#' @param precip_label text label for precipitation y-axis (default 'Precipitation [mm]')
#' @param leg_pos string specifying legend placement on plot, e.g. \option{topleft},
#' \option{right}, etc., consistent with the options of the legend function. If \code{NULL},
#' the legend is placed on the left when precip is supplied, and on the topleft
#' otherwise.
#' @param leg_box boolean on whether to put legend in an opaque white box
#' or not. If \code{NULL} (the default), the function will automatically not use a white box
#' and leave the background of the legend transparent.
#' @param zero_axis fixes the y axis to start exactly at zero (default \code{TRUE}).
#' By default, R will plot the values with a
#' small buffer for presentation. Be warned that if this option is set to
#' TRUE, the minimum value is set to zero without checking if any flow values
#' are less than zero. This option should not be used for reservoir stage plotting, since
#' reservoir stages are typically reported as elevations.
#'
#' @return Returns \code{TRUE} if the function is executed properly.
#'
#' @author Robert Chlumsky
#'
#' @examples
#' # example with synthetic random data
#' dd <- seq.Date(as.Date("2010-10-01"), as.Date("2013-09-30"),by = 1)
#' x <- abs(rnorm(length(dd)))
#' y <- abs(rnorm(length(dd))) * x
#' df <- data.frame("Date" = dd, x, y)
#' myprd <- "2011-10-01/2012-09-30"
#'
#' precip <- data.frame("Date" = dd," precip" = abs(rnorm(length(dd))) * 10)
#'
#' # basic hydrograph plot
#' ch_hydrograph_plot(flows = df, winter_shading = FALSE)
#'
#' # with different labels and winter shading
#' ch_hydrograph_plot(flows = df, winter_shading = TRUE,
#' flow_labels = c("simulated", "observed"))
#'
#' # add precipitation, increase the plot ranges to separate flows and precip, and add a legend box
#' ch_hydrograph_plot(flows = df, precip = precip, range_mult_flow = 1.7,
#' range_mult_precip = 2, leg_box = TRUE)
#'
#' @importFrom lubridate year month day date
#' @importFrom graphics grid lines
#' @export
#'
ch_hydrograph_plot <- function(flows = NULL,
precip = NULL,
prd = NULL,
winter_shading = FALSE,
winter_colour='cyan',
range_mult_flow = NULL,
range_mult_precip = 1.5,
flow_labels = NULL,
ylabel = NULL,
precip_label = "Precipitation [mm]",
leg_pos = NULL,
leg_box = NULL,
zero_axis = TRUE) {
# check flows data frame
if (!(is.null(flows))) {
if (!inherits(flows, "data.frame")) {
stop("flows must be a data frame.")
}
if (nrow(flows) == 0) {
stop("flows data frame cannot be empty (zero rows).")
}
if (ncol(flows) == 1) {
stop("flows data frame cannot be empty (no data columns).")
}
if (which(colnames(flows) == "Date") != 1) {
stop("'Date' must be the first attribute of flows data frame.")
}
if (is.null(flows$Date)) {
stop("Date attribute is required in flows data frame.")
}
if (ncol(flows) > 11) {
stop("flows cannot have more than 11 data columns (other than 'Date').")
}
}
# check flow labels
if (!(is.null(flow_labels))) {
if (length(flow_labels) != ncol(flows) - 1) {
stop("flow_labels must have the same number of labels as the flows data frame (not including Date attribute).")
}
} else {
flow_labels <- colnames(flows)[2:ncol(flows)]
}
# check ylabel
if (is.null(ylabel)) {
ylabel <- expression(paste("Flow [m" ^{3}, "/s]"))
}
# check precip data frame
if (!(is.null(precip))) {
if (!inherits(precip, "data.frame")) {
stop("precip must be a data frame.")
}
if (nrow(precip) == 0) {
stop("precip data frame cannot be empty (zero rows).")
}
if (ncol(precip) == 1) {
stop("precip data frame cannot be empty (no data columns).")
}
if (which(colnames(precip) == "Date") != 1) {
stop("'Date' must be the first attribute of precip data frame.")
}
if (is.null(precip$Date)) {
stop("Date attribute is required in precip data frame.")
}
if (ncol(precip) > 2) {
stop("precip cannot have more than 1 data column (other than 'Date').")
}
}
# check range.mult input
if (!(is.null(range_mult_flow))) {
if (range_mult_flow <= 0) {
stop("range_mult_flow must be a positive value.")
}
if (range_mult_flow < 1) {
warning("range_mult_flow is less than one, plot may be cut off.")
}
}
  if (!(is.null(range_mult_precip))) {
if (range_mult_precip <= 0) {
stop("range_mult_precip must be a positive value.")
}
if (range_mult_precip < 1) {
warning("range_mult_precip is less than one, plot may be cut off.")
}
}
# adjust range_mult_flow if precip is NULL
if (is.null(range_mult_flow)) {
if (is.null(precip)) {
range_mult_flow <- 1.05
} else {
range_mult_flow <- 1.5
}
}
# determine period ----
if (!(is.null(prd))) {
# period is supplied; check that it makes sense
firstsplit <- unlist(strsplit(prd, "/"))
if (length(firstsplit) != 2) {
stop("Check the format of supplied period; should be two dates separated by '/'.")
}
if (length(unlist(strsplit(firstsplit[1], "-"))) != 3 || length(unlist(strsplit(firstsplit[2], "-"))) != 3
|| nchar(firstsplit[1]) != 10 || nchar(firstsplit[2]) != 10) {
stop("Check the format of supplied period; two dates should be in YYYY-MM-DD format.")
}
if (nrow(ch_date_subset(flows, prd)) == 0) {
stop("prd does not overlap with flows; check prd and flows data frame.")
}
} else {
# period is not supplied
# define entire range as period
N <- nrow(flows)
prd <- sprintf(
"%d-%02d-%02d/%i-%02d-%02d",
year(flows$Date[1]),
month(flows$Date[1]),
day(flows$Date[1]),
year(flows$Date[N]),
month(flows$Date[N]),
day(flows$Date[N])
)
}
# subset data
flows <- ch_date_subset(flows, prd)
if (!(is.null(precip))) {
precip <- ch_date_subset(precip, prd)
if (nrow(precip) == 0) {
warning("precip data does not overlap with prd; check data and prd arguments.")
precip <- NULL
}
}
# capture plotting parameters, restore on exit
oldpar <- par(no.readonly = TRUE)
on.exit(par(oldpar))
# set parameters for plotting; then plot
if (!(is.null(precip))) {
par(mar = c(5, 4, 4, 4) + 0.1)
}
if (zero_axis) {
# sets the interval calculation in plotting to be right to specified limits
# otherwise extends by 4% by default
par(yaxs = "i")
}
y.hmax <- max(flows[, 2:ncol(flows)], na.rm = TRUE) * range_mult_flow
if (zero_axis) {
y.hmin <- 0
} else {
y.hmin <- min(flows[, 2:(ncol(flows))], na.rm = TRUE)
}
plot(flows$Date, flows[, 2],
xlab = "Date", ylab = ylabel,
col = "white", type = "l", ylim = c(y.hmin, y.hmax), panel.first = grid()
)
if (winter_shading) {
# shaded winter months
temp <- flows[((month(flows$Date) == 12) & (day(flows$Date) == 1)) |
((month(flows$Date) == 3) & (day(flows$Date) == 31)), ]
ep <- match(date(temp$Date), date(flows$Date))
if (month(flows$Date[ep[1]]) == 3) {
# ep <- ep[-1]
ep <- c(1, ep)
}
if (month(flows$Date[ep[length(ep)]]) == 12) {
# ep <- ep[-length(ep)]
ep <- c(ep, nrow(flows))
}
# bc <- "#FF0000E6" # "#00FFFF32"
for (k in seq(1, length(ep), 2)) {
cord.x <- c(
date(flows$Date[ep[k]]), date(flows$Date[ep[k]]),
date(flows$Date[ep[k + 1]]), date(flows$Date[ep[k + 1]])
)
cord.y <- c(-1e3, y.hmax * 1e3, y.hmax * 1e3, -1e3)
polygon(cord.x, cord.y, col = winter_colour, border = NA)
}
}
# define legend items
NN <- ncol(flows) - 1
leg.items <- flow_labels
# want to replace colour selection with a nice package that picks nice colours together, for any number of inputs
leg.cols <- c("red", "navyblue", "black", "orange", "cyan", "darkgreen", "coral1", "deeppink", "blue", "orangered3")[1:NN]
leg.lty <- rep(seq(1, 5, 1), 3)[1:NN]
leg.lwd <- rep(1, NN)
# add all flow data to plot
for (i in 1:NN) {
lines(flows$Date, flows[, (i + 1)], lty = leg.lty[i], lwd = leg.lwd[i], col = leg.cols[i])
}
# add precip data if not null
if (!(is.null(precip))) {
par(new = TRUE)
precip.col <- "#0000FF64"
plot(precip$Date, precip[, 2],
col = precip.col, lty = 1, lwd = 1,
type = "h", ylim = rev(c(0, max(precip[, 2], na.rm = TRUE) * range_mult_precip)), xaxt = "n", yaxt = "n",
xlab = "", ylab = ""
)
axis(4)
mtext(sprintf("%s", precip_label), side = 4, line = 2.5)
leg.items <- c(leg.items, precip_label)
leg.cols <- c(leg.cols, precip.col)
leg.lty <- c(leg.lty, 1)
leg.lwd <- c(leg.lwd, 1)
}
if (is.null(leg_pos)) {
if (!(is.null(precip))) {
leg_pos <- "left"
} else {
leg_pos <- "topleft"
}
}
if (is.null(leg_box)) {
leg_box <- "n"
} else {
if (leg_box) {
leg_box <- "o"
} else {
leg_box <- "n"
}
}
# add legend to plot
legend(
x = leg_pos,
legend = leg.items,
lty = leg.lty,
col = leg.cols,
lwd = leg.lwd,
bty = leg_box,
cex = 0.8, inset = 0.01
)
return(TRUE)
}
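# Sketch of the default 'prd' period string assembled above when none is
# supplied (synthetic dates; lubridate accessors as used in the function):
library(lubridate)
demo_d <- as.Date(c("2010-10-01", "2013-09-30"))
sprintf("%d-%02d-%02d/%d-%02d-%02d",
        as.integer(year(demo_d[1])), as.integer(month(demo_d[1])), as.integer(day(demo_d[1])),
        as.integer(year(demo_d[2])), as.integer(month(demo_d[2])), as.integer(day(demo_d[2])))
# "2010-10-01/2013-09-30"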
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_hydrograph_plot.R
#' Generate a map for a defined area
#'
#'@description Generates a map for a defined area. Options to plot station
#' locations, magnitudes, trends, etc., watershed boundaries, and to add
#' user-defined labels. See the article in CWRA "Water News", Spring 2023.
#' The elements are added to the map in an order that puts the symbols on top.
#' Large basins, WSC basins, rivers, Provinces, then data symbols. Labels are
#' added last.
#'
#'@param map_data a list produced by the function ch_get_map_base()
#'@param locations a dataframe with longitude, latitude. a third column may contain indexes of symbol types.
#'@param lo_pch plotting symbols: default is lo_pch=c(19,19)
#'@param lo_col plotting symbol colours: default is lo_col= c("black","black")
#'@param lo_bg plotting symbol background (20-24) default is "white"
#'@param lo_cex plot symbol size default is 0.8
#'@param lo_text legend title text, default is "Station"
#'@param lo_title names for different items in legend, default is "Location"
#'
#'# adding large basin boundaries
#'@param lb_basins a list with basin shapefiles
#'@param lb_border colour for watershed boundaries: default is "darkred"
#'@param lb_lwd width of watershed boundary
#'@param lb_clip clip basins at map edge, default is TRUE
#'
#'# adding WSC basin boundaries
#'@param sb_basins a list with basin shapefiles
#'@param sb_border colour for watershed boundaries: default is "darkred"
#'@param sb_lwd width of watershed boundary
#'@param sb_clip clip basins at map edge, default is FALSE
#'
#'# adding trends
#'@param trends a dataframe with four columns (Longitude, Latitude, trend, and pvalue); the trend and pvalue
#' are in original units, such as slope and probability. These are converted to indexes (1, 2, 3) for decreasing,
#' no trend, and increasing, and (1, 2) for not significant and significant
#'@param tr_pch plotting symbols: default is tr_pch = c(25, 20, 24)
#'@param tr_col plotting symbol colours: default is tr_col = c("red","black","darkblue")
#'@param tr_cex plotting symbol sizes for non-significant and significant: default is tr_cex = c(0.50, 1.0, 0.0)
#'@param tr_p trend significance level: default is tr_p = 0.05
#'
#'# adding variable with colour gradient
#'@param var a dataframe with three columns (Longitude, Latitude, value)
#'@param vr_pch a symbol to plot: default is vr_pch = 22
#'@param vr_cex size for plot symbol: default is vr_cex = 2.0
#'@param vr_text a label to include in the legend
#'@param vr_range set ranges for color gradient default is (0, 1)
#'@param vr_colors colours for gradient default is ("darkred", "red","white","green", "darkgreen")
#'
#'# adding variable with symbol diameter
#'@param sc_var a dataframe with three columns (Longitude, Latitude, value)
#'@param sc_pch a symbol to plot: default is vr_pch = 20
#'@param sc_text a label to include in the legend default is ""
#'@param sc_range range used for the legend of scaled symbols: default is (0, 1) if values are not scaled against the largest
#'@param sc_color symbol colour default is "magenta"
#'
#'# adding rivers
#'@param rivers plot rivers in blue: default is TRUE
#'@param boundaries plot provincial boundaries: default is TRUE
#'
#'# adding provincial boundaries
#'@param plabels add the names of provinces: default is TRUE
#'@param pl_cex adjusts size of provincial labels: default is 1.0
#'@param legend add a legend to the plot: default is FALSE
#'@param le_text legend categories: default is NA.
#'@param x_labels a dataframe with seven columns (long, lat, pos, cex, font, col, text). Each row provides details for a single label : default is NA
#'@param tr_ltext text for legend
#'@param tr_lsz symbol sizes for trends default is c(1,0.40, 0.40, 0.40, 1)
#'@param ... Other mapping parameters
#'
#'@return Produces a map on an output device.
#'
#'@import sf
#'@import dplyr
#'@author Paul Whitfield
#'@export
#'@examples \donttest{
#'# Note: example not tested automatically as it is very slow to execute due to the downloading
#'# get base map
#'latitude <- c(48.0, 61.0)
#'longitude <- c(-110.0, -128.5)
#'mapdir <- tempdir()
#'# get map data
#'m_map <- ch_get_map_base(latitude,longitude,
#' map_proj = "Albers",
#' map_directory = mapdir,
#' map_type = "nps")
#' # add symbols
#' stations <- HYDAT_list
#' stations <- stations[,c("Longitude", "Latitude")]
#' stations <- na.omit(stations)
#' ch_map_plot_data(m_map, sc_var = stations, sc_text = "Years")
#' }
##########################################################
ch_map_plot_data <- function(map_data,
locations = NULL,
lo_pch = 19,
lo_col = "black",
lo_bg = "white",
lo_cex = 0.8,
lo_text = "Station",
lo_title = "Location",
lb_basins = NULL,
lb_border = "darkred",
lb_lwd = 2.,
lb_clip = TRUE,
sb_basins = NULL,
sb_border = "darkred",
sb_lwd = 1.,
sb_clip = FALSE,
trends = NULL,
tr_pch =c(25, 20, 24),
tr_col = c("red","black","darkblue"),
tr_cex = c(0.50, 1.0, 0.0), tr_p = 0.05,
tr_ltext = c("Significant Increase", "Increase", "No Change", "Decrease", "Significant Decrease"),
tr_lsz = c(1,0.40, 0.40, 0.40, 1),
var = NULL,
vr_pch = 22,
vr_cex = 2.0,
vr_text = NA,
vr_range = c(0,1),
vr_colors = c("darkred", "red","white","green", "darkgreen"),
sc_var = NULL,
sc_pch = 20,
sc_range = c(0,1),
sc_text = "",
sc_color = "magenta",
rivers = TRUE,
boundaries = TRUE,
plabels = TRUE,
pl_cex = 1.0,
legend = FALSE,
le_text = NA,
x_labels = NULL,
...)
{
sf::sf_use_s2(FALSE)
cdn_latlong = "+proj=longlat"
# proj4string for Canadian Albers equal area projection
cdn_aea = "+proj=aea +lat_1=50 +lat_2=70 +lat_0=40 +lon_0=-110 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs"
# pull apart the map_data list
map_d <- map_data[[1]]
plines10 <- map_data[[2]]
rivers10 <- map_data[[3]]
map_proj <- map_data[[4]]
maplat <- map_data[[5]]
maplong <- map_data[[6]]
#################################### plot base map
plot(map_d)
################################################# add large basin boundaries
if (!is.null(lb_basins)) {
if (lb_clip) {
for (kb in 1:length(lb_basins)) {
n_basin <- sf::st_crop(lb_basins[[kb]], xmin = min(maplong), xmax = max(maplong), ymin = min(maplat), ymax = max(maplat))
n_basin <- sf::st_transform(n_basin, sf::st_crs(map_proj))
plot(n_basin, border = lb_border, lwd = lb_lwd, col = NA, add = TRUE)
}
}
if (!lb_clip) {
for (kb in 1:length(lb_basins)) {
n_basin <- sf::st_transform(lb_basins[[kb]], sf::st_crs(map_proj))
plot(n_basin, border = lb_border, lwd = lb_lwd, col = NA, add = TRUE)
}
}
}
################################################# end basin boundaries
################################################# add small basin boundaries
if (!is.null(sb_basins)) {
if (sb_clip) {
for (kb in 1:length(sb_basins)) {
n_basin <- sf::st_crop(sb_basins[[kb]], xmin = min(maplong), xmax = max(maplong), ymin = min(maplat), ymax = max(maplat))
n_basin <- sf::st_transform(n_basin, sf::st_crs(map_proj))
plot(n_basin, border = sb_border, lwd = sb_lwd, col = NA, add = TRUE)
}
}
if (!sb_clip) {
for (kb in 1:length(sb_basins)) {
n_basin <- sf::st_transform(sb_basins[[kb]], sf::st_crs(map_proj))
plot(n_basin, border = sb_border, lwd = sb_lwd, col = NA, add = TRUE)
}
}
}
################################################# end basin boundaries
################################################# add lines and features
if (rivers) {
rline <- sf::st_crop(rivers10, xmin = maplong[1],xmax = maplong[2],
ymin = maplat[1],ymax = maplat[2])
rivers_1 <- sf::st_transform(rline, sf::st_crs(map_proj))
plot(rivers_1, col = "blue", add = TRUE)
}
if (boundaries) {
pline <- sf::st_crop(plines10,
xmin = maplong[1],
xmax = maplong[2],
ymin = maplat[1],
ymax = maplat[2])
plines_1 <- sf::st_transform(pline, sf::st_crs(map_proj))
plot(plines_1, lwd = 2, col = NA, add = TRUE)
}
##################################################################### add provincial labels
if (plabels) {
# set latitude and longitude for provincial and territorial labels
llong <- c(-124, -124, -115, -105.8, -136.3, -120.1, -120.1, -86.7, -73.4, -97.3, -95,
-61.3, -61.3, -66.15, -66.15, -63.3, -63.7)
llat <- c(55, 55, 56.5, 55, 63.3, 64, 64, 51.5, 51.5, 56, 65.8, 54, 54, 46.9, 46.9, 46.3, 44.7)
lpos <- c(3, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 3, 1, 3, 1, 1, 1)
Labels <- c("British", "Columbia", "Alberta", "Saskatchewan", "Yukon", "Northwest", "Territories",
"Ontario", "Quebec", "Manitoba", "Nunavut", "Newfoundland", "and Labrador",
"New", "Brunswick", "P.E.I.", "Nova Scotia")
### trim list of provincial labels to only those inside the map box
map_label <- data.frame(llong,llat,lpos,Labels)
map_label1 <- map_label[map_label$llong <= maplong[1],]
map_label2 <- map_label1[map_label1$llong >= maplong[2],]
map_label3 <- map_label2[map_label2$llat >= maplat[1],]
map_label4 <- map_label3[map_label3$llat <= maplat[2],]
map_labels <- map_label4 %>%
dplyr::mutate(Label = as.factor(Labels)) %>%
dplyr::mutate(lpos = as.factor(lpos)) %>%
sf::st_as_sf(coords = c("llong","llat")) %>%
sf::st_set_crs(4326)
pt_labelsa <- sf::st_transform(map_labels, map_proj)
    for (kk in 1:length(pt_labelsa$lpos)) {
      xy <- unlist(pt_labelsa$geometry[kk])
      lab <- as.character(pt_labelsa$Label[kk])
      # use the position codes of the filtered labels, not the unfiltered lpos vector
      text(xy[1], xy[2], lab, pos = as.numeric(as.character(pt_labelsa$lpos[kk])),
           col = "black", cex = pl_cex)
    }
}
##################################################################### end of add provincial labels
################################################################### plotting locations
if (!is.null(locations)) {
if (ncol(locations) == 3) {
mlong <- locations[,1]
mlat <- locations[,2]
mcode <- locations[,3]
}
if (ncol(locations) == 2) { ### only locations provided
# create a simple features object with crs = epsg 4326 (longlat with WGS84 geoid)
mlong <- locations[,1]
mlat <- locations[,2]
mcode <- rep(1,length(mlong))
}
# create a simple features object with crs = epsg 4326 (longlat with WGS84 geoid)
pt_sf <- data.frame(mlong, mlat, mcode) %>%
dplyr::mutate(mcode = as.factor(mcode)) %>%
sf::st_as_sf(coords = c("mlong", "mlat")) %>%
sf::st_set_crs(4326)
# reproject
pt_sfa <- sf::st_transform(pt_sf, map_proj)
mcode <- unlist(mcode)
################################################# plot points
plot(pt_sfa, add = TRUE,
pch = lo_pch[mcode],
col = lo_col[mcode],
bg = lo_bg[mcode])
if (legend) {
legend("topright", lo_text, pch = lo_pch, inset = c(0.06,0.115), col = lo_col,
pt.bg = lo_col,
cex = lo_cex, bg = "white", title = lo_title)
}
}
##################################################################### end of locations
##################################################################### plotting trends
if (!is.null(trends)) {
if (!length(trends[1,]) == 4) {
print("trends data frame does not have four columns: long, lat, slope, pvalue")
return()
}
mlong <- as.numeric(unlist(trends[1]))
mlat <- as.numeric(unlist(trends[2]))
trend_a <- trends[3]
pvalue <- trends[4]
trend <- ch_tr_sign(trend_a)
signif <- ch_tr_signif(pvalue, pvalue = tr_p)
# create a simple features object with crs = epsg 4326 (longlat with WGS84 geoid)
pt_sf <- data.frame(mlong, mlat, trend, signif) %>%
dplyr::mutate(trend = as.factor(trend)) %>%
dplyr::mutate(signif = as.factor(signif)) %>%
sf::st_as_sf(coords = c("mlong", "mlat")) %>%
sf::st_set_crs(4326)
# reproject
pt_sfa <- sf::st_transform(pt_sf, map_proj)
# pt_sfa <- st_crop(pt_sfa, xmin=maplong[1],xmax=maplong[2],ymin=maplat[1],ymax=maplat[2])
    tcode <- unlist(trend)
    pcode <- unlist(signif)
    ################################################# plot points
    plot(pt_sfa, add = TRUE,
         pch = tr_pch[tcode],
         col = tr_col[tcode],
         bg = tr_col[tcode],
         cex = tr_cex[pcode])
if (legend) {
tr_lpch <- c(tr_pch[3], tr_pch[3], tr_pch[2], tr_pch[1], tr_pch[1])
tr_lcol <- c(tr_col[3], tr_col[3], tr_col[2], tr_col[1], tr_col[1])
legend("topright", tr_ltext, pch = tr_lpch, cex = 0.60, inset = c(0.06,0.115), col = tr_lcol, pt.bg = tr_lcol,
pt.cex = tr_lsz, bg = "white", title = "Trends")
}
}
##################################################################### end of trends
##################################################################### plot variable scaled colour
if (!is.null(var)) {
if (!length(var[1,]) == 3) {
print("varaible data frame does not have three columns: long, lat, variable")
return()
}
mlong <- as.numeric(unlist(var[1]))
mlat <- as.numeric(unlist(var[2]))
var_a <- as.numeric(unlist(var[3]))
# create a simple features object with crs = epsg 4326 (longlat with WGS84 geoid)
pt_sf <- data.frame(mlong, mlat, var_a ) %>%
dplyr::mutate(var_a = as.factor(var_a)) %>%
sf::st_as_sf(coords = c("mlong", "mlat")) %>%
sf::st_set_crs(4326)
# reproject
pt_sfa <- sf::st_transform(pt_sf, map_proj)
plot(pt_sfa, add = TRUE,
pch = 19,
col = "black",
cex = 0.30)
plot(pt_sfa, add = TRUE,
pch = vr_pch,
col = "black",
bg = ch_color_gradient(var_a, vr_colors),
cex = 0.5 * vr_cex)
if (legend) {
lx <- seq(vr_range[1], vr_range[2], by = (vr_range[2] - vr_range[1]) / 10)
legend("topright", legend = lx, pch = 22, inset = c(0.06,0.115), pt.bg = ch_color_gradient(lx, vr_colors), cex = 0.8,
bg = "white", title = vr_text)
}
}
################################################################### end of variable scaled colour
##################################################################### plot scaled symbols
if (!is.null(sc_var)) {
if (!length(sc_var[1,]) == 3) {
print("variable data frame does not have three columns: long, lat, variable")
return()
}
mlong <- as.numeric(unlist(sc_var[1]))
mlat <- as.numeric(unlist(sc_var[2]))
var_a <- as.numeric(unlist(sc_var[3]))
var_b <- var_a
# create a simple features object with crs = epsg 4326 (longlat with WGS84 geoid)
pt_sf <- data.frame(mlong, mlat, var_a ) %>%
dplyr::mutate(var_a = as.factor(var_a)) %>%
sf::st_as_sf(coords = c("mlong", "mlat")) %>%
sf::st_set_crs(4326)
# reproject
pt_sfa <- sf::st_transform(pt_sf, map_proj)
plot(pt_sfa, add = TRUE,
pch = sc_pch,
col = "black",
cex = 0.30)
plot(pt_sfa, add = TRUE,
pch = sc_pch,
col = sc_color,
bg = sc_color,
cex = var_b*4)
if (legend) {
lx <- seq(sc_range[1], sc_range[2], by = (sc_range[2] - sc_range[1]) / 10)
legend("topright", legend = lx, pch = sc_pch, inset = c(0.06,0.115), pt.bg = sc_color,
col = sc_color, pt.cex = lx * 4, title = sc_text,
bg = "white")
}
}
  ################################################################### end of scaled symbols
##################################################################### add special labels
if (!is.null(x_labels)) {
llong <- unlist(x_labels[1])
llat <- unlist(x_labels[2])
lpos <- unlist(x_labels[3])
lcex <- unlist(x_labels[4])
lfont <- unlist(x_labels[5])
lcol <- unlist(x_labels[6])
llabels <- unlist(x_labels[7])
xmap_labels <- data.frame(llong, llat, lpos, lcol, lfont, lcex, llabels) %>%
dplyr::mutate(llabels = as.factor(llabels)) %>%
dplyr::mutate(lpos = as.factor(lpos)) %>%
dplyr::mutate(lcol = as.factor(lcol)) %>%
dplyr::mutate(lcex = as.factor(lcex)) %>%
dplyr::mutate(lfont = as.factor(lfont)) %>%
sf::st_as_sf(coords = c("llong","llat")) %>%
sf::st_set_crs(4326)
x_labelsa <- sf::st_transform(xmap_labels, map_proj)
for (kk in 1:length(llong)) {
xy <- unlist(x_labelsa$geometry[kk])
text(xy[1],xy[2],llabels[kk], pos = lpos[kk], font = lfont[kk], col = lcol[kk], cex = lcex[kk])
}
}
##################################################################### end of add x_labels
return()
}
######################################################################## end of function
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_map_plot_data.R
|
#' Polar plot of daily streamflows
#'
#' @description Produces a polar plot similar to that used in \cite{Whitfield and Cannon, 2000}. It uses output
#' from the function \code{\link{ch_binned_MannWhitney}} or a data structure created using
#' the function \code{\link{ch_polar_plot_prep}}.
#'
#' @param bmw output from \code{\link{ch_binned_MannWhitney}}
#' @param lcol1 line colour, default is \code{c("black","gray50")}
#' @param lcol2 point colour, default is \code{c("black","gray50")}
#' @param lfill fill colour, default is \code{c("yellow","green")}
#' @param lsig significance symbol colour, default is \code{c("red","blue")}
#'
#' @references
#' Whitfield, P.H. and A.J. Cannon. 2000. Polar plotting of seasonal hydrologic
#' and climatic data. Northwest Science 74: 76-80.
#'
#' Whitfield, P.H., Cannon, A.J., 2000. Recent variations in climate and hydrology
#' in Canada. Canadian Water Resources Journal 25: 19-65.
#' @return No value is returned; a standard \R graphic is created.
#' @keywords plot
#' @author Paul Whitfield
#' @importFrom plotrix radial.plot radial.grid
#' @importFrom stats approx
#' @export
#' @seealso \code{\link{ch_binned_MannWhitney}} \code{\link{ch_polar_plot_prep}}
#' @examples
#' range1 <- c(1970,1979)
#' range2 <- c(1990,1999)
#' b_MW <- ch_binned_MannWhitney(CAN05AA008, step = 5, range1, range2,
#' ptest = 0.05)
#' ch_polar_plot(b_MW)
ch_polar_plot <- function(bmw, lcol1 = c("black", "gray50"), lcol2 = c("black", "gray50"),
lfill = c("yellow", "green"), lsig = c("red", "blue")) {
dlabels <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
dbreaks <- c(1, 32, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335)
dbreaks <- dbreaks / 365 * 2 * pi
series <- bmw$series
llines <- array(NA, dim = 2)
llines[1] <- paste(bmw$range1[1], "-", bmw$range1[2], bmw$bin_method)
llines[2] <- paste(bmw$range2[1], "-", bmw$range2[2], bmw$bin_method)
bins <- length(series[, 1])
cpos <- c(1:bins)
cpos <- cpos / bins * 365
cpos <- cpos / 365 * 2 * pi
cpos <- cpos[1:length(series[, 2])]
rlim <- c(0, max(series[, 2], series[, 3]) * 1.01)
rlim[1] <- -rlim[2] / 6
mdelta <- (rlim[2] - rlim[1]) / 20
xmax <- array(data = NA, dim = length(series[, 1]))
xmin <- array(data = NA, dim = length(series[, 1]))
# capture plotting parameters, restore on exit
oldpar <- par(no.readonly = TRUE)
on.exit(par(oldpar))
par(mfrow = c(1, 1))
  # set up the basic polar plot by plotting the first series, which has a radial line for each interval in the set
par(cex.lab = 0.75)
radial.plot(as.numeric(series[, 2]), cpos,
rp.type = "ps",
line.col = lcol1[1], point.symbols = 16, point.col = lcol2[1],
labels = NULL, label.pos = cpos,
radial.lim = rlim, show.grid.labels = 1,
start = 3 * pi / 2, clockwise = TRUE,
main = bmw$Station_lname
)
par(cex.lab = 1.0)
# add the polygons for periods of increase and decreases
# create the polygons for increases and decreases
ppolys <- data.frame(cpos, as.numeric(series[, 2]), as.numeric(series[, 3]))
# add a row to span back to the first element
ppolys[bins + 1, ] <- ppolys[1, ]
ppolys[bins + 1, 1] <- ppolys[bins + 1, 1] + ppolys[bins, 1] # adjust so radians continue
p1 <- approx(ppolys[, 1], n = 10 * (bins + 1))
p2 <- approx(ppolys[, 2], n = 10 * (bins + 1))
p3 <- approx(ppolys[, 3], n = 10 * (bins + 1))
pp <- data.frame(p1[1], p1[2], p2[2], p3[2])
pp[, 5] <- pp[, 3] > pp[, 4] # add a column of p1 greater than p2
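  # the two curves cross where this indicator changes value; each segment
  # between crossings is filled with the colour of whichever series is on top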
intersect.points <- which(diff(pp[, 5]) != 0)
ipoints <- c(1, intersect.points, length(pp[, 1]))
for (j in 2:length(ipoints)) {
polyx <- c(
pp[ipoints[j - 1]:ipoints[j], 2],
rev(pp[ipoints[j - 1]:ipoints[j], 2])
)
polyy <- c(
pp[ipoints[j - 1]:ipoints[j], 3],
rev(pp[ipoints[j - 1]:ipoints[j], 4])
)
test <- ifelse(pp[ipoints[j], 5], 1, 2)
radial.plot(polyy, polyx,
rp.type = "p", poly.col = lfill[test], line.col = NA,
radial.lim = rlim, cex = 0.5,
start = 3 * pi / 2, clockwise = TRUE, add = TRUE
)
}
  # replot the first data set so it appears on top of the polygons, then add the second
radial.plot(as.numeric(series[, 2]), cpos,
rp.type = "ps", line.col = lcol1[1],
point.symbols = 16, point.col = lcol2[1],
radial.lim = rlim, cex = 0.5,
start = 3 * pi / 2, clockwise = TRUE, add = TRUE
)
radial.plot(as.numeric(series[, 3]), cpos,
rp.type = "ps", line.col = lcol1[2],
point.symbols = 16, point.col = lcol2[2],
radial.lim = rlim, cex = 0.5,
start = 3 * pi / 2, clockwise = TRUE, add = TRUE
)
gp <- pretty(rlim)
radial.grid(
labels = dlabels, label.pos = dbreaks, radlab = FALSE,
radial.lim = rlim, clockwise = TRUE,
start = 3 * pi / 2, grid.col = "gray20", show.radial.grid = FALSE,
start.plot = FALSE, grid.pos = gp[length(gp)]
)
# get positions for significance symbols
for (k in 1:bins) {
xmax[k] <- max(series[k, 2], series[k, 3]) + mdelta
xmin[k] <- min(series[k, 2], series[k, 3]) - mdelta
}
# set up for plotting the significance arrows
ssym <- data.frame(cpos, series[, 6], xmax, xmin)
names(ssym) <- c("cpos", "t", "xmax", "xmin")
spsym <- ssym[ssym$t == 1, ]
snsym <- ssym[ssym$t == -1, ]
for (k in 1:length(snsym[, 1])) {
radial.plot(snsym$xmin[k], snsym$cpos[k],
rp.type = "s", point.symbols = 24, point.col = lsig[1], bg = lsig[1],
radial.lim = rlim, cex = 0.85,
start = 3 * pi / 2, clockwise = TRUE, add = TRUE
)
}
for (k in 1:length(spsym[, 1])) {
radial.plot(spsym$xmax[k], spsym$cpos[k],
rp.type = "s", point.symbols = 25, point.col = lsig[2], bg = lsig[2],
radial.lim = rlim, cex = 0.85,
start = 3 * pi / 2, clockwise = TRUE, add = TRUE
)
}
# add legend
ltext <- c(
llines, paste("Decrease in", bmw$variable), paste("Increase in", bmw$variable),
"Significant Decrease", "Significant Increase", " ", paste("Method", bmw$test_method),
paste("p<=", bmw$p_used)
)
lcols <- c(lcol1, lfill, lsig, "black", "black", "black")
  lcols1 <- c(NA, NA, NA, NA, lsig, NA, NA, NA)
  # legend symbols must match those used in the plot: 24 (decrease), 25 (increase)
  lsym <- c(19, 19, 15, 15, 24, 25, NA, NA, NA)
  lcex <- c(0.8, 0.8, 1.25, 1.25, 0.9, 0.9, NA, NA, NA)
  lln <- c(1, 1, NA, NA, NA, NA, NA, NA, NA)
legend(rlim[2] * 1.6, rlim[2] * 1.5, ltext,
pch = lsym, col = lcols, bty = "n", lty = lln, pt.cex = lcex,
pt.bg = lcols1, cex = 0.75
)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_polar_plot.R
|
#'Polar / circular plots of peak flows
#' @description Polar / circular plots of peak flows.
#' Creates a polar plot of flow peaks in one of several different forms.
#' Basic plot has shading for nival and pluvial centroids.
#'
#' @param title a title to be added to the plot
#' @param direction a value or array of mean/median direction, circular mean or
#' median of points from ch_circ_mean_reg (optional)
#' @param regularity a value or array of regularity from ch_circ_mean_reg (optional).
#' @note points inside the plot
#' @param in_pch a value or an array of symbols to be used for centroids. To be in color,
#' must be one of 21 to 25 to get a symbol with a border; otherwise a red symbol is plotted.
#' @param in_col an array of colors, either numbers or names, to apply to centroid points (optional,
#' default is "red")
#' @param in_cex an array of symbol sizes
#' @param in_detail an array of indices indicating symbol [1] shape, [2] colour, [3] background,
#' and [4] size
#' @note in_pch, in_col, and in_cex will normally be of the same length, which would
#' be the maximum index of in_detail
#' @param days an array of days of year to be plotted on perimeter (optional).
#' @param labels an array of labels to be placed beside points with direction and regularity (optional)
#' @param label_pos an array of positions indicating where the label is placed (1, 2, 3, or 4 - below, left,
#' above, right) (optional - default is below)
#' @param shading if \code{TRUE} adds shading and labels for nival and pluvial regimes default = \code{FALSE}
#' @param shade percentage of shading, default is 35.
#' @note points on the outside
#' @param out_pch symbols for points on outside of circle
#' @param pt_col colour used for points for events. default = "darkblue". If pt_col is an array it is used to colour
#' the individual points of days
#' @param out_cex point size for symbol
#' @param ... other plot options
#'
#' @return Creates a circular plot of peak flows.
#'
#' @references
#' Pewsey, A., M. Neuhauser, and G. D. Ruxton. 2014. Circular Statistics in R,
#' 192 pp., Oxford University Press.
#'
#' Whitfield, P. H. 2018. Clustering of seasonal events: A simulation study using
#' circular methods. Communications in Statistics - Simulation and Computation 47(10): 3008-3030.
#'
#' Burn, D. H., and P. H. Whitfield. 2023. Changes in the timing of flood events resulting
#' from climate change. Journal of Hydrology.
#' @export
#' @author Paul Whitfield
#' @examples
#' # base plot
#' ch_polar_plot_peaks()
#'
#' #base plot with area shading
#' ch_polar_plot_peaks(shading = TRUE)
#'
#' # plot of annual maximum series
#' data(CAN05AA008)
#' am <- ch_sh_get_amax(CAN05AA008)
#' ch_polar_plot_peaks(days = am$doy, title = "05AA008")
#'
#' #remove partial years
#' am <- am[am$days >= 365,]
#' ch_polar_plot_peaks(days = am$doy, title = "05AA008")
#'
#' #plot the centroid
#' m_r <- ch_circ_mean_reg(am)
#' ch_polar_plot_peaks(direction = m_r$mean, regularity = m_r$regularity, title = "05AA008")
#'
#' # plot peaks and centroid
#' ch_polar_plot_peaks(days = am$doy, direction = m_r$mean, regularity = m_r$regularity,
#' title = "05AA008")
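#'
#' # synthetic sketch of in_detail (illustrative values only, not real data):
#' # one row per centroid giving symbol shape, colour, background, and size
#' in_d <- data.frame(pch = c(21, 24), col = c("black", "black"),
#'                    bg = c("red", "blue"), cex = c(1.0, 1.2))
#' ch_polar_plot_peaks(direction = c(120, 240), regularity = c(0.8, 0.6),
#'                     in_detail = in_d, title = "two hypothetical centroids")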
ch_polar_plot_peaks <- function(title = NA, direction = NULL, regularity = NULL,
days = NULL,
shading = FALSE,
shade = 35,
pt_col = "darkblue",
in_pch = NULL,
in_cex = NULL,
in_col = NULL,
in_detail = NULL,
labels=NULL,
label_pos = NULL,
out_pch= 16,
out_cex = 0.8, ...){
oldw <- getOption("warn")
options(warn = -1)
  opar <- par(no.readonly = TRUE)
  on.exit(par(opar))
  on.exit(options(warn = oldw), add = TRUE)
par(cex.lab = 0.7)
par(col.lab = "gray")
if (is.null(label_pos)) label_pos <- rep(1,length(direction))
if (is.null(in_col)) in_col <- rep("red", length(direction))
if (is.null(in_pch)) in_pch <- rep(19, length(direction))
if (is.null(in_cex)) in_cex <- rep(0.8, length(direction))
##### set some details
dlabels <- c("Jan","Feb","Mar","Apr", "May", "Jun", "Jul","Aug","Sep",
"Oct","Nov","Dec", "")
dbreaks <- c(1,32,60,91,121,152,182,213,244,274,305, 335, 366)
dbreaks <-dbreaks/365*2*pi
#######
if(!shading){
out_a <-seq(dbreaks[4], dbreaks[8], length.out = 20)
nival_r <-c(out_a,rev(out_a))
nival_l <- c(rep(1.0,20),rep(0.65,20))
twhite <- ch_col_transparent("white", 50 )
plotrix::radial.plot(nival_l, nival_r, rp.type = "p",
main = title,
radial.lim = c(0,1),
start = 3*pi/2,
clockwise = TRUE,
labels = dlabels,
label.pos = dbreaks,
label.prop = 1.15,
line.col = twhite,
poly.col = twhite)
}
#######
if (shading) {
tblue <- ch_col_transparent("blue", shade)
tgreen <- ch_col_transparent("green", shade)
out_a <- seq(dbreaks[4], dbreaks[8], length.out = 20)
nival_r <- c(out_a,rev(out_a))
nival_l <- c(rep(1.0,20),rep(0.65,20))
out_b <- seq(dbreaks[1], dbreaks[2], length.out = 40)
pluvial_r1 <- c(out_b,rev(out_b))
pluvial_l1 <- c(rep(1.0,40),rep(0.45,40))
out_c <- seq(dbreaks[10], dbreaks[13], length.out = 40)
pluvial_r2 <- c(out_c,rev(out_c))
pluvial_l2 <- c(rep(1.0,40),rep(0.45,40))
plotrix::radial.plot(nival_l, nival_r, rp.type = "p",
main = title,
radial.lim = c(0,1),
start = 3*pi/2,
clockwise = TRUE,
labels = dlabels,
label.pos = dbreaks,
line.col = tblue,
poly.col = tblue)
text(0.1,0.86,"Nival", col="gray60",pos = 4)
text(0.1,0.12,"Mixed", col="gray60",pos = 4)
plotrix::radial.plot(pluvial_l1, pluvial_r1, rp.type = "p",
radial.lim = c(0,1),
start = 3*pi/2,
clockwise = TRUE,
line.col = tgreen,
poly.col = tgreen,
add = TRUE)
plotrix::radial.plot(pluvial_l2, pluvial_r2, rp.type = "p",
radial.lim = c(0,1),
start = 3 * pi / 2,
clockwise = TRUE,
line.col = tgreen,
poly.col = tgreen,
show.grid = TRUE,
add = TRUE)
text(0.1, -0.84,"Pluvial", col = "gray60",pos = 4)
}
# basic regularity plots with single symbol type and colour
if (!is.null(direction) && !is.null(regularity) && is.null(in_detail)){
direction <- direction / 365 * 2 * pi
ptc <- rep("black",length(direction))
    if (any(in_pch <= 20)) ptc <- in_col
plotrix::radial.plot(regularity,direction,
rp.type="s",
point.symbols = 22,
point.col = ptc,
bg=in_col,
start=3*pi/2,
labels=dlabels,
label.pos=dbreaks,
clockwise=TRUE,
cex= in_cex,
add=TRUE)
}
# fancy regularity plots with single symbol type and colour
if (!is.null(direction) && !is.null(regularity) && !is.null(in_detail)){
direction <- direction / 365 * 2 * pi
ptc <- rep("black",length(direction))
    if (any(in_pch <= 20)) ptc <- in_col
plotrix::radial.plot(regularity,direction,
rp.type="s",
point.symbols = as.numeric(in_detail[,1]),
point.col = in_detail[,2],
bg = in_detail[,3],
cex = as.numeric(in_detail[,4]),
start=3*pi/2,
clockwise=TRUE,
radial.lim=c(0,1),
show.grid.labels=4,
show.grid = TRUE,
show.radial.grid = TRUE,
add=TRUE)
}
if(!is.null(labels)) {
plotrix::radial.plot.labels(regularity, direction, labels=labels, pos = label_pos,
start=3*pi/2, cex=0.7,
radial.lim=c(0,1),
clockwise=TRUE)
}
if(!is.null(days)){
    if (length(out_pch) == 1) out_pch <- rep(out_pch, length(days))
    if (length(out_cex) == 1) out_cex <- rep(out_cex, length(days))
if(length(pt_col) == 1) pt_col <- rep(pt_col,length(days))
# link days and point colours and sort into day order
# order days in ascending order
adays <- data.frame(days, out_cex, out_pch, pt_col)
adays <- adays[order(days),]
# convert doy to radians
days <- adays$days / 365 * 2 * pi
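    # place points just outside the unit circle, nudging repeated days outward
    # so coincident events remain visible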
id <- rep(1.005, length(days))
for( ii in 2:length(days)){
if (days[ii] == days[ii-1]) id[ii] <-id[ii-1] + 0.02
}
plotrix::radial.plot(id,days,
rp.type="s",
point.symbols = adays$out_pch,
point.col = adays$pt_col,
cex = adays$out_cex,
start = 3*pi/2,
labels = dlabels,
label.pos = dbreaks,
clockwise = TRUE,
radial.lim = c(0,1),
show.grid.labels = 4,
show.grid = TRUE,
show.radial.grid = TRUE,
add = TRUE)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_polar_plot_peaks.R
|
#' Creates a data structure to be passed to \code{ch_polar_plot}
#'
#' @description Can be used to pass data to \code{\link{ch_polar_plot}} from a type of analysis
#' other than the \code{\link{ch_binned_MannWhitney}} function, which uses flows. The two series need to be of the
#' same length, and their length is related to the step size. For example,
#' with five-day periods there will be 73 periods.
#'
#' @param station Typically a station number
#' @param plot_title Polar plot title - usually a station name
#' @param step The number of days binned
#' @param x0 Time series of length n for a single seasonal cycle
#' @param x1 Time series of length n for a single seasonal cycle
#' @param stat Time series of length n for statistical test value for each bin
#' @param prob Time series of length n of probability of test value
#' @param test_s Vector with values of -1, 0, 1 for significance, -1 negative,
#' 1 positive, 0 not significant
#' @param variable Name of variable plotted. Default is \option{discharge}
#' @param bin_method Default is \option{unstated}
#' @param test_method Default is \option{unstated}
#' @param lline1 Names of first period, default is \option{Period 1}
#' @param lline2 Names of second period, default is \option{Period 2}
#' @param pvalue Value of p used. Default is 0.05
#'
#' @return Returns a list containing:
#' \item{StationID}{ID of station}
#' \item{Station_lname}{Name of station}
#' \item{variable}{Name of variable}
#' \item{bin_width}{Smoothing time step in days}
#' \item{range1}{First range of years}
#' \item{range2}{Second range of years}
#' \item{p_used}{p_value}
#' \item{fail}{TRUE if test failed due to missing values}
#' \item{bin_method}{Method used for binning}
#' \item{test_method}{Mann-Whitney U}
#' \item{series}{A data frame containing six columns}
#'
#' The \code{series} data frame contains
#' \item{period}{period numbers i.e. 1:365/step}
#' \item{period1}{median values for each bin in period 1}
#' \item{period2}{median values for each bin in period 2}
#' \item{stat}{Mann-Whitney U-statistic for each bin between the two periods}
#' \item{prob}{probability of U for each period}
#' \item{code}{significance codes for each bin}
#'
#'
#' @references
#' Whitfield, P.H. and A.J. Cannon. 2000. Polar plotting of seasonal hydrologic
#' and climatic data. Northwest Science 74: 76-80.
#'
#' Whitfield, P.H., Cannon, A.J., 2000. Recent variations in climate and hydrology
#' in Canada. Canadian Water Resources Journal 25: 19-65.
#'
#' @author Paul Whitfield
#'
#' @export
#' @seealso \code{\link{ch_binned_MannWhitney}} \code{\link{ch_polar_plot}}
#'
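#' @examples
#' # A synthetic sketch: the inputs below are placeholders standing in for the
#' # output of any binning and two-sample test, not real station results.
#' nbin <- 73                                  # five-day bins over 365 days
#' x0 <- runif(nbin, 1, 10)                    # binned medians, period 1
#' x1 <- x0 + runif(nbin, -0.5, 0.5)           # binned medians, period 2
#' stat <- rnorm(nbin)                         # test statistic per bin
#' prob <- runif(nbin)                         # p-value per bin
#' code <- rep(0, nbin)                        # -1/0/1 significance codes
#' code[10:15] <- 1
#' code[40:45] <- -1
#' pp <- ch_polar_plot_prep("05AA008", "Synthetic example", 5, x0, x1, stat,
#'                          prob, code, lline1 = c(1970, 1979),
#'                          lline2 = c(1990, 1999))
#' ch_polar_plot(pp)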
ch_polar_plot_prep <- function(station, plot_title, step, x0, x1, stat, prob, test_s, variable = "discharge",
bin_method = "unstated", test_method = "unstated",
lline1 = "Period 1", lline2 = "Period 2", pvalue = 0.05) {
fail <- FALSE
if (length(x0) != length(x1)) return(paste("Arrays of x unequal length", length(x0), length(x1)))
if (length(x0) != length(stat)) return(paste("Arrays of x0 and stat unequal length", length(x0), length(stat)))
if (length(x0) != length(prob)) return(paste("Arrays of x0 and prob unequal length", length(x0), length(prob)))
if (length(x0) != length(test_s)) return(paste("Arrays of x0 and test_s unequal length", length(x0), length(test_s)))
period <- c(1:length(x0))
series <- data.frame(period, x0, x1, stat, prob, test_s)
names(series) <- c("period", "period1", "period2", "stat", "prob", "code")
result <- list(station, plot_title, variable, step, lline1, lline2, pvalue, fail, bin_method, test_method, series)
names(result) <- c(
"StationID", "Station_lname", "variable", "bin_width", "range1", "range2",
"p_used", "fail", "bin_method", "test_method", "series"
)
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_polar_plot_prep.R
|
#' Plots a hydrograph with the data quality symbols and returns a report on qa symbols and missing data.
#'
#' @description Plots a hydrograph of a WSC daily data file read from from ECDataExplorer (ECDE).
#' The hydrograph shows individual days with data quality symbols [SYM]
#' in colour and counts cases of each and reports them in the legend. The colours and symbols
#' are those produced by ECDataExplorer.
#'
#' There is an option to provide start and end dates to show
#' only part of the time period for which data exist, and the plot is annotated to indicate this.
#' A count of missing observations is also provided in the legend.
#'
#' @param DF Data frame retrieved from ECDataExplorer as returned by the function
#' \code{ch_read_ECDE_flows}.
#' @param st_date Optional start date in the form \option{yyyy-mm-dd}. Default is \code{NULL}.
#' @param end_date Optional end date in the form \option{yyyy-mm-dd}. Default is \code{NULL}.
#' @param rescale If \code{FALSE} (the default), the y-axis scaling is determined by the time
#' period. If \code{TRUE} then determined by the whole dataset.
#' @param sym_col Colours used for SYM; default is those used in ECDE ("black",
#' "green", "cyan","yellow", "red", "white"). The final "white" can be changed to highlight
#' missing data points.
#' @param cts If \code{TRUE} (the default) shows the counts of SYM in the legend. If \code{FALSE}
#' the counts are omitted as in ECDE.
#' @param metadata a dataframe of station metadata, default is \code{HYDAT_list}.
#'
#'
#' @author Paul Whitfield
#'
#' @return Produces a plot and returns a list that contains:
#' \item{Station}{station name or title used}
#' \item{start_date}{starting date}
#' \item{end_date}{ending date}
#' \item{points}{the number of data points}
#' \item{SYM_count}{summary of the SYM counts}
#' \item{missing_observations}{number of missing observations}
#'
#'
#' @export
#'
#' @importFrom graphics text
#'
#' @examples
#' m_test <- ch_qa_hydrograph(CAN05AA008)
#' # using a date range
#' m_test <- ch_qa_hydrograph(CAN05AA008, st_date="1980-01-01", end_date="1999-12-31")
ch_qa_hydrograph <- function(DF, st_date = NULL, end_date = NULL, cts = TRUE, rescale = FALSE,
sym_col = c("black", "green", "cyan", "yellow", "red", "white"),
metadata = NULL) {
disch <- expression(paste("Mean Daily Discharge m"^{3}, "/sec"))
m_station <- ch_get_wscstation(DF[1, 1], metadata = metadata)
title <- paste(m_station$Station, " ", m_station$StationName)
sym_count <- array(0, dim = 6)
DF$dcol <- array(1, dim = length(DF[ , 1]))
ylims <- c(min(DF[ , 4]),max(DF[,4]))
if (!is.null(st_date))
{
st_date <- as.Date(st_date, "%Y-%m-%d")
end_date <- as.Date(end_date, "%Y-%m-%d")
DF <- DF[DF$Date >= st_date, ]
DF <- DF[DF$Date <= end_date, ]
}
if (!rescale) ylims <- c(min(DF[ , 4]), max(DF[ , 4]))
for (k in 1:length(DF$dcol))
{
if (!is.na(DF$SYM[k]) && DF$SYM[k] == "A") {DF$dcol[k] <- 2; sym_count[2] <- sym_count[2] + 1}
if (!is.na(DF$SYM[k]) && DF$SYM[k] == "B") {DF$dcol[k] <- 3; sym_count[3] <- sym_count[3] + 1}
if (!is.na(DF$SYM[k]) && DF$SYM[k] == "C") {DF$dcol[k] <- 4; sym_count[4] <- sym_count[4] + 1}
if (!is.na(DF$SYM[k]) && DF$SYM[k] == "D") {DF$dcol[k] <- 5; sym_count[5] <- sym_count[5] + 1}
if (!is.na(DF$SYM[k]) && DF$SYM[k] == "E") {DF$dcol[k] <- 6; sym_count[6] <- sym_count[6] + 1}
}
  # capture plotting parameters, restore on exit
  oldpar <- par(no.readonly = TRUE)
  on.exit(par(oldpar))
  par(mar = c(2.5, 4.5, 3, 1))
plot(DF$Date, DF[ , 4],
col = "black", type = "l", ylim = ylims, ylab = disch, xlab = "", main = title, lwd = 0.2, las = 1)
points(DF$Date, DF[ , 4], col = sym_col[DF$dcol], type = "p", pch = 19, cex = 0.6)
sym_count[1] <- length(DF[,4]) - sum(sym_count)
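  # missing days = full span of the record in days minus the number of observations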
missdays <- as.numeric(1 + DF[length(DF[,1]), 3] - DF[1,3] - length(DF[, 1]))
ltexta <- c("Default ","(A) - Partial ","(B) - Backwater ","(D) - Dry ","(E) - Estimate ")
ltextb <- c(paste("Default ", sym_count[1]),
paste("(A) - Partial ", sym_count[2]),
paste("(B) - Backwater ", sym_count[3]),
paste("(D) - Dry ", sym_count[5]),
paste("(E) - Estimate ", sym_count[6]),
paste("Missing", missdays))
  # legend colours must align with the plotted colours; the "C" code is not shown
  if (!cts) legend("topleft", ltexta, pch = 19, col = sym_col[c(1, 2, 3, 5, 6)], cex = 0.7, bg = "transparent")
  if (cts) legend("topleft", ltextb, pch = 19, col = sym_col[c(1, 2, 3, 5, 6, 6)], cex = 0.7, bg = "transparent")
if (!is.null(st_date)) text(DF$Date[as.integer(0.75 * length(DF$Date))], max(DF[ , 4]),
"Selected Date Range", col = "gray50", cex = 0.7)
names(sym_count) <- c("Default", "A", "B", "C", "D", "E")
result <- list(title, st_date, end_date, length(DF[,4]), sym_count, missdays)
names(result) <- c("Station", "start_date", "end_date", "points", "SYM_count","missing_observations")
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_qa_hydrograph.R
|
#' Reads AHCCD daily file
#'
#' @description This program reads an Adjusted and Homogenized Canadian Climate Data (AHCCD) of daily
#' precipitation or temperatures. The values are arranged as
#' month x day, which makes them difficult to read using standard \R functions.
#' @param daily_file Required. Name of the file to be read.
#' @return If successful, returns the values in a data frame, consisting of the date,
#' the value and the data code.
#' @author Kevin Shook
#' @seealso \code{\link{ch_read_AHCCD_monthly}}
#' @references Daily AHCCD data are available from \url{http://crd-data-donnees-rdc.ec.gc.ca/CDAS/products/EC_data/AHCCD_daily/}.
#' Any use of the data must cite
#'\cite{Mekis, E and L.A. Vincent, 2011: An overview of the second generation
#'adjusted daily precipitation dataset for trend analysis in Canada.
#'Atmosphere-Ocean, 49 (2), 163-177.}
#'
#' @examples
#' \dontrun{
#' # Don't run this example as it requires a file, and use of the dummy
#' # file will cause an error message
#'
#' stoon_daily_tmax <- ch_read_AHCCD_daily("dx40657120.txt")}
#' @importFrom stringr str_split_fixed str_detect fixed
#' @importFrom utils read.fwf
#' @export
ch_read_AHCCD_daily <- function(daily_file){
# check parameter
if (daily_file == "" | is.null(daily_file)) {
stop("File not specified")
}
if (!file.exists(daily_file)) {
stop("File not found")
}
# strip off file name from path and extension to figure out data type
base <- basename(daily_file) # remove path
split <- str_split_fixed(base, fixed('.'), 2)
filename <- split[1]
# check for variable type
if (str_detect(tolower(filename), 'dx')) {
val_type <- 'tmax'
}
else if (str_detect(tolower(filename), 'dm')) {
val_type <- 'tmean'
}
else if (str_detect(tolower(filename), 'dn')) {
val_type <- 'tmin'
}
else if (str_detect(tolower(filename), 'dt')) {
val_type <- 'precip'
}
else if (str_detect(tolower(filename), 'dr')) {
val_type <- 'rain'
}
else if (str_detect(tolower(filename), 'ds')) {
val_type <- 'snow'
}
else {
stop("Unrecognised file type")
}
# set up homes for data
value <- c(0)
code <- c(0)
date <- c(0)
# set up constants
monthdays <- c(31,28,31,30,31,30,31,31,30,31,30,31)
monthdays_leapyear <- c(31,29,31,30,31,30,31,31,30,31,30,31)
twodigitnums <- c('01','02','03','04','05','06','07','08','09','10','11','12',
'13','14','15','16','17','18','19','20','21','22','23','24',
'25','26','27','28','29','30','31')
# figure out header info
# read 1st 10 lines
con <- file(daily_file, "r", blocking = FALSE, encoding = "ISO_8859-2")
input <- readLines(con, n = 10)
close(con)
# first find number of lines containing file info
input <- tolower(input)
englishHeaderNum <- str_detect(input, fixed("updated", ignore_case = TRUE))
englishHeaderCount <- sum(englishHeaderNum)
frenchHeaderNum <- str_detect(input, fixed("jusqu", ignore_case = TRUE))
frenchHeaderCount <- sum(frenchHeaderNum)
fileHeaderLines <- englishHeaderCount + frenchHeaderCount
# now find number of lines containing column titles
englishLineNum <- str_detect(input,fixed("year", ignore_case = TRUE))
englishLineCount <- sum(englishLineNum)
frenchLineNum <- str_detect(input,fixed("annee", ignore_case = TRUE))
frenchLineCount <- sum(frenchLineNum)
columnHeaderLines <- sum(englishLineCount) + sum(frenchLineCount)
totalSkipLines <- fileHeaderLines + columnHeaderLines
  # check the width of the first field - is there a leading space
firstChar <- substr(input[totalSkipLines + 1], 1, 1)
if (firstChar == ' ')
yearWidth <- 5
else
yearWidth <- 4
# set up widths to read
  header <- c(yearWidth, 3, 1)
  header_classes <- c('numeric', 'numeric', 'character')
  # set column widths depending on data type: temperatures use 7-character fields,
  # precipitation-type values use 8-character fields, each followed by a 1-character code
  if (val_type == "tmax" | val_type == "tmin" | val_type == "tmean")
    cols <- rep.int(c(7, 1), 31)
  else
    cols <- rep.int(c(8, 1), 31)
  cols_classes <- rep.int(c('numeric', 'character'), 31)
  all <- c(header, cols)
  all_classes <- c(header_classes, cols_classes)
# read data from file without parsing
raw <- read.fwf(file = daily_file, widths = all, header = FALSE,
colClasses = all_classes, skip = totalSkipLines)
row_count <- nrow(raw)
# parse the lines into data and quality codes
data_cols <- seq(4,64,2)
code_cols <- data_cols + 1
year_num <- as.numeric(raw[,1])
month_num <- as.numeric(raw[,2])
data_values <- raw[,data_cols]
data_codes <- raw[,code_cols]
data_values[data_values <= -999] <- NA_real_
month_str <- twodigitnums[month_num]
# stack the data frames to vectors
stacked <- ch_stack_EC(data_values, data_codes)
  # replicate days (1-31 for every row, i.e. every month)
  all_days <- rep.int(twodigitnums, row_count)
  # replicate months (each month repeated for its 31 day slots)
  all_months <- rep(month_str, each = 31)
# replicate years
all_years <- rep(year_num, each = 31)
# create dates
datestrings <- paste(all_years,'-', all_months,'-', all_days, sep = '')
date <- as.Date(datestrings, format = '%Y-%m-%d')
# find bad date values
bad_date_loc <- is.na(date)
good_date_loc <- !bad_date_loc
# assemble data sets
all_data <- cbind(date, stacked)
# get good dates only
good_data <- all_data[good_date_loc,]
names(good_data) <- c('date', val_type, 'code')
return(good_data)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_read_AHCCD_daily.R
|
#' Reads AHCCD monthly file
#'
#' @description This program reads an Adjusted and Homogenized Canadian Climate Data (AHCCD) data of
#' precipitation or temperatures. The values are arranged as year x month,
#' which makes them difficult to read using standard \R functions.
#' @param monthly_file Required. Name of the file to be read.
#' @return If successful, returns the values in a dataframe, consisting of the \code{year},
#' the \code{month}, the value and the data \code{code}.
#' @author Kevin Shook
#' @seealso \code{\link{ch_read_AHCCD_daily}}
#' @references
#' Any use of the data must cite \cite{Mekis, E and L.A. Vincent, 2011: An overview of
#' the second generation adjusted daily temperature and precipitation dataset for trend analysis in Canada.
#' Atmosphere-Ocean, 49 (2), 163-177.}
#' @examples \dontrun{
#' # Don't run these examples as use of the dummy
#' # files will cause error messages
#'
#' Stoon_monthly_precip <- ch_read_AHCCD_monthly("mt4057120.txt")
#' NB_monthly_tmean <- ch_read_AHCCD_monthly("mm4045695.txt") }
#' @importFrom stringr str_split_fixed str_detect str_to_lower fixed
#' @export
ch_read_AHCCD_monthly <- function(monthly_file = NULL) {
# check parameter
if (monthly_file == "" | is.null(monthly_file)) {
stop("File not specified")
}
if (!file.exists(monthly_file)) {
stop("File not found")
}
# set up constants
twodigitnums <- c("01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12")
# strip off file name from path and extension to figure out data type
base <- basename(monthly_file) # remove path
split <- str_split_fixed(base, fixed("."), 2)
filename <- split[1]
# check for type of values
if (str_detect(tolower(filename), 'mx')) {
val_type <- 'tmax'
}
else if (str_detect(tolower(filename), 'mm')) {
val_type <- 'tmean'
}
else if (str_detect(tolower(filename), 'mn')) {
    val_type <- 'tmin'
}
else if (str_detect(str_to_lower(filename), "mt")) {
val_type <- "precip"
}
else if (str_detect(str_to_lower(filename), "mr")) {
val_type <- "rain"
}
else if (str_detect(str_to_lower(filename), "ms")) {
val_type <- "snow"
}
else {
stop("Unrecognised file type")
}
# figure out header info
# read 1st 10 lines
con <- file(monthly_file, "r", blocking = FALSE, encoding = "ISO_8859-2")
input <- readLines(con, n = 10)
close(con)
# find number of lines containing file info
# headerlines may be in English and/or French
input <- tolower(input)
englishHeaderNum <- str_detect(input, fixed("updated", ignore_case = TRUE))
englishHeaderCount <- sum(englishHeaderNum)
frenchHeaderNum <- str_detect(input, fixed("jusqu", ignore_case = TRUE))
frenchHeaderCount <- sum(frenchHeaderNum)
fileHeaderLines <- englishHeaderCount + frenchHeaderCount
# find number of lines containing column titles
englishLineNum <- str_detect(input,fixed("year", ignore_case = TRUE))
englishLineCount <- sum(englishLineNum)
frenchLineNum <- str_detect(input,fixed("hiver", ignore_case = TRUE))
frenchLineCount <- sum(frenchLineNum)
columnHeaderLines <- sum(englishLineCount) + sum(frenchLineCount)
totalSkipLines <- fileHeaderLines + columnHeaderLines
# read data from file without parsing
raw <- read.csv(file = monthly_file, header = FALSE, skip = totalSkipLines)
row_count <- nrow(raw)
# parse the lines into data and quality codes
data_cols <- seq(2, 24, 2)
code_cols <- data_cols + 1
year_num <- as.numeric(raw[, 1])
data_values <- raw[, data_cols]
data_codes <- raw[, code_cols]
data_values[data_values <= -999] <- NA_real_
# stack the data frames to vectors
stacked <- ch_stack_EC(data_values, data_codes)
# replicate months
all_months <- rep.int(twodigitnums, row_count)
# replicate years
all_years <- rep(year_num, each = 12)
  # assemble data sets; unlike the daily reader, no dates are constructed here,
  # so no date-validity filtering is required
  all_data <- cbind(all_years, all_months, stacked)
  good_data <- all_data
names(good_data) <- c("year", "month", val_type, "code")
return(good_data)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_read_AHCCD_monthly.R
|
#' Reads a file of WSC daily flows from ECDataExplorer (ECDE)
#'
#' @description Reads in a file of WSC daily flows as returned from the Windows program ECDataExplorer, converts the Date,
#' and omits the last 3 lines as these contain the data disclaimer and not data. The function
#' can read values from a url.
#'
#' @param filename Datafile retrieved from ECDataExplorer.
#'
#' @author Paul Whitfield
#'
#' @return Returns a dataframe with the last three rows removed:
#' \item{ID}{stationID}
#' \item{PARAM}{Parameter 1 for Flow 2 for Level}
#' \item{Date}{original character string converted to date format}
#' \item{Flow}{Daily mean flow m\eqn{^3}{^3}/sec}
#' \item{SYM}{Quality flag}
#'
#' @importFrom utils read.csv
#' @export
#'
#' @examples \dontrun{
#' # Not run as requires a file returned by the Windows program ECDataExplorer
#' # Using a dummy file name as an example
#' mfile <- "04JD005_Daily_Flow_ts.csv"
#' mdata <- ch_read_ECDE_flows(mfile)}
#'
#' \donttest{
#' # Not tested automatically as it is slow to read from a url
#' url1 <- "https://zenodo.org/record/7007830/files/08NL007_Daily_Flow_ts.csv"
#' values <- ch_read_ECDE_flows(url1)
#' }
#'
ch_read_ECDE_flows <- function(filename) {
# check ECDE filename
if (filename == "" | is.null(filename)) {
stop("ECDE file not specified")
}
if (!file.exists(filename)) {
# check if actually a url
result <- ch_test_url_file(filename, quiet = TRUE)
if (result != "OK")
stop("ECDE file not found")
}
  mdata <- read.csv(filename, header = FALSE, skip = 1)
  # the format check must come before the names are assigned
  if (ncol(mdata) != 5)
    message("Input data file format error")
  names(mdata) <- c("ID", "PARAM", "Date", "Flow", "SYM")
  mdata$Date <- as.Date(mdata$Date, format = "%Y/%m/%d")
  # drop the last three rows, which hold the disclaimer rather than data
  cut <- nrow(mdata) - 3
  mdata <- mdata[1:cut, ]
  return(mdata)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_read_ECDE_flows.R
|
#' Plots the regime of daily streamflows using quantiles
#'
#' @description Produces a regime hydrograph similar to that in the reference. It shows the flow quantiles for each
#' day of the year and the maximum and minimum. Parameters can be set to change colours and set the y-scale
#' to allow plots of same scale to be produced.
#'
#'
#' @param DF data frame of daily flow data
#' @param quant quantiles; default is \code{quant = c(0.95,0.9,0.75,0.5,0.25,0.1,0.05)}.
#' Can be changed but the length must be 7 and the 4th value must be 0.5 (median)
#' @param wyear set \code{wyear = 10} for a water year beginning in October, \code{wyear = 1} for the calendar year; can be any month
#' @param colour if \code{TRUE} plot is in colour, if \code{FALSE} plot is grayscale.
#' @param mx set the maximum y value; if = 1 (the default) then the maximum value of the flows is used to set
#' the y-axis value. The value of \code{mx} can be specified to produce a series of plots with the
#' same scale.
#' @param metadata a data frame of metadata, defaults to HYDAT_list.
#'
#' @return No value is returned; a standard \R graphic is created.
#' @author Paul Whitfield
#' @importFrom graphics par points polygon legend
#' @importFrom stats quantile
#' @export
#'
#' @references MacCulloch, G. and P. H. Whitfield (2012). Towards a Stream Classification System
#' for the Canadian Prairie Provinces. Canadian Water Resources Journal 37: 311-332.
#'
#' @examples
#' data(CAN05AA008)
#' ch_regime_plot(CAN05AA008, colour = TRUE, wyear = 1)
#'
ch_regime_plot <- function(DF, wyear = 1, colour = TRUE, mx = 1, metadata = NULL,
quant = c(0.95, 0.9, 0.75, 0.5, 0.25, 0.1, 0.05))
{
station <- DF[1, 1]
sname <- ch_get_wscstation(station, metadata)
title <- sname$Station_lname
############################################################################# labels
  dmf <- expression(paste("Mean Daily Discharge (m"^{3}, "/sec)"))
flow <- DF$Flow
Date <- DF$Date
doy_vals <- ch_doys(Date, water_yr = wyear)
year <- doy_vals$year
doy <- doy_vals$doy
if (wyear != 1) doy <- doy_vals$dwy
doys <- 366
doy1 <- c(1:doys)
years <- unique(year)
nyears <- max(years) - min(years) + 1
min_year <- min(years) - 1
############################################################################# arrays
q <- array(NA, dim = c(nyears, doys))
colr <- c("gray70", "gray50", "gray30", "black", "gray10")
  if (colour) colr <- c("gray", "cyan", "deepskyblue2", "red", "darkblue")
########################################################################## create table of year of daily discharge
for (k in 1:length(year)) {
q[(year[k] - min_year), doy[k]] <- flow[k]
}
qquantiles <- quant
qquantiles <- rev(qquantiles)
regime <- array(NA, dim = c(9, doys))
for (jj in 1:doys) {
regime[1, jj] <- min(q[, jj], na.rm = TRUE)
regime[9, jj] <- max(q[, jj], na.rm = TRUE)
for (j in 2:8) {
regime[j, jj] <- stats::quantile(q[, jj], probs = qquantiles[j - 1], na.rm = TRUE)
}
}
############################ need to replace Inf and -Inf with NA Infs come from all days being NA
regime[is.infinite(regime)] <- NA
  ########################### create polygons for 0.95-0.05, 0.90-0.10, 0.75-0.25
ylims <- c(0, mx)
if (mx == 1) ylims <- c(0, max(flow, na.rm = TRUE))
mdays <- c(doy1, rev(doy1))
poly1 <- c(regime[2, ], rev(regime[8, ]))
poly2 <- c(regime[3, ], rev(regime[7, ]))
poly3 <- c(regime[4, ], rev(regime[6, ]))
######################################################################### plot start
tscale <- 1.2
if (nchar(title) >= 45) tscale <- 1.0
if (nchar(title) >= 50) tscale <- 0.8
# capture plotting parameters, restore on exit
oldpar <- par(no.readonly = TRUE)
on.exit(par(oldpar))
par(las = 1)
par(mar = c(3,5,3,1))
plot(doy1, regime[9,], type = "p", xlab = "", xaxt = "n", col = colr[4],
cex = 0.5, ylab = dmf, ylim = ylims, xlim = c(1, 366),
main = title, cex.main = tscale)
ch_axis_doy(wyear)
polygon(mdays, poly1, col = colr[1], border = colr[1])
polygon(mdays, poly2, col = colr[2], border = colr[2])
polygon(mdays, poly3, col = colr[3], border = colr[3])
points(doy1, regime[1, ], type = "p", col = colr[4], cex = 0.5)
points(doy1, regime[5, ], type = "l", col = colr[5], lwd = 3)
ltext1 <- c("min / max",
paste(format( quant[7], nsmall = 2),"-", format(quant[1],nsmall = 2), sep = ""),
paste(format( quant[6], nsmall = 2),"-", format(quant[2],nsmall = 2), sep = ""),
paste(format( quant[5], nsmall = 2),"-", format(quant[3],nsmall = 2), sep = ""),
"median")
lcol1 <- c(colr[4],colr[1],colr[2], colr[3],colr[5])
legend("topleft", legend = ltext1, col = lcol1, lty = 1, lwd = 3, bty = "n")
######################################################################### plot end
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_regime_plot.R
|
#####################################################################
#' Streamflow data
#'
#' Daily river discharge for the station 01AD002 on
#' St. John River at Fort Kent, New Brunswick.
#' The data range from 1926 to 2014, for a basin area of 14700 sq. km.
#'
#' @author Martin Durocher
#' @source \url{https://wateroffice.ec.gc.ca/}
#####################################################################
"CAN01AD002"
#####################################################################
#' Annual maxima from sites in the Atlantic region of Canada
#'
#' Contains the annual maxima of 45 hydrometric stations found in the
#' region '01' of Water Survey of Canada.
#' In addition to the annual maxima, the output list includes catchment
#' descriptors (longitude, latitude, basin area, mean annual precipitation)
#' and the geographical distance between each station.
#'
#' @author Martin Durocher
#' @source \url{https://wateroffice.ec.gc.ca/}
#####################################################################
"flowAtlantic"
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_rfa_data.R
|
#' Distance in seasonal space
#'
#' @description Calculates a matrix of distances between points in the seasonal
#' space that characterizes timing and regularity.
#' It is equivalent to Euclidean distance applied to regularity (radius)
#' and timing (angle) separately.
#'
#' @author Martin Durocher
#'
#' @param x,a Coordinates in the seasonal space.
#' Can be a data.frame or vectors with radius \code{x} and angle \code{a}.
#'
#' @param form Formula and dataset providing the coordinates of the
#' seasonal space. Must be of the form \code{radius ~ angle}.
#'
#' @param w Weight to favor angle over radius.
#' By default it is 1/pi, which brings the angle into the interval [0,1].
#'
#' @param ... Other parameters.
#'
#' @seealso \link{ch_rfa_seasonstat}
#'
#' @references
#'
#' Durocher, M., Burn, D. H., & Ashkar, F. (2019). Comparison of estimation
#' methods for a nonstationary index-flood model in flood frequency
#' analysis using peaks over threshold. https://doi.org/10.31223/osf.io/rnepc
#'
#' @export
#'
#' @importFrom stats model.frame dist
#'
#' @return Returns a matrix of distances between points in the seasonal
#' space that characterizes timing and regularity.
#' @examples
#'
#' scoord <- data.frame(radius = runif(5),
#' angle = runif(5,0,2*pi))
#'
#' ch_rfa_distseason(radius ~ angle , scoord)
#'
#'
ch_rfa_distseason <- function(x, ...) UseMethod('ch_rfa_distseason', x)
#' @export
#' @rdname ch_rfa_distseason
ch_rfa_distseason.numeric <- function(x, a, w = 1/pi, ...){
## Extract the pairs or every angles
n <- length(a)
if (length(x) != n)
stop('Coordinates must be of the same length')
id <- expand.grid(1:n, 1:n)
aii <- a[id[,1]]
ajj <- a[id[,2]]
## Compute the standardized absolute differences between angles
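  ## (the separation is taken the short way around the circle, then scaled by w)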
mn <- pmin(aii,ajj)
d <- pmax(aii - mn, ajj - mn)
d <- pmin(2*pi - d, d)*w
a.mat <- matrix(d, nrow = n)
## Compute the absolute differences between radius
  r.mat <- as.matrix(dist(x, method = 'manhattan'))
  ## combine the radius and angle components into a Euclidean distance
return(sqrt(r.mat^2 + a.mat^2))
}
#' @export
ch_rfa_distseason.matrix <- function(x, w = 1/pi, ...)
ch_rfa_distseason(x[,1], x[,2], w)
#' @export
#' @rdname ch_rfa_distseason
ch_rfa_distseason.data.frame <- function(x, w = 1/pi, ...)
ch_rfa_distseason(x[,1], x[,2], w)
#' @export
#' @rdname ch_rfa_distseason
ch_rfa_distseason.formula <- function(form, x, w = 1/pi, ...){
x <- as.data.frame(x)
x <- model.frame(form, x) ## form = r ~ a
return(ch_rfa_distseason(x[,1], x[,2], w))
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_rfa_distseason.R
|
#' Extracts the annual maxima of a daily time series
#'
#' @return Returns a data frame containing the annual maxima,
#' the date of occurrence and the number of observations during each year.
#'
#' @author Martin Durocher
#'
#' @param form Formula of the form \code{value ~ date} that specifies the
#' variable from which the annual maximums are extracted and a date variable.
#'
#' @param x Data. If no formula is passed, the first column must be the
#' value and the second the date.
#'
#' @param tol Filter the years having less than \code{tol} days.
#'
#' @param nlab,ylab Names for the added columns representing respectively
#' the number of yearly observations and the year.
#' If set to \code{NULL} the given column is not added.
#'
#' @param ... Other parameters.
#'
#' @export
#'
#' @importFrom stats get_all_vars
#'
#' @examples
#'
#' out <- ch_rfa_extractamax(flow ~ date, CAN01AD002, tol = 350)
#' head(out)
#'
ch_rfa_extractamax <- function(x, ...) UseMethod('ch_rfa_extractamax',x)
#' @export
#' @rdname ch_rfa_extractamax
ch_rfa_extractamax.formula <- function(form, x, tol = 0, ...){
## reformat dataset according to formula
x <- get_all_vars(form,x)
if (ncol(x) == 2) {
## Case of one site
ans <- ch_rfa_extractamax(x, tol = tol, ...)
} else {
## case multiple sites
## split the site
xlst <- split(x[,c(1,3)], x[,2])
site.value <- sapply(split(x[,2], x[,2]), '[',1)
## extract all annual maximums
ans <- lapply(xlst, ch_rfa_extractamax, tol = tol, ...)
## merge the results in one dataset
cname <- c(colnames(ans[[1]]), colnames(x)[2])
for (ii in seq_along(site.value))
suppressWarnings(ans[[ii]] <- data.frame(ans[[ii]], site.value[ii]))
ans <- do.call('rbind', ans)
## Fix names
colnames(ans) <- cname
rownames(ans) <- NULL
nc <- length(cname)
## reorder columns
if (nc == 3) {
ans <- ans[,c(1,3,2)]
} else {
ans <- ans[,c(1,nc,2,seq(3,nc - 1))]
}
}
return(ans)
}
#' @export
#' @rdname ch_rfa_extractamax
ch_rfa_extractamax.default <-
function(x,
tol = 0,
nlab = 'n',
ylab = 'yy',
...){
## Split data by years
yy <- format(x[,2],'%Y')
xx <- data.frame(x[,1],seq(nrow(x)))
lx <- split(xx,yy)
## Identify the annual maximums and number of obs.
nx <- sapply(lx, nrow)
mx <- sapply(lx, function(z) z[which.max(z[,1]),2])
## Filter the original dataset
mx <- mx[nx >= tol]
ans <- x[mx,]
rownames(ans) <- NULL
## add n obs. and years if needed
if (!is.null(nlab))
ans[,nlab] <- nx[nx >= tol]
if (!is.null(ylab))
ans[,ylab] <- yy[mx]
return(ans)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_rfa_extractamax.R
|
#' Circular plotting by day of year
#'
#' @description Create axis for plotting circular statistics in a unitary circle.
#'
#' @author Martin Durocher
#'
#' @param rose.col,rose.lwd,rose.cex Properties of the polar axes.
#'
#' @param rose.radius Vector of the position of the circular axis.
#'
#' @param ... Other parameter passed to \link{points}.
#'
#' @seealso \link{ch_rfa_seasonstat}.
#'
#' @export
#'
#' @importFrom graphics segments
#'
#' @return Returns an empty rose plot by day of year
#'
#' @examples
#'
#' data(flowAtlantic)
#'
#' ss <- ch_rfa_seasonstat(date ~ id, flowAtlantic$ams)
#'
#' ch_rfa_julianplot()
#' points(y ~ x, ss, pch = 16, col = cut(ss[,'radius'], c(0,.5,.75,1)))
#'
ch_rfa_julianplot <- function(rose.col = "gray40", rose.lwd = 1.5,
rose.cex = 1.5, rose.radius = seq(.25,1,.25), ...){
plot(1, pch = '', ylim = c(-1,1)*1.2, xlim = c(-1,1)*1.2, axes = FALSE,
ylab = "", xlab = "")
DrawCircle(0,0, radius = rose.radius, col = rose.col, lwd = rose.lwd)
TextCircle(month.abb, radius = 1.1, col = rose.col, cex = rose.cex)
segments(0,-1,0,1, col = rose.col, lwd = rose.lwd)
segments(-1,0,1,0, col = rose.col, lwd = rose.lwd)
}
DrawCircle <- function(x = 0, y = NULL, radius = 1, res = 500, ...){
if (inherits(x, "data.frame") | inherits(x, "matrix")) {
y <- x[,2]
x <- x[,1]
} else if (inherits(x, "list")) {
y <- x$y
x <- x$x
} else if (inherits(x, "formula")) {
    xd <- model.frame(x, y)
    y <- xd[, 2]
    x <- xd[, 1]
}
if (is.null(y)) stop('Locations not correctly specified')
## Series of angles
tt <- c(seq(0,2*pi, len = res),0)
if (length(x) == length(y) & length(y) == length(radius)) {
for (ii in seq_along(radius))
lines(radius[ii]*cos(tt) + x[ii],
radius[ii]*sin(tt) + y[ii], type = 'l' ,...)
} else {
for (ii in seq_along(radius))
lines(radius[ii]*cos(tt) + x[1],
radius[ii]*sin(tt) + y[1], type = 'l' ,...)
}
}
TextCircle <- function(label, x = 0, y = 0, radius = 1, ...){
pp <- 1/(length(label))
for (ii in seq_along(label)) {
ang <- 2*pi*pp*(ii - 1)
xii <- radius*cos(ang) + x
yii <- radius*sin(ang) + y
text(xii, yii,labels = label[ii],...)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_rfa_julianplot.R
|
#' Seasonal statistics for flood peaks
#'
#' @description Return the circular or seasonal statistics of flood peaks.
#' The angle represents the mean timing of the floods and the radius its
#' regularity. For example, a radius of one represents perfect regularity.
#' Can perform the analyses on multiple sites.
#'
#' @author Martin Durocher
#'
#' @param x Data. If data.frame with two columns, they must be respectively
#' the date and a site variable.
#'
#' @param form Formula that specifies the date and site variable. Must be of the
#' form \code{date ~ site}.
#'
#' @param ... Other parameters.
#'
#' @seealso \link{ch_rfa_distseason}
#'
#' @return Returns the circular or seasonal statistics of flood peaks.
#' @references
#'
#' Burn, D.H. (1997). Catchment similarity for regional flood frequency analysis
#' using seasonality measures. Journal of Hydrology 202, 212-230.
#' https://doi.org/10.1016/S0022-1694(97)00068-1
#'
#' @export
#'
#' @importFrom stats model.frame
#'
#' @examples
#'
#' dt <- ch_rfa_extractamax(flow~date, CAN01AD002)$date
#'
#' ch_rfa_seasonstat(dt)
#'
#' ## Illustration of the analysis of multiple sites
#'
#' F0 <- function(ii) data.frame(site = ii, dt = sample(dt, replace = TRUE))
#' x <- lapply(1:10, F0)
#' x <- do.call(rbind, x)
#'
#' st <- ch_rfa_seasonstat(dt ~ site, x)
#'
#' ch_rfa_julianplot()
#' points(y ~ x, st, col = 2, pch = 16)
#'
ch_rfa_seasonstat <- function(x, ...) UseMethod('ch_rfa_seasonstat', x)
#' @export
ch_rfa_seasonstat.default <- function(x, ...){
x <- as.Date(x)
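  ## map each date to an angle in radians around the annual circle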
deg <- 2 * pi * DecimalDay(x)
## compute average
xbar <- mean(cos(deg))
ybar <- mean(sin(deg))
## compute the circular statistic
cs <- Xy2polar(xbar, ybar)
ans <- c(xbar, ybar, cs$angle, cs$radius)
names(ans) <- c('x','y','angle','radius')
return(ans)
}
#' @export
#' @rdname ch_rfa_seasonstat
ch_rfa_seasonstat.data.frame <- function(x, ...){
ans <- lapply(split(x[,1], as.character(x[,2])), ch_rfa_seasonstat)
return(do.call('rbind', ans))
}
#' @export
#' @rdname ch_rfa_seasonstat
ch_rfa_seasonstat.formula <- function(form, x, ...){
x <- model.frame(form,as.data.frame(x))
return(ch_rfa_seasonstat(x))
}
## Convert the day of the year into a decimal value
## take leap year into account
DecimalDay <- function(x){
## Extract julian day
x <- as.Date(x)
yy <- as.numeric(format(x, "%Y"))
dd <- as.numeric(format(x, "%j"))
## verify for leap years
isLeap <- is.leapyear(yy)
## Convert in decimal
(dd - .5) / (365 + isLeap)
}
## Logical is y a leap year
is.leapyear <- function(y)
((y %% 4 == 0) & (y %% 100 != 0)) | (y %% 400 == 0)
## Convert cartesian to polar coordinates
Xy2polar <- function(x,y){
## compute polar coordinates
ang <- atan(abs(y/x))
r <- sqrt(x^2 + y^2)
## correcting angle for
if (sign(x) < 0 & sign(y) >= 0) # second quadrant
ang <- pi - ang
else if (sign(x) < 0 & sign(y) < 0) # third quadrant
ang <- pi + ang
else if (sign(x) >= 0 & sign(y) < 0) # fourth quadrant
ang <- 2*pi - ang
list(radius = r, angle = ang)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_rfa_seasonstat.R
|
#' Extracts annual maximum values from ECDE dataframe.
#'
#' @description
#' Extracts annual maximum values, the date of occurrence, the day of year, and the completeness
#' from an ECDE dataframe. Uses functions from \pkg{timeDate} (\code{as.timeDate}, \code{dayOfYear}).
#'
#' @param df A dataframe of daily streamflow data from ECDE
#'
#' @return Returns a dataframe with the following variables
#' \item{Year}{year of record}
#' \item{amax}{annual maximum}
#' \item{maxdate}{date of the annual maximum}
#' \item{doy}{day of year of the annual maximum}
#' \item{days}{number of days with observations}
#'
#' @export
#'
#' @author Paul Whitfield
#' @seealso \code{\link{ch_read_ECDE_flows}} \code{\link{ch_circ_mean_reg}}
#' @examples
#' data(CAN05AA008)
#' amax <- ch_sh_get_amax(CAN05AA008)
#' str(amax)
ch_sh_get_amax <- function(df) {
data <- df$Flow
Date <- df$Date
year <- format(Date, "%Y")
Year <- as.numeric(unique(year))
maxdate <- array(NA, dim = length(Year))
doy <- array(NA, dim = length(Year))
days <- array(NA, dim = length(Year))
class(maxdate) <- "Date"
year <- as.factor(year)
amax <- as.numeric(tapply(data,year,max))
dataframe <- data.frame(df,year)
for (k in 1:length(Year)) {
ndata <- dataframe[dataframe$year == Year[k],]
days[k] <- length(ndata$Flow)
ndata <- ndata[ndata$Flow == amax[k],]
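    # column 3 of an ECDE-format dataframe holds the Date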
maxdate[k] <- ndata[1, 3]
maxdate_a <- timeDate::as.timeDate(maxdate[k])
doy[k] <- timeDate::dayOfYear(maxdate_a)
}
result <- data.frame(Year,amax, maxdate, doy, days)
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_sh_get_amax.R
|
#'@title Converts doy or dwy into a factor that is used to bin data
#'
#'@description Converts a series of a variable such as day of year into numbered bins.
#'Whenever the number of bins does not divide in 365 evenly a
#'message showing the number of bins created and the number of days
#'added to the last bin is provided.
#'
#'Simply put, \code{ch_slice} is used to convert doy into a
#'factor which is a number of bins per year. A year can be converted into any
#'number of bins; slice does it based upon a number of days. So when you send it
#'an array of doy it slices that into bins of the desired width. For example,
#'if the step is 5. They 365/5 gives 73 bins and because of leap years there might
#'be one extra day added every four years to the final bin.
#'
#' To illustrate for a bin of 5 days:
#' doy:
#' 1 2 3 4 5 6 7 8 9 10 11 12
#' Bin:
#' 1 1 1 1 1 2 2 2 2 2 3 3
#'
#' @param doy A vector of the day of calendar year for the dataset
#' @param step Width of bin in days
#'
#' @author Paul Whitfield, Kevin Shook
#' @return Returns a vector of bin numbers that is used as a factor for each day
#' in the dataset and provides a message indicating the handling of partial bins
#' @export
#'
#'
#' @seealso \code{\link{ch_binned_MannWhitney}} \code{\link{ch_flow_raster_trend}}
#'
#' @examples
#' doy <- c(1:365)
#' # first 30 days are 1, 31-60 are 2 etc
#' dice <- ch_slice(doy, 30)
#' plot(doy, dice)
ch_slice <- function(doy, step) {
limit <- floor(366 / step)
period <- floor((doy + step - 1) / step)
extra <- 366 - limit * step
period <- pmin(period, limit)
llevels <- as.character(c(1:limit))
period <- factor(period, levels = llevels)
message(paste("Bins =", limit, " The number of extra points in last bin is up to ",
extra, " per year"))
return(period)
}
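# Worked check of the binning arithmetic (a sketch, not package code): with
# step = 5, limit = floor(366/5) = 73 bins and extra = 366 - 73*5 = 1, so day
# 366 of a leap year is folded into the final bin by pmin().
# table(ch_slice(1:366, 5))
# # bins 1 to 72 each hold 5 days; bin 73 holds 6 (days 361-366)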
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_slice.R
|
#' @title Helper function for selecting points for an axis
#'
#' @description Sub-samples a vector every n places. Often there are so many
#' years that the labels on the plot overlap. \code{ch_sub_set_Years} returns the position and label
#' for the subset. The function can be used on any type of simple array.
#'
#' @param years a vector of years
#' @param n sampling interval: every nth element is selected
#' @return a list containing:
#' \item{position}{array of axis positions}
#' \item{label}{array of labels}
#'
#' @export
#' @author Paul Whitfield
#' @examples
#' myears <- c(1900:2045)
#' myears <- ch_sub_set_Years(myears, 20)
#' myears
#'
#' a <- LETTERS
#' my_alpha <- ch_sub_set_Years(a, 5)
#' my_alpha
ch_sub_set_Years <- function(years, n) {
pts <- c(1:length(years))
pts <- pts[1:(length(years) / n) * n]
years <- years[pts]
result <- list(pts, years)
names(result) <- c("position", "label")
return(result)
}
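# Worked check of the index arithmetic (a sketch, not package code): for 146
# years and n = 20, pts[1:(146/20) * 20] keeps positions 20, 40, ..., 140,
# i.e. floor(146/20) = 7 evenly spaced axis labels.
# ch_sub_set_Years(1900:2045, 20)$position  # 20 40 60 80 100 120 140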
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_sub_set_Years.R
|
#' Converts a tidyhydat daily flow data tibble to ECDE format
#'
#' @description Accessing daily flow data using \pkg{tidyhydat} is quick and efficient. However, it
#' sometimes conflicts with other functions as \pkg{tidyhydat} changes variable names and some default
#' entries. This function converts a tibble obtained from a \pkg{tidyhydat} tibble to a dataframe
#' with standard Environment and Climate Change Canada Data Explorer (ECDE) names.
#'
#'
#' @param data Tibble of daily flows retrieved using \pkg{tidyhydat} function \code{hy_daily_flows}.
#' @author Paul Whitfield
#'
#' @return A dataframe or a list of flows with formats consistent with datafiles read using \code{ch_read_ECDE_flows}:
#' \item{ID}{stationID}
#' \item{PARAM}{Parameter code: 1 for Flow, 2 for Level}
#' \item{Date}{Original character string converted to date format}
#' \item{Flow}{Daily mean flow in m\eqn{^3}{^3}/sec}
#' \item{SYM}{Quality flag}
#'
#' @export
#' @seealso \code{\link{ch_tidyhydat_ECDE_meta}}
#' @examples
#' # This example uses the built-in test database, by setting the hydat_path parameter
#' # You will want to use it with your actual HYDAT database
#' library(tidyhydat)
#' # check for existence of test database
#' test_db <- hy_test_db()
#' if (file.exists(test_db)) {
#' hydat_path = hy_set_default_db(test_db)
#' mdata <- hy_daily_flows(station_number=c("05AA008"))
#' m_data <- ch_tidyhydat_ECDE(mdata)
#'
#' mdata <- hy_daily_flows(station_number=c("05AA008", "08MF005", "05HD008"))
#' mnew <- ch_tidyhydat_ECDE(mdata)
#' str(mnew[[1]])
#' str(mnew[[2]])
#' str(mnew[[3]])
#' # note the order is in increasing alphabetical order
#' hy_set_default_db(NULL) # Reset HYDAT database
#' }
ch_tidyhydat_ECDE <- function(data) {
ndata <- data.frame(data) # untibble
ndata$Parameter[ndata$Parameter == "Flow"] <- 1 #revert parameter to internal codes
ndata$Parameter[ndata$Parameter == "Level"] <- 2
ndata$Parameter <- as.integer(ndata$Parameter)
ndata$Symbol[is.na(ndata$Symbol)] <- "" #remove the NA that replaced "" in original
result <- data.frame(ndata[,1], ndata[,3], ndata[,2], ndata[,4], ndata[,5])
names(result) <- c("ID","PARAM","Date", "Flow", "SYM") #assign the original names
nstations <- unique(result$ID)
if (length(nstations) == 1) {
return(result)
}
  if (length(nstations) != 1) {
    message(paste("Original tibble contained", length(nstations),
                  "stations. A list of dataframes is returned"))
    resulta <- split(result, result$ID)
    return(resulta)
  }
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_tidyhydat_ECDE.R
|
#' @title Creates an ECDE-like dataframe of metadata from \pkg{tidyhydat}
#'
#' @description Extracts tombstone (meta) data for stations from \pkg{tidyhydat} in a
#' format similar to that used by the Environment Canada Data Explorer (ECDE). The
#' default does not capture all the fields in ECDE, which includes the most recent status
#' of many fields such as operating schedule. Returning these values slows the function,
#' particularly when all WSC stations are selected.
#'
#' @param stations A vector of WSC station IDs, i.e. \code{c("05BB001", "05BB003", "05BB004",
#' "05BB005")}. If \code{stations = "all"} then values are returned for all stations. Note
#' that you should ensure that the \pkg{tidyhydat} database is up to date, if you
#' select \code{stations = "all"}, so that the most recent set of stations is used.
#'
#' @param all_ECDE Should all ECDE values be returned? If \code{FALSE} (the default), then
#' values of \code{Flow}, \code{Level}, \code{Sed}, \code{OperSched}, \code{Region}, \code{Datum}, and
#' \code{Operator} are omitted or will differ from the ECDE values. If \code{all_ECDE = TRUE},
#' then the function will return values identical to ECDE. Note that setting
#' \code{all_ECDE = TRUE} will result in very long execution times, as it is necessary
#' to extract many daily values for each station to determine the values of
#' \code{Flow}, \code{Level}, \code{Sed}, and \code{OperSched} to determine the
#' final values.
#'
#' @author Paul Whitfield, Kevin Shook
#'
#' @export
#'
#' @return Returns a list with three items:
#' \itemize{
#' \item {\code{meta} - a dataframe of metadata from \pkg{tidyhydat} in ECDE form (not all ECDE fields are reproduced in this summary)}
#' \item {\code{H_version} - version information, and }
#' \item {\code{th_meta} - a dataframe with all \pkg{tidyhydat} fields including:}
#' \itemize{
#' \item {Station - StationID}
#' \item {StationName - Station Name}
#' \item {HYDStatus - Active or Discontinued}
#' \item {Prov - Province}
#' \item {Latitude}
#' \item {Longitude}
#' \item {DrainageArea - km\eqn{^2}{^2}}
#' \item {Years - number of years with data}
#' \item {From - Start Year}
#' \item {To - End Year}
#' \item {Reg. - Regulated?}
#' \item {Flow - not captured (differs from ECDE), unless \code{all_ECDE = TRUE}}
#' \item {Level - not captured (differs from ECDE), unless \code{all_ECDE = TRUE}}
#' \item {Sed - not captured (differs from ECDE), unless \code{all_ECDE = TRUE}}
#' \item {OperSched - not captured (differs from ECDE), unless \code{all_ECDE = TRUE}}
#' \item {RealTime - if TRUE/Yes}
#' \item {RHBN - if TRUE/Yes is in the reference hydrologic basin network}
#' \item {Region - number of region instead of name (differs from ECDE), unless \code{all_ECDE = TRUE}}
#' \item {Datum - reference number (differs from ECDE), unless \code{all_ECDE = TRUE}}
#' \item {Operator - reference number (differs from ECDE), unless \code{all_ECDE = TRUE}}
#' }
#' }
#'
#' @importFrom tidyhydat hy_version hy_stations hy_stn_regulation hy_stn_data_range
#' hy_daily hy_reg_office_list hy_datum_list hy_agency_list hy_stn_data_coll hy_sed_daily_loads
#' @importFrom stringr str_detect
#' @importFrom dplyr left_join
#' @importFrom utils txtProgressBar setTxtProgressBar
#' @seealso \code{\link{ch_get_ECDE_metadata}} \code{\link{ch_tidyhydat_ECDE}}
#' @examples
#' # This example uses the built-in test database, by setting the hydat_path parameter
#' # You will want to use it with your actual HYDAT database
#' library(tidyhydat)
#' # check for existence of test database
#' test_db <- hy_test_db()
#' if (file.exists(test_db)) {
#' stations <- c("05AA008", "08MF005", "05HD008")
#' hy_set_default_db(test_db)
#' result <- ch_tidyhydat_ECDE_meta(stations)
#' metadata <- result[[1]]
#' version <- result[[2]]
#' hy_set_default_db(NULL) # Reset HYDAT database
#' }
#' \dontrun{
#' # This example is not run, as it will take several hours to execute and will
#' # return many warnings for stations having no data. Note that it is using the actual
#' # HYDAT database, which must have been installed previously
#' # This use of the function is intended for the package maintainers to
#' # update the HYDAT_list data frame
#' result <- ch_tidyhydat_ECDE_meta("all", TRUE)
#' HYDAT_list <- result$meta
#' }
#'
ch_tidyhydat_ECDE_meta <- function(stations, all_ECDE = FALSE){
  H_version <- hy_version()
  H_version <- data.frame(H_version)
  hy_date <- format(H_version[2], format = "%Y-%m-%d")
  message("HYDAT version: ", H_version[1], " Date: ", hy_date)
if (length(stations) == 1) {
if (stations == "all") {
data(allstations, package = "tidyhydat", verbose = FALSE, envir = environment())
allstations <- allstations
tc <- allstations
stations <- allstations$STATION_NUMBER
}
}
  # extract different parts of metadata using tidyhydat
tc <- hy_stations(station_number = stations)
tc <- data.frame(tc)
td <- hy_stn_regulation(station_number = stations)
td <- data.frame(td)
te <- hy_stn_data_range(station_number = stations)
te <- data.frame(te)
te <- te[te[,2] == "Q",]
colnmc <- c("Station", "StationName","Prov", "Region", "HydStatus",
"SedStatus", "Latitude", "Longitude", "DrainageAreaG",
"DrainageAreaE","RHBN", "RealTime", "Contributor",
"Operator", "Datum")
colnmd <- c("Station", "From", "To", "Reg.")
colnme <- c("Station", "DATA_TYPE", "SED_DATA_TYPE", "From", "To", "Years")
colmeta <- c("Station", "StationName","HydStatus","Prov", "Latitude",
"Longitude", "DrainageArea","EffectiveDrainageArea", "Years", "From", "To", "Reg.",
"Flow", "Level", "Sed", "OperSched", "RealTime", "RHBN", "Region",
"Datum", "Agency")
names(tc) <- colnmc
names(td) <- colnmd
names(te) <- colnme
if (all_ECDE) {
t1 <- merge(tc, td, by = "Station", all.x = TRUE)
t2 <- merge(t1, te, by = "Station", all.x = TRUE)
t3 <- rep.int(NA, length(t2[,1]))
# re-do merging to get original tidyhydat format
t4 <- merge(tc, td, by = "Station")
t5 <- merge(t4, te, by = "Station")
th_meta <- t5
meta <- data.frame(t2[,c(1:2,5,3,7:10,23,21:22,18)],t3,t3,t3,t3,t2[,c(12,12,4,15,14)])
names(meta) <- colmeta
    # convert code numbers to strings
    # get dataframes of codes and strings
regions <- hy_reg_office_list()
datums <- hy_datum_list()
agencies <- hy_agency_list()
# lookup values
region_names <- left_join(meta, regions, by = c("Region" = "REGIONAL_OFFICE_ID"))
datum_names <- left_join(meta, datums, by = c("Datum" = "DATUM_ID"))
QC_locations <- meta$Prov == "QC"
agency_names <- left_join(meta, agencies, by = c("Agency" = "AGENCY_ID"))
meta$Region <- region_names$REGIONAL_OFFICE_NAME_EN
meta$Datum <- datum_names$DATUM_EN
French_datums <- !is.na(datum_names$DATUM_FR)
meta$Datum[QC_locations & French_datums] <-
datum_names$DATUM_FR[QC_locations & French_datums]
meta$Agency <- agency_names$AGENCY_EN
French_agencies <- !is.na(agency_names$AGENCY_FR)
meta$Agency[QC_locations & French_agencies] <-
agency_names$AGENCY_FR[QC_locations & French_agencies]
# set missing values to ""
meta$Datum[is.na(meta$Datum)] <- ""
meta$Agency[is.na(meta$Agency)] <- ""
# create progress bar
pb <- txtProgressBar(min = 0, max = nrow(meta), style = 2)
# loop through all stations to get all ECDE variables
for (i in 1:nrow(meta)) {
if (nrow(meta) > 1)
setTxtProgressBar(pb, value = i)
start_date <- paste(meta$To[i], "-01-01", sep = "")
end_date <- paste(meta$To[i], "-12-31", sep = "")
end_year <- as.numeric(meta$To[i])
# flow and stage
daily <- try(hy_daily(meta$Station[i],
start_date = start_date,
end_date = end_date), silent = TRUE)
if (length(class(daily)) > 1) {
if (str_detect(string = daily[1,1], "Error")) {
meta$Flow[i] <- FALSE
meta$Level[i] <- FALSE
} else {
if (any(daily$Parameter == "Flow"))
meta$Flow[i] <- TRUE
else
meta$Flow[i] <- FALSE
if (any(daily$Parameter == "Level"))
meta$Level[i] <- TRUE
else
meta$Level[i] <- FALSE
}
} else {
meta$Flow[i] <- FALSE
meta$Level[i] <- FALSE
}
# sediment
sed <- try(hy_sed_daily_loads(meta$Station[i],
start_date = start_date,
end_date = end_date), silent = TRUE)
if (length(class(sed)) > 1) {
if (str_detect(string = sed[1, 1], "Error"))
meta$Sed[i] <- FALSE
else
if (any(sed$Parameter == "Load"))
meta$Sed[i] <- TRUE
else
meta$Sed[i] <- FALSE
} else {
meta$Sed[i] <- FALSE
}
# operator schedule
      oper <- try(hy_stn_data_coll(meta$Station[i]), silent = TRUE)
      if (inherits(oper, "try-error") || nrow(oper) == 0) {
        meta$OperSched[i] <- ""
      }
else {
# get last value
meta$OperSched[i] <- oper$OPERATION[nrow(oper)]
}
} # for
result <- list(meta = meta, H_version = H_version, th_meta = th_meta )
message("Result is a list that contains", "\n",
"[1] metadata from tidyhydat in ECDE form,", "\n",
"[2] H_version information, and", "\n",
"[3] th_meta - tidyhydat meta", "\n",
"All ECDE fields are reproduced in this summary")
}
else {
t1 <- merge(tc, td, by.x = "Station", by.y = "Station")
t2 <- merge(t1, te, by.x = "Station", by.y = "Station")
t3 <- rep.int(NA, length(t2[,1]))
th_meta <- t2
meta <- data.frame(t2[,c(1:2,5,3,7:10,23,21:22,18)],t3,t3,t3,t3,t2[,c(12,12,4,15,14)])
names(meta) <- colmeta
result <- list(meta = meta, H_version = H_version, th_meta = th_meta )
message("Result is a list that contains", "\n",
"[1] metadata from tidyhydat in ECDE form,", "\n",
"[2] H_version information, and", "\n",
"[3] th_meta - tidyhydat meta", "\n",
"NOT all ECDE fields are reproduced in this summary")
}
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_tidyhydat_ECDE_meta.R
|
#'@title ch_tr_sign
#'
#'@description Converts MK (or other) slopes to integers 1-2-3 (negative,
#' none, positive). These indices can be used to indicate trend direction.
#'
#'
#'@param x an array of slopes
#'@param offset the amount of shift to make values positive integers, default is 2.
#'@return Returns an array of indices 1, 2, 3.
#'
#'@author Paul Whitfield
#'@export
#'@examples
#' mkin <- c( -0.23, 0.34, 0.0, .033, -0.55)
#' mkout <- ch_tr_sign(mkin)
#' # 1 3 2 3 1
ch_tr_sign <- function(x, offset = 2){
x <- unlist(x)
x <- x/abs(x)
x <- replace(x, is.nan(x), 0)
x <- x + offset
x <- as.numeric(x)
return(x)
}
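# Worked check of the mapping (a sketch, not package code): x/abs(x) turns
# slopes into -1, NaN (for zero) or 1; the NaN is replaced by 0, and adding
# offset = 2 yields the indices 1 (negative), 2 (no trend) and 3 (positive).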
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_tr_sign.R
|
#'@title ch_tr_signif
#'
#'@description Converts p-values to integers: 1 for not significant (NS) and
#' 2 for significant, using a critical p-value that can be set (default is 0.05).
#'
#'@param x an array of p-values from a statistical test
#'@param pvalue critical value, default is 0.05
#'
#'@return Returns an array of indices 1 and 2, where 1 is NS and 2 is significant.
#'
#'@author Paul Whitfield
#'@export
#'@examples
#' pvals <- c(-0.052, 0.34, 0.012, -.033, -0.55)
#' pout <- ch_tr_signif(pvals)
#' # 1 1 2 2 1
ch_tr_signif <- function(x, pvalue = 0.05){
x <- unlist(x)
x <- replace(x, abs(x) > pvalue, 1)
x <- replace(x, abs(x) <= pvalue, 2)
x <- as.numeric(x)
return(x)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_tr_signif.R
|
#' Creates a sample file of pour points
#'
#' @description Creates a file of pour points for the \code{volcano} DEM. The
#' pour points define the outlets of sub-basins. These pour points are used
#' by examples within other functions.
#' @param pp_shp Name for shapefile to hold pour points
#'
#' @return Returns an \pkg{sf} object containing 2 pour points for the
#' \code{volcano} DEM. The pour points are also written to the specified file.
#' @export
#' @importFrom dplyr mutate
#' @importFrom sf st_as_sf st_set_crs st_write
#' @importFrom magrittr %>%
#' @author Dan Moore and Kevin Shook
#' @seealso \code{\link{ch_volcano_raster}} \code{\link{ch_wbt_pourpoints}}
#'
#' @examples
#' pourpoint_file <- tempfile("volcano_pourpoints", fileext = c(".shp"))
#' pourpoints <- ch_volcano_pourpoints(pourpoint_file)
#' plot(pourpoints)
ch_volcano_pourpoints <- function(pp_shp) {
if (missing(pp_shp)) {
stop("File for pour points must be specified")
}
outlet_sf <- data.frame(x = c(300570, 300644),
y = c(5916757, 5916557)) %>%
mutate(test_label = c("test_1", "test_2")) %>%
st_as_sf(coords = c("x", "y")) %>%
st_set_crs(32760)
if (file.exists(pp_shp))
st_write(outlet_sf, pp_shp, delete_dsn = TRUE)
else
st_write(outlet_sf, pp_shp, delete_dsn = FALSE)
return(outlet_sf)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_volcano_pourpoints.R
|
#' Create Test Raster
#'
#' @description
#' Creates a \pkg{raster} object of land surface elevations, as
#' used to test/demonstrate many functions requiring a digital elevation model
#' (DEM).
#'
#' @details
#' No arguments are required as the DEM is created from the \pkg{datasets}
#' \code{volcano} matrix of elevations.
#'
#' @export
#' @return Returns a raster object of land surface elevations.
#' @author Dan Moore and Kevin Shook
#' @importFrom raster rasterFromXYZ crs
#' @importFrom magrittr %>%
#' @examples
#' test_raster <- ch_volcano_raster()
#'
ch_volcano_raster <- function() {
vol_mat <- datasets::volcano
nr <- nrow(vol_mat)
nc <- ncol(vol_mat)
dx <- 10
xmin <- 300481
xmax <- xmin + (nc - 1)*dx
ymin <- 5916112
ymax <- ymin + (nr - 1)*dx
x <- rep(seq(xmax, xmin, -dx), each = nr)
y <- rep(seq(ymin, ymax, dx), times = nc)
vol_ras <- data.frame(x, y, z = as.numeric(vol_mat)) %>%
raster::rasterFromXYZ()
raster::crs(vol_ras) <- "+proj=utm +zone=60 +south +datum=WGS84 +units=m +no_defs"
return(vol_ras)
}
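# Extent check (a sketch, not package code): volcano is an 87 x 61 matrix, so
# with dx = 10, xmax = 300481 + (61 - 1)*10 = 301081 and
# ymax = 5916112 + (87 - 1)*10 = 5916972, a 10 m grid in UTM zone 60 south.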
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_volcano_raster.R
|
#' Delineate catchment boundaries
#'
#' @param fn_pp_snap Name of file containing snapped pour points
#' @param fn_flowdir Name of file containing flow directions.
#' @param fn_catchment_ras Raster file to contain delineated catchment.
#' @param fn_catchment_vec Vector file to contain delineated catchment.
#' @param return_vector If \code{TRUE} (the default) a vector of the catchment will be returned.
#'
#' @author Dan Moore and Kevin Shook
#' @importFrom raster raster
#' @importFrom whitebox wbt_watershed wbt_raster_to_vector_polygons
#' @importFrom sf st_crs write_sf st_read
#' @importFrom magrittr %>%
#' @return If \code{return_vector == TRUE} a vector of the catchment is returned. Otherwise
#' nothing is returned.
#' @export
#' @seealso \code{\link{ch_wbt_catchment_onestep}}
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = ".tif")
#' no_sink_raster_file <- tempfile("no_sinks", fileext = ".tif")
#'
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#'
#' # remove sinks
#' removed_sinks <- ch_wbt_removesinks(dem_raster_file, no_sink_raster_file, method = "fill")
#'
#' # get flow accumulations
#' flow_acc_file <- tempfile("flow_acc", fileext = ".tif")
#' flow_acc <- ch_wbt_flow_accumulation(no_sink_raster_file, flow_acc_file)
#'
#' # get pour points
#' pourpoint_file <- tempfile("volcano_pourpoints", fileext = ".shp")
#' pourpoints <- ch_volcano_pourpoints(pourpoint_file)
#' snapped_pourpoint_file <- tempfile("snapped_pourpoints", fileext = ".shp")
#' snapped_pourpoints <- ch_wbt_pourpoints(pourpoints, flow_acc_file, pourpoint_file,
#' snapped_pourpoint_file, snap_dist = 10)
#'
#' # get flow directions
#' flow_dir_file <- tempfile("flow_dir", fileext = ".tif")
#' flow_dir <- ch_wbt_flow_direction(no_sink_raster_file, flow_dir_file)
#' fn_catchment_ras <- tempfile("catchment", fileext = ".tif")
#' fn_catchment_vec <- tempfile("catchment", fileext = ".shp")
#' catchments <- ch_wbt_catchment(snapped_pourpoint_file, flow_dir_file,
#' fn_catchment_ras, fn_catchment_vec)
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_wbt_catchment <- function(fn_pp_snap, fn_flowdir, fn_catchment_ras,
fn_catchment_vec, return_vector = TRUE) {
ch_wbt_check_whitebox()
if (!file.exists(fn_pp_snap)) {
stop("Error: file containing snapped pour points does not exist")
}
if (!file.exists(fn_flowdir)) {
stop("Error: input flow direction file does not exist")
}
message("ch_wbt: Delineating catchment boundaries")
crs_pp <- sf::st_crs(st_read(fn_pp_snap))$epsg
crs_fd <- sf::st_crs(raster(fn_flowdir))$epsg
if (crs_pp != crs_fd) {
stop("Error: pour points and flow direction grid have different crs")
}
wbt_watershed(d8_pntr = fn_flowdir, pour_pts = fn_pp_snap,
output = fn_catchment_ras)
wbt_raster_to_vector_polygons(fn_catchment_ras, fn_catchment_vec)
catchment_vec <- st_read(fn_catchment_vec) %>% st_as_sf()
if (is.na(st_crs(catchment_vec))) {
sf::st_crs(catchment_vec) <- st_crs(raster(fn_catchment_ras))
write_sf(catchment_vec, fn_catchment_vec)
}
if (return_vector) {
return(st_read(fn_catchment_vec))
} else {
return()
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_catchment.R
|
#' Delineates a catchment in a single step
#'
#' @description Calls all of the \code{ch_wbt} and other functions required to do the sub-tasks
#' required to delineate a catchment. The names of files to be created
#' are taken from the list created by the function \code{ch_wbt_filenames}.
#'
#' @param wd Name of working directory.
#' @param in_dem File name for original DEM.
#' @param pp_sf Vector containing pour points.
#' @param sink_method Method for sink removal as used by \code{ch_wbt_removesinks}.
#' @param dist Maximum search distance for breach paths in cells. Required if \code{sink_method = "breach_leastcost"}.
#' @param check_catchment If \code{TRUE} (the default) \code{ch_checkcatchment} will be called
#' after the catchment is created.
#' @param threshold Threshold for channel initiation.
#' @param snap_dist Maximum pour point snap distance in map units.
#' @param cb_colour Colour for catchment outline. Default is "red".
#' @param pp_colour Colour for catchment pour points. Default is "red".
#' @param channel_colour Colour for channel. Default is "blue".
#' @param contour_colour Colour for contours Default is "grey".
#' @param plot_na If \code{TRUE} (the default) a north arrow is added to the plot.
#' @param plot_scale If \code{TRUE} (the default) a scale bar is added to the plot.
#' @param na_location Location for the north arrow. Default is \option{tr}, i.e. top-right.
#' @param scale_location Location for the scale bar. Default is \option{bl}, i.e. bottom-left.
#' @param ... Extra parameters for \code{ch_wbt_removesinks}.
#' @author Dan Moore and Kevin Shook
#' @seealso \code{\link{ch_wbt_filenames}}
#' @importFrom raster raster
#' @importFrom whitebox wbt_extract_streams wbt_raster_streams_to_vector wbt_snap_pour_points wbt_watershed wbt_raster_to_vector_polygons
#' @importFrom sf st_crs write_sf st_write
#' @importFrom magrittr %>%
#' @return Returns an \pkg{sf} object of the delineated catchment.
#' @export
#'
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = c(".tif"))
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#' wd <- tempdir()
#' pourpoint_file <- tempfile("volcano_pourpoints", fileext = ".shp")
#' pourpoints <- ch_volcano_pourpoints(pourpoint_file)
#' catchment <- ch_wbt_catchment_onestep(wd = wd, in_dem = dem_raster_file,
#' pp_sf = pourpoints, sink_method = "fill", threshold = 1, snap_dist = 10)
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_wbt_catchment_onestep <- function(wd, in_dem, pp_sf,
sink_method = "breach_leastcost", dist = NULL,
check_catchment = TRUE, threshold = NULL, snap_dist = NULL,
cb_colour = "red", pp_colour = "red",
channel_colour = "blue", contour_colour = "grey",
plot_na = TRUE, plot_scale = TRUE,
na_location = "tr", scale_location = "bl", ...) {
ch_wbt_check_whitebox()
  if (missing(wd)) {
    stop("Error: name of working directory not specified")
  }
  if (missing(in_dem)) {
    stop("Error: file name for original DEM not specified")
  }
  if (is.null(threshold)) {
    stop("Error: threshold for channel initiation not specified")
  }
  if (is.null(snap_dist)) {
    stop("Error: maximum pour point snap distance not specified")
  }
file_names <- ch_wbt_filenames(wd)
  # remove sinks from the DEM
dem_ns <- ch_wbt_removesinks(in_dem = in_dem, out_dem = file_names$dem_ns,
method = sink_method, dist = dist,
fn_dem_fsc = file_names$dem_fsc, ...)
if (inherits(dem_ns, "character")) return(NULL)
ch_wbt_flow_accumulation(fn_dem_ns = file_names$dem_ns, fn_flowacc = file_names$flowacc,
return_raster = FALSE)
ch_wbt_flow_direction(fn_dem_ns = file_names$dem_ns, fn_flowdir = file_names$flowdir,
return_raster = FALSE)
wbt_extract_streams(file_names$flowacc, file_names$channel_ras, threshold = threshold)
wbt_raster_streams_to_vector(file_names$channel_ras, file_names$flowdir, file_names$channel_vec)
sf::st_write(pp_sf, file_names$pp, quiet = TRUE, delete_layer = TRUE)
wbt_snap_pour_points(file_names$pp, file_names$flowacc, file_names$pp_snap, snap_dist)
wbt_watershed(file_names$flowdir, file_names$pp_snap, file_names$catchment_ras)
wbt_raster_to_vector_polygons(file_names$catchment_ras, file_names$catchment_vec)
catchment_vec <- st_read(file_names$catchment_vec) %>% st_as_sf()
if (is.na(sf::st_crs(catchment_vec))) {
sf::st_crs(catchment_vec) <- sf::st_crs(raster(file_names$catchment_ras))
sf::write_sf(catchment_vec, file_names$catchment_vec)
}
channel_vec <- st_read(file_names$channel_vec) %>% st_as_sf()
if (is.na(sf::st_crs(channel_vec))) {
sf::st_crs(channel_vec) <- sf::st_crs(catchment_vec)
    sf::write_sf(channel_vec, file_names$channel_vec)
}
if (check_catchment) {
result <- ch_checkcatchment(dem = dem_ns, catchment = catchment_vec, outlet = pp_sf,
channel_vec = channel_vec, cb_colour = cb_colour, pp_colour = pp_colour,
channel_colour = channel_colour, contour_colour = contour_colour,
plot_na = plot_na, plot_scale = plot_scale,
na_location = na_location, scale_location = scale_location)
}
return(catchment_vec)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_catchment_onestep.R
|
#' Generate stream network
#'
#' @param fn_flowacc File name for flow accumulation grid.
#' @param fn_flowdir File name for flow direction grid.
#' @param fn_channel_ras File name for raster version of channel network.
#' @param fn_channel_vec File name for vector version of channel networks.
#' @param threshold Threshold for channel initiation.
#' @param ... Other parameters for \pkg{whitebox} function \code{wbt_extract_streams}
#' @author Dan Moore
#' @importFrom raster raster
#' @importFrom whitebox wbt_extract_streams wbt_raster_streams_to_vector
#' @importFrom sf st_crs write_sf
#' @return Returns a \pkg{sf} vector object of the stream channels.
#' @export
#'
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = c(".tif"))
#' no_sink_raster_file <- tempfile("no_sinks", fileext = c(".tif"))
#'
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#'
#' # remove sinks
#' removed_sinks <- ch_wbt_removesinks(dem_raster_file, no_sink_raster_file, method = "fill")
#'
#' # get flow accumulations
#' flow_acc_file <- tempfile("flow_acc", fileext = c(".tif"))
#' flow_acc <- ch_wbt_flow_accumulation(no_sink_raster_file, flow_acc_file)
#'
#' # get flow directions
#' flow_dir_file <- tempfile("flow_dir", fileext = c(".tif"))
#' flow_dir <- ch_wbt_flow_direction(no_sink_raster_file, flow_dir_file)
#' channel_raster_file <- tempfile("channels", fileext = c(".tif"))
#' channel_vector_file <- tempfile("channels", fileext = c(".shp"))
#' channels <- ch_wbt_channels(flow_acc_file, flow_dir_file, channel_raster_file,
#' channel_vector_file, 1)
#' plot(channels)
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_wbt_channels <- function(fn_flowacc, fn_flowdir,
fn_channel_ras, fn_channel_vec,
threshold = NULL, ...) {
ch_wbt_check_whitebox()
if (!file.exists(fn_flowacc)) {
stop("Error: input flow accumulation file does not exist")
}
if (!file.exists(fn_flowdir)) {
stop("Error: input flow direction file does not exist")
}
  if (is.null(threshold)) {
    stop("Error: threshold for channel initiation not specified")
  }
message("ch_wbt: Generating stream network")
wbt_extract_streams(fn_flowacc, fn_channel_ras, threshold = threshold, ...)
wbt_raster_streams_to_vector(fn_channel_ras, fn_flowdir, fn_channel_vec)
channel_vec <- st_read(fn_channel_vec)
  if (is.na(st_crs(channel_vec))) {
sf::st_crs(channel_vec) <- st_crs(raster(fn_channel_ras))
write_sf(channel_vec, fn_channel_vec)
}
return(channel_vec)
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_channels.R
|
#' Checks for WhiteboxTools executable
#'
#' @return If \code{WhiteboxTools} cannot be found, a message explaining what to do is displayed.
#' @export
#' @keywords internal
#' @author Kevin Shook
#' @importFrom whitebox check_whitebox_binary
#'
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' ch_wbt_check_whitebox()
#' } else {
#' message("Example not run as Whitebox executable not found")
#' }
ch_wbt_check_whitebox <- function() {
wb_found <- check_whitebox_binary(silent = TRUE)
msg <- paste("The WhiteboxTools executable could not be found.\n",
"Make sure that you have run install_whitebox().\n",
"If you have already done this, try setting the path to the executable using wbt_init().", sep = "")
if (!wb_found) {
stop(msg)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_check_whitebox.R
|
#' Creates names for Whitebox function input and output files
#'
#' @description Creates a list of the files used for inputs and outputs by the
#' Whitebox functions. This function needs to be called before calling any of the other
#' Whitebox (i.e. those prefixed by \code{ch_wbt}) functions. If the file names are not specified, default names will
#' be used. All raster files are TIFF (.tif), all vector files are shapefiles (.shp).
#' @param wd Required. Name of working directory.
#' @param fn_dem File name of input DEM. Default is \option{dem.tif}.
#' @param fn_dem_fsc File name for dem after filling single-cell pits. Default is \option{dem_fsc.tif}.
#' @param fn_dem_ns File name for dem removing sinks. Default is \option{dem_ns.tif}.
#' @param fn_flowacc File name for DEM flow accumulation grid. Default is \option{flow_acc.tif}.
#' @param fn_flowdir File name for DEM flow direction grid. Default is \option{flow_dir.tif}.
#' @param fn_channel_ras File name for raster version of channel network. Default is \option{channel.tif}.
#' @param fn_channel_vec File name for vector version of channel networks. Default is \option{channel.shp}.
#' @param fn_catchment_ras File name for raster version of catchment. Default is \option{catchment.tif}.
#' @param fn_catchment_vec File name for vector version of catchment. Default is \option{catchment.shp}.
#' @param fn_pp File name for pour points (input). Vector file. Default is \option{pp.shp}.
#' @param fn_pp_snap File name for pour points after snapping to channel network. Vector file. Default is \option{pp_snap.shp}.
#'
#' @author Dan Moore
#' @return Returns a list of the input and output file names
#' @export
#'
#' @examples
#' wbt_file_names <- ch_wbt_filenames(getwd())
#'
ch_wbt_filenames <- function(
wd = NULL,
fn_dem = "dem.tif",
fn_dem_fsc = "dem_fsc.tif",
fn_dem_ns = "dem_ns.tif",
fn_flowacc = "flow_acc.tif",
fn_flowdir = "flow_dir.tif",
fn_channel_ras = "channel.tif",
fn_channel_vec = "channel.shp",
fn_catchment_ras = "catchment.tif",
fn_catchment_vec = "catchment.shp",
fn_pp = "pp.shp",
fn_pp_snap = "pp_snap.shp") {
if (is.null(wd)) {
stop("Working directory not specified")
}
if (!file.exists(wd)) {
stop("Working directory does not exist")
}
fn_list <- list(
dem = file.path(wd, fn_dem),
dem_fsc = file.path(wd, fn_dem_fsc),
dem_ns = file.path(wd, fn_dem_ns),
flowacc = file.path(wd, fn_flowacc),
flowdir = file.path(wd, fn_flowdir),
channel_ras = file.path(wd, fn_channel_ras),
catchment_ras = file.path(wd, fn_catchment_ras),
channel_vec = file.path(wd, fn_channel_vec),
catchment_vec = file.path(wd, fn_catchment_vec),
pp = file.path(wd, fn_pp),
pp_snap = file.path(wd, fn_pp_snap)
)
return(fn_list)
}
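# Minimal sketch of the returned structure (paths below are illustrative):
# fn <- ch_wbt_filenames(tempdir())
# fn$dem_ns   # file.path(tempdir(), "dem_ns.tif")
# fn$pp_snap  # file.path(tempdir(), "pp_snap.shp")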
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_filenames.R
|
#' Creates flow accumulation grid file
#'
#' @param fn_dem_ns File name of dem with sinks removed.
#' @param fn_flowacc File name for flow accumulation grid to be created.
#' @param return_raster If \code{TRUE} (the default), the flow accumulation
#' grid will be returned as a raster object, in addition to being written to
#' \option{fn_flowacc}. If \code{FALSE}, the output file will still be created
#' but a \code{NULL} value is returned.
#'
#' @author Dan Moore
#' @importFrom raster raster
#' @importFrom whitebox wbt_d8_flow_accumulation
#' @return If \code{return_raster = TRUE}, the flow accumulation
#' grid will be returned as a raster object, otherwise \code{NULL} is returned.
#' @export
#'
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = c(".tif"))
#' no_sink_raster_file <- tempfile("no_sinks", fileext = c(".tif"))
#'
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#'
#' # remove sinks
#' removed_sinks <- ch_wbt_removesinks(dem_raster_file, no_sink_raster_file, method = "fill")
#'
#' # get flow accumulations
#' flow_acc_file <- tempfile("flow_acc", fileext = c(".tif"))
#' flow_acc <- ch_wbt_flow_accumulation(no_sink_raster_file, flow_acc_file)
#' plot(flow_acc)
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_wbt_flow_accumulation <- function(fn_dem_ns, fn_flowacc, return_raster = TRUE) {
ch_wbt_check_whitebox()
if (!file.exists(fn_dem_ns)) {
stop("Error: input sink-free dem file does not exist")
}
message("ch_wbt: Creating flow accumulation grid")
wbt_d8_flow_accumulation(fn_dem_ns, fn_flowacc)
if (return_raster) {
return(raster(fn_flowacc))
} else {
return(NULL)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_flow_accumulation.R
|
#' Creates flow direction grid file
#'
#' @param fn_dem_ns File name of dem with sinks removed.
#' @param fn_flowdir File name for flow direction grid to be created.
#' @param return_raster Should a raster object be returned?
#'
#' @author Dan Moore
#' @importFrom raster raster
#' @importFrom whitebox wbt_d8_pointer
#' @return If \code{return_raster = TRUE} (the default), the flow direction
#' grid will be returned as a raster object, in addition to being written to
#' \option{fn_flowdir}. If \code{return_raster = FALSE}, the output file will still be created
#' but a \code{NULL} value is returned.
#' @export
#'
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = c(".tif"))
#' no_sink_raster_file <- tempfile("no_sinks", fileext = c(".tif"))
#'
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#'
#' # remove sinks
#' removed_sinks <- ch_wbt_removesinks(dem_raster_file, no_sink_raster_file, method = "fill")
#'
#' # get flow directions
#' flow_dir_file <- tempfile("flow_dir", fileext = c(".tif"))
#' flow_dir <- ch_wbt_flow_direction(no_sink_raster_file, flow_dir_file)
#' plot(flow_dir)
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_wbt_flow_direction <- function(fn_dem_ns, fn_flowdir, return_raster = TRUE) {
ch_wbt_check_whitebox()
if (!file.exists(fn_dem_ns)) {
stop("Error: input sink-free dem file does not exist")
}
message("ch_wbt: Creating flow direction grid")
wbt_d8_pointer(fn_dem_ns, fn_flowdir)
if (return_raster) {
return(raster(fn_flowdir))
} else {
return(NULL)
}
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_flow_direction.R
|
#' Snap pour points to channels
#'
#' @description Pour points describe the outlets of sub-basins within a DEM. To use
#' the pour points to delineate catchments, they must align with the drainage
#' network. This function snaps (forces the locations of) pour points to the
#' channels.
#' @param pp_sf \pkg{sf} object containing pour points. These must be supplied by the user. See
#' the code in \code{\link{ch_volcano_pourpoints}} for an example of creating the object.
#' @param fn_flowacc Name of file containing flow accumulations.
#' @param fn_pp File name to create un-snapped pour points.
#' @param fn_pp_snap File name for snapped pour points.
#' @param check_crs If \code{TRUE} the projections of the pour points and flow
#' accumulation files will be checked to ensure they are identical.
#' @param snap_dist Maximum snap distance in map units.
#' @param ... Additional parameters for \pkg{whitebox} function \code{wbt_snap_pour_points}.
#'
#' @author Dan Moore
#' @seealso \code{\link{ch_volcano_pourpoints}}
#' @importFrom raster raster
#' @importFrom whitebox wbt_snap_pour_points
#' @importFrom sf st_crs st_write
#' @return Returns a \pkg{sf} object of the specified pour points snapped to the
#' channel network.
#' @export
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = c(".tif"))
#' no_sink_raster_file <- tempfile("no_sinks", fileext = c(".tif"))
#'
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#'
#' # remove sinks
#' removed_sinks <- ch_wbt_removesinks(dem_raster_file, no_sink_raster_file, method = "fill")
#'
#' # get flow accumulations
#' flow_acc_file <- tempfile("flow_acc", fileext = c(".tif"))
#' flow_acc <- ch_wbt_flow_accumulation(no_sink_raster_file, flow_acc_file)
#'
#' # get pour points
#' pourpoint_file <- tempfile("volcano_pourpoints", fileext = c(".shp"))
#' pourpoints <- ch_volcano_pourpoints(pourpoint_file)
#' snapped_pourpoint_file <- tempfile("snapped_pourpoints", fileext = c(".shp"))
#' snapped_pourpoints <- ch_wbt_pourpoints(pourpoints, flow_acc_file, pourpoint_file,
#' snapped_pourpoint_file, snap_dist = 10)
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_wbt_pourpoints <- function(pp_sf = NULL, fn_flowacc, fn_pp, fn_pp_snap,
check_crs = TRUE, snap_dist = NULL, ...) {
ch_wbt_check_whitebox()
if (!file.exists(fn_flowacc)) {
stop("Error: flow accumulation file does not exist")
}
  if (is.null(pp_sf)) {
    stop("Error: value for pp_sf missing")
  }
if (is.null(snap_dist)) {
stop("Error: value for snap_dist missing")
}
if (check_crs) {
pp_crs <- st_crs(pp_sf)$epsg
fa_crs <- st_crs(raster(fn_flowacc))$epsg
if (pp_crs != fa_crs) {
stop("Error: pour points and flow accumulation grid have different crs")
}
}
message("ch_wbt: Snapping pour points to stream network")
st_write(pp_sf, fn_pp, quiet = TRUE, delete_layer = TRUE)
wbt_snap_pour_points(fn_pp, fn_flowacc, fn_pp_snap, snap_dist, ...)
return(st_read(fn_pp_snap))
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_pourpoints.R
|
#' Removes sinks from a DEM
#'
#' @description Sinks are removed from a DEM using one of several methods. The raster file
#' types supported are listed in \code{\link{Spatial_hydrology_functions}}.
#'
#' @param in_dem File path for original dem. Required.
#' @param out_dem File path for dem after removing sinks.
#' @param method Method for removing sinks. Default method is \option{breach_leastcost}. Other methods include
#' \option{breach}, \option{fill}, \option{fill_pd} (Planchon and Darboux), and \option{fill_wl} (Wang and Liu).
#' @param dist Maximum search distance for breach paths in cells. Required if \code{method = "breach_leastcost"}.
#' @param fn_dem_fsc File path for dem after removing single-cell pits.
#' @param ... Additional arguments to be passed to functions to remove sinks.
#'
#' @author Dan Moore
#' @importFrom raster raster
#' @importFrom whitebox wbt_init wbt_fill_single_cell_pits wbt_breach_depressions_least_cost
#' @importFrom whitebox wbt_fill_depressions_wang_and_liu
#' @importFrom whitebox wbt_breach_depressions wbt_fill_depressions wbt_fill_depressions_planchon_and_darboux
#' @return Returns a raster object containing the processed dem.
#' @export
#'
#' @examples
#' # Only proceed if Whitebox executable is installed
#' library(whitebox)
#' if (check_whitebox_binary()){
#' library(raster)
#' test_raster <- ch_volcano_raster()
#' dem_raster_file <- tempfile(fileext = c(".tif"))
#' no_sink_raster_file <- tempfile("no_sinks", fileext = c(".tif"))
#'
#' # write test raster to file
#' writeRaster(test_raster, dem_raster_file, format = "GTiff")
#'
#' # remove sinks
#' removed_sinks <- ch_wbt_removesinks(dem_raster_file, no_sink_raster_file, method = "fill")
#' } else {
#' message("Examples not run as Whitebox executable not found")
#' }
ch_wbt_removesinks <- function(in_dem, out_dem, method = "breach_leastcost",
dist = NULL, fn_dem_fsc = NULL, ...) {
ch_wbt_check_whitebox()
exe_location <- wbt_init()
if (!file.exists(in_dem)) {
stop("Error: input dem file does not exist")
}
  if (method == "breach_leastcost") {
    if (is.null(dist)) {
      stop("Error: no value for dist, which is required for wbt_breach_depressions_least_cost")
    }
    if (is.null(fn_dem_fsc)) {
      stop("Error: no value for fn_dem_fsc, which is required for the breach methods")
    }
    wbt_fill_single_cell_pits(in_dem, fn_dem_fsc)
    wbt_breach_depressions_least_cost(fn_dem_fsc, out_dem, dist, ...)
  } else if (method == "breach") {
    if (is.null(fn_dem_fsc)) {
      stop("Error: no value for fn_dem_fsc, which is required for the breach methods")
    }
    wbt_fill_single_cell_pits(in_dem, fn_dem_fsc)
    wbt_breach_depressions(fn_dem_fsc, out_dem, ...)
} else if (method == "fill") {
wbt_fill_depressions(in_dem, out_dem, ...)
} else if (method == "fill_pd") {
wbt_fill_depressions_planchon_and_darboux(in_dem, out_dem, ...)
} else if (method == "fill_wl") {
wbt_fill_depressions_wang_and_liu(in_dem, out_dem, ...)
} else {
stop("Error: incorrect method for sink removal specified")
}
return(raster(out_dem))
}
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wbt_removesinks.R
|
#' @title Designation of the water year
#' @description Calculates the water year for each date
#' @export
#'
#' @param dates A vector of dates including the calendar year
#' @param start_month Month in which the water year starts (defaults to October)
#'
#' @return Water year for each date; the year begins in start_month
#'
#' @examples
#' date <- seq(as.Date("1910/1/1"), as.Date("1912/1/1"), "days")
#' wtr_yr_date <- ch_wtr_yr(dates=date, start_month=10)
#' df <- data.frame(wtr_yr_date, date)
#' @source http://stackoverflow.com/questions/27626533/r-create-function-to-add-water-year-column
ch_wtr_yr <- function(dates, start_month = 10) {
  # Convert dates into POSIXlt
  dates.posix <- as.POSIXlt(dates)
  # Year offset: months at or after start_month belong to the next water year
  offset <- ifelse(dates.posix$mon >= start_month - 1, 1, 0)
  # Water year
  adj.year <- dates.posix$year + 1900 + offset
  # Return the water year
  adj.year
}
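# Quick check of the offset logic (a sketch, not package code): with the
# default start_month = 10, an October date belongs to the next water year
# while a September date does not.
# ch_wtr_yr(as.Date(c("2010-09-30", "2010-10-01")))  # 2010 2011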
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/ch_wtr_yr.R
|
#' Stacks EC values
#'
#' @description Converts data frames of Environment Canada year x month or
#' month x day data to vectors.
#' @param data_values Required. Data frame of year x month or month x day values.
#' @param data_codes Required. Data frame of year x month or month x day data codes.
#'
#' @return Returns a data frame with two columns: the data values, and the data codes.
#' @export
#' @keywords internal
#' @author Kevin Shook
#'
#' @examples \dontrun{
#' # Do not run as the function requires a data frame of EC data and
#' # the dummy variable will cause an error message
#' df <- ch_stack_EC(data_values, data_codes)
#' }
#'
ch_stack_EC <- function(data_values = NULL, data_codes = NULL) {
#check parameters
if (is.null(data_values)) {
stop("No specified data values")
}
if (is.null(data_codes)) {
stop("No specified data codes")
}
# transpose data
data_values_t <- t(data_values)
data_codes_t <- t(data_codes)
# now stack data frames to vectors
data_values <- as.vector(data_values_t, mode = 'numeric')
  data_codes <- as.vector(data_codes_t, mode = 'character')
df <- data.frame(data_values, data_codes)
return(df)
}
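# Minimal sketch with toy data (not real EC output): a 2-year x 12-month grid
# of values and matching codes stacks row-wise, so year 1, months 1:12 come
# first, followed by year 2.
# vals <- as.data.frame(matrix(1:24, nrow = 2, byrow = TRUE))
# codes <- as.data.frame(matrix("", nrow = 2, ncol = 12))
# stacked <- ch_stack_EC(vals, codes)  # 24 rows: data_values, data_codes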
#' Subsets dates by string
#'
#' @description Subsets a data frame by a specified date range, provided as
#' a string by the \code{prd} argument. This function is meant to emulate the subsetting
#' capability of the \pkg{xts} package.
#'
#' @param df data frame of time series data; includes a variable called \code{Date}
#' @param prd date range as string formatted as \option{YYYY-MM-DD/YYYY-MM-DD}
#' @return \item{df}{subsetted data frame}
#' @keywords date data subset
#' @author Robert Chlumsky
#' @export
#' @examples{
#' dd <- seq.Date(as.Date("2010-10-01"), as.Date("2013-09-30"), by = 1)
#' x <- rnorm(length(dd))
#' y <- abs(rnorm(length(dd)))*2
#' df <- data.frame("Date" = dd,x,y)
#' prd <- "2011-10-01/2012-09-30"
#' summary(ch_date_subset(df,prd))}
#'
ch_date_subset <- function(df, prd) {
ss <- unlist(strsplit(prd, split = "/"))
df <- df[df$Date >= as.Date(ss[1]) & df$Date <= as.Date(ss[2]), ]
return(df)
}
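# The prd string is simply split on "/" into its two bounds (a sketch):
# unlist(strsplit("2011-10-01/2012-09-30", split = "/"))
# # "2011-10-01" "2012-09-30"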
#' Tests url to see if it will work
#'
#' @param url Required. URL to be checked
#' @param quiet Optional. If \code{FALSE} (the default) messages are printed.
#'
#' @return Returns \option{error} if there was an error, \option{warning} if there was a
#' warning. Otherwise, returns \option{OK}. Strings are returned instead of logical values
#' to simplify checking result in calling function.
#' @seealso See original code on post in Stack Overflow
#' \href{https://stackoverflow.com/questions/12193779/how-to-write-trycatch-in-r}{
#' How to write trycatch in R}
#' @export
#' @keywords internal
#' @author Kevin Shook
#'
#' @examples \donttest{
#' # Not tested automatically as can be very slow
#' test_url <- "https://zenodo.org/record/4781469/files/sm_data.csv"
#' ch_test_url_file(test_url, quiet = TRUE)
#' }
#'
ch_test_url_file <- function(url, quiet = FALSE){
out <- tryCatch(
{
readLines(con = url, n = 1, warn = FALSE)
},
    error = function(cond) {
      if (!quiet) {
        message(paste("URL does not seem to exist:", url))
        message("Here's the original error message:")
        message(cond)
      }
      # Choose a return value in case of error
      return("error")
    },
    warning = function(cond) {
      if (!quiet) {
        message(paste("URL caused a warning:", url))
        message("Here's the original warning message:")
        message(cond)
      }
      # Choose a return value in case of warning
      return("warning")
    },
    finally = {
      if (!quiet) {
        message(paste("Processed URL:", url))
      }
    }
  )
  if (out != "error" && out != "warning")
out <- "OK"
return(out)
}
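# Typical use (a sketch; `url` and `dest` are assumed to be defined):
# proceed only when the check returns "OK".
# if (ch_test_url_file(url, quiet = TRUE) == "OK") {
#   utils::download.file(url, dest)
# }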
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/R/utils.R
|
## ----setup, include=FALSE-----------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.width=8,fig.height=4
)
library(CSHShydRology)
CAN05AA008 <- CAN05AA008
## -----------------------------------------------------------------------------
daily_flows <- CAN05AA008[, c(3, 4)]
result1 <- ch_hydrograph_plot(flows = daily_flows, winter_shading = FALSE)
result2 <- ch_hydrograph_plot(flows = daily_flows, winter_shading = TRUE)
## -----------------------------------------------------------------------------
myprd <- "2000-01-01/2000-12-31"
result3 <- ch_hydrograph_plot(
flows = daily_flows, winter_shading = TRUE,
prd = myprd
)
## -----------------------------------------------------------------------------
precip <- data.frame("Date" = daily_flows$Date, "precip" = abs(rnorm(nrow(daily_flows))) * 10)
result4 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip, winter_shading = TRUE,
prd = myprd
)
## -----------------------------------------------------------------------------
result5 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip, winter_shading = TRUE,
prd = myprd, range_mult_precip = 2, range_mult_flow = 1.8
)
## -----------------------------------------------------------------------------
ylab <- expression(paste("Discharge [m"^"3", "/s]"))
result6 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, ylabel = ylab
)
## -----------------------------------------------------------------------------
ylab_precip <- "Rainfall [mm]"
result7 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, precip_label = ylab_precip
)
## -----------------------------------------------------------------------------
result8 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, leg_pos = "right"
) # change legend to the right side
result9 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, leg_box = TRUE
) # add legend fill and outline
result10 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
  prd = myprd, zero_axis = FALSE
) # default plot outside of function with buffer around axis
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/inst/doc/hydrograph_plot.R
|
---
title: "ch_hydrograph_plot"
author: "R. Chlumsky"
contributor: "K. Shook"
date: "June 13, 2018"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{ch_hydrograph_plot}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.width=8,fig.height=4
)
library(CSHShydRology)
CAN05AA008 <- CAN05AA008
```
## hydrograph_plot
This is a general-purpose hydrograph plotting function, which is useful for a wide variety of tasks.
Currently the function only uses base R plotting, but the addition of ggplot2 graphs is planned for the
future.
The function can plot any of: observed flows, simulated flows, inflows to a sub-basin, and precipitation on the same graph.
The plots can indicate the winter period (which is fixed), and options exist to change the scales and y-axis
label.
## Plotting daily streamflows, without and with winter shading
Note that the value returned by the function, if successful, is `TRUE`.
```{r}
daily_flows <- CAN05AA008[, c(3, 4)]
result1 <- ch_hydrograph_plot(flows = daily_flows, winter_shading = FALSE)
result2 <- ch_hydrograph_plot(flows = daily_flows, winter_shading = TRUE)
```
The period of the plot can be restricted by setting the option `prd`, which is a string like
"2011-10-01/2012-09-30" indicating the beginning and end dates of the plot.
```{r}
myprd <- "2000-01-01/2000-12-31"
result3 <- ch_hydrograph_plot(
flows = daily_flows, winter_shading = TRUE,
prd = myprd
)
```
## Adding Precipitation
You can also plot precipitation data. In this example fake precipitation data are used.
```{r}
precip <- data.frame("Date" = daily_flows$Date, "precip" = abs(rnorm(nrow(daily_flows))) * 10)
result4 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip, winter_shading = TRUE,
prd = myprd
)
```
The axes of the precipitation and flow can be modified as needed to prevent overlap of the two series,
if desired. The default is to set the precipitation axis range to 1.5 times the maximum value; however,
the range can be multiplied by any positive value to prevent overlap.
```{r}
result5 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip, winter_shading = TRUE,
prd = myprd, range_mult_precip = 2, range_mult_flow = 1.8
)
```
## Changing Axis Labels
The default y-axis label can be overridden. Note that you can use a Unicode character or an
expression to get a superscripted 3.
```{r}
ylab <- expression(paste("Discharge [m"^"3", "/s]"))
result6 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, ylabel = ylab
)
```
The precipitation label can also be adjusted.
```{r}
ylab_precip <- "Rainfall [mm]"
result7 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, precip_label = ylab_precip
)
```
## Other format options
Many other formatting options exist, such as:
* changing the legend position
* adding a legend outline and fill to the text box
* forcing the y axis to start at exactly zero
For example:
```{r}
result8 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, leg_pos = "right"
) # change legend to the right side
result9 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, leg_box = TRUE
) # add legend fill and outline
result10 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, zero_axis = F
) # default plot outside of function with buffer around axis
```
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/inst/doc/hydrograph_plot.Rmd
|
---
title: "ch_hydrograph_plot"
author: "R. Chlumsky"
contributor: "K. Shook"
date: "June 13, 2018"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{ch_hydrograph_plot}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.width=8,fig.height=4
)
library(CSHShydRology)
CAN05AA008 <- CAN05AA008
```
## hydrograph_plot
This is a general-purpose hydrograph plotting function, which is useful for a wide variety of tasks.
Currently the function only uses base R plotting, but the addition of ggplot2 graphs is planned for the
future.
The function can plot any of: observed flows, simulated flows, inflows to a sub-basin, and precipitation on the same graph.
The plots can indicate the winter period (which is fixed), and options exist to change the scales and y-axis
label.
## Plotting daily streamflows, without and with winter shading
Note that the value returned by the function, if successful, is `TRUE`.
```{r}
daily_flows <- CAN05AA008[, c(3, 4)]
result1 <- ch_hydrograph_plot(flows = daily_flows, winter_shading = FALSE)
result2 <- ch_hydrograph_plot(flows = daily_flows, winter_shading = TRUE)
```
The period of the plot can be restricted by setting the option `prd`, which is a string like
"2011-10-01/2012-09-30" indicating the beginning and end dates of the plot.
```{r}
myprd <- "2000-01-01/2000-12-31"
result3 <- ch_hydrograph_plot(
flows = daily_flows, winter_shading = TRUE,
prd = myprd
)
```
## Adding Precipitation
You can also plot precipitation data. In this example fake precipitation data are used.
```{r}
precip <- data.frame("Date" = daily_flows$Date, "precip" = abs(rnorm(nrow(daily_flows))) * 10)
result4 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip, winter_shading = TRUE,
prd = myprd
)
```
The axes of the precipitation and flow can be modified as needed to prevent overlap of the two series,
if desired. The default is to set the precipitation axis range to 1.5 times the maximum value; however,
the range can be multiplied by any positive value to prevent overlap.
```{r}
result5 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip, winter_shading = TRUE,
prd = myprd, range_mult_precip = 2, range_mult_flow = 1.8
)
```
## Changing Axis Labels
The default y-axis label can be overridden. Note that you can use a Unicode character or an
expression to get a superscripted 3.
```{r}
ylab <- expression(paste("Discharge [m"^"3", "/s]"))
result6 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, ylabel = ylab
)
```
The precipitation label can also be adjusted.
```{r}
ylab_precip <- "Rainfall [mm]"
result7 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, precip_label = ylab_precip
)
```
## Other format options
Many other formatting options exist, such as:
* changing the legend position
* adding a legend outline and fill to the text box
* forcing the y axis to start at exactly zero
For example:
```{r}
result8 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, leg_pos = "right"
) # change legend to the right side
result9 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, leg_box = TRUE
) # add legend fill and outline
result10 <- ch_hydrograph_plot(
flows = daily_flows, precip = precip,
prd = myprd, zero_axis = F
) # default plot outside of function with buffer around axis
```
|
/scratch/gouwar.j/cran-all/cranData/CSHShydRology/vignettes/hydrograph_plot.Rmd
|
#'Transform ensemble forecast into probabilities
#'
#'The Cumulative Distribution Function of a forecast is used to obtain the
#'probabilities of each value in the ensemble. If multiple initializations
#'(start dates) are provided, the function will create the Cumulative
#'Distribution Function excluding the corresponding initialization.
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'. More than one dimension name
#' matching the dimensions provided in the object \code{data$data} can be
#' specified. This dimension is required to subset the data in a requested
#' period.
#'@param memb_dim A character string indicating the name of the dimension in
#' which the ensemble members are stored.
#'@param sdate_dim A character string indicating the name of the dimension in
#' which the initialization dates are stored.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the probabilities in the element \code{data}.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(216), dim = c(dataset = 1, member = 2, sdate = 3,
#' time = 9, lat = 2, lon = 2))
#'class(exp) <- 's2dv_cube'
#'exp_probs <- CST_AbsToProbs(exp)
#'exp$data <- array(rnorm(5 * 3 * 214 * 2),
#' c(member = 5, sdate = 3, time = 214, lon = 2))
#'exp$attrs$Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(exp$attrs$Dates) <- c(time = 214, sdate = 3)
#'exp_probs <- CST_AbsToProbs(data = exp, start = list(21, 4), end = list(21, 6))
#'@import multiApply
#'@importFrom stats ecdf
#'@export
CST_AbsToProbs <- function(data, start = NULL, end = NULL,
time_dim = 'time', memb_dim = 'member',
sdate_dim = 'sdate', ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
warning("Dimensions in 'data' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
probs <- AbsToProbs(data = data$data, dates = data$attrs$Dates,
start = start, end = end, time_dim = time_dim,
memb_dim = memb_dim, sdate_dim = sdate_dim,
ncores = ncores)
data$data <- probs
if (!is.null(start) && !is.null(end)) {
data$attrs$Dates <- SelectPeriodOnDates(dates = data$attrs$Dates,
start = start, end = end,
time_dim = time_dim,
ncores = ncores)
}
return(data)
}
#'Transform ensemble forecast into probabilities
#'
#'The Cumulative Distribution Function of a forecast is used to obtain the
#'probabilities of each value in the ensemble. If multiple initializations
#'(start dates) are provided, the function will create the Cumulative
#'Distribution Function excluding the corresponding initialization.
#'
#'@param data A multidimensional array with named dimensions.
#'@param dates An optional parameter containing a vector of dates or a
#' multidimensional array of dates with named dimensions matching the
#' dimensions on parameter 'data'. By default it is NULL, to select a period
#' this parameter must be provided. All common dimensions with 'data' need to
#' have the same length.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'. More than one dimension name
#' matching the dimensions provided in the parameter \code{data} can be
#' specified. This dimension is required to subset the data in a requested
#' period.
#'@param memb_dim A character string indicating the name of the dimension in
#' which the ensemble members are stored.
#'@param sdate_dim A character string indicating the name of the dimension in
#' which the initialization dates are stored.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the
#'probabilities.
#'
#'@examples
#'exp <- array(rnorm(216), dim = c(dataset = 1, member = 2, sdate = 3,
#' time = 9, lat = 2, lon = 2))
#'exp_probs <- AbsToProbs(exp)
#'data <- array(rnorm(5 * 3 * 61 * 1),
#' c(member = 5, sdate = 3, time = 61, lon = 1))
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-06-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-06-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-06-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(time = 61, sdate = 3)
#'exp_probs <- AbsToProbs(data, dates = Dates, start = list(21, 4),
#' end = list(21, 6))
#'
#'@import multiApply
#'@importFrom stats ecdf
#'@export
AbsToProbs <- function(data, dates = NULL, start = NULL, end = NULL,
time_dim = 'time', memb_dim = 'member',
sdate_dim = 'sdate', ncores = NULL) {
# data
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
data_is_array <- TRUE
if (!is.array(data)) {
data_is_array <- FALSE
dim(data) <- c(length(data), 1)
names(dim(data)) <- c(memb_dim, sdate_dim)
if (!is.null(start) && !is.null(end)) {
warning("Parameter 'data' doesn't have dimension names and all ",
"data will be used.")
start <- NULL
end <- NULL
}
}
# dates subset
if (!is.null(start) && !is.null(end)) {
if (!all(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (is.null(dates)) {
warning("Parameter 'dates' is not provided and all data will be used.")
} else {
if (is.null(dim(dates))) {
warning("Parameter 'dates' doesn't have dimension names and all ",
"data will be used.")
} else {
data <- SelectPeriodOnData(data, dates, start, end,
time_dim = time_dim, ncores = ncores)
}
}
}
probs <- Apply(list(data), target_dims = c(memb_dim, sdate_dim),
fun = .abstoprobs,
ncores = ncores)$output1
if (!data_is_array) {
dim(probs) <- NULL
} else {
pos <- match(names(dim(data)), names(dim(probs)))
probs <- aperm(probs, pos)
}
return(probs)
}
.abstoprobs <- function(data) {
if (dim(data)[2] > 1) { # Several sdates
qres <- unlist(
lapply(1:(dim(data)[1]), function(x) { # dim 1: member
lapply(1:(dim(data)[2]), function(y) { # dim 2: sdate
ecdf(as.vector(data[,-y]))(data[x, y])
})
}))
dim(qres) <- c(dim(data)[2], dim(data)[1])
} else { # One sdate
qres <- unlist(
lapply(1:(dim(data)[1]), function(x) { # dim 1: member
ecdf(as.vector(data))(data[x, 1])
}))
dim(qres) <- c(dim(data)[2], dim(data)[1])
}
return(qres)
}
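# Illustrative sketch of the leave-one-out logic above (kept as comments so
# that sourcing this file has no side effects; 'dat' is a hypothetical array):
# dat <- array(rnorm(10), dim = c(member = 2, sdate = 5))
# p <- .abstoprobs(dat)
# For member 1 and start date 2, the value is evaluated against the empirical
# CDF built from the remaining start dates:
# stats::ecdf(as.vector(dat[, -2]))(dat[1, 2])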
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/AbsToProbs.R
#'Accumulation of a variable when Exceeding (not exceeding) a Threshold
#'
#'The accumulation (sum) of a variable in the days (or time steps) that the
#'variable is exceeding (or not exceeding) a threshold during a period. The
#'threshold provided must be in the same units than the variable units, i.e. to
#'use a percentile as a scalar, the function \code{Threshold} or
#'\code{QThreshold} may be needed. Providing mean daily temperature data, the
#'following agriculture indices for heat stress can be obtained by using this
#'function:
#'\itemize{
#' \item{'GDD', Summation of daily differences between daily average
#' temperatures and 10°C between April 1st and October 31st.}
#'}
#'
#'@param data An 's2dv_cube' object as provided by the functions
#' \code{CST_Start} or \code{CST_Load} in package CSTools.
#'@param threshold If only one threshold is used, it can be an 's2dv_cube'
#' object or a multidimensional array with named dimensions. It must be in the
#' same units and with the common dimensions of the same length as parameter
#' 'data'. It can also be a vector with the same length of 'time_dim' from
#' 'data' or a scalar. If we want to use two thresholds: it can be a vector
#' of two scalars, a list of two vectors with the same length of
#' 'time_dim' from 'data' or a list of two multidimensional arrays with the
#' common dimensions of the same length as parameter 'data'. If two thresholds
#' are used, parameter 'op' must be also a vector of two elements.
#'@param op An operator '>' (by default), '<', '>=' or '<='. If two thresholds
#' are used it has to be a vector of a pair of two logical operators:
#' c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='), c('>', '<'),
#' c('>', '<='), c('>=', '<'), c('>=', '<=').
#'@param diff A logical value indicating whether to accumulate the difference
#' between data and threshold (TRUE) or not (FALSE by default). It can only be
#' TRUE if a unique threshold is used.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is
#' set to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. It can only
#' indicate one time dimension.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE)
#' or not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#'computation.
#'
#'@return An 's2dv_cube' object containing the aggregated values in the element
#'\code{data} with dimensions of the input parameter 'data' except the dimension
#'where the indicator has been computed. The 'Dates' array is updated to
#'the dates corresponding to the beginning of the aggregated time period. A new
#'element called 'time_bounds' will be added into the 'attrs' element in the
#''s2dv_cube' object. It consists of a list containing two elements, the start
#'and end dates of the aggregated period with the same dimensions of 'Dates'
#'element.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(abs(rnorm(5 * 3 * 214 * 2)*100),
#' c(memb = 5, sdate = 3, time = 214, lon = 2))
#'class(exp) <- 's2dv_cube'
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, time = 214)
#'exp$attrs$Dates <- Dates
#'AT <- CST_AccumulationExceedingThreshold(data = exp, threshold = 100,
#' start = list(21, 4),
#' end = list(21, 6))
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_AccumulationExceedingThreshold <- function(data, threshold, op = '>', diff = FALSE,
start = NULL, end = NULL, time_dim = 'time',
na.rm = FALSE, ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
warning("Dimensions in 'data' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
if (length(op) == 1) {
if (inherits(threshold, 's2dv_cube')) {
threshold <- threshold$data
}
} else if (length(op) == 2) {
if (inherits(threshold[[1]], 's2dv_cube')) {
threshold[[1]] <- threshold[[1]]$data
}
if (inherits(threshold[[2]], 's2dv_cube')) {
threshold[[2]] <- threshold[[2]]$data
}
}
Dates <- data$attrs$Dates
total <- AccumulationExceedingThreshold(data = data$data, dates = Dates,
threshold = threshold, op = op, diff = diff,
start = start, end = end, time_dim = time_dim,
na.rm = na.rm, ncores = ncores)
data$data <- total
data$dims <- dim(total)
data$coords[[time_dim]] <- NULL
if (!is.null(Dates)) {
if (!is.null(start) && !is.null(end)) {
Dates <- SelectPeriodOnDates(dates = Dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
}
if (is.null(dim(Dates))) {
warning("Element 'Dates' has NULL dimensions. They will not be ",
"subset and 'time_bounds' will be missed.")
data$attrs$Dates <- Dates
} else {
# Create time_bounds
time_bounds <- NULL
time_bounds$start <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = 1, drop = 'selected')
time_bounds$end <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = dim(Dates)[time_dim],
drop = 'selected')
# Add Dates in attrs
data$attrs$Dates <- time_bounds$start
data$attrs$time_bounds <- time_bounds
}
}
return(data)
}
#'Accumulation of a variable when Exceeding (not exceeding) a Threshold
#'
#'The accumulation (sum) of a variable in the days (or time steps) that the
#'variable is exceeding (or not exceeding) a threshold during a period. The
#'threshold provided must be in the same units than the variable units, i.e. to
#'use a percentile as a scalar, the function \code{Threshold} or
#'\code{QThreshold} may be needed. Providing mean daily temperature data, the
#'following agriculture indices for heat stress can be obtained by using this
#'function:
#'\itemize{
#' \item{'GDD', Summation of daily differences between daily average
#' temperatures and 10°C between April 1st and October 31st.}
#'}
#'
#'@param data A multidimensional array with named dimensions.
#'@param threshold If only one threshold is used: it can be a multidimensional
#' array with named dimensions. It must be in the same units and with the
#' common dimensions of the same length as parameter 'data'. It can also be a
#' vector with the same length of 'time_dim' from 'data' or a scalar. If we
#' want to use two thresholds: it can be a vector of two scalars, a list of
#' two vectors with the same length of 'time_dim' from 'data' or a list of
#' two multidimensional arrays with the common dimensions of the same length
#' as parameter 'data'. If two thresholds are used, parameter 'op' must be
#' also a vector of two elements.
#'@param op An operator '>' (by default), '<', '>=' or '<='. If two thresholds
#' are used it has to be a vector of a pair of two logical operators:
#' c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='), c('>', '<'),
#' c('>', '<='), c('>=', '<'), c('>=', '<=').
#'@param diff A logical value indicating whether to accumulate the difference
#' between data and threshold (TRUE) or not (FALSE by default). It can only be
#' TRUE if a unique threshold is used.
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is
#' set to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. It can only
#' indicate one time dimension.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE)
#' or not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#'computation.
#'
#'@return A multidimensional array with named dimensions containing the
#'aggregated values with dimensions of the input parameter 'data' except the
#'dimension where the indicator has been computed.
#'
#'@examples
#'# Assuming data is already (tasmax + tasmin)/2 - 10
#'data <- array(rnorm(5 * 3 * 214 * 2, mean = 25, sd = 3),
#' c(memb = 5, sdate = 3, time = 214, lon = 2))
#'GDD <- AccumulationExceedingThreshold(data, threshold = 0, start = list(1, 4),
#' end = list(31, 10))
#'@import multiApply
#'@export
AccumulationExceedingThreshold <- function(data, threshold, op = '>', diff = FALSE,
dates = NULL, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE,
ncores = NULL) {
# data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
if (!is.array(data)) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have named dimensions.")
}
# time_dim
if (!is.character(time_dim)) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!all(time_dim %in% names(dim(data)))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (length(time_dim) > 1) {
warning("Parameter 'time_dim' has length greater than 1 and ",
"only the first element will be used.")
time_dim <- time_dim[1]
}
# op
if (!is.character(op)) {
stop("Parameter 'op' must be a character.")
}
if (length(op) == 1) {
if (!(op %in% c('>', '<', '>=', '<=', '='))) {
stop("Parameter 'op' must be a logical operator.")
}
} else if (length(op) == 2) {
op_list <- list(c('<', '>'), c('<', '>='), c('<=', '>'), c('<=', '>='),
c('>', '<'), c('>', '<='), c('>=', '<'), c('>=', '<='))
if (!any(unlist(lapply(op_list, function(x) all(x == op))))) {
stop("Parameter 'op' is not an accepted pair of logical operators.")
}
} else {
stop("Parameter 'op' must be a logical operator with length 1 or 2.")
}
# threshold
if (is.null(unlist(threshold))) {
stop("Parameter 'threshold' cannot be NULL.")
}
if (!is.numeric(unlist(threshold))) {
stop("Parameter 'threshold' must be numeric.")
}
if (length(op) == 2) {
if (length(op) != length(threshold)) {
stop("If 'op' is a pair of logical operators parameter 'threshold' ",
"also has to be a pair of values.")
}
if (!is.numeric(threshold[[1]]) | !is.numeric(threshold[[2]])) {
stop("Parameter 'threshold' must be numeric.")
}
if (length(threshold[[1]]) != length(threshold[[2]])) {
stop("The pair of thresholds must have the same length.")
}
if (!is.array(threshold[[1]]) && length(threshold[[1]]) > 1) {
if (dim(data)[time_dim] != length(threshold[[1]])) {
stop("If parameter 'threshold' is a vector it must have the same ",
"length as data any time dimension.")
} else {
dim(threshold[[1]]) <- length(threshold[[1]])
dim(threshold[[2]]) <- length(threshold[[2]])
names(dim(threshold[[1]])) <- time_dim
names(dim(threshold[[2]])) <- time_dim
}
} else if (is.array(threshold[[1]]) && length(threshold[[1]]) > 1) {
if (is.null(names(dim(threshold[[1]])))) {
stop("If parameter 'threshold' is an array it must have named dimensions.")
}
if (!is.null(dim(threshold[[2]]))) {
if (!all(names(dim(threshold[[1]])) %in% names(dim(threshold[[2]])))) {
stop("The pair of thresholds must have the same dimension names.")
}
}
namedims <- names(dim(threshold[[1]]))
order <- match(namedims, names(dim(threshold[[2]])))
threshold[[2]] <- aperm(threshold[[2]], order)
if (!all(dim(threshold[[1]]) == dim(threshold[[2]]))) {
stop("The pair of thresholds must have the same dimensions.")
}
if (any(names(dim(threshold[[1]])) %in% names(dim(data)))) {
common_dims <- dim(threshold[[1]])[names(dim(threshold[[1]])) %in% names(dim(data))]
if (!all(common_dims == dim(data)[names(common_dims)])) {
stop("Parameter 'data' and 'threshold' must have same length of ",
"all common dimensions.")
}
}
} else if (length(threshold[[1]]) == 1) {
dim(threshold[[1]]) <- NULL
dim(threshold[[2]]) <- NULL
}
} else {
if (!is.array(threshold) && length(threshold) > 1) {
if (dim(data)[time_dim] != length(threshold)) {
stop("If parameter 'threshold' is a vector it must have the same ",
"length as data time dimension.")
} else {
dim(threshold) <- length(threshold)
names(dim(threshold)) <- time_dim
}
} else if (is.array(threshold) && length(threshold) > 1) {
if (is.null(names(dim(threshold)))) {
stop("If parameter 'threshold' is an array it must have named dimensions.")
}
if (any(names(dim(threshold)) %in% names(dim(data)))) {
common_dims <- dim(threshold)[names(dim(threshold)) %in% names(dim(data))]
if (!all(common_dims == dim(data)[names(common_dims)])) {
stop("Parameter 'data' and 'threshold' must have same length of ",
"all common dimensions.")
}
}
} else if (length(threshold) == 1) {
dim(threshold) <- NULL
}
}
# ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | ncores %% 1 != 0 | ncores <= 0 |
length(ncores) > 1) {
stop("Parameter 'ncores' must be a positive integer.")
}
}
# dates
if (!is.null(dates)) {
if (!is.null(start) && !is.null(end)) {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (length(op) == 1) {
if (time_dim %in% names(dim(threshold))) {
if (dim(threshold)[time_dim] == dim(data)[time_dim]) {
threshold <- SelectPeriodOnData(data = threshold, dates = dates,
start = start, end = end,
time_dim = time_dim,
ncores = ncores)
}
}
} else if (length(op) == 2) {
if (time_dim %in% names(dim(threshold[[1]]))) {
if (dim(threshold[[1]])[time_dim] == dim(data)[time_dim]) {
threshold[[1]] <- SelectPeriodOnData(data = threshold[[1]],
dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
threshold[[2]] <- SelectPeriodOnData(data = threshold[[2]], dates = dates,
start = start, end = end,
time_dim = time_dim,
ncores = ncores)
}
}
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
# diff
if (length(op) == 2 & diff == TRUE) {
stop("Parameter 'diff' can't be TRUE if the parameter 'threshold' is a ",
"range of values.")
} else if (diff == TRUE) {
if (length(threshold) != 1) {
stop("Parameter 'diff' can't be TRUE if the parameter 'threshold' is not a scalar.")
}
data <- Apply(list(data, threshold),
target_dims = list(time_dim, NULL),
fun = function(x, y) {x - y}, ncores = ncores)$output1
dim(data) <- dim(data)[-length(dim(data))]
threshold <- 0
}
if (length(op) > 1) {
thres1 <- threshold[[1]]
thres2 <- threshold[[2]]
if (is.null(dim(thres1))) {
total <- Apply(list(data), target_dims = time_dim,
fun = .sumexceedthreshold,
y = thres1, y2 = thres2,
op = op, na.rm = na.rm,
ncores = ncores)$output1
} else if (any(time_dim %in% names(dim(thres1)))) {
total <- Apply(list(data, thres1, thres2),
target_dims = list(time_dim,
time_dim[time_dim %in% names(dim(thres1))],
time_dim[time_dim %in% names(dim(thres2))]),
fun = .sumexceedthreshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
} else {
total <- Apply(list(data, thres1, thres2),
target_dims = list(time_dim, thres1 = NULL, thres2 = NULL),
fun = .sumexceedthreshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
}
} else {
if (is.null(dim(threshold))) {
total <- Apply(list(data), target_dims = time_dim,
fun = .sumexceedthreshold,
y = threshold,
op = op, na.rm = na.rm,
ncores = ncores)$output1
} else if (any(time_dim %in% names(dim(threshold)))) {
total <- Apply(list(data, threshold),
target_dims = list(time_dim,
time_dim[time_dim %in% names(dim(threshold))]),
fun = .sumexceedthreshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
} else {
total <- Apply(list(data, threshold),
target_dims = list(time_dim, NULL),
fun = .sumexceedthreshold, op = op, na.rm = na.rm,
ncores = ncores)$output1
}
}
return(total)
}
.sumexceedthreshold <- function(x, y, y2 = NULL, op, na.rm) {
y <- as.vector(y)
y2 <- as.vector(y2)
if (is.null(y2)) {
if (op == '>') {
res <- sum(x[x > y], na.rm = na.rm)
} else if (op == '<') {
res <- sum(x[x < y], na.rm = na.rm)
} else if (op == '<=') {
res <- sum(x[x <= y], na.rm = na.rm)
} else if (op == '>=') {
res <- sum(x[x >= y], na.rm = na.rm)
} else {
      res <- sum(x[x == y], na.rm = na.rm)
}
} else {
if (all(op == c('<', '>'))) {
res <- sum(x[x < y & x > y2], na.rm = na.rm)
} else if (all(op == c('<', '>='))) {
res <- sum(x[x < y & x >= y2], na.rm = na.rm)
} else if (all(op == c('<=', '>'))) {
res <- sum(x[x <= y & x > y2], na.rm = na.rm)
} else if (all(op == c('<=', '>='))) {
res <- sum(x[x <= y & x >= y2], na.rm = na.rm)
} else if (all(op == c('>', '<'))) {
res <- sum(x[x > y & x < y2], na.rm = na.rm)
} else if (all(op == c('>', '<='))) {
res <- sum(x[x > y & x <= y2], na.rm = na.rm)
} else if (all(op == c('>=', '<'))) {
res <- sum(x[x >= y & x < y2], na.rm = na.rm)
} else if (all(op == c('>=', '<='))) {
res <- sum(x[x >= y & x <= y2], na.rm = na.rm)
}
}
return(res)
}
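# Minimal sketch of the helper above (kept as comments so that sourcing this
# file has no side effects; the values are hypothetical):
# x <- c(3, 6, 8, 2, 7)
# .sumexceedthreshold(x, y = 5, op = '>', na.rm = FALSE)                 # 6 + 8 + 7 = 21
# With a pair of thresholds, only values inside the interval are summed:
# .sumexceedthreshold(x, y = 5, y2 = 8, op = c('>', '<'), na.rm = FALSE) # 6 + 7 = 13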
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/AccumulationExceedingThreshold.R
#'Merge a Reference To Experiments
#'
#'Some indicators are defined for specific temporal periods (e.g.: summer from
#'June 21st to September 21st). If the initialization forecast date is later
#'than the one required for the indicator (e.g.: July 1st), the user may want to
#'merge past observations, or other references, to the forecast (or hindcast) to
#'compute the indicator. If the forecast simulation doesn't cover the required
#'period because it is initialized too early (e.g.: Initialization on November
#'1st the forecast covers until the beginning of June next year), a climatology
#'(or other references) could be added at the end of the forecast lead time to
#'cover the desired period (e.g.: until the end of summer).
#'
#'This function is created to merge observations and forecasts, known as the
#'‘blending’ strategy (see references). The basis for this strategy is that the
#'predictions are progressively replaced with observational data as soon as they
#'become available (i.e., when entering the indicator definition period). This
#'key strategy aims to increase users’ confidence in the reformed predictions.
#'
#'@param data1 An 's2dv_cube' object with the element 'data' being a
#' multidimensional array with named dimensions. All dimensions must be
#' equal to 'data2' dimensions except for the ones specified with 'memb_dim'
#' and 'time_dim'. If 'start1' and 'end1' are used to subset a period, the
#' Dates must be stored in element '$attrs$Dates' of the object. Dates must
#' have same time dimensions as element 'data'.
#'@param data2 An 's2dv_cube' object with the element 'data' being a
#' multidimensional array of named dimensions matching the dimensions of
#' parameter 'data1'. All dimensions must be equal to 'data1' except for the
#' ones specified with 'memb_dim' and 'time_dim'. If 'start2' and 'end2' are
#' used to subset a period, the Dates must be stored in element '$attrs$Dates'
#' of the object. Dates must have same time dimensions as element 'data'.
#'@param start1 A list to define the initial date of the period to select from
#' 'data1' by providing a list of two elements: the initial date of the period
#' and the initial month of the period.
#'@param end1 A list to define the final date of the period to select from
#' 'data1' by providing a list of two elements: the final day of the period and
#' the final month of the period.
#'@param start2 A list to define the initial date of the period to select from
#' 'data2' by providing a list of two elements: the initial date of the period
#' and the initial month of the period.
#'@param end2 A list to define the final date of the period to select from
#' 'data2' by providing a list of two elements: the final day of the period and
#' the final month of the period.
#'@param time_dim A character string indicating the name of the temporal
#' dimension that will be used to combine the two arrays. By default, it is set
#' to 'time'. Also, it will be used to subset the data in a requested
#' period.
#'@param memb_dim A character string indicating the name of the member
#' dimension. If the data are not ensemble ones, set as NULL. The default
#' value is 'member'.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'@return An 's2dv_cube' object containing the indicator in the element
#'\code{data}. The element \code{data} will be a multidimensional array created
#'from the combination of 'data1' and 'data2'. The resulting array will contain
#'the following dimensions: the original dimensions of the input data, which are
#'common to both arrays and for the 'time_dim' dimension, the sum of the
#'corresponding dimension of 'data1' and 'data2'. If 'memb_dim' is not null,
#'regarding member dimension, two different situations can occur: (1) in the
#'case that one of the arrays does not have member dimension or is equal to 1
#'and the other array has multiple member dimension, the result will contain the
#'repeated values of the array one up to the length of member dimension of array
#'two; (2) in the case that both arrays have member dimension and is greater
#'than 1, all combinations of member dimension will be returned. The other
#'elements of the 's2dv_cube' will be updated with the combined information of
#'both datasets.
#'
#'@references Chou, C., R. Marcos-Matamoros, L. Palma Garcia, N. Pérez-Zanón,
#'M. Teixeira, S. Silva, N. Fontes, A. Graça, A. Dell'Aquila, S. Calmanti and
#'N. González-Reviriego (2023). Advanced seasonal predictions for vine
#'management based on bioclimatic indicators tailored to the wine sector.
#'Climate Services, 30, 100343, \doi{10.1016/j.cliser.2023.100343}.
#'
#'@examples
#'data_dates <- c(seq(as.Date("01-07-1993", "%d-%m-%Y", tz = 'UTC'),
#' as.Date("01-12-1993","%d-%m-%Y", tz = 'UTC'), "day"),
#' seq(as.Date("01-07-1994", "%d-%m-%Y", tz = 'UTC'),
#' as.Date("01-12-1994","%d-%m-%Y", tz = 'UTC'), "day"))
#'dim(data_dates) <- c(time = 154, sdate = 2)
#'data <- NULL
#'data$data <- array(1:(2*154*2), c(time = 154, sdate = 2, member = 2))
#'data$attrs$Dates<- data_dates
#'class(data) <- 's2dv_cube'
#'ref_dates <- seq(as.Date("01-01-1993", "%d-%m-%Y", tz = 'UTC'),
#' as.Date("01-12-1994","%d-%m-%Y", tz = 'UTC'), "day")
#'dim(ref_dates) <- c(time = 350, sdate = 2)
#'ref <- NULL
#'ref$data <- array(1001:1700, c(time = 350, sdate = 2))
#'ref$attrs$Dates <- ref_dates
#'class(ref) <- 's2dv_cube'
#'new_data <- CST_MergeRefToExp(data1 = ref, data2 = data,
#' start1 = list(21, 6), end1 = list(30, 6),
#' start2 = list(1, 7), end2 = list(21, 9))
#'
#'@export
CST_MergeRefToExp <- function(data1, data2, start1 = NULL, end1 = NULL,
start2 = NULL, end2 = NULL,
time_dim = 'time', memb_dim = 'member',
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data1, 's2dv_cube')) {
stop("Parameter 'data1' must be of the class 's2dv_cube'.")
}
if (!inherits(data2, 's2dv_cube')) {
stop("Parameter 'data2' must be of the class 's2dv_cube'.")
}
# Dates subset of data1
if (!is.null(start1) && !is.null(end1)) {
if (is.null(dim(data1$attrs$Dates))) {
warning("Dimensions in 'data1' element 'attrs$Dates' are missed and ",
"all data would be used.")
start1 <- NULL
end1 <- NULL
}
}
# Dates subset of data2
if (!is.null(start2) && !is.null(end2)) {
if (is.null(dim(data2$attrs$Dates))) {
warning("Dimensions in 'data2' element 'attrs$Dates' are missed and ",
"all data would be used.")
start2 <- NULL
end2 <- NULL
}
}
dates1 <- data1$attrs$Dates
dates2 <- data2$attrs$Dates
# data
data1$data <- MergeRefToExp(data1 = data1$data, dates1 = dates1,
start1 = start1, end1 = end1,
data2 = data2$data, dates2 = dates2,
                              start2 = start2, end2 = end2, time_dim = time_dim,
memb_dim = memb_dim, ncores = ncores)
# dims
data1$dims <- dim(data1$data)
# coords
for (i_dim in names(dim(data1$data))) {
if (length(data1$coords[[i_dim]]) != dim(data1$data)[i_dim]) {
data1$coords[[i_dim]] <- NULL
data1$coords[[i_dim]] <- 1:dim(data1$data)[i_dim]
attr(data1$coords[[i_dim]], 'indices') <- TRUE
} else if (length(data1$coords[[i_dim]]) == length(data2$coords[[i_dim]])) {
if (any(as.vector(data1$coords[[i_dim]]) != as.vector(data2$coords[[i_dim]]))) {
data1$coords[[i_dim]] <- NULL
data1$coords[[i_dim]] <- 1:dim(data1$data)[i_dim]
attr(data1$coords[[i_dim]], 'indices') <- TRUE
} else if (!identical(attributes(data1$coords[[i_dim]]),
attributes(data2$coords[[i_dim]]))) {
attributes(data1$coords[[i_dim]]) <- NULL
}
} else {
data1$coords[[i_dim]] <- NULL
data1$coords[[i_dim]] <- 1:dim(data1$data)[i_dim]
attr(data1$coords[[i_dim]], 'indices') <- TRUE
}
}
# Dates
if (!is.null(dates1)) {
if (!is.null(start1) && !is.null(end1)) {
dates1 <- SelectPeriodOnDates(dates1, start = start1, end = end1,
time_dim = time_dim)
}
}
if (!is.null(dates2)) {
if ((!is.null(start2) && !is.null(end2))) {
dates2 <- SelectPeriodOnDates(dates2, start = start2,
end = end2, time_dim = time_dim)
}
}
remove_dates_dim <- FALSE
if (!is.null(dates1) & !is.null(dates2)) {
if (is.null(dim(dates1))) {
remove_dates_dim <- TRUE
dim(dates1) <- length(dates1)
names(dim(dates1)) <- time_dim
}
if (is.null(dim(dates2))) {
remove_dates_dim <- TRUE
dim(dates2) <- length(dates2)
names(dim(dates2)) <- time_dim
}
}
res <- Apply(list(dates1, dates2), target_dims = time_dim,
'c', output_dims = time_dim, ncores = ncores)$output1
if (inherits(dates1, 'Date')) {
data1$attrs$Dates <- as.Date(res, origin = '1970-01-01')
} else {
data1$attrs$Dates <- as.POSIXct(res, origin = '1970-01-01', tz = 'UTC')
}
if (remove_dates_dim) {
dim(data1$attrs$Dates) <- NULL
}
# Variable
  data1$attrs$Variable$varName <- unique(c(data1$attrs$Variable$varName,
                                           data2$attrs$Variable$varName))
names_metadata <- names(data1$attrs$Variable$metadata)
data1$attrs$Variable$metadata <- intersect(data1$attrs$Variable$metadata,
data2$attrs$Variable$metadata)
names(data1$attrs$Variable$metadata) <- names_metadata
# source_files
data1$attrs$source_files <- unique(c(data1$attrs$source_files, data2$attrs$source_files))
# Datasets
data1$attrs$Datasets <- unique(c(data1$attrs$Datasets, data2$attrs$Datasets))
# when
data1$attrs$when <- Sys.time()
# load_parameters (TO DO: remove with CST_Start)
if (!is.null(c(data1$attrs$load_parameters, data2$attrs$load_parameters))) {
data1$attrs$load_parameters <- list(data1 = data1$attrs$load_parameters,
data2 = data2$attrs$load_parameters)
}
return(data1)
}
#'Merge a Reference To Experiments
#'
#'Some indicators are defined for specific temporal periods (e.g.: summer from
#'June 21st to September 21st). If the initialization forecast date is later
#'than the one required for the indicator (e.g.: July 1st), the user may want to
#'merge past observations, or other references, to the forecast (or hindcast) to
#'compute the indicator. If the forecast simulation doesn't cover the required
#'period because it is initialized too early (e.g.: Initialization on November
#'1st the forecast covers until the beginning of June next year), a climatology
#'(or other references) could be added at the end of the forecast lead time to
#'cover the desired period (e.g.: until the end of summer).
#'
#'This function is created to merge observations and forecasts, known as the
#'‘blending’ strategy (see references). The basis for this strategy is that the
#'predictions are progressively replaced with observational data as soon as they
#'become available (i.e., when entering the indicator definition period). This
#'key strategy aims to increase users’ confidence in the reformed predictions.
#'
#'@param data1 A multidimensional array with named dimensions. All dimensions
#' must be equal to 'data2' dimensions except for the ones specified with
#' 'memb_dim' and 'time_dim'.
#'@param dates1 A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data1'. The common dimensions must be
#' equal to 'data1' dimensions.
#'@param data2 A multidimensional array of named dimensions matching the
#' dimensions of parameter 'data1'. All dimensions must be equal to 'data1'
#' except for the ones specified with 'memb_dim' and 'time_dim'.
#'@param dates2 A multidimensional array of dates with named dimensions matching
#' the temporal dimensions on parameter 'data2'. The common dimensions must be
#' equal to 'data2' dimensions.
#'@param start1 A list to define the initial date of the period to select from
#' 'data1' by providing a list of two elements: the initial date of the period
#' and the initial month of the period. The initial date of the period must be
#' included in the 'dates1' array.
#'@param end1 A list to define the final date of the period to select from
#' 'data1' by providing a list of two elements: the final day of the period and
#' the final month of the period. The final date of the period must be
#' included in the 'dates1' array.
#'@param start2 A list to define the initial date of the period to select from
#' 'data2' by providing a list of two elements: the initial date of the period
#' and the initial month of the period. The initial date of the period must be
#' included in the 'dates2' array.
#'@param end2 A list to define the final date of the period to select from
#' 'data2' by providing a list of two elements: the final day of the period and
#' the final month of the period. The final date of the period must be
#' included in the 'dates2' array.
#'@param time_dim A character string indicating the name of the temporal
#' dimension that will be used to combine the two arrays. By default, it is set
#' to 'time'. Also, it will be used to subset the data in a requested
#' period.
#'@param memb_dim A character string indicating the name of the member
#' dimension. If the 'data1' and 'data2' have no member dimension, set it as
#' NULL. It is set as 'member' by default.
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array created from the combination of 'data1' and
#''data2'. The resulting array will contain the following dimensions: the
#'original dimensions of the input data, which are common to both arrays and for
#'the 'time_dim' dimension, the sum of the corresponding dimension of 'data1'
#'and 'data2'. If 'memb_dim' is not null, regarding member dimension, two
#'different situations can occur: (1) in the case that one of the arrays does
#'not have member dimension or is equal to 1 and the other array has multiple
#'member dimension, the result will contain the repeated values of the array one
#'up to the length of member dimension of array two; (2) in the case that both
#'arrays have member dimension and is greater than 1, all combinations of member
#'dimension will be returned.
#'
#'@references Chou, C., R. Marcos-Matamoros, L. Palma Garcia, N. Pérez-Zanón,
#'M. Teixeira, S. Silva, N. Fontes, A. Graça, A. Dell'Aquila, S. Calmanti and
#'N. González-Reviriego (2023). Advanced seasonal predictions for vine
#'management based on bioclimatic indicators tailored to the wine sector.
#'Climate Services, 30, 100343, \doi{10.1016/j.cliser.2023.100343}.
#'
#'@examples
#'data_dates <- c(seq(as.Date("01-07-1993", "%d-%m-%Y", tz = 'UTC'),
#' as.Date("01-12-1993","%d-%m-%Y", tz = 'UTC'), "day"),
#' seq(as.Date("01-07-1994", "%d-%m-%Y", tz = 'UTC'),
#' as.Date("01-12-1994","%d-%m-%Y", tz = 'UTC'), "day"))
#'dim(data_dates) <- c(time = 154, sdate = 2)
#'ref_dates <- seq(as.Date("01-01-1993", "%d-%m-%Y", tz = 'UTC'),
#' as.Date("01-12-1994","%d-%m-%Y", tz = 'UTC'), "day")
#'dim(ref_dates) <- c(time = 350, sdate = 2)
#'ref <- array(1001:1700, c(time = 350, sdate = 2))
#'data <- array(1:(2*154*2), c(time = 154, sdate = 2, member = 2))
#'new_data <- MergeRefToExp(data1 = ref, dates1 = ref_dates, start1 = list(21, 6),
#' end1 = list(30, 6), data2 = data, dates2 = data_dates,
#'                          start2 = list(1, 7), end2 = list(21, 9),
#' time_dim = 'time')
#'
#'@import multiApply
#'@export
MergeRefToExp <- function(data1, data2, dates1 = NULL, dates2 = NULL,
start1 = NULL, end1 = NULL, start2 = NULL, end2 = NULL,
time_dim = 'time', memb_dim = 'member',
ncores = NULL) {
# Input checks
## data1 and data2
if (!is.array(data1) | !is.array(data2)) {
stop("Parameters 'data1' and 'data2' must be arrays.")
}
if (is.null(names(dim(data1))) | is.null(names(dim(data2)))) {
stop("Parameters 'data1' and 'data2' must have named dimensions.")
}
## time_dim
if (!is.character(time_dim)) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!time_dim %in% names(dim(data1)) | !time_dim %in% names(dim(data2))) {
stop("Parameter 'time_dim' is not found in 'data1' or 'data2' dimension ",
"names.")
}
## memb_dim
data1dims <- names(dim(data1))
data2dims <- names(dim(data2))
if (!is.null(memb_dim)) {
if (!is.character(memb_dim)) {
stop("Parameter 'memb_dim' must be a character string.")
}
if (!memb_dim %in% names(dim(data1)) & !memb_dim %in% names(dim(data2))) {
stop("Parameter 'memb_dim' is not found in 'data1' or 'data2' dimension. ",
"Set it to NULL if there is no member dimension.")
}
if ((memb_dim %in% names(dim(data1)) & memb_dim %in% names(dim(data2)))) {
if (dim(data1)[memb_dim] != dim(data2)[memb_dim]) {
if (dim(data1)[memb_dim] == 1) {
data1 <- array(data1, dim = dim(data1)[-which(names(dim(data1)) == memb_dim)])
} else if (dim(data2)[memb_dim] == 1) {
data2 <- array(data2, dim = dim(data2)[-which(names(dim(data2)) == memb_dim)])
} else {
memb_dim1 <- dim(data1)[memb_dim]
data1 <- Apply(list(data1), target_dims = memb_dim,
fun = function(x, memb_rep) {
return(rep(x, each = memb_rep))
}, memb_rep = dim(data2)[memb_dim],
output_dims = memb_dim, ncores = ncores)$output1
data2 <- Apply(list(data2), target_dims = memb_dim,
fun = function(x, memb_rep) {
return(rep(x, memb_rep))
}, memb_rep = memb_dim1,
output_dims = memb_dim, ncores = ncores)$output1
}
}
}
}
## data1 and data2 (2)
name_data1 <- sort(names(dim(data1)))
name_data2 <- sort(names(dim(data2)))
name_data1 <- name_data1[-which(name_data1 %in% c(time_dim, memb_dim))]
name_data2 <- name_data2[-which(name_data2 %in% c(time_dim, memb_dim))]
if (!identical(length(name_data1), length(name_data2)) |
!identical(dim(data1)[name_data1], dim(data2)[name_data2])) {
stop(paste0("Parameter 'data1' and 'data2' must have same length of ",
"all dimensions except 'memb_dim'."))
}
## dates1
if (!is.null(start1) & !is.null(end1)) {
if (is.null(dates1)) {
warning("Parameter 'dates' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else if (!all(c(is.list(start1), is.list(end1)))) {
warning("Parameter 'start1' and 'end1' must be lists indicating the ",
"day and the month of the period start and end. Full data ",
"will be used.")
} else {
if (!is.null(dim(dates1))) {
data1 <- SelectPeriodOnData(data = data1, dates = dates1, start = start1,
end = end1, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates1' must have named dimensions if 'start' and ",
"'end' are not NULL. All 'data1' will be used.")
}
}
}
## dates2
if (!is.null(start2) & !is.null(end2)) {
if (is.null(dates2)) {
warning("Parameter 'dates2' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else if (!all(c(is.list(start2), is.list(end2)))) {
warning("Parameter 'start2' and 'end2' must be lists indicating the ",
"day and the month of the period start and end. Full data ",
"will be used.")
} else {
if (!is.null(dim(dates2))) {
data2 <- SelectPeriodOnData(data = data2, dates = dates2, start = start2,
end = end2, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates2' must have named dimensions if 'start2' and ",
"'end2' are not NULL. All 'data2' will be used.")
}
}
}
data1 <- Apply(list(data1, data2), target_dims = time_dim, fun = 'c',
output_dims = time_dim, ncores = ncores)$output1
if (all(names(dim(data1)) %in% data1dims)) {
pos <- match(data1dims, names(dim(data1)))
data1 <- aperm(data1, pos)
} else if (all(names(dim(data1)) %in% data2dims)) {
pos <- match(data2dims, names(dim(data1)))
data1 <- aperm(data1, pos)
}
return(data1)
}
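# Minimal sketch of the member-dimension handling above (kept as comments so
# that sourcing this file has no side effects; the arrays are hypothetical):
# when both inputs have more than one member, all member combinations are
# produced, so the output member length is the product of the input lengths.
# d1 <- array(1:6, c(time = 3, member = 2))
# d2 <- array(101:109, c(time = 3, member = 3))
# out <- MergeRefToExp(data1 = d1, data2 = d2, time_dim = 'time')
# dim(out)   # time = 6, member = 6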
/scratch/gouwar.j/cran-all/cranData/CSIndicators/R/MergeRefToExp.R
#'Period Accumulation on 's2dv_cube' objects
#'
#'Period Accumulation computes the sum (accumulation) of a given variable in a
#'period. Providing precipitation data, two agriculture indices can be obtained
#'by using this function:
#'\itemize{
#' \item{'SprR', Spring Total Precipitation: The total precipitation from
#'  April 21st to June 21st.}
#' \item{'HarR', Harvest Total Precipitation: The total precipitation from
#' August 21st to October 21st.}
#'}
#'
#'There are two possible ways of performing the accumulation. The default one
#'is by accumulating a variable over a dimension specified with 'time_dim'. To
#'choose a specific time period, 'start' and 'end' must be used. The other
#'method is by using the 'rollwidth' parameter. When this parameter is a
#'positive integer, the cumulative backward sum is applied to the time
#'dimension; if it is negative, the rolling sum is applied forwards. This
#'function is built to be compatible with other tools that work with the
#''s2dv_cube' object class. The input data must be of this object class. If
#'you don't work with 's2dv_cube', see PeriodAccumulation.
#'
#'@param data An 's2dv_cube' object as provided by the functions
#' \code{CST_Start} or \code{CST_Load} in package CSTools.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is
#' set to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param rollwidth An optional parameter to indicate the number of time
#' steps the rolling sum is applied to. If it is positive, the rolling sum is
#' applied backwards along 'time_dim'; if it is negative, it is applied
#' forwards. When this parameter is NULL, the sum is applied over all
#' 'time_dim' in the specified period. It is NULL by default.
#'@param sdate_dim (Only needed when rollwidth is used). A character string
#' indicating the name of the start date dimension to compute the rolling
#' accumulation. By default, it is set to 'sdate'.
#'@param frequency (Only needed when rollwidth is used). A character string
#' indicating the time frequency of the data to apply the rolling accumulation.
#' It can be 'daily' or 'monthly'. If it is set to 'monthly', values from
#' continuous months will be accumulated; if it is 'daliy', values from
#' continuous days will be accumulated. It is set to 'monthly' by default.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the accumulated data in the element
#'\code{data}. If parameter 'rollwidth' is not used, it will have the dimensions
#'of the input parameter 'data' except the dimension where the accumulation has
#'been computed (specified with 'time_dim'). If 'rollwidth' is used, it will be
#'of same dimensions as input data. The 'Dates' array is updated to the
#'dates corresponding to the beginning of the aggregated time period. A new
#'element called 'time_bounds' will be added into the 'attrs' element in the
#''s2dv_cube' object. It consists of a list containing two elements, the start
#'and end dates of the aggregated period with the same dimensions of 'Dates'
#'element. If 'rollwidth' is used, it will contain the same dimensions of
#'parameter 'data' and the other elements of the 's2dv_cube' will not be
#'modified.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(216)*200, dim = c(dataset = 1, member = 2, sdate = 3,
#' ftime = 9, lat = 2, lon = 2))
#'class(exp) <- 's2dv_cube'
#'TP <- CST_PeriodAccumulation(exp, time_dim = 'ftime')
#'exp$data <- array(rnorm(5 * 3 * 214 * 2),
#' c(memb = 5, sdate = 3, ftime = 214, lon = 2))
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, ftime = 214)
#'exp$attrs$Dates <- Dates
#'SprR <- CST_PeriodAccumulation(exp, start = list(21, 4), end = list(21, 6),
#' time_dim = 'ftime')
#'dim(SprR$data)
#'head(SprR$attrs$Dates)
#'HarR <- CST_PeriodAccumulation(exp, start = list(21, 8), end = list(21, 10),
#' time_dim = 'ftime')
#'dim(HarR$data)
#'head(HarR$attrs$Dates)
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_PeriodAccumulation <- function(data, start = NULL, end = NULL,
time_dim = 'time', rollwidth = NULL,
sdate_dim = 'sdate', frequency = 'monthly',
na.rm = FALSE, ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
if (!all(c('data') %in% names(data))) {
stop("Parameter 'data' doesn't have 's2dv_cube' structure. ",
"Use PeriodAccumulation instead.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
warning("Dimensions in 'data' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
Dates <- data$attrs$Dates
data$data <- PeriodAccumulation(data = data$data, dates = Dates,
start = start, end = end,
time_dim = time_dim, rollwidth = rollwidth,
sdate_dim = sdate_dim, frequency = frequency,
na.rm = na.rm, ncores = ncores)
data$dims <- dim(data$data)
if (!is.null(start) & !is.null(end)) {
Dates <- SelectPeriodOnDates(dates = Dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
data$attrs$Dates <- Dates
}
if (is.null(rollwidth)) {
data$coords[[time_dim]] <- NULL
if (!is.null(dim(Dates))) {
# Create time_bounds
time_bounds <- NULL
time_bounds$start <- Subset(Dates, time_dim, 1, drop = 'selected')
time_bounds$end <- Subset(Dates, time_dim, dim(Dates)[time_dim], drop = 'selected')
# Add Dates in attrs
data$attrs$Dates <- time_bounds$start
data$attrs$time_bounds <- time_bounds
}
}
return(data)
}
#'Period Accumulation on multidimensional array objects
#'
#'Period Accumulation computes the sum (accumulation) of a given variable in a
#'period. Providing precipitation data, two agriculture indices can be obtained
#'by using this function:
#'\itemize{
#' \item{'SprR', Spring Total Precipitation: The total precipitation from
#'  April 21st to June 21st.}
#' \item{'HarR', Harvest Total Precipitation: The total precipitation from
#' August 21st to October 21st.}
#'}
#'
#'There are two possible ways of performing the accumulation. The default one
#'is by accumulating a variable over a dimension specified with 'time_dim'. To
#'choose a specific time period, 'start' and 'end' must be used. The other
#'method is by using the 'rollwidth' parameter. When this parameter is a
#'positive integer, the cumulative backward sum is applied to the time
#'dimension; if it is negative, the rolling sum is applied forwards.
#'
#'@param data A multidimensional array with named dimensions.
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' date of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'.
#'@param rollwidth An optional parameter to indicate the number of time
#' steps the rolling sum is applied to. If it is positive, the rolling sum is
#' applied backwards along 'time_dim'; if it is negative, it is applied
#' forwards. When this parameter is NULL, the sum is applied over all
#' 'time_dim' in the specified period. It is NULL by default.
#'@param sdate_dim (Only needed when rollwidth is used). A character string
#' indicating the name of the start date dimension to compute the rolling
#' accumulation. By default, it is set to 'sdate'.
#'@param frequency (Only needed when rollwidth is used). A character string
#' indicating the time frequency of the data to apply the rolling accumulation.
#' It can be 'daily' or 'monthly'. If it is set to 'monthly', values from
#' consecutive months will be accumulated; if it is 'daily', values from
#' consecutive days will be accumulated. It is set to 'monthly' by default.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'@return A multidimensional array with named dimensions containing the
#'accumulated data. If parameter 'rollwidth' is
#'not used, it will have the dimensions of the input 'data' except the dimension
#'where the accumulation has been computed (specified with 'time_dim'). If
#''rollwidth' is used, it will be of same dimensions as input data.
#'
#'@examples
#'exp <- array(rnorm(216)*200, dim = c(dataset = 1, member = 2, sdate = 3,
#' ftime = 9, lat = 2, lon = 2))
#'TP <- PeriodAccumulation(exp, time_dim = 'ftime')
#'data <- array(rnorm(5 * 3 * 214 * 2),
#' c(memb = 5, sdate = 3, ftime = 214, lon = 2))
#'Dates <- c(seq(as.Date("01-05-2000", format = "%d-%m-%Y"),
#' as.Date("30-11-2000", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2001", format = "%d-%m-%Y"),
#' as.Date("30-11-2001", format = "%d-%m-%Y"), by = 'day'),
#' seq(as.Date("01-05-2002", format = "%d-%m-%Y"),
#' as.Date("30-11-2002", format = "%d-%m-%Y"), by = 'day'))
#'dim(Dates) <- c(sdate = 3, ftime = 214)
#'SprR <- PeriodAccumulation(data, dates = Dates, start = list(21, 4),
#' end = list(21, 6), time_dim = 'ftime')
#'HarR <- PeriodAccumulation(data, dates = Dates, start = list(21, 8),
#' end = list(21, 10), time_dim = 'ftime')
#'
#'@import multiApply
#'@importFrom zoo rollapply
#'@export
PeriodAccumulation <- function(data, dates = NULL, start = NULL, end = NULL,
time_dim = 'time', rollwidth = NULL,
sdate_dim = 'sdate', frequency = 'monthly',
na.rm = FALSE, ncores = NULL) {
# Initial checks
## data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
## time_dim
if (!is.character(time_dim) | length(time_dim) != 1) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!is.array(data)) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
dimnames <- names(dim(data))
if (!time_dim %in% names(dim(data))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
warning("Parameter 'dates' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data, dates, start, end,
time_dim = time_dim, ncores = ncores)
if (!is.null(rollwidth)) {
dates <- SelectPeriodOnDates(dates = dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
}
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
if (is.null(rollwidth)) {
# period accumulation
total <- Apply(list(data), target_dims = time_dim, fun = sum,
na.rm = na.rm, ncores = ncores)$output1
} else {
# rolling accumulation
## dates
if (is.null(dates)) {
stop("Parameter 'dates' is NULL. Cannot compute the rolling accumulation.")
}
## rollwidth
if (!is.numeric(rollwidth)) {
stop("Parameter 'rollwidth' must be a numeric value.")
}
if (abs(rollwidth) > dim(data)[time_dim]) {
stop(paste0("Cannot compute accumulation of ", rollwidth, " months because ",
"loaded data has only ", dim(data)[time_dim], " months."))
}
## sdate_dim
if (!is.character(sdate_dim) | length(sdate_dim) != 1) {
stop("Parameter 'sdate_dim' must be a character string.")
}
if (!sdate_dim %in% names(dim(data))) {
stop("Parameter 'sdate_dim' is not found in 'data' dimension.")
}
## frequency
if (!is.character(frequency)) {
stop("Parameter 'frequency' must be a character string.")
}
forwardroll <- FALSE
if (rollwidth < 0) {
rollwidth <- abs(rollwidth)
forwardroll <- TRUE
}
mask_dates <- .datesmask(dates, frequency = frequency)
total <- Apply(data = list(data),
target_dims = list(data = c(time_dim, sdate_dim)),
fun = .rollaccumulation,
mask_dates = mask_dates,
rollwidth = rollwidth,
forwardroll = forwardroll, na.rm = na.rm,
output_dims = c(time_dim, sdate_dim),
ncores = ncores)$output1
pos <- match(dimnames, names(dim(total)))
total <- aperm(total, pos)
}
return(total)
}
.rollaccumulation <- function(data, mask_dates, rollwidth = 1,
forwardroll = FALSE, na.rm = FALSE) {
dims <- dim(data)
data_vector <- array(NA, dim = length(mask_dates))
count <- 1
for (dd in 1:length(mask_dates)) {
if (mask_dates[dd] == 1) {
data_vector[dd] <- as.vector(data)[count]
count <- count + 1
}
}
data_accum <- rollapply(data = data_vector, width = rollwidth, FUN = sum, na.rm = na.rm)
if (!forwardroll) {
data_accum <- c(rep(NA, rollwidth-1), data_accum)
} else {
data_accum <- c(data_accum, rep(NA, rollwidth-1))
}
data_accum <- data_accum[which(mask_dates == 1)]
data_accum <- array(data_accum, dim = c(dims))
return(data_accum)
}
# End of file: CSIndicators/R/PeriodAccumulation.R
#'Period Max on 's2dv_cube' objects
#'
#'Period Max computes the maximum (max) of a given variable in a period.
#'Two bioclimatic indicators can be obtained by using this function:
#'\itemize{
#' \item{'BIO5', (Providing temperature data) Max Temperature of Warmest
#' Month. The maximum monthly temperature occurrence over a
#' given year (time-series) or averaged span of years (normal).}
#' \item{'BIO13', (Providing precipitation data) Precipitation of Wettest
#' Month. This index identifies the total precipitation
#' that prevails during the wettest month.}
#'}
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the indicator in the element
#'\code{data} with dimensions of the input parameter 'data' except the
#'dimension where the max has been computed (specified with 'time_dim'). A new
#'element called 'time_bounds' will be added into the 'attrs' element in the
#''s2dv_cube' object. It consists of a list containing two elements, the start
#'and end dates of the aggregated period with the same dimensions of 'Dates'
#'element.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(45), dim = c(member = 7, sdate = 4, time = 3))
#'Dates <- c(seq(as.Date("2000-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2001-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2001-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2002-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2002-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2003-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2003-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2004-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"))
#'dim(Dates) <- c(sdate = 4, time = 3)
#'exp$attrs$Dates <- Dates
#'class(exp) <- 's2dv_cube'
#'
#'res <- CST_PeriodMax(exp, start = list(01, 12), end = list(01, 01))
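#'
#'# A hedged sketch (not part of the original examples): the start/end
#'# dates of the aggregated period are stored in 'attrs$time_bounds'.
#'res$attrs$time_bounds$start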
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_PeriodMax <- function(data, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE,
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
warning("Dimensions in 'data' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
Dates <- data$attrs$Dates
total <- PeriodMax(data = data$data, dates = Dates, start = start, end = end,
time_dim = time_dim, na.rm = na.rm, ncores = ncores)
data$data <- total
data$dims <- dim(total)
data$coords[[time_dim]] <- NULL
if (!is.null(Dates)) {
if (!is.null(start) && !is.null(end)) {
Dates <- SelectPeriodOnDates(dates = Dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
}
if (is.null(dim(Dates))) {
warning("Element 'Dates' has NULL dimensions. They will not be ",
"subset and 'time_bounds' will be missed.")
data$attrs$Dates <- Dates
} else {
# Create time_bounds
time_bounds <- NULL
time_bounds$start <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = 1, drop = 'selected')
time_bounds$end <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = dim(Dates)[time_dim],
drop = 'selected')
# Add Dates in attrs
data$attrs$Dates <- time_bounds$start
data$attrs$time_bounds <- time_bounds
}
}
return(data)
}
#'Period max on multidimensional array objects
#'
#'Period max computes the maximum (max) of a given variable in a period.
#'Two bioclimatic indicators can be obtained by using this function:
#'\itemize{
#' \item{'BIO5', (Providing temperature data) Max Temperature of Warmest
#' Month. The maximum monthly temperature occurrence over a
#' given year (time-series) or averaged span of years (normal).}
#' \item{'BIO13', (Providing precipitation data) Precipitation of Wettest
#' Month. This index identifies the total precipitation
#' that prevails during the wettest month.}
#'}
#'
#'@param data A multidimensional array with named dimensions.
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the
#'indicator in the element \code{data}.
#'
#'@examples
#'data <- array(rnorm(45), dim = c(member = 7, sdate = 4, time = 3))
#'Dates <- c(seq(as.Date("2000-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2001-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2001-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2002-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2002-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2003-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2003-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2004-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"))
#'dim(Dates) <- c(sdate = 4, time = 3)
#'res <- PeriodMax(data, dates = Dates, start = list(01, 12), end = list(01, 01))
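#'
#'# A hedged sketch (not part of the original examples): the 'time'
#'# dimension is aggregated away in the output.
#'dim(res)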
#'
#'@import multiApply
#'@export
PeriodMax <- function(data, dates = NULL, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE, ncores = NULL) {
# Initial checks
## data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
## time_dim
if (!is.character(time_dim) | length(time_dim) != 1) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!is.array(data)) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
if (!time_dim %in% names(dim(data))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
warning("Parameter 'dates' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
total <- Apply(list(data), target_dims = time_dim, fun = max,
na.rm = na.rm, ncores = ncores)$output1
return(total)
}
# End of file: CSIndicators/R/PeriodMax.R
#'Period Mean on 's2dv_cube' objects
#'
#'Period Mean computes the average (mean) of a given variable in a period.
#'Providing temperature data, two agriculture indices can be obtained by using
#'this function:
#'\itemize{
#' \item{'GST', Growing Season average Temperature: The average temperature
#'    from April 1st to October 31st.}
#' \item{'SprTX', Spring Average Maximum Temperature: The average daily
#' maximum temperature from April 1st to May 31st.}
#'}
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the indicator in the element
#'\code{data} with dimensions of the input parameter 'data' except the
#'dimension where the mean has been computed (specified with 'time_dim'). The
#''Dates' array is updated to the dates corresponding to the beginning of the
#'aggregated time period. A new element called 'time_bounds' will be added into
#'the 'attrs' element in the 's2dv_cube' object. It consists of a list
#'containing two elements, the start and end dates of the aggregated period with
#'the same dimensions of 'Dates' element.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(45), dim = c(member = 7, sdate = 4, time = 3))
#'Dates <- c(seq(as.Date("2000-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2001-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2001-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2002-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2002-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2003-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2003-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2004-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"))
#'dim(Dates) <- c(sdate = 4, time = 3)
#'exp$attrs$Dates <- Dates
#'class(exp) <- 's2dv_cube'
#'
#'SA <- CST_PeriodMean(exp, start = list(01, 12), end = list(01, 01))
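#'
#'# A hedged sketch (not part of the original examples): the start/end
#'# dates of the aggregated period are stored in 'attrs$time_bounds'.
#'SA$attrs$time_bounds$start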
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_PeriodMean <- function(data, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE,
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
warning("Dimensions in 'data' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
Dates <- data$attrs$Dates
total <- PeriodMean(data = data$data, dates = Dates, start = start, end = end,
time_dim = time_dim, na.rm = na.rm, ncores = ncores)
data$data <- total
data$dims <- dim(total)
data$coords[[time_dim]] <- NULL
if (!is.null(Dates)) {
if (!is.null(start) && !is.null(end)) {
Dates <- SelectPeriodOnDates(dates = Dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
}
if (is.null(dim(Dates))) {
warning("Element 'Dates' has NULL dimensions. They will not be ",
"subset and 'time_bounds' will be missed.")
data$attrs$Dates <- Dates
} else {
# Create time_bounds
time_bounds <- NULL
time_bounds$start <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = 1, drop = 'selected')
time_bounds$end <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = dim(Dates)[time_dim],
drop = 'selected')
# Add Dates in attrs
data$attrs$Dates <- time_bounds$start
data$attrs$time_bounds <- time_bounds
}
}
return(data)
}
#'Period Mean on multidimensional array objects
#'
#'Period Mean computes the average (mean) of a given variable in a period.
#'Providing temperature data, two agriculture indices can be obtained by using
#'this function:
#'\itemize{
#' \item{'GST', Growing Season average Temperature: The average temperature
#'    from April 1st to October 31st.}
#' \item{'SprTX', Spring Average Maximum Temperature: The average daily
#' maximum temperature from April 1st to May 31st.}
#'}
#'
#'@param data A multidimensional array with named dimensions.
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the
#'indicator in the element \code{data}.
#'
#'@examples
#'data <- array(rnorm(45), dim = c(member = 7, sdate = 4, time = 3))
#'Dates <- c(seq(as.Date("2000-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2001-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2001-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2002-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2002-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2003-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2003-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2004-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"))
#'dim(Dates) <- c(sdate = 4, time = 3)
#'SA <- PeriodMean(data, dates = Dates, start = list(01, 12), end = list(01, 01))
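#'
#'# A hedged sketch (not part of the original examples): the 'time'
#'# dimension is aggregated away in the output.
#'dim(SA)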
#'
#'@import multiApply
#'@export
PeriodMean <- function(data, dates = NULL, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE, ncores = NULL) {
# Initial checks
## data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
## time_dim
if (!is.character(time_dim) | length(time_dim) != 1) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!is.array(data)) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
if (!time_dim %in% names(dim(data))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
warning("Parameter 'dates' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
total <- Apply(list(data), target_dims = time_dim, fun = mean,
na.rm = na.rm, ncores = ncores)$output1
return(total)
}
# End of file: CSIndicators/R/PeriodMean.R
#'Period Min on 's2dv_cube' objects
#'
#'Period Min computes the minimum (min) of a given variable in a period.
#'Two bioclimatic indicators can be obtained by using this function:
#'\itemize{
#' \item{'BIO6', (Providing temperature data) Min Temperature of Coldest
#' Month. The minimum monthly temperature occurrence over a
#' given year (time-series) or averaged span of years (normal).}
#' \item{'BIO14', (Providing precipitation data) Precipitation of Driest
#' Month. This index identifies the total precipitation
#' that prevails during the driest month.}
#'}
#'
#'@param data An 's2dv_cube' object as provided by function \code{CST_Start} or
#' \code{CST_Load} in package CSTools.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return An 's2dv_cube' object containing the indicator in the element
#'\code{data} with dimensions of the input parameter 'data' except the
#'dimension where the min has been computed (specified with 'time_dim'). A new
#'element called 'time_bounds' will be added into the 'attrs' element in the
#''s2dv_cube' object. It consists of a list containing two elements, the start
#'and end dates of the aggregated period with the same dimensions of 'Dates'
#'element.
#'
#'@examples
#'exp <- NULL
#'exp$data <- array(rnorm(45), dim = c(member = 7, sdate = 4, time = 3))
#'Dates <- c(seq(as.Date("2000-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2001-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2001-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2002-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2002-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2003-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2003-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2004-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"))
#'dim(Dates) <- c(sdate = 4, time = 3)
#'exp$attrs$Dates <- Dates
#'class(exp) <- 's2dv_cube'
#'
#'res <- CST_PeriodMin(exp, start = list(01, 12), end = list(01, 01))
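#'
#'# A hedged sketch (not part of the original examples): the start/end
#'# dates of the aggregated period are stored in 'attrs$time_bounds'.
#'res$attrs$time_bounds$start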
#'
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@export
CST_PeriodMin <- function(data, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE,
ncores = NULL) {
# Check 's2dv_cube'
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of the class 's2dv_cube'.")
}
# Dates subset
if (!is.null(start) && !is.null(end)) {
if (is.null(dim(data$attrs$Dates))) {
warning("Dimensions in 'data' element 'attrs$Dates' are missed and ",
"all data would be used.")
start <- NULL
end <- NULL
}
}
Dates <- data$attrs$Dates
total <- PeriodMin(data = data$data, dates = Dates, start = start, end = end,
time_dim = time_dim, na.rm = na.rm, ncores = ncores)
data$data <- total
data$dims <- dim(total)
data$coords[[time_dim]] <- NULL
if (!is.null(Dates)) {
if (!is.null(start) && !is.null(end)) {
Dates <- SelectPeriodOnDates(dates = Dates, start = start, end = end,
time_dim = time_dim, ncores = ncores)
}
if (is.null(dim(Dates))) {
warning("Element 'Dates' has NULL dimensions. They will not be ",
"subset and 'time_bounds' will be missed.")
data$attrs$Dates <- Dates
} else {
# Create time_bounds
time_bounds <- NULL
time_bounds$start <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = 1, drop = 'selected')
time_bounds$end <- ClimProjDiags::Subset(x = Dates, along = time_dim,
indices = dim(Dates)[time_dim],
drop = 'selected')
# Add Dates in attrs
data$attrs$Dates <- time_bounds$start
data$attrs$time_bounds <- time_bounds
}
}
return(data)
}
#'Period Min on multidimensional array objects
#'
#'Period Min computes the minimum (min) of a given variable in a period.
#'Two bioclimatic indicators can be obtained by using this function:
#'\itemize{
#' \item{'BIO6', (Providing temperature data) Min Temperature of Coldest
#' Month. The minimum monthly temperature occurrence over a
#' given year (time-series) or averaged span of years (normal).}
#' \item{'BIO14', (Providing precipitation data) Precipitation of Driest
#' Month. This index identifies the total precipitation
#' that prevails during the driest month.}
#'}
#'
#'@param data A multidimensional array with named dimensions.
#'@param dates A multidimensional array of dates with named dimensions matching
#' the temporal dimensions of parameter 'data'. By default it is NULL; to
#' select a period this parameter must be provided.
#'@param start An optional parameter to define the initial date of the period
#' to select from the data by providing a list of two elements: the initial
#' day of the period and the initial month of the period. By default it is set
#' to NULL and the indicator is computed using all the data provided in
#' \code{data}.
#'@param end An optional parameter to define the final date of the period to
#' select from the data by providing a list of two elements: the final day of
#' the period and the final month of the period. By default it is set to NULL
#' and the indicator is computed using all the data provided in \code{data}.
#'@param time_dim A character string indicating the name of the dimension to
#' compute the indicator. By default, it is set to 'time'. More than one
#' dimension name matching the dimensions provided in the object
#' \code{data$data} can be specified.
#'@param na.rm A logical value indicating whether to ignore NA values (TRUE) or
#' not (FALSE).
#'@param ncores An integer indicating the number of cores to use in parallel
#' computation.
#'
#'@return A multidimensional array with named dimensions containing the
#'indicator in the element \code{data}.
#'
#'@examples
#'data <- array(rnorm(45), dim = c(member = 7, sdate = 4, time = 3))
#'Dates <- c(seq(as.Date("2000-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2001-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2001-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2002-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2002-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2003-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"),
#' seq(as.Date("2003-11-01", "%Y-%m-%d", tz = "UTC"),
#' as.Date("2004-01-01", "%Y-%m-%d", tz = "UTC"), by = "month"))
#'dim(Dates) <- c(sdate = 4, time = 3)
#'res <- PeriodMin(data, dates = Dates, start = list(01, 12), end = list(01, 01))
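#'
#'# A hedged sketch (not part of the original examples): the 'time'
#'# dimension is aggregated away in the output.
#'dim(res)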
#'
#'@import multiApply
#'@export
PeriodMin <- function(data, dates = NULL, start = NULL, end = NULL,
time_dim = 'time', na.rm = FALSE, ncores = NULL) {
# Initial checks
## data
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!is.numeric(data)) {
stop("Parameter 'data' must be numeric.")
}
## time_dim
if (!is.character(time_dim) | length(time_dim) != 1) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!is.array(data)) {
dim(data) <- length(data)
names(dim(data)) <- time_dim
}
if (!time_dim %in% names(dim(data))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (!is.null(start) && !is.null(end)) {
if (is.null(dates)) {
warning("Parameter 'dates' is NULL and the average of the ",
"full data provided in 'data' is computed.")
} else {
if (!any(c(is.list(start), is.list(end)))) {
stop("Parameter 'start' and 'end' must be lists indicating the ",
"day and the month of the period start and end.")
}
if (!is.null(dim(dates))) {
data <- SelectPeriodOnData(data = data, dates = dates, start = start,
end = end, time_dim = time_dim,
ncores = ncores)
} else {
warning("Parameter 'dates' must have named dimensions if 'start' and ",
"'end' are not NULL. All data will be used.")
}
}
}
total <- Apply(list(data), target_dims = time_dim, fun = min,
na.rm = na.rm, ncores = ncores)$output1
return(total)
}
# End of file: CSIndicators/R/PeriodMin.R
#'Compute the Potential Evapotranspiration
#'
#'Compute the Potential Evapotranspiration (PET), that is, the amount of
#'evaporation and transpiration that would occur if a sufficient water source
#'were available. This function calculates PET according to the Thornthwaite,
#'Hargreaves or Hargreaves-modified equations.
#'
#'This function is built to be compatible with other tools that work with the
#''s2dv_cube' object class. The input data must be of this object class. If
#'you don't work with 's2dv_cube', see PeriodPET. For more information
#'on the SPEI calculation, see functions CST_PeriodStandardization and
#'CST_PeriodAccumulation.
#'
#'@param data A named list with the needed \code{s2dv_cube} objects containing
#' the seasonal forecast experiment in the 'data' element for each variable.
#' Specific variables are needed for each method used in computing the
#' Potential Evapotranspiration (see parameter 'pet_method'). The accepted
#' variable names are fixed in order to be recognized by the function.
#' The accepted name corresponding to the Minimum Temperature is 'tmin',
#' for Maximum Temperature is 'tmax', for Mean Temperature is 'tmean' and
#' for Precipitation is 'pr'. The accepted variable names for each method are:
#' For 'hargreaves', 'tmin' and 'tmax'; for 'hargreaves_modified', 'tmin',
#' 'tmax' and 'pr'; for 'thornthwaite', 'tmean' is required. The units
#' for temperature variables ('tmin', 'tmax' and 'tmean') need to be in
#' degrees Celsius; the units for precipitation ('pr') need to be in mm/month.
#' Currently the function works only with monthly data from different years.
#'@param pet_method A character string indicating the method used to compute
#' the potential evapotranspiration. The accepted methods are:
#' 'hargreaves' and 'hargreaves_modified', that require the data to have
#' variables tmin and tmax; and 'thornthwaite', that requires variable
#' 'tmean'.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'syear'.
#'@param leadtime_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'.
#'@param lat_dim A character string indicating the name of the latitudinal
#' dimension. By default it is set by 'latitude'.
#'@param na.rm A logical value indicating whether NA values should be removed
#' from data. It is FALSE by default.
#'@param ncores An integer value indicating the number of cores to use in
#' parallel computation.
#'
#'@examples
#'dims <- c(time = 3, syear = 3, ensemble = 1, latitude = 1)
#'exp_tasmax <- array(rnorm(360, 27.73, 5.26), dim = dims)
#'exp_tasmin <- array(rnorm(360, 14.83, 3.86), dim = dims)
#'exp_prlr <- array(rnorm(360, 21.19, 25.64), dim = dims)
#'end_year <- 2012
#'dates_exp <- as.POSIXct(c(paste0(2010:end_year, "-08-16"),
#' paste0(2010:end_year, "-09-15"),
#' paste0(2010:end_year, "-10-16")), "UTC")
#'dim(dates_exp) <- c(syear = 3, time = 3)
#'lat <- c(40)
#'exp1 <- list('tmax' = exp_tasmax, 'tmin' = exp_tasmin, 'pr' = exp_prlr)
#'res <- PeriodPET(data = exp1, lat = lat, dates = dates_exp)
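#'
#'# A hedged sketch (not part of the original examples): the 'thornthwaite'
#'# method needs a single variable named 'tmean'; here 'exp_tasmax' stands
#'# in for a mean temperature field, for illustration only.
#'exp2 <- list('tmean' = exp_tasmax)
#'res_th <- PeriodPET(data = exp2, lat = lat, dates = dates_exp,
#'                    pet_method = 'thornthwaite')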
#'
#'@importFrom CSTools s2dv_cube
#'@export
CST_PeriodPET <- function(data, pet_method = 'hargreaves',
time_dim = 'syear', leadtime_dim = 'time',
lat_dim = 'latitude', na.rm = FALSE,
ncores = NULL) {
# Check 's2dv_cube'
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!all(sapply(data, function(x) inherits(x, 's2dv_cube')))) {
stop("Parameter 'data' must be a list of 's2dv_cube' class.")
}
# latitude
if (!any(names(data[[1]]$coords) %in% .KnownLatNames())) {
stop("Spatial coordinate names of parameter 'data' do not match any ",
"of the names accepted by the package.")
}
  # Dates: check availability before use
  if (!'Dates' %in% names(data[[1]]$attrs)) {
    stop("Element 'Dates' is not found in 'attrs' list of 'data'. ",
         "See 's2dv_cube' object description in README file for more ",
         "information.")
  }
lat_dim <- names(data[[1]]$coords)[[which(names(data[[1]]$coords) %in% .KnownLatNames())]]
res <- PeriodPET(data = lapply(data, function(x) x$data),
dates = data[[1]]$attrs$Dates,
lat = data[[1]]$coords[[lat_dim]],
pet_method = pet_method, time_dim = time_dim,
leadtime_dim = leadtime_dim, lat_dim = lat_dim,
na.rm = na.rm, ncores = ncores)
# Add metadata
source_files <- lapply(data, function(x) {x$attrs$source_files})
coords <- data[[1]]$coords
Dates <- data[[1]]$attrs$Dates
metadata <- data[[1]]$attrs$Variable$metadata
metadata_names <- intersect(names(dim(res)), names(metadata))
suppressWarnings(
res <- s2dv_cube(data = res, coords = coords,
varName = paste0('PET'),
metadata = metadata[metadata_names],
Dates = Dates,
source_files = source_files,
when = Sys.time())
)
return(res)
}
#'Compute the Potential Evapotranspiration
#'
#'Compute the Potential Evapotranspiration (PET), that is, the amount of
#'evaporation and transpiration that would occur if a sufficient water source
#'were available. This function calculates PET according to the Thornthwaite,
#'Hargreaves or Hargreaves-modified equations.
#'
#'For more information on the SPEI calculation, see functions
#'PeriodStandardization and PeriodAccumulation.
#'
#'@param data A named list of multidimensional arrays containing
#' the seasonal forecast experiment data for each variable.
#' Specific variables are needed for each method used in computing the
#' Potential Evapotranspiration (see parameter 'pet_method'). The accepted
#' variable names are fixed in order to be recognized by the function.
#' The accepted name corresponding to the Minimum Temperature is 'tmin',
#' for Maximum Temperature is 'tmax', for Mean Temperature is 'tmean' and
#' for Precipitation is 'pr'. The accepted variable names for each method are:
#' For 'hargreaves', 'tmin' and 'tmax'; for 'hargreaves_modified', 'tmin',
#' 'tmax' and 'pr'; for 'thornthwaite', 'tmean' is required. The units
#' for temperature variables ('tmin', 'tmax' and 'tmean') need to be in
#' degrees Celsius; the units for precipitation ('pr') need to be in mm/month.
#' Currently the function works only with monthly data from different years.
#'@param dates An array of temporal dimensions containing the Dates of
#' 'data'. It must be of class 'Date' or 'POSIXct'.
#'@param lat A numeric vector containing the latitude values of 'data'.
#'@param pet_method A character string indicating the method used to compute
#' the potential evapotranspiration. The accepted methods are:
#' 'hargreaves' and 'hargreaves_modified', that require the data to have
#' variables tmin and tmax; and 'thornthwaite', that requires variable
#' 'tmean'.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'syear'.
#'@param leadtime_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'.
#'@param lat_dim A character string indicating the name of the latitudinal
#' dimension. By default it is set by 'latitude'.
#'@param na.rm A logical value indicating whether NA values should be removed
#' from data. It is FALSE by default.
#'@param ncores An integer value indicating the number of cores to use in
#' parallel computation.
#'
#'@examples
#'dims <- c(time = 3, syear = 3, ensemble = 1, latitude = 1)
#'exp_tasmax <- array(rnorm(360, 27.73, 5.26), dim = dims)
#'exp_tasmin <- array(rnorm(360, 14.83, 3.86), dim = dims)
#'exp_prlr <- array(rnorm(360, 21.19, 25.64), dim = dims)
#'end_year <- 2012
#'dates_exp <- as.POSIXct(c(paste0(2010:end_year, "-08-16"),
#' paste0(2010:end_year, "-09-15"),
#' paste0(2010:end_year, "-10-16")), "UTC")
#'dim(dates_exp) <- c(syear = 3, time = 3)
#'lat <- c(40)
#'exp1 <- list('tmax' = exp_tasmax, 'tmin' = exp_tasmin, 'pr' = exp_prlr)
#'res <- PeriodPET(data = exp1, lat = lat, dates = dates_exp)
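#'
#'# A hedged sketch (not part of the original examples): the 'hargreaves'
#'# method requires only 'tmin' and 'tmax'.
#'exp2 <- list('tmax' = exp_tasmax, 'tmin' = exp_tasmin)
#'res_hg <- PeriodPET(data = exp2, lat = lat, dates = dates_exp,
#'                    pet_method = 'hargreaves')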
#'
#'@importFrom SPEI hargreaves thornthwaite
#'@import multiApply
#'@export
PeriodPET <- function(data, dates, lat, pet_method = 'hargreaves',
time_dim = 'syear', leadtime_dim = 'time',
lat_dim = 'latitude', na.rm = FALSE,
ncores = NULL) {
# Initial checks
# data
if (!inherits(data, 'list')) {
stop("Parameter 'data' needs to be a named list with the needed variables.")
}
if (is.null(names(data))) {
stop("Parameter 'data' needs to be a named list with the variable names.")
}
if (any(sapply(data, function(x) is.null(names(dim(x)))))) {
stop("Parameter 'data' needs to be a list of arrays with dimension names.")
}
dims <- lapply(data, function(x) dim(x))
first_dims <- dims[[1]]
all_equal <- all(sapply(dims[-1], function(x) identical(first_dims, x)))
if (!all_equal) {
stop("Parameter 'data' variables need to have the same dimensions.")
}
# lat
if (!is.numeric(lat)) {
stop("Parameter 'lat' must be numeric.")
}
if (!lat_dim %in% names(dims[[1]])) {
stop("Parameter 'data' must have 'lat_dim' dimension.")
}
if (any(sapply(dims, FUN = function(x) x[lat_dim] != length(lat)))) {
stop("Parameter 'lat' needs to have the same length of latitudinal",
"dimension of all the variables arrays in 'data'.")
}
# data (2)
if (all(c('tmin', 'tmax', 'pr') %in% names(data))) {
# hargreaves modified: 'tmin', 'tmax', 'pr' and 'lat'
if (!(pet_method %in% c('hargreaves_modified', 'hargreaves'))) {
warning("Parameter 'pet_method' needs to be 'hargreaves' or ",
"'hargreaves_modified'. It is set to 'hargreaves_modified'.")
pet_method <- 'hargreaves_modified'
}
} else if (all(c('tmin', 'tmax') %in% names(data))) {
if (!(pet_method %in% c('hargreaves'))) {
warning("Parameter 'pet_method' will be set as 'hargreaves'.")
pet_method <- 'hargreaves'
}
  } else if ('tmean' %in% names(data)) {
# thornthwaite: 'tmean' (mean), 'lat'
if (!(pet_method == 'thornthwaite')) {
warning("Parameter 'pet_method' it is set to be 'thornthwaite'.")
pet_method <- 'thornthwaite'
}
} else {
stop("Parameter 'data' needs to be a named list with accepted ",
"variable names. See documentation.")
}
## time_dim
if (!is.character(time_dim) | length(time_dim) != 1) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!all(sapply(data, function(x) time_dim %in% names(dim(x))))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
## leadtime_dim
if (!is.character(leadtime_dim) | length(leadtime_dim) != 1) {
stop("Parameter 'leadtime_dim' must be a character string.")
}
if (!all(sapply(data, function(x) leadtime_dim %in% names(dim(x))))) {
stop("Parameter 'leadtime_dim' is not found in 'data' dimension.")
}
## lat_dim
if (!is.character(lat_dim) | length(lat_dim) != 1) {
stop("Parameter 'lat_dim' must be a character string.")
}
if (!all(sapply(data, function(x) lat_dim %in% names(dim(x))))) {
stop("Parameter 'lat_dim' is not found in 'data' dimension.")
}
# dates
if (is.null(dates)) {
stop("Parameter 'dates' is missing, dates must be provided.")
}
if (!any(inherits(dates, 'Date'), inherits(dates, 'POSIXct'))) {
stop("Parameter 'dates' is not of the correct class, ",
"only 'Date' and 'POSIXct' classes are accepted.")
}
if (!time_dim %in% names(dim(dates)) | !leadtime_dim %in% names(dim(dates))) {
stop("Parameter 'dates' must have 'time_dim' and 'leadtime_dim' ",
"dimension.")
}
if (!all(dim(data[[1]])[c(time_dim, leadtime_dim)] ==
dim(dates)[c(time_dim, leadtime_dim)])) {
stop("Parameter 'dates' needs to have the same length as 'time_dim' ",
"and 'leadtime_dim' as 'data'.")
}
## na.rm
if (!is.logical(na.rm) | length(na.rm) > 1) {
stop("Parameter 'na.rm' must be one logical value.")
}
## ncores
if (!is.null(ncores)) {
if (!is.numeric(ncores) | any(ncores %% 1 != 0) | any(ncores < 0) |
length(ncores) > 1) {
stop("Parameter 'ncores' must be a positive integer.")
}
}
# complete dates
mask_dates <- .datesmask(dates, frequency = 'monthly')
lat_mask <- array(lat, dim = c(1, length(lat)))
names(dim(lat_mask)) <- c('dat', lat_dim)
# extract mask of NA locations to return to NA the final result
mask_na <- array(1, dim = dim(data[[1]]))
if (pet_method == 'hargreaves') {
varnames <- c('tmax', 'tmin')
mask_na[which(is.na(data$tmax))] <- 0
mask_na[which(is.na(data$tmin))] <- 0
} else if (pet_method == 'hargreaves_modified') {
varnames <- c('tmax', 'tmin', 'pr')
mask_na[which(is.na(data$tmax))] <- 0
mask_na[which(is.na(data$tmin))] <- 0
mask_na[which(is.na(data$pr))] <- 0
} else if (pet_method == 'thornthwaite') {
varnames <- c('tmean')
mask_na[which(is.na(data$tmean))] <- 0
}
# replace NA with 0
for (dd in 1:length(data)) {
data[[dd]][which(is.na(data[[dd]]))] <- 0
}
# prepare data
  target_dims_data <- lapply(data[varnames], function(x) c(leadtime_dim, time_dim))
pet <- Apply(data = c(list(lat_mask = lat_mask), data[varnames]),
target_dims = c(list(lat_mask = 'dat'), target_dims_data),
fun = .pet,
mask_dates = mask_dates, pet_method = pet_method,
leadtime_dim = leadtime_dim, time_dim = time_dim,
output_dims = c(leadtime_dim, time_dim),
ncores = ncores)$output1
# reorder dims in pet_estimated
pos <- match(names(dim(data[[1]])), names(dim(pet)))
pet <- aperm(pet, pos)
# restore original NAs from mask_na
pet[which(mask_na == 0)] <- NA
return(pet)
}
.pet <- function(lat_mask, data2, data3 = NULL, data4 = NULL,
mask_dates, pet_method = 'hargreaves',
leadtime_dim = 'time', time_dim = 'syear') {
dims <- dim(data2)
  # create a vector from data but adding 0 to achieve a complete time series
# of the considered period
# (starting in January of the first year) so that the solar radiation
# estimation is computed in each case for the correct month
if (!is.null(data2)) {
data_tmp <- as.vector(data2)
data2 <- array(0, dim = length(mask_dates))
count <- 1
for (dd in 1:length(mask_dates)) {
if (mask_dates[dd] == 1) {
data2[dd] <- data_tmp[count]
count <- count + 1
}
}
rm(data_tmp)
}
if (!is.null(data3)) {
data_tmp <- as.vector(data3)
data3 <- array(0, dim = length(mask_dates))
count <- 1
for (dd in 1:length(mask_dates)) {
if (mask_dates[dd] == 1) {
data3[dd] <- data_tmp[count]
count <- count + 1
}
}
rm(data_tmp)
}
if (!is.null(data4)) {
data_tmp <- as.vector(data4)
data4 <- array(0, dim = length(mask_dates))
count <- 1
for (dd in 1:length(mask_dates)) {
if (mask_dates[dd] == 1) {
data4[dd] <- data_tmp[count]
count <- count + 1
}
}
rm(data_tmp)
}
if (pet_method == 'hargreaves') {
pet <- hargreaves(Tmin = as.vector(data3), Tmax = as.vector(data2),
lat = lat_mask, na.rm = FALSE, verbose = FALSE)
# line to return the vector to the size of the actual original data
pet <- array(pet[which(mask_dates == 1)], dim = dims)
}
if (pet_method == 'hargreaves_modified') {
pet <- hargreaves(Tmin = as.vector(data3), Tmax = as.vector(data2),
lat = lat_mask, Pre = as.vector(data4), na.rm = FALSE,
verbose = FALSE)
pet <- array(pet[which(mask_dates == 1)], dim = dims)
}
if (pet_method == 'thornthwaite') {
pet <- thornthwaite(as.vector(data2), lat = lat_mask, na.rm = TRUE,
verbose = FALSE)
# line to return the vector to the size of the actual original data
pet <- array(pet[which(mask_dates == 1)], dim = dims)
}
return(pet)
}
# End of file: CSIndicators/R/PeriodPET.R
#'Compute the Standardization of Precipitation-Evapotranspiration Index
#'
#'The Standardization of the data is the last step of computing the SPEI
#'(Standardized Precipitation-Evapotranspiration Index). With this function the
#'data is fit to a probability distribution to transform the original values to
#'standardized units that are comparable in space and time and at different SPEI
#'time scales.
#'
#'Next, some specifications for the calculation of the standardization will be
#'discussed. If there are NAs in the data and they are not removed with the
#'parameter 'na.rm', the standardization cannot be carried out for those
#'coordinates and therefore, the result will be filled with NA for the
#'specific coordinates. When NAs are not removed, if the length of the data for
#'a computational step is smaller than 4, there will not be enough data to
#'standardize and the result will also be filled with NAs for those coordinates.
#'About the distribution used to fit the data, there are only two possibilities:
#''log-logistic' and 'Gamma'. The 'Gamma' method only works when only
#'precipitation is provided and other variables are 0 because it is positive
#'defined (SPI indicator). When only 'data' is provided ('data_cor' is NULL) the
#'standardization is computed with cross validation. This function is built to
#'be compatible with other tools that work with the 's2dv_cube' object
#'class. The input data must be of this object class. If you don't work with
#''s2dv_cube', see PeriodStandardization. For more information on the SPEI
#'indicator calculation, see CST_PeriodPET and CST_PeriodAccumulation.
#'
#'@param data An 's2dv_cube' that element 'data' stores a multidimensional
#' array containing the data to be standardized.
#'@param data_cor An 's2dv_cube' that element 'data' stores a multidimensional
#' array containing the data in which the standardization should be applied
#' using the fitting parameters from 'data'.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'syear'.
#'@param leadtime_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'.
#'@param memb_dim A character string indicating the name of the dimension in
#' which the ensemble members are stored. When it is set to NULL, the
#' standardization is computed for individual members.
#'@param ref_period A list with two numeric values with the starting and end
#' points of the reference period used for computing the index. The default
#' value is NULL indicating that the first and last values in data will be
#' used as starting and end points.
#'@param params An optional parameter that needs to be a multidimensional array
#' with named dimensions. This option overrides computation of fitting
#' parameters. It needs to have the same time dimensions (specified in
#' 'time_dim' and 'leadtime_dim') as 'data' and a dimension named 'coef' with
#' the length of the coefficients needed for the used distribution (for
#' 'Gamma' the 'coef' dimension is of length 2, for 'log-Logistic' it is of
#' length 3). It also needs
#' to have a leadtime dimension (specified in 'leadtime_dim') of length 1. It
#' will only be used if 'data_cor' is not provided.
#'@param handle_infinity A logical value indicating whether to return infinite
#' values (TRUE) or not (FALSE). When it is TRUE, the positive (negative)
#' infinite values are substituted by the maximum (minimum) values of each
#' computation step, a subset of the array of dimensions time_dim, leadtime_dim
#' and memb_dim.
#'@param method A character string indicating the standardization method used.
#' It can be: 'parametric' or 'non-parametric'. It is set to 'parametric' by
#' default.
#'@param distribution A character string indicating the name of the distribution
#' function to be used for computing the SPEI. The accepted names are:
#' 'log-Logistic' and 'Gamma'. It is set to 'log-Logistic' by default. The
#' 'Gamma' method only works when only precipitation is provided and other
#' variables are 0 because it is positive defined (SPI indicator).
#'@param return_params A logical value indicating whether to return parameters
#' array (TRUE) or not (FALSE). It is FALSE by default.
#'@param na.rm A logical value indicating whether NA values should be removed
#' from data. It is FALSE by default. If it is FALSE and there are NA values,
#' standardization cannot be carried out for those coordinates and therefore,
#' the result will be filled with NA for the specific coordinates. If it is
#' TRUE, if the data from dimensions other than time_dim and leadtime_dim does
#' not reach 4 values, there are not enough values to estimate the parameters
#' and the result will include NA.
#'@param ncores An integer value indicating the number of cores to use in
#' parallel computation.
#'
#'@return An object of class \code{s2dv_cube} containing the standardized data.
#'If 'data_cor' is provided the array stored in element data will be of the same
#'dimensions as 'data_cor'. If 'data_cor' is not provided, the array stored in
#'element data will be of the same dimensions as 'data'. The parameters of the
#'standardization will only be returned if 'return_params' is TRUE, in this
#'case, the output will be a list of two objects: one for the standardized data
#'and one for the parameters.
#'
#'@examples
#'dims <- c(syear = 6, time = 3, latitude = 2, ensemble = 25)
#'data <- NULL
#'data$data <- array(rnorm(600, -204.1, 78.1), dim = dims)
#'class(data) <- 's2dv_cube'
#'SPEI <- CST_PeriodStandardization(data = data)
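#'
#'# A hedged sketch (not part of the original examples): set 'return_params'
#'# to also retrieve the fitted distribution parameters.
#'res <- CST_PeriodStandardization(data = data, return_params = TRUE)
#'names(res)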
#'@export
CST_PeriodStandardization <- function(data, data_cor = NULL, time_dim = 'syear',
leadtime_dim = 'time', memb_dim = 'ensemble',
ref_period = NULL,
handle_infinity = FALSE,
method = 'parametric',
distribution = 'log-Logistic',
params = NULL, return_params = FALSE,
na.rm = FALSE, ncores = NULL) {
# Check 's2dv_cube'
if (is.null(data)) {
stop("Parameter 'data' cannot be NULL.")
}
if (!inherits(data, 's2dv_cube')) {
stop("Parameter 'data' must be of 's2dv_cube' class.")
}
if (!is.null(data_cor)) {
if (!inherits(data_cor, 's2dv_cube')) {
stop("Parameter 'data_cor' must be of 's2dv_cube' class.")
}
}
res <- PeriodStandardization(data = data$data, data_cor = data_cor$data,
time_dim = time_dim, leadtime_dim = leadtime_dim,
memb_dim = memb_dim,
ref_period = ref_period,
handle_infinity = handle_infinity, method = method,
distribution = distribution,
params = params, return_params = return_params,
na.rm = na.rm, ncores = ncores)
if (return_params) {
std <- res$spei
params <- res$params
} else {
std <- res
}
if (is.null(data_cor)) {
data$data <- std
data$attrs$Variable$varName <- paste0(data$attrs$Variable$varName, ' standardized')
if (return_params) {
return(list(spei = data, params = params))
} else {
return(data)
}
} else {
data_cor$data <- std
data_cor$attrs$Variable$varName <- paste0(data_cor$attrs$Variable$varName, ' standardized')
data_cor$attrs$Datasets <- c(data_cor$attrs$Datasets, data$attrs$Datasets)
data_cor$attrs$source_files <- c(data_cor$attrs$source_files, data$attrs$source_files)
return(data_cor)
}
}
#'Compute the Standardization of Precipitation-Evapotranspiration Index
#'
#'The Standardization of the data is the last step of computing the SPEI
#'indicator. With this function the data is fit to a probability distribution to
#'transform the original values to standardized units that are comparable in
#'space and time and at different SPEI time scales.
#'
#'Next, some specifications for the calculation of the standardization will be
#'discussed. If there are NAs in the data and they are not removed with the
#'parameter 'na.rm', the standardization cannot be carried out for those
#'coordinates and therefore, the result will be filled with NA for the
#'specific coordinates. When NAs are not removed, if the length of the data for
#'a computational step is smaller than 4, there will not be enough data for
#'standarize and the result will be also filled with NAs for that coordinates.
#'About the distribution used to fit the data, there are only two possibilities:
#''log-logistic' and 'Gamma'. The 'Gamma' method only works when only
#'precipitation is provided and other variables are 0 because it is positive
#'defined (SPI indicator). When only 'data' is provided ('data_cor' is NULL) the
#'standardization is computed with cross validation. For more information about
#'SPEI, see functions PeriodPET and PeriodAccumulation.
#'
#'@param data A multidimensional array containing the data to be standardized.
#'@param data_cor A multidimensional array containing the data in which the
#' standardization should be applied using the fitting parameters from 'data'.
#'@param dates An array containing the dates of the data with the same time
#' dimensions as the data. It is optional and only necessary for using the
#' parameter 'ref_period' to select a reference period directly from dates.
#'@param time_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'syear'.
#'@param leadtime_dim A character string indicating the name of the temporal
#' dimension. By default, it is set to 'time'.
#'@param memb_dim A character string indicating the name of the dimension in
#' which the ensemble members are stored. When it is set to NULL, the
#' standardization is computed for individual members.
#'@param ref_period A list with two numeric values with the starting and end
#' points of the reference period used for computing the index. The default
#' value is NULL indicating that the first and last values in data will be
#' used as starting and end points.
#'@param params An optional parameter that needs to be a multidimensional array
#' with named dimensions. This option overrides computation of fitting
#' parameters. It needs to have the same time dimensions (specified in
#' 'time_dim' and 'leadtime_dim') as 'data' and a dimension named 'coef' with
#' the length of the coefficients needed for the used distribution (for
#' 'Gamma' the 'coef' dimension is of length 2, for 'log-Logistic' it is of
#' length 3). It also needs
#' to have a leadtime dimension (specified in 'leadtime_dim') of length 1. It
#' will only be used if 'data_cor' is not provided.
#'@param handle_infinity A logical value indicating whether to return infinite
#' values (TRUE) or not (FALSE). When it is TRUE, the positive (negative)
#' infinite values are substituted by the maximum (minimum) values of each
#' computation step, a subset of the array of dimensions time_dim, leadtime_dim
#' and memb_dim.
#'@param method A character string indicating the standardization method used.
#' It can be: 'parametric' or 'non-parametric'. It is set to 'parametric' by
#' default.
#'@param distribution A character string indicating the name of the distribution
#' function to be used for computing the SPEI. The accepted names are:
#' 'log-Logistic' and 'Gamma'. It is set to 'log-Logistic' by default. The
#' 'Gamma' method only works when only precipitation is provided and other
#' variables are 0 because it is positive defined (SPI indicator).
#'@param return_params A logical value indicating whether to return parameters
#' array (TRUE) or not (FALSE). It is FALSE by default.
#'@param na.rm A logical value indicating whether NA values should be removed
#' from data. It is FALSE by default. If it is FALSE and there are NA values,
#' standardization cannot be carried out for those coordinates and therefore,
#' the result will be filled with NA for the specific coordinates. If it is
#' TRUE, if the data from dimensions other than time_dim and leadtime_dim does
#' not reach 4 values, there are not enough values to estimate the parameters
#' and the result will include NA.
#'@param ncores An integer value indicating the number of cores to use in
#' parallel computation.
#'
#'@return A multidimensional array containing the standardized data.
#'If 'data_cor' is provided the array will be of the same dimensions as
#''data_cor'. If 'data_cor' is not provided, the array will be of the same
#'dimensions as 'data'. The parameters of the standardization will only be
#'returned if 'return_params' is TRUE, in this case, the output will be a list
#'of two objects: one for the standardized data and one for the parameters.
#'
#'@examples
#'dims <- c(syear = 6, time = 2, latitude = 2, ensemble = 25)
#'dimscor <- c(syear = 1, time = 2, latitude = 2, ensemble = 25)
#'data <- array(rnorm(600, -194.5, 64.8), dim = dims)
#'datacor <- array(rnorm(100, -217.8, 68.29), dim = dimscor)
#'
#'SPEI <- PeriodStandardization(data = data)
#'SPEIcor <- PeriodStandardization(data = data, data_cor = datacor)
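#'
#'# A hedged sketch (not part of the original examples): set 'return_params'
#'# to also retrieve the fitted distribution parameters.
#'res <- PeriodStandardization(data = data, return_params = TRUE)
#'str(res$params)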
#'@import multiApply
#'@importFrom ClimProjDiags Subset
#'@importFrom lmomco pwm.pp pwm.ub pwm2lmom are.lmom.valid parglo pargam parpe3
#'@importFrom lmom cdfglo cdfgam cdfpe3 pelglo pelgam pelpe3
#'@importFrom SPEI parglo.maxlik
#'@importFrom stats qnorm sd window
#'@export
PeriodStandardization <- function(data, data_cor = NULL, dates = NULL,
time_dim = 'syear', leadtime_dim = 'time',
memb_dim = 'ensemble',
ref_period = NULL, handle_infinity = FALSE,
method = 'parametric',
distribution = 'log-Logistic',
params = NULL, return_params = FALSE,
na.rm = FALSE, ncores = NULL) {
# Check inputs
## data
if (!is.array(data)) {
stop("Parameter 'data' must be a numeric array.")
}
if (is.null(names(dim(data)))) {
stop("Parameter 'data' must have dimension names.")
}
## data_cor
if (!is.null(data_cor)) {
if (!is.array(data_cor)) {
stop("Parameter 'data_cor' must be a numeric array.")
}
if (is.null(names(dim(data_cor)))) {
stop("Parameter 'data_cor' must have dimension names.")
}
}
## dates
if (!is.null(dates)) {
if (!any(inherits(dates, 'Date'), inherits(dates, 'POSIXct'))) {
stop("Parameter 'dates' is not of the correct class, ",
"only 'Date' and 'POSIXct' classes are accepted.")
}
if (!time_dim %in% names(dim(dates)) | !leadtime_dim %in% names(dim(dates))) {
stop("Parameter 'dates' must have 'time_dim' and 'leadtime_dim' ",
"dimension.")
}
if (dim(data)[c(time_dim)] != dim(dates)[c(time_dim)]) {
stop("Parameter 'dates' needs to have the same length of 'time_dim' ",
"as 'data'.")
}
}
## time_dim
if (!is.character(time_dim) | length(time_dim) != 1) {
stop("Parameter 'time_dim' must be a character string.")
}
if (!time_dim %in% names(dim(data))) {
stop("Parameter 'time_dim' is not found in 'data' dimension.")
}
if (!is.null(data_cor)) {
if (!time_dim %in% names(dim(data_cor))) {
stop("Parameter 'time_dim' is not found in 'data_cor' dimension.")
}
}
## leadtime_dim
if (!is.character(leadtime_dim) | length(leadtime_dim) != 1) {
stop("Parameter 'leadtime_dim' must be a character string.")
}
if (!leadtime_dim %in% names(dim(data))) {
stop("Parameter 'leadtime_dim' is not found in 'data' dimension.")
}
if (!is.null(data_cor)) {
if (!leadtime_dim %in% names(dim(data_cor))) {
stop("Parameter 'leadtime_dim' is not found in 'data_cor' dimension.")
}
}
## memb_dim
if (!is.character(memb_dim) | length(memb_dim) != 1) {
stop("Parameter 'memb_dim' must be a character string.")
}
if (!memb_dim %in% names(dim(data))) {
stop("Parameter 'memb_dim' is not found in 'data' dimension.")
}
if (!is.null(data_cor)) {
if (!memb_dim %in% names(dim(data_cor))) {
stop("Parameter 'memb_dim' is not found in 'data_cor' dimension.")
}
}
## data_cor (2)
if (!is.null(data_cor)) {
if (dim(data)[leadtime_dim] != dim(data_cor)[leadtime_dim]) {
stop("Parameter 'data' and 'data_cor' have dimension 'leadtime_dim' ",
"of different length.")
}
}
## ref_period
  if (!is.null(ref_period)) {
    if (is.null(dates)) {
      warning("Parameter 'dates' is not provided so 'ref_period' can't be ",
              "used.")
      ref_period <- NULL
    } else if (length(ref_period) != 2) {
      warning("Parameter 'ref_period' must be of length two indicating the ",
              "first and end years of the reference period. It will not ",
              "be used.")
      ref_period <- NULL
    } else if (!all(sapply(ref_period, is.numeric))) {
      warning("Parameter 'ref_period' must be a numeric vector indicating the ",
              "'start' and 'end' years of the reference period. It will not ",
              "be used.")
      ref_period <- NULL
    } else if (ref_period[[1]] > ref_period[[2]]) {
      warning("In parameter 'ref_period' 'start' cannot be after 'end'. It ",
              "will not be used.")
      ref_period <- NULL
    } else if (!all(unlist(ref_period) %in% format(dates, "%Y"))) {
      warning("Parameter 'ref_period' contains years outside the dates. ",
              "It will not be used.")
      ref_period <- NULL
    } else {
      years <- format(Subset(dates, along = leadtime_dim, indices = 1), "%Y")
      ref_period[[1]] <- which(ref_period[[1]] == years)
      ref_period[[2]] <- which(ref_period[[2]] == years)
    }
  }
## handle_infinity
if (!is.logical(handle_infinity)) {
stop("Parameter 'handle_infinity' must be a logical value.")
}
## method
if (!(method %in% c('parametric', 'non-parametric'))) {
stop("Parameter 'method' must be a character string containing one of ",
"the following methods: 'parametric' or 'non-parametric'.")
}
## distribution
if (!(distribution %in% c('log-Logistic', 'Gamma', 'PearsonIII'))) {
stop("Parameter 'distribution' must be a character string containing one ",
"of the following distributions: 'log-Logistic', 'Gamma' or ",
"'PearsonIII'.")
}
## params
if (!is.null(params)) {
if (!is.numeric(params)) {
stop("Parameter 'params' must be numeric.")
}
if (!all(c(time_dim, leadtime_dim, 'coef') %in% names(dim(params)))) {
stop("Parameter 'params' must be a multidimensional array with named ",
"dimensions: '", time_dim, "', '", leadtime_dim, "' and 'coef'.")
}
dims_data <- dim(data)[-which(names(dim(data)) == memb_dim)]
dims_params <- dim(params)[-which(names(dim(params)) == 'coef')]
if (!all(dims_data == dims_params)) {
stop("Parameter 'data' and 'params' must have same common dimensions ",
"except 'memb_dim' and 'coef'.")
}
if (distribution == "Gamma") {
if (dim(params)['coef'] != 2) {
stop("For '", distribution, "' distribution, params array should have ",
"'coef' dimension of length 2.")
}
} else {
if (dim(params)['coef'] != 3) {
stop("For '", distribution, "' distribution, params array should have ",
"'coef' dimension of length 3.")
}
}
}
## return_params
if (!is.logical(return_params)) {
stop("Parameter 'return_params' must be logical.")
}
## na.rm
if (!is.logical(na.rm)) {
stop("Parameter 'na.rm' must be logical.")
}
## ncores
if (!is.null(ncores)) {
    if (!is.numeric(ncores) | any(ncores %% 1 != 0) | any(ncores < 1) |
        length(ncores) > 1) {
stop("Parameter 'ncores' must be a positive integer.")
}
}
if (is.null(ref_period)) {
ref_start <- NULL
ref_end <- NULL
} else {
ref_start <- ref_period[[1]]
ref_end <- ref_period[[2]]
}
# Standardization
if (is.null(data_cor)) {
if (is.null(params)) {
res <- Apply(data = list(data),
target_dims = c(leadtime_dim, time_dim, memb_dim),
fun = .standardization, data_cor = NULL, params = NULL,
leadtime_dim = leadtime_dim, time_dim = time_dim,
ref_start = ref_start, ref_end = ref_end,
handle_infinity = handle_infinity,
method = method, distribution = distribution,
return_params = return_params,
na.rm = na.rm, ncores = ncores)
} else {
res <- Apply(data = list(data = data, params = params),
target_dims = list(data = c(leadtime_dim, time_dim, memb_dim),
params = c(leadtime_dim, time_dim, 'coef')),
fun = .standardization, data_cor = NULL,
leadtime_dim = leadtime_dim, time_dim = time_dim,
ref_start = ref_start, ref_end = ref_end,
handle_infinity = handle_infinity,
method = method, distribution = distribution,
return_params = return_params,
na.rm = na.rm, ncores = ncores)
}
} else {
res <- Apply(data = list(data = data, data_cor = data_cor),
target_dims = c(leadtime_dim, time_dim, memb_dim),
fun = .standardization, params = NULL,
leadtime_dim = leadtime_dim, time_dim = time_dim,
ref_start = ref_start, ref_end = ref_end,
handle_infinity = handle_infinity,
method = method, distribution = distribution,
return_params = return_params,
na.rm = na.rm, ncores = ncores)
}
if (return_params) {
spei <- res$spei
params <- res$params
} else {
spei <- res$output1
}
if (is.null(data_cor)) {
pos <- match(names(dim(data)), names(dim(spei)))
spei <- aperm(spei, pos)
} else {
pos <- match(names(dim(data_cor)), names(dim(spei)))
spei <- aperm(spei, pos)
}
if (return_params) {
pos <- match(c(names(dim(spei))[-which(names(dim(spei)) == memb_dim)], 'coef'),
names(dim(params)))
params <- aperm(params, pos)
return(list('spei' = spei, 'params' = params))
} else {
return(spei)
}
}
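# Internal function: standardizes one array slice of dimensions
# [leadtime_dim, time_dim, memb_dim], optionally returning the fitted
# distribution parameters.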
.standardization <- function(data, data_cor = NULL, params = NULL,
leadtime_dim = 'time', time_dim = 'syear',
ref_start = NULL, ref_end = NULL, handle_infinity = FALSE,
method = 'parametric', distribution = 'log-Logistic',
return_params = FALSE, na.rm = FALSE) {
# data (data_cor): [leadtime_dim, time_dim, memb_dim]
dims <- dim(data)[-1]
  fit <- 'ub-pwm'
  coef <- switch(distribution,
                 "Gamma" = array(NA, dim = 2, dimnames = list(c('alpha', 'beta'))),
                 "log-Logistic" = array(NA, dim = 3, dimnames = list(c('xi', 'alpha', 'kappa'))),
                 "PearsonIII" = array(NA, dim = 3, dimnames = list(c('mu', 'sigma', 'gamma'))))
if (is.null(data_cor)) {
# cross_val = TRUE
spei_mod <- data*NA
if (return_params) {
params_result <- array(dim = c(dim(data)[-length(dim(data))], coef = length(coef)))
}
for (ff in 1:dim(data)[leadtime_dim]) {
data2 <- data[ff, , ]
dim(data2) <- dims
if (method == 'non-parametric') {
        bp <- matrix(0, length(data2), 1)
        for (i in 1:length(data2)) {
          bp[i, 1] <- sum(data2[] <= data2[i], na.rm = na.rm) # empirical rank of each value
        }
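        # Gringorten plotting positions mapped to standard normal quantiles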
std_index <- qnorm((bp - 0.44)/(length(data2) + 0.12))
dim(std_index) <- dims
spei_mod[ff, , ] <- std_index
} else {
if (!is.null(ref_start) && !is.null(ref_end)) {
data_fit <- window(data2, ref_start, ref_end)
} else {
data_fit <- data2
}
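        # Leave-one-out: for each time step 'nsd', fit the parameters on the
        # remaining time steps and standardize 'nsd' with that fit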
for (nsd in 1:dim(data)[time_dim]) {
if (is.null(params)) {
acu <- as.vector(data_fit[-nsd, ])
if (na.rm) {
acu_sorted <- sort.default(acu, method = "quick")
} else {
acu_sorted <- sort.default(acu, method = "quick", na.last = TRUE)
}
f_params <- NA
if (!any(is.na(acu_sorted)) & length(acu_sorted) != 0) {
acu_sd <- sd(acu_sorted)
if (!is.na(acu_sd) & acu_sd != 0) {
if (distribution != "log-Logistic") {
acu_sorted <- acu_sorted[acu_sorted > 0]
}
if (length(acu_sorted) >= 4) {
f_params <- .std(data = acu_sorted, fit = fit,
distribution = distribution)
}
}
}
} else {
f_params <- params[ff, nsd, ]
}
if (all(is.na(f_params))) {
cdf_res <- NA
} else {
f_params <- f_params[which(!is.na(f_params))]
          cdf_res <- switch(distribution,
                            "log-Logistic" = lmom::cdfglo(data2, f_params),
                            "Gamma" = lmom::cdfgam(data2, f_params),
                            "PearsonIII" = lmom::cdfpe3(data2, f_params))
}
std_index_cv <- array(qnorm(cdf_res), dim = dims)
spei_mod[ff, nsd, ] <- std_index_cv[nsd, ]
if (return_params) params_result[ff, nsd, ] <- f_params
}
}
}
} else {
# cross_val = FALSE
spei_mod <- data_cor*NA
dimscor <- dim(data_cor)[-1]
if (return_params) {
params_result <- array(dim = c(dim(data_cor)[-length(dim(data_cor))], coef = length(coef)))
}
for (ff in 1:dim(data)[leadtime_dim]) {
data_cor2 <- data_cor[ff, , ]
dim(data_cor2) <- dimscor
if (method == 'non-parametric') {
        bp <- matrix(0, length(data_cor2), 1)
        for (i in 1:length(data_cor2)) {
          bp[i, 1] <- sum(data_cor2[] <= data_cor2[i], na.rm = na.rm) # empirical rank of each value
        }
std_index <- qnorm((bp - 0.44)/(length(data_cor2) + 0.12))
dim(std_index) <- dimscor
spei_mod[ff, , ] <- std_index
} else {
data2 <- data[ff, , ]
dim(data2) <- dims
if (!is.null(ref_start) && !is.null(ref_end)) {
data_fit <- window(data2, ref_start, ref_end)
} else {
data_fit <- data2
}
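        # Fit the parameters on the full 'data' period and apply them to 'data_cor'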
acu <- as.vector(data_fit)
if (na.rm) {
acu_sorted <- sort.default(acu, method = "quick")
} else {
acu_sorted <- sort.default(acu, method = "quick", na.last = TRUE)
}
if (!any(is.na(acu_sorted)) & length(acu_sorted) != 0) {
acu_sd <- sd(acu_sorted)
if (!is.na(acu_sd) & acu_sd != 0) {
if (distribution != "log-Logistic") {
acu_sorted <- acu_sorted[acu_sorted > 0]
}
          f_params <- NA # ensure 'f_params' exists when too few values are available
          if (length(acu_sorted) >= 4) {
            f_params <- .std(data = acu_sorted, fit = fit,
                             distribution = distribution)
          }
if (all(is.na(f_params))) {
cdf_res <- NA
} else {
f_params <- f_params[which(!is.na(f_params))]
            cdf_res <- switch(distribution,
                              "log-Logistic" = lmom::cdfglo(data_cor2, f_params),
                              "Gamma" = lmom::cdfgam(data_cor2, f_params),
                              "PearsonIII" = lmom::cdfpe3(data_cor2, f_params))
}
std_index_cv <- array(qnorm(cdf_res), dim = dimscor)
spei_mod[ff, , ] <- std_index_cv
if (return_params) params_result[ff, , ] <- f_params
}
}
}
}
}
if (handle_infinity) {
    # Replace -Inf/Inf with the minimum/maximum finite values of the index
spei_mod[is.infinite(spei_mod) & spei_mod < 0] <- min(spei_mod[!is.infinite(spei_mod)])
spei_mod[is.infinite(spei_mod) & spei_mod > 0] <- max(spei_mod[!is.infinite(spei_mod)])
}
if (return_params) {
return(list(spei = spei_mod, params = params_result))
} else {
return(spei_mod)
}
}
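# Internal function: fits the distribution parameters from probability-weighted
# moments using 'lmomco' and 'lmom'; for 'log-Logistic' with fit = 'max-lik',
# the fit is refined by maximum likelihood via SPEI::parglo.maxlik.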
.std <- function(data, fit = 'pp-pwm', distribution = 'log-Logistic') {
  pwm <- switch(fit,
                'pp-pwm' = lmomco::pwm.pp(data, -0.35, 0, nmom = 3),
                lmomco::pwm.ub(data, nmom = 3)
                # TLMoments::PWM(data, order = 0:2)
  )
lmom <- lmomco::pwm2lmom(pwm)
if (!any(!lmomco::are.lmom.valid(lmom), anyNA(lmom[[1]]), any(is.nan(lmom[[1]])))) {
    fortran_vec <- c(lmom$lambdas[1:2], lmom$ratios[3])
    params_result <- switch(distribution,
                            'log-Logistic' = tryCatch(lmom::pelglo(fortran_vec),
                                                      error = function(e) {lmomco::parglo(lmom)$para}),
                            'Gamma' = tryCatch(lmom::pelgam(fortran_vec),
                                               error = function(e) {lmomco::pargam(lmom)$para}),
                            'PearsonIII' = tryCatch(lmom::pelpe3(fortran_vec),
                                                    error = function(e) {lmomco::parpe3(lmom)$para}))
if (distribution == 'log-Logistic' && fit == 'max-lik') {
      params_result <- SPEI::parglo.maxlik(data, params_result)$para
}
return(params_result)
} else {
return(NA)
}
}