#' Create a new coin
#'
#' Creates a new "coin" class object, or a "purse" class object (time-indexed collection of coins).
#' A purse class object is created if panel data is supplied. Coins and purses are the main object classes
#' used in COINr, although a number of functions also support other classes such as data frames and vectors.
#'
#' A coin object is fundamentally created by passing two data frames to [new_coin()]:
#' `iData` which specifies the data points for each unit and indicator, as well as other optional
#' variables; and `iMeta` which specifies details about each indicator/variable found in `iData`,
#' including its type, name, position in the index, units, and other properties.
#'
#' These data frames need to follow fairly strict requirements regarding their format and consistency.
#' Run [check_iData()] and [check_iMeta()] to validate your data frames, and these should generate helpful
#' error messages when things go wrong.
#'
#' It is worth reading a little about coins and purses to use COINr. See `vignette("coins")` for more details.
#'
#' ## `iData`
#'
#' `iData` should be a data frame with required column
#' `uCode` which gives the code assigned to each unit (alphanumeric, not starting with a number). All other
#' columns are defined by corresponding entries in `iMeta`, with the following special exceptions:
#'
#' * `Time` is an optional column which allows panel data to be input, consisting of e.g. multiple rows for
#' each `uCode`: one for each `Time` value. This can be used to split a set of panel data into multiple coins
#' (a so-called "purse") which can be input to COINr functions.
#' * `uName` is an optional column which specifies a longer name for each unit. If this column is not included,
#' unit codes (`uCode`) will be used as unit names where required.
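#'
#' As a minimal sketch (hypothetical unit and indicator codes), `iData` could look like this:
#'
#' ```
#' iData <- data.frame(
#'   uCode = c("AUT", "BEL"),         # unit codes (required)
#'   uName = c("Austria", "Belgium"), # optional unit names
#'   Ind1  = c(4.5, 3.2),             # indicator columns, defined in iMeta
#'   Ind2  = c(70, 65)
#' )
#' ```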
#'
#' ## `iMeta`
#'
#' Required columns for `iMeta` are:
#'
#' * `Level`: Level in aggregation, where 1 is indicator level, 2 is the level resulting from aggregating
#' indicators, 3 is the result of aggregating level 2, and so on. Set to `NA` for entries that are not included
#' in the index (groups, denominators, etc).
#' * `iCode`: Indicator code, alphanumeric. Must not start with a number or contain blank spaces.
#' * `Parent`: Group (`iCode`) to which indicator/aggregate belongs in level immediately above.
#' Each entry here should also be found in `iCode`. Set to `NA` only
#' for the highest (Index) level (no parent), or for entries that are not included
#' in the index (groups, denominators, etc).
#' * `Direction`: Numeric, either -1 or 1.
#' * `Weight`: Numeric weight, will be rescaled to sum to 1 within aggregation group. Set to `NA` for entries that are not included
#' in the index (groups, denominators, etc).
#' * `Type`: The type, corresponding to `iCode`. Can be either `Indicator`, `Aggregate`, `Group`, `Denominator`,
#' or `Other`.
#'
#' Optional columns that are recognised in certain functions are:
#'
#' * `iName`: Name of the indicator: a longer name which is used in some plotting functions.
#' * `Unit`: the unit of the indicator, e.g. USD, thousands, score, etc. Used in some plots if available.
#' * `Target`: a target for the indicator. Used if normalisation type is distance-to-target.
#'
#' The `iMeta` data frame essentially gives details about each of the columns found in `iData`, as well as
#' details about additional data columns that are later created by aggregating indicators. This means that the
#' entries in `iMeta` must include *all* columns in `iData`, *except* the three special column names: `uCode`,
#' `uName`, and `Time`. In other words, all column names of `iData` should appear in `iMeta$iCode`, except
#' the three special cases mentioned. The `iName` column can optionally be used to give a longer name to each
#' indicator, which can be used for display in plots.
#'
#' `iMeta` also specifies the structure of the index, by specifying the parent of each indicator and aggregate.
#' The `Parent` column must refer to entries that can be found in `iCode`. Try `View(ASEM_iMeta)` for an example
#' of how this works.
#'
#' `Level` is the "vertical" level in the hierarchy, where 1 is the bottom level (indicators), and each successive
#' level is created by aggregating the level below according to its specified groups.
#'
#' `Direction` is set to 1 if higher values of the indicator should result in higher values of the index, and
#' -1 in the opposite case.
#'
#' The `Type` column specifies the type of the entry: `Indicator` should be used for indicators at level 1.
#' `Aggregate` for aggregates created by aggregating indicators or other aggregates. Otherwise set to `Group`
#' if the variable is not used for building the index but instead is for defining groups of units. Set to
#' `Denominator` if the variable is to be used for scaling (denominating) other indicators. Finally, set to
#' `Other` if the variable should be ignored but passed through. Any other entries here will cause an error.
#'
#' Note: this function requires the columns above as specified, but extra columns can also be added without
#' causing errors.
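#'
#' As a minimal sketch, an `iMeta` table matching the two hypothetical indicators above, with a single
#' aggregation level on top, could look like this:
#'
#' ```
#' iMeta <- data.frame(
#'   iCode     = c("Ind1", "Ind2", "Index"),
#'   Level     = c(1, 1, 2),
#'   iName     = c("Indicator 1", "Indicator 2", "Index"),
#'   Parent    = c("Index", "Index", NA),
#'   Direction = c(1, 1, 1),
#'   Weight    = c(1, 1, 1),
#'   Type      = c("Indicator", "Indicator", "Aggregate")
#' )
#' ```
#'
#' This pair of data frames could then be passed as `new_coin(iData, iMeta)`.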
#'
#' ## Other arguments
#'
#' The `exclude` argument can be used to exclude specified indicators. If this is specified, `.$Data$Raw`
#' will be built excluding these indicators, as will all subsequent build operations. However the full data set
#' will still be stored in `.$Log$new_coin`. The codes here should correspond to entries in the `iMeta$iCode`.
#' This option is useful e.g. in generating alternative coins with different indicator sets, and can be included
#' as a variable in a sensitivity analysis.
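#'
#' As a sketch (assuming `"Ind1"` is an indicator code present in `iMeta$iCode`), excluding an indicator
#' looks like:
#'
#' ```
#' coin <- new_coin(iData, iMeta, exclude = "Ind1")
#' ```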
#'
#' The `split_to` argument allows panel data to be used. Panel data must have a `Time` column in `iData`, which
#' consists of some numerical time variable, such as a year. Panel data has multiple observations for each `uCode`,
#' one for each unique entry in `Time`. The `Time` column is required to be numerical, because it needs to be
#' possible to order it. To split panel data, specify `split_to = "all"` to split to a single coin for each
#' of the unique entries in `Time`. Alternatively, pass a vector of entries in `Time` to split to only
#' that subset of time points.
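#'
#' For example, to split only to two (hypothetical) time points found in `iData$Time`:
#'
#' ```
#' purse <- new_coin(iData, iMeta, split_to = c(2019, 2020))
#' ```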
#'
#' Splitting panel data results in a so-called "purse" class, which is a data frame of COINs, indexed by `Time`.
#' See `vignette("coins")` for more details.
#'
#' This function replaces the now-defunct `assemble()` from COINr < v1.0.
#'
#' @param iData The indicator data and metadata of each unit
#' @param iMeta Indicator metadata
#' @param exclude Optional character vector of any indicator codes (`iCode`s) to exclude from the coin(s).
#' @param split_to This is used to split panel data into multiple coins, a so-called "purse". Should be either
#' `"all"`, or a subset of entries in `iData$Time`. See Details.
#' @param level_names Optional character vector of names of levels. Must have length equal to the number of
#' levels in the hierarchy (`max(iMeta$Level, na.rm = TRUE)`).
#' @param quietly If `TRUE`, suppresses all messages
#'
#' @examples
#' # build a coin using example data frames
#' ASEM_coin <- new_coin(iData = ASEM_iData,
#' iMeta = ASEM_iMeta,
#' level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
#' # view coin contents
#' ASEM_coin
#'
#' # build example purse class
#' ASEM_purse <- new_coin(iData = ASEM_iData_p,
#' iMeta = ASEM_iMeta,
#' split_to = "all",
#' quietly = TRUE)
#' # view purse contents
#' ASEM_purse
#'
#' # see vignette("coins") for further info
#'
#' @return A "coin" object or a "purse" object.
#'
#' @export
new_coin <- function(iData, iMeta, exclude = NULL, split_to = NULL,
level_names = NULL, quietly = FALSE){
# WRITE TO LOG ------------------------------------------------------------
coin <- vector(mode = "list", length = 0)
coin <- write_log(coin)
# OVERALL CHECKS ----------------------------------------------------------
# individual dfs
check_iData(iData, quietly = quietly)
check_iMeta(iMeta, quietly = quietly)
# convert any tibbles to normal dfs.
if(inherits(iData, "tbl_df")){
iData <- as.data.frame(iData)
}
if(inherits(iMeta, "tbl_df")){
iMeta <- as.data.frame(iMeta)
}
# change any integer to numeric
iData_codes <- colnames(iData)[colnames(iData) %nin% c("uCode", "uName", "Time")]
iData[iData_codes] <- df_int_2_numeric(iData[iData_codes])
# CROSS CHECKS
# Make sure iData codes are all in iMeta, excluding special codes
if(any(iData_codes %nin% iMeta$iCode)){
stop("Column names from iData not found in iMeta (excluding special columns).")
}
if(any(iMeta$iCode[iMeta$Type != "Aggregate"] %nin% colnames(iData))){
stop("Entries in iMeta$iCode not found in colnames(iData).")
}
# we need indicator codes
iCodes <- iMeta$iCode[iMeta$Type == "Indicator"]
non_numeric_inds <- !(sapply(iData[iCodes], is.numeric))
if(any(non_numeric_inds)){
stop("Non-numeric indicators detected. The following have been labelled as 'Indicator' but refer to non-numeric columns in iData (not allowed): \n", paste(iCodes[non_numeric_inds], collapse = ", " ),
"\n This may occur if you have imported data with NAs read as strings.")
}
# check for any parents with no children
icodes_agg <- iMeta$iCode[iMeta$Type == "Aggregate"]
icodes_agg_nokids <- icodes_agg[icodes_agg %nin% iMeta$Parent]
if(length(icodes_agg_nokids) > 0){
stop("Aggregate iCode(s) found in iMeta that do not have any children (not named in 'Parent' column). Codes: ", paste0(icodes_agg_nokids, collapse = ", "), call. = FALSE)
}
# EXCLUDE INDICATORS ------------------------------------------------------
# Optionally exclude any specified indicators
if(!is.null(exclude)){
stopifnot(is.character(exclude))
if(any(exclude %nin% iMeta$iCode)){
stop("One or more entries in exclude not found in iMeta$iCode...")
}
iData <- iData[colnames(iData) %nin% exclude]
iMeta <- iMeta[iMeta$iCode %nin% exclude, ]
# if removing indicators results in empty aggregation groups
# (childless parents) we have to remove these. Otherwise when
# we aggregate, there are aggregation groups with nothing to aggregate.
childless <- (iMeta$Level > 1) & (iMeta$iCode %nin% iMeta$Parent)
childless[is.na(childless)] <- FALSE
iMeta <- iMeta[!childless, ]
# get iCodes again
iCodes <- iMeta$iCode[iMeta$Type == "Indicator"]
iCodes <- iCodes[!is.na(iCodes)]
}
# GENERATE DEFAULT NAMES --------------------------------------------------
# default names are codes
if(is.null(iData$uName)){
iData$uName <- iData$uCode
}
if(is.null(iMeta$iName)){
iMeta$iName <- iMeta$iCode
}
# SORT DFS ----------------------------------------------------------------
# also all codes not indicator codes (excluding uCode)
not_icodes <- names(iData)[names(iData) %nin% c("uCode", iCodes)]
# This is not strictly necessary but may help later on
iMeta <- iMeta[order(iMeta$Level, iMeta$Parent),]
iData <- iData[c("uCode", not_icodes, iCodes)]
# iData sorting depends on if we have panel data
is_panel <- length(unique(iData$Time)) > 1
if(is_panel){
iData <- iData[order(iData$Time, iData$uCode),]
} else {
iData <- iData[order(iData$uCode),]
}
# SPLIT PANEL DATA --------------------------------------------------------
# NOTE: splitting may cause different numbers of units in each coin, but the number of indicators
# should always be the same, even if some will have all-NAs.
# NOTE: we need to include the year imputation at some point here.
if(!is.null(split_to)){
# make sure we can split first
if(!is_panel){
stop("Cannot split to multiple coins because either iData$Time doesn't exist, or you have only
one unique entry in iData$Time.")
}
# now split
iData_list <- split_iData(iData, split_to = split_to)
# check
suppressMessages(lapply(iData_list, check_iData))
} else {
if(is_panel){
stop("Panel data detected, but you have not specified split_to - please specify this.")
}
iData_list <- list(iData)
}
# BUILD COINS -------------------------------------------------------------
# First make some mods to the "base" coin which are same for all coins
coin$Meta$Ind <- iMeta
coin$Meta$Lineage <- get_lineage(iMeta, level_names = level_names)
coin$Meta$maxlev <- max(iMeta$Level, na.rm = TRUE)
coin$Meta$Weights$Original <- iMeta[iMeta$Type %in% c("Indicator", "Aggregate"),
c("iCode", "Level", "Weight")]
# we also need to forget about splitting, as if we regenerate one of
# the coins in the purse, this would cause an error
coin$Log$new_coin$split_to <- NULL
# by default we assume the coins can be regenerated. This will only be not TRUE
# if we run some global purse methods, like normalise
coin$Log$can_regen <- TRUE
coinmaker <- function(iDatai){
# copy the "global" coin
coin_i <- coin
# Store data (only uCode plus indicators)
coin_i <- write_dset(coin_i, iDatai[c("uCode", iCodes)], dset = "Raw",
ignore_class = TRUE, quietly = quietly)
if(is_panel){
# alter Log to only include iData of the COIN (not whole panel)
coin_i$Log$new_coin$iData <- iDatai
}
# Extract denominators, groups and other non-indicator cols
coin_i$Meta$Unit <- iDatai[c("uCode", not_icodes)]
# class
class(coin_i) <- "coin"
# return
coin_i
}
# now run coinmaker on list of iData
coins <- lapply(iData_list, coinmaker)
# TWEAKS AND OUTPUT -------------------------------------------------------
# squash to single coin if only one in the list
if(length(coins)==1){
f_output <- coins[[1]]
} else {
# get time value for each coin
coin_times <- sapply(coins, function(x){
unique(x$Meta$Unit$Time)
})
f_output <- data.frame(Time = coin_times)
f_output$coin <- coins
class(f_output) <- c("purse", "data.frame")
}
f_output
}
#' Check iData
#'
#' Checks the format of `iData` input to [new_coin()]. This check must be passed to successfully build a new
#' coin.
#'
#' The restrictions on `iData` are not extensive. It should be a data frame with only one required column
#' `uCode` which gives the code assigned to each unit (alphanumeric, not starting with a number). All other
#' columns are defined by corresponding entries in `iMeta`, with the following special exceptions:
#'
#' * `Time` is an optional column which allows panel data to be input, consisting of e.g. multiple rows for
#' each `uCode`: one for each `Time` value. This can be used to split a set of panel data into multiple coins
#' (a so-called "purse") which can be input to COINr functions. See [new_coin()] for more details.
#' * `uName` is an optional column which specifies a longer name for each unit. If this column is not included,
#' unit codes (`uCode`) will be used as unit names where required.
#'
#' No column names should contain blank spaces.
#'
#' @param iData A data frame of indicator data.
#' @param quietly Set `TRUE` to suppress message if input is valid.
#'
#' @examples
#' check_iData(ASEM_iData)
#'
#' @return Message if everything ok, else error messages.
#'
#' @export
check_iData <- function(iData, quietly = FALSE){
# check is df
stopifnot(is.data.frame(iData))
# if tibble, convert (no alarms and no surprises)
if(inherits(iData, "tbl_df")){
iData <- as.data.frame(iData)
}
# REQUIRED COLS -----------------------------------------------------------
# Required cols are in fact only uCode
required_cols <- c("uCode")
# check present
if(any(required_cols %nin% colnames(iData))){
stop("One or more expected col names not found (", required_cols, ").")
}
# check type
if(!is.character(iData$uCode)){
stop("uCode is required to be a character vector.")
}
# SPECIAL COLS ------------------------------------------------------------
# Special cols are those that are not REQUIRED but defined in iMeta
# Time
if(!is.null(iData[["Time"]])){
if(!is.numeric(iData$Time)){
stop("iData$Time is required to be a numeric vector.")
}
# flag if panel data: more than one unique value in Time
is_panel <- length(unique(iData$Time)) > 1
} else {
is_panel <- FALSE
}
# uName
if(!is.null(iData[["uName"]])){
if(!is.character(iData$uName)){
stop("iData$uName is required to be a character vector.")
}
}
# DUPLICATES --------------------------------------------------------------
# Check unique uCodes
# This is different depending on whether iData is panel data or not
if(is_panel){
if(anyDuplicated(iData[c("uCode", "Time")]) > 1){
stop("Duplicate uCode/Time pairs found.")
}
} else {
if(anyDuplicated(iData$uCode) > 1){
stop("Duplicates detected in iData$uCode.")
}
}
# Check unique colnames
if(anyDuplicated(colnames(iData)) > 1){
stop("Duplicates detected in colnames(iData).")
}
# check uCode and colnames don't overlap
if(length(intersect(unique(iData$uCode), colnames(iData) ))>0){
stop("uCode and colnames(iData) contain overlapping codes.")
}
# Spaces and numbers ------------------------------------------------------
cnames <- names(iData)
# should not contain spaces
spaces <- grepl(" ", cnames)
if(any(spaces)){
stop("One or more column names has a blank space - this causes problems and is not allowed.")
}
# should not start with a number
num_start <- substring(cnames, 1,1) %in% 0:9
if(any(num_start)){
stop("One or more column names begins with a number - this causes problems and is not allowed.")
}
# OUTPUT ------------------------------------------------------------------
if(!quietly){
message("iData checked and OK.")
}
}
#' Check iMeta
#'
#' Checks the format of `iMeta` input to [new_coin()]. This performs a series of thorough checks to make sure
#' that `iMeta` agrees with the specifications. This also includes checks to make sure the structure makes
#' sense, there are no duplicates, and other things. `iMeta` must pass this check to build a new coin.
#'
#' Required columns for `iMeta` are:
#'
#' * `Level`: Level in aggregation, where 1 is indicator level, 2 is the level resulting from aggregating
#' indicators, 3 is the result of aggregating level 2, and so on. Set to `NA` for entries that are not included
#' in the index (groups, denominators, etc).
#' * `iCode`: Indicator code, alphanumeric. Must not start with a number or contain blank spaces.
#' * `Parent`: Group (`iCode`) to which indicator/aggregate belongs in level immediately above.
#' Each entry here should also be found in `iCode`. Set to `NA` only
#' for the highest (Index) level (no parent), or for entries that are not included
#' in the index (groups, denominators, etc).
#' * `Direction`: Numeric, either -1 or 1.
#' * `Weight`: Numeric weight, will be rescaled to sum to 1 within aggregation group. Set to `NA` for entries that are not included
#' in the index (groups, denominators, etc).
#' * `Type`: The type, corresponding to `iCode`. Can be either `Indicator`, `Aggregate`, `Group`, `Denominator`,
#' or `Other`.
#'
#' Optional columns that are recognised in certain functions are:
#'
#' * `iName`: Name of the indicator: a longer name which is used in some plotting functions.
#' * `Unit`: the unit of the indicator, e.g. USD, thousands, score, etc. Used in some plots if available.
#' * `Target`: a target for the indicator. Used if normalisation type is distance-to-target.
#'
#' The `iMeta` data frame essentially gives details about each of the columns found in `iData`, as well as
#' details about additional data columns that are later created by aggregating indicators. This means that the
#' entries in `iMeta` must include *all* columns in `iData`, *except* the three special column names: `uCode`,
#' `uName`, and `Time`. In other words, all column names of `iData` should appear in `iMeta$iCode`, except
#' the three special cases mentioned. The `iName` column can optionally be used to give a longer name to each
#' indicator, which can be used for display in plots.
#'
#' `iMeta` also specifies the structure of the index, by specifying the parent of each indicator and aggregate.
#' The `Parent` column must refer to entries that can be found in `iCode`. Try `View(ASEM_iMeta)` for an example
#' of how this works.
#'
#' `Level` is the "vertical" level in the hierarchy, where 1 is the bottom level (indicators), and each successive
#' level is created by aggregating the level below according to its specified groups.
#'
#' `Direction` is set to 1 if higher values of the indicator should result in higher values of the index, and
#' -1 in the opposite case.
#'
#' The `Type` column specifies the type of the entry: `Indicator` should be used for indicators at level 1.
#' `Aggregate` for aggregates created by aggregating indicators or other aggregates. Otherwise set to `Group`
#' if the variable is not used for building the index but instead is for defining groups of units. Set to
#' `Denominator` if the variable is to be used for scaling (denominating) other indicators. Finally, set to
#' `Other` if the variable should be ignored but passed through. Any other entries here will cause an error.
#'
#' Note: this function requires the columns above as specified, but extra columns can also be added without
#' causing errors.
#'
#' @param iMeta A data frame of indicator metadata. See details.
#' @param quietly Set `TRUE` to suppress message if input is valid.
#'
#' @examples
#' check_iMeta(ASEM_iMeta)
#'
#' @return Message if everything ok, else error messages.
#'
#' @export
check_iMeta <- function(iMeta, quietly = FALSE){
# INITIAL CHECKS ----------------------------------------------------------
# check is df
stopifnot(is.data.frame(iMeta))
# if tibble, convert (no alarms and no surprises)
if(inherits(iMeta, "tbl_df")){
iMeta <- as.data.frame(iMeta)
}
# REQUIRED COLS -----------------------------------------------------------
# required cols
required_cols <- c("Level", "iCode", "Parent", "Direction", "Type", "Weight")
if(!all(required_cols %in% colnames(iMeta))){
stop("One or more expected col names not found (Level, iCode, Parent, Direction, Type, Weight).")
}
# check col types
col_numeric <- c("Level", "Direction", "Weight")
col_char <- setdiff(required_cols, col_numeric)
# numeric
num_check <- sapply(iMeta[col_numeric], is.numeric)
if(!all(num_check)){
stop(paste0("One or more of the following columns is not numeric: ", paste0(col_numeric, collapse = "/")))
}
# char
char_check <- sapply(iMeta[col_char], is.character)
if(!all(char_check)){
stop(paste0("One or more of the following columns is not character: ", paste0(col_char, collapse = "/")))
}
# SPECIFIC COL CHECKS -----------------------------------------------------
# Level should be a positive integer (not expecting more than 1000 levels)
levs <- unique(iMeta$Level[!is.na(iMeta$Level)])
if(any(levs %nin% 1:1000)){
stop("Level column has unexpected entries. Expected as positive integers.")
}
# Level should not skip any levels
maxlev <- max(levs, na.rm = TRUE)
if(!setequal(levs, 1:maxlev)){
stop("Level column has missing entries between 1 and max(Level).")
}
# where Type is Aggregate, level must be above 1
level1_aggs <- iMeta$Level[iMeta$Type == "Aggregate"] == 1
if(any(level1_aggs)){
stop("One or more entries in iMeta$Level is 1 where iMeta$Type is 'Aggregate'. Aggregates must have level 2 or higher.")
}
# iCode should have no duplicates
duplicate_codes <- iMeta$iCode[duplicated(iMeta$iCode)]
if(length(duplicate_codes) != 0){
stop("Duplicate entries in iCode: ", paste0(duplicate_codes, collapse = ", "))
}
# iCode should not start with a number
num_start <- substring(iMeta$iCode, 1,1) %in% 0:9
if(any(num_start)){
stop("One or more entries in iCode begins with a number - this causes problems and is not allowed.")
}
# iCode no NAs are allowed
if(any(is.na(iMeta$iCode))){
stop("NAs found in iCode - NAs are not allowed.")
}
# iCode no spaces
spaces <- grepl(" ", iMeta$iCode)
if(any(spaces)){
stop("One or more entries in iCode has a blank space - this causes problems and is not allowed.")
}
# Direction should only be -1 or 1
dirs <- iMeta$Direction[!is.na(iMeta$Direction)]
if(any(dirs %nin% c(-1, 1))){
stop("One or more entries in Direction are not -1 or 1.")
}
# Type has to be one of the following
itypes <- c("Indicator", "Aggregate", "Group", "Denominator", "Other")
if(any(iMeta$Type %nin% itypes)){
stop("One or more entries in Type is not allowed - should be one of Indicator, Aggregate, Group, Denominator, Other.")
}
## THE FOLLOWING ARE OPTIONAL COLS
# if iName exists, should be alphanumeric
if(!is.null(iMeta$iName)){
if(!is.character(iMeta$iName)){
stop("iName is not a character vector, which is required. If you don't want to specify iName, this
column can also be removed.")
}
# also no NAs are allowed
if(any(is.na(iMeta$iName))){
stop("NAs found in iName - if iName is specified, NAs are not allowed.")
}
}
# if Unit exists, should be alphanumeric
if(!is.null(iMeta$Unit)){
if(!is.character(iMeta$Unit)){
stop("Unit is not a character vector, which is required. If you don't want to specify Unit, this
column can also be removed.")
}
}
# if Target exists, should be alphanumeric
if(!is.null(iMeta$Target)){
if(!is.numeric(iMeta$Target)){
stop("Target is not a numeric vector, which is required. If you don't want to specify Target, this
column can also be removed (it is only required for distance to target normalisation).")
}
}
# BETWEEN-COL CHECKS ------------------------------------------------------
# Level should be non-NA for all indicators and aggregates
if( any(is.na(iMeta$Level) & (iMeta$Type %in% c("Indicator", "Aggregate"))) ){
stop("NAs detected in Level for Indicator/Aggregates. All Indicators and Aggregates must have a numeric
Level defined.")
}
# Level should be 1 for indicators
if(any( (iMeta$Level != 1) & (iMeta$Type == "Indicator") )){
stop("One or more rows of Type 'Indicator' is assigned Level != 1. Indicators should all be at Level 1.")
}
# Direction should be non-NA for all indicators and aggregates
if( any(is.na(iMeta$Direction) & (iMeta$Type %in% c("Indicator", "Aggregate"))) ){
stop("NAs detected in Direction for Indicator/Aggregates. All Indicators and Aggregates must have a
Direction defined (either 1 or -1).")
}
# Weight should be non-NA for all indicators and aggregates
if( any(is.na(iMeta$Weight) & (iMeta$Type %in% c("Indicator", "Aggregate"))) ){
stop("NAs detected in Weight for Indicator/Aggregates. All Indicators and Aggregates must have a numeric
Weight defined.")
}
# # Unit should be non-NA except for Groups
# if( any(is.na(iMeta$Unit) & (iMeta$Type != "Group")) ){
# stop("NAs detected in Unit: NAs are only allowed in Unit for Type = 'Group'.")
# }
# If specified, Target must be non-NA for everything at Level 1
if( any(is.na(iMeta$Target) & (iMeta$Type == "Indicator")) ){
stop("NAs detected in Target for Type = 'Indicator'. If targets are specified, they must be non-NA
for all indicators. You can also remove the Target column if you don't need targets.")
}
# Parent should refer to codes already present in iCode
notin_iCode <- (iMeta$Parent[!is.na(iMeta$Parent)] %nin% iMeta$iCode)
if(any(notin_iCode)){
stop("One or more entries in Parent not found in iCode.")
}
# check top level has NA for parent (no parent assigned)
if( any(!is.na(iMeta$Parent) & (iMeta$Level == maxlev)) ){
stop("Entries found in Parent at the highest aggregation level. At the highest aggregation level
there are no parents, so Parent should be NA.")
}
# STRUCTURE CHECKS --------------------------------------------------------
# This function checks, for a given CODE/PARENT pair, whether the parent is in the level
# immediately above. If not, reports error.
levcheck <- function(x){
chld <- x[1]
prnt <- x[2]
# level of child
chld_lev <- iMeta$Level[iMeta$iCode == chld]
# if we reach the top level, break
if(chld_lev == maxlev) return(NULL)
# level of parent
prnt_lev <- iMeta$Level[iMeta$iCode == prnt]
# check if parent is immediately above child
if(prnt_lev != (chld_lev + 1)){
stop(paste0(
"Level discrepancy detected. An iCode has a Parent in a Level other than the one immediately above it: ",
"iCode = ", chld,
", Parent = ", prnt))
}
}
# run function above on rows of iMeta
# note, return to a variable just to avoid returning NULL (see func above)
check_struct <- apply(iMeta[(iMeta$Type %in% c("Indicator", "Aggregate")) ,c("iCode", "Parent")],
MARGIN = 1,
levcheck)
if(!quietly){
message("iMeta checked and OK.")
}
}
# Splits `iData` by the `Time` column into multiple `iData` data frames.
#
# @param iData A data frame of indicator panel data
# @param split_to Either `"all"` (one `iData` for each unique entry in `iData$Time`), or a vector containing a
# subset of entries in `iData$Time`. In the latter case, `iData`s will only be generated for the entries in this
# vector.
#
# @return List of `iData` data frames
split_iData <- function(iData, split_to){
# this function is only called from new_coin(), so if we are here, then the iData should be valid,
# and there should be more than one unique entry in iData$Time.
if(!identical(split_to, "all")){
if(any(split_to %nin% iData$Time)){
stop("One or more entries in split_to is not found in iData$Time.")
}
# subset rows to the requested time points
iData <- iData[iData$Time %in% split_to, ]
}
# return list of dfs
split(iData, iData$Time)
}
# Takes an iMeta table and outputs a wide format index structure, i.e. a table with one column per
# level in the index. This is used in later functions to look up the full "ancestry" of any element
# in the index.
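#
# As a sketch with hypothetical codes: for indicators i1 and i2 under pillar P1, and P1 under the Index,
# the lineage table would look like:
#
#   Level_1  Level_2  Level_3
#   i1       P1       Index
#   i2       P1       Index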
#
# @param iMeta A data frame of indicator metadata. For specs see [check_iMeta()].
# @param level_names A character vector of names of each level in the hierarchy.
#
# @return Lineage table as data frame
get_lineage <- function(iMeta, level_names = NULL){
# isolate the structural part of iMeta
longS <- iMeta[c("iCode", "Parent")]
# prep wide version: filter to the indicator level and parent level
wideS <- iMeta[iMeta$Type == "Indicator", c("iCode", "Parent")]
# find max level
maxlev <- max(iMeta$Level, na.rm = TRUE)
# catch possibility of only one level (may not make sense to make a coin in that
# situation, to be seen)
if(maxlev == 1){
wideS <- wideS["iCode"]
warning("Only one level is defined in iMeta. This is not normally expected in a composite indicator, and some functions may not work as expected.", call. = FALSE)
} else if(maxlev > 2){
# successively add columns by looking up parent codes of last col
# (if maxlev == 2, wideS already has both of its columns and no lookup is needed)
for(ii in 2:(maxlev-1)){
wideS <- cbind(wideS,
longS$Parent[match(wideS[[ii]], longS$iCode)])
}
}
# rename columns
if(is.null(level_names)){
level_names <- paste0("Level_", 1:maxlev)
} else {
if(length(level_names) != ncol(wideS)){
stop("level_names is not the same length as the number of levels in the index.")
}
}
colnames(wideS) <- level_names
if(maxlev == 1){
return(wideS)
} else {
# reorder finally starting with highest level and working down
wideS[do.call(order, rev(wideS)), ]
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/COINr/R/new_coin.R
#' Create normalised data sets in a purse of coins
#'
#' This creates normalised data sets for each coin in the purse. In most respects, this works in a similar way
#' to normalising on a coin, for which reason please see [Normalise.coin()] for most documentation. There is however
#' a special case in terms of operating on a purse of coins. This is because, when
#' dealing with time series data, it is often desirable to normalise over the whole panel data set at once
#' rather than independently for each time point. This makes the resulting index and aggregates comparable
#' over time. Here, the `global` argument controls whether to normalise each coin independently or to normalise
#' across all data at once. In other respects, this function behaves the same as [Normalise.coin()].
#'
#' The same specifications are passed to each coin in the purse. This means that each coin is normalised
#' using the same set of specifications and directions. If you need control over individual coins, you
#' will have to normalise coins individually.
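#'
#' As a sketch, normalising a purse either globally or coin-by-coin looks like:
#'
#' ```
#' purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
#' purse_g <- Normalise(purse, dset = "Raw", global = TRUE)   # comparable across time points
#' purse_i <- Normalise(purse, dset = "Raw", global = FALSE)  # each coin normalised independently
#' ```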
#'
#' @param x A purse object
#' @param dset The data set to normalise in each coin
#' @param global_specs Default specifications
#' @param indiv_specs Individual specifications
#' @param directions An optional data frame containing the following columns:
#' * `iCode` The indicator code, corresponding to the column names of the data set
#' * `Direction` numeric vector with entries either `-1` or `1`
#' If `directions` is not specified, the directions will be taken from the `iMeta` table in the coin, if available.
#' @param global Logical: if `TRUE`, normalisation is performed "globally" across all coins, by using e.g. the
#' max and min of each indicator in any coin. This effectively makes normalised scores comparable between coins
#' because they are all scaled using the same parameters. Otherwise if `FALSE`, coins are normalised individually.
#' @param write_to Optional character string for naming the data set in each coin. Data will be written to
#' `.$Data[[write_to]]`. Default is `write_to == "Normalised"`.
#' @param ... arguments passed to or from other methods.
#'
#' @return An updated purse with new normalised data sets added at `.$Data$Normalised` in each coin
#' @export
#'
#' @examples
#' # build example purse
#' purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
#'
#' # normalise raw data set
#' purse <- Normalise(purse, dset = "Raw", global = TRUE)
#'
Normalise.purse <- function(x, dset, global_specs = NULL, indiv_specs = NULL,
directions = NULL, global = TRUE, write_to = NULL, ...){
# input check
check_purse(x)
# GET DSETS ---------------------------------------------------------------
iDatas <- get_dset(x, dset)
iDatas_ <- iDatas[names(iDatas) != "Time"]
# GLOBAL NORMALISATION ----------------------------------------------------
if(global){
# get directions first
if(is.null(directions)){
directions <- x$coin[[1]]$Meta$Ind[c("iCode", "Direction")]
}
# run global dset through normalise (as data frame), excluding Time col
iDatas_n <- Normalise(iDatas_, global_specs = global_specs,
indiv_specs = indiv_specs, directions = directions)
# split by Time
iDatas_n_l <- split(iDatas_n, iDatas$Time)
# now write dsets to coins
x$coin <- lapply(x$coin, function(coin){
# get Time
tt <- coin$Meta$Unit$Time[[1]]
if(is.null(tt)){
stop("Time index is NULL or not found in writing normalised data set to coin.")
}
if(is.null(write_to)){
write_to <- "Normalised"
}
# write dset first
coin <- write_dset(coin, iDatas_n_l[[which(names(iDatas_n_l) == tt)]], dset = write_to)
# also write to log - we signal that coin can't be regenerated any more
coin$Log$can_regen <- FALSE
coin$Log$message <- "Coin was normalised inside a purse with global = TRUE. Cannot be regenerated."
coin
})
} else {
# apply independent normalisation to each coin
x$coin <- lapply(x$coin, function(coin){
Normalise.coin(coin, dset = dset, global_specs = global_specs,
indiv_specs = indiv_specs, directions = directions,
out2 = "coin", write_to = write_to)
})
}
# make sure still purse class
class(x) <- c("purse", "data.frame")
x
}
#' Create a normalised data set
#'
#' Creates a normalised data set using specifications specified in `global_specs`. Columns of `dset` can also optionally be
#' normalised with individual specifications using the `indiv_specs` argument. If indicators should have their
#' directions reversed, this can be specified using the `directions` argument. Non-numeric columns are ignored
#' automatically by this function. By default, this function normalises each indicator using the "min-max" method, scaling indicators to lie between
#' 0 and 100. This calls the [n_minmax()] function. Note, all COINr normalisation functions are of the form `n_*()`.
#'
#' ## Global specification
#'
#' The `global_specs` argument is a list which specifies the normalisation function and any function parameters
#' that should be used to normalise the indicators found in the data set. Unless `indiv_specs` is specified, this will be applied
#' to all indicators. The list should have two entries:
#'
#' * `.$f_n`: the name of the function to use to normalise each indicator
#' * `.$f_n_para`: any further parameters to pass to `f_n`, apart from the numeric vector (each column of the data set)
#'
#' In this list, `f_n` should be a character string which is the name of a normalisation
#' function. For example, `f_n = "n_minmax"` calls the [n_minmax()] function. `f_n_para` is a list of any
#' further arguments to `f_n`. This means that any function can be passed to [Normalise()], as long as its
#' first argument is `x`, a numeric vector, and it returns a numeric vector of the same length. See [n_minmax()]
#' for an example.
#'
#' `f_n_para` is *required* to be a named list. So e.g. if we define a function `f1(x, arg1, arg2)` then we should
#' specify `f_n = "f1"`, and `f_n_para = list(arg1 = val1, arg2 = val2)`, where `val1` and `val2` are the
#' values assigned to the arguments `arg1` and `arg2` respectively.
#'
#' The default list for `global_specs` is: `list(f_n = "n_minmax", f_n_para = list(l_u = c(0,100)))`, i.e.
#' min-max normalisation between 0 and 100.
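#'
#' For example, to normalise using z-scores (via [n_zscore()]) instead of the default min-max:
#'
#' ```
#' coin <- build_example_coin(up_to = "new_coin")
#' coin <- Normalise(coin, dset = "Raw",
#'                   global_specs = list(f_n = "n_zscore", f_n_para = list(m_sd = c(0, 1))))
#' ```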
#'
#' Note, all COINr normalisation functions (passed to `f_n`) are of the form `n_*()`. Type `n_` in the R Studio console and press the Tab key to see a list.
#'
#' This function includes a special case for "distance to target" normalisation. Setting `global_specs = list(f_n = "n_dist2targ")` will apply distance to
#' target normalisation, automatically passing targets found in the "Target" column of `iMeta`.
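#'
#' As a sketch, for a coin whose `iMeta` includes a `Target` column:
#'
#' ```
#' coin <- Normalise(coin, dset = "Raw",
#'                   global_specs = list(f_n = "n_dist2targ", f_n_para = list(cap_max = TRUE)))
#' ```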
#'
#' ## Individual column specification
#'
#' Optionally, indicators can be normalised with different normalisation functions and parameters using the
#' `indiv_specs` argument. This must be specified as a named list e.g. `list(i1 = specs1, i2 = specs2)` where
#' `i1` and `i2` are `iCode`s to apply individual normalisation to, and `specs1` and `specs2` are
#' respectively lists of the same format as `global_specs` (see above). In other words, `indiv_specs` is a big
#' list wrapping together `global_specs`-style lists. Any `iCode`s not named in `indiv_specs` (
#' i.e. those not in `names(indiv_specs)`) are normalised using the specifications from `global_specs`. So
#' `indiv_specs` lists the exceptions to `global_specs`.
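#'
#' As a sketch (assuming `"Ind1"` is an `iCode` in the target data set), to rank-normalise that indicator
#' only, leaving all other indicators with the default min-max normalisation:
#'
#' ```
#' indiv_specs <- list(Ind1 = list(f_n = "n_rank"))
#' coin <- Normalise(coin, dset = "Raw", indiv_specs = indiv_specs)
#' ```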
#'
#' See also `vignette("normalise")` for more details.
#'
#' @param x A coin
#' @param dset A named data set found in `.$Data`
#' @param global_specs Specifications to apply to all columns, apart from those specified by `indiv_specs`. See details.
#' @param indiv_specs Specifications applied to specific columns, overriding those specified in `global_specs`.
#' See details.
#' @param directions An optional data frame containing the following columns:
#' * `iCode` The indicator code, corresponding to the column names of the data set
#' * `Direction` numeric vector with entries either `-1` or `1`
#' If `directions` is not specified, the directions will be taken from the `iMeta` table in the coin, if available.
#' @param out2 Either `"coin"` to return normalised data set back to the coin, or `df` to simply return a data
#' frame.
#' @param write_to Optional character string for naming the data set in the coin. Data will be written to
#' `.$Data[[write_to]]`. Default is `write_to == "Normalised"`.
#' @param write2log Logical: if `FALSE`, the arguments of this function are not written to the coin log, so this
#' function will not be invoked when regenerating. Recommend to keep `TRUE` unless you have a good reason to do otherwise.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin")
#'
#' # normalise the raw data set
#' coin <- Normalise(coin, dset = "Raw")
#'
#' @return An updated coin
#' @export
Normalise.coin <- function(x, dset, global_specs = NULL, indiv_specs = NULL,
directions = NULL, out2 = "coin", write_to = NULL,
write2log = TRUE, ...){
# WRITE LOG ---------------------------------------------------------------
coin <- write_log(x, dont_write = "x", write2log = write2log)
# GET DSET, DEFAULTS ------------------------------------------------------
iData <- get_dset(coin, dset)
iData_ <- iData[colnames(iData) != "uCode"]
# DIRECTIONS --------------------------------------------------------------
if(is.null(directions)){
# get direction col from iMeta
dirs_c <- coin$Meta$Ind[c("iCode", "Direction")]
# if empty
if(is.null(dirs_c)){
stop("No directions provided, and no directions found in .$Meta$Ind")
}
} else {
dirs_c <- directions
}
# NORMALISE DATA ----------------------------------------------------------
if(!is.null(global_specs[["f_n"]])){
if(global_specs[["f_n"]] == "n_dist2targ"){
# special treatment for dist2targ
# first, get iMeta
iMeta <- coin$Meta$Ind
if(is.null(iMeta[["Target"]])){
stop("You specified f_n = 'n_dist2targ' but no targets can be found - please attach these as a column 'Target' in iMeta.")
}
# see if cap_max is specified
if(!is.null(global_specs$f_n_para$cap_max)){
cap_max <- global_specs$f_n_para$cap_max
} else {
cap_max <- FALSE
}
# now we need to apply the n_dist2targ() function to each column, but also respecting the directions.
l_n <- lapply(names(iData_), function(icode){
n_dist2targ(iData_[[icode]],
targ = iMeta$Target[iMeta$iCode == icode],
direction = dirs_c$Direction[dirs_c$iCode == icode],
cap_max = cap_max)
})
names(l_n) <- names(iData_)
iData_n <- as.data.frame(l_n)
} else {
iData_n <- Normalise(iData_, global_specs = global_specs, indiv_specs = indiv_specs,
directions = dirs_c)
}
} else {
iData_n <- Normalise(iData_, global_specs = global_specs, indiv_specs = indiv_specs,
directions = dirs_c)
}
# reunite with uCode col
iData_n <- cbind(uCode = iData$uCode, iData_n)
# output list
if(out2 == "df"){
iData_n
} else {
if(is.null(write_to)){
write_to <- "Normalised"
}
write_dset(coin, iData_n, dset = write_to)
}
}
#' Normalise a data frame
#'
#' Normalises a data frame using specifications specified in `global_specs`. Columns can also optionally be
#' normalised with individual specifications using the `indiv_specs` argument. If variables should have their
#' directions reversed, this can be specified using the `directions` argument. Non-numeric columns are ignored
#' automatically by this function. By default, this function normalises each indicator using the "min-max" method, scaling indicators to lie between
#' 0 and 100. This calls the [n_minmax()] function. Note, all COINr normalisation functions are of the form `n_*()`.
#'
#' ## Global specification
#'
#' The `global_specs` argument is a list which specifies the normalisation function and any function parameters
#' that should be used to normalise the columns of `x`. Unless `indiv_specs` is specified, this will be applied
#' to all numeric columns of `x`. The list should have two entries:
#'
#' * `.$f_n`: the name of the function to use to normalise each column
#' * `.$f_n_para`: any further parameters to pass to `f_n`, apart from the numeric vector (each column of `x`)
#'
#' In this list, `f_n` should be a character string which is the name of a normalisation
#' function. For example, `f_n = "n_minmax"` calls the [n_minmax()] function. `f_n_para` is a list of any
#' further arguments to `f_n`. This means that any function can be passed to [Normalise()], as long as its
#' first argument is `x`, a numeric vector, and it returns a numeric vector of the same length. See [n_minmax()]
#' for an example.
#'
#' `f_n_para` is *required* to be a named list. So e.g. if we define a function `f1(x, arg1, arg2)` then we should
#' specify `f_n = "f1"`, and `f_n_para = list(arg1 = val1, arg2 = val2)`, where `val1` and `val2` are the
#' values assigned to the arguments `arg1` and `arg2` respectively.
#'
#' The default list for `global_specs` is: `list(f_n = "n_minmax", f_n_para = list(l_u = c(0,100)))`.
#'
#' Note, all COINr normalisation functions (passed to `f_n`) are of the form `n_*()`. Type `n_` in the R Studio console and press the Tab key to see a list.
#'
#' ## Individual column specification
#'
#' Optionally, columns of `x` can be normalised with different normalisation functions and parameters using the
#' `indiv_specs` argument. This must be specified as a named list e.g. `list(i1 = specs1, i2 = specs2)` where
#' `i1` and `i2` are column names of `x` to apply individual normalisation to, and `specs1` and `specs2` are
#' respectively lists of the same format as `global_specs` (see above). In other words, `indiv_specs` is a big
#' list wrapping together `global_specs`-style lists. Any numeric columns of `x` not named in `indiv_specs` (
#' i.e. those not in `names(indiv_specs)`) are normalised using the specifications from `global_specs`. So
#' `indiv_specs` lists the exceptions to `global_specs`.
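#'
#' As a sketch, using the built-in `iris` data and rank-normalising only one column (all other numeric
#' columns are min-max normalised by default):
#'
#' ```
#' iris_n <- Normalise(iris, indiv_specs = list(Sepal.Width = list(f_n = "n_rank")))
#' ```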
#'
#' See also `vignette("normalise")` for more details.
#'
#' @param x A data frame
#' @param global_specs Specifications to apply to all columns, apart from those specified by `indiv_specs`. See details.
#' @param indiv_specs Specifications applied to specific columns, overriding those specified in `global_specs`. See details.
#' @param directions An optional data frame containing the following columns:
#' * `iCode` The indicator code, corresponding to the column names of the data frame
#' * `Direction` numeric vector with entries either `-1` or `1`
#' If `directions` is not specified, the directions will all be assigned as `1`. Non-numeric columns do not need
#' to have directions assigned.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' iris_norm <- Normalise(iris)
#' head(iris_norm)
#'
#' @return A normalised data frame
#' @export
Normalise.data.frame <- function(x, global_specs = NULL, indiv_specs = NULL,
directions = NULL, ...){
# CHECKS ------------------------------------------------------------------
# most input checks are performed in Normalise.numeric()
if(is.null(directions)){
directions <- data.frame(iCode = names(x),
Direction = rep(1, ncol(x)))
}
if(!is.data.frame(directions)){
stop("'directions' must be specified as a data frame.")
}
if(any(colnames(directions) %nin% c("iCode", "Direction"))){
stop("'directions' must contain both columns 'iCode' and 'Direction'.")
}
# SET DEFAULTS ------------------------------------------------------------
# default treatment for all cols
specs_def <- list(f_n = "n_minmax",
f_n_para = list(l_u = c(0,100)))
# modify using input
if(!is.null(global_specs)){
stopifnot(is.list(global_specs))
#specs_def <- utils::modifyList(specs_def, global_specs)
specs_def <- global_specs
}
# individual: check and flag for later function
indiv <- !is.null(indiv_specs)
if(indiv){
stopifnot(is.list(indiv_specs))
}
# NORMALISE ---------------------------------------------------------------
# function for normalising a column
norm_col <- function(col_name){
# get col and check if numeric
xi <- x[[col_name]]
if(!is.numeric(xi)){
return(xi)
}
# get specs
if(indiv){
# check if spec for that col
if(col_name %in% names(indiv_specs)){
# lookup spec and merge with defaults (overwrites any differences)
specs <- indiv_specs[[col_name]]
} else {
# otherwise, use defaults
specs <- specs_def
}
} else {
# otherwise, use defaults
specs <- specs_def
}
# add direction
specs$direction <- directions$Direction[directions$iCode == col_name]
if(length(specs$direction) != 1){
stop("No 'direction' entry found for numerical column ", col_name)
}
# run function
do.call("Normalise.numeric", c(list(x = xi), specs))
}
# now run function
# output is one list
norm_results <- as.data.frame(lapply(names(x), norm_col))
names(norm_results) <- names(x)
# CHECK and OUTPUT --------------------------------------------------------
norm_results
}
#' Normalise a numeric vector
#'
#' Normalise a numeric vector using a specified function `f_n`, with possible reversal of direction
#' using `direction`.
#'
#' Normalisation is specified using the `f_n` and `f_n_para` arguments. In these, `f_n` should be a character
#' string which is the name of a normalisation
#' function. For example, `f_n = "n_minmax"` calls the [n_minmax()] function. `f_n_para` is a list of any
#' further arguments to `f_n`. This means that any function can be passed to [Normalise()], as long as its
#' first argument is `x`, a numeric vector, and it returns a numeric vector of the same length. See [n_minmax()]
#' for an example.
#'
#' `f_n_para` is *required* to be a named list. So e.g. if we define a function `f1(x, arg1, arg2)` then we should
#' specify `f_n = "f1"`, and `f_n_para = list(arg1 = val1, arg2 = val2)`, where `val1` and `val2` are the
#' values assigned to the arguments `arg1` and `arg2` respectively.
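#'
#' As a sketch, using a built-in normalisation function with a named parameter list:
#'
#' ```
#' x <- runif(10)
#' Normalise(x, f_n = "n_zscore", f_n_para = list(m_sd = c(0, 1)))
#' ```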
#'
#' See also `vignette("normalise")` for more details.
#'
#' @param x Object to be normalised
#' @param f_n The normalisation method, specified as a string which refers to a function of the form `f_n(x, ...)`,
#' where any further arguments are passed via `f_n_para`. See details. Defaults to `"n_minmax"`, the min-max function.
#' @param f_n_para Supporting list of arguments for `f_n`. This is required to be a list.
#' @param direction If `direction = -1` the highest values of `x` will correspond to the lowest
#' values of the normalised `x`. Else if `direction = 1` the direction of `x` is unaltered.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # example vector
#' x <- runif(10)
#'
#' # normalise using distance to reference (5th data point)
#' x_norm <- Normalise(x, f_n = "n_dist2ref", f_n_para = list(iref = 5))
#'
#' # view side by side
#' data.frame(x, x_norm)
#'
#' @return A normalised numeric vector
#'
#' @export
Normalise.numeric <- function(x, f_n = NULL, f_n_para = NULL,
direction = 1, ...){
# CHECKS ------------------------------------------------------------------
# x must be numeric to be here. f_n will be checked by do.call()
if(direction %nin% c(-1, 1)){
stop("direction must be either -1 or 1")
}
# change direction
x <- x*direction
# DEFAULTS ----------------------------------------------------------------
# minmax is default
if(is.null(f_n)){
f_n <- "n_minmax"
}
# function args
f_args <- list(x = x)
if(!is.null(f_n_para)){
if(!is.list(f_n_para)){
stop("f_n_para must be a list")
}
f_args <- c(f_args, f_n_para)
}
# NORMALISE ---------------------------------------------------------------
# call normalisation function
if(f_n == "none"){
xn <- x
} else {
xn <- do.call(what = f_n, args = f_args)
}
# CHECK and OUTPUT --------------------------------------------------------
if(length(xn) != length(x)){
stop("length of normalised vector not equal to length of x")
}
if(!is.numeric(xn)){
stop("normalised vector is not numeric")
}
xn
}
#' Normalise data
#'
#' This is a generic function for normalising variables and indicators, i.e. bringing them onto
#' a common scale. Please see individual method documentation depending on your data class:
#'
#' * [Normalise.numeric()]
#' * [Normalise.data.frame()]
#' * [Normalise.coin()]
#' * [Normalise.purse()]
#'
#' See also `vignette("normalise")` for more details.
#'
#' This function replaces the now-defunct `normalise()` from COINr < v1.0.
#'
#' @param x Object to be normalised
#' @param ... Further arguments to be passed to methods.
#'
#' @examples
#' # See individual method documentation.
#'
#' @export
Normalise <- function(x, ...){
UseMethod("Normalise")
}
#' Minmax a vector
#'
#' Scales a vector using min-max method.
#'
#' @param x A numeric vector
#' @param l_u A vector `c(l, u)`, where `l` is the lower bound and `u` is the upper bound. `x` will
#' be scaled exactly onto this interval.
#'
#' @examples
#' x <- runif(20)
#' n_minmax(x)
#'
#' @return Normalised vector
#'
#' @export
n_minmax <- function(x, l_u = c(0,100)){
stopifnot(is.numeric(x),
is.numeric(l_u),
length(l_u) == 2,
all(!is.na(l_u)))
minx <- min(x, na.rm = TRUE)
maxx <- max(x, na.rm = TRUE)
if(minx == maxx){
warning("The range of x is 0: returning vector of NaNs")
}
(x-minx)/(maxx - minx)*(l_u[2]-l_u[1]) + l_u[1]
}
#' Scale a vector
#'
#' Scales a vector for normalisation using the method applied in the GII2020 for some indicators. This
#' does `x_scaled <- (x-l)/(u-l) * 100`. Note this is *not* the min-max transformation (see [n_minmax()]):
#' it is a linear transformation in which `x` is shifted by `l` and scaled by `(u-l)/100`.
#'
#' @param x A numeric vector
#' @param npara Parameters as a vector `c(l, u)`. See description.
#'
#' @examples
#' x <- runif(20)
#' n_scaled(x, npara = c(1,10))
#'
#' @return Scaled vector
#'
#' @export
n_scaled <- function(x, npara = c(0,100)){
stopifnot(is.numeric(x),
is.vector(x))
(x-npara[1])/(npara[2] - npara[1])*100
}
#' Z-score a vector
#'
#' Standardises a vector `x` by scaling it to have a mean and standard deviation specified by `m_sd`.
#'
#' @param x A numeric vector
#' @param m_sd A vector `c(m, sd)`, where `m` is desired mean and `sd` is the target standard deviation.
#'
#' @importFrom stats sd
#'
#' @examples
#' x <- runif(20)
#' n_zscore(x)
#'
#' @return Numeric vector
#'
#' @export
n_zscore <- function(x, m_sd = c(0,1)){
stopifnot(is.numeric(x),
is.numeric(m_sd),
length(m_sd) == 2,
all(!is.na(m_sd)),
m_sd[2] > 0)
(x-mean(x, na.rm = TRUE))/stats::sd(x, na.rm = TRUE)*m_sd[2] + m_sd[1]
}
#' Normalise as distance to maximum value
#'
#' A measure of the distance to the maximum value, where the maximum value is the highest-scoring value. The
#' formula used is:
#'
#' \deqn{ 1 - (x_{max} - x)/(x_{max} - x_{min}) }
#'
#' This means that the closer a value is to the maximum, the higher its score will be. Scores will be in the
#' range of 0 to 1.
#'
#' @param x A numeric vector
#'
#' @examples
#' x <- runif(20)
#' n_dist2max(x)
#'
#' @return Numeric vector
#'
#' @export
n_dist2max <- function(x){
stopifnot(is.numeric(x))
minx <- min(x, na.rm = TRUE)
maxx <- max(x, na.rm = TRUE)
if(minx == maxx){
warning("The range of x is 0: returning vector of NaNs")
}
1 - (maxx - x)/(maxx- minx)
}
#' Normalise as distance to reference value
#'
#' A measure of the distance to a specific value found in `x`, specified by `iref`. The formula is:
#'
#' \deqn{ 1 - (x_{ref} - x)/(x_{ref} - x_{min}) }
#'
#' Values exceeding `x_ref` can be optionally capped at 1 if `cap_max = TRUE`.
#'
#' @param x A numeric vector
#' @param iref An integer which indexes `x` to specify the reference value. The reference value will be
#' `x[iref]`.
#' @param cap_max If `TRUE`, any value of `x` that exceeds `x[iref]` will be assigned a score of 1, otherwise
#' will have a score greater than 1.
#'
#' @examples
#' x <- runif(20)
#' n_dist2ref(x, 5)
#'
#' @return Numeric vector
#'
#' @export
n_dist2ref <- function(x, iref, cap_max = FALSE){
stopifnot(is.numeric(x),
is.logical(cap_max),
is.numeric(iref),
length(iref)==1,
iref > 0)
if(iref > length(x)){
stop("iref must be an integer in 1:length(x).")
}
minx <- min(x, na.rm = TRUE)
# get xref, check if NA
xref <- x[iref]
if(is.na(xref)){
warning("The value of x identified as the reference is NA - returning vector of NAs")
}
y <- 1 - (xref - x)/(xref - minx)
if(cap_max){
y[y>1] <- 1
}
y
}
#' Normalise as distance to target
#'
#' A measure of the distance of each value of `x` to a specified target which can be a high or low target depending on `direction`. See details below.
#'
#'
#' If `direction = 1`, the formula is:
#'
#' \deqn{ \frac{x - x_{min}}{x_{targ} - x_{min}} }
#'
#' else if `direction = -1`:
#'
#' \deqn{ \frac{x_{max} - x}{x_{max} - x_{targ}} }
#'
#' Values surpassing `x_targ` in either case can be optionally capped at 1 if `cap_max = TRUE`.
#'
#' @param x A numeric vector
#' @param targ An target value
#' @param direction Either 1 (default) or -1. In the former case, the indicator is assumed to be "positive" so that the target is at the higher
#' end of the range. In the latter, the indicator is "negative" so that the target is typically at the low end of the range.
#' @param cap_max If `TRUE`, any value of `x` that exceeds `targ` will be assigned a score of 1, otherwise
#' will have a score greater than 1.
#'
#' @examples
#' x <- runif(20)
#' n_dist2targ(x, 0.8, cap_max = TRUE)
#'
#' @return Numeric vector
#'
#' @export
n_dist2targ <- function(x, targ, direction = 1, cap_max = FALSE){
stopifnot(is.numeric(x),
is.numeric(targ),
length(targ)==1,
is.logical(cap_max),
is.numeric(direction),
length(direction) == 1)
if(is.na(targ)){
stop("targ is NA")
}
if(direction == 1){
minx <- min(x, na.rm = TRUE)
if(targ < minx){
warning("targ is less than min(x) - this will produce negative scores.")
}
y <- (x - minx)/(targ - minx)
} else if (direction == -1){
maxx <- max(x, na.rm = TRUE)
if(targ > maxx){
warning("targ is greater than max(x) - this will produce negative scores.")
}
y <- (maxx - x)/(maxx- targ)
} else {
stop("'direction' must be either -1 or 1")
}
# cap
if(cap_max){
y[y>1] <- 1
}
y
}
#' Normalise as fraction of max value
#'
#' The ratio of each value of `x` to `max(x)`.
#'
#' \deqn{ x / x_{max} }
#'
#' @param x A numeric vector
#'
#' @examples
#' x <- runif(20)
#' n_fracmax(x)
#'
#' @return Numeric vector
#'
#' @export
n_fracmax <- function(x){
stopifnot(is.numeric(x))
maxx <- max(x, na.rm = TRUE)
x/maxx
}
#' Normalise using percentile ranks
#'
#' Calculates percentile ranks of a numeric vector using "sport" ranking. Ranks are calculated by [base::rank()]
#' and converted to percentile ranks. The `ties.method` can be changed - this is directly passed to
#' [base::rank()].
#'
#' @param x A numeric vector
#' @param ties.method This argument is passed to [base::rank()] - see there for details.
#'
#' @examples
#' x <- runif(20)
#' n_prank(x)
#'
#' @return Numeric vector
#'
#' @export
n_prank <- function(x, ties.method = "min"){
stopifnot(is.numeric(x))
# ranks
# ranks, using the ties.method passed by the user
rx <- rank(x, ties.method = ties.method, na.last = "keep")
# perc ranks
(rx - 1) / (sum(!is.na(x)) - 1)
}
#' Normalise using ranks
#'
#' This is simply a wrapper for [base::rank()]. Higher scores will give higher ranks.
#'
#' @param x A numeric vector
#' @param ties.method This argument is passed to [base::rank()] - see there for details.
#'
#' @examples
#' x <- runif(20)
#' n_rank(x)
#'
#' @return Numeric vector
#'
#' @export
n_rank <- function(x, ties.method = "min"){
stopifnot(is.numeric(x))
# ranks
rank(x, ties.method = ties.method, na.last = "keep")
}
#' Normalise using Borda scores
#'
#' Calculates Borda scores as `rank(x) - 1`.
#'
#' @param x A numeric vector
#' @param ties.method This argument is passed to [base::rank()] - see there for details.
#'
#' @examples
#' x <- runif(20)
#' n_borda(x)
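#'
#' # tied values: returns 0, 1, 1, 3 (illustrative)
#' n_borda(c(10, 20, 20, 30))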
#'
#' @return Numeric vector
#'
#' @export
n_borda <- function(x, ties.method = "min"){
stopifnot(is.numeric(x))
# ranks
rank(x, ties.method = ties.method, na.last = "keep") - 1
}
#' Normalise using goalpost method
#'
#' The distance of each value of `x` from the lower "goalpost" to the upper one. Goalposts are specified by
#' `gposts = c(l, u, a)`, where `l` is the lower bound, `u` is the upper bound, and `a` is a scaling parameter.
#'
#' Specify `direction = -1` to "flip" the goalposts. This may be necessary depending on how the goalposts
#' were defined.
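#'
#' For reference, with `direction = 1` and goalposts `c(l, u, a)` the formula implemented below is:
#'
#' \deqn{ a \frac{x - l}{u - l} }
#'
#' where the ratio is optionally truncated to \eqn{[0, 1]} (if `trunc2posts = TRUE`) before scaling by \eqn{a}.
#' With `direction = -1`, the goalposts are negated and the formula is reversed accordingly.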
#'
#' @param x A numeric vector
#' @param gposts A numeric vector `c(l, u, a)`, where `l` is the lower bound, `u` is the upper bound,
#' and `a` is a scaling parameter.
#' @param direction Either 1 or -1. Set to -1 to flip goalposts.
#' @param trunc2posts If `TRUE` (default) will truncate any values that fall outside of the goalposts.
#'
#' @examples
#' x <- runif(20)
#' n_goalposts(x, gposts = c(0.2, 0.8, 1))
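#'
#' # negative-direction indicator: values and goalposts are flipped internally (illustrative)
#' n_goalposts(-x, gposts = c(0.2, 0.8, 1), direction = -1)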
#'
#' @return Numeric vector
#'
#' @export
n_goalposts <- function(x, gposts, direction = 1, trunc2posts = TRUE){
stopifnot(is.numeric(x))
# since indicators arrive with directions possibly reversed (*-1), we have to also multiply GPs by -1
if(direction == -1){
# here, indicators are multiplied by -1, so need to also multiply goalposts by -1
gposts[1:2] <- -1*gposts[1:2]
# then, the goalpost formula is reversed as well
y <- (x-gposts[2])/(gposts[1] - gposts[2])
} else {
y <- (x-gposts[1])/(gposts[2] - gposts[1])
}
# this is the truncation bit
if(trunc2posts){
y[y > 1] <- 1
y[y < 0] <- 0
}
# overall scaling
y * gposts[3]
}
# ---- end of R/normalise.R ----
#' Bar chart
#'
#' Plot bar charts of single indicators. Bar charts can be coloured by an optional grouping variable `by_group`, or if
#' `iCode` points to an aggregate, setting `stack_children = TRUE` will plot `iCode` coloured by its underlying scores.
#'
#' This function uses ggplot2 to generate plots, so the plot can be further manipulated using ggplot2 commands.
#' See `vignette("visualisation")` for more details on plotting.
#'
#' @param coin A coin object.
#' @param dset Data set from which to extract the variable to plot. Passed to [get_data()].
#' @param iCode Code of variable or indicator to plot. Passed to [get_data()].
#' @param ... Further arguments to pass to [get_data()], e.g. for filtering units.
#' @param uLabel How to label units: either `"uCode"`, or `"uName"`.
#' @param axes_label How to label the y axis and group legend: either `"iCode"` or `"iName"`.
#' @param by_group Optional group variable to use to colour bars. Cannot be used if `stack_children = TRUE`.
#' @param dset_label Logical: whether to include the data set in the y axis label.
#' @param log_scale Logical: if `TRUE` uses a log scale for the y axis.
#' @param stack_children Logical: if `TRUE` and `iCode` refers to an aggregate, will plot `iCode` with each bar split into
#' its underlying component values (the underlying indicators/aggregates used to create `iCode`). To use this, you must
#' have aggregated your data and `dset` must point to a data set where the underlying (child) scores of `iCode` are available.
#' @param bar_colours Optional vector of colour codes for colouring bars.
#' @param filter_to_ends Optional way to filter the bar chart to only display the top/bottom N units. This is useful in cases
#' where the number of units is large. Specify as e.g. `list(top = 10)` or `list(bottom = 10)` to return only the top or bottom
#' ten units respectively (the value 10 can be changed of course).
#' @param flip_coords Logical; if `TRUE` flips to horizontal bars.
#'
#' @importFrom stats reorder
#' @importFrom rlang .data
#'
#' @return A ggplot2 plot object.
#' @export
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # bar plot of CO2 by GDP per capita group
#' plot_bar(coin, dset = "Raw", iCode = "CO2",
#' by_group = "GDPpc_group", axes_label = "iName")
plot_bar <- function(coin, dset, iCode, ..., uLabel = "uCode", axes_label = "iCode",
by_group = NULL, filter_to_ends = NULL, dset_label = FALSE, log_scale = FALSE, stack_children = FALSE,
bar_colours = NULL, flip_coords = FALSE){
# PREP --------------------------------------------------------------------
stopifnot(is.character(dset),
is.character(iCode),
length(iCode) == 1,
axes_label %in% c("iCode", "iName"),
is.logical(dset_label),
is.logical(log_scale),
is.logical(stack_children))
if(!is.null(uLabel)){
stopifnot(is.character(uLabel),
length(uLabel) == 1)
if(uLabel %nin% c("uCode", "uName")){
stop("uLabel must be either NULL, 'uCode', or 'uName'.")
}
}
# set for plotting order if vertical
ord_direction <- if (flip_coords) {1} else {-1}
# GET DATA ----------------------------------------------------------------
if(!is.null(by_group)){
also_get <- by_group
} else {
also_get <- NULL
}
# I have to reset Level to NULL in case it is specified, otherwise causes problems
dot_paras <- list(...)
dot_paras$Level <- NULL
iData <- get_data(coin, dset = dset, iCodes = iCode, also_get = also_get, ... = dot_paras)
# optional filtering to top/bottom N
if(!is.null(filter_to_ends)){
stopifnot(is.list(filter_to_ends),
length(filter_to_ends) == 1,
names(filter_to_ends) %in% c("top", "bottom"),
filter_to_ends[[1]] %in% 1:nrow(iData))
if(names(filter_to_ends) == "top"){
iData <- iData[order(-iData[[iCode]]), ]
} else {
iData <- iData[order(iData[[iCode]]), ]
}
iData <- iData[1:filter_to_ends[[1]], ]
}
# uLABELS -----------------------------------------------------------------
if(is.null(uLabel) || (uLabel == "uCode") ){
iData$plbs <- iData$uCode
} else {
iData$plbs <- ucodes_to_unames(coin, iData$uCode)
}
# GET children -------------------------------------------------------------
# if stack_children = TRUE, we need to get iCode plus underlying codes
if(stack_children){
if(!is.null(by_group)){
stop("Cannot have stack_children = TRUE and plotting by group (by_group). Disable one of these two options.")
}
# get iMeta
iMeta <- coin$Meta$Ind
# get child codes
iCodes_ch <- iMeta$iCode[iMeta$Parent == iCode]
# remove NAs
iCodes_ch <- iCodes_ch[!is.na(iCodes_ch)]
# check
if(length(iCodes_ch) == 0){
stop("No child codes found for selected iCode: if stack_children = TRUE, you must select an iCode in Level 2
or above (it must be an aggregate).")
}
# get data
iData_ch <- get_data(coin, dset = dset, iCodes = iCodes_ch, also_get = also_get, ... = dot_paras)
# merge onto iData
iData <- merge(iData, iData_ch, by = "uCode")
# scale children to add up to parent score
iData$scale_fac <- iData[[iCode]]/rowSums(iData[iCodes_ch])
iData[iCodes_ch] <- sapply(iData[iCodes_ch], `*`, iData$scale_fac)
# make long for plotting, and rename some things
iData <- lengthen(iData, cols = iCodes_ch)
names(iData)[names(iData) == "name"] <- "Component"
names(iData)[names(iData) == iCode] <- paste0(iCode, "2")
names(iData)[names(iData) == "Value"] <- iCode
}
# PLOT --------------------------------------------------------------------
# setup: whether to plot by group or not
if(!is.null(by_group)){
plt <- ggplot2::ggplot(iData, ggplot2::aes(x = stats::reorder(.data[["plbs"]], ord_direction*.data[[iCode]]),
y = .data[[iCode]],
label = .data[["plbs"]],
fill = .data[[by_group]]))
} else if(stack_children){
plt <- ggplot2::ggplot(iData, ggplot2::aes(x = stats::reorder(.data[["plbs"]], ord_direction*.data[[iCode]]),
y = .data[[iCode]],
label = .data[["plbs"]],
fill = .data[["Component"]]))
} else {
plt <- ggplot2::ggplot(iData, ggplot2::aes(x = stats::reorder(.data[["plbs"]], ord_direction*.data[[iCode]]),
y = .data[[iCode]],
label = .data[["plbs"]]))
}
if(stack_children){
# main plot
plt <- plt +
ggplot2::geom_bar(stat = "identity", position = "stack") +
ggplot2::theme_minimal()
} else {
# main plot
plt <- plt +
ggplot2::geom_bar(stat = "identity") +
ggplot2::theme_minimal()
}
# LABELS ------------------------------------------------------------------
# names
if(axes_label == "iName"){
lbs <- icodes_to_inames(coin, c(iCode, by_group))
} else {
lbs <- c(iCode, by_group)
}
# dset
if(dset_label){
lbs[1] <- paste0(lbs[1], " (", dset, ")")
}
if(is.null(by_group)){
plt <- plt + ggplot2::labs(
x = ggplot2::element_blank(),
y = lbs[1]
)
} else {
plt <- plt + ggplot2::labs(
x = ggplot2::element_blank(),
y = lbs[1],
fill = lbs[2]
)
}
# COLOURS -----------------------------------------------------------------
if(!is.null(bar_colours)){
plt <- plt + ggplot2::scale_fill_manual(values = bar_colours)
}
# AXES --------------------------------------------------------------------
if(log_scale){
plt <- plt + ggplot2::scale_y_log10()
}
if(flip_coords){
plt <- plt + ggplot2::coord_flip() + ggplot2::theme(text=ggplot2::element_text(family="sans"))
} else {
plt <- plt + ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 45, hjust = 1)) +
ggplot2::theme(text=ggplot2::element_text(family="sans"))
}
plt
}
# ---- end of R/plot_bar.R ----
#' Static heatmaps of correlation matrices
#'
#' Generates heatmaps of correlation matrices using ggplot2, which can be tailored according to the grouping and structure
#' of the index. This enables correlating any set of indicators against any other,
#' and supports calling named aggregation groups of indicators. The `withparent` argument generates tables of correlations only with
#' parents of each indicator. Also supports discrete colour maps using `flagcolours`, different types of correlation, and groups
#' plots by higher aggregation levels.
#'
#' This function calls [get_corr()].
#'
#' Note that this function can only call correlations within the same data set (i.e. only one data set in `.$Data`).
#'
#' This function uses ggplot2 to generate plots, so the plot can be further manipulated using ggplot2 commands.
#' See `vignette("visualisation")` for more details on plotting.
#'
#' This function replaces the now-defunct `plotCorr()` from COINr < v1.0.
#'
#' @param coin The coin object
#' @param dset The target data set.
#' @param iCodes An optional list of character vectors where the first entry specifies the indicator/aggregate
#' codes to correlate against the second entry (also a specification of indicator/aggregate codes)
#' @param Levels The aggregation levels to take the two groups of indicators from. See [get_data()] for details.
#' @param ... Optional further arguments to pass to [get_data()].
#' @param cortype The type of correlation to calculate, either `"pearson"`, `"spearman"`, or `"kendall"` (see [stats::cor()]).
#' @param withparent If `TRUE`, and `Levels[1] != Levels[2]`, will only plot correlations of each row with its parent.
#' If `"family"`, plots the lowest aggregation level in `Levels` against all of its parent levels.
#' If `FALSE` (default), plots the full correlation matrix.
#' @param grouplev The aggregation level to group correlations by if `Levels[1] == Levels[2]`. By default, groups correlations into the
#' aggregation level above. Set to 0 to disable grouping and plot the full matrix.
#' @param box_level The aggregation level to draw boxes around if `Levels[1] == Levels[2]`.
#' @param showvals If `TRUE`, shows correlation values. If `FALSE`, no values shown.
#' @param flagcolours If `TRUE`, uses discrete colour map with thresholds defined by `flagthresh`. If `FALSE` uses continuous colour map.
#' @param flagthresh A 3-length vector of thresholds for highlighting correlations, if `flagcolours = TRUE`.
#' `flagthresh[1]` is the negative threshold (default -0.4). Below this value, values will be flagged red.
#' `flagthresh[2]` is the "weak" threshold (default 0.3). Values between `flagthresh[1]` and `flagthresh[2]` are coloured grey.
#' `flagthresh[3]` is the "high" threshold (default 0.9). Anything between `flagthresh[2]` and `flagthresh[3]` is flagged "OK",
#' and anything above `flagthresh[3]` is flagged "high".
#' @param pval The significance level for plotting correlations. Correlations with \eqn{p < pval} will be shown,
#' otherwise they will be plotted as the colour specified by `insig_colour`. Set to 0 to disable this.
#' @param insig_colour The colour to plot insignificant correlations. Defaults to a light grey.
#' @param text_colour The colour of the correlation value text (default white).
#' @param discrete_colours An optional 4-length character vector of colour codes or names to define the discrete
#' colour map if `flagcolours = TRUE` (from high to low correlation categories). Defaults to a green/blue/grey/purple.
#' @param box_colour The line colour of grouping boxes, default black.
#' @param order_as Optional list for ordering the plotting of variables. If specified, this must be a list of length 2, where each entry of the list is
#' a character vector of the iCodes plotted on the x and y axes of the plot. The plot will then follow the order of these character vectors. Note this must
#' be used with care because the `grouplev` and `box_level` arguments will not follow the reordering. Hence this argument is probably best used for plots
#' with no grouping, or for simply re-ordering within groups.
#' @param use_directions Logical: if `TRUE` the extracted data is adjusted using directions found inside the coin (i.e. the "Direction"
#' column input in `iMeta`: any indicators with negative direction will have their values multiplied by -1 which will reverse the
#' direction of correlation). This should only be set to `TRUE` if the data set has *not* yet been normalised. For example, this can be
#' useful to set to `TRUE` to analyse correlations in the raw data, but would make no sense to analyse correlations in the normalised
#' data because that already has the direction adjusted! So you would reverse direction twice. In other words, use this at your
#' discretion.
#'
#' @importFrom ggplot2 ggplot aes geom_tile
#' @importFrom rlang .data
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "Normalise", quietly = TRUE)
#'
#' # plot correlations between indicators in Sust group, using Normalised dset
#' plot_corr(coin, dset = "Normalised", iCodes = list("Sust"),
#' grouplev = 2, flagcolours = TRUE)
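#'
#' # correlations of Sust indicators with all of their parent levels (illustrative)
#' plot_corr(coin, dset = "Normalised", iCodes = list("Sust"), withparent = "family")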
#'
#' @return A plot object generated with ggplot2, which can be edited further with ggplot2 commands.
#'
#' @export
plot_corr <- function(coin, dset, iCodes = NULL, Levels = 1, ..., cortype = "pearson",
withparent = FALSE, grouplev = NULL, box_level = NULL, showvals = TRUE, flagcolours = FALSE,
flagthresh = NULL, pval = 0.05, insig_colour = "#F0F0F0",
text_colour = NULL, discrete_colours = NULL, box_colour = NULL, order_as = NULL, use_directions = FALSE){
# NOTE SET grouplev default to level + 1
# grouplev <- Levels[1] + 1
# CHECKS ------------------------------------------------------------------
if (length(iCodes) == 1){
iCodes = rep(iCodes, 2)
}
if (length(Levels) == 1){
Levels = rep(Levels, 2)
}
if (Levels[2] > Levels[1]){
Levels <- rev(Levels)
iCodes <- rev(iCodes)
}
if(withparent == "family"){
# in this case we don't care about the second entry and copy from 1
# to avoid any issues
Levels[2] <- Levels[1]
iCodes[[2]] <- iCodes[[1]]
}
crtable <- get_corr(coin, dset = dset, iCodes = iCodes, Levels = Levels,
... = ..., cortype = cortype, pval = pval, withparent = withparent,
grouplev = grouplev, make_long = TRUE, use_directions = use_directions)
# round values for plotting
crtable$Correlation <- round(crtable$Correlation,2)
# remove diags, otherwise plot looks annoying
crtable <- crtable[as.character(crtable$Var1) != as.character(crtable$Var2),]
# get index structure
lin <- coin$Meta$Lineage
##- PLOT -----------------------------------------
# get orders (otherwise ggplot reorders)
ord1 <- unique(crtable$Var1)
ord2 <- unique(crtable$Var2)
# sometimes these orderings come out not sorted according to higher aggregation levels
# Here we sort them according to the order in IndMeta (which is already sorted)
# Order first set (unless family plot in which case no, cos messes up)
if(withparent != "family"){
c1 <- unlist(lin[Levels[1]])
ord1 <- unique(c1[c1 %in% ord1])
}
# Order second set
c2 <- unlist(lin[Levels[2]])
ord2 <- unique(c2[c2 %in% ord2])
if(withparent == "family"){
ord2 <-rev(ord2)
}
# if we are correlating a set with itself, we make sure the orders match
if(setequal(ord1,ord2)){
# reversing agrees with the "classical" view of a correlation matrix
ord2 <- rev(ord1)
}
# for discrete colour map
if(is.null(flagthresh)){
hithresh <- 0.9
weakthresh <- 0.3
negthresh <- -0.4
} else {
stopifnot(is.numeric(flagthresh),
length(flagthresh) == 3)
hithresh <- flagthresh[3]
weakthresh <- flagthresh[2]
negthresh <- flagthresh[1]
}
if(!is.null(order_as)){
# custom ordering
stopifnot(is.list(order_as),
length(order_as) == 2,
is.character(order_as[[1]]),
is.character(order_as[[2]]))
if(length(order_as[[1]]) != length(ord1)){
stop("Error in length of order_as[[1]]: expected length = ", length(ord1))
}
if(length(order_as[[2]]) != length(ord2)){
stop("Error in length of order_as[[2]]: expected length = ", length(ord2))
}
if(!setequal(order_as[[1]], ord1)){
stop("Expected iCodes not found in order_as[[1]] - expected codes are: ", paste(ord1, collapse = ", "))
}
if(!setequal(order_as[[2]], ord2)){
stop("Expected iCodes not found in order_as[[2]] - expected codes are: ", paste(ord2, collapse = ", "))
}
ord1 <- order_as[[1]]
ord2 <- order_as[[2]]
}
if (flagcolours){
# make new col with flags for each correlation
crtable$Flag <- ifelse(crtable$Correlation >= hithresh, yes = "High", no = "OK")
crtable$Flag[(crtable$Correlation <= weakthresh)] <- "Weak"
crtable$Flag[(crtable$Correlation <= negthresh)] <- "Negative"
# make factors
# note: moved from inside ggplot call below to hopefully help ggplotly tooltip
crtable$Var1 <- factor(crtable$Var1, levels = ord1)
crtable$Var2 <- factor(crtable$Var2, levels = ord2)
# heatmap plot
plt <- ggplot2::ggplot(data = crtable,
ggplot2::aes(x = .data$Var1,
y = .data$Var2,
fill = .data$Flag,
label = .data$Correlation)) +
ggplot2::geom_tile(colour = "white") +
ggplot2::labs(x = NULL, y = NULL, fill = "Correlation") +
ggplot2::theme_classic() +
ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 45, hjust = 1)) +
ggplot2::scale_x_discrete(expand=c(0,0)) +
ggplot2::scale_y_discrete(expand=c(0,0))
if(is.null(discrete_colours)){
discrete_colours <- c("#80d67b", "#b8e8b5", "#e2e6e1", "#d098bd")
} else {
stopifnot(is.character(discrete_colours),
length(discrete_colours)==4)
}
plt <- plt + ggplot2::scale_fill_manual(
breaks = c("High", "OK", "Weak", "Negative"),
values = discrete_colours,
na.value = insig_colour
)
} else {
# note: moved from inside ggplot call below to hopefully help ggplotly tooltip
crtable$Var1 <- factor(crtable$Var1, levels = ord1)
crtable$Var2 <- factor(crtable$Var2, levels = ord2)
# create duplicate column to be able to turn off tooltip with ggplotly
crtable$Correlation2 <- crtable$Correlation
# heatmap plot
plt <- ggplot2::ggplot(data = crtable,
ggplot2::aes(x = .data$Var1,
y = .data$Var2,
fill = .data$Correlation,
label = .data$Correlation2)) +
ggplot2::geom_tile(colour = "white") +
ggplot2::labs(x = NULL, y = NULL, fill = "Correlation") +
ggplot2::theme_classic() +
ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 45, hjust = 1)) +
ggplot2::scale_x_discrete(expand=c(0,0)) +
ggplot2::scale_y_discrete(expand=c(0,0))
plt <- plt + ggplot2::scale_fill_gradient2(mid="#FBFEF9",low="#A63446",high="#0C6291", limits=c(-1,1),
na.value = insig_colour)
}
if (showvals){
if(is.null(text_colour)){
if(flagcolours){
text_colour <- "#6a6a6a"
} else {
text_colour <- "white"
}
}
plt <- plt + ggplot2::geom_text(colour = text_colour, size = 3, na.rm = TRUE)
}
# boxes
# the relevant function is ggplot2::annotate
if(is.null(box_colour)){
box_colour <- "#505050"
}
if(withparent=="family"){
# for family, we always plot boxes
# isolate cols of things we are correlating. Here all levels above current.
acls <- unique(lin[min(Levels):ncol(lin)])
# filter out to current set of indicators
acls <- acls[unlist(acls[1]) %in% unlist(unique(crtable[2])), ]
# now we need to iterate over columns, excluding the first one
for(icol in 2:ncol(acls)){
# isolate the column of interest
parents <- unlist(acls[icol])
# starting and ending indices of the rectangles
yends <- match(unique(parents), parents)
yends <- length(ord2) - yends + 1.5
ystarts <- c(yends, 0.5)
ystarts <- ystarts[-1]
xstarts <- rep(icol - 1.5, length(ystarts))
xends <- xstarts + 1
plt <- plt + ggplot2::annotate("rect", xmin=xstarts, xmax=xends, ymin=ystarts, ymax=yends,
fill = NA, color = box_colour)
# dark grey: #606060
}
} else if(!is.null(box_level)) {
if(box_level < Levels[2]+1){
stop("box_level must be at least the aggregation level above Levels.")
}
# isolate cols of things we are correlating, plus box level
acls <- unique(lin[c(Levels[1], box_level)])
# filter out to current set of indicators
acls <- acls[unlist(acls[1]) %in% unlist(unique(crtable[1])), ]
# we need four vectors for annotate: xmin, xmax, ymin and ymax
# actually xmin=ymin and xmax=ymax
parents <- unlist(acls[2])
# starting indices of the rectangles
starts <- match(unique(parents), parents)
# ends are the same, but shifted one along and with the last index included
ends <- c(starts, length(ord1)+1)
# remove the first element
ends <- ends[-1]
# now we mess around to get the correct positions. Tile boundaries are
# at half intervals. But also due to the fact that the y axis is reversed
# we have to subtract from the length.
xstarts <- starts - 0.5
xends <- ends - 0.5
if(Levels[1]==Levels[2]){
yends <- length(ord1) - xends + 1
ystarts <- length(ord1) - xstarts + 1
} else {
# isolate cols of things we are correlating, plus box level
acls <- unique(lin[c(Levels[2], box_level)])
# filter out to current set of indicators
acls <- acls[unlist(acls[1]) %in% unlist(unique(crtable[2])), ]
# get parent codes
parents <- unlist(acls[2])
# starting indices of the rectangles
ystarts <- match(unique(parents), parents) - 0.5
# ends are the same, but shifted one along and with the last index included
yends <- c(ystarts, length(ord2) + 0.5)
# remove the first element
yends <- yends[-1]
}
# add the rectangles to the plot
plt <- plt + ggplot2::annotate("rect", xmin=xstarts, xmax=xends, ymin=ystarts, ymax=yends,
fill = NA, color = box_colour)
}
plt +
ggplot2::theme(text=ggplot2::element_text(family="sans"))
}
# ---- end of R/plot_corr.R ----
#' Static indicator distribution plots
#'
#' Plots indicator distributions using box plots, dot plots, violin plots, violin-dot plots, and histograms.
#' Supports plotting multiple indicators by calling aggregation groups.
#'
#' This function uses ggplot2 to generate plots, so the plot can be further manipulated using ggplot2 commands.
#' See `vignette("visualisation")` for more details on plotting.
#'
#' This function replaces the now-defunct `plotIndDist()` from COINr < v1.0.
#'
#' @param coin The coin object, or a data frame of indicator data
#' @param dset The name of the data set to apply the function to, which should be accessible in `.$Data`.
#' @param iCodes Indicator code(s) to plot. See details.
#' @param ... Further arguments passed to [get_data()] (other than `coin`, `dset` and `iCodes`).
#' @param normalise Logical: if `TRUE`, normalises the data first, using `global_specs`. If `FALSE` (default),
#' data is not normalised.
#' @param global_specs Specifications for normalising data if `normalise = TRUE`. This is passed to the
#' `global_specs` argument of [Normalise()].
#' @param type The type of plot. Currently supported `"Box"`, `"Dot"`, `"Violin"`, `"Violindot"`, `"Histogram"`.
#'
#' @importFrom utils stack
#' @importFrom ggplot2 ggplot aes geom_boxplot theme_light geom_dotplot geom_violin geom_histogram labs facet_wrap
#' @importFrom rlang .data
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin")
#'
#' # plot all indicators in P2P group
#' plot_dist(coin, dset = "Raw", iCodes = "P2P", Level = 1, type = "Violindot")
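#'
#' # histogram of a single indicator (illustrative)
#' plot_dist(coin, dset = "Raw", iCodes = "CO2", type = "Histogram")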
#'
#' @return A ggplot2 plot object.
#'
#' @export
plot_dist <- function(coin, dset, iCodes, ..., type = "Box", normalise = FALSE,
global_specs = NULL){
# GET DATA ----------------------------------------------------------------
# get data set
iData <- get_data(coin, dset = dset, iCodes = iCodes, ...)
# col names that are NOT indicators
not_iCodes <- names(iData)[names(iData) %in% names(coin$Meta$Unit)]
# only indicator data
iData_ <- iData[colnames(iData) %nin% not_iCodes]
# Normalise if required
if (normalise){
iData_ <- Normalise(iData_, global_specs = global_specs)
}
# have to put dataframe in long format for ggplot
datamelt <- utils::stack(iData_)
# remove NAs to avoid warnings from ggplot2
datamelt <- datamelt[!is.na(datamelt$values), ]
# PLOT --------------------------------------------------------------------
# the base
plt <- ggplot2::ggplot(data = datamelt)
if (type == "Box"){
plt <- plt + ggplot2::geom_boxplot(aes(y = .data$values)) +
ggplot2::theme_light() +
ggplot2::theme(axis.ticks.x = ggplot2::element_blank(),
axis.text.x = ggplot2::element_blank())
} else if (type == "Dot"){
# Note that this might be messy, and can be adjusted with stackratio and dotsize
plt <- plt + ggplot2::geom_dotplot(aes(x = .data$ind, y = .data$values),
binaxis = "y", stackdir = "center", dotsize=1,
stackratio=0.5, alpha = 0.3) +
ggplot2::theme_light() +
ggplot2::theme(axis.ticks.x = ggplot2::element_blank(),
axis.text.x = ggplot2::element_blank())
} else if (type == "Violin"){
plt <- plt + ggplot2::geom_violin(ggplot2::aes(x = .data$ind, y = .data$values),
scale = "area") +
ggplot2::theme_light() +
ggplot2::theme(axis.ticks.x = ggplot2::element_blank(),
axis.text.x = ggplot2::element_blank())
} else if (type == "Violindot"){
plt <- plt + ggplot2::geom_violin(ggplot2::aes(x = .data$ind, y = .data$values),
scale = "area") +
ggplot2::geom_dotplot(ggplot2::aes(x = .data$ind, y = .data$values),
binaxis = "y", stackdir = "center", dotsize=1, stackratio=0.5, alpha = 0.3) +
ggplot2::theme_light() +
ggplot2::theme(axis.ticks.x = ggplot2::element_blank(), axis.text.x = ggplot2::element_blank())
} else if (type == "Histogram"){
plt <- plt + ggplot2::geom_histogram(ggplot2::aes(x = .data$values),
colour = "#e9ecef", bins = 10) +
ggplot2::theme_light()
} else {
stop("Plot type not recognised.")
}
# If plotting multiple indicators, use facet plotting
if (ncol(iData_) > 1){
nfrows <- ceiling(sqrt(nlevels(datamelt$ind))/2) # A way to get the number of rows so that we have about twice as many cols as rows
plt <- plt + ggplot2::facet_wrap(~ ind, nrow = nfrows, scales="free") +
ggplot2::labs(x = "", y = "")
} else {
# otherwise, add a title
plt <- plt + ggplot2::labs(title = names(iData_))
}
plt +
ggplot2::theme(text=ggplot2::element_text(family="sans"))
}
#' Dot plots of single indicator with highlighting
#'
#' Plots a single indicator as a line of dots, and optionally highlights selected units and statistics.
#' This is intended for showing the relative position of units to other units, rather than as a statistical
#' plot. For the latter, use [plot_dist()].
#'
#' This function uses ggplot2 to generate plots, so the plot can be further manipulated using ggplot2 commands.
#' See `vignette("visualisation")` for more details on plotting.
#'
#' This function replaces the now-defunct `plotIndDot()` from COINr < v1.0.
#'
#' @param coin The coin
#' @param dset The name of the data set to apply the function to, which should be accessible in `.$Data`.
#' @param iCode Code of indicator or aggregate found in `dset`. Required to be of length 1.
#' @param Level The level in the hierarchy to extract data from. See [get_data()].
#' @param ... Further arguments to pass to [get_data()], other than those explicitly specified here.
#' @param marker_type The type of marker, either `"circle"` (default) or `"cross"`, or a marker number to pass to ggplot2 (0-25).
#' @param usel A subset of units to highlight.
#' @param add_stat A statistic to overlay, either `"mean"`, `"median"` or else a specified value.
#' @param stat_label An optional string to use as label at the point specified by `add_stat`.
#' @param show_ticks Set `FALSE` to remove axis ticks.
#' @param plabel Controls the labelling of the indicator. If `NULL` (default), returns the indicator code.
#' Otherwise if `"iName"`, returns only indicator name, if `"iName+unit"`, returns
#' indicator name plus unit (if found), if `"unit"` returns only unit (if found), otherwise if `"none"`,
#' displays no text. Finally, any other string can be passed, so e.g. `"My indicator"` will display this on the
#' axis.
#' @param usel_label If `TRUE` (default) also labels selected units with their unit codes. `FALSE` to disable.
#' @param vert_adjust Adjusts the vertical height of text labels and stat lines, which matters depending on plot size.
#' Takes a value between 0 and 2 (higher values will probably remove the label from the axis space).
#'
#' @importFrom ggplot2 ggplot aes theme_minimal ylab geom_point theme element_blank
#' @importFrom rlang .data
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin")
#'
#' # dot plot of LPI, highlighting two countries and with median shown
#' plot_dot(coin, dset = "Raw", iCode = "LPI", usel = c("JPN", "ESP"),
#' add_stat = "median", stat_label = "Median", plabel = "iName+unit")
#'
#' @return A ggplot2 plot object.
#'
#' @export
plot_dot <- function(coin, dset, iCode, Level = NULL, ..., usel = NULL, marker_type = "circle",
add_stat = NULL, stat_label = NULL, show_ticks = TRUE, plabel = NULL,
usel_label = TRUE, vert_adjust = 0.5){
# GET DATA ----------------------------------------------------------------
# forward any further arguments in ... to get_data, as documented
iData <- get_data(coin, dset = dset, iCodes = iCode, Level = Level, ...)
iData_ <- extract_iData(coin, iData, "iData_")
if(ncol(iData_) != 1){
stop("More than one indicator selected. This plot requires selection of a single indicator.")
}
ind_data <- cbind(y = 1, iData_)
colnames(ind_data) <- c("y", "x")
# BASE PLOT ---------------------------------------------------------------
if(marker_type=="circle"){
mno <- 21
} else if (marker_type == "cross"){
mno <- 3
} else {
mno <- marker_type
}
plt <- ggplot2::ggplot(ind_data, ggplot2::aes(x=.data$x, y=.data$y)) +
ggplot2::theme_minimal() +
ggplot2::geom_point(
color="blue",
fill="blue",
shape=mno,
alpha=0.5,
size=3,
#stroke = 0
) +
ggplot2::ylab(NULL) +
ggplot2::theme(axis.text.y = ggplot2::element_blank(),
axis.ticks.y = ggplot2::element_blank())
# TICKS -------------------------------------------------------------------
if(!show_ticks){
plt <- plt +
ggplot2::theme(axis.text.x = ggplot2::element_blank(),
axis.ticks.x = ggplot2::element_blank())
}
# HIGHLIGHT UNITS ---------------------------------------------------------
if(!is.null(usel)){
# select indicator plus unit code col
ind_data_wcodes <- iData[c("uCode", colnames(iData_))]
# filter to selected units
udfi <- ind_data_wcodes[ind_data_wcodes$uCode %in% usel,]
# check sth is left
if(nrow(udfi) == 0){
stop("None of the specified usel found in indicator data.")
}
# make into df ready for ggplot
udf <- data.frame(y = 1, udfi[[names(iData_)]])
colnames(udf) <- c("y", "x")
# overlay on plot
plt <- plt + ggplot2::geom_point(
data = udf,
ggplot2::aes(x=.data$x, y=.data$y),
color="red",
fill="blue",
shape=21,
alpha=0.7,
size=3,
stroke = 2
)
if(usel_label){
# add text labels
plt <- plt +
ggplot2::annotate("text", x = udf$x, y = 1 + vert_adjust/100, label = udfi$uCode,
angle = 45, hjust = 0.3, size = 3.5)
}
}
# STATS -------------------------------------------------------------------
if(!is.null(add_stat)){
if(add_stat == "mean"){
stat_val <- mean(unlist(iData_), na.rm = TRUE)
} else if (add_stat == "median"){
stat_val <- stats::median(unlist(iData_), na.rm = TRUE)
} else if (is.numeric(add_stat)){
stat_val <- add_stat
} else {
stop("add_stat not recognised. Should be 'mean', 'median', or a number.")
}
plt <- plt + ggplot2::annotate(
"segment", x = stat_val, y= 1 - vert_adjust/80,
xend = stat_val, yend = 1 + vert_adjust/80,
alpha = 0.5, linewidth = 2, colour = "#3CB371")
if(!is.null(stat_label)){
# add text labels
plt <- plt +
ggplot2::annotate("text", x = stat_val, y = 1 + vert_adjust/60, label = stat_label,
angle = 45, hjust = 0.2, size = 3.5)
}
}
# AXIS LABEL --------------------------------------------------------------
if(is.null(plabel)){
# just iCode
plabel <- names(iData_)
} else if (plabel == "none"){
# nothing
plabel <- NULL
} else if (plabel == "iName"){
plabel <- get_names(coin, iCodes = names(iData_))
} else if (plabel == "iName+unit"){
plabel <- paste0(get_names(coin, iCodes = names(iData_)), " (",
get_units(coin, names(iData_)), ")")
} else if (plabel == "unit"){
plabel <- get_units(coin, names(iData_))
}
plt <- plt + ggplot2::xlab(plabel)
# OUTPUT ------------------------------------------------------------------
plt + ggplot2::ylim(c(0.98, 1.02)) +
ggplot2::theme(text=ggplot2::element_text(family="sans"))
}
# ---- end of R/plot_dist.R ----
#' Framework plots
#'
#' Plots the hierarchical indicator framework. If `type = "sunburst"` (default), the framework is plotted as a
#' sunburst plot. If `type = "stack"` it is plotted as a linear stack. In both cases, the size of each component
#' is reflected by its weight and the weight of its parent, i.e. its "effective weight" in the framework.
#'
#' The colouring of the plot is defined to some extent by the `colour_level` argument. This should be specified
#' as an integer between 1 and the highest level in the framework (i.e. the maximum of the `iMeta$Level` column).
#' Levels higher than and including `colour_level` are coloured with individual colours from the standard colour
#' palette. Any levels *below* `colour_level` are coloured with the same colours as their parents, to emphasise
#' that they belong to the same group, and also to avoid repeating the colour palette. Levels below `colour_level`
#' can be additionally differentiated by setting `transparency = TRUE` which will apply increasing transparency
#' to lower levels.
#'
#' This function returns a ggplot2 class object. If you want more control over the appearance of the plot, assign
#' the output of this function to a variable and manipulate it further with ggplot2 commands, e.g. to
#' change the colour palette, individual colours, add titles, etc.
#' See `vignette("visualisation")` for more details on plotting.
#'
#' This function replaces the now-defunct `plotframework()` from COINr < v1.0.
#'
#' @param coin A coin class object
#' @param type Either `"sunburst"` or `"stack"`.
#' @param colour_level The framework level, as an integer, to colour from. See details.
#' @param text_colour Colour of label text - default `"white"`.
#' @param text_size Text size of labels, default 2.5
#' @param transparency If `TRUE`, levels below `colour_level` are differentiated with some transparency.
#'
#' @importFrom rlang .data
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # plot framework as sunburst, colouring at level 2 upwards
#' plot_framework(coin, colour_level = 2, transparency = TRUE)
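#'
#' # the same framework as a linear stack (illustrative)
#' plot_framework(coin, type = "stack")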
#'
#' @return A ggplot2 plot object
#' @export
plot_framework <- function(coin, type = "sunburst", colour_level = NULL,
text_colour = NULL, text_size = NULL, transparency = TRUE){
# CHECKS ------------------------------------------------------------------
check_coin_input(coin)
stopifnot(type %in% c("sunburst", "stack"))
# get iMeta
iMeta <- coin$Meta$Ind[!is.na(coin$Meta$Ind$Level), ]
maxlev <- coin$Meta$maxlev
# DEFAULTS ----------------------------------------------------------------
text_colour <- set_default(text_colour, "white")
text_size <- set_default(text_size, 2.5)
colour_level <- set_default(colour_level, maxlev - 1)
stopifnot(colour_level %in% 1:maxlev)
# COLOURS -----------------------------------------------------------------
# check if EffWeight present, if not, get
if(is.null(iMeta$EffWeight)){
coin <- get_eff_weights(coin, out2 = "coin")
# get iMeta
# get iMeta (same filter as above: rows with a Level, i.e. part of the index)
iMeta <- coin$Meta$Ind[!is.na(coin$Meta$Ind$Level), ]
}
# get lineage
lin <- coin$Meta$Lineage
# add colouring col
# this is fiddly
iMeta$colourcol <- "a"
for(lev in 1:maxlev){
# get codes
codes <- iMeta$iCode[iMeta$Level == lev]
if(lev <= colour_level){
# get groups at colour_level
iMeta$colourcol[match(codes, iMeta$iCode)] <-
lin[[colour_level]][match(codes, lin[[lev]])]
} else {
iMeta$colourcol[match(codes, iMeta$iCode)] <- codes
}
}
if(type == "sunburst"){
# some special treatment to get rid of the center circle
iMeta$EffWeight[iMeta$Level == maxlev] <- 0
iMeta$colourcol[iMeta$Level == maxlev] <- iMeta$colourcol[iMeta$Level == (maxlev - 1)][1]
}
# have to make colourcol into a factor column with an ordering of factors
# that I specify, otherwise ordering is wrong
fac_order <- unique(Reduce(c,rev(lin[-ncol(lin)])))
# reorder factors
iMeta$colourcol <- factor(iMeta$colourcol, fac_order)
# this is a secondary reordering that is necessary:
# Although things are ordered correctly according to colour, the ordering
# within colours and below colour_level was incorrect and this seems to fix it
# took ages and made my head hurt figuring this out D:
iMeta <- iMeta[match(fac_order, iMeta$iCode), ]
# transparency if needed
trans <- c(0.8,0.6,rep(0.4, 100))
iMeta$Alf <- 1
# Only levels below colour_level are given transparency
iMeta$Alf[iMeta$Level < colour_level] <- trans[colour_level - iMeta$Level[iMeta$Level < colour_level]]
# finally, I have to reverse the levels otherwise plot is inside out
iMeta$Level <- maxlev - iMeta$Level + 1
# PLOT --------------------------------------------------------------------
# basic
plt <- ggplot2::ggplot(iMeta, ggplot2::aes(x = .data$Level,
y = .data$EffWeight,
fill = .data$colourcol,
label = .data$iCode))
# bars
if(transparency){
plt <- plt + ggplot2::geom_bar(stat = "identity", color='white', alpha = iMeta$Alf)
} else {
plt <- plt + ggplot2::geom_bar(stat = "identity", color='white')
}
# text
plt <- plt + ggplot2::geom_text(size = text_size, check_overlap = TRUE, position = ggplot2::position_stack(vjust = 0.5),
colour = text_colour)
# alter to sunburst if needed
if(type == "sunburst"){
plt <- plt + ggplot2::coord_polar('y')
}
# styling
plt <- plt + ggplot2::theme_minimal() +
ggplot2::ylab("") + ggplot2::xlab("") +
ggplot2::theme(panel.grid.major = ggplot2::element_blank(),
panel.grid.minor = ggplot2::element_blank(),
panel.border = ggplot2::element_blank(),
panel.background = ggplot2::element_blank(),
strip.background = ggplot2::element_blank(),
axis.text= ggplot2::element_blank(),
axis.ticks= ggplot2::element_blank(),
legend.position="none"
)
plt +
ggplot2::theme(text=ggplot2::element_text(family="sans"))
}
# ---- end of R/plot_framework.R ----
#' Scatter plot of two variables
#'
#' This is a convenient quick scatter plot function for plotting any two variables x and y in a coin against each other.
#' At a minimum, you must specify the data set and iCode of both x and y using the `dsets` and `iCodes` arguments.
#'
#' Optionally, the scatter plot can be coloured by grouping variables specified in the coin (see `by_group`). Points
#' and axes can be labelled using other arguments.
#'
#' This function is powered by ggplot2 and outputs a ggplot2 object. To further customise the plot, assign the output
#' of this function to a variable and use ggplot2 commands to further edit. See `vignette("visualisation")` for more details on plotting.
#'
#' @param coin A coin object
#' @param dsets A 2-length character vector specifying the data sets from which to extract the x and y variables,
#' respectively (passed as the `dset` argument to [get_data()]). Alternatively specify as a single string,
#' which will be used for both x and y.
#' @param iCodes A 2-length character vector specifying the `iCodes` to use as the x and y variables,
#' respectively (passed as the `iCodes` argument to [get_data()]). Alternatively specify as a single string,
#' which will be used for both x and y.
#' @param ... Optional further arguments to be passed to [get_data()], e.g. to specify which `uCode`s to plot.
#' @param by_group A string specifying an optional group variable. If specified, the plot will be
#' coloured by this grouping variable.
#' @param alpha Transparency value for points between 0 and 1, passed to ggplot2.
#' @param axes_label A string specifying how to label axes and legend. Either `"iCode"` to use the respective codes
#' of each variable, or else `"iName"` to use the names (as specified in `iMeta`).
#' @param dset_label Logical: if `TRUE` (default), also adds to the axis labels which data set each variable is from.
#' @param point_label Specifies whether and how to label points. If `"uCode"`, points are labelled with their unit codes,
#' else if `"uName"`, points are labelled with their unit names. Set `NULL` to remove labels (default).
#' @param check_overlap Logical: if `TRUE` (default), point labels that overlap are removed - this results in a legible
#' plot but some labels may be missing. Else if `FALSE`, all labels are plotted.
#' @param nudge_y Parameter passed to ggplot which controls the vertical adjustment of the text labels if present.
#' @param log_scale A 2-length logical vector specifying whether to use log axes for x and y respectively: if `TRUE`,
#' a log axis will be used. Defaults to not-log.
#'
#' @importFrom rlang .data
#'
#' @return A ggplot2 object.
#' @export
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin")
#'
#' # scatter plot of Flights against Population
#' # coloured by GDP per capita
#' # log scale applied to population
#' plot_scatter(coin, dsets = c("uMeta", "Raw"),
#' iCodes = c("Population", "Flights"),
#' by_group = "GDPpc_group", log_scale = c(TRUE, FALSE))
#'
#'
plot_scatter <- function(coin, dsets, iCodes, ..., by_group = NULL,
alpha = 0.5, axes_label = "iCode", dset_label = TRUE,
point_label = NULL, check_overlap = TRUE, nudge_y = 5, log_scale = c(FALSE, FALSE)){
# PREP --------------------------------------------------------------------
stopifnot(is.character(dsets),
is.character(iCodes),
length(dsets) %in% c(1,2),
length(iCodes) %in% c(1,2),
axes_label %in% c("iCode", "iName"),
is.logical(log_scale),
length(log_scale) == 2)
if(length(dsets) == 1){
dsets <- rep(dsets, 2)
}
if(length(iCodes) == 1){
iCodes <- rep(iCodes, 2)
}
if(!is.null(point_label)){
stopifnot(is.character(point_label),
length(point_label) == 1)
if(point_label %nin% c("uCode", "uName")){
stop("point_label must be either NULL, 'uCode', or 'uName'.")
}
}
# GET DATA ----------------------------------------------------------------
if(!is.null(by_group)){
also_get <- by_group
} else {
also_get <- NULL
}
x1 <- get_data(coin, dset = dsets[1], iCodes = iCodes[1], also_get = also_get, ...)
x2 <- get_data(coin, dset = dsets[2], iCodes = iCodes[2], also_get = also_get, ...)
x12 <- merge(x1, x2, by = c("uCode", also_get), all = FALSE)
# if we have the same indicator plotted against itself, have to rename
iCodes_orig <- iCodes
if(iCodes[1] == iCodes[2]){
iCodes[1] <- names(x12)[2]
iCodes[2] <- names(x12)[3]
}
if(is.null(point_label) || (point_label == "uCode") ){
x12$plbs <- x12$uCode
} else {
# point_label == "uName": convert unit codes to unit names
x12$plbs <- ucodes_to_unames(coin, x12$uCode)
}
# PLOT --------------------------------------------------------------------
# setup: whether to plot by group or not
if(!is.null(by_group)){
plt <- ggplot2::ggplot(x12, ggplot2::aes(x = .data[[iCodes[1]]],
y = .data[[iCodes[2]]],
label = .data$plbs,
colour = .data[[by_group]]))
} else {
plt <- ggplot2::ggplot(x12, ggplot2::aes(x = .data[[iCodes[1]]],
y = .data[[iCodes[2]]],
label = .data$plbs))
}
# main plot
plt <- plt +
ggplot2::geom_point(alpha = alpha) +
ggplot2::theme_minimal()
# LABELS ------------------------------------------------------------------
# names
if(axes_label == "iName"){
lbs <- icodes_to_inames(coin, c(iCodes_orig, by_group))
} else {
lbs <- c(iCodes_orig, by_group)
}
# dset
if(dset_label){
lbs[1] <- paste0(lbs[1], " (", dsets[1], ")")
lbs[2] <- paste0(lbs[2], " (", dsets[2], ")")
}
if(is.null(by_group)){
plt <- plt + ggplot2::labs(
x = lbs[1],
y = lbs[2]
)
} else {
plt <- plt + ggplot2::labs(
x = lbs[1],
y = lbs[2],
colour = lbs[3]
)
}
# AXES --------------------------------------------------------------------
if(log_scale[1]){
plt <- plt + ggplot2::scale_x_log10()
}
if(log_scale[2]){
plt <- plt + ggplot2::scale_y_log10()
}
# POINT LABELS ------------------------------------------------------------
if(!is.null(point_label)){
plt <- plt + ggplot2::geom_text(size = 3,
vjust = 0, nudge_y = nudge_y,
check_overlap = check_overlap)
}
plt +
ggplot2::theme(text=ggplot2::element_text(family="sans"))
}
# ---- end of R/plot_scatter.R ----
# QUICK FUNCTIONS
#' Quick normalisation of a purse
#'
#' This is a wrapper function for [Normalise()], which offers a simpler syntax but less flexibility. It
#' normalises data sets within a purse using a specified function `f_n` which is used to normalise each indicator, with
#' additional function arguments passed by `f_n_para`. By default, `f_n = "n_minmax"` and `f_n_para` is
#' set so that the indicators are normalised using the min-max method, between 0 and 100.
#'
#' Essentially, this function is similar to [Normalise()] but brings parameters into the function arguments
#' rather than being wrapped in a list. It also does not allow individual normalisation.
#'
#' Normalisation can either be performed independently on each coin, or over the entire panel data set
#' simultaneously. See the discussion in [Normalise.purse()] and `vignette("normalise")`.
#'
#' @param x A purse
#' @param dset Name of data set to normalise
#' @param f_n Name of a normalisation function (as a string) to apply to each indicator. Default `"n_minmax"`.
#' @param f_n_para Any further arguments to pass to `f_n`, as a named list.
#' @param directions An optional data frame containing the following columns:
#' * `iCode` The indicator code, corresponding to the column names of the data frame
#' * `Direction` numeric vector with entries either `-1` or `1`
#' If `directions` is not specified, the directions will be taken from the `iMeta` table in the coin, if available.
#' @param global Logical: if `TRUE`, normalisation is performed "globally" across all coins, by using e.g. the
#' max and min of each indicator in any coin. This effectively makes normalised scores comparable between coins
#' because they are all scaled using the same parameters. Otherwise if `FALSE`, coins are normalised individually.
#' @param ... arguments passed to or from other methods.
#'
#' @return An updated purse with normalised data sets
#' @export
#'
#' @examples
#' # build example purse
#' purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
#'
#' # normalise using min-max, globally
#' purse <- qNormalise(purse, dset = "Raw", global = TRUE)
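#'
#' # min-max on a 0-10 scale, normalising each coin separately (illustrative)
#' purse <- qNormalise(purse, dset = "Raw", f_n_para = list(l_u = c(0, 10)),
#' global = FALSE)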
#'
qNormalise.purse <- function(x, dset, f_n = "n_minmax", f_n_para = list(l_u = c(0,100)),
directions = NULL, global = TRUE, ...){
# assemble default specs
specs_def <- list(f_n = f_n,
f_n_para = f_n_para)
# normalise
Normalise.purse(x, dset = dset, global_specs = specs_def, directions = directions,
global = global, write_to = NULL)
}
#' Quick normalisation of a coin
#'
#' This is a wrapper function for [Normalise()], which offers a simpler syntax but less flexibility. It
#' normalises a data set within a coin using a specified function `f_n` which is used to normalise each indicator, with
#' additional function arguments passed by `f_n_para`. By default, `f_n = "n_minmax"` and `f_n_para` is
#' set so that the indicators are normalised using the min-max method, between 0 and 100.
#'
#' Essentially, this function is similar to [Normalise()] but brings parameters into the function arguments
#' rather than being wrapped in a list. It also does not allow individual normalisation.
#'
#' See [Normalise()] documentation for more details, and `vignette("normalise")`.
#'
#' @param x A coin
#' @param dset Name of data set to normalise
#' @param f_n Name of a normalisation function (as a string) to apply to each indicator. Default `"n_minmax"`.
#' @param f_n_para Any further arguments to pass to `f_n`, as a named list.
#' @param directions An optional data frame containing the following columns:
#' * `iCode` The indicator code, corresponding to the column names of the data frame
#' * `Direction` numeric vector with entries either `-1` or `1`
#' If `directions` is not specified, the directions will be taken from the `iMeta` table in the coin, if available.
#' @param ... arguments passed to or from other methods.
#'
#' @return An updated coin with normalised data set.
#' @export
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # normalise raw data set using min max, but change to scale 1-10
#' coin <- qNormalise(coin, dset = "Raw", f_n = "n_minmax",
#' f_n_para = list(l_u = c(1,10)))
#'
qNormalise.coin <- function(x, dset, f_n = "n_minmax", f_n_para = list(l_u = c(0,100)),
directions = NULL, ...){
# write log
coin <- write_log(x, dont_write = "x", write2log = TRUE)
# assemble default specs
specs_def <- list(f_n = f_n,
f_n_para = f_n_para)
# normalise
Normalise.coin(coin, dset = dset, global_specs = specs_def,
directions = directions, out2 = "coin", write2log = FALSE)
}
#' Quick normalisation of a data frame
#'
#' This is a wrapper function for [Normalise()], which offers a simpler syntax but less flexibility. It
#' normalises a data frame using a specified function `f_n` which is used to normalise each column, with
#' additional function arguments passed by `f_n_para`. By default, `f_n = "n_minmax"` and `f_n_para` is
#' set so that the columns of `x` are normalised using the min-max method, between 0 and 100.
#'
#' Essentially, this function is similar to [Normalise()] but brings parameters into the function arguments
#' rather than being wrapped in a list. It also does not allow individual normalisation.
#'
#' See [Normalise()] documentation for more details, and `vignette("normalise")`.
#'
#'
#' @param x A numeric data frame
#' @param f_n Name of a normalisation function (as a string) to apply to each column of `x`. Default `"n_minmax"`.
#' @param f_n_para Any further arguments to pass to `f_n`, as a named list. If `f_n = "n_minmax"`, this defaults
#' to `list(l_u = c(0,100))` (scale between 0 and 100).
#' @param directions An optional data frame containing the following columns:
#' * `iCode` The indicator code, corresponding to the column names of the data frame
#' * `Direction` numeric vector with entries either `-1` or `1`
#' If `directions` is not specified, the directions will all be assigned as `1`. Non-numeric columns do not need
#' to have directions assigned.
#' @param ... arguments passed to or from other methods.
#'
#' @return A normalised data frame
#' @export
#'
#' @examples
#' # some made up data
#' X <- data.frame(uCode = letters[1:10],
#' a = runif(10),
#' b = runif(10)*100)
#' # normalise (defaults to min-max)
#' qNormalise(X)
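#'
#' # as above, but treating column "a" as a negative-direction indicator (illustrative)
#' dirs <- data.frame(iCode = c("a", "b"), Direction = c(-1, 1))
#' qNormalise(X, directions = dirs)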
#'
qNormalise.data.frame <- function(x, f_n = "n_minmax", f_n_para = NULL,
directions = NULL, ...){
# default para
if(f_n == "n_minmax"){
if(is.null(f_n_para)){
f_n_para <- list(l_u = c(0,100))
}
}
# assemble default specs
specs_def <- list(f_n = f_n,
f_n_para = f_n_para)
# normalise
Normalise.data.frame(x, global_specs = specs_def, directions = directions)
}
#' Quick normalisation
#'
#' This is a generic wrapper function for [Normalise()], which offers a simpler syntax but less flexibility.
#'
#' See individual method documentation:
#'
#' * [qNormalise.data.frame()]
#' * [qNormalise.coin()]
#' * [qNormalise.purse()]
#'
#' @param x Object to be normalised
#' @param ... arguments passed to or from other methods.
#'
#' @return A normalised object
#'
#' @export
qNormalise <- function (x, ...){
UseMethod("qNormalise")
}
#' Quick outlier treatment of a purse
#'
#' A simplified version of [Treat()] which allows direct access to the default parameters. This has less flexibility,
#' but is an easier interface and probably more convenient if the objective is to use the default treatment process
#' but with some minor adjustments.
#'
#' This function simply applies the same data treatment to each coin. See documentation for [Treat.coin()],
#' [qTreat.coin()] and `vignette("treat")`.
#'
#' @param x A purse
#' @param dset Name of data set to treat for outliers in each coin
#' @param winmax Maximum number of points to Winsorise for each indicator. Default 5.
#' @param skew_thresh Absolute skew threshold - default 2.
#' @param kurt_thresh Kurtosis threshold - default 3.5.
#' @param f2 Function to call if Winsorisation does not bring skew and kurtosis within limits. Default `"log_CT"`.
#' @param ... arguments passed to or from other methods.
#'
#' @return An updated purse
#' @export
#'
#' @examples
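#' # a minimal illustrative example:
#' purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
#' purse <- qTreat(purse, dset = "Raw", winmax = 3)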
qTreat.purse <- function(x, dset, winmax = 5, skew_thresh = 2, kurt_thresh = 3.5, f2 = "log_CT",
...){
# pass args to specs list
global_specs <- list(f1 = "winsorise",
f1_para = list(winmax = winmax,
skew_thresh = skew_thresh,
kurt_thresh = kurt_thresh),
f2 = f2,
f_pass_para = list(skew_thresh = skew_thresh,
kurt_thresh = kurt_thresh))
# treat
Treat.purse(x, dset = dset, global_specs = global_specs)
}
#' Quick outlier treatment of a coin
#'
#' A simplified version of [Treat()] which allows direct access to the default parameters. This has less flexibility,
#' but is an easier interface and probably more convenient if the objective is to use the default treatment process
#' but with some minor adjustments.
#'
#' This function treats each indicator in the data set targeted by `dset` using the following process:
#'
#' * First, it checks whether skew and kurtosis are within the specified limits of `skew_thresh` and `kurt_thresh`
#' * If the indicator is not within the limits, it applies the [winsorise()] function, with maximum number of winsorised
#' points specified by `winmax`.
#' * If winsorisation does not bring the indicator within the skew/kurtosis limits, it is instead passed to `f2`, which is
#' a second outlier treatment function, default [log_CT()].
#'
#' The arguments of [qTreat()] are passed to [Treat()].
#'
#' See [Treat()] documentation for more details, and `vignette("treat")`.
#'
#' @param x A coin
#' @param dset Name of data set to treat for outliers
#' @param winmax Maximum number of points to Winsorise for each indicator. Default 5.
#' @param skew_thresh Absolute skew threshold - default 2.
#' @param kurt_thresh Kurtosis threshold - default 3.5.
#' @param f2 Function to call if Winsorisation does not bring skew and kurtosis within limits. Default `"log_CT"`.
#' @param ... arguments passed to or from other methods.
#'
#' @return An updated coin with treated data set at `.$Data$Treated`.
#' @export
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # treat with winmax = 3
#' coin <- qTreat(coin, dset = "Raw", winmax = 3)
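#'
#' # tighter skew/kurtosis limits (illustrative; replaces the previous treated data set)
#' coin <- qTreat(coin, dset = "Raw", winmax = 3,
#' skew_thresh = 1.5, kurt_thresh = 3)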
#'
qTreat.coin <- function(x, dset, winmax = 5, skew_thresh = 2, kurt_thresh = 3.5,
f2 = "log_CT", ...){
# write log
coin <- write_log(x, dont_write = "x", write2log = TRUE)
# pass args to specs list
global_specs <- list(f1 = "winsorise",
f1_para = list(winmax = winmax,
skew_thresh = skew_thresh,
kurt_thresh = kurt_thresh),
f2 = f2,
f_pass_para = list(skew_thresh = skew_thresh,
kurt_thresh = kurt_thresh))
# treat (note, don't write to log here since it has been written by qTreat)
Treat.coin(coin, dset = dset, global_specs = global_specs, out2 = "coin", write2log = FALSE)
}
#' Quick outlier treatment of a data frame
#'
#' A simplified version of [Treat()] which allows direct access to the default parameters. This has less flexibility,
#' but is an easier interface and probably more convenient if the objective is to use the default treatment process
#' but with some minor adjustments.
#'
#' This function treats each column in `x` using the following process:
#'
#' * First, it checks whether skew and kurtosis are within the specified limits of `skew_thresh` and `kurt_thresh`
#' * If the column is not within the limits, it applies the [winsorise()] function, with maximum number of winsorised
#' points specified by `winmax`.
#' * If winsorisation does not bring the column within the skew/kurtosis limits, it is instead passed to `f2`, which is
#' a second outlier treatment function, default [log_CT()].
#'
#' The arguments of [qTreat()] are passed to [Treat()].
#'
#' See [Treat()] documentation for more details, and `vignette("treat")`.
#'
#' @param x A numeric data frame
#' @param winmax Maximum number of points to Winsorise for each column. Default 5.
#' @param skew_thresh Absolute skew threshold - default 2.
#' @param kurt_thresh Kurtosis threshold - default 3.5.
#' @param f2 Function to call if Winsorisation does not bring skew and kurtosis within limits. Default `"log_CT"`.
#' @param ... arguments passed to or from other methods.
#'
#' @return A list
#' @export
#'
#' @examples
#' # select three indicators
#' df1 <- ASEM_iData[c("Flights", "Goods", "Services")]
#'
#' # treat data frame, changing winmax and skew/kurtosis limits
#' l_treat <- qTreat(df1, winmax = 1, skew_thresh = 1.5, kurt_thresh = 3)
#'
#' # Now we check what the results are:
#' l_treat$Dets_Table
#'
qTreat.data.frame <- function(x, winmax = 5, skew_thresh = 2, kurt_thresh = 3.5,
f2 = "log_CT", ...){
# pass args to specs list
global_specs <- list(f1 = "winsorise",
f1_para = list(winmax = winmax,
skew_thresh = skew_thresh,
kurt_thresh = kurt_thresh),
f2 = f2,
f_pass_para = list(skew_thresh = skew_thresh,
kurt_thresh = kurt_thresh))
# treat
Treat.data.frame(x, global_specs = global_specs)
}
#' Quick outlier treatment
#'
#' This is a generic wrapper function for [Treat()]. It offers a simpler syntax but less flexibility.
#'
#' See individual method documentation:
#'
#' * [qTreat.data.frame()]
#' * [qTreat.coin()]
#' * [qTreat.purse()]
#'
#' @param x Object to be treated for outliers.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # See individual method examples
#'
#' @return A treated object
#'
#' @export
qTreat <- function (x, ...){
UseMethod("qTreat")
}
# END OF R/qfuncs.R ---------------------------------------------------------
# COIN REGENERATION
#' Regenerate a purse
#'
#' Regenerates the `.$Data` entries in all coins by rerunning the construction functions according to the specifications in
#' `.$Log`, for each coin in the purse. This effectively regenerates the results.
#'
#' The `from` argument allows partial regeneration, starting from a
#' specified function. This can be helpful to speed up regeneration in some cases. However, keep in mind that
#' if you change a `.$Log` argument of a function that runs *before* the point you choose to start regenerating
#' from, that change will not be reflected in the results.
#'
#' Note that for the moment, regeneration of purses is only partially supported. This is because usually, in the
#' normalisation step, it is necessary to normalise across the full panel data set (see the `global` argument in
#' [Normalise()]). At the moment, purse regeneration is performed by regenerating each coin individually, but this
#' does not allow for global normalisation which has to be done at the purse level. This may be fixed in future
#' releases.
#'
#' See also documentation for [Regen.coin()] and `vignette("adjustments")`.
#'
#' @param x A purse class object
#' @param from Optional: a construction function name. If specified, regeneration begins from this function, rather
#' than re-running all functions.
#' @param quietly If `TRUE` (default), messages are suppressed during building.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # see examples from Regen.coin() and vignette("adjustments")
#'
#' @return Updated purse object with regenerated results.
#'
#' @export
Regen.purse <- function(x, from = NULL, quietly = TRUE, ...){
# input check
check_purse(x)
# regen each coin
x$coin <- lapply(x$coin, function(coin){
Regen.coin(coin, from = from, quietly = quietly)
})
# make sure still purse class
class(x) <- c("purse", "data.frame")
x
}
#' Regenerate a coin
#'
#' Regenerates the `.$Data` entries in a coin by rerunning the construction functions according to the specifications in `.$Log`.
#' This effectively regenerates the results. Different variations of coins can be quickly achieved by editing the
#' saved arguments in `.$Log` and regenerating.
#'
#' The `from` argument allows partial regeneration, starting from a
#' specified function. This can be helpful to speed up regeneration in some cases. However, keep in mind that
#' if you change a `.$Log` argument of a function that runs *before* the point you choose to start regenerating
#' from, that change will not be reflected in the results.
#'
#' Note that while sets of weights will be passed to the regenerated coin, anything in `.$Analysis` will be removed
#' and will have to be recalculated.
#'
#' See also `vignette("adjustments")` for more info on regeneration.
#'
#' @param x A coin class object
#' @param from Optional: a construction function name. If specified, regeneration begins from this function, rather
#' than re-running all functions.
#' @param quietly If `TRUE` (default), messages are suppressed during building.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # build full example coin
#' coin <- build_example_coin(quietly = TRUE)
#'
#' # copy coin
#' coin2 <- coin
#'
#' # change to prank function (percentile ranks)
#' # we don't need to specify any additional parameters (f_n_para) here
#' coin2$Log$Normalise$global_specs <- list(f_n = "n_prank")
#'
#' # regenerate
#' coin2 <- Regen(coin2)
#'
#' # compare index, sort by absolute rank difference
#' compare_coins(coin, coin2, dset = "Aggregated", iCode = "Index",
#' sort_by = "Abs.diff", decreasing = TRUE)
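#'
#' # partial regeneration: only rerun from the normalisation step onwards
#' # (a sketch: assumes "Normalise" is present in coin2$Log, as in the example coin)
#' coin2 <- Regen(coin2, from = "Normalise")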
#'
#' @return Updated coin object with regenerated results (data sets).
#'
#' @export
Regen.coin <- function(x, from = NULL, quietly = TRUE, ...){
coin <- x
stopifnot(is.coin(coin))
# GATHER PARAMS -----------------------------------------------------------
# the full list of function arguments, for each build_ function
f_logs <- coin$Log
f_names <- names(f_logs)
# check if can regenerate
stopifnot(!is.null(f_logs$can_regen))
if(!f_logs$can_regen){
stop("Cannot regenerate coin. This may be because it has been normalised with global = TRUE, or
it has been converted from an older COIN class.")
}
# remove can_regen from here
f_names <- setdiff(f_names, "can_regen")
f_logs <- f_logs[f_names]
# here we exclude any function names that are before "from", if it is specified
if(!is.null(from)){
if(from %nin% f_names){
stop("Function name specified by 'from' is not found in the coin log.")
}
i_name <- which(f_names == from) - 1
if(i_name > 0){
f_names <- f_names[-(1:i_name)]
}
}
# RERUN FUNCS -------------------------------------------------------------
# looping over build_ functions
for (func in f_names){
# the arguments of the same func, stored in Log
f_log <- f_logs[[func]]
# the declared arguments of the function
# NOTE this doesn't work here since construction funcs are now methods, so args are all (x, ...)
#f_args <- names(formals(func))
# check if what is in Log agrees with function arguments
# if(!all(names(f_log) %in% f_args)){
# stop(paste0("Mismatch between function arguments of ", func, " and .$Log entry. Cannot regenerate."))
# }
# run function at arguments
if(func == "new_coin"){
if(quietly){
coin <- suppressMessages( do.call(func, args = f_log) )
} else {
coin <- do.call(func, args = f_log)
}
# we also need to pass old weights to new coin
wlist_old <- x$Meta$Weights[names(x$Meta$Weights) != "Original"]
# the only thing to check is whether the iCodes are the same. If not, means that something has happened
# to the indicator set, so we don't pass to be safe
same_codes <- sapply(wlist_old, function(w){
setequal(coin$Meta$Weights$Original$iCode, w$iCode)
})
if(any(!same_codes)){
message("Did not pass additional weight sets in .$Meta$Weights because iCodes do not match new coin.")
} else {
coin$Meta$Weights <- c(coin$Meta$Weights, wlist_old)
}
} else {
# add coin obj to arg list (not logged for obvious inception reasons)
if(quietly){
coin <- suppressMessages( do.call(func, args = c(list(x = coin), f_log) ) )
} else {
coin <- do.call(func, args = c(list(x = coin), f_log) )
}
}
}
# WEIGHTS -----------------------------------------------------------------
coin
}
#' Regenerate a coin or purse
#'
#' Methods for regenerating coins and purses. Regeneration is re-running all the functions used to build
#' the coin/purse, using the order and parameters found in the `.$Log` list of the coin.
#'
#' Please see individual method documentation:
#'
#' * [Regen.coin()]
#' * [Regen.purse()]
#'
#' See also `vignette("adjustments")`.
#'
#' This function replaces the now-defunct `regen()` from COINr < v1.0.
#'
#' @param x A coin or purse object to be regenerated
#' @param from Optional: a construction function name. If specified, regeneration begins from this function, rather
#' than re-running all functions.
#' @param quietly If `TRUE` (default), messages are suppressed during building.
#'
#' @examples
#' # see individual method examples
#'
#' @return A regenerated object
#'
#' @export
Regen <- function(x, from = NULL, quietly = TRUE){
UseMethod("Regen")
}
#' Add and remove indicators
#'
#' A shortcut function to add and remove indicators. This will make the relevant changes
#' and recalculate the index if asked. Adding and removing is done relative to the current set of
#' indicators used in calculating the index results. Any indicators that are added must of course be
#' present in the original `iData` and `iMeta` that were input to `new_coin()`.
#'
#' See also `vignette("adjustments")`.
#'
#' This function replaces the now-defunct `indChange()` from COINr < v1.0.
#'
#' @param coin coin object
#' @param add A character vector of indicator codes to add (must be present in the original input data)
#' @param drop A character vector of indicator codes to remove (must be present in the original input data)
#' @param regen Logical (default `FALSE`): if `TRUE`, automatically regenerates the results based on the new specs.
#' Otherwise, just updates the `.$Log` parameters. The latter might be useful if you want to
#' make other changes before re-running using the [Regen()] function.
#'
#' @examples
#' # build full example coin
#' coin <- build_example_coin(quietly = TRUE)
#'
#' # remove two indicators and regenerate the coin
#' coin_remove <- change_ind(coin, drop = c("LPI", "Forest"), regen = TRUE)
#'
#' coin_remove
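#'
#' # add one of the dropped indicators back in and regenerate again
#' coin_add <- change_ind(coin_remove, add = "LPI", regen = TRUE)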
#'
#' @return An updated coin, with regenerated results if `regen = TRUE`.
#'
#' @export
change_ind <- function(coin, add = NULL, drop = NULL, regen = FALSE){
# find existing indicator set
iCodes_1 <- coin$Meta$Ind$iCode[coin$Meta$Ind$Type == "Indicator"]
# full set of codes from iMeta
iCodes_0 <- coin$Log$new_coin$iMeta$iCode[coin$Log$new_coin$iMeta$Type == "Indicator"]
# CHECKS
if(!is.null(add)){
if(any(add %nin% iCodes_0)){
stop("One or more iCodes in 'add' not found in original indicator data set.")
}
}
if(!is.null(drop)){
if(any(drop %nin% iCodes_1)){
stop("One or more iCodes in 'drop' not found in existing indicator data set.")
}
}
if(!is.null(drop) & !is.null(add)){
if(length(intersect(add, drop)) > 0){
stop("One or more iCodes in 'drop' also found in 'add'!")
}
}
# NOW GET SET OF iCODES TO USE
# add first
iCodes_2 <- union(iCodes_1, add)
# then drop
iCodes_2 <- setdiff(iCodes_2, drop)
# now find the exclude parameter: diff between iCodes_2 and iCodes_0
exclude <- setdiff(iCodes_0, iCodes_2)
# EDIT .$Log
coin$Log$new_coin$exclude <- exclude
# REGEN if asked (nicely)
if(regen==TRUE){
coin <- Regen(coin, quietly = TRUE)
message("coin has been regenerated using new specs.")
} else {
message("coin parameters changed but results NOT updated. Use coinr::regen() to regenerate
results or set regen = TRUE in change_inds().")
}
coin
}
# END OF R/regen.R ----------------------------------------------------------
#' Results summary tables
#'
#' Generates fast results tables, either attached to the coin or as a data frame.
#'
#' Although results are available in a coin in `.$Data`, the format makes it difficult to quickly present results. This function
#' generates results tables that are suitable for immediate presentation, i.e. sorted by index or other indicators, and only including
#' relevant columns. Scores are also rounded by default, and there is the option to present scores or ranks.
#'
#' See also `vignette("results")` for more info.
#'
#' This function replaces the now-defunct `getResults()` from COINr < v1.0.
#'
#' @param coin The coin object
#' @param dset Name of data set in `.$Data`
#' @param also_get Names of further columns to attach to table.
#' @param tab_type The type of table to generate. Either `"Summ"` (a single indicator plus rank), `"Aggs"` (all aggregated
#' scores/ranks above indicator level), or `"Full"` (all scores/ranks plus all group, denominator columns).
#' @param use Either `"scores"` (default), `"ranks"`, or `"groupranks"`. For the latter, `use_group` must be specified.
#' @param order_by A code of the indicator or aggregate to sort the table by. If not specified, defaults to the highest
#' aggregate level, i.e. the index in most cases. If `use_group` is specified, rows will also be sorted by the specified group.
#' @param nround The number of decimal places to round numerical values to. Defaults to 2.
#' @param use_group An optional grouping variable. If specified, the results table includes this group column,
#' and if `use = "groupranks"`, ranks will be returned with respect to the groups in this column.
#' @param out2 If `"df"`, outputs a data frame (tibble). Else if `"coin"` attaches to `.$Results` in an updated coin.
#' @param dset_indicators Optional data set from which to take only indicator (level 1) data from. This can be set to `"Raw"`
#' for example, so that all aggregates come from the aggregated data set, and the indicators come from the raw data set. This
#' can make more sense in presenting results in many cases, so that the "real" indicator data is visible.
#'
#' @examples
#' # build full example coin
#' coin <- build_example_coin(quietly = TRUE)
#'
#' # get results table
#' df_results <- get_results(coin, dset = "Aggregated", tab_type = "Aggs")
#'
#' head(df_results)
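#'
#' # the same results as ranks, sorted by a sub-index
#' # (illustrative: "Conn" is assumed to be an aggregate code in the example coin)
#' df_ranks <- get_results(coin, dset = "Aggregated", tab_type = "Summ",
#'                         use = "ranks", order_by = "Conn")
#' head(df_ranks)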
#'
#' @return If `out2 = "df"`, the results table is returned as a data frame. If `out2 = "coin"`, this function returns an updated
#' coin with the results table attached to `.$Results`.
#'
#' @export
get_results <- function(coin, dset, tab_type = "Summ", also_get = NULL, use = "scores", order_by = NULL,
nround = 2, use_group = NULL, dset_indicators = NULL, out2 = "df"){
# CHECKS ------------------------------------------------------------------
stopifnot(tab_type %in% c("Summ", "Aggs", "Full"),
use %in% c("scores", "ranks", "groupranks"),
is.numeric(nround),
out2 %in% c("df", "coin"))
check_coin_input(coin)
# GET DATA ----------------------------------------------------------------
# merge also_get with use_group
also_get <- union(use_group, also_get)
# data
iData <- get_data(coin, dset = dset, also_get = also_get, use_group = use_group)
# optionally indicator data from another data set (probably raw)
if(!is.null(dset_indicators)){
iDatai <- get_dset(coin, dset = dset_indicators)
# order rows by iData (also filter to only units in iData)
iDatai <- iDatai[match(iData$uCode, iDatai$uCode), ]
# get all iData cols which are indicators
ind_cols <- names(iData)[names(iData) %in% coin$Meta$Ind$iCode[which(coin$Meta$Ind$Type == "Indicator")]]
# hot swap
stopifnot(all(ind_cols %in% names(iDatai)))
iData[ind_cols] <- iDatai[ind_cols]
}
# get meta col names
mcols <- extract_iData(coin, iData, GET = "mCodes")
# get iMeta
iMeta <- coin$Meta$Ind
# iMeta with only indicators and agg rows
iMeta_ia <- iMeta[iMeta$Type %in% c("Indicator", "Aggregate"), ]
# order it from top level down
iMeta_ia <- iMeta_ia[order(-iMeta_ia$Level, iMeta_ia$Parent), ]
# check if this is an aggregated data set
if(any(iMeta_ia$iCode %nin% names(iData))){
stop("The data set extracted by 'dset' does not seem to be an aggregated data set (indicator or aggregate codes are missing).")
}
# ORDERING ------------------------------------------------------------
# results table (sorted by rows and cols)
iData <- iData[c(mcols, iMeta_ia$iCode)]
# get the column name to use for sorting the df
if(is.null(order_by)){
sortcode <- iMeta_ia$iCode[iMeta_ia$Level == coin$Meta$maxlev]
} else {
if(order_by %nin% names(iData)){
stop("'order_by' is not found in the selected data set.")
}
sortcode <- order_by
}
iData$Rank <- rank(-1*iData[[sortcode]], na.last = "keep", ties.method = "min")
# BUILD TABLE -----------------------------------------------------------------
if(tab_type %in% c("Summ", "Summary")){
# Just the indicator/index plus ranks
tabout <- iData[c(mcols, sortcode, "Rank")]
} else if (tab_type %in% c("Aggs", "Aggregates")){
# All the aggregate scores
tabout <- iData[c(mcols, "Rank", iMeta_ia$iCode[iMeta_ia$Type == "Aggregate"])]
} else if (tab_type %in% c("Full", "FullWithDenoms")){
# Get sorted indicator codes, not aggregates
othercodes <- coin$Meta$Lineage[[1]]
stopifnot(any(othercodes %in% names(iData)))
# All the aggregate scores
tabout <- iData[c(mcols, "Rank", iMeta_ia$iCode[iMeta_ia$Type == "Aggregate"], othercodes)]
}
# Sorting
tabout <- tabout[order(-tabout[[sortcode]]),]
# Rounding
tabout <- round_df(tabout, nround)
# Ranks
if(use == "ranks"){
tabout <- tabout[colnames(tabout) != "Rank"]
tabout <- rank_df(tabout)
} else if (use =="groupranks"){
if(is.null(use_group)){
stop("If groupranks is specified, you need to also specify use_group.")
}
tabout <- tabout[colnames(tabout) != "Rank"]
tabout <- rank_df(tabout, use_group = use_group)
# sort by group
tabout <- tabout[order(tabout[[use_group]]),]
}
# FINISH AND OUTPUT -------------------------------------------------
if(out2 == "df"){
return(tabout)
} else if (out2 == "coin"){
if(use == "scores"){
coin$Results[[paste0(tab_type,"Score")]] <- tabout
} else if (use == "ranks"){
coin$Results[[paste0(tab_type,"Rank")]] <- tabout
} else if (use == "groupranks"){
coin$Results[[paste0(tab_type,"GrpRnk", use_group)]] <- tabout
}
return(coin)
} else {
stop("out2 not recognised!")
}
}
#' Generate unit summary table
#'
#' Generates a summary table for a single unit. This is mostly useful in unit reports.
#'
#' This returns the scores and ranks for each indicator/aggregate as specified in `aglevs`. It orders the table so that
#' the highest aggregation levels are first. This means that if the index level is included, it will be first.
#'
#' This function replaces the now-defunct `getUnitSummary()` from COINr < v1.0.
#'
#' @param coin A coin
#' @param usel A selected unit code
#' @param Levels The aggregation levels to display results from.
#' @param dset The data set within the coin to extract scores and ranks from
#' @param nround Number of decimals to round scores to, default 2. Set to `NULL` to disable rounding.
#'
#' @examples
#' # build full example coin
#' coin <- build_example_coin(quietly = TRUE)
#'
#' # summary of scores for IND at levels 4, 3 and 2
#' get_unit_summary(coin, usel = "IND", Levels = c(4,3,2), dset = "Aggregated")
#'
#' @return A summary table as a data frame, containing scores and ranks for specified indicators/aggregates.
#'
#' @export
get_unit_summary <- function(coin, usel, Levels, dset = "Aggregated", nround = 2){
# get rank and score tables
scrs <- get_data(coin, dset = dset)
rnks <- rank_df(scrs)
if(usel %nin% scrs$uCode){
stop("usel not found in selected data set!")
}
# get ind/agg codes etc and order
iMeta_ <- coin$Meta$Ind[coin$Meta$Ind$Type %in% c("Indicator", "Aggregate"), ]
iMeta_ <- iMeta_[order(-iMeta_$Level), ]
if(any(Levels %nin% 1:max(iMeta_$Level))){
stop("Levels must be integers between 1 and the maximum level.")
}
# select codes of levels
agcodes <- iMeta_$iCode[iMeta_$Level %in% Levels]
agnames <- iMeta_$iName[iMeta_$Level %in% Levels]
if(any(agcodes %nin% names(scrs))){
stop("One or more indicator or aggregate codes not found in the selected data set. You may need to point to
an aggregated data set.")
}
# select cols corresponding to inds/aggs
scrs1 <- scrs[agcodes]
rnks1 <- rnks[agcodes]
# make output table, inc. unit selection
tabout <- data.frame(
Code = agcodes,
Name = agnames,
Score = as.numeric(scrs1[scrs$uCode == usel, ]),
Rank = as.numeric(rnks1[rnks$uCode == usel, ])
)
# round
if(!is.null(nround)){
df_out <- round_df(tabout, nround)
} else {
df_out <- tabout
}
df_out
}
#' Generate strengths and weaknesses for a specified unit
#'
#' Generates a table of strengths and weaknesses for a selected unit, based on ranks, or ranks within
#' a specified grouping variable.
#'
#' This currently only works at the indicator level. Indicators with `NA` values for the selected unit are ignored.
#' Strengths and weaknesses mean the `topN`-ranked indicators for the selected unit. Effectively, this takes the rank that the
#' selected unit has in each indicator, sorts the ranks, and takes the top N highest and lowest.
#'
#' This function must be used with a little care: indicators should be adjusted for their directions before use,
#' otherwise a weakness might be counted as a strength, and vice versa. Use the `adjust_direction` parameter
#' to help here.
#'
#' A further useful parameter is `unq_discard`, which also filters out any indicators with a low number of
#' unique values, based on a specified threshold. Also `min_discard` which filters out any indicators which
#' have the minimum rank.
#'
#' The best way to use this function is to play around with the settings a little bit. The reason being that
#' in practice, indicators have very different distributions and these can sometimes lead to unexpected
#' outcomes. An example is if you have an indicator with 50% zero values, and the rest non-zero (but unique).
#' Using the sport ranking system, all units with zero values will receive a rank which is equal to the number
#' of units divided by two. This then might be counted as a "strength" for some units with overall low scores.
#' But a zero value can hardly be called a strength. This is where the `min_discard` function can help out.
#'
#' Problems such as these mainly arise when e.g. generating a large number of country profiles.
#'
#' This function replaces the now-defunct `getStrengthNWeak()` from COINr < v1.0.
#'
#' @param coin A coin
#' @param dset The data set to extract indicator data from, to use as strengths and weaknesses.
#' @param usel A selected unit code
#' @param topN The top N indicators to report
#' @param bottomN The bottom N indicators to report
#' @param withcodes If `TRUE` (default), also includes a column of indicator codes. Setting to `FALSE` may be more useful
#' in generating reports, where codes are not helpful.
#' @param use_group An optional grouping variable to use for reporting
#' in-group ranks. Specifying this will report the ranks of the selected unit within the group of `use_group`
#' to which it belongs.
#' @param unq_discard Optional parameter for handling discrete indicators. Some indicators may be binary
#' variables of the type "yes = 1", "no = 0". These may be picked up as strengths or weaknesses, when they
#' may not be wanted to be highlighted, since e.g. maybe half of units will have a zero or a one. This argument
#' takes a number between 0 and 1 specifying a unique value threshold for ignoring indicators as strengths. E.g.
#' setting `unq_discard = 0.2` will ensure that only indicators with at least 20% unique values will be
#' highlighted as strengths or weaknesses. Set to `NULL` to disable (default).
#' @param min_discard If `TRUE` (default), discards any strengths which correspond to the minimum rank for the given
#' indicator. See details.
#' @param report_level Aggregation level to report parent codes from. For example, setting
#' `report_level = 2` (default) will add a column to the strengths and weaknesses tables which reports the aggregation
#' group from level 2, to which each reported indicator belongs.
#' @param with_units If `TRUE` (default), includes indicator units in output tables.
#' @param adjust_direction If `TRUE`, will adjust directions of indicators according to the "Direction" column
#' of `iMeta`. By default, this is `TRUE` *if* `dset = "Raw"`, and `FALSE` otherwise.
#' @param sig_figs Number of significant figures to round values to. If `NULL` returns values as they are.
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # get strengths and weaknesses for ESP
#' get_str_weak(coin, dset = "Raw", usel = "ESP")
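#'
#' # the same, but report only the top/bottom 3 and skip indicators with
#' # less than 20% unique values (illustrative settings)
#' get_str_weak(coin, dset = "Raw", usel = "ESP", topN = 3, bottomN = 3,
#'              unq_discard = 0.2)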
#'
#' @return A list containing a data frame `.$Strengths`, and a data frame `.$Weaknesses`.
#' Each data frame has columns with indicator code, name, rank and value (for the selected unit).
#'
#' @export
get_str_weak <- function(coin, dset, usel = NULL, topN = 5, bottomN = 5, withcodes = TRUE,
use_group = NULL, unq_discard = NULL, min_discard = TRUE, report_level = NULL,
with_units = TRUE, adjust_direction = NULL, sig_figs = 3){
# PREP --------------------------------------------------------------------
# get iMeta
iMeta_ <- coin$Meta$Ind[coin$Meta$Ind$Type == "Indicator", ]
# indicator codes
iCodes <- iMeta_$iCode
stopifnot(length(usel) == 1,
is.character(usel),
topN %in% 1:length(iCodes),
bottomN %in% 1:length(iCodes),
is.logical(withcodes),
unq_discard >= 0,
unq_discard <= 1,
is.logical(min_discard),
report_level %in% 2:max(iMeta_$Level),
is.logical(with_units))
# scores
if(is.null(dset)) dset <- "Raw"
data_scrs <- get_dset(coin, dset = dset, also_get = use_group)
if(usel %nin% data_scrs$uCode){
stop("usel not found in selected data set!")
}
# first, we have to adjust for direction
if(is.null(adjust_direction)){
if(dset == "Raw"){
adjust_direction <- TRUE
} else {
adjust_direction <- FALSE
}
}
stopifnot(is.logical(adjust_direction))
# GET S&W -----------------------------------------------------------------
# make a copy to adjust by direction
data_scrs1 <- data_scrs
if(adjust_direction){
# note: directions are in the same order as iCodes
directions <- iMeta_$Direction
data_scrs1[iCodes] <- as.data.frame(mapply(`*`, data_scrs1[iCodes], directions))
}
data_rnks <- rank_df(data_scrs1, use_group = use_group)
# unique value filtering
if(!is.null(unq_discard)){
# find fraction of unique vals for each indicator
frc_unique <- apply(data_scrs[iCodes], MARGIN = 2,
function(x){
length(unique(x))/length(x)
})
# filter indicator codes to only the ones with frac unique above thresh
iCodes <- iCodes[frc_unique > unq_discard]
}
# isolate the row and indicator cols
rnks_usel <- data_rnks[data_rnks$uCode == usel, iCodes]
# remove NAs
rnks_usel <- rnks_usel[,!is.na(as.numeric(rnks_usel))]
# Also need to (optionally) remove minimum rank entries
# (by min I mean MAX, i.e. min SCORE)
if(min_discard){
rnks_min <- as.data.frame(lapply(data_rnks[colnames(rnks_usel)], max, na.rm = T))
rnks_usel <- rnks_usel[,!(rnks_usel == rnks_min)]
}
# sort by row values
rnks_usel <- rnks_usel[ ,order(as.numeric(rnks_usel[1,]))]
# get strengths and weaknesses
Scodes <- colnames(rnks_usel)[1:topN]
Wcodes <- colnames(rnks_usel)[ (ncol(rnks_usel) - bottomN + 1) : ncol(rnks_usel) ]
# find agg level column of interest
if(is.null(report_level)){
report_level <- 2
}
lin <- coin$Meta$Lineage
agcolname <- names(lin)[report_level]
# get values and round if asked
sValues <- as.numeric(data_scrs[data_scrs$uCode == usel ,Scodes])
wValues <- as.numeric(data_scrs[data_scrs$uCode == usel ,Wcodes])
if(!is.null(sig_figs)){
stopifnot(sig_figs %in% 0:100)
sValues <- signif(sValues, sig_figs)
wValues <- signif(wValues, sig_figs)
}
# MAKE TABLES -------------------------------------------------------------
strengths <- data.frame(
Code = Scodes,
Name = iMeta_$iName[match(Scodes, iMeta_$iCode)],
Dimension = lin[[agcolname]][match(Scodes, lin[[1]])],
Rank = as.numeric(rnks_usel[Scodes]),
Value = sValues
)
names(strengths)[3] <- agcolname
weaks <- data.frame(
Code = Wcodes,
Name = iMeta_$iName[match(Wcodes, iMeta_$iCode)],
Dimension = lin[[agcolname]][match(Wcodes, lin[[1]])],
Rank = as.numeric(rnks_usel[Wcodes]),
Value = wValues
)
names(weaks)[3] <- agcolname
# units
# if units col exists and requested
if(with_units & !is.null(iMeta_$Unit)){
strengths$Unit <- iMeta_$Unit[match(Scodes, iMeta_$iCode)]
weaks$Unit <- iMeta_$Unit[match(Wcodes, iMeta_$iCode)]
}
# remove indicator codes if needed
if(!withcodes){
strengths <- strengths[-1]
weaks <- weaks[-1]
}
# OUTPUT ------------------------------------------------------------------
list(
Strengths = strengths,
Weaknesses = weaks
)
}
# END OF R/results.R --------------------------------------------------------
#' Screen units based on data availability
#'
#' This is a generic function for screening units/rows based on data availability. See method documentation
#' for more details:
#'
#' This function replaces the now-defunct `checkData()` from COINr < v1.0.
#'
#' * [Screen.data.frame()]
#' * [Screen.coin()]
#' * [Screen.purse()]
#'
#' @param x Object to be screened
#' @param ... arguments passed to or from other methods.
#'
#' @return An object of the same class as `x`
#'
#' @export
Screen <- function (x, ...){
UseMethod("Screen")
}
#' Screen units based on data availability
#'
#' Screens units (rows) based on a data availability threshold and presence of zeros. Units can be optionally
#' "forced" to be included or excluded, making exceptions for the data availability threshold.
#'
#' The two main criteria of interest are `NA` values, and zeros. The summary table gives percentages of
#' `NA` values for each unit, across indicators, and percentage zero values (*as a percentage of non-`NA` values*).
#' Each unit is flagged as having low data or too many zeros based on thresholds.
#'
#' See also `vignette("screening")`.
#'
#' @param x A data frame
#' @param id_col Name of column of the data frame to be used as the identifier, e.g. normally this would be `uCode`
#' for indicator data sets used in coins. This must be specified if `Force` is specified.
#' @param unit_screen Specifies whether and how to screen units based on data availability or zero values.
#' * If set to `"byNA"`, screens units with data availability below `dat_thresh`
#' * If set to `"byzeros"`, screens units with non-zero values below `nonzero_thresh`
#' * If set to `"byNAandzeros"`, screens units based on either of the previous two criteria being true.
#' @param dat_thresh A data availability threshold (`>= 0` and `<= 1`) used for flagging low data and screening units if `unit_screen != "none"`. Default 0.66.
#' @param nonzero_thresh As `dat_thresh` but for non-zero values. Defaults to 0.05, i.e. it will flag any units with less than 5% non-zero values (equivalently more than 95% zero values).
#' @param Force A data frame with any additional units to force inclusion or exclusion. Required columns `uCode`
#' (unit code(s)) and `Include` (logical: `TRUE` to include and `FALSE` to exclude). Specifications here override
#' exclusion/inclusion based on data rules.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # example data
#' iData <- ASEM_iData[40:51, c("uCode", "Research", "Pat", "CultServ", "CultGood")]
#'
#' # screen to 75% data availability (by row)
#' l_scr <- Screen(iData, unit_screen = "byNA", dat_thresh = 0.75)
#'
#' # summary of screening
#' head(l_scr$DataSummary)
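#'
#' # force inclusion of the first unit, regardless of its data availability
#' # (illustrative: uses whichever unit code appears first in iData)
#' l_scr2 <- Screen(iData, id_col = "uCode", unit_screen = "byNA", dat_thresh = 0.75,
#'                  Force = data.frame(uCode = iData$uCode[1], Include = TRUE))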
#'
#' @return Missing data stats and screened data as a list.
#'
#' @export
Screen.data.frame <- function(x, id_col = NULL, unit_screen, dat_thresh = NULL, nonzero_thresh = NULL,
Force = NULL, ...){
# CHECKS ------------------------------------------------------------------
stopifnot(is.data.frame(x))
##----- SET DEFAULTS -------##
if(is.null(dat_thresh)){
dat_thresh <- 2/3
}
if(is.null(nonzero_thresh)){
nonzero_thresh <- 0.05
}
stopifnot(dat_thresh >= 0,
dat_thresh <= 1,
nonzero_thresh >= 0,
nonzero_thresh <= 1)
# GET DATA AVAIL ----------------------------------------------------------
l <- get_data_avail(x)
# FLAGS FOR EXCLUSION -----------------------------------------------------
l <- cbind(l,
LowData = l$Dat_Avail < dat_thresh,
LowNonZero = l$Non_Zero < nonzero_thresh,
LowDatOrZeroFlag = (l$Dat_Avail < dat_thresh) | (l$Non_Zero < nonzero_thresh))
# Now add final column which says if unit is included or not, if asked for
if (unit_screen == "byNA"){
l$Included <- !l$LowData
} else if (unit_screen == "byzeros"){
l$Included <- !l$LowNonZero
} else if (unit_screen == "byNAandzeros"){
l$Included <- !l$LowDatOrZeroFlag
} else {
stop("unit_screen argument value not recognised...")
}
# FORCE INCLUSION/EXC -----------------------------------------------------
# (this is optional)
if (!is.null(Force)){ # if some countries to force include/exclude
if(is.null(id_col)){
stop("id_col must be specified if Force is specified")
}
# checks
stopifnot(!is.null(Force$uCode),
!is.null(Force$Include),
is.character(Force$uCode),
is.logical(Force$Include),
is.character(id_col),
id_col %in% names(x))
if(any(Force$uCode %nin% x[[id_col]])){
stop("One or more entries in Force$uCode not found in data frame.")
}
l$Included[l[[id_col]] %in% Force$uCode[Force$Include == TRUE]] <- TRUE
l$Included[l[[id_col]] %in% Force$uCode[Force$Include == FALSE]] <- FALSE
}
# NEW DSET AND OUTPUT -----------------------------------------------------
# create new data set which filters out the countries that didn't make the cut
ScreenedData <- x[l$Included, ]
# units that are removed
if(!is.null(id_col)){
RemovedUnits <- l[[id_col]][!(l$Included)]
} else if (!is.null(l$uCode)) {
RemovedUnits <- l$uCode[!(l$Included)]
} else {
RemovedUnits <- rownames(l)[!(l$Included)]
}
# output list
list(ScreenedData = ScreenedData,
DataSummary = l,
RemovedUnits = RemovedUnits)
}
#' Screen units based on data availability
#'
#' Screens units based on a data availability threshold and presence of zeros. Units can be optionally
#' "forced" to be included or excluded, making exceptions for the data availability threshold.
#'
#' The two main criteria of interest are `NA` values, and zeros. The summary table gives percentages of
#' `NA` values for each unit, across indicators, and percentage zero values (*as a percentage of non-`NA` values*).
#' Each unit is flagged as having low data or too many zeros based on thresholds.
#'
#' See also `vignette("screening")`.
#'
#' @param x A coin
#' @param dset The data set to be checked/screened
#' @param unit_screen Specifies whether and how to screen units based on data availability or zero values.
#' * If set to `"byNA"`, screens units with data availability below `dat_thresh`
#' * If set to `"byzeros"`, screens units with non-zero values below `nonzero_thresh`
#' * If set to `"byNAandzeros"`, screens units based on either of the previous two criteria being true.
#' @param dat_thresh A data availability threshold (`>= 0` and `<= 1`) used for flagging low data and screening units if `unit_screen != "none"`. Default 0.66.
#' @param nonzero_thresh As `dat_thresh` but for non-zero values. Defaults to 0.05, i.e. it will flag any units with less than 5% non-zero values (equivalently more than 95% zero values).
#' @param Force A data frame with any additional countries to force inclusion or exclusion. Required columns `uCode`
#' (unit code(s)) and `Include` (logical: `TRUE` to include and `FALSE` to exclude). Specifications here override
#' exclusion/inclusion based on data rules.
#' @param out2 Where to output the results. If `"coin"` (default), appends the results to an updated coin,
#' otherwise if `"list"`, outputs a list containing the screened data and summary tables.
#' @param write_to If specified, writes the screened data to `.$Data[[write_to]]`. Default `write_to = "Screened"`.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # screen units from raw dset
#' coin <- Screen(coin, dset = "Raw", unit_screen = "byNA",
#' dat_thresh = 0.85, write_to = "Filtered_85pc")
#'
#' # some details about the coin by calling its print method
#' coin
#'
#' @return An updated coin with data frames showing missing data in `.$Analysis`, and a new data set `.$Data$Screened`.
#' If `out2 = "list"` wraps missing data stats and screened data set into a list.
#'
#' @export
Screen.coin <- function(x, dset, unit_screen, dat_thresh = NULL, nonzero_thresh = NULL,
Force = NULL, out2 = "coin", write_to = NULL, ...){
# WRITE LOG ---------------------------------------------------------------
coin <- write_log(x, dont_write = "x")
# GET DSET, CHECKS --------------------------------------------------------
iData <- get_dset(coin, dset)
# SCREEN DF ---------------------------------------------------------------
l_out <- Screen.data.frame(iData, id_col = "uCode", unit_screen = unit_screen,
dat_thresh = dat_thresh, nonzero_thresh = nonzero_thresh,
Force = Force)
# output list
if(out2 == "list"){
l_out
} else {
if(is.null(write_to)){
write_to <- "Screened"
}
coin <- write_dset(coin, l_out$ScreenedData, dset = write_to)
write2coin(coin, l_out[names(l_out) != "ScreenedData"], out2, "Analysis", write_to)
}
}
#' Screen units based on data availability
#'
#' Screens units based on a data availability threshold and presence of zeros. Units can be optionally
#' "forced" to be included or excluded, making exceptions for the data availability threshold.
#'
#' The two main criteria of interest are `NA` values, and zeros. The summary table gives percentages of
#' `NA` values for each unit, across indicators, and percentage zero values (*as a percentage of non-`NA` values*).
#' Each unit is flagged as having low data or too many zeros based on thresholds.
#'
#' See also `vignette("screening")`.
#'
#' @param x A purse object
#' @param dset The data set to be checked/screened
#' @param unit_screen Specifies whether and how to screen units based on data availability or zero values.
#' * If set to `"byNA"`, screens units with data availability below `dat_thresh`
#' * If set to `"byzeros"`, screens units with non-zero values below `nonzero_thresh`
#' * If set to `"byNAandzeros"`, screens units based on either of the previous two criteria being true.
#' @param dat_thresh A data availability threshold (`>= 0` and `<= 1`) used for flagging low data and screening units if `unit_screen != "none"`. Default 0.66.
#' @param nonzero_thresh As `dat_thresh` but for non-zero values. Defaults to 0.05, i.e. it will flag any units with less than 5% non-zero values (equivalently more than 95% zero values).
#' @param Force A data frame with any additional countries to force inclusion or exclusion. Required columns `uCode`
#' (unit code(s)) and `Include` (logical: `TRUE` to include and `FALSE` to exclude). Specifications here override
#' exclusion/inclusion based on data rules.
#' @param write_to If specified, writes the screened data to `.$Data[[write_to]]`. Default `write_to = "Screened"`.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # see vignette("screening") for an example.
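#' #
#' # A minimal sketch, assuming the example purse builder is available:
#' # purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
#' # purse <- Screen(purse, dset = "Raw", unit_screen = "byNA", dat_thresh = 0.75)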
#'
#' @return An updated purse with coins screened and updated.
#'
#' @export
Screen.purse <- function(x, dset, unit_screen, dat_thresh = NULL, nonzero_thresh = NULL,
Force = NULL, write_to = NULL, ...){
# input check
check_purse(x)
# apply unit screening to each coin
x$coin <- lapply(x$coin, function(coin){
Screen.coin(coin, dset = dset, unit_screen = unit_screen,
dat_thresh = dat_thresh, nonzero_thresh = nonzero_thresh,
Force = Force, out2 = "coin", write_to = write_to)
})
# make sure still purse class
class(x) <- c("purse", "data.frame")
x
}
#' Get data availability of units
#'
#' Generic function for getting the data availability of each unit (row).
#'
#' See method documentation:
#'
#' * [get_data_avail.data.frame()]
#' * [get_data_avail.coin()]
#'
#' See also vignettes: `vignette("analysis")` and `vignette("imputation")`.
#'
#' @param x Either a coin or a data frame
#' @param ... Arguments passed to other methods
#'
#' @export
get_data_avail <- function(x, ...){
UseMethod("get_data_avail")
}
#' Get data availability of units
#'
#' Returns a list of data frames: the data availability of each unit (row) in a given data set, as well as percentage of zeros.
#' A second data frame gives data availability by aggregation (indicator) groups.
#'
#' This function ignores any non-numeric columns, and returns a data availability table of numeric columns with non-numeric columns
#' appended at the beginning.
#'
#' See also vignettes: `vignette("analysis")` and `vignette("imputation")`.
#'
#' @param x A coin
#' @param dset String indicating name of data set in `.$Data`.
#' @param out2 Either `"coin"` to output an updated coin or `"list"` to output a list.
#' @param ... arguments passed to or from other methods.
#'
#' @return An updated coin with data availability tables written in `.$Analysis[[dset]]`, or a
#' list of data availability tables.
#' @export
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # get data availability of Raw dset
#' l_dat <- get_data_avail(coin, dset = "Raw", out2 = "list")
#' head(l_dat$Summary, 5)
#'
get_data_avail.coin <- function(x, dset, out2 = "coin", ...){
# PREP --------------------------------------------------------------------
iData <- get_dset(x, dset)
lin <- x$Meta$Lineage
# DAT AVAIL AND TABLE -----------------------------------------------------
# call df method
dat_avail <- get_data_avail(iData)
# generic function to check frac NAs rowwise
frc_avail <- function(X){
1 - rowMeans(is.na(X))
}
# indicator-level group data avail function
group_avail <- function(grp, lev){
# get cols of indicators inside group
grp_codes <- unique(lin[[1]][lin[[lev]] == grp])
# get data avail
frc_avail( iData[grp_codes])
}
# get all data availability, for all groups in all levels
# note, this is indicator-level availability
for(lev in 2: ncol(lin)){
lev_codes <- unique(lin[[lev]])
df_lev <- sapply(lev_codes, group_avail, lev)
if(lev == 2){
dat_avail_group <- data.frame(uCode = iData$uCode, df_lev)
} else {
dat_avail_group <- cbind(dat_avail_group, df_lev)
}
}
# OUTPUT ------------------------------------------------------------------
l_out <- list(Summary = dat_avail,
ByGroup = dat_avail_group)
write2coin(x, l_out, out2, "Analysis", dset, "DatAvail")
}
#' Get data availability of units
#'
#' Returns a data frame of the data availability of each unit (row), as well as percentage of zeros. This
#' function ignores any non-numeric columns, and returns a data availability table with non-numeric columns
#' appended at the beginning.
#'
#' See also vignettes: `vignette("analysis")` and `vignette("imputation")`.
#'
#' @param x A data frame
#' @param ... arguments passed to or from other methods.
#'
#' @return A data frame of data availability statistics for each row of `x`.
#' @export
#'
#' @examples
#' # data availability of "airquality" data set
#' get_data_avail(airquality)
#'
get_data_avail.data.frame <- function(x, ...){
# PREP --------------------------------------------------------------------
xsplit <- split_by_numeric(x)
x_ <- xsplit$numeric
# DAT AVAIL AND TABLE -----------------------------------------------------
nabyrow <- rowSums(is.na(x_)) # number of missing data by row
zerobyrow <- rowSums(x_ == 0, na.rm = TRUE) # number of zeros for each row
nazerobyrow <- nabyrow + zerobyrow # number of zeros or NAs for each row
Prc_avail <- 1 - nabyrow/ncol(x_) # the fraction of data available
Prc_nonzero <- 1 - zerobyrow/(ncol(x_) - nabyrow) # the fraction of non-zero values
data.frame(xsplit$not_numeric,
N_missing = nabyrow,
N_zero = zerobyrow,
N_miss_or_zero = nazerobyrow,
Dat_Avail = Prc_avail,
Non_Zero = Prc_nonzero
)
}
# END OF R/screen_units.R ---------------------------------------------------
#' Sensitivity and uncertainty analysis of a coin
#'
#' This function performs global sensitivity and uncertainty analysis of a coin. You must specify which
#' parameters of the coin to vary, and the alternatives/distributions for those parameters.
#'
#' COINr implements a flexible variance-based global sensitivity analysis approach, which allows almost any assumption
#' to be varied, as long as the distribution of alternative values can be described. Variance-based "sensitivity indices"
#' are estimated using a Monte Carlo design (running the composite indicator many times with a particular combination of
#' input values). This follows the methodology described in \doi{10.1111/j.1467-985X.2005.00350.x}.
#'
#' To understand how this function works, please see `vignette("sensitivity")`. Here, we briefly recap the main input
#' arguments.
#'
#' First, you can select whether to run an uncertainty analysis `SA_type = "UA"` or sensitivity analysis `SA_type = "SA"`.
#' The number of replications (regenerations of the coin) is specified by `N`. Keep in mind that the *total* number of
#' replications is `N` for an uncertainty analysis but is `N*(d + 2)` for a sensitivity analysis due to the experimental
#' design used.
#'
#' To run either type of analysis, you must specify *which* parts of the coin to vary and *what the distributions/alternatives are*.
#' This is done using `SA_specs`, a structured list. See `vignette("sensitivity")` for details and examples.
#'
#' You also need to specify the target of the sensitivity analysis. This should be an indicator or aggregate that can be
#' found in one of the data sets of the coin, and is specified using the `dset` and `iCode` arguments.
#'
#' Finally, if `SA_type = "SA"`, it is advisable to set `Nboot` to e.g. 100 or more, which is the number of bootstrap samples
#' to take when estimating confidence intervals on sensitivity indices. This does *not* perform extra regenerations of the
#' coin, so setting this to a higher number shouldn't have much impact on computational time.
#'
#' This function replaces the now-defunct `sensitivity()` from COINr < v1.0.
#'
#' @param coin A coin
#' @param SA_specs Specifications of the input uncertainties
#' @param N The number of regenerations
#' @param SA_type The type of analysis to run. `"UA"` runs an uncertainty analysis. `"SA"` runs a sensitivity
#' analysis (which anyway includes an uncertainty analysis).
#' @param dset The data set to extract the target variable from (passed to [get_data()]).
#' @param iCode The variable within `dset` to use as the target variable (passed to [get_data()]).
#' @param quietly Set to `TRUE` to suppress progress messages.
#' @param Nboot Number of bootstrap samples to take when estimating confidence intervals on sensitivity
#' indices.
#' @param check_addresses Logical: if `FALSE` skips the check of the validity of the parameter addresses. Default `TRUE`,
#' but useful to set to `FALSE` if running this e.g. in a Rmd document (because may require user input).
#'
#' @importFrom stats runif
#'
#' @return Sensitivity analysis results as a list, containing:
#' * `.$Scores` a data frame with a row for each unit, and columns are the scores for each replication.
#' * `.$Ranks` as `.$Scores` but for unit ranks
#' * `.$RankStats` summary statistics for ranks of each unit
#' * `.$Para` a list containing parameter values for each run
#' * `.$Nominal` the nominal scores and ranks of each unit (i.e. from the original COIN)
#' * `.$Sensitivity` (only if `SA_type = "SA"`) sensitivity indices for each parameter. Also confidence intervals if `Nboot`
#' was specified.
#' * Some information on the time elapsed, average time, and the parameters perturbed.
#'
#' @export
#'
#' @examples
#' # for examples, see `vignette("sensitivity")`
#' # (this is because package examples are run automatically and this function can
#' # take a few minutes to run at realistic settings)
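#' #
#' # A minimal sketch of the format of `SA_specs` (see the vignette for full detail).
#' # Each named entry targets one coin parameter via its $Log address, and gives
#' # the alternatives to sample from:
#' SA_specs <- list(
#'   Normalisation = list(
#'     Address = "$Log$Normalise$global_specs",
#'     Distribution = list(list(f_n = "n_minmax"), list(f_n = "n_prank")),
#'     Type = "discrete"
#'   )
#' )
#' # this could then be passed to get_sensitivity() along with a coin, e.g.
#' # get_sensitivity(coin, SA_specs = SA_specs, N = 100, SA_type = "UA",
#' #                 dset = "Aggregated", iCode = "Index")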
#'
get_sensitivity <- function(coin, SA_specs, N, SA_type = "UA", dset, iCode, Nboot = NULL, quietly = FALSE,
check_addresses = TRUE){
t0 <- proc.time()
# CHECKS ------------------------------------------------------------------
check_coin_input(coin)
stopifnot(is.list(SA_specs),
is.numeric(N),
length(N) == 1,
N > 2,
SA_type %in% c("SA", "UA"))
# check format of SA_specs
check_specs <- sapply(SA_specs, function(li){
!is.null(li$Address) & !is.null(li$Distribution) & !is.null(li$Type)
})
if(any(!check_specs)){
stop("One or more entries in SA_specs is missing either the $Name, $Address or $Distribution entries.")
}
# PREP --------------------------------------------------------------------
# number of uncertain input paras
d <- length(SA_specs)
# get sample
if(SA_type == "UA"){
# a random (uniform) sample
XX <- matrix(stats::runif(d*N), nrow = N, ncol = d)
} else {
if(d==1){
stop("Only one uncertain input defined. It is not meaningful to run a sensitivity analysis
with only one input variable. Consider changing SA_type to \"UA\".")
}
# use standard MC estimators of sensitivity indices
XX <- SA_sample(N, d)
}
# convert to df
XX <- as.data.frame(XX)
# total number of regens
NT <- nrow(XX)
# convert sample to parameters (data frame with list cols?)
XX_p <- mapply(function(x, spec){
sample_2_para(x, distribution = spec$Distribution, dist_type = spec$Type)
}, XX, SA_specs, SIMPLIFY = FALSE)
# name list according to parameters
names(XX_p) <- names(SA_specs)
# also get addresses
addresses <- sapply(SA_specs, `[[`, "Address")
# check addresses for validity
if(check_addresses){
a_check <- lapply(addresses, check_address, coin)
}
# RUN COINS ---------------------------------------------------------------
# at this point the parameters are stored in a list where each entry of the list is a parameter,
# and each entry contains NT instances of that parameter (N for UA, N(d+2) for SA)
# first get nominal results
SA_scores <- get_data(coin, dset = dset, iCodes = iCode)
# make a df of NAs in case a coin regen fails
v_fail <- SA_scores
v_fail[names(v_fail) == iCode] <- NA
names(SA_scores)[names(SA_scores) == iCode] <- "Nominal"
# looping over each replication in the SA
for(irep in 1:NT){
# list of parameters for current rep
l_para_rep <- lapply(XX_p, `[[`, irep)
if (!quietly){
message(paste0("Rep ",irep," of ",NT," ... ", round(irep*100/NT,1), "% complete" ))
}
# regenerate coin using parameter list
coin_rep <- regen_edit(l_para_rep, addresses, coin)
# extract variable of interest
if(is.coin(coin_rep)){
v_out <- get_data(coin_rep, dset = dset, iCodes = iCode)
# check
stopifnot(setequal(colnames(v_out), c("uCode", iCode)))
} else {
# df with just NAs
v_out <- v_fail
}
# merge onto nominal results and rename
SA_scores <- merge(SA_scores, v_out, by = "uCode", all = TRUE)
names(SA_scores)[names(SA_scores) == iCode] <- paste0("r_",irep)
}
# POST --------------------------------------------------------------------
# get ranks
SA_ranks <- rank_df(SA_scores)
# get ranks, but just the ones from the SA/UA. If SA, only keep first 2N cols
# which correspond to random sampling.
SA_ranks_ <- SA_ranks[names(SA_ranks) %nin% c("uCode", "Nominal")]
if(SA_type == "SA"){
SA_ranks_ <- SA_ranks_[, 1:(2*N)]
}
# rank stats
RankStats <- data.frame(
uCode = SA_ranks$uCode,
Nominal = SA_ranks$Nominal,
Mean = apply(SA_ranks_, MARGIN = 1, mean, na.rm = TRUE),
Median = apply(SA_ranks_, MARGIN = 1, stats::median, na.rm = TRUE),
Q5 = apply(SA_ranks_, MARGIN = 1, stats::quantile, probs = 0.05, na.rm = TRUE),
Q95 = apply(SA_ranks_, MARGIN = 1, stats::quantile, probs = 0.95, na.rm = TRUE)
)
# Build list to output
SA_out <- list(
Scores = SA_scores,
Ranks = SA_ranks,
RankStats = RankStats,
Para = XX_p
)
# get sensitivity indices if SA
if(SA_type == "SA"){
# An easy target is the mean absolute rank change
y_AvDiffs <- apply(SA_ranks[names(SA_ranks) %nin% c("uCode", "Nominal")], 2,
FUN = function(x) mean(abs(x-SA_ranks$Nominal), na.rm = TRUE) )
# using this, get sensitivity estimates and write to output list
SAout <- SA_estimate(y_AvDiffs, N = N, d = d, Nboot = Nboot)
Sinds <- SAout$SensInd
Sinds$Variable <- names(SA_specs)
SA_out$Sensitivity <- Sinds
}
SA_out$Nominal <- data.frame(uCode = SA_scores$uCode,
Score = SA_scores$Nominal,
Rank = SA_ranks$Nominal)
# timing
tf <- proc.time()
tdiff <- tf-t0
telapse <- as.numeric(tdiff[3])
taverage <- telapse/NT
if(!quietly){
message(paste0("Time elapsed = ", round(telapse,2), "s, average ", round(taverage,2), "s/rep."))
}
SA_out
}
# Regenerate an edited coin
#
# This is similar to [edit_coin()] but works with a list of parameters to change, rather than one, and also outputs
# a regenerated coin.
#
# @param l_para A list of parameter values to change. Should be of the format `list(para_name = new_value)`, where
# `new_value`
# @param addresses A list or character vector of addresses. `names(addresses)` must correspond to `names(l_para)`.
# @param coin A coin, to be edited.
#
# @return A regenerated coin
# @export
#
# @examples
# #
regen_edit <- function(l_para, addresses, coin){
d <- length(l_para)
p_names <- names(l_para)
stopifnot(length(addresses) == d,
is.coin(coin),
setequal(names(addresses), p_names))
# copy
coin_i <- coin
# modify parameters
for(ii in 1:d){
coin_i <- edit_coin(coin_i, address = addresses[names(addresses) == p_names[ii]],
new_value = l_para[[ii]])
}
# regenerate the results
tryCatch(
expr = Regen(coin_i, quietly = TRUE),
error = function(e){
message("Regen failed. Probably a conflict between methods.")
print(e)
return(NA)
}
)
}
# Convert a numeric sample to parameter values
#
# Converts a numeric sample `x`, which should have values between 0 and 1, to a corresponding vector or list of
# parameter values, based on `distribution`.
#
# The `distribution` argument specifies how to map `x` to parameter values and can be used in two different ways,
# depending on `dist_type`. If `dist_type = "discrete"`, then `distribution` should be a vector or list of alternative
# parameter values (or objects). Each entry of `x` is mapped to an entry from `distribution` by treating `distribution`
# as a discrete uniform distribution over its entries.
#
# If `dist_type = "continuous"`, `distribution` is assumed to be a continuous uniform distribution, such that
# `distribution` is a 2-length numeric vector with the first value being the lower bound, and the second value the
# upper bound. For example, if `distribution = c(5, 10)`, then `x` will be mapped onto a continuous uniform distribution
# bounded by 5 and 10.
#
# @param distribution The distribution to sample using `x` - see details.
# @param dist_type Either `"discrete"` or `"continuous"` - see details.
# @param checks Logical: if `TRUE` runs some checks on inputs, set to `FALSE` to increase speed slightly.
# @param x A numeric vector with values between 0 and 1
#
# @return A vector or list of parameter values.
#
# @examples
# #
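# # Illustrative sketch:
# # continuous: map uniform samples onto the interval [5, 10]
# sample_2_para(runif(5), distribution = c(5, 10), dist_type = "continuous")
# # discrete: map samples onto one of two alternative parameter values
# sample_2_para(runif(5), distribution = list("a_amean", "a_gmean"), dist_type = "discrete")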
sample_2_para <- function(x, distribution, dist_type = "discrete", checks = TRUE){
if(checks){
stopifnot(is.numeric(x),
any(x >= 0),
any(x <= 1),
is.character(dist_type),
length(dist_type) == 1,
dist_type %in% c("discrete", "continuous"))
}
# specs can be a set of discrete alternatives, or else a uniform distribution
if(dist_type == "discrete"){
# the number of discrete alternatives
n_alt <- length(distribution)
# convert x to indexes of the discrete parameters
i_para <- cut(x, n_alt, 1:n_alt)
# now get the output vector/list
l_out <- distribution[i_para]
} else {
# here we assume a uniform distribution
if(checks){
stopifnot(is.numeric(distribution),
length(distribution) == 2,
distribution[2] > distribution[1])
}
# we simply scale x up to the interval covered by the distribution
l_out <- x*(distribution[2] - distribution[1]) + distribution[1]
}
# output
l_out
}
# Edit objects inside a coin
#
# Changes the object found at `address` to `new_value`.
#
# @param coin A coin
# @param address A string specifying the location in the coin of the object to edit. This should begin with `"$"`, omitting the coin itself
# in the address. E.g. if you target `coin$x$y$z` enter `"$x$y$z"`.
# @param new_value The new value to assign at `address`.
# @param checks Logical: if `TRUE`, runs some basic checks, otherwise omitted if `FALSE`. Setting `FALSE` may speed
# things up a bit in sensitivity analysis, for example.
#
# @return An updated coin
# @export
#
# @examples
# #
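# # Illustrative sketch: overwrite the 'exclude' argument stored in the coin's Log
# coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# coin <- edit_coin(coin, address = "$Log$new_coin$exclude", new_value = "LPI")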
edit_coin <- function(coin, address, new_value, checks = FALSE){
# checks
if(checks){
check_coin_input(coin)
stopifnot(is.character(address),
length(address) == 1,
substr(address,1,1) == "$")
}
# this is the call to evaluate, as a string
expr_str <- paste0("coin", address, " <- new_value")
# evaluate the call
eval(str2lang(expr_str))
# output
coin
}
# Check address in coin
check_address <- function(address, coin){
# checks
stopifnot(is.character(address),
length(address) == 1)
# check address begins with $
if(substr(address,1,1) != "$"){
stop("Address must begin with '$'! Your address: ", address, call. = FALSE)
}
# this is the call to evaluate, as a string
expr_str <- paste0("coin", address)
# evaluate the call
address_value <- eval(str2lang(expr_str))
if(is.null(address_value)){
xx <- readline(paste0("Address ", address, " is not currently present in the coin or else is NULL. Continue anyway (y/n)? "))
if(xx %nin% c("y", "n")){
stop("You didn't input y or n. I'm taking that as a no.", call. = FALSE)
}
if(xx == "n"){
stop("Exiting sensitivity analysis. Please check the address: ", address, call. = FALSE)
}
}
}
#' Estimate sensitivity indices
#'
#' Post-processes a sample to obtain sensitivity indices. This function takes a univariate output
#' generated by running a Monte Carlo sample from [SA_sample()] through a system, and estimates
#' sensitivity indices from this sample.
#'
#' This function is built to be used inside [get_sensitivity()].
#'
#' @param yy A vector of model output values, as a result of a \eqn{N(d+2)} Monte Carlo design.
#' @param N The number of sample points per dimension.
#' @param d The dimensionality of the sample
#' @param Nboot Number of bootstrap draws for estimates of confidence intervals on sensitivity indices.
#' If this is not specified, bootstrapping is not applied.
#'
#' @importFrom stats var
#'
#' @examples
#' # This is a generic example rather than one applied to a coin (for reasons of speed)
#'
#' # A simple test function
#' testfunc <- function(x){
#' x[1] + 2*x[2] + 3*x[3]
#' }
#'
#' # First, generate a sample
#' X <- SA_sample(500, 3)
#'
#' # Run sample through test function to get corresponding output for each row
#' y <- apply(X, 1, testfunc)
#'
#' # Estimate sensitivity indices using sample
#' SAinds <- SA_estimate(y, N = 500, d = 3, Nboot = 1000)
#' SAinds$SensInd
#' # Notice that total order indices have narrower confidence intervals than first order.
#'
#' @seealso
#' * [get_sensitivity()] Perform global sensitivity or uncertainty analysis on a COIN
#' * [SA_sample()] Input design for estimating sensitivity indices
#'
#' @return A list with the output variance, plus a data frame of first order and total order sensitivity indices for
#' each variable, as well as bootstrapped confidence intervals if `!is.null(Nboot)`.
#'
#' @export
SA_estimate <- function(yy, N, d, Nboot = NULL){
# checks
stopifnot(is.numeric(yy))
if(length(yy) != N*(d+2)){
stop("The length of 'yy' does not correspond to the values of 'N' and 'd'. The vector 'yy' should be of length N(d+2).")
}
# put into matrix format: just the ABis
yyABi <- matrix(yy[(2*N +1):length(yy)], nrow = N)
# get yA and yB
yA <- yy[1:N]
yB <- yy[(N+1) : (2*N)]
# calculate variance
varY <- stats::var(c(yA,yB))
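# Si below is the first-order estimator based on the A, B and AB_i sub-samples, and
# STi is a Jansen-type squared-difference estimator of the total-order index (see e.g.
# the Primer book referenced in SA_sample() for background on these estimators).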
# calculate Si
Si <- apply(yyABi, 2, function(x){
mean(yB*(x - yA))/varY
})
# calculate ST
STi <- apply(yyABi, 2, function(x){
sum((yA - x)^2)/(2*N*varY)
})
# make a df
SensInd <- data.frame(Variable = paste0("V", 1:d),
Si = Si,
STi = STi)
## BOOTSTRAP ## -----
if (!is.null(Nboot)){
# Get the "elements" to sample from
STdiffs <- apply(yyABi, 2, function(x){
yA - x
})
Sidiffs <- apply(yyABi, 2, function(x){
yB*(x - yA)
})
# prep matrices for bootstrap samples
Si_boot <- matrix(NA, d, Nboot)
STi_boot <- Si_boot
# do the bootstrapping bit
for (iboot in 1:Nboot){
# calculate Si
Si_boot[,iboot] <- apply(Sidiffs, 2, function(x){
mean(sample(x, replace = TRUE))/varY
})
# calculate ST
STi_boot[,iboot] <- apply(STdiffs, 2, function(x){
sum(sample(x, replace = TRUE)^2)/(2*N*varY)
})
}
# get quantiles of sensitivity indices to add to the output table
SensInd$Si_q5 <- apply(Si_boot, MARGIN = 1,
function(xx) stats::quantile(xx, probs = 0.05, na.rm = TRUE))
SensInd$Si_q95 <- apply(Si_boot, MARGIN = 1,
function(xx) stats::quantile(xx, probs = 0.95, na.rm = TRUE))
SensInd$STi_q5 <- apply(STi_boot, MARGIN = 1,
function(xx) stats::quantile(xx, probs = 0.05, na.rm = TRUE))
SensInd$STi_q95 <- apply(STi_boot, MARGIN = 1,
function(xx) stats::quantile(xx, probs = 0.95, na.rm = TRUE))
}
# return outputs
list(Variance = varY,
SensInd = SensInd)
}
#' Generate sample for sensitivity analysis
#'
#' Generates an input sample for a Monte Carlo estimation of global sensitivity indices. Used in
#' the [get_sensitivity()] function. The total sample size will be \eqn{N(d+2)}.
#'
#' This function generates a Monte Carlo sample as described e.g. in the [Global Sensitivity Analysis: The Primer book](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470725184).
#'
#' @param N The number of sample points per dimension.
#' @param d The dimensionality of the sample
#'
#' @importFrom stats runif
#'
#' @examples
#' # sensitivity analysis sample for 3 dimensions with 100 points per dimension
#' X <- SA_sample(100, 3)
#'
#' @return A matrix with \eqn{N(d+2)} rows and `d` columns.
#'
#' @seealso
#' * [get_sensitivity()] Perform global sensitivity or uncertainty analysis on a COIN.
#' * [SA_estimate()] Estimate sensitivity indices from system output, as a result of input design from SA_sample().
#'
#' @export
SA_sample <- function(N, d){
# a random (uniform) sample
Xbase <- matrix(stats::runif(d*N*2), nrow = N, ncol = d*2)
# get first half
XA <- Xbase[, 1:d]
# get second half
XB <- Xbase[, (d+1):(2*d)]
# make big matrix (copy matrix d times on the bottom)
XX <- matrix(rep(t(XA), d), ncol = ncol(XA), byrow = TRUE )
# now substitute in columns from B into A
for (ii in 1:d){
XX[(1 + (ii-1)*N):(ii*N), ii] <- XB[, ii]
}
# add original matrices on the beginning
XX <- rbind(XA, XB, XX)
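# resulting row order: rows 1 to N are sample A, rows (N+1) to 2N are sample B, then d
# blocks of N rows, where block i is A with its i-th column replaced by the i-th column of B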
XX
}
#' Plot ranks from an uncertainty/sensitivity analysis
#'
#' Plots the ranks resulting from an uncertainty and sensitivity analysis, in particular plots
#' the median, and 5th/95th percentiles of ranks.
#'
#' To use this function you first need to run [get_sensitivity()]. Then enter the resulting list as the
#' `SAresults` argument here.
#'
#' See `vignette("sensitivity")`.
#'
#' This function replaces the now-defunct `plotSARanks()` from COINr < v1.0.
#'
#' @param SAresults A list of sensitivity/uncertainty analysis results from [get_sensitivity()].
#' @param plot_units A character vector of units to plot. Defaults to all units. You can also set
#' to `"top10"` to only plot top 10 units, and `"bottom10"` for bottom ten.
#' @param order_by If set to `"nominal"`, orders the rank plot by nominal ranks
#' (i.e. the original ranks prior to the sensitivity analysis). Otherwise if `"median"`, orders by
#' median ranks.
#' @param dot_colour Colour of dots representing median ranks.
#' @param line_colour Colour of lines connecting 5th and 95th percentiles.
#'
#' @importFrom ggplot2 geom_line geom_point scale_shape_manual scale_size_manual labs guides
#' @importFrom ggplot2 scale_color_manual theme_classic theme scale_y_discrete scale_x_reverse
#' @importFrom ggplot2 coord_flip element_text
#'
#' @examples
#' # for examples, see `vignette("sensitivity")`
#' # (this is because package examples are run automatically and sensitivity analysis
#' # can take a few minutes to run at realistic settings)
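#' #
#' # A schematic sketch of the intended workflow (not run - argument details and a full
#' # working specification are given in the vignette):
#' # SAresults <- get_sensitivity(coin, SA_specs = my_specs, N = 100, SA_type = "UA")
#' # plot_uncertainty(SAresults, plot_units = "top10")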
#'
#' @seealso
#' * [get_sensitivity()] Perform global sensitivity or uncertainty analysis on a coin
#' * [plot_sensitivity()] Plot sensitivity indices following a sensitivity analysis.
#'
#' @return A plot of rank confidence intervals, generated by 'ggplot2'.
#'
#' @export
plot_uncertainty <- function(SAresults, plot_units = NULL, order_by = "nominal",
dot_colour = NULL, line_colour = NULL){
rnks <- SAresults$RankStats
if(!is.null(plot_units)){
if(length(plot_units) == 1){
if (plot_units == "top10"){
unit_include <- SAresults$Nominal$uCode[SAresults$Nominal$Rank <= 10]
} else if (plot_units == "bottom10"){
unit_include <- SAresults$Nominal$uCode[
SAresults$Nominal$Rank >= (max(SAresults$Nominal$Rank, na.rm = TRUE)-10)]
} else {
stop("plot_units not recognised: should be either a character vector of unit codes or else
\"top10\" or \"bottom10\" ")
}
} else {
# vector, so this should be a vector of unit codes
unit_include <- plot_units
if(any(unit_include %nin% rnks$uCode)){
stop("One or more units in 'plot_units' not found in SA results.")
}
}
rnks <- rnks[rnks$uCode %in% unit_include,]
SAresults$Nominal <- SAresults$Nominal[SAresults$Nominal$uCode %in% unit_include,]
}
# set ordering of plot
if (order_by == "nominal"){
plot_order <- SAresults$Nominal$uCode[order(SAresults$Nominal$Score, decreasing = FALSE)]
} else if (order_by == "median"){
plot_order <- SAresults$Nominal$uCode[order(rnks$Median, decreasing = TRUE)]
}
# first, pivot to long
rownames(rnks) <- rnks$uCode
qstats <- lengthen(rnks[c("Median", "Q5", "Q95")])
names(qstats) <- c("uCode", "Statistic", "Rank")
stats_long <- merge(rnks[c("uCode", "Nominal", "Mean")], qstats, by = "uCode", all = TRUE)
# stats_long <- tidyr::pivot_longer(rnks,
# cols = c("Median", "Q5", "Q95"),
# names_to = "Statistic",
# values_to = "Rank")
# colours
if(is.null(dot_colour)){
dot_colour <- "#83af70"
}
if(is.null(line_colour)){
line_colour <- "grey"
}
# generate plot
ggplot2::ggplot(stats_long, aes(x = .data$Rank, y = .data$uCode)) +
ggplot2::geom_line(aes(group = .data$uCode), color = line_colour) +
ggplot2::geom_point(aes(color = .data$Statistic, shape = .data$Statistic, size= .data$Statistic)) +
ggplot2::scale_shape_manual(values = c(16, 15, 15)) +
ggplot2::scale_size_manual(values = c(2, 0, 0)) +
ggplot2::labs(y = "", color = "") +
ggplot2::guides(shape = "none", size = "none", color = "none") +
ggplot2::theme_classic() +
ggplot2::theme(legend.position="top") +
ggplot2::scale_color_manual(values = c(dot_colour, "#ffffff", "#ffffff")) +
ggplot2::scale_y_discrete(limits = plot_order) +
ggplot2::scale_x_reverse() +
ggplot2::coord_flip() +
ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 45, hjust = 1)) +
ggplot2::theme(text=ggplot2::element_text(family="sans"))
}
#' Plot sensitivity indices
#'
#' Plots sensitivity indices as bar or pie charts.
#'
#' To use this function you first need to run [get_sensitivity()]. Then enter the resulting list as the
#' `SAresults` argument here.
#'
#' See `vignette("sensitivity")`.
#'
#' This function replaces the now-defunct `plotSA()` from COINr < v1.0.
#'
#' @param SAresults A list of sensitivity/uncertainty analysis results from [get_sensitivity()].
#' @param ptype Type of plot to generate - either `"bar"`, `"pie"` or `"box"`.
#'
#' @importFrom ggplot2 ggplot aes geom_bar labs theme_minimal geom_errorbar coord_polar theme_void
#' @importFrom ggplot2 facet_wrap
#' @importFrom rlang .data
#'
#' @examples
#' # for examples, see `vignette("sensitivity")`
#' # (this is because package examples are run automatically and sensitivity analysis
#' # can take a few minutes to run at realistic settings)
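#' #
#' # A schematic sketch of the intended workflow (not run - argument details and a full
#' # working specification are given in the vignette):
#' # SAresults <- get_sensitivity(coin, SA_specs = my_specs, N = 100, SA_type = "SA", Nboot = 100)
#' # plot_sensitivity(SAresults, ptype = "box")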
#'
#' @return A plot of sensitivity indices generated by ggplot2.
#'
#' @seealso
#' * [get_sensitivity()] Perform global sensitivity or uncertainty analysis on a COIN
#' * [plot_uncertainty()] Plot confidence intervals on ranks following a sensitivity analysis
#'
#' @export
plot_sensitivity <- function(SAresults, ptype = "bar"){
stopifnot(is.list(SAresults))
# prep data first
Sdf <- SAresults$Sensitivity
if(is.null(Sdf)){
stop("Sensitivity indices not found. Did you run get_sensitivity with SA_type = 'SA'?")
}
numcols <- Sdf[names(Sdf) != "Variable"]
# set any negative values to zero. By definition they can't be negative.
numcols[numcols < 0] <- 0
Sdf[names(Sdf) != "Variable"] <- numcols
if(ptype == "bar"){
# the full bar is STi. It is divided into Si and the remainder, so we need STi - Si
Sdf$Interactions <- Sdf$STi - Sdf$Si
Sdf$Interactions[Sdf$Interactions < 0] <- 0
# rename col to improve plot
colnames(Sdf)[colnames(Sdf) == "Si"] <- "MainEffect"
# now pivot to get in format for ggplot
bardf <- lengthen(Sdf, cols = c("MainEffect", "Interactions"))
# bardf <- tidyr::pivot_longer(Sdf,
# cols = c("MainEffect", "Interactions"))
# make stacked bar plot
plt <- ggplot2::ggplot(bardf, ggplot2::aes(fill=.data$name, y=.data$Value, x=.data$Variable)) +
ggplot2::geom_bar(position="stack", stat="identity") +
ggplot2::labs(
x = NULL,
y = NULL,
fill = NULL) +
ggplot2::theme_minimal()
} else if (ptype == "pie"){
# we are plotting first order sensitivity indices. So, also estimate interactions.
Sis <- Sdf[c("Variable", "Si")]
intsum <- max(c(1 - sum(Sis$Si, na.rm = TRUE), 0))
Sis <- rbind(Sis, data.frame(Variable = "Interactions", Si = intsum))
# Basic piechart
plt <- ggplot2::ggplot(Sis, ggplot2::aes(x = "", y = .data$Si, fill = .data$Variable)) +
ggplot2::geom_bar(stat="identity", width=1, color="white") +
ggplot2::coord_polar("y", start=0) +
ggplot2::theme_void() # remove background, grid, numeric labels
} else if (ptype == "box"){
if(any(c("Si_q5", "Si_q95", "STi_q5", "STi_q95") %nin% names(Sdf))){
stop("Quantiles not found for sensitivity indices (required for box plot). Did you forget to set Nboot when running get_sensitivity()?")
}
Sdf <- lengthen(Sdf, cols = c("Si", "STi"))
#Sdf1 <- tidyr::pivot_longer(Sdf, cols = c("Si", "STi"))
Sdf$q5 <- ifelse(Sdf$name == "STi", Sdf$STi_q5, Sdf$Si_q5)
Sdf$q95 <- ifelse(Sdf$name == "STi", Sdf$STi_q95, Sdf$Si_q95)
Sdf$q5[Sdf$q5 > 1] <- 1
Sdf$q95[Sdf$q95 > 1] <- 1
Sdf$Value[Sdf$Value > 1] <- 1
plt <- ggplot2::ggplot(Sdf, ggplot2::aes(x = .data$Variable, y = .data$Value, ymax = .data$q95, ymin = .data$q5)) +
ggplot2::geom_point(size = 1.5) +
ggplot2::geom_errorbar(width = 0.2) +
ggplot2::theme_bw() +
facet_wrap(~name) +
ggplot2::labs(
x = NULL,
y = NULL)
}
plt +
ggplot2::theme(text=element_text(family="sans"))
}
#' Noisy replications of weights
#'
#' Given a data frame of weights, this function returns multiple replicates of the weights, with added
#' noise. This is intended for use in uncertainty and sensitivity analysis.
#'
#' Weights are expected to be in a data frame format with columns `Level`, `iCode` and `Weight`, as
#' used in `iMeta`. Note that no `NA`s are allowed anywhere in the data frame.
#'
#' Noise is added using the `noise_specs` argument, which is specified by a data frame with columns
#' `Level` and `NoiseFactor`. The aggregation level refers to number of the aggregation level to target
#' while the `NoiseFactor` refers to the size of the perturbation. If e.g. a row is `Level = 1` and
#' `NoiseFactor = 0.2`, this will allow the weights in aggregation level 1 to deviate by +/- 20% of their
#' nominal values (the values in `w`).
#'
#' This function replaces the now-defunct `noisyWeights()` from COINr < v1.0.
#'
#' @param w A data frame of weights, in the format found in `.$Meta$Weights`.
#' @param noise_specs a data frame with columns:
#' * `Level`: The aggregation level to apply noise to
#' * `NoiseFactor`: The size of the perturbation: setting e.g. 0.2 perturbs by +/- 20% of nominal values.
#' @param Nrep The number of weight replications to generate.
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # get nominal weights
#' w_nom <- coin$Meta$Weights$Original
#'
#' # build data frame specifying the levels to apply the noise at
#' # here we vary at levels 2 and 3
#' noise_specs = data.frame(Level = c(2,3),
#' NoiseFactor = c(0.25, 0.25))
#'
#' # get 100 replications
#' noisy_wts <- get_noisy_weights(w = w_nom, noise_specs = noise_specs, Nrep = 100)
#'
#' # examine one of the noisy weight sets, last few rows
#' tail(noisy_wts[[1]])
#'
#' @return A list of `Nrep` sets of weights (data frames).
#'
#' @seealso
#' * [get_sensitivity()] Perform global sensitivity or uncertainty analysis on a COIN
#'
#' @export
get_noisy_weights <- function(w, noise_specs, Nrep){
# CHECKS
stopifnot(is.data.frame(noise_specs))
if(any(is.na(w))){
stop("NAs found in w: NAs are not allowed.")
}
if(any(c("iCode", "Level", "Weight") %nin% names(w))){
stop("One or more required columns (iCode, Level, Weight) not found in w.")
}
if (length(unique(noise_specs$Level)) < nrow(noise_specs)){
stop("Looks like you have duplicate Level values in the noise_specs df?")
}
# make list for weights
wlist <- vector(mode = "list", length = Nrep)
for (irep in 1:Nrep){
# make fresh copy of weights
wrep <- w
for (ii in noise_specs$Level){
# weights for this level
wts <- wrep$Weight[w$Level == ii]
# vector of noise: random number in [0,1] times 2, -1. This interprets NoiseFactor as
# a +/-% deviation.
wnoise <- (stats::runif(length(wts))*2 - 1)*noise_specs$NoiseFactor[noise_specs$Level == ii]*wts
# add noise to weights and store
wts <- wts + wnoise
wrep$Weight[w$Level == ii] <- wts
}
wlist[[irep]] <- wrep
}
return(wlist)
}
#' Statistics of indicators
#'
#' Given a coin and a specified data set (`dset`), returns a table of statistics with entries for each column.
#'
#' The statistics (columns in the output table) are as follows (entries correspond to each column):
#'
#' * `Min`: the minimum
#' * `Max`: the maximum
#' * `Mean`: the (arithmetic) mean
#' * `Median`: the median
#' * `Std`: the standard deviation
#' * `Skew`: the skew
#' * `Kurt`: the kurtosis
#' * `N.Avail`: the number of non-`NA` values
#' * `N.NonZero`: the number of non-zero values
#' * `N.Unique`: the number of unique values
#' * `Frc.Avail`: the fraction of non-`NA` values
#' * `Frc.NonZero`: the fraction of non-zero values
#' * `Frc.Unique`: the fraction of unique values
#' * `Flag.Avail`: a data availability flag - columns with `Frc.Avail < t_avail` will be flagged as `"LOW"`, else `"ok"`.
#' * `Flag.NonZero`: a flag for columns with a high proportion of zeros. Any columns with `Frc.NonZero < t_zero` are
#' flagged as `"LOW"`, otherwise `"ok"`.
#' * `Flag.Unique`: a unique value flag - any columns with `Frc.Unique < t_unq` are flagged as `"LOW"`, otherwise `"ok"`.
#' * `Flag.SkewKurt`: a skew and kurtosis flag which is an indication of possible outliers. Any columns with
#' `abs(Skew) > t_skew` AND `Kurt > t_kurt` are flagged as `"OUT"`, otherwise `"ok"`.
#'
#' The aim of this table, among other things, is to check the basic statistics of each column/indicator, and identify
#' any possible issues for each indicator. For example, low data availability, having a high proportion of zeros and/or
#' a low proportion of unique values. Further, the combination of skew and kurtosis (i.e. the `Flag.SkewKurt` column)
#' is a simple test for possible outliers, which may require treatment using [Treat()].
#'
#' The table can be returned either to the coin or as a standalone data frame - see `out2`.
#'
#' See also `vignette("analysis")`.
#'
#' @param t_skew Absolute skewness threshold. See details.
#' @param t_kurt Kurtosis threshold. See details.
#' @param t_avail Data availability threshold. See details.
#' @param x A coin
#' @param dset A data set present in `.$Data`
#' @param nsignif Number of significant figures to round the output table to.
#' @param out2 Either `"df"` (default) to output a data frame of indicator statistics, or "`coin`" to output an
#' updated coin with the data frame attached under `.$Analysis`.
#' @param ... arguments passed to or from other methods.
#' @param t_zero A threshold between 0 and 1 for flagging indicators with high proportion of zeroes. See details.
#' @param t_unq A threshold between 0 and 1 for flagging indicators with low proportion of unique values. See details.
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # get table of indicator statistics for raw data set
#' get_stats(coin, dset = "Raw", out2 = "df")
#'
#' @return Either a data frame or updated coin - see `out2`.
#'
#' @export
get_stats.coin <- function(x, dset, t_skew = 2, t_kurt = 3.5, t_avail = 0.65, t_zero = 0.5,
t_unq = 0.5, nsignif = 3, out2 = "df", ...){
stopifnot(out2 %in% c("df", "coin"))
# get data
iData <- get_data(x, dset = dset, ...)
# get iData_ (only numeric indicator cols)
iData_ <- extract_iData(x, iData, GET = "iData_")
# get stats table
stat_tab <- get_stats(iData_, t_skew = t_skew, t_kurt = t_kurt, t_avail = t_avail, t_zero = t_zero,
t_unq = t_unq, nsignif = nsignif)
# write to coin or output as df
if(out2 == "df"){
stat_tab
} else {
x$Analysis[[dset]][["Stats"]] <- stat_tab
x
}
}
#' Statistics of columns
#'
#' Takes a data frame and returns a table of statistics with entries for each column.
#'
#' The statistics (columns in the
#' output table) are as follows (entries correspond to each column):
#'
#' * `Min`: the minimum
#' * `Max`: the maximum
#' * `Mean`: the (arithmetic) mean
#' * `Median`: the median
#' * `Std`: the standard deviation
#' * `Skew`: the skew
#' * `Kurt`: the kurtosis
#' * `N.Avail`: the number of non-`NA` values
#' * `N.NonZero`: the number of non-zero values
#' * `N.Unique`: the number of unique values
#' * `Frc.Avail`: the fraction of non-`NA` values
#' * `Frc.NonZero`: the fraction of non-zero values
#' * `Frc.Unique`: the fraction of unique values
#' * `Flag.Avail`: a data availability flag - columns with `Frc.Avail < t_avail` will be flagged as `"LOW"`, else `"ok"`.
#' * `Flag.NonZero`: a flag for columns with a high proportion of zeros. Any columns with `Frc.NonZero < t_zero` are
#' flagged as `"LOW"`, otherwise `"ok"`.
#' * `Flag.Unique`: a unique value flag - any columns with `Frc.Unique < t_unq` are flagged as `"LOW"`, otherwise `"ok"`.
#' * `Flag.SkewKurt`: a skew and kurtosis flag which is an indication of possible outliers. Any columns with
#' `abs(Skew) > t_skew` AND `Kurt > t_kurt` are flagged as `"OUT"`, otherwise `"ok"`.
#'
#' The aim of this table, among other things, is to check the basic statistics of each column/indicator, and identify
#' any possible issues for each indicator. For example, low data availability, having a high proportion of zeros and/or
#' a low proportion of unique values. Further, the combination of skew and kurtosis (i.e. the `Flag.SkewKurt` column)
#' is a simple test for possible outliers, which may require treatment using [Treat()].
#'
#' See also `vignette("analysis")`.
#'
#' @param t_skew Absolute skewness threshold. See details.
#' @param t_kurt Kurtosis threshold. See details.
#' @param t_avail Data availability threshold. See details.
#' @param x A data frame with only numeric columns.
#' @param nsignif Number of significant figures to round the output table to.
#' @param ... arguments passed to or from other methods.
#' @param t_zero A threshold between 0 and 1 for flagging indicators with high proportion of zeroes. See details.
#' @param t_unq A threshold between 0 and 1 for flagging indicators with low proportion of unique values. See details.
#'
#' @importFrom stats median sd
#'
#' @examples
#' # stats of mtcars
#' get_stats(mtcars)
#'
#' @return A data frame of statistics for each column
#'
#' @export
get_stats.data.frame <- function(x, t_skew = 2, t_kurt = 3.5, t_avail = 0.65, t_zero = 0.5,
t_unq = 0.5, nsignif = 3, ...){
# CHECKS ------------------------------------------------------------------
not_numeric <- !(sapply(x, is.numeric))
if(any(not_numeric)){
stop("Non-numeric cols detected in data frame. Input must be a data frame with only numeric columns.")
}
# STATS -------------------------------------------------------------------
n <- nrow(x)
# this function gets all stats for one column of data
stats_i <- function(xi){
n_avail <- sum(!is.na(xi))
prc_avail <- n_avail/n
sk <- skew(xi, na.rm = TRUE)
kt <- kurt(xi, na.rm = TRUE)
nzero <- sum(xi != 0, na.rm = TRUE)
nunq <- length(unique(xi[!is.na(xi)]))
nsame <- max(table(xi)) # the largest number of elements with the same value
data.frame(
Min = min(xi, na.rm = TRUE),
Max = max(xi, na.rm = TRUE),
Mean = mean(xi, na.rm = TRUE),
Median = stats::median(xi, na.rm = TRUE),
Std = stats::sd(xi, na.rm = TRUE),
Skew = sk,
Kurt = kt,
N.Avail = n_avail,
N.NonZero = nzero,
N.Unique = nunq,
N.Same = nsame,
Frc.Avail = prc_avail,
Frc.NonZero = nzero/n_avail,
Frc.Unique = nunq/n_avail,
Frc.Same = nsame/n_avail,
Flag.Avail = ifelse(prc_avail >= t_avail, "ok", "LOW"),
Flag.NonZero = ifelse(nzero/n >= t_zero, "ok", "LOW"),
Flag.Unique = ifelse(nunq/n >= t_unq, "ok", "LOW"),
Flag.SkewKurt = ifelse((abs(sk) > t_skew) & (kt > t_kurt), "OUT", "ok")
)
}
# now apply function to all cols and add iCode column
stats_tab <- lapply(x, stats_i)
stats_tab <- Reduce(rbind, stats_tab)
stats_tab <- cbind(iCode = names(x), stats_tab)
# OUTPUT ------------------------------------------------------------------
# sfs
if(!is.null(nsignif)){
stats_tab <- signif_df(stats_tab, nsignif)
}
stats_tab
}
#' Statistics of columns/indicators
#'
#' Generic function for reporting various statistics from a data frame or coin. See method documentation:
#'
#' * [get_stats.data.frame()]
#' * [get_stats.coin()]
#'
#' See also `vignette("analysis")`.
#'
#' This function replaces the now-defunct `getStats()` from COINr < v1.0.
#'
#' @param x Object (data frame or coin)
#' @param ... Further arguments to be passed to methods.
#'
#' @examples
#' # see individual method documentation
#'
#' @return A data frame of statistics for each column
#'
#' @export
get_stats <- function(x, ...){
UseMethod("get_stats")
}
#' Treat a purse of coins for outliers
#'
#' This function calls [Treat.coin()] for each coin in the purse. See the documentation of that function for
#' details. See also `vignette("treat")`.
#'
#' @param x A purse object
#' @param dset The data set to treat in each coin.
#' @param global_specs Default specifications. See details in [Treat.coin()].
#' @param indiv_specs Individual specifications. See details in [Treat.coin()].
#' @param combine_treat By default, if `f1` fails to pass `f_pass`, then `f2` is applied to the original `x`,
#' rather than the treated output of `f1`. If `combine_treat = TRUE`, `f2` will instead be applied to the output
#' of `f1`, so the two treatments will be combined.
#' @param write_to If specified, writes the aggregated data to `.$Data[[write_to]]`. Default `write_to = "Treated"`.
#' @param ... arguments passed to or from other methods.
#'
#' @return An updated purse with new treated data sets added at `.$Data$Treated` in each coin, plus
#' analysis information at `.$Analysis$Treated`
#' @export
#'
#' @examples
#' # See `vignette("treat")`.
Treat.purse <- function(x, dset, global_specs = NULL, indiv_specs = NULL,
combine_treat = FALSE, write_to = NULL, ...){
# input check
check_purse(x)
# apply treatment to each coin
x$coin <- lapply(x$coin, function(coin){
Treat.coin(coin, dset = dset, global_specs = global_specs,
indiv_specs = indiv_specs, combine_treat = combine_treat, write_to = write_to)
})
# make sure still purse class
class(x) <- c("purse", "data.frame")
x
}
#' Treat a data set in a coin for outliers
#'
#' Operates a two-stage data treatment process on the data set specified by `dset`, based on two data treatment functions, and a pass/fail
#' function which detects outliers. The method of data treatment can be either specified by the `global_specs` argument (which applies
#' the same specifications to all indicators in the specified data set), or else (additionally) by the `indiv_specs` argument which allows different
#' methods to be applied for each indicator. See details. For a simpler function for data treatment, see the wrapper function [qTreat()].
#'
#' @details
#' # Global specifications
#'
#' If the same method of data treatment should be applied to all indicators, use the `global_specs` argument. This argument takes a structured
#' list which looks like this:
#'
#' ```
#' global_specs = list(f1 = .,
#' f1_para = list(.),
#' f2 = .,
#' f2_para = list(.),
#' f_pass = .,
#' f_pass_para = list()
#' )
#' ```
#'
#' The entries in this list correspond to arguments in [Treat.numeric()], and the meanings of each are also described in more detail here
#' below. In brief, `f1` is the name of a function to apply at the first round of data treatment, `f1_para` is a list of any additional
#' parameters to pass to `f1`, `f2` and `f2_para` are equivalently the function name and parameters of the second round of data treatment, and
#' `f_pass` and `f_pass_para` are the function and additional arguments to check for the existence of outliers.
#'
#' The default values for `global_specs` are as follows:
#'
#' ```
#' global_specs = list(f1 = "winsorise",
#' f1_para = list(na.rm = TRUE,
#' winmax = 5,
#' skew_thresh = 2,
#' kurt_thresh = 3.5,
#' force_win = FALSE),
#' f2 = "log_CT",
#' f2_para = list(na.rm = TRUE),
#' f_pass = "check_SkewKurt",
#' f_pass_para = list(na.rm = TRUE,
#' skew_thresh = 2,
#' kurt_thresh = 3.5))
#' ```
#'
#' This shows that by default (i.e. if `global_specs` is not specified), each indicator is checked for outliers by the [check_SkewKurt()] function, which
#' uses skew and kurtosis thresholds as its parameters. Then, if outliers exist, the first function [winsorise()] is applied, which also
#' uses skew and kurtosis parameters, as well as a maximum number of winsorised points. If the Winsorisation function does not satisfy
#' `f_pass`, the [log_CT()] function is invoked.
#'
#' To change the global specifications, you don't have to supply the whole list. If, for example, you are happy with all the defaults but
#' want to simply change the maximum number of Winsorised points, you could specify e.g. `global_specs = list(f1_para = list(winmax = 3))`.
#' In other words, a subset of the list can be specified, as long as the structure of the list is correct.
#'
#' # Individual specifications
#'
#' The `indiv_specs` argument allows different specifications for each indicator. This is done by wrapping multiple lists of the format of the
#' list described in `global_specs` into one single list, named according to the indicator codes in the data set. For example, if the data set has indicators with codes
#' "x1", "x2" and "x3", we could specify individual treatment as follows:
#'
#' ```
#' indiv_specs = list(x1 = list(.),
#' x2 = list(.),
#' x3 = list(.))
#' ```
#'
#' where each `list(.)` is a specifications list of the same format as `global_specs`. Any indicators that are *not* named in `indiv_specs` are
#' treated using the specifications from `global_specs` (which will be the defaults if it is not specified). As with `global_specs`,
#' a subset of the `global_specs` list may be specified for
#' each entry. Additionally, as a special case, specifying a list entry as e.g. `x1 = "none"` will apply no data treatment to the indicator "x1". See
#' `vignette("treat")` for examples of individual treatment.
#'
#' # Function methodology
#'
#' This function is set up to allow any functions to be passed as the
#' data treatment functions (`f1` and `f2`), as well as any function to be passed as the outlier detection
#' function `f_pass`, as specified in the `global_specs` and `indiv_specs` arguments.
#'
#' The arrangement of this function is inspired by a fairly standard data treatment process applied to
#' indicators, which consists of checking skew and kurtosis, then if the criteria are not met, applying
#' Winsorisation up to a specified limit. Then if Winsorisation still does not bring skew and kurtosis
#' within limits, applying a nonlinear transformation such as log or Box-Cox.
#'
#' This function generalises this process by using the following general steps:
#'
#' 1. Check if variable passes or fails using `f_pass`
#' 2. If `f_pass` returns `FALSE`, apply `f1`, else return `x` unmodified
#' 3. Check again using `f_pass`
#' 4. If `f_pass` still returns `FALSE`, apply `f2`
#' 5. Return the modified `x` as well as other information.
#'
#' For the "typical" case described above `f1` is a Winsorisation function, `f2` is a nonlinear transformation
#' and `f_pass` is a skew and kurtosis check. Parameters can be passed to each of these three functions in
#' a named list, for example to specify a maximum number of points to Winsorise, or Box-Cox parameters, or anything
#' else. The constraints are that:
#'
#' * All of `f1`, `f2` and `f_pass` must follow the format `function(x, f_para)`, where `x` is a
#' numerical vector, and `f_para` is a list of other function parameters to be passed to the function, which
#' is specified by `f1_para` for `f1` and similarly for the other functions. If the function has no parameters
#' other than `x`, then `f_para` can be omitted.
#' * `f1` and `f2` should return either a list with `.$x` as the modified numerical vector, and any other information
#' to be attached to the list, OR, simply `x` as the only output.
#' * `f_pass` must return a logical value, where `TRUE` indicates that the `x` passes the criteria (and
#' therefore doesn't need any (more) treatment), and `FALSE` means that it fails to meet the criteria.
#'
#' See also `vignette("treat")`.
#'
#' @param x A coin
#' @param dset A named data set available in `.$Data`
#' @param global_specs A list specifying the treatment to apply to all columns. This will be applied to all columns, except any
#' that are specified in the `indiv_specs` argument. Alternatively, set to `"none"` to apply no treatment. See details.
#' @param indiv_specs A list specifying any individual treatment to apply to specific columns, overriding `global_specs`
#' for those columns. See details.
#' @param combine_treat By default, if `f1` fails to pass `f_pass`, then `f2` is applied to the original `x`,
#' rather than the treated output of `f1`. If `combine_treat = TRUE`, `f2` will instead be applied to the output
#' of `f1`, so the two treatments will be combined.
#' @param out2 The type of function output: either `"coin"` to return an updated coin, or `"list"` to return a
#' list with treated data and treatment details.
#' @param write2log Logical: if `FALSE`, the arguments of this function are not written to the coin log, so this
#' function will not be invoked when regenerating. Recommend to keep `TRUE` unless you have a good reason to do otherwise.
#' @param write_to If specified, writes the aggregated data to `.$Data[[write_to]]`. Default `write_to = "Treated"`.
#' @param ... arguments passed to or from other methods.
#' @param disable Logical: if `TRUE` will disable data treatment completely and write the unaltered data set. This option is mainly useful
#' in sensitivity and uncertainty analysis (to test the effect of turning data treatment on/off).
#'
#' @return An updated coin with a new data set `.Data$Treated` added, plus analysis information in
#' `.$Analysis$Treated`.
#' @export
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin")
#'
#' # treat raw data set
#' coin <- Treat(coin, dset = "Raw")
#'
#' # summary of treatment for each indicator
#' head(coin$Analysis$Treated$Dets_Table)
#'
Treat.coin <- function(x, dset, global_specs = NULL, indiv_specs = NULL,
combine_treat = FALSE, out2 = "coin", write_to = NULL,
write2log = TRUE, disable = FALSE, ...){
# WRITE LOG ---------------------------------------------------------------
coin <- write_log(x, dont_write = "x", write2log = write2log)
# potentially skip all data treatment
stopifnot(is.logical(disable))
if(disable){
idata <- get_dset(coin, dset = dset)
# output list
if(out2 == "list"){
message("Returning data frame since treatment was disabled.")
return(idata)
} else {
if(is.null(write_to)){
write_to <- "Treated"
}
return(write_dset(coin, idata, dset = write_to))
}
}
# GET DSET, CHECKS --------------------------------------------------------
iData <- get_dset(coin, dset)
# TREAT DATA --------------------------------------------------------------
l_treat <- Treat(iData, global_specs = global_specs,
indiv_specs = indiv_specs, combine_treat = combine_treat)
# output list
if(out2 == "list"){
l_treat
} else {
if(is.null(write_to)){
write_to <- "Treated"
}
coin <- write_dset(coin, l_treat$x_treat, dset = write_to)
write2coin(coin, l_treat[names(l_treat) != "x_treat"], out2, "Analysis", write_to)
}
}
#' Treat a data frame for outliers
#'
#' Operates a two-stage data treatment process, based on two data treatment functions, and a pass/fail
#' function which detects outliers. The method of data treatment can be either specified by the `global_specs` argument (which applies
#' the same specifications to all columns in `x`), or else (additionally) by the `indiv_specs` argument which allows different
#' methods to be applied for each column. See details. For a simpler function for data treatment, see the wrapper function [qTreat()].
#'
#' @details
#' # Global specifications
#'
#' If the same method of data treatment should be applied to all the columns, use the `global_specs` argument. This argument takes a structured
#' list which looks like this:
#'
#' ```
#' global_specs = list(f1 = .,
#' f1_para = list(.),
#' f2 = .,
#' f2_para = list(.),
#' f_pass = .,
#' f_pass_para = list()
#' )
#' ```
#'
#' The entries in this list correspond to arguments in [Treat.numeric()], and the meanings of each are also described in more detail here
#' below. In brief, `f1` is the name of a function to apply at the first round of data treatment, `f1_para` is a list of any additional
#' parameters to pass to `f1`, `f2` and `f2_para` are equivalently the function name and parameters of the second round of data treatment, and
#' `f_pass` and `f_pass_para` are the function and additional arguments to check for the existence of outliers.
#'
#' The default values for `global_specs` are as follows:
#'
#' ```
#' global_specs = list(f1 = "winsorise",
#' f1_para = list(na.rm = TRUE,
#' winmax = 5,
#' skew_thresh = 2,
#' kurt_thresh = 3.5,
#' force_win = FALSE),
#' f2 = "log_CT",
#' f2_para = list(na.rm = TRUE),
#' f_pass = "check_SkewKurt",
#' f_pass_para = list(na.rm = TRUE,
#' skew_thresh = 2,
#' kurt_thresh = 3.5))
#' ```
#'
#' This shows that by default (i.e. if `global_specs` is not specified), each column is checked for outliers by the [check_SkewKurt()] function, which
#' uses skew and kurtosis thresholds as its parameters. Then, if outliers exist, the first function [winsorise()] is applied, which also
#' uses skew and kurtosis parameters, as well as a maximum number of winsorised points. If the Winsorisation function does not satisfy
#' `f_pass`, the [log_CT()] function is invoked.
#'
#' To change the global specifications, you don't have to supply the whole list. If, for example, you are happy with all the defaults but
#' want to simply change the maximum number of Winsorised points, you could specify e.g. `global_specs = list(f1_para = list(winmax = 3))`.
#' In other words, a subset of the list can be specified, as long as the structure of the list is correct.
#'
#' # Individual specifications
#'
#' The `indiv_specs` argument allows different specifications for each column in `x`. This is done by wrapping multiple lists of the format of the
#' list described in `global_specs` into one single list, named according to the column names of `x`. For example, if `x` has column names
#' "x1", "x2" and "x3", we could specify individual treatment as follows:
#'
#' ```
#' indiv_specs = list(x1 = list(.),
#' x2 = list(.),
#' x3 = list(.))
#' ```
#'
#' where each `list(.)` is a specifications list of the same format as `global_specs`. Any columns that are not named in `indiv_specs` are
#' treated using the specifications from `global_specs` (which will be the defaults if it is not specified). As with `global_specs`,
#' a subset of the `global_specs` list may be specified for
#' each entry. Additionally, as a special case, specifying a list entry as e.g. `x1 = "none"` will apply no data treatment to the column "x1". See
#' `vignette("treat")` for examples of individual treatment.
#'
#' # Function methodology
#'
#' This function is set up to allow any functions to be passed as the
#' data treatment functions (`f1` and `f2`), as well as any function to be passed as the outlier detection
#' function `f_pass`, as specified in the `global_specs` and `indiv_specs` arguments.
#'
#' The arrangement of this function is inspired by a fairly standard data treatment process applied to
#' indicators, which consists of checking skew and kurtosis, then if the criteria are not met, applying
#' Winsorisation up to a specified limit. Then if Winsorisation still does not bring skew and kurtosis
#' within limits, applying a nonlinear transformation such as log or Box-Cox.
#'
#' This function generalises this process by using the following general steps:
#'
#' 1. Check if variable passes or fails using `f_pass`
#' 2. If `f_pass` returns `FALSE`, apply `f1`, else return `x` unmodified
#' 3. Check again using `f_pass`
#' 4. If `f_pass` still returns `FALSE`, apply `f2`
#' 5. Return the modified `x` as well as other information.
#'
#' For the "typical" case described above `f1` is a Winsorisation function, `f2` is a nonlinear transformation
#' and `f_pass` is a skew and kurtosis check. Parameters can be passed to each of these three functions in
#' a named list, for example to specify a maximum number of points to Winsorise, or Box-Cox parameters, or anything
#' else. The constraints are that:
#'
#' * All of `f1`, `f2` and `f_pass` must follow the format `function(x, f_para)`, where `x` is a
#' numerical vector, and `f_para` is a list of other function parameters to be passed to the function, which
#' is specified by `f1_para` for `f1` and similarly for the other functions. If the function has no parameters
#' other than `x`, then `f_para` can be omitted.
#' * `f1` and `f2` should return either a list with `.$x` as the modified numerical vector, and any other information
#' to be attached to the list, OR, simply `x` as the only output.
#' * `f_pass` must return a logical value, where `TRUE` indicates that the `x` passes the criteria (and
#' therefore doesn't need any (more) treatment), and `FALSE` means that it fails to meet the criteria.
#'
#' See also `vignette("treat")`.
#'
#' @param x A data frame. Can have both numeric and non-numeric columns.
#' @param global_specs A list specifying the treatment to apply to all columns. This will be applied to all columns, except any
#' that are specified in the `indiv_specs` argument. Alternatively, set to `"none"` to apply no treatment. See details.
#' @param indiv_specs A list specifying any individual treatment to apply to specific columns, overriding `global_specs`
#' for those columns. See details.
#' @param combine_treat By default, if `f1` fails to pass `f_pass`, then `f2` is applied to the original `x`,
#' rather than the treated output of `f1`. If `combine_treat = TRUE`, `f2` will instead be applied to the output
#' of `f1`, so the two treatments will be combined.
#' @param ... arguments passed to or from other methods.
#'
#' @importFrom utils modifyList
#'
#' @examples
#' # select three indicators
#' df1 <- ASEM_iData[c("Flights", "Goods", "Services")]
#'
#' # treat the data frame using defaults
#' l_treat <- Treat(df1)
#'
#' # details of data treatment for each column
#' l_treat$Dets_Table
#'
#' @return A list containing the treated data frame (`.$x_treat`), plus details of the treatment applied.
#'
#' @export
Treat.data.frame <- function(x, global_specs = NULL, indiv_specs = NULL, combine_treat = FALSE, ...){
# SET DEFAULTS ------------------------------------------------------------
# default treatment for all cols
specs_def <- list(f1 = "winsorise",
f1_para = list(na.rm = TRUE,
winmax = 5,
skew_thresh = 2,
kurt_thresh = 3.5,
force_win = FALSE),
f2 = "log_CT",
f2_para = list(na.rm = TRUE),
f_pass = "check_SkewKurt",
f_pass_para = list(na.rm = TRUE,
skew_thresh = 2,
kurt_thresh = 3.5))
# modify using input
if(!is.null(global_specs)){
if(is.character(global_specs)){
stopifnot(length(global_specs) == 1)
if(global_specs != "none"){
stop("global_specs must either be a list or else 'none'.")
}
} else {
stopifnot(is.list(global_specs))
specs_def <- utils::modifyList(specs_def, global_specs)
}
}
# individual: check and flag for later function
indiv <- !is.null(indiv_specs)
if(indiv){
stopifnot(is.list(indiv_specs))
}
# TREAT COLS --------------------------------------------------------------
# function for treating a column
treat_col <- function(col_name){
# get col and check if numeric
xi <- x[[col_name]]
if(!is.numeric(xi)){
return(list(x = xi))
}
# get specs
if(indiv){
# check if spec for that col
if(col_name %in% names(indiv_specs)){
# lookup spec
indiv_specs_col <- indiv_specs[[col_name]]
# check if "none"
if(is.character(indiv_specs_col) && length(indiv_specs_col) == 1){
if(indiv_specs_col == "none"){
return(list(x = xi))
}
}
# merge with defaults (overwrites any differences)
specs <- utils::modifyList(specs_def, indiv_specs_col)
} else {
# otherwise, use defaults
specs <- specs_def
}
} else {
# otherwise, use defaults
specs <- specs_def
if(is.character(specs) && length(specs) == 1){
if(specs == "none"){
return(list(x = xi))
}
}
}
# run function allowing for catching errors
tryCatch(
expr = do.call("Treat.numeric", c(list(x = xi, combine_treat = combine_treat), specs)),
error = function(e){
warning("Indicator could not be treated at column '", col_name, "' - returning untreated indicator. May be due to lack of unique values.", call. = FALSE)
list(x = xi)
}
)
}
# now run function
# output is one list
treat_results <- lapply(names(x), treat_col)
names(treat_results) <- names(x)
# ORGANISE AND OUTPUT -----------------------------------------------------
# the treated data frame
x_treat <- as.data.frame(lapply(treat_results, `[`, "x"))
names(x_treat) <- names(x)
# a table of treatment information
details <- lapply(treat_results, function(x) unlist(x$Dets_Table, recursive = F))
not_null <- lengths(details) != 0
details <- details[not_null]
details_table <- Reduce(rbind_fill, details)
details_table <- data.frame(iCode = names(x)[not_null], details_table)
# a list of any remaining treatment info that can't go in a table
details_list <- lapply(treat_results, `[[`, "Dets_List")
details_list <- tidy_list(details_list)
# ADD TREATED POINTS
Treated_Points <- lapply(treat_results, `[[`, "Treated_Points")
Treated_Points <- as.data.frame(tidy_list(Treated_Points))
# output
l_out <- list(x_treat = x_treat,
Dets_Table = details_table,
Treated_Points = Treated_Points,
Dets_List = details_list)
tidy_list(l_out)
}
#' Treat a numeric vector for outliers
#'
#' Operates a two-stage data treatment process, based on two data treatment functions, and a pass/fail
#' function which detects outliers. This function is set up to allow any functions to be passed as the
#' data treatment functions (`f1` and `f2`), as well as any function to be passed as the outlier detection
#' function `f_pass`.
#'
#' The arrangement of this function is inspired by a fairly standard data treatment process applied to
#' indicators, which consists of checking skew and kurtosis, then if the criteria are not met, applying
#' Winsorisation up to a specified limit. Then if Winsorisation still does not bring skew and kurtosis
#' within limits, applying a nonlinear transformation such as log or Box-Cox.
#'
#' This function generalises this process by using the following general steps:
#'
#' 1. Check if variable passes or fails using `f_pass`
#' 2. If `f_pass` returns `FALSE`, apply `f1`, else return `x` unmodified
#' 3. Check again using `f_pass`
#' 4. If `f_pass` still returns `FALSE`, apply `f2` (by default to the original `x`, see `combine_treat`
#' parameter)
#' 5. Return the modified `x` as well as other information.
#'
#' For the "typical" case described above `f1` is a Winsorisation function, `f2` is a nonlinear transformation
#' and `f_pass` is a skew and kurtosis check. Parameters can be passed to each of these three functions in
#' a named list, for example to specify a maximum number of points to Winsorise, or Box-Cox parameters, or anything
#' else. The constraints are that:
#'
#' * All of `f1`, `f2` and `f_pass` must follow the format `function(x, f_para)`, where `x` is a
#' numerical vector, and `f_para` is a list of other function parameters to be passed to the function, which
#' is specified by `f1_para` for `f1` and similarly for the other functions. If the function has no parameters
#' other than `x`, then `f_para` can be omitted.
#' * `f1` and `f2` should return either a list with `.$x` as the modified numerical vector, and any other information
#' to be attached to the list, OR, simply `x` as the only output.
#' * `f_pass` must return a logical value, where `TRUE` indicates that the `x` passes the criteria (and
#' therefore doesn't need any (more) treatment), and `FALSE` means that it fails to meet the criteria.
#'
#' See also `vignette("treat")`.
#'
#' @param x A numeric vector.
#' @param f1 First stage data treatment function e.g. as a string.
#' @param f1_para First stage data treatment function parameters as a named list.
#' @param f2 Second stage data treatment function, e.g. as a string.
#' @param f2_para Second stage data treatment function parameters as a named list.
#' @param combine_treat By default, if `f1` fails to pass `f_pass`, then `f2` is applied to the original `x`,
#' rather than the treated output of `f1`. If `combine_treat = TRUE`, `f2` will instead be applied to the output
#' of `f1`, so the two treatments will be combined.
#' @param f_pass A string specifying an outlier detection function - see details. Default `"check_SkewKurt"`
#' @param f_pass_para Any further arguments to pass to `f_pass()`, as a named list.
#' @param ... arguments passed to or from other methods.
#'
#' @examples
#' # numbers between 1 and 10
#' x <- 1:10
#'
#' # two outliers
#' x <- c(x, 30, 100)
#'
#' # check whether passes skew/kurt test
#' check_SkewKurt(x)
#'
#' # treat using winsorisation
#' l_treat <- Treat(x, f1 = "winsorise", f1_para = list(winmax = 2),
#' f_pass = "check_SkewKurt")
#'
#' # plot original against treated
#' plot(x, l_treat$x)
#'
#' @return A list containing the treated vector of data (`.$x`), plus details of the treatment applied.
#'
#' @export
Treat.numeric <- function(x, f1, f1_para = NULL, f2 = NULL, f2_para = NULL,
f_pass, f_pass_para = NULL, combine_treat = FALSE, ...){
# INPUT CHECKS ------------------------------------------------------------
# check function for input functions
check_fx <- function(f, f_para){
stopifnot(is.character(f),
length(f) == 1
)
if(!is.null(f_para)){
if(!is.list(f_para)){
stop("Parameters of " ,f, " are required to be wrapped in a list.")
}
}
}
# apply check function to each input function
check_fx(f1, f1_para)
check_fx(f_pass, f_pass_para)
# f2 is optional
if(!is.null(f2)){
check_fx(f2, f2_para)
n_f <- 2
} else {
n_f <- 1
}
# set up lists for recording any info from functions
l_table <- vector(mode = "list") # for outputs to go into a table
l_list <- vector(mode = "list") # for outputs to go into a list
# df for recording treatment of individual points
df_treat <- as.data.frame(matrix("", nrow = length(x) ,ncol = n_f))
colnames(df_treat) <- c(f1, f2)
# PASS CHECK -------------------------------------------------------------------
# Requires function which returns TRUE = PASS or FALSE = fail, and optionally
# attaches some extra information (e.g. skew and kurtosis values)
proc_passing <- function(l, f_name, suffix){
if(is.list(l)){
# get pass/fail
pass1 <- l$Pass
if(is.null(pass1)){
stop("Required list entry .$Pass of output of ",f_name," is not found.")
}
# check if l contains any sub-lists
sub_lists <- sapply(l, is.list)
# collect outputs for table (not x, and no lists)
l_table[[paste0(f_name, suffix)]] <<- l[!sub_lists]
# collect any other outputs (not x, lists)
l_list[[paste0(f_name, suffix)]] <<- l[sub_lists]
} else if (is.logical(l)) {
pass1 <- l
# collect outputs for table (not x, and no lists)
l_table[[paste0(f_name, suffix)]] <<- pass1
} else {
stop("Output of ",f_name,"is not either a list with entry .$Pass or a logical")
}
if(length(pass1) != 1){
stop("Logical output from ",f_name," is not of length 1.")
}
if(!is.logical(pass1)){
stop("Output of f_pass is not logical - this is not allowed.")
}
if(is.na(pass1)){
warning("f_pass has returned NA. Returning untreated vector.")
return(NA)
}
pass1
}
# INITIAL CHECK
passing <- do.call(what = f_pass, args = c(list(x = x), f_pass_para))
pass <- proc_passing(passing, f_pass, 0)
# check output
if(is.na(pass)){
return(list(x = x,
Passing = NA))
}
# FUNC PROCESSING -----------------------------------------------------------------
# func to extract f1 and f2 outputs, write to list and check outputs
# this is necessary because f1 and f2 may output either a numeric vector
# or a list, plus optionally some other information.
proc_output <- function(l, f_name){
if(is.list(l)){
# get modified x
x1 <- l$x
if(is.null(x1)){
stop("Required list entry .$x of output of ",f_name," is not found.")
}
# get positions of treated points
x_treat <- l$treated
if(is.null(x_treat)){
stop("Required list entry .$treated of output of ",f_name," is not found.")
} else {
if(!is.character(x_treat)){
stop(".$treated of output of ",f_name," is not a character vector.")
}
if(length(x_treat) != length(x)){
stop(".$treated of output of ",f_name," is not the same length as x.")
}
df_treat[, f_name] <<- x_treat
}
# check if l contains any sub-lists
sub_lists <- sapply(l, is.list)
# collect outputs for table (not x, and no lists)
l_table[[f_name]] <<- l[(names(l) %nin% c("x", "treated")) & !sub_lists]
# collect any other outputs (not x, lists)
l_list[[f_name]] <<- l[(names(l) %nin% c("x", "treated")) & sub_lists]
} else if (is.numeric(l)) {
x1 <- l
} else {
stop("Output of ",f_name,"is not either a list with entry .$x or a numeric vector")
}
if(length(x1) != length(x)){
stop("Vector output from ",f_name," is not the same length as x")
}
x1
}
# TREATMENT 1 -------------------------------------------------------------
if(!pass){
# treat data with f1
l_f1 <- do.call(what = f1, args = c(list(x = x), f1_para))
# sort output (also writes any extra info to l_table
x1 <- proc_output(l_f1, f1)
# check (for deciding whether to go to treatment 2)
passing <- do.call(what = f_pass, args = c(list(x = x1), f_pass_para))
pass <- proc_passing(passing, f_pass, 1)
# check output
if(is.na(pass)){
return(list(x = x,
Dets_Table = l_table,
Treated_Points = rep("", length(x)),
Passing = NA))
}
} else {
x1 <- x
}
# TREATMENT 2 -------------------------------------------------------------
if(!pass & n_f == 2){
# optionally reset treatment 1 to original
if(!combine_treat){
x1 <- x
}
l_f2 <- do.call(what = f2, args = c(list(x = x1), f2_para))
# sort output (also writes any extra info to l_table
x2 <- proc_output(l_f2, f2)
# check if passes again
passing <- do.call(what = f_pass, args = c(list(x = x2), f_pass_para))
pass <- proc_passing(passing, f_pass, 2)
# check output
if(is.na(pass)){
return(list(x = x,
Dets_Table = l_table,
Treated_Points = rep("", length(x)),
Passing = NA))
}
} else {
x2 <- x1
}
# OUTPUT ------------------------------------------------------------------
# First, glue cols of treated points record
# remove NULL cols of df_treat
df_treat <- df_treat[!is.null(colnames(df_treat))]
# combine cols into one
if(ncol(df_treat) > 1){
Treat_Points <- apply(df_treat, MARGIN = 1, function(z){
if(z[2] == ""){
z[1]
} else {
paste(z, collapse = "+")
}
})
Treat_Points[Treat_Points == "+"] <- ""
} else {
Treat_Points <- df_treat
}
list(x = x2,
Dets_Table = l_table,
Treated_Points = Treat_Points,
Dets_List = tidy_list(l_list))
}
#' Treat outliers
#'
#' Generic function for treating outliers using a two-step process. See individual method documentation:
#'
#' * [Treat.numeric()]
#' * [Treat.data.frame()]
#' * [Treat.coin()]
#' * [Treat.purse()]
#'
#' See also `vignette("treat")`.
#'
#' This function replaces the now-defunct `treat()` from COINr < v1.0.
#'
#' @param x Object to be treated
#' @param ... arguments passed to or from other methods.
#'
#' @return Treated object plus details.
#'
#' @export
Treat <- function (x, ...){
UseMethod("Treat")
}
#' Winsorise a vector
#'
#' Follows a "standard" Winsorisation approach: points are successively Winsorised in order to bring
#' skew and kurtosis thresholds within specified limits. Specifically, aims to bring absolute skew to
#' below a threshold (default 2.25) and kurtosis below another threshold (default 3.5).
#'
#' Winsorisation here is defined as reassigning the point with the highest/lowest value with the value of the
#' next highest/lowest point. Whether to Winsorise at the high or low end of the scale is decided by the direction
#' of the skewness of `x`.
#'
#' This function replaces the now-defunct `coin_win()` from COINr < v1.0.
#'
#' @param x A numeric vector.
#' @param na.rm Set `TRUE` to remove `NA` values, otherwise returns `NA`.
#' @param winmax Maximum number of points to Winsorise. Default 5. Set `NULL` to have no limit.
#' @param skew_thresh A threshold for absolute skewness (positive). Default 2.
#' @param kurt_thresh A threshold for kurtosis. Default 3.5.
#' @param force_win Logical: if `TRUE`, forces winsorisation up to winmax (regardless of skew/kurt).
#' Default `FALSE`. Note - this option should be used with care because the direction of Winsorisation
#' is based on the direction of skew. Successively Winsorising can switch the direction of skew and hence
#' the direction of Winsorisation, which may not produce the expected behaviour.
#'
#' @examples
#' # numbers between 1 and 10
#' x <- 1:10
#'
#' # two outliers
#' x <- c(x, 30, 100)
#'
#' # winsorise
#' l_win <- winsorise(x, skew_thresh = 2, kurt_thresh = 3.5)
#'
#' # see treated vector, number of winsorised points and details
#' l_win
#'
#' @return A list containing winsorised data, number of winsorised points, and the individual points that
#' were treated.
#'
#' @export
winsorise <- function(x, na.rm = FALSE, winmax = 5, skew_thresh = 2, kurt_thresh = 3.5,
force_win = FALSE){
# test skew and kurtosis
passing <- check_SkewKurt(x, na.rm = na.rm,
skew_thresh = skew_thresh,
kurt_thresh = kurt_thresh)[["Pass"]]
# set winsorisation counter
nwin <- 0
# vectors to record indices of winsorised points. Set NULL to begin with to keep track of
# number of points in a sensible way. This is also passed through if no Winsorisation happens.
imax<-imin<-NULL
# if doesn't pass, go to winsorisation
if(!passing | force_win){
# set winsorisation limit logical flag. Use function because reused later.
f_below_winmax <- function(){
if(is.null(winmax)){
TRUE
} else {
nwin < winmax
}
}
below_winmax <- f_below_winmax()
# else go to Winsorisation
while((!passing | force_win) & below_winmax){
# winsorise depending on whether outliers are high or low
if(skew(x, na.rm = TRUE)>=0){ # skew is positive, implies high outliers
imax <- which(x==max(x, na.rm = T)) # position(s) of maximum value(s)
x[imax] <- max(x[-imax], na.rm = T) # replace imax with max value of indicator if imax value(s) excluded
} else { # skew is negative, implies low outliers
imin <- which(x==min(x, na.rm = T)) # ditto, but with min
x[imin] <- min(x[-imin], na.rm = T)
}
# count number winsorised points. Defined this way because it is possible we Winsorise
# two points at once if they are tied.
nwin <- length(imax) + length(imin)
# setting winmax to NULL implies no limit on winsorisation
below_winmax <- f_below_winmax()
# check if it passes now
passing <- check_SkewKurt(x, na.rm = na.rm,
skew_thresh = skew_thresh,
kurt_thresh = kurt_thresh)[["Pass"]]
}
}
# return winsorised vector, plus positions of winsorised points
treated <- rep("", length(x))
treated[imax] <- "winhi"
treated[imin] <- "winlo"
list(
x = x,
nwin = nwin,
treated = treated
)
}
#' Log-transform a vector
#'
#' Performs a log transform on a numeric vector. This function is currently not recommended - see comments
#' below.
#'
#' Specifically, this performs a "GII log" transform, which is what was encoded in the GII2020 spreadsheet.
#'
#' Note that this transformation is currently NOT recommended because it seems quite volatile and can flip
#' the direction of the indicator. If the maximum value of the indicator is less than one, this reverses the
#' direction.
#'
#' @param x A numeric vector.
#' @param na.rm Set `TRUE` to remove `NA` values, otherwise returns `NA`.
#'
#' @examples
#' x <- runif(20)
#' log_GII(x)
#'
#' @return A log-transformed vector of data.
#'
#' @export
log_GII <- function(x, na.rm = FALSE){
stopifnot(is.numeric(x),
is.vector(x))
mx <- max(x, na.rm = na.rm)
mn <- min(x, na.rm = na.rm)
x1 <- log(
(mx - 1)*(x - mn) / (mx-mn) + 1
)
list(x = x1,
treated = rep("log_GII", length(x)))
}
#' Log-transform a vector
#'
#' Performs a log transform on a numeric vector.
#'
#' Specifically, this performs a modified "COIN Tool log" transform: `log(x-min(x) + a)`, where
#' `a <- 0.01*(max(x)-min(x))`.
#'
#' @param x A numeric vector.
#' @param na.rm Set `TRUE` to remove `NA` values, otherwise returns `NA`.
#'
#' @examples
#' x <- runif(20)
#' log_CT(x)
#'
#' @return A log-transformed vector of data, and treatment details wrapped in a list.
#'
#' @export
log_CT <- function(x, na.rm = FALSE){
stopifnot(is.numeric(x))
x <- log(x- min(x,na.rm = na.rm) + 0.01*(max(x, na.rm = na.rm)-min(x, na.rm = na.rm)))
list(x = x,
treated = rep("log_CT", length(x)))
}
#' Log transform a vector (skew corrected)
#'
#' Performs a log transform on a numeric vector, but with consideration for the direction of the skew. The aim
#' here is to reduce the absolute value of skew, regardless of its direction.
#'
#' Specifically:
#'
#' If the skew of `x` is positive, this performs a modified "COIN Tool log" transform: `log(x-min(x) + a)`, where
#' `a <- 0.01*(max(x)-min(x))`.
#'
#' If the skew of `x` is negative, it performs an equivalent transformation `-log(xmax + a - x)`.
#'
#' @param x A numeric vector
#' @param na.rm Set `TRUE` to remove `NA` values, otherwise returns `NA`.
#'
#' @return A log-transformed vector of data, and treatment details wrapped in a list.
#' @export
#'
#' @examples
#' x <- runif(20)
#' log_CT_plus(x)
#'
log_CT_plus <- function(x, na.rm = FALSE){
stopifnot(is.numeric(x))
xmax <- max(x, na.rm = TRUE)
xmin <- min(x, na.rm = TRUE)
a <- 0.01 * (xmax - xmin)
if(skew(x, na.rm = TRUE) > 0){
list(x = log(x - xmin + a),
treated = rep("log_CT", length(x)))
} else {
list(x = -log(xmax + a - x),
treated = rep("log_CT_neg", length(x)))
}
}
#' Log-transform a vector
#'
#' Performs a log transform on a numeric vector.
#'
#' Specifically, this performs a "COIN Tool log" transform: `log(x-min(x) + 1)`.
#'
#' @param x A numeric vector.
#' @param na.rm Set `TRUE` to remove `NA` values, otherwise returns `NA`.
#'
#' @examples
#' x <- runif(20)
#' log_CT_orig(x)
#'
#' @return A log-transformed vector of data, and treatment details wrapped in a list.
#'
#' @export
log_CT_orig <- function(x, na.rm = FALSE){
stopifnot(is.numeric(x))
x <- log(x- min(x, na.rm = na.rm) + 1)
list(x = x,
treated = rep("log_CT_orig", length(x)))
}
#' Box Cox transformation
#'
#' Simple Box Cox, with no optimisation of lambda.
#'
#' This function replaces the now-defunct `BoxCox()` from COINr < v1.0.
#'
#' @param x A vector or column of data to transform
#' @param lambda The lambda parameter of the Box Cox transform
#' @param makepos If `TRUE` (default) makes all values positive by subtracting the minimum and adding 1.
#' @param na.rm If `TRUE`, `NA`s will be removed: only relevant if `makepos = TRUE` which invokes `min()`.
#'
#' @examples
#' # example data
#' x <- runif(30)
#' # Apply Box Cox
#' xBox <- boxcox(x, lambda = 2)
#' # plot one against the other
#' plot(x, xBox)
#'
#' @return A vector of length `length(x)` with transformed values.
#'
#' @export
boxcox <- function(x, lambda, makepos = TRUE, na.rm = FALSE){
stopifnot(is.numeric(x),
is.vector(x))
if(makepos){
# make positive using COIN Tool style shift
x <- x - min(x,na.rm = na.rm) + 1
}
# Box Cox
if (lambda==0){
log(x)
} else {
(x^lambda - 1)/lambda
}
}
#' Calculate skewness
#'
#' Calculates skewness of the values of a numeric vector. This uses the same definition of skewness as
#' the "skewness()" function in the "e1071" package where `type == 2`, which is equivalent to the definition of skewness used in Excel.
#'
#' @param x A numeric vector.
#' @param na.rm Set `TRUE` to remove `NA` values, otherwise returns `NA`.
#'
#' @examples
#' x <- runif(20)
#' skew(x)
#'
#' @return A skewness value (scalar).
#'
#' @export
skew <- function(x, na.rm = FALSE){
stopifnot(is.numeric(x),
is.vector(x))
if(any(is.na(x))){
if(na.rm){
x <- x[!is.na(x)]
} else {
return(NA)
}
}
n <- length(x)
# need min 3 points to work
if(n<3){
return(NA)
}
# calculate skewness. NOTE this is taken from e1071::skewness() to avoid dependencies.
x <- x - mean(x)
y <- sqrt(n) * sum(x^3)/(sum(x^2)^(3/2))
y <- y * sqrt(n * (n - 1))/(n - 2)
return(y)
}
#' Calculate kurtosis
#'
#' Calculates kurtosis of the values of a numeric vector. This uses the same definition of kurtosis as
#' the "kurtosis()" function in the e1071 package, where `type == 2`, which is equivalent to the definition of kurtosis used in Excel.
#'
#' @param x A numeric vector.
#' @param na.rm Set `TRUE` to remove `NA` values, otherwise returns `NA`.
#'
#' @examples
#' x <- runif(20)
#' kurt(x)
#'
#' @return A kurtosis value (scalar).
#'
#' @export
kurt <- function(x, na.rm = FALSE){
stopifnot(is.numeric(x),
is.vector(x))
if(any(is.na(x))){
if(na.rm){
x <- x[!is.na(x)]
} else {
return(NA)
}
}
n <- length(x)
# need min 4 points to work
if(n<4){
return(NA)
}
# demean and calculate kurtosis. NOTE this is taken from e1071::kurtosis() to avoid dependencies.
x <- x - mean(x)
r <- n * sum(x^4)/(sum(x^2)^2)
y <- ((n + 1) * (r - 3) + 6) * (n - 1)/((n - 2) * (n - 3))
return(y)
}
#' Check skew and kurtosis of a vector
#'
#' Logical test: if `abs(skewness) < skew_thresh` OR `kurtosis < kurt_thresh`, returns `TRUE`, else `FALSE`
#'
#' @param x A numeric vector.
#' @param na.rm Set `TRUE` to remove `NA` values, otherwise returns `NA`.
#' @param skew_thresh A threshold for absolute skewness (positive). Default 2.
#' @param kurt_thresh A threshold for kurtosis. Default 3.5.
#'
#' @examples
#' set.seed(100)
#' x <- runif(20)
#' # this passes
#' check_SkewKurt(x)
#' # if we add an outlier, doesn't pass
#' check_SkewKurt(c(x, 1000))
#'
#' @return A list with `.$Pass` (logical: `TRUE` is pass, `FALSE` is fail), plus `.$Skew` and `.$Kurt`
#' giving the calculated skew and kurtosis values.
#'
#' @export
check_SkewKurt <- function(x, na.rm = FALSE, skew_thresh = 2, kurt_thresh = 3.5){
# get skew and kurtosis
sk <- skew(x, na.rm = na.rm)
kt <- kurt(x, na.rm = na.rm)
# logical test
ans <- (abs(sk) < skew_thresh) | (kt < kurt_thresh)
# make sure output is sensible
stopifnot(is.logical(ans),
length(ans)==1)
# output
list(Pass = ans, Skew = sk, Kurt = kt)
}
/scratch/gouwar.j/cran-all/cranData/COINr/R/treat.R
#' Get time trends
#'
#' Get time trends from a purse object. This function extracts a panel data set from a purse, and calculates trends
#' for each indicator/unit pair using a specified function `f_trend`. For example, if `f_trend = "CAGR"`, this extracts
#' the time series for each indicator/unit pair and passes it to [CAGR()].
#'
#' This function requires a purse object as an input. The data set is selected using [get_data()], such that a subset
#' of the data set can be analysed using the `uCodes`, `iCodes` and `Time` arguments. The latter is useful especially
#' if only a subset of the time series should be analysed.
#'
#' The function `f_trend` is a function that, given a time series, returns a trend metric. This must follow a
#' specific format. It must of course be available to call, and *must* have arguments `y` and `x`, which are
#' respectively a vector of values and a vector indexing the values in time. See [prc_change()] and [CAGR()]
#' for examples. The function *must* return a single value (not a vector with multiple entries, or a list).
#' The function can return either numeric or character values.
#'
#' @param purse A purse object
#' @param dset Name of the data set to extract, passed to [get_data.purse()]
#' @param uCodes Optional subset of unit codes to extract, passed to [get_data.purse()]
#' @param iCodes Optional subset of indicator/aggregate codes to extract, passed to [get_data.purse()]
#' @param Time Optional vector of time points to extract, passed to [get_data.purse()]
#' @param f_trend Function that returns a metric describing the trend of the time series. See details.
#' @param use_latest A positive integer which specifies to use only the latest "n" data points. If this is specified, it
#' overrides `Time`. If e.g. `use_latest = 5`, will use the latest five observations, working backwards from the latest
#' non-`NA` point.
#' @param interp_at Option to linearly interpolate missing data points in each time series. Must be specified as a vector
#' of time values where to apply interpolation. If `interp_at = "all"`, will attempt to interpolate at every
#' time point. Uses linear interpolation - note that any `NA`s outside of the range of observed values will not
#' be estimated, i.e. this does not *extrapolate* beyond the range of data. See [approx_df()].
#' @param adjust_directions Logical: if `TRUE`, trend metrics are adjusted according to indicator/aggregate
#' directions input in `iMeta` (i.e. if the corresponding direction is -1, the metric will be multiplied by -1).
#'
#' @importFrom stats lm
#'
#' @return A data frame in long format, with trend metrics for each indicator/unit pair, plus
#' data availability statistics.
#' @export
#'
#' @examples
#' #
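#' # A minimal sketch (not run): this assumes the ASEM example purse created by
#' # build_example_purse(), as used in other COINr examples.
#' \dontrun{
#' purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
#' # CAGR of each indicator/unit pair in the "Raw" data set
#' df_trends <- get_trends(purse, dset = "Raw", f_trend = "CAGR")
#' head(df_trends$Trends)
#' }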
#'
get_trends <- function(purse, dset, uCodes = NULL, iCodes = NULL,
Time = NULL, use_latest = NULL, f_trend = "CAGR", interp_at = NULL,
adjust_directions = FALSE){
# Input checks ------------------------------------------------------------
# general checks
check_purse_input(purse)
if(!is.null(use_latest)){
stopifnot(is.numeric(use_latest),
length(use_latest) == 1)
}
# Get data and check ------------------------------------------------------
# retrieve data
# NOTE if we want to interpolate need to pull all data (all Time)
iData <- get_data(purse, dset = dset, uCodes = uCodes, iCodes = iCodes)
# get iMeta (for directions)
iMeta <- purse$coin[[1]]$Meta$Ind
# checks
stopifnot("uCode" %in% names(iData),
"Time" %in% names(iData))
if(!is.null(interp_at)){
# interp at all points
if(length(interp_at) == 1){
if(interp_at == "all"){
interp_at <- unique(iData$Time)
}
}
stopifnot(is.numeric(interp_at),
all(!is.na(interp_at)))
if( any(interp_at < min(iData$Time)) || any(interp_at > max(iData$Time))){
stop("One or more entries in interp_at are outside the time range of the data.", call. = FALSE)
}
}
if(!is.null(Time)){
if(any(Time %nin% unique(iData$Time))){
stop("One or more entries in Time are not found in the selected data set.")
}
}
# Prep for data avail records ---------------------------------------------
# this is handled separately because it was added later; otherwise the loop below would need restructuring
# Prep df for all indicator/unit pairs, with data avail cols. These will be populated within the following loops
dat_avail <- data.frame(
expand.grid(unique(iData$uCode), names(iData)[names(iData) %nin% c("uCode", "Time")], stringsAsFactors = FALSE),
Avail = 0,
t_first = NA,
t_latest = NA,
Avail_use_latest = NA,
t_first_use_latest = NA
)
names(dat_avail)[1:2] <- c("uCode", "iCode")
# df for time series (currently only for use_latest)
if(!is.null(use_latest)){
# df for y values
df_y <- dat_avail[c("uCode", "iCode")]
df_y <- cbind(df_y, as.data.frame(matrix(nrow = nrow(df_y), ncol = use_latest)))
names(df_y)[3:ncol(df_y)] <- paste0("y", 1:use_latest)
# df for x values
df_x <- df_y
names(df_x)[3:ncol(df_x)] <- paste0("x", 1:use_latest)
}
# Get trends --------------------------------------------------------------
# first split data by uCode
l_data <- split(iData, f = iData$uCode)
# this function gets trends for each uCode
l_trends <- lapply(l_data, function(dfi){
# dfi is a data frame with one single unit code
uCode <- unique(dfi$uCode)
stopifnot(length(uCode) == 1)
# I have to make sure that the Time col has the same entries as interp_at
# otherwise this causes trouble later on
if(!is.null(interp_at)){
dfi <- merge(data.frame(Time = interp_at), dfi, by = "Time", all = TRUE)
dfi$uCode <- uCode
}
# now sort it (low to high)
dfi <- dfi[order(dfi$Time), ]
# get time vector
tt <- dfi$Time
# get indicator cols
icols <- dfi[names(dfi) %nin% c("Time", "uCode")]
# get time-filtered copy for later on
if(!is.null(Time)){
dfi_t <- dfi[dfi$Time %in% Time, ]
tt_t <- dfi_t$Time
dfi_t <- dfi_t[names(dfi_t) %nin% c("Time", "uCode")]
} else {
dfi_t <- icols
tt_t <- tt
}
# interpolate if requested
if(!is.null(interp_at)){
l_out <- approx_df(Y = icols, tt = tt, tt_est = interp_at)
Y_use <- l_out$Y
tt_use <- l_out$tt
} else {
Y_use <- icols
tt_use <- tt
}
# subset to Time
if(!is.null(Time) && is.null(use_latest)){
# use_latest overrides Time, if it is specified, so this only happens if use_latest is NULL
# Time may request some points not present, so keep only time points found in both
keep <- tt_use %in% Time
Y_use <- Y_use[keep, , drop = FALSE]
tt_use <- tt_use[keep]
}
# things are getting fiddly now so I go into a for loop
trends <- as.numeric(rep(NA, ncol(Y_use)))
for(ii in 1:ncol(Y_use)){
# indicator code of this col
icode <- names(Y_use)[ii]
# direction of this indicator
if(adjust_directions){
direction <- iMeta$Direction[iMeta$iCode == icode]
} else {
direction <- 1
}
# yraw is the col of indicator data (with no adjustments, but possibly filtered by Time)
yraw <- dfi_t[[ii]]
# y is the column of indicator data (possibly interpolated)
y <- Y_use[[ii]]
# row index in data availability table
i_dat_avail <- which((dat_avail$uCode == uCode) & (dat_avail$iCode == icode))
# overall data availability of y
dat_avail$Avail[i_dat_avail] <<- mean(!is.na(yraw))
if(sum(!is.na(yraw)) > 1){
dat_avail$t_first[i_dat_avail] <<- tt_t[min(which(!is.na(yraw)))]
dat_avail$t_latest[i_dat_avail] <<- tt_t[max(which(!is.na(yraw)))]
}
# if y has less than 2 non-NA points, we simply give NA and go to the next col
if(sum(!is.na(y)) < 2){
trends[ii] <- NA
next
}
if(!is.null(use_latest)){
# this is the index of the latest data point
i_latest <- max(which(!is.na(y)))
# index of first point in series (to use)
i_first <- i_latest - use_latest + 1
if(i_first > 1){
# take the latest points
y <- y[i_first : i_latest]
tt_ii <- tt_use[i_first : i_latest]
yraw_ii <- yraw[i_first : i_latest]
# record these in data frames
df_x[i_dat_avail, 3:ncol(df_x)] <<- tt_ii
df_y[i_dat_avail, 3:ncol(df_y)] <<- yraw_ii
# overall data availability of y (we use yraw, not interpolated)
dat_avail$Avail_use_latest[i_dat_avail] <<- mean(!is.na(yraw_ii))
if(sum(!is.na(y)) > 1){
dat_avail$t_first_use_latest[i_dat_avail] <<- tt_ii[min(which(!is.na(y)))]
}
# check again in case we have all NAs here
if(sum(!is.na(y)) < 2){
trends[ii] <- NA
next
}
# send to function
trend_metric <- do.call(f_trend, list(y = y, x = tt_ii))
# check
if(length(trend_metric) == 1){
if(is.numeric(trend_metric) || is.character(trend_metric)){
trends[ii] <- trend_metric*direction
} else {
stop("The trend metric returned by 'f_trend' is not either numeric or character", call. = FALSE)
}
} else {
stop("The trend metric returned by 'f_trend' is not a vector of length 1", call. = FALSE)
}
} else {
# here we go back farther than we have data points, so NA
trends[ii] <- NA
next
}
} else {
# no further subsetting of the data by time
# send to function
trend_metric <- do.call(f_trend, list(y = y, x = tt_use))
# check
if(length(trend_metric) == 1){
if(is.numeric(trend_metric) || is.character(trend_metric)){
trends[ii] <- trend_metric*direction
} else {
stop("The trend metric returned by 'f_trend' is not either numeric or character", call. = FALSE)
}
} else {
stop("The trend metric returned by 'f_trend' is not a vector of length 1", call. = FALSE)
}
} # end
} # end for loop
# the output of the lapply call
names(trends) <- names(Y_use)
trends
})
# Post-proc and output ----------------------------------------------------
# reshape this into a data frame
df_trends <- as.data.frame(l_trends)
df_trends <- cbind(iCode = row.names(df_trends), df_trends)
row.names(df_trends) <- NULL
# convert to long format
df_long <- lengthen(df_trends, cols = names(df_trends)[names(df_trends) != "iCode"])
names(df_long)[names(df_long) == "Value"] <- f_trend
names(df_long)[names(df_long) == "name"] <- "uCode"
# merge with dat_avail
df_long <- merge(df_long, dat_avail, by = c("uCode", "iCode"), all = TRUE)
# order rows of df_long
df_long <- df_long[order(df_long$iCode, df_long$uCode), ]
# output
if(!is.null(use_latest)){
l_out <- list(Trends = df_long,
x = df_x[order(df_x$iCode, df_x$uCode), ],
y = df_y[order(df_y$iCode, df_y$uCode), ])
} else {
df_long <- df_long[names(df_long) %nin% c("Avail_use_latest", "t_first_use_latest")]
l_out <- list(Trends = df_long)
}
l_out
}
#' Interpolate time-indexed data frame
#'
#' Given a numeric data frame `Y` with rows indexed by a time vector `tt`, interpolates at time values
#' specified by the vector `tt_est`. If `tt_est` is not in `tt`, will create new rows in the data frame
#' corresponding to these interpolated points.
#'
#' This is a wrapper for [stats::approx()], with some differences. In the first place, [stats::approx()] is
#' applied to each column of `Y`, using `tt` each time as the corresponding time vector indexing `Y`. Interpolated
#' values are generated at points specified in `tt_est` but these are appended to the existing data (whereas
#' [stats::approx()] will only return the interpolated points and nothing else). Further arguments to
#' [stats::approx()] can be passed using the `...` argument.
#'
#' @param Y A data frame with all numeric columns
#' @param tt A time vector with length equal to `nrow(Y)`, indexing the rows in `Y`.
#' @param tt_est A time vector of points to interpolate in `Y`. If `NULL`, will attempt to interpolate all
#' points in `Y` (you may need to adjust the `rule` argument of [stats::approx()] here). Note that points not
#' specified in `tt_est` will not be interpolated. `tt_est` does not need to be a subset of `tt`.
#' @param ... Further arguments to pass to [stats::approx()] other than `x`, `y` and `xout`.
#'
#' @importFrom stats approx
#'
#' @return A list with:
#' * `.$tt` the vector of time points, including time values of interpolated points
#' * `.$Y` the corresponding interpolated data frame
#'
#' Both outputs are sorted by `tt`.
#' @export
#'
#' @examples
#' # a time vector
#' tt <- 2011:2020
#'
#' # two random vectors with some missing values
#' y1 <- runif(10)
#' y2 <- runif(10)
#' y1[2] <- y1[5] <- NA
#' y2[3] <- y2[5] <- NA
#' # make into df
#' Y <- data.frame(y1, y2)
#'
#' # interpolate for time = 2012
#' Y_int <- approx_df(Y, tt, 2012)
#' Y_int$Y
#'
#' # notice Y_int$y2 is unchanged since at 2012 it did not have NA value
#' stopifnot(identical(Y_int$Y$y2, y2))
#'
#' # interpolate at value not in tt
#' approx_df(Y, tt, 2015.5)
#'
approx_df <- function(Y, tt, tt_est = NULL, ...){
# some basic checks
stopifnot(is.data.frame(Y),
all(sapply(Y, is.numeric)),
nrow(Y) == length(tt))
# defaults
if(is.null(tt_est)){
tt_est <- tt
}
# get tt that are NOT to be sent to approx()
tt_not_est <- setdiff(tt, tt_est)
# get tt_out
tt_out <- c(tt_est, tt_not_est)
Y_out <- lapply(Y, function(y){
if(sum(!is.na(y)) > 1){
l_out <- stats::approx(x = tt, y = y, xout = tt_est, ...)
y_out <- c(l_out$y, y[match(tt_not_est, tt)])
} else {
# if vector is all NAs, just return a vector of NAs
as.numeric(rep(NA, length(tt_out)))
}
})
# reassemble and order
Y_out <- as.data.frame(Y_out)
Y_out <- Y_out[order(tt_out), ]
row.names(Y_out) <- NULL
tt_out <- tt_out[order(tt_out)]
list(tt = tt_out,
Y = Y_out)
}
#' Compound annual growth rate
#'
#' Given a variable `y` indexed by a time vector `x`, calculates the compound annual growth rate. Note that CAGR assumes
#' that the values of `x` refer to years. Also, it is calculated using only the first and latest observed values.
#'
#' @param y A numeric vector
#' @param x A numeric vector of the same length as `y`, indexing `y` in time. No `NA` values are allowed
#' in `x`. This vector is assumed to be years, otherwise the result must be interpreted differently.
#'
#' @return A scalar value (CAGR)
#' @export
#'
#' @examples
#' # random points over 10 years
#' x <- 2011:2020
#' y <- runif(10)
#'
#' CAGR(y, x)
#'
CAGR <- function(y, x){
# checks
stopifnot(is.numeric(y),
is.numeric(x),
length(x) == length(y))
if(any(is.na(x))){
stop("x contains NAs - this is not allowed (each y value should be indexed by a time point in x)", call. = FALSE)
}
if(sum(!is.na(y)) < 2){
return(NA)
}
# deal with NAs
xy <- data.frame(x, y)
xy <- na.omit(xy)
# calc CAGR
# order first
xy <- xy[order(xy$x), ]
# index of latest obs
ilat <- nrow(xy)
if(xy$y[ilat] == xy$y[1]){
# this covers when start and end value are both zero, which would otherwise return NaN
out1 <- 0
} else {
out1 <- (xy$y[ilat] / xy$y[1])^(1 / (xy$x[ilat] - xy$x[1])) - 1
}
out1
}
#' Percentage change of time series
#'
#' Calculates the percentage change in a time series from the initial value. The time series is defined by
#' `y` the response variable, indexed by `x`, the time variable. The `per` argument can optionally be used
#' to scale the result according to a period of time. E.g. if the units of `x` are years, setting `x = 10`
#' will measure the percentage change per decade.
#'
#' This function operates in two ways, depending on the number of data points. If `x` and `y` have two non-`NA`
#' observations, percentage change is calculated using the first and last values. If three or more points are
#' available, a linear regression is used to estimate the average percentage change. If fewer than two points
#' are available, the percentage change cannot be estimated and `NA` is returned.
#'
#' If all `y` values are equal, it will return a change of zero.
#'
#' @param y A numeric vector
#' @param x A numeric vector of the same length as `y`, indexing `y` in time. No `NA` values are allowed
#' in `x`.
#' @param per Numeric value to scale the change according to a period of time. See description.
#'
#' @return Percentage change as a scalar value.
#'
#' @export
#'
#' @examples
#' # a time vector
#' x <- 2011:2020
#'
#' # some random points
#' y <- runif(10)
#'
#' # find percentage change per decade
#' prc_change(y, x, 10)
prc_change <- function(y, x, per = 1){
# checks
stopifnot(is.numeric(y),
is.numeric(x),
length(x) == length(y))
if(any(is.na(x))){
stop("x contains NAs - this is not allowed (each y value should be indexed by a time point in x)", call. = FALSE)
}
if(sum(!is.na(y)) < 2){
return(NA)
}
# deal with NAs
xy <- data.frame(x, y)
xy <- na.omit(xy)
# Calc prc change ---------------------------------------------------------
# first check if all values in y are the same. If this is sent to lm() we get
# NaN but it is more sensible to return 0.
if(length(unique(xy$y)) == 1){
return(0)
}
# order first
xy <- xy[order(xy$x), ]
if(nrow(xy) < 3){
# index of latest obs
ilat <- nrow(xy)
# prc change based on first and last vals
prc <- ((xy$y[ilat] - xy$y[1])*100/xy$y[1])*(per/(xy$x[ilat] - xy$x[1]))
} else {
# if we have 3 or more points we perform a regression
lm1 <- stats::lm(xy$y ~ xy$x)
# we get the first and last values
# prc change based on first and last vals
#((xy$y[ilat] - xy$y[1])*100/xy$y[1])*(10/(xy$x[ilat] - xy$x[1]))
# get coeffs
icpt <- as.numeric(lm1$coefficients[1])
slop <- as.numeric(lm1$coefficients[2])
# calculate prc change per decade
prc <- (slop * per)/(icpt + slop * xy$x[1])*100
}
prc
}
/scratch/gouwar.j/cran-all/cranData/COINr/R/trends.R
# GENERAL UTILITY FUNCTIONS
# NONE OF THESE FUNCTIONS ARE EXPORTED
# Not in operator
#
# For convenience, rather than always writing `!(x %in% y)`
#
# @param x A scalar or vector
# @param y A scalar or vector
#
# @return TRUE if x is not in y, FALSE otherwise
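#
# @examples
# # quick illustration:
# 2 %nin% c(1, 3)    # TRUE
# c(1, 2) %nin% c(2, 3)    # TRUE FALSE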
'%nin%' <- function(x,y){
!('%in%'(x,y))
}
# rbind two lists with different names into a data frame
#
# Performs an `rbind()` operation on two named lists or vectors that do not need to share the same names, but
# will match the names and fill any missing cols with `NA`s.
#
# @param x1 A named list or named vector
# @param x2 Another named list or named vector
#
# @examples
# #
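# # quick illustration: columns are matched by name and any missing entries filled with NA
# rbind_fill(c(a = 1, b = 2), c(b = 3, c = 4))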
#
# @return Data frame
rbind_fill <- function(x1, x2){
if(is.null(names(x1)) || is.null(names(x2))){
stop("names of x1 or x2 is NULL")
}
# make to dfs
x1 <- as.data.frame(as.list(x1))
x2 <- as.data.frame(as.list(x2))
# fill with NAs
x1[setdiff(names(x2), names(x1))] <- NA
x2[setdiff(names(x1), names(x2))] <- NA
rbind(x1, x2)
}
# Remove empty components from list
#
# Short cut for removing any empty components of a list
#
# @param l A list
#
# @examples
# #
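# # quick illustration: drops the empty (NULL) component
# tidy_list(list(a = 1, b = NULL, c = 2:3))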
#
# @return List with empty bits removed
tidy_list <- function(l){
l[lengths(l) > 0]
}
# Check availability of function
#
# Checks if a function is available, and returns an error if not.
#
# @param f_name A string to use to check whether a function exists with that name.
#
# @return Nothing or error
check_fname <- function(f_name){
if(!(exists(f_name, mode = "function"))){
stop("function '", f_name, "' not found. must be an accessible function.")
}
}
# Set default arg
#
# A shortcut
#
# @param x The argument
# @param x_default The default to set
#
# @return the parameter
set_default <- function(x, x_default){
if(is.null(x)){
x_default
} else {
x
}
}
# Data frame or matrix to long form
#
# This is a substitute function for tidyr's 'pivot_longer' to avoid dependencies, and behaves in more or
# less the same way.
#
# If `cols` is not specified, assumes a square correlation matrix to convert to long form. If `cols` is
# specified, this behaves like pivot_longer's "cols" argument.
#
# @param X A data frame or square correlation matrix
# @param cols Columns to pivot into longer format.
#
# @importFrom utils stack
#
# @return A long format data frame
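#
# @examples
# # quick illustration: pivot the u1/u2 columns into long format, giving a
# # data frame with columns iCode, name and Value
# lengthen(data.frame(iCode = c("a", "b"), u1 = 1:2, u2 = 3:4), cols = c("u1", "u2"))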
lengthen <- function(X, cols = NULL){
# make df
X <- as.data.frame(X)
if(!is.null(cols)){
stopifnot(all(cols %in% names(X)))
X_ <- X[cols]
X <- X[names(X) %nin% cols]
X$V_to_pivot <- rownames(X)
} else {
X_ <- X
}
# stack and add names
X1 <- cbind(utils::stack(X_), rownames(X_))
names(X1) <- c("Value", "V2", "V1")
X1$V2 <- as.character(X1$V2)
X1 <- rev(X1)
if(!is.null(cols)){
X1 <- merge(X, X1, by.x = "V_to_pivot", by.y = "V1", all = TRUE)
X1 <- X1[names(X1) != "V_to_pivot"]
names(X1)[names(X1) == "V2"] <- "name"
}
X1
}
# Make long df wide
#
# This is a quick function for making a long-format data frame wide. It is limited in scope, assumes
# that the input is a data frame with three columns: one of which is numeric, and the other two are
# character vectors. The numeric column will be widened, and the other two columns will be used
# for row and column names.
#
# @param X a long format data frame
#
# @importFrom utils unstack
#
# @return A wide format data frame
widen <- function(X){
stopifnot(ncol(X) == 3)
# make df
X <- as.data.frame(X)
# find numeric col
num_cols <- sapply(X, is.numeric)
if(sum(num_cols) > 1){
stop("More than one numeric column found")
}
if(sum(num_cols) == 0){
stop("No numeric columns found.")
}
# rearrange to get numeric col first
X <- X[c(which(num_cols), which(!num_cols))]
# order
X <- X[order(X[[3]], X[[2]]),]
# unstack and add row names
Xw <- utils::unstack(X[1:2])
row.names(Xw) <- unique(X[[3]])
Xw
}
#' Convert iCodes to iNames
#'
#' @param coin A coin
#' @param iCodes A vector of iCodes
#'
#' @return Vector of iNames
#' @export
icodes_to_inames <- function(coin, iCodes){
stopifnot(is.coin(coin))
iMeta <- coin$Meta$Ind
stopifnot(all(iCodes %in% iMeta$iCode))
iMeta$iName[match(iCodes, iMeta$iCode)]
}
#' Convert uCodes to uNames
#'
#' @param coin A coin
#' @param uCodes A vector of uCodes
#'
#' @return Vector of uNames
#' @export
ucodes_to_unames <- function(coin, uCodes){
stopifnot(is.coin(coin))
uMeta <- coin$Meta$Unit
stopifnot(all(uCodes %in% uMeta$uCode))
uMeta$uName[match(uCodes, uMeta$uCode)]
}
# Splits data frame into numeric and non-numeric columns
#
# @param x A data frame with numeric and non-numeric columns.
#
# @return A list with `.$not_numeric` containing a data frame with non-numeric columns, and `.$numeric` being
# a data frame containing only numeric columns.
#
# @examples
# #
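# # quick illustration: separates the character and numeric columns
# split_by_numeric(data.frame(uCode = c("A", "B"), x = c(1.5, 2.5)))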
split_by_numeric <- function(x){
stopifnot(is.data.frame(x))
# numeric cols
numeric_cols <- sapply(x, is.numeric)
if(sum(numeric_cols) == 0){
stop("No numeric cols found in the data frame.")
}
list(not_numeric = x[!numeric_cols],
numeric = x[numeric_cols])
}
# this function adjusts an iData dataset by directions, this is for use e.g.
# in correlation plotting.
# Just works with in-coin directions at the moment.
# iData can have non-numeric columns like uCode, uName etc, but any numeric
# cols will be required to have a corresponding direction entry in iMeta.
directionalise <- function(iData, coin){
imeta <- coin$Meta$Ind[coin$Meta$Ind$Type == "Indicator", ]
df_out <- lapply(names(iData), function(iCode){
x <- iData[[iCode]]
if(is.numeric(x)){
if(iCode %nin% imeta$iCode){
stop("Name of numeric column in iData does not have an entry in iMeta found in coin. Column: ", iCode)
}
iData[iCode]*imeta$Direction[imeta$iCode == iCode]
} else {
x
}
})
df_out <- as.data.frame(df_out)
stopifnot(identical(names(df_out), names(iData)))
df_out
}
# X is a df
# cols specifies the names of TWO columns in X
# from which to remove duplicate pairs
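#
# e.g. rows ("a", "b") and ("b", "a") count as the same pair, so only the first is kept:
# remove_duplicate_corrs(data.frame(v1 = c("a", "b"), v2 = c("b", "a"), r = c(0.5, 0.5)),
#                        cols = c("v1", "v2"))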
remove_duplicate_corrs <- function(X, cols){
X1 = X[,cols]
duplicated_rows <- duplicated(t(apply(X1, 1, sort)))
X[!duplicated_rows, ]
}
# convert integer columns to numeric (intended for iData)
df_int_2_numeric <- function(X){
# convert integer cols to numeric (iData)
rnames <- row.names(X)
col_names <- names(X)
X <- lapply(col_names, function(col_name){
x <- X[[col_name]]
if(is.integer(x)){
message("iData column '", col_name, "' converted from integer to numeric.")
as.numeric(x)
} else x
}) |> as.data.frame()
row.names(X) <- rnames
names(X) <- col_names
X
}
/scratch/gouwar.j/cran-all/cranData/COINr/R/utils.R
#' Get effective weights
#'
#' Calculates the "effective weight" of each indicator and aggregate at the index level. The effective weight is calculated
#' as the final weight of each component in the index, and this is due not just to its own weight, but also to the weights of
#' each aggregation that it is involved in, plus the number of indicators/aggregates in each group. The effective weight
#' is one way of understanding the final contribution of each indicator to the index. See also `vignette("weights")`.
#'
#' This function replaces the now-defunct `effectiveWeight()` from COINr < v1.0.
#'
#' @param coin A coin class object
#' @param out2 Either `"coin"` or `"df"`
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
#'
#' # get effective weights as data frame
#' w_eff <- get_eff_weights(coin, out2 = "df")
#'
#' head(w_eff)
#'
#' @return Either an iMeta data frame with effective weights as an added column, or an updated coin with effective
#' weights added to `.$Meta$Ind`.
#' @export
get_eff_weights <- function(coin, out2 = "df"){
# PREP --------------------------------------------------------------------
check_coin_input(coin)
stopifnot(out2 %in% c("df", "coin"))
# EFF WEIGHTS -------------------------------------------------------------
# get indicator metadata
iMeta <- coin$Meta$Ind
# ditch NAs
iMeta <- iMeta[!is.na(iMeta$Level),]
# we need to rescale weights to sum to the weight of the parent
# needs to be done by working from highest level downwards
maxlev <- coin$Meta$maxlev
# index by parent, but highest level has parent = NA which breaks tapply a bit.
# so assign a fake category just for this operation
iMeta$Parent[iMeta$Level == maxlev] <- "none"
# first make all weights sum to 1 inside group
iMeta_sp <- split(iMeta, iMeta$Parent)
iMeta_sp <- lapply(iMeta_sp, function(x){
x$EffWeight <- x$Weight/sum(x$Weight)
x
})
iMeta <- unsplit(iMeta_sp, iMeta$Parent)
# now we have to work from highest level downwards to make weights sum to parent weight
# This is done by multiplying weights with parent weights.
for(lev in (maxlev-1):1){
# get codes in lev
codes <- iMeta$iCode[iMeta$Level == lev]
# get weights of codes
idx <- match(codes, iMeta$iCode)
w_lev <- iMeta$EffWeight[idx]
# get codes of parents
codes_p <- iMeta$Parent[idx]
# get weights of parents
w_par <- iMeta$EffWeight[match(codes_p, iMeta$iCode)]
# multiply
iMeta$EffWeight[idx] <- w_lev*w_par
}
# OUTPUT ------------------------------------------------------------------
if(out2 == "df"){
iMeta[c("iCode", "Level", "Weight", "EffWeight")]
} else if(out2 == "coin"){
coin$Meta$Ind$EffWeight <- iMeta$EffWeight[match(coin$Meta$Ind$iCode, iMeta$iCode )]
coin
}
}
#' Weight optimisation
#'
#' This function provides optimised weights to agree with a pre-specified vector of "target importances".
#'
#' This is a linear version of the weight optimisation proposed in this paper: \doi{10.1016/j.ecolind.2017.03.056}.
#' Weights are optimised to agree with a pre-specified vector of "importances". The optimised weights are returned back to the coin.
#'
#' See `vignette("weights")` for more details on the usage of this function and an explanation of the underlying
#' method. Note that this function calculates correlations without considering statistical significance.
#'
#' This function replaces the now-defunct `weightOpt()` from COINr < v1.0.
#'
#' @param coin coin object
#' @param itarg a vector of (relative) target importances. For example, `c(1,2,1)` would specify that the second
#' indicator should be twice as "important" as the other two.
#' @param Level The aggregation level to apply the weight adjustment to. This can only be one level.
#' @param cortype The type of correlation to use - can be either `"pearson"`, `"spearman"` or `"kendall"`. See [stats::cor].
#' @param optype The optimisation type. Either `"balance"`, which aims to balance correlations
#' according to a vector of "importances" specified by `itarg` (default), or `"infomax"` which aims to maximise
#' overall correlations.
#' @param toler Tolerance for convergence. Defaults to 0.1 (decrease for more accuracy, increase if convergence problems).
#' @param maxiter Maximum number of iterations. Default 500.
#' @param out2 Where to output the results. If `"coin"` (default for coin input), appends to updated coin,
#' creating a new list of weights in `.$Meta$Weights`. Otherwise if `"list"`, outputs to a list (default).
#' @param dset Name of the aggregated data set found in `coin$Data` which results from calling [Aggregate()].
#' @param weights_to Name to write the optimised weight set to, if `out2 = "coin"`.
#'
#' @importFrom stats optim
#'
#' @examples
#' # build example coin
#' coin <- build_example_coin(quietly = TRUE)
#'
#' # check correlations between level 3 and index
#' get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
#'
#' # optimise weights at level 3
#' l_opt <- get_opt_weights(coin, itarg = "equal", dset = "Aggregated",
#' Level = 3, weights_to = "OptLev3", out2 = "list")
#'
#' # view results
#' tail(l_opt$WeightsOpt)
#'
#' l_opt$CorrResultsNorm
#'
#' @return If `out2 = "coin"` returns an updated coin object with a new set of weights in `.$Meta$Weights`, plus
#' details of the optimisation in `.$Analysis`.
#' Else if `out2 = "list"` the same outputs (new weights plus details of optimisation) are wrapped in a list.
#'
#' @export
get_opt_weights <- function(coin, itarg = NULL, dset, Level, cortype = "pearson", optype = "balance",
toler = NULL, maxiter = NULL, weights_to = NULL, out2 = "list"){
# PREP --------------------------------------------------------------------
check_coin_input(coin)
stopifnot(optype %in% c("balance", "infomax"),
out2 %in% c("list", "coin"))
# number of weights at specified level
n_w <- sum(coin$Meta$Weights$Original$Level == Level)
if(optype == "infomax"){
itarg <- NULL
}
# if equal influence requested
if(!is.null(itarg)){
if(is.character(itarg)){
if(itarg == "equal"){
itarg <- rep(1, n_w)
} else {
stop("itarg not recognised - should be either numeric vector or \"equal\" ")
}
}
} else {
if(optype == "balance"){
stop("If optype = 'balance' you must specify itarg.")
}
}
if (optype == "balance"){
if(length(itarg) != n_w){
stop("itarg is not the correct length for the specified Level")
}
itarg <- itarg/sum(itarg)
}
# we need to define an objective function. The idea here is to make a function, which when you
# put in a set of weights, gives you the correlations
# get original weights
w0 <- coin$Meta$Weights$Original
# make a null object for storing correlations in - this will be updated by the function below.
crs_out <- NULL
objfunc <- function(w){
w <- w/sum(w)
wlist <- w0
# modify appropriate level to current vector of weights
wlist$Weight[wlist$Level == Level] <- w
# re-aggregate using these weights, get correlations
crs <- weights2corr(coin, dset = dset, w = wlist, Levels = c(Level, coin$Meta$maxlev),
cortype = cortype)$cr$Correlation
if (optype == "balance"){
# normalise so they sum to 1
crs_n <- crs/sum(crs)
# the objective is the log of the mean squared difference between target and normalised correlations
sqdiff <- log(sum((itarg - crs_n)^2)/length(crs_n))
} else if (optype == "infomax"){
# the output is the sum of the correlations *-1
sqdiff <- sum(crs)*-1
# assign to crs_n for export outside function
crs_n <- crs
}
# send outside function
crs_out <<- crs_n
message("iterating... objective function = ", sqdiff)
sqdiff
}
# defaults for tolerance and max iterations
if(is.null(toler)){toler <- 0.1}
if(is.null(maxiter)){maxiter <- 500}
# get initial values
if(optype == "balance"){
init_vals <- itarg
} else {
init_vals <- w0$Weight[w0$Level == Level]
}
# run optimisation
optOut <- stats::optim(par = init_vals, fn = objfunc, control = list(
reltol = toler,
maxit = maxiter
))
if(optOut$convergence == 0){
message("Optimisation successful!")
} else {
message("Optimisation did not converge - you can try increasing the number of iterations and/or the tolerance.")
}
# get optimised weights at level
wopt <- optOut$par
# normalise to sum to 1
wopt <- wopt/sum(wopt)
# get full list of weights
wopt_full <- w0
# modify appropriate level to optimised vector of weights
wopt_full$Weight[wopt_full$Level == Level] <- wopt
# results df
desired <- if(is.null(itarg)){NA}else{itarg}
df_res <- data.frame(
Desired = desired,
Obtained = crs_out,
OptWeight = wopt
)
if (out2 == "coin"){
if(is.null(weights_to)){
weights_to <- paste0("OptimisedLev", Level)
}
coin$Meta$Weights[[weights_to]] <- wopt_full
message("Optimised weights written to .$Meta$Weights$", weights_to)
coin$Analysis$Weights[[weights_to]] <- list(OptResults = optOut,
CorrResultsNorm = df_res)
coin
} else {
list(WeightsOpt = wopt_full,
OptResults = optOut,
CorrResultsNorm = df_res)
}
}
# Recalculate correlations and ranks based on new weights
#
# This is a short cut function which takes a new set of indicator weights, and recalculates the coin results
# based on these weights. It returns a summary of rankings and the correlations between indicators and index.
#
# @param coin coin object
# @param w Full data frame of weights for each level
# @param dset Name of the data set that is created when [Aggregate()] is called. This is used to calculated correlations
# and to extract the results table. Default `"Aggregated"`.
# @param Levels A 2-length vector with two aggregation levels to correlate against each other
# @param cortype Correlation type. Either `"pearson"` (default), `"kendall"` or `"spearman"`. See [stats::cor].
# @param withparent Logical: if `TRUE`, only correlates with the parent, e.g. sub-pillars are only correlated with
# their parent pillars and not others.
#
# @return A list where `.$cr` is a vector of correlations between each indicator and the index, and
# `.$dat` is a data frame of results
#
# @examples
# #
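# # sketch (not run), assuming the fully-built example coin:
# # coin <- build_example_coin(quietly = TRUE)
# # l <- weights2corr(coin, w = coin$Meta$Weights$Original)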
weights2corr <- function(coin, w, dset = "Aggregated", Levels = NULL,
cortype = "pearson", withparent = TRUE){
# PREP --------------------------------------------------------------------
# note, many checks are done inside lower-level functions called here
# note2: I have removed the iCodes argument. Not sure that it was ever used.
if(is.null(Levels)){
Levels <- c(1, coin$Meta$maxlev)
}
if(is.null(coin$Log$Aggregate)){
stop("You have not yet aggregated your data. This needs to be done first.")
}
# GET CORR, RES -----------------------------------------------------------
# update weights
coin$Log$Aggregate$w <- w
# regenerate
coin2 <- Regen(coin, from = "Aggregate", quietly = TRUE)
# get correlations
crtable <- get_corr(coin2, dset = dset, Levels = Levels,
cortype = cortype, withparent = withparent, pval = 0)
# get results
dfres <- get_results(coin2, dset = dset, tab_type = "Aggs")
# OUTPUT ------------------------------------------------------------------
# output list
# NOTE: missing iCodes1 entry - not sure if this is used for rew8r and need to understand the context.
list(cr = crtable,
dat = dfres)
}
/scratch/gouwar.j/cran-all/cranData/COINr/R/weights.R
# WRITING TO COINS
# Write function arguments to log
#
# This is used inside `build_*` functions. It takes the coin object as an input, then writes the arguments of the
# current function into the `.$Log` list of the coin object. This is then used as a record of the operations used
# to build the coin, and can be edited.
#
# @param coin A coin class object
# @param dont_write Any variables not to write to the coin
# @param write2log If `FALSE`, just passes the coin back without writing anything.
#
# @examples
# #
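# # sketch: only meaningful when called from inside a building function, e.g. near
# # the top of a builder method, something like:
# # coin <- write_log(coin, dont_write = "x")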
#
# @return Updated coin object with function arguments written to `.$Log`
write_log <- function(coin, dont_write = NULL, write2log = TRUE){
if(!write2log){
return(coin)
}
# get calling function name and its arguments
func_args <- as.list(sys.frame(-1))
func_name <- deparse(as.list(sys.call(-1))[[1]])
# check whether call is of type COINr::func_name, or just func_name
if(grepl("::", func_name)){
# split at ::, this returns a list (to deal with character vectors) - we only have one string so take first
xx <- strsplit(func_name, "::")[[1]]
if(xx[1] != "COINr"){
stop("Attempt to write log from non-COINr function!")
} else {
# take function name excluding COINr:: bit
func_name <- xx[2]
}
}
# tweak list first (exclude args we don't want)
dont_write <- c(dont_write, "coin", "*tmp*")
func_args <- func_args[!(names(func_args) %in% dont_write)]
# check that we are getting function arguments and nothing else
if(!all(names(func_args) %in% names(formals(func_name)))){
stop(paste0("Mismatch between function arguments of ", func_name, " and attempt to write to .$Log."))
}
# remove method .coin or similar from func_name
func_name2 <- unlist(strsplit(func_name, "\\.")[[1]])[1]
# make sure this is a builder function calling
builders <- c("Aggregate", "Denominate", "Impute", "new_coin", "Normalise", "qNormalise", "qTreat", "Screen", "Treat")
if(func_name2 %nin% builders){
stop("The calling function ", func_name2, " is not one of the functions allowed to write to log. Authorised functions are: ", toString(builders))
}
# write to coin
coin$Log[[func_name2]] <- func_args
coin
}
# Direct function outputs
#
# Shortcut to be used at end of functions, either attach to coin or output as list.
#
# @param coin A coin class object
# @param l The list to direct
# @param out2 Whether list or attach to coin
# @param lev1 Address at first lev of coin
# @param lev2 Address at second lev of coin
# @param lev3 Address at third lev of coin
#
# @examples
# #
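# # sketch: attach a results list l to coin$Analysis$Raw, or just return it as a list
# # write2coin(coin, l, out2 = "coin", lev1 = "Analysis", lev2 = "Raw")
# # write2coin(coin, l, out2 = "list", lev1 = "Analysis", lev2 = "Raw")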
#
# @return Either a list or an updated coin
write2coin <- function(coin, l, out2, lev1, lev2, lev3 = NULL){
check_coin_input(coin)
if(out2 %nin% c("list", "coin")){
stop("out2 not recognised, should be either 'coin' or 'list'")
}
if(out2 == "list"){
l
} else if (out2 == "coin"){
if(is.null(lev3)){
coin[[lev1]][[lev2]] <- l
} else {
coin[[lev1]][[lev2]][[lev3]] <- l
}
coin
}
}
# Write a named data set to coin
#
# Writes a data set to the coin, and performs some checks in the process.
#
# @param coin A coin class object
# @param x The data to write
# @param dset A character string for naming the data, e.g. `Raw`.
# @param quietly If `TRUE`, suppresses messages.
# @param ignore_class If `TRUE` ignores the class of the input (used for [new_coin()]).
#
# @examples
# #
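# # sketch, where x_norm is a (hypothetical) data frame of normalised indicator
# # data with a uCode column:
# # coin <- write_dset(coin, x_norm, dset = "Normalised")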
#
# @return Updated coin
write_dset <- function(coin, x, dset, quietly = FALSE, ignore_class = FALSE){
# checks
if(!ignore_class){
stopifnot(is.coin(coin))
}
stopifnot(is.character(dset),
length(dset)==1,
is.data.frame(x))
# further checks
if(is.null(x$uCode)){
stop("Required col uCode not found in data set to write to coin.")
}
icodes <- names(x)[names(x) != "uCode"]
not_numeric <- !(sapply(x[icodes], is.numeric))
if(any(not_numeric)){
stop("Non-numeric cols detected in data set to be written to coin (excluding uCode).")
}
if(any(icodes %nin% coin$Meta$Ind$iCode[coin$Meta$Ind$Type %in% c("Indicator", "Aggregate")])){
stop("Names of columns do not correspond to entries in .$Meta$Ind$iCode in data to write to coin.")
}
# flag if dset exists
dset_exists <- !is.null(coin$Data[[dset]])
# write to coin
coin$Data[[dset]] <- x
if(!quietly){
message("Written data set to .$Data$", dset)
if(dset_exists){
message("(overwritten existing data set)")
}
}
coin
}
/scratch/gouwar.j/cran-all/cranData/COINr/R/write_to_coins.R
# .onAttach <- function(...) {
# packageStartupMessage("COINr syntax has significantly changed. See vignette('v1') for details.
# This message will be removed in future updates.")
# }
/scratch/gouwar.j/cran-all/cranData/COINr/R/zzz.R
---
title: "Country report for `r params$COIN$Data$Aggregated$UnitName[params$COIN$Data$Aggregated$UnitCode == params$usel]`"
date: "`r Sys.Date()`"
output:
word_document
params:
COIN: NULL
usel: "AUT"
always_allow_html: yes
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, fig.width = 8)
iname <- params$COIN$Data$Aggregated$UnitName[params$COIN$Data$Aggregated$UnitCode == params$usel]
COIN <- params$COIN
usel <- params$usel
# generate big ranks and scores tables if not already present
if(is.null(COIN$Results$FullRanks)){
COIN <- getResults(COIN, tab_type = "Full", use = "ranks", out2 = "COIN")
}
if(is.null(COIN$Results$FullScores)){
COIN <- getResults(COIN, tab_type = "Full", use = "scores", out2 = "COIN")
}
rnks <- COIN$Results$FullRanks
scrs <- COIN$Results$FullScores
```
```{r barchart}
library(COINr)
# plot bar chart
iplotBar(COIN, dset = "Aggregated", isel = "Index", usel = usel, aglev = 4)
```
`r iname` is ranked `r rnks$Index[rnks$UnitCode == usel]` out of `r nrow(rnks)` in the overall index. At the pillar level, its scores are summarised as follows:
```{r summarytable, include=T}
# get summary table
knitr::kable(getUnitSummary(COIN, usel = usel, aglevs = c(4,3,2)))
```
```{r radarchart}
# plot radar chart
iplotRadar(COIN, dset = "Aggregated", usel = usel, aglev = 2, addstat = "median")
```
At the indicator level, the main strengths of `r iname`, as its top five highest ranking indicators, are as follows:
```{r top5}
SAW <- getStrengthNWeak(COIN, usel = usel, withcodes = FALSE)
knitr::kable(SAW$Strengths)
```
In terms of rankings, the weakest five indicators are the following:
```{r bott5}
knitr::kable(SAW$Weaknesses)
```
/scratch/gouwar.j/cran-all/cranData/COINr/inst/UnitReport/unit_report_source.Rmd
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# create new coin by calling new_coin()
coin <- new_coin(ASEM_iData, ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
# look in log
str(coin$Log, max.level = 2)
## -----------------------------------------------------------------------------
# normalise
coin <- Normalise(coin, dset = "Raw")
# view log
str(coin$Log, max.level = 2)
## -----------------------------------------------------------------------------
# regenerate the coin
coin <- Regen(coin, quietly = FALSE)
## -----------------------------------------------------------------------------
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# copy coin
coin2 <- coin
## -----------------------------------------------------------------------------
str(coin2$Log$Normalise)
## -----------------------------------------------------------------------------
# change to prank function (percentile ranks)
# we don't need to specify any additional parameters (f_n_para) here
coin2$Log$Normalise$global_specs <- list(f_n = "n_prank")
# regenerate
coin2 <- Regen(coin2)
## -----------------------------------------------------------------------------
# copy base coin
coin_remove <- coin
# remove two indicators and regenerate the coin
coin_remove <- change_ind(coin, drop = c("LPI", "Forest"), regen = TRUE)
coin_remove
## -----------------------------------------------------------------------------
# compare index, sort by absolute rank difference
compare_coins(coin, coin2, dset = "Aggregated", iCode = "Index",
sort_by = "Abs.diff", decreasing = TRUE)
## -----------------------------------------------------------------------------
# copy original coin
coin90 <- coin
# remove imputation entry completely (function will not be run)
coin90$Log$Impute <- NULL
# set data availability threshold to 90%
coin90$Log$Screen$dat_thresh <- 0.9
# we also need to tell Screen() to use the denominated dset now
coin90$Log$Screen$dset <- "Denominated"
# regenerate
coin90 <- Regen(coin90)
# summarise coin
coin90
## -----------------------------------------------------------------------------
# compare index, sort by absolute rank difference
compare_coins(coin, coin90, dset = "Aggregated", iCode = "Index",
sort_by = "Abs.diff", decreasing = TRUE)
## -----------------------------------------------------------------------------
compare_coins_multi(list(Nominal = coin, Prank = coin2, NoLPIFor = coin_remove,
Screen90 = coin90), dset = "Aggregated", iCode = "Index")
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/adjustments.R
---
title: "Adjustments and Comparisons"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Adjustments and Comparisons}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
One of the most powerful features of COINr is the possibility to copy, adjust and compare coins. A coin is a structured list that represents a composite indicator. Since it is an R object like any other, it can be copied and modified, and alternative versions can be easily compared. This generally requires four steps:
1. Make a copy of the coin
2. Adjust the coin
3. Regenerate the coin
4. Compare coins
These will be explained in the following sections.
# Regeneration
The first three points on the list above will be addressed here. We must begin by explaining the "Log" of a coin. In COINr, some functions are distinguished as "building functions". These functions start with a capital letter (with one exception), and have the following defining features:
1. When a building function is run, it creates a new data set in `.$Data`.
2. When a building function is run, it records its function arguments in `.$Log`.
Building functions are the following:
| Function | Description |
| ------------------ | --------------------------------------------------------------- |
| `new_coin()` | Initialise a coin object given indicator data and metadata |
| `Screen()` | Screen units based on data availability rules |
| `Denominate()` | Denominate/scale indicators by other indicators |
| `Impute()` | Impute missing data |
| `Treat()` | Treat outliers and skewed distributions |
| `Normalise()` | Normalise indicators onto a common scale |
| `Aggregate()` | Aggregate indicators using weighted mean |
Let's explain the concept of the "Log" now with an example. We will build the example coin manually, then look inside the coin's Log list:
```{r}
library(COINr)
# create new coin by calling new_coin()
coin <- new_coin(ASEM_iData, ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
# look in log
str(coin$Log, max.level = 2)
```
Looking in the log, we can see that it is a list with an entry "new_coin", which contains exactly the arguments that we passed to `new_coin()`: `iData`, `iMeta`, the level names, and two other arguments which are the default values of the function. There is also another logical variable called `can_regen` which is for internal use only.
This demonstrates that when we call a building function, its arguments are stored in the coin. To show another example, if we apply the `Normalise()` function:
```{r}
# normalise
coin <- Normalise(coin, dset = "Raw")
# view log
str(coin$Log, max.level = 2)
```
Now we additionally have a "Normalise" entry, with all the function arguments that we specified, plus defaults.
Now, the reason that building functions write to the log, is that it allows coins to be *regenerated*, which means automatically re-running the building functions that were used to create the coin and its data sets. This is done with a function called `Regen()`:
```{r}
# regenerate the coin
coin <- Regen(coin, quietly = FALSE)
```
When `Regen()` is called, it runs the building functions *in the order that they are found in the log*. This is an important point because if you iteratively re-run building functions, you might end up with an order that is not what you expect. You can check the log if you have any doubts (in any case you would probably encounter an error if the order is incorrect). Also, each building function can only be run once in a regeneration.
So why regenerate coins - aren't the results exactly the same? Yes, unless you modify something first. And this brings us to the copying and modifying points. Let us take an example: first, we'll build the full example coin, then we'll make a copy of our existing coin:
```{r}
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# copy coin
coin2 <- coin
```
At this point, the coins are identical. What if we want to test an alternative methodology, for example a different normalisation method? This can be done by editing the Log of the coin, then regenerating. Here, we will change the normalisation method to percentile ranks, and regenerate. To make this change it is necessary to target the right argument. Let's first see what is already in the Log for `Normalise()`:
```{r}
str(coin2$Log$Normalise)
```
At the moment, the normalisation is min-max onto the interval of 0 to 100. We will change this to the new function `n_prank()`:
```{r}
# change to prank function (percentile ranks)
# we don't need to specify any additional parameters (f_n_para) here
coin2$Log$Normalise$global_specs <- list(f_n = "n_prank")
# regenerate
coin2 <- Regen(coin2)
```
And that's it. In summary, we copied the coin, edited its log to a different normalisation methodology, and then regenerated the results. Now what remains is to compare the results, and this is dealt with in the next section.
Before that, let's consider what kind of things we can change in a coin. Anything in the Log can be changed, but of course it is up to you to change it to something valid. As long as you carefully follow the function help pages, this shouldn't be any more difficult than using the functions directly. You can also change anything else about the coin, including the input data, by targeting the log of `new_coin()`. Changing anything outside of the Log will not generally have an effect because the coin will be recreated by `new_coin()` during regeneration and this will be overwritten. The exception is if you use the `from` argument of `Regen()`: in this case the regeneration will only begin from the function name that you pass to it. This partial regeneration can also be useful to speed up computation time.
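As a minimal sketch of this (not evaluated here), a partial regeneration after editing only the normalisation entry of the Log might look like the following:

```{r, eval=FALSE}
# partial regeneration: re-run building functions from Normalise onwards,
# leaving everything earlier in the Log untouched
coin2 <- Regen(coin2, from = "Normalise")
```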
# Adding/removing indicators
One adjustment that may be of interest is to add or remove indicators. This needs to be done with care because removing an indicator requires that it is removed from both `iData` and `iMeta` when building the coin with `new_coin()`. It is not possible to remove indicators after the coin is assembled without completely regenerating the coin.
One way to add or remove indicators is to edit the `iData` and `iMeta` data frames by hand and then rebuild the coin. Another way is to regenerate the coin, but use the `exclude` argument of `new_coin()`.
A shortcut function, `change_ind()`, can also be used to quickly add or remove indicators from the framework, and regenerate the coin, all in one command.
```{r}
# copy base coin
coin_remove <- coin
# remove two indicators and regenerate the coin
coin_remove <- change_ind(coin, drop = c("LPI", "Forest"), regen = TRUE)
coin_remove
```
The `drop` argument is used to specify which indicators to remove. The `add` argument adds indicators, although any indicators specified by `add` must be available in the original `iData` and `iMeta` that were passed to `new_coin()`. This means that `add` can only be used if you have previously excluded some of the indicators.
In general, if you want to test the effect of different indicators, you should include all candidate indicators in `iData` and `iMeta` and use `exclude` from `new_coin()` and/or `change_ind()` to select subsets. The advantage of doing it this way is that different subsets can be tested as part of a sensitivity analysis, for example.
In fact, `change_ind()` simply edits the `exclude` argument of `new_coin()`, but is a quicker way of doing this. Moreover it is safer, because it performs a few checks on the indicator codes to be added or removed.
It is also possible to effectively remove indicators by setting weights to zero. This is similar to the above approach but not necessarily identical: weights only come into play at the aggregation step, which is usually the last operation. If you perform unit screening, or imputation, the presence of zero-weighted indicators could still influence the results, depending on the settings.
The effects of removing indicators and aggregates can also be tested using the `remove_elements()` function, which removes all indicators or aggregates in a specified level and calculates the impact.
# Comparison
Comparing coins is helped by two dedicated functions, `compare_coins()` and `compare_coins_multi()`. The former is for comparing two coins only, whereas the latter allows comparing more than two coins. Let's start by comparing the two coins we have: the default example coin, and the same coin but with a percentile rank normalisation method:
```{r}
# compare index, sort by absolute rank difference
compare_coins(coin, coin2, dset = "Aggregated", iCode = "Index",
sort_by = "Abs.diff", decreasing = TRUE)
```
This shows that for the overall index, the maximum rank change is 10 places for Portugal. We can compare ranks or scores, for any indicator or aggregate in the index. This also works if the number of units changes. At the moment, the coin has an imputation step which fills in all `NA`s. We could alternatively filter out any units with less than 90% data availability and remove the imputation step.
```{r}
# copy original coin
coin90 <- coin
# remove imputation entry completely (function will not be run)
coin90$Log$Impute <- NULL
# set data availability threshold to 90%
coin90$Log$Screen$dat_thresh <- 0.9
# we also need to tell Screen() to use the denominated dset now
coin90$Log$Screen$dset <- "Denominated"
# regenerate
coin90 <- Regen(coin90)
# summarise coin
coin90
```
We can see that we are down to 46 units after the screening step. Now let's compare with the original coin:
```{r}
# compare index, sort by absolute rank difference
compare_coins(coin, coin90, dset = "Aggregated", iCode = "Index",
sort_by = "Abs.diff", decreasing = TRUE)
```
The removed units are marked as `NA` in the second coin.
Finally, to demonstrate comparing multiple coins, we can call the `compare_coins_multi()` function:
```{r}
compare_coins_multi(list(Nominal = coin, Prank = coin2, NoLPIFor = coin_remove,
Screen90 = coin90), dset = "Aggregated", iCode = "Index")
```
This simply shows the ranks from each of the four coins side by side. We can also choose to compare scores, and to display rank changes or absolute rank changes. Obviously a requirement is that the coins must all have some common units, and must all have the specified `iCode` and `dset` available within.
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/adjustments.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup--------------------------------------------------------------------
library(COINr)
# build example up to normalised data set
coin <- build_example_coin(up_to = "Normalise")
## -----------------------------------------------------------------------------
# aggregate normalised data set
coin <- Aggregate(coin, dset = "Normalised")
## -----------------------------------------------------------------------------
dset_aggregated <- get_dset(coin, dset = "Aggregated")
nc <- ncol(dset_aggregated)
# view aggregated scores (last 11 columns here)
dset_aggregated[(nc - 10) : nc] |>
head(5) |>
signif(3)
## -----------------------------------------------------------------------------
coin <- Normalise(coin, dset = "Treated",
global_specs = list(f_n = "n_minmax",
f_n_para = list(l_u = c(1,100))))
## -----------------------------------------------------------------------------
coin <- Aggregate(coin, dset = "Normalised",
f_ag = "a_gmean")
## -----------------------------------------------------------------------------
ms_installed <- requireNamespace("matrixStats", quietly = TRUE)
ms_installed
ci_installed <- requireNamespace("Compind", quietly = TRUE)
ci_installed
## ---- eval=F------------------------------------------------------------------
# # RESTORE above eval=ms_installed
# # load matrixStats package
# library(matrixStats)
#
# # aggregate using weightedMedian()
# coin <- Aggregate(coin, dset = "Normalised",
# f_ag = "weightedMedian",
# f_ag_para = list(na.rm = TRUE))
## ---- eval= F-----------------------------------------------------------------
# # RESTORE ABOVE eval= ci_installed
# # load Compind
# suppressPackageStartupMessages(library(Compind))
#
# # wrapper to get output of interest from ci_bod
# # also suppress messages about missing values
# ci_bod2 <- function(x){
# suppressMessages(Compind::ci_bod(x)$ci_bod_est)
# }
#
# # aggregate
# coin <- Aggregate(coin, dset = "Normalised",
# f_ag = "ci_bod2", by_df = TRUE, w = "none")
## -----------------------------------------------------------------------------
# data with all NAs except 1 value
x <- c(NA, NA, NA, 1, NA)
mean(x)
mean(x, na.rm = TRUE)
## -----------------------------------------------------------------------------
df1 <- data.frame(
i1 = c(1, 2, 3),
i2 = c(3, NA, NA),
i3 = c(1, NA, 1)
)
df1
## -----------------------------------------------------------------------------
# aggregate with arithmetic mean, equal weight and data avail limit of 2/3
Aggregate(df1, f_ag = "a_amean",
f_ag_para = list(w = c(1,1,1)),
dat_thresh = 2/3)
## -----------------------------------------------------------------------------
coin <- Aggregate(coin, dset = "Normalised", f_ag = c("a_amean", "a_gmean", "a_amean"))
## -----------------------------------------------------------------------------
# get some indicator data - take a few columns from built in data set
X <- ASEM_iData[12:15]
# normalise to avoid zeros - min max between 1 and 100
X <- Normalise(X,
global_specs = list(f_n = "n_minmax",
f_n_para = list(l_u = c(1,100))))
# aggregate using harmonic mean, with some weights
y <- Aggregate(X, f_ag = "a_hmean", f_ag_para = list(w = c(1, 1, 2, 1)))
cbind(X, y) |>
head(5) |>
signif(3)
## -----------------------------------------------------------------------------
# build example purse up to normalised data set
purse <- build_example_purse(up_to = "Normalise", quietly = TRUE)
# aggregate using defaults
purse <- Aggregate(purse, dset = "Normalised")
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/aggregate.R
|
---
title: "Aggregation"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Aggregation}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette describes the process of aggregating indicators in COINr.
# Introduction
Aggregation is the operation of combining multiple indicators into one value. Many composite indicators have a hierarchical structure, so in practice this often involves multiple aggregations, for example aggregating groups of indicators into aggregate values, then aggregating those values into higher-level aggregates, and so on, until the final index value.
Aggregating should almost always be done on normalised data, unless the indicators are already on very similar scales. Otherwise the relative influence of indicators will be very uneven.
Of course you don't *have* to aggregate indicators at all, and you might be content with a scoreboard, or perhaps aggregating into several aggregate values rather than a single index. However, consider that aggregation should not substitute for the underlying indicator data, but complement it.
Overall, aggregating indicators is a form of information compression - you are trying to combine many indicator values into one, and inevitably information will be lost ([this](https://doi.org/10.1016/j.envsoft.2021.105208) recent paper may be of interest). As long as this is kept in mind, and indicator data is presented and made available alongside aggregate values, then aggregate (index) values can complement indicators and be used as a useful tool for summarising the underlying data, and identifying overall trends and patterns.
## Weighting
Many aggregation methods involve some kind of weighting, i.e. coefficients that define the relative weight of the indicators/aggregates in the aggregation. In order to aggregate, weights need to first be specified, but to effectively adjust weights it is necessary to aggregate.
This chicken and egg conundrum is best solved by aggregating initially with a trial set of weights, perhaps equal weights, then seeing the effects of the weighting, and making any weight adjustments necessary.
## Approaches
### Means
The most straightforward and widely-used approach to aggregation is the **weighted arithmetic mean**. Denoting the indicators as $x_i \in \{x_1, x_2, ... , x_d \}$, a weighted arithmetic mean is calculated as:
$$ y = \frac{1}{\sum_{i=1}^d w_i} \sum_{i=1}^d x_iw_i $$
where the $w_i$ are the weights corresponding to each $x_i$. Here, if the weights are chosen to sum to 1, it will simplify to the weighted sum of the indicators. In any case, the weighted mean is scaled by the sum of the weights, so weights operate relative to each other.
Clearly, if the index has more than two levels, then there will be multiple aggregations. For example, there may be three groups of indicators which give three separate aggregate scores. These aggregate scores would then be fed back into the weighted arithmetic mean above to calculate the overall index.
The arithmetic mean has "perfect compensability", which means that a high score in one indicator will perfectly compensate a low score in another. In a simple example with two indicators scaled between 0 and 10 and equal weighting, a unit with scores (0, 10) would be given the same score as a unit with scores (5, 5) -- both have a score of 5.
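As a quick numerical sketch of this, using COINr's built-in `a_amean()` function (introduced properly later in this vignette):

```{r, eval=FALSE}
# perfect compensability: both units receive the same score of 5
a_amean(c(0, 10), w = c(1, 1))
a_amean(c(5, 5), w = c(1, 1))
```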
An alternative is the **weighted geometric mean**, which uses the product of the indicators rather than the sum.
$$ y = \left( \prod_{i=1}^d x_i^{w_i} \right)^{1 / \sum_{i=1}^d w_i} $$
This is simply the product of each indicator raised to the power of its weight, all raised to the power of the inverse of the sum of the weights.
The geometric mean is less compensatory than the arithmetic mean -- low values in one indicator only partially substitute high values in others. For this reason, the geometric mean may sometimes be preferred when indicators represent "essentials". An example might be quality of life: a longer life expectancy perhaps should not compensate severe restrictions on personal freedoms.
A third type of mean, in fact the third of the so-called [Pythagorean means](https://en.wikipedia.org/wiki/Pythagorean_means) is the **weighted harmonic mean**. This uses the mean of the reciprocals of the indicators:
$$ y = \frac{\sum_{i=1}^d w_i}{\sum_{i=1}^d w_i/x_i} $$
The harmonic mean is the least compensatory of the three means, even less so than the geometric mean. It is often used for taking the mean of rates and ratios.
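To get a feel for the difference in compensability, here is a small sketch comparing the three means on the same pair of values, with equal weights (these are COINr's built-in aggregation functions, described in the next section):

```{r, eval=FALSE}
x <- c(1, 9)
w <- c(1, 1)
a_amean(x, w) # arithmetic mean: 5
a_gmean(x, w) # geometric mean: 3
a_hmean(x, w) # harmonic mean: 1.8
```

The lower the compensability, the more the aggregate score is pulled towards the weaker indicator.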
### Other methods
The *weighted median* is also a simple alternative candidate. It is defined by ordering indicator values, then picking the value which has half of the assigned weight above it, and half below it. For *ordered* indicators $x_1, x_2, ..., x_d$ and corresponding weights $w_1, w_2, ..., w_d$ (normalised to sum to 1), the weighted median is the indicator value $x_m$ that satisfies:
$$ \sum_{i=1}^{m-1} w_i \leq \frac{1}{2}, \quad \text{and} \quad \sum_{i=m+1}^{d} w_i \leq \frac{1}{2} $$
The median is known to be robust to outliers, and this may be of interest if the distribution of scores across indicators is skewed.
Another somewhat different approach to aggregation is to use the [Copeland method](https://en.wikipedia.org/wiki/Copeland%27s_method). This approach is based on pairwise comparisons between units and proceeds as follows. First, an *outranking matrix* is constructed, which is a square matrix with $N$ columns and $N$ rows, where $N$ is the number of units.
The element in the $p$th row and $q$th column of the matrix is calculated by summing all the indicator weights where unit $p$ has a higher value in those indicators than unit $q$. Similarly, the cell in the $q$th row and $p$th column (the cell opposite, on the other side of the diagonal) is calculated as the sum of the weights where unit $q$ has a higher value than unit $p$. If the indicator weights sum to one over all indicators, then these two scores will also sum to 1 by definition. The outranking matrix effectively summarises to what extent each unit scores better or worse than all other units, for all unit pairs.
The Copeland score for each unit is calculated by taking the sum of the row values in the outranking matrix. This can be seen as an average measure of to what extent that unit performs above other units.
Clearly, this can be applied at any level of aggregation and used hierarchically like the other aggregation methods presented here.
In some cases, one unit of a pair may score higher than the other in all indicators. This is called a *dominance pair*, and corresponds to an outranking matrix entry equal to one (with the opposite entry equal to zero).
The percentage of dominance pairs is an indication of robustness. Under dominance, there is no way methodological choices (weighting, normalisation, etc.) can affect the relative standing of the pair in the ranking. One will always be ranked higher than the other. The greater the number of dominance (or robust) pairs in a classification, the less sensitive country ranks will be to methodological assumptions. COINr can calculate the percentage of dominance pairs with an inbuilt function.
# Coins
We now turn to how data sets in a coin can be aggregated using the methods described previously. The function of interest is `Aggregate()`, which is a generic with methods for coins, purses and data frames. To demonstrate COINr's `Aggregate()` function on a coin, we begin by loading the package, and building the example coin, up to the normalised data set.
```{r setup}
library(COINr)
# build example up to normalised data set
coin <- build_example_coin(up_to = "Normalise")
```
Consider what is needed to aggregate the normalised data into its higher levels. We need:
* The data set to aggregate
* The structure of the index: which indicators belong to which groups, etc.
* Weights to assign to indicators
* Specifications for aggregation: an aggregation function (e.g. the weighted mean) and any other parameters to be passed to that function
All of these elements are already present in the coin, except the last. For the first point, we simply need to tell `Aggregate()` which data set to use (using the `dset` argument). The structure of the index was defined when building the coin in `new_coin()` (the `iMeta` argument). Weights were also attached to `iMeta`. Finally, the aggregation specifications are given in the arguments of `Aggregate()`. Let's begin with the simple case though: using the function defaults.
```{r}
# aggregate normalised data set
coin <- Aggregate(coin, dset = "Normalised")
```
By default, the aggregation function performs the following steps:
* Uses the weights that were attached to `iMeta`
* Aggregates hierarchically (with default method of weighted arithmetic mean), following the index structure specified in `iMeta` and using the data specified in `dset`
* Creates a new data set `.$Data$Aggregated`, which consists of the data in `dset`, plus extra columns with scores for each aggregation group, at each aggregation level.
Let's examine the new data set. The columns of each level are added successively, working from level 1 upwards, so the highest aggregation level (the index, here) will be the last column of the data frame.
```{r}
dset_aggregated <- get_dset(coin, dset = "Aggregated")
nc <- ncol(dset_aggregated)
# view aggregated scores (last 11 columns here)
dset_aggregated[(nc - 10) : nc] |>
head(5) |>
signif(3)
```
Here we see the level 2 aggregated scores created by aggregating each group of indicators (the first eight columns), followed by the two sub-indexes (level 3) created by aggregating the scores of level 2, and finally the Index (level 4), which is created by aggregating the "Conn" and "Sust" sub-indexes.
The format of this data frame is not hugely convenient for inspecting the results. To see a more user-friendly version, use the `get_results()` function.
## COINr aggregation functions
Let's now explore some of the options of the `Aggregate()` function. Like other coin-building functions in COINr, `Aggregate()` comes with a number of inbuilt options, but can also accept any function that is passed to it, as long as it satisfies some requirements. COINr's inbuilt aggregation functions begin with `a_`, and are:
* `a_amean()`: the weighted arithmetic mean
* `a_gmean()`: the weighted geometric mean
* `a_hmean()`: the weighted harmonic mean
* `a_copeland()`: the Copeland method (note: requires `by_df = TRUE`)
By default, the arithmetic mean is called but we can easily change this to the geometric mean, for example. However here we run into a problem: the geometric mean will fail if any values to aggregate are less than or equal to zero. So to use the geometric mean we have to re-do the normalisation step to avoid this. Luckily this is straightforward in COINr:
```{r}
coin <- Normalise(coin, dset = "Treated",
global_specs = list(f_n = "n_minmax",
f_n_para = list(l_u = c(1,100))))
```
Now, since the indicators are scaled between 1 and 100 (instead of 0 and 100 as previously), they can be aggregated with the geometric mean.
```{r}
coin <- Aggregate(coin, dset = "Normalised",
f_ag = "a_gmean")
```
## External functions
All of the four aggregation functions mentioned above have the same format (try e.g. `?a_gmean`), and are built into the COINr package. But what if we want to use another type of aggregation function? The process is exactly the same.
In this section we use some functions from other packages: the matrixStats package and the Compind package. These are not imported by COINr, so the code here will only work if you have them installed. Since this vignette may have been built on your computer, we first have to check whether these packages are installed:
```{r}
ms_installed <- requireNamespace("matrixStats", quietly = TRUE)
ms_installed
ci_installed <- requireNamespace("Compind", quietly = TRUE)
ci_installed
```
If either of these has returned `FALSE`, you will see some blanks in the following code chunks. See the online version of this vignette to see the results, or install the above packages and rebuild the vignettes.
Now for an example, we can use the `weightedMedian()` function from the matrixStats package. This has a number of arguments, but the ones we will use are `x` and `w` (with the same meanings as COINr functions), and `na.rm` which we need to set to `TRUE`.
```{r, eval=F}
# RESTORE above eval=ms_installed
# load matrixStats package
library(matrixStats)
# aggregate using weightedMedian()
coin <- Aggregate(coin, dset = "Normalised",
f_ag = "weightedMedian",
f_ag_para = list(na.rm = TRUE))
```
The weights `w` do not need to be specified in `f_ag_para` because they are automatically passed to `f_ag` unless specified otherwise.
The general requirements for `f_ag` functions passed to `Aggregate()` are that:
1. The input to the function is a numeric vector `x`, possibly with missing values
2. The function returns a single (scalar) aggregated value
3. If the function accepts a vector of weights, this vector (of the same length as `x`) is passed as function argument `w`. If the function doesn't accept a vector of weights, we can set `w = "none"` in the arguments to `Aggregate()`, and it will not try to pass `w`.
4. Any other arguments to `f_ag`, apart from `x` and `w`, should be included in the named list `f_ag_para`.
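To make these requirements concrete, here is a sketch of a (hypothetical) custom function that could be passed to `Aggregate()` - it takes a vector `x` and a weight vector `w` (both passed automatically by `Aggregate()`), and returns the weighted mean of the two highest values in each aggregation group:

```{r, eval=FALSE}
# hypothetical custom aggregation function: weighted mean of the two
# highest values in the aggregation group
top2_mean <- function(x, w){
  keep <- utils::head(order(x, decreasing = TRUE), 2)
  stats::weighted.mean(x[keep], w[keep], na.rm = TRUE)
}

# pass the function by name, as with the inbuilt functions
coin <- Aggregate(coin, dset = "Normalised", f_ag = "top2_mean")
```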
Sometimes this may mean that we have to create a wrapper function to satisfy these requirements. For example, the 'Compind' package has a number of sophisticated aggregation approaches. The "benefit of the doubt" uses data envelopment analysis to aggregate indicators, however the function `Compind::ci_bod()` outputs a list. We can make a wrapper function to use this inside COINr:
```{r, eval= F}
# RESTORE ABOVE eval= ci_installed
# load Compind
suppressPackageStartupMessages(library(Compind))
# wrapper to get output of interest from ci_bod
# also suppress messages about missing values
ci_bod2 <- function(x){
suppressMessages(Compind::ci_bod(x)$ci_bod_est)
}
# aggregate
coin <- Aggregate(coin, dset = "Normalised",
f_ag = "ci_bod2", by_df = TRUE, w = "none")
```
The benefit of the doubt approach automatically assigns individual weights to each unit, so we need to specify `w = "none"` to stop `Aggregate()` from attempting to pass weights to the function. Importantly, we also need to specify `by_df = TRUE` which tells `Aggregate()` to pass a data frame to `f_ag` rather than a vector.
## Data availability limits
Many aggregation functions will return an aggregated value as long as at least one of the values passed to it is non-`NA`. For example, R's `mean()` function:
```{r}
# data with all NAs except 1 value
x <- c(NA, NA, NA, 1, NA)
mean(x)
mean(x, na.rm = TRUE)
```
Depending on how we set `na.rm`, we either get an answer or `NA`, and this is the same for many other aggregation functions (e.g. the ones built into COINr). Sometimes we might want a bit more control. For example, if we have five indicators in a group, it might only be reasonable to give an aggregated score if, say, at least three out of five indicators have non-`NA` values.
The `Aggregate()` function has the option to specify a data availability limit when aggregating. We simply set `dat_thresh` to a value between 0 and 1, and for each aggregation group, any unit that has a data availability lower than `dat_thresh` will get a `NA` value instead of an aggregated score. This is most easily illustrated on a data frame (see next section for more details on aggregating in data frames):
```{r}
df1 <- data.frame(
i1 = c(1, 2, 3),
i2 = c(3, NA, NA),
i3 = c(1, NA, 1)
)
df1
```
We will require that at least 2/3 of the indicators should be non-`NA` to give an aggregated value.
```{r}
# aggregate with arithmetic mean, equal weight and data avail limit of 2/3
Aggregate(df1, f_ag = "a_amean",
f_ag_para = list(w = c(1,1,1)),
dat_thresh = 2/3)
```
Here we see that the second row is aggregated to give `NA` because it only has 1/3 data availability.
## By level
We can also use a different aggregation function for each aggregation level by specifying `f_ag` as a vector of function names rather than a single function.
```{r}
coin <- Aggregate(coin, dset = "Normalised", f_ag = c("a_amean", "a_gmean", "a_amean"))
```
In this example, there are four levels in the index, which means there are three aggregation operations to be performed: from Level 1 to Level 2, from Level 2 to Level 3, and from Level 3 to Level 4. This means that the `f_ag` vector must have `n-1` entries, where `n` is the number of aggregation levels. The functions are run in the order of aggregation.
In the same way, if parameters need to be passed to the functions specified by `f_ag`, `f_ag_para` can be specified as a list of length `n-1`, where each element is a list of parameters.
# Data frames
The `Aggregate()` function also works in the same way on data frames. This is probably more useful when aggregation functions take vectors as inputs, rather than data frames, since it would otherwise be easier to go directly to the underlying function. In any case, here are a couple of examples. First, using a built in COINr function to compute the weighted harmonic mean of a data frame.
```{r}
# get some indicator data - take a few columns from built in data set
X <- ASEM_iData[12:15]
# normalise to avoid zeros - min max between 1 and 100
X <- Normalise(X,
global_specs = list(f_n = "n_minmax",
f_n_para = list(l_u = c(1,100))))
# aggregate using harmonic mean, with some weights
y <- Aggregate(X, f_ag = "a_hmean", f_ag_para = list(w = c(1, 1, 2, 1)))
cbind(X, y) |>
head(5) |>
signif(3)
```
# Purses
The purse method for `Aggregate()` is straightforward and simply applies the same aggregation specifications to each of the coins within. It has exactly the same parameters as the coin method.
```{r}
# build example purse up to normalised data set
purse <- build_example_purse(up_to = "Normalise", quietly = TRUE)
# aggregate using defaults
purse <- Aggregate(purse, dset = "Normalised")
```
# What next?
After aggregating indicators, it is likely that you will want to begin viewing and exploring the results. See the vignette on [Exploring results](results.html) for more details.
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/aggregate.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
# load COINr
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# get table of indicator statistics for raw data set
stat_table <- get_stats(coin, dset = "Raw", out2 = "df")
## -----------------------------------------------------------------------------
head(stat_table[1:5], 5)
## -----------------------------------------------------------------------------
head(stat_table[6:10], 5)
## -----------------------------------------------------------------------------
head(stat_table[11:15], 5)
## -----------------------------------------------------------------------------
head(stat_table[16:ncol(stat_table)], 5)
## -----------------------------------------------------------------------------
l_dat <- get_data_avail(coin, dset = "Raw", out2 = "list")
str(l_dat, max.level = 1)
## -----------------------------------------------------------------------------
head(l_dat$Summary, 5)
## -----------------------------------------------------------------------------
head(l_dat$ByGroup[1:5], 5)
## -----------------------------------------------------------------------------
coin <- build_example_coin(quietly = TRUE)
## -----------------------------------------------------------------------------
# get correlations
cmat <- get_corr(coin, dset = "Raw", iCodes = list("Environ"), Levels = 1)
# examine first few rows
head(cmat)
## -----------------------------------------------------------------------------
# get correlations
cmat <- get_corr(coin, dset = "Raw", iCodes = list("Environ"), Levels = 1, make_long = FALSE)
# examine first few rows
round_df(head(cmat), 2)
## -----------------------------------------------------------------------------
get_corr_flags(coin, dset = "Normalised", cor_thresh = 0.75,
thresh_type = "high", grouplev = 2)
## -----------------------------------------------------------------------------
get_corr_flags(coin, dset = "Normalised", cor_thresh = -0.5,
thresh_type = "low", grouplev = 2)
## -----------------------------------------------------------------------------
get_denom_corr(coin, dset = "Raw", cor_thresh = 0.7)
## -----------------------------------------------------------------------------
get_cronbach(coin, dset = "Raw", iCodes = "P2P", Level = 1)
## -----------------------------------------------------------------------------
l_pca <- get_PCA(coin, dset = "Raw", iCodes = "Sust", out2 = "list")
## -----------------------------------------------------------------------------
str(l_pca, max.level = 1)
## -----------------------------------------------------------------------------
str(l_pca$PCAresults, max.level = 2)
## -----------------------------------------------------------------------------
# summarise PCA results for "Social" group
summary(l_pca$PCAresults$Social$PCAres)
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/analysis.R
|
---
title: "Analysis"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Analysis}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette describes the "analysis" features of COINr. By this, we mean functions that retrieve statistical measures from the coin in various ways. This excludes things like sensitivity analysis, which involves tinkering with the construction methodology.
In short, here we discuss obtaining indicator statistics, correlations, data availability, and some slightly more complex ideas such as Cronbach's alpha and principal component analysis.
# Indicator statistics
Indicator statistics can be obtained using the `get_stats()` function.
```{r}
# load COINr
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# get table of indicator statistics for raw data set
stat_table <- get_stats(coin, dset = "Raw", out2 = "df")
```
The resulting data frame has 18 columns, which is hard to display concisely here. Therefore we will look at the columns in groups of five.
```{r}
head(stat_table[1:5], 5)
```
Each row is one of the indicators from the targeted data set. Then columns are statistics, here obvious things like the minimum, maximum, mean and median.
```{r}
head(stat_table[6:10], 5)
```
In the first three columns here we find the standard deviation, skewness and kurtosis. The remaining two columns are "N.Avail", which is the number of non-`NA` data points, and "N.NonZero", the number of non-zero points. The latter can be of interest because some indicators may have a high proportion of zeroes, which can be problematic.
```{r}
head(stat_table[11:15], 5)
```
Here we have "N.Unique", which is the number of unique data points (i.e. excluding duplicate values). The following three columns are similar to previous columns, e.g. "Frc.Avail" is the fraction of data availability, as opposed to the number of available points (N.Avail). The final column, "Flag.Avail", is a logical flag: if the data availability ("Frc.Avail") is below the limit specified by the `t_avail` argument of `get_stats()`, it will be flagged as "LOW".
```{r}
head(stat_table[16:ncol(stat_table)], 5)
```
The first two of these final columns are analogous to "Flag.Avail" and have thresholds which are controlled by arguments to `get_stats()`. The final column is a basic test for outliers which is commonly used in composite indicators, for example in the [COIN Tool](https://knowledge4policy.ec.europa.eu/composite-indicators/coin-tool_en). This is the same process as used in `check_SkewKurt()`, which will flag "OUT" if the absolute skewness is greater than a set threshold (default 2) AND the kurtosis is greater than a threshold (default 3.5). In short, indicators that are flagged here could be considered for outlier treatment.
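As a sketch, the data availability threshold used for "Flag.Avail" can be adjusted via the `t_avail` argument (the other flag thresholds are controlled by analogous arguments - see `?get_stats`):

```{r, eval=FALSE}
# flag any indicator with less than 80% data availability
stat_table_80 <- get_stats(coin, dset = "Raw", out2 = "df", t_avail = 0.8)
```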
# Unit data availability
The same kind of analysis can be performed for units, rather than indicators. Here, the main thing of interest is data availability, which can be obtained through the `get_data_avail()` function.
```{r}
l_dat <- get_data_avail(coin, dset = "Raw", out2 = "list")
str(l_dat, max.level = 1)
```
Here we see the output is a list with two data frames. The first is a summary for each unit:
```{r}
head(l_dat$Summary, 5)
```
Each unit has its number of missing points, zero points, missing-or-zero points, as well as the percentage data availability and percentage non-zero. The "ByGroup" data frame gives data availability within aggregation groups:
```{r}
head(l_dat$ByGroup[1:5], 5)
```
Here we just view the first few columns to save space. The values are the fraction of indicator availability within each aggregation group.
# Correlations
Correlations can be obtained and viewed directly using the `plot_corr()` function which is explained in the [Visualisation](visualisation.html) vignette. Here, we explore the functions for obtaining correlation matrices, flags and p-values.
The most general-purpose function for obtaining correlations between indicators is `get_corr()` function (which is called by `plot_corr()`). This allows almost any set of indicators/aggregates to be correlated against almost any other set. We won't go over the full functionality here because this is covered in [Visualisation](visualisation.html) vignette. However to demonstrate a couple of examples we begin by building the full example coin up to the aggregated data set.
```{r}
coin <- build_example_coin(quietly = TRUE)
```
Now we can take some examples. First, to get the correlations between indicators within the Environmental group:
```{r}
# get correlations
cmat <- get_corr(coin, dset = "Raw", iCodes = list("Environ"), Levels = 1)
# examine first few rows
head(cmat)
```
Here we see that the default output of `get_corr()` is a long-format correlation table. If you want the wide format, set `make_long = FALSE`.
```{r}
# get correlations
cmat <- get_corr(coin, dset = "Raw", iCodes = list("Environ"), Levels = 1, make_long = FALSE)
# examine first few rows
round_df(head(cmat), 2)
```
This gives the more classical-looking correlation matrix, although the long format can sometimes be easier to work with for further processing. One further option that is worth mentioning is `pval`: by default, `get_corr()` will return as `NA` any correlations with a p-value greater than 0.05, indicating that these correlations are insignificant at this significance level. You can adjust this threshold by changing `pval`, or disable it completely by setting `pval = 0`.
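For example, a sketch of retrieving all correlations regardless of statistical significance:

```{r, eval=FALSE}
# return all correlations, including insignificant ones
cmat_all <- get_corr(coin, dset = "Raw", iCodes = list("Environ"),
                     Levels = 1, pval = 0)
```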
On the subject of p-values, COINr includes a `get_pvals()` function which can be used to get p-values of correlations between a supplied matrix or data frame. This cannot be used directly on a coin and is more of a helper function but may still be useful.
Two further functions are of interest regarding correlations. The first is `get_corr_flags()`. This is a useful function for finding correlations between indicators that exceed or fall below a given threshold, within aggregation groups:
```{r}
get_corr_flags(coin, dset = "Normalised", cor_thresh = 0.75,
thresh_type = "high", grouplev = 2)
```
Here we have identified any correlations above 0.75, from the "Normalised" data set, that are between indicators in the same group in level 2. Actually 0.75 is quite low for searching for "high correlations", but it is used as an example here because the example data set doesn't have any very high correlations.
By switching `thresh_type = "low"` we can similarly look for low/negative correlations:
```{r}
get_corr_flags(coin, dset = "Normalised", cor_thresh = -0.5,
thresh_type = "low", grouplev = 2)
```
Our example has some fairly significant negative correlations - all within the "Institutional" group, and all involving the Technical Barriers to Trade indicator.
A final function to mention is `get_denom_corr()`. This is related to the operation of denominating indicators (see the [Denomination](denomination.html) vignette), and identifies any indicators that are correlated (in absolute value) above a given threshold with any of the supplied denominators. This can help to identify *whether* to denominate an indicator and *with what* - i.e. if an indicator is strongly related to a denominator, it is likely dependent on it, which may be a reason to denominate.
```{r}
get_denom_corr(coin, dset = "Raw", cor_thresh = 0.7)
```
Using a threshold of 0.7, and examining the raw data set, we see that several indicators are strongly related to the denominators - a classic example being the export value of goods (Goods) being well correlated with GDP. Many of the indicator-denominator pairs flagged here are indeed used for denomination in the ASEM example, although the choice is also made for conceptual reasons.
# Multivariate tools
A first simple tool is to calculate Cronbach's alpha. This can be done with any group of indicators, either the full set, or else targeting specific groups.
```{r}
get_cronbach(coin, dset = "Raw", iCodes = "P2P", Level = 1)
```
This simply calculates Cronbach's alpha (a measure of statistical consistency) for the "P2P" group (People to People connectivity, in this case).
Another multivariate analysis tool is principal component analysis (PCA). Although, like correlation, this is built into base R, the `get_PCA()` function makes it easier to obtain PCA for groups of indicators, following the structure of the index.
```{r}
l_pca <- get_PCA(coin, dset = "Raw", iCodes = "Sust", out2 = "list")
```
The function can return its results either as a list, or appended to the coin if `out2 = "coin"`. Here the output is a list and we will explore its contents. First, note the warnings above due to missing data, which can be suppressed using `nowarnings = TRUE`. The output list looks like this:
```{r}
str(l_pca, max.level = 1)
```
I.e. we have a data frame of "PCA weights" and some PCA results. We ignore the weights for the moment and look closer at the PCA results:
```{r}
str(l_pca$PCAresults, max.level = 2)
```
By default, `get_PCA()` will run a separate PCA for each aggregation group within the specified level. In this case, it has run three: one for each of the "Environ", "Social" and "SusEcFin" groups. Each of these contains `wts`, a set of PCA weights for that group, `PCAres`, which is the direct output of `stats::prcomp()`, and `iCodes`, which is the corresponding vector of indicator codes for the group.
We can do some basic PCA analysis using base R's tools using the "PCAres" objects, e.g.:
```{r}
# summarise PCA results for "Social" group
summary(l_pca$PCAresults$Social$PCAres)
```
See `stats::prcomp()` and elsewhere for more resources on PCA in R.
Now turning to the weighting, `get_PCA()` also outputs a set of "PCA weights". These are output attached to the list as shown above, or if `out2 = "coin"`, will be written to the weights list inside the coin if `weights_to` is also specified. See the [Weights](weights.html) vignette for some more details on this. Note that PCA weights come with a number of caveats: please read the documentation for `get_PCA()` for a discussion on this.
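As a sketch, writing the PCA weights directly into the coin might look like this (here "PCAwts_Sust" is just an illustrative name for the new weight set):

```{r, eval=FALSE}
# run PCA and store the resulting weights in the coin's weights list
coin <- get_PCA(coin, dset = "Raw", iCodes = "Sust",
                out2 = "coin", weights_to = "PCAwts_Sust")
```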
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/analysis.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup--------------------------------------------------------------------
library(COINr)
## -----------------------------------------------------------------------------
head(ASEM_iData[1:20], 5)
## -----------------------------------------------------------------------------
check_iData(ASEM_iData)
## -----------------------------------------------------------------------------
head(ASEM_iMeta, 5)
## -----------------------------------------------------------------------------
ASEM_iMeta[ASEM_iMeta$Type == "Aggregate", ]
## -----------------------------------------------------------------------------
ASEM_iMeta[ASEM_iMeta$Type == "Group", ]
## -----------------------------------------------------------------------------
ASEM_iMeta[ASEM_iMeta$Type == "Denominator", ]
## -----------------------------------------------------------------------------
check_iMeta(ASEM_iMeta)
## -----------------------------------------------------------------------------
# build a new coin using example data
coin <- new_coin(iData = ASEM_iData,
iMeta = ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
## -----------------------------------------------------------------------------
coin
## -----------------------------------------------------------------------------
# first few cols and rows of Raw data set
data_raw <- get_dset(coin, "Raw")
head(data_raw[1:5], 5)
## -----------------------------------------------------------------------------
get_dset(coin, "Raw", also_get = c("uName", "Pop_group"))[1:5] |>
head(5)
## -----------------------------------------------------------------------------
# exclude two indicators
coin <- new_coin(iData = ASEM_iData,
iMeta = ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"),
exclude = c("LPI", "Flights"))
coin
## -----------------------------------------------------------------------------
ASEM <- build_example_coin(quietly = TRUE)
ASEM
## -----------------------------------------------------------------------------
# sample of 2018 observations
ASEM_iData_p[ASEM_iData_p$Time == 2018, 1:15] |>
head(5)
# sample of 2019 observations
ASEM_iData_p[ASEM_iData_p$Time == 2019, 1:15] |>
head(5)
## -----------------------------------------------------------------------------
# build purse from panel data
purse <- new_coin(iData = ASEM_iData_p,
iMeta = ASEM_iMeta,
split_to = "all",
quietly = TRUE)
## -----------------------------------------------------------------------------
purse
## -----------------------------------------------------------------------------
ASEM_purse <- build_example_purse(quietly = TRUE)
ASEM_purse
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/coins.R
|
---
title: "Building coins"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Building coins}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(COINr)
```
This vignette gives a guide to building "coins", which are the object class representing a composite indicator used throughout COINr, and "purses", which are time-indexed collections of coins.
# What is a coin?
COINr functions are designed to work in particular on an S3 object class called a "coin". To introduce this, consider what constitutes a composite indicator:
* The indicator data
* Indicator metadata, including weights and directions
* A structure which maps indicators into groups for aggregation, typically over multiple levels
* Methodological specifications, including
- Data treatment
- Normalisation method and parameters
- Aggregation method and parameters
* Processed data sets at each stage of the construction
* Resulting aggregated scores and ranks
Meanwhile, in the process of building a composite indicator, a series of analysis data is generated, including information on data availability, statistics on individual indicators, correlations and information about data treatment.
If a composite indicator is built from scratch, it is easy to end up with an environment containing dozens of variables and parameters, and if an alternative version of the composite indicator is built, multiple sets of such variables may need to be generated. With this in mind, it makes sense to structure all the ingredients of a composite indicator, from input data to methodology and results, into a single object, which is called a "coin" in COINr.
How to construct a coin, and some details of its contents, will be explained in more detail in the following sections. Although coins are the main object class used in COINr, a number of COINr functions also have methods for data frames and vectors. This is explained in other vignettes.
# Building coins
To build a coin you need to use the `new_coin()` function. The main two input arguments of this function are two data frames: `iData` (the indicator data), and `iMeta` (the indicator metadata). This builds a coin class object containing the raw data, which can then be developed and expanded by COINr functions by e.g. normalising, treating data, imputing, aggregating and so on.
Before proceeding, we have to define a couple of things. The "things" that are being benchmarked/compared by the indicators and composite indicator are more generally referred to as *units* (quite often, units correspond to countries). Units are compared using *indicators*, which are measured variables that are relevant to the overall concept of the composite indicator.
## Indicator data
The first data frame, `iData` specifies the value of each indicator, for each unit. It can also contain further attributes and metadata of units, for example groups, names, and denominating variables (variables which are used to adjust for size effects of indicators).
To see an example of what `iData` looks like, we can look at the built in [ASEM](https://composite-indicators.jrc.ec.europa.eu/asem-sustainable-connectivity/) data set. This data set is from a composite indicator covering 51 countries with 49 indicators, and is used for examples throughout COINr:
```{r}
head(ASEM_iData[1:20], 5)
```
Here only a few rows and columns are shown to illustrate. The ASEM data covers 51 Asian and European countries, at the national level, and uses 49 indicators. Notice that each row is an observation (here, a country), and each column is a variable (mostly indicators, but also other things).
Columns can be named whatever you want, although a few names are reserved:
* `uName` [optional] gives the name of each unit. Here, units are countries, so these are the names of each country.
* `uCode` [**required**] is a unique code assigned to each unit (country). This is the main "reference" inside COINr for units. If the units are countries, ISO Alpha-3 codes should ideally be used, because these are recognised by COINr for generating maps.
* `Time` [optional] gives the reference time of the data. This is used if panel data is passed to `new_coin()`. See [Purses and panel data].
This means that at a minimum, you need to supply a data frame with a `uCode` column, and some indicator columns.
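For illustration, a minimal sketch of a valid `iData` (with made-up unit codes and indicators) might look like this:

```{r, eval=FALSE}
# a minimal, hypothetical iData: three units and two indicators
iData_min <- data.frame(
  uCode = c("AAA", "BBB", "CCC"),
  Ind1 = c(1.2, 3.4, 2.2),
  Ind2 = c(90, 60, 75)
)
check_iData(iData_min)
```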
Aside from the reserved names above, columns can be assigned to different uses using the corresponding `iMeta` data frame - this is clarified in the next section.
Some important rules and tips to keep in mind are:
* Columns don't have to be in any particular order; they are identified by names rather than positions.
* Indicator columns are required to be numeric, i.e. they cannot be character vectors.
* There is no restriction on the number of indicators and units.
* Indicator codes and unit codes must be unique.
* As with everything in R, all codes are case-sensitive.
* Don't start any column names with a number!
The `iData` data frame will be checked when it is passed to `new_coin()`. You can also perform this check yourself in advance by calling `check_iData()`:
```{r}
check_iData(ASEM_iData)
```
If there are issues with your `iData` data frame this should produce informative error messages which can help to correct the problem.
## Indicator metadata
The `iMeta` data frame specifies everything about each column in `iData`, including whether it is an indicator, a group, or something else; its name, its units, and where it appears in the *structure* of the index. `iMeta` also requires entries for any aggregates which will be created by aggregating indicators. Let's look at the built-in example.
```{r}
head(ASEM_iMeta, 5)
```
Required columns for `iMeta` are:
* `Level`: The level in aggregation, where 1 is indicator level, 2 is the level resulting from aggregating
indicators, 3 is the result of aggregating level 2, and so on. Set to `NA` for entries that are not included in the index (groups, denominators, etc).
* `iCode`: Indicator code, alphanumeric. Must not start with a number. These entries generally correspond to the column names of `iData`.
* `Parent`: Group (`iCode`) to which indicator/aggregate belongs in level immediately above. Each entry here should also be found in `iCode`. Set to `NA` only for the highest (Index) level (no parent), or for entries that are not included in the index (groups, denominators, etc).
* `Direction`: Numeric, either -1 or 1
* `Weight`: Numeric weight, will be re-scaled to sum to 1 within aggregation group. Set to `NA` for entries that are not included in the index (groups, denominators, etc).
* `Type`: The type, corresponding to `iCode`. Can be either `Indicator`, `Aggregate`, `Group`, `Denominator`, or `Other`.
Optional columns that are recognised in certain functions are:
* `iName`: Name of the indicator: a longer name which is used in some plotting functions.
* `Denominator`: specifies which denominator variable should be used to denominate the indicator, if `Denominate()` is called. See the [Denomination](denomination.html) vignette.
* `Unit`: the unit of the indicator, e.g. USD, thousands, score, etc. Used in some plots if available.
* `Target`: a target for the indicator. Used if normalisation type is distance-to-target.
`iMeta` can also include other columns if needed for specific uses, as long as they don't use the names listed above.
The `iMeta` data frame essentially gives details about each of the columns found in `iData`, as well as details about additional data columns eventually created by aggregating indicators. This means that the entries in `iMeta` must include *all* columns in `iData`, *except* the three "special" column names: `uCode`, `uName`, and `Time`. In other words, all column names of `iData` should appear in `iMeta$iCode`, except the three special cases mentioned.
The `Type` column specifies the type of the entry: `Indicator` should be used for indicators at level 1.
`Aggregate` for aggregates created by aggregating indicators or other aggregates. Otherwise set to `Group`
if the variable is not used for building the index but instead is for defining groups of units. Set to
`Denominator` if the variable is to be used for scaling (denominating) other indicators. Finally, set to
`Other` if the variable should be ignored but passed through. Any other entries here will cause an error.
Apart from the indicator entries shown above, we can see aggregate entries:
```{r}
ASEM_iMeta[ASEM_iMeta$Type == "Aggregate", ]
```
These are the aggregates that will be created by aggregating indicators. These values will only be created when we call the `Aggregate()` function (see relevant vignette). We also have groups:
```{r}
ASEM_iMeta[ASEM_iMeta$Type == "Group", ]
```
Notice that the `iCode` entries here correspond to column names of `iData`. There are also denominators:
```{r}
ASEM_iMeta[ASEM_iMeta$Type == "Denominator", ]
```
Denominators are used to divide or "scale" other indicators. They are ideally included in `iData` because this ensures that they match the units and possibly the time points.
The `Parent` column requires a few extra words. This is used to define the structure of the index. Simply put, it specifies the aggregation group to which the indicator or aggregate belongs, in the level immediately above. For indicators in level 1, this should refer to `iCode`s in level 2, and for aggregates in level 2, it should refer to `iCode`s in level 3. Every entry in `Parent` must refer to an entry that can be found in the `iCode` column, or else be `NA` for the highest aggregation level or for groups, denominators and other `iData` columns that are not included in the index.
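To make the structure concrete, here is a minimal sketch of an `iMeta` that would match the hypothetical two-indicator `iData` sketched earlier: two indicators aggregated directly into a single index.

```{r, eval=FALSE}
# a minimal, hypothetical iMeta: two indicators feeding one index
iMeta_min <- data.frame(
  iCode = c("Ind1", "Ind2", "Index"),
  Level = c(1, 1, 2),
  Parent = c("Index", "Index", NA),
  Direction = c(1, -1, 1),
  Weight = c(1, 1, 1),
  Type = c("Indicator", "Indicator", "Aggregate")
)
check_iMeta(iMeta_min)
```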
The `iMeta` data frame is more complex than `iData` and it may be easy to make errors. Use the `check_iMeta()` function (which is anyway called by `new_coin()`) to check the validity of your `iMeta`. Informative error messages are included where possible to help correct any errors.
```{r}
check_iMeta(ASEM_iMeta)
```
When `new_coin()` is run, additional cross-checks are run between `iData` and `iMeta`.
## Building with `new_coin()`
With the `iData` and `iMeta` data frames prepared, you can build a coin using the `new_coin()` function. This has some other arguments and options that we will see in a minute, but by default it looks like this:
```{r}
# build a new coin using example data
coin <- new_coin(iData = ASEM_iData,
iMeta = ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
```
The `new_coin()` function checks and cross-checks both input data frames, and outputs a coin-class object. It also tells us that it has written a data set to `.$Data$Raw` - this is the sub-list that contains the various data sets that will be created each time we run a coin-building function.
We can see a summary of the coin by calling the coin print method - this is done simply by calling the name of the coin at the command line, or equivalently `print(coin)`:
```{r}
coin
```
This tells us some details about the coin - the number of units, indicators, denominators and groups; the structure of the index (notice that the `level_names` argument is used to describe each level), and the data sets present in the coin. Currently this only consists of the "Raw" data set, which is the data set that is created by default when we run `new_coin()`, and simply consists of the indicator data plus the `uCode` column. Indeed, we can retrieve any data set from within a coin at any time using the `get_dset()` function:
```{r}
# first few cols and rows of Raw data set
data_raw <- get_dset(coin, "Raw")
head(data_raw[1:5], 5)
```
By default, calling `get_dset()` returns only the unit code plus the indicator/aggregate columns. We can also attach other columns such as groups and names by using the `also_get` argument. This can be used to attach any of the `iData` "metadata" columns that were originally passed when calling `new_coin()`, such as groups, etc.
```{r}
get_dset(coin, "Raw", also_get = c("uName", "Pop_group"))[1:5] |>
head(5)
```
Apart from the `level_names` argument, `new_coin()` also gives the possibility to only pass forward a subset of the indicators in `iMeta`. This is done using the `exclude` argument, which is useful when testing alternative sets of indicators - see vignette on adjustments and comparisons.
```{r}
# exclude two indicators
coin <- new_coin(iData = ASEM_iData,
iMeta = ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"),
exclude = c("LPI", "Flights"))
coin
```
Here, `new_coin()` has removed the indicator columns from `iData` and the corresponding entries in `iMeta`. However, the full original `iData` and `iMeta` tables are still stored in the coin.
The `new_coin()` function includes a thorough series of checks on its input arguments which may cause some initial errors while the format is corrected. The objective is that if you can successfully assemble a coin, this should work smoothly for all COINr functions.
# Example coin
COINr includes a built-in example coin which is constructed using the function `build_example_coin()`. This can be useful for learning how the package works and for testing, and it is used extensively in the COINr documentation because many functions require a coin as an input. Here we build the example coin (which is again from the ASEM data set built into COINr) and inspect its contents:
```{r}
ASEM <- build_example_coin(quietly = TRUE)
ASEM
```
This shows that the example is a fully populated coin with various data sets, each resulting from running COINr functions, up to the aggregation step.
# Purses and panel data
A coin offers very wide methodological flexibility, but some things are kept fixed throughout. One is that the set of indicators does not change once the coin has been created. The other is that each coin represents a single point in time.
If you have panel data, i.e. multiple observations for each unit-indicator pair, indexed by time, then `new_coin()` allows you to create multiple coins in one go. Coins are collected into a single object called a "*purse*", and many COINr functions work on purses directly.
Here we simply explore how to create a purse. The procedure is almost the same as creating a coin: you need the `iData` and `iMeta` data frames, and you call `new_coin()`. The difference is that `iData` must now have a `Time` column: a numeric column recording the time point of each observation. To see an example, we can look at the built-in (artificial) panel data set `ASEM_iData_p`.
```{r}
# sample of 2018 observations
ASEM_iData_p[ASEM_iData_p$Time == 2018, 1:15] |>
head(5)
# sample of 2019 observations
ASEM_iData_p[ASEM_iData_p$Time == 2019, 1:15] |>
head(5)
```
This data set has five years of data, spanning 2018-2022 (the data are artificially generated - at some point I will replace this with a real example). This means that each row now corresponds to a set of indicator values for a unit, for a given time point.
To build a purse from this data, we input it into `new_coin()`:
```{r}
# build purse from panel data
purse <- new_coin(iData = ASEM_iData_p,
iMeta = ASEM_iMeta,
split_to = "all",
quietly = TRUE)
```
Notice here that the `iMeta` argument is the same as when we assembled a single coin - this is because a purse is supposed to consist of coins with the same indicators and structure, i.e. the aim is to calculate a composite indicator over several points in time, and generally to apply the same methodology to all coins in the purse. It is however possible to have different units between coins in the same purse - this might occur because of data availability differences at different time points.
The `split_to` argument should be set to `"all"` to create a coin from each time point found in the data. Alternatively, you can only include a subset of time points by specifying them as a vector.
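For example, a sketch of building a purse from just two of the available time points (assuming 2019 and 2020 are among them):

```{r, eval=FALSE}
# build a purse from a subset of time points only
purse_19_20 <- new_coin(iData = ASEM_iData_p,
                        iMeta = ASEM_iMeta,
                        split_to = c(2019, 2020),
                        quietly = TRUE)
```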
A quick way to check the contents of the purse is to call its print method:
```{r}
purse
```
This tells us how many coins there are, the number of indicators and units, and gives some structural information from one of the coins.
A purse is an S3 class object like a coin. In fact, it is simply a data frame with a `Time` column and a `coin` column, where entries in the `coin` column are coin objects (in a so-called "list column"). This is convenient to work with, but if you try to view it in RStudio, for example, it can be a little messy.
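To illustrate, we can look at the `Time` column of the purse created above, and confirm that each entry of the `coin` column is itself a coin:

```{r}
# one row per time point
purse$Time
# each entry of the list column is a coin object
class(purse$coin[[1]])
```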
As with coins, the purse class also has a function in COINr which produces an example purse:
```{r}
ASEM_purse <- build_example_purse(quietly = TRUE)
ASEM_purse
```
The purse class can be used directly with COINr functions - this allows you to impute/normalise/treat/aggregate all coins with a single command, for example.
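As a quick sketch (not run here), a single call to `Normalise()` would normalise the raw data set in every coin of the example purse - see the vignette on normalisation for details:

```{r, eval=FALSE}
ASEM_purse <- Normalise(ASEM_purse, dset = "Raw")
```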
# Summary
COINr is mostly designed to work with coins and purses. However, many key functions also have methods for data frames or vectors. This means that COINr can either be used as an "ecosystem" of functions built around coins and purses, or else can just be used as a toolbox for doing your own work with data frames and other objects.
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# retrieve normalised data set
dset_norm <- get_dset(coin, dset = "Normalised")
# view first few rows and cols
head(dset_norm[1:5], 5)
## -----------------------------------------------------------------------------
# retrieve normalised data set
dset_norm2 <- get_dset(coin, dset = "Normalised", also_get = c("uName", "GDP_group"))
# view first few rows and cols
head(dset_norm2[1:5], 5)
## -----------------------------------------------------------------------------
x <- get_data(coin, dset = "Raw", iCodes = c("Flights", "LPI"))
# see first few rows
head(x, 5)
## -----------------------------------------------------------------------------
x <- get_data(coin, dset = "Raw", iCodes = "Political", Level = 1)
head(x, 5)
## -----------------------------------------------------------------------------
x <- get_data(coin, dset = "Aggregated", iCodes = "Sust", Level = 2)
head(x, 5)
## -----------------------------------------------------------------------------
get_data(coin, dset = "Raw", iCodes = "Goods", uCodes = c("AUT", "VNM"))
## -----------------------------------------------------------------------------
coin
## -----------------------------------------------------------------------------
get_data(coin, dset = "Raw", iCodes = "Goods", use_group = list(GDP_group = "XL"))
## -----------------------------------------------------------------------------
get_data(coin, dset = "Raw", iCodes = "Flights", uCodes = "MLT", use_group = "Pop_group")
## -----------------------------------------------------------------------------
names(coin$Data)
## -----------------------------------------------------------------------------
data_raw <- coin$Data$Raw
head(data_raw[1:5], 5)
## -----------------------------------------------------------------------------
str(coin$Meta$Unit)
---
title: "Data selection"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Data selection}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette describes how to retrieve data from a coin. The main functions to do this are `get_dset()` and the more flexible `get_data()`.
These functions are important to understand, because many COINr functions use them to retrieve data for plotting, analysis and other functions. Both functions are *generics*, which means that they have methods for coins and purses.
# Data sets
Every time a "building" operation is applied to a coin, such as `Treat()`, `Screen()`, `Normalise()` and so on, a new data set is created. Data sets live in the `.$Data` sub-list of the coin. We can retrieve a data set at any time using the `get_data()` function:
```{r}
library(COINr)
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# retrieve normalised data set
dset_norm <- get_dset(coin, dset = "Normalised")
# view first few rows and cols
head(dset_norm[1:5], 5)
```
By default, a data set in the coin consists of indicator columns plus the "uCode" column, which is the unique identifier of each row. You can also ask to attach unit metadata columns, such as unit names, groups, and anything else that was input when building the coin, using the `also_get` argument:
```{r}
# retrieve normalised data set
dset_norm2 <- get_dset(coin, dset = "Normalised", also_get = c("uName", "GDP_group"))
# view first few rows and cols
head(dset_norm2[1:5], 5)
```
# Data subsets
While `get_dset()` is a quick way to retrieve an entire data set and its metadata, the `get_data()` function is a generalisation: it can be used to obtain a whole data set, but also subsets of data, based on e.g. indicator selection and grouping (columns), as well as unit selection and grouping (rows).
## Indicators/columns
A simple example is to extract one or more named indicators from a target data set:
```{r}
x <- get_data(coin, dset = "Raw", iCodes = c("Flights", "LPI"))
# see first few rows
head(x, 5)
```
By default, `get_data()` returns the requested indicators, plus the `uCode` identifier column. We can also set `also_get = "none"` to return only the indicator columns.
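For example, a minimal illustration of `also_get = "none"`:

```{r}
# return only the indicator columns, without the uCode identifier
get_data(coin, dset = "Raw", iCodes = c("Flights", "LPI"), also_get = "none") |>
  head(5)
```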
The `iCodes` argument can also accept groups of indicators, based on the structure of the index. In our example, indicators are aggregated into "pillars" (level 2) within groups. We can name an aggregation group and extract the underlying indicators:
```{r}
x <- get_data(coin, dset = "Raw", iCodes = "Political", Level = 1)
head(x, 5)
```
Here we have requested all the indicators in level 1 (the indicator level), that belong to the group called "Political" (one of the pillars). Specifying the level becomes more relevant when we look at the aggregated data set, which also includes the pillar, sub-index and index scores. Here, for example, we can ask for all the pillar scores (level 2) which belong to the sustainability sub-index (level 3):
```{r}
x <- get_data(coin, dset = "Aggregated", iCodes = "Sust", Level = 2)
head(x, 5)
```
If this isn't clear, look at the structure of the example index using e.g. `plot_framework(coin)`. If we wanted to select all the indicators within the "Sust" sub-index we would set `Level = 1`. If we wanted to select the sub-index scores themselves we would set `Level = 3`, and so on.
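As a quick check of these last two points:

```{r}
# all indicators (level 1) within the "Sust" sub-index - first few columns only
get_data(coin, dset = "Aggregated", iCodes = "Sust", Level = 1)[1:4] |>
  head(5)
# the sub-index scores themselves (level 3)
get_data(coin, dset = "Aggregated", iCodes = "Sust", Level = 3) |>
  head(5)
```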
The idea of selecting indicators and aggregates based on the structure of the index is useful in many places in COINr, for example examining correlations within aggregation groups using `plot_corr()`.
## Units/rows
Units (rows) of the data set can also be selected (also in combination with selecting indicators). Starting with a simple example, let's select specified units for a specific indicator:
```{r}
get_data(coin, dset = "Raw", iCodes = "Goods", uCodes = c("AUT", "VNM"))
```
Rows can also be sub-setted using groups, i.e. unit groupings that are defined using variables input with `iMeta$Type = "Group"` when building the coin. Recall that for our example coin we have several groups (a reminder that you can see some details about the coin using its print method):
```{r}
coin
```
The first way to subset by unit group is to name a grouping variable, and a group within that variable to select. For example, say we want to know the values of the "Goods" indicator for all the countries in the "XL" GDP group:
```{r}
get_data(coin, dset = "Raw", iCodes = "Goods", use_group = list(GDP_group = "XL"))
```
Since we have subsetted by group, this also returns the group column which was used.
Another way of sub-setting is to combine `uCodes` and `use_group`. When these two arguments are both specified, the result is to return the full group(s) to which the specified `uCodes` belong. This can be used to put a unit in context with its peers within a group. For example, we might want to see the values of the "Flights" indicator for a specific unit, as well as all other units within the same population group:
```{r}
get_data(coin, dset = "Raw", iCodes = "Flights", uCodes = "MLT", use_group = "Pop_group")
```
Here, we have to specify `use_group` simply as a string rather than a list. Since MLT is in the "S" population group, it returns all units within that group.
Overall, the idea of `get_data()` is to flexibly return subsets of indicator data, based on the structure of the index and unit groups.
# Manual selection
As a final point, it's worth pointing out that a coin is simply a list of R objects such as data frames, other lists, vectors and so on. It has a particular format which allows things to be easily accessed by COINr functions. But other than that, it's an ordinary R object. This means that even without the helper functions mentioned above, you can get at the data simply by exploring the coin yourself.
The data sets live in the `.$Data` sub-list of the coin:
```{r}
names(coin$Data)
```
And we can access any of these directly:
```{r}
data_raw <- coin$Data$Raw
head(data_raw[1:5], 5)
```
The metadata lives in the `.$Meta` sub-list. For example, the unit metadata, which includes groups, names etc:
```{r}
str(coin$Meta$Unit)
```
The point is that if COINr tools don't get you where you want to go, knowing your way around the coin allows you to access the data exactly how you want.
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# Get a sample of indicator data (note must be indicators plus a "UnitCode" column)
iData <- ASEM_iData[c("uCode", "Goods", "Flights", "LPI")]
head(iData)
## -----------------------------------------------------------------------------
head(WorldDenoms)
## -----------------------------------------------------------------------------
# specify how to denominate
denomby <- data.frame(iCode = c("Goods", "Flights"),
Denominator = c("GDP", "Population"),
ScaleFactor = c(1, 1000))
## -----------------------------------------------------------------------------
# Denominate one by the other
iData_den <- Denominate(iData, WorldDenoms, denomby)
head(iData_den)
## -----------------------------------------------------------------------------
# first few rows of the example iMeta, selected cols
head(ASEM_iMeta[c("iCode", "Denominator")])
## -----------------------------------------------------------------------------
# see names of example iData
names(ASEM_iData)
## -----------------------------------------------------------------------------
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# denominate (here, we only need to say which dset to use)
coin <- Denominate(coin, dset = "Raw")
## -----------------------------------------------------------------------------
plot_scatter(coin, dsets = c("Raw", "Denominated"), iCodes = "Flights")
## -----------------------------------------------------------------------------
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# denominate using data/specs already included in coin
purse <- Denominate(purse, dset = "Raw")
---
title: "Denomination"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Denomination}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Denomination is the process of scaling one indicator by another quantity to adjust for the effect of size. This is because many indicators are linked to the unit's size (economic size, physical size, population, etc.) in one way or another, and if no adjustments were made, a composite indicator would end up with the largest units at the top and the smallest at the bottom. Often, the adjustment is made by dividing the indicator by a so-called "denominator" or a denominating variable. If units are countries, denominators are typically things like GDP, population or land area.
COINr's `Denominate()` function allows you to perform this operation quickly, in a flexible and reproducible way. As with other building functions, it is a *generic*, which means that it has different methods for data frames, coins and purses. They are however all fairly similar.
# Data frames
We'll begin by demonstrating denomination on a data frame. We'll use the in-built data set to get a small sample of indicators:
```{r}
library(COINr)
# Get a sample of indicator data (note must be indicators plus a "UnitCode" column)
iData <- ASEM_iData[c("uCode", "Goods", "Flights", "LPI")]
head(iData)
```
This is the raw indicator data for three indicators, plus the "uCode" column which identifies each unit. We will also get some data for denominating the indicators. COINr has an in-built set of denominator data called `WorldDenoms`:
```{r}
head(WorldDenoms)
```
Now, the main things to specify in denomination are which indicators to denominate, and by what. In other words, we need to map the indicators to the denominators. In the example, the export of goods should be denominated by GDP, passenger flight capacity by population (GDP could also possibly be reasonable), and "LPI" (the logistics performance index) is an intensive variable that does not need to be denominated.
This specification is passed to `Denominate()` using the `denomby` argument. This takes a data frame which includes "iCode" (the name of the column to be denominated), "Denominator" (the column name of the denominator data frame to use), and "ScaleFactor" (a multiplying factor to apply if needed). We create this data frame here:
```{r}
# specify how to denominate
denomby <- data.frame(iCode = c("Goods", "Flights"),
Denominator = c("GDP", "Population"),
ScaleFactor = c(1, 1000))
```
A second important consideration is that the rows of the indicators and the denominators need to be matched, so that each unit is denominated by the value corresponding to that unit, and not another unit. Notice that the `WorldDenoms` data frame covers more or less all countries in the world, whereas the sample indicators only cover 51 countries. The matching is performed inside the `Denominate()` function, using an identifier column which must be present in both data frames. Here, our common column is "uCode", which is already found in both data frames. This is also the default column name expected by `Denominate()`, so we don't even need to specify it. If you have other column names, use the `x_iD` and `denoms_ID` arguments to pass these names to the function.
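As a sketch of the latter case, suppose our indicator data used a (hypothetical) column name "CountryCode" instead of "uCode"; we would then point `Denominate()` to it via `x_iD`:

```{r, eval=FALSE}
# rename the identifier column (purely for illustration)
iData2 <- iData
names(iData2)[names(iData2) == "uCode"] <- "CountryCode"
# denominate, specifying the identifier column of the indicator data
iData2_den <- Denominate(iData2, WorldDenoms, denomby, x_iD = "CountryCode")
```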
Ok so now we are ready to denominate:
```{r}
# Denominate one by the other
iData_den <- Denominate(iData, WorldDenoms, denomby)
head(iData_den)
```
The function has matched each unit in `iData` with its corresponding denominator value in `WorldDenoms` and divided the former by the latter. As expected, "Goods" and "Flights" have changed, but "LPI" has not because it was not included in the `denomby` data frame.
Otherwise, the only other feature to mention is the `f_denom` argument, which allows functions other than the division operator to be used for the denomination. See the function documentation.
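For instance, a sketch assuming `f_denom` accepts any function of two vectors (here subtraction, purely to illustrate the mechanism rather than as a sensible denomination):

```{r, eval=FALSE}
# "denominate" by subtracting the denominator instead of dividing by it
iData_diff <- Denominate(iData, WorldDenoms, denomby, f_denom = `-`)
head(iData_diff)
```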
# Coins
Now let's look at denomination inside a coin. The main difference here is that the information needed to denominate the indicators may already be present inside the coin. When creating the coin with `new_coin()`, there is the option to specify denominating variables as part of `iData` (these are variables where `iMeta$Type = "Denominator"`), and to specify in `iMeta` the mapping between indicators and denominators, using the `iMeta$Denominator` column. To see what this looks like:
```{r}
# first few rows of the example iMeta, selected cols
head(ASEM_iMeta[c("iCode", "Denominator")])
```
The entries in "Denominator" correspond to column names that are present in `iData`:
```{r}
# see names of example iData
names(ASEM_iData)
```
So in our example, all the information needed to denominate is already present in the coin - the denominator data, and the mapping. In this case, to denominate, we simply call:
```{r}
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# denominate (here, we only need to say which dset to use)
coin <- Denominate(coin, dset = "Raw")
```
If the denomination data and/or mapping isn't present in the coin, or we wish to try an alternative specification, we can also pass this to `Denominate()` using the `denoms` and `denomby` arguments as in the previous section.
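For example, a sketch passing the same specifications used in the data frame section directly to the coin method:

```{r, eval=FALSE}
# override the denominator data and mapping stored in the coin
coin <- Denominate(coin, dset = "Raw", denoms = WorldDenoms, denomby = denomby)
```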
This concludes the main features of `Denominate()` for a coin. Before moving on, consider that denomination needs extra care because it radically changes the indicator. It is a nonlinear transformation because each data point is divided by a different value. To demonstrate, consider the "Flights" indicator that we just denominated - let's plot the raw indicator against the denominated version:
```{r}
plot_scatter(coin, dsets = c("Raw", "Denominated"), iCodes = "Flights")
```
This shows that the raw and denominated indicators show very little resemblance to one another.
# Purses
The final method for `Denominate()` is for purses. The purse method is exactly the same as the coin method, except applied to a purse.
An important consideration here is that denominator variables can and do vary with time, just like indicators. This means that e.g. "Total value of exports" from 2019 should be divided by GDP from 2019, and not from another year. In other words, denominators are panel data just like the indicators.
This is why denominators are ideally input as part of `iData` when calling `new_coin()`. In doing so, denominators are another column of the data frame like the indicators, and must have an entry for each unit/time pair. This also ensures that the unit-matching of denominator and indicator is correct (or more accurately, I leave that up to you!).
In our example purse, the denominator data is already included, as is the mapping. This means that denomination is exactly the same operation as denominating a coin:
```{r}
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# denominate using data/specs already included in coin
purse <- Denominate(purse, dset = "Raw")
```
In fact if you try to pass denominator data to `Denominate()` for a purse via `denoms`, there is a catch: at the moment, `denoms` does not support panel data, so it is required to use the same value for each time point. This is not ideal and may be sorted out in future releases. For now, it is better to denominate purses by passing all the specifications via `iData` and `iMeta` when building the purse with `new_coin()`.
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
ASEM <- build_example_coin(up_to = "new_coin", quietly = TRUE)
## -----------------------------------------------------------------------------
l_avail <- get_data_avail(ASEM, dset = "Raw", out2 = "list")
## -----------------------------------------------------------------------------
head(l_avail$Summary)
## -----------------------------------------------------------------------------
min(l_avail$Summary$Dat_Avail)
## -----------------------------------------------------------------------------
df_avail <- get_stats(ASEM, dset = "Raw", out2 = "df")
head(df_avail[c("iCode", "N.Avail", "Frc.Avail")], 10)
## -----------------------------------------------------------------------------
min(df_avail$Frc.Avail)
## -----------------------------------------------------------------------------
# some data to use as an example
# this is a selected portion of the data with some missing values
df1 <- ASEM_iData[37:46, 36:39]
print(df1, row.names = FALSE)
## -----------------------------------------------------------------------------
Impute(df1, f_i = "i_mean")
## -----------------------------------------------------------------------------
# demo of i_mean() function, which is built in to COINr
x <- c(1,2,3,4, NA)
i_mean(x)
## -----------------------------------------------------------------------------
# row grouping
groups <- c(rep("a", 5), rep("b", 5))
# impute
dfi2 <- Impute(df1, f_i = "i_median_grp", f_i_para = list(f = groups))
# display
print(dfi2, row.names = FALSE)
## -----------------------------------------------------------------------------
Impute(df1, f_i = "i_mean", impute_by = "row", normalise_first = FALSE)
## -----------------------------------------------------------------------------
Impute(df1, f_i = "i_mean", impute_by = "row", normalise_first = TRUE, directions = rep(1,4))
## -----------------------------------------------------------------------------
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean")
ASEM
## -----------------------------------------------------------------------------
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean_grp", use_group = "GDP_group", )
## -----------------------------------------------------------------------------
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean", impute_by = "row",
group_level = 2, normalise_first = TRUE)
## ---- eval=FALSE--------------------------------------------------------------
# # this function takes a data frame input and returns an imputed data frame using amelia
# i_EM <- function(x){
# # impute
# amOut <- Amelia::amelia(x, m = 1, p2s = 0, boot.type = "none")
# # return imputed data
# amOut$imputations[[1]]
# }
## ---- eval=FALSE--------------------------------------------------------------
# # impute raw data set
# coin <- Impute(coin, dset = "Raw", f_i = i_EM, impute_by = "df", group_level = 2)
## -----------------------------------------------------------------------------
# copy
dfp <- ASEM_iData_p
# create NA for GBR in 2022
dfp$LPI[dfp$uCode == "GBR" & dfp$Time == 2022] <- NA
## -----------------------------------------------------------------------------
dfp$LPI[dfp$uCode == "GB" & dfp$Time == 2021]
## -----------------------------------------------------------------------------
# build purse
ASEMp <- new_coin(dfp, ASEM_iMeta, split_to = "all", quietly = TRUE)
# impute raw data using latest available value
ASEMp <- Impute(ASEMp, dset = "Raw", f_i = "impute_panel")
## -----------------------------------------------------------------------------
get_data(ASEMp, dset = "Imputed", iCodes = "LPI", uCodes = "GBR", Time = 2021)
---
title: "Imputation"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Imputation}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Introduction
Imputation is the process of estimating missing data points. To get started with imputation, a reasonable first step is to see how much missing data we have in the data set. We begin by building the example coin, up to the point of assembling the coin, but not any further:
```{r}
library(COINr)
ASEM <- build_example_coin(up_to = "new_coin", quietly = TRUE)
```
To check missing data, the `get_data_avail()` function can be used. It can output to either the coin or to a list -- here we output to a list to readily display the results.
```{r}
l_avail <- get_data_avail(ASEM, dset = "Raw", out2 = "list")
```
The output list has data availability by unit:
```{r}
head(l_avail$Summary)
```
The lowest data availability by unit is:
```{r}
min(l_avail$Summary$Dat_Avail)
```
We can also check data availability by indicator. This is done by calling `get_stats()`:
```{r}
df_avail <- get_stats(ASEM, dset = "Raw", out2 = "df")
head(df_avail[c("iCode", "N.Avail", "Frc.Avail")], 10)
```
By indicator, the minimum data availability is:
```{r}
min(df_avail$Frc.Avail)
```
With missing data, several options are available:
1. Leave it as it is and aggregate anyway (there is also the option for data availability thresholds during aggregation - see [Aggregation](aggregate.html))
2. Consider removing indicators that have low data availability (this has to be done manually because it affects the structure of the index)
3. Consider removing units that have low data availability (see [Unit Screening](screening.html))
4. Impute missing data
These options can also be combined. Here, we focus on the option of imputation.
# Data frames
The `Impute()` function is a flexible function that imputes missing data in a data set using any suitable function that can be passed to it. In fact, `Impute()` is a *generic*, and has methods for coins, data frames, numeric vectors and purses.
Let's begin by examining the data frame method of `Impute()`, since it is easier to see what's going on. We will use a small data frame which is easy to visualise:
```{r}
# some data to use as an example
# this is a selected portion of the data with some missing values
df1 <- ASEM_iData[37:46, 36:39]
print(df1, row.names = FALSE)
```
In the simplest case, imputation can be performed column-wise, i.e. by imputing each indicator one at a time:
```{r}
Impute(df1, f_i = "i_mean")
```
Here, the "Raw" data set has been imputed by substituting missing values with the mean of the non-`NA` values for each column. This is performed by setting `f_i = "i_mean"`. The `f_i` argument refers to a function that imputes a numeric vector - in this case the built-in `i_mean()` function:
```{r}
# demo of i_mean() function, which is built in to COINr
x <- c(1,2,3,4, NA)
i_mean(x)
```
The key concept here is that the simple function `i_mean()` is applied by `Impute()` to each column. This idea of passing simpler functions is used in several key COINr functions, and allows great flexibility because more sophisticated imputation methods can be used from other packages, for example.
For now let's explore the options native to COINr. We can also apply the `i_median()` function in the same way to substitute with the indicator median. Adding a little complexity, we can also impute by mean or median, but within unit (row) groups. Let's assume that the first five rows in our data frame belong to a group "a", and the remaining five to a different group "b". In practice, these could be e.g. GDP, population or wealth groups for countries - we might hypothesise that it is better to replace `NA` values with the median inside a group, rather than the overall median, because countries within groups are more similar.
To do this on a data frame we can use the `i_median_grp()` function, which requires an additional argument `f`: a grouping variable. This is passed through `Impute()` using the `f_i_para` argument, which takes any additional parameters to `f_i` apart from the data to be imputed.
```{r}
# row grouping
groups <- c(rep("a", 5), rep("b", 5))
# impute
dfi2 <- Impute(df1, f_i = "i_median_grp", f_i_para = list(f = groups))
# display
print(dfi2, row.names = FALSE)
```
The `f_i_para` argument requires a named list of additional parameter values. This allows functions of any complexity to be passed to `Impute()`. By default, `Impute()` applies `f_i` to each column of data, so `f_i` is expected to take a numeric vector as its first input, and specifically to have the format `function(x, ...)`, where `x` is a numeric vector and `...` are further arguments. This means that the first argument of `f_i` *must* be called "x". To use functions that don't have `x` as a first argument, you would have to write a wrapper function.
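As a minimal sketch of such a custom function (the function `i_fixed()` and its `value` parameter are invented for illustration, and the function is passed as an object in the same way as the Amelia wrapper later in this vignette):

```{r}
# replace NAs with a fixed value, passed as an extra parameter
i_fixed <- function(x, value){
  x[is.na(x)] <- value
  x
}
# impute df1 column-wise, replacing NAs with zero
Impute(df1, f_i = i_fixed, f_i_para = list(value = 0))
```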
Other than imputing by column, we can also impute by row. This only really makes sense if the indicators are on a common scale, i.e. if they are normalised first (or perhaps if they already share the same units). To impute by row, set `impute_by = "row"`. In our example data set we have indicators on rather different scales. Let's see what happens if we impute by row mean but *don't* normalise:
```{r}
Impute(df1, f_i = "i_mean", impute_by = "row", normalise_first = FALSE)
```
This imputes some silly values, particularly in "CultGood", because "Pat" has much higher values. Clearly this is not a sensible strategy, unless all indicators are on the same scale. We can however normalise first, impute, then return indicators to their original scales:
```{r}
Impute(df1, f_i = "i_mean", impute_by = "row", normalise_first = TRUE, directions = rep(1,4))
```
This additionally requires us to specify the `directions` argument, because we need to know which direction each indicator runs in (whether they are positive or negative indicators). In our case all indicators are positive. See the vignette on [Normalisation](normalise.html) for more details on indicator directions.
The values imputed in this way are more realistic. Essentially we are replacing each missing value with the average (normalised) score of the other indicators, for a given unit. However this also only makes sense if the indicators/columns are similar to one another: high values of one would likely imply high values in the other.
Behind the scenes, setting `normalise_first = TRUE` first normalises each column using a min-max method, then performs the imputation, then returns the indicators to the original scales using the inverse transformation. Another approach which gives more control is to simply run `Normalise()` first, and work with the normalised data from that point onwards. In that case it is better to set `normalise_first = FALSE`, since by default if `impute_by = "row"` it will be set to `TRUE`.
As a final point on data frames, we can set `impute_by = "df"` to pass the entire data frame to `f_i`, which may be useful for more sophisticated multivariate imputation methods. But what's the point of using `Impute()` then, you may ask? First, because when imputing coins, we can impute by indicator groups (see next section); and second, `Impute()` performs some checks to ensure that non-`NA` values are not altered.
# Coins
Imputing coins is similar to imputing data frames because the coin method of `Impute()` calls the data frame method. Please read that section first if you have not already done so. However, for coins there are some additional function arguments.
In the simple case we impute a named data set `dset` using the function `f_i`: e.g. if we want to impute the "Raw" data set using indicator mean values:
```{r}
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean")
ASEM
```
Here, `Impute()` extracts the "Raw" data set as a data frame, imputes it using the data frame method (see previous section), then saves it as a new data set in the coin. Here, the data set is called "Imputed" but can be named otherwise using the `write_to` argument.
We can also impute by group using a grouped imputation function. Since unit groups are stored within the coin (variables labelled as "Group" in `iMeta`), these can be called directly using the `use_group` argument (without having to specify the `f_i_para` argument):
```{r}
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean_grp", use_group = "GDP_group", )
```
This has imputed each indicator using its GDP group mean.
Row-wise imputation works in the same way as with a data frame, by setting `impute_by = "row"`. However, this is particularly useful in conjunction with the `group_level` argument. If this is specified, rather than imputing across the entire row of data, it splits rows into indicator groups, using the structure of the index. For example:
```{r}
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean", impute_by = "row",
group_level = 2, normalise_first = TRUE)
```
Here, the `group_level` argument specifies which level-grouping of the indicators to use. In the ASEM example here, we are using level 2 groups, so it is substituting missing values with the average normalised score within each sub-pillar (in the ASEM example level 2 is called "sub-pillars").
Imputation in this way has an important relationship with aggregation. This is because if we *don't* impute, then in the aggregation step, if we take the mean of a group of indicators, and there is a `NA` present, this value is excluded from the mean calculation. Doing this is mathematically equivalent to assigning the mean to that missing value and then taking the mean of all of the indicators. This is sometimes known as "shadow imputation". Therefore, one reason to use this imputation method is to see which values are being implicitly assigned as a result of excluding missing values from the aggregation step.
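A quick numeric check of this equivalence:

```{r}
# mean of a group of three indicator values, one of which is missing
v <- c(2, 4, NA)
mean(v, na.rm = TRUE)
# same result if the missing value is replaced by the mean of the others
mean(c(2, 4, 3))
```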
Last we can see an example of imputation by data frame, with the option `impute_by = "df"`. Recall that this option requires that the function `f_i` accepts and returns entire data frames. This is suitable for more sophisticated multivariate imputation methods. Here we'll use a basic implementation of the Expectation Maximisation (EM) algorithm from the Amelia package.
Since COINr requires that the first argument of `f_i` is called `x`, and the relevant Amelia function doesn't satisfy this requirement, we have to write a simple wrapper function that acts as an intermediary between COINr and Amelia. This also gives us the chance to specify some other function arguments that are necessary.
```{r, eval=FALSE}
# this function takes a data frame input and returns an imputed data frame using amelia
i_EM <- function(x){
# impute
amOut <- Amelia::amelia(x, m = 1, p2s = 0, boot.type = "none")
# return imputed data
amOut$imputations[[1]]
}
```
Now armed with our new function, we just call that from `Impute()`. We don't need to specify `f_i_para` because these arguments are already specified in the intermediary function.
```{r, eval=FALSE}
# impute raw data set
coin <- Impute(coin, dset = "Raw", f_i = i_EM, impute_by = "df", group_level = 2)
```
This has now passed each group of indicators at level 2 as data frames to Amelia, which has imputed each one and passed them back.
# Purses
Purse imputation is very similar to coin imputation, because by default the purse method of `Impute()` imputes each coin separately. There is one exception to this: if `f_i = "impute_panel"`, the data sets inside the purse are imputed using the last available data point, via the `impute_panel()` function. In this case, coins are not imputed individually, but treated as a single panel data set. Optionally set `f_i_para = list(max_time = .)`, where `.` should be substituted with the maximum number of time points to search backwards for a non-`NA` value. See `impute_panel()` for more details. No further arguments need to be passed to `impute_panel()`.
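For example, a sketch restricting the search to at most two time points back (for any purse with a "Raw" data set):

```{r, eval=FALSE}
purse <- Impute(purse, dset = "Raw", f_i = "impute_panel",
                f_i_para = list(max_time = 2))
```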
It is difficult to show this working without a contrived example, so let's contrive one. We take the example panel data set `ASEM_iData_p`, and introduce a missing value `NA` in the indicator "LPI" for unit "GBR", for year 2022.
```{r}
# copy
dfp <- ASEM_iData_p
# create NA for GBR in 2022
dfp$LPI[dfp$uCode == "GBR" & dfp$Time == 2022] <- NA
```
This data point has a value for the previous year, 2021. Let's see what it is:
```{r}
dfp$LPI[dfp$uCode == "GB" & dfp$Time == 2021]
```
Now let's build the purse and impute the raw data set.
```{r}
# build purse
ASEMp <- new_coin(dfp, ASEM_iMeta, split_to = "all", quietly = TRUE)
# impute raw data using latest available value
ASEMp <- Impute(ASEMp, dset = "Raw", f_i = "impute_panel")
```
Now we check whether our imputed point is what we expect: we would expect that our `NA` is now replaced with the 2021 value as found previously. To get at the data we can use the `get_data()` function.
```{r}
get_data(ASEMp, dset = "Imputed", iCodes = "LPI", uCodes = "GBR", Time = 2021)
```
And indeed this corresponds to what we expect.
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "new_coin")
# normalise the raw data set
coin <- Normalise(coin, dset = "Raw")
## -----------------------------------------------------------------------------
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
## -----------------------------------------------------------------------------
coin <- Normalise(coin, dset = "Raw",
global_specs = list(f_n = "n_zscore",
f_n_para = list(c(10,2))))
## -----------------------------------------------------------------------------
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
## -----------------------------------------------------------------------------
# wrapper function
f_bin <- function(x, nbins){
cut(x, breaks = nbins, labels = FALSE)
}
# pass wrapper to normalise, specify 5 bins
coin <- Normalise(coin, dset = "Raw",
global_specs = list(f_n = "f_bin",
f_n_para = list(nbins = 5)))
## -----------------------------------------------------------------------------
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
## -----------------------------------------------------------------------------
# get directions from coin
directions <- coin$Meta$Ind[c("iCode", "Direction")]
head(directions, 10)
## -----------------------------------------------------------------------------
# change Goods to -1
directions$Direction[directions$iCode == "Goods"] <- -1
# re-run (using min max default)
coin <- Normalise(coin, dset = "Raw", directions = directions)
## -----------------------------------------------------------------------------
# individual specifications:
# LPI - borda scores
# Flights - z-scores with mean 10 and sd 2
indiv_specs <- list(
LPI = list(f_n = "n_borda"),
Flights = list(f_n = "n_zscore",
f_n_para = list(m_sd = c(10, 2)))
)
# normalise
coin <- Normalise(coin, dset = "Raw", indiv_specs = indiv_specs)
# a quick look at the first three indicators
get_dset(coin, "Normalised")[1:4] |>
head(10)
## -----------------------------------------------------------------------------
head(ASEM_iMeta[c("iCode", "Target")])
## ---- eval=FALSE--------------------------------------------------------------
# coin <- Normalise(coin, dset = "Raw", global_specs = list(f_n = "n_dist2targ"))
## ---- eval=FALSE--------------------------------------------------------------
# coin <- Normalise(coin, dset = "Raw", global_specs = list(f_n = "n_dist2targ", f_n_para = list(cap_max = TRUE)))
## -----------------------------------------------------------------------------
mtcars_n <- Normalise(mtcars, global_specs = list(f_n = "n_dist2max"))
head(mtcars_n)
## -----------------------------------------------------------------------------
Normalise(iris) |>
head()
## -----------------------------------------------------------------------------
# example vector
x <- runif(10)
# normalise using distance to reference (5th data point)
x_norm <- Normalise(x, f_n = "n_dist2ref", f_n_para = list(iref = 5))
# view side by side
data.frame(x, x_norm)
## -----------------------------------------------------------------------------
purse <- build_example_purse(quietly = TRUE)
## -----------------------------------------------------------------------------
purse <- Normalise(purse, dset = "Raw", global = TRUE)
## -----------------------------------------------------------------------------
# get normalised data of first coin in purse
x1 <- get_dset(purse$coin[[1]], dset = "Normalised")
# get min and max of first four indicators (exclude uCode col)
sapply(x1[2:5], min, na.rm = TRUE)
sapply(x1[2:5], max, na.rm = TRUE)
## -----------------------------------------------------------------------------
# get entire normalised data set for all coins in one df
x1_global <- get_dset(purse, dset = "Normalised")
# get min and max of first four indicators (exclude Time and uCode cols)
sapply(x1_global[3:6], min, na.rm = TRUE)
sapply(x1_global[3:6], max, na.rm = TRUE)
## -----------------------------------------------------------------------------
purse <- Normalise(purse, dset = "Raw", global = FALSE)
# get normalised data of first coin in purse
x1 <- get_dset(purse$coin[[1]], dset = "Normalised")
# get min and max of first four indicators (exclude uCode col)
sapply(x1[2:5], min, na.rm = TRUE)
sapply(x1[2:5], max, na.rm = TRUE)
## -----------------------------------------------------------------------------
# some made up data
X <- data.frame(uCode = letters[1:10],
a = runif(10),
b = runif(10)*100)
X
## -----------------------------------------------------------------------------
qNormalise(X)
## -----------------------------------------------------------------------------
qNormalise(X, f_n = "n_dist2ref", f_n_para = list(iref = 1, cap_max = TRUE))
---
title: "Normalisation"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Normalisation}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Normalisation is the operation of bringing indicators onto comparable scales so that they can be aggregated more fairly. To see why this is necessary, consider aggregating GDP values (billions or trillions of dollars) with percentage tertiary graduates (tens of percent). Average values here would make no sense because one is on a completely different scale to the other.
The normalisation function in COINr is imaginatively named `Normalise()`. It has the following main features:
* A wide range of normalisation methods, including the possibility to pass custom functions
* Customisable parameters for normalisation
* Possibility to specify detailed individual treatment for each indicator
As of COINr v1.0, `Normalise()` is a generic function with methods for different classes. This means that `Normalise()` can be called on coins, but also on data frames, numeric vectors and purses (time-indexed collections of coins).
Since `Normalise()` might be a bit over-complicated for some applications, the `qNormalise()` function gives a simpler interface which might be easier to use. See the [Simplified normalisation] section.
# Coins
The `Normalise()` method for coins follows the familiar format: you have to specify:
* `x` the coin
* `global_specs` default specifications to apply to all indicators
* `indiv_specs` individual specifications to override `global_specs` for specific indicators, if required
* `directions` a data frame specifying directions - this overrides the directions in `iMeta` if specified
* `out2` whether to output an updated coin or simply a data frame
Let's begin with a simple example. We build the example coin and normalise the raw data.
```{r}
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "new_coin")
# normalise the raw data set
coin <- Normalise(coin, dset = "Raw")
```
We can compare one of the raw and un-normalised indicators side by side.
```{r}
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
```
This plot also illustrates the linear nature of the min-max transformation.
The default normalisation uses the min-max approach, scaling indicators onto the $[0, 100]$ interval. But we can change the normalisation type and its parameters using the `global_specs` argument.
```{r}
coin <- Normalise(coin, dset = "Raw",
global_specs = list(f_n = "n_zscore",
f_n_para = list(c(10,2))))
```
Again, let's plot an example of the result:
```{r}
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
```
Again, the z-score transformation is linear. It simply puts the resulting indicator on a different scale.
Notice the syntax of `global_specs`. If specified, it takes entries `f_n` (the name of the function to apply to each column) and `f_n_para` (any further arguments to `f_n`, not including `x`). Importantly, `f_n_para` *must* be specified as a list, even if it only contains one parameter.
Note that **COINr has a number of normalisation functions built in**, all of which are of the form `n_*()`, such as `n_minmax()`, `n_borda()`, etc. Type `n_` in the RStudio console and press the Tab key to see a list, or else browse the COINr functions alphabetically.
## Calling external functions
Since `f_n` points to a function name, any function can be passed to `Normalise()` as long as it is available in the namespace. To illustrate, consider an example where we want to categorise into discrete bins. We can use base R's `cut()` function for this purpose. We simply need to specify the number of bins. We could directly call `cut()`, but for clarity we will create a simple wrapper function around it, then pass that function to `Normalise()`.
```{r}
# wrapper function
f_bin <- function(x, nbins){
cut(x, breaks = nbins, labels = FALSE)
}
# pass wrapper to normalise, specify 5 bins
coin <- Normalise(coin, dset = "Raw",
global_specs = list(f_n = "f_bin",
f_n_para = list(nbins = 5)))
```
To illustrate the difference with the linear transformations above, we again plot the raw against normalised indicator:
```{r}
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
```
Obviously this is *not* linear.
Generally, the requirements of a function to be passed to `Normalise()` are that its first argument should be `x`, a numeric vector, and it should return a numeric vector of the same length as `x`. It should also be able to handle `NA`s. Any further arguments can be passed via the `f_n_para` entry.
## Directions
By default, the directions are taken from the coin. These will have been specified as the `Direction` column of `iMeta` when constructing a coin with `new_coin()`. However, you can specify different directions using the `directions` argument of `Normalise()`: in this case you need to specify a data frame with two columns: `iCode` (with an entry for each indicator code found in the target data set) and `Direction` giving the direction as -1 or 1.
To show an example, we take the existing directions from the coin, modify them slightly, and then run the normalisation function again:
```{r}
# get directions from coin
directions <- coin$Meta$Ind[c("iCode", "Direction")]
head(directions, 10)
```
We'll change the direction of the "Goods" indicator and re-normalise:
```{r}
# change Goods to -1
directions$Direction[directions$iCode == "Goods"] <- -1
# re-run (using min max default)
coin <- Normalise(coin, dset = "Raw", directions = directions)
```
## Individual normalisation
Finally let's explore how to specify different normalisation methods for different indicators. The `indiv_specs` argument takes a named list for each indicator, and will override the specifications in `global_specs`. If `indiv_specs` is specified, we only need to include sub-lists for indicators that differ from `global_specs`.
To illustrate, we can use a contrived example where we might want to apply min-max to all indicators except two. For those two, we apply a Borda rank transformation and a z-score transformation respectively. Note that since the default of `global_specs` is min-max, we don't need to specify that at all here.
```{r}
# individual specifications:
# LPI - borda scores
# Flights - z-scores with mean 10 and sd 2
indiv_specs <- list(
LPI = list(f_n = "n_borda"),
Flights = list(f_n = "n_zscore",
f_n_para = list(m_sd = c(10, 2)))
)
# normalise
coin <- Normalise(coin, dset = "Raw", indiv_specs = indiv_specs)
# a quick look at the first three indicators
get_dset(coin, "Normalised")[1:4] |>
head(10)
```
This example is meant to be illustrative of the functionality of `Normalise()`, rather than being a sensible normalisation strategy, because the indicators are now on very different ranges.
In practice, if different normalisation strategies are selected, it is a good idea to keep the indicators on similar ranges, otherwise the effects will be very unequal in the aggregation step.
## Use of targets
A particular type of normalisation is "distance to target". This normalises indicators by the distance of each value to a specified target. Targets may often have a political or business meaning, such as e.g. emissions targets or sales targets.
Targets should be input into a coin using the `iMeta` argument when building the coin using `new_coin()`. In fact, the built-in example data has targets for all indicators:
```{r}
head(ASEM_iMeta[c("iCode", "Target")])
```
*(Note that these targets are fabricated just for the purposes of an example)*
To use distance-to-target normalisation, we call the `n_dist2targ()` function. Like other built in normalisation functions, this normalises a vector using a specified target. We can't use the `f_n_para` entry in `Normalise()` here because this would only pass a single target value, whereas we need to use a different target for each indicator.
However, COINr has a special case built in so that targets from `iMeta` can be used automatically. Simply set `global_specs = list(f_n = "n_dist2targ")`, and the `Normalise()` function will automatically retrieve targets from `iMeta$Target`. If targets are not present, this will generate an error. Note that the directions of indicators are also passed to `n_dist2targ()` - see that function documentation for how the normalisation is performed depending on the direction specified.
Our normalisation will then look like this:
```{r, eval=FALSE}
coin <- Normalise(coin, dset = "Raw", global_specs = list(f_n = "n_dist2targ"))
```
It is also possible to specify the `cap_max` parameter of `n_dist2targ()` as follows:
```{r, eval=FALSE}
coin <- Normalise(coin, dset = "Raw", global_specs = list(f_n = "n_dist2targ", f_n_para = list(cap_max = TRUE)))
```
# Data frames and vectors
Normalising a data frame is very similar to normalising a coin, except the input is a data frame and output is also a data frame.
```{r}
mtcars_n <- Normalise(mtcars, global_specs = list(f_n = "n_dist2max"))
head(mtcars_n)
```
Columns can be normalised with individual specifications using the `indiv_specs` argument, in exactly the same way as with a coin. Note that non-numeric columns are always ignored:
```{r}
Normalise(iris) |>
head()
```
There is also a method for numeric vectors, although usually it is just as easy to call the underlying normalisation function directly.
```{r}
# example vector
x <- runif(10)
# normalise using distance to reference (5th data point)
x_norm <- Normalise(x, f_n = "n_dist2ref", f_n_para = list(iref = 5))
# view side by side
data.frame(x, x_norm)
```
# Purses
The purse method for `Normalise()` is especially useful if you are working with multiple coins and panel data. This is because to make scores comparable from one time point to the next, it is usually a good idea to normalise indicators together rather than separately. For example, with the min-max method, indicators are typically normalised using the minimum and maximum over all time points of data, as opposed to having a separate max and min for each.
If indicators were normalised separately for each time point, then the highest scoring unit would get a score of 100 in time $t$ (assuming min-max between 0 and 100), but the highest scoring unit in time $t+1$ would *also* be assigned a score of 100. The underlying values of these two scores could be very different, but they would get the same normalised score, which makes comparisons over time misleading.
This means that the purse method for `Normalise()` is a bit different from most other purse methods, because it doesn't independently apply the function to each coin, but takes the coins all together. This has the following implications:
1. Any normalisation function can be applied globally to all coins in a purse, ensuring comparability. BUT:
2. If normalisation is done globally, it is no longer possible to automatically regenerate coins in the purse (i.e. using `regenerate()`), because the coin is no longer self-contained: it needs to know the values of the other coins in the purse. Perhaps at some point I will add a dedicated method for regenerating entire purses, but we are not there yet.
Let's anyway illustrate with an example. We build the example purse first.
```{r}
purse <- build_example_purse(quietly = TRUE)
```
Normalising a purse works in exactly the same way as normalising a coin, except for the `global` argument. By default, `global = TRUE`, which means that the normalisation will be applied over all time points simultaneously, with the aim of making the index comparable. Here, we will apply the default min-max approach to all coins:
```{r}
purse <- Normalise(purse, dset = "Raw", global = TRUE)
```
Now let's examine the data set of the first coin. We'll see what the max and min of a few indicators is:
```{r}
# get normalised data of first coin in purse
x1 <- get_dset(purse$coin[[1]], dset = "Normalised")
# get min and max of first four indicators (exclude uCode col)
sapply(x1[2:5], min, na.rm = TRUE)
sapply(x1[2:5], max, na.rm = TRUE)
```
Here we see that the minimum values are zero, but the maximum values are *not* 100, because in other coins these indicators have higher values. To show that the global maximum is indeed 100, we can extract the whole normalised data set for all years and run the same check.
```{r}
# get entire normalised data set for all coins in one df
x1_global <- get_dset(purse, dset = "Normalised")
# get min and max of first four indicators (exclude Time and uCode cols)
sapply(x1_global[3:6], min, na.rm = TRUE)
sapply(x1_global[3:6], max, na.rm = TRUE)
```
And this confirms our expectations: that the global maximum and minimum are 0 and 100 respectively.
Any type of normalisation can be performed on a purse in this "global" mode. However, keep in mind what is going on. Simply put, when `global = TRUE` this is what happens:
1. The data sets from each coin are joined together into one using the `get_dset()` function.
2. Normalisation is applied to this global data set.
3. The global data set is then split back into the coins.
So if you specify to normalise by e.g. rank, ranks will be calculated for all time points. Therefore, consider carefully if this fits the intended meaning.
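As a sketch (not run here), a global rank normalisation could be specified as follows, where `n_rank` is one of the normalisation functions built into COINr:
```{r, eval=FALSE}
# rank normalisation applied over all time points together
purse <- Normalise(purse, dset = "Raw",
                   global_specs = list(f_n = "n_rank"),
                   global = TRUE)
```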
Normalisation can also be performed independently on each coin, by setting `global = FALSE`.
```{r}
purse <- Normalise(purse, dset = "Raw", global = FALSE)
# get normalised data of first coin in purse
x1 <- get_dset(purse$coin[[1]], dset = "Normalised")
# get min and max of first four indicators (exclude uCode col)
sapply(x1[2:5], min, na.rm = TRUE)
sapply(x1[2:5], max, na.rm = TRUE)
```
Now the normalised data set in each coin will have a min and max of 0 and 100 respectively, for each indicator.
# Simplified normalisation
If the syntax of `Normalise()` looks a bit over-complicated, you can use the simpler `qNormalise()` function, which has less flexibility but makes the key function arguments more visible (they are not wrapped in lists). This function applies the same normalisation method to all indicators. It is also a generic so can be used on data frames, coins and purses. Let's demonstrate on a data frame:
```{r}
# some made up data
X <- data.frame(uCode = letters[1:10],
a = runif(10),
b = runif(10)*100)
X
```
By default, normalisation results in min-max on the $[0, 100]$ interval:
```{r}
qNormalise(X)
```
We can pass another normalisation function if we like, and the syntax is a bit easier than `Normalise()`:
```{r}
qNormalise(X, f_n = "n_dist2ref", f_n_para = list(iref = 1, cap_max = TRUE))
```
The `qNormalise()` function works in a similar way for coins and purses.
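For example, a minimal sketch (not run here) of the same kind of call on a coin, assuming a "Treated" data set exists in the coin:
```{r, eval=FALSE}
# simplified normalisation of a coin data set using z-scores
coin <- qNormalise(coin, dset = "Treated", f_n = "n_zscore",
                   f_n_para = list(c(10, 2)))
```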
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/normalise.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# build example coin
coin <- build_example_coin(quietly = TRUE)
## ---- eval=FALSE--------------------------------------------------------------
# # export coin to Excel
# export_to_excel(coin, fname = "example_coin_results.xlsx")
## ---- eval=FALSE--------------------------------------------------------------
# # make sure file is in working directory!
# coin_import <- import_coin_tool("COIN_Tool_v1_LITE_exampledata.xlsm",
# makecodes = TRUE, out2 = "coin")
## ---- eval=FALSE--------------------------------------------------------------
# coin <- COIN_to_coin(COIN)
## -----------------------------------------------------------------------------
X <- ASEM_iData[1:5,c(2,10:12)]
X
## -----------------------------------------------------------------------------
rank_df(X)
## -----------------------------------------------------------------------------
replace_df(X, data.frame(old = c("AUT", "BEL"), new = c("test1", "test2")))
## -----------------------------------------------------------------------------
round_df(X, 1)
## -----------------------------------------------------------------------------
signif_df(X, 3)
## -----------------------------------------------------------------------------
# copy
X1 <- X
# change three values
X1$GDP[3] <- 101
X1$Population[1] <- 10000
X1$Population[2] <- 70000
# reorder
X1 <- X1[order(X1$uCode), ]
# now compare
compare_df(X, X1, matchcol = "uCode")
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/other_functions.R
|
---
title: "Other Functions"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Other Functions}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette covers other functions that don't fit in other vignettes, but still seem useful. Mainly this involves import and export, and some helper functions.
# Import and export
One of the most useful functions is `export_to_excel()`. This can be used to export the contents of a coin to Excel at any point in its construction, and is very simple to run. We first build the example coin:
```{r}
library(COINr)
# build example coin
coin <- build_example_coin(quietly = TRUE)
```
Then export to Excel:
```{r, eval=FALSE}
# export coin to Excel
export_to_excel(coin, fname = "example_coin_results.xlsx")
```
This exports every data frame in the coin to a separate tab in the workbook, named according to its position in the coin. By default it excludes the Log of the coin, but this can be optionally included. The function is very useful for passing the results to people who don't use R (let's face it, that's most people).
Data can also be imported directly into COINr from the [COIN Tool](https://knowledge4policy.ec.europa.eu/composite-indicators/coin-tool_en) which is an Excel-based tool for building and analysing composite indicators, similar in fact to COINr^[Full disclosure, I was also involved in the development of the COIN Tool]. With the `import_coin_tool()` function you can import data directly from the COIN Tool to cross check or extend your analysis in COINr.
To demonstrate, we can take the example version of the COIN Tool, which you can download [here](https://composite-indicators.jrc.ec.europa.eu/sites/default/files/COIN_Tool_v1_LITE_exampledata.xlsm). Then it's as simple as running:
```{r, eval=FALSE}
# make sure file is in working directory!
coin_import <- import_coin_tool("COIN_Tool_v1_LITE_exampledata.xlsm",
makecodes = TRUE, out2 = "coin")
```
This will directly generate a coin from the COIN Tool.
# Converting from older COINr versions
COINr changed drastically from v0.6 to v1.0. So drastically that I skipped several version numbers. From v1.0, the main object of COINr is called a "coin" and this is different from the "COIN" used up to v0.6.x. If you have worked in COINr before v1.0, you can use the `COIN_to_coin()` function to convert old COINs into new coins:
```{r, eval=FALSE}
coin <- COIN_to_coin(COIN)
```
This comes with some limitations: any data sets present in the coin will not be passed on unless `recover_dsets = TRUE`. However, if this is specified, the coin cannot be regenerated because it is not possible to translate the log from the older COIN class (called the "Method") to the log in the new coin class. Still, the conversion avoids having to reformat `iData` and `iMeta`.
# Other useful functions
Here we list some accessory functions that could be useful in some circumstances.
The `rank_df()` function converts a data frame to ranks, ignoring non-numeric columns. Taking some sample data:
```{r}
X <- ASEM_iData[1:5,c(2,10:12)]
X
```
Converted to ranks, this looks like:
```{r}
rank_df(X)
```
The `replace_df()` function replaces values found anywhere in a data frame with corresponding new values:
```{r}
replace_df(X, data.frame(old = c("AUT", "BEL"), new = c("test1", "test2")))
```
The `round_df()` rounds to a specified number of decimal places, ignoring non-numeric columns:
```{r}
round_df(X, 1)
```
The `signif_df()` function is equivalent but works with a number of significant figures:
```{r}
signif_df(X, 3)
```
Finally, the `compare_df()` function gives a detailed comparison between two similar data frames that are indexed by a specified column. This function is tailored to compare results in composite indicators. Say you have a set of results from COINr and want to cross check against a separate calculation. Often, you end up with a data frame with the same columns, but possibly in a different order. Rows could be in a different order but are indexed by an identifier, here "uCode". The `compare_df()` function gives a detailed comparison between the two data frames and points out any differences.
We'll demonstrate this by copying the example data frame, altering some values and seeing what happens:
```{r}
# copy
X1 <- X
# change three values
X1$GDP[3] <- 101
X1$Population[1] <- 10000
X1$Population[2] <- 70000
# reorder
X1 <- X1[order(X1$uCode), ]
# now compare
compare_df(X, X1, matchcol = "uCode")
```
The output is a list with several entries. First, it tells us that the two data frames are not the same. The "Details" data frame lists each column and says whether it is identical or not, and how many different points there are. Finally, the "Differences" list has one entry for each column that differs, and details the value of the point from the first data frame compared to the value from the second.
From experience, this kind of output can be very helpful in quickly zooming in on differences between possibly large data frames of results. It is mainly intended for the use case described above, where the data frames are known to be similar, are of the same size, but we want to check for precise differences.
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/other_functions.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----InstallCOINrC, eval=FALSE------------------------------------------------
# install.packages("COINr")
## ----InstallCOINr, eval=FALSE-------------------------------------------------
# remotes::install_github("bluefoxr/COINr")
## ----setup--------------------------------------------------------------------
library(COINr)
## ---- eval=F------------------------------------------------------------------
# coin <- COINr_function(coin, function_arguments)
## -----------------------------------------------------------------------------
ASEM <- new_coin(ASEM_iData, ASEM_iMeta, level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
## -----------------------------------------------------------------------------
ASEM
## ---- fig.width=5, fig.height=5-----------------------------------------------
plot_framework(ASEM)
## -----------------------------------------------------------------------------
ASEM <- Denominate(ASEM, dset = "Raw")
## -----------------------------------------------------------------------------
ASEM <- Screen(ASEM, dset = "Denominated", dat_thresh = 0.9, unit_screen = "byNA")
## -----------------------------------------------------------------------------
ASEM
## -----------------------------------------------------------------------------
ASEM <- Impute(ASEM, dset = "Screened", f_i = "i_mean_grp", use_group = "EurAsia_group")
## -----------------------------------------------------------------------------
ASEM <- Treat(ASEM, dset = "Screened")
## -----------------------------------------------------------------------------
ASEM <- Normalise(ASEM, dset = "Treated")
## -----------------------------------------------------------------------------
ASEM <- Aggregate(ASEM, dset = "Normalised", f_ag = "a_amean")
## -----------------------------------------------------------------------------
# get results table
df_results <- get_results(ASEM, dset = "Aggregated", tab_type = "Aggs")
head(df_results)
## ---- fig.width=7-------------------------------------------------------------
plot_bar(ASEM, dset = "Aggregated", iCode = "Index", stack_children = TRUE)
## ---- eval=FALSE--------------------------------------------------------------
# # export coin to Excel
# export_to_excel(coin, fname = "example_coin_results.xlsx")
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/overview.R
|
---
title: "Overview"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Overview}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Introduction
This vignette introduces and gives an overview of the **COINr package**. COINr is a high-level R package which is the first fully-flexible development and analysis environment for composite indicators and scoreboards.
This vignette is one of quite a few vignettes which document the package. Here, the aim is to give a quick introduction and overview of the package. The other vignettes deal with specific operations.
As of COINr v1.0.0 some radical changes have been introduced. Most notable for existing users is the change in syntax. This is an unfortunate one-off necessity, and the changes (and how to survive them, or roll back to the old version of COINr) are described in an extra vignette called [Changes in v1.0](v1.html).
# Installation
COINr is on CRAN and can be installed by running:
```{r InstallCOINrC, eval=FALSE}
install.packages("COINr")
```
Or simply browsing for the package in R Studio. The CRAN version will be updated every 1-2 months or so. If you want the very latest version in the meantime (I am usually adding features and fixing bugs as I find them), you can install the development version from GitHub. First, install the 'remotes' package if you don't already have it, then run:
```{r InstallCOINr, eval=FALSE}
remotes::install_github("bluefoxr/COINr")
```
This should directly install the package from Github, without any other steps. You may be asked to update packages. This might not be strictly necessary, so you can also try skipping this step if you prefer.
Once the package is installed, it can be loaded as follows:
```{r setup}
library(COINr)
```
# Features
The main features of the COINr package are those for building the composite indicator by performing operations on the data, those for analysing/post-processing, and those for visualisation. Here, the main functions are briefly listed (this list is not exhaustive):
**Building** functions begin with a capital letter, except for `new_coin()` which is used to initialise a coin object:

| Function | Description |
|------------------|---------------------------------------------------------------|
| `new_coin()` | Initialise a coin object given indicator data and metadata |
| `Screen()` | Screen units based on data availability rules |
| `Denominate()` | Denominate/scale indicators by other indicators |
| `Impute()` | Impute missing data |
| `Treat()` | Treat outliers and skewed distributions |
| `qTreat()` | Simplified-syntax version of `Treat()` |
| `Normalise()` | Normalise indicators onto a common scale |
| `qNormalise()` | Simplified-syntax version of `Normalise()` |
| `Aggregate()` | Aggregate indicators using weighted mean |
Building functions are defined as those that modify the data (by creating an additional data set). They also keep a record of their arguments inside the coin, which allows coins to be *regenerated*. See [Adjustments and Comparisons](adjustments.html).
**Analysing** functions include those for multivariate analysis, weight optimisation and sensitivity analysis, as well as those for reporting results:

| Function | Description |
|------------------|---------------------------------------------------------------|
| `get_corr()` | Get correlations between any indicator/aggregate sets |
| `get_corr_flags()` | Find high or low-correlated indicators within groups |
| `get_cronbach()` | Get Cronbach's alpha for any set of indicators |
| `get_data()` | Get subsets of indicator data |
| `get_data_avail()` | Get data availability details of each unit |
| `get_denom_corr()` | Get high correlations between indicators and denominators |
| `get_eff_weights()` | Get effective weights at index level |
| `get_opt_weights()` | Get optimised weights |
| `get_results()` | Get conveniently-arranged results tables |
| `get_sensitivity()` | Perform a global uncertainty or sensitivity analysis |
| `get_stats()` | Get a table of indicator statistics |
| `get_str_weak()` | Highest and lowest-ranking indicators for a given unit |
| `get_unit_summary()` | Summary of scores and ranks for a given unit |
| `remove_elements()` | Test the effect of removing indicators or aggregates |
**Plotting** functions generate plots using the ggplot2 package:

| Function | Description |
|------------------|---------------------------------------------------------------|
| `plot_bar()` | Bar chart of a single indicator or aggregate |
| `plot_corr()` | Heat maps of correlations between indicators/aggregates |
| `plot_dist()` | Statistical plots of indicator/aggregate distributions |
| `plot_dot()` | Dot plot of an indicator/aggregate with unit highlighting |
| `plot_framework()` | Sunburst or linear plot of indicator framework |
| `plot_scatter()` | Scatter plot between two indicators/aggregates |
| `plot_sensitivity()` | Plots of sensitivity indices |
| `plot_uncertainty()` | Plots of confidence intervals on unit ranks |
**Adjustment and comparison** functions allow copies, adjustments and comparisons to be made between alternative versions of the composite indicator:

| Function | Description |
|------------------|---------------------------------------------------------------|
| `Regen()` | Regenerate results of a coin object |
| `change_ind()` | Add and remove indicators |
| `compare_coins()` | Compare the results of two coin objects |
| `compare_coins_multi()` | Compare the results of multiple coin objects |
**Other functions** are useful tools that don't fit into the other categories:

| Function | Description |
|------------------|---------------------------------------------------------------|
| `import_coin_tool()` | Import data and metadata from the COIN Tool |
| `COIN_to_coin()` | Convert an older "COIN" class object to a newer "coin" class object |
| `build_example_coin()` | Build the example coin using built-in example data |
| `build_example_purse()` | Build the example purse using built-in example data |
| `export_to_excel()` | Export contents of the coin to Excel |
All functions are fully documented and individual function help can be accessed in the usual way by `?function_name`.
The COINr package is loosely object oriented, in the sense that the composite indicator is encapsulated in an S3 class object called a "coin", and a time-indexed collection of coins is called a "purse" (see [Building coins](coins.html)). Most of the main functions listed in the previous tables take this "coin" class as the main input (and often also as the output), with other function arguments specifying how to apply the function. E.g. the syntax is typically:
```{r, eval=F}
coin <- COINr_function(coin, function_arguments)
```
Many of the main COINr functions are *generics*: they have methods also for data frames, purses, and in some cases numeric vectors. This means that COINr functions can also be used for ad-hoc operations without needing to build coins.
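For example, a quick sketch (not run) of using a generic directly on a small made-up data frame, with no coin involved:
```{r, eval=FALSE}
# ad-hoc normalisation of a data frame; non-numeric columns are ignored
X <- data.frame(uCode = c("A", "B", "C"),
                x1 = c(1, 5, 10))
Normalise(X)
```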
# Quick example
The COINr package contains some example data which is used in most of the vignettes to demonstrate the functions, and this comes from the [ASEM Sustainable Connectivity Portal](https://composite-indicators.jrc.ec.europa.eu/asem-sustainable-connectivity/). It is a data set of 49 indicators covering 51 Asian and European countries, measuring "sustainable connectivity". Here we work through building a composite indicator, and link to the other vignettes for more details.
Before proceeding, let's clearly define a few terms first to avoid confusion later on.
+ An *indicator* is a variable which has an observed value for each unit. Indicators might be things like life expectancy, CO2 emissions, number of tertiary graduates, and so on.
+ A *unit* is one of the entities that you are comparing using indicators. Often, units are countries, but they could also be regions, universities, individuals or even competing policy options (the latter is the realm of multicriteria decision analysis).
We begin by building a new "coin". To build a coin you need two data frames which are inputs to the `new_coin()` function. See the vignette on [Building coins](coins.html) for more details on this.
```{r}
ASEM <- new_coin(ASEM_iData, ASEM_iMeta, level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
```
The output of `new_coin()` is a coin class object with a single data set called "Raw":
```{r}
ASEM
```
Let's view the structure of the index that we have specified, using the `plot_framework()` function:
```{r, fig.width=5, fig.height=5}
plot_framework(ASEM)
```
See the [Visualisation](visualisation.html) vignette for the full range of plotting options in COINr.
At the moment the coin contains only our raw data. To build the composite indicator we need to perform operations on the coin. All of these operations are optional and can be performed in any order. We begin by *denominating* the raw data: that is, we divide some of the indicators by other quantities to make our indicators comparable between small and large countries. See the vignette on [Denomination](denomination.html).
```{r}
ASEM <- Denominate(ASEM, dset = "Raw")
```
The only thing we specify here is that the denomination should be performed on the "Raw" data set. The other specifications for how to denominate the indicators were already contained in the data frames that we input to `new_coin()`. Running `Denominate()` has created a new data set called "Denominated" which is reported in the message when we run the function (we can choose another name if we wish). This is *additional* to the "Raw" data set and does not overwrite it.
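As a sketch (not run), the name of the new data set can be controlled with the `write_to` argument, which is available in the building functions:
```{r, eval=FALSE}
# same operation, but name the new data set differently
ASEM <- Denominate(ASEM, dset = "Raw", write_to = "Denominated2")
```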
Next we will screen units (countries) based on data availability. We want to ensure that every unit (country) has at least 90% data availability across all indicators. Screening is done by the `Screen()` function:
```{r}
ASEM <- Screen(ASEM, dset = "Denominated", dat_thresh = 0.9, unit_screen = "byNA")
```
The details of this function can be found in the [Unit screening](screening.html) vignette. Again, by running this function we have created a new data set. Let's look again at the contents of the coin using its `print()` method:
```{r}
ASEM
```
Notice that the "Screened" data set now has 46 units because five have been screened out, having less than 90% data availability.
Next we will impute any remaining missing data points. This can be done in a variety of ways, but here we choose to impute using the group mean, i.e. if a country is in the "Asia" group, we replace missing points by the Asian mean. If a country is in the "Europe" group, we replace with the European mean.
```{r}
ASEM <- Impute(ASEM, dset = "Screened", f_i = "i_mean_grp", use_group = "EurAsia_group")
```
This writes another data set called "Imputed", which has filled in all the missing data points. Again, we have to specify which data set to impute, and we have chosen the "Screened" data set. Full details of the imputation function can be found in the [Imputation](imputation.html) vignette.
We would next like to treat any outliers. The `Treat()` function gives a number of options, but by default will identify outliers using skewness and kurtosis thresholds, then Winsorise or log-transform indicators until they are brought within the specified thresholds. This function is slightly complicated and full details can be found in the [Outlier treatment](treat.html) vignette.
```{r}
ASEM <- Treat(ASEM, dset = "Screened")
```
The details of the data treatment can be found inside the coin. A simplified version of `Treat()` is also available, called `qTreat()`, which may be easier to use in many cases.
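As a sketch (not run), the simplified version might be called like this, assuming we want to limit Winsorisation to three points per indicator:
```{r, eval=FALSE}
# simplified outlier treatment with a Winsorisation limit of three points
ASEM <- qTreat(ASEM, dset = "Screened", winmax = 3)
```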
The final step before aggregating is to bring the indicators onto a common scale by normalising them. The `Normalise()` function will, by default, scale each indicator onto the $[0, 100]$ interval using a "min-max" approach.
```{r}
ASEM <- Normalise(ASEM, dset = "Treated")
```
Again, because `Normalise()` is a slightly complex function (unless it is run at defaults, as above), a simplified version called `qNormalise()` is also available. Details on normalisation can be found in the [Normalisation](normalise.html) vignette.
To conclude the construction of the composite indicator, we must aggregate the normalised indicators up within their aggregation groups. In our example, indicators (level 1) are aggregated into "pillars" (level 2), which are themselves aggregated up into "sub-indexes" (level 3), which are finally aggregated into a single index (level 4). The `Aggregate()` function will aggregate following the structure which was specified in the `iMeta` argument to `new_coin()`. By default, this is done using the arithmetic mean, and using weights which were also specified in `iMeta`.
```{r}
ASEM <- Aggregate(ASEM, dset = "Normalised", f_ag = "a_amean")
```
Details on aggregation can be found in the [Aggregation](aggregate.html) vignette.
We now have a fully-constructed coin with index scores for each country. How do we look at the results? One way is the `get_results()` function which extracts a conveniently-arranged table of results:
```{r}
# get results table
df_results <- get_results(ASEM, dset = "Aggregated", tab_type = "Aggs")
head(df_results)
```
This shows at a glance the top-ranking countries and their scores. See the [Presenting Results](results.html) vignette for more ways to generate results tables.
We can also generate a bar chart:
```{r, fig.width=7}
plot_bar(ASEM, dset = "Aggregated", iCode = "Index", stack_children = TRUE)
```
This also shows the underlying sub-index scores.
We will not explore all functions here. As a final useful step, we can export the entire contents of the coin to Excel if needed:
```{r, eval=FALSE}
# export coin to Excel
export_to_excel(coin, fname = "example_coin_results.xlsx")
```
# Finally
The preceding example covered a number of features of COINr. Features that were not mentioned can be found in the following vignettes:
* [Weights](weights.html)
* [Analysis](analysis.html)
* [Sensitivity analysis](sensitivity.html)
* [Adjustments and Comparisons](adjustments.html)
* [Data selection](data_selection.html)
* [Other functions](other_functions.html)
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/overview.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# get results table
df_results <- get_results(coin, dset = "Aggregated", tab_type = "Aggs")
head(df_results)
## -----------------------------------------------------------------------------
# get results table
df_results <- get_results(coin, dset = "Aggregated", tab_type = "Aggs", use_group = "GDPpc_group", use = "groupranks")
# see first few entries in "XL" group
head(df_results[df_results$GDPpc_group == "XL", ])
## -----------------------------------------------------------------------------
# see first few entries in "L" group
head(df_results[df_results$GDPpc_group == "L", ])
## -----------------------------------------------------------------------------
get_unit_summary(coin, usel = "IND", Levels = c(4,3,2), dset = "Aggregated")
## -----------------------------------------------------------------------------
get_str_weak(coin, dset = "Raw", usel = "ESP")
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/results.R
|
---
title: "Presenting Results"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Presenting Results}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This is a short vignette which explains some functions in COINr for extracting results from the coin. Once the coin is fully built, up to the point of aggregation, an immediate task is to see what the main results are. In composite indicators, the main starting point is often the ranking of units based on the highest level of aggregation, i.e. the index.
While the aggregated data set (the data set created by `Aggregate()`) has all the aggregate scores in it, it requires a little manipulation to see it in an easy to read format. To help with this, the `get_results()` function extracts a conveniently-arranged results table:
```{r}
library(COINr)
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# get results table
df_results <- get_results(coin, dset = "Aggregated", tab_type = "Aggs")
head(df_results)
```
The output of `get_results()` is a table sorted by the highest level of aggregation (here, the index), and with the columns arranged so that the highest level of aggregation is first, working down to lower levels. The function has several arguments, including `also_get` (names of further columns to attach to the table, such as groups, denominators), `tab_type` (controlling which columns to output), `use` (whether to show scores or ranks), and `order_by` (which column to use to sort the table).
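For example, a sketch (not run here) of the same table but displaying ranks rather than scores:
```{r, eval=FALSE}
# results table with ranks instead of scores
df_ranks <- get_results(coin, dset = "Aggregated", tab_type = "Aggs", use = "ranks")
head(df_ranks)
```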
A useful feature is to return ranks of units inside groups. For example, rather than returning scores we can return ranks within GDP per capita groups:
```{r}
# get results table
df_results <- get_results(coin, dset = "Aggregated", tab_type = "Aggs", use_group = "GDPpc_group", use = "groupranks")
# see first few entries in "XL" group
head(df_results[df_results$GDPpc_group == "XL", ])
```
```{r}
# see first few entries in "L" group
head(df_results[df_results$GDPpc_group == "L", ])
```
Another function of interest zooms in on a single unit. The `get_unit_summary()` function returns a summary of a unit's scores and ranks at specified levels. Typically we can use this to look at a unit's index scores and scores for the aggregates:
```{r}
get_unit_summary(coin, usel = "IND", Levels = c(4,3,2), dset = "Aggregated")
```
This is a summary for "IND" (India) at levels 4 (index), 3 (sub-index) and 2 (pillar). It shows the score and rank.
A final function here is `get_str_weak()`. This gives the "strengths and weaknesses" of a unit, in terms of its indicators with the highest and lowest ranks. This can be particularly useful in "country profiles", for example.
```{r}
get_str_weak(coin, dset = "Raw", usel = "ESP")
```
The default output is five strengths and five weaknesses. The direction of the indicators is adjusted - see the `adjust_direction` parameter. A number of other parameters can also be adjusted which help to guide the tables to give sensible values, for example excluding indicators with binary values. See the function documentation for more details.
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/results.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# example data
iData <- ASEM_iData[40:51, c("uCode", "Research", "Pat", "CultServ", "CultGood")]
iData
## -----------------------------------------------------------------------------
l_scr <- Screen(iData, unit_screen = "byNA", dat_thresh = 0.75)
## -----------------------------------------------------------------------------
str(l_scr, max.level = 1)
## -----------------------------------------------------------------------------
l_scr$ScreenedData
## -----------------------------------------------------------------------------
head(l_scr$DataSummary)
## -----------------------------------------------------------------------------
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# screen units from raw dset
coin <- Screen(coin, dset = "Raw", unit_screen = "byNA", dat_thresh = 0.85, write_to = "Filtered_85pc")
# some details about the coin by calling its print method
coin
## -----------------------------------------------------------------------------
coin$Analysis$Filtered_85pc$RemovedUnits
## -----------------------------------------------------------------------------
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# screen units in all coins to 85% data availability
purse <- Screen(purse, dset = "Raw", unit_screen = "byNA",
dat_thresh = 0.85, write_to = "Filtered_85pc")
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/screening.R
|
---
title: "Unit Screening"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Unit Screening}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Unit screening is a screening or filtering of units based on data availability rules. Just like with indicators (columns), when a unit (row) has very few data points available, it may make sense to remove it. This avoids drawing conclusions on units with very few data points. It will also increase the percentage data availability of each indicator once the units have been removed.
The COINr function `Screen()` is a generic function with methods for data frames, coins and purses. It is a *building* function in that it creates a new data set in `$.Data` as its output.
# Data frames
We begin with data frames. Let's take a subset of the inbuilt example data for demonstration. I cherry-pick some rows and columns which have some missing values.
```{r}
library(COINr)
# example data
iData <- ASEM_iData[40:51, c("uCode", "Research", "Pat", "CultServ", "CultGood")]
iData
```
The data has four indicators, plus an identifier column "uCode". Looking at each unit, the data availability is variable. We have 12 units in total.
Now let's use `Screen()` to screen out some of these units. Specifically, we will remove any units that have less than 75% data availability (3 of 4 indicators with non-`NA` values):
```{r}
l_scr <- Screen(iData, unit_screen = "byNA", dat_thresh = 0.75)
```
The output of `Screen()` is a list:
```{r}
str(l_scr, max.level = 1)
```
We can see already that the "RemovedUnits" entry tells us that three units were removed based on our specifications. We now have our new screened data set:
```{r}
l_scr$ScreenedData
```
And we have a summary of data availability and some other things:
```{r}
head(l_scr$DataSummary)
```
This table is in fact generated by `get_data_avail()` - some more details can be found in the [Analysis](analysis.html) vignette.
Other than data availability, units can also be screened based on the presence of zeros, or on both - this is specified by the `unit_screen` argument. Use the `Force`^[Luke. Sorry.] argument to override the screening rules for specified units if required (either to force inclusion or force exclusion).
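As a sketch (not run), forcing the inclusion of a unit that would otherwise be screened out might look like the following, where the two-column format of `Force` (unit codes plus a logical `Include` column) and the unit code used here are assumptions to be checked against the function documentation:
```{r, eval=FALSE}
# force inclusion of a hypothetical unit "XYZ" regardless of data availability
l_scr_forced <- Screen(iData, unit_screen = "byNA", dat_thresh = 0.75,
                       Force = data.frame(uCode = "XYZ", Include = TRUE))
```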
# Coins
Screening on coins is very similar to data frames, because the coin method extracts the relevant data set, passes it to the data frame method, and then puts the output back as a new data set. This means the arguments are almost the same. The only things that differ are specifying which data set to screen, the name to give the new data set, and whether to output a coin or a list.
We'll build the example coin, then screen the raw data set with a threshold of 85% data availability and also name the new data set something different rather than "Screened" (the default):
```{r}
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# screen units from raw dset
coin <- Screen(coin, dset = "Raw", unit_screen = "byNA", dat_thresh = 0.85, write_to = "Filtered_85pc")
# some details about the coin by calling its print method
coin
```
The printed summary shows that the new data set only has 48 units, compared to the raw data set with 51. We can find which units were filtered because this is stored in the coin's "Analysis" sub-list:
```{r}
coin$Analysis$Filtered_85pc$RemovedUnits
```
The Analysis sub-list also contains the data availability table that is output by `Screen()`. As with the data frame method, we can also choose to screen units by presence of zeroes, or a combination of zeroes and missing values.
# Purses
For completeness we also demonstrate the purse method. Like most purse methods, this simply applies the coin method to each coin in the purse, without any special features. Here, we perform the same example as in the coin section, but on a purse of coins:
```{r}
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# screen units in all coins to 85% data availability
purse <- Screen(purse, dset = "Raw", unit_screen = "byNA",
dat_thresh = 0.85, write_to = "Filtered_85pc")
```
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/screening.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# build example coin
coin <- build_example_coin(quietly = TRUE)
## -----------------------------------------------------------------------------
# component of SA_specs for winmax distribution
l_winmax <- list(Address = "$Log$Treat$global_specs$f1_para$winmax",
Distribution = 1:5,
Type = "discrete")
## -----------------------------------------------------------------------------
# normalisation method
# first, we define the two alternatives: minmax or zscore (along with respective parameters)
norm_alts <- list(
list(f_n = "n_minmax", f_n_para = list(c(1,100))),
list(f_n = "n_zscore", f_n_para = list(c(10,2)))
)
# now put this in a list
l_norm <- list(Address = "$Log$Normalise$global_specs",
Distribution = norm_alts,
Type = "discrete")
## -----------------------------------------------------------------------------
# get nominal weights
w_nom <- coin$Meta$Weights$Original
# build data frame specifying the levels to apply the noise at
noise_specs = data.frame(Level = c(2,3),
NoiseFactor = c(0.25, 0.25))
# get 100 replications
noisy_wts <- get_noisy_weights(w = w_nom, noise_specs = noise_specs, Nrep = 100)
# examine one of the noisy weight sets
tail(noisy_wts[[1]])
## -----------------------------------------------------------------------------
# component of SA_specs for weights
l_weights <- list(Address = "$Log$Aggregate$w",
Distribution = noisy_wts,
Type = "discrete")
## -----------------------------------------------------------------------------
## aggregation
l_agg <- list(Address = "$Log$Aggregate$f_ag",
Distribution = c("a_amean", "a_gmean"),
Type = "discrete")
## -----------------------------------------------------------------------------
# create overall specification list
SA_specs <- list(
Winmax = l_winmax,
Normalisation = l_norm,
Weights = l_weights,
Aggregation = l_agg
)
## ---- eval=FALSE--------------------------------------------------------------
# # Not run here: will take a few seconds to finish if you run this
# SA_res <- get_sensitivity(coin, SA_specs = SA_specs, N = 100, SA_type = "UA",
# dset = "Aggregated", iCode = "Index")
## ----include=FALSE------------------------------------------------------------
SA_res <- readRDS("UA_results.RDS")
## ---- fig.width= 7------------------------------------------------------------
plot_uncertainty(SA_res)
## -----------------------------------------------------------------------------
head(SA_res$RankStats)
## ---- eval=FALSE--------------------------------------------------------------
# # Not run here: will take a few seconds to finish if you run this
# SA_res <- get_sensitivity(coin, SA_specs = SA_specs, N = 100, SA_type = "SA",
# dset = "Aggregated", iCode = "Index", Nboot = 100)
## ----include=FALSE------------------------------------------------------------
SA_res <- readRDS("SA_results.RDS")
## ---- fig.width=5-------------------------------------------------------------
plot_sensitivity(SA_res)
## ---- fig.width=7-------------------------------------------------------------
plot_sensitivity(SA_res, ptype = "box")
## -----------------------------------------------------------------------------
# run function removing elements in level 2
l_res <- remove_elements(coin, Level = 2, dset = "Aggregated", iCode = "Index")
# get summary of rank changes
l_res$MeanAbsDiff
|
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/sensitivity.R
|
---
title: "Sensitivity Analysis"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Sensitivity Analysis}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Introduction
Sensitivity analysis is often confused with *uncertainty analysis*. Uncertainty analysis involves estimating the uncertainty in the outputs of a system (here, the scores and ranks of the composite indicator), given the uncertainties in the inputs (here, methodological decisions, weights, etc.). The results of an uncertainty analysis include, for example, confidence intervals over the ranks, median ranks, and so on.
Sensitivity analysis is an extra step after uncertainty analysis, and estimates which of the input uncertainties are driving the output uncertainty, and by how much. A rule of thumb, known as the [Pareto Principle](https://en.wikipedia.org/wiki/Pareto_principle) (or the 80/20 Rule) suggests that often, only a small proportion of the input uncertainties are causing the majority of the output uncertainty. Sensitivity analysis allows us to find which input uncertainties are significant (and therefore perhaps worthy of extra attention), and which are not important.
In reality, sensitivity analysis and uncertainty analysis can be performed simultaneously. However in both cases, the main technique is to use Monte Carlo methods. This essentially involves re-calculating the composite indicator many times, each time randomly varying the uncertain variables (assumptions, parameters), in order to estimate the output distributions.
COINr implements a flexible variance-based global sensitivity analysis approach, which allows almost any assumption to be varied, as long as the distribution of alternative values can be described. Variance-based "sensitivity indices" are estimated using a Monte Carlo design (running the composite indicator many times with a particular combination of input values). This follows the methodology described in [this paper](https://doi.org/10.1111/j.1467-985X.2005.00350.x).
# Defining the problem
The first step in a sensitivity analysis is to identify *which* assumptions to treat as uncertain, and *what* alternative values to assign to each assumption. Let's begin with the "which": think about all the ingredients that have gone into making the composite indicator: the data itself, the selection of indicators, and the methodological decisions along the way (which imputation method to use, if any; whether to treat outliers and in what way, which normalisation method, etc...). We cannot test everything, but we can pick a few assumptions that seem important, and where we have plausible alternatives that we could assign.
Here we will work with the familiar in-built example coin. You can see exactly how this is built by calling `edit(build_example_coin)` by the way.
```{r}
library(COINr)
# build example coin
coin <- build_example_coin(quietly = TRUE)
```
We will test four assumptions:
1. The maximum number of Winsorised data points. This is currently set at five, but we will let it vary between 1 and 5 points.
2. The normalisation method. By default the min-max method is used, but we will also consider the z-score as an alternative.
3. The weights. We will test perturbing the weights randomly inside a set interval.
4. The aggregation method. The example uses the arithmetic mean but we will also consider the geometric mean as an alternative.
# Input distributions
Having now selected *which* assumptions to vary, we can now work on defining the distributions for each assumption. Sensitivity analysis is a probabilistic tool, so each input assumption is treated as a random variable, which means we have to define a distribution for each assumption.
The function to run a sensitivity in COINr is called `get_sensitivity()`. It takes a little understanding to get this set up properly. The argument that defines the input distributions is a list called `SA_specs`. This specifies which assumptions to vary, the distributions for each assumption, and where each assumption can be found in the coin. Let's demonstrate by defining one part of `SA_specs`, for our first assumption: the maximum number of Winsorised points.
```{r}
# component of SA_specs for winmax distribution
l_winmax <- list(Address = "$Log$Treat$global_specs$f1_para$winmax",
Distribution = 1:5,
Type = "discrete")
```
Each uncertain assumption is defined by a list with three components. The "Address" component describes *where* in the coin the object of interest is found. You should look inside the coin to find this: notice that you don't specify the name of the coin itself, i.e. it is not `coin$Log$Treat$...` but rather just `$Log$Treat$...`.
Next is the "Distribution", which essentially describes the alternatives for the parameter. Here we have entered `1:5`, i.e. any integer between 1 and 5. Finally the "Type" entry should be set to either "discrete" or "continuous". In the former, the distribution is assumed to be discrete, so that samples are taken from the alternatives given in "Distribution". In the latter, the distribution is assumed to be continuous and uniform, and "Distribution" should be a 2-length vector specifying the upper and lower bounds of the parameter. Obviously in this latter case, the parameter must be numeric, and must be able to take non-integer values.
In summary, the list above specifies that the winmax parameter should be allowed to vary between 1 and 5 (integers). This list will be combined with lists for the other assumptions below, and input to `get_sensitivity()`.
Now let's see the entry for the normalisation method:
```{r}
# normalisation method
# first, we define the two alternatives: minmax or zscore (along with respective parameters)
norm_alts <- list(
list(f_n = "n_minmax", f_n_para = list(c(1,100))),
list(f_n = "n_zscore", f_n_para = list(c(10,2)))
)
# now put this in a list
l_norm <- list(Address = "$Log$Normalise$global_specs",
Distribution = norm_alts,
Type = "discrete")
```
This is a bit more complicated because when we switch between the min-max and z-score methods, we also want to use the corresponding set of parameters (`f_n_para`). That means that the parameter to target is the entire "global_specs" argument of `Normalise()`. We define two alternatives: one with min-max between 1 and 100, and the other being z-score with mean 10 and standard deviation 2. Notice that you need to be careful to wrap things appropriately in lists as required by each function.
Otherwise the rest is straightforward: we define the address and attach the `norm_alts` alternatives to the main list chunk. The distribution is discrete. Notice that each specification includes the "default" value of the assumption, not just the alternative(s).
Next is the weights, and this is also a special case. There are different ways we could approach changing the weights. First, we might have a small number of alternative weight sets, perhaps one is the original weights, one is from PCA, and one has been adjusted by hand. In that case, we could put these three sets of weights in a list and set the address to `$Log$Aggregate$w`, as a discrete distribution.
A second possibility would be to treat individual weights as individual parameters. This might be a good idea if we only want to vary a small number of individual weights, e.g. the sub-index weights (of which there are two). Then we could define one assumption for one weight and set the address as e.g. `$Meta$Weights$Original$Weight[58]`, which is the location of the "Conn" sub-index weight, and similarly for the "Sust" sub-index. We would then set `Type = "continuous"` and set the upper and lower bounds as needed, e.g. `c(0.5, 1)` to vary between 0.5 and 1.
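A sketch of such a list chunk (not run here), following the address convention described above:
```{r, eval=FALSE}
# treat a single sub-index weight as a continuous parameter between 0.5 and 1
l_w_conn <- list(Address = "$Meta$Weights$Original$Weight[58]",
                 Distribution = c(0.5, 1),
                 Type = "continuous")
```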
To instead get an overall perturbation of weights, we have to use a helper function. The `get_noisy_weights()` function is designed for this purpose: it generates replications of your set of weights, where each replication has some random noise added to it according to your specifications. Here is how it works. You take your nominal weights (those that you would normally use) and feed them into the function:
```{r}
# get nominal weights
w_nom <- coin$Meta$Weights$Original
# build data frame specifying the levels to apply the noise at
noise_specs = data.frame(Level = c(2,3),
NoiseFactor = c(0.25, 0.25))
# get 100 replications
noisy_wts <- get_noisy_weights(w = w_nom, noise_specs = noise_specs, Nrep = 100)
# examine one of the noisy weight sets
tail(noisy_wts[[1]])
```
The `noisy_wts` object is a list containing 100 data frames, each of which is a set of weights with slightly different values. The sample above shows the last few rows of one of these weight-sets.
Now we can feed this into our list chunk:
```{r}
# component of SA_specs for weights
l_weights <- list(Address = "$Log$Aggregate$w",
Distribution = noisy_wts,
Type = "discrete")
```
Notice that the distribution is defined as discrete because in practice we have 100 alternative sets of weights, even though we are emulating a continuous distribution.
Last of all we define the list chunk for the aggregation method:
```{r}
## aggregation
l_agg <- list(Address = "$Log$Aggregate$f_ag",
Distribution = c("a_amean", "a_gmean"),
Type = "discrete")
```
This is relatively straightforward.
Having defined all of our input distributions individually, it's time to put them all together:
```{r}
# create overall specification list
SA_specs <- list(
Winmax = l_winmax,
Normalisation = l_norm,
Weights = l_weights,
Aggregation = l_agg
)
```
We simply put our list chunks into a single list. The names of this list are used as the names of the assumptions, so we can name them how we want.
# Uncertainty analysis
That was all a bit complicated, but this is because defining a sensitivity analysis *is* complicated! Now COINr can take over from here. We can now call the `get_sensitivity()` function:
```{r, eval=FALSE}
# Not run here: will take a few seconds to finish if you run this
SA_res <- get_sensitivity(coin, SA_specs = SA_specs, N = 100, SA_type = "UA",
dset = "Aggregated", iCode = "Index")
```
```{r include=FALSE}
SA_res <- readRDS("UA_results.RDS")
```
This is not actually run when building this vignette because it can take a little while to finish. When it is run, you should get a message saying that the weights address is not found or `NULL`. COINr checks each address to see if there is already an object at that address inside the coin. If there is not, or it is `NULL`, it asks if you want to continue anyway. In our case, the fact that it is `NULL` is not because we made a mistake with the address, but simply because the `w` argument of `Aggregate()` was not specified when we built the coin (i.e. it was set to `NULL`), and the default "Original" weights were used. Sometimes, however, if an address is `NULL` it might be because you have made an error.
Looking at the syntax of `get_sensitivity()`: apart from passing the coin and `SA_specs`, we also have to specify how many replications to run (`N` - more replications results in a more accurate sensitivity analysis, but also takes longer); whether to run an uncertainty analysis (`SA_type = "UA"`) or a sensitivity analysis (`SA_type = "SA"`); and finally the *target* output of the sensitivity analysis, which in this case we have specified as the Index, from the aggregated data set.
If the type of sensitivity analysis (`SA_type`) is set to `"UA"`, assumptions will be sampled randomly and the results will simply consist of the distribution over the ranks. This takes less replications, and may be sufficient if you are just interested in the output uncertainty, without attributing it to each input assumption. We can directly look at the output uncertainty analysis by calling the `plot_uncertainty()` function:
```{r, fig.width= 7}
plot_uncertainty(SA_res)
```
Results are contained in the output of `get_sensitivity()` and can also be viewed directly, e.g.
```{r}
head(SA_res$RankStats)
```
This shows the nominal, mean, median, and 5th/95th percentile ranks of each unit, as a result of the induced uncertainty.
# Sensitivity analysis
The process for performing a sensitivity analysis is the same, but we set `SA_type = "SA"`.
```{r, eval=FALSE}
# Not run here: will take a few seconds to finish if you run this
SA_res <- get_sensitivity(coin, SA_specs = SA_specs, N = 100, SA_type = "SA",
dset = "Aggregated", iCode = "Index", Nboot = 100)
```
```{r include=FALSE}
SA_res <- readRDS("SA_results.RDS")
```
If you run this, you will see an important difference: although we set `N = 100` the coin is replicated 600 times! This is because a variance based sensitivity analysis requires a specific *experimental design*, and the actual number of runs is $N(d+2)$, where $d$ is the number of uncertain assumptions.
Notice also that we have set `Nboot = 100`, which is the number of bootstrap replications to perform, and is used for estimating confidence intervals on sensitivity indices.
Let's now plot the results using the `plot_sensitivity()` function:
```{r, fig.width=5}
plot_sensitivity(SA_res)
```
By default this returns a bar chart. Each bar gives the sensitivity of the results (in this case the average rank change of the Index compared to nominal values) to each assumption. Clearly, the most sensitive assumption is the aggregation method, and the least sensitive is the maximum number of points to Winsorise.
The same results can be plotted as a pie chart, or as a box plot, depending on how we set `ptype`:
```{r, fig.width=7}
plot_sensitivity(SA_res, ptype = "box")
```
The confidence intervals are rather wide here, especially on the first order sensitivity indices. By increasing `N`, the precision of these estimates will increase and the confidence intervals will narrow. In any case, the right hand plot (total order sensitivity indices) is already clear: despite the estimation uncertainty, the order of importance of the four assumptions is clearly distinguished.
# Discussion/tips
The `get_sensitivity()` function is very flexible because it can target anything inside the coin. However, this comes at the expense of carefully specifying the uncertainties in the analysis, and having a general understanding of how a coin is regenerated. For this latter part, it may also help to read the [Adjustments and comparisons](adjustments.html) vignette.
Some particular points to consider:
* It is your responsibility to get the correct address for each parameter and to understand its use.
* It is also your responsibility to make sure that there are no conflicts caused by methodological variations, such as negative values being fed into a geometric mean.
* You can't target the same parameter twice in the same sensitivity analysis - one specification will just overwrite the other.
In general it is better to start simple: start with one or two assumptions to vary and gradually expand the level of complexity as needed. You can also do a test run with a low `N` to see if the results are vaguely sensible.
Variance based sensitivity analysis is complicated, especially here because the assumptions to vary are often not just a single value, but could be strings, data frames or lists. Again, an understanding of COINr and a basic understanding of sensitivity analysis can help a lot.
One important point is that in a sensitivity analysis, the target of the sensitivity analysis is the *mean absolute rank change*. COINr takes the target output that you specify, and for each replication compares the ranks of that variable to the nominal ranks. It then takes the differences between these two sets of ranks and takes the mean absolute value of these differences: the higher the value of this quantity, the more the ranks have changed with respect to the nominal. This is done because variance-based SA generally requires a univariate output.
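In symbols, for each replication $k$ the target quantity is something like $\bar{d}_k = \frac{1}{n}\sum_{u=1}^{n} \left| r_{u,k} - r_u^{(0)} \right|$, where $r_{u,k}$ is the rank of unit $u$ in replication $k$, $r_u^{(0)}$ is its nominal rank, and $n$ is the number of units.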
If you want to perform a more complex sensitivity analysis, perhaps generating separate sensitivity indices for each unit, you could also do this by bypassing `get_sensitivity()` altogether. If you want to venture down this path, check out `SA_sample()` and `SA_estimate()`, which are called by `get_sensitivity()`. This would definitely require some custom coding on your part but if you feel up for the challenge, go for it!
# Removing elements
Last of all we turn to a separate function which is not variance-based sensitivity analysis but is related to sensitivity analysis in general. The `remove_elements()` function tests the effect of removing components of the composite indicator one at a time. This can be useful to find the impact of each component, in terms of "if I were to remove this, what would happen?".
To run this, we input our coin into the function and specify which level we want to remove components. For example, specifying `Level = 2` removes each component of level 2 one at a time, with replacement, and regenerates the results each time. We also have to specify which indicator/aggregate to target as the output:
```{r}
# run function removing elements in level 2
l_res <- remove_elements(coin, Level = 2, dset = "Aggregated", iCode = "Index")
# get summary of rank changes
l_res$MeanAbsDiff
```
The output contains details of ranks and scores, but the "MeanAbsDiff" entry is a good summary: it shows the mean absolute rank difference between nominal ranks, and ranks with each component removed. Here, a higher value means that the ranks are changed more when that component is removed and vice versa. Clearly, the impact of removing components is not the same, and this can be useful information if you are considering whether or not to discard part of an index.
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/sensitivity.Rmd
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
# numbers between 1 and 10
x <- 1:10
# two outliers
x <- c(x, 30, 100)
## -----------------------------------------------------------------------------
library(COINr)
skew(x)
kurt(x)
## -----------------------------------------------------------------------------
check_SkewKurt(x)
## ---- fig.width=5, fig.height=3.5---------------------------------------------
l_treat <- Treat(x, f1 = "winsorise", f1_para = list(winmax = 2),
f_pass = "check_SkewKurt")
plot(x, l_treat$x)
## -----------------------------------------------------------------------------
check_SkewKurt(l_treat$x)
## -----------------------------------------------------------------------------
# select three indicators
df1 <- ASEM_iData[c("Flights", "Goods", "Services")]
# treat the data frame using defaults
l_treat <- Treat(df1)
str(l_treat, max.level = 1)
## -----------------------------------------------------------------------------
l_treat$Dets_Table
## -----------------------------------------------------------------------------
l_treat$Treated_Points
## -----------------------------------------------------------------------------
coin <- build_example_coin(up_to = "new_coin")
## -----------------------------------------------------------------------------
coin <- Treat(coin, dset = "Raw")
## -----------------------------------------------------------------------------
# summary of treatment for each indicator
head(coin$Analysis$Treated$Dets_Table)
## -----------------------------------------------------------------------------
# default treatment for all cols
specs_def <- list(f1 = "winsorise",
f1_para = list(na.rm = TRUE,
winmax = 5,
skew_thresh = 2,
kurt_thresh = 3.5,
force_win = FALSE),
f2 = "log_CT",
f2_para = list(na.rm = TRUE),
f_pass = "check_SkewKurt",
f_pass_para = list(na.rm = TRUE,
skew_thresh = 2,
kurt_thresh = 3.5))
## -----------------------------------------------------------------------------
# treat with max winsorisation of 1 point
coin <- Treat(coin, dset = "Raw", global_specs = list(f1_para = list(winmax = 1)))
# see what happened
coin$Analysis$Treated$Dets_Table |>
head(10)
## -----------------------------------------------------------------------------
# change individual specs for Flights
indiv_specs <- list(
Flights = list(
f1_para = list(winmax = 0)
)
)
# re-run data treatment
coin <- Treat(coin, dset = "Raw", indiv_specs = indiv_specs)
## -----------------------------------------------------------------------------
coin$Analysis$Treated$Dets_Table[
coin$Analysis$Treated$Dets_Table$iCode == "Flights",
]
## -----------------------------------------------------------------------------
# change individual specs for two indicators
indiv_specs <- list(
Flights = "none",
LPI = "none"
)
# re-run data treatment
coin <- Treat(coin, dset = "Raw", indiv_specs = indiv_specs)
## ---- include = FALSE---------------------------------------------------------
# check if performance package installed
perf_installed <- requireNamespace("performance", quietly = TRUE)
## ---- eval = perf_installed---------------------------------------------------
# library(performance)
#
# # the check_outliers function outputs a logical vector which flags specific points as outliers.
# # We need to wrap this to give a single TRUE/FALSE output, where FALSE means it doesn't pass,
# # i.e. there are outliers
# outlier_pass <- function(x){
# # return FALSE if any outliers
# !any(check_outliers(x))
# }
#
# # now call treat(), passing this function
# # we set f_pass_para to NULL to avoid passing default parameters to the new function
# coin <- Treat(coin, dset = "Raw",
# global_specs = list(f_pass = "outlier_pass",
# f_pass_para = NULL)
# )
#
# # see what happened
# coin$Analysis$Treated$Dets_Table |>
# head(10)
## -----------------------------------------------------------------------------
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# apply treatment to all coins in purse (default specs)
purse <- Treat(purse, dset = "Raw")
## -----------------------------------------------------------------------------
# select three indicators
df1 <- ASEM_iData[c("Flights", "Goods", "Services")]
# treat data frame, changing winmax and skew/kurtosis limits
l_treat <- qTreat(df1, winmax = 1, skew_thresh = 1.5, kurt_thresh = 3)
## -----------------------------------------------------------------------------
l_treat$Dets_Table
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/treat.R
---
title: "Outlier Treatment"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Outlier Treatment}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Introduction
Data treatment is the process of altering indicators to improve their statistical properties, mainly for the purposes of aggregation. Data treatment is a delicate subject, because it essentially involves changing the values of certain observations, or transforming an entire distribution. Like any other step or assumption though, any data treatment should be carefully recorded and its implications understood. Of course, data treatment does not *have* to be applied, it is simply another tool in your toolbox.
# The `Treat()` function
The COINr function for treating data is called `Treat()`. This is a generic function with methods for coins, purses, data frames and numeric vectors. It is very flexible, but this can add a layer of complexity. If you mostly want to run with default options, see the `qTreat()` function mentioned below in [Simplified function].
The `Treat()` function implements a two-stage data treatment process, based on two data treatment functions (`f1` and `f2`), and a pass/fail function `f_pass` which detects outliers. The arrangement of this function is inspired by a fairly standard data treatment process applied to indicators: check skew and kurtosis; if the criteria are not met, apply Winsorisation up to a specified limit; and if Winsorisation still does not bring skew and kurtosis within the limits, apply a nonlinear transformation such as a log or Box-Cox transform.
This function generalises this process by using the following general steps:
1. Check if variable passes or fails using `f_pass`
2. If `f_pass` returns `FALSE`, apply `f1`, else return `x` unmodified
3. Check again using `f_pass`
4. If `f_pass` still returns `FALSE`, apply `f2`
5. Return the modified `x` as well as other information.
For the "typical" case described above `f1` is a Winsorisation function, `f2` is a nonlinear transformation
and `f_pass` is a skew and kurtosis check. However, any functions can be passed as `f1`, `f2` and `f_pass`, which makes it a flexible tool that is also compatible with other packages.
Further details on how this works are given in the following sections.
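To fix ideas, the logic of this process (with the default behaviour, where `f2` replaces rather than follows `f1` - see `combine_treat` later) can be sketched in a few lines of pseudo-R. This is a simplification and not the actual implementation; here `f1`, `f2` and `f_pass` are assumed to take and return plain vectors and logicals:
```{r, eval=FALSE}
treat_sketch <- function(x, f1, f2, f_pass){
  if (f_pass(x)) return(x)   # already passes: no treatment
  x1 <- f1(x)                # first treatment, e.g. Winsorisation
  if (f_pass(x1)) return(x1) # passes after f1
  f2(x)                      # otherwise apply f2 to the original x
}
```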
# Numeric vectors
The clearest way to demonstrate the `Treat()` function is on a numeric vector. Let's make a vector with a couple of outliers:
```{r}
# numbers between 1 and 10
x <- 1:10
# two outliers
x <- c(x, 30, 100)
```
We can check the skew and kurtosis of this vector:
```{r}
library(COINr)
skew(x)
kurt(x)
```
The skew and kurtosis are both high. If we follow the default limits in COINr (absolute skew capped at 2, and kurtosis capped at 3.5), this would be classed as a vector with outliers. Indeed, we can confirm this using the `check_SkewKurt()` function, which is the default pass/fail function used in `Treat()`, and which also outputs the skew and kurtosis values:
```{r}
check_SkewKurt(x)
```
Now that we know `x` has outliers, we can treat it (if we want). Using the `Treat()` function, we specify that our function for checking for outliers is `f_pass = "check_SkewKurt"`, and our first function for treating outliers is `f1 = "winsorise"`. We also pass an additional parameter to `winsorise()`, which is `winmax = 2`. You can check the `winsorise()` function documentation to better understand how it works.
```{r, fig.width=5, fig.height=3.5}
l_treat <- Treat(x, f1 = "winsorise", f1_para = list(winmax = 2),
f_pass = "check_SkewKurt")
plot(x, l_treat$x)
```
The result of this data treatment is shown in the scatter plot: one point from `x` has been Winsorised (reassigned the next highest value). We can check the skew and kurtosis of the treated vector:
```{r}
check_SkewKurt(l_treat$x)
```
Clearly, Winsorising one point was enough in this case to bring the skew and kurtosis within the specified thresholds.
# Data frames
Treatment of a data frame with `Treat()` is effectively the same as treating a numeric vector, because the data frame method passes each column of the data frame to the numeric method. Here, we use some data from the COINr package to demonstrate.
```{r}
# select three indicators
df1 <- ASEM_iData[c("Flights", "Goods", "Services")]
# treat the data frame using defaults
l_treat <- Treat(df1)
str(l_treat, max.level = 1)
```
We can see the output is a list with `x_treat`, the treated data frame; `Dets_Table`, a table describing what happened to each indicator; and `Treated_Points`, which marks which individual points were adjusted. This is effectively the same output as for treating a numeric vector.
```{r}
l_treat$Dets_Table
```
We also check the individual points:
```{r}
l_treat$Treated_Points
```
# Coins
Treating coins is a simple extension of treating a data frame. The coin method simply extracts the relevant data set as a data frame, and passes it to the data frame method. So more or less, the same arguments are present.
We begin by building the example coin, which will be used for the examples here.
```{r}
coin <- build_example_coin(up_to = "new_coin")
```
## Default treatment
The `Treat()` function can be applied directly to a coin with completely default options:
```{r}
coin <- Treat(coin, dset = "Raw")
```
For each indicator, the `Treat()` function:
1. Checks skew and kurtosis using the `check_SkewKurt()` function
2. If the indicator fails the test (returns `FALSE`), applies Winsorisation
3. Checks again skew and kurtosis
4. If the indicator still fails, applies a log transformation.
If at any stage the indicator passes the skew and kurtosis test, it is returned without further treatment.
When we run `Treat()` on a coin, it also stores information returned from `f1`, `f2` and `f_pass` in the coin:
```{r}
# summary of treatment for each indicator
head(coin$Analysis$Treated$Dets_Table)
```
Notice that only one treatment function was used here, since after Winsorisation (`f1`), all indicators passed the skew and kurtosis test (`f_pass`).
In general, `Treat()` tries to collect all information returned from the functions that it calls. Details of the treatment of individual points are also stored in `.$Analysis$Treated$Treated_Points`.
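For example, we can see which individual points were adjusted:
```{r, eval=FALSE}
# details of individually-treated points
head(coin$Analysis$Treated$Treated_Points)
```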
The `Treat()` function gives you a high degree of control over which functions are used to treat and test indicators, and it is also possible to specify different functions for different indicators. Let's begin though by seeing how we can change the specifications for all indicators, before proceeding to individual treatment.
Unless `indiv_specs` is specified (see later), the same procedure is applied to all indicators. This procedure is specified by the `global_specs` argument. To see how to use this, it is easiest to show the default value of this argument, which is built into the `Treat()` function:
```{r}
# default treatment for all cols
specs_def <- list(f1 = "winsorise",
f1_para = list(na.rm = TRUE,
winmax = 5,
skew_thresh = 2,
kurt_thresh = 3.5,
force_win = FALSE),
f2 = "log_CT",
f2_para = list(na.rm = TRUE),
f_pass = "check_SkewKurt",
f_pass_para = list(na.rm = TRUE,
skew_thresh = 2,
kurt_thresh = 3.5))
```
Notice that there are six entries in the list:
* `f1` which is a string referring to the first treatment function
* `f1_para` which is a list of any other named arguments to `f1`, excluding `x` (the data to be treated)
* `f2` and `f2_para` which are analogous to `f1` and `f1_para` but for the second treatment function
* `f_pass` is a string referring to the function to check for outliers
* `f_pass_para` a list of any other named arguments to `f_pass`, other than `x` (the data to be checked)
To understand what the individual parameters do, for example in `f1_para`, we need to look at the function called by `f1`, which is the `winsorise()` function:
* `x` A numeric vector.
* `na.rm` Set `TRUE` to remove `NA` values, otherwise returns `NA`.
* `winmax` Maximum number of points to Winsorise. Default 5. Set `NULL` to have no limit.
* `skew_thresh` A threshold for absolute skewness (positive). Default 2.25.
* `kurt_thresh` A threshold for kurtosis. Default 3.5.
* `force_win` Logical: if `TRUE`, forces winsorisation up to winmax (regardless of skew/kurt).
Here we see the same parameters as named in the list `f1_para`, and we can change the maximum number of points to be Winsorised, the skew and kurtosis thresholds, and other things.
To make adjustments, unless we want to redefine everything, we don't need to specify the entire list. So for example, if we want to change the maximum Winsorisation limit `winmax`, we can just pass this part of the list (notice we still have to wrap the parameter inside a list):
```{r}
# treat with max winsorisation of 1 point
coin <- Treat(coin, dset = "Raw", global_specs = list(f1_para = list(winmax = 1)))
# see what happened
coin$Analysis$Treated$Dets_Table |>
head(10)
```
Having imposed a much stricter Winsorisation limit (only one point), we can see that now one indicator has been passed to the second treatment function `f2`, which has performed a log transformation. After doing this, the indicator passes the skew and kurtosis test.
By default, if an indicator does not satisfy `f_pass` after applying `f1`, it is passed to `f2` *in its original form* - in other words it is not the output of `f1` that is passed to `f2`, and `f2` is applied *instead* of `f1`, rather than in addition to it. If you want to apply `f2` on top of `f1` set `combine_treat = TRUE`. In this case, if `f_pass` is not satisfied after `f1` then the output of `f1` is used as the input of `f2`. For the defaults of `f1` and `f2` this approach is probably not advisable because Winsorisation and the log transform are quite different approaches. However depending on what you want to do, it might be useful.
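For example, to apply `f2` on top of the output of `f1` rather than instead of it:
```{r, eval=FALSE}
# apply f2 to the output of f1, rather than to the original data
coin <- Treat(coin, dset = "Raw", combine_treat = TRUE)
```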
## Individual treatment
The `global_specs` specifies the treatment methodology to apply to all indicators. However, the `indiv_specs` argument (if specified), can be used to override the treatment specified in `global_specs` for specific indicators. It is specified in exactly the same way as `global_specs` but requires a parameter list for each indicator that is to have individual specifications applied, wrapped inside one list.
This is probably clearer using an example. To begin with something simple, let's say that we keep the defaults for all indicators except one, where we change the Winsorisation limit. We will set the Winsorisation limit of the indicator "Flights" to zero, to force it to be log-transformed.
```{r}
# change individual specs for Flights
indiv_specs <- list(
Flights = list(
f1_para = list(winmax = 0)
)
)
# re-run data treatment
coin <- Treat(coin, dset = "Raw", indiv_specs = indiv_specs)
```
The only thing to remember here is to make sure the list is created correctly. Each indicator to assign individual treatment must have its own list - here containing `f1_para`. Then `f1_para` itself is a list of named parameter values for `f1`. Finally, all lists for each indicator have to be wrapped into a single list to pass to `indiv_specs`. This looks a bit convoluted for changing a single parameter, but gives a high degree of control over how data treatment is performed.
We can now see what happened to "Flights":
```{r}
coin$Analysis$Treated$Dets_Table[
coin$Analysis$Treated$Dets_Table$iCode == "Flights",
]
```
Now we see that "Flights" was not brought within limits by the Winsorisation step (with `winmax = 0` nothing could be changed), so it was passed to the log transform. After that, the indicator passed the skew and kurtosis check.
As another example, we may wish to exclude some indicators from data treatment completely. To do this, we can set the corresponding entries in `indiv_specs` to `"none"`. This is the only case where we don't have to pass a list for each indicator.
```{r}
# change individual specs for two indicators
indiv_specs <- list(
Flights = "none",
LPI = "none"
)
# re-run data treatment
coin <- Treat(coin, dset = "Raw", indiv_specs = indiv_specs)
```
Now if we examine the treatment table, we will find that these indicators have been excluded from the table, as they were not subjected to treatment.
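We can verify this, for example, by checking whether the excluded indicators appear in the treatment table at all:
```{r, eval=FALSE}
# both should be FALSE since these indicators were excluded from treatment
c("Flights", "LPI") %in% coin$Analysis$Treated$Dets_Table$iCode
```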
## External functions
Any functions can be passed to `Treat()`, for both treating and checking for outliers. As an example, we can pass the `check_outliers()` outlier-detection function from the [performance](https://easystats.github.io/performance/reference/check_outliers.html) package.
```{r, include = FALSE}
# check if performance package installed
perf_installed <- requireNamespace("performance", quietly = TRUE)
```
The following code chunk will only run if you have the 'performance' package installed.
```{r, eval = perf_installed}
library(performance)
# the check_outliers function outputs a logical vector which flags specific points as outliers.
# We need to wrap this to give a single TRUE/FALSE output, where FALSE means it doesn't pass,
# i.e. there are outliers
outlier_pass <- function(x){
# return FALSE if any outliers
!any(check_outliers(x))
}
# now call treat(), passing this function
# we set f_pass_para to NULL to avoid passing default parameters to the new function
coin <- Treat(coin, dset = "Raw",
global_specs = list(f_pass = "outlier_pass",
f_pass_para = NULL)
)
# see what happened
coin$Analysis$Treated$Dets_Table |>
head(10)
```
Here we see that the test for outliers is much stricter and very few of the indicators pass the test, even after applying a log transformation. Clearly, how an outlier is defined can vary and depend on your application.
# Purses
The purse method for `Treat()` is fairly straightforward. It takes almost the same arguments as the coin method, and applies the same specifications to each coin. Here we simply demonstrate it on the example purse.
```{r}
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# apply treatment to all coins in purse (default specs)
purse <- Treat(purse, dset = "Raw")
```
# Simplified function
The `Treat()` function is very flexible but comes at the expense of a possibly fiddly syntax. If you don't need that level of flexibility, consider using `qTreat()`, which is a simplified wrapper for `Treat()`.
The main features of `qTreat()` are that:
* The first treatment function `f1` cannot be changed and is set to `winsorise()`.
* The `winmax` parameter, as well as the skew and kurtosis limits, are available directly as function arguments to `qTreat()`.
* The `f_pass` function cannot be changed and is always set to `check_SkewKurt()`.
* You can still choose `f2`.
The `qTreat()` function is a generic with methods for data frames, coins and purses. Here, we'll just demonstrate it on a data frame.
```{r}
# select three indicators
df1 <- ASEM_iData[c("Flights", "Goods", "Services")]
# treat data frame, changing winmax and skew/kurtosis limits
l_treat <- qTreat(df1, winmax = 1, skew_thresh = 1.5, kurt_thresh = 3)
```
Now we check what the results are:
```{r}
l_treat$Dets_Table
```
We can see that in this case, Winsorising by one point was not enough to bring "Flights" and "Goods" within the specified skew/kurtosis limits. Consequently, `f2` was invoked, which uses a log transform and brought both indicators within the specified limits.
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/treat.Rmd
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ---- eval=FALSE--------------------------------------------------------------
# # install remotes package if you don't have it
# install.packages("remotes")
## ---- eval = FALSE------------------------------------------------------------
# remotes::install_github("bluefoxr/COINr6")
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/v1.R
---
title: "Changes from COINr v1.0.0"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Changes from COINr v1.0.0}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
If you were using COINr prior to v.1.0 you may have updated the package and found that code calling COINr functions no longer works! What is going on?
COINr has undergone some major changes and most of the syntax has changed. So major that I skipped directly from v0.6 to v1.0 without any intermediate steps. First of all, if you were using COINr previously, I would like to say SORRY for any inconvenience caused by these changes. However, the changes are worth it, and this is a one-off thing - I won't be doing a seismic change like this again.
This vignette helps you to transition from COINr 0.6.x to 1.0 and explains what has happened. In short, most function names have changed, the package is more robust and flexible, panel data is more supported, interactive plots have been moved to a separate package available on GitHub, and if you don't like all this you can install the archive version of the package called "COINr6" and everything will go back to how it was. Let's go through these things one by one.
# Why
If you just want to know what has changed and how to deal with it, skip this section. If you want to know why things have changed, read on.
The short story is that I found quite a few flaws in the package which I was not happy with, given that it is in the public domain. I decided to address these flaws in one giant revision, rather than a long series of updates. I'll explain each of these points here below.
## Robustness and efficiency
COINr was the first CRAN package that I built (everyone say "aww"). In the process, I learned a lot about package development, as well as principles about programming in general. However, since I learned this while building the package, especially the first parts of the package that I wrote were (in retrospect) not written very well. For example, although I defined a "COIN" class, I didn't define methods for the COIN. Much of the code was not written in a "functional" way, and there were not enough checks on the inputs and outputs of the code. All this meant that the code was not very robust and had to be patched, a lot. This made it hard to maintain, less robust, and also slower for the user. As a consequence, I decided to re-write most functions, many from scratch, with a higher standard of programming. I also slimmed down the "COIN" class to a more streamlined "coin" class.
## Focus
COINr is a package meant to focus on developing composite indicators. But the focus got lost at some point when I got carried away with html plotly plots and shiny apps. These things, although nice, were in retrospect not really that useful in the package; they were also difficult to maintain and bloated the package. I decided to cut out all interactive plots and apps, but these can still be accessed through the COINr6 package and the conversion functions between COINr and COINr6 (see below).
## Dependencies
As a result of the first two points (inexperience plus straying off track), COINr had many dependencies, i.e. packages that had to be installed to install COINr. Although there is no harm in loading 10 or 20 packages when performing data analysis in R, this can become a problem if you are building a package because every user has to have these packages installed. If you have ever had to install several packages at the same time in R, you have probably run into some kind of problem. Moreover, COINr is dependent on any changes in those packages, and that makes maintaining it more difficult. This meant that in practice, COINr was not always easy to install. I decided to re-write the package almost entirely in base R, to remove as many dependencies as possible.
## Features and flexibility
One thing that was missing from COINr was proper support for panel data (time-dependent data). This has now been mostly rectified with the introduction of the "purse" class. The main "building" functions of COINr have also been re-written as generics, with methods for coins, data frames and purses. Moreover many functions allow you to call other functions, which makes COINr much easier to link up with other packages.
## Syntax
COINr syntax was inconsistent. While this was not a critical problem, since I was making big changes to the package I decided to take the opportunity to make the syntax as consistent as possible. This is a one-off change and won't be messed around with any more!
# What's changed?
## Function names
Many things have changed. The first thing you will probably notice is the syntax. Because I was anyway making syntax-breaking changes to the package, I decided to go all in, and try to make the syntax as consistent as possible. This means that function names are more predictable: all "building" functions start with a capital letter. Plot functions start with `plot_`. Analysis functions mostly start with `get_`. Other functions are generally in lower case. This all hopefully makes the package a little easier to use. You will notice that calling an old < v1.0 function name will generate an error, which redirects you to the new function name. My hope is that although this is inconvenient, it will not take too long to adapt to the new function names. In most functions, the main logic behind the arguments is pretty similar. As mentioned above, I'm not going to change all the names again; this is a one-off thing.
## Function features
The second obvious change is that some of the key functions themselves have changed syntax: they have been re-written to be more flexible and more robust. This may seem annoying but I promise you it is for the greater good. I can't describe all the changes here, but in general functions have been made more flexible: for example, `Normalise()` can now take any normalising function, rather than a fixed set of options. Outlier treatment also allows you to pass outlier detection and treatment functions. The sensitivity analysis function (now `get_sensitivity()`) allows you to target any part of the coin at all, not just function arguments. In general, the core "building" functions now call other lower-level functions, and this makes it easier to hook COINr up to other packages, for example using more sophisticated imputation and aggregation methods.
## New "coin" class and methods
The third related change that is perhaps not so obvious is that the structure of the central object in COINr, the "COIN", has changed. The object has been streamlined and tidied, and has a new S3 class called a "coin" (the difference being that the new coin is lower case). If you have previously built a COIN using an older version of COINr, it will not work in the new version of COINr! But the good news is that there is a handy function called `COIN_to_coin()` which converts the older "COIN" class to the newer "coin" class.
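As a minimal sketch, assuming you have an old-style object (here called `COIN_old`, a hypothetical name) built with a pre-1.0 version of COINr:
```{r, eval=FALSE}
# convert an old "COIN" class object to the new "coin" class
# (COIN_old is a placeholder for your existing COIN object)
coin_new <- COIN_to_coin(COIN_old)
```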
The new "coin" class also comes with a number of methods. All the main construction functions now have methods for at least coins, data frames and purses (see next sub-section), and some have methods for numerical vectors. This is in contrast to the older COINr versions which did not define formal methods. See the [Building coins](coins.html) vignette for more details.
## Purses and panel data
The new "purse" class gives a formal way to deal with panel data (time indexed data). A "purse" is a time-indexed collection of coins. All construction functions have purse methods, so working with time data becomes very straightforward.
Purses and purse methods are still being expanded in COINr so keep an eye out for new features if you are interested. See the [Building coins](coins.html) vignette for more details.
## Documentation
The next thing is that the documentation has been completely re-written, with loads of new vignettes! And even better, COINr now lives at a web-page built with "pkgdown" which you can find [here](https://bluefoxr.github.io/COINr/), where all the documentation is easily accessible. So each function is well-documented. Hurray.
## Removed functions
The last very obvious change is that some functions have disappeared! Where have they gone? You may notice that all functions that generated interactive plots (often called `iPlot*` in previous versions of COINr), plus all shiny apps, have vanished. The reason for this, as explained above, is that these tools were distracting from the main point of the package and were too much effort to maintain. Moreover, even though interactive plots are great if you are outputting html documents, for pdf and word they are a hassle because it is quite unpredictable how they will be rendered. The good news is that I have replaced some of the interactive plots with static versions, such as `plot_framework()`, and `plot_scatter()`, so you can still do most of the plotting as in the previous versions, but with more predictable (and more usable) outputs.
# COINr6: I want out!
If this level of upheaval is all a bit too much, and you'd like to go back to how things were before, you have two options. The easiest "roll-back" option is to install the "COINr6" package. COINr6 is the latest version of COINr *before* the major syntax changes. This means that if you wrote some scripts or markdown files in the old syntax, instead of loading COINr, install and load COINr6, and this will run as before.
The advantage of this is that you can have COINr (new syntax) and COINr6 (old syntax) both installed at the same time.
To install COINr6 you have to install it from GitHub. First, make sure you have the "remotes" package:
```{r, eval=FALSE}
# install remotes package if you don't have it
install.packages("remotes")
```
Now install COINr6 from the GitHub repo:
```{r, eval = FALSE}
remotes::install_github("bluefoxr/COINr6")
```
And that's it. I will continue to lightly maintain this package for a while (e.g. fixing any critical bugs if any arise) but in general the main focus will be on the new COINr version.
Another way to roll back COINr to an older version is to use `devtools::install_version()`, in which you can specify a version number of any package to install. This might be a bit more fiddly, and personally I would recommend to rather install COINr6. But if you want, check out [this article](https://support.posit.co/hc/en-us/articles/219949047-Installing-older-versions-of-packages) for some info on installing older package versions.
COINr and COINr6 have conversion functions: in COINr there is the `COIN_to_coin()` function which allows conversion from the older "COIN" class to the newer "coin" class. In COINr6 there is also now the reverse function, `coin_to_COIN()`, which allows access to all the old interactive plotting of COINr6 if you liked that, as well as the apps. Note that conversion comes with some limitations in both directions, which are discussed in those functions' documentation.
# Summary
In summary, COINr has changed quite a lot, but that is a Good Thing. If you do want to roll back, or have both old and new syntax side by side, install COINr6.
As usual, if you have any feedback, spot any bugs or have any suggestions, email me or [open an issue](https://github.com/bluefoxr/COINr/issues) in the GitHub repo.
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/v1.Rmd
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ---- message=F, fig.width=5, fig.height=5------------------------------------
library(COINr)
# assemble example COIN
coin <- build_example_coin(up_to = "new_coin")
# plot framework
plot_framework(coin)
## ---- fig.width=6, fig.height= 5----------------------------------------------
plot_framework(coin, type = "stack", colour_level = 2)
## -----------------------------------------------------------------------------
plot_dist(coin, dset = "Raw", iCodes = "CO2")
## ---- message=F, fig.width=7--------------------------------------------------
plot_dist(coin, dset = "Raw", iCodes = "P2P", Level = 1, type = "Violindot")
## -----------------------------------------------------------------------------
coin <- build_example_coin(quietly = TRUE)
## ---- fig.width=5-------------------------------------------------------------
plot_corr(coin, dset = "Normalised", iCodes = list("Physical"), Levels = 1)
## ---- fig.width=4-------------------------------------------------------------
plot_corr(coin, dset = "Aggregated",
iCodes = list(c("Flights", "LPI"), c("Physical", "P2P")), Levels = c(1,2))
## ---- fig.width=4, fig.height=4-----------------------------------------------
plot_corr(coin, dset = "Aggregated", iCodes = list("Sust"), withparent = "family", flagcolours = T)
## ---- fig.width=7, fig.height=5-----------------------------------------------
plot_corr(coin, dset = "Normalised", iCodes = list("Sust"),
grouplev = 2, flagcolours = T)
## ---- fig.width=7, fig.height=6-----------------------------------------------
plot_corr(coin, dset = "Normalised", grouplev = 3, box_level = 2, showvals = F)
## ---- fig.width=7-------------------------------------------------------------
plot_bar(coin, dset = "Raw", iCode = "CO2")
## ---- fig.width=7-------------------------------------------------------------
plot_bar(coin, dset = "Raw", iCode = "CO2", by_group = "GDPpc_group", axes_label = "iName")
## ---- fig.width=7-------------------------------------------------------------
plot_bar(coin, dset = "Aggregated", iCode = "Sust", stack_children = TRUE)
## ---- fig.height=2, fig.width=4-----------------------------------------------
plot_dot(coin, dset = "Raw", iCode = "LPI", usel = c("JPN", "ESP"))
## ---- fig.height=2, fig.width=4-----------------------------------------------
plot_dot(coin, dset = "Raw", iCode = "LPI", usel = c("JPN", "ESP"), add_stat = "median",
stat_label = "Median", plabel = "iName+unit")
## ---- fig.width=4-------------------------------------------------------------
plot_scatter(coin, dsets = "Raw", iCodes = c("Goods", "Services"), point_label = "uCode")
## ---- fig.width=5-------------------------------------------------------------
plot_scatter(coin, dsets = c("uMeta", "Raw"), iCodes = c("Population", "Flights"),
by_group = "GDPpc_group", log_scale = c(TRUE, FALSE))
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/visualisation.R
---
title: "Visualisation"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Visualisation}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
COINr has a number of options for plotting and visualising indicator data, both for analysis and presentation. All plots generated by COINr are powered by ggplot2, which means that if you want to customise them beyond the arguments provided by COINr functions, you can simply edit them with ggplot2 commands.
Note that prior to COINr v1.0.0, COINr additionally included interactive visualisation using apps and HTML widgets. This has been discontinued but those functions can still be accessed via the COINr6 package. See the [vignette](v1.html) on this topic for the reasons behind this and further details.
# Framework
Upon building a coin (see the [Building coins](coins.html) vignette), a good way to begin is to check the structure of the index. This can be done visually with the `plot_framework()` function, which generates a sunburst plot of the index structure.
```{r, message=F, fig.width=5, fig.height=5}
library(COINr)
# assemble example COIN
coin <- build_example_coin(up_to = "new_coin")
# plot framework
plot_framework(coin)
```
The sunburst plot is useful for a few things. First, it shows the structure that COINr has understood. This allows you to check whether the structure agrees with your expectations.
Second, it shows the effective weight of each indicator. Effective weights are the final weights of each indicator in the index, as a result of the indicator weights, the parent aggregate weights, and the structure of the index. This can reveal which indicators are implicitly weighted more than others, e.g. by having more or fewer indicators in the same aggregation groups. The effective weights can also be accessed directly using the `get_eff_weights()` function.
Finally, it can be a good way to communicate your index structure to other people.
The `plot_framework()` function has a few options for colouring. Other than that, if you don't like sunburst plots, another possibility is to set `type = "stack"`:
```{r, fig.width=6, fig.height= 5}
plot_framework(coin, type = "stack", colour_level = 2)
```
This gives a linear representation of the index. Here we have also set the colouring level to the pillar level (see `plot_framework()` documentation). Note that you will probably have to adjust the plot size to get a good figure.
# Statistical plots
Here we explore options for statistical plots, namely distribution and correlation plots.
## Distributions
The distribution of any variable, as well as groups of variables, in a coin can be visualised quickly using the `plot_dist()` function. The simplest case is to plot the distribution of a single indicator.
```{r}
plot_dist(coin, dset = "Raw", iCodes = "CO2")
```
To do this, as usual, we have to specify the data set (`dset`) and the indicator (`iCodes`) to plot. The data selection for `plot_dist()` is powered by `get_data()`, which means we can plot subsets of indicators and units. Commonly, with distribution plots, it might be interesting to plot the distributions of all indicators belonging to a particular group - let's plot all indicator distributions in the "P2P" pillar:
```{r, message=F, fig.width=7}
plot_dist(coin, dset = "Raw", iCodes = "P2P", Level = 1, type = "Violindot")
```
This plots all eight indicators belonging to that group, and we also specified to plot as "violin-dot" plots. Optionally, data can also be normalised before plotting using the `normalise` argument. See `plot_dist()` for more details and further options.
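For example, a quick sketch of normalising before plotting, so that the indicators share a common scale (check `plot_dist()` for the exact arguments accepted by `normalise`):
```{r, eval=FALSE}
# normalise indicators before plotting so they share a common scale
plot_dist(coin, dset = "Raw", iCodes = "P2P", Level = 1,
          type = "Violindot", normalise = TRUE)
```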
A similar function, `plot_dot()`, also plots a single indicator using dots, but it is intended for highlighting individual units rather than serving as a statistical plot of the distribution. See [Dot plots] below.
## Correlations
Correlation plots are very useful for understanding relationships between indicators. COINr's `plot_corr()` function is a flexible tool for plotting correlations between almost any variables in a coin, and visualising them according to the structure of the index.
One thing to keep in mind from the outset is the directionality of your indicators: if some are negative then this will probably be reflected in the correlation plots, unless you normalise the data first. With that in mind, we will build the full example coin including the normalisation step and then plot some correlations:
```{r}
coin <- build_example_coin(quietly = TRUE)
```
Now let's do a basic plot of correlations within a group:
```{r, fig.width=5}
plot_corr(coin, dset = "Normalised", iCodes = list("Physical"), Levels = 1)
```
Notice the syntax: we have to specify `iCodes` as a list here, and specify the level to get data from. In this case we have specified that we want the indicators (level 1) of the "Physical" group to be correlated against each other. As usual these arguments are passed to `get_data()`.
The reason that `iCodes` is specified as a list is that we can pass two character vectors to it, possibly from different levels:
```{r, fig.width=4}
plot_corr(coin, dset = "Aggregated",
iCodes = list(c("Flights", "LPI"), c("Physical", "P2P")), Levels = c(1,2))
```
The point being that we can select any set of indicators or aggregates, and correlate them with any other set. We can also pass further arguments to `get_data()` such as groupings and unit selection, if needed.
Other useful features include the possibility to correlate a set of indicators with only its parent groups - this is done by setting `withparent = "family"`. Here we also set to a discrete colour scheme using `flagcolours = TRUE`.
```{r, fig.width=4, fig.height=4}
plot_corr(coin, dset = "Aggregated", iCodes = list("Sust"), withparent = "family", flagcolours = T)
```
Notice that boxes are drawn around aggregation groups in this case. As a final example, we show how boxes and groups can be used to show subsets of correlation matrices. Typically the most interesting correlations are within aggregation groups, because weak correlations cause less information to be transferred to the aggregate. We can show only the in-group correlations using the `grouplev` argument, which takes an aggregation level at which to group the indicators:
```{r, fig.width=7, fig.height=5}
plot_corr(coin, dset = "Normalised", iCodes = list("Sust"),
grouplev = 2, flagcolours = T)
```
This can also be done with the `box_level` argument, which can be used additionally to highlight groupings at different levels:
```{r, fig.width=7, fig.height=6}
plot_corr(coin, dset = "Normalised", grouplev = 3, box_level = 2, showvals = F)
```
In this case we have also disabled correlation values themselves. Other options include using different types of correlations, and changing colours. For details, see the help page of `plot_corr()`. It is also worth mentioning that underneath, `plot_corr()` calls `get_corr()`, so if you are interested in correlation matrices rather than plots, use that.
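For example, to get the correlation values for the "Physical" group as data rather than a plot (a sketch roughly mirroring the first plot above; see `get_corr()` for the exact argument format):
```{r, eval=FALSE}
# correlations as a data frame rather than a plot
get_corr(coin, dset = "Normalised", iCodes = list("Physical"), Levels = 1)
```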
# Indicator plots
In this section we examine some options for visualising individual indicators, in particular with the aim of seeing how different units compare to one another.
## Bar
A simple way to look at a set of scores for an indicator is with a bar chart:
```{r, fig.width=7}
plot_bar(coin, dset = "Raw", iCode = "CO2")
```
The `plot_bar()` function returns a bar chart of a single indicator, sorted from high to low values. We can also colour this by any of the grouping variables found in the coin:
```{r, fig.width=7}
plot_bar(coin, dset = "Raw", iCode = "CO2", by_group = "GDPpc_group", axes_label = "iName")
```
Here we have also set `axes_label = "iName"` to output indicator names rather than codes. Several other options are available, including a log scale, and colouring options. Here we just show one more thing, which is the possibility to break bars down into underlying component scores. This only works if we are plotting an aggregate score (i.e. level 2 or higher), rather than an indicator, because it looks for the underlying scores used to calculate each aggregate score. For example, we can see how the Sustainability scores break down into their three underlying components, for each country:
```{r, fig.width=7}
plot_bar(coin, dset = "Aggregated", iCode = "Sust", stack_children = TRUE)
```
## Dot plots
COINr's dot plot is pretty similar to a distribution plot, but it is intended for showing the position of a particular unit or units relative to their peers. This means that to make it useful, you should also select one or more units to highlight.
```{r, fig.height=2, fig.width=4}
plot_dot(coin, dset = "Raw", iCode = "LPI", usel = c("JPN", "ESP"))
```
Here we have plotted the "LPI" indicator and highlighted Spain and Japan. We can also add a statistic of this indicator, such as the median:
```{r, fig.height=2, fig.width=4}
plot_dot(coin, dset = "Raw", iCode = "LPI", usel = c("JPN", "ESP"), add_stat = "median",
stat_label = "Median", plabel = "iName+unit")
```
Here we have also labelled the statistic using `stat_label`, and labelled the x-axis using the indicator name and unit which are taken from the indicator metadata found within the coin.
## Scatter
The `plot_scatter()` function gives a quick way to plot scatter plots between any indicators or any variables in a coin.
```{r, fig.width=4}
plot_scatter(coin, dsets = "Raw", iCodes = c("Goods", "Services"), point_label = "uCode")
```
Variables can come from different data sets (including unit metadata), and we can also colour by groups:
```{r, fig.width=5}
plot_scatter(coin, dsets = c("uMeta", "Raw"), iCodes = c("Population", "Flights"),
by_group = "GDPpc_group", log_scale = c(TRUE, FALSE))
```
Here we have also converted the x-axis to a log scale since population is highly skewed. Other options can be found in the help page of `plot_scatter()`, and all plots can be further modified using ggplot2 commands.
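For example, since the returned object is a ggplot2 plot, it can be modified like any other ggplot (a quick sketch):
```{r, eval=FALSE}
library(ggplot2)
# add a title and change the theme of a COINr plot using ggplot2 commands
plot_scatter(coin, dsets = "Raw", iCodes = c("Goods", "Services")) +
  labs(title = "Goods vs. Services") +
  theme_minimal()
```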
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/visualisation.Rmd
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## -----------------------------------------------------------------------------
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "Normalise", quietly = TRUE)
# view weights
head(coin$Meta$Weights$Original)
## -----------------------------------------------------------------------------
# view rows not in level 1
coin$Meta$Weights$Original[coin$Meta$Weights$Original$Level != 1, ]
## -----------------------------------------------------------------------------
# copy original weights
w1 <- coin$Meta$Weights$Original
# modify weights of Conn and Sust to 0.3 and 0.7 respectively
w1$Weight[w1$iCode == "Conn"] <- 0.3
w1$Weight[w1$iCode == "Sust"] <- 0.7
# put weight set back with new name
coin$Meta$Weights$MyFavouriteWeights <- w1
## -----------------------------------------------------------------------------
coin <- Aggregate(coin, dset = "Normalised", w = "MyFavouriteWeights")
## -----------------------------------------------------------------------------
coin <- Aggregate(coin, dset = "Normalised", w = w1)
## -----------------------------------------------------------------------------
w_eff <- get_eff_weights(coin, out2 = "df")
head(w_eff)
## -----------------------------------------------------------------------------
# get sum of effective weights for each level
tapply(w_eff$EffWeight, w_eff$Level, sum)
## ---- fig.width=5, fig.height=5-----------------------------------------------
plot_framework(coin)
## -----------------------------------------------------------------------------
coin <- get_PCA(coin, dset = "Aggregated", Level = 2,
weights_to = "PCAwtsLev2", out2 = "coin")
## -----------------------------------------------------------------------------
coin$Meta$Weights$PCAwtsLev2[coin$Meta$Weights$PCAwtsLev2$Level == 2, ]
## -----------------------------------------------------------------------------
# build example coin
coin <- build_example_coin(quietly = TRUE)
# check correlations between level 3 and index
get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
## -----------------------------------------------------------------------------
# optimise weights at level 3
coin <- get_opt_weights(coin, itarg = "equal", dset = "Aggregated",
Level = 3, weights_to = "OptLev3", out2 = "coin")
## -----------------------------------------------------------------------------
coin$Meta$Weights$OptLev3[coin$Meta$Weights$OptLev3$Level == 3, ]
## -----------------------------------------------------------------------------
# re-aggregate
coin <- Aggregate(coin, dset = "Normalised", w = "OptLev3")
# check correlations between level 3 and index
get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
/scratch/gouwar.j/cran-all/cranData/COINr/inst/doc/weights.R
---
title: "Weights"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Weights}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Weights are used by most aggregation methods to optionally alter the contribution of each indicator in an aggregation group, as well as by aggregates themselves if they are further aggregated. Weighting is therefore part of aggregation, but this vignette deals with it separately because there are a few special tools for weighting in COINr.
First, let's see what weights look like in practice. When a coin is built using `new_coin()`, the `iMeta` data frame (an input to `new_coin()`) has a "Weight" column, which is also required. Therefore, every coin should have a set of weights in it by default, which you had to specify as part of its construction. Sets of weights are stored in the `.$Meta$Weights` sub-list. Each set of weights is stored as a data frame with a name. The set of weights created when calling `new_coin()` is called "Original". We can see this by building the example coin and accessing the "Original" set directly:
```{r}
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "Normalise", quietly = TRUE)
# view weights
head(coin$Meta$Weights$Original)
```
The weight set simply has the indicator code, Level, and the weight itself. Notice that the indicator codes also include aggregate codes, up to the index:
```{r}
# view rows not in level 1
coin$Meta$Weights$Original[coin$Meta$Weights$Original$Level != 1, ]
```
The index itself doesn't have a weight because it is not used in any aggregation. Notice also that weights can be specified *relative* to one another: when an aggregation group is aggregated, the weights within that group are first scaled to sum to 1. This means that weights are relative within groups, but not between groups.
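To illustrate what "relative" means here: within a group, only the ratios between the weights matter, because they are rescaled to sum to 1 before aggregating. For example:
```{r, eval=FALSE}
# within one aggregation group, these two weight vectors are equivalent
w_a <- c(1, 2, 1)
w_b <- c(0.25, 0.5, 0.25)
w_a / sum(w_a)
w_b / sum(w_b)
```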
# Manual re-weighting
To change weights, one way is to simply go back to the original `iMeta` data frame that you used to build the coin, and edit it. If you don't want to do that, you can also create a new weight set. This simply involves:
1. Making a copy of the existing set of weights
2. Changing the weights of the copy
3. Putting the new set of weights in the coin
For example, if we want to change the weighting of the "Conn" and "Sust" sub-indices, we could do this:
```{r}
# copy original weights
w1 <- coin$Meta$Weights$Original
# modify weights of Conn and Sust to 0.3 and 0.7 respectively
w1$Weight[w1$iCode == "Conn"] <- 0.3
w1$Weight[w1$iCode == "Sust"] <- 0.7
# put weight set back with new name
coin$Meta$Weights$MyFavouriteWeights <- w1
```
Now, to actually use these weights in aggregation, we have to direct the `Aggregate()` function to find them. When weights are stored in the "Weights" sub-list as we have done here, this is easy because we only have to pass the name of the weights to `Aggregate()`:
```{r}
coin <- Aggregate(coin, dset = "Normalised", w = "MyFavouriteWeights")
```
Alternatively, we can pass the data frame itself to `Aggregate()` if we don't want to store it in the coin for some reason:
```{r}
coin <- Aggregate(coin, dset = "Normalised", w = w1)
```
When altering weights we may wish to compare the outcomes of alternative sets of weights. See the [Adjustments and comparisons](adjustments.html) vignette for details on how to do this.
# Effective weights
COINr has some statistical tools for adjusting weights as explained in the next sections. Before that, it is also interesting to look at "effective weights". At the index level, the weighting of an indicator is not due just to its own weight, but also to the weights of each aggregation that it is involved in, plus the number of indicators/aggregates in each group. This means that the final weighting, at the index level, of each indicator, is slightly complex to understand. COINr has a built in function to get these "effective weights":
```{r}
w_eff <- get_eff_weights(coin, out2 = "df")
head(w_eff)
```
The "EffWeight" column is the effective weight of each component at the highest level of aggregation (the index). These weights sum to 1 for each level:
```{r}
# get sum of effective weights for each level
tapply(w_eff$EffWeight, w_eff$Level, sum)
```
The effective weights can also be viewed using the `plot_framework()` function, where the angle of each indicator/aggregate is proportional to its effective weight:
```{r, fig.width=5, fig.height=5}
plot_framework(coin)
```
# PCA weights
The `get_PCA()` function can be used to return a set of weights which maximises the explained variance within aggregation groups. This function is already discussed in the [Analysis](analysis.html) vignette, so we will only focus on the weighting aspect here.
First of all, PCA weights come with a number of caveats which need to be mentioned (this is also detailed in the `get_PCA()` function help). First, what constitutes "PCA weights" in composite indicators is not very well-defined. In COINr, a simple option is adopted: the loadings of the first principal component are taken as the weights. The logic here is that these loadings should maximise the explained variance - the implication being that if we use these as weights in an aggregation, we should maximise the explained variance and hence the information passed from the indicators to the aggregate value. This is a nice property in a composite indicator, where one of the aims is to represent many indicators by a single composite. See [here](https://doi.org/10.1016/j.envsoft.2021.105208) for a discussion on this.
But. The weights that result from PCA have a number of downsides. First, they can often include negative weights which can be hard to justify. Also PCA may arbitrarily flip the axes (since from a variance point of view the direction is not important). In the quest for maximum variance, PCA will also weight the strongest-correlating indicators the highest, which means that other indicators may be neglected. In short, it often results in a very unbalanced set of weights. Moreover, PCA can only be performed on one level at a time.
The result is that PCA weights should be used carefully. All that said, let's see how to get PCA weights. We simply run the `get_PCA()` function with `out2 = "coin"` and specify the name of the weights to use. Here, we will calculate PCA weights at level 2, i.e. at the first level of aggregation. To do this, we need to use the "Aggregated" data set because the PCA needs to have the level 2 scores to work with:
```{r}
coin <- get_PCA(coin, dset = "Aggregated", Level = 2,
weights_to = "PCAwtsLev2", out2 = "coin")
```
This stores the new set of weights in the Weights sub-list, with the name we gave it. Let's have a look at the resulting weights. The only weights that have changed are at level 2, so we look at those:
```{r}
coin$Meta$Weights$PCAwtsLev2[coin$Meta$Weights$PCAwtsLev2$Level == 2, ]
```
This shows the nature of PCA weights: actually in this case it is not too severe but the Social dimension is negatively weighted because it is negatively correlated with the other components in its group. In any case, the weights can sometimes be "strange" to look at and that may or may not be a problem. As explained above, to actually use these weights we can call them when calling `Aggregate()`.
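For example, to re-aggregate using the PCA weights stored above:
```{r, eval=FALSE}
# aggregate the normalised data using the PCA weights calculated above
coin <- Aggregate(coin, dset = "Normalised", w = "PCAwtsLev2")
```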
# Optimised weights
While PCA is based on linear algebra, another way to statistically weight indicators is via numerical optimisation. Optimisation is a numerical search method which finds a set of values which maximise or minimise some criterion, called the "objective function".
In composite indicators, different objectives are conceivable. The `get_opt_weights()` function gives two options in this respect - either to look for the set of weights that "balances" the indicators, or the set that maximises the information transferred (see [here](https://doi.org/10.1016/j.envsoft.2021.105208)). This is done by looking at the correlations between indicators and the index. This needs a little explanation.
If weights are chosen to match the opinions of experts, or indeed your own opinion, there is a catch that is not very obvious. Put simply, weights do not directly translate into importance.
To understand why, we must first define what "importance" means. Actually there is more than one way to look at this, but one possible measure is to use the (possibly nonlinear) correlation between each indicator and the overall index. If the correlation is high, the indicator is well-reflected in the index scores, and vice versa.
If we accept this definition of importance, then it's important to realise that this correlation is affected not only by the weights attached to each indicator, but also by the correlations *between indicators*. This means that these correlations must be accounted for in choosing weights that agree with the budgets assigned by the group of experts.
In fact, it is possible to reverse-engineer the weights either [analytically using a linear solution](https://doi.org/10.1111/j.1467-985X.2012.01059.x) or [numerically using a nonlinear solution](https://doi.org/10.1016/j.ecolind.2017.03.056). While the former method is far quicker than a nonlinear optimisation, it is only applicable in the case of a single level of aggregation, with an arithmetic mean, and using linear correlation as a measure. Therefore in COINr, the second method is used.
Let's now see how to use `get_opt_weights()` in practice. Like with PCA weights, we can only optimise one level at a time. We also need to say what kind of optimisation to perform. Here, we will search for the set of weights that results in equal influence of the sub-indexes (level 3) on the index. We need a coin with an aggregated data set already present, because the function needs to know which kind of aggregation method you are using. Just before doing that, we will first check what the correlations look like between level 3 and the index, using equal weighting:
```{r}
# build example coin
coin <- build_example_coin(quietly = TRUE)
# check correlations between level 3 and index
get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
```
This shows that the correlations are similar but not the same. Now let's run the optimisation:
```{r}
# optimise weights at level 3
coin <- get_opt_weights(coin, itarg = "equal", dset = "Aggregated",
Level = 3, weights_to = "OptLev3", out2 = "coin")
```
We can view the optimised weights (weights will only change at level 3):
```{r}
coin$Meta$Weights$OptLev3[coin$Meta$Weights$OptLev3$Level == 3, ]
```
To see if this was successful in balancing correlations, let's re-aggregate using these weights and check correlations.
```{r}
# re-aggregate
coin <- Aggregate(coin, dset = "Normalised", w = "OptLev3")
# check correlations between level 3 and index
get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
```
This shows that indeed the correlations are now well-balanced - the optimisation has worked.
We will not explore all the features of `get_opt_weights()` here, especially because optimisations can take a significant amount of CPU time. However, the main options include specifying a vector of "importances" rather than aiming for equal importance, and optimising to maximise total correlation, rather than balancing. There are also some numerical optimisation parameters that could help if the optimisation doesn't converge.
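As a rough sketch of the first of these options (not run here, since optimisations can be slow), and assuming `itarg` accepts a numeric vector of target importances as implied above, we could hypothetically ask for the second sub-index to be twice as "important" as the first:
```{r, eval=FALSE}
# optimise weights targeting unequal importances (illustrative only)
coin <- get_opt_weights(coin, itarg = c(1, 2), dset = "Aggregated",
                        Level = 3, weights_to = "OptLev3_unequal", out2 = "coin")
```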
---
title: "Adjustments and Comparisons"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Adjustments and Comparisons}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
One of the most powerful features of COINr is the possibility to copy, adjust and compare coins. A coin is a structured list that represents a composite indicator. Since it is an R object like any other, it can be copied and modified, and alternative versions can be easily compared. This generally requires four steps:
1. Make a copy of the coin
2. Adjust the coin
3. Regenerate the coin
4. Compare coins
These will be explained in the following sections.
# Regeneration
The first three points on the list above will be addressed here. We must begin by explaining the "Log" of a coin. In COINr, some functions are distinguished as "building functions". These functions start with a capital letter (with one exception), and have the following defining features:
1. When a building function is run, it creates a new data set in `.$Data`.
2. When a building function is run, it records its function arguments in `.$Log`.
Building functions are the following:
| Function | Description |
|-----------------|----------------------------------------------------------------|
| `new_coin()` | Initialise a coin object given indicator data and metadata |
| `Screen()` | Screen units based on data availability rules |
| `Denominate()` | Denominate/scale indicators by other indicators |
| `Impute()` | Impute missing data |
| `Treat()` | Treat outliers and skewed distributions |
| `Normalise()` | Normalise indicators onto a common scale |
| `Aggregate()` | Aggregate indicators using weighted mean |
Let's explain the concept of the "Log" now with an example. We will build the example coin manually, then look inside the coin's Log list:
```{r}
library(COINr)
# create new coin by calling new_coin()
coin <- new_coin(ASEM_iData, ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
# look in log
str(coin$Log, max.level = 2)
```
Looking in the log, we can see that it is a list with an entry "new_coin", which contains exactly the arguments that we passed to `new_coin()`: `iData`, `iMeta`, the level names, and two other arguments which are the default values of the function. There is also another logical variable called `can_regen` which is for internal use only.
This demonstrates that when we call a building function, its arguments are stored in the coin. To show another example, if we apply the `Normalise()` function:
```{r}
# normalise
coin <- Normalise(coin, dset = "Raw")
# view log
str(coin$Log, max.level = 2)
```
Now we additionally have a "Normalise" entry, with all the function arguments that we specified, plus defaults.
Now, the reason that building functions write to the log, is that it allows coins to be *regenerated*, which means automatically re-running the building functions that were used to create the coin and its data sets. This is done with a function called `Regen()`:
```{r}
# regenerate the coin
coin <- Regen(coin, quietly = FALSE)
```
When `Regen()` is called, it runs the building functions *in the order that they are found in the log*. This is an important point because if you iteratively re-run building functions, you might end up with an order that is not what you expect. You can check the log if you have any doubts (in any case, you would probably encounter an error if the order is incorrect). Also, each building function can only be run once in a regeneration.
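For example, a quick way to see which entries are recorded in the log, and in which order, is:
```{r}
# entries in the log (building functions, plus the internal can_regen flag)
names(coin$Log)
```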
So why regenerate coins - aren't the results exactly the same? Yes, unless you modify something first. And this brings us to the copying and modifying points. Let us take an example: first, we'll build the full example coin, then we'll make a copy of our existing coin:
```{r}
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# copy coin
coin2 <- coin
```
At this point, the coins are identical. What if we want to test an alternative methodology, for example a different normalisation method? This can be done by editing the Log of the coin, then regenerating. Here, we will change the normalisation method to percentile ranks, and regenerate. To make this change it is necessary to target the right argument. Let's first see what is already in the Log for `Normalise()`:
```{r}
str(coin2$Log$Normalise)
```
At the moment, the normalisation is min-max onto the interval 0 to 100. We will change this to use the `n_prank()` function instead:
```{r}
# change to prank function (percentile ranks)
# we don't need to specify any additional parameters (f_n_para) here
coin2$Log$Normalise$global_specs <- list(f_n = "n_prank")
# regenerate
coin2 <- Regen(coin2)
```
And that's it. In summary, we copied the coin, edited its log to a different normalisation methodology, and then regenerated the results. Now what remains is to compare the results, and this is dealt with in the next section.
Before that, let's consider what kind of things we can change in a coin. Anything in the Log can be changed, but of course it is up to you to change it to something valid. As long as you carefully follow the function help pages, this shouldn't be any more difficult than using the functions directly. You can also change anything else about the coin, including the input data, by targeting the log of `new_coin()`. Changing anything outside of the Log will generally have no effect, because the coin is recreated by `new_coin()` during regeneration and any such changes will be overwritten. The exception is if you use the `from` argument of `Regen()`: in this case the regeneration will only begin from the function name that you pass to it. This partial regeneration can also be useful to speed up computation time.
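As a sketch of this last point, assuming we had only changed something at the normalisation step or later:
```{r, eval=FALSE}
# partial regeneration: re-run only from Normalise() onwards
coin2 <- Regen(coin2, from = "Normalise")
```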
# Adding/removing indicators
One adjustment that may be of interest is to add and remove indicators. This needs to be done with care because removing an indicator requires that it is removed from both `iData` and `iMeta` when building the coin with `new_coin()`. It is not possible to remove indicators after the coin is assembled, without completely regenerating the coin.
One way to add or remove indicators is to edit the `iData` and `iMeta` data frames by hand and then rebuild the coin. Another way is to regenerate the coin, but use the `exclude` argument of `new_coin()`.
A short cut function, `change_ind()` can be also used to quickly add or remove indicators from the framework, and regenerate the coin, all in one command.
```{r}
# copy base coin
coin_remove <- coin
# remove two indicators and regenerate the coin
coin_remove <- change_ind(coin, drop = c("LPI", "Forest"), regen = TRUE)
coin_remove
```
The `drop` argument is used to specify which indicators to remove. The `add` argument adds indicators, although any indicators specified by `add` must be available in the original `iData` and `iMeta` that were passed to `new_coin()`. This means that `add` can only be used if you have previously excluded some of the indicators.
In general, if you want to test the effect of different indicators, you should include all candidate indicators in `iData` and `iMeta` and use `exclude` from `new_coin()` and/or `change_ind()` to select subsets. The advantage of doing it this way is that different subsets can be tested as part of a sensitivity analysis, for example.
In fact `change_ind()` simply edits the `exclude` argument of `new_coin()`, but is a quick way of doing this. Moreover it is safer, because it performs a few checks on the indicator codes to add or remove.
It is also possible to effectively remove indicators by setting weights to zero. This is similar to the above approach but not necessarily identical: weights only come into play at the aggregation step, which is usually the last operation. If you perform unit screening, or imputation, the presence of zero-weighted indicators could still influence the results, depending on the settings.
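To sketch how this could look (assuming the default weight set stored as "Original" in the coin, and picking "LPI" as a hypothetical indicator to zero-weight):
```{r, eval=FALSE}
# copy the original weights and set the weight of LPI to zero
w_zero <- coin$Meta$Weights$Original
w_zero$Weight[w_zero$iCode == "LPI"] <- 0
# re-aggregate the normalised data using the modified weights
coin_zerow <- Aggregate(coin, dset = "Normalised", w = w_zero)
```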
The effects of removing indicators and aggregates can also be tested using the `remove_elements()` function, which removes all indicators or aggregates in a specified level and calculates the impact.
# Comparison
Comparing coins is helped by two dedicated functions, `compare_coins()` and `compare_coins_multi()`. The former is for comparing two coins only, whereas the latter allows more than two coins to be compared. Let's start by comparing the two coins we have: the default example coin, and the same coin but with a percentile rank normalisation method:
```{r}
# compare index, sort by absolute rank difference
compare_coins(coin, coin2, dset = "Aggregated", iCode = "Index",
sort_by = "Abs.diff", decreasing = TRUE)
```
This shows that for the overall index, the maximum rank change is 10 places for Portugal. We can compare ranks or scores, for any indicator or aggregate in the index. This also works if the number of units changes. At the moment, the coin has an imputation step which fills in all `NA`s. We could alternatively filter out any units with less than 90% data availability and remove the imputation step.
```{r}
# copy original coin
coin90 <- coin
# remove imputation entry completely (function will not be run)
coin90$Log$Impute <- NULL
# set data availability threshold to 90%
coin90$Log$Screen$dat_thresh <- 0.9
# we also need to tell Screen() to use the denominated dset now
coin90$Log$Screen$dset <- "Denominated"
# regenerate
coin90 <- Regen(coin90)
# summarise coin
coin90
```
We can see that we are down to 46 units after the screening step. Now let's compare with the original coin:
```{r}
# compare index, sort by absolute rank difference
compare_coins(coin, coin90, dset = "Aggregated", iCode = "Index",
sort_by = "Abs.diff", decreasing = TRUE)
```
The removed units are marked as `NA` in the second coin.
Finally, to demonstrate comparing multiple coins, we can call the `compare_coins_multi()` function:
```{r}
compare_coins_multi(list(Nominal = coin, Prank = coin2, NoLPIFor = coin_remove,
Screen90 = coin90), dset = "Aggregated", iCode = "Index")
```
This simply shows the ranks of each of the four coins side by side. We can also choose to compare scores, and to display rank changes or absolute rank changes. Obviously a requirement is that the coins must all have some common units, and must all contain the specified `dset` and `iCode`.
---
title: "Aggregation"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Aggregation}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette describes the process of aggregating indicators, in COINr.
# Introduction
Aggregation is the operation of combining multiple indicators into one value. Many composite indicators have a hierarchical structure, so in practice this often involves multiple aggregations, for example aggregating groups of indicators into aggregate values, then aggregating those values into higher-level aggregates, and so on, until the final index value.
Aggregating should almost always be done on normalised data, unless the indicators are already on very similar scales. Otherwise the relative influence of indicators will be very uneven.
Of course you don't *have* to aggregate indicators at all, and you might be content with a scoreboard, or perhaps aggregating into several aggregate values rather than a single index. However, consider that aggregation should not substitute the underlying indicator data, but complement it.
Overall, aggregating indicators is a form of information compression - you are trying to combine many indicator values into one, and inevitably information will be lost ([this](https://doi.org/10.1016/j.envsoft.2021.105208) recent paper may be of interest). As long as this is kept in mind, and indicator data is presented and made available alongside aggregate values, then aggregate (index) values can complement indicators and be used as a useful tool for summarising the underlying data, and identifying overall trends and patterns.
## Weighting
Many aggregation methods involve some kind of weighting, i.e. coefficients that define the relative weight of the indicators/aggregates in the aggregation. In order to aggregate, weights need to first be specified, but to effectively adjust weights it is necessary to aggregate.
This chicken and egg conundrum is best solved by aggregating initially with a trial set of weights, perhaps equal weights, then seeing the effects of the weighting, and making any weight adjustments necessary.
## Approaches
### Means
The most straightforward and widely-used approach to aggregation is the **weighted arithmetic mean**. Denoting the indicators as $x_i \in \{x_1, x_2, ... , x_d \}$, a weighted arithmetic mean is calculated as:
$$ y = \frac{1}{\sum_{i=1}^d w_i} \sum_{i=1}^d x_iw_i $$
where the $w_i$ are the weights corresponding to each $x_i$. Here, if the weights are chosen to sum to 1, it will simplify to the weighted sum of the indicators. In any case, the weighted mean is scaled by the sum of the weights, so weights operate relative to each other.
Clearly, if the index has more than two levels, then there will be multiple aggregations. For example, there may be three groups of indicators which give three separate aggregate scores. These aggregate scores would then be fed back into the weighted arithmetic mean above to calculate the overall index.
The arithmetic mean has "perfect compensability", which means that a high score in one indicator will perfectly compensate a low score in another. In a simple example with two indicators scaled between 0 and 10 and equal weighting, a unit with scores (0, 10) would be given the same score as a unit with scores (5, 5) -- both have a score of 5.
An alternative is the **weighted geometric mean**, which uses the product of the indicators rather than the sum.
$$ y = \left( \prod_{i=1}^d x_i^{w_i} \right)^{1 / \sum_{i=1}^d w_i} $$
This is simply the product of each indicator to the power of its weight, all raised to the power of the inverse of the sum of the weights.
The geometric mean is less compensatory than the arithmetic mean -- low values in one indicator only partially substitute high values in others. For this reason, the geometric mean may sometimes be preferred when indicators represent "essentials". An example might be quality of life: a longer life expectancy perhaps should not compensate severe restrictions on personal freedoms.
A third type of mean, in fact the third of the so-called [Pythagorean means](https://en.wikipedia.org/wiki/Pythagorean_means) is the **weighted harmonic mean**. This uses the mean of the reciprocals of the indicators:
$$ y = \frac{\sum_{i=1}^d w_i}{\sum_{i=1}^d w_i/x_i} $$
The harmonic mean is the least compensatory of the three means, even less so than the geometric mean. It is often used for taking the mean of rates and ratios.
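As a quick base-R illustration with two values and equal weights (so the weighted means reduce to their unweighted forms), the three means reward an unbalanced pair of scores progressively less:
```{r}
x <- c(1, 10)
# arithmetic mean: full compensation
mean(x)
# geometric mean: partial compensation
prod(x)^(1/length(x))
# harmonic mean: least compensation
1/mean(1/x)
```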
### Other methods
The *weighted median* is also a simple alternative candidate. It is defined by ordering indicator values, then picking the value which has half of the assigned weight above it, and half below it. For *ordered* indicators $x_1, x_2, ..., x_d$ and corresponding weights $w_1, w_2, ..., w_d$ (normalised to sum to one), the weighted median is the indicator value $x_m$ that satisfies:
$$ \sum_{i=1}^{m-1} w_i \leq \frac{1}{2} \quad \text{and} \quad \sum_{i=m+1}^{d} w_i \leq \frac{1}{2} $$
The median is known to be robust to outliers, and this may be of interest if the distribution of scores across indicators is skewed.
Another somewhat different approach to aggregation is to use the [Copeland method](https://en.wikipedia.org/wiki/Copeland%27s_method). This approach is based on pairwise comparisons between units and proceeds as follows. First, an *outranking matrix* is constructed, which is a square matrix with $N$ columns and $N$ rows, where $N$ is the number of units.
The element in the $p$th row and $q$th column of the matrix is calculated by summing all the indicator weights where unit $p$ has a higher value in those indicators than unit $q$. Similarly, the cell in the $q$th row and $p$th column (which is the cell opposite on the other side of the diagonal) is calculated as the sum of the weights where unit $q$ has a higher value than unit $p$. If the indicator weights sum to one over all indicators, then these two scores will also sum to 1 by definition. The outranking matrix effectively summarises to what extent each unit scores better or worse than all other units, for all unit pairs.
The Copeland score for each unit is calculated by taking the sum of the row values in the outranking matrix. This can be seen as an average measure of to what extent that unit performs above other units.
Clearly, this can be applied at any level of aggregation and used hierarchically like the other aggregation methods presented here.
In some cases, one unit may score higher than the other in all indicators. This is called a *dominance pair*, and corresponds to an entry of the outranking matrix equal to one (with the opposite entry equal to zero).
The percentage of dominance pairs is an indication of robustness. Under dominance, there is no way methodological choices (weighting, normalisation, etc.) can affect the relative standing of the pair in the ranking. One will always be ranked higher than the other. The greater the number of dominance (or robust) pairs in a classification, the less sensitive country ranks will be to methodological assumptions. COINr can calculate the percentage of dominance pairs with an inbuilt function.
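As a sketch of how the Copeland method might be applied directly to a small data frame of scores (assuming, as implied by the `by_df = TRUE` requirement mentioned later, that `a_copeland()` accepts a data frame of values plus a weight vector), not evaluated here:
```{r, eval=FALSE}
# Copeland scores for three units described by three indicators (rows = units)
X <- data.frame(i1 = c(1, 2, 3), i2 = c(3, 2, 1), i3 = c(2, 3, 1))
a_copeland(X, w = c(1, 1, 1))
```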
# Coins
We now turn to how data sets in a coin can be aggregated using the methods described previously. The function of interest is `Aggregate()`, which is a generic with methods for coins, purses and data frames. To demonstrate COINr's `Aggregate()` function on a coin, we begin by loading the package, and building the example coin, up to the normalised data set.
```{r setup}
library(COINr)
# build example up to normalised data set
coin <- build_example_coin(up_to = "Normalise")
```
Consider what is needed to aggregate the normalised data into its higher levels. We need:
* The data set to aggregate
* The structure of the index: which indicators belong to which groups, etc.
* Weights to assign to indicators
* Specifications for aggregation: an aggregation function (e.g. the weighted mean) and any other parameters to be passed to that function
All of these elements are already present in the coin, except the last. For the first point, we simply need to tell `Aggregate()` which data set to use (using the `dset` argument). The structure of the index was defined when building the coin in `new_coin()` (the `iMeta` argument). Weights were also attached to `iMeta`. Finally, specifications can be specified in the arguments of `Aggregate()`. Let's begin with the simple case though: using the function defaults.
```{r}
# aggregate normalised data set
coin <- Aggregate(coin, dset = "Normalised")
```
By default, the aggregation function performs the following steps:
* Uses the weights that were attached to `iMeta`
* Aggregates hierarchically (with default method of weighted arithmetic mean), following the index structure specified in `iMeta` and using the data specified in `dset`
* Creates a new data set `.$Data$Aggregated`, which consists of the data in `dset`, plus extra columns with scores for each aggregation group, at each aggregation level.
Let's examine the new data set. The columns of each level are added successively, working from level 1 upwards, so the highest aggregation level (the index, here) will be the last column of the data frame.
```{r}
dset_aggregated <- get_dset(coin, dset = "Aggregated")
nc <- ncol(dset_aggregated)
# view aggregated scores (last 11 columns here)
dset_aggregated[(nc - 10) : nc] |>
head(5) |>
signif(3)
```
Here we see the level 2 aggregated scores created by aggregating each group of indicators (the first eight columns), followed by the two sub-indexes (level 3) created by aggregating the scores of level 2, and finally the Index (level 4), which is created by aggregating the "Conn" and "Sust" sub-indexes.
The format of this data frame is not hugely convenient for inspecting the results. To see a more user-friendly version, use the `get_results()` function.
## COINr aggregation functions
Let's now explore some of the options of the `Aggregate()` function. Like other coin-building functions in COINr, `Aggregate()` comes with a number of inbuilt options, but can also accept any function that is passed to it, as long as it satisfies some requirements. COINr's inbuilt aggregation functions begin with `a_`, and are:
* `a_amean()`: the weighted arithmetic mean
* `a_gmean()`: the weighted geometric mean
* `a_hmean()`: the weighted harmonic mean
* `a_copeland()`: the Copeland method (note: requires `by_df = TRUE`)
By default, the arithmetic mean is called but we can easily change this to the geometric mean, for example. However here we run into a problem: the geometric mean will fail if any values to aggregate are less than or equal to zero. So to use the geometric mean we have to re-do the normalisation step to avoid this. Luckily this is straightforward in COINr:
```{r}
coin <- Normalise(coin, dset = "Treated",
global_specs = list(f_n = "n_minmax",
f_n_para = list(l_u = c(1,100))))
```
Now, since the indicators are scaled between 1 and 100 (instead of 0 and 100 as previously), they can be aggregated with the geometric mean.
```{r}
coin <- Aggregate(coin, dset = "Normalised",
f_ag = "a_gmean")
```
## External functions
All of the four aggregation functions mentioned above have the same format (try e.g. `?a_gmean`), and are built into the COINr package. But what if we want to use another type of aggregation function? The process is exactly the same.
In this section we use some functions from other packages: the matrixStats package and the Compind package. These are not imported by COINr, so the code here will only work if you have them installed. Since this vignette may be built on your computer, we first check whether these packages are installed:
```{r}
ms_installed <- requireNamespace("matrixStats", quietly = TRUE)
ms_installed
ci_installed <- requireNamespace("Compind", quietly = TRUE)
ci_installed
```
If either of these returns `FALSE`, you will see some blanks in the following code chunks. See the online version of this vignette to see the results, or install the above packages and rebuild the vignettes.
Now for an example, we can use the `weightedMedian()` function from the matrixStats package. This has a number of arguments, but the ones we will use are `x` and `w` (with the same meanings as COINr functions), and `na.rm` which we need to set to `TRUE`.
```{r, eval=ms_installed}
# load matrixStats package
library(matrixStats)
# aggregate using weightedMedian()
coin <- Aggregate(coin, dset = "Normalised",
f_ag = "weightedMedian",
f_ag_para = list(na.rm = TRUE))
```
The weights `w` do not need to be specified in `f_ag_para` because they are automatically passed to `f_ag` unless specified otherwise.
The general requirements for `f_ag` functions passed to `Aggregate()` are that:
1. The input to the function is a numeric vector `x`, possibly with missing values
2. The function returns a single (scalar) aggregated value
3. If the function accepts a vector of weights, this vector (of the same length of `x`) is passed as function argument `w`. If the function doesn't accept a vector of weights, we can set `w = "none"` in the arguments to `Aggregate()`, and it will not try to pass `w`.
4. Any other arguments to `f_ag`, apart from `x` and `w`, should be included in the named list `f_ag_para`.
Sometimes this may mean that we have to create a wrapper function to satisfy these requirements. For example, the 'Compind' package has a number of sophisticated aggregation approaches. The "benefit of the doubt" uses data envelopment analysis to aggregate indicators, however the function `Compind::ci_bod()` outputs a list. We can make a wrapper function to use this inside COINr:
```{r, eval=ci_installed}
# load Compind
suppressPackageStartupMessages(library(Compind))
# wrapper to get output of interest from ci_bod
# also suppress messages about missing values
ci_bod2 <- function(x){
suppressMessages(Compind::ci_bod(x)$ci_bod_est)
}
# aggregate
coin <- Aggregate(coin, dset = "Normalised",
f_ag = "ci_bod2", by_df = TRUE, w = "none")
```
The benefit of the doubt approach automatically assigns individual weights to each unit, so we need to specify `w = "none"` to stop `Aggregate()` from attempting to pass weights to the function. Importantly, we also need to specify `by_df = TRUE` which tells `Aggregate()` to pass a data frame to `f_ag` rather than a vector.
## Data availability limits
Many aggregation functions will return an aggregated value as long as at least one of the values passed to it is non-`NA`. For example, R's `mean()` function:
```{r}
# data with all NAs except 1 value
x <- c(NA, NA, NA, 1, NA)
mean(x)
mean(x, na.rm = TRUE)
```
Depending on how we set `na.rm`, we either get an answer or `NA`, and this is the same for many other aggregation functions (e.g. the ones built into COINr). Sometimes we might want a bit more control. For example, if we have five indicators in a group, it might only be reasonable to give an aggregated score if, say, at least three out of five indicators have non-`NA` values.
The `Aggregate()` function has the option to specify a data availability limit when aggregating. We simply set `dat_thresh` to a value between 0 and 1, and for each aggregation group, any unit that has a data availability lower than `dat_thresh` will get a `NA` value instead of an aggregated score. This is most easily illustrated on a data frame (see next section for more details on aggregating in data frames):
```{r}
df1 <- data.frame(
i1 = c(1, 2, 3),
i2 = c(3, NA, NA),
i3 = c(1, NA, 1)
)
df1
```
We will require that at least 2/3 of the indicators should be non-`NA` to give an aggregated value.
```{r}
# aggregate with arithmetic mean, equal weight and data avail limit of 2/3
Aggregate(df1, f_ag = "a_amean",
f_ag_para = list(w = c(1,1,1)),
dat_thresh = 2/3)
```
Here we see that the second row is aggregated to give `NA` because it only has 1/3 data availability.
## By level
We can also use a different aggregation function for each aggregation level by specifying `f_ag` as a vector of function names rather than a single function.
```{r}
coin <- Aggregate(coin, dset = "Normalised", f_ag = c("a_amean", "a_gmean", "a_amean"))
```
In this example, there are four levels in the index, which means there are three aggregation operations to be performed: from Level 1 to Level 2, from Level 2 to Level 3, and from Level 3 to Level 4. This means that `f_ag` vector must have `n-1` entries, where `n` is the number of aggregation levels. The functions are run in the order of aggregation.
In the same way, if parameters need to be passed to the functions specified by `f_ag`, `f_ag_para` can be specified as a list of length `n-1`, where each element is a list of parameters.
# Data frames
The `Aggregate()` function also works in the same way on data frames. This is probably more useful when aggregation functions take vectors as inputs, rather than data frames, since it would otherwise be easier to go directly to the underlying function. In any case, here are a couple of examples. First, using a built in COINr function to compute the weighted harmonic mean of a data frame.
```{r}
# get some indicator data - take a few columns from built in data set
X <- ASEM_iData[12:15]
# normalise to avoid zeros - min max between 1 and 100
X <- Normalise(X,
global_specs = list(f_n = "n_minmax",
f_n_para = list(l_u = c(1,100))))
# aggregate using harmonic mean, with some weights
y <- Aggregate(X, f_ag = "a_hmean", f_ag_para = list(w = c(1, 1, 2, 1)))
cbind(X, y) |>
head(5) |>
signif(3)
```
# Purses
The purse method for `Aggregate()` is straightforward and simply applies the same aggregation specifications to each of the coins within. It has exactly the same parameters as the coin method.
```{r}
# build example purse up to normalised data set
purse <- build_example_purse(up_to = "Normalise", quietly = TRUE)
# aggregate using defaults
purse <- Aggregate(purse, dset = "Normalised")
```
# What next?
After aggregating indicators, it is likely that you will want to begin viewing and exploring the results. See the vignette on [Exploring results](results.html) for more details.
---
title: "Analysis"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Analysis}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette describes the "analysis" features of COINr. By this, we mean functions that retrieve statistical measures from the coin in various ways. This excludes things like sensitivity analysis, which involves tinkering with the construction methodology.
In short, here we discuss obtaining indicator statistics, correlations, data availability, and some slightly more complex ideas such as Cronbach's alpha and principal component analysis.
# Indicator statistics
Indicator statistics can be obtained using the `get_stats()` function.
```{r}
# load COINr
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# get table of indicator statistics for raw data set
stat_table <- get_stats(coin, dset = "Raw", out2 = "df")
```
The resulting data frame has 18 columns, which is hard to display concisely here. Therefore we will look at the columns in groups of five.
```{r}
head(stat_table[1:5], 5)
```
Each row is one of the indicators from the targeted data set. Then columns are statistics, here obvious things like the minimum, maximum, mean and median.
```{r}
head(stat_table[6:10], 5)
```
In the first three columns here we find the standard deviation, skewness and kurtosis. The remaining two columns are "N.Avail", which is the number of non-`NA` data points, and "N.NonZero", the number of non-zero points. The latter can be of interest because some indicators may have a high proportion of zeroes, which can be problematic.
```{r}
head(stat_table[11:15], 5)
```
Here we have "N.Unique", which is the number of unique data points (i.e. excluding duplicate values). The following three columns are similar to previous columns, e.g. "Frc.Avail" is the fraction of data availability, as opposed to the number of available points (N.Avail). The final column, "Flag.Avail", is a logical flag: if the data availability ("Frc.Avail") is below the limit specified by the `t_avail` argument of `get_stats()`, it will be flagged as "LOW".
```{r}
head(stat_table[16:ncol(stat_table)], 5)
```
The first two of these final columns are analogous to "Flag.Avail" and have thresholds which are controlled by arguments to `get_stats()`. The final column is a basic test of outliers which is commonly used in composite indicators, for example in the [COIN Tool](https://knowledge4policy.ec.europa.eu/composite-indicators/coin-tool_en). This is the same process as used in `check_SkewKurt()` which will flag "OUT" if the absolute skewness is greater than a set threshold (default 2) AND kurtosis is greater than a threshold (default 3.5). In short, indicators that are flagged here could be considered for outlier treatment.
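For example, to flag indicators more strictly on data availability we can pass a higher `t_avail` (here hypothetically requiring 90% availability):
```{r}
# recompute statistics, flagging indicators below 90% data availability
stat_table_strict <- get_stats(coin, dset = "Raw", t_avail = 0.9, out2 = "df")
head(stat_table_strict[11:15], 5)
```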
# Unit data availability
The same kind of analysis can be performed for units, rather than indicators. Here, the main thing of interest is data availability, which can be obtained through the `get_data_avail()` function.
```{r}
l_dat <- get_data_avail(coin, dset = "Raw", out2 = "list")
str(l_dat, max.level = 1)
```
Here we see the output is a list with two data frames. The first is a summary for each unit:
```{r}
head(l_dat$Summary, 5)
```
Each unit has its number of missing points, zero points, missing-or-zero points, as well as the percentage data availability and percentage non-zero. The "ByGroup" data frame gives data availability within aggregation groups:
```{r}
head(l_dat$ByGroup[1:5], 5)
```
Here we just view the first few columns to save space. The values are the fraction of indicator availability within each aggregation group.
# Correlations
Correlations can be obtained and viewed directly using the `plot_corr()` function which is explained in the [Visualisation](visualisation.html) vignette. Here, we explore the functions for obtaining correlation matrices, flags and p-values.
The most general-purpose function for obtaining correlations between indicators is `get_corr()` function (which is called by `plot_corr()`). This allows almost any set of indicators/aggregates to be correlated against almost any other set. We won't go over the full functionality here because this is covered in [Visualisation](visualisation.html) vignette. However to demonstrate a couple of examples we begin by building the full example coin up to the aggregated data set.
```{r}
coin <- build_example_coin(quietly = TRUE)
```
Now we can take some examples. First, to get the correlations between indicators within the Environmental group:
```{r}
# get correlations
cmat <- get_corr(coin, dset = "Raw", iCodes = list("Environ"), Levels = 1)
# examine first few rows
head(cmat)
```
Here we see that the default output of `get_corr()` is a long-format correlation table. If you want the wide format, set `make_long = FALSE`.
```{r}
# get correlations
cmat <- get_corr(coin, dset = "Raw", iCodes = list("Environ"), Levels = 1, make_long = FALSE)
# examine first few rows
round_df(head(cmat), 2)
```
This gives the more classical-looking correlation matrix, although the long format can sometimes be easier to work with for further processing. One further option that is worth mentioning is `pval`: by default, `get_corr()` will return as `NA` any correlations with a p-value greater than 0.05, indicating that these correlations are insignificant at this significance level. You can adjust this threshold by changing `pval`, or disable it completely by setting `pval = 0`.
On the subject of p-values, COINr includes a `get_pvals()` function which can be used to get p-values of correlations between a supplied matrix or data frame. This cannot be used directly on a coin and is more of a helper function but may still be useful.
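As a small sketch of its use, taking a few raw indicator columns (here picking "Goods", "Services" and "FDI" as examples):
```{r}
# p-values of correlations between a few indicators
X <- get_dset(coin, dset = "Raw")[c("Goods", "Services", "FDI")]
get_pvals(X)
```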
Two further functions are of interest regarding correlations. The first is `get_corr_flags()`. This is a useful function for finding correlations between indicators that exceed or fall below a given threshold, within aggregation groups:
```{r}
get_corr_flags(coin, dset = "Normalised", cor_thresh = 0.75,
thresh_type = "high", grouplev = 2)
```
Here we have identified any correlations above 0.75, from the "Normalised" data set, that are between indicators in the same group in level 2. Actually 0.75 is quite low for searching for "high correlations", but it is used as an example here because the example data set doesn't have any very high correlations.
By switching `thresh_type = "low"` we can similarly look for low/negative correlations:
```{r}
get_corr_flags(coin, dset = "Normalised", cor_thresh = -0.5,
thresh_type = "low", grouplev = 2)
```
Our example has some fairly significant negative correlations! All within the "Institutional" group, and with the Technical Barriers to Trade indicator.
A final function to mention is `get_denom_corr()`. This is related to the operation of denominating indicators (see [Denomination](denomination.html) vignette), and identifies any indicators that are correlated (using the absolute value of correlation) above a given threshold with any of the supplied denominators. This can help to identify in some cases *whether* to denominate an indicator and *with what* - i.e. if an indicator is strongly related to a denominator, it is dependent on it, which may be a reason to denominate.
```{r}
get_denom_corr(coin, dset = "Raw", cor_thresh = 0.7)
```
Using a threshold of 0.7, and examining the raw data set, we see that several indicators are strongly related to the denominators, a classical example being export value of goods (Goods) being well correlated with GDP. Many of these pairs flagged here are indeed used as denominators in the ASEM example, but also for conceptual reasons.
# Multivariate tools
A first simple tool is to calculate Cronbach's alpha. This can be done with any group of indicators, either the full set, or else targeting specific groups.
```{r}
get_cronbach(coin, dset = "Raw", iCodes = "P2P", Level = 1)
```
This simply calculates Cronbach's alpha (a measure of statistical consistency) for the "P2P" group (People to People connectivity, in this case).
Another multivariate analysis tool is principal component analysis (PCA). Although, like correlation, this is built into base R, the `get_PCA()` function makes it easier to obtain PCA for groups of indicators, following the structure of the index.
```{r}
l_pca <- get_PCA(coin, dset = "Raw", iCodes = "Sust", out2 = "list")
```
The function can return its results either as a list, or appended to the coin if `out2 = "coin"`. Here the output is a list and we will explore its output. First note the warnings above due to missing data, which can be suppressed using `nowarnings = TRUE`. The output list looks like this:
```{r}
str(l_pca, max.level = 1)
```
I.e. we have a data frame of "PCA weights" and some PCA results. We ignore the weights for the moment and look closer at the PCA results:
```{r}
str(l_pca$PCAresults, max.level = 2)
```
By default, `get_PCA()` will run a separate PCA for each aggregation group within the specified level. In this case, it has run three: one for each of the "Environ", "Social" and "SusEcFin" groups. Each of these contains `wts`, a set of PCA weights for that group, `PCAres`, which is the direct output of `stats::prcomp()`, and `iCodes`, which is the corresponding vector of indicator codes for the group.
We can do some basic PCA analysis using base R's tools using the "PCAres" objects, e.g.:
```{r}
# summarise PCA results for "Social" group
summary(l_pca$PCAresults$Social$PCAres)
```
See `stats::prcomp()` and elsewhere for more resources on PCA in R.
Now turning to the weighting, `get_PCA()` also outputs a set of "PCA weights". These are output attached to the list as shown above, or if `out2 = "coin"`, will be written to the weights list inside the coin if `weights_to` is also specified. See the [Weights](weights.html) vignette for some more details on this. Note that PCA weights come with a number of caveats: please read the documentation for `get_PCA()` for a discussion on this.
---
title: "Building coins"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Building coins}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(COINr)
```
This vignette gives a guide to building "coins", which are the object class representing a composite indicator used throughout COINr, and "purses", which are time-indexed collections of coins.
# What is a coin?
COINr functions are designed to work in particular on an S3 object class called a "coin". To introduce this, consider what constitutes a composite indicator:
* The indicator data
* Indicator metadata, including weights and directions
* A structure which maps indicators into groups for aggregation, typically over multiple levels
* Methodological specifications, including
- Data treatment
- Normalisation method and parameters
- Aggregation method and parameters
* Processed data sets at each stage of the construction
* Resulting aggregated scores and ranks
Meanwhile, in the process of building a composite indicator, a series of analysis data is generated, including information on data availability, statistics on individual indicators, correlations and information about data treatment.
If a composite indicator is built from scratch, it is easy to generate an environment with dozens of variables and parameters. If an alternative version of the composite indicator is built, multiple sets of variables may need to be generated. With this in mind, it makes sense to structure all the ingredients of a composite indicator, from input data to methodology and results, into a single object, which is called a "coin" in COINr.
How to construct a coin, and some details of its contents, will be explained in more detail in the following sections. Although coins are the main object class used in COINr, a number of COINr functions also have methods for data frames and vectors. This is explained in other vignettes.
# Building coins
To build a coin you need to use the `new_coin()` function. The main two input arguments of this function are two data frames: `iData` (the indicator data), and `iMeta` (the indicator metadata). This builds a coin class object containing the raw data, which can then be developed and expanded by COINr functions by e.g. normalising, treating data, imputing, aggregating and so on.
Before proceeding, we have to define a couple of things. The "things" that are being benchmarked/compared by the indicators and composite indicator are more generally referred to as *units* (quite often, units correspond to countries). Units are compared using *indicators*, which are measured variables that are relevant to the overall concept of the composite indicator.
## Indicator data
The first data frame, `iData` specifies the value of each indicator, for each unit. It can also contain further attributes and metadata of units, for example groups, names, and denominating variables (variables which are used to adjust for size effects of indicators).
To see an example of what `iData` looks like, we can look at the built in [ASEM](https://composite-indicators.jrc.ec.europa.eu/asem-sustainable-connectivity/) data set. This data set is from a composite indicator covering 51 countries with 49 indicators, and is used for examples throughout COINr:
```{r}
head(ASEM_iData[1:20], 5)
```
Here only a few rows and columns are shown to illustrate. The ASEM data covers 51 Asian and European countries, at the national level, and uses 49 indicators. Notice that each row is an observation (here, a country), and each column is a variable (mostly indicators, but also other things).
Columns can be named whatever you want, although a few names are reserved:
* `uName` [optional] gives the name of each unit. Here, units are countries, so these are the names of each country.
* `uCode` [**required**] is a unique code assigned to each unit (country). This is the main "reference" inside COINr for units. If the units are countries, ISO Alpha-3 codes should ideally be used, because these are recognised by COINr for generating maps.
* `Time` [optional] gives the reference time of the data. This is used if panel data is passed to `new_coin()`. See [Purses and panel data].
This means that at a minimum, you need to supply a data frame with a `uCode` column, and some indicator columns.
Aside from the reserved names above, columns can be assigned to different uses using the corresponding `iMeta` data frame - this is clarified in the next section.
Some important rules and tips to keep in mind are:
* Columns don't have to be in any particular order; they are identified by names rather than positions.
* Indicator columns are required to be numeric, i.e. they cannot be character vectors.
* There is no restriction on the number of indicators and units.
* Indicator codes and unit codes must have unique names.
* As with everything in R, all codes are case-sensitive.
* Don't start any column names with a number!
The `iData` data frame will be checked when it is passed to `new_coin()`. You can also perform this check yourself in advance by calling `check_iData()`:
```{r}
check_iData(ASEM_iData)
```
If there are issues with your `iData` data frame this should produce informative error messages which can help to correct the problem.
## Indicator metadata
The `iMeta` data frame specifies everything about each column in `iData`, including whether it is an indicator, a group, or something else; its name, its units, and where it appears in the *structure* of the index. `iMeta` also requires entries for any aggregates which will be created by aggregating indicators. Let's look at the built-in example.
```{r}
head(ASEM_iMeta, 5)
```
Required columns for `iMeta` are:
* `Level`: The level in aggregation, where 1 is indicator level, 2 is the level resulting from aggregating
indicators, 3 is the result of aggregating level 2, and so on. Set to `NA` for entries that are not included in the index (groups, denominators, etc).
* `iCode`: Indicator code, alphanumeric. Must not start with a number. These entries generally correspond to the column names of `iData`.
* `Parent`: Group (`iCode`) to which indicator/aggregate belongs in level immediately above. Each entry here should also be found in `iCode`. Set to `NA` only for the highest (Index) level (no parent), or for entries that are not included in the index (groups, denominators, etc).
* `Direction`: Numeric, either -1 or 1
* `Weight`: Numeric weight, will be re-scaled to sum to 1 within aggregation group. Set to `NA` for entries that are not included in the index (groups, denominators, etc).
* `Type`: The type, corresponding to `iCode`. Can be either `Indicator`, `Aggregate`, `Group`, `Denominator`, or `Other`.
Optional columns that are recognised in certain functions are:
* `iName`: Name of the indicator: a longer name which is used in some plotting functions.
* `Denominator`: specifies which denominator variable should be used to denominate the indicator, if `Denominate()` is called. See the [Denomination](denomination.html) vignette.
* `Unit`: the unit of the indicator, e.g. USD, thousands, score, etc. Used in some plots if available.
* `Target`: a target for the indicator. Used if normalisation type is distance-to-target.
`iMeta` can also include other columns if needed for specific uses, as long as they don't use the names listed above.
The `iMeta` data frame essentially gives details about each of the columns found in `iData`, as well as details about additional data columns eventually created by aggregating indicators. This means that the entries in `iMeta` must include *all* columns in `iData`, *except* the three "special" column names: `uCode`, `uName`, and `Time`. In other words, all column names of `iData` should appear in `iMeta$iCode`, except the three special cases mentioned.
The `Type` column specifies the type of the entry: `Indicator` should be used for indicators at level 1.
`Aggregate` for aggregates created by aggregating indicators or other aggregates. Otherwise set to `Group`
if the variable is not used for building the index but instead is for defining groups of units. Set to
`Denominator` if the variable is to be used for scaling (denominating) other indicators. Finally, set to
`Other` if the variable should be ignored but passed through. Any other entries here will cause an error.
Apart from the indicator entries shown above, we can see aggregate entries:
```{r}
ASEM_iMeta[ASEM_iMeta$Type == "Aggregate", ]
```
These are the aggregates that will be created by aggregating indicators. These values will only be created when we call the `Aggregate()` function (see relevant vignette). We also have groups:
```{r}
ASEM_iMeta[ASEM_iMeta$Type == "Group", ]
```
Notice that the `iCode` entries here correspond to column names of `iData`. There are also denominators:
```{r}
ASEM_iMeta[ASEM_iMeta$Type == "Denominator", ]
```
Denominators are used to divide or "scale" other indicators. They are ideally included in `iData` because this ensures that they match the units and possibly the time points.
The `Parent` column requires a few extra words. This is used to define the structure of the index. Simply put, it specifies the aggregation group to which the indicator or aggregate belongs, in the level immediately above. For indicators in level 1, this should refer to `iCode`s in level 2, and for aggregates in level 2, it should refer to `iCode`s in level 3. Every entry in `Parent` must refer to an entry that can be found in the `iCode` column, or else be `NA` for the highest aggregation level or for groups, denominators and other `iData` columns that are not included in the index.
The `iMeta` data frame is more complex than `iData` and it may be easy to make errors. Use the `check_iMeta()` function (which is anyway called by `new_coin()`) to check the validity of your `iMeta`. Informative error messages are included where possible to help correct any errors.
```{r}
check_iMeta(ASEM_iMeta)
```
When `new_coin()` is run, additional cross-checks are run between `iData` and `iMeta`.
## Building with `new_coin()`
With the `iData` and `iMeta` data frames prepared, you can build a coin using the `new_coin()` function. This has some other arguments and options that we will see in a minute, but by default it looks like this:
```{r}
# build a new coin using example data
coin <- new_coin(iData = ASEM_iData,
iMeta = ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
```
The `new_coin()` function checks and cross-checks both input data frames, and outputs a coin-class object. It also tells us that it has written a data set to `.$Data$Raw` - this is the sub-list that contains the various data sets that will be created each time we run a coin-building function.
We can see a summary of the coin by calling the coin print method - this is done simply by calling the name of the coin at the command line, or equivalently `print(coin)`:
```{r}
coin
```
This tells us some details about the coin - the number of units, indicators, denominators and groups; the structure of the index (notice that the `level_names` argument is used to describe each level), and the data sets present in the coin. Currently this only consists of the "Raw" data set, which is the data set that is created by default when we run `new_coin()`, and simply consists of the indicator data plus the `uCode` column. Indeed, we can retrieve any data set from within a coin at any time using the `get_dset()` function:
```{r}
# first few cols and rows of Raw data set
data_raw <- get_dset(coin, "Raw")
head(data_raw[1:5], 5)
```
By default, calling `get_dset()` returns only the unit code plus the indicator/aggregate columns. We can also attach other columns such as groups and names by using the `also_get` argument. This can be used to attach any of the `iData` "metadata" columns that were originally passed when calling `new_coin()`, such as groups, etc.
```{r}
get_dset(coin, "Raw", also_get = c("uName", "Pop_group"))[1:5] |>
head(5)
```
Apart from the `level_names` argument, `new_coin()` also gives the possibility to only pass forward a subset of the indicators in `iMeta`. This is done using the `exclude` argument, which is useful when testing alternative sets of indicators - see vignette on adjustments and comparisons.
```{r}
# exclude two indicators
coin <- new_coin(iData = ASEM_iData,
iMeta = ASEM_iMeta,
level_names = c("Indicator", "Pillar", "Sub-index", "Index"),
exclude = c("LPI", "Flights"))
coin
```
Here, `new_coin()` has removed the indicator columns from `iData` and the corresponding entries in `iMeta`. However, the full original `iData` and `iMeta` tables are still stored in the coin.
The `new_coin()` function includes a thorough series of checks on its input arguments which may cause some initial errors while the format is corrected. The objective is that if you can successfully assemble a coin, this should work smoothly for all COINr functions.
# Example coin
COINr includes a built in example coin which is constructed using a function `build_example_coin()`. This can be useful for learning how the package works, testing and is used in COINr documentation extensively because many functions require a coin as an input. Here we build the example coin (which is again from the ASEM data set built into COINr) and inspect its contents:
```{r}
ASEM <- build_example_coin(quietly = TRUE)
ASEM
```
This shows that the example is a fully populated coin with various data sets, each resulting from running COINr functions, up to the aggregation step.
# Purses and panel data
A coin offers a very wide methodological flexibility, but some things are kept fixed throughout. One is that the set of indicators does not change once the coin has been created. The other thing is that each coin represents a single point in time.
If you have panel data, i.e. multiple observations for each unit-indicator pair, indexed by time, then `new_coin()` allows you to create multiple coins in one go. Coins are collected into a single object called a "*purse*", and many COINr functions work on purses directly.
Here we simply explore how to create a purse. The procedure is almost the same as creating a coin: you need the `iData` and `iMeta` data frames, and you call `new_coin()`. The difference is that `iData` must now have a `Time` column, which must be a numeric column which records which time point each observation is from. To see an example, we can look at the built-in (artificial) panel data set `ASEM_iData_p`.
```{r}
# sample of 2018 observations
ASEM_iData_p[ASEM_iData_p$Time == 2018, 1:15] |>
head(5)
# sample of 2019 observations
ASEM_iData_p[ASEM_iData_p$Time == 2019, 1:15] |>
head(5)
```
This data set has five years of data, spanning 2018-2022 (the data are artificially generated - at some point I will replace this with a real example). This means that each row now corresponds to a set of indicator values for a unit, for a given time point.
To build a purse from this data, we input it into `new_coin()`
```{r}
# build purse from panel data
purse <- new_coin(iData = ASEM_iData_p,
iMeta = ASEM_iMeta,
split_to = "all",
quietly = TRUE)
```
Notice here that the `iMeta` argument is the same as when we assembled a single coin - this is because a purse is supposed to consist of coins with the same indicators and structure, i.e. the aim is to calculate a composite indicator over several points in time, and generally to apply the same methodology to all coins in the purse. It is however possible to have different units between coins in the same purse - this might occur because of data availability differences at different time points.
The `split_to` argument should be set to `"all"` to create a coin from each time point found in the data. Alternatively, you can only include a subset of time points by specifying them as a vector.
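For example, to build coins only for two of the available years:
```{r}
# build a purse containing only the 2018 and 2019 coins
purse_2 <- new_coin(iData = ASEM_iData_p,
                    iMeta = ASEM_iMeta,
                    split_to = c(2018, 2019),
                    quietly = TRUE)
```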
A quick way to check the contents of the purse is to call its print method:
```{r}
purse
```
This tells us how many coins there are, the number of indicators and units, and gives some structural information from one of the coins.
A purse is an S3 class object like a coin. In fact, it is simply a data frame with a `Time` column and a `coin` column, where entries in the `coin` column are coin objects (in a so-called "list column"). This is convenient to work with, but if you try to view it in R Studio, for example, it can be a little messy.
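To see this structure directly, we can inspect the purse like any other data frame:

```{r, eval=FALSE}
# the purse is a data frame with a Time column and a list-column of coins
names(purse)
# each entry of the "coin" column is itself a coin object
class(purse$coin[[1]])
```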
As with coins, the purse class also has a function in COINr which produces an example purse:
```{r}
ASEM_purse <- build_example_purse(quietly = TRUE)
ASEM_purse
```
The purse class can be used directly with COINr functions - this allows you to impute/normalise/treat/aggregate all coins with a single command, for example.
# Summary
COINr is mostly designed to work with coins and purses. However, many key functions also have methods for data frames or vectors. This means that COINr can either be used as an "ecosystem" of functions built around coins and purses, or else can just be used as a toolbox for doing your own work with data frames and other objects.
---
title: "Data selection"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Data selection}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette describes how to retrieve data from a coin. The main functions to do this are `get_dset()` and the more flexible `get_data()`.
These functions are important to understand, because many COINr functions use them to retrieve data for plotting, analysis and other functions. Both functions are *generics*, which means that they have methods for coins and purses.
# Data sets
Every time a "building" operation is applied to a coin, such as `Treat()`, `Screen()`, `Normalise()` and so on, a new data set is created. Data sets live in the `.$Data` sub-list of the coin. We can retrieve a data set at any time using the `get_data()` function:
```{r}
library(COINr)
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# retrieve normalised data set
dset_norm <- get_dset(coin, dset = "Normalised")
# view first few rows and cols
head(dset_norm[1:5], 5)
```
By default, a data set in the coin consists of indicator columns plus the "uCode" column, which is the unique identifier of each row. You can also ask to attach unit metadata columns, such as unit names, groups, and anything else that was input when building the coin, using the `also_get` argument:
```{r}
# retrieve normalised data set
dset_norm2 <- get_dset(coin, dset = "Normalised", also_get = c("uName", "GDP_group"))
# view first few rows and cols
head(dset_norm2[1:5], 5)
```
# Data subsets
While `get_dset()` is a quick way to retrieve an entire data set and metadata, the `get_data()` function is a generalisation: it can be used to obtain a whole data set, but also subsets of data, based on e.g. indicator selection and grouping (columns), as well as unit selection and grouping (rows).
## Indicators/columns
A simple example is to extract one or more named indicators from a target data set:
```{r}
x <- get_data(coin, dset = "Raw", iCodes = c("Flights", "LPI"))
# see first few rows
head(x, 5)
```
By default, `get_data()` returns the requested indicators, plus the `uCode` identifier column. We can also set `also_get = "none"` to return only the indicator columns.
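For example, a quick sketch of the same selection returning only the indicator columns:

```{r, eval=FALSE}
x <- get_data(coin, dset = "Raw", iCodes = c("Flights", "LPI"), also_get = "none")
head(x, 5)
```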
The `iCodes` argument can also accept groups of indicators, based on the structure of the index. In our example, indicators are aggregated into "pillars" (level 2) within groups. We can name an aggregation group and extract the underlying indicators:
```{r}
x <- get_data(coin, dset = "Raw", iCodes = "Political", Level = 1)
head(x, 5)
```
Here we have requested all the indicators in level 1 (the indicator level), that belong to the group called "Political" (one of the pillars). Specifying the level becomes more relevant when we look at the aggregated data set, which also includes the pillar, sub-index and index scores. Here, for example, we can ask for all the pillar scores (level 2) which belong to the sustainability sub-index (level 3):
```{r}
x <- get_data(coin, dset = "Aggregated", iCodes = "Sust", Level = 2)
head(x, 5)
```
If this isn't clear, look at the structure of the example index using e.g. `plot_framework(coin)`. If we wanted to select all the indicators within the "Sust" sub-index we would set `Level = 1`. If we wanted to select the sub-index scores themselves we would set `Level = 3`, and so on.
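As a sketch of the two selections just described:

```{r, eval=FALSE}
# all indicators (level 1) within the "Sust" sub-index
x_ind <- get_data(coin, dset = "Aggregated", iCodes = "Sust", Level = 1)
# the "Sust" sub-index scores themselves (level 3)
x_subind <- get_data(coin, dset = "Aggregated", iCodes = "Sust", Level = 3)
```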
The idea of selecting indicators and aggregates based on the structure of the index is useful in many places in COINr, for example examining correlations within aggregation groups using `plot_corr()`.
## Units/rows
Units (rows) of the data set can also be selected (also in combination with selecting indicators). Starting with a simple example, let's select specified units for a specific indicator:
```{r}
get_data(coin, dset = "Raw", iCodes = "Goods", uCodes = c("AUT", "VNM"))
```
Rows can also be sub-setted using groups, i.e. unit groupings that are defined using variables input with `iMeta$Type = "Group"` when building the coin. Recall that for our example coin we have several groups (a reminder that you can see some details about the coin using its print method):
```{r}
coin
```
The first way to subset by unit group is to name a grouping variable, and a group within that variable to select. For example, say we want to know the values of the "Goods" indicator for all the countries in the "XL" GDP group:
```{r}
get_data(coin, dset = "Raw", iCodes = "Goods", use_group = list(GDP_group = "XL"))
```
Since we have subsetted by group, this also returns the group column which was used.
Another way of sub-setting is to combine `uCodes` and `use_group`. When these two arguments are both specified, the result is to return the full group(s) to which the specified `uCodes` belong. This can be used to put a unit in context with its peers within a group. For example, we might want to see the values of the "Flights" indicator for a specific unit, as well as all other units within the same population group:
```{r}
get_data(coin, dset = "Raw", iCodes = "Flights", uCodes = "MLT", use_group = "Pop_group")
```
Here, we have to specify `use_group` simply as a string rather than a list. Since MLT is in the "S" population group, it returns all units within that group.
Overall, the idea of `get_data()` is to flexibly return subsets of indicator data, based on the structure of the index and unit groups.
# Manual selection
As a final point, it's worth pointing out that a coin is simply a list of R objects such as data frames, other lists, vectors and so on. It has a particular format which allows things to be easily accessed by COINr functions. But other than that, it's an ordinary R object. This means that even without the helper functions mentioned above, you can get at the data simply by exploring the coin yourself.
The data sets live in the `.$Data` sub-list of the coin:
```{r}
names(coin$Data)
```
And we can access any of these directly:
```{r}
data_raw <- coin$Data$Raw
head(data_raw[1:5], 5)
```
The metadata lives in the `.$Meta` sub-list. For example, the unit metadata, which includes groups, names etc:
```{r}
str(coin$Meta$Unit)
```
The point is that if COINr tools don't get you where you want to go, knowing your way around the coin allows you to access the data exactly how you want.
---
title: "Denomination"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Denomination}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Denomination is the process of scaling one indicator by another quantity to adjust for the effect of size. This is because many indicators are linked to the unit's size (economic size, physical size, population, etc.) in one way or another, and if no adjustments were made, a composite indicator would end up with the largest units at the top and the smallest at the bottom. Often, the adjustment is made by dividing the indicator by a so-called "denominator" or a denominating variable. If units are countries, denominators are typically things like GDP, population or land area.
COINr's `Denominate()` function allows you to quickly perform this operation in a flexible and reproducible way. As with other building functions, it is a *generic* which means that it has different methods for data frames, coins and purses. They are however all fairly similar.
# Data frames
We'll begin by demonstrating denomination on a data frame. We'll use the in-built data set to get a small sample of indicators:
```{r}
library(COINr)
# Get a sample of indicator data (note must be indicators plus a "uCode" column)
iData <- ASEM_iData[c("uCode", "Goods", "Flights", "LPI")]
head(iData)
```
This is the raw indicator data for three indicators, plus the "uCode" column which identifies each unit. We will also get some data for denominating the indicators. COINr has an in-built set of denominator data called `WorldDenoms`:
```{r}
head(WorldDenoms)
```
Now, the main things to specify in denomination are which indicators to denominate, and by what. In other words, we need to map the indicators to the denominators. In the example, the export of goods should be denominated by GDP, passenger flight capacity by population (GDP could also possibly be reasonable), and "LPI" (the logistics performance index) is an intensive variable that does not need to be denominated.
This specification is passed to `Denominate()` using the `denomby` argument. This takes a data frame which includes "iCode" (the name of the column to be denominated), "Denominator" (the column name of the denominator data frame to use), and "ScaleFactor" (a multiplying factor to apply, if needed). We create this data frame here:
```{r}
# specify how to denominate
denomby <- data.frame(iCode = c("Goods", "Flights"),
Denominator = c("GDP", "Population"),
ScaleFactor = c(1, 1000))
```
A second important consideration is that the rows of the indicators and the denominators need to be matched, so that each unit is denominated by the value corresponding to that unit, and not another unit. Notice that the `WorldDenoms` data frame covers more or less all countries in the world, whereas the sample indicators only cover 51 countries. The matching is performed inside the `Denominate()` function, using an identifier column which must be present in both data frames. Here, our common column is "uCode", which is already found in both data frames. This is also the default column name expected by `Denominate()`, so we don't even need to specify it. If you have other column names, use the `x_ID` and `denoms_ID` arguments to pass these names to the function.
Ok so now we are ready to denominate:
```{r}
# Denominate one by the other
iData_den <- Denominate(iData, WorldDenoms, denomby)
head(iData_den)
```
The function has matched each unit in `iData` with its corresponding denominator value in `WorldDenoms` and divided the former by the latter. As expected, "Goods" and "Flights" have changed, but "LPI" has not because it was not included in the `denomby` data frame.
Otherwise, the only other feature to mention is the `f_denom` argument, which allows functions other than the division operator to be used. See the function documentation.
# Coins
Now let's look at denomination inside a coin. The main difference here is that the information needed to denominate the indicators may already be present inside the coin. When creating the coin using `new_coin()`, there is the option to specify denominating variables as part of `iData` (these are variables where `iMeta$Type = "Denominator"`), and to specify in `iMeta` the mapping between indicators and denominators, using the `iMeta$Denominator` column. To see what this looks like:
```{r}
# first few rows of the example iMeta, selected cols
head(ASEM_iMeta[c("iCode", "Denominator")])
```
The entries in "Denominator" correspond to column names that are present in `iData`:
```{r}
# see names of example iData
names(ASEM_iData)
```
So in our example, all the information needed to denominate is already present in the coin - the denominator data, and the mapping. In this case, to denominate, we simply call:
```{r}
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# denominate (here, we only need to say which dset to use)
coin <- Denominate(coin, dset = "Raw")
```
If the denomination data and/or mapping isn't present in the coin, or we wish to try an alternative specification, we can also pass this to `Denominate()` using the `denoms` and `denomby` arguments as in the previous section.
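For example, a minimal sketch (not evaluated here) of passing the denominator data and mapping explicitly to the coin method, reusing the `denomby` data frame from the previous section:

```{r, eval=FALSE}
coin <- Denominate(coin, dset = "Raw",
                   denoms = WorldDenoms,
                   denomby = denomby)
```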
This concludes the main features of `Denominate()` for a coin. Before moving on, consider that denomination needs extra care because it radically changes the indicator. It is a nonlinear transformation because each data point is divided by a different value. To demonstrate, consider the "Flights" indicator that we just denominated - let's plot the raw indicator against the denominated version:
```{r}
plot_scatter(coin, dsets = c("Raw", "Denominated"), iCodes = "Flights")
```
This shows that the raw and denominated indicators bear very little resemblance to one another.
# Purses
The final method for `Denominate()` is for purses. The purse method is exactly the same as the coin method, except applied to a purse.
An important consideration here is that denominator variables can and do vary with time, just like indicators. This means that e.g. "Total value of exports" from 2019 should be divided by GDP from 2019, and not from another year. In other words, denominators are panel data just like the indicators.
This is why denominators are ideally input as part of `iData` when calling `new_coin()`. In doing so, denominators are another column of the data frame like the indicators, and must have an entry for each unit/time pair. This also ensures that the unit-matching of denominator and indicator is correct (or more accurately, I leave that up to you!).
In our example purse, the denominator data is already included, as is the mapping. This means that denomination is exactly the same operation as denominating a coin:
```{r}
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# denominate using data/specs already included in coin
purse <- Denominate(purse, dset = "Raw")
```
In fact if you try to pass denominator data to `Denominate()` for a purse via `denoms`, there is a catch: at the moment, `denoms` does not support panel data, so it is required to use the same value for each time point. This is not ideal and may be sorted out in future releases. For now, it is better to denominate purses by passing all the specifications via `iData` and `iMeta` when building the purse with `new_coin()`.
---
title: "Imputation"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Imputation}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Introduction
Imputation is the process of estimating missing data points. To get started with imputation, a reasonable first step is to see how much missing data we have in the data set. We begin by building the example coin, up to the point of assembling the coin, but not any further:
```{r}
library(COINr)
ASEM <- build_example_coin(up_to = "new_coin", quietly = TRUE)
```
To check missing data, the `get_data_avail()` function can be used. It can output to either the coin or to a list -- here we output to a list to readily display the results.
```{r}
l_avail <- get_data_avail(ASEM, dset = "Raw", out2 = "list")
```
The output list has data availability by unit:
```{r}
head(l_avail$Summary)
```
The lowest data availability by unit is:
```{r}
min(l_avail$Summary$Dat_Avail)
```
We can also check data availability by indicator. This is done by calling `get_stats()`:
```{r}
df_avail <- get_stats(ASEM, dset = "Raw", out2 = "df")
head(df_avail[c("iCode", "N.Avail", "Frc.Avail")], 10)
```
By indicator, the minimum data availability is:
```{r}
min(df_avail$Frc.Avail)
```
With missing data, several options are available:
1. Leave it as it is and aggregate anyway (there is also the option for data availability thresholds during aggregation - see [Aggregation](aggregate.html))
2. Consider removing indicators that have low data availability (this has to be done manually because it affects the structure of the index)
3. Consider removing units that have low data availability (see [Unit Screening](screening.html))
4. Impute missing data
These options can also be combined. Here, we focus on the option of imputation.
# Data frames
The `Impute()` function is a flexible function that imputes missing data in a data set using any suitable function that can be passed to it. In fact, `Impute()` is a *generic*, and has methods for coins, data frames, numeric vectors and purses.
Let's begin by examining the data frame method of `Impute()`, since it is easier to see what's going on. We will use a small data frame which is easy to visualise:
```{r}
# some data to use as an example
# this is a selected portion of the data with some missing values
df1 <- ASEM_iData[37:46, 36:39]
print(df1, row.names = FALSE)
```
In the simplest case, imputation can be performed column-wise, i.e. by imputing each indicator one at a time:
```{r}
Impute(df1, f_i = "i_mean")
```
Here, the "Raw" data set has been imputed by substituting missing values with the mean of the non-`NA` values for each column. This is performed by setting `f_i = "i_mean"`. The `f_i` argument refers to a function that imputes a numeric vector - in this case the built-in `i_mean()` function:
```{r}
# demo of i_mean() function, which is built in to COINr
x <- c(1,2,3,4, NA)
i_mean(x)
```
The key concept here is that the simple function `i_mean()` is applied by `Impute()` to each column. This idea of passing simpler functions is used in several key COINr functions, and allows great flexibility because more sophisticated imputation methods can be used from other packages, for example.
For now let's explore the options native to COINr. We can also apply the `i_median()` function in the same way to substitute with the indicator median. Adding a little complexity, we can also impute by mean or median, but within unit (row) groups. Let's assume that the first five rows in our data frame belong to a group "a", and the remaining five to a different group "b". In practice, these could be e.g. GDP, population or wealth groups for countries - we might hypothesise that it is better to replace `NA` values with the median inside a group, rather than the overall median, because countries within groups are more similar.
To do this on a data frame we can use the `i_median_grp()` function, which requires an additional argument `f`: a grouping variable. This is passed through `Impute()` using the `f_i_para` argument which takes any additional parameters to `f_i` apart from the data to be imputed.
```{r}
# row grouping
groups <- c(rep("a", 5), rep("b", 5))
# impute
dfi2 <- Impute(df1, f_i = "i_median_grp", f_i_para = list(f = groups))
# display
print(dfi2, row.names = FALSE)
```
The `f_i_para` argument requires a named list of additional parameter values. This allows functions of any complexity to be passed to `Impute()`. By default, `Impute()` applies `f_i` to each column of data, so `f_i` is expected to take a numeric vector as its first input, and specifically to have the format `function(x, ...)` where `x` is a numeric vector and `...` are further arguments. This means that the first argument of `f_i` *must* be called "x". To use functions that don't have `x` as a first argument, you would have to write a wrapper function.
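For example, a minimal sketch of such a wrapper, where `impute_fancy()` is a hypothetical imputation function whose first argument is called `v` rather than `x`:

```{r, eval=FALSE}
# hypothetical: impute_fancy() expects its data in an argument called 'v',
# so we wrap it so that the first argument is 'x', as required by Impute()
i_fancy <- function(x, ...){
  impute_fancy(v = x, ...)
}
```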
Other than imputing by column, we can also impute by row. This only really makes sense if the indicators are on a common scale, i.e. if they are normalised first (or perhaps if they already share the same units). To impute by row, set `impute_by = "row"`. In our example data set we have indicators on rather different scales. Let's see what happens if we impute by row mean but *don't* normalise:
```{r}
Impute(df1, f_i = "i_mean", impute_by = "row", normalise_first = FALSE)
```
This imputes some silly values, particularly in "CultGood", because "Pat" has much higher values. Clearly this is not a sensible strategy, unless all indicators are on the same scale. We can however normalise first, impute, then return indicators to their original scales:
```{r}
Impute(df1, f_i = "i_mean", impute_by = "row", normalise_first = TRUE, directions = rep(1,4))
```
This additionally required to specify the `directions` argument because we need to know which direction each indicator runs in (whether they are positive or negative indicators). In our case all indicators are positive. See the vignette on [Normalisation](normalise.html) for more details on indicator directions.
The values imputed in this way are more realistic. Essentially we are replacing each missing value with the average (normalised) score of the other indicators, for a given unit. However this also only makes sense if the indicators/columns are similar to one another: high values of one would likely imply high values in the other.
Behind the scenes, setting `normalise_first = TRUE` first normalises each column using a min-max method, then performs the imputation, then returns the indicators to the original scales using the inverse transformation. Another approach which gives more control is to simply run `Normalise()` first, and work with the normalised data from that point onwards. In that case it is better to set `normalise_first = FALSE`, since by default if `impute_by = "row"` it will be set to `TRUE`.
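A rough sketch of that alternative approach, using the data frame methods (min-max normalisation is assumed here; see the [Normalisation](normalise.html) vignette):

```{r, eval=FALSE}
# normalise the data frame first...
df1_n <- Normalise(df1, global_specs = list(f_n = "n_minmax"))
# ...then impute by row, telling Impute() not to normalise again
Impute(df1_n, f_i = "i_mean", impute_by = "row", normalise_first = FALSE)
```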
As a final point on data frames, we can set `impute_by = "df"` to pass the entire data frame to `f_i`, which may be useful for more sophisticated multivariate imputation methods. But what's the point of using `Impute()` then, you may ask? First, because when imputing coins, we can impute by indicator groups (see next section); and second, `Impute()` performs some checks to ensure that non-`NA` values are not altered.
# Coins
Imputing coins is similar to imputing data frames because the coin method of `Impute()` calls the data frame method. Please read that section first if you have not already done so. However, for coins there are some additional function arguments.
In the simple case we impute a named data set `dset` using the function `f_i`: e.g. if we want to impute the "Raw" data set using indicator median values:
```{r}
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean")
ASEM
```
Here, `Impute()` extracts the "Raw" data set as a data frame, imputes it using the data frame method (see previous section), then saves it as a new data set in the coin. Here, the data set is called "Imputed" but can be named otherwise using the `write_to` argument.
We can also impute by group using a grouped imputation function. Since unit groups are stored within the coin (variables labelled as "Group" in `iMeta`), these can be called directly using the `use_group` argument (without having to specify the `f_i_para` argument):
```{r}
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean_grp", use_group = "GDP_group")
```
This has imputed each indicator using its GDP group mean.
Row-wise imputation works in the same way as with a data frame, by setting `impute_by = "row"`. However, this is particularly useful in conjunction with the `group_level` argument. If this is specified, rather than imputing across the entire row of data, it splits rows into indicator groups, using the structure of the index. For example:
```{r}
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_mean", impute_by = "row",
group_level = 2, normalise_first = TRUE)
```
Here, the `group_level` argument specifies which level-grouping of the indicators to use. In the ASEM example here, we are using level 2 groups, so it is substituting missing values with the average normalised score within each pillar (level 2 in the ASEM example is the "Pillar" level).
Imputation in this way has an important relationship with aggregation. This is because if we *don't* impute, then in the aggregation step, if we take the mean of a group of indicators, and there is a `NA` present, this value is excluded from the mean calculation. Doing this is mathematically equivalent to assigning the mean to that missing value and then taking the mean of all of the indicators. This is sometimes known as "shadow imputation". Therefore, one reason to use this imputation method is to see which values are being implicitly assigned as a result of excluding missing values from the aggregation step.
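A tiny numeric illustration of this equivalence:

```{r}
# mean of the available values, excluding the NA
mean(c(2, 4, NA), na.rm = TRUE)
# assigning that mean to the missing value and averaging again gives the same result
mean(c(2, 4, 3))
```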
Last, we can see an example of imputation by data frame, with the option `impute_by = "df"`. Recall that this option requires that the function `f_i` accepts and returns entire data frames. This is suitable for more sophisticated multivariate imputation methods. Here we'll use a basic implementation of the Expectation Maximisation (EM) algorithm from the Amelia package.
Since COINr requires that the first argument of `f_i` is called `x`, and the relevant Amelia function doesn't satisfy this requirement, we have to write a simple wrapper function that acts as an intermediary between COINr and Amelia. This also gives us the chance to specify some other function arguments that are necessary.
```{r, eval=FALSE}
# this function takes a data frame input and returns an imputed data frame using amelia
i_EM <- function(x){
# impute
amOut <- Amelia::amelia(x, m = 1, p2s = 0, boot.type = "none")
# return imputed data
amOut$imputations[[1]]
}
```
Now armed with our new function, we just call that from `Impute()`. We don't need to specify `f_i_para` because these arguments are already specified in the intermediary function.
```{r, eval=FALSE}
# impute raw data set
ASEM <- Impute(ASEM, dset = "Raw", f_i = "i_EM", impute_by = "df", group_level = 2)
```
This has now passed each group of indicators at level 2 as data frames to Amelia, which has imputed each one and passed them back.
# Purses
Purse imputation is very similar to coin imputation, because by default the purse method of `Impute()` imputes each coin separately. There is one exception to this: if `f_i = "impute_panel"`, the data sets inside the purse are imputed using the last available data point, using the `impute_panel()` function. In this case, coins are not imputed individually, but are treated as a single panel data set. Optionally, set `f_i_para = list(max_time = .)` where `.` should be substituted with the maximum number of time points to search backwards for a non-`NA` value. See `impute_panel()` for more details. No further arguments need to be passed to `impute_panel()`.
It is difficult to show this working without a contrived example, so let's contrive one. We take the example panel data set `ASEM_iData_p`, and introduce a missing value `NA` in the indicator "LPI" for unit "GBR", for year 2022.
```{r}
# copy
dfp <- ASEM_iData_p
# create NA for GBR in 2022
dfp$LPI[dfp$uCode == "GBR" & dfp$Time == 2022] <- NA
```
This data point has a value for the previous year, 2021. Let's see what it is:
```{r}
dfp$LPI[dfp$uCode == "GBR" & dfp$Time == 2021]
```
Now let's build the purse and impute the raw data set.
```{r}
# build purse
ASEMp <- new_coin(dfp, ASEM_iMeta, split_to = "all", quietly = TRUE)
# impute raw data using latest available value
ASEMp <- Impute(ASEMp, dset = "Raw", f_i = "impute_panel")
```
Now we check whether our imputed point is what we expect: we would expect that our `NA` is now replaced with the 2021 value as found previously. To get at the data we can use the `get_data()` function.
```{r}
get_data(ASEMp, dset = "Imputed", iCodes = "LPI", uCodes = "GBR", Time = 2022)
```
And indeed this corresponds to what we expect.
---
title: "Normalisation"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Normalisation}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Normalisation is the operation of bringing indicators onto comparable scales so that they can be aggregated more fairly. To see why this is necessary, consider aggregating GDP values (billions or trillions of dollars) with percentage tertiary graduates (tens of percent). Average values here would make no sense because one is on a completely different scale to the other.
The normalisation function in COINr is imaginatively named `Normalise()`. It has the following main features:
* A wide range of normalisation methods, including the possibility to pass custom functions
* Customisable parameters for normalisation
* Possibility to specify detailed individual treatment for each indicator
As of COINr v1.0, `Normalise()` is a generic function with methods for different classes. This means that `Normalise()` can be called on coins, but also on data frames, numeric vectors and purses (time-indexed collections of coins).
Since `Normalise()` might be a bit over-complicated for some applications, the `qNormalise()` function gives a simpler interface which might be easier to use. See the [Simplified normalisation] section.
# Coins
The `Normalise()` method for coins follows the familiar format: you have to specify:
* `x` the coin
* `global_specs` default specifications to apply to all indicators
* `indiv_specs` individual specifications to override `global_specs` for specific indicators, if required
* `directions` a data frame specifying directions - this overrides the directions in `iMeta` if specified
* `out2` whether to output an updated coin or simply a data frame
Let's begin with a simple example. We build the example coin and normalise the raw data.
```{r}
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "new_coin")
# normalise the raw data set
coin <- Normalise(coin, dset = "Raw")
```
We can compare one of the raw and un-normalised indicators side by side.
```{r}
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
```
This plot also illustrates the linear nature of the min-max transformation.
The default normalisation uses the min-max approach, scaling indicators onto the $[0, 100]$ interval. But we can change the normalisation type and its parameters using the `global_specs` argument.
```{r}
coin <- Normalise(coin, dset = "Raw",
global_specs = list(f_n = "n_zscore",
f_n_para = list(c(10,2))))
```
Again, let's plot an example of the result:
```{r}
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
```
Again, the z-score transformation is linear. It simply puts the resulting indicator on a different scale.
Notice the syntax of `global_specs`. If specified, it takes entries `f_n` (the name of the function to apply to each column) and `f_n_para` (any further arguments to `f_n`, not including `x`). Importantly, `f_n_para` *must* be specified as a list, even if it only contains one parameter.
Note that **COINr has a number of normalisation functions built in**, all of which are of the form `n_*()`, such as `n_minmax()`, `n_borda()`, etc. Type `n_` in the R Studio console and press the Tab key to see a list, or else browse the COINr functions alphabetically.
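For instance, a minimal sketch of calling one of these directly on a vector (assuming the default scaling to the $[0, 100]$ interval; `NA`s should be handled and preserved):

```{r, eval=FALSE}
# min-max normalisation applied directly to a numeric vector
n_minmax(c(1, 5, 10, NA))
```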
## Calling external functions
Since `f_n` points to a function name, any function can be passed to `Normalise()` as long as it is available in the namespace. To illustrate, consider an example where we want to categorise into discrete bins. We can use base R's `cut()` function for this purpose. We simply need to specify the number of bins. We could directly call `cut()`, but for clarity we will create a simple wrapper function around it, then pass that function to `Normalise()`.
```{r}
# wrapper function
f_bin <- function(x, nbins){
cut(x, breaks = nbins, labels = FALSE)
}
# pass wrapper to normalise, specify 5 bins
coin <- Normalise(coin, dset = "Raw",
global_specs = list(f_n = "f_bin",
f_n_para = list(nbins = 5)))
```
To illustrate the difference with the linear transformations above, we again plot the raw against normalised indicator:
```{r}
plot_scatter(coin, dsets = c("Raw", "Normalised"), iCodes = "Goods")
```
Obviously this is *not* linear.
Generally, the requirements of a function to be passed to `Normalise()` are that its first argument should be `x`, a numeric vector, and it should return a numeric vector of the same length as `x`. It should also be able to handle `NA`s. Any further arguments can be passed via the `f_n_para` entry.
## Directions
By default, the directions are taken from the coin. These will have been specified as the `Direction` column of `iMeta` when constructing a coin with `new_coin()`. However, you can specify different directions using the `directions` argument of `Normalise()`: in this case you need to specify a data frame with two columns: `iCode` (with an entry for each indicator code found in the target data set) and `Direction` giving the direction as -1 or 1.
To show an example, we take the existing directions from the coin, modify them slightly, and then run the normalisation function again:
```{r}
# get directions from coin
directions <- coin$Meta$Ind[c("iCode", "Direction")]
head(directions, 10)
```
We'll change the direction of the "Goods" indicator and re-normalise:
```{r}
# change Goods to -1
directions$Direction[directions$iCode == "Goods"] <- -1
# re-run (using min max default)
coin <- Normalise(coin, dset = "Raw", directions = directions)
```
## Individual normalisation
Finally let's explore how to specify different normalisation methods for different indicators. The `indiv_specs` argument takes a named list for each indicator, and will override the specifications in `global_specs`. If `indiv_specs` is specified, we only need to include sub-lists for indicators that differ from `global_specs`.
To illustrate, we can use a contrived example where we might want to apply min-max to all indicators except two. For those two, we apply a Borda rank transformation and a z-score transformation respectively. Note that since the default of `global_specs` is min-max, we don't need to specify that at all here.
```{r}
# individual specifications:
# LPI - borda scores
# Flights - z-scores with mean 10 and sd 2
indiv_specs <- list(
LPI = list(f_n = "n_borda"),
Flights = list(f_n = "n_zscore",
f_n_para = list(m_sd = c(10, 2)))
)
# normalise
coin <- Normalise(coin, dset = "Raw", indiv_specs = indiv_specs)
# a quick look at the first three indicators
get_dset(coin, "Normalised")[1:4] |>
head(10)
```
This example is meant to be illustrative of the functionality of `Normalise()`, rather than being a sensible normalisation strategy, because the indicators are now on very different ranges.
In practice, if different normalisation strategies are selected, it is a good idea to keep the indicators on similar ranges, otherwise the effects will be very unequal in the aggregation step.
## Use of targets
A particular type of normalisation is "distance to target". This normalises indicators by the distance of each value to a specified target. Targets may often have a political or business meaning, such as e.g. emissions targets or sales targets.
Targets should be input into a coin using the `iMeta` argument when building the coin using `new_coin()`. In fact, the built-in example data has targets for all indicators:
```{r}
head(ASEM_iMeta[c("iCode", "Target")])
```
*(Note that these targets are fabricated just for the purposes of an example)*
To use distance-to-target normalisation, we call the `n_dist2targ()` function. Like other built in normalisation functions, this normalises a vector using a specified target. We can't use the `f_n_para` entry in `Normalise()` here because this would only pass a single target value, whereas we need to use a different target for each indicator.
However, COINr has a special case built in so that targets from `iMeta` can be used automatically. Simply set `global_specs = list(f_n = "n_dist2targ")`, and the `Normalise()` function will automatically retrieve targets from `iMeta$Target`. If targets are not present, this will generate an error. Note that the directions of indicators are also passed to `n_dist2targ()` - see that function documentation for how the normalisation is performed depending on the direction specified.
Our normalisation will then look like this:
```{r, eval=FALSE}
coin <- Normalise(coin, dset = "Raw", global_specs = list(f_n = "n_dist2targ"))
```
It is also possible to specify the `cap_max` parameter of `n_dist2targ()` as follows:
```{r, eval=FALSE}
coin <- Normalise(coin, dset = "Raw", global_specs = list(f_n = "n_dist2targ", f_n_para = list(cap_max = TRUE)))
```
# Data frames and vectors
Normalising a data frame is very similar to normalising a coin, except the input is a data frame and output is also a data frame.
```{r}
mtcars_n <- Normalise(mtcars, global_specs = list(f_n = "n_dist2max"))
head(mtcars_n)
```
As with coins, columns can be normalised with individual specifications using the `indiv_specs` argument in exactly the same way as with a coin. Note that non-numeric columns are always ignored:
```{r}
Normalise(iris) |>
head()
```
There is also a method for numeric vectors, although usually it is just as easy to call the underlying normalisation function directly.
```{r}
# example vector
x <- runif(10)
# normalise using distance to reference (5th data point)
x_norm <- Normalise(x, f_n = "n_dist2ref", f_n_para = list(iref = 5))
# view side by side
data.frame(x, x_norm)
```
# Purses
The purse method for `Normalise()` is especially useful if you are working with multiple coins and panel data. This is because to make scores comparable from one time point to the next, it is usually a good idea to normalise indicators together rather than separately. For example, with the min-max method, indicators are typically normalised using the minimum and maximum over all time points of data, as opposed to having a separate max and min for each time point.
If indicators were normalised separately for each time point, then the highest scoring unit would get a score of 100 in time $t$ (assuming min-max between 0 and 100), but the highest scoring unit in time $t+1$ would *also* be assigned a score of 100. The underlying values of these two scores could be very different, but they would appear identical, making comparisons across time points misleading.
This means that the purse method for `Normalise()` is a bit different from most other purse methods, because it doesn't independently apply the function to each coin, but takes the coins all together. This has the following implications:
1. Any normalisation function can be applied globally to all coins in a purse, ensuring comparability. BUT:
2. If normalisation is done globally, it is no longer possible to automatically regenerate coins in the purse (i.e. using `Regen()`), because the coin is no longer self-contained: it needs to know the values of the other coins in the purse. Perhaps at some point I will add a dedicated method for regenerating entire purses, but we are not there yet.
Let's anyway illustrate with an example. We build the example purse first.
```{r}
purse <- build_example_purse(quietly = TRUE)
```
Normalising a purse works in exactly the same way as normalising a coin, except for the `global` argument. By default, `global = TRUE`, which means that the normalisation will be applied over all time points simultaneously, with the aim of making the index comparable. Here, we will apply the default min-max approach to all coins:
```{r}
purse <- Normalise(purse, dset = "Raw", global = TRUE)
```
Now let's examine the data set of the first coin. We'll see what the max and min of a few indicators is:
```{r}
# get normalised data of first coin in purse
x1 <- get_dset(purse$coin[[1]], dset = "Normalised")
# get min and max of first four indicators (exclude uCode col)
sapply(x1[2:5], min, na.rm = TRUE)
sapply(x1[2:5], max, na.rm = TRUE)
```
Here we see that the minimum values are zero, but the maximum values are *not* 100, because in other coins these indicators have higher values. To show that the global maximum is indeed 100, we can extract the whole normalised data set for all years and run the same check.
```{r}
# get entire normalised data set for all coins in one df
x1_global <- get_dset(purse, dset = "Normalised")
# get min and max of first four indicators (exclude Time and uCode cols)
sapply(x1_global[3:6], min, na.rm = TRUE)
sapply(x1_global[3:6], max, na.rm = TRUE)
```
And this confirms our expectations: the global minimum and maximum are 0 and 100 respectively.
Any type of normalisation can be performed on a purse in this "global" mode. However, keep in mind what is going on. Simply put, when `global = TRUE` this is what happens:
1. The data sets from each coin are joined together into one using the `get_dset()` function.
2. Normalisation is applied to this global data set.
3. The global data set is then split back into the coins.
So if you specify to normalise by e.g. rank, ranks will be calculated for all time points. Therefore, consider carefully if this fits the intended meaning.
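As a rough illustration (this is not the actual implementation, just a sketch of the equivalent steps):

```{r, eval=FALSE}
# 1. join the "Raw" data sets of all coins into a single data frame
x_all <- get_dset(purse, dset = "Raw")
# 2. normalise the joined data set (dropping the Time and uCode columns)
x_all_n <- Normalise(x_all[-(1:2)], global_specs = list(f_n = "n_minmax"))
# 3. the normalised rows would then be split back into their respective coins by Time
```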
Normalisation can also be performed independently on each coin, by setting `global = FALSE`.
```{r}
purse <- Normalise(purse, dset = "Raw", global = FALSE)
# get normalised data of first coin in purse
x1 <- get_dset(purse$coin[[1]], dset = "Normalised")
# get min and max of first four indicators (exclude uCode col)
sapply(x1[2:5], min, na.rm = TRUE)
sapply(x1[2:5], max, na.rm = TRUE)
```
Now the normalised data set in each coin will have a min and max of 0 and 100 respectively, for each indicator.
# Simplified normalisation
If the syntax of `Normalise()` looks a bit over-complicated, you can use the simpler `qNormalise()` function, which has less flexibility but makes the key function arguments more visible (they are not wrapped in lists). This function applies the same normalisation method to all indicators. It is also a generic so can be used on data frames, coins and purses. Let's demonstrate on a data frame:
```{r}
# some made up data
X <- data.frame(uCode = letters[1:10],
a = runif(10),
b = runif(10)*100)
X
```
By default, normalisation results in min-max on the $[0, 100]$ interval:
```{r}
qNormalise(X)
```
We can pass another normalisation function if we like, and the syntax is a bit easier than `Normalise()`:
```{r}
qNormalise(X, f_n = "n_dist2ref", f_n_para = list(iref = 1, cap_max = TRUE))
```
The `qNormalise()` function works in a similar way for coins and purses.
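For example, a minimal sketch on the example coin (here assuming the coin method takes a `dset` argument like the other building functions):

```{r, eval=FALSE}
# build example coin and normalise the raw data with z-scores via the simplified interface
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
coin <- qNormalise(coin, dset = "Raw", f_n = "n_zscore", f_n_para = list(m_sd = c(0, 1)))
```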
---
title: "Other Functions"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Other Functions}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This vignette covers other functions that don't fit in other vignettes, but still seem useful. Mainly this involves import and export, and some helper functions.
# Import and export
One of the most useful functions is `export_to_excel()`. This can be used to export the contents of a coin to Excel at any point in its construction, and is very simple to run. We first build the example coin:
```{r}
library(COINr)
# build example coin
coin <- build_example_coin(quietly = TRUE)
```
Then export to Excel:
```{r, eval=FALSE}
# export coin to Excel
export_to_excel(coin, fname = "example_coin_results.xlsx")
```
This exports every data frame in the coin to a separate tab in the workbook, named according to its position in the coin. By default it excludes the Log of the coin, but this can be optionally included. The function is very useful for passing the results to people who don't use R (let's face it, that's most people).
Data can also be imported directly into COINr from the [COIN Tool](https://knowledge4policy.ec.europa.eu/composite-indicators/coin-tool_en) which is an Excel-based tool for building and analysing composite indicators, similar in fact to COINr^[Full disclosure, I was also involved in the development of the COIN Tool]. With the `import_coin_tool()` function you can import data directly from the COIN Tool to cross check or extend your analysis in COINr.
To demonstrate, we can take the example version of the COIN Tool, which you can download [here](https://composite-indicators.jrc.ec.europa.eu/sites/default/files/COIN_Tool_v1_LITE_exampledata.xlsm). Then it's as simple as running:
```{r, eval=FALSE}
# make sure file is in working directory!
coin_import <- import_coin_tool("COIN_Tool_v1_LITE_exampledata.xlsm",
makecodes = TRUE, out2 = "coin")
```
This will directly generate a coin from the COIN Tool.
# Converting from older COINr versions
COINr changed drastically from v0.6 to v1.0. So drastically that I skipped several version numbers. From v1.0, the main object of COINr is called a "coin" and this is different from the "COIN" used up to v0.6.x. If you have worked in COINr before v1.0, you can use the `COIN_to_coin()` function to convert old COINs into new coins:
```{r, eval=FALSE}
coin <- COIN_to_coin(COIN)
```
This comes with some limitations: any data sets present in the coin will not be passed on unless `recover_dsets = TRUE`. However, if this is specified, the coin cannot be regenerated because it is not possible to translate the log from the older COIN class (called the "Method") to the log in the new coin class. Still, the conversion avoids having to reformat `iData` and `iMeta`.
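For example, to also carry over any data sets stored in the old COIN (at the cost of not being able to regenerate the new coin):

```{r, eval=FALSE}
coin <- COIN_to_coin(COIN, recover_dsets = TRUE)
```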
# Other useful functions
Here we list some accessory functions that could be useful in some circumstances.
The `rank_df()` function converts a data frame to ranks, ignoring non-numeric columns. Taking some sample data:
```{r}
X <- ASEM_iData[1:5,c(2,10:12)]
X
```
The ranked version of this data frame looks like this:
```{r}
rank_df(X)
```
The `replace_df()` function replaces values found anywhere in a data frame with corresponding new values:
```{r}
replace_df(X, data.frame(old = c("AUT", "BEL"), new = c("test1", "test2")))
```
The `round_df()` function rounds to a specified number of decimal places, ignoring non-numeric columns:
```{r}
round_df(X, 1)
```
The `signif_df()` function is equivalent but for a number of significant figures:
```{r}
signif_df(X, 3)
```
Finally, the `compare_df()` function gives a detailed comparison between two similar data frames that are indexed by a specified column. This function is tailored to compare results in composite indicators. Say you have a set of results from COINr and want to cross check against a separate calculation. Often, you end up with a data frame with the same columns, but possibly in a different order. Rows could be in a different order but are indexed by an identifier, here "uCode". The `compare_df()` function gives a detailed comparison between the two data frames and points out any differences.
We'll demonstrate this by copying the example data frame, altering some values and seeing what happens:
```{r}
# copy
X1 <- X
# change three values
X1$GDP[3] <- 101
X1$Population[1] <- 10000
X1$Population[2] <- 70000
# reorder
X1 <- X1[order(X1$uCode), ]
# now compare
compare_df(X, X1, matchcol = "uCode")
```
The output is a list with several entries. First, it tells us that the two data frames are not the same. The "Details" data frame lists each column and says whether it is identical or not, and how many different points there are. Finally, the "Differences" list has one entry for each column that differs, and details the value of the point from the first data frame compared to the value from the second.
From experience, this kind of output can be very helpful in quickly zooming in on differences between possibly large data frames of results. It is mainly intended for the use case described above, where the data frames are known to be similar, are of the same size, but we want to check for precise differences.
---
title: "Overview"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Overview}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Introduction
This vignette introduces and gives an overview of the **COINr package**. COINr is a high-level R package which is the first fully-flexible development and analysis environment for composite indicators and scoreboards.
This vignette is one of quite a few vignettes which document the package. Here, the aim is to give a quick introduction and overview of the package. The other vignettes deal with specific operations.
As of COINr v1.0.0 some radical changes have been introduced. Most notably for existing users, is the change in syntax. This is an unfortunate one-off necessity and the changes (and how to survive them, or roll back to the old version of COINr) are described in an extra vignette called [Changes in v1.0](v1.html).
# Installation
COINr is on CRAN and can be installed by running:
```{r InstallCOINrC, eval=FALSE}
install.packages("COINr")
```
Or simply browsing for the package in R Studio. The CRAN version will be updated every 1-2 months or so. If you want the very latest version in the meantime (I am usually adding features and fixing bugs as I find them), you can install the development version from GitHub. First, install the 'remotes' package if you don't already have it, then run:
```{r InstallCOINr, eval=FALSE}
remotes::install_github("bluefoxr/COINr")
```
This should directly install the package from Github, without any other steps. You may be asked to update packages. This might not be strictly necessary, so you can also try skipping this step if you prefer.
Once the package is installed, it can be loaded as follows:
```{r setup}
library(COINr)
```
# Features
The main features of the COINr package are those for building the composite indicator by performing operations on the data, those for analysing/post-processing, and those for visualisation. Here, the main functions are briefly listed (this list is not exhaustive):
**Building** functions begin with a capital letter, except for `new_coin()` which is used to initialise a coin object:
Function Description
------------------ ---------------------------------------------------------------
`new_coin()` Initialise a coin object given indicator data and metadata
`Screen()` Screen units based on data availability rules
`Denominate()` Denominate/scale indicators by other indicators
`Impute()` Impute missing data
`Treat()` Treat outliers and skewed distributions
`qTreat()` Simplified-syntax version of `Treat()`
`Normalise()` Normalise indicators onto a common scale
`qNormalise()` Simplified-syntax version of `Normalise()`
`Aggregate()` Aggregate indicators using weighted mean
Building functions are defined as those that modify the data (by creating an additional data set). They also keep a record of their arguments inside the coin, which allows coins to be *regenerated*. See [Adjustments and Comparisons](adjustments.html).
**Analysing** functions include those for multivariate analysis, weight optimisation and sensitivity analysis, as well as those for reporting results:
Function Description
------------------ ---------------------------------------------------------------
`get_corr()` Get correlations between any indicator/aggregate sets
`get_corr_flags()` Find high or low-correlated indicators within groups
`get_cronbach()` Get Cronbach's alpha for any set of indicators
`get_data()` Get subsets of indicator data
`get_data_avail()` Get data availability details of each unit
`get_denom_corr()` Get high correlations between indicators and denominators
`get_eff_weights()` Get effective weights at index level
`get_opt_weights()` Get optimised weights
`get_results()` Get conveniently-arranged results tables
`get_sensitivity()` Perform a global uncertainty or sensitivity analysis
`get_stats()` Get a table of indicator statistics
`get_str_weak()` Highest and lowest-ranking indicators for a given unit
`get_unit_summary()` Summary of scores and ranks for a given unit
`remove_elements()` Test the effect of removing indicators or aggregates
**Plotting** functions generate plots using the ggplot2 package:
Function Description
------------------ ---------------------------------------------------------------
`plot_bar()` Bar chart of a single indicator or aggregate
`plot_corr()` Heat maps of correlations between indicators/aggregates
`plot_dist()` Statistical plots of indicator/aggregate distributions
`plot_dot()` Dot plot of an indicator/aggregate with unit highlighting
`plot_framework()` Sunburst or linear plot of indicator framework
`plot_scatter()` Scatter plot between two indicators/aggregates
`plot_sensitivity()` Plots of sensitivity indices
`plot_uncertainty()` Plots of confidence intervals on unit ranks
**Adjustment and comparison** functions allow copies, adjustments and comparisons to be made between alternative versions of the composite indicator:
Function Description
------------------ ---------------------------------------------------------------
`Regen()` Regenerate results of a coin object
`change_ind()` Add and remove indicators
`compare_coins()` Compare the results of two coins
`compare_coins_multi()` Compare the results of multiple coins
**Other functions** are useful tools that don't fit into the other categories
Function Description
------------------ ---------------------------------------------------------------
`import_coin_tool()` Import data and metadata from the COIN Tool
`COIN_to_coin()` Convert an older "COIN" class object to a newer "coin" class object
`build_example_coin()` Build the example coin using built-in example data
`build_example_purse()` Build the example purse using built-in example data
`export_to_excel()` Export contents of the coin to Excel
All functions are fully documented and individual function help can be accessed in the usual way by `?function_name`.
The COINr package is loosely object oriented, in the sense that the composite indicator is encapsulated in an S3 class object called a "coin", and a time-indexed collection of coins is called a "purse" (see [Building coins](coins.html)). Most of the main functions listed in the previous tables take this "coin" class as the main input (and often also as the output) with other function arguments specifying how to apply the function. E.g. the syntax is typically:
```{r, eval=F}
coin <- COINr_function(coin, function_arguments)
```
Many of the main COINr functions are *generics*: they have methods also for data frames, purses, and in some cases numeric vectors. This means that COINr functions can also be used for ad-hoc operations without needing to build coins.
# Quick example
The COINr package contains some example data which is used in most of the vignettes to demonstrate the functions, and this comes from the [ASEM Sustainable Connectivity Portal](https://composite-indicators.jrc.ec.europa.eu/asem-sustainable-connectivity/). It is a data set of 49 indicators covering 51 Asian and European countries, measuring "sustainable connectivity". Here we work through building a composite indicator, and link to the other vignettes for more details.
Before proceeding, let's clearly define a few terms first to avoid confusion later on.
+ An *indicator* is a variable which has an observed value for each unit. Indicators might be things like life expectancy, CO2 emissions, number of tertiary graduates, and so on.
+ A *unit* is one of the entities that you are comparing using indicators. Often, units are countries, but they could also be regions, universities, individuals or even competing policy options (the latter is the realm of multicriteria decision analysis).
We begin by building a new "coin". To build a coin you need two data frames which are inputs to the `new_coin()` function. See the vignette on [Building coins](coins.html) for more details on this.
```{r}
ASEM <- new_coin(ASEM_iData, ASEM_iMeta, level_names = c("Indicator", "Pillar", "Sub-index", "Index"))
```
The output of `new_coin()` is a coin class object with a single data set called "Raw":
```{r}
ASEM
```
Let's view the structure of the index that we have specified, using the `plot_framework()` function:
```{r, fig.width=5, fig.height=5}
plot_framework(ASEM)
```
See the [Visualisation](visualisation.html) vignette for the full range of plotting options in COINr.
At the moment the coin contains only our raw data. To build the composite indicator we need to perform operations on the coin. All of these operations are optional and can be performed in any order. We begin by *denominating* the raw data: that is, we divide some of the indicators by other quantities to make our indicators comparable between small and large countries. See the vignette on [Denomination](denomination.html).
```{r}
ASEM <- Denominate(ASEM, dset = "Raw")
```
The only thing we specify here is that the denomination should be performed on the "Raw" data set. The other specifications for how to denominate the indicators were already contained in the data frames that we input to `new_coin()`. Running `Denominate()` has created a new data set called "Denominated" which is reported in the message when we run the function (we can choose another name if we wish). This is *additional* to the "Raw" data set and does not overwrite it.
Next we will screen units (countries) based on data availability. We want to ensure that every unit (country) has at least 90% data availability across all indicators. Screening is done by the `Screen()` function:
```{r}
ASEM <- Screen(ASEM, dset = "Denominated", dat_thresh = 0.9, unit_screen = "byNA")
```
The details of this function can be found in the [Unit screening](screening.html) vignette. Again, by running this function we have created a new data set. Let's look again at the contents of the coin using its `print()` method:
```{r}
ASEM
```
Notice that the "Screened" data set now has 46 units because five have been screened out, having less than 90% data availability.
Next we will impute any remaining missing data points. This can be done in a variety of ways, but here we choose to impute using the group mean, i.e. if a country is in the "Asia" group, we replace missing points by the Asian mean. If a country is in the "Europe" group, we replace with the European mean.
```{r}
ASEM <- Impute(ASEM, dset = "Screened", f_i = "i_mean_grp", use_group = "EurAsia_group")
```
This writes another data set called "Imputed", which has filled in all the missing data points. Again, we have to specify which data set to impute, and we have chosen the "Screened" data set. Full details of the imputation function can be found in the [Imputation](imputation.html) vignette.
We would next like to treat any outliers. The `Treat()` function gives a number of options, but by default will identify outliers using skewness and kurtosis thresholds, then Winsorise or log-transform indicators until they are brought within the specified thresholds. This function is slightly complicated and full details can be found in the [Outlier treatment](treat.html) vignette.
```{r}
ASEM <- Treat(ASEM, dset = "Imputed")
```
The details of the data treatment can be found inside the coin. A simplified version of `Treat()` is also available, called `qTreat()`, which may be easier to use in many cases.
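As a rough sketch (not run), `qTreat()` exposes the most common options directly as arguments; this assumes the coin method accepts the data set name and Winsorisation limit in this way (see `?qTreat` for the exact arguments):

```{r, eval=FALSE}
# simplified treatment: specify the data set and Winsorisation limit directly
ASEM <- qTreat(ASEM, dset = "Imputed", winmax = 3)
```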
The final step before aggregating is to bring the indicators onto a common scale by normalising them. The `Normalise()` function will, by default, scale each indicator onto the $[0, 100]$ interval using a "min-max" approach.
```{r}
ASEM <- Normalise(ASEM, dset = "Treated")
```
Again, because `Normalise()` is a slightly complex function (unless it is run at defaults, as above), a simplified version called `qNormalise()` is also available. Details on normalisation can be found in the [Normalisation](normalise.html) vignette.
To conclude the construction of the composite indicator, we must aggregate the normalised indicators up within their aggregation groups. In our example, indicators (level 1) are aggregated into "pillars" (level 2), which are themselves aggregated up into "sub-indexes" (level 3), which are finally aggregated into a single index (level 4). The `Aggregate()` function will aggregate following the structure which was specified in the `iMeta` argument to `new_coin()`. By default, this is done using the arithmetic mean, and using weights which were also specified in `iMeta`.
```{r}
ASEM <- Aggregate(ASEM, dset = "Normalised", f_ag = "a_amean")
```
Details on aggregation can be found in the [Aggregation](aggregate.html) vignette.
We now have a fully-constructed coin with index scores for each country. How do we look at the results? One way is the `get_results()` function which extracts a conveniently-arranged table of results:
```{r}
# get results table
df_results <- get_results(ASEM, dset = "Aggregated", tab_type = "Aggs")
head(df_results)
```
This shows at a glance the top-ranking countries and their scores. See the [Presenting Results](results.html) vignette for more ways to generate results tables.
We can also generate a bar chart:
```{r, fig.width=7}
plot_bar(ASEM, dset = "Aggregated", iCode = "Index", stack_children = TRUE)
```
This also shows the underlying sub-index scores.
We will not explore all functions here. As a final useful step, we can export the entire contents of the coin to Excel if needed:
```{r, eval=FALSE}
# export coin to Excel
export_to_excel(ASEM, fname = "example_coin_results.xlsx")
```
# Finally
The preceding example covered a number of features of COINr. Features that were not mentioned can be found in the following vignettes:
* [Weights](weights.html)
* [Analysis](analysis.html)
* [Sensitivity analysis](sensitivity.html)
* [Adjustments and Comparisons](adjustments.html)
* [Data selection](data_selection.html)
* [Other functions](other_functions.html)
---
title: "Presenting Results"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Presenting Results}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
This is a short vignette which explains some functions in COINr for extracting results from the coin. Once the coin is fully built, up to the point of aggregation, an immediate task is to see what the main results are. In composite indicators, the main starting point is often the ranking of units based on the highest level of aggregation, i.e. the index.
While the aggregated data set (the data set created by `Aggregate()`) has all the aggregate scores in it, it requires a little manipulation to see it in an easy-to-read format. To help with this, the `get_results()` function generates a results table in a convenient format:
```{r}
library(COINr)
# build full example coin
coin <- build_example_coin(quietly = TRUE)
# get results table
df_results <- get_results(coin, dset = "Aggregated", tab_type = "Aggs")
head(df_results)
```
The output of `get_results()` is a table sorted by the highest level of aggregation (here, the index), and with the columns arranged so that the highest level of aggregation is first, working down to lower levels. The function has several arguments, including `also_get` (names of further columns to attach to the table, such as groups, denominators), `tab_type` (controlling which columns to output), `use` (whether to show scores or ranks), and `order_by` (which column to use to sort the table).
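As an illustration (not run), the following would return a full table of ranks sorted by the "Conn" sub-index, with the GDP per capita group attached. The `"Conn"` and `"GDPpc_group"` codes come from the example data; the exact options for `tab_type` and `also_get` are described in `?get_results`.

```{r, eval=FALSE}
# full table of ranks, sorted by the "Conn" sub-index, with GDP group attached
get_results(coin, dset = "Aggregated", tab_type = "Full", use = "ranks",
            order_by = "Conn", also_get = "GDPpc_group")
```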
A useful feature is to return ranks of units inside groups. For example, rather than returning scores we can return ranks within GDP per capita groups:
```{r}
# get results table
df_results <- get_results(coin, dset = "Aggregated", tab_type = "Aggs", use_group = "GDPpc_group", use = "groupranks")
# see first few entries in "XL" group
head(df_results[df_results$GDPpc_group == "XL", ])
```
```{r}
# see first few entries in "L" group
head(df_results[df_results$GDPpc_group == "L", ])
```
Another function of interest zooms in on a single unit. The `get_unit_summary()` function returns a summary of a unit's scores and ranks at specified levels. Typically we can use this to look at a unit's index scores and scores for the aggregates:
```{r}
get_unit_summary(coin, usel = "IND", Levels = c(4,3,2), dset = "Aggregated")
```
This is a summary for "IND" (India) at levels 4 (index), 3 (sub-index) and 2 (pillar). It shows the score and rank.
A final function here is `get_str_weak()`. This gives the "strengths and weaknesses" of a unit, in terms of its indicators with the highest and lowest ranks. This can be particularly useful in "country profiles", for example.
```{r}
get_str_weak(coin, dset = "Raw", usel = "ESP")
```
The default output is five strengths and five weaknesses. The direction of the indicators is adjusted - see the `adjust_direction` parameter. A number of other parameters can also be adjusted which help to guide the tables to give sensible values, for example excluding indicators with binary values. See the function documentation for more details.
---
title: "Unit Screening"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Unit Screening}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Unit screening is a screening or filtering of units based on data availability rules. Just like with indicators (columns), when a unit (row) has very few data points available, it may make sense to remove it. This avoids drawing conclusions on units with very few data points. It will also increase the percentage data availability of each indicator once the units have been removed.
The COINr function `Screen()` is a generic function with methods for data frames, coins and purses. It is a *building* function in that it creates a new data set in `.$Data` as its output.
# Data frames
We begin with data frames. Let's take a subset of the inbuilt example data for demonstration. I cherry-pick some rows and columns which have some missing values.
```{r}
library(COINr)
# example data
iData <- ASEM_iData[40:51, c("uCode", "Research", "Pat", "CultServ", "CultGood")]
iData
```
The data has four indicators, plus an identifier column "uCode". Looking at each unit, the data availability is variable. We have 12 units in total.
Now let's use `Screen()` to screen out some of these units. Specifically, we will remove any units that have less than 75% data availability (3 of 4 indicators with non-`NA` values):
```{r}
l_scr <- Screen(iData, unit_screen = "byNA", dat_thresh = 0.75)
```
The output of `Screen()` is a list:
```{r}
str(l_scr, max.level = 1)
```
We can see already that the "RemovedUnits" entry tells us that three units were removed based on our specifications. We now have our new screened data set:
```{r}
l_scr$ScreenedData
```
And we have a summary of data availability and some other things:
```{r}
head(l_scr$DataSummary)
```
This table is in fact generated by `get_data_avail()` - some more details can be found in the [Analysis](analysis.html) vignette.
Other than data availability, units can also be screened based on the presence of zeros, or on both - this is specified by the `unit_screen` argument. Use the `Force`^[Luke. Sorry.] argument to override the screening rules for specified units if required (either to force inclusion or force exclusion).
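For example, a hypothetical sketch of forcing inclusion (not run): it assumes `Force` takes a data frame with `uCode` and `Include` columns, and the `uCode` used here is only illustrative - see `?Screen` for the exact format and the available `unit_screen` options.

```{r, eval=FALSE}
# as before, but force a particular unit to be kept regardless of the rule
l_scr_f <- Screen(iData, unit_screen = "byNA", dat_thresh = 0.75,
                  Force = data.frame(uCode = "KOR", Include = TRUE))
```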
# Coins
Screening on coins is very similar to data frames, because the coin method extracts the relevant data set, passes it to the data frame method, and then puts the output back as a new data set. This means the arguments are almost the same. The only differences are specifying which data set to screen, the name to give the new data set, and whether to output a coin or a list.
We'll build the example coin, then screen the raw data set with a threshold of 85% data availability and also name the new data set something different rather than "Screened" (the default):
```{r}
# build example coin
coin <- build_example_coin(up_to = "new_coin", quietly = TRUE)
# screen units from raw dset
coin <- Screen(coin, dset = "Raw", unit_screen = "byNA", dat_thresh = 0.85, write_to = "Filtered_85pc")
# some details about the coin by calling its print method
coin
```
The printed summary shows that the new data set only has 48 units, compared to the raw data set with 51. We can find which units were filtered because this is stored in the coin's "Analysis" sub-list:
```{r}
coin$Analysis$Filtered_85pc$RemovedUnits
```
The Analysis sub-list also contains the data availability table that is output by `Screen()`. As with the data frame method, we can also choose to screen units by presence of zeroes, or a combination of zeroes and missing values.
# Purses
For completion we also demonstrate the purse method. Like most purse methods, this is simply applying the coin method to each coin in the purse, without any special features. Here, we perform the same example as in the coin section, but on a purse of coins:
```{r}
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# screen units in all coins to 85% data availability
purse <- Screen(purse, dset = "Raw", unit_screen = "byNA",
dat_thresh = 0.85, write_to = "Filtered_85pc")
```
---
title: "Sensitivity Analysis"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Sensitivity Analysis}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Introduction
Sensitivity analysis is often confused with *uncertainty analysis*. Uncertainty analysis involves estimating the uncertainty in the outputs of a system (here, the scores and ranks of the composite indicator), given the uncertainties in the inputs (here, methodological decisions, weights, etc.). The results of an uncertainty analysis include, for example, confidence intervals over the ranks, median ranks, and so on.
Sensitivity analysis is an extra step after uncertainty analysis, and estimates which of the input uncertainties are driving the output uncertainty, and by how much. A rule of thumb, known as the [Pareto Principle](https://en.wikipedia.org/wiki/Pareto_principle) (or the 80/20 Rule) suggests that often, only a small proportion of the input uncertainties are causing the majority of the output uncertainty. Sensitivity analysis allows us to find which input uncertainties are significant (and therefore perhaps worthy of extra attention), and which are not important.
In reality, sensitivity analysis and uncertainty analysis can be performed simultaneously. However in both cases, the main technique is to use Monte Carlo methods. This essentially involves re-calculating the composite indicator many times, each time randomly varying the uncertain variables (assumptions, parameters), in order to estimate the output distributions.
COINr implements a flexible variance-based global sensitivity analysis approach, which allows almost any assumption to be varied, as long as the distribution of alternative values can be described. Variance-based "sensitivity indices" are estimated using a Monte Carlo design (running the composite indicator many times with a particular combination of input values). This follows the methodology described in [this paper](https://doi.org/10.1111/j.1467-985X.2005.00350.x).
# Defining the problem
The first step in a sensitivity analysis is to identify *which* assumptions to treat as uncertain, and *what* alternative values to assign to each assumption. Let's begin with the "which": think about all the ingredients that have gone into making the composite indicator: the data itself, the selection of indicators, and the methodological decisions along the way (which imputation method to use, if any; whether to treat outliers and in what way, which normalisation method, etc...). We cannot test everything, but we can pick a few assumptions that seem important, and where we have plausible alternatives that we could assign.
Here we will work with the familiar in-built example coin. You can see exactly how this is built by calling `edit(build_example_coin)` by the way.
```{r}
library(COINr)
# build example coin
coin <- build_example_coin(quietly = TRUE)
```
We will test four assumptions:
1. The maximum number of Winsorised data points. This is currently set at five, but we will let it vary between 1 and 5 points.
2. The normalisation method. By default the min-max method is used, but we will also consider the z-score as an alternative.
3. The weights. We will test perturbing the weights randomly inside a set interval.
4. The aggregation method. The example uses the arithmetic mean but we will also consider the geometric mean as an alternative.
# Input distributions
Having now selected *which* assumptions to vary, we can now work on defining the distributions for each assumption. Sensitivity analysis is a probabilistic tool, so each input assumption is treated as a random variable, which means we have to define a distribution for each assumption.
The function to run a sensitivity in COINr is called `get_sensitivity()`. It takes a little understanding to get this set up properly. The argument that defines the input distributions is a list called `SA_specs`. This specifies which assumptions to vary, the distributions for each assumption, and where each assumption can be found in the coin. Let's demonstrate by defining one part of `SA_specs`, for our first assumption: the maximum number of Winsorised points.
```{r}
# component of SA_specs for winmax distribution
l_winmax <- list(Address = "$Log$Treat$global_specs$f1_para$winmax",
Distribution = 1:5,
Type = "discrete")
```
Each uncertain assumption is defined by a list with three components. The "Address" component describes *where* in the coin the object of interest is found. You should look inside the coin to find this: notice that you don't specify the name of the coin itself, i.e. it is not `coin$Log$Treat$...` but rather just `$Log$Treat$...`.
Next is the "Distribution", which essentially describes the alternatives for the parameter. Here we have entered `1:5`, i.e. any integer between 1 and 5. Finally the "Type" entry should be set to either "discrete" or "continuous". In the former, the distribution is assumed to be discrete, so that samples are taken from the alternatives given in "Distribution". In the latter, the distribution is assumed to be continuous and uniform, and "Distribution" should be a 2-length vector specifying the upper and lower bounds of the parameter. Obviously in this latter case, the parameter must be numeric, and must be able to take non-integer values.
In summary, the list above specifies that the winmax parameter should be allowed to vary between 1 and 5 (integers). This list will be combined with lists for the other assumptions below, and input to `get_sensitivity()`.
Now let's see the entry for the normalisation method:
```{r}
# normalisation method
# first, we define the two alternatives: minmax or zscore (along with respective parameters)
norm_alts <- list(
list(f_n = "n_minmax", f_n_para = list(c(1,100))),
list(f_n = "n_zscore", f_n_para = list(c(10,2)))
)
# now put this in a list
l_norm <- list(Address = "$Log$Normalise$global_specs",
Distribution = norm_alts,
Type = "discrete")
```
This is a bit more complicated because when we switch between the min-max and z-score methods, we also want to use the corresponding set of parameters (`f_n_para`). That means that the parameter to target is the entire "global_specs" argument of `Normalise()`. We define two alternatives: one with min-max between 1 and 100, and the other being z-score with mean 10 and standard deviation 2. Notice that you need to be careful to wrap things appropriately in lists as required by each function.
Otherwise the rest is straightforward: we define the address and attach the `norm_alts` alternatives to the main list chunk. The distribution is discrete. Notice that each specification includes the "default" value of the assumption, not just the alternative(s).
Next is the weights, and this is also a special case. There are different ways we could approach changing the weights. First, we might have a small number of alternative weight sets, perhaps one is the original weights, one is from PCA, and one has been adjusted by hand. In that case, we could put these three sets of weights in a list and set the address to `$Log$Aggregate$w`, as a discrete distribution.
A second possibility would be to treat individual weights as individual parameters. This might be a good idea if we only want to vary a small number of individual weights, e.g. the sub-index weights (of which there are two). Then we could define one assumption for one weight and set the address as e.g. `$Meta$Weights$Original$Weight[58]` (recall that the coin name itself is not included in the address), which is the location of the "Conn" sub-index weight, and similarly for the "Sust" sub-index. We would then set `Type = "continuous"` and set the upper and lower bounds as needed, e.g. `c(0.5, 1)` to vary between 0.5 and 1.
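As a sketch, this second approach for a single weight could look like the chunk below (the row index 58 is the one quoted above for the example coin, so check it against your own weights data frame):

```{r, eval=FALSE}
# vary the "Conn" sub-index weight between 0.5 and 1
l_w_conn <- list(Address = "$Meta$Weights$Original$Weight[58]",
                 Distribution = c(0.5, 1),
                 Type = "continuous")
```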
To instead get an overall perturbation of weights, we have to use a helper function. The `get_noisy_weights()` function is designed for this purpose: it generates replications of your set of weights, where each replication has some random noise added to it according to your specifications. Here is how it works. You take your nominal weights (those that you would normally use) and feed them into the function:
```{r}
# get nominal weights
w_nom <- coin$Meta$Weights$Original
# build data frame specifying the levels to apply the noise at
noise_specs = data.frame(Level = c(2,3),
NoiseFactor = c(0.25, 0.25))
# get 100 replications
noisy_wts <- get_noisy_weights(w = w_nom, noise_specs = noise_specs, Nrep = 100)
# examine one of the noisy weight sets
tail(noisy_wts[[1]])
```
The `noisy_wts` object is a list containing 100 data frames, each of which is a set of weights with slightly different values. The sample above shows the last few rows of one of these weight-sets.
Now we can feed this into our list chunk:
```{r}
# component of SA_specs for weights
l_weights <- list(Address = "$Log$Aggregate$w",
Distribution = noisy_wts,
Type = "discrete")
```
Notice that the distribution is defined as discrete because in practice we have 100 alternative sets of weights, even though we are emulating a continuous distribution.
Last of all we define the list chunk for the aggregation method:
```{r}
## aggregation
l_agg <- list(Address = "$Log$Aggregate$f_ag",
Distribution = c("a_amean", "a_gmean"),
Type = "discrete")
```
This is relatively straightforward.
Having defined all of our input distributions individually, it's time to put them all together:
```{r}
# create overall specification list
SA_specs <- list(
Winmax = l_winmax,
Normalisation = l_norm,
Weights = l_weights,
Aggregation = l_agg
)
```
We simply put our list chunks into a single list. The names of this list are used as the names of the assumptions, so we can name them how we want.
# Uncertainty analysis
That was all a bit complicated, but this is because defining a sensitivity analysis *is* complicated! Now COINr can take over from here. We can now call the `get_sensitivity()` function:
```{r, eval=FALSE}
# Not run here: will take a few seconds to finish if you run this
SA_res <- get_sensitivity(coin, SA_specs = SA_specs, N = 100, SA_type = "UA",
dset = "Aggregated", iCode = "Index")
```
```{r include=FALSE}
SA_res <- readRDS("UA_results.RDS")
```
This is not actually run when building this vignette because it can take a little while to finish. When it is run you should get a message saying that the weights address is not found or `NULL`. COINr checks each address to see if there is already an object at that address inside the coin. If there is not, or it is `NULL`, it asks if you want to continue anyway. In our case, the fact that it is `NULL` is not because we made a mistake with the address, but simply because the `w` argument of `Aggregate()` was not specified when we built the coin (i.e. it was set to `NULL`), and the default "Original" weights were used. Sometimes however, if an address is `NULL` it might be because you have made an error.
Looking at the syntax of `get_sensitivity()`: apart from passing the coin and `SA_specs`, we also have to specify how many replications to run (`N` - more replications results in a more accurate sensitivity analysis, but also takes longer); whether to run an uncertainty analysis (`SA_type = "UA"`) or a sensitivity analysis (`SA_type = "SA"`); and finally the *target* output of the sensitivity analysis, which in this case we have specified as the Index, from the aggregated data set.
If the type of sensitivity analysis (`SA_type`) is set to `"UA"`, assumptions will be sampled randomly and the results will simply consist of the distribution over the ranks. This takes less replications, and may be sufficient if you are just interested in the output uncertainty, without attributing it to each input assumption. We can directly look at the output uncertainty analysis by calling the `plot_uncertainty()` function:
```{r, fig.width= 7}
plot_uncertainty(SA_res)
```
Results are contained in the output of `get_sensitivity()` and can also be viewed directly, e.g.
```{r}
head(SA_res$RankStats)
```
This shows the nominal, mean, median, and 5th/95th percentile ranks of each unit, as a result of the induced uncertainty.
# Sensitivity analysis
The process for performing a sensitivity analysis is the same, but we set `SA_type = "SA"`.
```{r, eval=FALSE}
# Not run here: will take a few seconds to finish if you run this
SA_res <- get_sensitivity(coin, SA_specs = SA_specs, N = 100, SA_type = "SA",
dset = "Aggregated", iCode = "Index", Nboot = 100)
```
```{r include=FALSE}
SA_res <- readRDS("SA_results.RDS")
```
If you run this, you will see an important difference: although we set `N = 100` the coin is replicated 600 times! This is because a variance based sensitivity analysis requires a specific *experimental design*, and the actual number of runs is $N(d+2)$, where $d$ is the number of uncertain assumptions.
Notice also that we have set `Nboot = 100`, which is the number of bootstrap replications to perform, and is used for estimating confidence intervals on sensitivity indices.
Let's now plot the results using the `plot_sensitivity()` function:
```{r, fig.width=5}
plot_sensitivity(SA_res)
```
By default this returns a bar chart. Each bar gives the sensitivity of the results (in this case the average rank change of the Index compared to nominal values) to each assumption. Clearly, the most sensitive assumption is the aggregation method, and the least sensitive is the maximum number of points to Winsorise.
The same results can be plotted as a pie chart, or as a box plot, depending on how we set `ptype`:
```{r, fig.width=7}
plot_sensitivity(SA_res, ptype = "box")
```
The confidence intervals are rather wide here, especially on the first order sensitivity indices. By increasing `N`, the precision of these estimates will increase and the confidence intervals will narrow. In any case, the right hand plot (total order sensitivity indices) is already clear: despite the estimation uncertainty, the order of importance of the four assumptions is clearly distinguished.
# Discussion/tips
The `get_sensitivity()` function is very flexible because it can target anything inside the coin. However, this comes at the expense of carefully specifying the uncertainties in the analysis, and having a general understanding of how a coin is regenerated. For this latter part, it may also help to read the [Adjustments and comparisons](adjustments.html) vignette.
Some particular points to consider:
* It is your responsibility to get the correct address for each parameter and to understand its use.
* It is also your responsibility to make sure that there are no conflicts caused by methodological variations, such as negative values being fed into a geometric mean.
* You can't target the same parameter twice in the same sensitivity analysis - one specification will just overwrite the other.
In general it is better to start simple: start with one or two assumptions to vary and gradually expand the level of complexity as needed. You can also do a test run with a low `N` to see if the results are vaguely sensible.
Variance based sensitivity analysis is complicated, especially here because the assumptions to vary are often not just a single value, but could be strings, data frames or lists. Again, an understanding of COINr and a basic understanding of sensitivity analysis can help a lot.
One important point is that in a sensitivity analysis, the target of the sensitivity analysis is the *mean absolute rank change*. COINr takes the target output that you specify, and for each replication compares the ranks of that variable to the nominal ranks. It then takes the difference between these two and takes the mean absolute value of these differences: the higher value of this quantity, the more the ranks have changed with respect to the nominal. This is done because variance-based SA generally requires a univariate output.
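As a toy illustration of this quantity, using made-up ranks for five units:

```{r}
# nominal ranks and the ranks from one replication
rank_nominal <- c(1, 2, 3, 4, 5)
rank_replication <- c(2, 1, 3, 5, 4)
# mean absolute rank change for this replication
mean(abs(rank_nominal - rank_replication))
```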
If you want to perform a more complex sensitivity analysis, perhaps generating separate sensitivity indices for each unit, you could also do this by bypassing `get_sensitivity()` altogether. If you want to venture down this path, check out `SA_sample()` and `SA_estimate()`, which are called by `get_sensitivity()`. This would definitely require some custom coding on your part but if you feel up for the challenge, go for it!
# Removing elements
Last of all we turn to a separate function which is not variance-based sensitivity analysis but is related to sensitivity analysis in general. The `remove_elements()` function tests the effect of removing components of the composite indicator one at a time. This can be useful to find the impact of each component, in terms of "if I were to remove this, what would happen?".
To run this, we input our coin into the function and specify which level we want to remove components. For example, specifying `Level = 2` removes each component of level 2 one at a time, with replacement, and regenerates the results each time. We also have to specify which indicator/aggregate to target as the output:
```{r}
# run function removing elements in level 2
l_res <- remove_elements(coin, Level = 2, dset = "Aggregated", iCode = "Index")
# get summary of rank changes
l_res$MeanAbsDiff
```
The output contains details of ranks and scores, but the "MeanAbsDiff" entry is a good summary: it shows the mean absolute rank difference between nominal ranks, and ranks with each component removed. Here, a higher value means that the ranks are changed more when that component is removed and vice versa. Clearly, the impact of removing components is not the same, and this can be useful information if you are considering whether or not to discard part of an index.
---
title: "Outlier Treatment"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Outlier Treatment}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
# Introduction
Data treatment is the process of altering indicators to improve their statistical properties, mainly for the purposes of aggregation. Data treatment is a delicate subject, because it essentially involves changing the values of certain observations, or transforming an entire distribution. Like any other step or assumption though, any data treatment should be carefully recorded and its implications understood. Of course, data treatment does not *have* to be applied, it is simply another tool in your toolbox.
# The `Treat()` function
The COINr function for treating data is called `Treat()`. This is a generic function with methods for coins, purses, data frames and numeric vectors. It is very flexible but this can add a layer of complexity. If you want to run mostly at default options, see the `qTreat()` function mentioned below in [Simplified function].
The `Treat()` function operates a two-stage data treatment process, based on two data treatment functions (`f1` and `f2`), and a pass/fail function `f_pass` which detects outliers. The arrangement of this function is inspired by a fairly standard data treatment process applied to indicators, which consists of checking skew and kurtosis, then if the criteria are not met, applying Winsorisation up to a specified limit. Then if Winsorisation still does not bring skew and kurtosis within limits, applying a nonlinear transformation such as log or Box-Cox.
This function generalises this process by using the following general steps:
1. Check if variable passes or fails using `f_pass`
2. If `f_pass` returns `FALSE`, apply `f1`, else return `x` unmodified
3. Check again using `f_pass`
4. If `f_pass` still returns `FALSE`, apply `f2`
5. Return the modified `x` as well as other information.
For the "typical" case described above `f1` is a Winsorisation function, `f2` is a nonlinear transformation
and `f_pass` is a skew and kurtosis check. However, any functions can be passed as `f1`, `f2` and `f_pass`, which makes it a flexible tool that is also compatible with other packages.
Further details on how this works are given in the following sections.
# Numeric vectors
The clearest way to demonstrate the `Treat()` function is on a numeric vector. Let's make a vector with a couple of outliers:
```{r}
# numbers between 1 and 10
x <- 1:10
# two outliers
x <- c(x, 30, 100)
```
We can check the skew and kurtosis of this vector:
```{r}
library(COINr)
skew(x)
kurt(x)
```
The skew and kurtosis are both high. If we follow the default limits in COINr (absolute skew capped at 2, and kurtosis capped at 3.5), this would be classed as a vector with outliers. Indeed we can confirm this using the `check_SkewKurt()` function, which is the default pass/fail function used in `Treat()`, and which also outputs the skew and kurtosis values:
```{r}
check_SkewKurt(x)
```
Now we know that `x` has outliers, we can treat it (if we want). We use the `Treat()` function to specify that our function for checking for outliers `f_pass = "check_SkewKurt"`, and our first function for treating outliers is `f1 = "winsorise"`. We also pass an additional parameter to `winsorise()`, which is `winmax = 2`. You can check the `winsorise()` function documentation to better understand how it works.
```{r, fig.width=5, fig.height=3.5}
l_treat <- Treat(x, f1 = "winsorise", f1_para = list(winmax = 2),
f_pass = "check_SkewKurt")
plot(x, l_treat$x)
```
The result of this data treatment is shown in the scatter plot: one point from `x` has been Winsorised (reassigned the next highest value). We can check the skew and kurtosis of the treated vector:
```{r}
check_SkewKurt(l_treat$x)
```
Clearly, Winsorising one point was enough in this case to bring the skew and kurtosis within the specified thresholds.
# Data frames
Treatment of a data frame with `Treat()` is effectively the same as treating a numeric vector, because the data frame method passes each column of the data frame to the numeric method. Here, we use some data from the COINr package to demonstrate.
```{r}
# select three indicators
df1 <- ASEM_iData[c("Flights", "Goods", "Services")]
# treat the data frame using defaults
l_treat <- Treat(df1)
str(l_treat, max.level = 1)
```
We can see the output is a list with `x_treat`, the treated data frame; `Dets_Table`, a table describing what happened to each indicator; and `Treated_Points`, which marks which individual points were adjusted. This is effectively the same output as for treating a numeric vector.
```{r}
l_treat$Dets_Table
```
We also check the individual points:
```{r}
l_treat$Treated_Points
```
# Coins
Treating coins is a simple extension of treating a data frame. The coin method simply extracts the relevant data set as a data frame, and passes it to the data frame method. So more or less, the same arguments are present.
We begin by building the example coin, which will be used for the examples here.
```{r}
coin <- build_example_coin(up_to = "new_coin")
```
## Default treatment
The `Treat()` function can be applied directly to a coin with completely default options:
```{r}
coin <- Treat(coin, dset = "Raw")
```
For each indicator, the `Treat()` function:
1. Checks skew and kurtosis using the `check_SkewKurt()` function
2. If the indicator fails the test (returns `FALSE`), applies Winsorisation
3. Checks again skew and kurtosis
4. If the indicator still fails, applies a log transformation.
If at any stage the indicator passes the skew and kurtosis test, it is returned without further treatment.
When we run `Treat()` on a coin, it also stores information returned from `f1`, `f2` and `f_pass` in the coin:
```{r}
# summary of treatment for each indicator
head(coin$Analysis$Treated$Dets_Table)
```
Notice that only one treatment function was used here, since after Winsorisation (`f1`), all indicators passed the skew and kurtosis test (`f_pass`).
In general, `Treat()` tries to collect all information returned from the functions that it calls. Details of the treatment of individual points are also stored in `.$Analysis$Treated$Treated_Points`.
The `Treat()` function gives you a high degree of control over which functions are used to treat and test indicators, and it is also possible to specify different functions for different indicators. Let's begin though by seeing how we can change the specifications for all indicators, before proceeding to individual treatment.
Unless `indiv_specs` is specified (see later), the same procedure is applied to all indicators. This process is specified by the `global_specs` argument. To see how to use this, it is easiest to show the default of this argument, which is built into the `Treat()` function:
```{r}
# default treatment for all cols
specs_def <- list(f1 = "winsorise",
f1_para = list(na.rm = TRUE,
winmax = 5,
skew_thresh = 2,
kurt_thresh = 3.5,
force_win = FALSE),
f2 = "log_CT",
f2_para = list(na.rm = TRUE),
f_pass = "check_SkewKurt",
f_pass_para = list(na.rm = TRUE,
skew_thresh = 2,
kurt_thresh = 3.5))
```
Notice that there are six entries in the list:
* `f1` which is a string referring to the first treatment function
* `f1_para` which is a list of any other named arguments to `f1`, excluding `x` (the data to be treated)
* `f2` and `f2_para` which are analogous to `f1` and `f1_para` but for the second treatment function
* `f_pass` is a string referring to the function to check for outliers
* `f_pass_para` a list of any other named arguments to `f_pass`, other than `x` (the data to be checked)
To understand what the individual parameters do, for example in `f1_para`, we need to look at the function called by `f1`, which is the `winsorise()` function:
* `x` A numeric vector.
* `na.rm` Set `TRUE` to remove `NA` values, otherwise returns `NA`.
* `winmax` Maximum number of points to Winsorise. Default 5. Set `NULL` to have no limit.
* `skew_thresh` A threshold for absolute skewness (positive). Default 2.
* `kurt_thresh` A threshold for kurtosis. Default 3.5.
* `force_win` Logical: if `TRUE`, forces Winsorisation up to `winmax` (regardless of skew/kurtosis).
Here we see the same parameters as named in the list `f1_para`, and we can change the maximum number of points to be Winsorised, the skew and kurtosis thresholds, and other things.
To make adjustments, unless we want to redefine everything, we don't need to specify the entire list. So for example, if we want to change the maximum Winsorisation limit `winmax`, we can just pass this part of the list (notice we still have to wrap the parameter inside a list):
```{r}
# treat with max winsorisation of 1 point
coin <- Treat(coin, dset = "Raw", global_specs = list(f1_para = list(winmax = 1)))
# see what happened
coin$Analysis$Treated$Dets_Table |>
head(10)
```
Having imposed a much stricter Winsorisation limit (only one point), we can see that now one indicator has been passed to the second treatment function `f2`, which has performed a log transformation. After doing this, the indicator passes the skew and kurtosis test.
By default, if an indicator does not satisfy `f_pass` after applying `f1`, it is passed to `f2` *in its original form* - in other words it is not the output of `f1` that is passed to `f2`, and `f2` is applied *instead* of `f1`, rather than in addition to it. If you want to apply `f2` on top of `f1` set `combine_treat = TRUE`. In this case, if `f_pass` is not satisfied after `f1` then the output of `f1` is used as the input of `f2`. For the defaults of `f1` and `f2` this approach is probably not advisable because Winsorisation and the log transform are quite different approaches. However depending on what you want to do, it might be useful.
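As a sketch (not run), this would simply mean adding the `combine_treat` argument:

```{r, eval=FALSE}
# apply f2 on top of f1: if an indicator still fails f_pass after f1,
# the Winsorised output of f1 is passed to f2
coin <- Treat(coin, dset = "Raw", combine_treat = TRUE)
```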
## Individual treatment
The `global_specs` specifies the treatment methodology to apply to all indicators. However, the `indiv_specs` argument (if specified), can be used to override the treatment specified in `global_specs` for specific indicators. It is specified in exactly the same way as `global_specs` but requires a parameter list for each indicator that is to have individual specifications applied, wrapped inside one list.
This is probably clearer using an example. To begin with something simple, let's say that we keep the defaults for all indicators except one, where we change the Winsorisation limit. We will set the Winsorisation limit of the indicator "Flights" to zero, to force it to be log-transformed.
```{r}
# change individual specs for Flights
indiv_specs <- list(
Flights = list(
f1_para = list(winmax = 0)
)
)
# re-run data treatment
coin <- Treat(coin, dset = "Raw", indiv_specs = indiv_specs)
```
The only thing to remember here is to make sure the list is created correctly. Each indicator to assign individual treatment must have its own list - here containing `f1_para`. Then `f1_para` itself is a list of named parameter values for `f1`. Finally, all lists for each indicator have to be wrapped into a single list to pass to `indiv_specs`. This looks a bit convoluted for changing a single parameter, but gives a high degree of control over how data treatment is performed.
We can now see what happened to "Flights":
```{r}
coin$Analysis$Treated$Dets_Table[
coin$Analysis$Treated$Dets_Table$iCode == "Flights",
]
```
Now we see that "Flights" didn't pass the first Winsorisation step (because nothing happened to it), and was passed to the log transform. After that, the indicator passed the skew and kurtosis check.
As another example, we may wish to exclude some indicators from data treatment completely. To do this, we can set the corresponding entries in `indiv_specs` to `"none"`. This is the only case where we don't have to pass a list for each indicator.
```{r}
# change individual specs for two indicators
indiv_specs <- list(
Flights = "none",
LPI = "none"
)
# re-run data treatment
coin <- Treat(coin, dset = "Raw", indiv_specs = indiv_specs)
```
Now if we examine the treatment table, we will find that these indicators have been excluded from the table, as they were not subjected to treatment.
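We can confirm this by checking the `iCode` column of the treatment table:

```{r}
# the excluded indicators no longer appear in the treatment details table
c("Flights", "LPI") %in% coin$Analysis$Treated$Dets_Table$iCode
```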
## External functions
Any functions can be passed to `Treat()`, for both treating and checking for outliers. As an example, we can pass the `check_outliers()` outlier detection function from the [performance](https://easystats.github.io/performance/reference/check_outliers.html) package.
```{r, include = FALSE}
# check if performance package installed
perf_installed <- requireNamespace("performance", quietly = TRUE)
```
The following code chunk will only run if you have the 'performance' package installed.
```{r, eval = perf_installed}
library(performance)
# the check_outliers function outputs a logical vector which flags specific points as outliers.
# We need to wrap this to give a single TRUE/FALSE output, where FALSE means it doesn't pass,
# i.e. there are outliers
outlier_pass <- function(x){
# return FALSE if any outliers
!any(check_outliers(x))
}
# now call Treat(), passing this function
# we set f_pass_para to NULL to avoid passing default parameters to the new function
coin <- Treat(coin, dset = "Raw",
global_specs = list(f_pass = "outlier_pass",
f_pass_para = NULL)
)
# see what happened
coin$Analysis$Treated$Dets_Table |>
head(10)
```
Here we see that the test for outliers is much stricter and very few of the indicators pass the test, even after applying a log transformation. Clearly, how an outlier is defined can vary and depend on your application.
# Purses
The purse method for `Treat()` is fairly straightforward. It takes almost the same arguments as the coin method, and applies the same specifications to each coin. Here we simply demonstrate it on the example purse.
```{r}
# build example purse
purse <- build_example_purse(up_to = "new_coin", quietly = TRUE)
# apply treatment to all coins in purse (default specs)
purse <- Treat(purse, dset = "Raw")
```
# Simplified function
The `Treat()` function is very flexible but comes at the expense of a possibly fiddly syntax. If you don't need that level of flexibility, consider using `qTreat()`, which is a simplified wrapper for `Treat()`.
The main features of `qTreat()` are that:
* The first treatment function `f1` cannot be changed and is set to `winsorise()`.
* The `winmax` parameter, as well as the skew and kurtosis limits, are available directly as function arguments to `qTreat()`.
* The `f_pass` function cannot be changed and is always set to `check_SkewKurt()`.
* You can still choose `f2`.
The `qTreat()` function is a generic with methods for data frames, coins and purses. Here, we'll just demonstrate it on a data frame.
```{r}
# select three indicators
df1 <- ASEM_iData[c("Flights", "Goods", "Services")]
# treat data frame, changing winmax and skew/kurtosis limits
l_treat <- qTreat(df1, winmax = 1, skew_thresh = 1.5, kurt_thresh = 3)
```
Now we check what the results are:
```{r}
l_treat$Dets_Table
```
We can see that in this case, Winsorising by one point was not enough to bring "Flights" and "Goods" within the specified skew/kurtosis limits. Consequently, `f2` was invoked, which uses a log transform and brought both indicators within the specified limits.
---
title: "Changes from COINr v1.0.0"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Changes from COINr v1.0.0}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
If you were using COINr prior to v.1.0 you may have updated the package and found that code calling COINr functions no longer works! What is going on?
COINr has undergone some major changes and most of the syntax has changed. So major that I skipped directly from v0.6 to v1.0 without any intermediate steps. First of all, if you were using COINr previously, I would like to say SORRY for any inconvenience caused by these changes. However, the changes are worth it, and this is a one-off thing - I won't be doing a seismic change like this again.
This vignette helps you to transition from COINr 0.6.x to 1.0 and explains what has happened. In short, most function names have changed, the package is more robust and flexible, panel data is better supported, interactive plots have been moved to a separate package available on GitHub, and if you don't like all this you can install the archive version of the package called "COINr6" and everything will go back to how it was. Let's go through these things one by one.
# Why
If you just want to know what has changed and how to deal with it, skip this section. If you want to know why things have changed, read on.
The short story is that I found quite a few flaws in the package which I was not happy with, given that it is in the public domain. I decided to address these flaws in one giant revision, rather than a long series of updates. I'll explain each of these points here below.
## Robustness and efficiency
COINr was the first CRAN package that I built (everyone say "aww"). In the process, I learned a lot about package development, as well as principles about programming in general. However, since I learned this while building the package, especially the first parts of the package that I wrote were (in retrospect) not written very well. For example, although I defined a "COIN" class, I didn't define methods for the COIN. Much of the code was not written in a "functional" way, and there were not enough checks on the inputs and outputs of the code. All this meant that the code was not very robust and had to be patched, a lot. This made it hard to maintain, less robust, and also slower for the user. As a consequence, I decided to re-write most functions, many from scratch, with a higher standard of programming. I also slimmed down the "COIN" class to a more streamlined "coin" class.
## Focus
COINr is a package meant to focus on developing composite indicators. But the focus got lost at some point when I got carried away with html plotly plots and shiny apps. These things, although nice, were in retrospect not really that useful in the package; they were also difficult to maintain and bloated the package. I decided to cut out all interactive plots and apps, but these can still be accessed through the COINr6 package and the conversion functions between COINr and COINr6 (see below).
## Dependencies
As a result of the first two points (inexperience plus straying off track), COINr had many dependencies, i.e. packages that had to be installed to install COINr. Although there is no harm in loading 10 or 20 packages when performing data analysis in R, this can become a problem if you are building a package because every user has to have these packages installed. If you have ever had to install several packages at the same time in R, you have probably run into some kind of problem. Moreover, COINr is dependent on any changes in those packages, and that makes maintaining it more difficult. This meant that in practice, COINr was not always easy to install. I decided to re-write the package almost entirely in base R, to remove as many dependencies as possible.
## Features and flexibility
One thing that was missing from COINr was proper support for panel data (time-dependent data). This has now been mostly rectified with the introduction of the "purse" class. The main "building" functions of COINr have also been re-written as generics, with methods for coins, data frames and purses. Moreover many functions allow you to call other functions, which makes COINr much easier to link up with other packages.
## Syntax
COINr syntax was inconsistent. While this was not a critical problem, since I was making big changes to the package I decided to take the opportunity to make the syntax as consistent as possible. This is a one-off change and won't be messed around with any more!
# What's changed?
## Function names
Many things have changed. The first thing you will probably notice is the syntax. Because I was anyway making syntax-breaking changes to the package, I decided to go all in, and try to make the syntax as consistent as possible. This means that function names are more predictable: all "building" functions start with a capital letter. Plot functions start with `plot_`. Analysis functions mostly start with `get_`. Other function names are generally in lower case. This all hopefully makes the package a little easier to use. You will notice that calling an old < v1.0 function name will generate an error, which redirects you to the new function name. My hope is that although this is inconvenient, it will not take too long to adapt to the new function names. In most functions, the main logic behind the arguments is pretty similar. As mentioned above, I'm not going to change all the names again; this is a one-off thing.
## Function features
The second obvious change is that some of the key functions themselves have changed syntax: they have been re-written to be more flexible and more robust. This may seem annoying but I promise you it is for the greater good. I can't describe all the changes here, but in general functions have been made more flexible: for example `Normalise()` now can take any normalising function, rather than a fixed set of options. Outlier treatment also allows to pass outlier detection and treatment functions. The sensitivity analysis function (now `get_sensitivity()`) now allows to target any part of the coin at all, not just function arguments. In general, the core "building" functions now call other lower-level functions and this makes it easier to hook COINr up to other packages, for example using more sophisticated imputation and aggregation methods.
## New "coin" class and methods
The third related change that is perhaps not so obvious is that the structure of the central object in COINr, the "COIN", has changed. The object has been streamlined and tidied, and has a new S3 class called a "coin" (the difference being that the new coin is lower case). If you have previously built a COIN using an older version of COINr, it will not work in the new version of COINr! But the good news is that there is a handy function called `COIN_to_coin()` which converts the older "COIN" class to the newer "coin" class.
The new "coin" class also comes with a number of methods. All the main construction functions now have methods for at least coins, data frames and purses (see next sub-section), and some have methods for numerical vectors. This is in contrast to the older COINr versions which did not define formal methods. See the [Building coins](coins.html) vignette for more details.
## Purses and panel data
The new "purse" class gives a formal way to deal with panel data (time indexed data). A "purse" is a time-indexed collection of coins. All construction functions have purse methods, so working with time data becomes very straightforward.
Purses and purse methods are still being expanded in COINr so keep an eye out for new features if you are interested. See the [Building coins](coins.html) vignette for more details.
## Documentation
The next thing is that the documentation has been completely re-written, with loads of new vignettes! And even better, COINr now lives at a web-page built with "pkgdown" which you can find [here](https://bluefoxr.github.io/COINr/), where all the documentation is easily accessible. So each function is well-documented. Hurray.
## Removed functions
The last very obvious change is that some functions have disappeared! Where have they gone? You may notice that all functions that generated interactive plots (often called `iPlot*` in previous versions of COINr), plus all shiny apps, have vanished. The reason for this, as explained above, is that these tools were distracting from the main point of the package and were too much effort to maintain. Moreover, even though interactive plots are great if you are outputting html documents, for pdf and word they are a hassle because it is quite unpredictable how they will be rendered. The good news is that I have replaced some of the interactive plots with static versions, such as `plot_framework()`, and `plot_scatter()`, so you can still do most of the plotting as in the previous versions, but with more predictable (and more usable) outputs.
# COINr6: I want out!
If this level of upheaval is all a bit too much, and you'd like to go back to how things were before, you have two options. The easiest "roll-back" option is to install the "COINr6" package. COINr6 is the latest version of COINr *before* the major syntax changes. This means that if you wrote some scripts or markdown files in the old syntax, instead of loading COINr, install and load COINr6, and this will run as before.
The advantage of this is that you can have COINr (new syntax) and COINr6 (old syntax) both installed at the same time.
To install COINr6 you have to install it from GitHub. First, make sure you have the "remotes" package:
```{r, eval=FALSE}
# install remotes package if you don't have it
install.packages("remotes")
```
Now install COINr6 from the GitHub repo:
```{r, eval = FALSE}
remotes::install_github("bluefoxr/COINr6")
```
And that's it. I will continue to lightly maintain this package for a while (e.g. fixing any critical bugs if any arise) but in general the main focus will be on the new COINr version.
Another way to roll back COINr to an older version is to use `devtools::install_version()`, in which you can specify a version number of any package to install. This might be a bit more fiddly, and personally I would recommend installing COINr6 instead. But if you want, check out [this article](https://support.posit.co/hc/en-us/articles/219949047-Installing-older-versions-of-packages) for some info on installing older package versions.
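For example (not run; the version number shown is only illustrative - check the CRAN archive for the release you want):

```{r, eval=FALSE}
# install a specific archived version of COINr
remotes::install_version("COINr", version = "0.6.1")
```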
COINr and COINr6 have conversion functions: in COINr there is the `COIN_to_coin()` function which allows conversion from the older "COIN" class to the newer "coin" class. In COINr6 there is also now the reverse function, `coin_to_COIN()`, which allows access to all the old interactive plotting of COINr6 if you liked that, as well as the apps. Note that conversion comes with some limitations in both directions, which are discussed in those functions' documentation.
# Summary
In summary, COINr has changed quite a lot, but that is a Good Thing. If you do want to roll back, or have both old and new syntax side by side, install COINr6.
As usual, if you have any feedback, spot any bugs or have any suggestions, email me or [open an issue](https://github.com/bluefoxr/COINr/issues) in the GitHub repo.
|
/scratch/gouwar.j/cran-all/cranData/COINr/vignettes/v1.Rmd
|
---
title: "Visualisation"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Visualisation}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
COINr has a number of options for plotting and visualising indicator data, both for analysis and presentation. All plots generated by COINr are powered by ggplot2, which means that if you want to customise them beyond the arguments provided by COINr functions, you can simply edit them with ggplot2 commands.
Note that prior to COINr v1.0.0, COINr additionally included interactive visualisation using apps and HTML widgets. This has been discontinued but those functions can still be accessed via the COINr6 package. See the [vignette](v1.html) on this topic for the reasons behind this and further details.
# Framework
Upon building a coin (see the [Building coins](coins.html) vignette), a good way to begin is to check the structure of the index. This can be done visually with the `plot_framework()` function, which generates a sunburst plot of the index structure.
```{r, message=F, fig.width=5, fig.height=5}
library(COINr)
# assemble example COIN
coin <- build_example_coin(up_to = "new_coin")
# plot framework
plot_framework(coin)
```
The sunburst plot is useful for a few things. First, it shows the structure that COINr has understood. This allows you to check whether the structure agrees with your expectations.
Second, it shows the effective weight of each indicator. Effective weights are the final weights of each indicator in the index, as a result of the indicator weights, the parent aggregate weights, and the structure of the index. This can reveal which indicators are implicitly weighted more than others, by e.g. having more or less indicators in the same aggregation groups. The effective weights can also be accessed directly using the `get_eff_weights()` function.
Finally, it can be a good way to communicate your index structure to other people.
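As mentioned above, if you want the numbers rather than the plot, the effective weights can be retrieved as a data frame:
```{r, eval=FALSE}
# effective weights as a data frame
w_eff <- get_eff_weights(coin, out2 = "df")
head(w_eff)
```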
The `plot_framework()` function has a few options for colouring. Other than that, if you don't like sunburst plots, another possibility is to set `type = "stack"`:
```{r, fig.width=6, fig.height= 5}
plot_framework(coin, type = "stack", colour_level = 2)
```
This gives a linear representation of the index. Here we have also set the colouring level to the pillar level (see `plot_framework()` documentation). Note that you will probably have to adjust the plot size to get a good figure.
# Statistical plots
Here we explore options for statistical plots, namely distribution and correlation plots.
## Distributions
The distribution of any variable, as well as groups of variables, in a coin can be visualised quickly using the `plot_dist()` function. The simplest case is to plot the distribution of a single indicator.
```{r}
plot_dist(coin, dset = "Raw", iCodes = "CO2")
```
To do this, as usual, we have to specify the data set (`dset`) and the indicator (`iCodes`) to plot. The data selection for `plot_dist()` is powered by `get_data()`, which means we can plot subsets of indicators and units. Commonly, with distribution plots, it might be interesting to plot the distributions of all indicators belonging to a particular group - let's plot all indicator distributions in the "P2P" pillar:
```{r, message=F, fig.width=7}
plot_dist(coin, dset = "Raw", iCodes = "P2P", Level = 1, type = "Violindot")
```
This plots all eight indicators belonging to that group, and we also specified to plot as "violin-dot" plots. Optionally, data can also be normalised before plotting using the `normalise` argument. See `plot_dist()` for more details and further options.
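For example, a sketch of the same plot with normalisation switched on (see the `plot_dist()` help for exactly how the `normalise` argument behaves):
```{r, eval=FALSE, fig.width=7}
# as above, but with indicators normalised before plotting
plot_dist(coin, dset = "Raw", iCodes = "P2P", Level = 1,
          type = "Violindot", normalise = TRUE)
```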
A similar function, `plot_dot()`, also plots a single indicator using dots, but it is intended for highlighting individual units rather than as a statistical plot of the distribution. See [Dot plots] below.
## Correlations
Correlation plots are very useful for understanding relationships between indicators. COINr's `plot_corr()` function is a flexible tool for plotting correlations between almost any variables in a coin, and visualising them according to the structure of the index.
One thing to keep in mind from the outset is the directionality of your indicators: if some are negative then this will probably be reflected in the correlation plots, unless you normalise the data first. With that in mind, we will build the full example coin including the normalisation step and then plot some correlations:
```{r}
coin <- build_example_coin(quietly = TRUE)
```
Now let's do a basic plot of correlations within a group:
```{r, fig.width=5}
plot_corr(coin, dset = "Normalised", iCodes = list("Physical"), Levels = 1)
```
Notice the syntax: we have to specify `iCodes` as a list here, and specify the level to get data from. In this case we have specified that we want the indicators (level 1) of the "Physical" group to be correlated against each other. As usual these arguments are passed to `get_data()`.
The reason that `iCodes` is specified as a list is that we can pass two character vectors to it, possibly from different levels:
```{r, fig.width=4}
plot_corr(coin, dset = "Aggregated",
iCodes = list(c("Flights", "LPI"), c("Physical", "P2P")), Levels = c(1,2))
```
The point being that we can select any set of indicators or aggregates, and correlate them with any other set. We can also pass further arguments to `get_data()` such as groupings and unit selection, if needed.
Other useful features include the possibility to correlate a set of indicators with only its parent groups - this is done by setting `withparent = "family"`. Here we also set to a discrete colour scheme using `flagcolours = TRUE`.
```{r, fig.width=4, fig.height=4}
plot_corr(coin, dset = "Aggregated", iCodes = list("Sust"), withparent = "family", flagcolours = T)
```
Notice that boxes are drawn around aggregation groups in this case. As a final example, we show how boxes and groups can be used to show subsets of correlation matrices. Typically the most interesting correlations are within aggregation groups, because weak correlations cause less information to be transferred to the aggregate. We can show only the in-group correlations using the `grouplev` argument, which takes an aggregation level at which to group indicators:
```{r, fig.width=7, fig.height=5}
plot_corr(coin, dset = "Normalised", iCodes = list("Sust"),
grouplev = 2, flagcolours = T)
```
This can also be done with the `box_level` argument, which can be used additionally to highlight groupings at different levels:
```{r, fig.width=7, fig.height=6}
plot_corr(coin, dset = "Normalised", grouplev = 3, box_level = 2, showvals = F)
```
In this case we have also disabled correlation values themselves. Other options include using different types of correlations, and changing colours. For details, see the help page of `plot_corr()`. It is also worth mentioning that underneath, `plot_corr()` calls `get_corr()`, so if you are interested in correlation matrices rather than plots, use that.
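For example, a quick sketch of retrieving the underlying correlations for the same group of indicators (assuming `get_corr()` accepts the same data-selection arguments as `plot_corr()`):
```{r, eval=FALSE}
# correlations as a data object rather than a plot
corrs <- get_corr(coin, dset = "Normalised", iCodes = list("Physical"), Levels = 1)
corrs
```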
# Indicator plots
In this section we examine some options for visualising individual indicators, in particular with the aim of seeing how different units compare to one another.
## Bar
A simple way to look at a set of scores for an indicator is with a bar chart:
```{r, fig.width=7}
plot_bar(coin, dset = "Raw", iCode = "CO2")
```
The `plot_bar()` function returns a bar chart of a single indicator, sorted from high to low values. We can also colour this by any of the grouping variables found in the coin:
```{r, fig.width=7}
plot_bar(coin, dset = "Raw", iCode = "CO2", by_group = "GDPpc_group", axes_label = "iName")
```
Here we have also set `axes_label = "iName"` to output indicator names rather than codes. Several other options are available, including a log scale, and colouring options. Here we just show one more thing, which is the possibility to break bars down into underlying component scores. This only works if we are plotting an aggregate score (i.e. level 2 or higher), rather than an indicator, because it looks for the underlying scores used to calculate each aggregate score. For example, we can see how the Sustainability scores break down into their three underlying components, for each country:
```{r, fig.width=7}
plot_bar(coin, dset = "Aggregated", iCode = "Sust", stack_children = TRUE)
```
## Dot plots
COINr's dot plot is pretty similar to a distribution plot, but it is intended for showing the position of a particular unit or units relative to their peers. This means that to make it useful, you should also select one or more units to highlight.
```{r, fig.height=2, fig.width=4}
plot_dot(coin, dset = "Raw", iCode = "LPI", usel = c("JPN", "ESP"))
```
Here we have plotted the "LPI" indicator and highlighted Spain and Japan. We can also add a statistic of this indicator, such as the median:
```{r, fig.height=2, fig.width=4}
plot_dot(coin, dset = "Raw", iCode = "LPI", usel = c("JPN", "ESP"), add_stat = "median",
stat_label = "Median", plabel = "iName+unit")
```
Here we have also labelled the statistic using `stat_label`, and labelled the x-axis using the indicator name and unit which are taken from the indicator metadata found within the coin.
## Scatter
The `plot_scatter()` function gives a quick way to plot scatter plots between any indicators or any variables in a coin.
```{r, fig.width=4}
plot_scatter(coin, dsets = "Raw", iCodes = c("Goods", "Services"), point_label = "uCode")
```
Variables can come from different data sets (including unit metadata), and we can also colour by groups:
```{r, fig.width=5}
plot_scatter(coin, dsets = c("uMeta", "Raw"), iCodes = c("Population", "Flights"),
by_group = "GDPpc_group", log_scale = c(TRUE, FALSE))
```
Here we have also converted the x-axis to a log scale since population is highly skewed. Other options can be found in the help page of `plot_scatter()`, and all plots can be further modified using ggplot2 commands.
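As a small sketch of what this looks like in practice:
```{r, eval=FALSE, fig.width=5}
library(ggplot2)
# COINr plots are ggplot objects, so layers and themes can be added as usual
plot_scatter(coin, dsets = "Raw", iCodes = c("Goods", "Services")) +
  ggtitle("Goods vs. Services") +
  theme_minimal()
```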
|
/scratch/gouwar.j/cran-all/cranData/COINr/vignettes/visualisation.Rmd
|
---
title: "Weights"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Weights}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Weights are used by most aggregation methods to optionally alter the contribution of each indicator in an aggregation group, as well as by aggregates themselves if they are further aggregated. Weighting is therefore part of aggregation, but this vignette deals with it separately because there are a few special tools for weighting in COINr.
First, let's see what weights look like in practice. When a coin is built using `new_coin()`, the `iMeta` data frame (an input to `new_coin()`) has a "Weight" column, which is also required. Therefore, every coin should have a set of weights in it by default, which you had to specify as part of its construction. Sets of weights are stored in the `.$Meta$Weights` sub-list. Each set of weights is stored as a data frame with a name. The set of weights created when calling `new_coin()` is called "Original". We can see this by building the example coin and accessing the "Original" set directly:
```{r}
library(COINr)
# build example coin
coin <- build_example_coin(up_to = "Normalise", quietly = TRUE)
# view weights
head(coin$Meta$Weights$Original)
```
The weight set simply has the indicator code, Level, and the weight itself. Notice that the indicator codes also include aggregate codes, up to the index:
```{r}
# view rows not in level 1
coin$Meta$Weights$Original[coin$Meta$Weights$Original$Level != 1, ]
```
The index itself doesn't have a weight because it is not used in any further aggregation. Notice also that weights can be specified *relative* to one another: when an aggregation group is aggregated, the weights within that group are first scaled to sum to 1. This means that weights are relative within groups, but not between groups.
# Manual re-weighting
To change weights, one way is to simply go back to the original `iMeta` data frame that you used to build the coin, and edit it. If you don't want to do that, you can also create a new weight set. This simply involves:
1. Making a copy of the existing set of weights
2. Changing the weights of the copy
3. Putting the new set of weights in the coin
For example, if we want to change the weighting of the "Conn" and "Sust" sub-indices, we could do this:
```{r}
# copy original weights
w1 <- coin$Meta$Weights$Original
# modify weights of Conn and Sust to 0.3 and 0.7 respectively
w1$Weight[w1$iCode == "Conn"] <- 0.3
w1$Weight[w1$iCode == "Sust"] <- 0.7
# put weight set back with new name
coin$Meta$Weights$MyFavouriteWeights <- w1
```
Now, to actually use these weights in aggregation, we have to direct the `Aggregate()` function to find them. When weights are stored in the "Weights" sub-list as we have done here, this is easy because we only have to pass the name of the weights to `Aggregate()`:
```{r}
coin <- Aggregate(coin, dset = "Normalised", w = "MyFavouriteWeights")
```
Alternatively, we can pass the data frame itself to `Aggregate()` if we don't want to store it in the coin for some reason:
```{r}
coin <- Aggregate(coin, dset = "Normalised", w = w1)
```
When altering weights we may wish to compare the outcomes of alternative sets of weights. See the [Adjustments and comparisons](adjustments.html) vignette for details on how to do this.
# Effective weights
COINr has some statistical tools for adjusting weights, as explained in the next sections. Before that, it is also interesting to look at "effective weights". At the index level, the weighting of an indicator is due not just to its own weight, but also to the weights of each aggregation group that it belongs to, as well as the number of indicators/aggregates in each group. This means that the final index-level weighting of each indicator is not always obvious at a glance. COINr has a built-in function to get these "effective weights":
```{r}
w_eff <- get_eff_weights(coin, out2 = "df")
head(w_eff)
```
The "EffWeight" column is the effective weight of each component at the highest level of aggregation (the index). These weights sum to 1 for each level:
```{r}
# get sum of effective weights for each level
tapply(w_eff$EffWeight, w_eff$Level, sum)
```
The effective weights can also be viewed using the `plot_framework()` function, where the angle of each indicator/aggregate is proportional to its effective weight:
```{r, fig.width=5, fig.height=5}
plot_framework(coin)
```
# PCA weights
The `get_PCA()` function can be used to return a set of weights which maximises the explained variance within aggregation groups. This function is already discussed in the [Analysis](analysis.html) vignette, so we will only focus on the weighting aspect here.
First of all, PCA weights come with a number of caveats which need to be mentioned (these are also detailed in the `get_PCA()` function help). To begin with, what constitutes "PCA weights" in composite indicators is not very well-defined. In COINr, a simple option is adopted: the loadings of the first principal component are taken as the weights. The logic is that these loadings maximise the explained variance - the implication being that if we use them as weights in an aggregation, we should maximise the explained variance and hence the information passed from the indicators to the aggregate value. This is a nice property in a composite indicator, where one of the aims is to represent many indicators by a single composite. See [here](https://doi.org/10.1016/j.envsoft.2021.105208) for a discussion on this.
But. The weights that result from PCA have a number of downsides. First, they can often include negative weights, which can be hard to justify. Also, PCA may arbitrarily flip the axes (since from a variance point of view the direction is not important). In the quest for maximum variance, PCA will also weight the strongest-correlating indicators the highest, which means that other indicators may be neglected. In short, it often results in a very unbalanced set of weights. Moreover, PCA can only be performed on one level at a time.
The result is that PCA weights should be used carefully. All that said, let's see how to get PCA weights. We simply run the `get_PCA()` function with `out2 = "coin"` and specifying the name of the weights to use. Here, we will calculate PCA weights at level 2, i.e. at the first level of aggregation. To do this, we need to use the "Aggregated" data set because the PCA needs to have the level 2 scores to work with:
```{r}
coin <- get_PCA(coin, dset = "Aggregated", Level = 2,
weights_to = "PCAwtsLev2", out2 = "coin")
```
This stores the new set of weights in the Weights sub-list, with the name we gave it. Let's have a look at the resulting weights. The only weights that have changed are at level 2, so we look at those:
```{r}
coin$Meta$Weights$PCAwtsLev2[coin$Meta$Weights$PCAwtsLev2$Level == 2, ]
```
This shows the nature of PCA weights: in this case the effect is not too severe, but the Social dimension is negatively weighted because it is negatively correlated with the other components in its group. In any case, PCA weights can sometimes look "strange", and that may or may not be a problem. As explained above, to actually use these weights we pass their name to `Aggregate()`.
# Optimised weights
While PCA is based on linear algebra, another way to statistically weight indicators is via numerical optimisation. Optimisation is a numerical search method which finds a set of values which maximise or minimise some criterion, called the "objective function".
In composite indicators, different objectives are conceivable. The `get_opt_weights()` function gives two options in this respect - either to look for the set of weights that "balances" the indicators, or the set that maximises the information transferred (see [here](https://doi.org/10.1016/j.envsoft.2021.105208)). This is done by looking at the correlations between indicators and the index. This needs a little explanation.
If weights are chosen to match the opinions of experts, or indeed your own opinion, there is a catch that is not very obvious. Put simply, weights do not directly translate into importance.
To understand why, we must first define what "importance" means. Actually there is more than one way to look at this, but one possible measure is to use the (possibly nonlinear) correlation between each indicator and the overall index. If the correlation is high, the indicator is well-reflected in the index scores, and vice versa.
If we accept this definition of importance, then it's important to realise that this correlation is affected not only by the weights attached to each indicator, but also by the correlations *between indicators*. This means that these correlations must be accounted for in choosing weights that agree with the budgets assigned by the group of experts.
In fact, it is possible to reverse-engineer the weights either [analytically using a linear solution](https://doi.org/10.1111/j.1467-985X.2012.01059.x) or [numerically using a nonlinear solution](https://doi.org/10.1016/j.ecolind.2017.03.056). While the former method is far quicker than a nonlinear optimisation, it is only applicable in the case of a single level of aggregation, with an arithmetic mean, and using linear correlation as a measure. Therefore in COINr, the second method is used.
Let's now see how to use `get_opt_weights()` in practice. Like with PCA weights, we can only optimise one level at a time. We also need to say what kind of optimisation to perform. Here, we will search for the set of weights that results in equal influence of the sub-indexes (level 3) on the index. We need a coin with an aggregated data set already present, because the function needs to know which kind of aggregation method you are using. Just before doing that, we will first check what the correlations look like between level 3 and the index, using equal weighting:
```{r}
# build example coin
coin <- build_example_coin(quietly = TRUE)
# check correlations between level 3 and index
get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
```
This shows that the correlations are similar but not the same. Now let's run the optimisation:
```{r}
# optimise weights at level 3
coin <- get_opt_weights(coin, itarg = "equal", dset = "Aggregated",
Level = 3, weights_to = "OptLev3", out2 = "coin")
```
We can view the optimised weights (only the weights at level 3 will have changed):
```{r}
coin$Meta$Weights$OptLev3[coin$Meta$Weights$OptLev3$Level == 3, ]
```
To see if this was successful in balancing correlations, let's re-aggregate using these weights and check correlations.
```{r}
# re-aggregate
coin <- Aggregate(coin, dset = "Normalised", w = "OptLev3")
# check correlations between level 3 and index
get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
```
This shows that indeed the correlations are now well-balanced - the optimisation has worked.
We will not explore all the features of `get_opt_weights()` here, especially because optimisations can take a significant amount of CPU time. However, the main options include specifying a vector of "importances" rather than aiming for equal importance, and optimising to maximise total correlation, rather than balancing. There are also some numerical optimisation parameters that could help if the optimisation doesn't converge.
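As a rough sketch of the first of these options (the exact form of the `itarg` argument should be checked in the `get_opt_weights()` documentation - the target values below are purely illustrative):
```{r, eval=FALSE}
# aim for the first sub-index to be "twice as important" as the second
coin <- get_opt_weights(coin, itarg = c(2, 1), dset = "Aggregated",
                        Level = 3, weights_to = "OptLev3Unequal", out2 = "coin")
```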
|
/scratch/gouwar.j/cran-all/cranData/COINr/vignettes/weights.Rmd
|
#' Categorical Cause-Effect Pairs
#'
#' Cause-effect pairs extracted from the R packages MASS and datasets, for which the pairwise causal relationships are clear from the context and at least one of the variables in each pair is categorical. For non-categorical variables, we discretized them at 5 evenly spaced quantiles. The current version contains 33 categorical cause-effect pairs.
#'
#' @docType data
#'
#' @usage data(CatPairs)
#'
#' @format A list of length 2. The first element is a list of 33 cause-effect pairs as data frames with the first column being the cause and the second column being the effect. The second element is a list of sources of each pair.
#'
#' @keywords datasets
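#'
#' @examples
#' # Minimal sketch of accessing the data (structure as described above):
#' data(CatPairs)
#' head(CatPairs[[1]][[1]])
#' CatPairs[[2]][[1]]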
"CatPairs"
|
/scratch/gouwar.j/cran-all/cranData/COLP/R/CatPairs-data.R
|
require(combinat)
require(MASS)
# Exhaustive search over all orderings of the levels of y, scoring each ordering
# by the log-likelihood of an ordinal (or binary) logistic regression of y on x.
COLP_exhaustive = function(y, x){
  nlev_y = nlevels(y)
  P = combinat::permn(nlev_y)   # all permutations of the level ordering
  nP = length(P)
  xCy = rep(0, nP)
  for (l in 1:nP){
    # relabel the levels of y according to permutation l
    yy = y
    levels(yy) = as.character(P[[l]])
    yy = as.factor(as.numeric(as.character(yy)))
    if (nlevels(yy) > 2){
      xCy[l] = stats::logLik(MASS::polr(yy ~ x, method = "logistic"))
    } else {
      xCy[l] = stats::logLik(stats::glm(yy ~ x, family = stats::binomial))
    }
  }
  # return the best ordering and its log-likelihood
  i = which.max(xCy)
  return(list(M = xCy[i], P = P[[i]]))
}
# Greedy search over level orderings of y: start from a (random) ordering and
# repeatedly swap pairs of levels, keeping a swap whenever it improves the
# log-likelihood, until no further improvement is found.
COLP_greedy = function(y, x, P = NULL){
  nlev_y = nlevels(y)
  if (is.null(P)){
    P = sample(nlev_y)   # random starting ordering
    #P = 1:nlev_y
  }
  Ps = P
  # score the starting ordering
  yy = y
  levels(yy) = as.character(P)
  nly = nlevels(yy)
  yy = as.factor(as.numeric(as.character(yy)))
  if (nly > 2){
    xCy = stats::logLik(MASS::polr(yy ~ x, method = "logistic"))
  } else {
    xCy = stats::logLik(stats::glm(yy ~ x, family = stats::binomial))
  }
  xCys = xCy
  improv = TRUE
  while (improv){
    improv = FALSE
    Ps = P
    for (i in 1:(nlev_y - 1)){
      for (j in (i + 1):nlev_y){
        # swap levels i and j and re-score
        tmp = Ps[i]
        Ps[i] = Ps[j]
        Ps[j] = tmp
        yy = y
        levels(yy) = as.character(Ps)
        yy = as.factor(as.numeric(as.character(yy)))
        if (nly > 2){
          xCys = stats::logLik(MASS::polr(yy ~ x, method = "logistic"))
        } else {
          xCys = stats::logLik(stats::glm(yy ~ x, family = stats::binomial))
        }
        if (xCys > xCy){
          # keep the improved ordering
          P = Ps
          xCy = xCys
          improv = TRUE
        }
        # undo the swap before trying the next pair
        tmp = Ps[i]
        Ps[i] = Ps[j]
        Ps[j] = tmp
      }
    }
  }
  return(list(M = xCy, P = P))
}
#' @title Causal Discovery for Bivariate Categorical Data
#'
#' @description Estimate the causal direction between two categorical variables.
#'
#' @param y factor, a potential effect variable
#' @param x factor, a potential cause variable
#' @param algo exhaustive search (algo="E") of category ordering or greedy search (algo="G")
#' @return A list of length 3. cd = 1 if x causes y; cd = 0 otherwise. P is the optimal ordering of the effect variable. epsilon is the difference in log-likelihood favoring x causes y.
#'
#' @export
#'
#' @examples
#' fit = COLP(CatPairs[[1]][[1]]$Diffwt,CatPairs[[1]][[1]]$Treat,algo="E")
#' fit$cd
COLP = function(y, x, algo = "E"){
  # score both causal directions using the chosen search over level orderings
  if (algo == "G"){        # greedy
    xCy = COLP_greedy(y, x)
    yCx = COLP_greedy(x, y)
  } else if (algo == "E"){ # exhaustive
    xCy = COLP_exhaustive(y, x)
    yCx = COLP_exhaustive(x, y)
  }
  # add the marginal log-likelihood of the putative cause to each direction's score
  if (nlevels(x) > 2){
    xCy_optim = xCy$M + stats::logLik(MASS::polr(x ~ 1, method = "logistic"))
  } else {
    xCy_optim = xCy$M + stats::logLik(stats::glm(x ~ 1, family = stats::binomial))
  }
  if (nlevels(y) > 2){
    yCx_optim = yCx$M + stats::logLik(MASS::polr(y ~ 1, method = "logistic"))
  } else {
    yCx_optim = yCx$M + stats::logLik(stats::glm(y ~ 1, family = stats::binomial))
  }
  # pick the direction with the higher total log-likelihood
  if (xCy_optim > yCx_optim){
    cd = 1   # x -> y
    P = xCy$P
  } else if (xCy_optim < yCx_optim){
    cd = 0   # y -> x
    P = yCx$P
  } else {
    cd = P = NA
  }
  return(list(epsilon = xCy_optim - yCx_optim, cd = cd, P = P))
}
|
/scratch/gouwar.j/cran-all/cranData/COLP/R/functions.R
|
# COMBAT function: combine the GATES, VEGAS and simpleM gene-based tests into a
# single p-value using the extended Simes procedure.
COMBAT = function(x, snp.ref, vegas.pct = c(0.1, 0.2, 0.3, 0.4, 1),
                  pca_cut_perc = 0.995, nperm = 100, seed = 12345, ncores = 1){
  pvalues <- as.numeric(x)
  n_snps <- length(pvalues)
  # LD (correlation) matrix from the reference genotypes
  cor_G <- ld.Rsquare(snp.ref)
  set.seed(seed)
  # component gene-based tests on the observed SNP p-values
  pval_gates <- gates(x = pvalues, cor_G = cor_G)
  pval_vegas <- vegas(x = pvalues, cor_G = cor_G, vegas.pct = vegas.pct)
  pval_simpleM <- simpleM(x = pvalues, cor_G = cor_G, pca_cut_perc = pca_cut_perc)
  gene_pvals <- c(GATES = pval_gates, pval_vegas, simpleM = pval_simpleM)
  # estimate the correlation between the methods' p-values by simulating
  # SNP p-values under the null from the LD structure
  rd <- rmvnorm(nperm, mean = rep(0, n_snps), sigma = cor_G)
  rd2 <- rd^2
  simul_pval_mat <- pchisq(rd2, 1, lower.tail = FALSE)
  func1 = function(x, cor_G, vegas.pct, pca_cut_perc = 0.995){
    p_gates <- gates(x = x, cor_G = cor_G)
    p_vegas <- vegas(x = x, cor_G = cor_G, vegas.pct = vegas.pct)
    p_simpleM <- simpleM(x = x, cor_G = cor_G, pca_cut_perc = pca_cut_perc)
    c(p_gates, p_vegas, p_simpleM)
  }
  if (ncores > 1 && requireNamespace("parallel", quietly = TRUE)){
    cl = parallel::makeCluster(ncores)
    parallel::clusterSetRNGStream(cl, .Random.seed)
    gene_pval_mat = parallel::parApply(cl, simul_pval_mat, 1, func1,
                                       cor_G = cor_G, vegas.pct = vegas.pct,
                                       pca_cut_perc = pca_cut_perc)
    parallel::stopCluster(cl)
  } else {
    gene_pval_mat = apply(simul_pval_mat, 1, func1, cor_G = cor_G,
                          vegas.pct = vegas.pct, pca_cut_perc = pca_cut_perc)
  }
  gene_pval_mat = t(gene_pval_mat)
  method_cor <- cor(gene_pval_mat)
  # combat by simpleM
  #p_combat_simpleM <- simpleM(x=gene_pvals, cor_G=method_cor, pca_cut_perc=pca_cut_perc)
  # compute the COMBAT P-value using the extended Simes procedure
  order_pvals <- order(gene_pvals)
  sort_pvals <- gene_pvals[order_pvals]
  method_cor <- method_cor[order_pvals, order_pvals]
  p_combat_simes <- ext_simes(sort_pvals, method_cor)
  res <- c(COMBAT = p_combat_simes, gene_pvals)
  res
}
|
/scratch/gouwar.j/cran-all/cranData/COMBAT/R/COMBAT.R
|
# ==============================================================================================
# Function of extended Simes
ext_simes = function(x, cor_r){
  # effective number of independent tests, from the eigenvalues of the LD matrix
  eff.snpcount.fun <- function(ldmat) {
    ldmat <- as.matrix(ldmat)
    snpcount.local <- dim(ldmat)[1]
    if (snpcount.local <= 1) return(1)
    ev <- eigen(ldmat, only.values = TRUE)$values
    if (sum(ev < 0) != 0) {
      ev <- ev[ev > 0]
      ev <- ev/sum(ev) * snpcount.local
    }
    ev <- ev[ev > 1]
    snpcount.local - sum(ev - 1)
  }
  eff.snpcount.global <- eff.snpcount.fun(cor_r)
  # Simes-type combination: x is assumed to be sorted in increasing order
  n_values <- length(x)
  candid <- sapply(1:n_values, function(i){
    (eff.snpcount.global * x[i]) / eff.snpcount.fun(cor_r[1:i, 1:i])
  })
  p_ext_simes <- min(candid)
  p_ext_simes
}
|
/scratch/gouwar.j/cran-all/cranData/COMBAT/R/extSimes.R
|
# Function of GATES
gates = function(x, cor_G){
  if(is.positive.definite(cor_G) == FALSE)
    stop('cor_G is not positive definite. Please re-calculate with ld.Rsquare function.\n')
  # sort SNP p-values and reorder the LD matrix accordingly
  pval_sort <- sort(x)
  pval_order <- order(x)
  n_snps <- length(x)
  cor_P <- cor_G[pval_order, pval_order]
  # approximate the correlation between p-values from the genotype correlation
  # (6th-order polynomial approximation used by GATES)
  cor_P <- 0.2982*cor_P^6 - 0.0127*cor_P^5 + 0.0588*cor_P^4 +
    0.0099*cor_P^3 + 0.6281*cor_P^2 - 0.0009*cor_P
  p_gates <- ext_simes(pval_sort, cor_P)
  p_gates
}
|
/scratch/gouwar.j/cran-all/cranData/COMBAT/R/gates.R
|
#compute ld matrix using correlation
ld.Rsquare = function(x){
  cor_G <- cor(as.matrix(x), use = 'p')
  # if the correlation matrix is not positive definite, repair it: first by
  # projection, then by progressively inflating the diagonal
  if(is.positive.definite(cor_G) == FALSE){
    cor_G <- make.positive.definite(cor_G)
  }
  if(is.positive.definite(cor_G) == FALSE){
    cor_G <- cor(as.matrix(x), use = 'p')
    diag(cor_G) <- 1.0001
  }
  if(is.positive.definite(cor_G) == FALSE){
    diag(cor_G) <- 1.001
  }
  if(is.positive.definite(cor_G) == FALSE){
    diag(cor_G) <- 1.01
  }
  cor_G
}
|
/scratch/gouwar.j/cran-all/cranData/COMBAT/R/ld.Rsquare.R
|
#simpleM
simpleM = function(x, cor_G, pca_cut_perc = 0.995){
  if(is.positive.definite(cor_G) == FALSE)
    stop('cor_G is not positive definite. Please re-calculate with ld.Rsquare function.\n')
  min_p_obs <- min(x)
  num_of_snps <- length(x)
  cor_r <- cor_G
  # effective number of independent tests: number of principal components needed
  # to explain pca_cut_perc of the total variance
  eigen_values <- eigen(cor_r, only.values = TRUE)$values
  eigen_values_sorted <- sort(eigen_values, decreasing = TRUE)
  sum_eigen_values <- sum(eigen_values_sorted)
  M_eff_G <- 1
  for(k in 1:num_of_snps){
    temp <- sum(eigen_values_sorted[1:k])/sum_eigen_values
    if(temp >= pca_cut_perc){
      M_eff_G <- k
      break
    }
  }
  # Sidak-style correction of the minimum p-value, returned explicitly
  p_simpleM <- 1 - (1 - min_p_obs)^M_eff_G
  p_simpleM
}
|
/scratch/gouwar.j/cran-all/cranData/COMBAT/R/simpleM.R
|
# Function of VEGAS with different proportion tests and Fisher combination test
vegas = function(x, cor_G, vegas.pct = c(0.1, 0.2, 0.3, 0.4, 1), max.simulation = 1e6){
  if(is.positive.definite(cor_G) == FALSE)
    stop('cor_G is not positive definite. Please re-calculate with ld.Rsquare function.\n')
  # adaptive precision: re-run with more simulations whenever any p-value is small
  pval_vegas <- vegas.call(x = x, cor_G = cor_G, vegas.pct = vegas.pct, n_simul = 1000)
  if(any(pval_vegas <= 0.005)){
    pval_vegas <- vegas.call(x = x, cor_G = cor_G, vegas.pct = vegas.pct, n_simul = 10000)
  }
  if(any(pval_vegas <= 0.0005)){
    pval_vegas <- vegas.call(x = x, cor_G = cor_G, vegas.pct = vegas.pct, n_simul = 100000)
  }
  if(any(pval_vegas <= 5e-5)){
    pval_vegas <- vegas.call(x = x, cor_G = cor_G, vegas.pct = vegas.pct, n_simul = 1000000)
  }
  if(any(pval_vegas <= 5e-6) && max.simulation > 1000000){
    pval_vegas <- vegas.call(x = x, cor_G = cor_G, vegas.pct = vegas.pct, n_simul = max.simulation)
  }
  pval_vegas
}
#this is not to be called directly
vegas.call = function(x, cor_G, vegas.pct, n_simul){
  stopifnot(length(x) == ncol(cor_G))
  # number of top SNPs to sum for each proportion test; always include the max test
  vegas_vec <- ceiling(vegas.pct * ncol(cor_G))
  vegas_vec <- sort(vegas_vec)
  if(vegas_vec[1] > 1){
    vegas.pct <- c(0, vegas.pct)
    vegas_vec <- c(1, vegas_vec)
  }
  # convert p-values to 1-df chi-square statistics (cap Inf at 60)
  chisq_vec <- qchisq(x, 1, lower.tail = FALSE)
  chisq_vec[chisq_vec == Inf] <- 60
  n_snps <- length(x)
  n_tests <- length(vegas_vec)
  # observed test statistics
  TS_obs <- rep(NA, n_tests)
  TS_obs[1] <- max(chisq_vec, na.rm = TRUE)
  chisq_vec <- sort(chisq_vec, decreasing = TRUE)
  for (j in 2:n_tests) TS_obs[j] <- sum(chisq_vec[1:vegas_vec[j]])
  # simulate null statistics from the LD structure
  rd <- rmvnorm(n_simul, mean = rep(0, n_snps), sigma = cor_G)
  rd2 <- rd^2
  rd2 <- apply(rd2, 1, sort, decreasing = TRUE)
  pPerm0 <- rep(NA, n_tests)
  T0s <- apply(rd2, 2, max)
  pPerm0[1] <- (sum(T0s >= TS_obs[1]) + 1)/(length(T0s) + 1)
  for(j in 2:n_tests){
    for (i in 1:n_simul) T0s[i] <- sum(rd2[1:vegas_vec[j], i])
    pPerm0[j] <- (sum(T0s >= TS_obs[j]) + 1)/(length(T0s) + 1)
  }
  v1 <- paste0('VEGAS.p', vegas.pct)
  v1[vegas_vec == ncol(cor_G)] <- 'VEGAS.all'
  v1[vegas_vec == 1] <- 'VEGAS.max'
  names(pPerm0) <- v1
  pPerm0
}
|
/scratch/gouwar.j/cran-all/cranData/COMBAT/R/vegas.R
|
#' EM-Algorithm Estimation of the Binary Outcome Misclassification Model
#'
#' Jointly estimate \eqn{\beta} and \eqn{\gamma} parameters from the true outcome
#' and observation mechanisms, respectively, in a binary outcome misclassification
#' model.
#'
#' @param Ystar A numeric vector of indicator variables (1, 2) for the observed
#' outcome \code{Y*}. There should be no \code{NA} terms. The reference category is 2.
#' @param x_matrix A numeric matrix of covariates in the true outcome mechanism.
#' \code{x_matrix} should not contain an intercept and no values should be \code{NA}.
#' @param z_matrix A numeric matrix of covariates in the observation mechanism.
#' \code{z_matrix} should not contain an intercept and no values should be \code{NA}.
#' @param beta_start A numeric vector or column matrix of starting values for the \eqn{\beta}
#' parameters in the true outcome mechanism. The number of elements in \code{beta_start}
#' should be equal to the number of columns of \code{x_matrix} plus 1.
#' @param gamma_start A numeric vector or matrix of starting values for the \eqn{\gamma}
#' parameters in the observation mechanism. In matrix form, the \code{gamma_start} matrix rows
#' correspond to parameters for the \code{Y* = 1}
#' observed outcome, with the dimensions of \code{z_matrix} plus 1, and the
#' gamma parameter matrix columns correspond to the true outcome categories
#' \eqn{Y \in \{1, 2\}}. A numeric vector for \code{gamma_start} is
#' obtained by concatenating the gamma matrix, i.e. \code{gamma_start <- c(gamma_matrix)}.
#' @param tolerance A numeric value specifying when to stop estimation, based on
#' the difference of subsequent log-likelihood estimates. The default is \code{1e-7}.
#' @param max_em_iterations An integer specifying the maximum number of
#' iterations of the EM algorithm. The default is \code{1500}.
#' @param em_method A character string specifying which EM algorithm will be applied.
#' Options are \code{"em"}, \code{"squarem"}, or \code{"pem"}. The default and
#' recommended option is \code{"squarem"}.
#'
#' @return \code{COMBO_EM} returns a data frame containing four columns. The first
#' column, \code{Parameter}, represents a unique parameter value for each row.
#' The next column contains the parameter \code{Estimates}, followed by the standard
#' error estimates, \code{SE}. The final column, \code{Convergence}, reports
#' whether or not the algorithm converged for a given parameter estimate.
#'
#' Estimates are provided for the binary misclassification model, as well as three
#' additional cases. The "SAMBA" parameter estimates are from the R package
#' SAMBA, which uses the EM algorithm to estimate a binary outcome misclassification
#' model that assumes there is perfect specificity. The "PSens" parameter estimates
#' are estimated using the EM algorithm for the binary outcome misclassification
#' model that assumes there is perfect sensitivity. The "Naive" parameter
#' estimates are from a simple logistic regression \code{Y* ~ X}.
#'
#' @references Beesley, L. and Mukherjee, B. (2020).
#' Statistical inference for association studies using electronic health records:
#' Handling both selection bias and outcome misclassification.
#' Biometrics, 78, 214-226.
#'
#' @export
#'
#' @include pi_compute.R
#' @include pistar_compute.R
#' @include w_j.R
#' @include q_beta_f.R
#' @include q_gamma_f.R
#' @include em_function.R
#' @include loglik.R
#' @include perfect_sensitivity_EM.R
#' @include COMBO_data.R
#'
#' @importFrom stats rnorm rgamma rmultinom coefficients binomial glm
#' @importFrom turboEM turboem
#' @importFrom SAMBA obsloglikEM
#' @importFrom Matrix nearPD
#'
#' @examples \donttest{
#' set.seed(123)
#' n <- 1000
#' x_mu <- 0
#' x_sigma <- 1
#' z_shape <- 1
#'
#' true_beta <- matrix(c(1, -2), ncol = 1)
#' true_gamma <- matrix(c(.5, 1, -.5, -1), nrow = 2, byrow = FALSE)
#'
#' x_matrix = matrix(rnorm(n, x_mu, x_sigma), ncol = 1)
#' X = matrix(c(rep(1, n), x_matrix[,1]), ncol = 2, byrow = FALSE)
#' z_matrix = matrix(rgamma(n, z_shape), ncol = 1)
#' Z = matrix(c(rep(1, n), z_matrix[,1]), ncol = 2, byrow = FALSE)
#'
#' exp_xb = exp(X %*% true_beta)
#' pi_result = exp_xb[,1] / (exp_xb[,1] + 1)
#' pi_matrix = matrix(c(pi_result, 1 - pi_result), ncol = 2, byrow = FALSE)
#'
#' true_Y <- rep(NA, n)
#' for(i in 1:n){
#' true_Y[i] = which(stats::rmultinom(1, 1, pi_matrix[i,]) == 1)
#' }
#'
#' exp_zg = exp(Z %*% true_gamma)
#' pistar_denominator = matrix(c(1 + exp_zg[,1], 1 + exp_zg[,2]), ncol = 2, byrow = FALSE)
#' pistar_result = exp_zg / pistar_denominator
#'
#' pistar_matrix = matrix(c(pistar_result[,1], 1 - pistar_result[,1],
#' pistar_result[,2], 1 - pistar_result[,2]),
#' ncol = 2, byrow = FALSE)
#'
#' obs_Y <- rep(NA, n)
#' for(i in 1:n){
#' true_j = true_Y[i]
#' obs_Y[i] = which(rmultinom(1, 1,
#' pistar_matrix[c(i, n + i),
#' true_j]) == 1)
#' }
#'
#' Ystar <- obs_Y
#'
#' starting_values <- rep(1,6)
#' beta_start <- matrix(starting_values[1:2], ncol = 1)
#' gamma_start <- matrix(starting_values[3:6], ncol = 2, nrow = 2, byrow = FALSE)
#'
#' EM_results <- COMBO_EM(Ystar, x_matrix = x_matrix, z_matrix = z_matrix,
#' beta_start = beta_start, gamma_start = gamma_start)
#'
#' EM_results}
COMBO_EM <- function(Ystar,
x_matrix, z_matrix,
beta_start, gamma_start,
tolerance = 1e-7, max_em_iterations = 1500,
em_method = "squarem"){
if (is.data.frame(z_matrix))
z_matrix <- as.matrix(z_matrix)
if (!is.numeric(z_matrix))
stop("'z_matrix' should be a numeric matrix.")
if (is.vector(z_matrix))
z_matrix <- as.matrix(z_matrix)
if (!is.matrix(z_matrix))
stop("'z_matrix' should be a matrix or data.frame.")
if (!is.null(x_matrix)) {
if (is.data.frame(x_matrix))
x_matrix <- as.matrix(x_matrix)
if (!is.numeric(x_matrix))
stop("'x_matrix' must be numeric.")
if (is.vector(x_matrix))
x_matrix <- as.matrix(x_matrix)
if (!is.matrix(x_matrix))
stop("'x_matrix' must be a data.frame or matrix.")
}
if (!is.numeric(Ystar) || !is.vector(Ystar))
stop("'Ystar' must be a numeric vector.")
if (length(setdiff(1:2, unique(Ystar))) != 0)
stop("'Ystar' must be coded 1/2, where the reference category is 2.")
n_cat = 2
sample_size = length(Ystar)
if (nrow(z_matrix) != sample_size)
stop("The number of rows of 'z_matrix' must match the length of 'Ystar'.")
if (!is.null(x_matrix) && nrow(x_matrix) != sample_size)
stop("The number of rows of 'x_matrix' must match the length of 'Ystar'.")
X = matrix(c(rep(1, sample_size), c(x_matrix)),
byrow = FALSE, nrow = sample_size)
Z = matrix(c(rep(1, sample_size), c(z_matrix)),
byrow = FALSE, nrow = sample_size)
obs_Y_reps = matrix(rep(Ystar, n_cat), nrow = sample_size, byrow = FALSE)
category_matrix = matrix(rep(1:n_cat, each = sample_size), nrow = sample_size,
byrow = FALSE)
obs_Y_matrix = 1 * (obs_Y_reps == category_matrix)
control_settings = list(convtype = "parameter", tol = tolerance,
stoptype = "maxiter", maxiter = max_em_iterations)
results = turboEM::turboem(par = c(c(beta_start), c(gamma_start)),
fixptfn = em_function, objfn = loglik,
method = c(em_method),
obs_Y_matrix = obs_Y_matrix,
X = X, Z = Z,
sample_size = sample_size, n_cat = n_cat,
control.run = control_settings)
Ystar01 = ifelse(Ystar == 1, 1, ifelse(Ystar == 2, 0, NA))
log_reg = stats::glm(Ystar01 ~ . + 0, as.data.frame(X),
family = "binomial"(link = "logit"))
SAMBA_start <- c(beta_start, c(gamma_start)[1:(1 + ncol(z_matrix))])
SAMBA_i <- SAMBA::obsloglikEM(Ystar01, Z = x_matrix,
X = z_matrix, start = SAMBA_start,
tol = tolerance,
maxit = max_em_iterations)
perfect_sens_start <- c(beta_start, c(gamma_start)[(2 + ncol(z_matrix)):length(c(gamma_start))])
perfect_sens_i <- perfect_sensitivity_EM(Ystar01, Z = x_matrix,
X = z_matrix, start = perfect_sens_start,
tolerance = tolerance,
max_em_iterations = max_em_iterations)
# Do label switching correction within the EM algorithm simulation
results_i_gamma <- matrix(turboEM::pars(results)[(ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))],
ncol = n_cat, byrow = FALSE)
results_i_pistar_v <- pistar_compute(results_i_gamma, Z, sample_size, n_cat)
pistar_11 <- mean(results_i_pistar_v[1:sample_size, 1])
pistar_22 <- mean(results_i_pistar_v[(sample_size + 1):(2*sample_size), 2])
estimates_i <- if ((pistar_11 > .50 | pistar_22 > .50) |
(is.na(pistar_11) & is.na(pistar_22))) {
# If turboem cannot estimate the parameters they will be NA.
turboEM::pars(results)
} else {
gamma_index = (ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))
n_gamma_param = length(gamma_index) / n_cat
gamma_flip_index = ncol(X) + c((n_gamma_param + 1):length(gamma_index), 1:n_gamma_param)
c(-1*turboEM::pars(results)[1:ncol(X)], turboEM::pars(results)[gamma_flip_index])
}
#sigma_EM = tryCatch(solve(turboEM::hessian(results)[[1]]), silent = TRUE,
# error = function(e) NA)
#SE_EM = tryCatch(sqrt(diag(matrix(Matrix::nearPD(sigma_EM)$mat,
# row = length(c(c(beta_start), c(gamma_start))),
# byrow = FALSE))),
# silent = TRUE,
# error = function(e) rep(NA, ncol(X) + (n_cat * ncol(Z))))
sigma_EM = solve(turboEM::hessian(results)[[1]])
SE_EM <- if ((pistar_11 > .50 | pistar_22 > .50) |
(is.na(pistar_11) & is.na(pistar_22))) {
# If turboem cannot estimate the parameters they will be NA.
sqrt(diag(matrix(Matrix::nearPD(sigma_EM)$mat,
nrow = length(c(c(beta_start), c(gamma_start))),
byrow = FALSE)))
} else {
gamma_index = (ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))
n_gamma_param = length(gamma_index) / n_cat
gamma_flip_index = ncol(X) + c((n_gamma_param + 1):length(gamma_index), 1:n_gamma_param)
sqrt(diag(matrix(Matrix::nearPD(sigma_EM)$mat,
nrow = length(c(c(beta_start), c(gamma_start))),
byrow = FALSE)))[c(1:ncol(X), gamma_flip_index)]
}
beta_param_names <- paste0(rep("beta", ncol(X)), 1:ncol(X))
gamma_param_names <- paste0(rep("gamma", (n_cat * ncol(Z))),
rep(1:ncol(Z), n_cat),
rep(1:n_cat, each = ncol(Z)))
SAMBA_beta_param_names <- paste0(rep("SAMBA_beta", ncol(X)), 1:ncol(X))
SAMBA_gamma_param_names <- paste0(rep("SAMBA_gamma", (ncol(Z))),
rep(1:ncol(Z), 1),
rep(1, ncol(Z)))
PSens_beta_param_names <- paste0(rep("PSens_beta", ncol(X)), 1:ncol(X))
PSens_gamma_param_names <- paste0(rep("PSens_gamma", (ncol(Z))),
rep(1:ncol(Z), 1),
rep(2, ncol(Z)))
naive_beta_param_names <- paste0("naive_", beta_param_names)
estimates <- data.frame(Parameter = c(beta_param_names,
gamma_param_names,
SAMBA_beta_param_names,
SAMBA_gamma_param_names,
PSens_beta_param_names,
PSens_gamma_param_names,
naive_beta_param_names),
Estimates = c(estimates_i,
unname(SAMBA_i$param),
unname(perfect_sens_i$param),
unname(log_reg$coefficients)),
SE = c(SE_EM, sqrt(diag(SAMBA_i$variance)),
sqrt(diag(perfect_sens_i$variance)),
unname(summary(log_reg)$coefficients[,2])),
Convergence = c(rep(results$convergence,
length(c(beta_param_names,
gamma_param_names))),
rep(NA, length(c(SAMBA_beta_param_names,
SAMBA_gamma_param_names))),
rep(NA, length(c(PSens_beta_param_names,
PSens_gamma_param_names))),
rep(log_reg$converged, ncol(X))))
return(estimates)
}
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/COMBO_EM.R
|
#' EM-Algorithm Estimation of the Two-Stage Binary Outcome Misclassification Model
#'
#' Jointly estimate \eqn{\beta}, \eqn{\gamma}, \eqn{\delta} parameters from the true outcome,
#' first-stage observation, and second-stage observation mechanisms, respectively,
#' in a two-stage binary outcome misclassification model.
#'
#' @param Ystar A numeric vector of indicator variables (1, 2) for the first-stage observed
#' outcome \code{Y*}. There should be no \code{NA} terms. The reference category is 2.
#' @param Ytilde A numeric vector of indicator variables (1, 2) for the second-stage
#' observed outcome \eqn{\tilde{Y}}. There should be no \code{NA} terms. The
#' reference category is 2.
#' @param x_matrix A numeric matrix of covariates in the true outcome mechanism.
#' \code{x_matrix} should not contain an intercept and no values should be \code{NA}.
#' @param z_matrix A numeric matrix of covariates in the first-stage observation mechanism.
#' \code{z_matrix} should not contain an intercept and no values should be \code{NA}.
#' @param v_matrix A numeric matrix of covariates in the second-stage observation mechanism.
#' \code{v_matrix} should not contain an intercept and no values should be \code{NA}.
#' @param beta_start A numeric vector or column matrix of starting values for the \eqn{\beta}
#' parameters in the true outcome mechanism. The number of elements in \code{beta_start}
#' should be equal to the number of columns of \code{x_matrix} plus 1.
#' @param gamma_start A numeric vector or matrix of starting values for the \eqn{\gamma}
#' parameters in the first-stage observation mechanism. In matrix form, the \code{gamma_start} matrix rows
#' correspond to parameters for the \code{Y* = 1}
#' first-stage observed outcome, with the dimensions of \code{z_matrix} plus 1, and the
#' gamma parameter matrix columns correspond to the true outcome categories
#' \eqn{Y \in \{1, 2\}}. A numeric vector for \code{gamma_start} is
#' obtained by concatenating the gamma matrix, i.e. \code{gamma_start <- c(gamma_matrix)}.
#' @param delta_start A numeric array of starting values for the \eqn{\delta} parameters
#' in the second-stage observation mechanism. The first dimension (matrix rows)
#' of \code{delta_start} correspond to parameters for the \eqn{\tilde{Y} = 1}
#' second-stage observed outcome, with the dimensions of the \code{v_matrix}
#' plus 1. The second dimension (matrix columns) correspond to the first-stage
#' observed outcome categories \eqn{Y^* \in \{1, 2\}}. The third dimension of
#' \code{delta_start} corresponds to the true outcome categories
#' \eqn{Y \in \{1, 2\}}.
#' @param tolerance A numeric value specifying when to stop estimation, based on
#' the difference of subsequent log-likelihood estimates. The default is \code{1e-7}.
#' @param max_em_iterations An integer specifying the maximum number of
#' iterations of the EM algorithm. The default is \code{1500}.
#' @param em_method A character string specifying which EM algorithm will be applied.
#' Options are \code{"em"}, \code{"squarem"}, or \code{"pem"}. The default and
#' recommended option is \code{"squarem"}.
#'
#' @return \code{COMBO_EM_2stage} returns a data frame containing four columns. The first
#' column, \code{Parameter}, represents a unique parameter value for each row.
#' The next column contains the parameter \code{Estimates}, followed by the standard
#' error estimates, \code{SE}. The final column, \code{Convergence}, reports
#' whether or not the algorithm converged for a given parameter estimate.
#'
#' Estimates are provided for the two-stage binary misclassification model.
#'
#' @export
#'
#' @include pi_compute.R
#' @include pistar_compute.R
#' @include pitilde_compute.R
#' @include w_j_2stage.R
#' @include q_beta_f.R
#' @include q_gamma_f.R
#' @include q_delta_f.R
#' @include em_function_2stage.R
#' @include loglik_2stage.R
#' @include COMBO_data_2stage.R
#'
#' @importFrom stats rnorm rgamma rmultinom glm
#' @importFrom turboEM turboem
#' @importFrom Matrix nearPD
#'
#' @examples \donttest{
#' set.seed(123)
#' n <- 1000
#' x_mu <- 0
#' x_sigma <- 1
#' z_shape <- 1
#' v_shape <- 1
#'
#' true_beta <- matrix(c(1, -2), ncol = 1)
#' true_gamma <- matrix(c(.5, 1, -.5, -1), nrow = 2, byrow = FALSE)
#' true_delta <- array(c(1.5, 1, .5, .5, -.5, 0, -1, -1), dim = c(2, 2, 2))
#'
#' my_data <- COMBO_data_2stage(sample_size = n,
#' x_mu = x_mu, x_sigma = x_sigma,
#' z_shape = z_shape, v_shape = v_shape,
#' beta = true_beta, gamma = true_gamma, delta = true_delta)
#' table(my_data[["obs_Ytilde"]], my_data[["obs_Ystar"]], my_data[["true_Y"]])
#'
#' beta_start <- rnorm(length(c(true_beta)))
#' gamma_start <- rnorm(length(c(true_gamma)))
#' delta_start <- rnorm(length(c(true_delta)))
#'
#' EM_results <- COMBO_EM_2stage(Ystar = my_data[["obs_Ystar"]],
#' Ytilde = my_data[["obs_Ytilde"]],
#' x_matrix = my_data[["x"]],
#' z_matrix = my_data[["z"]],
#' v_matrix = my_data[["v"]],
#' beta_start = beta_start,
#' gamma_start = gamma_start,
#' delta_start = delta_start)
#'
#' EM_results}
COMBO_EM_2stage <- function(Ystar, Ytilde,
x_matrix, z_matrix, v_matrix,
beta_start, gamma_start, delta_start,
tolerance = 1e-7, max_em_iterations = 1500,
em_method = "squarem"){
if (is.data.frame(z_matrix))
z_matrix <- as.matrix(z_matrix)
if (!is.numeric(z_matrix))
stop("'z_matrix' should be a numeric matrix.")
if (is.vector(z_matrix))
z_matrix <- as.matrix(z_matrix)
if (!is.matrix(z_matrix))
stop("'z_matrix' should be a matrix or data.frame.")
if (is.vector(v_matrix))
v_matrix <- as.matrix(v_matrix)
if (!is.matrix(v_matrix))
stop("'v_matrix' should be a matrix or data.frame.")
if (!is.null(x_matrix)) {
if (is.data.frame(x_matrix))
x_matrix <- as.matrix(x_matrix)
if (!is.numeric(x_matrix))
stop("'x_matrix' must be numeric.")
if (is.vector(x_matrix))
x_matrix <- as.matrix(x_matrix)
if (!is.matrix(x_matrix))
stop("'x_matrix' must be a data.frame or matrix.")
}
if (!is.numeric(Ystar) || !is.vector(Ystar))
stop("'Ystar' must be a numeric vector.")
if (length(setdiff(1:2, unique(Ystar))) != 0)
stop("'Ystar' must be coded 1/2, where the reference category is 2.")
if (!is.numeric(Ytilde) || !is.vector(Ytilde))
stop("'Ytilde' must be a numeric vector.")
if (length(setdiff(1:2, unique(Ytilde))) != 0)
stop("'Ytilde' must be coded 1/2, where the reference category is 2.")
n_cat = 2
sample_size = length(Ystar)
sample_size_tilde <- length(Ytilde)
if (sample_size_tilde != sample_size)
stop("The lengths of 'Ystar' and 'Ytilde' must be the same.")
if (nrow(z_matrix) != sample_size)
stop("The number of rows of 'z_matrix' must match the length of 'Ystar'.")
if (nrow(v_matrix) != sample_size_tilde)
stop("The number of rows of 'v_matrix' must match the length of 'Ytilde'.")
if (!is.null(x_matrix) && nrow(x_matrix) != sample_size)
stop("The number of rows of 'x_matrix' must match the length of 'Ystar'.")
X = matrix(c(rep(1, sample_size), c(x_matrix)),
byrow = FALSE, nrow = sample_size)
Z = matrix(c(rep(1, sample_size), c(z_matrix)),
byrow = FALSE, nrow = sample_size)
V = matrix(c(rep(1, sample_size), c(v_matrix)),
byrow = FALSE, nrow = sample_size)
obs_Ystar_reps = matrix(rep(Ystar, n_cat), nrow = sample_size, byrow = FALSE)
category_matrix = matrix(rep(1:n_cat, each = sample_size), nrow = sample_size,
byrow = FALSE)
obs_Ystar_matrix = 1 * (obs_Ystar_reps == category_matrix)
obs_Ytilde_reps <- matrix(rep(Ytilde, n_cat), nrow = sample_size, byrow = FALSE)
category_matrix <- matrix(rep(1:n_cat, each = sample_size), nrow = sample_size,
byrow = FALSE)
obs_Ytilde_matrix <- 1 * (obs_Ytilde_reps == category_matrix)
control_settings = list(convtype = "parameter", tol = tolerance,
stoptype = "maxiter", maxiter = max_em_iterations)
results = turboEM::turboem(par = c(c(beta_start), c(gamma_start), c(delta_start)),
fixptfn = em_function_2stage, objfn = loglik_2stage,
method = c(em_method),
obs_Ystar_matrix = obs_Ystar_matrix,
obs_Ytilde_matrix = obs_Ytilde_matrix,
X = X, Z = Z, V = V,
sample_size = sample_size, n_cat = n_cat,
control.run = control_settings)
# Naive model
Ystar_01 <- ifelse(Ystar == 1, 1, 0)
Ytilde_01 <- ifelse(Ytilde == 1, 1, 0)
naive_start_beta <- glm(Ystar_01 ~ X[,-1], family = "binomial")
naive_start_delta1 <- glm(Ytilde_01[which(Ystar == 1)] ~ V[which(Ystar == 1), -1],
family = "binomial")
naive_start_delta2 <- glm(Ytilde_01[which(Ystar == 2)] ~ V[which(Ystar == 2), -1],
family = "binomial")
naive_results <- optim(par = c(unname(coef(naive_start_beta)),
unname(coef(naive_start_delta1)),
unname(coef(naive_start_delta2))),
fn = naive_loglik_2stage,
X = X, V = V,
obs_Ystar_matrix = obs_Ystar_matrix,
obs_Ytilde_matrix = obs_Ytilde_matrix,
sample_size = sample_size,
n_cat = n_cat,
hessian = TRUE,
control = list(maxit = max_em_iterations))
naive_se <- tryCatch(sqrt(diag(solve(naive_results$hessian))),
silent = TRUE,
error = function(e) rep(NA, length(c(unname(coef(naive_start_beta)),
unname(coef(naive_start_delta1)),
unname(coef(naive_start_delta2))))))
naive_convergence <- ifelse(naive_results$convergence == 1, "maxit reached",
ifelse(naive_results$convergence == 10,
"degenerency in nelder-mead simplex",
ifelse(naive_results$convergence == 51,
"warning from L-BFGS-B",
ifelse(naive_results$convergence == 52,
"error from L-BFGS-B",
ifelse(naive_results$convergence == 0,
TRUE, NA)))))
# Do label switching correction within the EM algorithm simulation
results_i_gamma <- matrix(turboEM::pars(results)[(ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))],
ncol = n_cat, byrow = FALSE)
results_i_pistar_v <- pistar_compute(results_i_gamma, Z, sample_size, n_cat)
results_i_delta <- array(turboEM::pars(results)[((ncol(X) + (n_cat * ncol(Z))) + 1):length(turboEM::pars(results))],
dim = c(ncol(V), n_cat, n_cat))
results_i_pitilde <- pitilde_compute(results_i_delta, V, sample_size, n_cat)
pistar_11 <- mean(results_i_pistar_v[1:sample_size, 1])
pistar_22 <- mean(results_i_pistar_v[(sample_size + 1):(2*sample_size), 2])
pitilde_111 <- mean(results_i_pitilde[1:sample_size, 1, 1])
pitilde_222 <- mean(results_i_pitilde[(sample_size + 1):(2*sample_size), 2, 2])
estimates_i <- if ((pistar_11 > .50 | pistar_22 > .50 | pitilde_111 > .50 | pitilde_222 > .50) |
(is.na(pistar_11) & is.na(pistar_22))) {
# If turboem cannot estimate the parameters they will be NA.
turboEM::pars(results)
} else {
gamma_index = (ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))
n_gamma_param = length(gamma_index) / n_cat
gamma_flip_index = ncol(X) + c((n_gamma_param + 1):length(gamma_index), 1:n_gamma_param)
delta_index = ((ncol(X) + (n_cat * ncol(Z))) + 1):length(turboEM::pars(results))
n_delta_param = length(delta_index) / n_cat
delta_flip_index = (ncol(X) + (n_cat * ncol(Z))) + c((n_delta_param + 1):length(delta_index), 1:n_delta_param)
c(-1*turboEM::pars(results)[1:ncol(X)], turboEM::pars(results)[c(gamma_flip_index, delta_flip_index)])
}
#sigma_EM = tryCatch(solve(turboEM::hessian(results)[[1]]), silent = TRUE,
# error = function(e) NA)
#SE_EM = tryCatch(sqrt(diag(matrix(Matrix::nearPD(sigma_EM)$mat,
# row = length(c(c(beta_start), c(gamma_start))),
# byrow = FALSE))),
# silent = TRUE,
# error = function(e) rep(NA, ncol(X) + (n_cat * ncol(Z))))
# Do label switching for the SE estimates too.
sigma_EM = tryCatch(solve(turboEM::hessian(results)[[1]]),
silent = TRUE,
error = function(e) matrix(NA,
nrow = length(c(c(beta_start), c(gamma_start), c(delta_start))),
ncol = length(c(c(beta_start), c(gamma_start), c(delta_start)))))
#SE_EM = sqrt(diag(matrix(Matrix::nearPD(sigma_EM)$mat,
# nrow = length(c(c(beta_start), c(gamma_start), c(delta_start))),
# byrow = FALSE)))
SE_EM <- if ((pistar_11 > .50 | pistar_22 > .50 | pitilde_111 > .50 | pitilde_222 > .50) |
(is.na(pistar_11) & is.na(pistar_22))) {
tryCatch(sqrt(diag(matrix(Matrix::nearPD(sigma_EM)$mat,
nrow = length(c(c(beta_start), c(gamma_start), c(delta_start))),
byrow = FALSE))),
silent = TRUE,
error = function(e) rep(NA, length = nrow(sigma_EM)))
} else {
gamma_index = (ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))
n_gamma_param = length(gamma_index) / n_cat
gamma_flip_index = ncol(X) + c((n_gamma_param + 1):length(gamma_index), 1:n_gamma_param)
delta_index = ((ncol(X) + (n_cat * ncol(Z))) + 1):length(turboEM::pars(results))
n_delta_param = length(delta_index) / n_cat
delta_flip_index = (ncol(X) + (n_cat * ncol(Z))) + c((n_delta_param + 1):length(delta_index), 1:n_delta_param)
tryCatch(sqrt(diag(matrix(Matrix::nearPD(sigma_EM)$mat,
nrow = length(c(c(beta_start), c(gamma_start), c(delta_start))),
byrow = FALSE)))[c(1:ncol(X), gamma_flip_index, delta_flip_index)],
silent = TRUE,
error = function(e) rep(NA, length = nrow(sigma_EM)))
}
beta_param_names <- paste0(rep("beta", ncol(X)), 1:ncol(X))
gamma_param_names <- paste0(rep("gamma", (n_cat * ncol(Z))),
rep(1:ncol(Z), n_cat),
rep(1:n_cat, each = ncol(Z)))
delta_param_names <- paste0(rep("delta", length(c(delta_start))),
rep(1:ncol(V), n_cat * n_cat),
rep(1, length(c(delta_start))),
rep(c(rep(1, ncol(V)), rep(2, ncol(V))), n_cat),
c(rep(1, ncol(V) * n_cat), rep(2, ncol(V) * n_cat)))
naive_param_names <- paste0("naive_", c(beta_param_names,
paste0(rep("delta", (n_cat * ncol(V))),
rep(1:ncol(V), n_cat),
rep(1:n_cat, each = ncol(V)))))
estimates <- data.frame(Parameter = c(beta_param_names,
gamma_param_names,
delta_param_names,
naive_param_names),
Estimates = c(c(estimates_i),
c(naive_results$par)),
SE = c(SE_EM, naive_se),
Convergence = c(rep(results$convergence,
length(c(beta_param_names,
gamma_param_names,
delta_param_names))),
rep(naive_convergence,
length(naive_param_names))))
return(estimates)
}
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/COMBO_EM_2stage.R
|
#' MCMC Estimation of the Binary Outcome Misclassification Model
#'
#' Jointly estimate \eqn{\beta} and \eqn{\gamma} parameters from the true outcome
#' and observation mechanisms, respectively, in a binary outcome misclassification
#' model.
#'
#' @param Ystar A numeric vector of indicator variables (1, 2) for the observed
#' outcome \code{Y*}. The reference category is 2.
#' @param x A numeric matrix of covariates in the true outcome mechanism.
#' \code{x} should not contain an intercept.
#' @param z A numeric matrix of covariates in the observation mechanism.
#' \code{z} should not contain an intercept.
#' @param prior A character string specifying the prior distribution for the
#' \eqn{\beta} and \eqn{\gamma} parameters. Options are \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"} (double exponential, i.e. Laplace).
#' @param beta_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\beta} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain a matrix of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\beta} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain a matrix of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\beta} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' a matrix of the degrees of freedom for \eqn{\beta} terms.
#' The third list element should be empty for all other prior distributions.
#' All matrices in the list should have dimensions \code{n_cat} X \code{dim_x}, and all
#' elements in the \code{n_cat} row should be set to \code{NA}.
#' @param gamma_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\gamma} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain an array of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\gamma} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain an array of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\gamma} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' an array of the degrees of freedom for \eqn{\gamma} terms.
#' The third list element should be empty for all other prior distributions.
#' All arrays in the list should have dimensions \code{n_cat} X \code{n_cat} X \code{dim_z},
#' and all elements in the \code{n_cat} row should be set to \code{NA}.
#' @param number_MCMC_chains An integer specifying the number of MCMC chains to compute.
#' The default is \code{4}.
#' @param MCMC_sample An integer specifying the number of MCMC samples to draw.
#' The default is \code{2000}.
#' @param burn_in An integer specifying the number of MCMC samples to discard
#' for the burn-in period. The default is \code{1000}.
#' @param display_progress A logical value specifying whether messages should be
#' displayed during model compilation. The default is \code{TRUE}.
#'
#' @return \code{COMBO_MCMC} returns a list of the posterior samples and posterior
#' means for both the binary outcome misclassification model and a naive logistic
#' regression of the observed outcome, \code{Y*}, predicted by the matrix \code{x}.
#' The list contains the following components:
#' \item{posterior_sample_df}{A data frame containing three columns. The first
#' column indicates the chain from which a sample is taken, from 1 to \code{number_MCMC_chains}.
#' The second column specifies the parameter associated with a given row. \eqn{\beta}
#' terms have dimensions \code{dim_x} X \code{n_cat}. The \eqn{\gamma} terms
#' have dimensions \code{n_cat} X \code{n_cat} X \code{dim_z}, where the first
#' index specifies the observed outcome category and the second index specifies
#' the true outcome category. The final column provides the MCMC sample.}
#' \item{posterior_means_df}{A data frame containing three columns. The first
#' column specifies the parameter associated with a given row. Parameters are
#' indexed as in the \code{posterior_sample_df}. The second column provides
#' the posterior mean computed across all chains and all samples. The final column
#' provides the posterior median computed across all chains and all samples.}
#' \item{naive_posterior_sample_df}{A data frame containing three columns.
#' The first column indicates the chain from which a sample is taken, from
#' 1 to \code{number_MCMC_chains}. The second column specifies the parameter
#' associated with a given row. Naive \eqn{\beta} terms have dimensions
#' \code{dim_x} X \code{n_cat}. The final column provides the MCMC sample.}
#' \item{naive_posterior_means_df}{A data frame containing three columns. The first
#' column specifies the naive parameter associated with a given row. Parameters are
#' indexed as in the \code{naive_posterior_sample_df}. The second column provides
#' the posterior mean computed across all chains and all samples. The final column
#' provides the posterior median computed across all chains and all samples.}
#'
#' @export
#'
#' @include model_picker.R
#' @include jags_picker.R
#' @include naive_model_picker.R
#' @include naive_jags_picker.R
#' @include pistar_by_chain.R
#' @include check_and_fix_chains.R
#'
#' @importFrom stats rnorm rgamma rmultinom median
#' @importFrom rjags coda.samples jags.model
#' @importFrom dplyr select filter `%>%` mutate group_by ungroup summarise all_of
#' @importFrom tidyr gather
#'
#' @examples \donttest{
#' set.seed(123)
#' n <- 1000
#' x_mu <- 0
#' x_sigma <- 1
#' z_shape <- 1
#'
#' true_beta <- matrix(c(1, -2), ncol = 1)
#' true_gamma <- matrix(c(.5, 1, -.5, -1), nrow = 2, byrow = FALSE)
#'
#' x_matrix = matrix(rnorm(n, x_mu, x_sigma), ncol = 1)
#' X = matrix(c(rep(1, n), x_matrix[,1]), ncol = 2, byrow = FALSE)
#' z_matrix = matrix(rgamma(n, z_shape), ncol = 1)
#' Z = matrix(c(rep(1, n), z_matrix[,1]), ncol = 2, byrow = FALSE)
#'
#' exp_xb = exp(X %*% true_beta)
#' pi_result = exp_xb[,1] / (exp_xb[,1] + 1)
#' pi_matrix = matrix(c(pi_result, 1 - pi_result), ncol = 2, byrow = FALSE)
#'
#' true_Y <- rep(NA, n)
#' for(i in 1:n){
#' true_Y[i] = which(stats::rmultinom(1, 1, pi_matrix[i,]) == 1)
#' }
#'
#' exp_zg = exp(Z %*% true_gamma)
#' pistar_denominator = matrix(c(1 + exp_zg[,1], 1 + exp_zg[,2]), ncol = 2, byrow = FALSE)
#' pistar_result = exp_zg / pistar_denominator
#'
#' pistar_matrix = matrix(c(pistar_result[,1], 1 - pistar_result[,1],
#' pistar_result[,2], 1 - pistar_result[,2]),
#' ncol = 2, byrow = FALSE)
#'
#' obs_Y <- rep(NA, n)
#' for(i in 1:n){
#' true_j = true_Y[i]
#' obs_Y[i] = which(rmultinom(1, 1,
#' pistar_matrix[c(i, n + i),
#' true_j]) == 1)
#' }
#'
#' Ystar <- obs_Y
#'
#' unif_lower_beta <- matrix(c(-5, -5, NA, NA), nrow = 2, byrow = TRUE)
#' unif_upper_beta <- matrix(c(5, 5, NA, NA), nrow = 2, byrow = TRUE)
#'
#' unif_lower_gamma <- array(data = c(-5, NA, -5, NA, -5, NA, -5, NA),
#' dim = c(2,2,2))
#' unif_upper_gamma <- array(data = c(5, NA, 5, NA, 5, NA, 5, NA),
#' dim = c(2,2,2))
#'
#' beta_prior_parameters <- list(lower = unif_lower_beta, upper = unif_upper_beta)
#' gamma_prior_parameters <- list(lower = unif_lower_gamma, upper = unif_upper_gamma)
#'
#' MCMC_results <- COMBO_MCMC(Ystar, x = x_matrix, z = z_matrix,
#' prior = "uniform",
#' beta_prior_parameters = beta_prior_parameters,
#' gamma_prior_parameters = gamma_prior_parameters,
#' number_MCMC_chains = 2,
#' MCMC_sample = 200, burn_in = 100)
#' MCMC_results$posterior_means_df}
COMBO_MCMC <- function(Ystar, x, z, prior,
beta_prior_parameters,
gamma_prior_parameters,
number_MCMC_chains = 4,
MCMC_sample = 2000,
burn_in = 1000,
display_progress = TRUE){
# Define global variables to make the "NOTES" happy.
chain_number <- NULL
parameter_name <- NULL
if (!is.numeric(Ystar) || !is.vector(Ystar))
stop("'Ystar' must be a numeric vector.")
if (length(setdiff(1:2, unique(Ystar))) != 0)
stop("'Ystar' must be coded 1/2, where the reference category is 2.")
sample_size = length(Ystar)
n_cat = 2
# FIX THIS! DIMENSIONS DON'T WORK FOR x, z WITH MORE THAN ONE COL
X = cbind(matrix(1, nrow = sample_size, ncol = 1), x)
Z = cbind(matrix(1, nrow = sample_size, ncol = 1), z)
dim_x = ncol(X)
dim_z = ncol(Z)
modelstring = model_picker(prior)
temp_model_file = tempfile()
tmps = file(temp_model_file, "w")
cat(modelstring, file = tmps)
close(tmps)
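# Compile the JAGS model for the misclassification model using the chosen prior.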
jags <- jags_picker(prior, sample_size, dim_x, dim_z, n_cat,
Ystar, X, Z,
beta_prior_parameters, gamma_prior_parameters,
number_MCMC_chains,
model_file = temp_model_file,
display_progress = display_progress)
display_progress_bar <- ifelse(display_progress == TRUE, "text", "none")
posterior_sample = coda.samples(jags,
c('beta', 'gamma'),
MCMC_sample,
progress.bar = display_progress_bar)
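# Assess label switching in each chain via the average P(Y* = j | Y = j, Z) and
# relabel chains that violate the assumption.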
pistarjj = pistar_by_chain(n_chains = number_MCMC_chains,
chains_list = posterior_sample,
Z = Z, n = sample_size, n_cat = n_cat)
posterior_sample_fixed = check_and_fix_chains(n_chains = number_MCMC_chains,
chains_list = posterior_sample,
pistarjj_matrix = pistarjj,
dim_x, dim_z, n_cat)
posterior_sample_df <- do.call(rbind.data.frame, posterior_sample_fixed)
posterior_sample_df$chain_number <- rep(1:number_MCMC_chains, each = MCMC_sample)
posterior_sample_df$sample <- rep(1:MCMC_sample, number_MCMC_chains)
##########################################
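# Fit the naive model: a logistic regression of the observed outcome Y* on X that
# ignores misclassification.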
naive_modelstring = naive_model_picker(prior)
naive_temp_model_file = tempfile()
tmps = file(naive_temp_model_file, "w")
cat(naive_modelstring, file = tmps)
close(tmps)
naive_jags <- naive_jags_picker(prior, sample_size, dim_x, n_cat,
Ystar, X,
beta_prior_parameters,
number_MCMC_chains,
naive_model_file = naive_temp_model_file,
display_progress = display_progress)
naive_posterior_sample = coda.samples(naive_jags,
c('beta'),
MCMC_sample,
progress.bar = display_progress_bar)
naive_posterior_sample_df <- do.call(rbind.data.frame, naive_posterior_sample)
naive_posterior_sample_df$chain_number <- rep(1:number_MCMC_chains, each = MCMC_sample)
naive_posterior_sample_df$sample <- rep(1:MCMC_sample, number_MCMC_chains)
###########################################
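# Build parameter names matching the coda output, drop burn-in samples, reshape to
# long format, and summarise posterior means and medians.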
beta_names <- paste0("beta[1,", 1:dim_x, "]")
gamma_names <- paste0("gamma[1,", rep(1:n_cat, dim_z), ",", rep(1:dim_z, each = n_cat), "]")
naive_posterior_sample_burn <- naive_posterior_sample_df %>%
dplyr::select(dplyr::all_of(beta_names), chain_number, sample) %>%
dplyr::filter(sample > burn_in) %>%
tidyr::gather(parameter_name, sample, beta_names[1]:beta_names[length(beta_names)],
factor_key = TRUE) %>%
dplyr::mutate(parameter_name = paste0("naive_", parameter_name))
naive_posterior_means <- naive_posterior_sample_burn %>%
dplyr::group_by(parameter_name) %>%
dplyr::summarise(posterior_mean = mean(sample),
posterior_median = stats::median(sample)) %>%
dplyr::ungroup()
posterior_sample_burn <- posterior_sample_df %>%
dplyr::select(dplyr::all_of(beta_names), dplyr::all_of(gamma_names),
chain_number, sample) %>%
dplyr::filter(sample > burn_in) %>%
tidyr::gather(parameter_name, sample,
beta_names[1]:gamma_names[length(gamma_names)], factor_key = TRUE)
posterior_means <- posterior_sample_burn %>%
dplyr::group_by(parameter_name) %>%
dplyr::summarise(posterior_mean = mean(sample),
posterior_median = stats::median(sample)) %>%
dplyr::ungroup()
results = list(posterior_sample_df = posterior_sample_burn,
posterior_means_df = posterior_means,
naive_posterior_sample_df = naive_posterior_sample_burn,
naive_posterior_means_df = naive_posterior_means)
return(results)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/COMBO_MCMC.R
#' MCMC Estimation of the Two-Stage Binary Outcome Misclassification Model
#'
#' Jointly estimate \eqn{\beta}, \eqn{\gamma}, and \eqn{\delta} parameters from the true outcome,
#' first-stage observation, and second-stage observation mechanisms, respectively,
#' in a two-stage binary outcome misclassification model.
#'
#' @param Ystar A numeric vector of indicator variables (1, 2) for the observed
#' outcome \code{Y*}. The reference category is 2.
#' @param Ytilde A numeric vector of indicator variables (1, 2) for the second-stage
#' observed outcome \eqn{\tilde{Y}}. There should be no \code{NA} terms. The
#' reference category is 2.
#' @param x A numeric matrix of covariates in the true outcome mechanism.
#' \code{x} should not contain an intercept.
#' @param z A numeric matrix of covariates in the observation mechanism.
#' \code{z} should not contain an intercept.
#' @param v A numeric matrix of covariates in the second-stage observation mechanism.
#' \code{v} should not contain an intercept and no values should be \code{NA}.
#' @param prior A character string specifying the prior distribution for the
#' \eqn{\beta}, \eqn{\gamma}, and \eqn{\delta} parameters. Options are \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"} (double Exponential, or Weibull).
#' @param beta_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\beta} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain a matrix of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\beta} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain a matrix of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\beta} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' a matrix of the degrees of freedom for \eqn{\beta} terms.
#' The third list element should be empty for all other prior distributions.
#' All matrices in the list should have dimensions \code{n_cat} X \code{dim_x}, and all
#' elements in the \code{n_cat} row should be set to \code{NA}.
#' @param gamma_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\gamma} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain an array of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\gamma} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain an array of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\gamma} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' an array of the degrees of freedom for \eqn{\gamma} terms.
#' The third list element should be empty for all other prior distributions.
#' All arrays in the list should have dimensions \code{n_cat} X \code{n_cat} X \code{dim_z},
#' and all elements in the \code{n_cat} row should be set to \code{NA}.
#' @param delta_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\delta} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain an array of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\delta} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain an array of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\delta} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' an array of the degrees of freedom for \eqn{\delta} terms.
#' The third list element should be empty for all other prior distributions.
#' All arrays in the list should have dimensions \code{n_cat} X
#' \code{n_cat} X \code{n_cat} X \code{dim_v},
#' and all elements in the \code{n_cat} row should be set to \code{NA}.
#' @param naive_delta_prior_parameters A numeric list of prior distribution parameters
#' for the naive model \eqn{\delta} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain an array of location, lower bound, mean, or shape parameters,
#' respectively, for naive \eqn{\delta} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain an array of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for naive \eqn{\delta} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' an array of the degrees of freedom for naive \eqn{\delta} terms.
#' The third list element should be empty for all other prior distributions.
#' All arrays in the list should have dimensions \code{n_cat} X \code{n_cat} X \code{dim_v},
#' and all elements in the \code{n_cat} row should be set to \code{NA}.
#' Note that prior distributions for the naive \eqn{\beta} terms are inherited
#' from the \code{beta_prior_parameters} argument.
#' @param number_MCMC_chains An integer specifying the number of MCMC chains to compute.
#' The default is \code{4}.
#' @param MCMC_sample An integer specifying the number of MCMC samples to draw.
#' The default is \code{2000}.
#' @param burn_in An integer specifying the number of MCMC samples to discard
#' for the burn-in period. The default is \code{1000}.
#' @param display_progress A logical value specifying whether messages should be
#' displayed during model compilation. The default is \code{TRUE}.
#'
#' @return \code{COMBO_MCMC_2stage} returns a list of the posterior samples and posterior
#' means for both the binary outcome misclassification model and a naive logistic
#' regression of the observed outcome, \code{Y*}, predicted by the matrix \code{x}.
#' The list contains the following components:
#' \item{posterior_sample_df}{A data frame containing three columns. The first
#' column indicates the chain from which a sample is taken, from 1 to \code{number_MCMC_chains}.
#' The second column specifies the parameter associated with a given row. \eqn{\beta}
#' terms have dimensions \code{dim_x} X \code{n_cat}. The \eqn{\gamma} terms
#' have dimensions \code{n_cat} X \code{n_cat} X \code{dim_z}, where the first
#' index specifies the observed outcome category and the second index specifies
#' the true outcome category. The \eqn{\delta} terms have dimensions
#' \code{n_cat} X \code{n_cat} X \code{n_cat} X \code{dim_v}. The final column
#' provides the MCMC sample.}
#' \item{posterior_means_df}{A data frame containing three columns. The first
#' column specifies the parameter associated with a given row. Parameters are
#' indexed as in the \code{posterior_sample_df}. The second column provides
#' the posterior mean computed across all chains and all samples. The final column
#' provides the posterior median computed across all chains and all samples.}
#' \item{naive_posterior_sample_df}{A data frame containing three columns.
#' The first column indicates the chain from which a sample is taken, from
#' 1 to \code{number_MCMC_chains}. The second column specifies the parameter
#' associated with a given row. Naive \eqn{\beta} terms have dimensions
#' \code{dim_x} X \code{n_cat}, and naive \eqn{\delta} terms have dimensions
#' \code{n_cat} X \code{n_cat} X \code{dim_v}. The final column provides the MCMC sample.}
#' \item{naive_posterior_means_df}{A data frame containing three columns. The first
#' column specifies the naive parameter associated with a given row. Parameters are
#' indexed as in the \code{naive_posterior_sample_df}. The second column provides
#' the posterior mean computed across all chains and all samples. The final column
#' provides the posterior median computed across all chains and all samples.}
#'
#' @export
#'
#' @include model_picker_2stage.R
#' @include jags_picker.R
#' @include naive_model_picker.R
#' @include naive_jags_picker.R
#' @include pistar_by_chain.R
#' @include check_and_fix_chains.R
#' @include sum_every_n.R
#' @include sum_every_n1.R
#'
#' @importFrom stats rnorm rgamma rmultinom median
#' @importFrom rjags coda.samples jags.model
#' @importFrom dplyr select filter `%>%` mutate group_by ungroup summarise all_of
#' @importFrom tidyr gather
#'
#' @examples \donttest{
#'
#' # Helper functions
#' sum_every_n <- function(x, n){
#' vector_groups = split(x,
#' ceiling(seq_along(x) / n))
#' sum_x = Reduce(`+`, vector_groups)
#'
#' return(sum_x)
#' }
#'
#' sum_every_n1 <- function(x, n){
#' vector_groups = split(x,
#' ceiling(seq_along(x) / n))
#' sum_x = Reduce(`+`, vector_groups) + 1
#'
#' return(sum_x)
#' }
#'
#' # Example
#'
#' set.seed(123)
#' n <- 1000
#' x_mu <- 0
#' x_sigma <- 1
#' z_shape <- 1
#' v_shape <- 1
#'
#' true_beta <- matrix(c(1, -2), ncol = 1)
#' true_gamma <- matrix(c(.5, 1, -.5, -1), nrow = 2, byrow = FALSE)
#' true_delta <- array(c(1.5, 1, .5, .5, -.5, 0, -1, -1), dim = c(2, 2, 2))
#'
#' x_matrix = matrix(rnorm(n, x_mu, x_sigma), ncol = 1)
#' X = matrix(c(rep(1, n), x_matrix[,1]), ncol = 2, byrow = FALSE)
#' z_matrix = matrix(rgamma(n, z_shape), ncol = 1)
#' Z = matrix(c(rep(1, n), z_matrix[,1]), ncol = 2, byrow = FALSE)
#' v_matrix = matrix(rgamma(n, v_shape), ncol = 1)
#' V = matrix(c(rep(1, n), v_matrix[,1]), ncol = 2, byrow = FALSE)
#'
#' exp_xb = exp(X %*% true_beta)
#' pi_result = exp_xb[,1] / (exp_xb[,1] + 1)
#' pi_matrix = matrix(c(pi_result, 1 - pi_result), ncol = 2, byrow = FALSE)
#'
#' true_Y <- rep(NA, n)
#' for(i in 1:n){
#' true_Y[i] = which(stats::rmultinom(1, 1, pi_matrix[i,]) == 1)
#' }
#'
#' exp_zg = exp(Z %*% true_gamma)
#' pistar_denominator = matrix(c(1 + exp_zg[,1], 1 + exp_zg[,2]), ncol = 2, byrow = FALSE)
#' pistar_result = exp_zg / pistar_denominator
#'
#' pistar_matrix = matrix(c(pistar_result[,1], 1 - pistar_result[,1],
#' pistar_result[,2], 1 - pistar_result[,2]),
#' ncol = 2, byrow = FALSE)
#'
#'
#' obs_Y <- rep(NA, n)
#' for(i in 1:n){
#' true_j = true_Y[i]
#' obs_Y[i] = which(rmultinom(1, 1,
#' pistar_matrix[c(i, n + i),
#' true_j]) == 1)
#' }
#'
#' Ystar <- obs_Y
#'
#' exp_vd1 = exp(V %*% true_delta[,,1])
#' exp_vd2 = exp(V %*% true_delta[,,2])
#'
#' pi_denominator1 = apply(exp_vd1, FUN = sum_every_n1, n, MARGIN = 2)
#' pi_result1 = exp_vd1 / rbind(pi_denominator1)
#'
#' pi_denominator2 = apply(exp_vd2, FUN = sum_every_n1, n, MARGIN = 2)
#' pi_result2 = exp_vd2 / rbind(pi_denominator2)
#'
#' pitilde_matrix1 = rbind(pi_result1,
#' 1 - apply(pi_result1,
#' FUN = sum_every_n, n = n,
#' MARGIN = 2))
#'
#' pitilde_matrix2 = rbind(pi_result2,
#' 1 - apply(pi_result2,
#' FUN = sum_every_n, n = n,
#' MARGIN = 2))
#'
#' pitilde_array = array(c(pitilde_matrix1, pitilde_matrix2),
#' dim = c(dim(pitilde_matrix1), 2))
#'
#' obs_Ytilde <- rep(NA, n)
#' for(i in 1:n){
#' true_j = true_Y[i]
#' obs_k = Ystar[i]
#' obs_Ytilde[i] = which(rmultinom(1, 1,
#' pitilde_array[c(i,n+ i),
#' obs_k, true_j]) == 1)
#' }
#'
#' Ytilde <- obs_Ytilde
#'
#' unif_lower_beta <- matrix(c(-5, -5, NA, NA), nrow = 2, byrow = TRUE)
#' unif_upper_beta <- matrix(c(5, 5, NA, NA), nrow = 2, byrow = TRUE)
#'
#' unif_lower_gamma <- array(data = c(-5, NA, -5, NA, -5, NA, -5, NA),
#' dim = c(2,2,2))
#' unif_upper_gamma <- array(data = c(5, NA, 5, NA, 5, NA, 5, NA),
#' dim = c(2,2,2))
#'
#' unif_upper_delta <- array(rep(c(5, NA), 8), dim = c(2,2,2,2))
#' unif_lower_delta <- array(rep(c(-5, NA), 8), dim = c(2,2,2,2))
#'
#' unif_lower_naive_delta <- array(data = c(-5, NA, -5, NA, -5, NA, -5, NA),
#' dim = c(2,2,2))
#' unif_upper_naive_delta <- array(data = c(5, NA, 5, NA, 5, NA, 5, NA),
#' dim = c(2,2,2))
#'
#' beta_prior_parameters <- list(lower = unif_lower_beta, upper = unif_upper_beta)
#' gamma_prior_parameters <- list(lower = unif_lower_gamma, upper = unif_upper_gamma)
#' delta_prior_parameters <- list(lower = unif_lower_delta, upper = unif_upper_delta)
#' naive_delta_prior_parameters <- list(lower = unif_lower_naive_delta,
#' upper = unif_upper_naive_delta)
#'
#' MCMC_results <- COMBO_MCMC_2stage(Ystar, Ytilde,
#' x = x_matrix, z = z_matrix,
#' v = v_matrix,
#' prior = "uniform",
#' beta_prior_parameters = beta_prior_parameters,
#' gamma_prior_parameters = gamma_prior_parameters,
#' delta_prior_parameters = delta_prior_parameters,
#' naive_delta_prior_parameters = naive_delta_prior_parameters,
#' number_MCMC_chains = 2,
#' MCMC_sample = 200, burn_in = 100)
#' MCMC_results$posterior_means_df}
COMBO_MCMC_2stage <- function(Ystar, Ytilde, x, z, v, prior,
beta_prior_parameters,
gamma_prior_parameters,
delta_prior_parameters,
naive_delta_prior_parameters,
number_MCMC_chains = 4,
MCMC_sample = 2000,
burn_in = 1000,
display_progress = TRUE){
# Define global variables to make the "NOTES" happy.
chain_number <- NULL
parameter_name <- NULL
if (!is.numeric(Ystar) || !is.vector(Ystar))
stop("'Ystar' must be a numeric vector.")
if (length(setdiff(1:2, unique(Ystar))) != 0)
stop("'Ystar' must be coded 1/2, where the reference category is 2.")
sample_size = length(Ystar)
n_cat = 2
# FIX THIS! DIMENSIONS DON'T WORK FOR x, z WITH MORE THAN ONE COL
X = cbind(matrix(1, nrow = sample_size, ncol = 1), x)
Z = cbind(matrix(1, nrow = sample_size, ncol = 1), z)
V = cbind(matrix(1, nrow = sample_size, ncol = 1), v)
dim_x = ncol(X)
dim_z = ncol(Z)
dim_v = ncol(V)
modelstring = model_picker_2stage(prior)
temp_model_file = tempfile()
tmps = file(temp_model_file, "w")
cat(modelstring, file = tmps)
close(tmps)
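# Compile the two-stage JAGS model using the chosen prior.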
jags <- jags_picker_2stage(prior, sample_size, dim_x, dim_z, dim_v, n_cat,
Ystar, Ytilde, X, Z, V,
beta_prior_parameters, gamma_prior_parameters,
delta_prior_parameters,
number_MCMC_chains,
model_file = temp_model_file,
display_progress = display_progress)
display_progress_bar <- ifelse(display_progress == TRUE, "text", "none")
posterior_sample = coda.samples(jags,
c('beta', 'gamma', 'delta'),
MCMC_sample,
progress.bar = display_progress_bar)
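# Assess label switching in each chain for both observation stages and relabel
# chains that violate the assumptions.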
pistarjj = pistar_by_chain(n_chains = number_MCMC_chains,
chains_list = posterior_sample,
Z = Z, n = sample_size, n_cat = n_cat)
pitildejjj = pitilde_by_chain(n_chains = number_MCMC_chains,
chains_list = posterior_sample,
V = V, n = sample_size, n_cat = n_cat)
posterior_sample_fixed = check_and_fix_chains_2stage(n_chains = number_MCMC_chains,
chains_list = posterior_sample,
pistarjj_matrix = pistarjj,
pitildejjj_matrix = pitildejjj,
dim_x, dim_z, dim_v,
n_cat)
posterior_sample_df <- do.call(rbind.data.frame, posterior_sample_fixed)
posterior_sample_df$chain_number <- rep(1:number_MCMC_chains, each = MCMC_sample)
posterior_sample_df$sample <- rep(1:MCMC_sample, number_MCMC_chains)
##########################################
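# Fit the naive model, which ignores misclassification of the true outcome.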
naive_modelstring = naive_model_picker_2stage(prior)
naive_temp_model_file = tempfile()
tmps = file(naive_temp_model_file, "w")
cat(naive_modelstring, file = tmps)
close(tmps)
naive_jags <- naive_jags_picker_2stage(prior, sample_size, dim_x, dim_v, n_cat,
Ystar, Ytilde, X, V,
beta_prior_parameters,
naive_delta_prior_parameters,
number_MCMC_chains,
naive_model_file = naive_temp_model_file,
display_progress = display_progress)
naive_posterior_sample = coda.samples(naive_jags,
c('beta', 'delta'),
MCMC_sample,
progress.bar = display_progress_bar)
naive_posterior_sample_df <- do.call(rbind.data.frame, naive_posterior_sample)
naive_posterior_sample_df$chain_number <- rep(1:number_MCMC_chains, each = MCMC_sample)
naive_posterior_sample_df$sample <- rep(1:MCMC_sample, number_MCMC_chains)
###########################################
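# Build parameter names matching the coda output, drop burn-in samples, reshape to
# long format, and summarise posterior means and medians.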
beta_names <- paste0("beta[1,", 1:dim_x, "]")
gamma_names <- paste0("gamma[1,", rep(1:n_cat, dim_z), ",", rep(1:dim_z, each = n_cat), "]")
delta_names <- paste0("delta[1,",
rep(1:n_cat, dim_v*dim_v), ",",
rep(rep(1:n_cat, each = dim_v), dim_v), ",",
rep(1:dim_v, each = n_cat * n_cat), "]")
naive_delta_names <- paste0("delta[1,", rep(1:n_cat, dim_v), ",", rep(1:dim_v, each = n_cat), "]")
naive_posterior_sample_burn <- naive_posterior_sample_df %>%
dplyr::select(dplyr::all_of(beta_names), dplyr::all_of(naive_delta_names),
chain_number, sample) %>%
dplyr::filter(sample > burn_in) %>%
tidyr::gather(parameter_name, sample, beta_names[1]:naive_delta_names[length(naive_delta_names)],
factor_key = TRUE) %>%
dplyr::mutate(parameter_name = paste0("naive_", parameter_name))
naive_posterior_means <- naive_posterior_sample_burn %>%
dplyr::group_by(parameter_name) %>%
dplyr::summarise(posterior_mean = mean(sample),
posterior_median = stats::median(sample)) %>%
dplyr::ungroup()
posterior_sample_burn <- posterior_sample_df %>%
dplyr::select(dplyr::all_of(beta_names), dplyr::all_of(gamma_names),
dplyr::all_of(delta_names),
chain_number, sample) %>%
dplyr::filter(sample > burn_in) %>%
tidyr::gather(parameter_name, sample,
beta_names[1]:delta_names[length(delta_names)], factor_key = TRUE)
posterior_means <- posterior_sample_burn %>%
dplyr::group_by(parameter_name) %>%
dplyr::summarise(posterior_mean = mean(sample),
posterior_median = stats::median(sample)) %>%
dplyr::ungroup()
results = list(posterior_sample_df = posterior_sample_burn,
posterior_means_df = posterior_means,
naive_posterior_sample_df = naive_posterior_sample_burn,
naive_posterior_means_df = naive_posterior_means)
return(results)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/COMBO_MCMC_2stage.R
#' Generate Data to use in COMBO Functions
#'
#' @param sample_size An integer specifying the sample size of the generated data set.
#' @param x_mu A numeric value specifying the mean of \code{x} predictors
#' generated from a Normal distribution.
#' @param x_sigma A positive numeric value specifying the standard deviation of
#' \code{x} predictors generated from a Normal distribution.
#' @param z_shape A positive numeric value specifying the shape parameter of
#' \code{z} predictors generated from a Gamma distribution.
#' @param beta A column matrix of \eqn{\beta} parameter values (intercept, slope)
#' used to generate data in the true outcome mechanism.
#' @param gamma A numeric matrix of \eqn{\gamma} parameters
#' used to generate data in the observation mechanism.
#' In matrix form, the \code{gamma} matrix rows correspond to intercept (row 1)
#' and slope (row 2) terms. The gamma parameter matrix columns correspond to the true outcome categories
#' \eqn{Y \in \{1, 2\}}.
#'
#' @return \code{COMBO_data} returns a list of generated data elements:
#' \item{obs_Y}{A vector of observed outcomes.}
#' \item{true_Y}{A vector of true outcomes.}
#' \item{obs_Y_matrix}{A numeric matrix of indicator variables (0, 1) for the observed
#' outcome \code{Y*}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row contains
#' exactly one 0 entry and exactly one 1 entry.}
#' \item{x}{A vector of generated predictor values in the true outcome
#' mechanism, from the Normal distribution.}
#' \item{z}{A vector of generated predictor values in the observation
#' mechanism from the Gamma distribution.}
#' \item{x_design_matrix}{The design matrix for the \code{x} predictor.}
#' \item{z_design_matrix}{The design matrix for the \code{z} predictor.}
#'
#' @export
#'
#' @include pi_compute.R
#' @include pistar_compute.R
#'
#' @importFrom stats rnorm rgamma rmultinom
#'
#' @examples
#' set.seed(123)
#' n <- 500
#' x_mu <- 0
#' x_sigma <- 1
#' z_shape <- 1
#'
#' true_beta <- matrix(c(1, -2), ncol = 1)
#' true_gamma <- matrix(c(.5, 1, -.5, -1), nrow = 2, byrow = FALSE)
#'
#' my_data <- COMBO_data(sample_size = n,
#' x_mu = x_mu, x_sigma = x_sigma,
#' z_shape = z_shape,
#' beta = true_beta, gamma = true_gamma)
#' table(my_data[["obs_Y"]], my_data[["true_Y"]])
COMBO_data <- function(sample_size,
x_mu, x_sigma,
z_shape,
beta, gamma){
n_cat <- 2
x <- rnorm(sample_size, x_mu, x_sigma)
x_matrix <- matrix(c(rep(1, sample_size),
x),
nrow = sample_size, byrow = FALSE)
z <- rgamma(sample_size, z_shape)
z_matrix <- matrix(c(rep(1, sample_size),
z),
nrow = sample_size, byrow = FALSE)
pi_matrix <- pi_compute(beta, x_matrix, sample_size, n_cat)
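# Draw each subject's true outcome Y from a multinomial distribution with probabilities pi_matrix.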
true_Y <- rep(NA, sample_size)
for(i in 1:sample_size){
true_Y[i] = which(rmultinom(1, 1, pi_matrix[i,]) == 1)
}
pistar_matrix <- pistar_compute(gamma, z_matrix, sample_size, n_cat)
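# Draw each subject's observed outcome Y* conditional on the true outcome, using the
# misclassification probabilities in pistar_matrix.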
obs_Y <- rep(NA, sample_size)
for(i in 1:sample_size){
true_j = true_Y[i]
obs_Y[i] = which(rmultinom(1, 1,
pistar_matrix[c(i,sample_size + i),
true_j]) == 1)
}
obs_Y_reps <- matrix(rep(obs_Y, n_cat), nrow = sample_size, byrow = FALSE)
category_matrix <- matrix(rep(1:n_cat, each = sample_size), nrow = sample_size,
byrow = FALSE)
obs_Y_matrix <- 1 * (obs_Y_reps == category_matrix)
data_output <- list(obs_Y = obs_Y,
true_Y = true_Y,
obs_Y_matrix = obs_Y_matrix,
x = x,
z = z,
x_design_matrix = x_matrix,
z_design_matrix = z_matrix)
return(data_output)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/COMBO_data.R
#' Generate Data to use in two-stage COMBO Functions
#'
#' @param sample_size An integer specifying the sample size of the generated data set.
#' @param x_mu A numeric value specifying the mean of \code{x} predictors
#' generated from a Normal distribution.
#' @param x_sigma A positive numeric value specifying the standard deviation of
#' \code{x} predictors generated from a Normal distribution.
#' @param z_shape A positive numeric value specifying the shape parameter of
#' \code{z} predictors generated from a Gamma distribution.
#' @param v_shape A positive numeric value specifying the shape parameter of
#' \code{v} predictors generated from a Gamma distribution.
#' @param beta A column matrix of \eqn{\beta} parameter values (intercept, slope)
#' used to generate data in the true outcome mechanism.
#' @param gamma A numeric matrix of \eqn{\gamma} parameters
#' used to generate data in the first-stage observation mechanism.
#' In matrix form, the \code{gamma} matrix rows correspond to intercept (row 1)
#' and slope (row 2) terms. The gamma parameter matrix columns correspond to the true outcome categories
#' \eqn{Y \in \{1, 2\}}.
#' @param delta A numeric array of \eqn{\delta} parameters to generate data under
#' the second-stage observation mechanism. In array form, the \code{delta} matrix rows
#' correspond to intercept (row 1) and slope (row 2) terms. The matrix columns correspond
#' to first-stage observed outcome categories. The third dimension of the \code{delta}
#' array is indexed by the true outcome categories.
#'
#' @return \code{COMBO_data_2stage} returns a list of generated data elements:
#' \item{obs_Ystar}{A vector of first-stage observed outcomes.}
#' \item{obs_Ytilde}{A vector of second-stage observed outcomes.}
#' \item{true_Y}{A vector of true outcomes.}
#' \item{obs_Ystar_matrix}{A numeric matrix of indicator variables (0, 1) for the first-stage observed
#' outcome \code{Y*}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row contains
#' exactly one 0 entry and exactly one 1 entry.}
#' \item{obs_Ytilde_matrix}{A numeric matrix of indicator variables (0, 1) for the second-stage observed
#' outcome \eqn{\tilde{Y}}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row contains
#' exactly one 0 entry and exactly one 1 entry.}
#' \item{x}{A vector of generated predictor values in the true outcome
#' mechanism, from the Normal distribution.}
#' \item{z}{A vector of generated predictor values in the first-stage observation
#' mechanism from the Gamma distribution.}
#' \item{v}{A vector of generated predictor values in the second-stage observation
#' mechanism from the Gamma distribution.}
#' \item{x_design_matrix}{The design matrix for the \code{x} predictor.}
#' \item{z_design_matrix}{The design matrix for the \code{z} predictor.}
#' \item{v_design_matrix}{The design matrix for the \code{v} predictor.}
#'
#' @export
#'
#' @include pi_compute.R
#' @include pistar_compute.R
#' @include pitilde_compute.R
#'
#' @examples
#' set.seed(123)
#' n <- 500
#' x_mu <- 0
#' x_sigma <- 1
#' z_shape <- 1
#' v_shape <- 1
#'
#' true_beta <- matrix(c(1, -2), ncol = 1)
#' true_gamma <- matrix(c(.5, 1, -.5, -1), nrow = 2, byrow = FALSE)
#' true_delta <- array(c(1.5, 1, .5, .5, -.5, 0, -1, -1), dim = c(2, 2, 2))
#'
#' my_data <- COMBO_data_2stage(sample_size = n,
#' x_mu = x_mu, x_sigma = x_sigma,
#' z_shape = z_shape, v_shape = v_shape,
#' beta = true_beta, gamma = true_gamma, delta = true_delta)
#' table(my_data[["obs_Ytilde"]], my_data[["obs_Ystar"]], my_data[["true_Y"]])
COMBO_data_2stage <- function(sample_size,
x_mu, x_sigma,
z_shape, v_shape,
beta, gamma, delta){
n_cat <- 2
x <- rnorm(sample_size, x_mu, x_sigma)
x_matrix <- matrix(c(rep(1, sample_size),
x),
nrow = sample_size, byrow = FALSE)
z <- rgamma(sample_size, z_shape)
z_matrix <- matrix(c(rep(1, sample_size),
z),
nrow = sample_size, byrow = FALSE)
v <- rgamma(sample_size, v_shape)
v_matrix <- matrix(c(rep(1, sample_size),
v),
nrow = sample_size, byrow = FALSE)
pi_matrix <- pi_compute(beta, x_matrix, sample_size, n_cat)
true_Y <- rep(NA, sample_size)
for(i in 1:sample_size){
true_Y[i] = which(rmultinom(1, 1, pi_matrix[i,]) == 1)
}
pistar_matrix <- pistar_compute(gamma, z_matrix, sample_size, n_cat)
obs_Ystar <- rep(NA, sample_size)
for(i in 1:sample_size){
true_j = true_Y[i]
obs_Ystar[i] = which(rmultinom(1, 1,
pistar_matrix[c(i,sample_size + i),
true_j]) == 1)
}
obs_Ystar_reps <- matrix(rep(obs_Ystar, n_cat), nrow = sample_size, byrow = FALSE)
category_matrix <- matrix(rep(1:n_cat, each = sample_size), nrow = sample_size,
byrow = FALSE)
obs_Ystar_matrix <- 1 * (obs_Ystar_reps == category_matrix)
pitilde_array <- pitilde_compute(delta, v_matrix, sample_size, n_cat)
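# Draw each subject's second-stage observed outcome conditional on the first-stage
# observed outcome and the true outcome.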
obs_Ytilde <- rep(NA, sample_size)
for(i in 1:sample_size){
true_j = true_Y[i]
obs_k = obs_Ystar[i]
obs_Ytilde[i] = which(rmultinom(1, 1,
pitilde_array[c(i,sample_size + i),
obs_k, true_j]) == 1)
}
obs_Ytilde_reps <- matrix(rep(obs_Ytilde, n_cat), nrow = sample_size, byrow = FALSE)
category_matrix <- matrix(rep(1:n_cat, each = sample_size), nrow = sample_size,
byrow = FALSE)
obs_Ytilde_matrix <- 1 * (obs_Ytilde_reps == category_matrix)
data_output <- list(obs_Ystar = obs_Ystar,
obs_Ytilde = obs_Ytilde,
true_Y = true_Y,
obs_Ystar_matrix = obs_Ystar_matrix,
obs_Ytilde_matrix = obs_Ytilde_matrix,
x = x,
z = z,
v = v,
x_design_matrix = x_matrix,
z_design_matrix = z_matrix,
v_design_matrix = v_matrix)
return(data_output)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/COMBO_data_2stage.R
#' Check Assumption and Fix Label Switching if Assumption is Broken for a List of MCMC Samples
#'
#' @param n_chains An integer specifying the number of MCMC chains to compute over.
#' @param chains_list A numeric list containing the samples from \code{n_chains}
#' MCMC chains.
#' @param pistarjj_matrix A numeric matrix of the average
#' conditional probability \eqn{P(Y^* = j | Y = j, Z)} across all subjects for
#' each MCMC chain, obtained from the \code{pistar_by_chain} function.
#' @param dim_x The number of columns of the design matrix of the true outcome mechanism, \code{X}.
#' @param dim_z The number of columns of the design matrix of the observation mechanism, \code{Z}.
#' @param n_cat The number of categorical values that the true outcome, \code{Y},
#' and the observed outcome, \code{Y*} can take.
#'
#' @return \code{check_and_fix_chains} returns a numeric list of the samples from
#' \code{n_chains} MCMC chains which have been corrected for label switching if
#' the following assumption is not met: \eqn{P(Y^* = j | Y = j, Z) > 0.50 \forall j}.
#'
#' @include label_switch.R
#'
check_and_fix_chains <- function(n_chains, chains_list, pistarjj_matrix,
dim_x, dim_z, n_cat){
fixed_output_list <- list()
for(i in 1:n_chains){
pistar11 = pistarjj_matrix[i, 1]
pistar22 = pistarjj_matrix[i, 2]
output = if(pistar11 > .50 & pistar22 > .50){
chains_list[[i]]
} else {label_switch(chains_list[[i]],
dim_x, dim_z, n_cat)}
fixed_output_list[[i]] = output
}
return(fixed_output_list)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/check_and_fix_chains.R
#' Check Assumptions and Fix Label Switching if Assumptions are Broken for a List of Two-Stage MCMC Samples
#'
#' @param n_chains An integer specifying the number of MCMC chains to compute over.
#' @param chains_list A numeric list containing the samples from \code{n_chains}
#' MCMC chains.
#' @param pistarjj_matrix A numeric matrix of the average
#' conditional probability \eqn{P(Y^* = j | Y = j, Z)} across all subjects for
#' each MCMC chain, obtained from the \code{pistar_by_chain} function.
#' @param pitildejjj_matrix A numeric matrix of the average conditional probability
#' \eqn{P( \tilde{Y} = j | Y^* = j, Y = j, V)} across all subjects for
#' each MCMC chain. Rows of the matrix correspond to MCMC chains, up to \code{n_chains}.
#' Obtained from the \code{pitilde_by_chain} function.
#' @param dim_x The number of columns of the design matrix of the true outcome mechanism, \code{X}.
#' @param dim_z The number of columns of the design matrix of the first-stage observation mechanism, \code{Z}.
#' @param dim_v The number of columns of the design matrix of the second-stage observation mechanism, \code{V}.
#' @param n_cat The number of categorical values that the true outcome, \code{Y},
#' and the observed outcomes, \eqn{Y^*} and \eqn{\tilde{Y}}, can take.
#'
#' @return \code{check_and_fix_chains_2stage} returns a numeric list of the samples from
#' \code{n_chains} MCMC chains which have been corrected for label switching if
#' the following assumptions are not met: \eqn{P(Y^* = j | Y = j, Z) > 0.50 \forall j}
#' and \eqn{P( \tilde{Y} = j | Y^* = j, Y = j, V) > 0.50 \forall j}.
#'
#' @include label_switch.R
#'
check_and_fix_chains_2stage <- function(n_chains, chains_list,
pistarjj_matrix, pitildejjj_matrix,
dim_x, dim_z, dim_v, n_cat){
fixed_output_list <- list()
for(i in 1:n_chains){
pistar11 = pistarjj_matrix[i, 1]
pistar22 = pistarjj_matrix[i, 2]
pitilde111 = pitildejjj_matrix[i, 1]
pitilde222 = pitildejjj_matrix[i, 2]
output = if(pistar11 > .50 & pistar22 > .50 &
pitilde111 > .50 & pitilde222 > .50){
chains_list[[i]]
} else {label_switch_2stage(chains_list[[i]],
dim_x, dim_z, dim_v, n_cat)}
fixed_output_list[[i]] = output
}
return(fixed_output_list)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/check_and_fix_chains_2stage.R
#' EM-Algorithm Function for Estimation of the Misclassification Model
#'
#' @param param_current A numeric vector of regression parameters, in the order
#' \eqn{\beta, \gamma}. The \eqn{\gamma} vector is obtained from the matrix form.
#' In matrix form, the gamma parameter matrix rows
#' correspond to parameters for the \code{Y* = 1}
#' observed outcome, with the dimensions of \code{Z}.
#' In matrix form, the gamma parameter matrix columns correspond to the true outcome categories
#' \eqn{j = 1, \dots,} \code{n_cat}. The numeric vector \code{gamma_v} is
#' obtained by concatenating the gamma matrix, i.e. \code{gamma_v <- c(gamma_matrix)}.
#' @param obs_Y_matrix A numeric matrix of indicator variables (0, 1) for the observed
#' outcome \code{Y*}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row should contain
#' exactly one 0 entry and exactly one 1 entry.
#' @param X A numeric design matrix for the true outcome mechanism.
#' @param Z A numeric design matrix for the observation mechanism.
#' @param sample_size An integer value specifying the number of observations in the sample.
#' This value should be equal to the number of rows of the design matrix, \code{X} or \code{Z}.
#' @param n_cat The number of categorical values that the true outcome, \code{Y},
#' and the observed outcome, \code{Y*} can take.
#'
#' @return \code{em_function} returns a numeric vector of updated parameter
#' estimates from one iteration of the EM-algorithm.
#'
#' @include pi_compute.R
#' @include pistar_compute.R
#' @include w_j.R
#' @include q_beta_f.R
#' @include q_gamma_f.R
#'
#' @importFrom stats rnorm rgamma rmultinom coefficients binomial glm
#'
em_function <- function(param_current,
obs_Y_matrix, X, Z,
sample_size, n_cat){
beta_current = matrix(param_current[1:ncol(X)], ncol = 1)
gamma_current = matrix(c(param_current)[(ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))],
ncol = n_cat, byrow = FALSE)
probabilities = pi_compute(beta_current, X, sample_size, n_cat)
conditional_probabilities = pistar_compute(gamma_current, Z, sample_size, n_cat)
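# E-step: compute each subject's posterior probability of the true outcome categories.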
weights = w_j(ystar_matrix = obs_Y_matrix,
pistar_matrix = conditional_probabilities,
pi_matrix = probabilities,
sample_size = sample_size, n_cat = n_cat)
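# M-step: update gamma and beta with weighted logistic regressions; '+ 0' drops the
# automatic intercept because X and Z already contain an intercept column.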
Ystar01 = obs_Y_matrix[,1]
fit.gamma1 <- suppressWarnings( stats::glm(Ystar01 ~ . + 0, as.data.frame(Z),
weights = weights[,1],
family = "binomial"(link = "logit")) )
gamma1_new <- unname(stats::coefficients(fit.gamma1))
fit.gamma2 <- suppressWarnings( stats::glm(Ystar01 ~ . + 0, as.data.frame(Z),
weights = weights[,2],
family = "binomial"(link = "logit")) )
gamma2_new <- unname(stats::coefficients(fit.gamma2))
fit.beta <- suppressWarnings( stats::glm(weights[,1] ~ . + 0, as.data.frame(X),
family = stats::binomial()) )
beta_new <- unname(stats::coefficients(fit.beta))
param_new = c(beta_new, gamma1_new, gamma2_new)
param_current = param_new
return(param_new)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/em_function.R
#' EM-Algorithm Function for Estimation of the Two-Stage Misclassification Model
#'
#' @param param_current A numeric vector of regression parameters, in the order
#' \eqn{\beta, \gamma, \delta}. The \eqn{\gamma} vector is obtained from the matrix form.
#' In matrix form, the gamma parameter matrix rows
#' correspond to parameters for the \code{Y* = 1}
#' observed outcome, with the dimensions of \code{Z}.
#' In matrix form, the gamma parameter matrix columns correspond to the true outcome categories
#' \eqn{j = 1, \dots,} \code{n_cat}. The numeric vector \eqn{\gamma} is
#' obtained by concatenating the gamma matrix, i.e. \code{gamma_v <- c(gamma_matrix)}.
#' The \eqn{\delta} vector is obtained from the array form. In array form,
#' the first dimension (matrix rows) of \code{delta}
#' corresponds to parameters for the \eqn{\tilde{Y} = 1}
#' second-stage observed outcome, with the dimensions of \code{V}.
#' The second dimension (matrix columns) corresponds to the first-stage
#' observed outcome categories \eqn{Y^* \in \{1, 2\}}. The third dimension of
#' the \code{delta} array corresponds to the true outcome categories
#' \eqn{Y \in \{1, 2\}}. The numeric vector \eqn{\delta} is obtained by
#' concatenating the delta array, i.e. \code{delta_vector <- c(delta_array)}.
#' @param obs_Ystar_matrix A numeric matrix of indicator variables (0, 1) for the first-stage observed
#' outcome \code{Y*}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row should contain
#' exactly one 0 entry and exactly one 1 entry.
#' @param obs_Ytilde_matrix A numeric matrix of indicator variables (0, 1) for the second-stage observed
#' outcome \eqn{\tilde{Y}}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row should contain
#' exactly one 0 entry and exactly one 1 entry.
#' @param X A numeric design matrix for the true outcome mechanism.
#' @param Z A numeric design matrix for the first-stage observation mechanism.
#' @param V A numeric design matrix for the second-stage observation mechanism.
#' @param sample_size An integer value specifying the number of observations in the sample.
#' This value should be equal to the number of rows of the design matrices, \code{X}, \code{Z}, and \code{V}.
#' @param n_cat The number of categorical values that the true outcome, \code{Y},
#' and the observed outcomes, \code{Y*} and \eqn{\tilde{Y}}, can take.
#'
#' @return \code{em_function_2stage} returns a numeric vector of updated parameter
#' estimates from one iteration of the EM-algorithm.
#'
#' @include pi_compute.R
#' @include pistar_compute.R
#' @include pitilde_compute.R
#' @include w_j_2stage.R
#' @include q_beta_f.R
#' @include q_gamma_f.R
#' @include q_delta_f.R
#'
#' @importFrom stats rnorm rgamma rmultinom coefficients binomial glm
#'
em_function_2stage <- function(param_current,
obs_Ystar_matrix, obs_Ytilde_matrix,
X, Z, V,
sample_size, n_cat){
beta_current = matrix(param_current[1:ncol(X)], ncol = 1)
gamma_current = matrix(c(param_current[(ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))]),
ncol = n_cat, byrow = FALSE)
delta_current = array(c(param_current[(ncol(X) + (n_cat * ncol(Z)) + 1):length(param_current)]),
dim = c(ncol(V), 2, 2))
probabilities = matrix(pi_compute(beta_current, X, sample_size, n_cat),
ncol = n_cat, byrow = FALSE)
conditional_probabilities = pistar_compute(gamma_current, Z, sample_size, n_cat)
conditional_probabilities2 = pitilde_compute(delta_current, V, sample_size, n_cat)
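# E-step: compute each subject's posterior probability of the true outcome categories,
# given both observed outcomes.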
weights = w_j_2stage(ystar_matrix = obs_Ystar_matrix,
ytilde_matrix = obs_Ytilde_matrix,
pitilde_array = conditional_probabilities2,
pistar_matrix = conditional_probabilities,
pi_matrix = probabilities,
sample_size = sample_size, n_cat = n_cat)
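# M-step: update beta and gamma as in the one-stage model, then update each delta block
# with weighted logistic regressions of the second-stage outcome within first-stage
# outcome strata.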
Ystar01 = obs_Ystar_matrix[,1]
fit.gamma1 <- suppressWarnings( stats::glm(Ystar01 ~ . + 0, as.data.frame(Z),
weights = weights[,1],
family = "binomial"(link = "logit")) )
gamma1_new <- unname(stats::coefficients(fit.gamma1))
fit.gamma2 <- suppressWarnings( stats::glm(Ystar01 ~ . + 0, as.data.frame(Z),
weights = weights[,2],
family = "binomial"(link = "logit")) )
gamma2_new <- unname(stats::coefficients(fit.gamma2))
fit.beta <- suppressWarnings( stats::glm(weights[,1] ~ . + 0, as.data.frame(X),
family = stats::binomial()) )
beta_new <- unname(stats::coefficients(fit.beta))
outcome_j1_k1 <- ifelse(obs_Ystar_matrix[,1] == 1 & obs_Ytilde_matrix[,1] == 1,
1,
ifelse(obs_Ystar_matrix[,1] == 1 & obs_Ytilde_matrix[,1] == 0,
0, NA))
fit.delta11 <- suppressWarnings( stats::glm(outcome_j1_k1 ~ . + 0, as.data.frame(V),
weights = weights[,1],
family = "binomial"(link = "logit")) )
delta11_new <- unname(coefficients(fit.delta11))
outcome_j1_k2 <- ifelse(obs_Ystar_matrix[,1] == 0 & obs_Ytilde_matrix[,1] == 1,
1,
ifelse(obs_Ystar_matrix[,1] == 0 & obs_Ytilde_matrix[,1] == 0,
0, NA))
fit.delta21 <- suppressWarnings( stats::glm(outcome_j1_k2 ~ . + 0, as.data.frame(V),
weights = weights[,1],
family = "binomial"(link = "logit")) )
delta21_new <- unname(coefficients(fit.delta21))
outcome_j2_k1 <- ifelse(obs_Ystar_matrix[,1] == 1 & obs_Ytilde_matrix[,1] == 1,
1,
ifelse(obs_Ystar_matrix[,1] == 1 & obs_Ytilde_matrix[,1] == 0,
0, NA))
fit.delta12 <- suppressWarnings( stats::glm(outcome_j2_k1 ~ . + 0, as.data.frame(V),
weights = weights[,2],
family = "binomial"(link = "logit")) )
delta12_new <- unname(coefficients(fit.delta12))
outcome_j2_k2 <- ifelse(obs_Ystar_matrix[,1] == 0 & obs_Ytilde_matrix[,1] == 1,
1,
ifelse(obs_Ystar_matrix[,1] == 0 & obs_Ytilde_matrix[,1] == 0,
0, NA))
fit.delta22 <- suppressWarnings( stats::glm(outcome_j2_k2 ~ . + 0, as.data.frame(V),
weights = weights[,2],
family = "binomial"(link = "logit")) )
delta22_new <- unname(coefficients(fit.delta22))
delta_new <- c(delta11_new, delta21_new, delta12_new, delta22_new)
param_new = c(beta_new, gamma1_new, gamma2_new, delta_new)
param_current = param_new
return(param_new)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/em_function_2stage.R
#' Expit function
#'
#' \eqn{\frac{\exp\{x\}}{1 + \exp\{x\}}}
#'
#' @param x A numeric value or vector to compute the expit function on.
#'
#' @return \code{expit} returns the result of the function
#' \eqn{f(x) = \frac{\exp\{x\}}{1 + \exp\{x\}}} for a given \code{x}.
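#' This is equivalent to \code{stats::plogis(x)}; for example, \code{expit(0) = 0.5}.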
#'
expit <- function(x){
expit_return = exp(x) / (1 + exp(x))
return(expit_return)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/expit.R
#' Set up a Binary Outcome Misclassification \code{jags.model} Object for a Given Prior
#'
#' @param prior A character string specifying the prior distribution for the
#' \eqn{\beta} and \eqn{\gamma} parameters. Options are \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"} (double Exponential, or Weibull).
#' @param sample_size An integer value specifying the number of observations in the sample.
#' @param dim_x An integer specifying the number of columns of the design matrix of the true outcome mechanism, \code{X}.
#' @param dim_z An integer specifying the number of columns of the design matrix of the observation mechanism, \code{Z}.
#' @param n_cat An integer specifying the number of categorical values that the true outcome, \code{Y},
#' and the observed outcome, \code{Y*} can take.
#' @param Ystar A numeric vector of indicator variables (1, 2) for the observed
#' outcome \code{Y*}. The reference category is 2.
#' @param X A numeric design matrix for the true outcome mechanism.
#' @param Z A numeric design matrix for the observation mechanism.
#' @param beta_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\beta} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain a matrix of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\beta} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain a matrix of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\beta} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' a matrix of the degrees of freedom for \eqn{\beta} terms.
#' The third list element should be empty for all other prior distributions.
#' All matrices in the list should have dimensions \code{dim_x} X \code{n_cat}, and all
#' elements in the \code{n_cat} column should be set to \code{NA}.
#' @param gamma_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\gamma} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain an array of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\gamma} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain an array of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\gamma} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' an array of the degrees of freedom for \eqn{\gamma} terms.
#' The third list element should be empty for all other prior distributions.
#' All arrays in the list should have dimensions \code{n_cat} X \code{n_cat} X \code{dim_z},
#' and all elements in the \code{n_cat} row should be set to \code{NA}.
#' @param number_MCMC_chains An integer specifying the number of MCMC chains to compute.
#' @param model_file A file path to the .BUG model file used
#' for MCMC estimation with \code{rjags}.
#' @param display_progress A logical value specifying whether messages should be
#' displayed during model compilation. The default is \code{TRUE}.
#'
#' @return \code{jags_picker} returns a \code{jags.model} object for a binary
#' outcome misclassification model. The object includes the specified
#' prior distribution, model, number of chains, and data.
#'
#' @importFrom stats rnorm rmultinom optim
#' @importFrom rjags jags.model
#'
jags_picker <- function(prior, sample_size, dim_x, dim_z, n_cat,
Ystar, X, Z,
beta_prior_parameters, gamma_prior_parameters,
number_MCMC_chains,
model_file, display_progress = TRUE){
quiet_argument <- !display_progress
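# Construct the rjags model object, passing the data and the prior hyperparameters
# expected by the corresponding .BUG model file.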
if (prior == "t") {
jags_object <- jags.model(
model_file,
data = list(sample_size = sample_size,
dim_x = dim_x,
dim_z = dim_z,
n_cat = n_cat,
obs_Y = Ystar,
x = X, z = Z,
t_mu_beta = beta_prior_parameters[[1]],
t_tau_beta = beta_prior_parameters[[2]],
t_df_beta = beta_prior_parameters[[3]],
t_mu_gamma = gamma_prior_parameters[[1]],
t_tau_gamma = gamma_prior_parameters[[2]],
t_df_gamma = gamma_prior_parameters[[3]]),
n.chains = number_MCMC_chains,
quiet = quiet_argument)
} else if (prior == "uniform") {
jags_object <- jags.model(
model_file,
data = list(sample_size = sample_size,
dim_x = dim_x,
dim_z = dim_z,
n_cat = n_cat,
obs_Y = Ystar,
x = X, z = Z,
unif_l_beta = beta_prior_parameters[[1]],
unif_u_beta = beta_prior_parameters[[2]],
unif_l_gamma = gamma_prior_parameters[[1]],
unif_u_gamma = gamma_prior_parameters[[2]]),
n.chains = number_MCMC_chains,
quiet = quiet_argument)
} else if (prior == "normal") {
jags_object <- jags.model(
model_file,
data = list(sample_size = sample_size,
dim_x = dim_x,
dim_z = dim_z,
n_cat = n_cat,
obs_Y = Ystar,
x = X, z = Z,
normal_mu_beta = beta_prior_parameters[[1]],
normal_sigma_beta = beta_prior_parameters[[2]],
normal_mu_gamma = gamma_prior_parameters[[1]],
normal_sigma_gamma = gamma_prior_parameters[[2]]),
n.chains = number_MCMC_chains,
quiet = quiet_argument)
} else if (prior == "dexp") {
jags_object <- jags.model(
model_file,
data = list(sample_size = sample_size,
dim_x = dim_x,
dim_z = dim_z,
n_cat = n_cat,
obs_Y = Ystar,
x = X, z = Z,
dexp_mu_beta = beta_prior_parameters[[1]],
dexp_b_beta = beta_prior_parameters[[2]],
dexp_mu_gamma = gamma_prior_parameters[[1]],
dexp_b_gamma = gamma_prior_parameters[[2]]),
n.chains = number_MCMC_chains,
quiet = quiet_argument)
} else { stop("Please select a model.")}
return(jags_object)
}
/scratch/gouwar.j/cran-all/cranData/COMBO/R/jags_picker.R
#' Set up a Two-Stage Binary Outcome Misclassification \code{jags.model} Object for a Given Prior
#'
#' @param prior A character string specifying the prior distribution for the
#' \eqn{\beta}, \eqn{\gamma}, and \eqn{\delta} parameters. Options are \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"} (double Exponential, or Weibull).
#' @param sample_size An integer value specifying the number of observations in the sample.
#' @param dim_x An integer specifying the number of columns of the design matrix of the true outcome mechanism, \code{X}.
#' @param dim_z An integer specifying the number of columns of the design matrix of the first-stage observation mechanism, \code{Z}.
#' @param dim_v An integer specifying the number of columns of the design matrix of the second-stage observation mechanism, \code{V}.
#' @param n_cat An integer specifying the number of categorical values that the true outcome, \code{Y},
#' and the observed outcomes, \eqn{Y^*} and \eqn{\tilde{Y}}, can take.
#' @param Ystar A numeric vector of indicator variables (1, 2) for the first-stage observed
#' outcome \code{Y*}. The reference category is 2.
#' @param Ytilde A numeric vector of indicator variables (1, 2) for the second-stage observed
#' outcome \eqn{\tilde{Y}}. The reference category is 2.
#' @param X A numeric design matrix for the true outcome mechanism.
#' @param Z A numeric design matrix for the first-stage observation mechanism.
#' @param V A numeric design matrix for the second-stage observation mechanism.
#' @param beta_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\beta} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain a matrix of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\beta} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain a matrix of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\beta} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' a matrix of the degrees of freedom for \eqn{\beta} terms.
#' The third list element should be empty for all other prior distributions.
#' All matrices in the list should have dimensions \code{dim_x} X \code{n_cat}, and all
#' elements in the \code{n_cat} column should be set to \code{NA}.
#' @param gamma_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\gamma} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain an array of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\gamma} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain an array of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\gamma} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' an array of the degrees of freedom for \eqn{\gamma} terms.
#' The third list element should be empty for all other prior distributions.
#' All arrays in the list should have dimensions \code{n_cat} X \code{n_cat} X \code{dim_z},
#' and all elements in the \code{n_cat} row should be set to \code{NA}.
#' @param delta_prior_parameters A numeric list of prior distribution parameters
#' for the \eqn{\delta} terms. For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the first element of the
#' list should contain an array of location, lower bound, mean, or shape parameters,
#' respectively, for \eqn{\delta} terms.
#' For prior distributions \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"}, the second element of the
#' list should contain an array of shape, upper bound, standard deviation, or scale parameters,
#' respectively, for \eqn{\delta} terms.
#' For prior distribution \code{"t"}, the third element of the list should contain
#' an array of the degrees of freedom for \eqn{\delta} terms.
#' The third list element should be empty for all other prior distributions.
#' All arrays in the list should have dimensions \code{n_cat} X \code{n_cat} X \code{n_cat} X \code{dim_v},
#' and all elements in the \code{n_cat} row should be set to \code{NA}.
#' @param number_MCMC_chains An integer specifying the number of MCMC chains to compute.
#' @param model_file A .BUG file containing the model to be used
#' for MCMC estimation with \code{rjags}.
#' @param display_progress A logical value specifying whether messages should be
#' displayed during model compilation. The default is \code{TRUE}.
#'
#' @return \code{jags_picker_2stage} returns a \code{jags.model} object for a two-stage binary
#' outcome misclassification model. The object includes the specified
#' prior distribution, model, number of chains, and data.
#'
#' @importFrom stats rnorm rmultinom optim
#' @importFrom rjags jags.model
#'
jags_picker_2stage <- function(prior, sample_size, dim_x, dim_z, dim_v, n_cat,
Ystar, Ytilde, X, Z, V,
beta_prior_parameters,
gamma_prior_parameters,
delta_prior_parameters,
number_MCMC_chains,
model_file, display_progress = TRUE){
quiet_argument <- !display_progress
if (prior == "t") {
jags_object <- jags.model(
model_file,
data = list(sample_size = sample_size,
dim_x = dim_x,
dim_z = dim_z,
dim_v = dim_v,
n_cat = n_cat,
Y_star = Ystar,
Y_tilde = Ytilde,
x = X, z = Z, v = V,
t_mu_beta = beta_prior_parameters[[1]],
t_tau_beta = beta_prior_parameters[[2]],
t_df_beta = beta_prior_parameters[[3]],
t_mu_gamma = gamma_prior_parameters[[1]],
t_tau_gamma = gamma_prior_parameters[[2]],
t_df_gamma = gamma_prior_parameters[[3]],
t_mu_delta = delta_prior_parameters[[1]],
t_tau_delta = delta_prior_parameters[[2]],
t_df_delta = delta_prior_parameters[[3]]),
n.chains = number_MCMC_chains,
quiet = quiet_argument)
} else if (prior == "uniform") {
jags_object <- jags.model(
model_file,
data = list(sample_size = sample_size,
dim_x = dim_x,
dim_z = dim_z,
dim_v = dim_v,
n_cat = n_cat,
Y_star = Ystar,
Y_tilde = Ytilde,
x = X, z = Z, v = V,
unif_l_beta = beta_prior_parameters[[1]],
unif_u_beta = beta_prior_parameters[[2]],
unif_l_gamma = gamma_prior_parameters[[1]],
unif_u_gamma = gamma_prior_parameters[[2]],
unif_l_delta = delta_prior_parameters[[1]],
unif_u_delta = delta_prior_parameters[[2]]),
n.chains = number_MCMC_chains,
quiet = quiet_argument)
} else if (prior == "normal") {
jags_object <- jags.model(
model_file,
data = list(sample_size = sample_size,
dim_x = dim_x,
dim_z = dim_z,
dim_v = dim_v,
n_cat = n_cat,
Y_star = Ystar,
Y_tilde = Ytilde,
x = X, z = Z, v = V,
normal_mu_beta = beta_prior_parameters[[1]],
normal_sigma_beta = beta_prior_parameters[[2]],
normal_mu_gamma = gamma_prior_parameters[[1]],
normal_sigma_gamma = gamma_prior_parameters[[2]],
normal_mu_delta = delta_prior_parameters[[1]],
normal_sigma_delta = delta_prior_parameters[[2]]),
n.chains = number_MCMC_chains,
quiet = quiet_argument)
} else if (prior == "dexp") {
jags_object <- jags.model(
model_file,
data = list(sample_size = sample_size,
dim_x = dim_x,
dim_z = dim_z,
dim_v = dim_v,
n_cat = n_cat,
Y_star = Ystar,
Y_tilde = Ytilde,
x = X, z = Z, v = V,
dexp_mu_beta = beta_prior_parameters[[1]],
dexp_b_beta = beta_prior_parameters[[2]],
dexp_mu_gamma = gamma_prior_parameters[[1]],
dexp_b_gamma = gamma_prior_parameters[[2]],
dexp_mu_delta = delta_prior_parameters[[1]],
dexp_b_delta = delta_prior_parameters[[2]]),
n.chains = number_MCMC_chains,
quiet = quiet_argument)
  } else { stop("Please select a valid prior distribution.")}
return(jags_object)
}
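
# A minimal sketch (not from the package documentation) of a delta prior list
# for prior = "uniform", following the dimensions described above and assuming
# n_cat = 2 and dim_v = 3. The bounds are illustrative only; jags_picker_2stage()
# reads the list positionally (lower bounds first, upper bounds second).
n_cat <- 2
dim_v <- 3

unif_l_delta <- array(NA, dim = c(n_cat, n_cat, n_cat, dim_v))
unif_u_delta <- array(NA, dim = c(n_cat, n_cat, n_cat, dim_v))

# Only the first row (the estimated Ytilde = 1 parameters) receives bounds;
# the n_cat row is left as NA for the reference category.
unif_l_delta[1, , , ] <- -10
unif_u_delta[1, , , ] <-  10

delta_prior_parameters <- list(unif_l_delta, unif_u_delta)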
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/jags_picker_2stage.R
|
#' Fix Label Switching in MCMC Results from a Binary Outcome Misclassification Model
#'
#' @param chain_matrix A numeric matrix containing the posterior samples for all
#' parameters in a given MCMC chain. \code{chain_matrix} must be a named
#' object (i.e. each parameter must be named as \code{beta[j, p]} or \code{gamma[k,j,p]}).
#' @param dim_x An integer specifying the number of columns of the design matrix of the true outcome mechanism, \code{X}.
#' @param dim_z An integer specifying the number of columns of the design matrix of the observation mechanism, \code{Z}.
#' @param n_cat An integer specifying the number of categorical values that the true outcome, \code{Y},
#' and the observed outcome, \code{Y*}, can take.
#'
#' @return \code{label_switch} returns a named matrix of MCMC posterior samples for
#' all parameters after performing label switching according to the following pattern:
#' all \eqn{\beta} terms are multiplied by -1, all \eqn{\gamma} terms are "swapped"
#' with the opposite \code{j} index.
#'
#' @importFrom stats rnorm rmultinom
#'
label_switch <- function(chain_matrix, dim_x, dim_z, n_cat){
beta_names <- paste0("beta[1,", 1:dim_x, "]")
gamma_names <- paste0("gamma[1,", rep(1:n_cat, dim_z), ",", rep(1:dim_z, each = n_cat), "]")
beta_cols <- which(colnames(chain_matrix) %in% beta_names)
gamma_cols <- which(colnames(chain_matrix) %in% gamma_names)
n_flip_gammas <- length(gamma_cols) / 2
gamma_cols_1 <- gamma_cols[c(TRUE, FALSE)]
gamma_cols_2 <- gamma_cols[c(FALSE, TRUE)]
return_chain_matrix <- chain_matrix
return_chain_matrix[,beta_cols] <- chain_matrix[,beta_cols] * -1
for(i in 1:n_flip_gammas){
return_chain_matrix[,gamma_cols_1[i]] <- chain_matrix[,gamma_cols_2[i]]
return_chain_matrix[,gamma_cols_2[i]] <- chain_matrix[,gamma_cols_1[i]]
}
return(return_chain_matrix)
}
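
# A small illustrative check (not from the package documentation) of
# label_switch() on a toy chain matrix with dim_x = 2, dim_z = 2, n_cat = 2.
# The column names follow the beta[j,p] / gamma[k,j,p] convention that the
# function expects; the sampled values are arbitrary.
set.seed(1)
toy_chain <- matrix(rnorm(4 * 6), nrow = 4)
colnames(toy_chain) <- c("beta[1,1]", "beta[1,2]",
                         "gamma[1,1,1]", "gamma[1,2,1]",
                         "gamma[1,1,2]", "gamma[1,2,2]")
switched_chain <- label_switch(toy_chain, dim_x = 2, dim_z = 2, n_cat = 2)
# The beta columns change sign; the gamma columns are swapped across the j index.
head(switched_chain)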
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/label_switch.R
|
#' Fix Label Switching in MCMC Results from a Two-Stage Binary Outcome Misclassification Model
#'
#' @param chain_matrix A numeric matrix containing the posterior samples for all
#' parameters in a given MCMC chain. \code{chain_matrix} must be a named
#' object (i.e. each parameter must be named as \code{beta[j, p]}, \code{gamma[k,j,p]},
#' or \code{delta[l,k,j,p]}).
#' @param dim_x An integer specifying the number of columns of the design matrix of the true outcome mechanism, \code{X}.
#' @param dim_z An integer specifying the number of columns of the design matrix of the first-stage observation mechanism, \code{Z}.
#' @param dim_v An integer specifying the number of columns of the design matrix of the second-stage observation mechanism, \code{V}.
#' @param n_cat An integer specifying the number of categorical values that the true outcome, \eqn{Y},
#' the first-stage observed outcome, \eqn{Y^*}, and the second-stage observed
#' outcome \eqn{\tilde{Y}} can take.
#'
#' @return \code{label_switch_2stage} returns a named matrix of MCMC posterior samples for
#' all parameters after performing label switching according to the following pattern:
#' all \eqn{\beta} terms are multiplied by -1, all \eqn{\gamma} and \eqn{\delta} terms are "swapped"
#' with the opposite \code{j} index.
#'
#' @importFrom stats rnorm rmultinom
#'
label_switch_2stage <- function(chain_matrix, dim_x, dim_z, dim_v, n_cat){
beta_names <- paste0("beta[1,", 1:dim_x, "]")
gamma_names <- paste0("gamma[1,", rep(1:n_cat, dim_z), ",", rep(1:dim_z, each = n_cat), "]")
delta_names <- paste0("delta[1,",
rep(1:n_cat, dim_v*dim_v), ",",
rep(rep(1:n_cat, each = dim_v), dim_v), ",",
rep(1:dim_v, each = n_cat * n_cat), "]")
beta_cols <- which(colnames(chain_matrix) %in% beta_names)
gamma_cols <- which(colnames(chain_matrix) %in% gamma_names)
delta_cols <- which(colnames(chain_matrix) %in% delta_names)
n_flip_gammas <- length(gamma_cols) / 2
gamma_cols_1 <- gamma_cols[c(TRUE, FALSE)]
gamma_cols_2 <- gamma_cols[c(FALSE, TRUE)]
n_flip_deltas <- length(delta_cols) / 2
delta_cols_1 <- delta_cols[c(TRUE, TRUE, FALSE, FALSE)]
delta_cols_2 <- delta_cols[c(FALSE, FALSE, TRUE, TRUE)]
return_chain_matrix <- chain_matrix
return_chain_matrix[,beta_cols] <- chain_matrix[,beta_cols] * -1
for(i in 1:n_flip_gammas){
return_chain_matrix[,gamma_cols_1[i]] <- chain_matrix[,gamma_cols_2[i]]
return_chain_matrix[,gamma_cols_2[i]] <- chain_matrix[,gamma_cols_1[i]]
}
for(i in 1:n_flip_deltas){
return_chain_matrix[,delta_cols_1[i]] <- chain_matrix[,delta_cols_2[i]]
return_chain_matrix[,delta_cols_2[i]] <- chain_matrix[,delta_cols_1[i]]
}
return(return_chain_matrix)
}
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/label_switch_2stage.R
|
#' Expected Complete Data Log-Likelihood Function for Estimation of the Misclassification Model
#'
#' @param param_current A numeric vector of regression parameters, in the order
#' \eqn{\beta, \gamma}. The \eqn{\gamma} vector is obtained from the matrix form.
#' In matrix form, the gamma parameter matrix rows
#' correspond to parameters for the \code{Y* = 1}
#' observed outcome, with the dimensions of \code{Z}.
#' In matrix form, the gamma parameter matrix columns correspond to the true outcome categories
#' \eqn{j = 1, \dots,} \code{n_cat}. The numeric vector \code{gamma_v} is
#' obtained by concatenating the gamma matrix, i.e. \code{gamma_v <- c(gamma_matrix)}.
#' @param obs_Y_matrix A numeric matrix of indicator variables (0, 1) for the observed
#' outcome \code{Y*}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row should contain
#' exactly one 0 entry and exactly one 1 entry.
#' @param X A numeric design matrix for the true outcome mechanism.
#' @param Z A numeric design matrix for the observation mechanism.
#' @param sample_size Integer value specifying the number of observations in the sample.
#' This value should be equal to the number of rows of the design matrix, \code{X} or \code{Z}.
#' @param n_cat The number of categorical values that the true outcome, \code{Y},
#' and the observed outcome, \code{Y*}, can take.
#'
#' @return \code{loglik} returns the negative value of the expected log-likelihood function,
#' \eqn{ Q = \sum_{i = 1}^N \Bigl[ \sum_{j = 1}^2 w_{ij} \text{log} \{ \pi_{ij} \} + \sum_{j = 1}^2 \sum_{k = 1}^2 w_{ij} y^*_{ik} \text{log} \{ \pi^*_{ikj} \}\Bigr]},
#' at the provided inputs.
#'
#' @include pi_compute.R
#' @include pistar_compute.R
#' @include w_j.R
#' @include q_beta_f.R
#' @include q_gamma_f.R
#' @include em_function.R
#'
#' @importFrom stats rnorm rgamma rmultinom
#'
loglik <- function(param_current,
obs_Y_matrix, X, Z,
sample_size, n_cat){
beta_current = matrix(param_current[1:ncol(X)], ncol = 1)
gamma_current = matrix(c(param_current[(ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))]),
ncol = n_cat, byrow = FALSE)
pi_terms_v = pi_compute(beta_current, X, sample_size, n_cat)
pistar_terms_v = pistar_compute(gamma_current, Z, sample_size, n_cat)
weights = w_j(obs_Y_matrix, pistar_terms_v, pi_terms_v, sample_size, n_cat)
loglikelihood = sum(
(q_beta_f(beta_current, X = X, w_mat = weights,
sample_size = sample_size, n_cat = n_cat)) +
(q_gamma_f(c(gamma_current), Z = Z,
obs_Y_matrix = obs_Y_matrix,
w_mat = weights,
sample_size = sample_size, n_cat = n_cat)))
return(loglikelihood)
}
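
# A brief sketch (not from the package documentation) of how the param_current
# vector passed to loglik() is laid out, assuming ncol(X) = 3, ncol(Z) = 2,
# and n_cat = 2. The parameter values are illustrative only.
beta_matrix  <- matrix(c(1, -2, 0.5), ncol = 1)        # ncol(X) x 1
gamma_matrix <- matrix(c(0.5, 1, -0.5, -1), nrow = 2)  # ncol(Z) x n_cat
param_current <- c(c(beta_matrix), c(gamma_matrix))
# The first ncol(X) entries hold beta; the remaining n_cat * ncol(Z) entries
# hold the column-concatenated gamma matrix, matching how loglik() unpacks them.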
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/loglik.R
|
#' Expected Complete Data Log-Likelihood Function for Estimation of the Two-Stage Misclassification Model
#'
#' @param param_current A numeric vector of regression parameters, in the order
#' \eqn{\beta, \gamma, \delta}. The \eqn{\gamma} vector is obtained from the matrix form.
#' In matrix form, the gamma parameter matrix rows
#' correspond to parameters for the \code{Y* = 1}
#' observed outcome, with the dimensions of \code{Z}.
#' In matrix form, the gamma parameter matrix columns correspond to the true outcome categories
#' \eqn{j = 1, \dots,} \code{n_cat}. The numeric vector \eqn{\gamma} is
#' obtained by concatenating the gamma matrix, i.e. \code{gamma_v <- c(gamma_matrix)}.
#' The \eqn{\delta} vector is obtained from the array form. In array form,
#' the first dimension (matrix rows) of \code{delta}
#' corresponds to parameters for the \eqn{\tilde{Y} = 1}
#' second-stage observed outcome, with the dimensions of \code{V}.
#' The second dimension (matrix columns) corresponds to the first-stage
#' observed outcome categories \eqn{Y^* \in \{1, 2\}}. The third dimension of
#' the delta array corresponds to the true outcome categories
#' \eqn{Y \in \{1, 2\}}. The numeric vector \eqn{\delta} is obtained by
#' concatenating the delta array, i.e. \code{delta_vector <- c(delta_array)}.
#' @param obs_Ystar_matrix A numeric matrix of indicator variables (0, 1) for the first-stage observed
#' outcome \code{Y*}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row should contain
#' exactly one 0 entry and exactly one 1 entry.
#' @param obs_Ytilde_matrix A numeric matrix of indicator variables (0, 1) for the second-stage observed
#' outcome \eqn{\tilde{Y}}. Rows of the matrix correspond to each subject. Columns of
#' the matrix correspond to each observed outcome category. Each row should contain
#' exactly one 0 entry and exactly one 1 entry.
#' @param X A numeric design matrix for the true outcome mechanism.
#' @param Z A numeric design matrix for the first-stage observation mechanism.
#' @param V A numeric design matrix for the second-stage observation mechanism.
#' @param sample_size An integer value specifying the number of observations in the sample.
#' This value should be equal to the number of rows of the design matrices, \code{X}, \code{Z}, and \code{V}.
#' @param n_cat The number of categorical values that the true outcome, \code{Y},
#' and the observed outcomes, \code{Y*} and \eqn{\tilde{Y}}, can take.
#'
#' @return \code{loglik_2stage} returns the negative value of the expected log-likelihood function,
#' \eqn{ Q = \sum_{i = 1}^N \Bigl[ \sum_{j = 1}^2 w_{ij} \text{log} \{ \pi_{ij} \} + \sum_{j = 1}^2 \sum_{k = 1}^2 w_{ij} y^*_{ik} \text{log} \{ \pi^*_{ikj} \} +
#' \sum_{j = 1}^2 \sum_{k = 1}^2 \sum_{\ell = 1}^2 w_{ij} y^*_{ik} \tilde{y}_{i \ell} \text{log} \{ \tilde{\pi}_{i \ell kj} \}\Bigr]},
#' at the provided inputs.
#'
#' @include pi_compute.R
#' @include pistar_compute.R
#' @include pitilde_compute.R
#' @include w_j_2stage.R
#' @include q_beta_f.R
#' @include q_gamma_f.R
#' @include q_delta_f.R
#' @include em_function_2stage.R
#'
#' @importFrom stats rnorm rgamma rmultinom
#'
loglik_2stage <- function(param_current,
obs_Ystar_matrix, obs_Ytilde_matrix,
X, Z, V,
sample_size, n_cat){
beta_current = matrix(param_current[1:ncol(X)], ncol = 1)
gamma_current = matrix(c(param_current[(ncol(X) + 1):(ncol(X) + (n_cat * ncol(Z)))]),
ncol = n_cat, byrow = FALSE)
delta_current = array(c(param_current[(ncol(X) + (n_cat * ncol(Z)) + 1):length(param_current)]),
dim = c(ncol(V), 2, 2))
pi_terms_v = pi_compute(beta_current, X, sample_size, n_cat)
pistar_terms_v = pistar_compute(gamma_current, Z, sample_size, n_cat)
pitilde_terms_v = pitilde_compute(delta_current, V, sample_size, n_cat)
weights = w_j_2stage(obs_Ystar_matrix, obs_Ytilde_matrix,
pitilde_terms_v, pistar_terms_v, pi_terms_v,
sample_size, n_cat)
loglikelihood = sum(
(q_beta_f(beta_current, X = X, w_mat = weights,
sample_size = sample_size, n_cat = n_cat)) +
(q_gamma_f(c(gamma_current), Z = Z,
obs_Y_matrix = obs_Ystar_matrix,
w_mat = weights,
sample_size = sample_size, n_cat = n_cat)) +
(q_delta_f(c(delta_current), V = V,
obs_Ystar_matrix, obs_Ytilde_matrix,
w_mat = weights,
sample_size = sample_size, n_cat = n_cat)))
return(loglikelihood)
}
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/loglik_2stage.R
|
#' Compute the Mean Conditional Probability of Correct Classification, by True Outcome Across all Subjects
#'
#' @param pistar_matrix A numeric matrix of conditional probabilities obtained from
#' the internal function \code{pistar_compute_for_chains}. Rows of the matrix correspond to
#' each subject and to each observed outcome category. Columns of the matrix
#' correspond to each true, latent outcome category.
#' @param j An integer value representing the true outcome category to compute
#' the average conditional probability of correct classification for.
#' \code{j} can take on values \code{1} and \code{2}.
#' @param sample_size An integer value specifying the number of observations in the sample.
#'
#' @return \code{mean_pistarjj_compute} returns a numeric value equal to the average
#' conditional probability \eqn{P(Y^* = j | Y = j, Z)} across all subjects.
#'
#' @importFrom stats rnorm
#'
mean_pistarjj_compute <- function(pistar_matrix, j, sample_size){
k_index = ifelse(j == 1, 1:sample_size,
ifelse(j == 2, (sample_size + 1):(sample_size*2), NA))
mean_pistar = mean(pistar_matrix[k_index, j])
return(mean_pistar)
}
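
# A toy illustration (not from the package documentation) of
# mean_pistarjj_compute() with sample_size = 3. Rows 1:3 hold P(Y* = 1 | Y = j)
# and rows 4:6 hold P(Y* = 2 | Y = j); the probabilities are made up.
toy_pistar <- rbind(
  matrix(c(0.90, 0.80, 0.85, 0.20, 0.30, 0.25), ncol = 2),  # Y* = 1 rows
  matrix(c(0.10, 0.20, 0.15, 0.80, 0.70, 0.75), ncol = 2)   # Y* = 2 rows
)
mean_pistarjj_compute(toy_pistar, j = 1, sample_size = 3)  # mean of 0.90, 0.80, 0.85
mean_pistarjj_compute(toy_pistar, j = 2, sample_size = 3)  # mean of 0.80, 0.70, 0.75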
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/mean_pistarjj_compute.R
|
#' Compute Conditional Probability of Each Observed Outcome Given Each True Outcome, for Every Subject
#'
#' Compute the conditional probability of observing outcome \eqn{Y^* \in \{1, 2 \}} given
#' the latent true outcome \eqn{Y \in \{1, 2 \}} as
#' \eqn{\frac{\text{exp}\{\gamma_{kj0} + \gamma_{kjZ} Z_i\}}{1 + \text{exp}\{\gamma_{kj0} + \gamma_{kjZ} Z_i\}}}
#' for each of the \eqn{i = 1, \dots,} \code{n} subjects.
#'
#' @param gamma_matrix A numeric matrix of estimated regression parameters for the
#' observation mechanism, \code{Y* | Y} (observed outcome, given the true outcome)
#' ~ \code{Z} (misclassification predictor matrix). Rows of the matrix
#' correspond to parameters for the \code{Y* = 1} observed outcome, with the
#' dimensions of \code{z_matrix}. Columns of the matrix correspond to the true
#' outcome categories \eqn{j = 1, \dots,} \code{n_cat}. The matrix should be
#' obtained by \code{COMBO_EM} or \code{COMBO_MCMC}.
#' @param z_matrix A numeric matrix of covariates in the observation mechanism.
#' \code{z_matrix} should not contain an intercept.
#'
#' @return \code{misclassification_prob} returns a dataframe containing four columns.
#' The first column, \code{Subject}, represents the subject ID, from \eqn{1} to \code{n},
#' where \code{n} is the sample size, or equivalently, the number of rows in \code{z_matrix}.
#' The second column, \code{Y}, represents a true, latent outcome category \eqn{Y \in \{1, 2 \}}.
#' The third column, \code{Ystar}, represents an observed outcome category \eqn{Y^* \in \{1, 2 \}}.
#' The last column, \code{Probability}, is the value of the equation
#' \eqn{\frac{\text{exp}\{\gamma_{kj0} + \gamma_{kjZ} Z_i\}}{1 + \text{exp}\{\gamma_{kj0} + \gamma_{kjZ} Z_i\}}}
#' computed for each subject, observed outcome category, and true, latent outcome category.
#'
#' @include pistar_compute.R
#'
#' @importFrom stats rnorm
#'
#' @export
#'
#' @examples
#' set.seed(123)
#' sample_size <- 1000
#' cov1 <- rnorm(sample_size)
#' cov2 <- rnorm(sample_size, 1, 2)
#' z_matrix <- matrix(c(cov1, cov2), nrow = sample_size, byrow = FALSE)
#' estimated_gammas <- matrix(c(1, -1, .5, .2, -.6, 1.5), ncol = 2)
#' P_Ystar_Y <- misclassification_prob(estimated_gammas, z_matrix)
#' head(P_Ystar_Y)
misclassification_prob <- function(gamma_matrix,
z_matrix){
n_cat = 2
sample_size = nrow(z_matrix)
if (is.data.frame(z_matrix))
z_matrix <- as.matrix(z_matrix)
if (!is.numeric(z_matrix))
stop("'z_matrix' should be a numeric matrix.")
if (is.vector(z_matrix))
z_matrix <- as.matrix(z_matrix)
if (!is.matrix(z_matrix))
stop("'z_matrix' should be a matrix or data.frame.")
Z = matrix(c(rep(1, sample_size), c(z_matrix)),
byrow = FALSE, nrow = sample_size)
subject = rep(1:sample_size, n_cat * n_cat)
Y_categories = rep(1:n_cat, each = sample_size * n_cat)
Ystar_categories = rep(c(1:n_cat, 1:n_cat), each = sample_size)
pistar_matrix = pistar_compute(gamma_matrix, Z, sample_size, n_cat)
pistar_df = data.frame(Subject = subject,
Y = Y_categories,
Ystar = Ystar_categories,
Probability = c(pistar_matrix))
return(pistar_df)
}
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/misclassification_prob.R
|
#' Compute Conditional Probability of Each Second-Stage Observed Outcome Given Each True Outcome and First-Stage Observed Outcome, for Every Subject
#'
#' Compute the conditional probability of observing second-stage outcome \eqn{\tilde{Y} \in \{1, 2 \}} given
#' the latent true outcome \eqn{Y \in \{1, 2 \}} and the first-stage outcome \eqn{Y^* \in \{1, 2\}} as
#' \eqn{\frac{\text{exp}\{\delta_{\ell kj0} + \delta_{\ell kjV} V_i\}}{1 + \text{exp}\{\delta_{\ell kj0} + \delta_{\ell kjV} V_i\}}}
#' for each of the \eqn{i = 1, \dots,} \code{n} subjects.
#'
#' @param delta_array A numeric array of estimated regression parameters for the
#' observation mechanism, \eqn{\tilde{Y} | Y*, Y} (second-stage observed outcome,
#' given the first-stage observed outcome and the true outcome)
#' ~ \code{V} (second-stage misclassification predictor matrix). Rows of the array
#' correspond to parameters for the \eqn{\tilde{Y} = 1} observed outcome, with the
#' dimensions of \code{v_matrix}. Columns of the array correspond to the first-stage
#' outcome categories \eqn{k = 1, \dots,} \code{n_cat}. The third dimension of the array
#' corresponds to the true outcome categories \eqn{j = 1, \dots,} \code{n_cat}.
#' The array should be obtained by \code{COMBO_EM} or \code{COMBO_MCMC}.
#' @param v_matrix A numeric matrix of covariates in the second-stage observation mechanism.
#' \code{v_matrix} should not contain an intercept.
#'
#' @return \code{misclassification_prob2} returns a dataframe containing five columns.
#' The first column, \code{Subject}, represents the subject ID, from \eqn{1} to \code{n},
#' where \code{n} is the sample size, or equivalently, the number of rows in \code{v_matrix}.
#' The second column, \code{Y}, represents a true, latent outcome category \eqn{Y \in \{1, 2 \}}.
#' The third column, \code{Ystar}, represents a first-stage observed outcome category \eqn{Y^* \in \{1, 2 \}}.
#' The fourth column, \code{Ytilde}, represents a second-stage observed outcome category \eqn{\tilde{Y} \in \{1, 2 \}}.
#' The last column, \code{Probability}, is the value of the equation
#' \eqn{\frac{\text{exp}\{\delta_{\ell kj0} + \delta_{\ell kjV} V_i\}}{1 + \text{exp}\{\delta_{\ell kj0} + \delta_{\ell kjV} V_i\}}}
#' computed for each subject, first-stage observed outcome category, second-stage
#' observed outcome category, and true, latent outcome category.
#'
#' @include pitilde_compute.R
#'
#' @importFrom stats rnorm
#'
#' @export
#'
#' @examples
#' set.seed(123)
#' sample_size <- 1000
#' cov1 <- rnorm(sample_size)
#' cov2 <- rnorm(sample_size, 1, 2)
#' v_matrix <- matrix(c(cov1, cov2), nrow = sample_size, byrow = FALSE)
#' estimated_deltas <- array(c(1, -1, .5, .2, -.6, 1.5,
#' -1, .5, -1, -.5, -1, -.5), dim = c(3,2,2))
#' P_Ytilde_Ystar_Y <- misclassification_prob2(estimated_deltas, v_matrix)
#' head(P_Ytilde_Ystar_Y)
misclassification_prob2 <- function(delta_array,
v_matrix){
n_cat = 2
sample_size = nrow(v_matrix)
if (is.data.frame(v_matrix))
v_matrix <- as.matrix(v_matrix)
if (!is.numeric(v_matrix))
stop("'v_matrix' should be a numeric matrix.")
if (is.vector(v_matrix))
v_matrix <- as.matrix(v_matrix)
if (!is.matrix(v_matrix))
stop("'v_matrix' should be a matrix or data.frame.")
V = matrix(c(rep(1, sample_size), c(v_matrix)),
byrow = FALSE, nrow = sample_size)
subject = rep(1:sample_size, n_cat * n_cat)
Y_categories = rep(1:n_cat, each = sample_size * n_cat * n_cat)
Ystar_categories = rep(c(1:n_cat, 1:n_cat), each = sample_size * n_cat)
Ytilde_categories = rep(rep(1:n_cat, each = sample_size), n_cat * n_cat)
pitilde_array = pitilde_compute(delta_array, V, sample_size, n_cat)
pitilde_df = data.frame(Subject = subject,
Y = Y_categories,
Ystar = Ystar_categories,
Ytilde = Ytilde_categories,
Probability = c(pitilde_array))
return(pitilde_df)
}
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/misclassification_prob2.R
|
#' Select a Binary Outcome Misclassification Model for a Given Prior
#'
#' @param prior A character string specifying the prior distribution for the
#' \eqn{\beta} and \eqn{\gamma} parameters. Options are \code{"t"},
#' \code{"uniform"}, \code{"normal"}, or \code{"dexp"} (double Exponential, or Weibull).
#'
#' @return \code{model_picker} returns a character string specifying the binary
#' outcome misclassification model to be turned into a .BUG file and used
#' for MCMC estimation with \code{rjags}.
#'
model_picker <- function(prior){
unif_modelstring = "
model{
# likelihood
for(i in 1:sample_size){
obs_Y[i] ~ dcat(pi_obs[i, 1:n_cat])
# regression
for(j in 1:n_cat){
log(phi[i, j]) <- beta[j,1:dim_x] %*% x[i,1:dim_x]
pi[i, j] <- phi[i, j] / sum(phi[i, 1:n_cat])
}
for(k in 1:n_cat){
for(j in 1:n_cat){
log(phistar[i, k, j]) <- gamma[k, j, 1:dim_z] %*% z[i,1:dim_z]
pistar[i, k, j] <- phistar[i, k, j] / (sum(phistar[i, 1:n_cat, j]))
}
pi_obs[i, k] <- sum(pistar[i, k, 1:n_cat] * pi[i, 1:n_cat])
}
}
# reference categories
#beta[n_cat, 1:dim_x] <- 0
#gamma[n_cat, 1:n_cat, 1:dim_z] <- 0
# priors
for(l in 1:dim_x){
beta[1, l] ~ dunif(unif_l_beta[1, l], unif_u_beta[1, l])
beta[2, l] <- 0
}
for(m in 1:n_cat){
for(n in 1:dim_z){
gamma[1, m, n] ~ dunif(unif_l_gamma[1, m, n], unif_u_gamma[1, m, n])
gamma[2, m, n] <- 0
}
}
}
"
t_modelstring = "
model{
# likelihood
for(i in 1:sample_size){
obs_Y[i] ~ dcat(pi_obs[i, 1:n_cat])
# regression
for(j in 1:n_cat){
log(phi[i, j]) <- beta[j,1:dim_x] %*% x[i,1:dim_x]
pi[i, j] <- phi[i, j] / sum(phi[i, 1:n_cat])
}
for(k in 1:n_cat){
for(j in 1:n_cat){
log(phistar[i, k, j]) <- gamma[k, j, 1:dim_z] %*% z[i,1:dim_z]
pistar[i, k, j] <- phistar[i, k, j] / (sum(phistar[i, 1:n_cat, j]))
}
pi_obs[i, k] <- sum(pistar[i, k, 1:n_cat] * pi[i, 1:n_cat])
}
}
# reference categories
#beta[n_cat, 1:dim_x] <- 0
#gamma[n_cat, 1:n_cat, 1:dim_z] <- 0
# priors
for(l in 1:dim_x){
beta[1, l] ~ dt(t_mu_beta[1,l], t_tau_beta[1,l], t_df_beta[1,l])
beta[2, l] <- 0
}
for(m in 1:n_cat){
for(n in 1:dim_z){
gamma[1, m, n] ~ dt(t_mu_gamma[1,m,n], t_tau_gamma[1,m,n], t_df_gamma[1,m,n])
gamma[2, m, n] <- 0
}
}
}
"
normal_modelstring = "
model{
# likelihood
for(i in 1:sample_size){
obs_Y[i] ~ dcat(pi_obs[i, 1:n_cat])
# regression
for(j in 1:n_cat){
log(phi[i, j]) <- beta[j,1:dim_x] %*% x[i,1:dim_x]
pi[i, j] <- phi[i, j] / sum(phi[i, 1:n_cat])
}
for(k in 1:n_cat){
for(j in 1:n_cat){
log(phistar[i, k, j]) <- gamma[k, j, 1:dim_z] %*% z[i,1:dim_z]
pistar[i, k, j] <- phistar[i, k, j] / (sum(phistar[i, 1:n_cat, j]))
}
pi_obs[i, k] <- sum(pistar[i, k, 1:n_cat] * pi[i, 1:n_cat])
}
}
# reference categories
#beta[n_cat, 1:dim_x] <- 0
#gamma[n_cat, 1:n_cat, 1:dim_z] <- 0
# priors
for(l in 1:dim_x){
beta[1, l] ~ dnorm(normal_mu_beta[1, l], normal_sigma_beta[1, l])
beta[2, l] <- 0
}
for(m in 1:n_cat){
for(n in 1:dim_z){
gamma[1, m, n] ~ dnorm(normal_mu_gamma[1, m, n], normal_sigma_gamma[1, m, n])
gamma[2, m, n] <- 0
}
}
}
"
dexp_modelstring = "
model{
# likelihood
for(i in 1:sample_size){
obs_Y[i] ~ dcat(pi_obs[i, 1:n_cat])
# regression
for(j in 1:n_cat){
log(phi[i, j]) <- beta[j,1:dim_x] %*% x[i,1:dim_x]
pi[i, j] <- phi[i, j] / sum(phi[i, 1:n_cat])
}
for(k in 1:n_cat){
for(j in 1:n_cat){
log(phistar[i, k, j]) <- gamma[k, j, 1:dim_z] %*% z[i,1:dim_z]
pistar[i, k, j] <- phistar[i, k, j] / (sum(phistar[i, 1:n_cat, j]))
}
pi_obs[i, k] <- sum(pistar[i, k, 1:n_cat] * pi[i, 1:n_cat])
}
}
# reference categories
#beta[n_cat, 1:dim_x] <- 0
#gamma[n_cat, 1:n_cat, 1:dim_z] <- 0
# priors
for(l in 1:dim_x){
beta[1, l] ~ ddexp(dexp_mu_beta[1, l], dexp_b_beta[1, l])
beta[2, l] <- 0
}
for(m in 1:n_cat){
for(n in 1:dim_z){
gamma[1, m, n] ~ ddexp(dexp_mu_gamma[1, m, n], dexp_b_gamma[1, m, n])
gamma[2, m, n] <- 0
}
}
}
"
selected_model = ifelse(prior == "t", t_modelstring,
ifelse(prior == "uniform", unif_modelstring,
ifelse(prior == "normal", normal_modelstring,
ifelse(prior == "dexp", dexp_modelstring,
stop("Please select a prior distribution.")))))
return(selected_model)
}
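
# A short usage sketch (not from the package documentation): the returned model
# string can be written to a temporary .BUG file, which is the model_file
# argument expected by jags_picker(). The file name is arbitrary.
normal_model  <- model_picker(prior = "normal")
temp_bug_file <- tempfile(fileext = ".BUG")
writeLines(normal_model, temp_bug_file)
# temp_bug_file can now be passed as model_file to jags_picker().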
|
/scratch/gouwar.j/cran-all/cranData/COMBO/R/model_picker.R
|